After completing a working version of the GoPiGo v2 running Windows 10 IoT Core on a Raspberry Pi 3, I started working on the next level of my rover, called Mocha (sorry, I like coffee ;) ). The samples from Dexter Industries allowed me to use a Windows 10 UWP project to control the robot with a panel to drive it forward, backward, left, and right. Previous Project Here
Now, since I didn't want to fly completely blind, I added a cheap web camera to the Raspberry Pi.
As in the previous project, the rover listens for commands on one port. With the help of a fellow Hackster member, Sascha (here), I was able to pipe the HTTP webcam stream onto a panel in Unity by setting up a second socket listener.
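The second listener can be bound to its own port, separate from the one already handling drive commands. Here is a minimal sketch of how that could look on the rover side with UWP's StreamSocketListener; the port number (8027) and the SendCurrentFrameAsync helper are illustrative assumptions, not the project's actual names:

```csharp
// Hypothetical sketch of a second listener on the rover (UWP).
// Port 8027 and SendCurrentFrameAsync are placeholder names.
using Windows.Networking.Sockets;

var frameListener = new StreamSocketListener();
frameListener.ConnectionReceived += async (sender, args) =>
{
    // Each connection asks for camera data; hand it the current JPEG frame.
    await SendCurrentFrameAsync(args.Socket);
};
// Bind to a second port, separate from the drive-command listener.
await frameListener.BindServiceNameAsync("8027");
```

Keeping the camera traffic on its own socket means a slow or dropped video connection never blocks the drive commands.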
Now, the biggest problem with streaming of this kind in Unity, without paying for better assets, was solved with an old approach: using a StartCoroutine in Unity, on every loop I ask the stream for a new picture, and the HTTP webcam streamer sends a single static image back. It is slower than a standard MJPEG web page, but it works in Unity without needing to find an HTML viewer with JavaScript support.
First, create a camera object:
Camera camera = new Camera();
Then, in the Loaded event of MainPage.xaml:
var mediaFrameFormats = await camera.GetMediaFrameFormatsAsync();
ConfigurationFile.SetSupportedVideoFrameFormats(mediaFrameFormats);
var videoSetting = await ConfigurationFile.Read(mediaFrameFormats);
await camera.Initialize(videoSetting);
camera.Start();
var httpServer = new HttpServer(camera);
httpServer.Start();
This starts the HttpServer and sets up the camera. I made a few adjustments to the image quality and commented out the code that exposes the MJPEG and control pages.
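For the quality adjustment, JPEG quality can be set at the point where each frame is encoded. A minimal sketch using UWP's BitmapEncoder, assuming the streamer encodes frames from a SoftwareBitmap; the stream and softwareBitmap variables here are placeholders for the streamer's own objects:

```csharp
// Hypothetical sketch: encoding a camera frame as JPEG at reduced quality.
// 'stream' and 'softwareBitmap' stand in for the streamer's own objects.
using Windows.Foundation;
using Windows.Graphics.Imaging;

var propertySet = new BitmapPropertySet
{
    // 0.0 - 1.0; lower values shrink each frame and speed up the stream.
    { "ImageQuality", new BitmapTypedValue(0.5, PropertyType.Single) }
};
var encoder = await BitmapEncoder.CreateAsync(
    BitmapEncoder.JpegEncoderId, stream, propertySet);
encoder.SetSoftwareBitmap(softwareBitmap);
await encoder.FlushAsync();
```

Since Unity pulls one static image per request, smaller frames directly translate into a faster effective frame rate.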
Now, on the Unity side, I created a plane and attached a script to handle the image request and display.
public class CameraMaterial : MonoBehaviour {
    public Material frontPlane;
    public Text debugText;

    WebSocket w = null;
    Texture2D tex = null;
    MeshRenderer mr = null;

    void Start () {
        tex = new Texture2D (1, 1);
        mr = GetComponent<MeshRenderer> ();
        w = new WebSocket (new Uri ("ws://192.168.1.52/videoframe"));
        StartCoroutine (w.Connect ());
        StartCoroutine (StreamPictures ());
    }

    IEnumerator StreamPictures ()
    {
        while (true) {
            if (w == null) {
                // Socket not ready yet; yield so we don't busy-loop and freeze Unity.
                yield return null;
                continue;
            }
            byte[] data = w.Recv ();
            if (data != null) {
                // Load the received JPEG into the texture and swap it onto the plane.
                tex.LoadImage (data);
                frontPlane.mainTexture = tex;
                mr.material = frontPlane;
                // Give the renderer a moment before requesting the next frame.
                yield return StartCoroutine (WaitForRefresh ());
            }
            // Ask the rover for the next single frame.
            w.SendString ("{ \"command\": \"VideoFrame\" }");
            if (w.error != null) {
                Debug.LogError ("Error: " + w.error);
                break;
            }
            yield return null;
        }
        w.Close ();
    }

    IEnumerator WaitForRefresh ()
    {
        yield return new WaitForSeconds (0.01f);
    }
}
You will notice the SendString call requesting a single VideoFrame, and that I swap the texture on the plane with each new image, which gives the effect of a (somewhat choppy) video stream.
Note that the code can be used in the Unity Editor, but make sure to turn the rover upside down so it does not run away.
I welcome any comments on improvements, as this is the end of my rover tutorial. Happy coding!