OpenCV motion detection in Oculus Rift (Mark II)

In my last post, I put on my Oculus Rift virtual reality headset and explored a virtual world. But I also wanted to know what was happening in the real world – so I added webcam images of the real world to each face of a cube in the virtual world. And whenever motion was detected in the webcam images, a cone suddenly appeared in the virtual world to let me know.

In this post I will refine how the motion detection works. Instead of simply showing the cone when motion is detected in the webcam images, I will use the cone to indicate how much motion is detected. The more motion detected, the higher the cone will rise into the air!

Here’s how it all hangs together…

The C++ code in the ‘Win32_GLAppUtil.h’ file of the ‘OculusRoomTiny(GL)’ Visual Studio project (from the Oculus SDK for Windows) uses:

  1. Assimp (Open Asset Import Library) to import a cube mesh and a cone mesh created in Blender (3D creation suite) – see the import sketch after this list
  2. OpenCV computer vision to obtain images from a webcam and detect motion
  3. OpenGL graphics library to render the cube and cone into a virtual world, on the Oculus Rift virtual reality headset
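
For context, here’s a minimal sketch of the Assimp import step – the file name ‘cone.dae’ and the post-processing flag are illustrative assumptions, not the exact values from my project:

#include <assimp/Importer.hpp>
#include <assimp/scene.h>
#include <assimp/postprocess.h>

// hypothetical file name – the real project loads meshes exported from Blender
Assimp::Importer importer;
const aiScene* scene = importer.ReadFile("cone.dae", aiProcess_Triangulate);

if (scene && scene->mNumMeshes > 0) {
	// vertex positions and faces feed the OpenGL vertex and index buffers
	const aiMesh* mesh = scene->mMeshes[0];
}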

And here’s the code to animate the cone (the more motion detected in the webcam images, the higher the cone rises into the air):

#include <opencv2/opencv.hpp>
#include <thread>

using namespace cv;

Mat WebcamImage;
float MotionLevel;

struct Webcam {
	VideoCapture Capture;
	bool IsThreadEnabled;

	// open the default webcam and enable the motion thread loop
	Webcam() : Capture(0), IsThreadEnabled(true) {}

	void MotionThread() {

		// create background subtractor (history 20, threshold 16, no shadow detection) and mask
		Ptr<BackgroundSubtractor> pMOG2 = createBackgroundSubtractorMOG2(20, 16, false);
		Mat mask;

		while (IsThreadEnabled) {
			// obtain image from webcam
			Capture >> WebcamImage;

			// apply background subtraction (learning rate 0.001)
			pMOG2->apply(WebcamImage, mask, 0.001);

			// update motion level (white pixel count scaled to a 0–4 range)
			MotionLevel = cv::countNonZero(mask) / 76800.0f;
		}
	}
};


Webcam * webcam = new Webcam();
std::thread mt(&Webcam::MotionThread, webcam);

We create a new instance of our Webcam struct and run a thread against its MotionThread function (we use a thread so as not to block the main program from rendering to the Oculus Rift virtual reality headset).
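
One caveat worth flagging: MotionLevel is written by the webcam thread and read by the render thread. A plain float keeps the example simple, but if you want to avoid the data race, one option (my assumption, not what the snippet above does) is to declare it as std::atomic<float> instead:

#include <atomic>

// an atomic float gives the render thread a torn-free read of the
// value the motion thread writes on each loop iteration
std::atomic<float> MotionLevel{0.0f};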

The MotionThread function fetches the latest image from the webcam and applies OpenCV background subtraction to determine how much motion is in the image.

Motion in the image is marked as white pixels, so we count them with the countNonZero function, scale the count, and assign the result to the MotionLevel variable.

We want the cone to rise no more than 4 units on the Y axis – to ensure this, we divide the total number of pixels in the 640×480 webcam image (which is 307,200) by 4 (which is 76,800) and use this number in our calculation.
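
If your webcam delivers a different resolution, the hard-coded 76,800 will be wrong. A hedged variation derives the divisor from the actual frame size instead (maxHeight is my name, not from the project):

// assumption: compute the divisor from the frame we actually received
const float maxHeight = 4.0f;
float divisor = (WebcamImage.cols * WebcamImage.rows) / maxHeight; // 640 × 480 / 4 = 76,800
MotionLevel = countNonZero(mask) / divisor;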

Note also that we create our background subtractor with a detectShadows parameter of false – shadow detection marks shadow pixels as grey (a value of 127 in the mask), and since grey is non-zero those pixels would inflate our countNonZero result. We don’t care about shadows.
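
Had we enabled shadow detection, we could still keep shadows out of the count by thresholding away the grey pixels before counting – a sketch, assuming MOG2’s default shadow value of 127:

// drop shadow pixels (grey, 127) before counting – only pure white (255) survives
threshold(mask, mask, 200, 255, THRESH_BINARY);
MotionLevel = countNonZero(mask) / 76800.0f;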

Now when we come to render our cone, we set its height on the Y axis to our MotionLevel variable.

for (int i = 0; i < numMeshes; ++i) {
	// update cone height with motion level (mesh 0 is the cone)
	if (i == 0) {
		Meshes[i]->Pos[1] = MotionLevel;
	}

	Meshes[i]->Render(view, proj);
}

And don’t forget to tidy up our resources when the application is shut down – we signal the motion thread to stop, wait for it to finish, and delete the webcam:

webcam->IsThreadEnabled = false;
mt.join();
delete webcam;

Okay, time for a demo. Let’s see the cone in the virtual world rise higher as more motion is detected in the webcam images of the real world:

The right-hand side window displays our virtual world with cone and cube, and the left-hand side window displays the amount of motion detected (white pixels) in our webcam images of the real world.

That nutter robot is causing cone erection, so to speak.

Note also that the cone falls back towards a Y axis position of 0 as motion detection reduces – the background subtractor gradually absorbs stationary objects into its background model (at our learning rate of 0.001), so the count of white pixels decays once movement stops.