
Arkwood was standing in a pail of water, eating radishes.

Why? I asked.

‘Apparently it helps you to remember things.’

I told him that it was nonsense. Wherever did he hear of such a thing? Anyway, it was all to do with Arkwood playing the retro Commodore 64 computer game Commando on a TV set in my virtual world, as detailed in my previous post, Commodore 64 in virtual reality.

‘I keep forgetting to pick up the grenades,’ he cried.

Not to worry. I cranked open the C++ Microsoft Visual Studio application (with OpenGL graphics library and the Oculus SDK for Windows) and added some computer vision.

Now when each frame of Commando is rendered to the TV set, we can use OpenCV Template Matching to determine whether a stack of grenades is visible in the game. Yoggy has some nice sample code on GitHub to get the job done.

So whenever Arkwood dons the Oculus Rift virtual reality headset and plays Commando in the virtual world, a speech bubble will appear next to the TV set if grenades are available, reminding him to go collect.

The fragment shader discards fragments with a low alpha value, thus granting the speech bubble a nice curved look.
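For the curious, here is a minimal sketch of that discard trick in GLSL. The variable names, the texture uniform and the 0.5 alpha cutoff are my own assumptions, not lifted from the actual application:

```glsl
#version 330 core

in vec2 texCoord;
out vec4 fragColor;

uniform sampler2D bubbleTexture; // speech bubble image with transparent corners

void main()
{
    vec4 texel = texture(bubbleTexture, texCoord);

    // Throw away nearly-transparent fragments, so the rectangular quad
    // renders with the bubble's curved outline instead of hard corners.
    if (texel.a < 0.5)
        discard;

    fragColor = texel;
}
```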

Here’s Arkwood being prompted to pick up the grenades during a game of Commando:

You might have noticed that although Arkwood collects the grenades when instructed, he doesn’t actually use any of them. Well, as the saying goes, “You can lead a fool to a pail of water, but you can’t make him eat radishes”.



A couple of technical points…

I used the CV_TM_CCOEFF_NORMED method for template matching, with a threshold of 0.82 (avoiding the use of the cv::normalize method).

I only conducted template matching on every tenth iteration of my thread, as it can be expensive in terms of performance.

I also provided a grace period for displaying the speech bubble, to ensure that if the stack of grenades got temporarily obscured (e.g. by a soldier running in front of them) the speech bubble would not flash off and on.
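Those last two points can be sketched together as a small helper for the thread loop. The class name, the two-second grace value and the callback style are my own assumptions about how such logic might be wired up:

```cpp
#include <chrono>

// Decides each iteration whether the speech bubble should be shown.
// Template matching is only consulted on every tenth iteration, and a
// recent sighting keeps the bubble up through brief occlusions.
class GrenadePrompt
{
public:
    template <typename MatchFn>
    bool update(MatchFn&& grenadesVisible)
    {
        using clock = std::chrono::steady_clock;

        // Expensive matching runs on every tenth iteration only.
        if (++m_iteration % 10 == 0 && grenadesVisible())
            m_lastSeen = clock::now(); // refresh the grace period

        // Bubble stays visible for a grace period after the last sighting.
        return clock::now() - m_lastSeen < std::chrono::seconds(2);
    }

private:
    unsigned m_iteration = 0;
    std::chrono::steady_clock::time_point m_lastSeen{}; // far in the past
};
```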

And here’s the template image that we scour each frame of Commando for: