
Peters is Dutch. Peters watches TV in his grubby underpants. With a Ginsters pasty wrapper stuffed down the side of the sofa and a full fat bottle of Coke wedged between his sweaty blubberous thighs. He watches the darts.

Anyway, it gave me an idea. I will update SaltwashAR – my Python Augmented Reality application – so that the robots can watch television. And as if by magic:

Wonderful! Or, as Peters would say, ‘Vonderbarr!’ The TV footage is taken from My Surgeon.

How the hell can the robots watch TV? Simple. I added a new Television feature to the app, which uses OpenCV Video Capture to stream video on top of a 2D marker. Here’s the code:

from features.base import Feature
import numpy as np
import cv2
from televisionfunctions import *

class Television(Feature):

    QUADRILATERAL_POINTS = 4
    BLACK_THRESHOLD = 100    # grayscale thresholds for classifying marker cells
    WHITE_THRESHOLD = 155
    TELEVISION_PATTERN = [1, 0, 1, 0, 1, 0, 1, 0, 1]

    def __init__(self):
        self.background_image = np.array([])
        self.video_capture = cv2.VideoCapture()

    # stop thread
    def stop(self):
        Feature.stop(self)  # let the base class flag the thread to stop
        self.background_image = np.array([])

        if self.video_capture.isOpened():
            self.video_capture.release()
    # get latest frame from video, looping back to the start when the video ends
    def _get_video_frame(self):

        success, frame = self.video_capture.read()
        if success: return frame

        # end of video: rewind to the first frame and read again
        self.video_capture.set(cv2.cv.CV_CAP_PROP_POS_FRAMES, 0)

        return self.video_capture.read()[1]

    def _thread(self, args):
        image = args

        # Stage 1: Detect edges in image
        gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
        gray = cv2.GaussianBlur(gray, (5,5), 0)
        edges = cv2.Canny(gray, 100, 200)

        # Stage 2: Find contours
        contours, _ = cv2.findContours(edges, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)
        contours = sorted(contours, key=cv2.contourArea, reverse=True)[:10]

        for contour in contours:

            # Stage 3: Shape check
            perimeter = cv2.arcLength(contour, True)
            approx = cv2.approxPolyDP(contour, 0.01*perimeter, True)

            if len(approx) == self.QUADRILATERAL_POINTS:

                # Stage 4: Perspective warping
                topdown_quad = get_topdown_quad(gray, approx.reshape(4, 2))

                # Stage 5: Border check
                if topdown_quad[int((topdown_quad.shape[0]/100.0)*5), 
                                int((topdown_quad.shape[1]/100.0)*5)] > self.BLACK_THRESHOLD: continue

                # Stage 6: Get marker pattern
                marker_pattern = None

                try:
                    marker_pattern = get_marker_pattern(topdown_quad, self.BLACK_THRESHOLD, self.WHITE_THRESHOLD)
                except Exception:
                    continue

                if not marker_pattern: continue

                # Stage 7: Match marker pattern
                if marker_pattern != self.TELEVISION_PATTERN: continue

                # Stage 8: Substitute marker
                if self.is_stop: return

                self.background_image = add_substitute_quad(image, self._get_video_frame(), approx.reshape(4, 2))
                return

        self.background_image = np.array([])

Note: to get the OpenCV 2.4.9 Video Capture to work on my Windows 7 64-bit PC with Python Tools for Visual Studio, I had to copy opencv_ffmpeg249_64.dll from opencv\build\x64\vc12\bin to my Python root folder – StackOverflow provided the detail.

As with all our SaltwashAR features, Television inherits from the Feature base class so that it can run in a thread (thus not blocking the main OpenGL process from rendering to screen).
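The Feature base class isn't listed in this post, so here's a minimal sketch of how such a threaded base class might look (the start method and thread attribute are my assumptions; only stop, is_stop and _thread appear in the code above):

```python
import threading


class Feature(object):
    """Minimal sketch of a threaded feature base class: subclasses
    implement _thread, which runs without blocking the render loop."""

    def __init__(self):
        self.is_stop = False
        self.thread = None

    # run _thread in the background, unless it is already running
    def start(self, args=None):
        if self.thread and self.thread.is_alive(): return
        self.is_stop = False
        self.thread = threading.Thread(target=self._thread, args=(args,))
        self.thread.start()

    # ask the running thread to stop at its next checkpoint
    def stop(self):
        self.is_stop = True

    # subclasses do their work here, checking self.is_stop regularly
    def _thread(self, args):
        raise NotImplementedError
```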

I set an instance variable to an OpenCV VideoCapture, along with the class method _get_video_frame to fetch the latest frame of video (notice how the video loops once the last frame has been rendered, by resetting CV_CAP_PROP_POS_FRAMES to 0). OpenCV’s Getting Started with Videos Python tutorial was a great help to me.
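The loop-on-last-frame logic can be exercised without a real video file. Here's a sketch with a stand-in capture object (FakeCapture is purely illustrative, not part of SaltwashAR) that fails to read once its frames run out, just as VideoCapture does at the end of a video:

```python
class FakeCapture(object):
    """Stand-in for cv2.VideoCapture: serves a fixed list of frames,
    then reports failure, like a video that has run out."""

    def __init__(self, frames):
        self.frames = frames
        self.pos = 0

    def read(self):
        if self.pos >= len(self.frames):
            return False, None
        frame = self.frames[self.pos]
        self.pos += 1
        return True, frame

    def set_position(self, pos):  # mimics setting CV_CAP_PROP_POS_FRAMES
        self.pos = pos


def get_video_frame(capture):
    # same shape as Television._get_video_frame: read a frame,
    # and on failure rewind to the first frame and read again
    success, frame = capture.read()
    if success:
        return frame
    capture.set_position(0)
    return capture.read()[1]
```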

We override the base class stop method to ensure that the video is released on a stop thread request.

But, as a monk would say, it’s the _thread method where all the shit happens. Similar to how we detect 2D markers to render our robots upon, we detect a 2D marker to render our video stream upon.

OpenCV Canny Edge Detection is used to pick out the contours of objects in the latest webcam image. If the object is square-ish (i.e. it has four sides) then we get a top-down view of it and try to match its pattern to the television marker. On match, we can swap the marker surface for a frame of video.
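The get_topdown_quad helper isn't listed here, but a key step in any such perspective warp is putting the quad's four corners into a consistent order before handing them to cv2.getPerspectiveTransform. A numpy sketch of that ordering step (function name is my own, not from SaltwashAR):

```python
import numpy as np


def order_corners(pts):
    """Order four (x, y) points as top-left, top-right, bottom-right,
    bottom-left, ready for a perspective transform."""
    pts = np.asarray(pts, dtype='float32')
    ordered = np.zeros((4, 2), dtype='float32')

    s = pts.sum(axis=1)             # x + y
    ordered[0] = pts[np.argmin(s)]  # top-left has the smallest sum
    ordered[2] = pts[np.argmax(s)]  # bottom-right has the largest sum

    d = np.diff(pts, axis=1).ravel()  # y - x
    ordered[1] = pts[np.argmin(d)]  # top-right has the smallest difference
    ordered[3] = pts[np.argmax(d)]  # bottom-left has the largest difference

    return ordered
```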

Here’s the 2D television marker:


Notice how the marker is identical, no matter how we rotate it. This would be no good for robots, which need a different pattern on each rotation of the marker (so as to know whether they’re supposed to be facing the webcam, or turned 90 degrees, 180 degrees or 270 degrees). But for a television it is perfect, cos we don’t want the TV set turned upside down!
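That rotation invariance is easy to verify in code: reshape the flattened marker pattern into its 3x3 grid, rotate it with numpy, and compare (a quick illustrative check, not part of the app):

```python
import numpy as np


def is_rotation_invariant(pattern):
    """Return True if a flattened 3x3 marker pattern reads the same
    at 0, 90, 180 and 270 degrees."""
    grid = np.array(pattern).reshape(3, 3)
    return all(np.array_equal(grid, np.rot90(grid, k)) for k in range(4))
```

The television pattern [1, 0, 1, 0, 1, 0, 1, 0, 1] passes; a pattern with an asymmetric corner, as a robot marker needs, does not.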

Another thing. Unlike our other features, a robot does not need to be facing the webcam for the Television feature to kick in. The robot need only be in shot. After all, we are not interacting with the robot, merely watching it watching TV. Creepy.

And that’s about it. Perhaps I will let the robot change the TV channel somehow? Switching one video file for another should be fairly straightforward.
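A channel switch could amount to cycling through a list of video files, releasing the current VideoCapture and opening the next. A sketch of just the cycling logic (the class name, behaviour and file names are my own guesses, not SaltwashAR code):

```python
class ChannelSwitcher(object):
    """Cycle through a list of video file paths; the caller would
    release its cv2.VideoCapture and reopen it on the returned path."""

    def __init__(self, video_paths):
        if not video_paths:
            raise ValueError('need at least one video path')
        self.video_paths = list(video_paths)
        self.index = 0

    def current_channel(self):
        return self.video_paths[self.index]

    def next_channel(self):
        # wrap around to the first channel after the last
        self.index = (self.index + 1) % len(self.video_paths)
        return self.video_paths[self.index]
```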

Please check out the SaltwashAR Wiki for details on how to install and help develop the SaltwashAR Python Augmented Reality application.

As for Peters, I need to figure out how to get his lardy arse off the sofa. Otherwise I’ll never get to watch my favourite TV programme, Celebrities with lead pipes.