Arkwood has a long-standing phobia of Lego policemen. ‘They are watching me!’ he shrieked, the hairs on the back of his neck erect. Of course, it was just his drug-addled brain playing a cruel trick.

‘Anyway,’ I said, ‘it’s the Lego criminals you need to watch out for. Their sticky yellow fingers are oft found in the cookie jar.’

In my last post, OpenCV Camera Calibration and Pose Estimation using Python, I was able to calibrate my webcam and draw a 3D cube on a grid. And in my post Lego detection using OpenCV (Mark III) I was able to detect Lego policemen using an OpenCV haar cascade classifier. Let’s bring the two together and provide some augmented reality!

Here’s the code:

from webcam import Webcam
from detection import Detection
from effects import Effects
import cv2

# set up classes
webcam = Webcam()
webcam.start()

detection = Detection()
effects = Effects()

# loop forever
while True:

    # get image from webcam
    image = webcam.get_current_frame()

    # if lego policeman detected...
    item_detected = detection.is_item_detected_in_image('haarcascade_lego_policeman.xml', image)

    # ...then draw a virtual jail for all those nasty criminals
    if item_detected:
        effects.render(image)

    # show the scene
    cv2.imshow('grid cube', image)
    cv2.waitKey(100)

Dropping into a while loop, I obtain the current frame from my webcam via the Webcam class.

Next, the Detection class tries to find Lego policemen in the webcam image, using a haar cascade classifier.

If a Lego policeman has been found then I use the Effects class to draw a 3D jail on the image, to incarcerate any Lego criminals.

Finally, we show the augmented image in a window.

The code from the classes, along with a link to my haar cascade classifier, is at the foot of this post. But first, a demo…

The scene is set:


A Lego soldier walks into shot. A military man is not an officer of the law – alas – therefore our grid does not yield a virtual jail:


Who’s that strolling up the boulevard? It’s a nurse. Sorry, my angel of the wards, but the virtual jail is not for you:


But what is that I hear? Why, it is a bobby on the beat:


Hurray! Our haar cascade classifier has detected the Lego policeman in the webcam image. A virtual jail has been drawn on the grid.

We can build and pan our jail at different angles:



Granted, there is a bit of work still to do. For a start, the jail has no bars, so all those nasty criminals will be able to make good their escape. The grid I’ve used to render the jail is a bit big, taking up most of the scene. And we could do with omitting the detection rectangle around the policeman, as it spoils the sense of mind-boggling augmented reality.
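As for those missing bars, one option would be to interpolate extra uprights along each wall of the projected cube, between the floor corners and the roof corners. Here’s a rough sketch (the bar_endpoints helper is my own invention, not part of the classes in this post):

import numpy as np

def bar_endpoints(imgpts, bars_per_wall=3):
    # imgpts holds the 8 projected cube corners:
    # floor corners first (0-3), roof corners last (4-7)
    pts = np.float32(imgpts).reshape(-1, 2)
    floor, roof = pts[:4], pts[4:]

    endpoints = []
    for i in range(4):
        j = (i + 1) % 4  # corner at the far end of this wall

        # walk the floor edge and roof edge in step,
        # so each bar joins matching points top and bottom
        for k in range(1, bars_per_wall + 1):
            t = k / float(bars_per_wall + 1)
            bottom = floor[i] + t * (floor[j] - floor[i])
            top = roof[i] + t * (roof[j] - roof[i])
            endpoints.append((tuple(int(v) for v in bottom),
                              tuple(int(v) for v in top)))
    return endpoints

The endpoints could then be fed to cv2.line inside _draw_cube, after the pillars are drawn, to put some proper bars on the jail however the cube is panned.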

Still, it’s a start.

‘Don’t worry Arkwood,’ I sympathised, ‘I’ll soon have a proper jail for the Lego policemen. You won’t have to worry about them watching you, as they’ll be too busy collaring all those crooks.’

What Madness!


Here’s the code I promised. First, the Webcam class, whose frame-capture loop runs in a thread:

import cv2
from threading import Thread

class Webcam:

    def __init__(self):
        self.video_capture = cv2.VideoCapture(0)
        self.current_frame = self.video_capture.read()[1]

    # create thread for capturing images
    def start(self):
        Thread(target=self._update_frame, args=()).start()

    # continually update the current frame from the webcam
    def _update_frame(self):
        while True:
            self.current_frame = self.video_capture.read()[1]

    # get the current frame
    def get_current_frame(self):
        return self.current_frame

Next, the Detection class. It uses my Lego Policeman haar cascade to attempt to find Lego policemen in the supplied image.

import cv2

class Detection(object):

    def is_item_detected_in_image(self, item_cascade_path, image):
        # do detection
        item_cascade = cv2.CascadeClassifier(item_cascade_path)
        gray_image = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
        items = item_cascade.detectMultiScale(gray_image, scaleFactor=1.1, minNeighbors=36)

        # draw a rectangle around each detected item
        for (x,y,w,h) in items:
            cv2.rectangle(image, (x,y), (x+w,y+h), (255,0,0), 2)

        # indicate whether item detected in image
        return len(items) > 0
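Since that detection rectangle spoils the augmented reality, the drawing could be put behind a flag so it only appears when debugging. A sketch of that tweak (the draw_rectangles parameter is my own addition, not part of the class above):

import cv2

class Detection(object):

    def is_item_detected_in_image(self, item_cascade_path, image, draw_rectangles=False):
        # do detection
        item_cascade = cv2.CascadeClassifier(item_cascade_path)
        gray_image = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
        items = item_cascade.detectMultiScale(gray_image, scaleFactor=1.1, minNeighbors=36)

        # only mark up the image when debugging
        if draw_rectangles:
            for (x,y,w,h) in items:
                cv2.rectangle(image, (x,y), (x+w,y+h), (255,0,0), 2)

        # indicate whether item detected in image
        return len(items) > 0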

And finally, my Effects class, which borrows heavily from the OpenCV Pose Estimation article:

import cv2
import numpy as np

class Effects(object):

    def render(self, image):
        # load calibration data
        with np.load('webcam_calibration_ouput.npz') as X:
            mtx, dist, _, _ = [X[i] for i in ('mtx','dist','rvecs','tvecs')]

        # set up criteria, object points and axis
        criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 0.001)

        objp = np.zeros((6*7,3), np.float32)
        objp[:,:2] = np.mgrid[0:7,0:6].T.reshape(-1,2)

        axis = np.float32([[0,0,0], [0,3,0], [3,3,0], [3,0,0],
                           [0,0,-3],[0,3,-3],[3,3,-3],[3,0,-3] ])

        # find grid corners in image
        gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
        ret, corners = cv2.findChessboardCorners(gray, (7,6), None)

        if ret == True:
            # refine the corners, then project 3D points to image plane
            corners = cv2.cornerSubPix(gray, corners, (11,11), (-1,-1), criteria)
            rvecs, tvecs, _ = cv2.solvePnPRansac(objp, corners, mtx, dist)
            imgpts, _ = cv2.projectPoints(axis, rvecs, tvecs, mtx, dist)

            # draw cube
            self._draw_cube(image, imgpts)

    def _draw_cube(self, img, imgpts):
        imgpts = np.int32(imgpts).reshape(-1,2)

        # draw floor
        cv2.drawContours(img, [imgpts[:4]],-1,(200,150,10),-3)

        # draw pillars
        for i,j in zip(range(4),range(4,8)):
            cv2.line(img, tuple(imgpts[i]), tuple(imgpts[j]),(255),3)

        # draw roof
        cv2.drawContours(img, [imgpts[4:]],-1,(200,150,10),3)
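To get a feel for what projectPoints is doing with those cube corners, here’s a tiny standalone check against a made-up pinhole camera. The camera matrix and translation below are invented for illustration, not my real webcam calibration data:

import cv2
import numpy as np

# a made-up pinhole camera: focal length 100px, principal point (50, 50)
mtx = np.float32([[100, 0, 50], [0, 100, 50], [0, 0, 1]])
dist = np.zeros(5, np.float32)           # no lens distortion
rvec = np.zeros(3, np.float32)           # no rotation
tvec = np.float32([0, 0, 10])            # grid 10 units in front of the camera

# the grid origin, and a point 3 squares along the x axis
axis = np.float32([[0, 0, 0], [3, 0, 0]])
imgpts, _ = cv2.projectPoints(axis, rvec, tvec, mtx, dist)

# the origin lands on the principal point (50, 50);
# the second point lands 100 * 3 / 10 = 30px to its right, at (80, 50)
print(imgpts.reshape(-1, 2))

The real Effects class does exactly this, only with the calibrated camera matrix and the pose recovered from the chessboard corners.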