
Arkwood was concerned that Lego policemen were watching him smoke marijuana. To ease his paranoia, I created an OpenCV haar cascade classifier for detecting Lego policemen. I attached a webcam to my Raspberry Pi computer to take photos of the plants on Arkwood’s windowsill, where he was convinced the cops were patrolling. Python code ran on the Raspberry Pi, using the classifier to detect police officers in each of the webcam snaps. My initial post somewhat relaxed his addled brain.

‘But what if the policemen are only on their lunch break!’ he sobbed. ‘I have to stub out my reefer unnecessarily.’ No matter how hard I tried, I could not placate his crazy notions. ‘Okay,’ I said, ‘I will create an additional classifier that will detect the police motorbike. That way, you will only be alerted if the fuzz are on duty.’

In order to create a classifier for detecting Lego motorbikes, I made use of my Guitar detection using OpenCV post.

I have 14 positive images (.png, 160px width, cropped, side view), making use of 3 motorbikes and varying backgrounds.

[Images: three positive samples of the Lego motorbike (legodetection_motorbike_sample1, sample2, sample3)]

Now, since I am working with Lego, I don’t actually need 3 motorbikes. I can simply swap about some of the bricks that make up the bike to obtain my variants. I will work with a single side view of the bike.

I have 100 negative images (.png, 960px), random photos of my house (not a motorbike in sight).
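For anyone following along, here is a minimal sketch of how the positives.dat and negatives.dat listing files could be produced. The folder names positive_images and negative_images are just placeholders for wherever your images actually live; the idea is simply one image path per line:

import os

# placeholder folder names - adjust to your own layout
POSITIVE_DIR = 'positive_images'
NEGATIVE_DIR = 'negative_images'

def write_listing(folder, output_file):
    # write one relative image path per line, as the training tools expect
    with open(output_file, 'w') as listing:
        for name in sorted(os.listdir(folder)):
            if name.lower().endswith('.png'):
                listing.write(folder + '/' + name + '\n')

write_listing(POSITIVE_DIR, 'positives.dat')
write_listing(NEGATIVE_DIR, 'negatives.dat')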

I created my training samples:

perl createtrainsamples.pl positives.dat negatives.dat samples 250 "./opencv_createsamples -bgcolor 0 -bgthresh 0 -maxxangle 1.1 -maxyangle 1.1 -maxzangle 0.5 -maxidev 40 -w 40 -h 33"

And used them to train a classifier:

opencv_haartraining -data haarcascade_lego_motorbike -vec samples.vec -bg negatives.dat -nstages 20 -nsplits 2 -minhitrate 0.999 -maxfalsealarm 0.5 -npos 250 -nneg 100 -w 40 -h 33 -nonsym -mem 2048 -mode ALL

It stalled at stage 19 of training, so I used the convert_cascade application to spit out my classifier xml file:

convert_cascade --size="40x33" haarcascade_lego_motorbike haarcascade_lego_motorbike-inter.xml
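Before wiring the new cascade into the surveillance script, it is worth a quick sanity check that the converted file actually loads and runs. Here's a rough sketch (test.jpg is just a placeholder for any photo of the bike, and adjust the xml filename if you rename it):

import cv2

# load the freshly converted cascade and make sure OpenCV accepts it
cascade = cv2.CascadeClassifier('haarcascade_lego_motorbike-inter.xml')
assert not cascade.empty(), "cascade failed to load"

# run it against a single test photo of the bike
img = cv2.imread('test.jpg')
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
hits = cascade.detectMultiScale(gray, 1.3, 5)
print("motorbikes found: " + str(len(hits)))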

Now it is time to test the motorbike classifier in tandem with the policemen classifier I created in my previous post:

from webcam import Webcam
import pygame

webcam = Webcam()

# set up siren
pygame.mixer.init()
siren = pygame.mixer.Sound("221562__alaskarobotics__european-police-siren-1.wav")

# wait until lego detected
while not webcam.detect_lego():
    print("no police on the beat")

# now play siren
siren.play()

The Python code uses Pygame to set up a siren sound, so that Arkwood can be alerted if Lego policemen and motorbikes are detected in the webcam snaps. The siren sound was obtained from Freesound: http://www.freesound.org/people/AlaskaRobotics/sounds/221562/
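One caveat worth noting: Sound.play() returns immediately, so if the script ends straight after the call the siren can be cut short. A minimal fix is to keep the script alive for the duration of the clip, something like:

import time

# play() starts the sound on a mixer channel and returns straight away,
# so sleep for the length of the clip before letting the script exit
siren.play()
time.sleep(siren.get_length())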

All we need now is our Python Webcam class, which will do the actual object detection:

import cv2
from datetime import datetime

class Webcam(object):

    WINDOW_NAME = "Arkwood's Surveillance System"

    # constructor
    def __init__(self):
        self.webcam = cv2.VideoCapture(0)
        
    # save image to disk
    def _save_image(self, path, image):
        filename = datetime.now().strftime('%Y%m%d_%Hh%Mm%Ss%f') + '.jpg'
        cv2.imwrite(path + filename, image)

    # detect lego in webcam
    def detect_lego(self):

        # get image from webcam
        img = self.webcam.read()[1]
        
        # do lego detection
        lego_policeman_cascade = cv2.CascadeClassifier('haarcascade_lego_policeman.xml')
        lego_motorbike_cascade = cv2.CascadeClassifier('haarcascade_lego_motorbike.xml')

        # webcam frames are BGR, so convert to grayscale accordingly
        gray_img = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
        lego_policeman = lego_policeman_cascade.detectMultiScale(gray_img, 1.3, 5)
        lego_motorbike = lego_motorbike_cascade.detectMultiScale(gray_img, 1.3, 5)

        for (x,y,w,h) in lego_policeman:
            cv2.rectangle(img,(x,y),(x+w,y+h),(255,0,0),2)

        for (x,y,w,h) in lego_motorbike:
            cv2.rectangle(img,(x,y),(x+w,y+h),(255,0,0),2)

        # save image to disk
        self._save_image('WebCam/Detection/', img)

        # show image in window
        cv2.imshow(self.WINDOW_NAME, img)
        cv2.waitKey(2000)
        cv2.destroyAllWindows()

        # indicate whether lego detected 
        if len(lego_policeman) > 0 and len(lego_motorbike) > 0:
            return True

        return False

As you can see, we are loading up two classifier files, haarcascade_lego_policeman.xml and haarcascade_lego_motorbike.xml. A webcam image is saved to disk, with a rectangle drawn around any policemen and motorbikes detected. If at least one policeman and at least one motorbike are detected, the method returns True; otherwise it returns False.
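One small refinement worth considering, sketched below rather than baked into the class above: the two cascade files could be loaded once in the constructor instead of on every call to detect_lego, saving a little work per webcam snap.

    # constructor - load the cascade files once, rather than per snap
    def __init__(self):
        self.webcam = cv2.VideoCapture(0)
        self.lego_policeman_cascade = cv2.CascadeClassifier('haarcascade_lego_policeman.xml')
        self.lego_motorbike_cascade = cv2.CascadeClassifier('haarcascade_lego_motorbike.xml')

detect_lego would then use self.lego_policeman_cascade and self.lego_motorbike_cascade directly.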

So, let’s have a look at how the system performs:

[Image: legodetection_policemanandmotorbike, a webcam snap with the detected Lego policeman and motorbike outlined]

Great. We have detected our policeman and motorbike, and our Raspberry Pi has played the siren sound to Arkwood through some speakers. He extinguishes his reefer. Note: this particular policeman and motorbike were not used in the training samples.

We need more positive samples and training stages to make our classifiers robust, but at least my Belgian buddy can waste his brain cells on ganja without fear of being arrested and put in an imaginary prison cell.

‘You’re the dope,’ he told me, his eyes bloodshot. ‘No. You are!’ I retorted. How we laughed.

Ciao!
