
Once upon a time, I wrote some code which detects roundabout road signs in Google Street View. I created an OpenCV Haar feature-based cascade classifier which can be utilised from a Python script:

roundabouts = roundabout_cascade.detectMultiScale(gray, minNeighbors=6)

It works quite well:


But what happens if we drop the minNeighbors parameter from 6 to, say, 3?


Oops, we are now detecting a ‘keep left’ road sign, as well as the roundabout road signs. By adjusting the minNeighbors parameter to make object detection less strict, we have inadvertently introduced a false positive.

Why not just leave minNeighbors at 6, you ask? The problem is that, by being so strict with this setting, we may fail to detect roundabout signs in other Google Street View snaps. So let’s try a different approach – let’s set minNeighbors to 3 and then treat each detected object as a ‘candidate’ for a roundabout sign. Each candidate can then be inspected, to ensure it really is a roundabout sign.

Now, there are many OpenCV techniques we can use to inspect these candidate objects. Here I am going to use OpenCV ORB, one of a number of feature detection and description algorithms we can use to match images.
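Before diving in, it’s worth seeing what the matcher in my script actually does. ORB produces binary descriptors (256 bits each), and a brute-force matcher with cv2.NORM_HAMMING compares them by Hamming distance; crossCheck=True keeps only pairs that pick each other as nearest neighbours. Here’s a toy pure-Python sketch of that idea – the 8-bit ‘descriptors’ are made up, not real ORB output:

```python
# Toy sketch of what cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True) does:
# compare binary descriptors by Hamming distance, keep mutual best matches.

def hamming(a, b):
    # number of differing bits between two integers
    return bin(a ^ b).count('1')

def cross_check_match(des_a, des_b):
    # nearest neighbour of each descriptor in the other set
    best_in_b = [min(range(len(des_b)), key=lambda j: hamming(a, des_b[j]))
                 for a in des_a]
    best_in_a = [min(range(len(des_a)), key=lambda i: hamming(des_a[i], b))
                 for b in des_b]
    # keep only the pairs that choose each other (this is crossCheck=True)
    return [(i, j) for i, j in enumerate(best_in_b) if best_in_a[j] == i]

# toy 8-bit 'descriptors'
des_a = [0b10110100, 0b00001111, 0b11110000]
des_b = [0b10110101, 0b11110001, 0b00000000]

print(cross_check_match(des_a, des_b))  # -> [(0, 0), (2, 1)]
```

Note how the middle descriptor of des_a finds a nearest neighbour, but the match is not mutual, so it is discarded – that is the cross-check doing its job.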

Let me explain the following code:

import cv2

# constants
IMAGE_SIZE = 200.0
MATCH_THRESHOLD = 3

# load haar cascade and street image
roundabout_cascade = cv2.CascadeClassifier('orb/repository/haarcascade_roundabout.xml')
street = cv2.imread('orb/repository/roundabout1.png')

# do roundabout detection on street image
gray = cv2.cvtColor(street,cv2.COLOR_BGR2GRAY)
roundabouts = roundabout_cascade.detectMultiScale(gray, minNeighbors=3)

# initialize ORB and BFMatcher
orb = cv2.ORB()
bf = cv2.BFMatcher(cv2.NORM_HAMMING,crossCheck=True)

# find the keypoints and descriptors for roadsign image
roadsign = cv2.imread('orb/roundabout.jpg',0)
kp_r,des_r = orb.detectAndCompute(roadsign,None)

# loop through all detected objects
for (x,y,w,h) in roundabouts:

    # obtain object from street image
    obj = gray[y:y+h,x:x+w]
    ratio = IMAGE_SIZE / obj.shape[1]
    obj = cv2.resize(obj,(int(IMAGE_SIZE),int(obj.shape[0]*ratio)))

    # find the keypoints and descriptors for object
    kp_o, des_o = orb.detectAndCompute(obj,None)
    if des_o is None or len(kp_o) == 0: continue

    # match descriptors
    matches = bf.match(des_r,des_o)
    # draw object on street image, if threshold met
    if len(matches) >= MATCH_THRESHOLD:
        cv2.rectangle(street,(x,y),(x+w,y+h),(255,0,0),2)

# show objects on street image
cv2.imshow('street image', street)
cv2.waitKey(0)
cv2.destroyAllWindows()

First up, I load the roundabout cascade and the Google Street View photograph. I use the aforementioned OpenCV detectMultiScale function to detect all roundabout objects in the street snap.

Now we get to the OpenCV ORB stuff. I create an instance of ORB and a matcher.

Next, I load an image of a roundabout road sign, to match each detected object against. I use ORB to obtain the keypoints and descriptors of the roundabout image. Here’s the roundabout image:


We are ready to loop through all the objects detected by our cascade…

I use the object’s coordinates to obtain its image from the street photograph. I resize the object image, to bring it in line with our roundabout image.

I use ORB to obtain the keypoints and descriptors of the object. Now we can match the descriptors of our object with our roundabout image.

All that’s left is to draw a rectangle around our detected object, on the street photograph. But the object needs to pass the match threshold for this to happen!

So you see, OpenCV ORB has determined which of our candidate objects are actually roundabout signs. If fewer than 3 matches are found between our object and the roundabout image, the object is discarded.

Time for a demo…

Our cascade detected four objects from our Google Street View photograph. Here’s the first object, which shows no matches with our roundabout image:


Our next object is a roundabout road sign, but no matches are found. Perhaps it’s because the object is not a full sign, i.e. its border has been chopped off?


Luckily, our third object is of the same sign, but this time we can see its border. Matches have been found!


Finally, our fourth object, with lots of matches:


And here’s the Google Street View photograph with rectangles around the objects which passed the match threshold:


So there you have it – OpenCV ORB has helped us to detect road signs! Notice how ORB’s rotation invariance has matched the arrows on the detected objects, which are upside down compared to the roundabout image.

Of course, there are other feature detection and description algorithms to try e.g. where scale invariance may avoid the need for image resizing. There are parameters we can tweak on some of the function calls. There are alternative techniques to evaluate our matches (than simply using a threshold). We need object detection to work well across all Google Street View photographs. But it’s been a promising start.
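As an example of an alternative evaluation technique, Lowe’s ratio test keeps a match only if its best distance is clearly smaller than the second-best distance. In OpenCV you would fetch the two nearest neighbours per descriptor with bf.knnMatch(des_r, des_o, k=2) – using a matcher created without crossCheck, since the two don’t mix. Here’s a toy sketch with made-up distances:

```python
# Sketch of Lowe's ratio test, an alternative to a fixed match-count
# threshold. Each candidate match is a (best, second_best) distance pair,
# as you'd get from bf.knnMatch(des_r, des_o, k=2).

RATIO = 0.75  # a commonly used ratio

def ratio_test(pairs, ratio=RATIO):
    # keep a match only if its best distance clearly beats the runner-up
    return [p for p in pairs if p[0] < ratio * p[1]]

# toy (best, second best) Hamming distances for four candidate matches
candidates = [(10, 40), (30, 32), (12, 20), (25, 90)]
print(ratio_test(candidates))  # -> [(10, 40), (12, 20), (25, 90)]
```

The (30, 32) pair is dropped: its two nearest neighbours are almost equally distant, so the match is ambiguous.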


If you want to get your hands on my roundabout cascade, as well as some Google Street View photographs, visit the repository: https://onedrive.live.com/redir?resid=74B6CEA107C215CA%21107

Perhaps when matching an object with the roundabout image, you’d rather the object were set within its surrounding area, like so:


No problem. Simply add some padding to the coordinates:
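The exact snippet isn’t reproduced here, but the idea is to grow the detection rectangle by a fixed margin on every side, clamped to the image bounds so the slice stays valid. A sketch – PAD and the helper name are illustrative, not from my script:

```python
PAD = 20  # illustrative padding, in pixels

def padded_box(x, y, w, h, img_w, img_h, pad=PAD):
    # grow the rectangle by pad on every side, clamped to the image bounds
    x0 = max(0, x - pad)
    y0 = max(0, y - pad)
    x1 = min(img_w, x + w + pad)
    y1 = min(img_h, y + h + pad)
    return x0, y0, x1, y1

# inside the detection loop you would then slice with:
#   x0, y0, x1, y1 = padded_box(x, y, w, h, gray.shape[1], gray.shape[0])
#   obj = gray[y0:y1, x0:x1]
print(padded_box(10, 50, 40, 40, img_w=640, img_h=480))  # -> (0, 30, 70, 110)
```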


Now, you may have noticed from the OpenCV Feature Matching documentation that there is a cv2.drawMatches function. But this is for OpenCV 3.x, and I only have 2.4.9:

from cv2 import __version__
print __version__
>>> 2.4.9

A Stack Overflow post provided some options for drawing matches if you don’t have OpenCV 3.x. I plumped for the code provided by rayryeng.
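For reference, the core trick behind those workarounds is simple: place the two images side by side on one canvas and shift the second image’s keypoint x-coordinates by the first image’s width before drawing the connecting lines. A minimal sketch of the coordinate shift – the helper name is mine, not from that post:

```python
# The gist of drawing matches without cv2.drawMatches: the second image
# sits to the right of the first on a shared canvas, so its keypoints
# must be shifted right by the first image's width.

def match_line_endpoints(pt1, pt2, w1):
    # pt1 is a keypoint in image 1, pt2 in image 2; w1 is image 1's width
    (x1, y1), (x2, y2) = pt1, pt2
    return (int(x1), int(y1)), (int(x2) + w1, int(y2))

# with cv2 you would then do, per match m:
#   p, q = match_line_endpoints(kp_r[m.queryIdx].pt, kp_o[m.trainIdx].pt, w1)
#   cv2.line(canvas, p, q, (0, 255, 255), 1)
print(match_line_endpoints((12.4, 30.0), (5.0, 18.6), w1=200))
```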