
Peters, my fat Dutch lodger, is forever beating me at card games. His gloating has become insufferable. So I decided to build a poker bot using my Raspberry Pi computer, a webcam and some Python code, to win my money back. The only problem, as my initial post Playing card detection using OpenCV showed, is that the bot finds it a bit tricky to tell the three of hearts from the three of diamonds:


I need a more robust system. So rather than just having one classifier for detecting playing cards, I’ll introduce a second classifier for detecting the suit of the card (in this case, the heart motif):


10 stages of training were completed on 42 positive images, using the following parameters:

perl createtrainsamples.pl positives.dat negatives.dat samples 500 "./opencv_createsamples -bgcolor 0 -bgthresh 0 -maxxangle 1.1 -maxyangle 1.1 -maxzangle 0.5 -maxidev 40 -w 50 -h 59"
opencv_haartraining -data haarcascade_heartmotif -vec samples.vec -bg negatives.dat -nstages 20 -nsplits 2 -minhitrate 0.999 -maxfalsealarm 0.5 -npos 500 -nneg 200 -w 50 -h 59 -nonsym -mem 2048 -mode ALL

Let’s give it a whirl and see the results:


Brilliant – the suit has been detected. We can also rotate the image 180 degrees and match the third heart motif on the card:


We now have the advantage of being able to count the number of heart motifs, checking that it equals the card number (in this case, the number 3).

Note: the motif classifier is not detecting the tiny heart motifs in the top corners of the card, which is ideal for our number count – but if it does start to detect them then we’ll need to employ a minSize parameter on our detectMultiScale method.
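If the corner pips ever do creep into the results, the effect of a minSize floor can be sketched as a plain filter over the detection boxes. This is just an illustration of the idea; the box coordinates and the (30, 30) threshold below are made-up values to tune against real detections:

```python
def filter_small_detections(boxes, min_size=(30, 30)):
    # drop any (x, y, w, h) box smaller than min_size, mimicking the
    # effect of passing minSize to detectMultiScale
    min_w, min_h = min_size
    return [(x, y, w, h) for (x, y, w, h) in boxes if w >= min_w and h >= min_h]

# three large pips plus two tiny corner pips (hypothetical boxes)
boxes = [(50, 60, 40, 48), (50, 160, 40, 48), (50, 260, 40, 48),
         (5, 5, 10, 12), (185, 280, 10, 12)]
print(len(filter_small_detections(boxes)))  # 3
```

Passing minSize directly to detectMultiScale is cheaper, of course, since the detector then skips the small scales entirely.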

So there we have it – a more robust system, where all candidate playing cards are inspected for suit and number.
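The number check boils down to comparing the pip tally from the two detection passes (upright plus rotated) with the card's rank. A stripped-down sketch, where the counts are hypothetical stand-ins for detectMultiScale results:

```python
def confirm_card(upright_count, rotated_count, expected_rank):
    # the two detection passes together should account for every pip
    return upright_count + rotated_count == expected_rank

# e.g. the three of hearts: two pips found upright, the third found
# only after rotating the card region 180 degrees
print(confirm_card(2, 1, 3))  # True
```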

Here’s the Webcam class, amended to handle the additional motif classifier:

import cv2
from datetime import datetime
class Webcam(object):
    WINDOW_NAME = "Playing Card Detection System"
    # constructor
    def __init__(self):
        self.webcam = cv2.VideoCapture(0)       

    # save image to disk
    def _save_image(self, path, image):
        filename = datetime.now().strftime('%Y%m%d_%Hh%Mm%Ss%f') + '.jpg'
        cv2.imwrite(path + filename, image)

    # rotate image
    def _rotate_image(self, roi_gray):
        (h, w) = roi_gray.shape[:2]
        center = (w / 2, h / 2)
        M = cv2.getRotationMatrix2D(center, 180, 1.0)
        return cv2.warpAffine(roi_gray, M, (w, h))

    # detect cards in webcam
    def detect_cards(self, card_path, motif_path, motif_number):
        isDetected = False
        # get image from webcam
        img = self.webcam.read()[1]
        # do card detection
        card_cascade = cv2.CascadeClassifier(card_path)
        motif_cascade = cv2.CascadeClassifier(motif_path)

        gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)  # OpenCV captures frames in BGR order
        cards = card_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=4, minSize=(200, 300))

        for (x,y,w,h) in cards:
            current_motif_number = 0

            roi_gray = gray[y:y+h, x:x+w]
            current_motif_number += len(motif_cascade.detectMultiScale(roi_gray, scaleFactor=1.1, minNeighbors=110))
            roi_gray = self._rotate_image(roi_gray)
            current_motif_number += len(motif_cascade.detectMultiScale(roi_gray, scaleFactor=1.1, minNeighbors=110))
            if current_motif_number == motif_number:
                isDetected = True

        # save image to disk
        self._save_image('WebCam/Detection/', img)
        # show image in window
        cv2.imshow(self.WINDOW_NAME, img)
        cv2.waitKey(1)  # give HighGUI a moment to draw the frame
        # indicate whether cards detected
        return isDetected

I got the image rotating code from Adrian Rosebrock’s Basic Image Manipulations article.
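As an aside, for a 180-degree turn the matrix from getRotationMatrix2D has a simple closed form: the rotation part is minus the identity, and the translation sends each pixel to its opposite corner. A NumPy sketch that reconstructs (rather than calls) the OpenCV function:

```python
import numpy as np

def rotation_matrix_180(w, h):
    # same values cv2.getRotationMatrix2D((w / 2, h / 2), 180, 1.0) returns:
    # cos(180) = -1 and sin(180) = 0, so the 2x2 part is -I and the
    # translation (w, h) maps the rotated image back onto itself
    return np.array([[-1.0, 0.0, float(w)],
                     [0.0, -1.0, float(h)]])

M = rotation_matrix_180(4, 6)
print(M.dot([0, 0, 1]))  # the corner (0, 0) lands on (4, 6)
```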

And here’s the main program, which feeds our Webcam class with the classifier files and motif number:

from webcam import Webcam
from speech import Speech

webcam = Webcam()
speech = Speech()

# play a game of cards
while True:
    # attempt to detect the three of hearts
    if webcam.detect_cards('haarcascade_threehearts.xml', 'haarcascade_heartmotif.xml', 3):
        speech.text_to_speech("I have the cotton picking three of hearts")
    else:
        speech.text_to_speech("I do not have the darn gun slinging three of hearts")

We’ll use our Speech class to announce through a set of speakers whether the three of hearts card has been found (this class makes use of Google’s Text To Speech service):

from subprocess import PIPE, call
import urllib
class Speech(object):
    # converts text to speech
    def text_to_speech(self, text):
        try:
            # truncate text as google only allows 100 chars
            text = text[:100]
            # encode the text
            query = urllib.quote_plus(text)
            # build endpoint
            endpoint = "http://translate.google.com/translate_tts?tl=en&q=" + query
            # get google to translate and mplayer to play
            call(["mplayer", endpoint], shell=False, stdout=PIPE, stderr=PIPE)
        except Exception:
            print("Error translating text")
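For the curious, the endpoint is just a GET URL with the utterance percent-encoded. A quick sketch of the encoding step (with a Python 3 fallback, since quote_plus moved to urllib.parse there):

```python
try:
    from urllib import quote_plus        # Python 2, as used in the class above
except ImportError:
    from urllib.parse import quote_plus  # Python 3

text = "I have the three of hearts"[:100]
endpoint = "http://translate.google.com/translate_tts?tl=en&q=" + quote_plus(text)
print(endpoint)  # ...translate_tts?tl=en&q=I+have+the+three+of+hearts
```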

Fantastic! Soon I will be able to write the actual logic to allow my poker bot to decide what to do with its hand of cards, granting it enough artificial intelligence to beat Peters and reclaim my cash.

But first I need more classifiers – one for each card in the deck, and another three for the remaining suits (diamonds, clubs, spades). Plus I’ll also need to ensure that the code does not take an aeon to execute on my tiny Raspberry Pi.
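One easy saving on the Pi: detect_cards currently reloads both classifier files from disk on every frame. Loading each cascade once and reusing it across frames would help. A small cache sketch, where the loader argument stands in for cv2.CascadeClassifier and the counting stub only exists to show that each file is loaded a single time:

```python
class ClassifierCache(object):
    # load each classifier file once and reuse it across frames
    def __init__(self, loader):
        self._loader = loader
        self._cache = {}

    def get(self, path):
        if path not in self._cache:
            self._cache[path] = self._loader(path)
        return self._cache[path]

# in the Webcam class this would be ClassifierCache(cv2.CascadeClassifier);
# here a counting stub stands in for the OpenCV loader
loads = []
cache = ClassifierCache(lambda p: loads.append(p) or p)
cache.get('haarcascade_heartmotif.xml')
cache.get('haarcascade_heartmotif.xml')
print(len(loads))  # 1
```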

‘How’s your bot coming along?’ Peters snidely asked, flakes of pastry falling from his plump lips.

‘Just swell,’ I replied quite pleasantly. Secretly I wanted to thrust a screwdriver into his ear.

But I fear the Postmortem.


Here’s some sample code for saving the motif images to disk…

motifs = motif_cascade.detectMultiScale(roi_gray, scaleFactor=1.1, minNeighbors=110)
for (x,y,w,h) in motifs:
    # crop out the individual motif before saving and displaying it
    motif = roi_gray[y:y+h, x:x+w]
    self._save_image('WebCam/Detection/', motif)
    cv2.imshow(self.WINDOW_NAME, motif)
    current_motif_number += 1

…replacing the following line of code:

current_motif_number += len(motif_cascade.detectMultiScale(roi_gray, scaleFactor=1.1, minNeighbors=110))