I am busy building a poker bot using a webcam, my Raspberry Pi computer and some Python code. Why? Because Peters, my Dutch lodger, has won so many late-night card games against me that he now owns the deeds to my house. I need the help of some artificial intelligence to win my property back.

In my last post, Playing card detection using OpenCV (Mark IV), I was able to detect the Queen of hearts playing card using a webcam. In a previous post, Playing card detection using OpenCV (Mark III), I detected the three of hearts playing card. Let’s put these posts together, so both playing cards can be detected at the same time.

Let’s start with a simple card game of Pontoon rather than poker. If memory serves me correctly, a player receives two cards and can then twist for further cards.
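
Eventually the bot will also need some decision logic sitting on top of the card detection. Purely as a sketch (and an assumption on my part, since this post is only about detection), the twist-or-stick choice could look something like this:

# sketch only: a possible twist/stick decision, assuming a fixed threshold of 15
# (the card values and threshold here are illustrative, not taken from the bot)
CARD_VALUES = {'3': 3, 'Q': 10}

def should_twist(figures, threshold=15):
    # twist while the hand total is below the threshold
    total = sum(CARD_VALUES[figure] for figure in figures)
    return total < threshold

print(should_twist(['3', 'Q']))  # True - 13 points, so twist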

Here’s the main Python program:

from cards import Card, Webcam, Detection
from speech import Speech

webcam = Webcam()
detection = Detection()
speech = Speech()

# initialize cards: card cascade path, card minNeighbors, is red,
# figure cascade path/minNeighbors/amount, motif cascade path/minNeighbors/amount
three_hearts = Card('haarcascade_3hearts.xml', 3, True, None, None, None, 'haarcascade_heartmotif.xml', 110, 3)
queen_hearts = Card('haarcascade_Qhearts.xml', 1, True, None, None, None, 'haarcascade_heartminimotif.xml', 4, 1)

# play a game of cards
while True:
 
    # attempt to detect cards
    image = webcam.read_image()
    three_hearts_detected = detection.is_card_detected_in_image(three_hearts, image)
    queen_hearts_detected = detection.is_card_detected_in_image(queen_hearts, image)
    
    if three_hearts_detected and queen_hearts_detected:
        speech.text_to_speech("Twist as I only have 13 points")
    else:
        speech.text_to_speech("I have no idea what both my cards are")

First up, we create our two cards, three_hearts and queen_hearts. Now we simply loop, attempting to detect our two cards in an image from the webcam. If both cards are detected then we announce through a set of speakers, attached to the Raspberry Pi, that our cards have a combined score of 13 and we want to twist for another card. In a previous post, Playing card detection using OpenCV (Mark II), I detailed how to use Google’s Text To Speech service to make the announcement.
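
The Speech class itself comes from that earlier post and isn’t reproduced here. If you don’t have it to hand, a crude stand-in that shells out to the espeak command-line tool (my assumption, not the Google Text To Speech approach used in the post) is enough to follow along:

import subprocess

class Speech(object):

    # speak the supplied text through the attached speakers
    def text_to_speech(self, text):
        subprocess.call(["espeak", text])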

Obviously, to build a bot that is able to play Pontoon we need far more cards and logic. But let’s see how the bot got on with the task at hand…

[Image: three of hearts and Queen of hearts, both cards detected]

Great! Both cards have been detected and we have asked the dealer to twist for another card.

[Image: three of hearts and Queen of diamonds, only one card detected]

Swapping the Queen of hearts playing card for the Queen of diamonds, we now have only one card being detected and thus receive the announcement “I have no idea what both my cards are”.

[Image: eight of clubs and Queen of diamonds, no cards detected]

Swapping the three of hearts playing card for the eight of clubs, we now have no cards detected.

So far so good, but as always we need to improve the various OpenCV Haar cascade classifiers used in our detection, so as to avoid false positives and false negatives.
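
In the meantime, the parameters with the most influence are those passed to detectMultiScale. A throwaway sketch for tuning (assuming a saved webcam frame called frame.jpg) is to run the same cascade at a few minNeighbors settings and compare the hit counts:

import cv2

cascade = cv2.CascadeClassifier('haarcascade_3hearts.xml')
gray = cv2.cvtColor(cv2.imread('frame.jpg'), cv2.COLOR_BGR2GRAY)

# higher minNeighbors = fewer, but more confident, detections
for min_neighbors in (1, 3, 5, 10):
    hits = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=min_neighbors)
    print('minNeighbors={}: {} detections'.format(min_neighbors, len(hits)))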

Peters, my fat lodger, is threatening me with eviction from my own house! I’d better get my skates on.

Ciao!

P.S.

My Python code from previous posts has undergone some refactoring. Let’s take a look at the classes…

I’ve created a new Card class to store all the information specific to a particular playing card, e.g. the three of hearts or the Queen of hearts. Note that there are variables for storing the relevant cascades. The figure variables will allow us to detect e.g. the figure ‘3’ or ‘Q’ on our playing card, once the classifiers are ready for use.

import cv2
import numpy as np
from datetime import datetime

class Card(object):

    # constructor
    def __init__(self,
                 card_cascade_path, card_cascade_minneighbors, card_is_red,
                 figure_cascade_path, figure_cascade_minneighbors, figure_amount,
                 motif_cascade_path, motif_cascade_minneighbors, motif_amount):

        self.card_cascade = self._set_cascade(card_cascade_path)
        self.card_cascade_minneighbors = card_cascade_minneighbors
        self.card_is_red = card_is_red
        self.figure_cascade = self._set_cascade(figure_cascade_path)
        self.figure_cascade_minneighbors = figure_cascade_minneighbors
        self.figure_amount = figure_amount
        self.motif_cascade = self._set_cascade(motif_cascade_path)
        self.motif_cascade_minneighbors = motif_cascade_minneighbors
        self.motif_amount = motif_amount

    # set cascade
    def _set_cascade(self, cascade_path):

        if cascade_path is None:
            return None

        return cv2.CascadeClassifier(cascade_path)

Next, I’ve created a new Webcam class, which simply retrieves an image from the webcam.

class Webcam(object):

    # read image from webcam
    def read_image(self):
        return cv2.VideoCapture(0).read()[1]
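
One thing to keep an eye on: read_image opens a new VideoCapture on every call, which can be slow on the Raspberry Pi. If that becomes a problem, the capture could be opened once in the constructor instead – a small variation on the class above, not what this post uses:

class Webcam(object):

    # open the capture once and reuse it for every read
    def __init__(self):
        self.video_capture = cv2.VideoCapture(0)

    # read image from webcam
    def read_image(self):
        return self.video_capture.read()[1]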

Finally, our Detection class has a public method is_card_detected_in_image, which attempts to detect the supplied card in the supplied webcam image. The scope of detection depends on how much information has been stored against the card – for example, if no colour information has been stored (i.e. its card_is_red variable is None) then that aspect of card detection is skipped.

class Detection(object):
  
    WINDOW_NAME = "Playing Card Detection System"
 
    # is card detected in image
    def is_card_detected_in_image(self, card, image):

        if (card.card_cascade is None) or (card.card_cascade_minneighbors is None):
            return False
 
        # do detection, retrying with the image rotated 180 degrees
        is_detected = self._detect_card_in_image(card, image)

        if not is_detected:
            image = self._rotate_image(image)
            is_detected = self._detect_card_in_image(card, image)
 
        # save image to disk
        self._save_image(image)
  
        # show image in window
        cv2.imshow(self.WINDOW_NAME, image)
        cv2.waitKey(2000)
        cv2.destroyAllWindows()
          
        # indicate whether card detected in image
        return is_detected
 
    # detect card in image
    def _detect_card_in_image(self, card, colour_image):

        # detect cards (webcam frames are BGR, so convert with COLOR_BGR2GRAY)
        gray_image = cv2.cvtColor(colour_image, cv2.COLOR_BGR2GRAY)
        cards = card.card_cascade.detectMultiScale(gray_image, scaleFactor=1.1, minNeighbors=card.card_cascade_minneighbors)

        for (x,y,w,h) in cards:
            roi_colour = colour_image[y:y+h, x:x+w]

            # detect colour
            if card.card_is_red is not None:
                has_red_colour = self._has_red_colour(roi_colour)
                if (card.card_is_red and not has_red_colour) or (not card.card_is_red and has_red_colour):
                    continue

            # detect figures
            if (card.figure_cascade is not None) and (card.figure_cascade_minneighbors is not None) and (card.figure_amount is not None):
                figure_count = 0
                roi_gray = cv2.cvtColor(roi_colour, cv2.COLOR_BGR2GRAY)
                figure_count += len(card.figure_cascade.detectMultiScale(roi_gray, scaleFactor=1.1, minNeighbors=card.figure_cascade_minneighbors))
                roi_gray = self._rotate_image(roi_gray)
                figure_count += len(card.figure_cascade.detectMultiScale(roi_gray, scaleFactor=1.1, minNeighbors=card.figure_cascade_minneighbors))
                print('Figure count: {}'.format(figure_count)) # debug only
                if card.figure_amount != figure_count:
                    continue

            # detect motifs
            if (card.motif_cascade is not None) and (card.motif_cascade_minneighbors is not None) and (card.motif_amount is not None):
                motif_count = 0
                roi_gray = cv2.cvtColor(roi_colour, cv2.COLOR_BGR2GRAY)
                motif_count += len(card.motif_cascade.detectMultiScale(roi_gray, scaleFactor=1.1, minNeighbors=card.motif_cascade_minneighbors))
                roi_gray = self._rotate_image(roi_gray)
                motif_count += len(card.motif_cascade.detectMultiScale(roi_gray, scaleFactor=1.1, minNeighbors=card.motif_cascade_minneighbors))
                print('Motif count: {}'.format(motif_count)) # debug only
                if card.motif_amount != motif_count:
                    continue

            cv2.rectangle(colour_image,(x,y),(x+w,y+h),(255,0,0),2)
            return True

        return False
 
    # rotate image
    def _rotate_image(self, img):
        (h, w) = img.shape[:2]
        center = (w / 2, h / 2)
        M = cv2.getRotationMatrix2D(center, 180, 1.0)
        return cv2.warpAffine(img, M, (w, h))
 
    # save image to disk
    def _save_image(self, img):
        filename = datetime.now().strftime('%Y%m%d_%Hh%Mm%Ss%f') + '.jpg'
        cv2.imwrite("WebCam/Detection/" + filename, img)
 
    # detect red colour
    def _has_red_colour(self, img):
        hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
        threshold = cv2.inRange(hsv, np.array([0,90,60]), np.array([10,255,255]))
        red_count = cv2.countNonZero(threshold)
        print('Red count: {}'.format(red_count)) # debug only
        return red_count > 0

No doubt further refactoring will occur. I want to investigate optional parameters for the Card class. Also, the Detection class could perhaps handle figures and motifs in a more generic manner, avoiding a growth in code duplication.
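
For the optional-parameters idea, the Card constructor could take keyword arguments defaulting to None, so each card only supplies what it actually knows about itself. A sketch of what that refactor might look like (reusing the existing _set_cascade method), rather than the current code:

class Card(object):

    # constructor - everything beyond the card cascade is optional
    def __init__(self, card_cascade_path, card_cascade_minneighbors,
                 card_is_red=None,
                 figure_cascade_path=None, figure_cascade_minneighbors=None, figure_amount=None,
                 motif_cascade_path=None, motif_cascade_minneighbors=None, motif_amount=None):

        self.card_cascade = self._set_cascade(card_cascade_path)
        self.card_cascade_minneighbors = card_cascade_minneighbors
        self.card_is_red = card_is_red
        self.figure_cascade = self._set_cascade(figure_cascade_path)
        self.figure_cascade_minneighbors = figure_cascade_minneighbors
        self.figure_amount = figure_amount
        self.motif_cascade = self._set_cascade(motif_cascade_path)
        self.motif_cascade_minneighbors = motif_cascade_minneighbors
        self.motif_amount = motif_amount

# usage: only the details known for this card are supplied
queen_hearts = Card('haarcascade_Qhearts.xml', 1, card_is_red=True,
                    motif_cascade_path='haarcascade_heartminimotif.xml',
                    motif_cascade_minneighbors=4, motif_amount=1)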

Note: I’ve dropped the OpenCV minSize=(200,300) parameter, previously used in detecting the three of hearts playing card – this can be reintroduced, if required.
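
Reintroducing it would just mean passing it back into the relevant detectMultiScale call, e.g.:

cards = card.card_cascade.detectMultiScale(gray_image, scaleFactor=1.1,
                                           minNeighbors=card.card_cascade_minneighbors,
                                           minSize=(200,300))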
