
It is a well-known fact in the parish of Borrowstounness that Arkwood is gunning for the world record at retro arcade game Pac-Man. Daphne – the fat spotty girl that works down the chippy – has agreed to have sex with him, provided he furnishes a certificate from The Guinness Book of Records. My goodness, even the mayor has commissioned a bronze statue of my filthy Belgian buddy, to be erected in the town square. It’s that serious.

‘Please help me get a high score!’ Arkwood pleaded. Against my better judgement, I told him that I would help.

In my last post, OpenCV Contours for Pac-Man, I wrote some Python code that took regular screenshots of Arkwood playing Pac-Man on his Windows 7 PC. Using OpenCV Background Subtraction I was able to detect the foreground objects in the screenshot – namely the Pac-Man character and the four ghosts that chase him around the maze. I then used OpenCV Contours and shape matching to obtain the coordinates of Pac-Man on the screen.

The final part of the jigsaw was to show a side panel to Arkwood, whilst he was busy playing Pac-Man:

pacman_contour_screenshot

The side panel is a Google Chrome browser, utilising HTML and JavaScript. It contains the image of Pac-Man and the four ghosts after background subtraction, with the contour and coordinates of Pac-Man drawn on it:

pacman_contour_630

The side panel image is updated with regular screenshots, and runs only a fraction of a second behind the actual gameplay. If I can obtain the coordinates of the ghosts, as well as Pac-Man, I will be able to use the side panel to provide Arkwood with tactics and warnings. With my help, he will soon have the Pac-Man world record. And Daphne’s plump naked body.

But first, we need to change tack. Matching shapes to obtain coordinates worked well enough for Pac-Man, but won’t help if we want to separate the ghosts. Look, the ghosts all have the same shape:

“Blinky”

“Pinky”

“Inky”

“Clyde”

So instead, I’ll use colour detection. Here’s the snippet of code that will extract a pixel from a foreground object:

# extract colour from object
colour = frame[coord[1] - PIXEL_OFFSET, coord[0]]

The Y coordinate has been offset. We don’t want to extract colour from the centre spot of the object because – as you can see from the ghost images above – we will extract their black eye rather than their actual ‘skin’ colour. Perhaps we will have to readjust the offset if it doesn’t cater for all scenarios? Perhaps we could obtain the mean colour of the ghost instead (though that might increase processing time)? All food for thought.
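If we do go down the mean-colour route, a minimal sketch might look like this. The mean_object_colour helper is hypothetical – I’m using plain NumPy to average the pixels under the foreground mask, rather than sampling a single offset pixel:

```python
import numpy

def mean_object_colour(frame, mask):
    # mean BGR colour of the pixels where the mask is non-zero
    pixels = frame[mask > 0]
    if len(pixels) == 0:
        return (0, 0, 0)
    return tuple(int(v) for v in pixels.mean(axis=0))

# tiny synthetic example: a 2x2 'frame' with a single masked pixel
frame = numpy.array([[[10, 20, 30], [0, 0, 0]],
                     [[0, 0, 0],   [0, 0, 0]]], dtype=numpy.uint8)
mask = numpy.zeros((2, 2), dtype=numpy.uint8)
mask[0, 0] = 255

print(mean_object_colour(frame, mask))  # (10, 20, 30)
```

Averaging over the whole ghost would smooth out the black eyes, at the cost of a little extra processing per contour.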

Here’s the method that will try to match the extracted colour to one of our characters, i.e. Pac-Man or one of the four ghosts:

# check for colour match
def is_colour_match(self, colour):
    for i in range(3):
        if colour[i] < self.lower_colour[i] or colour[i] > self.upper_colour[i]:
            return False
        
    return True

Pretty simple. We just loop through the extracted colour’s BGR (blue, green, red) values and check whether they are all within the lower and upper range of a given character.
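As an aside, the same per-channel check can be written as a single NumPy comparison. Here’s a standalone sketch – the free-function form is just for illustration, since the post’s version lives on the Character class:

```python
import numpy

def is_colour_match(colour, lower_colour, upper_colour):
    # True if every BGR channel sits inside [lower, upper]
    colour = numpy.asarray(colour)
    return bool(numpy.all((colour >= lower_colour) & (colour <= upper_colour)))

# using Blinky's colour range from later in the post
print(is_colour_match([60, 70, 140], [54, 63, 136], [74, 83, 156]))  # True
print(is_colour_match([60, 70, 200], [54, 63, 136], [74, 83, 156]))  # False
```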

I’ll provide all the code later in the post, but for now let’s see how we’ve done matching the objects we obtained from each screenshot to the characters:

pacman_objectdetection_86

pacman_objectdetection_104

pacman_objectdetection_628

pacman_objectdetection_1062

Wow! We now have the coordinates of Pac-Man and the four ghosts within the maze. For now, I’m just displaying them to Arkwood in the side panel:

pacman_objectdetection_screenshot

But, armed with this information, I’ll soon be able to provide my chum with tactics and warnings.

‘What do you think of it so far?’ I asked Arkwood.

‘That’s superb, but please hurry! Wayne, the deep fat fryer, is paying Daphne special attention. If I don’t get that certificate from The Guinness Book of Records soon, she’ll fall for his free cod and batter.’

Okay, time to crack on.

P.S.

Here’s the code I promised. First up, the main program:

import cv2
import ImageGrab  # part of the Python Imaging Library (PIL)
import numpy
from character import Character

# constants
HISTORY = 12
SAMPLE_RATE = 2
PIXEL_OFFSET = 10
TEXT_OFFSET = 50

# set up characters
pacman = Character("Pac-Man", [114, 213, 205], [134, 233, 225], True)
red_ghost = Character("Blinky", [54, 63, 136], [74, 83, 156], True)
pink_ghost = Character("Pinky", [113, 117, 188], [133, 137, 208], True)
green_ghost = Character("Inky", [66, 169, 102], [86, 189, 122], True)
orange_ghost = Character("Clyde", [32, 95, 145], [52, 115, 165], True)

characters = [pacman, red_ghost, pink_ghost, green_ghost, orange_ghost]

# set up background subtraction
fgbg = cv2.BackgroundSubtractorMOG()

# set counter
sample_counter = 0 

while True:

    # apply background subtraction
    frame = grab_screenshot()
    fgmask = fgbg.apply(frame, learningRate=1.0/HISTORY)
    fgoutput = cv2.cvtColor(fgmask, cv2.COLOR_GRAY2RGB)

    # detect objects at set interval
    if sample_counter % SAMPLE_RATE == 0:
        
        # get contours for objects in foreground
        contours = get_contours(fgmask)
        contours = sorted(contours, key=cv2.contourArea, reverse=True)[:10]

        # find characters in foreground objects
        for contour in contours:

            # get centre coordinates of object
            coord = get_contour_centroid(contour)
            
            # extract colour from object
            colour = frame[coord[1] - PIXEL_OFFSET, coord[0]]
            
            # loop all characters and attempt to match colour
            for character in characters:
                if character.enabled and character.is_colour_match(colour):
                    character.enabled = False
                    draw_contour(contour, fgoutput, coord, character)
                    break

    # save image to disk
    cv2.imwrite('Images/pacman.jpg', fgoutput)

    # re-enable characters
    for character in characters:
        character.enabled = True

    # increment counter
    sample_counter += 1

A Character object is created for each ghost and for Pac-Man. We can see that each character has its own name and colour range, plus an enabled property set to True.

Once we set up background subtraction, we drop into a while loop. We grab a screenshot and use background subtraction to obtain the foreground objects. We also convert the foreground mask to a three-channel colour image, so that we can draw the contour detail on it for our side panel.

Next, depending on the sample rate, we obtain the contours for all our foreground objects and sort them in order of size.

Looping through each foreground object, we get the centre spot of the object and then extract its colour.

We then need to check which of our characters match the colour of the foreground object. If we find a match, we draw the contour detail on the side panel image. We also disable the character, so that it is no longer considered for any further foreground objects in our current screenshot.

Finally, we save the image to disk, so that it can be picked up by our side panel and displayed to Arkwood whilst he plays Pac-Man. All our characters are re-enabled for the next screenshot, and the sample counter is incremented.

Here’s the Character class:

class Character(object):
          
    # initialise the character
    def __init__(self, name, lower_colour, upper_colour, enabled):
        self.name = name
        self.lower_colour = lower_colour
        self.upper_colour = upper_colour
        self.enabled = enabled

    # check for colour match
    def is_colour_match(self, colour):
        for i in range(3):
            if colour[i] < self.lower_colour[i] or colour[i] > self.upper_colour[i]:
                return False
        
        return True

Dead simple. Just takes some initial parameters for name, colour range and enabled. It also has the is_colour_match method discussed previously.

And here are the supporting functions:

# grab screenshot
def grab_screenshot():
    screenshot = ImageGrab.grab(bbox=(0,50,1300,900))
    return cv2.cvtColor(numpy.array(screenshot), cv2.COLOR_RGB2BGR)

# get contours from image
def get_contours(image):
    edges = cv2.Canny(image, 100, 200)
    contours, hierarchy = cv2.findContours(edges, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)
    return contours

# get contour centroid
def get_contour_centroid(contour):
    M = cv2.moments(contour)
    if M['m00'] == 0:
        return (0, 0)
    cx = int(M['m10']/M['m00'])
    cy = int(M['m01']/M['m00'])
    return (cx, cy)

# draw contour detail on image
def draw_contour(contour, image, coord, character):
    cv2.drawContours(image, [contour], -1, (0, 255, 0), 3)
    cv2.putText(image, "{} {}".format(character.name, coord), (coord[0] + TEXT_OFFSET, coord[1]), cv2.FONT_HERSHEY_SIMPLEX, 0.5, (0, 255, 0))

The function grab_screenshot does exactly that – snapping Arkwood playing Pac-Man.

The get_contours function uses OpenCV Canny Edge Detection and OpenCV Contours to obtain the contours of our foreground objects.

The get_contour_centroid function uses OpenCV Moments to get the centre spot of an object.

The function draw_contour amends our side panel image, drawing a green line around the character and writing the coordinates.

And that’s it!

Note: all the HTML and JavaScript code for the side panel can be found in my Background Subtraction for Pac-Man post.

I used Python Tools for Visual Studio to run the Python code on the Windows 7 PC.

I used VICE emulator to play the Commodore 64 version of Pac-Man on the Windows 7 PC.

I used Google Chrome version ‘39.0.2171.95 m’ to run the web page code on the Windows 7 PC.

Why not swot up on our four Pac-Man ghosts? The Pac-Man aficionados amongst you will know that, when Pac-Man swallows a power pellet, all the ghosts turn blue and can be eaten. We can easily update our code to detect blue ghosts and plot their coordinates, if this helps Arkwood achieve a top score.
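If we did want to track frightened ghosts, the change is small: add one more Character with a blue colour range. Here’s a sketch, repeating the Character class from above for completeness – note the BGR bounds are placeholder guesses that would need to be sampled from an actual screenshot, just like the ranges for the other characters:

```python
# the Character class, as defined earlier in the post
class Character(object):

    def __init__(self, name, lower_colour, upper_colour, enabled):
        self.name = name
        self.lower_colour = lower_colour
        self.upper_colour = upper_colour
        self.enabled = enabled

    def is_colour_match(self, colour):
        for i in range(3):
            if colour[i] < self.lower_colour[i] or colour[i] > self.upper_colour[i]:
                return False
        return True

# hypothetical BGR range for a frightened (blue) ghost
blue_ghost = Character("Frightened", [180, 80, 30], [220, 120, 70], True)

# in the main program, this would simply be appended to the characters list
print(blue_ghost.is_colour_match([200, 100, 50]))  # True
```

One wrinkle: all four frightened ghosts share the same blue, so the enabled flag would let us label at most one of them per screenshot – telling them apart would need something beyond colour alone.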
