Arkwood, my nefarious Belgian buddy, is still gunning for a world record at the classic arcade game Pac-Man. ‘I won’t quit until I have the title in the bag, and a queue of pretty girls outside my door.’

In my last post, Background Subtraction for Pac-Man, I foolishly agreed to help. I wrote some Python code that took a screenshot of Arkwood playing Pac-Man on his Windows 7 PC. Using OpenCV Background Subtraction, I was able to remove the background from the screenshot, ending up with something like this:

[Image: pacman_backgroundsubtraction_after_115]

Next, I displayed the screenshot to Arkwood in a side panel, so that he could ‘feel’ the flow of the game:

[Image: pacman_backgroundsubtraction_screenshot_2]

The side panel is a Google Chrome browser, making use of HTML and JavaScript. ‘If I take regular screenshots,’ I told him, ‘I can provide you with a real-time display, which only lags a fraction of a second behind the actual gameplay.’ Arkwood was impressed. He soon learnt the Way of the Ghost. He understood the Yin and Yang of that little yellow man. Top scores followed.

But what if I could work out the coordinates of Pac-Man on the screen, in relation to the ghosts? I would be able to provide Arkwood with a much smarter side panel, and edge him ever closer to that elusive world record. Let’s give it a go!

Now, with OpenCV, there’s more than one way to skin a cat. Today I shall use OpenCV Contours in order to obtain the coordinates of Pac-Man. First up, let’s snag two images of the dude, facing right and left:

[Image: right_pacman]

[Image: left_pacman]

Next, we need to obtain the contour of Pac-Man in each of the images. Here’s the function:

# get contours from image
def get_contours(image):
    edges = cv2.Canny(image,100,200)
    contours, hierarchy = cv2.findContours(edges,cv2.RETR_TREE,cv2.CHAIN_APPROX_SIMPLE)
    return contours

The first line uses OpenCV Canny Edge Detection to detect the outline of Pac-Man. The OpenCV findContours method is then employed to fetch a set of coordinates for the outline.
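If you fancy inspecting the edge image for yourself, here's a quick sketch (assuming the template image sits in the Images folder used in the full listing at the end of this post) that simply writes the Canny output to disk:

import cv2

# quick sketch: write the Canny edge image of the right-facing template to disk
right_pacman_image = cv2.imread('Images/right_pacman.jpg')
edges = cv2.Canny(right_pacman_image,100,200)
cv2.imwrite('Images/right_pacman_edges.jpg', edges)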

Here’s what Canny Edge Detection does to the images:

[Image: right_pacman_edges]

[Image: left_pacman_edges]

Now that we have the contours of Pac-Man (right- and left-facing), we can use them to detect him in each screenshot. It goes a bit like this…

Let’s apply Canny Edge Detection to our screenshot and fetch a contour for each object:

[Image: pacman_contour_edges]
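For the curious, that picture can be knocked up with a rough sketch along these lines, reusing the get_contours function above and the fgmask foreground image from the background subtraction step:

# rough sketch: draw every foreground contour onto a colour copy of the mask
debug_image = cv2.cvtColor(fgmask,cv2.COLOR_GRAY2RGB)
contours = get_contours(fgmask)
cv2.drawContours(debug_image, contours, -1, (0, 255, 0), 1)
cv2.imwrite('Images/pacman_contour_edges.jpg', debug_image)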

That’s great. But how can we find out which of those objects is Pac-Man? Let’s look at some code:

# get contours for objects in foreground
contours = get_contours(fgmask)
contours = sorted(contours, key = cv2.contourArea, reverse = True)[:10]

# find pacman in foreground objects
for contour in contours:
    right_sim = cv2.matchShapes(contour,right_pacman_contour,1,0.0)
    left_sim = cv2.matchShapes(contour,left_pacman_contour,1,0.0)

    if right_sim < SIMILARITY_THRESHOLD or left_sim < SIMILARITY_THRESHOLD:
        # draw pacman contour on image
        fgmask = cv2.cvtColor(fgmask,cv2.COLOR_GRAY2RGB)
        cv2.drawContours(fgmask, [contour], -1, (0, 255, 0), 3)
        cv2.putText(fgmask, get_contour_centroid(contour), (950,800), cv2.FONT_HERSHEY_SIMPLEX, 2, (0, 255, 0))
        break

Once we fetch a contour for each object, we can sort the objects in order of size. Looping through each object, we use OpenCV matchShapes to figure out if its contour relates to either the right- or left-facing Pac-Man. If so, we draw the contour on the image, along with coordinates:

[Image: pacman_contour]

Fantastic! Not only have we successfully detected Pac-Man in the image, we’ve also printed its coordinates ‘568 : 718’. We know exactly where Pac-Man is in the maze. We’re well on our way to a smarter side panel.
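A quick aside on the similarity threshold (0.15 in the full listing below): if it ever needs tuning, a throwaway sketch like this will print the raw matchShapes scores for the foreground objects. A score of 0.0 means the shapes are identical, and lower is better:

# throwaway sketch: print the raw matchShapes scores, to help tune SIMILARITY_THRESHOLD
for contour in contours:
    right_sim = cv2.matchShapes(contour,right_pacman_contour,1,0.0)
    left_sim = cv2.matchShapes(contour,left_pacman_contour,1,0.0)
    print("area {:.0f} -> right {:.3f} left {:.3f}".format(cv2.contourArea(contour), right_sim, left_sim))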

Now, a word on the printed coordinates. We use OpenCV moments to find the centre spot of Pac-Man:

# get contour centroid
def get_contour_centroid(contour):
    try:
        M = cv2.moments(contour)
        cx = int(M['m10']/M['m00'])
        cy = int(M['m01']/M['m00'])
        return "{} : {}".format(cx, cy)
    except ZeroDivisionError:
        # m00 is zero for a degenerate contour, so there is no centroid to report
        return "Error calculating contour centroid"

Using moments allows us to calculate an xy-coordinate at the heart of the little yellow chap, rather than just grabbing one of his outline coordinates.
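As a rough visual check (just a sketch, using the contour and fgmask from the loop above), the centroid can also be marked on the image, to confirm it lands in the middle of Pac-Man rather than on his outline:

# sketch: mark the centroid on the image, to confirm it sits at Pac-Man's centre
M = cv2.moments(contour)
cx = int(M['m10']/M['m00'])
cy = int(M['m01']/M['m00'])
cv2.circle(fgmask, (cx, cy), 5, (0, 0, 255), -1)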

‘Are you ready to try out the code?’ I asked Arkwood.

He started to play Pac-Man on his Windows 7 PC, whilst the Python code ran in the background, taking screenshots.

[Image: pacman_contour_screenshot]

As you can see, the side panel is displaying our background subtraction image. If our code detects Pac-Man, the image is updated to include his contour and coordinates:

[Image: pacman_contour_776]

[Image: pacman_contour_176]

[Image: pacman_contour_630]

Clearly the background images produced during gameplay are not perfect. Sometimes Pac-Man is too distorted, and a match is not found. Perhaps, once a contour has been fetched for each object, a technique other than shape matching could be used? Colour detection would be worth a try. Like I said, there's more than one way to skin a cat.
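Just to sketch the idea (the HSV bounds below are guesses at Pac-Man's yellow and would need tuning against the actual frame), colour detection might look something like this, working on the original screenshot rather than the foreground mask:

# rough sketch of colour detection: threshold the original frame for Pac-Man's yellow
# (the HSV bounds are guesses and would need tuning)
hsv = cv2.cvtColor(frame,cv2.COLOR_BGR2HSV)
lower_yellow = numpy.array([20, 100, 100])
upper_yellow = numpy.array([35, 255, 255])
yellow_mask = cv2.inRange(hsv, lower_yellow, upper_yellow)

# the largest yellow blob is a decent candidate for Pac-Man
yellow_contours = get_contours(yellow_mask)
if yellow_contours:
    candidate = max(yellow_contours, key=cv2.contourArea)
    print(get_contour_centroid(candidate))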

‘Just give me a little more time,’ I told Arkwood, ‘I’ll soon have a side panel smart enough to guide you to victory.’

‘Well, just hurry!’ my buddy cried, ‘I have my eye on Daphne, the fat spotty girl that works down the chippy. But without the world record, she’ll never agree to sex.’

Who am I to block the path to a beautiful romance! I rolled up my sleeves.

P.S.

Here’s the code in full:

import cv2
import ImageGrab
import numpy
 
# grab frame from screen
def grab_frame():
    screenshot = ImageGrab.grab(bbox=(0,50,1300,900))
    return cv2.cvtColor(numpy.array(screenshot),cv2.COLOR_RGB2BGR)

# get contours from image
def get_contours(image):
    edges = cv2.Canny(image,100,200)
    contours, hierarchy = cv2.findContours(edges,cv2.RETR_TREE,cv2.CHAIN_APPROX_SIMPLE)
    return contours

# get contour centroid
def get_contour_centroid(contour):
    try:
        M = cv2.moments(contour)
        cx = int(M['m10']/M['m00'])
        cy = int(M['m01']/M['m00'])
        return "{} : {}".format(cx, cy)
    except ZeroDivisionError:
        # m00 is zero for a degenerate contour, so there is no centroid to report
        return "Error calculating contour centroid"

# constants
HISTORY = 12
SAMPLE_RATE = 2
SIMILARITY_THRESHOLD = 0.15

# global variables
sample_counter = 0 

# set up background subtraction
fgbg = cv2.BackgroundSubtractorMOG()

# set up contours for pacman character
right_pacman_image = cv2.imread('Images/right_pacman.jpg')
right_pacman_contour = get_contours(right_pacman_image)[0]

left_pacman_image = cv2.imread('Images/left_pacman.jpg')
left_pacman_contour = get_contours(left_pacman_image)[0]

while True:

    # apply background subtraction
    frame = grab_frame()
    fgmask = fgbg.apply(frame, learningRate=1.0/HISTORY)

    # draw contour at set interval
    if sample_counter % SAMPLE_RATE == 0:
        
        # get contours for objects in foreground
        contours = get_contours(fgmask)
        contours = sorted(contours, key = cv2.contourArea, reverse = True)[:10]

        # find pacman in foreground objects
        for contour in contours:
            right_sim = cv2.matchShapes(contour,right_pacman_contour,1,0.0)
            left_sim = cv2.matchShapes(contour,left_pacman_contour,1,0.0)

            if right_sim < SIMILARITY_THRESHOLD or left_sim < SIMILARITY_THRESHOLD:
                # draw pacman contour on image
                fgmask = cv2.cvtColor(fgmask,cv2.COLOR_GRAY2RGB)
                cv2.drawContours(fgmask, [contour], -1, (0, 255, 0), 3)
                cv2.putText(fgmask, get_contour_centroid(contour), (950,800), cv2.FONT_HERSHEY_SIMPLEX, 2, (0, 255, 0))
                break

    # save image to disk
    cv2.imwrite('Images/pacman.jpg', fgmask)
    sample_counter += 1

We set up some constants and background subtraction, and fetch the contours for our right- and left-facing Pac-Man.

Once in a while loop, we grab a screenshot of Arkwood playing Pac-Man and apply background subtraction, yielding an image containing only foreground objects.

Depending on a sample rate, we fetch the contour of each object in the foreground and sort the objects by size. Looping through each object, we use a similarity threshold to determine whether its contour matches either our right- or left-facing Pac-Man.

On finding a match, we draw the contour on the image before saving it to file.

My previous post Background Subtraction for Pac-Man has the HTML and JavaScript code used to display the image in the side panel.

Some further thoughts and notes…

Perhaps more variations of a right- and left-facing Pac-Man could be used, to aid matching?
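For instance (just a sketch, and the extra template images are hypothetical captures), the templates could be kept in a list and each foreground contour compared against all of them:

# sketch: match each foreground contour against a list of template contours
# (up_pacman.jpg and down_pacman.jpg are hypothetical extra captures)
template_files = ['Images/right_pacman.jpg', 'Images/left_pacman.jpg',
                  'Images/up_pacman.jpg', 'Images/down_pacman.jpg']
template_contours = [get_contours(cv2.imread(f))[0] for f in template_files]

for contour in contours:
    scores = [cv2.matchShapes(contour, template, 1, 0.0) for template in template_contours]
    if min(scores) < SIMILARITY_THRESHOLD:
        # treat this contour as pacman
        break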

Perhaps the sample rate for object detection could also be tweaked, to aid matching? The sample rate was introduced to ensure that the quality of the background subtraction image did not degrade due to the extra processing.

I used Python Tools for Visual Studio to run the Python code on the Windows 7 PC.

I used VICE emulator to play the Commodore 64 version of Pac-Man on the Windows 7 PC.

I used Google Chrome version ‘39.0.2171.95 m’ to run the web page code on the Windows 7 PC.