OpenGL Shaders using Python



‘Vertex and fragment shaders are the modern way to draw OpenGL 3D objects, squeezing all the processing juice out of a top-notch graphics card,’ I informed Isobel Cuthbert at our weekly embroidery club.

The 89-year-old furrowed her brow. ‘But what has that to do with cross-stitch, dear?’ she replied.

I told her to be patient, and I would explain. ‘Just fuckin’ be patient,’ I told her.

You see, a while back I attached two webcams to my head, using a Google Cardboard headset:


One webcam yielded images from the perspective of my left eye, and the other webcam yielded images from the perspective of my right eye:


Using OpenCV computer vision, I was able to determine the disparity between an image from each webcam and thus determine which items were closest to me:


As you can see, the Feng Shui tube of incense sticks on the right-hand side of the table is the item closest to me (as well as the front of the table).

So what has this to do with OpenGL shaders? you ask. Fuck sake, just give me a minute to explain, won’t you? The whole world is in such a rush to nowhere these days!

Okay, so imagine the Google Cardboard headset can show my left eye what the left-hand side webcam can see, and can show my right eye what the right-hand side webcam can see. In fact, I’ve already built an augmented reality application to do this – it’s called ArkwoodAR and it’s on GitHub.

Now imagine if we use the coordinates of those closest items in the webcam images to draw OpenGL objects. We could make the Feng Shui tube of incense sticks come alive in vibrant 3D, right in front of our eyes! The items further away in the images can simply fade into the distance.

But first, we must draw the webcam images as a background to our OpenGL 3D world. And for that, we will use vertex and fragment shaders.

Time for a bit of Python code. Here are the shader programs, encapsulated in our Stereo Depth class:

from OpenGL.GL import *
from OpenGL.GLUT import *
from OpenGL.GL.shaders import *
import numpy, math
from PIL import Image

class StereoDepth:

    # constants
    BACKGROUND_IMAGE = 'image_left.png'

    # vertex shader program
    vertexShader = """
        #version 330 core
        in vec3 vert;
        in vec2 uV;
        uniform mat4 mvMatrix;
        uniform mat4 pMatrix;
        out vec2 UV;
        void main() {
          gl_Position = pMatrix * mvMatrix * vec4(vert, 1.0);
          UV = uV;
        }
    """

    # fragment shader program
    fragmentShader = """
        #version 330 core
        in vec2 UV;
        uniform sampler2D backgroundTexture;
        out vec3 colour;
        void main() {
          colour = texture(backgroundTexture, UV).rgb;
        }
    """

First up, the vertex shader program, which will place a rectangle in our 3D world for the background image. For now, we’ll just handle a single image from the left-hand side webcam.

Next, the fragment shader program, which will put the background image on the rectangle.

Okay, so let’s initialise our OpenGL application via the _init_opengl method of our Stereo Depth class:

# initialise opengl
def _init_opengl(self):

    # create shader program
    vs = compileShader(self.vertexShader, GL_VERTEX_SHADER)
    fs = compileShader(self.fragmentShader, GL_FRAGMENT_SHADER)
    self.program = compileProgram(vs, fs)

    # obtain uniforms and attributes
    self.aVert = glGetAttribLocation(self.program, "vert")
    self.aUV = glGetAttribLocation(self.program, "uV")
    self.uPMatrix = glGetUniformLocation(self.program, 'pMatrix')
    self.uMVMatrix = glGetUniformLocation(self.program, "mvMatrix")
    self.uBackgroundTexture = glGetUniformLocation(self.program, "backgroundTexture")

    # set background vertices
    backgroundVertices = [
        -2.0,  1.5, 0.0,
        -2.0, -1.5, 0.0,
         2.0,  1.5, 0.0,
         2.0,  1.5, 0.0,
        -2.0, -1.5, 0.0,
         2.0, -1.5, 0.0]

    self.vertexBuffer = glGenBuffers(1)
    glBindBuffer(GL_ARRAY_BUFFER, self.vertexBuffer)
    vertexData = numpy.array(backgroundVertices, numpy.float32)
    glBufferData(GL_ARRAY_BUFFER, 4 * len(vertexData), vertexData, GL_STATIC_DRAW)

    # set background UV
    backgroundUV = [
        0.0, 0.0,
        0.0, 1.0,
        1.0, 0.0,
        1.0, 0.0,
        0.0, 1.0,
        1.0, 1.0]

    self.uvBuffer = glGenBuffers(1)
    glBindBuffer(GL_ARRAY_BUFFER, self.uvBuffer)
    uvData = numpy.array(backgroundUV, numpy.float32)
    glBufferData(GL_ARRAY_BUFFER, 4 * len(uvData), uvData, GL_STATIC_DRAW)

    # set background texture
    backgroundImage = Image.open(self.BACKGROUND_IMAGE)
    backgroundImageData = numpy.array(list(backgroundImage.getdata()), numpy.uint8)
    self.backgroundTexture = glGenTextures(1)
    glBindTexture(GL_TEXTURE_2D, self.backgroundTexture)
    glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR)
    glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR)
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, backgroundImage.size[0], backgroundImage.size[1], 0, GL_RGB, GL_UNSIGNED_BYTE, backgroundImageData)

We compile our shader programs, and obtain their inputs (so we can pass values to the inputs when we want to draw our background image).

The vertices for our rectangle are buffered, as are the UV values for putting the background image on the rectangle.

Finally, we load our background image from disk and create a texture with it.

Now let’s have a gander at the _draw_frame method of our Stereo Depth class, which will draw our background image in our 3D world:

# draw frame
def _draw_frame(self):

    # create projection matrix
    fov = math.radians(45.0)
    f = 1.0 / math.tan(fov / 2.0)
    zNear = 0.1
    zFar = 100.0
    aspect = glutGet(GLUT_WINDOW_WIDTH) / float(glutGet(GLUT_WINDOW_HEIGHT))
    pMatrix = numpy.array([
        f / aspect, 0.0, 0.0, 0.0,
        0.0, f, 0.0, 0.0,
        0.0, 0.0, (zFar + zNear) / (zNear - zFar), -1.0,
        0.0, 0.0, 2.0 * zFar * zNear / (zNear - zFar), 0.0], numpy.float32)

    # create modelview matrix
    mvMatrix = numpy.array([
        1.0, 0.0,  0.0, 0.0,
        0.0, 1.0,  0.0, 0.0,
        0.0, 0.0,  1.0, 0.0,
        0.0, 0.0, -3.6, 1.0], numpy.float32)

    # use shader program
    glUseProgram(self.program)

    # set uniforms
    glUniformMatrix4fv(self.uPMatrix, 1, GL_FALSE, pMatrix)
    glUniformMatrix4fv(self.uMVMatrix, 1, GL_FALSE, mvMatrix)
    glUniform1i(self.uBackgroundTexture, 0)

    # enable attribute arrays
    glEnableVertexAttribArray(self.aVert)
    glEnableVertexAttribArray(self.aUV)

    # set vertex and UV buffers
    glBindBuffer(GL_ARRAY_BUFFER, self.vertexBuffer)
    glVertexAttribPointer(self.aVert, 3, GL_FLOAT, GL_FALSE, 0, None)
    glBindBuffer(GL_ARRAY_BUFFER, self.uvBuffer)
    glVertexAttribPointer(self.aUV, 2, GL_FLOAT, GL_FALSE, 0, None)

    # bind background texture
    glBindTexture(GL_TEXTURE_2D, self.backgroundTexture)

    # draw
    glDrawArrays(GL_TRIANGLES, 0, 6)

    # disable attribute arrays
    glDisableVertexAttribArray(self.aVert)
    glDisableVertexAttribArray(self.aUV)

    # swap buffers
    glutSwapBuffers()

We create a projection matrix and a modelview matrix, to place our background image in our 3D world.
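If you want to sanity-check those numbers, the projection matrix can be factored into a small helper function. This is just a sketch, not part of the Stereo Depth class; it reproduces the column-major perspective matrix built in _draw_frame, using the same field of view, near and far planes, and a 640×480 window:

```python
import math
import numpy as np

def perspective(fov_deg, aspect, z_near, z_far):
    # column-major 4x4 perspective matrix, flattened to 16 floats
    f = 1.0 / math.tan(math.radians(fov_deg) / 2.0)
    return np.array([
        f / aspect, 0.0, 0.0, 0.0,
        0.0, f, 0.0, 0.0,
        0.0, 0.0, (z_far + z_near) / (z_near - z_far), -1.0,
        0.0, 0.0, 2.0 * z_far * z_near / (z_near - z_far), 0.0], np.float32)

m = perspective(45.0, 640.0 / 480.0, 0.1, 100.0)
print(round(float(m[0]), 4))  # f / aspect for a 640x480 window
```

The -3.6 in the modelview matrix simply slides the rectangle back along the z-axis, so that at this field of view it fills the window.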

The matrices are sent to the vertex shader program inputs, along with our vertex and UV buffers.

Once the background image texture is bound, we are ready to use our shaders to draw our background image.

Here’s the last bit of code for our Stereo Depth class – the main method, which is invoked by a class instance:

    # setup and run OpenGL
    def main(self):
        glutInit()
        glutInitDisplayMode(GLUT_DOUBLE | GLUT_RGB | GLUT_DEPTH)
        glutInitWindowSize(640, 480)
        glutInitWindowPosition(100, 100)
        glutCreateWindow('Stereo Depth')
        self._init_opengl()
        glutDisplayFunc(self._draw_frame)
        glutMainLoop()

# run an instance of StereoDepth
stereoDepth = StereoDepth()
stereoDepth.main()

We create a 640×480 window for our application, initialise OpenGL and then draw! There is no need to redraw the window unless we interact with it.


‘So you see, Isobel, OpenGL shaders are a bit like cross-stitch. With some weaving and threading you can achieve magical results!’

Isobel looked at me with an equal measure of confusion and hate. ‘Dear boy, you are talking out of your anus. There is no explicit threading in your code, and in the absence of ambient, diffuse and specular lighting it is simply as dull as a Utah teapot.’

With that, the old lady dunked a rich tea biscuit into her cup.

She stung bad. Just like Rodger Saltwash.


