Now that Arkwood is a bling bling rap star (see previous post for the hot hip hop vid), he wants his own home studio. ‘No problem,’ I told him, ‘I will code you up a cutting-edge mixing desk.’ ‘Okey-dokey,’ he replied, gangsta style. ‘Just don’t do it on the cheap.’

But this is no ordinary mixing desk, as we will be controlling it with nothing but the power of our voice. I will create it with the Python programming language, and run it on my little Raspberry Pi. Having already laid down some beats, guitar and lyric using Magix Music Maker, may I unveil to you the multitrack masterpiece:

from constants import *
from speech import Speech
import pygame

speech = Speech()

# set up instruments
pygame.mixer.init()
drums = pygame.mixer.Sound("Drums.WAV")
bass = pygame.mixer.Sound("Bass.WAV")
guitar = pygame.mixer.Sound("Guitar.WAV")
vocal = pygame.mixer.Sound("Vocal.WAV")

drums.set_volume(0.5)
bass.set_volume(0.5)
guitar.set_volume(0.5)
vocal.set_volume(0.5)

# control instrument
def control_instrument(instrument, commands):

    # no action word supplied: just play the track
    if len(commands) < 2:
        instrument.play()
        return

    if commands[1] == UP:
        instrument.set_volume(instrument.get_volume() + 0.1)
    elif commands[1] == DOWN:
        instrument.set_volume(instrument.get_volume() - 0.1)
    elif commands[1] == FADEOUT:
        instrument.fadeout(5000)  # fade out over 5 seconds
    elif commands[1] == STOP:
        instrument.stop()

# play song
while True:

    # get next voice commands
    commands = speech.speech_to_text('/home/pi/PiAUISuite/VoiceCommand/speech-recog.sh').lower().split(' ')

    # control instrument
    if commands[0] == DRUMS:
        control_instrument(drums, commands)
    elif commands[0] == BASS:
        control_instrument(bass, commands)
    elif commands[0] == GUITAR:
        control_instrument(guitar, commands)
    elif commands[0] == VOCAL:
        control_instrument(vocal, commands)

Dead simple, really. We set up our instruments using the Pygame library, targeting a .WAV file for each of drums, bass, guitar and vocal, and set their volume to a neighbour-respecting 0.5.

After that, we simply loop the Python program so that it can pick up voice commands through the microphone attached to our Raspberry Pi. Once we determine which instrument is being addressed, the control_instrument function is executed to either play, fade out or stop the instrument, or to nudge its volume up or down.
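As an aside, that if/elif ladder could equally be written as a dictionary lookup. Here is a minimal sketch of the parsing and dispatch step, with plain strings standing in for the pygame Sound objects (this is an alternative illustration, not the code the desk actually runs):

```python
# Sketch: dispatching a spoken phrase to an instrument via a dictionary.
# The filenames stand in for the pygame.mixer.Sound objects.
instruments = {
    "drums": "Drums.WAV",
    "bass": "Bass.WAV",
    "guitar": "Guitar.WAV",
    "vocal": "Vocal.WAV",
}

def parse_command(text):
    """Split a transcribed phrase into (instrument, action)."""
    words = text.lower().split(' ')
    instrument = instruments.get(words[0])
    action = words[1] if len(words) > 1 else "play"
    return instrument, action

print(parse_command("Drums up"))  # ('Drums.WAV', 'up')
print(parse_command("guitar"))    # ('Guitar.WAV', 'play')
```

The dictionary keeps the dispatch table in one place, which is handy if Arkwood ever demands a fifth track.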

Here’s the constants file, which holds the valid commands:

DRUMS = "drums"
BASS = "bass"
GUITAR = "guitar"
VOCAL = "vocal"
STOP = "stop"
UP = "up"
DOWN = "down"
FADEOUT = "fade"
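One wrinkle with the UP and DOWN commands: nothing in the program stops repeated ‘up’ requests from pushing get_volume() + 0.1 past 1.0. Whether that matters depends on pygame clamping the value internally, but an explicit clamp makes the intent obvious. A sketch (the helper below is my own, not part of the original desk):

```python
def clamp_volume(value):
    """Keep a volume value within pygame's expected 0.0 to 1.0 range."""
    return max(0.0, min(1.0, value))

print(clamp_volume(1.1))   # 1.0
print(clamp_volume(-0.1))  # 0.0
print(clamp_volume(0.6))   # 0.6
```

You would then call instrument.set_volume(clamp_volume(instrument.get_volume() + 0.1)) in the UP and DOWN branches.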

And here’s the Speech class, whose speech_to_text method uses Google’s Speech To Text service (via PiAUISuite) to convert the voice commands from our microphone into text that can be used by the program:

from subprocess import Popen, PIPE

class Speech(object):

    # converts speech to text
    def speech_to_text(self, filepath):
        try:
            # utilise PiAUISuite to turn speech into text
            # (universal_newlines gives us str rather than bytes)
            text = Popen(['sudo', filepath], stdout=PIPE,
                         universal_newlines=True).communicate()[0]

            # tidy up text
            text = text.replace('"', '').strip()

            # debug
            print(text)

            return text
        except OSError:
            print("Error translating speech")
            return ''
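The tidy-up step is there because the transcription comes back wrapped in double quotes with trailing whitespace, which is why the code strips both before handing the text on. A quick check of what the cleanup does (the raw string below is an assumed sample of the script’s output):

```python
raw = '"Drums Up"\n'  # assumed sample of the speech-recog.sh output

# tidy up text, as the Speech class does
text = raw.replace('"', '').strip()

print(text)                     # Drums Up
print(text.lower().split(' '))  # ['drums', 'up']
```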

Hurray! Works a treat. Just a little bit of a problem pronouncing “drums up”:

[Video: pi_mixingdesk]
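If the recogniser keeps mangling a phrase like ‘drums up’, one option is to snap each transcribed word to the nearest known command using difflib from the standard library. A sketch of the idea (not part of the original desk):

```python
import difflib

# the valid command words from the constants file
KNOWN_WORDS = ["drums", "bass", "guitar", "vocal",
               "stop", "up", "down", "fade"]

def snap_to_command(word):
    """Return the closest known command word, or None if nothing is close."""
    matches = difflib.get_close_matches(word, KNOWN_WORDS, n=1, cutoff=0.6)
    return matches[0] if matches else None

print(snap_to_command("drumz"))    # drums
print(snap_to_command("guitarr"))  # guitar
print(snap_to_command("banana"))   # None
```

Running each word through snap_to_command before the dispatch would make the desk a little more forgiving of my accent.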

Arkwood came round my crib later that evening, holding his crotch. ‘Yo bro,’ he exclaimed, chucking me a high five which I missed. I showed him his new mixing desk.

I am not sure whether he was pleased or not. Said something about putting a cap in my ass?

Ciao!