
Peters was aghast. ‘All my senses are filling up,’ he said.

It’s true. He put on the Oculus Rift virtual reality headset and was stimulated.

‘It is the 2.0 of our species,’ I remarked. ‘Mixed reality has brought new synthetic possibilities to our natural world.’

‘Never mind that,’ my obese Dutch lodger replied, ‘I want to be a king! When I speak, all my subjects will obey me!’

So I updated Peters’ virtual world to allow him to speak to his computer. And to let the computer speak back to him.

‘Now you can issue your decrees,’ I informed Peters. He was excited. He ran off to his bedroom and slipped on his dressing gown and Christmas party hat, transforming himself into a makeshift king.

But how did I add speech-to-text and text-to-speech to the virtual world? Here’s how…

Speech Recognition

Microsoft Speech API (SAPI) provides speech recognition to our C++ applications.

I am using Microsoft Visual Studio Community 2015 on Windows 10.

Cyril Leroux on Stack Overflow shows us how to issue voice commands through our computer’s microphone, using SAPI.

I updated Cyril’s code to use my default language:

WORD langId = GetUserDefaultUILanguage();
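
For reference, the listening side boils down to something like the sketch below. This is my own minimal reconstruction rather than Cyril’s exact code: it builds a one-rule command grammar for the phrase used later in this post (“move the cone”), hands the default language id to ResetGrammar, and blocks until SAPI reports a recognition. It assumes COM has already been initialised on the calling thread.

#include <windows.h>
#include <atlbase.h>
#include <sapi.h>
#include <sphelper.h>

// Minimal listening sketch (my reconstruction, not Cyril's exact code).
// Builds a one-rule command grammar for "move the cone" in the user's
// default UI language, then blocks until SAPI reports a recognition.
// Assumes COM has already been initialised on this thread.
bool ListenForCommand()
{
    CComPtr<ISpRecognizer>  recognizer;
    CComPtr<ISpRecoContext> context;
    CComPtr<ISpRecoGrammar> grammar;

    if (FAILED(recognizer.CoCreateInstance(CLSID_SpSharedRecognizer))) return false;
    if (FAILED(recognizer->CreateRecoContext(&context))) return false;

    // only wake up when a full recognition arrives
    context->SetNotifyWin32Event();
    context->SetInterest(SPFEI(SPEI_RECOGNITION), SPFEI(SPEI_RECOGNITION));

    // build the command grammar in the user's default language
    WORD langId = GetUserDefaultUILanguage();
    context->CreateGrammar(1, &grammar);
    grammar->ResetGrammar(langId);

    SPSTATEHANDLE state;
    grammar->GetRule(L"Command", 0, SPRAF_TopLevel | SPRAF_Active, TRUE, &state);
    grammar->AddWordTransition(state, NULL, L"move the cone", L" ", SPWT_LEXICAL, 1.0f, NULL);
    grammar->Commit(0);
    grammar->SetRuleState(NULL, NULL, SPRS_ACTIVE);

    // block until SAPI signals that something was recognised
    if (context->WaitForNotifyEvent(INFINITE) != S_OK) return false;

    CSpEvent event;
    event.GetFrom(context);
    return event.eEventId == SPEI_RECOGNITION;
}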

Running the code, I was prompted to configure speech recognition on my PC:

[Screenshot: Windows speech recognition configuration prompt]

Once done, the computer can listen for our commands:

[Screenshot: Windows speech recognition tool listening for commands]

Text To Speech

Great. I now have SAPI providing speech recognition to my C++ application, i.e. letting me talk to my computer.

But can SAPI provide text-to-speech to my C++ application, i.e. letting my computer talk to me?

Answer: yes. How: check out the Microsoft Text-to-Speech Tutorial, whose Step 4: Speak! provides the speech synthesis code to let the computer speak to us through our headphones or speakers.
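
The core of that step looks roughly like the sketch below: create an ISpVoice, hand it a string, and the words come out of the default audio device. The SayIt wrapper name is mine, not the tutorial’s.

#include <windows.h>
#include <sapi.h>

// Trimmed-down take on the tutorial's Step 4: speak a line of text through
// the default audio output. The SayIt name is mine, not the tutorial's.
bool SayIt(const wchar_t* text)
{
    if (FAILED(::CoInitialize(NULL))) return false;

    ISpVoice* voice = NULL;
    HRESULT hr = ::CoCreateInstance(CLSID_SpVoice, NULL, CLSCTX_ALL,
                                    IID_ISpVoice, (void**)&voice);
    if (SUCCEEDED(hr))
    {
        hr = voice->Speak(text, SPF_DEFAULT, NULL);
        voice->Release();
    }

    ::CoUninitialize();
    return SUCCEEDED(hr);
}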

SAPI in Virtual Reality

Peters can now be king. He can scream his royal commands at his subjects (i.e. his computer) and his subjects (i.e. his computer, again) can inform His Majesty that those commands have been carried out.

I amended a virtual world from a previous post. Now, when Peters is wearing the Oculus Rift virtual reality headset and says “move the cone” into the Rift microphone, the cone is moved up into the air out of the way. The computer courteously replies “the cone has been moved, sir” through the Rift headphones.

It’s simply a case of chaining together the SAPI speech recognition and text-to-speech code. As per the previous post, I run the SAPI code in a thread so as not to block my VR app from rendering; a rough sketch of that wrapper follows the snippet below.

If Peters’ command has been recognised, we put the cone up in the air. Otherwise, we don’t:

// set cone height with speech recognition
if (sapiClient->IsSpeechRecognised) {
	Meshes[i]->Pos[1] = 2;
}
else {
	Meshes[i]->Pos[1] = -3;
}
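
The real sapiClient wrapper isn’t shown in this post, but a minimal sketch of one might look like this, wiring together the two sketches from earlier. The class name and internals are my assumptions; only the IsSpeechRecognised flag appears in the snippet above.

#include <atomic>
#include <thread>
#include <windows.h>

bool ListenForCommand();          // the speech recognition sketch above
bool SayIt(const wchar_t* text);  // the text-to-speech sketch above

// Sketch of a sapiClient-style wrapper (class name and internals are my
// assumptions). A worker thread listens for the command, flips the flag
// that the VR render loop polls each frame, then confirms the move over
// the headphones.
class SapiClient
{
public:
    std::atomic<bool> IsSpeechRecognised{ false };

    void Start()
    {
        worker = std::thread([this]
        {
            ::CoInitialize(NULL);                   // COM per thread for the SAPI calls
            if (ListenForCommand())                 // blocks until "move the cone" is heard
            {
                IsSpeechRecognised = true;          // render loop lifts the cone
                SayIt(L"The cone has been moved, sir");
            }
            ::CoUninitialize();
        });
    }

    ~SapiClient()
    {
        if (worker.joinable()) worker.detach();     // crude, but keeps the sketch short
    }

private:
    std::thread worker;
};

In the render loop, sapiClient would be a pointer to one of these, started once before rendering begins.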

Here’s the video of Peters instructing his computer to move the cone out of the way:

The computer tells Peters that the cone has been moved. And we can see that it has indeed!

Peters pouted with arrogance and then said to the room empty but for me, ‘I am the king and I rule all! I decree that every lady subject should get naked before me, and service my every whim.’

He was mad with power. I softly closed the living room door and left him wearing the virtual reality headset, wrapped within his wild sexual fantasies. It’s not something anyone needs to see.

Ciao!

P.S. My Windows 10 PC could run the speech recognition and text-to-speech code without needing to download SAPI.