Friday, 9 March 2018

Adding vision to your AIY Project in 4 easy steps (and 1 tricky one)

Back in May 2017 The MagPi came with a Google Voice HAT, and instructions, that would turn a Raspberry Pi into a Google Assistant. It was initially triggered by a button press, but soon updated to be voice activated, allowing you to ask it questions and give it commands in a similar manner to a Google Home or Amazon Echo device.

After putting together the kit and playing with it for a while I decided to look at adding a camera to the device and connecting it up to Google's Vision APIs. This was something that was covered on the Raspberry Pi Blog back in 2016, but it looks like the APIs have changed slightly since then. So after a bit of hunting around and testing, here are the steps I took to set up and extend the base Google AIY package to include vision support.

Install Google AIY image

Follow the instructions on the official Google Voice Kit page (assuming you haven't already gone through these steps). For reference, the software image I used was aiyprojects-2018-01-03.img.xz.

You'll need to complete the 'Custom Voice User Interface' section of the instructions to enable the Cloud Speech APIs as well (this should end with a cloud_speech.json file in the home directory of the Raspberry Pi).

Configure Camera

Follow the official instructions on how to set up and connect the Raspberry Pi Camera.
If you're reading through these instructions before following them (which of course you should be!) it's almost certainly worth connecting the camera cable to the Raspberry Pi first, so it can be fed through the slot on the Google Voice Kit HAT.

The cardboard case included in the kit has a convenient hole for the camera lens to poke through, with the flaps holding the camera in place without needing to tape or screw it into place. Almost as if it was meant to have a camera installed in it!

The camera is held in place between the two pieces of cardboard. The lens of the camera pokes through the hole.

Every time we run 'raspistill' to take a photo the camera performs various calibration tasks, setting up the hardware, working out the light level and so on, which usually takes 5 or more seconds to complete.

To avoid this delay every time we ask the Raspberry Pi to identify an object, we want raspistill to be constantly running in the background, ready to take a photo at any time. Luckily raspistill already supports this with the '-s' option, which makes it wait for a SIGUSR1 signal before capturing.

raspistill -rot 180 -o /tmp/aiyimage.jpg -s -t 0 -w 640 -h 480

We want this to execute every time the RPi starts up, so edit crontab using 'crontab -e' and add the following line to the end of the file.

@reboot raspistill -rot 180 -o /tmp/aiyimage.jpg -s -t 0 -w 640 -h 480 &

To test the above command is working, reboot the Raspberry Pi and then run

kill -s SIGUSR1 `pidof raspistill`

If all has gone well then the Raspberry Pi will take a picture and store it at '/tmp/aiyimage.jpg'.
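If you want to drive this from code rather than the command line, the same trick works from Python. Below is a minimal sketch (not the actual whatisthat.py, just an illustration; the helper name and timeout are my own) that sends SIGUSR1 to the background raspistill process and waits for the photo to appear at the path given by the '-o' option above.

#!/usr/bin/env python3
# Illustrative sketch: trigger the background raspistill with SIGUSR1
# and wait for the photo to be written. Path and timeout are assumptions.

import os
import signal
import subprocess
import time

IMAGE_PATH = '/tmp/aiyimage.jpg'  # matches the -o option used above

def capture_image(timeout=5.0):
    # Remove any previous capture so we can tell when the new one arrives.
    if os.path.exists(IMAGE_PATH):
        os.remove(IMAGE_PATH)

    # Find the running raspistill process and send it SIGUSR1, the same
    # thing the 'kill -s SIGUSR1 `pidof raspistill`' test above does.
    pid = int(subprocess.check_output(['pidof', 'raspistill']).split()[0])
    os.kill(pid, signal.SIGUSR1)

    # Wait (crudely) for raspistill to finish writing the file.
    deadline = time.time() + timeout
    while time.time() < deadline:
        if os.path.exists(IMAGE_PATH) and os.path.getsize(IMAGE_PATH) > 0:
            return IMAGE_PATH
        time.sleep(0.1)
    raise RuntimeError('Timed out waiting for ' + IMAGE_PATH)

if __name__ == '__main__':
    print('Captured', capture_image())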

Enable Vision API in Google account

This is potentially the tricky step, as it requires a credit card to enable billing on your Google account, as well as the service being available in your country.
Just follow the instructions at https://cloud.google.com/vision/docs/before-you-begin, making sure you enable the API on the correct project (aiyproject if you followed the Google Voice setup instructions above).

Install Vision API Python libraries

To utilise the Google Vision APIs we need to install the Python libraries (as detailed at https://cloud.google.com/vision/docs/reference/libraries#client-libraries-install-python). However, the Google application runs within a Python virtual environment to keep its selection of Python libraries separate from any others installed on the Raspberry Pi. This means we have to take the extra step of entering the virtual environment before installation. This can easily be achieved by launching the 'Start dev terminal' shortcut from the Desktop, or by running '~/bin/AIY-projects-shell.sh' from a normal terminal (e.g. if connecting via SSH).

Once inside the virtual environment, run the following to install the libraries:

pip3 install google-cloud-vision
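Before moving on it's worth a quick sanity check that the library and credentials are happy. The sketch below (my own test script, not part of the AIY examples) sends a single image to the label detection endpoint and prints what comes back; it assumes your service account credentials are already visible to the client, e.g. via the GOOGLE_APPLICATION_CREDENTIALS environment variable, and should be run from inside the virtual environment.

#!/usr/bin/env python3
# Quick check of the google-cloud-vision install: label one image.

import sys

from google.cloud import vision

def label_image(path):
    client = vision.ImageAnnotatorClient()

    with open(path, 'rb') as image_file:
        content = image_file.read()

    # Newer releases expose vision.Image directly; older ones
    # (such as those current in 2018) use vision.types.Image instead.
    image = vision.Image(content=content)

    response = client.label_detection(image=image)
    for label in response.label_annotations:
        print('{:.0%}  {}'.format(label.score, label.description))

if __name__ == '__main__':
    label_image(sys.argv[1] if len(sys.argv) > 1 else '/tmp/aiyimage.jpg')

If that prints a list of labels and scores then both the billing setup and the library install are working.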

Using the Vision APIs

I've written two scripts that exercise the Vision APIs: 'whatisthat.py', which calls the Vision APIs themselves, and 'cloudspeech_whatisthat.py', which talks to the Voice APIs. The scripts should be placed in the /home/pi/AIY-projects-python/src/examples/voice folder, and can be easily fetched using the 'wget' command.

cd /home/pi/AIY-projects-python/src/examples/voice
wget https://raw.githubusercontent.com/LeoWhite/RaspberryPi/master/AIY/cloudspeech_whatisthat.py
wget https://raw.githubusercontent.com/LeoWhite/RaspberryPi/master/AIY/whatisthat.py

Launch the cloudspeech_whatisthat.py script from inside the 'dev environment' (in the same way as you'd launch the regular cloudspeech_demo.py demo):

src/examples/voice/cloudspeech_whatisthat.py

Then all you have to do is point the camera at something, press the button, and say one of the following voice commands (a rough sketch of the flow follows the list).

Command - Action
'What is that?' - Requests a list of labels from Google and reads out any over 80% confidence.
'What logo is that?' - Requests Google to identify the logo in the picture.
'What does that say?' - Reads out any text detected in the picture.
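To give an idea of how the pieces fit together, here's a rough sketch of the 'What is that?' flow. It isn't the actual whatisthat.py, just the same idea in miniature: trigger a capture from the background raspistill, ask the Vision API for labels, and read out anything scored at 80% or above using aiy.audio.say from the AIY image.

#!/usr/bin/env python3
# Rough sketch of the 'What is that?' flow, assuming the AIY image
# (for aiy.audio) and the raspistill/credentials setup described above.

import os
import signal
import subprocess
import time

import aiy.audio
from google.cloud import vision

IMAGE_PATH = '/tmp/aiyimage.jpg'

def capture():
    # Ask the already-running raspistill to take a photo (see the
    # SIGUSR1 test earlier) and give it a moment to write the file.
    pid = int(subprocess.check_output(['pidof', 'raspistill']).split()[0])
    os.kill(pid, signal.SIGUSR1)
    time.sleep(1.0)

def what_is_that():
    capture()
    client = vision.ImageAnnotatorClient()
    with open(IMAGE_PATH, 'rb') as f:
        # vision.types.Image on older library releases
        image = vision.Image(content=f.read())
    labels = client.label_detection(image=image).label_annotations

    names = [label.description for label in labels if label.score >= 0.8]
    if names:
        aiy.audio.say('I can see ' + ', '.join(names))
    else:
        aiy.audio.say('I am not sure what that is')

if __name__ == '__main__':
    what_is_that()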

Example of use.

Below is a short demonstration video of the scripts in action. I've demonstrated this at a couple of Raspberry Jams and got interest from both kids and adults. The kids especially were trying different items for it to identify, one even taking his shoe off to see what it would say ('blue trainer' being the result).
I do have an idea or two of what I can do with this script; nothing especially useful, but something that is a little more interactive. Hopefully, with the help of this guide, other people will come up with fun and interesting projects!

Leo

10 comments:

  1. Thank you so much. This helped me a lot, and I was able to get your code working. However, I have one problem with the cloudspeech_whatisthat.py program. When the program is speaking the "output," the voice will often cut out after a second or so. I don't have this problem using the aiy.audio.say method outside of your loop, so I am really confused and at a loss. Any ideas?

  2. Hi,

    Glad you found this useful.

    I've not run across the audio cutting out before. Are you trying from a clean install of the Google AIY image or from an existing one? Also, which RPi variant are you running on? Lately I've been testing on a RPi 3, but have had it running on a RPi 2 before.

    If you have a working version of the cloudspeech.py script then you can try copying the changes across and seeing if that works any better.

  3. Hi... I have a Voice Kit set up and a Google Vision Kit v1.1 set up separately. How do I use both of them?

  4. This looks great. I am ordering a camera for my voice kit.

  5. Just wondering how this works without the vision bonnet...

  6. This comment has been removed by the author.

  7. It's working! Thanks for this!

  8. Hi, I want to know if you created a new project (billing account, etc.) for the vision part or if you're using the same one as for voice. I'm asking for the RPi 3B+ model... please reply

  9. Hi,

    Sorry for the late response, Blogger kept eating all my replies!

    It's been a while since I did it, but I added the billing details to the same account I was using for the regular voice scripts.
    Checking that account, I only have the one project created, so I don't think I created a new one. I did re-download the credentials; I'm not sure if that changed after the Vision APIs were enabled.
