March 01, 2017

The All-Seeing Pi

This post is about a vision enhancement platform called The All-Seeing Pi that I have been working on with my friend Dan, who is blind. People who are blind rarely have no vision at all, though, and in Dan's case he still has a little bit of sight in one eye. He's also the first to tell you how much technology can do to enable mobility.

From these discussions, we came up with the idea for a video feed connected to a display, with a wearable screen in the ideal spot for maximum vision. This allows someone to focus on just the screen, and let the camera capture the detail and depth of the environment.

In the end, the prototype served as a successful proof of concept. Check out the video above for a field test and some more discussion! Dan also likes to push the limits of what can be done with his disability, which he chronicles on his blog, Three Points of Contact.

In the rest of this post, I'll talk about how to build the device. This may be useful if you or a friend have a similar condition, but it is also a great starting platform for a Raspberry Pi based augmented reality rig. The general setup is a Raspberry Pi with a camera module driving a small HDMI (not SPI!) display. The video feed is provided via OpenCV and RaspiCam, with code and install details below.

Build Guide

As always, feel free to mix and match parts. When it comes to batteries, cables, and headsets, the world is your oyster. However, I would strongly recommend getting the same screen as me, and a "picam" camera module, as they will work best with the 3D-printed enclosures.


  1. Raspberry Pi 3 + 16GB Micro SD Card
  2. Waveshare 5" HDMI Display
  3. Raspberry Pi Camera Module (any version)
  4. 2000 mAh USB Charge Block + compatible micro USB cable
  5. GoPro Head Mount (extenders may be necessary too)
  6. 3D printed case and camera mount
If you do not have a 3D printer (like most people), the files can be ordered straight from the Thingiverse link, or downloaded and ordered through Shapeways! I personally endorse Shapeways, as it's nice to support people in your neighbourhood.

You also need one long, skinny screw to attach the Pi camera. I don't know the size because I just found it in a drawer; if you're building this, you probably have a generous drawer of screws too. Tape will also work.


Once you've got the hardware, putting it all together is pretty easy!  

Start by mounting the monitor in the case with at least two screws. You may need to use a taper threaded screw to widen the holes and get the screws that come with the case to bite.

Next, attach the picamera enclosure, and slide the picamera in.

Now attach the Pi to the screen, making sure to line up the GPIO pins so the SD card side of the Pi is flush. This will align the HDMI ports along the bottom as well.


Now you just attach the GoPro head mount, and plug in your power.


Congrats, your hardware is all set up! Now it's time to get the Raspberry Pi and software up and running.

Raspberry Pi Setup

The first step in setting up the Raspberry Pi is the usual routine: download Raspbian, transfer the image to your SD card, and update the operating system. Check out this section if you need help with this bit.

The next step is getting your monitor driver functioning. Hopefully it just works when you plug it in; otherwise you may need another monitor temporarily. For more detail, check out the Waveshare Driver Installation instructions.
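If the display does need manual configuration, the usual route on the Pi is a few lines in /boot/config.txt. The values below are the ones commonly given for a 5" 800x480 HDMI panel like this one, but treat them as a starting point and confirm them against the Waveshare instructions for your exact panel:

```
# /boot/config.txt -- example settings for a 5" 800x480 HDMI panel
hdmi_group=2
hdmi_mode=87
hdmi_cvt 800 480 60 6 0 0 0
hdmi_drive=1
```

Reboot after editing for the changes to take effect.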

Once the RPi is running and your display is appearing as expected, we need to download OpenCV and RaspiCam.

To install OpenCV, just run the following command:

$> sudo apt-get install libopencv-dev 

RaspiCam provides an interface between the Raspberry Pi camera module and OpenCV. Installing RaspiCam can be done by following the steps on their GitHub page, or by running the following commands:

$> git clone .
$> cd raspicam
$> mkdir build
$> cd build
$> cmake ..
$> make
$> sudo make install

Writing the Software

The software is written in C++ and uses OpenCV to display the output. Using OpenCV just to display an output is overkill, but it leaves the project in a great position to add functionality such as edge highlighting, horizon tracking, and much more.
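For a sense of what that looks like in practice, here is a minimal sketch of the capture-and-display loop, assuming the RaspiCam_Cv wrapper and OpenCV 2.4-era constants. It's an illustration of the approach rather than the exact project code, and it needs a Pi with the camera module attached to actually run:

```cpp
#include <opencv2/opencv.hpp>
#include <raspicam/raspicam_cv.h>

int main() {
    raspicam::RaspiCam_Cv camera;
    camera.set(CV_CAP_PROP_FORMAT, CV_8UC3);   // ask for BGR frames
    if (!camera.open()) return 1;              // camera module not detected

    cv::namedWindow("feed", CV_WINDOW_NORMAL);
    cv::setWindowProperty("feed", CV_WND_PROP_FULLSCREEN, CV_WINDOW_FULLSCREEN);

    cv::Mat frame;
    for (;;) {
        camera.grab();
        camera.retrieve(frame);                // extra processing (edge
        cv::imshow("feed", frame);             // highlighting, etc.) slots in here
        if (cv::waitKey(1) == 27) break;       // 27 == escape
    }
    camera.release();
    return 0;
}
```

The fullscreen window is what makes the headset usable; any per-frame vision work would go between retrieve and imshow.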

The project can be found on GitHub here*, and can be installed by running

$> git clone 
$> cd piopencv
$> mkdir build
$> cd build
$> cmake ..
$> make

This will generate a "picamera" executable that can be launched from the build directory by running

$> ./picamera

It would be a good idea to test that this happens! You can exit the program by holding escape.

*I have received some feedback that I may be oversharing when it comes to code on non-programming-focused posts, so I've just linked to the GitHub here. Please leave a comment if you'd prefer more detail.

Autolaunch (optional)

The next step is to set up the picamera executable to run automatically on startup. You can skip this step if you don't mind launching it yourself, but that requires a keyboard or remote connection.

If you are running the LXDE desktop environment (the default with Raspbian PIXEL), setting a program to autostart is really simple: you create a file in the designated autostart folder describing the executable you wish to run.

This can be done as follows

$> cd ~/.config/autostart
$> nano piopencv.desktop

Enter the contents of the file to be:

[Desktop Entry]
Type=Application
Name=piopencv
Exec=/home/pi/piopencv/build/picamera

(Adjust the Exec path to match wherever your picamera executable actually lives.)

Now save and close the file (ctrl-x, y). When you restart your system (sudo reboot -n), the program should autostart!

Closing Remarks

Whew, what a project! I went through a few iterations of screens and software platforms to get this going, and did a whole lot of work trying to get non-HDMI screens (which come in much smaller sizes) to show a decent frame rate. I ended up scrapping this and just buying the smallest HDMI screen I could find.

Overall, I was really happy with the end result of this project. I'm also really excited about what is possible now that we've got the "reality" part of an augmented reality headset completed.

Hopefully you were able to follow along with the build process, and I encourage anyone interested to participate via GitHub. For example, if you're a computer vision pro and have some ideas about what would help a blind person navigate, add it to the project! The base criterion is that it must run at 20 fps on a Raspberry Pi. Beyond that, anything goes.

Thanks for reading, and expect an update in the next few months.


  1. Thank you for this. You've given me a worthwhile project to work on instead of random robots. My cousin's son has been blind since birth and this might be something to help him out. Just got to work on downsizing and kid-proofing it.

  2. Very cool DarkWhite, I'd be happy to fire off some 3d prints for you if needed. I'll also brainstorm kidproofing and shrinking.

  3. Thanks Ben. I'm going to do a basic prototype using what I have and just see if it has an advantage for him. He refuses to wear his glasses most of the time so want to make it super comfortable. (He's only 2).

  4. What fps does your display get? I'm trying to build a HUD for inside a Halloween costume but can't get the FPS up on the display.

  5. Sounds very cool Heather. This had about 20 - 30 fps. What kind of display are you using? I experimented with a few and found the DSI just couldn't support anything above 8 or so, so my recommendation would be strongly toward a small HDMI screen.

    If you let me know how you're running the feed as well (python? c++?) I might have some more suggestions.

  6. This is awesome man! I'm currently studying at Glasgow Caledonian University and about to embark on my honours project. I'm going to investigate how technology can aid mobility in the visually impaired; have you got any advice or tips for someone like me who has never worked with a raspberry pi before?

  7. Very cool unknown, accessibility technology is an interesting subject! My only recommendation would be do a lot of google searching for similar projects, there are tons of RPi tutorials and projects floating around which could save you a lot of time.

  8. Hi Ben, This is epic! Well done. I am just getting into Raspberry Pi and am researching a project for my father who is losing his sight as he moves further into his 80s. He has a cooker and can still heat up meals in the oven, but struggles to read the rotary temperature setting knob, and I was wondering if I could build something that would indicate the temperature setting aurally. Really there are only two settings he needs to know, 140 and 180, and I thought I could maybe place two magnetic strips at those positions and the Pi could provide some aural indication when triggered?

  9. Hey Flying_Rower - sorry I didn't see this until now! There is a pretty well-sized field of work on using cameras to read manual gauges and extract their value. Once known, a Pi could easily give audible feedback, including voice output describing what the stove is set to.

    Here's a tutorial from Intel on the topic:

    If you want to go with something like the magnetic strips approach, you could also build a simple circuit where there is conductive material under the dials, and contact points at the target temp. Using Arduino (or pure electronics) you can make closing the 140 or 180 circuit emit different frequencies.

    Hope that makes sense! If you want to chat more I'd suggest posting a comment on the YouTube video here... that gets notifications through to me a lot more effectively.