March 01, 2017

The All-Seeing Pi

This post is about a vision-enhancement platform called The All-Seeing Pi that I have been working on with my friend Dan, who is blind. People who are blind rarely have no vision at all, though, and in Dan's case he still has a little bit of sight in one eye. He's also the first to tell you how much technology can do to enable mobility.

From these discussions, we came up with the idea of a camera feed connected to a wearable screen, positioned in the ideal spot for maximum vision. This lets someone focus on just the screen, and let the camera capture the detail and depth of the environment.

In the end, the prototype served as a successful proof of concept. Check out the video above for a field test and some more discussion! Dan also likes to push the limits of what can be done with his disability, which he chronicles at his blog Three Points of Contact.

In the rest of this post, I'll talk about how to build the device. This may be useful if you or a friend have a similar condition, but it is also a great starting platform for a Raspberry Pi-based augmented reality rig. The general setup is a Raspberry Pi with a camera module driving a small HDMI (not SPI!) display. The video feed is provided via OpenCV and RaspiCam, with code and install details below.

Build Guide

As always, feel free to mix and match parts. When it comes to batteries, cables, and headsets, the world is your oyster. I would strongly recommend getting the same screen as me, however, along with a "picam" camera module, as they will work best with the 3D-printed enclosures.


  1. Raspberry Pi 3 + 16GB micro SD card
  2. Waveshare 5" HDMI Display
  3. Raspberry Pi Camera Module (any version)
  4. 2000 mAh USB charge block + compatible micro USB cable
  5. GoPro Head Mount (extenders may be necessary too)
  6. 3D printed case and camera mount
If you do not have a 3D printer (like most people), the files can be ordered straight from the Thingiverse link, or downloaded and ordered through Shapeways! I personally endorse Shapeways, as it's nice to support people in your neighbourhood.

You also need one long, skinny screw to attach the Pi camera. I don't know the size because I just found it in a drawer. If you're building this, you probably have a generous drawer of screws too. Tape will also work.


Once you've got the hardware, putting it all together is pretty easy!  

Start by mounting the monitor in the case with at least two screws. You may need to run a tapered self-threading screw through first to widen the holes, so the screws that come with the case can bite.

Next, attach the Pi camera enclosure, and slide the Pi camera in.

Now attach the Pi to the screen, making sure to line up the GPIO pins so the SD card side of the Pi is flush. This will align the HDMI ports along the bottom as well.


Now you just attach the GoPro head mount, and plug in your power.


Congrats, your hardware is all set up! Now it's time to get the Raspberry Pi and software up and running.

Raspberry Pi Setup

The first step in setting up the Raspberry Pi is the usual: download Raspbian, transfer the image to your SD card, and update the operating system. Check out this section if you need help with this bit.

The next step is getting your monitor driver working. Hopefully it just works when you plug it in; otherwise you may need another monitor temporarily. For more instructions, check out the Waveshare Driver Installation instructions.

Once the RPi is running, and your display is appearing as expected, we now need to download OpenCV and RaspiCam.

To install OpenCV, just run the following command to automatically install the package:

$> sudo apt-get install libopencv-dev 

RaspiCam provides an interface between the Raspberry Pi camera module and OpenCV. Installing RaspiCam can be done by following the steps on their GitHub page, or by running the following commands:

$> git clone .
$> cd raspicam
$> mkdir build
$> cd build
$> cmake ..
$> make
$> sudo make install

Writing the Software

The software is written in C++, and uses OpenCV to display the output. Using OpenCV just to display an output is overkill, but this leaves the project in a great position to add additional functionality such as edge highlighting, horizon tracking, and much more.

The project can be found on GitHub here*, and can be installed by running

$> git clone 
$> cd piopencv
$> mkdir build
$> cd build
$> cmake ..
$> make

This will generate a "picamera" executable that can be launched from the build directory by running

$> ./picamera

It would be a good idea to test that this works! You can exit the program by holding Escape.

*I have received some feedback that I may be oversharing when it comes to code on non-programming-focused posts, so I've just linked to the GitHub here. Please leave a comment if you'd prefer more detail.

Autolaunch (optional)

The next step is to set up the picamera executable to run automatically on startup. You can skip this step if you don't mind launching it yourself, but that requires a keyboard or remote connection.

If you are running the LXDE desktop environment (the default with Raspbian PIXEL), setting a program to autostart is really simple: you create a file in the designated autostart folder describing the executable you wish to run.

This can be done as follows

$> cd ~/.config/autostart
$> nano piopencv.desktop

Enter the contents of the file to be (adjust the Exec path to match wherever you cloned and built the project):

[Desktop Entry]
Type=Application
Name=piopencv
Exec=/home/pi/piopencv/build/picamera

Now save and close the file (Ctrl-X, then Y). When you restart your system (sudo reboot -n), the program should autostart!

Closing Remarks

Whew, what a project! I went through a few iterations of screens and software platforms to get this going, and did a whole lot of work trying to get non-HDMI screens (which can be a lot smaller) to show a decent frame rate. I ended up scrapping that and just buying the smallest HDMI screen I could find.

Overall, I was really happy with the end result of this project. I'm also really excited about what is possible now that we've got the "reality" part of an augmented reality headset completed.

Hopefully you were able to follow along with the build process, and I encourage anyone interested to contribute via GitHub. For example, if you're a computer vision pro and have some ideas about what would help a blind person navigate, add them to the project! The only base criterion is that it must run at 20 fps on a Raspberry Pi. Beyond that, anything goes.

Thanks for reading, and expect an update in the next few months.


DarkWhite said...

Thank you for this. You've given me a worthwhile project to work on instead of random robots. My cousin's son has been blind since birth and this might be something to help him out. Just got to work on downsizing and kid-proofing it.

Ben Eagan said...

Very cool DarkWhite, I'd be happy to fire off some 3d prints for you if needed. I'll also brainstorm kidproofing and shrinking.

DarkWhite said...

Thanks Ben. I'm going to do a basic prototype using what I have and just see if it has an advantage for him. He refuses to wear his glasses most of the time so want to make it super comfortable. (He's only 2).
