In the world of sensors, it is very common to make an indirect measurement in order to simplify sensing and reduce its cost. For example, to detect human presence for alarms, ultrasonic or thermal devices are used to detect motion. Humans, however, do not work that way. Although we can use sound to tell if someone is around, vision is our best way of determining what surrounds us. In general, vision is probably our most valued and capable sense, the one that lets us make sense of the world.
Recently, a lot of progress has been made in computer vision. New algorithms combined with machine learning allow us to extract a great deal of information from captured video. However, these solutions have so far been more expensive than other types of sensing, so their use has been limited to surveillance and industrial applications. The cost comes mainly from the powerful computers required to both capture and analyze the video.
A cheap computer vision (CV) sensor could become a breakthrough in sensing technology for many reasons:
- It can be made smarter than indirect sensors. For example, indirect occupancy sensors fail when the occupant is not moving; a true computer vision occupancy sensor can detect occupancy even when there is no movement.
- Many different things could be detected with just one vision sensor. For example, the same device could detect position, occupancy (it could even know how many people are in a room), fire, light level, the state of devices (are the lights on? Is the TV on?), gestures, you name it. If you can see it with your own eyes, a computer vision sensor can detect it as well.
- As soon as it starts being widely used, the price drops on its own, simply as a result of mass production. If occupancy sensors, fire sensors, and position sensors (and even wall switches, replaced with camera-based gesture detection) are all implemented with one CV sensor design, the price will fall far below what indirect sensing costs today.
For the reasons explained above, I wanted to use the FRDM-K82F to create some kind of computer-vision-based sensor. Although many ideas came to mind, only one was practical, useful, and could be finished by the contest deadline.
It is very difficult to consistently park your car in the same spot inside a garage. You end up too far to one side, or without enough space in the front or back. Or maybe you are very careful but your significant other isn't, and if you are like me, that drives you insane. Current solutions range from low-tech to "pseudo high-tech". Here are some of them:
My idea is to create something completely different and more user-friendly: a system that uses computer vision to guide you into parking the car correctly every time. More like this than the existing solutions:
Using motion detection on the video image, the system should be able to indicate whether the vehicle needs to move left, move right, or stop.
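As a rough illustration of how such a decision could be derived (this is a hypothetical sketch, not the project's actual parkingasst.c code), one can difference two intensity frames, find the horizontal centroid of the changed pixels, and compare it against a target column:

```c
#include <stdint.h>
#include <stdlib.h>

/* Hypothetical sketch: difference two grayscale frames, compute the
 * horizontal centroid of moving pixels, and map it to a steering hint.
 * Sizes match the 120x160 capture used in this project; the threshold
 * and tolerance values are illustrative assumptions. */
#define IMG_W 160
#define IMG_H 120
#define DIFF_THRESHOLD 25   /* assumed per-pixel change threshold */

typedef enum { STEER_LEFT, STEER_RIGHT, STEER_CENTERED, STEER_NONE } steer_t;

steer_t steering_hint(const uint8_t *prev, const uint8_t *curr,
                      int target_x, int tolerance)
{
    long sum_x = 0, count = 0;
    for (int y = 0; y < IMG_H; y++) {
        for (int x = 0; x < IMG_W; x++) {
            int d = abs((int)curr[y * IMG_W + x] - (int)prev[y * IMG_W + x]);
            if (d > DIFF_THRESHOLD) { sum_x += x; count++; }
        }
    }
    if (count == 0) return STEER_NONE;   /* nothing moving in the frame */
    int cx = (int)(sum_x / count);       /* centroid column of the motion */
    if (cx < target_x - tolerance) return STEER_LEFT;
    if (cx > target_x + tolerance) return STEER_RIGHT;
    return STEER_CENTERED;
}
```

The centroid approach is crude but cheap: one pass over the frame, no buffers beyond the two images, which fits the goal of leaving processing power free.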
The indications to the user must come from a very simple light-based interface like this:
The left and right arrows indicate whether the car has to be steered left or right. The light in the middle shows three colors: green to advance, yellow to advance with caution, and red to stop.
Cameras like the OV7670 have been interfaced to other microprocessors before. However, in most of those implementations, the microprocessor is actively involved in capturing the data from the sensor, which takes a lot of its processing power. Only once the image is acquired and in RAM does the real work, the image-processing functions, begin.
The Kinetis FlexIO allows retrieving images from the OV7670 sensor with almost no work from the microprocessor. Once an image is completely captured, an interrupt signals that it is ready in memory. While one image is being processed, the next one is being captured at the same time.
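This capture-while-processing pattern is classic double buffering. A simplified, hardware-free sketch of the idea (buffer names and sizes are illustrative, not taken from the SDK example):

```c
#include <stdint.h>
#include <stdbool.h>

/* Sketch of the ping-pong buffering that FlexIO capture enables: the
 * FlexIO/DMA fills one buffer while the CPU processes the other. */
#define FRAME_BYTES (160 * 120 * 2)   /* 120x160, 16 bits per pixel */

static uint8_t frame_buf[2][FRAME_BYTES];
static volatile int capture_idx = 0;       /* buffer being written */
static volatile bool frame_ready = false;

/* Called from the frame-complete interrupt. */
void frame_complete_isr(void)
{
    capture_idx ^= 1;        /* capture continues into the other buffer */
    frame_ready = true;
}

/* Main loop: fetch the buffer that was just completed, or NULL if none. */
const uint8_t *get_ready_frame(void)
{
    if (!frame_ready) return 0;
    frame_ready = false;
    return frame_buf[capture_idx ^ 1];   /* the buffer not being written */
}
```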
This makes the FRDM-K82F the ideal platform for the objective of this project: a very low-cost image-processing platform.
The OV7670 is a 0.3-megapixel (VGA) camera. Resolution is not that critical for this application; in fact, at the time of this writing, the program uses it at its lowest resolution (120x160). The key to the OV7670 is economies of scale: because it has been used in so many small projects, a camera assembled on a breakout board with a lens included costs around $10. It also has other benefits, such as not having an IR filter, which makes it suitable for applications using IR illumination.
Camera Assembly: It was thoughtful of the designers of the FRDM-K82F platform to include a FlexIO connector with the exact same pinout as the OV7670. It is possible to solder the OV7670 directly to the FRDM-K82F board; however, I soldered a 0.1" socket to the board instead, so that I can later connect other cameras or devices to the FlexIO interface.
Indication Lights: This is the only part of the project that requires some labor to replicate. The indication lights were made with WS2812B RGB LEDs (which have a built-in controller), mounted on foam board. I chose the WS2812B because it offers a lot of flexibility (the color can be customized and as many LEDs as needed can be chained) using just one GPIO from the microcontroller. The schematic of the Indication Lights is as follows:
The LEDs were mounted on double-sided perforated prototyping board as three separate boards, connected with a flat cable. The first board is the only one that is different, because it contains the pull-up resistor.
It is easier to use WS2812B breakout boards if you have them available.
On foam board, cut the shapes of the lights. Be careful not to cut all the way through; leave the cardboard backing on one side, because the remaining cardboard makes a nice diffuser for the LEDs. This is done twice, and a Sharpie is used to darken the inside borders so light does not spread beyond the cut shape. The LED boards are mounted on one of the foam boards, and everything is put together into one assembly with double-sided tape.
The lights assembly data line is connected to Port C pin 12 (PTC12) on the FRDM-K82F board:
So with the hardware ready, it is time to work on the software.
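For reference, the WS2812B expects 24 bits per LED in green-red-blue order, MSB first, encoded as short/long high pulses on that single data line. The sketch below shows the color packing (which is fixed by the part) plus the general shape of a bit-banged send; the gpio helpers and exact delays are hypothetical placeholders, since real timing depends on the core clock:

```c
#include <stdint.h>

/* Pack an RGB color into the 24-bit G-R-B word the WS2812B expects. */
uint32_t ws2812_pack_grb(uint8_t r, uint8_t g, uint8_t b)
{
    return ((uint32_t)g << 16) | ((uint32_t)r << 8) | b;
}

/* Bit-banged transmit of one LED's color on the data pin (PTC12 here).
 * GPIO calls are shown as comments because they are platform-specific. */
void ws2812_send(uint32_t grb)
{
    for (int i = 23; i >= 0; i--) {
        if (grb & (1u << i)) {
            /* gpio_set(PTC12);    hold high ~0.8 us  (a "1" bit) */
            /* gpio_clear(PTC12);  hold low  ~0.45 us            */
        } else {
            /* gpio_set(PTC12);    hold high ~0.4 us  (a "0" bit) */
            /* gpio_clear(PTC12);  hold low  ~0.85 us            */
        }
    }
}
```

Chained LEDs simply receive consecutive 24-bit words; a >50 µs low on the line latches the colors.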
First of all, install the Kinetis Design Studio. In my case, I installed it on my MacBook Pro, and although the documentation doesn't guarantee it will work there, it worked flawlessly for me.
To flash the FRDM-K82F board, a J-Link JTAG adapter is commonly used. However, I used the OpenSDA v2.1 interface, better known as the mbed interface. Upon connecting the OpenSDA port to the Mac, the board shows up as a USB drive, onto which the .bin file generated by KDS is copied.
I only had one issue doing this. At some point, for some unexplained reason, the companion microprocessor (the one that implements the OpenSDA interface) got erased. Instead of coming up as a USB drive called mbed, it showed up as BOOTLOADER. I panicked for a while, but an internet search led me to the NXP page where the OpenSDA firmware can be downloaded. I will include it here in case it happens to someone else:
I just copied the firmware to the USB drive, and in a few minutes I was up and running again.
I based my software on the dev_video_flexio_ov7670_lite example that is copied into the examples folder when Kinetis Design Studio is installed. For a while, I planned to remove the USB part of the code completely, but it was good that I did not, because it helped me debug the application.
The video format used by the example is RGB565; that is, 16 bits are used to store an RGB pixel. Computer vision applications usually work on black-and-white (intensity) video instead of RGB. However, converting from RGB to intensity takes too much processor time, and I wanted to leave as much processing power as possible for the image processing, so I changed the demo program so that the image is captured from the OV7670 sensor in YUV422. The YUV422 format also uses 16 bits per pixel, with one byte holding intensity and the other byte holding color information.
This was easily changed in the flexio_ov7670.h file by replacing this line:

    .outputFormat = (ov7670_output_format_config_t *)&OV7670_FORMAT_RGB565,

with:

    .outputFormat = (ov7670_output_format_config_t *)&OV7670_FORMAT_YUV422,
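With the sensor delivering YUV422, getting at the intensity plane is just a stride-2 copy. A minimal sketch, assuming the common Y-U-Y-V byte order (the OV7670 can be configured for other orders, so check the sensor registers in a real build):

```c
#include <stdint.h>
#include <stddef.h>

/* Copy the luma (Y) samples out of a packed YUYV buffer. */
void yuv422_extract_y(const uint8_t *yuv, uint8_t *y, size_t pixels)
{
    for (size_t i = 0; i < pixels; i++)
        y[i] = yuv[2 * i];   /* every even byte is an intensity sample */
}
```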
However, as expected, this broke the video sent via USB. I tried for some time to change the USB side of things to handle YUV422, but I was not successful. So, in order to keep the USB video available, I converted the intensity back to an RGB image, which is a lot simpler than converting RGB to intensity.
I also overlaid some information on the image before sending it through USB to help debug the application. This is how the video looked on the PC side with the debugging information:
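The intensity-to-RGB direction really is cheap: a gray RGB565 pixel just repeats the intensity in all three channels at their respective bit widths. A sketch of that expansion (shifts only; the reverse direction would need a weighted sum per pixel, which is what the author wanted to avoid):

```c
#include <stdint.h>

/* Expand an 8-bit intensity value into a gray RGB565 pixel. */
uint16_t gray_to_rgb565(uint8_t y)
{
    uint16_t r = y >> 3;   /* 5 bits of red   */
    uint16_t g = y >> 2;   /* 6 bits of green */
    uint16_t b = y >> 3;   /* 5 bits of blue  */
    return (uint16_t)((r << 11) | (g << 5) | b);
}
```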
So this is what the video looks like:
Sorry for the low-quality video: the resolution is reduced (remember, we are working at 120x160) and the image is upside down, because I needed more vertical space to cover the garage.
Something to keep in mind is that the video and the signals are mirrored with respect to each other. If the microprocessor sees the car drifting too far to the right, it tells the driver to steer to the driver's right, which is the left from the microprocessor's point of view.
This is a video of the whole thing working at my desk. After this was ready, it didn't take much more to install it in its final location: the garage. Note how I used Lego to build a base for the whole development board. This was very useful because it let me move the board back and forth between the garage and my desk whenever I needed to make changes.
Now this last video shows the whole thing from the desk to the final installation in the garage.
The software required for the parking assistant is in the parkingasst.c file, and it basically only needs to be called with a buffer containing the YUV422 image, so it is very easy to migrate to other platforms. To replicate the whole project, however, it is only necessary to load the .bin file included in the project onto the FRDM-K82F board and add the LEDs as explained above.
This is hardly a complete project. However, I expect to finish it before the final deadline of the Flex Your Mind with Kinetis FlexIO contest. These are the next steps to complete the project:
- Upload images to a cell phone using Bluetooth Low Energy, so the user can configure the exact limits of the parking area.
- Change the parking area from a square to a triangle, to account for the perspective of the camera.
- More thorough documentation both in the source code and in this description.
Finally, I want to express my most sincere thanks to NXP and Hackster.io for providing the hardware and creating this contest. I love to enter these contests because of how much you learn in the process. Thanks again.
Camera PINOUT: This is a photo of the pins on the camera module I used: