S16: Ahava


VisionCar

This project aims to track a known object from a vehicle and follow the target at a pre-set distance. The RC car is mounted with a camera interfaced to a Raspberry Pi Compute Module. The Compute Module performs the required image processing using OpenCV and provides relevant data to the Car Controller for driving. An Android application allows a user to select an object by adjusting HSV filter thresholds; these values are then used by the imaging application to track the desired object.

Objectives & Introduction

The high-level objectives of the project are:

  • Track a known object using a camera mounted on an RC car and follow it at a pre-set distance.
  • Perform the image processing on a Raspberry Pi Compute Module using OpenCV.
  • Drive the car's servo and DC motors from an SJOne board (the Car Controller) based on the tracking results.
  • Provide an Android application to tune the HSV thresholds for the target object and to act as a Bluetooth kill switch for the car.

Team Members & Responsibilities

  • Aditya Devaguptapu
  • Ajai Krishna Velayutham
  • Akshay Vijaykumar
  • Hemanth Konanur Nagendra
  • Vishwanath Balakuntla Ramesh

Schedule

Week 1 (3/27/2016)
  Task:
  • Selection of a platform for Mono / Stereo Vision processing.
  • Completion of PWM API documentation.
  Actual:
  • Confirmed a suitable platform for Mono / Stereo Vision processing.
  • Understood the PWM signals transmitted from the RCU for driving the car motors.
  Status: Completed

Week 2 (4/03/2016)
  Task:
  • Decide on a suitable algorithm for Object Detection and Depth Perception.
  • Establish direction and speed control on the car.
  • Order components.
  Actual:
  • Decided on disparity mapping for depth perception. The platform does not have enough computation power for stereo image processing, so we switched to single-camera object tracking.
  • Direction and speed control of the car is completed.
  Status: Completed

Week 4 (4/20/2016)
  Task:
  • Implementation of the imaging algorithm and testing in a PC environment.
  • Complete setup of the OpenCV library and verification of video streaming on the development platform.
  • Decision on Bluetooth or kill-switch implementation for car control.
  Actual:
  • PC setup and testing is in progress.
  • OpenCV is set up and running on the platform; streaming data from the camera is complete.
  Status: Completed

Week 5 (4/27/2016)
  Task:
  • Mount controllers and cameras on the car.
  • Complete imaging and verification on the board.
  Status: Completed

Week 6 (5/04/2016)
  Task:
  • Integrate imaging results with the car controls.
  Status: Completed

Week 7 (5/11/2016)
  Task:
  • Test the car integrated with control signals and perform calibrations.
  Status: Completed

Week 8 (5/18/2016)
  Task:
  • Final testing and enhancements.
  Status: Completed

Parts List & Cost

Sl No Item Cost
1 RC Car $188
2 Remote and Charger $48
3 SJOne Board $80
4 Raspberry Pi Compute Module $122
5 Raspberry Pi Camera $70
6 Raspberry Pi Camera Adapter $28
7 LCD Display $40
8 General Purpose PCB $10
9 Accessories
10 Total

Design & Implementation

Hardware Design

The hardware design for VisionCar involves an SJOne board, a Raspberry Pi Compute Module and a Bluetooth transceiver, described in detail in the following sections along with the pins used for interfacing the boards and their power sources.

System Architecture



System Block Diagram

Power Distribution Unit

Power distribution is one of the most important aspects in the development of such an embedded system. VisionCar has 6 individual modules that require supply voltages in various ranges for their operation, as shown in the table below.

Module Voltage
SJOne Board 3.3V
Raspberry Pi Compute Module 5.0V
Servo Motor 3.3V
DC Motor 7.0V
Bluetooth Module 3.6V - 6V
LCD Display 5.0V

As most of the voltage requirements lie in the 3.3V to 5V range, we made use of the SparkFun Breadboard Power Supply (PRT-00114). It is a simple breadboard power supply kit that takes power from a DC input and outputs a selectable 5V or 3.3V regulated voltage. In this project, the input to the PRT-00114 is provided by a 7V DC LiPo rechargeable battery.

<Image of Breadboard PowerSupply>

The schematic of the power supply design is as shown in the diagram below. It has a switch to configure the output voltage to either 3.3V or 5V.

<Breadboard Schematic>

For components requiring a 7V supply, a direct connection was provided from the battery. Additionally, suitable power banks were used to power these modules as and when required.

Car Controller

The car controller, in our case the SJOne board, is connected to the Bluetooth module, the Compute Module, and the servo and DC motors. The pins used are shown below. All connections are routed through the connection matrix.

Pin Connection
P2.0 DC Motor PWM
P2.1 Servo Motor PWM
TXD2 To RXD of Bluetooth Module
RXD2 To TXD of Bluetooth Module
TXD3 To RXD of Compute Module
RXD3 To TXD of Compute Module

<Image of Car Controller Pin Connection Diagram>

Connection Matrix

This is a general-purpose PCB used to mount the external wiring in our system. The prototype PCB houses the

  • Power Supply (PRT-00114)
  • Vcc and Ground signals.
  • PWM control signals.
  • UART interconnections.
  • Bluetooth module’s interconnections.

It is an essential component of our design, as it makes it easier to connect different modules together.

Motor Interface

VisionCar uses a DC motor and a servo motor to move the car around. These motors were interfaced to the Car Controller using the pins described below and were controlled using PWM signals.

Servo Motor Interface

The VisionCar has an inbuilt configurable servo motor driven by PWM. The power required for servo operation is provided by the rechargeable LiPo battery. The servo motor requires three connections: 'VCC', 'GND' and 'PWM'. The duty cycle of the PWM signal turns the servo across its allowed range of angles. The pin connections to the Car Controller are shown in the table below.

Pin Connection
VCC 5V
GND Ground Connection
PWM PWM Signal from P2.1 on SJOne

The steering positions LEFT, SLIGHT LEFT, STRAIGHT, SLIGHT RIGHT and RIGHT are each mapped to a PWM duty cycle within the calibrated range described below, with the smallest duty cycle corresponding to a full left turn and the largest to a full right turn.

The servo motor used in this project has an operating range of 5-10% duty cycle, corresponding to a complete left and a complete right turn respectively. For our application, we use PWM duty cycles from 6.0% to 9.0% at 50Hz.

DC Motor Interface

The RC car comes installed with a DC motor connected to an ESC unit. Controlling the DC motor controls the speed of the car. We use PWM signals to drive the DC motor, which draws power directly from the battery on the car. The DC motor pin connections are as shown in the table below.

Pin Connection
VCC 7V
GND Ground Connection
PWM PWM Signal from P2.0 on SJOne

The DC motor PWM operating range is 5-10% duty cycle at 50Hz. We use the following settings to control the speed of the car.

Speed PWM Duty Cycle (%)
REVERSE 7.00
STOP 7.48
MEDIUM 7.80
FAST 7.90
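
Putting the two interfaces together, the duty cycles above translate into a few lines of driver code. The following is a minimal sketch, assuming the SJOne library's PWM class from lpc_pwm.hpp (where PWM::pwm1 maps to P2.0 and PWM::pwm2 to P2.1); the constant names and the 7.5% servo center are our own illustrative choices:

 // Minimal sketch using the SJOne PWM class (lpc_pwm.hpp); pin mapping
 // and set() semantics assumed from the SJSU development package.
 #include "lpc_pwm.hpp"

 static const float DC_STOP      = 7.48f; // neutral duty cycle from the table above
 static const float DC_MEDIUM    = 7.80f; // medium speed from the table above
 static const float SERVO_CENTER = 7.5f;  // assumed midpoint of the 6.0-9.0% range

 void motor_demo(void)
 {
     PWM dcMotor(PWM::pwm1, 50); // P2.0 at 50Hz drives the ESC/DC motor
     PWM servo(PWM::pwm2, 50);   // P2.1 at 50Hz drives the steering servo

     dcMotor.set(DC_STOP);       // arm the ESC at the neutral duty cycle
     servo.set(SERVO_CENTER);    // center the steering

     dcMotor.set(DC_MEDIUM);     // drive forward at medium speed
     servo.set(6.0f);            // full left per the calibrated servo range
 }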

Bluetooth Interface

The car is controlled through a custom mobile application that communicates with an HC-06 Bluetooth module, which in turn is connected to the SJOne development board over UART.

The HC-06 receives control signals from the mobile application and passes the information in a byte-level format to the development board.

The HC-06 has 4 external pins: VCC (power), GND (ground), TXD (transmit over UART) and RXD (receive over UART), all of which are used in our application.
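
A minimal sketch of the SJOne side of this link, assuming the SJSU library's Uart2 singleton (uart2.hpp) running in a FreeRTOS task context; the baud rate and the command handling are illustrative:

 // Sketch of reading byte-level commands from the HC-06 over UART2.
 // Uart2/getChar are assumed from the SJSU development package; 9600 baud
 // is the common HC-06 default, not a value confirmed by this project.
 #include "uart2.hpp"

 void bluetooth_task(void)
 {
     Uart2 &bt = Uart2::getInstance();
     bt.init(9600);                      // assumed HC-06 default baud rate

     char cmd = 0;
     while (true) {
         if (bt.getChar(&cmd, 100)) {    // wait up to 100ms for a command byte
             // dispatch cmd to the motor control / kill-switch logic here
         }
     }
 }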

Image Sensor - Raspberry Pi Compute Module

Raspberry Pi Compute Module

Features and Specifications

The Compute Module contains a BCM2835 processor, 512 MB RAM and 4 GB eMMC flash. It plugs into the Compute Module IO board through a standard DDR2 SODIMM connector.

The Compute Module IO board has two onboard camera interfaces and comes with an adapter for the Raspberry Pi Camera board. It has an HDMI output port which can be connected to an LCD display for viewing the image inputs and outputs. It exposes the GPIO pins of the processor: in total, 120 GPIO pins grouped into 2 banks of 60 pins each. It also has a USB port with USB 2.0 support.

Installation and Setup

GPIO pins 14 and 15 are connected to UART0 TX and UART0 RX of the BCM2835 respectively. These pins are used to communicate with the Car Controller.

CAM1 interface is used for connecting the camera and it requires the following pins to be connected using jumper wires for operation:

  • Attach CD1_SDA (J6 pin 37) to GPIO0 (J5 pin 1)
  • Attach CD1_SCL (J6 pin 39) to GPIO1 (J5 pin 3)
  • Attach CAM1_IO1 (J6 pin 41) to GPIO2 (J5 pin 5)
  • Attach CAM1_IO0 (J6 pin 43) to GPIO3 (J5 pin 7)
Jumper wire connections for enabling CAM1 interface on CMIO board

Software Design and Implementation

Car Controller

The Car Controller firmware on the SJOne board receives the target object's zone information from the Compute Module over UART3 and user commands over the Bluetooth link on UART2, and translates these into PWM outputs on P2.0 and P2.1 for the DC and servo motors respectively. It also implements the heartbeat-based kill switch described in the Android Application section.

Compute Module

The Compute Module provides a dual camera interface suitable for both mono and stereo vision based image processing. Combined with HDMI display support, this makes it a suitable platform for prototyping simple image-based applications. The underlying Raspberry Pi architecture also has a VPU which can be used for relatively fast computations. The overall form factor of the platform and the camera board is small and suitable for mounting on an RC car like the one used in this project.

The downside we faced after selecting this platform was the lack of documentation and support to fully exploit the features of the Compute Module. As per the information available online, using the VPU requires rewriting the imaging algorithms in assembly code, which was not feasible within the project's time frame. Moreover, we intended to build our applications on OpenCV. In the end, we had an OpenCV application running on a single-core CPU, which was not responsive enough for a stereo-based application. Our final implementation uses a single-camera object-tracking algorithm built on OpenCV.

Platform Setup

Setting up the platform required building the necessary kernel and OS utilities, library packages, the OpenCV library and the root file system. We used the Buildroot package to configure the software required by the platform. Many dependencies had to be resolved before the board could finally be brought up for OpenCV application development.

Buildroot provides the framework to download and cross-compile all the packages selected in the configuration file. The toolchain for the platform is also built on the host system by Buildroot. We saved our Buildroot configuration in a file named "compute_module_defconfig".

Most of the board bring-up time was spent enabling OpenCV support for the platform. OpenCV requires the support of libraries such as the Xorg windowing system, the GTK2 GUI library and OpenGL.

The basic steps to build the system software for the Compute Module are given in the link below.

https://github.com/raspberrypi/tools/tree/master/usbboot

The first requirement is to build "rpiboot", an application used to flash the onboard eMMC of the Compute Module (refer to the link above for build steps). This program interacts with the bootloader already present on the Compute Module and registers the eMMC flash as a USB mass-storage device on the host system. This way, we can copy programs and files to and from the file system on the target platform, and also flash the entire eMMC if necessary.

Once we are able to interact with the eMMC flash onboard the Compute Module, we can build the required image using Buildroot, which can be cloned from the repository below.

git.buildroot.net/buildroot

Run "make menuconfig" in the Buildroot base directory to perform a menu-driven configuration of the system software and supporting packages. After saving and exiting 'menuconfig', the configuration is stored in the file ".config" in the base directory. Save this file under a suitable name in the "{BUILDROOT_BASE}/configs" directory; we saved ours as "compute_module_defconfig". On the next build, running "make compute_module_defconfig" restores the saved configuration.
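
In command form, the workflow described above looks roughly as follows (run from the Buildroot base directory; these are standard Buildroot targets, not project-specific scripts):

 make menuconfig                              # select toolchain, kernel and packages
 cp .config configs/compute_module_defconfig  # save the configuration under a name
 make compute_module_defconfig                # restore the saved configuration later
 make                                         # download, cross-compile and build the image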

As stated above, we ran into dependency issues with OpenCV support and were faced with errors such as "GTK2: Unable to open display" and "Xinit failed". The link below was very helpful in resolving these issues.

https://agentoss.wordpress.com/2011/03/06/building-a-tiny-x-org-linux-system-using-buildroot

We were able to bring up "twm" (Tab Window Manager) on the platform and run some sample OpenCV applications.

The next task was to enable camera and display support on the Compute Module. By default, the camera is disabled and the HDMI interface is not configured to detect a hot-plug. We referred to the link below to enable HDMI hot-plug through the config.txt file in the boot partition.

https://www.raspberrypi.org/documentation/configuration/config-txt.md

The difficulty in getting the camera to work was that most documentation online assumes the "raspbian" OS is installed on the Pi, with apt-get or other rpi utilities used to enable the camera interface; the standard Raspbian image does not fit into the Compute Module's eMMC flash. The camera interfaces are disabled by default. To enable them, the 'start_x' variable in config.txt must be set to '1'. This causes the BCM bootloader to look up start_x.elf and fixup_x.dat, which actually enable the camera interfaces; in brief, the loader initializes the GPIOs shown in the hardware setup section to function as camera pins. This step also requires dt-blob.bin, a compiled device tree blob that sets the necessary pin configurations. If this file is missing, the loader uses a default device tree built into the 'start.elf' binary (note that this is not the same as start_x.elf). We fetched this file by running "sudo wget http://goo.gl/lozvZB -O /boot/dt-blob.bin". Once the CM boots up, executing "modprobe bcm2835-v4l2" on the module loads the V4L2 camera driver, which OpenCV uses to interact with the onboard camera interface.
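
The corresponding config.txt additions look roughly like this (start_x is described above; hdmi_force_hotplug is the usual option for forcing HDMI output and is our assumption for how the hot-plug issue was solved):

 # /boot/config.txt
 start_x=1             # load start_x.elf / fixup_x.dat and enable the camera interfaces
 hdmi_force_hotplug=1  # assume an HDMI display is present even without hot-plug detection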

Once Buildroot has produced the image, it needs to be flashed to the Compute Module eMMC. A step-by-step procedure is given in the link below.

https://www.raspberrypi.org/documentation/hardware/computemodule/cm-emmc-flashing.md

Note that the jumper positions on the board need to be changed for the CM to behave as a USB slave device.

<Jumper position image>
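
A typical flashing sequence from the host looks like the following, assuming Buildroot produced output/images/sdcard.img and the eMMC enumerates as /dev/sdX (both names are illustrative; check dmesg for the actual device):

 sudo ./rpiboot                                        # register the eMMC as a USB mass-storage device
 sudo dd if=output/images/sdcard.img of=/dev/sdX bs=4M # write the Buildroot image to the eMMC
 sync                                                  # flush writes before unplugging the module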

Object Tracking using OpenCV

Tracking of an object is made possible by applying image processing and image analysis techniques to a live video feed from a camera mounted on the VisionCar. Relying on images as the primary 'sensing' element allows us to act upon real objects in a very intuitive way, similar to how humans do. We use OpenCV, an open-source real-time computer vision library, to process the live video frames. The library provides definitions of data types such as matrices, methods to work on them, and optimized implementations of complex image processing routines. We designed and implemented an object-tracking application using the features provided by the OpenCV library.

An image is a 2D array of pixels. In a colored image, each pixel is represented by 3 components: red, green and blue. Apart from the RGB color scheme, the HSV (Hue, Saturation, Value) scheme is widely used in image processing. Separating the color component from the intensity can help extract more information from an image.

The algorithm for object tracking is as shown in Fig<>. The application performs tracking based on the color information of the object. Each image frame read from the camera is an RGB frame, which is converted to the HSV color format using the 'cvtColor' function provided by OpenCV. A threshold filter is then applied to the HSV frame to obtain a binary image; the threshold values have to be configured for each object that is to be tracked. The thresholded image is subjected to morphological operations such as dilate and erode to filter noise, isolate individual elements and join disparate elements. This yields a well-segmented image with the target object in sight. Next, all the contours/shapes in the filtered image are determined using the 'findContours' OpenCV function, which returns a list of the contours present in the filtered image. We use image 'moments' to determine the area and location of the object, and this information is sent to the Car Controller through the communication interface. If the area of the object is not within the specified threshold, we consider the object absent. This sequence of operations is carried out on every frame, thereby continuously tracking the target object.
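
The core of this loop reduces to a handful of OpenCV calls. A minimal sketch, assuming the OpenCV 2.x/3.x C++ API with fixed HSV thresholds (in the real application these arrive over UART from the Android app):

 // Color-based object tracking: threshold in HSV, clean up with
 // morphology, then locate the object via contours and moments.
 #include <opencv2/opencv.hpp>
 #include <vector>

 int main()
 {
     cv::VideoCapture cap(0);                      // camera via the v4l2 driver
     cv::Scalar lo(35, 80, 80), hi(85, 255, 255);  // illustrative HSV thresholds
     cv::Mat kernel = cv::getStructuringElement(cv::MORPH_RECT, cv::Size(5, 5));
     cv::Mat frame, hsv, mask;

     while (cap.read(frame)) {
         cv::cvtColor(frame, hsv, cv::COLOR_BGR2HSV); // RGB frame -> HSV
         cv::inRange(hsv, lo, hi, mask);              // threshold -> binary image
         cv::erode(mask, mask, kernel);               // remove speckle noise
         cv::dilate(mask, mask, kernel);              // rejoin the segmented object

         std::vector<std::vector<cv::Point> > contours;
         cv::findContours(mask, contours, cv::RETR_EXTERNAL, cv::CHAIN_APPROX_SIMPLE);
         for (size_t i = 0; i < contours.size(); i++) {
             cv::Moments m = cv::moments(contours[i]);
             if (m.m00 > 1000) {                      // area threshold: object present
                 double cx = m.m10 / m.m00;           // centroid x -> lateral zone
                 (void)cx;                            // send (cx, area) to the Car Controller
             }
         }
     }
     return 0;
 }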

The application was executed and tested on a desktop environment with a Webcam before porting it to compute module.

Communication Interface

The communication interface of the Compute Module is a simple UART. The CM receives HSV threshold adjustments and light sensor readings from the LPC module, and sends the position of and distance to the tracked object back to the Car Controller. The application on the CM has two threads. The main image processing thread performs a UART write once per frame, as soon as the position of the target object is computed. The second thread performs UART reads to adjust the HSV threshold values; it issues blocking read calls and remains asleep as long as no data is available from the LPC module. The data formats transmitted and received by the CM are explained below. The transmitted data is a simple 4-byte structure:

/* Copy Tx structure here */

The horizontal/lateral position of the target object in the frame is divided into multiple zones from extreme left to extreme right; this field is labelled 'lZone' in the structure. The depth, or distance to the object, which is based on the area of the object in the frame, is likewise divided into multiple zones from 'near' to 'far' and encoded in the 'dZone' field. The possible values for 'lZone' and 'dZone', in the form of enumerations, are shown below:

/* Copy the Enum for zone and depth */
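
A minimal sketch of such definitions, assuming one byte per field; all names apart from lZone and dZone are illustrative, and the two remaining bytes of the 4-byte structure are assumed to be a frame marker and a checksum:

 #include <stdint.h>

 /* Sketch of the 4-byte Tx structure described above */
 typedef struct {
     uint8_t header;   /* assumed frame marker */
     uint8_t lZone;    /* lateral zone of the target (enum below) */
     uint8_t dZone;    /* depth zone of the target (enum below) */
     uint8_t checksum; /* assumed integrity byte */
 } cm_tx_t;

 /* Sketch of the zone enumerations: extreme left to extreme right, near to far */
 enum lateral_zone { ZONE_FAR_LEFT, ZONE_LEFT, ZONE_CENTER, ZONE_RIGHT, ZONE_FAR_RIGHT };
 enum depth_zone   { DEPTH_NEAR, DEPTH_MID, DEPTH_FAR };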

The data format received by the compute module is as below:

/* Copy Rx structure here */

The values for the '<command>' field are enumerated below. This value originates from the Bluetooth interface attached to the LPC controller. Each command increments or decrements its corresponding Hue, Saturation or Value threshold. The "StoreFile" command stores the current threshold values to a file, and the "LoadFile" command restores them from the file.

/* Copy enum for commands */
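
Again as a sketch with illustrative names (only the threshold-adjust, StoreFile and LoadFile behaviors are from the description above; the payload byte is an assumption):

 #include <stdint.h>

 /* Sketch of the Rx format: a command byte plus an assumed payload byte,
  * e.g. for the light sensor reading mentioned above */
 typedef struct {
     uint8_t command; /* one of the hsv_command values below */
     uint8_t value;   /* assumed payload */
 } cm_rx_t;

 enum hsv_command {
     CMD_HUE_UP, CMD_HUE_DOWN,
     CMD_SAT_UP, CMD_SAT_DOWN,
     CMD_VAL_UP, CMD_VAL_DOWN,
     CMD_STORE_FILE, /* "StoreFile": save current thresholds to a file */
     CMD_LOAD_FILE   /* "LoadFile": restore thresholds from the file */
 };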

The flowchart for the UART communication control is shown below

<UART threading flow chart here>
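
The reader side of that flow can be sketched as a POSIX thread; uart_fd and adjust_thresholds are hypothetical names for the opened UART descriptor and the threshold-update routine:

 #include <pthread.h>
 #include <unistd.h>

 extern int uart_fd;                                      /* hypothetical UART descriptor */
 extern void adjust_thresholds(const unsigned char *cmd); /* hypothetical handler */

 static void *hsv_reader(void *arg)
 {
     (void)arg;
     unsigned char buf[2];
     for (;;) {
         ssize_t n = read(uart_fd, buf, sizeof(buf)); /* blocking read: sleeps until the LPC sends data */
         if (n > 0)
             adjust_thresholds(buf);
     }
     return NULL;
 }

 void start_uart_reader(void)
 {
     pthread_t tid;
     pthread_create(&tid, NULL, hsv_reader, NULL); /* main thread keeps processing frames */
 }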

Flaws of Current Algorithm

The current algorithm has its own set of disadvantages. First, it relies on area to compute the distance to the target object, so even a small amount of noise can mislead the algorithm into tracking a noise source in the image. A partial solution could be to use the light sensor readings to adapt the threshold values and suppress noise under varying lighting conditions; this might still be insufficient, as the object itself may move from one lighting condition to another.

Stereo vision can add depth as an extra dimension for thresholding the image: noise could then be eliminated by using distance as a threshold value. But this requires computation power that is not available on the current platform.

Android Application

Motivation

The reasons for building an Android app are manifold:

  • The Raspberry Pi Compute Module used for the imaging has a single USB port on board. Constantly having to connect a keyboard and a mouse to program the platform, while copying source files back and forth on a pen drive, is a cumbersome process; offloading part of the functionality to an Android application removes this burden.
  • Since the project uses an RC car capable of traveling at 35 miles/hr, it seemed prudent to design a kill-switch mechanism to ensure the car's safety in the event of it going out of user control. The Android app effectively acts as a heartbeat for the vehicle, ensuring that the vehicle is switched off once a heartbeat is missed. There can be various reasons for a missed heartbeat: the car could have traveled out of range of the Bluetooth module and can no longer receive messages from the user, or the app on the phone could have crashed for unknown reasons. In either situation, it is wise to disable the car to avoid damage, and the Android app enables that.
  • To segment the target object from the background, we need to modify the hue, saturation and value thresholds. Using an app to set these values wirelessly is less cumbersome than connecting a keyboard to make modifications.
  • By configuring the controller on the car to connect wirelessly over Bluetooth, we are building a framework for the future, where the vehicle could connect to other smart objects. Android was a natural choice of platform, as its API is very well documented and its global presence is unparalleled.

Design

At any given time, the vehicle is receiving commands from either the Bluetooth module or the Compute Module. It therefore makes sense for the app to let the user select between a "compute module" mode and a "bluetooth" mode; the app's behavior then depends on the selected mode. Apart from options to connect to and disconnect from the HC-06 module, the user is provided with options to drive the car in a chosen direction; the app incorporates buttons for the directions defined by the motor control task. In Compute Module mode there is no need for the user to input directions, as the car uses the Pi Camera to recognize the target object and drive towards it. In this mode, buttons are provided to modify the Hue, Saturation and Value thresholds so that the object can be separated from the background. The object detection process can be observed on the LCD screen connected to the Compute Module via HDMI.

The app also constantly sends a heartbeat message on a separate thread at specific intervals to ensure that the channel between the HC-06 module and the phone still exists. This is precautionary in nature: if the car were to drift out of the user's control, this thread ensures that the car stops once it is out of range of the phone.

Implementation

An Android app consists of two parts: a front-end layout that defines the user interface (UI) and a back-end that defines the logic behind the UI elements. The UI is written in XML (Extensible Markup Language) in a syntax reminiscent of HTML. The back-end is written in Java and built with the Android SDK.

App Layout
Bluetooth App Layout for Vision Car

The app uses the Bluetooth API to create a channel between the phone and the HC-06 module. The Bluetooth API documentation on the Android Developer page gives a fair idea of the steps involved in using the phone's Bluetooth to create a channel. The back-end ties common Bluetooth operations such as connecting, disconnecting and communicating to the buttons defined in the UI, by defining the behavior of every individual UI element present in the XML file.

App Layout
Button Presses and their corresponding actions

The app uses a thread with a blocking queue to queue the direction and HSV-modifier commands to be sent to the HC-06 Bluetooth module. The logic to distinguish between commands directed to the Bluetooth module and to the Compute Module is implemented in the Bluetooth task. Another thread constantly sends the designated heartbeat message between the app and the motor control task; if the Car Controller fails to receive a heartbeat in time, the miss triggers a shutdown of the system.

Integration & Testing

This section explains the various stages of testing involved in the design and development of our VisionCar. We can broadly classify this process into the five stages explained below:

Motor API

The PWM-based motor driver was the first piece of software we implemented. Once the design and coding of the Motor API were complete, a test framework was developed to exercise both the servo and DC motors. The framework was designed to test the overall functionality of the Motor API: it made sure that the PWM input always remained within the safe range, while also verifying that both motors functioned as designed.
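
A sketch of that safety clamp, with the limits taken from the servo calibration in the Motor Interface section (the function name is illustrative):

 // Clamp a requested duty cycle into the calibrated safe window before
 // it reaches the motors; limits are from the servo range above.
 static float clamp_duty(float duty)
 {
     const float kMin = 6.0f, kMax = 9.0f;
     if (duty < kMin) return kMin;
     if (duty > kMax) return kMax;
     return duty;
 }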

Bluetooth Interface

An Android application running on a mobile device is the primary source of control for our VisionCar. The Bluetooth interface also carries important features like the kill switch and heartbeat. Once the Android application was developed, the Bluetooth module connected to the SJOne board was paired with the mobile device to test the functionality listed below:

  • Motor control signals
  • Kill Switch implementation
  • HeartBeat implementation
  • Start and Stop State of the Vision Car.

Imaging Algorithm

Before using the Compute Module for image processing, the algorithms were tested on a local PC using a webcam. Several algorithms provided as part of the OpenCV library were evaluated for our purpose; finally, a few of them were selected and combined to fit our requirements.

Image Processing on Compute Module

After the image processing algorithm was tested on the PC using a webcam, it was ported to the Compute Module, which uses the Raspberry Pi Camera. The algorithm was tested on various objects in different lighting conditions, and threshold values such as the HSV components and object area were finalized by trial and error.

Integration testing

All the individual modules were integrated on our VisionCar and tested in an outdoor environment using the same objects used in the simulated environment. Testing was also performed under different lighting conditions and backgrounds. It was equally important to verify the communication between the individual modules, which worked seamlessly.

Conclusion

VisionCar demonstrates end-to-end object tracking on a moving platform: a single camera feeding OpenCV on a Raspberry Pi Compute Module, a UART link carrying zone information to the SJOne Car Controller, and an Android app for HSV tuning and a heartbeat-based kill switch. The project taught us how much effort platform bring-up can consume: enabling OpenCV, the camera and the display on the Compute Module through Buildroot took a large share of our time, and the platform's single-core CPU forced us to abandon stereo vision in favor of single-camera, color-based tracking. Testing also showed that color-based tracking is sensitive to lighting and noise, a limitation that stereo depth could address given more computation power. Above all, the project deepened our understanding of embedded Linux bring-up, real-time motor control over PWM, UART protocol design and wireless control via Bluetooth.

Project Video


Project Source Code

References

Acknowledgement


References Used


Appendix
