The goal of this project was to build and program a robot that could be controlled using a laser pointer. The robot uses an onboard camera and OpenCV to locate the position of the laser point in front of it, then moves based on that location and exports a frame that can later be stitched into a video showing the robot's point of view. A video of the robot in action is included below.
The purpose of this project was to give myself a task that would let me gain experience using a Raspberry Pi and OpenCV to control a driving robot. I hoped to combine the skills I learned in my mechatronics course with the skills I learned in my computer vision course. While the goal was simple, it gave me a project to work on over winter break, and I hoped the experience would help me complete more complicated tasks in the future.
Image Processing
After capturing a frame, the image is first converted from RGB into HSV color space. A mask is then created that keeps only pixels within a range of HSV values, isolating red pixels that could belong to the laser point. A Gaussian blur is then applied to the mask to suppress stray pixels that should not have been included. The brightest point of the blurred mask is then taken as the estimated location of the laser point.
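A minimal sketch of this step in Python with OpenCV, assuming illustrative HSV thresholds and blur kernel size rather than the exact values tuned on the robot (red wraps around hue 0 in HSV, so two ranges are combined):

import cv2
import numpy as np

# Illustrative HSV thresholds; real values depend on the laser and lighting.
LOWER_RED_1 = np.array([0, 100, 150])
UPPER_RED_1 = np.array([10, 255, 255])
LOWER_RED_2 = np.array([170, 100, 150])
UPPER_RED_2 = np.array([180, 255, 255])

def find_laser(frame):
    """Return the estimated (x, y) pixel location of the laser point, or None."""
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)

    # Red spans the hue wrap-around, so combine two in-range masks.
    mask = cv2.inRange(hsv, LOWER_RED_1, UPPER_RED_1) | \
           cv2.inRange(hsv, LOWER_RED_2, UPPER_RED_2)

    # Blur the mask so isolated noise pixels are suppressed before taking the max.
    blurred = cv2.GaussianBlur(mask, (11, 11), 0)

    # The brightest point of the blurred mask is the estimated laser location.
    _, max_val, _, max_loc = cv2.minMaxLoc(blurred)
    return max_loc if max_val > 0 else None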
If the estimated point is within a defined range, the robot moves forward; if the point is outside of that range, the robot turns until the point is back within the forward range.
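This control logic reduces to a band check on the point's x-coordinate. The sketch below uses a hypothetical pixel band for a 640-pixel-wide frame and a placeholder motor interface; the helper names (turn_left, turn_right, forward) are assumptions, not the actual motor code:

FORWARD_BAND = (200, 440)  # assumed x-range (pixels) treated as "straight ahead"

class Robot:
    """Placeholder for the real motor interface; method names are hypothetical."""
    def turn_left(self):  print("turn left")
    def turn_right(self): print("turn right")
    def forward(self):    print("forward")

def drive_toward(point, robot):
    """Turn until the laser point falls inside the forward band, then drive forward."""
    x, _ = point
    if x < FORWARD_BAND[0]:
        robot.turn_left()
    elif x > FORWARD_BAND[1]:
        robot.turn_right()
    else:
        robot.forward()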
Exported Frames
I wanted to practice adding text and shapes to captured frames, so using the OpenCV drawing functions, I drew a box around the laser point and a line connecting the point to the bottom of the image to represent the path the robot should take. I also printed the estimated distance and angle in the top-left corner of the frame. The angle is calculated from the laser point's location in the frame; originally, this value was going to control how quickly the robot turned, but I had a difficult time getting the continuous servo motors to work the way I wanted, so I changed my approach and did not use the angle. The distance is estimated by interpolating between measured distances based on the y-coordinate of the point in the image. This value is a very rough estimate, but it is only used for printing on the exported frame. The frames are saved so that I can stitch them together afterwards.
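A sketch of the annotation step using the standard OpenCV drawing calls (cv2.rectangle, cv2.line, cv2.putText); the calibration samples for the distance interpolation and the bottom-center anchor for the path line are assumptions for illustration:

import cv2
import numpy as np

# Hypothetical calibration table: measured distance (cm) at a few y-coordinates.
Y_SAMPLES = np.array([120, 240, 360, 480])
DIST_SAMPLES = np.array([200.0, 100.0, 50.0, 25.0])

def annotate(frame, point, angle_deg):
    """Draw the laser-point box, path line, and text overlay on a frame."""
    x, y = point
    h, w = frame.shape[:2]

    # Box around the laser point, and a line from the point to the bottom of
    # the image (anchored at bottom-center here, by assumption).
    cv2.rectangle(frame, (x - 15, y - 15), (x + 15, y + 15), (0, 255, 0), 2)
    cv2.line(frame, (x, y), (w // 2, h - 1), (0, 255, 0), 2)

    # Distance is interpolated from the y-coordinate using the measured samples.
    dist = np.interp(y, Y_SAMPLES, DIST_SAMPLES)
    cv2.putText(frame, "dist: %.0f cm  angle: %.1f deg" % (dist, angle_deg),
                (10, 30), cv2.FONT_HERSHEY_SIMPLEX, 0.7, (0, 255, 0), 2)
    return frame

Each annotated frame can be written out with cv2.imwrite; one common way to stitch the saved frames into a video afterwards is ffmpeg, e.g. ffmpeg -framerate 10 -i frame_%04d.jpg out.mp4.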
The video below shows clips of the robot driving alongside the exported frames captured by the onboard camera.