Thursday 29 December 2016

PiWars 2017 - Autonomous challenges

It's the autonomous challenges at Pi Wars that separate the robots from the radio-controlled cars, requiring one or more sensors on the robot to collect information about the real world, process it and decide how to react in order to successfully complete the task at hand.

To add to the complexity you are competing against a whole raft of other robots, so you have to strike a balance between slow and steady or fast and risky. Do you concentrate on completing the course without any penalties, or risk pushing things to get a good time?

Of course the first step is working out how to approach each of the challenges and get a working, reliable system up and running, before trying to push things to the limits!

Line following

A returning event for Pi Wars 2017: the robot needs to follow a black line around a twisty course as many times as possible within the time limit.

My attempts at line following in the last Pi Wars didn't go so well... In testing the night before, the Arduino controlling the line sensor started triggering the watchdog and restarting, clearing its calibration data and causing the robot to lose the line; on the day I only got about a quarter of the way around the course. So this time around I want an approach that uses just Raspberry Pis, partly to avoid this issue in future, and partly because it is a Raspberry Pi based event!

The first approach would be to re-use the line sensor from last time, connecting it directly to a Raspberry Pi instead of an Arduino (something I've successfully done before using the pigpio library). Because Raspbian is not a realtime OS, it can take a lot of CPU time to ensure no data is lost, so I may have to use a dedicated Raspberry Pi to do the sampling and send the data to the 'master' Raspberry Pi for processing.
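As a first sketch of how that split could work (everything here is an assumption: a simple digital sensor array with one GPIO per channel, made-up pin numbers and a made-up master address), the dedicated Pi could poll the sensor with pigpio and stream a compact bitmask to the master over UDP:

    # Hypothetical sketch: a dedicated Pi samples a digital line-sensor
    # array via pigpio and streams the readings to the 'master' Pi.
    import socket
    import struct
    import time

    import pigpio

    SENSOR_PINS = [17, 27, 22, 23, 24]   # assumption: one GPIO per channel
    MASTER_ADDR = ("192.168.0.2", 5005)  # assumption: master Pi's address

    pi = pigpio.pi()
    for pin in SENSOR_PINS:
        pi.set_mode(pin, pigpio.INPUT)

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

    try:
        while True:
            # Pack each channel's high/low state into a one-byte bitmask
            reading = 0
            for bit, pin in enumerate(SENSOR_PINS):
                if pi.read(pin):
                    reading |= 1 << bit
            sock.sendto(struct.pack("B", reading), MASTER_ADDR)
            time.sleep(0.002)  # roughly 500 samples a second
    finally:
        pi.stop()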

An alternate approach is to use the Raspberry Pi camera to do a spot of image processing to determine where the line is, and where it is going. I've not done image processing since University, so this would require lots of investigation, research and learning. As I like using these events to learn new things, I'll definitely be giving this approach some serious consideration.
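To give a flavour of what the camera approach might involve (the thresholds and strip size here are pure guesses, not anything tested on a robot), OpenCV can threshold a strip at the bottom of each frame and use image moments to find the line's offset from centre:

    # Hypothetical sketch: find the black line's horizontal offset in the
    # bottom of a camera frame. Thresholds would need tuning for the venue.
    import cv2

    def line_offset(frame):
        """Return the line's offset from centre (-1..1), or None if lost."""
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        # Only look at a strip near the bottom, just in front of the robot
        strip = gray[-40:, :]
        # Black line on a light floor: invert so the line is the bright blob
        _, mask = cv2.threshold(strip, 60, 255, cv2.THRESH_BINARY_INV)
        m = cv2.moments(mask)
        if m["m00"] == 0:
            return None  # no line in view
        cx = m["m10"] / m["m00"]          # x coordinate of the blob's centroid
        half_width = strip.shape[1] / 2.0
        return (cx - half_width) / half_width

The offset could then feed straight into the steering, with frames grabbed from the Pi camera (picamera's PiRGBArray gives OpenCV-friendly numpy arrays).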

Straight line speed test

Another returning event: the robot has to drive in a straight line as fast as possible along a 7.28m long trough. For Pi Wars 2017, however, the straight line speed test has been revised to be autonomous only, whereas previously I've only attempted it under manual control (attempts to use the compass on the Sense HAT having proven unreliable!).

A few approaches spring to mind for this event... using feedback from motor encoders to keep the robot driving straight; using a sensor to hold the robot a set distance from one wall (probably reusing the range sensor from the last Pi Wars); or using the Raspberry Pi camera to detect the walls of the trough and keep the robot in the middle.

Using the range sensor sounds the most feasible, but it would be nice to get something working with the camera.
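As a rough sketch of the range sensor approach (read_range() and set_motors() stand in for the robot's real sensor and motor code, and the target distance and gain are guesses that would need tuning), a simple proportional controller could hold the robot a fixed distance from the left wall:

    # Hypothetical sketch: wall following with a proportional controller.
    # Assumes the range sensor points at the left wall.
    import time

    TARGET_DISTANCE = 0.20  # metres from the wall (assumption)
    KP = 2.0                # proportional gain, to be found by experiment
    BASE_SPEED = 1.0        # flat out for the speed test

    def follow_wall(read_range, set_motors):
        while True:
            # Positive error means we are drifting away from the left wall
            error = read_range() - TARGET_DISTANCE
            correction = max(-0.5, min(0.5, KP * error))
            # Slow the left side, speed the right, to steer back towards it
            set_motors(BASE_SPEED - correction, BASE_SPEED + correction)
            time.sleep(0.01)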

Minimal maze

A new event for Pi Wars 2017: the robot has to navigate its way through a maze, without touching the walls, to reach the exit. The walls will be of various colours, providing information the robot could potentially use to determine where in the maze it is.

Initial thoughts on this challenge are that I could use a variation of the speed test solution, using a range sensor to keep the robot following the left or right wall, combined with a second sensor on the front to work out when a corner is coming up. Alternatively I could use a single range sensor and start out following the 'right' wall, then when the right wall turns blue (a camera being used to determine this) switch over to following the left wall, hopefully avoiding the outer wall as the direction changes!
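Something like the following could capture that wall-switching idea (everything here is a placeholder: read_side(), right_wall_is_blue() and set_motors() stand in for real sensor and motor code, and the sketch glosses over how a single sensor would get pointed at the other wall):

    # Hypothetical sketch: follow the right wall until the camera sees
    # blue, then mirror over to following the left wall.
    TARGET = 0.15  # metres from the wall being followed (assumption)
    KP = 2.0       # proportional gain, to be tuned
    SPEED = 0.5    # slower than the speed test: no touching the walls!

    def run_maze(read_side, right_wall_is_blue, set_motors):
        following = "right"
        while True:
            if following == "right" and right_wall_is_blue():
                following = "left"  # the colour cue to change direction
            error = read_side(following) - TARGET
            if following == "right":
                # Too far from the right wall: slow the right, speed the left
                set_motors(SPEED + KP * error, SPEED - KP * error)
            else:
                set_motors(SPEED - KP * error, SPEED + KP * error)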

The maze needs to be completed twice, with the times combined for the final result, so can we be a little sneaky and use the first run through to 'map' out the maze? Then use this map on the second run to zoom through it? You'd need to get the starting point spot on for the second run, but it certainly sounds feasible!
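At its crudest that could be pure dead reckoning: record the timestamped motor commands during the careful first run, then replay them on the second. A rough sketch (set_motors() again being a stand-in for the real motor code):

    # Hypothetical sketch: record motor commands on run one, replay on run
    # two. Pure dead reckoning, so the starting position must match exactly.
    import json
    import time

    def record(set_motors, log_path="maze_run.json"):
        log = []
        start = time.time()
        def logged_set_motors(left, right):
            log.append((time.time() - start, left, right))
            set_motors(left, right)
        # ... drive the maze using logged_set_motors() as the motor interface ...
        with open(log_path, "w") as f:
            json.dump(log, f)

    def replay(set_motors, log_path="maze_run.json"):
        with open(log_path) as f:
            log = json.load(f)
        start = time.time()
        for stamp, left, right in log:
            while time.time() - start < stamp:
                time.sleep(0.001)  # wait until the command's recorded time
            set_motors(left, right)

Actually 'zooming through' on the replay would mean scaling both the timestamps and the motor speeds, and the robot's dynamics won't scale perfectly with them, so that part would need plenty of testing.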

So those are my thoughts on the autonomous challenges... Do they sound good, bad, very bad? Is there enough time left to implement all of these?

Leo
