Wednesday, January 19, 2011

End Course Project 8: CTU and TU programming

Counter Terrorist Units and Terrorist Units run exactly the same program. They share the same mechanical architecture and exhibit the same external behavior, despite their different goals. What makes them different are the commands input by the human players, and the fact that the TU carries a bomb trailer that it has to plant (but this last behavior is managed on the computer and bomb unit sides).

We will therefore refer to both the CTU and the TU as a “unit” in this part.

A unit has basically two main objectives that it must carry out in real time, i.e. being as fast and reactive as possible:
  •  Obey the commands sent by the player controlling it
  •  Send feedback from the embedded sensors to the player’s computer

Both of those tasks are largely supported by the Message Framework, which makes the Bluetooth communication a smooth task properly handled at running time. This framework, entirely coded for this project (though now usable for any other project involving Bluetooth communication with leJOS), is described in another part of this report (see article 5, Message framework).

To give the reader an idea of how easy it is to make a robot and a computer communicate, below are some examples of how we use this protocol layer. The idea is always to get an instance of the message framework, which is instantiated as a singleton, before listening to messages coming from the computer or sending some data back. The singleton makes sure that the framework is not instantiated more than once, saving memory and running time, both of which are precious when dealing with embedded systems.

At the beginning of the main program, we instantiate the message framework and add a listener. Each time a message is sent by the computer to the brick, the listener catches it and then processes it.
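Since the framework itself is presented in its own article, the following is only a rough sketch of this singleton-plus-listener pattern; the class and method names here are illustrative, not the exact ones from our framework.

```java
import java.util.ArrayList;
import java.util.List;

// Listener interface: implemented by whoever wants to process incoming messages.
interface MessageListener {
    void messageReceived(String message);
}

// Singleton holding the unique communication state of the brick.
class MessageFramework {
    private static MessageFramework instance;

    private final List<MessageListener> listeners = new ArrayList<MessageListener>();

    private MessageFramework() { }  // constructor hidden: use getInstance()

    // Lazily creates the single instance, shared by the whole program.
    public static synchronized MessageFramework getInstance() {
        if (instance == null) {
            instance = new MessageFramework();
        }
        return instance;
    }

    public void addListener(MessageListener listener) {
        listeners.add(listener);
    }

    // Called by the Bluetooth reading thread each time a line arrives.
    void dispatch(String message) {
        for (MessageListener listener : listeners) {
            listener.messageReceived(message);
        }
    }
}
```

However many parts of the program call getInstance(), they all share the same object, which is exactly what keeps the memory footprint of the communication layer constant.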

The orders given by a player to their linked unit concern only the driving of the robot. We therefore just listen for steering instructions, and for whether the unit has to increase or decrease its speed or stop. For this purpose, we get and analyze the strings received on the brick, then apply a command using the leJOS Pilot class. A command can be interrupted at any time by a newly received one, so the unit is as responsive as possible and the player can control it more intuitively.

The decision to use the Pilot class made sense since it enables smooth steering by taking into account the servomotors, the size of the wheels, and the central turning position of the robot, in order to get a parallel-driving car that makes accurate turns.

Picture 23: Simplicity of using the message framework - get its instance, create a listener and then read the messages
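As a sketch of the parsing step, the snippet below splits a driving message into an action and an optional numeric parameter. The command names and the “ACTION:value” format are hypothetical here, chosen only to illustrate the idea; on the brick, each parsed command would then be translated into a call on the Pilot (forward, stop, a steering call, and so on), each new call interrupting the previous one.

```java
// Illustrative parser for driving orders received over Bluetooth.
// The actual command strings of our protocol are not reproduced here.
class DriveCommand {
    final String action;  // e.g. "FORWARD", "STOP", "STEER", "SPEED" (hypothetical names)
    final int value;      // steering angle or speed parameter, 0 if unused

    DriveCommand(String action, int value) {
        this.action = action;
        this.value = value;
    }

    // Parses a message such as "STEER:45" or a bare "STOP".
    static DriveCommand parse(String message) {
        int sep = message.indexOf(':');
        if (sep < 0) {
            return new DriveCommand(message, 0);
        }
        return new DriveCommand(message.substring(0, sep),
                                Integer.parseInt(message.substring(sep + 1)));
    }
}
```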


Units have to constantly send the data gathered by each of their sensors to the computer controlling them, so that it can be displayed on the player’s screen.

Once again, the message framework is called, and we do so for each single string of data we wish to send in real time. Depending on the case, we may prefer to send a new value only if it differs from the previous one, so as not to overload the communication channel between the unit and the computer.

Messages are formatted in a predetermined way so the computer always knows which sensor they come from.
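Both ideas (tagging each message with its sensor, and suppressing unchanged values) can be sketched as below; the prefix-and-colon format is an assumption for illustration, not necessarily the exact format we used.

```java
// One reporter per sensor: formats outgoing messages and drops duplicates.
class SensorReporter {
    private final String sensorName;  // prefix identifying the sensor
    private String lastSent;          // last value actually transmitted

    SensorReporter(String sensorName) {
        this.sensorName = sensorName;
    }

    // Returns the formatted message to send, or null if the value has not
    // changed since the last transmission (to avoid flooding the BT link).
    String report(String value) {
        if (value.equals(lastSent)) {
            return null;
        }
        lastSent = value;
        return sensorName + ":" + value;
    }
}
```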

Picture 24: Sending data from the IRSeeker over Bluetooth through the message framework


Threads are the key to managing all those tasks in real time. We have basically four main threads (sometimes split into sub-threads): one for each sensor (IRSeeker, compass and ultrasound) and one for responding to the player’s orders.

Each sensor thread aims to send as many readings from its sensor to the computer as possible. For the compass sensor, we send the last value in degrees (0 to 360°), which indicates where north is. For the IRSeeker, we send the new direction (an integer from 0 to 9) in which an IR signal has been detected, if any, on top of the five values (0 to 255) read by its five internal sensors.
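The skeleton shared by these sensor threads can be sketched as follows. The two small interfaces stand in for a leJOS sensor and for the message framework respectively, so this is only an outline of the loop structure, not our actual brick code.

```java
// One polling thread per sensor: read, tag, send, sleep, repeat.
class SensorThread extends Thread {
    interface Sensor  { String read(); }          // stands in for a leJOS sensor
    interface Channel { void send(String message); } // stands in for the BT framework

    private final String prefix;
    private final Sensor sensor;
    private final Channel channel;
    private volatile boolean running = true;

    SensorThread(String prefix, Sensor sensor, Channel channel) {
        this.prefix = prefix;
        this.sensor = sensor;
        this.channel = channel;
    }

    public void run() {
        while (running) {
            channel.send(prefix + ":" + sensor.read()); // tagged reading
            try {
                Thread.sleep(10); // polling period (illustrative value)
            } catch (InterruptedException e) {
                return;           // stop promptly when shut down
            }
        }
    }

    void shutdown() {
        running = false;
        interrupt();
    }
}
```

Because each thread only touches its own sensor and the (thread-safe) communication channel, the threads never need to coordinate with one another.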

On top of that, the robot emits beeping sounds that get louder and more frequent as the IR sensor moves closer to the IR ball. This way, each player gets immediate feedback on what is happening in the game, and even the spectators can follow its progress, since this information is critical (finding the ball being the main goal of the CTU): the louder the sound, the more likely the CTU is to win the game.
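A mapping of this kind can be sketched as below, assuming an IR strength reading in the 0 to 255 range (255 meaning the ball is very close); the actual constants we used on the brick are not reproduced here.

```java
// Illustrative mapping from IR signal strength to beep loudness and rate.
class BeepPolicy {
    // Pause between beeps in ms: long when the ball is far, short when close.
    static int intervalMs(int irStrength) {
        return 1000 - (irStrength * 900) / 255;  // 1000 ms down to 100 ms
    }

    // Volume in percent: quiet when far, loud when close.
    static int volumePercent(int irStrength) {
        return 10 + (irStrength * 90) / 255;     // 10% up to 100%
    }
}
```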

The case of the ultrasound sensor is a little more complicated. It uses one thread to autonomously control the motor rotating the turret, so the US sensor can detect obstacles over a 360° field of view, and another thread to read and send the values after converting them to Cartesian coordinates. The unit computes the Cartesian coordinates of a detected obstacle point with simple mathematics: they can be deduced from the position of the turret motor and the distance to the obstacle. The point of sending the data as Cartesian coordinates is to display it on screen in a Cartesian coordinate system, so we can actually see where the obstacles are in relation to the position of the robot (the abscissa goes from the robot’s left to its right, while the ordinate goes from its back to its front).

Picture 25: Computing the Cartesian coordinates for the radar, based on the distance to the obstacle and the angle of the motor
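The conversion itself boils down to basic trigonometry. In this sketch we assume the turret angle is given in degrees, with 0° pointing to the robot’s front and angles increasing clockwise toward its right; the distance unit is whatever the US sensor reports.

```java
// Converts a radar reading (distance, turret angle) to Cartesian coordinates.
class Radar {
    // x: left-right axis of the unit, y: back-front axis.
    static double[] toCartesian(double distance, double angleDegrees) {
        double a = Math.toRadians(angleDegrees);
        double x = distance * Math.sin(a);  // abscissa (left to right)
        double y = distance * Math.cos(a);  // ordinate (back to front)
        return new double[] { x, y };
    }
}
```

For example, an obstacle detected at distance 100 with the turret facing straight ahead (0°) ends up at (0, 100), i.e. directly in front of the unit.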


The whole sensor part is thus managed autonomously by the unit, even if it does not always result in a behavior one can directly observe by looking at the robot. Since those behaviors do not interfere with each other (except for the Bluetooth communication, which is handled by the message framework), it was not necessary to prioritize them: no thread gets access to another thread’s sensor or actuator.

However, our implementation is, in its current state, ready to fully handle a completely autonomous behavior-based robot with a subsumption architecture, if deemed desirable. This will be relevant when it comes to programming an autonomous decoy wandering in the city and looking for the bomb. Since we already manage several threads, and the overall complex behavior (admittedly not fully autonomous yet) is already split into many simple behaviors and independent modules, it would be possible to implement Rodney Brooks’ paradigm[1], which is relevant for such a real-time system requiring robustness with multiple goals and sensors[2].




[2] Rodney A. Brooks, “A Robust Layered Control System for a Mobile Robot”, MIT AI Memo 864, September 1985: http://people.csail.mit.edu/brooks/papers/AIM-864.pdf
