Thursday, September 23, 2010

Lab session 4 : LEGO Segway

Date: 23. September 2010

Duration of activity: 5 hours (11:00 - 16:00)
Group members participating: all group members, that is Anders Dyhrberg, Guillaume Depoyant, Michaël Ludmann and Michal Owsinski.

I. Objectives and summary

I. 1. Objectives

Build a robot able to balance itself on two wheels using only a light sensor.

I. 2. Summary
During this lab session, we managed to:
  • Study and implement some examples of balancing Legway robots
  • Build two different models to find the best one
  • Use the given source code [1] and change some parameters to fit our own robot
After this lab session, some of us continued to work at home, and we therefore managed to:
  • Enable Bluetooth dialogue between the NXT and a computer
  • Exchange and visualize on a computer, in "real time", data coming from the NXT over Bluetooth
  • Build another version of the Legway and experiment with it

II. Bluetooth communication: how to make experiments easier

This was not part of the assigned task for this lab session, but we realized that debugging on the NXT was tedious and time consuming: continuously reconnecting and downloading the debug logs from the brick interrupted our workflow.
To make this easier, we extended our already engineered MultiLogger with a BluetoothLogger, both based on the same interface ILogger to make them interchangeable.
This was fairly simple to implement and clearly useful. For now we just use 'nxjconsole' on the PC to read the stream sent from the NXT.
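The logger arrangement described above can be sketched as follows. The names ILogger, MultiLogger and BluetoothLogger come from our code; the method names and the delegation details below are illustrative, and on the NXT the Bluetooth logger would write to a leJOS Bluetooth connection read by nxjconsole.

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative logger interface: one method, so concrete loggers
// (file, LCD, Bluetooth) stay interchangeable.
interface ILogger {
    void log(String message);
}

// MultiLogger forwards each message to every registered logger.
class MultiLogger implements ILogger {
    private final List<ILogger> targets = new ArrayList<ILogger>();

    void add(ILogger logger) {
        targets.add(logger);
    }

    public void log(String message) {
        for (ILogger logger : targets) {
            logger.log(message);
        }
    }
}
```

Because every logger implements the same interface, adding Bluetooth output is just one more `add(...)` call, with no change to the code that produces the messages.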

To test how well it performed and how much data we could push through, we implemented a revised version of our SoundSampling program from Lab Session 3, where we log the sound levels as fast as we can read them and convert each reading to a bar of '*' characters.
This gave the following interesting video.
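The conversion to a bar of '*' characters can be sketched as below. The class name and the 0-100 input range are illustrative assumptions, not our exact code.

```java
// Map a sound reading in 0..100 to a bar of '*' characters of the
// given maximum width, so a stream of readings reads like a chart.
class SoundBar {
    static String toBar(int level, int width) {
        int stars = level * width / 100;  // scale reading to bar width
        char[] bar = new char[stars];
        java.util.Arrays.fill(bar, '*');
        return new String(bar);
    }
}
```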


III. Results

III. 1. Plotting the tilting action: light measured according to angle

Implementing the program and converting the parameters to floating point was fairly simple.
Initially we did not have much success with the Segway, so we quickly decided that, in order to see what was going on, we needed to measure the linearity of the light sensor's response and find out how we should proceed.
The result was quite surprising, and not really useful for a Segway.


We decided to try moving the sensor a bit further from the ground, and did the measurement again.


This time the result was much better: in the range 90 ± 10 degrees the chart was actually almost linear.
We therefore decided to remove the line of code compensating for the sensor not returning a linear result.
This improved our system a lot.
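Dropping the compensation line works because a nearly linear sensor only needs two calibration points to map a raw light value to an approximate tilt angle. A minimal sketch of that idea (the class name and calibration numbers are illustrative, not our measured data):

```java
// Linear tilt estimate from two calibration points
// (read0, angle0) and (read1, angle1).
class TiltEstimator {
    private final double slope;
    private final double offset;

    TiltEstimator(double read0, double angle0, double read1, double angle1) {
        slope = (angle1 - angle0) / (read1 - read0);
        offset = angle0 - slope * read0;
    }

    double angleFor(double reading) {
        return slope * reading + offset;
    }
}
```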

III. 2. Consequence: changing the position of the light sensor

Once more, before focusing on the code and modifying the distance values that define the robot's behaviour, we analysed how the position and orientation of the sensor could improve the sampling.

So we tried, geometrically speaking, to find the best configuration. We figured out that the worst case would be to put the sensor on top of the wheels and orient it towards the wheel axis: since the axle stays at a fixed height, tilting the robot then barely changes the distance the sensor measures. The further the sensor sits from this configuration, the more a given tilt changes the reading, and the more usable the sampled values become.
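The geometric argument above can be made concrete. Place the sensor at distance r from the wheel axle, at mounting angle phi; the sensor's height above the ground is then roughly h(theta) = axleHeight + r * sin(phi + theta) for tilt theta, so its sensitivity to tilt is dh/dtheta = r * cos(phi + theta). With r = 0 (sensor on the axis) the reading never changes, which is the worst case we describe. The following sketch only evaluates that derivative; all names are our own.

```java
// Sensitivity of the sensor's ground distance to tilt:
// dh/dtheta = r * cos(phi + theta).
// r = 0 gives zero sensitivity, the worst-case mounting.
class SensorGeometry {
    static double sensitivity(double r, double phi, double theta) {
        return r * Math.cos(phi + theta);
    }
}
```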

IV. Conclusion

Finally, we managed to implement the program and make the robot work, but there are a few points we must mention. The algorithm we used was not designed for the robot we constructed, which is why the robot was not 100% reliable (sometimes it had trouble keeping its balance). This is probably due to parameters such as the position of the sensor, the radius of the wheels or the inertia of the robot. We therefore changed the PID values to test how the robot reacted and how its behaviour changed. We also tried modifying the robot itself, for example shifting its centre of inertia by adding some weight on top of it, which seemed to give the robot more stability.
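The PID values we tuned enter the balance loop roughly as follows. This is a generic PID sketch, not the actual code from [1]; the error would be the difference between the calibrated balance reading and the current light reading, the output a motor power, and the constants below are illustrative.

```java
// Generic PID update for a balancing loop. KP, KI and KD are the
// values we experimented with; these numbers are illustrative.
class Pid {
    static final double KP = 2.0, KI = 0.1, KD = 8.0;
    private double integral;
    private double previousError;

    // error: balance reading minus current reading; dt: loop period (s).
    double update(double error, double dt) {
        integral += error * dt;
        double derivative = (error - previousError) / dt;
        previousError = error;
        return KP * error + KI * integral + KD * derivative;
    }
}
```

Raising KP makes the robot react harder to a given tilt, while KD damps the reaction, which is why small changes to these constants visibly changed the robot's behaviour.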

We later also tried to build exactly the same model used by Brian Bagnall, who wrote the program we implemented. In that model the light sensor is much further from the wheel axis, so it is more sensitive to tilting. However, the results were not really more conclusive: the robot still sometimes needed to be pushed in one direction in order to stay balanced and not fall.

V. References

[1] Brian Bagnall, Maximum LEGO NXT: Building Robots with Java Brains
