Wednesday, December 22, 2010

End Course Project 3: Scheduling our work

Date: Tuesday, 14 December 2010

Duration of activity: 2 hours

Group members participating: Michaël Ludmann, Guillaume Depoyant, Anders Dyhrberg




I. Goals for today's session

  • Define the most important and urgent tasks
  • Define the interdependencies between some of these tasks
  • Infer from these a more precise schedule for the coming weeks
  • Discuss the progress so far


II. Progress on the software part

II. a GUI


As we discussed in the last lab report, we made a first attempt at the GUI and came up with this structure: a 1280x800 window (the colors are only there to show how the window is split) in which we will have:
  • The video stream in the teal frame
  • The map in the grey frame
  • The radar in the green frame
  • The timing information in the red frame

We still have to decide what goes into the yellow and blue frames, but we are seriously considering a command list and the creation of perks to improve the game experience (although this step will only be done after every other step is completely finished, of course).
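To make the layout concrete, here is a minimal Swing sketch of how such a fixed 1280x800 split could be mocked up. Only the window size and the color-to-content mapping come from the design above; the individual frame geometries and the class name are our own placeholders.

import java.awt.Color;
import java.awt.Dimension;
import javax.swing.JFrame;
import javax.swing.JPanel;
import javax.swing.SwingUtilities;

public class GameWindowSketch {
    public static void main(String[] args) {
        SwingUtilities.invokeLater(new Runnable() {
            public void run() {
                JFrame frame = new JFrame("GUI layout sketch");
                // Absolute positioning is enough for a fixed-size mock-up.
                JPanel content = new JPanel(null);
                content.setPreferredSize(new Dimension(1280, 800));

                // Frame sizes below are invented for illustration.
                content.add(makeArea(0, 0, 800, 600, Color.CYAN));      // video stream (teal)
                content.add(makeArea(800, 0, 480, 400, Color.GRAY));    // map
                content.add(makeArea(800, 400, 480, 400, Color.GREEN)); // radar
                content.add(makeArea(0, 600, 400, 200, Color.RED));     // timing information
                // Yellow and blue frames still to be decided.

                frame.setContentPane(content);
                frame.pack();
                frame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
                frame.setVisible(true);
            }
        });
    }

    private static JPanel makeArea(int x, int y, int w, int h, Color c) {
        JPanel p = new JPanel();
        p.setBackground(c);
        p.setBounds(x, y, w, h);
        return p;
    }
}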



II. b Video streaming


A main part of our game design was that the players should have a point-of-view ("POV") camera in front of the unit. Due to the importance of this component, it was the first GUI component we designed. Basically we just wanted to prove the concept, and we figured the easiest way to achieve this feature was to use an HTC Desire phone, as we had three of these available in the group. To avoid having to do any Android programming, we scanned through the Android Market and evaluated six different programs offering video streaming from the camera over the network.


The most suitable one was IP Webcam [1].

The problem was that all of them, including the one we chose, only gave the option of streaming to a browser, whereas we needed to embed the stream into our GameGUI. So we decided to reverse engineer the JavaScript in the web page and then copy this behavior to embed the stream into our JPanel component.

Fortunately our choice had a really simple script implementation.

Picture 8: JavaScript we had to reverse engineer



The main part of this code is just a clever way of doing double buffering, and its essence boils down to a single URL exposing the latest image available from the camera:

http://<IP>:<PORT>/shot.jpg  (example: http://192.168.0.139:8000/shot.jpg)

Knowing this, it was fairly simple to make a component with a thread that continuously shows a double-buffered presentation of the latest image received. Double buffering is a common technique to avoid image flickering while redrawing graphics on screen [2].
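The sketch below illustrates the idea rather than our exact class: a JPanel whose worker thread downloads shot.jpg off-screen and only swaps the displayed image once a complete frame has arrived, so a half-downloaded frame is never painted. The class name and constructor are our own; only the /shot.jpg URL comes from IP Webcam.

import java.awt.Graphics;
import java.awt.Image;
import java.net.URL;
import javax.imageio.ImageIO;
import javax.swing.JPanel;

// A JPanel that keeps fetching /shot.jpg and repaints with the last
// complete image, so the frame being downloaded is never on screen.
public class VideoStreamPanel extends JPanel implements Runnable {
    private final URL shotUrl;
    private volatile Image lastFrame; // the "front" buffer shown on screen

    public VideoStreamPanel(String host, int port) throws Exception {
        shotUrl = new URL("http://" + host + ":" + port + "/shot.jpg");
        new Thread(this).start();
    }

    public void run() {
        while (true) {
            try {
                // "Back" buffer: the next frame is downloaded off-screen.
                Image next = ImageIO.read(shotUrl);
                if (next != null) {
                    lastFrame = next; // swap only when a full frame arrived
                    repaint();
                }
            } catch (Exception e) {
                // Feed unavailable: keep showing the last good frame.
            }
        }
    }

    @Override
    protected void paintComponent(Graphics g) {
        super.paintComponent(g);
        if (lastFrame != null) {
            g.drawImage(lastFrame, 0, 0, getWidth(), getHeight(), null);
        }
    }
}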

Even though this was just a simple solution, the result was quite impressive. See the POV example from the game field below:



During our testing of the overall game, this component started to cause some disturbances: every time the associated video feed was not available, the component waited 30 seconds for the HTTP request to time out, delaying the startup of the game.

To fix this, we show an alternative image whenever the video feed is unavailable or times out. This solved the problem. The alternative image could be anything; we just kept it simple:
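In code, the fix amounts to bounding the HTTP request and returning a placeholder when it fails. A minimal sketch of the idea follows; the helper class and the one-second timeouts are our own choices, not measured values from the project.

import java.awt.Image;
import java.io.InputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import javax.imageio.ImageIO;

// Fetch one frame with bounded timeouts instead of waiting 30 seconds
// for the default HTTP timeout; on any failure, show a fallback image.
public class FrameFetcher {
    public static Image fetchOrFallback(URL shotUrl, Image fallback) {
        try {
            HttpURLConnection conn = (HttpURLConnection) shotUrl.openConnection();
            conn.setConnectTimeout(1000); // fail fast if the phone is unreachable
            conn.setReadTimeout(1000);    // do not hang on a stalled stream
            InputStream in = conn.getInputStream();
            try {
                Image frame = ImageIO.read(in);
                return frame != null ? frame : fallback;
            } finally {
                in.close();
            }
        } catch (Exception e) {
            return fallback; // the "feed unavailable" picture
        }
    }
}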



There are many possibilities for making this component even better: on the streaming side, by writing our own dedicated Android app, and on the hardware side, by adding suspension to the camera mount to make it less shaky. But considering that we were only aiming for a proof of concept, we are quite happy with the result.

It is worth noting that reusing this component in another context only requires adding it to another GUI (it extends JPanel and should be treated as such) and setting the IP of the Android phone with the stream enabled.
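For example, reuse could look like this (hypothetical, using the VideoStreamPanel sketch from above):

import javax.swing.JFrame;

public class ReuseDemo {
    public static void main(String[] args) throws Exception {
        JFrame frame = new JFrame("Another GUI");
        // Only the phone's IP and port are needed to reuse the component.
        frame.add(new VideoStreamPanel("192.168.0.139", 8000));
        frame.setSize(640, 480);
        frame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
        frame.setVisible(true);
    }
}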



III. Progress on the mechanical prototype

At this point, we believe we have a good working mechanical prototype for a robot unit played by a human (terrorist or counter-terrorist - it does not matter, since both will be identical).

Since last time, we managed to have the robot properly carry an HTC phone in front of itself, with the camera placed in the middle (hence the offset of the phone you can see in the picture below) and facing slightly upward.

Furthermore, the robot now has two more features:
  • A radar turret (ultrasonic sensor) moved by one motor. This will let us display the surrounding environment on each player's GUI, showing nearby obstacles as points on a radar screen (a rough sketch of the sweep is given after this list).
  • An infrared sensor [3]: this way, each robot will be able to locate the IR bomb from a good distance.
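To make the turret idea concrete, here is a rough sketch of the sweep loop, assuming leJOS NXJ on the brick (for the demo video below we run the official firmware instead). The motor and sensor port assignments are placeholders, and our actual code may differ.

import lejos.nxt.Motor;
import lejos.nxt.SensorPort;
import lejos.nxt.UltrasonicSensor;

// Sweep the ultrasonic turret back and forth and record one distance
// reading per angle, giving the points to draw on the radar screen.
public class RadarSweep {
    public static void main(String[] args) {
        UltrasonicSensor sonar = new UltrasonicSensor(SensorPort.S1);
        Motor.A.setSpeed(90); // slow sweep so readings stay usable

        while (true) {
            for (int angle = -90; angle <= 90; angle += 10) {
                Motor.A.rotateTo(angle);               // point the turret
                int distanceCm = sonar.getDistance();  // 255 means "nothing in range"
                System.out.println(angle + ";" + distanceCm); // one radar point
            }
            Motor.A.rotateTo(-90); // swing back for the next sweep
        }
    }
}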



To show the efficiency of the prototype, solve some remaining issues and get a first idea of the result we should expect, we made a video of the robot moving around while the streaming video is displayed on a computer screen.

The robot is remote controlled over Bluetooth using another HTC Desire phone running an official application made by LEGO, namely MINDdroid [4]. For this purpose, we needed to flash the NXT brick with the official LEGO NXT firmware. To move the robot, we simply move the phone, which uses its internal accelerometer as a remote control.

The video feed is streamed by the mounted HTC Desire phone running the application IP Webcam [5]. A computer connected over Wi-Fi displays the feed thanks to a Java applet in a web browser.






IV. Assignments for the next session

A Skype meeting is scheduled for Monday, 27 December 2010.

Tasks:




[5] IP Webcam Android application by PAS: http://www.appbrain.com/app/ip-webcam/com.pas.webcam
