Wednesday, January 19, 2011

End Course Project 14: Conclusion

The full PDF report for this project can be downloaded at the following address:
http://code.google.com/p/legointernational/downloads/list

Discussing and planning a lot at the beginning of the project helped us define an achievable goal that did not change much during the process. We therefore agree that we have fulfilled our main task and created a game that is interesting to play, with only minor flaws. We are, however, aware that there is still room for many improvements.

The tests went even better than expected: by placing both computers at the middle of the sides of the battlefield, we had full coverage of the area. We certainly had to fix some parts of the code before we had a fully working game, even once we were on the playfield for the first time, but once that was done, we could be confident.

The final test went almost seamlessly and we were able to play our game as we expected it to be. Once the process of starting the game is understood, one encounters no communication problems at all.

One added value of our game is that it can be played almost anywhere, in numerous configurations. We basically only need a floor with obstacles (which can be doll or LEGO houses, or just chairs and tables), and the players can even play in separate rooms thanks to our all-wireless communications. A user can play sitting in front of his computer or can follow his robot unit by moving alongside it.

Since a video is better than a long description, below are some excerpts from games played during testing (shown during our presentation on the 13th of January 2011).


A first person point of view while wandering through the city

Overall presentation of a game

After our examination presentation and the interest from the company Rezultat (see the part below: 16.4.2), we realized that maybe we had something good going here. It was clear that it was working from a technical perspective. But does it work from a kid’s perspective, where nobody cares about WiFi and BT interfaces?

We decided to benchmark our game against our top priority from the beginning. Is it fun to play?

So we invited two kids to play the game for an hour and filmed their response: Laurits, “almost” 8 years old, and Frede, 9½ years old.

Before showing them the game field, we did a small test where we had them play the classic game of Simon[1] to get a sense of the length of color sequence they could remember, since a too-long sequence would prevent them from winning as the CTU.

The result was that we should not expect them to remember more than 5-7 colors in the game.
To adjust for this during the game, we simply helped them remember the remaining 5 colors of the 10 in the sequence. This strategy worked well.

They understood the strategy of the game very quickly, and after 2 games they were controlling the robots perfectly. Using the Xbox controller clearly helped them adopt the control part of the game. It was a major advantage that the controllers were wireless: they quickly realized that they could move freely between the game field and the computer, and it was very interesting to see them moving to and from the computer several times during a game.


CTU wins!


CTU loses... Great remarks and reactions from the boys to the result at the end.


A video with some good elements: we see Laurits start the game by running onto the field to control his unit, and a little later we see Frede leave the field mid-game to use the computer view instead.


After they had played the game more than 5 times, they started to challenge it a lot:
·         They started to drive around the edges of the play field, where the Bluetooth range was a little challenged.
·         They started to fight with the robots and obstruct each other from pushing the ball off the bomb, which clearly uncovered a few weaknesses in our LEGO constructions.
·         They started to insist that they wanted to remember more of the color sequence themselves.
·         They started to have stronger opinions about whose turn it was to plant the bomb or defuse it. There was a tendency that defusing the bomb was more fun, mainly because it was fun to push off the ball and cut the wires.
After the hours of playing we did a small interview about the game. They had some good comments. The journalistic quality of the interview can most likely be disputed, but we are quite satisfied with the comments we received from the boys. It was clear that the game was a lot of fun for them.


Interview of the two boys who tested the game (in Danish).

Thanks to Frede and Laurits for helping us test the game.

Why is it so fun and interesting to play such a game, which is neither like Counter-Strike (a video game) nor like paintball (a real-world game), but rather in between? It may be precisely because it is a different kind of gameplay, one that involves both reality and a virtual world. It is nice to play Counter-Strike since you just need some computers and not much material to get a big playground; it is fun to play real paintball since you get thrilling, real sensations, with the added feeling of playing almost “for real”, with good equipment that provides great feedback.

In a 100% virtual world, you can get things you could not have if you played the game for real, for instance an improved head-up display and other augmented reality features. In a 100% real world, you don’t have to comply with a developer’s constraints: you can go wherever you want and do whatever is physically possible, so the world seems less closed and gives the player a greater feeling of freedom.

Our game sits in the middle of those two worlds and tries to take the best from both. In fact, a player can also choose to limit himself to one of those realities, either by only focusing on his screen and almost leaving aside the real robot behind the game, or by never looking at his screen and following his real robot (as the kids sometimes did during our experiment). Obviously, you always have to take both the virtual and the real part into account while playing: you have to be aware of the physical limitations of the robots, and you need your screen at least to defuse the bomb by cutting the wires.

Besides, having physical robots playing in a small but “real” city makes the game more fun for spectators, who can physically move around the playfield to get their very own point of view on the action. They can act as supporters and become part of the game themselves by giving the players information about the action.

There may be many more reasons that explain the interest of such a concept. Still, watching the enjoyment children get while playing this game is a satisfaction that needs no explanation in itself…

“What could we do if we had some extra time?”

This is the question we answer in this part. Indeed, the current version of the game is only v1.0, and we cannot stop coming up with new ideas every time we brainstorm. Here is a list of things that are likely to be implemented within the next months (knowing that it is highly likely that we will continue this project).

·         Being a terrorist is thrilling when you’re the one deciding where and when to plant the bomb. The only thing is that once you’ve planted it, there’s nothing left for you to do but try to block the counter-terrorist during the defusing phase. To change that, we thought about:
o    Giving perks to the terrorist as soon as he drops the bomb
§  Disable video of the counter-terrorist unit
§  Invert commands of the counter terrorist
§  Reverse the defusing sequence
o    Implementing a combat system between the terrorist and the counter-terrorist in which every unit gets life points (losing all the points would lead to a temporary curse).

·         Put another webcam on the field (this could be linked to the perks: a unit could use the “sky view” in order to have a better understanding of the map and locate some strategic points). See the video below to get an idea of this feature.
Switching the video feed in real time

·         Improve the program so as to have a quicker way to get the game started and upgrade the “Server-Client” protocol (as we mentioned before).

·         Improve the mechanical part: more aggressive and more stable robots, and a new claw system for the bomb (which would make it possible to re-attach the bomb…).

·         Add autonomous units on the field, which could act as decoys or try to search for and defuse the bomb. Those units could be deployed when the bomb is planted or at another time.

·         The latest testing with the kids revealed that users are not always as careful as we are when we test the game ourselves, and as a consequence some mechanical flaws are to be fixed:

o    The spear pole is not sturdy enough and may break when robots collide with each other (and kids love that!) or with a building.
o    The claw that attaches the bomb unit to the CTU may open by itself if the carrier unit makes a sharp turn or runs over a small obstacle. Once the children noticed that, they were more careful and more often managed to plant the bomb where they wanted. Still, the mechanism needs to be improved – we already have ideas for that.
o    Since running into the opponent is quite fun (even if not yet really rewarded by the game rules), we have to make sure that the robots are sturdier and can systematically keep their balance if they run into or over an obstacle. This is especially needed when dealing with child users.

·         Play again, again and again in order to balance the game and see how things can be further improved (tune the game time, the penalties the bomb chooses and many other details that we haven’t thought about yet).

… since a company is interested in our project!

As a part of developing this project, we had the opportunity to apply the concept to a specific and well-developed game field. The motivation for using this game field was mainly to increase the quality of the play experience delivered by the point-of-view video stream from the robots.

This game field, owned by Rezultat, was put at our free disposal[2].

We started the project by going to see the game field, mainly for inspiration on how to use it to our best advantage.



But we did not return to the game field until after the New Year, because it did not make sense to go there again before we were ready to begin testing the concepts.

During the last two weeks of the project we were at the game field several times, testing the concept as it grew more and more complete. While we were doing this testing, the owner of the game field became more and more interested in the game concept we had developed as he saw the final tests take place. He mentioned that he found several of the concepts of our game much better than the game he is using today, and asked whether we would allow him to adopt our game and shape it to suit his concept. He also asked whether we would like to be part of the process of finalizing the game into something that can be used in a commercial context. All members of the group agreed to this, and all would to some extent like to be part of the finalization process. The process for the finalization is TBD, but the group expects to meet in week 4 to discuss the plan with Rezultat.

End Course Project 13: How to start the game

In order to start a game, the user must follow a certain protocol so that everything works properly. To make it easier, make sure to proceed as follows, in the right order:

·         Start the video application on the Android phones, note the address:port information and place the phones on the right units accordingly;
·         Start the bricks (every single brick should be in “Waiting for PC” mode);
·         Set the settings (in the settings.java file) in order to ensure connectivity (video, Bluetooth, client-server communication); a sketch of the kind of settings involved is shown after this list;
·         Start the server (server.java);
·         As soon as the server machine displays the “Waiting for client” window, start the client (client.java);
·         From then on, everything should work by itself. Every player should have a GUI on his screen within a few seconds and wait for the game to start. The server has to set everything up, and as soon as it is done a window is displayed on the computer side to start the game;
·         Click on “Start Game” and enjoy.
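As an illustration of the kind of connectivity settings gathered in settings.java, a minimal sketch is given below. The field names, addresses and ports here are hypothetical placeholders, not our actual values.

```java
// Hypothetical sketch of the kind of constants settings.java gathers.
// Names, addresses and ports are placeholders, not our real configuration.
public class Settings {
    // Video streams noted from the Android phones (address:port)
    public static final String CTU_VIDEO_URL = "http://192.168.1.20:8080/video";
    public static final String T_VIDEO_URL   = "http://192.168.1.21:8080/video";

    // Bluetooth names of the NXT bricks
    public static final String CTU_BRICK_NAME  = "NXT_CTU";
    public static final String T_BRICK_NAME    = "NXT_T";
    public static final String BOMB_BRICK_NAME = "NXT_BOMB";

    // Client-server communication
    public static final String SERVER_ADDRESS = "192.168.1.10";
    public static final int    SERVER_PORT    = 4444;
}
```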

You might experience a few problems starting the game. To avoid losing time fixing them, try to keep every device on the same local network, make sure the port is suitable, and try running the application with the firewall disabled (it saved us a couple of times).

The next step in the development of the program would be to make an easier, more user-friendly interface if we want the game to be more “commercially” attractive.

For the time being, the communication between all the different components works as shown in the diagram below.


Sequence diagram of the game

End Course Project 12: Programming the bomb

The bomb plays one of the most important roles in the Lego Strike game. It is an NXT unit equipped with two lights (green and red), a touch sensor, a motor, an IR ball and the built-in LCD display (see Picture 38: Bomb unit with active countdown). The lights are used to tell the player (counter-terrorist) whether he cut the right cable: the green light flashes on a correct color and the red one on an incorrect color. The motor drives the claws that hold on to the terrorist unit; they open only when the terrorist plants the bomb.


The touch sensor is used as the trigger for the “defusable” state. It is important to mention that there was a bouncing problem we had to solve: while driving, the IR ball bounced on the touch sensor, triggering the defusable state at the wrong moment. The same problem occurred once the bomb was planted and the CTU was trying to push the ball off it. We solved this by simply adding a 2-second delay during which we checked whether the touch sensor was pressed again.

Picture 37: Adding a delay for the touch sensor so that ball bouncing is not taken into account
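Since the code in Picture 37 is only shown as a screenshot, here is a minimal sketch of the idea, assuming the leJOS TouchSensor class; the 2-second re-check is the point, the surrounding names are ours.

```java
import lejos.nxt.SensorPort;
import lejos.nxt.TouchSensor;

// Sketch: only treat the ball as really pushed off if the touch sensor stays
// released for 2 seconds; a re-press within that window counts as a bounce.
public class BallOffDetector {
    private final TouchSensor touch = new TouchSensor(SensorPort.S1);

    public boolean ballReallyPushedOff() throws InterruptedException {
        if (touch.isPressed()) {
            return false;                          // ball still on the sensor
        }
        long deadline = System.currentTimeMillis() + 2000;
        while (System.currentTimeMillis() < deadline) {
            if (touch.isPressed()) {
                return false;                      // it was just a bounce
            }
            Thread.sleep(50);
        }
        return true;                               // released for 2 s: defusable
    }
}
```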

When the counter-terrorist pushes the IR ball off it, the bomb generates and sends the color sequence, which is a string of 10 letters from the set [ygbr], where y = yellow, g = green, b = blue and r = red.
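A minimal sketch of how such a sequence can be generated (the class and method names here are ours, not necessarily the ones used on the bomb):

```java
import java.util.Random;

// Sketch: build a random defuse sequence from the set {y, g, b, r}.
public class SequenceGenerator {
    private static final char[] COLORS = { 'y', 'g', 'b', 'r' };
    private final Random random = new Random();

    public String generateSequence(int length) {
        StringBuilder sb = new StringBuilder(length);
        for (int i = 0; i < length; i++) {
            sb.append(COLORS[random.nextInt(COLORS.length)]);
        }
        return sb.toString();   // e.g. "ygbrrgybgr" for length 10
    }
}
```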

The LCD display shows the countdown timer. We used an image converter[1] which can convert general-format images into leJOS NXJ source code or leJOS NXT format images, and saved them in the source code file. We may use that countdown timer on the NXT LCD for further improvements; for example, we could hide the remaining time from the game interface so that it is only visible on the bomb.

Picture 38 : Bomb unit with active countdown


The bomb runs through different states (see also Picture 39: Block diagram of bomb states and Picture 40: Bomb decision diagram; a minimal code sketch follows the diagrams):

1.       Waiting for PC connection – passed when there is a connection between the host and the bomb (i.e. when the game is launched).
2.       Waiting for Game Start – passed when the method recievedNewMessage receives a “GT####” message with the game time in seconds.
3.       Waiting for planted – after the game has started, the bomb can explode (random chance or timeout) and can be planted.
4.       Waiting for “defusable” – once the bomb is planted, it can be set to “defusable” (by pushing the IR ball off) or it can explode because of a timeout or random chance.
5.       Waiting for Defuse Sequence – the bomb could still explode, or be defused.

Picture 39: Block diagram of bomb states


Picture 40: Bomb decision diagram
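To make the diagrams above a little more concrete, a minimal sketch of such a state machine is given below. The enum values follow the list above, but the class and method names are illustrative, not our exact identifiers.

```java
// Sketch of the bomb state machine from the diagrams above (names illustrative).
enum BombState {
    WAITING_FOR_PC,              // left when the host connects
    WAITING_FOR_GAME_START,      // left when a "GT####" message arrives
    WAITING_FOR_PLANTED,         // may explode (timeout or random chance)
    WAITING_FOR_DEFUSABLE,       // ball pushed off -> defusable, or explode
    WAITING_FOR_DEFUSE_SEQUENCE, // could still explode, or be defused
    DEFUSED,
    EXPLODED
}

class BombStateMachine {
    private BombState state = BombState.WAITING_FOR_PC;

    void onPcConnected()          { state = BombState.WAITING_FOR_GAME_START; }
    void onGameTimeReceived()     { state = BombState.WAITING_FOR_PLANTED; }
    void onPlanted()              { state = BombState.WAITING_FOR_DEFUSABLE; }
    void onBallPushedOff()        { state = BombState.WAITING_FOR_DEFUSE_SEQUENCE; }
    void onSequenceCompleted()    { state = BombState.DEFUSED; }
    void onTimeoutOrSuddenDeath() { state = BombState.EXPLODED; }

    BombState current()           { return state; }
}
```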


During the game there is a chance that the bomb will explode without warning (sudden death mode). The chances are as follows (default, editable values):

·         For the first half of the game time the chance is 0%
·         Between ½ and ¾ of the game time the chance rises to 1%
·         Between ¾ and 7/8 of the game time the chance is 5%
·         And from 7/8 of the game time until the end the chance is 15%

The source code for Sudden death mode:


Picture 41 : source code for Sudden Death mode
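Since Picture 41 is a screenshot, here is a sketch of the idea behind it: the probability per check depends on how far into the game time we are, using the thresholds listed above (the class and method names are ours).

```java
import java.util.Random;

// Sketch of the sudden-death check, using the default chances listed above.
public class SuddenDeath {
    private final Random random = new Random();

    // elapsed and total are the elapsed and total game time in seconds
    public boolean explodesNow(int elapsed, int total) {
        double fraction = (double) elapsed / total;
        int chancePercent;
        if (fraction < 0.5) {
            chancePercent = 0;       // first half of the game
        } else if (fraction < 0.75) {
            chancePercent = 1;       // between 1/2 and 3/4
        } else if (fraction < 0.875) {
            chancePercent = 5;       // between 3/4 and 7/8
        } else {
            chancePercent = 15;      // last 1/8 of the game
        }
        return random.nextInt(100) < chancePercent;
    }
}
```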

When the bomb becomes defusable (and each time the CTU cuts an incorrect cable), it generates a new string with a random sequence of colors.

The method checkSequence verifies whether the cable cut was correct:

Picture 42 : checking that the cable cut is correct
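Since Picture 42 is also a screenshot, a minimal sketch of what such a check amounts to is shown below (the surrounding fields and names are ours):

```java
// Sketch: compare the color of the cut cable against the next expected
// letter of the sequence generated by the bomb.
public class DefuseChecker {
    private String sequence;  // e.g. "ygbrrgybgr", generated by the bomb
    private int position;     // index of the next wire to cut

    public void newSequence(String generated) {
        sequence = generated;
        position = 0;
    }

    // Returns true (green light) if cutColor matches the expected wire,
    // false (red light) otherwise; a wrong cut restarts the sequence.
    public boolean checkSequence(char cutColor) {
        if (sequence.charAt(position) == cutColor) {
            position++;
            return true;
        }
        position = 0;
        return false;
    }

    public boolean isDefused() {
        return position == sequence.length();
    }
}
```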


End Course Project 11: Graphical User Interface (GUI) layout

The GUI concept for our game evolved throughout the project. As good design practice, we created a clean cut between the GUI and the game model. This turned out to be a good idea, since we changed the GUI layout several times during the process. With the GUI clearly decoupled from the model, we could easily experiment and change the layout without interrupting those working on the game model.

A GUI quickly became necessary to run different kinds of tests: for the widgets, the video streaming, the timer, etc. So we ended up with a first version, still buggy but efficient at its tasks.

Picture 34 : Our early stage layout


In order to achieve a nice layout we used layout managers. These are always a little troublesome to get right, but there is a well-written set of official articles on how to achieve the goals we wanted[1][2].

We mainly used two types of layout managers.
  • BoxLayout, to achieve a horizontal or a vertical layout.
  • FlowLayout, as the standard layout where nothing particular was required.
The final layout ended like the sketch below, mainly based on the BoxLayout.

Picture 35 : Sketch for the final layout



Orange is a vertical BoxLayout JPanel (a minimal Swing sketch of this nesting follows the widget list below).

Yellow is a horizontal BoxLayout JPanel.

1.       Obstacle Radar Widget
2.       Streaming Component Widget
3.       IR Radar Widget
4.       Time Panel
5.       Info Panel
6.       Map Panel
7.       Wirecut Widget
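A minimal Swing sketch of how such a nesting can be written is shown below. The actual grouping of the widgets into columns is an assumption on our part, and the JLabels simply stand in for the real widgets.

```java
import javax.swing.Box;
import javax.swing.BoxLayout;
import javax.swing.JFrame;
import javax.swing.JLabel;
import javax.swing.JPanel;

// Sketch of nested BoxLayout panels; the JLabels are placeholders for the widgets.
public class LayoutSketch {
    public static void main(String[] args) {
        JPanel leftColumn = new JPanel();                        // vertical (orange)
        leftColumn.setLayout(new BoxLayout(leftColumn, BoxLayout.Y_AXIS));
        leftColumn.add(new JLabel("Obstacle Radar Widget"));
        leftColumn.add(new JLabel("Streaming Component Widget"));
        leftColumn.add(new JLabel("IR Radar Widget"));

        JPanel rightColumn = new JPanel();                       // vertical (orange)
        rightColumn.setLayout(new BoxLayout(rightColumn, BoxLayout.Y_AXIS));
        rightColumn.add(new JLabel("Time Panel"));
        rightColumn.add(new JLabel("Info Panel"));
        rightColumn.add(new JLabel("Map Panel"));
        rightColumn.add(new JLabel("Wirecut Widget"));

        JPanel root = new JPanel();                              // horizontal (yellow)
        root.setLayout(new BoxLayout(root, BoxLayout.X_AXIS));
        root.add(leftColumn);
        root.add(Box.createHorizontalStrut(10));
        root.add(rightColumn);

        JFrame frame = new JFrame("Layout sketch");
        frame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
        frame.setContentPane(root);
        frame.pack();
        frame.setVisible(true);
    }
}
```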

The final result of the layout is as shown below, based on the layout above.

Picture 36 : Final layout in action

End Course Project 10: Designing the widgets

Widgets are reusable elements of a graphical user interface that display an information arrangement and provide standardized data manipulation[1].

We needed to implement many widgets to make a proper GUI for both players, in order to show them more information (obtained from the sensor data) than they could get from the streaming video alone.

During the early stage of our game design, we needed to detect when the CTU had found and gained access to the bomb. For this purpose we added the IR ball to the game (see 8.1). With the IR ball available in the game, it was quite obvious to include a sensor that could detect the presence of the ball. After a little research we found the IR Seeker V2 from HiTechnic[2].

This sensor can detect the direction the IR light source is coming from. The image below, from the HiTechnic homepage, gave us the inspiration for the IR Radar.


The documentation states that two sets of information are available.

There are 5 detectors in a horizontal row inside the sensor, and their values can be read directly. From the light intensity in these 5 detectors, the sensor itself derives 1 of 9 possible directions, as shown in the image above; this direction can also be read from the sensor. From this we sketched our IR Radar Widget.



Testing the new widget and the range of the IR Seeker

Coding the Radar Widget was fairly straightforward. A halfway result can be seen above to the right. At this stage we actually thought we could also read the intensity in each direction, but this was a misunderstanding of the documentation; in the end a direction is simply on or off, and only one is active at a time.
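As an illustration of the widget side (not our exact implementation), a small Swing component that highlights 1 of the 9 directions could look roughly like this:

```java
import java.awt.Color;
import java.awt.Graphics;
import javax.swing.JPanel;

// Sketch: paint 9 sectors of 20 degrees across the front half circle and
// highlight the direction reported by the IR Seeker (1..9, 0 = no ball seen).
public class IRRadarSketch extends JPanel {
    private int direction = 0;   // last direction read from the sensor

    public void setDirection(int direction) {
        this.direction = direction;
        repaint();
    }

    @Override
    protected void paintComponent(Graphics g) {
        super.paintComponent(g);
        int size = Math.min(getWidth(), getHeight());
        for (int dir = 1; dir <= 9; dir++) {
            // direction 1 is far left, 9 is far right; each sector covers 20 degrees
            int startAngle = 180 - dir * 20;
            g.setColor(dir == direction ? Color.RED : Color.DARK_GRAY);
            g.fillArc(0, 0, size, size, startAngle, 20);
        }
    }
}
```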

Demo of the IR Radar during a game

During the early stage of our game design it was quite clear that, while driving around in the city with this narrow camera view, we needed to assist the player with more information to ease navigation in the play field. We decided to build a classical obstacle radar. Initially there were some mechanical issues to overcome, e.g. classical radars rotate continuously in the same direction, but with a wire connected to the sensor we need to switch direction every 360 degrees. For more mechanical info, see article 5.2.2.

We decided that we wanted to give the Obstacle Radar a classical look’n’feel, so a little research on Google basically just confirmed what we already had in mind.

Picture 29: A few inspirational examples from making an image search for a radar on Google[3]

At the time of the obstacle radar widget development it was not yet decided what the final size would be, so it was a requirement that the widget should scale to any size. It was also known from our lab experiment in Lab session 2: Experimenting with the ultrasonic sensor[4] that the range is around ~2.5 m, and no matter what the final size of the radar was, it should be possible for the application creator to scale the range to get the highest resolution of the dots in the radar.

The interface for using the radar is quite simple.

Add the radar to the GUI: it inherits from JPanel and can be treated as such. Setting up the radar requires only 2 settings from the application GUI:

·         setCmToPixelRelation(double factor)
This factor sets how many pixels represent a distance of 1 cm on the radar. This was a functional solution, but if we were to spend more time improving the radar widget, it would be more useful to set the range of the radar and have this cm-to-pixel factor calculated automatically.

·         setObjectAge(int timeInMs)
As can be seen in the video, the objects detected by the radar fade as they get older. This setting sets the lifespan of the objects. Setting it to 5000 means that the instant an object is added it appears at full brightness, and then it fades linearly over the next 5000 ms towards the background color; when this happens, it is removed from the array of objects known by the radar.

To put the radar into action, the only thing required is to call the function addObject(int Xcm, int Ycm) to indicate where a new obstacle has been detected. The rest is handled by the radar; a short usage sketch follows.
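The usage sketch below shows how the calls described above fit together; the exact widget class name (here RadarWidget) and the surrounding frame code are assumptions for illustration.

```java
import javax.swing.JFrame;

// Sketch: wiring the obstacle radar widget into a GUI and feeding it one
// obstacle position reported by the brick.
public class RadarUsageSketch {
    public static void main(String[] args) {
        RadarWidget radar = new RadarWidget();     // the widget described above
        radar.setCmToPixelRelation(0.8);           // 1 cm on the field = 0.8 pixels
        radar.setObjectAge(5000);                  // detected objects fade over 5 s

        JFrame frame = new JFrame("Obstacle radar");
        frame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
        frame.getContentPane().add(radar);
        frame.setSize(400, 400);
        frame.setVisible(true);

        // Whenever the brick reports an obstacle at (x, y) in cm relative to the unit:
        radar.addObject(120, -45);
    }
}
```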

The final result can be seen below.

Picture 31 : Final result for the obstacle radar widget

The distance rings each indicate 1 meter; hence this image is from before we achieved the correct scaling. The actual calculation of the radar dots is done on the brick. It is a simple sin/cos calculation based on the distance measured by the sensor and the angle measured by the tacho counter in the LEGO motor (see 7.2.1 for the calculation).
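A sketch of that calculation, assuming the angle is measured in degrees with 0° pointing straight ahead:

```java
// Sketch: convert the measured distance (cm) and sensor angle (degrees,
// 0 = straight ahead) into the (x, y) coordinates sent to the radar widget.
public class RadarDotCalculation {
    public static int[] toCartesian(double distanceCm, double angleDeg) {
        double rad = Math.toRadians(angleDeg);
        int x = (int) Math.round(distanceCm * Math.sin(rad));  // sideways offset
        int y = (int) Math.round(distanceCm * Math.cos(rad));  // forward offset
        return new int[] { x, y };
    }
}
```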

A last-minute improvement to this widget was to overlay the compass direction of the unit, which can also be seen above. Only two functions are required to use this feature.

·         setShowCompass(boolean enable)
This function is used to enable the compass feature. It is disabled by default.

·         setCompassDirection(int angleDeg)
This is used to set the last known direction sent from the unit.

We also considered improvements for the obstacle radar. Most common radars have a sweep indicating where the scanner is currently searching for obstacles. We considered adding this feature, but were afraid of overloading the BT bandwidth by adding another 200-400 messages per second in order to achieve a fluent, correct sweep, so it was omitted. With more time we could implement a simulated sweep that only needs to re-sync every time we switch direction, which should work well enough for a game concept.

The game was designed to include a bomb, and our original idea was just to present a button-press sequence for the user to remember, similar to the image below.


This would have worked perfectly for the game, but we kept thinking that normally (in movies) when you defuse a bomb you have to cut the right wires. So we decided to combine this idea with wire cutting: instead of pressing a color sequence, you cut a colored wire sequence. It is the exact same exercise and complexity, but more fun in the game context.

So we found an image of some pretty wires on the net.[5]



From this image we did a little makeover using GIMP to achieve the graphics we wanted, and we drew a set of uncut wires and a set of cut wires in the color range we required. The result can be seen in the two images below.


With these graphics available, it was quite simple to show the color sequence generated by the bomb.

Only a few functions are required to use the widget (a short usage sketch follows the list below):

·         setSequence(String sequence)
The string must consist of the letters g, b, r and y, each representing the next color in the sequence: green, blue, red and yellow respectively. The string can be any length from 1 to 10 characters, and the widget will adjust accordingly. The color combination can be any possible combination.

·         setShowColor(boolean enabled)
This function either shows the colors of the current wires or turns all the wires white. It is used in the game when the player presses the first color: then all the wires turn white, and the player has to remember the rest.

·         nextWireColorCut(char color)
This function allows the application designer to simply forward the color press from the user; the widget handles the rest. If the color is right, the widget shows the wire as cut; if it is wrong, it clears the screen and awaits a new sequence from the bomb.
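The short usage sketch below ties these calls together for one defuse attempt; the widget class name (here WirecutWidget) is an assumption on our part, while the calls are the ones described above.

```java
// Sketch: typical use of the wire-cut widget during a defuse attempt.
public class WirecutUsageSketch {
    public static void demo(WirecutWidget wires) {   // the widget described above
        // The bomb just became defusable and sent its sequence:
        wires.setSequence("ygbrrgybgr");
        wires.setShowColor(true);        // show the colored wires to the player

        // The player cuts the first wire; the widget marks it as cut if correct,
        // and from then on the colors are hidden so he must remember the rest:
        wires.nextWireColorCut('y');
        wires.setShowColor(false);
    }
}
```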

One improvement to the architecture of the game should be mentioned here. Both this widget and the bomb independently evaluate the wire cut, and this could potentially lead to an inconsistent state between the wire-cut widget and the bomb. In an improved version of the game, this should only be evaluated in one place; in our model it would be most appropriate for the bomb to tell whether the cut was a success or not.

The widget in action can be seen in the video below.