Thursday, 29 December 2016

PiWars 2017 - Autonomous challenges

It's the autonomous challenges at Pi Wars that separate the robots from the radio-controlled cars: the robot needs one or more sensors to collect information about the real world, process it and decide how to react in order to complete the task at hand.

To add to the complexity you are competing against a whole raft of other robots, so you have to strike a balance between slow and steady and quick but risky. Do you concentrate on completing the course without any penalties, or push things to get a good time?

Of course the first step is working out how to approach each of the challenges and get a working, reliable system up and running, before trying to push things to the limits!

Line following

A returning event for Pi Wars 2017: the robot needs to follow a black line around a twisty course as many times as possible within the time limit.

My attempts at line following in the last Pi Wars didn't go so well... In testing the night before, the Arduino controlling the line sensor started triggering the watchdog and restarting, clearing the calibration data and causing the robot to lose the line, so on the day I only got about a quarter of the way around the course. So this time around I want to take an approach that uses just Raspberry Pis, partly to try and avoid this issue in future, and partly because it is a Raspberry Pi-based event!

The first approach would be to re-use the line sensor from last time, connecting it directly to a Raspberry Pi instead of an Arduino (something I've successfully done before using the pigpio library). Because Raspbian is not a realtime OS it can take a lot of CPU time to ensure no data is lost, so for this approach I may have to use a dedicated Raspberry Pi to do the sampling and send the data to the 'master' Raspberry Pi for processing.
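To give an idea of what that sampler Pi might run, here's a rough sketch in Python (the pin numbers and the master Pi's address are made up, and it assumes the sensor presents simple digital outputs):

import socket
import struct
import time

import pigpio

SENSOR_PINS = [5, 6, 12, 13, 16, 19, 20, 21]  # hypothetical wiring
MASTER_ADDR = ("192.168.0.10", 5005)          # hypothetical master Pi address

pi = pigpio.pi()
for pin in SENSOR_PINS:
    pi.set_mode(pin, pigpio.INPUT)

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

while True:
    bank = pi.read_bank_1()  # read GPIO 0-31 in a single call
    # Pack the eight sensor bits into one byte, in sensor order
    reading = 0
    for i, pin in enumerate(SENSOR_PINS):
        reading |= ((bank >> pin) & 1) << i
    sock.sendto(struct.pack("B", reading), MASTER_ADDR)
    time.sleep(0.005)  # roughly 200 samples a second

The master Pi would then just listen on that UDP port and turn the stream of bytes into steering decisions.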

An alternative approach is to use the Raspberry Pi camera to do a spot of image processing to determine where the line is, and where it is going. I've not done image processing since university, so this would require lots of investigation, research and learning. As I like using these events to learn new things, I'll definitely be giving this approach some serious consideration.
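As a starting point for that research, something along these lines (using OpenCV, assuming a dark line on a light floor and that the camera shows up as a normal video device) should find where the line sits in the frame:

import cv2

cap = cv2.VideoCapture(0)  # assumes the camera appears as /dev/video0

while True:
    ok, frame = cap.read()
    if not ok:
        break
    strip = frame[-60:, :]  # only consider a strip near the bottom of the frame
    grey = cv2.cvtColor(strip, cv2.COLOR_BGR2GRAY)
    # A dark line on a light floor, so use an inverted threshold
    _, mask = cv2.threshold(grey, 60, 255, cv2.THRESH_BINARY_INV)
    m = cv2.moments(mask)
    if m["m00"] > 0:
        cx = m["m10"] / m["m00"]         # horizontal centre of the line
        error = cx - strip.shape[1] / 2  # steering error, in pixels
        # 'error' would then feed into the motor control, e.g. turn = k * error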

Straight line speed test

Another returning event: the robot has to drive in a straight line as fast as possible along a 7.28m long trough. For Pi Wars 2017, however, the straight line speed test has been revised to be autonomous only, whereas previously I've only attempted it under manual control (my attempts to use the compass on the Sense HAT having proven unreliable!).

A few approaches spring to mind for this event... using feedback from motor encoders to keep the robot driving in a straight line, using a sensor to keep the robot a set distance away from one wall (probably reusing the range sensor from the last Pi Wars), or using the Raspberry Pi camera to detect where the walls of the trough are and keep the robot in the middle.

Using the range sensor sounds the most feasible, but it would be nice to get something working with the camera.
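Just to sketch the range sensor idea out, a basic proportional controller should be enough to hold a set distance from one wall. This is only an illustration: read_distance() is a stand-in for the actual sensor code, the gains are plucked from thin air, and it assumes the sensor faces the left wall.

from gpiozero import Robot

TARGET_CM = 15.0  # desired gap to the (left) wall
KP = 0.02         # proportional gain, to be found by trial and error
SPEED = 1.0       # flat out for the speed test

robot = Robot(left=(23, 22), right=(25, 24))

def read_distance():
    # Stand-in for the actual range sensor driver; returns distance in cm
    return TARGET_CM

while True:
    error = read_distance() - TARGET_CM  # positive means too far from the wall
    correction = max(-0.3, min(0.3, KP * error))
    # Slow one side slightly to steer back towards the target distance
    robot.value = (SPEED - max(0.0, correction), SPEED - max(0.0, -correction))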

Minimal maze

A new event for Pi Wars 2017: the robot has to navigate its way through a maze, without touching the walls, to reach the exit. The walls will be of various colours, providing information the robot could potentially use to determine where in the maze it is.

Initial thoughts on this challenge are that I could use a variation of the speed test solution, using a range sensor to keep the robot following the left or right wall, combined with a second sensor on the front to work out when a corner is coming up. Alternatively I could use a single range sensor and start out following the 'right' wall, then when the right wall turns blue (a camera being used to determine this) switch over to following the left wall, hopefully avoiding hitting the outer wall as the direction changes!
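The colour check itself shouldn't be too bad; an OpenCV snippet along these lines (with a made-up HSV range that would need tuning against the actual maze walls) could decide when the wall being followed has turned blue:

import cv2
import numpy as np

# Made-up HSV range for 'blue'; this would need tuning against the real walls
BLUE_LO = np.array([100, 120, 60])
BLUE_HI = np.array([130, 255, 255])

def wall_is_blue(frame, threshold=0.5):
    # True when more than `threshold` of the frame falls in the blue range
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, BLUE_LO, BLUE_HI)
    return float(cv2.countNonZero(mask)) / mask.size > threshold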

The maze needs to be completed twice, with the times combined for the final result, so can we be a little sneaky and use the first run through to 'map' out the maze? Then use this map on the second run to zoom through it? You'd need to get the starting point spot on for the second run, but it certainly sounds feasible!
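In its simplest form the 'map' could just be a log of timed drive commands that gets replayed on the second run. A rough sketch (pure dead reckoning, so battery level, wheel slip and the starting position would all have to match):

import time

recorded = []  # list of (action, duration) pairs from the first run

def drive(robot, action, duration):
    # Perform a gpiozero Robot action (e.g. 'forward', 'left') and log it
    getattr(robot, action)()
    time.sleep(duration)
    recorded.append((action, duration))

def replay(robot):
    # Second run: play the logged commands back
    for action, duration in recorded:
        getattr(robot, action)()
        time.sleep(duration)
    robot.stop()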

So those are my thoughts on the autonomous challenges... Do they sound good, bad, very bad? Is there enough time left to implement all of these?

Leo

Wednesday, 28 December 2016

PiWars 2017 - Manual challenges

It's been a busy few months since I was accepted as a competitor for PiWars 2017; unfortunately very little of that time has been spent on PiWars itself. Work has been especially busy, culminating in a sudden business trip to Taiwan where I had access to shops with all the connectors, cables and components I'd need for a robot, but with no idea what I needed yet!

So in this post I've decided to jot down my ideas for the various manual challenges, as most of them have similar requirements, and it's considerably easier to manually control a robot than to have it deal with things itself.

My general approach will be similar to last PiWars: a four wheeled vehicle with tank style driving, controlled via a joystick, but with some improvements. Whilst I had the option on my previous robot to change the motors and wheels, it ended up being too fiddly to do on the day and led to my robot being defeated by the 'humps' at the start of the obstacle course because it was still on small wheels. So this is an aspect I want to review and improve upon this time.

Obstacle course

As the exact details of the obstacle course won't be known until the day itself I have to make some assumptions about what will be required based on previous events. I would expect certain parts of the PiWars 2015 course to be reused (especially the rotating table) with a few new elements added. As such I'll be needing a setup with good ground clearance and fine control. A high top speed won't help much here, with the marble obstacle from last year requiring the robot to drive slowly in order not to dislodge any and incur a time penalty.

All the above is subject to change, of course; the marbles may be swapped out for a gravel pit, in which case I'll be wanting chunkier wheels to cope with that.

Pi Noon

Having gotten to the final of Pi Noon last time (with a lucky win or two, I must admit) there's probably not much to change here. I'm tempted by the mecanum wheels that Triangula was sporting last year, but I would need quite a lot of practice to be good at using them, something I've never ended up having much time for in previous events!

Skittles

Skittles was a little hit and miss last time, with my robot knocking down more skittles during the practice run than it did in the official attempts. Due to time issues I ended up just pushing the ball for PiWars 2015 which, unsurprisingly, proved to be very unreliable.

Possible approaches this year could be a motorised launching system (a spinning wheel or two to get the ball up to speed), a catapult type system (pulling back and releasing an elastic band), a spring loaded system, or maybe something that picks up the ball and rolls it. In theory anything that can propel the ball forwards in a straight line should work out okay for skittles, whereas the next challenge potentially requires a finer degree of control.

Slightly Deranged Golf

A new challenge for PiWars 2017 is golf, and apparently a slightly deranged variant of the game. As such it's a bit of an unknown, with only the details of the challenge to base things on. The game is just a single hole (no 18 hole courses here!) and the robot is only allowed to push or hit the ball to get it around the obstacles. What those obstacles are we don't know (sand pits?), and whether it's better to hit or push the ball won't be known until the day itself.

The traditional approach would favour having a mechanism that can 'hit' (or propel) the ball forwards at different speeds, allowing the robot to progress through the course, changing direction to move around obstacles and eventually reach the hole. However this does sound like quite a complicated approach, as each time you need to judge how far to hit the ball, chase after it with the robot, line up the next shot and so on.

A simpler approach may be to push the ball the entire way, potentially lowering a cup over the top of it to ensure the ball doesn't get away from the robot, leaving you to just drive to the hole and drop it in. This, of course, assumes the course is flat and there is enough space between obstacles for the robot to drive through. If not, the ball could escape and you'd be stuck trying to capture it again.

So those are my current thoughts on the manual challenges; next up will be the autonomous ones!

Leo

Monday, 21 November 2016

Controlling a 3D Printer with the new Raspberry Pi Zero 1.3 - follow up

Back in May I wrote a blog entry describing how I'd used a Raspberry Pi Zero to control my 3D printer and, at the time, came to the conclusion that the RPi Zero was indeed a viable alternative to the standard model Raspberry Pi once they were more readily available.

It took a while but I eventually ended up with enough Zeros that I felt I could dedicate one to controlling my printer and have been running it successfully for the past couple of months. My original goal for switching to a Zero was based around building a more compact solution, as opposed to a cheaper one, and I went through a couple of iterations to end up with my current solution.

The original test setup had cables, adaptors and components all haphazardly laid out, but since then various adaptors and boards have come out to provide additional connectivity to the standard Zero.
It's all a little messy
The first setup I tried was the Zero4U board, a 4-port USB hub that fits underneath the Zero, allowing both the printer and a USB WiFi dongle to be connected. Whilst that worked I was under-using the hub itself, and it still looked a little untidy with the dongle and cables sticking out on multiple sides.
A Zero4U installed under the RPi Zero.
My next, and ultimately final, setup was using the RedBear IoT pHAT to provide the Zero with a wireless connection, leaving the single USB port available for connecting to the printer. The final step was getting a USB micro to mini OTG cable to allow connecting the printer without needing an extra adaptor. I decided to use a ZeroView board to mount everything on (due to having one spare) and then quickly designed and printed a support to mount it all on, with the camera pointing at the bed.


As mentioned earlier I've been running this setup for a few months now and the only issue I've run into was a failure when printing a 3D Benchy, which I thought might be an indication that the Zero was struggling to deal with larger G-code files. However I got a similar result after switching back to the RPi B+ (which had successfully printed the 3D Benchy in the past), so I have put that down to either one of the printer's motors overheating or an issue in the produced G-code.

I never got around to mounting my RPi B+ to the printer, so whenever it got moved (albeit rarely) I had to move the RPi and printer separately. With this new setup I can just unplug the power and lift everything up as a single unit, and I have indeed taken this setup to a Raspberry Jam to show it off.
Down at the Egham Raspberry Jam.

Leo

Monday, 17 October 2016

PiWars 2017

PiWars, the Raspberry Pi based robotics competition, is back for a third time and I will be, once again, competing. Last time I entered my robot 'OptimusPi' (which I blogged about here), with my best result being runner-up in the Pi Noon battle.

OptimusPi - From PiWars 2015
PiWars 2017 has grown in size and is being held over two days, instead of one, and I will be competing on Sunday 2nd April 2017 in the Pros/Veteran category as team 'Pi Squared'. Whilst I'm certainly a veteran of PiWars, I'm not sure I count as a Pro!

So what is my plan this time? Well, for OptimusPi I started on the chassis quite early, having a basic bot driving around before working on anything else, but in the end I found myself feeling quite constrained by the design, especially when getting all the sensors and other components to fit inside. So this time around I'll be starting by determining how I'll approach the various challenges, trying things out on one of my existing robots, and coming up with a chassis design later on.

The challenges themselves are a mix of old and new, with the speed test, skittles, Pi Noon, line following and obstacle course making a return, joined by the new challenges of the Minimal Maze and Slightly Deranged Golf. For the first set I can draw on my experiences from previous events to know what approaches worked and what didn't, hopefully leaving most of the time to concentrate on the new events and actually having an interesting looking robot this time around (instead of the box on wheels that OptimusPi turned out to be).

Of course the other question is what to wear! So far I've turned up as a pirate and then a scientist... I already have a few ideas, but I'll need some good time management to pull everything off. In theory I have just over six months to get everything sorted, but it's surprising how fast that time can go. Plus there's also the blogging challenge, so stay tuned here for more updates!

Leo

Monday, 22 August 2016

PiZero Bot V2.0

Back in December 2015 I created my first Raspberry Pi Zero based bot. It was a little ungainly, with cables looping all over it and a USB WiFi dongle sticking out the side, but it was relatively small, it moved around and, at the time, I felt there was still plenty of room to make it more compact.
PiZero Bot V1.0
The aim of the next variant was to squeeze all the components into a mint tin that was only slightly larger than the Pi Zero itself. This proved something of a challenge and it wasn't until the release of Pimoroni's Zero LiPo board and Average Man's ProtoZero development board that I was able to get everything compact enough to fit inside.

The tin is only slightly bigger than the Pi Zero
I had never quite figured out how best to connect the Adafruit PowerBoost to my PiZero without it getting in the way of the other components I needed to squeeze in, but the Zero LiPo fits snugly onto the Pi Zero PCB, taking up a minimal amount of space. With the power in place the next step was to come up with a more minimal motor driver. After ordering a pack of L293D motor controllers (buying several in case I killed one) I mounted one upside down on the ProtoZero board, after doing several dry runs to ensure I had enough space to fit in the wires and somewhere for the motors to attach.
A thin, PCB sandwich.
As this bot has no sensors (yet) it has to be manually controlled. After several attempts to squeeze in a USB dongle failed I settled on using a TTL to Bluetooth adaptor (an HC-06 module), soldering it to GPIO pins 15 and 18, the Tx and Rx pins on the Raspberry Pi. This particular TTL module is designed for 3.3V IO operation, so I didn't need to risk driving the pins at 5V, or try to find space to fit a 5V to 3.3V converter. Using an app called 'BlueTerm' I could now connect to the serial console on the Raspberry Pi and send it commands from my phone.


Hardware is of little use without software to control it. Normally I'd be looking at connecting up a PS3 DualShock controller to drive the bot around, however that isn't quite so easy to set up over a serial link, so it was time to look at a different way to control a bot. A few weeks previously I had taken a look at the GPIO Zero Python library and knew it had a variety of helper classes. Taking a look at the currently supported devices I found that it not only had a GPIO-based Motor class but a fully implemented Robot class, allowing a two-motor robot to be up and running in three lines of Python.

from gpiozero import Robot
# Each motor takes a (forward, backward) pair of GPIO pins
r = Robot(left=(23, 22), right=(25, 24))
r.forward(1)  # full speed ahead

A couple of tweaks to the connections to make sure the motors were going in the right direction and it was time to fit everything inside the tin. Being a metal tin it was conductive on the inside and, as I didn't want a short, I covered it as best I could with Kapton tape, adding extra tape to the PiZero itself for extra safety.



With the motors and ball caster glued into place, and the battery held in place with Blu-Tack, PiZero Bot V2 was almost ready to go. I extended the Python code to listen for the WASD keys to drive the robot from my phone and it was off for a test drive.
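The driving loop itself is only a handful of lines; this is roughly the shape of it, reading single key presses from the serial console (stdin) and mapping them onto the gpiozero Robot from earlier:

import sys
import termios
import tty

from gpiozero import Robot

robot = Robot(left=(23, 22), right=(25, 24))

actions = {
    "w": robot.forward,
    "s": robot.backward,
    "a": robot.left,
    "d": robot.right,
    " ": robot.stop,
}

old_settings = termios.tcgetattr(sys.stdin)
tty.setcbreak(sys.stdin.fileno())  # deliver keys immediately, no Enter needed
try:
    while True:
        key = sys.stdin.read(1).lower()
        if key == "q":  # quit
            break
        actions.get(key, robot.stop)()
finally:
    robot.stop()
    termios.tcsetattr(sys.stdin, termios.TCSADRAIN, old_settings)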


A relatively successful trip, although the battery appeared to be dying towards the end, as the robot was starting to struggle to move. I know the battery wasn't fully charged at the time, so I'll have to test again to see how well it lasts. I do have a larger battery that is around the size of the mint tin, but I was avoiding using it as it adds to the height of the robot, and that somewhat defeats the point of making the smallest robot I can.

Of course it's also a little tricky to control when driving it via key presses alone, but it should be possible to write an Android app, or similar, that implements an analogue controller communicating over serial to the Raspberry Pi.

As ever there is always more to do. I still want to add at least one sensor to this bot, probably a proximity sensor, so it can be left to drive around by itself. However, with all the wires stuffed inside there's not much room to extend it cleanly. I may have to look at designing my own PCB so I can get rid of most of the wires, freeing up some space to connect up a sensor, and punching some holes in the front of the tin so it can be connected up. But that will have to wait for V3, as I have some other projects that need finishing up first.

Leo


Sunday, 10 July 2016

AstroPi 3D Printed Tripod Mount

Since printing my AstroPi back in March (and then never actually posting pics of it... oops) I've been thinking of designing a mount that would allow the AstroPi to be held in the air in a similar manner to those up on the ISS. Thinking didn't actually become action until a friend of mine wanted a way to display his AstroPi at the Recursion Computer Science Fair, so that visitors could get a close look at it without, presumably, accidentally unplugging or dropping it.

I dug an old magnifying lamp with a spring loaded arm out of the attic, removed the lamp assembly (which was held on by a single bolt) and set out to design a printable part that would hold the AstroPi in place. Of course by 'design' I mean launching Autodesk Fusion 360 and starting to create and modify objects. When this approach didn't work (unsurprisingly) I actually went and sketched out some ideas and started over.
The original design.
Whilst not going to win any prizes in technical drawing, the sketches were enough for me to better visualise the 3D object, create it in Fusion (it's basically a couple of rectangles with holes) and run off a test print, immediately followed by several more until I got the sizes correct and the mount functional (for example, the hole for the nut was large enough to fit the nut, but too small to fit the socket required to tighten it up). Once I was happy I swapped over to my silver filament, printed out the final version and attached it to the arm.
The AstroPi mounted on the end of the arm.
Arm goes up. Arm goes down.
With the sprung arm and mount out of the way it was time to tweak the design to make it more generally useful. A fairly common type of mount is the standard camera tripod thread which, according to the internet, is 1/4" UNC, so I ordered several nuts from eBay.

Whilst waiting for those to arrive I found out Fusion supports creating screw threads as part of the design, so I created two variants of the mount: one with a built-in thread and one which takes a nut.
Nut-based model on the left, threaded on the right. Side view; not all the curves printed cleanly.
I printed the 'threaded' variant with a 0.1mm layer height and, whilst it was a bit tight to screw on the first time, it is working surprisingly well. Even after being attached and removed a dozen times it doesn't seem to be wearing out. The 'nut' variant I printed at a 0.2mm layer height, which seemed to produce better rounded corners, and the nut fits in snugly. Both variants of the mount connect to two corners of the AstroPi itself, using some slightly longer bolts (35mm) to account for the extra thickness added by the arms.

With the mount attached to the AstroPi it can be connected to, and moved between, any camera tripod or similar that you have available so you can position it on your desk, hanging from a shelf or suspended looking out of a window, just like Izzy is doing on the ISS.
Suspended in the air.
Standing on the desk.Hanging from a shelf bracket.
If you want to print out your own AstroPi tripod mounts then I've uploaded the .STL files to Thingiverse (http://www.thingiverse.com/thing:1666712) where you can download and print them out.

Happy printing!

Leo



Monday, 30 May 2016

Controlling a 3D Printer with the new Raspberry Pi Zero 1.3


Last year I entered the world of 3D printing by purchasing a second-hand RepRap Mendel Mono from eBay, with the initial aim of using it to print custom parts for my robots. The 3D printer has an Arduino-based control board that contains the firmware required to drive the motors, heaters, fans and all the other time-critical stuff that allows 3D printing to work, but with no LCD or any input controls a computer is required to tell this control board what to do.


I started off using a laptop to control the printer and then, once I was a bit more comfortable with how everything worked, I switched over to a Raspberry Pi B+, connecting to it via VNC. Eventually I started doing longer prints (i.e. the Astro Pi case) and wanted to be able to monitor the printer remotely, so after some research I switched over to using OctoPrint, which allows you to control the 3D printer via a web browser, and has built-in support for recording time lapses of the print itself.

When the Raspberry Pi Zero was originally released I did consider using one to control the printer, but didn't want to lose the camera support. Roll on six months and a new and improved Raspberry Pi Zero model was released, complete with camera connector, so it was time to give it a go.

Software setup

When setting up OctoPrint on the Raspberry Pi B+ I made use of the easily downloadable OctoPi SD card image, which meant it only took a few minutes to get everything up and running. However I wasn't sure it would support the latest board revision so instead followed the comprehensive instructions for building and installing OctoPrint, which took a little while on the Raspberry Pi Zero.

With the main OctoPrint software installed I then moved onto setting up mjpg-streamer for performing the video capture. As usual I'd managed to put the camera connector in upside down, despite checking the correct orientation before plugging the cable in, so ended up opening up the case to more easily flip it over.

Test Print

As the Raspberry Pi Zero only has a single USB port, and I had to connect it to the printer itself whilst simultaneously having a network connection, I needed to make use of a USB Hub. Unfortunately the UUGear Zero4U I recently purchased doesn't quite fit the new Raspberry Pi Zero board, so I ended up making use of a slightly less compact setup with an external USB Hub.



With the Raspberry Pi Zero connected to the printer and the camera positioned I started off with a fairly simple test print that only takes 10-15 minutes to complete, just to make sure everything was working as expected. During the print I kept an eye on the CPU load and memory usage of the Raspberry Pi Zero, to check it was coping happily.
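Nothing fancy is needed for that sort of check; a few lines of Python along these lines (just an illustration, using only the standard library) will print the load average and free memory every few seconds:

import os
import time

while True:
    load1, _, _ = os.getloadavg()
    with open("/proc/meminfo") as f:
        mem_free_kb = int(f.readlines()[1].split()[1])  # the MemFree: line
    print("load {:.2f}, free {} MB".format(load1, mem_free_kb // 1024))
    time.sleep(5)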


A longer print

With the test print out of the way it was time to try something a bit more time consuming, to see if the Raspberry Pi Zero had the staying power to keep on driving the printer. Something I had been meaning to print for a while was a 5 1/4" to 3 1/2" drive bay adaptor. I had originally thought about buying one in a shop, but looking at the price (around £6 to £7) I realised I should be able to print one at home for a fraction of the cost. A bit of hunting around on the internet turned up a few designs that people had put together, so I selected one and set it off printing.

Total print time turned out to be just under 4 hours, during which I mostly monitored progress by viewing the video feed on my phone. Whilst the video resolution was the same as I use on the B+, the newer 8MP camera did seem to give a sharper image.

I did, of course, notice that the corners of the print had lifted at one end, but as the resulting item was going to end up inside a computer case I wasn't worried enough about it to stop the print and re-try. Besides, that edge is always a pain to get levelled!


With the 4 hours up the print looked good enough, apart from where the corners had lifted of course, and the Raspberry Pi Zero was still running happily (and still is now, several days later). A couple of times the OctoPrint page seemed to stop updating, but a reload got it working again. After removing the print, and detaching the raft that was supposed to help the print stick to the glass, it was time to see how well it worked. Due to the print lifting on one side I did have to slightly extend the screw holes, but my USB3 front panel fitted in nicely and I soon had it mounted in my PC case.

Conclusion

Based on this quick test am I going to switch over to controlling my 3D printer using a Raspberry Pi Zero? Probably, yes... at least when I manage to get hold of more than one of the newer boards!

Not because of the price (after all, I already have the B+), but more because it will allow me to create a more compact setup. I'll need to do something about requiring the external USB hub first, either getting a newer Zero4U that's been updated to work with the revision 1.3 board, wiring the Raspberry Pi's serial port directly to the printer, or connecting up the networking differently, e.g. by using the IoT HAT that I backed recently.

But for now I'll be switching back to the B+, as I have other uses for my singular Raspberry Pi Zero 1.3 board!

Leo


Sunday, 1 May 2016

Two weeks with the Oculus Rift - Part Two

This is the second half of my ramblings about the Oculus Rift, so if you've not read part one yet you may want to go check it out first before continuing on here.

After having played Lucky's Tale I thought I'd carry on taking things slowly before diving into the other launch title, EVE: Valkyrie, just to make sure I was used to the headset before exposing myself to a high-speed, high-action game.

So next up was 'Henry', a short animated movie from Oculus Story Studio that places you in Henry's front room, allowing you to look all around and watch the story unfold. Henry isn't just an animated movie recorded with a '360' view; it's a proper 3D environment, allowing you to look over, under and behind the various objects in the room, which generally gives you a much better impression of being there.



The instructions suggest the best viewing position is sitting down, however in my setup that would leave me sat on the floor in front of my desk, leaving the sensor partially obscured. In theory if I had a bigger room I could set the sensor up to allow this, but for now I just watched the movie standing up (luckily it's not too long!).

This was also the first title where I noticed the 'lens flare/god rays' that have previously been mentioned. The movie itself wasn't a problem, but they were really obvious on the opening and ending titles.

There are a couple of other animations in the store, although these are much shorter, with the ones I watched next being INVASION!, Lost and COLOSSE. These follow a similar format where you are fixed in place, looking around as the scene unfolds. In INVASION! you have a clear view to the mountains and, if you look down, a fluffy bunny body. Lost has you in a forest where you can lean your body to look between the trees and has a 'The Iron Giant' kinda vibe to it, something that could possibly be included as a DVD/Blu-ray bonus feature.




COLOSSE is slightly different in that you get transported during the story to a couple of different locations, with the final scene encouraging you to look around in all directions. Whilst these shorts are fun enough to watch once, I'm not sure I'll go back to rewatch them (other than to show off to friends/family) and trying to do a full 1 to 2 hour long movie in this format may be a stretch, but we shall have to wait and see what people come up with on that one.

But are all these movie-like experiences static? Well no; there's 'The Body VR', which takes you on a journey through the human body, sitting in a little pod that takes you from place to place. This generally works quite well, with a narrator describing what you are seeing (instead of having to try and read text) and the pod moving slowly enough that there's plenty of time to look around, making it unlikely to cause any nausea. Apart from, that is, the 2 or 3 times when the 'pod' suddenly 'jumps' 90 degrees, which can leave you a little disoriented, especially if you were already looking in that direction.



Other media content that can be found on the Oculus Home store are the 360 pictures and videos. The pictures were fun, letting you bounce around the world looking at the various landmarks, a considerable improvement over 'drag scrolling' the view in a web browser, or looking at it on a small mobile. I was also able to view a 360 photo I had taken the previous weekend and that looked nice (apart from the bits where it overlaps... I need to practise taking those more!).

The 360 videos were a bit of a disappointment, frequently buffering, and when they did start playing they were often so compressed-looking that it was almost not worth watching them, reminding me of the days when I would try to find the smallest video files to download over my modem so they wouldn't take hours to arrive. The best 360 video in the list was the Virgin Media advert! Possibly due to it being streamed from the UK.

I had much better luck watching 360 videos from YouTube using Virtual Desktop, where it would download the entire high quality video before starting to play it back, often a bit of a wait but a much nicer experience overall.

But enough with the media playback... back to the games!

Farlands is a colourful planet-exploration game where you analyse and photograph the alien creatures to learn more about them, with your discoveries allowing you to unlock more items to use, and more areas to explore. It's a fun little diversion that seems to be designed for short but daily visits, which works out fine as there is only so much to do each time.

Movement is handled by instantly teleporting to your destination, so you move around the area in short jumps. The system allows you to select which direction you're facing after the teleport, which means you don't have to keep turning around to see everything (which would lead to cable tangle).

There's also a map of each area where your view pulls back, allowing you to see the entire area, in a sort of tilt-shift miniature world kinda way, and you can watch the aliens moving around before teleporting back to the area, or up to your ship.




The final game I played in this two-week period was EVE: Valkyrie. I'd left it until last because I wanted to make sure I was comfortable enough in VR before trying anything too hectic, which flying around in a space ship that can go in any direction certainly is!

EVE: Valkyrie is basically dog-fighting in space, two teams battling to win each round, either in a straightforward team deathmatch or by trying to capture various points on the map. Whilst you can, in theory, dive straight in, I elected to go through the tutorial and training first to get a feel for the ship, and to get used to playing this in VR.

Traditionally in flight sims I either look around by using the 8 way hat, or by toggling into a look mode and using the mouse to change the view. With the Rift its much more natural, giving you the freedom to look in any direction without needing to think about what you are doing, what button to press and how many times to press it.

After taking a good look around my ship it was time to actually take control. Moving up and down and side to side was fine, then I tried rolling the ship... Now whilst it didn't make me feel nauseous, my stomach was certainly a little uncomfortable with the manoeuvre, as my brain was expecting to feel some sort of movement. I persevered and carried on with the tutorial, performing a few more rolls, trying to get used to it a bit more.

In my younger years I was lucky enough to fly a few light aircraft and I would have to say that this is nothing like that! It's missing the noise, vibration and smell of the engine, and any of the g-force that you feel doing similar manoeuvres... It's a bit closer to gliding, with the lack of engine noise, and if you pretend you're in your favourite sci-fi programme that has inertial dampers, null gravity fields or similar then all is good!

Weapon-wise, the ship I flew has a front-facing cannon and head-tracking homing missiles. You just have to look at an enemy ship, wait for the lock-on, and launch the missiles, which enables you to fire missiles at one ship whilst shooting the cannon at another (in theory anyway!). There's also an ECM (which really looks like a flak cannon) for stopping enemy missiles, as well as drones that can be deployed.



With the tutorial out of the way it was on to an actual match, and once I was out in space and getting involved in the action I stopped noticing the mild discomfort when I rolled the ship; after a match or two it went away. The matches tend to be quick and hectic, with me looking around in all directions trying to keep track of the enemy craft. I played the first few rounds with the Xbox controller, before switching over to a joystick and throttle, finding that easier to use when turning, rolling, launching missiles and firing the cannon all at the same time.

The different levels have various ships/asteroids that you can fly around and through, giving lots of opportunities for sneak attacks, or just running and hiding when your shields are low. Team deathmatch is fairly simple: you're just trying to destroy the enemy ships before they destroy you. The 'capture the flag' style mode has three points around the map that you capture by deploying drones nearby, which you then have to protect, whilst also working to destroy the enemy's drones to make it easier to capture the other points.

All in all a fun, fast, action-packed game, which I will be playing more of if I ever manage to finish writing this post! And as promised in my last post, here is a short video of me playing EVE: Valkyrie to give you a better idea of how it works, and how easy it is to look around, following the enemy ships in even the most hectic of moments.



Now if you'll excuse me I have more ships to go hunt down!

Leo

Tuesday, 19 April 2016

Two weeks with the Oculus Rift - Part One

Back in August 2012 a friend pointed me towards a website called 'Kickstarter' and a project called 'Oculus Rift: Step Into the Game', which was looking for $250,000 in backing so that they could build some prototype virtual reality headsets. I remembered the VR headsets/games from the 80s and 90s, although I never did get a chance to try them out, and after reading up on the current state of play I decided to back the project a day or two before it finished.

The project finished at the end of August with 9,522 backers, who together pledged a total of $2,437,429, almost 10 times the original target! With the project completed the waiting started. I knew the kits would need to be ordered and assembled so I wasn't expecting to see anything soon, not to mention I was towards the back of the queue.

The December delivery target came and went (to no great surprise), things got delayed more over Chinese New Year and then finally in late May my Developer Kit arrived, the DK1.


It was a little bulky and had lots of cables, but I set it up, plugged it in and tried out the Tuscany demo, the Epic Citadel village (plus roller coaster) and even played some Half-Life 2. It was low-res, had limited tracking (you couldn't lean through a window, for example), had a very obvious screen-door effect, and you had to jump through hoops to get anything to work (changing resolutions, moving the output window to the correct screen, etc.).

Whilst I had a few ideas I never really got far into the actual development side of things, so when the DK2 was released I decided to skip it (although I did get to try one out), and my DK1 started to gather dust, getting harder and harder to get up and running as people moved over to the newer hardware.

Moving ahead a few more years, the release date of the first commercial version (CV1) was announced and, much to my surprise, Oculus announced that everyone who backed the original DK1 would be receiving a free Kickstarter edition of the CV1. The release date came and the waiting began anew. This time, instead of a 9 month wait, my CV1 turned up at my house a week after the release date. Of course I was already at work when it arrived, so it was a few more hours until I got to play with it.

The packaging looked sleek and professional, opening up to reveal the CV1 headset itself, sensor and remote. With those removed the case opens up further to reveal the Xbox One controller, batteries, cleaning cloth and a few stickers. The CV1 looked compact and felt nice and solid, a far cry from the chunky DK1 with its wide front and thick straps.





The setup/installation process was much more slick than with the DK1, with a mixture of text, images and short videos showing you how to connect up the Rift (plus accessories), and confirmation that everything was successfully detected. I did run into one issue during the installation process, namely my USB 3 expansion card not being supported. Hoping I wouldn't have to wait for a new card to arrive I plugged the Rift and sensor into USB 2 ports instead and was happy to see that they were successfully detected, albeit with a warning stating the sensor may not work as well as it could.

With the software and hardware installed it was time to put on the headset and finish the setup process in VR!

Now the first thing I noticed was that the CV1 is much easier to put on compared to the DK1, although the headphones did catch on my ears slightly (they wouldn't stay in the 'lifted' position). The back rest pulls out on sprung rails to let it slip easily over your head, then tightens to hold it in place, meaning once you have adjusted the velcro straps you don't need to touch them again.

With the headset on, and the displays active, I immediately spotted an improvement over the DK1. Yes, I could actually read text! Obviously the text size for the installation process is optimised for the headset itself, but even with the official demos for the DK1 text was hard to read, being blurry and fuzzy. Graphics quality and resolution were also obviously improved, with the screen-door effect barely noticeable.

Returning briefly to the headphones I mentioned earlier: it's so much easier to have these built in and not have to fumble around trying to find your headphones and put them on over the CV1 (along with the inevitable cable tangle), but if you prefer your own headphones the built-in ones are removable. So far I haven't felt the need.

Setup continues in VR, asking you to stand up in your play area so it can get the sensor correctly positioned. The sensor itself is represented in VR, although it didn't seem to be as accurate as some of the Vive footage I've seen (when reaching out to adjust it I'd miss slightly).

Once all the hardware is correctly configured the installation process moves on to the Oculus Dreamdeck demos (note: the version you download from the store has a few extra demos in it that are worth a look, especially the submarine and paper town), taking you to a cartoon campfire, standing at the top of a tall building (making me think of the various Batman Arkham games), a short visit to an alien planet and a close encounter with a dinosaur (appropriately enough, at the Hammond museum). Each one gives you a short time to look around before moving on.


Once you've watched (or skipped) all of these, installation is complete and you are transported to Oculus Home, a (very) large room that acts as a launching point into the other applications (so far I've not worked out how to get a screenshot of that!). There's a fireplace, books and pillows strewn over the floor, and various bits of furniture and objects on display. I wonder if this will evolve into a customisable location?

From here you can download and buy games, applications and other media content, as well as launching/watching the ones you have already purchased. So like any sensible person I went through the entire store 'buying' every free item available! Some should always be free, whereas others were limited-time offers. It's at this point you get dragged back out of the experience: whilst you can happily queue up and download items, watching the progress bar fill up, eventually it will get to 'installing' and it will then stay on 'installing' until you take off the headset, return to the 2D world of your computer and click on the 'yes, this program may install things on my PC' dialogue.

To be fair this step is built into the Oculus application (assuming you remember to check your notifications), but it's still a little annoying. There are a few pre-installed apps (such as the 360 videos and pictures, plus the regular movie theatres) so you might not run into this straight away.

With several games downloaded (luckily I have a fast connection) I decided to start with one of the launch titles, 'Lucky's Tale', a third-person platforming adventure game. Now I had watched someone playing this earlier in the day and thought that it looked 'okay', but I have to say that it was a completely different experience when I tried it myself. Whilst it is a bright, colourful, cartoon-type world, all the objects have a good presence of actually being there, and I found myself looking over things, around things and inside things as I played through the first chapter. Other than occasionally looking around a corner there wasn't much you really needed the headset for (the game could be played with just a fixed 2D camera view), but I did spend 30 minutes going through the first stage, looking back down each level when I got to the end to see if I had missed any coins or enemies.


As I had so recently come out of the installation process I ended up playing Lucky's Tale standing up. For the most part this was fine, and made it easier to look around the level, but a couple of times I did find myself swaying, as if trying to maintain my balance on a moving ship. This generally seemed to be triggered by the first camera move, as it follows Lucky moving down the level, and when something passed close to my eyes (e.g. one of the butterflies or floating coins).

It wasn't a huge problem, and didn't make me feel nauseous, but maybe I'll play the rest sitting down!

As this is getting rather long I'll stop here and carry on looking at some of the other games and applications in the next part (including a video of me playing EVE: Valkyrie! That's bound to be embarrassing...)

Leo