OnLive

So I've been seeing a lot about this OnLive thing – and while I was at GDC I went to have a look at it.

Ahh, it's that "let's render in the cloud and send video to your PC / machine / phone / sunglasses" idea again. Nicely packaged, and with some new compression it has to be said, but still – same idea.

So I seem a bit blasé about this – why am I not excited?

I'll tell you why: because there are some inherent issues here that mean, as a gaming experience, this is going to be frustrating and annoying.

Sure, it looks great on paper – you can play the latest Crytek game on your netbook and never worry about upgrading hardware again. From the developer's point of view it means the end of piracy as we know it. What's not to win?

So let's look at this a bit more. The idea is that somewhere there is a server farm with a high-specced PC that's actually running the game you want to play. It captures the output video stream, compresses it, and squirts it along to you over the internet; your PC is basically just playing streaming video and sending back your keypresses and mouse movements.
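Roughly, the loop looks like this. (A minimal sketch of the idea as described above – every name here is made up for illustration, it is not OnLive's actual interface, and the networking, rendering and codec work is stubbed out.)

#include <cstdint>
#include <vector>

// Stand-in types for what actually goes over the wire.
struct InputPacket  { std::vector<std::uint8_t> keys_and_mouse; };
struct EncodedFrame { std::vector<std::uint8_t> compressed_video; };

// Stubs for the real networking, rendering and codec work.
InputPacket  ReceiveInputFromClient();
void         StepGameSimulation(const InputPacket& input);
EncodedFrame CaptureAndCompressFramebuffer();
void         StreamFrameToClient(const EncodedFrame& frame);
InputPacket  PollLocalInput();
void         SendInputToServer(const InputPacket& input);
EncodedFrame ReceiveFrameFromServer();
void         DecodeAndDisplay(const EncodedFrame& frame);

// Server farm: the game actually runs and renders here, once per frame.
void ServerFrame() {
    InputPacket input = ReceiveInputFromClient();         // your keypress, one network trip later
    StepGameSimulation(input);                             // normal game update
    EncodedFrame video = CaptureAndCompressFramebuffer();  // grab the rendered frame and encode it
    StreamFrameToClient(video);                            // another network trip back to you
}

// Your PC / netbook / phone: essentially a video player with a keyboard.
void ClientFrame() {
    SendInputToServer(PollLocalInput());                   // ship raw input upstream
    DecodeAndDisplay(ReceiveFrameFromServer());             // play back whatever arrives
}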

Sounds great. Why am I down on it?

Because there's this inherent thing in the internet called latency. It's the time it takes for a message to get from point A to point B. You've all heard of ping – that's what it measures.

So, there is also latency in the video compression on the server side. It takes time to take that real-time image, compress it down and send it on. OnLive are quoting about 80ms in total (for reference, 1000ms – milliseconds – make up a full second, and one 60th of a second – one frame when you run at 60fps – is about 16.7ms. So 80ms is 4.8 – call it 5 – frames at 60fps, or roughly a 12th of a second).
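To make that arithmetic concrete, here's a tiny back-of-envelope check – the 80ms is OnLive's quoted figure, the rest is just converting milliseconds to frames:

#include <cstdio>

// Sanity check on the numbers above. The 80ms is OnLive's quoted figure;
// everything else is just converting milliseconds to 60fps frames.
int main() {
    const double frame_ms  = 1000.0 / 60.0; // one frame at 60fps ~= 16.7ms
    const double encode_ms = 80.0;          // quoted compress-and-send latency

    std::printf("80ms = %.1f frames at 60fps, or about 1/%.1f of a second\n",
                encode_ms / frame_ms,       // ~4.8 frames
                1000.0 / encode_ms);        // ~1/12.5 of a second
    return 0;
}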

Now that's optimistic. They can control the speed at which the compression happens, but they cannot control the latency – the time it takes to get that one frame from their PC to yours. But let's say for argument's sake that it's bang on accurate. So it takes roughly a 12th of a second between you pressing a button and you seeing the visual reaction to it.

Ah wait, no it doesn't. Because your keypress has to be sent to the server first. Well, let's say that takes, oh I dunno, 32ms. That's 2 frames at 60fps.

So now we're at 7 frames of response time, right? That's OK, surely?

Um, no. Because games *aren't* 100% instantly responsive. Games can take between 2 and 4 frames to respond to an input, depending on framerate, how the game is coded, how fast the simulation is running and so on. Sometimes it's even worse – it depends on the genre and how fast the game needs to react. MMOs and RTSes, for example, don't need to respond in the same way that driving, fighting and FPS games do. The faster your game responds, the crisper the control feels.

Anyway, let's assume that it takes 3 frames at 50fps to actually show a response, which is what most of the best games do.

We are now at roughly 10 frames. That's about a 6th of a second. Best-case scenario.
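Adding it all up – same back-of-envelope figures as above (80ms for video, 32ms for input, 3 frames for the game to react), nothing measured:

#include <cstdio>

// Best-case latency budget from the argument above. These are the assumed
// figures from the text, not measurements of OnLive itself.
int main() {
    const double frame_ms = 1000.0 / 60.0;       // ~16.7ms per frame at 60fps
    const double video_ms = 80.0;                // server renders, compresses, streams the frame
    const double input_ms = 32.0;                // your keypress travelling to the server
    const double game_ms  = 3 * (1000.0 / 50.0); // the game itself reacting (3 frames at 50fps)

    const double total_ms = video_ms + input_ms + game_ms;
    std::printf("Total: %.0fms = %.1f frames at 60fps = about 1/%.1f of a second\n",
                total_ms, total_ms / frame_ms, 1000.0 / total_ms);
    return 0;
}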

Now, that doesn't sound like much, but when you're moving a mouse around it's an eternity. It results in sluggish response, wrong decisions as you play, and a damn frustrating experience.

And that's the best case. The reality is that the real-world experience is going to be nothing like that – it'll be considerably worse.

Playing at the GDC booth I could feel the sluggish response, and those servers were 30 miles away on a dedicated line. On the real internet this is going to be terrible.

So hang on a second – how do real games do this, bearing in mind they play on servers at the other end of the intertubes? Don't they have the same problems? Well, yes, they do. But they get around it with a neat little idea called client prediction. The idea is that the game running on your machine has enough smarts to look at your input, make a best guess about what's going to happen when you press fire, and start that action on the client before the server says "yes, this is OK". Since the rendering is happening on the client and not on the server, the client can afford to do this, and if the guess is wrong (which is usually less than 5% of the time) it can just blend to the new state that the server says *is* correct.
However, once the rendering is happening on the farm you can't do this any more. No more client prediction, which suddenly makes the latency problem much more important.
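For the curious, here's roughly what client-side prediction looks like in a traditional networked game – the names and the toy "simulation" are my own illustration, not any particular engine's code:

#include <deque>

// Rough sketch of client-side prediction and reconciliation.
struct Input { float move_x = 0.0f; bool fire = false; unsigned sequence = 0; };
struct State { float pos_x = 0.0f; };

class PredictingClient {
public:
    // Called every local frame: apply the input immediately so the player sees
    // an instant response, and remember it so it can be replayed later.
    void ApplyLocalInput(const Input& in) {
        predicted_ = Simulate(predicted_, in);  // respond NOW, don't wait for the server
        pending_.push_back(in);                 // keep it for reconciliation
        SendToServer(in);                       // the server will confirm (or not) later
    }

    // Called when the server's authoritative state arrives, a round trip later.
    // If the guess was right (the common case) this is almost a no-op; if not,
    // take the server's state and re-apply the inputs it hasn't seen yet.
    void OnServerState(const State& authoritative, unsigned last_acked_sequence) {
        while (!pending_.empty() && pending_.front().sequence <= last_acked_sequence)
            pending_.pop_front();
        State corrected = authoritative;
        for (const Input& in : pending_)
            corrected = Simulate(corrected, in);
        predicted_ = corrected;                 // in practice you'd blend, not snap
    }

private:
    static State Simulate(State s, const Input& in) {
        s.pos_x += in.move_x;                   // stand-in for the real game step
        return s;
    }
    void SendToServer(const Input&) { /* network send elided */ }

    State predicted_;
    std::deque<Input> pending_;
};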

The trouble is that as a business idea it's great – it brings high-end graphical loveliness to the masses. But it's built on a promise – "we can manage the latency" – that simply can't be kept right now: there's too much crappy hardware out there in internet land that just won't let this work.

I don't know what else to say. This might work for some MMOs and casual games that don't require fast reaction times, but for everything else? Good luck.

My sad prediction is that this, unfortunately, is stillborn, which is a shame because a) we as an industry could really use this – it's a great idea from lots of perspectives – and b) it's going to burn a lot of investor money, and those investors won't fund stuff like this again.

I hope I’m wrong.
