The PS3 Skyrim Lag

In November 2011, Bethesda Softworks released the fifth instalment of its Elder Scrolls role-playing videogame series. Skyrim launched to universal critical acclaim, obtaining a Metacritic score of between 92 and 96/100 depending on format. Famitsu, considered the most widely read and respected videogame magazine in Japan, gave Skyrim a perfect score, making it the first western videogame to receive a 40/40 rating. A month after launch, it had sold over 7.9 million copies1.

To call the game ambitious would be an understatement. Bethesda employed over seventy actors (including the talents of Christopher Plummer, Max von Sydow and Joan Allen) to record the voices of non-player characters in the game, with over 60,000 lines recorded in total. It is estimated that to complete all of Skyrim's 250 or so quests and see its 300+ points of interest would take the average gamer around 250-300 hours.

However, it didn't take long for reports to emerge of PS3 players, who had invested a considerable amount of time into the game, experiencing significant reductions in graphical frame rates (often to zero refreshes per second) and poor controller response. Bethesda released a patch to fix this issue in December and, whilst this seemed to improve the overall frame rate, graphical lag was still evident for gamers who had saved their progress at the 60 hour mark or beyond. Word quickly spread on the internet that, to all intents and purposes, the game was unplayable for the players who had invested the most time in it: in other words, the target audience.

At the time of writing, further patches have been promised which may solve the problem. With no official word from Bethesda, commentators across the internet have been speculating what could have caused the latency issues, with many pointing to the way the game may interact with the internal architecture of the PS3 itself.

In the Skyrim game world, actions have consequences - if the player's character interacts with any other characters, objects or elements, the game stores the outcome so that a plundered treasure chest doesn't magically refill or a burning house suddenly reappear without so much as a scorch mark. All this information needs to be stored in memory and - like a giant database - may require greater processing the longer the game goes on. Coupled with this is the fact that, unlike the Xbox 360, the PS3's memory architecture is split and the footprint of its operating system is comparatively large (even though it does not currently use much of what has been assigned to it), leading to low bandwidth between its processor and its memory. Precise database management (including organising what, if anything, can be deleted or reused once interacted with) for a huge save file would, so the theory goes, have clear implications for the way the game performs, given the bottleneck between CPU and VRAM. It is perhaps simply, as Tom Morgan of Eurogamer.net put it, "an unbounded game running on [a] space-constricted system."

Online latency

Latency is not just confined to the delay experienced in an internal hardware system; more usually, it refers to delays experienced in online gaming due to connection speeds and geographical factors. Essentially, latency (or "ping time") is the time taken for data of a set size (a "data packet") to cross from the sender to the receiver and is measured by the time taken for a data packet to be sent to its destination and a response to be received (a "round-trip").
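A minimal sketch of this round-trip measurement, timing a fixed-size packet against a local echo service (the function names and the loopback set-up are illustrative, not from any particular tool):

```python
import socket
import threading
import time

def echo_once(sock):
    """Echo a single datagram straight back to its sender."""
    data, addr = sock.recvfrom(1024)
    sock.sendto(data, addr)

def measure_rtt(payload=b"x" * 64):
    """Send a fixed-size "data packet" and time the round trip, in ms."""
    server = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    server.bind(("127.0.0.1", 0))          # OS picks a free port
    threading.Thread(target=echo_once, args=(server,), daemon=True).start()

    client = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    start = time.perf_counter()
    client.sendto(payload, server.getsockname())
    client.recvfrom(1024)                  # block until the echo returns
    rtt = time.perf_counter() - start
    client.close()
    server.close()
    return rtt * 1000                      # milliseconds

print(f"loopback round-trip: {measure_rtt():.3f} ms")
```

Over loopback the figure is a fraction of a millisecond; the same measurement against a remote host is, in essence, what "ping" reports.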

Latency is present in any transmission of data, whether the distance involved is a few centimetres (between components of a computer), a few metres (between computers in the same room) or thousands of miles (over the internet). The causes of latency can be grouped into three broad categories:

Propagation latency

This is the time it takes for a signal to travel from one end of a communication link to the other. There are two factors that determine propagation latency: distance and speed. Copper wire and fibre optic cable have differing properties but carry a signal at roughly 67% the speed of light. A straight-line communication link between New York and London of around 5,600km would have a propagation latency of approximately 28ms one-way and 56ms round-trip.
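The New York to London figures above can be reproduced with a short calculation (the constants and function name here are illustrative):

```python
SPEED_OF_LIGHT_KM_S = 299_792    # speed of light in a vacuum, km/s
FIBRE_FACTOR = 0.67              # signals in fibre travel at roughly 67% of c

def propagation_latency_ms(distance_km, velocity_factor=FIBRE_FACTOR):
    """One-way propagation delay over a link of the given length, in ms."""
    speed_km_s = SPEED_OF_LIGHT_KM_S * velocity_factor
    return distance_km / speed_km_s * 1000

one_way = propagation_latency_ms(5_600)     # straight-line New York-London
print(f"one-way:    {one_way:.1f} ms")      # approximately 28 ms
print(f"round-trip: {2 * one_way:.1f} ms")  # approximately 56 ms
```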

Wireless methods of transmission (commonly radio waves) carry a signal at close to the full speed of light, although practical performance varies with frequency and the relative degree of "interference"; signals can also be intermittent and the distances involved are usually far greater than with cable.

Propagation latency can be minimised by finding the fastest method of transmission and the shortest distance between communicating devices.

Transmission latency

Transmission latency is the delay experienced in transmitting quantities of data across a communication link. Every communication link has a "speed" which measures the amount of data per second the link is capable of transferring (for example, 10 Mbit/sec). The slower the communication link, the longer it takes to send data across the link, and the higher the transmission latency. Conversely, the faster / higher capacity the communication link, the quicker the data is sent, and the lower the transmission latency.
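That relationship can be sketched with the textbook serialisation-delay formula, packet size in bits divided by link speed (the function name is illustrative):

```python
def transmission_latency_ms(packet_bytes, link_mbit_s):
    """Time to clock a packet of the given size onto the link, in ms."""
    bits = packet_bytes * 8
    return bits / (link_mbit_s * 1_000_000) * 1000

# a 1,500-byte packet (a typical Ethernet frame) on links of differing speed
for speed in (10, 100, 1000):
    print(f"{speed:>5} Mbit/s: {transmission_latency_ms(1500, speed):.3f} ms")
```

On the 10 Mbit/s link of the example the packet takes 1.2 ms to transmit; at 100 Mbit/s, a tenth of that.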

Processing latency

Processing latency is the delay caused by hardware and software interacting with a data packet. All hardware and software which interacts with a data packet being sent or received causes processing latency. In some cases - such as the Skyrim example - this can be entirely internal and relate to the relationship between processing power and how memory is accessed.

Importance of latency

In the world of international online multiplayer services like Xbox Live and the PlayStation Network, particularly fast-paced FPS titles where the difference between clocking up a hit and staring at a Killcam replay can be measured in milliseconds, minimising latency is key. The three crucial technical challenges are: (i) to reduce latency as far as possible; (ii) to ensure transmission times are consistent, so data is both fast and predictable; and (iii) to incorporate sophisticated prediction technologies so that all players experience the same events (and their consequences) happening at the same time.
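One common form of such prediction is dead reckoning, where each client extrapolates a remote player's position from the last state it received rather than freezing while it waits for the next packet. A minimal sketch (the function name and the coordinates are hypothetical):

```python
def predict_position(last_pos, velocity, elapsed_s):
    """Dead reckoning: extrapolate a remote player's position from the
    last received position and velocity, for the time elapsed since."""
    return tuple(p + v * elapsed_s for p, v in zip(last_pos, velocity))

# last update said the player was at (10.0, 4.0) moving at (3.0, 0.0)
# units/s; 100 ms later, with no new packet, draw them here instead:
print(predict_position((10.0, 4.0), (3.0, 0.0), 0.1))  # roughly (10.3, 4.0)
```

When the real update finally arrives, the client corrects any drift, ideally smoothly enough that the player never notices.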

The majority of online games use one of three distinct network models:

Dedicated server

As used by titles such as Electronic Arts' Battlefield 3, all decisions and output are driven by a dedicated central server to which all players are attached. This requires a great deal of investment in infrastructure, and has the risk for players that they will no longer be able to enjoy the multiplayer aspects of the game if retaining server space becomes financially unviable for the publisher.

Peer-to-peer / P2P

This is where one player hosts the game, and game data is streamed from that player to the next, and so on. P2P works best for turn-based and RTS games, where a simulation of all players' moves can be rendered on each terminal, as the peer with the poorest bandwidth tends to set the level for all players.

Client / server

A client / server network architecture is a mix of the two previous methods, where one player serves as the centralised server for the other players. The quality of the game experience depends on the connection between each client and the server rather than on the most-lagged peer. Valve's Left 4 Dead 2 uses the client / server method.

Measuring delay

Even taking into account advances in compression technology and prediction techniques, latencies between players can vary dramatically no matter which type of network architecture is being used, simply because of the geographical distance between them. Recent Digital Foundry analysis2 of round-trip latencies between gamers in the UK and Tel Aviv revealed lags of between 300ms and 500ms in certain cases.

Cloud gaming

With the rise of cloud computing, it was not long before providers looked to make cloud gaming a reality, removing the need to install huge software files or to upgrade expensive hardware to play the latest titles. As services such as OnLive (which recently launched a game streaming app for Android, with an iOS version pending Apple approval at the time of writing) operate by streaming video back to the player of the consequences of their inputs as determined by OnLive's central servers, they will live or die by how they can combat the effects of latency.

Approxy, a new cloud-based gaming company, seems to believe it has the answer. In a recent interview with MCV, Approxy's COO Dr Bartu Ahiska boasted of the system's ability to deliver full 1080p, 3D stereoscopic games "without latency no matter what your broadband speed". "As we stream game software," he added, "[we offer] instant playing, HD, no installation, no .dll conflicts, no exceeding ISP caps, no latency and no need to upgrade server architecture every two years."3

This all sounds extremely impressive, but with the service yet to launch, certain providers are looking further afield for potential solutions.

Learning from the financial sector?

Unsurprisingly, latency is not just a problem for the videogame sector. The more time-sensitive data is, the more important it is to reduce latency, and to date the need for low latency has arisen most prominently in the financial services sector.

The days of the raucous trading floor are long gone. In many liquid markets today, electronic contracts are matched via software, housed within gently humming racks of hardware, hosted in vast data centres occupied only by a handful of engineers and connected around the globe via a web of fibres. In many cases, even traders at the end of computer screens have been supplanted by software that executes orders either partially or wholly on the basis of computer generated decisions driven by an algorithm.

Technology has enabled new trading methods, which assess all available information almost instantaneously, implement pre-programmed trading strategies and execute the best available transaction, all within milliseconds. This is best exemplified by "high-frequency traders", who create profit by frequently trading and capitalising on extremely short-term opportunities, available often only for a matter of milliseconds. In this way they can continually exploit shifting share prices and profit immediately as prices move by fractions over short periods of time. Not all trading strategies require latency to be reduced but many do and this area of the market is becoming increasingly important in terms of trading volumes and value. Market participants invest considerable amounts in technology that can reduce latency within and between front-office trading systems, because any delay between identifying a trading opportunity and execution can cost substantial amounts of money.

Whilst front-office developments lead the fight against latency, related administration and management systems need to "speed up" to prevent lag between trading and its subsequent processing and analysis. Regulators are also concerned about the potential risks of unequal trading speeds between market participants. Whilst it remains to be seen to what extent this area can be successfully regulated, as a minimum regulators will require administration, analysis and risk management systems which are "fast" and capable of effectively managing risk in the front-office. This is also fuelling spend and development driven by a need for reduced latency.

Suppliers are focussing on developing low latency targeted technology, networks and related consultancy services, and talk with a glint in the eye of the "competitive war" between their customers which is fuelling increased spend in the middle of otherwise unpredictable markets. In the words of Joseph M. Mecane of NYSE Euronext: "It's become a technological arms race, and what separates winners and losers is how fast they can move".

What are the legal implications?

Traditionally, very few game development contracts have addressed latency issues in any detail. However, the commercial risks and approaches are changing and provisions specifically dealing with lag are beginning to creep into agreements.

Matching the contractual to the technology

When drafting contracts where latency is an issue, it is absolutely critical that publishers and lawyers work closely together at all times. The devil is in the detail and in this case the detail can be extremely technical and complex. Where latency is an issue, lawyers and commercial teams should consider using the highest degree of specificity possible, especially in relation to technical descriptions. This will enable the contract to clearly identify who is responsible for what. It will also help ensure in respect of each technical component that it is clearly warranted to perform to the required standard within the exact context in which it will be used.

The key task is to ensure that each relevant contract identifies and deals with each and every form of potential latency risk within the game to be developed. This will inevitably involve analysing the entire round trip of a data packet and ensuring that every potential cause of latency is identified and appropriately accounted for in all relevant contracts, including at the development stage.

Liability

If game development is sub-contracted, the sub-contractor may typically seek to exclude all losses, whether direct or indirect, which fall under the headings of "loss of profits", "loss of revenue", "loss of business opportunities" and "damage to business reputation or goodwill", together with any and all indirect losses. The majority of losses likely to be incurred as a result of a latency failure will, of course, typically constitute these types of loss.

Development contracts may also traditionally set service credits and liability caps that are relative to the amount of money being paid under the contract – and as such, traditional remedies for latency failures are unlikely to be proportionate to the losses actually suffered as a result of the failure. There will therefore often be a significant gap between the amount of loss resulting from a latency failure and the amount of loss that is recoverable under the contract.

Live issues

Latency issues are a continual concern when a game is in live use, whether in terms of solving high latency "spikes" or seeking generally to improve performance. Lawyers must therefore seek to ensure each contract contains clauses which enable flexibility, improvement and management of a developer in a way which mirrors the developer's own IT procedures and practices.

Where multiple developers are involved it is important to include cohesive contractual mechanisms in each relevant contract, to enable co-ordinated management of each developer within the chain, so as to assist in quickly identifying and fixing latency problems, in a co-ordinated manner.

Incorporating standardised "change control", "governance", "continual improvement", "technology refresh" and "bench-marking" clauses may not be adequate. Where latency is critical, processes and procedures may need to be faster and more flexible than traditional development contracts provide for and so some tailored drafting is often required. For example, continual improvement services might be managed by having small groups of developers working together on a daily and continual basis in a way that is similar to an agile development project. Meetings must be regular and reporting lines clear. Individuals are allocated specific issues to consider within a timeframe and then given freedom to consider potential solutions before reporting back. Good ideas are quickly identified and resources allocated quickly to developing appropriate solutions within a very short timeframe, often in the same day.

Whatever systems and processes a development team seeks to employ to combat latency should ideally be captured by lawyers within the contract, so as to ensure all parties' obligations and responsibilities continue to be clearly set out as the game develops.

Measuring latency

When considering latency, the key metrics are likely to focus on both consistency and speed. It can be difficult to agree upon a methodology for measuring latency and this will require detailed attention and updating throughout the contract period. What is perhaps different is the size and complexity of affected systems, the degree of detail involved, the speed of maintenance services and the continual need for change.

Consideration should also be given to how frequently latency is measured. Infrequent measurement, or an average of measurements, will only provide a "snapshot" of latency. Jitter (that is, the extent to which latency varies over a given period of time) will be missed if measuring in this way. Consistency and a level playing field for gamers are key.
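One simple way to quantify jitter is the spread of a series of latency samples rather than their average; a sketch using the standard deviation (the sample data is invented for illustration):

```python
import statistics

def jitter_ms(samples):
    """Jitter as the standard deviation of latency samples, in ms."""
    return statistics.stdev(samples)

# two connections with the same average latency but very different consistency
steady = [50, 51, 49, 50, 50, 51, 49, 50]
bursty = [20, 95, 30, 85, 25, 90, 35, 20]

for name, samples in (("steady", steady), ("bursty", bursty)):
    print(f"{name}: mean {statistics.mean(samples):.1f} ms, "
          f"jitter {jitter_ms(samples):.1f} ms")
```

Both connections average 50ms, yet one would feel smooth and the other unplayable - which is exactly the distinction an averaged service level would miss.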

While the above considerations are similar in other forms of IT agreements, a key differentiating factor is the potential technical difficulty involved in measuring latency. Latency measurement is therefore a key contractual point to fully detail and agree pre-contract for an effective service level mechanism.

For the gamer, an experience that is slow - or merely slower than rivals' - can effectively ruin a game. Therefore what constitutes a "service failure" in traditional IT contracts (unavailability) is not an adequate definition for a low latency gaming solution. Gaming contracts where latency concerns are paramount must stipulate that high latency and/or high jitter, even if all the infrastructure functionality is technically "available", will in itself constitute a service failure.

Lawyers contracting for technology in those sectors already affected heavily by latency (such as financial services) are familiar with the issues and are re-thinking traditional technology contracts to suit the new environment. Given the demand for fairly-balanced online multiplayer or cloud based games, lawyers acting for games developers and publishers will also need to understand the technical issues around latency, and how to ensure development contracts provide both realistic protection and the ability to effect a technical solution.

Footnotes

1. According to www.vgchartz.com

2. See The Lag Effect: PSN and Xbox Live Analysed by Richard Leadbetter, 6 December 2011 www.eurogamer.net

3. See http://www.mcvuk.com/news/read/new-cloud-gaming-firm-materialises/087960

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.