Cloud Gaming: Present and Future

Are we ready for the next video game revolution?

Last June, we announced the creation of our new studio in Sherbrooke. At the same time, we also revealed the projects we were going to work on. Such transparency is rather unusual for our industry, which is accustomed to keeping a certain level of secrecy about projects in production. In this case, however, while not going into the details, we wanted to make our ambitions clear. Eidos-Sherbrooke is a studio focused on technological innovation for the video games of the future. Specifically, this article presents our first three technology projects, all of which are in the field of cloud gaming.

 

The different faces of cloud gaming

Offering fully remote, simulated games on demand is nothing new. Today we think of Google’s Stadia or Microsoft’s xCloud project, but these two giants are not the first to offer this possibility. In 2009, OnLive announced a cloud-based gaming service that did not require any game downloads and, even better, promised players access to new titles through a cheap micro-console, without the need to invest in expensive hardware. In 2010, Gaikai showed a similar technology. A little later, both companies were acquired by Sony. Gaikai became PlayStation Now and marked the arrival of a major player in cloud gaming. From there, cloud gaming has continued to follow a more or less unchanged trajectory.

Today, offers have multiplied. In addition to PlayStation Now, Stadia and xCloud, new services and technologies have appeared: GeForce Now (Nvidia), Steam Cloud Play (Valve), Luna (Amazon), Shadow and Parsec, to name a few. All of these services and technology providers make more or less the same promise: you don’t need a state-of-the-art console or a powerful PC to play the latest and most resource-hungry games. The premise is simple: all you need is a basic computer or micro-console with a mouse, keyboard or controller to go with it. In addition to the hardware, of course, you’ll need a decent internet connection. Commands are sent from your controller or keyboard/mouse over the internet to a data center near you, where servers run the game simulation and send the video back to your screen. The challenge is no longer to have a fast machine at home, but rather a fast connection with as little latency as possible.
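To make that loop concrete, here is a minimal sketch of the round trip described above, with the network and the data center stubbed out as local functions. All names, timings and the frame budget are illustrative assumptions, not any real service’s API.

```python
import time

FRAME_BUDGET_MS = 1000 / 60  # roughly 16.7 ms per frame at 60 fps

def read_controller_input():
    """Stand-in for polling the player's controller or keyboard/mouse."""
    return {"stick_x": 0.3, "jump": False}

def send_to_data_center(inputs):
    """Stand-in for the uplink; in practice this would be a small network packet."""
    return inputs  # pretend the packet arrived

def simulate_and_render(inputs):
    """Stand-in for remote servers running the game and encoding the video."""
    time.sleep(0.004)  # pretend simulation + encoding took 4 ms
    return b"<encoded video frame>"

def present_frame(frame):
    """Stand-in for decoding and displaying the streamed frame locally."""
    pass

for _ in range(3):  # a few iterations of the input -> simulation -> video loop
    start = time.perf_counter()
    frame = simulate_and_render(send_to_data_center(read_controller_input()))
    present_frame(frame)
    elapsed_ms = (time.perf_counter() - start) * 1000
    print(f"round trip: {elapsed_ms:.1f} ms (budget {FRAME_BUDGET_MS:.1f} ms)")
```

In a real deployment, the two transfers in the middle of that loop cross the internet, which is why connection latency, not local hardware, becomes the limiting factor.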

Today, cloud gaming’s appeal is touted mainly for its practicality: everything needed to get started is often already at home or cheap to acquire, there are no downloads, the game library is easily accessible from anywhere, and so on. Sure, the Stadia platform also offers some value-added features (broadcasting on YouTube with spectator participation, in-game help via the Google Assistant, etc.), but these do not make a major difference, nor do they seem to have convinced a significant number of players to adopt the platform. To this day, despite all its technological accomplishments, Stadia struggles to find an audience and to capitalize on its strengths to bring players to cloud gaming.

As with all creative industries, content is the key issue. In the case of music, audiences switched to streaming platforms once they found most of the artists they love there. Today, the majority of mainstream artists are available on all platforms (Apple Music, Google Music, Spotify, etc.). For film and television, the market is much more fragmented. Netflix may not have every Hollywood blockbuster on demand, but it compensates with exclusive, quality content developed specifically for its platform. Disney offers a smaller catalogue, but one made up of exclusive franchises such as Marvel, Star Wars, and Pixar. Video games are no exception to this pattern. The console war that has been raging for several years is also expressed through content. To be successful, a gaming platform must offer a rich and varied selection composed of original experiences that are not found elsewhere and a well-stocked common catalog. Technology, in this context, is not an end in itself. When players pick up their controller, they want to experience immersive content that takes them to another universe. Technology is just a tool; it is nothing without content.

 

Ambitious projects

Given all this, one can ask: what does the future look like for cloud gaming? It is potentially huge, especially if we use cloud gaming to enrich the player experience. You have to go back to the roots of the experience you want to have as a player. Certainly, the practical side of not having to buy a very powerful PC to play with the best possible graphics quality is “interesting”, but it is not a major decision factor. The main opportunity in cloud gaming’s growth lies in content. Data centers offer virtually infinite resource capacity, not only because they can host very powerful servers, but also because those servers are all connected to super-fast local networks and can therefore collaborate to create unique game simulations. It is this strength that inspires our studio to explore where it can trailblaze along this exciting new frontier. We can now free our content creators from the constraints of machine power. In addition, a decentralized content creation environment will ultimately make it easier for talent to collaborate and, who knows, perhaps even open our creative tools to players.

We have announced three technology projects that depend on the computing power of data centers, and each project uses that power in its own way.

Realtime geo-morphing

The real-time geo-morphing project looks at all the transformations in a simulated world that are currently too expensive to compute on a single machine. Of course, as machines become more powerful, so does the complexity of the algorithms they can run. That said, our goal is to target approaches that are currently far too demanding for a modern console or PC, and yet offer them to players on those same machines. The first examples that come to mind revolve around the simulation of physics in video games. We have made giant strides in game physics in recent years. When I was working on Speed Devils Online Racing in 2000, it was already quite a feat to afford a few rigid collisions between vehicles and inanimate objects on a Dreamcast console. Today, everything explodes everywhere in video games: the mass destruction of objects and visual effects are omnipresent in modern titles. However, even in these examples, the destruction has minimal impact on the gaming experience, and we are far from having the CPU budget to run simulations of soft bodies or fluids that would affect the course of the game.

With this project, we imagine simulating most of the game locally (on a console or PC) as we do right now, while sparing the local machine the most complex operations by running them on remote servers. The algorithms that replicate realistic, though complex, physical principles are usually highly parallelizable, so multiple servers can contribute simultaneously to the calculation and produce the result in a fraction of the time. The result can be a point cloud, an animation, textures or any other content format. The challenge for this project is to execute this loop of sending gameplay inputs, calculating the simulation and returning the result with a latency small enough that the entire process is imperceptible to the player.
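As a rough illustration of the offloading idea, here is a minimal sketch in which local worker processes stand in for remote servers on the data center network: a toy “relaxation” pass over a point cloud is split into chunks, solved in parallel, and stitched back together. The chunking scheme, worker count and update rule are illustrative assumptions, not our actual pipeline.

```python
from concurrent.futures import ProcessPoolExecutor
import time

def relax_chunk(points):
    """Toy stand-in for one slice of an expensive physics solve."""
    return [(x * 0.99, y * 0.99, z * 0.99) for (x, y, z) in points]

def remote_geomorph_step(points, workers=4):
    """Split the point cloud, solve each slice 'remotely', and stitch the results."""
    size = max(1, len(points) // workers)
    chunks = [points[i:i + size] for i in range(0, len(points), size)]
    with ProcessPoolExecutor(max_workers=workers) as pool:
        solved = list(pool.map(relax_chunk, chunks))
    return [p for chunk in solved for p in chunk]

if __name__ == "__main__":
    cloud = [(float(i), float(i), float(i)) for i in range(100_000)]
    start = time.perf_counter()
    cloud = remote_geomorph_step(cloud)
    print(f"step latency: {(time.perf_counter() - start) * 1000:.1f} ms")
```

In the real system, the measured latency would also include the network round trip to the data center, which is exactly the budget the project has to keep imperceptible.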

Voxel-based raytracing

 

For our voxel-based raytracing project, we want to combine two research fields that revolve around modeling and rendering the world. Unlike the real-time geo-morphing project, the idea here is to rely on a streaming platform such as Stadia, xCloud or GeForce Now that takes full control of the game, so that all of the game simulation is calculated in the data center. In truth, neither raytracing nor voxels are new; both have been around for decades. However, each comes with its own limiting set of requirements: voxels require large amounts of memory, while raytracing requires high levels of computational power. Because of these restrictions, simulating worlds with these approaches is currently not possible, even on the most powerful PCs of today. As of now, raytracing in games is used in addition to classical rendering techniques, especially to display shadows, reflections or other partial elements, which are then superimposed on the image at the end of the rendering of the current frame.

Thankfully, these limitations can be overcome through cloud computing. The amount of memory is no longer really a problem, and you can access GPUs in sufficient quantities to, in theory, display a scene of any complexity.
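As a toy illustration of the voxel side of this combination, here is a minimal sketch that marches a ray through a small voxel grid until it hits a filled cell; in the cloud, the same traversal could apply to grids far larger than any local GPU’s memory. The grid size, scene and step length are illustrative assumptions.

```python
GRID = 32  # a tiny 32^3 voxel world; a cloud-hosted grid could be vastly larger

def voxel_filled(x, y, z):
    """A sphere of voxels centered in the grid (toy world representation)."""
    cx = cy = cz = GRID / 2
    return (x - cx) ** 2 + (y - cy) ** 2 + (z - cz) ** 2 <= (GRID / 4) ** 2

def march(origin, direction, max_steps=200, step=0.5):
    """Walk along the ray in small steps until a filled voxel is hit."""
    px, py, pz = origin
    dx, dy, dz = direction
    for i in range(max_steps):
        vx, vy, vz = int(px), int(py), int(pz)
        inside = 0 <= vx < GRID and 0 <= vy < GRID and 0 <= vz < GRID
        if inside and voxel_filled(vx, vy, vz):
            return (vx, vy, vz), i * step  # hit voxel and distance travelled
        px, py, pz = px + dx * step, py + dy * step, pz + dz * step
    return None, None

hit, dist = march(origin=(0.0, 16.0, 16.0), direction=(1.0, 0.0, 0.0))
print("hit voxel", hit, "at distance", dist)
```

A production renderer would trace millions of such rays per frame across many GPUs, which is precisely where the data center’s pooled memory and compute come in.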

Multi-node game engine

Our multi-node game engine project, contrary to what the name may imply, is not about creating a game engine from scratch but rather about inventing the gaming experiences of the future. The abundance of servers available in a data center makes it possible to consider distributing the simulation across many computational units. This approach offers us the opportunity to create massive interactions involving every facet of a game engine. Imagine being chased by thousands of zombies through a maze of streets, or taking part in an intense chase on an ultra-dense highway where each vehicle is simulated in a unique way. The other benefit of this technology is that it offers de facto persistence of the world: the game can continue to evolve once the player leaves.
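To sketch the distribution idea, here is a minimal example that shards a zombie horde across a few “nodes”, each of which here is just an object owning a slice of the entities; in a data center, each node would be a separate server on the local network, stepping its slice in parallel. The sharding scheme, node count and movement rule are illustrative assumptions.

```python
import random

class SimulationNode:
    """Owns and advances one slice of the world's entities."""

    def __init__(self, zombies):
        self.zombies = zombies  # list of (x, y) positions owned by this node

    def step(self, player_pos):
        """Move every zombie owned by this node one notch toward the player."""
        px, py = player_pos
        self.zombies = [
            (x + 0.1 * (px > x) - 0.1 * (px < x),
             y + 0.1 * (py > y) - 0.1 * (py < y))
            for (x, y) in self.zombies
        ]

horde = [(random.uniform(-100, 100), random.uniform(-100, 100)) for _ in range(10_000)]
nodes = [SimulationNode(horde[i::4]) for i in range(4)]  # shard the horde over 4 nodes

for tick in range(3):
    for node in nodes:  # in the data center, these steps would run in parallel
        node.step(player_pos=(0.0, 0.0))

print("zombies simulated per tick:", sum(len(n.zombies) for n in nodes))
```

Because the simulation lives on those nodes rather than on the player’s machine, it can keep ticking after the player disconnects, which is what gives the world its persistence.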

Given all that we have described for the previous projects, the question is no longer whether they are feasible using the firepower of the cloud, but rather which of these examples offer an interesting experience to the player. And in doing so, we will have demonstrated that the model is economically viable.

 

Beyond technology

Apart from the purely technological problems that we will have to solve, there are also economic issues. Besides their poorly stocked game catalogs, one of the reasons for the downfall of some cloud gaming services is that they allocated a lot of computing resources to a small core clientele. It is easy to understand that offering every player the equivalent of a high-end gaming PC will cause costs to increase very quickly. This problem is exacerbated by the fact that, for a given region, there are peaks in demand (e.g. in the evening) and the system needs to be sized to absorb these peaks.

The system must therefore be able to modulate the use of resources within a gaming session. We need to be able to pool IT resources and divide the use of this pool according to demand. Naturally, in most video games, demand itself can be quite dynamic, with intensive spikes and moments of rest within a gameplay sequence. It is clear that opportunities to find such a balance exist, but they are, of course, not always trivial to implement.
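A minimal back-of-the-envelope sketch of the pooling argument follows: instead of reserving a full GPU per player, size the pool to the peak of the summed demand across all sessions. The per-session demand curves below are made-up illustrative numbers, not measurements.

```python
import random

random.seed(1)
PLAYERS = 200
TICKS = 100  # samples across a play session

# Each player's GPU demand fluctuates between quiet moments and intense spikes.
demand = [[random.choice([0.2, 0.3, 0.4, 1.0]) for _ in range(TICKS)]
          for _ in range(PLAYERS)]

dedicated = PLAYERS * 1.0  # one full GPU reserved per player, sized for each player's peak
pooled = max(sum(d[t] for d in demand) for t in range(TICKS))  # peak of the summed demand

print(f"dedicated capacity: {dedicated:.0f} GPUs")
print(f"pooled capacity:    {pooled:.0f} GPUs")
print(f"savings:            {100 * (1 - pooled / dedicated):.0f}%")
```

The savings come from the fact that players’ spikes rarely all line up at once; the harder part, as noted above, is moving sessions onto and off of shared hardware fast enough that no one notices.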

 

The future data center

Just as our PCs and consoles evolve in cycles, data centers embrace technological change. New processors, graphics cards and other specialized hardware are released by manufacturers (Nvidia/ARM, Intel, AMD, etc.) on an annual basis, allowing the computing power of data centers to be upgraded more frequently than even the personal equipment of players.

 

The local network of data centers is also evolving rapidly. Companies like Mellanox (acquired by Nvidia last year) promise latency of less than 100 nanoseconds, the same order of magnitude as the time it took to exchange information across a computer motherboard 20 years ago. It is therefore becoming realistic to consider several servers on such a network, along with many coprocessors, contributing to the same simulation.

The greatest revolutions, however, may already be upon us. There is a lot of talk about 5G mobile networks, but in the minds of the general public, the visible part of this new technology is more or less a dramatic acceleration of existing network speeds. Arguably, though, the most interesting part of this technology is the processing of data at the periphery of the network (edge computing). In this model, a process that would typically run on a terminal (mobile phone, computer, console, etc.) can be decentralized to processors connected to the local 5G network (local antennas or computing centers). This decentralization multiplies the computational power available and will enable us to carry out operations that are currently infeasible on modern machines.

This decade will probably also see the advent of the quantum computer. For now, this upcoming technology is restricted to laboratory prototypes, but in 3-5 years it could contribute to practical applications and become a powerful enough tool to threaten our encryption algorithms (according to some experts, RSA-2048 keys, the current gold standard for encryption, could be broken by quantum computers by 2030). Without being dramatic, we can clearly see that it will offer computing power unparalleled by our current technology.

So what gaming experiences can we afford to do on such machines? This is one of the big questions we’re going to dig into! 

 

Author

Julien Bouvrais is now the head of Eidos-Sherbrooke, the research and development studio in charge of accelerating the technological innovation of the Eidos-Interactive group.

When Eidos-Montréal was founded in 2007, Julien joined the founding team as programming director for Deus Ex: Human Revolution and later became the company’s Chief Technology Officer. In this role he supervised the departments of research and development, artificial intelligence, engine creation and tools.

Along with his teams, Julien is now working on the technologies that will be at the heart of tomorrow’s video games. 