Jim Poole sees a lot of game traffic crossing his network, but it isn't always easy to identify as games. Poole is a senior executive at Equinix, one of the companies that supports the Internet's backbone.
Equinix runs data centers filled with rows of server racks, each with processors and graphics processing units (GPUs) capable of running games for cloud gaming companies such as Blade, which began offering a paid service that lets players play high-end games on any device. This is the same type of service that Google, using its own network, will provide with Stadia, unveiled at the Game Developers Conference. At GDC, GamesBeat also hosted a breakfast on multiplayer gaming.
I talked with Poole about the state of the internet and how companies like his host the games that we play and take for granted will always be connected.
Here is a transcript of our interview.
Above: Jim Poole is Vice President of Business Development at Equinix.
Image Credit: Equinix
GamesBeat: It will be a pretty busy show this year.
Jim Poole: Many things are happening. In my first job at Equinix, I was actually covering gaming. That's probably one of the reasons I'm talking to you. [laughs] What can I tell you?
GamesBeat: One example that interests me is the contrast you see in the market. Electronic Arts has had two launches in the last five or six weeks. Apex Legends went very well, from zero to 50 million players in a month for a game that wasn't announced in advance. Some things about the game's design made that possible.
Two weeks later they also had a heavily promoted release, Anthem, whose launch was rather rocky. Players felt the game wasn't ready. It was an interesting contrast in what's possible with big online games. Do you have a sense of why things like that happen?
Poole: It's not specific to EA, but more generally, what I've noticed is that some companies understand the dynamics of the underlying network: what has to happen in the network for a game to scale out, at least architecturally, in terms of delivering the right bitrate and latency. That's obviously a big deal.
With many multiplayer games, especially early on, there was no centralized coordination. If you wanted to set up a Doom server back in the day, you did it yourself. The farther you got from the original server, the worse the experience. Nowadays more people are aware of this, though how far they go in addressing it varies. It's a bit like cloud computing, an issue we deal with all the time. Take an Amazon or a Microsoft. When they started, they had two or three availability zones in a given part of the country or the world. Now they're built for distribution, with 20-some regions each. Gaming is no different. I always like to joke that you can't beat physics. You can't solve the speed of light.
Above: Equinix in Silicon Valley.
Image Credit: Equinix
GamesBeat: Does that mean the equivalent of 20 data centers operating across the country, or something else?
Poole: The biggest dependency for all these games is access to a peering infrastructure. The vast majority of these games are played over the Internet. The easiest way to get localized access to a large number of eyeballs is to participate in many peering exchanges. From our point of view, that's good, because we're the largest peering provider in the world. We operate more exchanges than anyone else.
In fact, there's something you can try. I don't know if you've ever played with PeeringDB? It's a website, a public list of all the peering fabrics, all the peering exchanges, and all the participants. You can go in by market or by company, type in a name, and see where they are globally. That will show you the difference between one company and another. You can type in one gaming company and it shows up in four places; type in another and it shows up in 20. In general, you'll notice that the guy in 20 places has better performance than the guy in four.
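PeeringDB also exposes those listings through a public REST API, so the lookup Poole describes can be scripted. Here is a minimal sketch; the company name is a placeholder, and the field names assume PeeringDB's current schema:

```python
# Query PeeringDB's public API for a network and count where it peers.
# The search term is a placeholder; field names follow PeeringDB's schema.
import requests

name = "Example Gaming Co"  # hypothetical company name
nets = requests.get(
    "https://www.peeringdb.com/api/net",
    params={"name__contains": name},
    timeout=10,
).json()["data"]

for net in nets:
    # Each network's exchange presences live in its netixlan records.
    ix = requests.get(
        "https://www.peeringdb.com/api/netixlan",
        params={"net_id": net["id"]},
        timeout=10,
    ).json()["data"]
    print(f"{net['name']}: present at {len(ix)} exchange ports")
```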
GamesBeat: I've been to that big building you have in San Jose. Is that what you mean by a peering exchange?
Poole: A peering exchange is literally a large set of routers installed in a data center with many participants. Peering in general came out of the realization that no network goes everywhere. For Internet traffic to pass from a Verizon user to an AT&T user, Verizon and AT&T have to peer with each other somewhere. That place tends to be an Equinix building. In that building we operate what we call a peering exchange, the hardware everyone connects to, which lets two networks exchange traffic.
Above: Equinix in Amsterdam.
Image Credit: Equinix
If you broaden that out, it's not just networks talking to networks. It's content talking to networks, or compute talking to networks. Gaming companies, cloud companies, and network companies all participate in these peering exchanges, and they're geographically distributed around the world. Taking just North America as an example, we operate peering exchanges in Toronto, New York, Washington, Atlanta, Miami, Dallas, Chicago, the Bay Area, Seattle, and Los Angeles. You can quickly see that if you deployed in all those peering exchanges and drew 10-millisecond circles around each facility, you'd reach the vast majority of the U.S. population.
That's generally the best you can do. If you think about the ideal for any kind of online content, a game or anything else, you get the best performance by being in as many peering exchanges as possible. Obviously you have to weigh the capex and opex against the benefit. Fortunately, many games are quite tolerant. It's only about 60 milliseconds across the country, and most games tolerate 60 milliseconds just fine.
What we usually see, I call them the four corners of the country from a peering point of view: the Bay Area, Washington, Chicago, and Dallas. You'll find many online games and/or cloud services in just those four markets, because once you draw those circles, you're within 20 milliseconds or less of almost anywhere in the country. That's more than enough for most big games.
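Poole's figures line up with the physics. Light in optical fiber propagates at roughly 200,000 km/s, about two-thirds of its speed in a vacuum, so a quick back-of-the-envelope check (using illustrative great-circle distances, not actual fiber routes) shows why a 20-millisecond circle covers so much ground:

```python
# Back-of-the-envelope check on the latency figures above. Light in fiber
# covers roughly 200 km per millisecond; real paths add routing and
# queuing delay on top of this physical floor.
FIBER_KM_PER_MS = 200.0

def one_way_ms(distance_km: float) -> float:
    """Theoretical minimum one-way propagation delay over fiber."""
    return distance_km / FIBER_KM_PER_MS

coast_to_coast_km = 3_900  # rough SF Bay Area to Washington DC distance
print(f"{one_way_ms(coast_to_coast_km):.1f} ms one-way")          # ~19.5 ms
print(f"{2 * one_way_ms(coast_to_coast_km):.1f} ms round trip")   # ~39 ms; ~60 ms with real-world overhead
print(f"{20 * FIBER_KM_PER_MS:,.0f} km reach on a 20 ms budget")  # 4,000 km
```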
GamesBeat: What if we then tackle the usual problems of consumer-specific products? For example, five people in a house are all watching Netflix or something like that, or trying to get into a multiplayer game over Wi-Fi, or too many routers in the house are trying to connect to the Internet.
Poole: Yes, too many hops. Things like that can exacerbate the problem. Sometimes what I've seen is that someone will deploy something figuring that as long as latency stays under 100 milliseconds, everything will be fine. They'll put something on the East Coast and something on the West Coast. But then you add all the strangeness that can happen inside the house, plus the fact that not every ISP people connect through runs on fiber. I live in Washington and I have Verizon FiOS. I have fiber straight to the house, which is great. When I lived in the Bay Area, I had Webpass, which gave me fiber all the way to that huge San Jose building where so many gaming companies sit. When I played a game, I could run a speed test and hit 900Mbps into that building. My games flew.
As a general rule, though, one way to compensate for the oddness of the home network is to deploy in more places than the bare minimum requires. I've seen games work perfectly well with a single location for the whole country. They'll work for anyone with a good broadband connection. But for those who don't have one, it's probably better if the resource they're reaching for sits closer. Over time, what you see is that awareness growing. More and more, when you look at some of the bigger players, you'll see them deploying widely. As I say, if you go to the PeeringDB website, you'll see they're in many places. That's their way of saying, "This is how I ensure a good experience for my end user."
Above: Equinix in Seoul, South Korea.
Image Credit: Equinix
GamesBeat: As far as cloud gaming is concerned, I've seen some interesting customer strategies, like Shadow. They said they would guarantee high-quality graphics by dedicating one GPU in the data center to each customer, while some others pack a large number of players onto each GPU. But that has a cost: they charge about $35 a month, almost like renting a computer.
Poole: When people talk about latency, they tend to think about transmission latency, the speed of light to my rendering device, my laptop or my iPad, and so on. What they don't necessarily think about is compute latency, which is where the GPU market has improved things tremendously. GPUs render much faster. You can take different approaches, as you say. You can dedicate a GPU and get the best performance, or run a set of virtual machines on top of the GPUs and get a different level of latency on the machine. So it's not just the transmission latency but also the processing latency, the actual compute cycle.
Again, this is one of those things where you find a balance. If you dedicate GPUs, you can get by with a few fewer locations, because you're not losing as much latency in the compute cycle. Or you can be in more places and be more multiplexed, more virtualized. At the end of the day, you're trying to manage end-to-end latency, including the Wi-Fi, the home router, the transmission to the data center, and the compute cycle.
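One way to picture the trade-off Poole describes is as a latency budget, where a dedicated GPU buys back render time while a virtualized deployment buys back distance. The numbers below are invented for illustration, not figures from the interview:

```python
# Illustrative end-to-end latency budget for a cloud-rendered frame.
# Every number here is an assumption made for the sake of the sketch.
def end_to_end_ms(wifi, home_router, access_network, transit, render):
    return wifi + home_router + access_network + transit + render

# Option A: dedicated GPU (fast render) reached via a farther site.
dedicated = end_to_end_ms(wifi=3, home_router=1, access_network=8,
                          transit=20, render=8)

# Option B: shared, virtualized GPU (slower render) at a closer site.
virtualized = end_to_end_ms(wifi=3, home_router=1, access_network=8,
                            transit=8, render=20)

print(dedicated, virtualized)  # both land at 40 ms: the trade-off in miniature
```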
GamesBeat: Given that you support some of these things, what decisions do game companies or game designers make? What do you think is under their control? Going back to the Apex Legends example, I think it's this: they have a battle royale game, a genre that usually means 100 players and one survivor. But they went down to 60 players, with a team surviving. They have a map that feels denser and is smaller than in some other battle royale games like PUBG, and the game also loads faster. Their art style is more animation-inspired than realistic. It's not Call of Duty. Those things seem to help make it what it is, a very fast-loading game.
Then you can contrast that with Anthem, also made by EA, but with very realistic graphics. The load times are very long, and the problem many players hit during the first beta weekend was that it wouldn't load at all. It never finished loading for them. Then they'd try to load again and put more stress on the system. Does that sound like something game designers can address to improve the quality of multiplayer?
Poole: That's an interesting question. To go back to the analogy, you have the compute latency that goes into how fast the game loads, how many streams have to be normalized in the game process before it can return a stream and send it back to the user. You see the same thing in generic computing. Some people understand how to write code for, say, a cloud environment that can scale automatically, as opposed to someone more used to writing code for a single server.
There are several different philosophies. One philosophy, writing for a single server, might direct a set of individual players to a single server. That can work as long as the load can be handled by that one server. Or you can write a game so it scales elegantly across a pool of virtual machines. Years ago, I sat in on a session where Microsoft was launching one of its Xbox multiplayer games. They explained how they had worked with the game developer to use Azure's auto-scaling. They showed that within five minutes of the game's launch, the system had spun up 80,000 cores to handle the load. It could do that fairly elegantly.
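Poole doesn't name the game or the exact mechanism, but the reactive scale-out he describes is easy to sketch. This is a toy illustration of threshold-based scaling with made-up capacities, not Azure's actual autoscaler:

```python
# Toy sketch of a reactive scale-out loop like the one Poole describes.
# All names and thresholds are invented for illustration.
import math

CORES_PER_VM = 16
TARGET_PLAYERS_PER_CORE = 25  # assumed capacity per core

def desired_vms(concurrent_players: int) -> int:
    """How many VMs are needed to serve the current player count."""
    cores_needed = math.ceil(concurrent_players / TARGET_PLAYERS_PER_CORE)
    return math.ceil(cores_needed / CORES_PER_VM)

# A launch spike: player counts sampled over the first minutes.
for players in (10_000, 250_000, 1_000_000, 2_000_000):
    vms = desired_vms(players)
    print(f"{players:>9,} players -> {vms:>5,} VMs ({vms * CORES_PER_VM:,} cores)")
# 2,000,000 players -> 5,000 VMs (80,000 cores), the scale Poole mentions
```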
I don't know what every video game company does, but I imagine there's a disparity in how people think about the underlying platform. You still hear stories these days about everyone writing in something very easy like Java without understanding what's underneath. Now, with GPUs specialized relative to CPUs, you get much more specificity in the capabilities of the machine. The programmer has to be aware of that.
Above: Ready to enter the Wrecking Zone?
Image Credit: Microsoft
GamesBeat: I talked to the people who created Microsoft's Crackdown 3. It took them four or five years to finish. It's one of the first games to take advantage of the Azure cloud. They said they applied it to the multiplayer mode, which was only four versus four, but it was set in a dense urban environment where the buildings were completely destructible. The physics when you hit a building and it collapses is supposed to be accurate. They said a single Xbox couldn't do that by itself, but when they played, they had the equivalent of 12 machines computing the physics and everything else for multiplayer.
Poole: To your point, that's a hybrid game. Part of it is rendered in the cloud, which sends the rendered game components back to the Xbox because the Xbox can't keep up on its own. From what I've noticed, that's become more common among the bigger console players, especially in games where, as you said, you keep raising the fidelity, in terms of graphics and/or player count, and these days people want both.
You have a perfect storm in that you now have these huge pools of compute resources, GPUs and CPUs, that can be spun up dynamically, and you still have the console, which can do a lot on its own. In the cloud gaming market, everyone is obsessed with the idea that you can get rid of the $400 console or the $2,000 gaming PC. That's the Holy Grail.
GamesBeat: The question is still open: do you go in that direction and enable lightweight gaming on any device, or do you harness all the computing power of the data center and hand someone something much more powerful?
Poole: We've been predicting the demise of the console for a while now, and it hasn't happened yet. [laughs] The cloud component keeps growing, though.
Above: Stadia is the plural of stadium, in case you were wondering.
Image credit: Dean Takahashi
GamesBeat: The other thing people mention about the likes of Azure, AWS, and Google Cloud: if you offload all that work to them, your multiplayer game will be better because you can focus on the things you're good at. Is there a level of decision-making there that makes a big difference to how well that multiplayer mode turns out?
Poole: It's hard to say. It comes down to whether you have the right game for that environment, and whether you use the tools that let you write for it. All of these guys have tools, so you don't necessarily need to figure out how to spin up the next VM and add it to the resource pool. The system knows how to do that. More generally, the players in the cloud computing market are very focused on putting tools in people's hands.
The other day I was chatting with someone about an AR application. It's very similar. They lean heavily on GPU farms in the cloud for what they do. They can't escape that. They have no choice, because the glasses themselves carry a very limited amount of compute. They work quite closely with a cloud computing company to take advantage of its tools, which lets them write their application so it performs well for most users.
A lot of attention goes into that. The cloud was extremely disruptive to the traditional IT industry because it deployed millions of CPUs early on and drove down the cost of a CPU cycle, and now you see it doing the same thing with GPUs. Just a few years ago, they didn't even have banks of GPUs. Google was one of the first to build them, because they were going after media companies making animated films. Now they're all heavily invested in GPUs.
That's a big part of their offering, because people expect it not just for games but for almost anything that involves rendering, whether AR or VR or even ordinary video, movies, and TV shows. I was in the media business. You had to buy those ridiculous high-end million-dollar boxes, and it would take a week to render five minutes of video. Now you can get the same thing from one of the hyperscale guys, and they can do it in minutes.
My general view is that the cloud, as such, is one of the things that will drive adoption of streaming for games. The infrastructure is there. It's deployed. They have access to many peering locations. They're highly distributed. They can get very close to the end user. That matters a great deal, because of the latency problem.
Above: Equinix in Sydney, Australia.
Image Credit: Equinix
GamesBeat: If you're a game developer talking to an Amazon, what questions should you be asking about the level of service you need?
Poole: Almost all of them now have development tools aimed at specific kinds of environments. If you want to do facial recognition in Amazon, for example, they have a purpose-built set of tools called Rekognition: here's how to build an application that takes in a camera stream and runs a recognition process against it. They all do this to encourage consumption of the processing capacity they've invested in. That's part of their offering.
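For a concrete sense of what that looks like, here is a minimal face-detection call using AWS's Python SDK, boto3. The frame file is a placeholder; a production pipeline ingesting a live camera stream would typically go through Kinesis Video Streams rather than single images:

```python
# Minimal face-detection call against Amazon Rekognition using boto3.
# "frame.jpg" stands in for one frame pulled from a camera stream.
import boto3

client = boto3.client("rekognition")  # assumes AWS credentials are configured

with open("frame.jpg", "rb") as f:
    response = client.detect_faces(
        Image={"Bytes": f.read()},
        Attributes=["DEFAULT"],  # bounding box, landmarks, pose, confidence
    )

for face in response["FaceDetails"]:
    box = face["BoundingBox"]
    print(f"face at ({box['Left']:.2f}, {box['Top']:.2f}) "
          f"confidence {face['Confidence']:.1f}%")
```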
I forget the latest stat I read about Amazon, but they ship something like 30 or 40 new features a month across their whole platform. They have people off building new tools. The more they can get you used to their tools, as opposed to writing to the bare processor, the more likely you are to stay with them. That's the cloud computing game.
GamesBeat: Can you talk about some of the companies you work with?
Poole: Blade and Shadow. Blade is the company, Shadow is the service. They're a publicly announced customer of ours. In fact, you can go to PeeringDB and look them up, and you'll see they're with us in many markets in Europe and the United States. They essentially designed the latency circles around their deployments so that the worst-case latency for most end users is less than 20 milliseconds. That's better than most games need. It's a good example of someone following this model for streaming games.
Then we have customers like NHN, the Korean company. Zynga is another example, more on the casual side of gaming, and they've been around forever. They all generally recognize that the big dependency comes back to the peering infrastructure. Given that we have more than 200 data centers across 52 metros in 25 countries around the world, we probably have the best footprint in the world for anyone trying to distribute.
In fact, there's another resource you can consult, a company called Cloud Scene. Cloud Scene tracks data center companies and their footprint for cloud providers. We consistently show up as the number one company in the Americas, Europe, Asia, and Oceania. If you want to go global, that's how you do it. As a company, if you take our revenues and sort them by customers deployed across multiple countries versus those deployed in a single city, you'll see that most of our revenue comes from large multinationals. All the major cloud companies and all the CDNs are customers. Most gaming companies are customers in one way or another. It's because you have to be close.
We talk a lot about what we call an interconnection-oriented architecture. Whether you're in gaming or in cloud computing, the model has shifted. People used to run all their stuff in one data center in the middle of the country. That was the model. You put everything there, and the farther away you were, the worse the experience. People did application acceleration and all that stuff to compensate. Now, because of cloud computing, the model has flipped. If you look at most of the companies we deal with now, they're highly distributed. Nothing is centralized. Everything is distributed, because everything is about the user experience. Gaming is the ultimate example of the user experience.