Virtual Vikings: how volumetric capture is changing VR



There's never been a better time to be a VR enthusiast. With the arrival of standalone, wireless headsets, and ever more advanced motion tracking and screen resolutions, we're now reaching a point where VR simulations are starting to feel, if not real, then certainly more realistic than they used to be.

We're still far from the Matrix, however, and developers are still getting to grips with how to make virtual people and places feel physically and emotionally convincing.

With volumetric capture, though, that might start to change.

I paid a visit to Dimension, a VR production studio working with the new video capture technology, to find out how it could shape the future of immersive, interactive VR experiences.

What is volumetric capture?

When you're capturing, you need good lighting (Image credit: TechRadar)

Volumetric capture is a relatively new video capture technology for recreating people and objects in virtual reality. Patented by Microsoft, it's currently licensed to only two studios worldwide – and Dimension is one of them.

Rather than using a 360-degree camera that captures real-life footage in all directions, or recreating a whole scene in a computer graphics engine, volumetric capture films performers with a large array of cameras in a huge amount of detail, which is then placed into a CGI environment.

The rig pairs 106 individual cameras (53 RGB, 53 infra-red) with directional microphones that capture audio in real time, rather than adding it in separately in post-production. The full array is able to capture 10GB of detail per second at 30 frames per second – or 20GB/s at 60 frames per second.
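
To put those figures in perspective, here's a minimal back-of-the-envelope sketch in Python; the 10GB/s and 20GB/s rates are from the paragraph above, while the per-frame arithmetic is my own illustration rather than a number supplied by Dimension.

```python
# Rough per-frame arithmetic based on the capture rates quoted above.
def per_frame_gb(rate_gb_per_s: float, fps: int) -> float:
    """Approximate data captured per frame, in gigabytes."""
    return rate_gb_per_s / fps

for rate, fps in [(10, 30), (20, 60)]:
    print(f"{rate} GB/s at {fps} fps = about {per_frame_gb(rate, fps):.2f} GB per frame")
```

Either way, that works out at roughly a third of a gigabyte of raw data for every captured frame, spread across the 106-camera array.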

106 cameras capturing the actors from all angles (Image Credit: TechRadar)

Steve Jelley, managing director of Hammerhead (which owns and operates Dimension), ran through the process:

"Half of [the cameras] are shooting visible light, and the other half are shooting infra-red light, which is read by these lasers here … and that helps our algorithms figure out form as well as color.

"We take those images, and then we run them through a massive computer farm on the road, which makes every single piece of every single pixel in space, creates a mesh, and wraps the video footage over the top of it."

The precision of the mapping method, which uses "thousands of tiny dots" to capture 3D objects, means that the cameras can even pick up the creases in your clothes – far more detail than you'd get with traditional motion capture methods, which rely on recreating gesture and movement within a computer-generated 'puppet'.

"That's the problem [with motion capture], "Said Jelley. "You can make it look fantastic, if you got a lot of money, and you're outputting a 2D frame, but you always lose something in translation."

What do Vikings have to do with anything?

A Viking vessel at sunset

Enter Virtual Viking: The Ambush. A collaboration between Ridley Scott's production studio RSA Films and the interactive entertainment center The Viking Planet in Oslo, it's one of the latest examples of how immersive the VR experiences of tomorrow could be.

The Ambush is a historically accurate recreation of a Viking battleship in VR, drawing on real Viking boats and Kim Hjardar's Vikings at War as reference. Produced for the Viking Planet center in Oslo, Norway, it forms part of an exhibition on the lives of Norse seafarers, experienced through a number of VR headsets.

Given the involvement of Microsoft, it's unsurprising that the Ambush runs on a Windows Mixed Reality headset: the HP Reverb.

The Reverb does, however, have one of the sharpest displays on the market, with 2,160 x 2,160 resolution per-eye panels, delivering twice the display resolution of the HTC Vive Pro and Samsung Odyssey+. Not to mention six degrees of freedom for fluid movement in 360 degrees.
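
That 'twice the resolution' claim stacks up if you compare per-eye pixel counts – here's a quick Python check, where the Reverb figure is from this article and the 1,440 x 1,600 Vive Pro and Odyssey+ figures are those headsets' widely published specs:

```python
# Compare per-eye pixel counts. The Reverb spec is quoted in the article;
# the other two are the commonly published per-eye panel resolutions.
headsets = {
    "HP Reverb": (2160, 2160),
    "HTC Vive Pro": (1440, 1600),
    "Samsung Odyssey+": (1440, 1600),
}

reverb_pixels = 2160 * 2160
for name, (w, h) in headsets.items():
    pixels = w * h
    ratio = reverb_pixels / pixels
    print(f"{name}: {w} x {h} per eye = {pixels:,} pixels "
          f"(Reverb has {ratio:.2f}x as many)")
```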

The HP Reverb features a crisp 2,160 x 2,160 resolution per eye (Image credit: TechRadar)

The gentle rock of the boat has also been carefully recreated to minimize motion sickness – another recurring obstacle for seamless VR.

Making the virtual feel real

We've been to a lot of VR demos – everything from 8K Batman helmets to nausea-inducing paragliding – but The Ambush felt incredibly fresh.

The experience opens with a bird's-eye view of the ocean off Norway's west coast, before bringing you down to deck level aboard one of the historically accurate recreations of the boats.

The closest comparison I can think of is sitting in a theater, with the actors only a few feet away from you. I could see the Vikings in front of me as they heaved their way down the river at night, squinting to make them out in the dark, watching them move around their CGI surroundings, and flinching as projectiles began to rain down on them and their allies.

A Viking battle scene (Image Credit: Dimension)

While many of the objects – and the ship itself – were created in CGI, it was the actors that really made the space feel peopled, and made the eventual destruction of the ship's crew all the more affecting.

It's those small details – a look, a tightening grip on an oar, the twitch of a facial muscle – that make a person feel real, without the 'uncanny valley' effect of a CGI human face.

The challenge with capturing human performances, though, is that you can't blame the technology for bad acting. Lisa Joseph, producer at RSA Films, started her career in the theater, and is only too aware of how important this aspect is.

"You're taking real people, and putting them into a computer generated world," says Joseph. "So they really need to be able to act."

Had to run "rigorous casting" over several days, to make sure the result was worth the trouble of the new capture method. What made the process easier by adding NPCs (non-player characters), where the difference in detail would really count.

Arrows had to be added to the image (Image credit: Dimension)

There are certainly big applications in VR games: an open-world Fallout or Skyrim with complex human expressions instead of rote facial animations could completely transform how engaging our interactions in games can be.

That's still some way off, but Dimension's use of Unreal Engine – which powers titles ranging from Fortnite and Gears of War to the Final Fantasy VII remake – gives us hope that it won't be too long before volumetric capture catches on in the wider industry. We have a lot more VR demos ahead of us, and we want them to feel a lot more like this.
