The recent AI boom has had all sorts of strange and wonderful side effects as DIY enthusiasts find ways to repurpose research from universities and technology companies. One of the most unexpected applications is in the world of video game mods, where fans have discovered that machine learning is an ideal tool for enhancing the graphics of classic games.
The technique used is known as "AI upscaling." Basically, you give an algorithm a low-resolution image and, based on the training data it has seen, it creates a version that looks the same but contains more pixels. Upscaling as a general technique has been around for a long time, but the use of AI has dramatically improved both the speed and the quality of the results.
"It was like witchcraft," says Daniel Trolie, a teacher and student from Norway who used AI to update the visuals of the classic 2002 RPG The Elder Scrolls III: Morrowind. "[It] looked like I'd just downloaded a high-resolution texture pack from [game developer] Bethesda themselves."
Trolie is a moderator of the subreddit r/GameUpscale where, along with specialized forums and chat apps like Discord, fans share tips and tricks on how best to use these AI tools.
Looking through these forums, it's clear that the editing process is much like restoring old furniture or artwork. It is skilled craftwork that requires patience and knowledge. Not all games are suitable for upscaling, and not all upscaling algorithms produce similar results. Modders must choose the right tool for the job before putting in hundreds of hours of work tweaking the end results. It's a labor of love, not a miracle solution.
Despite the work required, the process is still much faster than previous methods. Restoring a game's graphics can now be done in a few weeks by a single dedicated modder, rather than by a team working for years. As a result, there has been an explosion of new graphics for old games over the last six months or so.
The range of titles is impressive, spanning the decades from early SNES games like Mario Kart and F-Zero, originally released in the 1990s, to more recent fare like 2010's Mass Effect 2. Titles that have been improved include Doom, Half-Life 2, Metroid Prime 2, Final Fantasy VII, and Grand Theft Auto: Vice City. In each case, the upgrades are unofficial, meaning a little extra know-how is needed to install the new visuals.
Creating such graphics still requires a lot of work, says a modder known as hidfan. He tells The Verge that the updated Doom visuals he created took at least 200 hours of work to tweak the algorithm's output and clean up the final images by hand.
In Doom, as in many video games, most visual elements are stored as texture packs. These are images of rock, metal, grass, and so on that are pasted onto the game's 3D maps, like wallpaper on the walls of a house. And as with wallpaper, the textures have to line up perfectly, or players will spot where one image ends and another begins.
Hidfan says that the output of AI upscaling algorithms tends to introduce a lot of noise, so plenty of manual modification is still needed. The same goes for the visuals of characters and enemies. Hidfan says cleaning up a single monster takes between 5 and 15 hours, depending on the complexity of its animation.
This is worth remembering when looking at these updates, or at any project that uses machine learning: just because AI is involved doesn't mean human work isn't.
But how does the process work? Albert Yang, CTO of Topaz Labs, a startup offering a popular upscaling service used by many modders, says it's fairly simple.
You start by taking a type of algorithm called a generative adversarial network (GAN) and training it on millions of pairs of low- and high-resolution images. "After seeing these millions of photos many, many times, it starts to learn what a high-resolution image looks like when it sees a low-resolution one," Yang tells The Verge.
One part of the algorithm tries to recreate the transition from low resolution to high resolution, while another part compares that work to the training data, checking whether it can spot the difference and rejecting the output if it can. This feedback loop is what allows GANs to improve over time.
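That feedback loop can be sketched in miniature. The toy below is not a real GAN — the "generator" is collapsed into a single scale factor and the "critic" into a simple can-it-tell-the-difference threshold test, and all names are illustrative — but it shows the reject-and-retry dynamic Yang describes: only outputs the critic can distinguish from the real high-resolution data drive updates to the generator.

```python
import random

def toy_adversarial_upscale(pairs, steps=5000, lr=0.01, tolerance=1.0):
    """Single-parameter sketch of the feedback loop: a 'generator'
    (one scale factor) maps low-res values to high-res ones, and a
    'critic' rejects any output it can distinguish from the real
    high-res data (here: a residual larger than `tolerance`).
    Rejections are what drive the generator's updates."""
    random.seed(0)  # deterministic for the sake of the demo
    scale = 1.0     # the generator's only parameter
    for _ in range(steps):
        low, high = random.choice(pairs)
        fake = scale * low             # generator's attempt at high-res
        residual = high - fake
        if abs(residual) > tolerance:  # critic spots the fake...
            scale += lr * residual / low  # ...so the generator adjusts
    return scale

# Real mapping: each high-res value is 4x its low-res counterpart.
data = [(x, 4.0 * x) for x in range(1, 10)]
print(toy_adversarial_upscale(data))  # converges toward 4.0
```

Once the critic can no longer tell fake from real within its tolerance, the updates stop — the same equilibrium a real GAN's training loop aims for, just in one dimension instead of millions.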
Using AI to resize images is a relatively simple task, but it perfectly illustrates the fundamental benefit of machine learning: while traditional algorithms rely on rules defined by humans, machine learning techniques create their own rules by learning from data.
In the case of upscaling algorithms, those human-defined rules are often quite simple. If you want to double the size of a 50 x 50 pixel image, for example, a traditional algorithm simply inserts new pixels between the existing ones, choosing each new pixel's color based on the average of its neighbors. To give a very simplified example: if you have a red pixel on one side and a blue pixel on the other, the new pixel in the middle comes out purple.
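That neighbor-averaging rule is easy to express in code. The sketch below (function name is mine) doubles a single row of RGB pixels by inserting the channel-wise average of each pair of neighbors — a toy one-dimensional version of the traditional approach:

```python
def upscale_row_2x(row):
    """Double a row of RGB pixels by inserting, between every pair of
    neighbors, a new pixel whose color is the channel-wise average."""
    out = []
    for a, b in zip(row, row[1:]):
        out.append(a)
        # New pixel: average each color channel of its two neighbors.
        out.append(tuple((ca + cb) // 2 for ca, cb in zip(a, b)))
    out.append(row[-1])
    return out

red, blue = (255, 0, 0), (0, 0, 255)
print(upscale_row_2x([red, blue]))
# → [(255, 0, 0), (127, 0, 127), (0, 0, 255)] -- the middle pixel is purple
```

A real 2D upscaler applies the same idea across both axes, but the fixed, hand-written rule is the point: it never changes, whatever image it is given.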
This type of method is simple to code and run, but it's a one-size-fits-all approach that produces mixed results, says Yang.
The algorithms created by machine learning are far more dynamic by comparison. Topaz Labs' upscaling tool Gigapixel doesn't limit itself to neighboring pixels; it looks at whole sections of an image at a time. This lets it better recreate larger structures, such as the outlines of buildings and furniture or the edges of a Mario Kart circuit.
"This broader perceptual field is the main reason [AI upscaling algorithms] perform so much better," says Yang.
Updating game graphics is about more than just a technical challenge, however. It's often about saving memories. Replaying the favorite video games of your youth can be a surprisingly bittersweet experience: the memories are intact, but the games themselves look strangely ugly and crude. "Was I really impressed by those graphics?" you ask yourself, wondering whether you've lost the ability to enjoy such games.
Take the Final Fantasy series, for example. These were titles I played a lot in my childhood. Just hearing songs from their soundtracks can take me back to specific moments and places in the games. But playing them again as an adult is a strange experience. I don't usually get very far when I try, despite the precious place they hold in my memory. They just look wrong.
Modder Stefan Rumen, who used AI upscaling to improve the graphics of Final Fantasy VII, explains that new display technology is just as responsible for how dated old graphics look.
"With the low-pixel, low-polygon graphics of the past, old TV monitors helped hide many imperfections," he says. "Your mind finished the job and filled in the gaps, [but] modern displays show these old games in their unfiltered roughness."
Fortunately, these early games are also ideal targets for AI upscaling. In the case of the Final Fantasy series, that's partly because of their heavy use of pre-rendered backgrounds, which means modders have fewer images to process. The visuals also occupy a "sweet spot" in terms of detail, explains Rumen.
"They're not as low-res as pixel art, which means there's more information for the machine learning to do its magic, but they're not at such a high resolution that an upgrade wouldn't be needed," he says. The results speak for themselves.
Rumen says Final Fantasy VII isn't actually a game he played when he was young. ("I was a PC kid.") But updating the graphics makes these classics accessible again. Either way, they've convinced me: I've just downloaded Rumen's AI graphics pack myself and am getting ready to play FFVII one more time.