Integer scaling support for Turing, Freestyle sharpening, and more




While this year's Gamescom event is in full swing, the German gaming exhibition seems to be taking up ever more space in the gaming and hardware worlds. Along with a host of game announcements (and a Google Stadia event tucked in between), NVIDIA is also using the show to launch a new and somewhat unexpected GeForce driver. The first driver in the Release 435 family, the 436.02 driver adds and revises several features, including integer display scaling, a new sharpening filter for NVIDIA's Freestyle system, a rework of their frame pre-rendering limiter (now called Low Latency mode), 30-bit color for OpenGL applications, and the usual suite of game fixes and performance improvements.

The latest NVIDIA driver covers a number of different areas, and overall it reads a lot like a reaction to last month's Radeon launch. In particular, the focus is on low-latency gaming (Radeon Anti-Lag), shader-based image sharpening (Contrast Adaptive Sharpening), and performance optimizations for a selection of games (some of which are in our 2019 benchmark suite). This isn't to downplay the importance of the driver – it's the most interesting NVIDIA driver in a long time – but it is certainly laser-focused, in a way, on the features that its AMD rival just introduced or emphasized last month.

Integer scaling of the whole image, at last

Anyway, let's start with what looks to be by far the most interesting aspect of today's announcement: support for integer display scaling. This has been a feature requested by gamers for a number of years – requests go back to early this decade – and the wheels finally started moving earlier this year, when Intel announced it would be adding the functionality to its Gen 11 GPUs, aka Ice Lake. However, since those chips won't be on sale until next month, it seems NVIDIA has technically beaten Intel to the punch with this release.

The new driver contains NVIDIA's take on integer scaling. Because the announcement for this driver was made ahead of the driver itself – it's due out at 9pm ET, after this was written – I haven't had a chance to try the feature. But according to NVIDIA, it behaves as you'd want it to: lower-resolution images are resized with nearest-neighbor scaling to an integer multiple of their original resolution, producing a sharp, pixelated image. Essentially, you get a lower-resolution image displayed on a higher-resolution monitor just as it would appear on a lower-resolution monitor. Notably, this mode is very different from traditional bilinear(-ish) image scaling, which produces a smoother, blurrier image without pixelation.
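
For illustration's sake, here is a minimal CPU-side sketch of what nearest-neighbor integer scaling does – this is just the general technique, not NVIDIA's implementation, which runs in the display hardware's scaling filter. The function name and 32-bit pixel format are choices made for the example.

```cpp
#include <cstdint>
#include <vector>

// Nearest-neighbor upscaling by an integer factor: each source pixel simply
// becomes a factor x factor block of identical pixels, so edges stay
// perfectly sharp instead of being blended together as bilinear scaling does.
std::vector<uint32_t> integerUpscale(const std::vector<uint32_t>& src,
                                     int srcW, int srcH, int factor) {
    const int dstW = srcW * factor, dstH = srcH * factor;
    std::vector<uint32_t> dst(static_cast<size_t>(dstW) * dstH);
    for (int y = 0; y < dstH; ++y)
        for (int x = 0; x < dstW; ++x)
            // Integer division maps each destination pixel straight back to
            // exactly one source pixel - no interpolation, hence no blur.
            dst[static_cast<size_t>(y) * dstW + x] =
                src[static_cast<size_t>(y / factor) * srcW + (x / factor)];
    return dst;
}
// e.g. a 1920x1080 frame with factor 2 fills a 3840x2160 (4K) panel exactly.
```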

Neither integer scaling nor bilinear scaling is always the right solution; depending on the situation, either method can produce the better result. NVIDIA chose to center its own blog post on using integer scaling for pixel-art games, where the pixelated look is very much intentional, although these games typically (but not always) handle their scaling internally.


Simulated 2x scaling: integer scaling (left) vs. bilinear scaling (right)

I think the more interesting use of this feature will be for gaming on 4K and 5K monitors, especially with sub-RTX 2080 class GPUs. The high resource demands of 4K+ gaming are difficult for all but NVIDIA's most powerful GPUs (and even then…), which makes rendering at a sub-native resolution necessary. This in turn introduces the blurriness caused by bilinear upscaling. Integer scaling, on the other hand, would allow a game to be rendered at 1080p and then scaled up to 4K (2160p); it forgoes the pixel-density benefits of a 4K monitor while gaming, but retains the sharpness of rendering at a native resolution. It's not quite a "have your cake and eat it too" solution, but especially for laptop users – where 4K gaming is often not a real option even as 4K panels are – the potential is significant.

It remains to be seen how well this works in practice, both on the NVIDIA driver side and on the games' side. While NVIDIA can control the former, they have less control over the latter, and there are always subtle ways in which games can interact poorly with integer scaling. UI/text size is a particular concern, since it is sometimes tied to resolution. In addition, as NVIDIA notes in its own release notes, integer scaling doesn't work well with HDR; in fact, the whole feature is still classified as a beta, even if the driver itself is not.

Whatever the case may be, the feature is currently being rolled out to Turing owners – and only Turing owners. Specifically, the feature is available on GeForce RTX 20 and GTX 16 series cards, but not on the previous Pascal (GTX 10) and Maxwell (GTX 900) cards. According to NVIDIA's announcement, the feature relies on the "hardware-accelerated programmable scaling filter available in Turing", but to be honest, I don't know quite how accurate that statement is, or to what degree it truly blocks earlier cards. NVIDIA has been known to deploy new features to its latest-generation parts first and then bring them to older cards a few weeks later, and that may well be the case here.

An improved image sharpening filter for NVIDIA Freestyle

Moving on, this driver release also adds a new image sharpening filter to Freestyle, the company's post-processing filter feature built into GeForce Experience. While the company already had a sharpening filter for Freestyle, according to NVIDIA the new filter offers better image quality while halving the performance impact of the previous filter. In practice, this latest addition looks like NVIDIA's counter to AMD's new Contrast Adaptive Sharpening – itself positioned against NVIDIA's Deep Learning Super Sampling – offering another, more generic shader-based approach that works in a manner similar to AMD's.
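
NVIDIA hasn't published how its new filter works, but as a rough illustration of what a generic shader-based sharpening pass does, here is a minimal sketch of a classic 3x3 sharpen (essentially an unsharp-mask-style convolution) on a single-channel image. Adaptive filters like AMD's CAS vary their weights per pixel rather than using a fixed kernel, so treat this only as the basic idea.

```cpp
#include <algorithm>
#include <cstddef>
#include <vector>

// Generic 3x3 sharpen on a grayscale image with values in [0, 1]:
// boost the center pixel and subtract its four neighbors, which
// exaggerates local contrast at edges. strength = 0 leaves the image as-is.
std::vector<float> sharpen3x3(const std::vector<float>& img, int w, int h,
                              float strength) {
    std::vector<float> out(img.size());
    // Sample with clamped coordinates so the borders are handled safely.
    auto at = [&](int x, int y) {
        x = std::clamp(x, 0, w - 1);
        y = std::clamp(y, 0, h - 1);
        return img[static_cast<size_t>(y) * w + x];
    };
    for (int y = 0; y < h; ++y)
        for (int x = 0; x < w; ++x) {
            float neighbors = at(x - 1, y) + at(x + 1, y)
                            + at(x, y - 1) + at(x, y + 1);
            // Laplacian-style high-pass term added back onto the original.
            float sharpened = at(x, y) + strength * (4.0f * at(x, y) - neighbors);
            out[static_cast<size_t>(y) * w + x] = std::clamp(sharpened, 0.0f, 1.0f);
        }
    return out;
}
```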

While DLSS comparisons are minimized here since I haven't tested the driver itself, DLSS support is still limited to around a dozen games. Though those games are popular, they represent only a fraction of the overall gaming ecosystem. A post-processing, shader-based approach, on the other hand, can work with most games (that is, everything Freestyle works with) and most APIs, with NVIDIA enabling it for everything from DX9 through DX12, as well as Vulkan.

It remains to be seen how its image quality compares to DLSS or AMD's solution. Post-processing alone can't fully recover the data lost by rendering at a lower resolution – and that's true for shader-based and deep-learning approaches alike; rendering at native resolution remains the best approach for image clarity. For post-processing, however, performance and image quality are variables on a continuum rather than fixed values, so there are tradeoffs and benefits going both ways. Depending on the game, the right algorithms with the right parameters can produce good results. Meanwhile, with image sharpening shaping up to be a battleground that AMD and NVIDIA want to fight over, I'd expect both companies to keep refining their algorithms.

"Maximum number of pre-rendered images" becomes "Ultra-low latency" mode

NVIDIA's latest driver also gives a facelift to the Maximum Pre-Rendered Frames feature, again in what looks like a response to AMD's Radeon Anti-Lag. This rarely-noticed feature has been present in NVIDIA's drivers for a decade – a point of priority NVIDIA likes to remind everyone of – and lets users control how many not-yet-displayed frames can be queued up for rendering and display. In 436.02, the feature is being rebranded as Low Latency mode, and it's picking up a new mode as well.

Overall, the rechristened feature is being simplified, both in name and in function. Along with what NVIDIA no doubt hopes is a more approachable name, Low Latency mode will have just 3 settings – Off, On, and Ultra – down from 5 for the previous Maximum Pre-Rendered Frames implementation.

In terms of functionality, while Off does exactly what it says on the tin (i.e. nothing, leaving the queue up to the game), On and Ultra carry a bit more nuance. Essentially, the settings of the previous incarnation have been compressed into a single option: instead of being able to select a queue size of 1 to 4 pre-rendered frames, On simply locks the queue to 1 frame. Ultra, meanwhile, is more or less new, and goes a step further by reducing the queue size to 0, meaning frames are submitted to the GPU just-in-time, with no pre-rendered frames held in reserve.
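
To make the queue-size idea concrete: NVIDIA hasn't said how Low Latency mode is implemented internally, but D3D11 applications can cap their own render-ahead queue through DXGI, which is the same knob seen from the application side. A sketch, assuming a Windows/D3D11 environment and an already-created device:

```cpp
#include <d3d11.h>
#include <dxgi.h>

// Cap how many frames the CPU may queue ahead of the GPU. A value of 1
// roughly corresponds to the driver's "On" setting; the DXGI default is 3.
// This is an application-side illustration, not NVIDIA's driver mechanism.
HRESULT CapFrameQueue(ID3D11Device* device, UINT maxFramesInFlight) {
    IDXGIDevice1* dxgiDevice = nullptr;
    HRESULT hr = device->QueryInterface(__uuidof(IDXGIDevice1),
                                        reinterpret_cast<void**>(&dxgiDevice));
    if (FAILED(hr)) return hr;
    // Smaller values reduce input latency but leave less buffering headroom.
    hr = dxgiDevice->SetMaximumFrameLatency(maxFramesInFlight);
    dxgiDevice->Release();
    return hr;
}
```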

Ultra mode potentially offers the lowest latency, but the flip side is that all the usual caveats about manually tuning the render queue size still apply. The render queue exists to help smooth out frame pacing from both a rendering and display standpoint, but it takes time to work through those queued frames. Keeping the queue small risks running it dry, and just-in-time rendering is trickier still, since poor submission timing can't be hidden. This is why it's an opt-in feature, rather than Ultra being the default. Nonetheless, for latency-sensitive uses (and latency-sensitive gamers), being able to adjust the render queue size was (and remains) a useful feature.

Meanwhile, perhaps the oddest part of all of this is that it isn't the first time NVIDIA has offered an Ultra-like mode. Until early this decade, NVIDIA's drivers also supported a queue size of 0, which is why I'm not sure this fully counts as a new feature. That said, given the finicky nature of queuing and how operating systems have evolved since then, it's also quite possible that NVIDIA has implemented a newer algorithm for pacing frame submission.

Whatever the case may be, as with its predecessor, Low Latency mode is limited to DX9/DX10/DX11 games. Low-level APIs such as DX12 and Vulkan give games explicit control over queue sizes, so drivers can't (or at least really shouldn't) override the queuing behavior of these newer APIs. On the plus side, unlike integer scaling, this feature isn't limited to Turing-based video cards, so all NVIDIA GPU owners have immediate access to it.
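
As a sketch of what that explicit control looks like on the application side in Vulkan (the function name and parameter choices here are illustrative), the swapchain's image count – one of the main levers on how many frames can be queued – is chosen by the game itself when it creates the swapchain:

```cpp
#include <vulkan/vulkan.h>

// Under Vulkan the application, not the driver, decides how much buffering
// the swapchain gets, which is why a driver-side queue override would
// conflict with the API's explicit model. This helper fills out a create
// info favoring the minimum image count the surface allows.
VkSwapchainCreateInfoKHR makeLowLatencySwapchainInfo(
        VkSurfaceKHR surface, VkSurfaceFormatKHR format, VkExtent2D extent,
        const VkSurfaceCapabilitiesKHR& caps) {
    VkSwapchainCreateInfoKHR info{};
    info.sType = VK_STRUCTURE_TYPE_SWAPCHAIN_CREATE_INFO_KHR;
    info.surface = surface;
    // Fewer swapchain images means fewer frames queued ahead of the display,
    // trading buffering headroom for lower latency.
    info.minImageCount = caps.minImageCount;
    info.imageFormat = format.format;
    info.imageColorSpace = format.colorSpace;
    info.imageExtent = extent;
    info.imageArrayLayers = 1;
    info.imageUsage = VK_IMAGE_USAGE_COLOR_ATTACHMENT_BIT;
    info.preTransform = caps.currentTransform;
    info.compositeAlpha = VK_COMPOSITE_ALPHA_OPAQUE_BIT_KHR;
    info.presentMode = VK_PRESENT_MODE_FIFO_KHR; // vsync'd presentation
    info.clipped = VK_TRUE;
    return info;
}
```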

30-bit OpenGL color, more G-Sync Compatible monitors, etc.

Finally, the 436.02 driver also includes a few other feature improvements. Along with the usual performance optimizations (NVIDIA specifically calls out Apex Legends, Strange Brigade, Forza Horizon 4, World War Z, and Battlefield V), the new driver also incorporates support for 30-bit color in OpenGL applications. This feature was already announced and rolled out to GeForce Studio Driver users last month and, as the name suggests, it allows OpenGL applications to output images with 30-bit (10 bits per channel) color. Until now, NVIDIA had deliberately limited this feature to its Quadro family of video cards in order to segment its product families and steer content-creation users toward Quadro cards. It is now available to GeForce and Quadro users alike in both the Studio Ready and Game Ready driver families, allowing deep color to be used with all APIs.
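
As a quick illustration of what this unlocks on the application side – a generic example, not something from NVIDIA's driver notes – an OpenGL program using GLFW would request a 10-bits-per-channel framebuffer like so, with the driver honoring the request only when a 30-bit output path is available:

```cpp
#include <GLFW/glfw3.h>

// Request a deep-color (10 bpc) default framebuffer before window creation.
// Previously the NVIDIA driver granted this only on Quadro cards; with
// 436.02 GeForce cards can get it too, hardware and display permitting.
int main() {
    if (!glfwInit()) return 1;
    glfwWindowHint(GLFW_RED_BITS,   10);
    glfwWindowHint(GLFW_GREEN_BITS, 10);
    glfwWindowHint(GLFW_BLUE_BITS,  10);
    glfwWindowHint(GLFW_ALPHA_BITS, 2);   // 10+10+10+2 packs into 32 bits
    GLFWwindow* window =
        glfwCreateWindow(1280, 720, "30-bit OpenGL", nullptr, nullptr);
    if (!window) { glfwTerminate(); return 1; }
    glfwMakeContextCurrent(window);
    // ... render with a deep-color pipeline here ...
    glfwDestroyWindow(window);
    glfwTerminate();
    return 0;
}
```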

Meanwhile, NVIDIA has added three more monitors to its G-Sync Compatible program: the ASUS VG27A, along with the Acer CP3271 and XB273K GP.

