CES 2025: nVidia unveils the GeForce RTX 50 series of graphics cards
Impressive specs and double the performance of equivalent RTX 40 GPUs, boosted by AI and DLSS 4, at – more or less – consumer pricing
KOSTAS FARKONAS
Published: January 7, 2025
After more than a year of information leaks, rumors and speculation, nVidia finally announced the next generation of its consumer graphics cards, the GeForce RTX 50 series (codenamed Blackwell), at CES 2025. As is usually the case with the launch of any of its new GPU architectures, nVidia will be releasing the most powerful of these GPUs first, namely the $1,999 RTX 5090, the $999 RTX 5080, the $749 RTX 5070 Ti and the $549 RTX 5070. The first two will be available for purchase on January 30th, with the other two following in February.
The new nVidia flagship graphics card is none other than the RTX 5090, a surprisingly svelte, two-slot design in its Founders Edition form that does come with a 1000-Watt PSU recommendation (although one hopes that it will rarely need the full 575 Watts of power it’s rated at). The card features a double flow-through cooling design with two fans, a 3D vapor chamber and a massive 32GB of GDDR7 memory operating at 28Gbps for a total of 1,792GB/sec of bandwidth. It offers 21,760 CUDA cores, increased L2 cache and special memory compression routines that allow it to deliver roughly double the performance of an RTX 4090 in most modern, demanding PC games such as Cyberpunk 2077.
Twice the performance of its last-gen counterpart is what the RTX 5080 also promises to deliver in most cases – seemingly offering great value at its announced price point, although there will definitely be faster, more expensive variants coming to market from third-party manufacturers. It features 16GB of GDDR7 memory working over a total bandwidth of 960GB/sec, sports 10,752 CUDA processing cores and is rated at 360 Watts (nVidia recommends an 850-Watt power supply).
The RTX 5070 Ti and RTX 5070 are less impressive spec-wise, but they still promise to deliver a big jump in terms of performance. The RTX 5070 Ti sports 16GB of GDDR7 memory over 896GB/sec of total bandwidth and 8,960 CUDA cores, while the RTX 5070 offers 12GB of GDDR7 memory over 672GB/sec of bandwidth and 6,144 CUDA cores. The RTX 5070 Ti is rated at 300 Watts (requiring a 750-Watt PSU) and the RTX 5070 is rated at 250 Watts (requiring a 650-Watt PSU). Putting all of this into perspective is nVidia’s claim that the RTX 5070 will offer 4090-like levels of performance – unheard of for a $549 card.
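For those wondering where these bandwidth figures come from, the math is straightforward: bandwidth equals the memory’s effective data rate multiplied by the bus width. Here’s a minimal Python sketch of that calculation – note that the bus widths used (512-bit, 256-bit, 256-bit and 192-bit respectively) and the per-card data rates other than the 5090’s 28Gbps are assumptions based on common reporting, not figures from nVidia’s announcement.

```python
# Back-of-the-envelope GDDR7 bandwidth check for the RTX 50 series.
# Bus widths (and data rates other than the 5090's 28Gbps) are
# assumptions, not part of nVidia's CES announcement.
# bandwidth (GB/s) = data rate (Gbps) x bus width (bits) / 8 bits per byte

cards = {
    # name: (data rate in Gbps, assumed bus width in bits)
    "RTX 5090":    (28, 512),
    "RTX 5080":    (30, 256),
    "RTX 5070 Ti": (28, 256),
    "RTX 5070":    (28, 192),
}

for name, (gbps, bus_bits) in cards.items():
    bandwidth_gb_s = gbps * bus_bits / 8
    print(f"{name}: {bandwidth_gb_s:,.0f} GB/s")

# Output, matching the announced figures:
# RTX 5090: 1,792 GB/s
# RTX 5080: 960 GB/s
# RTX 5070 Ti: 896 GB/s
# RTX 5070: 672 GB/s
```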
The GeForce RTX 50 series is also coming to laptops sooner than most people expected: several models featuring these GPU chips in mobile form will be available at some point during March from a number of different manufacturers. The RTX 5090 Max-Q GPU will be sporting 24GB of memory, the RTX 5080 of the same series will come with 16GB, the RTX 5070 Ti with 12GB and the RTX 5070 with 8GB (sigh), all of that VRAM being GDDR7 as well.
DLSS 4 for multiple frame generation on RTX 50 GPUs, improvements for all RTX owners
People following the evolution of PC hardware have probably figured it out already, so let’s get that out of the way first: nVidia’s claims regarding its RTX 50 graphics cards doubling the performance of their RTX 40 equivalents involve DLSS, the company’s specialized frame-boosting software that’s based on artificial intelligence. There’s no way a $549 GPU could deliver the performance of a GPU going for $1,599 (or a $1,999 card could double the performance of that same GPU) without something like that. In fact, almost every RTX 50 benchmark nVidia has published so far in comparison to RTX 40 involves DLSS and ray tracing combined, since the new cards appear to be significantly improved on that front too.
Here’s the thing, though: Jensen Huang, nVidia’s CEO, clearly believes that DLSS is an integral part of how consumer graphics cards will work in PC games going forward, repeatedly mentioning that all of the fancy stuff shown on the RTX 50 series is simply not possible without AI. To that end, nVidia also unveiled DLSS 4, the most advanced and capable form of the frame-boosting set of algorithms it has put together thus far. The hardware of the RTX 50 series can now create, in real time, not one but three additional frames for every traditionally rendered frame, which is nothing short of amazing – and it can do that while delivering better image quality and stability than before.
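To put rough numbers on what multiple frame generation means in practice, here’s a simple sketch of the framerate math, assuming three AI-generated frames are inserted for every rendered one and ignoring any generation overhead (a simplification on my part, not measured data):

```python
# Rough effective-framerate math for DLSS 4 multi frame generation.
# Assumes 3 AI-generated frames per rendered frame and ignores any
# generation/pacing overhead - a simplification, not measured data.

def effective_fps(rendered_fps: float, generated_per_rendered: int = 3) -> float:
    """Frames shown per second when each rendered frame is followed
    by `generated_per_rendered` AI-generated frames."""
    return rendered_fps * (1 + generated_per_rendered)

for base in (20, 30, 60):
    print(f"{base} rendered fps -> ~{effective_fps(base):.0f} fps shown")
# 20 rendered fps -> ~80 fps shown
# 30 rendered fps -> ~120 fps shown
# 60 rendered fps -> ~240 fps shown
```

Worth keeping in mind: input latency largely tracks the rendered framerate rather than the displayed one, which is why the underlying rendering performance still matters.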
This is the result of half a decade’s worth of work on nVidia’s part – the company has trained and re-trained the AI models behind DLSS countless times, on hundreds of games – but it’s still mightily impressive to see how far this technology can push the framerates of demanding PC games, essentially generating most of the visual information shown out of thin air. It’s great to see that nVidia has put a lot of thought into making all this easier for developers to implement as well as for consumers to marvel at.
Then again, if we are to accept frame generation, upscaling and ray reconstruction as core elements of how all GPUs will work from now on, then nVidia, AMD and Intel had better make sure that our PC games look good, not just run faster.
It’s also nice to see that nVidia continues to support the RTX 40-, RTX 30- and even RTX 20-series of graphics cards by offering the results of that continuous AI training work to consumers owning older GeForce GPUs. It’s only multiple frame generation, in fact, that’s exclusive to the RTX 50 series, as all other enhancements and improvements offered by DLSS 4 will also work on older GeForce cards (depending on the hardware they are based on).
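Summed up in code form, that feature split looks roughly like the sketch below; only the RTX 50 exclusivity of multiple frame generation is explicitly confirmed, while the tiering of the remaining features is my reading of the announcement rather than an official compatibility matrix.

```python
# Rough DLSS 4 feature availability by GeForce RTX generation, as
# described above. Only multi frame generation is RTX 50-exclusive;
# the exact tiering of the other features is my reading of nVidia's
# announcement, not an official compatibility matrix.

dlss4_features = {
    "multi frame generation": ["RTX 50"],            # new, RTX 50-exclusive
    "frame generation":       ["RTX 50", "RTX 40"],  # improved with DLSS 4
    "upscaling":              ["RTX 50", "RTX 40", "RTX 30", "RTX 20"],
    "ray reconstruction":     ["RTX 50", "RTX 40", "RTX 30", "RTX 20"],
}

def supported_features(generation: str) -> list[str]:
    return [f for f, gens in dlss4_features.items() if generation in gens]

print(supported_features("RTX 30"))  # ['upscaling', 'ray reconstruction']
```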
The only concern about all this is an obvious one: by making DLSS an essential part of high-performance PC gaming, nVidia is giving developers the perfect excuse not to optimize their titles as much as they otherwise could, relying instead on AI-based frame generation and upscaling in order to hit traditional framerate targets. It’s a problem already apparent on the PS5 and the Xbox Series S/X, so it would be a shame to watch an extremely useful technology like DLSS be used as a performance “crutch” long-term. Crossing fingers, then?