So, my take on the DXR/RTX confusion: DXR is a ray tracing API, an extension to MS DX12 with full interop with the raster pipeline. MS will support DXR with a compute shader fallback layer, so devs can ship DXR effects to all DX12-class GPUs. RTX is an implementation of the DXR API by NVIDIA (with dedicated HW elements in the ASIC). RTX is only supported on Volta and Turing; Pascal has to live with the fallback path. AMD has made no announcement about their level of support for DXR.
Well, even with the latest RTX card it sounds like realtime ray tracing still knocks performance down substantially. Cherry-picked 1080p demos that aren't smooth, ouch. Then you've got yet another proprietary supersampling method from Nvidia. I guess they had to figure out something to do with the AI block on their gaming cards, rather than let it sit dormant. Let's hope this SS method doesn't damage image quality.
That being said, the overall performance gains are still rumored to be substantial if you're not turning on the new eyecandy. Best of all, the 2070 will spur price drops on the older tech, even if the 2080 models are frickin' expensive as heck.
PowerVR has a much better-performing solution if you bring everything to the right scale. No other mobile solution comes anywhere close to what Imagination has developed. And yet we are still debating NV's solution, whose RT is really mediocre considering the horsepower behind it. Not an efficient solution at all...
Tile rendering has nothing to do with ray tracing. The ray tracing calculations/arithmetic are what is slowing things down. The rendering differential to include ray tracing calculations is negligible, and sure, tiled rendering would assist that performance like it would assist any sort of viewport rendering. But the calculations happen before the rendering.
Seconded; I'm very interested in how you combine RT with TBDR when, 20 years ago, it was heresy to even offer up the possibility that TBDR could be paired with hardware T&L.
I still don't understand why immediate mode rendering is still the go-to for PC and console gaming when deferred rendering is pretty much an improvement in every single way. *shrugs*
Yes, every single way except that it introduces one frame of latency. The ray tracing on the PowerVR units is born from their purchase and integration of the Caustic ray tracing accelerator with their Rogue architecture in the Wizard. Nvidia is claiming 10 giga rays in 250 watts. One production PowerVR Wizard, in a mobile form factor, was specced at 10 mega rays in 2 watts. Scaled up that would be 12.5 giga rays at 250 watts. This was also from the 2015/16 sort of timeframe. God I wish PowerVR would get off their asses and put out another PC GPU.
The GR6500 was pushing 300mrays at 3W theoretical max though Otoy only got 100mrays. An iPad Pro (A10X) was showing 30mrays under Metal2 at WWDC2018 with no special hardware.
Given the best AR solution needs RT and Apple could have bought ImgTec for the Caustic IP, I would say Apple have their own hardware solution. Not sure it’ll be in the A12/X though they’re more likely to leave it for the AR Glasses reveal.
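Taking the peak numbers quoted in this thread at face value, the rays-per-watt comparison is easy to run. Note that all of these are vendor or theoretical claims, not measured like-for-like:

```python
# Rays-per-watt from the peak figures quoted in this thread.
# All numbers are vendor/theoretical claims, not measured throughput.
grays = 1e9
mrays = 1e6

nvidia_turing = 10 * grays / 250   # "10 giga rays" at ~250 W
gr6500_peak   = 300 * mrays / 3    # GR6500 theoretical max at 3 W
gr6500_otoy   = 100 * mrays / 3    # what Otoy actually achieved

print(f"Turing:        {nvidia_turing / mrays:.0f} Mrays/W")
print(f"GR6500 (peak): {gr6500_peak / mrays:.0f} Mrays/W")
print(f"GR6500 (Otoy): {gr6500_otoy / mrays:.1f} Mrays/W")
```

By these rough numbers the GR6500's theoretical peak is more efficient per watt, but the Otoy-measured figure lands below Nvidia's claimed number, so the comparison cuts both ways.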
The following is my take, which is different from yours.
I think RTX is more than just accelerated DXR. RTX is a few things. As far as software it is an implementation of DXR. It maps the DXR API to NVIDIA's hardware, whether that hardware be accelerated for ray tracing or not. So, the RTX software stack is still being used to implement DXR on Pascal GPUs.
RTX is also a hardware designation for a set of hardware technologies that accelerate NVIDIA's RTX software stack, and hence Microsoft's DXR.
Finally, from what I remember, part of the RTX software stack is sort of like a GameWorks library, which means that it sits on top of DXR and implements it for various rendering techniques developers can use.
So, if I understand correctly, RTX is three things: the software libraries around DXR, the software that implements DXR on NVIDIA's hardware, and a hardware designation for cards with technology to accelerate the software side of RTX.
Elaborating further on these great points, there is also the DLSS technology. As I understand it this will allow the new (for GeForce) tensor cores to take some of the load off the RT calculations by using the AI to interpolate. I am wondering if tweaking that (how much is actual RT and how much is AI sampling to interpolate rays not traced) is ultimately not the key to making FPS etc with RTX more stable & adjustable.
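For illustration only: Nvidia has not published how DLSS divides work between traced and inferred samples, but the general idea of tracing a sparse subset of rays and synthesizing the rest can be sketched with a toy fill-in. The real pipeline uses a trained network and temporal data, not the nearest-sample copying used here:

```python
import numpy as np

def sparse_trace_and_fill(trace_pixel, height, width, stride=2):
    """Trace only every `stride`-th pixel, then fill the gaps by
    copying the nearest traced sample up/left -- a crude stand-in
    for learned reconstruction."""
    img = np.full((height, width), np.nan)
    for y in range(0, height, stride):
        for x in range(0, width, stride):
            img[y, x] = trace_pixel(y, x)      # the expensive ray cast
    # Fill untraced pixels from the nearest traced sample above/left.
    for y in range(height):
        for x in range(width):
            if np.isnan(img[y, x]):
                img[y, x] = img[y - y % stride, x - x % stride]
    return img

# Toy "scene": brightness is a smooth gradient, so the fill is nearly exact.
out = sparse_trace_and_fill(lambda y, x: (y + x) / 14.0, 8, 8)
```

Tuning `stride` is exactly the knob the comment above is asking about: more inferred pixels means fewer rays and higher FPS, at the cost of fidelity.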
"RTX is an implementation of the DXR API by NVIDIA" Not quite. Nvidia also developed the VK_NV_raytracing extension, which they offered to the Khronos Group back in early May. This extension is technically a ray-tracing API, quite similar to Microsoft's DXR.
The point of the extension/API is for it to play along with Nvidia's RTX API, because contrary to how it might have looked in the presentation of the 20xx series, Nvidia does not want to be locked to a single graphics API vendor *and* a single OS.
Their main concern is not of course non-Windows (largely Linux) gaming, but some game studios which have focused on Vulkan instead of DX12, particularly some (like Id Software) which support *only* Vulkan in their upcoming games (e.g. next year's Doom Eternal), or others which skipped DirectX 12 support in favor of Vulkan (e.g. Valve's Dota 2).
On the other hand, Vulkan should be at least a few months behind DX12 in ray-tracing support. I have no idea, for instance, if the VK_NV_raytracing extension was officially adopted by the Khronos Group, or if it still remains "off-tree". AMD, on the other hand, has the Radeon Rays ray-tracing engine (targeted at content creators, of course, since they lack ray-tracing hardware), but I believe it works only with OpenCL, not Vulkan.
What's important is that the VK_NV_raytracing extension will be able to be used from Day 1 of the 20xx series release, despite the Khronos Group's currently unclear support for ray-tracing.
"The demos didn’t clarify apples-to-apples performance differences between the GTX 1080 Ti and RTX 2080 Ti"
This is what I am most curious about. I suspect that we're not going to see anything too great for just vanilla GPU performance and the big leap here is solely the addition of ray tracing hardware.
Really? It has been 2 years since the last major card launch. I am expecting a 20-50% performance gain (especially for higher resolution displays) when using normal conventional graphics rendering methods. That said... this sort of confirms what was seen in the keynote; The demos in the keynote (and the previous quadro keynote) were obviously not running RTX graphics at 1080p/60. More like 720p/30 in real time... and I think that is just where the technology is at today, and it will be ready for 1080p gaming in 2 years when the next gen cards come out, and 4k gaming when the next-next gen cards come out. This will be great for rendering things. Great for AI research 'on the cheap'. But simply not ready for prime time on modern games at modern resolutions. Just like CUDA on the 8800, it is a pretty cool technology, but it is going to be a while before it is useful.
Jensen himself said all the stage demos were running at 4k though. This is why I'm trying to confirm the 1080p statement. All other sites said they couldn't confirm resolution settings, yet stated 1080p. Here he's saying he knows the monitors were 1080p 144fps. I just want a confirmation because ya, that's really unfortunate considering the 2080 on paper is maybe 10-20% faster than a 1080ti at a huge price difference considering the used market. To me it makes a lot more sense that the SotTR demo was running 4k 30-50fps considering I'd expect the 2080ti to pull roughly 100fps at 4k in normal raster threads.
Remember the 2080 Ti is the one with the most CUDA cores, though. Perhaps the whole demo was run on "just" the 2080 and not the Ti variant. Too many variables in the demo to give an honest opinion. He even said he had to enable vsync because the projector was capped at 60fps. That would also show noticeable lag on screen.
As with every past GPU release of the last few years, just looking at TFLOPs and assuming that is an accurate metric for gaming performance will fail to produce any useful results, except when comparing between binnings of the same die.
Yes, a sense of entitlement. I feel I am entitled to substantial performance gains for the substantial rise in price. Quit using stupid righty buzzwords.
I wonder how well raytracing does (or doesn't) behave in a dual-GPU setup. If the scaling was good, it seems like this could be a case where having a 2nd card could make sense.
This seems to be the case. Massive marketing campaign/cash grab to help pay for the 7nm RTX-30 (21?) series.
For non-RT gaming, we're probably going to get 20% perf if we're lucky which makes the pricing ridiculous for most gamers. RT is super exciting and all that, but the demos they showed displayed some very strange artifacts. I don't have a word for it, but it seemed like there weren't enough rays (or "gigarays") to create realistic fire reflections, etc. Looked a lot like some bizarre form of aliasing. I'll have to go back and rewatch it or just wait for a deep-dive analysis.
But yeah, no way I'm pre-ordering anything here...
Stop bitching about preorders because unlike software hardware is generally RETURNABLE. So I have NOTHING to lose by ordering now. I will test it myself and if it sucks I will return it. duh
Damn... and here I was thinking (hoping) I could turn up SS to 4 in my Oculus Rift CV1 with that RTX 2080. Oh, and fellas, make it LESS than 25cm long, please ;-)
I think you will, it's just what they are marketing is the ray tracing abilities. The launch is still a month away and they don't want to distract from the ray tracing hype.
There are a lot more cores, so ray tracing aside, the 2080Ti will be 20% faster than the 1080Ti before accounting for architectural improvements.
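For reference, that figure follows straight from the published core counts, 3584 CUDA cores on the GTX 1080 Ti versus 4352 on the RTX 2080 Ti, ignoring clocks and architectural changes:

```python
# Core-count scaling only; ignores clock speed and IPC differences.
cores_1080ti = 3584   # GTX 1080 Ti, published spec
cores_2080ti = 4352   # RTX 2080 Ti, published spec

uplift = cores_2080ti / cores_1080ti - 1
print(f"{uplift:.1%} more CUDA cores")  # ~21.4%
```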
It's funny to see Nvidia go the 3dfx route here and focus on image quality over performance. It's been a while since they focused on a feature (T&L) that significantly improved visual quality.
"the 2080Ti will be 20% faster than the 1080Ti" So let's add 20% to the cost of my 1080 Ti, which I got for $699.99 plus tax NEW (yes, I ordered before the mining craze), so that's approx $750.00. But NV wants to make me pay $500.00+ MORE?! What's that, a 45%+ price increase for a 15-20% increase in performance? Hmmmm. Pricing is wrong unless they just want to use it to sell 10 series cards.
I'd like to suggest that in order to make the table "NVIDIA RTX Support for Games" easier to read, you substitute all "No"s with em dashes. That way, you could see the "Yes"es with a glance.
About what I expected. First gen hardware is not good at doing it with real performance. Seems like the tradeoff is too steep. 1080p and not at 60fps with a $1000 GPU is not impressive no matter how it looks and apparently it's not obvious in some demonstrations which doesn't help matters. Rather take 4k HDR 60fps instead. At least you can see the difference in everything.
Actually it's a $1,200 GPU. You won't find it anywhere near $1,000 right now and the foreseeable future unless sales start to slump. At this point in time MSRP means nothing.
It depends on the demand. If the cards are in low supply at $1,200 you won't see it for $1,000. It going down to $1,000 has more to do with ramping up supply than demand slumping.
True, but there wasn't a whole graphics card lineup centered around Hairworks like there is for Ray-tracing. Having your primary marketing point for the 20 series be the feature that'll pretty much get turned off to maintain ideal framerate is a bad thing!
And it made graphics worse. This is another GameWorks scam by Nvidia. This is worse than PhysX, because they are charging their customers big time for it.
Disappointed and excited at the same time. I'm glad they are moving forward with this, but I can't help but think there is a more efficient way to do realistic graphics. I believe better artists who know how to draw realistically and how to implement the correct dynamic range would go a lot further. Look at Uncharted 4, for example, on lowly PS4 hardware. It looks better than most PC games that require a 1080.
There is no way an artist can substitute for what raytracing does with illumination -- except to artfully disguise with their artistic design much of what is being lost in the scene because there is no ray tracing.
What is with all of the down and disappointed posts here? Yes, it is a $1200 GPU that can't hold a solid 1080p/60 WITH RAY TRACING TURNED ON!!!!!! Know how many fps a 1080 Ti can do with ray tracing in a modern game? I would give it a generous 2-5fps. I mean, this is a really big deal, people! Turn the ray tracing off and render with traditional graphics and these things are going to be 4k monsters! The fact that they can keep above a solid 1080/30 is super impressive. A gimmick in the real world for sure, but in future generations this is going to be the new normal for graphics rendering, and it is going to allow far more realistic (or varied stylistic) games in the future. The RTX part is going to be a feature that few use (I mean... if you drop $1k+ on a GPU you are *probably* playing on a 4k display), but that does not make this a bad card. Wait for the reviews; RTX rendering will be pretty slow, while traditional rendering will finally be a solid 4K/60 on just about anything... and maybe the first playable games at 8k (if you can find the TV to put it on).
It's disappointing because it's ~15% more performance in traditionally rendered games for double the price, and enabling RTX kills performance so much as to relegate the feature to tech-demo status. 7nm cards from both Nvidia and AMD are on the immediate horizon (12 months or less).
Paying $1200+ for a 2080Ti is madness when the 1080Ti can be found for $600-650 new or $400-450 used. Even for folks still rocking GTX 700/900 series, buying a cheap 1000 series or simply waiting for RTX 2100 is a much smarter decision
The 2080ti will be 50% faster than the 1080ti not 15%. The 2080 will be 15% faster but yes the price is bad atm due to used market on 1080ti, unless the ray tracing can be shown to be good after driver updates.
FLOPS aren't everything, ask AMD. In any case, even though I don't think this is a straightforward apples-to-apples comparison, you're probably near the money.
Actually, I think they kept the "conventional" performance metrics around the same at the same price points; in other words, a 2070 will probably perform like a 1080. So think of the RTX 20 line as offering the same performance/price as Pascal, but also giving you RT capabilities. It may not be an exciting release from the FPS/$ perspective, but you probably still will get the same value by that measure as you got last month.
So we are supposed to be happy that the 2080 Ti's launch MSRP is 42% to 71% higher than the card it's directly replacing? This kind of reasoning does nothing but encourage companies to ask more from customers.
Yes, ray tracing is very interesting, but it does not justify such a massive price jump, especially not for a limited, hybrid rendering method. So if they improved ray tracing performance two times with a 3080 Ti, would it be OK if they priced it $1500-$1800?
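The 42% to 71% range checks out against launch MSRPs: the GTX 1080 Ti launched at $699, while the RTX 2080 Ti is $999 standard and $1,199 for the Founders Edition:

```python
# Launch MSRPs in USD; the range depends on which 2080 Ti SKU you count.
msrp_1080ti    = 699
msrp_2080ti    = 999    # standard MSRP
msrp_2080ti_fe = 1199   # Founders Edition

low  = msrp_2080ti / msrp_1080ti - 1      # ~42.9%
high = msrp_2080ti_fe / msrp_1080ti - 1   # ~71.5%
print(f"price increase: {low:.1%} to {high:.1%}")
```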
It's not ray tracing. It is hybrid ray tracing. There's a huge difference. It uses rays to calculate some things, that is not the same thing as rendering in movies where rays are used for everything.
From the (limited) analyses of the core that have been done this is not the case. The traditional rendering portion is very similar in scale to 10XX generation cards and then there is a separate section for the ray tracing part. This is not processing oomph that will be put towards traditional rendering if ray tracing is turned off but hardware specific to the task of ray tracing. Therefore I think that anyone who has designs on these being able to do 4K/60 on everything will be disappointed. So far the specs suggest a modest increase in traditional performance but certainly not as much as we'd expect for >2 years development time.
Consider it like a CPU with onboard graphics. If you use the onboard graphics it'll drag everything else down as it's so slow but if you turn it off, you can't dedicate that silicon to perform CPU tasks and make your CPU faster. It's not built for it.
Look genius, even AnandTech couldn't see much visual difference to justify the huge FPS drop. Nobody playing an FPS will use it at that expense if they cannot play the game competitively.
This. Although, it does feel as if they are over-charging for the first baby steps of a new technology. Then again, they are looking at this from a dominant position in the market place with a back stock of last generation cards.
"Know how many fps a 1080 ti can do with ray tracing in a modern game?" Don't care, I never asked for RT. But let me ask you: how many devs will implement it without NV paying them to do it?
Can you confirm the resolution displayed? I see a few places mentioning 1080p, however another site mentions 4k. Jensen said all demos on stage were 4k, so if they were struggling at 1080p, wouldn't you suspect a harder struggle to even show 5-10fps at 4k?
Sorry to clarify, the sites saying the SotTR was at 1080p couldn't verify that claim other than the statement that the game capture was running "at game resolution" which would indicate 1080p...however other sites mention it was running at 4k. Also can you confirm the monitor you saw was 1080p 144hz and not one of the newer 4k screens?
Yeah, it is like a Hairworks that makes major visual quality improvements, has dedicated acceleration hardware, has a DirectX extension designed by Microsoft to support it, and has industry-wide support and enthusiasm among games developers.
LOL, by "like HairWorks" I meant the early adopter premium is totally not worth the money. The facts on the ground are that until consoles get the horsepower to run them, developers will only use physics (HairWorks) and ray tracing (RTX) as gimmicks rather than differentiators.
Do games even do PhysX stuff anymore, or is that just assumed? I remember people buying a 2nd video card to run that. At least this seems to be an implementation of a DirectX component, so it has a chance at getting used. Makes me wonder if Microsoft has something planned for the next Xbox. DirectX seems to telegraph the direction they want to move the console these days.
Yes, they do, but first you need to know what PhysX really is. It's a complete physics engine. It has two modes: CPU and GPU accelerated.
What you were referring to in your comment is the GPU mode that only runs on GeForce cards. It never really caught on because of that limitation; Radeon cards were left out. Only a small number of games have GPU PhysX.
However, a game developer can implement PhysX in its CPU-only mode, meaning it can run on any PC regardless of the graphics card used. There are a lot of CPU-only PhysX games out there. Even the UE4 Infiltrator demo uses PhysX. In fact, the PhysX engine is built into the Unreal and Unity engines.
These days I often have more fun playing low-fi indie games than the kind of games that are going to be using all this specialized ray-tracing. The only reason I follow any of the GPU news is the increasingly quixotic hope that I'll get better FP32 compute for less, and this looks like another disappointment. Anyone know if there will be CUDA-type programmatic access to the ray-tracing engine outside some game-specific API?
I've heard nothing about that and I've kept an eye out whilst I've been reading. I suspect the priority is to get the thing working properly for games first and then maybe work on something like that.
Volta had access to the tensor cores, which are units doing matrix multiplication + addition with FP16×FP16 inputs accumulated into FP32. Not sure (yet) of the precision of the tensor cores on Turing or of the access to them, but it should be at least FP16×FP16+FP32 -> FP32 and should be CUDA-accessible.
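Whatever Turing's exact supported precisions turn out to be, the core operation is a small fused matrix multiply-accumulate, D = A*B + C, with narrow inputs and a wider accumulator. A NumPy sketch of the Volta-style FP16-in/FP32-accumulate case (emulation only; real tensor cores do this per small tile, 4x4 on Volta, in one instruction):

```python
import numpy as np

# Emulate one tensor-core style op: FP16 inputs, FP32 accumulate.
rng = np.random.default_rng(0)
A = rng.standard_normal((16, 16)).astype(np.float16)
B = rng.standard_normal((16, 16)).astype(np.float16)
C = rng.standard_normal((16, 16)).astype(np.float32)

# Inputs are stored in half precision, but the multiply-add chain
# accumulates in float32, which is what preserves accuracy.
D = A.astype(np.float32) @ B.astype(np.float32) + C
```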
"These days I often have more fun playing low-fi indie games than the kind of games that are going to be using all this specialized ray-tracing."
You aren't supposed to point out how few high-end games are even worth playing, never mind the eye-candy or FPS. Widespread consideration of that would lead to a graphics market collapse. Please get back in line.
Sounds like it's not quite where it needs to be for good performance, but the promise is there. In another generation or two when the raytracing hardware is 5x more powerful and the game support is better, it will be a must-have for high-end PC gaming.
Ooh! Reflections in puddles! Awesome! 8) Well actually no, I couldn't care less. I would be far more impressed if falling rain (or from wherever else) could properly accumulate, flow, cause damage, make things rot, affect vehicles, freeze at night, form fog later, etc.
This obsession with purely visual eye candy is just an extension of the fad for crazy FPS rates. What matters is functionality, immersion is about a believable environment. That door might look awesome, but if it can't be opened (if it's just some fancy texture or whatever), then it's not a door. If an object can't be picked up, then it's irrelevant no matter its appearance. Immersion gets broken any time one tries to do something that isn't possible because the game world doesn't support it.
A teapot in a virtual world should be able to make tea, regardless of how well it reflects its environment.
This is a chicken-and-egg problem. Adding graphics is purely optional, and you can cut it without detracting from the game (too) much. Adding gameplay features cannot be optional. This was tried (and failed) during the PhysX era, when a few games depending on the new PhysX hardware came out, using destructible environments and interactions, and commercially flopped because so few people could actually run them.
This is why most PhysX interactions lately are, again, cosmetic (swirling papers in Batman, for example).
To have better interactions, your full market needs to support it, to have a chance of selling it. However, few people want to buy new hardware if nothing supports it... This locks us in cosmetics or forces companies to sneak/force capabilities in (to start the user base).
Yes, PPU/GPU-based PhysX was doomed to fail, since it didn't work on Radeons and consoles.
CPU-mode PhysX caught on, though. It's nowhere near as fast as GPU mode, but it runs on pretty much all major platforms. You can have gameplay-related destructible environments running on the CPU, although you cannot have as many elements as in GPU mode, obviously.
"This obsession with purely visual eye candy is just an extension of the fad for crazy FPS rates. What matters is functionality, immersion is about a believable environment. That door might look awesome, but if it can't be opened (if it's just some fancy texture or whatever), then it's not a door. If an object can't be picked up, then it's irrelevant no matter its appearance. Immersion gets broken any time one tries to do something that isn't possible because the game world doesn't support it."
I agree wholeheartedly. In fact, I would add that a game that does this right benefits more than a game without it that merely has incredible graphics. A great early example of this for me is Sid Meier's Pirates! Gold. As a kid I played the crap out of that game, not because of the graphics but because there was incredible depth built into incredible mechanics: actual pirates with accurate flags you could read about in the manual, a living environment with different paths to success, historical ships and places, etc. You grew old, could marry, or died amassing treasure and retired into a hall of fame, for crying out loud.
I really believe ray tracing will be an amazing addition to the realism of a game, but if a game doesn't have the foundation of what makes a game special (and not many do), it's really just another No Man's Sky. (I've buried over a hundred hours into NMS wishing it had 10% of what makes Crusader Kings 2 amazing.)
>I would be far more impressed if falling rain (or from wherever else) could properly accumulate, flow, cause damage, make things rot, affect vehicles, freeze at night, form fog later, etc.
That is a game software criticism. This is a review of a hardware part and supporting API, and the two have nothing to do with each other.
People are dumb; they pre-ordered blindly. Hours later we knew that with RTX enabled you will get 25-40fps at 1080p, near 60fps in some games. And this is with a 2080 Ti; the 2080 gets 20% worse RTX performance, so you can imagine how mutilated it will be on the 2070 and below (720p 30-60fps, LOL).
Actual non-RTX performance is around 15-20% over the previous gen.
A 1080 Ti can be found for $650 new, plenty of them, and many fanboys are paying $1200 for 20% more performance.
The only thing dumb about it is that you may want to return it later. This isn't a game, it's HARDWARE, and thus returnable in most cases. There is no being blind; it's having the insight to buy it and TEST it yourself, and if it sucks you/I can return them. Personally I think everyone should do it and leave NV with a S**T ton of returns :)
Assetto Corsa Competizione could be the one thing that pushes me to buy an RTX card. With over 5000 hours dumped in the first game it might be worth the money. Totally not surprised you couldn't handle it without any experience, only one out of maybe 10 people that have tried my rig could even make it 1 lap at a decent pace without crashing and he has track experience IRL.
This would be intriguing to know, actually. Ray tracing for 3D modeling scales very well across multiple GPUs. If real-time ray tracing in games can scale similarly, that bodes well. We'll be looking at GPUs with multiple dies, akin to Ryzen, in the not-so-distant future. Also, with ray tracing you don't need to render an entire scene at once.
In theory, a real-time ray tracing API could render different areas of the screen at different frame rates. Toward the peripheral edges of a ray-traced scene the rate could be lower, even 15FPS, while closer to the center of the image it could dynamically scale up toward 60FPS.
As an example, a central 640x480 region might render at 60FPS, a frame around it out to 1024x768 at 45FPS, a frame around that out to 1440x900 at 30FPS, and the outermost frame out to 1920x1200 at 15FPS.
You'd see faster updates toward the center of the image, with rendering priority weighted toward the center and reduced for the peripheral-vision portion of the render as a whole.
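That ring scheme is essentially foveated rendering with per-ring refresh rates. A toy sketch, using the ring sizes and rates proposed in the comment above (nothing here comes from any actual API):

```python
def ring_fps(x, y, width=1920, height=1200):
    """Return a target refresh rate for a pixel, highest at screen center."""
    # Ring boundaries taken from the comment's example resolutions.
    rings = [            # (half-width, half-height, fps)
        (320, 240, 60),  # inner 640x480 region
        (512, 384, 45),  # within 1024x768
        (720, 450, 30),  # within 1440x900
    ]
    dx = abs(x - width / 2)
    dy = abs(y - height / 2)
    for half_w, half_h, fps in rings:
        if dx <= half_w and dy <= half_h:
            return fps
    return 15            # outermost frame: out to 1920x1200

print(ring_fps(960, 600))   # dead center -> 60
print(ring_fps(10, 10))     # far corner  -> 15
```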
It really looks promising for the GeForce GTX 10 series, regarding the prices. Do you guys have any idea what the prices will be when the RTX series ships? Because I might as well buy a GTX graphics card now instead of waiting for Black Friday... Hoping to get some feedback.
xXx][Zenith - Tuesday, August 21, 2018 - link
A quick perf video with MS Github DXR samples featuring Pascal: https://youtu.be/2kW3Rs4V6FA
jabber - Wednesday, August 22, 2018 - link
Yep, tile rendering for the win!
edzieba - Wednesday, August 22, 2018 - link
Nvidia have been using tiled rendering since Maxwell.
Samus - Wednesday, August 22, 2018 - link
silverblue - Friday, August 24, 2018 - link
Exactly; work out what you don't need to draw before you draw it.
piiman - Saturday, August 25, 2018 - link
They already do this, if you had listened to the keynote.
iwod - Wednesday, August 22, 2018 - link
I would actually like a write-up on the difference between PowerVR RT and NV RT.
lucam - Wednesday, August 22, 2018 - link
This is a good idea actually...
silverblue - Friday, August 24, 2018 - link
lucam - Friday, August 24, 2018 - link
Very good point. I hope that PowerVR gets back too.
D. Lister - Saturday, August 25, 2018 - link
Learn to use a calculator, "10 mega rays in 2 watts" scales upto 1.25 gigarays, not 12.5. Yeesh!McD - Thursday, August 30, 2018 - link
The GR6500 was pushing 300mrays at 3W theoretical max though Otoy only got 100mrays. An iPad Pro (A10X) was showing 30mrays under Metal2 at WWDC2018 with no special hardware.Given the best AR solution needs RT and Apple could have bought ImgTec for the Caustic IP, I would say Apple have their own hardware solution. Not sure it’ll be in the A12/X though they’re more likely to leave it for the AR Glasses reveal.
Yojimbo - Tuesday, August 21, 2018 - link
The following is my take, which is different from yours.I think RTX is more than just accelerated DXR. RTX is a few things. As far as software it is an implementation of DXR. It maps the DXR API to NVIDIA's hardware, whether that hardware be accelerated for ray tracing or not. So, the RTX software stack is still being used to implement DXR on Pascal GPUs.
RTX is also a hardware designation for a set of hardware technologies that accelerate NVIDIA's RTX software stack, and hence Microsoft's DXR.
Finally, from what I remember, part of the RTX software stack is sort of like a GameWorks library, which means that it sits on top of DXR and implements it for various rendering techniques developers can use.
So, if I understand correctly, RTX is three things: the software libraries around DXR, the software that implements DXR on NVIDIA's hardware, and a hardware designation for cards with technology to accelerate the software side of RTX.
MadManMark - Wednesday, August 22, 2018 - link
Elaborating further on these great points, there is also the DLSS technology. As I understand it, this will allow the new (for GeForce) tensor cores to take some of the load off the RT calculations by using the AI to interpolate. I am wondering if tweaking that (how much is actual RT and how much is AI sampling to interpolate rays not traced) is ultimately not the key to making FPS etc. with RTX more stable & adjustable.
Santoval - Wednesday, August 22, 2018 - link
"RTX is an implementation of the DXR API by NVIDIA"Not quite. Nvidia also developed the VK_NV_raytracing extension, which they offered to the Khronos Group back in early May. This extension is technically a ray-tracing API, quite similar to Microsoft's DXR.
The point of the extension/API is for it to play along with Nvidia's RTX API, because contrary to how it might have looked in the presentation of the 20xx series, Nvidia does not want to be locked to a single graphics API vendor *and* a single OS.
Their main concern is not of course non-Windows (largely Linux) gaming, but some game studios which have focused on Vulkan instead of DX12, particularly some (like Id Software) which support *only* Vulkan in their upcoming games (e.g. next year's Doom Eternal), or others which skipped DirectX 12 support in favor of Vulkan (e.g. Valve's Dota 2).
On the other hand, Vulkan should be at least a few months behind DX12 in ray-tracing support. I have no idea, for instance, if the VK_NV_raytracing extension was officially adopted by the Khronos Group, or if it still remains "off-tree".
AMD, on the other hand, has the Radeon Rays ray-tracing engine (targeted at content creators, of course, since they lack ray-tracing hardware), but I believe it works only with OpenCL, not Vulkan.
What's important is that the VK_NV_raytracing extension will be able to be used from Day 1 of the 20xx series release, despite the Khronos Group's currently unclear support for ray-tracing.
gijames1225 - Tuesday, August 21, 2018 - link
"The demos didn’t clarify apples-to-apples performance differences between the GTX 1080 Ti and RTX 2080 Ti"This is what I am most curious about. I suspect that we're not going to see anything too great for just vanilla GPU performance and the big leap here is solely the addition of ray tracing hardware.
CaedenV - Tuesday, August 21, 2018 - link
Really? It has been 2 years since the last major card launch. I am expecting a 20-50% performance gain (especially for higher resolution displays) when using conventional graphics rendering methods.
That said... this sort of confirms what was seen in the keynote: the demos in the keynote (and the previous Quadro keynote) were obviously not running RTX graphics at 1080p/60. More like 720p/30 in real time... and I think that is just where the technology is at today; it will be ready for 1080p gaming in 2 years when the next-gen cards come out, and 4K gaming when the next-next-gen cards come out. This will be great for rendering things. Great for AI research 'on the cheap'. But simply not ready for prime time on modern games at modern resolutions. Just like CUDA on the 8800, it is a pretty cool technology, but it is going to be a while before it is useful.
SonicKrunch - Wednesday, August 22, 2018 - link
Jensen himself said all the stage demos were running at 4K, though. This is why I'm trying to confirm the 1080p statement. All other sites said they couldn't confirm resolution settings, yet stated 1080p. Here he's saying he knows the monitors were 1080p 144fps. I just want a confirmation because, yeah, that's really unfortunate considering the 2080 on paper is maybe 10-20% faster than a 1080 Ti at a huge price difference considering the used market. To me it makes a lot more sense that the SotTR demo was running 4K 30-50fps, considering I'd expect the 2080 Ti to pull roughly 100fps at 4K in normal raster workloads.
imaheadcase - Wednesday, August 22, 2018 - link
Remember, the 2080 Ti is the one with the most CUDA cores, though. Perhaps the whole demo was run on "just" the 2080 and not the Ti variant. Too many variables in the demo to give an honest opinion. He even said he had to enable vsync because the PROJECTOR was capped at 60fps. That would also show noticeable lag on screen.
mode_13h - Wednesday, August 22, 2018 - link
You're expecting based on what: blindly extrapolating trends? A sense of entitlement?
Look at the FP32 TFLOPS and you're sure to be disappointed. That and memory bandwidth are the two main bottlenecks of existing titles.
edzieba - Wednesday, August 22, 2018 - link
As with every past GPU release of the last few years, just looking at TFLOPS and assuming that is an accurate metric for gaming performance will fail to produce any useful results, except when comparing between binnings of the same die.
29a - Friday, August 24, 2018 - link
Yes, a sense of entitlement. I feel I am entitled to substantial performance gains for the substantial rise in price. Quit using stupid righty buzzwords.
twtech - Wednesday, August 22, 2018 - link
I wonder how well ray tracing does (or doesn't) behave in a dual-GPU setup. If the scaling were good, it seems like this could be a case where having a 2nd card could make sense.
Lolimaster - Wednesday, August 22, 2018 - link
The actual jumps will come from 7nm; this is just a marketing gimmick that people are pre-ordering blindly.
nathanddrews - Wednesday, August 22, 2018 - link
This seems to be the case. Massive marketing campaign/cash grab to help pay for the 7nm RTX-30 (21?) series.
For non-RT gaming, we're probably going to get 20% perf if we're lucky, which makes the pricing ridiculous for most gamers. RT is super exciting and all that, but the demos they showed displayed some very strange artifacts. I don't have a word for it, but it seemed like there weren't enough rays (or "gigarays") to create realistic fire reflections, etc. Looked a lot like some bizarre form of aliasing. I'll have to go back and rewatch it or just wait for a deep-dive analysis.
But yeah, no way I'm pre-ordering anything here...
piiman - Saturday, August 25, 2018 - link
Stop bitching about preorders because, unlike software, hardware is generally RETURNABLE. So I have NOTHING to lose by ordering now. I will test it myself and if it sucks I will return it. Duh.
epdm2be - Sunday, September 2, 2018 - link
Damn... and here I was thinking (hoping) I could turn up SS to 4 in my Oculus Rift CV1 with that RTX 2080. Oh, and fellas, make it LESS than 25 cm long, please ;-)
Yojimbo - Tuesday, August 21, 2018 - link
I think you will; it's just that what they are marketing is the ray tracing abilities. The launch is still a month away and they don't want to distract from the ray tracing hype.
Lolimaster - Wednesday, August 22, 2018 - link
Turing is what it is: optimized Pascal + RTX. Like Ryzen+, but this time Nvidia charges you 70% more for a minimal performance increase.
Samus - Wednesday, August 22, 2018 - link
There are a lot more cores, so ray tracing aside, the 2080 Ti will be 20% faster than the 1080 Ti before accounting for architectural improvements.
It’s funny to see Nvidia go the 3Dfx route here and focus on image quality over performance. It’s been a while since they focused on a feature (T&L) that significantly improved the visual quality.
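The raw core-count arithmetic behind that ~20% figure, using the published spec-sheet counts (and ignoring clocks and IPC, which will shift the final number):

```python
# CUDA core counts from the published spec sheets.
cores_1080ti = 3584   # GTX 1080 Ti
cores_2080ti = 4352   # RTX 2080 Ti

uplift = cores_2080ti / cores_1080ti - 1
print(f"Core-count uplift: {uplift:.1%}")  # ~21.4%, before clocks or IPC
```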
piiman - Saturday, August 25, 2018 - link
"the 2080Ti will be 20% faster than the 1080Ti"So let add 20% to the cost of my 1080ti I got for 699.99 plus tax NEW (yes I ordered before the mining craze) so that's approx 750.00 but NV want to make me pay 500.00+ MORE?! whats that a 45%+ increase for 15-20 increase in preference? hmmmmmm Pricing is wrong unless they just want to use it to sell 10 series cards.
Hul8 - Tuesday, August 21, 2018 - link
I'd like to suggest that, in order to make the "NVIDIA RTX Support for Games" table easier to read, you substitute all "No"s with em dashes. That way you could see the "Yes"es at a glance.
boozed - Tuesday, August 21, 2018 - link
Or perhaps just a green box for yes, red box for no.
MtnStephen - Tuesday, August 21, 2018 - link
NO. The red-green color blind have enough of that crap to deal with already.
CaedenV - Tuesday, August 21, 2018 - link
must be tough for the color blind... you never know if you are buying nVidia or AMD
hfm - Tuesday, August 21, 2018 - link
A+
nathanddrews - Wednesday, August 22, 2018 - link
LOL
boozed - Wednesday, August 22, 2018 - link
Whoops, you are quite right!
silverblue - Wednesday, August 22, 2018 - link
Or just a tick in place of yes, and blanks for no.
cmdrdredd - Tuesday, August 21, 2018 - link
About what I expected. First-gen hardware is not good at doing it with real performance. Seems like the tradeoff is too steep. 1080p, and not at 60fps, with a $1000 GPU is not impressive no matter how it looks, and apparently it's not obvious in some demonstrations, which doesn't help matters. I'd rather take 4K HDR 60fps instead. At least you can see the difference in everything.
evernessince - Tuesday, August 21, 2018 - link
Actually it's a $1,200 GPU. You won't find it anywhere near $1,000 right now or for the foreseeable future unless sales start to slump. At this point in time MSRP means nothing.
Yojimbo - Tuesday, August 21, 2018 - link
It depends on the demand. If the cards are in low supply at $1,200 you won't see it for $1,000. It going down to $1,000 has more to do with ramping up supply than demand slumping.
Yojimbo - Tuesday, August 21, 2018 - link
I meant to say that it depends on the demand relative to supply... I don't expect the demand to slump, I expect the supply to grow.
Yojimbo - Tuesday, August 21, 2018 - link
I think the performance will improve by the time they actually release this into games. That's still months away.
piiman - Saturday, August 25, 2018 - link
"with a $1000 GPU" You mean $1,199.00.
Hok - Tuesday, August 21, 2018 - link
Sounds like a 1200 dollar dud to me... dropping frames at 60Hz?! But I guess we have to wait a month to know for sure.
imaheadcase - Wednesday, August 22, 2018 - link
Maybe the ray tracing part, but so was "HairWorks" when it came out, remember?
Devo2007 - Wednesday, August 22, 2018 - link
True, but there wasn't a whole graphics card lineup centered around HairWorks like there is for ray tracing. Having the primary marketing point for the 20 series be the feature that'll pretty much get turned off to maintain ideal framerates is a bad thing!
eva02langley - Wednesday, August 22, 2018 - link
And it made graphics worse. This is another GameWorks scam by Nvidia. This is worse than PhysX because they are charging their customers big time for it.
piiman - Saturday, August 25, 2018 - link
"they are charging their customers big time for it."
For something NO ONE ASKED for!
Dug - Tuesday, August 21, 2018 - link
Disappointed and excited at the same time. I'm glad they are moving forward with this, but I can't help but think there is a more efficient way to do realistic graphics. I believe better artists who know how to draw realistically and know how to implement the correct dynamic range would go a lot further. Look at Uncharted 4, for example, on lowly PS4 hardware. It looks better than most PC games that require a 1080.
MadManMark - Friday, August 24, 2018 - link
There is no way an artist can substitute for what ray tracing does with illumination -- except to artfully disguise with their artistic design much of what is being lost in the scene because there is no ray tracing.
piiman - Saturday, August 25, 2018 - link
Which is why I felt like the keynote speech was for devs. "It just works"!
CaedenV - Tuesday, August 21, 2018 - link
What is with all of the down and disappointed posts here? Yes, it is a $1200 GPU that can't hold a solid 1080p/60 WITH RAY TRACING TURNED ON!!!!!! Know how many fps a 1080 Ti can do with ray tracing in a modern game? I would give it a generous 2-5 fps. I mean, this is a really big deal, people! Turn the ray tracing off and render with traditional graphics and these things are going to be 4K monsters! The fact that they can keep above a solid 1080/30 is super impressive. A gimmick in the real world for sure, but in future generations this is going to be the new normal for graphics rendering, and it is going to allow far more realistic (or varied stylistic) games in the future. The RTX part is going to be a feature that few use (I mean... if you drop $1k+ on a GPU you are *probably* playing on a 4K display), but that does not make this a bad card.
Wait for the reviews; RTX rendering will be pretty slow, while traditional rendering will finally be a solid 4K/60 on just about anything... and maybe the first playable games at 8K (if you can find the TV to put it on).
stephenbrooks - Tuesday, August 21, 2018 - link
I wonder if the hardware collision detection (bounding volume hierarchy) for rays can be repurposed for game physics?
Mat3 - Tuesday, August 21, 2018 - link
According to the PowerVR guys, who also have a GPU with dedicated ray tracing hardware, they seem to think so.
Destoya - Wednesday, August 22, 2018 - link
It's disappointing because it's ~15% more performance in traditionally rendered games for double the price, and enabling RTX kills performance so much as to relegate the feature to tech-demo status. 7nm cards from both Nvidia and AMD are on the immediate horizon (12 months or less).
Paying $1200+ for a 2080 Ti is madness when the 1080 Ti can be found for $600-650 new or $400-450 used. Even for folks still rocking GTX 700/900 series cards, buying a cheap 1000 series card or simply waiting for the RTX 2100 is a much smarter decision.
SonicKrunch - Wednesday, August 22, 2018 - link
The 2080 Ti will be 50% faster than the 1080 Ti, not 15%. The 2080 will be 15% faster, but yes, the price is bad atm due to the used market on the 1080 Ti, unless the ray tracing can be shown to be good after driver updates.
silverblue - Wednesday, August 22, 2018 - link
Do you have any proof of the 50% uplift? I know people don't have any proof of the 15% uplift either, but it's admittedly far more achievable.
Lolimaster - Wednesday, August 22, 2018 - link
Check the TFLOPS, 15-20%.
silverblue - Thursday, August 23, 2018 - link
FLOPS aren't everything, ask AMD. In any case, even though I don't think this is a straightforward apples-to-apples comparison, you're probably near the money.
piiman - Saturday, August 25, 2018 - link
Math, man, MATH! Show us how you arrived at your amazing performance increase... I'll wait.
29a - Friday, August 24, 2018 - link
"It's disappointing because it's ~15% more performance in traditionally rendered games"
You don't know that.
MadManMark - Friday, August 24, 2018 - link
Actually, I think they kept the "conventional" performance metrics around the same at the same price points; in other words, a 2070 will probably perform like a 1080. So think of the RTX 20 line as offering the same performance/price as Pascal, but also giving you RT capabilities. It may not be an exciting release from the FPS/$ perspective, but you probably still will get the same value by that measure as you got last month.
eddman - Wednesday, August 22, 2018 - link
So we are supposed to be happy that the 2080 Ti's launch MSRP is 42% to 71% higher than the card it's directly replacing? This kind of reasoning does nothing but encourage companies to ask more from customers.
Yes, ray tracing is very interesting, but it does not justify such a massive price jump, especially not for a limited, hybrid rendering method. So if they doubled ray tracing performance with a 3080 Ti, would it be OK if they priced it $1500-$1800?
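The arithmetic behind the 42-71% figure, assuming the widely reported launch MSRPs ($699 for the 1080 Ti; $999 base / $1,199 Founders Edition for the 2080 Ti):

```python
# Launch-MSRP jump from GTX 1080 Ti to RTX 2080 Ti.
msrp_1080ti = 699
msrp_2080ti_base, msrp_2080ti_fe = 999, 1199

jump_base = msrp_2080ti_base / msrp_1080ti - 1
jump_fe = msrp_2080ti_fe / msrp_1080ti - 1
print(f"base model:       +{jump_base:.1%}")  # +42.9%
print(f"Founders Edition: +{jump_fe:.1%}")    # +71.5%
```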
Alistair - Wednesday, August 22, 2018 - link
It's not ray tracing; it is hybrid ray tracing. There's a huge difference. It uses rays to calculate some things; that is not the same as rendering in movies, where rays are used for everything.
philehidiot - Wednesday, August 22, 2018 - link
From the (limited) analyses of the core that have been done, this is not the case. The traditional rendering portion is very similar in scale to the 10-series generation of cards, and then there is a separate section for the ray tracing part. This is not processing oomph that will be put towards traditional rendering if ray tracing is turned off, but hardware specific to the task of ray tracing. Therefore I think that anyone who has designs on these being able to do 4K/60 on everything will be disappointed. So far the specs suggest a modest increase in traditional performance, but certainly not as much as we'd expect for >2 years of development time.
Consider it like a CPU with onboard graphics. If you use the onboard graphics it'll drag everything else down as it's so slow, but if you turn it off, you can't dedicate that silicon to CPU tasks and make your CPU faster. It's not built for it.
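That separation can be illustrated with a toy frame-time model (all numbers here are made-up, purely illustrative milliseconds): because the RT units are separate fixed-function silicon, disabling them doesn't hand their die area back to the raster pipeline, while enabling them adds work the frame has to absorb.

```python
# Toy frame-time model: raster and ray-tracing work largely serialize
# in a hybrid pipeline (G-buffer first, then rays are traced against
# it, then denoise/shade). Numbers are illustrative only.
def frame_time_ms(raster_ms: float, rt_ms: float, rt_enabled: bool) -> float:
    return raster_ms + (rt_ms if rt_enabled else 0.0)

def fps(ms: float) -> float:
    return 1000.0 / ms

print(fps(frame_time_ms(10.0, 18.0, rt_enabled=False)))  # 100.0 fps, RT off
print(fps(frame_time_ms(10.0, 18.0, rt_enabled=True)))   # ~35.7 fps, RT on
```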
eva02langley - Wednesday, August 22, 2018 - link
Look, genius, even AnandTech couldn't see much visual difference for the huge FPS drop. Nobody playing FPS games will use it at that expense if they cannot play the game competitively.
Shadyghost - Wednesday, August 22, 2018 - link
This. Although it does feel as if they are over-charging for the first baby steps of a new technology. Then again, they are looking at this from a dominant position in the marketplace, with a back stock of last generation cards.
piiman - Saturday, August 25, 2018 - link
"What is with all of the down and disappointed posts here? Yes, it is a $1200 GPU that can't hold a solid 1080p/60 WITH RAY TRACING TURNED ON!!!!!!"
And you know this how? Marketing benchmarks? Jensen said so? Also, when they can do it in 4K, give me a call.
piiman - Saturday, August 25, 2018 - link
"Know how many fps a 1080 ti can do with ray tracing in a modern game?"Don't care I never asked for RT. But let me ask you, how many dev's will implement it without NV paying them to do it?
SonicKrunch - Tuesday, August 21, 2018 - link
Can you confirm the resolution displayed? I see a few places mentioning 1080p, however another site mentions 4K. Jensen said all demos on stage were 4K, so if they were struggling at 1080p, wouldn't you suspect a harder struggle to even show 5-10 fps at 4K?
SonicKrunch - Tuesday, August 21, 2018 - link
Sorry, to clarify: the sites saying the SotTR demo was at 1080p couldn't verify that claim other than the statement that the game capture was running "at game resolution", which would indicate 1080p... however other sites mention it was running at 4K. Also, can you confirm the monitor you saw was 1080p 144Hz and not one of the newer 4K screens?
wr3zzz - Wednesday, August 22, 2018 - link
RTX sounds more and more like HairWorks.
Yojimbo - Wednesday, August 22, 2018 - link
Yeah, it is like a HairWorks that makes major visual quality improvements, has dedicated acceleration hardware, has a DirectX extension designed by Microsoft to support it, and has industry-wide support and enthusiasm among games developers.
wr3zzz - Wednesday, August 22, 2018 - link
LOL, by "like HairWorks" I meant the early adopter premium is totally not worth the money. The fact on the ground is that until consoles get the horsepower to run them, developers will only use physics (HairWorks) and ray tracing (RTX) as gimmicks rather than differentiators.
Midwayman - Wednesday, August 22, 2018 - link
Do games even do PhysX stuff anymore, or is that just assumed? I remember people buying a 2nd video card to run that. At least this seems to be an implementation of a DirectX component, so it has a chance of getting used. Makes me wonder if Microsoft has something planned for the next Xbox. DirectX seems to telegraph the direction they want to move the console these days.
eddman - Wednesday, August 22, 2018 - link
Yes, they do, but first you need to know what PhysX really is. It's a complete physics engine. It has two modes: CPU and GPU accelerated.
What you were referring to in your comment is the GPU mode that only runs on GeForce cards. It never really caught on because of that limitation; Radeon cards were left out. Only a small number of games have GPU PhysX.
However, a game developer can implement PhysX in its CPU-only mode, meaning it can run on any PC, regardless of the graphics card used. There are a lot of CPU-only PhysX games out there. Even the UE4 Infiltrator demo uses PhysX. Actually, the PhysX engine is built into the Unreal and Unity engines.
piiman - Saturday, August 25, 2018 - link
You left out the "and if it sucks" part.
Impetuous - Wednesday, August 22, 2018 - link
These days I often have more fun playing low-fi indie games than the kind of games that are going to be using all this specialized ray tracing. The only reason I follow any of the GPU news is the increasingly quixotic hope that I'll get better FP32 compute for less, and this looks like another disappointment. Anyone know if there will be CUDA-type programmatic access to the ray-tracing engine outside some game-specific API?
philehidiot - Wednesday, August 22, 2018 - link
I've heard nothing about that, and I've kept an eye out whilst I've been reading. I suspect the priority is to get the thing working properly for games first and then maybe work on something like that.
frenchy_2001 - Wednesday, August 22, 2018 - link
Volta had access to the tensor cores, which are cores doing matrix multiplication + addition: FP16xFP16+FP32 -> FP32.
Not sure (yet) of the precision of the tensor cores on Turing or the access to them, but it will be at least FP16xFP16+FP32 -> FP32 and should be CUDA accessible.
Heard nothing about the RT core (so far)...
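For reference, the tensor-core operation is a fused matrix multiply-accumulate, D = A·B + C, with FP16 operands and FP32 accumulation in the Volta scheme (Turing adds lower-precision integer modes on top). A numerical sketch with NumPy, not how you'd actually program the hardware:

```python
import numpy as np

# Emulate one tensor-core style fused multiply-accumulate on 4x4 tiles:
# FP16 operands A and B, FP32 accumulator C, FP32 result D.
rng = np.random.default_rng(0)
A = rng.random((4, 4)).astype(np.float16)
B = rng.random((4, 4)).astype(np.float16)
C = rng.random((4, 4)).astype(np.float32)

# The multiply-add is carried out at FP32 precision:
D = A.astype(np.float32) @ B.astype(np.float32) + C
print(D.dtype, D.shape)  # float32 (4, 4)
```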
Arbie - Wednesday, August 22, 2018 - link
"These days I often have more fun playing low-fi indie games than the kind of games that are going to be using all this specialized ray-tracing."You aren't supposed to point out how few high-end games are even worth playing, never mind the eye-candy or FPS. Widespread consideration of that would lead to a graphics market collapse. Please get back in line.
twtech - Wednesday, August 22, 2018 - link
Sounds like it's not quite where it needs to be for good performance, but the promise is there. In another generation or two, when the raytracing hardware is 5x more powerful and the game support is better, it will be a must-have for high-end PC gaming.
mapesdhs - Wednesday, August 22, 2018 - link
Ooh! Reflections in puddles! Awesome! 8) Well actually no, I couldn't care less. I would be far more impressed if falling rain (or from wherever else) could properly accumulate, flow, cause damage, make things rot, affect vehicles, freeze at night, form fog later, etc.
This obsession with purely visual eye candy is just an extension of the fad for crazy FPS rates. What matters is functionality, immersion is about a believable environment. That door might look awesome, but if it can't be opened (if it's just some fancy texture or whatever), then it's not a door. If an object can't be picked up, then it's irrelevant no matter its appearance. Immersion gets broken any time one tries to do something that isn't possible because the game world doesn't support it.
A teapot in a virtual world should be able to make tea, regardless of how well it reflects its environment.
http://www.sgidepot.co.uk/reflections.txt
frenchy_2001 - Wednesday, August 22, 2018 - link
This is a chicken-and-egg problem.
Adding graphics is purely optional, and you can cut it without detracting from the game (too) much.
Adding gameplay features cannot be optional. This was tried (and failed) during the PhysX era, when a few games depending on the new PhysX hardware came out, using destructible environments and interactions, and commercially flopped, as so few people could actually run them.
This is why most PhysX interactions lately are, again, cosmetic (swirling papers in Batman, for example).
To have better interactions, your full market needs to support them, to have a chance of selling them. However, few people want to buy new hardware if nothing supports it... This locks us into cosmetics, or forces companies to sneak/force capabilities in (to start the user base).
eddman - Wednesday, August 22, 2018 - link
Yes, PPU/GPU-based PhysX was doomed to fail, since it didn't work on Radeons and consoles.
CPU-mode PhysX caught on, though. It's nowhere near as fast as GPU mode, but runs on pretty much all major platforms. You can have gameplay-related destructible environments running on the CPU, although you cannot have as many elements as in GPU mode, obviously.
Shadyghost - Wednesday, August 22, 2018 - link
"This obsession with purely visual eye candy is just an extension of the fad for crazy FPS rates. What matters is functionality, immersion is about a believable environment. That door might look awesome, but if it can't be opened (if it's just some fancy texture or whatever), then it's not a door. If an object can't be picked up, then it's irrelevant no matter its appearance. Immersion gets broken any time one tries to do something that isn't possible because the game world doesn't support it."I agree wholeheartedly. In fact, I would also add this is more of a benefit to a game when done right than a game without it but which has incredible graphics. A great early example of this for me is Sid Pirates Gold!. As a kid I played the crap out of that game. Not because of the graphics but because there was incredible depth built into incredible mechanics. Actual pirates with accurate flags you could read about in the manual, a living environment with different paths to success, historical ships and places etc. You grew old, could marry or died amassing treasure and retired into a hall of fame for crying out loud.
I really believe ready tracing will be an amazing addition to the realism of a game, but if a game doesn't have the foundation of what makes a game special (and not many do), it's really just another no man's sky. (I've buried over a hundred his into NMS wishing it had 10% of what makes Crusader Kings 2 amazing)
MadManMark - Friday, August 24, 2018 - link
>I would be far more impressed if falling rain (or from wherever else) could properly accumulate, flow, cause damage, make things rot, affect vehicles, freeze at night, form fog later, etc.
That is a game software criticism. This is a review of a hardware part and supporting API, and the two have nothing to do with each other.
Lolimaster - Wednesday, August 22, 2018 - link
People are dumb; they pre-ordered blindly. Hours later we knew that with RTX enabled you will get 25-40 fps at 1080p, near 60 fps in some games. And this is with a 2080 Ti; the 2080 gets 20% worse RTX performance, so you can imagine how mutilated it will be with the 2070 and below (720p 30-60 fps, LOL).
Actual non-RTX performance is around 15-20% over the previous gen.
The 1080 Ti can be found for $650 new, plenty of them, and many fanboys are paying $1200 for 20% over $650.
ROFL
piiman - Saturday, August 25, 2018 - link
"People are dumb, they pre-ordered blindy."the only thing dumb about it is you may want to return it later. This isn't a game its HARDWARE and thus returnable in most cases. There is no being blind it's having the insight to buy it and TEST it yourself and if it sucks you/I can return them. Personally I think everyone should do it and leave NV with a S**T ton of returns :)
Lolimaster - Wednesday, August 22, 2018 - link
Nvidia should add a new fancy moniker and put the 3080 Ti at $1999 and the 3080 at $1499.
Lolimaster - Wednesday, August 22, 2018 - link
Complex effects need exponential increases from gen to gen. Probably 8x the RTX performance by 2020.
This is just a proof of concept. Pascal 1.1 in performance.
Valantar - Wednesday, August 22, 2018 - link
"With real-time raytracing, games will be able to recreate realistic reflections as seen in bad photos like this one..."I might have low standards for humor, but that made my day.
12345 - Wednesday, August 22, 2018 - link
Assetto Corsa Competizione could be the one thing that pushes me to buy an RTX card. With over 5000 hours dumped into the first game, it might be worth the money. Totally not surprised you couldn't handle it without any experience; only one out of maybe 10 people that have tried my rig could even make it 1 lap at a decent pace without crashing, and he has track experience IRL.
invasmani - Wednesday, August 22, 2018 - link
This would be intriguing to know, actually. Ray tracing for 3D modeling scales very well across multiple GPUs. If real-time ray tracing in games can scale similarly, that bodes well. We'll be looking to GPUs with multiple cores akin to Ryzen in the not-so-distant future. Also, with ray tracing you don't need to render an entire scene at once.
In theory, a real-time ray tracing API could render different areas of the screen at different ray-traced frame rates. Toward the peripheral edges of the scene, the frame rate could be lower, even 15 FPS, but as you get closer to the center of the image it could dynamically scale higher, toward 60 FPS.
As an example, a 640x480 region might render at 60 FPS, then a frame around it out to 1024x768 would render at 45 FPS, a frame around that out to 1440x900 at 30 FPS, and the edge frame out to 1920x1200 at 15 FPS.
Much like this image, you'd see faster rendering toward the center of the frame, with FPS targets set per region and rendering priority emphasized at the center and reduced for the peripheral-vision portion of the image.
http://the-web-mechanic.com/twm/wp-content/uploads...
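The concentric-zone idea above can be sketched directly. The rectangles and frame-rate targets below are the commenter's example numbers, and this is just an illustration of the per-pixel lookup, not a real renderer:

```python
# Concentric "foveated" zones: each centered rectangle gets its own
# frame-rate target, densest at the middle of a 1920x1200 frame.
ZONES = [  # (width, height, target_fps), innermost first
    (640, 480, 60),
    (1024, 768, 45),
    (1440, 900, 30),
    (1920, 1200, 15),
]

def target_fps(x: int, y: int, frame_w: int = 1920, frame_h: int = 1200) -> int:
    """Return the frame-rate target for a pixel: the first (innermost)
    centered zone rectangle that contains it wins."""
    cx, cy = frame_w / 2, frame_h / 2
    for w, h, fps in ZONES:
        if abs(x - cx) <= w / 2 and abs(y - cy) <= h / 2:
            return fps
    return ZONES[-1][2]  # outside all zones: lowest priority

print(target_fps(960, 600))  # dead center -> 60
print(target_fps(10, 10))    # far corner  -> 15
```

A real implementation would more likely vary ray count or resolution per region rather than frame rate, but the budget-by-eccentricity lookup is the same shape.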
Badelhas - Thursday, August 23, 2018 - link
These prices are outrageous. I'm out.
MadManMark - Friday, August 24, 2018 - link
Bye, Felicia.
gregorythompson - Friday, August 24, 2018 - link
They've got the RTX 2080 Ti pre-orders on Amazon too: https://amzn.to/2MU3FQE
andersgg - Sunday, August 26, 2018 - link
Hey all.
It really looks promising for the GeForce GTX 10 series, regarding the prices. Do you guys have any idea what the prices will be when the RTX series ships? Because I might as well buy a GTX graphics card now, instead of waiting for Black Friday... Hoping to get some feedback.
Kindest regards
Anders from http://gamingmagasinet.dk/