Intel hits out at RTX: 'We're definitely competitive or better than Nvidia with ray tracing hardware'
Is Intel set to beat Nvidia at its own game? At least for the time being, possibly.
Intel is confident it has designed its upcoming A770 and A750 graphics cards to match Nvidia's RTX 30-series GPUs, or perhaps even surpass them, in ray tracing performance.
In the run-up to the release of Intel's Arc A770 and A750 graphics cards, I sat down with Ryan Shrout and Tom Petersen from the Intel Graphics team to talk performance and expectations for its upcoming cards. For the most part, Intel expects performance to match Nvidia's RTX 3060 in games running DirectX 12, but you might be surprised to hear that Intel is also extremely confident in the efficiency of its first-generation ray tracing acceleration units.
"The RTU [ray tracing unit] that we have is particularly well suited for delivering real ray tracing performance," Petersen says. "And you'll see that when you do ray tracing on comparisons with an [RTX] 3060 versus A750 or A770, we should fare very, very well."
Such a strong claim for a first-generation ray tracing solution is enough to grab my attention, and I push for a little further clarification on what sort of performance we're looking at.
"Yeah, we're definitely competitive or better than Nvidia with ray tracing hardware."
Well, okay. So that means Intel believes its ray tracing acceleration is capable of matching Nvidia's 2nd Generation RT Cores, at the very least. Quite a feat if Intel's solution can live up to the hype during testing. Though I would assume Petersen's comments should be taken as a comparison of like-for-like RT performance, meaning the A770 is competitive with the RTX 3060.
Nvidia's 2nd Gen RT Cores are the more impressive solution today, and AMD's first-generation Ray Tracing Accelerators aren't quite up to par. If Petersen is right, Intel's RTU would beat both gaming GPU goliaths' ray tracing acceleration from the get-go, or at least until their next-gen GPUs arrive.
It comes down to a few key technologies, Petersen tells me, though he's not supposed to divulge the information until a video explainer drops on the Intel Graphics YouTube channel later this week. He does, anyway.
I've only the CliffsNotes version and not a complete whitepaper, but one important piece of the puzzle behind this touted performance is a BVH cache within the GPU. This is used solely to accelerate BVH traversal; BVH stands for bounding volume hierarchy, a tree of nested bounding boxes, and it's a cornerstone of how modern games implement ray tracing in real time.
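To give a sense of why a cache helps here, below is a minimal, illustrative sketch of BVH traversal in C++. The structure and names are my own invention, not Intel's design: finding what a ray hits means fetching node after node from memory in a tight loop, and that node traffic is exactly what a dedicated on-die cache would absorb.

```cpp
// A toy binary BVH traversal, assuming axis-aligned bounding boxes.
// Illustrative only; not Intel's implementation.
#include <algorithm>
#include <cstdint>
#include <stack>
#include <utility>
#include <vector>

struct Ray { float origin[3]; float invDir[3]; float tMax; };

struct BvhNode {
    float boundsMin[3], boundsMax[3];
    int32_t left;      // index of left child, or -1 for a leaf
    int32_t right;     // index of right child
    int32_t primFirst; // first primitive index (leaves only)
    int32_t primCount; // number of primitives (leaves only)
};

// Slab test: does the ray pass through this node's bounding box?
static bool intersectAabb(const Ray& ray, const BvhNode& node) {
    float tNear = 0.0f, tFar = ray.tMax;
    for (int axis = 0; axis < 3; ++axis) {
        float t0 = (node.boundsMin[axis] - ray.origin[axis]) * ray.invDir[axis];
        float t1 = (node.boundsMax[axis] - ray.origin[axis]) * ray.invDir[axis];
        if (t0 > t1) std::swap(t0, t1);
        tNear = std::max(tNear, t0);
        tFar  = std::min(tFar, t1);
    }
    return tNear <= tFar;
}

// Walk the tree iteratively, collecting candidate primitives.
// Every iteration is another node fetch from memory.
void traverse(const std::vector<BvhNode>& nodes, const Ray& ray,
              std::vector<int32_t>& hitPrims) {
    std::stack<int32_t> todo;
    todo.push(0); // start at the root
    while (!todo.empty()) {
        const BvhNode& node = nodes[todo.top()];
        todo.pop();
        if (!intersectAabb(ray, node)) continue; // ray misses this subtree
        if (node.left < 0) { // leaf: queue its primitives for exact testing
            for (int32_t i = 0; i < node.primCount; ++i)
                hitPrims.push_back(node.primFirst + i);
        } else {             // interior: descend into both children
            todo.push(node.left);
            todo.push(node.right);
        }
    }
}
```

Every ray repeats that loop, and rays from neighbouring pixels tend to touch the same upper levels of the tree, so keeping those hot nodes on-die is presumably the win Petersen is alluding to.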
The other important piece is a thread sorting unit, though I'm lighter on the details for this one. Broadly, it plays a big role in how ray tracing work is organised and scheduled on an Arc GPU.
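Intel hasn't detailed the unit publicly yet, but the general idea behind sorting work in a ray tracing pipeline is well established: secondary rays scatter unpredictably, so adjacent GPU threads end up needing different hit shaders, and regrouping them before shading keeps the hardware's wide vector lanes on the same code path. Here's a toy, CPU-side sketch of that regrouping step; the names are hypothetical and this is my reading of the concept, not Intel's documented design.

```cpp
// Toy illustration of thread sorting: group ray hits by the shader
// they need next, so threads running together take the same branch.
// Hypothetical names; not Intel's actual design.
#include <algorithm>
#include <cstdint>
#include <vector>

struct RayHit {
    uint32_t rayId;    // which ray this hit belongs to
    uint32_t shaderId; // which hit shader this ray needs next
};

// Regroup hits so rays wanting the same shader sit adjacent,
// mimicking what a hardware sorting stage would do before dispatch.
void sortByShader(std::vector<RayHit>& hits) {
    std::stable_sort(hits.begin(), hits.end(),
        [](const RayHit& a, const RayHit& b) {
            return a.shaderId < b.shaderId;
        });
}
```

On real hardware this would presumably happen per batch in silicon rather than as one big sort, but the payoff is the same: fewer divergent lanes when the hit shaders actually run.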
Petersen says both technologies help Intel put its best foot forward in ray tracing performance, but there was another important consideration for ray tracing on Arc: making sure these technologies required little to no developer engagement in order to function. Essentially, they have to be as plug-and-play as possible with what's already out there.
"We tried to make ours generic because we know that we're not the established GPU vendor, right. So all of our technology pretty much has to work with low dev rel (developer relations) or dev tech engagement. And so things like our cache structure and our hierarchy, you know, our thread sorting unit, which are the two techs that we're going to talk about in this video, they work without any dev rel or dev tech work."
When I asked Petersen why Intel's graphics team felt ray tracing performance was so important to get right in the first generation, noting that I'd expected this sort of surplus acceleration to lag behind rasterized rendering capability, his response was surprisingly candid.
"I'm kind of torn on this one. Because to your point, there's some things that you would normally expect to lag. And the reason you would expect them to lag is because they're hard, and they need to come after you have a solid base. But for better or worse, we just said we need all these things. And so we did XeSS, we did RT, we did AV1, we kind of have a lot on the plate, right? I think we've learned that maybe, you know, in this case, we have a lot on the plate and we're gonna land all the planes, and that's taken us longer than we would have expected.
"So maybe next time we would have broken this up a little bit differently."
Nobody said creating a gaming graphics card would be easy, I suppose. And Intel will have to compete with very complete, polished product stacks from Nvidia and AMD if it hopes to break into the lucrative GPU market. From drivers and upscaling to hardware-based acceleration, Intel needs to master it all, and swiftly: not only to match Nvidia and AMD's GPUs today, but also their next-generation GPUs, which could be only a matter of months away.