Mantle, DX12 could boost multi-GPU performance—with help from game devs
As part of the fallout from the whole GTX 970 4GB saga around graphics cards right now, it has emerged that one of the key frustrations of multi-GPU gaming could be on its way out.
AMD graphics guru Robert Hallock took to Twitter to defend the company’s dual-GPU R9 295X2 graphics card. After the backlash against Nvidia’s GTX 970 and its ‘3.5GB or 4GB?’ video memory debacle, some Nvidia supporters began railing against the R9 295X2, saying that it wasn’t really an 8GB card as it could only ever use 4GB at a time.
“Not true at all,” was Hallock’s tweeted response.
With multi-GPU cards and systems, the temptation has been to double up the available video memory and refer to it as one giant frame buffer. In the case of the R9 295X2, each of its Hawaii GPUs has access to 4GB of GDDR5 memory, so it has been referred to as an 8GB card in total.
Traditionally, though, that is not how AMD’s CrossFire or Nvidia’s SLI has worked.
Alternate reality
When a system uses two or more GPUs, they typically work via alternate-frame rendering (AFR), where one GPU renders an entire frame and the next GPU renders the following one, taking it in turns. Each GPU, though, needs everything loaded into its own frame buffer as though it were the only GPU running the game.
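If you want to picture how that plays out, here’s a minimal sketch in C++ (purely illustrative, not real driver or engine code) of the round-robin way AFR hands out whole frames:

```cpp
// Minimal sketch of alternate-frame rendering (AFR) scheduling.
// "Gpu" here is a stand-in for a real graphics device; with classic
// CrossFire/SLI each GPU keeps its own full copy of the scene data.
#include <cstdio>
#include <vector>

struct Gpu {
    int id;
    // In a real AFR setup every GPU holds a complete copy of textures,
    // geometry and render targets in its own local memory.
    void renderFrame(int frame) const {
        std::printf("Frame %d rendered on GPU %d\n", frame, id);
    }
};

int main() {
    std::vector<Gpu> gpus = { {0}, {1} };   // e.g. the two Hawaii GPUs on an R9 295X2

    for (int frame = 0; frame < 6; ++frame) {
        // AFR: the GPUs take it in turns, one whole frame each.
        gpus[frame % gpus.size()].renderFrame(frame);
    }
}
```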
If they shared the game’s data across their separate buffers, they would have to keep chatting away across the PCIe bridge.
“Sure PCIe is fast,” explains Hallock, “but nowhere near as fast as going to local video memory.”
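To put rough numbers on that gap, the sketch below compares approximate headline figures (my own ballpark spec-sheet values, not benchmarks) for a PCIe 3.0 x16 link against the local GDDR5 bandwidth of a Hawaii GPU:

```cpp
// Rough comparison of PCIe link bandwidth vs local GDDR5 bandwidth.
// Figures are approximate headline numbers, not measured results.
#include <cstdio>

int main() {
    const double pcie3x16GBps = 15.75;    // PCIe 3.0 x16, one direction (approx.)
    const double hawaiiGddr5GBps = 320.0; // Hawaii-class GDDR5 bandwidth (approx.)

    std::printf("Local video memory is roughly %.0fx faster than the PCIe link\n",
                hawaiiGddr5GBps / pcie3x16GBps);
}
```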
In this context, then, AMD’s top-end dual-GPU card is still a 4GB card. But not if you start using the low-level APIs like AMD’s own Mantle and the forthcoming DirectX 12.
Because these two APIs are designed to let developers code ‘close to the metal,’ they can get right into the heart of the GPUs and their frame buffers. With Mantle and DX12, devs aren’t as limited as they currently are in how they optimise their games and game engines for multi-GPU systems.
With this new, tighter control over the hardware and resources at a game engine’s disposal, you could then pool the entirety of two graphics cards’ frame buffers and have your game reference them as one giant chunk of video memory.
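Here’s a toy model of why that matters for usable memory (illustrative numbers only, no real API calls): mirrored AFR buffers top out at a single card’s 4GB, while explicitly placed resources can, in principle, add up across both GPUs.

```cpp
// Toy model of usable VRAM under the two multi-GPU memory schemes described above.
#include <algorithm>
#include <cstdio>
#include <numeric>
#include <vector>

int main() {
    std::vector<int> perGpuMemoryGB = {4, 4};   // e.g. two Hawaii GPUs with 4GB each

    // Classic AFR: every GPU mirrors the full working set, so the usable
    // pool is only as big as a single card's frame buffer.
    int mirrored = *std::min_element(perGpuMemoryGB.begin(), perGpuMemoryGB.end());

    // Explicit multi-GPU control (Mantle / DX12 style): the engine decides
    // which resources live on which GPU, so in principle the buffers add up.
    int pooled = std::accumulate(perGpuMemoryGB.begin(), perGpuMemoryGB.end(), 0);

    std::printf("Mirrored (AFR) usable memory: %d GB\n", mirrored);
    std::printf("Pooled (explicit) usable memory: %d GB\n", pooled);
}
```

Run it and the mirrored figure comes out at 4GB while the pooled figure hits 8GB, which is exactly the distinction Hallock is drawing.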
“Gamers believe that two 4GB cards can’t possibly give you 8GB of useful memory,” says Hallock. “That may have been true for the last 25 years of PC gaming, but that’s not true with Mantle, and it’s not true with the low-overhead APIs that follow in Mantle’s footsteps.”
Which all sounds great, and should give multi-GPU systems more of a boost than simply doubling up the graphics processor itself. That will also help when it comes to dealing with the sorts of massive resolutions and textures we want in the games of the near future.
The one huge caveat here, though, is that it all comes down to just how ‘close to the metal’ those devs want to get when it comes to optimising their games for multi-GPU systems.
The option, it seems, will be there to squeeze every last drop of potential performance out of a dual-GPU setup, but it's how simple this is to implement that will decide whether it gets used to its fullest.
As someone who runs both CrossFire and SLI systems, I can tell you that support from games can be patchy at best and non-existent at worst. If devs have to put in more work to assign individual objects to a specific GPU, for example, I doubt many will really go the extra mile for such a small niche of the game-playing world.