NVIDIA PhysX - CUDA Power in Gaming
As its name implies, PhysX is a physics engine which enables real-time physics effects in games with the assistance of CUDA. This week, we cover PhysX's history, its application in today's games, and the performance advantage CUDA brings to PhysX. Read on to learn more about this breathtaking technology.
By Kenny Yeo
Physics at Lightspeed
If you are well into your teens, chances are you have at least a smattering of physics knowledge. You might also know that Earth, wondrously complex as it is, is strictly governed by certain laws of physics. So encompassing are these laws that it is even possible to predict, to a fair degree of accuracy, the height of the waves at your nearby beach 10 years from now.
While that might be possible, it is certainly not easy. Remember that in physics, formulae are king. That's right, we are talking about things like F = ma and E = mc², and these are only the simple ones. There are also infinitely more complex ones, involving differential calculus, trigonometry, summation and integration.
By now, you might be wondering what all of this has to do with games and, more importantly, graphics cards. Well, like the world we live in, game worlds are governed by their own laws of physics, which are growing more complex in the pursuit of realism. While they are not as complex as the real world, they are still complicated enough to bog down a traditional CPU. This is where the CPU needs help from the GPU to keep the game running smoothly and realistically, and that is where PhysX comes in.
As a brief introduction, PhysX is a middleware physics engine that enables real-time physics in games. Middleware means that game developers need not write their own physics code; they simply license the engine for use, saving time and effort. But what really makes PhysX stand out from its competitors is that it is primed to take advantage of the parallel processing power of graphics processors through CUDA. Doing so takes the strain of calculating complex physics equations and algorithms off the CPU, resulting in smoother gameplay and a better gaming experience.
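To give a sense of what using a licensed middleware engine looks like in practice, here is a rough sketch of the Ageia-era PhysX 2.x call pattern: create the SDK object, describe a scene, then step the simulation every frame and read back the results for rendering. We are reconstructing this from the old SDK's documented usage, so treat the exact class names and signatures as approximate rather than authoritative.

```cuda
// Illustrative sketch of the middleware call pattern (Ageia-era PhysX 2.x API).
// Names and signatures are approximate and differ between SDK releases.
#include <NxPhysics.h>

int main()
{
    // The game links against the licensed SDK instead of writing its own solver.
    NxPhysicsSDK* sdk = NxCreatePhysicsSDK(NX_PHYSICS_SDK_VERSION);
    if (!sdk) return 1;

    // Describe and create a scene; gravity and other global settings live here.
    NxSceneDesc sceneDesc;
    sceneDesc.gravity = NxVec3(0.0f, -9.81f, 0.0f);
    NxScene* scene = sdk->createScene(sceneDesc);

    // Per frame: step the simulation, then fetch the results for rendering.
    const float dt = 1.0f / 60.0f;
    for (int frame = 0; frame < 600; ++frame)
    {
        scene->simulate(dt);
        scene->flushStream();
        scene->fetchResults(NX_RIGID_BODY_FINISHED, true); // block until the step completes
        // ... update game object transforms from their actors' poses ...
    }

    sdk->releaseScene(*scene);
    NxReleasePhysicsSDK(sdk);
    return 0;
}
```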
Take the GeForce GTX 280 as an example: with its vast array of shader processors operating in parallel, it can run far more threads simultaneously than an average CPU ever could, which is what makes GPUs so well suited to tasks like physics processing.
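To give a rough idea of why this parallelism matters, consider updating the positions and velocities of tens of thousands of particles (debris, smoke, cloth vertices) every frame. On a CUDA-capable GPU, each particle can be handed to its own thread. The kernel below is only our simplified sketch of that idea, not how PhysX itself is implemented, and the function names and block sizes are our own.

```cuda
#include <cuda_runtime.h>

// One thread per particle: integrate velocity and position under gravity.
// A simplified sketch of GPU-side physics, not the actual PhysX solver.
__global__ void integrateParticles(float3* pos, float3* vel, int n, float dt)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= n) return;

    vel[i].y += -9.81f * dt;    // apply gravity
    pos[i].x += vel[i].x * dt;  // advance position
    pos[i].y += vel[i].y * dt;
    pos[i].z += vel[i].z * dt;
}

// Host-side launch: enough 256-thread blocks to cover every particle,
// which the GPU then schedules across its shader processors in parallel.
void stepParticlesOnGPU(float3* d_pos, float3* d_vel, int n, float dt)
{
    int threads = 256;
    int blocks  = (n + threads - 1) / threads;
    integrateParticles<<<blocks, threads>>>(d_pos, d_vel, n, dt);
    cudaDeviceSynchronize();
}
```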
Right now, all GeForce 8 and above graphics cards support PhysX out of the box, so long as you've downloaded and installed the latest drivers. A couple of years ago, however, this wasn't the case. Let's step back a little to see how PhysX has progressed to its current stage and what its potential is, based on our testing.
PhysX: Then & Now
PhysX began nearly seven years ago in 2002, when a company called Ageia was founded. Their business was built around PhysX, which at that point in time referred both to their physics processing unit (PPU) and to the middleware PhysX engine that interfaced between the PPU hardware and games designed to take advantage of it. They also licensed their PhysX SDK, which could handle an array of physics effects like particles, smoke, dust, fluids and even hair, to game developers. This allowed developers to create worlds where objects would react and interact realistically with one another.
The very first Ageia PPUs were designed to work cooperatively with the CPU and GPU. With the CPU orchestrating the task, the GPU would render the graphical aspects of the game, while the PPU would take care of all the number crunching required by the in-game physics effects.
The PPU was made available in mid-May 2006 as a separate add-in card installed in your system's motherboard. Although the initial demos with the physics engine were promising (check out the CellFactor demos online), PhysX never really took off back then: an extra card just to process physics was a luxury most couldn't afford, and with few games supporting PhysX, adoption rates remained poor.
This is the ASUS PhysX P1 GRAW Edition, one of the first commercially available PPUs. Back then, PhysX was still in its infancy, and the extra effects you got were nowhere near as spectacular as the ones you'll see later.
Fortunately, however, NVIDIA saw potential in Ageia, and with PhysX steadily being implemented in a few big-ticket games (one of the first prominent titles to make use of it was Unreal Tournament 3), it acquired the outfit early last year in February. The first thing NVIDIA did after acquiring Ageia was to port PhysX over to CUDA, enabling hardware acceleration of physics in PhysX-supported games without the need for a separate PPU.
Months later in August 2008, NVIDIA finally integrated PhysX into its CUDA framework, allowing all GeForce 8 and above graphics cards to support PhysX processing. This made PhysX add-on cards immediately redundant for users of NVIDIA graphics cards.
PhysX in Action
Hardware is only as good as the software that supports it, and with NVIDIA acquiring PhysX and implementing it in its CUDA framework, support for PhysX grew as well, as evidenced by the increasing number of games adopting the engine.
One of the most significant titles, however, has to be the recently released Mirror's Edge. Touted as the first game to take full advantage of PhysX, it uses GPU-accelerated physics to simulate glass, smoke, wind and cloth so that they behave as they would in the real world.
Thanks to the wonders of PhysX, the man being kicked here by the game's heroine, Faith, falls to the ground realistically, unlike games of old where such animations were mostly scripted.
Take glass, for instance: with PhysX enabled, glass in the game reacts realistically and falls or shatters as it should when subjected to an external force. Not only that, it remains in the game world and continues to react and respond to the movements of characters. With PhysX disabled, on the other hand, a scripted effect is triggered and the glass simply shatters and disappears from the game world.
PhysX also helps smoke and wind behave more realistically and accurately. When it is enabled, incoming trains and helicopter gunships cause banners to sway and stir up dust and debris, and running through smoky environments leaves a trail in the smoke, just as it would in the real world. With PhysX disabled, the smoke is not physically simulated but is instead a pre-scripted animation that doesn't interact with anything else. Likewise, the wind caused by oncoming trains and patrolling helicopters has no effect on the environment.
It is this unscripted interaction between the various elements of the game world that makes PhysX such an interesting ingredient in a more immersive experience. The best way to appreciate the magic of PhysX is to see it for yourself, so enjoy the videos below:
That is not to say Mirror's Edge is the only game that makes heavy use of PhysX. Cryostasis, a first-person horror shooter from Russian developer Action Forms, is another, and its key highlight is that it is the first game to employ PhysX to realistically render water. Watch the demo below and be prepared to be amazed by how realistic and true-to-life the water effects look:
Getting the Best out of PhysX
As the videos quite clearly show, PhysX breathes life into a game, adding a wealth of visual effects and taking realism to a whole new level. However, as we've mentioned earlier, physics involves plenty of formulae, equations and algorithms, and generating the effects we've seen requires a great deal of number crunching.
This is where CUDA comes in. Through CUDA, graphics cards, which are highly suited to such calculations thanks to their massively parallel architecture, can take over the tedious task of physics processing and effects rendering. The end result, as the graphs below show, is faster, smoother gameplay. Without GPU acceleration, on the other hand, the game becomes unplayable, as the CPU simply cannot handle the sheer amount of computation needed to achieve the same effects and interaction.
(Performance graphs comparing GPU-accelerated and CPU-only PhysX. Source: NVIDIA)
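To make the contrast concrete, here is the CPU-side equivalent of the per-particle update sketched earlier: a single core stepping through every object in sequence, every frame. This is our own simplified illustration rather than NVIDIA's code, but it shows why the workload scales so poorly on the CPU once effect counts climb into the hundreds of thousands.

```cuda
#include <cuda_runtime.h>  // only for the float3 vector type when built with nvcc

// CPU fallback: one core walks every particle in sequence, every frame.
// At hundreds of thousands of particles this loop alone can blow the frame budget,
// whereas the GPU version above hands each particle to its own thread.
void stepParticlesOnCPU(float3* pos, float3* vel, int n, float dt)
{
    for (int i = 0; i < n; ++i)
    {
        vel[i].y += -9.81f * dt;    // gravity
        pos[i].x += vel[i].x * dt;  // integrate position
        pos[i].y += vel[i].y * dt;
        pos[i].z += vel[i].z * dt;
    }
}
```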
To verify that NVIDIA's claims were indeed accurate, we carried out some tests of our own as well.
Windows Vista - Mirror's Edge Results
Using our trusty Vista system, we decided to find out for ourselves just how much of a performance boost we could get in Mirror's Edge by letting the GPU handle PhysX processing, or rather, how much performance we would lose if we didn't. Our test system had the following specifications:
Windows Vista SP1 Test System
- Intel Core 2 Extreme QX6850 (3.00GHz)
- Gigabyte X38T-DQ6 motherboard
- 2 x 1GB DDR3-1333 Aeneon memory in dual channel mode
- Seagate 7200.10 200GB SATA hard drive
- Windows Vista Ultimate with SP1
We conducted the test using two NVIDIA GeForce cards, the high-end Zotac GeForce GTX 285 AMP! Edition and the mid-range Gigabyte GeForce 9800 GT. ATI was represented by the speedy PowerColor PCS+ HD 4870. The cards tested and their driver versions are listed below:
- Zotac GeForce GTX 285 AMP! Edition 1GB GDDR3 (ForceWare 181.20)
- PowerColor PCS+ HD 4870 1GB GDDR5 (Catalyst 8.12)
- Gigabyte GeForce 9800 GT (ForceWare 181.20)
First we ran Mirror's Edge on High settings with hardware-accelerated PhysX turned off, meaning all PhysX processes were handled solely by the CPU.
As expected, all cards gave us rather poor performance. With the CPU fully taxed and bottlenecking performance, only the Zotac GeForce GTX 285 AMP! Edition managed frame rates decent enough for play. The relative standings of the GeForce GTX 285 and the Radeon HD 4870 were in line with expectations, while the GeForce 9800 GT performed better than expected (even though it still ranked last). Overall, the situation is far from ideal for fluid gameplay.
Using the same High settings in Mirror's Edge, we then enabled hardware-accelerated PhysX:
The results were astounding: an improvement of about 300%! Since ATI cards do not yet support PhysX, the Radeon HD 4870 was dreadfully slow compared to its NVIDIA rivals. Even the mid-range GeForce 9800 GT handily trumped it, and by a fair margin too, we might add. Evidently, a GPU with its massively parallel architecture is much better suited to processing PhysX than the CPU.
Looking Ahead: The Future of PhysX
In the beginning, PhysX was regarded mostly as a product of great promise, but little more. Most developers were impressed with what it could do, but the cost of an extra PPU meant that adoption rates were rather dreadful. Because of that, and despite PhysX's obvious benefits, not many game developers were interested in implementing it in their games.
Now that it has been ported to work with CUDA and CUDA-enabled GPUs, however, anyone who has a GeForce 8 or newer graphics card can easily enjoy the increased levels of realism and interaction that PhysX brings. More importantly, what this means for developers is that there is suddenly a large population of gamers with the necessary hardware to make PhysX work well on their systems.
Hardware acceleration for PhysX is supported by any CUDA-enabled GeForce graphics card. This means any GeForce 8 series or newer graphics card will work, including even integrated graphics chipsets such as the GeForce 9300 mGPU.
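If you are curious whether your own system qualifies, a quick way to check for a CUDA-capable GPU is to query the CUDA runtime, as in the short sketch below. This is not how the PhysX driver itself detects hardware support; it is simply an illustration using the standard CUDA runtime API.

```cuda
#include <cstdio>
#include <cuda_runtime.h>

// Lists the CUDA-capable GPUs in the system; any such GeForce part can,
// in principle, accelerate PhysX once the appropriate drivers are installed.
int main()
{
    int count = 0;
    if (cudaGetDeviceCount(&count) != cudaSuccess || count == 0) {
        std::printf("No CUDA-capable GPU found.\n");
        return 1;
    }
    for (int i = 0; i < count; ++i) {
        cudaDeviceProp prop;
        cudaGetDeviceProperties(&prop, i);
        std::printf("Device %d: %s (compute capability %d.%d, %d multiprocessors)\n",
                    i, prop.name, prop.major, prop.minor, prop.multiProcessorCount);
    }
    return 0;
}
```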
Recently, gaming powerhouse Electronic Arts signed a deal with NVIDIA that allows it to use PhysX at all of its studios worldwide. Tim Wilson, Chief Technology Officer of EA's Redshore Studio, had this to say: "PhysX is a great physics solution for the most popular platforms, and we're happy to make it available for EA's development teams worldwide."
Not to be outdone, 2K Games, famous for its sports titles, has also followed suit, with Technology Director Jacob Hawley saying that they were impressed with the PhysX engine and what it could deliver.
Arguably the biggest name in games, EA has licensed PhysX for use at all their studios worldwide.
Certainly, the future looks bright for PhysX, as more developers are jumping onboard and more games are implementing it. And despite NVIDIA's current "monopoly" over hardware-accelerated PhysX, the company has said it is willing to extend its know-how so that ATI's Radeon series of cards could enable hardware acceleration of PhysX as well. This would be a win-win situation: ATI's users would get to enjoy the PhysX magic, while for NVIDIA, the wider availability of PhysX would spur even more developers to implement it in their games.
In light of recent developments, we think it's not an overstatement to say that right now, all signs point towards the healthy growth of PhysX.