Ray tracing is a term you’re going to hear a lot now that Nvidia has announced professional and consumer graphics cards that use this technique to produce some of the most life-like simulations possible in games and other animations. So, what is ray tracing exactly, and how does it differ from current graphics rendering techniques?
The oversimplified answer is that ray tracing models the behavior of light in real time as it intersects objects in a scene.
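To make that concrete, the core step of ray tracing is firing a ray from the camera through each pixel and finding where it first hits something in the scene. A minimal, illustrative sketch of that intersection test, using a single sphere as the scene (the function name, coordinates, and values here are hypothetical, not from any real renderer):

```python
import math

def ray_sphere_hit(origin, direction, center, radius):
    """Return the distance along the ray to the nearest sphere hit, or None."""
    # Solve |origin + t*direction - center|^2 = radius^2 for t (a quadratic in t).
    oc = tuple(o - c for o, c in zip(origin, center))
    a = sum(d * d for d in direction)
    b = 2.0 * sum(o * d for o, d in zip(oc, direction))
    c = sum(o * o for o in oc) - radius * radius
    disc = b * b - 4 * a * c
    if disc < 0:
        return None  # the ray misses the sphere entirely
    t = (-b - math.sqrt(disc)) / (2 * a)
    return t if t > 0 else None

# A ray fired straight down the z-axis at a unit sphere 5 units away:
print(ray_sphere_hit((0, 0, 0), (0, 0, 1), (0, 0, 5), 1.0))  # hits at t = 4.0
```

A real ray tracer repeats this test against every object for millions of rays per frame, then spawns more rays for reflections and shadows, which is where the huge computational cost comes from.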
It’s a feature that could lead to spectacular new graphics, but it has been very hard to pull off because of its computational demands. Nvidia is tackling several of the issues facing ray tracing with a new graphics architecture known as Turing.
First, it’s tackling the problem of ushering in the next generation of computer graphics. Ray tracing is only one of many rendering techniques, but it’s where Nvidia is pushing hard because it’s especially suited for adding realistic, real-time lighting and effects.
The second issue is computational cost: Nvidia’s top Turing card for professional production costs $10,000, and the hardware required for ray tracing before Turing was costlier still. What’s new here is that Nvidia is ready to bring ray-tracing tech to consumer-level GPUs; that hasn’t been done before.
Nvidia’s current graphics tech — and most of the industry’s — simulates light and how light behaves in a given scene in a much simpler way, using something called rasterization. Like a painter painting layers upon a canvas, objects are rendered from back to front, so those in the front obscure the objects in the back.
This makes it hard to model a mirror, for example, because rasterization techniques can’t track and model light itself. Rasterization is widely used for real-time rendering precisely because current-generation hardware can’t keep up with the demands of simulating light through a complex scene in motion (say, in a game or 3D animation).