There have been a few rounds of articles published in the past week or so on the topic of GPUs and real-time ray tracing, and I just sat down and went through them this morning as I had my coffee. In the interests of being a little more “bloggish” with Kit and of sharing information, I wanted to link these up for you guys to look at. In return, if you have any good reading recommendations on the topic (papers, blog posts, articles, etc.), please drop them in the comments thread.
Also note that this short list isn’t meant to be comprehensive, composed as it is of more recent articles on the topic. If you go through these, you’ll find links to older stuff (like the PC Perspective coverage of Daniel Pohl’s work at Intel, which I think I’ve linked previously).
Here are the articles in chronological order, with some evaluative comments to follow:
- Real Time Ray-Tracing: The End of Rasterization? [Research@Intel Blog]
- Real-Time Ray Tracing: Holy Grail or Fool’s Errand? – Page 1 [Beyond3D]
- The Problems with GPGPU [Research@Intel Blog]
- More on the Future of Ray-Tracing – from Alesh Jancarik [Research@Intel Blog]
- Clearing up the confusion over Intel’s Larrabee [Ars Technica]
Let’s walk through this list real quick, and I’ll give you my take.
The first article has some serious deficits, most of which are pointed out in the comments attached to it. The article was written to publicize what Intel is doing with multicore and RTRT, and in that respect it succeeded. However, the author overstated the case against the GPU, which is a problem because Intel is currently developing a GPU. So you’ll see that another Intel employee who’s closer to the company’s GPU project later contributed a corrective piece (article #4) that reemphasizes the importance of the discrete GPU while framing the issue more subtly as one of “which approach to mixed rasterization and RTRT will win: NVIDIA’s or ours?”.
The second article is a Beyond3D piece that’s meant as a response to and correction of the Intel-generated RTRT hype. Just as the Intel guys’ allegiances are pretty clear, it has been my experience that the B3D crowd in general is fairly loyal to NVIDIA (more precisely, they’re coders who are loyal to top-performing GPU hardware, and that has meant “NVIDIA” for some time now), and many of them seem to be really sold on GPGPU.
I have pretty serious disagreements with the B3D crowd and with NVIDIA about GPGPU, which is why I linked the third article from the Intel blog. I won’t go into GPGPU vs. Larrabee here because I’m not ready to have that argument outside of the one bulletin board discussion on the topic that I’ve participated in, but I do endorse the Intel piece above because it reflects not only what I think about the technical merits of GPGPU but also what I’ve heard from some insiders (not from Intel) who’ve been involved in trying to make GPGPU work.
Finally, I linked the last article for some background on the whole issue of Larrabee and RTRT.
Ultimately, the discrete GPU space (HPC applications included) will turn into a contest between NVIDIA on one side and Intel’s Larrabee (i.e., multicore x86) on the other. I’m with Charlie at the Reg, who suggested somewhere recently (I think it was an RWT board post) that the first generation of Larrabee may not impress in terms of graphics, but the second will.
At any rate, you can read almost all the coverage of this topic through the prism of this GPU vs. multicore x86 debate by locating the author on one side or the other. Until someone convinces me otherwise, I’m in the “multicore x86 wins in the long-term” camp.