I have to say, I bought into the hype that surrounded Larrabee. I thought it would be awesome. So excuse my disappointment now that they've finally released the full Larrabee instruction set. See: Larrabee @ GDC (rasterization) and an article over at Dr. Dobb's Journal on Larrabee.
One of the first bits of production coding I did was optimizing rendering code for MMX/3DNow!. It was a painful experience: the code I'd hand-optimized took months to write, and by that time computers and compilers had gotten so much faster that it would have been easier just to buy a faster PC. I stopped all pretense of writing assembly code for any practical purpose around the time the eighth version of the Intel C++ compiler came out. It did auto-vectorization, and it did it... not badly. (This translates to f$#in awesome for anyone who dealt with 'vectorizing compilers' before.)
Having a look at Larrabee, it seems immediately clear to me that it's just another Itanium (the wonder-chip that wasn't). The instruction set (gather/scatter is cool!) is far too complex for a compiler to do well with, so to get good performance you'll need to go down to the metal. And not even Carmack does that; he hires someone else to do it. I don't think there will be many people taking advantage of this technology. Nevertheless, if Intel and Microsoft get together to write the DX software renderer in Larrabee assembly, we might end up with half-decent low-power Intel GPUs in laptops. (One presumes Michael and Tom can do a decent job of that!) At least that would be nice.
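For anyone who hasn't run into gather/scatter: in scalar C it amounts to the loop below (a toy emulation of mine, not the actual instruction). Larrabee's trick is doing all the lanes as a single vector instruction, which is what makes irregular memory access patterns vectorizable at all, and also exactly the sort of thing compilers struggle to find on their own.

```c
/* Scalar emulation of a vector gather (illustrative only).
   A gather loads one element per lane from arbitrary indices in a base
   array; scatter is the mirror image, storing to arbitrary indices.
   Larrabee does all 16 lanes in one instruction. */
#define LANES 16

void gather(const float *base, const int *indices, float *result, int lanes)
{
    for (int i = 0; i < lanes; i++)
        result[i] = base[indices[i]];  /* indexed load, one per lane */
}
```

Compare that with the plain unit-stride loop a vectorizing compiler handles happily: here the index array is only known at runtime, so the compiler can't prove anything useful about it, and the programmer has to reach for the instruction explicitly.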