oldStatesman said:
ATI and other graphics vendors have developed hardware T&L emulations - they are close to nVidia's algorithms, but nVidia's are proprietary (like Windows is proprietary to MS), so the others simulate them as best they can. That is why ATI and other cards sometimes have issues dealing with T&L.
Sorry, but no - hardware transform & lighting works about the same on all graphics cards that support it. Hardware T&L does all the basic 4x4 matrix operations (rotating, translating, scaling, shearing, projecting etc.) on the vertex data (the "corner points" of the triangles 3D stuff is built from) that are needed to wrangle the 3D coordinates into 2D triangles - on the graphics card instead of on the CPU. But those operations are the basics of 3D graphics, and there's only one way to do them - nothing proprietary there, just grab a good book on 3D graphics and look them up.
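To make that concrete, here's a minimal sketch of the per-vertex math we're talking about - a 4x4 matrix multiplied with a homogeneous (x, y, z, w) vertex. The function name and the translation matrix are just illustrative, not from any particular API; hardware does exactly this, only massively faster:

```python
# Toy version of the per-vertex work hardware T&L does: multiply a 4x4
# transformation matrix (row-major, list of 4 rows) with a homogeneous
# vertex (x, y, z, w). Every transform in L3's list - rotate, translate,
# scale, shear, project - is just a different matrix fed to this.

def transform_vertex(m, v):
    """Return m * v for a 4x4 matrix m and a 4-component vertex v."""
    return tuple(sum(m[row][col] * v[col] for col in range(4))
                 for row in range(4))

# Standard 4x4 affine form of a translation by (2, 3, 4).
translate = [
    [1, 0, 0, 2],
    [0, 1, 0, 3],
    [0, 0, 1, 4],
    [0, 0, 0, 1],
]

print(transform_vertex(translate, (1, 1, 1, 1)))  # (3, 4, 5, 1)
```

Nothing vendor-specific in there - which is the whole point: every card that claims hardware T&L has to produce the same numbers.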
Of course, there's more than one way to do the same transformation *FAST*, and many of those are probably proprietary, but the result needs to be the same so you don't get a weird polygon mess on the screen...
It's just that those operations can be done much faster on specialized hardware than on a general-purpose CPU (well, having SSE in your CPU helps somewhat, but that's still not nearly the same)...
Graphics cards without T&L just paint (textured) triangles into the screen buffer and rely on the CPU for the number crunching, while T&L cards only need the CPU to feed them the raw 3D data and do the heavy mathematics themselves, freeing up a lot of CPU time for other tasks.
Also, stuff like Pixel and (to a lesser extent) Vertex Shaders (small programs that basically run on the data of each pixel/vertex, for stuff like reflections in the water or tree-swaying animations) can't practically be done on the CPU, since the graphics card can run many of those in parallel, whereas the CPU would have to do them one after another...
(There's a difference between running up to 24 such programs in parallel on current high-end cards and running one after another... yes, there's software emulation for shaders, but that's mainly used by game and graphics hardware developers to make sure the hardware gets it right, not to actually play the games...)
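The reason shaders parallelize so well is that each pixel's result depends only on that pixel's own inputs. Here's a toy "pixel shader" to illustrate - the function and framebuffer are made up for the example, and the sequential map is the CPU analogue of what the GPU runs as many independent copies at once:

```python
# Toy "pixel shader": one small function applied independently to each
# pixel. No pixel reads another pixel's result, which is exactly why a
# GPU can run many instances of the program side by side; on a CPU this
# map just grinds through them one after another.

def shade(pixel):
    r, g, b = pixel
    # darken by half - the kind of trivial per-pixel math a shader does
    return (r // 2, g // 2, b // 2)

framebuffer = [(255, 128, 0), (0, 255, 64)]
print(list(map(shade, framebuffer)))  # [(127, 64, 0), (0, 127, 32)]
```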
np: Richard Devine - Block Variation (Lipswitch)