What is a T&L?

All you wanted to know and more. Make sure you click the Tell Me More link at the bottom of the article to go to page 2... and 3...

http://www.firingsquad.com/guides/3dbasics/default.asp


And here is the Wikipedia definition...

http://en.wikipedia.org/wiki/Transform_and_lighting


Basically, hardware T&L is a set of complex mathematical algorithms used to render graphics. nVidia developed them and moved them from the CPU to the video card's processor to take the load off the CPU so it could do other things. Software T&L emulation moves them back to the system CPU, at the cost of system speed and response time.

ATI and other graphics vendors have developed hardware T&L emulations - they are close to nVidia's algorithms, but nVidia's are proprietary (like Windows is proprietary to MS); the others simulate them the best they can. That is why ATI and other cards sometimes have issues dealing with T&L.
 
oldStatesman said:
ATI and other graphics vendors have developed hardware T&L emulations - they are close to nVidia's algorithms, but nVidia's are proprietary (like Windows is proprietary to MS); the others simulate them the best they can. That is why ATI and other cards sometimes have issues dealing with T&L.
Sorry, but no - hardware transform & lighting works about the same on all graphics cards that support it. Hardware T&L does all the basic 4x4 matrix operations (rotating, translating, scaling, shearing, projecting, etc.) on the vertex data (the "corner points" of the triangles 3D stuff is built with) needed to wrangle the 3D coordinates into 2D triangles - on the graphics card instead of on the CPU. But those operations are the basics of 3D graphics, and there's only one way to do them - nothing proprietary there, just take a good book about 3D graphics and look them up. :)

Of course, there's more than one way to do the same transformation *FAST*, and many of those are probably proprietary, but the result needs to be the same so you don't get a weird polygon mess on the screen...

It's just that those operations can be done much faster on specialized hardware than on a general-purpose CPU (having SSE in your CPU helps alleviate that somewhat, but it's still not nearly the same)...

Graphics cards without T&L just paint (textured) triangles into the screen buffer and rely on the CPU for number crunching while T&L cards only rely on the CPU to get the raw 3D data to them, but do the heavy mathematics themselves, freeing up a lot of CPU time for other tasks.

Also, stuff like Pixel and (to a lesser extent) Vertex Shaders (which are small programs that basically are run on the data of each Pixel/Vertex for stuff like reflections in the water; or the tree swaying animations) can't be practically done on the CPU, since the graphics card can do many of those in parallel, whereas the CPU would have to do one after another... :crazyeye:

(There's a difference between running up to 24 such programs in parallel on current high-end cards and running one after another... yes, there's software emulation for shaders, but that's mainly used by game and graphics hardware developers to make sure the hardware gets it right, not to actually play the games...)

np: Richard Devine - Block Variation (Lipswitch)
 
Briareos: right!

Transform and lighting is supported by every renderer via the drivers (in software if not in hardware).

Transform is the basis of 3D rendering, and lighting was also supported in the first cards.

The formulas themselves have been known since... I don't know... but probably before 1800!!! :lol:

Strictly speaking, T&L is only the commercial name given to a fast hardware implementation of these very common and simple algorithms (take a look at the GL specs if you are interested in the formulas behind them).
For example, in a "true" T&L card (FX5200 or better) lighting does not affect the rendering speed (texturing does not affect any card, as far as I know).

In OpenGL, for example, there is no way to directly determine whether the card supports it or not. Simply: if the card supports it in hardware, it is faster.

In DX there is probably a method (I only know GL!) to "touch" the hardware implementation directly.
As a lot of customers have seen, this is not the safest way (see the black terrain bug)!
In GL I have never seen lighting fail on any card!!!
 