PC Hardware
-=!K.F.X.!=- 2001 - 2013
KFX Clan HQ.

Hardware reviewed and previewed.

The new Matrox and ATI Radeon cards reviewed soon.

Geforce 4

Nvidia unveils its new flagship range of GPUs: the GeForce4 range of graphics cards.
I'll go through the family from the weakest and cheapest to the new daddy of 3D cards.

First in the new range are the MX 420, 440 and 460.




These versions in the GeForce4 line are basically an updated GeForce2 MX. They have no DirectX 8.0 pixel & vertex shader units.

They have two enhanced memory controllers, giving 2 x 64-bit load-balanced memory control as opposed to the 4 x 32-bit load-balanced memory control of the GeForce4 Titanium range, and they have multi-sample anti-aliasing & updated video features.
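As a rough illustration of why splitting a wide memory bus into independent load-balanced controllers helps: small requests can be served in parallel instead of each one tying up the full bus. The controller counts below come from the text above; the address-interleaving scheme is my own simplification, not NVIDIA's actual routing logic.

```python
# Sketch: interleave memory requests across independent controllers
# (crossbar-style). More controllers means shorter per-controller queues
# for the same workload -- my simplified model, not the real hardware.

def route_requests(addresses, num_controllers):
    """Assign each request to a controller by address interleaving."""
    queues = [[] for _ in range(num_controllers)]
    for addr in addresses:
        queues[addr % num_controllers].append(addr)
    return queues

requests = [0, 1, 2, 3, 4, 5, 6, 7]
mx = route_requests(requests, 2)   # GeForce4 MX: 2 x 64-bit controllers
ti = route_requests(requests, 4)   # GeForce4 Ti: 4 x 32-bit controllers

# The Ti spreads the same work across more queues, so each queue is shorter.
assert max(len(q) for q in ti) < max(len(q) for q in mx)
```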

They also use the same Accuview AA engine, same nView & same enhanced Visibility Subsystem.

These GeForce4 MX cards feature dual integrated TMDS transmitters for dual DVI output for resolutions up to 1280 x 1024 for each monitor.


They have hardware support for DVD playback & support hardware motion compensation & iDCT along with other hardware features for MPEG2 playback & recording. They also have a built-in video encoder for video-out.

Here's a diagram of how the GeForce4 MX range works.

GF4 NV17

Now for the Titanium range.



With over 63 million transistors, and about 5% larger than the GeForce3, the NV25 is made using 0.15-micron technology. In a way it is basically an enhanced GeForce3. It now has two vertex shader units, much like the Xbox, which will be of great benefit to 3D graphics and games software, particularly for vertex shading.


Here we have a diagram of the GeForce4 processor showing the essential sections.

From our earlier quick reference guide most of the hardware remains the same. It comes with the new nfiniteFX II, NVIDIA's name for its pixel & vertex shader units, which is basically the addition of a second vertex unit. Both the vertex and pixel shaders work at higher clock frequencies in the GeForce4.

A lot of the performance gains come from NVIDIA's Lightspeed Memory Architecture II (LMA II).


Nvidia GeForce3 nfiniteFX engine.
The Nvidia GeForce3 GPU nfiniteFX engine allows designers to program a virtually infinite number of special effects and custom looks, whereas before they had to choose from a pre-set selection of effects & operations. There are two patented Nvidia GeForce3 architectural advancements that enable nfiniteFX, accessible through DirectX 8; these were greatly modified by Nvidia while developing the Xbox.

GeForce4 Lightspeed Memory Architecture & enhancements.
The new NVIDIA LMA II comprises: Crossbar Memory Controller, Z-Occlusion culling, Lossless Z-buffer compression, Primitive cache, Vertex cache, Pixel cache, Dual Texture caches, Auto pre-charge & Fast Z-clear.

The crossbar memory controller has enhanced load-balancing algorithms for the different memory sections and an improved priority system for better use of memory.

Dual texture caches with newer algorithms than the GeForce3's give better 'look ahead' for improved 3- and 4-texture performance.

Lossless Z-buffer compression. The Z-buffer is an integral part of the 3D process: it stores how deep each pixel sits in the scene. Overdraw is where a pixel or polygon that cannot be seen on screen is rendered and output to the monitor anyway. Nvidia's Visibility Subsystem stops this wasted rendering, and is similar to ATI's Hierarchical-Z system: it compares the values in the Z-buffer and discards the values (and their associated pixels) that will not be visible on the monitor before the data is sent to the frame buffer.
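The depth comparison at the heart of all this can be sketched in a few lines. This is the generic Z-buffer test, not NVIDIA's hardware specifically; the buffer sizes and depth values are made up for illustration.

```python
# Minimal Z-buffer depth test: a pixel is only drawn if it is nearer
# than what the Z-buffer already holds for that position. Occluded
# pixels are discarded, avoiding overdraw.

def draw_pixel(z_buffer, frame_buffer, x, y, depth, colour):
    """Draw only if this pixel is nearer than the stored depth."""
    if depth < z_buffer[y][x]:        # nearer to the viewer wins
        z_buffer[y][x] = depth
        frame_buffer[y][x] = colour
        return True                   # pixel was rendered
    return False                      # occluded: discarded, never output

# 2x2 buffers, initialised to the far plane (depth 1.0)
z_buf = [[1.0, 1.0], [1.0, 1.0]]
f_buf = [[None, None], [None, None]]

draw_pixel(z_buf, f_buf, 0, 0, 0.5, "red")           # drawn
drawn = draw_pixel(z_buf, f_buf, 0, 0, 0.8, "blue")  # behind red: discarded
```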

The compression itself is the same as with the GeForce3. NVIDIA's lossless compression algorithm can give as much as 4-to-1 compression with no loss of overall image quality or Z-depth accuracy. This gives a good reduction in memory bandwidth use.
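NVIDIA has not published the actual algorithm, so as a stand-in here is a simple run-length-encoding sketch showing why Z data compresses so well losslessly: cleared or flat regions of the Z-buffer are long runs of equal values, and a reversible encoding shrinks them without losing a single bit.

```python
# Run-length encoding as a toy stand-in for lossless Z compression.
# Fully reversible, so no image quality or Z accuracy is lost.

def rle_compress(values):
    """Return (value, run_length) pairs."""
    out = []
    for v in values:
        if out and out[-1][0] == v:
            out[-1] = (v, out[-1][1] + 1)
        else:
            out.append((v, 1))
    return out

def rle_decompress(pairs):
    """Exact inverse of rle_compress."""
    return [v for v, n in pairs for _ in range(n)]

row = [1.0] * 12 + [0.5] * 4      # mostly far-plane depths
packed = rle_compress(row)
assert rle_decompress(packed) == row   # lossless: nothing is thrown away
```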

Z-Occlusion culling has improved algorithms. LMA II optimizes the read, write & selection of pixels to be rendered, discarding unseen pixels up to 25% more efficiently than the GeForce3.

Fast Z-clear, carried over from the GeForce3, is a very fast algorithm that sets flagged areas of the Z-buffer to zero instead of clearing the whole Z-buffer, with a saving in bandwidth.
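One way to picture the trick: keep a tiny per-tile "cleared" flag table and only really reset a tile's values when it is next touched. The tile layout below is my own assumption for illustration, not NVIDIA's actual scheme.

```python
# Sketch of a fast Z-clear: flipping one flag per tile stands in for
# rewriting every Z value, and tiles are lazily reset on first access.

CLEAR = 0.0  # the flagged areas are set to zero, as described above

class TiledZBuffer:
    def __init__(self, num_tiles):
        self.tiles = [[CLEAR] * 16 for _ in range(num_tiles)]  # 16 depths/tile
        self.cleared = [True] * num_tiles                      # one flag/tile

    def fast_clear(self):
        # Touches only the tiny flag array, not the full buffer: cheap.
        self.cleared = [True] * len(self.cleared)

    def read(self, tile, i):
        if self.cleared[tile]:            # lazily reset the tile on first use
            self.tiles[tile] = [CLEAR] * 16
            self.cleared[tile] = False
        return self.tiles[tile][i]

zb = TiledZBuffer(2)
zb.tiles[0][0] = 0.3      # scene writes some depth values
zb.cleared[0] = False
zb.fast_clear()           # "clear" the whole buffer by flipping 2 flags
```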

Auto pre-charge: as DRAM uses capacitance to remember its state, it must be refreshed, which causes delays in getting at the data. Auto pre-charge uses logic to guess which rows & columns in the DRAM array will be accessed next. It then charges them before they are used, which reduces the time the GPU needs to wait if it needs data from them.

This, along with the Quad cache, helps speed up the GPU. Auto pre-charge, with its effectively lower-latency memory access, thus gives more useful memory bandwidth.
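As a toy model of the benefit: an access to a row that is already open (or correctly pre-charged) is fast, while opening a cold row costs extra cycles. The cycle counts and the "next row" predictor below are invented for illustration; real DRAM prediction logic is far more involved.

```python
# Toy DRAM timing model: predicting and pre-charging the next row
# turns slow row-misses into fast row-hits for sequential access.

ROW_HIT_CYCLES = 2    # assumed cost when the row is open/pre-charged
ROW_MISS_CYCLES = 9   # assumed cost when the row must be opened first

def access_cost(accesses, predictor=None):
    """Total cycles for a list of row numbers, with optional prediction."""
    open_row, total = None, 0
    for row in accesses:
        predicted = predictor(open_row) if predictor else None
        if row == open_row or row == predicted:
            total += ROW_HIT_CYCLES    # already open or pre-charged
        else:
            total += ROW_MISS_CYCLES   # stall while the row is opened
        open_row = row
    return total

sequential = [0, 1, 2, 3, 4, 5]
next_row = lambda r: None if r is None else r + 1  # guess: next row follows

# The predictor hides most of the row-open latency on a sequential sweep.
assert access_cost(sequential, next_row) < access_cost(sequential)
```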

nfiniteFX II.
The GeForce3 only had one vertex shader; the GeForce4 now has two, and they are more advanced. Both units are multi-threaded, and the multi-threading is done in the chip.

The new anti-aliasing engine, called Accuview, brings a new 4x AA mode under Direct3D called 4XS. 4XS takes more texture samples per pixel than 4X, creating a better-looking anti-aliased image. Accuview makes anti-aliasing look better and run faster.
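Accuview's exact sample patterns aren't public, so here is a generic sketch of the idea behind multi-sample anti-aliasing: take several coverage samples inside each pixel and blend edge pixels toward the background according to how many samples a triangle actually covers.

```python
# Generic multi-sample AA sketch (not the actual Accuview/4XS pipeline):
# edge pixels are blended by the fraction of covered sample points,
# smoothing the jagged staircase of a hard in/out decision.

def shade_pixel(samples_inside, total_samples, fg, bg):
    """Blend foreground/background RGB by the fraction of covered samples."""
    t = samples_inside / total_samples
    return tuple(round(f * t + b * (1 - t)) for f, b in zip(fg, bg))

white, black = (255, 255, 255), (0, 0, 0)
assert shade_pixel(4, 4, white, black) == white              # fully covered
assert shade_pixel(2, 4, white, black) == (128, 128, 128)    # half-covered edge
assert shade_pixel(0, 4, white, black) == black              # fully outside
```

More samples per pixel (4XS versus 4X) means finer coverage fractions at edges, which is why the anti-aliased result looks smoother.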


So should you upgrade from a GeForce3?
If you have to have the latest technology, then the Ti 4600 is the fastest and most powerful card in the world.
Just as ATi caught up to GeForce3 speed with their Radeon 8500, Nvidia unleashed a new GPU.
A GeForce3 Ti can quite happily smile at the MX range because it has full DirectX 8 features, but the Ti 4400 and Ti 4600 are the new benchmark in 3D graphics cards.
The MX series are good cards, but beware: they are not full-blown accelerators like the GeForce3 and GeForce4 Ti models.


Quake test

NVIDIA GeForce4 Ti & MX performance.
Here we show the performance you can expect with an AMD AthlonXP 1800 processor. The cards are shown running at their standard clock speeds.
Quake III performance.

As one would expect, the GeForce3 Ti200, with its slow standard core & memory clock speeds of 175/400, does not make such a good showing of itself.

The GeForce4 MX 460, meanwhile, with core & memory clocks at 300/550, shows how dependent everything is on clock speed.

It must be remembered that the GeForce3 Ti200 can be amply overclocked for far better performance.

The GeForce4 Titanium range gives a decent performance lead over the GeForce3 Titanium range, but one can hardly be impressed by the performance difference between the Ti 4600 & the Ti 4400. Considering the Ti 4600 will probably cost about $100 more, it hardly seems worth the extra over the Ti 4400.

One is reminded just how far graphics processors have progressed in only a few years when you think back to the initial GeForce cards. What will they be like in a few years?