I'd say Voodoo 3 mattered because it killed 3dfx.
And the Matrox Parhelia mattered for much the same reason.
The site looks nice, which fools us into thinking thought and effort were put into it.
And that brought to mind my older dream machine, an 8800 GT from generations past. Before that we made do with a VIA Unichrome, which worked well enough on the OpenChrome driver that I could edit open-source games to get them rendering (Freespace only needed a few constants changed). Some of the image was smeared and so on, but I could play!
Released before the Voodoo 1, with GLQuake and GL support for Tomb Raider.
At the same time I'd add the S3 ViRGE and the Matrox G200. Both mattered a lot at the time, but not long term.
RIP my Radeon 7500 from high school though; that was always a budget card, and we all had them but wanted the 9700. Couldn't beat the box art from that era though: https://www.ebay.com/itm/206159283550
One day one of my friends from school wanted to optimize the airflow in our computer and redid the cabling, but he managed to block the CPU fan from spinning. I'm not sure how, but we didn't realise it for a couple of months.
When I got my own PC, it had an AMD Barton chip, and it allowed me to play Half-Life 2.
I think Sun and HP had some 3D capabilities, but they were mostly aimed at engineering/CAD.
Oh well.
I'd put the 5700 XT at #2 for being the longest-lived GPU I've owned by a very wide margin. It's still in use today.
Few of the “pre-GPU” graphics accelerators that seem to have mattered are here. The ViRGE. The Mach32 and Mach64. The Trident cards, like the TGUI9440. Yet the Voodoo often isn’t considered a GPU and is on the list.
If I can at least tell myself that our technological achievements come with efficiency gains instead of just ever-higher power throughput, I can rest a little easier.
Also, the GPU did not exist until 1999.
looks like this was created for engagement
decelerator?
>Matrox G200
Because it never got an OpenGL driver? Because it was 2x slower than even the Savage3D? The Nvidia TNT released a month later, offering 2x the speed at a lower price:
https://www.tomshardware.com/reviews/3d-chips,83-7.html
Truly a graphics card that mattered! :)
Apparently it's a Millennial trait to insist on doing things with a "big screen".
One example is "No graphics API" by Sebastian Aaltonen, shared here 3 months ago, which is a great tour de force through graphics-stack innovations, contrasting the history of OpenGL/Vulkan and WebGPU/Metal development: https://news.ycombinator.com/item?id=46293062 Because it requires an in-depth understanding of the shader pipeline, the article touches on the significant graphics cards of the era. I'd love to see more like that!
The old CPU is actually more of an issue. I couldn't run Civ 7 because the game (probably the DRM) uses some instructions that aren't implemented on that CPU. Other than that I bet it would run just fine.
I was just about to upgrade before hardware prices went through the roof. Now I'm just holding on until some semblance of sanity returns, hoping every day that the bubble pops and loads of gently loved hardware starts appearing on the secondary market. Also, the way nVidia has been skimping on memory for all but the most outrageously expensive chips has grated on me. I was really hoping they would buck the trend with the 5xxx generation, but nope, and with RAM prices the way they are I have little hope for the 6xxx generation. My current card is close to a decade old and has 8GB of VRAM. I'm not upgrading to a card with 8GB of VRAM, or even 12GB. That 8GB was crucial in future-proofing the original card; none of its 4GB contemporaries are of much use today.
Still, there are a lot of laptops I'd like to try when they get cheaper. As for GPUs, I like the Nvidia Founders designs; it was a while before I got a 3080 Ti FE, which I ended up having to sell at a loss when I didn't have a job. That was sad. I have a 4070 Founders now, which does struggle on certain games at 1440p, but I'm going to use it to run local LLMs.
I've been running the worst gaming setup I can get away with, which atm is a 3080 10GB, random DDR3 RAM, a budget WD 512GB SSD, and an i5 on the same socket as the i7-4790K that doesn't even support hyperthreading and can't run more than 4 tasks in parallel.
It's absolutely laughable at this point, but I'm unironically looking for a deal on that CPU lmao; it would be a huge upgrade.
Not that it was an awesome product, but it certainly was flexible.
A good (albeit tiny) demo of that is that vQuake has the same wobbling water distortion as software-rendered Quake, but rendered entirely through the GPU. Perhaps with some interpretation this could be called the "caveman discovers fire" moment of the pixel-shading era.
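For anyone who never saw it: the effect is just a sine-based UV warp, where each axis of a texel's coordinate is offset by a sine of the other axis plus time. A minimal C sketch of the idea, with made-up constants and names rather than Quake's or vQuake's actual code:

    /* Quake-style "turbulent" water warp: offset each texel's UV by a
       sine of the opposite axis plus time. Constants are illustrative. */
    #include <math.h>

    #define TEX_SIZE 64     /* assumed power-of-two texture size */
    #define AMP      3.0f   /* warp amplitude in texels (assumed) */
    #define FREQ     0.2f   /* warp frequency (assumed) */

    void warp_water(const unsigned char src[TEX_SIZE][TEX_SIZE],
                    unsigned char dst[TEX_SIZE][TEX_SIZE], float time)
    {
        for (int y = 0; y < TEX_SIZE; y++) {
            for (int x = 0; x < TEX_SIZE; x++) {
                int u = x + (int)(AMP * sinf((y + time) * FREQ));
                int v = y + (int)(AMP * sinf((x + time) * FREQ));
                /* wrap with a power-of-two mask; the bias keeps u,v non-negative */
                dst[y][x] = src[(v + TEX_SIZE) & (TEX_SIZE - 1)]
                               [(u + TEX_SIZE) & (TEX_SIZE - 1)];
            }
        }
    }

The point being: the software renderer did this per pixel on the CPU, while the Vérité could run the same kind of per-pixel trick on the card itself.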
About a decade ago, I discovered that the HD 530 iGPU included with my budget-oriented i3-6300 CPU was better-performing than the physically impressive SLI pair of 9800GTs I had been using, at something like 1/10th the power consumption.
(It didn't do PhysX, but nobody cared.)
The 580 is a solid card that was an excellent price/performance value and held a respectable spot in the market for a very long time. Many video games now use it as the entry-level bar for playability.
It doesn't hold the same "type" of spot, but it's a workhorse in the same way something like an NVIDIA 1070 was.
With the release of D3D9 in 2002, GPUs of different vendors didn't really stand out anymore since they all implemented the same feature set anyway (and that's a good thing).
I remember the main noticeable difference being ray-traced reflections. However, that was mostly on immovable objects in extremely simple scenes (an office building). Older techniques could have gotten 90% of the way there using cubemaps (see the sketch after this comment), screen-space reflections, and/or rasterized overlays for dynamic objects like player characters. Or maybe just rasterize the reflections completely, since the scenes are so simple and everything is flat surfaces with right angles anyway. That might even have looked better, because you avoid the issues of shaders written for a rasterized world appearing on reflected objects.
Games that heavily advertise raytracing typically don't use traditional techniques properly at all, making it seem like a bigger graphical jump than it really is. You're not comparing to a real baseline.
Overall that was pretty much the poorest way to advertise the new tech. It's much more impressive in situations where traditional techniques struggle (such as reflections in situations with no right angles or irregular surfaces).
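To make the cubemap point concrete (this is my sketch, not any particular engine's code): the reflected ray is one dot product and a subtract, and indexing a prebaked environment cubemap just means picking a face by the dominant component. None of it needs RT hardware:

    #include <math.h>

    typedef struct { float x, y, z; } vec3;

    static float dot3(vec3 a, vec3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

    /* R = I - 2 (N.I) N, with N a unit surface normal */
    static vec3 reflect3(vec3 i, vec3 n)
    {
        float d = 2.0f * dot3(n, i);
        return (vec3){ i.x - d*n.x, i.y - d*n.y, i.z - d*n.z };
    }

    /* 0:+X 1:-X 2:+Y 3:-Y 4:+Z 5:-Z -- the usual cubemap face order */
    static int cube_face(vec3 r)
    {
        float ax = fabsf(r.x), ay = fabsf(r.y), az = fabsf(r.z);
        if (ax >= ay && ax >= az) return r.x > 0 ? 0 : 1;
        if (ay >= az)             return r.y > 0 ? 2 : 3;
        return                           r.z > 0 ? 4 : 5;
    }

The catch is exactly what's argued above: a prebaked cubemap can't show dynamic objects, which is where the SSR overlays (or actual ray tracing) come in.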
Both were only really famous for how terrible they were, though. I think the S3 ViRGE might even qualify as a 3D decelerator ;)
I'm on a 3060 currently and the changes in the 4xxx and 5xxx just aren't appealing to me. As soon as iGPUs get 3060 performance I'll probably switch. And they aren't far off.
But yeah, this list has a ton of incremental bumps on it. Maybe there was some mixing of cards that mattered historically and cards that mattered to the author.
I know this because I wrote an Unreal Engine texture repacking tool with a "DXT detection" feature, so that I wouldn't be responsible for losing DXT compression on a texture which had already paid the price, only to find that this situation was already hyperabundant in the ecosystem.
Many Unreal Engine games of the day could have their size robotically halved just by re-enabling DXT compression in any case where this would cause zero pixel difference. This was at a time before Steam, when game downloads routinely took a day, so I was very excited about this discovery. Unfortunately, the first few developers I emailed all reacted with hostility to an unsolicited tip from what I'm sure they saw as a hacker, so I lost interest in pushing and it went nowhere. Ah well.
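For the curious, the core of such a detection pass can be tiny. This is a sketch of the idea rather than the original tool: DXT1 encodes every aligned 4x4 block as two endpoint colors plus two interpolated ones, so an image that has already been through a DXT1 round trip can never have more than four distinct colors in any such block:

    #include <stdint.h>
    #include <stdbool.h>

    /* Heuristic: true if every aligned 4x4 block of an RGBA image has at
       most 4 distinct colors -- a strong sign it was once DXT1-compressed.
       (The definitive test is an exact recompress-and-compare.) */
    bool looks_dxt1_quantized(const uint32_t *rgba, int w, int h)
    {
        for (int by = 0; by + 4 <= h; by += 4) {
            for (int bx = 0; bx + 4 <= w; bx += 4) {
                uint32_t seen[4];
                int n = 0;
                for (int y = 0; y < 4; y++) {
                    for (int x = 0; x < 4; x++) {
                        uint32_t c = rgba[(by + y) * w + (bx + x)];
                        int k = 0;
                        while (k < n && seen[k] != c) k++;
                        if (k == n) {
                            if (n == 4) return false; /* 5th color in a block */
                            seen[n++] = c;
                        }
                    }
                }
            }
        }
        return true;
    }

Passing the check doesn't by itself guarantee a lossless recompression (the four colors still have to fit DXT1's endpoint-plus-interpolant structure), so a real tool would follow up with the compress-and-compare that gives the "zero pixel difference" guarantee described above.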
The "office building" setting meant restricted areas, sure, but it features TONS of reflections, especially transparent reflections (which are practically impossible to decently approximate with screen-space techniques).
Oh, and: the Northlight Engine already did more than most other engines at the time to get "90% there" with a ton of hybrid techniques, not least as one of the pioneers of realtime software GI.
They aren't a marketing company:
"Dashboards, CRMs, automations. We're a small consulting team that turns your messy spreadsheets into systems that run your business."
It seems to have helped path tracing a lot.
Last time I saw a Matrox chip it was on a server, and somehow they had cut it down even more than the one I had used over a decade earlier. As I recall it couldn't handle a framebuffer larger than 800x600, which was sometimes a problem when people wanted to install and configure Windows Server.
Nvidia called the GeForce 256 the first ever GPU.
It was a good budget option those decades ago.
The only thing that holds this card back now is a handful of titles that will not run unless ray-tracing support is present on the card; Indiana Jones and the Great Circle springs to mind.
I am very likely going to get a decade of use out of it across three different builds, one of the best technology investments I've ever made.
The 5070 Ti would be in the same spot.
If you compare these, the RTX 50 card has a bit higher TDP (which it will usually not reach due to clock limits), a roughly 100mm² smaller die with around 4x the transistors, and about 3x the compute (since much more of the chip is disabled than on the 1080 Ti's chip). It has 5GB more memory (11 -> 16) and a lot more bandwidth.
Seeing Quake II run butter smooth on a Riva TNT at 1024x768 for the first time was like witnessing the second coming of Christ ;)