I now do work that involves making graphics cards do computational work on a headless server. The work they do has nothing to do with graphics.
The name is aimed at consumers, based on the most common use for graphics cards and why they were first made in the ’90s, but now they’re used for all sorts of computational workloads. So what are some more fitting names for the part?
I now think of them as ‘computation engines’, analogous to an old car engine. It’s where the computational horsepower is really generated. But how would RAM make sense in this analogy?
Thinky boi, or computy boi.
Back in the day, you could slap a math coprocessor on your system so it could do floating point maths real gud.
Now, you slap in some card that does floating point maths even guder, but also in parallel in yuge vectors.
So my proposed name is “It’s like an old Cray supercomputer but real tiny”
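To make the “floating point maths in parallel in yuge vectors” idea concrete, here’s a minimal CUDA sketch (the kernel name and sizes are just illustrative, not any particular library’s API): one launch runs the same floating-point operation across a million elements at once.

```cuda
#include <cstdio>
#include <cuda_runtime.h>

// Classic SAXPY: y = a*x + y, one element per thread.
// The old math coprocessor did one FLOP at a time; here the
// grid launches thousands of threads doing it in parallel.
__global__ void saxpy(int n, float a, const float *x, float *y) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) y[i] = a * x[i] + y[i];
}

int main() {
    const int n = 1 << 20;  // ~1M floats
    float *x, *y;
    cudaMallocManaged(&x, n * sizeof(float));
    cudaMallocManaged(&y, n * sizeof(float));
    for (int i = 0; i < n; ++i) { x[i] = 1.0f; y[i] = 2.0f; }

    saxpy<<<(n + 255) / 256, 256>>>(n, 2.0f, x, y);
    cudaDeviceSynchronize();

    printf("y[0] = %f\n", y[0]);  // expect 4.0
    cudaFree(x);
    cudaFree(y);
    return 0;
}
```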
Parallel compute accelerator.
Nobody is gonna say that in full, just like “graphics processing unit” becomes “GPU”, so maybe “PCA”.
Pissy, eh.
They are GPUs.
All of them. Even the H100, B100, and MI300X have texture units, pixel shaders, everything. They are graphics cards at a low level. Only the MI300X is missing ROPs, but the Nvidia cards have them (and can run realtime games on Linux), and they can all be used in Blender and such.
The compute programming languages they use are, fundamentally, hacked-up abstractions that map onto the same GPU hardware found in consumer stuff.
That’s the whole point: they’re architected as GPUs so that they’re backwards compatible, since everything’s built on the days when consumer gaming GPUs were hacked to be used for compute.
Are there more dedicated accelerators? Yes. They’re called ASICs, or application-specific integrated circuits. That’s technically a broad term, but its connotation is mostly very purpose-made compute.
The 5090 is missing ROPs too.
Triangle makers
Massively Parallelized Floating-Point Computation Unit.
MPFPCU!
Hehehehehehehe!
Floating point coprocessor
matrix multiplication unit
We already have MMU for Memory Management Unit. Maybe Matrix Multiplication Accelerator instead?
Matrix Accelerator coProcessor card, MAP card
So MMA? Sounds sporty.
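Since several of these suggestions boil down to “the thing that multiplies matrices”, here’s a minimal sketch of that core workload, in CUDA for illustration. This is the naive version; the dedicated matrix hardware (tensor cores and friends) effectively does small tiles of exactly this per instruction.

```cuda
#include <cuda_runtime.h>

// Naive C = A * B for square n x n matrices,
// one output element per thread.
__global__ void matmul(int n, const float *A, const float *B, float *C) {
    int row = blockIdx.y * blockDim.y + threadIdx.y;
    int col = blockIdx.x * blockDim.x + threadIdx.x;
    if (row < n && col < n) {
        float acc = 0.0f;
        for (int k = 0; k < n; ++k)
            acc += A[row * n + k] * B[k * n + col];
        C[row * n + col] = acc;
    }
}

// Launch helper: 16x16 thread tiles covering the whole matrix.
void launch_matmul(int n, const float *A, const float *B, float *C) {
    dim3 block(16, 16);
    dim3 grid((n + 15) / 16, (n + 15) / 16);
    matmul<<<grid, block>>>(n, A, B, C);
}
```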
Probably something like Tensor Processing Unit. That’s a specific Google product, but something along those lines
Tensor Processing Unit (TPU) is an AI accelerator application-specific integrated circuit (ASIC) developed by Google for neural network machine learning, using Google’s own TensorFlow software. Google began using TPUs internally in 2015, and in 2018 made them available for third-party use, both as part of its cloud infrastructure and by offering a smaller version of the chip for sale.
Compared to a graphics processing unit, TPUs are designed for a high volume of low-precision computation (e.g. as little as 8-bit precision) with more input/output operations per joule, without hardware for rasterisation/texture mapping.
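For a sense of what “low precision” buys you, here’s a small CUDA sketch (needs a GPU of compute capability 6.1 or newer for the `__dp4a` intrinsic): four 8-bit multiply-accumulates per instruction, accumulating into 32 bits, which is the same trade TPUs push much further.

```cuda
#include <cuda_runtime.h>

// Each int packs four signed 8-bit values. __dp4a(a, b, c) returns
// c + a0*b0 + a1*b1 + a2*b2 + a3*b3 with a 32-bit accumulator:
// low-precision inputs, wider accumulation, like TPU-style int8 math.
__global__ void dot_int8(int n4, const int *a, const int *b, int *out) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n4) {
        int acc = __dp4a(a[i], b[i], 0);  // 4 int8 MACs in one call
        atomicAdd(out, acc);              // crude reduction; fine for a sketch
    }
}
```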
Graphics cards.
Crypto Cultists and AI Evangelists found a ~~wasteful and often useless~~ different function for them.
Not just crypto and AI fucktards tho.
Theoretical physicists, astrophysicists, nuclear engineers, mechanical engineers, and countless other professions depend on the computational capabilities they provide.
Don’t let your anger and bitterness blind you into thinking it’s all for the bullshit.
Carburetor.
It mixes the fuel and air in the right ratio, prepping the mixture before it goes into the engine.
Similarly, RAM holds data while it gets adjusted.
It’s not a great analogy, but it’s pretty much all there is.
I think you need to add the exhaust, or at least the catalytic converter, to it, because the RAM also stores the results of the computations for further use.
It’s mixing the data that goes in to get the result…