What are TFLOPS?
TeraFLOPS (TFLOPS) is a unit that indicates how many trillions of calculations with floating point numbers a computer can carry out in one second. The value serves as a measure of the performance of processors, especially GPUs and supercomputers. TFLOPS are especially relevant for applications that involve a lot of calculation, like artificial intelligence, scientific simulations and machine learning.
What are FLOPS and what are they used for?
FLOPS stands for floating point operations per second and is a unit of computing power. A floating point operation is an arithmetic calculation, such as an addition or multiplication, performed on floating point numbers, i.e. numbers with a fractional part. FLOPS are especially important for calculation-heavy applications that require a high degree of precision.
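To make the counting concrete, here is a minimal Python sketch: a dot product of two length-n vectors performs n multiplications and n additions, so roughly 2n floating point operations. The vectors and the 2n count are purely illustrative.

```python
# Minimal illustration of counting floating point operations:
# a dot product of two length-n vectors performs n multiplications
# and n additions, i.e. roughly 2n floating point operations.
def dot(a, b):
    total = 0.0
    for x, y in zip(a, b):
        total += x * y        # 1 multiplication + 1 addition per element
    return total

a = [1.5, 2.25, 3.75]
b = [0.5, 4.0, 2.0]
print(dot(a, b), "computed with about", 2 * len(a), "floating point operations")
```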
FLOPS are mostly used for scientific calculations, simulations, artificial intelligence, machine learning and graphics applications. They play a central role in various areas such as medical image processing and physical simulations. They are also important in finance, for example when it comes to the analysis of market data. In the gaming industry, FLOPS are used to determine the graphics performance of modern GPUs. With ever increasing FLOPS capacity, modern computers can deliver more and more realistic physical effects and high-resolution graphics.
FLOPS are typically measured using specially developed benchmark tests that determine the number of floating point operations per second. Frequently used benchmarks include LINPACK, which is mostly used to rank supercomputers, and FP32/FP64 throughput tests, which rate the single- and double-precision computing power of GPUs. During these tests, complex mathematical calculations are performed in order to determine how many operations per second a system can handle. Manufacturers often give theoretical FLOPS values based on the architecture of a computer. However, the performance achieved in real-world applications can vary depending on workload and efficiency.
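As an illustration of such a measurement, the following Python sketch times a single NumPy matrix multiplication and derives a rough GFLOPS figure from the approximate operation count of 2n³. The matrix size is an arbitrary assumption, and the result is only a quick estimate, not a formal benchmark like LINPACK.

```python
# A minimal FLOPS micro-benchmark sketch: time one NumPy matrix
# multiplication and divide the approximate operation count (2*n^3)
# by the elapsed time. Matrix size and dtype are illustrative choices.
import time
import numpy as np

n = 2048                                    # matrix dimension (assumption)
a = np.random.rand(n, n).astype(np.float32)
b = np.random.rand(n, n).astype(np.float32)

start = time.perf_counter()
c = a @ b                                   # one n x n matrix multiplication
elapsed = time.perf_counter() - start

flops = 2 * n**3                            # ~2n^3 floating point operations
print(f"{flops / elapsed / 1e9:.1f} GFLOPS (FP32, single run)")
```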
How many FLOPS are in a teraFLOPS?
One teraFLOPS is equal to one trillion (1,000,000,000,000 or 10¹²) floating point operations per second. That means a processor with 1 TFLOPS can execute one trillion floating point operations every second.
By way of comparison, a computer that manages only 1 FLOPS would need roughly 31,700 years to perform a trillion floating point operations. Systems working in the TFLOPS range are therefore powerful enough to run modern applications in real time.
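The comparison can be checked with a few lines of arithmetic; the year length used below is the only assumption.

```python
# Back-of-the-envelope check: how long does one trillion floating point
# operations take at 1 FLOPS versus 1 TFLOPS?
operations = 10**12                     # one trillion floating point operations
seconds_per_year = 365.25 * 24 * 3600   # average year length (assumption)

at_1_flops = operations / 1             # seconds at 1 FLOPS
at_1_tflops = operations / 10**12       # seconds at 1 TFLOPS

print(f"1 FLOPS:  {at_1_flops / seconds_per_year:,.0f} years")   # ~31,689 years
print(f"1 TFLOPS: {at_1_tflops:.0f} second(s)")                  # 1 second
```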
What other FLOPS units exist and how do they convert into TFLOPS?
There are many FLOPS units, which differ in how many operations per second they refer to.
| Unit | FLOPS value | Conversion into TFLOPS |
| --- | --- | --- |
| KiloFLOPS | 10³ FLOPS (1,000) | 10⁻⁹ TFLOPS |
| MegaFLOPS | 10⁶ FLOPS (1 million) | 10⁻⁶ TFLOPS |
| GigaFLOPS | 10⁹ FLOPS (1 billion) | 10⁻³ TFLOPS |
| TeraFLOPS | 10¹² FLOPS (1 trillion) | 1 TFLOPS |
| PetaFLOPS | 10¹⁵ FLOPS (1 quadrillion) | 10³ TFLOPS |
| ExaFLOPS | 10¹⁸ FLOPS (1 quintillion) | 10⁶ TFLOPS |
Supercomputers’ performance is measured in petaFLOPS and even exaFLOPS, while high-end graphics cards are usually rated in teraFLOPS.
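The conversions in the table can be expressed as a small helper function; the unit names and exponents simply mirror the standard SI prefixes listed above.

```python
# Convert a value in any of the units from the table above into TFLOPS.
# The exponents are the standard SI prefix values.
UNIT_EXPONENTS = {
    "KiloFLOPS": 3,
    "MegaFLOPS": 6,
    "GigaFLOPS": 9,
    "TeraFLOPS": 12,
    "PetaFLOPS": 15,
    "ExaFLOPS": 18,
}

def to_tflops(value: float, unit: str) -> float:
    """Convert a value in the given unit into teraFLOPS."""
    return value * 10 ** (UNIT_EXPONENTS[unit] - 12)

print(to_tflops(1.0, "PetaFLOPS"))    # 1000.0 TFLOPS
print(to_tflops(500, "GigaFLOPS"))    # 0.5 TFLOPS
```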
How many FLOPS do modern computers and GPUs reach?
GPUs and modern computers in the area of high-performance computing have reached impressive FLOPS values. The NVIDIA H100, one of the most powerful GPUs for AI and data centres, achieves up to 989 teraFLOPS for TF32 Tensor Core calculations (with sparsity). That makes it ideal for large neural networks and simulations.
The NVIDIA A30, a GPU that’s optimised for data centres, reaches around 10 TFLOPS of FP32 performance and is particularly suitable for AI training and inference. By comparison, the gaming-oriented NVIDIA RTX 4090 delivers roughly 83 TFLOPS of FP32 performance and enables highly realistic graphics.
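For context, a GPU’s theoretical peak FP32 figure is usually estimated as shader cores × clock speed × 2, since one fused multiply-add counts as two operations. The sketch below applies this to the RTX 4090’s published core count and boost clock; the result is a theoretical peak, not measured throughput.

```python
# Rough estimate of theoretical peak FP32 performance for a GPU:
# shader cores x boost clock (GHz) x 2 ops per cycle (fused multiply-add).
def peak_fp32_tflops(cores: int, boost_clock_ghz: float) -> float:
    return cores * boost_clock_ghz * 2 / 1000   # GFLOPS -> TFLOPS

# Published RTX 4090 figures: 16,384 CUDA cores, ~2.52 GHz boost clock.
print(f"{peak_fp32_tflops(16384, 2.52):.1f} TFLOPS")   # ~82.6 TFLOPS
```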
Supercomputers are even more powerful: The Frontier supercomputer has surpassed the 1 exaFLOPS mark and is used for highly complex scientific simulations. Other powerful research supercomputers, like the Japanese system Fugaku, operate in the hundreds of petaFLOPS.