
What are flops and how are they benchmarked?

Problem Detail: 

Apple has just proudly stated that their new Mac Pro will be able to deliver up to 7 teraflops of computing power. FLOPS stands for floating point operations per second. How exactly is this benchmarked, though? Certain floating point operations are much heavier than others, so how exactly can a FLOPS figure serve as a benchmark for computing power?

Asked By : Vincent Warmerdam

Answered By : Raphael

As far as I know, they give peak performance values. Given clock speed $f$ (in Hz) and the number of (shortest) floating point operations the machine can execute per cycle $c$, the peak performance is essentially $f \cdot c$.

Of course, modern machines execute multiple floating point operations in parallel, have multiple cores, etc. A more accurate formula can be found on Wikipedia.
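
To make the formula concrete, here is a minimal sketch of the peak-performance calculation described above (Python, with made-up hardware figures; the real values depend entirely on the specific CPU or GPU):

```python
# Hypothetical figures for illustration only, not any particular machine.
clock_hz = 3.5e9          # clock speed f, in Hz
flops_per_cycle = 8       # floating point operations per cycle per core (e.g. via SIMD/FMA)
cores = 12                # number of cores

# Peak performance: operations per cycle, times cycles per second, times cores.
peak_flops = clock_hz * flops_per_cycle * cores
print(f"Theoretical peak: {peak_flops / 1e12:.2f} TFLOPS")  # ~0.34 TFLOPS with these numbers
```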

This measure ignores all of the pain any real program encounters, e.g. cache misses and pipeline stalls. That is, no (reasonable) benchmark will actually reach this number. But as an (unattainable) upper bound it can be useful for comparing machines, and even programs (how close to peak performance do they get?). Take such statements with a grain of salt, always.
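
One rough way to see how far real code falls short of that peak is to time a dense matrix multiplication and divide its nominal operation count by the elapsed time. This is only a sketch (not part of the original answer), and the measured figure will vary with library, cache behaviour, and thread count:

```python
# Minimal sketch: estimate achieved FLOPS from a dense matrix multiplication.
# 2*n**3 is the standard operation-count estimate for n x n matmul; the result
# will typically fall well short of the machine's theoretical peak.
import time
import numpy as np

n = 2048
a = np.random.rand(n, n)
b = np.random.rand(n, n)

start = time.perf_counter()
c = a @ b
elapsed = time.perf_counter() - start

achieved_flops = 2 * n**3 / elapsed
print(f"Achieved: {achieved_flops / 1e9:.1f} GFLOPS in {elapsed:.3f} s")
```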

Best Answer from Stack Exchange

Question Source : http://cs.stackexchange.com/questions/12606
