Many common operations form monoids. Haskell has leveraged this observation to make many higher-order functions more generic (Foldable being one example).
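For concreteness, here is a small sketch of what that genericity buys (`Sum` and `Product` are the standard newtype wrappers from `Data.Monoid`; the function names are mine):

```haskell
import Data.Monoid (Product (..), Sum (..))

-- foldMap works over any Foldable container and any Monoid,
-- so a single generic definition covers sums, products,
-- concatenation, and so on.
total :: (Foldable t, Num a) => t a -> a
total = getSum . foldMap Sum

sizeProduct :: (Foldable t, Num a) => t a -> a
sizeProduct = getProduct . foldMap Product

main :: IO ()
main = do
  print (total [1, 2, 3, 4])        -- 10
  print (sizeProduct [1, 2, 3, 4])  -- 24
```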
There is one obvious way in which monoids can improve performance: the programmer is asserting the operation's associativity, so the reduction can be parallelized (a sketch of such a parallel reduction follows below).
I'm curious if there are any other ways a compiler could optimize the code, knowing that we're dealing with a monoid.
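As a minimal sketch of the parallelization idea, assuming the `parallel` and `split` packages (the function name `parFold` and the chunking scheme are illustrative choices, not a standard API):

```haskell
import Control.Parallel.Strategies (parMap, rseq)
import Data.List.Split (chunksOf)  -- from the "split" package

-- Associativity lets each chunk be folded independently; the
-- partial results are then combined, and the monoid laws
-- guarantee the regrouping gives the same answer.  mempty
-- covers the empty-input case.
parFold :: Monoid m => Int -> [m] -> m
parFold chunkSize xs =
  mconcat (parMap rseq mconcat (chunksOf chunkSize xs))
```

Compiled with `-threaded` and run with multiple capabilities, something like `parFold 1024 (map Sum [1 .. 1000000])` evaluates the chunk folds in parallel.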
Asked By : Xodarap
Answered By : FUZxxl
The compiler can optimize exponentiation with monoids. Let $(A, \oplus)$ be a monoid whose binary operation $\oplus$ is computable in constant time. Then for $a \in A$ the $n$-fold application
$$a^{\oplus n} = \underbrace{a \oplus a \oplus \dots \oplus a}_{\text{$n$ times}},$$
which naively takes $\mathcal{O}(n)$ time, can be evaluated with the square-and-multiply algorithm in only $\mathcal{O}(\log n)$ time if the compiler knows that $\oplus$ obeys the monoid laws (associativity justifies the regrouping; the identity element handles $n = 0$).
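A minimal sketch of the technique in Haskell (the name `mtimes` is mine; GHC's `stimes` from `Data.Semigroup` implements the same idea):

```haskell
import Data.Monoid (Product (..))

-- Square-and-multiply: n-fold combination of a single element
-- in O(log n) uses of <> instead of O(n).  The shared binding
-- for `half` is what makes the recursion logarithmic.
mtimes :: Monoid m => Integer -> m -> m
mtimes n x
  | n < 0     = error "mtimes: negative multiplier"
  | n == 0    = mempty
  | even n    = half <> half
  | otherwise = x <> (half <> half)
  where
    half = mtimes (n `div` 2) x

main :: IO ()
main = print (getProduct (mtimes 10 (Product 2)))  -- 1024
```

Note that for $n \geq 1$ associativity alone suffices, which is why `stimes` is a `Semigroup` method; the identity is only needed for the $n = 0$ case.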
Question Source : http://cs.stackexchange.com/questions/6858