The Monster group M is the largest of the 26 sporadic groups arising in the classification of finite simple groups.
M can be realized as a (very large!) set of 196882 × 196882 matrices whose entries are all 0s and 1s, provided we compute arithmetic as follows:
1+1=0   1+0=0+1=1   0+0=0   1*1=1   1*0=0*1=0   0*0=0
I have two simple questions for the reader. What is the minimum number of bytes needed to store a single matrix? And what is the computational cost (in elementary bit operations rather than FLOPS, since the entries are binary) of a single matrix multiplication in the most efficient implementation (taking the binary entries into account, but not any mathematical properties of the Monster)?
This question is purely about the computational side of the problem. In other words, treat the matrices as general 196882 × 196882 matrices with binary entries. This is not a question about the Monster itself; that was added only as motivation.
Asked By : ntropy
Answered By : templatetypedef
If you are storing a matrix of 0s and 1s, you could consider using a bit vector for storage. This packs some fixed number of bits (say, 32 or 64) into a single machine word, which decreases the required storage by a large constant factor. In your case, each matrix has 196882² = 38,762,521,924 entries, so you'll need at least 4,845,315,241 bytes, or about 4.5 GiB, of storage per matrix. Since the Monster group has order about 8 × 10^53, though, I seriously doubt you can fit everything into memory.
Another note: the arithmetic you're describing happens to have the plus operator defined as bitwise XOR and the multiplication operator defined as bitwise AND. Therefore, in a language like C or C++, you could use the ^ operator to denote addition and the & operator to denote multiplication.
Finally, the complexity of multiplication depends on the matrix multiplication implementation. The naive method takes time O(n^3), where n is the dimension of the matrix, but since the matrices are large you could probably use either the Strassen algorithm (time complexity about O(n^2.8)) or Coppersmith-Winograd, the latter of which requires about O(n^2.37) multiplications. You'd probably have to empirically profile these algorithms to see which is best, since the constant factors hidden here are pretty huge.
Hope this helps!
Best Answer from Stack Exchange
Question Source : http://cs.stackexchange.com/questions/16513