
Hex Bit Pattern to IEEE 754 standard Floating Point Number

Problem Detail: 

The question asks for the decimal number that 0x0C000000 represents if it is interpreted as a floating-point number. I'm not too sure how to approach this, but here's my thought process:

0x0C000000 = 0000 1100 0000 0000 0000 0000 0000 0000

The first digit is 0, so it's positive. The exponent is 0x0C = 12 - 127 = -116. The mantissa is 0x0C0000 = 12 * 16 ^ -2 = 0.046875, so the final answer is 1.046874 * 2 ^ -116. While my book has similar examples that I tried to follow along with, none were exactly of this type, so I highly suspect I'm doing something wrong. Any tips, hints, or strategies would be greatly appreciated.

Asked By : user40496

Answered By : Joseph R.

A good start is to expand the hex representation into binary, as you did:

0000 1100 0000 0000 0000 0000 0000 0000

Then parse the word:

  • the left-most bit is the sign bit
  • the next 8 bits represent the biased exponent
  • the remaining 23 bits represent the fractional part, with an implicit leading 1 (the "hidden one")

0 | 00011000 | 00000000000000000000000
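One way to make those field boundaries concrete is with shifts and masks; a minimal Python sketch (the constants 31, 23, and 0x7FFFFF follow directly from the 1/8/23-bit split shown above):

```python
word = 0x0C000000

sign = (word >> 31) & 0x1       # bit 31: sign
exponent = (word >> 23) & 0xFF  # bits 30..23: biased exponent
fraction = word & 0x7FFFFF      # bits 22..0: fractional part

print(sign, exponent, fraction)  # → 0 24 0
```

Note that the biased exponent comes out as 24 (binary 00011000), not 0x0C = 12: the exponent field starts at bit 23, not on a hex-digit boundary.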

Then, follow these steps:

  • read the sign bit s (0 means positive, 1 means negative)
  • convert the 8-bit binary exponent field to decimal (here, 00011000)
  • subtract the bias from that value to get the true exponent (the bias is 127 for single precision and 1023 for double precision)
  • evaluate the normalized form (-1)^s × 1.fraction × 2^exponent to get your decimal floating-point number
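The steps above can be sketched in Python; `struct.unpack` is used only as a sanity check, reinterpreting the same four bytes directly as an IEEE 754 single-precision float:

```python
import struct

word = 0x0C000000
sign = (word >> 31) & 0x1       # 0 → positive
biased = (word >> 23) & 0xFF    # 0b00011000 = 24
fraction = word & 0x7FFFFF      # 0

exponent = biased - 127         # 24 - 127 = -103
mantissa = 1 + fraction / 2**23  # hidden one: 1.0 here

value = (-1)**sign * mantissa * 2**exponent
print(value)                    # 1.0 * 2**-103

# Cross-check: reinterpret the four bytes as a big-endian float
assert value == struct.unpack('>f', word.to_bytes(4, 'big'))[0]
```

Because the fraction bits are all zero, the result is exactly 2^-103.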

I hope this helps. It took me a while to understand encoding and decoding using IEEE-754. Slow and steady wins the race.

Best Answer from Computer Science Stack Exchange

Question Source : http://cs.stackexchange.com/questions/47788
