Please help me understand this problem.
The number of bits needed to represent a positive integer n is found by rounding down log2(n) and then adding 1. For example, log2(100) is about 6.643856; rounding down gives 6, and adding 1 gives 7, so we need 7 bits to represent 100.
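Here is a minimal sketch of that formula in Python (bits_needed is just my own helper name, not a standard function):

    import math

    def bits_needed(n):
        # floor(log2(n)) + 1, for a positive integer n
        return math.floor(math.log(n, 2)) + 1

    print(bits_needed(100))  # 7, since floor(6.643856...) + 1 = 7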
But when I use this method to find the number of bits needed to represent sys.maxsize (9,223,372,036,854,775,807):

math.log(sys.maxsize, 2) gives 63.0

Here the value directly gives the answer: we need 63 bits to represent sys.maxsize. If I applied the method as stated (round down, then add 1), I would get 64, which is one too many.
math.log(8, 2) gives 3.0

For 8, I have to apply the method as described: round down to 3 and add 1, so we need 4 bits to represent 8.
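Putting the two cases side by side (a quick sketch; the comments show the outputs I observed on CPython):

    import math
    import sys

    print(math.log(8, 2))                            # 3.0
    print(math.floor(math.log(8, 2)) + 1)            # 4  -- correct

    print(sys.maxsize)                               # 9223372036854775807, i.e. 2**63 - 1
    print(math.log(sys.maxsize, 2))                  # 63.0
    print(math.floor(math.log(sys.maxsize, 2)) + 1)  # 64 -- but 63 bits are enough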
Can you explain why the +1 is needed for 8 but apparently not for sys.maxsize?
What I have tried:
math.log(8, 2) gives 3.0 (but the number of bits needed is 4)
math.log(sys.maxsize, 2) gives 63.0 (and here the number of bits needed is exactly 63)
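As a sanity check, Python's built-in int.bit_length() returns the exact bit count without any floating point:

    import sys

    print((8).bit_length())          # 4
    print(sys.maxsize.bit_length())  # 63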