[ACCEPTED] If byte is 8 bit integer then how can we set it to 255? - c#

Accepted answer
Score: 27

There are 256 different configurations of the 8 bits in a byte:

0000 0000
0000 0001
0000 0010
...
1111 1111

So you can assign a byte any value in the 0-255 range.
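A minimal C# sketch of the point above (variable names are my own):

```csharp
using System;

// A byte holds one of 256 bit patterns, i.e. any value from 0 to 255.
byte min = 0b0000_0000;   // 0
byte max = 0b1111_1111;   // 255

Console.WriteLine(min);   // 0
Console.WriteLine(max);   // 255

// byte overflow = 256;   // compile-time error: constant out of range for byte
```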

Score: 19

Characters are described (in a basic sense) by a numeric representation that fits inside an 8 bit structure. If you look at the ASCII codes for ASCII characters, you'll see that they're related to numbers.

The maximum integer a bit sequence can represent is given by the formula 2^n - 1 (as partially described above by @Marc Gravell). So an 8 bit structure can hold 256 values including 0 (also note TCP/IP addresses are 4 separate sequences of 8 bit structures). If this were a signed integer, the first bit would be a flag for the sign and the remaining 7 would indicate the value, so while it would still hold 256 values, the maximum and minimum would be determined by the 7 trailing bits (so 2^7 - 1 = 127).
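The formula can be checked with a couple of shifts; this sketch uses names of my own choosing:

```csharp
using System;

// Maximum value representable by n bits is 2^n - 1; the count of values is 2^n.
int n = 8;
int valueCount  = 1 << n;              // 2^8     = 256 distinct values
int maxUnsigned = (1 << n) - 1;        // 2^8 - 1 = 255
int maxSigned   = (1 << (n - 1)) - 1;  // 2^7 - 1 = 127 (one bit reserved for the sign)

Console.WriteLine($"{valueCount} {maxUnsigned} {maxSigned}"); // 256 255 127
```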

When you get into Unicode characters and "high ASCII" characters, the encoding requires more than an 8 bit structure. So in your example, if you were to assign a byte a value of 76, a lookup table could be consulted to derive the ASCII character 'L'.
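In C#, that "lookup" is just a cast from the numeric value to char:

```csharp
using System;

// Casting the byte 76 to char performs the lookup: ASCII 76 is 'L'.
byte code = 76;
char letter = (char)code;
Console.WriteLine(letter); // L
```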

Score: 11

11111111 (all 8 bits on) is 255: 128 + 64 + 32 + 16 + 8 + 4 + 2 + 1

Perhaps you're confusing this with 256, which is 2^8?
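The arithmetic above, verified in C#:

```csharp
using System;

// All eight bits on: the place values sum to 255.
int sum = 128 + 64 + 32 + 16 + 8 + 4 + 2 + 1;
int fromBinary = Convert.ToInt32("11111111", 2); // parse the bit string, base 2

Console.WriteLine(sum);        // 255
Console.WriteLine(fromBinary); // 255
Console.WriteLine(1 << 8);     // 256: the count of values, not the largest one
```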

Score: 9

8 bits (unsigned) is 0 thru 255, or (2^8)-1.

It sounds like you are confusing integer vs. text representations of data.
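A short sketch of that distinction (the variable names are mine):

```csharp
using System;

// The value 255 and the text "255" are different representations.
byte number = 255;    // one byte: bit pattern 1111 1111
string text = "255";  // three characters: '2', '5', '5'

Console.WriteLine(number.ToString() == text); // True: formatting bridges the two
Console.WriteLine((int)'2');                  // 50: even the digit '2' is stored as a number
```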

Score: 5

255 = 2^8 − 1 = FF[hex] = 11111111[bin]


Score: 5

"I thought 8 bits was the same thing as just one character?"

I think you're confusing the number 255 with the string "255".

Think about it this way: if computers stored numbers internally using characters, how would they store those characters? Using bits, right?

So in this hypothetical scenario, a computer would use bits to represent characters, which it then in turn used to represent numbers. Aside from being horrendous from an efficiency standpoint, this is just redundant. Bits can represent numbers directly.
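A small illustration of the difference (a sketch; the names are mine):

```csharp
using System;

// Bits hold the number directly; characters only appear when formatting or parsing.
byte direct = 255;               // stored as the bit pattern 1111 1111
byte parsed = byte.Parse("255"); // the string "255" must be parsed to get the number

Console.WriteLine(direct == parsed); // True
```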

Score: 4

The range of values for an unsigned 8 bit integer is 0 to 255, so this is perfectly valid.

8 bits is not the same as one character in C#. In C#, a character is 16 bits. And even if a character were 8 bits, it would have no relevance to the main question.
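The sizes can be confirmed with sizeof:

```csharp
using System;

// In C#, char is a 16-bit UTF-16 code unit; byte is 8 bits.
Console.WriteLine(sizeof(char)); // 2 (bytes)
Console.WriteLine(sizeof(byte)); // 1 (byte)
```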

Score: 2

I think you're confusing character encoding with the actual integral value stored in the variable.

An 8 bit value can have 256 configurations, as answered by Arkain.
Optionally, in ASCII, each of those configurations represents a different ASCII character.
So, basically, it depends on how you interpret the value: as an integer value or as a character.

ASCII Table
Wikipedia on ASCII
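Both interpretations of the same byte, side by side (a sketch with a value of my own choosing):

```csharp
using System;

// The same 8-bit value, interpreted two ways.
byte value = 65;
Console.WriteLine(value);       // 65 (as an integer)
Console.WriteLine((char)value); // A  (as an ASCII character)
```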

Score: 0

Sure, a bit late to answer, but for those who get this in a Google search, here we go...

Like others have said, a character is definitely different to an integer. Whether it's 8-bits or not is irrelevant, but I can help by simply stating how each one works:

for an 8-bit integer, a value range between 0 and 255 is possible (or -128..127 if it's signed two's complement, in which case the first bit decides the polarity)

for an 8-bit character, it will most likely be an ASCII character, which is usually referenced by an index specified with a hexadecimal value, e.g. FF or 0A. Because computers back in the day were only 8-bit, the result was a 16x16 table, i.e. 256 possible characters in the extended ASCII character set.

Either way, if the byte is 8 bits long, then both an ASCII code and an 8-bit integer will fit in the variable's data. I would recommend using a different, more dedicated data type, though, for simplicity (e.g. char for ASCII or raw data, int for integers of any bit length, usually 32-bit).
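The ranges described above can be checked directly against C#'s built-in 8-bit types:

```csharp
using System;

// C#'s 8-bit types: byte is unsigned, sbyte is signed two's complement.
Console.WriteLine(byte.MinValue);  // 0
Console.WriteLine(byte.MaxValue);  // 255
Console.WriteLine(sbyte.MinValue); // -128 (two's complement, not -127)
Console.WriteLine(sbyte.MaxValue); // 127
```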
