
Stupid question about bits and bytes?

GoofKing

I was wondering why tutorials like these about the 8085 or any other CPU architecture always refer to registers as eight bits when that's a byte, if one binary digit equals a bit and eight bits equal a byte. Is there something I'm missing here? :/ I'm also having trouble with basic binary arithmetic like addition, subtraction, multiplication, and division.

I know about one's complement, where you invert all of the bits, and two's complement, where you then add one to the least significant bit ...
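
For example (if I've got that right), with an 8-bit value in C it would look something like this:

#include <stdint.h>
#include <stdio.h>

int main(void) {
    uint8_t x = 0x05;                  /* 0000 0101 = 5 */
    uint8_t ones = (uint8_t)~x;        /* one's complement: flip every bit -> 1111 1010 (0xFA) */
    uint8_t twos = (uint8_t)(~x + 1);  /* two's complement: flip, then add 1 -> 1111 1011 (0xFB) */
    printf("x = 0x%02X, ~x = 0x%02X, ~x+1 = 0x%02X\n", x, ones, twos);
    /* 0xFB read back as a signed 8-bit value is -5 */
    return 0;
}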
 
8-bit - Wikipedia, the free encyclopedia

I should know this stuff. But I don't.

It sounds like registers have to do with how a processor works. Bytes have to do with data storage, processing and transfer.

You want a detailed explanation of working with binary in this thread, or do you want links? I find binary pretty straightforward.
 
I was wondering why tutorials like these about the 8085 or any other CPU architecture always refer to registers as eight bits when that's a byte, if one binary digit equals a bit and eight bits equal a byte. Is there something I'm missing here? :/

Bytes weren't always 8 bits; in fact, historically there are architectures with other byte sizes (for similar reasons the term "octet" is used in networking to refer to an 8-bit byte specifically).

I'm also having trouble with basic binary arithmetic like addition, subtraction, multiplication, and division.

I know about one's complement, where you invert all of the bits, and two's complement, where you then add one to the least significant bit ...
The beauty of storing negative integers in two's complement notation is that addition and subtraction can be performed without doing anything differently than for unsigned integers! It also gives you a single bit to test (the high bit) to see if a number is negative, and is generally nice to work with.

Say you've got a 16-bit register with the value 0xFFFF (i.e. all bits high) and you add 0x0001 to it, it's going to wrap around to 0x0000. In two's complement 0xFFFF represents the value -1, and -1 + 1 = 0! All that's needed is to have different overflow flags: signed overflow for values that go from 0x7FFF to 0x8000 or vice versa, and unsigned overflow for values that go from 0xFFFF to 0x0000 or vice versa.
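
Here's a rough C sketch of that wraparound, using a uint16_t to stand in for the 16-bit register (the flag checks below are just one way to detect the two kinds of overflow):

#include <stdint.h>
#include <stdio.h>

int main(void) {
    uint16_t a = 0xFFFF;                 /* unsigned: 65535, two's complement signed: -1 */
    uint16_t b = 0x0001;
    uint16_t sum = (uint16_t)(a + b);    /* wraps around to 0x0000 */

    /* unsigned overflow (carry): the truncated result is smaller than an operand */
    int carry = sum < a;
    /* signed overflow: both operands have the same sign but the result's sign differs */
    int overflow = (~(a ^ b) & (a ^ sum) & 0x8000) != 0;

    printf("0xFFFF + 0x0001 = 0x%04X\n", (unsigned)sum);   /* 0x0000 */
    printf("unsigned overflow (carry) = %d\n", carry);     /* 1 */
    printf("signed overflow = %d\n", overflow);            /* 0, because -1 + 1 = 0 is fine */
    return 0;
}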
 
8-bit - Wikipedia, the free encyclopedia

I should know this stuff. But I don't.

It sounds like registers have to do with how a processor works. Bytes have to do with data storage, processing and transfer.

You want a detailed explanation of working with binary in this thread, or do you want links? I find binary pretty straightforward.

Kind of like the difference between bits in storage and bytes in memory? I'm trying to teach myself assembly but I'm all over the place when I run into a dead end. By dead end I mean not wanting to waste time trying to get something to work in WINE when I come across a tutorial, or dealing with a command-line assembler that needs tinkering with just to assemble something :/

So far I need to stick with SNES assembly since it's the furthest I've gotten.

Bytes weren't always 8 bits; in fact, historically there are architectures with other byte sizes (for similar reasons the term "octet" is used in networking to refer to an 8-bit byte specifically).


The beauty of storing negative integers in two's complement notation is that addition and subtraction can be performed without doing anything differently than for unsigned integers! It also gives you a single bit to test (the high bit) to see if a number is negative, and is generally nice to work with.

Say you've got a 16-bit register with the value 0xFFFF (i.e. all bits high) and you add 0x0001 to it, it's going to wrap around to 0x0000. In two's complement 0xFFFF represents the value -1, and -1 + 1 = 0! All that's needed is to have different overflow flags: signed overflow for values that go from 0x7FFF to 0x8000 or vice versa, and unsigned overflow for values that go from 0xFFFF to 0x0000 or vice versa.

So no chip manufacturer or producer agreed on some standard definition of what a byte or bit is? :? Okay, I see, so it works just like decimal math when dealing with negative numbers, except we're using two's power notation? I remember binary being anything to the second power, for example 1^2 is 2 and 2^2 is 4 and 4^2 is eight and so on ...

Amazing how eight digits can represent 256 decimal values and sixteen can represent 65K (dunno the exact number lol) values to play around with. I first got into this computer stuff when I took a Vo-tech class on computer repair in the tenth grade :)


Next on my list of complicated things to learn is how Boolean algebra applies to logic gates and so on. I remember a little from reading a couple of old books on how electronics work that I borrowed from my dad. Transistor-Transistor Logic is kind of fun actually :)
 
So no chip manufacturer or producer agreed on some standard definition of what a byte or bit is? :?
^ That doesn't sound right :confused: Somewhere in the last 20 years (or possibly a wee bit longer), 8-bit bytes became the only size in use.

A "bit" has always meant a binary digit, I believe. I doubt you find any confusion as to what a bit is, though it's possible you could encounter it used in ways you might not expect (eg. see non-integer numbers bits in Information Theory)

Off the top of my head, I recall a "byte" might originally have been defined as the smallest chunk of bits that a device could address or work with, if that helps you to understand the original (not necessarily 8-bit) usage better :D

As 8-bit bytes definitely did become the de facto standard a long time ago, I guess the resources on the 8085 architecture might be rather old; and if not, detailed chip documentation is one place where you might expect low-level details to be spelled out, such as that a byte should be taken as the now-conventional 8 bits.

A related concept you'll encounter is that of a "word"... unlike bytes, the size of a word does differ between architectures, and it's a little messy. It sometimes has its original meaning (something like the size a processor naturally wants to deal with, e.g. a 16-bit processor might be said to have 16-bit words, while a 32-bit processor would have 32-bit words), except on the x86 family, where its legacy use on the 16-bit processors pretty much cemented the term as meaning 16 bits ever since (so on x86, 32 bits are double words or "dwords", and 64 bits are quad words or "qwords").
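
If it helps to see it concretely, the x86-flavoured names line up with C's fixed-width types roughly like this (the typedef names here are just illustrative, not anything official):

#include <stdint.h>
#include <stdio.h>

typedef uint8_t  BYTE;   /* 8 bits  */
typedef uint16_t WORD;   /* 16 bits - a "word" on x86, for the legacy reasons above */
typedef uint32_t DWORD;  /* 32 bits - "double word" */
typedef uint64_t QWORD;  /* 64 bits - "quad word" */

int main(void) {
    printf("BYTE=%zu WORD=%zu DWORD=%zu QWORD=%zu bits\n",
           sizeof(BYTE) * 8, sizeof(WORD) * 8, sizeof(DWORD) * 8, sizeof(QWORD) * 8);
    return 0;
}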

Okay, I see, so it works just like decimal math when dealing with negative numbers, except we're using two's power notation? I remember binary being anything to the second power, for example 1^2 is 2 and 2^2 is 4 and 4^2 is eight and so on ...
I hope you're not confusing two's complement notation with the powers of two?! o_O They are not the same at all...

The powers of two in binary are the equivalents of the powers of 10 in decimal (and the powers of 16 in hexadecimal). They are the numbers where you start needing to use an extra digit,
e.g.
Binary: 0, 1, 10, 11, 100, 101, 110, 111, 1000 ...
Decimal: 0, 1, 2, 3, 4, ... 9, 10, 11, 12, ... 99, 100, 101 ...
Hex: 0, 1, 2, 3, ... 9, A, B, ... E, F, 10, 11, 12, ... 1F, 20, ... FF, 100

(Hopefully you see the pattern/s)
Note that the power of the base tells you when you will need another digit, and also the number of values you can represent with a given number of digits provided you include 0; while the power of the base minus one gives you the highest integer you can represent with that number of digits (e.g. 10 to the 2nd power minus 1 is 99, the largest 2-digit decimal integer).
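
A quick C sketch of that rule, counting in powers of two (this also answers the 65K question from earlier: 16 bits give 65,536 values, the largest being 65,535):

#include <stdio.h>

int main(void) {
    /* n digits in base b give b^n distinct values (including 0), and a largest value of b^n - 1 */
    for (int bits = 1; bits <= 16; bits *= 2) {
        unsigned values = 1u << bits;    /* 2^bits */
        printf("%2d bits: %6u values, largest %u\n", bits, values, values - 1);
    }
    return 0;
}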

Now that that's out of the way: two's complement is how we store negative numbers in a tricky way... hmmm, let's imagine how a two's complement-like representation might be used in decimal...

OK, say we have a fixed number of decimal digits; I'll use 3, so we can represent the positive numbers 0-999. Now imagine we've already got a simple processor that handles addition and subtraction of our 3-digit decimals, but we want to extend it to handle negative numbers. We could duplicate the circuits that handle positive integers, but we don't want to (e.g. it'll increase cost), so can we cheat and use what we have somehow?

What if we pretend that 999 is -1, and 500 is -500, but the numbers 0 - 499 retain their existing meaning?

998 (-2) + 7 (+7) = 1005 (+5)
but we only have 3 digits, so the upper digit is lost, meaning the answer is just 5. So, as 998 + 7 = 5 (if we lose the 4th digit because our 3-digit decimal processor truncates it) and -2 + +7 = +5, I hope you can see that our mapping works and means we don't have to add circuits to handle addition or subtraction differently for signed numbers ;) !
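
If you want to play with it, here's a toy C version of that 3-digit decimal processor (the truncation is just taking the result mod 1000):

#include <stdio.h>

/* Toy 3-digit decimal "processor": every result is truncated to 3 digits (mod 1000).
   We read 500-999 back as negatives: 999 = -1, 998 = -2, ..., 500 = -500. */
static int add3(int a, int b) { return (a + b) % 1000; }
static int as_signed(int v)   { return v >= 500 ? v - 1000 : v; }

int main(void) {
    int result = add3(998, 7);                      /* 1005 truncated to 005 */
    printf("998 + 7 -> %03d\n", result);            /* 005 */
    printf("signed view: %d + 7 = %d\n", as_signed(998), as_signed(result)); /* -2 + 7 = 5 */
    return 0;
}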

...Next on my list of complicated things to learn is how Boolean algebra applies to logic gates and so on. I remember a little from reading a couple of old books on how electronics work that I borrowed from my dad. Transistor-Transistor Logic is kind of fun actually :)
Awesome ;) ! You might enjoy reading up on how binary addition/subtraction can be implemented in hardware.
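
For a taste of it, here's a sketch of a one-bit full adder built from nothing but AND/OR/XOR (the same gates the hardware uses), chained into a little 4-bit ripple-carry adder:

#include <stdio.h>

/* One-bit full adder from logic gates only:
   sum       = a XOR b XOR carry_in
   carry_out = (a AND b) OR (carry_in AND (a XOR b)) */
static void full_adder(int a, int b, int cin, int *sum, int *cout) {
    *sum  = a ^ b ^ cin;
    *cout = (a & b) | (cin & (a ^ b));
}

int main(void) {
    /* Add two 4-bit numbers one bit at a time, like a ripple-carry adder: 6 + 7 */
    int a = 0x6, b = 0x7, carry = 0, result = 0;
    for (int i = 0; i < 4; i++) {
        int s;
        full_adder((a >> i) & 1, (b >> i) & 1, carry, &s, &carry);
        result |= s << i;
    }
    printf("6 + 7 = %d (carry out %d)\n", result, carry);   /* 13, carry 0 */
    return 0;
}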
 
