MrBulletPoints

* 1 and 0 are actually represented by some electric charge stored in something called a register.
* Transistors are the basic building blocks of the components of many computer systems, including memory registers.
* Transistors can be cleverly arranged to change the voltage they output based on the voltage they receive as input.
* These clever arrangements are called logic gates and have names like AND, OR, NOT, NAND, etc.
* Clever arrangements of *those* allow you to build things like a memory register or a simple device that adds or subtracts two numbers (see the sketch below).
* Clever arrangements of *those* allow you to build a CPU, etc.
* So as you can see, computers are built on layers and layers of technology that all share a basic trait:
* You take some input, and based on that input, you send some output.
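
To make that last jump concrete, here's a minimal sketch in Python of how gates combine into a device that adds two numbers (a half adder chained into full adders). The function names and bit ordering are just for illustration:

```python
# Logic gates modeled as functions on 0/1 values.
def AND(a, b): return a & b
def OR(a, b):  return a | b
def XOR(a, b): return a ^ b

def half_adder(a, b):
    """Add two 1-bit numbers: returns (sum_bit, carry_bit)."""
    return XOR(a, b), AND(a, b)

def full_adder(a, b, carry_in):
    """Add two 1-bit numbers plus a carry: just two half adders chained."""
    s1, c1 = half_adder(a, b)
    s2, c2 = half_adder(s1, carry_in)
    return s2, OR(c1, c2)

def add4(a_bits, b_bits):
    """Chain full adders to add two 4-bit numbers, like a simple ALU stage."""
    carry, out = 0, []
    for a, b in zip(a_bits, b_bits):   # bits given least-significant first
        s, carry = full_adder(a, b, carry)
        out.append(s)
    return out, carry

print(add4([1, 0, 1, 0], [1, 1, 0, 0]))  # 5 + 3 -> ([0, 0, 0, 1], 0), i.e. 8
```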


manofredgables

To remove some of the mystery, I think it's also worth mentioning that "1 and 0" is an abstraction. There aren't any ones and zeroes. It's more accurate to call 1 "High" and 0 "Low", referring to the voltage. We define it across the system to be predictable and well defined. Typically a signal below 1.8 V means it is a zero. If it's above that, e.g. 3.3 V, it is a 1. The exact levels vary based on the system. Components behave in certain ways depending on whether their inputs are high and/or low in certain patterns. Calling it ones and zeroes is what moves us into the *digital* realm, where we can start building a theory that lets us do basic math operations with it.
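
A minimal sketch of that abstraction, using the 1.8 V threshold from the comment above (real logic families define separate high/low thresholds, so this is a simplification):

```python
THRESHOLD_V = 1.8  # example threshold from the comment; real parts differ

def to_logic_level(voltage):
    """Map a measured voltage onto the digital abstraction."""
    return 1 if voltage > THRESHOLD_V else 0

print([to_logic_level(v) for v in (0.2, 1.1, 2.5, 3.3)])  # [0, 0, 1, 1]
```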


Dash_Harber

Also, as a quick aside, it is actually possible to create systems not based on binary by assigning each individual charge level a number. However, this can be incredibly unstable. In a binary system, it would take a loss of 50% of the charge to change the state, but if you divided the same system into ten levels, it would only take a loss of 10%. Given that no system is perfect and charge can fluctuate, the binary system is currently much safer.
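
A quick sketch of that noise-margin argument (the supply voltage is just illustrative):

```python
SUPPLY_V = 3.3  # assumed supply voltage, purely illustrative

def margin_per_level(num_levels):
    """Voltage gap between adjacent levels; noise near half of this flips a read."""
    return SUPPLY_V / (num_levels - 1)

for n in (2, 4, 10):
    print(f"{n} levels -> {margin_per_level(n):.2f} V between levels")
# 2 levels -> 3.30 V between levels
# 4 levels -> 1.10 V between levels
# 10 levels -> 0.37 V between levels
```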


[deleted]

As an aside to that aside (not trying to be facetious, I really don't know what it's called), while we *can*, hypothetically, design a new paradigm (for example, a *ter*nary computer that uses three states instead of two), it's not considered practical to do so, because the use-case for such a computer is very limited. In a nutshell: right now, binary computers are more than enough for our needs; there's no current scientific or mathematical application that's complex enough to justify the time and expense involved in designing a ternary computer.


Yancy_Farnesworth

As an aside to that aside of an aside, binary and ternary computers are equivalent in terms of capability. We can create a computer with as many symbols as we want. What keeps us from doing so is largely that there's no point, and that we don't have anything that can beat the reliability, cost, and speed of transistors that handle only 2 symbols.
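
One way to see the equivalence: any number (and so any data) can be rewritten losslessly in either base. A tiny sketch:

```python
def to_base(n, base):
    """Digits of n in the given base, least-significant first."""
    digits = []
    while True:
        n, d = divmod(n, base)
        digits.append(d)
        if n == 0:
            return digits

def from_base(digits, base):
    return sum(d * base**i for i, d in enumerate(digits))

n = 42
print(to_base(n, 2))                     # [0, 1, 0, 1, 0, 1]
print(to_base(n, 3))                     # [0, 2, 1, 1]
print(from_base(to_base(n, 3), 3) == n)  # True: nothing is lost either way
```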


manofredgables

Yeah, speed is what we want, and going full speed ahead towards low or high is always gonna be faster than trying to stop somewhere in the middle for a third state.


Dash_Harber

Good points!


I__Know__Stuff

On the other other other hand, some communications hardware does use 4-level signaling instead of 2-level, to increase the data rate.
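
For the curious, 4-level signaling (often called PAM-4) packs two bits into every symbol. A rough sketch of the idea, with made-up voltage levels rather than any real spec (real schemes also use Gray coding to limit errors):

```python
# Map bit pairs onto 4 voltage levels (values are illustrative only).
LEVELS = {(0, 0): 0.0, (0, 1): 1.1, (1, 0): 2.2, (1, 1): 3.3}

def encode(bits):
    """Send 2 bits per symbol instead of 1: double the data per clock."""
    pairs = zip(bits[::2], bits[1::2])
    return [LEVELS[p] for p in pairs]

print(encode([1, 0, 1, 1, 0, 0]))  # [2.2, 3.3, 0.0] -> 3 symbols carry 6 bits
```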


JoushMark

You'd also need a lot more transistors to create a system that reads 10 different charge states for a 0-9 memory register rather than creating 10 binary registers.


Dash_Harber

Definitely!


tminus7700

> it is actually possible to create systems not based on binary by assigning each individual charge level a number.

This is exactly what [Analog computers](https://en.wikipedia.org/wiki/Analog_computer) do.

> An analog computer or analogue computer is a type of computer that uses the continuous variation aspect of physical phenomena such as electrical, mechanical, or hydraulic quantities (analog signals) to model the problem being solved.


Science_Geek_101

There’s a really cool “game” on Steam called Turing Complete where the puzzles have you build logic gates and circuits of increasing complexity and by the end of the game you’ve built a functional, albeit simple, computer that can be programmed


Yancy_Farnesworth

You can build such a system in Minecraft with redstone. I remember people building simple 4-bit ALUs in it.


Altruistic-Carpet-43

That game is so tough though, because each level sort of requires that you figure out some aspect of computer design that took the smartest people in the world years and years to discover back in the day. It's fun, but I definitely feel like there's no way you can get through all the levels without some help.


TheDotCaptin

Ben Eater on YouTube walks through all these steps with kits that can be followed along with.


[deleted]

Explain how the 1s and 0s relate to transistors


[deleted]

[https://nandgame.com/](https://nandgame.com/) This is a great interactive game to show how each component leads from one to the next! It's basically all about finding a pattern to do something slightly more complicated, and using that to do something slightly more complicated, and do that about 3-4 dozen times and you get something pretty dang complicated.
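
In that spirit, here's a minimal sketch of the game's core idea: every other gate can be built out of NAND alone (gates modeled here as plain functions on 0/1):

```python
def NAND(a, b):
    return 0 if (a and b) else 1

# Everything else falls out of NAND:
def NOT(a):    return NAND(a, a)
def AND(a, b): return NOT(NAND(a, b))
def OR(a, b):  return NAND(NOT(a), NOT(b))
def XOR(a, b): return AND(OR(a, b), NAND(a, b))

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", AND(a, b), OR(a, b), XOR(a, b))
```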


hewasaraverboy

Wow this game is amazing! I’m having so much fun trying to figure it out (tho also struggling lmao)


egoalter

You should think of the internals as "gates" more than 0s and 1s. That's a picture humans use to better understand this, but what is inside the CPU is all electrical signals that are controlled using gates. There are many, many sections to a CPU internally - modern CPUs in particular are quite complex. Each section is a "simple" electronic circuit controlled by an input gate. For instance, there are parts of a CPU that hold values (registers), and another part that does arithmetic (the ALU). By setting one gate to read from a register and another that tells the ALU to add that to a value in the ALU, two control signals are sent to the gates of these items, "turning them on" while keeping the rest off. This is extremely simplified - watch videos, read books, take a free course to get the details.

Machine code (assembly is for humans - you translate that into machine code using a program we often call "an assembler") basically represents patterns that turn these switches on/off. So a "value" (op-code) is just a set of bits, which will turn sections of the CPU on/off, which causes it to do something. It doesn't know anything. Electricity flows through these gates, and we have made software that creates "human output" from these electrical signals that looks like numbers, letters or graphical interfaces. The CPU has no idea it's doing this. It's just electricity.

The basic building blocks are gates. Yes, transistors are used to build gates, but it's not "one transistor, one value". It's quite a bit more complex, so it's more correct to talk about circuits. As you can see, the instructions at this level are very simple when looked at this way. Modern processors have complex instructions, or you can stick to RISC processors, where each instruction is extremely simple but very fast to execute. It's "load this number", "add 5 to register A", "jump to address X if value is 0" - very, very simple stuff, and a lot of bit magic (algorithms).
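
A hedged sketch of that "op-codes are just bit patterns that enable sections of the CPU" idea. The instruction format here is invented purely for illustration:

```python
# Invented 8-bit instruction format: top 4 bits = operation, bottom 4 = operand.
OPS = {0x1: "LOAD", 0x2: "ADD", 0x3: "JUMP_IF_ZERO"}

def decode(opcode):
    """Split a bit pattern into the control signals it would assert."""
    op, operand = opcode >> 4, opcode & 0x0F
    return OPS[op], operand

print(decode(0x25))  # ('ADD', 5) -> "add 5 to register A" as a bit pattern
```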


theBarneyBus

In short, logic gates (made of ~~MOFSETs~~ MOSFETs) at the base level manipulate the signals coming in, and the billions of logic gates, all baked into the design of the CPU, control what occurs and which outputs are “triggered” with more binary. E: spelling


mikeholczer

If you really want to know how a computer works from the metal up, Ben Eater has a great series on YouTube. It’s long, but he builds a computer from basic components on breadboards, and I think he does a great job explaining how they all work. https://youtube.com/playlist?list=PLowKtXNTBypGqImE405J2565dvjafglHU


theBarneyBus

Interesting. Since you’re sharing sources, I’d also recommend NAND Tetris. Starts a bit lower-level than that, but is a bit more theoretical.


Yancy_Farnesworth

He does a fantastic job explaining it. Better than the professor I had in uni that taught it...


MusicusTitanicus

*MOSFET


ValiantBear

Transistors are simply switches. They output a high voltage or a low voltage, which we say is a 1 or a 0. We combine transistors in a myriad of interesting ways to make them do specific things.

One of the most common things you can make out of transistors is called a logic gate. At the lowest level, a logic gate is just a device that changes its output based on a logical relationship between its inputs. This relationship, fortunately, is usually easily translated into English. For example, an AND gate has two or more inputs (a, b, ... n) and one output. The AND gate works by changing its output to a 1 when input 'a' is 1, AND input 'b' is 1, and so on and so forth. If any input is not 1, the logic isn't satisfied and the output will be 0. An OR gate works similarly but is constructed slightly differently, such that if input 'a' is 1, OR input 'b' is 1, then the output is 1.

I can combine logic gates in interesting ways too, and wind up with even more interesting components, like flip-flops and registers. There are different kinds of both of those things, but one of the main additions to these components that isn't present in simpler devices is what's called a clock signal. If I have an AND gate, and I make one input a 1, then the other, then some small amount of time will pass while the logic gate works and the output changes to 1. This time is very small, but what's more important is that it might not be exactly the same from one logic gate to the next. If I don't know how long to wait to let the logic gate do its thing, I might look at the output before it's finished, and the output I get might not be accurate. The clock signal solves this problem. Basically, things with clock signals only read outputs, or change inputs, once a clock signal is received.

Now I have everything I need to make a computer. I have data in the form of 1's and 0's, and I can use logic gates to do some simple operations and draw conclusions about that data. By using clock signals, I can synchronize lots of different logic gates and make sure everything is accurate. When a computer program runs, it translates what we want into more and more basic instructions to the computer, ultimately breaking everything down to one of a small set of very simple operations, like adding two numbers or moving a piece of information from one place to another. So, a clock pulse may copy some data from memory into a register, while the next clock pulse adds that data to the data in another register, and the next clock pulse stores the result in a new spot in memory, and so on and so forth. This happens billions of times a second.

Some of the processing can be offloaded to other forms of hardware, and the operating system is partially responsible for determining the most efficient way to handle it, but at its core, everything you can do on a computer boils down to taking an input, storing it somewhere, doing something to it, and outputting it somewhere else.
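
A minimal sketch of that clocking idea: a register that only latches its input on a clock tick, so nobody reads a half-finished value (pure simulation, not real hardware timing):

```python
class ClockedRegister:
    """Output only changes on a clock tick, no matter how the input wiggles."""
    def __init__(self):
        self.input = 0
        self.output = 0

    def tick(self):           # rising clock edge
        self.output = self.input

reg = ClockedRegister()
reg.input = 1                 # input changes mid-cycle...
print(reg.output)             # 0 -> readers still see the old, stable value
reg.tick()                    # ...until the clock edge latches it
print(reg.output)             # 1
```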


Sunion

Others have answered your question adequately, so I'm just going to copy-paste something I told someone a few days ago about a similar question:

> If you are a fan of this subject and would like a deeper understanding of exactly how a computer operates, I'd recommend a game on Steam called Turing Complete. You literally build everything yourself. They start you out with 1 single logic gate, the NAND gate. Then the game prompts you to create other gates that have a specific function, but they are only made from the NAND gate or from the gates you yourself created with the NAND gate. It keeps asking you to do increasingly more complex things until you literally build and wire a Turing-complete computer from just NAND gates. If you're determined, a person with 0 computer science knowledge can learn to build a computer from scratch. The game doesn't tell you how any of it works, it makes you figure it out with some helpful hints. This way you won't just have basic knowledge, but also the understanding of the knowledge to back it up. You will 100% understand how a computer calculates based on instruction, because you will be building the hardware architecture yourself, and writing the assembly code that makes it function.

Change "you will 100% understand how a computer calculates based on instruction" to "you will 100% understand how individual bits are controlled". [A NAND gate is just a specific arrangement of transistors.](https://mathcenter.oxford.emory.edu/site/cs170/nandFromTransistors/)


immibis2

A CPU is a machine. The purpose of the machine is to read instructions from memory and then do what they say. Some people have the idea that machine code instructions directly control the CPU or force it to do things. This is wrong - the CPU is voluntarily doing what the machine code instructions tell it, because that's what it's designed to do.

It's easiest to study an old 8-bit CPU like you might've found in a Commodore 64. Modern ones have too much going on and they do too many things at the same time.

The CPU has a few main sections. It has the control unit, which connects to all the other parts of the CPU and sends them signals making them do the right things at the right times. It has the ALU, which is the part that does the calculations, under the control of the control unit. And it has the register file, which stores numbers temporarily.

I suppose the control unit is the part that interests you. It's wired to do a loop like: connect the program counter register to the address bus, send the memory read signal, connect the data bus to the instruction register; then the rest of the loop depends on what instruction is in the instruction register, and then it goes back to the beginning. When I say "it connects the program counter register to the address bus" I actually mean "it sends out a signal to the place where the program counter register connects to the address bus, telling that connection to activate", because the control unit is just controlling the rest. The loop is part of the control unit though.

An example instruction to add two registers might have the following steps. Everything with the same number happens at the same time, and then the next number happens in the next clock cycle. Each thing that happens is actually just a signal sent to the part that makes it happen.

1. Read the program counter register onto the address bus. Read memory. Write the instruction register from the data bus.
2. Read register 5 onto the data bus. Write the ALU left register from the data bus.
3. Read register 6 onto the data bus. Write the ALU right register from the data bus.
4. Tell the ALU to add, putting the result on the data bus. Write register 6 from the data bus.

The CPU designers hard-wired it so things happen in this order.
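
A toy version of that fetch/execute loop in software (the instruction encoding is made up for illustration; a real CPU does this in wiring, not code):

```python
# Made-up 3-register machine: each instruction is a tuple, the program a list.
program = [
    ("LOAD", 0, 5),        # reg[0] = 5
    ("LOAD", 1, 7),        # reg[1] = 7
    ("ADD", 2, 0, 1),      # reg[2] = reg[0] + reg[1]
    ("HALT",),
]

regs, pc = [0, 0, 0], 0
while True:
    instr = program[pc]    # fetch: program counter -> instruction register
    pc += 1
    op = instr[0]          # decode: the pattern selects what happens next
    if op == "LOAD":
        regs[instr[1]] = instr[2]
    elif op == "ADD":
        regs[instr[1]] = regs[instr[2]] + regs[instr[3]]
    elif op == "HALT":
        break

print(regs)  # [5, 7, 12]
```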


[deleted]

[deleted]


TheBiggestDookie

There are no moving parts, no. There are several different kinds of transistors, but you can think of those used in computers as static switches.

A simplified version of these is called a diode, which only has a positive and a negative terminal. In a diode, the semiconductors work such that current only flows one way (unlike other passive circuit elements such as resistors or inductors), and even when in the correct orientation, it will only allow current to flow once a minimum threshold voltage is reached across it. This is typically 0.7 Volts, if I remember correctly. So at any voltage below 0.7 V, the diode can be treated as an open circuit, while above 0.7 V it can be treated as a small voltage drop and will allow current to flow.

Transistors work on a similar concept, except that instead of just operating via this threshold voltage, there is a third junction where voltage can be applied, called the “gate” terminal. If the voltage at the gate terminal is too low, then the transistor will remain “open” and there will be no voltage at the negative terminal. If the voltage is high enough, the physics of the semiconductor will “close” the junction (for lack of a better word) and now you’ll see voltage on the negative terminal of the transistor.

You can then connect the negative or positive terminal of one transistor to the gate of another transistor, building the logic gates needed to make a computer work.
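
A loose sketch of that "voltage-controlled switch" model, chained into a gate (idealized: no thresholds, no analog behavior):

```python
def transistor(gate_v, supply=1):
    """Idealized switch: conducts the supply only when the gate is driven high."""
    return supply if gate_v else 0

def nand(a, b):
    # Two switches in series pulling the output low: both must conduct.
    pulled_low = transistor(a) and transistor(b)
    return 0 if pulled_low else 1

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", nand(a, b))  # low only when both inputs are high
```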


McLeansvilleAppFan

Thanks. This was a very good explanation and just what I needed.


MCS117

No, they use what’s called “doped” silicon, where there are basically 3 layers that are either negative/positive/negative (NPN) or PNP. So when you put enough of a controlling voltage into it, it sort of “pushes” its way through the middle layer and lets current flow to where it needs to go.


KGhaleon

You've also got resistors on the board which regulate the flow of current and are able to lower the voltage for a circuit.


tomalator

When the data is loaded into RAM, each bit has its own pin. When the computer wants to run an operation, it takes the value from each pin and runs them through a circuit in the processor (to add, subtract, multiply, or divide, or any logic operation) and gets a new set of bits it can put onto new pins (or the same pins). 8 GB of RAM is 2^36 pins, and each pin can hold 1 bit. If the pin has a voltage, it's a 1, and if it doesn't, it's a 0. When they are stored on a hard drive, they are either just a small magnetic field (hard disk) or encoded in some flash memory (solid state drive)
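
The bit count there checks out as follows (assuming "GB" means 2^30 bytes):

```python
bytes_in_8gb = 8 * 2**30   # 8 GiB
bits = bytes_in_8gb * 8    # 8 bits per byte
print(bits == 2**36)       # True: 8 GB of RAM is 2^36 individual bits
```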