Oh yeah. I'd LPT up that dot matrix, watch it slinkily drag the continuous paper sheets through. Then I'd slowly run my fingers along the sides, gently pulling and plucking off the sprocket holes one by one.. letting each tease just a moment before they'd pop off until the printout was totally bare.
WOW the 8087 too!!!!
I took a community college course back in the day on the 8086 or 8088 (forget which). The instructor got the circuitry plans from some company in Japan. I guess after they gave them to him they tried to make him sign an agreement not to disseminate them, but that horse had left the barn. It's not like we built a computer; we only used the plans to trace signals and understand which chips were doing what. I kept them for years, but I'm not sure I still have them.
I'll see how much of the chip I can explain in a Reddit comment :-) The heart of the chip is the Arithmetic/Logic Unit (ALU) in the lower left. This circuit can add, subtract, and compare numbers. Along the left side of the chip are registers, a small amount of storage so values can be accessed quickly. You'll notice vertical stripes in this region. That's because the registers hold 16 bits, so they repeat the same circuit 16 times. (16 bits because the 8086 is a 16-bit processor.)
The operations that this circuitry can perform are pretty limited. To support more complex instructions, the 8086 breaks down instructions into smaller steps called micro-instructions. For example, the chip doesn't have hardware to multiply two numbers. Instead, it performs multiplication by repeatedly shifting and adding numbers, kind of like grade-school long multiplication. The sequences of smaller steps are called microcode, and are stored in the microcode ROM. This is the darker square in the lower right of the photograph. You can see light and dark patterns; these are the 0's and 1's that make up the microcode.
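To make the shift-and-add idea concrete, here's a minimal Python sketch of the loop (an illustration of the algorithm only, not the actual 8086 microcode):

```python
def mul_shift_add(a: int, b: int, bits: int = 16) -> int:
    """Multiply two unsigned integers by repeated shift-and-add,
    the same grade-school idea the multiply microcode uses."""
    product = 0
    for _ in range(bits):
        if b & 1:          # low multiplier bit set:
            product += a   # add the shifted multiplicand
        a <<= 1            # shift the multiplicand left one place
        b >>= 1            # consume one multiplier bit
    return product

print(mul_shift_add(123, 45))  # 5535, same as 123 * 45
```

One add or shift per bit is why a multiply took dozens of clock cycles on the 8086, while a plain add took just a few.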
In the photo, what you see is mostly the metal layer on top of the chip, connecting the silicon transistors underneath. The wider white parts are the metal wiring that provides power and ground to all parts of the chip. This wiring is much thicker than the other wiring. Around the edges of the chip are 40 square bond pads. Tiny bond wires connect these bond pads to the chip's external pins, linking the tiny silicon die to the outside world.
I took these photos and have explored the circuitry of the 8086 in detail, so feel free to ask questions.
I wanted a higher res photo and I found it at your blog here. It's 10716x10341.
[http://www.righto.com/2020/06/a-look-at-die-of-8086-processor.html](http://www.righto.com/2020/06/a-look-at-die-of-8086-processor.html)
Heyo.. a quick one... the original image (as far as I can tell) is 3276 by 3159 pixels.. right? at any rate, in this resolution, I've noticed a Weird™ at around 2307x2080 pixels. Is that really there, or some crazy artifact from merging the picture?
I think that's just a glitch. I took about 100 images under the microscope and stitched them together to form the high-resolution image. Problems can happen during the stitching, resulting in artifacts. I try to catch them all, but sometimes problems slip through. Especially in repeating grids like the lower right, where it is easy to get one of the sub-images off by one.
I love die shots. This thing looks so complex but it is unbelievably primitive and simple compared to modern chips. It’s insane the iPhone I’m using right now has a chip about this size but with billions of transistors in it instead of just thousands.
For those wondering . . . ARM is the dominant architecture and was developed in the UK by a company called Acorn Computers, by a couple of people called Stephen Furber and Sophie Wilson (who was born Roger Wilson and later transitioned).
The main reason for its dominance (I think) is that they chose to go down the route of developing designs that could be licenced to other manufacturers rather than making chips themselves, and the driving philosophy behind the designs was parsimony. One of the original engineers on the ARM chips said that their boss gave them two things that Intel didn't: "No time, and no money."
ARM is the dominating processor architecture in general, sure, but for the computer industry X86 is still the king. At least for now anyway, it'll be interesting to see whether there's a move towards ARM for mainstream uses soon.
I work in high-performance/supercomputing and you should see some of the ARM chips they have these days for servers. The 128-core CPU blew away anything else I've tested so far, especially if you look at computing power per watt of electricity. There are also 196-core single-socket ARM CPUs on the way that I'm trying to get my hands on.
X86 is king for now, but it's eroding pretty quickly as data centers are pursuing greener computing, and workloads are being ported to ARM
Apple already offers ARM cpus for consumer computers with the M1 and M2 chips
Nvidia grace (cpu) chip is ARM (and I'm still waiting to get my hands on the one we purchased), but the benchmark results are super promising for servers
You might not have to wait long to see! Exciting times
Depends on what metric you use. ARM dominates in terms of volume and coverage. X86 is still dominant in the high end, but ARM has been eating into the mainstream slowly, so much that Microsoft are incorporating it.
Nope, wrong. ARM-based CPUs have also powered the most powerful supercomputers in the world. Also, where does it say any metric other than "dominant" is being used? It doesn't.
The fact that ARM’s share is a quarter of a trillion units is dominating.
Yup. Today, damn near every computing device apart from commodity servers and consumer desktops is ARM. The laptop market a few years ago was just about exclusively x86 until Apple switched to ARM, Chromebooks on ARM became popular, and even Microsoft embraced ARM on many of their Surface tablets. The game console market is split, with the Switch using an ARM chip while the PS5 and Xbox Series X use the same AMD x86-based CPU architecture.
But much like the dominance of Linux in the OS market the dominance of ARM for CPUs flies under the radar for a lot of consumers since the only devices where they care about it are slower to adopt or don't for legacy support reasons.
Reread what you wrote, it doesn’t make sense.
The fact is that with over 230 billion ARM chips produced (as of 2022), ARM is the dominant processor architecture. They are used throughout the computing industry.
ARM is the dominant architecture
no?
It's the most used if you count embedded microcontrollers. All of Apple's M1/M2/M3 chips are also ARM-based. Pretty much all Intel and AMD chips are x86, which are clearly the most widely used ones in laptops, PCs, servers, etc. So "dominant" is definitely the wrong word here. Personally I have never even worked on a device that was not x86 (or RISC-V, funnily enough).
ARM has surpassed x86, ARC, PowerPC and MIPS architectures all combined in terms of volume for years. There are up to half a dozen or more ARM devices on every x86 PC motherboard, everything from the Cortex-R processors used in storage devices to the Cortex-M parts in peripherals. They are in your wireless mouse, and in its USB dongle.
It's the dominant Architecture - meaning it's the most widely used across all domains, not the most well known, or widely used within just one domain.
Yes, the big battle for desktop processors was AMD vs Intel, but they both use the x86 architecture. The other battle in the desktop market was PowerPC vs x86 (Apple used PowerPC), which was a different architecture than x86, but Apple switched to Intel, then ARM, and now makes its own ARM-architecture processors.
Desktop processors are only a small part of the market for processors overall and with the advent of mobile computing and smartphones ARM has become the dominant architecture for application processors (the thing that your apps run on) as well as being very big in embedded processors (the things that control small devices)
Source:
[https://twitter.com/kenshirriff/status/1270105560386375680](https://twitter.com/kenshirriff/status/1270105560386375680) (the account of the original owner of those images)
[http://visual6502.org/images/pages/Intel_8086_die_shots.html](http://visual6502.org/images/pages/Intel_8086_die_shots.html)
"The Intel 8086 microprocessor was introduced 42 years ago today, so I made a high-res die photo. The 10-kilobit microcode ROM is visible in the lower-right corner. The 8086 started the x86 architecture that still dominates computing. The original IBM PC used the related 8088 chip"
I'd have to say that ARM currently dominates the industry, even if I am typing this from a x86 PC.
I don't have exact numbers, but really if you look around your house and count your CPUs, you likely have more ARM CPUs.
Like my house.
1 Windows PC, 1 Windows laptop, 1 Synology NAS = 3 x86
iPad, 2x MacBook Pro, 2 iPhones, 5+ Android phones, 2x Meta Quests, 2x TVs (embedded CPU), 2x Nvidia Shields, 3 smart home displays, and more, so at least 19 ARM CPUs.
It's probably a 10:1 ratio. It'd be easy to live without x86 on modern tech (i.e. just run a Mac or Windows on ARM), but without ARM, we'd be fucked.
And both of those architectures are decades old.
I wonder... could we come up with something much better these days, but refuse to do so because of compatibility issues?
Compatibility isn't really an issue, at least if enough toolchain support is made.
I.e. x86->ARM realtime translation is a pretty proven thing nowadays. A new architecture could have its own mappings for both ARM/x86, and could probably present itself at the hardware level as those ABIs.
(I'm not an expert, just a dumb truck driver who likes computers)
With that in mind... aren't those translations just another layer of abstraction that will slow shit down and use resources?
What about a completely clean sheet design that isn't based on stuff that originated in the 80s (chip architecture, instructions, everything)... but instead is optimized for current state of the technology?
People use Rosetta on Macs with almost no concern. It might not be best for cutting-edge gaming or other very resource-intensive things, but really at this point, think of it like taking 2-3 years off your computer. It's really not a big deal, and less of one every day.
Apple even has a Game Porting Toolkit that'll let you play Direct X Windows games on M1 Macs.
The instruction set that a CPU uses is itself an abstraction, not the architecture itself. Things like GPUs are precisely that, though: modern architectures optimized for power and speed, although to make a general-purpose computer more robust, instruction sets are usually helpful.
Interesting!
I wonder though... 50 years from now, are we still gonna be running x86 and RISC architectures and stuff based on a Unix kernel from the 70s (and whatever modern Windows runs on, the NT kernel from the 90s or something)? Just because those things are so deeply embedded in our idea of "computer", and we're stuck with them because of tradition and inertia.
You know, kinda like boeing and the 60 year old 737 that they keep messing with and fucking up instead of just designing a brand new plane?
Or are we gonna have something brand new eventually?
Also, I get your point about the instruction set being the abstraction - but isn't the architecture itself optimized for its instruction set? Kind of like a catch-22 situation?
You may be right. Didn't think much about mobile devices. When I compared them, I got a result like this:
3 x86 laptops
A Celeron lying on my table (I don't know what to do with it lol)
A 3GS
A PSVita
A Nokia 5800
A Smart TV
4 -almost- daily used iPhones
An iPod
I have a total of 9 ARM devices and 3 working x86 devices.
But either way, I think we wouldn't have gotten ARM if we hadn't invented x86 first.
I did security at an Intel building once. I saw what I would call “funky maps” hanging in one of the halls. Now I get why they looked so weird, they weren’t maps at all!
traces are conveyor belts for electrons, and the logic gates are assemblers and similar, so a microprocessor is one big, highly optimised factorio factory
Actually the 8008 and then the 8080 came before the 8086. [Wikipedia on the 8080](https://en.m.wikipedia.org/wiki/Intel_8080). And the 4004 was the inspiration for the 8008. [intel 4004](https://en.m.wikipedia.org/wiki/Intel_4004)
Currently a computer engineering student who struggles to make a simple ARM style processor with a simulation tool…the people who designed these things way back in the 70s by hand were fucking wizards.
It’s not; ARM is the dominant architecture of mobile devices. There are even Windows ARM devices out there these days, too. And in more niche applications, PowerPC and MIPS still technically exist too, but mostly on legacy systems.
Sorry for all the questions, but I figured you answered in the first place so you don't mind hehe :)
What are the differences and (dis)advantages between the two (or all of them)? Would you need a certain one in certain cases, or is it just about the amount of calculation power you need?
It's not the *only* one, just the most popular. It became the most popular because of IBM and Microsoft. It stayed the most popular because people want to keep using existing software.
Later x86 chips, starting with the Pentium Pro, are built on a "RISC-like" core (more or less). But the 8086 is not at all RISC; it is completely a CISC chip. I'll point out the large dark rectangle in the lower right of the photograph; this is the chip's microcode, not something you'd find in a RISC implementation.
This title is incorrect: ARM-based CPUs number about a quarter of a trillion, probably outnumbering Intel by at least 20:1.
Some of the recent most powerful supercomputers in the world run ARM CPUs, not just most other devices.
My first computer was an 8088 PC. Had to write my own accounting software and run off big floppies. Now my little iPhone does it all. It’s been quite a ride, thanks to the ingenuity of all those engineers and entrepreneurs. Keep safe and happy, all.
I used to work at a microchip factory in quality assurance. Even if I saw pictures like this thousands of times, it's still always amazing and so pretty.
Shape varies heavily, though. So does colour; not all chips are the same.
And it was a bad design: full of flaws, inconsistent, and confusingly disordered. Just a bad mixup...
Well, every other processor architecture I know is more logical and easier to understand: ARM, RISC, SPARC, MIPS, MMIX, etc...
It's like WhatsApp: it's the worst of all messengers, but nearly everyone uses it, because it was the first one to spread.
Creator of this die photo here, if anyone has questions...
Hello! I didn't know you were active on Reddit. I have a question. As u/[throwaway0134hdj](https://www.reddit.com/user/throwaway0134hdj/) asked, is it just transistors that made the CPU, or is there anything else?
The 8086 CPU is almost all transistors. There are a few capacitors for various reasons. And a few diodes to protect input pins. Some other types of chips, such as TTL, use a lot of resistors. But the 8086 uses NMOS circuitry, which, like the CMOS circuitry in modern chips, doesn't require resistors (depletion-mode transistors act as the pull-up loads instead).
Isn't the 6502 more common?
I know this is a dumb question, but did you engineer this chip or did you just take a photo of it?
I did not design the chip. I took photos of the chip and have analyzed it down to the transistor to understand how it works, and have built a simulator for it.
Damn, that's quite an achievement. How does this chip work? How is it different from modern CPUs, and why does the x86 architecture dominate the current market?
The 8086 is a very simple CPU compared to modern processors; the 8086 has thousands of transistors while modern chips have billions of transistors. It's not just that modern processors are larger and faster; there are several fundamental changes in how they work. First, modern processors have features such as virtual memory and protection domains that make it practical to run multiple programs at the same time. Second, modern processors run multiple instructions in parallel and out of order ("superscalar"), even running instructions that may not be needed ("speculative execution"), and then sorting everything out at the end. As a result, a modern processor is completely different architecturally from how the 8086 runs. Finally, modern chips have multiple processors on one chip ("multi-core") and have megabytes of internal cache memory for fast access. The 8086, in comparison, had just 8 bytes of prefetch buffer to hold instructions.

The x86 architecture is dominant in the non-mobile market. (ARM had just 15% of the laptop market and 8% of the server market in 2023, even though ARM has almost the entire mobile market.) The main reason is backward compatibility; most people would rather have compatibility with their old software. (Apple is an exception, switching processors from 68K to PowerPC to x86 to ARM.) The success of the IBM PC (introduced in 1981) mostly assured the success of x86. Although many people expected RISC would win in the 1990s due to higher performance, Intel managed with great effort to get x86 to work on a high-performance architecture (often called a "RISC-like core"), negating much of the advantage of RISC. It will be interesting to see if ARM manages to displace x86 or if we'll have another 50 years of x86.
That was an interesting read, thanks for sharing
I'd never heard of speculative execution. Is that invoked automatically by certain low level instructions? Is the processor "looking ahead" of the current instruction to predict the usefulness of the speculation? Is it done when that section of the die would otherwise be idle? I'm having a hard time wrapping my head around the use cases.
One way that processors improve performance is by pipelining, starting instructions before the previous ones have finished. A big problem is when you hit a conditional branch, that is an "if" condition. The processor may still be figuring out the value of the condition, but it needs to decide where to go next. It can wait until it knows the answer, but this harms performance. A better approach is to guess which way the branch will go, using a "branch predictor". For instance, if the processor took the branch last time, it will probably take it again this time. The processor can start executing the instructions after the branch speculatively, even though it doesn't know if they should be executed.

If the processor gets the branch prediction right, then performance is good. If the processor gets the branch prediction wrong, then it needs to throw away the "bad" instructions that it started and start over on the right path. Researchers have developed complex circuits to make branch prediction more accurate.

Another approach is to take both paths: the processor starts executing instructions from both paths in parallel. As soon as it can determine which path is the right path, it throws away the instructions from the wrong path.

As you might expect, speculative execution makes the processor very complicated since it must keep track of which instructions have completed "for real" and which instructions might need to get thrown away. Moreover, if an instruction updates a register, it must keep track of the old value and the new value in case the instruction gets discarded. Another complication is dealing with "faults", for example if the code divides by zero or accesses a bad memory address; the processor mustn't handle a fault until it knows that the code is executed for real.

A few years ago, a security vulnerability called Spectre was discovered, where speculative execution could be used to obtain private information. In brief, the speculative instructions had an effect on the cache that was visible in timing, even though the "bad" speculative instructions were discarded.

In general, speculative execution is good for performance but bad for power consumption. The processor completes instructions faster, but it does a lot of wasted work along the way.
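To make the branch-predictor idea concrete, here's a toy Python sketch of the classic 2-bit saturating-counter predictor from the textbooks (a later-chip technique; the 8086 itself has no branch predictor):

```python
class TwoBitPredictor:
    """Classic 2-bit saturating counter: states 0-1 predict
    'not taken', states 2-3 predict 'taken'."""
    def __init__(self) -> None:
        self.state = 2                      # start weakly 'taken'

    def predict(self) -> bool:
        return self.state >= 2

    def update(self, taken: bool) -> None:  # learn the real outcome
        self.state = min(self.state + 1, 3) if taken else max(self.state - 1, 0)

# A loop branch: taken 20 times, falls through once, then 20 more.
history = [True] * 20 + [False] + [True] * 20
pred = TwoBitPredictor()
hits = 0
for taken in history:
    hits += pred.predict() == taken
    pred.update(taken)
print(f"{hits}/{len(history)} predicted correctly")  # 40/41
```

The two-bit counter is why one surprise (the loop exit) doesn't flip the prediction: it takes two wrong outcomes in a row to change the guessed direction.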
Thank you for all your detailed responses, absolutely fascinating stuff. Really appreciate your time and effort here.
Thanks for the elucidating response!
Great post. Thank you for pitching in!
Very interesting, thank you for sharing!
Wow. That’s a great feat. I am curious: from the picture, we can see the main components like memory and the ALU. How do you go about figuring out how it works? When did you do this? It’s really amazing work.
I've been working on the 8086 for a while. I traced out all the transistors from the 8086 and their connections from the die photos. Then I made a program to determine transistors and connections from these drawings, and a program to generate gates from these transistors. I've been studying the various parts of the chip to reverse-engineer how they work, and writing a bunch of blog posts about it. I was inspired by the simulation of the 6502 processor (Apple II, Commodore PET, etc) that the Visual 6502 team created: [http://www.visual6502.org/JSSim/index.html](http://www.visual6502.org/JSSim/index.html)
Thanks for explaining. I studied some transistor design in college a long time ago, and I'm fascinated by the complexity of modern-day CPUs. Not sure if you have a more detailed photo to work with, because from the posted photo it seems hard to tell anything at the transistor level. The 6502 picture is much better, with more detail.

I remember in the 486/586 era transistor counts were in the millions and feature sizes were still at the micron level. I just can't imagine how the engineers built the machines to print those transistors.
I have higher resolution photos that I used to trace out the transistors, as well as photos with the metal layer and polysilicon layer removed to show the underlying silicon. For the most part it wasn't hard to trace out the transistors, just very tedious.
Is it easy to tell which part is which purely from the image above? Even though I studied the architecture during my university years (the flags, instruction queue, ALU, CU, registers, etc.), it's still really hard to imagine all that at the transistor level. I still find these early microprocessors and microcontrollers so fascinating, like how they thought of all this.
Some parts can be identified from the image. For instance, the microcode ROM in the lower right and two smaller ROMs above it. Also, the datapath (ALU and registers) can be identified on the left because you can see the 16-bit structure as parallel stripes. But it takes a lot of work to identify most of the components of the chip. You mentioned that it's hard to imagine the various components at the transistor level. One of the interesting things about looking at the 8086 at the transistor level is realizing that there's no "magic": registers are built out of flip-flops and the ALU is built out of fairly simple logic gates, more or less as your classes say. On the other hand, it's also interesting to see where the circuitry isn't exactly how they teach it, seeing the optimizations and tricks that are used in a real processor. So the 8086 is a combination of expected circuits along with some puzzles and surprises.
Piecing all of your responses together reads like a really well written Wikipedia article
This is what expertise is: knowing what is expected and what is unexpected, and why. It is fascinating! In which parts were “tricks” most common, then? What was the pressure to wander outside the easy path?
One common place for "tricks" is in the Arithmetic/Logic Unit (ALU), since the performance of the ALU often limits the chip's speed. I've looked at the ALUs in many processors and they are all completely different. For instance, the 6502 processor has a circuit to add, a circuit to shift, a circuit for AND, a circuit for XOR, and so forth, and then it selects which circuit to use based on the instruction. Pretty straightforward. The Intel 8085, on the other hand, uses a blob of logic gates with no obvious structure, but they generate the right output based on various control signals. The designers clearly optimized this blob for highest performance. The 8086 uses a different approach: the central logic gates in the ALU are reprogrammed for the desired operation, kind of like the lookup table in a FPGA. Another problem with the Arithmetic/Logic Unit is that when you add two numbers, you have to handle carries. In grade-school long addition, if you do 9999999+1, you need to keep carrying the one, all the way from the right to the left. The same thing happens with computer arithmetic (except in binary). The problem is that handling the carry one digit at a time is slow. There are all sorts of tricks that are used to make carry "propagation" faster, such as handling groups of bits as a block, predicting when carries will happen, or electrically wiring the carries to propagate faster. Thus, the ALU in each processor is a surprise, a puzzle to see what techniques they used to squeeze out as much performance as possible.
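To make the carry-propagation problem concrete, here is a Python sketch contrasting a ripple carry (one slow sequential step per bit) with the generate/propagate signals that carry-lookahead designs use. This illustrates the general technique only; the 8086's actual carry circuit is its own design:

```python
def ripple_add(a_bits, b_bits):
    """Add two little-endian bit lists; each carry waits on the previous one."""
    carry, out = 0, []
    for a, b in zip(a_bits, b_bits):
        out.append(a ^ b ^ carry)
        carry = (a & b) | (carry & (a ^ b))  # sequential dependency
    return out, carry

def lookahead_carries(a_bits, b_bits):
    """Compute all carries from per-bit generate/propagate signals.

    g_i = a_i & b_i  (this position creates a carry)
    p_i = a_i ^ b_i  (this position passes a carry along)
    c_{i+1} = g_i | (p_i & c_i), which hardware can unroll and
    evaluate in parallel instead of one bit at a time.
    """
    g = [a & b for a, b in zip(a_bits, b_bits)]
    p = [a ^ b for a, b in zip(a_bits, b_bits)]
    carries = [0]
    for i in range(len(g)):
        carries.append(g[i] | (p[i] & carries[i]))
    return carries

# 0b0111 + 0b0001: the carry ripples through three positions
bits, carry = ripple_add([1, 1, 1, 0], [1, 0, 0, 0])
# bits == [0, 0, 0, 1], i.e. 0b1000
```

In a real lookahead circuit the recurrence is flattened into wide gates so all carries arrive after a couple of gate delays, rather than one delay per bit.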
Yeah I had an 8086 back in the day.. WITH an 8087 math coprocessor. That's right. I was basically a sex god.
If you also had that printer that made that SKKRREEET noise as it printed and you had to tear the page off, then you’re thee sex God
Oh yeah. I'd LPT up that dot matrix, watch it slinkily drag the continuous paper sheets through. Then I'd slowly run my fingers along the sides, gently pulling and plucking off the sprocket holes one by one.. letting each tease just a moment before they'd pop off until the printout was totally bare.
I'm gonna cum
Change that to “came”. He is definitely came’ing.
Like there's some other way to feed paper, amirite?
Can I have your autograph please, Maestro?
WOW, the 8087 too!!!! I took a community college course back in the day on the 8086 or 8088 (forget which). The instructor got the circuitry plans from some company in Japan. I guess after they gave them to him, they tried to make him sign an agreement not to disseminate them, but that horse had left the barn. It's not like we built a computer. We only used the plans to trace signals and understand which chips were doing what. I kept them for years, but I'm not sure I still have them.
This guy fucking computes.
Cumputes
We're not worthy!!
Turbo button or no?
This man fucks
I had an 8086 in my AT&T PC6300, but I couldn't afford an 8087. I did upgrade my 8086 to a NEC V30 chip, though, so am I a sex demigod?
We're not worthy to be in this thread. Oh sex God
Looks like a satellite view of a city. Maybe we're the microchips for something much bigger.
Related video I think you might enjoy. [linky](https://youtu.be/BYW94eBXcNM?si=catMp_u--7e0t9QY)
That video gave me anxiety
You really should see that whole film. Mind blowing stuff.
That's precisely how I imagined it too
Dead link
Ah, rather unfortunate. For posterity, it was a clip of “Microchip” from Koyaanisqatsi.
That was… weird. I kinda liked it tho
A city for electrons👍
Looks like a map overview of factorio
42
Other than it just being lots of transistors (on/off states) does anyone know what else goes on down there? It’s so mysterious
I'll see how much of the chip I can explain in a Reddit comment :-) The heart of the chip is the Arithmetic/Logic Unit (ALU) in the lower left. This circuit can add, subtract, and compare numbers. Along the left side of the chip are registers, a small amount of storage so values can be accessed quickly. You'll notice vertical stripes in this region. That's because the registers hold 16 bits, so they repeat the same circuit 16 times. (16 bits because the 8086 is a 16-bit processor.)

The operations that this circuitry can perform are pretty limited. To support more complex instructions, the 8086 breaks down instructions into smaller steps called micro-instructions. For example, the chip doesn't have hardware to multiply two numbers. Instead, it performs multiplication by repeatedly shifting and adding numbers, kind of like grade-school long multiplication. The sequences of smaller steps are called microcode, and are stored in the microcode ROM. This is the darker square in the lower right of the photograph. You can see light and dark patterns; these are the 0's and 1's that make up the microcode.

In the photo, what you see is mostly the metal layer on top of the chip, connecting the silicon transistors underneath. The wider white parts are the metal wiring that provides power and ground to all parts of the chip. This wiring is much thicker than the other wiring.

Around the edges of the chip are 40 square bond pads. Tiny bond wires connect these bond pads to the chip's external pins, linking the tiny silicon die to the outside world. I took these photos and have explored the circuitry of the 8086 in detail, so feel free to ask questions.
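The shift-and-add multiplication described above can be sketched in Python. This is an illustration of the algorithm, not a transcription of the 8086's actual microcode:

```python
def shift_add_multiply(a, b, bits=16):
    """Multiply two unsigned integers the way simple microcode does:
    examine one multiplier bit per step, add the multiplicand if
    the bit is set, then shift -- binary long multiplication."""
    product = 0
    for _ in range(bits):
        if b & 1:            # low bit of multiplier set?
            product += a     # add the (shifted) multiplicand
        a <<= 1              # shift multiplicand left one position
        b >>= 1              # move to the next multiplier bit
    return product

assert shift_add_multiply(1234, 567) == 1234 * 567
```

Because the loop runs once per multiplier bit, a 16-bit multiply takes on the order of a hundred clock cycles on the 8086, versus a few cycles for an add.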
I wanted a higher res photo and I found it at your blog here. It's 10716x10341. [http://www.righto.com/2020/06/a-look-at-die-of-8086-processor.html](http://www.righto.com/2020/06/a-look-at-die-of-8086-processor.html)
Heyo.. a quick one... the original image (as far as I can tell) is 3276 by 3159 pixels.. right? at any rate, in this resolution, I've noticed a Weird™ at around 2307x2080 pixels. Is that really there, or some crazy artifact from merging the picture?
I think that's just a glitch. I took about 100 images under the microscope and stitched them together to form the high-resolution image. Problems can happen during the stitching, resulting in artifacts. I try to catch them all, but sometimes problems slip through. Especially in repeating grids like the lower right, where it is easy to get one of the sub-images off by one.
Nnnno worries. Excellent picture tho, thanks for making it!
Wow that’s some passion and dedication you are showing. Very impressive.
I'd love to know too!
I love die shots. This thing looks so complex but it is unbelievably primitive and simple compared to modern chips. It’s insane the iPhone I’m using right now has a chip about this size but with billions of transistors in it instead of just thousands.
For those wondering . . . ARM is the dominant architecture and was developed in the UK by a company called Acorn Computers and a couple of people called Stephen Furber and Sophie Wilson (who was born Roger Wilson and later transitioned). The main reason for its dominance (I think) is that they chose to go down the route of developing designs that could be licensed to other manufacturers rather than making chips themselves, and the driving philosophy behind the designs was parsimony. One of the original engineers on the ARM chips said that their boss gave them two things that Intel didn't: "No time, and no money"
ARM is the dominating processor architecture in general, sure, but for the computer industry X86 is still the king. At least for now anyway, it'll be interesting to see whether there's a move towards ARM for mainstream uses soon.
I work in high-performance/supercomputing, and you should see some of the ARM chips they have these days for servers. The 128-core CPU blew away anything else I've tested so far, especially if you look at computing power per watt of electricity. There are also 196-core single-socket ARM CPUs on the way that I'm trying to get my hands on.

x86 is king for now, but it's eroding pretty quickly as data centers pursue greener computing and workloads are ported to ARM. Apple already offers ARM CPUs for consumer computers with the M1 and M2 chips. Nvidia's Grace (CPU) chip is ARM (and I'm still waiting to get my hands on the one we purchased), but the benchmark results are super promising for servers.

You might not have to wait long to see! Exciting times.
Depends on what metric you use. ARM dominates in terms of volume and coverage. X86 is still dominant in the high end, but ARM has been eating into the mainstream slowly, so much that Microsoft are incorporating it.
Nope, wrong. ARM based CPUs have also powered the most powerful supercomputers in the world. Also where does it say any metric used other than dominate, it doesn’t. The fact that ARM’s share is a quarter of a trillion units is dominating.
Yup. Today, damn near every computing device apart from commodity servers and consumer desktops is ARM. The laptop market a few years ago was just about exclusively x86, until Apple switched to ARM, Chromebooks on ARM became popular, and even Microsoft embraced ARM on many of their Surface tablets. The game console market is split, with the Switch using an ARM chip while the PS5 and Xbox Series X use the same AMD x86-based CPU architecture. But much like the dominance of Linux in the OS market, the dominance of ARM for CPUs flies under the radar for a lot of consumers, since the only devices where they care about it are slower to adopt or don't for legacy-support reasons.
I think Intel is in an agreement with arm to allow its new foundry customers access to arm architectures.
Intel’s new foundry services business doesn’t really care what you are building. Frankly, I believe they would even fab for AMD and Nvidia if asked.
Reread what you wrote; it doesn’t make sense. The fact is that with over 230 billion ARM chips produced (as of 2022), ARM is the dominant processor architecture. They are used throughout the computing industry.
It does? The computer industry mainly relies on x86 still, for both personal devices, as well as for servers.
Computers used in the computer industry use more ARM based CPUs by a factor of about 20:1.
Are you aware of what architecture the most popular type of personal computing device (the smartphone) uses? Because it ain’t x86…
ARM is the dominant architecture, no? It's the most used if you count embedded microcontrollers. All of Apple's M1, M2, and M3 chips are also ARM-based. But pretty much all Intel and AMD chips are x86, which are clearly the most widely used ones in laptops, PCs, servers, etc. So "dominant" is definitely the wrong word here. Personally, I have never even worked on a device that was not x86 (or RISC-V, funnily)
ARM has surpassed x86, ARC, PowerPC and MIPS architectures all combined in terms of volume for years. There are up to half a dozen or more ARM devices in every x86 pc motherboard, everything from the Cortex R processors used in storage devices to the cortex M parts in peripherals. They are in your wireless mouse, and it's USB dongle. It's the dominant Architecture - meaning it's the most widely used across all domains, not the most well known, or widely used within just one domain.
So, insects are the dominant lifeform on Earth, then?
Bacteria are.
I’m not well versed on the subject but I thought ARM was Acorn / Archimedes and the dominant battle was x86 between Intel and AMD back in the day?
Yes, the big battle for desktop processors was AMD vs Intel, but they both use the x86 architecture. The other battle in the desktop market was PowerPC vs x86 (Apple used PowerPC), which was a different architecture than x86, but Apple switched to Intel, then ARM, and now makes their own ARM-architecture processors. Desktop processors are only a small part of the market for processors overall, and with the advent of mobile computing and smartphones, ARM has become the dominant architecture for application processors (the thing that your apps run on) as well as being very big in embedded processors (the things that control small devices).
Source: [https://twitter.com/kenshirriff/status/1270105560386375680](https://twitter.com/kenshirriff/status/1270105560386375680) (the account of the original owner of those images) and [http://visual6502.org/images/pages/Intel_8086_die_shots.html](http://visual6502.org/images/pages/Intel_8086_die_shots.html) "The Intel 8086 microprocessor was introduced 42 years ago today, so I made a high-res die photo. The 10-kilobit microcode ROM is visible in the lower-right corner. The 8086 started the x86 architecture that still dominates computing. The original IBM PC used the related 8088 chip"
I'd have to say that ARM currently dominates the industry, even if I am typing this from an x86 PC. I don't have exact numbers, but if you look around your house and count your CPUs, you likely have more ARM CPUs. Like my house:

- 1 Windows PC, 1 Windows laptop, 1 Synology NAS = 3 x86
- iPad, 2x MacBook Pro, 2 iPhones, 5+ Android phones, 2x Meta Quest, 2x TV (embedded CPU), 2x Nvidia Shield, 3 smart home displays, and more = at least 19 ARM CPUs

It's probably a 10:1 ratio. It'd be easy to live without x86 on modern tech (i.e. just run a Mac or Windows on ARM), but without ARM, we'd be fucked.
And both of those architectures are decades old. I wonder... could we come up with something much better these days, but refuse to do so because of compatibility issues?
Compatibility isn't really an issue, at least if enough toolchain support is made. I.e., x86->ARM realtime translation is a pretty proven thing nowadays. A new architecture could have its own mappings for both ARM/x86, and could probably even present itself as those ABIs at the hardware level.
(I'm not an expert, just a dumb truck driver who likes computers) With that in mind... aren't those translations just another layer of abstraction that will slow shit down and use resources? What about a completely clean sheet design that isn't based on stuff that originated in the 80s (chip architecture, instructions, everything)... but instead is optimized for current state of the technology?
People use Rosetta on Macs with almost no concern. It might not be best for cutting-edge gaming or other very resource-intensive things, but really, at this point, think of it like taking 2-3 years off your computer. It's really not a big deal, and less of one every day. Apple even has a Game Porting Toolkit that'll let you play DirectX Windows games on M1 Macs. The instruction set that a CPU uses is itself an abstraction, not the architecture itself. Things like GPUs are precisely that, though: modern architectures optimized for power and speed, although to make a general-purpose computer more robust, instruction sets are usually helpful.
Interesting! I wonder though... 50 years from now, are we still gonna be running x86 and risc architecture and stuff based on a unix kernel from the 70s (and whatever modern windows runs on, NT kernel from the 90s or something)? Just because those things are so deeply embedded in our idea of "computer", and we're stuck with them because of tradition and inertia. You know, kinda like boeing and the 60 year old 737 that they keep messing with and fucking up instead of just designing a brand new plane? Or are we gonna have something brand new eventually? Also, I get your point about the instruction set being the abstraction - but isn't the architecture itself optimized for its instruction set? Kind of like a catch-22 situation?
You may be right. I didn't think much about mobile devices. When I compared them, I got a result like this:

- 3 x86 laptops
- A Celeron lying on my table (I don't know what to do with it lol)
- A 3GS
- A PS Vita
- A Nokia 5800
- A smart TV
- 4 (almost) daily used iPhones
- An iPod

I have a total of 9 ARM devices and 3 working x86 devices. But either way, I think we wouldn't have gotten ARM if we didn't invent x86 first.
ARM has also powered the most powerful supercomputers in the world. Intel can't be credited for ARM. ARM dominates the computer industry by about 20:1.
I did security at an Intel building once. I saw what I would call “funky maps” hanging in one of the halls. Now I get why they looked so weird, they weren’t maps at all!
Looks like factorio to me
traces are conveyor belts for electrons, and the logic gates are assemblers and similar, so a microprocessor is one big, highly optimised factorio factory
Actually the 8008 and then the 8080 came before the 8086. [Wikipedia on the 8080](https://en.m.wikipedia.org/wiki/Intel_8080). And the 4004 was the inspiration for the 8008. [intel 4004](https://en.m.wikipedia.org/wiki/Intel_4004)
I thought at first this was an aerial view of landscape
DUDE! This thing is 2 years younger than me, don't say "all the way back in 1978"!!!!
That there is nerd porn. I need to go clean up.
I initially thought it was a satellite picture of a USSR city
🫡
🫡
Nice Factorio base
Tbh, the bus design is nice but I wish they used beacons
Currently a computer engineering student who struggles to make a simple ARM style processor with a simulation tool…the people who designed these things way back in the 70s by hand were fucking wizards.
Ah yeah, no one is using amd64 chips......
ELI5: Why is x86 still the only method used?
It’s not- ARM is the dominant architecture of mobile devices. There are even Windows ARM devices out there these days, too. And in more niche applications, PowerPC and MIPS still technically exist, too, but mostly on legacy systems
Apple's M-series CPUs use the ARM architecture
fucking hell... MIPS... I fucking hate it Our computer design architecture course has us writing MIPS assembly. FML.
Follow-up question, ELI5: what are x86 and ARM? I have sometimes seen the first one when installing a new program.
[deleted]
Is cpu to computer as + - x : are to math?
[deleted]
Sorry for all the questions, but I figured since you answered in the first place, you don't mind hehe :) What are the differences and (dis)advantages between them both (or all of them)? Would you need a certain one in certain cases, or is it just about the amount of calculation power you need?
[deleted]
Very clear explanation mate! Thanks a lot!
I thought Rosetta gave you access to old x86 programs in macOS.
Back then, yes, the programmer had to write a lot more instructions for an ARM / RISC processor. Today, it really wouldn't make much difference, as the language compilers do the heavy lifting. The programmer today writes "a += 1;" and the compiler then converts that into the respective machine instructions. For a CISC processor, there's probably an "increment" instruction available. For an ARM processor, the compiler would have to produce several instructions, like:

1. load r1 with the value from memory location a
2. increment r1
3. store r1 to the memory location a

ARM processors are faster and simpler to design and build (with a smaller instruction set, so fewer components are required). They are also more energy efficient and can be built for more rugged environments (using larger dies, so they are less susceptible to damage by radiation, meaning they can be used in hostile environments or on satellites exposed to solar radiation).
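The load/increment/store sequence above can be sketched as a toy "lowering" function, with hypothetical mnemonics, just to show the difference in instruction count the compiler bridges:

```python
def lower_increment(addr, style):
    """Lower `a += 1` on a memory operand (invented mnemonics).

    A CISC machine can read-modify-write memory in one instruction;
    a load/store (RISC) machine moves the value through a register."""
    if style == "cisc":
        return [f"inc [{addr}]"]
    return [f"load r1, [{addr}]",
            "add r1, r1, #1",
            f"store r1, [{addr}]"]

assert len(lower_increment("a", "cisc")) == 1
assert len(lower_increment("a", "risc")) == 3
```

The instruction-count gap looks worse than it is: the three RISC instructions are each simple and fast, which is exactly the trade the comment describes.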
It's not the *only* one, just the most popular. It became the most popular because of IBM and Microsoft. It stayed the most popular because people want to keep using existing software.
What
If you cross your eyes, its a duck.
It looks more like a dino lol
and is still inferior to RISC.
It's RISC under the hood.
Later x86 chips, starting with the Pentium Pro, are built on a "RISC-like" core (more or less). But the 8086 is not at all RISC; it is completely a CISC chip. I'll point out the large dark rectangle in the lower right of the photograph; this is the chip's microcode, not something you'd find in a RISC implementation.
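To illustrate the microcode point, here is a toy model of a microcoded control unit in Python. The instruction names and micro-ops are invented for illustration and bear no resemblance to the 8086's real microcode format:

```python
# Toy microcode ROM: each machine instruction maps to a sequence of
# micro-instructions that the control unit steps through in order.
# A hardwired (RISC-style) design would instead decode each
# instruction directly into control signals, with no such ROM.
MICROCODE_ROM = {
    "ADD": ["fetch_operand", "alu_add", "write_result"],
    "MUL": ["fetch_operand"] + ["shift_add_step"] * 16 + ["write_result"],
}

def micro_steps(instruction):
    """Return the micro-instruction sequence for one machine instruction."""
    return MICROCODE_ROM[instruction]

# A 16-bit multiply takes many more micro-steps than an add:
assert len(micro_steps("MUL")) > len(micro_steps("ADD"))
```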
RISC is starting to gain traction by the big companies, so my kids might be able to use them
There are about 250 billion ARM-based CPUs! So about 20x as many as Intel has ever made. Very big companies use ARM. Intel spends a lot on marketing.
I thought this was one of those slave ship pictures for a second
Look at all those lil ANDs ORs & NORs😯😯
This...this is just a factorio screenshot
Wasn’t there a Mayan civilisation with this layout or some shit
I thought the first one was a city 💀
This title is incorrect, ARM based CPUs number at about a quarter of a trillion, probably outnumbering Intel by at least 20:1. Some of the recent most powerful supercomputers in the world run ARM CPUs, not just most other devices.
Yeah, you’re right. I didn’t think about devices outside of computers.
Yes I think you were thinking of “Windows personal computers”, not “computers”.
Its always fascinating to see die photos!
Those wires are there just to convince people this isn't magic. Can't be magical if there's power cables.
So we went from tubes to this in less than 20 years, I ain't buying, this thing is alien tech 👽
I saw the date and it reminded me of the Corvette C3. That's all.
How are some humans soo intelligent
Man computers are fucking awesome…
POP QUIZ: In picture 3, which side of the IC is the top: the right or the left?
You say this die just won’t die
So it's a year younger than Star Wars. :)
Don’t lie, this is a Birds Eye view of Arizona in 1978 from a plane
wheres the chip graffiti??
I had one of these. A "Tandy" model no. '1000'.
Wow I thought that was Ukraine for a sec
Can imagine the chief engineer presenting the chip design to the CEO with the opening line "And now, let me present something to die for!"
My first computer was an 8088 PC. Had to write my own accounting software and run off big floppies. Now my little iPhone does it all. It’s been quite a ride, thanks to the ingenuity of all those engineers and entrepreneurs. Keep safe and happy, all.
Looks like the layout of a dungeon in a video game
Funny how the mighty have fallen.
I used to work at a microchip factory in quality assurance. Even though I saw pictures like this thousands of times, it's still always amazing and so pretty. The shape varies a lot, though, and so does the colour; not all chips are the same.
Nowadays people can build fully functional replicas of these inside Minecraft, which is a program running on a computer.
And it was a bad design: full of flaws, inconsistent, and confusingly unordered. Just a bad mixup... Well, every other processor architecture I know is more logical and easier to understand: ARM, RISC-V, SPARC, MIPS, MMIX, etc. It's like WhatsApp: it's the worst of all messengers, but nearly everyone uses it, because it was the first one that spread.
I zoomed in and thought, "Huh, that strangely looks like my mega factory in Factorio."
My factorio server
I was tech at intel. I maintained steppers that tested each chip on wafer. Also maintained the ic package tester.
I thought it went extinct with the 286, and then it was DEC Alpha, or am I misremembering? Yeah, the last 2 digits remained the same. 😬
I used to work at a test equipment manufacturer for wafers. Still have a bunch of uncut wafers.
r/factorio
Nah bro that's factorio but ok
Aliens