32-bit + 64-bit

Back in the "good" old days, computers didn't have much RAM (more on that later) and were much slower than they are today. Despite that, they still had to work with large amounts of data and large numbers. This was the era of the 32-bit processor. When a CPU is of the 32-bit variety, it can address 2^32 (two to the 32nd power) bytes of memory, which works out to 4,294,967,296 bytes. Each byte stores a small unit of information, which is essentially a number.
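
If you want to see that number for yourself, here's a quick sketch in Python (any Python 3 interpreter will do). The GiB conversion is just 1024 × 1024 × 1024 bytes:

```python
# A 32-bit CPU can address 2**32 distinct bytes of memory
addresses = 2 ** 32
print(addresses)                       # 4294967296
print(addresses / (1024 ** 3), "GiB")  # 4.0 GiB, the famous "4 GB limit"
```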

Through the 90s and into the early 2000s this worked fine for the most part, but then we moved to the 64-bit architecture. It's the most common architecture you'll find today.

A 64-bit CPU isn't twice as good (32 × 2 = 64); its address space is about four billion times larger. That's because a 64-bit CPU can access (address is the technical term) 2^64 bytes of memory, which means a total of 18,446,744,073,709,551,616 bytes can be addressed by a 64-bit CPU. That's an insane jump in the amount of information accessible at once.
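
You can check that "four billion times" claim yourself; this little sketch just divides one address-space size by the other:

```python
# How many times larger is the 64-bit address space than the 32-bit one?
print(2 ** 64)             # 18446744073709551616
print(2 ** 64 // 2 ** 32)  # 4294967296, i.e. about 4.3 billion times larger
```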

The reason the difference isn't a simple doubling is that CPUs operate in binary, a base-2 numbering system, where every extra bit doubles the number of addresses. Doubling the bits from 32 to 64 therefore multiplies the address space by 2^32, so 2^64 isn't twice as good as 2^32, it's exponentially better. But don't worry about this maths right now.
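
If you're curious anyway, the pattern is easy to see: each bit you add doubles the count. A small sketch:

```python
# Each extra address bit doubles the address space
for bits in (8, 16, 32, 33, 34, 64):
    print(f"{bits:>2} bits -> {2 ** bits:,} addresses")
```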

The main difference between a 32-bit and a 64-bit processor is the amount of addressable memory, and a 64-bit CPU can also work with larger numbers in a single instruction, which can make certain workloads faster. Again, 64-bit CPUs are the most common you're going to find today, but it's important to at least understand what the difference is.

You may encounter software that's compiled (remember what that means?) for a 32-bit CPU architecture. It can be run on a 64-bit processor, but not vice versa, and it may run slower than the same software compiled for a 64-bit CPU architecture, since a 32-bit program can never address more than those 4,294,967,296 bytes of memory.
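
If you want to know whether your own Python interpreter is a 32-bit or 64-bit build, here's one quick way to check, assuming Python 3 is installed. It works by measuring the size of a pointer:

```python
import platform
import struct

# A pointer ("P") is 4 bytes on a 32-bit build, 8 bytes on a 64-bit build
print(struct.calcsize("P") * 8, "bit interpreter")
print(platform.machine())  # e.g. 'x86_64' or 'arm64' on a 64-bit machine
```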