Within the past decade or so, computers have advanced at an unanticipated rate, and some would argue that processors in particular have come the farthest. When it comes to developments in processing, most people look at figures like clock frequency and transistor count, but an important and often overlooked aspect is CPU cache.
So what exactly is CPU cache? What do terms like CPU cache ratio and CPU cache voltage mean? Read on to find out.
What Is CPU Cache? How Does It Work?
Simply put, cache is just a very fast type of memory. A computer has multiple types of memory. The first is primary storage, which could be a hard disk or an SSD and which stores bulky data like your operating system and programs. Next is RAM, which is much faster than primary storage. Lastly, there is cache, which consists of even faster memory units that the CPU holds within itself.
Cache is similar to main memory, or RAM, in that it loses its contents when you shut your computer off and begins collecting information from scratch the next time you fire it up. Unlike RAM, though, cache doesn't need to be constantly refreshed to hold its data. For this reason, cache is also called static RAM, or SRAM, while the main RAM is called dynamic RAM, or DRAM.
Cache can also be confused with virtual memory, but these are two different things. Virtual memory is a technique by which the operating system moves inactive data from RAM to disk storage, allowing your computer to run more programs at once than physical RAM alone could hold. Cache, in contrast, is physical memory built into the CPU itself.
Computer memory works as a hierarchy, and knowing this can help you understand how cache itself works. The hierarchy is based on speed, and cache, being the fastest, sits at the top. As already mentioned, it is also part of the CPU itself, which places it closest to where the processing actually happens.
To break it down: whenever you run a program, a set of instructions makes its way from primary storage to the CPU. The RAM receives this data first and passes it on to the CPU. Modern CPUs can process an enormous number of instructions every second, which means they need equally fast memory to feed them. This is where cache plays its part: it holds data close to the CPU as the CPU works through the instructions it is given. A further hierarchy exists within the cache itself; this is described next.
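This hierarchy is visible even from high-level code: data that is read in the order it sits in memory is served from cache far more often than data accessed in a scattered order. Below is a minimal, illustrative Python sketch comparing the two access orders; the grid size is made up, the exact timings depend entirely on your machine, and the effect is smaller in Python than in a language like C (list elements are pointers), but the principle is the same:

```python
import time

# A 1,000 x 1,000 grid stored as a list of rows.
N = 1000
grid = [[1] * N for _ in range(N)]

def sum_by_rows(g):
    # Visits elements in the order they sit in memory: cache-friendly.
    return sum(value for row in g for value in row)

def sum_by_columns(g):
    # Jumps to a different row on every access: far more cache misses.
    return sum(g[r][c] for c in range(N) for r in range(N))

start = time.perf_counter()
total_rows = sum_by_rows(grid)
row_time = time.perf_counter() - start

start = time.perf_counter()
total_cols = sum_by_columns(grid)
col_time = time.perf_counter() - start

# Both orders compute the same total; the row-by-row walk is usually
# noticeably faster because consecutive accesses reuse cached data.
print(total_rows, total_cols, f"{row_time:.3f}s vs {col_time:.3f}s")
```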
Levels of Cache
CPU cache is further divided into three levels based on size and speed. These levels are called L1, L2, and L3, with L1 at the top of the hierarchy.
L1, or Level 1, cache is the fastest memory that exists in a computer system. It holds the data the CPU is most likely to need for the task at hand, so it is also the most heavily used. L1 cache used to top out at about 256 kB, but more powerful CPUs can now take it up to 1 MB; Intel's Xeon CPUs, for instance, can hold 1-2 MB of L1 cache. L1 cache comprises an instruction cache, which holds information about the task the CPU is about to perform, and a data cache, which holds the data on which the CPU performs that task.
Next, we have L2, or Level 2, cache, which is slower than L1 but much larger. L2 cache sizes typically range from 256 kB to as much as 8 MB, and newer processors can, again, go beyond that. L2 holds the data the CPU will likely need next, once it is done with the data in L1. In modern computers, the CPU contains the L1 and L2 caches within its cores, and each core gets its own cache.
Finally, L3, or Level 3, cache is the slowest form of cache but also the largest, with sizes ranging from 4 MB to 50 MB. Unlike L1 and L2, the L3 cache is typically a single pool shared by all of the CPU's cores. It serves as a backup for L1 and L2 and helps boost the performance of the levels that precede it.
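If you are on Linux, you can inspect the actual cache levels of your own CPU through sysfs. This is a Linux-only sketch: the paths below are standard on Linux kernels but absent on other systems, where the list will simply come out empty.

```python
import glob
import pathlib

# Linux exposes each cache of a core under sysfs; cpu0 stands in
# for any core here. Availability varies by system.
entries = []
for index_dir in sorted(glob.glob("/sys/devices/system/cpu/cpu0/cache/index*")):
    p = pathlib.Path(index_dir)
    level = (p / "level").read_text().strip()
    ctype = (p / "type").read_text().strip()   # Data, Instruction, or Unified
    size = (p / "size").read_text().strip()    # e.g. "32K" or "8192K"
    entries.append(f"L{level} {ctype}: {size}")

for line in entries:
    print(line)
```

On a typical desktop CPU this prints separate L1 Data and L1 Instruction entries per the split described above, followed by the unified L2 and L3 caches.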
CPU Cache Ratio and Other Tricky Terms
During processing, data flows from RAM through the L3, L2, and L1 levels of cache. Whenever the CPU needs data to run a program, it looks in the L1 cache first. If it finds the data there, that is known as a cache hit. If it cannot find the data in L1, it proceeds to look in the remaining levels; if the data is in none of the caches, the CPU has to fetch it from main memory, and that is called a cache miss.
The cache hit ratio is thus the number of hits divided by the total number of accesses (hits plus misses), and it measures how effective the cache is at fulfilling requests. Cache voltage, meanwhile, is the voltage supplied to the cache portion of the CPU, and it mainly becomes important when overclocking.
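The hit ratio can be made concrete with a toy simulation. The sketch below models a single small cache with least-recently-used (LRU) eviction; the addresses, cache size, and access pattern are all invented for illustration:

```python
from collections import OrderedDict

def simulate(accesses, cache_size):
    """Count hits and misses for a tiny LRU cache."""
    cache = OrderedDict()   # keys are addresses, least recently used first
    hits = misses = 0
    for addr in accesses:
        if addr in cache:
            hits += 1
            cache.move_to_end(addr)        # mark as most recently used
        else:
            misses += 1
            cache[addr] = True
            if len(cache) > cache_size:
                cache.popitem(last=False)  # evict least recently used
    return hits, misses

# A program that keeps reusing a few addresses gets a high hit ratio.
accesses = [1, 2, 3, 1, 2, 3, 1, 2, 3, 4]
hits, misses = simulate(accesses, cache_size=4)
hit_ratio = hits / (hits + misses)
print(hits, misses, hit_ratio)   # 6 hits, 4 misses -> ratio 0.6
```

The first pass over addresses 1, 2, and 3 misses (the cache starts empty), every repeat access hits, and the final new address misses, giving the 0.6 ratio.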
Now that you understand the L1, L2, and L3 hierarchy, let's look at the cache mapping configurations that control where data from main memory is placed in the cache. There are three types. The first is direct-mapped cache, in which each block of memory maps to exactly one cache location, specified in advance.
Next, we have fully associative cache mapping, which is the opposite extreme: a block can be placed in any cache location rather than one specific spot.
Lastly, there is set-associative cache mapping, which falls between the two extremes of direct mapping and fully associative mapping. In set-associative mapping, each block maps to a fixed subset, or set, of cache locations and can occupy any line within that set, rather than having a single location designated to it.
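All three schemes can be seen as points on one spectrum: split the cache into sets of W lines ("ways") each, and direct mapping is W = 1 while fully associative mapping is one set containing every line. The sketch below is a toy illustration of which lines a memory block is allowed to occupy under each scheme; the 8-line cache and block number 13 are made up for the example:

```python
def allowed_lines(block, num_lines, ways):
    """Return the cache lines a memory block may occupy.

    num_lines: total lines in the cache; ways: lines per set.
    ways == 1         -> direct-mapped
    ways == num_lines -> fully associative
    otherwise         -> set-associative
    """
    num_sets = num_lines // ways
    s = block % num_sets                # which set the block maps to
    return list(range(s * ways, s * ways + ways))

# An 8-line cache, placing memory block 13:
print(allowed_lines(13, 8, 1))  # direct-mapped: exactly one line -> [5]
print(allowed_lines(13, 8, 2))  # 2-way set-associative: two candidates -> [2, 3]
print(allowed_lines(13, 8, 8))  # fully associative: any line -> [0..7]
```

Real caches index on specific address bits rather than a bare modulo, but the placement rule each scheme enforces is the one shown here.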
The question you probably have now is: how much CPU cache do I need? There is no definitive answer, since every user has different needs, but remember that most modern computers are designed with enough cache for common tasks, including gaming and even programming.
Additionally, cache size doesn't mean much by itself; other factors also come into play. If the amount of cache you have is holding you back, there is still room for improvement: because cache is built into the CPU, the way to get more is to upgrade the processor itself. That is worth looking into if your computer runs slowly.