Before the era of multi-core processors, my techie friends and I spent much of our college years striving to squeeze our computer hardware to the limit. CPU clock speed felt like the most important thing in the world as we spent endless evenings hunched over our desktops, tinkering with the amount of thermal paste to use or the perfect way to route cables in order to maximize airflow.
An early processor of mine was the AMD Duron, the little brother of their flagship product. It was a speedy little chip, but most importantly it could be “unlocked” with very little effort. Most CPU manufacturers put hard stops on how high their chips could go; after all, if you could get their third-tier chip up to second-tier speeds, how could they hope to sell the more expensive chips? Unlocking the Duron—which in some cases required just a graphite pencil, and in others a special pen with capacitive ink, applied under bright lights and heavy magnification to tiny metal leads visible on the surface of the chip—allowed us to tell the processor how high it should try to go, rather than adhering to AMD’s rules.
Our typical process was to ramp the numbers up as high as possible, then test the system for stability over the next few hours. If it failed the checks, we stepped it down a notch or two and tested again. If it succeeded, we bumped it up and repeated the process, trying to dial in exactly how much stress the system could take before it gave up.
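That back-and-forth was essentially a manual search loop. As a rough sketch, it looked something like this—the `stability_test` here is a stand-in for hours of real-world stress testing, and the numbers are purely illustrative, not my actual settings:

```python
def find_max_stable(start_mhz, step_mhz, stability_test):
    """Climb until a setting fails, then back off until it passes again."""
    speed = start_mhz
    while stability_test(speed):   # keep bumping it up while it survives
        speed += step_mhz
    while not stability_test(speed):  # step back down until it's stable
        speed -= step_mhz
    return speed

# Example: pretend the chip happens to be stable up to 950 MHz.
print(find_max_stable(600, 50, lambda mhz: mhz <= 950))  # → 950
```

In practice each “iteration” cost us an evening, so we never found the true ceiling as neatly as a loop would.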
Without going too deeply into the rabbit warren that is CPU frequency tuning, the process largely relied on three variables: the front-side bus speed, the CPU multiplier, and the voltage. The first two multiply together to give the core clock, so we juggled their combinations to get the highest value possible, and the voltage would be increased to push enough energy into the chip without frying it. The enemy in all cases was heat.
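The arithmetic itself is the easy part: the core clock is just the front-side bus speed times the multiplier. The figures below are illustrative, not my Duron’s actual settings:

```python
def core_clock(fsb_mhz, multiplier):
    """Core clock (MHz) = front-side bus speed (MHz) x CPU multiplier."""
    return fsb_mhz * multiplier

print(core_clock(100, 7.0))  # stock-ish setting: 700.0 MHz
print(core_clock(110, 8.0))  # overclocked: 880.0 MHz
```

The hard part was that every combination yielding the same clock stressed the system differently—raising the bus speed pushed the memory and chipset harder, while raising the multiplier concentrated the stress on the CPU itself.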
The reason miniaturization is such a big deal in computing is that smaller things take less power to do the same amount of work. Less power means the item runs cooler. Running cooler means you can force it to work harder, if you want, by pushing more electricity into it. In this way you either get a small chip that does the work of a bigger one—but cooler and with less power draw—or you get a small chip that, for the same heat output, can do far more than the larger chip. Still with me?
I can’t remember exactly how high we managed to get that little Duron of mine, but I remember we were all impressed. This was the era where “only” getting a 50% overclock—the term for making the processor perform better than designed/marketed—was considered a failure in our tech circles. The elusive 100% overclock—e.g. a 400 MHz chip running stably at 800 MHz—was rare but not unheard of.
In addition to the careful precision required to unlock the chips, the research that went into identifying which chips could be unlocked in the first place, and a keen eye toward balancing power draw and heat dissipation, there was a different, mechanical aspect of the process that was our constant fear: “the crunch.”
In modern times, the processor core(s) almost completely cover the surface of the chip, meaning the heat radiates evenly across the entire surface. There isn’t really any balancing act when it comes to attaching a heat sink or fan—which combine to help evacuate heat from the chip as it performs its calculations—since you’re pressing two large, solid bodies together. Not so in my college days, when the chip’s single core would stick up from its surface like a tiny pedestal, one small region of concentrated heat, atop which you had to very carefully wrench down the heat sink so it made even contact. Over our many (many) iterations and experiments in computing power, we ended up swapping out a lot of CPU cooling systems.
One day I was re-seating my massive copper heat sink and fan combination back on top of my little Duron, probably after a cleaning or a fresh application of Arctic Silver thermal grease, when I heard the tell-tale crunch of a processor breaking. In either my haste or inattention I had brought the heat sink down at a slight angle, and so when I tightened the screws, the force was uneven and it broke the core of my precious CPU.
After hanging my head in shame—nobody wants to admit they crunched their CPU, or pay for a new one—I meekly reset the motherboard back to factory defaults, including the aforementioned overclocking elements, and hoped I wouldn’t have to go buy a new CPU before finals week.
To my supreme amazement, the computer booted. It wasn’t nearly as zippy as it had been before, owing to running at stock speeds, but more than anything I was blown away that the processor still worked at all, even after being terribly mistreated at my hands.
That must have been some time in 2001, but I’ve kept my little Duron all this time. I even know exactly which curio box it’s sitting in at this moment. After the crunch—and the damage is very evident on the surface of the chip (I’ll post pictures when I get around to it soon)—the little Duron kept on chugging, even if it wouldn’t ever sustain even a 1% overclock from that moment forward.
After serving as my primary desktop, it was eventually repurposed into the brain of our small network server, a job computationally less intense than hardcore gaming or composing the flood of essays my major had me writing. We kept that little guy around for years, not only for the novelty and the story of it, but also because it was the first machine I ever installed Linux on, back in 2002.
The “little Duron that could” is a reminder of late nights staying up and solving problems with my best friend, working toward a straightforward but very convoluted aim. Almost all the memories it stirs are good ones, no matter what else may have been going on.
Thank you for taking this small trip down memory lane with me, and I hope you’re able to look back at a situation that didn’t go quite as planned but still worked out well, and just laugh about it.