Wednesday, March 30, 2011

A reflection upon hard drives

A few months ago I was having an issue: my laptop’s hard drive was running out of storage space. I was in the heat of a project that I couldn’t put on hold, so every day I had to find a few unimportant files to delete. Just five years ago this computer was state-of-the-art and had seemingly endless storage: 100 gigabytes. But, like a large purse, I found ways to fill it. The documents and photos didn’t take up much space, but my music collection claimed a large chunk. At the time I didn’t sweat it, as the 40 GB or so of free space was surely enough for an eternity. But my music collection bloomed. To compound this, about a year ago I started a video collection, and within a few weeks the drive was filled to the brim.

After my project was done, I knew I needed to reflect on the state of my laptop. The big picture was sunny: the outside still looked nice and not outdated, I had upgraded the RAM a couple of years ago to give it some spring in its step, I had replaced the battery, and the processor was still capable. So instead of getting a new computer, I swapped out the hard drive for a new one. The new drive fit snugly in the old one’s spot and boasted 500 GB of storage. I suddenly had an amazing 400 GB of free space (which has sadly since declined to 300 GB).

ANSWER: The amount of extra storage we get per square inch of hard drive material today compared to 1956, the advent of the hard drive. QUESTION: What is one billion times more?

Today, you can put 40,000 mp3s onto a drive that fits inside an iPod. In 1956, if you had put just one mp3 file onto a hard drive, you would have needed a forklift to move it anywhere. A well-known rule of thumb in the computer industry is “Moore’s Law”, which predicts that the number of transistors on an integrated circuit will double every two years. Translation: your computer gets about twice as fast every two years. Hard drive storage has a similar rule, Kryder’s Law: the storage per square inch doubles about every two years. My hard drive problem was a beautiful illustration of Kryder’s Law in action. In the five years since I bought the original computer, a two-year doubling time predicts about two and a half doublings, a factor of five or six; my drive’s capacity grew from 100 GB to 500 GB, right on schedule. This type of growth is exponential, and the trend has held since 1956.

The fact that the trend has held so long is pretty amazing, but peel back one layer and the story gets more interesting. It’s not a natural progression of one technology. Every few years we hit the wall with the current recording technology, and someone has to develop something better. But someone always does, and the trend marches on. One of these transitions happened around 2005, when “longitudinal recording” reached its limit and was replaced with “perpendicular recording”. Unfortunately, we’re now at the limit of perpendicular recording.
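
For the record, here is the doubling arithmetic from a few sentences back as a quick Python sketch (the 100 GB starting point and the two-year doubling time are taken from above; treat it as an illustration of the trend, not a precise model):

    def kryder_capacity(initial_gb, years_elapsed, doubling_time_years=2.0):
        """Project drive capacity assuming it doubles every doubling_time_years."""
        return initial_gb * 2 ** (years_elapsed / doubling_time_years)

    # My laptop's original 100 GB drive, five years later:
    projected = kryder_capacity(100, 5)
    print(f"Kryder's Law projection: {projected:.0f} GB")   # ~566 GB
    print("Actual replacement drive: 500 GB")               # close enough to the trend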

To understand where this limit comes from, let’s dig into what a hard drive actually is. The star of any hard drive is the platter: a disk coated with magnetic material, which is where the information is stored. It is called a “drive” because the platter is forced (driven) to spin so that a read/write head, hovering just above the surface on a movable arm, can read and write data at different locations on the platter. (Tangent: “flash drive” is a misnomer, as there are no moving parts inside your USB stick. The name persists because hard drives came before flash “drives” and the two can be interchanged in many systems.) The “hard” originally set these rigid platters apart from floppy disks, but it also happens to describe the magnetic material, which has a high coercivity: once it’s magnetized, it’s hard to demagnetize. So once your data is written, it stays on the disk unless you make a conscious choice to erase or overwrite it.

The magnetic material on the platter is not uniform. Rather, small islands of magnetic material coat the surface in a random mosaic, separated by thin channels of a nonmagnetic material such as glass. This is called “granular” recording media, as the islands of magnetism are called grains. Each bit of information is stored in a small cluster of neighboring grains: when you save a file, the recording head writes a 1 or a 0 by forcing the magnetization of that cluster up or down. The current grain size is about 10 nanometers. So to increase the storage for a given platter size (and continue the exponential trend), we will need to shrink these grains, and the bits along with them.
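
For a rough sense of scale, here is a back-of-the-envelope density estimate in the same spirit. The 10 nanometer grain size comes from the paragraph above; the number of grains per bit is my own illustrative assumption, so take the result as an order of magnitude only:

    NM_PER_INCH = 25.4e6   # nanometers in one inch

    def granular_density_gb_per_sq_inch(grain_nm=10.0, grains_per_bit=20):
        """Idealized areal density: square grains, a small cluster of grains per bit."""
        bit_area_nm2 = grain_nm ** 2 * grains_per_bit
        bits_per_sq_inch = NM_PER_INCH ** 2 / bit_area_nm2
        return bits_per_sq_inch / 8 / 1e9    # bits -> gigabytes

    print(f"~{granular_density_gb_per_sq_inch():.0f} GB per square inch")   # roughly 40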

However, it’s hard to make these grains much smaller with the current deposition process, which offers little control over where and how each grain forms. “Bit-patterned media” has emerged to solve this problem. Using the same precision fabrication methods used to make computer chips, scientists can construct nanoislands of magnetic material that are uniform and evenly spaced, with each island storing exactly one bit. This approach has been shown to increase the data density severalfold, to 500 GB per square inch [1]. The process is not yet optimized, though, so producing this media is expensive and the quality is not up to par.
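
Taking the quoted 500 GB per square inch at face value, a quick calculation (again an idealized square layout, one bit per island) gives how closely those islands would have to be packed:

    import math

    NM_PER_INCH = 25.4e6                           # nanometers in one inch
    bits = 500 * 8 * 1e9                           # 500 GB/in^2 expressed in bits
    pitch_nm = math.sqrt(NM_PER_INCH ** 2 / bits)  # center-to-center island spacing
    print(f"~{pitch_nm:.0f} nm between island centers")   # ~13 nm, comparable to today's grains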

But even if bit-patterned media becomes more efficient to produce, there is still another hurdle. Making the magnetic islands smaller increases the likelihood that they will spontaneously demagnetize due to random thermal fluctuations. One obvious way around this problem is to increase the coercivity of the material. But this creates a new problem on the other end: it makes writing more difficult, as a stronger magnetic field is needed to set the magnetization in the first place. A clever solution, called thermally assisted recording, gets around this trade-off. A laser is shined onto the high-coercivity material to heat it up, which temporarily reduces the coercivity, so writing is easy. Once the data is written, the laser is turned off and the information is sealed into the cooled-off material, which returns to its normal high coercivity.
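
The trade-off can be sketched with a deliberately crude toy model: suppose the field needed to write a bit falls off as the material is heated toward its Curie temperature. The linear form and every number below are illustrative assumptions, not real material parameters:

    def write_field_needed_tesla(temp_k, curie_temp_k=700.0, room_temp_coercivity=3.0):
        """Toy model: coercive field shrinks as temperature approaches the Curie point."""
        if temp_k >= curie_temp_k:
            return 0.0                      # magnetization (and coercivity) vanish entirely
        return room_temp_coercivity * (1 - temp_k / curie_temp_k)

    HEAD_FIELD_TESLA = 1.0                  # field a write head can supply (illustrative)

    for temp_k in (300, 500, 650):          # room temperature, warm, laser-heated
        needed = write_field_needed_tesla(temp_k)
        verdict = "writable" if needed < HEAD_FIELD_TESLA else "too hard to write"
        print(f"{temp_k} K: need ~{needed:.2f} T -> {verdict}")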

Combined, bit-patterned media and thermally assisted recording have been shown to push densities up to 1000 GB (1 terabyte) per square inch [2]. If this combination can sustain the exponential growth, then by 2020 we should expect densities around 10,000 GB (10 TB) per square inch. Dive into the details and this means each magnetic island will sit only two or three nanometers from its neighbors, a distance of just 10 to 20 atoms. Thus 2020 might mark the limit of magnetic recording; we just can’t get any smaller!
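
Checking that arithmetic with one last sketch, assuming a square grid of islands, one bit per island, and a rough atomic spacing of 0.25 nanometers (all illustrative assumptions):

    import math

    NM_PER_INCH = 25.4e6     # nanometers in one inch
    ATOM_NM = 0.25           # rough atomic spacing, for scale only

    def island_pitch_nm(gb_per_sq_inch):
        """Center-to-center island spacing for one bit per island on a square grid."""
        bits = gb_per_sq_inch * 8 * 1e9
        return math.sqrt(NM_PER_INCH ** 2 / bits)

    for density in (1000, 10000):    # today's demonstration vs. the 2020 extrapolation, in GB/in^2
        pitch = island_pitch_nm(density)
        print(f"{density} GB/in^2 -> {pitch:.1f} nm pitch (~{pitch / ATOM_NM:.0f} atoms)")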

So will magnetic media survive? Will it be taken over by some other new thing? Solid-state (flash) drives are gaining popularity for file storage. They are more physically resilient than hard drives, since there are no moving parts to break. However, they are still much pricier per gigabyte than hard drives, and concerns still loom about their permanence compared to magnetic storage. There are other good candidates too, for instance phase-change random access memory (PCRAM) and spin-transfer torque random access memory (STT-RAM) [3]. But no matter what ultimately wins, magnetic recording media will be around for a long time: it is cheap, well studied, and very permanent.

On the other hand, the question of survival may be moot for the average consumer of the future. Cloud-based storage is becoming more prevalent and might take over as our main file storage system. It’s similar to our current monetary system: while you carry some cash on you, most of your money sits in a bank and can be accessed from any ATM. So in the future, your laptop’s hard drive (the “wallet”) may actually have less storage than it does today, because you won’t need to carry so much data (“cash”) on you. Most of your files will be stored on a central cloud that you can access from any computer. So while the companies that run the cloud (e.g., Google or Dropbox) will probably care about which recording medium they use for their massive storage banks, you might have your head in the clouds about the whole issue.


[1] Mate, Mathew. “How new disk drive technologies are pushing the nanoscale limits of materials and mechanics.” MEAM seminar, UPenn, 1 February 2011.
[2] Stipe, B. C., et al. Nature Photonics 4, 484 (2010).
[3] Kryder, M. H., and Kim, C. S. IEEE Transactions on Magnetics 45, 3406 (2009).
