Why are they putting "processors" on hard drives?

54

14

What does it mean when hard drives have a processor on the hard drive? How does it work, and what benefit does it have?

I don't understand - the CPU on the computer is the processor and the hard drive transfers its contents to the host computer's RAM. Do additional processors pre-process the data somehow?

Here are some examples:

  1. Western Digital WD Black WD1002FAEX 1TB "Dual processor speed"
  2. NETGEAR ReadyNAS 312 2-Bay Diskless Network Attached Storage "Dual-core Intel 2.1GHz processor and 2GB on-board memory"

Also, routers now have processors, too. Why is that necessary? I guess it sort of makes sense - some logic needs to happen for the packets to be read in to know which ports to send them out on, but why did old routers not need them?

Example of a wireless router with processor: "Dual-core processor"

I'm surprised, because the von Neumann machine model doesn't show a processor on the storage block of its diagram.

Celeritas

Posted 2014-08-20T04:14:44.583

Reputation: 7 487

19That Netgear isn't just a router but a full-fledged file server. As for the hard drive, it presumably does some preprocessing on one processor and I/O on the other. Theoretically a bit faster, but an SSD is still the king of speed. It looks like the ASUS router has some VPN features and other fanciness that would need some processing power, hence the dual core. – user341814 – 2014-08-20T04:22:25.960

17The Von Neumann model says nothing about the structure of I/O devices. You still need a graphics card to drive a monitor, even though that model lumps it all under a single "output" block. – user253751 – 2014-08-20T10:11:09.733

10The Von Neumann architecture (from 1945) is a great starting point (conceptually) to understand stored-program computers. The actual implementation of modern computers (including most peripherals) is significantly more detailed. In 1945 there were no "smart peripherals" so they would not be represented in the diagram. Cars are conceptually the same as they were in 1945 (four wheels, an engine, steering wheel) but you'd not expect a simplified diagram of a car from 1945 would give you a comprehensive understanding of them today. – Maxx Daymon – 2014-08-20T14:07:28.623

7

The von Neumann architecture diagram also doesn't include an arrow between "Memory" and "Storage". Consider DMA.

– a CVn – 2014-08-20T14:26:14.787

All that "Von Neumann architecture" means is that the processor is "programmable", and the program memory is shared with the data memory. (As opposed to a "Harvard machine", where the program memory is separate from the data memory.) – Daniel R Hicks – 2014-08-20T15:19:01.613

3Did you know that (apart from Apple - because of Woz) every early home microcomputer (that I can think of) had a processor in the floppy drive? Remember the chunk-chunk-chunk sound of early Apple floppy drives? That was because they found track zero by moving the drive arm the maximum distance three times. – Elliott Frisch – 2014-08-22T17:43:10.320

@ElliotFrisch I thought that was done by the OS? – user253751 – 2014-08-24T03:15:04.827

@ElliottFrisch: The reason that most 1980's home computers used a microprocessor in the floppy drive was that reading a floppy drive requires that a byte of data be accepted about once every 20-30 microseconds while a sector is being read; that required either using DMA circuitry, or else having a processor which could devote many thousands of consecutive cycles to the task of reading a disk. On a machine like the Commodore 64, the video chip takes over the processor bus 1500 times per second, delaying code execution by 40-43 microseconds each time. Many other machines with fancy graphics... – supercat – 2014-08-25T15:38:30.927

...such as the Atari 800 also use cycle-stealing. The ability to steal memory cycles allows the Commodore and Atari to display much fancier graphics than the Apple, but means that their main processors cannot perform any task which would require their undivided attention. Although the Apple II clock is slightly irregular because of the video (most cycles are 977.8ns, but every 65th cycle is 139.6ns longer), that discrepancy is small enough to be ignored. The loss of groups of 43 consecutive cycles, isn't. – supercat – 2014-08-25T15:45:04.877

Arguably, the VIC 20 (which preceded the Commodore 64) could have used its own processor to handle floppy drive access if it disabled the 60Hz keyboard-scanning interrupt during floppy access, but the machine only had 5K RAM and a single CPU-bus slot. The amount of circuitry that would have been needed to let the VIC 20's processor control a floppy drive "directly" while still having a slot to plug in RAM expansion units would have been sufficiently great that adding the processor as well represented a minimal added expense. – supercat – 2014-08-25T15:55:02.223

From an engineering standpoint, it might not have been a bad idea for Commodore to have produced an interface cartridge which could connect to either the floppy drive or printer, but from a marketing standpoint saying the computer could connect directly to a roughly-$400 (IIRC) printer and a $599 floppy was probably better than saying it would require a $100 controller, even if the $100 controller would have allowed the prices of the floppy and the printer to be reduced by $100 each. – supercat – 2014-08-25T15:59:16.113

@supercat Fair enough; and I did know most of that. My point was that integrating processors onto disk drives was not a recent phenomenon. I also found it amusing to reflect that the C64 had a MOS6502 as a main processor... and every 1541 floppy drive also had a 6502. Of course, Commodore bought MOS Technology so they could source them cheap.

– Elliott Frisch – 2014-08-25T16:17:24.637

@ElliottFrisch: I do not believe the IBM PC floppy drives, nor the IBM PC floppy controller cards, included anything that would be considered a microprocessor in the usual sense [the Apple's floppy controller card contained a discrete logic machine that executed two "instructions" from its own ROM for every 6502 cycle; I think that's just as much of a "processor" as anything in the PC drives or controller]. The PC's controller could perform a more sophisticated sequence of steps without processor involvement than could that of the Apple II, but... – supercat – 2014-08-25T16:31:51.973

@ElliottFrisch: I think things like "fetch N bytes of data and stop" were implemented by using a counter which was hard-wired to count bytes, rather than by using a shared ALU to decrement the value of a register which supported only simple "read value" and "write value" functionality. – supercat – 2014-08-25T16:36:41.230

Answers

80

Well, HDDs have always had processors, mainly to cache data and to do other HDD housekeeping such as marking bad blocks.

The Netgear product you linked is a NAS, which lets you stream media from it over the network, so it's not really an HDD. It's more like a network-connected HDD with some fancy software that allows you to stream information over the network.

Old routers also had processors, though they used to be slow and weren't advertised at all. The WRT54G, which came out in 2002, had a Broadcom BCM4702 running at 125 MHz, which is not very fast. These days, however, we demand more from routers, and features such as VPN require faster processors.

matthew5025

Posted 2014-08-20T04:14:44.583

Reputation: 770

14HDDs have not always had recognizable "processors", but they've certainly been common for 15-20 years. – Daniel R Hicks – 2014-08-20T12:17:48.097

21ST-506 drives were "dumb" drives and were popular right into the early 1990s. IDE (Integrated Drive Electronics aka "smart" drives) put the controller (CPU) right on the drives, as did SCSI. – Maxx Daymon – 2014-08-20T14:11:00.813

11

Hard drives containing processors go back as far as the early 1960s, with the peripheral processors of the CDC 6000 series and their equivalents in the IBM System/360 (and possibly earlier machines).

– nobody – 2014-08-20T16:39:33.757

Aren't hard drive caches different than processors? – Celeritas – 2014-08-20T16:59:34.583

3Well, you do need a processor for the cache to function optimally, e.g. for deciding what data to cache – matthew5025 – 2014-08-20T17:23:52.977

3Hard disks have certainly not always internally tracked bad blocks. Why do you think MS-DOS 6.0 introduced Scandisk and its surface scan feature to populate the FAT with a list of bad clusters? – a CVn – 2014-08-20T18:07:15.183

Because of floppy disks? – domen – 2014-08-21T13:06:16.060

2Drives don't need to be doing anything fancy (like caching) to make use of a processor. Drive processors handle even the most fundamental operations: receiving incoming commands, moving the heads, processing the magnetic signals, etc. Application programming would be extremely difficult (to say the least) if the CPU had to synchronously and directly deal with the disk platters. – nobody – 2014-08-21T16:20:06.780

Even Commodore's 1540 floppy drive, released in 1982, came with a MOS 6502 processor -- the same type of CPU used in the "host" computer. This was used to manage Commodore DOS (disk operating system) on the device, which was referred to at the time as an "intelligent peripheral". – Desty – 2014-08-26T11:40:23.287

125

I don't understand - the CPU on the computer is the processor and the hard drive transfers its contents to the host computer's RAM. Do additional processors pre-process the data somehow?

The CPU is a processor; there are others. A processor is what runs program code, so any device that has firmware (which is code) has a processor of some sort.

A hard drive has its own (small) processor running firmware that implements an interface protocol (e.g. SATA or SCSI) and controls the drive's motors. Think of your hard drive as a specialized computer-within-a-computer; the SATA cable is like a network cable that lets it communicate with the "main" computer. The CPU creates messages (such as SATA command packets) to tell the drive what data it wants, and sends them to the drive through the cable; the drive's processor looks at the messages from the CPU, and controls the drive's motors and magnetic heads to actually read or write the data.
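The computer-within-a-computer division of labor can be sketched as a toy simulation. Everything here (the `Command` shape, `ToyDrive`) is invented for illustration; a real drive's firmware speaks SATA or SCSI and controls physical motors, but the split is the same: the host sends a block-level command over the cable, and the drive-side processor decides how to satisfy it.

```python
from dataclasses import dataclass

@dataclass
class Command:
    op: str        # "READ" or "WRITE"
    lba: int       # logical block address the host asks for
    data: bytes = b""

class ToyDrive:
    """Simulates a drive whose on-board processor services block commands."""
    def __init__(self, blocks=16, block_size=4):
        # the "platters": fixed-size blocks of zeroed storage
        self.media = [bytes(block_size) for _ in range(blocks)]

    def handle_command(self, cmd: Command) -> bytes:
        # The drive's own processor, not the host CPU, decides how a
        # block address maps onto head movement and platter access.
        if cmd.op == "WRITE":
            self.media[cmd.lba] = cmd.data
            return b""
        elif cmd.op == "READ":
            return self.media[cmd.lba]
        raise ValueError("unknown opcode")

drive = ToyDrive()
drive.handle_command(Command("WRITE", 3, b"abcd"))
print(drive.handle_command(Command("READ", 3)))  # b'abcd'
```

The host never touches `self.media` directly; it only exchanges messages, just as the CPU only exchanges SATA packets with the real drive.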

A NAS is a computer running file-server software. In principle it's no different from setting up shared folders on your PC; the NAS is running a more lightweight operating system on a slower processor, but doing essentially the same work. Same goes for a router.

Wyzard

Posted 2014-08-20T04:14:44.583

Reputation: 5 832

53I like this answer. CPU is a *Central Processing Unit*, so there have to be other ones. – gronostaj – 2014-08-20T10:07:42.740

19Key point is "any device that has firmware (runs code) has a processor of some sort." Way to go Wyzard! – Mindwin – 2014-08-20T21:18:53.383

2Power Loss Protection is an example of a feature that can be implemented in a hard drive with a processor and program code. The drive can detect when power from the motherboard is lost. The program running in the HD's processor can then write the last bit of buffered data to the disk (with power from an on-board capacitor). Since the motherboard has no power, the CPU on it is useless to any unfinished HD write operation. So it makes sense that the HD has its own bit of power, processor and program code to finish up buffered writes and shut down cleanly. – MikeM – 2014-08-22T07:51:41.047

2And from Wikipedia: "Some early PC HDDs did not park the heads automatically when power was prematurely disconnected and the heads would land on data. In some other early units the user would run a program to manually park the heads." - With a processor and program code that issue was solved too. – MikeM – 2014-08-22T08:00:34.933

@gronostaj so we also need Decentral Processing Units? – Thorbjørn Ravn Andersen – 2014-08-28T06:56:20.280

@Michael.M Buffering is also an example of a feature which can be implemented in a hard drive with a processor. – Thorbjørn Ravn Andersen – 2014-08-28T06:57:24.170

33

If you could look in detail at the workings of a typical desktop PC, you'd find processors all over the place. If you have a keyboard and mouse connected to USB ports, there's a processor inside the keyboard and one inside the mouse speaking the USB protocol.

In the case of a hard drive, there's a ton of things for that processor to do. For one thing, the processor has to position the head, wait for the right moment, and then send the data out to the platters. When the CPU asks to read a bunch of data, the processor finds the optimum order to retrieve that data from the disk, and maybe even fetches some extra data that happened to pass under the head to put into cache in case the CPU asks for it next.
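The "optimum order" idea can be sketched as a one-directional elevator sweep. This is a hypothetical simplification (real firmware also accounts for rotational position, not just track distance), but it shows why the drive, which knows the head's location, is better placed to reorder requests than the host:

```python
def reorder_requests(head: int, requests: list[int]) -> list[int]:
    # Elevator-style sweep: service every request at or beyond the
    # current head position while moving outward, then sweep back
    # through the remaining requests in the opposite direction.
    ahead = sorted(r for r in requests if r >= head)
    behind = sorted((r for r in requests if r < head), reverse=True)
    return ahead + behind

# Head at track 50; naive order would zig-zag across the platter.
print(reorder_requests(50, [10, 95, 52, 80, 12]))  # [52, 80, 95, 12, 10]
```

Two long sweeps replace five random seeks, which is the whole benefit of letting the drive's processor schedule its own work.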

Modern hard drives can also do SMART health checks in the background. The CPU doesn't have to concern itself with these things, other than possibly to ask for the results periodically.

Modern SoHo "routers" aren't just routers. They're also access points, switches, DHCP servers, web servers, and they implement NAT, firewalling, sometimes even NAS functions, and a ton of other things. Their processors have tons of work to do.

Basically, a processor is so cheap to implement these days that they're used in almost any case where they make sense. The exception would be cases where the task is very simple or where high performance is required. Heck, there's probably even one in your power supply to manage fan speeds and optimize power consumption.

David Schwartz

Posted 2014-08-20T04:14:44.583

Reputation: 58 310

1

"If you have a keyboard and mouse connected to USB ports, there's a processor inside the keyboard and one inside the mouse speaking the USB protocol." I thought this was the job of a controller. Are controllers sometimes considered the same things as processors?

– Celeritas – 2014-08-22T17:24:43.853

1Controllers can be pure hardware, but the requirements on them tend to increase, making the hardware more and more complex. At a certain point of complexity, it's easier to use a processor and do the work in software. But that doesn't give the controller a different name; users typically don't want or need to know how the controller is implemented. Also, with the complex ASICs and FPGAs of these days, the distinction from processors becomes a bit fuzzy. – Guntram Blohm supports Monica – 2014-08-22T17:57:39.460

I read that as "mouse squeaking" at first :) – Tom Zych – 2014-08-31T08:17:14.600

21

Many current "smart" appliances are in fact full-fledged computers, often running some clone of Linux. If the device is permissive enough, or has been rooted/jailbroken, you might be able to tinker with it, install new packages or even change the OS. They of course use CPUs.

Examples include phones, TVs, DVD players, e-book readers, NAS boxes, home routers, modems and out-of-band management in servers, which are in fact whole computers with their own OS.

But even dumb devices have processors, often called microcontrollers, responsible for e.g. reading and writing data. The micro SD card in your phone contains a processor, and a SIM card has another one, capable of running Java applications.

Even simple children's toys, like a toy traffic light, have a microcontroller, as it is easier and cheaper to implement the light logic in the microcontroller's software than in discrete components.

Edheldil

Posted 2014-08-20T04:14:44.583

Reputation: 311

8Actually, few people know that a SIM card is a real computer and that you can even reprogram it on the fly through special SMSes – phuclv – 2014-08-20T13:22:02.433

See JavaCard

– nobody – 2014-08-20T16:20:09.857

Not just SIM cards but any Chip & Pin smart card that conforms to the ISO standard. Bank cards, Loyalty cards and many more all carry these things now and some of them are surprisingly powerful. – shawty – 2014-08-22T18:09:25.140

20

To answer your specific question about hard disk drives, which no one seems to have addressed directly:

SATA (and every other disk attachment interface I can think of) works with blocks. Commands are defined to (among many other things) read and write specific physical storage blocks, and the data is transferred over the attachment interface cabling. Those commands must be processed somewhere, either in software running on an on-board processor or by some sort of pure hardware setup that would need to do much the same thing.

Guess what's cheaper, physically smaller, quite likely easier to work with, and usually much more versatile? That's right: a processor, a small amount of program memory (flash, EPROM, ROM, or whatever else fits your needs) and a small amount of RAM. If your needs are modest enough, the latter two might even be included within the processor itself (see for example the PIC family of microcontrollers).

Also, remember that the disk platters don't actually store bits; they store magnetic flux encodings of bits. Something must process the flux readings coming from the read head, or turn the data into flux transitions to be fed to the write head. If a read is imperfect, error correction data (stored alongside the data) is used to correct the error and return good data rather than garbage (unfortunately this doesn't always succeed), or to return an error if the problem is too severe to be correctable. Again, that's easiest to implement in software, which must run on something, and a processor with some memory once more fits the bill quite nicely.
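As a toy illustration of the error-correction idea, here is a Hamming(7,4) coder, which can fix any single flipped bit in a 7-bit codeword. Real drive firmware uses far stronger codes (Reed-Solomon or LDPC variants), so this is only a minimal sketch of the principle:

```python
def hamming_encode(d):
    # d: four data bits; interleave three parity bits at positions 1, 2, 4
    p1 = d[0] ^ d[1] ^ d[3]
    p2 = d[0] ^ d[2] ^ d[3]
    p3 = d[1] ^ d[2] ^ d[3]
    return [p1, p2, d[0], p3, d[1], d[2], d[3]]

def hamming_decode(c):
    # c: seven received bits, possibly with one bit flipped
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]
    pos = s1 + 2 * s2 + 4 * s3      # syndrome = 1-based error position
    if pos:
        c = c[:]
        c[pos - 1] ^= 1             # flip the bad bit back
    return [c[2], c[4], c[5], c[6]]

word = hamming_encode([1, 0, 1, 1])
word[4] ^= 1                        # simulate one misread flux bit
print(hamming_decode(word))         # [1, 0, 1, 1]
```

The decode step (compute a syndrome, locate and repair the error) is exactly the kind of bit-twiddling loop that is far easier to run as firmware on a small processor than to wire up in discrete logic.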

Having lots of processing power on-board means that you are able to use more advanced encoding and error recovery schemes, which in the case of hard disks means that you can cram more data onto the same physical surface area. The end result is a larger storage capacity for you than what would otherwise be possible. The processing power of the hard disk microcontroller itself, however, is not of critical importance to the user of the drive.

a CVn

Posted 2014-08-20T04:14:44.583

Reputation: 26 553

1To elaborate on "or using some sort of pure hardware setup" -- microcontrollers (as you mention) and custom ASICs used to be much more common. Nowadays it's often cheaper and simpler to build with "real" processors which run an embedded OS off of ROM, than to design and fabricate custom ASICs and write highly-specialized firmware for the microcontrollers. The hardware problem becomes a software problem, and the components are more standardized; both of these reduce cost... and open the door for new capabilities. – echo on – 2014-08-21T20:40:16.010

12

Forgive me if I have overread this point but I haven't read it in the answers yet (though all other answers are great).

Putting processors in hardware devices also reduces the workload on your central processor, the CPU on the mainboard.

Think of a computer with a single CPU that has to do all the work itself: control the memory, control the bus, and manage all the hard-drive-specific operations (spin the drive, position the heads, handle the magnetics of writing and reading, etc.).

If everything that needs to be done had to be done by your CPU, there wouldn't be much time left for your actual tasks.

Stefan

Posted 2014-08-20T04:14:44.583

Reputation: 299

9

Let's start with the obvious: those "processors" have always been there at some level. With older drives they were on controller cards, and anything approaching modern has had the disk controller on the drive itself. The "IDE" (Integrated Drive Electronics) designation for PATA drives refers to the fact that the electronics were on board, as opposed to on a separate card.

While traditionally these have been microcontrollers, my SSD, a Samsung 840, has a three-core ARM-based processor. These chips do things like wear leveling and various internal translations (such as converting ATA or SCSI commands into something the drive electronics groks). Two factors, that hardware is a lot more complicated than it used to be, and that processors are cheaper and faster than they used to be, mean it makes sense to chuck a cut-down general-purpose core into a drive. But yes, these processors have always been there.

With routers, they've always had MIPS or ARM cores; they basically need the power to run a web server, do routing and so on. Many network-attached drives use similar or better cores so they can handle things like SMB or the admin page.

For that matter, for many years keyboards had the same M68K processors you'd find in many old computers, and there are mice with ARM cores to handle things like fancy lighting and ever-faster responses.

Journeyman Geek

Posted 2014-08-20T04:14:44.583

Reputation: 119 122

When have keyboards ever had 68K microprocessors!? And did drives like the ST-225 really have processors in them? – supercat – 2014-08-23T04:53:22.393

Well, this was an old, crappy Packard Bell keyboard I took apart something like 4-5 years ago, and it was second-hand at the time. It was a bit of a surprise. The ST-225 predated IDE, and so needed a separate controller, I assume. I'd hardly consider it something you would find in a PC from the last 15 years or so – Journeyman Geek – 2014-08-23T05:04:55.823

Are you sure it was a 68K versus something like a 68HC05? – supercat – 2014-08-23T05:29:27.047

Re the ST-225: MFM drives are old enough that they use the main computer's CPU. The drive itself merely has a cable with the raw signal from the drive's head(s), a signal to change tracks, a signal for the track-change direction (to a higher track or to a lower track) and a signal to indicate track zero was reached. All management (including keeping track of faulty sectors, as printed on the disk's label) was done in software on the main computer. – Hennes – 2014-08-24T13:24:24.457

5

Also, routers now have processors, too. Why is that necessary? I guess it sort of makes sense - some logic needs to happen for the packets to be read in to know which ports to send them out on, but why did old routers not need them?

Routers have always had a processor. The two original routers were software running on PDP-11s (yes, the successor to the machine on which Unix was originally written). One was developed at Stanford and the other at MIT. The Stanford router was later licensed to a then-small start-up named Cisco Systems. Cisco re-packaged PDP computers into custom enclosures, slapped on a "Cisco" label and sold them as routers.

So that's what old routers used - processors.

I remember reading an interview with one of the founders of Cisco who said something along the lines of: "that's the advantage of selling software as metal boxes - you don't need to convince people not to pass copies of it to their friends". My google-fu fails me today, so I can't find the actual quote. Those were the days before a certain founder of a small company called Microsoft convinced people that they must pay for software (back then it was an early version of BASIC).

slebetman

Posted 2014-08-20T04:14:44.583

Reputation: 547

4

All semi-autonomous equipment, ever since the birth of the computer revolution, has had some sort of "processor" on it; it's just that until now it was never really flagged as such.

What you're seeing here is the ongoing spread of half-truths by over-zealous marketing, where salespeople are encouraged more and more to believe they are the stars of the show, simply because they are the ones making the profits.

The fact of the matter, however, is this: anything that has to perform a set of tasks, where the next iteration of a process can differ from the previous one, must have some kind of interpreter that can make sense of the instructions the device is given and then react to them in some fashion.

Back in the mists of time, terminology such as "controllers" was the norm, but these still boiled down to the same thing.

Take, for example, an IDE hard drive with its on-board IDE controller. While this is not a CPU in the same sense as the one on your PC's main board, it is nevertheless still a form of CPU.

The host PC sends op codes (short for operation codes) across the bus (PCI, ISA, MCA, PCIe or whatever) to the drive's controller; the controller reads the code, along with any data provided with it, and turns them into the physical operations that cause the drive to move the heads to the correct place and read the requested data.

Routers have an even longer history. Cisco has built networking gear for the best part of three decades, and every single one of those devices has had a custom controller/CPU in it all that time, designed by Cisco, for Cisco, expressly for the purpose of programming and controlling their entire range of routers and switches.

Graphics cards are another case. You hear people bandy the term "GPU" around like it's some mystical thing that only does graphics. It's not; it's a massively parallel mathematical algorithm processor. I've just finished doing the technical edit on a book on Nvidia CUDA, and what I learned about Nvidia GPUs was rather surprising: these things are processors in their own right, designed to do a specialist set of jobs, but still semi-intelligent and capable of many different types of operation.

As has been pointed out already, the Netgear ReadyNAS is actually more like a full PC in its own right. It's just specially designed to function only as a remote storage device.

If you wanted to, there would be nothing stopping you from re-programming the Netgear device with new software and making it function perfectly well as a web server, database server or even a small Linux development server. (A quick search will show you more than a handful of projects aimed at doing exactly that with these NAS units.)

It might also surprise you to learn that it's not just hard drives that have "processors" on them these days. Try this little experiment:

Go stand in your kitchen and see just how many CPUs you can count.

I'm willing to bet that your fridge/freezer, washing machine, dishwasher, oven and microwave (at the very least) all have some sort of processor in them. It may not be an Intel Core i7, but it's still a processor, designed to sit there quietly, interpreting instructions sent to it by other electrical/digital circuits and turning them into the physical operations you see.

So what is the definition of a processor?

Well, it's a bit hard to pin down these days, but in general the definition of a "processor" is something along the lines of: any self-contained unit that is capable of acting on external inputs in a semi-intelligent way and producing a known set of outputs derived from those inputs.

So any stand-alone unit, circuit, chip or autonomous machine that can effect a physical manifestation of some known process, based on a set of pre-defined inputs, can in the most basic and generic sense be considered a processor of some description.

shawty

Posted 2014-08-20T04:14:44.583

Reputation: 352

+1, I find this a nice forward-thinking answer. I would have liked to read about the massive parallelization of GPUs, in terms of "1024 cores all executing the same instruction at the same time", to be more precise in that direction, but anyhow I like your answer :) – Stefan – 2014-08-22T06:57:32.217

1Thanks :-) If you're interested in the massive parallelism of GPUs, keep an eye out in Syncfusion's free e-book range for "CUDA Succinctly"; it should be released in the not too distant future, and it's free to download. – shawty – 2014-08-22T18:06:32.590

4

While hard drives and flash media cards have not always included processors, their design is subject to a fairly simple principle: something with a processor has to know what is necessary to store and retrieve data. If a storage device doesn't contain a processor but is connected to something that does, then the hardware must allow information to be stored and retrieved using the exact sequence of steps the connected device expects. Even if storing and retrieving information some other way might be more efficient, there may be no way by which the connected system could know about it.

As an example, most hard drives work by magnetizing each piece of the disk in one of two directions. If an "L" represents magnetization in one direction for a certain amount of time and an "R" represents magnetization in the other direction for that same amount of time, trying to store data directly by using "L" to represent a 1 and "R" to represent a 0 would be very unreliable, because of two factors:

  1. A long string of ones or zeroes would represent a long string of Ls or Rs, which may in turn be misread as a slightly-longer or slightly-shorter string. For example, if the drive motor is running 5% slower when data is read than when it was written, what was written as a string of 20 Ls might get misread as a string of 21 Ls.

  2. Two strings of Ls separated by a small number of Rs may spread into that small string of Rs and "gobble it up". Likewise two strings of Rs separated by a small number of Ls.

Because of these factors, drives generally need to code information into runs of Ls and Rs whose lengths fall between some minimum and maximum; the optimal values for those bounds may vary depending upon the quality of the electronics, motor, head, and media. Additionally, because the outer tracks on a disk are longer than the inner tracks, they may be able to hold shorter runs of Ls and Rs than the inner tracks can.
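One concrete way to bound run lengths is FM ("frequency modulation") encoding, an early floppy-era scheme: a clock transition is written before every data bit, so the medium never sees a long transition-free stretch. This is a simplified string model of the idea (real drives soon moved to MFM and RLL codes, which pack data more densely under the same kind of run-length constraints):

```python
def fm_encode(bits: str) -> str:
    # Model flux as a string: '1' = transition, '0' = no transition.
    # Each data bit becomes a two-cell pair: a mandatory clock
    # transition, then the data bit itself.
    return "".join("1" + b for b in bits)

encoded = fm_encode("00001000")
print(encoded)  # 1010101011101010

# Even an arbitrarily long run of zero data bits never produces more
# than one transition-free cell in a row:
print(max(len(run) for run in fm_encode("0" * 32).split("1")))  # 1
```

The cost is obvious: every data bit takes two flux cells, i.e. half the raw capacity goes to clocking, which is exactly the inefficiency that smarter codes (and the on-drive processors that implement them) were introduced to reduce.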

In order for information to be stored on a drive, the drive must be connected to something that knows how to convert data into strings of Ls and Rs that the media will be able to hold. If converting to and from Ls and Rs were the responsibility of a controller separate from the drive itself, a drive could only use formats that would be understood by every controller to which it might be connected. Moving the controller into the drive assembly alleviates this problem: if each manufacturer ships a drive with a controller that understands how that drive stores data, it doesn't have to worry about whether any other controller would understand that data, since information will only ever be stored and retrieved by the controller contained in the drive assembly.

supercat

Posted 2014-08-20T04:14:44.583

Reputation: 1 649

3

As people have already explained, many peripherals and devices have always had processors to provide their core functionality, and even relatively basic routers are in effect small servers. (The most visible aspect for the end user would be the web-based configuration wizards: you need an IP stack, a web server, etc., and a processor to run them on.)

But you should also realize that a modern consumer NAS is even more than that. Usually you can log onto it through a web browser and get a GUI with many applications: a software package management system, multiple services to stream media files, automatic updates, reading other storage devices attached to a USB port, etc. That is almost a full-fledged desktop environment (although some of the work for the GUI is obviously shared with the client machine).

Relaxed

Posted 2014-08-20T04:14:44.583

Reputation: 131

2

All of the answers on this page were too long (or so I felt), so I'd like to add a shorter one...

  • Disks have processors because the physical activity of moving from spot to spot on the disk, in a good order, is a semi-difficult task.

  • If you read/write data in a bad or slow order, given the distances between the locations involved, you can severely slow down data transfer.

The best way to describe it: imagine you work in a store and are told to fetch items from the most distant corners before picking up everything en route.

The smart instruction is to pick up everything en route. This is roughly how AHCI works with NCQ.

NCQ needs more intelligent processing because it plans its seeks better.

Before this, there was something called PIO, or programmed I/O, which was slow because: 1. the distance between the CPU and the HDD is vast in computer terms, so the latency of deciding each command means slow transfers; 2. the CPU does (and needs to do) other stuff; 3. that's... really the main things.

The computer asks for files here and here; the disk is responsible for how to get them to the computer.

... k im done

TardisGuy

Posted 2014-08-20T04:14:44.583

Reputation: 436

What you're describing is known as an elevator algorithm. Command queueing (such as SATA's NCQ) lets the CPU send multiple commands to the drive as a group, so that the drive can decide the most efficient order in which to fulfill them. Without command queueing, the CPU has to wait for the drive to service each request before sending the next one, so the drive has to service the requests in the order that the CPU sends them. That can be less efficient since the CPU doesn't know the internal physical layout of the disk.

– Wyzard – 2019-07-29T00:29:40.140

PIO is something different, though. That's a mode where the CPU has to run code to receive the data being read by the drive, which is inefficient. It's generally superseded by DMA, which lets the drive store the data directly into RAM while the CPU works on other things. – Wyzard – 2019-07-29T00:34:25.830

Yeah, I wasn't being exact, but what matters for performance is the resulting latency per transaction. – TardisGuy – 2019-11-02T16:04:09.703

2

All hard drives have always had processors. All routers have always had processors.

Your graphics card has a processor. Always has. Your network interface card has a processor. Always has. Your printer has a processor, your keyboard, your mouse, and on and on and on. I would be hard pressed to think of a device that is connected to your computer that does NOT have a processor of some kind.

They are now being advertised more because their performance is more critical, because we are asking these devices to do more and more.

Bill

Posted 2014-08-20T04:14:44.583

Reputation: 21

2

There is virtually no device in computer electronics so dumb that it can perform its role without a processor - at the very least, virtually everything has to encode a signal in or out at some point. If that signal varies, there must be rules for how it varies, and a processor enforces those rules.

Drifting a little further from the question but reinforcing the everything has processors theme, back in the 80's I was a sysadmin in charge of a few VAX/VMS mainframes.

We had a very fast (noisy) band printer which ran a bank of hammers hitting a high-speed, high-tensile band. I think it was a 600 lines-per-minute printer. That's completely formed 132-character lines, not lines of dots.

To control the timing of how the hammers hit the band, it had some simple electronic circuitry. This needed a different program depending on the band - you could have even faster bands which only had uppercase letters (several sets of ASCII on one band).

The program for that processor was stored on a piece of paper tape which was also read in a continuous loop, every time the printer was switched on (yes it was left running most of the time).

I only found out when my operator got enthusiastic cleaning the printer and found the paper tape. Fortunately he realised it wasn't just a stray bit of paper and didn't try to remove it.

Andy Dent

Posted 2014-08-20T04:14:44.583

Reputation: 271

2

What does it mean when hard drives have a processor on the hard drive?

It means the drive has a small CPU. Generally, any device that has a CPU will have firmware.

How does it work, and what benefit does it have?

Computer peripherals are complex. For example, the act of reading and writing data to a floppy disk drive is fairly involved. You need to manipulate the hardware that moves the drive head, then look for sector headers, figure out whether the data coming in on a read line makes sense according to a protocol, etc.

Let's take a simplified example of reading a floppy drive. Probably the most rudimentary way a CPU can communicate with the outside world is through I/O ports. These ports are connected to lines on the motherboard or sockets: if electricity is flowing on a line when the CPU reads the port, it sees a 1; if not, it sees a 0. Similarly for writing, the CPU can write a 1 to a port to put electricity on the line, or a 0 to stop it.
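A toy sketch of that port-style I/O, in Python for readability - `read_port` and the port numbers are made-up stand-ins for real hardware access (real code would use IN/OUT instructions):

```python
# "Programmed" I/O in miniature: poll a status port until the device
# is ready, then clock in one byte, a bit at a time, MSB first.

def read_byte(read_port, status_port=0x1F7, data_port=0x1F0):
    """Busy-wait for the device, then assemble 8 bits into a byte."""
    while read_port(status_port) == 0:  # poll until the device says "ready";
        pass                            # the CPU does nothing useful here
    byte = 0
    for _ in range(8):
        byte = (byte << 1) | (read_port(data_port) & 1)
    return byte
```

The busy-wait loop is the point: while the CPU sits in it, it can't run anything else, which is exactly the cost this style of transfer imposes.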

So, for a floppy drive, let's say somehow you have a line connected to the read/write head of the floppy. To read data, you need to wait for a "flux reversal" - basically a shift in magnetic energy that causes the line to go from 0 to 1 or 1 to 0. You'd then need to keep track of how much time passes until you detect a second flux reversal, keep doing that until you have all the bits in your sector, and put those measured durations together to recreate the data. This doesn't even get into things like moving the drive head, waiting for the drive motor to reach a stable speed so your timings aren't thrown off, or accounting for the fact that no two motors are exactly the same, so your measurements need some flexibility.
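A heavily idealised sketch of that interval-measuring step, assuming FM (single-density) encoding and zero jitter - the function name, cell time, and threshold are made up for illustration:

```python
# In FM encoding, every bit cell opens with a clock reversal; an extra
# reversal mid-cell means the data bit is 1, no extra reversal means 0.
# So the gap between reversals is either a full cell (bit 0) or two
# half-cells (bit 1).

def decode_fm(reversal_times_us, cell_us=4.0):
    """Turn a list of flux-reversal timestamps (microseconds) into bits."""
    gaps = [b - a for a, b in zip(reversal_times_us, reversal_times_us[1:])]
    bits, i = [], 0
    while i < len(gaps):
        if gaps[i] > 0.75 * cell_us:  # full cell between clocks: no data pulse
            bits.append(0)
            i += 1
        else:                          # clock->data and data->clock half-gaps
            bits.append(1)
            i += 2
    return bits
```

A real controller also has to tolerate jitter, motor speed drift, and sync marks - exactly the timing-critical work described above.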

Hopefully that sounded complicated, because it is.

So sure, you can program a normal computer CPU to do that, but because it's very time-sensitive, your computer's CPU can't really do much else while it's going on. Old computers that did this entirely on the main CPU, in software, to save money - like the old Apple IIe - could not do anything else while reading or writing a disk, for exactly this reason.

By placing a small CPU in the drive, and having a controller on the motherboard which is really just a communications bus, your CPU can run other programs, get/send data to the drive using the bus, and offload most of the physical low-level work to the drive itself. Furthermore, as technology improves, the low-level programming to handle it can stay in the drive, and there is no need to change programs on your computer to work with different internal drive formats.

Regarding routers, the actual low-level routing function is not difficult to do in hardware, and many enterprise-level routers do just that, but it's things like firewalling, port forwarding, access control, and the web interface or console that are complex enough to need a CPU.

I'm surprised, because the von Neumann machine model doesn't include processors on storage

There is nothing in the von Neumann model that says any peripherals can't themselves be von Neumann machines. What makes a peripheral a peripheral is the fact that the CPU can send it commands over some sort of bus or other I/O mechanism and get results back.

LawrenceC

Posted 2014-08-20T04:14:44.583

Reputation: 63 487