How can I intentionally break/corrupt a sector on an SD card?

141

34

I need to test the resilience of some read/write code for some embedded hardware. How might I sacrifice a few SD cards and break several known sectors for a controlled study?

The only thing I can think of is to overwrite a single sector a few million times. I wonder if a Linux badblocks script can be created to run its destructive test on a single sector repeatedly for several hours.

Gabe Krause

Posted 2017-09-11T07:28:36.050

Reputation: 1 279

20Can you change the low-level SD driver to pretend there is a bad block, or is that out of the question? – None – 2017-09-11T19:39:38.067

3@MarkYisri, I don't think the driver is very accessible. Whatever driver we are using is ultra-rudimentary to maximize memory allocation to the rest of the firmware. Also, if it was possible, that would likely be beyond my capability. – Gabe Krause – 2017-09-11T20:51:09.117

3Can you build an SD card emulator? Not the simplest project, mind you. – user253751 – 2017-09-12T00:03:51.737

11Given the goal, you could buy some second-hand SD cards for little money and you may easily get a faulty one, or post a "looking for..." announcement specifically asking for faulty cards. Or search eBay for defective cards. Then you test the card and you'll know the position of the defective areas. – FarO – 2017-09-12T09:18:54.980

Presumably your SD card read functionality is wrapped in an abstraction layer? If so, insert some test s/w into that – Mawg says reinstate Monica – 2017-09-12T09:21:17.463

28Ask any professional photographer. They'll have a pile of sketchy SD cards, surely. – J... – 2017-09-12T10:56:36.347

1Get a Raspberry Pi; they're notorious for breaking SD cards. Write a script to just write and delete files to it over and over. – None – 2017-09-12T14:15:49.723

2Perhaps you could contact an SD card vendor and ask them if you can buy bad SD cards along with information on which sectors are bad on each faulty card? – Kevin – 2017-09-12T17:57:01.470

2I have one that does that itself constantly. Want it? – T.E.D. – 2017-09-12T19:21:48.220

2I feel like this is an XY Problem – jkd – 2017-09-13T22:55:11.070

1@Mehrdad Actually you are not that far off. Not sure if this is still true with the latest flash media, but it used to be that if you removed power at just the right point (during a flash write) you would lose the entire erase block. Industrial grade devices would have a capacitor to allow any write in progress to complete when power was lost to protect against this failure mode. – Michael – 2017-09-15T20:48:39.543

1The cheapest no-name ones off Amazon usually don't last long if they even work at all – Mark K Cowan – 2017-09-19T16:25:20.397

Icepick? Electrostatic discharge? Carefully targeted drill? What do you mean that's not the kind of breaking you were thinking of? – Kaithar – 2017-09-20T08:39:43.047

1Yes, @jakekimdsΨ, this is definitely an XY Problem. What OP really needs is a good test environment for their code, what they think they want is bad SD cards (which there are plenty of people offering). OP is going to have to provide more information on their development environment for us to get them a real solution. – NH. – 2017-09-25T14:51:30.867

You're right. We need a better test environment. But I never requested bad SD cards, in general. Randomly bad cards are not going to contribute to a repeatable testing procedure in a reasonable time frame. I needed to test a known bad sector, which we have since learned is near impossible with built-in SD wear balancing. The right answer (for me) is most likely to be a controllable hardware interface between SD and Device to be tested. But I hesitate to select that as the Right Answer because there are several great solutions posed here for different environments. – Gabe Krause – 2017-09-25T19:28:42.737

Answers

167

An alternative approach that may be useful.

If your code runs under Linux then maybe you can test it with a "faulty" logical device. dmsetup can create devices that return I/O errors. Just build your device using the error and/or flakey target. From man 8 dmsetup:

error
Errors any I/O that goes to this area. Useful for testing or for creating devices with holes in them.

flakey
Creates a similar mapping to the linear target but exhibits unreliable behaviour periodically. Useful for simulating failing devices when testing.

Note: flakey target usage is documented here. Basic example here.

As far as I know an I/O error will be reported immediately, so this is different from real SD card behavior, where you can expect delays, stalling etc. Nevertheless I think this approach may be useful in some cases, at least to perform a fast preliminary test.
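For illustration, a minimal sketch (run as root; /dev/sdX, the device name badsd and the sector range are placeholder choices, not anything mandated by dmsetup):

# Sizes and offsets below are in 512-byte sectors.
SIZE=$(blockdev --getsz /dev/sdX)

# Create a device that passes through to /dev/sdX, except that
# sectors 2048-2303 always return I/O errors:
dmsetup create badsd <<EOF
0 2048 linear /dev/sdX 0
2048 256 error
2304 $((SIZE - 2304)) linear /dev/sdX 2304
EOF

# Point the code under test at /dev/mapper/badsd.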

Kamil Maciorowski

Posted 2017-09-11T07:28:36.050

Reputation: 38 429

34I appreciate that out-of-the-box thinking! We're interfacing at the block level with the SD via an 80MHz Atmel chip and no real OS. – Gabe Krause – 2017-09-11T18:44:21.203

1@GabeKrause In which case the usefulness of this answer depends on how similar the Linux block device API might be to the API of your embedded device driver. – Qsigma – 2017-09-12T08:10:00.290

1dmsetup command for setting up an error device that always returns read errors: https://stackoverflow.com/questions/1870696/simulate-a-faulty-block-device-with-read-errors – Peter Cordes – 2017-09-15T07:28:28.233

1I agree that this sounds like a better solution. First, you can replicate it on any hardware, and you can also simulate the different error modes. For example, I have a 16GB USB flash drive that works fine, but after some time a particular area on it starts to return wrong data. There is no FS error of any kind: you read the file, but the content is different. Some sectors are obviously unstable, and how a particular device will behave cannot be known in advance. – akostadinov – 2017-09-15T08:51:20.790

75

This guy hacked the microcontroller inside SD cards (the controller that, among other things, marks and remaps bad blocks): https://www.bunniestudios.com/blog/?p=3554

You may be able to do the same and arbitrarily mark blocks as faulty.

Today at the Chaos Computer Congress (30C3), xobs and I disclosed a finding that some SD cards contain vulnerabilities that allow arbitrary code execution — on the memory card itself. On the dark side, code execution on the memory card enables a class of MITM (man-in-the-middle) attacks, where the card seems to be behaving one way, but in fact it does something else. On the light side, it also enables the possibility for hardware enthusiasts to gain access to a very cheap and ubiquitous source of microcontrollers.

[…]

These algorithms are too complicated and too device-specific to be run at the application or OS level, and so it turns out that every flash memory disk ships with a reasonably powerful microcontroller to run a custom set of disk abstraction algorithms. Even the diminutive microSD card contains not one, but at least two chips — a controller, and at least one flash chip (high density cards will stack multiple flash die).

[…]

The embedded microcontroller is typically a heavily modified 8051 or ARM CPU. In modern implementations, the microcontroller will approach 100 MHz performance levels, and also have several hardware accelerators on-die. Amazingly, the cost of adding these controllers to the device is probably on the order of $0.15-$0.30, particularly for companies that can fab both the flash memory and the controllers within the same business unit. It’s probably cheaper to add these microcontrollers than to thoroughly test and characterize each flash memory chip, which explains why managed flash devices can be cheaper per bit than raw flash chips, despite the inclusion of a microcontroller.

[…]

The crux is that a firmware loading and update mechanism is virtually mandatory, especially for third-party controllers. End users are rarely exposed to this process, since it all happens in the factory, but this doesn’t make the mechanism any less real. In my explorations of the electronics markets in China, I’ve seen shop keepers burning firmware on cards that “expand” the capacity of the card — in other words, they load a firmware that reports the capacity of a card is much larger than the actual available storage. The fact that this is possible at the point of sale means that most likely, the update mechanism is not secured.

In our talk at 30C3, we report our findings exploring a particular microcontroller brand, namely, Appotech and its AX211 and AX215 offerings. We discover a simple “knock” sequence transmitted over manufacturer-reserved commands (namely, CMD63 followed by ‘A’,’P’,’P’,’O’) that drop the controller into a firmware loading mode. At this point, the card will accept the next 512 bytes and run it as code.

FarO

Posted 2017-09-11T07:28:36.050

Reputation: 1 627

10Of all the answers, this one is probably the closest to what the OP actually was asking for. – Cort Ammon – 2017-09-12T18:56:09.397

11That was a fantastic read! – Gabe Krause – 2017-09-12T21:21:03.863

@Twisty copied some of the relevant parts. – FarO – 2017-09-13T12:24:45.187

2Down the rabbit hole into the world of SD card architecture I go. – Tejas Kale – 2017-09-15T11:23:56.580

38

This typically won't work because most recent SD cards (or eMMC) use static and dynamic wear-levelling, meaning that an intelligent controller interprets your write instruction and maps it to one of the least used flash sectors.

The only thing you could do is try to contact your suppliers and ask for their datasheet; there might be some (vendor specific) ways to retrieve the state of their wear-levelling algorithm. This would potentially allow you to query the state/usage of the underlying flash. Or you might be unlucky and this might not exist.

If your goal is really to destroy flash, all you can do is run massive read and write cycles and continuously check that the data you read back is still consistent. E.g. create two large files, store their checksums, and read/write them repeatedly to verify the checksums. The larger the flash, the longer this process will take.
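A minimal sketch of such a loop, run as root and assuming the card is mounted at /mnt/sd (by design, this will eventually destroy the card):

# Create a test file once and record its checksum.
dd if=/dev/urandom of=/tmp/a.bin bs=1M count=512
sum=$(sha256sum /tmp/a.bin | cut -d' ' -f1)

# Rewrite and verify until the card starts corrupting data.
while :; do
    cp /tmp/a.bin /mnt/sd/a.bin && sync
    echo 3 > /proc/sys/vm/drop_caches   # make the read hit the card, not the page cache
    check=$(sha256sum /mnt/sd/a.bin | cut -d' ' -f1)
    [ "$check" = "$sum" ] || { echo "corruption detected"; break; }
    rm /mnt/sd/a.bin
done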

amo-ej1

Posted 2017-09-11T07:28:36.050

Reputation: 607

2Won't this still work if the SD card is completely filled with data, so that it can't remap much? I don't think they have a lot of spare hidden sectors. – Ruslan – 2017-09-11T11:59:07.537

@Ruslan: No. Block storage devices generally don't know which sectors are taken by files, and which are "free". The exception is devices that support the TRIM command - which is used by SATA disks, not SD cards. – MSalters – 2017-09-11T13:23:24.157

@MSalters the device must know which sectors are filled with something other than zeros/FFs, otherwise it's not a storage device. – Ruslan – 2017-09-11T13:32:17.640

2@Ruslan The device does not need to know if a sector is filled with anything. It only needs to know the content of which sectors to deliver on request and which sectors to write on request. And then there may be some abstraction layer in place making it use other physical memory to represent those sectors following some undisclosed algorithm... - And "full" only means "threshold for concurrently fillable blocks reached", of course. – I'm with Monica – 2017-09-11T13:37:33.883

@AlexanderKosubek in any case, the wear-levelling logic must be aware of whether a sector has something easily discardable or not to remap it. – Ruslan – 2017-09-11T13:39:56.680

@Ruslan Yes, it needs information about the state of the memory, but not of the content. But I don't see how this would make it possible to "trick" the wear leveling into ab-using a specific amount of memory to the level of actually failing. – I'm with Monica – 2017-09-11T13:42:21.690

6@Ruslan: Even if the entire device has data on it, the wear-levelling can still be effective: for example, if sector A has been written once, and sector B has been written 1,000 times, then when yet another write comes in for sector B the card can swap the data for the two sectors, so that sector A contains sector B's data (and will likely get overwritten lots more times - but that's OK because it's fresh), and sector B will contain sector A's data (which will hopefully not change much). Obviously the device also needs to store the mapping of which sector gets stored where. – psmears – 2017-09-11T15:13:40.537

I'm curious if the wear leveling still happens when we read and write at the block level. For instance, if you write to an SD card with a hex editor, each sector is exposed. If I write to a specific sector that's bad, does the SD remap, even at that raw level? – Gabe Krause – 2017-09-11T18:48:34.720

2@GabeKrause yes, that's the nature of the beast. At the lowest level you have either NAND or NOR flash chips (nowadays everything uses NAND), and there is an intelligent controller in front of the NAND chip which terminates the bus (e.g. USB for a USB stick or MMC for an SD card); this chip is responsible for the mapping/wear levelling etc. and abstracts the flash away from you. If you were using raw NAND on embedded Linux, this is what e.g. ubifs would do for you. – amo-ej1 – 2017-09-12T07:57:25.147

@psmears and what if A -> once, B -> 1000, then A -> 1000? Does the controller swap after B -> 1000, thinking that if A was written once then it probably will not be overwritten easily? – frarugi87 – 2017-09-12T15:42:32.657

@frarugi87: What I wrote was just a simple example to show how it is possible for a controller to spread out the load of writes even if the whole device contains data. The actual algorithms that the controllers use are more complex than that (and often patented / proprietary). In general it's a tradeoff - swapping blocks around will be slower, and in general require more writes, but may extend the lifetime of the device by ensuring no one block gets so many writes that it dies while other blocks have hardly been written. – psmears – 2017-09-12T16:15:08.680

2SD cards have a microcontroller that implements a "Flash Translation Layer" - block requests are translated by this microcontroller to raw NAND commands. Some SD cards have hidden commands to change/update MCU firmware and there are even some reverse engineering efforts done on it. Most flash storage devices other than raw NAND (which can appear in some instances like many home routers) are probably "overprovisioned" - meaning your 1GB SD card probably has something like 1024MB+128MB raw NAND space on it, to cover wear leveling when full and also sector-sparing for bad flash pages. – LawrenceC – 2017-09-12T16:57:51.773

31

You can increase transistor wear by increasing the operating temperature. Run write-erase cycles on a heated chip (70-120 °C); it will wear faster.

Pavlus

Posted 2017-09-11T07:28:36.050

Reputation: 528

18Excessive storage temperature is also damaging, so it may be more practical to "cook" the chip at 120 °C (or even more) for some time, then check for defects. – Dmitry Grigoryev – 2017-09-11T14:08:37.983

2Slight overvoltage on the supply to the card might also be possible, and would similarly need experimenting. – Chris H – 2017-09-12T15:17:46.080

Undervoltage also could cause different kinds of defects, like controller lock-ups. – user253751 – 2017-09-19T03:26:22.430

17

Preface: This option requires additional programming and hardware modifications, but it would allow for controlled reads most likely transparent to the host.

An SD card has multiple I/O options, but it can be controlled over SPI. If you were to take an SD card and modify it so that you could attach the pins to a microcontroller (such as an Arduino) you could have the Arduino mimic the SD card and be transparent to the device reading the SD card. Your code on the microcontroller could purposely return bad data when needed. In addition, you could put an SD card on the microcontroller so the reads would be able to pass through the microcontroller to the SD card to allow for gigabytes of testing.

Eric Johnson

Posted 2017-09-11T07:28:36.050

Reputation: 451

3Most high-speed devices (including PC card readers) will simply refuse to work with a card which doesn't support four-bit SD. – Dmitry Grigoryev – 2017-09-11T13:30:33.550

1The OP said that it was an embedded system that would be using the card, which makes it more likely to support SPI for SD cards – Eric Johnson – 2017-09-11T15:00:00.527

3A variant on this, but harder work, would be to find an SD card for which you can reflash the firmware. – Peter Taylor – 2017-09-11T15:09:38.463

2This is super interesting! Our embedded system is running I/O through SPI. I'm not sure if I have the bandwidth to modify our hardware to accomplish an addition like this, but I think it's brilliant thinking. – Gabe Krause – 2017-09-11T18:51:44.973

2Getting educated about dynamic wear leveling leads me to believe that strategically creating a "bad" SD card with known bad sectors is far more difficult (or not possible) than I had hoped when posing the question. While currently beyond the scope of my ability, this appears to be the most controllable and technically promising approach, followed possibly by @Olafm. Customizing intermediate hardware to intercept and "corrupt" data at any pre-defined sector location during data transfer seems like a good approach. – Gabe Krause – 2017-09-14T16:31:04.723

If hardware modification is beyond scope, you could also see if your SPI driver supports loop back so that you could have the embedded system also "send" the data without needing anything on the physical SPI bus – Eric Johnson – 2017-09-14T16:38:19.277

15

I would go to eBay/AliExpress and buy the cheapest SD cards I can find from China, the ones that are "too good to be true". They often come with faulty sectors, or are programmed to report a much larger capacity than they actually have. Either way, you should end up with a faulty SD card to use for testing.
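If you go this route, the open-source f3 tools can map out how much of such a card is real before you rely on it (a sketch, assuming f3 is installed and the card is mounted at /mnt/sd):

f3write /mnt/sd   # fills the card with test files
f3read /mnt/sd    # reports how much data reads back intact vs. corrupted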

GuzZzt

Posted 2017-09-11T07:28:36.050

Reputation: 151

Interesting approach, but how would you write to the bad areas in order to test the effects of the bad blocks on the stored code? – fixer1234 – 2017-09-18T21:34:50.173

@fixer1234, I had one of these SD cards that said it was 32GB but was actually only 128MB. I put it in my camera and could take photos beyond the 128MB, but only the first photos could be read back; the rest were listed but read back as broken. I guess that is how they make you notice the problem with the card only when it is too late to complain... – GuzZzt – 2017-09-19T06:42:14.507

11

Once upon a time, many years ago, I was paid to retrieve a set of graduation photos and videos from an SD card for a rather distraught mother. Upon close inspection, the card had somehow been physically damaged, with a visible crack in the outer case, and had several bad sectors, most notably several early, critical sectors, which made even the most reliable recovery programs at the time completely fail to read the card. Also, forensic data tools back then cost a fortune.

I ended up obtaining an identical brand/size SD card and writing my own custom raw data dump and restore utility to copy the data from the bad card to the good one. Every time the utility hit a bad sector, it would retry a number of times before writing all zeroes for that sector and, instead of giving up and stopping, ignore the failure and move on to the next sector. The retry attempts were made since I had also noticed that some sectors still had around a 40% read success rate. Once the data was on the new SD card, the recovery tools that had failed before worked flawlessly with minimal data loss/corruption. Overall, about 98% of all of the files were recovered. A number of items that had been previously deleted were also recovered because nothing is ever actually deleted - just marked as such and slowly overwritten. What started out as a slightly boring data recovery exercise became one of my more memorable and interesting personal software development projects. In case you were wondering, the mother was thrilled.

At any rate, this story goes to show that it is possible to physically damage an SD card such that data is still accessible but some sectors are only barely functioning, and anything attempting to read from it has difficulties doing so. SD card plastic tends to be pretty flimsy, so bending or cutting into some cheap ones might do the trick. Your mileage may vary.

You could also ask around at some data recovery places in your area. Since they specialize in data recovery from various failing or failed devices, they should have some useful input/tips and might even have some pre-busted SD cards on hand (e.g. for training purposes) that you could obtain from them.

CubicleSoft

Posted 2017-09-11T07:28:36.050

Reputation: 253

2Have you released that utility online? That would be great to add to my arsenal. – Ploni – 2017-09-13T19:50:08.423

1At this point, it probably wouldn't even function properly given the march of progress of technology (might not even compile) and the low-level system calls I used. There are also a couple of modern, open source forensic device/drive cloning tools that I'd be more apt to attempt to use first than to try to pull my old software out of mothballs. – CubicleSoft – 2017-09-14T06:00:46.063

I expect you can probably just give some parameters to dd to get it to behave in a similar way to this, nowadays. I'm not sure though. – wizzwizz4 – 2017-09-16T12:24:36.437

@wizzwizz4, look at ddrescue. – hildred – 2017-09-17T20:16:09.053

"Also, forensic data tools back then cost a fortune." I'm pretty sure they still do. – jpmc26 – 2017-09-19T02:40:32.490

There are a number of open source forensic data tools available these days for multiple platforms. – CubicleSoft – 2017-09-21T14:55:52.253

5

This answer is an expansion on the comment of @Ruslan

  1. Fill your SD card up to about 99.9%.
  2. Continuously rewrite the content of the remaining 0.1% (write A, delete, write B, delete, write A, ...); see the sketch after this list.
  3. Periodically test whether you have already broken the card.
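A rough sketch of steps 1 and 2, assuming the card is mounted at /mnt/sd (both the mount point and the ~1 MiB remainder are arbitrary choices):

# Pin all but ~1 MiB of the card with a filler file.
FREE_KB=$(df --output=avail /mnt/sd | tail -n 1)
dd if=/dev/zero of=/mnt/sd/filler.bin bs=1K count=$((FREE_KB - 1024))

# Hammer the small remaining area with fresh data until the card breaks.
while :; do
    dd if=/dev/urandom of=/mnt/sd/hot.bin bs=1M count=1 conv=fsync
    rm /mnt/sd/hot.bin
done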

Possible alternative:

Not sure whether this works for your purposes, but maybe it will actually suffice to physically damage your card, which could be a lot faster.

Dennis Jaheruddin

Posted 2017-09-11T07:28:36.050

Reputation: 378

6Filling the card to 99% won't help since the whole purpose of wear leveling is to prevent exactly this kind of premature damage. Physically damaging the card will almost certainly result in a card which doesn't initialize anymore. – Dmitry Grigoryev – 2017-09-11T13:44:30.657

2@DmitryGrigoryev How will wear leveling be of much help (hindrance, in this case) unless the card has much more memory than its official capacity? – ispiro – 2017-09-11T14:10:31.387

12@ispiro For example, next time a sector with high write count is overwritten, its contents may be swapped with a sector with a low write count. – Dmitry Grigoryev – 2017-09-11T14:13:07.903

1@DmitryGrigoryev If I interpret this answer correctly there should be SD cards that don't do wear levelling: https://electronics.stackexchange.com/a/27626/16104 – Dennis Jaheruddin – 2017-09-14T07:42:23.273

1@DennisJaheruddin Yes, older cards don't do that. With these cards it's enough to repeatedly create/remove an empty file until the sector holding the allocation table wears out. – Dmitry Grigoryev – 2017-09-14T11:04:40.393

@DmitryGrigoryev: I join the people who question your first two comments. As I understand wear leveling (and as you yourself say), it is a technique wherein the smarts inside the SD card (or other flash / SSD device) switches to different physical sectors (or pages) when the computer (or other client) repeatedly writes to the same address. ISTM that filling the device to 99.9% capacity reduces the number of free pages by three orders of magnitude and forces the device to start reusing the same physical pages 1000 times earlier than on an empty device. How does wear leveling defeat this attack? – Scott – 2017-09-26T18:36:36.750

@Scott Why do you think that only free pages can be used for wear leveling? Any page with low write count can be erased and reused for content which is being constantly updated. – Dmitry Grigoryev – 2017-09-26T23:24:48.870

@DmitryGrigoryev: What makes me think that?   The silly notion that a storage device that overwrites saved information that the user doesn’t want overwritten is not a functioning storage device.   If it overwrites a non-free page, wouldn’t it have to copy the data from that page somewhere else, causing a cascade?  What am I missing? – Scott – 2017-09-27T00:01:23.663

@Scott Yes, it will copy rarely modified data from a page with low write count to a page with a high write count first. The page with a high write count is presumably being erased anyway, so there will be no cascade. – Dmitry Grigoryev – 2017-09-27T10:52:27.430

3

You could try introducing an unstable power supply or higher voltage signalling.

In a family of devices I know of, there is a strong correlation between SD card corruption and intermittent battery contact.

PCARR

Posted 2017-09-11T07:28:36.050

Reputation: 131

3

Some older, low-capacity SD cards (16MB-ish) use flash chips in TSOP/TSSOP style packages. A workshop capable of SMT rework (if you are doing embedded work, you might have that skill inhouse, otherwise check for small companies doing board level phone/laptop repair) could conceivably separate and reattach that chip, so that it can be read and written raw (including the ECC codes) with a device programmer.

Still, be aware that you will be mainly testing:

  • How your device will handle possible timing aberrations/hiccups introduced by internal error correction

and in the worst case

  • how your device handles a terminally failing SD card.

If you just want to check how your device copes with erratic SD card behaviour, whatever its cause, it is probably best to just introduce electrical noise into the interface lines (e.g. by putting a FET bus switch in between and, at random times, momentarily switching it to a source of nonsensical signals of the right electrical levels).

rackandboneman

Posted 2017-09-11T07:28:36.050

Reputation: 670

Terminally failing SD cards don't generate "electrical noise", they just return error codes for write operations. – Dmitry Grigoryev – 2017-09-27T10:55:58.420

2

Related to OlafM's answer but different: you can program a microcontroller of your own to speak the SD card protocol, and then emulate whatever behavior you want it to have.

R.. GitHub STOP HELPING ICE

Posted 2017-09-11T07:28:36.050

Reputation: 1 783

1

Perhaps this is not the direction you wanted, but I found that removing my SD card while my radio or laptop was reading from it guarantees a crashed SD card about one time in five or ten. It seems the cards don't do well having power removed during a read, and presumably during writes. After reading Robert Calhoun's answer below, I believe it may be damaging the FAT. Though I don't know why just reading causes a crash; there should not be any writing going on?

jwzumwalt

Posted 2017-09-11T07:28:36.050

Reputation: 268

this could damage the FS but not sure it would actually create bad sectors – akostadinov – 2017-09-15T08:46:33.610

I can tell you for a fact it crashes the card and requires a re-format. I have done this many times with SD cards on a Raspberry Pi, my laptop, and several of my home devices. – jwzumwalt – 2017-09-15T19:56:08.503

2Requires a reformat != causes damage to the sectors. File system, yes. Sectors, maybe. – wizzwizz4 – 2017-09-16T12:26:37.763

1

The FAT32 Master Boot Record area is probably the most susceptible to abuse, since on a logical level it always needs to be in the same place. (Perhaps this is handled by the soft-remapping of bad sectors, but I am somewhat skeptical that this is implemented on all hardware.) So you could run sfdisk in a loop and see if you can wreck it that way.
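A sketch of that loop (destructive to the partition table; /dev/sdX is a placeholder for the card's device node):

# Save the current partition layout once, then rewrite it repeatedly.
sfdisk -d /dev/sdX > table.dump
for i in $(seq 1 1000000); do
    sfdisk /dev/sdX < table.dump > /dev/null 2>&1
done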

But I am going to beg you to do whatever you can to improve hardware reliability, instead of trying to handle bad hardware in software. The problem is that SD cards fail in all kinds of weird ways: they become unreadable, they become unwriteable, they give you bad data, they time out during operations, etc. Trying to predict all the ways a card can fail is very difficult.

Here's one of my favorite failures, "big data mode":

[image: bad sd fake big data]

SD cards are commodity consumer products that are under tremendous cost pressure. Parts change rapidly and datasheets are hard to come by. Counterfeit product is not unheard of. For cheap storage they are tough to beat, but while SSDs make reliability a priority, the priority for SD cards is speed, capacity and cost (probably not in that order.)

Your first line of defense is to use a solderable eMMC part with a real datasheet from a reputable manufacturer instead of a removable SD card. Yes, they cost more per GB, but the part will be in production for a longer period of time, and at least you know what you are getting. Soldering the part down also avoids a whole host of potential problems (cards yanked out during writes, poor electrical contact, etc.) with a removable card.

If your product needs removable storage, or it's just too late to change anything, then consider either spending the extra money for "industrial" grade cards, or treating them as disposable objects. What we do (under Linux) is fsck the card on boot and reformat it if any errors are reported, as reformatting is acceptable in this use case. Then we fsck it again. If it still reports errors after reformatting, we RMA it and replace the hardware with a newer variant that uses eMMC.
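A sketch of that boot-time check, with assumed device names (fsck.vfat exits non-zero when it finds or corrects errors):

DEV=/dev/mmcblk0p1
if ! fsck.vfat -a "$DEV"; then        # auto-repair pass
    mkfs.vfat "$DEV"                  # reformat is acceptable in this use case
    fsck.vfat -n "$DEV" || echo "still failing after reformat: replace the card"
fi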

Good luck!

Robert Calhoun

Posted 2017-09-11T07:28:36.050

Reputation: 273

I gave you a thumbs up. I use SD cards a lot and have one fail a couple of times a year. I had never given it much thought, but in my own experience my failed cards did exhibit the symptoms of a failing FAT before they finally became worthless. I think you are on to something here :) So simply creating and deleting files should exercise the heck out of the FAT. – jwzumwalt – 2017-09-15T20:01:29.400

1

If your SD card is FAT32-formatted, you can hex-edit the two FATs and mark a cluster as bad with the correct hex code. This is only a trick for logic-testing software that is supposed to find a bad sector at a particular place; it won't harm your SD card either, and a reformat will bring it back to normal condition.
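For illustration, a sketch of that edit from Linux. Everything here is a placeholder: the reserved-sector count (32 is merely common for FAT32) and the cluster number must be read from your card's actual boot sector, the target is the partition device, and the second FAT needs the same edit at its own offset:

# FAT32 entries are 4 bytes, little-endian; the bad-cluster marker is
# 0x0FFFFFF7, stored on disk as the bytes F7 FF FF 0F.
CLUSTER=1000                 # cluster to mark bad (placeholder)
FAT1=$((32 * 512))           # byte offset of FAT #1: reserved sectors * sector size
printf '\xf7\xff\xff\x0f' | dd of=/dev/sdX1 bs=1 conv=notrunc \
    seek=$((FAT1 + CLUSTER * 4))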

Emile De Favas

Posted 2017-09-11T07:28:36.050

Reputation: 11

1Welcome to Super User! This seems like an interesting approach - could you maybe explain how specifically to perform the hex editing? Thanks. – Ben N – 2017-09-18T13:53:19.103

I think the Linux command hdparm will do the trick: it will allow you to save a sector that you can later edit and then write back to your card. You need to find documentation about vfat and read man hdparm, though. Sorry, I'm nowhere close to a Windows computer. – Emile De Favas – 2017-09-18T14:01:48.137

The --make-bad-sector flag looks promising! However, I can't tell if this will only work within the linux system that initially runs this command. I'm hoping that the command hdparm --make-bad-sector 20000 /dev/sd# would somehow make sector 20000 bad, and be detected as bad on my embedded hardware device that isn't running linux. Any thoughts? – Gabe Krause – 2017-09-19T21:37:28.827

0

I wonder if a Linux badblocks script can be created to run its destructive test on a single sector repeatedly for several hours.

On a single sector—no, because the wear-levelling code inside the SD card will remap the logical blocks all over the place.

But you can easily run badblocks -w in a loop until it causes some bad blocks to appear. Something like this should work:

while badblocks -w /dev/xx; do :; done

assuming that badblocks returns 0 if no bad blocks were detected and ≠ 0 otherwise (the man page doesn't say and I haven't checked the source code.)
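If you want to concentrate the writes on a narrow logical range rather than the whole device, badblocks also accepts last-block and first-block arguments (a sketch; the range is arbitrary, and wear levelling will still spread the physical writes around):

# Hammer logical 512-byte blocks 20000-20063 only.
while badblocks -w -b 512 /dev/xx 20063 20000; do :; done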

Tobia

Posted 2017-09-11T07:28:36.050

Reputation: 330

-1

Normally SD/microSD cards implement wear leveling, so this could be quite hard. Depending on the type (single-level cell, multi-level, TLC, 3D NAND, etc.), the write volume required to break enough cells to exhaust the spare sector pool may run to multiple TB.

I did actually test this with a 4GB, a 64GB and a 256GB device (Pro Duo, SSD and thumbdrive). The 64GB K---s---, using 4 Micron 16GB chips, lasted about 3.84TB before it failed with a single soft error in the FAT area. The 256GB one lasted a bit less; without direct chip access I would estimate it wrote maybe 5TB before it finally gave out with MBR corruption, though it wasn't clear whether the controller caused it, as it worked solidly in USB3 mode but had more glitches during readback in USB2 mode and also ran very hot. The 4GB Duo failed in the reader while copying data; again I can't be sure, but that equates to maybe 6 years of use, and the camera was also showing "Recovering" messages. Incidentally, varying the power supply voltage during writes will make a card fail a LOT faster. My 128GB microSD failed after about 2 years of use with similar symptoms; it also had excess power drain and heat, yet data read and wrote fine.


Conundrum

Posted 2017-09-11T07:28:36.050

Reputation: 9

1There are already several answers helping to destroy specific sectors. Your suggestion about destroying random ones doesn't add anything extra. – Máté Juhász – 2019-07-15T05:21:14.133