C on amd64 Linux, 36 bytes (timestamp only), 49 bytes (down from 52, real disk activity)
I hard-code the open(2) flags, so this is not portable to other ABIs. Linux on other platforms likely uses the same O_TRUNC etc., but other POSIX OSes may not.
+4 bytes to pass a correct permission arg, making sure the file is created with owner write access; see below. (Omitting it happens to work with gcc 5.2.)
somewhat-portable ANSI C, 38/51 bytes (timestamp only), 52/67 bytes (real disk activity)
Based on @Cat's answer, with a tip from @Jens.
The first number is for implementations where an int can hold FILE *fopen()'s return value; the second is for when we can't do that. On Linux, heap addresses happen to be in the low 32 bits of the address space, so it works even without -m32 or -mx32. (Declaring void*fopen(); is shorter than #include <stdio.h>.)
Timestamp metadata I/O only:
main(){for(;;)close(open("a",577));} // Linux x86-64
//void*fopen(); // compile with -m32 or -mx32 or whatever, so an int holds a pointer.
main(){for(;;)fclose(fopen("a","w"));}
Writing a byte, actually hitting the disk on Linux 4.2.0 + XFS + lazytime:
main(){for(;write(open("a",577),"",1);close(3));}
write is the for-loop condition, which is fine since it always returns 1. close is the increment.
// semi-portable: storing a FILE* in an int. Works on many systems
main(f){for(;f=fopen("a","w");fclose(f))fputc(0,f);} // 52 bytes
// Should be highly portable, except to systems that require prototypes for all functions.
void*f,*fopen();main(){for(;f=fopen("a","w");fclose(f))fputc(0,f);} // 67 bytes
Explanation of the non-portable version:
The file is created with random garbage permissions. With gcc 5.2, at -O0 or -O3, the garbage happens to include owner write permission, but this is not guaranteed. 0666 is decimal 438; a 3rd arg to open would take another 4 bytes. We're already hard-coding O_TRUNC and so on, but this could break with a different compiler or libc on the same ABI.
We can't omit the 2nd arg to open, because the garbage value happens to include O_EXCL and O_TRUNC|O_APPEND, so open fails with EINVAL.
We don't need to save the return value from open(). We assume it's 3, because it always will be. Even if we start with fd 3 open, it will be closed after the first iteration. Worst case, open keeps opening new fds until 3 is the last available file descriptor, so up to the first 65531 write() calls could fail with EBADF, but every open after that creates fd = 3.
577 = 0x241 = O_WRONLY|O_CREAT|O_TRUNC on x86-64 Linux. Without O_TRUNC, the inode modification and change times aren't updated, so a shorter arg isn't possible. O_TRUNC is still essential for the version that calls write, so it produces actual disk activity instead of rewriting in place.
I see some answers that use open("a",1). O_CREAT is required if a doesn't already exist; O_CREAT is defined as octal 0100 (64, 0x40) on Linux.
No resource leaks, so it can run forever. strace output:
open("a", O_WRONLY|O_CREAT|O_TRUNC, 03777762713526650) = 3
close(3) = 0
... repeating
or
open("a", O_WRONLY|O_CREAT|O_TRUNC, 01) = 3
write(3, "\0", 1) = 1 # This is the terminating 0 byte in the empty string we pass to write(2)
close(3) = 0
I got the decimal value of the open flags for this ABI using strace -eraw=open on my C++ version.
On a filesystem mounted with the Linux lazytime option, a change that only affects inode timestamps causes at most one write per 24 hours. With that mount option disabled, timestamp updating might be a viable way to wear out your SSD. (However, several other answers only do metadata I/O.)
alternatives:
shorter non-working:
main(){for(;;)close(write(open("a",577),"",3));}
uses write's return value to pass a 3 arg to close. It saves another byte, but doesn't work with gcc -O0 or -O3 on amd64: the garbage in the 3rd arg to open is different, and doesn't include write permission. a gets created the first time, but future iterations all fail with -EACCES.
longer, working, with different system calls:
main(c){for(open("a",65);pwrite(3,"",1);)sync();}
rewrites a byte in-place and calls sync() to sync all filesystems system-wide. This keeps the drive light lit up. We don't care which byte, so we don't pass a 4th arg to pwrite. Yay for sparse files:
$ ll -s a
300K -rwx-wx--- 1 peter peter 128T May 15 11:43 a
Writing one byte at an offset of ~128TiB led to xfs using 300kiB of space to hold the extent map, I guess. Don't try this on OS X with HFS+: IIRC, HFS+ doesn't support sparse files, so it will fill the disk.
XFS is a proper 64-bit filesystem, supporting individual files up to 8 exabytes, i.e. 2^63-1 bytes, the maximum value off_t can hold.
strace output:
open("a", O_WRONLY|O_CREAT, 03777711166007270) = 3
pwrite(3, "\0", 1, 139989929353760) = 1
sync() = 0
pwrite(3, "\0", 1, 139989929380071) = 1
sync() = 0
...
Does reading files instead of writing them also count as disk I/O? What about writing to /dev/null? (Is yes>/dev/null a valid Bash answer?) – Doorknob – 2016-04-01T12:33:18.457
Good point, I changed the question to require infinite writing. – MathuSum Mut – 2016-04-01T12:35:27.993
Can it take any input? – User112638726 – 2016-04-01T14:22:21.920
Sure, as long as it is not infinite user input. – MathuSum Mut – 2016-04-01T14:26:08.883
This is heavily biased towards shell scripts, but such is life. – Mateen Ulhaq – 2016-04-01T22:36:50.507
Dang man... what did your SSD do to you? – R. Kap – 2016-04-01T23:14:06.047
Would this kill the SSD though? Wouldn't the writes get cached (kernel or disk cache) and then removed again before hitting the NAND chips? – Filip Haglund – 2016-04-02T08:13:22.470
As I can't hope to compete with 6-byte solutions, would creating the file ./a with the 3-byte contents ./a count for a bonus prize for lateral thinking? AFAIK just executing a file causes some filesystem writing to take place on many systems, because at the very least 'last access time' gets updated as a byproduct ;-) – Stilez – 2016-04-02T13:57:37.750
Can't stay. Giving talk. Oh no, they switched to my screen... where is it... ah, there! Bye! – CalculatorFeline – 2016-04-02T21:43:43.937
(Note: I actually showed that during the talk a couple of minutes ago.) – CalculatorFeline – 2016-04-02T21:44:02.983
Many of these answers will write the data into the same space over and over. That does not result in an actual disk write even if the data differs. (Extreme case, DOS -> Windows communications: I wrote 4k of data in DOS and read it back in Windows; so long as data was flowing, the disk light would stay off.) – Loren Pechtel – 2016-04-02T23:40:22.757
Does "infinite" here mean "until an error occurs" (e.g. disk full), or literally infinite? If you write sequentially until the disk is full, are you allowed to quit, or do you have to start over? – Nate Eldredge – 2016-04-04T03:57:11.423
Literally infinite; the disk cannot be full. – MathuSum Mut – 2016-04-04T05:54:48.520
@Doorknob /dev/null is provided by the Linux/Mac/whatever kernel; it isn't on the disk (or at least will not affect the disk when written to). – sadljkfhalskdjfh – 2016-04-04T10:49:49.507
um... what?
Infinite disk I/O is a sure way of sentencing your SSD to death. – cst1992 – 2016-04-05T08:35:37.313
I was referring to CatsAreFluffy ;) – MathuSum Mut – 2016-04-05T08:38:07.367
@Stilez: A program that just exec's itself repeatedly, to create access-time metadata I/O on the file holding the script? That will work as well as most answers, on systems that don't use the default relatime or noatime mount options. However, 3 bytes won't do it: you'll run out of PIDs. (Some of the script answers have the same problem. Programs with resource leaks aren't going to be able to wear out an SSD before crashing.) – Peter Cordes – 2016-05-14T06:24:12.690
Does it need to be disk I/O, or will tape I/O work? – Mark – 2017-01-31T05:11:08.017