9

In Windows, deleted files can be found in the Recycle Bin if you didn't press Shift while deleting.

What about files deleted with `rm -f` on Linux?

kenorb
wamp

  • Windows has called it a Recycle Bin for over 10 years now; and when you push delete it explicitly says it's moving the file, not deleting it. `rm` unlinks the i-node(s) associated with the file. – Chris S Aug 04 '10 at 02:55
  • Closely Related: [I overwrote a large file with a blank one on a linux server. Can I recover the existing file?](http://serverfault.com/questions/145506/i-overwrote-a-large-file-with-a-blank-one-on-a-linux-server-can-i-recover-the-ex) – Warner Aug 04 '10 at 04:41

4 Answers

14

The first thing to remember is to stop any further write activity on the filesystem.

Then you can try tools that scan the filesystem and attempt to locate data in the deleted inodes. extundelete, hosted on SourceForge, is one such tool.

extundelete is a utility that can recover deleted files from an ext3 or ext4 partition. The ext3 file system is the most common file system when using Linux, and ext4 is its successor. extundelete uses the information stored in the partition's journal to attempt to recover a file that has been deleted from the partition. There is no guarantee that any particular file will be able to be undeleted, so always try to have a good backup system in place, or at least put one in place after recovering your files!
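
A minimal session might look like the sketch below. The device name and file path here are placeholders, and recovered files land in a RECOVERED_FILES/ directory under your current working directory:

# first, make sure nothing else can write to the affected filesystem
umount /dev/sdb1                   # or: mount -o remount,ro /mnt/data

# restore one known path (given relative to the filesystem root) ...
extundelete /dev/sdb1 --restore-file home/user/notes.txt

# ... or everything the journal still knows about
extundelete /dev/sdb1 --restore-all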

nik

  • The "no further writes" point can't be emphasized enough. Undeletion isn't built into the filesystem, so it's a matter of recovery software reassembling the remaining pieces before the filesystem overwrites the data with future writes. – Jeremy M Aug 04 '10 at 02:36
2

The first step would be to try an undelete tool for the filesystem used for your root drive.

As mentioned, ext3grep and extundelete are the tools for the ext file system family.

Another option, depending on the type of file you're trying to recover, is to run a file carver on the drive. This will take longer than the utilities above.

Foremost is one option I have used for this.
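
A sketch of a typical Foremost run; the device and output directory are placeholders (Foremost carves by file signature, so you tell it which types to look for):

# carve jpg/pdf/zip signatures straight off the raw device;
# recovered files plus an audit.txt report end up in carved/
foremost -t jpg,pdf,zip -i /dev/sdb1 -o carved/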

The final option, if you happen to know of a certain string within the file, is to open the drive in a hex editor and search for that string.
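
If you'd rather not page through a hex editor by hand, grep can run the same search and report byte offsets; the string and device below are examples:

# -a: treat the binary device as text, -b: print the byte offset of each match
grep -abo 'some string you remember' /dev/sdb1

# offset / block size = block number; dump a window of blocks around it
dd if=/dev/sdb1 bs=4096 skip=123456 count=32 of=window.bin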

Depending on your setup, your window manager may provide a recycle bin/trash can.

At the end of the day, there's absolutely no substitute for having a good backup system set up. Find one that does its job without you touching it, and set it up. You'll save yourself a lot of time, trouble, and pain in the long run.
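
As one possible shape for that, a single nightly rsync driven by cron covers a lot of ground; the paths, host, and schedule below are assumptions, not a recommendation for your setup:

# /etc/cron.d entry: mirror /home to a backup host every night at 02:30
30 2 * * * root rsync -a --delete /home/ backuphost:/backups/home/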

dpflug
1

I'd give this one a try: ext3grep:
http://www.xs4all.nl/~carlo17/howto/undelete_ext3.html
You have to unmount the partition before starting; a sketch follows.
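
A sketch of a session, assuming the partition is /dev/sdb1; the --restore-all and --after options are described in the howto above, and recovered files go to a RESTORED_FILES/ directory:

umount /dev/sdb1
# restrict the scan to files deleted after a given time to reduce noise
ext3grep /dev/sdb1 --restore-all --after=$(date -d '2010-08-01' +%s)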

HTH

Paul
0

As undelete_ext3 seems to be gone, here is a humble bash script that helped me recover some files that were unobtainable with extundelete or debugfs; sharing it here.

You can preseed a list of blocks to inspect (see the PRESEED variable); the file takes one block number per line. If you don't preseed, all blocks are searched, which is the default.

  • For each block, the first bytes are probed for gzip content.
  • If that succeeds, the block in question is gunzipped to further probe for the "ustar" string at byte 257, which marks a tar archive.
  • Finally, data matching a file pattern is extracted (suffix-path style, using tar's wildcard option) and grepped for a certain string. See the variables FILE_IN_TAR and FIT_CONTENT for this.
  • If found, the file is saved.

As you will probably have a different use case, this might give you a frame to base your own filtering on. You will definitely need to tweak these values when looking for other file types.

Sample invocation: `./ext-undelete-tar-gz.sh devimage found_files/`

#!/bin/bash

# Brute force (linear) search specific tar files with
# certain contents on ext2 / ext3 / ext4 devices or files
#
# .. this is a last resort if extundelete and/or debugfs
#    did not find what you were looking for; it is limited
#    in that recoverable data must not have been stored
#    in fragments, i.e. it must be laid out sequentially

[[ -n "$2" ]] || {
    echo "usage: $0 [ device | imagefile ] "\
    "[ destdir_for_recovered_data ] "\
    "[ max_blocks_to_search (optional) ]" 
    exit 1
}

IMG=$1
DEST=$2
TMP=/dev/shm/cand.tmp
PRESEED=/dev/shm/cand.list

GZMAGIC=$(echo -e "\x1f\x8b\x08")
TARMAGIC=$(echo -e "ustar")

# max bytes to read into $TMP when a .tar.gz has been found
LEN=$((160*1024))

# pick $TMP for recovery based on matched strings..
FILE_IN_TAR="debian/rules" # ..in the tar index (suffix-search)
FIT_CONTENT="link-doc="    # ..within FILE_IN_TAR matches

# determine FS parameters
BLOCKS=$(tune2fs -l "$IMG" | grep -Po "(?<=^Block count:).*" | xargs)
    BS=$(tune2fs -l "$IMG" | grep -Po "(?<=^Block size:).*"  | xargs)
LEN=$((LEN/BS))

function _dd     { dd     "$@" 2>/dev/null ; }
function _gunzip { gunzip "$@" 2>/dev/null ; }
function _tar    { tar    "$@" 2>/dev/null ; }

function inspect_block {
    bnum=$1

    if _dd if="$IMG" skip=$bnum bs=$BS count=1 | tee "$TMP" \
    | _dd bs=1 count=3 \
    | grep -qF "$GZMAGIC" 
    then
        if _gunzip -c "$TMP" \
        | _dd bs=1 count=5 skip=257 \
        | grep -qF "$TARMAGIC"
        then
            _dd if="$IMG" skip=$((bnum+1)) bs=$BS count=$((LEN-1)) >> "$TMP"
            echo -n found $bnum.tar.gz

            if _tar xzf "$TMP" -O --wildcards "*$FILE_IN_TAR" \
            | grep -qF "$FIT_CONTENT"
            then
                echo " ..picked, stripping trailing garbage:"
                exec 3>&1
                gunzip -c "$TMP" 2>&3 | gzip > "$DEST/$bnum.tar.gz"
                exec 3>&-
            else
                echo
            fi
        fi
    fi

    echo -ne "$((bnum+1)) / $BLOCKS done.\r" >&2
}


if [[ -f "$PRESEED" ]]
then
    while read bnum
    do inspect_block $bnum
    done <"$PRESEED"
else
    for (( bnum = 0 ; bnum < ${3:-$BLOCKS} ; bnum++ ))
    do inspect_block $bnum
    done
fi | gzip >"$PRESEED.log.gz"

echo

  • Stop using the filesystem in question as soon as you notice an erroneous deletion.
  • This script will probably fail on large files, since it does not parse the higher-level structures of the filesystem.
  • Fundamentally, modern filesystems are not designed for robust recovery of unlinked data, so there are no guarantees of getting lost data back.
  • Operate on a backup image of the filesystem, not the original.
kenorb
  • If the file is larger than 12 blocks, then I think it is likely you will find indirection blocks between the data blocks. In that case simply reading sequentially will produce garbage output. But if you are able to find the indirection blocks you can recover the rest of the file (unless it was overwritten of course). – kasperd Dec 18 '16 at 08:42