3

I am trying to remove a string from many text files on one of our servers. The string is identical across all these files and I can run:

grep -r -l 'string'  

to get the file list but I am stuck on how to get the files edited and written out to their original locations again. Sounds like a job for sed but not sure how to handle the output.

Manny T
  • I don't know what the incantation is, but you'll want to mix that with awk or sed. If someone wants to post the command I'll happily upvote it. – squillman Jan 05 '10 at 22:12

5 Answers

5

find -type f -print0 | xargs -0 -n 1 sed -i '/string/d' will do the trick, handling spaces in filenames and arbitrarily nested frufru, since apparently people aren't capable of expanding * on their own.
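
If the server in question doesn't run GNU sed, the behaviour of -i differs; a quick sketch of both forms (which sed is installed is an assumption, not something stated in the question):

    # GNU sed: -i with no suffix edits in place, keeping no backup
    find . -type f -print0 | xargs -0 sed -i '/string/d'
    # BSD/macOS sed: -i requires a (possibly empty) backup suffix
    find . -type f -print0 | xargs -0 sed -i '' '/string/d'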

womble
3

Here's my script for this sort of thing, which I call remove_line:

#!/usr/bin/perl

use strict;
use warnings;
use IO::Handle;

# First argument is the pattern; the rest are the files to edit in place.
my $pat = shift(@ARGV) or
        die("Usage: $0 pattern files\n");
$pat = qr/$pat/;
die("Usage: $0 pattern files\n")
        unless @ARGV;

foreach my $file (@ARGV) {
        my $io = new IO::Handle;
        open($io, $file) or
                die("Cannot read $file: $!\n");
        my @file = <$io>;
        close($io);

        # Empty out the first line that matches the pattern (newline and all).
        my $found = 0;
        foreach my $line (@file) {
                if($line =~ /$pat/) {
                        $line = '';
                        $found = 1;
                        last;
                }
        }

        # Rewrite the file only if something actually changed.
        if($found) {
                open($io, ">$file") or
                        die("Cannot write $file: $!\n");
                print $io @file;
                close($io);
        }
}

So you run `remove_line 'string'` followed by the files in your list.

The advantages of doing this over using sed are that you don't have to worry about the platform-dependent behavior of `sed -i`, and that you can use Perl regexes for the matching pattern.
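
For instance, to feed it the file list from the question's grep in one go (a sketch; it assumes GNU grep's -Z option for NUL-separated names and that remove_line sits somewhere on your PATH):

    grep -rlZ 'string' . | xargs -0 remove_line 'string'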

chaos
    This did it. The line I was trying to strip out had some regex in it and this handled it really cleanly. Just had to match a unique part of the line and I was done. Thanks. – Manny T Jan 05 '10 at 22:48
  • Really? a screen and a half of perl, to accomplish what a single line of sed can do? – James Polley Jan 05 '10 at 23:30
  • My eyes... the goggles... etc – womble Jan 06 '10 at 00:06
    Yeesh. Don't like it, don't use it. I like `sed -i` as much as the next guy, but it's not portable and there are enough cases it wasn't good for that I wrote the script. Get over it. – chaos Jan 11 '10 at 18:06
1

Ugh. I'm not a shell wizard at all, but I'd look at a pipe to xargs and then sed to remove the line with the string in question.

A little bit of Google perusal makes me think that this might make Bob your stepuncle - close enough to get there, anyway.

grep -r -l 'string'  | xargs sed '/string/d' 
mfinni
  • Assuming you know the string will be in all or most of the files in question, womble's answer is better, because why waste your time testing for the presence? If that's not the case, then depending on how many won't match, you may want to test for them first. – mfinni Jan 05 '10 at 22:16
    This one retains the OP's recursive search that womble's doesn't. – Dennis Williamson Jan 05 '10 at 22:34
  • However, this one doesn't work; it doesn't write the output anywhere, and trying to xargs like that and writing to an output file will be... ugly. – womble Jan 06 '10 at 00:05
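
For completeness, a sketch of that pipeline with the write-back added (not part of the original answer; it assumes GNU grep's -Z option and GNU sed's -i):

    grep -rlZ 'string' . | xargs -0 sed -i '/string/d'
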
0

Ummmmmm, this is a Perl one-liner, thanks to the lovely -i flag for in-place filtering of input files!

   perl -ni.bak -e 'print unless /pattern.to.remove/' file1 file2 ...

In context...

% echo -e 'foo\ngoo\nboo' >test
% perl -ni.bak -e 'print unless /goo/' test
% diff test*
--- test 2010-01-06 05:09:13.503334739 -0800
+++ test.bak 2010-01-06 05:08:28.313583066 -0800
@@ -1,2 +1,3 @@
 foo
+goo
 boo

Here is the trimmed quick reference on the Perl incantation used...

% perl --help
Usage: perl [switches] [--] [programfile] [arguments]
  -e program        one line of program (several -e's allowed, omit programfile)
  -i[extension]     edit <> files in place (makes backup if extension supplied)
  -n                assume "while (<>) { ... }" loop around program

And for extra credit, you can use `touch -r file.bak file` to copy the old timestamp to the new file. The inodes will differ, though, and strange things may happen if you have hard links in the mix... check the docs if you're that motivated to cover your tracks... Hmmmmm, what was your application again?
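
Applied to the transcript above, that would look like this (a tiny sketch reusing the test/test.bak names from the example):

    # copy the original timestamps from the backup onto the edited file
    touch -r test.bak test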

hackvan
-1

Don't forget about the -v option in grep, which reverses the sense of the match:

grep -v -r -l 'string' 

From the grep man page:

-v, --invert-match
Invert the sense of matching, to select non-matching lines.  (-v is specified by POSIX.)

You may then be able to pass that into the find command, similar to this:

find . -type f -exec grep -v 'string' {} \;

And that's getting close to what you want... but of course you'll need to write the result back to the original file...
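
One way to do that write-back, sketched under the assumption of bash plus GNU grep ('string' and the starting directory are placeholders):

    # For every file containing the string, write a filtered copy and move
    # it over the original; NUL-delimited names survive spaces.
    grep -rlZ 'string' . | while IFS= read -r -d '' f; do
        grep -v 'string' "$f" > "$f.tmp" && mv "$f.tmp" "$f"
    done
    # Caveat: if a file is nothing but matching lines, grep -v exits
    # non-zero, the mv is skipped, and an empty $f.tmp is left behind.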

hookenz