Appending new lines to multiple files

I'm trying to append new lines to multiple files with the following command:

find -name *.ovpn -exec sh echo "line to append" >> {} \;

before doing this, I ran a different command to make sure it would work the way I expected:

find -name *.ovpn -exec sh echo "hello" \;

but all this does is print out "sh: 0: Can't open echo" for every file found.

The Mungler

Posted 2018-06-02T02:51:02.043

Reputation: 11

Answers

There are a few issues.

>> in your first command will be interpreted by your current shell as a redirection to a file literally named {}, unless it's quoted.

*.ovpn may be expanded by shell globbing before find ever runs. This will happen if you have at least one object in the current directory that matches the pattern. You do want to quote this. Compare this question.
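A quick way to see the difference (a sketch, using a hypothetical temporary directory):

```shell
# Sketch: with a matching file in the current directory, the shell
# expands an unquoted *.ovpn before find ever runs.
dir=$(mktemp -d)
cd "$dir"
touch a.ovpn b.ovpn

# Unquoted: the shell rewrites the pattern first, so find would be
# invoked as `find . -name a.ovpn b.ovpn` -- not what you meant.
# Quoted: find receives the pattern itself and matches both files:
find . -name '*.ovpn'

cd / && rm -rf "$dir"
```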

You get Can't open echo because indeed you're telling sh to open echo. To execute a command you need sh -c.
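A minimal illustration of the difference: sh FILE tries to run FILE as a script, while sh -c STRING parses STRING as a command:

```shell
# sh echo ...  -> sh tries to open a script file named "echo" and fails.
# sh -c '...'  -> sh runs the string as a command:
sh -c 'echo "hello"'
```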

find without specifying the path is not portable (compare this question). While you may get away with this, I mention the issue to make the answer more useful for other users.

This is the improved version of your first command that kinda works (don't run it, keep reading):

find . -name '*.ovpn' -exec sh -c 'echo "line to append" >> "{}"' \;

Notice I had to double-quote {} inside the single quotes. These double quotes are "seen" by sh and make filenames with spaces etc. work as redirection targets. Without them you may end up with something like echo "line to append" >> foo bar.ovpn, which is equivalent to echo "line to append" bar.ovpn >> foo. Quoting makes it echo "line to append" >> "foo bar.ovpn" instead.
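You can reproduce the failure in a scratch directory (a sketch; the filename is invented, and it relies on find substituting {} even inside a larger argument, as GNU and BSD find do):

```shell
# Sketch: a filename with a space breaks the unquoted {}.
dir=$(mktemp -d)
cd "$dir"
touch 'foo bar.ovpn'

# find substitutes the path for {}, so sh ends up running:
#   echo "x" >> ./foo bar.ovpn
# i.e. it appends "x bar.ovpn" to a NEW file named "foo".
find . -name '*.ovpn' -exec sh -c 'echo "x" >> {}' \;

cat foo    # x bar.ovpn -- not what we wanted
cd / && rm -rf "$dir"
```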

Unfortunately filenames containing " will break this syntax.

The right way to pass {} to sh is not to include it in the command string but to pass its content as a separate argument:

find . -name '*.ovpn' -exec sh -c 'echo "line to append" >> "$0"' {} \;

$0 inside the command string expands to the first argument our sh gets after -c '…'. Now even a " in the filename won't break the syntax.
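You can check this directly; here sh -c just echoes whatever lands in $0 (the filename is invented, with a double quote in it):

```shell
# The argument after the command string becomes $0 -- a quote
# character in it is harmless, because no re-parsing takes place:
sh -c 'echo "$0"' 'weird "name".ovpn'
```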

Usually (e.g. in a script) you refer to the first argument as $1. This is why some users would rather pass a dummy argument to occupy $0, like this:

find . -name '*.ovpn' -exec sh -c 'echo "line to append" >> "$1"' dummy {} \;

If it were a script, $0 would expand to its name. That's why it's not uncommon to see this dummy actually being sh (or bash, if one calls bash -c … etc.):

find . -name '*.ovpn' -exec sh -c 'echo "line to append" >> "$1"' sh {} \;

But wait! find calls a separate sh for every single file. I don't expect you to have thousands of .ovpn files, but in general you might want to process many files without spawning unnecessary processes. We can optimize the approach with tee -a, which can write to multiple files as a single process:

find . -name '*.ovpn' -exec sh -c 'echo "line to append" | tee -a "$@" >/dev/null' sh {} +

Notice {} +: this passes multiple paths at once. Inside the command executed by sh -c we retrieve them with "$@", which expands to "$1" "$2" "$3" …. In this case a dummy argument to populate the (unused) $0 is a must.
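Putting it together in a scratch directory (a sketch): one sh and one tee append the line to every file found.

```shell
dir=$(mktemp -d)
cd "$dir"
touch a.ovpn b.ovpn

# One sh, one tee; "$@" expands to all the paths gathered by {} +
find . -name '*.ovpn' -exec sh -c 'echo "line to append" | tee -a "$@" >/dev/null' sh {} +

cat a.ovpn   # line to append
cat b.ovpn   # line to append
cd / && rm -rf "$dir"
```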

In general there is also this issue: Why is printf better than echo? However in this case you're using echo without options and the string it gets is static, so it should be fine.
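For reference, the usual printf alternative (a sketch): printf '%s\n' prints its argument verbatim, while echo may treat a string like -n as an option.

```shell
# printf prints the string literally:
printf '%s\n' '-n'      # -n
# echo '-n' would print nothing in many shells (treated as an option)
```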

Kamil Maciorowski

Posted 2018-06-02T02:51:02.043

Reputation: 38 429