
Is it possible to make wget write the downloaded file to standard output so it can be piped? If not, what alternatives should I use?

orlp
Alex

6 Answers

wget -O - -o /dev/null  http://google.com
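Here, -O - writes the retrieved document to standard output and -o /dev/null sends wget's own log messages to /dev/null so they don't pollute the pipe. A minimal sketch of piping the result into another command (example.com is just a placeholder URL):

wget -O - -o /dev/null http://example.com/ | wc -c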
pQd

Or use curl, where writing to standard output is the default behaviour.

curl http://www.google.com/

http://curl.haxx.se/
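When piping curl into another command you may also want -s, which suppresses the progress meter (a small sketch; example.com is a placeholder):

curl -s http://example.com/ | wc -c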

GodEater

There are other methods you can use instead of wget and curl:

You can use lynx:

# lynx -source http://www.google.com

w3m:

# w3m -dump_source http://www.google.com

and libwww-perl comes with a handy program called GET (as well as HEAD and POST, which do what you think they do):

# GET http://www.google.com
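For instance, HEAD prints just the response headers to standard output (a quick sketch, assuming a default libwww-perl install):

# HEAD http://www.google.com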
David Pashley
  • Off-topic, but I've used lynx in some of my scripts to parse HTML for me automatically whenever I've needed the content of a page and didn't care about the markup. It's great for that. – Matt Simmons Jun 15 '09 at 12:46
  • Indeed, both lynx and w3m have a -dump option. I prefer w3m for its table and frame support. – David Pashley Jun 15 '09 at 14:00

This is how I did it: fetch the plugin page with curl, extract the plugin's download URL with egrep, and pass it to wget, which writes the download to standard output:

URL='http://wordpress.org/extend/plugins/akismet/'
curl -s "$URL" | egrep -o "http://downloads.wordpress.org/plugin/[^']+" | xargs wget -qO-
Roger

Just to add another option: I often use lwp-request, from libwww-perl, for this. It outputs to STDOUT by default and is more likely than curl to be installed on the systems I use (your situation may vary).
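A quick sketch of the basic invocation (lwp-request's default method is GET, and the response body goes to standard output):

lwp-request http://www.google.com/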

Jeff Tang
  • FWIW, Mac OS has curl (not wget) and I believe other BSDs do as well, as do a lot of embedded *nix systems I've used. Not sure if Perl is more common than curl. – Wyatt Ward Feb 02 '16 at 21:18

I suggest using aria2. It's a powerful downloader.

aria2c http://google.com.tw
Phil Huang
  • Does it output to [standard output](https://en.wikipedia.org/wiki/Standard_streams#Standard_output_.28stdout.29) (the crux of the question)? – Peter Mortensen Aug 14 '21 at 18:13