I want to use Wget to save single web pages (not recursively, not whole sites) for reference, much like Firefox's "Web Page, complete" option.
My first problem is: I can't get Wget to save background images specified in the CSS. Even if it did save the background image files, I don't think --convert-links would rewrite the background-image URLs in the CSS file to point to the locally saved copies. Firefox has the same problem.
My second problem is: if there are images on the page I want to save that are hosted on another server (like ads), those won't be included. --span-hosts doesn't seem to solve that problem when added to the command below (see the variant after it).
I'm using:
wget --no-parent --timestamping --convert-links --page-requisites --no-directories --no-host-directories -erobots=off http://domain.tld/webpage.html
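For reference, the --span-hosts variant I tried looks roughly like this (same placeholder URL; the flag is supposed to let --page-requisites fetch requisites from other hosts, but it didn't seem to help here):

wget --no-parent --timestamping --convert-links --page-requisites --span-hosts --no-directories --no-host-directories -erobots=off http://domain.tld/webpage.html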
Exactly the same line (wget --no-parent --timestamping --convert-links --page-requisites --no-directories --no-host-directories -erobots=off domain.tld) actually saves background images referenced from CSS after updating to 1.12. The manual says: "With http urls, Wget retrieves and parses the html or css from the given url, retrieving the files the document refers to, through markup like href or src, or css uri values specified using the ‘url()’ functional notation."
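If it helps, a quick sanity check before re-running is to confirm you actually have a CSS-aware build (1.12 or newer), then repeat the original command (placeholder URL from the question):

wget --version | head -n 1
wget --no-parent --timestamping --convert-links --page-requisites --no-directories --no-host-directories -erobots=off http://domain.tld/webpage.html

With 1.12+, background images referenced via url(...) in linked stylesheets should be fetched and, with --convert-links, rewritten to the local copies.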
Second problem still needs to be solved – user14124 – 2009-10-14T00:23:10.797