There are thousands of files on a file server at http://xxxx.com
I tried to crawl it with the tool HTTrack, but it doesn't work. Is there any alternative tool that can download all the files recursively based on a web URL?
Thanks
Use wget:
wget --mirror -p --html-extension --convert-links www.example.com
The options explained:
-p               get all images, etc. needed to display the HTML page.
--mirror         turns on recursion and time-stamping, sets infinite recursion depth, and keeps FTP directory listings.
--html-extension save HTML documents with the .html extension.
--convert-links  make links in downloaded HTML point to local files.
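If the server just exposes a plain directory listing of files (as the question describes) rather than a site you want to browse offline, a recursive fetch without link rewriting may fit better. A rough sketch, assuming the files live under http://xxxx.com/files/ (a placeholder path), is:

wget -r -np -nH --cut-dirs=1 -R "index.html*" http://xxxx.com/files/

Here -r recurses into subdirectories, -np (--no-parent) keeps wget from climbing above the starting directory, -nH drops the hostname from the local directory tree, --cut-dirs=1 strips the leading path component, and -R "index.html*" rejects the auto-generated listing pages. Adjust --cut-dirs to match the depth of your starting URL.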