How can I crawl all the files on a file server recursively?

1

There are thousands of files on a file server at http://xxxx.com

I tried to crawl it with the tool HTTrack, but it didn't work.

Is there an alternative tool that can recursively download all the files starting from a web URL?

Thanks


user3675188

Posted 2015-11-23T05:23:51.937

Reputation: 123

Answers

3

Use wget:

wget --mirror -p --html-extension --convert-links www.example.com

The options, explained:

-p                  get all images and other assets needed to display the HTML pages
--mirror            turn on recursion and time-stamping, set infinite
                    recursion depth, and keep FTP directory listings
--html-extension    save HTML documents with the .html extension
--convert-links     make links in the downloaded HTML point to local files
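
If the server just exposes a plain directory listing and you only want the files themselves rather than a browsable offline mirror, a variant like the one below may work better. This is a sketch using standard wget options; the URL http://xxxx.com/files/ and the --cut-dirs value are assumptions you would adjust to the actual directory layout:

wget -r -np -nH --cut-dirs=1 -R "index.html*" http://xxxx.com/files/

-r                  recurse into linked pages and subdirectories
-np                 never ascend to the parent of the starting directory
-nH                 don't create a top-level directory named after the host
--cut-dirs=1        drop the leading path component (the assumed files/) when saving
-R "index.html*"    discard the auto-generated directory index pages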

Anand Sudhanaboina

Posted 2015-11-23T05:23:51.937

Reputation: 146