I tried to make a copy of the site wiredhealthresources.net
using the command:
wget -rpkl inf wiredhealthresources.net
But the command only downloaded 54 files! Most of the pages are missing, e.g. /topics-cardiology.html, despite being linked from /index.html.
What did I do wrong? Why is wget not downloading the whole site?
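For reference, assuming GNU Wget, the bundled short options in that command expand to the long-option form below (same site as above):

```shell
# Long-option equivalent of `wget -rpkl inf <site>` (GNU Wget):
#   -r     -> --recursive        follow links and download recursively
#   -p     -> --page-requisites  also fetch CSS, images, etc. needed to render each page
#   -k     -> --convert-links    rewrite links in the saved files to work locally
#   -l inf -> --level=inf        place no limit on recursion depth
wget --recursive --page-requisites --convert-links --level=inf \
    wiredhealthresources.net
```

So the flags themselves look reasonable for a full mirror; the question is why recursion stops early despite the infinite depth setting.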
While I can't answer the question itself, I would suggest giving HTTrack a try, as I have had more success with it.
– Sam3000 – 2016-10-27T14:42:08.910