I am trying to get all the links from aligajani.com. There are 7 of them, excluding links to the facebook.com domain, which I want to ignore; I don't want any link that starts with facebook.com.
Also, I want them saved in a .txt file, one per line, so there would be 7 lines.
Here's what I've tried so far. It just downloads everything, which isn't what I want.
wget -r -l 1 http://aligajani.com
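I was imagining something along these lines might work instead: fetch the page once, pull the href URLs out with grep, and drop anything pointing at facebook.com. This is an untested sketch; it assumes GNU grep with -P (PCRE) support and that the links appear as plain href attributes in the HTML:

wget -qO- http://aligajani.com | grep -oP 'href="\Khttp[^"]+' | grep -v 'facebook\.com' > links.txt

Each matching URL would end up on its own line of links.txt.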
wget downloads whole pages. Why do you think it might only save the links? – michas Feb 26 '14 at 06:44

That's what wget is designed to do. In future, please don't assume a particular tool for a job when asking a question; it's not a good way to ask questions in general, but it's especially poor practice in a technical venue. – Chris Down Feb 26 '14 at 06:52
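As the comments point out, wget is built for downloading pages rather than listing links, so a text browser may be a better fit. One possible sketch, assuming lynx is installed: its -dump -listonly mode prints a numbered list of the page's links, which awk can reduce to bare URLs before filtering out facebook.com:

lynx -dump -listonly http://aligajani.com | awk '/^ *[0-9]+\./ {print $2}' | grep -v 'facebook\.com' > links.txt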