
I am keen to know the difference between curl and wget. Both are used to get files and documents, but what is the key difference between them?

Why are there two different programs?


5 Answers


The main differences are:

  • wget's major advantage over curl is its ability to download recursively (see the sketch after this list).
  • wget is command-line only; there is no library behind it. curl's features, by contrast, are powered by libcurl.
  • curl supports FTP, FTPS, GOPHER, HTTP, HTTPS, SCP, SFTP, TFTP, TELNET, DICT, LDAP, LDAPS, FILE, POP3, IMAP, SMTP, RTMP and RTSP. wget supports HTTP, HTTPS and FTP.
  • curl builds and runs on more platforms than wget.
  • wget is released under a free software copyleft license (the GNU GPL). curl is released under a free software permissive license (an MIT derivative).
  • curl offers upload and sending capabilities (also shown below). wget only offers plain HTTP POST support.
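
To make the recursive-download and upload points concrete, here is a minimal sketch (the URLs and file names are placeholders, not from the original answer):

# wget can mirror a site recursively; curl has no built-in equivalent:
wget --recursive --level=2 --no-parent https://example.com/docs/

# curl can upload over several protocols, e.g. an FTP upload and a
# multipart HTTP form post; wget offers only plain HTTP POST:
curl -T backup.tar.gz ftp://ftp.example.com/uploads/
curl -F "file=@backup.tar.gz" https://example.com/upload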

You can see more details at the following link:

curl vs Wget

  • Thanks, this is exactly what I've been wondering about for a few hours. I used wget to do a recursive spider on a site, and it seems stalled on a page that has eleventy bazillion links on it. The reality is it's running at 100% CPU on one core. I was hoping there was something better that does that newfangled multicore thing I've been hearing about. – Brian Topping Jun 07 '15 at 10:13
  • One of the things wget does that is left out of this answer is HTTP mirroring (or 'spidering') ability. curl is very good at what it does, but it alone is not intended to be used to mirror a web site. – jsbillings Sep 26 '15 at 12:38
  • What about wput being an alias, supporting FTP? – mckenzm Jan 23 '17 at 05:01
  • These phrases make no sense: "and all copyrights are assigned to FSF", "is entirely stand-alone and independent with no organization parenting at all". It's obvious that the author of cURL is its copyright owner, and that the author of wget is its copyright owner; both are free-as-in-freedom software. You could say instead that wget is under a copyleft license and cURL under a permissive license. – Valerio Bozz Apr 11 '18 at 11:59
  • @ValerioBozz: Actually no. Both curl and wget are community projects. With curl, each person owns the copyright to the code they contribute. With wget, as with most other GNU programs, the various authors assign their copyrights to the FSF; that is, they no longer own that code. This allows the FSF to strictly enforce copyleft and to relicense the code if required. – darnir Nov 27 '18 at 23:20
  • The hyperlink at the end of the answer points to an article written by Daniel Stenberg, the author of cURL. – HongboZhu Feb 13 '19 at 10:59
  • Another important point (IMHO) is that wget's mirroring capabilities include converting the downloaded files so that their links point to the new location (and it can also use timestamps to download only the files that changed since the last run). – Camion Feb 26 '22 at 14:28

They were made for different purposes:

  • wget is a tool for downloading files from servers
  • curl is a tool that lets you exchange requests/responses with a server

wget

Wget solely lets you download files from an HTTP/HTTPS or FTP server. You give it a link and it automatically downloads the file the link points to, building the request for you.
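
A minimal illustration (the URL is a placeholder):

# Download a file; wget names the output after the URL:
wget https://example.com/archive.tar.gz

# Resume an interrupted download where it left off:
wget -c https://example.com/archive.tar.gz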

curl

Curl, in contrast to wget, lets you build the request as you wish. Combine that with the plethora of protocols supported - FTP, FTPS, Gopher, HTTP, HTTPS, SCP, SFTP, TFTP, Telnet, DICT, LDAP, LDAPS, IMAP, POP3, SMTP, RTSP and FILE - and you get an amazing debugging tool (for testing protocols, testing server configurations, etc.).

As many have already mentioned, you can download a file with curl. True, but that is just an "extra". In practice, use curl when you want to download a file via a protocol that wget doesn't support.
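
For example, a sketch of the kind of request building curl allows (the endpoint and payload here are hypothetical):

# Pick the method, set headers, attach a body, and trace the full
# request/response exchange with -v for debugging:
curl -v -X POST \
     -H "Content-Type: application/json" \
     -d '{"name": "test"}' \
     https://api.example.com/items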

Pithikos
  • Actually, wget also follows redirects and then saves the response, unlike curl. Both can achieve the opposite of their default behaviour: wget -qO - http://google.co.uk/ or curl http://google.co.uk/ > index.html – Matt Aug 15 '14 at 17:46
  • @mtm curl http://google.co.uk/ > index.html is not using built-in functionality, though. Anyway, the main distinction is the purpose each tool was made for. There is no denying that tools evolve and often deviate from their initial trajectory. – Pithikos Aug 16 '14 at 10:24
  • @mtm curl http://google.co.uk -o index.html would use curl's internals instead of shell output redirection with >. – Petrus Repo Jan 15 '15 at 11:35

Actually, the major difference is that curl includes a library (libcurl), and that library is widely used by other applications. wget is standalone.
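
A quick way to see this split on a typical Linux system (output will vary; curl-config ships with curl's development files):

# curl-config describes the installed libcurl that other programs can link against:
curl-config --version    # e.g. "libcurl 8.5.0"
curl-config --libs       # linker flags for building against libcurl

# wget, by contrast, links no curl library:
ldd "$(command -v wget)" | grep -i libcurl || echo "wget does not use libcurl"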

sendmoreinfo

I did some performance tests with wget and curl, and here are the results (average run time over 100 downloads):

File size    wget      cURL
1 MB         0.844s    0.680s
5 MB         1.075s    0.863s
10 MB        1.182s    1.074s

Command size on the system:

wget: 371K
cURL: 182K
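
For reference, a sketch of how such a benchmark could be reproduced (the URL is a placeholder; this is not necessarily how the author measured):

# Average wall-clock time over 100 downloads of the same file:
for i in $(seq 1 100); do
  /usr/bin/time -f "%e" wget -q -O /dev/null https://example.com/1MB.bin 2>> wget.times
  /usr/bin/time -f "%e" curl -s -o /dev/null https://example.com/1MB.bin 2>> curl.times
done
awk '{ s += $1 } END { print "wget avg:", s/NR, "s" }' wget.times
awk '{ s += $1 } END { print "curl avg:", s/NR, "s" }' curl.times
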
Feriman
  • The benchmarks are reversed when dealing with small files. I had to replace curl with wget to speed up backend scripts that were frequently making web API calls with small responses to each. My guess is that startup time makes the difference.

    I am also confused how your 10MB file takes only 20% longer to download than the 1MB one. There must be caching involved at some point; I am not convinced your benchmarks are valid. Also, command size tells nothing about the size of all the libraries it needs to load upon startup.

    – oᴉɹǝɥɔ Dec 09 '20 at 20:05
  • @oᴉɹǝɥɔ My RPi4 has a 1000/1000 Mbit/s internet connection. I think fast internet plus slow hardware is the best combination to test the run-time difference between wget and cURL. – Feriman Mar 09 '21 at 07:42
  • Doesn't curl use a library? So there are additional packages that add to the total size. – Junaga Apr 11 '21 at 15:18
  • curl links to libcurl.so. On my system: wget is 516K, curl is 255K, and libcurl.so.4 is 658K. So curl is a total of 913K. – Student Nov 26 '22 at 16:40

The main differences (1. curl is mainly about communicating over various protocols, while wget is mainly about downloading; 2. curl provides, and is built on, the libcurl library, which other software can use as well, while wget is standalone) have been mentioned in other answers, but here is another difference worth emphasizing, explained with an example.

Another interesting feature of curl, not possible with wget, is communicating over Unix domain sockets (i.e., communication even without a network). For instance, we can use curl to talk to the Docker Engine through its socket at /var/run/docker.sock to get a list of all pulled Docker images in JSON format (useful for "programming", in contrast to the docker images CLI command, which is good for "readability"):

curl --unix-socket /var/run/docker.sock http://localhost/images/json | jq
aderchox