(Replying to PARENT post)
However, just as curl (in standard usage) is an analog to cat, I feel that wget (in standard usage) is an analog to cp, and whilst I certainly can copy files by doing 'cat a > b', semantically cp makes more sense.
Most of the time, if I'm using curl or wget, I want to cp, not cat. I always get confused by curl, never remembering the command to just cp the file locally, so I tend to default to wget because it's easier to remember.
(Replying to PARENT post)
I'm not saying he should change it. But if he thinks it's about typing less... he doesn't seem to realise how his users behave.
(Replying to PARENT post)
IMHO cURL is the best tool for interacting with HTTP and wget is the best tool for downloading files.
(Replying to PARENT post)
"curl -O foo" is not the same as "wget foo". wget will rename the incoming file to as to not overwrite something. curl will trash whatever might be there, and it's going to use the name supplied by the server. It might overwrite anything in your current working directory.
Try it and see.
(Replying to PARENT post)
(Replying to PARENT post)
(Replying to PARENT post)
(Replying to PARENT post)
(Replying to PARENT post)
Luckily, curl is much more than that: it's a great, powerful tool for people who work with HTTP. The fact that it writes to stdout makes things easier for people like me who are no gurus :) as it just works the way I'd expect.
When working with customers with dozens of different sites, I like to be able to run a tiny script that leverages curl to quickly get me the HTTP status code from every site. If you're migrating some networking bits, this is really useful as a first quick check that everything is in place after the migration.
Also, working with HEAD instead of GET (-I) makes everything cleaner for troubleshooting purposes :)
My default set of flags is -LIkv (follow redirects, headers only, accept invalid certs, verbose output). I also use -H a lot to inject headers.
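A status-check script along those lines can be sketched as a small shell function. This is only an illustration of the idea (using curl's -w '%{http_code}' to print just the final status code), not the commenter's actual script, and the function name is made up:

```shell
# Quick post-migration smoke test: print the HTTP status code for each
# site passed as an argument. (A sketch, not the commenter's script.)
check_sites() {
    for url in "$@"; do
        # -s silent, -o /dev/null discard the body, -L follow redirects,
        # -w '%{http_code}' print only the final status code
        code=$(curl -sL -o /dev/null -w '%{http_code}' "$url")
        printf '%s %s\n' "$code" "$url"
    done
}
```

Running `check_sites https://site-a https://site-b` then prints one "200 https://site-a"-style line per site.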
(Replying to PARENT post)
(Replying to PARENT post)
Haha, I would never have realized that.
(Replying to PARENT post)
(Replying to PARENT post)
(Replying to PARENT post)
I really, really like libcurl's API (or at least the easy API; I haven't played around with the heavy-duty multi API for simultaneous transfers). It's very clean and simple.
(Replying to PARENT post)
The one case where I will reach for wget first is making a static copy of a website. I need to do this sometimes for archival purposes, and though I always need to look up the specific wget options to do this properly, this use case seems to be one where wget is stronger than curl (especially converting links so they work properly in the downloaded copy).
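The "specific wget options" for this use case are worth writing down. A sketch of the commonly used archival flags, wrapped in a made-up helper function (not necessarily the parent's exact recipe):

```shell
# A sketch of common wget flags for a browsable static copy of a site:
#   --mirror            recursive download with timestamping
#   --page-requisites   also fetch the CSS/JS/images each page needs
#   --convert-links     rewrite links so they work in the local copy
#   --adjust-extension  append .html etc. so pages open in a browser
#   --no-parent         don't wander above the starting directory
archive_site() {
    wget --mirror --page-requisites --convert-links \
         --adjust-extension --no-parent "$@"
}
```

--convert-links is the bit curl has no direct counterpart for: it post-processes the downloaded pages so references resolve locally.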
(Replying to PARENT post)
Why not just alias it ("make a File from URL" -> furl?) if people want to use it with the -O flag set by default?
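The suggested furl could look like this; sketched as a shell function rather than an alias, since aliases don't expand in non-interactive shells by default (the name and flag choice beyond -O are my own):

```shell
# "furl" = make a File from a URL: curl with -O baked in.
furl() {
    # -f fail on HTTP errors instead of saving the error page,
    # -L follow redirects, -O save under the remote file name from the URL
    curl -fLO "$@"
}
```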
(Replying to PARENT post)
(Replying to PARENT post)
(Replying to PARENT post)
Unless you forgot what the option was since you don't use it multiple times a day.
(Replying to PARENT post)
No big deal.
(Replying to PARENT post)
#!/usr/bin/env sh
# Print a suggested curl command based on the URL's Content-Type:
# text content goes to stdout, anything else to a local file.
case $(curl -sLI "$1" | grep -i content-type) in
    *text*) echo "curl $1"
        ;;
    *) echo "curl $1 > $(basename "$1")"
        ;;
esac
https://gist.github.com/agumonkey/b85cef0874822c470cc6

Costs an extra round trip, though.
(Replying to PARENT post)
I will admit that, rather than learn the right command to have curl write to a file, when I _do_ want to write to a file I use wget (and appreciate its default progress bar; there's probably some way to make curl do that too, but I've never learned it either).
When I want output on stdout, I reach for curl, which is most of the time. (Also for pretty much any bash script use I use curl; even if I want to write to a file in a bash script, I just use `>` or look up the curl arg.)
It does seem odd that I use two different tools, with mostly different and incompatible option flags, rather than just learning the flags to make curl write to a file and/or to make wget write to stdout. I can't entirely explain it, but I know I'm not alone in using both, choosing from the toolbox based on their default behaviors even though, with the right args, they can probably both do all the same things. Heck, in the OP the curl author says he uses wget too -- now I'm curious whether it's for something the author knows curl doesn't do, or just something the author knows wget will do more easily!
To me, they're like different tools focused on different use cases, and I usually have a feel for which is the right one for the job. Although it's kind of subtle, and some of my 'feel' may be just habit or superstition! But as an example, recently I needed to download a page and all its referenced assets (kind of like browsers do with a GUI; something I only very rarely need to do), and I thought "I bet wget has a way to do this easily". I looked at the man page and it did. I have no idea whether curl can do that too, but I reached for wget and was not disappointed.