πŸ‘€akerl_πŸ•‘11yπŸ”Ό166πŸ—¨οΈ96

(Replying to PARENT post)

I actually want printing to stdout more often than I want printing to a file; it's what I need most of the time. I guess different people have different use cases.

I will admit that rather than learn the right flag to have curl write to a file, when I _do_ want to write to a file I just use wget (and appreciate its default progress bar; there's probably some way to make curl do that too, but I've never learned it either).

When I want output on stdout, which is most of the time, I reach for curl. (I also use curl for pretty much any bash script use; even if I want to write to a file in a bash script, I just use `>` or look up the curl arg.)

It does seem odd that I use two different tools, with almost entirely different and incompatible option flags -- rather than just learning the flags to make curl write to a file and/or to make wget write to stdout. I can't entirely explain it, but I know I'm not alone in using both, choosing from the toolbox based on their default behaviors even though with the right args they can probably both do all the same things. Heck, in the OP the curl author says they use wget too -- now I'm curious if it's for something the author knows curl doesn't do, or just something the author knows wget will do more easily!

To me, they're like different tools focused on different use cases, and I usually have a feel for which is the right one for the job. Although it's kind of subtle, and some of my 'feel' may just be habit or superstition! But as an example, recently I needed to download a page and all its referenced assets (kind of like a GUI browser will do; something I only very rarely need to do), and I thought "I bet wget has a way to do this easily", and looked at the man page and it did. I have no idea if curl can do that too, but I reached for wget and was not disappointed.

πŸ‘€jrochkind1πŸ•‘11yπŸ”Ό0πŸ—¨οΈ0

(Replying to PARENT post)

I think his argument is valid, and thinking about curl as an analog to cat makes a lot of sense. Pipes are a powerful feature and it's good to support them so nicely.

However, just as curl (in standard usage) is an analog to cat, I feel that wget (in standard usage) is an analog to cp, and whilst I certainly can copy files by doing 'cat a > b', semantically cp makes more sense.

Most of the time if I'm using curl or wget, I want to cp, not cat. I always get confused by curl, never remembering the command to just cp the file locally, so I tend to default to wget because it's easier to remember.
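
For reference, the rough mapping I can never remember (example.com and the filenames are just placeholders):

    wget https://example.com/a            # saves ./a
    curl -O https://example.com/a         # saves ./a, using the name taken from the URL
    curl -o b https://example.com/a       # saves ./b -- the closest thing to `cp a b`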

πŸ‘€NickPollardπŸ•‘11yπŸ”Ό0πŸ—¨οΈ0

(Replying to PARENT post)

I think he may be missing what people mean by "it's easier without an argument". It's not just "only one option" - what I see in reality quite often is: "curl http://...", screen is filled with garbage, ctrl-c, ctrl-c, ctrl-c, damn I'm on a remote host and ssh needs to catch up, ctrl-c, "cur...", actually terminal is broken and I'm writing garbage now, "reset", "wget http://...".

I'm not saying he should change it. But if he thinks it's about typing less... he doesn't seem to realise how his users behave.

πŸ‘€viraptorπŸ•‘11yπŸ”Ό0πŸ—¨οΈ0

(Replying to PARENT post)

Do one thing and do it well.

IMHO cURL is the best tool for interacting with HTTP and wget is the best tool for downloading files.

πŸ‘€shapeshedπŸ•‘11yπŸ”Ό0πŸ—¨οΈ0

(Replying to PARENT post)

This "-O" seemed dubious to me so I took a look. Turns out... yep, it's not as simple as that.

"curl -O foo" is not the same as "wget foo". wget will rename the incoming file to as to not overwrite something. curl will trash whatever might be there, and it's going to use the name supplied by the server. It might overwrite anything in your current working directory.

Try it and see.
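
Something like this shows it (example.com is a placeholder; run it in a scratch directory):

    mkdir /tmp/dl-test && cd /tmp/dl-test
    echo "something precious" > index.html
    wget http://example.com/index.html      # saves index.html.1, leaves the original alone
    curl -O http://example.com/index.html   # overwrites ./index.html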

πŸ‘€rachelbythebayπŸ•‘11yπŸ”Ό0πŸ—¨οΈ0

(Replying to PARENT post)

I think of curl as a somewhat more intelligent version of netcat that doesn't require me to do the protocol communication manually, so outputting to stdout makes great sense.
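
Roughly, for anyone who wants the analogy spelled out (example.com is a stand-in):

    # speaking HTTP by hand over netcat:
    printf 'GET / HTTP/1.0\r\nHost: example.com\r\n\r\n' | nc example.com 80
    # letting curl do the protocol work (it prints just the response body):
    curl http://example.com/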
πŸ‘€userbinatorπŸ•‘11yπŸ”Ό0πŸ—¨οΈ0

(Replying to PARENT post)

It would be really nice if curl took the content-type and results from isatty(STDOUT_FILENO) into consideration when deciding whether to spew to stdout.
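
You can get some of that today with a wrapper; a minimal sketch, assuming a made-up `curlcat` helper and treating a text/* Content-Type as "safe for the terminal":

    curlcat() {
        # save to a file if stdout is a terminal and the body doesn't look like text
        if [ -t 1 ] && ! curl -sIL "$1" | grep -qi '^content-type: *text/'; then
            curl -O "$1"
        else
            curl "$1"
        fi
    }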
πŸ‘€wyldfireπŸ•‘11yπŸ”Ό0πŸ—¨οΈ0

(Replying to PARENT post)

HTTPie is a command line HTTP client, a user-friendly cURL replacement. http://httpie.org
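
For anyone curious, typical usage looks something like this (httpbin.org is just a convenient test service; exact output formatting depends on the version):

    http GET example.org                     # pretty-printed, colourised headers and body
    http POST httpbin.org/post name=value    # key=value pairs are sent as a JSON body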
πŸ‘€davidmhπŸ•‘11yπŸ”Ό0πŸ—¨οΈ0

(Replying to PARENT post)

Chrome dev tools have a super useful "Copy as cURL" right-click menu option in the network panel. Makes it very easy to debug HTTP!
πŸ‘€0x0πŸ•‘11yπŸ”Ό0πŸ—¨οΈ0

(Replying to PARENT post)

We all have some user bias, and in this case it is geared towards seeing curl as just a shell command to download files over HTTP/S.

Luckily, curl is much more than that; it is a great and powerful tool for people who work with HTTP. The fact that it writes to stdout makes things easier for people like me who are no gurus :) as it just works the way I would expect.

When working with customers with dozens of different sites, I like to be able to run a tiny script that leverages curl to quickly get me the HTTP status code from all the sites. If you're migrating some networking bits, this is really useful as a first quick check that everything is in place after the migration.

Also, working with HEAD instead of GET (-I) makes everything cleaner for troubleshooting purposes :)

My default set of flags is -LIkv (follow redirects, only headers, accept invalid certs, verbose output). I also use -H a lot to inject headers.
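
The script itself is nothing fancy; a minimal sketch (sites.txt is a made-up file with one URL per line):

    while read -r url; do
        code=$(curl -o /dev/null -skL -w '%{http_code}' "$url")
        printf '%s\t%s\n' "$code" "$url"
    done < sites.txt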

πŸ‘€mobiplayerπŸ•‘11yπŸ”Ό0πŸ—¨οΈ0

(Replying to PARENT post)

Having known both tools for a long time now, I never realized there was a rivalry between them - I just figured they're each used differently. cURL is everywhere, so it's a good default. I use it when I want to see all of the output of a request - headers, the raw response, etc. It's my de facto API testing tool. And before I even read the article, I assumed the answer was "Everything is a pipe". It sucks to have to memorize the flags, but it's worthwhile when you're actually debugging the web.
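
The flags I reach for when I want everything on screen (the URL is a placeholder):

    curl -i https://example.com/api/thing    # response headers followed by the body
    curl -v https://example.com/api/thing    # request/response headers on stderr, body on stdout
    curl -sS https://example.com/api/thing   # no progress meter, but keep error messages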
πŸ‘€eddierogerπŸ•‘11yπŸ”Ό0πŸ—¨οΈ0

(Replying to PARENT post)

> people who argue that wget is easier to use because you can type it with your left hand only on a qwerty keyboard

Haha, I never would have realized that.

πŸ‘€tallesπŸ•‘11yπŸ”Ό0πŸ—¨οΈ0

(Replying to PARENT post)

The "c" in "curl" stands for "cat". Any unix user knows what cat(1) does. Why the confusion?
πŸ‘€discardoramaπŸ•‘11yπŸ”Ό0πŸ—¨οΈ0

(Replying to PARENT post)

I am surprised there is no mention of the BSD fetch(1) http://www.freebsd.org/cgi/man.cgi?query=fetch%281%29 , which probably pre-dates both curl and wget.
πŸ‘€gtrubetskoyπŸ•‘11yπŸ”Ό0πŸ—¨οΈ0

(Replying to PARENT post)

I was recently playing with libcurl (the easiest way I know to interact with a REST API in C), and libcurl's default callback for writing data does this too. It takes a file handle, and if no handle is supplied, it defaults to stdout. It's actually really nice as a default... you can use different handles for the headers vs. the data, or use a different callback altogether.

I really, really like libcurl's API (or at least the easy API; I didn't play around with the heavy-duty multi API for simultaneous transfers). It's very clean and simple.

πŸ‘€lsiebertπŸ•‘11yπŸ”Ό0πŸ—¨οΈ0

(Replying to PARENT post)

I use curl over wget in most cases, just because I learned it first I guess. I use it enough that I rarely make the mistake of not redirecting when I want the output in a file.

The one case where I will reach for wget first is making a static copy of a website. I need to do this sometimes for archival purposes, and though I always need to look up the specific wget options to do this properly, this use case seems to be one where wget is stronger than curl (especially converting links so they work properly in the downloaded copy).
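
For the record, the incantation I keep re-looking-up is along these lines (exact flags vary; the URL is a placeholder):

    wget --mirror --convert-links --page-requisites --no-parent https://example.com/docs/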

πŸ‘€ams6110πŸ•‘11yπŸ”Ό0πŸ—¨οΈ0

(Replying to PARENT post)

"cat url", huh, that makes sense.

Why not just alias it ("make a File from URL" -> furl?) if people want to use it with the -O flag set by default?
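
Something like this in your shell rc would do it (the name is obviously made up):

    alias furl='curl -O'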

πŸ‘€pbhjpbhjπŸ•‘11yπŸ”Ό0πŸ—¨οΈ0

(Replying to PARENT post)

I find it pretty cool how authors of text-mode UNIX programs are still around. In fact the GNU culture has kind of grown up around that. And yet, to me text-mode stuff is just a part of a much larger distribution, not something to be distributed to so many systems. Oh, how times have changed.
πŸ‘€zkhaliqueπŸ•‘11yπŸ”Ό0πŸ—¨οΈ0

(Replying to PARENT post)

I am in the opposite camp, where I always try to redirect wget's output to a file. Then I end up with two files. Argh.
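
For anyone puzzled: redirecting wget's stdout doesn't redirect the download, so this is roughly what happens (data.json is a placeholder):

    wget https://example.com/data.json > out.json        # ./data.json appears anyway; out.json stays empty
    wget -O - https://example.com/data.json > out.json   # what I actually meant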
πŸ‘€unclesaammπŸ•‘11yπŸ”Ό0πŸ—¨οΈ0

(Replying to PARENT post)

> if you type the full commands by hand you’ll use about three keys less to write β€œwget” instead of β€œcurl -O”

Unless you've forgotten what the option was, because you don't use it multiple times a day.

πŸ‘€geonπŸ•‘11yπŸ”Ό0πŸ—¨οΈ0

(Replying to PARENT post)

OK, the screen filling with garbage happens the first time you use curl; then you read the README or --help (which you should have done beforehand), you learn -o, and… it never happens again.

No big deal.

πŸ‘€johncoltraneπŸ•‘11yπŸ”Ό0πŸ—¨οΈ0

(Replying to PARENT post)

curl could parse the MIME type and decide where to push the stream. POC:

    #!/usr/bin/env sh
    # POC: peek at the Content-Type header, then suggest streaming vs. saving.
    # (It only echoes the command it would run; "$1" is quoted to survive odd URLs.)
    case $(curl -sLI "$1" | grep -i content-type) in
        *text*) echo "curl $1"
                ;;
        *) echo "curl $1 > $(basename "$1")"
           ;;
    esac
https://gist.github.com/agumonkey/b85cef0874822c470cc6

It costs one extra round trip, though.

πŸ‘€agumonkeyπŸ•‘11yπŸ”Ό0πŸ—¨οΈ0

(Replying to PARENT post)

tl;dr Because the author says so.
πŸ‘€angelortegaπŸ•‘11yπŸ”Ό0πŸ—¨οΈ0

(Replying to PARENT post)

99% of the time I'm using curl/wget, it's to download a compressed file. So, for me, `curl | tar` is shorter than `wget -O - | tar`, and much better than `wget` -> download -> decompress -> delete the file.
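
Concretely, something like this (the URL is a placeholder; assumes a gzipped tarball):

    curl -sL https://example.com/release.tar.gz | tar xz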
πŸ‘€_almosnowπŸ•‘11yπŸ”Ό0πŸ—¨οΈ0