(Replying to PARENT post)
Perhaps to allow the content to be served from a CDN (over HTTPS) without requiring the site to CNAME its domain over to Google.
If webmasters are willing to CNAME their domain over to a caching proxy, then a less intrusive design is possible[0], such as the one recently announced by Cloudflare[1].
[0]: https://github.com/ampproject/amphtml/blob/master/spec/amp-c...
[1]: https://blog.cloudflare.com/accelerated-mobile/
(Replying to PARENT post)
It forces the compiler to accept some truly awful running times for pathological cases. At least quadratic, probably exponential.
For languages that have reflection, pointer maps for GC, or debug information for types, it can force large blowups in space as well. Go has all three.
The implementation would likely require runtime code generation (or accepting warts like Rust's "object safety").
Indeed, all of Ian's proposed implementations are polymorphic, and at first glance each seems to avoid these issues. The only advantage of a monomorphic implementation is performance, and considering the downsides, this'd be a premature optimization forced by the language spec.
If it's actually performance-critical, I imagine it'd be easy to write a program that monomorphizes a particular instantiation of the generic types. Indeed, the compiler would be free to do that itself where it judged it worthwhile; small, guaranteed non-pathological scenarios, for instance.
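To make that concrete, here's a minimal, hypothetical sketch (every name in it is mine, not from any proposal): a generic minimum done polymorphically via an interface, next to the hand-monomorphized variant such a tool (or the compiler) could emit for a hot instantiation:

    package main

    import "fmt"

    // Lesser is the polymorphic route: one compiled body, dynamic dispatch.
    type Lesser interface {
        Less(other interface{}) bool
    }

    type Int int

    func (a Int) Less(b interface{}) bool { return a < b.(Int) }

    // MinPoly works for any Lesser, at the cost of boxing and dispatch.
    func MinPoly(a, b Lesser) Lesser {
        if a.Less(b) {
            return a
        }
        return b
    }

    // MinInt is the monomorphized variant: no boxing, no dispatch.
    func MinInt(a, b int) int {
        if a < b {
            return a
        }
        return b
    }

    func main() {
        fmt.Println(MinPoly(Int(1), Int(2))) // polymorphic call
        fmt.Println(MinInt(1, 2))            // monomorphized call
    }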
Whereas if you guarantee monomorphization in the language spec, the compiler and all users are forced to accept the downsides in every instance, in exchange for often-meaningless performance gains (example: any program that does a bit of computation and then waits on IO).
(Replying to PARENT post)
The caller can make a function "non-blocking" by wrapping the call in a goroutine themselves. (There are some subtle differences, but they are mostly irrelevant here.) For this reason, I'd say there is (almost) no reason to introduce asynchrony into your API in the way you suggest. The rest of your post seems shaky to me, since it is built on an example API that doesn't need to exist.
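To illustrate, a minimal sketch (fetch is a made-up stand-in for any synchronous API) of the caller introducing the asynchrony itself:

    package main

    import (
        "fmt"
        "time"
    )

    // fetch stands in for any synchronous, blocking API.
    func fetch(id int) string {
        time.Sleep(10 * time.Millisecond) // simulate blocking work
        return fmt.Sprintf("result-%d", id)
    }

    func main() {
        // The caller, not the API, decides to be "non-blocking".
        result := make(chan string, 1)
        go func() { result <- fetch(42) }()

        // ... do other useful work here ...

        fmt.Println(<-result) // block only when the result is needed
    }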
I'd say that "you don't have to care whether your code is async or not" is overstating the case. I would append the qualifier "unless you're introducing concurrency". Considering that almost no low-level APIs are asynchronous, this comes up rarely (or happens in low-level code like the HTTP server). Examples that have come up for me: making N parallel RPCs, and writing a TCP server. In those situations, you do care about async vs. not.
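The first of those is only a few lines; here's a rough sketch (callBackend is hypothetical):

    package main

    import (
        "fmt"
        "sync"
    )

    // callBackend stands in for a synchronous RPC to some service.
    func callBackend(i int) string { return fmt.Sprintf("reply-%d", i) }

    func main() {
        const n = 5
        replies := make([]string, n)

        var wg sync.WaitGroup
        for i := 0; i < n; i++ {
            wg.Add(1)
            go func(i int) {
                defer wg.Done()
                replies[i] = callBackend(i) // each goroutine owns one slot
            }(i)
        }
        wg.Wait() // this is the one place we care about concurrency

        fmt.Println(replies)
    }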
In event-loop-based systems, by contrast, async seems to be in my face all the time, even when I'm doing things that are entirely sequential.
(Replying to PARENT post)
I don't understand, and the link seems unclear. Perhaps a more direct question: I get a request X and need to consult a backend service to answer it. Do I write synchronous code calling that backend, or do I use some callback mechanism?
> ... generators or async/await
Ah. This perhaps answers my question. Both of these are essentially compiler-written callbacks.
If this is going to be like C#, then I presume there will be a thread pool where user code executes. That seems like a non-ideal story for concurrency: users will have to take inordinate care not to call any blocking code, or they will prevent one of the pool's threads from doing useful work.
(Replying to PARENT post)
Besides, I think this is a moot point, because chances are that fewer than 100 people's HTTP2 implementations will serve 99.9999% of traffic. It's not like you or I spend much of our time deep in nginx's code debugging HTTP parsing; I think it's just as unlikely we'll be doing that for HTTP2 parsing.
Also, HTTP2 will (pretty much) always be wrapped in TLS, so it's not as if you'll be looking at a plain-text dump anyway. You'll be using a tool, and that tool's author will implement a way to convert the binary framing to human-readable text.
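For a taste of what such a tool does, here's a hedged sketch that renders HTTP2's 9-byte frame header (24-bit length, 8-bit type, 8-bit flags, 31-bit stream ID, per RFC 7540) as text; describeFrameHeader is my own illustrative helper, not any real tool's API:

    package main

    import (
        "encoding/binary"
        "fmt"
    )

    // describeFrameHeader converts HTTP2's binary frame header to text.
    func describeFrameHeader(b [9]byte) string {
        length := uint32(b[0])<<16 | uint32(b[1])<<8 | uint32(b[2])
        ftype := b[3]
        flags := b[4]
        stream := binary.BigEndian.Uint32(b[5:9]) &^ (1 << 31) // drop reserved bit
        return fmt.Sprintf("len=%d type=%d flags=%#x stream=%d", length, ftype, flags, stream)
    }

    func main() {
        // A SETTINGS frame header: length 0, type 4, no flags, stream 0.
        fmt.Println(describeFrameHeader([9]byte{0, 0, 0, 4, 0, 0, 0, 0, 0}))
    }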
Another way to put it: the vast majority of HTTP streams are never examined by humans, only by computers. Choosing a text-based protocol just seems like a way to adversely impact the performance of every single user's web browsing.
Yet another way to put it: there is a reason that Thrift, Protocol Buffers, and other common RPC mechanisms do not use text-based protocols. Nor do IP, TCP, or UDP, for that matter. And there's a reason that Memcached was updated with a binary protocol even though it had a perfectly serviceable text-based one.
(Replying to PARENT post)
Also, Go's garbage collector doesn't scan arrays that are known to contain no pointers, so representing an index as one massive array of values has proven quite effective for me.
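A minimal sketch of the idea (Entry is a made-up example type); because the element type holds no pointers, the GC can skip the entire backing array during marking:

    package main

    import "fmt"

    // Entry holds only integers, so a []Entry contains no pointers and
    // the garbage collector never needs to scan its backing array.
    type Entry struct {
        Key    uint64
        Offset uint64 // e.g. position of the record in a data file
    }

    func main() {
        index := make([]Entry, 0, 1<<20) // one large, pointer-free allocation
        index = append(index, Entry{Key: 42, Offset: 0})
        fmt.Println(index[0])
    }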
(Replying to PARENT post)
Note: I am not familiar with the implementation techniques behind shells.
(Replying to PARENT post)
Edit: And since I replied to only a small part of your comment, I should say that I disagree completely with your "few people truly care about their craft" statement. At least, I think that writing code that handles a lack of JavaScript is only valuable if you have enough such users to justify it. I.e., if you spend 20% of your time working on features for 0.1% of users, you are doing a disservice to the rest of your users. Even more so if you have to compromise the experience for everyone else just so that graceful degradation is an option.
In some cases you go out of your way to accommodate small fractions of your audience; ARIA and catering to those with disabilities is a good example. But turning off JS is a choice, one I respect but feel no obligation to cater to. I think pages should show a noscript warning, but other than that, it's a matter of engineering tradeoffs.
(Replying to PARENT post)
But forcing your site to use HTTPS also prevents your users from unwittingly participating in DDoS attacks on other sites (e.g. https://en.wikipedia.org/wiki/Great_Cannon). Consider it herd immunity.
Also, to respond to some of your other anti-HTTPS comments:
Regarding overhead: people are also working hard to minimize the overhead inherent to TLS. For instance, TLS 1.3 will establish an encrypted connection in a single round trip, and can resume encrypted connections in zero round trips with application opt-in (see https://blog.cloudflare.com/tls-1-3-overview-and-q-and-a/). The encryption itself has fairly ubiquitous hardware support, making the common ciphers ridiculously fast.
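As one data point, a sufficiently recent Go server can insist on TLS 1.3 with a single config field; this is just an illustrative sketch, and the certificate paths are placeholders:

    package main

    import (
        "crypto/tls"
        "fmt"
        "net/http"
    )

    func main() {
        srv := &http.Server{
            Addr: ":443",
            TLSConfig: &tls.Config{
                MinVersion: tls.VersionTLS13, // single-round-trip handshakes only
            },
            Handler: http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
                fmt.Fprintln(w, "hello over TLS")
            }),
        }
        // cert.pem and key.pem are placeholder paths.
        if err := srv.ListenAndServeTLS("cert.pem", "key.pem"); err != nil {
            panic(err)
        }
    }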
Regarding CAs: with HTTP you are implicitly relying on the honesty of everyone in the network path. With HTTPS you are implicitly relying on the honesty of the intersection of (a) people in the network path and (b) people who control a CA. This is strictly fewer people than with HTTP. People are also working hard to solidify our faith in set (b) by requiring Certificate Transparency for all new certificates, ensuring that misbehaving CAs can be detected and drastically raising the cost of mounting a CA-based attack.
You say "What if I for what ever reason don't want to use HTTPs", but you'll have to lay out some of those reasons explicitly. You'll probably find that people are working on all of them.
In general, the default expectation on the web should be encrypted and authenticated (i.e. only the two endpoints can read or write the data). Once we live in that future, asking for the ability to send plaintext network traffic will seem a lot like asking a modern programming language to explicitly allow buffer overflows. The language designer would be justified in saying "no" and ignoring you. The considerate language designer might ask "why would you want that?" and try to address your real need, but they would still never actually give you what you asked for. This may be "taking away choice" in the same sense that mandating airbags is "taking away choice", but people shrug and accept it because the baseline has moved.