>however most of that is hidden behind the already established tooling
...and so everyone without Google-level funding is stuck with this tooling. Thus, control over the protocol that millions of people could use to communicate with one another directly (by making websites and possibly servers) is ceded to a handful of centralized authorities that can handle the complexity and also happen to benefit from the new features.
I remember how Node, when it was just rising in popularity, was usually demonstrated by writing a primitive HTTP server that served a "hello world" HTTP page. There were no special libraries involved, so it was super-easy to understand what's going on. We're moving away from being able to do things of this sort without special tooling, and almost no one seems to notice or care.
> I remember how Node when it was just rising in popularity was usually demonstrated by writing a primitive HTTP server that served a "hello world" HTTP page.
That is still possible in the exact same way.
But a toy is just a toy. All websites should encrypt their content with TLS. In fact, all protocols should encrypt their communications. The result? Sure, it is a binary stream of random-looking bits.
Yet to me, what matters about text protocols is not the ASCII encoding. It is the ability to read and edit the raw representation.
As long as your protocol has an unambiguous one-to-one textual representation with two-way conversion, I can inspect it and modify it with no headache.
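A minimal sketch of that property in Node, using hex as the textual form (hex is just one choice; any unambiguous two-way encoding makes the same point):

```javascript
// A binary payload and a lossless, human-editable textual form of it.
const frame = Buffer.from([0x00, 0x00, 0x05, 0x01, 0x04, 0x00, 0x00, 0x00, 0x01]);

// binary -> text: something you can read, diff, and edit by hand
const text = frame.toString('hex');   // "000005010400000001"

// text -> binary: exact reconstruction, bit for bit
const back = Buffer.from(text, 'hex');

console.log(text, frame.equals(back)); // round trip is lossless
```

As long as that round trip is exact, the wire format being binary is no obstacle to inspecting or modifying traffic.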
>All websites should encrypt their content with TLS. In fact, all protocols should encrypt their communications.
I reject the notion that encryption should be mandatory for all websites. It should be best practice, especially for a "modern" website with millions of users, but we don't need every single website encrypted.
Strongly disagree. At the very least, all sites should serve HTTPS. I don't want to get ads and spyware injected from my ISP, nor do I want everyone tracking what I read, on news sites for example. Provide HTTP if you want, but only for backcompat.
> I don't want to get ads and spyware injected from my ISP
Honestly, this is a notion that I do not understand. Why do you accept your ISP doing this? Why aren't people banding together and complaining about it in an organized manner? If there aren't alternative ISPs in your area, that means a lot of people are affected by this, so there are more voices to be heard. Why are you just accepting ISPs adding ads and spyware to your content as some force of nature that everyone else must work to keep at bay?
In many places there are no alternatives, and all available vendors have a history of hijinks. It's quite hard to band many small voices together, and even when we do, government agencies pick the wrong thing.
It's not really accepting. It's more like picking the least-shitty of a shitty set of options.
But as I wrote, if there are no alternatives, then there is a larger pool of people to band together. And you can start by making noise at the company as one voice, not with the government. Honestly, this sounds like you've given up without trying and are blaming the sites for not working around your ISP being shitty. The blame here lies with your ISP, not the sites.
> Why aren't people banding together and complain about it in an organized manner?
Because the notion of collective organizing has been completely eroded and squeezed out of modern society at every level and replaced with "consumer choice". Is this supposed to be a difficult question or a rhetorical one? I'm not being facetious; that's the actual answer, and it's very easy to observe if you look around. (If you check your watch, I'm sure it's only a matter of time until someone here literally replies telling you to get a new ISP, in fact.)
That said, even beyond the need for collective organizing of regulation for cases like ISPs, it's been about a decade since Firesheep made waves over the internet, because it turns out that just being able to snoop passwords at Starbucks was, in fact, not good, and actually quite bad. So it's not like ISPs are the only unscrupulous actors out there, and unless you want to get into the realm of "this software is illegal to possess" (normally a pretty hot-button topic here), someone has to deal with it in this case. The whole pathway between a user and their destination is, by design, insecure; combine that with the fact that the internet is practically a wild west, and you have a somewhat different problem.
Making system administrators adopt TLS en masse was probably the right course of action anyway, all things considered, and happens to help neutralize an array of problems here, even if you regulated ISPs and punished them excessively for hijinks like content manipulation (which I would wholeheartedly love to see, honestly.)
(The other histrionics about "simplicity" of HTTP/2 or text vs binary whatever are all masturbatory red herrings IMO so I'm just ignoring them)
This isn’t about having a shitty ISP specifically; it’s the fact that the network path between your machine and the server is by definition untrusted. The much harder problem is securing the entire internet so that you don’t need encryption. Or you could just encrypt the content and be sure you have a clean connection.
You say this as if ISPs don't exist as natural monopolies that can unilaterally ignore customer complaints because "Who cares what you think? You're stuck with us.".
Indeed. When it comes to technology, I think resiliency and robustness in general should trump almost all other concerns.
It would be nice if HTTP were extended to accommodate the inverse of the Upgrade header. Something to signal to the server something like, "Please, I insist. I really need you to just serve me the content in clear text. I have my reasons." The server would of course be free to sign the response.
While I agree with you, it is best to be on the safe side. The potential damage from leaving the wrong website unencrypted is massive compared to the cost of simply encrypting everything. Demanding 100% encryption is an extra layer of protection against human mistakes.
Demanding 100% encryption also locks out some retrocomputing hardware that had existing browsers in the early Internet days. Not all sites need encryption. Where it's appropriate, most certainly. HTTPS should be the overwhelming standard. But there is a place for HTTP, and there should always be. Same for other unencrypted protocols. Unencrypted FTP still has a place.
HTTP/FTP certainly have their place, but that is not on the open internet. For retro computing and otherwise special cases a proxy on the local network can do HTTP->HTTPS conversion.
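As a sketch of what such a local conversion proxy can look like, here is an nginx configuration (used here instead of Squid purely for brevity; `8080` and `example.com` are placeholders) that accepts plain HTTP on the LAN and forwards it over HTTPS:

```nginx
# Listens for plain HTTP from legacy clients on the local network
# and forwards requests to the HTTPS origin. Placeholders: 8080, example.com.
# Only run something like this on a trusted local network.
server {
    listen 8080;
    location / {
        proxy_pass https://example.com;
        proxy_set_header Host example.com;
        proxy_ssl_server_name on;  # send SNI during the upstream TLS handshake
    }
}
```

The retro machine then talks ordinary HTTP to the proxy box, which carries the TLS burden on its behalf.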
It's unfortunate that there doesn't seem to be a turn-key solution for this at the moment. I'm currently using Squid so I can use web-enabled applications on an older version of OS X, and it's great, but figuring out how to set it up took a solid day of work (partly because their documentation isn't very good), and the result will only work on macOS.
Mitmproxy is much easier to set up, but too heavy for 24/7 use.
Ideally this would be a DDWRT package, or maybe a Raspberry Pi image, all preconfigured and ready to go...
You can always use a MITM proxy that presents an unencrypted view of the web. As long as you keep to HTML+CSS, that should be enough. Some simple JS works too, but you can't generate https URLs on the client side. Which, for retrocomputing, is probably fine.
You wouldn't want to expose these "retro" machines to the Internet anyways.
> ...and so everyone without Google-level funding is stuck with this tooling.
...no? It's actually pretty easy to write an HTTP/2 client library yourself. There are tons and tons of implementations of HTTP/2; at this point, nearly as many as there are of HTTP/1. Presuming you're familiar with the spec, you can code one yourself in an evening or two.
(I'm in the Elixir ecosystem myself. We have https://hex.pm/packages/kadabra — which is, apparently, 2408LOC at present. That's without stripping comments/trivial closing-token lines/etc, because Elixir doesn't have a good sloccount utility.)
The only thing that could potentially get in the way of an HTTP/2 implementation is a language not having good support for parsing/generating binary data. Languages like JavaScript or Python — i.e. languages where you have to deal with binary data as if it were a type of string, where the tools for dealing with bytes and with codepoints are all mushed together and confused — struggle with binary protocols of all kinds. People do, nevertheless, write binary-protocol libraries in these languages.
But that's why people don't tend to think of these as "backend server languages", and instead use languages like Go or Erlang or even C, since in these languages there are first-class "array/slice of bytes" types, and highly-efficient operations for manipulating them (with strict, predictable "bits are bits" semantics), which make writing binary-protocol libraries a breeze.
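For concreteness, the fixed 9-byte HTTP/2 frame header (RFC 7540 §4.1) that any such library must parse looks like this, sketched in Node — whose `Buffer` type, to be fair, does provide exactly the kind of first-class byte access described above:

```javascript
// Parse an HTTP/2 frame header (RFC 7540 §4.1) from raw bytes:
// 24-bit payload length, 8-bit type, 8-bit flags, reserved bit + 31-bit stream id.
function parseFrameHeader(buf) {
  return {
    length:   buf.readUIntBE(0, 3),             // 24-bit payload length
    type:     buf.readUInt8(3),                 // frame type (0x1 = HEADERS)
    flags:    buf.readUInt8(4),                 // type-specific flag bits
    streamId: buf.readUInt32BE(5) & 0x7fffffff, // clear the reserved high bit
  };
}

// Example: a HEADERS frame header with a 5-byte payload on stream 1,
// flags END_STREAM (0x1) | END_HEADERS (0x4).
const header = Buffer.from([0x00, 0x00, 0x05, 0x01, 0x05, 0x00, 0x00, 0x00, 0x01]);
console.log(parseFrameHeader(header));
// → { length: 5, type: 1, flags: 5, streamId: 1 }
```

The fixed-size header is the easy part, of course; the real work in an HTTP/2 library is HPACK and flow control, but none of it is beyond a single motivated developer.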
> The only thing that could potentially get in the way of HTTP/2 implementation, is a language not having good support for parsing/generating binary data. Languages like Javascript or Python — i.e. languages where you have to deal with binary data as if it were a type of string, where the tools for dealing with bytes and with codepoints are all mushed together and confused — struggle with binary protocols of all kinds. People do nevertheless write binary protocols for these languages.
Python explicitly changed that between Python 2 and Python 3 (not without considerable pain for its users -- the fact that open("/dev/urandom").read(10) makes Python 3 crash¹ was what put me off of learning it for some years, for example).
This isn't to say that Python users or documentation have all made the switch, but modern Python is very capable of distinguishing bytes and character set codepoints, and typically insists that programmers do so in most contexts.
¹ This crash turns out to be extremely easy to fix, by adding the file mode "rb" to the open() call, but Python 2 programmers wouldn't have expected to have to do that.
You’re right, unqualified that was kind of ridiculous.
What I meant/believe is specifically that there are nearly as many production-grade, actively-maintained, general-purpose HTTP/2 client and/or server libraries at this point, as there are production-grade, actively-maintained, general-purpose HTTP/1.1 client and/or server libraries.
The long tail of HTTP/1.1 client “libraries” consists of either 1. dashed-off single-purpose implementations (i.e. things that couldn’t be reused in any other app than the one they’re in), or 2. long-dead projects for long-dead platforms.
Those dashed-off impls and long-dead platforms are the reason we will never (and should never) get rid of HTTP/1.1 support; and why every web server should continue to support speaking HTTP/1.1 to clients. But just because they exist, doesn’t mean they contribute to the “developer ecosystem” of HTTP/1.1. You can’t just scoop out and reuse the single-purpose HTTP/1.1 impl from someone else’s client. Nor can you make much use of an HTTP/1.1 library written for the original Macintosh.
Ignoring that pile of “impractical to use in greenfield projects” libraries—i.e. cutting off the long tail—you’re left with a set of libs (~6-8 per runtime, x N popular runtimes) that is pretty closely matched by the set of HTTP/2 libs (~1-2 per runtime, x N popular runtimes.) “Within an order of magnitude” is “nearly” in a programmer’s eyes :)
(Also, a fun fact to keep in mind: implementations of HTTP/1.1 clients and servers require very different architectures for connection establishment, flow control, etc. But as far as HTTP/2 is concerned, every peer is both a client and a server—against an open HTTP/2 connection, either peer can initiate a new flow in which it is the client and the other peer is the server. [Browsers and servers deny the possibility of being an HTTP/2 server or client, respectively, by policy, not by mechanism.] As such, when you’re writing a production-grade HTTP/2 library, that library must be both a client and a server library; or rather, once you’ve implemented one role, you’ve done enough of the common work that it is effectively trivial to extend it to also serve the other role. So every HTTP/2 library “matches up”, in some sense, with two HTTP/1.1 libraries—one HTTP/1.1 client library, and one HTTP/1.1 server library.)
Node has had a built-in HTTP server since v0.1.17. Are you sure those examples didn't use that? Because if they did, then it was the same in those examples as it is now.
If you care about protocol simplicity and the attendant implementation costs, then the continuously creeping Web platform is a few orders of magnitude worse in this respect.