>5.2. HTTP/2
>HTTP/2 [RFC7540] is the minimum RECOMMENDED version of HTTP for use with DoH.
A paper I read some years ago reported that DoH is faster than DoT, but for multiple queries over a single TCP connection outside the browser, I find that DoT is faster.
I use a local forward proxy for queries with HTTP/2. (Using libnghttp2 is another alternative.) In my own case (YMMV), HTTP/2 is not significantly faster than using HTTP/1.1 pipelining.
For me, streaming TCP queries with DoT blows DoH away
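For the curious, "streaming" over DoT just means writing length-prefixed DNS messages back-to-back on one TLS connection (RFC 7858) before reading any replies. A rough Python sketch, not production code (no short-read handling, responses are discarded):

```python
# Sketch of streaming several queries over one DoT connection (RFC 7858):
# each DNS message is prefixed with a two-octet length, so many queries
# can be written in a burst before any response is read.
import socket, ssl, struct

def build_query(qid: int, name: str, qtype: int = 1) -> bytes:
    """Minimal RFC 1035 query: header plus one question (default QTYPE=A)."""
    header = struct.pack("!HHHHHH", qid, 0x0100, 1, 0, 0, 0)  # RD=1, QDCOUNT=1
    qname = b"".join(bytes([len(p)]) + p.encode() for p in name.split(".")) + b"\x00"
    return header + qname + struct.pack("!HH", qtype, 1)      # QTYPE, QCLASS=IN

def stream_queries(host: str, names: list[str]) -> None:
    ctx = ssl.create_default_context()
    with socket.create_connection((host, 853)) as raw:
        with ctx.wrap_socket(raw, server_hostname=host) as tls:
            for qid, name in enumerate(names):        # write all queries first
                msg = build_query(qid, name)
                tls.sendall(struct.pack("!H", len(msg)) + msg)
            for _ in names:                           # then drain the replies
                # (a real client must loop on recv; short reads are possible)
                (length,) = struct.unpack("!H", tls.recv(2))
                tls.recv(length)  # response parsing omitted in this sketch
```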
What libraries are ending support for HTTP/1.1? That seems like an extremely bad move and somewhat contrived.
>The messages in classic UDP-based DNS [RFC1035] are inherently unordered and have low overhead. A competitive HTTP transport needs to support reordering, parallelism, priority, and header compression to achieve similar performance. Those features were introduced to HTTP in HTTP/2 [RFC7540]. Earlier versions of HTTP are capable of conveying the semantic requirements of DoH but may result in very poor performance.
I'd bet basically all their clients are using HTTP/2 and they don't see the point in maintaining a worse version just for compatibility with clients that barely exist.
But I'm thinking a few lines of nginx config could proxy HTTP/1.1 to HTTP/2.
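One caveat with the nginx idea: nginx's proxy module only speaks HTTP/1.x to upstreams, so plain proxy_pass can't upgrade a request to HTTP/2. A tiny shim can, though. Here's a rough Python sketch, assuming the third-party httpx package (installed with its http2 extra); the upstream URL is just an example:

```python
# Sketch of an HTTP/1.1 -> HTTP/2 DoH shim for legacy clients.
# Assumes `pip install httpx[http2]` (third-party; not stdlib).
import http.server

UPSTREAM = "https://dns.quad9.net/dns-query"  # example upstream resolver

def upstream_url(path: str) -> str:
    """Map an incoming /dns-query?dns=... path onto the upstream URL."""
    query = path.split("?", 1)[1] if "?" in path else ""
    return UPSTREAM + ("?" + query if query else "")

class DohShim(http.server.BaseHTTPRequestHandler):
    protocol_version = "HTTP/1.1"   # speak HTTP/1.1 to the old device

    def do_GET(self):
        import httpx  # imported lazily so the module loads without it
        with httpx.Client(http2=True) as client:  # speak HTTP/2 upstream
            r = client.get(upstream_url(self.path),
                           headers={"accept": "application/dns-message"})
        self.send_response(r.status_code)
        self.send_header("content-type", "application/dns-message")
        self.send_header("content-length", str(len(r.content)))
        self.end_headers()
        self.wfile.write(r.content)

# To run: http.server.HTTPServer(("127.0.0.1", 8053), DohShim).serve_forever()
```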
Most ISPs just want to sell your data and with encrypted client hello and DOH they’re losing visibility into what you’re doing.
DoT works fine, it's supported on all kinds of operating systems even if they don't advertise it, but DoH arrived in browsers. Some shitty ISPs and terrible middleboxes also block DoT (though IMO that should be a reason to switch ISPs, not a reason to stop using DoT).
On the hosting side, there are more options for HTTP proxies/firewalls/multiplexers/terminators than there are for DNS, so it's easier to build infra around DoH. If you're just a small server, you won't need more than an nginx stream proxy, but if you're doing botnet detection and redundant failovers, you may need something more complex.
And you can still block ad and scam domains with DoH. Either do so with a browser extension, in your hosts file, or with a local resolver that does the filtering and then uses DoH to the upstream for any that it doesn't block.
Ultimately though, it's not like this is getting rid of HTTP/1.1 in general, just DNS over HTTP/1.1. I imagine the real reason is simply that nobody was using it. Anyone not on the cutting edge is using normal DNS; everyone else is using HTTP/2 (or 3?) for DNS. It is an extremely weird middle ground to use DNS over HTTP/1.1. I'm guessing the Venn diagram was empty.
Luckily it's pretty easy to run your own DoH server if you're deploying devices in the field, and there are alternatives to Quad9.
If I'm wrong then please provide some examples of servers that support ECH
How?
There are certain browsers that ignore your DNS settings and talk directly to DoH servers. How could I check what the browser is requesting through an SSL session?
Do you want me to spoof a cert and put it on a MITM node?
These are my nameservers:
nameserver 10.10.10.65
nameserver 10.10.10.66
If the browser plays along, then talking to these is the safest bet for me, because it runs AdGuard Home and removes any ad or malicious (these are interchangeable terms) content by returning 0.0.0.0 for those queries. I use DoT as the uplink so the ISP cannot look into my traffic, and I use http->https upgrades for everything. For me, DoH makes it harder to filter the internet.
If someone can tell you're using HTTPS instead of some other TLS-encrypted protocol, that means they've broken TLS.
HTTP/1.1 is a simpler protocol and easier to implement, even with chunked Transfer-Encoding and pipelining. (For one thing, there's no need to implement HPACK.) It's trying to build multiplexing tunnels across it that is problematic, because buggy or confused handling of the line-delimited framing between ostensibly trusted endpoints opens up opportunities for desync that, in a simple 1:1 situation, would just be a stupid bug, no different from any other protocol implementation bug.
Because HTTP/2 is more complicated, there are arguably more opportunities for classic memory safety bugs. Contrary to common wisdom, there's not a meaningful difference between text and binary protocols in that regard; if anything, text-based protocols are more forgiving of bugs, which is why they tend to promote and ossify proliferation of protocol violations. I've written HTTP and RTSP/RTP stacks several times, including RTSP/RTP nested inside bonded HTTP connections (what QuickTime used to use back in the day). I've also implemented MIME message parsers. The biggest headache and opportunity for bugs, IME, is dealing with header bodies, specifically the various flavors of structured headers, and unfortunately HTTP/2 doesn't directly address that--you're still handed a blob to parse, same as HTTP/1.1 and MIME generally. HTTP/2 does partially address the header folding problem, but it's common to reject those in HTTP/1.x implementations, something you can't do in e-mail stacks, unfortunately.
Rather than throwing HTTP/1.1 into the garbage can, why don't we throw Postel's Law [0] into the garbage where it belongs.
Every method of performing request smuggling relies on making an HTTP request that violates the spec. A request that sends both Content-Length and Transfer-Encoding is invalid. Sending two Content-Lengths is invalid. Two Transfer-Encoding headers are allowed -- they should be treated as a comma-separated list -- so allow them and treat them as such, or canonicalize them as a single header if you're transforming it to something downstream.
But for fuck's sake, there's literally no reason to accept requests that contain most of the methods that smuggling relies upon. Return a 400 Bad Request and move on. No legit client sends these invalid requests unless they have a bug, and it's not your job as a server to work around their bug.
[0] Aka, The Robustness Principle, "Be conservative in what you send, liberal in what you accept."
Whoever designed TLS did not expect third parties, so-called "content delivery networks", "cloud providers", etc., wanting to offer hosting to an unlimited number of customers ($$) on a limited pool of IP addresses
Problem of cleartext SNI was solved in 2011, well before "QUIC" existed
http://curvecp.org/addressing.html
Without TLS and without SNI anyone can host multiple HTTPS sites on a single IP address
You can also configure the browser to use your chosen DoH server directly, but this is often as much work as just telling the browser to use the system DNS server and setting that up as DoH anyways.
Lots of clients just tell the world. ALPN is part of the unencrypted client hello.
For example, people passing requests received by HTTP/2 frontends to HTTP/1.1 backends
HTTP/1.0 with keep-alive is common (Amazon S3, for example) and is a perfectly suitable, simple protocol for this.
If I really do need to get that last bit, there's always other analysis to be done (request/response size/cadence, always talks to host X before making connections to other hosts, etc)
For true government level interest in what you are doing, it's a much harder conversation than e.g. avoiding ISPs making a buck intercepting with wildcard fallbacks and is probably going to need to extend to something well beyond just DoH if one is convinced that's their primary concern.
For this use case you want to be able to send off multiple requests before receiving their responses (you want to prevent head-of-line blocking).
If anything, keep-alive is probably counterproductive. If that is your only option, it's better to just make separate connections.
They force you to stay behind their NAT and recently started blocking VPN connections to home labs even.
For DNS this might come up in format parsing. E.g., in HTML, first you see a <script> tag, fire off the DNS request for that, and go back to parsing. Before you get the DNS result you see an <img> tag for a different domain and want to fire off the DNS request for that. With a batch method you would have to wait until you have all the domain names before sending off the request (this might matter more if you are receiving the file you are parsing over the network and you don't know whether the next packet containing the next part of the file is 1 ms away or 2000 ms).
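The fire-as-you-parse pattern described above, sketched in Python with asyncio. Here resolve() is a stand-in for a real DoH/DoT lookup (the fixed answer is fake), and the token list fakes incremental HTML parsing:

```python
# Sketch: launch each DNS lookup the moment a hostname is discovered,
# instead of batching all names and resolving them at the end.
import asyncio

async def resolve(name: str) -> str:
    # Stand-in for a real DoH/DoT lookup; the answer is made up.
    await asyncio.sleep(0.01)
    return f"{name} -> 192.0.2.1"

async def parse_and_resolve(tokens):
    pending = []
    for tok in tokens:                 # simulate incremental parsing
        if tok.startswith("host:"):
            name = tok.split(":", 1)[1]
            # fire immediately; don't wait for the rest of the document
            pending.append(asyncio.ensure_future(resolve(name)))
    return await asyncio.gather(*pending)  # lookups overlapped the parse

results = asyncio.run(parse_and_resolve(
    ["<script>", "host:cdn.example", "<img>", "host:img.example"]))
```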
the problem with relying on the wire protocol to streamline requests that should've been batched is that it lacks the context to do it well
October 9, 2025 blog

Quad9 will be discontinuing support for HTTP/1.1 within DNS-over-HTTPS (DOH) on December 15, 2025. This should have no impact on most users, but there are some older or non-compliant devices or software which may be unsupported after that time with DOH and which will have to revert to unencrypted DNS or shift to DNS-over-TLS.
Quad9 was the first large-scale recursive resolver to offer standards-based encryption (DNS-over-TLS in 2017). We also provide DNS-over-HTTPS (DOH) as an encryption method, which has been slowly increasing as a percentage of our traffic since standardization and our inclusion of that protocol in 2018. Browsers have been the primary devices operating with DOH, which has some benefits: browsers are updated frequently and are typically kept up to date with newer standards.
The DOH standard recommends HTTP/2 as the lowest version of the protocol for use for DOH (https://datatracker.ietf.org/doc/html/rfc8484#section-5.2) but does not rule out using the older HTTP/1.1 standard. We have supported both HTTP/1.1 and HTTP/2 since our inclusion of DOH in our protocol stack seven years ago. However, we are reaching the end of life for the libraries and code that support HTTP/1.1 in our production environment and, therefore, will be sunsetting support for DOH over HTTP/1.1 on December 15, 2025.
This sunsetting of HTTP/1.1 should not be noticed by the vast majority of our user community who are using Chrome (or any Chromium-based browser or stack), Firefox or Firefox forked projects, Safari (and to our knowledge all other Apple products/apps), or Android and iOS operating systems. They are all fully compliant with our existing and future DOH implementations and, to our knowledge, have always been compliant.
If your platform does not work without the older HTTP/1.1 protocol, then we would suggest you upgrade your system or shift to DNS-over-TLS which does not have an HTTP layer. There is always the possibility of moving to unencrypted DNS, but that decision should be closely considered as a downgrade of security and needs to be made carefully if you are in a network environment of higher risk.
The only platform that we are aware of directly that has ever used HTTP/1.1 and which will stop working after the sunset date is MikroTik devices that have been configured to use DNS-over-HTTPS, as those devices do not support the modern and recommended HTTP/2 transport protocol. We have communicated this to MikroTik on their support forum (https://forum.mikrotik.com/t/quad9-to-drop-support-for-http-1-1/264174/4), but there has not yet been an announcement by MikroTik as to when they will update their software to this more recent standard. Other than MikroTik, we have no specific knowledge of any other HTTP/1.1 devices or libraries with sizable user communities, though that does not mean there are no IoT devices or software libraries which are using that method.
From a geographic perspective, there is a community of users in Brazil who are on HTTP/1.1 which we believe to be MikroTik-based. Due to the fact that we cannot associate queries with users (or even one query with another) it is not easily possible for us to determine what types of devices these are, if not MikroTik, nor is it possible for us to inform those users about the impending change as by design we do not know who they are. We welcome any comments from our Brazilian community from knowledgeable users who can enlighten us as to the reasons for this geographic concentration (please contact support@quad9.net with details).
Despite our large geographic footprint and sizable user community, Quad9 remains a relatively small team. Our limited development efforts are better spent on bringing new features and core stability support to the Quad9 community, and we cannot justify the expense of integrating backwards compatibility for clients that are not meeting the recommended minimum version of protocols. HTTP/2 has been the recommended standard since the publication of the Request for Comments, and we believe this minimization of code is a reasonable step to take when compared with the costs and complexity of backwards compatibility development. In addition, HTTP/1.1 has significant speed and scale challenges, and as time progresses it may be the case that leaving it in our stack would introduce edge-case security or DOS attack vectors which would be difficult to discover and expensive to keep in our testing models.
The update allows us to move forward with additional, newer protocol support that we have been testing, which is ready for deployment and is part of a general refresh of our entire platform and system stack. We will have more flexibility and additional protocol support (keep watching this blog area for details), and the refresh also allows us to take better advantage of newer server hardware that we have been deploying worldwide to continue keeping pace with adoption rates.
We recognize this will cause inconvenience for some subset of users, and many users will not be aware of the change before it is applied as there is no assured direct method for us to communicate with our end users. This is the double-edged sword of not storing user data: we cannot directly notify everyone of changes.
If you know someone who will be impacted, please share and encourage them to take the necessary steps now to avoid interruption of service.