Grover attacks are very blatantly impractical. When someone describes Grover-type attacks in the same breath as Shor-type attacks, without caveats, that's a red flag.
What is going on?
If that’s the case, would the time eventually be basically irrelevant with enough compute? For instance, if what’s now a data center is able to fit in the palm of your hand (comparing early computers that took up rooms to phones nowadays). So if compute is (somehow) eventually able to be incredibly well optimized or if we use something new, like how microprocessors were the next big thing, would that then be a quantum threat to 128-bit symmetric keys?
WPA3 moved from symmetric AES to ECDH, which is vulnerable to quantum attacks. Gonna be a tonne of IoT inverter waste.
And for ECC, I know many are using 2^255 − 19 / 25519 because it's unlikely to be backdoored, but it's only 256 bits... Can't we find, say, 2^2047 − 19 (just making that one up) and be safe for a while too?
Basically: for RSA and ECC, is there anything preventing us from using keys 10x bigger?
As far as the Grover speedup goes, it's already optimal. Requiring O(sqrt(N)) queries is the proven lower bound for unstructured search.
I don't know what the quantum future holds, but if quantum actually happens then I have low faith in your plan.
There are enough order-of-magnitude breakthroughs between today and scalable quantum error correction that it makes no sense to try to guess exactly the order of magnitude of the attacks that will be feasible.
Either you believe they won't happen, in which case you can keep using long-term ECDSA keys, or you believe they will happen, in which case they are likely to overshoot your rotation period.
https://bas.westerbaan.name/notes/2026/04/02/factoring.html
It doesn't say much by itself, but it has four very good links on the subject. One of them has a picture of the smallest known factor-21 circuit, which is vastly larger than the factor-15 circuit and comparable to circuits for much larger numbers. Another is Scott Aaronson's article comparing a request to factor small numbers to a request for a "small nuclear explosion": if you're in 1940 and unable to make a small nuclear explosion, that doesn't mean you're much farther away from a big one.
I think an analogy would be: imagine you are driving across North America in a car, but your engine is broken. The mechanic is nearby, so you put it in neutral and push it.
If someone said, "well, it took you half an hour to push it to the mechanic, so it will take the rest of your life to get it across North America," that would be the wrong takeaway. If the mechanic actually fixes the engine, you'll go quite fast quite quickly. On the other hand, maybe it's just broken and can't be fixed. Either way, how fast you can push the car has no bearing on how fast the mechanic can fix it or how fast it will drive once it's fixed.
Maybe people will figure out quantum computers, maybe they won't, but the timeline of "factoring" 15 is pretty unrelated.
In the context of cryptography, keep in mind it's hard to change algorithms, and cryptographers have to plan for the future. They are interested in questions like: is there a >1% chance that a quantum computer will break real crypto in the next 15 years? I think the vibe has shifted to that sounding plausible. It doesn't necessarily mean it will happen; it's just become prudent to plan for that eventuality, and now is when you would have to start.
That's correct. The quantum computer needs to be "sufficiently larger" than your RSA key.
> Basically: for RSA and ECC, is there anything preventing us from using keys 10x bigger?
For RSA things get very unwieldy (but not technically infeasible) beyond 8192 bits. For ECC there are different challenges, some of which have nothing to do with the underlying cryptography itself: one good example is how the OpenSSH team still haven't bothered supporting Ed448, because they consider it unnecessary.
the time to run the algorithm has cubic scaling - 1000x more time required.
but it remains exponentially faster, just 1 minute becomes 1 day, 1 day becomes 3 years. still "easily" broken
you can run benchmarks yourself: openssl speed rsa1024 rsa2048
also this (slightly dated) java ex writeup covers this well: https://www.javamex.com/tutorials/cryptography/rsa_key_lengt...
tldr trade off is found between better performance and how many years the data needs to be assumed confidential
every encryption scheme has at least one way to be decrypted.
fidelity of information is one use of encryption: if you apply the solution and get garbage, something is wrong somewhere.
occultation of information is another use, one that is commonly abused by extending undue trust. under the proviso that encryption will eventually be broken, you can't trust encryption to keep a secret forever, but you can keep it secret long enough that it is no longer applicable to an attack, or a slightly askew use case. thus aggressive rotation of keys becomes desirable.
Grover's algorithm is provably optimal [0]. No quantum algorithm will ever find an n-bit key by queries to any reasonable sort of oracle faster than Grover's algorithm, and Grover's algorithm is way too slow to be a serious problem.
But symmetric ciphers are not black boxes. They're mostly built on some variant of a Feistel network, which is a very nice construction for turning a messy function into an invertible function that, in a potentially very strong sense, acts like a cryptographically secure permutation.
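To make the construction concrete, here is a toy Feistel network in Python. This is a sketch only: the round function, round keys, and block size are made up for illustration, and it is nowhere near a secure cipher. The point is that any keyed function, even a non-invertible hash, yields an invertible permutation:

```python
import hashlib

def round_fn(half: bytes, key: bytes) -> bytes:
    # a "messy", non-invertible function: truncated SHA-256
    return hashlib.sha256(key + half).digest()[: len(half)]

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def feistel_encrypt(block: bytes, round_keys: list[bytes]) -> bytes:
    left, right = block[:8], block[8:]
    for k in round_keys:
        left, right = right, xor(left, round_fn(right, k))
    return left + right

def feistel_decrypt(block: bytes, round_keys: list[bytes]) -> bytes:
    # run the rounds backwards; never needs to invert round_fn
    left, right = block[:8], block[8:]
    for k in reversed(round_keys):
        left, right = xor(right, round_fn(left, k)), left
    return left + right

keys = [b"k1", b"k2", b"k3", b"k4"]   # hypothetical round keys
pt = b"sixteen byte msg"              # one 16-byte block
print(feistel_decrypt(feistel_encrypt(pt, keys), keys) == pt)  # True
```

Decryption only ever evaluates `round_fn` forwards, which is exactly why a one-way function is usable here.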
When I was in grad school, one project I contemplated but never spent any real time on was trying to either generate a real security proof for quantum attacks on Feistel networks or to come up with interesting quantum attacks. And there is indeed an interesting quantum attack against 3-round Feistel networks [1].
This is interesting, because, depending on what sort of security is needed, three or four rounds of Feistel network are sufficient against classical attack [2].
Now ciphers like AES have many more than 3 rounds, so hopefully they're fine. But maybe they're not. My intuition is that there is probably a reasonably small n1 and a reasonably small n2 >= n1 (probably constants, maybe logarithmic in the number of bits) for which there is no quantum algorithm that can break symmetric crypto given classical query access even to the round functions (n1) or quantum query access to the round functions (n2) [3], but I'm not aware of any proof of anything of the sort. And my intuition definitely should not be trusted fully! (Maybe, even if I'm wrong, there is still a number of rounds that is sufficient for security against query access to the entire cipher.)
[0] The classic result is https://arxiv.org/abs/quant-ph/9701001 and there are newer, more exact results, e.g. https://arxiv.org/abs/0810.3647
[1] https://ieeexplore.ieee.org/document/5513654
[2] https://en.wikipedia.org/wiki/Feistel_cipher
[3] It would be extremely cool if someone built quantum computers and networks and storage such that two parties that don't trust each other could actually communicate and exchange (interesting [5]) qubits. I've written some fun papers on the possible implications of this. If we ever get the technology, then it might actually be meaningful to consider things like chosen-quantum-ciphertext attacks against a classical symmetric cipher. But that's many, many years away, and, in any case, an attacker will only ever get to do a quantum query attack against a cryptosystem if a victim lets them. [4] Otherwise all queries will be classical.
[4] Or in very complex settings where there is an obfuscated black box, for example. This may be relevant for zk-snarks or similar constructions.
[5] I don’t consider the optical qubits exchanged in commercial devices that supposedly implement quantum key distribution to be interesting. To the vendors of such devices, sorry.
If an attacker can break the symmetric encryption in a reasonable amount of time, they can capture the output and break it later.
In addition, how are you doing the key rotation? You have to have some way of authenticating with the rotation service, and what is to stop them from breaking THAT key, and getting their own new certificate? Or breaking the trusted root authority and giving themselves a key?
Basically, the best they've proven is something like O(n * inverse_ackermann(n)), but it seems likely the algorithm actually runs in O(n). We also already have a randomised algorithm for this problem that runs in O(n) expected time on worst case input. The expectation is over the random choices.
https://en.wikipedia.org/wiki/Expected_linear_time_MST_algor...
I think there are too many unknowns to bet it all on one horse.
So, if we have to change all of our infrastructure due to a supposed quantum computing threat, I'd go with HybridPQ for asymmetric encryption.
This assumes that there will not be other problems that arise. I suspect that "error correcting" thousands of qubits entangled with one another will be one of those problems.
None of those are remotely practical, even imagining quantum computers that become as fast (and small! and long-term coherent!) as classical computers.
They say the 's' in IoT stands for secure, and from my experience that is true. Pretty much nothing is getting thrown out, because it isn't secure.
One-time pads [0] are actually impossible to break, but they're pretty tricky to use: you must never, ever reuse them, they must be truly random, and you need some way to share them between both parties (which isn't that easy, since they need to be at least as large as all the data that you ever want to transmit).
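The mechanics themselves are trivially simple: XOR the message with the pad. A quick Python sketch (the hard parts, true randomness and pad distribution, are exactly what this glosses over):

```python
import secrets

message = b"attack at dawn"
# the pad must be truly random, as long as the message, and never reused
pad = secrets.token_bytes(len(message))
ciphertext = bytes(m ^ p for m, p in zip(message, pad))
recovered = bytes(c ^ p for c, p in zip(ciphertext, pad))
print(recovered == message)  # True
```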
I agree. The point I am trying to make is that even for asymmetric encryption (which is far more vulnerable), there are still plausible ways to make a quantum break more difficult.
The only thing that could compromise this scheme, aside from breaking the signing keys, would be to have TLS broken to the extent that viewing real-time traffic is possible. Any TLS break delayed by more than 15 minutes would be worthless.
To get useful results, a quantum computer needs all of its qubits to stay entangled with each other, until the entire group collapses into the result. With current technology, it is very difficult for a reasonably sized group of qubits to stay coherently entangled, so it can only solve problems that are also relatively easy to solve on classical computers.
If someone today were to figure out how to keep large numbers of qubits entangled, then quantum computing would instantly be able to break any encryption that isn't quantum safe. It's not something that we are slowly working toward; it's a breakthrough that we can't predict when, or even if, it will happen.
Compute has seen in the ballpark of a 5-10 orders of magnitude increase over the last 40 years in terms of instructions per second. We would need an additional 20-30 orders of magnitude increase to make it even close to achievable with brute force in a reasonable time frame. That isn’t happening with how we make computers today.
...but even if they had, what realistically could they have done about it? ML-KEM was only standardized in 2024 [1].
also, the addition of ECDH in WPA3 was to address an existing, very real, not-theoretical attack [2]:
> WPA and WPA2 do not provide forward secrecy, meaning that once an adverse person discovers the pre-shared key, they can potentially decrypt all packets encrypted using that PSK transmitted in the future and even past, which could be passively and silently collected by the attacker. This also means an attacker can silently capture and decrypt others' packets if a WPA-protected access point is provided free of charge at a public place, because its password is usually shared to anyone in that place.
0: https://en.wikipedia.org/wiki/Wi-Fi_Protected_Access#WPA3
1: https://en.wikipedia.org/wiki/ML-KEM
2: https://en.wikipedia.org/wiki/Wi-Fi_Protected_Access#Lack_of...
It sounds like you’re talking about breaking TLS’s key exchange? Why would this not have the usual issue of being able to decrypt recorded traffic at any time in the future?
Edit: If it’s because the plaintext isn’t useful, as knorker got at in a sibling comment… I sure hope we aren’t still using classical TLS by the time requiring it to be broken in 1 minute instead of 15 is considered a mitigation. Post-quantum TLS already exists and is being deployed…
I don't think I understand the threat model you are using here?
What makes you say that? This is the store now decrypt later attack, and it's anything but worthless.
Oh, worthless for your oauth? Uh… but how do you bootstrap the trust? Sounds to me like you need post quantum to carry the whole thing anyway.
Or you mean one key signs the next? Ok, so your bet is that within the time window an RSA key, RSA can't be cracked?
Why in the world would anyone want to depend on that? Surely you will also pair it with PQ?
I’ve seen so much change so fast my assumption is someone did it already and preprints are making the rounds.
Shor's and Grover's are still algorithms that require a massive number of steps...
Keep here in mind that computers today have features approaching the size of a single atom, switching frequencies where the time to cross a single chip from one end to the other is becoming multiple cycles, and power densities that require us to operate at the physical limits of heat transfer for matter that exists at ambient conditions.
We can squeeze it quite a bit further, sure. But anything like 20-30 orders of magnitude is just laughable even with an infinite supply of unobtanium and fairy dust.
if you know something about the content e.g. it is for russians, or americans.
you can use a frequency analysis to identify vowels. that goes for a simple substitution cypher that is relying on low frequency of usage[one time use] and does not keep it brief.
when you further substitute numbers for words, you gain more room for verbosity.
if you have high stakes, your message in the clear, should only be useful for a limited time, at the point that it is no longer actionable.
i'm very familiar with one-time pads, random and keyed.
they are a little simple; you can use a triaxial scheme, or a tensor-like scheme, for more leg room and more complexity.
depending on what you are doing it may be necessary to not carry any pads, but to have access at some point to agreed-upon keys, in order to generate a pad on the spot. or even work in your head, if you have skill. e.g. jackdwlovemybigsphnxfqurtz as a weak example.
Right, which is why I didn't quote that part :)
> you can use a frequency analysis to identify vowels.
That will help in many cases, but not against a properly-used one-time-pad.
> but to have access at some point, to agreed upon keys, in order to generate a pad on the spot
That's not really a one-time pad then, that's just a stream cipher. Which do work better than one-time pads in the vast majority of cases, aside from not being "perfectly" secure.
why do you have to assume that?
you're at Acme Coffeeshop. their wifi password is "greatcoffee" and it's printed next to the cash register where all customers can see it.
with WPA2 you have to consider N possible adversaries - Acme Coffee themselves, as well as every single other person at the coffeeshop.
...and also anyone else within signal range of their AP. maybe I live in an apartment above the coffeeshop, and think "lol it'd be fun to collect all that traffic and see if any of it is unencrypted".
with WPA3 you only have to consider the single possible adversary, the coffeeshop themselves.
The advancing threat of cryptographically-relevant quantum computers has made it urgent to replace currently-deployed asymmetric cryptography primitives—key exchange (ECDH) and digital signatures (RSA, ECDSA, EdDSA)—which are vulnerable to Shor’s quantum algorithm. It does not, however, impact existing symmetric cryptography algorithms (AES, SHA-2, SHA-3) or their key sizes.
There’s a common misconception that quantum computers will “halve” the security of symmetric keys, requiring 256-bit keys for 128 bits of security. That is not an accurate interpretation of the speedup offered by quantum algorithms, it’s not reflected in any compliance mandate, and risks diverting energy and attention from actually necessary post-quantum transition work. The misconception is usually based on a misunderstanding of the applicability of a different quantum algorithm, Grover’s.
AES-128 is safe against quantum computers. SHA-256 is safe against quantum computers. No symmetric key sizes have to change as part of the post-quantum transition. This is a near-consensus opinion amongst experts and standardization bodies and it needs to propagate to the rest of the IT community. The rest of this article backs up this claim both technically and with references to relevant authorities.
Grover’s is a quantum algorithm that allows searching an input space of size N of an unstructured function f for the “right answer” in π/4 × √N invocations of f.
This is commonly interpreted to mean that Grover’s algorithm can find an AES-128 key in 2^64 “time.” That is not the case in practice, because running such an attack as a single sequential thread would take hundreds of thousands of years, and parallelizing it makes its total cost grow.
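The π/4 × √N behavior can be checked by simulating the algorithm classically on a toy search space. This is a plain statevector sketch, with no quantum hardware involved, so it only works for tiny N:

```python
import math

def grover_success_probability(n_bits: int, target: int, iterations: int) -> float:
    N = 2 ** n_bits
    amp = [1 / math.sqrt(N)] * N           # uniform superposition
    for _ in range(iterations):
        amp[target] = -amp[target]         # oracle: flip the sign of the marked state
        mean = sum(amp) / N
        amp = [2 * mean - a for a in amp]  # diffusion: inversion about the mean
    return amp[target] ** 2

n_bits = 8                                 # a toy 8-bit "key": N = 256
k = math.floor(math.pi / 4 * math.sqrt(2 ** n_bits))
p = grover_success_probability(n_bits, target=123, iterations=k)
print(k, p)  # 12 iterations (vs. 256 classical tries), success probability ≈ 1
```

Note that the 12 iterations are inherently sequential: each diffusion step consumes the amplitudes produced by the previous one, which is the property the rest of this article turns on.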
A few important things to understand about Grover’s algorithm:

- It is a generic attack: it treats the function f (here, AES-128 under an unknown key) as a black box, which must be evaluated inside a quantum circuit.
- Its π/4 × √N invocation count is the proven lower bound for unstructured search; no quantum algorithm can do better.
- Those invocations must be performed sequentially, one after the other.
Why does the last point matter? Because unlike regular bruteforce attacks, which are “embarrassingly parallel,” partitioning the search space degrades the Grover quadratic speedup.
Consider a classical bruteforce of a 64-bit key, where each attempt takes 5 ns (~16 cycles at 3 GHz). Running that on a single CPU would take nearly

2^64 × 5 ns ≈ 3,000 years.

So we parallelize it across

2^16 = 65,536 CPUs

each exploring

2^(64 − 16) = 2^48 keys

in a little over

2^48 × 5 ns ≈ 16 days.

Note how the total amount of work done across the system

2^16 × 2^48 = 2^64

has not changed.
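As a sanity check of the arithmetic (a quick sketch, using 365-day years):

```python
attempt = 5e-9                          # 5 ns per key attempt
single_cpu = 2 ** 64 * attempt          # one CPU, fully sequential
per_cpu = 2 ** 48 * attempt             # each of 2^16 CPUs in parallel

print(single_cpu / (365 * 24 * 3600))   # ≈ 2,925 years
print(per_cpu / (24 * 3600))            # ≈ 16.3 days
```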
This is why we consider 64-bit keys weak: because they can be searched in parallel efficiently. If the attack had to be sequential, there would be no risk.¹
Let’s try the same with a Grover attack on 128-bit keys. Again, running

√(2^128) = 2^64

operations in a row is infeasible, so we again parallelize the attack across 2^16 quantum computers, each exploring

2^(128 − 16) = 2^112 keys.

Each instance will need to do

√(2^128 / 2^16) = 2^56 work.

Notice how that’s not 2^48!

A 2^16 search space reduction factor inside a square root only saves 2^8 of work per instance, whereas classically it saves the full 2^16.

The total amount of work across the system went up from 2^64 to

2^56 × 2^16 = 2^72

because we parallelized the attack, diluting the quadratic speedup in the process.
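The same bookkeeping in code; exact integer square roots make the powers of two come out exactly:

```python
import math

sequential = math.isqrt(2 ** 128)               # full Grover: 2^64 iterations in a row
per_instance = math.isqrt(2 ** 128 // 2 ** 16)  # each of 2^16 machines: 2^56, not 2^48
total = per_instance * 2 ** 16                  # 2^72 total work, up from 2^64

print(sequential == 2 ** 64, per_instance == 2 ** 56, total == 2 ** 72)  # True True True
```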
That gives us the intuition for why Grover’s algorithm doesn’t parallelize. To decide if it’s still a threat we need to run the numbers with concrete orders of magnitude.
First, we need to establish how many operations, or gates, in a row we can perform. To be conservative, let’s say that we have a fast-clock quantum architecture like superconducting qubits, and that a gate takes 1 µs.² If we are willing to keep the attack running (with no power outage or loss of fidelity!) for a decade, that gives us a maximum sequence of gates or “depth” of

10 years / 1 µs ≈ 2^48
Next, we need to know how many sequential gates it takes to compute AES-128 inside the quantum circuit. Liao and Luo (2025) provide a highly optimized Grover oracle for AES-128 with a depth of 232 T-gates (and a circuit “width” of 724, which is roughly speaking the number of logical qubits operating in parallel).
Now we can solve for the lowest parallelization factor that will keep each instance within a maximum depth of 2^48 (i.e. completing in a decade on these hypothetical fast and perfect quantum computers).

π/4 × √(2^128 / x) × 232 = 2^48

x ≈ 2^47

This means we’ll need 140 trillion quantum circuits of 724 logical qubits each, operating in parallel for 10 years, to break AES-128 with Grover’s.
Another way to measure the cost of that attack is its DW cost, the depth × width product, roughly equivalent to discussing the product of cycles and cores for classical computation.
π/4 × √(2^128 / 2^47) × 232 × 724 × 2^47 ≈ 2^104.5
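Both numbers can be reproduced directly. A quick check of the formula above, using the Liao and Luo depth (232) and width (724):

```python
import math

DEPTH, WIDTH = 232, 724   # Liao and Luo (2025) AES-128 Grover oracle
MAXDEPTH = 2 ** 48        # ~a decade of 1 µs logical gates

# solve pi/4 * sqrt(2^128 / x) * DEPTH = MAXDEPTH for the parallelization factor x
x = 2 ** 128 * (math.pi / 4 * DEPTH / MAXDEPTH) ** 2
# DW cost: per-instance depth, times width, times number of instances
dw = math.pi / 4 * math.sqrt(2 ** 128 / x) * DEPTH * WIDTH * x

print(math.log2(x))       # ≈ 47
print(math.log2(dw))      # ≈ 104.5
```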
Note that unlike Shor’s algorithm instantiations (and quantum error correction) which have been getting drastically better over the years, there aren’t many terms in that formula that can improve. The only two that are open to optimization are the AES-128 Grover oracle depth (232) and width (724), but they contribute only 17 bits to the total cost, and there is probably little space left for improvement. Liao and Luo (2025) shaved 7.5 bits off the very first estimate by Grassl et al. (2015) and 1.5 bits off the earliest Grover-specific estimate of Jaques et al. (2019).
Speaking of Shor’s, how does this compare with the recently discussed quantum attacks against 256-bit elliptic curves? After all, there are people who believe or believed those to be infeasible, too, but I’ve been arguing to take them seriously.
Babbush et al. (2026) claim a Shor’s execution in
70M ≈ 2^26 gates
which would take minutes on an architecture with “fast” gate time of 10 µs (which is 10 times slower than what we conservatively assumed above).
2^104.5 / 2^26 = 2^78.5
Breaking AES-128 with Grover is 430,000,000,000,000,000,000,000 times more expensive than breaking 256-bit elliptic curves with Shor’s.
The U.S. National Institute of Standards and Technology (NIST) is the standardization body that ran the international competition for post-quantum cryptography and wrote the ML-KEM and ML-DSA specification documents.
NIST not only considers AES-128 to be safe, but made it the benchmark for the security of post-quantum primitives. AES-128 is by definition a Category 1³ post-quantum algorithm.
In justifying the use of AES-128, NIST refers to the same observations we explained above, and introduces the concept of MAXDEPTH which is exactly the maximum serial computation that forces parallelization and limits Grover’s quadratic speedup.
> It is worth noting that the security categories based on these reference primitives provide substantially more quantum security than a naïve analysis might suggest. For example, categories 1, 3 and 5 are defined in terms of block ciphers, which can be broken using Grover’s algorithm, with a quadratic quantum speedup. But Grover’s algorithm requires a long-running serial computation, which is difficult to implement in practice. In a realistic attack, one has to run many smaller instances of the algorithm in parallel, which makes the quantum speedup less dramati[c].
NIST’s calculation uses less optimized (and hence less conservative) AES quantum circuits (from Grassl et al. (2015) instead of Liao and Luo (2025)), but faster (and hence more conservative) logical gate speed. Nonetheless, it lands in the same ballpark and comes to the same conclusion.
As further proof, NIST IR 8547 ipd, Transition to Post-Quantum Cryptography Standards, disallows all quantum-vulnerable algorithms from 2035. However, it reiterates in Section 4.1.3 that all AES key sizes remain allowed.
> NIST’s existing standards in symmetric cryptography — including hash functions, XOFs, block ciphers, KDFs, and DRBGs — are significantly less vulnerable to known quantum attacks than the public-key cryptography standards in SP 800-56A, SP 800-56B, and FIPS 186. In particular, all NIST-approved symmetric primitives that provide at least 128 bits of classical security are believed to meet the requirements of at least Category 1 security within the system of five security strength categories for evaluating parameter sets in the NIST PQC standardization process (see Table 1).
Finally, in the Post-Quantum Cryptography FAQs, there is an explicit answer to “should we double AES key sizes.” (Emphasis mine.)
> To protect against the threat of quantum computers, should we double the key length for AES now? (added 11/18/18)
>
> Grover’s algorithm allows a quantum computer to perform a brute force key search using quadratically fewer steps than would be required classically. Taken at face value, this suggests that an attacker with access to a quantum computer might be able to attack a symmetric cipher with a key up to twice as long as could be attacked by an attacker with access only to classical computers. However there are a number of mitigating factors suggesting that Grover’s algorithm will not speed up brute force key search as dramatically as one might suspect from this result. First of all, quantum computing hardware will likely be more expensive to build and use than classical hardware. Additionally, it was proven by Zalka in 1997 that in order to obtain the full quadratic speedup, all the steps of Grover’s algorithm must be performed in series. In the real world, where attacks on cryptography use massively parallel processing, the advantage of Grover’s algorithm will be smaller.
>
> Taking these mitigating factors into account, it is quite likely that Grover’s algorithm will provide little or no advantage in attacking AES, and AES 128 will remain secure for decades to come. Furthermore, even if quantum computers turn out to be much less expensive than anticipated, the known difficulty of parallelizing Grover’s algorithm suggests that both AES 192 and AES 256 will still be safe for a very long time. This of course assumes that no new cryptographic weaknesses, either with respect to classical or quantum cryptanalysis, are found in AES.
>
> Based on such understanding, current applications can continue to use AES with key sizes 128, 192, or 256 bits. NIST will issue guidance regarding any transitions of symmetric key algorithms and hash functions to protect against threats from quantum computers when we can foresee a transition need. Until then, users should follow the recommendations and guidelines NIST has already issued. In particular, anything with less than 112 bits of classical security should not be used.
NIST is not equivocating about this at all: 128-bit keys are fine.
What about other standards bodies? The German Federal Office for Information Security recently published BSI TR-02102-1 Cryptographic Mechanisms: Recommendations and Key Lengths with “updates in the area of PQ cryptography.”
In it, they mention Grover’s in passing and with less conclusiveness than NIST, but come to the same conclusion:
> The following block ciphers are recommended for use in new cryptographic systems:
>
> AES-128, AES-192, AES-256
In the same publication, they recommend transitioning off quantum-vulnerable primitives (which notably does not include AES-128) even sooner than NIST:
> The sole use of classic key agreement mechanisms is only recommended until the end of 2031, see also Section 2.3.
Samuel Jaques is an assistant professor in the Department of Combinatorics and Optimization at the University of Waterloo, specializing in cryptography and quantum computing. You might be familiar with his Landscape of Quantum Computing.
I found this 2024 slide deck after having run all the numbers above, and I felt a bit silly having essentially re-done the same work two years later and with less domain expertise, but it’s actually encouraging how closely the conclusions match: the margin is wide enough and the relevance of guessing and optimization small enough, that we’re all repeatedly coming to equivalent conclusions.


He talks about how hard it is to build even a single logical qubit, which I am ignoring above because we are assuming a world where scalable quantum error correction does happen, or we wouldn’t be worried about asymmetric cryptography either.
He also talks about decoherence, which I only hinted at by mentioning how unlikely it is that a quantum computer can actually operate for a decade uninterrupted, like we are conservatively assuming.
He makes the excellent point that the right metric to optimize Grover oracle circuits against is depth² × width, because depth contributes both to cost and to hitting the maximum depth. This suggests the Liao and Luo (2025) circuit might not be optimal, and he uses one from Jang et al. (2022) instead, but again comes to a very similar result.
Finally, he points out that each logical T-gate in a surface code architecture requires 2^16 physical operations, so there are still terms missing in the formula before reaching a practical resource estimate.
The whole post-quantum transition is about mitigating speculative risk, so why not be “safe” with symmetric keys, too? Because resources are finite, churn has a cost, and coordination is important.
The field is rushing to deploy post-quantum cryptography because experts are telling us there is a concrete danger to asymmetric cryptography. Conversely, the same experts are telling us that there is not any danger to symmetric cryptography.
Symmetric encryption is separate enough from asymmetric encryption that we usually don’t get to switch both at the same time for minimal marginal cost. For example, changing the negotiated TLS key exchange and PKI signature algorithms has little to do with changing supported cipher suites.
Conflating necessary and unnecessary changes will cause needless churn and take resources away from the urgent updates. We’re lucky we can leave the symmetric cryptography (sub-)systems untouched; we should take that blessing and focus on the work that actually needs doing, which is plenty.
There is also a complexity and coordination angle: transitioning any open ecosystem like TLS or anything that’s implemented across different libraries and programming languages requires agreeing on the target. It’s a lot easier to agree on something that’s technically accurate: we can either converge spontaneously or convince each other with technical arguments. Aiming for unnecessary and underspecified targets, instead, breeds interoperability issues and requires supporting multiple different idiosyncratic opinions, which introduces complexity.
There is admittedly one compliance regime which requires 256-bit symmetric keys: CNSA 2.0.
That’s not a quantum computing adjustment, though: CNSA 2.0 requires a 256-bit “security level” for everything, like its predecessor Suite B always did for the Top Secret classification, and indeed mandates ML-KEM-1024 and ML-DSA-87. As far as I can tell, its intention is to avoid the very same fragmentation introduced by “security levels” by picking one oversized primitive for all settings.
By accepting AES-256 (instead of a non-existing AES-512) at the 256-bit security level, CNSA 2.0 also acknowledges that Grover’s algorithm doesn’t halve the security of AES.
It’s likely Go will implement CNSA 2.0 profiles, if nothing else because “256-bit everything” is a well-defined target that’s unlikely to move. It’s not the “secure and reasonable” target I tend to prefer, but it’s something we can commit to, unlike “secure and reasonable but with a partial misunderstanding of the security of some primitives” which can mean different things to different people.
I generally believe that 256-bit “security levels” are somewhere between a comfort blanket and numerology, because as a colleague put it recently, “we don’t believe in counting to 2^128,” but that doesn’t mean 256-bit keys (or block sizes) are always useless to achieve the 128-bit security level.
When collisions are a concern and birthday bounds are in play, the security in bits really is halved, so you need e.g. a 256-bit hash output for 128 bits of collision security. (This is why SHA-128 doesn’t exist.) Similarly, multi-target attacks, where the adversary’s advantage is measured as the chance of breaking one of many messages/keys, can require additional margin depending on how nonces are used.
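A toy experiment makes the birthday bound tangible. Here SHA-256 is truncated to a hypothetical 24-bit hash so the search runs instantly:

```python
import hashlib

def toy_hash(data: bytes, bits: int = 24) -> int:
    # truncated SHA-256 standing in for a 24-bit hash
    return int.from_bytes(hashlib.sha256(data).digest(), "big") >> (256 - bits)

seen = set()
i = 0
while True:
    h = toy_hash(i.to_bytes(8, "big"))
    if h in seen:
        break
    seen.add(h)
    i += 1
print(i)  # a collision shows up after on the order of 2^12 tries, not 2^24
```

Preimage search against the same hash would take on the order of 2^24 tries; collisions cost roughly the square root of that, which is exactly why collision resistance needs double-length outputs.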
These are concerns that are deep within the purview of the cryptography engineers designing the protocols, though, and not in scope for the policy-making or system administration choices. Well-designed protocols already account for all this. For example, TLS with AES-128 meets the 128-bit security level with large multi-target margins, thanks to its nonce design.
As a cryptography engineer, I do wish all ciphers took 256-bit keys, as it would make my life easier, but
For more wishful-vs-real PQ migration, follow me on Bluesky at @filippo.abyssdomain.expert or on Mastodon at @filippo@abyssdomain.expert.
Last month I attended AtmosphereConf 2026, which was held at UBC. The campus is so beautiful, and there’s a spectacular path through the forest to the beach, where you can watch the sun setting over the water.

My work is made possible by Geomys, an organization of professional Go maintainers, which is funded by Ava Labs, Teleport, Tailscale, and Sentry. Through our retainer contracts they ensure the sustainability and reliability of our open source maintenance work and get a direct line to my expertise and that of the other Geomys maintainers. (Learn more in the Geomys announcement.) Here are a few words from some of them!
Teleport — For the past five years, attacks and compromises have been shifting from traditional malware and security breaches to identifying and compromising valid user accounts and credentials with social engineering, credential theft, or phishing. Teleport Identity is designed to eliminate weak access patterns through access monitoring, minimize attack surface with access requests, and purge unused permissions via mandatory access reviews.
Ava Labs — We at Ava Labs, maintainer of AvalancheGo (the most widely used client for interacting with the Avalanche Network), believe the sustainable maintenance and development of open source cryptographic protocols is critical to the broad adoption of blockchain technology. We are proud to support this necessary and impactful work through our ongoing sponsorship of Filippo and his team.
1. There are sequential attacks which indeed were misinterpreted as more practical than they are. For example, a static Diffie-Hellman oracle attack against Curve25519 or ristretto255 requires 2^63.5 oracle invocations, which sounds practical, except they need to be sequential, making the attack take two centuries in extremely optimal circumstances. ↩
2. “Gate” here means logical T-gate, not physical gate. Superconducting qubits execute physical gates in ~100 ns, but a logical T-gate on an error-corrected surface-code machine is much slower. Published estimates for logical T-gate latency fall in the single-to-tens-of-microseconds range, so 1 µs sits at the conservative (for us) end. ↩
3. Is Category 1 enough? Don’t we use ML-KEM-768 and ML-DSA-44 because they are Category 3 and Category 2, respectively? We do that not because Category 1 is insufficient, but because there’s concern that lattice cryptanalysis might advance slightly, and move ML-KEM-768 or ML-DSA-44 down to Category 1. There is no equivalent worry around AES, which indeed was picked as the very bar for Category 1. ↩