When Will Quantum Computers Break Current Encryption? a16z Research Partners Take a Deep Dive into the Real Timeline of Quantum Threats, Clarify the Different Risks Facing Encryption and Signatures, and Offer Seven Recommendations for the Blockchain Industry. This article is based on a research report by Justin Thaler of a16z, translated and edited by Dongqu.
(Background: Physics Expert: Give Quantum Computers Five More Years to Break Bitcoin Private Keys; Must BTC Upgrades Be Fully Halted?)
(Additional Context: Cracking Bitcoin Before 2030? Google Willow’s “Quantum Echo” Sparks Expert Debate: Most Public Keys Are Exposed Early)
Table of Contents
How far are we from the advent of quantum computers capable of breaking Bitcoin?
When will quantum computers crack existing cryptography? The timeline for this question is often exaggerated, leading to calls for “urgent and comprehensive transition to post-quantum cryptography.”
However, these calls often overlook the costs and risks of premature migration, and fail to recognize that different cryptographic tools face fundamentally different threats:
Distinguishing this is crucial. Misconceptions can distort cost-benefit analyses, leading teams to overlook more urgent security risks like software vulnerabilities.
The real challenge in successfully transitioning to post-quantum cryptography is aligning the urgency of action with the actual threat. The following will clarify common misconceptions about quantum computing threats to cryptography, covering encryption, signatures, and zero-knowledge proofs, with particular focus on their implications for blockchain.
Despite ongoing hype, the likelihood of a “cryptographically relevant quantum computer” appearing in the 2020s is extremely low.
A “cryptographically relevant quantum computer” is a fault-tolerant, error-corrected quantum computer capable of running Shor’s algorithm at sufficient scale to break elliptic-curve cryptography (e.g., secp256k1) or RSA (e.g., RSA-2048) within a reasonable time frame (e.g., within a month of continuous operation).
Based on publicly available technological milestones and resource assessments, we are still far from such machines. Although some companies claim that they might achieve this by 2030 or 2035, current progress does not support these claims.
Currently, no quantum computing platform based on ion traps, superconducting qubits, or neutral atoms comes anywhere close to the scale needed to break RSA-2048 or secp256k1: on the order of thousands of logical qubits, which translates to millions of physical qubits under current error-correction overheads (the exact number depends on error rates and the error-correction scheme).
The bottleneck is not just the number of qubits but also the gate fidelity, connectivity between qubits, and the depth of error-corrected circuits needed to run Shor’s algorithm. Some systems have physical qubits exceeding 1,000, but this number is misleading: they lack the connectivity and fidelity required for cryptographic computations.
Recent systems are gradually approaching the physical error rates needed for quantum error correction, but so far no one can reliably run more than a few logical qubits, let alone thousands of high-fidelity, deep, fault-tolerant logical qubits needed for Shor’s algorithm. The gap from proof-of-principle to the scale required for cryptanalysis remains vast.
In short: before the number and fidelity of qubits improve by several orders of magnitude, cryptography-relevant quantum computers remain out of reach.
However, corporate press releases and media reports often cause confusion. Major sources of misunderstanding include:
These practices severely distort public (including expert) understanding of quantum computing progress.
Of course, progress is exciting. For example, Scott Aaronson recently wrote that, given the “astonishing speed of hardware progress,” he believes “it is a real possibility that before the next US presidential election, we will have a fault-tolerant quantum computer capable of running Shor’s algorithm.” But he clarified that this does not mean a cryptographically relevant quantum computer, only small-scale demonstrations such as factoring 15 = 3 × 5 (which is faster to do by hand). These remain small experiments; 15 is the standard target because its modular arithmetic is especially simple, and even slightly larger numbers like 21 are much harder for current hardware.
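To see why factoring 15 is only a toy milestone: the sole quantum step in Shor's algorithm is finding the multiplicative order r of a base a modulo N; everything else is classical post-processing. The sketch below shows that post-processing for N = 15, with a brute-force loop standing in for the quantum period-finding subroutine (which is the part that is hard to scale):

```python
from math import gcd

def order(a: int, n: int) -> int:
    """Smallest r > 0 with a^r ≡ 1 (mod n). This is the step Shor's
    algorithm performs with a quantum period-finding subroutine; here
    we simply brute-force it, which is only feasible for tiny n."""
    r, x = 1, a % n
    while x != 1:
        x = (x * a) % n
        r += 1
    return r

def shor_classical(n: int, a: int) -> tuple[int, int]:
    """Classical post-processing of Shor's algorithm for a 'lucky' base a:
    if the order r is even and a^(r/2) ≢ -1 (mod n), then
    gcd(a^(r/2) ± 1, n) yields the two prime factors."""
    assert gcd(a, n) == 1
    r = order(a, n)
    assert r % 2 == 0 and pow(a, r // 2, n) != n - 1
    half = pow(a, r // 2, n)
    return gcd(half - 1, n), gcd(half + 1, n)

print(shor_classical(15, 7))  # → (3, 5)
```

For N = 15 with base 7 the order is just 4, which is why this number is the perennial demo target; scaling the quantum subroutine to 2048-bit moduli is the part that remains far out of reach.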
Key conclusion: publicly available progress data offers little support for the idea that a cryptographically relevant quantum computer capable of breaking RSA-2048 or secp256k1 will emerge within the next 5 years. Even 10 years remains ambitious.
Thus, excitement about progress and the “still ten-plus years” timeline are not contradictory.
So what about the US government setting 2035 as the deadline for full post-quantum migration? I think this is a reasonable planning horizon for a large-scale transition, but it does not mean a cryptographically relevant quantum computer will necessarily appear by then.
“Harvest now, decrypt later” (HNDL) attacks refer to storing encrypted traffic today and decrypting it once quantum computers capable of breaking the underlying cryptography appear. State-level adversaries are likely already archiving large amounts of encrypted communications from US government sources for future decryption.
Therefore, encryption must be upgraded immediately, at least for data with confidentiality requirements of 10-50 years or more.
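For such long-lived secrets, a common interim pattern is hybrid key exchange: combine a classical shared secret with a post-quantum one so the session key stays safe unless both are broken. Below is a minimal sketch in which placeholder byte strings stand in for real X25519 and ML-KEM outputs; a production system would use vetted libraries rather than a hand-rolled key-derivation function:

```python
import hashlib
import hmac

def hkdf_sha256(ikm: bytes, info: bytes, length: int = 32) -> bytes:
    """Minimal HKDF (RFC 5869) with an all-zero salt, using HMAC-SHA256."""
    prk = hmac.new(b"\x00" * 32, ikm, hashlib.sha256).digest()  # extract
    okm, t, counter = b"", b"", 1
    while len(okm) < length:                                    # expand
        t = hmac.new(prk, t + info + bytes([counter]), hashlib.sha256).digest()
        okm += t
        counter += 1
    return okm[:length]

def hybrid_key(classical_ss: bytes, pq_ss: bytes) -> bytes:
    """Session key that stays secret unless BOTH input secrets are broken."""
    return hkdf_sha256(classical_ss + pq_ss, b"hybrid-kex-demo")

# Placeholder shared secrets standing in for X25519 and ML-KEM outputs.
key = hybrid_key(b"\x01" * 32, b"\x02" * 32)
print(len(key))  # → 32
```

The design point is that concatenating both secrets into one KDF input means an attacker who later breaks the elliptic-curve half (via Shor) still learns nothing about the derived session key.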
But digital signatures (the cornerstone of all blockchains) are different: they do not involve confidentiality that needs to be retroactively attacked. Even if quantum computers emerge in the future, they can only forge signatures from that point onward, not decrypt past signatures. As long as you can prove a signature was generated before the quantum computer’s appearance, it cannot be forged.
This makes the transition to post-quantum digital signatures less urgent than encryption.
Mainstream platforms are already doing this:
In contrast, deployment of post-quantum signatures on critical network infrastructure has been deliberately deferred until cryptographically relevant quantum computers are genuinely imminent, because of the performance costs involved (discussed below).
Zero-knowledge proofs (zkSNARKs) are similar. Even schemes whose soundness rests on elliptic-curve cryptography, and is therefore not post-quantum secure, typically have a zero-knowledge property that holds unconditionally: proofs reveal no secret information even to a quantum adversary, so there is no secret to harvest now and decrypt later. zkSNARKs generated before quantum computers appear therefore remain trustworthy; an attacker could only forge new proofs after such machines exist.
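This split between properties that survive a quantum attacker (hiding/zero-knowledge) and properties that do not (binding/soundness) can be seen in a toy Pedersen commitment. The parameters below are deliberately tiny and insecure, and the trapdoor s is known only for demonstration purposes: anyone who can solve discrete logs (as a quantum attacker could) can recover s and open a commitment to any message, breaking binding, yet the commitment itself reveals nothing about the committed value:

```python
# Toy Pedersen commitment in the order-233 subgroup of Z_467^*.
# Parameters are tiny and insecure; for illustration only.
p, q = 467, 233          # p = 2q + 1, both prime
g = 4                    # generator of the subgroup of squares (order q)
s = 57                   # trapdoor dlog; in a real setup nobody knows it
h = pow(g, s, p)

def commit(m: int, r: int) -> int:
    """Pedersen commitment C = g^m * h^r mod p."""
    return (pow(g, m, p) * pow(h, r, p)) % p

# Perfect hiding: every commitment is consistent with every message.
# Knowing s (what a discrete-log-solving quantum attacker computes)
# lets us open C(m1, r1) as a commitment to a different message m2,
# which breaks binding while leaking nothing new about the original m1.
m1, r1, m2 = 10, 20, 99
r2 = (r1 + (m1 - m2) * pow(s, -1, q)) % q
assert commit(m1, r1) == commit(m2, r2)
print("one commitment, two valid openings: hiding is information-theoretic")
```

The same asymmetry is why old zkSNARKs stay private forever, while proof forgery only becomes possible after a quantum computer exists.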
Most blockchains are not easily vulnerable to HNDL attacks.
Take today’s Bitcoin and Ethereum (non-privacy chains): their quantum exposure comes mainly from the signatures used to authorize transactions, not from encryption, and signatures carry no HNDL risk. Because Bitcoin’s ledger is public, the quantum threat is signature forgery (stealing funds), not decryption of transaction data that is already public. This removes the immediate HNDL urgency.
Unfortunately, even authorities like the Federal Reserve have mistakenly claimed Bitcoin is vulnerable to HNDL, overstating the urgency of transition.
Of course, lower urgency does not mean Bitcoin is safe. It faces different timing pressures related to protocol changes requiring significant social coordination (discussed below).
The current exception is privacy coins. Many encrypt recipient addresses and amounts. These secrets can be stolen now and later de-anonymized after quantum computers break elliptic curve cryptography. The severity depends on design (e.g., Monero’s ring signatures and key images may allow full transaction graph reconstruction). If users care about future quantum exposure, privacy coins should transition to post-quantum primitives (or hybrid schemes) or adopt architectures that do not put secrets on-chain.
For Bitcoin, two real-world factors drive the urgency of planning for post-quantum signatures, unrelated to quantum technology itself:
However, the quantum threat is not an immediate doomsday but a gradual, selective threat targeting high-value wallets. Early quantum attacks will be costly and slow, with attackers selectively targeting valuable addresses.
Additionally, users who avoid address reuse and do not use Taproot addresses (which expose public keys on-chain) are relatively safe: their public keys stay hidden behind hashes until they spend. Broadcasting a transaction reveals the public key, opening a brief window for quantum attack; honest users want the transaction confirmed quickly, while an attacker must derive the private key before confirmation.
Thus, the truly vulnerable coins are those with exposed public keys: early P2PK outputs, reused addresses, and Taproot-held assets.
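The hash-shielding of public keys described above can be sketched as follows. This is an illustration only: real Bitcoin addresses use HASH160 (RIPEMD-160 of SHA-256) plus version bytes and a checksum, while plain SHA-256 stands in here so the demo runs without RIPEMD support, and the "public key" is a placeholder byte string:

```python
import hashlib

def address_commitment(pubkey: bytes) -> bytes:
    """Sketch of how pre-Taproot addresses commit to a public key.
    Real Bitcoin uses HASH160 = RIPEMD-160(SHA-256(pubkey)); plain
    SHA-256 is used here so the demo runs anywhere."""
    return hashlib.sha256(pubkey).digest()

# Until the first spend, only this hash appears on-chain: a quantum
# attacker has no public key to run Shor's algorithm against.
pubkey = bytes.fromhex("02" + "11" * 32)   # placeholder 33-byte compressed point
addr = address_commitment(pubkey)
print(addr.hex()[:16], "... (no public key revealed)")

# Broadcasting a spend reveals `pubkey`, opening the short race window:
# the attacker must derive the private key before the tx confirms.
```

This is why reused addresses, early pay-to-public-key outputs, and Taproot outputs are the exposed category: for them, the public key is already on-chain and the race window is effectively permanent.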
For abandoned vulnerable coins, solutions are tricky: either community agrees on a “deadline” after which unspent coins are considered burned, or they remain vulnerable to future quantum attackers. The latter raises serious legal and security issues.
Bitcoin’s other unique challenge is low transaction throughput. Even with a planned upgrade, migrating all vulnerable funds could take months at current speeds.
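A back-of-the-envelope calculation makes the point. All numbers below are assumptions for illustration, not measurements: roughly 7 tx/s of base-layer throughput, a hypothetical 50 million vulnerable outputs, about one migrated output per transaction, and half of all blockspace devoted to the migration:

```python
# Back-of-the-envelope migration time; every figure is an assumption.
TPS = 7                         # rough Bitcoin base-layer throughput, tx/s
VULNERABLE_UTXOS = 50_000_000   # hypothetical count of outputs to migrate
BLOCKSPACE_SHARE = 0.5          # fraction of blockspace given to migration

seconds = VULNERABLE_UTXOS / (TPS * BLOCKSPACE_SHARE)
days = seconds / 86_400
print(f"~{days:.0f} days")      # on the order of months, even at 50% blockspace
```

Even with batching (one transaction can consolidate several inputs, which these numbers ignore), the conclusion survives: a full migration occupies a large share of blockspace for months, which is why planning must start well before any quantum deadline.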
These challenges mean Bitcoin must start planning for post-quantum transition now—not because quantum computers might appear before 2030, but because the governance, coordination, and technical logistics of moving hundreds of billions of dollars require years.
Quantum threat to Bitcoin is real, but the main pressure comes from its own limitations, not an imminent quantum computer.
Note: the signature vulnerabilities above do not affect Bitcoin’s economic security (PoW consensus). PoW relies on hash computation, which Grover’s algorithm speeds up only quadratically; given the overheads of quantum hardware, a meaningful real-world speedup is unlikely, and even if achieved it would mainly advantage large miners rather than undermine the security model.
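The arithmetic behind this caveat: finding a hash below a target with d leading zero bits takes about 2^d classical evaluations but only about 2^(d/2) Grover iterations. That is a quadratic, not exponential, gain; each quantum iteration is far slower than an ASIC hash, and Grover also parallelizes poorly (M machines give only about a sqrt(M) speedup, versus the linear gain of M classical ASICs). Illustrative numbers only; d here is an assumed difficulty, not today's actual value:

```python
# Grover's speedup on proof-of-work is only quadratic (illustrative numbers).
d = 78                            # assumed difficulty: ~78 leading zero bits
classical_hashes = 2 ** d         # expected classical hash evaluations
grover_queries = 2 ** (d // 2)    # expected quantum oracle queries, ~2^(d/2)

print(f"classical: 2^{d} evaluations")
print(f"Grover:    2^{d // 2} queries, each far slower than one ASIC hash")
```

So even a working quantum miner would face a sequential, slow search loop, not a shortcut around PoW.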
Why shouldn’t blockchains rush to deploy post-quantum signatures? We need to understand their performance costs and our confidence in these evolving schemes.
Post-quantum cryptography mainly relies on five classes of hard problems: hash functions, error-correcting codes, lattices, multivariate quadratic equations, and elliptic-curve isogenies. This diversity reflects an efficiency-security trade-off: more mathematical structure often means higher efficiency but also more potential attack surface.
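To make the hash-based family concrete, here is a minimal Lamport one-time signature, the conceptual ancestor of standardized hash-based schemes such as SPHINCS+. This is a sketch, not a production scheme (each key pair may sign exactly one message), and it illustrates the efficiency cost noted above: a single signature is 256 × 32 bytes, about 8 KB, versus ~64 bytes for an ECDSA or Schnorr signature:

```python
import hashlib
import secrets

H = lambda data: hashlib.sha256(data).digest()

def keygen():
    """Secret key: 256 pairs of random preimages (one pair per message bit).
    Public key: the SHA-256 hashes of those preimages."""
    sk = [(secrets.token_bytes(32), secrets.token_bytes(32)) for _ in range(256)]
    pk = [(H(a), H(b)) for a, b in sk]
    return sk, pk

def _bits(msg: bytes):
    """MSB-first bits of SHA-256(msg)."""
    digest = H(msg)
    return [(digest[i // 8] >> (7 - i % 8)) & 1 for i in range(256)]

def sign(msg: bytes, sk):
    """Reveal one preimage per message bit (hence strictly one-time)."""
    return [sk[i][bit] for i, bit in enumerate(_bits(msg))]

def verify(msg: bytes, sig, pk) -> bool:
    return all(H(sig[i]) == pk[i][bit] for i, bit in enumerate(_bits(msg)))

sk, pk = keygen()
sig = sign(b"post-quantum hello", sk)
assert verify(b"post-quantum hello", sig, pk)
assert not verify(b"forged message", sig, pk)
print(f"signature size: {len(sig) * 32} bytes")  # → 8192 bytes
```

Security here rests only on the preimage resistance of the hash function, which Grover weakens merely quadratically; the price is exactly the kind of size and statefulness overhead that makes networks hesitant to deploy post-quantum signatures prematurely.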
These issues pose risks far more immediate than the distant prospect of quantum computers.
Historical lessons remind us to be cautious: leading candidates in NIST’s standardization, like Rainbow (multivariate signatures) and SIKE/SIDH (isogeny-based cryptography), have been broken by classical algorithms. This underscores the risks of premature standardization and deployment.
Internet infrastructure has taken a cautious approach to signature migration, which matters because cryptographic transitions are inherently slow (witness the long, still-incomplete migration away from MD5 and SHA-1).
Open-source blockchain projects (e.g., Ethereum, Solana) can upgrade faster than traditional internet infrastructure. However, traditional systems can rotate keys frequently to reduce attack surfaces, while blockchain assets and keys may remain exposed for a long time.
Overall, blockchains should follow the cautious signature migration strategies used in internet protocols. Neither is vulnerable to HNDL attacks, and early migration carries high costs and risks.
Blockchain faces additional complexities that make early migration especially risky:
The more pressing issue is implementation security.
Over the coming years, implementation vulnerabilities will pose a greater security risk than quantum computers themselves. For SNARKs, the main threat is software bugs. Digital signature and encryption implementations already struggle with such bugs, and SNARKs are far more complex; in fact, a digital signature can be viewed as a minimal form of zkSNARK.
For post-quantum signatures, side-channel and fault-injection attacks are more immediate threats. The community needs years to harden these implementations.
Premature transition before standards are finalized could lock us into suboptimal schemes or force a second migration to fix vulnerabilities.
Based on these realities, I offer the following advice for builders and policymakers. The overarching principle: take quantum threats seriously, but do not assume cryptographically relevant quantum computers will appear before 2030 (current progress does not support this). Meanwhile, some actions can and should be taken now:
Of course, technological breakthroughs may accelerate timelines, and unforeseen bottlenecks may delay them. I do not claim a 5-year timeline is impossible, only that it is unlikely. Following these recommendations helps us avoid the more immediate and probable risks: implementation vulnerabilities, rushed deployments, and the common pitfalls of cryptographic transitions.