What exactly are they trying to accomplish? The article talks about sending entangled photons down a fiber optic... but that just sounds like ordinary fiber with extra steps.
Quantum networks allow you to transmit quantum states between physically separated quantum processors.
You can’t just put quantum before random words.
This isn’t VXJunkies
And what're you going to quantum do to quantum stop me? Quantum suck it, nerd.
**quantum nerd
He did it, it's done, the system is collapsed already
Perchance
I’m guessing for quantum cryptography. It would allow you to have perfect crypto (assuming the non-quantum hardware isn’t hacked (a big if)).
You can have post-quantum cryptography using classical computation, though
("Simply" pick a problem with no known quantum acceleration. I think Elliptic Curve Cryptography works, but I'm not an expert)
Quantum crypto is different than cracking encryption with a quantum computer. The point of quantum crypto is that the key exchange is perfectly secret. If it is observed, the people exchanging keys will know due to entanglement bs that I’m too dumb to understand.
But you basically get the perfect uncrackable encryption of one-time pads without having to manage one-time pads.
The problem with one-time pads is that they're also the most inefficient cipher: the key has to be as long as the message itself, so every bit of data requires a bit of key material to be distributed alongside it. If we switched to them for internet communication (ceteris paribus), it would basically cut internet bandwidth in half overnight. What's more, the one-time pad is a symmetric cipher, and symmetric ciphers are not broken by quantum computers; Grover's algorithm only halves the effective key length, so ciphers like AES-256 are still considered quantum-computer-proof. This means you would be cutting internet bandwidth in half for purely theoretical benefits that people wouldn't notice in practice. The only people I could imagine finding this interesting are overly paranoid governments, as there are no practical benefits.
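To make the overhead concrete, here's a minimal one-time pad sketch in Python (my own toy illustration, not production crypto): the XOR scheme is information-theoretically unbreakable, but notice the pad has to be exactly as long as the message, which is where the bandwidth cost comes from.

```python
import secrets

def otp_encrypt(message: bytes) -> tuple[bytes, bytes]:
    # The pad must be truly random, used only once, and as long as the message.
    pad = secrets.token_bytes(len(message))
    ciphertext = bytes(m ^ k for m, k in zip(message, pad))
    return ciphertext, pad

def otp_decrypt(ciphertext: bytes, pad: bytes) -> bytes:
    # XOR is its own inverse, so decryption is the same operation.
    return bytes(c ^ k for c, k in zip(ciphertext, pad))

ciphertext, pad = otp_encrypt(b"attack at dawn")
assert otp_decrypt(ciphertext, pad) == b"attack at dawn"
assert len(pad) == len(b"attack at dawn")  # key material == message length
```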
It also really isn't a selling point for quantum key distribution that it can reliably detect an eavesdropper. Modern cryptography does not care about detecting eavesdroppers. When two people are exchanging keys with a Diffie-Hellman key exchange, eavesdroppers are allowed to eavesdrop all they wish, but they cannot make sense of the data in transit. The problem with quantum key distribution is that it is worse than this: it cannot prevent an eavesdropper from seeing the transmitted key, it just discards the key if they do. This seems to me like it would make it a bit harder to scale, although not impossible, because anyone can deny service just by observing the packets of data in transit.
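For reference, here's a toy Diffie-Hellman exchange in Python (deliberately tiny parameters, purely illustrative; real deployments use elliptic curves or very large moduli): a passive eavesdropper sees p, g, A, and B, but deriving the shared secret from those requires solving the discrete logarithm problem.

```python
import random

p, g = 2_147_483_647, 5            # toy public parameters (prime modulus, generator)

a = random.randrange(2, p - 1)     # Alice's private exponent
b = random.randrange(2, p - 1)     # Bob's private exponent

A = pow(g, a, p)                   # public values, visible to any eavesdropper
B = pow(g, b, p)

# Both sides derive the same shared secret; the eavesdropper would need to
# recover a or b from A or B (the discrete log problem) to do the same.
assert pow(B, a, p) == pow(A, b, p)
```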
The bigger issue that nobody seems to talk about, though, is that quantum key distribution, just like the Diffie-Hellman algorithm, is susceptible to a man-in-the-middle attack. Yes, it prevents an eavesdropper between two nodes, but if the eavesdropper sets themselves up as a third node, pretending to be the other party when queried from either end, they could trivially defeat quantum key distribution. Diffie-Hellman is susceptible to the same thing, so that is not surprising.
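And here's a sketch of that man-in-the-middle against the same toy Diffie-Hellman (again, purely illustrative): Mallory just runs two separate exchanges and ends up sharing a key with each side, while Alice and Bob believe they're talking to each other.

```python
import random

p, g = 2_147_483_647, 5            # same toy public parameters as above

a, b, m = (random.randrange(2, p - 1) for _ in range(3))
A, B, M = pow(g, a, p), pow(g, b, p), pow(g, m, p)

# Mallory intercepts A and B in transit and substitutes her own value M.
key_alice_mallory = pow(M, a, p)   # Alice thinks this is her key with Bob
key_bob_mallory = pow(M, b, p)     # Bob thinks this is his key with Alice

# Mallory can compute both keys and silently re-encrypt traffic between them.
assert key_alice_mallory == pow(A, m, p)
assert key_bob_mallory == pow(B, m, p)
```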
What is surprising is that with Diffie-Hellman (or more commonly its elliptic curve brethren), we solve this using digital signatures, which are part of public key infrastructure. With quantum mechanics, however, the only equivalent to digital signatures relies on the no-cloning theorem. The no-cloning theorem says that if I gave you a qubit and you don't know how it was prepared, nothing you can do to it can fully reveal its quantum state, nor can you copy it. You can use the fact that only a single person can know its quantum state as a form of digital signature.
The thing is, however, the no-cloning theorem only holds true for a single qubit. If I prepared a million qubits all the same way and handed them to you, you could derive the underlying quantum state by doing different measurements on different qubits. Even though you could still use this for digital signatures, those digital signatures would have to be disposable: if too many copies of them were handed out, the state could be reverse-engineered. This presents a problem for using them as part of public key infrastructure, as public key infrastructure requires those keys to be, well, public, meaning anyone can take a copy, and so unlimited copyability is a requirement.
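To see why many identical copies leak the state, here's a minimal numpy sketch (my own toy example, with a hypothetical real-amplitude state): measuring half the copies in the Z basis and half in the X basis is enough to reconstruct the amplitudes.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical unknown state |psi> = a|0> + b|1>, with real amplitudes.
theta = 1.234
a, b = np.cos(theta), np.sin(theta)

n = 1_000_000
# Z-basis measurements on half the copies: P(outcome 0) = a^2.
z_zeros = (rng.random(n // 2) < a**2).mean()
# X-basis measurements on the other half: P(outcome +) = (a + b)^2 / 2.
x_plus = (rng.random(n // 2) < (a + b) ** 2 / 2).mean()

a_est = np.sqrt(z_zeros)
# The X statistics fix the relative sign of b.
b_est = np.sqrt(1 - z_zeros) * np.sign(2 * x_plus - 1)
print((a, b), (a_est, b_est))   # the estimates converge on the true amplitudes
```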
This makes quantum key distribution only reliable if you combine it with quantum digital signatures, but when you do that, it no longer becomes possible to scale it to some sort of "quantum internet." It, again, might be something useful an overly paranoid government could use internally as part of their own small-scale intranet, but it would just be too impractical without any noticeable benefits for anyone outside of that. As, again, all this is for purely theoretical benefits, not anything you'd notice in the real world, as things like AES256 are already considered uncrackable in practice.
Oh yeah, that. My bad, mixed 'em up.
The original algorithm doesn't use entanglement, though! Just the fact that measurements can change the state. You can pick an axis to measure a quantum state in. If you pick two axes at 45° to each other, measuring a state along the "wrong" axis gives a random result (the first time), whereas the "right" one always gives back the original data.
So the trick is to have the sender encode each bit into a quantum state along a randomly picked axis, send the states over, and then have the receiver decode them along a random axis as well. On average, half the axes will match up and those bits will correspond. The other bits are junk (random). They then tell each other the random axes they picked, which identifies the right bits!
Finally, they can compare some of their "correct" bits: if there's an eavesdropper, the eavesdropper must have measured along the wrong axis half the time (on average). Measurement collapses the state onto the measurement axis, so in those cases the receiver gets a random bit instead of the right one. The result is that 25% of the bits mismatch when they should always correspond.
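Here's a toy simulation of that protocol's statistics in Python (my own illustrative sketch of BB84, no real quantum hardware involved): with no eavesdropper the sifted bits always agree, while an intercept-and-resend eavesdropper pushes the error rate on the sifted bits to about 25%.

```python
import random

def bb84_error_rate(n_bits, eavesdrop=False):
    # Alice encodes random bits in randomly chosen bases:
    # '+' = rectilinear, 'x' = diagonal.
    alice_bits = [random.randint(0, 1) for _ in range(n_bits)]
    alice_bases = [random.choice("+x") for _ in range(n_bits)]
    states = list(zip(alice_bits, alice_bases))

    if eavesdrop:
        # Intercept-and-resend: Eve measures each qubit in a random basis.
        # A wrong-basis measurement collapses the state to a random bit
        # in Eve's basis, which is what she forwards to Bob.
        eve_bases = [random.choice("+x") for _ in range(n_bits)]
        states = [(bit, basis) if basis == eb else (random.randint(0, 1), eb)
                  for (bit, basis), eb in zip(states, eve_bases)]

    # Bob measures in his own random bases; a basis mismatch with the
    # incoming state yields a random outcome.
    bob_bases = [random.choice("+x") for _ in range(n_bits)]
    bob_bits = [bit if basis == bb else random.randint(0, 1)
                for (bit, basis), bb in zip(states, bob_bases)]

    # Sifting: keep only positions where Alice's and Bob's bases matched.
    sifted = [(a, b) for a, b, ab, bb in
              zip(alice_bits, bob_bits, alice_bases, bob_bases) if ab == bb]
    return sum(a != b for a, b in sifted) / len(sifted)

print("error rate, no eavesdropper:  ", bb84_error_rate(100_000))        # ~0.00
print("error rate, with eavesdropper:", bb84_error_rate(100_000, True))  # ~0.25
```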
Entanglement plays a key role.
Any time you talk about "measurement," this is just observation, and the result of an observation is to reduce the state vector, which is just a list of complex-valued probability amplitudes. The fact that they are complex numbers gives rise to interference effects. When the eavesdropper observes a definite outcome, you no longer need to treat the system as probabilistic; you can reduce the state vector by updating your probabilities to simply 100% for the outcome you saw. The number 100% has no negative or imaginary components, and so it cannot exhibit interference effects.
It is this loss of interference which is ultimately detectable on the other end. If you apply a Hadamard gate to a qubit, you get a state vector that represents equal probabilities for 0 or 1, but in a way that can exhibit interference with later interactions. For example, if you applied a second Hadamard gate, the qubit would return to its original state due to interference. If instead you had a qubit that was prepared with a 50% probability of being 0 or 1 but without interference terms (coherences), applying a second Hadamard gate would not return it to a definite state but would just give you a random output.
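You can see that difference numerically. A minimal numpy sketch (my own illustration): the pure state's amplitudes interfere back to a definite outcome under a second Hadamard, while a coherence-free mixed state with the same 50/50 probabilities is left unchanged by it.

```python
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # Hadamard gate
ket0 = np.array([1.0, 0.0])

# Pure state: H|0> gives 50/50 outcome probabilities, but the amplitudes
# interfere under a second Hadamard and restore |0> exactly.
print(H @ (H @ ket0))                          # [1. 0.]

# Mixed state with the same 50/50 probabilities but zero coherences:
rho_mixed = np.array([[0.5, 0.0], [0.0, 0.5]])
print(H @ rho_mixed @ H)                       # unchanged: still 50/50, no interference
```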
Hence, if qubits have undergone decoherence, i.e., if they have lost their ability to interfere with themselves, this is detectable. The obvious example is the double-slit experiment: the pattern on the screen visibly changes depending on whether or not the photons can interfere with themselves. Quantum key distribution detects whether an observer made a measurement in transit by relying on decoherence. A Hadamard gate is randomly applied to half the qubits and not the other half, and which is which is not revealed until after the transmission is complete. If the recipient receives a qubit that had a Hadamard gate applied to it, they have to apply it again themselves to cancel it out, but they don't know which qubits they need to apply it to until all the qubits have been transmitted and this information is revealed.
That means, at random, half the qubits they receive need to be read as-is, and for the other half they need to rely on interference effects to move the qubits back into their original state. Anyone who intercepts a qubit by measuring it causes it to decohere, and so when the recipient applies the Hadamard gate a second time to cancel out the first, they get random noise rather than an actual cancellation. The recipient receiving random noise where they should be getting definite values is how you detect whether there is an eavesdropper.
What does this have to do with entanglement? If we just talk about "measuring a state" then quantum mechanics would be a rather paradoxical and inconsistent theory. If the eavesdropper measured the state and updated the probability distribution to 100% and thus destroyed its interference effects, the non-eavesdroppers did not measure the state, so it should still be probabilistic, and at face value, this seems to imply it should still exhibit interference effects from the non-eavesdroppers' perspective.
A popular way to get around this is to claim that the act of measurement is something "special" which always destroys the quantum probabilities and forces it into a definite state. That means the moment the eavesdropper makes the measurement, it takes on a definite value for all observers, and from the non-eavesdroppers' perspective, they only describe it still as probabilistic due to their ignorance of the outcome. At that point, it would have a definite value, but they just don't know what it is.
However, if you believe that, then that is not quantum mechanics and in fact makes entirely different statistical predictions to quantum mechanics. In quantum mechanics, if two systems interact, they become entangled with one another. They still exhibit interference effects as a whole as an entangled system. There is no "special" interaction, such as a measurement, which forces a definite outcome. Indeed, if you try to introduce a "special" interaction, you get different statistical predictions than quantum mechanics actually makes.
This is because in quantum mechanics, every interaction grows the scale of entanglement, and so the interference effects never go away, they just spread out. If you introduce a "special" interaction such as a measurement whereby things are forced into a definite value for all observers, then you are inherently suggesting there is a limitation to this scale of entanglement: some cut-off point beyond which interference effects can no longer be scaled. And because we can detect whether a system exhibits interference effects or not (that's what quantum key distribution is based on), such an alternative theory (called an objective collapse model) would necessarily differ from quantum mechanics in its numerical predictions.
The actual answer to this seeming paradox is provided by quantum mechanics itself: entanglement. When the eavesdropper observes the qubit in transit, then from the perspective of the non-eavesdroppers, the eavesdropper becomes entangled with the qubit. It is then no longer valid in quantum mechanics to assign a state vector to the eavesdropper and the qubit separately, only to them together as an entangled system. However, the recipient does not receive both the qubit and the eavesdropper, they only receive the qubit. If they want to know how the qubit behaves, they have to do a partial trace to trace out (ignore) the eavesdropper, and when they do this, they find that the qubit's state is still probabilistic, but it is a probability distribution with only terms between 0% and 100%, that is to say, with no negative or imaginary components, and thus it cannot exhibit interference effects.
Quantum key distribution does indeed rely on entanglement as you cannot describe the algorithm consistently from all reference frames (within the framework of quantum mechanics and not implicitly abandoning quantum mechanics for an objective collapse theory) without taking into account entanglement. As I started with, the reduction of the wave function, which is a first-person description of an interaction (when there are 2 systems interacting and one is an observer describing the second), leads to decoherence. The third-person description of an interaction (when there are 3 systems and one is on the "outside" describing the other two systems interacting) is entanglement, and this also leads to decoherence.
You even say that "measurement changes the state", but how do you derive that without entanglement? It is entanglement between the eavesdropper and the qubit that leads to a change in the reduced density matrix of the qubit on its own.
... what you said is correct, but that's superposition, not entanglement. Entanglement is when you create a joint state of several qubits that cannot be decomposed into a tensor product of single-particle states (a single proton/photon/whatever).
I am factually correct, I am not here to "debate," I am telling you how the theory works. When two systems interact such that they become statistically correlated with one another and knowing the state of one tells you the state of the other, it is no longer valid to assign a state vector to the individual subsystems that took part in the interaction; you have to assign it to the system as a whole. When you then do a partial trace to get a reduced density matrix for either subsystem on its own, if the systems are perfectly entangled, you end up with a density matrix without coherence terms and thus without interference effects.
This is absolutely entanglement, this is what entanglement is. I am not misunderstanding what entanglement is, if you think what I have described here is not entanglement but a superposition of states then you don't know what a superposition of states is. Yes, an entangled state would be in a superposition of states, but it would be a superposition of states which can only be applied to both correlated systems together and not to the individual subsystems.
Let's say R = 1/sqrt(2) and Alice sends Bob a qubit. If the qubit starts with a probability of 1 of being the value 1 and Alice applies the Hadamard gate, it changes to an amplitude of R for being 0 and an amplitude of -R for being 1 (the probabilities are the squared magnitudes, so 50/50). In this state, if Bob were to apply a second Hadamard gate, it undoes the first Hadamard gate, and the qubit again has a probability of 1 of being the value 1, due to interference effects.
However, if an eavesdropper, let's call them Eve, measures the qubit in transit, then because R and -R are equal distances from the origin, she has an equal chance of seeing 0 or 1. Let's say it's 1. From her point of view, she would then update her probability distribution to a probability of 1 of the value being 1 and send the qubit off to Bob. When Bob applies the second Hadamard gate, the qubit then has an amplitude of R for being 0 and an amplitude of -R for being 1, and thus what should have been deterministic is now random noise for Bob.
Yet, this description only works from Eve's point of view. From Alice and Bob's point of view, neither of them measured the particle in transit, so when Bob received it, it still is probabilistic with an equal chance of being 0 and 1. So why does Bob still predict that interference effects will be lost if it is still probabilistic for him?
Because when Eve interacts with the qubit, from Alice and Bob's perspective, it is no longer valid to assign a state vector to the qubit on its own. Eve and the qubit become correlated with one another. For Eve to know the particle's state, there has to be some correlation between something in Eve's brain (or, more directly, her measuring device) and the state of the particle. They are thus entangled with one another and Alice and Bob would have to assign the state vector to Eve and the qubit taken together and not to the individual parts.
Eve and the qubit taken together would have an amplitude of R for the qubit being 0 and Eve knowing the qubit is 0, and an amplitude of -R for the qubit being 1 and Eve knowing the qubit is 1. There are still interference effects, but only for the whole system taken together. Yet, Bob does not receive Eve and the qubit taken together. He receives only the qubit, so this distribution is no longer applicable to the qubit alone.
He instead has to do a partial trace to trace out (ignore) Eve from the equation to know how his qubit alone would behave. When he does this, he finds that the probability distribution has changed to 0.5 for 0 and 0.5 for 1. In the density matrix representation, you will see that the density matrix has all zeroes for the coherences. This is a classical probability distribution, something that cannot exhibit interference effects.
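Here is the same worked example as a minimal numpy sketch (my own illustration of the math above): without Eve, the second Hadamard restores |1> exactly; with Eve modeled as an entangled second system, tracing her out leaves Bob with a maximally mixed state, all coherences zero, which no Hadamard can undo.

```python
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
ket0, ket1 = np.array([1.0, 0.0]), np.array([0.0, 1.0])

# No Eve: |1> -> Hadamard -> amplitudes (R, -R) -> second Hadamard -> |1> again.
print(H @ (H @ ket1))                            # [0. 1.]

# With Eve: her measurement correlates her memory with the qubit,
# giving the entangled state (|0,0> - |1,1>) / sqrt(2)  (qubit ⊗ Eve).
joint = (np.kron(ket0, ket0) - np.kron(ket1, ket1)) / np.sqrt(2)
rho = np.outer(joint, joint)                     # 4x4 joint density matrix

# Bob only gets the qubit: partial trace over Eve's subsystem.
rho_qubit = rho.reshape(2, 2, 2, 2).trace(axis1=1, axis2=3)
print(rho_qubit)                                 # [[0.5 0.] [0. 0.5]] -- coherences gone
print(H @ rho_qubit @ H)                         # unchanged: Bob's Hadamard can't restore |1>
```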
Bob simply cannot explain why his qubit loses its interference effects by Eve measuring it without Bob taking into account entanglement, at least within the framework of quantum theory. That is just how the theory works. The explanation from Eve's perspective simply does not work for Bob in quantum mechanics. Reducing the state vector simultaneously between two different perspectives is known as an objective collapse model and makes different statistical predictions than quantum mechanics. It would not merely be an alternative interpretation but an alternative theory.
Eve explains the loss of coherence due to her reducing the state vector due to seeing a definite outcome for the qubit, and Bob explains the loss of coherence due to Eve becoming entangled with the qubit which leads to decoherence as doing a partial trace to trace out (ignore) Eve gives a reduced density matrix for the qubit whereby the coherence terms are zero.
One-time pads fascinate me. Ancient yet uncrackable tech.
You can break elliptic curve cryptography with quantum computers; Shor's algorithm solves the elliptic curve discrete logarithm problem just as it handles factoring. Post-quantum cryptography is instead mostly based on something called the lattice problem, an approach known as lattice-based cryptography.
Ah, my bad then.
Extra quantum steps
Sabine Hossenfelder has a PhD in Physics and gives some good information about what quantum internet is, and what it is not: https://www.youtube.com/watch?v=u-j8nGvYMA8
Basically, this is not a means of transferring data faster than the speed of light.
It does allow for an ultra-secure connection, but that connection is so fragile and susceptible to noise that it would not be useful for transferring any large amounts of data.
Most likely this would be used for exchanging keys/tokens which could then be used with a more reliable connection for transmitting larger amounts of data.
Personally, I think there is a much bigger issue with the quantum internet that is often not discussed and it's not just noise.
Imagine, for example, I were to offer you two algorithms. One can encrypt things so well that it would take a hundred trillion years for even a super-advanced quantum computer to break the encryption, and it has almost no overhead. The other is truly unbreakable even given infinite time, but it has so much overhead that it will cut your bandwidth in half.
Which would you pick?
In practice, there is no difference between an algorithm that cannot be broken for trillions of years, and an algorithm that cannot be broken at all. But, in practice, cutting your internet bandwidth in half is a massive downside. The tradeoff just isn't worth it.
All quantum "internet" algorithms suffer from this problem. There is always some massive practical tradeoff for a purely theoretical benefit. Even if we make it perfectly noise-free and entirely solve the noise problem, there would still be no practical reason at all to adopt the quantum internet.