Wait, the article mentions that Shor's algorithm is for factoring (which is what I understood), but then it talks about elliptic curve cryptography? I thought ECC didn't use the same mathematical foundations as RSA, and RSA has been slowly phased out anyway...
Quite the contrary. Shor's algorithm actually works better for the shorter keys of ECC. The rule of thumb is about 2n qubits for an n-bit RSA key and 6n qubits for an n-bit ECC key. I believe it has something to do with how it applies to the hidden subgroup problem of finite abelian groups rather than factorisation, but I am really not a cryptographer nor especially mathsy. I just asked the same question you did, and someone in the know pointed me to that.
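To put rough numbers on that rule of thumb (a back-of-the-envelope sketch only; the 2n and 6n constants are just the figures quoted above, not real resource estimates):

```python
# Back-of-the-envelope logical-qubit estimates for Shor's algorithm, using the
# rule of thumb quoted above (~2n qubits for an n-bit RSA modulus, ~6n qubits
# for an n-bit ECC key). Illustrative only, not actual resource estimates.
def rsa_qubits(modulus_bits: int) -> int:
    return 2 * modulus_bits

def ecc_qubits(key_bits: int) -> int:
    return 6 * key_bits

print("RSA-2048:", rsa_qubits(2048), "logical qubits")  # ~4096
print("ECC P-256:", ecc_qubits(256), "logical qubits")  # ~1536
```

Despite the larger constant, the much shorter ECC keys mean fewer qubits overall, which is the point being made above.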
This has been used for centuries. It is not a new invention.
Hundreds of years ago, it was not unusual to publish an encrypted solution of some mathematical problem, in order to establish priority without disclosing the algorithm that was used.
Of course, at that time very simple encryption methods were used, for instance an anagram of the solution was published (i.e. encryption by letter transposition).
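A modern version of the same trick is a hash commitment rather than an anagram. A minimal sketch (the "solution" text here is just a placeholder):

```python
# Modern equivalent of the anagram trick: publish a commitment now, reveal the
# solution (plus the salt) later to prove priority. The solution text below is
# only a placeholder.
import hashlib
import secrets

solution = b"my secret result, to be revealed later"
salt = secrets.token_bytes(16)                        # prevents brute-force guessing
commitment = hashlib.sha256(salt + solution).hexdigest()
print("publish now:", commitment)
# Later: publish salt and solution; anyone can recompute the hash to check it.
```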
But the algorithm still isn't practical on existing quantum computers, or ones that are going to be around any time soon, so there's no reason not to publish in full.
If only AI safety research had a mechanism this clear. "We have proof that building the machine will kill everybody, so get to work making a provably safe version."
Except that you have the logic backwards. It's an argument that something ("safe" general purpose AI) can't exist rather than that it has to.
People want AI to be able to do every good thing but no bad thing, which is impossible twice over. First, false positives and false negatives trade off against each other, so a general purpose AI capable of approximating all the good things is going to be biased heavily towards being able to do things in general, and therefore able to do many things that are bad. Second, "good" and "bad" aren't things that anybody can agree on: some people will demand that it must do X while others demand that it not do X (e.g. "help the rebels win the war"), which means someone is inherently going to be unsatisfied, and it's not something that can sensibly be regarded as everyone working towards a common goal.
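The first point is the familiar classifier trade-off. A toy sketch with made-up harm scores, just to show that moving a single refusal threshold trades one error for the other:

```python
# Toy illustration of the false-positive / false-negative trade-off above:
# moving a single "refusal threshold" cannot cut harmful actions allowed
# without also blocking more legitimate requests. Harm scores are made up.
import random

random.seed(0)
good = [random.gauss(0.3, 0.15) for _ in range(1000)]  # scores of benign requests
bad = [random.gauss(0.7, 0.15) for _ in range(1000)]   # scores of harmful requests

for threshold in (0.4, 0.5, 0.6):
    blocked_good = sum(s >= threshold for s in good)   # false positives
    allowed_bad = sum(s < threshold for s in bad)      # false negatives
    print(f"threshold {threshold}: blocked {blocked_good} good, allowed {allowed_bad} bad")
```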
> If the paper's authors had chosen to release their circuit, they would certainly have been recognized for the important progress they made in the science of quantum computing. Other researchers would have gone on to build on their work, and the entire scientific community would be richer for it.
... and the world could well have been less safe. There is a pretty strong reason not to release insights which could be used as an attack on public key cryptography. We already know the fix anyway: post-quantum cryptography algorithms.
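For what "the fix" looks like in practice, here is a minimal key-encapsulation sketch assuming the open-quantum-safe liboqs-python bindings are installed (the exact algorithm identifier, "Kyber768" vs "ML-KEM-768", depends on the liboqs version):

```python
# Minimal post-quantum KEM handshake sketch using the liboqs-python bindings
# (assumed installed). The algorithm name may need to be "ML-KEM-768" on
# newer liboqs versions.
import oqs

alg = "Kyber768"
with oqs.KeyEncapsulation(alg) as client, oqs.KeyEncapsulation(alg) as server:
    public_key = client.generate_keypair()                       # client publishes this
    ciphertext, server_secret = server.encap_secret(public_key)  # server derives a shared secret
    client_secret = client.decap_secret(ciphertext)              # client derives the same secret
    assert client_secret == server_secret
```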
Sometimes scientific curiosity has to step back when it comes to potentially dangerous research. Scott Aaronson recently [1] compared this case to when scientists stopped publishing on nuclear fission research because the possibility of developing an atomic bomb became concrete:
> When I got an early heads-up about these results—especially the Google team’s choice to “publish” via a zero-knowledge proof—I thought of Frisch and Peierls, calculating how much U-235 was needed for a chain reaction in 1940, but not publishing it, even though the latest results on nuclear fission had been openly published just the year prior.

1: https://scottaaronson.blog/?p=9665
They're closely related: the problems underlying ECC and RSA are both instances of the hidden subgroup problem for finite abelian groups.
It kinda does; it just uses them differently.
The basis here is the discrete logarithm in a specific group (an elliptic curve over a finite field, or the multiplicative group modulo n).
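Concretely, the shared structure is period finding. A toy classical sketch (exponential time, tiny numbers, purely to show the shape of what Shor's quantum period-finding step delivers):

```python
# Classical, exponential-time illustration of the shared structure: Shor breaks
# RSA by finding the order of a random element in the multiplicative group
# mod n (period finding), and breaks ECC/discrete logs the same way in a
# different group. Tiny toy numbers only.
from math import gcd

def multiplicative_order(a: int, n: int) -> int:
    """Smallest r > 0 with a**r % n == 1 (brute force, illustration only)."""
    x, r = a % n, 1
    while x != 1:
        x = (x * a) % n
        r += 1
    return r

n, a = 15, 7                       # toy "RSA modulus" and a base coprime to it
r = multiplicative_order(a, n)     # r = 4; the quantum part of Shor finds this
print(gcd(a ** (r // 2) - 1, n), gcd(a ** (r // 2) + 1, n))  # 3 5: the factors
```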
"God doesn't exist" is essentially incoherent. God is the perfect being, and if he didn't exist, he wouldn't be perfect.
I think the logical mistake is obvious.