Classiq Quantum AI is self-correcting.
This is not your grandfather's LLM.
The Dracarys Award-winning Classiq Quantum AI can purportedly translate papers to code. I had not done that during my initial testing, so I decided to give it a try. I always start by tossing a softball, because you can’t hit a curveball if you can’t hit a softball, so I gave it “Polynomial-Time Algorithms for Prime Factorization and Discrete Logarithms on a Quantum Computer.” At least it would be easy to verify if it works or not.
Shor, I can do it.
Classiq Quantum AI recognized Shor’s algorithm. In fact, all of the AIs did. The downside to this is that you have to verify that each AI is at least claiming to translate the paper and not cheating by using SDKs or classical shenanigans.
Classiq has its own library, which the AI insisted it wasn’t using. The other AIs all defaulted to Qiskit. I told them to switch to Qrisp to make it harder for them to cheat.
ChatGPT tapped out immediately.
Perplexity alternated cheating with Qrisp tutorials and classical shenanigans. It eventually declared SUCCESS, despite its code generating errors, blaming my Qrisp installation for those errors.
Grok first tried classical shenanigans, then it tried stealing from Qrisp’s website, and then it tapped out.
Gemini relied heavily on classical shenanigans but eventually output circuits to factor 15 and 21. The downside is that you have to endure a lot of grief to get 15 to work, and then you have to endure it again to get to 21. It cannot execute or simulate the code in any way, so it estimates the output using classical shortcuts.
Classiq Quantum AI initially resembled Gemini’s approach in some ways. It created a shell with placeholder code, which was detectable because it failed. It then added code, but it then used error handling with classical shenanigans to make it look like it was working when it wasn’t.
No, I really CAN do it.
After nudging it further, Classiq Quantum AI eventually got into a groove. It was taking a while, and that’s actually when I started using the other AIs for comparisons. Upon closer inspection, I noticed that Classiq Quantum AI was debugging itself. In other words, it took away the grief that I attributed to Gemini.
With Gemini, you can copy-and-paste the code. Run it. Give Gemini the errors. It’ll give you corrections. And you iterate waaaaay too many times, often without a successful outcome. Classiq Quantum AI was checking its code automatically and troubleshooting itself without any input. Most of what I saw were the sort of Python issues that are common with AIs. Anyway, it eliminated countless rounds of copy-and-paste, and I appreciate that.
A word to the wise: Classiq Quantum AI continuously asks for permission to do everything, so make sure auto-approve is turned on. It still asks for approval to proceed after 20 auto-approvals, but to allow it to continue autonomously, that’s fine.
Hey, I did it!
I noticed Classiq Quantum AI got stuck in a loop, so I cancelled it. It then figured out why it got stuck in that loop and fixed the problem. It then proceeded to create circuits to factor 15, 21, and then 35. Interestingly, it assumed that I would want to verify the functionality, so it automatically executed each of the circuits with the Classiq simulator. It didn’t just execute the algorithm, for that matter, it also drew the quantum circuits, sort of. That visualization could be better. It also automatically included gate counts, circuit analysis, and a Quantum Volume (QV) estimate, which is the first I’ve seen anywhere.
Let’s do this for real.
It looked like Classiq Quantum AI eventually translated the paper into code. Had it cheated and peeked into Classiq’s library, it shouldn’t have taken so long to troubleshoot. But its Python code was messed up, as AIs are prone to generate, and it looked like it troubleshot its way to good results. Again, that was using the Classiq simulator.
I asked Classiq Quantum AI to execute on an “open” IBM backend. It returned “API configuration issue” for each one, including the simulators. That’s possibly my fault, as I’ve historically used Classiq to synthesize circuits for export to OpenQASM, and I don’t recall ever configuring anything with my IBM credentials. I then asked it to execute on any free backend, and they all require authentication, account setup, credentials, permissions, or API setup.
My objective this round was to translate a paper to code, and it looks like I can check that box, so maybe for my next round I will test execution on real hardware.
Go big or go home.
I asked Classiq Quantum AI what the largest number is it can factor. It didn’t know, so it proposed testing larger and larger numbers. It finally estimated that it could factor 8,633 using 28 qubits with the Classiq free simulator. However, even though the simulator has a 28-qubit limit, the circuit actually required 34 qubits. Its actual limit appears to be 1,517 (37 x 41). It output an analysis, claiming success with 15, 77, 143, 323, 667, 1003, and 1517, before exceeding the simulator’s capacity with 8633.
Conclusion
The key takeaway from this experiment is that the Classiq Quantum AI still needs a better name, because if they take too long in doing that, I’m going to start calling it something else myself.
No, seriously, the key takeaway is that the Classiq Quantum AI is self-correcting. It finds the errors in its responses and automatically begins troubleshooting without your input. Maybe the key takeaway should be that it translated a paper to code, at least it looks like it did, but vibe coding is so infuriating that this autonomy is too refreshing to play second fiddle.
Filed under: Quantum Computing • Artificial Intelligence • Algorithm Development
Image generated by OpenAI’s DALL·E.




I so appreciate how you take highly complex quantum coding, actually implement it yourself, it can then describe it so us non-coders can appreciate the process and results. Thanks Brian!