The Facts Behind DeepMind's AlphaFold CASP14 Breakthrough Announcement


Deep Learning is a pattern-matching algorithm. Quantum Computing is an ab initio method: it starts from scratch and is supposed to solve any problem of a specific type thrown at it. Congratulations to the DeepMind team. But apart from the vendor speak and jargon, this is the announcement from the CASP14 organizers:

"During the latest round of the challenge, DeepMind’s AlphaFold program has determined the shape of around two thirds of the proteins with accuracy comparable to laboratory experiments*. AlphaFold’s accuracy with most of the other proteins was also high, though not quite at that level."

Automatski believes the solutions to the world's most pressing problems cannot be found using function approximation or pattern matching, which is what Deep Learning is in its entirety. We simply cannot train on known structures and try to construct new solutions from existing patterns. That's NOT how we solve problems in a space of gazillions of unknowns. We need an ab initio approach that can solve any problem thrown at it, amongst gazillions of possible problems at the scale of this universe. That and only that will define the next millennium of mankind. – Amen!

Here is the press release from CASP14 organizers

And here is the video from the DeepMind Team

It's a free world. Please believe whatever you want to believe. Automatski is only in the business of dealing with facts and millennium inventions.

Think about it. 😊

Finally, The Answer To What Is Gravity?


In the 1990s, Automatski proved that the universe is a simulation, and not only that, it is an O(N), i.e. linear-order, simulation.

Automatski also discovered the underlying algorithm of the functioning of the universe, which we later developed into a Non-Deterministic Calculus.

This was the basis of our Second Generation Quantum Computers, which have been in production since 2014 CE, have a capacity of a billion infinite-precision qubits and gates, and also run in O(N) linear time and complexity.

While at Automatski we have understood Gravity for quite some time, we always struggled to describe it with an analogy. What analogy could that even be?

In the last decade we took a diversion and focused on Loop Quantum Gravity, aka Quantum Gravity, in an attempt to create a second, more regular Theory of Everything. The first one was the one we developed in the 1990s using our own theories and simulations.

But then back to the main question – What is Gravity Exactly?

We all know that Gravity is a real weakling – 10^40 times weaker than the electromagnetic force that holds atoms together. Although the other forces act over different ranges, and between very different kinds of particles, they seem to have strengths that are roughly comparable with each other.

Have we been shying away from accepting the answer all this while? Outside Automatski, people have been desperately trying to fit Gravity into the Standard Model using Gravitons.

So if Automatski knows the answer – say it. Say it once and for all.

So without further ado, the answer is this…

First – What Gravity is NOT???

It's NOT A FORCE (in the sense of a fundamental force in the scheme of the universe).

Second – Then What Is It?

It is a by-product of 3D Space and Non-Determinism. But what is the best analogy for it? It's an EFFECT OF THE DENSITY GRADIENT IN 3D SPACE. It's like a slope on the road along which things move.


That's it. That's all there is to it. Nothing more, nothing less.

Answering this question, amongst hundreds of similar questions, was one of the life goals of the founder of Automatski. While we have known the answer for quite some time now, today we formally and publicly publish the answer.

How Fast & Powerful Is The Computer Running The Universe?


Speed of a Computer Processor (Artist's View)

In the 1990s, Automatski proved that the universe is a simulation, and not only that, it is an O(N), i.e. linear-order, simulation.

Automatski also discovered the underlying algorithm of the functioning of the universe, which we later developed into a Non-Deterministic Calculus.

This was the basis of our Second Generation Quantum Computers, which have been in production since 2014 CE, have a capacity of a billion infinite-precision qubits and gates, and also run in O(N) linear time and complexity.

So, can we answer the question – How powerful is the computer running the universe?

I think we can. And actually, estimating it is extremely simple. Let's see…

The Number of Particles in the Universe is 10^82

The minimum time (or a tick of the universe) is around the Planck time, i.e. 5.39121e-44 seconds

And we use a constant w (the Window of a Particle), a small number like 1,000 or 1,000,000 in comparison to the universe

The speed or power of the computer simulating the universe can be obtained by multiplying all of the above, which comes to about w * 10^127 ops

The final answer should be around ~10^130 ops

*** ops means operations per second. In the case of the universe, these are Quantum Operations.

If we compare this to the speed of our best processors: ~1 Teraflops = 10^12 ops
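The multiplication is easy to check in a few lines. This sketch uses only the two numbers stated above and leaves out the window factor w; note that the plain product of the two stated numbers comes to roughly 2e125 per unit of w, with the quoted 10^127 and ~10^130 presumably folding in w and rounding.

```python
# Back-of-the-envelope check of the estimate above (window factor w left out).
PARTICLES = 1e82              # stated number of particles in the universe
PLANCK_TIME = 5.39121e-44     # seconds per "tick" of the universe

ticks_per_second = 1.0 / PLANCK_TIME            # ~1.85e43 ticks every second
ops_per_second = PARTICLES * ticks_per_second   # one op per particle per tick

print(f"{ops_per_second:.2e}")  # ~1.85e+125 (before multiplying by w)
```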

So that's it. That's all there is to it.

Answering this question, amongst hundreds of similar questions, was one of the life goals of the founder of Automatski. While we have known the answer for quite some time now, today we formally and publicly publish the answer.

Breakthrough – Automatic Quantum Error Correction


The biggest problem with Quantum Computers is Error Correction, besides creating high-fidelity Qubits and Gates and long coherence times.

People like Professor Gil Kalai say that Error Correction is NOT an engineering problem that requires more research or engineering effort to fix. It is inherent to the nature of Quantum Systems and cannot be fixed.

Basically he is saying that it is impossible to create a Physical Hardware Quantum Computer.

Automatski already realises this: creating a Physical Hardware Quantum Computer has maybe a 1-in-a-billion possibility of success, and we are fighting against the laws of the universe itself in trying to create one.

Automatski has already built the world's first Billion Infinite-Precision Qubit Quantum Computer, in 2014 CE.

Today we would like to announce that we have achieved something very novel in Quantum Computing.

What if the Quantum Computer, during its execution, was able to figure out whether it had made errors? And also repeat the Unitary Gate Operations as many times as required to fix the errors?

If we could do that, we could theoretically have the unparalleled possibility of creating a completely error-free Quantum Computer.
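As a purely illustrative sketch of the idea described above (not Automatski's actual mechanism, which is not public): a gate whose failure is detectable, i.e. heralded, can simply be retried until it succeeds. The `noisy_gate` model and its failure probability are invented for illustration only.

```python
import random

def noisy_gate(state, p_fail=0.3):
    """A hypothetical gate that, with probability p_fail, fails in a
    *detectable* way, leaving the state untouched. Returns (state, ok)."""
    if random.random() < p_fail:
        return state, False              # heralded failure: error detected
    return [state[1], state[0]], True    # success: a bit-flip (X-like) gate

def repeat_until_success(state, max_tries=100):
    """Re-run the gate until the built-in error check passes."""
    for _ in range(max_tries):
        state, ok = noisy_gate(state)
        if ok:
            return state
    raise RuntimeError("gate kept failing")

random.seed(0)
print(repeat_until_success([1, 0]))  # -> [0, 1]
```

The failed attempts cost time but never corrupt the state, which is what makes the retry loop converge to an error-free result.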

And today Automatski would like to state that we have done just that.

MTV Enjoy! We finished the race to the Next Millennium 20+ years ago!

Hypercomputers & Hypercomputations

Hypercomputation: computing more than the Turing machine

Hypercomputation or super-Turing computation refers to models of computation that can provide outputs that are not Turing computable. For example, a machine that could solve the halting problem would be a hypercomputer; so too would one that can correctly evaluate every statement in Peano arithmetic.

Basically, a system able to perform countably infinite computational steps within a finite time.

In layman's terms, Hypercomputing and Hypercomputers are way beyond any classical computer we have today, or any quantum computer. It borders on 'infinite', 'god'-like powers of computation. Which is why everyone is interested in it.
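The "countably infinite steps within a finite time" idea can be made concrete with the classic Zeno-style schedule used in discussions of accelerated machines: if step n takes 2^-(n+1) seconds, the total run time converges, so infinitely many steps fit inside one second, on paper at least.

```python
# Zeno schedule: step n runs in 2**-(n+1) seconds.
# Total time = 1/2 + 1/4 + 1/8 + ... which converges to 1 second.
def total_time(n_steps):
    return sum(2.0 ** -(k + 1) for k in range(n_steps))

print(total_time(10))  # -> 0.9990234375
print(total_time(60))  # ~1.0: the infinite sum converges to exactly 1 second
```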

Various theoretical models exist for such hypercomputation, aka Hypermachines or Hypercomputers:

  • O-machines
  • TM’s with initial inscription
  • Coupled TM’s
  • Asynchronous networks of TM’s
  • Error prone TM’s
  • Probabilistic TM’s
  • Infinite state TM’s
  • Accelerated TM’s
  • Infinite time TM’s
  • Fair non-deterministic TM’s

O-machines, or Oracle machines, are fantasy machines with an absolutely magical oracle which knows all the answers, but nobody can explain how it does so. So from Automatski's standpoint, O-machines are a No Go!

And since we will never be able to physically build a machine with 'infinite' state or time, or infinite anything, these machines are great for theoretical analysis and thought experiments. But again, a No Go!

Similarly, Probabilistic TM's lead to Exponential Complexity computations, and are a No Go!

Accelerated TM's are basically built on some super-fast physical process. But however fast it is, it will never be able to solve Exponential Complexity problems. Hence this too is a No Go!


So that leaves us with (things we are pursuing at Automatski):

  • Coupled TM’s
  • Asynchronous networks of TM’s
  • Error prone TM’s
  • Fair non-deterministic TM’s

These are all promising concepts, along with our other efforts on Quantum Gravity Computers.

Someone will ask: if we have a QGC, why do we bother with these four? The answer is that it is always beneficial, and it is also what we do at Automatski: pursue multiple approaches together. It gives us multiple perspectives, reduces our risks, and drives breakthrough inventions and innovation.

Immense Progress Under The Hood


It's been a long and tiring journey.

While our Quantum Computer(s) went into production in 2014, and we achieved Quantum Supremacy (in 2014), developing Millennium Inventions is NOT a project with a definitive beginning and a definitive end. The roots of all our inventions go back to the early 1990s, and we continuously work on improving our inventions.

Firstly, The Most Important Thing – Price (Money)

When we launched our Quantum Computers in 2014, assuming the capability required to break RSA-2048, our pricing was about ~US$1 billion per quantum computer.

3-4 years ago we reduced it to US$ 150 million per quantum computer for the same capability.

And recently we have been able to reduce the pricing to double digit millions of dollars.

We have really commoditized Quantum Computers for the masses. Our vision is – One Quantum Computer For Every Man On The Planet.

Secondly, The Capability

Our first Quantum Computer had a complexity of O(X^3), and we could barely offer scalability in the range of 1,000-5,000 Qubits.

But then we implemented the Algorithm of The Universe, and recreated our Quantum Computers in O(N), just like this universe. We also proved that the universe is a simulation, and a linear-order simulation at that, and that the Extended Church-Turing Thesis holds true.

Thirdly, The Costs

We needed a data center with 5 Racks. 3-4 years ago we reduced it to 2-3 Racks. And now we can do the same in 1 Rack. (1 Rack = 42 U)

Fourthly, Efficiency and Accuracy

While we can do Quantum Computations with 5,000-decimal-place precision, let's talk about what we can do on a standard-issue laptop.

Our initial capability on a laptop was 4 decimal places of precision. And that gave us a Gate Fidelity of 99.998%, i.e. about 20 errors in 1 million.

Then we were able to do 8 decimal places of precision. And that brought the gate error rate down to about 4 errors in 1 million.

Now we can do 20 decimal places of precision, which doesn't give any errors even for circuits with 100,000 Gate Depth. Phew! Amazing, right? Yeah, we think so too.

*** The computational effort is polynomial in the number of decimal places of "precision".
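For reference, the fidelity-to-error-rate arithmetic behind the figures above is simple. This is a sketch assuming independent gate errors; it also shows why deep circuits demand such low error rates.

```python
# Convert gate fidelity to expected errors per million gate operations,
# and estimate the chance a whole circuit of a given depth runs cleanly.
def errors_per_million(fidelity):
    return (1.0 - fidelity) * 1_000_000

def success_prob(fidelity, depth):
    # probability that `depth` consecutive independent gates all succeed
    return fidelity ** depth

print(errors_per_million(0.99998))     # ~20, matching the 4-decimal figure
print(success_prob(0.99998, 100_000))  # ~0.135: tiny error rates still bite at depth
```

The second number shows why "no errors at 100,000 gate depth" requires per-gate error rates far below 2e-5.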

P.S. Oh, BTW, we absolutely don't use any GPGPUs 🙂


In layman's terms: we can crack RSA with 500,000,000 bits. Yes, that's 500 million bits. Not just 2048.

Come join us in our journey to define the next millennium.


Automatski’s Quantum Gravity Computer


>>> What is a Quantum Gravity Computer?

It's a computer that is infinitely more powerful than even a Quantum Computer or any other Classical Computer. It uses Quantum Computing and Quantum Gravity (General Relativity). It eliminates Causality and Time, and hence is infinitely more powerful. Or so it is believed; but really, we have only one way of finding out: by building one and taking it for a spin.

>>> How does a Quantum Gravity Computer work? Can you share some more details?

Well, it combines Quantum Computing and Quantum Gravity. So we all understand that. And we have to understand that in a Quantum Gravity Computer, the effects of Quantum Gravity are particularly relevant to the way it functions, and are the source of its infinite power.

So we start with a Gate-Based Quantum Computer, which basically has Qubits, Gates and Circuits. But all this is NOT deterministic; it has Quantum Uncertainty. So we have the concept of an Environment within which this Quantum Computer functions. This introduction of an Environment sets it apart from a regular Quantum Computer.

Now we know that in a Quantum Gravity Computer there will be NO FIXED Causal Structure, which means the Quantum Computer will NOT proceed in one direction from Cause(s) to Effect(s). This is also described as: the Causal Structure is 'Indefinite'.

A regular Classical or Quantum Computer involves its states going through time steps. But a Quantum Gravity Computer HAS NO CONCEPT of time; a sequence of time steps doesn't make any sense in an indefinite causal structure.

So the model of computation in a Quantum Gravity Computer, in the absence of a definite causal structure, is built using a framework called the 'Causaloid Formalism', which was developed primarily for correlating data in exactly such situations, i.e. the absence of definite causal structures.

So the first building block is obviously 'Quantum Gates'. And the second building block is a 'causaloid' or 'lambda' (a mathematical object containing information about the causal connections between different spacetime regions). A Quantum Gravity Computer program is given/defined by the pair {Lambda, G}, where G is a set of gates. These Quantum Gates exist in regions within an 'Environment', and affect each other through causaloids.

To summarise: we have Quantum Gates which affect each other depending on the definitions of Lambda. Now, using just this as a computational resource, we will try to solve problems which even Quantum Computers cannot.
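To make the {Lambda, G} pair concrete, here is a purely illustrative data-structure sketch in Python. The region names, gate labels and lambda entries are hypothetical placeholders invented for this example; it says nothing about how such a program would actually be executed.

```python
from dataclasses import dataclass, field

@dataclass
class QGCProgram:
    """A {Lambda, G} program as plain data (illustrative only)."""
    gates: dict = field(default_factory=dict)  # region -> gate label (the set G)
    lam: dict = field(default_factory=dict)    # (region, region) -> causal-connection data (Lambda)

prog = QGCProgram()
prog.gates["R1"] = "H"      # a gate placed in hypothetical spacetime region R1
prog.gates["R2"] = "CNOT"
# Lambda records how regions influence each other -- with no fixed direction,
# reflecting the indefinite causal structure described above:
prog.lam[("R1", "R2")] = {"strength": 0.7}
prog.lam[("R2", "R1")] = {"strength": 0.7}

print(sorted(prog.gates))  # -> ['R1', 'R2']
```

Note that the influence entries are deliberately symmetric: unlike a circuit diagram, nothing in Lambda forces R1 to come "before" R2.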

>>> Open Questions

The reason behind the conceptualization of Quantum Gravity Computers was that they could be infinitely more powerful than even Quantum Computers. But there is significant debate as to whether that will be the case.

One argument is that Quantum Gravity can be simulated by Quantum Computers. So there might not be anything significant that Quantum Gravity Computers can do which regular Quantum Computers can’t.

Quantum Gravity is the only missing piece of our Theory of Everything, which humans have been trying to build for centuries: the ONE theory which will explain the entire universe, one that will combine general relativity and quantum physics.

One way to find out whether it will be infinitely more powerful or not is to actually build it and see. And that's the approach we have taken here at Automatski, instead of debating endlessly.

>>> Why can’t anyone else build it? Why only Automatski? Are you trying to pull a marketing hoax?

Like we said, a Quantum Gravity Computer builds on a Quantum Computer and Quantum Gravity. And nobody, I mean nobody, in the world today has a Quantum Computer worth anything outside Automatski. Automatski created a billion-qubit infinite-precision quantum computer in 2014. And recently Automatski has also done billion-qubit Planck-scale tests of Quantum Gravity and found a great correlation between theory and calculations. Which, again, nobody in the world has been able to do.

So this puts Automatski in a uniquely advantaged position (of a 1,000-year lead) to try and build a Quantum Gravity Computer, which nobody else in the world is in any position to even attempt reasonably, other than theoretically on paper.

The bottom line is that this is NOT a marketing hoax like the millions of hoaxes one comes across in industry and academia today. Automatski almost always makes a public statement only after making significant progress in terms of inventions and achievements. Nothing we say is baseless; it is based on scientifically verified progress.

>>> Why has there been a delay in putting it into Production if it was ready years ago?

Yes our underlying Quantum Gravity Computer was ready years ago. But there is a problem.

Problem Formulations

"A computer is a physical device that can give correct answers to well formulated questions."

At Automatski we don't have a Universal representation of Problem Formulations which we can feed into our Quantum Gravity Computer. Universal in the sense that it can represent all the problems we might want our Quantum Gravity Computer to solve for us, and all the problems which it is capable of solving.

Once we have such a universal formulation, we have to figure out ways to represent useful problems, and then solve them using our Quantum Gravity Computer. And then, and only then, can we benchmark and release our Quantum Gravity Computer for our customers.

What is the use of having the world's best Mathematical Formulation of a Quantum Gravity Computer if it cannot solve any and all useful and valuable problems?

Let's try to understand it in another way.

Let's say we want to solve the notorious NP-hard Travelling Salesman Problem, and we have a Quantum Gravity Computer which implements Lambdas and Quantum Gates. In essence, a theoretically perfect Quantum Gravity Computer.

But the problem is that we have no way of representing the TSP in terms of Lambdas and Quantum Gates. And furthermore, we don't have any generic way of representing all problems of interest to us in terms of Lambdas and Quantum Gates.

*** Please note – The Quantum Gravity computer is ready and can solve all problems represented as Lambdas and Quantum Gates.

Conclusion: We need to develop our theories a lot more before we can enjoy the fruits of our labor. 🙂


Quantum Time Crystals

Time crystals—how scientists created a new state of matter

>>> What is a [Quantum] Time Crystal?

A time crystal or space-time crystal is a state of matter that repeats in time, as well as in space. Normal three-dimensional crystals have a repeating pattern in space, but remain unchanged as time passes. Time crystals repeat themselves in time as well, leading the crystal to change from moment to moment.

It's considered a new state of matter.

See this short introductory video

See this longer more technical video

>>> History

The idea of a quantized time crystal was first described by Nobel laureate Frank Wilczek in 2012. In 2014 Krzysztof Sacha predicted the behavior of discrete time crystals in a periodically-driven many-body system and in 2016, Norman Yao et al. proposed a different way to create discrete time crystals in spin systems.

From there, Christopher Monroe and Mikhail Lukin independently confirmed this in their labs. Both experiments were published in Nature in 2017.

In 2019 it was theoretically proven that a quantum time crystal can be realized in isolated systems with long range multi-particle interactions.

>>> How were they first created?

Folks at the University of Maryland took 10 Ytterbium atoms and used a laser to create an electromagnetic field around those atoms, which entangled the atoms in repeating patterns, before blasting them with a second laser which jostled the atoms.

And as predicted, once energy was introduced into the system, it never stopped: after a while it started moving in an oscillating pattern which was not created by the laser in the first place.

But a team from Harvard did it in a totally different way. They used Nitrogen-Vacancy Centers, which are flaws in diamonds.

Since then people have created Time Crystals in multiple other ways.
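The signature behaviour in these experiments, a subharmonic (period-doubled) response, can be sketched in a few lines. This is a cartoon of a single ideal spin kicked once per drive period, not a real many-body model.

```python
# Toy discrete-time-crystal cartoon: one spin flipped by an ideal pi-pulse
# once per drive period. The magnetization then repeats every *two* drive
# periods, i.e. at half the drive frequency -- the period-doubling signature.
def drive(m, periods):
    history = []
    for _ in range(periods):
        m = -m               # each drive period flips the spin
        history.append(m)
    return history

h = drive(+1, 6)
print(h)  # -> [-1, 1, -1, 1, -1, 1]: response has period 2, drive has period 1
```

In the real experiments the striking part is that this subharmonic response is rigid: it survives even when the flip pulses are imperfect, which the cartoon above does not capture.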

>>> Ultracold & Condensed Matter Physics

We need to create Time Crystals at one ten-thousandth of a degree above absolute zero (0.0001 K). So that's ultracold physics.

And crystals are basically condensed matter [physics].

>>> Broken Time Translation Symmetry & Law of Conservation of Energy

Symmetries in nature lead directly to conservation laws, something which is precisely formulated by Noether's theorem.

The basic idea of time-translation symmetry is that a translation in time has no effect on physical laws, i.e. that the laws of nature that apply today were the same in the past and will be the same in the future. This symmetry implies the conservation of energy.

Noether Theorem

Noether's theorem or Noether's first theorem states that every differentiable symmetry of the action of a physical system has a corresponding conservation law. The theorem was proven by mathematician Emmy Noether in 1915 and published in 1918, after a special case was proven by E. Cosserat and F. Cosserat in 1909
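The time-translation case can be written out in one line. For a Lagrangian L(q, q̇) with no explicit time dependence, the energy E = q̇ ∂L/∂q̇ - L is conserved along any solution of the Euler-Lagrange equation:

```latex
\frac{dE}{dt}
= \frac{d}{dt}\!\left(\dot q\,\frac{\partial L}{\partial \dot q} - L\right)
= \dot q\,\frac{d}{dt}\frac{\partial L}{\partial \dot q}
  + \ddot q\,\frac{\partial L}{\partial \dot q}
  - \frac{\partial L}{\partial q}\,\dot q
  - \frac{\partial L}{\partial \dot q}\,\ddot q
= \dot q\left(\frac{d}{dt}\frac{\partial L}{\partial \dot q}
  - \frac{\partial L}{\partial q}\right)
= 0 .
```

If L depended explicitly on t, an extra ∂L/∂t term would survive and E would not be conserved; the symmetry is an input to the derivation, not a consequence of it.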

>>> Applications

First of all, Time Crystals make a perfect timepiece. We can use them to create clocks that are infinitely more precise than current Atomic Clocks, because of their constant, repeating motion in time despite no external input. Their atoms are constantly oscillating, spinning, or moving first in one direction and then the other. They move endlessly in perfectly repeating patterns in time, at a certain frequency, without any extra external energy input. They could also improve technology such as gyroscopes, and systems that rely on atomic clocks, such as GPS.

The quantum nature of time crystals, which shift from moment to moment in a predictable, repeating pattern, can be used to simulate large, specialized networks, such as communication systems or artificial intelligence.

"In the classical world, this would be impossible as it would require a huge amount of computing resources," said Marta Estarellas, one of the first authors of the paper, from the National Institute of Informatics. "We are not only bringing a new method to represent and understand quantum processes, but also a different way to look at quantum computers." Their goal is to propose real applications by embedding exponentially large complex networks in a few qubits, or quantum bits.

>>> New & Better Topological Quantum Computers

“Time crystals form when arbitrary physical states of a periodically driven system spontaneously break discrete time-translation symmetry.” What the researchers noticed is that when they introduced “one-dimensional time-crystalline topological superconductors” they found a fascinating interaction where “time-translation symmetry breaking and topological physics intertwine—yielding anomalous Floquet Majorana modes that are not possible in free-fermion systems.”

Majorana fermions are particles that are their own anti-particles.

The research was led by Jason Alicea and Aaron Chew from CalTech, as well as David Mross from the Weizmann Institute in Israel.

While studying Majorana fermions, the team observed that it is possible to enhance topological superconductors by coupling them to magnetic degrees of freedom that could be controlled. “Then we realized that by turning those magnetic degrees of freedom into a time crystal, topological superconductivity responds in remarkable ways,” shared Alicea.

One way the phenomenon noticed by the scientists could potentially be exploited is to create more stable qubits, the bits of quantum information in quantum computing. The race to create qubits is at the threshold of bringing on a true quantum technology revolution.

“It’s tempting to imagine generating some useful quantum operations by controlling the magnetic degrees of freedom that intertwine with the topological physics. Or perhaps certain noise channels can be suppressed by exploiting time crystals,” said Alicea.

>>> The Bullshit Explanation

Unlike clocks or any other known objects, time crystals derive their movement not from stored energy but from a break in the symmetry of time, enabling a special form of perpetual motion.

The nonsense explanation given by 3000 BC Pseudo-Scientists is that Time Crystals "spontaneously" break time-translation symmetry. They emphasize the "spontaneous" breaking of time-translation symmetry and the law of conservation of energy, which somehow is OK according to their theories of physics. Hence they imply and assert that their theoretical laws of Time-Translation Symmetry and Conservation of Energy still hold as inviolable laws of physics of our universe.

We all know that this is utter rubbish. Automatski has uncovered the underlying algorithm of the functioning of the universe, and we can state that this is the worst explanation possible: a nonsensical effort to stick to one's archaic, outdated theories even in the face of counter-facts. This is an act of desperation by 3000 BC pseudo-science in the absence of better explanations and theories at their disposal.

>>> Conclusion

Time Crystals are a new state of matter, and the entire physics community is excited about them and their prospective applications, including the possibility of creating Topological Quantum Computers which will be error-free.

But more than that, this is an inflection-point moment in history. We are looking at the possibility of completely new physics, and of having to develop new physics theories to explain our universe (outside Automatski). At Automatski we have pretty much had all this covered since the 1990s. But for the rest of the world, this is a moment for a reality check.

What is Quantum Machine Learning Exactly?


>>> Let's understand: why would anyone use Quantum Computers for Machine Learning?

It's widely believed that Quantum Computers cannot solve NP-hard problems in sub-exponential time, even though that was the primary purpose of inventing quantum computers. These are all the problems humanity has not been able to solve using Classical Computers.

Then, why does one use Quantum Computers? What do we hope to achieve? What benefits can anyone derive from them?

Well, Quantum Computers can deliver 'acceleration' over classical computers for solving the same problems, albeit differently, using Quantum Mechanics principles. Normally we expect quadratic acceleration, or rarely exponential acceleration at best (over Classical Computers).

So that is our intent in using Quantum Computers for Machine Learning. To get acceleration over classical computers. That is to solve machine learning problems many times faster than classical computers.

>>> So, what does Quantum Machine Learning involve?

The Model, Structure & Parameters

With machine learning we do two things together. Firstly, we learn from the data we have. And secondly, for unseen data, we make predictions using what we have learnt. The thing we learn is called 'The Model'.

The model has a structure and parameters. Structure basically means how it is internally designed or connected, its width, depth, layers etc. And parameters basically mean the numerical values used in the model that represent and correspond to the data we feed into it to learn from.

The Quantum Data & State Preparation

It is also quite clear that what we have at our disposal is Classical Data, while what the Quantum Computer can operate on is quite complex due to its quantum nature. For example, we know that 100 Qubits can represent 2^100 combinations, which a quantum algorithm can use.

Hence we have to load and convert the classical data into a more compressed, complicated form for processing by the Quantum Machine Learning Algorithm. This is done by loading and encoding the data to prepare the initial quantum state in terms of Qubits, which is then put through the Quantum Circuit for execution. Please see the second column, 'QML algorithm', in the first diagram above to understand how this step differs from classical machine learning in the first column.

Parametrized Quantum Circuits

Without going into details about how exactly a Quantum Machine Learning Algorithm is represented as Quantum Circuits that can be executed on a Quantum Computer to learn from data, we should note that the circuit corresponding to 'any' quantum machine learning algorithm will have a specific structure, and will have a lot of parameters which we hope to learn from data.

Such circuits are in general called parameterized quantum circuits.
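As a minimal, framework-free illustration (one qubit, plain Python, no quantum SDK assumed): an RY(θ) rotation applied to |0⟩ is a one-parameter "circuit", and the measurable ⟨Z⟩ expectation depends smoothly on the parameter, which is exactly what makes it trainable.

```python
import math

# A one-parameter "circuit": RY(theta) applied to |0>.
# The resulting state is [cos(theta/2), sin(theta/2)], and the expectation
# value of Z is cos(theta) -- the quantity a QML model tunes via theta.
def ry_state(theta):
    return (math.cos(theta / 2), math.sin(theta / 2))

def z_expectation(theta):
    a, b = ry_state(theta)
    return a * a - b * b  # <Z> = |amp0|^2 - |amp1|^2

print(z_expectation(0.0))      # -> 1.0  (still |0>)
print(z_expectation(math.pi))  # -> -1.0 (flipped to |1>)
```

Real parameterized circuits stack many such rotations plus entangling gates over many qubits, but the principle, a circuit whose output is a differentiable function of its parameters, is the same.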

>>> The Overall Workflow

Such solutions are called Hybrid Quantum-Classical Solutions, because the higher-level logic is executed in a Classical Program which delegates the intractable problem-solving functionality to a Quantum Program.

In technical terms, such solutions are also called Variational Quantum Algorithms. The classical program sets some parameters, then executes the parameterized circuit on a quantum computer. It does this many, many times, e.g. millions of times, each time with different parameters for the quantum circuits. Through this repetitive process the classical program tries to figure out the best parameters for the parameterized quantum circuit, and hence, in a sense, 'learns' the best parameters. Which in our case basically means that the solution will learn the 'best' Quantum Machine Learning Model from the data we had at our disposal.

Such hybrid solutions can run reasonably well on NISQ quantum computers, because no hybrid solution can give a perfect answer: all such algorithms are heuristics, and they can give approximate answers at best. Which suits the NISQ quantum computers we have at our disposal, because they are also not accurate and can only solve problems approximately, or not at all.
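The outer classical loop can be sketched in a few lines. Here the "quantum" evaluation is replaced by its known closed-form value cos(θ) for a single RY(θ) qubit, so this only illustrates the control flow of a variational algorithm, not a real hardware run.

```python
import math

# Hypothetical cost we want to minimize: the <Z> expectation of an RY(theta)
# circuit, which equals cos(theta). A real VQA would evaluate this on
# quantum hardware; here it is computed classically for illustration.
def energy(theta):
    return math.cos(theta)

# The classical outer loop: try many parameter values, keep the best.
# (Real implementations use smarter optimizers, e.g. gradient descent or SPSA.)
best_theta, best_e = 0.0, energy(0.0)
for k in range(1000):
    theta = 2 * math.pi * k / 1000
    e = energy(theta)
    if e < best_e:
        best_theta, best_e = theta, e

print(round(best_theta, 3))  # close to pi, where cos(theta) = -1
```

The loop "learns" θ ≈ π, the parameter that minimizes the cost, which is the same role the classical optimizer plays when training a quantum machine learning model.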

>>> The HHL [Quantum] Algorithm

The HHL algorithm is a quantum algorithm for solving a linear system of equations, designed by Aram Harrow, Avinatan Hassidim, and Seth Lloyd, and formulated in 2009. The algorithm estimates the result of a scalar measurement on the solution vector of a given linear system of equations.

Why is HHL Algorithm so important?

The algorithm at the center of the “quantum machine learning” mini-revolution is called HHL. Many of the subsequent quantum learning algorithms extend HHL or use it as a subroutine. It is the single most important underlying quantum algorithm in quantum machine learning.
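It is worth being precise about what HHL returns: not the full solution vector x, only a scalar derived from it, such as an expectation value x·M·x. A tiny classical comparison (Cramer's rule on a hypothetical 2x2 system) makes the distinction clear.

```python
# Classical reference point for HHL: solve A x = b outright, then note that
# HHL would only estimate a scalar measurement on x, not x itself.
def solve2(a, b, c, d, e, f):
    """Solve [[a, b], [c, d]] @ [x, y] = [e, f] by Cramer's rule."""
    det = a * d - b * c
    return ((e * d - b * f) / det, (a * f - e * c) / det)

x, y = solve2(2, 1, 1, 3, 3, 5)  # A = [[2, 1], [1, 3]], b = [3, 5]
print(x, y)                       # -> 0.8 1.4
print(x * x + y * y)              # a scalar of the kind HHL can estimate
```

Reading out the full vector would cost time linear in its dimension and forfeit the exponential speedup, which is why HHL-based learning algorithms are built around scalar read-outs.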

>>> Conclusion

Simply remember this: if a Quantum Machine Learning Algorithm internally uses Grover's Algorithm, then the speedup/benefit is quadratic; and if it internally uses the HHL Algorithm, then the speedup/benefit is exponential. This can also be seen in the table below.

A generic overview of quantum learning methods. The Algorithm column names the classical learning method. The Papers column refers to the most important papers related to the quantum variant. The Grover column indicates whether the algorithm uses Grover's search or an extension thereof. The Speedup column indicates how much faster the quantum variant is compared to the best known classical version. Quantum data refers to whether the input, output, or both are quantum states, as opposed to states prepared from classical vectors. The Generalization performance column refers to whether the algorithm generalizes and performs well on unseen data. Implementation refers to attempts to develop a physical realization, as opposed to a theoretical analysis.

Outside Automatski, the world is still far from developing scalable universal quantum computers. Learning methods, however, do not always require universal quantum computing hardware: special cases of quantum machine learning are attainable with quantum annealers just by using optimization models.

>>> Where can I learn more?

The World Of Quantum Computing Status Update Nov-2020

So what is happening in the world of quantum computing around us?

In no specific order, let's talk about the tools and applications startups are building for Quantum Computing.

>>> Tools

Horizon Quantum Computing

They are pioneering an approach to quantum programming that allows programs written in a single unified language to be compiled and run on either conventional or quantum computers, producing fast efficient implementations no matter the platform. At the core of their technology is a process that automatically constructs quantum algorithms based on programs written in Matlab or Octave.

*** This approach is like Cross Compilation of Classical -> Quantum Programs

We are not sure how successful this approach will be, because one-to-one translation of classical programs into quantum programs makes little or no sense. It takes a huge engineering and research effort to create a ‘specific’ Quantum Algorithm that solves a ‘classical’ problem and achieves a Quadratic or Exponential Quantum Advantage. So from where we stand, this is a No Go!

QC Ware

Is making Quantum Data Loaders to load data into Quantum Machine Learning Models. It also creates GPU-based Quantum Simulators which can run 1,000-gate, 20-qubit circuits on GPUs in 6 seconds.
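
To see why circuits of roughly this size are feasible on commodity hardware: an n-qubit state vector holds 2^n complex amplitudes, so 20 qubits in complex64 occupy only about 8 MB. A minimal state-vector simulator sketch (illustrative only, not QC Ware’s actual implementation):

```python
import numpy as np

# Minimal state-vector simulator sketch. An n-qubit state is a vector of
# 2^n complex amplitudes; a single-qubit gate is a tensor contraction
# over one axis of that vector.
def apply_1q_gate(state, gate, target, n):
    """Apply a 2x2 `gate` to qubit `target` of an n-qubit state vector."""
    psi = state.reshape([2] * n)
    psi = np.moveaxis(psi, target, 0)               # bring target axis to front
    psi = np.tensordot(gate, psi, axes=([1], [0]))  # contract gate with it
    psi = np.moveaxis(psi, 0, target)               # restore axis order
    return psi.reshape(-1)

n = 20
state = np.zeros(2**n, dtype=np.complex64)
state[0] = 1.0                                      # start in |00...0>
H = (np.array([[1, 1], [1, -1]]) / np.sqrt(2)).astype(np.complex64)
for q in range(n):                                  # Hadamard on every qubit
    state = apply_1q_gate(state, H, q, n)
# The state is now a uniform superposition: every amplitude equals 2**-10.
```

The exponential blow-up is also visible here: each extra qubit doubles the memory, which is why brute-force state-vector simulation stalls around 40-50 qubits even on clusters.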

Cambridge Quantum

They are creating Architecture Agnostic Quantum Solutions. This has huge value because there are multiple Quantum Computers from multiple vendors, and no standardization exists yet or is expected for the next 10-20 years. So customers need to be able to build solutions once and deploy them on any Quantum Computer backend, without being locked to a specific backend through its proprietary APIs, Quantum Programming Languages, and Frameworks. This is like creating the Java of Quantum Computing.
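
A hedged sketch of what “build once, deploy on any backend” could look like at the code level: the solution targets one abstract interface, and vendor lock-in is confined to thin adapter classes. All names here are invented for illustration; this is not Cambridge Quantum’s actual API.

```python
from abc import ABC, abstractmethod

# Architecture-agnostic pattern: business logic depends only on an
# abstract backend interface; vendor specifics live in adapters.
class QuantumBackend(ABC):
    @abstractmethod
    def run(self, circuit: list, shots: int) -> dict:
        """Execute a vendor-neutral circuit description, return counts."""

class ToySimulatorBackend(QuantumBackend):
    """Stand-in adapter; a real one would wrap a vendor's proprietary API."""
    def run(self, circuit, shots):
        # Toy behaviour: pretend every shot measures the all-zeros string.
        n_qubits = 1 + max(q for _, q in circuit)
        return {"0" * n_qubits: shots}

def solution(backend: QuantumBackend) -> dict:
    # The business solution is written once, against the interface only.
    circuit = [("h", 0), ("x", 1)]   # vendor-neutral gate list
    return backend.run(circuit, shots=100)

counts = solution(ToySimulatorBackend())
```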

Zapata Computing

Zapata Computing created a Workflow Engine called ORQUESTRA that allows one to compose ‘quantum-enabled workflows’™ and execute them freely across the full range of classical and quantum devices. This is quite interesting because it allows someone to exploit classical quantum-inspired algorithms, mix Quantum Annealing and Gate Based Quantum Computers, and use hybrid classical-and-quantum backends to create solutions. We think this is a great way to go.

Sidenote: Quantum Benchmarks

Some people use useless, misleading terms like Quantum Volume to describe a Quantum Computer’s capabilities. The problem is that if a machine doesn’t work well enough to solve the problems I’m interested in, then a number like Quantum Volume 200 doesn’t help; I only want to know whether a Quantum Computer can solve the problem I want to solve. What the world needs is a suite of representative problems that can be executed periodically on Quantum Computers to create a benchmark of their performance and of what customers can expect from them. A metric like Quantum Volume is probably great for Quantum Computer vendors who want to measure their progress, but from a customer’s viewpoint it is pretty much useless.

Strangeworks is creating Standards for Quantum Computing Definitions, Performance Metrics & Performance Benchmarking.

Alibaba Says Its New “Tai Zhang” Is the World’s Most Powerful Quantum Circuit Simulator. Please see the various techniques used in implementing Quantum Simulators above. Creating a Quantum Simulator is a research effort that will need to utilize GPGPUs and FPGAs and also invent new, more efficient algorithms.

University of Michigan

QuIDDPro is a fast, scalable, and easy-to-use computational interface for generic quantum circuit simulation. It supports state vectors, density matrices, and related operations using the Quantum Information Decision Diagram (QuIDD) data structure. Other efforts have used Matlab, Octave, QCSim, and libquantum to simulate quantum circuits. Unlike these efforts, however, QuIDDPro does not always suffer from the exponential blow-up in the size of the matrices required to simulate quantum circuits. As a result, QuIDDPro is significantly faster and uses significantly less memory than other generic simulation methods.

High Level Programming Languages


One of the newest efforts in this space is Silq, a high-level programming language for quantum computers out of Switzerland’s ETH Zurich. Existing quantum languages still have programmers work at a very low abstraction level, which makes life for quantum programmers a lot harder than necessary. Programming quantum computers with low-level gate descriptions is error-prone because entangled temporary qubits have side effects on other qubits; Silq primarily solves that through safe, automatic uncomputation.

There are other prominent High Level programming languages for example…

  • QCL – A Programming Language for Quantum Computers
  • LanQ – A quantum imperative programming language

IDE – Integrated Development Environment

Eclipse XACC

Eclipse XACC is probably the world’s first IDE for Quantum Computing.

XACC is an extensible compilation framework for hybrid quantum-classical computing architectures. It provides extensible language frontend and hardware backend compilation components glued together via a novel quantum intermediate representation. XACC currently supports quantum-classical programming and enables the execution of quantum kernels on IBM, Rigetti, and D-Wave QPUs, as well as a number of quantum computer simulators.

Eclipse XACC is a programming specification and software framework that tackles the aforementioned challenges and provides a hybrid classical-quantum programming model that enables quantum acceleration within existing classical HPC applications. XACC provides the software interfaces and infrastructure required by domain computational scientists to offload computationally intractable work to an attached quantum accelerator.

It handles algorithm programming in a manner similar to OpenCL, with code expressed as language-agnostic quantum kernels, thus enabling XACC to interact with existing quantum programming languages (QPLs) such as Scaffold, Quipper, and QCL. To accomplish this language interoperability, XACC keeps track of associated compilers for each programming language and orchestrates their compilation, or translation, to accelerator-level assembly code.

XACC provides developers with two mechanisms for compilation: (1) a runtime API (just-in-time compilation) that enables the control of all aspects of high-level programming, translation, and execution, and (2) a static compiler that transforms invoked quantum kernels into an execution of the compiled result on the accelerator. Both mechanisms delegate to a classical compiler for generation of the hybrid classical-quantum executable. Furthermore, since XACC is extensible in languages and compilers, users can program and execute algorithms suited for either gate model quantum computing or adiabatic quantum computing.

>>> Applications

It seems everyone is trying to solve Chemistry, Finance, Materials, Machine Learning, Molecular Modelling and Drug Discovery Problems with Quantum Computers. So creating those applications for the customers makes sense. We need to be able to define the problem conceptually at a high level and encode it into an algorithm and execute it over ‘any’ suitable Quantum Backend.

Sidenote: Applications for Gate Based Quantum Computers

Everyone understands at a high level how a Gate Based Quantum Computer works. But it is extremely complicated to program one and create algorithms to solve interesting problems. So there is a huge multi-billion dollar market to create Applications that can encode a problem of interest, execute it over 'any' Gate Based Quantum Computer and decode the results into a form which is usable for solving business problems. This is a huge idea for a Quantum Applications Startup.

Sidenote: Applications for Quantum Annealing Quantum Computers

Everyone understands at a high level how a Quantum Annealing Quantum Computer works. But it is extremely complicated to program one and create algorithms to solve interesting problems. So there is a huge multi-billion dollar market to create Applications that can encode a problem of interest, execute it over 'any' Quantum Annealing Quantum Computer or a Classical Solver and decode the results into a form which is usable for solving business problems. This is a huge idea for a Quantum Applications Startup.
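
To make the encode/execute/decode idea concrete, here is a hedged sketch of encoding a tiny max-cut problem as a QUBO, the minimization format Quantum Annealers accept. A brute-force classical solver stands in for the annealing backend, and the graph is invented for illustration:

```python
from itertools import product

# Encode a toy max-cut instance as a QUBO, solve it with a stand-in
# "backend" (exhaustive search), then decode the result.
edges = [(0, 1), (1, 2), (0, 2), (2, 3)]   # toy 4-node graph
n = 4

def qubo_energy(x):
    """QUBO encoding of max-cut: each cut edge contributes -1 energy."""
    return sum(2 * x[u] * x[v] - x[u] - x[v] for u, v in edges)

def cut_value(x):
    """Decode step: count edges crossing the 0/1 partition."""
    return sum(1 for u, v in edges if x[u] != x[v])

# "Backend": exhaustive search over all 2^n bitstrings; a real annealer
# would sample low-energy bitstrings from the same QUBO.
best = min(product([0, 1], repeat=n), key=qubo_energy)
```

The Application’s value-add is exactly these encode and decode steps: the customer states the graph, and never sees the QUBO or the backend.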

The Path Ahead

So how will the above two kinds of Applications be offered? They will primarily be offered as SaaS services, but beyond that the Customer just needs to worry about the problem and its problem-description format.

So let’s say the customer needs to crack RSA-2048 (that’s his problem [type]). He checks the problem description format for that service and uses it to enter the RSA key he needs cracked. Apart from that, he doesn’t need to worry about anything except an estimate of the cost of using the service for his problem ‘instance’. The service should be able to analyse the problem instance described in the problem description format and offer that estimate (of time and cost). Beyond that, it is the service’s job to accurately and efficiently encode [convert] the problem definition into a Gate Based Circuit or an Annealing Optimization problem, optimize the encodings, and execute it over a suitable backend. It then decodes the results and returns the two numbers which are the factors of the given RSA-2048 key. The customer should only be concerned with taking and using those.

Similarly for other problems like Route Optimization, Portfolio Optimization, Trading Decisions etc.

So this kind of a service will be a front end SaaS service to a Quantum Backend Service like Amazon Braket.
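
The estimate/encode/execute/decode flow above can be sketched in code. Every name, the toy pricing rule, and the trial-division “backend” below are invented for illustration; a real service would dispatch to a quantum backend and handle an actual RSA-2048 instance, not a 12-bit toy modulus:

```python
# Hypothetical sketch of the SaaS flow: the customer submits a problem
# instance in the declared format, gets a cost estimate, and receives
# decoded results without ever seeing the backend.

def estimate(problem):
    """Step 1: the service inspects the instance and quotes a cost."""
    bits = problem["modulus"].bit_length()
    return {"bits": bits, "quoted_cost_usd": float(bits)}   # toy pricing

def execute(problem):
    """Steps 2-3: encode, run on a backend, decode. Here: trial division."""
    m = problem["modulus"]
    for p in range(2, int(m**0.5) + 1):
        if m % p == 0:
            return (p, m // p)        # decoded result: the two factors
    return None

problem = {"type": "factoring", "modulus": 3127}   # toy key: 53 * 59
quote = estimate(problem)
factors = execute(problem)
```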

>>> Cyber Security

Quintessence Labs

Is working on Quantum Random Number Generators, as well as Enterprise Key & Policy Management solutions.

Isara Corporation

Isara has emerged as an early frontrunner, working to develop quantum-safe security systems that essentially allow classical and post-quantum cryptographic algorithms to interoperate.

Sidenote: The Insight

It is not just necessary to create Quantum Secure algorithms. One has to build infrastructure and services around them too. Just like we have built for Classical Cryptographic Algorithms over the last few decades.

Until that happens lets just hope nobody outside Automatski develops a quantum computer and breaks all cryptography.