Just a decade and we’ll have quantum computing

 

Computation based on qubits and neuromorphic chips: this is the new computing expertise at work in the IBM research center in Zurich – systems that might lead to new and even more powerful cognitive systems. Within a decade we might already see the first ‘standard’ quantum computer

 

A center of pure experimental research where approximately a hundred scientists from 45 different countries work, and which has produced no fewer than four Nobel Prizes in Physics: this is the IBM Research Center in Zurich, where scientists are working on the new frontiers of computational capacity and cognitive systems. Let’s see in which domains.

 

1) Quantum computing

Stefan Filipp, scientist at IBM Research Center in Zurich

Scientists Stefan Filipp and Andreas Fuhrer, among others, are working on it. They led us on a journey of discovery through the computing of the future, which might reshape the IT hardware supply business (not just IBM’s) and open new frontiers in medical and pharmacological research. Today, quantum techniques are used mainly for encryption purposes; future processing based on qubits, however, will tackle currently unsolvable problems in physics and chemistry, reveal new aspects of artificial intelligence that could accelerate the development of new and more powerful technologies, generate new developments in materials science that change industrial production, and enable research on massive data volumes.

An important milestone was reached over the last year in the Zurich labs by overcoming what was regarded as one of the main obstacles to the development of quantum computing: the ability to identify, simultaneously, both types of error that occur in qubit-based calculations – bit flips, an unintended change of state from 0 to 1 or vice versa, and phase flips, an unintended change in the sign of the quantum phase. Until last year the two errors could only be detected one at a time, a severe limit on carrying out reliable calculations. Filipp and Fuhrer worked on designing a new circuit – a grid of four qubits on a microchip – that recognises both types of error at once.
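The idea behind simultaneous error detection can be sketched in a few lines of Python. This is a deliberately toy model, not IBM’s actual superconducting circuit: two data qubits carry (symbolic) errors, and two parity checks – standing in for the ancilla qubits on the lattice – report the bit-flip and phase-flip parities at the same time.

```python
# Toy sketch of simultaneous error detection on a grid of qubits:
# two data qubits plus two parity checks, one sensitive to bit
# flips (X) and one to phase flips (Z). A didactic simplification,
# not IBM's actual 4-qubit circuit.

def measure_syndromes(errors):
    """errors: one label per data qubit, each one of
    'I' (no error), 'X' (bit flip), 'Z' (phase flip),
    'Y' (a bit flip and a phase flip at once).
    Returns (bit_flip_parity, phase_flip_parity)."""
    x_parity = sum(e in ('X', 'Y') for e in errors) % 2
    z_parity = sum(e in ('Z', 'Y') for e in errors) % 2
    return x_parity, z_parity

print(measure_syndromes(['X', 'I']))  # (1, 0): bit flip caught
print(measure_syndromes(['I', 'Z']))  # (0, 1): phase flip caught
print(measure_syndromes(['Y', 'I']))  # (1, 1): both caught at once
```

The point of the 2015 milestone is the last line: a 'Y'-type fault produces both syndromes in a single measurement round, whereas earlier setups could only check one parity at a time.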

To date, there is still no universal quantum computer, but IBM expects the first medium-sized quantum processors of 50–100 qubits to be developed within the next decade. “A smartphone might already accommodate 2 to 4 qubits,” says Fuhrer. “Among the challenges we still need to overcome are the creation of high-quality qubits, grouping them so that the computing capacity scales (in order to perform complex calculations in a controllable manner), and cooling.” Quantum information is very fragile and needs to be protected from errors caused by heat or electromagnetic radiation: at the moment, to measure activity on the quantum processor, signals have to pass through a cryogenic dilution refrigerator [its refrigeration process employs a mixture of two isotopes of helium, i.e. helium atoms with different mass numbers due to a different number of neutrons in the atomic nucleus – Ed.].

 

2) Neuromorphic chips

The latest news on the front of neuromorphic chips – circuits that mimic the neural connections of a human brain – comes from Manuel Le Gallo, co-author of a very recent scientific publication (in Nature Nanotechnology) in which the scientists presented new artificial neurons built in the lab with ‘phase change’ materials. The researchers used germanium antimony telluride [derived from the GeSbTe alloy of germanium, antimony and tellurium, a phase-change material used in rewritable optical discs – Ed.], a material with two stable states: a so-called amorphous one, without a defined structure, and a crystalline one, with a structure. Here it is used not to store information but to emulate the signalling that happens between biological neurons. Under a series of electrical impulses, these artificial neurons show progressive crystallisation of the material; the truly innovative aspect is that the cell accumulates incoming electrical charge and releases a spike once a threshold is crossed. This is the ‘integrate-and-fire’ property – what happens in a human brain, for example, when you touch something hot – and it forms the basis of event-based computation. Building on these findings, IBM scientists are working on structuring ‘populations of hundreds of artificial neurons’ and using them to handle complex, fast signals. The artificial neurons have shown they can withstand billions of processing cycles at very low energy consumption: the energy required to update each neuron – that is, for it to change phase – is less than 5 picojoules, with an average power consumption below 120 microwatts. By comparison, a 60-watt light bulb draws 60 million microwatts.
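The integrate-and-fire behaviour described above can be sketched in a few lines of Python. The numeric threshold and pulse amplitudes here are purely illustrative (not values from the paper), and the `state` variable simply stands in for the progressive crystallisation of the phase-change cell.

```python
class PhaseChangeNeuron:
    """Minimal integrate-and-fire sketch. 'state' stands in for the
    progressive crystallisation of the phase-change cell; threshold
    and pulse values are illustrative only."""

    def __init__(self, threshold=1.0):
        self.threshold = threshold
        self.state = 0.0              # fully amorphous

    def pulse(self, amplitude):
        """Apply one electrical pulse; return True if the neuron fires."""
        self.state += amplitude       # integrate: partial crystallisation
        if self.state >= self.threshold:
            self.state = 0.0          # fire and reset (re-amorphise)
            return True
        return False

neuron = PhaseChangeNeuron(threshold=1.0)
spikes = [neuron.pulse(0.3) for _ in range(10)]
print(spikes.count(True))  # prints 2: fires on the 4th and 8th pulse
```

The key design point is that computation happens only when a spike occurs – between spikes the cell just sits in its current phase, which is why the energy per update can stay in the picojoule range.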

Populations of such artificial neurons (fast and low-power) may form the hardware base of new processors and the keystone for high-density neuromorphic computing systems. As for applications, even a single neuron could be used to detect patterns and find correlations in real time in event-driven data streams: in the field of the IoT, sensors could collect and analyse volumes of data on climatic conditions for more accurate forecasts; in finance, artificial neurons could identify patterns in financial transactions and flag any discrepancies.
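To make the pattern-detection idea concrete, here is a small sketch (all data and thresholds invented for illustration) of one such neuron used as a coincidence detector over two event streams – for example, two transaction feeds whose correlated bursts should be flagged:

```python
def detect_coincidences(stream_a, stream_b, threshold=3):
    """Integrate coincident events from two binary streams and 'fire'
    (record the time step) whenever enough coincidences accumulate.
    Streams and threshold are invented for illustration."""
    state, fired_at = 0, []
    for t, (a, b) in enumerate(zip(stream_a, stream_b)):
        if a and b:                  # integrate only aligned events
            state += 1
        if state >= threshold:
            fired_at.append(t)       # fire: correlated burst detected
            state = 0                # reset, as after a neuron spike
    return fired_at

a = [1, 0, 1, 1, 0, 1, 1, 1]
b = [1, 0, 1, 0, 0, 1, 1, 0]
print(detect_coincidences(a, b))  # [5]: fires on the third coincidence
```

In a neuromorphic system this loop would be the physics of the device itself, running in parallel across a whole population of neurons rather than as sequential software.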

 

3) Cognitive systems

In our journey of discovery into what we will see in a few years’ time, we could not leave out artificial intelligence and the cognitive systems based on the now well-known Watson. Here we move away from pure experimental research towards applied research, since Watson is already used in a number of projects ranging from medical to industrial applications. In Zurich, we saw an interesting project in which Watson is applied to the diagnosis of rare diseases.

The Rhoen-Klinikum group is one of the largest private healthcare companies in Germany, with 5 hospitals and internal research centers that work with the country’s top universities. It is starting a pilot project, due to begin at the end of this year, to apply Watson as a ‘cognitive assistant’ in a center for undiagnosed and rare diseases. The pilot came after months of ‘preparation’ during which physicians and technicians (IT staff and scientists) worked on the verticalisation of Watson – a cognitive system that can interact in natural language, reason, learn and operate within the context of rare diseases.

Many patients with undiagnosed diseases have a long medical history made up of a huge amount of unstructured data (lab tests, clinical reports, prescriptions, X-rays, pathology reports, etc.). Physicians usually have to carry out a long preliminary investigation when they meet a patient before they can understand and diagnose a disease, and this is even more complicated for rare or never-before-diagnosed diseases. This is where Watson comes into play: all this data (anonymised to protect the privacy of patients) will be ‘fed’ into it and analysed (using Watson’s APIs in the IBM cloud), also looking for correspondences across languages: it will receive documents in German but draw on a pool of international sources (medical and scientific publications, medical reports from doctors all over the world, documents made available by other research centres, etc.) to ‘unearth’ the information needed to establish the diagnosis. The diagnosis itself will always be made by the physician, but thanks to Watson the entire investigative phase can be reduced from months to days, opening up new opportunities for timely care – Watson can also be used in the next step, i.e. identifying the best therapy.
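The workflow described above – anonymise the records, send them to a cloud cognitive service, and return ranked hypotheses for the physician to review – can be sketched as follows. Every name in this snippet (`anonymise`, `rank_hypotheses`, the fake scored results) is an invented placeholder, not the real Watson API.

```python
# Hypothetical sketch of the pilot's workflow. All function names and
# data are invented placeholders, not IBM Watson's actual APIs.
import re

def anonymise(record):
    """Strip obvious identifiers before data leaves the hospital."""
    return re.sub(r"Patient:\s*\S+", "Patient: [REDACTED]", record)

def rank_hypotheses(scored):
    """Order candidate diagnoses by confidence, highest first;
    the final call always stays with the physician."""
    return sorted(scored, key=lambda kv: kv[1], reverse=True)

record = "Patient: J.Doe. Lab report: elevated ferritin, joint pain."
clean = anonymise(record)
# In the real pilot, 'clean' would go to Watson's APIs in the IBM
# cloud; here we fake the returned (diagnosis, confidence) pairs.
hypotheses = [("haemochromatosis", 0.71), ("Still's disease", 0.40)]
for name, score in rank_hypotheses(hypotheses):
    print(f"{score:.2f}  {name}")
```

The shape matters more than the details: de-identification happens before anything reaches the cloud, and the system’s output is a ranked shortlist that shortens the physician’s investigation rather than a diagnosis in itself.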