Archive for the ‘Quantum Computer’ Category

International partners and Government agencies join Arqit’s Federated Quantum System – PRNewswire

LONDON, June 11, 2021 /PRNewswire/ -- Arqit Limited ("Arqit"), a leader in quantum encryption technology, has today announced at the G7 Leaders' Conference in Cornwall that it has formed an international consortium of companies and government organisations to provide its quantum encryption technology to government customers in a federated system concept, called Federated Quantum System (FQS). The UK, USA, Japan, Canada, Italy, Belgium, and Austria are now represented.

Arqit invented a system which uses satellites to distribute quantum keys to data centres. These keys are delivered using a new patented protocol called ARQ19, which solves the "Global versus Trustless" problem that previously prevented the adoption of Satellite Quantum Key Distribution (QKD). Arqit further invented a method, called QuantumCloud, to translate the benefits of this quantum key distribution to any form of endpoint or cloud machine without the need for any special hardware. The first version of QuantumCloud launches for live service to commercial customers in 2021.

Government customers typically have more stringent requirements for control and are more inclined to buy "Private Instances" of cloud technology rather than managed services. Arqit has therefore designed a different version of its technology to meet this need and has recruited a strong community of partners from allied countries to collaborate in bringing the FQS system into use.

Collaboration partners include BT, Sumitomo Corporation, Northrop Grumman, Leonardo, QinetiQ Space N.V., qtlabs and Honeywell. Other Western Allied countries are expected to announce their inclusion during 2021.

FQS has been developed with support from the UK Space Agency (UKSA) through its National Space Innovation Programme. The system consists of dedicated satellites, control systems and QuantumCloud software. It will be provided to the UK's "Five Eyes" allied governments and other international partners, allowing sovereign protection of strategic national assets and interoperability for joint operations.

The first FQS satellites are to be integrated and tested at the National Satellite Test Facility in Harwell near Oxford and are expected to be launched on Virgin Orbit's LauncherOne from Newquay in Cornwall in 2023, after the launch of the first commercial Arqit satellites. The role of Virgin Orbit in providing responsive launch services for government customers from any location is additive to the plan to deploy many FQS satellites to support the needs of a growing list of allied country partners.

Arqit's Chairman and Chief Executive Officer, David Williams, said, "FQS enables collaboration between NATO-allied governments around the world to form a federated version of QuantumCloud infrastructure. It also enables the Joint All Domain Command and Control vision to come to life. The FQS system is global in its nature, and there is now strong momentum in an international consortium joining forces to bring it into use."

General Stephen Wilson, Director of Arqit Inc., said, "For Allies working together, Joint All Domain Command and Control (JADC2) is essential. For JADC2 to work, we must have a real identity, credentialing, and access management solution. Arqit's technology makes trusted data security possible."

Minister for Science, Research and Innovation, Amanda Solloway MP, added, "As a global science superpower, the UK continues to make advances in quantum science, which is revolutionising cyber-security across the world. Backed by government funding, Arqit is paving the way in developing a new generation of quantum technologies that defend against sophisticated cyber-attacks on national governments, strengthening our resilience and helping us all build back better from the pandemic."

UK Space Agency CEO, Graham Turnock, added, "Space technologies have become embedded in almost every aspect of our daily lives, and UK Space Agency funding is accelerating our development as a world leader in space technology. Arqit's advances in quantum technologies will strengthen the UK's resilience against harmful cyber-attacks, helping us protect our critical services. The announcement today shows the attraction of Arqit's model to our partners."

Head of Spaceport Cornwall, Melissa Thorpe, added, "This is a key consortium to be involved with, placing UK companies at the forefront of cutting-edge global technologies. Arqit's products will bring high-volume international business through launch at Spaceport Cornwall with Virgin Orbit. Not only does this provide us with advancements in UK innovation, but also brings that innovation to Cornwall, supporting high-skilled jobs and economic growth."

Dan Hart, CEO of Virgin Orbit, commented, "Recent headlines from across the world show how vital it is to have encrypted communications supporting our economy and our security. The team at Arqit has already demonstrated that its encryption capabilities provide a critical service to a global customer base. By adding a space layer with its Federated Quantum System, Arqit is taking that service to an even higher level. We're delighted to support the team, and to add its quantum satellites to our manifest for flights out of Cornwall."

Kevin Brown, Managing Director of BT Security, said, "BT is pleased to be part of the first phase of the FQS project, which aligns with our wider commitment to providing security solutions for the most critical organisations. As this project advances to a global stage, it provides a clear example of how the UK is playing a leading role in developing important new technologies."

Eiji Ishida, Executive Officer and General Manager, Lease, Ship and Aerospace Business Division of Sumitomo Corporation, said, "The FQS concept is important because it allows us to manage a local instance of the infrastructure and deliver the control that our defence customers will require. Sumitomo Corporation is pleased to be in this consortium, which is very far ahead of other technologies."

Professor Rupert Ursin of qtlabs said, "Austria has been a strong supporter of the Satellite QKD technology which we have helped Arqit to build, and I am pleased that qtlabs is able to continue to represent Austria in this consortium and will try to bring the benefits of FQS to the Austrian and other government users."

Marina Mississian, Senior Director Space Payloads for Honeywell Aerospace, Canada, added, "Satellite enabled quantum encryption is strategically important for Honeywell and Canada. With the support of the Canadian government, we have been pleased to be associated with Arqit's commercial mission and now to join the FQS system, which will further the collective security goals of the 'Five Eyes' community of nations."

Frank Preud'homme, Sales and Business Development Director for QinetiQ Belgium, said, "We have been pleased to support the Arqit space mission since 2017 and see strong potential for the creation of sovereign capabilities for a close alliance of allied countries in sharing this technology."

Norman Bone, Chair and Managing Director, Leonardo UK, concluded, "Leonardo and Telespazio recognise the increasingly digital nature of UK and allied national defence and security, and the expectations of our customers that their systems are secure and resilient. The rapid, collaborative evaluation of new and cutting-edge technologies such as those developed by Arqit is a key element of Leonardo's strategy to establish and deliver next-generation systems to our customers, enabling effective and secure multi-domain operations, including in the cyber and space domains."

The fundamental science behind Arqit's solutions to decades-old problems with Satellite QKD is important, but the company's ability to translate that science into usable products that precisely meet the needs of users is equally advanced. This is a major advantage in moving quickly to implement the security technology amongst allied nations, delivering valuable security improvements immediately and underpinning the future of the quantum battlespace.

About Arqit Limited:

Arqit has invented a unique quantum encryption technology which makes the communications links of any networked device secure against current and future forms of hacking, even an attack from a quantum computer. Arqit's product, called QuantumCloud, creates unbreakable software encryption keys, using satellites to deliver quantum information to data centres. The keys are easy and efficient to use remotely, with no hardware or disruption to software required. The software has universal application to every edge device and cloud machine in the world. Headquartered in the United Kingdom with subsidiaries in the United States, Arqit was founded in 2017 by UK satellite industry veteran David Williams. Visit us at http://www.arqit.uk.

SOURCE Arqit

https://arqit.uk

Excerpt from:
International partners and Government agencies join Arqit's Federated Quantum System - PRNewswire

Readers reply: how do we know we're not living in a simulation like the Matrix? – The Guardian

How do we know we're not living in a simulation like the Matrix? Jack Freedom, Bristol

Send new questions to nq@theguardian.com.

Isn't this just the kind of article our biomechanical overlords would simulate in order to keep us compliant in our pods? kingsize

I took the red pill and nothing materially changed, other than that a rash I had had for a week or so cleared up. OfficerKrupke

Not ruling it out, but if we were living in software, it would be the most reliable software ever, because there never seem to be any disruptive updates. Liam Collins

The idea that we may be living in a Matrix-like universe is called the simulation theory, and was first proposed by Nick Bostrom. It argues that human technology is advancing at such a rate that in the future we will have the ability to simulate entire universes filled with details as rich and beautifully complex as our own. These simulated universes would also contain beings that were genuinely conscious as a result of the advanced ability of the simulation, and so would be able to think and would be self-aware in the same way that we can and do. These beings could be indistinguishable from us in terms of the depth of their minds, the only difference being that their life springs from circuit boards and artificial design rather than the real world which has given life to us. These beings, being no less able or imaginative than us, would then progress to a point of technological advancement at which they could create and run their own simulations. The simulated minds they create may do likewise, and so there could be simulations inside of simulations. There could therefore be billions of universes being simulated in a chain, with only one base reality (the real world) at the start. That being the case, it looks far more likely that any one individual would be living in a simulated universe, rather than the real one. Once we acknowledge this possibility, we have to then consider that these odds apply to us as well, and so according to the theory presented we are far more likely to be living in a simulation than the real world.

One counter-argument is to consider that all of these simulations have a common feature: they all have their own simulation. The only universes that might not are the most recent simulated universe (whose inhabitants may not yet have developed the technology necessary to create one) and base reality, if it turns out that simulated universes aren't possible. That brings our odds to at least 50/50, which is preferable to the billion-to-one conclusion reached above. Unfortunately, this line of reasoning assumes that each universe can only create one simulation, which isn't necessarily the case. Each node on the chain of simulated universes could have many branches, each with a simulation on the end, bringing our probability back to a billion to one. Benjamin Dixon

What I always found interesting about Bostrom's idea are the ethics that emerge from this assumption. Basically, we should treat any simulated realities with dignity and respect, because if we don't, we increase the likelihood that consciousnesses in a higher reality than ours will mess around with us. I feel much worse about how I treated my Sims now. ajukes2k

You may be interested in David Kipping's paper "A Bayesian Approach to the Simulation Argument". Much more maths than in Bostrom's original paper, but nothing fiercer than conditional probability and Bayes' theorem, plus the ability to sum a geometric series, is required. As you would expect, there is a good reference list to the literature too. FinrodFelagund

Michio Kaku has an answer to this: basically, because the smallest computer needed to run a simulation of the universe is the universe itself, it's more logical that we are not living in a simulation. I rather like the idea, though, not least because it offers the small chance of an afterlife for the non-religious. ChestnutSlug

Not sure that's true, though. All that's needed is to run something that looks like the universe from where you (or I) sit. You might think there's an awfully big universe out there, but if you only look at it in terms of images on a screen, then all you need is enough power to colour the screen. I quite like the idea that a simulation explains quantum uncertainty: a state doesn't exist until it's been observed; it's uncertain because it hasn't yet been computed in the simulation. No, of course I don't believe any of that. It's fun trying, though. conejo

Some make a pretty plausible case: see Rizwan Virk's The Simulation Hypothesis and a recent article in Scientific American. Madeleine Bowman

In a sense we definitely are living in a simulation, since what we experience is coloured by our own subjective experience and judgment, expectations, our own programming. How we perceive reality may well not be particularly real. Equally, what we are fed, plus groupthink, societal norms and expectations, biases, etc., can take us a very long way from being able to objectively perceive what is actually happening. We are a walking Matrix. It's virtually impossible to step outside your own normal and become embedded in any kind of physical reality. You only have to look at other societies around the world and how insane they look to realise that. LorLala

We are living in a simulation, but not in the way you might think. In his Republic, Plato suggests that something can be tangible and unreal, if it purports to be something it is not (as, for example, a statue does). As I look out of my window in 2021 England, I see toytown cars styled to look friendly or aggressive, driving past toytown newbuild houses designed to evoke fake nostalgia, inhabited by disoriented people who vote for toytown politicians and watch surgically enhanced bimbos on so-called reality TV. They are firmly in the Matrix, albeit a tangible Matrix, and the perennial sigh of their oppressed nature is "O God, please protect me from everything that is really real." I'm sorry, but you did ask. PaulSecret

The state of the current government suggests that, if not a simulation, we may indeed be living in some bleak dark comedy. DougieGee

There is one piece of evidence that we do indeed live in a computer simulation. Computer simulations are essentially bits of data, which are then presented to the observer, or subject in our case, as objects. The data will contain all the information necessary to present and animate the object, including physical and psychological characteristics. But if the data gets corrupted, then the representation will change unexpectedly. And if the data goes missing, or is corrupted so badly that it cannot be represented, then the object will disappear.

Which brings me to my one piece of evidence. How many of us have experienced the inexplicable disappearance of a sock? Yes, folks, odd socks are the irrefutable piece of evidence that we do live in a simulation, and a sloppy one at that. vishnoo

I'd like to think that a simulated world would be free of pandemics, Brexits, racists, uber-capitalists, tabloid journalism, super-leagues, sausage bans, hives, bad smells, etc. Surely our Matrix Overlords would want to keep us feeling complacently sedate and safe, no? Unless, of course, they had a sadistic streak and a perverse sense of humour. AmadanDubh

Have you never played SimCity? At least half the fun is in dealing with disasters. saganIsMyHomeboy

This is an epistemic question. Epistemology is concerned with the beliefs we hold and our justification for holding them. I think the lesson to learn from this question is that we can never be sure we know anything, and we should be constantly evaluating our beliefs and what we know in light of new experience, as it is difficult to prove we know anything. Cauvghn

Philosophers have spent an absurd amount of time attempting to answer this question. It is easy to get bogged down in the details of their numerous theories of knowledge, which typically (though not invariably) seek to establish that we do know that we're not living in a simulation. But all those theories don't change a fundamental point: everything would appear to us exactly the same if we are in a (perfect) simulation and if we are not. As a result, there will always be some reason to doubt that things are as they appear. Paul Dimmock

The Middle East, The Kardashians, racism and sexism, homophobia and Trump are all human conditions that a machine could never attain the sufficient level of advanced stupidity to mimic. Jeremy Jones

We are living in a simulation that we create with our own minds. Pavlin Petkov

I believe simulation theory and our current understanding of physics are incompatible. Why?

First, if everything in the simulation is captured within one framework of true determinism, the processing power required for modelling all the trajectories of the units of the (visible) universe would in fact, due to power laws, implode our own universe, even when some of these trajectories and interactions are constrained by universal rules (e.g. a maximum velocity at the speed of light). And yes, this applies even when the simulation is run via quantum computing (where we assume near-perfect energy efficiency). In line with the mass-energy equivalence law, E = mc², information processing = energy = mass. Then, for simulation theory to still work out, there needs to be an external source of mass/energy, far greater than the universe simulated, to supply the processing power to simulate our universe. This simulation therefore needs to physically take place in a different and far greater entity than our own visible universe. So: if the simulation operates within a framework of true determinism, the processing power required for that single simulation we are all in would far exceed that which is embodied by the mass of our known universe. The simulated universe would implode in on itself or require a significant supply from an external entity entirely.

Now, if we want to look beyond this processing-power limitation in the case of true determinism, a simulation of our universe would require a significant degree of random laws dictating the trajectories of the simulated agents (whatever their unit may be) and their interactions (leading to a far smaller parameter space, which relieves, to some extent, the power laws that determinism needs to deal with). Computer science has yet to find a way of generating true randomness, but for argument's sake, let's assume this limitation has long been overcome by those superior beings running the simulation of our universe. Then still, by virtue of the lack of complete determinism, no simulation would be the same; no valuable patterns could be extracted from any single simulation alone. This would mean that multiple (read: infinitely many) simulations would need to be run in parallel in order to be valuable, implying that, without determinism, simulation theory would go hand in hand with infinitely many parallel universes. This again lands us at the issue of processing power required, which would be so enormous that it seems to defeat the purpose, whatever that may be (perhaps this is the true psychological conundrum with simulation theory). Naomi Iris van den Berg

When I first watched The Matrix, I had to leave the room when it got to the point of the choice between the red pill and the blue pill, and chose to watch the microwave oven instead. It was too plausible and I couldn't decide which one to take. Being a diagnosed schizophrenic probably plays a role here, but I also receive enough synchronicity and precognition to keep me guessing as to the possibility of a holographic universe. It would explain a lot. There is a theory along these lines in modern quantum physics and I've seen the physical universe behave in some odd ways. My life remains beautifully surreal in the meantime. Sam Bowen

We don't and we never will. But Occam's razor applies: is it simpler/more likely to assume that everything we perceive has been designed by a third-party intelligence, expending vast amounts of energy for unknown reasons, or that the world around us is real? My money is on the latter. SRF999

Does it matter? I don't think it does. What does matter is how we respond to our perceived surroundings. Each of us has to adapt our responses in such a way that they affect our immediate environment so that we effect beneficial change. Such is intelligence. It doesn't matter by whom or why the environment was constructed. The funny thing to note is that as a whole (as opposed to us acting as individuals), we appear to be failing big style. Bristol_Fashion

Hilary Putnam posed the question: how do we know that we are not just a brain in a vat? Putnam argued that to ask the question we needed to have a causal relationship with an external world, and hence we could not possibly just be brains in a vat. My own view, however, is that this assumes that we can peek outside the box, which I do not think we can.

We could therefore very possibly be just brains in a vat (or just living in a simulation like the Matrix). It really depends on what you are asking. Most people assume that there has to be something else, either a god or an external reality, that contains our universe. So in effect, yes, we are just brains in a vat. But what is the vat?

I would suggest that language is the vat. Language is the DNA of the mind and we are living in a sea of language, which is creating the consciousness that we perceive. If you think about it, you can only pose the question that you did (Are we in a simulation?) because of language. It is language that enables that thought to be entertained and language that demands the answer. The physical, material world has no need for that question. It has all the answers it needs. It is only the human mind and the language that structures it that creates this need. soonah98

What does it matter? The objective of life is the same: try to enjoy yourself while making things better for others, your loved ones and society as a whole. Simon Ellis

Here is the original post:
Readers reply: how do we know we're not living in a simulation like the Matrix? - The Guardian

Honeywell Takes Quantum Leap. The Apple of Quantum Computing Is Here. – Barron’s


Honeywell International and Cambridge Quantum Computing are merging their fledgling quantum-computing businesses into a stand-alone company, signaling that quantum computing is just about ready for prime time.

The deal, essentially, combines Honeywell's (ticker: HON) quantum hardware expertise with privately held Cambridge's software and algorithms. It is as if the two had formed the Apple (AAPL) of the quantum computing world, in that Apple makes hardware, operating systems, and software applications.

"This is an inflection point company that will drive the future of quantum computing," said Tony Uttley, currently the president of Honeywell's quantum business. He will be president of the new company.

Honeywell says quantum computing can be a trillion-dollar-a-year industry some day, just like smartphones, although for now, the smartphone market is some 2,000 times bigger. Moving now, at the point before the gap begins to close, could be a win.

"We are at a [industry] phase where people are looking to hear more about practical quantum use cases and investors want to know if this is investible," said Daniel Newman, founder of Futurum, a research and advisory firm focused on digital innovation and market-disrupting technologies.

This deal will speed the process of investor education. The new business is targeting $1 billion in annual revenue in the next two to four years. "We'd be disappointed if we were only at a billion in a few years," said Ilyas Khan, Cambridge's CEO and founder. He will be CEO of the new company, which he said will decide whether to pursue an initial public offering by the end of the year.

A name for the business has yet to be chosen.

The new company plans to have commercial products as soon as late 2021. The initial offerings will be in web security, with products such as unhackable passwords. Down the road, there are commercial applications in chemicals and drug development.

In terms of sheer brainpower, the new enterprise is impressive. It will have about 350 employees, including 200 scientists, 120 of them with doctoral degrees.

The company will start off with a cash injection of about $300 million from Honeywell. The industrial giant will own about 54% of the new company for contributing its cash and technology.

Honeywell stock isn't reacting to the news. Quantum computing is still too small to move the needle for a $160 billion conglomerate. Shares were down slightly in early Tuesday trading, similar to moves in the S&P 500 and Dow Jones Industrial Average.

Year to date, Honeywell stock has gained 7%.

Write to Al Root at allen.root@dowjones.com

Continued here:
Honeywell Takes Quantum Leap. The Apple of Quantum Computing Is Here. - Barron's

BBVA and Zapata Computing Release Study Showing the Potential to Speed Up Monte Carlo Calculations for – GlobeNewswire

The research proposes novel circuit designs that significantly reduce the resources needed to gain a quantum advantage in derivative pricing calculations

BOSTON, June 09, 2021 (GLOBE NEWSWIRE) -- Zapata Computing, a leading enterprise software company for quantum-classical applications, today announced the results of a research project conducted with the global bank BBVA. The project's aim was to identify challenges and opportunities for quantum algorithms to speed up Monte Carlo simulations in finance. Monte Carlo simulations are commonly used for credit valuation adjustment (CVA) and derivative pricing. The research proposes novel circuit designs that significantly reduce the resources needed to gain a practical quantum advantage in derivative calculations, taking years off the projected timeline for the day when financial institutions can generate real value from quantum computers.

Fueled by regulatory pressure to minimize systemic financial risk since the global financial crisis of 2008, banks and other financial institutions have been increasingly focused on accounting for credit risk in derivative pricing. In the US, similar regulation exists to stress-test financial scenarios for Comprehensive Capital Analysis and Review (CCAR) and Dodd-Frank compliance. Monte Carlo simulation is the standard approach for this type of risk analysis, but the calculations required, which must account for all possible credit default scenarios, are immensely complex and prohibitively time-consuming for classical computers. Zapata and BBVA's research reveals practical ways for quantum algorithms to speed up the Monte Carlo simulation process.

"Our innovative approach to quantum-accelerated Monte Carlo methods uses a novel form of amplitude estimation, combined with additional improvements that make the quantum circuit much shallower, in some cases hundreds of times shallower than the well-known alternatives in the literature," said Yudong Cao, CTO and founder of Zapata Computing. "This approach reduces the time needed for a quantum computer to complete the CVA calculation by orders of magnitude, and also dramatically reduces the number of qubits needed to gain a quantum advantage over classical methods." Zapata highlights that, in its enterprise customer collaborations, it performs in-depth studies of how much quantum computing resource will be required to obtain practical benefit for business operations. This type of in-depth research can directly inform the hardware specifications needed for quantum advantage in specific use cases.
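For context on why amplitude estimation matters here: the baseline being accelerated is classical Monte Carlo, whose statistical error shrinks only as 1/√N in the number of samples N, while quantum amplitude estimation improves this to roughly 1/N. The sketch below is a minimal classical Monte Carlo pricer for a vanilla European call, with purely illustrative parameters (none are taken from the BBVA-Zapata study):

```python
import math
import random

def mc_call_price(s0=100.0, k=105.0, r=0.01, sigma=0.2, t=1.0, n=100_000):
    """Classical Monte Carlo price of a European call under
    geometric Brownian motion. All parameters are illustrative only."""
    payoff_sum = 0.0
    for _ in range(n):
        z = random.gauss(0.0, 1.0)  # standard normal draw
        st = s0 * math.exp((r - 0.5 * sigma**2) * t + sigma * math.sqrt(t) * z)
        payoff_sum += max(st - k, 0.0)
    return math.exp(-r * t) * payoff_sum / n

# Error decays as 1/sqrt(n), so each extra digit of accuracy costs
# 100x more samples; quantum amplitude estimation targets ~1/n scaling.
print(round(mc_call_price(), 2))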

"Improving the performance of these calculations in realistic settings will have a direct impact on the technological resources and costs required for financial risk management," said Andrea Cadarso, BBVA Mexico's Team Lead for Quantitative & Business Solutions. "The implications of this research are not limited to CVA calculations. We intend to extend our approach to other applications in quantitative finance, where Monte Carlo simulations are widely used for everything from policy making and risk assessment to financial product pricing calculations."

The BBVA-Zapata Computing joint publication is the result of one in a series of research initiatives that BBVA Research & Patents launched in 2019. These projects, conducted in partnership with leading institutions and companies including the Spanish National Research Council, Multiverse, Fujitsu and Accenture, explore the potential advantages of applying quantum computing in the financial sector.

Escolástico Sánchez, leader of the Research & Patents discipline at BBVA, emphasized BBVA's intention to continue exploring this cutting-edge technology: "BBVA is fully committed to its work in the quantum area. The bank has assembled a quantum team and is getting professionals from different areas involved in the development of a set of quantum solutions that meet the bank's needs."

About Zapata Computing:

Zapata Computing, Inc. builds quantum-ready applications for enterprise deployment using our flagship product Orquestra. Zapata has pioneered a new quantum-classical development and deployment paradigm that focuses on a range of use cases, including ML, optimization and simulation. Orquestra integrates best-in-class quantum and classical technologies, including Zapata's leading-edge algorithms, open-source libraries in Python, and more. Zapata partners closely with hardware providers across the quantum ecosystem such as Amazon, Google, Honeywell, IBM, IonQ, Microsoft and Rigetti. Investors in Zapata include Comcast Ventures, BASF Venture Capital, Honeywell Ventures, Itochu Corporation, Merck Global Health and Robert Bosch Venture Capital.

Media Contact:
Anya Nelson
Scratch Marketing + Media for Zapata Computing
anyan@scratchmm.com
617.817.6559

Originally posted here:
BBVA and Zapata Computing Release Study Showing the Potential to Speed Up Monte Carlo Calculations for - GlobeNewswire

Why Is Quantum Computing So Hard to Explain – Quanta Magazine

Quantum computers, you might have heard, are magical uber-machines that will soon cure cancer and global warming by trying all possible answers in different parallel universes. For 15 years, on my blog and elsewhere, I've railed against this cartoonish vision, trying to explain what I see as the subtler but ironically even more fascinating truth. I approach this as a public service and almost my moral duty as a quantum computing researcher. Alas, the work feels Sisyphean: The cringeworthy hype about quantum computers has only increased over the years, as corporations and governments have invested billions, and as the technology has progressed to programmable 50-qubit devices that (on certain contrived benchmarks) really can give the world's biggest supercomputers a run for their money. And just as in cryptocurrency, machine learning and other trendy fields, with money have come hucksters.

In reflective moments, though, I get it. The reality is that even if you removed all the bad incentives and the greed, quantum computing would still be hard to explain briefly and honestly without math. As the quantum computing pioneer Richard Feynman once said about the quantum electrodynamics work that won him the Nobel Prize, if it were possible to describe it in a few sentences, it wouldn't have been worth a Nobel Prize.

Not that that's stopped people from trying. Ever since Peter Shor discovered in 1994 that a quantum computer could break most of the encryption that protects transactions on the internet, excitement about the technology has been driven by more than just intellectual curiosity. Indeed, developments in the field typically get covered as business or technology stories rather than as science ones.

That would be fine if a business or technology reporter could truthfully tell readers, "Look, there's all this deep quantum stuff under the hood, but all you need to understand is the bottom line: Physicists are on the verge of building faster computers that will revolutionize everything."

The trouble is that quantum computers will not revolutionize everything.

Yes, they might someday solve a few specific problems in minutes that (we think) would take longer than the age of the universe on classical computers. But there are many other important problems for which most experts think quantum computers will help only modestly, if at all. Also, while Google and others recently made credible claims that they had achieved contrived quantum speedups, this was only for specific, esoteric benchmarks (ones that I helped develop). A quantum computer thats big and reliable enough to outperform classical computers at practical applications like breaking cryptographic codes and simulating chemistry is likely still a long way off.

But how could a programmable computer be faster for only some problems? Do we know which ones? And what does a "big and reliable" quantum computer even mean in this context? To answer these questions we have to get into the deep stuff.

Let's start with quantum mechanics. (What could be deeper?) The concept of superposition is infamously hard to render in everyday words. So, not surprisingly, many writers opt for an easy way out: They say that superposition means "both at once," so that a quantum bit, or qubit, is just a bit that can be both 0 and 1 at the same time, while a classical bit can be only one or the other. They go on to say that a quantum computer would achieve its speed by using qubits to try all possible solutions in superposition, that is, at the same time, or in parallel.

This is what I've come to think of as the fundamental misstep of quantum computing popularization, the one that leads to all the rest. From here it's just a short hop to quantum computers quickly solving something like the traveling salesperson problem by trying all possible answers at once, something almost all experts believe they won't be able to do.

The thing is, for a computer to be useful, at some point you need to look at it and read an output. But if you look at an equal superposition of all possible answers, the rules of quantum mechanics say you'll just see and read a random answer. And if that's all you wanted, you could've picked one yourself.

What superposition really means is "complex linear combination." Here, we mean "complex" not in the sense of "complicated" but in the sense of a real plus an imaginary number, while "linear combination" means we add together different multiples of states. So a qubit is a bit that has a complex number called an amplitude attached to the possibility that it's 0, and a different amplitude attached to the possibility that it's 1. These amplitudes are closely related to probabilities, in that the further some outcome's amplitude is from zero, the larger the chance of seeing that outcome; more precisely, the probability equals the distance squared.

But amplitudes are not probabilities. They follow different rules. For example, if some contributions to an amplitude are positive and others are negative, then the contributions can interfere destructively and cancel each other out, so that the amplitude is zero and the corresponding outcome is never observed; likewise, they can interfere constructively and increase the likelihood of a given outcome. The goal in devising an algorithm for a quantum computer is to choreograph a pattern of constructive and destructive interference so that for each wrong answer the contributions to its amplitude cancel each other out, whereas for the right answer the contributions reinforce each other. If, and only if, you can arrange that, you'll see the right answer with a large probability when you look. The tricky part is to do this without knowing the answer in advance, and faster than you could do it with a classical computer.
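A toy numerical example (ours, not the article's) makes the interference rule concrete: track a single qubit's two amplitudes through two Hadamard gates and watch the signed contributions to the |1> amplitude cancel, so measuring yields 0 with probability 1.

```python
import math

def hadamard(amp0, amp1):
    """Apply a Hadamard gate to a single-qubit state given by its
    amplitudes for |0> and |1>. Each output amplitude is a sum of
    signed contributions: opposite signs cancel (destructive
    interference), equal signs reinforce (constructive)."""
    s = 1.0 / math.sqrt(2.0)
    return s * (amp0 + amp1), s * (amp0 - amp1)

a0, a1 = 1.0, 0.0          # start in |0>
a0, a1 = hadamard(a0, a1)  # equal superposition: both amplitudes 1/sqrt(2)
a0, a1 = hadamard(a0, a1)  # contributions to |1> cancel: back to |0>

# The probability of each outcome is the squared magnitude of its amplitude.
print(abs(a0) ** 2, abs(a1) ** 2)  # -> 1.0 (up to rounding), 0.0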

Twenty-seven years ago, Shor showed how to do all this for the problem of factoring integers, which breaks the widely used cryptographic codes underlying much of online commerce. We now know how to do it for some other problems, too, but only by exploiting the special mathematical structures in those problems. It's not just a matter of trying all possible answers at once.

Compounding the difficulty is that, if you want to talk honestly about quantum computing, then you also need the conceptual vocabulary of theoretical computer science. I'm often asked how many times faster a quantum computer will be than today's computers. A million times? A billion?

This question misses the point of quantum computers, which is to achieve better scaling behavior, or running time as a function of n, the number of bits of input data. This could mean taking a problem where the best classical algorithm needs a number of steps that grows exponentially with n, and solving it using a number of steps that grows only as n². In such cases, for small n, solving the problem with a quantum computer will actually be slower and more expensive than solving it classically. It's only as n grows that the quantum speedup first appears and then eventually comes to dominate.
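To make that scaling story concrete, here is a throwaway comparison of hypothetical step counts, 2^n for a classical algorithm versus n² for a quantum one (constant factors, which in practice favour classical hardware at small n, are ignored):

```python
# Hypothetical step counts only: exponential classical vs. quadratic
# quantum. Real algorithms carry large constant factors, so the
# crossover happens later than this toy table suggests.
for n in (4, 8, 16, 32, 64):
    print(f"n={n:3d}  classical ~ 2^n = {2**n:,}  quantum ~ n^2 = {n*n:,}")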

But how can we know that there's no classical shortcut, a conventional algorithm that would have similar scaling behavior to the quantum algorithm's? Though typically ignored in popular accounts, this question is central to quantum algorithms research, where often the difficulty is not so much proving that a quantum computer can do something quickly, but convincingly arguing that a classical computer can't. Alas, it turns out to be staggeringly hard to prove that problems are hard, as illustrated by the famous P versus NP problem (which asks, roughly, whether every problem with quickly checkable solutions can also be quickly solved). This is not just an academic issue, a matter of dotting i's: Over the past few decades, conjectured quantum speedups have repeatedly gone away when classical algorithms were found with similar performance.

Note that, after explaining all this, I still haven't said a word about the practical difficulty of building quantum computers. The problem, in a word, is decoherence, which means unwanted interaction between a quantum computer and its environment: nearby electric fields, warm objects, and other things that can record information about the qubits. This can result in premature measurement of the qubits, which collapses them down to classical bits that are either definitely 0 or definitely 1. The only known solution to this problem is quantum error correction: a scheme, proposed in the mid-1990s, that cleverly encodes each qubit of the quantum computation into the collective state of dozens or even thousands of physical qubits. But researchers are only now starting to make such error correction work in the real world, and actually putting it to use will take much longer. When you read about the latest experiment with 50 or 60 physical qubits, it's important to understand that the qubits aren't error-corrected. Until they are, we don't expect to be able to scale beyond a few hundred qubits.
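The idea behind error correction is easiest to see in its classical ancestor, the three-bit repetition code, sketched below. This is only an analogy (quantum codes are much subtler, since an unknown qubit cannot simply be copied), but it shows how spreading one logical bit across several noisy physical bits and decoding by majority vote suppresses errors.

```python
import random

def encode(bit):
    """Repetition code: one logical bit -> three physical bits."""
    return [bit, bit, bit]

def noisy(bits, flip_prob=0.05):
    """Flip each physical bit independently with probability flip_prob."""
    return [b ^ (random.random() < flip_prob) for b in bits]

def decode(bits):
    """Majority vote recovers the logical bit if at most one bit flipped."""
    return int(sum(bits) >= 2)

trials = 100_000
failures = sum(decode(noisy(encode(0))) != 0 for _ in range(trials))
# With per-bit flip probability p = 0.05, the logical failure rate is
# about 3*p**2 ~ 0.0075, versus 0.05 for a single unprotected bit.
print(failures / trials)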

Once someone understands these concepts, I'd say they're ready to start reading, or possibly even writing, an article on the latest claimed advance in quantum computing. They'll know which questions to ask in the constant struggle to distinguish reality from hype. Understanding this stuff really is possible; after all, it isn't rocket science, it's just quantum computing!

See the article here:
Why Is Quantum Computing So Hard to Explain - Quanta Magazine