Archive for the ‘Quantum Computer’ Category

Honeywell Wants To Show What Quantum Computing Can Do For The World – Forbes

The race for quantum supremacy heated up in June, when Honeywell brought to market the world's highest-performing quantum computer. Honeywell claims it is more accurate (i.e., performs with fewer errors) than competing systems and that its performance will increase by an order of magnitude each year for the next five years.

Inside the chamber of Honeywell's quantum computer

"The beauty of quantum computing," says Tony Uttley, President of Honeywell Quantum Solutions, "is that once you reach a certain level of accuracy, every time you add a qubit [the basic unit of quantum information] you double the computational capacity. So as the quantum computer scales exponentially, you can scale your problem set exponentially."

Tony Uttley, President, Honeywell Quantum Solutions
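A back-of-the-envelope illustration of the scaling Uttley describes (textbook arithmetic, not Honeywell's own figures): an n-qubit register is described by 2^n complex amplitudes, so each added qubit doubles the size of the state a classical machine would have to track.

```python
# Illustrative only: the exponential growth of the quantum state space,
# not a statement about any particular vendor's hardware or roadmap.
for n_qubits in range(1, 11):
    amplitudes = 2 ** n_qubits   # an n-qubit state is described by 2**n complex amplitudes
    print(f"{n_qubits:2d} qubits -> {amplitudes:5d} amplitudes")
```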

Uttley sees three distinct eras in the evolution of quantum computing. Today, we are in the emergent era: "you can start to prove what kind of things work, what kind of algorithms show the most promise." For example, the Future Lab for Applied Research and Engineering (FLARE) group of JPMorgan Chase published a paper in June summarizing the results of running complex mathematical calculations used in financial trading applications on the Honeywell quantum computer.

The next era Uttley calls "classically impractical": running computations on a quantum computer that typically are not run on today's (classical) computers because they take too long, consume too much power, and cost too much. Crossing the threshold from emergent to classically impractical is not very far away, he asserts, probably sometime in the next 18 to 24 months. "This is when you build the trust with the organizations you work with that the answer that is coming from your quantum computer is the correct one," says Uttley.

The companies that understand the potential impact of quantum computing on their industries are already looking at what it would take to introduce this new computing capability into their existing processes and what they need to adjust or develop from scratch, according to Uttley. These companies will be ready for the shift from emergent to classically impractical, "which is going to be a binary moment," and they will be able to take advantage of it immediately.

The last stage of the quantum evolution will be "classically impossible": "You couldn't, in the timeframe of the universe, do this computation on the best-performing classical supercomputer that you can on a quantum computer," says Uttley. He mentions quantum chemistry, machine learning, and optimization challenges (warehouse routing, aircraft maintenance) as applications that will benefit from quantum computing. But what shows the most promise right now are "hybrid [resources]: you do just one thing, very efficiently, on a quantum computer, and run the other parts of the algorithm or calculation on a classical computer." Uttley predicts that for the foreseeable future we will see co-processing, combining the power of today's computers with the power of emerging quantum computing solutions.

"You want to use a quantum computer for the more probabilistic parts [of the algorithm] and a classical computer for the more mundane calculations; that might reduce the number of qubits needed," explains Gavin Towler, vice president and chief technology officer of Honeywell Performance Materials Technologies. Towler leads R&D activities for three of Honeywell's businesses: Advanced Materials (e.g., refrigerants), UOP (equipment and services for the oil and gas sector), and Process Automation (automation, control systems, and software for all the process industries). As such, he is the poster boy for a quantum computing lead-user.

Gavin Towler, Vice President and Chief Technology Officer, Honeywell Performance Materials and Technologies

"In the space of materials discovery, quantum computing is going to be critical. That's not a might or a could be. It is going to be the way people do molecular discovery," says Towler. Molecular simulation is used in the design of new molecules, requiring the designer to understand quantum effects. These are intrinsically probabilistic, as are quantum computers, Towler explains.

An example he provides is a refrigerant Honeywell produces that is used in automotive air conditioning, supermarket refrigeration, and homes. As the chlorinated molecules in earlier refrigerants were causing the hole in the ozone layer, they were replaced by HFCs, which later turned out to be very potent greenhouse gases. Honeywell has already found a suitable replacement for the refrigerant used in automotive air conditioning, but is searching for similar solutions for other refrigeration applications. Synthesizing molecules in the lab that will prove to have no effect on the ozone layer or global warming and will not be toxic or flammable is costly. Computer simulation replaces lab work, "but ideally, you want to have computer models that will screen things out to identify leads much faster," says Towler.

This is where the speed of a quantum computer will make a difference, starting with simple molecules like the ones found in refrigerants or in the solvents used to remove CO2 from processes prevalent in the oil and gas industry. "These are relatively simple molecules, with 10-20 atoms, amenable to being modeled with [today's] quantum computers," says Towler. In the future, he expects more powerful quantum computers to assist in developing vaccines and finding new drugs, polymers, and biodegradable plastics, things that contain hundreds and thousands of atoms.

There are three ways by which Towler's counterparts in other companies, the lead-users who are interested in experimenting with quantum computing, can currently access Honeywell's solution: running their programs directly on Honeywell's quantum computer; going through Microsoft Azure Quantum services; or working with two startups that Honeywell has invested in, Cambridge Quantum Computing (CQC) and Zapata Computing, both of which assist in turning business challenges into quantum computing and hybrid computing algorithms.

Honeywell brings to the emerging quantum computing market a variety of skills in multiple disciplines, with its decades-long experience with precision control systems possibly the most important one. "Any at-scale quantum computer becomes a controls problem," says Uttley, "and we have experience in some of the most complex systems integration problems in the world." These past experiences have prepared Honeywell to show what quantum computing can do for the world and to rapidly scale up its solution. "We've built a big auditorium but we are filling out just a few seats right now, and we have lots more seats to fill," Uttley sums up this point in time in Honeywell's journey to quantum supremacy.

See the rest here:
Honeywell Wants To Show What Quantum Computing Can Do For The World - Forbes

Major quantum computational breakthrough is shaking up physics and maths – The Conversation UK

MIP* = RE is not a typo. It is a groundbreaking discovery and the catchy title of a recent paper in the field of quantum complexity theory. Complexity theory is a zoo of "complexity classes": collections of computational problems, of which MIP* and RE are but two.

The 165-page paper shows that these two classes are the same. That may seem like an insignificant detail in an abstract theory without any real-world application. But physicists and mathematicians are flocking to visit the zoo, even though they probably don't understand it all. Because it turns out the discovery has astonishing consequences for their own disciplines.

In 1936, Alan Turing showed that the Halting Problem (algorithmically deciding whether a computer program halts or loops forever) cannot be solved. Modern computer science was born. Its success gave the impression that soon all practical problems would yield to the tremendous power of the computer.

But it soon became apparent that, while some problems can be solved algorithmically, the actual computation would last long after our Sun has engulfed the computer performing it. Figuring out how to solve a problem algorithmically was not enough. It was vital to classify solutions by efficiency. Complexity theory classifies problems according to how hard it is to solve them. The hardness of a problem is measured in terms of how long the computation lasts.

RE (short for "recursively enumerable") stands for problems that can be solved by a computer. It is the zoo. Let's have a look at some subclasses.

The class P consists of problems which a known algorithm can solve quickly (technically, in polynomial time). For instance, multiplying two numbers belongs to P since long multiplication is an efficient algorithm to solve the problem. The problem of finding the prime factors of a number is not known to be in P; the problem can certainly be solved by a computer but no known algorithm can do so efficiently. A related problem, deciding if a given number is a prime, was in similar limbo until 2004 when an efficient algorithm showed that this problem is in P.
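A rough sketch of the contrast drawn above, with made-up example numbers: multiplying two numbers takes time polynomial in their digit count, while the obvious way to recover the factors, trial division, does an amount of work that grows exponentially with the digit count. (This says nothing about whether a cleverer polynomial-time factoring algorithm might exist.)

```python
import math

def multiply(p, q):
    # Long multiplication: polynomial in the number of digits.
    return p * q

def trial_division(n):
    # Return a nontrivial factor of n by checking divisors up to sqrt(n).
    # The number of candidate divisors grows exponentially with the digit count of n.
    for d in range(2, math.isqrt(n) + 1):
        if n % d == 0:
            return d
    return None  # n is prime

n = multiply(104729, 1299709)   # instant, even for far larger inputs
print(n, trial_division(n))     # already ~100,000 divisions to undo one multiplication
```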

Another complexity class is NP. Imagine a maze. "Is there a way out of this maze?" is a yes/no question. If the answer is yes, then there is a simple way to convince us: simply give us the directions, we'll follow them, and we'll find the exit. If the answer is no, however, we'd have to traverse the entire maze without ever finding a way out to be convinced.

Such yes/no problems, for which a "yes" answer can be efficiently demonstrated, belong to NP. Any solution to a problem serves to convince us of the answer, and so P is contained in NP. Surprisingly, a million-dollar question is whether P = NP. Nobody knows.
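The maze example can be made concrete with a short verifier sketch. The maze layout, the start and exit positions, and the candidate path below are invented purely for illustration; the point is that checking a proposed "yes" certificate takes a single pass over the path, however hard it might be to find that path in the first place.

```python
# Hypothetical maze: '#' is a wall, 'S' the start, 'E' the exit.
MAZE = [
    "#######",
    "#S.####",
    "##.#..#",
    "##....#",
    "####.##",
    "####E##",
    "#######",
]
START, EXIT = (1, 1), (5, 4)
MOVES = {"U": (-1, 0), "D": (1, 0), "L": (0, -1), "R": (0, 1)}

def verify(path):
    """Check, in time linear in the path length, that path leads from START to EXIT."""
    r, c = START
    for step in path:
        dr, dc = MOVES[step]
        r, c = r + dr, c + dc
        if MAZE[r][c] == "#":   # stepped into a wall: the certificate is invalid
            return False
    return (r, c) == EXIT

print(verify("RDDRRDD"))  # True: the claimed way out is checked in one quick pass
```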

The classes described so far represent problems faced by a normal computer. But computers are fundamentally changing: quantum computers are being developed. If a new type of computer comes along and claims to solve one of our problems, how can we trust that it is correct?

Imagine an interaction between two entities, an interrogator and a prover. In a police interrogation, the prover may be a suspect attempting to prove their innocence. The interrogator must decide whether the prover is sufficiently convincing. There is an imbalance; knowledge-wise the interrogator is in an inferior position.

In complexity theory, the interrogator is the person, with limited computational power, trying to solve the problem. The prover is the new computer, which is assumed to have immense computational power. An interactive proof system is a protocol that the interrogator can use in order to determine, at least with high probability, whether the prover should be believed. By analogy, these are crimes that the police may not be able to solve, but at least innocents can convince the police of their innocence. This is the class IP.

If multiple provers can be interrogated, and the provers are not allowed to coordinate their answers (as is typically the case when the police interrogate multiple suspects), then we get to the class MIP. Such interrogations, via cross-examining the provers' responses, give the interrogator greater power, so MIP contains IP.

Quantum communication is a new form of communication carried out with qubits. Entanglement, a quantum feature in which qubits remain spookily linked even when separated, makes quantum communication fundamentally different to ordinary communication. Allowing the provers of MIP to share entangled qubits leads to the class MIP*.

It seems obvious that letting the provers share entanglement can only serve to help them coordinate lies rather than assist the interrogator in discovering truth. For that reason, nobody expected that this extra quantum resource would let the interrogator verify a larger class of problems. Surprisingly, we now know that MIP* = RE. This means that quantum entanglement behaves wildly differently to ordinary communication.

In the 1970s, Alain Connes formulated what became known as the Connes Embedding Problem. Grossly simplified, this asked whether infinite matrices can be approximated by finite matrices. The new paper has now proved that this isn't always possible, an important finding for pure mathematicians.

In 1993, meanwhile, Boris Tsirelson pinpointed a problem in physics now known as Tsirelson's Problem. This concerned two different mathematical formalisms of a single situation in quantum mechanics, to date an incredibly successful theory that explains the subatomic world. Being two different descriptions of the same phenomenon, the two formalisms were expected to be mathematically equivalent.

But the new paper now shows that they aren't. Exactly how they can both still yield the same results and both describe the same physical reality is unknown, but it is why physicists are also suddenly taking an interest.

Time will tell what other unanswered scientific questions will yield to the study of complexity. Undoubtedly, MIP* = RE is a great leap forward.

View original post here:
Major quantum computational breakthrough is shaking up physics and maths - The Conversation UK

This Twist on Schrödinger's Cat Paradox Has Major Implications for Quantum Theory – Scientific American

What does it feel like to be both alive and dead?

That question irked and inspired Hungarian-American physicist Eugene Wigner in the 1960s. He was frustrated by the paradoxes arising from the vagaries of quantum mechanics, the theory governing the microscopic realm that suggests, among many other counterintuitive things, that until a quantum system is observed, it does not necessarily have definite properties. Take his fellow physicist Erwin Schrödinger's famous thought experiment, in which a cat is trapped in a box with poison that will be released if a radioactive atom decays. Radioactivity is a quantum process, so before the box is opened, the story goes, the atom has both decayed and not decayed, leaving the unfortunate cat in limbo, a so-called superposition between life and death. But does the cat experience being in superposition?

Wigner sharpened the paradox by imagining a (human) friend of his shut in a lab, measuring a quantum system. He argued it was absurd to say his friend exists in a superposition of having seen and not seen a decay unless and until Wigner opens the lab door. "The 'Wigner's friend' thought experiment shows that things can become very weird if the observer is also observed," says Nora Tischler, a quantum physicist at Griffith University in Australia.

Now Tischler and her colleagues have carried out a version of the Wigner's friend test. By combining the classic thought experiment with another quantum head-scratcher called entanglement, a phenomenon that links particles across vast distances, they have also derived a new theorem, which they claim puts the strongest constraints yet on the fundamental nature of reality. Their study, which appeared in Nature Physics on August 17, has implications for the role that consciousness might play in quantum physics, and even for whether quantum theory must be replaced.

The new work is an "important step forward in the field of experimental metaphysics," says quantum physicist Aephraim Steinberg of the University of Toronto, who was not involved in the study. "It's the beginning of what I expect will be a huge program of research."

Until quantum physics came along in the 1920s, physicists expected their theories to be deterministic, generating predictions for the outcome of experiments with certainty. But quantum theory appears to be inherently probabilistic. The textbook version, sometimes called the Copenhagen interpretation, says that until a system's properties are measured, they can encompass myriad values. This superposition only collapses into a single state when the system is observed, and physicists can never precisely predict what that state will be. Wigner held the then popular view that consciousness somehow triggers a superposition to collapse. Thus, his hypothetical friend would discern a definite outcome when she or he made a measurement, and Wigner would never see her or him in superposition.

This view has since fallen out of favor. "People in the foundations of quantum mechanics rapidly dismiss Wigner's view as spooky and ill-defined because it makes observers special," says David Chalmers, a philosopher and cognitive scientist at New York University. Today most physicists concur that inanimate objects can knock quantum systems out of superposition through a process known as decoherence. Certainly, researchers attempting to manipulate complex quantum superpositions in the lab can find their hard work destroyed by speedy air particles colliding with their systems. So they carry out their tests at ultracold temperatures and try to isolate their apparatuses from vibrations.

Several competing quantum interpretations have sprung up over the decades that employ less mystical mechanisms, such as decoherence, to explain how superpositions break down without invoking consciousness. Other interpretations hold the even more radical position that there is no collapse at all. Each has its own weird and wonderful take on Wigner's test. The most exotic is the many-worlds view, which says that whenever you make a quantum measurement, reality fractures, creating parallel universes to accommodate every possible outcome. Thus, Wigner's friend would split into two copies, and "with good enough supertechnology, he could indeed measure that person to be in superposition from outside the lab," says quantum physicist and many-worlds fan Lev Vaidman of Tel Aviv University.

The alternative Bohmian theory (named for physicist David Bohm) says that at the fundamental level, quantum systems do have definite properties; we just do not know enough about those systems to precisely predict their behavior. In that case, the friend has a single experience, but Wigner may still measure that individual to be in a superposition because of his own ignorance. In contrast, a relative newcomer on the block called the QBism interpretation embraces the probabilistic element of quantum theory wholeheartedly. (QBism, pronounced "cubism," is actually short for quantum Bayesianism, a reference to 18th-century mathematician Thomas Bayes's work on probability.) QBists argue that a person can only use quantum mechanics to calculate how to calibrate his or her beliefs about what he or she will measure in an experiment. "Measurement outcomes must be regarded as personal to the agent who makes the measurement," says Ruediger Schack of Royal Holloway, University of London, who is one of QBism's founders. According to QBism's tenets, quantum theory cannot tell you anything about the underlying state of reality, nor can Wigner use it to speculate on his friend's experiences.

Another intriguing interpretation, called retrocausality, allows events in the future to influence the past. "In a retrocausal account, Wigner's friend absolutely does experience something," says Ken Wharton, a physicist at San Jose State University, who is an advocate for this time-twisting view. But that "something" the friend experiences at the point of measurement can depend upon Wigner's choice of how to observe that person later.

The trouble is that each interpretation is equally good, or bad, at predicting the outcome of quantum tests, so choosing between them comes down to taste. "No one knows what the solution is," Steinberg says. "We don't even know if the list of potential solutions we have is exhaustive."

Other models, called collapse theories, do make testable predictions. These models tack on a mechanism that forces a quantum system to collapse when it gets too big, explaining why cats, people and other macroscopic objects cannot be in superposition. Experiments are underway to hunt for signatures of such collapses, but as yet they have not found anything. Quantum physicists are also placing ever larger objects into superposition: last year a team in Vienna reported doing so with a 2,000-atom molecule. Most quantum interpretations say there is no reason why these efforts to supersize superpositions should not continue upward forever, presuming researchers can devise the right experiments in pristine lab conditions so that decoherence can be avoided. Collapse theories, however, posit that a limit will one day be reached, regardless of how carefully experiments are prepared. "If you try and manipulate a classical observer, a human, say, and treat it as a quantum system, it would immediately collapse," says Angelo Bassi, a quantum physicist and proponent of collapse theories at the University of Trieste in Italy.

Tischler and her colleagues believed that analyzing and performing a Wigner's friend experiment could shed light on the limits of quantum theory. They were inspired by a new wave of theoretical and experimental papers that have investigated the role of the observer in quantum theory by bringing entanglement into Wigner's classic setup. Say you take two particles of light, or photons, that are polarized so that they can vibrate horizontally or vertically. The photons can also be placed in a superposition of vibrating both horizontally and vertically at the same time, just as Schrödinger's paradoxical cat can be both alive and dead before it is observed.

Such pairs of photons can be prepared together, entangled, so that their polarizations are always found to be in opposite directions when observed. That may not seem strange, unless you remember that these properties are not fixed until they are measured. Even if one photon is given to a physicist called Alice in Australia, while the other is transported to her colleague Bob in a lab in Vienna, entanglement ensures that as soon as Alice observes her photon and, for instance, finds its polarization to be horizontal, the polarization of Bob's photon instantly syncs to vibrating vertically. Because the two photons appear to communicate faster than the speed of light, something prohibited by his theories of relativity, this phenomenon deeply troubled Albert Einstein, who dubbed it "spooky action at a distance."

These concerns remained theoretical until the 1960s, when physicist John Bell devised a way to test whether reality is truly spooky, or whether there could be a more mundane explanation behind the correlations between entangled partners. Bell imagined a commonsense theory that was local; that is, one in which influences could not travel between particles instantly. It was also deterministic rather than inherently probabilistic, so experimental results could, in principle, be predicted with certainty, if only physicists understood more about the system's hidden properties. And it was realistic, which, to a quantum theorist, means that systems would have these definite properties even if nobody looked at them. Then Bell calculated the maximum level of correlations between a series of entangled particles that such a local, deterministic and realistic theory could support. If that threshold was violated in an experiment, then one of the assumptions behind the theory must be false.
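Bell's bound can be made concrete with a few lines of arithmetic. The sketch below uses the common CHSH form of the argument: for any local, deterministic, realistic model, a particular combination S of correlations cannot exceed 2, while quantum mechanics applied to an entangled pair predicts values up to 2*sqrt(2). The angles and the textbook singlet-state correlation E(a, b) = -cos(a - b) are standard illustrations, not details taken from the experiments described in this article.

```python
import numpy as np

def E(a, b):
    # Textbook quantum correlation for a maximally entangled (singlet) pair
    # measured at analyzer angles a and b (in radians).
    return -np.cos(a - b)

# Measurement settings that maximize the quantum value of the CHSH combination.
a, a_prime = 0.0, np.pi / 2
b, b_prime = np.pi / 4, 3 * np.pi / 4

S = abs(E(a, b) - E(a, b_prime) + E(a_prime, b) + E(a_prime, b_prime))
print(f"quantum S = {S:.3f}; local-realistic limit = 2")   # prints ~2.828 = 2*sqrt(2)
```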

Such Bell tests have since been carried out, with a series of watertight versions performed in 2015, and they have confirmed reality's spookiness. "Quantum foundations is a field that was really started experimentally by Bell's [theorem], now over 50 years old. And we've spent a lot of time reimplementing those experiments and discussing what they mean," Steinberg says. "It's very rare that people are able to come up with a new test that moves beyond Bell."

The Brisbane team's aim was to derive and test a new theorem that would do just that, providing even stricter constraints, "local friendliness" bounds, on the nature of reality. Like Bell's theory, the researchers' imaginary one is local. They also explicitly ban superdeterminism; that is, they insist that experimenters are free to choose what to measure without being influenced by events in the future or the distant past. (Bell implicitly assumed that experimenters can make free choices, too.) Finally, the team prescribes that when an observer makes a measurement, the outcome is a real, single event in the world; it is not relative to anyone or anything.

Testing local friendliness requires a cunning setup involving two "superobservers," Alice and Bob (who play the role of Wigner), watching their friends Charlie and Debbie. Alice and Bob each have their own interferometer, an apparatus used to manipulate beams of photons. Before being measured, the photons' polarizations are in a superposition of being both horizontal and vertical. Pairs of entangled photons are prepared such that if the polarization of one is measured to be horizontal, the polarization of its partner should immediately flip to be vertical. One photon from each entangled pair is sent into Alice's interferometer, and its partner is sent to Bob's. Charlie and Debbie are not actually human friends in this test. Rather, they are beam displacers at the front of each interferometer. When Alice's photon hits the displacer, its polarization is effectively measured, and it swerves either left or right, depending on the direction of the polarization it snaps into. This action plays the role of Alice's friend Charlie measuring the polarization. (Debbie similarly resides in Bob's interferometer.)

Alice then has to make a choice: She can measure the photon's new deviated path immediately, which would be the equivalent of opening the lab door and asking Charlie what he saw. Or she can allow the photon to continue on its journey, passing through a second beam displacer that recombines the left and right paths, the equivalent of keeping the lab door closed. Alice can then directly measure her photon's polarization as it exits the interferometer. Throughout the experiment, Alice and Bob independently choose which measurements to make and then compare notes to calculate the correlations seen across a series of entangled pairs.

Tischler and her colleagues carried out 90,000 runs of the experiment. As expected, the correlations violated Bell's original bounds, and crucially, they also violated the new local-friendliness threshold. The team could also modify the setup to tune down the degree of entanglement between the photons by sending one of the pair on a detour before it entered its interferometer, gently perturbing the perfect harmony between the partners. When the researchers ran the experiment with this slightly lower level of entanglement, they found a point where the correlations still violated Bell's bound but not local friendliness. This result proved that the two sets of bounds are not equivalent and that the new local-friendliness constraints are stronger, Tischler says. "If you violate them, you learn more about reality," she adds. Namely, if your theory says that "friends" can be treated as quantum systems, then you must either give up locality, accept that measurements do not have a single result that observers must agree on, or allow superdeterminism. Each of these options has profound (and, to some physicists, distinctly distasteful) implications.

The paper is "an important philosophical study," says Michele Reilly, co-founder of Turing, a quantum-computing company based in New York City, who was not involved in the work. She notes that physicists studying quantum foundations have often struggled to come up with a feasible test to back up their big ideas. "I am thrilled to see an experiment behind philosophical studies," Reilly says. Steinberg calls the experiment "extremely elegant" and praises the team for tackling the mystery of the observer's role in measurement head-on.

Although it is no surprise that quantum mechanics forces us to give up a commonsense assumption (physicists knew that from Bell), "the advance here is that we are narrowing in on which of those assumptions it is," says Wharton, who was also not part of the study. Still, he notes, proponents of most quantum interpretations will not lose any sleep. Fans of retrocausality, such as himself, have already made peace with superdeterminism: in their view, it is not shocking that future measurements affect past results. Meanwhile QBists and many-worlds adherents long ago threw out the requirement that quantum mechanics prescribes a single outcome that every observer must agree on.

And both Bohmian mechanics and spontaneous collapse models already happily ditched locality in response to Bell. Furthermore, collapse models say that a real macroscopic friend cannot be manipulated as a quantum system in the first place.

Vaidman, who was also not involved in the new work, is less enthused by it, however, and criticizes the identification of Wigner's friend with a photon. "The methods used in the paper are ridiculous; the friend has to be macroscopic," he says. Philosopher of physics Tim Maudlin of New York University, who was not part of the study, agrees. "Nobody thinks a photon is an observer, unless you are a panpsychist," he says. Because no physicist questions whether a photon can be put into superposition, Maudlin feels the experiment lacks bite. "It rules something out, just something that nobody ever proposed," he says.

Tischler accepts the criticism. "We don't want to overclaim what we have done," she says. The key for future experiments will be scaling up the size of the "friend," adds team member Howard Wiseman, a physicist at Griffith University. The most dramatic result, he says, would involve using an artificial intelligence, embodied on a quantum computer, as the friend. Some philosophers have mused that such a machine could have humanlike experiences, a position known as the strong AI hypothesis, Wiseman notes, though nobody yet knows whether that idea will turn out to be true. But if the hypothesis holds, this quantum-based artificial general intelligence (AGI) would be microscopic. So from the point of view of spontaneous collapse models, it would not trigger collapse because of its size. If such a test were run and the local-friendliness bound was not violated, that result would imply that an AGI's consciousness cannot be put into superposition. In turn, that conclusion would suggest that Wigner was right that consciousness causes collapse. "I don't think I will live to see an experiment like this," Wiseman says. "But that would be revolutionary."

Reilly, however, warns that physicists hoping that future AGI will help them home in on the fundamental description of reality are putting the cart before the horse. "It's not inconceivable to me that quantum computers will be the paradigm shift to get us to AGI," she says. "Ultimately, we need a theory of everything in order to build an AGI on a quantum computer, period, full stop."

That requirement may rule out more grandiose plans. But the team also suggests more modest intermediate tests involving machine-learning systems as friends, which appeals to Steinberg. "That approach is interesting and provocative," he says. "It's becoming conceivable that larger- and larger-scale computational devices could, in fact, be measured in a quantum way."

Renato Renner, a quantum physicist at the Swiss Federal Institute of Technology Zurich (ETH Zurich), makes an even stronger claim: regardless of whether future experiments can be carried out, he says, the new theorem tells us that quantum mechanics needs to be replaced. In 2018 Renner and his colleague Daniela Frauchiger, then at ETH Zurich, published a thought experiment based on Wigner's friend and used it to derive a new paradox. Their setup differs from that of the Brisbane team but also involves four observers whose measurements can become entangled. Renner and Frauchiger calculated that if the observers apply quantum laws to one another, they can end up inferring different results in the same experiment.

"The new paper is another confirmation that we have a problem with current quantum theory," says Renner, who was not involved in the work. He argues that none of today's quantum interpretations can worm their way out of the so-called Frauchiger-Renner paradox without proponents admitting they do not care whether quantum theory gives consistent results. QBists offer the most palatable means of escape, because from the outset, they say that quantum theory cannot be used to infer what other observers will measure, Renner says. "It still worries me, though: If everything is just personal to me, how can I say anything relevant to you?" he adds. Renner is now working on a new theory that provides a set of mathematical rules that would allow one observer to work out what another should see in a quantum experiment.

Still, those who strongly believe their favorite interpretation is right see little value in Tischler's study. "If you think quantum mechanics is unhealthy, and it needs replacing, then this is useful because it tells you new constraints," Vaidman says. "But I don't agree that this is the case; many worlds explains everything."

For now, physicists will have to continue to agree to disagree about which interpretation is best, or whether an entirely new theory is needed. "That's where we left off in the early 20th century; we're genuinely confused about this," Reilly says. "But these studies are exactly the right thing to do to think through it."

[Disclaimer: The author writes frequently for the Foundational Questions Institute, which sponsors research in physics and cosmology, and partially funded the Brisbane team's study.]

Continued here:
This Twist on Schrödinger's Cat Paradox Has Major Implications for Quantum Theory - Scientific American

Quantum mechanics is immune to the butterfly effect – The Economist

That could help with the design of quantum computers

Aug 15th 2020

IN RAY BRADBURY'S science-fiction story "A Sound of Thunder," a character time-travels far into the past and inadvertently crushes a butterfly underfoot. The consequences of that minuscule change ripple through reality such that, upon the time-traveller's return, the present has been dramatically changed.

The butterfly effect describes the high sensitivity of many systems to tiny changes in their starting conditions. But while it is a feature of classical physics, it has been unclear whether it also applies to quantum mechanics, which governs the interactions of tiny objects like atoms and fundamental particles. Bin Yan and Nikolai Sinitsyn, a pair of physicists at Los Alamos National Laboratory, decided to find out. As they report in Physical Review Letters, quantum-mechanical systems seem to be more resilient than classical ones. Strangely, they seem to have the capacity to repair damage done in the past as time unfolds.

To perform their experiment, Drs Yan and Sinitsyn ran simulations on a small quantum computer made by IBM. They constructed a simple quantum system consisting of qubits, the quantum analogue of the familiar one-or-zero bits used by classical computers. Like an ordinary bit, a qubit can be either one or zero. But it can also exist in superposition, a chimerical mix of both states at once.

Having established the system, the authors prepared a particular qubit by setting its state to zero. That qubit was then allowed to interact with the others in a process called quantum scrambling, which, in this case, mimics the effect of evolving a quantum system backwards in time. Once this virtual foray into the past was completed, the authors disturbed the chosen qubit, destroying its local information and its correlations with the other qubits. Finally, the authors performed a reversed scrambling process on the now-damaged system. This was analogous to running the quantum system all the way forwards in time to where it all began.

They then checked to see how similar the final state of the chosen qubit was to the zero state it had been assigned at the beginning of the experiment. The classical butterfly effect suggests that the researchers' meddling should have changed it quite drastically. In the event, the qubit's original state had been almost entirely recovered. Its state was not quite zero, but it was, in quantum-mechanical terms, 98.3% of the way there, a difference that was deemed insignificant. "The final output state after the forward evolution is essentially the same as the input state before backward evolution," says Dr Sinitsyn. "It can be viewed as the same input state plus some small background noise." Oddest of all was the fact that the further back in simulated time the damage was done, the greater the rate of recovery, as if the quantum system was repairing itself with time.
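The protocol has a simple structure that can be sketched numerically. The toy below is not the authors' IBM circuit: it uses a Haar-random unitary as a stand-in for the scrambling ("backward evolution"), damages the chosen qubit with a projective measurement, undoes the scrambling, and then reads off how close the chosen qubit ends up to its original zero state. The qubit count, random seeds and choice of perturbation are all assumptions made for illustration.

```python
import numpy as np
from scipy.stats import unitary_group

n = 6                                    # number of qubits (arbitrary choice)
dim = 2 ** n
rng = np.random.default_rng(0)

# Chosen qubit (qubit 0) starts in |0>; the others start in a random product state.
state = np.array([1.0, 0.0], dtype=complex)
for _ in range(n - 1):
    amp = rng.normal(size=2) + 1j * rng.normal(size=2)
    state = np.kron(state, amp / np.linalg.norm(amp))

U = unitary_group.rvs(dim, random_state=1)   # scrambling, standing in for "backward evolution"
scrambled = U @ state

# Damage the chosen qubit: a projective measurement in the computational basis,
# which wipes out its local information and its correlations with the rest.
P0 = np.kron(np.diag([1.0, 0.0]), np.eye(dim // 2))
P1 = np.kron(np.diag([0.0, 1.0]), np.eye(dim // 2))
p0 = np.linalg.norm(P0 @ scrambled) ** 2
damaged = (P0 if rng.random() < p0 else P1) @ scrambled
damaged /= np.linalg.norm(damaged)

recovered = U.conj().T @ damaged             # reverse scrambling ("forward evolution")

# Reduced density matrix of the chosen qubit; its (0, 0) entry is the overlap with |0>.
M = recovered.reshape(2, dim // 2)
rho = M @ M.conj().T
print("fidelity of the chosen qubit with |0>:", rho[0, 0].real)
```

The number printed depends on the seed and the qubit count, and it will not match the 98.3% reported for the authors' particular circuit; the qualitative point is that the chosen qubit typically comes back much closer to |0> than the roughly fifty-fifty outcome a classical butterfly-effect intuition would suggest.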

The mechanism behind all this is known as entanglement. As quantum objects interact, their states become highly correlated, entangled, in a way that serves to diffuse localised information about the state of one quantum object across the system as a whole. Damage to one part of the system does not destroy information in the same way as it would with a classical system. Instead of losing your work when your laptop crashes, having a highly entangled system is a bit like having back-ups stashed in every room of the house. Even though the information held in the disturbed qubit is lost, its links with the other qubits in the system can act to restore it.

The upshot is that the butterfly effect seems not to apply to quantum systems. Besides making life safe for tiny time-travellers, that may have implications for quantum computing, too, a field into which companies and countries are investing billions of dollars. "We think of quantum systems, especially in quantum computing, as very fragile," says Natalia Ares, a physicist at the University of Oxford. That this result demonstrates that quantum systems can in fact be unexpectedly robust is an encouraging finding, and bodes well for potential future advances in the field.

This article appeared in the Science & technology section of the print edition under the headline "A flutter in time"

Here is the original post:
Quantum mechanics is immune to the butterfly effect - The Economist

Supercomputers Just Hosted the Most Detailed Tornado and Earthquake Simulations Ever – HPCwire

Even with the pandemic raging, natural disasters are having a busy 2020: tornadoes ravaged Nashville a few months ago; the chances of a new big one have dramatically risen in California's fault zones; and meteorologists are anticipating a stronger-than-usual hurricane season for the U.S. More than ever, understanding and anticipating these events is crucial, and now two teams of researchers have announced that they have used supercomputers to run the highest-resolution simulations ever of tornadoes and earthquakes.

While researchers have understood the basics of tornado formation for some time, the particulars are difficult to work out; so difficult, in fact, that the National Weather Service has a 70 percent false alarm rate for tornado warnings. Leigh Orf, an atmospheric scientist with the University of Wisconsin-Madison's Space Science and Engineering Center, is on a quest to change that using the most detailed tornado simulations ever produced.

Using a piece of software he developed, Orf has been simulating and visualizing fully resolved tornadoes and their parent supercells for a decade. To run these powerful simulations, Orf has used a variety of supercomputers, most recently Frontera at the Texas Advanced Computing Center (TACC). Frontera delivers 23.5 Linpack petaflops of computing power, placing it 8th on the most recent Top500 list of the world's most powerful publicly ranked supercomputers. With Frontera, Orf has been able to run simulations at high spatial and temporal resolutions: ten meters and a fifth of a second, respectively.

"It is only with this level of granularity that some features become evident," Orf said in an interview with TACC's Aaron Dubrow. "We need to throw a lot of computational power to get it right and resolve salient features. Ultimately, the goal is prediction, but the truth is, we still don't understand some basic things about how supercell thunderstorms really work. It's really hard to answer questions like, will this supercell that just formed produce a tornado, and if so, will it be especially violent?"

Orf's research has, to date, produced a variety of insights into the tornadogenesis process. When studying a deadly tornado event in Oklahoma, for instance, Orf found several characteristic features that might help explain how the tornadoes formed. "In these simulations, there's a lot of spinning going on that you wouldn't see with the naked eye," he said. "That spinning is sometimes in the form of vortex sheets rolling up, or misocyclones, what you might call mini tornadoes, that aren't quite tornado strength, that spin along different boundaries in the storm." Similarly, his simulations revealed that certain types of currents serve as driving forces for tornado intensity.

Now, with his allocation on Frontera, Orf is looking to re-simulate storms in a variety of conditions to see how minor variable changes might impact the formation or intensity of tornadoes. "Very small changes early on in the simulation can lead to very big changes in the simulation down the road," he said. "This is an intrinsic predictability issue in our field. We're doing some of the frontier work to try to tease out these variables."

While Orf is looking to the sky, a team at Lawrence Livermore National Laboratory (LLNL) is looking to the ground. Using code developed at LLNL, the researchers simulated a magnitude 7.0 earthquake on the Hayward Fault, which runs along the San Francisco Bay Area. The new simulations ran at double the resolution of previous iterations, capturing seismic waves as short as 50 meters across the entire fault zone. These simulations, too, required extraordinary computing power: in this case, LLNL's Sierra system, which delivers 94.6 Linpack petaflops, placing it third on the most recent Top500 list. The Sierra-based simulations were run during Sierra's open science period in 2018, before it switched to classified work. The team also made use of LLNL's Lassen system (an unclassified machine with similar architecture to Sierra), which delivers 18.2 Linpack petaflops and placed 14th.

"The [Institutional Center of Excellence] prepared computer codes at LLNL to run efficiently on Sierra and Lassen prior to their arrival so they could immediately take advantage of those capabilities when they came online, and this earthquake simulation and other science-based projects are achieving exactly what they were meant to do," said Chris Clouse, associate program director for computational physics at LLNL, in an interview with LLNL's Anne Stark.

"We used a recently developed empirical model to correct ground motions for the effects of soft soils not included in the Sierra calculations," said Arthur Rodgers, a seismologist at LLNL. "These improved the realism of the simulated shaking intensities and bring the results into closer agreement with expected values."

Now, with hurricane season beginning, eyes are turning to the wide range of weather and climate supercomputer centers, many of which have recently received large installations or investments, to see if the 2020 hurricane season can be more accurately anticipated.

Original post:
Supercomputers Just Hosted the Most Detailed Tornado and Earthquake Simulations Ever - HPCwire