Media Search:



Who will dominate the tech arms race? – The Jerusalem Post

"It is almost impossible to overstate what a quantum computer will be able to do," Christopher Monroe told the Magazine in a recent interview.

Monroe, a professor at both the University of Maryland and Duke University as well as co-founder of the quantum computing company IonQ, discussed how quantum computing will change the face of the planet, even if this might take some more time.

The Magazine also interviewed four other experts in the quantum field and visited seven of their labs at the University of Maryland.


These labs, the full likes of which do not yet exist in Israel, hosted all kinds of qubits (the basis of quantum computers); lasers blasting targets so that plasma comes off and forms distinctive films; infrared lasers; furnaces reaching 2,000°C; a tetra-arc furnace for growing silicon crystals; special dilution refrigerators to achieve cryostorage (deep freezing); and a variety of vacuum chambers that would seem like an alternate reality to the uninitiated.

Before entering each lab, there had to be a conversation about whether this reporter should wear the special goggles handed out to avoid being blinded.

One top quantum official at Maryland, Prof. Dr. Johnpierre Paglione, assured the Magazine that the ultrahazardous materials warning on many of the lab doors was not a concern at that moment.

From cracking the Internet as we know it, to military and economic dominance, to changing the way people manage their lives, quantum computers are predicted to make mincemeat of today's supercomputers. Put simply, they are made out of and operate from a completely different kind of material and set of principles connected to qubits and quantum mechanics, with computing potential that dwarfs classical computers' capabilities.

But let's say the US wins the race: who in the US would win it? Would it be giants like Google, Microsoft, Amazon, IBM and Honeywell? Or might it be a lean and fast, solely quantum-focused challenger like Monroe's IonQ?

At first glance, Google has no real challenger. In 2019, Google said it achieved "quantum supremacy" when its quantum computer became the first to perform a calculation that would be practically impossible for a classical machine, by checking the outputs from a quantum random-number generator.

The search-engine giant has already built a 54-qubit computer, whereas IonQ's largest quantum computer has only 32 qubits. Google has also promised to achieve the holy grail of quantum computing, a system large enough to revolutionize the Internet and military and economic issues, by 2029. Although China recently reproduced Google's experiment, Google is still regarded as ahead of the game.

Why is a 32-qubit quantum computer better than a 54-qubit one?

So why is Monroe so confident that his company will finish the race long before Google?

First, he takes a shot at the Google 2019 experiment.

"It was a fairly academic exercise. The problem they attacked was one of those rare problems where you can prove something, and you can prove the supercomputer cannot do it. Quantum mechanics works. It is not a surprise. The problem Google tackled was utterly useless. The system was not flexible enough to program to hit other problems. So a big company did a big academic demonstration," he said with a sort of whoop-dee-do tone and expression on his face.

"Google had to repeat its experiment millions of times. The signal went down by orders of magnitude. There are special issues to get the data. There are general problems where it cannot maintain [coherence]. In the Google experiment, the qubits decayed by seven times the time constant. We gauge on one time constant, and we can do 100 operations with IonQ's quantum computers."

In radioactive decay, the time constant is related to the decay constant and essentially represents the average lifetime of a decaying system, such as an atom. Some of the tactics for potentially overcoming decay go back to the lasers, vacuum chambers and cryostorage refrigerators mentioned above.
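For reference, the relationship alluded to here has a standard exponential form, which applies both to radioactive decay and, by analogy, to the loss of a qubit's signal:

```latex
N(t) = N_0\, e^{-\lambda t} = N_0\, e^{-t/\tau}, \qquad \tau = \frac{1}{\lambda}
```

Here λ is the decay constant and τ the time constant (the average lifetime). After one time constant, about 37% (1/e) of the original signal survives; after seven, only about 0.09% (e⁻⁷) remains, which is why a decay of "seven times the constant" amounts to a signal loss of orders of magnitude.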

Monroe said that from a business perspective, the experiment was "a big distraction, and you will hear this from Google computer employees. They had to run simulations to prove how hard it would be to do what they were doing with old computers instead of building better quantum computers and solving useful algorithms."

"We believe quantum computers work; now it is time to build them," he stressed.

Describing IonQ's quantum computers, Monroe said, "The 32-qubit computer is fifth generation. The third and fourth generations are available to [clients of] Microsoft, Amazon and Google Cloud. It is 11 qubits, which is admittedly small, but it still runs more than any IBM machine can run. An 11-qubit computer is very clean operationally. It can run 100 or so ops [operations] before the laser noise causes coherence to be lost [before the qubits stop working]. That is many more ops than superconductors. If [a computer] has one million qubits but can only run a few ops, it is boring. But with trapped ions, adding more qubits at the same time makes things cheaper."
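Monroe's figure of "100 or so ops" can be read as a rule of thumb: the useful depth of a quantum circuit is roughly the reciprocal of the per-gate error rate. A minimal sketch of that arithmetic in Python; the error rates are illustrative assumptions, not published specifications of any machine:

```python
# Rule of thumb: a circuit of n gates, each failing with probability p,
# succeeds with probability (1 - p)^n, so useful depth is roughly 1/p.
# The error rates below are illustrative assumptions, not vendor specs.

def useful_depth(gate_error: float) -> int:
    """Approximate number of gates before cumulative error dominates."""
    return int(1 / gate_error)

def circuit_success(gate_error: float, n_gates: int) -> float:
    """Probability that all n_gates execute without an error."""
    return (1 - gate_error) ** n_gates

print(useful_depth(0.01))          # -> 100 gates at 1% error per gate
print(circuit_success(0.01, 100))  # -> ~0.366, i.e. about 1/e
```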

He added, "The 32-qubit computer is not yet on the cloud. We are working in private with customers' financials," noting that a future publication will discuss "the baby version of an algorithm which could be very interesting when you start to scale it up. Maybe in the next generation, we can engineer it to solve an optimization problem, something we don't get from the cloud, where we don't get any telemetry, which would be an unusual benefit for clients."

According to Monroe, being able to build a 1,000-qubit computer by 2025 (practically tomorrow in the sphere of new inventions) will in and of itself be game-changing. This is true even if such a machine is not yet capable of accomplishing all the extreme miracles that much larger quantum computers may someday accomplish.

A major innovation or risk (depending on your worldview) by Monroe is how he treats the paramount challenge of quantum computers: error correction. This is basically the idea that, for quantum computers to work, some process must be conceived to prevent qubits from decaying at the rate they currently do; otherwise, crucial calculations get interrupted mid-calculation.

Here, Monroe both critiques the Google approach and responds to criticism from some of his academic colleagues about his own approach to error correction. Google, he said, "is trying to get to one million qubits that do not work well together."

In contrast, a special encoding process could allow IonQ to create what Monroe called a single sort of "super qubit," which would eliminate 99.9% of native errors. This, he argued, is the easiest way to get better at quantum computing, as opposed to the quantity-over-quality path Google is pursuing.

But he has to defend himself from others poking holes in his approach as unrealistic, including some of his colleagues at the University of Maryland (all sides still express great respect for each other). Confronted with this criticism, he responded that their line of attack "was based on the theory of error correction. It implies that you will do indefinitely long computations, [but] no one will ever need this high a standard to do business."

"We do not use error correction on our CPU [central processing unit] because silicon is so stable. We call it OK if it fails in one year, since that is more than enough time to be economically worthwhile." Instead of trying to eliminate errors, his strategy is to gradually add more qubits, achieving slightly more substantial results with each step. His goal is to work around the error-correction problem.

Part of the difference between Monroe and his academic colleagues relates to his having crossed over into a mix of business and academia. Monroe's view on this issue? "Industry and academia do not always see things the same way. Academics are trained to prove everything we do. But if a computer works better to solve a certain problem, we do not need to prove it."

For example, if a quantum computer doubled the value of a financial portfolio compared to a supercomputer's financial recommendations, the client is thrilled, even if no one knows how it happened.

He said that when shortcuts solve problems, and certain things cannot be proven but quantum computing finds value anyway, "academics hate it. They are trained to be pessimists. I do believe quantum computers will find narrow applications within five years."

Besides error correction, another question is what the qubits themselves, the basis of different kinds of quantum computers, should be made out of. The technique that many of his competitors are using to make computers out of a particular kind of qubit has the benefit of being easy and inexpensive to carry out, and it represents beautiful physics.

However, he warned, "No one knows where to find it, if it exists... So stay in solid-state physics and build computers out of solid-state systems. Google, Amazon and others are all invested in solid-state computers. But I don't see it happening without fundamental physics breakthroughs. If you want to build and engineer a device, if you want to have a business, you should not be reliant on physics breakthroughs."

Instead of the path of his competitors, Monroe emphasized working with natural quantum atoms and tricking and engineering them to act how he wants using low pressure instead of low temperatures.

"I work with charged atoms, or ions. We levitate them inside a vacuum chamber, which is getting smaller every year. We have a silicon chip; just electrodes and electric force fields are holding up these atoms. There are no solids and no air in the vacuum chamber, which means the atoms remain extremely well isolated. They are the most perfect atoms we know, so we can scale without worrying about the top of the noise [the threshold where qubits decay]. We can pick qubit levels that do not yet decay."

"Why aren't Google and IBM investing in natural qubits? Because they have a blind spot. They have been first in solid-state physics and engineering for 50 years. If there is a silicon solid-state quantum computer, Intel will make that, but I don't see how it will be scaled," he declared.

MONROE IS far from the full quantum show at Maryland.

Paglione has been a professor at the University of Maryland for 13 years and the director of the Maryland Quantum Materials Center for the last five years.

"In 1986, the center was working on high-temperature superconductors," Paglione said, noting that work on quantum computers is a more recent development. The development has not merely altered the focus of the center's research. According to Paglione, it has also helped grow the center from around seven staff members 30 years ago to around 100 staff members when all of the affiliate members, students and administrative staff are taken into account.

Similarly, Dr. Gretchen Campbell, director of the Joint Quantum Institute, told the Magazine that a big part of her institution's role, and her personal role, has been to first bring together people from atomic physics and condensed-matter physics ("even within physics, we do not always talk to each other"), followed by connecting these experts with computer science experts.

Campbell explained it was crucial to explore the interaction between the quantum realm and quantum algorithms, for which they needed more math and computer science backgrounds, and to continue to move from laboratories to real-world applications, translating research into technology and interacting more with industry.

She also guided the Magazine, goggles donned, through a lab with a digital micromirror device and laser beams relating to atom clouds and light projectors.

Add in some additional departments at Maryland, as well as a partnership with the National Institute of Standards and Technology (NIST), and the number of staff swells way past 100. What are their many different teams working on? The lab studies and experiments are as varied as the different disciplines, with Paglione talking about possibilities such as sensitive magnetic sensors constructed from superconducting quantum interference devices (SQUIDs).

Paglione said magnetometer systems built around SQUIDs could sense the magnetic field of samples. These could be used as detectors in water: if made sensitive enough, they could sense changes in a magnetic field, such as when a passing submarine alters the field.

This has drawn attention from the US Department of Defense.

A multidisciplinary mix of Paglione's team recently captured the most direct evidence to date of a quantum quirk that permits particles to tunnel through a barrier as if it were not even there. The upshot could be assisting engineers in designing more uniform components for both future quantum computers and quantum sensors (reported applications could detect not only submarines but aircraft).

Paglione's team, headed by Ichiro Takeuchi, a professor of materials science and engineering at Maryland, successfully carried out a new experiment in which they observed Klein tunneling. In the quantum world, tunneling enables particles, such as electrons, to pass through a barrier even if they lack sufficient energy to actually climb over it. A taller barrier usually makes climbing over harder, and fewer particles are able to cross through. Klein tunneling happens when the barrier becomes completely transparent, opening up a portal that particles can traverse regardless of the barrier's height.
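The contrast can be stated in textbook form. For an ordinary quantum particle with energy E facing a barrier of height V₀ > E and width d, transmission falls off exponentially, whereas for Klein tunneling at normal incidence the barrier is perfectly transparent:

```latex
T_{\text{ordinary}} \approx e^{-2\kappa d}, \quad \kappa = \frac{\sqrt{2m(V_0 - E)}}{\hbar};
\qquad T_{\text{Klein}} = 1 \ \text{(normal incidence, for any } V_0 \text{ and } d\text{)}
```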

Scientists and engineers from Maryland's Center for Nanophysics and Advanced Materials, the Joint Quantum Institute and the Condensed Matter Theory Center, along with the Department of Materials Science and Engineering and the Department of Physics, succeeded in making the most compelling measurements of the phenomenon to date.

Given that Klein tunneling was initially predicted to occur in the world of high-energy quantum particles moving close to the speed of light, observing the effect was viewed as impossible. That was until scientists revealed that some of the rules governing fast-moving quantum particles can also apply to the comparatively sluggish particles traveling near the surface of some highly unusual materials.

"It was a piece of serendipity that the unusual material and an elemental relative of sorts shared the same crystal structure," said Paglione. "However, the multidisciplinary team we have was one of the keys to this success. Having experts on topological physics, thin-film synthesis, spectroscopy and theoretical understanding really got us to this point."

Bringing this back to quantum computing, the idea is that interactions between superconductors and other materials are central ingredients in some quantum computer architectures and precision-sensing devices. Yet there has always been a problem: the junction, or crossover spot, where they interact is slightly different from device to device. Takeuchi said this has sucked up countless amounts of time and energy in tuning and calibrating to reach the best performance.

Takeuchi said Klein tunneling could eliminate this variability, which has played havoc with device-to-device interactions.

AN ENTIRELY separate quantum application could be physics department chairman Prof. Steve Rolston's work on establishing a quantum communications network. Rolston explained that when a pair of photons is quantum entangled, you can achieve quantum encryption over a communications network by using the entangled particles to create secure keys that cannot be hacked. There are varying paths to achieve such a quantum network, and Rolston is skeptical of others in the field who could be seen as cutting corners.
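To make the idea concrete, here is a toy, purely classical sketch of the key-sifting step of an entanglement-based protocol (E91-style): when the two parties happen to measure their halves of an entangled pair in the same basis, their outcomes agree, and that round contributes a shared key bit. This cartoon deliberately omits the Bell-test check that detects eavesdroppers and does not simulate quantum mechanics itself:

```python
import secrets

def run_round():
    """One entangled pair: each side picks a random basis (Z or X).
    Matching bases give perfectly correlated outcomes; mismatched
    rounds are discarded during sifting."""
    basis_a, basis_b = secrets.choice("ZX"), secrets.choice("ZX")
    if basis_a != basis_b:
        return None
    bit = secrets.randbits(1)  # stand-in for the correlated measurement outcome
    return bit, bit

key_a, key_b = [], []
while len(key_a) < 16:
    outcome = run_round()
    if outcome is not None:
        a, b = outcome
        key_a.append(a)
        key_b.append(b)

assert key_a == key_b  # both parties now hold the same 16-bit key
print("shared key:", "".join(map(str, key_a)))
```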

He also is underwhelmed by Chinas achievements in this area. According to Rolston, no one has figured out how to extend a secure quantum network over any space sizable enough to make the network usable and marketable in practical terms.

Rather, he said, existing quantum networks are either limited to very small spaces, or, to extend their range, they must employ gimmicks that usually impair their security. Because of these limitations, Rolston went so far as to say that, in his view, the US National Security Agency regards the issue as a distraction.

In terms of export trade barriers or issues with China, he said he opposes controls and believes cooperation in the quantum realm should continue, especially since all of his centers research is made public anyway.

Rolston also lives up to Monroe's framing of the difference between academics and industry-focused people. He said that even Monroe would have to admit that no one is close to the true holy grail of quantum computing (computers with a massive number of qubits) and that the IonQ founder is instead banking on interesting optimization problems being solvable for industry to an extent that will justify the hype.

In contrast, Rolston remained pessimistic that such smaller quantum computers would achieve sufficient superiority at optimization issues in business to justify a rushed prediction that transforming the world is just around the corner.

In Rolston's view, the longer, more patient and steadier path is the one that will eventually reap rewards.

For the moment, we do not know whether Google or IonQ, or those like Monroe or Rolston, will eventually be able to declare they were right. We do know that whoever is right, and whoever is first, will radically change the world as we know it.


Why Quantum Resistance Is the Next Blockchain Frontier – Tech Times


As decentralized networks secured by potentially thousands of miners and/or nodes, blockchains are widely considered to be an incredibly secure example of distributed ledger technology.

On the back of this, they also have dozens of potential applications, ranging from decentralized content storage networks to medical records databases and supply chain management. But to this day, they're most commonly thought of as the ideal platform for hosting the financial infrastructure of tomorrow, such as decentralized exchanges and payment settlement networks.

But there's a problem. While the blockchains of today are practically unhackable - due to the type of encryption they use to secure private keys and transactions - this might not be the case for much longer. This is due to the advent of so-called "quantum computers", that is, computers that can leverage the properties of quantum mechanics to solve problems that would be impossible with traditional computers... such as breaking the cryptography that secures current generation blockchains.

Many blockchains of today use at least two types of cryptographic algorithms - asymmetric key algorithms and hash functions.

The first kind, also known as public-key cryptography, is used to produce pairs of private and public keys that are provably cryptographically linked. In Bitcoin, this private key is used to spend UTXOs - thereby transferring value from one person to another. The second kind - the hash function - is used to securely process raw transaction data into a block in a way that is practically irreversible.
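The hash-function half of this is easy to see with Python's standard library. Bitcoin applies SHA-256 twice; the sketch below runs that construction on made-up placeholder data to show the practical irreversibility: nothing about a digest hints at its input, and any change to the input yields an unrelated digest:

```python
import hashlib

def double_sha256(data: bytes) -> bytes:
    """Bitcoin-style double SHA-256 (used, for example, on block headers)."""
    return hashlib.sha256(hashlib.sha256(data).digest()).digest()

# Placeholder transaction data, purely illustrative.
print(double_sha256(b"alice pays bob 0.5 BTC").hex())
print(double_sha256(b"alice pays bob 9.5 BTC").hex())  # unrelated digest
```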

As you might imagine, a sufficiently powerful quantum computer capable of breaking either of these security mechanisms could have devastating consequences for susceptible blockchains - since they could be used to potentially derive private keys or even mine cryptocurrency units much faster than the expected rate (leading to supply inflation).

So, just how far away from this are we? Well, according to recent estimates, a quantum computer possessing 4,000 qubits of processing power could be the minimum necessary to break the public key cryptography that secures Bitcoin user funds. A sufficiently flexible quantum computer with this processing power could, theoretically, take over the funds contained in any Bitcoin p2pk address - that's a total of around 2 million BTC (circa $67 billion at today's rates).

Fortunately, this isn't an immediate concern. As it stands, the world's most powerful quantum computer, the Zuchongzhi quantum computer, currently clocks in at an impressive (albeit insufficient) 66 qubits. However, given the rapid pace of development in the quantum computing sector, some experts predict that Bitcoin's Elliptic Curve Digital Signature Algorithm (ECDSA) could meet its quantum match within a decade.


The algorithm that could be potentially used to break ECDSA has already been developed. If generalized and applied by a powerful enough quantum computer, it is widely thought that Peter Shor's polynomial time quantum algorithm would be able to attack the Bitcoin blockchain - while similar algorithms could be applied to other forms of traditional encryption.
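For the curious, the quantum part of Shor's algorithm is period finding; the rest is classical number theory. The sketch below brute-forces the period (the step a quantum computer would perform exponentially faster) and then applies the classical post-processing to the textbook example of factoring 15:

```python
from math import gcd

def find_period(a: int, n: int) -> int:
    """Order r of a modulo n: the smallest r with a^r = 1 (mod n).
    This brute force stands in for the quantum subroutine."""
    r, x = 1, a % n
    while x != 1:
        x = (x * a) % n
        r += 1
    return r

def shor_postprocess(n: int, a: int) -> tuple:
    """Classical step of Shor's algorithm: turn the period into factors."""
    r = find_period(a, n)
    if r % 2 == 1:
        raise ValueError("odd period; retry with a different a")
    return gcd(a ** (r // 2) - 1, n), gcd(a ** (r // 2) + 1, n)

print(shor_postprocess(15, 7))  # period 4 -> factors (3, 5)
```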

But this might not be a concern for much longer, thanks to the introduction of what many consider to be the world's first truly quantum-resistant blockchain. The platform, known as QANplatform, is built to resist all known quantum attacks by using lattice cryptography. QAN manages to achieve quantum resistance while simultaneously tackling the energy concerns that come with some other blockchains through its highly efficient consensus mechanism known as Proof-of-Randomness (PoR).

Unlike some other so-called quantum-resistant blockchains, QAN is unusual in that it also supports decentralized applications (DApps) - allowing developers to launch quantum-resistant DApps within minutes using its free developer tools.

Besides platforms like QAN, the development communities behind several popular blockchains are already beginning to consider implementing their own quantum-resistance solutions, such as the recently elaborated commit-delay-reveal scheme, which could be used to transition Bitcoin to a quantum-resistant state. Nonetheless, the future of post-quantum cryptography still remains up in the air, as none of the top ten blockchains by user count have yet committed to a specific quantum-resistant signature scheme.




Life, the universe and everything: Physics seeks the future – The Economist

Aug 25th 2021

A WISE PROVERB suggests not putting all your eggs in one basket. Over recent decades, however, physicists have failed to follow that wisdom. The 20th century, and indeed the 19th before it, were periods of triumph for them. They transformed understanding of the material universe and thus people's ability to manipulate the world around them. Modernity could not exist without the knowledge won by physicists over those two centuries.


In exchange, the world has given them expensive toys to play with. The most recent of these, the Large Hadron Collider (LHC), which occupies a 27km-circumference tunnel near Geneva and cost $6bn, opened for business in 2008. It quickly found a long-predicted elementary particle, the Higgs boson, that was a hangover from calculations done in the 1960s. It then embarked on its real purpose, to search for a phenomenon called Supersymmetry.

This theory, devised in the 1970s and known as Susy for short, is the all-containing basket into which particle physics's eggs have until recently been placed. Of itself, it would eliminate many arbitrary mathematical assumptions needed for the proper working of what is known as the Standard Model of particle physics. But it is also the vanguard of a deeper hypothesis, string theory, which is intended to synthesise the Standard Model with Einstein's general theory of relativity. Einstein's theory explains gravity. The Standard Model explains the other three fundamental forces (electromagnetism and the weak and strong nuclear forces) and their associated particles. Both describe their particular provinces of reality well. But they do not connect together. String theory would connect them, and thus provide a so-called theory of everything.

String theory proposes that the universe is composed of minuscule objects which vibrate in the manner of the strings of a musical instrument. Like such strings, they have resonant frequencies and harmonics. These various vibrational modes, string theorists contend, correspond to various fundamental particles. Such particles include all of those already observed as part of the Standard Model, the further particles predicted by Susy (which posits that the Standard Model's mathematical fragility will go away if each of that model's particles has a heavier supersymmetric partner particle, or "sparticle"), and also particles called gravitons, which are needed to tie the force of gravity into any unified theory, but are not predicted by relativity.

But, no Susy, no string theory. And, 13 years after the LHC opened, no sparticles have shown up. Even two as-yet-unexplained results announced earlier this year (one from the LHC and one from a smaller machine) offer no evidence directly supporting Susy. Many physicists thus worry they have been on a wild-goose chase.

They have good reason to be nervous. String theory already comes with a disturbing conceptual price tag: that of adding six (or in one version seven) extra dimensions to the universe, over and above the four familiar ones (three of space and one of time). It also describes about 10⁵⁰⁰ possible universes, only one of which matches the universe in which human beings live. Accepting all that is challenging enough. Without Susy, though, string theory goes bananas. The number of dimensions balloons to 26. The theory also loses the ability to describe most of the Standard Model's particles. And it implies the existence of weird stuff such as particles called tachyons that move faster than light and are thus incompatible with the theory of relativity. Without Susy, string theory thus looks pretty much dead as a theory of everything. Which, if true, clears the field for non-string theories of everything.

The names of many of these do, it must be conceded, torture the English language. They include causal dynamical triangulation, asymptotically safe gravity, loop quantum gravity and the amplituhedron formulation of quantum theory. But at the moment the bookies favourite for unifying relativity and the Standard Model is something called entropic gravity.

Entropy is a measure of a system's disorder. Famously, the second law of thermodynamics asserts that it increases with time (ie, things have a tendency to get messier as they get older). What that has to do with a theory of gravity, let alone of everything, is not, perhaps, immediately obvious. But the link is black holes. These are objects which have such strong gravitational fields that even light cannot escape from them. They are predicted by the mathematics of general relativity. And even though Einstein remained sceptical about their actual existence until the day he died in 1955, subsequent observations have shown that they are indeed real. But they are not black.

In 1974 Stephen Hawking, of Cambridge University, showed that quantum effects at a black hole's boundary allow it to radiate particles, especially photons, which are the particles of electromagnetic radiation, including light. This has peculiar consequences. Photons carry radiant heat, so something which emits them has a temperature. And, from its temperature and mass, it is possible to calculate a black hole's entropy. This matters because, when all these variables are plugged into the first law of thermodynamics, which states that energy can be neither created nor destroyed, only transformed from one form (say, heat) into another (say, mechanical work), what pops out are Einstein's equations of general relativity.
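The key quantities have closed forms worth writing down: a black hole of mass M has a Hawking temperature and a Bekenstein-Hawking entropy proportional to the area A of its event horizon:

```latex
T_H = \frac{\hbar c^3}{8\pi G M k_B}, \qquad
S = \frac{k_B c^3 A}{4 G \hbar}, \qquad
A = 16\pi \left(\frac{GM}{c^2}\right)^{2}
```

Note the inverse dependence of the temperature on M: smaller black holes are hotter.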

That relationship was discovered in 2010 by Erik Verlinde of Amsterdam University. It has serious implications. The laws of thermodynamics rely on statistical mechanics. They involve properties (temperature, entropy and so on) which emerge from probabilistic descriptions of the behaviour of the underlying particles involved. These are also the particles described by quantum mechanics, the mathematical theory which underpins the Standard Model. That Einstein's equations can be rewritten thermodynamically implies that space and time are also emergent properties of this deeper microscopic picture. The existing forms of quantum mechanics and relativity thus do indeed both seem derivable in principle from some deeper theory that describes the underlying fabric of the universe.

String theory is not so derivable. Strings are not fundamental enough entities. But entropic gravity claims to describe the very nature of space and time, or, to use Einsteinian terminology, spacetime. It asserts this is woven from filaments of quantum entanglement linking every particle in the cosmos.

The idea of quantum entanglement, another phenomenon pooh-poohed by Einstein that turned out to be true, goes back to 1935. It is that the properties of two or more objects can be correlated ("entangled") in a way which means they cannot be described independently. This leads to weird effects. In particular, it means that two entangled particles can appear to influence each other's behaviour instantaneously even when they are far apart. Einstein dubbed this "spooky action at a distance", because it seems to violate the premise of relativity theory that, in the speed of light, the universe has a speed limit.

As with black holes, Einstein did not live long enough to see himself proved wrong. Experiments have nevertheless shown he was. Entanglement is real, and does not violate relativity because although the influence of one particle on another can be instantaneous there is no way to use the effect to pass information faster than light-speed. And, in the past five years, Brian Swingle of Harvard University and Sean Carroll of the California Institute of Technology have begun building models of what Dr Verlinde's ideas might mean in practice, using ideas from quantum information theory. Their approach employs bits of quantum information (so-called qubits) to stand in for the entangled particles. The result is a simple but informative analogue of spacetime.

Qubits, the quantum equivalent of classical bits (the ones and zeros on which regular computing is built), will be familiar to those who follow the field of quantum computing. They are the basis of quantum information theory. Two properties distinguish qubits from the regular sort. First, they can be placed in a state of superposition, representing both a one and a zero at the same time. Second, several qubits can become entangled. Together, these properties let quantum computers accomplish feats such as performing multiple calculations at once, or completing certain classes of calculation in a sensible amount of time, that are difficult or impossible for a regular computer.
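Both properties can be seen in a few lines of linear algebra. A minimal numpy sketch: a Hadamard gate puts one qubit into superposition, a CNOT entangles it with a second, and sampling the resulting Bell state only ever yields the correlated outcomes 00 and 11:

```python
import numpy as np

zero = np.array([1, 0], dtype=complex)                       # |0>
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)  # Hadamard gate
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=complex)

# Superpose qubit 0, then entangle it with qubit 1: the Bell state
# (|00> + |11>)/sqrt(2).
state = CNOT @ np.kron(H @ zero, zero)

probs = np.abs(state) ** 2          # [0.5, 0, 0, 0.5]
rng = np.random.default_rng(0)
print(rng.choice(["00", "01", "10", "11"], size=10, p=probs))
# Only "00" and "11" ever appear: the two qubits' outcomes are correlated.
```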

And because of their entanglement, qubits can also, according to Dr Swingle and Dr Carroll, be used as stand-ins for how reality works. More closely entangled qubits represent particles at points in spacetime that are closer together. So far, quantum computers being a work in progress, this modelling can be done only with mathematical representations of qubits. These do, though, seem to obey the equations of general relativity. That supports entropic-gravity theory's claims.

All of this modelling puts entropic gravity in pole position to replace strings as the long-sought theory of everything. But the idea that spacetime is an emergent property of the universe rather than being fundamental to it has a disturbing consequence. It blurs the nature of causality.

In the picture built by entropic gravity, spacetime is a superposition of multiple states. It is this which muddies causality. The branch of maths that best describes spacetime is a form of geometry that has four axes at right angles to each other instead of the more familiar three. The fourth represents time, so, like the position of objects, the order of events in spacetime is determined geometrically. If different geometric arrangements are superposed, as entropic gravity requires, it can therefore sometimes happen that the statements "A causes B" and "B causes A" are both true.
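In that geometry (Minkowski spacetime), the invariant interval between two nearby events carries a minus sign on the time axis, and it is the sign of this interval that decides whether one event can causally influence another:

```latex
ds^2 = -c^2\,dt^2 + dx^2 + dy^2 + dz^2
```

Timelike separations (ds² < 0) admit a causal link; spacelike ones (ds² > 0) do not. Superposing different geometries therefore superposes different causal orders, which is how "A causes B" and "B causes A" can both hold.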

This is not mere speculation. In 2016 Giulia Rubino of the University of Bristol, in England, constructed an experiment involving polarised photons and prisms which achieved exactly that. This spells trouble for those who have old-fashioned notions about causality's nature.

However, Lucien Hardy of the Perimeter Institute, in Canada, has discovered a way to reformulate the laws of quantum mechanics to get around this. In his view, causality as commonly perceived is like data compression in computing: it is a concept that gives you more bang for your buck. With a little bit of information about the present, causality can infer a lot about the future, compressing the amount of information needed to capture the details of a physical system in time.

But causality, Dr Hardy thinks, may not be the only way to describe such correlations. Instead, he has invented a general method for building descriptions of the patterns in correlations from scratch. This method, which he calls the causaloid framework, tends to reproduce causality but it does not assume it, and he has used it to reformulate both quantum theory (in 2005) and general relativity (in 2016). Causaloid maths is not a theory of everything. But there is a good chance that if and when such a theory is found, causaloid principles will be needed to describe it, just as general relativity needed a geometry of four dimensions to describe spacetime.

Entropic gravity has, then, a lot of heavy-duty conceptual work to back it up. But it is not the only candidate to replace string theory. Others jostling for attention include an old competitor called loop quantum gravity, originally proposed in 1994 by Carlo Rovelli, then at the University of Pittsburgh, and Lee Smolin, of the Perimeter Institute. This, and causal dynamical triangulation, a more recent but similar idea, suggest that spacetime is not the smooth fabric asserted by general relativity but, rather, has a structure: either elementary loops or triangles, according to which of the two theories you support.

A third option, asymptotically safe gravity, goes back still further, to 1976. It was suggested by Steven Weinberg, one of the Standard Model's chief architects. A natural way to develop a theory of quantum gravity is to add gravitons to the model. Unfortunately, this approach got nowhere, because when the interactions of these putative particles were calculated at higher energies, the maths seemed to become nonsensical. However, Weinberg, who died in July, argued that this apparent breakdown would go away (in maths speak, the calculations would be asymptotically safe) if sufficiently powerful machines were used to do the calculating. And, with the recent advent of supercomputers of such power, it looks, from early results, as if he might have been right.

One of the most intriguing competitors of entropic gravity, though, is the amplituhedron formulation of quantum theory. This was introduced in 2013 by Nima Arkani-Hamed of the Institute for Advanced Study at Princeton and Jaroslav Trnka of the University of California, Davis. They have found a class of geometric structures dubbed amplituhedrons, each of which encodes the details of a possible quantum interaction. These, in turn, are facets of a master amplituhedron that encodes every possible type of physical process. It is thus possible to reformulate all of quantum theory in terms of the amplituhedron.

Most attempts at a theory of everything try to fit gravity, which Einstein describes geometrically, into quantum theory, which does not rely on geometry in this way. The amplituhedron approach does the opposite, by suggesting that quantum theory is actually deeply geometric after all. Better yet, the amplituhedron is not founded on notions of spacetime, or even statistical mechanics. Instead, these ideas emerge naturally from it. So, while the amplituhedron approach does not as yet offer a full theory of quantum gravity, it has opened up an intriguing path that may lead to one.

That space, time and even causality may be emergent rather than fundamental properties of the cosmos is a radical idea. But this is the point. General relativity and quantum mechanics, the physics revolutions of the 20th century, were viewed as profound precisely because they overthrew common sense. To accept relativity meant abandoning a universal notion of time and space. To take quantum mechanics seriously meant getting comfortable with ideas like entanglement and superposition. Embracing entropic gravity or its alternatives will require similar feats of the imagination.

No theory, though, is worth a damn without data. That, after all, is the problem with Supersymmetry. Work like Dr Rubino's points the way. But something out of a particle-physics laboratory would also be welcome. And, though their meaning is obscure, the past few months have indeed seen two experimentally induced cracks in the Standard Model.

On March 23rd a team from CERN, the organisation that runs the LHC, reported an unexpected difference in behaviour between electrons and their heavier cousins, muons. These particles differ from one another in no known properties but their masses, so the Standard Model predicts that when other particles decay into them, the two should each be produced in equal numbers. But this appears not to be true. Interim results from the LHC suggest that a type of particle called a B-meson is more likely to decay into an electron than a muon. That suggests an as-yet-undescribed fundamental force is missing from the Standard Model. Then, on April 7th, Fermilab, Americas biggest particle-physics facility, announced the interim results of its own muon experiment, Muon g-2.

In the quantum world, there is no such thing as a perfect vacuum. Instead, a froth of particles constantly pops in and out of existence everywhere in spacetime. These are virtual rather than real particles; that is, they are transient fluctuations which emerge straight out of quantum uncertainty. But, although they are short-lived, during the brief periods of their existence they still have time to interact with more permanent sorts of matter. They are, for example, the source of the black-hole radiation predicted by Hawking.

The strengths of their interactions with types of matter more conventional than black holes are predicted by the Standard Model, and to test these predictions, Muon g-2 shoots muons in circles around a powerful superconducting magnetic-storage ring. The quantum froth changes the way the muons wobble, which detectors can pick up with incredible precision. The Muon g-2 experiment suggests that the interactions causing these wobbles are slightly stronger than the Standard Model predicts. If confirmed, this would mean the model is missing one or more elementary particles.

There is a slim chance that these are the absent sparticles. If so, it is the supporters of supersymmetry who will have the last laugh. But nothing points in this direction and, having failed thus far to stand their ideas up, they are keeping sensibly quiet.

Whatever the causes of these two results, they do show that there is something out there which established explanations cannot account for. Similarly unexplained anomalies were starting points for both quantum theory and relativity. It looks possible, therefore, that what has seemed one of physicss darkest periods is about to brighten into a new morning.

This article appeared in the Science & technology section of the print edition under the headline "Bye, bye, little Susy"


This Exotic Particle Had an Out-of-Body Experience; These Surprised Scientists Took a Picture of It – SciTechDaily

Artist's illustration of ghost particles moving through a quantum spin liquid. Credit: Jenny Nuss/Berkeley Lab

An unexpected finding by scientists at Berkeley Lab and UC Berkeley could advance quantum computers and high-temperature superconductors.

Scientists have taken the clearest picture yet of electronic particles that make up a mysterious magnetic state called a quantum spin liquid (QSL).

The achievement could facilitate the development of superfast quantum computers and energy-efficient superconductors.

The scientists are the first to capture an image of how electrons in a QSL decompose into spin-like particles called spinons and charge-like particles called chargons.


"Other studies have seen various footprints of this phenomenon, but we have an actual picture of the state in which the spinon lives. This is something new," said study leader Mike Crommie, a senior faculty scientist at Lawrence Berkeley National Laboratory (Berkeley Lab) and physics professor at UC Berkeley.

"Spinons are like ghost particles. They are like the Bigfoot of quantum physics: people say that they've seen them, but it's hard to prove that they exist," said co-author Sung-Kwan Mo, a staff scientist at Berkeley Lab's Advanced Light Source. "With our method we've provided some of the best evidence to date."

In a QSL, spinons freely move about carrying heat and spin but no electrical charge. To detect them, most researchers have relied on techniques that look for their heat signatures.

Now, as reported in the journal Nature Physics, Crommie, Mo, and their research teams have demonstrated how to characterize spinons in QSLs by directly imaging how they are distributed in a material.

Schematic of the triangular spin lattice and star-of-David charge density wave pattern in a monolayer of tantalum diselenide. Each star consists of 13 tantalum atoms. Localized spins are represented by a blue arrow at the star center. The wavefunction of the localized electrons is represented by gray shading. Credit: Mike Crommie et al./Berkeley Lab

To begin the study, Mo's group at Berkeley Lab's Advanced Light Source (ALS) grew single-layer samples of tantalum diselenide (1T-TaSe2) that are only three atoms thick. This material is part of a class of materials called transition metal dichalcogenides (TMDCs). The researchers in Mo's team are experts in molecular beam epitaxy, a technique for synthesizing atomically thin TMDC crystals from their constituent elements.

Mo's team then characterized the thin films through angle-resolved photoemission spectroscopy, a technique that uses X-rays generated at the ALS.

Scanning tunneling microscopy image of a tantalum diselenide sample that is just 3 atoms thick. Credit: Mike Crommie et al./Berkeley Lab

Using a microscopy technique called scanning tunneling microscopy (STM), researchers in the Crommie lab, including co-first authors Wei Ruan, a postdoctoral fellow at the time, and Yi Chen, then a UC Berkeley graduate student, injected electrons from a metal needle into the tantalum diselenide TMDC sample.

Images gathered by scanning tunneling spectroscopy (STS), an imaging technique that measures how particles arrange themselves at a particular energy, revealed something quite unexpected: a layer of mysterious waves with wavelengths larger than one nanometer (1 billionth of a meter) blanketing the material's surface.

"The long wavelengths we saw didn't correspond to any known behavior of the crystal," Crommie said. "We scratched our heads for a long time. What could cause such long-wavelength modulations in the crystal? We ruled out the conventional explanations one by one. Little did we know that this was the signature of spinon ghost particles."

With help from a theoretical collaborator at MIT, the researchers realized that when an electron is injected into a QSL from the tip of an STM, it breaks apart into two different particles inside the QSL: spinons (also known as ghost particles) and chargons. This is due to the peculiar way in which spin and charge in a QSL collectively interact with each other. The spinons end up separately carrying the spin while the chargons separately bear the electrical charge.

Illustration of an electron breaking apart into spinon ghost particles and chargons inside a quantum spin liquid. Credit: Mike Crommie et al./Berkeley Lab

In the current study, STM/STS images show that the chargons freeze in place, forming what scientists call a star-of-David charge-density wave. Meanwhile, the spinons undergo an "out-of-body experience" as they separate from the immobilized chargons and move freely through the material, Crommie said. "This is unusual, since in a conventional material electrons carry both the spin and charge combined into one particle as they move about," he explained. "They don't usually break apart in this funny way."

Crommie added that QSLs might one day form the basis of robust quantum bits (qubits) used for quantum computing. In conventional computing a bit encodes information either as a zero or a one, but a qubit can hold both zero and one at the same time, thus potentially speeding up certain types of calculations. Understanding how spinons and chargons behave in QSLs could help advance research in this area of next-gen computing.

Another motivation for understanding the inner workings of QSLs is that they have been predicted to be a precursor to exotic superconductivity. Crommie plans to test that prediction with Mos help at the ALS.

"Part of the beauty of this topic is that all the complex interactions within a QSL somehow combine to form a simple ghost particle that just bounces around inside the crystal," he said. "Seeing this behavior was pretty surprising, especially since we weren't even looking for it."

Reference: "Evidence for quantum spin liquid behaviour in single-layer 1T-TaSe2 from scanning tunnelling microscopy" by Wei Ruan, Yi Chen, Shujie Tang, Jinwoong Hwang, Hsin-Zon Tsai, Ryan L. Lee, Meng Wu, Hyejin Ryu, Salman Kahn, Franklin Liou, Caihong Jia, Andrew Aikawa, Choongyu Hwang, Feng Wang, Yongseong Choi, Steven G. Louie, Patrick A. Lee, Zhi-Xun Shen, Sung-Kwan Mo and Michael F. Crommie, 19 August 2021, Nature Physics. DOI: 10.1038/s41567-021-01321-0

Researchers from SLAC National Accelerator Laboratory; Stanford University; Argonne National Laboratory; the Massachusetts Institute of Technology; the Chinese Academy of Sciences, Shanghai Tech University, Shenzhen University, Henan University of China; and the Korea Institute of Science and Technology and Pusan National University of Korea contributed to this study. (Co-first author Wei Ruan is now an assistant professor of physics at Fudan University in China; co-first author Yi Chen is currently a postdoctoral fellow at the Center for Quantum Nanoscience, Institute for Basic Science of Korea.)

This work was supported by the DOE Office of Science, and used resources at Berkeley Lab's Advanced Light Source and Argonne National Laboratory's Advanced Photon Source. The Advanced Light Source and Advanced Photon Source are DOE Office of Science user facilities.

Additional support was provided by the National Science Foundation.


11 Reasons Why Chess Is The King Of All Games – Chess.com

Ever since 2020, chess has seen tremendous growth as a game. In December 2019, chess was averaging a little over 1,900 viewers per month on Twitch. Fast forward to February 2021, when PogChamps 3 was happening, and that number grew to 30,000.

While events like PogChamps and The Queen's Gambit have elevated the game's popularity to new heights, that doesn't explain why chess is still so successful. After all, hype can only last for so long. Millions of the newer chess players are still enjoying the game, long after Beth Harmon made her last move against Borgov.

So, why is this ancient board game so magnetic? What makes chess one of, if not the, greatest games of all time?

Here are 11 reasons:

Have you ever tried to eat a juicy and delicious steak using nothing but flimsy plastic silverware? If you haven't, let me tell you: it's a terribly frustrating experience. You try to savor that fine meal, but the plastic fork shatters in your hands while your food gets colder and colder.

Playing a game with beautiful graphics and a great storyline but awful gameplay is much like that experience. You sit there trying to have a good time, but you simply don't have the means to do it. In the gaming world, gameplay is king.

Gameplay is, at an elementary level, the way a user interacts with a game. There are quite a few technical factors that go into "great gameplay," and let me tell you: chess hits them all. And it hits them hard. Good playability, just the right balance between challenge and reward, plenty of tools for players to improve... you name it!

Chess.com makes this point stick out even more! Fair pairing is a no-brainer; you can't make illegal moves (even that mysterious pawn move is legal!); we take care of your clock for you; you can learn all the rules for free. What a great time to be alive!

That alone would be reason enough to make it a great game, but there's more.

Time for some mind-boggling numbers. Let's suppose two random people are playing a game of chess. After only two moves by each player, the game could've reached one of 197,281 different positions (including the Fool's Mate). After four moves by each side, there are 84,998,978,956 possibilities.
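Counts like these can be reproduced with a "perft" (performance test) walk of the legal-move tree. Here is a sketch using the third-party python-chess library; depth 4 (two moves by each player) reproduces the 197,281 figure quickly, while the 84,998,978,956 figure is the known depth-8 count from published perft tables, far too many nodes for this naive walker to enumerate:

```python
import chess  # third-party package: pip install python-chess

def perft(board: chess.Board, depth: int) -> int:
    """Count the leaf nodes of the legal-move tree `depth` half-moves deep."""
    if depth == 0:
        return 1
    total = 0
    for move in list(board.legal_moves):
        board.push(move)
        total += perft(board, depth - 1)
        board.pop()
    return total

print(perft(chess.Board(), 4))  # -> 197281 (two moves by each player)
# Depth 8 (four moves by each side) is 84,998,978,956 per published
# perft tables; verify it with a fast engine, not this sketch.
```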

In the video below, Dr. James Grime from the Numberphile YouTube channel talks about the number of possible chess games. If you like numbers, I highly suggest you watch the video. If you don't like numbers, I still think you should watch it as it'll blow your mind.

According to Dr. Grime, if we take into consideration only three sensible moves per player per turn and a 20-move game (20 moves for each side), you would still have 10⁴⁰ different games that could arise. I know this doesn't mean much to us, but Dr. Grime puts it into perspective: "If, for example, everyone in the world paired off and they had to play a [different] game of chess every day, (...) to play all possible games, it would still take you trillions and trillions of years to play them all."

Mind-blowing.

As you can see, chess is a game that's full of possibilities and replayability. You'll hardly ever play the same game twice unless you're playing for a quick draw with the Bongcloud. Yet the game is consistent enough that it doesn't feel like you're navigating a sea of chaos every time you start a new game. That balance is just divine!

If you do a quick search online, you can find a complete tournament chess set and board for less than $20. What's more, that chess set will probably last for a lifetimeno patches or upgrades needed.

If you don't feel like you need a tournament-approved chess set, the price goes even lower. For less than $15, you can find decent sets that will also last forever.

Would you rather spend $15 on a month of Netflix? That's fine. You can play chess for free on Chess.com! And if you're good enough, you don't even need an internet connection to playyou can do it using nothing but your mind! Take that, Grand Theft Auto!

Do you know that Dragonite you spent hours training to make invincible? When you turn your console off (or play the next generation of Pokemon), all that work will go down the drain. What a waste.

As satisfying as it is to level up your character or the monsters you carry around in your pocket, there's nothing like leveling yourself up. When you're playing chess, you are the one getting faster, sharper, and stronger. Not only that, but you can also transfer the skills you've acquired from chess to other aspects of your life.

We also have the added benefit that people think chess players are geniuses. Would you mind not letting them know that's not true? I appreciate it.

"Appealing visuals in chess? Are you serious? Have you ever seen games like Assassin's Creed or Watch Dogs? How can you even compare chess to them?" Well, first of all, I'm not comparing chess to those games. Chess is clearly much better (wink).

When I say "appealing visuals," I'm not talking about mind-boggling 3D effects or ultra-realistic representations of reality. But if you've ever taken some time to appreciate the beauty of a wooden chess set, you know what I mean. Take an extra eight minutes to watch the video below, and I dare you to not fall in love with the shapes and elegance of chess pieces:

Chess is one of the oldest board games in the world. The last time chess had a meaningful change in its rules was most likely more than 400 years ago. Compare that to the fast-paced world of online games where new versions or new game patches come out very often.

While some might say that's boring, I would suggest it's marvelous. Try watching a replay of Counter-Strike 1.6 or playing the first few versions of FIFA. You'll tear up, and not positively. Aside from the nostalgia, there's little value to doing that.

Now, I invite you to take a look at this game played in 1851, aptly called the Immortal Game. If this game doesn't give you the chills, I don't know what will.

If you enjoy classical, near-perfect intellectual battles, chess is the game for you. If you like fast-paced games fueled by trash-talking, chess is also the game for you. Chess has something for everyone.

From the classy World Chess Championship to the meme-esque PogChamps, chess appeals to all audiences and brings us all together. It's hard to see a game that can bring together men in suits, women in dresses, and streamers wearing whimsical shirts. Yet, somehow chess can pull it off.

If you like gambling, this is probably not a good thing about chess. But if you enjoy playing games where the outcome depends (almost) purely on your ability, chess is perfect for you!

While games of chance can be fun, they can also be disheartening. There is a scene in the hit show "The Office" that illustrates this point. Kevin, the office's accountant and seasoned poker player, is at a poker table that includes Phyllis, a saleswoman with no poker experience. Kevin goes all-in with three queens but loses after Phyllis accidentally finds out she had "all the clovers" (a flush).

While skilled poker players do tend to win more in the long run, this sort of unlucky blow can and does happen occasionally. And let's face it: once is more than enough. So, apart from the sporadic mouse slip, the future of your chess games depends exclusively on you.

"Now you're just pushing things." I know, this seems like a bit too much. I don't blame you if you call me out in the comments. However, let me explain myself.

While two people sitting down and staring at a board for hours on end may not seem like much, in reality, it is. With every passing second during a chess game, both players intensely calculate the infinite possibilities ahead of them. Although the board remains unchanged for several minutes, the players and spectators constantly move pieces around in their heads. IM Levy Rozman's video lets you peek behind the curtain to appreciate all the action going on inside GM Garry Kasparov's head while playing against GM Anatoly Karpov:

I don't know of many games where both players are always playing. Even people watching the game are thinking about multiple variations and trying to figure out the next move. Now, that's what I call action!

I understand if you're not satisfied with my logic, though. You prefer "real" actionintense battles with stuff flying around the screen and where every millisecond counts. Well, I leave you with this short clip:

When it comes to online gaming, chess is the OG. Chess was the first app on a computer and the first game ever played online. If that's not cool enough for you, I'll give you one more: chess was likely one of the first games played between Earth and space. It happened in 1970, when two Russian cosmonauts played against ground control. They even had a special zero-gravity chessboard with them to make this possible.

Does Valorant have special zero-gravity gear for their players? Didn't think so.

There's only one thing that can bring art, science, and fun together, and no, I'm not talking about papier-mache volcanoes.

Chess competitions have been around for a few hundred years. As with any other competitive game or sport, serious players who dedicated their lives to chess have developed numerous techniques to improve and get ahead of the pack. In particular, GM Mikhail Botvinnik (who even looks like a scientist, by the way) gets credit for giving chess a systematic training approach for professional players. Chess improvement became a science.

Still, chess leaves room for improvisation and creativity. Take some time to go over some of GM Mikhail Tal's best games and I dare you not to be awestruck. Much like Salvador Dali paintings, Tal's jaw-dropping sacrifices can take you to a surreal world that will leave you breathless.

Tal's games are only a tiny fraction of the truly stunning games played throughout history. Check out this masterpiece created in 1912. Legend has it this game was so beautiful that spectators started throwing gold coins over the board after the game was over. Simply exquisite!

Now, the best part of it all. Not everyone is as smart as a scientist or inspiring as an artist. Yet, we can all have fun with chess. Sure, my games lack precision and when I move my queen within the enemy pawn's reach you can bet good money it's a blunder. Nonetheless, I still have a lot (maybe even too much) of fun with chess.

And my guess is that if you play chess (or if you start playing) you will too.

What are your favorite things about chess? Let us know in the comments below!
