Archive for the ‘Quantum Computing’ Category

FM holds talks with US NSF chief, discusses collaboration in science and technology – Devdiscourse

Finance Minister Nirmala Sitharaman on Sunday met the Director of the US National Science Foundation (NSF), Sethuraman Panchanathan, and discussed fostering ties in domains such as artificial intelligence, space, agriculture and health. The two sides discussed areas of collaboration in science and technology (S&T) that emerged during the meeting between Prime Minister Narendra Modi and US President Joe Biden at the QUAD Summit in Tokyo in May, the finance ministry said in a series of tweets. "Both sides emphasised the need to further enhance and strengthen the time-tested, democratic & value-based mutual partnership in specific domains such as artificial intelligence, data science, quantum computing, space, agriculture and health," it added. Panchanathan indicated that many projects will be launched soon in association with the Department of Science and Technology under six technology innovation hubs. "While talking about the mission and objectives of @NSF, @DrPanch elaborated on achieving innovation at speed and scale with an inclusion- and solution-based approach in research," the ministry tweeted. "Union Finance Minister Smt. @nsitharaman talked about India's achievement in fostering innovation through #AtalInnovationMission, #Start-upIndia, #StandUpIndia, reforms in patent processes and advancement of appropriate technology in agriculture," it added.

(This story has not been edited by Devdiscourse staff and is auto-generated from a syndicated feed.)


CXL Brings Datacenter-sized Computing with 3.0 Standard, Thinks Ahead to 4.0 – HPCwire

A new version of a standard backed by major cloud providers and chip companies could change the way some of the world's largest datacenters and fastest supercomputers are built.

The CXL Consortium on Tuesday announced a new specification, CXL 3.0 (Compute Express Link 3.0), that eliminates more of the chokepoints that slow down computation in enterprise computing and datacenters.

The new spec provides a communication link between chips, memory and storage in systems, and it is twice as fast as its predecessor, CXL 2.0.

CXL 3.0 also has improvements for more fine-grained pooling and sharing of computing resources for applications such as artificial intelligence.

"CXL 3.0 is all about improving bandwidth and capacity, and can better provision and manage computing, memory and storage resources," said Kurt Lender, the co-chair of the CXL marketing work group (and a senior ecosystem manager at Intel), in an interview with HPCwire.

Hardware and cloud providers are coalescing around CXL, which has steamrolled other competing interconnects. This week, OpenCAPI, an IBM-backed interconnect standard, merged with the CXL Consortium, following in the footsteps of Gen-Z, which did the same in 2020.

The consortium released the first CXL 1.0 specification in 2019 and quickly followed it up with CXL 2.0, which runs on PCIe 5.0, found in a handful of chips such as Intel's Sapphire Rapids and Nvidia's Hopper GPU.

The CXL 3.0 spec is based on PCIe 6.0, which was finalized in January. CXL 3.0 has a data transfer speed of up to 64 gigatransfers per second, the same as PCIe 6.0.
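Those transfer rates translate into link bandwidth in a straightforward way. The sketch below is a back-of-the-envelope calculation, not part of either specification: it approximates raw one-direction bandwidth as transfer rate times lane count divided by eight bits per byte, ignoring the small FLIT-encoding and protocol overheads of a real link.

```python
# Back-of-the-envelope bandwidth for CXL links riding on PCIe electricals.
# Real usable throughput is slightly lower due to encoding and protocol
# overhead; this is only a first approximation.

def raw_bandwidth_gbps(gigatransfers_per_sec: float, lanes: int) -> float:
    """Raw one-direction bandwidth in GB/s: GT/s * lanes / 8 bits per byte."""
    return gigatransfers_per_sec * lanes / 8

print(raw_bandwidth_gbps(64, 16))  # x16 CXL 3.0 / PCIe 6.0 link -> 128.0 GB/s
print(raw_bandwidth_gbps(32, 16))  # x16 CXL 2.0 / PCIe 5.0 link -> 64.0 GB/s
```

The halved figure for the 32 GT/s link is consistent with the article's point that CXL 3.0 doubles the speed of its predecessor.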

The CXL interconnect can link up chips, storage and memory that are near or far from each other, allowing system providers to build datacenters as one giant system, said Nathan Brookwood, principal analyst at Insight 64.

CXL's ability to support the expansion of memory, storage and processing in a disaggregated infrastructure gives the protocol a step up over rival standards, Brookwood said.

Datacenter infrastructures are moving to a decoupled structure to meet the growing processing and bandwidth needs for AI and graphics applications, which require large pools of memory and storage. AI and scientific computing systems also require processors beyond just CPUs, and organizations are installing AI boxes, and in some cases, quantum computers, for more horsepower.

CXL 3.0 improves bandwidth and capacity through better switching and fabric technologies, the CXL Consortium's Lender said.

"CXL 1.1 was sort of in the node, then with 2.0, you can expand a little bit more into the datacenter. And now you can actually go across racks, you can do decomposable or composable systems, with the fabric technology that we've brought with CXL 3.0," Lender said.

At the rack level, one can make CPU or memory drawers as separate systems, and improvements in CXL 3.0 provide more flexibility and options in switching resources compared to previous CXL specifications.

Typically, servers have a CPU, memory and I/O, and can be limited in physical expansion. In disaggregated infrastructure, one can take a cable to a separate memory tray through a CXL protocol without relying on the popular DDR bus.

"You can decompose or compose your datacenter as you like it. You have the capability of moving resources from one node to another, and don't have to do as much overprovisioning as we do today, especially with memory," Lender said, adding, "you can grow systems and sort of interconnect them now through this fabric and through CXL."

The CXL 3.0 protocol uses the electricals of the PCI-Express 6.0 protocol, along with its own protocols for I/O and memory. Improvements include support for new processors and endpoints that can take advantage of the added bandwidth. CXL 2.0 supported only single-level switching; 3.0 adds multi-level switching, which enables larger fabric topologies at the cost of some added latency.

"You can actually start looking at memory like storage: you could have hot memory and cold memory, and so on. You can have different tiering, and applications can take advantage of that," Lender said.

The protocol also accounts for the ever-changing infrastructure of datacenters, providing more flexibility on how system administrators want to aggregate and disaggregate processing units, memory and storage. The new protocol opens more channels and resources for new types of chips that include SmartNICs, FPGAs and IPUs that may require access to more memory and storage resources in datacenters.

"[With] composable systems, you're not bound by a box. HPC loves clusters today. And [with CXL 3.0] now you can do coherent clusters and low latency. The growth and flexibility of those nodes is expanding rapidly," Lender said.

The CXL 3.0 protocol can support up to 4,096 nodes, and has a new concept of memory sharing between different nodes. That is an improvement from a static setup in older CXL protocols, where memory could be sliced and attached to different hosts, but could not be shared once allocated.

"Now we have sharing, where multiple hosts can actually share a segment of memory. Now you can actually look at quick, efficient data movement between hosts if necessary, or if you have an AI-type application that you want to hand data from one CPU or one host to another," Lender said.
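The shift Lender describes, from hosts with private memory to hosts attaching to a shared segment, has a rough software analogy in OS-level shared memory, where two processes map the same segment instead of copying data between private buffers. The sketch below is only that analogy, not a CXL API: the "hosts" here are ordinary processes in one machine.

```python
# A software analogy for CXL 3.0 host-to-host memory sharing: two handles
# ("host A" and "host B") attach to one shared segment, so a write by one
# is immediately visible to the other, with no copy in between.
from multiprocessing import shared_memory

shm = shared_memory.SharedMemory(create=True, size=8)   # "pooled" segment
try:
    shm.buf[0] = 42                                     # "host A" writes
    view = shared_memory.SharedMemory(name=shm.name)    # "host B" attaches
    print(view.buf[0])                                  # reads the same byte: 42
    view.close()
finally:
    shm.close()
    shm.unlink()
```

In the pre-3.0 CXL model this segment could be carved up and handed to one host at a time; the 3.0 sharing model is closer to both handles being live at once, with hardware coherence doing what the OS does here.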

The new feature allows peer-to-peer connection between nodes and endpoints in a single domain. That sets up a wall in which traffic can be isolated to move only between nodes connected to each other. That allows for faster accelerator-to-accelerator or device-to-device data transfer, which is key in building out a coherent system.

"If you think about some of the applications and then some of the GPUs and different accelerators, they want to pass information quickly, and now they have to go through the CPU. With CXL 3.0, they don't have to go through the CPU this way, but the CPU is coherent, aware of what's going on," Lender said.

The pooling and allocation of memory resources is managed by software called the Fabric Manager. The software can sit anywhere in the system or on hosts to control and allocate memory, but it could ultimately affect software developers.

"If you get to the tiering level, and when you start getting all the different latencies in the switching, that's where there will have to be some application awareness and tuning of applications. I think we certainly have that capability today," Lender said.

It could be two to four years before companies start releasing CXL 3.0 products, and the CPUs will need to be aware of CXL 3.0, Lender said. Intel built in support for CXL 1.1 in its Sapphire Rapids chip, which is expected to start shipping in volume later this year. The CXL 3.0 protocol is backward compatible with the older versions of the interconnect standard.

CXL products based on earlier protocols are slowly trickling into the market. SK Hynix this week introduced its first DDR5 DRAM-based CXL memory samples, and will start manufacturing CXL memory modules in volume next year. Samsung introduced its own CXL DRAM earlier this year.

While products based on CXL 1.1 and 2.0 protocols are on a two-to-three-year product release cycle, CXL 3.0 products could take a little longer as it takes on a more complex computing environment.

"CXL 3.0 could actually be a little slower because of some of the Fabric Manager, the software work. They're not simple systems; when you start getting into fabrics, people are going to want to do proofs of concept and prove out the technology first. It's going to probably be a three-to-four-year timeframe," Lender said.

Some companies already started work on CXL 3.0 verification IP six to nine months ago, and are fine-tuning the tools to the final specification, Lender said.

The CXL Consortium has a board meeting in October to discuss next steps, which could also involve CXL 4.0. The standards organization for PCIe, the PCI Special Interest Group, last month announced it was planning PCIe 7.0, which doubles the data transfer speed of PCIe 6.0 to 128 gigatransfers per second.

Lender was cautious about how PCIe 7.0 could potentially fit into a next-generation CXL 4.0. CXL has its own set of I/O, memory and cache protocols.

"CXL sits on the electricals of PCIe, so I can't commit or absolutely guarantee that [CXL 4.0] will run on 7.0. But that's the intent: to use the electricals," Lender said.

"In that case, one of the tenets of CXL 4.0 will be to double the bandwidth by going to PCIe 7.0, but beyond that, everything else will be what we do: more fabric, or different tunings," Lender said.

CXL has been on an accelerated pace, with three specification releases since its formation in 2019. There was confusion in the industry over the best high-speed, coherent I/O bus, but the focus has now coalesced around CXL.

"Now we have the fabric. There are pieces of Gen-Z and OpenCAPI that aren't even in CXL 3.0, so will we incorporate those? Sure, we'll look at doing that kind of work moving forward," Lender said.


Quantum computing and the Australians on the cutting edge – 9News

Fans of Marvel movies know the word 'quantum' all too well.

It's the name of the realm the Avengers used to time travel, and, fantastical as that is, the concept of quantum mechanics is far from fiction.

Scientists have toyed with the idea since the 1920s in an attempt to explain the mysteries of our universe that cannot be explained by traditional physics.

The University of Sydney (USYD) and the University of New South Wales (UNSW) are among Google's new partners, which already included Macquarie University (MQ) and the University of Technology Sydney (UTS).

Associate Professor Ivan Kassal, from USYD, believes advancements in quantum chemistry could yield life-saving medicines and help predict the impact of atmospheric matter on our climate.

"Simulating chemistry is likely to be one of the first applications of quantum computers, and my goal is to develop the quantum algorithms that will allow near-term quantum computers to give us insights into chemical processes that are too complicated to simulate on any classical supercomputer," Kassal said.

Those are very physical problems, but quantum computers could also speed up the solving of complex systems, crack cryptography and enable new applications of machine learning.

Australia's Chief Scientist, Dr Cathy Foley said Google's interest in Australia is "testament to the world class research that has been supported by the Australian Research Council for over two decades".

"I am delighted that Google sees Australia as somewhere to do quantum research. A step in building Australia's quantum industry here," said Dr Foley.

Google is building its quantum research team in Sydney, including its newly-appointed quantum computing scientist, Dr Marika Kieferova.

Professor Michael Bremner of UTS said one of the biggest challenges in quantum computing "is understanding the applications in which quantum computers can deliver performance that goes beyond classical computing."

"In this project, my team at UTS will work with Google on this problem, examining the mathematical structures that drive quantum algorithms to go beyond classical computing," Bremner said.


USC’s Biggest Wins in Computing and AI – USC Viterbi School of Engineering

USC has been an animating force for computing research since the late 1960s.

With the advent of the USC Information Sciences Institute (ISI) in 1972 and the Department of Computer Science in 1976 (born out of the Ming Hsieh Department of Electrical and Computer Engineering), USC has played a propulsive role in everything from the internet to the Oculus Rift to recent Nobel Prizes.

Here are seven of those victories, reimagined as cinemagraphs: still photographs animated by subtle yet remarkable movements.

Cinemagraph: Birth of .Com

1. The Birth of the .com (1983)

While working at ISI, Paul Mockapetris and Jon Postel pioneered the Domain Name System, which introduced the .com, .edu, .gov and .org internet naming standards.

As Wired noted on the 25th anniversary, "Without the Domain Name System, it's doubtful the internet could have grown and flourished as it has."

The DNS works like a phone book for the internet, automatically translating text names, which are easy for humans to understand and remember, to numerical addresses that computers need. For example, imagine trying to remember an IP address like 192.0.2.118 instead of simply usc.edu.
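The phone-book analogy can be sketched in a few lines. The mapping below is purely illustrative (the addresses are from the RFC 5737 documentation range, not real records); a real resolver walks the DNS hierarchy, which Python exposes through `socket.gethostbyname`.

```python
# A toy version of the DNS "phone book": a mapping from human-readable
# names to the numeric addresses computers need. Illustrative only; the
# addresses are RFC 5737 documentation examples, not real DNS records.

HOSTS = {
    "usc.edu": "192.0.2.118",
    "isi.edu": "192.0.2.42",
}

def resolve(name: str) -> str:
    """Look up the numeric address for a hostname, phone-book style."""
    try:
        return HOSTS[name]
    except KeyError:
        # Real resolvers report this as NXDOMAIN (non-existent domain).
        raise LookupError(f"NXDOMAIN: {name}")

print(resolve("usc.edu"))  # -> 192.0.2.118
```

In practice, `socket.gethostbyname("usc.edu")` performs the same translation by querying real DNS servers.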

In a 2009 interview with NPR, Mockapetris said he believed the first domain name he ever created was isi.edu for his employer, the (USC) Information Sciences Institute. That domain name is still in use today.

Grace Park, B.S. and M.S. '22 in chemical engineering, re-creates Leonard Adleman's famous experiment.

2. The Invention of DNA Computing (1994)

In a drop of water, a computation took place.

In 1994, Professor Leonard Adleman, who coined the term computer virus, invented DNA computing, which involves performing computations using biological molecules rather than traditional silicon chips.

Adleman, who received the 2002 Turing Award (often called the Nobel Prize of computer science), saw that a computer could be something other than a laptop or a machine using electrical impulses. After visiting a USC biology lab in 1993, he recognized that the 0s and 1s of conventional computers could be replaced with the four DNA bases: A, C, G and T. As he later wrote, a liquid computer can exist in which interacting molecules perform computations.
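The substitution Adleman recognized can be made concrete: with four bases available, each base can carry two bits of conventional logic. The mapping below is a minimal illustration of that idea, not Adleman's actual encoding (his experiment encoded a Hamiltonian path problem, not raw bit strings).

```python
# A minimal sketch of replacing 0s and 1s with DNA bases: four bases
# means two bits per base. Illustrative only; not Adleman's encoding.

BITS_TO_BASE = {"00": "A", "01": "C", "10": "G", "11": "T"}
BASE_TO_BITS = {base: bits for bits, base in BITS_TO_BASE.items()}

def encode(bits: str) -> str:
    """Encode an even-length bit string as a DNA strand, two bits per base."""
    return "".join(BITS_TO_BASE[bits[i:i + 2]] for i in range(0, len(bits), 2))

def decode(strand: str) -> str:
    """Recover the bit string from a DNA strand."""
    return "".join(BASE_TO_BITS[base] for base in strand)

print(encode("0110110001"))  # -> CGTAC
```

Because the encoding is a bijection on two-bit blocks, `decode(encode(bits))` always returns the original string; the density (two bits per base, in molecules nanometers apart) is part of why DNA storage looked so promising.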

As the New York Times noted in 1997: "Currently the world's most powerful supercomputer sprawls across nearly 150 square meters at the U.S. government's Sandia National Laboratories in New Mexico. But a DNA computer has the potential to perform the same breakneck-speed computations in a single drop of water."

"We've shown by these computations that biological molecules can be used for distinctly non-biological purposes," Adleman said in 2002. "They are miraculous little machines. They store energy and information, they cut, paste and copy."

Professor Maja Matarić with Blossom, a cuddly robot companion that helps people with anxiety and depression practice breathing exercises and mindfulness.

3. USC Interaction Lab Pioneers Socially Assistive Robotics (2005)

Named No. 5 on Business Insider's list of the 25 Most Powerful Women Engineers in Tech, Maja Matarić leads the USC Interaction Lab, pioneering the field of socially assistive robotics (SAR).

As defined by Matarić and her then-graduate researcher David Feil-Seifer 17 years ago, socially assistive robotics was envisioned as the intersection of assistive robotics and social robotics: a new field that focuses on providing social support to help people overcome challenges in health, wellness, education and training.

Socially assistive robots have been developed for a broad range of user communities, including infants with movement delays, children with autism, stroke patients, people with dementia and Alzheimer's disease, and otherwise healthy elderly people.

"We want these robots to make the user happier, more capable and better able to help themselves," said Matarić, the Chan Soon-Shiong Chair and Distinguished Professor of Computer Science, Neuroscience and Pediatrics at USC. "We also want them to help teachers and therapists, not remove their purpose."

The field has inspired investments from federal funding agencies and technology startups. The assistive robotics market is estimated to reach $25.16 billion by 2028.

Is the ball red or blue? Is the cat alive or dead? Professor Daniel Lidar, one of the world's top quantum influencers, demonstrates the idea of superposition.

4. First Operational Quantum Computing System in Academia (2011)

Before Google or NASA got into the game, there was the USC-Lockheed Martin Quantum Computing Center (QCC).

Led by Daniel Lidar, holder of the Viterbi Professorship in Engineering, and ISI's Robert F. Lucas (now retired), the center launched in 2011. With the world's first commercial adiabatic quantum processor, the D-Wave One, USC became the only university in the world to host and operate a commercial quantum computing system.

As USC News noted in 2018, quantum computing is the ultimate disruptive technology: it has the potential to create the best possible investment portfolio, dissolve urban traffic jams and bring drugs to market faster. It can optimize batteries for electric cars, predictions for weather and models for climate change. Quantum computing can do this, and much more, because it can crunch massive amounts of data and variables, and do it quickly, with an advantage over classical computers that grows as problems get bigger.

Recently, QCC upgraded to D-Wave's Advantage system, with more than 5,000 qubits, an order of magnitude larger than any other quantum computer. The upgrade makes QCC host to a new Advantage generation of quantum annealers from D-Wave and the first Leap quantum cloud system in the United States. Today, in addition to Professor Lidar, one of the world's top quantum computing influencers, QCC is led by Research Assistant Professor Federico Spedalieri, as operations director, and Research Associate Professor Stephen Crago, associate director of ISI.

David Traum, a leader at the USC Institute for Creative Technologies (ICT), converses with Pinchas Gutter, a Holocaust survivor, as part of the New Dimensions in Testimony.

5. USC ICT Enables Talking with the Past in the Future (2015)

New Dimensions in Testimony, a collaboration between the USC Shoah Foundation and the USC Institute for Creative Technologies (ICT), in partnership with Conscience Display, is an initiative to record and display testimony in a way that will continue the dialogue between Holocaust survivors and learners far into the future.

The project uses ICT's Light Stage technology to record interviews using multiple high-end cameras for high-fidelity playback. The ICT Dialogue Group's natural language technology allows fluent, open-ended conversation with the recordings. The result is a compelling and emotional interactive experience that enables viewers to ask questions and hear responses in real-time, lifelike conversation, even after the survivors have passed away.

New Dimensions in Testimony debuted in the Illinois Holocaust Museum & Education Center in 2015. Since then, more than 50 survivors and other witnesses have been recorded and presented in dozens of museums around the United States and the world. It remains a powerful application of AI and graphics to preserve the stories and lived experiences of culturally and historically significant figures.

Eric Rice and Bistra Dilkina are co-directors of the Center for AI in Society (CAIS), a remarkable collaboration between the USC Dworak-Peck School of Social Work and the USC Viterbi School of Engineering.

6. Among the First AI for Good Centers in Higher Education (2016)

Launched in 2016, the Center for AI in Society (CAIS) became one of the pioneering AI for Good centers in the U.S., uniting USC Viterbi and the USC Suzanne Dworak-Peck School of Social Work.

In the past, CAIS used AI to prevent the spread of HIV/AIDS among homeless youth. In fact, a pilot study demonstrated a 40% increase in homeless youth seeking HIV/AIDS testing due to an AI-assisted intervention. In 2019, the technology was also used as part of the largest global deployment of predictive AI to thwart poachers and protect endangered animals.

Today, CAIS fuses AI, social work and engineering in unique ways, such as working with the Los Angeles Homeless Service Authority to address homelessness; battling opioid addiction; mitigating disasters like heat waves, earthquakes and floods; and aiding the mental health of veterans.

CAIS is led by co-directors Eric Rice, a USC Dworak-Peck professor of social work, and Bistra Dilkina, a USC Viterbi associate professor of computer science and the Dr. Allen and Charlotte Ginsburg Early Career Chair.

Pedro Szekely, Mayank Kejriwal and Craig Knoblock of the USC Information Sciences Institute (ISI) are at the vanguard of using computer science to fight human trafficking.

7. AI That Fights Modern Slavery (2017)

Beginning in 2017, a team of researchers at ISI led by Pedro Szekely, Mayank Kejriwal and Craig Knoblock created software called DIG that helps investigators scour the internet to identify possible sex traffickers and begin the process of capturing, charging and convicting them.

Law enforcement agencies across the country, including in New York City, have used DIG as well as other software programs spawned by Memex, a Defense Advanced Research Projects Agency (DARPA)-funded program aimed at developing internet search tools to help investigators thwart sex trafficking, among other illegal activities. The specialized software has triggered more than 300 investigations and helped secure 18 felony sex-trafficking convictions, according to Wade Shen, program manager in DARPA's Information Innovation Office and Memex program leader. It has also helped free several victims.

In 2015, Manhattan District Attorney Cyrus R. Vance Jr. announced that DIG was being used in every human trafficking case brought by the DA's office. With technology like Memex, he said, "we are better able to serve trafficking victims and build strong cases against their traffickers."

"This is the most rewarding project I've ever worked on," said Szekely. "It's really made a difference."

Published on July 28th, 2022

Last updated on July 28th, 2022


IQT Predicts Blockchain and Quantum Threat to Spread Beyond Cybercurrencies – HPCwire

NEW YORK, July 27, 2022 – In its latest report, The Quantum Threat to Blockchain: Emerging Business Opportunities, IQT Research foresees major commercial opportunities arising to protect blockchain against future quantum computer intrusions. The firm agrees with the White House National Security Memorandum NSM-10, released on May 4, 2022, which indicates the urgency of addressing imminent quantum computing threats and the risks they present to the economy and to national security.

Although primarily associated with cryptocurrencies, blockchain has been proposed for a wide range of transactions, including insurance, real estate, voting, supply chain tracking and gaming. These areas are all vulnerable to quantum threats, which could lead to operational disruption, damaged trust, and loss of intellectual property, financial assets and regulated data.


About the Report:

Quantum computers threaten blockchain technologies built on classical public-key cryptography because they can break the computational security assumptions of elliptic curve cryptography. They also weaken the security of the hash function algorithms that protect a blockchain's secrets. This new IQT Research report identifies not only the challenges but also the opportunities, in terms of new products and services, that arise from the threat quantum computers pose to the blockchain mechanism. According to a recent study by the consulting firm Deloitte, approximately one-fourth of the Bitcoin in circulation in 2022 is vulnerable to quantum attack.
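The two threats differ in severity, and the textbook security arithmetic makes the distinction concrete. Shor's algorithm breaks elliptic curve signatures outright (polynomial time, no security left), while Grover's search "only" roughly halves the effective preimage security of a hash function. The sketch below assumes those standard results; it is a rule-of-thumb estimate, not a statement about any specific attack.

```python
# Rule-of-thumb security arithmetic behind the quantum threat to blockchain,
# assuming the textbook results: Shor breaks ECC completely, while Grover's
# search reduces a brute-force preimage attack from 2^n to about 2^(n/2).

def grover_preimage_bits(hash_output_bits: int) -> float:
    """Effective preimage security (in bits) against a Grover-equipped attacker."""
    return hash_output_bits / 2

def shor_ecc_bits(curve_bits: int) -> int:
    """Effective security of elliptic curve crypto once Shor's algorithm runs."""
    return 0  # broken in polynomial time, regardless of key size

print(grover_preimage_bits(256))  # SHA-256: ~128-bit quantum preimage security
print(shor_ecc_bits(256))         # secp256k1 signatures: no residual security
```

This is why the report's opportunities center on migrating blockchain signatures to post-quantum schemes, while hash-based components can often survive with larger parameters.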

This report covers both technical and policy issues relating to the quantum vulnerability of blockchain.

About IQT Research

IQT Research is a division of 3DR Holdings, and the first industry analyst firm dedicated to meeting the strategic information and analysis needs of the emerging quantum technology sector. In addition to publishing reports on critical business opportunities in the quantum technology sector, Inside Quantum Technology produces a daily news website on business-related happenings in the quantum technology field. For more information, please visit https://www.insidequantumtechnology.com.

3DR Holdings also organizes the Inside Quantum Technology conferences. The next conference is dedicated to quantum cybersecurity and will be held October 25-27 in New York City.

Source: IQT
