Archive for February, 2021

DoD's AI center striving to be connective tissue across all projects – Federal News Network


It's unclear if anyone really knows just how many pilot projects in the Defense Department are using artificial intelligence, machine learning or intelligent automation.

Some say it's around 300, while others say it's closer to 600, and then there are those who believe the number could be more than 1,000.

But unlike with so many technology innovations that came before, the Pentagon, through its Joint Artificial Intelligence Center (JAIC), is taking aggressive action to stop, or at least limit, AI-sprawl.

"There's a lot of efforts that are out there that are not very well tied together, and there's a whole bunch of them that are dealing with exactly the same thing. So one of them is talent. Do they have talent? Or do they have to grow their talent or do they have to acquire the talent? The other big one, of course, is data, and it's almost invariably when anybody in the Department of Defense talks about doing work, they get to the data saying, 'Okay, my data hasn't been cleansed, so is it usable?'" said Anthony Robbins, the vice president of the North American public sector business for NVIDIA, in an interview with Federal News Network. "They try to assess use cases, and then they're trying to figure out how to get started. The JAIC wants to help them figure this out."

DoD launched the JAIC in June 2018 with a much different vision than where it stands today. Whereas the Pentagon saw JAIC nearly three years ago as pushing AI to the military services and defense agencies through pathfinder projects, it's now focused on providing services and setting the foundational elements for mission areas to take advantage of the technologies.

In November, DoD announced JAIC 2.0, detailing its new vision and mission. As part of that new approach, the JAIC awarded a $106 million contract in September to build the Joint Common Foundation (JCF), an artificial intelligence development platform, and plans to create three new other transaction agreement (OTA) vehicles in the coming year under the Tradewinds moniker to further build out its services catalog.

Jacqueline Tame, the acting deputy director of JAIC, said the move to 2.0 is a recognition that the services and defense agencies need a different kind of help to ensure AI tools improve and measure mission readiness.

The JAIC doesn't need to be a doer, but a trainer, educator and supporter, because the adoption of AI and AI-like capabilities (think robotic process automation (RPA) and predictive analytics) is spreading across the department like wildfire.

"What we have been able to do over the last two-and-a-half years is really test what the department actually needs, what the department is actually ready for and what the foundational building blocks of AI-readiness actually are. JAIC 2.0 is a recognition, and learnings that we've undertaken, that there are some key building blocks we have to put in place departmentwide to be AI-ready," Tame said during AFCEA NOVA IC IT day. "Where we are today, having developed a lot of capabilities, deployed a lot of prototypes and implemented a lot of solutions across the department, is that we've learned that what the department actually needs is enabling services."

Tame said while some organizations, like Army Futures Command, Special Operations Command and parts of the Air Force, have matured their AI capabilities, the efforts too often are rolling out in silos.

"What is still not happening, and this is the underpinning of JAIC 2.0, is the connective tissue between all of those capabilities that are being researched or deployed. What is still lacking, in our assessment, is the aggregate of the components of AI-readiness," she said. "That includes removing some of the barriers to entry that present themselves in terms of both education and awareness about what AI is and what AI is not, what things actually lend themselves to AI and AI-enabled applications. Really understanding what the data needs to look like, the status of AI readiness in order to leverage it, test it appropriately, and an understanding of the ethical underpinnings in terms of what that needs to look like as we consider some of the more advanced capabilities that we are trying to deploy across the force. Having a really foundational understanding of the types of infrastructure and architectures that need to be able to be interoperable in order to achieve the goals we are trying to achieve here. And really trying to understand the culture barriers to entry that still exist."

Like with any new technology, the culture barriers to AI aren't unusual. But Tame, Robbins and other experts say trust, confidence and usability are at the heart of AI-readiness.

"This is a technology that is and will affect every person, every country and every industry around the world," Robbins said. "It is a technology that can go into every industry from transportation to healthcare to defense. Technology transformation is as much about leading change in transformation as it is the technology. The technology is ready."

Robbins said a predictive and preventive maintenance program, as well as its use to help with humanitarian assistance, are two examples of how DoD already is using AI.

One example is the Army's Aviation and Missile Command G-3's work with the JAIC since 2019 on predictive and preventive maintenance for the UH-60 Black Hawk helicopter.

"When it comes to logistics and maintenance, there is an overwhelming amount of data available, anything from aircraft sensor data to maintenance forms and part records," Chris Shumeyko, JAIC product manager, said in an Army release. "Ordinarily, subject matter experts play a huge role in understanding this data and identifying trends that may affect the readiness of the Army's vehicle fleet. However, as the amount of data grows, you either need more experts to comb through that data or possible warning signs of problems may get missed. By injecting AI/ML, we're not replacing these experts, but rather providing them with tools that can find hard-to-spot trends, anomalies or warning signs in a fraction of the time. Our goal is to increase the efficiency of the experts."
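
To make the idea of "hard-to-spot trends, anomalies or warning signs" concrete, here is a minimal sketch of one common technique, a rolling z-score check on a sensor stream. It is illustrative only and assumes nothing about the JAIC's or the Army's actual models; the data, window size and threshold are hypothetical.

```python
# Illustrative only: flag sensor readings that deviate sharply from their
# recent rolling behavior. Not the JAIC's or the Army's actual approach.
import statistics

def flag_anomalies(readings, window=20, threshold=3.0):
    """Return indices of readings more than `threshold` standard
    deviations away from the mean of the preceding `window` values."""
    flagged = []
    for i in range(window, len(readings)):
        recent = readings[i - window:i]
        mean = statistics.mean(recent)
        stdev = statistics.pstdev(recent) or 1e-9  # guard against zero spread
        if abs(readings[i] - mean) / stdev > threshold:
            flagged.append(i)
    return flagged

# Hypothetical vibration data: mostly steady, with one sudden spike.
vibration = [1.0 + 0.01 * (i % 5) for i in range(100)]
vibration[60] = 2.5
print(flag_anomalies(vibration))  # -> [60]
```

The real work, of course, is in the data pipelines and far richer models behind tools like this; the sketch only shows the shape of the problem, many routine readings and a few that deserve an expert's attention.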

It's this type of service that the JAIC is providing under its latest iteration.

Tame said the new services include or will include:

Robbins said these services and other recent actions by the JAIC are part of how DoD is moving AI out of the testing phase and into the operations phase.

Tame added that part of the way to address that operational need is not to develop, test and deploy in the silos of yesterday, but through a common framework that creates a starting point for all AI technology.

"These critical building blocks that will enable us to get to the point of implementation of AI across the force in a really cohesive way are not there yet," she said. "The JAIC's role really needs to be driving that advocacy and education of our senior executive leadership all the way down to line analysts and intelligence agencies about institutionalizing the ethical underpinnings that need to be talked about every time we are thinking about AI, about ensuring there is a departmentwide test and evaluation framework that is specific to AI, which is different than everything else the test and evaluation community has been saying before, and ensuring we have a really foundational understanding across the board of those data standards, many of which do not exist yet or haven't been agreed upon, and the level of infrastructure interoperability that we need to both put in place in terms of new systems and reimagine in terms of our legacy systems."

The end goal of JAIC 2.0 isn't just about offering new services or changing its mission focus, but about addressing the AI-sprawl that seems to be happening quickly by giving the military services and Defense agencies a common baseline to build on top of and ensuring the necessary trust, confidence, security and ethical foundations are in place. This is something that was missing with cloud, mobile devices and many other technologies, an absence that led to unabated sprawl.

Read the original here:
DoD's AI center striving to be connective tissue across all projects - Federal News Network

IBM-Red Hat deal with Palantir is big boost for its artificial intelligence, cloud strategy – WRAL Tech Wire

Editor's note: Nicole Catchpole is a Senior Analyst with Technology Business Research.

HAMPTON, N.H. – Since Arvind Krishna took the helm as CEO in April, IBM has engaged in a series of acquisitions and partnerships to support its transformative shift to fully embrace an open hybrid cloud strategy. The company is further solidifying the strategy with the announcement that IBM and Palantir are coming together in a partnership that combines AI, hybrid cloud, operational intelligence and data processing into an enterprise offering.

The partnership will leverage Palantir Foundry, a data integration and analysis platform that enables users to easily manage and visualize complex data sets, to create a new solution called Palantir for IBM Cloud Pak for Data. The new offering, which will be available in March, will leverage AI capabilities to help enterprises further automate data analysis across a wide variety of industries and reduce inherent silos in the process.

A core benefit that customers will derive from the collaboration between IBM (NYSE: IBM) and Palantir (NYSE: PLTR) is the easing of the pain points associated with adopting a hybrid cloud model, including integration across multiple data sources and the lack of visibility into the complexities of cloud-native development. By partnering with Palantir, IBM will be able to make its AI software more user-friendly, especially for those customers who are not technical by nature or trade. Palantir's software requires minimal, if any, coding and enhances the accessibility of IBM's cloud and AI business.

According to Rob Thomas, IBM's senior vice president of software, cloud and data, the new offering will help to boost the percentage of IBM's customers using AI from 20% to 80% and will be sold to 180 countries and thousands of customers, "which is a pretty fundamental change for us." Palantir for IBM Cloud Pak for Data will extend the capabilities of IBM Cloud Pak for Data and IBM Cloud Pak for Automation, and according to a recent IBM press release, the new solution is expected to "simplify how businesses build and deploy AI-infused applications with IBM Watson" and help users "access, analyze and take action on the vast amounts of data that is scattered across hybrid cloud environments, without the need for deep technical skills."

By drawing on the no-code and low-code capabilities of Palantir's software as well as the automated data governance capabilities embedded into the latest update of IBM Cloud Pak for Data, IBM is looking to drive AI adoption across its businesses, which, if successful, can serve as a ramp to access more hybrid cloud workloads. IBM perhaps summed it up best during its 2020 Think conference, with the comment: "AI is only as good as the ecosystem that supports it." While many software companies are looking to democratize AI, Red Hat's open hybrid cloud approach, underpinned by Linux and Kubernetes, positions IBM to bring AI to chapter 2 of the cloud.

For historical context, it is important to remember that the acquisition of Red Hat marked the beginning of IBM's dramatic transformation into a company that places the values of flexibility, openness, automation and choice at the core of its strategic agenda. IBM Cloud Paks, which are modular AI-powered solutions that enable customers to efficiently and securely move workloads to the cloud, have been a central component of IBM's evolving identity.

After more than a year of messaging to the market the critical role Red Hat OpenShift plays in IBM's hybrid cloud strategy, Big Blue is now tasked with delivering on top of the foundational layer with the AI capabilities it has been tied to since the inception of Watson. By leveraging the openness and flexibility of OpenShift, IBM continues to emphasize its Cloud Pak portfolio, which serves as the middleware layer, allowing clients to run IBM software as close or as far away from the data as they desire. This architectural approach supports IBM's cognitive applications, such as Watson AIOps and Watson Analytics, while new integrations, such as those with Palantir Foundry, will support the data integration process for customers' SaaS offerings.

The partnership with IBM is a landmark relationship for Palantir that provides access to a broad network of internal sales and research teams as well as IBM's expansive global customer base. To start, Palantir will now have access to the reach and influence of IBM's Cloud Paks sales force, which is a notable expansion from its current team of 30. The company already primarily sells to companies that have over $500 million in revenue, and many of them already have relationships with IBM. By partnering with IBM, Palantir will not only be able to deepen its reach into its existing customer base but also have access to a much broader customer base across multiple industries. The partnership additionally provides Palantir with access to the IBM Data Science and AI Elite Team, which helps organizations across industries address data science use cases as well as the challenges inherent in AI adoption.

As a rebrand of its partner program, IBM unveiled the Public Cloud Ecosystem program nearly one year ago, onboarding key global systems integrators, such as inaugural partner Infosys, to push out IBM Cloud Paks solutions to customers on a global scale. As IBM increasingly looks up the technology stack, where enterprise value is ultimately generated, the company is emphasizing the IBM Cloud Pak for Data, evidenced by the November launch of version 3.5 of the solution, which offers support for new services.

In addition, IBM refreshed the IBM Cloud Pak for Automation while integrating robotic process automation technology from the acquisition of WDG Automation. Alongside the product update, IBM announced there are over 50 ISV partners that offer services integrated with IBM Cloud Pak for Data, which is also now available on the Red Hat Marketplace. IBM's ability to leverage technology and services partners to draw awareness to its Red Hat portfolio has become critical and has helped accelerate the vendor's efforts in industry cloud following the launch of the financial services-ready public cloud and the more recent telecommunications cloud. New Cloud Pak updates such as these highlight IBM's commitment to OpenShift as well as its growing ecosystem of partners focused on AI-driven solutions.

Palantir's software, which serves over 100 clients in 150 countries, is diversified across various industries, and the new partner solution will support IBM's industry cloud strategy by targeting AI use cases. Palantir for IBM Cloud Pak for Data was created to mitigate the challenges faced by multiple industries, including retail, financial services, healthcare and telecommunications – in other words, "some of the most complex, fast-changing industries in the world," according to Thomas. For instance, many financial services organizations have been involved in extensive M&A activity, which results in a fragmented and dispersed environment involving multiple pools of data.

Palantir for IBM Cloud Pak for Data will help remediate those challenges through rapid data integration, cleansing and organization. According to IBM's press release, Guy Chiarello, chief administrative officer and head of technology at Fiserv (Nasdaq: FISV), an enterprise focused on supporting financial services institutions, reacted positively to the announcement, stating, "This partnership between two of the world's technology leaders will help companies in the financial services industry provide business-ready data and scale AI with confidence."

(C) TBR

Follow this link:
IBM-Red Hat deal with Palantir is big boost for its artificial intelligence, cloud strategy - WRAL Tech Wire

Top Ten Legal Considerations for Use and/or Development of Artificial Intelligence in Health Care – JD Supra

The purpose of this article is to provide an overview of the top ten legal issues that health care providers and health care companies should consider when using and/or developing artificial intelligence (AI). In particular, this article summarizes, in no particular order, the ten considerations outlined below.

That's a long list. However, we will attempt to break down each of these considerations and briefly summarize them.

1. Statutory, Regulatory and Common Law Requirements

Regardless of whether you encounter AI as a health care provider or a developer (or both), there are statutory, regulatory and common law requirements that may be implicated when considering AI in the health care space. Depending on the functionality that the AI is discharging, there could be state and federal laws that require a health care provider or an AI developer to seek licensure, permits and/or other registrations (for example, AI may be employed in a way that requires FDA approval if it provides diagnosis without a health care professional's review). Additionally, as AI functionality expands (and potentially replaces physicians in the provision of physician services), the question may be raised as to how these services are regulated, and whether the provision of such services would be considered the unlicensed practice of medicine or in violation of corporate practice of medicine prohibitions.

2. Ethical Considerations

Where health care decisions have been almost exclusively human in the past, the use of AI in the provision of health care raises ethical questions relating to accountability, transparency and consent. In the instance where complex, deep-learning algorithm AI is used in the diagnosis of patients, a physician may not be able to fully understand or, even more importantly, explain to his or her patient the basis of their diagnosis. As a result, a patient may be left not understanding the status of their diagnosis or being unsatisfied with the delivery of their diagnosis. Further, it may be difficult to establish accountability when errors occur in diagnosis as a result of the use of AI. Additionally, AI is not immune from algorithmic biases, which could lead to diagnosis based on gender or race or other factors that do not have a causal link to the diagnosis.

3. Reimbursement Issues

The use of AI in both patient care and administrative functions raises questions relating to reimbursement by payors for health care services. How will payors reimburse for health care services provided by AI (will they even reimburse for such services)? Will federal and state health care programs (e.g., Medicare and Medicaid) recognize services provided by AI, and will AI impact provider enrollment? AI has the potential to affect every aspect of revenue cycle management. In particular, there are concerns that errors could occur when requesting reimbursement through AI. For example, if AI is assisting providers with billing and coding, could the provider be at risk of a False Claims Act violation as a result of an AI error? In the inevitable event that an error occurs, it may be ambiguous as to who is ultimately responsible for such errors unless clearly defined contractually.

4. Contractual Exposure

5. Torts and Private Causes of Action

If AI is involved in the provision of health care (or other) services, both the developer and provider of the services may have liability under a variety of tort law principles. Under theories of strict liability, a developer may be held liable for defects in their AI that are unreasonably dangerous to users. In the case of design defects, a developer may be held liable if the AI is inadequately planned or unreasonably hazardous to consumers. At least for the near term, the AI itself probably will not be liable for its acts or omissions (but recognize that as AI evolves, tort theories could also evolve to hold the AI itself liable). As a result, those involved in the process (the developer and provider) will likely have exposure to liability associated with the AI. Whether the liability is professional liability or product liability will likely depend on the functions the AI is performing. Further, depending on how the AI is used, a provider may be required to disclose the use of AI to their patients as a part of the informed consent process.

6. Antitrust Issues

The Antitrust Division of the Department of Justice (the DOJ) has made remarks regarding algorithmic collusion that may impact the use of AI in the health care space. While acknowledging that algorithmic pricing can be highly competitive, the DOJ has noted that concerted action to fix prices may occur when competitors have a common understanding to use the same software to achieve the same results. As a result, the efficiencies gained by using AI with pricing information, and other competitive data, may be offset by the antitrust risks.

7. Employment and Labor Considerations

The use of AI in the workforce will likely impact the structure of employment arrangements as well as employment policies, training and liability. AI may change the structure of the workforce by increasing the efficiencies in job performance and competition for those jobs (i.e., fewer workforce members are necessary when tasks are performed more quickly and efficiently by AI). However, integration of AI into the workforce also may create new bases for litigation and causes of action based on discrimination in hiring practices. If AI is used in making hiring decisions, how can you ensure decisions based on any discriminatory characteristics are removed from the analysis? AI also may affect the terms of the employment and independent contractor contractual agreements with workforce members, particularly with respect to ownership of intellectual property, restrictive covenants and confidentiality.

8. Privacy and Security Risks

Speaking of confidentiality, the use and development of AI in health care poses unique challenges to companies that have ongoing obligations to safeguard protected health information, personally identifiable information and other sensitive information. AI's processes often require enormous amounts of data. As a result, it is inevitable that using AI may implicate the Health Insurance Portability and Accountability Act (HIPAA) and state-level privacy and security laws and regulations with respect to such data, which may need to be de-identified. Alternatively, an authorization from the patient may be required prior to disclosure of the data via AI or to the AI. Further, AI poses unique challenges and risks with respect to privacy breaches and cybersecurity threats, which has an obvious negative impact on patients and providers.

9. Intellectual Property Considerations

It is of particular importance for AI developers to preserve and protect the intellectual property rights that they may be able to assert over their developments (patent rights, trademark rights, etc.) and for users of AI to understand the rights they have to use the AI they have licensed. It also is important to consider carefully who owns the data that the AI uses to learn and the liability associated with such ownership.

10. Compliance Program Implications

As technology evolves, so should a provider's compliance program. When new technology such as AI is introduced, compliance program policies and procedures should be updated based on the new technology. In addition, it is important that the workforce implementing and using the AI technology is trained appropriately. As in a traditional compliance plan, continual monitoring and evaluation should take place and programs and policies should be updated pursuant to such monitoring and changes in AI.

We predict that as the use and development of AI grows in health care, so will this list of legal considerations.

See more here:
Top Ten Legal Considerations for Use and/or Development of Artificial Intelligence in Health Care - JD Supra

Colorado makes a bid for quantum computing hardware plant that would bring more than 700 jobs – The Denver Post

The Colorado Economic Development Commission normally doesn't throw its weight behind unproven startups, but it did so on Thursday, approving $2.9 million in state job growth incentive tax credits to try and land a manufacturing plant that will produce hardware for quantum computers.

"Given the broad applications and catalytic benefits that this company's technology could bring, retaining this company would help position Colorado as an industry leader in next-generation and quantum computing," Michelle Hadwiger, the deputy director of the Colorado Office of Economic Development & International Trade, told commissioners.

Project Quantum, the codename for the Denver-based startup, is looking to create up to 726 new full-time jobs in the state. Most of the positions would staff a new facility making components for quantum computers, an emerging technology expected to increase computing power and speed exponentially and transform the global economy as well as society as a whole.

The jobs would carry an average annual wage of $103,329, below the wages other technology employers seeking incentives from the state have provided, but above the average annual wage of any Colorado county. Hadwiger said the company is also considering Illinois, Ohio and New York for the new plant and headquarters.

"Quantum computing is going to be as important to the next 30 years of technology as the internet was to the past 30 years," said the company's CEO, who only provided his first name, Corban.

He added that he loves Colorado and doesn't want to see it surpassed by states like Washington, New York and Illinois in the transformative field.

"If we are smart about it, and that means doing something above and beyond, we can win this race. It will require careful coordination at the state and local levels. We need to do something more and different," he said.

The EDC also approved $2.55 million in job growth incentive tax credits and $295,000 in Location Neutral Employment Incentives for Nextworld, a growing cloud-based enterprise software company based in Greenwood Village. The funds are linked to the creation of 306 additional jobs, including 59 located in more remote parts of the state.

But in a rare case of dissent, Nextworld's CEO Kylee McVaney asked the commission to go against staff recommendations and provide a larger incentive package.

McVaney, daughter of legendary Denver tech entrepreneur Ed McVaney, said the company's lease is about to expire in Greenwood Village and most employees would prefer to continue working remotely. The company could save substantial money by not renewing its lease and relocating its headquarters to Florida, which doesn't have an income tax.

"We could go sign a seven-year lease and stay in Colorado or we can try this new grand experiment and save $11 million," she said.

Hadwiger insisted that the award, which averages out to $9,500 per job created, was in line with the amount offered to other technology firms since the Colorado legislature tightened the amount the office could provide companies.

But McVaney said the historical average award per employee was closer to $18,000, with a median of $16,000, and that Colorado was not competitive with Florida given that state's more favorable tax structure.

See the article here:
Colorado makes a bid for quantum computing hardware plant that would bring more than 700 jobs - The Denver Post

How researchers are mapping the future of quantum computing, using the tech of today – GeekWire

Pacific Northwest National Laboratory computer scientist Sriram Krishnamoorthy. (PNNL Photo)

Imagine a future where new therapeutic drugs are designed far faster and at a fraction of the cost they are today, enabled by the rapidly developing field of quantum computing.

The transformation of healthcare and personalized medicine would be tremendous, yet these are hardly the only fields this novel form of computing could revolutionize. From cryptography to supply-chain optimization to advances in solid-state physics, the coming era of quantum computers could bring about enormous changes, assuming its potential can be fully realized.

Yet many hurdles still need to be overcome before all of this can happen. This is one of the reasons the Pacific Northwest National Laboratory and Microsoft have teamed up to advance this nascent field.

The developer of the Q# programming language, Microsoft Quantum, recently announced the creation of an intermediate bridge that will allow Q# and other languages to be used to send instructions to different quantum hardware platforms. This includes the simulations being performed on PNNL's own powerful supercomputers, which are used to test the quantum algorithms that could one day run on those platforms. While scalable quantum computing is still years away, these simulations make it possible to design and test many of the approaches that will eventually be used.

"We have extensive experience in terms of parallel programming for supercomputers," said PNNL computer scientist Sriram Krishnamoorthy. "The question was, how do you use these classical supercomputers to understand how a quantum algorithm and quantum architectures would behave while we build these systems?"

That's an important question given that classical and quantum computing are so extremely different from each other. Quantum computing isn't Classical Computing 2.0. A quantum computer is no more an improved version of a classical computer than a lightbulb is a better version of a candle. While you might use one to simulate the other, that simulation will never be perfect because they're such fundamentally different technologies.

Classical computing is based on bits, pieces of information that are either off or on to represent a zero or one. But a quantum bit, or qubit, can represent a zero or a one or any proportion of those two values at the same time. This makes it possible to perform computations in a very different way.

However, a qubit can only do this so long as it remains in a special state known as superposition. This, along with other features of quantum behavior such as entanglement, could potentially allow quantum computing to answer all kinds of complex problems, many of which are exponential in nature. These are exactly the kind of problems that classical computers can't readily solve, if they can solve them at all.
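
For readers who want the notation behind that description, the standard textbook way to write a single qubit's state (this is generic quantum computing, not anything specific to PNNL or Microsoft) is as a weighted combination of the two classical values:

\[
|\psi\rangle = \alpha\,|0\rangle + \beta\,|1\rangle, \qquad |\alpha|^{2} + |\beta|^{2} = 1
\]

Here the complex amplitudes α and β set the probabilities, |α|² and |β|², of reading out a 0 or a 1 when the qubit is finally measured; until that measurement, both possibilities are carried along at once.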

For instance, much of the world's electronic privacy is based on encryption methods that rely on prime numbers. While it's easy to multiply two prime numbers, it's extremely difficult to reverse the process by factoring the product of two primes. In some cases, a classical computer could run for 10,000 years and still not find the solution. A quantum computer, on the other hand, might be capable of performing the work in seconds.
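
A toy illustration of that asymmetry, using deliberately tiny numbers (real cryptographic keys involve primes hundreds of digits long, and the quantum shortcut, Shor's algorithm, is not shown here):

```python
# Multiplying two primes is one operation; recovering them by trial
# division is the slow direction, and the gap explodes as numbers grow.
def factor(n):
    """Find the smallest prime factor of n by trial division."""
    d = 2
    while d * d <= n:
        if n % d == 0:
            return d, n // d
        d += 1
    return n, 1  # n itself is prime

p, q = 104729, 1299709        # the 10,000th and 100,000th primes
n = p * q                     # the easy direction: a single multiplication
print(factor(n) == (p, q))    # the hard direction: ~100,000 trial divisions -> True
```

For numbers this small the search still finishes in a blink; for the roughly 600-digit products used in modern encryption, that same brute-force idea is what would keep a classical machine busy for those thousands of years.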

That doesn't mean quantum computing will replace all tasks performed by classical computers. This includes programming the quantum computers themselves, which the very nature of quantum behaviors can make highly challenging. For instance, just the act of observing a qubit can make it decohere, causing it to lose its superposition and entangled states.

Such challenges drive some of the work being done by Microsoft's Azure Quantum group. Expecting that both classical and quantum computing resources will be needed for large-scale quantum applications, Microsoft Quantum has developed a bridge it calls QIR, which stands for quantum intermediate representation. The motivation behind QIR is to create a common interface at a point in the programming stack that avoids interfering with the qubits. Doing this makes the interface both language- and platform-agnostic, which allows different software and hardware to be used together.
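
As a purely conceptual sketch of why such an intermediate layer helps (this is not the actual QIR format or API, and every name below is invented for illustration): if each frontend lowers programs into one shared, language-neutral instruction list, then any backend that understands that list, whether a simulator or a hardware target, can run the same program unchanged.

```python
# Conceptual illustration of a language- and platform-agnostic
# intermediate layer. Hypothetical names and structure; not the real QIR.

# A program lowered from any frontend (Q#, Python, ...) into a flat,
# language-neutral list of gate instructions.
circuit_ir = [
    ("h", 0),          # Hadamard on qubit 0
    ("cnot", 0, 1),    # entangle qubits 0 and 1
    ("measure", 0),
    ("measure", 1),
]

def simulator_backend(ir):
    """One consumer of the IR: a classical simulator."""
    for instruction in ir:
        print("simulating:", instruction)

def hardware_backend(ir):
    """Another consumer: a translator for a hardware target."""
    for instruction in ir:
        print("queueing for device:", instruction)

# The same intermediate program is handed to either backend unchanged.
for backend in (simulator_backend, hardware_backend):
    backend(circuit_ir)
```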

"To advance the field of quantum computing, we need to think beyond just how to build a particular end-to-end system," said Bettina Heim, senior software engineering manager with Microsoft Quantum, during a recent presentation. "We need to think about how to grow a global ecosystem that facilitates developing and experimenting with different approaches."

Because these are still very early days (think of where classical computing was 75 years ago), many fundamental components still need to be developed and refined in this ecosystem, including quantum gates, algorithms and error correction. This is where PNNL's quantum simulator, DM-SIM, comes in. By designing and testing different approaches and configurations of these elements, researchers can discover better ways of achieving their goals.

As Krishnamoorthy explains: "What we currently lack, and what we are trying to build with this simulation infrastructure, is a turnkey solution that could allow, say, a compiler writer or a noise model developer or a systems architect to try different approaches in putting qubits together and ask the question: If they do this, what happens?"
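
To make "using classical supercomputers to understand how a quantum algorithm would behave" concrete, here is a toy statevector sketch that assumes nothing about DM-SIM's actual design: the qubit's state is stored as a vector of complex amplitudes, a gate is a matrix applied to that vector, and measurement probabilities come from the squared magnitudes of the amplitudes. Full-scale simulators do this for many entangled qubits, with noise models layered on top.

```python
# Toy statevector simulation of a single qubit; illustrative only and
# unrelated to DM-SIM's actual implementation.
import numpy as np

ket0 = np.array([1.0, 0.0], dtype=complex)                    # the |0> state
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)   # Hadamard gate

state = H @ ket0                    # put the qubit into superposition
probabilities = np.abs(state) ** 2  # squared amplitudes give measurement odds
print(state)                        # [0.707...+0.j 0.707...+0.j]
print(probabilities)                # [0.5 0.5] -> equal odds of 0 or 1
```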

Of course, there will be many challenges and disappointments along the way, such as an upcoming retraction of a 2018 paper in the journal Nature. The original study, partly funded by Microsoft, declared evidence of a theoretical particle called a Majorana fermion, which could have been a major quantum breakthrough. However, errors since found in the data contradict that claim.

But progress continues, and once reasonably robust and scalable quantum computers are available, all kinds of potential uses could become possible. Supply chain and logistics optimization might be ideal applications, generating new levels of efficiency and energy savings for business. Since quantum computing should also be able to perform very fast searches on unsorted data, applications that focus on financial data, climate data analysis and genomics are likely uses, as well.

That's only the beginning. Quantum computers could be used to accurately simulate physical processes from chemistry and solid-state physics, ushering in a new era for these fields. Advances in material science could become possible because we'll be better able to simulate and identify molecular properties much faster and more accurately than we ever could before. Simulating proteins using quantum computers could lead to new knowledge about biology that would revolutionize healthcare.

In the future, quantum cryptography may also become common, due to its potential for truly secure encrypted storage and communications. That's because it's impossible to precisely copy quantum data without violating the laws of physics. Such encryption will be even more important once quantum computers are commonplace, because their unique capabilities will also allow them to swiftly crack traditional methods of encryption, as mentioned earlier, rendering many currently robust methods insecure and obsolete.

As with many new technologies, it can be challenging to envisage all of the potential uses and problems quantum computing might bring about, which is one reason why business and industry need to become involved in its development early on. Adopting an interdisciplinary approach could yield all kinds of new ideas and applications and hopefully help to build what is ultimately a trusted and ethical technology.

"How do you all work together to make it happen?" asks Krishnamoorthy. "I think for at least the next couple of decades, for chemistry problems, for nuclear theory, etc., we'll need this hypothetical machine that everyone designs and programs for at the same time, and simulations are going to be crucial to that."

The future of quantum computing will bring enormous changes and challenges to our world. From how we secure our most critical data to unlocking the secrets of our genetic code, it's technology that holds the keys to applications, fields and industries we've yet to even imagine.

See original here:
How researchers are mapping the future of quantum computing, using the tech of today - GeekWire