Archive for the ‘Artificial Intelligence’ Category

NIH launches Bridge2AI program to expand the use of artificial intelligence in biomedical and behavioral research – National Institutes of Health…

News Release

Tuesday, September 13, 2022

The National Institutes of Health will invest $130 million over four years, pending the availability of funds, to accelerate the widespread use of artificial intelligence (AI) by the biomedical and behavioral research communities. The NIH Common Fund's Bridge to Artificial Intelligence (Bridge2AI) program is assembling team members from diverse disciplines and backgrounds to generate tools, resources, and richly detailed data that are responsive to AI approaches. At the same time, the program will ensure its tools and data do not perpetuate inequities or ethical problems that may occur during data collection and analysis. Through extensive collaboration across projects, Bridge2AI researchers will create guidance and standards for the development of ethically sourced, state-of-the-art, AI-ready data sets that have the potential to help solve some of the most pressing challenges in human health, such as uncovering how genetic, behavioral, and environmental factors influence a person's physical condition throughout their life.

"Generating high-quality ethically sourced data sets is crucial for enabling the use of next-generation AI technologies that transform how we do research," said Lawrence A. Tabak, D.D.S., Ph.D., Performing the Duties of the Director of NIH. "The solutions to long-standing challenges in human health are at our fingertips, and now is the time to connect researchers and AI technologies to tackle our most difficult research questions and ultimately help improve human health."

AI is both a field of science and a set of technologies that enable computers to mimic how humans sense, learn, reason, and take action. Although AI is already used in biomedical research and healthcare, its widespread adoption has been limited in part due to challenges of applying AI technologies to diverse data types. This is because routinely collected biomedical and behavioral data sets are often insufficient, meaning they lack important contextual information about the data type, collection conditions, or other parameters. Without this information, AI technologies cannot accurately analyze and interpret data. AI technologies may also inadvertently incorporate bias or inequities unless careful attention is paid to the social and ethical contexts in which the data is collected. In order to harness the power of AI for biomedical discovery and accelerate its use, scientists first need well-described and ethically created data sets, standards, and best practices for generating biomedical and behavioral data that is ready for AI analyses.

As it creates tools and best practices for making data AI-ready, Bridge2AI will also produce a variety of diverse data types ready to be used by the research community for AI analyses. These types include voice and other data to help identify abnormal changes in the body. Researchers will also generate data that can be used to make new connections between complex genetic pathways and changes in cell shape or function to better understand how they work together to influence health. In addition, AI-ready data will be prepared to help improve decision making in critical care settings to speed recovery from acute illnesses and to help uncover the complex biological processes underlying an individual's recovery from illness.

The Bridge2AI program is committed to fostering the formation of research teams richly diverse in perspectives, backgrounds, and academic and technical disciplines. Diversity is fundamental to the ethical generation of data sets, and for training future AI technologies to reduce bias and improve effectiveness for all populations, including those who are underrepresented in biomedical and behavioral research. Bridge2AI will develop ethical practices for data generation and use, addressing key issues such as privacy, data trustworthiness, and reducing bias.

NIH has issued four awards for data generation projects, and three awards to create a Bridge Center for integration, dissemination, and evaluation activities. The data generation projects will generate new biomedical and behavioral data sets ready to be used for developing AI technologies, along with creating data standards and tools for ensuring data are findable, accessible, interoperable, and reusable, a principle known as FAIR. In addition, data generation projects will develop training materials that promote a culture of diversity and the use of ethical practices throughout the data generation process. The Bridge Center will be responsible for integrating activities and knowledge across data generation projects, and disseminating products, best practices, and training materials.

The Bridge2AI program is an NIH-wide effort managed collaboratively by the NIH Common Fund, the National Center for Complementary and Integrative Health, the National Eye Institute, the National Human Genome Research Institute, the National Institute of Biomedical Imaging and Bioengineering, and the National Library of Medicine. To learn more about the Bridge2AI program, visit the Musings from the Mezzanine blog from the National Library of Medicine, and watch this video about the Bridge2AI program.

About the NIH Common Fund: The NIH Common Fund encourages collaboration and supports a series of exceptionally high-impact, trans-NIH programs. Common Fund programs are managed by the Office of Strategic Coordination in the Division of Program Coordination, Planning, and Strategic Initiatives within the NIH Office of the Director in partnership with the NIH Institutes, Centers, and Offices. More information is available at the Common Fund website: https://commonfund.nih.gov.

About the National Institutes of Health (NIH): NIH, the nation's medical research agency, includes 27 Institutes and Centers and is a component of the U.S. Department of Health and Human Services. NIH is the primary federal agency conducting and supporting basic, clinical, and translational medical research, and is investigating the causes, treatments, and cures for both common and rare diseases. For more information about NIH and its programs, visit http://www.nih.gov.

NIH...Turning Discovery Into Health

###

Read the original:
NIH launches Bridge2AI program to expand the use of artificial intelligence in biomedical and behavioral research - National Institutes of Health...

What Is Artificial Intelligence in Healthcare? – University of Colorado Anschutz Medical Campus

Casey Greene, PhD, chair of the University of Colorado School of Medicine's Department of Biomedical Informatics, is working toward a future of serendipity in healthcare: using artificial intelligence (AI) to help doctors receive the right information at the right time to make the best decision for a patient.

Finding that serendipity begins with the data. Greene said the Department's faculty works with data ranging from genomic-sequencing information to cell imaging and electronic health records. Each area has its own robust constraints: ethical and privacy protections to ensure that the data are being used in accordance with people's wishes.

His team uses petabytes of sequencing data that are available to anyone, Greene said. "I think it's empowering," he said, noting that anyone with an internet connection can conduct scientific research.

Following the selection or creation of a data set, Greene and other AI researchers at the CU Anschutz Medical Campus begin the core focus of AI work: building algorithms and programs that can detect patterns. The goal is to find links in these large data sets that ultimately offer better treatments for patients. Still, human insight brings essential perspectives to the research, Greene said.

"The algorithms do learn patterns, but they can be very different patterns and can become confused in interesting ways," he said. Greene used a hypothetical example of sheep and hillsides, two things often seen together. Researchers must teach the program to separate the two items, he said.

"A person can look at a hillside and see sheep and recognize sheep. They can also see a sheep somewhere unexpected and realize that the sheep is out of place. But these algorithms don't necessarily distinguish between sheep and hillsides at first because people usually take pictures of sheep on hillsides. They don't often take pictures of sheep at the grocery store, so these algorithms can start to predict that all hillsides have sheep," Greene said.

"It's a little bit esoteric when you're thinking about hillsides and sheep," he said. "But it matters a lot more if you're having algorithms that look at medical images, where you'd like to predict in the same way that a human would predict, based on the content of the image and not based on the surroundings." Encoding prior human knowledge (knowledge engineering) into these systems can lead to better healthcare down the line, Greene said.
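The failure mode Greene describes is often called spurious correlation, or shortcut learning: a model latches onto a background feature that co-occurs with the label in training but not in deployment. A minimal sketch of the effect, using synthetic data and a from-scratch logistic regression (the features and numbers are hypothetical illustrations, not the CU Anschutz group's actual code or data):

```python
import numpy as np

rng = np.random.default_rng(0)

def make_data(n, p_hillside_given_label):
    """Synthetic 'photos': a noisy sheep feature plus a crisp background feature."""
    sheep = rng.integers(0, 2, n)                      # label: is a sheep present?
    on_hill = np.where(rng.random(n) < p_hillside_given_label, sheep, 1 - sheep)
    x_sheep = sheep + rng.normal(0.0, 2.0, n)          # weak, noisy true signal
    x_hill = on_hill + rng.normal(0.0, 0.1, n)         # clean, spuriously correlated background
    return np.column_stack([x_sheep, x_hill]), sheep

def train_logreg(X, y, lr=0.1, steps=2000):
    """Plain-NumPy logistic regression via gradient descent."""
    w, b = np.zeros(X.shape[1]), 0.0
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
        w -= lr * X.T @ (p - y) / len(y)
        b -= lr * (p - y).mean()
    return w, b

def acc(X, y, w, b):
    return ((1.0 / (1.0 + np.exp(-(X @ w + b))) > 0.5) == y).mean()

# Training set: 95% of sheep photos are taken on hillsides.
X_tr, y_tr = make_data(5000, 0.95)
w, b = train_logreg(X_tr, y_tr)

# Shifted test set: sheep turn up off-hillside half the time.
X_te, y_te = make_data(5000, 0.50)

print("weight on hillside vs. sheep:", w[1], w[0])     # the model leans on the background
print("train acc:", acc(X_tr, y_tr, w, b), "shifted test acc:", acc(X_te, y_te, w, b))
```

Because the "hillside" feature is cleaner than the noisy "sheep" feature, the model weights it more heavily, and accuracy drops once sheep start appearing off-hillside, which is exactly the distribution shift Greene warns about for medical images.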

And when it comes to AI in healthcare, Greene said it is key to have open models and diverse teams doing the work. "It gives others a chance to probe these models with their own questions. And I think that leads to more trust."

In the Q&A below, Greene provides a general overview of the terms and technology behind AI alongside the challenges he and his fellow researchers face.

Read more from the original source:
What Is Artificial Intelligence in Healthcare? - University of Colorado Anschutz Medical Campus

The New Artificial Intelligence Of Car Audio Might Improve More Than Just Tunes – Forbes

As Artificial Intelligence is applied to car audio, the system can start to sense competing noise and adjust the experience dynamically.

Hollywood has perennially portrayed Artificial Intelligence (AI) as the operating layer of dystopian robots who replace unsuspecting humans and create the escalating, central conflict. In a best-case reference, you might imagine a young Haley Joel Osment playing David, the self-aware, artificial kid in Spielberg's polar-caps-thawed-and-flooded-coastal-cities world (sound familiar?) of A.I. Artificial Intelligence who (spoiler alert) only kills himself. Or maybe you recall Robin Williams's voice as Bicentennial Man who, once again, is a self-aware robot attempting to thrive who (once again on the spoiler alert) ends up being his only victim. And, of course, there's the nearly cliché reference to Terminator and its post-apocalyptic world with machines attempting to destroy humans and, well, (not-so-spoiler alert) lots of victims over a couple of decades. In none of these scenarios, however, do humans coexist with an improved life, let alone enhanced entertainment and safety.

That, however, is the new reality. Artificial Intelligence algorithms can be included in audio designs and continuously improved via over-the-air updates to improve the driving experience. And in direct contradiction to these Hollywood examples, such AI might actually improve a human's likelihood of survival.

How the car audio performs can now become an innovative, self-tuned system that enhances the experience for the user.

Until recently, all User Interface (UI) development, including audio, has required complex programming by expert coders over the standard thirty-six (36) months of a vehicle program. Sheet metal styling and electronic boxes are specified, sourced and developed in parallel, only to calibrate individual elements late in development. Branded sounds. Acoustic signatures. All separate initiatives within the same, anemic system design that has cost manufacturers billions.

But Artificial Intelligence has allowed a far more flexible and efficient way of approaching audio experience design. "What we're seeing is the convergence of trends," states Josh Morris, DSP Concepts' Machine Learning Engineering Manager. "Audio is becoming a more dominant feature within automotive, but at the same time you're seeing modern processors become stronger with more memory and capabilities."

And, therein, using a systems-focused development platform, Artificial Intelligence and these stronger processors provide drivers and passengers with a new level of adaptive, real-time responsiveness. "Instead of the historical need to write reams of code for every conceivable scenario, AI guides system responsiveness based on a learned awareness of environmental conditions and events," states Steve Ernst, DSP Concepts' Head of Automotive Business Development.

The very obvious way to use such a learning system is de-noising the vehicle so that premium audio can be tailored and improved despite having swapped to winter tires or other such ambient changes. But LG Electronics has developed algorithms running in the DSP Concepts Audio Weaver platform to allow voice enhancement of the movie's dialogue during rear-seat entertainment, accentuating it versus in-movie explosions and thereby allowing the passenger to better hear the critical content.

Another non-obvious aspect would be how branded audio sounds are orchestrated in the midst of other noises. Does this specific vehicle require the escalating boot-up sequence to play while other sounds like the radio and chimes are automatically turned down? Each experience can be adjusted.

How to deal with the ongoing, internal, external and ever-changing audio alerts will be a development challenge for autonomous and electric vehicles alike.

As the world races into both electric vehicles and autonomous driving, the frequency and needs of audible warnings will likely change drastically. For instance, an autonomous taxi's safety engineer cannot assume the passengers are anywhere near a visual display when a timely alert is required. And how audible is that alert for the nearly 25 million Americans with disabilities for whom autonomous vehicles should open new mobility possibilities? "Audio now isn't just for listening to your favorite song," states Ernst. "With autonomous driving, there are all sorts of alerts that are required to keep the driver engaged or to alert the non-engaged driver about things going on around them."

"And what makes it more challenging," injects Adam Levenson, DSP Concepts' Head of Marketing, "are all of the things being handled simultaneously within the car: telephony, immersive or spatial sound, engine noise, road noise, acoustic vehicle alert systems, voice systems, etc. We like to say the most complex audio product is the car."

For instance, imagine the scenario where a driver has enabled autonomous drive mode on the highway, has turned up his tunes and is pleasantly ignorant of an approaching emergency vehicle. At what accuracy (and distance) of siren detection using the vehicle's microphone(s) does the car alert its quasi-distracted driver? How must that alert be presented to overcome ambient noise, provide sufficient attention but not needlessly startle the driver? All of this can be tuned via pre-developed models, upfront training with different sirens and subsequent cloud-based tuning. "This is where the overall orchestration becomes really important," explains Morris. "We can take the output of the [AI's detection] model and direct that to different places in the car. Maybe you turn the audio down, trigger some audible warning signal and flash something on the dashboard for the driver to pay attention."
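The orchestration Morris describes, routing one detection result to several coordinated in-cabin responses, can be sketched as a simple policy function. The thresholds, action names, and noise-compensation rule below are hypothetical illustrations, not the DSP Concepts Audio Weaver API:

```python
def orchestrate_siren_alert(confidence, distance_m, cabin_db):
    """Map one siren-detection result to coordinated in-cabin actions."""
    actions = []
    if confidence < 0.6:
        # ignore weak detections rather than needlessly startle the driver
        return actions
    # duck the media so the warning is not masked by the tunes
    actions.append(("media_volume", "duck_to_20_percent"))
    # a louder cabin needs a louder chime; clamp to a comfortable range
    chime_db = int(min(max(cabin_db + 10, 65), 85))
    actions.append(("chime", f"play_at_{chime_db}_db"))
    if confidence > 0.85 and distance_m < 200:
        # high-confidence, nearby siren: add a visual cue for the driver
        actions.append(("dashboard", "flash_emergency_vehicle_icon"))
    return actions

# A confident detection 120 m away in a 70 dB cabin triggers all three responses.
print(orchestrate_siren_alert(0.9, 120, 70))
```

In a production system each tuple would drive a separate audio or UI subsystem, and the thresholds themselves would be what the cloud-based tuning updates.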

The same holds true for external alerts. For instance, a quiet electric vehicle may have tuned alarms for pedestrians. And so new calibrations can be created offline and downloaded to vehicles as software updates based upon the enabled innovation.

Innovation everywhere. And Artificial Intelligence feeding the utopian experience rather than creating Hollywoods dystopian world.

Here's my prediction of the week (and it's only Tuesday, folks): the next evolution of audio shall include a full, instantaneous feedback loop including the subtle, real-time user's delight. Yes, much of the current design likely improves the experience, but an ongoing calibration of User-Centered Design (UCD) might be additionally enhanced based upon the passengers' expressions, body language and comments, thereby individually tuning the satisfaction in real time. All of the enablers are there: camera, AI, processors and an adaptive platform.

Yes, we've previously heard of adaptive mood lighting and remote detection of boredom, stress, etc. to improve safety, but nothing that enhances the combined experience based upon real-time, learning algorithms of all user-pointed sensors.

Maybe I'm extrapolating too much. But just like Robin Williams's character, I've spanned two centuries, so maybe I'm also just sensitive to what humans might want.

See the original post:
The New Artificial Intelligence Of Car Audio Might Improve More Than Just Tunes - Forbes

The Worldwide Artificial Intelligence Robots Industry is Expected to Reach $38.4 Billion by 2027 – ResearchAndMarkets.com – Business Wire

DUBLIN--(BUSINESS WIRE)--The "Artificial Intelligence Robots Market Research Report by Offering (Hardware and Software), Robot Type, Technology, Deployment Mode, Application, Region (Americas, Asia-Pacific, and Europe, Middle East & Africa) - Global Forecast to 2027 - Cumulative Impact of COVID-19" report has been added to ResearchAndMarkets.com's offering.

The Global Artificial Intelligence Robots Market size was estimated at USD 5,860.10 million in 2021 and USD 8,003.14 million in 2022, and is projected to grow at a CAGR of 36.82% to reach USD 38,450.19 million by 2027.
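The figures are internally consistent: compounding the 2022 estimate at the stated CAGR over the five years to 2027 lands within rounding of the projected 2027 value. A quick check:

```python
start_2022 = 8_003.14                            # USD millions, 2022 estimate
cagr = 0.3682                                    # stated compound annual growth rate

# five compounding years take 2022 to 2027
projected_2027 = start_2022 * (1 + cagr) ** 5

# back out the exact rate implied by the two endpoints
implied_cagr = (38_450.19 / start_2022) ** (1 / 5) - 1

print(f"projected 2027: {projected_2027:,.2f}")  # within ~0.2% of the reported 38,450.19
print(f"implied CAGR: {implied_cagr:.2%}")       # close to the stated 36.82%
```

The small residual is just rounding in the published CAGR figure.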

Competitive Strategic Window:

The Competitive Strategic Window analyses the competitive landscape in terms of markets, applications, and geographies to help the vendor define an alignment or fit between their capabilities and opportunities for future growth prospects. It describes the optimal or favorable fit for the vendors to adopt successive merger and acquisition strategies, geography expansion, research & development, and new product introduction strategies to execute further business expansion and growth during a forecast period.

FPNV Positioning Matrix:

The FPNV Positioning Matrix evaluates and categorizes the vendors in the Artificial Intelligence Robots Market based on Business Strategy (Business Growth, Industry Coverage, Financial Viability, and Channel Support) and Product Satisfaction (Value for Money, Ease of Use, Product Features, and Customer Support) that aids businesses in better decision making and understanding the competitive landscape.

Market Share Analysis:

The Market Share Analysis offers the analysis of vendors considering their contribution to the overall market. It provides the idea of its revenue generation into the overall market compared to other vendors in the space. It provides insights into how vendors are performing in terms of revenue generation and customer base compared to others. Knowing market share offers an idea of the size and competitiveness of the vendors for the base year. It reveals the market characteristics in terms of accumulation, fragmentation, dominance, and amalgamation traits.

The report provides insights on the following pointers:

1. Market Penetration: Provides comprehensive information on the market offered by the key players

2. Market Development: Provides in-depth information about lucrative emerging markets and analyzes penetration across mature segments of the markets

3. Market Diversification: Provides detailed information about new product launches, untapped geographies, recent developments, and investments

4. Competitive Assessment & Intelligence: Provides an exhaustive assessment of market shares, strategies, products, certification, regulatory approvals, patent landscape, and manufacturing capabilities of the leading players

5. Product Development & Innovation: Provides intelligent insights on future technologies, R&D activities, and breakthrough product developments

The report answers questions such as:

1. What is the market size and forecast of the Global Artificial Intelligence Robots Market?

2. What are the inhibiting factors and impact of COVID-19 shaping the Global Artificial Intelligence Robots Market during the forecast period?

3. Which are the products/segments/applications/areas to invest in over the forecast period in the Global Artificial Intelligence Robots Market?

4. What is the competitive strategic window for opportunities in the Global Artificial Intelligence Robots Market?

5. What are the technology trends and regulatory frameworks in the Global Artificial Intelligence Robots Market?

6. What is the market share of the leading vendors in the Global Artificial Intelligence Robots Market?

7. What modes and strategic moves are considered suitable for entering the Global Artificial Intelligence Robots Market?

Market Dynamics

Drivers

Restraints

Opportunities

Challenges

Companies Mentioned

For more information about this report visit https://www.researchandmarkets.com/r/s13vtl

Continue reading here:
The Worldwide Artificial Intelligence Robots Industry is Expected to Reach $38.4 Billion by 2027 - ResearchAndMarkets.com - Business Wire

Artificial intelligence is playing a bigger role in cybersecurity, but the bad guys may benefit the most – CNBC

Security officers keep watch in front of an AI (Artificial Intelligence) sign at the annual Huawei Connect event in Shanghai, China, September 18, 2019.

Aly Song | Reuters

Artificial intelligence is playing an increasingly important role in cybersecurity for both good and bad. Organizations can leverage the latest AI-based tools to better detect threats and protect their systems and data resources. But cyber criminals can also use the technology to launch more sophisticated attacks.

The rise in cyberattacks is helping to fuel growth in the market for AI-based security products. A July 2022 report by Acumen Research and Consulting says the global market was $14.9 billion in 2021 and is estimated to reach $133.8 billion by 2030.

An increasing number of attacks such as distributed denial-of-service (DDoS) and data breaches, many of them extremely costly for the impacted organizations, are generating a need for more sophisticated solutions.

Another driver of market growth was the Covid-19 pandemic and shift to remote work, according to the report. This forced many companies to put an increased focus on cybersecurity and the use of tools powered with AI to more effectively find and stop attacks.

Looking ahead, trends such as the growing adoption of the Internet of Things (IoT) and the rising number of connected devices are expected to fuel market growth, the Acumen report says. The growing use of cloud-based security services could also provide opportunities for new uses of AI for cybersecurity.

Among the types of products that use AI are antivirus/antimalware, data loss prevention, fraud detection/anti-fraud, identity and access management, intrusion detection/prevention system, and risk and compliance management.

Up to now, the use of AI for cybersecurity has been somewhat limited. "Companies thus far aren't going out and turning over their cybersecurity programs to AI," said Brian Finch, co-leader of the cybersecurity, data protection & privacy practice at law firm Pillsbury Law. "That doesn't mean AI isn't being used. We are seeing companies utilize AI but in a limited fashion," mostly within the context of products such as email filters and malware identification tools that have AI powering them in some way.

"Most interestingly we see behavioral analysis tools increasingly using AI," Finch said. "By that I mean tools analyzing data to determine behavior of hackers to see if there is a pattern to their attacks timing, method of attack, and how the hackers move when inside systems. Gathering such intelligence can be highly valuable to defenders."

In a recent study, research firm Gartner interviewed nearly 50 security vendors and found a few patterns for AI use among them, says research vice president Mark Driver.

"Overwhelmingly, they reported that the first goal of AI was to 'remove false positives' insofar as one major challenge among security analysts is filtering the signal from the noise in very large data sets," Driver said. "AI can trim this down to a reasonable size, which is much more accurate. Analysts are able to work smarter and faster to resolve cyber attacks as a result."

In general, AI is used to help detect attacks more accurately and then prioritize responses based on real world risk, Driver said. And it allows automated or semi-automated responses to attacks, and finally provides more accurate modelling to predict future attacks. "All of this doesn't necessarily remove the analysts from the loop, but it does make the analysts' job more agile and more accurate when facing cyber threats," Driver said.

On the other hand, bad actors can also take advantage of AI in several ways. "For instance, AI can be used to identify patterns in computer systems that reveal weaknesses in software or security programs, thus allowing hackers to exploit those newly discovered weaknesses," Finch said.

When combined with stolen personal information or collected open source data such as social media posts, cyber criminals can use AI to create large numbers of phishing emails to spread malware or collect valuable information.

"Security experts have noted that AI-generated phishing emails actually have higher rates of being opened [for example, tricking possible victims into clicking on them and thus generating attacks] than manually crafted phishing emails," Finch said. "AI can also be used to design malware that is constantly changing, to avoid detection by automated defensive tools."

Constantly changing malware signatures can help attackers evade static defenses such as firewalls and perimeter detection systems. Similarly, AI-powered malware can sit inside a system, collecting data and observing user behavior up until it's ready to launch another phase of an attack or send out information it has collected with relatively low risk of detection. This is partly why companies are moving towards a "zero trust" model, where defenses are set up to constantly challenge and inspect network traffic and applications in order to verify that they are not harmful.
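The zero-trust posture reduces to one rule: authenticate and authorize every request on its own merits, rather than trusting anything that is already inside the perimeter. A minimal, hypothetical sketch (the check functions are stand-ins for real identity, device-posture, and policy services):

```python
def allow_request(req, token_valid, device_healthy, authorized):
    """Zero-trust gate: every request is re-verified, regardless of whether
    it originates inside or outside the corporate network."""
    if not token_valid(req["token"]):
        return False          # authenticate on every call, not once at the perimeter
    if not device_healthy(req["device_id"]):
        return False          # device posture is checked per request
    return authorized(req["user"], req["resource"])   # least-privilege policy check

# Toy policy: only alice may read /payroll, and only with token t1 from device d1.
req = {"token": "t1", "device_id": "d1", "user": "alice", "resource": "/payroll"}
print(allow_request(
    req,
    token_valid=lambda t: t == "t1",
    device_healthy=lambda d: d == "d1",
    authorized=lambda u, r: (u, r) == ("alice", "/payroll"),
))  # True
```

A stolen token, an unhealthy device, or an out-of-policy resource each independently denies the request, which is what frustrates the "sit quietly inside the network" attack pattern described above.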

But Finch said, "Given the economics of cyberattacks (it's generally easier and cheaper to launch attacks than to build effective defenses), I'd say AI will be, on balance, more hurtful than helpful. Caveat that, however, with the fact that really good AI is difficult to build and requires a lot of specially trained people to make it work well. Run-of-the-mill criminals are not going to have access to the greatest AI minds in the world."

Cybersecurity programs might have access to "vast resources from Silicon Valley and the like [to] build some very good defenses against low-grade AI cyber attacks," Finch said. "When we get into AI developed by hacker nation states [such as Russia and China], their AI hack systems are likely to be quite sophisticated, and so the defenders will generally be playing catch up to AI-powered attacks."

Read more here:
Artificial intelligence is playing a bigger role in cybersecurity, but the bad guys may benefit the most - CNBC