Archive for November, 2020

Artificial Intelligence Agreement Will Enhance Environmental Monitoring and Weather Prediction – HSToday

NOAA's Satellite and Information Service (NESDIS) has signed an agreement with Google to explore the benefits of Artificial Intelligence (AI) and Machine Learning (ML) for enhancing NOAA's use of satellite and environmental data.

Under this three-year Other Transaction Authority (OTA) agreement, NESDIS and Google will pilot specific AI- and ML-related projects to amplify NOAA's environmental monitoring, weather forecasting, climate research, and technical innovation.

"Strengthening NOAA's data processing through the use of big data, artificial intelligence, machine learning, and other advanced analytical approaches is critical for maintaining and enhancing the performance of our systems in support of public safety and the economy," said Neil Jacobs, Ph.D., acting NOAA administrator. "I am excited to utilize new authorities granted to NOAA to pursue cutting-edge technologies that will enhance our mission and better protect lives and property."

Research will initially focus on developing small-scale AI/ML systems. With the results yielded from those efforts, NOAA and Google Cloud will then focus on executing full-scale prototypes that NOAA could ultimately operationalize across its organization. If successful, this has the potential to be a significant leap in NOAA's ability to leverage the enormous volume and diversity of environmental data in order to enhance prediction, including for extreme weather events such as hurricanes and tornadoes.

"By bringing together NOAA and Google's expertise and talent, we can both resource and jointly explore AI/ML methods to achieve a more effective use of satellite and other environmental data," said Mike Daniels, vice president, Global Public Sector, Google Cloud. "Our goal is to increase scientific impact, and to improve the efficiency and effectiveness of environmental and satellite data, by leveraging Google Cloud's infrastructure and AI/ML know-how. All this will help improve weather forecasting and research, and unlock innovation."

Through this agreement, NOAA and Google will work together on a number of projects, offering hands-on AI training opportunities to the NOAA workforce. NOAA's AI strategy aims to infuse new technologies and approaches to increase efficiency and skills through partnerships, training, and AI-related research and development.

NOAA developed an AI/ML strategy and a Data Strategy to dramatically accelerate the use of data across the agency and with other key partners, maximize openness and transparency, deliver on mission, and steward resources while protecting quality, integrity, security, privacy, and confidentiality.

Originally posted here:
Artificial Intelligence Agreement Will Enhance Environmental Monitoring and Weather Prediction - HSToday

Responsible Artificial Intelligence Research and Innovation for International Peace and Security – World – ReliefWeb

In 2018 the United Nations Secretary-General identified responsible research and innovation (RRI) in science and technology as an approach for academia, the private sector and governments to work on the mitigation of risks that are posed by new technologies.

This report explores how RRI could help to address the humanitarian and strategic risks that may result from the development, diffusion and military use of artificial intelligence (AI) and thereby achieve arms control objectives on the military use of AI.

The report makes recommendations on how the arms control community could build on existing responsible AI initiatives and export control and compliance systems to engage with academia and the private sector in the governance of risks to international peace and security posed by the military use of AI.

Contents

1. Introduction

2. Addressing the risks posed by the military use of AI

3. Responsible research and innovation as a means to govern the development, diffusion and use of AI technology

4. Building on existing efforts to promote responsible research and innovation in AI

5. Key findings and recommendations

Luke Richards is a Research Assistant working on emerging military and security technologies.

Kolja Brockmann is a Researcher in the SIPRI Dual-Use and Arms Trade Control programme.

Dr Vincent Boulanin is a Senior Researcher on emerging military and security technologies.

Excerpt from:
Responsible Artificial Intelligence Research and Innovation for International Peace and Security - World - ReliefWeb

Over 80% of Health Execs Have Artificial Intelligence Plans in Place – HealthITAnalytics.com

November 02, 2020 – Eighty-three percent of healthcare organizations have implemented an artificial intelligence strategy, while another 15 percent are planning to develop one, according to a recent survey conducted by Optum.

Fifty-nine percent of leaders said they believe AI will deliver significant cost savings within three years, a 90 percent increase since 2018.

The results of the survey show that the healthcare industry's increase in AI adoption is driven by executives seeing more tangible benefits from the technology, including improved business performance and patient outcomes.

"These insights demonstrate that as those in late-stage AI implementation grow more familiar with AI, as well as the benefits it yields, they in turn become more comfortable and confident, generating momentum in which AI grows more beneficial more quickly," researchers stated.

"With AI, the more quickly organizations in early or middle stages of AI deployment move forward, the sooner they will overcome uncertainty and unlock the rewards of this powerful business tool."

The survey also revealed that the current healthcare crisis has catalyzed the use of AI in medical settings. More than half (56 percent) said that their response to COVID-19 has caused them to accelerate or expand their AI implementation strategies.

Additionally, of those who reported being in the late stages of AI development, 51 percent believe they'll achieve a return on AI investments faster because of their pandemic response.

"The need to have a strategy in place may have come into sharp focus during the COVID-19 pandemic, when organizations scrambled to use every tool at their disposal to overcome the unprecedented strain being placed on the industry," the team said.

"AI's ability to automate workflows and help simplify the communication and analysis of complex data can help alleviate that burden."

Researchers also noted that organizations that take too long to plan or deploy AI strategies risk falling behind their tech-savvy counterparts: Fifty-five percent of companies with $1 billion or more in revenue have an AI strategy in place, compared to just 37 percent of their lower-revenue peers.

In addition to cost savings, healthcare executives are looking to improve patients' health. Fifty-five percent of leaders rank improving health outcomes as the greatest impact of AI investments, while another 55 percent rank improving patient experiences as the top impact.

"Executives' emphasis on the consumer-focused benefits serves as a reminder that health care is first and foremost an industry focused on the well-being of those it serves, and that AI has implications for real people most in need," researchers said.

To realize these benefits, healthcare leaders are planning to apply AI to a range of tasks. Forty percent plan to monitor data from Internet of Things (IoT) devices such as wearable technologies, while 37 percent want to accelerate research for new therapeutic or clinical discoveries.

Another 37 percent want to use AI tools to assign codes for accurate diagnoses, facilities, and procedures.

"These applications are all well-suited to advanced analytics technologies," the team noted.

"Internet-connected remote patient monitoring devices enable more complete virtual health offerings, and AI can identify signals and trends within those data streams; AI can help prioritize potential investigative targets for treatments or vaccines; and automating business processes can enable organizations to achieve more even when resources are under duress," researchers said.
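As a rough illustration of how AI might surface signals in a remote monitoring stream, the sketch below flags anomalous readings in a simulated wearable heart-rate feed using a rolling z-score. The data, names, and thresholds are illustrative assumptions, not details drawn from the survey.

import numpy as np
import pandas as pd

def flag_anomalies(heart_rate, window=60, z_thresh=4.0):
    # Compare each reading against a rolling baseline of recent values.
    rolling_mean = heart_rate.rolling(window).mean()
    rolling_std = heart_rate.rolling(window).std()
    z_scores = (heart_rate - rolling_mean) / rolling_std
    # NaNs from the warm-up window compare as False, so they are never flagged.
    return z_scores.abs() > z_thresh

# Simulated stream: resting heart rate around 72 bpm with one injected spike.
rng = np.random.default_rng(0)
stream = pd.Series(rng.normal(72, 3, 500))
stream.iloc[400] = 140
print(stream[flag_anomalies(stream)])  # prints the spike at index 400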

While the industry appears to be increasingly adopting and deploying AI technology, several barriers to use and implementation still exist. Seventy-three percent of respondents said they had concerns about AI because of a lack of transparency in how the data is used or how the technology makes decisions. Just under 70 percent said the role of humans in the decision-making process was a top concern.

These findings highlight ongoing concerns that AI will take over the jobs of human clinicians, or come to conclusions that may not be based in evidence or provider expertise.

"As executives prepare to infuse AI into their operations, they should ask system designers to include an explainable interface whenever possible to help recipients of AI-driven predictions better understand what's influencing those recommendations," researchers said.

"Likewise, while routine processes can be targeted for automation, complex decisions should always include a human perspective within the workflow. That means the AI is augmenting human capabilities and helping individuals work at the top of their license. People's judgment remains the deciding factor and the human touch of health care is maintained."
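One minimal way to sketch the explainable-interface idea is permutation importance: shuffle each input in turn and measure how much the model's accuracy suffers. The model, feature names, and synthetic data below are hypothetical stand-ins, not anything described in the survey.

from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

# Synthetic stand-in for a clinical dataset; the feature names are invented.
X, y = make_classification(n_samples=1000, n_features=5, random_state=0)
feature_names = ["age", "blood_pressure", "bmi", "prior_visits", "a1c"]

model = RandomForestClassifier(random_state=0).fit(X, y)

# A large accuracy drop when a feature is shuffled means the predictions
# lean heavily on that input, which can be reported back to the user.
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for name, score in sorted(zip(feature_names, result.importances_mean),
                          key=lambda pair: -pair[1]):
    print(f"{name}: {score:.3f}")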

Many leaders have also recognized the importance of integrating social determinants of health data into AI algorithms. Fifty-nine percent have already incorporated non-clinical information into their AI plans to improve predictions about future health needs, while another 36 percent plan to do so.
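As a rough sketch of what folding non-clinical data into a predictive model can look like, the snippet below trains a readmission-risk classifier on clinical features augmented with social-determinants columns. All column names are illustrative assumptions, not fields from the survey.

import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

CLINICAL = ["age", "a1c", "systolic_bp"]
SDOH = ["housing_instability", "food_insecurity", "transport_access"]

def train_risk_model(df: pd.DataFrame):
    # Social-determinants columns sit alongside clinical ones in the
    # feature matrix, so the model can learn from both kinds of signal.
    X = df[CLINICAL + SDOH]
    y = df["readmitted_within_30d"]
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
    model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
    return model, model.score(X_test, y_test)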

The results of the survey demonstrate the steadily increasing prevalence of AI in healthcare, as well as the significant benefits the technology can bring.

"As AI grows more and more popular across all industries, healthcare executives will see increasing opportunities to capitalize on the insights it offers, setting the stage to radically alter their industry from the bottom line all the way to patient experience," the researchers concluded.

The results of this survey capture not only how AI is becoming the norm at an increasingly rapid pace, but also how its benefits, as well as the ways in which the industry can overcome pitfalls, will become more widespread as familiarity with AI grows.

Read more:
Over 80% of Health Execs Have Artificial Intelligence Plans in Place - HealthITAnalytics.com

Patenting Artificial Intelligence in Canada, the UK and Europe: A Primer – Lexology

Artificial intelligence (AI) has been the subject of human fascination and awe (and Hollywood movies) for many years. Who can forget the iconic scene in 2001: A Space Odyssey when the intelligent computer HAL 9000 says "I'm sorry, Dave. I'm afraid I can't do that," and refuses to let Dave back in through the pod bay doors because HAL knows that Dave is planning to disconnect it (him?). AI is busy learning, growing, computing, and in some cases inventing, all while we sleep.

It is therefore no surprise that the patentability of AI is a current focus of many innovators and Patent Offices around the world. Recent decisions and practice in Canada, the UK, and Europe bring into sharp focus the unique challenges of protecting these innovations. Below we consider some of the foibles of the AI-related patent practices of these jurisdictions.

Canada

The Backstory

Amazon's One-Click Patent

The patentability of computer-implemented inventions in Canada has been evolving since the early 2000s, when Amazon's patent application for what is known as its "one-click" internet shopping solution, which covered the automation of several steps ordinarily involved in placing an online shopping order on its website, was rejected by the Commissioner of Patents as lacking patentable subject matter. The Federal Court allowed Amazon's appeal, and the Federal Court of Appeal agreed, concluding that the Commissioner of Patents was required to purposively construe the claims to identify their essential elements and thereby the alleged invention (which it had not done), and to consider whether the invention (i) had a method of practical application; (ii) was a new and inventive method of applying skill and knowledge; and (iii) had a commercially useful result. The matter was referred back to the Commissioner, who ultimately granted Amazon's one-click patent.

The Patent Office's Problem-Solution Approach

Shortly thereafter, the Canadian Patent Office issued notices to the profession establishing that claim construction was to focus on identifying the problem-solution the invention addressed. As a result, Patent Office examiners could exclude claim elements from the construction if those elements were not essential to accomplishing the solution or merely provided context to the claim (e.g., a computer). In so doing, the Patent Office could (and did) designate AI-related claims as being directed to ineligible subject matter. Earlier this year, several patent applications for computer-related inventions were found to lack patentable subject matter because the examiners concluded the computers were not essential elements or simply provided working environments for data analysis.

The Federal Court rejects the Problem-Solution Test

However, a recent Federal Court decision criticized the Patent Office's problem-solution claim construction approach, holding that it failed to apply the purposive construction test established by the Supreme Court of Canada. The invention at issue covered a computer implementation of a new method for selecting and weighting investment portfolio assets that minimizes risk without impacting returns. Despite the claim language explicitly including a computer, the Patent Office found that the invention lacked subject matter patentability because the Office deemed the computer non-essential based on the problem-solution test.

The Federal Court reviewed and discarded the problem-solution test as incorrect, and reiterated that purposive construction is required to assess the essential elements of a claim in order to identify the claimed invention. It also observed that the Commissioner had failed to explain why she had excluded computer processing as a solution. The Court then sent the application back to the Patent Office for reassessment. The Patent Office did not appeal the Court's decision.

CIPO Updates its Patentable Subject Matter Guidance

Instead, the Canadian Intellectual Property Office released new guidance on patentable subject matter in response to the Choueifaty decision, which specifies that if "[a] computer merely processes [an] algorithm in a well-known manner and the processing of the algorithm on the computer does not solve any problem in the functioning of the computer, the computer and the algorithm do not form part of a single actual invention that solves a problem related to the manual or productive arts," and the claim is therefore unpatentable.

The Takeaway for AI Inventions

Accordingly, it seems that (at least for now) AI-related inventions employing a computer programmed to execute an algorithm, where the result has no physical existence or does not manifest a discernible physical effect or change (e.g., the generation or display of data), will likely remain difficult to protect. However, if AI-related inventions are employed in technical fields where the end result is physical and tangible (e.g., the control of an external process, or improving the functioning of computers), they will likely be patentable.

United Kingdom and Europe

The Backstory

Across the Atlantic, the situation is a little more settled. It is well established in UK and European case law that AI inventions can be patented if they provide a technical contribution. The law in the UK and Europe specifies that computer programs and mathematical methods "as such" are not patentable. However, over a number of years, the case law has evolved to define the meaning of "as such", and it is now generally recognised in both European and UK law that an invention that essentially lies in novel mathematics, but that is configured to control a technical process (for example, an anti-lock braking system), would be considered to have technical character, and so would not be considered a computer program "as such". In contrast, a computer program for providing a non-technical process, for example a personalised shopping itinerary, would likely not be considered to have technical character.

The EPO and UK courts, however, disagree on how best to assess the patentability of software-based inventions, and there is divergence between the practices of the EPO and UKIPO in this area. While the UK courts have indicated that the outcome of the two approaches is the same, many practitioners in the UK will say it is easier to obtain a patent for a software-based invention at the EPO than it is at the UKIPO.

Different Strokes

The EPO Approach

The EPO consider AI-related inventions generally to relate to computational models that have an abstract mathematical nature. Therefore, in order to patent an AI-related invention in Europe (e.g., a software-based mathematical method), technical character must be found outside the computational model itself. The EPO have published guidance for assessing whether or not such an invention has technical character: a mathematical method may contribute to the technical character of an invention if the method is either (i) applied to a field of technology, or (ii) adapted to a specific technical implementation. Thus, novel and inventive AI innovations that are applied to well-defined technical fields as provided for under option (i), such as image/speech processing, data encoding/encryption, optimising load distribution in a computer network, or controlling a physical process (e.g., a robotic arm or a self-driving car), would likely be patentable. Similarly, AI inventions that are adapted to a specific technical implementation as provided under option (ii), such as the adaptation of a polynomial reduction algorithm to exploit word-size shifts matched to the word size of the computer hardware, would also likely be patentable.

If, however, an AI-related invention does not satisfy option (i) or option (ii), the EPO will likely consider it to lack technical character. It is worth noting that the mere fact that an AI invention can be executed on physical hardware is not enough to demonstrate that it has technical character. Rather, the method itself must provide some technical contribution to be patentable.

The UKIPO Approach

Like the EPO, the UKIPO also considers AI-related inventions to relate to software-based mathematical models. Again, like the EPO, if the UKIPO deem an AI-related invention to provide a technical contribution, the invention should not be excluded from patentability. To determine whether a technical contribution is made, the UKIPO considers five signposts (known as the AT&T signposts) that may hint at a technical contribution, which can be broadly summarised as whether the invention (i) provides a technical effect outside of the computer, (ii) makes a computer operate in a different way, and (iii) overcomes a perceived problem rather than merely circumventing it. Generally, AI inventions related to image processing or the control of an external process (e.g., a robotic arm) are not excluded from patentability. Similarly, AI-related inventions that operate at the level of the architecture of the computer, or that make a computer operate in a new way, are also generally not excluded from patentability. However, if none of the AT&T signposts are satisfied, the invention will generally not be patentable; the mere fact that an AI invention can be executed on physical hardware is not enough to demonstrate that the invention has technical character.

The Takeaway

Would Mr. Choueifaty's invention be patentable in the UK or Europe?

The subject of Mr. Choueifaty's invention, described above, generally relates to a method involving tradeable financial assets. In the UK or Europe, the tradeable financial assets themselves will generally be considered non-technical, and the method would therefore only be patentable if it was judged to have technical character lying outside of that data. For example, if the invention lay in a novel encryption method for sending trading data between servers securely, this could provide the required technical character. However, if the effect of the invention was judged to be solely the solution of a business-related problem, for example improving how much money is made through trading, such a method would likely not be considered by the EPO and UKIPO to have technical character.

Read the original here:
Patenting Artificial Intelligence in Canada, the UK and Europe: A Primer - Lexology

Artificial Intelligence: The Next Front of the Fight Against Institutional Racism – IoT For All

It's been three months since the world was shaken by the brutal murder of George Floyd. The image of a white police officer kneeling on a Black citizen for 8 minutes and 46 seconds is still fresh in America's collective memory.

This wasn't the first case of racially-charged police brutality in the US. And unfortunately, it won't be the last one either.

Racism in this country has deep roots. It is a festering wound that's either ignored or treated with ineffective medicine. There's no end in sight to institutional racism in the country, and to make matters worse, this disease is finding new ways to spread.

Even Artificial Intelligence, which is said to be one of the biggest technological breakthroughs in modern history, has inherited some of the prejudices that sadly prevail in our society.

A few years ago, it would've been ridiculous to suggest that computer programs could be biased. After all, why would any software care about someone's race, gender, or color? But that was before machine learning and big data empowered computers to make their own decisions.

Algorithms now are enhancing customer support, reshaping contemporary fashion, and paving the way for a future where everything from law & order to city management can be automated.

"There's an extremely realistic chance we are headed towards an AI-enabled dystopia," explains Michael Reynolds of Namobot, a website that generates blog names with the help of big data and algorithms. "Erroneous datasets that contain human interpretation and cognitive assessments can make machine-learning models transfer human biases into algorithms."

This isn't something far off in the future; it is already happening.

Risk assessment tools are often used in the criminal justice system to predict the likelihood of an offender committing a crime again. In theory, this Minority Report-style technology is used to deter future crimes. However, critics believe these programs harm minorities.

ProPublica put this to the test in 2016, when it examined the risk scores of over 7,000 people. The non-profit organization analyzed data on prisoners arrested in Broward County, Florida, over a two-year period to see who was charged with new crimes in the following couple of years.

The result showed what many had already feared. The algorithm flagged Black defendants as twice as likely to commit future crimes as white ones. But as it turned out, only 20% of those who were predicted to engage in criminal activity actually did so.
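A disparity check of this kind takes only a few lines. The sketch below, which assumes hypothetical column names rather than ProPublica's actual data schema, computes per-group precision (how often "high risk" predictions proved correct) and the false positive rate (how often people who did not reoffend were flagged anyway), the two figures at the heart of the ProPublica analysis.

import pandas as pd

def audit_by_group(df: pd.DataFrame) -> pd.DataFrame:
    # Expects boolean columns 'predicted_high_risk' and 'reoffended',
    # plus a 'group' column (e.g., race as recorded in the dataset).
    rows = []
    for group, sub in df.groupby("group"):
        flagged = sub[sub["predicted_high_risk"]]
        non_reoffenders = sub[~sub["reoffended"]]
        rows.append({
            "group": group,
            # Of those flagged high risk, how many actually reoffended?
            "precision": flagged["reoffended"].mean(),
            # Of those who did not reoffend, how many were flagged anyway?
            "false_positive_rate": non_reoffenders["predicted_high_risk"].mean(),
        })
    return pd.DataFrame(rows)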

Similarly, facial recognition software used by police could end up disproportionately affecting African Americans. As per a study co-authored by the FBI, face recognition used in cities such as Seattle may be less accurate on Black people, leading to misidentification and false arrests.

Algorithm bias isn't just limited to the justice system. Black Americans are routinely denied access to programs that are designed to improve care for patients with complex medical conditions. Again, these programs are less likely to refer Black patients than white patients for the same ailments.

To put it simply, tech companies are feeding their own biases into the very systems that are designed to make fair, data-based decisions.

So what's being done to fix this situation?

Algorithmic bias is a complex issue, mostly because it's hard to observe. Programmers are often baffled to find that their algorithm discriminates against people on the basis of gender or color. Last year, Steve Wozniak revealed that Apple had given him a credit limit ten times higher than his wife's, even though she had a better credit score.

It is rare for consumers to discover such disparities. Studies that examine discrimination on the part of AI also take considerable time and resources. That's why advocates demand more transparency around how the entire system operates.

The problem merits an industry-wide solution, but there are hurdles along the way. Even when algorithms are revealed to be biased, companies do not allow others to analyze the data and aren't thorough with their investigations. Apple said it would look into the Wozniak issue, but so far nothing has come of it.

Bringing transparency would require companies to reveal their training data to observers or open themselves to a third-party audit. There's also the option for programmers to take the initiative and run tests to determine how their system fares when applied to individuals belonging to different backgrounds.

To ensure a certain level of transparency, the data used to train the AI and the data used to evaluate it should be made public. Getting this done should be easier in government matters; the corporate world, however, would resist such ideas.

According to a paper published by a New York University research center, the lack of diversity in AI has reached a moment of reckoning. The research indicates that the AI field is overwhelmingly white and male, and as a result risks reasserting power imbalances and historical biases.

"The industry has to acknowledge the gravity of the situation and admit that its existing methods have failed to address these problems," explained Kate Crawford, an author of the report.

With Black employees making up just 4% of the workforce at both Facebook and Microsoft, it's quite clear that minorities are not being fairly represented in the AI field. Researchers and programmers are a homogeneous population who come from a certain level of privilege.

If the pool were diversified, the data would be much more representative of the world we inhabit. Algorithms would gain perspectives that are currently being ignored, and AI programs would be much less biased.

Is it possible to create an algorithm that's completely free of bias? Probably not.

Artificial Intelligence is designed by humans, and people are never truly unbiased. However, programs created by individuals from dominant groups will only help perpetuate injustices against minorities. To make sure that algorithms don't become a tool of oppression against Black and Hispanic communities, public and private institutions should be pushed to maintain a level of transparency.

It's also imperative that big tech embraces diversity and elevates programmers belonging to ethnic minorities. Moves like these can save our society from becoming an AI dystopia.

Excerpt from:
Artificial Intelligence: The Next Front of the Fight Against Institutional Racism - IoT For All