Archive for the ‘Artificial Intelligence’ Category

Award-winner warns of the failures of artificial intelligence – The Australian Financial Review

On a positive note, he says AI has been identified as a key enabler for 79 per cent (134) of the targets that make up the United Nations Sustainable Development Goals (SDGs). However, 35 per cent (59 targets) may experience a negative impact from AI.

Unfortunately, he says, unless we start to address the inequities associated with the development of AI right now, we're in grave danger of not achieving the UN's SDGs. More pertinently, if AI is not properly governed and proper ethics are not applied from the beginning, it will have not only a negative physical impact but also a significant social impact globally.

"There are significant risks to human dignity and human autonomy," he warns.

"If AI is not properly governed and it's not underpinned by ethics, it can create socio-economic inequality and impact human dignity."

Part of the problem at present is that most AI is being developed for a commercial outcome, with estimates putting its commercial worth at $15 trillion a year by 2030.

Unfortunately, the path we're on poses some significant challenges.

Samarawickrama says AI ethics is underpinned by human ethics and the underlying AI decision-making is driven by data and a hypothesis created by humans.

The danger is that much AI is built off the back of the wrong hypothesis, because unintentional bias is built into the initial algorithm. Every conclusion the AI reaches follows from that hypothesis, which means every decision it makes, and the quality of that decision, rests on a human's ethics and biases.

For Samarawickrama, this huge flaw in AI can only be rectified if diversity, inclusion and socio-economic inequality are taken into account from the very beginning of the AI process.

We can only get to that point if we ensure we have good AI governance and ethics.

The alternative is we're basically set up to fail if we do not have that diversity of data.

Much of his work in Australia is with the Australian Red Cross and its parent body, the International Federation of Red Cross and Red Crescent Societies (IFRC), where he has built a framework linking AI to the seven Red Cross principles in a bid to connect AI to the IFRC's global goal of mitigating human suffering.

And while this is enhancing data literacy across the Red Cross, it also has potential uses in many organisations, because it's about increasing diversity and social justice around AI.

It's a complex problem to solve because there are a lot of perspectives as to what mitigating human suffering involves. It goes beyond socio-economic inequality and bias.

For example, the International Committee of the Red Cross is concerned about autonomous weapons and their impact on human suffering.

Samarawickrama says if we are going to achieve the UN SDGs as well as reap the benefits of AI's projected $15 trillion-a-year contribution to the global economy by 2030, we have to work hard to ensure we get AI right now by focussing on AI governance and ethics.

If we don't, we create a risk of failing to achieve those goals, and we need to reduce those risks by ensuring AI can bring the benefits and value it promises to all of us.

"It's why the Red Cross is a good place to start, because it's all about reducing human suffering wherever it's found, and we need to link that to AI," Samarawickrama says.

Excerpt from:
Award-winner warns of the failures of artificial intelligence - The Australian Financial Review

Meet Ithaca, Artificial Intelligence that will reveal hidden secrets of ancient civilisations – India Today

The earliest form of writing originated nearly 5,000 years ago in Mesopotamia (present-day Iraq), representing the Sumerian language. However, these early manuscripts, inscriptions and manuals have suffered the wrath of time. Historians have long worried about the missing texts that could give an insight into the life and culture of ancient civilisations; artificial intelligence has now come to their aid.

Named after the Greek island in Homer's Odyssey, Ithaca is the first deep neural network that will not only help restore the missing text of damaged inscriptions, but also identify their original location and establish the date they were written. Designed to assist and expand the historian's workflow, the AI achieved 62 per cent accuracy when restoring damaged texts on its own, and improved the accuracy of historians working alongside it from 25 per cent to 72 per cent.

In a study published in the journal Nature, researchers said that models such as Ithaca "can unlock the cooperative potential between artificial intelligence and historians, transformationally impacting the way that we study and write about one of the most important periods in human history."

Inspired by biological neural networks, deep neural networks can discover and harness intricate statistical patterns in vast quantities of data. Ithaca is one such development that merges the fields of technology, supercomputing, and ancient history to reveal unknown secrets hidden in plain sight.

Ithaca was trained to simultaneously perform the tasks of textual restoration, geographical attribution, and chronological attribution. Researchers trained the system on inscriptions written in the ancient Greek language and across the ancient Mediterranean world between the seventh century BC and the fifth century AD.
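
A rough illustration of what such a jointly trained, multi-task set-up can look like is sketched below in Python. This is not DeepMind's actual architecture or code; the class name, layer sizes, character vocabulary and the region/date-bin counts are assumptions made for the example.

    # Illustrative multi-task sketch: one shared encoder, three task heads,
    # mirroring Ithaca's tasks of textual restoration, geographical attribution
    # and chronological attribution. Not DeepMind's actual implementation.
    import torch
    import torch.nn as nn

    class MultiTaskEpigraphyModel(nn.Module):
        def __init__(self, vocab_size=128, hidden=256, num_regions=84, num_date_bins=100):
            super().__init__()
            self.embed = nn.Embedding(vocab_size, hidden)
            layer = nn.TransformerEncoderLayer(d_model=hidden, nhead=8, batch_first=True)
            self.encoder = nn.TransformerEncoder(layer, num_layers=4)
            self.restoration_head = nn.Linear(hidden, vocab_size)  # missing-character logits
            self.region_head = nn.Linear(hidden, num_regions)      # geographical attribution
            self.date_head = nn.Linear(hidden, num_date_bins)      # chronological attribution

        def forward(self, char_ids):
            x = self.encoder(self.embed(char_ids))        # (batch, seq, hidden)
            pooled = x.mean(dim=1)                         # one vector per inscription
            return {
                "restoration": self.restoration_head(x),   # per-position predictions
                "region": self.region_head(pooled),
                "date": self.date_head(pooled),
            }

    # Example: two inscriptions of 60 characters each (random ids as stand-ins).
    model = MultiTaskEpigraphyModel()
    outputs = model(torch.randint(0, 128, (2, 60)))

Training all three heads against a shared encoder is what allows evidence about wording, place and period to reinforce one another, which is the cooperative effect the researchers describe.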


The architecture of Ithaca was carefully tailored to each of the three epigraphic tasks, meaningfully handling long-term context information and producing interpretable outputs to enhance the potential for human-machine cooperation. "We believe machine learning could support historians to expand and deepen our understanding of ancient history, just as microscopes and telescopes have extended the realm of science," Yannis Assael, Staff Research Scientist at DeepMind, said in a statement.

Researchers said that as centuries went by, many ancient inscriptions were damaged and became partially or completely illegible. In some cases, they were removed from their original location, and they can be difficult to date. For instance, 2500 years ago, Greeks started writing on stone, ceramics, and metal, in order to register all sorts of transactions, laws, calendars, and oracles. Today, these archaeological findings reveal a lot of information on the Mediterranean area. Unfortunately, this tale is incomplete.

DeepMind has partnered with Google Cloud and Google Arts & Culture to launch a free interactive version of Ithaca.

Historians have already used Ithaca to shed light on current disputes in Greek history, including the dating of a series of important Athenian decrees thought to have been written before 446/445 BCE. Ithaca's average predicted date for the decrees is 421 BCE, aligning with the new evidence and demonstrating how machine learning might contribute to historical debates.

"Although it might seem like a small difference, this date shift has significant implications for our understanding of the political history of Classical Athens. We hope that models like Ithaca can unlock the cooperative potential between AI and the humanities, transformationally impacting the way we study and write about some of the most significant periods in human history," said Thea Sommerschield, Marie Curie Fellow at Ca' Foscari University of Venice and fellow at Harvard University's Center for Hellenic Studies (CHS).

Researchers are now working on other versions of the AI, which can be trained on different ancient languages to study other ancient writing systems, from Akkadian to Demotic and Hebrew to Mayan.

Go here to see the original:
Meet Ithaca, Artificial Intelligence that will reveal hidden secrets of ancient civilisations - India Today

Breakthrough Study Validates Artificial Intelligence as a Novel Biomarker in Predicting Immunotherapy Response – Published in Journal of Clinical…

The Journal of Clinical Oncology (JCO) is an international, peer-reviewed medical journal published by the American Society of Clinical Oncology (ASCO), with an impact factor (IF) of 44.54. This is the first time that research on AI biomarkers has been published in an international SCI-grade journal of JCO's prestige.

"Immune phenotyping of tumor microenvironment is a logical biomarker for immunotherapy, but objective measurement of such would be extremely challenging," said Professor Tony Mok from the Chinese University of Hong Kong, co-senior author of the journal. "This is the first study that adopted AI technology to define the tumor immune phenotype, and to demonstrate its ability in predicting treatment outcomes of anti-PD-L1 therapy in two large cohorts of patients with advanced non-small cell lung cancer."

Immune checkpoint inhibitors (ICIs) are a standard therapy for advanced non-small cell lung cancer (NSCLC) with programmed death-ligand 1 (PD-L1) expression. However, outcomes vary depending on the patient's tumor microenvironment.

Assessing the PD-L1 tumor proportion score (TPS) has predictive value for patients with high expression (50% or higher), who show a superior response to ICI therapy over standard chemotherapy. However, ICIs lose their potency in patients with a PD-L1 TPS between 1% and 49%, who show outcomes similar to chemotherapy. Therefore, a more accurate biomarker to predict ICI response in NSCLC patients with low PD-L1 expression is highly warranted.
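
A minimal sketch of those cut-offs, following the groupings described above, is shown below (the function name and the return labels are illustrative, not taken from the study):

    # Bucket a patient's PD-L1 tumor proportion score (TPS) into the groups
    # discussed above; thresholds follow the article's description.
    def pd_l1_group(tps_percent: float) -> str:
        if tps_percent >= 50:
            return "high expression (>=50%): superior response to ICI expected"
        if tps_percent >= 1:
            return "low expression (1-49%): outcomes similar to chemotherapy"
        return "negative (<1%)"

    print(pd_l1_group(65))  # high expression
    print(pd_l1_group(20))  # the low-expression group that needs a better biomarker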

While tumor-infiltrating lymphocytes (TILs) are promising biomarkers for predicting ICI treatment outcomes apart from PD-L1, clinical application remains challenging because TIL quantification involves a manual evaluation process subject to practical limitations such as interobserver bias and intensive labor. Employing AI's superhuman computational capabilities should open new possibilities for the objective quantification of TILs.

To validate immune phenotyping as a complementary biomarker in NSCLC, researchers divided 518 NSCLC patients into three groups based on their tumor microenvironment: inflamed, immune-excluded, and immune-desert. The three immune phenotype groups showed statistically significant differences in progression-free survival (PFS) and overall survival (OS).
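
Purely as an illustration of how a phenotype label might be derived once an AI model has quantified TIL densities in the tumor and surrounding stroma, a toy assignment is sketched below. The threshold, units and function are invented for the example and are not Lunit SCOPE IO's actual criteria.

    # Toy phenotype assignment from TIL densities (cells per mm^2, assumed units).
    def immune_phenotype(til_in_tumor: float, til_in_stroma: float,
                         threshold: float = 100.0) -> str:
        if til_in_tumor >= threshold:
            return "inflamed"          # lymphocytes infiltrate the tumor itself
        if til_in_stroma >= threshold:
            return "immune-excluded"   # lymphocytes present but confined to stroma
        return "immune-desert"         # few lymphocytes anywhere

    print(immune_phenotype(250.0, 300.0))  # inflamed
    print(immune_phenotype(20.0, 300.0))   # immune-excluded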

Furthermore, analysis of NSCLC patients with a PD-L1 TPS between 1% and 49% based on their immune phenotype found that the inflamed group showed a significantly higher objective response rate (ORR) and longer PFS compared with the non-inflamed groups. This shows Lunit SCOPE IO's ability to supplement PD-L1 TPS as a biomarker by accurately predicting immunotherapy response for patients with low PD-L1 TPS.

"Lunit has demonstrated through several abstracts the credibility of Lunit SCOPE IO as a companion diagnostic tool to predict immunotherapy treatment outcomes," said Chan-Young Ock, Chief Medical Officer at Lunit. "This study is a proof-of-concept that compiles all of our past research that elucidates Lunit AI's ability to optimize cancer treatment selection."

Last year, Lunit announced a strategic investment of USD 26 million from Guardant Health, Inc., a leading precision oncology company. Following this major collaboration intended to reshape and innovate the precision oncology landscape, Lunit continues to refine its global position by validating the effectiveness of its AI technology through various studies.


Read the rest here:
Breakthrough Study Validates Artificial Intelligence as a Novel Biomarker in Predicting Immunotherapy Response - Published in Journal of Clinical...

A European approach to artificial intelligence | Shaping …

The European approach to artificial intelligence (AI) will help build a resilient Europe for the Digital Decade where people and businesses can enjoy the benefits of AI. It focuses on 2 areas: excellence in AI and trustworthy AI. The European approach to AI will ensure that any AI improvements are based on rules that safeguard the functioning of markets and the public sector, and people's safety and fundamental rights.

To help further define its vision for AI, the European Commission developed an AI strategy to go hand in hand with the European approach to AI. The AI strategy proposed measures to streamline research, as well as policy options for AI regulation, which fed into work on the AI package.

The Commission published its AI package in April 2021, proposing new rules and actions to turn Europe into the global hub for trustworthy AI. This package consisted of:

Fostering excellence in AI will strengthen Europe's potential to compete globally.

The EU will achieve this by:

The Commission and Member States agreed to boost excellence in AI by joining forces on AI policy and investment. The revised Coordinated Plan on AI outlines a vision to accelerate, act, and align priorities with the current European and global AI landscape and bring the AI strategy into action.

Maximising resources and coordinating investments is a critical component of the Commission's AI strategy. Through the Digital Europe and Horizon Europe programmes, the Commission plans to invest €1 billion per year in AI. It will mobilise additional investments from the private sector and the Member States in order to reach an annual investment volume of €20 billion over the course of the digital decade.

The newly adopted Recovery and Resilience Facility makes €134 billion available for digital. This will be a game-changer, allowing Europe to amplify its ambitions and become a global leader in developing cutting-edge, trustworthy AI.

Access to high-quality data is an essential factor in building high-performance, robust AI systems. Initiatives such as the EU Cybersecurity Strategy, the Digital Services Act and the Digital Markets Act, and the Data Governance Act provide the right infrastructure for building such systems.

Building trustworthy AI will create a safe and innovation-friendly environment for users, developers and deployers.

The Commission has proposed 3 inter-related legal initiatives that will contribute to building trustworthy AI:

The Commission aims to address the risks generated by specific uses of AI through a set of complementary, proportionate and flexible rules. These rules will also provide Europe with a leading role in setting the global gold standard.

This framework gives AI developers, deployers and users the clarity they need by intervening only in those cases that existing national and EU legislation does not cover. The legal framework for AI proposes a clear, easy-to-understand approach based on four levels of risk: unacceptable risk, high risk, limited risk, and minimal risk.
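
An illustrative sketch of that four-tier structure follows; the example use cases are commonly cited illustrations of the proposal, not an authoritative mapping taken from the regulation text.

    # Four risk tiers of the proposed EU legal framework for AI (illustrative).
    from enum import Enum

    class RiskLevel(Enum):
        UNACCEPTABLE = "prohibited"
        HIGH = "strict obligations before and after market entry"
        LIMITED = "transparency obligations"
        MINIMAL = "no additional obligations"

    # Hypothetical example mapping, for illustration only.
    EXAMPLE_USES = {
        "social scoring by public authorities": RiskLevel.UNACCEPTABLE,
        "CV-screening software for recruitment": RiskLevel.HIGH,
        "customer-service chatbot": RiskLevel.LIMITED,
        "spam filter": RiskLevel.MINIMAL,
    }

    for use, level in EXAMPLE_USES.items():
        print(f"{use}: {level.name} risk ({level.value})")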

Read this article:
A European approach to artificial intelligence | Shaping ...

Digital assistants, artificial intelligence and the blurred lines of intervention – SUNY Oswego

How are Alexa, Siri and artificial intelligence (AI) impacting and intervening in dangerous situations in daily life? That's an evolving issue that SUNY Oswego communication studies faculty member Jason Zenor continues to explore, including in an award-winning publication.

In "If You See Something, Say Something: Can Artificial Intelligence Have a Duty to Report Dangerous Behavior in the Home," published in the Denver Law Review, Zenor recounted a 2017 incident in which, police reported, a jealous man threatening his girlfriend at gunpoint unknowingly caused their Amazon Echo's Alexa to call the police, leading to his arrest.

While the incident made national news - in part because of its relative rarity - Zenor noted it represents the tip of the iceberg for how AI is evolving to interact with daily online activity.

"You can find a few dozen stories over the last several years where Siri or Alexa save a life, such as with crime, accidents, heart attacks or the like," Zenor explained. "In those situations the victim has their phone or in-home device set up to recognize 'Call 911' or 'emergency.' This is a simple setting, and most are now set up for this automatically."

Zenor's publication, recognized as a top paper in the 2021 National Communication Association Conference's Freedom of Expression division, explored the trend further; his research found that smartphones and in-home devices are not capable of anything beyond responding to direct requests to call 911. But artificial intelligence is at work behind the scenes in other situations.

"Facebook and other tech companies can monitor things like bullying, hate speech and suicidal tendencies in online spaces through AI," Zenor noted. "But it is still looking for certain words and will respond with pre-approved responses like hotline numbers. In-home AI and other devices are set up to listen when we want them to -- but it still needs certain prompts, though the language ability is getting better."
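
A highly simplified sketch of the pattern Zenor describes, where a system reacts only to certain words and replies with a pre-approved message, might look like the following (the trigger phrases, response text and function name are placeholders, not any platform's actual rules):

    from typing import Optional

    # Placeholder word list and canned response; real systems are far more nuanced.
    TRIGGER_PHRASES = {"hurt myself", "end my life"}
    PREAPPROVED_RESPONSE = ("If you are in crisis, help is available. "
                            "Please contact your local crisis hotline.")

    def screen_post(text: str) -> Optional[str]:
        """Return the pre-approved message if the post contains a trigger phrase."""
        lowered = text.lower()
        if any(phrase in lowered for phrase in TRIGGER_PHRASES):
            return PREAPPROVED_RESPONSE
        return None  # no phrase matched, so no intervention

    print(screen_post("I don't know how to go on, I want to end my life"))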

"AI is not yet making a big difference in home safety - other than in-home audio and video serving as after-the-fact evidence - because of the complicated nature of doing so," Zenor noted. "In fact, it is more likely right now that perpetrators will use apps to track and surveil their victims than it is that an AI will help a victim, and certainly not proactively." But the field is making strides elsewhere.

"Outside the home, predictive AI is being used in both health care and law enforcement," Zenor said. "This is admirable in health care and similar to screenings that health care facilities now give to patients, such as for depression, drug abuse or safety in the home. But in both of these spheres it is only predictive, and we also run into issues of implicit bias programmed into AI leading to disparate treatment based on race, sexuality, income and other factors -- this is already happening in the justice system. Any time someone is reported, it can lead to unnecessary involvement with law enforcement or mental health systems that changes the trajectory of someone's life. This can have grave consequences."

Related to that, these questions also take into account such legal issues as privacy, criminal procedure, duty to report and liability.

"The first question that will need to be answered is what is the 'status' of our AI companions," Zenor explained. "The courts are slowly giving more privacy protection to our connected devices. No longer can law enforcement simply ask the tech companies for the data. But if AI advances to be more anthropomorphic and less of a piece of tech, then the question is what is the legal parallel? Is it law enforcement seizing our possessions -- as it does with phones and records -- or will the in-home AI be more like a neighbor or family member reporting us? The former invokes the Fourth Amendment; the latter does not, as committing a crime or harm is not protected by general privacy laws."

The other side of the coin involves proactive duties to report. "Generally, people have no duty to report," Zenor said. "The exception is certain relationships - such as teachers, doctors or parents - who would have a duty to report possible harms when it comes to those to whom they have a responsibility, such as students, patients or children."

Liability issues could complicate the picture even further, and could lead to unexpected lawsuits for companies using AI.

"Once you do act, then you do have a duty of due care," Zenor said. "If you do not use due care and it leads to an injury, then there could be liability. So, companies may open themselves up to liability if they program AI to be able to respond and it goes wrong. Conversely, if the companies could program AI to do this and choose not to, then there will certainly be at a minimum PR issues, but I could see it turning into class-action negligence cases when deaths do occur."

As with many issues related to the evolution of technology, individuals and society have to consider trade-offs.

"Ultimately, we have to consider how much more encroachment into our private lives we are willing to accept in exchange for protecting us from harm," Zenor noted. "This is not a new question; it arises every time we have an advancement in technology. Ultimately, privacy is a social construction in the law -- what we as a society consider to be the boundaries. We seem to become more comfortable as time passes; technology natives see no issue while older generations think of it as a gross violation."

As for the future of how and how often AI will intervene while attempting to provide help?

"My best guess is that there will be incidents that make the news where AI saves a life, and there will be public pressure to add more safety features to the technology," Zenor said. "AI will advance enough that machines become companions like our pets, so we will have a relationship with them that includes divulging private information that they could keep permanently. As it is today, we would expect that if our companion could save us, then they will try to; many people own pets as a form of protection or as service animals. The big issue from this will be liability. I assume companies will seek out liability protections, either through waivers in their terms of agreement or through special legislation similar to 'good Samaritan' laws."

Read this article:
Digital assistants, artificial intelligence and the blurred lines of intervention - SUNY Oswego