Archive for April, 2021

This Researcher Says AI Is Neither Artificial nor Intelligent – WIRED

Technology companies like to portray artificial intelligence as a precise and powerful tool for good. Kate Crawford says that mythology is flawed. In her book Atlas of AI, she visits a lithium mine, an Amazon warehouse, and a 19th-century phrenological skull archive to illustrate the natural resources, human sweat, and bad science underpinning some versions of the technology. Crawford, a professor at the University of Southern California and researcher at Microsoft, says many applications and side effects of AI are in urgent need of regulation.

Crawford recently discussed these issues with WIRED senior writer Tom Simonite. An edited transcript follows.

WIRED: Few people understand all the technical details of artificial intelligence. You argue that some experts working on the technology misunderstand AI more deeply.

KATE CRAWFORD: It is presented as this ethereal and objective way of making decisions, something that we can plug into everything from teaching kids to deciding who gets bail. But the name is deceptive: AI is neither artificial nor intelligent.

AI is made from vast amounts of natural resources, fuel, and human labor. And it's not intelligent in any kind of human intelligence way. It's not able to discern things without extensive human training, and it has a completely different statistical logic for how meaning is made. Since the very beginning of AI back in 1956, we've made this terrible error, a sort of original sin of the field, to believe that minds are like computers and vice versa. We assume these things are an analog to human intelligence, and nothing could be further from the truth.

You take on that myth by showing how AI is constructed. Like many industrial processes, it turns out to be messy. Some machine learning systems are built with hastily collected data, which can cause problems like face recognition services being more error prone on minorities.

We need to look at the nose-to-tail production of artificial intelligence. The seeds of the data problem were planted in the 1980s, when it became common to use data sets without close knowledge of what was inside, or concern for privacy. It was just raw material, reused across thousands of projects.

This evolved into an ideology of mass data extraction, but data isn't an inert substance; it always brings a context and a politics. Sentences from Reddit will be different from those in kids' books. Images from mugshot databases have different histories than those from the Oscars, but they are all used alike. This causes a host of problems downstream. In 2021, there's still no industry-wide standard to note what kinds of data are held in training sets, how they were acquired, or potential ethical issues.
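
Proposals in this direction do exist, such as the "datasheets for datasets" idea, which asks curators to document a data set's provenance and known issues. As a minimal sketch of what such a record might capture, assuming hypothetical field names rather than any published standard:

from dataclasses import dataclass, field
from typing import List

@dataclass
class DatasetDatasheet:
    """Illustrative documentation record for a training set; all fields are hypothetical."""
    name: str
    source: str                # where the raw material came from
    collection_method: str     # scraped, licensed, volunteered, etc.
    consent_obtained: bool     # did subjects agree to this use?
    known_biases: List[str] = field(default_factory=list)
    intended_uses: List[str] = field(default_factory=list)
    prohibited_uses: List[str] = field(default_factory=list)

mugshots = DatasetDatasheet(
    name="example-mugshots-v1",
    source="public arrest records",
    collection_method="bulk scrape, no individual consent",
    consent_obtained=False,
    known_biases=["over-represents heavily policed communities"],
    intended_uses=["research on archival bias"],
    prohibited_uses=["face recognition deployment"],
)
print(mugshots)

Even a record this thin would make the contexts Crawford describes, such as mugshots versus red-carpet photos, visible to anyone reusing the data.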

You trace the roots of emotion recognition software to dubious science funded by the Department of Defense in the 1960s. A recent review of more than 1,000 research papers found no evidence that a person's emotions can be reliably inferred from their face.

Emotion detection represents the fantasy that technology will finally answer questions that we have about human nature that are not technical questions at all. This idea that's so contested in the field of psychology made the jump into machine learning because it is a simple theory that fits the tools. Recording people's faces and correlating that to simple, predefined emotional states works with machine learning if you drop culture and context, and the fact that you might change the way you look and feel hundreds of times a day.

That also becomes a feedback loop: because we have emotion detection tools, people say we want to apply them in schools and courtrooms and to catch potential shoplifters. Recently, companies have been using the pandemic as a pretext to use emotion recognition on kids in schools. This takes us back to the phrenological past, this belief that you can detect character and personality from the face and the skull shape.

You contributed to recent growth in research into how AI can have undesirable effects. But that field is entangled with people and funding from the tech industry, which seeks to profit from AI. Google recently forced out two respected researchers on AI ethics, Timnit Gebru and Margaret Mitchell. Does industry involvement limit research questioning AI?

Visit link:
This Researcher Says AI Is Neither Artificial nor Intelligent - WIRED

The EU path towards regulation on artificial intelligence – Brookings Institution

Advances in AI are making their way across all products and services we interact with. Our cars are outfitted with tools that trigger automatic braking, platforms such as Netflix proactively suggest recommendations for viewing, Alexa and Google can predict our search needs, and Spotify can recommend songs and curate listening lists much better than you or I can.

Although the advantages of AI in our daily lives are undeniable, people are concerned about its dangers. Inadequate physical security, economic losses, and ethical issues are just a few examples of the damage AI could cause. In response to these dangers, the European Union is working on a legal framework to regulate artificial intelligence. Recently, the European Commission proposed its first legal framework on artificial intelligence. This proposal is the result of long and complicated work by the European authorities. It was preceded by a European Parliament resolution containing recommendations to the European Commission and, before that, by the 2017 Resolution and by the Report on the safety and liability implications of Artificial Intelligence, the Internet of Things, and Robotics, which accompanied the European Commission's 2020 White Paper on Artificial Intelligence.

In its Resolution of October 20, 2020, on the civil liability regime for artificial intelligence, the European Parliament acknowledged that the current legal system lacks specific rules on liability for AI systems. According to the legislative body, the abilities and autonomy of these technologies make it challenging to trace harmful outcomes back to specific human decisions. As a result, a person who suffers damage caused by an AI system generally cannot be compensated without proof of the operator's liability. For this reason, the Resolution included, in its Annex B, a proposal with recommendations to the European Commission, running to 17 pages, five chapters, and 15 articles.

Following the recommendations of the European Parliament, on April 21, 2021, the European Commission published its proposal for an AI legal framework, a 108-page document with nine annexes. This framework follows a risk-based approach and differentiates uses of AI according to whether they create an unacceptable risk, a high risk, or a low risk. A risk is unacceptable, and the corresponding use prohibited, if it poses a clear threat to people's security and fundamental rights. The European Commission has identified as examples of unacceptable risk uses of AI that manipulate human behavior and systems that allow social-credit scoring. The framework would thus prohibit, for example, an AI system similar to China's social-credit scoring.

The European Commission defines a high-risk system as one intended to be used as a safety component, subject to a compliance check by a third party. The concept of high risk is further specified in Annex III of the European Commission's proposal, which covers eight areas. These include AI systems related to critical infrastructure (such as road traffic and water supply), educational training (e.g., the use of AI systems to score tests and exams), safety components of products (e.g., robot-assisted surgery), and employee selection (e.g., resume-sorting software). AI systems that fall into the high-risk category are subject to strict requirements that they must meet before being placed on the market. These include adequate risk assessment, traceability of results, adequate information about the AI system provided to the user, and a guarantee of a high level of security. Furthermore, adequate human oversight must be in place.

If AI systems pose a low risk, they must comply with transparency obligations. In this case, users need to be aware that they are interacting with a machine. For example, in the case of a deepfake, where a person's image or video is manipulated to look like someone else, it must be declared that the content has been manipulated. The European Commission's draft does not regulate AI systems that pose little or no risk to European citizens, such as AI used in video games.
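
Condensed into a rough sketch, the tiered logic of the draft looks roughly like the mapping below. The tier names and examples come from this article, and the dictionary structure is purely illustrative, not the legal text:

RISK_TIERS = {
    "unacceptable risk": ["social-credit scoring",
                          "systems that manipulate human behavior"],
    "high risk": ["road traffic and water supply infrastructure",
                  "scoring tests and exams", "robot-assisted surgery",
                  "resume-sorting software"],
    "low risk": ["deepfakes and other manipulated media",
                 "systems users interact with"],
    "little or no risk": ["AI used in video games"],
}

OBLIGATIONS = {
    "unacceptable risk": "prohibited outright",
    "high risk": "risk assessment, traceability, user information, human oversight",
    "low risk": "transparency: users must know they face a machine or altered media",
    "little or no risk": "not regulated by the draft",
}

for tier, examples in RISK_TIERS.items():
    print(f"{tier}: {'; '.join(examples)} -> {OBLIGATIONS[tier]}")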

In its framework, the European Commission adopts an innovation-friendly approach. Notably, the Commission supports innovation through so-called AI regulatory sandboxes for non-high-risk AI systems, which provide an environment that facilitates the development and testing of innovative AI systems.

The Commission's proposal represents a very important step towards the regulation of artificial intelligence. As a next step, the European Parliament and the member states will have to adopt the Commission's proposal. Once adopted, the new legal framework will be directly applicable throughout the European Union. The framework will have a strong economic impact on many individuals, companies, and organizations, and its effects could extend beyond the European Union's borders, affecting foreign tech companies that operate within the EU. From this point of view, the need for a legal framework on artificial intelligence appears crucial. Indeed, AI systems have shown severe limitations in several cases, such as an Amazon recruiting system that discriminated against women, or a recent accident involving a Tesla driving in Autopilot mode that caused the death of two men. These examples invite serious reflection on the need to adopt similar legal frameworks in jurisdictions beyond the European Union.

Amazon and Google are general, unrestricted donors to the Brookings Institution. The findings, interpretations and conclusions in this piece are solely those of the authors and not influenced by any donation.

View post:
The EU path towards regulation on artificial intelligence - Brookings Institution

NATO tees up negotiations on artificial intelligence in weapons – C4ISRNet

COLOGNE, Germany – NATO officials are kicking around a new set of questions for member states on artificial intelligence in defense applications, as the alliance seeks common ground ahead of a strategy document planned for this summer.

The move comes amid a grand effort to sharpen NATO's edge in what officials call emerging and disruptive technologies, or EDT. Autonomous and artificial intelligence-enabled weaponry is a key element in that push, aimed at ensuring tech leadership on a global scale.

Exactly where the alliance falls on the spectrum between permitting AI-powered defense technology in some applications and disavowing it in others is expected to be a hotly debated topic in the run-up to the June 14 NATO summit.

"We have agreed that we need principles of responsible use, but we're also in the process of delineating specific technologies," David van Weel, the alliance's assistant secretary-general for emerging security challenges, said at a web event earlier this month organized by the Estonian Defence Ministry.

Different rules could apply to different systems depending on their intended use and the level of autonomy involved, he said. For example, an algorithm sifting through data as part of a back-office operation at NATO headquarters in Brussels would be subjected to a different level of scrutiny than an autonomous weapon.

In addition, rules are in the works for industry to understand the requirements involved in making systems adhere to a future NATO policy on artificial intelligence. The idea, van Weel said, is to present a menu of quantifiable principles for companies to determine what their products can live up to.

For now, alliance officials are teeing up questions to guide the upcoming discussion, he added.

Those range from basic introspections about whether AI-enabled systems fall under NATO's legal mandates, van Weel explained, to whether a given system is free of bias, meaning whether its decision-making tilts in a particular direction.
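
What a basic bias check might look like in practice: the sketch below compares a system's favorable-decision rates across groups, a simple demographic-parity test. The data, group names, and tolerance are invented for illustration; real audits need domain-specific measures, and this is not NATO's methodology.

from collections import defaultdict

def selection_rates(decisions):
    """decisions: iterable of (group, approved) pairs; returns approval rate per group."""
    totals, approved = defaultdict(int), defaultdict(int)
    for group, ok in decisions:
        totals[group] += 1
        approved[group] += int(ok)
    return {g: approved[g] / totals[g] for g in totals}

decisions = [("group_a", True), ("group_a", True), ("group_a", False),
             ("group_b", True), ("group_b", False), ("group_b", False)]

rates = selection_rates(decisions)
gap = max(rates.values()) - min(rates.values())
print(rates, f"parity gap: {gap:.2f}")
if gap > 0.2:  # invented tolerance, not an official threshold
    print("decision-making tilts toward one group; flag for review")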

Accountability and transparency are two more buzzwords expected to loom large in the debate. Accidents with autonomous vehicles, for example, will raise the question of who is responsible: manufacturers or operators?

The level of visibility into how systems make decisions will also be crucial, according to van Weel. "Can you explain to me as an operator what your autonomous vehicle does, and why it does certain things? And if it does things that we didn't expect, can we then turn it off?" he asked.

NATO's effort to hammer out common ground on artificial intelligence follows a push by the European Union to do the same, albeit without considering military applications. In addition, the United Nations has long been a forum for discussing the implications of weaponizing AI.

Some of those organizations have essentially reinvented the wheel every time, according to Frank Sauer, a researcher at the Bundeswehr University in Munich.

Regulators tend to focus too much on slicing and dicing through various definitions of autonomy and pairing them with potential use cases, he said.

"You have to think about this in a technology-agnostic way," Sauer argued, suggesting that officials place greater emphasis on the precise mechanics of human control. "Let's just assume the machine can do everything it wants: what role are humans supposed to play?"

Read more from the original source:
NATO tees up negotiations on artificial intelligence in weapons - C4ISRNet

NRC Exploring Potential Role of Artificial Intelligence in Commercial Nuclear Power Operations – JD Supra

As artificial intelligence (AI) and machine learning tools become more widely adopted in various products and industries, the NRC has begun studying what roles these technologies can play in commercial nuclear power operations. On April 21, as part of its study, the NRC's Office of Nuclear Regulatory Research requested public comments on the role of these technologies in the various phases of nuclear power generation, operational experience, and plant management. The NRC requests feedback on "the state of practice, benefits, and future trends related to [these technologies] computational tools and techniques in predictive reliability and predictive safety assessments in the commercial nuclear power industry." These are emerging analytical tools that, if used properly, show promise for improving reactor safety while also offering economic savings. Comments are due by May 21, 2021.
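
To make "predictive reliability" concrete: the sketch below flags gradual drift in a synthetic sensor channel before it strays far from its healthy baseline. This is a hedged illustration only; the signal, threshold, and scenario are invented, not an NRC-endorsed method.

import numpy as np

rng = np.random.default_rng(0)
baseline = rng.normal(1.0, 0.05, 500)  # known-good vibration readings
drifting = rng.normal(1.0, 0.05, 500) + np.linspace(0.0, 0.4, 500)  # slow degradation

def z_scores(readings, training):
    """Score readings against the distribution of known-good data."""
    mu, sigma = training.mean(), training.std()
    return (readings - mu) / sigma

scores = z_scores(drifting, baseline)
first_alert = int(np.argmax(scores > 3.0))  # first reading beyond 3 sigma
print(f"maintenance alert at reading {first_alert}, z = {scores[first_alert]:.1f}")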

The NRC intends to use the comments to enhance its understanding of the benefits of AI and machine learning as well as the potential pitfalls and challenges associated with their application.

The NRC has requested comments on a series of specific questions set out in its request.

The NRC is in the early stages of its review, and the agency does not promise to use the information collected in any formal regulatory action. Morgan Lewis will continue to follow the NRCs regulatory initiatives.

Read the rest here:
NRC Exploring Potential Role of Artificial Intelligence in Commercial Nuclear Power Operations - JD Supra

JG Wentworth Welcomes Andrey Zelenovsky as their Vice President of Artificial Intelligence and Machine Learning – PRNewswire

"We are thrilled to have Andrey's leadership and experience and believe he will be instrumental in continuing to expand the use of systems and technology within the company," said Ajai Nair, CIO. "His extensive background in application development and robotic process automation software brings a wealth of knowledge to the team that is necessary to accelerate a successful digital transformation, allowing us to more quickly determine measurable business benefits and better serve our customers."

Andrey joins the JG Wentworth team from UiPath, where he served as Director on its Competitive and Market Intelligence team. During his tenure at UiPath, he used data mining techniques to analyze the marketplace, enable sales, and predict cash flows.

"I am excited to join a market leader focused on helping customers improve their financial health. I look forward to this unique opportunity to be part of the evolution of JG Wentworth by leveraging AI and automation to positively impact our customers' lives," said Andrey.

Andrey earned his Bachelor of Science in both Information & Systems Engineering and Analytical Finance from Lehigh University and holds a Master of Science from The George Washington University and a Master of Business Administration from New York University's Leonard N. Stern School of Business.

About JG Wentworth
JG Wentworth is a financial services company that focuses on helping customers who are experiencing financial hardship or need to quickly access cash. Its services include debt relief, structured settlement payment purchasing, annuity payment purchasing, and lottery and casino payment purchasing. J.G. Wentworth was founded in 1991 and currently has offices in Chesterbrook, Pennsylvania; Radnor, Pennsylvania; and Rockville, Maryland. For more information about J.G. Wentworth, visit http://www.jgwentworth.com or use the information provided below.

SOURCE The JG Wentworth Company

Original post:
JG Wentworth Welcomes Andrey Zelenovsky as their Vice President of Artificial Intelligence and Machine Learning - PRNewswire