Archive for the ‘Artificial Intelligence’ Category

US government watchdog finds federal use of artificial intelligence poses threat to federal agencies and public – JURIST

The US Government Accountability Office (GAO) released a public report Tuesday stating that most federal agencies that use facial recognition technology systems are unaware of the privacy and accuracy-related risks that such systems pose to federal agencies and the American public.

After holding a forum on AI oversight, the GAO developed an artificial intelligence (AI) accountability framework focused on governance, data, performance, and monitoring to help federal agencies and others use AI responsibly.

Of the 42 federal agencies that the GAO surveyed, 20 reported owning or using facial recognition technology systems. The GAO confirmed that most federal agencies that use facial recognition technology are unaware of which AI systems their employees use; hence, the GAO remarked that these agencies have not fully assessed the potential risks of using these systems, such as risks related to privacy and accuracy. Consequently, the GAO also noted that the use of these AI systems can pose "[n]umerous risks" to federal agencies and the public.

The GAO, which has provided objective, non-partisan information on government operations for a century, said:

AI is a transformative technology with applications in medicine, agriculture, manufacturing, transportation, defense, and many other areas. It also holds substantial promise for improving government operations. Federal guidance has focused on ensuring AI is responsible, equitable, traceable, reliable, and governable. Third-party assessments and audits are important to achieving these goals. However, AI systems pose unique challenges to such oversight because their inputs and operations are not always visible.

In March, the American Civil Liberties Union (ACLU) requested information on how intelligence agencies use AI for national security. In its request, the ACLU warned that AI systems can be biased against marginalized communities and may pose a risk to civil rights.


Artificial intelligence and algorithms in the workplace – Lexology

Is removing subjective human choice from HR decisions going to create more problems than it solves?

We are all very aware of human failings when it comes to people management in the workplace, everything from unconscious bias through to wholly intentional discrimination. To that extent, handing over some management decisions to algorithms and AI (a term with no common definition, but which can cover a scenario where many algorithms work together with the ability to improve their own function) may seem like a no-brainer. The technology is certainly out there and being aggressively marketed.

The rise of the gig economy is tied into the increase in the use of algorithms and AI, as the software began to be used on platforms such as Uber in an attempt to optimise the deployment of workers. It has also been adopted in many other sectors and workplaces - including many global brands such as Amazon and Unilever. Common uses include recruitment, workforce management (eg task or shift allocation) and performance review. The benefits to business include faster decision making, more efficient workforce planning, improved speed of recruitment and the obvious reduction in opportunity for human bias.

However, the very nature of "algorithmic management" means an increase in the monitoring and collection of the data upon which the automated, or semi-automated, decisions are made. This is particularly so for performance monitoring and brings with it the risk of monitoring and processing data without appropriate consent. Removing humans from the decision-making process entirely also creates the potential for a lack of accountability. Additionally, if bias is embedded in an algorithm, this will increase rather than decrease the risk of discrimination.

In May 2021, the TUC and the AI Consultancy published a report, Technology Managing People: the legal implications, highlighting exactly these sorts of issues and calling for legal reform. One focus of the report is the lack of transparency in decision making that comes with the use of AI: the basis on which a decision is made is often unknown to those about whom it is made. The report points out that where it is difficult to identify when, how and by whom discrimination is introduced, it becomes more difficult for workers and employees to enforce their rights to protection from discrimination.

Other issues identified by the report include a lack of guidance for employers explaining when workers' privacy rights under the ECHR may be infringed by AI, and the risks posed by the lack of clarity in how the UK GDPR applies to the use of AI within the employment relationship. Although unfair dismissal rights provide some protection from dismissals that are factually inaccurate or opaque, and this could apply to AI-based decision-making processes, the need for qualifying service means this protection is not universal. The UK GDPR also protects employees by requiring, amongst other things, that all personal data processed by AI be accurate, but a complaint arising from such a breach cannot, in itself, be brought within the employment tribunal system.

The TUC report makes a number of recommendations on how these issues can be overcome. They include the provision of statutory guidance on how to avoid discrimination in the use of AI and on the interplay between AI and workers' right to privacy; the introduction of a statutory right not to be subjected to detrimental treatment (including dismissal) due to the processing of inaccurate data; a right to "explainability" in relation to high-risk AI systems; and a change to the UK's data protection regime to state that discriminatory data processing is always unlawful. However, even if any of these proposals are acted upon by the UK Government, they will take time to implement.

For employers looking for ideas on good practice in this area, the policy paper published by ACAS - My boss the algorithm: an ethical look at algorithms in the workplace - is a good starting point, although it should be noted this is not ACAS guidance. The recommendations look at what can be done at a human level within a business. Key to those recommendations is the need for human input - algorithms being used alongside human management rather than replacing it. This is something that the TUC report also picks up on, albeit more formally suggesting that there should be a comprehensive and universal right to human review of AI decisions made in the workplace that are "high risk". Both reports also highlight the need for good communication between employers and employees (or their representatives) to ensure technology is effectively used to improve workplace outcomes.

Given the growth in this area, further regulation to manage the use of algorithms and AI in the workplace seems inevitable. In the meantime, businesses making use of this technology need to understand exactly what it does, where the risks in its use lie, and why transparency about its use matters.


A History of Regular Expressions and Artificial Intelligence – kottke.org

I have an unusually good memory, especially for symbols, words, and text, but since I don't use regular expressions (ahem) regularly, they're one of those parts of computer programming and HTML/EPUB editing that I find myself relearning over and over each time I need it. How did something this arcane but powerful even get started? Naturally, its creators were trying to discover (or model) artificial intelligence.

That's the crux of this short history of regex by Buzz Andersen over at Why is this interesting?

The term itself originated with mathematician Stephen Kleene. In 1943, neuroscientist Warren McCulloch and logician Walter Pitts had just described the first mathematical model of an artificial neuron, and Kleene, who specialized in theories of computation, wanted to investigate what networks of these artificial neurons could, well, theoretically compute.

In a 1951 paper for the RAND Corporation, Kleene reasoned about the types of patterns neural networks were able to detect by applying them to very simple toy languages, so-called "regular languages." For example: given a language whose grammar allows only the letters A and B, is there a neural network that can detect whether an arbitrary string of letters is valid within the A/B grammar or not? Kleene developed an algebraic notation for encapsulating these regular grammars (for example, a*b* in the case of our A/B language), and the regular expression was born.

Kleene's work was later expanded upon by such luminaries as linguist Noam Chomsky and AI researcher Marvin Minsky, who formally established the relationship between regular expressions, neural networks, and a class of theoretical computing abstractions called finite state machines.
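To make the a*b* example concrete, here's a small Python sketch (mine, not from either post) of the equivalence the excerpt describes: the same toy regular language can be recognized by a regular expression or by a two-state finite state machine.

```python
import re

def matches_regex(s: str) -> bool:
    """Membership test for the toy A/B language using the regex a*b*."""
    return re.fullmatch(r"a*b*", s) is not None

def matches_fsm(s: str) -> bool:
    """The same test with an explicit two-state finite state machine.
    State 0: still reading a's; state 1: only b's are allowed from here on."""
    state = 0
    for ch in s:
        if ch == "a" and state == 0:
            continue        # stay in state 0
        elif ch == "b":
            state = 1       # switch to (or stay in) the b-only state
        else:
            return False    # an 'a' after a 'b', or a character outside the grammar
    return True

for s in ["", "aaabb", "abb", "aba", "ba"]:
    print(repr(s), matches_regex(s), matches_fsm(s))
```

Both functions accept exactly the strings made of some number of a's followed by some number of b's, which is the correspondence between regexes and finite state machines that the later theoretical work pinned down.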

This whole line of inquiry soon falls apart, for reasons both structural and interpersonal: Pitts, McCulloch, and Jerome Lettvin (another early AI researcher) have a big falling out with Norbert Wiener (of cybernetics fame), Minsky writes a book (Perceptrons) that throws cold water on the whole "simple neural network as a model of the human mind" thing, and Pitts drinks himself to death. Minsky later gets mixed up with Jeffrey Epstein's philanthropy/sex trafficking ring. The world of early theoretical AI is just weird.

But! Ken Thompson, one of the creators of UNIX at Bell Labs, comes along and starts using regexes for text editor searches in 1968. And renewed takes on neural networks come along in the 21st century that give some of that older research new life in machine learning and other algorithms. So, until Skynet/global warming kills us all, it all kind of works out? At least, intellectually speaking.

(Via Jim Ray)



Nexyad and HERE improve vehicle safety with next generation, cognitive artificial intelligence – GlobeNewswire

July 6, 2021

Paris and Amsterdam: Nexyad, the embedded, real-time platform for aggregating on-board data, and HERE Technologies, the leading location data and technology platform, are now working together to apply cognitive AI to road safety.

On-board data

Nexyad uses cognitive AI to aggregate extensive data sources in a vehicle in real time and interprets them to assess whether a given driving behaviour is appropriate in the surrounding context. Nexyad's assessment, which can easily be delivered to a driver via a mobile phone, can be calculated from only four sets of data: HERE map, Global Navigation Satellite System, electronic horizon and acceleration. Nexyad's platform is also scalable and can aggregate data from Advanced Driver Assistance Systems (ADAS) sensors, including camera, radar and lidar, as well as weather (visibility and temperature) and traffic data.

Maximum speed recommended for a specific vehicle at a specific time

Nexyad's real-time data aggregation platform provides two output values 20 times every second: the driver's lack of caution and the maximum recommended speed given the road conditions (legal speed limit, road roughness, topography of the road, weather, and traffic). Nexyad bases its analysis on several thousand road accident reports, using a set of rules from modern hybrid AI which includes knowledge-based systems, deep learning, neural gas, PAC (Probably Approximately Correct) learning, game theory, reinforcement learning, possibility theory and fuzzy logic.
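As a rough illustration only, here is a toy Python sketch of a loop that emits those two values; the names, scales and the simple heuristic are hypothetical stand-ins for the much richer rule-based and learned models the release describes, not Nexyad's actual SafetyNex software.

```python
from dataclasses import dataclass

@dataclass
class RoadContext:
    """Illustrative inputs only; field names are hypothetical."""
    legal_speed_kph: float
    roughness: float        # 0 (smooth) .. 1 (very rough)
    curvature: float        # 0 (straight) .. 1 (sharp bend)
    visibility: float       # 0 (none) .. 1 (clear)
    traffic_density: float  # 0 (empty) .. 1 (congested)

def recommended_max_speed(ctx: RoadContext) -> float:
    """Toy heuristic: scale the legal limit down by the most penalizing factor."""
    penalty = max(ctx.roughness, ctx.curvature, 1 - ctx.visibility, ctx.traffic_density)
    return ctx.legal_speed_kph * (1 - 0.5 * penalty)

def lack_of_caution(current_speed_kph: float, ctx: RoadContext) -> float:
    """0 means cautious; higher values mean the driver exceeds the recommended speed."""
    rec = recommended_max_speed(ctx)
    return max(0.0, (current_speed_kph - rec) / rec)

# Example: winding road in poor visibility with a 90 km/h limit, driver doing 85 km/h.
ctx = RoadContext(legal_speed_kph=90, roughness=0.2, curvature=0.6,
                  visibility=0.5, traffic_density=0.3)
print(round(recommended_max_speed(ctx), 1), round(lack_of_caution(85, ctx), 2))
```

A real system would run this kind of evaluation many times per second against live map, positioning and sensor data rather than a single hand-built context.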

By recommending a maximum cautious speed based on real-time data and context specific to every vehicle, driver and driving environment, Nexyad's approach goes much further than the European requirement for vehicles to be aware of the legal speed limit on each road segment (Intelligent Speed Assist). Nexyad's safety coach, SafetyNex, acts as a true co-pilot for the driver, providing real-time guidance to help anticipate possible emergency situations ahead that may lead to an accident. This proactive coaching activates while driving and has been demonstrated to reduce accident rates by at least 25%.[1]

A risk score for drivers and autonomous shuttles

Nexyad provides drivers with a score that reflects the risk associated with their driving behavior. Nexyad's platform is being used by insurers to provide recommendations to drivers and generate a risk profile. For example, Brightmile, a start-up incubated by Kamet, AXA's insurer tech studio, is using Nexyad's SafetyNex software as one of the parameters of its smartphone-based telematics solution for fleets. India's Montbleu also relies on Nexyad's SafetyNex for its smartphone-based app ROAD-Drive it Safe. Milla, the French autonomous electric shuttle, uses SafetyNex to adapt vehicle speed to driving conditions and to alert the service operator (on-board and/or off-board) to take appropriate action when the level of risk is estimated to be too high.

Nexyad has started to integrate the HERE HD Live Map to provide OEMs with Predictive Automotive Cruise Control services whereby appropriate speeds are not only recommended but automatically implemented. Moving forward, connected vehicles will use SafetyNex to assess the level of caution of their own driving and will be able to adopt the appropriate speed even in unknown road conditions.

"We found that the maps from HERE are accurate to the centimetre and constantly updated to the second. Every detail counts for us - the topography of the road, the exact positioning of the crossing, the location of a school. With our mission being to save lives, we cannot settle for anything less than the best," says Gérard Yahiaoui, CEO of Nexyad.

"Nexyad's SafetyNex software is one of a kind: not only does it provide a score for the driver's lack of caution, based on the environment in real time, it also recommends an appropriate driving speed. This is the future for Predictive Automotive Cruise Control systems, insurers and autonomous vehicles," says Gilles Martinelli, Director of Automotive at HERE Technologies.

Demos of Nexyad's safety coach SafetyNex can be found here and here.

Media Contacts

Adrianne Montgobert +49 151 72 11 67 81 adrianne.montgobert@here.com

Gerard Yahiaoui gyahiaoui@nexyad.net

About HERE Technologies

HERE, a location data and technology platform, moves people, businesses and cities forward by harnessing the power of location. By leveraging our open platform, we empower our customers to achieve better outcomes - from helping a city manage its infrastructure or a business optimize its assets to guiding drivers to their destination safely. To learn more about HERE, please visit here.com and 360.here.com

About Nexyad

Nexyad is a Paris-based AI start-up founded by former professors and researchers of AI and applied maths, specialized in road safety. We propose a unique, next-generation hybrid cognitive AI that improves road safety, avoids emergency situations and road accidents, and saves lives. We help our customers integrate our technology into their products: for insurance and fleets, for an automotive safety coach or Predictive ACC, and for autonomous vehicles that are aware of their level of caution given the driving context and able to adapt to unknown situations to keep that level of caution high enough.

[1] Impact assessment on road accident rate reduction of NEXYAD cognitive AI SafetyNex, available on demand.


Using A.I. to Find Bias in A.I. – The New York Times

In 2018, Liz O'Sullivan and her colleagues at a prominent artificial intelligence start-up began work on a system that could automatically remove nudity and other explicit images from the internet.

They sent millions of online photos to workers in India, who spent weeks adding tags to explicit material. The data paired with the photos would be used to teach A.I. software how to recognize indecent images. But once the photos were tagged, Ms. O'Sullivan and her team noticed a problem: The Indian workers had classified all images of same-sex couples as indecent.

For Ms. O'Sullivan, the moment showed how easily and often bias could creep into artificial intelligence. "It was a cruel game of Whac-a-Mole," she said.

This month, Ms. O'Sullivan, a 36-year-old New Yorker, was named chief executive of a new company, Parity. The start-up is one of many organizations, including more than a dozen start-ups and some of the biggest names in tech, offering tools and services designed to identify and remove bias from A.I. systems.

Soon, businesses may need that help. In April, the Federal Trade Commission warned against the sale of A.I. systems that were racially biased or could prevent individuals from receiving employment, housing, insurance or other benefits. A week later, the European Union unveiled draft regulations that could punish companies for offering such technology.

It is unclear how regulators might police bias. This past week, the National Institute of Standards and Technology, a government research lab whose work often informs policy, released a proposal detailing how businesses can fight bias in A.I., including changes in the way technology is conceived and built.

Many in the tech industry believe businesses must start preparing for a crackdown. "Some sort of legislation or regulation is inevitable," said Christian Troncoso, the senior director of legal policy for the Software Alliance, a trade group that represents some of the biggest and oldest software companies. "Every time there is one of these terrible stories about A.I., it chips away at public trust and faith."

Over the past several years, studies have shown that facial recognition services, health care systems and even talking digital assistants can be biased against women, people of color and other marginalized groups. Amid a growing chorus of complaints over the issue, some local regulators have already taken action.

In late 2019, state regulators in New York opened an investigation of UnitedHealth Group after a study found that an algorithm used by a hospital prioritized care for white patients over Black patients, even when the white patients were healthier. Last year, the state investigated the Apple Card credit service after claims it was discriminating against women. Regulators ruled that Goldman Sachs, which operated the card, did not discriminate, while the status of the UnitedHealth investigation is unclear.

A spokesman for UnitedHealth, Tyler Mason, said the company's algorithm had been misused by one of its partners and was not racially biased. Apple declined to comment.

More than $100 million has been invested over the past six months in companies exploring ethical issues involving artificial intelligence, after $186 million last year, according to PitchBook, a research firm that tracks financial activity.

But efforts to address the problem reached a tipping point this month when the Software Alliance offered a detailed framework for fighting bias in A.I., including the recognition that some automated technologies require regular oversight from humans. The trade group believes the document can help companies change their behavior and can show regulators and lawmakers how to control the problem.

Though they have been criticized for bias in their own systems, Amazon, IBM, Google and Microsoft also offer tools for fighting it.

Ms. O'Sullivan said there was no simple solution to bias in A.I. A thornier issue is that some in the industry question whether the problem is as widespread or as harmful as she believes it is.

"Changing mentalities does not happen overnight, and that is even more true when you're talking about large companies," she said. "You are trying to change not just one person's mind but many minds."

When she started advising businesses on A.I. bias more than two years ago, Ms. O'Sullivan was often met with skepticism. Many executives and engineers espoused what they called "fairness through unawareness," arguing that the best way to build equitable technology was to ignore issues like race and gender.

Increasingly, companies were building systems that learned tasks by analyzing vast amounts of data, including photos, sounds, text and stats. The belief was that if a system learned from as much data as possible, fairness would follow.

But as Ms. O'Sullivan saw after the tagging done in India, bias can creep into a system when designers choose the wrong data or sort through it in the wrong way. Studies show that face-recognition services can be biased against women and people of color when they are trained on photo collections dominated by white men.
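One common way practitioners surface that kind of skew is to compare a model's error rates across demographic groups on a labeled test set. A minimal Python sketch, with made-up records rather than anything from the article, might look like this:

```python
from collections import defaultdict

# Hypothetical evaluation records: (group, true_label, predicted_label).
# In a real audit these would come from a held-out, demographically labeled test set.
records = [
    ("group_a", 1, 1), ("group_a", 0, 0), ("group_a", 1, 1), ("group_a", 0, 1),
    ("group_b", 1, 0), ("group_b", 1, 0), ("group_b", 0, 0), ("group_b", 1, 1),
]

errors = defaultdict(lambda: [0, 0])  # group -> [mistakes, total]
for group, truth, pred in records:
    errors[group][0] += int(truth != pred)
    errors[group][1] += 1

for group, (wrong, total) in errors.items():
    print(f"{group}: error rate {wrong / total:.0%} ({wrong}/{total})")
# A large gap between groups is a red flag that the training data or labels are skewed.
```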

Designers can be blind to these problems. The workers in India, where gay relationships were still illegal at the time and where attitudes toward gays and lesbians were very different from those in the United States, were classifying the photos as they saw fit.

Ms. O'Sullivan saw the flaws and pitfalls of artificial intelligence while working for Clarifai, the company that ran the tagging project. She said she had left the company after realizing it was building systems for the military that she believed could eventually be used to kill. Clarifai did not respond to a request for comment.

She now believes that after years of public complaints over bias in A.I., not to mention the threat of regulation, attitudes are changing. In its new framework for curbing harmful bias, the Software Alliance warned against "fairness through unawareness," saying the argument did not hold up.

"They are acknowledging that you need to turn over the rocks and see what is underneath," Ms. O'Sullivan said.

Still, there is resistance. She said a recent clash at Google, where two ethics researchers were pushed out, was indicative of the situation at many companies. Efforts to fight bias often clash with corporate culture and the unceasing push to build new technology, get it out the door and start making money.

It is also still difficult to know just how serious the problem is. "We have very little data needed to model the broader societal safety issues with these systems, including bias," said Jack Clark, one of the authors of the A.I. Index, an effort to track A.I. technology and policy across the globe. "Many of the things that the average person cares about, such as fairness, are not yet being measured in a disciplined or a large-scale way."

Ms. O'Sullivan, a philosophy major in college and a member of the American Civil Liberties Union, is building Parity around a tool designed by and licensed from Rumman Chowdhury, a well-known A.I. ethics researcher who spent years at the business consultancy Accenture before becoming an executive at Twitter. Dr. Chowdhury founded an earlier version of Parity and built it around the same tool.

While other start-ups, like Fiddler A.I. and Weights and Biases, offer tools for monitoring A.I. services and identifying potentially biased behavior, Parity's technology aims to analyze the data, technologies and methods a business uses to build its services and then pinpoint areas of risk and suggest changes.

The tool uses artificial intelligence technology that can be biased in its own right, showing the double-edged nature of A.I. and the difficulty of Ms. O'Sullivan's task.

Tools that can identify bias in A.I. are imperfect, just as A.I. is imperfect. But the power of such a tool, she said, is to pinpoint potential problems: to get people looking closely at the issue.

Ultimately, she explained, the goal is to create a wider dialogue among people with a broad range of views. The trouble comes when the problem is ignored or when those discussing the issues carry the same point of view.

"You need diverse perspectives. But can you get truly diverse perspectives at one company?" Ms. O'Sullivan asked. "It is a very important question I am not sure I can answer."
