Archive for the ‘Artificial Intelligence’ Category

Artificial intelligence success is tied to ability to augment, not just automate – ZDNet

Artificial intelligence is only a tool, but what a tool it is. It may be elevating our world into an era of enlightenment and productivity, or plunging us into a dark pit. To help achieve the former, and not the latter, it must be handled with a great deal of care and forethought. This is where technology leaders and practitioners need to step up and help pave the way, encouraging the use of AI to augment and amplify human capabilities.

Those are some of the observations drawn from Stanford University's recently released report, the next installment of its One-Hundred-Year Study on Artificial Intelligence, an extremely long-term effort to track and monitor AI as it progresses over the coming century. The report, the first of which appeared in 2016, was prepared by a panel of 17 experts convened by a standing committee, and urges that AI be employed as a tool to augment and amplify human skills: "All stakeholders need to be involved in the design of AI assistants to produce a human-AI team that outperforms either alone. Human users must understand the AI system and its limitations to trust and use it appropriately, and AI system designers must understand the context in which the system will be used."

AI has the greatest potential when it augments human capabilities, and this is where it can be most productive, the report's authors argue. "Whether it's finding patterns in chemical interactions that lead to a new drug discovery or helping public defenders identify the most appropriate strategies to pursue, there are many ways in which AI can augment the capabilities of people. An AI system might be better at synthesizing available data and making decisions in well-characterized parts of a problem, while a human may be better at understanding the implications of the data -- say if missing data fields are actually a signal for important, unmeasured information for some subgroup represented in the data -- working with difficult-to-fully quantify objectives, and identifying creative actions beyond what the AI may be programmed to consider."

Complete autonomy "is not the eventual goal for AI systems," the co-authors state. There needs to be "clear lines of communication between human and automated decision makers. At the end of the day, the success of the field will be measured by how it has empowered all people, not by how efficiently machines devalue the very people we are trying to help."

The report examines key areas where AI is developing and making a difference in work and lives:

Discovery: "New developments in interpretable AI and visualization of AI are making it much easier for humans to inspect AI programs more deeply and use them to explicitly organize information in a way that facilitates a human expert putting the pieces together and drawing insights," the report notes.

Decision-making: AI helps summarize data too complex for a person to easily absorb. "Summarization is now being used or actively considered in fields where large amounts of text must be read and analyzed -- whether it is following news media, doing financial research, conducting search engine optimization, or analyzing contracts, patents, or legal documents. Nascent progress in highly realistic (but currently not reliable or accurate) text generation, such as GPT-3, may also make these interactions more natural."
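For readers who want a concrete sense of the summarization capability described above, here is a minimal sketch using the open-source Hugging Face transformers library; the model name and sample text are illustrative choices, not anything drawn from the report.

```python
# Minimal sketch: abstractive summarization with a pretrained model.
# Assumes the Hugging Face `transformers` library is installed; the model
# below is an illustrative public checkpoint, not one named by the report.
from transformers import pipeline

summarizer = pipeline("summarization", model="sshleifer/distilbart-cnn-12-6")

document = (
    "The quarterly filing describes revenue growth across three segments, "
    "notes rising input costs, and flags a pending patent dispute that "
    "management believes is unlikely to have a material impact."
)

summary = summarizer(document, max_length=40, min_length=10, do_sample=False)
print(summary[0]["summary_text"])
```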

AI as assistant: "We are already starting to see AI programs that can process and translate text from a photograph, allowing travelers to read signage and menus. Improved translation tools will facilitate human interactions across cultures. Projects that once required a person to have highly specialized knowledge or copious amounts of time may become accessible to more people by allowing them to search for task and context-specific expertise."

Language processing: Language processing technology advances have been supported by neural network language models, including ELMo, GPT, mT5, and BERT, that "learn about how words are used in context -- including elements of grammar, meaning, and basic facts about the world -- from sifting through the patterns in naturally occurring text. These models' facility with language is already supporting applications such as machine translation, text classification, speech recognition, writing aids, and chatbots. Future applications could include improving human-AI interactions across diverse languages and situations."
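To illustrate what "learning how words are used in context" looks like in practice, here is a hedged sketch of a masked-word query against BERT, one of the models named above; it assumes the Hugging Face transformers library, and the example sentence is invented.

```python
# Minimal sketch of a masked language model filling in a blank from context,
# the kind of "words in context" learning the report describes.
# Assumes the Hugging Face `transformers` library is installed.
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="bert-base-uncased")

# BERT predicts plausible words for the [MASK] slot from surrounding context.
for prediction in fill_mask("The defendant was found [MASK] by the jury."):
    print(f'{prediction["token_str"]:>10}  score={prediction["score"]:.3f}')
```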

Computer vision and image processing: "Many image-processing approaches use deep learning for recognition, classification, conversion, and other tasks. Training time for image processing has been substantially reduced. Programs running on ImageNet, a massive standardized collection of over 14 million photographs used to train and test visual identification programs, complete their work 100 times faster than just three years ago." The report's authors caution, however, that such technology could be subject to abuse.
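As a hedged sketch of the kind of visual identification trained on ImageNet, the following classifies a single photo with a pretrained ResNet; it assumes PyTorch/torchvision, and "photo.jpg" is a placeholder path.

```python
# Minimal sketch: classifying an image with a ResNet pretrained on ImageNet,
# the benchmark collection mentioned above. Assumes torch and torchvision.
import torch
from PIL import Image
from torchvision import models

weights = models.ResNet50_Weights.IMAGENET1K_V2
model = models.resnet50(weights=weights).eval()
preprocess = weights.transforms()            # resize, crop, normalize as the model expects

image = preprocess(Image.open("photo.jpg")).unsqueeze(0)   # add a batch dimension
with torch.no_grad():
    probs = model(image).softmax(dim=1)

top5 = probs.topk(5)
for p, idx in zip(top5.values[0], top5.indices[0]):
    print(f"{weights.meta['categories'][idx.item()]:>20}  {p.item():.3f}")
```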

Robotics: "The last five years have seen consistent progress in intelligent robotics driven by machine learning, powerful computing and communication capabilities, and increased availability of sophisticated sensor systems. Although these systems are not fully able to take advantage of all the advances in AI, primarily due to the physical constraints of the environments, highly agile and dynamic robotics systems are now available for home and industrial use."

Mobility: "The optimistic predictions from five years ago of rapid progress in fully autonomous driving have failed to materialize. The reasons may be complicated, but the need for exceptional levels of safety in complex physical environments makes the problem more challenging, and more expensive, to solve than had been anticipated. The design of self-driving cars requires integration of a range of technologies including sensor fusion, AI planning and decision-making, vehicle dynamics prediction, on-the-fly rerouting, inter-vehicle communication, and more."

Recommender systems: The AI technologies powering recommender systems have changed considerably in the past five years, the report states. "One shift is the near-universal incorporation of deep neural networks to better predict user responses to recommendations. There has also been increased usage of sophisticated machine-learning techniques for analyzing the content of recommended items, rather than using only metadata and user click or consumption behavior."

The report's authors caution that "the use of ever-more-sophisticated machine-learned models for recommending products, services, and content has raised significant concerns about the issues of fairness, diversity, polarization, and the emergence of filter bubbles. While these problems require more than just technical solutions, increasing attention is paid to technologies that can at least partly address such issues."
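As a toy sketch of the neural-recommender idea described above (not any production system), the following learns user and item embeddings whose dot product predicts whether a user will click an item; the sizes and interaction data are invented.

```python
# Toy sketch of a deep-learning recommender: embeddings whose dot product
# predicts a click probability. Purely illustrative; real systems add content
# features, context, and far larger models.
import torch
import torch.nn as nn

class DotProductRecommender(nn.Module):
    def __init__(self, n_users, n_items, dim=32):
        super().__init__()
        self.user = nn.Embedding(n_users, dim)
        self.item = nn.Embedding(n_items, dim)

    def forward(self, users, items):
        # Predicted click probability for each (user, item) pair.
        return torch.sigmoid((self.user(users) * self.item(items)).sum(dim=1))

model = DotProductRecommender(n_users=1000, n_items=5000)
users = torch.tensor([3, 3, 42])
items = torch.tensor([10, 999, 7])
clicks = torch.tensor([1.0, 0.0, 1.0])           # observed behavior

loss = nn.functional.binary_cross_entropy(model(users, items), clicks)
loss.backward()                                   # one gradient step of a real training loop
```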

Go here to read the rest:
Artificial intelligence success is tied to ability to augment, not just automate - ZDNet

Justice, Equity, And Fairness: Exploring The Tense Relationship Between Artificial Intelligence And The Law With Joilson Melo – Forbes

Law Library

AI is becoming more and more prevalent in society, and many people are wondering how it will affect the law: how artificial intelligence is impacting our laws, and what we can expect from future interactions between technology and the legal system.

The conversation surrounding the relationship between AI and law also touches quite clearly on the ability to rely on Artificial Intelligence to deliver fair decisions and to enhance the legal system's delivery of equity and justice.

In this article, I share insights from my conversations on this topic with Joilson Melo, a Brazilian law expert and programmer whose devotion to equity and fairness led to a historic change in the Brazilian legal system in 2019. That change mainly affected the PJe (Electronic Judicial Process), the system through which cases are processed digitally in Brazil.

As a law student, Melo filed a request for action in the National Council of Justice (CNJ) against the Court of Justice of Mato Grosso, resulting in a decision allowing citizens to file applications electronically in the Special Court without a lawyer, provided the value of the case does not exceed 20 minimum wages. Melo's petition revealed provisions in the law that already allowed for this, and his victory enforced those provisions. The results for the underprivileged and those who couldn't afford lawyers have been immense.

On the relationship between AI and the Law, Melo remains a bit on the fence:

"The purpose of the law is justice, equity, and fairness," says Melo.

"Any technology that can enhance that is welcome in the legal arena. Artificial Intelligence has already been shown to be as biased as the data it is fed. This instantly places a greater burden of care on us to ensure that it is adopted through a careful process, in the legal space and in society at large."

The use of AI to predict jury verdicts has been around for quite some time now, but it's unclear whether or not an algorithm can accurately predict human behavior. There have also been studies suggesting that machine learning algorithms can be used to help judges make sentencing decisions based on factors such as recidivism rates.

In theory, this seems to solve a glaring problem: the algorithmic tools are supposed to predict criminal behavior and help judges make decisions based on data-driven recommendations rather than their gut.

However, as Melo explains, this also presents some deep concerns for legal experts: "AI risk assessment tools run on algorithms that are trained on historical crime data. In countries like America and many other nations, law enforcement has already been accused of targeting certain minorities, and this is reflected in the high number of these minorities in prisons. If the same data is fed in, the AI is going to be just as biased."

Melo continues, "Besides, the algorithms turn correlative insights into causal insights. If the data shows that a particular neighborhood is correlated with high recidivism, it doesn't prove that this neighborhood caused recidivism in any given case. These are things that a judge should be able to tell from his observations. Anything less is a far cry from justice, unless we figure out a way to cure the data."
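Melo's correlation-versus-causation warning can be made concrete with a small, entirely synthetic simulation; the scenario, features, and numbers below are invented for illustration and do not come from any real risk-assessment tool.

```python
# Toy illustration of the point: if heavier policing in one neighborhood
# inflates recorded re-arrests, a model learns that the neighborhood
# "predicts" recidivism even though it causes nothing.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000
neighborhood = rng.integers(0, 2, n)              # 1 = heavily policed area
true_risk = rng.normal(0, 1, n)                   # actual behavior, same in both areas
# Recorded re-arrest depends on behavior AND on how closely people are watched.
recorded = (true_risk + 1.5 * neighborhood + rng.normal(0, 1, n)) > 1.0

model = LogisticRegression().fit(np.column_stack([neighborhood, true_risk]), recorded)
print("weight on neighborhood:", model.coef_[0][0])   # large, despite no causal effect
print("weight on behavior:    ", model.coef_[0][1])
```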

As we continue developing smarter technologies, data protection becomes an increasingly important issue. This includes protecting private information from hackers and complying with GDPR standards across all industries that collect personal data about their customers.

Apart from the GDPR, not many countries have passed targeted laws that affect big data. According to the 2018 Technology Survey by the International Legal Technology Association, 100 percent of law firms with 700 or more lawyers use AI tools or are pursuing AI projects.

If this trend continues and is met with a willingness by courts and judges to adopt AI, law firms and courts would eventually fall into the category of organizations that must abide by data protection rules. Attorney-client privilege could be at risk from a hack, and so could court decisions.

The need for stringent local laws that help regulate how data is received and managed has never been more clear, and this is why it is shocking that many governments have not acted faster.

Joilson Melo

"Many governments have an unholy alliance with tech giants and the companies that deal most with data," says Melo.

"These companies are at the front of national development and are the most attractive national propositions for investment. Leaders do not want to stifle them or be seen as impeding technological advancement. However, if the law must apply equally, governments should take a cue from the GDPR and start now, before we see privacy violations worse than those we already have."

As Artificial Intelligence becomes more ingrained in our lives, so do the legal issues that surround it.

One of the most prevalent legal questions is whether machines should be trusted to control self-driving cars and deadly weapons. Self-driving cars are already on the market, but they have a long way to go before they could replace human drivers. The technology has not been perfected yet and will require huge strides forward before we can say with certainty that these vehicles are safe for society at large.

The larger concern is how easily these algorithms can be hacked and influenced externally.

AI and Weapons/War Crimes: The possibility of autonomous weapons systems has been touted in many spheres as a powerful way to identify and eliminate threats. This has been met with strong pushback, for obvious reasons. Empathy, concession, and a certain big-picture approach have always played crucial roles in war and border security. These are traits that we still cannot inculcate into an algorithm.

Human Rights Questions: One of the main questions in the area of human rights concerns algorithmic transparency. There have been reports of people losing jobs, being denied loans, and being placed on no-fly lists with no explanation other than "it was an algorithmic determination."

If this pattern persists the risk to human rights is enormous. The questions of cybersecurity vulnerabilities, AI bias, and lack of contestability are also concerns that touch on human rights.

Melo's concern seems more targeted at the law and how it can be preserved as an arbiter of justice and enforcer of human rights, and he rightly points out the implications of leaving these questions unanswered:

"Deciding not to adopt AI in society and legal systems is deciding not to move forward as a civilization," Melo comments.

"However, deciding to adopt AI blindly would see us move back into a barbaric civilization. I believe the best approach is a piecemeal one: take a step, spot the problems, eliminate them, and then take another step."

The law and legal practitioners stand to gain a lot from a proper adoption of AI into the legal system. Legal research is one area where AI has already begun to help. AI can streamline the thousands of results an internet or directory search would otherwise provide, offering a smaller, digestible handful of relevant authorities for legal research. This is already proving helpful, and with more targeted machine learning it would only get better.
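As a hedged illustration of that narrowing step (not a depiction of any commercial legal research product), here is a minimal sketch that ranks a handful of hypothetical authorities against a query using plain TF-IDF similarity.

```python
# Minimal sketch: rank candidate authorities by similarity to a research query.
# Uses simple TF-IDF for illustration; real tools use far richer models.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

authorities = [
    "Opinion on electronic filing requirements in small-claims court",
    "Ruling on attorney-client privilege and seized email records",
    "Decision on data protection duties of financial institutions",
]
query = ["electronic filing without a lawyer in small-claims proceedings"]

vectorizer = TfidfVectorizer().fit(authorities + query)
scores = cosine_similarity(vectorizer.transform(query),
                           vectorizer.transform(authorities))[0]

# Print the most relevant authorities first.
for score, title in sorted(zip(scores, authorities), reverse=True):
    print(f"{score:.2f}  {title}")
```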

The possible benefits go on: automated drafting of documents and contracts, document review, and contract analysis are among those considered imminent.

Many have even considered the possibilities of AI in helping with more administrative functions like the appointment of officers and staff, administration of staff, and making the citizens aware of their legal rights.

A future without AI seems bleak and laborious for most industries, including the legal profession, and while we must march on, we must be cautious about our strategies for adoption. This point is better put in the words of Joilson Melo: "The possibilities are endless, but the burden of care is very heavy; we must act and evolve cautiously."


Read the rest here:
Justice, Equity, And Fairness: Exploring The Tense Relationship Between Artificial Intelligence And The Law With Joilson Melo - Forbes

New report assesses progress and risks of artificial intelligence – Brown University

"While many reports have been written about the impact of AI over the past several years, the AI100 reports are unique in that they are both written by AI insiders (experts who create AI algorithms or study their influence on society as their main professional activity) and part of an ongoing, longitudinal, century-long study," said Peter Stone, a professor of computer science at the University of Texas at Austin, executive director of Sony AI America and chair of the AI100 standing committee. "The 2021 report is critical to this longitudinal aspect of AI100 in that it links closely with the 2016 report by commenting on what's changed in the intervening five years. It also provides a wonderful template for future study panels to emulate by answering a set of questions that we expect future study panels to reevaluate at five-year intervals."

Eric Horvitz, chief scientific officer at Microsoft and co-founder of the One Hundred Year Study on AI, praised the work of the study panel.

"I'm impressed with the insights shared by the diverse panel of AI experts on this milestone report," Horvitz said. The 2021 report does a great job of describing where AI is today and where things are going, including an assessment of the frontiers of our current understandings and guidance on key opportunities and challenges ahead on the influences of AI on people and society.

In terms of AI advances, the panel noted substantial progress across subfields of AI, including speech and language processing, computer vision and other areas. Much of this progress has been driven by advances in machine learning techniques, particularly deep learning systems, which have made the leap in recent years from the academic setting to everyday applications.

In the area of natural language processing, for example, AI-driven systems are now able to not only recognize words, but understand how they're used grammatically and how meanings can change in different contexts. That has enabled better web search, predictive text apps, chatbots and more. Some of these systems are now capable of producing original text that is difficult to distinguish from human-produced text.

Elsewhere, AI systems are diagnosing cancers and other conditions with accuracy that rivals trained pathologists. Research techniques using AI have produced new insights into the human genome and have sped the discovery of new pharmaceuticals. And while the long-promised self-driving cars are not yet in widespread use, AI-based driver-assist systems like lane-departure warnings and adaptive cruise control are standard equipment on most new cars.

Some recent AI progress may be overlooked by observers outside the field, but it actually reflects dramatic strides in the underlying AI technologies, Littman says. One relatable example is the use of background images in video conferences, which became a ubiquitous part of many people's work-from-home lives during the COVID-19 pandemic.

"To put you in front of a background image, the system has to distinguish you from the stuff behind you, which is not easy to do just from an assemblage of pixels," Littman said. "Being able to understand an image well enough to distinguish foreground from background is something that maybe could happen in the lab five years ago, but certainly wasn't something that could happen on everybody's computer, in real time and at high frame rates. It's a pretty striking advance."
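The learned part of that task is the person-segmentation model itself; the sketch below shows only the final compositing step, assuming a per-pixel foreground mask has already been produced by some segmentation model, with placeholder frame sizes and dummy data.

```python
# Sketch of the compositing step Littman describes: given a per-pixel
# foreground mask (1.0 = person) from any segmentation model, blend the live
# camera frame over a chosen virtual background.
import numpy as np

def replace_background(frame: np.ndarray,        # H x W x 3 camera frame
                       background: np.ndarray,   # H x W x 3 virtual background
                       mask: np.ndarray) -> np.ndarray:  # H x W, values in [0, 1]
    alpha = mask[..., None].astype(np.float32)   # broadcast the mask over color channels
    return (alpha * frame + (1.0 - alpha) * background).astype(frame.dtype)

# Example with dummy data; in a real app this runs per frame, in real time.
frame = np.zeros((720, 1280, 3), dtype=np.uint8)
background = np.full((720, 1280, 3), 255, dtype=np.uint8)
mask = np.zeros((720, 1280), dtype=np.float32)
composited = replace_background(frame, background, mask)
```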

As for the risks and dangers of AI, the panel does not envision a dystopian scenario in which super-intelligent machines take over the world. The real dangers of AI are a bit more subtle, but are no less concerning.

Some of the dangers cited in the report stem from deliberate misuse of AI: deepfake images and video used to spread misinformation or harm people's reputations, or online bots used to manipulate public discourse and opinion. Other dangers stem from "an aura of neutrality and impartiality associated with AI decision-making in some corners of the public consciousness, resulting in systems being accepted as objective even though they may be the result of biased historical decisions or even blatant discrimination," the panel writes. This is a particular concern in areas like law enforcement, where crime prediction systems have been shown to adversely affect communities of color, or in health care, where embedded racial bias in insurance algorithms can affect people's access to appropriate care.

As the use of AI increases, these kinds of problems are likely to become more widespread. The good news, Littman says, is that the field is taking these dangers seriously and actively seeking input from experts in psychology, public policy and other fields to explore ways of mitigating them. The makeup of the panel that produced the report reflects the widening perspective coming to the field, Littman says.

"The panel consists of almost half social scientists and half computer science people, and I was very pleasantly surprised at how deep the knowledge about AI is among the social scientists," Littman said. "We now have people who do work in a wide variety of different areas who are rightly considered AI experts. That's a positive trend."

Moving forward, the panel concludes that governments, academia and industry will need to play expanded roles in making sure AI evolves to serve the greater good.

View post:
New report assesses progress and risks of artificial intelligence - Brown University

A Closer Look at Artificial Intelligence-Inspired Policing Technologies – University of Virginia

Artificial intelligence-inspired policing technology and techniques like facial recognition software and digital surveillance continue to find traction and champions among law enforcement agencies, but at what cost to the public?

Some cities, like Wilmington, North Carolina, have even adopted AI-driven policing, where technology like ShotSpotter identifies gunshots and their locations. The software also recommends a "next best action" to patrol officers based on their current location, police data on past crime records, time of day, and housing and population density.

Renée Cummings, data activist in residence at the University of Virginia's School of Data Science, warns that the rules of citizenship are changing with the development of AI-inspired policing technologies. She explains, "If the rules are changing, then the public needs to have a voice and has the right to provide input on where we need to go with these technologies, as well as to demand solutions that are accountable, explainable and ethical."

As artificial intelligence is used toward the development of technology-based solutions, Cummings' research questions the ethical use of technology to collect and track citizen data, aiming to hold agencies more accountable and to provide citizens greater transparency.

"Law enforcement, national security, and defense agencies are spending a lot of money on surveillance tools with little oversight as to their impact on communities and an individual's right to privacy," Cummings said. "We're creating a tool that would give citizens the ability to see how these powerful tools are used and how they impact our lives."

Cummings and a team of data science graduate students are developing an algorithmic tool to evaluate the impact of AI-inspired law enforcement technologies. Their goal is to create an algorithmic force score that would eventually be used in an application that tracks technologies currently used by law enforcement agencies by force and zip code.
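The article does not spell out how such a score would be computed, so the following is a purely hypothetical sketch of the general idea (a weighted tally of technologies deployed per zip code); the category names and weights are invented and should not be read as the team's actual model.

```python
# Hypothetical sketch only: technologies and weights below are invented to
# illustrate an aggregate "force score" per zip code, not the UVA team's model.
from dataclasses import dataclass

WEIGHTS = {"facial_recognition": 3.0, "gunshot_detection": 2.0,
           "predictive_patrol": 2.5, "license_plate_readers": 1.5}

@dataclass
class ZipCodeProfile:
    zip_code: str
    deployed: dict  # technology name -> count of deployments

    def force_score(self) -> float:
        # Weighted sum of deployed technologies; higher = more algorithmic force.
        return sum(WEIGHTS.get(tech, 1.0) * count
                   for tech, count in self.deployed.items())

profile = ZipCodeProfile("22903", {"facial_recognition": 1, "gunshot_detection": 4})
print(profile.zip_code, profile.force_score())
```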

Sarah Adams and Claire Setser, both students in the online M.S. in Data Science program, said they chose the project because they wanted to put their data science skills to work for the public good. Cummings praised their effort: "The algorithmic foundation was created with tremendous effort by Sarah and Claire, who went through massive amounts of existing data to create an algorithmic force model."

Adams said she wanted to work on a capstone project that contributed to and supported the ongoing efforts toward increasing police accountability and citizen activism. "Our cohort chose our capstone projects at the beginning of 2021, which was less than one year after the loss of George Floyd, and our country had been in civil unrest for quite some time. I was inspired by Renée Cummings' energy and passion for data ethics and its application in criminology."

Setser agreed: "I was attracted to this capstone project because of the possibility to enact and help push for real change. Citizens have a right to understand the technologies that are used to police them and surveil their lives every day. The problem is that this information is not readily available, so the idea of creating a tool to educate the public and encourage dialogue was of great interest to me."

Students in the M.S. in Data Science program are required to complete a capstone project sponsored by corporate, government and non-profit organizations. Students collaborate closely with sponsors and faculty across disciplines to tackle applied problems and generate data-driven solutions. Capstone projects range in scope and focus, and past projects have explored health disparities, consumer behavior, election forecasting, disease diagnosis, mental health, credit card fraud and climate change.

"The capstone project was a valuable opportunity to combine and implement almost all of the skills and knowledge that we gained throughout the program," Setser said. "It's an opportunity to experience the data pipeline from beginning to end while providing your sponsor a better understanding of the data. This is incredibly rewarding."

The project's next stage is to fine-tune and test, and Cummings and her team hope to collaborate with UVA and the wider Charlottesville community. "What makes this so exciting is that we're creating something brand new and adding new insights into emerging technology. Sarah and Claire have been amazing, delivering something extraordinary in such a short space of time. It really speaks to their expertise, determination, and commitment toward AI for the public and social good."

Cummings joined the School of Data Science in 2020 as its first data activist in residence. She is a criminologist, criminal psychologist, therapeutic jurisprudence specialist, AI ethicist and AI strategist. Her research places her on the frontline of artificial intelligence for social good, justice-oriented AI design, and social justice in AI policy and governance. She is the founder of Urban AI and a community scholar at Columbia University.

Link:
A Closer Look at Artificial Intelligence-Inspired Policing Technologies - University of Virginia

San Diego ranks relatively high in national ranking for artificial intelligence innovation – The San Diego Union-Tribune

Artificial Intelligence is jockeying to become the focal point of U.S. technology innovation in coming years, and San Diego is among the cities well positioned to be a frontrunner in this looming AI race.

A new report from the Metropolitan Policy Program at the Brookings Institution ranked more than 360 cities based on their AI economic prowess.

Bay Area metros San Francisco and San Jose topped the list, according to Brookings, a public policy think tank based in Washington, D.C. They were followed by 13 "early adopter" cities that have managed to claw out a toehold in AI, including San Diego.

"Not everywhere should be looking to artificial intelligence for a major change in its economy, but places like San Diego really need to," said Mark Muro, a Brookings fellow and co-author of the report. "I think the costs of being out of position on it are pretty high for San Diego, and the benefits of leveraging it fully are really high."

To rank cities, Brookings combined data on federal research grants, AI academic papers, AI patents, job postings and AI-related companies, among other factors.
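As a rough illustration of how several indicators can be folded into a single ranking of this kind, here is a toy sketch that standardizes each measure and averages them; the indicator values are invented, and this is not Brookings' actual methodology.

```python
# Illustrative composite index: z-score each indicator across cities, then
# average into one score. All numbers are made up for the sketch.
import numpy as np

cities = ["San Francisco", "San Jose", "San Diego", "Austin"]
# Columns: federal AI grants, AI papers, AI patents, AI job postings (toy values).
indicators = np.array([
    [120, 900, 450, 3000],
    [110, 700, 600, 2800],
    [ 60, 500, 300, 1500],
    [ 55, 400, 250, 1600],
], dtype=float)

z = (indicators - indicators.mean(axis=0)) / indicators.std(axis=0)  # standardize each column
index = z.mean(axis=1)                                               # equal-weight composite

for city, score in sorted(zip(cities, index), key=lambda t: -t[1]):
    print(f"{city:>14}  {score:+.2f}")
```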

Besides San Diego, Los Angeles, Seattle, Boston, Austin, Washington, D.C., and Raleigh, N.C., are in strong positions. Smaller cities with significant AI footprints relative to their size include Santa Barbara, Santa Cruz, Boulder, Colo., Lincoln, Neb., and Santa Fe, N.M.

An additional 87 cities have the potential to become players but so far have limited AI activities, according to the study.

For most of us, AI is best known through the recommendations that pop up on Amazon or Spotify, when smart speakers answer voice commands, or when navigation apps give turn-by-turn directions.

But AI is much more than that, with the potential to permeate thousands of industries. It could prevent power outages and help heal grids quickly, better route shipping to cut emissions, aid in medical diagnoses, and power navigation for self-driving vehicles.

Muro said Brookings undertook the research after receiving requests from economic development officials.

"They watched the digitization of everything during the pandemic," he said. "They're asking: Where do we stand on these advanced digital technologies? How do we engage with this?"

As with other technologies, artificial intelligence tends to be clustered on the coasts. Of the 363 metro areas in the study, 261 had no significant AI footprint.

"This is not everywhere," said Muro. "But we think there can be a happy medium where we retain our coastal innovation centers while also taking steps to help other places make progress and counter some of this massive concentration."

In San Diego, companies such as Qualcomm, Oracle, Intuit, Teradata, Cubic, Viasat, Thermo Fisher and Illumina develop artificial intelligence and machine learning algorithms.

But key drivers of the region's AI prowess stem from the military and universities.

The Naval Information Warfare Systems Command (NAVWAR) is based locally, creating a magnet for defense contractors and cyber security firms working in AI.

"San Diego's affiliation with the military has been extremely important," said Nate Kelley, senior researcher at the San Diego Regional Economic Development Corp. "There are more and more contracts coming, particularly through NAVWAR. Those federal contracts tend to be large, and they're multi-year. So, they're less vulnerable to business cycles."

UC San Diego was an early center of neural network research, said Rajesh Gupta, director of the Halicioglu Data Science Institute. That work helped pave the way for the machine learning engines that banks use to uncover credit card transaction fraud.
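As a loose illustration of the fraud-screening idea (not any particular bank's system), the sketch below flags a transaction whose features look unlike typical activity; all feature values are invented, and production systems rely on supervised models trained on labeled fraud at far larger scale.

```python
# Toy anomaly-detection sketch for transaction screening.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(1)
# Columns: amount (USD), hour of day, distance from home (km) -- synthetic "normal" history.
normal = np.column_stack([rng.gamma(2, 20, 500),
                          rng.integers(8, 22, 500),
                          rng.exponential(5, 500)])
detector = IsolationForest(random_state=1).fit(normal)

suspicious = np.array([[2500.0, 3, 800.0]])      # large purchase, 3 a.m., far from home
print(detector.predict(suspicious))              # -1 means flagged as anomalous
```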

Gupta thinks the Brookings report underestimates San Diego's AI capabilities. This summer, a new AI Research Institute at UCSD won a $20 million grant from the National Science Foundation to tackle big, complicated problems.

Among them: tapping artificial intelligence to cut the time and cost of designing semiconductors; finding ways to improve communications networks; and researching how robots interact with humans to make self-driving cars safer.

The San Diego Supercomputer Center also performs research related to AI, and the San Diego Association of Governments (SANDAG) has been an early proponent of AI-based smart cities technologies, said Gupta.

"We have a $39 million effort going on today, basically on grid response and making it intelligent," said Gupta. "It's smart buildings, smart parking, smart transportation. These are what will define the metropolitan areas of tomorrow, with AI embedded in them."

Here is the original post:
San Diego ranks relatively high in national ranking for artificial intelligence innovation - The San Diego Union-Tribune