Archive for the ‘Artificial Intelligence’ Category

Will A.I. Become the New McKinsey? – The New Yorker

When we talk about artificial intelligence, we rely on metaphor, as we always do when dealing with something new and unfamiliar. Metaphors are, by their nature, imperfect, but we still need to choose them carefully, because bad ones can lead us astray. For example, it's become very common to compare powerful A.I.s to genies in fairy tales. The metaphor is meant to highlight the difficulty of making powerful entities obey your commands; the computer scientist Stuart Russell has cited the parable of King Midas, who demanded that everything he touched turn into gold, to illustrate the dangers of an A.I. doing what you tell it to do instead of what you want it to do. There are multiple problems with this metaphor, but one of them is that it derives the wrong lessons from the tale to which it refers. The point of the Midas parable is that greed will destroy you, and that the pursuit of wealth will cost you everything that is truly important. If your reading of the parable is that, when you are granted a wish by the gods, you should phrase your wish very, very carefully, then you have missed the point.

So, I would like to propose another metaphor for the risks of artificial intelligence. I suggest that we think about A.I. as a management-consulting firm, along the lines of McKinsey & Company. Firms like McKinsey are hired for a wide variety of reasons, and A.I. systems are used for many reasons, too. But the similarities between McKinsey (a consulting firm that works with ninety per cent of the Fortune 100) and A.I. are also clear. Social-media companies use machine learning to keep users glued to their feeds. In a similar way, Purdue Pharma used McKinsey to figure out how to turbocharge sales of OxyContin during the opioid epidemic. Just as A.I. promises to offer managers a cheap replacement for human workers, so McKinsey and similar firms helped normalize the practice of mass layoffs as a way of increasing stock prices and executive compensation, contributing to the destruction of the middle class in America.

A former McKinsey employee has described the company as "capital's willing executioners": if you want something done but don't want to get your hands dirty, McKinsey will do it for you. That escape from accountability is one of the most valuable services that management consultancies provide. Bosses have certain goals, but don't want to be blamed for doing what's necessary to achieve those goals; by hiring consultants, management can say that they were just following independent, expert advice. Even in its current rudimentary form, A.I. has become a way for a company to evade responsibility by saying that it's just doing what the algorithm says, even though it was the company that commissioned the algorithm in the first place.

The question we should be asking is: as A.I. becomes more powerful and flexible, is there any way to keep it from being another version of McKinsey? The question is worth considering across different meanings of the term A.I. If you think of A.I. as a broad set of technologies being marketed to companies to help them cut their costs, the question becomes: how do we keep those technologies from working as "capital's willing executioners"? Alternatively, if you imagine A.I. as a semi-autonomous software program that solves problems that humans ask it to solve, the question is then: how do we prevent that software from assisting corporations in ways that make people's lives worse? Suppose you've built a semi-autonomous A.I. that's entirely obedient to humans, one that repeatedly checks to make sure it hasn't misinterpreted the instructions it has received. This is the dream of many A.I. researchers. Yet such software could easily still cause as much harm as McKinsey has.

Note that you cannot simply say that you will build A.I. that only offers pro-social solutions to the problems you ask it to solve. That's the equivalent of saying that you can defuse the threat of McKinsey by starting a consulting firm that only offers such solutions. The reality is that Fortune 100 companies will hire McKinsey instead of your pro-social firm, because McKinsey's solutions will increase shareholder value more than your firm's solutions will. It will always be possible to build A.I. that pursues shareholder value above all else, and most companies will prefer to use that A.I. instead of one constrained by your principles.

Is there a way for A.I. to do something other than sharpen the knife blade of capitalism? Just to be clear, when I refer to capitalism, I'm not talking about the exchange of goods or services for prices determined by a market, which is a property of many economic systems. When I refer to capitalism, I'm talking about a specific relationship between capital and labor, in which private individuals who have money are able to profit off the effort of others. So, in the context of this discussion, whenever I criticize capitalism, I'm not criticizing the idea of selling things; I'm criticizing the idea that people who have lots of money get to wield power over people who actually work. And, more specifically, I'm criticizing the ever-growing concentration of wealth among an ever-smaller number of people, which may or may not be an intrinsic property of capitalism but which absolutely characterizes capitalism as it is practiced today.

As it is currently deployed, A.I. often amounts to an effort to analyze a task that human beings perform and figure out a way to replace the human being. Coincidentally, this is exactly the type of problem that management wants solved. As a result, A.I. assists capital at the expense of labor. There isn't really anything like a labor-consulting firm that furthers the interests of workers. Is it possible for A.I. to take on that role? Can A.I. do anything to assist workers instead of management?

Some might say that it's not the job of A.I. to oppose capitalism. That may be true, but it's not the job of A.I. to strengthen capitalism, either. Yet that is what it currently does. If we cannot come up with ways for A.I. to reduce the concentration of wealth, then I'd say it's hard to argue that A.I. is a neutral technology, let alone a beneficial one.

Many people think that A.I. will create more unemployment, and bring up universal basic income, or U.B.I., as a solution to that problem. In general, I like the idea of universal basic income; however, over time, I've become skeptical about the way that people who work in A.I. suggest U.B.I. as a response to A.I.-driven unemployment. It would be different if we already had universal basic income, but we don't, so expressing support for it seems like a way for the people developing A.I. to pass the buck to the government. In effect, they are intensifying the problems that capitalism creates with the expectation that, when those problems become bad enough, the government will have no choice but to step in. As a strategy for making the world a better place, this seems dubious.

You may remember that, in the run-up to the 2016 election, the actress Susan Sarandon, who was a fervent supporter of Bernie Sanders, said that voting for Donald Trump would be better than voting for Hillary Clinton because it would bring about the revolution more quickly. I don't know how deeply Sarandon had thought this through, but the Slovenian philosopher Slavoj Žižek said the same thing, and I'm pretty sure he had given a lot of thought to the matter. He argued that Trump's election would be such a shock to the system that it would bring about change.

What Žižek advocated for is an example of an idea in political philosophy known as accelerationism. There are a lot of different versions of accelerationism, but the common thread uniting left-wing accelerationists is the notion that the only way to make things better is to make things worse. Accelerationism says that it's futile to try to oppose or reform capitalism; instead, we have to exacerbate capitalism's worst tendencies until the entire system breaks down. The only way to move beyond capitalism is to stomp on the gas pedal of neoliberalism until the engine explodes.

I suppose this is one way to bring about a better world, but, if it's the approach that the A.I. industry is adopting, I want to make sure everyone is clear about what they're working toward. By building A.I. to do jobs previously performed by people, A.I. researchers are increasing the concentration of wealth to such extreme levels that the only way to avoid societal collapse is for the government to step in. Intentionally or not, this is very similar to voting for Trump with the goal of bringing about a better world. And the rise of Trump illustrates the risks of pursuing accelerationism as a strategy: things can get very bad, and stay very bad for a long time, before they get better. In fact, you have no idea of how long it will take for things to get better; all you can be sure of is that there will be significant pain and suffering in the short and medium term.

I'm not very convinced by claims that A.I. poses a danger to humanity because it might develop goals of its own and prevent us from turning it off. However, I do think that A.I. is dangerous inasmuch as it increases the power of capitalism. The doomsday scenario is not a manufacturing A.I. transforming the entire planet into paper clips, as one famous thought experiment has imagined. It's A.I.-supercharged corporations destroying the environment and the working class in their pursuit of shareholder value. Capitalism is the machine that will do whatever it takes to prevent us from turning it off, and the most successful weapon in its arsenal has been its campaign to prevent us from considering any alternatives.

People who criticize new technologies are sometimes called Luddites, but it's helpful to clarify what the Luddites actually wanted. The main thing they were protesting was the fact that their wages were falling at the same time that factory owners' profits were increasing, along with food prices. They were also protesting unsafe working conditions, the use of child labor, and the sale of shoddy goods that discredited the entire textile industry. The Luddites did not indiscriminately destroy machines; if a machine's owner paid his workers well, they left it alone. The Luddites were not anti-technology; what they wanted was economic justice. They destroyed machinery as a way to get factory owners' attention. The fact that the word "Luddite" is now used as an insult, a way of calling someone irrational and ignorant, is a result of a smear campaign by the forces of capital.

Whenever anyone accuses anyone else of being a Luddite, it's worth asking: is the person being accused actually against technology? Or are they in favor of economic justice? And is the person making the accusation actually in favor of improving people's lives? Or are they just trying to increase the private accumulation of capital?

Today, we find ourselves in a situation in which technology has become conflated with capitalism, which has in turn become conflated with the very notion of progress. If you try to criticize capitalism, you are accused of opposing both technology and progress. But what does progress even mean, if it doesn't include better lives for people who work? What is the point of greater efficiency, if the money being saved isn't going anywhere except into shareholders' bank accounts? We should all strive to be Luddites, because we should all be more concerned with economic justice than with increasing the private accumulation of capital. We need to be able to criticize harmful uses of technology, and those include uses that benefit shareholders over workers, without being described as opponents of technology.

Excerpt from:
Will A.I. Become the New McKinsey? - The New Yorker

NSF Announces $140 Million Investment In Seven Artificial Intelligence Research Institutes – Forbes

The U.S. National Science Foundation (NSF), along with several other federal agencies and higher education institutions, has announced a $140 million investment to establish seven new National Artificial Intelligence Research Institutes (AI Institutes).

The initiative represents a major effort by the federal government to develop an AI workforce and to advance fundamental understanding of the technology's uses and risks. Funding for each institute, each of which involves collaborations among several universities, runs up to $20 million over a five-year period.

According to the announcement, the new AI Institutes will conduct research in several areas, including promoting ethical and trustworthy AI systems and technologies, developing novel approaches to cybersecurity, addressing climate change, expanding our understanding of the brain, and enhancing education and public health.

"The National AI Research Institutes are a critical component of our Nation's AI innovation, infrastructure, technology, education, and partnerships ecosystem," said NSF Director Sethuraman Panchanathan in the announcement. "These institutes are driving discoveries that will ensure our country is at the forefront of the global AI revolution."

In addition to the National Science Foundation, the AI Institutes will be supported by funding from the U.S. Department of Commerce's National Institute of Standards and Technology; the U.S. Department of Homeland Security's Science and Technology Directorate; the U.S. Department of Agriculture's National Institute of Food and Agriculture; the U.S. Department of Education's Institute of Education Sciences; the U.S. Department of Defense's Office of the Under Secretary of Defense for Research and Engineering; and the IBM Corporation.

The new AI Institutes focus on the following six research themes:

Trustworthy AI

Led by the University of Maryland, the NSF Institute for Trustworthy AI in Law & Society (TRAILS) aims to transform the practice of AI from one driven primarily by technological innovation to one driven by attention to ethics, human rights, and support for voices that have been marginalized in mainstream AI. It will focus on investigating what trust in AI looks like, whether current technical solutions for AI can be trusted, and which policy models can effectively sustain AI trustworthiness.

Intelligent Agents for Next-Generation Cybersecurity

Led by the University of California, Santa Barbara, the AI Institute for Agent-based Cyber Threat Intelligence and Operation (ACTION) will develop approaches that use AI to anticipate and take corrective actions against cyberthreats targeting the security and privacy of computer networks and their users. Researchers will work with experts in security operations to develop an approach in which AI-enabled intelligent security agents cooperate with humans to improve the security and resilience of computer systems.

Climate Smart Agriculture and Forestry

The AI Institute for Climate-Land Interactions, Mitigation, Adaptation, Tradeoffs and Economy (AI-CLIMATE) will be led by the University of Minnesota Twin Cities. It will focus on incorporating knowledge from agriculture and forestry sciences to develop AI methods to curb climate effects while enhancing rural economies. A main goal will be to lower the cost of and improve accounting for carbon in farms and forests to empower carbon markets and inform decision-making.

Neural and Cognitive Foundations of Artificial Intelligence

Led by Columbia University, the AI Institute for Artificial and Natural Intelligence (ARNI) will focus on connecting progress made in AI to the revolution in our understanding of the brain. It will conduct interdisciplinary research across neuroscience, cognitive science, and AI.

AI for Decision Making

The AI Institute for Societal Decision Making (AI-SDM), led by Carnegie Mellon University, will develop AI for more effective responses in rapidly developing scenarios like disaster management and public health. AI-SDM will enable emergency managers, public health officials, first responders, community workers, and the public to make better data-driven decisions.

AI-Augmented Learning to Expand Education Opportunities and Improve Outcomes

Led by the University of Illinois, Urbana-Champaign, the AI Institute for Inclusive Intelligent Technologies for Education (INVITE) seeks to develop AI tools and approaches to support three noncognitive skills that underlie effective learning: persistence, academic resilience, and collaboration. It will focus on how children communicate STEM content, how they learn to persist through challenging work, and how teachers can promote noncognitive skill development.

The AI Institute for Exceptional Education (AI4ExceptionalEd) will be led by the University at Buffalo. It will attempt to develop a universal speech and language screener for children. The AI screener will analyze video and audio of children in their classrooms and help tailor interventions for children who need speech and language services.

"Increasing AI system trustworthiness while reducing its risks will be key to unleashing AI's potential benefits and ensuring our shared societal values," said Under Secretary of Commerce for Standards and Technology and National Institute of Standards and Technology Director Laurie E. Locascio. "Today, the ability to measure AI system trustworthiness and its impacts on individuals, communities and society is limited. TRAILS can help advance our understanding of the foundations of trustworthy AI, ethical and societal considerations of AI, and how to build systems that are trusted by the people who use and are affected by them."

I am president emeritus of Missouri State University. After earning my B.A. from Wheaton College (Illinois), I was awarded a Ph.D. in clinical psychology from the University of Illinois in 1973. I then joined the faculty at the University of Kentucky, where I progressed through the professorial ranks and served as director of the Clinical Psychology Program, chair of the department of psychology, dean of the graduate school, and provost. In 2005, I was appointed president of Missouri State University. Following retirement from Missouri State in 2011, I became senior policy advisor to Missouri Governor Jay Nixon. Recently, I have authored two books: Degrees and Pedigrees: The Education of America's Top Executives (2017) and Coming to Grips With Higher Education (2018), both published by Rowman & Littlefield.

See the original post:
NSF Announces $140 Million Investment In Seven Artificial Intelligence Research Institutes - Forbes

With Artificial Intelligence and Leadership, There is a ‘Learning Curve’ – GovExec.com

The rest is here:
With Artificial Intelligence and Leadership, There is a 'Learning Curve' - GovExec.com

AI: Reducing lawyers’ workloads and costs with artificial intelligence … – The Tri-City News

Automation, not advice, is the current path forward for artificial intelligence in the legal sector

Large language models like ChatGPT are trained on a huge corpus of text, meaning they have access to enough information to pass the standardized Law School Admission Test (LSAT) with high scores.

But ask ChatGPT for legal advice and it will tell you to call a lawyer, adding that it is not authorized to give legal advice. Not yet, at least.

Russell Alexander, a Canadian lawyer who runs a family law firm in Ontario, is writing a book about AI and the law.

He thinks it's only a matter of time before non-professionals start using AI programs to offer legal advice without proper legal training or certification.

"I think this will be just around the corner: the unauthorized practice of law," Alexander told BIV. "There'll be people, probably, using AI to give legal advice when they're not licensed to do that. Or they might be licensed, but they're not licensed to practise in British Columbia or Ontario. So that's going to be a tough regulatory issue for our governing bodies to deal with."

That is just one of the issues Canada's new Artificial Intelligence and Data Act may have to address: the use of AI to provide services or advice by non-professionals.

There may be other ethical and legal challenges that arise from the use of AI in the law as well, but, generally speaking, Alexander said he believes AI will be a positive new tool that reduces lawyers' workloads and costs.

"Lawyers are not going to be replaced by AI; lawyers who use AI will replace other lawyers," Alexander said. "You need to get on board."

In January, Alexander started a 30-day daily blog series on artificial intelligence and the law, based on his experiences using OpenAI's ChatGPT-3. As a result, Alexander decided he needed to write a book, which he expects to be out in a few weeks.

His firm has also contracted a software company in Seattle to tailor-make some software so that his firm can use AI as part of its routine practice.

Alexander has identified 30 ways that AI can help law firms and lawyers.

"The implications are huge," he said. "Predictive analysis, contract analysis, legal research, legal drafting, document management, case management, legal chatbots, virtual assistants."

One way Alexanders firm is using AI to reduce lawyer workloads is by applying it to the production of final reports to clients.

"One of the things lawyers don't like to do is the final report to the client, because it takes some time to get a court order," Alexander said.

"Usually, they'll bill for it. What we can do now is take the court order, drop it into AI, and AI will produce the final report based on that court order. The lawyer's still going to edit it and review it, but it's going to be a lot more time efficient."

"So those are real-life examples of how we can use AI right now. Our firm has started doing this."

AI can take on some of the grunt work that heretofore has required a human with the ability to read, analyze and write. While it can reduce workloads, lawyers will still be needed to oversee the work, because AI is not without its flaws and foibles.
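
Alexander doesn't describe his firm's custom software in detail, but the workflow he outlines (drop a court order into an AI, generate a draft report, have a lawyer edit and approve it) has a simple shape. The Python sketch below is a minimal illustration of that shape under stated assumptions: the call_llm() helper, the prompt wording, and the function names are hypothetical stand-ins, not the firm's actual system.

```python
def call_llm(prompt: str) -> str:
    """Placeholder for whatever model API the firm's software wraps (assumption)."""
    raise NotImplementedError("Swap in a real LLM client here.")


def draft_final_report(court_order_text: str) -> str:
    # Ask the model for a plain-language client report grounded only in the
    # order itself, per the workflow Alexander describes.
    prompt = (
        "Summarize the following family-court order as a final report to "
        "the client, in plain language. Do not add legal advice or any "
        "terms that are not in the order.\n\n" + court_order_text
    )
    return call_llm(prompt)


def finalize_report(court_order_text: str, lawyer_review) -> str:
    # The model only ever produces a draft; nothing reaches the client
    # until it has passed through the lawyer's edit-and-approve step.
    draft = draft_final_report(court_order_text)
    return lawyer_review(draft)
```

The design point is the last function: the AI drafts, but a human signs off, which is exactly the time-saving-with-oversight arrangement Alexander describes.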

"There's some biases that are built into AI," Alexander noted. "There's examples of this amazing AI where the program makes stuff up that's completely false. So, lawyers still need to have their hand on the rudder."
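
Keeping a "hand on the rudder" can be partly automated. As one hedged example, a review tool might flag any dollar amounts, dates, or file numbers in the AI draft that don't appear verbatim in the source court order, so the lawyer knows exactly which details to verify. The patterns below are illustrative assumptions, not a complete safeguard against fabrication.

```python
import re


def unverified_details(draft: str, source: str) -> list[str]:
    """Flag dollar amounts, ISO-style dates, and file numbers that appear in
    the AI-generated draft but not verbatim in the court order, so the
    reviewing lawyer knows which details to check. Illustrative patterns only."""
    pattern = r"\$[\d,]+(?:\.\d{2})?|\b\d{4}-\d{2}-\d{2}\b|\b[A-Z]{2,}-\d+\b"
    return [m for m in re.findall(pattern, draft) if m not in source]


# Example: anything returned here goes back to the lawyer for verification.
# flags = unverified_details(draft_text, court_order_text)
```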

But overall, he sees it as a new tool that will improve efficiencies in law firms everywhere.

"It's a great opportunity to make us much more efficient."

nbennett@biv.com

twitter.com/nbennett_biv

Here is the original post:
AI: Reducing lawyers' workloads and costs with artificial intelligence ... - The Tri-City News

Snoop Dogg addresses risks of artificial intelligence: ‘Sh– what the f—‘ – Fox News

American rapper Snoop Dogg expressed confusion about recent developments in artificial intelligence, comparing the technology to movies he saw as a child.

At the Milken Institute Global Conference in Beverly Hills this week, Snoop, whose given name is Calvin Broadus, turned his focus to artificial intelligence while discussing the Writers Guild of America strike. The writers' strike is, in part, about the potential for artificial intelligence to take writing jobs.

"I got a motherf---ing AI right now that they did made for me," Snoop said. "This n----- could talk to me. Im like, man, this thing can hold a real conversation? Like real for real? Like its blowing my mind because I watched movies on this as a kid years ago."

Snoop Dogg discussed artificial intelligence at the Milken Institute 2023 Global Conference (Milken Institute)

Snoop also referenced recent warnings about artificial intelligence from Geoffrey Hinton, who recently quit his job at Google so he could discuss the harms of AI.

"And I heard the dude, the old dude that created AI saying, This is not safe, 'cause the AIs got their own minds, and these mother---ers gonna start doing their own s---. I'm like, are we in a f---ing movie right now, or what? The f-- man?"

Hinton, who is often referred to as the "Godfather of AI," told the New York Times he believes bad actors will use artificial intelligence platforms, the very ones his research helped create, for nefarious purposes.

And while Snoop highlighted potential concerns about artificial intelligence, he also questioned whether he should invest in the technology.

"So do I need to invest in AI so I can have one with me? Or like, do y'all know? S---, what the f---? I'm lost, I don't know," Snoop continued, drawing laughter from the audience.

The release of ChatGPT last year has sparked both excitement and concern among experts, who believe the technology will revolutionize business and human interactions.

Thousands of tech leaders and experts, including Elon Musk, signed an open letter in March that called on artificial intelligence labs to pause research on systems more powerful than GPT-4, OpenAI's most advanced AI system. The letter argued that "AI systems with human-competitive intelligence can pose profound risks to society and humanity."

Read more from the original source:
Snoop Dogg addresses risks of artificial intelligence: 'Sh-- what the f---' - Fox News