Archive for the ‘Artificial Intelligence’ Category

Artificial Intelligence in employment: the regulatory and legal … – Farrer & Co

It won't have escaped your attention that AI is in the news a lot at the moment. Following the release of ChatGPT at the end of 2022, not a week seems to go by without headlines either extolling its benefits or panicking about its risks.

Irrespective of which side of the fence you sit on, what is clear is that rapidly advancing AI is here to stay. With that comes the increasing need to consider AI risk management, particularly in areas where AI has the potential to make or inform decisions about individuals. The field of employment is a prime example of this.

In this blog, we look at the current (though evolving) legal and regulatory landscape in the UK regarding the use of AI in employment, as well as how employers might navigate their way through it.

When it comes to worldwide regulation of AI, there is currently no consensus as to approach. While the EU is preparing strict regulation and tough restrictions on the use of AI, with Italy banning ChatGPT over privacy concerns, the UK is planning an innovative and iterative approach to regulation.

In its recently published White Paper, A pro-innovation approach to AI regulation, the UK Government proposes, rather than introducing new legislation, a system of non-statutory principles overseen and implemented by existing regulators.

What this means for the employment sector is that the Government intends to encourage the Equality and Human Rights Commission and the Information Commissioner to work with the Employment Agency Standards Inspectorate to issue joint guidance on the use of AI systems in recruitment or employment. In particular, the Government envisages the joint guidance will:

For more detailed analysis of the Government's White Paper, Ian De Freitas (a partner in our Data, IP and Technology Disputes team) provides helpful commentary in his article Regulating Artificial Intelligence, in which he explores the five common principles proposed by the Government and assesses them against other recent developments.

In the absence of specific legislation governing AI in the workplace, and pending possible guidance, it is important employers understand how existing legal risks and obligations may affect their use of AI. These include:

We have provided detailed commentary on using AI in employment in two blogs:

In summary, employers may want to consider the following:

There is no escaping the fact that AI has the potential to radically transform employment as we know it. Recent reports predict that AI could replace the equivalent of 300 million full-time jobs. With that come concerns about the treatment of workers and the erosion of workers' rights (as highlighted, for example, by the TUC at its latest conference).

Employers will need to prepare strategically for the changing nature of work and the need to integrate AI into workplace operations. Currently there are likely to be more questions than answers: will there be a need to redesign roles or change work allocation and workflow processes? How can employees be supported in this transition? Is there a need to invest in workforce training to help employees develop the skills needed to work with AI or take on different roles? Regardless, with AI likely to impact most jobs in some way, there is a need for employers to look afresh at their workforce strategies in order to keep pace with the rapid changes that AI might bring.

This publication is a general summary of the law. It should not replace legal advice tailored to your specific circumstances.

Farrer & Co LLP, May 2023

Partner

David advises employer clients, with a particular focus on the financial services and sport sectors, on a wide range of contentious and non-contentious employment issues. He also acts for individuals in relation to contract and exit negotiations and advises them on matters relating to restrictive covenants.

Senior Counsel

Amy is a Senior Counsel and Knowledge Lawyer in the employment team, providing expert technical legal support to the team and leading its know-how function. Given the fast-changing nature of employment law, Amy ensures the team is at the forefront of all legal changes and can provide the best possible advice to our clients.

Here is the original post:
Artificial Intelligence in employment: the regulatory and legal ... - Farrer & Co

Google’s Latest Artificial Intelligence Marked a Significant Surge in … – Digital Information World

In a recent report, CNBC revealed that Google's latest large language model is trained on nearly five times as much data as its 2022 predecessor. This significant boost enables the model to take on more sophisticated tasks such as advanced coding, mathematics, and creative writing.

At its Google I/O event, the company unveiled PaLM 2, its latest large language model built for general use. Internal documentation accessed by CNBC indicates that PaLM 2 (Pathways Language Model 2) was trained on a staggering 3.6 trillion tokens. Tokens, which are strings of words, play a crucial role in language models because they enable the model to predict the next word in a given sequence. In 2022, Google introduced the earlier version of PaLM (Pathways Language Model), which was trained on 780 billion tokens.
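
To make the token counts concrete, here is a deliberately naive sketch in Python. Real models use subword tokenizers such as byte-pair encoding, so a single word may become several tokens; the whitespace split below is only a conceptual stand-in:

    # Toy illustration of tokenization. Real LLMs use subword tokenizers
    # (e.g., byte-pair encoding), so one word may become several tokens;
    # splitting on whitespace is a deliberate oversimplification.

    def naive_tokenize(text: str) -> list[str]:
        """Split text into 'tokens' on whitespace (a conceptual stand-in)."""
        return text.split()

    corpus = "the model learns to predict the next token from the tokens before it"
    tokens = naive_tokenize(corpus)
    print(tokens)       # ['the', 'model', 'learns', ...]
    print(len(tokens))  # training-set sizes are counts like this, scaled to trillions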

While Google has been eager to showcase the capabilities of its AI, integrating it into search, spreadsheets, document editing, and email, the company has chosen not to disclose the size or composition of its training data.

Similarly, OpenAI, the Microsoft-backed developer of ChatGPT, has kept the details of its most recent large language model, GPT-4, under wraps. Both companies attribute the lack of transparency to the competitive environment within the industry, as they vie for the attention of users who prefer conversational chatbot-based information retrieval over conventional search engines. As the race for AI advancements intensifies, however, the research community is increasingly demanding greater transparency.

Since introducing PaLM 2, Google has emphasized that the new model is smaller than previous large language models (LLMs). This is significant because it means Google's technology is becoming more efficient even as it takes on more complex tasks. According to internal documentation, PaLM 2 was trained with 340 billion parameters, an indication of the model's complexity. Google has not commented directly on this report.

In a blog post discussing PaLM 2, Google revealed that the model uses a technique known as "compute-optimal scaling." This makes the LLM more efficient and improves its overall performance, resulting in faster inference, fewer parameters to serve, and lower serving costs.
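
A back-of-the-envelope sketch of what that shift looks like, using the figures reported above. Two caveats: the 540-billion-parameter count for the original PaLM comes from Google's PaLM paper rather than this article, and the "train on more tokens per parameter" framing (popularized by DeepMind's Chinchilla work) is the standard reading of compute-optimal scaling, not something Google's blog post spells out:

    # Tokens-per-parameter ratio for the two models. Compute-optimal scaling
    # says training data should grow roughly in step with model size, so a
    # smaller model trained on far more tokens can outperform a bigger one.
    # PaLM's parameter count (540B) is from the PaLM paper, not this article.

    models = {
        # name: (training tokens, parameters)
        "PaLM (2022)":   (780e9, 540e9),
        "PaLM 2 (2023)": (3.6e12, 340e9),
    }

    for name, (tokens, params) in models.items():
        print(f"{name}: {tokens / params:.1f} training tokens per parameter")

    # Output:
    # PaLM (2022): 1.4 training tokens per parameter
    # PaLM 2 (2023): 10.6 training tokens per parameter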

Google has also stated, in response to CNBC's report, that PaLM 2 is trained in 100 languages and has a wide array of capabilities, powering 25 Google features and products. PaLM 2 is available in four sizes, ranging from the smallest, Gecko, to the largest, Unicorn, with Otter and Bison in between.

Based on publicly available information, PaLM 2's training run surpasses that of all current models, such as Facebook's LLaMA large language model. The training size for OpenAI's ChatGPT was last disclosed as 300 billion tokens, with GPT-3. OpenAI released GPT-4 in March, claiming it exhibits "human-level performance" on a range of professional tests.

As emerging AI applications rapidly enter the mainstream, controversy around the underlying technology is gaining momentum in response to its widespread adoption.

In February, El Mahdi El Mhamdi, a senior Google researcher, resigned over the company's lack of transparency. During a hearing held by the Senate Judiciary Subcommittee on Privacy, Technology, and the Law, OpenAI CEO Sam Altman agreed with legislators that a new framework is needed to govern AI, acknowledging the significant responsibility that companies like his bear for the tools they release into the world.

Read this article:
Google's Latest Artificial Intelligence Marked a Significant Surge in ... - Digital Information World

Artificial intelligence programs are causing concern for educators – WTAJ – www.wtaj.com

UNIVERSITY PARK, Pa. (WTAJ) New artificial intelligence programs are popping up rapidly and some can do your homework.

Programs like ChatGPT are making this form of plagiarism easier for students, but it's raising a slew of ethical concerns for teachers.

"What's recently happened is the development of these things that we call large language models, sometimes LLMs," said Shomir Wilson, an assistant professor at the Penn State College of Information Sciences and Technology.

Wilson said LLMs are large statistical models of how words follow each other in language.

"They've been trained on huge volumes of text, typically gathered on the internet, and what they're able to do, with some tweaking, is behave as a chatbot," Wilson said.
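
As a toy illustration of what "a statistical model of how words follow each other" means, here is a minimal bigram model in Python. Actual LLMs are neural networks trained on trillions of tokens, but the core task of predicting the next word from what came before is the same:

    from collections import Counter, defaultdict

    # A minimal bigram "language model": count which word follows which,
    # then predict the most likely next word. Real LLMs are vastly larger
    # neural networks, but the underlying task is the same.

    training_text = (
        "the cat sat on the mat "
        "the dog sat on the rug "
        "the cat chased the dog"
    )

    follows: dict[str, Counter] = defaultdict(Counter)
    words = training_text.split()
    for current_word, next_word in zip(words, words[1:]):
        follows[current_word][next_word] += 1

    def predict_next(word: str) -> str:
        """Return the word most often observed after `word`."""
        if word not in follows:
            return "<unknown>"
        return follows[word].most_common(1)[0][0]

    print(predict_next("the"))  # 'cat' ('cat' and 'dog' tie; first seen wins)
    print(predict_next("sat"))  # 'on'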

Wilson said there's a growing concern among schools where some students have used LLMs to do their assignments.

"These large language models do make it easier to generate text, with some concerns again about accuracy," Wilson said. "That introduces concerns that students might not be learning how to write as well as they should."

Wilson said there are ways to get a sense that this information might be plagiarized. You could use a plagiarism checker on the internet, but there's no certainty.

"You can get some idea of how similar a document is to something generated by a large language model," Wilson said. "But not enough to really say, 'Yes, this is definitively from that.'"
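
As a rough sketch of why such checks yield similarity scores rather than certainty, here is a generic bag-of-words cosine similarity in Python; this is an illustrative stand-in, not the method any particular detection tool uses:

    import math
    from collections import Counter

    # Generic bag-of-words cosine similarity between two documents.
    # Detection tools are more sophisticated, but the output is still a
    # degree of similarity, never a definitive yes/no answer.

    def cosine_similarity(doc_a: str, doc_b: str) -> float:
        a = Counter(doc_a.lower().split())
        b = Counter(doc_b.lower().split())
        dot = sum(a[w] * b[w] for w in set(a) & set(b))
        norm_a = math.sqrt(sum(c * c for c in a.values()))
        norm_b = math.sqrt(sum(c * c for c in b.values()))
        return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

    essay = "large language models are trained on huge volumes of text"
    generated = "language models are trained on very large volumes of text"
    print(f"{cosine_similarity(essay, generated):.2f}")  # 0.90: high, but not proof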

These programs aren't all bad. Wilson said there are some benefits to using the technology, such as producing a draft or a summary of information.

Read the original post:
Artificial intelligence programs are causing concern for educators - WTAJ - http://www.wtaj.com

Adventists in Germany Discuss Artificial Intelligence – Adventist News Network

On May 7, 2023, Hope Media Europe, the media center of the Seventh-day Adventist Church, organized the 12th Media Day in Alsbach-Hähnlein (near Darmstadt). Around 50 media professionals, students, and people interested in media from German-speaking countries, working in the fields of video, audio, design, photography, text/print, journalism, communication, and the internet, met at this exchange-and-networking event to discuss the topic "Artificial Intelligence (AI): the beginning of a new era?"

Two AI practitioners had been invited for the lectures: William Edward Timm, theologian, digital media expert, and department head of Novo Tempo, the Adventist TV station in Brazil, which belongs to the Hope Channel broadcasting family; and Danillo Cabrera, software expert at Hope Media Europe. Both have already gained practical experience with the use of artificial intelligence.

Evolution of AI

"We are in the middle of a revolution" were the words of Timm, who first gave a brief overview of the history of artificial intelligence in his keynote speech. As early as 1950, the British mathematician Alan Turing invented the Turing Test: A computer is considered intelligent if, in any question-answer game over an electrical connection, humans cannot distinguish whether a computer or a human is sitting at the other end of the line. In 1956, the first AI program in history, "Logic Theorist," was written. This program was able to prove 38 theorems from Russell and Whitehead's fundamental work Principia Mathematica.

In 1965, Herbert Simon, an American social scientist and later Nobel laureate in economics, predicted that within 20 years machines would be able to do anything humans could. In 1997, the time had come: a computer called "Deep Blue" defeated the then world chess champion Garry Kasparov.

Meanwhile, a lot of artificial intelligence is already being used in the background, says Timm, for example in algorithms that suggest music and videos on social media according to the user's taste. What is new, however, is generative AI, with which users can solve concrete tasks or create products, such as ChatGPT or the image generator Midjourney.

Timm put forward the thesis that generative AI would democratize AI, since it can now be used by any person in a self-determined way, not only as a component of software over which one has no influence (e.g., algorithms). He distinguished three phases in the development of AI: the generative AI already mentioned, neural networks that imitate the human mind, and so-called deep learning, which would, for example, allow self-driving cars to drive accident-free. Finally, Timm addressed the ethical aspects of the application of AI.

Artificial Intelligence and Ethics

Timm cited the AI-supported production of meat substitutes as a positive example. Artificial intelligence can analyze the molecular structure of meat and use the results to assemble a similar product from plant molecules that is very similar in consistency and taste to the meat product.

In 2021, Giuseppe Scionti produced a meat-substitute product from a 3D printer in this way, although it is not yet fully developed. That could change quickly, however, says Timm.

In the ethical evaluation of AI, it is important to distinguish between "Narrow AI," which is intended for practical, labor-saving purposes, and "General AI," which resembles the human mind and acts independently. In general, one of the main dangers is the expected spread of fakes of all kinds (fake news, pictures, videos, etc.). Since a democracy lives on dialogue and discussion, these must not be taken over, damaged, or prevented by AI, says Timm.

According to calculations by the Goldman Sachs banking firm, AI could cause 300 million people worldwide to lose their previous jobs and have to be retrained. This would have not only political but also psychological consequences. "Many people will have the feeling of being superfluous," said Timm. He assumes, however, that after a transitional phase in which AI makes previous activities more efficient, new fields of activity will emerge for which resources will then be available. "At the beginning of every new technology, there are adjustment problems until a new distribution of roles has become established."

Timm formulated some rules for dealing with artificial intelligence:

Practical Tools

Cabrera then presented a number of practical applications for AI in his talk. They ranged from video, image, and music generators to text-based tools, such as ChatGPT, and avatars with a human appearance that could be used, for example, to conduct customer conversations.

Project Slam

In Project Slam, participants presented their projects in contributions of ten minutes each. They were in the fields of music, film, marketing, podcast, and comic drawing.

Some examples: Singer/Songwriter: http://www.shulami-melodie.de; Marketing: intou-content.de/ and cookafrog.info/; Podcast "Der kleine Kampf": open.spotify.com/show/23HNDzTxjoHjFKUlmrklY0

Media Day Award

Film music composer Manuel Igler was awarded the Media Day Award. He wrote music for various TV commercials and series on Hope TV (e.g., Encounters, the intro for the moonlight show, and the series about the Old Testament book of Daniel [manueligler.com]).

Hope Media

Hope Media Europe operates, among others, the television channel Hope TV. It is part of the international Hope Channel family of channels, which was founded in 2003 by the Seventh-day Adventist Church in the USA and now consists of over 60 national channels.

Hope TV can be received via satellite, Germany-wide via cable, and on the internet via http://www.hopetv.de.

The original version of this story was posted on the Inter-European Division website.

Visit link:
Adventists in Germany Discuss Artificial Intelligence - Adventist News Network

Tyler Cowen on the Risks and Impact of Artificial Intelligence – Econlib – EconTalk

0:37

Intro. [Recording date: April 19, 2023.]

Russ Roberts: Today is April 19th, 2023, and my guest is Tyler Cowen of George Mason University. With Alex Tabarrok, he blogs at Marginal Revolution. His podcast is Conversations with Tyler. This is his 17th appearance on the program. He was last here in August of 2022, talking about his book with Daniel Gross titled Talent.

Today we're going to talk about artificial intelligence [AI], and this is maybe the ninth episode on EconTalk about the topic. I think the first one was December of 2014 with Nicholas Bostrom. This may be the last one for a while, or not. It is perhaps the most interesting development of our time, and Tyler is a great person to pull a lot of this together, as well as to provide a more optimistic perspective relative to some of our recent guests. Tyler, welcome back to EconTalk.

Tyler Cowen: Happy to be here, Russ.

Russ Roberts: We're going to get your thoughts in a little while on whether our existence is at risk, a worry that a number of people have raised. Before we do that, let's assume that human beings survive and that we merely have ChatGPT-5 [Generative Pre-Trained Transformer] and whatever comes next to change the world. What do you see as some of the biggest impacts on the economy and elsewhere?

Tyler Cowen: I think the closest historical analogies are probably the printing press and electricity. So, the printing press enabled a much greater circulation of ideas, considerable advances in science. It gave voices to many more people. It really quite changed how we organize, store, and transmit knowledge.

Now, most people would recognize the printing press was very much a good thing, but if you look at the broader history of the printing press, it is at least connected to a lot of developments that are highly disruptive. That could include the Protestant Reformation, possibly wars of religion, just all the bad books that have come out between now and then, right, are in some way connected to the printing press.

So, major technological advances do tend to be disruptive. They bring highly significant benefits. The question is how do you face up to them?

Electricity would be another example. It has allowed people to produce greater destructive power, but again, the positive side of electricity is highly evident and it was very disruptive. It also put a fair number of people out of work. And, nonetheless, we have to make a decision. Are we willing to tolerate major disruptions which have benefits much higher than costs, but the costs can be fairly high?

Russ Roberts: And, this assumes that we survive--

Russ Roberts: which would be a big cost if it's not true. But, just starting with that, and what we've seen in the last shockingly few months--we're not talking about the first five or 10 years of this innovation--where do you see its impact being the largest?

Tyler Cowen: These would be my guesses, and I stress that word 'guesses.' So, every young person in the world who can afford the connection will have or has already access to an incredible interactive tutor to teach them almost anything, especially with the math plugins. That's just phenomenal. I think we genuinely don't know how many people will use it. It's a question of human discipline and conscientiousness, but it has to be millions of people, especially in poorer countries, and that is a very major impact.

I think in the short to medium run, a lot of routine back-office work will in essence be done by GPT models one way or another. And then medium term, I think a lot of organizations will find new ways of unsiloing their information, new ways of organizing, storing, and accessing their information. It will be a bit like the world of classic Star Trek, where Spock just goes to the computer, talks to it, and it tells him whatever he wants to know. Imagine if your university could do something like that.

So, that will be significant. Not that it will boost GDP [Gross Domestic Product] to 30% growth a year, but it will be a very nice benefit that will make many institutions much more efficient. So, in the shorter run, those are what I see as the major impacts.

Russ Roberts: I'll give you a few that I've been thinking about, and you can agree or disagree.

Tyler Cowen: Oh, I would add coding also, but this we already know, right? But, sorry, go on.

Russ Roberts: Yeah, coding was my first one, and I base that on the astounded tweets that coders are tweeting where they say, 'I've been using ChatGPT now for two weeks, and I'm two to three times as productive.'

I don't know if that's accurate. Let's presume what they mean is a lot more productive. And, by that I assume they mean 'I can solve problems that used to take me two or three times longer in a shorter period of time.' And, of course, that means, at least in one dimension, fewer coders. Because you don't need as many. But, it might mean more, because it can do some things that are harder to do or were too expensive to do before, and now there'll be auxiliary activities surrounding it. So, do you have any feel for how accurate that transformation is? Is it really true that it's a game changer?

Tyler Cowen: I've heard from many coders analyses very much like what you just cited to me. They also make the point it allows for more creative coding. So, if a GPT model is doing the routine work, you can play around a lot more with new ideas. That leads to at least the possibility the demand for coders will go up, though coders of a very particular kind.

Think of this as entering a world where everyone has a thousand free research assistants. Now, plenty of people are not good at using that, and some number of people are, and some coders will be. Some economists will be. But, it will really change quite a bit who does well and who does not do well.

Russ Roberts: It's funny: I find this whole topic fascinating, as listeners probably have come to realize. It's probably the case that there are listeners to this conversation who have not tried ChatGPT yet. Just for those of you who haven't, in its current formation, in its current version that I have--I have the unpaid version from OpenAI--there's just a field where I put a query, a question, a comment.

I want to give a couple examples for listeners, to give them a feel for what it's capable of doing outside of coding. I wrote a poem recently about what it was like to take a 14-hour flight with a lot of small infants screaming and try to put a positive spin on it. I was pretty proud of the poem. I liked it. And I posted it on Twitter.

I asked ChatGPT to write a poem in the style of Dr. Seuss--mine was not--but in the style of Dr. Seuss about this issue. It was quite beautiful.

Then I asked it to make it a little more intense. And, it made a few mistakes I didn't like in language, but it got a little bit better in other ways.

And then for fun, I asked it to write a poem about someone who is really annoyed at the baby. I wasn't annoyed: I thought I tried to put a positive spin on the crying. And, it was really good at that.

And, of course, you could argue that it takes away some of my humanity to outsource my poetry writing to this third party. But that's one thing it's really good at, is writing doggerel. Rhyming, pretty entertaining, and sometimes-funny poetry.

The other thing it's really good at is composing emails--requests for a job interview, a condolence note.

I asked it to write a condolence note, just to see what it would come up with. 'A friend of mine has lost a loved one. Write me a condolence note.' It writes me three paragraphs. It's quite good. Not maybe what I would have written exactly, but it took three seconds. So, I really appreciated it.

Then I said, 'Make it more emotional.' And, it did. And, then I said, 'Take it up a notch.' And it did. And it's really extraordinary.

So, one of the aspects of this, I think, that's important--I don't know how transformative it will be--but for people whose native language is not English--and I assume it will eventually, maybe it already does talk in other languages, I use it in English--it's extremely helpful to avoid embarrassment, as long as you're careful to realize it does make stuff up. So, you have to be careful in that area.

I am under the impression it's going to be very powerful in medicine in terms of diagnoses. And, we thought this before when we were talking about, say, radiology. There was this fear that radiologists in the United States would lose work because radiologists in India, say, could read the X-rays. That hasn't, as far as I know, taken off. But, I have a feeling that ChatGPT as a medical diagnostic tool is going to be not unimportant.

The next thing I would mention, and I'll let you comment, the next thing I would mention is all kinds of various kinds of writing, which are the condolence note or the job interview request as just an example.

I met a technical writer recently who said, 'I assume my job's going to be gone in a few months. I'm playing with how ChatGPT might make me a better technical writer, because otherwise I think I'm going to be in trouble.'

And, of course, then there's content creation, something we talked about at some length with Erik Hoel. Content creation in general on the web, especially for businesses, is going to get a lot less expensive. It's not going to be very interesting in the short run. We'll see what it's capable of in the medium term, but the ability to create content has now exploded. And, those of us who try to specialize in creating content may be a little less valuable, or we'll have to try different things. What are your thoughts on those issues?

Tyler Cowen: Just a few points. First, I have heard it can already handle at least 50 languages, presumably with more to come. One of many uses for this is just to preserve languages that may be dying, or histories, or to create simulated economies of ways of life that are vanishing.

There's a recent paper out on medical diagnosis where they ask human doctors and then GPT--they give it a bunch of symptoms reported from a patient, and then there's a GPT answer and a human doctor answer. And, the human doctors do the grading, and GPT does slightly better. And, that's right now. You could imagine a lot more specialized training on additional databases that could make it better yet.

So, we tend to think about America, or in your case, also Israel, but think about all the doctor-poor parts of the world--including China, which is now of course, wealthier but really has a pretty small number of doctors, very weak healthcare infrastructure. Obviously many parts of Africa. It's really a game changer to have a diagnostic instrument that seems to be at least as good as human doctors in the United States. So, the possibilities on the positive side really are phenomenal.

Oh, by the way, you must get the paid version. It's better than the free version. It's only $20 a month.

Russ Roberts: Yeah, I've thought about it.

Tyler Cowen: That's the best [?] that you can make.

Russ Roberts: I thought about it, except I didn't want to advance the destruction of humanity yet. I wanted to think about it a few more episodes. So, maybe at the end of our conversation, Tyler, I'll upgrade.

The other thing to say about diagnostics, of course, is that what happens now when you don't feel well depends where you live and how badly you feel--how poorly you're feeling. So, here in Israel, I can get an appointment anytime. I don't pay. It's already included in everything. I can get a phone appointment, I can get a time to see my doctor. And, it's not a long wait, at least for me so far. In America, there were a lot of times I thought, 'I'd like to see a doctor about this, but I think it's probably nothing. And so, I'm going to just hope it's okay, because I don't want to pay the fees.'

And, I get on the web, I poke around, and of course, most of us have symptoms every day that are correlated with all kinds of horrific conditions. So, people who are hypochondriacs are constantly anxious. And, the main role of their doctor--and this is me sometimes--is to say, 'You're fine.' We pay lots for that. It's a wonderful, not unimportant thing. If ChatGPT or some form of AI diagnostic could reassure us that what we have is indigestion and not a heart attack, because it's not just looking at the symptoms and looking at simple correlations but knows what questions to ask the way a doctor would, and the follow-ups, and can really do a much better job, that is a game changer for personal comfort.

And especially, as you point out, for places where you don't have access to doctors for whatever reason, in easier, inexpensive form. I have a friend in England who says, I was telling them about some issue they're having and I say, 'Have you gone to the doctor?' 'What's the point? They're just going to say: Come back, until it's an open wound or until you pass out.'

But, if you think about that, this is not an unimportant part of the human condition in 2023 is anxiety about one's health. And, I think the potential to have a doc in your poc, a doctor in your pocket, is really extraordinary.

Tyler Cowen: Yes. But, as you know, legal and regulatory issues have arisen already, and we have to be willing to tolerate imperfection, realizing it's certainly better than Google for medical queries. And, it can be better than a human doctor, and especially for people who don't have access. And, how we will handle those imperfections is obviously a major open question.

Russ Roberts: That's an excellent point. But, I would add one more thing that I think's quite important, and this is something you learn as your parents get older. And, they go to the doctor, you tell them what they should ask the doctor, and they forget or they don't know how to follow up. And so, even if some of these apps that we might imagine will not be easily approved, the ability to have access to something that helps you ask good questions, which I think ChatGPT would be quite good at--'I have these symptoms. What should I ask my doctor? What else should I be thinking about?'--just gloriously wonderful.

Russ Roberts: What do you think about this issue of content creation? And, do you think there's any chance that we're going to be able to require or culturally find ways to identify whether something is ChatGPT or not?

Tyler Cowen: Oh, I think GPT models will be--already are--sufficiently good: that if you ask them for content the right way, it's very hard to identify where it came from. If you just ask it straight up questions, 'What causes inflation?' you can spot a GPT-derived answer even without software. But, again, with smarter prompts, we're already at the point where--you know, I sometimes say the age of homework is over. We need to get used to that.

And, for a school system, some of the East Asian systems that put even more stress on homework than the U.S. system, they're going to have to reorganize in some fundamental way. And, being good at a certain kind of rote learning will be worth much less in labor markets. So, the idea that that's what you're testing for will no longer be as--I'm not sure how meritocratic it ever was, but it can be fairly meritocratic. But, it won't be any more, and it will be hard for many nations to adjust to that.

Russ Roberts: Yeah, I view that as a lot of people are anxious about the impact on teaching and grading essays, exams. I think it's fabulous.

Russ Roberts: I think--the Age of Homework is a bad Age. So, if you're right, I think that's a pretty much unalloyed benefit. Other than math. I think that it may end up that we do our math homework in class, where we can't secretly use ChatGPT to help us answer and we use home for something else. Something like that.

Tyler Cowen: But, our educational institutions are often the slowest to adapt. And, you as president of a university, you must be facing real decisions, right?

Russ Roberts: Oh, yes, Tyler. It's such a burden. No, we're not, because we're a small seminar place. We don't have lectures. There's no way that the papers and essays that our students write could be done by ChatGPT, at least at anything remotely like the current level.

Tyler Cowen: Oh, I wouldn't be so sure about that. It's not that GPT can write the whole paper, but in my law and literature class now, I've required my students to write one paper with GPT. But, then they augment, they edit, they shape. Those have been very good papers. So, you're going to get work like that now and have to--

Russ Roberts: Yeah, that's true. And, I don't have any problem. What?

Tyler Cowen: You're going to have to make a decision. Do you allow it? Do you recognize it? What are the transparency requirements? Do you grade it differently? This is now, this is not next year.

Russ Roberts: Yeah, no, my view on this--and it sounds like it's very similar to yours--let's start with the condolence note. Okay? So, I write a friend a condolence note. And, by the way, people have talked about putting a watermark on ChatGPT. That's not useful. I'll just recopy it. It's silly in this kind of setting. Maybe in a 40-page paper, maybe. So, I write a condolence note to a friend, say, and I go through various iterations that I mentioned earlier. And, I pick the one that I think sounds most like me. Is there anything wrong with that?

Tyler Cowen: I think it's fine, but to the extent that intersects with institutions for certifying people, ranking people, assigning different slots in universities and awards to people, it does mean a lot of other practices are going to have to change. And, they'll have to change from institutions that are typically pretty sticky.

Russ Roberts: But, surely, whether my friends think I'm an actually empathetic person might even be more important than whether I certify someone as skilled in economics. I think there is something lost when I outsource a condolence note. I've mentioned it briefly elsewhere, I don't think on the program, but the play here, Cyrano de Bergerac, by Edmond Rostand, that's what that's about. It's about a person who is gorgeous, a young man who is gorgeous, who falls in love with a beautiful woman; and he's inarticulate. And, he gets a very unattractive person to whisper the sweet nothings into his ear that he can pass on as if they were his own. And, that turns out not to have the best set of outcomes. A beautiful play, by the way. If you haven't seen it, it's been adapted in movie form in various ways. And, they're all pretty good.

Tyler Cowen: How are you [?] in real life? Sorry, go on.

Russ Roberts: Say that again?

Tyler Cowen: How you behave in real life might matter more. So, how you behave in textual life, anyone can now fake. So, your charisma, your looks, how well you express empathy, probably those premia will rise. And, again, that will require a lot of social adjustment.

Russ Roberts: That's very well said. I think, yeah, the fact that you probably get to a point where we can adjust those, too: the way my eyes look and how much I smile, and who knows. But, certainly for a while, there will be a premium put on authentic face-to-face interaction that can't be faked. And, of course, when you write a book or an essay, forget being graded. When you write a book, I don't know about you, Tyler, I ask friends for comments. And, you know what? I take them sometime, and I thank them just like people I think will for a while maybe thank ChatGPT. But, is it that much different that you run your draft through a ChatGPT version and then augment it, change it?

Tyler Cowen: ChatGPT gives me good comments as well. But, again, I do think there's a genuine issue of transparency. If someone is hiring you to write something, what are they getting? What requirements are they imposing on you? What is it you need to tell them? I don't use GPT to write, say, columns. It just seems wrong to me even though it might work fine. I think I shouldn't do it. That readers are reading for, like, 'The' Tyler Cowen. And, well, there's all these other inputs from books, other people's blog posts. And, the input from GPT is for the moment, somehow different. That's arbitrary, but that's the world we're living in.

Russ Roberts: Well, I'm not going to name the columnist, but one columnist recently wrote a piece I thought could have been written by ChatGPT. It read like a parody of this person's normal writing. And, of course, while I am interested in the real Tyler Cowen, sometimes the real Tyler Cowen is actually doing natural ChatGPT on his old columns. Not you personally, of course, Tyler. But I think a lot of columnists get in a rut. And, it will be interesting to see what happens there.

Tyler Cowen: I have the mental habit now when I read a column, I think to myself, 'What GPT level is that column written at?' Like, 'Oh, that's a 3.5' or 'Oh, that's a 4.0.' Occasionally, so maybe 'That's a six or a seven.' But a lot of it is below a 4, frankly--even if I agree with it and it's entirely correct. It's like, 'Eh, 3.4 for that one.'

Russ Roberts: Yeah, well, that's why there will be a premium, I think for some time, on novelty, creativity to the extent that ChatGPT struggles with that. It's somewhat sterile still right now. So, we'll see. It's going to get better at some point. It may be very soon. We'll talk about that, too, in a little bit.

Russ Roberts: Let's turn to--is there anything in what we've talked about so far that you would regulate? Try to stop, slow down? Or we just say, 'Full steam ahead'?

Tyler Cowen: I think that's too broad a question. So, I think we need regulatory responses in particular areas, but I don't think we should set up, like, 'The' regulatory body for AI; that way of regulating a thing doesn't work well. Modular regulation: as the world changes, it in turn needs to change.

So if, say, a GPT model is prescribing medicines--which is not the case now, not legally--that needs to be regulated in some manner. We may not know how to do it, but the thing to do is to change the regulations for prescribing medicines, however you might wish to change those. That, to me, makes more sense than some meta-body regulating GPT. So, I think the questions have to be narrowed down to talk about them.

Russ Roberts: Do you think there's any role for norms? Now, you just confessed to a norm that you would feel guilty--and I'm trusting you on this, Tyler. For all I know, you've written your last 18 columns with ChatGPT. But, is there any role for norms to emerge that constrain AI in various imaginable ways?

I can imagine someone saying, 'Well, I could do that with ChatGPT, but it probably isn't right, so I won't do it.' And, that would be one way in which--and not just that--but I could develop a version of ChatGPT that could do X, Y, Z, but I don't think humanity is ready for that. That seems a little bit harder for people to do. Do you think there'll be some norms around this that will constrain it in some way?

Tyler Cowen: Oh, there's so many norms already. And, to be clear, I've told my editor in writing that I don't use GPT to write my columns, just to make that clear.

Here's one example. There are people using dating apps where the texting or the content passed back and forth is generated by GPT. I'm not aware of any law against that. It's hard to believe there could be one since GPT models are so new for this purpose. But it seems, to me, wrong. There are norms against it, that when you meet the partner you've been texting with, they'll figure this out. They ought to hold it against you. I hope that norm stays strong enough that most people don't do this, but of course there's going to be slippage--getting back to Cyrano, right?

Russ Roberts: Yeah, yeah. It's like people being honest about what their age is online. There seems to be a norm that it's okay to not tell the truth, but I don't know: when you uncover that, it's a pretty unpleasant surprise, I think, for some people.

Russ Roberts: Well, let's turn to the issue of so-called alignment and safety. We recently had Eliezer Yudkowsky on the program. He is very worried, as I'm sure you know, about AI. You seem to be less so. Why do you think that is?

Tyler Cowen: Well, let me first start with the terminological matter. Everyone uses the phrase 'alignment,' and sometimes I use the word as well; but to me it suggests a social welfare function approach to the problem. That, there's one idea of social good. As if you might take that from Benthamite utilitarianism. And that you want the programs--the machines--all aligned with that notion of social good. Now, I know full well that if you read LessWrong, Effective Altruism Alignment forums, plenty of people will recognize that is not the case.

But, I'm worried that we're embodying in our linguistic practices as a norm, this word that points people in the Kenneth Arrow, Jeremy Bentham direction: 'Oh, everything needs to be aligned with some notion of the good.'

Instead, it's about decentralization, checks and balances, mobilizing decentralized knowledge. That, Hayek and Polanyi should be at the center of the discussion. And, they're all about 'What are the incentives?' It's not about information and knowledge controlling everything, but again, it's about how the incentives of decentralized agents are changed. And, too much of the discourse now is not in that framework.

But, I mean, here would be my initial response to Eliezer.

I've been inviting people who share his view simply to join the discourse. So, they have the sense, 'Oh, we've been writing up these concerns for 20 years and no one listens to us.' My view is quite different. I put out a call and asked a lot of people I know, well-informed people, 'Is there any actual mathematical model of this process of how the world is supposed to end?'

So, if you look, say, at COVID [corona virus disease] or climate change fears, in both cases, there are many models you can look at, including--and then models with data. I'm not saying you have to like those models. But the point is: there's something you look at and then you make up your mind whether or not you like those models; and then they're tested against data. So, when it comes to AGI [artificial general intelligence] and existential risk, it turns out as best I can ascertain, in the 20 years or so we've been talking about this seriously, there isn't a single model done. Period. Flat out.

So, I don't think any idea should be dismissed. I've just been inviting those individuals to actually join the discourse of science. 'Show us your models. Let us see their assumptions and let's talk about those.' The practice, instead, is to write these very long pieces online, which just stack arguments vertically and raise the level of anxiety. It's a bad practice in virtually any theory of risk communication.

And then, for some individuals, at the end of it all, you scream, 'The world is going to end.' Other people come away, 'Oh, the chance is 30% that the world will end.' 'The chance is 80% that the world will end.' A lot of people have come out and basically wanted to get rid of the U.S. Constitution: 'I'll get rid of free speech, get rid of provisions against unreasonable search and seizure without a warrant,' based on something that hasn't even been modeled yet.

So, their mental model is so much: 'We're the insiders, we're the experts.' No one is talking us out of their fears.

My mental model is: There's a thing, science. Try to publish this stuff in journals. Try to model it. Put it out there, we'll talk to you. I don't want to dismiss anyone's worries, but when I talk to people, say, who work in governments who are well aware of the very pessimistic arguments, they're just flat out not convinced for the most part. And, I don't think the worriers are taking seriously the fact they haven't really joined the dialogue yet.

Now on top of that, I would add the point: I think they're radically overestimating the value of intelligence. If we go back, as I mentioned before, to Hayek and Polanyi, pure intelligence is not worth as much as many people think. There's this philosophy of scientism that Hayek criticized. And, the people who are most worried, as I see them, they tend to be hyper-rationalistic. They tend to be scientists. They tend not to be very Hayekian or Smithian. They emphasize sheer brain power over prudence. And, I think if you take this more Adam Smith-like/Hayekian worldview, you will be less worried.

But, we still ought to recognize the costs of major technological transitions as we observe them in history. They are, indeed, very high. I do not want to have a Pollyanna-ish attitude about this.

Russ Roberts: Well, that's very well said. I want to start with the point you made about modeling. I don't demand a mathematical model, I don't think--

Tyler Cowen: I do, to be clear. But, go on.

Russ Roberts: You do or you don't?

Tyler Cowen: I do. And, again, I'm not saying the model will be good--I don't know. But, if it's not good, that's one of the things I want to know.

So, the COVID models, I would say they weren't very good. But I'm delighted people produced them, because we all got to see that. Again, opinions may differ, but that's part of the point of modeling. Not that models explain everything--that's the bad defense of modeling. The good defense is: 'Let's see how this is going to go.'

Russ Roberts: Well, let me put on my--I want to channel my inner Eliezer Yudkowsky, which may not be easy for me, but I'll do my best. I think his argument--he has a few arguments--but one of them, the one I find most interesting--I did not find it completely compelling and I would not call it a model. I would call it a story.

And, I find stories interesting. They can help you understand things. They can lead you astray--just like a model can, that's mathematical.

But, his story is that in the course of what you and I would call emergent processes--the example he uses is the creation of, say, a flint axe, a hand axe--natural selection works inexorably on the fact that people who make better ones are going to leave more genes behind.

And so, that relentlessly, and for that technology, pushes it to improve.

And, no one is asking the technology to improve. There's no designer other than perhaps God, but there's no human force or will to push that process. It's the natural incentives of an emergent process.

Visit link:
Tyler Cowen on the Risks and Impact of Artificial Intelligence - Econlib - EconTalk