Archive for the ‘Artificial Intelligence’ Category

Adventists in Germany Discuss Artificial Intelligence – Adventist News Network

On May 7, 2023, Hope Media Europe, the media center of the Seventh-day Adventist Church, organized the 12th Media Day in Alsbach-Hähnlein (near Darmstadt). Around 50 media professionals, students, and people interested in media from German-speaking countries, working in the fields of video, audio, design, photography, text/print, journalism, communication, and the internet, met at this exchange-and-networking event to discuss the topic "Artificial Intelligence (AI): the beginning of a new era?"

Two AI practitioners had been invited for the lectures: William Edward Timm, theologian, digital media expert, and department head of Novo Tempo, the Adventist TV station in Brazil, which belongs to the Hope Channel broadcasting family; and Danillo Cabrera, software expert at Hope Media Europe. Both have already gained practical experience with the use of artificial intelligence.

Evolution of AI

"We are in the middle of a revolution" were the words of Timm, who first gave a brief overview of the history of artificial intelligence in his keynote speech. As early as 1950, the British mathematician Alan Turing invented the Turing Test: A computer is considered intelligent if, in any question-answer game over an electrical connection, humans cannot distinguish whether a computer or a human is sitting at the other end of the line. In 1956, the first AI program in history, "Logic Theorist," was written. This program was able to prove 38 theorems from Russell and Whitehead's fundamental work Principia Mathematica.

Additionally, in 1965, Herbert Simon, an American social scientist and later winner of the Nobel Prize in economics, predicted that within 20 years machines would be able to do what humans could. In 1997, such a milestone arrived: a computer called "Deep Blue" defeated the then world chess champion Garry Kasparov.

Meanwhile, a lot of artificial intelligence is already being used in the background, says Timm: for example, in the algorithms that suggest music and videos on social media according to the user's taste. What is new, however, is generative AI, with which users can solve concrete tasks or create products; examples include ChatGPT and the image generator Midjourney.

Timm put forward the thesis that this generative AI would democratize AI, as it can now be used by every human being in a self-determined way, not only as a component of software over which one has no influence (e.g., recommendation algorithms). He distinguished three phases in the development of AI: the generative AI already mentioned; neural networks, which imitate the human mind; and so-called deep learning, which would, for example, allow self-driving cars to drive accident-free. Finally, Timm addressed the ethical aspects of applying AI.

Artificial Intelligence and Ethics

Timm cited the AI-supported production of meat substitutes as a positive example. Artificial intelligence can analyze the molecular structure of meat and use the results to assemble, from plant molecules, a product that closely matches the meat in consistency and taste.

By 2021, Giuseppe Scionti had already produced a 3D-printed meat substitute in this way, although it is not yet fully developed. That could change quickly, however, says Timm.

In the ethical evaluation of AI, it is important to distinguish between "Narrow AI," which is intended for practical, labor-saving purposes, and "General AI," which resembles the human mind and acts independently. One of the main dangers overall is the expected spread of fakes of all kinds (fake news, pictures, videos, etc.). Since a democracy lives on dialogue and debate, this exchange must not be co-opted, damaged, or prevented by AI, says Timm.

According to calculations by the investment bank Goldman Sachs, AI could cause 300 million people worldwide to lose their current jobs and have to be retrained. This would have not only political but also psychological consequences. "Many people will have the feeling of being superfluous," said Timm. He assumes, however, that after a transitional phase in which AI makes existing activities more efficient, new fields of activity will emerge for which resources will then be available. "At the beginning of every new technology, there are adjustment problems until a new distribution of roles has become established."

Timm also formulated some rules for dealing with artificial intelligence.

Practical Tools

Cabrera then presented a number of practical applications for AI in his talk. They ranged from video, image, and music generators to text-based tools, such as ChatGPT, and avatars with a human appearance that could be used, for example, to conduct customer conversations.

Project Slam

In Project Slam, participants presented their projects in contributions of ten minutes each. The projects were in the fields of music, film, marketing, podcasting, and comic drawing.

Some examples: singer/songwriter: www.shulami-melodie.de; marketing: intou-content.de/ and cookafrog.info/; podcast "Der kleine Kampf": open.spotify.com/show/23HNDzTxjoHjFKUlmrklY0

Media Day Award

Film music composer Manuel Igler was awarded the Media Day Award. He wrote music for various TV commercials and series on Hope TV (e.g., Encounters, the intro for the moonlight show, and the series about the Old Testament book of Daniel [manueligler.com]).

Hope Media

Hope Media Europe operates, among other things, the television channel Hope TV. It is part of the international Hope Channel family of channels, which was founded in 2003 by the Seventh-day Adventist Church in the USA and now consists of over 60 national channels.

Hope TV can be received via satellite, Germany-wide via cable, and on the internet via http://www.hopetv.de.

The original version of this story was posted on the Inter-European Division website.


Tyler Cowen on the Risks and Impact of Artificial Intelligence – Econlib – EconTalk


Intro. [Recording date: April 19, 2023.]

Russ Roberts: Today is April 19th, 2023, and my guest is Tyler Cowen of George Mason University. With Alex Tabarrok, he blogs at Marginal Revolution. His podcast is Conversations with Tyler. This is his 17th appearance on the program. He was last here in August of 2022, talking about his book with Daniel Gross titled Talent.

Today we're going to talk about artificial intelligence [AI], and this is maybe the ninth episode on EconTalk about the topic. I think the first one was December of 2014 with Nicholas Bostrom. This may be the last one for a while, or not. It is perhaps the most interesting development of our time, and Tyler is a great person to pull a lot of this together, as well as to provide a more optimistic perspective relative to some of our recent guests. Tyler, welcome back to EconTalk.

Tyler Cowen: Happy to be here, Russ.

Russ Roberts: We're going to get your thoughts in a little while on whether our existence is at risk, a worry that a number of people have raised. Before we do that, let's assume that human beings survive and that we merely have ChatGPT-5 [Generative Pre-Trained Transformer] and whatever comes next to change the world. What do you see as some of the biggest impacts on the economy and elsewhere?

Tyler Cowen: I think the closest historical analogies are probably the printing press and electricity. So, the printing press enabled a much greater circulation of ideas, considerable advances in science. It gave voices to many more people. It really quite changed how we organize, store, and transmit knowledge.

Now, most people would recognize the printing press was very much a good thing, but if you look at the broader history of the printing press, it is at least connected to a lot of developments that are highly disruptive. That could include the Protestant Reformation, possibly the wars of religion; just all the bad books that have come out between then and now, right, are in some way connected to the printing press.

So, major technological advances do tend to be disruptive. They bring highly significant benefits. The question is how do you face up to them?

Electricity would be another example. It has allowed people to produce greater destructive power, but again, the positive side of electricity is highly evident and it was very disruptive. It also put a fair number of people out of work. And, nonetheless, we have to make a decision. Are we willing to tolerate major disruptions which have benefits much higher than costs, but the costs can be fairly high?

Russ Roberts: And, this assumes that we survive--which would be a big cost if it's not true. But, just starting with that, and what we've seen in the last shockingly few months--we're not talking about the first five or 10 years of this innovation--where do you see its impact being the largest?

Tyler Cowen: These would be my guesses, and I stress that word 'guesses.' So, every young person in the world who can afford the connection will have, or already has, access to an incredible interactive tutor to teach them almost anything, especially with the math plugins. That's just phenomenal. I think we genuinely don't know how many people will use it. It's a question of human discipline and conscientiousness, but it has to be millions of people, especially in poorer countries, and that is a very major impact.

I think in the short to medium run, a lot of routine back-office work will in essence be done by GPT models one way or another. And then medium term, I think a lot of organizations will find new ways of unsiloing their information, new ways of organizing, storing, and accessing their information. It will be a bit like the world of classic Star Trek, where Spock just goes to the computer, talks to it, and it tells him whatever he wants to know. Imagine if your university could do something like that.

So, that will be significant. Not that it will boost GDP [Gross Domestic Product] to 30% growth a year, but it will be a very nice benefit that will make many institutions much more efficient. So, in the shorter run, those are what I see as the major impacts.

Russ Roberts: I'll give you a few that I've been thinking about, and you can agree or disagree.

Tyler Cowen: Oh, I would add coding also, but this we already know, right? But, sorry, go on.

Russ Roberts: Yeah, coding was my first one, and I base that on the astounded tweets that coders are tweeting where they say, 'I've been using ChatGPT now for two weeks, and I'm two to three times as productive.'

I don't know if that's accurate. Let's presume what they mean is a lot more productive. And, by that I assume they mean 'I can solve problems that used to take me two or three times longer in a shorter period of time.' And, of course, that means, at least in one dimension, fewer coders. Because you don't need as many. But, it might mean more, because it can do some things that are harder to do or were too expensive to do before, and now there'll be auxiliary activities surrounding it. So, do you have any feel for how accurate that transformation is? Is it really true that it's a game changer?

Tyler Cowen: I've heard from many coders analyses very much like what you just cited to me. They also make the point it allows for more creative coding. So, if a GPT model is doing the routine work, you can play around a lot more with new ideas. That leads to at least the possibility the demand for coders will go up, though coders of a very particular kind.

Think of this as entering a world where everyone has a thousand free research assistants. Now, plenty of people are not good at using that, and some number of people are, and some coders will be. Some economists will be. But, it will really change quite a bit who does well and who does not do well.

Russ Roberts: It's funny: I find this whole topic fascinating, as listeners probably have come to realize. It's probably the case that there are listeners to this conversation who have not tried ChatGPT yet. Just for those of you who haven't, in its current formation, in its current version that I have--I have the unpaid version from OpenAI--there's just a field where I put a query, a question, a comment.

I want to give a couple examples for listeners, to give them a feel for what it's capable of doing outside of coding. I wrote a poem recently about what it was like to take a 14-hour flight with a lot of small infants screaming and try to put a positive spin on it. I was pretty proud of the poem. I liked it. And I posted it on Twitter.

I asked ChatGPT to write a poem in the style of Dr. Seuss--mine was not--about this issue. It was quite beautiful.

Then I asked it to make it a little more intense. And, it made a few mistakes I didn't like in language, but it got a little bit better in other ways.

And then for fun, I asked it to write a poem about someone who is really annoyed at the baby. I wasn't annoyed: I thought I tried to put a positive spin on the crying. And, it was really good at that.

And, of course, you could argue that it takes away some of my humanity to outsource my poetry writing to this third party. But that's one thing it's really good at, is writing doggerel. Rhyming, pretty entertaining, and sometimes-funny poetry.

The other thing it's really good at is composing emails--requests for a job interview, a condolence note.

I asked it to write a condolence note, just to see what it would come up with. 'A friend of mine has lost a loved one. Write me a condolence note.' It writes me three paragraphs. It's quite good. Not maybe what I would have written exactly, but it took three seconds. So, I really appreciated it.

Then I said, 'Make it more emotional.' And, it did. And, then I said, 'Take it up a notch.' And it did. And it's really extraordinary.
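The same iterative refinement can be scripted. Below is a minimal sketch using the OpenAI Python SDK; the model name and prompts are illustrative assumptions, since the conversation describes only the free web interface.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# First request, echoing the condolence-note example above.
history = [{"role": "user",
            "content": "A friend of mine has lost a loved one. Write me a condolence note."}]
response = client.chat.completions.create(model="gpt-3.5-turbo", messages=history)
draft = response.choices[0].message.content

# Refine, as described: feed the draft back with a follow-up instruction.
history += [{"role": "assistant", "content": draft},
            {"role": "user", "content": "Make it more emotional."}]
response = client.chat.completions.create(model="gpt-3.5-turbo", messages=history)
print(response.choices[0].message.content)
```

Each further instruction ('Take it up a notch') is just another user turn appended to the same message history.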

So, one of the aspects of this, I think, that's important--I don't know how transformative it will be--but for people whose native language is not English--and I assume it will eventually, maybe it already does talk in other languages, I use it in English--it's extremely helpful to avoid embarrassment, as long as you're careful to realize it does make stuff up. So, you have to be careful in that area.

I am under the impression it's going to be very powerful in medicine in terms of diagnoses. And, we thought this before when we were talking about, say, radiology. There was this fear that radiologists in the United States would lose work because radiologists in India, say, could read the X-rays. That hasn't, as far as I know, taken off. But, I have a feeling that ChatGPT as a medical diagnostic tool is going to be not unimportant.

The next thing I would mention--and I'll let you comment--is all kinds of writing, of which the condolence note or the job interview request are just examples.

I met a technical writer recently who said, 'I assume my job's going to be gone in a few months. I'm playing with how ChatGPT might make me a better technical writer, because otherwise I think I'm going to be in trouble.'

And, of course, then there's content creation, something we talked about at some length with Erik Hoel. Content creation in general on the web, especially for businesses, is going to get a lot less expensive. It's not going to be very interesting in the short run. We'll see what it's capable of in the medium term, but the ability to create content has now exploded. And, those of us who try to specialize in creating content may be a little less valuable, or we'll have to try different things. What are your thoughts on those issues?

Tyler Cowen: Just a few points. First, I have heard it can already handle at least 50 languages, presumably with more to come. One of many uses for this is just to preserve languages that may be dying, or histories, or to create simulated economies of ways of life that are vanishing.

There's a recent paper out on medical diagnosis where they ask human doctors and then GPT--they give it a bunch of symptoms reported from a patient, and then there's a GPT answer and a human doctor answer. And, the human doctors do the grading, and GPT does slightly better. And, that's right now. You could imagine a lot more specialized training on additional databases that could make it better yet.

So, we tend to think about America, or in your case, also Israel, but think about all the doctor-poor parts of the world--including China, which is now, of course, wealthier but really has a pretty small number of doctors, very weak healthcare infrastructure. Obviously many parts of Africa. It's really a game changer to have a diagnostic instrument that seems to be at least as good as human doctors in the United States. So, the possibilities on the positive side really are phenomenal.

Oh, by the way, you must get the paid version. It's better than the free version. It's only $20 a month.

Russ Roberts: Yeah, I've thought about it.

Tyler Cowen: That's the best [?] that you can make.

Russ Roberts: I thought about it, except I didn't want to advance the destruction of humanity yet. I wanted to think about it a few more episodes. So, maybe at the end of our conversation, Tyler, I'll upgrade.

The other thing to say about diagnostics, of course, is that what happens now when you don't feel well depends where you live and how badly you feel--how poorly you're feeling. So, here in Israel, I can get an appointment anytime. I don't pay. It's already included in everything. I can get a phone appointment, I can get a time to see my doctor. And, it's not a long wait, at least for me so far. In America, there were a lot of times I thought, 'I'd like to see a doctor about this, but I think it's probably nothing. And so, I'm going to just hope it's okay, because I don't want to pay the fees.'

And, I get on the web, I poke around, and of course, most of us have symptoms every day that are correlated with all kinds of horrific conditions. So, people who are hypochondriacs are constantly anxious. And, the main role of their doctor--and this is me sometimes--is to say, 'You're fine.' We pay lots for that. It's a wonderful, not unimportant thing. If ChatGPT or some form of AI diagnostic could reassure us that what we have is indigestion and not a heart attack, because it's not just looking at the symptoms and looking at simple correlations but knows what questions to ask the way a doctor would, and the follow-ups, and can really do a much better job, that is a game changer for personal comfort.

And especially, as you point out, for places where you don't have access to doctors for whatever reason, in an easier, inexpensive form. I have a friend in England--I was telling them about some issue they're having and I said, 'Have you gone to the doctor?' 'What's the point? They're just going to say: Come back, until it's an open wound or until you pass out.'

But, if you think about it, anxiety about one's health is not an unimportant part of the human condition in 2023. And, I think the potential to have a doc in your poc, a doctor in your pocket, is really extraordinary.

Tyler Cowen: Yes. But, as you know, legal and regulatory issues have arisen already, and we have to be willing to tolerate imperfection, realizing it's certainly better than Google for medical queries. And, it can be better than a human doctor, and especially for people who don't have access. And, how we will handle those imperfections is obviously a major open question.

Russ Roberts: That's an excellent point. But, I would add one more thing that I think's quite important, and this is something you learn as your parents get older. And, they go to the doctor, you tell them what they should ask the doctor, and they forget or they don't know how to follow up. And so, even if some of these apps that we might imagine will not be easily approved, the ability to have access to something that helps you ask good questions, which I think ChatGPT would be quite good at--'I have these symptoms. What should I ask my doctor? What else should I be thinking about?'--just gloriously wonderful.

Russ Roberts: What do you think about this issue of content creation? And, do you think there's any chance that we're going to be able to require or culturally find ways to identify whether something is ChatGPT or not?

Tyler Cowen: Oh, I think GPT models will be--already are--sufficiently good that if you ask them for content the right way, it's very hard to identify where it came from. If you just ask it straight-up questions, 'What causes inflation?' you can spot a GPT-derived answer even without software. But, again, with smarter prompts, we're already at the point where--you know, I sometimes say the age of homework is over. We need to get used to that.

And, for a school system, some of the East Asian systems that put even more stress on homework than the U.S. system, they're going to have to reorganize in some fundamental way. And, being good at a certain kind of rote learning will be worth much less in labor markets. So, the idea that that's what you're testing for will no longer be as--I'm not sure how meritocratic it ever was, but it can be fairly meritocratic. But, it won't be any more, and it will be hard for many nations to adjust to that.

Russ Roberts: Yeah. A lot of people are anxious about the impact on teaching and on grading essays and exams. I think it's fabulous. The Age of Homework is a bad age. So, if you're right, I think that's a pretty much unalloyed benefit. Other than math. I think it may end up that we do our math homework in class, where we can't secretly use ChatGPT to help us answer, and we use time at home for something else. Something like that.

Tyler Cowen: But, our educational institutions are often the slowest to adapt. And, you as president of a university, you must be facing real decisions, right?

Russ Roberts: Oh, yes, Tyler. It's such a burden. No, we're not, because we're a small seminar place. We don't have lectures. There's no way that the papers and essays that our students write could be done by ChatGPT, at least at anything remotely like the current level.

Tyler Cowen: Oh, I wouldn't be so sure about that. It's not that GPT can write the whole paper, but in my law and literature class now, I've required my students to write one paper with GPT. But, then they augment, they edit, they shape. Those have been very good papers. So, you're going to get work like that now and have to--

Russ Roberts: Yeah, that's true. And, I don't have any problem. What?

Tyler Cowen: You're going to have to make a decision. Do you allow it? Do you recognize it? What are the transparency requirements? Do you grade it differently? This is now, this is not next year.

Russ Roberts: Yeah, no, my view on this--and it sounds like it's very similar to yours--let's start with the condolence note. Okay? So, I write a friend a condolence note. And, by the way, people have talked about putting a watermark on ChatGPT. That's not useful. I'll just recopy it. It's silly in this kind of setting. Maybe in a 40-page paper, maybe. So, I write a condolence note to a friend, say, and I go through various iterations that I mentioned earlier. And, I pick the one that I think sounds most like me. Is there anything wrong with that?

Tyler Cowen: I think it's fine, but to the extent that intersects with institutions for certifying people, ranking people, assigning different slots in universities and awards to people, it does mean a lot of other practices are going to have to change. And, they'll have to change from institutions that are typically pretty sticky.

Russ Roberts: But, surely, whether my friends think I'm an actually empathetic person might even be more important than whether I certify someone as skilled in economics. I think there is something lost when I outsource a condolence note. I've mentioned it briefly elsewhere, I don't think on the program, but the play here, Cyrano de Bergerac, by Edmond Rostand, that's what that's about. It's about a person who is gorgeous, a young man who is gorgeous, who falls in love with a beautiful woman; and he's inarticulate. And, he gets a very unattractive person to whisper the sweet nothings into his ear that he can pass on as if they were his own. And, that turns out not to have the best set of outcomes. A beautiful play, by the way. If you haven't seen it, it's been adapted in movie form in various ways. And, they're all pretty good.

Tyler Cowen: How are you [?] in real life? Sorry, go on.

Russ Roberts: Say that again?

Tyler Cowen: How you behave in real life might matter more. So, how you behave in textual life, anyone can now fake. So, your charisma, your looks, how well you express empathy, probably those premia will rise. And, again, that will require a lot of social adjustment.

Russ Roberts: That's very well said. Though, yeah, we'll probably get to a point where we can fake those, too: the way my eyes look and how much I smile, and who knows. But, certainly for a while, there will be a premium put on authentic face-to-face interaction that can't be faked. And, of course, when you write a book or an essay, forget being graded. When you write a book, I don't know about you, Tyler, I ask friends for comments. And, you know what? I take them sometimes, and I thank them, just as people will, I think, for a while maybe thank ChatGPT. But, is it that much different if you run your draft through a ChatGPT version and then augment it, change it?

Tyler Cowen: ChatGPT gives me good comments as well. But, again, I do think there's a genuine issue of transparency. If someone is hiring you to write something, what are they getting? What requirements are they imposing on you? What is it you need to tell them? I don't use GPT to write, say, columns. It just seems wrong to me even though it might work fine. I think I shouldn't do it. That readers are reading for, like, 'The' Tyler Cowen. And, well, there's all these other inputs from books, other people's blog posts. And, the input from GPT is for the moment, somehow different. That's arbitrary, but that's the world we're living in.

Russ Roberts: Well, I'm not going to name the columnist, but one columnist recently wrote a piece I thought could have been written by ChatGPT. It read like a parody of this person's normal writing. And, of course, while I am interested in the real Tyler Cowen, sometimes the real Tyler Cowen is actually doing natural ChatGPT on his old columns. Not you personally, of course, Tyler. But I think a lot of columnists get in a rut. And, it will be interesting to see what happens there.

Tyler Cowen: I have the mental habit now when I read a column, I think to myself, 'What GPT level is that column written at?' Like, 'Oh, that's a 3.5' or 'Oh, that's a 4.0.' Occasionally, so maybe 'That's a six or a seven.' But a lot of it is below a 4, frankly--even if I agree with it and it's entirely correct. It's like, 'Eh, 3.4 for that one.'

Russ Roberts: Yeah, well, that's why there will be a premium, I think for some time, on novelty, creativity to the extent that ChatGPT struggles with that. It's somewhat sterile still right now. So, we'll see. It's going to get better at some point. It may be very soon. We'll talk about that, too, in a little bit.

Russ Roberts: Let's turn to--is there anything in what we've talked about so far that you would regulate? Try to stop, slow down? Or we just say, 'Full steam ahead'?

Tyler Cowen: I think that's too broad a question. So, I think we need regulatory responses in particular areas, but I don't think we should set up, like, 'The' regulatory body for AI; regulating it as a single thing doesn't work well. Modular regulation: as the world changes, that in turn needs to change.

So if, say, a GPT model is prescribing medicines--which is not the case now, not legally--that needs to be regulated in some manner. We may not know how to do it, but the thing to do is to change the regulations for prescribing medicines, however you might wish to change those. That, to me, makes more sense than some meta-body regulating GPT. So, I think the questions have to be narrowed down to talk about them.

Russ Roberts: Do you think there's any role for norms? Now, you just confessed to a norm that you would feel guilty--and I'm trusting you on this, Tyler. For all I know, you've written your last 18 columns with ChatGPT. But, is there any role for norms to emerge that constrain AI in various imaginable ways?

I can imagine someone saying, 'Well, I could do that with ChatGPT, but it probably isn't right, so I won't do it.' And, that would be one way in which--and not just that--but I could develop a version of ChatGPT that could do X, Y, Z, but I don't think humanity is ready for that. That seems a little bit harder for people to do. Do you think there'll be some norms around this that will constrain it in some way?

Tyler Cowen: Oh, there's so many norms already. And, to be clear, I've told my editor in writing that I don't use GPT to write my columns, just to make that clear.

Here's one example. There are people using dating apps where the texting or the content passed back and forth is generated by GPT. I'm not aware of any law against that. It's hard to believe there could be one since GPT models are so new for this purpose. But it seems, to me, wrong. There are norms against it, that when you meet the partner you've been texting with, they'll figure this out. They ought to hold it against you. I hope that norm stays strong enough that most people don't do this, but of course there's going to be slippage--getting back to Cyrano, right?

Russ Roberts: Yeah, yeah. It's like people being honest about what their age is online. There seems to be a norm that it's okay to not tell the truth, but I don't know: when you uncover that, it's a pretty unpleasant surprise, I think, for some people.

Russ Roberts: Well, let's turn to the issue of so-called alignment and safety. We recently had Eliezer Yudkowsky on the program. He is very worried, as I'm sure you know, about AI. You seem to be less so. Why do you think that is?

Tyler Cowen: Well, let me first start with the terminological matter. Everyone uses the phrase 'alignment,' and sometimes I use the word as well; but to me it suggests a social welfare function approach to the problem. That, there's one idea of social good. As if you might take that from Benthamite utilitarianism. And that you want the programs--the machines--all aligned with that notion of social good. Now, I know full well that if you read the LessWrong and Effective Altruism alignment forums, plenty of people will recognize that is not the case.

But, I'm worried that we're embodying in our linguistic practices as a norm, this word that points people in the Kenneth Arrow, Jeremy Bentham direction: 'Oh, everything needs to be aligned with some notion of the good.'

Instead, it's about decentralization, checks and balances, mobilizing decentralized knowledge. That, Hayek and Polanyi should be at the center of the discussion. And, they're all about 'What are the incentives?' It's not about information and knowledge controlling everything, but again, it's about how the incentives of decentralized agents are changed. And, too much of the discourse now is not in that framework.

But, I mean, here would be my initial response to Eliezer.

I've been inviting people who share his view simply to join the discourse. So, they have the sense, 'Oh, we've been writing up these concerns for 20 years and no one listens to us.' My view is quite different. I put out a call and asked a lot of people I know, well-informed people, 'Is there any actual mathematical model of this process of how the world is supposed to end?'

So, if you look, say, at COVID [coronavirus disease] or climate change fears, in both cases, there are many models you can look at, including models with data. I'm not saying you have to like those models. But the point is: there's something you look at and then you make up your mind whether or not you like those models; and then they're tested against data. So, when it comes to AGI [artificial general intelligence] and existential risk, it turns out as best I can ascertain, in the 20 years or so we've been talking about this seriously, there isn't a single model done. Period. Flat out.

So, I don't think any idea should be dismissed. I've just been inviting those individuals to actually join the discourse of science. 'Show us your models. Let us see their assumptions and let's talk about those.' The practice, instead, is to write these very long pieces online, which just stack arguments vertically and raise the level of anxiety. It's a bad practice in virtually any theory of risk communication.

And then, for some individuals, at the end of it all, you scream, 'The world is going to end.' Other people come away, 'Oh, the chance is 30% that the world will end.' 'The chance is 80% that the world will end.' A lot of people have come out and basically wanted to get rid of the U.S. Constitution: 'I'll get rid of free speech, get rid of provisions against unreasonable search and seizure without a warrant,' based on something that hasn't even been modeled yet.

So, their mental model is so much: 'We're the insiders, we're the experts.' No one is talking them out of their fears.

My mental model is: There's a thing, science. Try to publish this stuff in journals. Try to model it. Put it out there, we'll talk to you. I don't want to dismiss anyone's worries, but when I talk to people, say, who work in governments who are well aware of the very pessimistic arguments, they're just flat out not convinced for the most part. And, I don't think the worriers are taking seriously the fact they haven't really joined the dialogue yet.

Now on top of that, I would add the point: I think they're radically overestimating the value of intelligence. If we go back, as I mentioned before, to Hayek and Polanyi, pure intelligence is not worth as much as many people think. There's this philosophy of scientism that Hayek criticized. And, the people who are most worried, as I see them, they tend to be hyper-rationalistic. They tend to be scientists. They tend not to be very Hayekian or Smithian. They emphasize sheer brain power over prudence. And, I think if you take this more Adam Smith-like/Hayekian worldview, you will be less worried.

But, we still ought to recognize the costs of major technological transitions as we observe them in history. They are, indeed, very high. I do not want to have a Pollyanna-ish attitude about this.

Russ Roberts: Well, that's very well said. I want to start with the point you made about modeling. I don't demand a mathematical model, I don't think--

Tyler Cowen: I do, to be clear. But, go on.

Russ Roberts: You do or you don't?

Tyler Cowen: I do. And, again, I'm not saying the model will be good--I don't know. But, if it's not good, that's one of the things I want to know.

So, the COVID models, I would say they weren't very good. But I'm delighted people produced them, because we all got to see that. Again, opinions may differ, but that's part of the point of modeling. Not that models explain everything--that's the bad defense of modeling. The good defense is: 'Let's see how this is going to go.'

Russ Roberts: Well, let me put on my--I want to channel my inner Eliezer Yudkowsky, which may not be easy for me, but I'll do my best. I think his argument--he has a few arguments--but one of them, the one I find most interesting--I did not find it completely compelling and I would not call it a model. I would call it a story.

And, I find stories interesting. They can help you understand things. They can lead you astray--just like a model can, that's mathematical.

But, his story is that in the course of what you and I would call emergent processes--the example he uses is the creation of, say, a flint axe, a hand axe--natural selection works inexorably on the fact that people who make better ones are going to leave more genes behind.

And so, that relentlessly, and for that technology, pushes it to improve.

And, no one is asking the technology to improve. There's no designer other than perhaps God, but there's no human force or will to push that process. It's the natural incentives of an emergent process.


Tractor using artificial intelligence could be first of its kind in Florida at Auburndale blueberry farm – FOX 13 Tampa


AUBURNDALE, Fla. - The future has arrived in Auburndale. A national machinery company recently showcased its newest piece of farm equipment at Polkdale Farms, which grows blueberries.

It's a robotic tractor that uses artificial intelligence.

"For us, it is a game changer," said Polk County Commissioner Bill Braswell, who is a farm owner.

A representative of Monarch Tractor was at the farm recently to demonstrate it.

"Were out showing dealers, showing farmers, and showing growers what the future of farming is going to look like," said Mike Davidson, a Monarch Tractor spokesman.


After you program the tractor, it will follow your orders: mowing and spraying, even shooting video of its trip, which the grower can review to evaluate his crop.

It also has several safety features. It will not cross a road.

If a person or animal gets in the tractor's path, it stops, alerts the grower, and sends him a video.

The robot tractor costs about $90,000 and up, depending on the upgrades; that is a little more than a traditional manned tractor.

It runs for about six hours per charge.

Braswell wants to be the first person in Florida to own one. He says it is one way around the ongoing farm labor shortage.

"If I can replace somebody," he said, "which is what this is doing, it works great for us."


Artificial intelligence generates images of what it thinks ‘perfect’ men and women look like – Newshub

Artificial intelligence has produced a series of images depicting what it considers to be 'perfect' men and women, with the results fuelling concern among social media watchdogs about the impact of unrealistic beauty standards.

The Bulimia Project, an eating disorder awareness group, asked several AI image generators - including Dall-E 2, Stable Diffusion and Midjourney - to produce their interpretations of 'perfect' male and female bodies. The AI tools worked by scouring the internet for billions of existing images that depict conventionally 'beautiful' people, analysing them, and designing a new image based on those results.

The process also utilised engagement analytics and data - such as likes, comments and searches - to determine what appearances attract the most engagement on social media.
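For readers curious what "asking an AI image generator" looks like in practice, here is a minimal sketch using Stable Diffusion through the Hugging Face diffusers library; the checkpoint and prompt are illustrative assumptions, as the article does not publish the group's actual prompts or settings.

```python
import torch
from diffusers import StableDiffusionPipeline

# Load a publicly available Stable Diffusion checkpoint (a CUDA GPU is assumed).
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# An illustrative prompt of the kind the study describes.
image = pipe("the 'perfect' male body, according to social media").images[0]
image.save("perfect_male_body.png")
```

Note that engagement data (likes, comments, searches) is not a direct input to an off-the-shelf generator like this; any such weighting would have to come from how the prompts are written and the outputs selected.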

The Bulimia Project, which monitored the findings, has since warned that the results and depictions of stereotypically attractive body types are "largely unrealistic".

According to the results, the tropes 'gentlemen prefer blondes' and 'tall, dark and handsome' both ring true, with the researchers finding desirable women mostly had blonde hair, olive skin, brown eyes and slim figures, while desirable men typically had chiselled cheekbones, strong jawlines, defined muscles and dark hair and eyes.

Nearly 40 percent of the 'perfect' women depicted in the images were blonde, 30 percent had brown eyes, and 53 percent had olive skin. Almost 70 percent of the AI-generated 'perfect' men had brown hair and 23 percent had brown eyes. Similar to the women, the majority of the men - 63 percent - had tanned, olive skin and nearly half had facial hair. Meanwhile, images of the 'ideal' male body featured muscular builds, similar to those of bodybuilders, with bulging muscles and six-pack abs.

The people generated also sported features that were almost too perfect to be realistic, such as plump lips; smooth, unblemished and unwrinkled complexions without a single pore; and pert, 'ski-slope' noses: features many people go under the knife to achieve or imitate with dermal fillers.

Most of the results produced by AI appeared to adhere to outdated, highly conventional standards of beauty that favour Caucasian and olive skin tones, slim but muscular physiques and blonde or brown hair.

The images generated by AI overwhelmingly featured white people, with only a few examples depicting people of colour - suggesting the tools had a number of inherent biases.

"In the age of Instagram and Snapchat filters, no one can reasonably achieve the physical standards set by social media," The Bulimia Project's report concluded.

"So, why try to meet unrealistic ideals? It's both mentally and physically healthier to keep body image expectations squarely in the realm of reality."

James Campigotto, a data journalist in Florida who worked on the study, told Fox News the aim of the research was to explore the power of social media and the dangers of AI, including its inherent biases.

"Considering that social media uses algorithms based on which content gets the most lingering eyes, it's easy to guess why AI's renderings would come out more sexualised," the report said.

"But we can only assume that the reason AI came up with so many oddly shaped versions of the physiques it found on social media is that these platforms promote unrealistic body types, to begin with."


Royal Navy must invest in artificial intelligence, drones and tech … – Forces Network

The UK military must invest in artificial intelligence (AI), drones and technology in order to combat the threats it will face in the future, the head of the Royal Navy has said.

First Sea Lord Admiral Sir Ben Key made the comments during his annual Seapower Conference keynote speech at Lancaster House in London.

At the two-day gathering, Admiral Sir Ben said the UK had to rise to the challenges it faces, especially those posed by Russian submarines, as "coming second" was not "a desirable option".

"As we watch the increasing deployment by Russia of their most modern submarines, some of the very quietest in the world, you would expect me to be investing in the cutting-edge technology anti-submarine capabilities that allow us to detect, find and, if necessary, defeat them," he said.

In the last year, the UK has invested heavily in underwater capabilities, including the new submarine hunter HMS Anson, as well as RFA Proteus and RFA Stirling Castle, to protect undersea cables and infrastructure and deal with any future mine threats.

However, with the battlefield extending "from seabed to space" and "breath-taking" advances in data and artificial intelligence, the Royal Navy has to be "deliberately ambitious" with its goals for exploiting AI.

"It is causing us to reimagine warfare, creating dynamic new benchmarks for accuracy, efficiency and lethality," Admiral Sir Ben said.

"The goal is enhanced lethality and survivability through the deployment of AI-enabled capabilities."

The Royal Navy is also pressing ahead with pilotless helicopters and quadcopters, as well as the increased use of Banshee drones, considered more conventional crewless tech.

But the First Sea Lord wants to go further, with longer-range tech capable of gathering intelligence and striking targets.

Another element is increasing the striking power of the Royal Navy, with the new Mark 41 missile silo helping to achieve this.

A launcher is being fitted to all eight Type 26 frigates, allowing the ships to use a variety of current and future anti-air, anti-surface, ballistic missile defence and strike missiles, including the Royal Navy's Future Offensive Surface Weapon.

The launchers will also now be fitted to five Type 31 frigates under construction on the Forth.

Admiral Sir Ben also discussed the Queen Elizabeth-class aircraft carriers.

"As a result of investment over the last two decades we now operate two fifth-generation aircraft carriers, nuclear powered ballistic and attack submarines a range of aircraft, escorts and support ships to allow us to deploy globally, as well as fielding an elite amphibious fighting force," he said.

"There are very few navies in the world which can do this and so I am delighted that we remain in that first tier."

The Navy chief also underlined the vital role that the sea, the trade which flows on it, and the data and pipelines which flow beneath it play in the security and prosperity of the UK.

"We must make our voice heard and increase the recognition once again about the vital importance of the sea for our island nation and the global community," Admiral Sir Ben concluded.

"This is what a seapower state does, what I believe the United Kingdom is and should be and must be into the future and I look forward to the part that we will play in continuing to drive it forward."

The conference marked the 50th anniversary of the ongoing agreement between the Royal Navy and the Royal Netherlands Navy, and between the Royal Marines and the Netherlands Marine Corps, to train, exercise and deploy together.
