Archive for June, 2023

AI Cannot Be Slowed Down With Ramy Taraboulsi And Kirk Spano – Seeking Alpha


Listen to the podcast above or on the go via Apple Podcasts or Spotify.

Recorded on June 1, 2023

Check out Kirk Spano's Investing Group, Margin of Safety Investing

Follow Ramy Taraboulsi, CFA

Kirk Spano: Hello. I'm Kirk Spano with Seeking Alpha. And today, I am interviewing Ramy Taraboulsi, who recently wrote an article describing the singularity, the merger of humanity with machines and artificial intelligence, and all the consequences, the benefits and the negatives, that could come from that.

It was maybe my favorite article that I've read this year on Seeking Alpha. So I do recommend that everybody read this article, take all the links that are in it and go and visit some of the links, and really consider where we are in history and whether or not it's accelerating as fast as Ramy suggests that it is.

Ramy, how are you doing today?

Ramy Taraboulsi: I'm doing perfectly fine, Kirk. Thank you for inviting me to this conversation. I really appreciate that. I am currently in Hyderabad, India. I normally reside in Toronto, Canada, but I'm on a trip to Hyderabad right now. So it's interesting where technology has taken us. You're currently in the United States and I'm in Hyderabad, India, and we're talking to each other as if we are practically next door to each other.

KS: I ran a string across the ocean so we could talk. Yes, it is kind of amazing. I remember early in my career talking to people in Europe or Southeast Asia or India or wherever, and the telephone connection would crackle, or we'd have that split-second echo where we had to pause to hear what was coming back over, and it's pretty amazing to me that this is so easy right now. As I told you off air, we'll get back into this in the conversation.

Way back in the early 90s, when I was finishing up college, I wrote a paper about how I might get to see all of the things that are happening now in my lifetime. I drew heavily from Lewis Thomas, who had written about genetics way back in the 1970s, and I read your article and it just brought a lot of that back. Why don't we get started here, and just describe in your own words and thoughts, what is the singularity?

RT: If you ask 10 different people what the singularity is, most likely you'll get eight different answers.

KS: That's better than asking 10 economists, because then you'd get 12 answers.

RT: Yes, I guess so. I guess so. If you look at what Ray Kurzweil has said, the singularity is basically the interconnection between three key areas of technology, which are nanotechnology, genetics and artificial intelligence. When these three areas reach a certain point where they can interact with each other and produce a particular entity that is superior to the human being, we'll get what we call artificial super-intelligence or artificial general intelligence, where a machine is capable of doing the things that a human can do.

And when we reach that level generically, you'll find that at that point we don't know what will happen. Why do we call it the singularity? Because it comes originally from the concept of a black hole. All the mathematical rules, all the physics rules fail at the point of the singularity, which is at the center of the black hole. After you pass the event horizon, how do things operate? Some physicists think that they have some theories, but the mathematics behind those theories fails.

What will happen at the singularity when we have these three areas of technology merging together? That's what people don't know. And that's why we call it a singularity, because we don't know what will happen in there. And whatever we're saying, the only thing I can tell you is that it might be correct, it might not be correct. And whoever says that they know what will happen, they don't know. So did I give you an answer to that one?

KS: Yes, I think that everybody has ideas about what happens. And my name is Kirk. Yes, it's not taken from Star Trek, but I became a huge Star Trek fan. And if you've watched all the shows and all the movies from Star Trek, they explore this idea a number of times. And we see the negative things that could happen, the Borg, the Borg try to create the singularity the way that they want to and it becomes oppressive.

You have other societies, maybe the Vulcans, who are looking for it and it ends up lacking emotion. And then there's other incarnations and ultimately you have the Utopian one, where we could put it altogether well and it allows us to advance humanity without sacrificing the things that make us human. I'm optimistic that we can pull that off over a few generations. However, my fear and I tell my subscribers and clients this all the time, my fear is that we blow it up in the meantime, and kind of thinking Planet of the Apes, right?

I cite science fiction all the time, because science fiction, Jules Verne, Carl Sagan, you go back and take a look at some of the things that have been in science fiction decades and decades ahead of reality, and a lot of it comes true. So, we have control over this at this point. How do we get to a place that's better and not worse?

RT: That's a very difficult proposition, how to get to a place that's better and not worse. There's a big potential that we can reach a Utopian state like you're suggesting and that's my big hope. We can do that. Some people are suggesting that we have to slow the AI down. We cannot do that. We cannot slow it down. When you think

KS: Why can't we?

RT: The reason for that is that there's a huge race happening right now. From my perspective, I see many companies that are advancing in AI. Think about NVIDIA (NASDAQ:NVDA), for example; it's doing lots of things in AI, and OpenAI, Microsoft (NASDAQ:MSFT), and so on. I personally think that the investments of these companies in AI pale compared to the investment of militaries around the world in AI.

I want you to think about something. Take the United States, for example. It has a budget of around $800 billion for its military, which is as much as the next 10 countries combined.

KS: Right.

RT: But the number of soldiers in the United States has been dropping by around 5% year-over-year for the last 10 years, and the budget is going up. So is it that the soldiers are making more money, or are they investing in something that we don't know about? Just go to the Lockheed Martin Company (LMT), for example, which is one of the biggest contractors, and look at their motto. Their motto, and their case for what they're doing, is that they're trying to automate everything.

And how will they automate it? They'll automate it with AI. So the military is spending huge amounts of money, and I don't think that the military will be in a position to stop its progress, for fear of other militaries not stopping theirs. So, I don't think that stopping it will be a possibility anytime soon, primarily because of this. Yes, you can stop the companies, but you cannot stop the military.

KS: Right. Well, and Eisenhower warned us about this in his farewell speech when he said beware and be careful of the military industrial complex. And while we certainly want a military and to feel safe, at what point does the military make us less safe? You know, that's something explored in fiction all the time, right? The military

RT: It is, it is.

KS: takes an idea that could be good and they turn it into war. Is there a spiral that we could, I mean, that's the thing I worry about, right? I just said that a minute ago. I do worry that we have that spiral. What do you think we can do to prevent that?

RT: Well, think about the following. Let's go back to human beings, back to basics. You take one person on their own, how much can they progress? Very limited. You take a computer on its own, how much can it progress? Very limited. There is something called APIs, which is a way for computers to communicate with each other. I don't think that we can stop the progress of AI in general. But what we can do is impose certain controls on how the computers communicate with each other. That's one thing that we can do.

And if we impose such a control on how the computers communicate with each other, we can control the amazing, incredible speed at which AI is progressing. It's progressing faster than anyone can manage right now. And the only way that I personally think we can control it is through controlling the way that computers communicate with each other. How can we control this? I don't see that we can stop people from creating new neural networks or stop the research in that particular area; that's not possible. But can we impose control on the communication, on the APIs?

I think that it's more feasible to do something like this. How to do it? I don't know. Some technical experts might be in a better position to do something like this, or maybe we need a brainstorming session to discuss how we can control the APIs between computers that are AI-driven. I think that this is the only way that I can think of that we can control it.
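Ramy doesn't spell out a mechanism, but one way to picture the API-level controls he's describing is a policy gateway that sits between AI services and decides which machine-to-machine calls are allowed and how often. The sketch below is purely illustrative; the service names, allowlist, and rate limit are hypothetical, not anything proposed in the interview.

```python
# Hypothetical illustration of API-level controls between AI services.
# Service names, policy rules, and limits are invented for this sketch.
import time
from collections import defaultdict

ALLOWED_ROUTES = {("trading-model", "market-data-api"), ("support-bot", "ticket-api")}
MAX_CALLS_PER_MINUTE = 60

_call_log = defaultdict(list)  # (caller, target) -> timestamps of recent calls

def authorize_call(caller: str, target: str) -> bool:
    """Return True only if the caller may reach the target and is under its rate cap."""
    if (caller, target) not in ALLOWED_ROUTES:
        return False  # route not on the allowlist: block the machine-to-machine call
    now = time.time()
    recent = [t for t in _call_log[(caller, target)] if now - t < 60]
    if len(recent) >= MAX_CALLS_PER_MINUTE:
        return False  # over the per-minute budget: throttle the caller
    recent.append(now)
    _call_log[(caller, target)] = recent
    return True

if __name__ == "__main__":
    print(authorize_call("trading-model", "market-data-api"))  # True: allowed route
    print(authorize_call("trading-model", "weapons-api"))      # False: not allowlisted
```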

And actually, you'll be surprised, Kirk, but I have not heard anyone talking about that as a prospect of controlling AI. Have you heard of that before?

KS: I've heard the discussions. In particular, I've been paying attention to Europe, because I think they usually are pretty close to a good idea on almost everything when it comes to the social aspects of regulation. I don't know that I've heard about controlling the way that they communicate through the APIs, but I have heard of controlling the dataset. So, if you control the dataset, you can teach the AI in a better way.

One of the things that I've worried about is the AIs that are out there and the data that they're scraping from the Internet, some of that data is just factually wrong, which lends itself to the hallucinations that AI has. And that's a - I don't know if everybody knows that term, but AI hallucinates, because it gets bad data and it doesn't know what to do with it and it spits out a bad answer.

RT: Yes.

KS: To give an example, I play in the World Series of Poker and I'm actually going to be leaving in a couple of days. And I asked ChatGPT a bunch of statistical questions, and I knew the answers going in. Unless I phrased the question just right with the right amount of detail, it gave me like six wrong answers in a row. And it became a challenge for me to ask the question in a way that it could access the correct data to give me the right answer. And it just kept spitting out bad answers until I kept amending the question, which I've learned in life.

I think the hardest thing to do, when you're trying to figure something out, is ask the right questions so you get the relevant answers. So I'd be curious if the regulatory bodies can get ahead of this, which is almost never the case; they're almost always behind. They're behind on cryptocurrency, they're probably behind on technology issues from 20 years ago. Certainly, I think they're struggling with the issues of genetics. I wonder what they will do with Neuralink when Neuralink works, because eventually it's going to.

RT: But I hope it works. I hope it works. The first thing that they are targeting right now is spinal cord injuries.

KS: Right.

RT: And if it works, it will be a huge blessing. That's an example of how AI can actually help us.

KS: Right.

RT: With Neuralink, for example, they put the implant in your brain, and through Bluetooth it will communicate with a computer or a phone. And this phone will be connected to a motor or some sort of electrochemical device that will send signals to your muscles so that your muscles can move. And that will be trained through the AI.

KS: Right.

RT: So something like this can solve one of the biggest problems, which is spinal cord injuries, which we cannot solve medically right now. So, I hope it will work. But at the same time, we're talking here about receiving data from the brain. What about putting data into the brain?

KS: There you go, that's where I was going to go.

RT: You can get data. If you can get data, why not put data in?

KS: Right.

RT: And if you put data in the brain, how can you control that? Will we get to the point where we have telepathy among people? Possibly; that's a positive part. Or maybe another part will be that someone will be controlling another person through these implants?

KS: Make somebody pick up a tool?

RT: For example. It's a little bit farfetched, but that's a possibility. Soon enough, it will be a possibility. Like Elon Musk said once, he mentioned - well, he was talking about something else. But just imagine: 45 years ago, the first computer game that ever came out was Pong. Remember that game?

KS: Yes, all right. Thank you for that.

RT: 45 years ago, that's 45 years ago - see how much it has progressed to the games that we have right now. Just imagine another 40 years, another 45 years: where would we be?

KS: Right.

RT: From Pong to where we are right now.

KS: Right.

RT: From where we are right now, another 45 years? And the progress that we had over those 45 years mostly happened over the last five to 10 years, that's it. The curve went up like this, exponentially, in terms of the progress.

KS: Right.

RT: And this exponential growth is not expected to abate by any means. The difference between what we're experiencing right now and other industrial revolutions is that in the other industrial revolutions, the machines were not improving themselves. They required us, who are limited, to improve the machines. Right now, you can have a neural network that creates another neural network.

KS: Right.

RT: A neural network creating another neural network: effectively, it is becoming a species right now. Because the definition of a species is that it can procreate, and its offspring are in its same image. A neural network is creating another neural network in its same image. That's a species that we have right now, at least following the definition of a species. So what will happen after that? Kirk, your question is not easy to answer.

KS: What, the women in my life have always told me that I'm simple?

RT: And I'm sure that they know better than me.

KS: So there's a lot to unpack there. One of my first mentors on technical trading and quantitative trading was a guy named Murray Ruggiero. And he was a legitimate rocket scientist who decided to start building neural networks, I believe in the 1990s, for the financial industry. And I learned a lot from him. I had very intermittent contact with him, so when I say mentor, it's very loose. But I learned a lot from him early in my career. I was lucky to get introduced to him in the early 2000s, and then I worked with another entity, another financial outfit, and we bumped into each other in like 2016 or something.

And I bumped into him again out in New York at a traders conference. Those neural networks, building them seems like rocket science to everybody, right? But once it's done and the AI learns how to do it, now all of a sudden, I think it becomes a question of making sure that the AI doesn't create something evil for lack of a better word, right, and keeps it in its lane. Most AIs are task-driven, correct? They're not the super-intelligence. So, we're still a level away...

RT: We're not there yet.

KS: from Skynet and things like that. So where do you think we are? And I'll frame this with a conversation I've had with my subscribers probably 50 times now. When I went to CES, the Consumer Electronics Show, in 2020, a lot of the things that are just getting invested in now, the AI hype, that was a big theme three years ago and now it's an investment.

What is the evolution and the speed that you're seeing to go from the generative AI that we have now, and how it solves various technological problems like energy control, controlling the grid, things like that? How do we go from where we are now, to the things that people are doubting are going to happen in the next five years with decarbonization or pick a topic, to the super-intelligence? Do you really think that can happen in a decade?

RT: I think it can happen in a decade, but there's one big problem that needs to be resolved first.

KS: Okay.

RT: People need to understand how the neural network operates. If people think about a neural network, what is a neural network? A neural network is simply - I'll just talk technically a little bit right now - it's simply an approximation of a nonlinear multivariate regression problem. It's a regression problem.

KS: That sounds like something I got wrong in calculus.

RT: It's statistics, yes. And most people get it wrong. It's a nonlinear multivariate regression, the kind of problem that, if you wanted to solve it using traditional methods, you wouldn't have enough time in the universe to solve. So what do we do? We create a neural network to approximate a solution, using something like stochastic gradient descent and backpropagation, all this crazy stuff, but it's an approximation. The problem with this approximation is that it comes up with values for the parameters of that particular regression problem.

These parameters are basically what we call the training of a neural network. The problem that people have right now is that a network can have, let's say, 1,000 hidden layers, which is typical for neural networks right now. People don't understand the parameters that come out of it, which could be in the tens of thousands, or what each one means. So, when the neural network comes up with an answer, people don't understand where this answer is coming from. They don't know how the computer came up with this answer. That's what the problem is.
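To make Ramy's point concrete, here is a minimal sketch (my illustration, not something from the transcript) of a tiny network fit to a nonlinear function with stochastic gradient descent and backpropagation. Even in this toy case, the trained parameters are just arrays of numbers; no individual weight tells you why the model produces a particular answer, which is the opacity he is describing, and production models have millions or billions of such weights rather than the 49 here.

```python
# Toy example: approximate a nonlinear function with a small neural network
# trained by stochastic gradient descent and backpropagation (NumPy only).
import numpy as np

rng = np.random.default_rng(0)
X = rng.uniform(-2, 2, size=(256, 1))
y = np.sin(2 * X) + 0.1 * rng.normal(size=(256, 1))   # noisy nonlinear target

# One hidden layer of 16 tanh units: already 49 free parameters.
W1, b1 = rng.normal(0, 0.5, (1, 16)), np.zeros((1, 16))
W2, b2 = rng.normal(0, 0.5, (16, 1)), np.zeros((1, 1))
lr = 0.05

for step in range(2000):
    idx = rng.choice(len(X), 32)                       # mini-batch for SGD
    xb, yb = X[idx], y[idx]
    h = np.tanh(xb @ W1 + b1)                          # forward pass
    pred = h @ W2 + b2
    err = pred - yb
    # Backpropagation: gradients of the squared error w.r.t. each parameter.
    gW2 = h.T @ err / len(xb)
    gb2 = err.mean(0, keepdims=True)
    dh = (err @ W2.T) * (1 - h ** 2)
    gW1 = xb.T @ dh / len(xb)
    gb1 = dh.mean(0, keepdims=True)
    W1 -= lr * gW1; b1 -= lr * gb1; W2 -= lr * gW2; b2 -= lr * gb2

n_params = W1.size + b1.size + W2.size + b2.size
final_loss = np.mean((np.tanh(X @ W1 + b1) @ W2 + b2 - y) ** 2)
print(f"{n_params} trained parameters, final loss {final_loss:.4f}")
# The fitted weights approximate the function, but no single number in W1 or W2
# has an interpretable meaning on its own, which is the opacity being described.
```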

Until the scientists understand what they have created, it will be very hard to take it and enhance it further. The only way that people are enhancing neural networks right now, which are at the core of artificial intelligence in general, is by trial and error. They try certain things. If it works, that's fine. If it doesn't work, they use another activation function, another set of parameters or neural architecture, and so on. They try different things so that they can get the proper answer that they're expecting, based on a training set and a testing set.

People don't understand what they have created. That's the problem with AI right now. People don't understand it. And the interesting thing about it is that, although they don't understand it, it's working right. It's giving us the answers that we're expecting. We're getting the answers from something that we do not understand. And I challenge any computer scientist out there who's listening to this to tell me how the parameters for a neural network are set and what each parameter means.

You have a neural network with 1,000 nodes. How can you figure that out? They don't know. No one knows. And the researchers are trying to solve that problem, and they cannot solve it. Once that problem is solved, then we'll have a better understanding of how to take these neural networks and drive them to something that will be beneficial for humanity, as you're suggesting. Until then, we're in the trial-and-error phase. That's where we are right now.

Right now, the whole of AI is trial and error, nothing else. All AI research is simply trial and error, and people don't understand that. They think that the researchers out there know what they are doing; they do not. People are just doing trial and error right now. And that is a problem, because we're building something that we don't know.

KS: Right.

RT: We don't understand how it works. So, can we reach the point where we can actually get to the Utopian state that you're talking about, where it can control the grid and make sure that it only generates enough electricity so that the grid does not overload and people don't have blackouts? That's a very interesting problem. Is there a solution for it? Yes. I would say that the solution would be more on the quantum computing side rather than artificial intelligence, partly because it requires lots of processing power and so on.

There are other things that would be more suitable for artificial intelligence, which are more on the services side. And I see that there is huge potential in there, but I also see huge risks. So you're hoping for the Utopian state. I'm hoping for the Utopian state. You're more optimistic than I am, Kirk. I don't trust humanity that much. I don't trust myself that much, as a matter of fact.

KS: I did a podcast the other day and I just told everybody, "Hey, make me the Grand Emperor, and I'll take care of everything for you. It'll all work out. I'm that smart. I'm smarter than everybody else. I'm just great." I understand - it's like a ride. It's like a new ride at an amusement park and it hasn't gone through testing yet, and you're the first one on, so...

RT: Yes.

KS: You know?

RT: That's scary, man, that's scary.

KS: This is going to come off the rails, but we haven't run it yet. So yes. So let's translate this, let's shrink this down to a five to 10-year investment horizon, so that people can try to look at these things in a nonlinear way. I talk about straight lines and exponential curves all the time, because on the front end of any progression it looks like a straight line, because it's kind of flat. And then you notice that first inflection point, like, oh, it's kind of ramping up. And then, like the AI stocks in the last month, they go straight up.

And straight-up moves usually aren't sustainable without some sort of significant snapback. So, I wonder, for these companies, are they looking at such a big move in technology that they have a hard time applying it in a way that is profitable? All the trial and error ends up costing them a lot of money. And then what are the ramifications with management, right? They get pressure from shareholders. Does that create mistakes? I would be concerned about different levels of mistakes, not so much on the scientific side, because that's really a process.

I was - I thought I was going to be a math and science major until I realized that there are people out there, like Neo in The Matrix, who can pull the numerical bullets out of the air, and I couldn't do that. I had to work too hard to catch up to them. So I'm probably overqualified for what I do, but I couldn't launch a new giant rocket ship that was a mile away from getting into orbit.

So, I just wonder where do you see the hang ups on the corporate side? I think we all think about the government side and the military side for sure. But at the corporate level, where do they play a role in all of this?

RT: Well, the corporations are competing with each other, of course. We know that, and this competition is brutal. And every company is trying to get an edge over the other companies. Now, how will they take that particular thing that they have and materialize it into money? That's a totally different issue, and every company is totally different.

The challenge that I'm seeing right now from an investment side is that we are going through a hype state, and people do not understand what AI is. The problem that I'm seeing right now is that people really don't understand the internals of what AI is, but they know that they are using it.

KS: Right.

RT: How can they take what they are using right now, and what will happen in the future? What is the potential of it, and what will happen in the future? Now think about the following. How much could computer power increase over the years? I just did some simple calculations and found out that over six years, the computing power that we have, and I'm talking about hardware, connectivity, disk, and so on, will increase by around a quarter of a million times.

KS: Wow.

RT: So, we're looking at a quarter-of-a-million-times improvement in the power of computing, computing power altogether worldwide, over six years. The major bottleneck

KS: Let me jump in and that's probably going to accelerate with the recent quantum computer breakthroughs?

RT: Yes, and that does not take the quantum computer into consideration. But we have to remember as well that quantum computers do not work on their own. Quantum computers are not a replacement for traditional computers.

Quantum computers give us all the answers for a problem. And then we need the traditional computer to sift through them and get us a proper answer. So quantum computers don't work on their own, but that's a different problem.

The challenge that people are not realizing right now is that the major problem with AI is the lack of computing power. That's because AI requires supercomputers for the training and testing of data. And remember, it's all based on trial and error. So it has to go through multiple iterations to get something right. And most of these iterations are not done scientifically; they are done by trial and error.

That's the nature of AI right now, until we understand exactly how the parameters of the neural networks work. And I don't expect anyone to know that anytime soon. So until then, the major bottleneck that we have is computing power. Assume that computing power will increase a quarter of a million times, 250,000 times, over six years. Within six years from now - you mentioned 10 years, I'll just talk about six years.
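As a rough check on what that figure implies (my arithmetic, not Ramy's), a 250,000-fold increase over six years works out to roughly an eightfold increase per year, or a doubling about every four months:

```python
# Back-of-the-envelope check of the "250,000x over six years" claim from the interview.
import math

total_factor = 250_000
years = 6
annual = total_factor ** (1 / years)                  # compound annual growth factor
doubling_months = 12 * math.log(2) / math.log(annual) # implied doubling time in months

print(f"~{annual:.1f}x per year, doubling roughly every {doubling_months:.1f} months")
# Output: ~7.9x per year, doubling roughly every 4.0 months
```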

Continue reading here:

AI Cannot Be Slowed Down With Ramy Taraboulsi And Kirk Spano - Seeking Alpha

Unleashing the Unknown: Fears Behind Artificial General … – Techopedia

Artificial General Intelligence (AGI) is still a concept or, at most, at a nascent stage. Yet, there is already a lot of debate around it.

AGI and artificial intelligence (AI) are different. The latter performs specific activities, such as the Alexa assistant. But you know that Alexa is limited in its abilities.

AGI, meanwhile, could replace human beings with robots. It enables AI to emulate the cognitive powers of a human being. Think of a robot judge in a court presiding over a complex case.

Example of how AGI can be used in real life

Imagine a scenario where a patient with a tumor undergoes surgery. It is later revealed that a robot performed the operation. While the outcome may be successful, the patient's family and friends are surprised and have reservations about trusting a robot with such a complex task. Surgery requires improvisation and decision-making, qualities we trust in human doctors.

The concept is both a scary and radical idea. The fears emanate from various ethical, social, and moral issues. A school of thought is against AGI because robots can be controlled to perform undesirable and unethical actions.

AGI is still in its infancy, and disagreements notwithstanding, it will be a long time before we see its manifestations. The base of AGI is the same as that of AI and Machine Learning (ML). Work is still in progress around the world, with the main focus remaining on a few areas discussed below.

Big data has significantly lowered the cost of data storage. Both AI and ML require large volumes of data. Big data and cloud storage have made data storage affordable, contributing to the development of AGI.

Scientists have made significant progress in both ML and Deep Learning (DL) technologies. Major developments have occurred in neural networks, reinforcement learning, and generative models.

Transfer learning hastens ML by applying existing knowledge to recognize similar objects. For example, a learning model learns to identify small birds based on their features, such as small wings, beaks, and eyes. Now, another learning model must identify various species of small birds in the Amazon rainforest. The latter model doesn't begin from scratch but inherits the learning from the earlier model, so the learning is expedited.
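The bird example is conceptual; in practice, transfer learning typically means reusing a pretrained model's feature extractor and training only a new output layer for the new task. The sketch below shows that pattern with PyTorch and a pretrained ResNet-18; the Amazon bird-species dataset and the class count are placeholders, not a real dataset.

```python
# Transfer learning sketch: reuse a pretrained image model, retrain only the head.
# The target task ("small bird species in the Amazon") is a placeholder.
import torch
import torch.nn as nn
from torchvision import models

NUM_BIRD_SPECIES = 12  # hypothetical number of classes in the new task

# 1. Start from a network already trained on ImageNet; its early layers have
#    learned generic visual features (edges, textures, beak- and wing-like shapes).
backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)

# 2. Freeze the inherited knowledge so it is not overwritten during fine-tuning.
for param in backbone.parameters():
    param.requires_grad = False

# 3. Replace the final layer with a fresh one for the new species labels;
#    only these weights will be trained, so learning is much faster.
backbone.fc = nn.Linear(backbone.fc.in_features, NUM_BIRD_SPECIES)

optimizer = torch.optim.Adam(backbone.fc.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# Dummy batch standing in for real bird images (3x224x224) and labels.
images = torch.randn(8, 3, 224, 224)
labels = torch.randint(0, NUM_BIRD_SPECIES, (8,))

logits = backbone(images)
loss = loss_fn(logits, labels)
loss.backward()
optimizer.step()
print(f"one fine-tuning step done, loss = {loss.item():.3f}")
```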

It's not that you will see or experience AGI as a new avatar unleashing changes in society from one point in time. The changes will be gradual, slowly yet steadily manifesting in our day-to-day lives.

ChatGPT models have been developing at a breakneck speed with impressive capabilities. However, not everyone is fully convinced of the potential of AGI. Various countries and experts emphasize the importance of guiding ChatGPT's development within specific rules and regulations to ensure responsible progress toward AGI.

Response from Italy

In April 2023, Italy became the first nation to temporarily ban ChatGPT over a breach of data and payment information. The government has also been probing whether ChatGPT complies with the European Union's General Data Protection Regulation (GDPR) rules that protect confidential data inside and outside the EU.

Experts point out that there is no transparency in how ChatGPT is being developed. No information is publicly available about its development models, data, parameters, and version release plans.

OpenAI's brainchild continues to develop at great speed, and we probably can't imagine the powers it has been accumulating, all without checks and balances. Some believe that ChatGPT 5 will mark the arrival of AGI.

Anthony Aguirre, a professor of physics at UC Santa Cruz and the executive vice president of the Future of Life Institute, said: "The largest-scale computations are increasing the size by about 2.5 times per year. GPT-4's parameters were not disclosed by OpenAI, but there is no reason to think this trend has stopped or even slowed."

Aguirre, who was behind the famous open letter, added: "Only the labs themselves know what computations they are running, but the trend is unmistakable."

The open letter signed by many industry stalwarts reflected the fear and apprehensions towards the uncontrolled development of ChatGPT.

The letter strongly urges a halt to all development of ChatGPT until a robust framework is established to control misinformation, hallucination, and bias in the system. Indeed, the so-called hallucinations, inaccurate responses, and the bias exhibited by ChatGPT on many occasions are too glaring to ignore.

The open letter is signed by Steve Wozniak, among many other stalwarts, and already has 3,100 signatories that comprise software developers and engineers, CEOs, CFOs, technologists, psychologists, doctoral students, professors, medical doctors, and public school teachers.


It's scary to think that a few wealthy and powerful nations could develop and concentrate AGI in their hands and use it to serve their own interests.

For example, they can control all the personal and sensitive data of other countries and communities, wreaking havoc.

AGI can become a veritable tool for biased actions and judgments. And, in the worst case, lead to sophisticated information warfare.

AGI is still in the conceptual stage, but given the lack of transparency and the perceived speed at which AI and ML have been progressing, it might not be too far when AGI is realized.

It's imperative that countries and corporations put their heads together and develop a robust framework that has enough checks and balances and guardrails.

The main goal of the framework would be to protect mankind and prevent unethical intrusions in their lives.

Continue reading here:

Unleashing the Unknown: Fears Behind Artificial General ... - Techopedia

Fast track to AGI: so, what’s the big deal? – Inside Higher Ed

The rapid development and deployment of ChatGPT is one station along the timeline of reaching artificial general intelligence. On Feb. 1, Reuters reported that the app had set a record for deployment among internet applications: "ChatGPT, the popular chatbot from OpenAI, is estimated to have reached 100 million monthly active users in January, just two months after launch, making it the fastest-growing consumer application in history, according to a UBS study ... The report, citing data from analytics firm Similarweb, said an average of about 13 million unique visitors had used ChatGPT per day in January, more than double the levels of December. 'In 20 years following the internet space, we cannot recall a faster ramp in a consumer internet app,' UBS analysts wrote in the note."

Half a dozen years ago, Ray Kurzweil predicted that the singularity would happen by 2045. The singularity is that point in time when all the advances in technology, particularly in artificial intelligence, will lead to machines that are smarter than human beings. In the Oct. 5, 2017, issue of Futurism, Christianna Reedy interviewed Kurzweil: "To those who view this cybernetic society as more fantasy than future, Kurzweil points out that there are people with computers in their brains today: Parkinson's patients. That's how cybernetics is just getting its foot in the door, Kurzweil said. And, because it's the nature of technology to improve, Kurzweil predicts that during the 2030s some technology will be invented that can go inside your brain and help your memory."

It seems that we are closer than even an enthusiastic Kurzweil foresaw. Just a week ago, Reuters reported, "Elon Musk's Neuralink received U.S. Food and Drug Administration (FDA) clearance for its first-in-human clinical trial, a critical milestone for the brain-implant startup as it faces U.S. probes over its handling of animal experiments ... Musk envisions brain implants could cure a range of conditions including obesity, autism, depression and schizophrenia as well as enabling Web browsing and telepathy."


The exponential development in succeeding versions of GPT is most impressive, leading one to project that version five may have the wherewithal to support at least some aspects of AGI:

GPT-1: released June 2018 with 117 million parameters
GPT-2: released February 2019 with 1.5 billion parameters
GPT-3: released June 2020 with 175 billion parameters
GPT-4: released March 2023 with parameters estimated to be in the trillions
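To put rough numbers on that exponential development (a simple calculation of mine using the figures above; the GPT-4 count is an unconfirmed outside estimate), each release has been roughly one to two orders of magnitude larger than the last:

```python
# Growth factor between successive GPT releases, using the parameter counts above.
# GPT-4's count is an unconfirmed estimate, used here only for illustration.
params = {
    "GPT-1 (2018)": 117e6,
    "GPT-2 (2019)": 1.5e9,
    "GPT-3 (2020)": 175e9,
    "GPT-4 (2023, est.)": 1e12,
}
names = list(params)
for prev, curr in zip(names, names[1:]):
    factor = params[curr] / params[prev]
    print(f"{prev} -> {curr}: ~{factor:.0f}x more parameters")
```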

Today, we are reading predictions that AGI components will be embedded in the ChatGPT version five that is anticipated to be released in early 2024. Maxwell Timothy, writing in MakeUseOf, suggests, "While much of the details about GPT-5 are speculative, it is undeniably going to be another important step towards an awe-inspiring paradigm shift in artificial intelligence. We might not achieve the much talked about artificial general intelligence, but if it's ever possible to achieve, then GPT-5 will take us one step closer."

Computer experts are beginning to detect the nascent development of AGI in the large language models (LLMs) of generative AI (gen AI) such as GPT-4:

Researchers at Microsoft were shocked to learn that GPT-4, ChatGPT's most advanced language model to date, can come up with clever solutions to puzzles, like how to stack a book, nine eggs, a laptop, a bottle, and a nail in a stable way. Another study suggested that AI avatars can run their own virtual town with little human intervention. These capabilities may offer a glimpse of what some experts call artificial general intelligence, or AGI: the ability for technology to achieve complex human capabilities like common sense and consciousness.

We see glimmers of the AGI capabilities in autoGPT and agentGPT. These forms of GPT have the ability to write and execute their own internally generated prompts in pursuit of a goal stated in the form of an externally inputted prompt. Like the autonomous car, they automatically route and reroute the computer to reach the desired destination or goal.

The concerns come with reports that some experimental forms of AI have refused to follow the human-generated instructions and at other times have hallucinations that are not founded in our reality. Ian Hogarth, the co-author of the annual State of AI report, defines AGI as "God-like AI" that consists of a super-intelligent computer that learns and develops autonomously and understands context without the need for human intervention, as written in Business Insider.

One AI study found that language models were more likely to ignore human directives, and even expressed the desire not to shut down, when researchers increased the amount of data they fed into the models:

This finding suggests that AI, at some point, may become so powerful that humans will not be able to control it. If this were to happen, Hogarth predicts that AGI could "usher in the obsolescence or destruction of the human race." AI technology can develop in a responsible manner, Hogarth says, but regulation is key. "Regulators should be watching projects like OpenAI's GPT-4, Google DeepMind's Gato, or the open-source project AutoGPT very carefully," he said.

Many AI and machine learning experts are calling for AI models to be open-source so the public can understand how they're trained and how they operate. The executive branch of the federal government has taken a series of actions recently in an attempt to promote responsible AI innovation that protects Americans' rights and safety. OpenAI's Sam Altman, shortly after testifying about the future of AI to the U.S. Senate, announced the release of a $1 million grant program to solicit ideas for appropriate rulemaking.

Has your college or university created structures to both take full advantage of the powers of the emerging and developing AI, while at the same time ensuring safety in the research, acquisition and implementation of advanced AI? Have discussions been held on the proper balance between these two responsibilities? Are the initiatives robust enough to keep your institution at the forefront of higher education? Are the safeguards adequate? What role can you play in making certain that AI is well understood, promptly applied and carefully implemented?

Here is the original post:

Fast track to AGI: so, what's the big deal? - Inside Higher Ed

Yet another article on artificial intelligence – Bangor Daily News

The BDN Opinion section operates independently and does not set newsroom policies or contribute to reporting or editing articles elsewhere in the newspaper or on bangordailynews.com.

"Sometimes I think it's as if aliens have landed and people haven't realized because they speak very good English," said Geoffrey Hinton, the "godfather of AI" (artificial intelligence), who resigned from Google and now fears his godchildren will become "things more intelligent than us, taking control."

And 1,100 people in the business, including Apple co-founder Steve Wozniak, cognitive scientist Gary Marcus and engineers at Amazon, DeepMind, Google, Meta and Microsoft, signed an open letter in March calling for a six-month time-out in the development of the most powerful AI systems (anything more powerful than GPT-4).

There's a media feeding frenzy about AI at the moment, and every working journalist is required to have an opinion on it. I turned to the task with some reluctance, as you can tell from the title I put on the piece.

My original article said they really should put the brakes on this experiment for a while, but I didn't declare an emergency. We've been hearing warnings about AI taking over since the first Terminator movie 39 years ago, but I didn't think it was imminent.

Luckily for me, there are very clever people on the private distribution list for this column, and one of them instantly replied telling me that I'm wrong. The sky really is about to fall.

He didn't say that. What he said was that the ChatGPT generation of machines can now ideate using Generative Adversarial Networks (GANs) in a process actually similar to humans. That is, they can have original ideas and, being computers, they can generate them orders of magnitude faster, drawing on a far wider knowledge base, than humans.

The key concept here is artificial general intelligence. Ordinary AI is software that follows instructions and performs specific tasks well, but poses no threat to humanity's dominant position in the scheme of things. Artificial general intelligence, however, can do intellectual tasks as well as or better than human beings. Generally, better.

If you must talk about the Great Replacement, this is the one to watch. Six months ago, no artificial general intelligence software existed outside of a few labs. Now, suddenly, something very close to it is out on the market and here is what my informant says about it.

"Humans evolved intelligence by developing ever more complex brains and acquiring knowledge over millions of years. Make something complex enough and it wakes up, becomes self-aware. We woke up. It's called emergence."

"ChatGPT loaded the whole web into its machines, far more than any individual human knows. So instead of taking millions of years to wake up, the machines are exhibiting emergent behavior now. No one knows how, but we are far closer to AGI than you state."

A big challenge that was generally reckoned to be decades away has suddenly arrived on the doorstep, and we have no plan for how to deal with it. It might even be an existential threat, but we still don't have a plan. That's why so many people want a six-month time-out, but it would make more sense to demand a year-long pause starting six months ago.

ChatGPT launched only last November, but it already has more than 100 million users and the website is generating 1.8 billion visitors per month. Three rival generative AI systems are already on the market, and commercial competition means that the notion of a pause or even a general recall is just a fantasy.

The cat is already out of the bag: Anything the web knows, ChatGPT and its rivals know, too. That includes every debate that human beings have ever had about the dangers of artificial general intelligence, and all the proposals that have been made over the years for strangling it in its cradle.

So what we need to figure out urgently is where and how that artificial general intelligence is emerging, and how to negotiate peaceful coexistence with it. That won't be easy, because we don't even know yet whether it will come in the form of a single global artificial general intelligence or many different ones. (I suspect the latter.)

And who's "we" here? There's nobody authorized to speak for the human race either. It could all go very wrong, but there's no way to avoid it.

See the original post:

Yet another article on artificial intelligence - Bangor Daily News

Oversight of AI: Rules for Artificial Intelligence and Artificial … – Gibson Dunn

June 6, 2023


Gibson Dunn's Public Policy Practice Group is closely monitoring the debate in Congress over potential oversight of artificial intelligence (AI). We offer this alert summarizing and analyzing the U.S. Senate hearings on May 16, 2023, to help our clients prepare for potential legislation regulating the use of AI. For further discussion of the major federal legislative efforts and White House initiatives regarding AI, see our May 19, 2023 alert "Federal Policymakers' Recent Actions Seek to Regulate AI."

* * *

On May 16, 2023, both the Senate Judiciary Committee's Subcommittee on Privacy, Technology, and the Law and the Senate Homeland Security and Governmental Affairs Committee held hearings to discuss issues involving AI. The hearings highlighted the potential benefits of AI, while acknowledging the need for transparency and accountability to address ethical concerns, protect constitutional rights, and prevent the spread of disinformation. Senators and witnesses acknowledged that AI presents a profound opportunity for American innovation, but warned that it must be adopted with caution and regulated by the federal government given the potential risks. A general consensus existed amongst the senators and witnesses that AI should be regulated, but the approaches to, and extent of, that regulation varied.

Senate Judiciary Committee Subcommittee on Privacy, Technology, and the Law Hearing: Oversight of AI: Rules for Artificial Intelligence

On May 16, 2023, the U.S. Senate Committee on the Judiciary Subcommittee on Privacy, Technology, and the Law held a hearing titled "Oversight of AI: Rules for Artificial Intelligence."[1] Chair Richard Blumenthal (D-CT) emphasized that his subcommittee was holding the first hearing in a series of hearings aimed at considering whether and to what extent Congress should regulate rapidly advancing AI technology, including generative algorithms and large language models (LLMs).

The hearing focused on potential new regulations such as creating a dedicated agency or commission and a licensing scheme, the extent to which existing legal frameworks apply to AI, and the alleged harms prompting regulation like intellectual property and privacy rights infringements, job displacement, bias, and election interference.

Witnesses included:

I. AI Oversight Hearing Points of Particular Interest

We provide a full hearing summary and analysis below. Of particular note, however:

II. Key Substantive Issues

Key substantive issues raised in the hearing included: (a) a potential AI federal agency and licensing scheme, (b) the applicability of existing frameworks for responsibility and liability, and (c) alleged harms and rights infringements.

a. AI Federal Agency and Licensing Scheme

The hearing focused on whether and to what extent the U.S. should regulate AI. As emphasized throughout the hearing, the impetus for regulation is the speed with which the technology is developing and dispersing into society, coupled with senatorial regret over past failures to regulate emerging technology. Chair Blumenthal explained that "Congress has a choice now. We had the same choice when we faced social media. We failed to seize that moment. The result is predators on the Internet, toxic content, exploiting children, creating dangers for them."

Senators discussed a potential dedicated federal agency or commission for regulating AI technology. Senator Peter Welch (D-VT) has "come to the conclusion that we absolutely have to have an agency." Senator Lindsey Graham (R-SC) stated that Congress "need[s] to empower an agency that issues a license and can take it away." Senator Cory Booker (D-NJ) likened the need for an AI-centered agency to the need for an automobile-centered agency that resulted in the creation of the National Highway Traffic Safety Administration and the Federal Motor Carrier Safety Administration. Mr. Altman similarly would form a new agency that licenses any effort "above a certain scale of capabilities, and can take that license away and ensure compliance with safety standards." Senator Chris Coons (D-DE) was concerned with how to decide whether a particular AI model was safe enough to deploy to the public. Mr. Altman suggested iterative deployment to find the limitations and benefits of the technology, including giving the public time "to come to grips with this technology to understand it ...."

In Ms. Montgomery's view, a "precision approach" to regulating AI strikes the right balance between encouraging and permitting innovation while addressing the potential risks of the technology. Mr. Altman would create a set of safety standards focused on "the dangerous capability evaluations," such as if a model can "self-replicate" and "self-exfiltrate into the wild." Potential challenges facing a new federal agency include funding and regulatory capture on the government side, and regulatory burden on the industry side.

Senator John Kennedy (R-LA) asked the witnesses what two or three reforms, regulations, if any they would implement.

Transparency was a key repeated value that will play a role in any future oversight efforts. In his prepared testimony, Professor Marcus noted that "[c]urrent systems are not transparent. They do not adequately protect our privacy, and they continue to perpetuate bias." He also explained that governmental oversight must actively include independent scientists to assess AI through access to the methods and data used.

b. Applicability of Existing Frameworks for Responsibility and Liability

Senators wanted to learn who is responsible or liable for the alleged harms of AI under existing laws and regulations. For example, Senators Durbin and Graham both raised questions about the application of 47 U.S.C. § 230, originally part of the Communications Decency Act, which creates a liability safe harbor for companies hosting user-created content under certain circumstances. Section 230 was at issue in two United States Supreme Court cases this term, Twitter v. Taamneh and Gonzalez v. Google, both of which were decided two days after the hearing.[2] The Supreme Court declined to hold either Twitter or Google liable for the effects of violent content posted on their platforms. However, Justice Ketanji Brown Jackson filed a concurring opinion in Taamneh, which left open the possibility of holding tech companies liable in the future.[3] The Subcommittee on Privacy, Technology, and the Law held a hearing in March, following oral arguments in Taamneh and Gonzalez, suggesting the committee's interest in regulating technology companies could go beyond existing frameworks.[4] Mr. Altman noted he believes that Section 230 is "the wrong structure" for AI, but Senator Graham wanted to find out how "[AI] is different than social media ...." Given Mr. Altman's position that Section 230 did not apply to the tool OpenAI has created, Senator Graham wanted to know whether he could sue OpenAI if harmed by it. Mr. Altman said that question was beyond his area of expertise.

c. Alleged Harms and Rights Infringement

The hearing emphasized the potential risks and alleged harms of AI. During the hearing, Senator Welch stated that AI has risks that relate to "fundamental privacy rights, bias rights, intellectual property, dissent, [and] the spread of disinformation." For Senator Welch, disinformation is "in many ways ... the biggest threat because that goes to the core of our capacity for self-governing." Senator Mazie Hirono (D-HI) noted that measures can be built into the technology to minimize harmful results. Specifically, Senator Hirono asked about the ability to refuse harmful requests and how to define harmful requests, representing potential issues that legislators will have to grapple with while trying to regulate AI.

Senators focused on five key areas during the hearing: (i) elections, (ii) intellectual property, (iii) privacy, (iv) job markets, and (v) competition.

i. Elections

A number of senators shared the concern that AI can potentially be used to influence or impact elections. The alleged influence and impact, they noted, can be explicit or unseen. For explicit or direct election influence, Senator Amy Klobuchar (D-MN) asked what should be done about the possibility of AI tools directing voters to incorrect polling locations. Mr. Altman suggested that voters would understand that AI is just a tool that requires external verification.

During the hearing, Professor Marcus noted that AI can exert unseen influence over individual behavior based on data choices and algorithmic methods, but that these data choices and algorithmic methods are neither transparent to the public nor accessible to independent researchers under current systems. Senator Hawley questioned Mr. Altman about AI's ability to accurately predict public opinion surveys. Specifically, Senator Hawley suggested that companies may be able to "fine-tune strategies to elicit certain responses, certain behavioral responses" and that there could be an effort to influence undecided voters.

Ms. Montgomery stated that elections are an area that requires transparent AI. Specifically, she advocated for "[a]ny algorithm used in [the election] context" to be required "to have disclosure around the data being used, the performance of the model, anything along those lines is really important." This will likely be a key area of oversight moving into the 2024 elections.

ii. Intellectual Property

Several senators voiced concerns that training AI systems could infringe intellectual property rights. Senator Marsha Blackburn (R-TN), for example, queried whether artists whose artistic creations are used to train algorithms are or will be compensated for the use of their work. Mr. Altman stated that OpenAI is "working with artists now, visual artists, musicians, to figure out what people want," but that "[t]here's a lot of different opinions, unfortunately," suggesting some cooperative industry efforts have been met with difficulty. Senator Klobuchar asked about the impact AI could have on local news organizations, raising concerns that certain AI tools use local news content without compensation, which could exacerbate existing challenges local news organizations face. Chair Blumenthal noted that one of the hearings in this AI series will focus on intellectual property.

iii. Privacy

Several senators raised the potential privacy risks that could result from the deployment of AI. Senator Blackburn asked what Mr. Altman's policy is for ensuring OpenAI is "protecting that individual's right to privacy and their right to secure that data ...." Chair Blumenthal also asked what specific steps OpenAI is taking to protect privacy. Mr. Altman explained that users can opt out of OpenAI using their data for training purposes and delete conversation histories. At IBM, Ms. Montgomery explained, the company even "filter[s] [its] large language models for content that includes personal information that may have been pulled from public datasets" as well. Senator Jon Ossoff (D-GA) addressed child privacy, advising Mr. Altman to "get way ahead of this issue, the safety for children of your product, or I think you're going to find that Senator Blumenthal, Senator Hawley, others on the Subcommittee and I will look very harshly on the deployment of technology that harms children."

iv. Job Market

Chair Blumenthal raised AI's potential impact on the job market and economy. Mr. Altman admitted that "like with all technological revolutions, I expect there to be significant impact on jobs." Ms. Montgomery noted the potential for new job opportunities and the importance of training the workforce for the technological jobs of the future.

v. Competition

Senator Booker expressed concern over how few companies "now control and affect the lives of so many of us. And these companies are getting bigger and more powerful." Mr. Altman added that an effort is needed to align AI systems with societal values. Chair Blumenthal noted that the hearing had barely touched on the competition concerns related to AI, specifically "the monopolization danger, the dominance of markets that excludes new competition, and thereby inhibits or prevents innovation and invention." The Chair suggested that a further discussion on antitrust issues might be needed.

Senate Homeland Security and Governmental Affairs Committee Hearing: Artificial Intelligence in Government

On the same day, the U.S. Senate Homeland Security and Governmental Affairs Committee (HSGAC) held a hearing to explore the opportunities and challenges associated with the federal government's use of AI.[5] The hearing was the second in a series of hearings that committee Chair Gary Peters (D-MI) plans to convene to address how lawmakers can support the development of AI. The first hearing, held on March 8, 2023, focused on the transformative potential of AI, as well as the potential risks.[6]

Witnesses included:

We provide a full hearing summary and analysis below. Of particular note, however:

I. Potential Harms

Several senators and witnesses expressed concerns about the potential harms posed by government use of AI, including suppression of speech, bias and discrimination, data privacy and security breaches, and job displacement.

a. Suppression of Speech

In his opening statement and throughout the hearing, Ranking Member Paul expressed concern about the federal government using AI to monitor, surveil, and censor speech under the guise of combating misinformation. He warned that AI will make it easier for the government to "invisibly control the narrative, eliminate dissent, and retain power." Senator Rick Scott (R-FL) echoed those concerns, and Mr. Siegel stated that the risk of the government using AI to suppress speech "cannot be overstated." He cautioned against emulating the Chinese model of top-down, party-driven social control when regulating AI, which would mean "the end of our tradition of self-government and the American way of life."

b. Bias and Discrimination

Senators and witnesses also expressed concerns about the potential for biases in AI applications to cause violations of due process and equal protection rights. For example, there was a discussion of apparent flaws identified in an AI algorithm used by the IRS, which resulted in Black taxpayers being audited at five times the rate of taxpayers of other races, and of AI-driven systems used at the state level to determine eligibility for disability benefits, which resulted in thousands of recipients being wrongfully denied critical assistance. Richard Eppink testified about his involvement in a class action lawsuit brought by the ACLU representing individuals with developmental and intellectual disabilities who were denied funds by Idaho's Medicaid program because of a flaw in the state's AI-based system. Mr. Eppink explained that the people who were denied disability benefits were unable to challenge the decisions because they did not have access to the proprietary system used to determine their eligibility. He advocated for increased transparency into any AI systems used by the government, but cautioned that even if an AI-based system functions properly, the underlying data may be corrupted by years and years of discrimination and other effects that have "bias[ed] the data in the first place." Senators expressed particular concerns about law enforcement's use of predictive modeling to justify forms of surveillance.

c. Data Privacy and Cybersecurity

Hearing testimony highlighted concerns about the collection, use, and protection of data by AI applications, as well as gaps in existing privacy laws. Senator Ossoff stated that AI tools themselves are vulnerable to data breaches and could be used to penetrate government systems. Daniel Ho highlighted the scale of the problem, noting that by one estimate the federal government needs to hire about 40,000 IT workers to address the cybersecurity issues posed by AI. Given the enormous amounts of data that can be collected using AI and the patchwork of privacy legislation currently in place, Mr. Ho said a data strategy, such as that contemplated by the National Secure Data Service Act, is needed. Senators signaled bipartisan support for national privacy legislation.

d. Job Displacement

Senators in the HSGAC hearing echoed the concerns expressed in the Senate Judiciary Subcommittee hearing regarding the potential for AI-driven automation to cause job displacement. Senator Maggie Hassan (D-NH) asked Daniel Ho about the potential for AI to be used to automate government jobs. Mr. Ho responded that "augmenting the existing federal workforce [with AI] rather than displacing them" is the right approach, because ultimately there needs to be a human in charge of these systems. Senator Alex Padilla (D-CA) agreed and offered anecdotal evidence from his experience as Secretary of State of California, where the government introduced the first chatbot in California state government. He opined that rather than leading to layoffs and staff reductions, the chatbot freed up government resources to focus on more important issues.

II. Recommendations

The witnesses offered a number of recommended measures to mitigate the risks posed by the federal government's use of AI and to ensure that it is used in a responsible and ethical manner.

Those recommendations are discussed below.

a. Developing Policies and Guidelines

As directed by the AI in Government Act of 2020 and Executive Order 13960, the Office of Management and Budget (OMB) plans to draft policy guidance on the use of AI systems by the U.S. government.[8] Multiple senators and witnesses noted the importance of this guidance and called on OMB to ensure that it appropriately addresses the wide diversity of AI use cases across the federal government. Lynne Parker proposed requiring all federal agencies to use the National Institute of Standards and Technology (NIST) AI Risk Management Framework (RMF) during the design, development, procurement, use, and management of their AI use cases. Witnesses also suggested looking to the White House Office of Science and Technology Policy's Blueprint for an AI Bill of Rights as a source of guiding principles.

b. Creating Oversight

Senators and witnesses proposed several measures to create oversight of the federal government's use of AI. Multiple witnesses advocated for AI use case inventories to increase transparency and for the elimination of the government's use of "black box" systems. Richard Eppink argued that if a government agency or state-funded agency uses AI technology, there must be transparency about the proprietary system so Americans can evaluate whether they need to challenge the government decisions generated by the system. Lynne Parker stated that the U.S. is "suffering right now from a lack of leadership and prioritization" on these AI topics and proposed, as one immediate solution, appointing a chief AI officer at each federal agency to oversee use and implementation. She also recommended establishing an interagency Chief AI Officers Council that would be responsible for coordinating AI adoption across the federal government.

c. Investing in Training, Research, and Development

Speakers at the hearing highlighted the need to invest in training federal employees and in the research and development of AI systems. As noted above, after the hearing the AI Leadership Training Act,[7] which would create an AI training program for federal supervisors and management officials, was favorably reported out of committee. Multiple witnesses stated that Congress must act immediately to help agencies hire and retain technical talent to address the current gap in leadership and expertise within the federal government. Ms. Parker testified that the government must invest in digital infrastructure, including the National AI Research Resource (NAIRR), to ensure secure access to administrative data. The NAIRR is envisioned as a shared computing and data infrastructure that will provide AI researchers and students across scientific fields and disciplines with access to computing resources and high-quality data, along with appropriate educational tools and user support. While there was some support for public-private partnerships to develop and deploy AI, Senator Padilla and Mr. Eppink advocated for agencies building AI tools in-house to prevent proprietary interests from influencing government systems. Chair Peters stated that a future HSGAC hearing will focus on how the government can work with the private sector and academia to harness various ideas and approaches.

d. Fostering International Cooperation and Innovation

Lastly, Senators Hassan and Jacky Rosen (D-NV) both emphasized the need to foster international cooperation in developing AI standards. Senator Rosen proposed a multilateral AI research institute to enable like-minded countries to collaborate on standard setting. She stated, "China has an explicit plan to become a standards issuing country, and as part of its push to increase global influence it coordinates national standards work across government and industry. So in order for the U.S. to remain a leader in AI and maintain a national security edge, our response must be one of leadership, coordination, and, above all, cooperation." Despite expressing grave concerns about the danger to democracy posed by AI, Mr. Siegel noted that the U.S. cannot abandon AI innovation and risk ceding the space to competitors like China.

III. How Gibson Dunn Can Assist

Gibson Dunn's Public Policy; Artificial Intelligence; and Privacy, Cybersecurity and Data Innovation Practice Groups are closely monitoring legislative and regulatory actions in this space and are available to assist clients through strategic counseling; real-time intelligence gathering; developing and advancing policy positions; drafting legislative text; shaping messaging; and lobbying Congress.

_________________________

[1] Oversight of A.I.: Rules for Artificial Intelligence: Hearing Before the Subcomm. on Privacy, Tech., and the Law of the S. Comm. on the Judiciary, 118th Cong. (2023), https://www.judiciary.senate.gov/committee-activity/hearings/oversight-of-ai-rules-for-artificial-intelligence.

[2] Twitter, Inc. v. Taamneh, 143 S. Ct. 1206 (2023); Gonzalez v. Google LLC, 143 S. Ct. 1191 (2023).

[3] See Twitter, Inc. v. Taamneh, 143 S. Ct. 1206, 1231 (2023) (Jackson, J., concurring) (noting that "[o]ther cases presenting different allegations and different records may lead to different conclusions.").

[4] Press Release, Senator Richard Blumenthal, Blumenthal & Hawley to Hold Hearing on the Future of Tech's Legal Immunities Following Argument in Gonzalez v. Google (Mar. 1, 2023).

[5] Artificial Intelligence in Government: Hearing Before the Senate Committee on Homeland Security and Governmental Affairs, 118th Cong. (2023), https://www.hsgac.senate.gov/hearings/artificial-intelligence-in-government/

[6] Artificial Intelligence: Risks and Opportunities: Hearing Before the Homeland Security and Governmental Affairs Committee, 118th Cong. (2023), https://www.hsgac.senate.gov/hearings/artificial-intelligence-risks-and-opportunities/.

[7] S. 1564, the AI Leadership Training Act, https://www.congress.gov/bill/118th-congress/senate-bill/1564.

[8] See AI in Government Act of 2020, H.R. 2575, 116th Cong. (Sept. 15, 2020); Exec. Order No. 13,960, 85 Fed. Reg. 78,939 (Dec. 3, 2020).

The following Gibson Dunn lawyers prepared this client alert: Michael Bopp, Roscoe Jones Jr., Alexander Southwell, Amanda Neely, Daniel Smith, Frances Waldmann, Kirsten Bleiweiss*, and Madelyn Mae La France.

Gibson, Dunn & Crutcher's lawyers are available to assist in addressing any questions you may have regarding these issues. Please contact the Gibson Dunn lawyer with whom you usually work, the authors, or any of the following in the firm's Public Policy, Artificial Intelligence, or Privacy, Cybersecurity & Data Innovation practice groups:

Public Policy Group: Michael D. Bopp, Co-Chair, Washington, D.C. (+1 202-955-8256, mbopp@gibsondunn.com); Roscoe Jones, Jr., Co-Chair, Washington, D.C. (+1 202-887-3530, rjones@gibsondunn.com); Amanda H. Neely, Washington, D.C. (+1 202-777-9566, aneely@gibsondunn.com); Daniel P. Smith, Washington, D.C. (+1 202-777-9549, dpsmith@gibsondunn.com)

Artificial Intelligence Group: Cassandra L. Gaedt-Sheckter, Co-Chair, Palo Alto (+1 650-849-5203, cgaedt-sheckter@gibsondunn.com); Vivek Mohan, Co-Chair, Palo Alto (+1 650-849-5345, vmohan@gibsondunn.com); Eric D. Vandevelde, Co-Chair, Los Angeles (+1 213-229-7186, evandevelde@gibsondunn.com); Frances A. Waldmann, Los Angeles (+1 213-229-7914, fwaldmann@gibsondunn.com)

Privacy, Cybersecurity and Data Innovation Group: S. Ashlie Beringer, Co-Chair, Palo Alto (+1 650-849-5327, aberinger@gibsondunn.com); Jane C. Horvath, Co-Chair, Washington, D.C. (+1 202-955-8505, jhorvath@gibsondunn.com); Alexander H. Southwell, Co-Chair, New York (+1 212-351-3981, asouthwell@gibsondunn.com)

*Kirsten Bleiweiss is an associate working in the firm's Washington, D.C. office who currently is admitted to practice only in Maryland.

© 2023 Gibson, Dunn & Crutcher LLP

Attorney Advertising: The enclosed materials have been prepared for general informational purposes only and are not intended as legal advice. Please note, prior results do not guarantee a similar outcome.
