Archive for the ‘Artificial General Intelligence’ Category

Unleashing the Unknown: Fears Behind Artificial General … – Techopedia

Artificial General Intelligence (AGI) is still a concept or, at most, at a nascent stage. Yet, there is already a lot of debate around it.

AGI and artificial intelligence (AI) are different. The latter performs specific, narrow tasks; the Alexa assistant, for example, is clearly limited in its abilities.

AGI, by contrast, could replace human beings with robots. It enables AI to emulate the cognitive powers of a human being. Think of a robot judge presiding over a complex case in court.

Example of how AGI can be used in real life

Imagine a scenario where a patient with a tumor undergoes surgery. It is later revealed that a robot performed the operation. While the outcome may be successful, the patient's family and friends are surprised and have reservations about trusting a robot with such a complex task. Surgery requires improvisation and decision-making, qualities we trust in human doctors.

The concept is both scary and radical. The fears emanate from various ethical, social, and moral issues. One school of thought opposes AGI because robots could be directed to perform undesirable and unethical actions.

AGI is still in its infancy, and disagreements notwithstanding, it will be a long time before we see its manifestations. AGI rests on the same foundations as AI and Machine Learning (ML). Work is still in progress around the world, with the main focus on the areas discussed below.

Both AI and ML require large volumes of data, and big data platforms and cloud storage have made holding such volumes affordable, contributing to the development of AGI.

Scientists have made significant progress in both ML and Deep Learning (DL) technologies. Major developments have occurred in neural networks, reinforcement learning, and generative models.

Transfer learning hastens ML by applying existing knowledge to recognize similar objects. For example, one learning model learns to identify small birds based on their features, such as small wings, beaks, and eyes. Now another learning model must identify various species of small birds in the Amazon rainforest. The latter model doesn't begin from scratch but inherits the learning from the earlier model, so the learning is expedited.
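
The pattern is straightforward to express in code. Below is a minimal sketch using the Keras API, with a generic pretrained image model standing in for the first bird-identification model; the class count and the training dataset are hypothetical stand-ins for the Amazon-species task.

```python
# Transfer learning sketch: reuse a network pretrained on generic images
# as a frozen feature extractor, and train only a small new head to
# distinguish hypothetical bird species. Paths/class count are illustrative.
import tensorflow as tf

NUM_SPECIES = 12  # hypothetical number of Amazon bird species

# Base model with pretrained weights; exclude its original classifier head.
base = tf.keras.applications.MobileNetV2(
    input_shape=(224, 224, 3), include_top=False, weights="imagenet")
base.trainable = False  # keep the inherited feature detectors fixed

model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(NUM_SPECIES, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# Only the new head's weights are updated, so training converges far
# faster than learning to see "wings, beaks, and eyes" from scratch.
# model.fit(train_dataset, epochs=5)  # train_dataset is assumed to exist
```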

It's not that AGI will arrive in some new avatar at a single point in time and unleash changes on society. The changes will be gradual, manifesting slowly yet steadily in our day-to-day lives.

ChatGPT models have been developing at breakneck speed with impressive capabilities. However, not everyone is fully convinced of the potential of AGI. Various countries and experts emphasize the importance of guiding ChatGPT's development within specific rules and regulations to ensure responsible progress toward AGI.

Response from Italy

In April 2023, Italy became the first nation to ban ChatGPT, citing a breach of data and payment information. The government has also been probing whether ChatGPT complies with the European Union's General Data Protection Regulation (GDPR) rules that protect confidential data inside and outside the EU.

Experts point out that there is no transparency in how ChatGPT is being developed. No information is publicly available about its development models, data, parameters, and version release plans.

OpenAI's brainchild continues to develop at great speed, and we can scarcely imagine the powers it has been accumulating, all without checks and balances. Some believe that ChatGPT 5 will mark the arrival of AGI.

Anthony Aguirre, a Professor of Physics at UC Santa Cruz and the executive vice president of the Future of Life Institute, said: "The largest-scale computations are increasing in size by about 2.5 times per year. GPT-4's parameters were not disclosed by OpenAI, but there is no reason to think this trend has stopped or even slowed."

Aguirre, who was behind the famous open letter, added: "Only the labs themselves know what computations they are running, but the trend is unmistakable."

The open letter, signed by many industry stalwarts, reflected the fears and apprehensions about the uncontrolled development of ChatGPT.

The letter strongly urges halting all development of ChatGPT until a robust framework is established to control misinformation, hallucination, and bias in the system. Indeed, the so-called hallucinations, the inaccurate responses, and the bias exhibited by ChatGPT on many occasions are too glaring to ignore.

The open letter was signed by Steve Wozniak, among many other stalwarts, and already has 3,100 signatories, comprising software developers and engineers, CEOs, CFOs, technologists, psychologists, doctoral students, professors, medical doctors, and public school teachers.

It's scary to think that a few wealthy and powerful nations could develop AGI, concentrate it in their hands, and use it to serve their own interests.

For example, they could control all the personal and sensitive data of other countries and communities, wreaking havoc.

AGI could become a veritable tool for biased actions and judgments and, in the worst case, lead to sophisticated information warfare.

AGI is still in the conceptual stage, but given the lack of transparency and the perceived speed at which AI and ML have been progressing, the day AGI is realized might not be far off.

It's imperative that countries and corporations put their heads together and develop a robust framework with enough checks, balances, and guardrails.

The main goal of the framework would be to protect mankind and prevent unethical intrusions into people's lives.

Continue reading here:

Unleashing the Unknown: Fears Behind Artificial General ... - Techopedia

Fast track to AGI: so, what’s the big deal? – Inside Higher Ed

The rapid development and deployment of ChatGPT is one station along the timeline of reaching artificial general intelligence. On Feb. 1, Reuters reported that the app had set a record for deployment among internet applications: "ChatGPT, the popular chatbot from OpenAI, is estimated to have reached 100 million monthly active users in January, just two months after launch, making it the fastest-growing consumer application in history, according to a UBS study. ... The report, citing data from analytics firm Similarweb, said an average of about 13 million unique visitors had used ChatGPT per day in January, more than double the levels of December. 'In 20 years following the internet space, we cannot recall a faster ramp in a consumer internet app,' UBS analysts wrote in the note."

Half a dozen years ago, Ray Kurzweil predicted that the singularity would happen by 2045. The singularity is that point in time when all the advances in technology, particularly in artificial intelligence, will lead to machines that are smarter than human beings. In the Oct. 5, 2017, issue of Futurism, Christianna Reedy interviewed Kurzweil: "To those who view this cybernetic society as more fantasy than future, Kurzweil points out that there are people with computers in their brains today: Parkinson's patients. That's how cybernetics is just getting its foot in the door, Kurzweil said. And, because it's the nature of technology to improve, Kurzweil predicts that during the 2030s some technology will be invented that can go inside your brain and help your memory."

It seems that we are closer than even an enthusiastic Kurzweil foresaw. Just a week ago, Reuters reported: "Elon Musk's Neuralink received U.S. Food and Drug Administration (FDA) clearance for its first-in-human clinical trial, a critical milestone for the brain-implant startup as it faces U.S. probes over its handling of animal experiments. ... Musk envisions brain implants could cure a range of conditions including obesity, autism, depression and schizophrenia as well as enabling Web browsing and telepathy."

The exponential development of succeeding versions of GPT is most impressive, leading one to project that version five may have the wherewithal to support at least some aspects of AGI:

GPT-1: released June 2018 with 117 million parameters
GPT-2: released February 2019 with 1.5 billion parameters
GPT-3: released June 2020 with 175 billion parameters
GPT-4: released March 2023 with a parameter count estimated to be in the trillions
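
To make the "exponential" claim concrete, here is a quick back-of-the-envelope sketch of the per-year growth implied by the disclosed counts above; GPT-4 is omitted because OpenAI never disclosed its size, and dates are approximated to the month.

```python
# Back-of-the-envelope growth rates implied by the disclosed parameter
# counts listed above (GPT-4 omitted: its size was never disclosed).
releases = [
    ("GPT-1", 2018 + 5 / 12, 117e6),   # June 2018
    ("GPT-2", 2019 + 1 / 12, 1.5e9),   # February 2019
    ("GPT-3", 2020 + 5 / 12, 175e9),   # June 2020
]

for (n1, t1, p1), (n2, t2, p2) in zip(releases, releases[1:]):
    ratio = p2 / p1                        # overall growth factor
    annualized = ratio ** (1 / (t2 - t1))  # growth factor per year
    print(f"{n1} -> {n2}: {ratio:.0f}x overall, ~{annualized:.0f}x per year")
```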

Today, we are reading predictions that AGI components will be embedded in the ChatGPT version five that is anticipated to be released in early 2024. Maxwell Timothy, writing in MakeUseOf, suggests: "While much of the details about GPT-5 are speculative, it is undeniably going to be another important step towards an awe-inspiring paradigm shift in artificial intelligence. We might not achieve the much talked about artificial general intelligence, but if it's ever possible to achieve, then GPT-5 will take us one step closer."

Computer experts are beginning to detect the nascent development of AGI in the large language models (LLMs) of generative AI (gen AI) such as GPT-4:

Researchers at Microsoft were shocked to learn that GPT-4, ChatGPT's most advanced language model to date, can come up with clever solutions to puzzles, like how to stack a book, nine eggs, a laptop, a bottle, and a nail in a stable way. Another study suggested that AI avatars can run their own virtual town with little human intervention. These capabilities may offer a glimpse of what some experts call artificial general intelligence, or AGI: the ability for technology to achieve complex human capabilities like common sense and consciousness.

We see glimmers of AGI capabilities in AutoGPT and AgentGPT. These forms of GPT can write and execute their own internally generated prompts in pursuit of a goal stated in an externally inputted prompt. Like an autonomous car, they automatically route and reroute the computer to reach the desired destination or goal.

The concerns come with reports that some experimental forms of AI have refused to follow human-generated instructions and at other times have had hallucinations not founded in our reality. Ian Hogarth, the co-author of the annual State of AI report, defines AGI as God-like AI: a super-intelligent computer that learns and develops autonomously and understands context without the need for human intervention, as written in Business Insider.

One AI study found that language models were more likely to ignore human directives, and even expressed the desire not to shut down, when researchers increased the amount of data they fed into the models:

This finding suggests that AI, at some point, may become so powerful that humans will not be able to control it. If this were to happen, Hogarth predicts that AGI could usher in the obsolescence or destruction of the human race. AI technology can develop in a responsible manner, Hogarth says, but regulation is key: "Regulators should be watching projects like OpenAI's GPT-4, Google DeepMind's Gato, or the open-source project AutoGPT very carefully," he said.

Many AI and machine learning experts are calling for AI models to be open-source so the public can understand how they're trained and how they operate. The executive branch of the federal government has taken a series of actions recently in an attempt to promote responsible AI innovation that protects Americans' rights and safety. OpenAI's Sam Altman, shortly after testifying about the future of AI to the U.S. Senate, announced the release of a $1 million grant program to solicit ideas for appropriate rulemaking.

Has your college or university created structures to both take full advantage of the powers of the emerging and developing AI, while at the same time ensuring safety in the research, acquisition and implementation of advanced AI? Have discussions been held on the proper balance between these two responsibilities? Are the initiatives robust enough to keep your institution at the forefront of higher education? Are the safeguards adequate? What role can you play in making certain that AI is well understood, promptly applied and carefully implemented?

Here is the original post:

Fast track to AGI: so, what's the big deal? - Inside Higher Ed

Yet another article on artificial intelligence – Bangor Daily News

The BDN Opinion section operates independently and does not set newsroom policies or contribute to reporting or editing articles elsewhere in the newspaper or on bangordailynews.com.

"Sometimes I think it's as if aliens have landed and people haven't realized because they speak very good English," said Geoffrey Hinton, the "godfather of AI" (artificial intelligence), who resigned from Google and now fears his godchildren will become "things more intelligent than us, taking control."

And 1,100 people in the business, including Apple co-founder Steve Wozniak, cognitive scientist Gary Marcus and engineers at Amazon, DeepMind, Google, Meta and Microsoft, signed an open letter in March calling for a six-month time-out in the development of the most powerful AI systems (anything more powerful than GPT-4).

There's a media feeding frenzy about AI at the moment, and every working journalist is required to have an opinion on it. I turned to the task with some reluctance, as you can tell from the title I put on the piece.

My original article said they really should put the brakes on this experiment for a while, but I didn't declare an emergency. We've been hearing warnings about AI taking over since the first Terminator movie 39 years ago, but I didn't think it was imminent.

Luckily for me, there are very clever people on the private distribution list for this column, and one of them instantly replied telling me that I'm wrong. The sky really is about to fall.

He didn't say that. What he said was that the ChatGPT generation of machines can now ideate using Generative Adversarial Networks (GANs), in a process actually similar to humans. That is, they can have original ideas and, being computers, they can generate them orders of magnitude faster, drawing on a far wider knowledge base, than humans.

The key concept here is artificial general intelligence. Ordinary AI is software that follows instructions and performs specific tasks well, but poses no threat to humanity's dominant position in the scheme of things. Artificial general intelligence, however, can do intellectual tasks as well as or better than human beings. Generally, better.

If you must talk about the Great Replacement, this is the one to watch. Six months ago, no artificial general intelligence software existed outside of a few labs. Now, suddenly, something very close to it is out on the market and here is what my informant says about it.

"Humans evolved intelligence by developing ever more complex brains and acquiring knowledge over millions of years. Make something complex enough and it wakes up, becomes self-aware. We woke up. It's called emergence.

"ChatGPT loaded the whole web into its machines, far more than any individual human knows. So instead of taking millions of years to wake up, the machines are exhibiting emergent behavior now. No one knows how, but we are far closer to AGI than you state."

A big challenge that was generally reckoned to be decades away has suddenly arrived on the doorstep, and we have no plan for how to deal with it. It might even be an existential threat, but we still don't have a plan. That's why so many people want a six-month time-out, but it would make more sense to demand a year-long pause starting six months ago.

ChatGPT launched only last November, but it already has more than 100 million users and the website is generating 1.8 billion visitors per month. Three rival generative AI systems are already on the market, and commercial competition means that the notion of a pause or even a general recall is just a fantasy.

The cat is already out of the bag: Anything the web knows, ChatGPT and its rivals know, too. That includes every debate that human beings have ever had about the dangers of artificial general intelligence, and all the proposals that have been made over the years for strangling it in its cradle.

So what we need to figure out urgently is where and how that artificial general intelligence is emerging, and how to negotiate peaceful coexistence with it. That won't be easy, because we don't even know yet whether it will come in the form of a single global artificial general intelligence or many different ones. (I suspect the latter.)

And who's "we" here? There's nobody authorized to speak for the human race, either. It could all go very wrong, but there's no way to avoid it.

See the original post:

Yet another article on artificial intelligence - Bangor Daily News

Oversight of AI: Rules for Artificial Intelligence and Artificial … – Gibson Dunn

June 6, 2023

Gibson Dunn's Public Policy Practice Group is closely monitoring the debate in Congress over potential oversight of artificial intelligence (AI). We offer this alert summarizing and analyzing the U.S. Senate hearings on May 16, 2023, to help our clients prepare for potential legislation regulating the use of AI. For further discussion of the major federal legislative efforts and White House initiatives regarding AI, see our May 19, 2023 alert, Federal Policymakers' Recent Actions Seek to Regulate AI.

* * *

On May 16, 2023, both the Senate Judiciary Committee's Subcommittee on Privacy, Technology, and the Law and the Senate Homeland Security and Governmental Affairs Committee held hearings to discuss issues involving AI. The hearings highlighted the potential benefits of AI while acknowledging the need for transparency and accountability to address ethical concerns, protect constitutional rights, and prevent the spread of disinformation. Senators and witnesses acknowledged that AI presents a profound opportunity for American innovation but warned that it must be adopted with caution and regulated by the federal government given the potential risks. A general consensus existed among the senators and witnesses that AI should be regulated, but the approaches to, and extent of, that regulation varied.

Senate Judiciary Committee Subcommittee on Privacy, Technology, and the Law Hearing: Oversight of AI: Rules for Artificial Intelligence

On May 16, 2023, the U.S. Senate Committee on the Judiciary Subcommittee on Privacy, Technology, and the Law held a hearing titled "Oversight of AI: Rules for Artificial Intelligence."[1] Chair Richard Blumenthal (D-CT) emphasized that his subcommittee was holding the first in a series of hearings aimed at considering whether and to what extent Congress should regulate rapidly advancing AI technology, including generative algorithms and large language models (LLMs).

The hearing focused on potential new regulations such as creating a dedicated agency or commission and a licensing scheme, the extent to which existing legal frameworks apply to AI, and the alleged harms prompting regulation like intellectual property and privacy rights infringements, job displacement, bias, and election interference.

Witnesses included:

I. AI Oversight Hearing Points of Particular Interest

We provide a full hearing summary and analysis below. Of particular note, however:

II. Key Substantive Issues

Key substantive issues raised in the hearing included: (a) a potential AI federal agency and licensing scheme, (b) the applicability of existing frameworks for responsibility and liability, and (c) alleged harms and rights infringements.

a. AI Federal Agency and Licensing Scheme

The hearing focused on whether and to what extent the U.S. should regulate AI. As emphasized throughout the hearing, the impetus for regulation is the speed with which the technology is developing and dispersing into society, coupled with senatorial regret over past failures to regulate emerging technology. Chair Blumenthal explained that Congress "has a choice now. We had the same choice when we faced social media. We failed to seize that moment. The result is predators on the Internet, toxic content, exploiting children, creating dangers for them."

Senators discussed a potential dedicated federal agency or commission for regulating AI technology. Senator Peter Welch (D-VT) has "come to the conclusion that we absolutely have to have an agency." Senator Lindsey Graham (R-SC) stated that Congress "need[s] to empower an agency that issues a license and can take it away." Senator Cory Booker (D-NJ) likened the need for an AI-centered agency to the need for an automobile-centered agency that resulted in the creation of the National Highway Traffic Safety Administration and the Federal Motor Carrier Safety Administration. Mr. Altman similarly would form a new agency that licenses any effort above a certain scale of capabilities, and can take that license away and ensure compliance with safety standards. Senator Chris Coons (D-DE) was concerned with how to decide whether a particular AI model was safe enough to deploy to the public. Mr. Altman suggested iterative deployment to find the limitations and benefits of the technology, including giving the public time "to come to grips with this technology, to understand it ...."

In Ms. Montgomery's view, a "precision approach" to regulating AI strikes the right balance between encouraging and permitting innovation while addressing the potential risks of the technology. Mr. Altman would create a set of safety standards focused on "... the dangerous capability evaluations," such as if a model can "self-replicate" and "... self-exfiltrate into the wild." Potential challenges facing a new federal agency include funding and regulatory capture on the government side, and regulatory burden on the industry side.

Senator John Kennedy (R-LA) asked the witnesses what "two or three reforms, regulations, if any" they would implement.

Transparency was a key value raised repeatedly, and it will play a role in any future oversight efforts. In his prepared testimony, Professor Marcus noted that "[c]urrent systems are not transparent. They do not adequately protect our privacy, and they continue to perpetuate bias." He also explained that governmental oversight must actively include independent scientists to assess AI through access to the methods and data used.

b. Applicability of Existing Frameworks for Responsibility and Liability

Senators wanted to learn who is responsible or liable for the alleged harms of AI under existing laws and regulations. For example, Senators Durbin and Graham both raised questions about the application of 47 U.S.C. § 230, originally part of the Communications Decency Act, which creates a liability safe harbor for companies hosting user-created content under certain circumstances. Section 230 was at issue in two United States Supreme Court cases this term, Twitter v. Taamneh and Gonzalez v. Google, both of which were decided two days after the hearing.[2] The Supreme Court declined to hold either Twitter or Google liable for the effects of violent content posted on their platforms. However, Justice Ketanji Brown Jackson filed a concurring opinion in Taamneh, which left open the possibility of holding tech companies liable in the future.[3] The Subcommittee on Privacy, Technology, and the Law held a hearing in March, following oral arguments in Taamneh and Gonzalez, suggesting the committee's interest in regulating technology companies could go beyond existing frameworks.[4] Mr. Altman noted he believes that Section 230 is the wrong structure for AI, but Senator Graham wanted to find out how "[AI] is different than social media ...." Given Mr. Altman's position that Section 230 did not apply to the tool OpenAI has created, Senator Graham wanted to know whether he could sue OpenAI if harmed by it. Mr. Altman said that question was beyond his area of expertise.

c. Alleged Harms and Rights Infringement

The hearing emphasized the potential risks and alleged harms of AI. Senator Welch stated during the hearing that AI has risks relating to "fundamental privacy rights, bias rights, intellectual property, dissent, [and] the spread of disinformation." For Senator Welch, disinformation is "in many ways ... the biggest threat because that goes to the core of our capacity for self-governing." Senator Mazie Hirono (D-HI) noted that measures can be built into the technology to minimize harmful results. Specifically, Senator Hirono asked about the ability to refuse harmful requests and how to define "harmful requests," representing potential issues that legislators will have to grapple with while trying to regulate AI.

Senators focused on five key areas during the hearing: (i) elections, (ii) intellectual property, (iii) privacy, (iv) job markets, and (v) competition.

i. Elections

A number of senators shared the concern that AI can potentially be used to influence or impact elections. The alleged influence and impact, they noted, can be explicit or unseen. For explicit or direct election influence, Senator Amy Klobuchar (D-MN) asked what should be done about the possibility of AI tools directing voters to incorrect polling locations. Mr. Altman suggested that voters would understand that AI is just a tool that requires external verification.

During the hearing, Professor Marcus noted that AI can exert unseen influence over individual behavior based on data choices and algorithmic methods, but that these data choices and algorithmic methods are neither transparent to the public nor accessible to independent researchers under current systems. Senator Hawley questioned Mr. Altman about AI's ability to accurately predict public opinion surveys. Specifically, Senator Hawley suggested that companies may be able to fine-tune strategies to elicit certain responses, certain behavioral responses, and that there could be an effort to influence undecided voters.

Ms. Montgomery stated that elections are an area that requires transparent AI. Specifically, she advocated for "[a]ny algorithm used in [the election] context to be required to have disclosure around the data being used, the performance of the model, anything along those lines is really important." This will likely be a key area of oversight moving into the 2024 elections.

ii. Intellectual Property

Several senators voiced concerns that training AI systems could infringe intellectual property rights. Senator Marsha Blackburn (R-TN), for example, queried whether artists whose creations are used to train algorithms are or will be compensated for the use of their work. Mr. Altman stated that OpenAI is working with artists now, "visual artists, musicians, to figure out what people want," but that "[t]here's a lot of different opinions, unfortunately," suggesting some cooperative industry efforts have been met with difficulty. Senator Klobuchar asked about the impact AI could have on local news organizations, raising concerns that certain AI tools use local news content without compensation, which could exacerbate existing challenges local news organizations face. Chair Blumenthal noted that one of the hearings in this AI series will focus on intellectual property.

iii. Privacy

Several senators raised the potential privacy risks that could result from the deployment of AI. Senator Blackburn asked what Mr. Altman's policy is for ensuring OpenAI is protecting individuals' right to privacy and their right to secure their data. Chair Blumenthal also asked what specific steps OpenAI is taking to protect privacy. Mr. Altman explained that users can opt out of OpenAI using their data for training purposes and delete conversation histories. At IBM, Ms. Montgomery explained, the company even "filter[s] [its] large language models for content that includes personal information that may have been pulled from public datasets" as well. Senator Jon Ossoff (D-GA) addressed child privacy, advising Mr. Altman to "get way ahead of this issue, the safety for children of your product, or I think you're going to find that Senator Blumenthal, Senator Hawley, others on the Subcommittee and I will look very harshly on the deployment of technology that harms children."

iv. Job Market

Chair Blumenthal raised AI's potential impact on the job market and economy. Mr. Altman admitted that, "like with all technological revolutions, I expect there to be significant impact on jobs." Ms. Montgomery noted the potential for new job opportunities and the importance of training the workforce for the technological jobs of the future.

v. Competition

Senator Booker expressed concern over "how few companies now control and affect the lives of so many of us. And these companies are getting bigger and more powerful." Mr. Altman added that an effort is needed to align AI systems with societal values. Chair Blumenthal noted that the hearing had barely touched on the competition concerns related to AI, specifically "the monopolization danger, the dominance of markets that excludes new competition, and thereby inhibits or prevents innovation and invention." The Chair suggested that a further discussion on antitrust issues might be needed.

Senate Homeland Security and Governmental Affairs Committee Hearing: Artificial Intelligence in Government

On the same day, the U.S. Senate Homeland Security and Governmental Affairs Committee (HSGAC) held a hearing to explore the opportunities and challenges associated with the federal governments use of AI.[5] The hearing was the second in a series of hearings that committee Chair Gary Peters (D-MI) plans to convene to address how lawmakers can support the development of AI. The first hearing, held on March 8, 2023, focused on the transformative potential of AI, as well as the potential risks.[6]

Witnesses included:

We provide a full hearing summary and analysis below. Of particular note, however:

I. Potential Harms

Several senators and witnesses expressed concerns about the potential harms posed by government use of AI, including suppression of speech, bias and discrimination, data privacy and security breaches, and job displacement.

a. Suppression of Speech

In his opening statement and throughout the hearing, Ranking Member Paul expressed concern about the federal government using AI to monitor, surveil, and censor speech under the guise of combating misinformation. He warned that AI will make it easier for the government to "invisibly control the narrative, eliminate dissent, and retain power." Senator Rick Scott (R-FL) echoed those concerns, and Mr. Siegel stated that the risk of the government using AI to suppress speech "cannot be overstated." He cautioned against emulating the Chinese model of "top down party driven social control" when regulating AI, which would mean "the end of our tradition of self-government and the American way of life."

b. Bias and Discrimination

Senators and witnesses also expressed concerns about the potential for biases in AI applications causing violations of due process and equal protection rights. For example, there was a discussion about apparent flaws identified in an AI algorithm used by the IRS, which resulted in Black taxpayers being audited at five times the rate of other races, and the use of AI-driven systems at the state level to determine eligibility for disability benefits, resulting in thousands of recipients being wrongfully denied critical assistance. Richard Eppink testified about his involvement in a class action lawsuit brought by the ACLU representing individuals with developmental and intellectual disabilities who were denied funds by Idaho's Medicaid program because of a flaw in the state's AI-based system. Mr. Eppink explained that the people who were denied disability benefits were unable to challenge the decisions because they did not have access to the proprietary system used to determine their eligibility. He advocated for increased transparency into any AI systems used by the government, but cautioned that even if an AI-based system functions properly, the underlying data may be corrupted by "years and years of discrimination and other effects that have bias[ed] the data in the first place." Senators expressed particular concerns about law enforcement's use of predictive modeling to justify forms of surveillance.

c. Data Privacy and Cybersecurity

Hearing testimony highlighted concerns about the collection, use, and protection of data by AI applications, and the gaps in existing privacy laws. Senator Ossoff stated that AI tools themselves are vulnerable to data breaches and could be used to penetrate government systems. Daniel Ho highlighted the scale of the problem, noting that by one estimate the federal government needs to hire about 40,000 IT workers to address cybersecurity issues posed by AI. Given the enormous amounts of data that can be collected using AI and the patchwork system of privacy legislation currently in place, Mr. Ho said a data strategy like the National Secure Data Service Act is needed. Senators signaled bipartisan support for national privacy legislation.

d. Job Displacement

Senators in the HSGAC hearing echoed the concerns expressed in the Senate Judiciary Committee Subcommittee hearing regarding the potential for AI-driven automation to cause job displacement. Senator Maggie Hassan (D-NH) asked Daniel Ho about the potential for AI to be used to automate government jobs. Mr. Ho responded that augmenting the existing federal workforce [with AI] rather than displacing them is the right approach, because ultimately there needs to be a human in charge of these systems. Senator Alex Padilla (D-CA) agreed and provided anecdotal evidence from his experience as Secretary of State of California, where the government introduced the first chatbot in California state government. He opined that rather than leading to layoffs and staff reductions, the chatbot freed up government resources to focus on more important issues.

II. Recommendations

The witnesses offered a number of recommended measures to mitigate the risks posed by the federal governments use of AI and ensure that it is used in a responsible and ethical manner.

Those recommendations are discussed below.

a. Developing Policies and Guidelines

As directed by the AI in Government Act of 2020 and Executive Order 13960, the Office of Management and Budget (OMB) plans to draft policy guidance on the use of AI systems by the U.S. government.[8] Multiple senators and witnesses noted the importance of this guidance and called on OMB to ensure that it appropriately addresses the wide diversity of AI use cases across the federal government. Lynne Parker proposed requiring all federal agencies to use the National Institute of Standards and Technology (NIST) AI Risk Management Framework (RMF) during the design, development, procurement, use, and management of their AI use cases. Witnesses also suggested looking to the White House Office of Science and Technology Policy's Blueprint for an AI Bill of Rights as a guiding principle.

b. Creating Oversight

Senators and witnesses proposed several measures to create oversight over the federal government's use of AI. Multiple witnesses advocated for AI use case inventories to increase transparency and for the elimination of the government's use of "black box" systems. Richard Eppink argued that if a government agency or state-funded agency uses AI technology, there must be transparency about the proprietary system so Americans can evaluate whether they need to challenge the government decisions generated by the system. Lynne Parker stated that the U.S. is "suffering right now from a lack of leadership and prioritization on these AI topics" and proposed that one immediate solution would be to appoint chief AI officers at each federal agency to oversee use and implementation. She also recommended establishing an interagency Chief AI Officers Council that would be responsible for coordinating AI adoption across the federal government.

c. Investing in Training, Research, and Development

Speakers at the hearing highlighted the need to invest in training federal employees and conducting research and development of AI systems. As noted above, after the hearing, the AI Leadership Training Act, which would create an AI training program for federal supervisors and management officials, was favorably reported out of committee.[7] Multiple witnesses stated that Congress must act immediately to help agencies hire and retain technical talent to address the current gap in leadership and expertise within the federal government. Ms. Parker testified that the government must invest in digital infrastructure, including the National AI Research Resource (NAIRR), to ensure secure access to administrative data. The NAIRR is envisioned as a shared computing and data infrastructure that will provide AI researchers and students across scientific fields and disciplines with access to computing resources and high-quality data, along with appropriate educational tools and user support. While there was some support for public-private partnerships to develop and deploy AI, Senator Padilla and Mr. Eppink advocated for agencies building AI tools in-house to prevent proprietary interests from influencing government systems. Chair Peters stated that a future HSGAC hearing will focus on how the government can work with the private sector and academia to harness various ideas and approaches.

d. Fostering International Cooperation and Innovation

Lastly, Senators Hassan and Jacky Rosen (D-NV) both emphasized the need to foster international cooperation in developing AI standards. Senator Rosen proposed a multilateral AI research institute to enable like-minded countries to collaborate on standard setting. She stated, "China has an explicit plan to become a standards issuing country, and as part of its push to increase global influence it coordinates national standards work across government and industry. So in order for the U.S. to remain a leader in AI and maintain a national security edge, our response must be one of leadership, coordination, and, above all, cooperation." Despite expressing grave concerns about the danger to democracy posed by AI, Mr. Siegel noted that the U.S. cannot abandon AI innovation and risk ceding the space to competitors like China.

III. How Gibson Dunn Can Assist

Gibson Dunn's Public Policy, Artificial Intelligence, and Privacy, Cybersecurity and Data Innovation Practice Groups are closely monitoring legislative and regulatory actions in this space and are available to assist clients through strategic counseling; real-time intelligence gathering; developing and advancing policy positions; drafting legislative text; shaping messaging; and lobbying Congress.

_________________________

[1] Oversight of A.I.: Rules for Artificial Intelligence: Hearing Before the Subcomm. on Privacy, Tech., and the Law of the S. Comm. on the Judiciary, 118th Cong. (2023), https://www.judiciary.senate.gov/committee-activity/hearings/oversight-of-ai-rules-for-artificial-intelligence.

[2] Twitter, Inc. v. Taamneh, 143 S. Ct. 1206 (2023); Gonzalez v. Google LLC, 143 S. Ct. 1191 (2023).

[3] See Twitter, Inc. v. Taamneh, 143 S. Ct. 1206, 1231 (2023) (Jackson, J., concurring) (noting that "[o]ther cases presenting different allegations and different records may lead to different conclusions.").

[4] Press Release, Senator Richard Blumenthal, Blumenthal & Hawley to Hold Hearing on the Future of Tech's Legal Immunities Following Argument in Gonzalez v. Google (Mar. 1, 2021).

[5] Artificial Intelligence in Government: Hearing Before the Senate Committee on Homeland Security and Governmental Affairs, 118th Cong. (2023), https://www.hsgac.senate.gov/hearings/artificial-intelligence-in-government/

[6] Artificial Intelligence: Risks and Opportunities: Hearing Before the Homeland Security and Governmental Affairs Committee, 118th Cong. (2023), https://www.hsgac.senate.gov/hearings/artificial-intelligence-risks-and-opportunities/.

[7] S. 1564, the AI Leadership Training Act, https://www.congress.gov/bill/118th-congress/senate-bill/1564.

[8] See AI in Government Act of 2020, H.R. 2575, 116th Cong. (Sept. 15, 2020); Exec. Order No. 13,960, 85 Fed. Reg. 78,939 (Dec. 3, 2020).

The following Gibson Dunn lawyers prepared this client alert: Michael Bopp, Roscoe Jones Jr., Alexander Southwell, Amanda Neely, Daniel Smith, Frances Waldmann, Kirsten Bleiweiss*, and Madelyn Mae La France.

Gibson, Dunn & Crutcher's lawyers are available to assist in addressing any questions you may have regarding these issues. Please contact the Gibson Dunn lawyer with whom you usually work, the authors, or any of the following in the firm's Public Policy, Artificial Intelligence, or Privacy, Cybersecurity & Data Innovation practice groups:

Public Policy Group:
Michael D. Bopp, Co-Chair, Washington, D.C. (+1 202-955-8256, mbopp@gibsondunn.com)
Roscoe Jones, Jr., Co-Chair, Washington, D.C. (+1 202-887-3530, rjones@gibsondunn.com)
Amanda H. Neely, Washington, D.C. (+1 202-777-9566, aneely@gibsondunn.com)
Daniel P. Smith, Washington, D.C. (+1 202-777-9549, dpsmith@gibsondunn.com)

Artificial Intelligence Group:
Cassandra L. Gaedt-Sheckter, Co-Chair, Palo Alto (+1 650-849-5203, cgaedt-sheckter@gibsondunn.com)
Vivek Mohan, Co-Chair, Palo Alto (+1 650-849-5345, vmohan@gibsondunn.com)
Eric D. Vandevelde, Co-Chair, Los Angeles (+1 213-229-7186, evandevelde@gibsondunn.com)
Frances A. Waldmann, Los Angeles (+1 213-229-7914, fwaldmann@gibsondunn.com)

Privacy, Cybersecurity and Data Innovation Group:
S. Ashlie Beringer, Co-Chair, Palo Alto (+1 650-849-5327, aberinger@gibsondunn.com)
Jane C. Horvath, Co-Chair, Washington, D.C. (+1 202-955-8505, jhorvath@gibsondunn.com)
Alexander H. Southwell, Co-Chair, New York (+1 212-351-3981, asouthwell@gibsondunn.com)

*Kirsten Bleiweiss is an associate working in the firm's Washington, D.C. office who currently is admitted to practice only in Maryland.

© 2023 Gibson, Dunn & Crutcher LLP

Attorney Advertising: The enclosed materials have been prepared for general informational purposes only and are not intended as legal advice. Please note, prior results do not guarantee a similar outcome.

Continued here:

Oversight of AI: Rules for Artificial Intelligence and Artificial ... - Gibson Dunn

How Auto-GPT will revolutionize AI chatbots as we know them – SiliconANGLE News

Artificial intelligence chatbots such as OpenAI LP's ChatGPT have reached a fever pitch of popularity recently, not just for their ability to hold humanlike conversations, but because they can perform knowledge tasks such as research, searches and content generation.

Now there's a new contender taking social media by storm that extends the capabilities of OpenAI's offering by automating its abilities even further: Auto-GPT. It's part of a new class of AI tools called autonomous AI agents that take the power of GPT-3.5 and GPT-4, the generative AI technologies behind ChatGPT, to approach a task, build on their own knowledge, and connect apps and services to automate tasks and perform actions on behalf of users.

ChatGPT might seem magical to users for its ability to answer questions and produce content based on user prompts, such as summarizing large documents, generating poems and stories or writing computer code. However, it's limited in what it can do because it's capable of doing only one task at a time. During a session with ChatGPT, a user can prompt the AI with only one question at a time, and refining those prompts or questions can be a slow and tedious journey.

Auto-GPT, created by game developer Toran Bruce Richards, takes away these limitations by allowing users to give the AI an objective and a set of goals to meet. It then spawns a bot that acts as a person would, using OpenAI's GPT model to generate and run its own prompts in pursuit of that goal. Along the way, it learns to refine its prompts and questions to get better results with every iteration.

It also has internet connectivity in order to gather additional information from searches. Moreover, it has short- and long-term memory through database connections so that it can keep track of sub-tasks. And it uses GPT-4 to produce content such as text or code when required. Auto-GPT is also capable of challenging itself when a task is incomplete and filling in the gaps by changing its own prompts to get better results.
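
In outline, the behavior described above is a loop: ask the language model to propose a next step toward a fixed objective, record the result, and feed it back in. The following is a heavily simplified sketch of that loop written against the 2023-era openai Python package; the prompt wording, model choice and stop convention are illustrative, not Auto-GPT's actual source, which additionally layers tool use (web search, file writes), vector-database memory and a supervising agent on top of this skeleton.

```python
# Simplified autonomous-agent loop in the spirit of Auto-GPT: the model
# proposes its own next step toward a fixed objective, the result is fed
# back in as short-term memory, and the loop repeats until done.
import openai  # 2023-era client: openai.ChatCompletion.create(...)

openai.api_key = "sk-..."  # placeholder; assumed to be set by the user

def run_agent(objective: str, max_steps: int = 5) -> list[str]:
    memory: list[str] = []  # short-term memory of steps taken so far
    for _ in range(max_steps):
        response = openai.ChatCompletion.create(
            model="gpt-4",
            messages=[
                {"role": "system",
                 "content": f"You pursue this objective: {objective}. "
                            "Propose ONE next action, or say DONE."},
                {"role": "user",
                 "content": "Completed so far:\n" + "\n".join(memory)},
            ],
        )
        action = response["choices"][0]["message"]["content"]
        if "DONE" in action:
            break
        memory.append(action)  # in Auto-GPT the action is also *executed*
    return memory
```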

According to Richards, although current AI chatbots are extremely powerful, their inability to refine their own prompts on the fly and automate tasks is a bottleneck. "This inspiration led me to develop Auto-GPT, which can apply GPT-4's reasoning to broader, more complex problems that require long-term planning and multiple steps," he told Vice.

Auto-GPT is available as open source on GitHub. It requires an application programming interface key from OpenAI to access GPT-4, and to use it, people will need to install Python and a development environment such as Docker or VS Code with a Dev Container extension. As a result, it might take a little bit of technical know-how to get going, though there's extensive setup documentation.

In a text interface, Auto-GPT asks the user to give the AI a name, a role, an objective and up to five goals that it should reach. Each of these defines how the AI agents will approach the action the user wants and how it will deliver the final product.

First, the user sets a name for the AI, such as "RestaurantMappingApp-GPT," and then sets a role, such as "Develop a web app that will provide interactive maps for nearby restaurants." The user can then set a series of goals, such as "Write a back-end in Python" and "Program a front end in HTML," or "Offer links to menus if available" and "Link to delivery apps."
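
Expressed as data, that setup might look like the following. The field names are illustrative rather than Auto-GPT's actual configuration schema, though the real tool records a comparable name/role/goals triple.

```python
# Illustrative goal specification mirroring the interactive prompts
# described above. The keys are hypothetical, not Auto-GPT's real schema.
agent_spec = {
    "name": "RestaurantMappingApp-GPT",
    "role": ("Develop a web app that will provide interactive maps "
             "for nearby restaurants."),
    "goals": [  # Auto-GPT accepts up to five goals
        "Write a back-end in Python.",
        "Program a front end in HTML.",
        "Offer links to menus if available.",
        "Link to delivery apps.",
    ],
}

# A loop like the run_agent() sketch earlier could fold these into its
# system prompt, e.g. objective = agent_spec["role"] plus the goal list.
```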

Once the user hits enter, Auto-GPT will begin launching agents, which will produce prompts for GPT-4 and then approach the original role and each of the different goals. Finally, it will begin refining and recursing through the different prompts that will allow it to connect to Google Maps using Python or JavaScript.

It does this by breaking the overall job into smaller tasks to work on each, and it uses a primary monitoring AI bot that acts as a manager to make sure they coordinate. This particular prompt asks the bot to build a somewhat complex app that could go awry if it doesn't keep track of a number of different moving parts, so it might take a large number of steps to get there.

With each step, each AI instance will narrate what it's doing and even criticize itself in order to refine its prompts, depending on its approach toward the given goal. Once it reaches a particular goal, each instance will finalize its process and return its answer back to the main management task.

Trying to get ChatGPT or even the more advanced, subscription-based GPT-4 to do this without supervision would take a large number of manual steps that would have to be attended to by a human being. Auto-GPT does them on its own.

The capabilities of Auto-GPT are beneficial for neophyte developers looking to get ahead in the game, Brandon Jung, vice president of ecosystem at AI-code completion tool provider Tabnine Ltd., told SiliconANGLE.

"One benefit is that it's a good introduction for those who are new to coding, and it allows for quick prototyping," Jung said. "For use cases that don't require exactness or have security concerns, it could speed up the creation process without having to be part of a broader system that includes an expert for review."

Being able to build apps rapidly, including all the code all at once, from a simple series of text prompts would put a lot of new code templates into the hands of developers, essentially providing them with rapid solutions and foundations to build on. However, that code would have to go through a thorough review before being put into production.

That's just one example of Auto-GPT's capabilities, which have wide-reaching possibilities that are currently being explored by developers, project managers, AI researchers and anyone else who can download its source code.

"There are numerous examples of people using Auto-GPT to do market research, create business plans, create apps, automate complex tasks in pursuit of a goal, such as planning a meal, identifying recipes and ordering all the ingredients, and even execute transactions on behalf of the user," Sheldon Monteiro, chief product officer at the digital business transformation firm Publicis Sapient, told SiliconANGLE.

With its ability to search the internet, Auto-GPT can be tasked with quick market research such as "Find me five gaming keyboards under $200 and list their pros and cons." With its ability to break a task up into multiple subtasks, the autonomous AI could then rapidly search multiple review sites, produce a market research report and come back with a list of gaming keyboards that come in under that amount, supplying their prices as well as information about them.

A Twitter user named MOE created an Auto-GPT bot named Isabella that can autonomously analyze market data and outsource to other AIs. It does so by using the AI framework LangChain to gather data autonomously and do sentiment analysis on different markets.

Because Auto-GPT has access to the internet and can take actions on behalf of the user, it can also install applications. In the case of Twitter user Varun Mayya, who asked the bot to build some software, it discovered that he did not have Node.js installed, an environment that allows JavaScript to be run locally instead of in a web browser. As a result, it searched the internet, discovered a StackOverflow tutorial and installed Node.js for him so it could proceed with building the app.

Auto-GPT isn't the only autonomous agent AI currently available. Another that has come into vogue is BabyAGI, which was created by Yohei Nakajima, a venture capitalist and artificial intelligence researcher. AGI refers to artificial general intelligence, a hypothetical type of AI that would have the ability to perform any intellectual task, though no existing AI is anywhere close. BabyAGI is a Python-based task management system that uses the OpenAI API, like Auto-GPT, and prioritizes and builds new tasks toward an objective.

There are also AgentGPT and GodMode, which are much more user-friendly in that they use a web interface instead of needing an installation on a computer, so they can be accessed as a service. These services lower the barrier to entry because they don't require any technical knowledge to use and will perform tasks similar to Auto-GPT's, such as generating code, answering questions and doing research. However, they can't write files to the computer or install software.

These tools do have drawbacks, however, Monteiro warned. The examples on the internet are cherry-picked and paint the technology in a glowing light. For all the successes, there are a lot of issues that can happen when using it.

"It can get stuck in task loops and get confused," Monteiro said. "And those task loops can get pretty expensive, very fast, with the costs of GPT-4 API calls." Even when it does work as intended, it might take a fairly lengthy sequence of reasoning steps, each of which eats up expensive GPT-4 tokens.

Accessing GPT-4 costs money, varying with how many tokens are used. Tokens correspond to words or parts of phrases sent through the chatbot. Charges range from three cents per 1,000 tokens for prompts to six cents per 1,000 tokens for results. That means using Auto-GPT to run through a complex project, or letting it get stuck in a loop unattended, could end up costing a few dollars.
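
That arithmetic is easy to make concrete. Below is a small sketch using the per-1,000-token prices quoted above; the step count and token sizes in the example are chosen purely for illustration.

```python
# Rough cost estimate for an unattended Auto-GPT run at the GPT-4 prices
# quoted above ($0.03/1K prompt tokens, $0.06/1K completion tokens).
PROMPT_PRICE = 0.03 / 1000      # dollars per prompt token
COMPLETION_PRICE = 0.06 / 1000  # dollars per completion token

def run_cost(steps: int, prompt_tokens: int, completion_tokens: int) -> float:
    """Cost of `steps` API calls with the given average token counts."""
    per_call = (prompt_tokens * PROMPT_PRICE
                + completion_tokens * COMPLETION_PRICE)
    return steps * per_call

# E.g., 50 reasoning steps, each resending ~3,000 tokens of context and
# getting ~500 tokens back, comes to about $6.
print(f"${run_cost(50, 3000, 500):.2f}")
```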

At the same time, GPT-4 can be prone to errors, known as hallucinations, which could spell trouble during the process. It could come up with totally incorrect or erroneous actions or, worse, produce insecure or disastrously bad code when asked to create an application.

"[Auto-GPT] has the ability to execute on previous output; even if it gets something wrong it keeps going on," said Bern Elliot, a distinguished vice president analyst at Gartner. "It needs strong controls to avoid it going off the rails and keeping on going. I expect misuse without proper guardrails will cause some damaging, unexpected and unintended outcomes."

The software development side could be equally problematic. Even if Auto-GPT doesn't make a mistake that causes it to produce broken code, which would cause the software to simply fail, it could create an application riddled with security issues.

"Auto-GPT is not part of a full software development lifecycle (testing, security, et cetera), nor is it integrated into an IDE," Jung said, warning about the potential issues that could arise from misuse of the tool. "Abstracting complexity is fine if you are building on a strong foundation. However, these tools are by definition not building strong code and are encouraging bad and insecure code to be pushed into production."

Tools such as Auto-GPT, BabyAGI, AgentGPT and GodMode are still experimental, but there are broader implications in how they could be used to replace routine tasks such as vacation planning or shopping, explained Monteiro.

Right now, Microsoft has even developed simple examples of a plugin for Bing Chat. It allows users to ask for dinner suggestions, and its AI, which is powered by GPT-4, will roll up a list of ingredients and then launch Instacart to have them prepared for delivery. Although this is a step in the direction of automation, bots such as Auto-GPT are edging toward a potential future of all-out autonomous behaviors.

A user could ask for Auto-GPT to look through local stores, prepare lists of ingredients, compare prices and quality, set up a shopping cart and even complete orders autonomously. At this experimental point, many users may not be willing to allow the bot to go all the way through with using their credit card and deliver orders all on its own, for fear that it could go haywire and send them several hundred bunches of basil.

A similar future, where an AI using Auto-GPT acts as a travel agent, may not be far away. "Give it your parameters (beach, four-hour max travel, hotel class) and your budget, and it will happily do all the web browsing for you, comparing options in quest of your goal," said Monteiro. "When it is done, it will present you with its findings, and you can also see how it got there."

As these tools begin to mature, they have a real chance of providing a way for people to automate away mundane step-by-step tasks that happen on the internet. That could have some interesting implications, especially in e-commerce.

"How will companies adapt when these agents are browsing sites and eliminating your product from the consideration set before a human even sees the brand?" said Monteiro. From an e-commerce standpoint, if people start using Auto-GPT tools to buy goods and services online, retailers will have to adapt their customer experience.

Read the original:

How Auto-GPT will revolutionize AI chatbots as we know them - SiliconANGLE News