Archive for the ‘Artificial General Intelligence’ Category

The apocalypse isn't coming. We must resist cynicism and fear about AI – The Guardian

Opinion

Remember when WeWork would kill commercial real estate? Crypto would abolish banks? The metaverse would end meeting people in real life?

Mon 15 May 2023 04.06 EDT

In the field of artificial intelligence, doomerism is as natural as an echo. Every development in the field, or to be more precise every development that the public notices, immediately generates an apocalyptic reaction. The fear is natural enough; it comes partly from the lizard-brain part of us that resists whatever is new and strange, and partly from the movies, which have instructed us, for a century, that artificial intelligence will take the form of an angry god that wants to destroy all humanity.

The recent public letter calling for a six-month ban on AI lab work will not have the slightest measurable effect on the development of artificial intelligence, it goes without saying. But it has changed the conversation: every discussion about artificial intelligence must begin with the possibility of total human extinction. It's silly and, worse, it's an alibi, a distraction from the real dangers technology presents.

The most important thing to remember about tech doomerism in general is that it's a form of advertising, a species of hype. Remember when WeWork was going to end commercial real estate? Remember when crypto was going to lead to the abolition of central banks? Remember when the metaverse was going to end meeting people in real life? Silicon Valley uses apocalypse for marketing purposes: they tell you their tech is going to end the world to show you how important they are.

I have been working with and reporting on AI since 2017, which is prehistoric in this field. During that time, I have heard, from intelligent sources who were usually reliable, that the trucking industry was about to end, and that China was in possession of a trillion-parameter natural language processing AI with superhuman intelligence. I have heard geniuses, bona fide geniuses, declare that medical schools should no longer teach radiology because it would all be automated soon.

One of the reasons AI doomerism bores me is that it's become familiar; I've heard it all before. To stay sane, I have had to abide by twin principles: I don't believe it until I see it. Once I see it, I believe it.

Many of the most important engineers in the field indulge in AI doomerism; this is unquestionably true. But one of the defining features of our time is that the engineers, who in my experience do not have even the faintest education in the humanities or even recognize that society and culture are worthy of study, simply have no idea how their inventions interact with the world. One of the most prominent signatories of the open letter was Elon Musk, an early investor in OpenAI. He is brilliant at technology. But if you want to know how little he understands about people and their relationships to technology, go on Twitter for five minutes.

Not that there aren't real causes of worry when it comes to AI; it's just that they're almost always about something other than AI. The biggest anxiety, that an artificial general intelligence is about to take over the world, doesn't even qualify as science fiction. That fear is religious.

Computers do not have will. Algorithms are a series of instructions. The properties that emerge in the "emergent properties" of artificial intelligence have to be discovered and established by human beings. The anthropomorphization of statistical pattern-matching machinery is storytelling; it's a movie playing in the collective mind, nothing more. Turning off ChatGPT isn't murder. Engineers who hire lawyers for their chatbots are every bit as ridiculous as they sound.

The much more real anxieties brought up by the more substantial critics of artificial intelligence are that AI will super-charge misinformation and will lead to the hollowing out of the middle class by the process of automation. Do I really need to point out that both of these problems predate artificial intelligence by decades, and are political rather than technological?

AI might well make it slightly easier to generate fake content, but the problem of misinformation has never been generation but dissemination. The political space is already saturated with fraud and it's hard to see how AI could make it much worse. In the first quarter of 2019, Facebook had to remove 2.2bn fake profiles; AI had nothing to do with it. The response to the degradation of our information networks from government and from the social media industry has been a massive shrug, a bunch of antiquated talk about the First Amendment.

Regulating AI is enormously problematic; it involves trying to fathom the unfathomable and make the inherently opaque transparent. But we already know, and have known for over a decade, about the social consequences of social media algorithms. We don't have to fantasize or predict the effects of Instagram. The research is consistent and established: that technology is associated with higher levels of depression, anxiety and self-harm among children. Yet we do nothing. Vague talk about slowing down AI doesn't solve anything; a concrete plan to regulate social media might.

As for the hollowing out of the middle class, inequality in the United States reached the highest level since 1774 back in 2012. AI may not be the problem. The problem may be the foundational economic order AI is entering. Again, vague talk about an AI apocalypse is a convenient way to avoid talking about the self-consumption of capitalism and the extremely hard choices that self-consumption presents.

The way you can tell that doomerism is just more hype is that its solutions are always terminally vague. The open letter called for a six-month ban. What, exactly, do they imagine will happen over those six months? The engineers won't think about AI? The developers won't figure out ways to use it? Doomerism likes its crises numinous, preferably unsolvable. AI fits the bill.

Recently, I used AI to write a novella: The Death of an Author. I won't say that the experience wasn't unsettling. It was quite weird, actually. It felt like I managed to get an alien to write, an alien that is the sum total of our language. The novella itself has, to me anyway, a hypnotic but removed power: inhuman language that makes sense. But the experience didn't make me afraid. It awed me. Let's reside in the awe for a moment, just a moment, before we go to the fear.

If we have to think through AI by way of the movies, can we at least do Star Trek instead of Terminator 2? Something strange has appeared in the sky; let's be a little more Jean-Luc Picard and a little less Klingon in our response. The truth about AI is that nobody, not the engineers who have created it, not the developers converting it into products, fully understands what it is, never mind what its consequences will be. Let's get a sense of what this alien is before we blow it out of the sky. Maybe it's beautiful.



The Potential of AI in Tax Practice Relies on Understanding its … – Thomson Reuters Tax & Accounting

Curiosity, conversation, and investment into artificial intelligence are quickly gaining traction in the tax community, but proper due diligence requires an acknowledgement of what such tools are and aren't yet capable of, as well as an assessment of security and performance risks, according to industry experts.

With the tax world exploring how AI can improve practice and administration, firms, the IRS, and taxpayers alike are in the early stages of considering its potential for streamlining tasks, saving time, and improving access to information. Regardless of one's individual optimism or skepticism about the possible future of AI in the tax space, panelists at an American Bar Association conference in Washington, D.C., this past week suggested that practitioners arm themselves with important fundamentals and key technological differences under the broad-stroke term of AI.

An increasingly popular and publicly available AI tool is ChatGPT. Users can interact with ChatGPT by issuing whatever prompts come to mind, such as telling it to write a script for a screenplay or simply asking a question. As opposed to algorithmic machine learning tools specifically designed with a narrow focus, such as those in development at the IRS to crack down on abusive transactions like conservation easements, ChatGPT is what is called a large language model (LLM).

LLMs, according to PricewaterhouseCoopers Principal Chris Kontaridis, are text-based and use statistical methodologies to create a relationship between your question and patterns of data and text. In other words, the more data an LLM like ChatGPT (which is currently learning from users across the entire internet) absorbs, the better it can attempt to predict and algorithmically interact with a person. Importantly, however, ChatGPT is "not a knowledge model," Kontaridis said. Calling ChatGPT a knowledge model "would insinuate that it is going to give you the correct answer every time you put in a question." Because it is not artificial general intelligence, something akin to a Hollywood portrayal of sentient machines overtaking humanity, users should recognize that ChatGPT is not self-reasoning, he said.

"We're not even close to having real AGI out there," Kontaridis added.

Professor Abdi Aidid of the University of Toronto Faculty of Law and AI research-focused Blue J Legal said at the ABA conference that "the really important thing when you're using a tool like [ChatGPT] is recognizing its limitations." He explained that it is not providing source material for legal or tax advice. "What it's doing, and this is very important, is simply making a probabilistic determination about the next likely word." For instance, Aidid demonstrated that if you ask ChatGPT what your name is, it will give you an answer whether it knows it or not. You can rephrase the same question and ask it again, and it might give you a slightly different answer with different words because it's responding to a different prompt.

At a separate panel, Ken Crutchfield, vice president and general manager of Legal Markets, said he asked ChatGPT who invented the Trapper Keeper binder, knowing in fact his father Bryant Crutchfield is credited with the invention. ChatGPT spit out a random name. In telling the story, Crutchfield said: "I went through, and I continued to ask questions, and I eventually convinced ChatGPT that it was wrong, and it admitted it and it said yes, Bryant Crutchfield did invent the Trapper Keeper." Crutchfield said that when someone else tried asking ChatGPT who invented the Trapper Keeper, it gave yet another name. He tried it again himself more recently, and the answer included his father's name, but listed his own alma mater. "So it's getting better and kind of learns through these back-and-forths with people that are interacting."

Aidid explained that these instances are referred to as "hallucinations": when an AI does not know the answer and essentially makes something up on the spot based on the data and patterns it has up to that point. If a user were to ask ChatGPT about the Inflation Reduction Act, it would hallucinate an answer because its knowledge currently extends only as far as September 2021. Generative AI like ChatGPT is still more sophisticated than more base-level tools that work off of decision trees, such as the IRS Tax Assistant Tool that taxpayers interact with, which, Aidid said, is not generative AI.

Mindy Herzfeld, professor at the University of Florida Levin College of Law, responded that it is especially problematic because "the [Tax Assistant Tool] is implying that it has all that information and it's generating responses based on the world of information, but it's really not doing that, so it's misleading."

The most potential for the application of generative AI is with so-called deep learning tools, which are supposedly more advanced and complex iterations of machine learning platforms. Aidid said deep learning can work with unstructured data. Such technology can not only synthesize and review information, but review new information for us. "It's starting to take all that and generate things, not simple predictions, but actually generate things that are in the style and mode of human communication, and that's where we're seeing significant investment today."

Herzfeld said that machine learning is already being used in tax on a daily basis, but that it is a little harder to see where deep learning fits in tax law. These more advanced tools will likely be developed in-house at firms, likely in partnership with AI researchers.

PwC is working with Blue J in pursuit of tax-oriented deep learning generative AI to help reduce much of the clerical work that is all too time-consuming in tax practice, according to Kontaridis. Freeing up staff to focus efforts on other things while AI sifts through mountains of data is a boon, he said.

However, as the saying goes, with great power comes great responsibility. Here, that means guarding sensitive information and ensuring accuracy. Kontaridis said that "it's really important to make sure before you deploy something like this to your staff or use it yourself that you're doing it in a safe environment where you are protecting the confidentiality of your personal IP and privilege that you have with your clients."

Herzfeld echoed that practitioners should bear in mind how easily misinformation could be perpetuated through an overreliance on, or lack of oversight of, AI, which she called a "very broadly societal risk." Kontaridis assured the audience that he is not worried about generative AI "replacing our role" as tax professionals: "this is a tool that will help us do our work better."

Referring to the myth that CPA bots will take over the industry, he said: "What I'm worried about is the impact it has on our profession at the university level, discouraging bright young minds from pursuing careers in tax and accounting consulting."



Operation HOPE and CAU Host ChatGPT Creator to Discuss AI – Black Enterprise

Operation HOPE recently partnered with Clark Atlanta University (CAU) to host two events focused on "The Future of Artificial Intelligence" with Sam Altman, OpenAI founder and ChatGPT creator. The conversations were led by Operation HOPE Founder, Chairman, and CEO John Hope Bryant and featured the President of Clark Atlanta University, Dr. George T. French, Jr.

Held on CAU's campus, the first event provided Atlanta's most prominent Black leaders from the public and private sectors an opportunity to engage with Altman and discuss pressing issues around artificial intelligence (AI). The second discussion provided local HBCU and Atlanta-based college students with the same opportunity.

Altman, a billionaire tech pioneer, shared how he believes AI can positively impact lives and create new economic opportunities for communities of color, particularly among students at Historically Black Colleges and Universities (HBCUs). The standing-room-only event included representatives from government, technology, non-profit, education, and the creative industries, among others.

In 2015, Altman co-founded OpenAI, a nonprofit artificial intelligence research and deployment company with the stated mission "to ensure that artificial general intelligence (highly autonomous systems that outperform humans at most economically valuable work) benefits all of humanity." In partnership with Operation HOPE, serial entrepreneur Altman has committed to making AI a force for good by stimulating economic growth, increasing productivity at lower costs and stimulating job creation.

"The promise of an economic boost via machine learning is understandably seductive, but if we want to ensure AI technology has a positive impact, we must all be engaged early on. With proper policy oversight, I believe it can transform the future of the underserved," said Operation HOPE Chairman, Founder, and CEO John Hope Bryant. "The purpose of this discussion is to discover new ways to leverage AI to win in key areas of economic opportunity such as education, housing, employment, and credit. If it can revolutionize business, it can do the same for our communities."

"Getting this right by figuring out the new society that we want to build and how we want to integrate AI technology is one of the most important questions of our time," Altman said. "I'm excited to have this discussion with a diverse group of people so that we can build something that humanity as a whole wants and needs."

Throughout the event, Altman and Bryant demystified AI and how modern digital technology is revolutionizing the way today's businesses compete and operate. By putting AI and data at the center of their capabilities, companies are redefining how they create, capture, and share value, and are achieving impressive growth as a result. During the Q&A session, they also discussed how government agencies can address AI policies that will lead to more equitable outcomes.

Altman is an American entrepreneur, angel investor, co-founder of Hydrazine Capital, former president of Y Combinator, founder and former CEO of Loopt, and co-founder and CEO of OpenAI. He was also one of TIME Magazine's 100 Most Influential People of 2023.

According to recent research by IBM, more than one in three businesses were using AI technology in 2022. The report also notes that the adoption rate is exponential, with 42% currently considering incorporating AI into their business processes. Other research suggests that although the public sector is lagging, an increasing number of government agencies are considering or starting to use AI to improve operational efficiencies and decision-making. (McKinsey, 2020)


AI can be transformative technology, only with appropriate restrictions and safeguards against malicious use – ZAWYA

Check Point Research (CPR), the Threat Intelligence arm of Check Point Software Technologies Ltd. (NASDAQ: CHKP) and a leading provider of cyber security solutions globally, warns that artificial intelligence has the potential to be a transformative technology that can significantly impact our daily lives, but only with appropriate bans and regulations in place to ensure AI is used and developed ethically and responsibly.

"AI has already shown its potential and has the possibility to revolutionize many areas such as healthcare, finance, transportation and more. It can automate tedious tasks, increase efficiency and provide information that was previously not possible. AI could also help us solve complex problems, make better decisions, reduce human error or tackle dangerous tasks such as defusing a bomb, flying into space or exploring the oceans. But at the same time, we see massive use of AI technologies to develop cyber threats as well," says Ram Narayanan, Country Manager at Check Point Software Technologies, Middle East. Such misuse of AI has been widely reported in the media, with select reports around ChatGPT being leveraged by cybercriminals to contribute to the creation of malware.

Overall, the development of AI is not just another passing craze, but it remains to be seen how much of a positive or negative impact it will have on society. And although AI has been around for a long time, 2023 will be remembered by the public as the "Year of AI". However, there continues to be a lot of hype around this technology and some companies may be overreacting. We need to have realistic expectations and not see AI as an automatic panacea for all the world's problems.

We often hear concerns about whether AI will approach or even surpass human capabilities. Predicting how advanced AI will become is difficult, but there are already several categories. Current AI is referred to as narrow or "weak" AI (ANI, Artificial Narrow Intelligence). General AI (AGI, Artificial General Intelligence) would function like the human brain, thinking, learning and solving tasks like a human. The last category, Artificial Super Intelligence (ASI), describes machines that are smarter than us.

If artificial intelligence reaches the level of AGI, there is a risk that it could act on its own and potentially become a threat to humanity. Therefore, we need to work on aligning the goals and values of AI with those of humans.

Ram Narayanan further states, "To mitigate the risks associated with advanced AI, it is important that governments, companies and regulators work together to develop robust safety mechanisms, establish ethical principles and promote transparency and accountability in AI development. Currently, there is a minimum of rules and regulations. There are proposals such as the AI Act, but none of these have been passed and essentially everything so far is governed by the ethical compasses of users and developers. Depending on the type of AI, companies that develop and release AI systems should ensure at least minimum standards such as privacy, fairness, explainability or accessibility."

Unfortunately, AI can also be used by cybercriminals to refine their attacks, automatically identify vulnerabilities, create targeted phishing campaigns, socially engineer, or create advanced malware that can change its code to better evade detection. AI can also be used to generate convincing audio and video deepfakes that can be used for political manipulation, false evidence in criminal trials, or to trick users into paying money.

But AI is also an important aid in defending against cyberattacks in particular. For example, Check Point uses more than 70 different tools to analyse threats and protect against attacks, more than 40 of which are AI-based. These technologies help with behavioral analysis, analyzing large amounts of threat data from a variety of sources, including the darknet, making it easier to detect zero-day vulnerabilities or automate patching of security vulnerabilities.

"Various bans and restrictions on AI have also been discussed recently. In the case of ChatGPT, the concerns are mainly related to privacy, as we have already seen data leaks, and the age limit of users is not addressed. However, blocking similar services has only limited effect, as any slightly more savvy user can get around the ban by using a VPN, for example, and there is also a brisk trade in stolen premium accounts. The problem is that most users do not realise that the sensitive information entered into ChatGPT would be very valuable if leaked, and could be used for targeted marketing purposes. We are talking about potential social manipulation on a scale never seen before," points out Ram Narayanan.

The impact of AI on our society will depend on how we choose to develop and use this technology. It will be important to weigh the potential benefits and risks whilst striving to ensure that AI is developed in a responsible, ethical and beneficial way for society.


How AI Knows Things No One Told It – Scientific American

No one yet knows how ChatGPT and its artificial intelligence cousins will transform the world, and one reason is that no one really knows what goes on inside them. Some of these systems' abilities go far beyond what they were trained to do, and even their inventors are baffled as to why. A growing number of tests suggest these AI systems develop internal models of the real world, much as our own brain does, though the machines' technique is different.


"Everything we want to do with them in order to make them better or safer or anything like that seems to me like a ridiculous thing to ask ourselves to do if we don't understand how they work," says Ellie Pavlick of Brown University, one of the researchers working to fill that explanatory void.

At one level, she and her colleagues understand GPT (short for generative pretrained transformer) and other large language models, or LLMs, perfectly well. The models rely on a machine-learning system called a neural network. Such networks have a structure modeled loosely after the connected neurons of the human brain. The code for these programs is relatively simple and fills just a few screens. It sets up an autocorrection algorithm, which chooses the most likely word to complete a passage based on laborious statistical analysis of hundreds of gigabytes of Internet text. Additional training ensures the system will present its results in the form of dialogue. In this sense, all it does is regurgitate what it learned; it is a "stochastic parrot," in the words of Emily Bender, a linguist at the University of Washington. But LLMs have also managed to ace the bar exam, explain the Higgs boson in iambic pentameter, and make an attempt to break up a user's marriage. Few had expected a fairly straightforward autocorrection algorithm to acquire such broad abilities.
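The "choose the most likely next word" step can be caricatured in a few lines of code: count which word follows which in a training corpus, then emit the most frequent continuation. This toy bigram sketch is not how GPT works internally (real LLMs learn a neural network over hundreds of gigabytes of text, not raw counts), but it makes the statistical idea concrete; the corpus is invented for illustration:

```python
from collections import Counter, defaultdict

# Toy "training corpus" standing in for gigabytes of Internet text.
corpus = "the cat sat on the mat . the cat ate the fish .".split()

# Count how often each word follows each other word (a bigram table).
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def complete(word):
    """Autocorrect-style: return the statistically most likely next word."""
    return follows[word].most_common(1)[0][0]

print(complete("the"))  # "cat" — it follows "the" more often than "mat" or "fish"
```

Everything an LLM "knows" enters through counts like these, only at vastly greater scale and with a neural network generalizing across contexts instead of a literal lookup table.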

That GPT and other AI systems perform tasks they were not trained to do, giving them "emergent abilities," has surprised even researchers who have been generally skeptical about the hype over LLMs. "I don't know how they're doing it or if they could do it more generally the way humans do, but they've challenged my views," says Melanie Mitchell, an AI researcher at the Santa Fe Institute.


"It is certainly much more than a stochastic parrot, and it certainly builds some representation of the world, although I do not think that it is quite like how humans build an internal world model," says Yoshua Bengio, an AI researcher at the University of Montreal.

At a conference at New York University in March, philosopher Raphaël Millière of Columbia University offered yet another jaw-dropping example of what LLMs can do. The models had already demonstrated the ability to write computer code, which is impressive but not too surprising because there is so much code out there on the Internet to mimic. Millière went a step further and showed that GPT can execute code, too, however. The philosopher typed in a program to calculate the 83rd number in the Fibonacci sequence. "It's multistep reasoning of a very high degree," he says. And the bot nailed it. When Millière asked directly for the 83rd Fibonacci number, however, GPT got it wrong: this suggests the system wasn't just parroting the Internet. Rather it was performing its own calculations to reach the correct answer.
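The article does not reproduce the exact program Millière typed in, so the snippet below is only an illustrative stand-in: an ordinary iterative routine for the 83rd Fibonacci number, using the common 1-indexed convention. The point of the demonstration was that GPT appeared to trace through code like this step by step, rather than retrieve the answer from memorized text:

```python
def fib(n):
    """Return the n-th Fibonacci number (1-indexed: fib(1) == fib(2) == 1)."""
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b  # slide the window one step along the sequence
    return a

print(fib(83))  # 99194853094755497
```

A human (or a real computer) runs the loop 83 times; an LLM has no loop counter or working memory, which is what makes its success at simulating such execution surprising.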

Although an LLM runs on a computer, it is not itself a computer. It lacks essential computational elements, such as working memory. In a tacit acknowledgement that GPT on its own should not be able to run code, its inventor, the tech company OpenAI, has since introduced a specialized plug-in (a tool ChatGPT can use when answering a query) that allows it to do so. But that plug-in was not used in Millière's demonstration. Instead he hypothesizes that the machine improvised a memory by harnessing its mechanisms for interpreting words according to their context, a situation similar to how nature repurposes existing capacities for new functions.


This impromptu ability demonstrates that LLMs develop an internal complexity that goes well beyond a shallow statistical analysis. Researchers are finding that these systems seem to achieve genuine understanding of what they have learned. In one study presented last week at the International Conference on Learning Representations (ICLR), doctoral student Kenneth Li of Harvard University and his AI researcher colleagues (Aspen K. Hopkins of the Massachusetts Institute of Technology, David Bau of Northeastern University, and Fernanda Viégas, Hanspeter Pfister and Martin Wattenberg, all at Harvard) spun up their own smaller copy of the GPT neural network so they could study its inner workings. They trained it on millions of matches of the board game Othello by feeding in long sequences of moves in text form. Their model became a nearly perfect player.

To study how the neural network encoded information, they adopted a technique that Bengio and Guillaume Alain, also at the University of Montreal, devised in 2016. They created a miniature "probe" network to analyze the main network layer by layer. Li compares this approach to neuroscience methods. "This is similar to when we put an electrical probe into the human brain," he says. In the case of the AI, the probe showed that its neural activity matched the representation of an Othello game board, albeit in a convoluted form. To confirm this, the researchers ran the probe in reverse to implant information into the network, for instance, flipping one of the game's black marker pieces to a white one. "Basically, we hack into the brain of these language models," Li says. The network adjusted its moves accordingly. The researchers concluded that it was playing Othello roughly like a human: by keeping a game board in its mind's eye and using this model to evaluate moves. Li says he thinks the system learns this skill because it is the most parsimonious description of its training data. "If you are given a whole lot of game scripts, trying to figure out the rule behind it is the best way to compress," he adds.
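The probe idea is, at heart, a small supervised model trained to read a property out of a frozen network's hidden activations: if a simple readout can recover the board state from a layer, that layer encodes it. A minimal sketch on synthetic data, where the "hidden states" are randomly generated stand-ins (the actual studies probed a GPT-style network trained on Othello moves, and Li's work used nonlinear probes):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins: 500 "hidden states" (64-dim) from a frozen network,
# each secretly generated from a binary board feature we hope to recover.
board_feature = rng.integers(0, 2, size=500)   # e.g. "is this square occupied?"
direction = rng.normal(size=64)                # how the net happens to encode it
hidden = rng.normal(size=(500, 64)) + np.outer(board_feature, direction)

# Train a linear probe (least-squares readout) on half the data, test on the rest.
X_train, X_test = hidden[:250], hidden[250:]
y_train, y_test = board_feature[:250], board_feature[250:]
w, *_ = np.linalg.lstsq(np.c_[X_train, np.ones(250)], y_train, rcond=None)
pred = np.c_[X_test, np.ones(250)] @ w > 0.5

accuracy = (pred == y_test).mean()
print(f"probe accuracy: {accuracy:.2f}")  # well above the 0.5 chance level
```

High probe accuracy is evidence the information is present in the activations; running the logic in reverse (editing activations along the learned direction) is the "implanting" intervention the researchers used to show the model actually relies on that representation.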

This ability to infer the structure of the outside world is not limited to simple game-playing moves; it also shows up in dialogue. Belinda Li (no relation to Kenneth Li), Maxwell Nye and Jacob Andreas, all at M.I.T., studied networks that played a text-based adventure game. They fed in sentences such as "The key is in the treasure chest," followed by "You take the key." Using a probe, they found that the networks encoded within themselves variables corresponding to "chest" and "you," each with the property of possessing a key or not, and updated these variables sentence by sentence. The system had no independent way of knowing what a box or key is, yet it picked up the concepts it needed for this task. "There is some representation of the state hidden inside of the model," Belinda Li says.
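The state the probes uncovered can be written out explicitly as a handful of variables updated sentence by sentence. The toy tracker below hard-codes the handling of the article's two example sentences purely for illustration; the striking finding is that the LLMs induced equivalent variables on their own, without ever being told to track state:

```python
# World state: who or what currently possesses the key.
state = {"chest": {"has_key": False}, "you": {"has_key": False}}

def read(sentence):
    """Update the state variables from one sentence, as the probed LLMs appear to."""
    if sentence == "The key is in the treasure chest.":
        state["chest"]["has_key"] = True
    elif sentence == "You take the key.":
        state["chest"]["has_key"] = False
        state["you"]["has_key"] = True

read("The key is in the treasure chest.")
read("You take the key.")
print(state)  # {'chest': {'has_key': False}, 'you': {'has_key': True}}
```

The probes in the M.I.T. study found structures playing the role of this dictionary inside the network's activations, updated as each new sentence arrived.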


Researchers marvel at how much LLMs are able to learn from text. For example, Pavlick and her then Ph.D. student Roma Patel found that these networks absorb color descriptions from Internet text and construct internal representations of color. When they see the word "red," they process it not just as an abstract symbol but as a concept that has a certain relationship to maroon, crimson, fuchsia, rust, and so on. Demonstrating this was somewhat tricky. Instead of inserting a probe into a network, the researchers studied its response to a series of text prompts. To check whether it was merely echoing color relationships from online references, they tried misdirecting the system by telling it that red is in fact green, like the old philosophical thought experiment in which one person's red is another person's green. Rather than parroting back an incorrect answer, the system's color evaluations changed appropriately in order to maintain the correct relations.

Picking up on the idea that in order to perform its autocorrection function, the system seeks the underlying logic of its training data, machine learning researcher Sébastien Bubeck of Microsoft Research suggests that the wider the range of the data, the more general the rules the system will discover. "Maybe we're seeing such a huge jump because we have reached a diversity of data, which is large enough that the only underlying principle to all of it is that intelligent beings produced them," he says. "And so the only way to explain all of this data is [for the model] to become intelligent."

In addition to extracting the underlying meaning of language, LLMs are able to learn on the fly. In the AI field, the term "learning" is usually reserved for the computationally intensive process in which developers expose the neural network to gigabytes of data and tweak its internal connections. By the time you type a query into ChatGPT, the network should be fixed; unlike humans, it should not continue to learn. So it came as a surprise that LLMs do, in fact, learn from their users' prompts, an ability known as in-context learning. "It's a different sort of learning that wasn't really understood to exist before," says Ben Goertzel, founder of the AI company SingularityNET.


One example of how an LLM learns comes from the way humans interact with chatbots such as ChatGPT. You can give the system examples of how you want it to respond, and it will obey. Its outputs are determined by the last several thousand words it has seen. What it does, given those words, is prescribed by its fixed internal connections, but the word sequence nonetheless offers some adaptability. Entire websites are devoted to "jailbreak" prompts that overcome the system's "guardrails" (restrictions that stop the system from telling users how to make a pipe bomb, for example), typically by directing the model to pretend to be a system without guardrails. Some people use jailbreaking for sketchy purposes, yet others deploy it to elicit more creative answers. "It will answer scientific questions, I would say, better" than if you just ask it directly, without the special jailbreak prompt, says William Hahn, co-director of the Machine Perception and Cognitive Robotics Laboratory at Florida Atlantic University. "It's better at scholarship."
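From the user's side, in-context learning looks like this in miniature: demonstrations are packed into the prompt, and the fixed-weight model infers the task from them alone. The helper below only builds such a few-shot prompt; the task and labels are invented for illustration.

```python
def build_few_shot_prompt(examples, query):
    """Concatenate input/output demonstrations ahead of the real query,
    so the model can infer the task from the prompt text alone."""
    blocks = [f"Input: {inp}\nOutput: {out}" for inp, out in examples]
    blocks.append(f"Input: {query}\nOutput:")
    return "\n\n".join(blocks)

# A made-up sentiment-labeling task the model was never trained on as such.
examples = [
    ("cheerful", "positive"),
    ("dreadful", "negative"),
    ("delightful", "positive"),
]
prompt = build_few_shot_prompt(examples, "miserable")
print(prompt)
# Nothing in the model's weights is updated; the demonstrations in the
# context window are the entirety of the "learning."
```

Passing this string to any chat model typically yields "negative"; the labeling rule is picked up entirely from the three demonstrations.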

Another type of in-context learning happens via "chain of thought" prompting, which means asking the network to spell out each step of its reasoning, a tactic that makes it do better at logic or arithmetic problems requiring multiple steps. (But one thing that made Millière's example so surprising is that the network found the Fibonacci number without any such coaching.)
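A chain-of-thought prompt differs from a direct one only in asking for, and often demonstrating, the intermediate steps. The two prompts below are illustrative examples, not drawn from the article.

```python
question = "A shop sells pens at 3 for $2. How much do 12 pens cost?"

# Direct prompt: the model must jump straight to an answer.
direct_prompt = f"Q: {question}\nA:"

# Chain-of-thought prompt: a worked example plus an invitation to
# reason step by step, which tends to improve multi-step arithmetic.
chain_of_thought_prompt = (
    "Q: A train travels 60 miles in 1.5 hours. What is its speed?\n"
    "A: Let's think step by step. Speed is distance divided by time. "
    "60 / 1.5 = 40. The answer is 40 mph.\n\n"
    f"Q: {question}\n"
    "A: Let's think step by step."
)
print(chain_of_thought_prompt)
```

The second form nudges the model to emit the intermediate computation (12 pens is 4 groups of 3, so 4 × $2) before the final answer, rather than guessing in one step.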

In 2022 a team at Google Research and the Swiss Federal Institute of Technology in Zurich (Johannes von Oswald, Eyvind Niklasson, Ettore Randazzo, João Sacramento, Alexander Mordvintsev, Andrey Zhmoginov and Max Vladymyrov) showed that in-context learning follows the same basic computational procedure as standard learning, known as gradient descent. This procedure was not programmed; the system discovered it without help. "It would need to be a learned skill," says Blaise Agüera y Arcas, a vice president at Google Research. In fact, he thinks LLMs may have other latent abilities that no one has discovered yet. "Every time we test for a new ability that we can quantify, we find it," he says.
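Gradient descent itself, the procedure the networks apparently rediscovered, is simple to state: repeatedly nudge parameters a small step against the gradient of the loss. A minimal sketch on toy data (all numbers synthetic), fitting a one-weight linear model:

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.normal(size=50)
y = 2.0 * x + 0.1 * rng.normal(size=50)   # data generated with true weight 2.0

w, lr = 0.0, 0.1                          # initial weight, learning rate
for _ in range(100):
    grad = np.mean(2 * (w * x - y) * x)   # d/dw of the mean squared error
    w -= lr * grad                        # one gradient-descent update

print(f"learned w = {w:.2f}")             # converges near the true value 2.0
```

Training an LLM runs this same loop over billions of weights; the surprise in the 2022 result was finding an analogue of these update steps emerging inside the model's own forward computation at prompt time.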


Although LLMs have enough blind spots not to qualify as artificial general intelligence, or AGI (the term for a machine that attains the resourcefulness of animal brains), these emergent abilities suggest to some researchers that tech companies are closer to AGI than even optimists had guessed. "They're indirect evidence that we are probably not that far off from AGI," Goertzel said in March at a conference on deep learning at Florida Atlantic University. OpenAI's plug-ins have given ChatGPT a modular architecture a little like that of the human brain. "Combining GPT-4 [the latest version of the LLM that powers ChatGPT] with various plug-ins might be a route toward a humanlike specialization of function," says M.I.T. researcher Anna Ivanova.

At the same time, though, researchers worry the window may be closing on their ability to study these systems. OpenAI has not divulged the details of how it designed and trained GPT-4, in part because it is locked in competition with Google and other companies, not to mention other countries. "Probably there's going to be less open research from industry, and things are going to be more siloed and organized around building products," says Dan Roberts, a theoretical physicist at M.I.T., who applies the techniques of his profession to understanding AI.

And this lack of transparency does not just harm researchers; it also hinders efforts to understand the social impacts of the rush to adopt AI technology. "Transparency about these models is the most important thing to ensure safety," Mitchell says.

Visit link:

How AI Knows Things No One Told It - Scientific American