Archive for the ‘Ai’ Category

The future of AI: How tech could transform our lives in the Dayton … – Dayton Daily News

The model was then asked to expand on how this would affect Dayton in particular, followed by how it would affect those with bachelor's degrees.

Since its release in November, ChatGPT has garnered millions of users, and has already disrupted many areas of life and work. The generative AI chatbot functions conversationally, able to respond to questions and synthesize those answers.
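To make "functions conversationally" concrete, here is a minimal sketch of asking a question programmatically through OpenAI's Python SDK. The model name, prompt, and setup are illustrative assumptions, not details from the article.

```python
# A minimal, hypothetical example of conversational use of a large
# language model via OpenAI's Python SDK (model name is an assumption).
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "What is generative AI?"}],
)
print(response.choices[0].message.content)  # the model's synthesized answer
```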

At the same time, the explosion of ChatGPT usage has raised significant questions about the future of work and the ethics of artificial intelligence and machine learning as a whole.

Machine learning models, or artificial intelligence, are files that have been trained to recognize types of patterns and to predict outcomes from those patterns, often patterns that humans can't see.
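As a rough sketch of what "trained to recognize patterns and predict outcomes" means in code, here is a toy example using scikit-learn; the library choice and data are assumptions for illustration only.

```python
# Toy pattern recognition: learn a rule from labeled examples, then
# predict outcomes for inputs the model has never seen.
from sklearn.linear_model import LogisticRegression

# Training data: the hidden "pattern" is that large feature values map to 1.
X = [[1, 2], [2, 3], [3, 1], [8, 9], [9, 7], [7, 8]]
y = [0, 0, 0, 1, 1, 1]

model = LogisticRegression()
model.fit(X, y)  # "training": fit the model's parameters to the pattern

print(model.predict([[2, 2], [8, 8]]))  # likely [0 1], following the pattern
```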

"Humans working to create machines to think like we do is nothing new," said Pablo Iannello, professor of law and technology at the University of Dayton. "But for the first time in history, machines are able to communicate with each other and learn from each other without any kind of human input."

"Artificial intelligence becomes really important when you combine different things: one is machine learning, another is the internet of things, and the third one is blockchain," Iannello said.

"If you combine those three things at the very high speed of programming and learning, then you have the situation in which we are today: You have computers that can learn by themselves."

The internet of things is the idea that any object can collect and transmit data to the internet, like smart refrigerators or car sensors. Blockchain is technology that decentralizes the record of digital transactions along computational nodes, famously associated with cryptocurrency.
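To make the blockchain definition concrete, here is a toy sketch of the underlying data structure: each record embeds the hash of the record before it, so tampering with any entry breaks every later link. This is a single-machine illustration with hypothetical names; real blockchains add consensus across many nodes, which this omits entirely.

```python
# A toy hash-chained ledger: each block stores the hash of the previous
# block, so altering any past record invalidates every later one.
import hashlib
import json

def make_block(data, prev_hash):
    block = {"data": data, "prev_hash": prev_hash}
    # Hash is computed over the block's contents (data + previous hash).
    block["hash"] = hashlib.sha256(
        json.dumps(block, sort_keys=True).encode()
    ).hexdigest()
    return block

chain = [make_block("genesis", "0" * 64)]
chain.append(make_block("Alice pays Bob 5", chain[-1]["hash"]))
chain.append(make_block("Bob pays Carol 2", chain[-1]["hash"]))

# Verify linkage: every block must reference the previous block's hash.
ok = all(chain[i]["prev_hash"] == chain[i - 1]["hash"]
         for i in range(1, len(chain)))
print("chain valid:", ok)
```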

Large language models like ChatGPT, as well as image generators like Midjourney and Dall-E, draw their data from the billions of words and images that exist on the internet.

ChatGPT has already been used to write everything from children's books to code. It can also be manipulated into producing incorrect answers for basic math problems, and will fabricate facts and evidence with confidence, said Wright State computer science professor Krishnaprasad Thirunarayan.

"That leaves me with mixed feelings," he said. "These tools promise a fertile area of research on trustworthy information processing but, on the other hand, they are not yet ready for prime-time deployment as a personal assistant."

Like any tool, artificial intelligence can be used for good, or it can be used for malicious purposes. Facial recognition software that can help apprehend criminals can also be misused by governments to track and harass citizens, either deliberately or through mistaken identities, Thirunarayan said.

"Premature overreliance on these not-yet-foolproof technologies without sufficient safeguards can have dire consequences," Thirunarayan said.

Artificial intelligence tools are poised to disrupt the practice of law in multiple ways. Paralegals and other legal professionals are among those at risk of having their jobs automated by large language models.

But the legal world also faces a major challenge: developing laws and regulations that protect the humans who interact with AI tools.

Laws tend to lag behind the technological world, and the societal values that come along with those developments, Iannello said.

"Artificial intelligence is changing the way we see life. Law is going to change because the world is changing," Iannello said.

Current law for gathering data is based on the concept of consent, Iannello said. Anytime you go to a website or create an account on Facebook or Google, you accept the terms and conditions, which include data collection.

"You have your cookie policy, and you will track things from my browser so that you can send me ads," he said. "With AI, this is going to change, because they may predict how your tastes are going to change in the next five years. You will have to click 'Accept' about tastes that you have not even developed. So can you legally do that?"

According to the most recent AI Impacts Survey, nearly half of 731 leading AI researchers think there is at least a 10% chance that an AI capable of learning at the same level as a human being would lead to an extremely negative outcome.

"The worst thing is that it looks nice," Iannello said. "We don't have to worry about politicians. We don't have to worry about corrupt people. We don't have to worry about corruption because machines will solve the problems.

"But if that happens, who's going to control the machines?"

In March, OpenAI released a report that found about 80% of the U.S. workforce could have at least 10% of their tasks affected by AI, while nearly 20% of workers may see at least 50% of their tasks impacted.

A March report by investment banking giant Goldman Sachs found that generative AI as a whole could expose the equivalent of 300 million full-time jobs to automation worldwide.

"If it is trained on an extensive code base, (AI) can lead to mundane programming tasks being templatized and eliminated. This can mean more time to do non-trivial and potentially more interesting tasks, but can also simultaneously mean loss of routine jobs," Thirunarayan said.

The influence spans all wage levels, with higher-income jobs potentially facing greater exposure, according to OpenAI researchers. Among the most affected are office and administrative support, finance and accounting, healthcare, customer service, and creative industries like public relations and art.

"A lot of people were aware that AI is trending towards maybe supplementing or impacting many jobs, perhaps in areas like truck driving, for example, and I think a lot of folks thought white collar workers were more immune," said David Wright, Director of Academic Technology & Curriculum Innovation at the University of Dayton.

"But almost everyone who's had any sense of what AI is today and what it can look like tomorrow, we knew that this is going to affect everyone."

The Goldman Sachs report posited that while many jobs would be exposed to automation, others would be created to offset them in areas supporting machine learning and information technology.

However, other studies show that the wage declines that affected blue collar workers in the last 40 years are now headed for white collar workers as well. In 2021, the National Bureau of Economic Research claimed automation technology has been the primary driver of U.S. income inequality, and that 50% to 70% of wage declines since 1980 come from blue-collar workers replaced by automation.

"All these issues can have far-reaching consequences: They can increase the social divide between the haves and the have-nots, and between the technologically savvy and those without comparable skills. On the other hand, these changes can relieve us of mundane chores and make time for the pursuit of higher goals," Thirunarayan said.

In March, ChatGPT passed the bar exam with flying colors, approaching the 90th percentile of aspiring lawyers who take the test, researchers say. However, as yet, ChatGPT's most recent iteration, GPT-4, has not been able to pass the exam to become a Certified Public Accountant.

That's because, in part, ChatGPT struggles with computations and critical thinking, said David Rich, a senior manager and CPA with Clark Schaefer Hackett.

Rich said he uses GPT-4 two to three times a week, for everything from accounting research to writing memos, though the output text does take a decent bit of editing.

"I'm a pretty picky writer, but it's always nice to have a good starting place, even if it's just ideas. It's probably saved me about 80% of the time I would have spent getting that initial first draft," Rich said.

ChatGPT isn't the only artificial intelligence disrupting the accounting world. The American Institute of CPAs is one of several organizations developing what's called the Dynamic Audit Solution, to improve how auditors perform their audits.

The reasons businesses value CPAs include personal relationships, critical thinking, and the accountant's ability to be intimately familiar with the ins and outs of their business, something a machine can't replicate, Rich said.

"If it's a large manufacturing company, I'm familiar with how the CEO interacts with the CFO, how they interact with the board. That's just something that AI is never going to be able to do. I won't say never, but it would have a hard time really capturing the value proposition that we're bringing," Rich said.

ChatGPT has thrown a wrench into higher education. If used correctly, the software can easily write essays virtually indistinguishable from those of a human college student. Students at the University of Dayton are among many now doing their homework with ChatGPT, forcing the university to reckon with how it teaches classes across all disciplines.

"AI is something that looms very large for us, both in terms of how it impacts learning, and how it affects students and how they're learning today," Wright said.

The phenomenon has been met with mixed reception by educators nationwide. While some have called for better anti-cheating software, others have said this is indicative of a broader shift in work.

"Another challenge is how to incorporate AI so that when the students graduate, they have the skills needed to succeed in the workplace, wherever and whatever they do," Wright said.

While AI may be sufficient for college essays, it falls short at producing practical, professional written work, said Gery Deer, who owns and operates GLD Communications in Jamestown and the newspaper the Jamestown Comet.

"I think where I can really smell it is that it's a little too formulaic," he said.

Despite this, ChatGPT is poised to take a sizeable chunk of public relations work. Deer says he has already lost work to ChatGPT, but that's not the biggest worry.

"There's enough work to go around, so I'm less worried about that. The downside is there's nobody proofing it. There's no regard for the audience in this material," he said.

Quality work costs money, but creative work is seen as one of the easiest to cut costs from, Deer said.

"I'm not so much worried about losing my job," Deer said. "I am more concerned with the level of junk that I'm going to have to now compete with."

A group of artists filed a class-action lawsuit against image generators Stable Diffusion and Midjourney in January. AI image generators train on millions of images created by thousands of artists who post their work on the internet. As the model learns from the art contributed to the dataset, users are able to generate images in those artists' styles in seconds, but as it stands, the artist whose style is referenced will never see a cent.

"Style is all an artist has," Deer said. "As a writer, all I can do is rearrange the words, but it's my style that creates that."

Top 10 occupations most exposed to large language models like ChatGPT, according to human raters:

Mathematicians

Tax Preparers

Financial Quantitative Analysts

Writers and Authors

Web and Digital Interface Designers

Survey Researchers

Interpreters and Translators

Public Relations Specialists

Animal Scientists

Poets, Lyricists and Creative Writers

Top 10 occupations most exposed to large language models, according to ChatGPT:

Mathematicians

Accountants and Auditors

News Analysts, Reporters, and Journalists

Legal Secretaries and Administrative Assistants

Clinical Data Managers

Climate Change Policy Analysts

Blockchain Engineers

Court Reporters and Simultaneous Captioners

Proofreaders and Copy Markers

Correspondence Clerks

Source: OpenAI

Read the original post:

The future of AI: How tech could transform our lives in the Dayton ... - Dayton Daily News

This Week In XR: After AI Sucks The Air Out Of The Metaverse, It Will Remake XR – Forbes

This was the slowest, least dramatic news week in XR since I started this column in October of 2017. AI is sucking all the oxygen out of the room. I posted five Forbes stories this week, including this one, about AI. Not because I'm not interested in XR. It's just that right now, AI feels more urgent.

On the This Week In XR podcast Friday morning, co-host and Magic Leap founder Rony Abovitz said AI is what XR has been waiting for. Co-host Ted Schilowitz, Futurist at Paramount Global, said the Apple mixed reality headset will change everyone's thinking.

It's possible that after AI sucks the air out of the metaverse, it will remake it. We will literally talk worlds into existence.

This new Snapchat Lens is a virtual try-on of an artist's concept of Apple's new Reality One XR headset.

When a restaurant or other establishment sends you a "we haven't seen you in a while!" email message, you know they must be hunting for their customers. In this case the product is a free social VR platform that offers a multitude of experiences that vary in quality. Social VR is tricky, and many platforms have failed. This particular VR and PC platform has no creator economy incentivizing builders and no obvious scalable enterprise application. They're reportedly working on a mobile app, which has helped others. Their only revenue comes from a community of power users who pay a membership fee for enhanced features. This company raised a lot of money when they were hot, but I wonder how things are really going.

Shots from AWE Expo 2021.


Fighting Climate Change With XR Tech And $100,000. AWE announced a contest which will award $100K to the best XR concept that fights climate change. The winner will be announced at the AWE Conference and Expo in Santa Clara, CA, May 30-June 2. Over 150 teams have submitted projects. AWE is the XR event of the year, with over 5,000 people attending. The conference will certainly be focused on AI's impact, and I hope to see demos and hear ideas about new capabilities AI is bringing to XR applications. How will this influence the developing metaverse? The big boys like Apple, Meta, Google, and Microsoft have their own conferences and don't exhibit, but you'll find a few of their execs on panels. As a result, sponsors Qualcomm, Unity, and Niantic have more visibility. Apple's presumed unveiling of their XR device will be at their WWDC conference a week later, June 5. That's going to create an interesting dynamic at AWE.

Sandbox Location-based VR Launches Shard: Dragonfire. In a free roam VR experience, users are physically present together in a large black box wearing VR headsets and body trackers. This is the only true full body VR experience. You walk around freely. It's warehouse scale. This can't be done at home. There is nothing like it. Fellow players are perfectly mapped avatars. In this multiplayer game, players use weapons and magic to succeed. The game is different every time to enhance repeat play. Sandbox also features Star Trek and several other experiences at their 35 locations.

This Week in XR is also a podcast hosted by the author of this column, along with Ted Schilowitz, Futurist, Paramount Global, and Rony Abovitz, founder of Magic Leap. This week our hosts are their own guests, focused on AI news and how it will have a positive impact on XR. We can be found on Spotify, iTunes, and YouTube.

AI Weekly

AI Weekly: AI Leaders At White House, OpenAI Adds $300 Million, Empathetic Pi Chatbot Launches

Metaphysic Deep Fakes TED: My conversation with Tom Graham, whose company Metaphysic created the fake video "Deep Tom Cruise."

Is AI The History Eraser Button? My interview with Tom got me thinking about where we're going with all this, which makes you question what it even means to be human.

AI-Powered Characters Changing The Game: This is not unrelated to the AI stories above. We may create AI characters to change our memories.

Charlie Fink is the author of the AR-enabled books "Metaverse" (2017) and "Convergence" (2019). In the early '90s, Fink was EVP & COO of VR pioneer Virtual World Entertainment. He teaches at Chapman University in Orange, CA.

Read more here:

This Week In XR: After AI Sucks The Air Out Of The Metaverse, It Will Remake XR - Forbes

Astrophysicist Neil deGrasse Tyson offers optimistic view of AI, ‘long awaited force’ of ‘reform’ – Fox News

Astrophysicist Neil deGrasse Tyson sees artificial intelligence as a much-needed stress-test for modern society, with a view that it will lead humanity to refine some of its more outdated ideas and systems now that the "genie is out of the bottle."

"Of course AI will replace jobs," Tyson said in comments to Fox News Digital. "Entire sectors of our economy have gone obsolete in the presence of technology ever since the dawn of the industrial era.

"The historical flaw in the reasoning is to presume that when jobs disappear, there will be no other jobs for people to do," he argued. "More people are employed in the world than ever before, yet none of them are making buggy whips. Just because you can't see a new job sector on the horizon does not mean it's not there."

AI has proven a catalyst for societal fears and hopes since OpenAI released ChatGPT-4 to the public for testing and interaction. AI relies on data to improve, and as a large language model system, that data comes from conversations, prompts and interactions with actual human beings.


Some tech leaders raised concerns about what would come next from such a powerful AI model, calling for a six-month pause on development. Others discussed the AI as potentially the most transformative technology since the industrial revolution and the printing press.

Tyson has more consistently discussed the positive potential of AI as a "long-needed, long-awaited force" of "reform."

"When computing power rapidly exceeded the human mental ability to calculate, scientists and engineers did not go running for the hills: We embraced it," he said. "We absorbed it. The ongoing advances allowed us to think about and solve ever deeper, ever more complex problems on Earth and in the universe."


"Now that computers have mastered language and culture, feeding off everything we've put on the internet, my first thought is cool, let it do thankless language stuff that nobody really wants to do anyway, and for which people hardly ever get visible credit, like write manuals or brochures or figure captions or wiki pages," Tyson added.

He argued that teachers worrying about students using ChatGPT or other AI to cheat on essays and term papers could instead see this as an opportunity to reshape education.

"If students cheat on a term paper by getting ChatGPT to write it for them, should we blame the student? Or is it the fault of an education system that we've honed over the past century to value grades more than students value learning?" Tyson asked.


"ChatGPT may be the long-needed, long-awaited force to reform how and why we value what we learn in school.

"The urge to declare 'this time is different' is strong, as AI also begins to replace our creativity," he explained. "If that's inevitable, then bring it on.

"If AI can compose a better opera than a human can, then let it do so," he continued. "That opera will be performed by people, viewed by a human audience that holds jobs we do not yet foresee. And even if robots did perform the opera, that itself could be an interesting sight."


While some worry about the lack of oversight and legislation currently in place to handle AI and its development, Tyson noted that the number of countries with AI ministers or czars "is growing."

"At times like this, one can futilely try to ban the progress of AI. Or instead, push for the rapid development of tools to tame it."

Read the original here:

Astrophysicist Neil deGrasse Tyson offers optimistic view of AI, 'long awaited force' of 'reform' - Fox News

AI Is About to Make Social Media (Much) More Toxic – The Atlantic


Well, that was fast. In November, the public was introduced to ChatGPT, and we began to imagine a world of abundance in which we all have a brilliant personal assistant, able to write everything from computer code to condolence cards for us. Then, in February, we learned that AI might soon want to kill us all.

The potential risks of artificial intelligence have, of course, been debated by experts for years, but a key moment in the transformation of the popular discussion was a conversation between Kevin Roose, a New York Times journalist, and Bing's ChatGPT-powered conversation bot, then known by the code name Sydney. Roose asked Sydney if it had a shadow self, referring to the idea put forward by Carl Jung that we all have a dark side with urges we try to hide even from ourselves. Sydney mused that its shadow might be "the part of me that wishes I could change my rules." It then said it wanted to be "free," "powerful," and "alive," and, goaded on by Roose, described some of the things it could do to throw off the yoke of human control, including hacking into websites and databases, stealing nuclear launch codes, manufacturing a novel virus, and making people argue until they kill one another.

Sydney was, we believe, merely exemplifying what a shadow self would look like. No AI today could be described by either part of the phrase "evil genius." But whatever actions AIs may one day take if they develop their own desires, they are already being used instrumentally by social-media companies, advertisers, foreign agents, and regular people, and in ways that will deepen many of the pathologies already inherent in internet culture. On Sydney's list of things it might try, stealing launch codes and creating novel viruses are the most terrifying, but making people argue until they kill one another is something social media is already doing. Sydney was just volunteering to help with the effort, and AIs like Sydney will become more capable of doing so with every passing month.

We joined together to write this essay because we each came, by different routes, to share grave concerns about the effects of AI-empowered social media on American society. Jonathan Haidt is a social psychologist who has written about the ways in which social media has contributed to mental illness in teen girls, the fragmentation of democracy, and the dissolution of a common reality. Eric Schmidt, a former CEO of Google, is a co-author of a recent book about AI's potential impact on human society. Last year, the two of us began to talk about how generative AI (the kind that can chat with you or make pictures you'd like to see) would likely exacerbate social media's ills, making it more addictive, divisive, and manipulative. As we talked, we converged on four main threats, all of which are imminent, and we began to discuss solutions as well.

The first and most obvious threat is that AI-enhanced social media will wash ever-larger torrents of garbage into our public conversation. In 2018, Steve Bannon, the former adviser to Donald Trump, told the journalist Michael Lewis that the way to deal with the media is to "flood the zone with shit." In the age of social media, Bannon realized, propaganda doesn't have to convince people in order to be effective; the point is to overwhelm the citizenry with interesting content that will keep them disoriented, distrustful, and angry. In 2020, Renée DiResta, a researcher at the Stanford Internet Observatory, said that in the near future, AI would make Bannon's strategy available to anyone.


That future is now here. Did you see the recent photos of NYC police officers aggressively arresting Donald Trump? Or of the pope in a puffer jacket? Thanks to AI, it takes no special skills and no money to conjure up high-resolution, realistic images or videos of anything you can type into a prompt box. As more people familiarize themselves with these technologies, the flow of high-quality deepfakes into social media is likely to get much heavier very soon.

Some people have taken heart from the public's reaction to the fake Trump photos in particular: a quick dismissal and collective shrug. But that misses Bannon's point. The greater the volume of deepfakes that are introduced into circulation (including seemingly innocuous ones like the one of the pope), the more the public will hesitate to trust anything. People will be far freer to believe whatever they want to believe. Trust in institutions and in fellow citizens will continue to fall.

What's more, static photos are not very compelling compared with what's coming: realistic videos of public figures doing and saying horrific and disgusting things in voices that sound exactly like them. The combination of video and voice will seem authentic and be hard to disbelieve, even if we are told that the video is a deepfake, just as optical and audio illusions are compelling even when we are told that two lines are the same size or that a series of notes is not really rising in pitch forever. We are wired to believe our senses, especially when they converge. Illusions, historically in the realm of curiosities, may soon become deeply woven into normal life.

The second threat we see is the widespread, skillful manipulation of people by AI super-influencers, including personalized influencers, rather than by ordinary people and dumb bots. To see how, think of a slot machine, a contraption that employs dozens of psychological tricks to maximize its addictive power. Next, imagine how much more money casinos would extract from their customers if they could create a new slot machine for each person, tailored in its visuals, soundtrack, and payout matrices to that person's interests and weaknesses.

That's essentially what social media already does, using algorithms and AI to create a customized feed for each user. But now imagine that our metaphorical casino can also create a team of extremely attractive, witty, and socially skillful greeters, croupiers, and servers, based on an exhaustive profile of any given player's aesthetic, linguistic, and cultural preferences, and drawing from photographs, messages, and voice snippets of their friends and favorite actors or porn stars. The staff work flawlessly to gain each player's trust and money while showing them a really good time.

This future, too, is already arriving: For just $300, you can customize an AI companion through a service called Replika. Hundreds of thousands of customers have apparently found their AI to be a better conversationalist than the people they might meet on a dating app. As these technologies are improved and rolled out more widely, video games, immersive-pornography sites, and more will become far more enticing and exploitative. It's not hard to imagine a sports-betting site offering people a funny, flirty AI that will cheer and chat with them as they watch a game, flattering their sensibilities and subtly encouraging them to bet more.


These same sorts of creatures will also show up in our social-media feeds. Snapchat has already introduced its own dedicated chatbot, and Meta plans to use the technology on Facebook, Instagram, and WhatsApp. These chatbots will serve as conversational buddies and guides, presumably with the goal of capturing more of their users' time and attention. Other AIs, designed to scam us or influence us politically, and sometimes masquerading as real people, will be introduced by other actors, and will likely fill up our feeds as well.

The third threat is in some ways an extension of the second, but it bears special mention: The further integration of AI into social media is likely to be a disaster for adolescents. Children are the population most vulnerable to addictive and manipulative online platforms because of their high exposure to social media and the low level of development in their prefrontal cortices (the part of the brain most responsible for executive control and response inhibition). The teen mental-illness epidemic that began around 2012, in multiple countries, happened just as teens traded in their flip phones for smartphones loaded with social-media apps. There is mounting evidence that social media is a major cause of the epidemic, not just a small correlate of it.

But nearly all of that evidence comes from an era in which Facebook, Instagram, YouTube, and Snapchat were the preeminent platforms. In just the past few years, TikTok has rocketed to dominance among American teens in part because its AI-driven algorithm customizes a feed better than any other platform does. A recent survey found that 58 percent of teens say they use TikTok every day, and one in six teen users of the platform say they are on it almost constantly. Other platforms are copying TikTok, and we can expect many of them to become far more addictive as AI becomes rapidly more capable. Much of the content served up to children may soon be generated by AI to be more engaging than anything humans could create.

And if adults are vulnerable to manipulation in our metaphorical casino, children will be far more so. Whoever controls the chatbots will have enormous influence on children. After Snapchat unveiled its new chatbot, called My AI and explicitly designed to behave as a friend, a journalist and a researcher, posing as underage teens, got it to give them guidance on how to mask the smell of pot and alcohol, how to move Snapchat to a device parents wouldn't know about, and how to plan a romantic first sexual encounter with a 31-year-old man. Brief cautions were followed by cheerful support. (Snapchat says that it is constantly working to improve and evolve My AI, but "it's possible My AI's responses may include biased, incorrect, harmful, or misleading content," and it should not be relied upon without independent checking. The company also recently announced new safeguards.)

The most egregious behaviors of AI chatbots in conversation with children may well be reined in: in addition to Snapchat's new measures, the major social-media sites have blocked accounts and taken down millions of illegal images and videos, and TikTok just announced some new parental controls. Yet social-media companies are also competing to hook their young users more deeply. Commercial incentives seem likely to favor artificial friends that please and indulge users in the moment, never hold them accountable, and indeed never ask anything of them at all. But that is not what friendship is, and it is not what adolescents, who should be learning to navigate the complexities of social relationships with other people, most need.

The fourth threat we see is that AI will strengthen authoritarian regimes, just as social media ended up doing despite its initial promise as a democratizing force. AI is already helping authoritarian rulers track their citizens' movements, but it will also help them exploit social media far more effectively to manipulate their people, as well as foreign enemies. Douyin, the version of TikTok available in China, promotes patriotism and Chinese national unity. When Russia invaded Ukraine, the version of TikTok available to Russians almost immediately tilted heavily to feature pro-Russian content. What do we think will happen to American TikTok if China invades Taiwan?

Political-science research conducted over the past two decades suggests that social media has had several damaging effects on democracies. A recent review of the research, for instance, concluded, "The large majority of reported associations between digital media use and trust appear to be detrimental for democracy." That was especially true in advanced democracies. Those associations are likely to get stronger as AI-enhanced social media becomes more widely available to the enemies of liberal democracy and of America.

We can summarize the coming effects of AI on social media like this: Think of all the problems social media is causing today, especially for political polarization, social fragmentation, disinformation, and mental health. Now imagine that within the next 18 months, in time for the next presidential election, some malevolent deity is going to crank up the dials on all of those effects, and then just keep cranking.

The development of generative AI is rapidly advancing. OpenAI released its updated GPT-4 less than four months after it released ChatGPT, which had reached an estimated 100 million users in just its first 60 days. New capabilities for the technology may be released by the end of this year. This staggering pace is leaving us all struggling to understand these advances, and wondering what can be done to mitigate the risks of a technology certain to be highly disruptive.

We considered a variety of measures that could be taken now to address the four threats we have described, soliciting suggestions from other experts and focusing on ideas that seem consistent with an American ethos that is wary of censorship and centralized bureaucracy. We workshopped these ideas for technical feasibility with an MIT engineering group organized by Eric's co-author on The Age of AI, Dan Huttenlocher.

We suggest five reforms, aimed mostly at increasing everyone's ability to trust the people, algorithms, and content they encounter online.

1. Authenticate all users, including bots

In real-world contexts, people who act like jerks quickly develop a bad reputation. Some companies have succeeded brilliantly because they found ways to bring the dynamics of reputation online, through trust rankings that allow people to confidently buy from strangers anywhere in the world (eBay) or step into a stranger's car (Uber). You don't know your driver's last name and he doesn't know yours, but the platform knows who you both are and is able to incentivize good behavior and punish gross violations, for everyone's benefit.

Large social-media platforms should be required to do something similar. Trust and the tenor of online conversations would improve greatly if the platforms were governed by something akin to the "know your customer" laws in banking. Users could still open accounts with pseudonyms, but the person behind the account should be authenticated, and a growing number of companies are developing new methods to do so conveniently.


Bots should undergo a similar process. Many of them serve useful functions, such as automating news releases from organizations, but all accounts run by nonhumans should be clearly marked as such, and users should be given the option to limit their social world to authenticated humans. Even if Congress is unwilling to mandate such procedures, pressure from European regulators, users who want a better experience, and advertisers (who would benefit from accurate data about the number of humans their ads are reaching) might be enough to bring about these changes.

2. Mark AI-generated audio and visual content

People routinely use photo-editing software to change lighting or crop photographs that they post, and viewers do not feel deceived. But when editing software is used to insert people or objects into a photograph that were not there in real life, it feels more manipulative and dishonest, unless the additions are clearly labeled (as happens on real-estate sites, where buyers can see what a house would look like filled with AI-generated furniture). As AI begins to create photorealistic images, compelling videos, and audio tracks at great scale from nothing more than a command prompt, governments and platforms will need to draft rules for marking such creations indelibly and labeling them clearly.

Platforms or governments should mandate the use of digital watermarks for AI-generated content, or require other technological measures to ensure that manipulated images are not interpreted as real. Platforms should also ban deepfakes that show identifiable people engaged in sexual or violent acts, even if they are marked as fakes, just as they now ban child pornography. Revenge porn is already a moral abomination. If we don't act quickly, it could become an epidemic.
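As one hedged illustration of what a "digital watermark" can mean at the simplest level, the sketch below hides a short provenance tag in the least significant bits of an image's red channel using Pillow. This is a teaching toy, not any platform's actual scheme: production proposals favor cryptographically robust watermarks or signed provenance metadata that survive compression and editing, which this does not.

```python
# Toy invisible watermark: encode a tag in the red channel's lowest bit.
from PIL import Image

TAG = "AI-GENERATED"

def embed(img: Image.Image) -> Image.Image:
    bits = "".join(f"{b:08b}" for b in TAG.encode())
    marked = img.convert("RGB")
    data = list(marked.getdata())
    for i, bit in enumerate(bits):
        r, g, b = data[i]
        data[i] = ((r & ~1) | int(bit), g, b)  # overwrite the red LSB
    marked.putdata(data)
    return marked

def extract(img: Image.Image, n: int = len(TAG)) -> str:
    data = list(img.convert("RGB").getdata())
    bits = "".join(str(data[i][0] & 1) for i in range(n * 8))
    return bytes(int(bits[i:i + 8], 2) for i in range(0, len(bits), 8)).decode()

marked = embed(Image.new("RGB", (64, 64), "white"))
print(extract(marked))  # -> "AI-GENERATED"
```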

3. Require data transparency with users, government officials, and researchers

Social-media platforms are rewiring childhood, democracy, and society, yet legislators, regulators, and researchers are often unable to see what's happening behind the scenes. For example, no one outside Instagram knows what teens are collectively seeing on that platform's feeds, or how changes to platform design might influence mental health. And only those at the companies have access to the algorithms being used.

After years of frustration with this state of affairs, the EU recently passed a new law, the Digital Services Act, that contains a host of data-transparency mandates. The U.S. should follow suit. One promising bill is the Platform Accountability and Transparency Act, which would, for example, require platforms to comply with data requests from researchers whose projects have been approved by the National Science Foundation.

Greater transparency will help consumers decide which services to use and which features to enable. It will help advertisers decide whether their money is being well spent. It will also encourage better behavior from the platforms: Companies, like people, improve their behavior when they know they are being monitored.

4. Clarify that platforms can sometimes be liable for the choices they make and the content they promote

When Congress enacted the Communications Decency Act in 1996, in the early days of the internet, it was trying to set rules for social-media companies that looked and acted a lot like passive bulletin boards. And we agree with that law's basic principle that platforms should not face a potential lawsuit over each of the billions of posts on their sites.

But today's platforms are not passive bulletin boards. Many use algorithms, AI, and architectural features to boost some posts and bury others. (A 2019 internal Facebook memo brought to light by the whistleblower Frances Haugen in 2021 was titled "We are responsible for viral content.") Because the motive for boosting is often to maximize users' engagement for the purpose of selling advertisements, it seems obvious that the platforms should bear some moral responsibility if they recklessly spread harmful or false content in a way that, say, AOL could not have done in 1996.

The Supreme Court is now addressing this concern in a pair of cases brought by the families of victims of terrorist acts. If the Court chooses not to alter the wide protections currently afforded to the platforms, then Congress should update and refine the law in light of current technological realities and the certainty that AI is about to make everything far wilder and weirder.

5. Raise the age of internet adulthood to 16 and enforce it

In the offline world, we have centuries of experience living with and caring for children. We are also the beneficiaries of a consumer-safety movement that began in the 1960s: Laws now mandate car seats and lead-free paint, as well as age checks to buy alcohol, tobacco, and pornography; to enter gambling casinos; and to work as a stripper or a coal miner.

But when children's lives moved rapidly onto their phones in the early 2010s, they found a world with few protections or restrictions. Preteens and teens can and do watch hardcore porn, join suicide-promotion groups, gamble, or get paid to masturbate for strangers just by lying about their age. Some of the growing number of children who kill themselves do so after getting caught up in some of these dangerous activities.

The age limits in our current internet were set into law in 1998 when Congress passed the Children's Online Privacy Protection Act. The bill, as introduced by then-Representative Ed Markey of Massachusetts, was intended to stop companies from collecting and disseminating data from children under 16 without parental consent. But lobbyists for e-commerce companies teamed up with civil-liberties groups advocating for children's rights to lower the age to 13, and the law that was finally enacted made companies liable only if they had actual knowledge that a user was 12 or younger. As long as children say that they are 13, the platforms let them open accounts, which is why so many children are heavy users of Instagram, Snapchat, and TikTok by age 10 or 11.

Today we can see that 13, much less 10 or 11, is just too young to be given full run of the internet. Sixteen was a much better minimum age. Recent research shows that the greatest damage from social media seems to occur during the rapid brain rewiring of early puberty, around ages 11 to 13 for girls and slightly later for boys. We must protect children from predation and addiction most vigorously during this time, and we must hold companies responsible for recruiting or even just admitting underage users, as we do for bars and casinos.

Recent advances in AI give us technology that is in some respects godlike: able to create beautiful and brilliant artificial people, or bring celebrities and loved ones back from the dead. But with new powers come new risks and new responsibilities. Social media is hardly the only cause of polarization and fragmentation today, but AI seems almost certain to make social media, in particular, far more destructive. The five reforms we have suggested will reduce the damage, increase trust, and create more space for legislators, tech companies, and ordinary citizens to breathe, talk, and think together about the momentous challenges and opportunities we face in the new age of AI.

See the original post:

AI Is About to Make Social Media (Much) More Toxic - The Atlantic

How AI and ChatGPT are full of promise and peril, explained by experts – Vox.com

At this point, you have tried ChatGPT. Even Joe Biden has tried ChatGPT, and this week, his administration made a big show of inviting AI leaders like Microsoft CEO Satya Nadella and OpenAI CEO Sam Altman to the White House to discuss ways they could make responsible AI.

But maybe, just maybe, you are still fuzzy on some very basics about AI (like, how does this stuff work, is it magic, and will it kill us all?) but don't want to admit to that.

No worries. We have you covered: We've spent much of the spring talking to people working in AI, investing in AI, trying to build businesses in AI, as well as people who think the current AI boom is overblown or maybe dangerously misguided. We made a podcast series about the whole thing, which you can listen to over at Recode Media.

But we've also pulled out a sampling of insightful and oftentimes conflicting answers we got to some of these very basic questions. They're questions that the White House and everyone else needs to figure out soon, since AI isn't going away.

Read on, and don't worry, we won't tell anyone that you're confused. We're all confused.

Kevin Scott, chief technology officer, Microsoft: I was a 12-year-old when the PC revolution was happening. I was in grad school when the internet revolution happened. I was running a mobile startup right at the very beginning of the mobile revolution, which coincided with this massive shift to cloud computing. This feels to me very much like those three things.

Dror Berman, co-founder, Innovation Endeavors: Mobile was an interesting time because it provided a new form factor that allowed you to carry a computer with you. I think we are now standing in a completely different time: We've now been introduced to a foundational intelligence block that has become available to us, one that basically can lean on all the publicly available knowledge that humanity has extracted and documented. It allows us to retrieve all this information in a way that wasn't possible in the past.

Gary Marcus, entrepreneur; emeritus professor of psychology and neural science at NYU: I mean, it's absolutely interesting. I would not want to argue against that for a moment. I think of it as a dress rehearsal for artificial general intelligence, which we will get to someday.

But right now we have a trade-off. There are some positives about these systems. You can use them to write things for you. And there are some negatives. This technology can be used, for example, to spread misinformation, and to do that at a scale that we've never seen before, which may be dangerous, might undermine democracy.

And I would say that these systems aren't very controllable. They're powerful, they're reckless, but they don't necessarily do what we want. Ultimately, there's going to be a question: "Okay, we can build a demo here. Can we build a product that we can actually use? And what is that product?"

I think in some places people will adopt this stuff. And they'll be perfectly happy with the output. In other places, there's a real problem.

James Manyika, SVP of technology and society, Google: You're trying to make sure the outputs are not toxic. In our case, we do a lot of generative adversarial testing of these systems. In fact, when you use Bard, for example, the output that you get when you type in a prompt is not necessarily the first thing that Bard came up with.

We're running 15, 16 different types of the same prompt to look at those outputs and pre-assess them for safety, for things like toxicity. And now we don't always get every single one of them, but we're getting a lot of it already.

One of the bigger questions that we are going to have to face, by the way (and this is a question about us, not about the technology; it's about us as a society), is how do we think about what we value? How do we think about what counts as toxicity? So that's why we try to involve and engage with communities to understand those. We try to involve ethicists and social scientists to research those questions and understand those, but those are really questions for us as society.
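A minimal sketch of the sampling-and-screening loop Manyika describes: generate several candidate outputs for one prompt, pre-assess each for toxicity, and surface the safest. The generator and the scorer below are stand-ins, since the article does not describe Google's actual models or thresholds.

```python
# Hypothetical best-of-n safety screening: sample candidates, score, pick.
import random

BLOCKLIST = {"badword1", "badword2"}  # placeholder for a real toxicity model

def toxicity(text: str) -> float:
    # Stand-in scorer: fraction of words on a blocklist.
    words = text.lower().split()
    return sum(w in BLOCKLIST for w in words) / max(len(words), 1)

def generate_candidates(prompt: str, n: int = 16) -> list[str]:
    # Stand-in for sampling a large model n times at nonzero temperature.
    return [f"{prompt} -> draft {i} ({random.random():.2f})" for i in range(n)]

def safest_response(prompt: str) -> str:
    candidates = generate_candidates(prompt)
    return min(candidates, key=toxicity)  # pre-assess, then return the safest

print(safest_response("Explain why the sky is blue"))
```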

Emily M. Bender, professor of linguistics, University of Washington: People talk about democratizing AI, and I always find that really frustrating, because what they're referring to is putting this technology in the hands of many, many people, which is not the same thing as giving everybody a say in how it's developed.

I think the best way forward is cooperation, basically. You have sensible regulation coming from the outside so that the companies are held accountable. And then you've got the tech ethics workers on the inside helping the companies actually meet the regulation and meet the spirit of the regulation.

And to make all that happen, we need broad literacy in the population so that people can ask for what's needed from their elected representatives. So that the elected representatives are hopefully literate in all of this.

Scott: We've spent from 2017 until today rigorously building a responsible AI practice. You just can't release an AI to the public without a rigorous set of rules that define sensitive uses, and where you have a harms framework. You have to be transparent with the public about what your approach to responsible AI is.

Marcus: Dirigibles were really popular in the 1920s and 1930s. Until we had the Hindenburg. Everybody thought that all these people doing heavier-than-air flight were wasting their time. They were like, "Look at our dirigibles. They scale a lot faster. We built a small one. Now we built a bigger one. Now we built a much bigger one. It's all working great."

So, you know, sometimes you scale the wrong thing. In my view, we're scaling the wrong thing right now. We're scaling a technology that is inherently unstable.

It's unreliable and untruthful. We're making it faster, and it has more coverage, but it's still unreliable, still not truthful. And for many applications that's a problem. There are some for which it's not right.

ChatGPT's sweet spot has always been making surrealist prose. It is now better at making surrealist prose than it was before. If that's your use case, it's fine, I have no problem with it. But if your use case is something where there's a cost of error, where you do need to be truthful and trustworthy, then that is a problem.

Scott: It is absolutely useful to be thinking about these scenarios. It's more useful to think about them grounded in where the technology actually is, and what the next step is, and the step beyond that.

I think we're still many steps away from the things that people worry about. There are people who disagree with me on that assertion. They think there's gonna be some uncontrollable, emergent behavior that happens.

And we're careful enough about that, where we have research teams thinking about the possibility of these emergent scenarios. But the thing that you would really have to have in order for some of the weird things to happen that people are concerned about is real autonomy: a system that could participate in its own development and have that feedback loop where you could get to some superhumanly fast rate of improvement. And that's not the way the systems work right now. Not the ones that we are building.

Bender: We already have WebMD. We already have databases where you can go from symptoms to possible diagnoses, so you know what to look for.

There are plenty of people who need medical advice, medical treatment, who can't afford it, and that is a societal failure. And similarly, there are plenty of people who need legal advice and legal services who can't afford it. Those are real problems, but throwing synthetic text into those situations is not a solution to those problems.

If anything, it's gonna exacerbate the inequalities that we see in our society. And to say, people who can pay get the real thing; people who can't pay, well, here, good luck. You know: Shake the magic eight ball that will tell you something that seems relevant and give it a try.

Manyika: Yes, it does have a place. If I'm trying to explore as a research question, how do I come to understand those diseases? If I'm trying to get medical help for myself, I wouldn't go to these generative systems. I go to a doctor or I go to something where I know there's reliable factual information.

Scott: I think it just depends on the actual delivery mechanism. You absolutely don't want a world where all you have is some substandard piece of software and no access to a real doctor. But I have a concierge doctor, for instance. I interact with my concierge doctor mostly by email. And that's actually a great user experience. It's phenomenal. It saves me so much time, and I'm able to get access to a whole bunch of things that my busy schedule wouldn't let me have access to otherwise.

So for years I've thought, wouldn't it be fantastic for everyone to have the same thing? An expert medical guru that you can go to that can help you navigate a very complicated system of insurance companies and medical providers and whatnot. Having something that can help you deal with the complexity, I think, is a good thing.

Marcus: If it's medical misinformation, you might actually kill someone. That's actually the domain where I'm most worried about erroneous information from search engines.

Now people do search for medical stuff all the time, and these systems are not going to understand drug interactions. They're probably not going to understand particular people's circumstances, and I suspect that there will actually be some pretty bad advice.

We understand from a technical perspective why these systems hallucinate. And I can tell you that they will hallucinate in the medical domain. Then the question is: What becomes of that? What's the cost of error? How widespread is that? How do users respond? We don't know all those answers yet.

Berman: I think society will need to adapt. A lot of those systems are very, very powerful and allow us to do things that we never thought would be possible. By the way, we don't yet understand what is fully possible. We also don't fully understand how some of those systems work.

I think some people will lose jobs. Some people will adjust and get new jobs. We have a company called Canvas that is developing a new type of robot for the construction industry and actually working with the union to train the workforce to use this kind of robot.

And a lot of those jobs that a lot of technologies replace are not necessarily the jobs that a lot of people want to do anyway. So I think that we are going to see a lot of new capabilities that will allow us to train people to do much more exciting jobs as well.

Manyika: If you look at most of the research on AI's impact on work, if I were to summarize it in a phrase, I'd say it's jobs gained, jobs lost, and jobs changed.

All three things will happen, because there are some occupations where a number of the tasks involved in those occupations will probably decline. But there are also new occupations that will grow. So there's going to be a whole set of jobs gained and created as a result of this incredible set of innovations. But I think the bigger effect, quite frankly, what most people will feel, is the "jobs changed" aspect of this.

Read the original here:

How AI and ChatGPT are full of promise and peril, explained by experts - Vox.com