Archive for the ‘AI’ Category

Opinion: I asked AI about myself. The answers were all wrong – The Virginian-Pilot

My interest in artificial intelligence was piqued after a colleague told me he was using it for research and writing. Before I used AI for my own work, I decided to test its accuracy with a question I could verify. I asked OpenAI’s ChatGPT about my own identity, expecting a text version of a selfie. After a week of repeating the same question, the responses were confounding and concerning.

ChatGPT answered “Who is Philip Shucet?” by listing 15 distinct positions I supposedly held at one time or another. The positions included specific titles, job responsibilities and employment dates. But only three of the 15 jobs were accurate. The other 12 were fabrications; the positions were real, but I was never in any of them. The misinformation included jobs in two states I never lived in, as well as a congressional appointment to the Amtrak Review Board. How could AI be so wrong?

Although newsrooms, boardrooms and classrooms are buzzing with stories, AI is not new. The first chatbot, Eliza, was created in 1966 by Joseph Weizenbaum at MIT. Weizenbaum, who died in 2008, became skeptical of artificial intelligence, telling the New Age Journal in 1985, “The dependence on computers is merely the most recent, and the most extreme, example of how man relies on technology in order to escape the burden of acting as an independent agent.”

Was Weizenbaum sending a warning that technology might make us lazy?

In an interview about AI on a March segment of 60 Minutes, Brad Smith, president of Microsoft, told Lesley Stahl that a benefit of AI could be “looking at forms to see if they’ve been filled out correctly.” But what if the form is a resume created by AI? Can AI check its own misinformation? What happens when an employment record is tainted with false information created by AI? Can job recruiters rely on AI queries? Can employers rely on recruiters who use AI? And who is accountable when someone is hired based on misinformation generated by a machine and not by a human?

In the same 60 Minutes segment, Ellie Pavlick, an assistant professor at Brown, told Stahl, “It (AI) doesn’t really understand what it is saying is wrong.” If AI doesn’t know when it is wrong, how can anyone rely on AI to be correct?

In May, two New York attorneys used ChatGPT to write a court brief. The brief cited cases that didn’t exist. One of the attorneys, Steven Schwartz, told the judge that he failed miserably to do his own research to make sure the information was correct. The judge fined each attorney $5,000.

I asked ChatGPT about the consequences of giving out bad information. ChatGPT answered by saying that false information results in misrepresentation, confusion, legal concerns and emotional distress, and erodes trust in AI. If ChatGPT understands the implications of false information, why does it continue to provide fabrications when a search engine could easily provide correct information? Because, as I know now, ChatGPT is not a search engine. I know because I asked.

ChatGPT says it is a language model designed to understand and generate human-like text based on input. It says it doesn’t crawl the web or search the internet. Instead, it generates responses based on patterns and information it learned from the text it was trained on.
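The author ran his test through ChatGPT’s chat interface, but the same repeat-the-question experiment can also be scripted against OpenAI’s API. Here is a minimal sketch, assuming the official OpenAI Python SDK and an API key in the environment; the model name and the loop count are illustrative and do not describe how the column’s test was actually run:

    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    question = "Who is Philip Shucet?"
    answers = []

    for run in range(7):  # the column repeated the question over a week
        response = client.chat.completions.create(
            model="gpt-3.5-turbo",  # illustrative model choice
            messages=[{"role": "user", "content": question}],
        )
        answers.append(response.choices[0].message.content)

    # The model cannot verify its own claims, so every answer still has to be
    # fact-checked by a person.
    for i, text in enumerate(answers, start=1):
        print(f"--- run {i} ---")
        print(text)

Collecting the answers side by side makes the inconsistencies easy to see: a model that generates text from learned patterns can produce a different, equally confident biography on every run.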

If AI needs to be trained, then there’s a critical human element of accountability we can’t ignore. So I started training ChatGPT by correcting it each time it answered with false information. After a week of training, ChatGPT was still returning a mix of accurate and inaccurate information, sometimes repeating fabrications. I’m still sending back correct information, but I’m ready to bring this experiment to an end for now.

This wasn’t a test of ego; it was a test of reliability and trust. A 20% accuracy rate is a failing grade.

In 1976, Weizenbaum wrote, “No other organism, and certainly no computer, can be made to confront genuine human problems in human terms.” I’m not a Luddite. But as technology continues to leap forward further and faster, let’s remember that we are in control of the information that defines us. We are the trainers.

Philip Shucet is a journalist. He previously held positions as the commissioner of VDOT, president and CEO of Hampton Roads Transit, and CEO of Elizabeth River Crossings. He has never held a congressional appointment.


ICYMI: As California Fires Worsen, Can AI Come to the Rescue? – Office of Governor Gavin Newsom

WHAT YOU NEED TO KNOW: No other jurisdiction in the world comes close to California’s use of technology and innovation, including AI, to fight fires.

SACRAMENTO – Short answer: yes.

California is leveraging technologies like AI to fight fires faster and smarter, saving countless lives and communities from destruction.

As reported by the Los Angeles Times, CAL FIRE recently launched a pilot program that uses AI to monitor live camera feeds and issues alerts if anomalies are detected. Already, the program has successfully alerted CAL FIRE to 77 fires before any 911 calls were made.

This program is made possible by record investments by Governor Newsom and the Legislature in wildfire prevention and response totaling $2.8 billion.

IN CASE YOU MISSED IT:

As California Fires Worsen, Can AI Come to the Rescue?

By Hayley Smith

Los Angeles Times

Just before 3 a.m. one night this month, Scott Slumpff was awakened by the ding of a text message.

“An ALERTCalifornia anomaly has been confirmed in your area of interest,” the message said.

Slumpff, a battalion chief with the California Department of Forestry and Fire Protection, sprang into action. The message meant the agency’s new artificial intelligence system had identified signs of a wildfire with a remote mountaintop camera in San Diego County.

Within minutes, crews were dispatched to the burgeoning blaze on Mount Laguna, squelching it before it grew any larger than a 10-foot-by-10-foot spot.

“Without the alert, we wouldn’t have even known about the fire until the next morning, when people are out and about seeing smoke,” Slumpff said. “We probably would have been looking at hundreds of acres rather than a small spot.”

The rapid response was part of a new AI pilot project operated by Cal Fire in partnership with UC San Diego’s ALERTCalifornia system, which maintains 1,039 high-definition cameras in strategic locations throughout the state.

The AI constantly monitors the camera feeds in search of anomalies such as smoke, and alerts Cal Fire when it detects something. A red box highlights the anomaly on a screen, allowing officials to quickly verify and respond.
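Neither Cal Fire nor UC San Diego has published the system’s internals, but the workflow the article describes, continuous monitoring of feeds, a confidence score, and a human who verifies the highlighted frame before crews are dispatched, follows a familiar pattern. A minimal sketch in Python; the function names, threshold and polling interval below are hypothetical illustrations, not the ALERTCalifornia implementation:

    import time
    from typing import Callable, Dict, Tuple

    SMOKE_THRESHOLD = 0.8  # confidence above which a human is alerted (illustrative value)

    def detect_smoke(frame) -> Tuple[float, Tuple[int, int, int, int]]:
        # Stand-in for a trained smoke detector. A real system would run a vision
        # model over the frame; this stub simply returns zero confidence and an
        # empty bounding box.
        return 0.0, (0, 0, 0, 0)

    def monitor(camera_feeds: Dict[str, Callable[[], object]],
                send_alert: Callable[..., None],
                poll_seconds: int = 30) -> None:
        # Poll every feed and alert a human reviewer when the detector is confident enough.
        while True:
            for camera_id, get_frame in camera_feeds.items():
                frame = get_frame()
                confidence, box = detect_smoke(frame)
                if confidence >= SMOKE_THRESHOLD:
                    # The reviewer sees the highlighted region and decides whether to dispatch crews.
                    send_alert(camera_id=camera_id, confidence=confidence, highlight=box)
            time.sleep(poll_seconds)

The key design choice is the one the article emphasizes: the model only flags candidate frames; a person confirms each detection before any response is launched.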

The project rolled out just two months ago to six Cal Fire emergency command centers in the state. But the proof of concept has already been so successful, correctly identifying 77 fires before any 911 calls were logged, that it will soon roll out to all 21 centers.

“The success of this project is the fires you never hear about,” said Phillip SeLegue, staff chief of fire intelligence with Cal Fire.



AI chips, shared trips, and a shorter work week : The Indicator from Planet Money – NPR


It's Indicators of the Week, our weekly news roundup. Today, AI doesn't want to invest in AI, a county in Washington state implements a 4-day work week, and NYC says bye bye to Airbnb, sorta.

For sponsor-free episodes of The Indicator from Planet Money, subscribe to Planet Money+ via Apple Podcasts or at plus.npr.org.

Music by Drop Electric. Find us: TikTok, Instagram, Facebook, Newsletter.


How Schools Can Survive A.I. – The New York Times

Last November, when ChatGPT was released, many schools felt as if they’d been hit by an asteroid.

In the middle of an academic year, with no warning, teachers were forced to confront the new, alien-seeming technology, which allowed students to write college-level essays, solve challenging problem sets and ace standardized tests.

Some schools responded (unwisely, I argued at the time) by banning ChatGPT and tools like it. But those bans didn’t work, in part because students could simply use the tools on their phones and home computers. And as the year went on, many of the schools that restricted the use of generative A.I. (as the category that includes ChatGPT, Bing, Bard and other tools is called) quietly rolled back their bans.

Ahead of this school year, I talked with numerous K-12 teachers, school administrators and university faculty members about their thoughts on A.I. now. There is a lot of confusion and panic, but also a fair bit of curiosity and excitement. Mainly, educators want to know: How do we actually use this stuff to help students learn, rather than just try to catch them cheating?

I’m a tech columnist, not a teacher, and I don’t have all the answers, especially when it comes to the long-term effects of A.I. on education. But I can offer some basic, short-term advice for schools trying to figure out how to handle generative A.I. this fall.

First, I encourage educators, especially in high schools and colleges, to assume that 100 percent of their students are using ChatGPT and other generative A.I. tools on every assignment, in every subject, unless they’re being physically supervised inside a school building.

At most schools, this won’t be completely true. Some students won’t use A.I. because they have moral qualms about it, because it’s not helpful for their specific assignments, because they lack access to the tools or because they’re afraid of getting caught.

But the assumption that everyone is using A.I. outside class may be closer to the truth than many educators realize. (“You have no idea how much we’re using ChatGPT,” read the title of a recent essay by a Columbia undergraduate in The Chronicle of Higher Education.) And it’s a helpful shortcut for teachers trying to figure out how to adapt their teaching methods. Why would you assign a take-home exam, or an essay on Jane Eyre, if everyone in class (except, perhaps, the most strait-laced rule followers) will use A.I. to finish it? Why wouldn’t you switch to proctored exams, blue-book essays and in-class group work, if you knew that ChatGPT was as ubiquitous as Instagram and Snapchat among your students?

Second, schools should stop relying on A.I. detector programs to catch cheaters. There are dozens of these tools on the market now, all claiming to spot writing that was generated with A.I., and none of them work reliably well. They generate lots of false positives, and can be easily fooled by techniques like paraphrasing. Don’t believe me? Ask OpenAI, the maker of ChatGPT, which discontinued its A.I. writing detector this year because of a low rate of accuracy.

It’s possible that in the future, A.I. companies may be able to label their models’ outputs to make them easier to spot (a practice known as watermarking), or that better A.I. detection tools may emerge. But for now, most A.I. text should be considered undetectable, and schools should spend their time (and technology budgets) elsewhere.

My third piece of advice, and the one that may get me the most angry emails from teachers, is that teachers should focus less on warning students about the shortcomings of generative A.I. and more on figuring out what the technology does well.

Last year, many schools tried to scare students away from using A.I. by telling them that tools like ChatGPT are unreliable, prone to spitting out nonsensical answers and generic-sounding prose. These criticisms, while true of early A.I. chatbots, are less true of today’s upgraded models, and clever students are figuring out how to get better results by giving the models more sophisticated prompts.

As a result, students at many schools are racing ahead of their instructors when it comes to understanding what generative A.I. can do, if used correctly. And the warnings about flawed A.I. systems issued last year may ring hollow this year, now that GPT-4 is capable of getting passing grades at Harvard.

Alex Kotran, the chief executive of the AI Education Project, a nonprofit that helps schools adopt A.I., told me that teachers needed to spend time using generative A.I. themselves to appreciate how useful it could be and how quickly it was improving.

“For most people, ChatGPT is still a party trick,” he said. “If you don’t really appreciate how profound of a tool this is, you’re not going to take all the other steps that are going to be required.”

There are resources for educators who want to bone up on A.I. in a hurry. Mr. Kotrans organization has a number of A.I.-focused lesson plans available for teachers, as does the International Society for Technology in Education. Some teachers have also begun assembling recommendations for their peers, such as a website made by faculty at Gettysburg College that provides practical advice on generative A.I. for professors.

In my experience, though, there is no substitute for hands-on experience. So I’d advise teachers to start experimenting with ChatGPT and other generative A.I. tools themselves, with the goal of getting as fluent in the technology as many of their students already are.

My last piece of advice for schools that are flummoxed by generative A.I. is this: Treat this year, the first full academic year of the post-ChatGPT era, as a learning experience, and don’t expect to get everything right.

There are many ways A.I. could reshape the classroom. Ethan Mollick, a professor at the University of Pennsylvania’s Wharton School, thinks the technology will lead more teachers to adopt a “flipped classroom” (having students learn material outside class and practice it in class), which has the advantage of being more resistant to A.I. cheating. Other educators I spoke with said they were experimenting with turning generative A.I. into a classroom collaborator, or a way for students to practice their skills at home with the help of a personalized A.I. tutor.

Some of these experiments won’t work. Some will. That’s OK. We’re all still adjusting to this strange new technology in our midst, and the occasional stumble is to be expected.

But students need guidance when it comes to generative A.I., and schools that treat it as a passing fad or an enemy to be vanquished will miss an opportunity to help them.

“A lot of stuff’s going to break,” Mr. Mollick said. “And so we have to decide what we’re doing, rather than fighting a retreat against the A.I.”


Young professionals are turning to AI to create headshots. But there … – NPR

The photo on the left was what Sophia Jones fed the AI service. It generated the two images on the right. (Photo: Sophia Jones)

Sophia Jones is juggling a lot right now. She just graduated from her master's program, started her first full-time job with SpaceX and recently got engaged. But thanks to technology, one thing isn't on her to-do list: getting professional headshots taken.

Jones is one of a growing number of young professionals who are relying not on photographers to take headshots, but on generative artificial intelligence.

The process is simple enough: Users send in up to a dozen images of themselves to a website or app. Then they pick from sample photos with a style or aesthetic they want to copy, and the computer does the rest. More than a dozen of these services are available online and in app stores.
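The services differ in their details, but the basic interaction they expose is the same: upload a handful of source photos, pick a style, get generated images back. Here is a minimal sketch of what that request might look like from a script; the endpoint, field names and response shape are entirely hypothetical, since each service defines its own API:

    import requests

    # Hypothetical endpoint and field names; no real headshot service is referenced here.
    UPLOAD_URL = "https://headshot-service.example/api/generate"

    selfie_paths = ["selfie_01.jpg", "selfie_02.jpg", "selfie_03.jpg"]  # up to a dozen source photos
    style = "corporate-studio"  # chosen from the service's sample styles

    files = [("images", open(path, "rb")) for path in selfie_paths]
    try:
        response = requests.post(UPLOAD_URL, files=files, data={"style": style}, timeout=60)
        response.raise_for_status()
        # Assume the service returns URLs for the generated headshots.
        for url in response.json().get("results", []):
            print(url)
    finally:
        for _, handle in files:
            handle.close()

From the user’s point of view, everything interesting happens server-side, which is why the quality of the results depends so heavily on what the underlying model was trained on.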

For Jones, the use of AI-generated headshots is a matter of convenience, because she can tweak images she already has and use them in a professional setting. She found out about AI-generated headshots on TikTok, where they went viral recently, and has since used them in everything from her LinkedIn profile to graduation pamphlets, and in her workplace.

So far no one has noticed.

"I think you would have to do some serious investigating and zooming in to realize that it might not truly be me," Jones told NPR.

Still, many of these headshot services are far from perfect. Some of the generated photos give users extra hands or arms, and they have consistent issues around perfecting teeth and ears.

These issues are likely a result of the data sets that the apps and services are trained on, according to Jordan Harrod, a Ph.D. candidate who is popular on YouTube for explaining how AI technology works.

Harrod said some AI technology being used now is different in that it learns what styles a user is looking for and applies them "almost like a filter" to the images. To learn these styles, the technology combs through massive data sets for patterns, which means the results are based on the things it's learning from.

"Most of it just comes from how much training data represents things like hands and ears and hair in various different configurations that you'd see in real life," Harrod said. And when the data sets underrepresent some configurations, some users are left behind or bias creeps in.

Rona Wang is a postgraduate student in a joint MIT-Harvard computer science program. When she used an AI service, she noticed that some of the features it added made her look completely different.

"It made my skin kind of paler and took out the yellow undertones," Wang said, adding that it also gave her big blue eyes when her eyes are brown.

Others who have tried AI headshots have pointed out similar errors, noticing that some websites make women look curvier than they are and that they can wash out complexions and have trouble accurately depicting Black hairstyles.

"When it comes to AI and AI bias, it's important for us to be thinking about who's included and who's not included," Wang said.

For many, the decision may come down to cost and accessibility.

Grace White, a law student at the University of Arkansas, was an early adopter of AI headshots, posting about her experience on TikTok and attracting more than 50 million views.

The close-up photo on the right was one of 10 real images that Grace White submitted to an AI service, which generated the two images on the left. (Photo: Grace White)

Ultimately, White didn't use the generated images and opted for a professional photographer to take her photo, but she said she recognizes that not everyone has the same budget flexibility.

"I do understand people who may have a lower income, and they don't have the budget for a photographer," White said. "I do understand them maybe looking for the AI route just to have a cheaper option for professional headshots."
