Archive for the ‘AI’ Category

ICYMI: As California Fires Worsen, Can AI Come to the Rescue … – Office of Governor Gavin Newsom

WHAT YOU NEED TO KNOW: No other jurisdiction in the world comes close to California's use of technology and innovation, including AI, to fight fires.

SACRAMENTO – Short answer: yes.

California is leveraging technologies like AI to fight fires faster and smarter, saving countless lives and communities from destruction.

As reported by the Los Angeles Times, CAL FIRE recently launched a pilot program that uses AI to monitor live camera feeds and issue alerts when anomalies are detected. Already, the program has alerted CAL FIRE to 77 fires before any 911 calls were made.

This program is made possible by record investments by Governor Newsom and the Legislature in wildfire prevention and response, totaling $2.8 billion.

IN CASE YOU MISSED IT:

As California Fires Worsen, Can AI Come to the Rescue?

By Hayley Smith

Los Angeles Times

Just before 3 a.m. one night this month, Scott Slumpff was awakened by the ding of a text message.

"An ALERTCalifornia anomaly has been confirmed in your area of interest," the message said.

Slumpff, a battalion chief with the California Department of Forestry and Fire Protection, sprang into action. The message meant the agency's new artificial intelligence system had identified signs of a wildfire with a remote mountaintop camera in San Diego County.

Within minutes, crews were dispatched to the burgeoning blaze on Mount Laguna, squelching it before it grew any larger than a 10-foot-by-10-foot spot.

"Without the alert, we wouldn't have even known about the fire until the next morning, when people are out and about seeing smoke," Slumpff said. "We probably would have been looking at hundreds of acres rather than a small spot."

The rapid response was part of a new AI pilot project operated by Cal Fire in partnership with UC San Diego's ALERTCalifornia system, which maintains 1,039 high-definition cameras in strategic locations throughout the state.

The AI constantly monitors the camera feeds in search of anomalies such as smoke, and alerts Cal Fire when it detects something. A red box highlights the anomaly on a screen, allowing officials to quickly verify and respond.
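The alerting flow described above can be sketched in a few lines. This is a minimal illustration only: the function names, confidence threshold, and alert fields below are assumptions made for the sketch, not Cal Fire's or ALERTCalifornia's actual code.

```python
# Illustrative sketch of a camera-feed anomaly alerting loop.
# All names and the 0.9 threshold are assumptions, not the real system.

def monitor(camera_feeds, detect_smoke, send_alert, threshold=0.9):
    """Poll each feed, score it with a smoke detector, and alert on strong hits."""
    for cam_id, frame in camera_feeds():
        score, bbox = detect_smoke(frame)  # model returns confidence + bounding box
        if score >= threshold:
            # The bounding box is what would draw the "red box" an
            # official then verifies on screen before crews are dispatched.
            send_alert({"camera": cam_id, "confidence": score, "box": bbox})
```

In practice the detector would be a vision model and the alert a text message or dashboard event; the structure of the loop (score every feed, gate on confidence, hand off to a human for verification) is the part the article describes.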

The project rolled out just two months ago to six Cal Fire emergency command centers in the state. But the proof of concept has already been so successful, correctly identifying 77 fires before any 911 calls were logged, that it will soon roll out to all 21 centers.

"The success of this project is the fires you never hear about," said Phillip SeLegue, staff chief of fire intelligence with Cal Fire.

Read more here.

Read more from the original source:

ICYMI: As California Fires Worsen, Can AI Come to the Rescue ... - Office of Governor Gavin Newsom

AI chips, shared trips, and a shorter work week : The Indicator from … – NPR

It's Indicators of the Week, our weekly news roundup. Today, AI doesn't want to invest in AI, a county in Washington state implements a 4-day work week, and NYC says bye bye to Airbnb, sorta.

For sponsor-free episodes of The Indicator from Planet Money, subscribe to Planet Money+ via Apple Podcasts or at plus.npr.org.

Music by Drop Electric. Find us: TikTok, Instagram, Facebook, Newsletter.


View post:

AI chips, shared trips, and a shorter work week : The Indicator from ... - NPR

How Schools Can Survive A.I. – The New York Times

Last November, when ChatGPT was released, many schools felt as if they'd been hit by an asteroid.

In the middle of an academic year, with no warning, teachers were forced to confront the new, alien-seeming technology, which allowed students to write college-level essays, solve challenging problem sets and ace standardized tests.

Some schools responded (unwisely, I argued at the time) by banning ChatGPT and tools like it. But those bans didn't work, in part because students could simply use the tools on their phones and home computers. And as the year went on, many of the schools that restricted the use of generative A.I. (as the category that includes ChatGPT, Bing, Bard and other tools is called) quietly rolled back their bans.

Ahead of this school year, I talked with numerous K-12 teachers, school administrators and university faculty members about their thoughts on A.I. now. There is a lot of confusion and panic, but also a fair bit of curiosity and excitement. Mainly, educators want to know: How do we actually use this stuff to help students learn, rather than just try to catch them cheating?

I'm a tech columnist, not a teacher, and I don't have all the answers, especially when it comes to the long-term effects of A.I. on education. But I can offer some basic, short-term advice for schools trying to figure out how to handle generative A.I. this fall.

First, I encourage educators, especially in high schools and colleges, to assume that 100 percent of their students are using ChatGPT and other generative A.I. tools on every assignment, in every subject, unless they're being physically supervised inside a school building.

At most schools, this won't be completely true. Some students won't use A.I. because they have moral qualms about it, because it's not helpful for their specific assignments, because they lack access to the tools or because they're afraid of getting caught.

But the assumption that everyone is using A.I. outside class may be closer to the truth than many educators realize. ("You have no idea how much we're using ChatGPT," read the title of a recent essay by a Columbia undergraduate in The Chronicle of Higher Education.) And it's a helpful shortcut for teachers trying to figure out how to adapt their teaching methods. Why would you assign a take-home exam, or an essay on Jane Eyre, if everyone in class (except, perhaps, the most strait-laced rule followers) will use A.I. to finish it? Why wouldn't you switch to proctored exams, blue-book essays and in-class group work, if you knew that ChatGPT was as ubiquitous as Instagram and Snapchat among your students?

Second, schools should stop relying on A.I. detector programs to catch cheaters. There are dozens of these tools on the market now, all claiming to spot writing that was generated with A.I., and none of them work reliably well. They generate lots of false positives, and can be easily fooled by techniques like paraphrasing. Don't believe me? Ask OpenAI, the maker of ChatGPT, which discontinued its A.I. writing detector this year because of a low rate of accuracy.

It's possible that in the future, A.I. companies may be able to label their models' outputs to make them easier to spot (a practice known as watermarking), or that better A.I. detection tools may emerge. But for now, most A.I. text should be considered undetectable, and schools should spend their time (and technology budgets) elsewhere.
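The watermarking idea can be illustrated with a toy "green list" scheme, in the spirit of published research proposals rather than any company's actual implementation: a hash of the previous token deterministically splits the vocabulary, a watermarking generator favors "green" tokens, and a detector recomputes the same split and checks whether the green fraction sits suspiciously above the roughly one-half expected by chance. Everything below is a simplified sketch under those assumptions.

```python
import hashlib

def green_list(prev_token: str, vocab: list[str]) -> set[str]:
    # Deterministically partition the vocabulary based on the previous token,
    # so a detector can recompute the partition without access to the model.
    out = set()
    for w in vocab:
        h = hashlib.sha256((prev_token + ":" + w).encode()).digest()
        if h[0] % 2 == 0:  # roughly half the vocabulary is "green"
            out.add(w)
    return out

def green_fraction(tokens: list[str], vocab: list[str]) -> float:
    # Fraction of tokens drawn from the green list of their predecessor.
    # Watermarked text skews well above the ~0.5 expected by chance;
    # human text should hover near 0.5.
    hits = sum(1 for prev, cur in zip(tokens, tokens[1:])
               if cur in green_list(prev, vocab))
    return hits / max(1, len(tokens) - 1)
```

A real scheme works on model token IDs and uses a statistical test rather than a raw fraction, but this captures why watermark detection is possible only when the generator cooperated at generation time.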

My third piece of advice, and the one that may get me the most angry emails from teachers, is that teachers should focus less on warning students about the shortcomings of generative A.I. than on figuring out what the technology does well.

Last year, many schools tried to scare students away from using A.I. by telling them that tools like ChatGPT are unreliable, prone to spitting out nonsensical answers and generic-sounding prose. These criticisms, while true of early A.I. chatbots, are less true of today's upgraded models, and clever students are figuring out how to get better results by giving the models more sophisticated prompts.

As a result, students at many schools are racing ahead of their instructors when it comes to understanding what generative A.I. can do, if used correctly. And the warnings about flawed A.I. systems issued last year may ring hollow this year, now that GPT-4 is capable of getting passing grades at Harvard.

Alex Kotran, the chief executive of the AI Education Project, a nonprofit that helps schools adopt A.I., told me that teachers needed to spend time using generative A.I. themselves to appreciate how useful it could be and how quickly it was improving.

"For most people, ChatGPT is still a party trick," he said. "If you don't really appreciate how profound of a tool this is, you're not going to take all the other steps that are going to be required."

There are resources for educators who want to bone up on A.I. in a hurry. Mr. Kotrans organization has a number of A.I.-focused lesson plans available for teachers, as does the International Society for Technology in Education. Some teachers have also begun assembling recommendations for their peers, such as a website made by faculty at Gettysburg College that provides practical advice on generative A.I. for professors.

In my experience, though, there is no substitute for hands-on experience. So I'd advise teachers to start experimenting with ChatGPT and other generative A.I. tools themselves, with the goal of getting as fluent in the technology as many of their students already are.

My last piece of advice for schools that are flummoxed by generative A.I. is this: Treat this year, the first full academic year of the post-ChatGPT era, as a learning experience, and don't expect to get everything right.

There are many ways A.I. could reshape the classroom. Ethan Mollick, a professor at the University of Pennsylvania's Wharton School, thinks the technology will lead more teachers to adopt a flipped classroom (having students learn material outside class and practice it in class), which has the advantage of being more resistant to A.I. cheating. Other educators I spoke with said they were experimenting with turning generative A.I. into a classroom collaborator, or a way for students to practice their skills at home with the help of a personalized A.I. tutor.

Some of these experiments won't work. Some will. That's OK. We're all still adjusting to this strange new technology in our midst, and the occasional stumble is to be expected.

But students need guidance when it comes to generative A.I., and schools that treat it as a passing fad or an enemy to be vanquished will miss an opportunity to help them.

"A lot of stuff's going to break," Mr. Mollick said. "And so we have to decide what we're doing, rather than fighting a retreat against the A.I."

Read the original here:

How Schools Can Survive A.I. - The New York Times

Young professionals are turning to AI to create headshots. But there … – NPR

The photo on the left was what Sophia Jones fed the AI service. It generated the two images on the right. (Photo: Sophia Jones)

Sophia Jones is juggling a lot right now. She just graduated from her master's program, started her first full-time job with SpaceX and recently got engaged. But thanks to technology, one thing isn't on her to-do list: getting professional headshots taken.

Jones is one of a growing number of young professionals who are relying not on photographers to take headshots, but on generative artificial intelligence.

The process is simple enough: Users send in up to a dozen images of themselves to a website or app. Then they pick from sample photos with a style or aesthetic they want to copy, and the computer does the rest. More than a dozen of these services are available online and in app stores.
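The upload-and-style workflow described above can be sketched as the request body such a service might accept. This is purely illustrative: the field names, hex encoding, and the dozen-photo cap below are assumptions for the sketch, not any real service's API.

```python
# Illustrative sketch of a headshot-service request body.
# Field names and the ~dozen-photo cap are assumptions, not a real API.

def build_headshot_request(photo_bytes_list: list[bytes], style: str) -> dict:
    """Assemble a JSON-serializable body: reference photos plus a target style."""
    if len(photo_bytes_list) > 12:
        raise ValueError("services typically cap uploads at about a dozen photos")
    return {
        "style": style,  # e.g. "corporate" or "studio", chosen from sample photos
        "photos": [b.hex() for b in photo_bytes_list],  # hex-encoded image bytes
    }
```

The body would then be POSTed to the service, which returns generated images in the requested style; the generation itself happens server-side, which is why the user's only inputs are reference photos and a style choice.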

For Jones, the use of AI-generated headshots is a matter of convenience, because she can tweak images she already has and use them in a professional setting. She found out about AI-generated headshots on TikTok, where they went viral recently, and has since used them in everything from her LinkedIn profile to graduation pamphlets, and in her workplace.

So far no one has noticed.

"I think you would have to do some serious investigating and zooming in to realize that it might not truly be me," Jones told NPR.

Still, many of these headshot services are far from perfect. Some of the generated photos give users extra hands or arms, and they have consistent issues around perfecting teeth and ears.

These issues are likely a result of the data sets that the apps and services are trained on, according to Jordan Harrod, a Ph.D. candidate who is popular on YouTube for explaining how AI technology works.

Harrod said some AI technology being used now is different in that it learns what styles a user is looking for and applies them "almost like a filter" to the images. To learn these styles, the technology combs through massive data sets for patterns, which means the results are based on the things it's learning from.

"Most of it just comes from how much training data represents things like hands and ears and hair in various different configurations that you'd see in real life," Harrod said. And when the data sets underrepresent some configurations, some users are left behind or bias creeps in.

Rona Wang is a postgraduate student in a joint MIT-Harvard computer science program. When she used an AI service, she noticed that some of the features it added made her look completely different.

"It made my skin kind of paler and took out the yellow undertones," Wang said, adding that it also gave her big blue eyes when her eyes are brown.

Others who have tried AI headshots have pointed out similar errors, noticing that some websites make women look curvier than they are and that they can wash out complexions and have trouble accurately depicting Black hairstyles.

"When it comes to AI and AI bias, it's important for us to be thinking about who's included and who's not included," Wang said.

For many, the decision may come down to cost and accessibility.

Grace White, a law student at the University of Arkansas, was an early adopter of AI headshots, posting about her experience on TikTok and attracting more than 50 million views.

The close-up photo on the right was one of 10 real images that Grace White submitted to an AI service, which generated the two images on the left. (Photo: Grace White)

Ultimately, White didn't use the generated images and opted for a professional photographer to take her photo, but she said she recognizes that not everyone has the same budget flexibility.

"I do understand people who may have a lower income, and they don't have the budget for a photographer," White said. "I do understand them maybe looking for the AI route just to have a cheaper option for professional headshots."

Go here to see the original:

Young professionals are turning to AI to create headshots. But there ... - NPR

Generative AI and data analytics on the agenda for Pamplin’s Day … – Virginia Tech

On Friday, Sept. 8, the second annual Day for Data symposium will gather industry leaders and academia together for a practical exploration of business analytics. The event is scheduled from 8 a.m. to 4 p.m. EDT in Virginia Tech's Owens Ballroom.

"Virginia Tech is a leader in advanced analytics programs and capabilities," said Jay Winkeler, executive director of the Center for Business Analytics. "Building off the success from last year, Day for Data will be bigger and bolder, with a focus on the AI [artificial intelligence] revolution happening all around us."

The conference, hosted by the Pamplin College of Business's Center for Business Analytics, is an opportunity for shared learning and thought leadership in the field of business analytics. Corporate leaders and university faculty converge to fill a robust agenda with expertise in a wide range of topics, including generative AI and large language models, advanced data analytics, digital privacy, business leadership and intelligence, and more.

Beyond the rich learning component, Day for Data also lends itself to opportunities for professional advancement. With a strong turnout expected from both academia and industry, the event offers students a chance to see the real-world applications of their studies and companies an opportunity to scout for emerging talent.

"The interaction between students, faculty, and corporations is critical to harnessing the power of analytics and showing how skilled professionals translate analytics into meaningful business decisions," said Winkeler. "For industry professionals, it is a chance to tell their success stories and gain critical exposure to a talented student and faculty population."

The symposium will begin with opening remarks by Saonee Sarker, Richard E. Sorensen Dean for the Pamplin College of Business, followed by a keynote address from Andrew Allwine, senior director of data optimization for Norfolk Southern. During the session, Allwine will share his strategies for aggregating and translating complex datasets into actionable insights and tangible return on investment for organizational decision-makers.

Key contributions by faculty working within Pamplin include a session led by Voices of Privacy, an initiative spearheaded by Professors France Bélanger and Donna Wertalik that seeks to prepare society to manage its information privacy amid the challenging modern digital landscape, as well as a research poster session highlighting the latest research in the field.

After a lunch and networking break, Keith Johnson, director of solutions architecture for partner systems integrators at Amazon Web Services, will deliver a presentation and live demonstration of Amazon's latest innovations with generative AI and large language models. Tracy Jones, data strategy and management executive for Guidehouse, will follow with a session on the opportunities and threats of artificial intelligence implementation, including case studies of organizations that neglected ethical principles and suffered consequences.

Both experts will return to join Kevin Davis, chief growth officer for MarathonTS, and Cayce Myers, director of graduate studies for the School of Communication at Virginia Tech, for a panel discussion and interactive conversation on artificial intelligence, including ethical, legal, and technical considerations. Day for Data will conclude with a networking reception.

Day for Data 2023 is sponsored by Norfolk Southern, Guidehouse, MarathonTS, Ernst & Young, and Amazon Web Services.

For more information on Day for Data and to register, please visit the event page.

Read the original post:

Generative AI and data analytics on the agenda for Pamplin's Day ... - Virginia Tech