Archive for the ‘AI’ Category

How AI and ChatGPT are full of promise and peril, explained by experts – Vox.com

At this point, you have tried ChatGPT. Even Joe Biden has tried ChatGPT, and this week, his administration made a big show of inviting AI leaders like Microsoft CEO Satya Nadella and OpenAI CEO Sam Altman to the White House to discuss ways they could make responsible AI.

But maybe, just maybe, you are still fuzzy on some very basic questions about AI (how does this stuff work, is it magic, and will it kill us all?) but don't want to admit it.

No worries. We have you covered: We've spent much of the spring talking to people working in AI, investing in AI, and trying to build businesses in AI, as well as people who think the current AI boom is overblown or maybe dangerously misguided. We made a podcast series about the whole thing, which you can listen to over at Recode Media.

But we've also pulled out a sampling of insightful and oftentimes conflicting answers we got to some of these very basic questions. They're questions that the White House and everyone else needs to figure out soon, since AI isn't going away.

Read on, and don't worry: we won't tell anyone that you're confused. We're all confused.

Kevin Scott, chief technology officer, Microsoft: I was a 12-year-old when the PC revolution was happening. I was in grad school when the internet revolution happened. I was running a mobile startup right at the very beginning of the mobile revolution, which coincided with this massive shift to cloud computing. This feels to me very much like those three things.

Dror Berman, co-founder, Innovation Endeavors: Mobile was an interesting time because it provided a new form factor that allowed you to carry a computer with you. I think we are now standing in a completely different time: We've now been introduced to a foundational intelligence block that has become available to us, one that basically can lean on all the publicly available knowledge that humanity has extracted and documented. It allows us to retrieve all this information in a way that wasn't possible in the past.

Gary Marcus, entrepreneur; emeritus professor of psychology and neural science at NYU: I mean, it's absolutely interesting. I would not want to argue against that for a moment. I think of it as a dress rehearsal for artificial general intelligence, which we will get to someday.

But right now we have a trade-off. There are some positives about these systems. You can use them to write things for you. And there are some negatives. This technology can be used, for example, to spread misinformation, and to do that at a scale that we've never seen before, which may be dangerous and might undermine democracy.

And I would say that these systems aren't very controllable. They're powerful, they're reckless, but they don't necessarily do what we want. Ultimately, there's going to be a question: "Okay, we can build a demo here. Can we build a product that we can actually use? And what is that product?"

I think in some places people will adopt this stuff. And they'll be perfectly happy with the output. In other places, there's a real problem.

James Manyika, SVP of technology and society, Google: You're trying to make sure the outputs are not toxic. In our case, we do a lot of generative adversarial testing of these systems. In fact, when you use Bard, for example, the output that you get when you type in a prompt is not necessarily the first thing that Bard came up with.

We're running 15 or 16 different versions of the same prompt to look at those outputs and pre-assess them for safety, for things like toxicity. And we don't always catch every single one of them, but we're getting a lot of them already.
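(For the technically curious, the sample-then-screen loop Manyika describes can be sketched in a few lines of Python. Everything below, from generate_candidate to the 0.3 safety threshold, is a hypothetical stand-in for illustration, not Google's actual tooling.)

from typing import Optional

def generate_candidate(prompt: str, seed: int) -> str:
    """Hypothetical stand-in for one sampled model response to the prompt."""
    return f"candidate response #{seed} to {prompt!r}"

def toxicity_score(text: str) -> float:
    """Hypothetical stand-in for a safety classifier (0.0 = safe, 1.0 = toxic)."""
    return (hash(text) % 1000) / 1000.0

def safest_response(prompt: str, n_samples: int = 16,
                    threshold: float = 0.3) -> Optional[str]:
    # Sample several responses to the same prompt, score each for toxicity,
    # and surface the least toxic one only if it clears the safety bar.
    candidates = [generate_candidate(prompt, s) for s in range(n_samples)]
    score, best = min((toxicity_score(c), c) for c in candidates)
    return best if score <= threshold else None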

One of the bigger questions that we are going to have to face, by the way (and this is a question about us, not about the technology; it's about us as a society), is how we think about what we value. How do we think about what counts as toxicity? That's why we try to involve and engage with communities, to understand those values. We try to involve ethicists and social scientists to research those questions and understand them, but those are really questions for us as a society.

Emily M. Bender, professor of linguistics, University of Washington: People talk about democratizing AI, and I always find that really frustrating, because what they're referring to is putting this technology in the hands of many, many people, which is not the same thing as giving everybody a say in how it's developed.

I think the best way forward is cooperation, basically. You have sensible regulation coming from the outside so that the companies are held accountable. And then you've got the tech ethics workers on the inside helping the companies actually meet the regulation and meet the spirit of the regulation.

And to make all that happen, we need broad literacy in the population so that people can ask for what's needed from their elected representatives, and so that the elected representatives are hopefully literate in all of this.

Scott: We've spent from 2017 until today rigorously building a responsible AI practice. You just can't release an AI to the public without a rigorous set of rules that define sensitive uses and without a harms framework. You have to be transparent with the public about what your approach to responsible AI is.

Marcus: Dirigibles were really popular in the 1920s and 1930s, until we had the Hindenburg. Everybody thought that all these people doing heavier-than-air flight were wasting their time. They were like, "Look at our dirigibles. They scale a lot faster. We built a small one. Now we built a bigger one. Now we built a much bigger one. It's all working great."

So, you know, sometimes you scale the wrong thing. In my view, we're scaling the wrong thing right now. We're scaling a technology that is inherently unstable.

It's unreliable and untruthful. We're making it faster and giving it more coverage, but it's still unreliable, still not truthful. And for many applications that's a problem, though there are some for which it's not.

ChatGPT's sweet spot has always been making surrealist prose. It is now better at making surrealist prose than it was before. If that's your use case, it's fine; I have no problem with it. But if your use case is something where there's a cost of error, where you do need to be truthful and trustworthy, then that is a problem.

Scott: It is absolutely useful to be thinking about these scenarios. It's more useful to think about them grounded in where the technology actually is, and what the next step is, and the step beyond that.

I think we're still many steps away from the things that people worry about. There are people who disagree with me on that assertion. They think there's gonna be some uncontrollable, emergent behavior that happens.

And we're careful enough about that that we have research teams thinking about the possibility of these emergent scenarios. But the thing that you would really have to have in order for some of the weird things people are concerned about to happen is real autonomy: a system that could participate in its own development and have that feedback loop where you could get to some superhumanly fast rate of improvement. And that's not the way the systems work right now. Not the ones that we are building.

Bender: We already have WebMD. We already have databases where you can go from symptoms to possible diagnoses, so you know what to look for.

There are plenty of people who need medical advice and medical treatment who can't afford it, and that is a societal failure. And similarly, there are plenty of people who need legal advice and legal services who can't afford it. Those are real problems, but throwing synthetic text into those situations is not a solution to those problems.

If anything, it's gonna exacerbate the inequalities that we see in our society. And to say: people who can pay get the real thing; people who can't pay, well, here, good luck. You know: shake the magic eight ball that will tell you something that seems relevant and give it a try.

Manyika: Yes, it does have a place. If I'm trying to explore, as a research question, how I come to understand those diseases, fine. But if I'm trying to get medical help for myself, I wouldn't go to these generative systems. I'd go to a doctor, or to something where I know there's reliable, factual information.

Scott: I think it just depends on the actual delivery mechanism. You absolutely don't want a world where all you have is some substandard piece of software and no access to a real doctor. But I have a concierge doctor, for instance. I interact with my concierge doctor mostly by email. And that's actually a great user experience. It's phenomenal. It saves me so much time, and I'm able to get access to a whole bunch of things that my busy schedule wouldn't let me have access to otherwise.

So for years I've thought, wouldn't it be fantastic for everyone to have the same thing? An expert medical guru that you can go to, one that can help you navigate a very complicated system of insurance companies and medical providers and whatnot. Having something that can help you deal with the complexity, I think, is a good thing.

Marcus: If it's medical misinformation, you might actually kill someone. That's actually the domain where I'm most worried about erroneous information from search engines.

Now, people do search for medical stuff all the time, and these systems are not going to understand drug interactions. They're probably not going to understand particular people's circumstances, and I suspect that there will actually be some pretty bad advice.

We understand from a technical perspective why these systems hallucinate. And I can tell you that they will hallucinate in the medical domain. Then the question is: What becomes of that? What's the cost of error? How widespread is that? How do users respond? We don't know all those answers yet.

Berman: I think society will need to adapt. A lot of those systems are very, very powerful and allow us to do things that we never thought would be possible. By the way, we don't yet understand what is fully possible, nor do we fully understand how some of those systems work.

I think some people will lose jobs. Some people will adjust and get new jobs. We have a company called Canvas that is developing a new type of robot for the construction industry and is actually working with the union to train the workforce to use this kind of robot.

And a lot of those jobs that a lot of technologies replace are not necessarily the jobs that a lot of people want to do anyway. So I think that we are going to see a lot of new capabilities that will allow us to train people to do much more exciting jobs as well.

Manyika: If you look at most of the research on AI's impact on work, if I were to summarize it in a phrase, I'd say it's "jobs gained, jobs lost, and jobs changed."

All three things will happen, because there are some occupations where a number of the tasks involved in those occupations will probably decline. But there are also new occupations that will grow. So there's going to be a whole set of jobs gained and created as a result of this incredible set of innovations. But I think the bigger effect, quite frankly, and the one most people will feel, is the "jobs changed" aspect of this.

Read the original here:

How AI and ChatGPT are full of promise and peril, explained by experts - Vox.com

AI Is Helping Airlines Prevent Delays and Turbulence – The New York Times

It may be a tough summer to fly. More passengers than ever will be taking to the skies, according to the Transportation Security Administration. And the weather so far this year hasn't exactly been cooperating.

A blizzard warning in San Diego, sudden turbulence that injured 36 people on a Hawaiian Airlines flight bound for Honolulu, a 25-inch deluge of rain that swamped an airport in Fort Lauderdale, Fla.: The skies have been confounding forecasters and frustrating travelers.

And it may only get worse as the climate continues to change. "Intense events are happening more often and outside their seasonal norms," said Sheri Bachstein, chief executive of the Weather Company, part of IBM, which makes weather-forecasting technology.

So, will flights just get bumpier and delays even more common? Not necessarily. New sensors, satellites and data modeling powered by artificial intelligence are giving travelers a fighting chance against more erratic weather.

"The travel industry cares about getting their weather predictions right because weather affects everything," said Amy McGovern, director of the National Science Foundation's A.I. Institute for Research on Trustworthy A.I. in Weather, Climate and Coastal Oceanography at the University of Oklahoma.

Those better weather predictions rely on a type of artificial intelligence called machine learning, where, in essence, a computer program is able to use data to improve itself. In this case, companies create software that uses historical and current weather data to make predictions. The algorithm then compares its predictions with outcomes and adjusts its calculations from there. By doing this over and over, the software makes more and more accurate forecasts.
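To make that loop concrete, here is a deliberately tiny Python sketch: a linear model predicts tomorrow's temperature from today's readings, compares each prediction with what actually happened, and nudges its weights to shrink the error. The data and model are invented for illustration; production forecasting systems apply the same predict-compare-adjust cycle at vastly larger scale.

# Toy training data: (temperature, humidity) today -> temperature tomorrow.
history = [
    ((20.0, 0.60), 21.0),
    ((25.0, 0.40), 26.5),
    ((15.0, 0.80), 14.0),
    ((22.0, 0.55), 23.0),
]

weights = [0.0, 0.0]
bias = 0.0
learning_rate = 0.001

for epoch in range(2000):
    for (temp, humidity), actual in history:
        predicted = weights[0] * temp + weights[1] * humidity + bias
        error = predicted - actual                    # compare prediction with outcome
        weights[0] -= learning_rate * error * temp    # adjust the calculation ...
        weights[1] -= learning_rate * error * humidity
        bias -= learning_rate * error                 # ... and repeat, over and over

print(weights, bias)  # after many passes, predictions track the outcomes closely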

The amount of data fed into these types of software is enormous. IBM's modeling system, for example, integrates information from 100 other models. To that, it adds wind, temperature and humidity data from more than 250,000 weather stations on commercial buildings, cellphone towers and private homes around the globe. In addition, it incorporates satellite and radar reports from sources like the National Weather Service, the National Oceanic and Atmospheric Administration and the Federal Aviation Administration. Some of the world's most powerful computers then process all this information.

Here's how all this may improve your future trips:

The skies are getting bumpier. According to a recent report from the National Aeronautics and Space Administration, severe turbulence at typical airplane cruising altitudes could become two to three times more common.

"Knowing where those disturbances are and how to avoid them is mission-critical for airlines," Ms. Bachstein said.

Pilots have long radioed their encounters with turbulence to air traffic controllers, giving aircraft coming in behind them a chance to illuminate the seatbelt sign in time for the rough air. Now, a new fleet of satellites could help warn them earlier.

Tomorrow.io, a weather intelligence company based in Boston, received a $19 million grant from the U.S. Air Force to launch more than 20 weather satellites, beginning with two by the end of this year and scheduled for completion in 2025. The constellation of satellites will provide meteorological reporting over the whole globe, covering some areas that are not currently monitored. The system will report conditions every hour, a vast improvement over the data that is currently available, according to the company.

The new weather information will be used well beyond the travel industry. For their part, though, pilots will have more complete information in the cockpit, said Dan Slagen, the company's chief marketing officer.

The turbulence that caused dozens of injuries aboard the Hawaiian Airlines flight last December came from an evolving thunderstorm that didn't get reported quickly enough, Dr. McGovern said. That's the kind of situation that can be seen developing and then avoided when reports come in more frequently, she explained.

The F.A.A. estimates that about three-quarters of all flight delays are weather-related. Heavy precipitation, high winds, low visibility and lightning can all cause a tangle on the tarmac, so airports are finding better ways to track them.

WeatherSTEM, based in Florida, reports weather data and analyzes it using artificial intelligence to make recommendations. It also installs small hyperlocal weather stations, which sell for about $20,000, a fifth of the price of older-generation systems, said Ed Mansouri, the company's chief executive.

While airports have always received detailed weather information, WeatherSTEM is among a small set of companies that use artificial intelligence to take that data and turn it into advice. It analyzes reports, for example, from a global lightning monitoring network that shows moment-by-moment electromagnetic activity to provide guidance on when planes should avoid landing and taking off, and when ground crews should seek shelter. The software can also help reduce unnecessary airport closures because its analysis of the lightning's path is more precise than what airports have had in the past.
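As a rough illustration of what turning data into advice can mean, here is a toy Python rule of the sort such a system might apply. The thresholds and data shapes are invented for this sketch; WeatherSTEM's actual logic is proprietary and draws on far richer inputs.

from dataclasses import dataclass

@dataclass
class Strike:
    minutes_ago: float
    miles_away: float

def ground_ops_advice(strikes: list[Strike]) -> str:
    # Look only at recent strikes; advise based on the closest one.
    recent = [s for s in strikes if s.minutes_ago <= 15]
    if any(s.miles_away <= 5 for s in recent):
        return "halt ramp operations; ground crews seek shelter"
    if any(s.miles_away <= 10 for s in recent):
        return "caution: suspend fueling and monitor closely"
    return "all clear"

print(ground_ops_advice([Strike(minutes_ago=4, miles_away=3.2)]))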

The companys weather stations may include mini-Doppler radar systems, which show precipitation and its movement in greater detail than in standard systems; solar-powered devices that monitor factors like wind speed and direction; and digital video cameras. Tampa International, Fort Lauderdale-Hollywood International and Orlando International airports, in Florida, are all using the new mini-weather stations.

The lower price will put the equipment within reach of smaller airports and allow them to improve operations during storms, Mr. Mansouri said, and larger airports might install more than one mini-station. Because airports are often spread out over large areas, conditions, especially wind, can vary, he said, making the devices valuable tools.

More precise data and more advanced analysis are helping airlines fly better in cold weather, too. De-icing a plane is expensive, polluting and time-consuming, so when sudden weather changes mean it has to be done twice, that has an impact on the bottom line, the environment and on-time departures.

Working with airlines like JetBlue, Tomorrow.io analyzes weather data to help ground crews use the most efficient chemical de-icing sprays. The system can, for example, recommend how much to dilute the chemicals with water based on how quickly the temperature is changing. It can also help crews decide whether a thicker chemical treatment called anti-icing is needed, and determine the best time to apply the sprays to limit pollution and cost.
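A hedged sketch of what such a recommendation might look like in code, choosing a glycol-to-water ratio from the current temperature and how fast it is falling. The cutoffs below are invented for illustration; real de-icing guidance follows certified holdover-time tables, and nothing here reflects Tomorrow.io's actual model.

def deicing_mix(temp_c: float, trend_c_per_hour: float) -> str:
    # Rough one-hour look-ahead: colder or fast-falling temperatures
    # call for a stronger (less dilute) glycol mix.
    forecast_temp = temp_c + trend_c_per_hour
    if forecast_temp > -3:
        return "50/50 glycol-water mix"   # mild: dilute to cut cost and runoff
    if forecast_temp > -10:
        return "75/25 glycol-water mix"
    return "100% glycol plus anti-icing treatment"

print(deicing_mix(temp_c=-2.0, trend_c_per_hour=-4.0))  # -> "75/25 glycol-water mix"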

At the University of Oklahoma, Dr. McGovern's team is working on using machine learning to develop software that would provide hailstorm warnings 30 or more minutes in advance, rather than the current 10 to 12 minutes. That could give crews more time to protect planes, which is especially important in places like Oklahoma, where she works. "We get golf balls falling out of the sky, and they can do real damage," Dr. McGovern said.

More on-time departures and smoother flights are most likely only the beginning. Advances in weather technology, Dr. McGovern said, are "snowballing."


Read more from the original source:

AI Is Helping Airlines Prevent Delays and Turbulence - The New York Times

Amnesty International Slammed Over AI Protest Images – Hyperallergic

Screenshots of the since-deleted Amnesty International campaign, which employed AI-generated images (screenshots by Maya Pontone/Hyperallergic)

This week, international human rights watchdog Amnesty International faced backlash from photojournalists and other online critics for using AI-generated images depicting photorealistic scenes of Colombia's 2021 protests. Although there is no shortage of photographs from the demonstrations, the advocacy group told the Guardian that it opted to use artificially generated imagery to protect the identities of protesters who may be vulnerable to state retribution.

The 2021 strike, which was incited by an unpopular tax increase and then fueled by police brutality and other forms of state violence, left at least 40 people dead and many more missing, according to official figures.

Amnesty International shared the AI images as part of a since-deleted social media campaign marking two years since the Colombian protests, paired with disclaimers that acknowledged the use of AI. Commentators online were quick to notice errors in the fake images. For instance, one of them showed a woman wearing the tri-colored Colombian flag being dragged off by police, a familiar still from the 2021 protests. But on social media, people pointed out that the colors of the national flag were in the wrong order, and the faces of the protesters and police officers were eerily smoothed over. Additionally, the officers' uniforms were out of date.

In response to the public outcry, Amnesty International has since deleted the images from its social media channels.

The organization has not yet responded to Hyperallergic's request for comment. In an interview with the Guardian, Erika Guevara Rosas, Amnesty International's director for the Americas, said the organization did not want the AI controversy to distract from the core message in support of the victims and their calls for justice in Colombia.

"But we do take the criticism seriously and want to continue the engagement to ensure we understand better the implications and our role to address the ethical dilemmas posed by the use of such technology," Rosas added.

Amnesty also directly responded to the backlash online, apologizing for the misrepresentative photos and reiterating its initial intentions.

"Our main goal was to highlight the grotesque violence by the police against people in Colombia. It is important to state that the purpose was to protect people who could be exposed. But we could choose drawings or other things," Amnesty International tweeted.

Some members of the photojournalism and larger arts communities have also shared their frustration with the mock photos, as the popularization of AI over the past year has raised questions about plagiarism and job displacement.

Molly Crabapple, a New York-based writer and artist who recently authored an open letter against the use of AI-generated art, condemned Amnesty Internationals use of the tool in its campaign.

"By using AI-generated photos of police brutality in Colombia, Amnesty International is practically begging atrocity-deniers to call them liars," Crabapple tweeted. "Either use the work of brave photojournalists, or use actual illustrations. AI-generated photos just undermine trust in your findings."

Read the original post:

Amnesty International Slammed Over AI Protest Images - Hyperallergic

The best way to avoid a down round is to found an AI startup – TechCrunch

As we see unicorns slash staff and the prevalence of down rounds spike, it may seem that the startup ecosystem is chock-full of bad news and little else. That's not precisely the case.

While AI, and in particular the generative AI subcategory, is as hot as the sun, not all venture attention is going to the handful of names that you already know. Sure, OpenAI is able to land nine- and 10-figure rounds from a murderers' row of tech investors and mega-cap corporations. And rising companies like Hugging Face and Anthropic cannot stay out of the news, proving that smaller AI-focused startups are doing more than well.

In fact, new data from Carta, which provides cap table management and other services, indicates that AI-focused startups are outperforming their larger peer group at both the seed and Series A stage.

The dataset, which notes that AI-centered startups are raising more and at higher valuations than other startups, indicates that perhaps the best way to avoid a down round today is to build in the artificial intelligence space.

Per Carta data relating to the first quarter of the year, seed funding to non-AI startups in the U.S. market that use its services dipped from $1.64 billion to $1.08 billion, a decline of around 34%. That result is directionally aligned with other data that we've seen regarding Q1 2023 venture capital totals; the data points down.

See the rest here:

The best way to avoid a down round is to found an AI startup - TechCrunch

Microsoft economist warns of A.I. election interference from ‘bad actors’ – CNBC

Microsoft logo seen at its building in Redmond, Washington. (Toby Scott | SOPA Images | LightRocket | Getty Images)

People should worry more about "AI being used by bad actors" than they should about AI productivity outpacing human productivity, Microsoft chief economist Michael Schwarz said at a World Economic Forum event Wednesday.

"Before AI could take all your jobs, it could certainly do a lot of damage in the hands of spammers, people who want to manipulate elections," Schwarz added while speaking on a panel on harnessing generative AI.

Microsoft first invested $1 billion in OpenAI in 2019, years before the two companies would integrate OpenAI's GPT large language model into Microsoft's Bing search product. In January, Microsoft announced a new multiyear multibillion-dollar investment in the company. OpenAI relies on Microsoft to provide the computing heft that powers OpenAI's products, a relationship that Wells Fargo recently said could result in up to $30 billion in new annual revenue for Microsoft.

Schwarz tempered his caution about AI by noting that all new technologies, even cars, carried a degree of risk when they first came to market. "When AI makes us more productive, we as mankind ought to be better off," he noted, "because we are able to produce more stuff."

OpenAI's ChatGPT sparked a flood of investment in the AI sector. Google moved to launch a rival chatbot, Bard, sparking a wave of internal concern about a botched rollout. Politicians and regulators have expressed growing concern about the potential effect of AI technology as well.

Vice President Kamala Harris will meet Thursday with top executives from Anthropic, another AI firm, and Google, Microsoft and OpenAI to discuss responsible AI development, the White House told CNBC on Tuesday. Meanwhile, FTC Chair Lina Khan penned an op-ed in The New York Times on Wednesday warning "enforcers and regulators must be vigilant."

"Please remember, breaking is much easier than building," Schwarz said.

Go here to see the original:

Microsoft economist warns of A.I. election interference from 'bad actors' - CNBC