Archive for the ‘Artificial Intelligence’ Category

Airbus and Helsing to collaborate on artificial intelligence for the teaming of manned and unmanned military aircraft – Airbus

Berlin, 5 June 2024: Airbus Defence and Space and Helsing, Europe's leading defence AI and software company, signed a framework cooperation agreement at the ILA aerospace trade show in Berlin. According to the agreement, the companies will work together on artificial intelligence (AI) technologies which will be used in a future Wingman system. This unmanned fighter-type aircraft will operate with current combat jets and receive its tasks from a pilot in a command aircraft such as the Eurofighter.

Airbus is also presenting its Wingman concept for the first time at the ILA. As a response to increasing operational requirements by the German Air Force, the Wingman is intended to augment the capabilities of manned combat aircraft with uncrewed platforms that can carry weapons and other effectors.

"The current conflicts on Europe's borders show how important air superiority is," said Mike Schoellhorn, CEO at Airbus Defence and Space. "Manned-Unmanned Teaming will play a central role in achieving air superiority: With an unmanned Wingman at their side, fighter pilots can operate outside the danger zone. They give the orders and always have the decision-making authority. Supported by AI, the wingman then takes over the dangerous tasks, including target reconnaissance and destruction or electronic jamming and deception of enemy air defense systems."

"Whilst we will always have a human in the loop, we must realize that the most dangerous parts of an unmanned mission will see a high degree of autonomy and thus require AI," said Gundbert Scherf, Co-CEO at Helsing. "From processing sensor data, to optimizing sub-systems, to closing the loop at system level: software-defined capabilities and AI will be a critical component of the Wingman system for the German Air Force."

Under the AI agreement, Airbus will provide its expertise in the interaction of unmanned and manned military aircraft, so-called Manned-Unmanned Teaming, and as prime contractor of major European defense programs such as the Eurofighter or the A400M military transporter. Helsing will contribute its AI stack of relevant software-defined mission capabilities, including the fusion of various sensors and algorithms for electronic warfare.

More information about the Wingman can be found here.

Photo: Michael Schoellhorn, CEO at Airbus Defence and Space (on the left), and Gundbert Scherf, Co-CEO at Helsing, in front of the Airbus Wingman model.

#Wingman #TeamAirbus #DefenceMatters #Eurofighter #Technology #Innovation

Read more here:
Airbus and Helsing to collaborate on artificial intelligence for the teaming of manned and unmanned military aircraft - Airbus

Plymouth Whitemarsh High School film club explores artificial intelligence impacts – The Times Herald

Freddie Combs, a minister who appeared on the second season of The X-Factor, has passed away at age 49.

According to a Cocoa, Florida funeral home, Combs died on September 10 surrounded by his friends and family. His wife, Kay, who appeared alongside Combs on the TLC show Ton of Love in 2010, told TMZ that his death was a result of kidney failure following a slate of health issues.

Combs featured on the singing competition show in 2012, becoming an instant fan favorite with his rendition of Bette Midler's 1988 song "Wind Beneath My Wings." His performance impressed the celebrity judges, including Simon Cowell, L.A. Reid, Britney Spears, and Demi Lovato, with Cowell and Reid promising they'd support him if he got healthier.

The Tennessee native was escorted to the stage in a wheelchair by his wife when he appeared on the show. During his audition, he opened up about his battle with weight loss, explaining how, in 2009, he weighed 920 pounds and almost died. He had lost almost 400 pounds by the time he appeared on The X-Factor through exercise and diet.

"My wife Kay, she's an incredible woman," he said on the show. "She started caring for me right after we were married in '96, and as my weight rose, more things were required of her. She's the closest thing to an angel and a saint that I know."

"When I was bedridden and never came out of the house, my music was never heard," he continued. "My biggest dream would be to give hope to people who are my size so they can achieve their dreams. And I know people might think I would never have a chance, and maybe I don't, but I hope the judges will look past my exterior and give a fat boy a chance."

Speaking to TMZ, Kay said that she knew the day before that she was going to lose her husband. "I have so much gratitude to be his wife for 25 years," she said, "and to be his best friend."

Go here to read the rest:
Plymouth Whitemarsh High School film club explores artificial intelligence impacts - The Times Herald

The Life, Death and Rebirth of an A.I.-Generated News Outlet – The New York Times

The news was featured on MSN.com: "Prominent Irish broadcaster faces trial over alleged sexual misconduct." At the top of the story was a photo of Dave Fanning.

But Mr. Fanning, an Irish D.J. and talk-show host famed for his discovery of the rock band U2, was not the broadcaster in question.

"You wouldn't believe the amount of people who got in touch," said Mr. Fanning, who called the error "outrageous."

The falsehood, visible for hours on the default homepage for anyone in Ireland who used Microsoft Edge as a browser, was the result of an artificial intelligence snafu.

A fly-by-night journalism outlet called BNN Breaking had used an A.I. chatbot to paraphrase an article from another news site, according to a BNN employee. BNN added Mr. Fanning to the mix by including a photo of a prominent Irish broadcaster. The story was then promoted by MSN, a web portal owned by Microsoft.

The story was deleted from the internet a day later, but the damage to Mr. Fanning's reputation was not so easily undone, he said in a defamation lawsuit filed in Ireland against Microsoft and BNN Breaking. His is just one of many complaints against BNN, a site based in Hong Kong that published numerous falsehoods during its short time online as a result of what appeared to be generative A.I. errors.


Excerpt from:
The Life, Death and Rebirth of an A.I.-Generated News Outlet - The New York Times

OpenAI Whistle-Blowers Describe Reckless and Secretive Culture – The New York Times

A group of OpenAI insiders is blowing the whistle on what they say is a culture of recklessness and secrecy at the San Francisco artificial intelligence company, which is racing to build the most powerful A.I. systems ever created.

The group, which includes nine current and former OpenAI employees, has rallied in recent days around shared concerns that the company has not done enough to prevent its A.I. systems from becoming dangerous.

The members say OpenAI, which started as a nonprofit research lab and burst into public view with the 2022 release of ChatGPT, is putting a priority on profits and growth as it tries to build artificial general intelligence, or A.G.I., the industry term for a computer program capable of doing anything a human can.

They also claim that OpenAI has used hardball tactics to prevent workers from voicing their concerns about the technology, including restrictive nondisparagement agreements that departing employees were asked to sign.

"OpenAI is really excited about building A.G.I., and they are recklessly racing to be the first there," said Daniel Kokotajlo, a former researcher in OpenAI's governance division and one of the group's organizers.

The group published an open letter on Tuesday calling for leading A.I. companies, including OpenAI, to establish greater transparency and more protections for whistle-blowers.


Read the rest here:
OpenAI Whistle-Blowers Describe Reckless and Secretive Culture - The New York Times

OpenAI, Anthropic and Google DeepMind workers warn of AI's dangers – The Washington Post

A handful of current and former employees at OpenAI and other prominent artificial intelligence companies warned that the technology poses grave risks to humanity in a Tuesday letter, calling on companies to implement sweeping changes to ensure transparency and foster a culture of public debate.

The letter, signed by 13 people including current and former employees at Anthropic and Google's DeepMind, said AI can exacerbate inequality, increase misinformation, and allow AI systems to become autonomous and cause significant death. Though these risks could be mitigated, corporations in control of the software have strong financial incentives to limit oversight, they said.

Because AI is only loosely regulated, accountability rests on company insiders, the employees wrote, calling on corporations to lift nondisclosure agreements and give workers protections that allow them to anonymously raise concerns.

The move comes as OpenAI faces a staff exodus. Many critics have seen prominent departures, including those of OpenAI co-founder Ilya Sutskever and senior researcher Jan Leike, as a rebuke of company leaders, who some employees argue chase profit at the expense of making OpenAI's technologies safer.

Daniel Kokotajlo, a former employee at OpenAI, said he left the start-up because of the company's disregard for the risks of artificial intelligence.


"I lost hope that they would act responsibly, particularly as they pursue artificial general intelligence," he said in a statement, referencing a hotly contested term for computers matching the power of human brains.

"They and others have bought into the 'move fast and break things' approach, and that is the opposite of what is needed for technology this powerful and this poorly understood," Kokotajlo said.

Liz Bourgeois, a spokesperson at OpenAI, said the company agrees that rigorous debate is crucial given the significance of this technology. Representatives from Anthropic and Google did not immediately reply to a request for comment.

The employees said that absent government oversight, AI workers are the few people who can hold corporations accountable. They said that they are hamstrung by broad confidentiality agreements and that ordinary whistleblower protections are insufficient because they focus on illegal activity, and the risks that they are warning about are not yet regulated.

The letter called for AI companies to commit to four principles to allow for greater transparency and whistleblower protections. Those principles are a commitment to not enter into or enforce agreements that prohibit criticism of risks; a call to establish an anonymous process for current and former employees to raise concerns; supporting a culture of criticism; and a promise to not retaliate against current and former employees who share confidential information to raise alarms after other processes have failed.

The Washington Post reported in December that senior leaders at OpenAI had raised fears about retaliation from CEO Sam Altman, warnings that preceded the chief's temporary ouster. In a recent podcast interview, former OpenAI board member Helen Toner said part of the nonprofit's decision to remove Altman as CEO late last year was his lack of candid communication about safety.

"He gave us inaccurate information about the small number of formal safety processes that the company did have in place, meaning that it was basically just impossible for the board to know how well those safety processes were working," she told The TED AI Show in May.

The letter was endorsed by AI luminaries including Yoshua Bengio and Geoffrey Hinton, who are considered "godfathers of AI," and renowned computer scientist Stuart Russell.

See more here:
OpenAI, Anthropic and Google DeepMind workers warn of AI's dangers - The Washington Post