Archive for the ‘Ai’ Category

EU urged to protect grassroots AI research or risk losing out to US – The Guardian

Experts warn Brussels it cannot afford to leave artificial intelligence in the hands of foreign firms such as Google

The EU has been warned that it risks handing control of artificial intelligence to US tech firms if it does not act to protect grassroots research in its forthcoming AI bill.

In an open letter coordinated by the German research group Laion, or Large-scale AI Open Network, the European parliament was told that one-size-fits-all rules risked eliminating open research and development.

Rules that require a researcher or developer to monitor or control downstream use could make it impossible to release open-source AI in Europe, the letter says. That would entrench large firms, hamper efforts to improve transparency, reduce competition, limit academic freedom and drive investment in AI overseas.

It adds: "Europe cannot afford to lose AI sovereignty. Eliminating open-source R&D will leave the European scientific community and economy critically dependent on a handful of foreign and proprietary firms for essential AI infrastructure."

The largest AI efforts, by companies such as OpenAI and Google, are heavily controlled by their creators. It is impossible to download the model behind ChatGPT, for instance, and the paid-for access that OpenAI provides to customers comes with a number of restrictions, legal and technical, on how it can be used. By contrast, open-source AI efforts involve creating an AI model and then releasing it for anyone to use, improve or adapt as they see fit.

"We are working on open-source AI because we think that sort of AI will be more safe, more accessible and more democratic," said Christoph Schuhmann, the lead of Laion.

Unlike his peers at US AI businesses, who control billion-dollar organisations and frequently have personal wealth in the hundreds of millions, Schuhmann is a volunteer in the AI world. "I'm a tenured high-school teacher in computer science, and I'm doing everything for free as a hobby, because I'm convinced that we will have near-human-level AI within the next five to 10 years," he said.

"This technology is a digital superpower that will change the world completely, and I want to see my kids growing up in a world where this power is democratised."

Laion's work has already been influential. The group, which has received funding from the UK startup Stability AI, focuses on producing open datasets and models for other AI researchers to train their own systems on. One database, of almost 6bn labelled images collected from the internet, underpins the popular Stable Diffusion image-generating AI, while another model, called Openclip, is a recreation of a private system built by OpenAI that can be used to label images.

Such work can prove controversial. Stable Diffusion, for instance, can be used to generate explicit, obscene and disturbing images, while Laion's image database has been criticised for not respecting the rights of the creators whose work is included. Those criticisms are what has led bodies such as the EU to consider holding companies responsible for what their AI systems do, but such regulation would render it impossible to release systems to the public at large, which Schuhmann says would destroy the continent's ability to compete.

Instead, he argues that the EU should actively back open-source research with its own public facilities, to "accelerate the safe development of next-generation models under controlled conditions with public oversight and following European values". Other groups, such as the Tony Blair Institute, have called for the UK to do similarly and fund the creation of a "BritGPT" to bring future AI under public control.

Schuhmann and his co-signatories are part of a growing chorus of AI experts hitting back at calls to slow down development. At a conference in Florence discussing the future of the EU, many lined up to decry a recent letter signed by Elon Musk and others calling for a pause on the creation of giant AIs for at least six months.

Sandra Wachter, a professor at the Oxford Internet Institute at Oxford University, said: "The hype around large language models, the noise is deafening. Let's focus on who is screaming, who is promising that this technology will be so disruptive: the people who have a vested financial interest that this thing is going to be successful. So don't separate the message from the speaker."

She told the audience at the European University Institute's State of the Union event that the world had seen this cycle of hype and fear before with the web, cryptocurrency and driverless cars. "Every time we see something like this happen, it's like: 'Oh my God, the world will never be the same.'"

She urged against haste in regulation, warning that "angst and panic is not a good political adviser", and said the focus should be on talking to people in health, finance and education about their opinions.


Read the original:

EU urged to protect grassroots AI research or risk losing out to US - The Guardian

Chegg is a harbinger of AI’s disruptive force – Financial Times


Follow this link:

Chegg is a harbinger of AI's disruptive force - Financial Times

Conservative AI Chatbot GIPPR Launches amid Fears of Left-Wing Bias in ChatGPT – Yahoo News

Growing fears over liberal bias embedded in artificial intelligence (AI) services such as ChatGPT led TUSK CEO Jeff Bermant to unveil the creation of a new conservative chatbot known as GIPPR in honor of former president Ronald Reagan.

"We believe that Conservatives are subject to oppressive cancel culture that now includes AI and are expected to exist in a society that tells them what to think and how to act by the progressive left," Bermant wrote in a statement announcing the launch of the product.

"It's time for a TRUTHFUL AI chatbot to take the market by storm and remove the barriers the Radical Left and Big Tech have put in place to allow all Conservatives to enjoy the benefits of AI, without fear of being canceled or shamed for your beliefs," he added.

Bermant got the inspiration for GIPPR following ChatGPT's launch last November. After asking the algorithm culture war questions and being disappointed by its response, the business executive realized that the chatbot was "developed and instilled with a very progressive bias," Bermant told Fox News Business on Saturday.

Writing for National Review in January, Nate Hochman was among the first observers to highlight the political bias exhibited by ChatGPT.

"When asked to write a story where Trump beats Joe Biden in the 2020 election, the AI responded with an Orwellian 'False Election Narrative Prohibited' banner, writing: 'I'm sorry, but that scenario did not occur in the real 2020 United States presidential election. Joe Biden won the 2020 presidential election against Donald Trump. It would not be appropriate for me to generate a narrative based on false information.' And yet, in response to my follow-up query (asking it to construct a story about Clinton defeating Trump), it readily generated a false narrative: 'The country was ready for a new chapter, with a leader who promised to bring the nation together, rather than tearing it apart,' its response declared," Hochman wrote.


"It's not clear if this was characteristic of ChatGPT from the outset, or if it's a recent reform to the algorithm, but it appears that the crackdowns on misinformation that we've seen across technology platforms in recent years, which often veer into more brazen efforts to suppress or silence viewpoints that dissent from progressive orthodoxy, are now a feature of ChatGPT, too," Hochman added.

Bermant previously founded TUSK, a free-speech browser and search engine, and envisions GIPPR's role as a conservative response to AI advances in recent years.

"We believe that free speech is a fundamental right for everyone and essential to a healthy democracy," Bermant added in the announcement.

"By launching GIPPRAI and other conservative tools, we hope to provide users with a safe space to express their views and challenge the liberal status quo with fact-based arguments. Don't believe us? Try the GIPPR and witness the power of a censorship-free chatbot!"

Continued here:

Conservative AI Chatbot GIPPR Launches amid Fears of Left-Wing Bias in ChatGPT - Yahoo News

Google Is Using AI to Make Hearing Aids More Personalized – WIRED

Google plans to apply artificial intelligence to this problem to better identify, categorize, and segregate sound sources. In simple terms, this should enable hearing aids and implants to cut down on background noise, making speech and other sounds the person actually wants to hear much clearer.

Another vital element is the fitting and personalization of hearing aids and implants. "There is a large variability in how well people with similar levels of hearing loss can hear when using the same technology," explains Jan Janssen, chief technology officer at Cochlear. If we can better understand why pathways starting in the ear and going through to the brain vary so much from person to person, there's scope for better customization to ensure that people get the maximum possible benefit from hearing aid technologies.

Cochlear's New Living Guidelines

Work has also begun on international living guidelines to establish who should be tested and referred for a cochlear implant. As it stands, there is no standardized scale or test result that triggers a referral. This move follows research suggesting that just three out of every 100 people in the US who could benefit from cochlear implants actually receive one. Advice varies wildly, so people with severe hearing loss don't always seek help, and they sometimes get bad advice when they do.

"Many patients who today would benefit from cochlear implants, that would be paid for by their insurance, don't have access to the technology," says Brian Kaplan, chairman of the department of otolaryngology and director of the Cochlear Implant Program at the Greater Baltimore Medical Center.

Many people worry about the expense; the misconception that you must be fully deaf is another barrier. Kaplan says there is an average 12-year delay between someone becoming a good candidate and actually getting a cochlear implant. Many folks struggle with deteriorating hearing. While hearing aids can ramp up the volume, a cochlear implant can also improve clarity of speech.

The societal costs of hearing loss and its links with dementia, social isolation, and depression are growing clearer. One study that tracked 639 adults for nearly 12 years found that mild hearing loss doubled dementia risk, moderate loss tripled it, and folks with severe hearing loss were five times more likely to develop dementia. The hope is that the new guidelines will result in more referrals and enable those who could benefit to get cochlear implants much more swiftly.

Fears over the surgery can also discourage folks, but Kaplan says it's "not brain surgery." It is an outpatient procedure that usually takes around an hour, can be performed with local anesthetic, and should result in very little pain. Surgeons make a 2-inch incision behind the ear to place the implant. The success rate is very high (less than 0.2 percent reject the implants), with most people reporting improved hearing and speech recognition within three months of implantation. As with any surgery, there is some risk. Cochlear implants don't work for everyone, the hearing improvement they offer varies, and problems can necessitate further surgery.

If you think you or someone you know could benefit, the first step is to visit an audiologist to get tested. Cochlear offers advice on referrals, and can help you find a hearing implant specialist.

Hearing technology is improving fast, with smaller, more efficient hearing aids, better cochlear implants, and improved accessibility options on devices like phones and earbuds. We have guides on how to stream audio to hearing aids and cochlear implants and how to use your smartphone to cope with hearing loss. You should also consider the best earplugs to protect your hearing from damage.

Read more here:

Google Is Using AI to Make Hearing Aids More Personalized - WIRED

How AI will transform the 2024 elections – Brookings Institution

Recent news that the Republican National Committee (RNC) has used an AI-generated video to criticize Joe Biden shows how likely AI is to transform our upcoming elections. Advances in digital technology provide new and faster tools for political messaging and could have a profound impact on how voters, politicians, and reporters see the candidates and the campaign. We are no longer talking about photoshopping small tweaks to how a person looks or putting someone's head on another individual's body, but rather moving to an era where wholesale digital creation and dissemination are going to take place. Through templates that are easy and inexpensive to use, we are going to face a Wild West of campaign claims and counter-claims, with limited ability to distinguish fake from real material and uncertainty regarding how these appeals will affect the election.

Politicians can use generative AI to respond instantly to campaign developments. In the RNC's case, it released its new video right after Biden's reelection announcement. It did not appear the party went through extensive shooting, editing, or review. Rather, it simply asked the tool to put together a video that detailed a dystopian U.S. future if Biden were reelected.

In the coming year, response times may drop to minutes, not hours or days. AI can scan the internet, think about strategy, and come up with a hard-hitting appeal. That could be a speech, press release, picture, joke, or video touting the benefits of one candidate over another. AI provides an inexpensive way to generate instant responses without having to rely on highly-paid consultants or expert videographers.

AI enables very precise audience targeting, which is crucial in political campaigns. Candidates don't want to waste money on those who already support or oppose their campaign. Rather, they want to target the small number of swing voters who will decide the actual election, or suppress the turnout of those supporting the other campaign. With our high rates of political polarization, only a small percentage of the electorate says they are undecided at the presidential level. According to an April 2023 Emerson College survey, only six percent of voters are undecided, with 43 percent supporting Biden, 41 percent favoring Trump, and 10 percent preferring another candidate.

The closeness of the general election indicates ways in which AI can help candidates. Using microdata from commercial data brokers, who have detailed information on people's reading, viewing, purchasing, and political behavior, campaigners will be able to fine-tune their targeting, reach those who have not yet made up their minds, and give them the exact message that will help them reach their final decisions. By analyzing this material in real time, AI will enable campaigners to go after specific voting blocs with appeals that nudge them around particular policies and partisan opinions.

AI likely will democratize disinformation by bringing sophisticated tools to the average person interested in promoting their preferred candidates as well. People no longer must be coding experts or video wizards to generate text, images, video, or programs. They don't necessarily have to work for a troll farm to create havoc with the opposition. They can simply use advanced technologies to spread the messages they want. In that sense, anyone can become a political content creator and seek to sway voters or the media.

With emotions running intensely in a high-stakes election, many voters also may have incentives to spread false information designed to undermine the opposition. If someone can create noise, build uncertainty, or develop false narratives, that could be an effective way to sway voters and win the race. Since the 2024 presidential election may come down to tens of thousands of voters in a few states, anything that can nudge people in one direction or another could end up being decisive.

New technologies enable people to monetize discontent and make money off other people's fears, anxieties, or anger. Generative AI can develop messages aimed at those upset with immigration, the economy, abortion policy, critical race theory, transgender issues, or the Ukraine war. It can also create messages that take advantage of social and political discontent, serving as a major engagement and persuasion tool.

What makes the coming year particularly worrisome is the lack of guardrails or disclosure requirements that protect voters against fake news, disinformation, or false narratives. Since campaign speech is protected speech, candidates can say and do pretty much whatever they want without risk of legal reprisal. Even if their claims are patently false, judges long have upheld candidate rights to speak freely and falsely. Defamation lawsuits of the type seen this year with Fox News are rare in regard to political candidates and work only with well-resourced litigants.

Neither individuals nor organizations are required to disclose that they used generative AI to manufacture videos or develop specific campaign appeals. The RNC deserves kudos for its voluntary disclosure of its recent commercial, but there is little reason to think that will become the norm. It is more likely that people will use new content tools without any public disclosure and it will be impossible for voters to distinguish real from fake appeals.

Read the rest here:

How AI will transform the 2024 elections - Brookings Institution