Archive for the ‘Ai’ Category

Wisconsin man arrested for allegedly creating AI-generated child sexual abuse material – The Verge

A Wisconsin software engineer was arrested on Monday for allegedly creating and distributing thousands of AI-generated images of child sexual abuse material (CSAM).

Court documents describe Steven Anderegg as extremely technologically savvy, with a background in computer science and decades of experience in software engineering. Anderegg, 42, is accused of sending AI-generated images of naked minors to a 15-year-old boy via Instagram DM. Anderegg was put on law enforcement's radar after the National Center for Missing & Exploited Children flagged the messages, which he allegedly sent in October 2023.

According to information law enforcement obtained from Instagram, Anderegg posted an Instagram story in 2023 consisting of a realistic GenAI image of minors wearing BDSM-themed leather clothes and encouraged others to check out what they were missing on Telegram. In private messages with other Instagram users, Anderegg allegedly discussed his desire to have sex with prepubescent boys and told one Instagram user that he had tons of other AI-generated CSAM images on his Telegram.

Anderegg allegedly began sending these images to another Instagram user after learning he was only 15 years old. When this minor made his age known, the defendant did not rebuff him or inquire further. Instead, he wasted no time in describing to this minor how he creates sexually explicit GenAI images and sent the child custom-tailored content, charging documents claim.

When law enforcement searched Anderegg's computer, they found over 13,000 images, with hundreds if not thousands of them depicting nude or semi-clothed prepubescent minors, according to prosecutors. Charging documents say Anderegg made the images with the text-to-image model Stable Diffusion, a product created by Stability AI, using extremely specific and explicit prompts. He also allegedly used negative prompts to avoid creating images depicting adults, along with third-party Stable Diffusion add-ons that specialized in producing genitalia.

Last month, several major tech companies, including Google, Meta, OpenAI, Microsoft, and Amazon, said they'd review their AI training data for CSAM. The companies committed to a new set of principles that include stress-testing models to ensure they aren't creating CSAM. Stability AI also signed on to the principles.

According to prosecutors, this is not the first time Anderegg has come into contact with law enforcement over his alleged possession of CSAM via a peer-to-peer network. In 2020, someone using the internet in Anderegg's Wisconsin home tried to download multiple files of known CSAM, prosecutors claim. Law enforcement searched his home in 2020, and Anderegg admitted to having a peer-to-peer network on his computer and frequently resetting his modem, but he was not charged.

In a brief supporting Anderegg's pretrial detention, the government noted that he's worked as a software engineer for more than 20 years, and that his CV includes a recent job at a startup where he used his excellent technical understanding in formulating AI models.

If convicted, Anderegg faces up to 70 years in prison, though prosecutors say the recommended sentencing range may be as high as life imprisonment.

The rest is here:

Wisconsin man arrested for allegedly creating AI-generated child sexual abuse material - The Verge

Beyond keywords: AI-driven approaches to improve data discoverability – World Bank

This blog is part of AI for Data, Data for AI, a series aiming to unwrap, explain, and foster the intersection of artificial intelligence and data. This post is the third installment of the series; for further reading, here are the first and second installments.

Data is essential for generating knowledge and informing policies. Organizations that produce large volumes of diverse data face challenges in managing and disseminating it effectively. One major challenge is ensuring users can easily find the most relevant data for their needs, a problem known as data discoverability.

Organizations like the World Bank have systems to make their data assets discoverable. Traditionally, these systems use lexical or keyword search applications, indexing available metadata to enable data discovery through search terms. However, this approach limits discovery to the keywords in the accompanying metadata documentation, returning nothing beyond those terms.

Artificial intelligence (AI), and large language models (LLMs) in particular, can enhance data systems to make relevant and timely data discoverable. With richer metadata and AI-enabled solutions, techniques such as semantic search, hybrid search, knowledge graphs, and recommendation systems can be brought to bear.

In this post, we explore how simple AI applications can overcome the limitations of keyword-based search. We also discuss AI-enabled techniques that improve our understanding of users' information needs, leading to a better data search experience.
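
To make the limitation of keyword matching concrete, here is a minimal sketch of embedding-based semantic search over catalog metadata. It is not drawn from the World Bank's actual systems: the sentence-transformers model choice, the catalog entries, and the query are all illustrative assumptions.

```python
# Minimal semantic-search sketch over dataset metadata (illustrative only).
# Assumes: pip install sentence-transformers numpy
import numpy as np
from sentence_transformers import SentenceTransformer

# Hypothetical catalog entries: dataset titles / short descriptions.
catalog = [
    "Poverty headcount ratio at $2.15 a day (share of population)",
    "Household survey on food insecurity and coping strategies",
    "Annual GDP growth by country, constant 2015 US dollars",
    "School enrollment, primary, by gender",
]

model = SentenceTransformer("all-MiniLM-L6-v2")  # small general-purpose embedder
doc_vecs = model.encode(catalog, normalize_embeddings=True)

def search(query: str, k: int = 2):
    """Return the top-k catalog entries by cosine similarity to the query."""
    q_vec = model.encode([query], normalize_embeddings=True)[0]
    scores = doc_vecs @ q_vec  # cosine similarity, since vectors are unit-length
    return [(catalog[i], float(scores[i])) for i in np.argsort(-scores)[:k]]

# A keyword search for "hunger" matches none of the titles above;
# an embedding search can still surface the food-insecurity survey.
print(search("hunger among poor households"))
```

A hybrid system would combine these similarity scores with a conventional keyword index, which is why hybrid search is listed alongside semantic search above.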

More:

Beyond keywords: AI-driven approaches to improve data discoverability - World Bank

Wearable AI Pin maker Humane is reportedly seeking a buyer – Engadget

The tech startup Humane is seeking a buyer for its business just over a month after it released the AI Pin, according to Bloomberg. Engadget's Cherlynn Low described the AI Pin as a "wearable Siri button," because it's a small device you can wear that was designed with a very specific purpose in mind: to give you ready access to an AI assistant. Humane is working with a financial adviser, Bloomberg said, and is apparently hoping to sell for anywhere between $750 million and $1 billion.

The company drummed up a lot of interest and successfully raised $230 million from high-profile investors. However, a billion may be a huge ask given that the AI Pin was mostly panned by critics at launch. We gave the AI Pin a score of 50 out of 100 in our review for several reasons. It was slow, taking a few seconds to reply when we asked it questions. The responses were irrelevant at times and weren't any better than what you could get with a quick Google search. Its touchpad grew warm with use, it had poor battery life, and its projector screen, while novel, was pretty hard to control. The Humane AI Pin also isn't cheap: it costs $700 to buy and requires a monthly fee of $24 to access the company's artificial intelligence technology and 4G service riding on T-Mobile's network, which works out to roughly $988 in the first year ($700 plus 12 months at $24). In a post on its website, Humane said that it was listening to feedback and listed several problem areas it intends to focus on.

Another dedicated AI gadget, the Rabbit R1, is much more affordable at $199, but it's still not cheap enough to make the category more popular than it is, especially since you could easily take out your phone to use AI tools when needed. Humane's effort to sell its business is still in its very early stages, Bloomberg noted, and it might not close a deal at all.

See original here:

Wearable AI Pin maker Humane is reportedly seeking a buyer - Engadget

Total Recall: the only Copilot+ AI feature that matters is a huge privacy risk – Tom’s Hardware

Microsoft's just-announced classification of "Copilot+ PCs" leaves a lot of users out in the cold. To access a suite of new AI features in an upcoming build of Windows 11, you'll need a processor with a Neural Processing Unit (NPU) capable of hitting 40 TOPS (trillions of operations per second). To date, there's only one processor family that can hit that number: Qualcomm's upcoming Snapdragon X series of mobile chips. And the first laptops with Snapdragon X aren't even due to ship for a few weeks.

Anyone who currently owns a laptop or desktop, even one with one of the best CPUs, is out of luck and won't have access to these features. Laptops based on Intel's next-gen Lunar Lake CPUs will ship in Q3 and will meet the 40+ TOPS requirement, but any computer you currently own, or buy today, will not.

So, how bad should you feel about being left out in the cold? If Microsoft's Copilot+ PC press event this week is any indication, not very bad at all. Of the four exclusive AI features Microsoft showed, three are either available elsewhere or so niche that few people will use them. Only one, Recall, offers something PC users haven't seen before, and it has some very creepy implications for your privacy.

Before we talk about Recall, why it's useful, and why it might also be a security nightmare, let's spend a moment on why you shouldn't care about the other Copilot+ features.

Recall, on the other hand, offers a feature that you can't get in Windows right now. When enabled, it takes a screenshot of your entire desktop, called a "snap," every few seconds. You can then open Recall and query the content of those images or scroll through your timeline to remind yourself of what you were doing.

So, for example, if you were doing some online shopping a few days ago, you can search Recall for "red shoes" and it will show you all the snapshots of the moments when you were looking at red shoes. If you were on a website, you'll also get a link under the snapshot that you can click to go back to the page you were on. If the result was in a Word document or a PowerPoint presentation, Recall will open the appropriate file for you and even take you straight to the slide (in the case of PowerPoint) with the data on it.

If you ask about a presentation or a spreadsheet you were working on, Recall will take you back to the relevant snapshot, and you can even copy text directly from the snapshots (essentially, image-to-text). Since it has screenshots of all your online conversations, you can also ask it "what did grandma say" and it will show you screenshots of your conversations with her.
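
Microsoft hasn't published how Recall is built, but the loop described above (snap, extract text, index, query) can be approximated with off-the-shelf parts. The sketch below is a toy approximation, not Recall's actual pipeline; it assumes the mss, Pillow, and pytesseract packages (plus a local Tesseract install) and uses SQLite's built-in FTS5 full-text index.

```python
# Toy "Recall-like" loop: screenshot -> OCR -> full-text index (illustrative only).
# Assumes: pip install mss pillow pytesseract, plus the Tesseract OCR binary.
import sqlite3
import time

import mss
import pytesseract
from PIL import Image

db = sqlite3.connect("snaps.db")
db.execute("CREATE VIRTUAL TABLE IF NOT EXISTS snaps USING fts5(ts, body)")

def take_snap() -> None:
    """Grab the primary monitor, OCR it, and index the extracted text."""
    with mss.mss() as sct:
        shot = sct.grab(sct.monitors[1])  # monitors[1] is the primary display
        img = Image.frombytes("RGB", shot.size, shot.rgb)
    db.execute("INSERT INTO snaps VALUES (?, ?)",
               (time.ctime(), pytesseract.image_to_string(img)))
    db.commit()

def recall(query: str):
    """Full-text search over everything that has appeared on screen."""
    return db.execute(
        "SELECT ts, snippet(snaps, 1, '[', ']', '...', 8) "
        "FROM snaps WHERE snaps MATCH ?", (query,)).fetchall()

# e.g. call take_snap() every few seconds in a loop, then recall('"red shoes"')
```

Even this toy version makes the privacy stakes plain: snaps.db quietly accumulates everything that crosses the screen, passwords and account numbers included, in a single searchable file.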


Senior Editor Brandon Hill got a chance to see Recall in action at Microsoft's press event. He saw someone search for "papaya salad" and get a snapshot of a web page in the timeline from when the user had visited a recipe website. He saw someone query "can you find my water report" and Recall showed a mix of web pages, a PDF, and an Excel document that all had water reports on them.


He also shot a video in which a Microsoft rep scrolls through Recall's timeline and then asks it for "offer," which pulls up images of offers.

Google Photos and other online services have a lot of the same image-recognition capabilities as Recall. When I went through my Google Photos album and asked for a variety of things, the online tool did a great job of locating photos I had never tagged. When I asked it for "Halloween," it accurately pulled up pictures of me and my kids in costume. And when I asked it for "Maya," it pulled up pictures of my daughter from her birthday, because she was wearing a crown with her name on it and it read the text.

Recall combines the ability to index photos based on content with an archive of constant screenshots of your activity. And it adds in image-to-text and image-to-web-page-URL capabilities. It's just speculation on my part, but perhaps a future version of the tool will integrate with other parts of your OS; I could imagine it adding something to your calendar if you tell a friend in a chat message, "I'll meet you at 7:30."

For Mac and iOS users, there's a third-party app called Rewind that does what Recall does and also includes audio recognition. But we've never seen anything like Recall for PCs until now.

The question is this: How comfortable do you feel about having an app that tracks your every activity, keeping an image record of it? And do you really want (or need) a tool to ask about... your own activities?

Microsoft is taking some steps to make sure that the snaps Recall takes stay private. According to the company, the snaps remain encrypted on your local storage drive and are never synced to the cloud or sent to Microsoft. You also have the ability to exclude apps from Recall, delete individual snaps, or disable the feature entirely. It's easy to see why Recall would need a powerful NPU to search through all of those images without getting any help from a server on the Internet.

But let's be clear: Recall poses some serious privacy risks even if it works as advertised. You're creating photographic records of all your activities, including every time you type in a password that isn't hidden by **** symbols. The same goes for sensitive personal data, such as your Social Security number or bank account number. If someone found a way to log into your computer, either in person or remotely, they could easily find this treasure trove of important information. And if you share a family computer with the kids or a spouse and they don't log in under their own accounts (which they should), they can also see your snaps.

Whether or not you feel comfortable with the privacy risks of Recall, you have to ask: Is Recall a game-changing feature that would make you buy a new PC or, conversely, not buy a PC that doesn't support it? My guess is that, for most people, the answer is "no."

If the answer is "yes," you'll need to either buy one of the handful of Snapdragon X-powered laptops coming out in June or wait until the fall, when Intel (and possibly AMD) will have mobile chips with similarly powerful NPUs. If you want a desktop PC of any kind, get ready to wait: we don't expect desktop chips with NPUs until at least Q4, when Intel Arrow Lake processors come out.

Read more:

Total Recall: the only Copilot+ AI feature that matters is a huge privacy risk - Tom's Hardware

In Seoul summit, heads of states and companies commit to AI safety – TechCrunch

Government officials and AI industry executives agreed on Tuesday to apply elementary safety measures in the fast-moving field and establish an international safety research network.

Nearly six months after the inaugural global summit on AI safety at Bletchley Park in England, Britain and South Korea are hosting the AI safety summit this week in Seoul. The gathering underscores the new challenges and opportunities the world faces with the advent of AI technology.

The British government announced on Tuesday a new agreement between 10 countries and the European Union to establish an international network, similar to the U.K.'s AI Safety Institute, the world's first publicly backed AI safety organization, to accelerate the advancement of AI safety science. The network will promote a common understanding of AI safety and align its work with research, standards, and testing. Australia, Canada, the EU, France, Germany, Italy, Japan, Singapore, South Korea, the U.K., and the U.S. have signed the agreement.

On the first day of the AI summit in Seoul, global leaders and leading AI companies convened for a virtual meeting chaired by U.K. prime minister Rishi Sunak and South Korean president Yoon Suk Yeol to discuss AI safety, innovation, and inclusion.

During the discussions, the leaders agreed to the broader Seoul Declaration, which emphasizes increased international collaboration in building AI to address major global issues, uphold human rights, and bridge digital gaps worldwide, all while prioritizing AI that is human-centric, trustworthy, and responsible.

"AI is a hugely exciting technology and the U.K. has led global efforts to deal with its potential, hosting the world's first AI Safety Summit last year," Sunak said in a U.K. government statement. "But to get the upside, we must ensure it's safe. That's why I'm delighted we have got an agreement today for a network of AI Safety Institutes."

Just last month, the U.K. and the U.S. sealed a partnership memorandum of understanding to collaborate on research, safety evaluation, and guidance on AI safety.

The agreement announced today follows the world's first AI Safety Commitments from 16 companies involved in AI, including Amazon, Anthropic, Cohere, Google, IBM, Inflection AI, Meta, Microsoft, Mistral AI, OpenAI, Samsung Electronics, Technology Innovation Institute, xAI, and Zhipu.ai. (Zhipu.ai is a Chinese company backed by Alibaba, Ant, and Tencent.)

The AI companies, including those from the U.S., China, and the United Arab Emirates (UAE), have agreed to safety commitments not to develop or deploy a model or system at all if mitigations cannot keep risks below the agreed thresholds, according to the U.K. government statement.

"It's a world first to have so many leading AI companies from so many different parts of the globe all agreeing to the same commitments on AI safety," Sunak said. "These commitments ensure the world's leading AI companies will provide transparency and accountability on their plans to develop safe AI."

Continued here:

In Seoul summit, heads of states and companies commit to AI safety - TechCrunch