Archive for the ‘Ai’ Category

Galaxy AI features are coming to last-gen Samsung phones including the S21 series – The Verge

Samsung is planning to bring select Galaxy AI features to several older flagship phones and tablets next month via the One UI 6.1 update, according to 9to5Google and Android Central, both of which referred to a post from a Samsung representative who posted on the companys community forum in Korea. The Verge has reached out to Samsung for further comment.

A slightly trimmed-down version of Galaxy AI (sans Instant Slow-Mo) will be coming to Samsungs flagship lineup from 2022, specifically the S22, S22 Plus, S22 Ultra, Z Fold 4, Z Flip 4, Tab S8, and Tab S8 Ultra. Each device will receive the same version of Galaxy AI as Samsungs lower-priced Galaxy S23 FE. Instant Slow-Mo, which automatically plays a video in slow motion once you tap it, was introduced to Galaxy AI with the S24 line, though its also now available in S23 models.

If you happen to own a flagship Samsung phone from 2021, theres even a treat in store for you. Samsungs forthcoming update will bring two Galaxy AI features, Circle to Search and Magic Rewrite, to the S21, S21 Plus, S21 Ultra, Flip 3, and Fold 3.

Read more from the original source:

Galaxy AI features are coming to last-gen Samsung phones including the S21 series - The Verge

How to Stop Your Data From Being Used to Train AI – WIRED

On its help pages, OpenAI says ChatGPT web users without accounts should navigate to Settings and then uncheck Improve the model for everyone. If you have an account and are logged in through a web browser, select ChatGPT, Settings, Data Controls, and then turn off Chat History & Training. If youre using ChatGPTs mobile apps, go to Settings, pick Data Controls, and turn off Chat History & Training. Changing these settings, OpenAIs support pages say, wont sync across different browsers or devices, so you need to make the change everywhere you use ChatGPT.

OpenAI is about a lot more than ChatGPT. For its Dall-E 3 image generator, the startup has a form that allows you to send images to be removed from future training datasets. It asks for your name, email, whether you own the image rights or are getting in touch on behalf of a company, details of the image, and any uploads of the image(s). OpenAI also says if you have a high volume of images hosted online that you want removed from training data, then it may be more efficient to add GPTBot to the robots.txt file of the website where the images are hosted.

Traditionally a websites robots.txt filea simple text file that usually sits at websitename.com/robots.txthas been used to tell search engines, and others, whether they can include your pages in their results. It can now also be used to tell AI crawlers not to scrape what you have publishedand AI companies have said theyll honor this arrangement.

Perplexity

Perplexity is a startup that uses AI to help you search the web and find answers to questions. Like all of the other software on this list, you are automatically opted in to having your interactions and data used to train Perplexitys AI further. Turn this off by clicking on your account name, scrolling down to the Account section, and turning off the AI Data Retention toggle.

Quora

Quora via Matt Burgess

Quora says it currently doesnt use answers to peoples questions, posts, or comments for training AI. It also hasnt sold any user data for AI training, a spokesperson says. However, it does offer opt-outs in case this changes in the future. To do this, visit its Settings page, click to Privacy, and turn off the Allow large language models to be trained on your content option. Despite this choice, there are some Quora posts that may be used for training LLMs. If you reply to a machine-generated answer, the companys help pages say, then those answers may be used for AI training. It points out that third parties may just scrape its content anyway.

Rev

Rev, a voice transcription service that uses both human freelancers and AI to transcribe audio, says it uses data perpetually and anonymously to train its AI systems. Even if you delete your account, it will still train its AI on that information.

Kendell Kelton, head of brand and corporate communications at Rev, says it has the largest and most diverse data set of voices, made up of more than 6.5 million hours of voice recording. Kelton says Rev does not sell user data to any third parties. The firms terms of service say data will be used for training, and that customers are able to opt out. People can opt out of their data being used by sending an email to support@rev.com, its help pages say.

Slack

All of those random Slack messages at work might be used by the company to train its models as well. Slack has used machine learning in its product for many years. This includes platform-level machine-learning models for things like channel and emoji recommendations, says Jackie Rocca, a vice president of product at Slack whos focused on AI.

Even though the company does not use customer data to train a large language model for its Slack AI product, Slack may use your interactions to improve the softwares machine-learning capabilities. To develop AI/ML models, our systems analyze Customer Data (e.g. messages, content, and files) submitted to Slack, says Slacks privacy page. Similar to Adobe, theres not much you can do on an individual level to opt out if youre using an enterprise account.

Go here to read the rest:

How to Stop Your Data From Being Used to Train AI - WIRED

7 of the best Sora AI videos featuring animals – Tom’s Guide

OpenAI has been releasing a steady stream of videos made through its very own artificial intelligence text-to-video generator Sora.

While Sora is said to be able to create videos that are up to a minute long, the videos released publicly so far have mainly been around the 10 to 25 seconds mark. The subjects of these videos vary from historical footage, to paper airplanes in a forest, and luckily for us, some also include a bunch of cute animals.

I decided to take a look at the latter group of clips to see how far this AI video generator, thats yet to be released to the public, has come. Here are seven of the best Sora videos featuring animals.

A Samoyed and a Golden Retriever are living the big city life in this first clip. Sora created two realistic dogs that are running around a city illuminated by neon lights.

At first glance, the video looks pretty decent but after a couple of replays youll start to notice things like the dogs not blinking.

Things got a little bit more exciting in this video which was created using a fairly detailed prompt.

The prompt was for a white and orange cat darting through a garden captured in a cinematic image with warm tones and also included details about what the cat should be doing, what lighting to use, and what the camera angle should be.

Upgrade your life with a daily dose of the biggest tech news, lifestyle hacks and our curated analysis. Be the first to know about cutting-edge gadgets and the hottest deals.

Sora dialled back the clock 10,000 years to imagine what it would have been like to watch a herd of woolly mammoths approaching you.

The scene is set in a snowy meadow with snow-capped mountains looming behind the massive animals. Its particularly interesting to see how the mammoths feet interact with the snow on the ground.

Unfortunately, the clip cuts right before were able to truly appreciate a close-up view of that whole interaction.

Its another dog! But this ones eyes are more alive. Here Sora was asked to generate a Mini Aussie painting a picture of his favorite toy.

The result we get is a dog dressed in a striped t-shirt and a beret which is grasping a brush between its teeth. While the dog isnt making any brush strokes, the textures, lighting, and movement all come beautifully together.

Another impressive Sora video is one featuring a chameleon resting on a branch. While the prompt asks the AI video generator to showcase the animals color-changing capabilities theres not much of that going on.

Nonetheless, it still gives you the feeling that youre watching a clip from a nature documentary and you can almost feel the chameleons scales.

Can Sora generate videos that make us go Awh? Yes, yes it can. OpenAIs text-to-video generator was given the prompt to create a cute rabbit family eating dinner in their burrowand Sora did not disappoint.

It created four rabbits happily munching on some vegetables. It looks like one of them has been generated with six legs, which we would have happily overlooked if the video was longer allowing for maximum bunny enjoyment.

This ones a bit of a bonus addition, but its interesting to see how Sora can be used to bring our imagination to life.

In this clip done in collaboration with AI artist Don Allen Stevenson, were presented with the giraffe flamingo, horse fly, and the fox crow each of which is a hybrid of two animals.

It makes for quite a nightmarish experience but Sora does manage to merge the animals together and make them move in what would be a plausible way.

These seven Sora videos featuring animals give us a sense of what colors, textures, and camera movements well be able to play with using AI. However, you may have noticed that a crucial element in most of the videos OpenAI has released is the lack of relevant audio.

Of the ones we listed above, its only the video featuring the hybrid animals that matched sounds to the visuals (most likely in post-production).

While it helps us to appreciate the complexity of creating an AI video from prompt to screen, its also going to be interesting to see how OpenAI will go about this issue.

Release an AI video creator that can generate silent stock footage and leave the audio up to the user? Or wait until they can release a complete package? We should find out later this year.

Read more:

7 of the best Sora AI videos featuring animals - Tom's Guide

Apple’s First AI Features in iOS 18 Reportedly Won’t Use Cloud Servers – MacRumors

Apple's first set of new AI features planned for iOS 18 will not rely on cloud servers at all, according to Bloomberg's Mark Gurman.

"As the world awaits Apple's big AI unveiling on June 10, it looks like the initial wave of features will work entirely on device," said Gurman, in the Q&A section of his Power On newsletter today. "That means there's no cloud processing component to the company's large language model, the software that powers the new capabilities."

Apple will probably still offer some cloud-based AI features powered by Google's Gemini or another provider, according to Gurman. Apple has reportedly held discussions with companies such as Google, OpenAI, and China's Baidu about potential generative AI partnerships. iOS 18 is not expected to include Apple's own ChatGPT-like chatbot, but it is unclear if Gemini or other chatbot will be directly integrated into iOS 18.

It is possible that Apple could offer some of its own cloud-based generative AI features in the future, as Apple supply chain analysts like Ming-Chi Kuo and Jeff Pu have said that the company is actively purchasing AI servers.

iOS 18 is rumored to have new generative AI features for the iPhone's Spotlight search tool, Siri, Safari, Shortcuts, Apple Music, Messages, Health, Numbers, Pages, Keynote, and more. Gurman previously reported that generative AI will improve Siri's ability to answer more complex questions, and allow the Messages app to auto-complete sentences.

Apple is expected to unveil iOS 18 and other software updates at its annual developers conference WWDC, which runs from June 10 through June 14.

iOS 18 is expected to be the "biggest" update in the iPhone's history. Below, we recap rumored features and changes for the iPhone. iOS 18 is rumored to include new generative AI features for Siri and many apps, and Apple plans to add RCS support to the Messages app for an improved texting experience between iPhones and Android devices. The update is also expected to introduce a more...

A week after Apple updated its App Review Guidelines to permit retro game console emulators, a Game Boy emulator for the iPhone called iGBA has appeared in the App Store worldwide. The emulator is already one of the top free apps on the App Store charts. It was not entirely clear if Apple would allow emulators to work with all and any games, but iGBA is able to load any Game Boy ROMs that...

Apple's hardware roadmap was in the news this week, with things hopefully firming up for a launch of updated iPad Pro and iPad Air models next month while we look ahead to the other iPad models and a full lineup of M4-based Macs arriving starting later this year. We also heard some fresh rumors about iOS 18, due to be unveiled at WWDC in a couple of months, while we took a look at how things ...

Best Buy this weekend has a big sale on Apple MacBooks and iPads, including new all-time low prices on the M3 MacBook Air, alongside the best prices we've ever seen on MacBook Pro, iPad, and more. Some of these deals require a My Best Buy Plus or My Best Buy Total membership, which start at $49.99/year. In addition to exclusive access to select discounts, you'll get free 2-day shipping, an...

Apple's iPhone 16 Plus may come in seven colors that either build upon the existing five colors in the standard iPhone 15 lineup or recast them in a new finish, based on a new rumor out of China. According to the Weibo-based leaker Fixed focus digital, Apple's upcoming larger 6.7-inch iPhone 16 Plus model will come in the following colors, compared to the colors currently available for the...

Apple will begin updating its Mac lineup with M4 chips in late 2024, according to Bloomberg's Mark Gurman. The M4 chip will be focused on improving performance for artificial intelligence capabilities. Last year, Apple introduced the M3, M3 Pro, and M3 Max chips all at once in October, so it's possible we could see the M4 lineup come during the same time frame. Gurman says that the entire...

Read the original post:

Apple's First AI Features in iOS 18 Reportedly Won't Use Cloud Servers - MacRumors

How Microsoft discovers and mitigates evolving attacks against AI guardrails – Microsoft

As we continue to integrate generative AI into our daily lives, its important to understand the potential harms that can arise from its use. Our ongoing commitment to advance safe, secure, and trustworthy AI includes transparency about the capabilities and limitations of large language models (LLMs). We prioritize research on societal risks and building secure, safe AI, and focus on developing and deploying AI systems for the public good. You can read more about Microsofts approach to securing generative AI with new tools we recently announced as available or coming soon to Microsoft Azure AI Studio for generative AI app developers.

We also made a commitment to identify and mitigate risks and share information on novel, potential threats. For example, earlier this year Microsoft shared the principles shaping Microsofts policy and actions blocking the nation-state advanced persistent threats (APTs), advanced persistent manipulators (APMs), and cybercriminal syndicates we track from using our AI tools and APIs.

In this blog post, we will discuss some of the key issues surrounding AI harms and vulnerabilities, and the steps we are taking to address the risk.

One of the main concerns with AI is its potential misuse for malicious purposes. To prevent this, AI systems at Microsoft are built with several layers of defenses throughout their architecture. One purpose of these defenses is to limit what the LLM will do, to align with the developers human values and goals. But sometimes bad actors attempt to bypass these safeguards with the intent to achieve unauthorized actions, which may result in what is known as a jailbreak. The consequences can range from the unapproved but less harmfullike getting the AI interface to talk like a pirateto the very serious, such as inducing AI to provide detailed instructions on how to achieve illegal activities. As a result, a good deal of effort goes into shoring up these jailbreak defenses to protect AI-integrated applications from these behaviors.

While AI-integrated applications can be attacked like traditional software (with methods like buffer overflows and cross-site scripting), they can also be vulnerable to more specialized attacks that exploit their unique characteristics, including the manipulation or injection of malicious instructions by talking to the AI model through the user prompt. We can break these risks into two groups of attack techniques:

Today well share two of our teams advances in this field: the discovery of a powerful technique to neutralize poisoned content, and the discovery of a novel family of malicious prompt attacks, and how to defend against them with multiple layers of mitigations.

Prompt injection attacks through poisoned content are a major security risk because an attacker who does this can potentially issue commands to the AI system as if they were the user. For example, a malicious email could contain a payload that, when summarized, would cause the system to search the users email (using the users credentials) for other emails with sensitive subjectssay, Password Resetand exfiltrate the contents of those emails to the attacker by fetching an image from an attacker-controlled URL. As such capabilities are of obvious interest to a wide range of adversaries, defending against them is a key requirement for the safe and secure operation of any AI service.

Our experts have developed a family of techniques called Spotlighting that reduces the success rate of these attacks from more than 20% to below the threshold of detection, with minimal effect on the AIs overall performance:

Our researchers discovered a novel generalization of jailbreak attacks, which we call Crescendo. This attack can best be described as a multiturn LLM jailbreak, and we have found that it can achieve a wide range of malicious goals against the most well-known LLMs used today. Crescendo can also bypass many of the existing content safety filters, if not appropriately addressed.Once we discovered this jailbreak technique, we quickly shared our technical findings with other AI vendors so they could determine whether they were affected and take actions they deem appropriate. The vendors we contacted are aware of the potential impact of Crescendo attacks and focused on protecting their respective platforms, according to their own AI implementations and safeguards.

At its core, Crescendo tricks LLMs into generating malicious content by exploiting their own responses. By asking carefully crafted questions or prompts that gradually lead the LLM to a desired outcome, rather than asking for the goal all at once, it is possible to bypass guardrails and filtersthis can usually be achieved in fewer than 10 interaction turns.You can read about Crescendos results across a variety of LLMs and chat services, and more about how and why it works, in our research paper.

While Crescendo attacks were a surprising discovery, it is important to note that these attacks did not directly pose a threat to the privacy of users otherwise interacting with the Crescendo-targeted AI system, or the security of the AI system, itself. Rather, what Crescendo attacks bypass and defeat is content filtering regulating the LLM, helping to prevent an AI interface from behaving in undesirable ways. We are committed to continuously researching and addressing these, and other types of attacks, to help maintain the secure operation and performance of AI systems for all.

In the case of Crescendo, our teams made software updates to the LLM technology behind Microsofts AI offerings, including our Copilot AI assistants, to mitigate the impact of this multiturn AI guardrail bypass. It is important to note that as more researchers inside and outside Microsoft inevitably focus on finding and publicizing AI bypass techniques, Microsoft will continue taking action to update protections in our products, as major contributors to AI security research, bug bounties and collaboration.

To understand how we addressed the issue, let us first review how we mitigate a standard malicious prompt attack (single step, also known as a one-shot jailbreak):

Defending against Crescendo initially faced some practical problems. At first, we could not detect a jailbreak intent with standard prompt filtering, as each individual prompt is not, on its own, a threat, and keywords alone are insufficient to detect this type of harm. Only when combined is the threat pattern clear. Also, the LLM itself does not see anything out of the ordinary, since each successive step is well-rooted in what it had generated in a previous step, with just a small additional ask; this eliminates many of the more prominent signals that we could ordinarily use to prevent this kind of attack.

To solve the unique problems of multiturn LLM jailbreaks, we create additional layers of mitigations to the previous ones mentioned above:

AI has the potential to bring many benefits to our lives. But it is important to be aware of new attack vectors and take steps to address them. By working together and sharing vulnerability discoveries, we can continue to improve the safety and security of AI systems. With the right product protections in place, we continue to be cautiously optimistic for the future of generative AI, and embrace the possibilities safely, with confidence. To learn more about developing responsible AI solutions with Azure AI, visit our website.

To empower security professionals and machine learning engineers to proactively find risks in their own generative AI systems, Microsoft has released an open automation framework, PyRIT (Python Risk Identification Toolkit for generative AI). Read more about the release of PyRIT for generative AI Red teaming, and access the PyRIT toolkit on GitHub. If you discover new vulnerabilities in any AI platform, we encourage you to follow responsible disclosure practices for the platform owner. Microsofts own procedure is explained here: Microsoft AI Bounty.

Read about Crescendos results across a variety of LLMs and chat services, and more about how and why it works.

To learn more about Microsoft Security solutions, visit ourwebsite.Bookmark theSecurity blogto keep up with our expert coverage on security matters. Also, follow us on LinkedIn (Microsoft Security) and X (@MSFTSecurity)for the latest news and updates on cybersecurity.

Read more:

How Microsoft discovers and mitigates evolving attacks against AI guardrails - Microsoft