Archive for the ‘AI’ Category

Releasing a new paper on openness and artificial intelligence – Mozilla & Firefox

For the past six months, the Columbia Institute of Global Politics and Mozilla have been working with leading AI scholars and practitioners to create a framework on openness and AI. Today, we are publishing a paper that lays out this new framework.

During earlier eras of the internet, open source technologies played a core role in promoting innovation and safety. Open source technology provided a core set of building blocks that software developers have used to do everything from creating art to designing vaccines to developing apps used by people all over the world; open source software is estimated to be worth over $8 trillion. And attempts to limit open innovation, such as export controls on encryption in early web browsers, ended up being counterproductive, further exemplifying the value of openness.

Today, open source approaches to artificial intelligence, and especially to foundation models, promise similar benefits to society. However, defining and empowering open source for foundation models has proven tricky, given its significant differences from traditional software development. This lack of clarity has made it harder to recommend specific approaches and standards for how developers should advance openness and unlock its benefits. Additionally, conversations about openness in AI have often operated at a high level, making it harder to reason about its benefits and risks. Some policymakers and advocates have blamed open access to AI models for certain safety and security risks, often without concrete or rigorous evidence to justify those claims. Others tout the benefits of openness in AI, but without specificity about how to actually harness those opportunities.

That's why, in February, Mozilla and the Columbia Institute of Global Politics brought together over 40 leading scholars and practitioners working on openness and AI for the Columbia Convening. These individuals, spanning prominent open source AI startups and companies, nonprofit AI labs, and civil society organizations, focused on exploring what 'open' should mean in the AI era.

Today, we are publishing a paper that presents a framework for grappling with openness across the AI stack. The paper surveys existing approaches to defining openness in AI models and systems, and then proposes a descriptive framework to understand how each component of the foundation model stack contributes to openness. It enables, without prescribing, an analysis of how to unlock specific benefits from AI, based on desired model and system attributes. The paper also adds clarity to support further work on this topic, including work to develop stronger safety safeguards for open systems.

We believe this framework will support timely conversations in the technical and policy communities. For example, this week, as policymakers discuss AI policy at the AI Seoul Summit 2024, this framework can help clarify how openness in AI can support societal and political goals, including innovation, safety, competition, and human rights. And as the technical community continues to build and deploy AI systems, this framework can help AI developers ensure their systems achieve their intended goals, promote innovation and collaboration, and reduce harms. We look forward to working with the open source and AI community, as well as the policy and technical communities more broadly, to continue building on this framework.

Read more from the original source:

Releasing a new paper on openness and artificial intelligence - Mozilla & Firefox

Cooler Master introduces colored ‘AI Thermal Paste’: CryoFuze 5 comes with nano-diamond technology – Tom’s Hardware

Cooler Master just released a new line of CryoFuze 5 'AI Thermal Paste' that comes in six different colors. The company uses zinc oxide and aluminum powder to make the colorful thermal paste, while also claiming that it uses 'nano-molecular technology' to deliver stable performance.

While the added colors are likely just a gimmick, or a perk for creators filming their PC builds, the bigger claim here is the thermal paste's performance and stability across a wide range of temperatures. According to the CryoFuze 5 China product page, the paste has a thermal conductivity coefficient of 12.6 W/mK, better than every thermal paste we've tested in our Best Thermal Paste for 2024 guide except the SYY 157, which is rated at 15.7 W/mK. It won't match liquid metal thermal pastes, however, which offer thermal conductivity ratings of 73 W/mK or higher.
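To put those ratings in perspective, the conductive resistance of a paste layer scales inversely with its W/mK figure. Here is a minimal sketch of that calculation; the 0.05 mm bond-line thickness, 40 x 40 mm heat-spreader area, and 300 W load are illustrative assumptions, not figures from the article.

```python
# Conductive thermal resistance of a flat paste layer: R = t / (k * A).
# Bond-line thickness, spreader area, and heat load are illustrative
# assumptions for comparison; they are not figures from the article.

def paste_resistance(k_w_per_mk, thickness_m=50e-6, area_m2=0.04 * 0.04):
    """Return thermal resistance in K/W for conductivity k (W/mK)."""
    return thickness_m / (k_w_per_mk * area_m2)

HEAT_LOAD_W = 300  # assumed CPU package power

for name, k in [("CryoFuze 5", 12.6), ("SYY 157", 15.7), ("liquid metal", 73.0)]:
    r = paste_resistance(k)
    # Temperature drop across the paste layer at the assumed load.
    print(f"{name}: R = {r * 1000:.2f} mK/W, "
          f"delta-T at {HEAT_LOAD_W} W = {HEAT_LOAD_W * r:.2f} K")
```

Under these assumptions, going from 12.6 to 15.7 W/mK changes the temperature drop across the paste by only a fraction of a degree, which is why a strong rating still leaves the CryoFuze 5 only modestly ahead of its rivals.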

Cooler Master uses AI branding on the CryoFuze 5, but there is nothing AI about a thermal paste. Perhaps Cooler Master intends it for AI processors, especially as next-generation AI chips like Intel's Falcon Shores and Nvidia's B100 and B200 GPUs have TDPs higher than 1,000 watts, but the CryoFuze 5's thermal performance isn't that far ahead of its competitors.

The CryoFuze 5 might not mean much for the average PC builder. But enthusiasts looking for style points in their build videos might love it (even though no one will ever see it again once the PC is assembled, unless they take the CPU cooler off). This also isn't the first colored thermal paste from Cooler Master, which already sells the CryoFuze Violet thermal grease.

More importantly, the CryoFuze 5's high thermal conductivity (for a thermal paste) lets overclockers push high-performance silicon even further. This is particularly useful for builders using more exotic solutions, like the EKWB AIO liquid cooler designed for delidded CPUs, or those who replace the processor's stock heat spreader with a custom one from Thermal Grizzly.

The stability of Cooler Master's colorful thermal paste adds another advantage, especially for overclockers who aim to get the most out of their silicon. If you're one of the few who use liquid nitrogen to cool your PC, you'll appreciate the CryoFuze 5's ability to work from -50°C to 240°C.

Liquid metal should still perform better than the CryoFuze 5, but it comes with the added risk of shorting components, as it's an electrically conductive material. While the color options and AI branding are likely just marketing, the paste's improved performance should help enthusiasts looking to redline their systems.

Read the original:

Cooler Master introduces colored 'AI Thermal Paste': CryoFuze 5 comes with nano-diamond technology - Tom's Hardware

Wisconsin man arrested for allegedly creating AI-generated child sexual abuse material – The Verge

A Wisconsin software engineer was arrested on Monday for allegedly creating and distributing thousands of AI-generated images of child sexual abuse material (CSAM).

Court documents describe Steven Anderegg as extremely technologically savvy, with a background in computer science and decades of experience in software engineering. Anderegg, 42, is accused of sending AI-generated images of naked minors to a 15-year-old boy via Instagram DM. Anderegg was put on law enforcement's radar after the National Center for Missing & Exploited Children flagged the messages, which he allegedly sent in October 2023.

According to information law enforcement obtained from Instagram, Anderegg posted an Instagram story in 2023 consisting of a realistic GenAI image of minors wearing BDSM-themed leather clothes and encouraged others to check out what they were missing on Telegram. In private messages with other Instagram users, Anderegg allegedly discussed his desire to have sex with prepubescent boys and told one Instagram user that he had tons of other AI-generated CSAM images on his Telegram.

Anderegg allegedly began sending these images to another Instagram user after learning he was only 15 years old. When this minor made his age known, the defendant did not rebuff him or inquire further. Instead, he wasted no time in describing to this minor how he creates sexually explicit GenAI images and sent the child custom-tailored content, charging documents claim.

When law enforcement searched Anderegg's computer, they found over 13,000 images, with hundreds if not thousands of them depicting nude or semi-clothed prepubescent minors, according to prosecutors. Charging documents say Anderegg made the images with the text-to-image model Stable Diffusion, a product created by Stability AI, and used extremely specific and explicit prompts to create them. Anderegg also allegedly used negative prompts to avoid creating images depicting adults and used third-party Stable Diffusion add-ons that specialized in producing genitalia.

Last month, several major tech companies including Google, Meta, OpenAI, Microsoft, and Amazon said they'd review their AI training data for CSAM. The companies committed to a new set of principles that include stress-testing models to ensure they aren't creating CSAM. Stability AI also signed on to the principles.

According to prosecutors, this is not the first time Anderegg has come into contact with law enforcement over his alleged possession of CSAM via a peer-to-peer network. In 2020, someone using the internet in Anderegg's Wisconsin home tried to download multiple files of known CSAM, prosecutors claim. Law enforcement searched his home in 2020, and Anderegg admitted to having a peer-to-peer network on his computer and frequently resetting his modem, but he was not charged.

In a brief supporting Anderegg's pretrial detention, the government noted that he's worked as a software engineer for more than 20 years and that his CV includes a recent job at a startup, where he used his excellent technical understanding in formulating AI models.

If convicted, Anderegg faces up to 70 years in prison, though prosecutors say the recommended sentencing range may be as high as life imprisonment.

The rest is here:

Wisconsin man arrested for allegedly creating AI-generated child sexual abuse material - The Verge

Beyond keywords: AI-driven approaches to improve data discoverability – World Bank

This blog is part of AI for Data, Data for AI, a series aiming to unwrap, explain, and foster the intersection of artificial intelligence and data. This post is the third installment of the series; for further reading, here are the first and second installments.

Data is essential for generating knowledge and informing policies. Organizations that produce large volumes of diverse data face challenges in managing and disseminating it effectively. One major challenge is ensuring users can easily find the most relevant data for their needs, a problem known as data discoverability.

Organizations like the World Bank have systems to make their data assets discoverable. Traditionally, these systems use lexical or keyword search applications, indexing available metadata to enable data discovery through search terms. However, this approach limits discovery to the exact keywords present in the accompanying metadata: a query for "maternal mortality", for example, will not surface a dataset documented only as "deaths among women during pregnancy and childbirth".

Artificial intelligence (AI), and large language models (LLMs) in particular, can enhance data systems to ensure relevant and timely data are discoverable. With richer metadata, AI-enabled solutions such as semantic search, hybrid search, knowledge graphs, and recommendation systems become practical.
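As a concrete illustration, semantic search matches queries to catalog entries by meaning rather than by shared keywords. The sketch below is a minimal, assumed implementation: the sentence-transformers library, the model name, and the three catalog entries are illustrative choices, as the World Bank post does not name a specific tool.

```python
# Minimal sketch of embedding-based semantic search over dataset metadata.
# Library, model, and catalog entries are illustrative assumptions.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")

catalog = [
    "Deaths among women during pregnancy and childbirth, by country",
    "Household electricity access in Sub-Saharan Africa",
    "Primary school enrollment rates for girls and boys",
]

# Embed the catalog once, then embed each incoming query at search time.
doc_vecs = model.encode(catalog, convert_to_tensor=True)
query_vec = model.encode("maternal mortality", convert_to_tensor=True)

# Cosine similarity ranks entries by meaning rather than shared keywords:
# the top hit contains neither "maternal" nor "mortality".
scores = util.cos_sim(query_vec, doc_vecs)[0]
for score, text in sorted(zip(scores.tolist(), catalog), reverse=True):
    print(f"{score:.3f}  {text}")
```

Hybrid search then combines such embedding scores with a classic lexical score (BM25, for instance), so exact keyword matches and semantic matches both contribute to the final ranking.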

In this post, we explore how simple AI applications can overcome the limitations of keyword-based search. We also discuss AI-enabled techniques that improve our understanding of users' information needs, leading to a better data search experience.

More:

Beyond keywords: AI-driven approaches to improve data discoverability - World Bank

Wearable AI Pin maker Humane is reportedly seeking a buyer – Engadget

The tech startup Humane is seeking a buyer for its business just over a month after it released the AI Pin, according to Bloomberg. Engadget's Cherlynn Low described the AI Pin as a "wearable Siri button," because it's a small device you can wear, designed with a very specific purpose in mind: to give you ready access to an AI assistant. Humane is working with a financial adviser, Bloomberg said, and is apparently hoping to sell for between $750 million and $1 billion.

The company drummed up a lot of interest and successfully raised $230 million from high-profile investors. However, a billion dollars may be a huge ask when the AI Pin was mostly panned by critics at launch. We gave the AI Pin a score of 50 out of 100 in our review for several reasons. It was slow, taking a few seconds to reply when we asked it questions, and its responses were at times irrelevant and no better than what you could get with a quick Google search. Its touchpad grew warm with use, it had poor battery life, and its projector screen, while novel, was pretty hard to control. The Humane AI Pin also isn't cheap: it costs $700 to buy and requires a $24 monthly fee to access the company's artificial intelligence technology and 4G service riding on T-Mobile's network. In a post on its website, Humane said it was listening to feedback and listed several problem areas it intends to focus on.

Another dedicated AI gadget, the Rabbit R1, is much more affordable at $199, but it's still not cheap enough to make the category more popular than it is, especially since you could just as easily take out your phone to use AI tools. Humane's effort to sell its business is still in its very early stages, Bloomberg noted, and it might not close a deal at all.

See original here:

Wearable AI Pin maker Humane is reportedly seeking a buyer - Engadget