Archive for the ‘Ai’ Category

An A.I. Supercomputer Whirs to Life, Powered by Giant Computer … – The New York Times

Inside a cavernous room this week in a one-story building in Santa Clara, Calif., six-and-a-half-foot-tall machines whirred behind white cabinets. The machines made up a new supercomputer that had become operational just last month.

The supercomputer, which was unveiled on Thursday by Cerebras, a Silicon Valley start-up, was built with the company's specialized chips, which are designed to power artificial intelligence products. The chips stand out for their size: each is about as large as a dinner plate, or 56 times as large as a chip commonly used for A.I. Each Cerebras chip packs the computing power of hundreds of traditional chips.

Cerebras said it had built the supercomputer for G42, an A.I. company. G42 said it planned to use the supercomputer to create and power A.I. products for the Middle East.

“What we're showing here is that there is an opportunity to build a very large, dedicated A.I. supercomputer,” said Andrew Feldman, the chief executive of Cerebras. He added that his start-up wanted to show the world that this work “can be done faster, it can be done with less energy, it can be done for lower cost.”

Demand for computing power and A.I. chips has skyrocketed this year, fueled by a worldwide A.I. boom. Tech giants such as Microsoft, Meta and Google, as well as myriad start-ups, have rushed to roll out A.I. products in recent months after the A.I.-powered ChatGPT chatbot went viral for the eerily humanlike prose it could generate.

But making A.I. products typically requires significant amounts of computing power and specialized chips, leading to a ferocious hunt for more of those technologies. In May, Nvidia, the leading maker of chips used to power A.I. systems, said appetite for its products, known as graphics processing units, or GPUs, was so strong that its quarterly sales would be more than 50 percent above Wall Street estimates. The forecast sent Nvidia's market value soaring above $1 trillion.

“For the first time, we're seeing a huge jump in the computer requirements because of A.I. technologies,” said Ronen Dar, a founder of Run:AI, a start-up in Tel Aviv that helps companies develop A.I. models. That has created a huge demand for specialized chips, he added, and companies have rushed to secure access to them.

To get their hands on enough A.I. chips, some of the biggest tech companies, including Google, Amazon, Advanced Micro Devices and Intel, have developed their own alternatives. Start-ups such as Cerebras, Graphcore, Groq and SambaNova have also joined the race, aiming to break into the market that Nvidia has dominated.

Chips are set to play such a key role in A.I. that they could change the balance of power among tech companies and even nations. The Biden administration, for one, has recently weighed restrictions on the sale of A.I. chips to China, with some American officials saying China's A.I. abilities could pose a national security threat to the United States by enhancing Beijing's military and security apparatus.

A.I. supercomputers have been built before, including by Nvidia. But it's rare for start-ups to create them.

Cerebras, which is based in Sunnyvale, Calif., was founded in 2016 by Mr. Feldman and four other engineers, with the goal of building hardware that speeds up A.I. development. Over the years, the company has raised $740 million, including from Sam Altman, who leads the A.I. lab OpenAI, and venture capital firms such as Benchmark. Cerebras is valued at $4.1 billion.

Because the chips that are typically used to power A.I. are small, often the size of a postage stamp, it takes hundreds or even thousands of them to process a complicated A.I. model. In 2019, Cerebras took the wraps off what it claimed was the largest computer chip ever built, and Mr. Feldman has said its chips can train A.I. systems between 100 and 1,000 times as fast as existing hardware.

G42, the Abu Dhabi company, started working with Cerebras in 2021. It used a Cerebras system in April to train an Arabic version of ChatGPT.

In May, G42 asked Cerebras to build a network of supercomputers in different parts of the world. Talal Al Kaissi, the chief executive of G42 Cloud, a subsidiary of G42, said the cutting-edge technology would allow his company to make chatbots and to use A.I. to analyze genomic and preventive care data.

But the demand for GPUs was so high that it was hard to obtain enough to build a supercomputer. Cerebras's technology was both available and cost-effective, Mr. Al Kaissi said. So Cerebras used its chips to build the supercomputer for G42 in just 10 days, Mr. Feldman said.

“The time scale was reduced tremendously,” Mr. Al Kaissi said.

Over the next year, Cerebras said, it plans to build two more supercomputers for G42, one in Texas and one in North Carolina, and, after that, six more distributed across the world. It is calling this network Condor Galaxy.

Start-ups are nonetheless likely to find it difficult to compete against Nvidia, said Chris Manning, a computer scientist at Stanford whose research focuses on A.I. That's because people who build A.I. models are accustomed to using software that works on Nvidia's A.I. chips, he said.

Other start-ups have also tried entering the A.I. chips market, yet many have effectively failed, Dr. Manning said.

But Mr. Feldman said he was hopeful. Many A.I. businesses do not want to be locked in only with Nvidia, he said, and there is global demand for other powerful chips like those from Cerebras.

“We hope this moves A.I. forward,” he said.

Generative AI bots will change how we write forever, and that's a good thing – The Hill

Is generative artificial intelligence (GenAI) really destroying writing?

There’s been a widespread argument that the technology is allowing high school and college students to easily cheat on their essay assignments. Some teachers across the country are scrambling to ban students from using writing applications like OpenAI’s ChatGPT, Bard AI, Jasper and Hugging Face, while others explore ways to integrate these emerging technologies.

But things are getting a little too panicky too quickly.

While media reports have cast GenAI writing bots as the “death” of high school and college writing, knee-jerk responses to these emerging technologies have been shortsighted. The public is failing to see the bigger picture — not just about GenAI writing bots but about the very ideas of GenAI and writing in general. 

When it comes to technology and writing, public cries about moral crises are not new. We’ve heard the same anxious arguments about every technology that has ever interacted with the production and teaching of writing — from Wikipedia and word processors to spell checkers, citation generators, chalkboards, the printing press, copy machines and ballpoint pens.

Remember the outrage over Wikipedia in the early 2000s, and the fear that students might use it to avoid conducting “actual research” when writing? Teachers and educational institutions then held meetings and filled syllabi with rules banning students from accessing Wikipedia.

Within a decade of Wikipedia’s introduction, however, the educational outrage dissipated, and the use of the site in classroom assignments is now commonplace. This is proof that all technologies — not just digital or writing technologies — have two possible paths: either they become ubiquitous and naturalized into how we do things, or they become obsolete. In most cases, they become obsolete because another technology surpasses the old technology’s usefulness.

GenAI writing bots are not destroying writing; they are reinvigorating it. Ultimately, we shouldn’t be so concerned about how students might use ChatGPT or Bard AI or the others to circumvent hegemonic educational values. Instead, we should be thinking about how we can prepare our students and the future workforce for ethically using these technologies. Resisting these changes in defense of wholesale nostalgia for how we learned or taught writing is tantamount to behaving like the proverbial ostrich with its head in the sand.  

So, what will come next with GenAI for writing?

Right now, it is clear that ChatGPT can produce fundamental writing that is generic. However, as companies develop algorithms that are discipline-specific, GenAI writing bots will start building more complex abilities and producing more dynamic writing. Just as “Social Media Marketing Manager” evolved into a now-familiar job as online commerce emerged, so too will we see “Prompt Engineer” (someone who can prompt GenAI to deliver useful outcomes) become a prevalent career path throughout the next decade.

For example, think about the U.S. outdoor recreational industry, which accounts for 1.9 percent of the Gross Domestic Product (GDP) and amounts to about $454 billion per year. This is an industry — like many others — that relies on the ability to rapidly produce nearly endless content in the form of magazines, product descriptions, travel guides, advertisements, videos, reviews and social media posts. When this industry further develops GenAI writing bots specific to its needs, or when tech companies develop these bots and sell access to them, the bots will evolve to produce the writing that is both needed and effective. Students will need to know how to write the prompts that will guide GenAI-driven content in those industries. 
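That prompt-writing work is less mystical than it sounds: much of it amounts to programmatically assembling structured, constrained instructions. A hypothetical sketch in Python (the function name, fields and wording are illustrative, not any vendor's actual API):

```python
def build_product_prompt(product, audience, tone, word_count=80):
    """Assemble a reusable, structured prompt for a GenAI writing bot.

    The leverage is in the constraints, not the code: audience, tone and
    length are pinned down so the generated copy is usable with minimal editing.
    """
    return (
        "You are a copywriter for the outdoor recreation industry.\n"
        f"Write a product description of {product} for {audience}.\n"
        f"Tone: {tone}. Length: about {word_count} words.\n"
        "Avoid superlatives you cannot support; mention one concrete use case."
    )

prompt = build_product_prompt(
    "a two-person backpacking tent", "first-time hikers", "friendly"
)
print(prompt)
```

The skill the “Prompt Engineer” role describes lives in choosing those constraints, not in the code itself.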

Subscription GenAI services will inevitably become the norm for much of the content produced for commercial consumption, and many companies will build their own writing bots for their specific and private needs. Companies like Jasper AI are banking on this, and with nearly 1,000 new GenAI platforms launching each week, the model appears to be heading toward subscription-based access to proprietary GenAI platforms. Thus, schools and colleges will need to develop new ways to understand the role of writing in education, surrender ingrained beliefs about teaching writing, and teach students how to operate in the GenAI-supported environments of the future. 

Fortunately, not all educational institutions or teachers are jumping aboard the anti-AI bandwagon. Institutions like the University of Florida (UF), with its forward-thinking AI Initiative, are using this moment of technophobic reaction to critically engage the role of AI in all teaching and learning situations. Rather than imposing restrictions, UF administrators are holding roundtables and symposia about how to address GenAI writing bots in classrooms. 

When it comes down to it, GenAI is not the enemy of writers or writing instructors. It is just a new technological teaching tool, and we can learn something from it if we listen.

Sidney I. Dobrin, Ph.D., is a professor and the chair of the Department of English at the University of Florida. He is the director of the Trace Innovation Initiative, a member of the Florida Institute for National Security, and an Adobe Digital Thought Leader. He is also the author of “Talking About Generative AI: A Guide for Educators” and “AI and Writing.”

A Blessing and a Boogeyman: Advertisers Warily Embrace A.I. – The New York Times

The advertising industry is in a love-hate relationship with artificial intelligence.

In the past few months, the technology has made ads easier to generate and track. It is writing marketing emails with subject lines and delivery times tailored to specific subscribers. It gave an optician the means to set a fashion shoot on an alien planet and helped Denmark's tourism bureau animate famous tourist sites. Heinz turned to it to generate recognizable images of its ketchup bottle, then paired them with the symphonic theme that charts human evolution in the film “2001: A Space Odyssey.”

A.I., however, has also plunged the marketing world into a crisis. Much has been made about the technology's potential to limit the need for human workers in fields such as law and financial services. Advertising, already racked by inflation and other economic pressures as well as a talent drain due to layoffs and increased automation, is especially at risk of an overhaul-by-A.I., marketing executives said.

The conflicting attitudes suffused a co-working space in downtown San Francisco where more than 200 people gathered last week for an “A.I. for marketers” event. Copywriters expressed worry and skepticism about chatbots capable of writing ad campaigns, while start-up founders pitched A.I. tools for automating the creative process.

“It really doesn't matter if you are fearful or not: The tools are here, so what do we do?” said Jackson Beaman, whose AI User Group organized the event. “We could stand here and not do anything, or we can learn how to apply them.”

Machine learning, a subset of artificial intelligence that uses data and algorithms to imitate how humans learn, has quietly powered advertising for years. Madison Avenue has used it to target specific audiences, sell and buy ad space, offer user support, create logos and streamline its operations. (One ad agency has a specialized A.I. tool called the Big Lebotski to help clients compose ad copy and boost their profile on search engines.)

Enthusiasm came gradually. In 2017, when the advertising group Publicis introduced Marcel, an A.I. business assistant, its peers responded with what it described as “outrage, jest and negativity.”

At last month's Cannes Lions International Festival of Creativity, the glittering apex of the advertising industry calendar, Publicis got its “I told you so” moment. Around the festival, where the agenda was stuffed with panels about A.I. being unleashed and affecting the future of creativity, the company plastered artificially generated posters that mocked the original reactions to Marcel.

“Is it OK to talk about A.I. at Cannes now?” the ads joked.

The answer is clear. The industry has wanted to discuss little else since late last year, when OpenAI released its ChatGPT chatbot and set off a global arms race around generative artificial intelligence.

McDonald's asked the chatbot to name the most iconic burger in the world and splashed the answer, the Big Mac, across videos and billboards, drawing A.I.-generated retorts from fast-food rivals. Coca-Cola recruited digital artists to generate 120,000 riffs on its brand imagery, including its curved bottle and swoopy logo, using an A.I. platform built in part by OpenAI.

The surge of A.I. experimentation has brought to the fore a host of legal and logistical challenges, including the need to protect reputations and avoid misleading consumers.

A recent campaign from Virgin Voyages allowed users to prompt a digital avatar of Jennifer Lopez to issue customized video invitations to a cruise, including the names of potential guests. But, to prevent Ms. Lopez from appearing to use inappropriate language, the avatar could say only names from a preapproved list and otherwise defaulted to terms like “friend” and “sailor.”

“It's still in the early stages; there were challenges to get the models right, to get the look right, to get the sound right, and there are very much humans in the loop throughout,” said Brian Yamada, the chief innovation officer of VMLY&R, the agency that produced the campaign for Virgin.

Elaborate interactive campaigns like Virgin's make up a minority of advertising; 30-second video clips and captioned images, often with variations lightly adjusted for different demographics, are much more common. In recent months, several large tech companies, including Meta, Google and Adobe, have announced artificial intelligence tools to handle that sort of work.

Major advertising companies say the technology could streamline a bloated business model. The ad group WPP is working with the chip maker Nvidia on an A.I. platform that could, for example, allow car companies to easily incorporate footage of a vehicle into scenes customized for local markets without laboriously filming different commercials around the world.

To many of the people who work on such commercials, A.I.'s advance feels like looming obsolescence, especially in the face of several years of slowing growth and a shift in advertising budgets from television and other legacy media to programmatic ads and social platforms. The media agency GroupM predicted last month that artificial intelligence was likely to influence at least half of all advertising revenue by the end of 2023.

“There's little doubt that the future of creativity and A.I. will be increasingly intertwined,” said Philippe Krakowsky, the chief executive of the Interpublic Group of Companies, an ad giant.

IPG, which was hiring chief A.I. officers and similar executives years before ChatGPT's debut, now hopes to use the technology to deliver highly personalized experiences.

“That said, we need to apply a very high level of diligence and discipline, and collaborate across industries, to mitigate bias, misinformation and security risk in order for the pace of advancement to be sustained,” Mr. Krakowsky added.

A.I.'s ability to copy and deceive, which has already found widespread public expression in political marketing from Gov. Ron DeSantis of Florida and others, has alarmed many advertising executives. They are also concerned about intellectual property issues and the direction and speed of A.I. development. Several ad agencies joined organizations such as the Coalition for Content Provenance and Authenticity, which wants to trace content from its origins, and the Partnership on AI, which aims to keep the technology ethically sound.

Amid the doom and gloom, the agency Wunderman Thompson decided this spring to take A.I. down a peg.

In an Australian campaign for Kit Kat candy bars, the agency used text and image generators from OpenAI to create intentionally awkward ads with the tagline “AI made this ad so we could have a break.” In one, warped figures chomped on blurry chocolate bars over a script narrated in a mechanical monotone: “Someone hands them a Kit Kat bar. They take a bite.”

The campaign would be trickier to pull off now, in part because the fast-improving technology has erased many of the flaws present just a few months ago, said Annabelle Barnum, the general manager for Wunderman Thompson in Australia. Still, she said, humans will always be key to the advertising process.

“Creativity comes from real human insight. A.I. is always going to struggle with that because it relies purely on data to make decisions,” she said. “So while it can enhance the process, ultimately it will never be able to take away anything that creators can really do, because that humanistic element is required.”

Meta Unveils a More Powerful A.I. and Isn’t Fretting Over Who Uses It – The New York Times

The largest companies in the tech industry have spent the year warning that development of artificial intelligence technology is outpacing their wildest expectations and that they need to limit who has access to it.

Mark Zuckerberg is doubling down on a different tack: He's giving it away.

Mr. Zuckerberg, the chief executive of Meta, said on Tuesday that he planned to provide the code behind the company's latest and most advanced A.I. technology to developers and software enthusiasts around the world free of charge.

The decision, similar to one that Meta made in February, could help the company reel in competitors like Google and Microsoft. Those companies have moved more quickly to incorporate generative artificial intelligence, the technology behind OpenAI's popular ChatGPT chatbot, into their products.

“When software is open, more people can scrutinize it to identify and fix potential issues,” Mr. Zuckerberg said in a post to his personal Facebook page.

The latest version of Meta's A.I. was created with 40 percent more data than what the company released just a few months ago and is believed to be considerably more powerful. And Meta is providing a detailed road map that shows how developers can work with the vast amount of data it has collected.

Researchers worry that generative A.I. can supercharge the amount of disinformation and spam on the internet, and presents dangers that even some of its creators do not entirely understand.

Meta is sticking to a long-held belief that allowing all sorts of programmers to tinker with technology is the best way to improve it. Until recently, most A.I. researchers agreed with that. But in the past year, companies like Google and OpenAI, a San Francisco start-up that is working closely with Microsoft, have set limits on who has access to their latest technology and placed controls around what can be done with it.

The companies say they are limiting access because of safety concerns, but critics say they are also trying to stifle competition. Meta argues that it is in everyone's best interest to share what it is working on.

“Meta has historically been a big proponent of open platforms, and it has really worked well for us as a company,” said Ahmad Al-Dahle, vice president of generative A.I. at Meta, in an interview.

The move will make the software open source, which is computer code that can be freely copied, modified and reused. The technology, called LLaMA 2, provides everything anyone would need to build online chatbots like ChatGPT. LLaMA 2 will be released under a commercial license, which means developers can build their own businesses using Meta's underlying A.I. to power them, all for free.

By open-sourcing LLaMA 2, Meta can capitalize on improvements made by programmers from outside the company while, its executives hope, spurring A.I. experimentation.

Meta's open-source approach is not new. Companies often open-source technologies in an effort to catch up with rivals. Fifteen years ago, Google open-sourced its Android mobile operating system to better compete with Apple's iPhone. While the iPhone had an early lead, Android eventually became the dominant software used in smartphones.

But researchers argue that someone could deploy Meta's A.I. without the safeguards that tech giants like Google and Microsoft often use to suppress toxic content. Newly created open-source models could be used, for instance, to flood the internet with even more spam, financial scams and disinformation.

LLaMA 2, short for Large Language Model Meta AI, is what scientists call a large language model, or L.L.M. Chatbots like ChatGPT and Google Bard are built with large language models.

The models are systems that learn skills by analyzing enormous volumes of digital text, including Wikipedia articles, books, online forum conversations and chat logs. By pinpointing patterns in the text, these systems learn to generate text of their own, including term papers, poetry and computer code. They can even carry on a conversation.
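The pattern-learning idea can be illustrated with a deliberately tiny stand-in: a character-level bigram model in Python. Real large language models are vastly more sophisticated, but the loop is the same in spirit: count patterns in the training text, then sample from those patterns to generate new text.

```python
import random
from collections import defaultdict

def train_bigram_model(text):
    """Record which character follows which -- the 'patterns' in the text."""
    counts = defaultdict(list)
    for a, b in zip(text, text[1:]):
        counts[a].append(b)
    return counts

def generate(model, start, length, seed=0):
    """Sample a continuation one character at a time, the way an L.L.M.
    samples one token at a time."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(length):
        followers = model.get(out[-1])
        if not followers:
            break
        out.append(rng.choice(followers))
    return "".join(out)

corpus = "the cat sat on the mat. the dog sat on the log."
model = train_bigram_model(corpus)
print(generate(model, "t", 20))
```

Every two-character sequence the toy emits was observed somewhere in its training text, a very loose analogue of an L.L.M. reproducing the statistical patterns of its training data.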

Meta is teaming up with Microsoft to open-source LLaMA 2, which will run on Microsofts Azure cloud services. LLaMA 2 will also be available through other providers, including Amazon Web Services and the company HuggingFace.

Dozens of Silicon Valley technologists signed a statement of support for the initiative, including the venture capitalist Reid Hoffman and executives from Nvidia, Palo Alto Networks, Zoom and Dropbox.

Meta is not the only company to push for open-source A.I. projects. The Technology Innovation Institute produced Falcon LLM and published the code freely this year. Mosaic ML also offers open-source software for training L.L.M.s.

Meta executives argue that their strategy is not as risky as many believe. They say that people can already generate large amounts of disinformation and hate speech without using A.I., and that such toxic material can be tightly restricted by Meta's social networks such as Facebook. They maintain that releasing the technology can eventually strengthen the ability of Meta and other companies to fight back against abuses of the software.

Meta did additional “red team” testing of LLaMA 2 before releasing it, Mr. Al-Dahle said. That is a term for testing software for potential misuse and figuring out ways to protect against such abuse. The company will also release a responsible-use guide containing best practices and guidelines for developers who wish to build programs using the code.

But these tests and guidelines apply to only one of the models that Meta is releasing, which will be trained and fine-tuned in a way that contains guardrails and inhibits misuse. Developers will also be able to use the code to create chatbots and programs without guardrails, a move that skeptics see as a risk.

In February, Meta released the first version of LLaMA to academics, government researchers and others. The company also allowed academics to download LLaMA after it had been trained on vast amounts of digital text. Scientists call this process “releasing the weights.”

It was a notable move because analyzing all that digital data requires vast computing and financial resources. With the weights, anyone can build a chatbot far more cheaply and easily than from scratch.

Many in the tech industry believed Meta set a dangerous precedent, and after Meta shared its A.I. technology with a small group of academics in February, one of the researchers leaked the technology onto the public internet.

In a recent opinion piece in The Financial Times, Nick Clegg, Meta's president of global public policy, argued that it was not sustainable to keep foundational technology in the hands of just a few large corporations, and that historically companies that released open-source software had been served strategically as well.

“I'm looking forward to seeing what you all build!” Mr. Zuckerberg said in his post.

Dev News: Google Unlearns, Fresh 1.3 and Wix’s AI Plan – The New Stack

There's a lot of talk about machine learning and large language models, but one related challenge that's just now getting attention is machine unlearning: the process of removing data from a trained AI.

It's not enough simply to delete the data. It's a complex problem, as this Google post explains.

“Fully erasing the influence of the data requested to be deleted is challenging since, aside from simply deleting it from databases where it's stored, it also requires erasing the influence of that data on other artifacts such as trained machine learning models,” Google research scientists Fabian Pedregosa and Eleni Triantafillou wrote in a June 29 blog post. “Moreover, recent research has shown that in some cases it may be possible to infer with high accuracy whether an example was used to train a machine learning model using membership inference attacks (MIAs).”

“This creates privacy concerns since it indicates that it may be possible to infer that an individual's data trained the model even when it's deleted,” they added.
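The intuition behind a membership inference attack can be sketched in a few lines of Python. This is a deliberately extreme toy (the "model" simply memorizes its training set, and all names are made up), but real attacks exploit the same signal: overfit models are measurably more confident on examples they were trained on than on unseen ones.

```python
def train(examples):
    """Memorize the training set -- an extreme case of overfitting."""
    return set(examples)

def confidence(model, example):
    """A memorizing model is maximally confident on data it has seen."""
    return 0.99 if example in model else 0.55

def infer_membership(model, example, threshold=0.9):
    """The attacker needs only query access to the model's confidence:
    a high score suggests the example was in the training data."""
    return confidence(model, example) > threshold

model = train(["alice@example.com", "bob@example.com"])
print(infer_membership(model, "alice@example.com"))    # inferred member
print(infer_membership(model, "mallory@example.com"))  # inferred non-member
```

This is why deleting a record from a database is not enough: the model's behavior still leaks whether the record was used in training, which is the influence that unlearning research aims to erase.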

To help address the problem, Google is running a competition that started in mid-July and will continue through September 2023. They've also published a starting kit to provide a foundation for participants to build and test their unlearning models on a toy dataset.

The contest is part of the NeurIPS 2023 Competition Track Program, but there's no mention of what the prize will be. Developers can contact unlearning-challenge@googlegroups.com for more information.

The Fresh team plans to release new minor versions of its full-stack JavaScript web framework each month, but this month's release, Fresh 1.3, includes quite a few changes, including expanded and improved documentation, bug fixes and new features.

Among the changes is a merged GET handler and component feature. The two were already highly coupled but required a bit of annoying boilerplate.

“To ensure type-safety, you'd always have to create an interface for the component's props, pass that to the Handlers type as a generic and use that in the component definition. That's quite a lot of steps!” noted frontend developer and Fresh maintainer Marvin Hagemeister. This led to a similar snippet of code being needed to pass data between the two. The change is simpler but doesn't require developers to rewrite their routes.

Also, Fresh 1.3 plugins can now inject virtual routes and middleware, which is useful for plugins adding development-specific routes or admin dashboards, he wrote.

Another change: Fresh will now automatically render the _500.tsx template as a fallback when an error is thrown in a route handler.

Previously, Fresh required every island to live in its own file and be exported via a default export. That meant every island file was treated as its own entry point and shipped in a separate JavaScript file to the browser. Fresh 1.3 removes that requirement so that many islands can be exported in a single file.

There's also support for Deno.serve.

“With the recent Deno 1.35.0 release, the Deno.serve API was marked stable,” wrote Hagemeister. “We followed suit in Fresh, and with version 1.3 we'll use Deno.serve when it's available. This new API is not just faster, but also a lot simpler than the previous serve API from std/http.”

Website builder Wix plans to introduce a suite of new AI-powered tools, including an automated tool to create websites using natural language prompts.

The tool, called AI Site Generator, will allow users to describe their intent, and it will instantly generate a website. However, in an unusual move, Monday's press release did not indicate when these AI capabilities will be available. A request for a timeline by The New Stack had not been answered by the time this post was published.

“The tailor-made website is complete with a homepage and all inner pages with text, images, and any business solution including Stores, Bookings, Restaurants, Events and more,” the release stated. Users can continue to customize the site and edit based on their needs with integrated AI tools.

Wix is a web hosting solution that competes with the likes of Squarespace, GoDaddy and WordPress.

Other tools are also promised as part of the AI suite.

Currently, Wix offers a number of AI-powered features, such as AI Text Creator, which leverages ChatGPT to create content for particular sections of a site, such as titles, taglines and paragraphs; AI Template Text Creator, which generates the homepage and inner pages of a site after choosing a ready-made template; and AI Domain Generator, which helps users choose a unique domain name.
