Archive for the ‘Ai’ Category

A Bug in the Logic: Regulators try to solve the workplace AI problem … – The Federalist Society

Earlier this month, the Biden administration published a request for information on artificial intelligence in the workplace. The request asked workers to submit, among other things, anecdotes about how they had been affected by AI. These anecdotes would then be used to develop new policy proposals.

The request failed to say, however, why new policies were needed. The administration had already conceded that AI tools were covered by existing law. And in fact, it had already issued guidance under those laws. So it didn't seem to be covering any legal or policy gap. Instead, it seemed to be making a political statement. It seemed to be targeting AI because AI is poorly understood, and therefore unpopular. But that kind of approach to regulation promises to produce no real solutions. Instead, it promises only talking points and red tape.

The administration is hardly the first to see AI as an easy political target. States and cities have already started planting their flags. First out of the gate was New York City, which passed the nation's first law regulating AI-powered selection tools. The New York law requires employers to disclose their AI-powered selection tools, put the tools through annual bias audits, and give candidates a chance to ask for other selection methods. Likewise, there are at least four AI bills pending in California. The most far-reaching one, AB 331, would require employers not only to disclose their AI tools, but also to report AI-related data to a state agency. The law would also create a private right of action, serving up still more work to the busy Golden State plaintiffs' bar.

In short, lawmakers are clearly interested in AI and its effects on workers. Less clear, however, is what they hope to add to existing law. Just last week, the EEOC published updated guidance explaining how Title VII applies to AI-powered tools. Similarly, the NLRB's General Counsel recently announced that the National Labor Relations Act already forbids AI tools that chill protected concerted activity. And Lina Khan, chair of the FTC, has written that "[e]xisting laws prohibiting discrimination will apply [to AI tools], as well as existing authorities proscribing exploitative collection or use of personal data."

Given this existing coverage, it's unclear what new policies the administration thinks it needs. Nor is it clear what harms the administration is trying to prevent. In the RFI, the administration linked to a handful of articles published on general-interest websites. But some of the articles were more than seven years old, and none of them established any discriminatory effects. One even suggested that companies were using AI tools to keep workers and consumers safe. How any of this called for a new policy response was left unsaid.

One suspects the administration left so much unsaid because it has so little to say. It cited no real evidence that AI is harming workers. But finding real harm didn't seem to be the point. Rather, the point seemed to be scoring an easy political win. The administration is targeting AI because few people understand the technology. It can therefore crack down on AI tools without generating much backlash.

That kind of thinking is short-sighted. Not only has the administration identified no harm; it has failed to consider AI's potential benefits. For example, AI-powered tools might help workers be more productive. The tools might help workers find jobs more suited to their skillsets. The tools might even help workers stay safe. Without more real-world experience, those benefits are impossible to quantify. Yet the administration is rushing ahead anyway, assuming the tools are nefarious without considering their possible upside.

For now, then, the RFI looks like a regulatory misstep in the making. Workplace AI is too new and too unfamiliar to know whether regulation is necessary, much less what a proper regulatory regime would look like. For once, regulators should aim before they fire.

Note from the Editor: The Federalist Society takes no positions on particular legal and public policy matters. Any expressions of opinion are those of the author. To join the debate, please email us at info@fedsoc.org.

Go here to read the rest:

A Bug in the Logic: Regulators try to solve the workplace AI problem ... - The Federalist Society

Here’s What AI Thinks an Illinoisan Looks Like – And Apparently, Real Illinoisans Agree – NBC Chicago

Does this person look like he lives in Illinois? AI thinks so. And a handful of posts, allegedly from real people on social media, agree.

That's the basis of a Reddit post titled "The Most Stereotypical People in the States." The post, published in a section of Reddit dedicated to discussions on artificial intelligence, shares AI-generated photos of what the average person looks like in each state.

The results, according to commenters, are relatively accurate -- at least for Illinois.

Each of the photos shows the portrait of a person, most often a male, exhibiting some form of creative expression -- be it through clothing, environment, facial expression or otherwise -- that's meant to clearly represent a location.

For example, one state's AI-generated photo shows its stereotypical resident as a man sitting behind a giant block of cheese.

A stereotypical person in Illinois, according to the post, appears less distinctive, and rather ordinary. In fact, one commenter compares the man from Illinois to Waldo.

"Illinois is Waldo," the comment reads.

"Illinois," another begins. "A person as boring as it sounds to live there."

To other commenters, the photo of the average person who lives in Illinois isn't just dull. It's spot on.

"Hahaha," one commenter says. "Illinois is PRECISELY my brother-in-law."

"Illinois' is oddly accurate," another says.

Accurate or not, in nearly all the AI-generated photos -- Illinois included -- no smiles are captured; the only exceptions are Connecticut, Hawaii and West Virginia.

You can take a spin through all the photos here. Just make sure you don't skip over Illinois, since, apparently, that one is easy to miss.

Continued here:

Here's What AI Thinks an Illinoisan Looks Like – And Apparently, Real Illinoisans Agree - NBC Chicago

From Amazon to Wendy’s, how 4 companies plan to incorporate AI – and how you may interact with it – CNBC

Artificial intelligence is no longer limited to the realm of science-fiction novels; it's increasingly becoming a part of our everyday lives.

AI chatbots, such as OpenAI's ChatGPT, are already being used in a variety of ways, from writing emails to booking trips. In fact, ChatGPT amassed over 100 million users within just months of launching.

But AI goes beyond large language models (LLMs) like ChatGPT. Microsoft defines AI as "the capability of a computer system to mimic human-like cognitive functions such as learning and problem-solving."

For example, self-driving cars use AI to simulate the decision-making processes a human driver would usually make while on the road, such as identifying traffic signals or choosing the best route to reach a given destination, according to Microsoft.

AI's boom in popularity has many companies racing to integrate the technology into their own products. In fact, 94% of business leaders believe that AI development will be critical to the success of their business over the next five years, according to Deloitte's latest survey.

For consumers, this means AI may be coming to a store, restaurant or supermarket nearby. Here are four companies that are already utilizing AI's capabilities and how it may impact you.

Amazon uses AI in a number of ways, but one strategy aims to get your orders to you faster, Stefano Perego, vice president of customer fulfillment and global ops services for North America and Europe at Amazon, told CNBC on Monday.

The company's "regionalization" plan involves shipping products from warehouses that are closest to customers rather than from a warehouse located in a different part of the country.

To do that, Amazon is utilizing AI to analyze data and patterns to determine where certain products are in demand. This way, those products can be stored in nearby warehouses in order to reduce delivery times.

Microsoft is putting its $13 billion investment in OpenAI to work. In March, the tech behemoth announced that a new set of AI features, dubbed Copilot, will be added to its Microsoft 365 software, which includes popular apps such as Excel, PowerPoint and Word.

When using Word, for example, Copilot will be able to produce a "first draft to edit and iterate on," which Microsoft says can save "hours in writing, sourcing, and editing time." But Microsoft acknowledges that sometimes this type of AI software can produce inaccurate responses and warns that "sometimes Copilot will be right, other times usefully wrong."

A Brain Corp. autonomous floor scrubber, called an Auto-C, cleans the aisle of a Walmart store. Sam's Club completed the rollout of roughly 600 specialized scrubbers with inventory scan towers last October in a partnership with Brain Corp.

Walmart is using AI to make sure shelves in its nearly 4,700 stores and 600 Sam's Clubs stay stocked with your favorite products. One way it's doing that: automated floor scrubbers.

As the robotic scrubbers clean Sam's Club aisles, they also capture images of every item in the store to monitor inventory levels. The inventory intelligence towers located on the scrubbers take more than 20 million photos of the shelves every day.

The company has trained its algorithms to be able to tell the difference between brands and determine how much of the product is on the shelf with more than 95% accuracy, Anshu Bhardwaj, senior vice president of Walmart's tech strategy and commercialization, told CNBC in March. And when a product gets too low, the stock room is automatically alerted to replenish it, she said.

An AI chatbot may be taking your order when you pull up to a Wendy's drive-thru in the near future.

The fast-food chain partnered with Google to develop an AI chatbot specifically designed for drive-thru ordering, Wendy's CEO Todd Penegor told CNBC last week. The goal of this new feature is to speed up ordering at the speaker box, which is "the slowest point in the order process," the CEO said.

In June, Wendy's plans to test the first pilot of its "Wendy's FreshAI" at a company-operated restaurant in the Columbus, Ohio area, according to a May press release.

Powered by Google Cloud's generative AI and large language models, it will be able to have conversations with customers, understand made-to-order requests and generate answers to frequently asked questions, according to the company's statement.

Read more:

From Amazon to Wendy's, how 4 companies plan to incorporate AI – and how you may interact with it - CNBC

Boston Isn’t Afraid of Generative AI – WIRED

After ChatGPT burst on the scene last November, some government officials raced to prohibit its use. Italy banned the chatbot. The New York City, Los Angeles Unified, Seattle, and Baltimore school districts either banned or blocked access to generative AI tools, fearing that ChatGPT, Bard, and other content generation sites could tempt students to cheat on assignments, induce rampant plagiarism, and impede critical thinking. This week, the US Congress heard testimony from Sam Altman, CEO of OpenAI, and AI researcher Gary Marcus as it weighed whether and how to regulate the technology.

In a rapid about-face, however, a few governments are now embracing a less fearful and more hands-on approach to AI. New York City Schools chancellor David Banks announced yesterday that NYC is reversing its ban because "the knee jerk fear and risk overlooked the potential of generative AI to support students and teachers, as well as the reality that our students are participating in and will work in a world where understanding generative AI is crucial." And yesterday, City of Boston chief information officer Santiago Garces sent guidelines to every city official encouraging them to start using generative AI tools to understand their potential. The city also turned on use of Google Bard as part of the City of Boston's enterprise-wide use of Google Workspace so that all public servants have access.

The "responsible experimentation" approach adopted in Boston, the first policy of its kind in the US, could, if used as a blueprint, revolutionize the public sector's use of AI across the country and cause a sea change in how governments at every level approach AI. By promoting greater exploration of how AI can be used to improve government effectiveness and efficiency, and by focusing on how to use AI for governance instead of only how to govern AI, the Boston approach might help to reduce alarmism and focus attention on how to use AI for social good.

Boston's policy outlines several scenarios in which public servants might want to use AI to improve how they work, and even includes specific how-tos for effective prompt writing.

Generative AI, city officials were told in an email that went out from the CIO to all city officials on May 18, is "a great way to get started on memos, letters, and job descriptions," and might help to alleviate the work of overburdened public officials.

The tools can also help public servants translate government-speak and legalese into plain English, which can make important information about public services more accessible to residents. The policy explains that public servants can indicate the reading level or audience in the prompt, allowing the AI model to generate text suitable for elementary school students or specific target audiences.
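
To illustrate (this is a hypothetical prompt of our own devising, not an example drawn from the Boston guidelines themselves), a public servant might write: "Rewrite the following notice about the new trash collection schedule so that a fifth-grader could understand it." Naming the audience directly in the prompt gives the model an explicit reading level to target, which is the technique the policy describes.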

Generative AI can also help with translation into other languages so that a citys non-English speaking populations can enjoy equal and easier access to information about policies and services affecting them.

City officials were also encouraged to use generative AI to condense lengthy pieces of text or audio into concise summaries, which could make it easier for government officials to engage in conversations with residents.

View original post here:

Boston Isn't Afraid of Generative AI - WIRED

Bloomsbury admits using AI-generated artwork for Sarah J Maas novel – The Guardian

Publisher says cover of House of Earth and Blood was prepared by in-house designers unaware the stock image chosen was not human-made

Fri 19 May 2023 10.30 EDT

Publisher Bloomsbury has said it was unaware an image it used on the cover of a book by fantasy author Sarah J Maas was generated by artificial intelligence.

The paperback of Maas's House of Earth and Blood features a drawing of a wolf, which Bloomsbury had credited to Adobe Stock, a service that provides royalty-free images to subscribers.

But the Verge reported that the illustration of the wolf matches one created by a user on Adobe Stock called Aperture Vintage, who has marked the image as AI-generated.

A number of illustrators and fans have criticised the cover for using AI, but Bloomsbury has said it was unaware of the image's origin.

"Bloomsbury's in-house design team created the UK paperback cover of House of Earth and Blood, and as part of this process we incorporated an image from a photo library that we were unaware was AI when we licensed it," said Bloomsbury in a statement. "The final cover was fully designed by our in-house team."

This is not the first time that a book cover from a major publishing house has used AI. In 2022, sci-fi imprint Tor discovered that a cover it had created had used a licensed image created by AI, but decided to go ahead anyway due to production constraints.

And this month Bradford literature festival apologised for the hurt caused after artists criticised it for using AI-generated images on promotional material.

Meanwhile, Clarkesworld, a magazine that publishes science fiction short stories, was forced to close itself to submissions after a deluge of entries generated by AI.

The publishing industry is grappling more broadly with the use and role of AI. That has led the Society of Authors (SoA) to issue a paper on artificial intelligence, in which it said that while there are potential benefits to machine learning, there are risks that need to be assessed, and safeguards that need to be put in place to ensure that the creative industries will continue to thrive.

The SoA has advised that consent should be sought from creators before their work is used by an AI system, and that developers should be required to publish the data sources they have used to train their AI systems.

The guidance addresses concerns similar to those raised by illustrators and artists who spoke to the Guardian earlier this year about the way in which AI image generators use databases of already existing art and text without the creators' permission.

Continue reading here:

Bloomsbury admits using AI-generated artwork for Sarah J Maas novel - The Guardian