Archive for the ‘Free Software’ Category

Read the letter: Twitter accuses Microsoft of using its data in unauthorized ways – CNBC

Twitter is accusing Microsoft of using the social media company's data in ways that were unauthorized and never disclosed.

Alex Spiro, a partner at Quinn Emanuel Urquhart & Sullivan and attorney for Twitter owner Elon Musk, sent a letter to Microsoft on Thursday laying out the claims, including that the software company "may have been in violation of multiple provisions" of its agreement with Twitter over data use.

It's the latest rift among tech companies in the growing debate over who owns data that can be used to train artificial intelligence and machine learning software. The New York Times first reported on the letter, a copy of which was obtained by CNBC.

After Musk led a buyout of Twitter in October and appointed himself CEO, the company started charging for use of its application programming interface, which enables developers to embed tweets into their software and services and access Twitter data.

The API was previously free to use for some researchers, partners and developers who agreed to Twitter's terms. Twitter API-driven apps include Hootsuite, Sprout Social and Sprinklr.
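For readers curious what that access looks like in practice, here is a minimal, hypothetical Python sketch of a client calling the Twitter API v2 recent-search endpoint and backing off when it hits the throttling limits discussed later in the letter. The query, environment-variable token handling and back-off policy are illustrative assumptions, not details taken from the letter.

```python
import os
import time

import requests  # pip install requests

# Hypothetical bearer token, assumed to come from a paid Twitter API tier.
BEARER_TOKEN = os.environ["TWITTER_BEARER_TOKEN"]

SEARCH_URL = "https://api.twitter.com/2/tweets/search/recent"


def search_recent_tweets(query, max_results=10):
    """Fetch recent tweets matching `query`, backing off when throttled."""
    headers = {"Authorization": f"Bearer {BEARER_TOKEN}"}
    params = {"query": query, "max_results": max_results}

    while True:
        resp = requests.get(SEARCH_URL, headers=headers, params=params, timeout=30)
        if resp.status_code == 429:
            # Respect the per-token throttling window rather than "going around" it.
            reset_at = int(resp.headers.get("x-rate-limit-reset", time.time() + 60))
            time.sleep(max(reset_at - time.time(), 1))
            continue
        resp.raise_for_status()
        return resp.json().get("data", [])


if __name__ == "__main__":
    for tweet in search_recent_tweets("from:CNBC -is:retweet"):
        print(tweet["id"], tweet["text"][:80])
```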

According to the letter from Spiro to Microsoft CEO Satya Nadella and the company's board, last month Microsoft "declined to pay even a discounted rate for continued access to Twitter's APIs and content."

As of April, Microsoft had at least five products that used the Twitter API, including the Azure cloud, Bing search engine and Power Platform low-code application development tools, Spiro wrote.

The agreement restricts excessive use of Twitter's programming interfaces. However, for one of the Microsoft services using Twitter data, "account information outright states that it intends to allow its customers to 'go around throttling limits,'" Spiro wrote.

A Microsoft spokesperson acknowledged receipt of the letter and told CNBC the company will review it and "respond appropriately."

"Today we heard from a law firm representing Twitter with some questions about our previous use of the free Twitter API," the spokesperson said in an email. "We look forward to continuing our long-term partnership with the company."

Musk has been openly critical of Microsoft's tight relationship with OpenAI, the creator of the chatbot ChatGPT. Musk was an early backer of OpenAI, but the company has since raised billions of dollars from Microsoft, which is embedding its AI technology into many core products.

"Microsoft has a very strong say, if not directly controls, OpenAI at this point," Musk told CNBC in an interview this week. Nadella recently challenged Musk's claim in an interview with CNBC's Andrew Ross Sorkin, saying Microsoft has "a noncontrolling interest" in the startup.

Spiro did not name OpenAI or mention its ChatGPT and DALL-E applications or large language models in the letter. He did press Microsoft for details, requesting "a description of any token pooling implemented in any of the Microsoft Apps, including the time period(s) when any such token pooling occurred and the number of tokens that were pooled."

Musk and Nadella have had other interactions of late.

Last year, Musk approached Nadella as he was raising money for his Twitter buyout, according to text messages that became public via court filings. Nadella wrote in one text to Musk, "will for sure follow-up on Teams feedback!" Teams is Microsoft's chat app.

Read the full letter from Twitter to Microsoft here.

Here is the original post:
Read the letter: Twitter accuses Microsoft of using its data in unauthorized ways - CNBC

Police Facial Recognition Technology Can’t Tell Black People Apart – Scientific American

Imagine being handcuffed in front of your neighbors and family for stealing watches. After spending hours behind bars, you learn that the facial recognition software state police used on footage from the store identified you as the thief. But you didn't steal anything; the software pointed cops to the wrong guy.

Unfortunately, this is not a hypothetical. This happened three years ago to Robert Williams, a Black father in suburban Detroit. Sadly, Williams's story is not a one-off. In a recent case of mistaken identity, facial recognition technology led to the wrongful arrest of a Black Georgian for purse thefts in Louisiana.

Our research supports fears that facial recognition technology (FRT) can worsen racial inequities in policing. We found that law enforcement agencies that use automated facial recognition disproportionately arrest Black people. We believe this results from factors that include the lack of Black faces in the algorithms' training data sets, a belief that these programs are infallible and a tendency of officers' own biases to magnify these issues.

While no amount of improvement will eliminate the possibility of racial profiling, we understand the value of automating the time-consuming, manual face-matching process. We also recognize the technology's potential to improve public safety. However, considering the potential harms of this technology, enforceable safeguards are needed to prevent unconstitutional overreaches.

FRT is an artificial intelligence-powered technology that tries to confirm the identity of a person from an image. The algorithms used by law enforcement are typically developed by companies like Amazon, Clearview AI and Microsoft, which build their systems for different environments. Despite massive improvements in deep-learning techniques, federal testing shows that most facial recognition algorithms perform poorly at identifying people besides white men.

Civil rights advocates warn that the technology struggles to distinguish darker faces, which will likely lead to more racial profiling and more false arrests. Further, inaccurate identification increases the likelihood of missed arrests.

Still, some government leaders, including New Orleans Mayor LaToya Cantrell, tout this technology's ability to help solve crimes. Amid the growing staffing shortages facing police nationwide, some champion FRT as a much-needed police coverage amplifier that helps agencies do more with fewer officers. Such sentiments likely explain why more than one quarter of local and state police forces and almost half of federal law enforcement agencies regularly access facial recognition systems, despite their faults.

This widespread adoption poses a grave threat to our constitutional right against unlawful searches and seizures.

Recognizing the threat to our civil liberties, cities like San Francisco and Boston banned or restricted government use of this technology. At the federal level, President Biden's administration released the Blueprint for an AI Bill of Rights in 2022. While intended to incorporate practices that protect our civil rights in the design and use of AI technologies, the blueprint's principles are nonbinding. In addition, earlier this year congressional Democrats reintroduced the Facial Recognition and Biometric Technology Moratorium Act. This bill would pause law enforcement's use of FRT until policy makers can create regulations and standards that balance constitutional concerns and public safety.

The proposed AI bill of rights and the moratorium are necessary first steps in protecting citizens from AI and FRT. However, both efforts fall short. The blueprint doesn't cover law enforcement's use of AI, and the moratorium only limits the use of automated facial recognition by federal authorities, not local and state governments.

Yet as the debate heats up over facial recognition's role in public safety, our research and that of others shows that even with mistake-free software, this technology will likely contribute to inequitable law enforcement practices unless safeguards are put in place for nonfederal use too.

First, the concentration of police resources in many Black neighborhoods already results in disproportionate contact between Black residents and officers. With this backdrop, communities served by FRT-assisted police are more vulnerable to enforcement disparities, as the trustworthiness of algorithm-aided decisions is jeopardized by the demands and time constraints of police work, combined with an almost blind faith in AI that minimizes user discretion in decision-making.

Police typically use this technology in three ways: in-field queries to identify stopped or arrested persons, searches of video footage, or real-time scans of people passing surveillance cameras. The police upload an image, and in a matter of seconds the software compares the image to numerous photos to generate a lineup of potential suspects.
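To make that matching step concrete, here is a minimal Python sketch of how such a lineup could be generated: a probe face embedding is compared against a gallery of stored embeddings using cosine similarity, and the closest identities are returned. The embedding model, gallery contents and top_k value are assumptions for illustration; vendors' actual systems differ.

```python
import numpy as np


def rank_candidates(probe_embedding, gallery_embeddings, gallery_ids, top_k=10):
    """Return the top_k gallery identities most similar to the probe face.

    Embeddings are assumed to be fixed-length vectors produced by some face
    recognition model; similarity here is plain cosine similarity.
    """
    probe = probe_embedding / np.linalg.norm(probe_embedding)
    gallery = gallery_embeddings / np.linalg.norm(gallery_embeddings, axis=1, keepdims=True)

    scores = gallery @ probe                  # cosine similarity per gallery face
    order = np.argsort(scores)[::-1][:top_k]  # highest similarity first
    return [(gallery_ids[i], float(scores[i])) for i in order]
```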

Enforcement decisions ultimately lie with officers. However, people often believe that AI is infallible and don't question the results. On top of this, using automated tools is much easier than making comparisons with the naked eye.

AI-powered law enforcement aids also psychologically distance police officers from citizens. This removal from the decision-making process allows officers to separate themselves from their actions. Users also sometimes selectively follow computer-generated guidance, favoring advice that matches stereotypes, including those about Black criminality.

There's no solid evidence that FRT improves crime control. Nonetheless, officials appear willing to tolerate these racialized biases as cities struggle to curb crime. This leaves people vulnerable to encroachments on their rights.

The time for blind acceptance of this technology has passed. Software companies and law enforcement must take immediate steps towards reducing the harms of this technology.

For companies, creating reliable facial recognition software begins with balanced representation among designers. In the U.S. most software developers are white men. Research shows the software is much better at identifying members of the programmer's race. Experts attribute such findings largely to engineers' unconscious transmittal of own-race bias into algorithms.

Own-race bias creeps in as designers unconsciously focus on facial features familiar to them. The resulting algorithm is mainly tested on people of their race. As such, many U.S.-made algorithms learn by looking at more white faces, which fails to help them recognize people of other races.

Using diverse training sets can help reduce bias in FRT performance. Algorithms learn to compare images by training with a set of photos, so disproportionate representation of white males in training images produces skewed algorithms. Because Black people are also overrepresented in mugshot databases and other image repositories commonly used by law enforcement, AI is more likely to mark Black faces as criminal, leading to the targeting and arresting of innocent Black people.

We believe that the companies that make these products need to take staff and image diversity into account. However, this does not remove law enforcement's responsibility. Police forces must critically examine their methods if we want to keep this technology from worsening racial disparities and leading to rights violations.

For police leaders, uniform similarity score minimums must be applied to matches. After the facial recognition software generates a lineup of potential suspects, it ranks candidates based on how similar the algorithm believes the images are. Currently, departments regularly decide their own similarity score criteria, which some experts contend raises the chances for wrongful and missed arrests.
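Continuing the sketch above, a uniform similarity minimum would simply filter the ranked lineup before any candidate reaches an investigator. The 0.85 cutoff below is an arbitrary placeholder, not a recommended or industry-standard value.

```python
def apply_uniform_threshold(ranked_candidates, min_score=0.85):
    """Drop lineup candidates whose similarity falls below a single,
    department-wide minimum (0.85 is a placeholder, not a standard)."""
    return [(cand_id, score) for cand_id, score in ranked_candidates if score >= min_score]
```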

FRT's adoption by law enforcement is inevitable, and we see its value. But without adequate regulation and transparency, where racial disparities already exist in enforcement outcomes, this technology will likely exacerbate inequities like those seen in traffic stops and arrests.

Fundamentally, police officers need more training on FRT's pitfalls, human biases and historical discrimination. Beyond guiding officers who use this technology, police and prosecutors should also disclose that they used automated facial recognition when seeking a warrant.

Although FRT isn't foolproof, following these guidelines will help defend against uses that drive unnecessary arrests.

This is an opinion and analysis article, and the views expressed by the author or authors are not necessarily those of Scientific American.

See the original post:
Police Facial Recognition Technology Can't Tell Black People Apart - Scientific American

Porsche Taycan Gets EV Charging Station Finder in Apple Maps – Car and Driver

Porsche has added Apple Maps integration that includes charger locations for U.S. Taycan models, giving CarPlay users yet another reason to stick with the software. The car was already equipped with Porsche's native charging planner, which can suggest stops based on information like the vehicle's state of charge (SOC), expected traffic conditions, and average speed. But the reality is that most owners seem to prefer third-party software like Apple CarPlay and Android Auto. As for Android, a Porsche spokesperson told Car and Driver that the Taycan does come with Android Auto capability as standard, but it doesn't have the EV SOC integration or charge-stop suggestions that the new CarPlay system does.

The new integration means that Taycan owners won't need to leave CarPlay or settle for using the native navigation system when trying to map out charging stops. On top of doing a lot of the same quality-of-life things the native system does (like analyzing SOC and expected traffic), the Apple system can also analyze elevation changes along a given route to get a more accurate estimate of battery usage. According to Porsche, if you allow the vehicle's SOC to deplete far enough, the new software will automatically offer a route to the nearest compatible charging station.
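As a rough illustration of how elevation can enter such an estimate, here is a hedged Python sketch that sums per-segment energy use from rolling resistance, aerodynamic drag and climbing, then flags a charging stop when predicted SOC falls below a reserve. The vehicle constants and regeneration efficiency are placeholder guesses, not Porsche's actual model.

```python
# Illustrative physics for per-segment EV energy use; the constants below are
# placeholders, not Porsche's actual vehicle parameters.
G = 9.81            # gravitational acceleration, m/s^2
AIR_DENSITY = 1.2   # kg/m^3


def segment_energy_kwh(distance_m, elevation_gain_m, speed_mps,
                       mass_kg=2300.0, cd_a=0.56, rolling_coeff=0.01,
                       drivetrain_eff=0.85, regen_eff=0.6):
    """Estimate battery energy for one route segment, including elevation."""
    rolling = rolling_coeff * mass_kg * G * distance_m
    aero = 0.5 * AIR_DENSITY * cd_a * speed_mps ** 2 * distance_m
    climb = mass_kg * G * elevation_gain_m  # negative when descending

    if climb >= 0:
        energy_j = (rolling + aero + climb) / drivetrain_eff
    else:
        # Downhill: part of the potential energy comes back through regeneration.
        energy_j = (rolling + aero) / drivetrain_eff + climb * regen_eff

    return energy_j / 3.6e6  # joules -> kWh


def needs_charging_stop(segments, battery_kwh, soc, reserve_soc=0.10):
    """Walk the route segments and report whether SOC drops below the reserve."""
    for distance_m, elevation_gain_m, speed_mps in segments:
        soc -= segment_energy_kwh(distance_m, elevation_gain_m, speed_mps) / battery_kwh
        if soc < reserve_soc:
            return True
    return False
```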

The system relies on both CarPlay and the information fed to it from the vehicle. That means the normal Apple Maps app on your phone won't give the same charging recommendations. The system should work with any Taycan, but according to Porsche, any models from 2021 or earlier will need to go to a service center for a free software update. Porsche also provided a link for setup and FAQs for the software, which can be found here.


See the rest here:
Porsche Taycan Gets EV Charging Station Finder in Apple Maps - Car and Driver

Tesla to roll out free Full Self-Driving software, but there’s a catch. Know here – HT Auto

Tesla is planning to roll out its Full Self-Driving (FSD) software to consumers for free. Tesla CEO Elon Musk has said that the company plans to offer customers FSD free for one month as a trial. Musk confirmed via a tweet that all Tesla owners in North America will be able to avail of a one-month free FSD trial, after which the company will roll the software out to consumers in other regions around the world.

By: HT Auto Desk | Updated on: 15 May 2023, 13:11

Aiming to get more users to sample its much-hyped FSD software, Tesla believes a one-month free trial will give consumers a chance to try and test the technology, which is claimed to allow the vehicles to run autonomously without any driver intervention and is a significantly more advanced version of the carmaker's existing semi-autonomous driver-assistance technology, known as Autopilot. Tesla CEO Elon Musk was responding to a tweet from a user who wanted to know when the subscription option for FSD would be released in Canada. The billionaire confirmed that the free trials would be coming soon, paving the way for the subscriptions.

Currently, Tesla is offering the FSD software's beta version to a select number of consumers. A few days back, Musk hinted that Tesla would roll out FSD once it's fully functional and glitch-free. His latest tweet further indicates that the automaker is close to a smoother, more functional FSD and hopes to avoid the embarrassment it faced when the software was first rolled out and found to be glitchy. "Once FSD is super smooth (not just safe), we will roll out a free month trial for all cars in North America. Then extend to rest of world after we ensure it works well on local roads and regulators approve it in that country," Musk wrote in his latest tweet. However, despite hinting at an imminent rollout, neither Tesla nor its CEO has given a specific timeframe for the launch.

First Published Date: 15 May 2023, 13:11 IST

Read more here:
Tesla to roll out free Full Self-Driving software, but there's a catch. Know here - HT Auto

Meta Made Its AI Tech Open-Source. Rivals Say It's a Risky Decision. – The New York Times

In February, Meta made an unusual move in the rapidly evolving world of artificial intelligence: It decided to give away its A.I. crown jewels.

The Silicon Valley giant, which owns Facebook, Instagram and WhatsApp, had created an A.I. technology, called LLaMA, that can power online chatbots. But instead of keeping the technology to itself, Meta released the system's underlying computer code into the wild. Academics, government researchers and others who gave their email address to Meta could download the code once the company had vetted the individual.

Essentially, Meta was giving its A.I. technology away as open-source software, computer code that can be freely copied, modified and reused, providing outsiders with everything they needed to quickly build chatbots of their own.

"The platform that will win will be the open one," Yann LeCun, Meta's chief A.I. scientist, said in an interview.

As a race to lead A.I. heats up across Silicon Valley, Meta is standing out from its rivals by taking a different approach to the technology. Driven by its founder and chief executive, Mark Zuckerberg, Meta believes that the smartest thing to do is share its underlying A.I. engines as a way to spread its influence and ultimately move faster toward the future.

Its actions contrast with those of Google and OpenAI, the two companies leading the new A.I. arms race. Worried that A.I. tools like chatbots will be used to spread disinformation, hate speech and other toxic content, those companies are becoming increasingly secretive about the methods and software that underpin their A.I. products.

Google, OpenAI and others have been critical of Meta, saying an unfettered open-source approach is dangerous. A.I.'s rapid rise in recent months has raised alarm bells about the technology's risks, including how it could upend the job market if it is not properly deployed. And within days of LLaMA's release, the system leaked onto 4chan, the online message board known for spreading false and misleading information.

"We want to think more carefully about giving away details or open sourcing code of A.I. technology," said Zoubin Ghahramani, a Google vice president of research who helps oversee A.I. work. "Where can that lead to misuse?"

Some within Google have also wondered if open-sourcing A.I. technology may pose a competitive threat. In a memo this month, which was leaked on the online publication Semianalysis.com, a Google engineer warned colleagues that the rise of open-source software like LLaMA could cause Google and OpenAI to lose their lead in A.I.

But Meta said it saw no reason to keep its code to itself. The growing secrecy at Google and OpenAI is a "huge mistake," Dr. LeCun said, and a "really bad take on what is happening." He argues that consumers and governments will refuse to embrace A.I. unless it is outside the control of companies like Google and Meta.

"Do you want every A.I. system to be under the control of a couple of powerful American companies?" he asked.

OpenAI declined to comment.

Meta's open-source approach to A.I. is not novel. The history of technology is littered with battles between open source and proprietary, or closed, systems. Some hoard the most important tools that are used to build tomorrow's computing platforms, while others give those tools away. Most recently, Google open-sourced the Android mobile operating system to take on Apple's dominance in smartphones.

Many companies have openly shared their A.I. technologies in the past, at the insistence of researchers. But their tactics are changing because of the race around A.I. That shift began last year when OpenAI released ChatGPT. The chatbot's wild success wowed consumers and kicked up the competition in the A.I. field, with Google moving quickly to incorporate more A.I. into its products and Microsoft investing $13 billion in OpenAI.

While Google, Microsoft and OpenAI have since received most of the attention in A.I., Meta has also invested in the technology for nearly a decade. The company has spent billions of dollars building the software and the hardware needed to realize chatbots and other generative A.I., which produce text, images and other media on their own.

In recent months, Meta has worked furiously behind the scenes to weave its years of A.I. research and development into new products. Mr. Zuckerberg is focused on making the company an A.I. leader, holding weekly meetings on the topic with his executive team and product leaders.

On Thursday, in a sign of its commitment to A.I., Meta said it had designed a new computer chip and improved a new supercomputer specifically for building A.I. technologies. It is also designing a new computer data center with an eye toward the creation of A.I.

"We've been building advanced infrastructure for A.I. for years now, and this work reflects long-term efforts that will enable even more advances and better use of this technology across everything we do," Mr. Zuckerberg said.

Meta's biggest A.I. move in recent months was releasing LLaMA, which is what is known as a large language model, or L.L.M. (LLaMA stands for Large Language Model Meta AI.) L.L.M.s are systems that learn skills by analyzing vast amounts of text, including books, Wikipedia articles and chat logs. ChatGPT and Google's Bard chatbot are also built atop such systems.

L.L.M.s pinpoint patterns in the text they analyze and learn to generate text of their own, including term papers, blog posts, poetry and computer code. They can even carry on complex conversations.

In February, Meta openly released LLaMA, allowing academics, government researchers and others who provided their email address to download the code and use it to build a chatbot of their own.

But the company went further than many other open-source A.I. projects. It allowed people to download a version of LLaMA after it had been trained on enormous amounts of digital text culled from the internet. Researchers call this "releasing the weights," referring to the particular mathematical values learned by the system as it analyzes data.

This was significant because analyzing all that data typically requires hundreds of specialized computer chips and tens of millions of dollars, resources most companies do not have. Those who have the weights can deploy the software quickly, easily and cheaply, spending a fraction of what it would otherwise cost to create such powerful software.
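As an illustration of how cheaply released weights can be put to work, here is a minimal Python sketch using the Hugging Face Transformers library, assuming the LLaMA checkpoint has already been obtained from Meta and converted to the library's format; the local path is a placeholder.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer  # pip install transformers

# Placeholder path to locally stored LLaMA weights converted to the Hugging Face
# format; access to the original checkpoints is gated by Meta.
MODEL_PATH = "/models/llama-7b-hf"

tokenizer = AutoTokenizer.from_pretrained(MODEL_PATH)
model = AutoModelForCausalLM.from_pretrained(
    MODEL_PATH,
    torch_dtype=torch.float16,  # half precision to fit on a single consumer GPU
    device_map="auto",          # requires the `accelerate` package
)

prompt = "Open-source language models are"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```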

As a result, many in the tech industry believed Meta had set a dangerous precedent. And within days, someone released the LLaMA weights onto 4chan.

At Stanford University, researchers used Meta's new technology to build their own A.I. system, which was made available on the internet. A Stanford researcher named Moussa Doumbouya soon used it to generate problematic text, according to screenshots seen by The New York Times. In one instance, the system provided instructions for disposing of a dead body without being caught. It also generated racist material, including comments that supported the views of Adolf Hitler.

In a private chat among the researchers, which was seen by The Times, Mr. Doumbouya said distributing the technology to the public would be like "a grenade available to everyone in a grocery store." He did not respond to a request for comment.

Stanford promptly removed the A.I. system from the internet. "The project was designed to provide researchers with technology that captured the behaviors of cutting-edge A.I. models," said Tatsunori Hashimoto, the Stanford professor who led the project. "We took the demo down as we became increasingly concerned about misuse potential beyond a research setting."

Dr. LeCun argues that this kind of technology is not as dangerous as it might seem. He said small numbers of individuals could already generate and spread disinformation and hate speech. He added that toxic material could be tightly restricted by social networks such as Facebook.

"You can't prevent people from creating nonsense or dangerous information or whatever," he said. "But you can stop it from being disseminated."

For Meta, more people using open-source software can also level the playing field as it competes with OpenAI, Microsoft and Google. If every software developer in the world builds programs using Metas tools, it could help entrench the company for the next wave of innovation, staving off potential irrelevance.

Dr. LeCun also pointed to recent history to explain why Meta was committed to open-sourcing A.I. technology. He said the evolution of the consumer internet was the result of open, communal standards that helped build the fastest, most widespread knowledge-sharing network the world had ever seen.

"Progress is faster when it is open," he said. "You have a more vibrant ecosystem where everyone can contribute."

Read this article:
Meta Made Its AI Tech Open-Source. Rivals Say It's a Risky Decision. - The New York Times