Privacy watchdog fractures over 702 opinion – POLITICO

"Some of the Majority's recommendations are sound, and could provide helpful additional protections for privacy and civil liberties," write Beth Williams and Richard DiZinno in an annex to the report. "Others would cause serious damage to the country and our national security, while negatively impacting the privacy of U.S. persons."

On Thursday morning, those disagreements exploded into a complete breakdown of the normally low-profile privacy watchdog, with Williams and DiZinno issuing a press statement blasting the report as "contrary to the evidence and unmoored from the law."

"We voted against releasing this Report on Section 702 of the Foreign Intelligence Surveillance Act; it was approved only 3 to 2," the press release reads. "Therefore, we did not think it appropriate to legitimize its release with our participation today."

The 3-2 split marks a sharp break from the board's last review of the same law, in 2014, when it unanimously approved a baseline of 10 recommendations on better protecting Americans from improper eavesdropping.

The panel's two Republicans, including Beth Williams, criticized the roughly 300-page analysis and its 19 recommendations as deeply flawed. | Mariam Zuhaib/AP Photo

Codified into law almost two decades ago amid the global war on terrorism, Section 702 allows the National Security Agency to collect the texts, emails and other digital communications of foreigners located abroad from U.S. tech providers, like Facebook and Google.

But when Americans communicate with targeted foreigners, their messages are swept up into a repository of data collected under the law. Four U.S. intelligence agencies (the NSA, CIA, National Counterterrorism Center and the FBI) can then query that database for information on U.S. citizens without acquiring a warrant.

Critics of the law have long alleged the authority offers a backdoor around Americans' privacy rights. In recent years, a special court overseeing the program has unearthed systematic privacy violations by the FBI, fueling a new, largely bipartisan push to overhaul the spy tool, which will expire at the end of the year absent congressional action.

A court opinion released in May from the Foreign Intelligence Surveillance Court, which oversees the program, found that FBI personnel had improperly accessed the database to seek information on individuals at the January 6 Capitol riot, the protests following George Floyd's death and even donors to a U.S. congressional campaign.

In the report, expected to be released Thursday, the three Democrats concluded that Section 702 remains "extraordinarily valuable" in countering a wide range of national security threats, and should be reauthorized. But they argued that the eavesdropping program presents significant privacy risks to Americans and could be reined in without undermining its intelligence value.

"The Board believes that the privacy and civil liberties risks posed by Section 702 can be reduced while preserving the program's value in protecting Americans' national security," they concluded.

Of the board's 19 recommendations, the most significant and contentious is likely to be a requirement that all U.S. spies and law enforcement personnel receive approval from the FISC each time they want to query the 702 repository for information on U.S. citizens.

"The scale of U.S. person queries, the number of compliance issues surrounding U.S. person queries, and the failure of current law and procedures to protect U.S. persons compels the Board to recommend a new approach," the three Democratic members wrote.

The FISC would accept or reject each request using roughly the same compliance criteria the four agencies follow internally today: that the search is reasonably likely to return foreign intelligence information or, in the case of the FBI, evidence of a crime.

The requirement falls short of demanding that the spy agencies acquire a probable cause warrant before combing through the database for information on Americans, one of the fixes pro-reform lawmakers have pushed this year. The board also suggested carveouts in case of an emergency or if the agencies receive express consent from the object of a query.

Still, the proposal will come as a major disappointment to the White House, which has argued that any form of court approval for those queries would significantly undercut national security. It contends warrantless searches are critical to identifying and protecting U.S. individuals who have been targeted by foreign intelligence services, terrorists or cyber criminals.

The recommendation was one of the key points of contention behind the boards split.

Williams and DiZinno, the two Republicans, argued that requiring a FISC review of searches would make it "bureaucratically infeasible" to conduct U.S. person searches and "effectively destroy" the crucial portion of the program that enables the U.S. government to prevent, among other things, terrorist attacks on our soil.

For her part, the chair of the board, Sharon Bradford Franklin, recommended Congress go further and require a probable cause warrant in cases linked to domestic crime.

The Privacy and Civil Liberties Oversight Board is given access to classified information on Section 702, making it a trusted voice on the controversial and arcane eavesdropping program. But the split guidance means it is unlikely to settle the surveillance debate on Capitol Hill or within the White House.

Sharon Bradford Franklin, chair of the Privacy and Civil Liberties Oversight Board, recommended Congress go further and require a probable cause warrant in cases linked to domestic crime. | Mariam Zuhaib/AP Photo

A bipartisan coalition of civil liberties-focused Democrats and conservative Republicans is pushing for a warrant requirement, a step that has long sparked heartburn among intelligence community allies in Congress and adamant opposition from the White House.

But conservatives who are calling for changes to the spying tool are also seeking to push forward reforms that extend beyond Section 702 and tap into broader concerns within the party about politicization of the intelligence community.

In their annex to the report, Williams and DiZinno offer seven independent recommendations, which are organized into three policy objectives.

Two of those are likely to resonate strongly with conservatives: procedural, cultural and structural changes aimed at re-establishing public trust in the FBI, and measures to guard against the political weaponization of Section 702.

"The Majority's Report fails to address many of these concerns, focusing instead on a scattershot list of old ideas disconnected from the current moment," they write.

For the board's three Democrats, a through line of the report's 19 recommendations is that Congress should do more to limit how frequently Americans' data is vacuumed into the database in the first place.

The board recommended that lawmakers codify stricter guidelines about when foreigners can be targeted by U.S. spies, introduce post-hoc reviews of new eavesdropping requests, and force the intelligence community to try to estimate the volume of data it collects incidentally on Americans each year.

Overall, nearly 250,000 foreigners were targeted under Section 702 last year, a figure that has increased 276 percent since 2013, the board noted.

"Although Section 702 targets can only be non-U.S. persons, through incidental collection the government acquires a substantial amount of U.S. persons' communications as well," the report reads. "While the term may make this collection sound insignificant, it should not be understood as occurring infrequently or as an inconsequential part of the Section 702 program."

NSA is creating a hub for AI security, Nakasone says – The Record from Recorded Future News

The National Security Agency is consolidating its various artificial intelligence efforts into a new hub, its director announced Thursday.

The Artificial Intelligence Security Center will become the spy agency's focal point for AI activities such as leveraging foreign intelligence insights, helping to develop best practices guidelines for the fast-developing technology and creating risk frameworks for AI security, Army Gen. Paul Nakasone said during an event at the National Press Club in Washington.

The new entity will be housed within the agency's Cybersecurity Collaboration Center and help industry understand the threats against their intellectual property and collaborate to help prevent and eradicate threats, Nakasone told the audience, adding it would team with organizations throughout the Defense Department, intelligence community, academia and foreign partners.

The announcement comes after the NSA and U.S. Cyber Command, which Nakasone also helms, recently finished separate reviews of how they would use artificial intelligence in the future. The Central Intelligence Agency also said it plans to launch its own artificial intelligence-based chatbot.

One of the findings of the study was a clear need to focus on AI security, according to Nakasone, who noted NSA has particular responsibilities for such work because the agency is the designated federal manager for national security systems and already has extensive ties to the sprawling defense industrial base.

"While U.S. firms are increasingly acquiring and developing generative AI technology, foreign adversaries are also moving quickly to develop and apply their own AI, and we anticipate they will begin to explore and exploit vulnerabilities of U.S. and allied AI systems," the four-star warned.

He described AI security as protecting systems from learning, doing and revealing the wrong thing, as well as safeguarding them from digital attacks and ensuring malicious foreign actors can't steal America's innovative AI capabilities.

Nakasone did not specify who would lead the center or how large it might grow.

"Today, the U.S. leads in this critical area, but this lead should not be taken for granted," he said.

Google adds a switch for publishers to opt out of becoming AI training data – The Verge

Google just announced it's giving website publishers a way to opt out of having their data used to train the company's AI models while remaining accessible through Google Search. The new tool, called Google-Extended, allows sites to continue to get scraped and indexed by crawlers like the Googlebot while avoiding having their data used to train AI models as they develop over time.

The company says Google-Extended will let publishers manage whether their sites help improve Bard and Vertex AI generative APIs, adding that web publishers can use the toggle to control access to content on a site. Google confirmed in July that it's training its AI chatbot, Bard, on publicly available data scraped from the web.

Google-Extended is available through robots.txt, also known as the text file that informs web crawlers whether they can access certain sites. Google notes that as AI applications expand, it will continue to explore additional machine-readable approaches to choice and control for web publishers and that it will have more to share soon.
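As a rough sketch of what that opt-out might look like (the exact rules are up to each publisher), a site that wants to stay in Google Search while withholding its content from AI training could add a Google-Extended rule to its robots.txt alongside its normal Googlebot rules:

User-agent: Google-Extended
Disallow: /

User-agent: Googlebot
Allow: /

The Google-Extended entry tells Google not to use the site's pages to improve Bard and Vertex AI, while the Googlebot entry (shown here only for clarity, since crawling is allowed by default) leaves ordinary search indexing untouched.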

Already, many sites have moved to block the web crawler that OpenAI uses to scrape data and train ChatGPT, including The New York Times, CNN, Reuters, and Medium. However, there have been concerns over how to block out Google. After all, websites can't close off Google's crawlers completely, or else they won't get indexed in search. This has led some sites, such as The New York Times, to legally block Google instead by updating their terms of service to ban companies from using their content to train AI.

Google will let publishers hide their content from its insatiable AI – Engadget

Google has announced a new control in its robots.txt indexing file that would let publishers decide whether their content will "help improve Bard and Vertex AI generative APIs, including future generations of models that power those products." The control is a crawler called Google-Extended, and publishers can add it to the file in their site's documentation to tell Google not to use it for those two APIs. In its announcement, the company's vice president of "Trust" Danielle Romain said it's "heard from web publishers that they want greater choice and control over how their content is used for emerging generative AI use cases."

Romain added that Google-Extended "is an important step in providing transparency and control that we believe all providers of AI models should make available." As generative AI chatbots grow in prevalence and become more deeply integrated into search results, the way content is digested by things like Bard and Bing AI has been of concern to publishers.

While those systems may cite their sources, they do aggregate information that originates from different websites and present it to the users within the conversation. This might drastically reduce the amount of traffic going to individual outlets, which would then significantly impact things like ad revenue and entire business models.

Google said that when it comes to training AI models, the opt-outs will apply to the next generation of models for Bard and Vertex AI. Publishers looking to keep their content out of things like Search Generative Experience (SGE) should continue to use the Googlebot user agent and the NOINDEX meta tag in the robots.txt document to do so.
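As a point of reference, the noindex directive itself normally lives in a page's HTML head (or in an X-Robots-Tag response header) rather than inside robots.txt; a minimal sketch of a page opting out this way might be:

<!-- Placed in the page's <head>: asks crawlers not to index this page -->
<meta name="robots" content="noindex">

Robots.txt rules addressed to the Googlebot user agent then govern whether the page is crawled at all.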

Romain points out that "as AI applications expand, web publishers will face the increasing complexity of managing different uses at scale." This year has seen an explosion in the development of tools based on generative AI, and with search being such a huge way people discover content, the state of the internet looks set to undergo a huge shift. Google's addition of this control is not only timely, but indicates it's thinking about the way its products will impact the web.

Update, September 28 at 5:36pm ET: This article was updated to add more information about how publishers can keep their content out of Google's search and AI results and training.

‘The Creator’ review: This drama about AI fails to take on a life of its … – NPR

Madeleine Yuna Voyles plays Alphie, a pensive young robot child in The Creator. | 20th Century Studios

The use of AI in Hollywood has been one of the most contentious issues in the writers and actors strikes, and the industry's anxiety about the subject isn't going away anytime soon. Some of that anxiety has already started to register on-screen. A mysterious robotic entity was the big villain in the most recent Mission: Impossible film, and AI is also central to the ambitious but muddled new science-fiction drama The Creator.

Set decades into the future, the movie begins with a prologue charting the rise of artificial intelligence. Here it's represented as a race of humanoid robots that in time become powerful enough to detonate a nuclear weapon and wipe out the entire city of Los Angeles.

As a longtime LA resident who's seen his city destroyed in countless films before this one, I couldn't help but watch this latest cataclysm with a chuckle and a shrug. It's just part of the setup in a story that patches together numerous ideas from earlier, better movies. After the destruction of LA, we learn, the U.S. declared war on AI and hunted the robots to near-extinction; the few that still remain are hiding out in what is now known as New Asia.

The director Gareth Edwards, who wrote the script with Chris Weitz, has cited Blade Runner and Apocalypse Now as major influences. And indeed, there's something queasy and heavy-handed about the way Edwards evokes the Vietnam War with images of American soldiers terrorizing the poor Asian villagers whom they suspect of sheltering robots.

John David Washington plays Joshua Taylor, a world-weary ex-special-forces operative. | 20th Century Studios

The protagonist is a world-weary ex-special-forces operative named Joshua Taylor, played by John David Washington. He's reluctantly joined the mission to help destroy an AI superweapon said to be capable of wiping out humanity for good. Amid the battle that ensues, Joshua manages to track down the weapon, which, in a twist that echoes earlier sci-fi classics like Akira and A.I., turns out to be a pensive young robot child, played by the excellent newcomer Madeleine Yuna Voyles.

Joshua's superior, played by Allison Janney, tells him to kill the robot child, but he doesn't. Instead, he goes rogue and on the run with the child, whom he calls Alpha, or Alphie. Washington doesn't have much range or screen presence, but he and Voyles do generate enough chemistry to make you forget you're watching yet another man tag-teaming with a young girl, a trope familiar from movies as different as Paper Moon and Léon: The Professional.

Joshua's betrayal is partly motivated by his grief over his long-lost love, a human woman named Maya who allied herself with the robots; she's played by an underused Gemma Chan. One of the more bothersome aspects of The Creator is the way it reflexively equates Asians with advanced technology; it's the latest troubling example of "techno-orientalism," a cultural concept that has spurred a million Blade Runner term papers.

In recycling so many spare parts, Edwards, best known for directing the Star Wars prequel Rogue One, is clearly trying to tap into our memories of great Hollywood spectacles past. To his credit, he wants to give us the kind of philosophically weighty, visually immersive science-fiction blockbuster that the studios rarely attempt anymore. The most impressive aspect of The Creator is its world building; much of the movie was shot on location in different Asian countries, and its mix of real places and futuristic design elements feels more plausible and grounded than it would have if it had been rendered exclusively in CGI.

But even the most strikingly beautiful images, like the one of high-tech laser beams shimmering over a beach at sunset, are tethered to a story and characters that never take on a life of their own. Not even the great Ken Watanabe can breathe much life into his role as a stern robo-warrior who does his part to help Joshua and Alphie on their journey.

In the end, Edwards mounts a sincere but soggy plea for human-robot harmony, arguing that AI isn't quite the malicious threat it might seem. That's a sweet enough sentiment, though it's also one of many reasons I left The Creator asking myself: Did an AI write this?
