Media Search:



How Mexico has become the enemy of America’s Republicans – The Economist

More than once, as president, Donald Trump mused about firing missiles at the drug labs of Mexican cartels. "No one would know it was us," he declared, before being talked out of the idea. Mark Esper, the then defence secretary, recounted the incident in his memoirs published last year, astonished that bombing a neighbour could be seriously contemplated.


Now the extraordinary is becoming more commonplace as Republicans argue that greater use of military force, or the threat of it, can help control America's southern border and curb the smuggling of fentanyl, a synthetic opioid that is produced illegally in Mexico.

One congressman, Michael McCaul of Texas, has introduced a bill to classify fentanyl as a chemical weapon. Lindsey Graham, a senator from South Carolina, is pushing one to designate Mexican cartels as foreign terrorist organisations. Dan Crenshaw and Mike Waltz, congressmen from Texas and Florida respectively, have proposed another that would authorise "all necessary and appropriate force" against foreign states, organisations or people linked to trafficking fentanyl.

Republican presidential candidates, too, are talking tough. Ron DeSantis, the governor of Florida, has suggested a naval blockade of Mexico-bound shipping to halt the import of fentanyl precursor chemicals from China. Nikki Haley, a former ambassador to the UN, has proposed sending in special forces with a warning to Mexico: "Either you do it or we do it." Tim Scott, the other senator from South Carolina, declared in May, "I will allow the world's greatest military to fight these terrorists."

Tucker Carlson, a former Fox News host beloved by America's hard right, goes further, regarding Mexico as an outright foe. On July 14th, while interrogating Republican presidential hopefuls (minus Mr Trump) at the Family Leadership Summit, a gathering of religious conservatives in Iowa, he grilled Mr Scott about his support for Ukraine: "No Americans killed by Russia. Hundreds of thousands killed by Mexico. But Mexico is our ally and Russia is our enemy. How does that work?" Mr Scott did not demur from the idea that Mexico was an enemy, but said America could deal with Russia and Mexico simultaneously.

Many Mexicans feel they are again the piñata of America's election season, freely beaten by any politician. Earlier this year Mexico's president, Andrés Manuel López Obrador, a left-wing populist who got on with Mr Trump, took issue with the militarist talk, saying: "In addition to being irresponsible, it is an offence to the people of Mexico, a lack of respect for our sovereignty." He warned that he might urge Mexican and Hispanic voters not to cast their ballots for Republicans.

The anti-Mexican mood on America's right is hardening, moving beyond Mr Trump's cheap shots against migrants in 2015, when he said "they are bringing drugs, they're bringing crime, they're rapists". According to tracking polls by YouGov, Republican voters are fast turning against Mexico. Roughly as many think Mexico is now an enemy as consider it an ally, with about 45% supporting each proposition. Democrats are largely unchanged, with about 70% regarding their southern neighbour as an ally. The Republican disenchantment has grown in the past year.

At least three factors may be at play. The first is frustration over fentanyl-related deaths, which rose sharply in 2020 and 2021. The drug has become the biggest killer of Americans aged 18-45, responsible for most of the 70,000 deaths from overdoses of synthetic opioids in 2021. Second, suggests Mark Jones of Rice University, the defeat of Mr Trump unshackled Republicans, freeing them to denounce President Joe Biden for his handling of the border. "There is no better issue for Republicans," he says. "It mobilises their base. And it splits Democrats: whatever Joe Biden does will seem too fascist to the left and too permissive to centrists."

A third factor, adds David Frum, a writer and former speechwriter for President George W. Bush, is the war in Ukraine. Given the MAGA movement's hostility to Ukraine and sympathy for Russia (a position that runs against many voters' views), denouncing Mexico allows them to cast themselves as guardians of the country.

Such policies are gaining an intellectual underpinning through a network of Trump-leaning think-tanks preparing for a future administration. A paper by the Centre for Renewing America, entitled "It's time to wage war on transnational drug cartels", is reported to have caught the attention of Mr Trump, among others. Its author, Ken Cuccinelli, argues that America should be free to take military action in Mexico given that its government does not fully control its territory. Never mind that it would stir deep anti-Americanism, or that treating Mexico like a failed state might turn it into one. "Mexico is not a friend. It is complicit in the drug cartels," says Mr Cuccinelli. "It's time to acknowledge that the relationship has changed."


Visit link:
How Mexico has become the enemy of America's Republicans - The Economist

Disappointingly Low T From Ken: The Republican War on Barbie … – Vanity Fair

Matt Gaetz, the Republican congressman from Florida who was recently under investigation for allegedly having sex with a minor, is apparently not immune to Barbie mania. He and his wife Ginger Gaetz attended a special screening of Barbie at the home of the British ambassador in Washington, D.C. on Monday, and were photographed posing on the pink carpet in their Barbiecore best. But it seems the fashion was the highlight of their night, based on Ginger's tweet the next day.

"The 2023 Barbie movie, unfortunately, neglects to address any notion of faith or family, and tries to normalize the idea that men and women can't collaborate positively (yuck)," writes Ginger, who eloped with Gaetz in 2021; they met and got engaged at Mar-a-Lago. Ginger continues that while she liked Margot Robbie's performance and the stunning costume design, she was not a fan of the "unfair treatment of pregnant Barbie Midge" or the "disappointingly low T from Ken."

She's not the only member of the family who's a fan of Robbie. Gaetz, responding to a tweet pondering why he would see a movie with a trans Barbie given his beliefs, singled out Robbie in a spelling-challenged comeback.

Honestly, we're getting some mixed messages from the Gaetz crew here. Ginger would prefer you not see it at all and wear your pink gear to Oppenheimer (a film where Republicans come off great!), while Matt thinks your choices are to buy a Barbie ticket or let the terrorists win. As with everything else in the right-wing backlash to Barbie, it probably only benefits Barbie. With Ted Cruz also out there railing against the movie for its alleged promotion of Chinese communism, and Barbie still on track to make as much as $100 million domestically in its opening weekend, it remains Barbie's world, and he's just Matt Gaetz.

Read the original here:
Disappointingly Low T From Ken: The Republican War on Barbie ... - Vanity Fair

Warner Calls on Biden Administration to Remain Engaged in AI … – Senator Mark Warner

WASHINGTON – U.S. Sen. Mark R. Warner (D-VA), Chairman of the Senate Select Committee on Intelligence, today urged the Biden administration to build on its recently announced voluntary commitments from several prominent artificial intelligence (AI) leaders in order to promote greater security, safety, and trust in the rapidly developing AI field.

As AI is rolled out more broadly, researchers have repeatedly demonstrated a number of concerning, exploitable weaknesses in prominent products, including abilities to generate credible-seeming misinformation, develop malware, and craft sophisticated phishing techniques. On Friday, the Biden administration announced that several AI companies had agreed to a series of measures that would promote greater security and transparency. Sen. Warner wrote to the administration to applaud these efforts and laid out a series of next steps to bolster this progress, including extending commitments to less capable models, seeking consumer-facing commitments, and developing an engagement strategy to better address security risks.

"These commitments have the potential to shape developer norms and best practices associated with leading-edge AI models. At the same time, even less capable models are susceptible to misuse, security compromise, and proliferation risks," Sen. Warner wrote. "As the current commitments stand, leading vendors do not appear inclined to extend these vital development commitments to the wider range of AI products they have released that fall below this threshold or have been released as open source models."

The letter builds on Sen. Warner's continued advocacy for the responsible development and deployment of AI. In April, Sen. Warner directly expressed concerns to several AI CEOs about the potential risks posed by AI, and called on companies to ensure that their products and systems are secure.

The letter also affirms Congress's role in regulating AI, and expands on the annual Intelligence Authorization Act, legislation that recently passed unanimously through the Senate Select Committee on Intelligence. Sen. Warner urges the administration to adopt the strategy outlined in this pending bill as well as work with the FBI, CISA, ODNI, and other federal agencies to fully address the potential risks of AI technology.

Sen. Warner, a former tech entrepreneur, has been a vocal advocate for Big Tech accountability and a stronger national posture against cyberattacks and misinformation online. In addition to his April letters, he has introduced several pieces of legislation aimed at addressing these issues, including the RESTRICT Act, which would comprehensively address the ongoing threat posed by technology from foreign adversaries; the SAFE TECH Act, which would reform Section 230 and allow social media companies to be held accountable for enabling cyber-stalking, online harassment, and discrimination on social media platforms; and the Honest Ads Act, which would require online political advertisements to adhere to the same disclaimer requirements as TV, radio, and print ads.

A copy of the letter can be found here and below.

Dear President Biden,

I write to applaud the Administration's significant efforts to secure voluntary commitments from leading AI vendors related to promoting greater security, safety, and trust through improved development practices. These commitments, largely applicable to these vendors' most advanced products, can materially reduce a range of security and safety risks identified by researchers and developers in recent years. In April, I wrote to a number of these same companies, urging them to prioritize security and safety in their development, product release, and post-deployment practices. Among other things, I asked them to fully map dependencies and downstream implications of compromise of their systems; focus greater financial, technical and personnel resources on internal security; and improve their transparency practices through greater documentation of system capabilities, system limitations, and training data.

These commitments have the potential to shape developer norms and best practices associated with leading-edge AI models. At the same time, even less capable models are susceptible to misuse, security compromise, and proliferation risks. Moreover, a growing roster of highly-capable open source models has been released to the public and would benefit from similar pre-deployment commitments contained in a number of the July 21st obligations. As the current commitments stand, leading vendors do not appear inclined to extend these vital development commitments to the wider range of AI products they have released that fall below this threshold or have been released as open source models.

To be sure, responsibility ultimately lies with Congress to develop laws that advance consumer and patient safety, address national security and cyber-crime risks, and promote secure development practices in this burgeoning and highly consequential industry and in the downstream industries integrating their products. In the interim, the important commitments your Administration has secured can be bolstered in a number of important ways.

First, I strongly encourage your Administration to continue engagement with this industry to extend all of these commitments more broadly to less capable models that, in part through their wider adoption, can produce the most frequent examples of misuse and compromise.

Second, it is vital to build on these developer- and researcher-facing commitments with a suite of lightweight consumer-facing commitments to prevent the most serious forms of abuse. Most prominent among these should be commitments from leading vendors to adopt development practices, licensing terms, and post-deployment monitoring practices that prevent non-consensual intimate image generation, social-scoring, real-time facial recognition (in contexts not governed by existing legal protections or due process safeguards), and proliferation activity in the context of malicious cyber activity or the production of biological or chemical agents.

Lastly, the Administration's successful high-level engagement with the leadership of these companies must be complemented by a deeper engagement strategy to track national security risks associated with these technologies. In June, the Senate Select Committee on Intelligence on a bipartisan basis advanced our annual Intelligence Authorization Act, a provision of which directed the President to establish a strategy to better engage vendors, downstream commercial users, and independent researchers on the security risks posed by, or directed at, AI systems.

This provision was spurred by conversations with leading vendors, who confided that they would not know how best to report malicious activity such as suspected intrusions of their internal networks, observed efforts by foreign actors to generate or refine malware using their tools, or identified activity by foreign malign actors to generate content to mislead or intimidate voters. To be sure, a highly-capable and well-established set of resources, processes, and organizations (including the Cybersecurity and Infrastructure Security Agency, the Federal Bureau of Investigation, and the Office of the Director of National Intelligence's Foreign Malign Influence Center) exists to engage these communities, including through counter-intelligence education and defensive briefings. Nonetheless, it appears that these entities have not been fully activated to engage the range of key stakeholders in this space. For this reason, I would encourage you to pursue the contours of the strategy outlined in our pending bill.

Thank you for your Administrations important leadership in this area. I look forward to working with you to develop bipartisan legislation in this area.

###

Read more here:

Warner Calls on Biden Administration to Remain Engaged in AI ... - Senator Mark Warner

Advisory report begins integration of generative AI at U-M | The … – The University Record

A committee looking into how generative artificial intelligence affects University of Michigan students, faculty, researchers and staff has issued a report that attempts to lay a foundation for how U-M will live and work with this new technology.


The report is available to the public at a website created by the committee and Information and Technology Services to guide faculty, staff and students in using GenAI responsibly and effectively in their daily lives.

U-M also has announced it will release its own suite of university-hosted GenAI services that are focused on providing safe and equitable access to AI tools for all members of the U-M community. They are expected to be released before students return to campus this fall.

"GenAI is shifting paradigms in higher education, business, the arts and every aspect of our society. This report represents an important first step in U-M's intention to serve as a global leader in fostering the responsible, ethical and equitable use of GenAI in our community and beyond," said Laurie McCauley, provost and executive vice president for academic affairs.

The report offers recommendations on everything from how instructors can effectively use GenAI in their classrooms to how students can protect themselves from the risks of sharing sensitive data when using popular GenAI tools such as ChatGPT.

"More than anything, the intention of the report is to be a discussion starter," said Ravi Pendse, vice president for information technology and chief information officer. "We have heard overwhelmingly from the university community that they needed some direction on how to work with GenAI, particularly before the fall semester started. We think this report and the accompanying website are a great start to some much-needed conversations."

McCauley and Pendse sponsored the creation of the Generative Artificial Intelligence Advisory Committee in May. Since then, the 18-member committee, composed of faculty, staff and students from across all segments of U-M, has worked together to provide vital insights into how GenAI technology could affect their communities.

"Our goals were to present strategic directions and guidance on how GenAI can enhance the educational experience, enrich research capabilities, and bolster U-M's leadership in this era of digital transformation," said committee chair Karthik Duraisamy, professor of aerospace engineering and of mechanical engineering, and director of the Michigan Institute for Computational Discovery and Engineering.

"Committee members put in an enormous amount of work to identify the potential benefits of GenAI to the diverse missions of our university, while also shedding light on the opportunities and challenges of this rapidly evolving technology."

"This is an exciting time," McCauley added. "I am impressed by the work of this group of colleagues. Their report asks important questions and provides thoughtful guidance in a rapidly evolving area."

Pendse stressed the GenAI website will be constantly updated and will serve as a hub for the various discussions related to the topic across U-M.

"We know that almost every group at U-M is having their own conversations about GenAI right now," Pendse said. "With the release of this report and the website, we hope to create a knowledge hub where students, faculty and staff have one central location where they can come looking for advice. I am proud that U-M is serving both as a local and global leader when it comes to the use of GenAI."

Read the original here:

Advisory report begins integration of generative AI at U-M | The ... - The University Record

From Hollywood to Sheffield, these are the AI stories to read this month – World Economic Forum

AI regulation is progressing across the world as policymakers try to protect against the risks it poses without curtailing AI's potential.

In July, Chinese regulators introduced rules to oversee generative AI services. Their focus stems from a concern over the potential for generative AI to create content that conflicts with Beijing's viewpoints.

The success of ChatGPT and similarly sophisticated AI bots has prompted Chinese technology firms to announce their own entries into the fray. These include Alibaba, which has launched an AI image generator to trial among its business customers.

The new regulation requires generative AI services in China to have a licence, conduct security assessments, and adhere to socialist values. If "illegal" content is generated, the relevant service provider must stop this, improve its algorithms, and report the offending material to the authorities.

The new rules relate only to generative AI services for the public, not to systems developed for research purposes or niche applications, striking a balance between keeping close tabs on AI and making China a leader in the field.

The use of AI in film and TV is one of the issues behind the ongoing strike by Hollywood actors and writers that has led to production stoppages worldwide. As their unions renegotiate contracts, workers in the entertainment sector have come out to protest against their work being used to train AI systems that could ultimately replace them.

The AI proposal put forward by the Alliance of Motion Picture and Television Producers reportedly stated that background performers would receive one day's pay for getting their image scanned digitally. This scan would then be available for use by the studios from then on.

China is not alone in creating a framework for AI. In the US, a new law in New York City regulates the influence of AI on recruitment as more of the hiring process is handed over to algorithms.

From browsing CVs and scoring interviews to scraping social media for personality profiles, recruiters are increasingly using the capabilities of AI to speed up and improve hiring. To protect workers against a potential AI bias, New York City's local government is mandating greater transparency about the use of AI and annual audits for potential bias in recruitment and promotion decisions.

A group of AI experts, including specialists from Meta, Google, and Samsung, has created a new framework for developing AI products safely. It consists of a checklist with 84 questions for developers to consider before starting an AI project. The World Ethical Data Foundation is also asking the public to submit their own questions ahead of its next conference. Since its launch, the framework has gained support from hundreds of signatories in the AI community.

In response to the uncertainties surrounding generative AI and the need for robust AI governance frameworks to ensure responsible and beneficial outcomes for all, the Forum's Centre for the Fourth Industrial Revolution (C4IR) has launched the AI Governance Alliance.

The Alliance will unite industry leaders, governments, academic institutions, and civil society organizations to champion responsible global design and release of transparent and inclusive AI systems.

Meanwhile, generative AI is gaining a growing user base, sparked by the launch of ChatGPT last November. A survey by Deloitte found that more than a quarter of UK adults have used generative AI tools like chatbots. This is even higher than the adoption rate of voice-assisted speakers like Amazon's Alexa. Around one in 10 people also use AI at work.

Nearly a third of college students have admitted to using ChatGPT for written assignments such as college essays and high-school art projects. Companies providing AI-detecting tools have been run off their feet as teachers seek help identifying AI-driven cheating. With only one full academic semester having passed since the launch of ChatGPT, AI-detection companies are predicting even greater disruption ahead, and schools will need to take comprehensive action.

Chart: 30% of college students use ChatGPT for assignments, to varying degrees (image: Intelligent.com).

Another area where AI could ring in fundamental changes is journalism. The New York Times, the Washington Post, and News Corp are among publishers talking to Google about using artificial intelligence tools to assist journalists in writing news articles. The tools could help with options for headlines and writing styles but are not intended to replace journalists. News about the talks comes after the Associated Press announced a partnership with OpenAI for the same purpose. However, some news outlets have been hesitant to adopt AI due to concerns about incorrect information and differentiating between human and AI-generated content.

Developers of robots and autonomous machines could learn lessons from honeybees when it comes to making fast and accurate decisions, according to scientists at the University of Sheffield. Bees trained to recognize different coloured flowers took only 0.6 seconds on average to decide to land on a flower they were confident would have food, and were just as quick to reject flowers they thought would not. They also made more accurate decisions than humans, despite their small brains. The scientists have now built these findings into a computer model.

Generative AI is set to impact a vast range of areas. For the global economy, it could add trillions of dollars in value, according to a new report by McKinsey & Company. It also found that the use of generative AI could lead to labour productivity growth of 0.1-0.6% annually through 2040.

At the same time, generative AI could lead to an increase in cyberattacks on small and medium-sized businesses, which are particularly exposed to this risk. AI makes new, highly sophisticated tools available to cybercriminals. However, it can be used to create better security tools to detect attacks and deploy automatic responses, according to Microsoft.

Because AI systems are designed and trained by humans, they can generate biased results due to the design choices made by developers. AI may therefore be prone to perpetuating inequalities, but this can be mitigated by training AI systems to recognize and correct for their own bias.

Read more from the original source:

From Hollywood to Sheffield, these are the AI stories to read this month - World Economic Forum