Media Search:



George Santos, liar and fantasist, fits in with the Republican party just fine – The Guardian

Opinion

Even where the technicalities of the apparent malfeasance are different, the Republican spirit is the same

When news broke on Tuesday afternoon that the justice department was indicting George Santos, the disgraced Republican Long Island congressman whose election to the House of Representatives in 2022 was enabled by a series of lies about his background and elaborate, inventive frauds, it was at first hard to think of just what he was being indicted for. George Santos, after all, is alleged to have been so prolifically criminal in his 34 years that one imagines law enforcement would have a hard time narrowing things down.

Would Santos be charged over the fake pet charity he seems to have invented, collecting money for things like surgery for the beloved dog of a veteran, which was never turned over to the animal's owner? Or would he face charges stemming from his lies about his professional background, like the claim he made during his most recent congressional campaign, wholly false, that he used to work for Goldman Sachs, or his bizarre story, also a fabrication, about having been a college volleyball star?

Would it be something like the check fraud he allegedly committed in Brazil as a teenager, or like the bad check he supposedly wrote to, of all people, a set of Amish dog breeders in Pennsylvania?

What George Santos has been indicted for is not one of his funnier or more colorful scandals, but something extremely typical in Washington: lying about money. On Wednesday, prosecutors at a federal courthouse in Central Islip, New York, charged Santos with seven counts of wire fraud, three counts of money laundering, two counts of making false statements to the House of Representatives, and one count of theft of public funds. He pleaded not guilty, and was released on a half-million dollar bond.

The indictment against Santos is sprawling and complicated, reflecting the expansiveness of the congressman's alleged frauds, but the allegations that federal prosecutors make fall essentially into three columns: first, they charge that Santos set up a fraudulent LLC, to which he directed donors to give money that he claimed would be spent on his political campaign. Instead, he used the funds to make car payments, pay off his debts and, notably, to buy expensive clothes.

Second, the Department of Justice charges that Santos defrauded the government when he applied for and received special Covid unemployment benefits in New York, despite drawing a salary of approximately $120,000 from an investment firm in Florida. (That firm, Harbor City Capital, is itself alleged to be a classic Ponzi scheme.)

And third, the indictment claims that Santos falsified financial disclosure forms related to his congressional seat, falsely certifying to Congress that he drew a $750,000 salary and between $1m and $5m in dividends, and had between $100,000 and $250,000 in a checking account and between $1m and $5m in savings. It was often remarked upon with wonder, and not a small amount of alarm, that Santos, who had not long before his election to Congress struggled to pay rent and faced eviction, was suddenly in possession of so much income and such apparent good luck. How, exactly, had Santos come across all that money? Now, a federal indictment alleges that he simply didn't: he made it up, like so many college volleyball championships.

Maybe it's for the best that Santos is being charged, ultimately, for the most typically white-collar of his crimes: it will help dispel the myth that he is not a typical Republican. Since the revelation of Santos's seemingly bottomless dishonesty and malfeasance, a number of House Republicans have tried to distance themselves from the congressman. Nancy Mace, a South Carolina congresswoman trying to style herself as a moderate, called for his resignation; so did Max Miller, of Ohio, over Santos's false claims of Jewish heritage and of having lost relatives in the Holocaust. Reportedly, Senator Mitt Romney encountered Santos at the State of the Union address and told him, with his signature air of the put-upon patrician: "You don't belong here."

But doesn't George Santos belong in the modern Republican party? After all, how different, really, is Santos's alleged scheme to defraud donors for his own enrichment from Donald Trump's insistence, in the aftermath of the 2020 election, that his supporters should donate to him to fight election fraud that didn't exist? How different is Santos's use of his congressional campaign to raise funds for fancy clothes from Clarence Thomas's use of his seat on the supreme court to get fancy vacations on Harlan Crow's dime? How different is George Santos's alleged falsification of his financial records to Congress from the conspicuous omissions on the financial disclosure forms required of justices of the supreme court?

Even where the technicalities of the malfeasance are different, the Republican spirit is the same, in everyone from George Santos to Clarence Thomas to Donald Trump: the use of public office for personal enrichment, the contempt for the public interest, the indignant declarations that any efforts to hold them accountable are partisan, illegitimate and conducted in bad faith. Outside the federal courthouse on Wednesday, George Santos channeled Trump, calling the indictment against him a "witch-hunt". I'd say he fits in with the Republican party just fine.



Dozens of House Republicans demand Biden take cognitive test or drop out of 2024 race – Fox News

FIRST ON FOX: Texas GOP Rep. Ronny Jackson led a letter of 61 Republicans demanding President Biden take a cognitive test or pull out of the 2024 presidential race.

Jackson, a former White House doctor, and the Republicans wrote that, in light of Biden's 2024 presidential re-election campaign announcement, they were concerned with his "current cognitive state and ability to serve another term as President."

"We believe that, regardless of gender, age, or political party, all Presidents should document and demonstrate sound mental abilities," the Republicans wrote.



"While you have undergone two physical exams during your presidency, one on November 19, 2021, and another on February 16, 2023, there is no indication you have had any cognitive assessment, or if you have, such results were concealed from the public," they continued.

The Republicans wrote that, following Biden's February physical, White House physician Kevin O'Connor "claimed you were a healthy, vigorous, 80-year-old male, who is fit to successfully execute the duties of the Presidency, to include those as Chief Executive, Head of State, and Commander in Chief."

"However, this is a statement based on a physical exam that excluded the evaluation of your cognitive and mental abilities, which is where our concerns, and the concerns of the American public, lie," they wrote.

Jackson and his Republican colleagues noted the three separate "letters on this issue" sent since Biden took office, writing that the president "failed to respond to any of these letters" and "actively ignored the requests of over 50 Members of Congress" to submit to a cognitive exam.

"While you and your staff dismiss these inquiries, the American people continue to question your mental and cognitive abilities and lose faith in your ability to lead this country," they wrote, pointing to a Harvard CAPS-Harris Poll that found 57% of voters "do not believe you are mentally fit to serve as President or have doubts about your mental fitness."

"When you first announced your bid to run in the 2020 presidential election, questions and concerns were raised surrounding your cognitive abilities. Those concerns have only increased because your mental decline and forgetfulness have become more apparent since you were elected. Over the past two years, public appearances where you shuffle your feet, trip when you walk, slur your words, forget names, lose your train of thought, and appear momentarily confused have become more of a common occurrence."

"These incidents are so common and noticeable that if you search Biden gaffes online, over 14,000,000 results appear," the lawmakers wrote. "These incidents and the rate at which they occur are highly concerning and cast doubt upon your ability to execute the duties required of the President of the United States."

The Republicans wrote that U.S. citizens "should have absolute confidence in their President and know that he or she can perform their duties as Head of State and Commander in Chief."

Additionally, the lawmakers said the "American people deserve complete transparency" on the presidents "mental capabilities" and that the countrys national security "relies on a cognitively sound Commander in Chief," blasting Biden as not fitting "that bill."

"Therefore, we call on you to either renounce your bid for reelection or submit to a clinically validated cognitive screening assessment and make those results available to the public," the Republicans wrote. "Successful completion of this type of exam will ease the minds of the concerned American public and prove that you are capable of performing the duties required by the President of the United States."


"More importantly, failure of such a test will allow you to come to terms with the many failures of your administration over the past two years and allow a mentally fit leader to emerge," they continued.

White House spokesperson Andrew Bates pointed Fox News Digital to his previous comments on Jackson's calls for Biden to take a cognitive test.

"I honestly dont care about Ronny Jacksons look at me routine," Bates said. "But if yall get any mail from Nick Riviera, please dont be a stranger."

Bates was referring to "The Simpsons" character Dr. Nick Riviera, better known as Dr. Nick, a quack medical doctor with shady credentials.


Jackson has been vocal in his calls for Biden to take a cognitive test since the president took office in 2021.

The Texas Republican's letter has been circulating since last month and has garnered dozens of GOP signatures, including prominent lawmakers such as House Republican Conference Chairwoman Elise Stefanik of New York, House chief deputy whip Guy Reschenthaler of Pennsylvania, and Texas Rep. Dan Crenshaw.

Fox News Digital's Tyler Olson contributed reporting.


As AutoGPT released, should we be worried about AI? – Cosmos

A new artificial intelligence tool, arriving just months after ChatGPT, appears to offer a big leap forward: it can improve itself without human intervention.

The artificial intelligence (AI) tool AutoGPT builds on GPT-4 from OpenAI, the company that brought us ChatGPT last year. AutoGPT promises to overcome the limitations of large language models (LLMs) such as ChatGPT.

ChatGPT exploded onto the scene at the end of 2022 for its ability to respond to text prompts in a (somewhat) human-like and natural way. It has caused concern for occasionally including misleading or incorrect information in its responses and for its potential to be used for plagiarising assignments in schools and universities.

But it's not these limitations that AutoGPT seeks to overcome.

AI is categorised as weak (narrow) or strong (general). As an AI tool designed to carry out a single task, ChatGPT is considered weak AI.

AutoGPT was created with a view to becoming a strong AI, or artificial general intelligence, theoretically capable of carrying out many different types of task, including those it wasn't originally designed to perform.

LLMs are designed to respond to prompts produced by human users: they answer one prompt and then await the next.

AutoGPT is designed to give itself prompts, creating a loop. Masa, a writer on AutoGPT's website, explains: "It works by breaking a larger task into smaller sub-tasks and then spinning off independent Auto-GPT instances in order to work on them. The original instance acts as a kind of project manager, coordinating all of the work carried out and compiling it into a finished result."
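The project-manager loop Masa describes can be sketched in a few lines of Python. This is purely illustrative: `plan`, `execute` and `run_agent` are invented stand-ins for the LLM calls the real tool makes, not AutoGPT's actual code.

```python
# Illustrative sketch of an Auto-GPT-style loop; plan() and execute()
# are invented stand-ins for the LLM calls the real tool makes.

def plan(task: str) -> list[str]:
    """Break a task into sub-tasks (stubbed; really an LLM call)."""
    return [f"{task}: step {i}" for i in range(1, 4)]

def execute(subtask: str) -> str:
    """Work on one sub-task in its own agent instance (stubbed)."""
    return f"result of ({subtask})"

def run_agent(goal: str) -> str:
    """The 'project manager' instance: plan, delegate, compile."""
    subtasks = plan(goal)
    results = [execute(s) for s in subtasks]  # one worker per sub-task
    return "\n".join(results)                 # compile a finished result

print(run_agent("write a market report"))
```

The key structural point is that nothing outside `run_agent` supplies new prompts: the manager instance generates its own sub-tasks and keeps going until the plan is exhausted, which is what distinguishes this pattern from a prompt-and-wait chatbot.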

But is a self-improving AI a good thing? Many experts are worried about the trajectory of artificial intelligence research.

The respected and influential British Medical Journal has published an article titled "Threats by artificial intelligence to human health and human existence", in which the authors explain three key reasons we should be concerned about AI.


Threats identified by the international team of doctors and public health experts, including those from Australia, relate to misuse of AI and the impact of the ongoing failure to adapt to and regulate the technology.

The authors note the significance of AI and its potential to have a transformative effect on society. But they also warn that artificial general intelligence in particular poses an existential threat to humanity.

First, they warn of the ability of AI to clean, organise and analyse massive data sets, including personal data such as images. Such capabilities could be used to manipulate and distort information and for AI surveillance. The authors note that such surveillance is in development in "more than 75 countries ranging from liberal democracies to military regimes, [which] have been expanding such systems."

Second, they say Lethal Autonomous Weapon Systems (LAWS), capable of locating, selecting and engaging human targets without the need for human supervision, could lead to killing at an industrial scale.

Finally, the authors raise concern over the loss of jobs that will come from the spread of AI technology in many industries. Estimates are that tens to hundreds of millions of jobs will be lost in the coming decade.

While there would be many benefits from ending work that is repetitive, dangerous and unpleasant, we already know that unemployment is strongly associated with adverse health outcomes and behaviour, they write.

The authors highlight artificial general intelligence as a threat to the existence of human civilisation itself.

"We are now seeking to create machines that are vastly more intelligent and powerful than ourselves. The potential for such machines to apply this intelligence and power, whether deliberately or not, in ways that could harm or subjugate humans, is real and has to be considered."

"With exponential growth in AI research and development, the window of opportunity to avoid serious and potentially existential harms is closing. The future outcomes of the development of AI and AGI will depend on policy decisions taken now and on the effectiveness of regulatory institutions that we design to minimise risk and harm and maximise benefit," they write.


Opinion | We Need a Manhattan Project for AI Safety – POLITICO

At the heart of the threat is what's called the alignment problem: the idea that a powerful computer brain might no longer be aligned with the best interests of human beings. Unlike fairness or job loss, there aren't obvious policy solutions to alignment. It's a highly technical problem that some experts fear may never be solvable. But the government does have a role to play in confronting massive, uncertain problems like this. In fact, it may be the most important role it can play on AI: to fund a research project on the scale the problem deserves.

There's a successful precedent for this: The Manhattan Project was one of the most ambitious technological undertakings of the 20th century. At its peak, 129,000 people worked on the project at sites across the United States and Canada. They were trying to solve a problem that was critical to national security, and which nobody was sure could be solved: how to harness nuclear power to build a weapon.

Some eight decades later, the need has arisen for a government research project that matches the original Manhattan Project's scale and urgency. In some ways the goal is exactly the opposite of the first Manhattan Project, which opened the door to previously unimaginable destruction. This time, the goal must be to prevent unimaginable destruction, as well as merely difficult-to-anticipate destruction.

Don't just take it from me. Expert opinion only differs over whether the risks from AI are unprecedentedly large or literally existential.

Even the scientists who laid the groundwork for today's AI models are sounding the alarm. Most recently, the "Godfather of AI" himself, Geoffrey Hinton, quit his post at Google to call attention to the risks AI poses to humanity.

That may sound like science fiction, but its a reality that is rushing toward us faster than almost anyone anticipated. Today, progress in AI is measured in days and weeks, not months and years.

As little as two years ago, the forecasting platform Metaculus put the likely arrival of weak artificial general intelligence, a unified system that can compete with the typical college-educated human on most tasks, sometime around the year 2040.

Now forecasters anticipate AGI will arrive in 2026. Strong AGIs with robotic capabilities that match or surpass most humans are forecast to emerge just five years later. With the ability to automate AI research itself, the next milestone would be a superintelligence with unfathomable power.

Dont count on the normal channels of government to save us from that.

Policymakers cannot afford a drawn-out interagency process or notice-and-comment period to prepare for what's coming. On the contrary, making the most of AI's tremendous upside while heading off catastrophe will require our government to stop taking a backseat role and act with a nimbleness not seen in generations. Hence the need for a new Manhattan Project.

A "Manhattan Project for X" is one of those clichés of American politics that seldom merits the hype. AI is the rare exception. Ensuring AGI develops safely and for the betterment of humanity will require public investment in focused research, high levels of public and private coordination and a leader with the tenacity of General Leslie Groves, the project's infamous overseer, whose aggressive, top-down leadership style mirrored that of a modern tech CEO.

Ensuring AGI develops safely and for the betterment of humanity will require a leader with the tenacity of General Leslie Groves, Hammond writes. (AP Photo)

I'm not the only person to suggest it: AI thinker Gary Marcus and the legendary computer scientist Judea Pearl recently endorsed the idea as well, at least informally. But what exactly would that look like in practice?

Fortunately, we already know quite a bit about the problem and can sketch out the tools we need to tackle it.

One issue is that large neural networks like GPT-4, the generative AIs causing the most concern right now, are mostly a black box, with reasoning processes we can't yet fully understand or control. But with the right setup, researchers can in principle run experiments that uncover particular circuits hidden within the billions of connections. This is known as mechanistic interpretability research, and it's the closest thing we have to neuroscience for artificial brains.
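As a toy illustration of the kind of experiment this enables (the two-layer network and its weights below are invented for the sketch; real interpretability work targets models with billions of connections), a researcher can ablate one hidden unit at a time and measure how much the output shifts, attributing behaviour to particular units:

```python
import numpy as np

# Toy ablation experiment, one basic tool of mechanistic interpretability.
# The network and its weights are made up for illustration.
W1 = np.array([[1.0, -2.0], [0.5, 1.5]])  # input -> hidden weights
W2 = np.array([2.0, -1.0])                # hidden -> output weights

def forward(x, ablate_unit=None):
    h = np.maximum(W1 @ x, 0.0)           # ReLU hidden layer
    if ablate_unit is not None:
        h[ablate_unit] = 0.0              # zero out one hidden unit
    return float(W2 @ h)

x = np.array([1.0, 1.0])
baseline = forward(x)
for unit in range(2):
    shift = abs(forward(x, ablate_unit=unit) - baseline)
    print(f"ablating hidden unit {unit} shifts the output by {shift:.1f}")
```

In this contrived example, silencing one unit leaves the output untouched while silencing the other changes it, which is the basic evidence pattern interpretability researchers scale up when hunting for circuits inside much larger models.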

Unfortunately, the field is still young, and far behind in its understanding of how current models do what they do. The ability to run experiments on large, unrestricted models is mostly reserved for researchers within the major AI companies. The dearth of opportunities in mechanistic interpretability and alignment research is a classic public goods problem. Training large AI models costs millions of dollars in cloud computing services, especially if one iterates through different configurations. The private AI labs are thus hesitant to burn capital on training models with no commercial purpose. Government-funded data centers, in contrast, would be under no obligation to return value to shareholders, and could provide free computing resources to thousands of potential researchers with ideas to contribute.

The government could also ensure research proceeds in relative safety and provide a central connection for experts to share their knowledge.

With all that in mind, a Manhattan Project for AI safety should have at least five core functions:

1. It would serve a coordination role, pulling together the leadership of the top AI companies, OpenAI and its chief competitors Anthropic and Google DeepMind, to disclose their plans in confidence, develop shared safety protocols and forestall the present arms-race dynamic.

2. It would draw on their talent and expertise to accelerate the construction of government-owned data centers managed under the highest security, including an air gap, a deliberate disconnection from outside networks, ensuring that future, more powerful AIs are unable to escape onto the open internet. Such facilities would likely be overseen by the Department of Energy's Artificial Intelligence and Technology Office, given its existing mission to accelerate the demonstration of trustworthy AI.

3. It would compel the participating companies to collaborate on safety and alignment research, and require models that pose safety risks to be trained and extensively tested in secure facilities.

4. It would provide public testbeds for academic researchers and other external scientists to study the innards of large models like GPT-4, greatly building on existing initiatives like the National AI Research Resource and helping to grow the nascent field of AI interpretability.

5. And it would provide a cloud platform for training advanced AI models for within-government needs, ensuring the privacy of sensitive government data and serving as a hedge against runaway corporate power.

The alternative to a massive public effort like this, kicking the can on the AI problem, won't cut it.

The only other serious proposal right now is a pause on new AI development, and even many tech skeptics see that as unrealistic. It may even be counterproductive. Our understanding of how powerful AI systems could go rogue is immature at best, but stands to improve greatly through continued testing, especially of larger models. Air-gapped data centers will thus be essential for experimenting with AI failure modes in a secured setting. This includes pushing models to their limits to explore potentially dangerous emergent behaviors, such as deceptiveness or power-seeking.

The Manhattan Project analogy is not perfect, but it helps to draw a contrast with those who argue that AI safety requires pausing research into more powerful models altogether. The project didnt seek to decelerate the construction of atomic weaponry, but to master it.

Even if AGIs end up being farther off than most experts expect, a Manhattan Project for AI safety is unlikely to go to waste. Indeed, many less-than-existential AI risks are already upon us, crying out for aggressive research into mitigation and adaptation strategies. So what are we waiting for?


I created a billion-pound start-up business – Elon Musk & Jeff Bezos asked to meet me – here's the secret to… – The Sun

A DAD who created a billion-pound start-up business has revealed the secret to his success.

Emad Mostaque, 40, is the founder and CEO of artificial intelligence giant Stability AI and has recently been in talks with the likes of Elon Musk and Jeff Bezos.

But the London dad-of-two has worked hard to get where he is today - and doesn't plan on stopping any time soon.

Emad has gone from developing AI at home to help his autistic son, to employing 150 people across the globe for his billion-pound empire.

The 40-year-old usually calls Notting Hill home, but has started travelling to San Francisco for work.

On his most recent trip, Emad met with Bezos, the founder and CEO of Amazon, and made a deal with Musk, the CEO of Twitter.

He says the secret to his success in the AI world is using it to help humans, not overtake them.

Emad told The Times: "I have a different approach to everyone else in this space, because I'm building narrow models to augment humans, whereas almost everyone else is trying to build an AGI [artificial general intelligence] to pretty much replace humans and look over them."

Emad is from Bangladesh but his parents moved to the UK when he was a boy, settling the family in London's Walthamstow.

The dad said he was always good at numbers in school but struggled socially, as he has Asperger's and ADHD.

The 40-year-old studied computer science and maths at Oxford, then became a hedge fund manager.

But when Emad's son was diagnosed with autism he quit to develop something to help the youngster.

Emad recalled: "We built an AI to look at all the literature and then extract what could be the case, and then the drug repurposing."

He says that homemade AI allowed his family to create an approach that took his son to "a better, more cheerful place".

And, as a result, Emad was inspired to take the idea further.

He started a charity that aims to give tablets loaded with AI tutors to one billion children.

He added: "Can you imagine if every child had their own AI looking out for them, a personalised system that teaches them and learns from them?

"In 10 to 20 years, when they grow up, those kids will change the world.

Emad also founded the billion-pound start-up Stability AI in recent years, and it's one of the companies behind Stable Diffusion.

The tool has taken the world by storm in recent months with its ability to create images that could pass as photos from a mere text prompt.

Today, Emad is continuing to develop AI - and he says it is one of the most important inventions in history.

He described it as somewhere between fire and the internal combustion engine.
