Archive for the ‘Artificial General Intelligence’ Category

Most of Surveyed Americans Do Not Want Super Intelligent AI – 80.lv

In response to the question, "Which goal of AI policy is more important?", a significant 65% of respondents chose "Keeping dangerous models out of the hands of bad actors." This option far outpaced the alternative, "Providing the benefits of AI to everyone," which was chosen by just 22% of voters. This suggests a prevailing concern about the potential misuse of AI that outweighs the desire for widespread access to AI's benefits.

Interestingly, the apprehension around AI does not extend to AI education. When asked about an initiative to expand access to AI education, research, and training, 55% of respondents expressed support, 24% were opposed, and the rest were undecided.

The results align with the stance of the Artificial Intelligence Policy Institute, which holds that proactive government regulation can significantly mitigate the potentially destabilizing effects of AI. As it stands, tech companies like OpenAI and Google face a daunting task in convincing the public of the benefits of artificial general intelligence (AGI), given the current negative sentiment around increasingly powerful AI.

Follow this link:

Most of Surveyed Americans Do Not Want Super Intelligent AI - 80.lv

A former OpenAI leader says safety has ‘taken a backseat to shiny products’ at the AI company – Winnipeg Free Press

A former OpenAI leader who resigned from the company earlier this week said Friday that safety has "taken a backseat to shiny products" at the influential artificial intelligence company.

Jan Leike, who ran OpenAI's Superalignment team alongside a company co-founder who also resigned this week, wrote in a series of posts on the social media platform X that he joined the San Francisco-based company because he thought it would be the best place to do AI research.

"However, I have been disagreeing with OpenAI leadership about the company's core priorities for quite some time, until we finally reached a breaking point," wrote Leike, whose last day was Thursday.

An AI researcher by training, Leike said he believes there should be more focus on preparing for the next generation of AI models, including on things like safety and analyzing the societal impacts of such technologies. He said building "smarter-than-human machines is an inherently dangerous endeavor" and that the company "is shouldering an enormous responsibility on behalf of all of humanity."

"OpenAI must become a safety-first AGI company," wrote Leike, using the abbreviation for artificial general intelligence, a futuristic vision of machines that are as broadly smart as humans, or at least can do many things as well as people can.

OpenAI CEO Sam Altman wrote in a reply to Leike's posts that he was "super appreciative" of Leike's contributions to the company and was "very sad to see him leave."

"Leike is right, we have a lot more to do; we are committed to doing it," Altman said, pledging to write a longer post on the subject in the coming days.

The company also confirmed Friday that it had disbanded Leike's Superalignment team, which was launched last year to focus on AI risks, and is integrating the team's members across its research efforts.

Leike's resignation came after OpenAI co-founder and chief scientist Ilya Sutskever said Tuesday that he was leaving the company after nearly a decade. Sutskever was one of four board members who voted last fall to push out Altman, only to quickly reinstate him. It was Sutskever who told Altman last November that he was being fired, but he later said he regretted doing so.

Sutskever said he is working on a new project that is meaningful to him, without offering additional details. He will be replaced by Jakub Pachocki as chief scientist. Altman called Pachocki "easily one of the greatest minds of our generation" and said he is "very confident he will lead us to make rapid and safe progress towards our mission of ensuring that AGI benefits everyone."

On Monday, OpenAI showed off the latest update to its artificial intelligence model, which can mimic human cadences in its verbal responses and can even try to detect people's moods.

The Associated Press and OpenAI have a licensing and technology agreement that allows OpenAI access to part of the AP's text archives.

Original post:

A former OpenAI leader says safety has 'taken a backseat to shiny products' at the AI company - Winnipeg Free Press

DeepMind CEO says Google to spend more than $100B on AGI despite hype – Cointelegraph

Google is not backing down from the challenge posed by Microsoft in the artificial intelligence sector. At least not according to the CEO of Google DeepMind, Demis Hassabis.

Speaking at a TED conference in Canada, Hassabis recently went on the record saying that he expected Google to spend more than $100 billion on the development of artificial general intelligence (AGI) over time. His comments reportedly came in response to a question concerning Microsoft's recent Stargate announcement.

Microsoft and OpenAI are reportedly in discussions to build a $100 billion supercomputer project for the purpose of training AI systems. According to The Intercept, a person wishing to remain anonymous, who has had direct conversations with OpenAI CEO Sam Altman and has seen the initial cost estimates for the project, says it is currently being discussed under the codename "Stargate."

To put the proposed costs into perspective, the world's most powerful supercomputer, the U.S.-based Frontier system, cost approximately $600 million to build.
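
The scale gap is worth making concrete. A minimal back-of-the-envelope sketch in Python, using only the approximate figures quoted above, puts the proposed budget at more than two orders of magnitude above Frontier's build cost:

```python
# Approximate cost figures quoted above, in US dollars; illustrative only.
stargate_cost = 100e9  # proposed Stargate project, ~$100 billion
frontier_cost = 600e6  # Frontier supercomputer, ~$600 million

# Stargate's proposed budget is roughly 167 times Frontier's build cost.
print(f"Stargate / Frontier: ~{stargate_cost / frontier_cost:.0f}x")
```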

According to the report, Stargate wouldn't be a single system like Frontier. It would instead be a series of computers spread across the U.S., built in five phases, with the final phase being the Stargate system itself.

Hassabis' comments don't hint at exactly how Google might respond, but they seemingly confirm that the company is aware of Microsoft's endeavors and plans to invest just as much, if not more.

Ultimately, the stakes are simple. Both companies are vying to become the first organization to develop artificial general intelligence (AGI). Today's AI systems are constrained by their training methods and data and, as such, fall well short of human-level intelligence across a wide range of benchmarks.

AGI is a nebulous term for an AI system theoretically capable of doing anything an average adult human could do, given the right resources. An AGI system with access to a line of credit or a cryptocurrency wallet and the internet, for example, should be able to start and run its own business.

Related: DeepMind co-founder says AI will be able to invent, market, run businesses by 2029

The main challenge in being the first company to develop AGI is that there is no scientific consensus on exactly what an AGI is or how one could be created.

Even among the world's most famous AI scientists, such as Meta's Yann LeCun and Google's Demis Hassabis, there is no small amount of disagreement as to whether AGI can be achieved using the current brute-force method of increasing datasets and training parameters, or whether it can be achieved at all.

In a Financial Times article published in March, Hassabis unfavorably compared the current AI/AGI hype cycle, and the scams it has attracted, to the cryptocurrency market. Despite the hype, both the AI and crypto markets have surged in the first four months of 2024.

Where Bitcoin, the world's most popular cryptocurrency, sat at about $30,395 per coin in April 2023, it now trades above $60,000 as of this article's publication, having only recently retreated from an all-time high of about $73,000.

Meanwhile, the current AI industry leader, Microsoft, has seen its stock go from $286 a share to around $416 over the same period.
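
For context, the percentage gains implied by those quoted figures are easy to check. A minimal Python sketch follows; the prices are the approximate numbers cited above, not live market data:

```python
# Approximate prices quoted above, in US dollars; illustrative only.
btc_april_2023, btc_now = 30_395, 60_000  # Bitcoin, per coin
msft_april_2023, msft_now = 286, 416      # Microsoft, per share

def pct_gain(start: float, end: float) -> float:
    """Percentage change from a starting price to a later price."""
    return (end - start) / start * 100

print(f"BTC:  +{pct_gain(btc_april_2023, btc_now):.0f}%")   # roughly +97%
print(f"MSFT: +{pct_gain(msft_april_2023, msft_now):.0f}%") # roughly +45%
```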

Continued here:

DeepMind CEO says Google to spend more than $100B on AGI despite hype - Cointelegraph

The Potential and Perils of Advanced Artificial General Intelligence – elblog.pl

Artificial General Intelligence (AGI) represents a new frontier in the evolution of machine capabilities. In essence, AGI is a level of artificial intelligence at which machines can tackle any intellectual task a human being can perform. Unlike narrow AI, which excels at specific tasks such as image recognition or weather forecasting, AGI extends to learning, self-improvement, and adaptation across varied situations, emulating human-like intellect.

The development and application of AGI is a double-edged sword. The technology holds promise for immense societal benefits, such as resolving intricate problems, enhancing the quality of life, and offering support across sectors including healthcare, scientific research, and resource management.

On the flip side, the rise of AGI comes with significant risks and challenges. There is a tangible fear that uncontrolled AGI could become overpowering and autonomous, making decisions with dire consequences for humanity. AGI's efficiency at performing tasks could also displace jobs across numerous professions. Furthermore, although AGI could lead to the creation of powerful information systems, it may simultaneously raise concerns about data security and privacy.

It's clear that while AGI harbors the potential for tremendous advantages, society must carefully weigh and prepare for the risks and challenges that may arise from its advancement and use.

The Ethical and Moral Implications of AGI are substantial. As we imbue machines with human-like intelligence, questions arise about the rights of these intelligent systems, and how they fit into our moral and legal frameworks. There is an ongoing debate concerning whether AGIs should be granted personhood or legal protections, similar to those afforded to humans and animals.

Control and Alignment Issues with AGI pose critical challenges. Ensuring that AGI systems act in ways that are aligned with human values and do not diverge from intended goals is a complex problem known as the alignment problem. Researchers are working on developing safety measures to ensure that AGIs remain under human control and are beneficial rather than detrimental.

Advantages of AGI:
- Problem solving: AGI can potentially solve complex issues that are beyond human capability, including those relating to climate change, medicine, and logistics.
- Acceleration of innovation: AGI may dramatically speed up the pace of scientific and technological discovery, leading to rapid advancements in various fields.
- Efficiency and cost savings: By automating tasks, AGI can increase efficiency and reduce costs, making goods and services more affordable and accessible.

Disadvantages of AGI:
- Job displacement: AGI could automate jobs across many sectors, leading to mass unemployment and economic disruption.
- Safety and security: The difficulty of predicting the behavior of AGI systems makes them a potential risk to global security, and AGI could be used for malicious purposes if not properly regulated.
- Loss of human skills: Over-reliance on AGI could lead to the degradation of human skills and knowledge.

Most Important Questions regarding AGI:
1. How can we ensure that AGI will align with human values? Developing robust ethical frameworks and control mechanisms is crucial.
2. What are the implications of AGI for employment and the workforce? Proactive strategies, including retraining and education, are necessary to address job displacement.
3. How can we protect against the misuse of AGI? International cooperation and regulation are key to preventing the weaponization or malicious use of AGI.

Key Controversies:
- Regulation: There is debate over what forms of regulation are appropriate for AGI to encourage innovation while ensuring safety.
- Accessibility: Concerns exist about who should have access to AGI technology and whether it could exacerbate inequality.
- Economic impact: The potential transformation of the job market and economy by AGI is contested, with differing views on how to approach the transition.

For more information on AI and related topics, you can visit the following links: DeepMind, OpenAI, and the Future of Life Institute.

These links direct you to organizations actively involved in the development and research of advanced AI technologies and their implications.

Read this article:

The Potential and Perils of Advanced Artificial General Intelligence - elblog.pl

Congressional panel outlines five guardrails for AI use in House – FedScoop

A House panel has outlined five guardrails for deployment of artificial intelligence tools in the chamber, providing more detailed guidance as lawmakers and staff explore the technology.

The Committee on House Administration released the guardrails in a flash report on Wednesday, along with an update on the committee's work exploring AI in the legislative branch. The guardrails are: human oversight and decision-making; clear and comprehensive policies; robust testing and evaluation; transparency and disclosure; and education and upskilling.

"These are intended to be general, so that many House Offices can independently apply them to a wide variety of different internal policies, practices, and procedures," the report said. "House Committees and Member Offices can use these to inform their internal AI practices. These are intended to be applied to any AI tool or technology in use in the House."

The report comes as the committee and its Subcommittee on Modernization have focused on AI strategy and implementation in the House, and it is the fifth such document the committee has put out since September 2023.

According to the report, the guardrails are a product of a roundtable the committee held in March, whose participants included the National Institute of Standards and Technology's Elham Tabassi, the Defense Department's John Turner, the Federation of American Scientists' Jennifer Pahlka, the House chief administrative officer, the clerk of the House, and senior staff from lawmakers' offices.

The roundtable represented the first known instance of elected officials directly discussing AI's use in parliamentary operations, the report said. The report added that templates for the discussion were also shared with the think tank Bússola Tech, which works on the modernization of parliaments and legislatures.

Already, members of Congress are experimenting with AI tools for things like research assistance and drafting, though use doesn't appear widespread. Meanwhile, both chambers have introduced policies to rein in use. In the House, the CAO has approved only ChatGPT Plus, while the Senate has allowed the use of ChatGPT, Microsoft Bing Chat, and Google Bard with specific guardrails.

Interestingly, AI was used in the drafting of the committee's report, modeling the transparency guardrail the committee outlined. A footnote in the document discloses that "early drafts of this document were written by humans. An AI tool was used in the middle of the drafting process to research editorial clarity and succinctness. Subsequent reviews and approvals were human."

Here is the original post:

Congressional panel outlines five guardrails for AI use in House - FedScoop