Archive for the ‘Artificial General Intelligence’ Category

AI News Today July 15, 2024 – The Dales Report

Welcome to AI News Today, your daily summary of the AI Industry.

OpenAI's Path to Achieving Artificial General Intelligence

OpenAI has introduced a five-level system to track progress toward Artificial General Intelligence (AGI), starting at Level 1, which represents current AI capabilities in conversational interactions. The ultimate goal is Level 5, where AI systems can perform the work of an entire organization autonomously.

Virginia Congresswoman Advocates for AI Voice Technology

Congresswoman Jennifer Wexton is pushing for advancements in AI voice technology to enhance accessibility for individuals with disabilities. Her advocacy highlights the potential of AI in creating more inclusive communication tools.

SoftBank Acquires British AI Chipmaker Graphcore

SoftBank has acquired British AI chipmaker Graphcore, aiming to strengthen its position in the AI hardware market. This acquisition is part of SoftBank's broader strategy to invest in cutting-edge AI technologies.

Older Workers Key to AI Understanding

Older workers bring valuable experience and understanding to the AI field, bridging the gap between traditional practices and new technologies. Their insights are crucial for the successful integration of AI in various industries.

Market Correction Sparks Profit-Taking in Tech and AI Sectors

"Yesterday was the wake-up call many expected and wanted in order to start taking at least some profits in Mag 7 tech and semi-AI winners," Mizuho Securities trading-desk analyst Jordan Klein said in a client note Friday.

OpenAI Develops Advanced Tool Strawberry

OpenAI is building a new advanced AI tool called Strawberry, designed to enhance user interaction and AI capabilities. This tool aims to push the boundaries of what AI can achieve in practical applications.

Research Shows AI Chatbots Enhance Creativity

Research indicates that AI chatbots can boost creativity in writing. These findings suggest that AI tools could play a significant role in creative industries by providing new avenues for inspiration and innovation.

Big Tech's Talent Poaching Under Scrutiny

The ongoing issue of Big Tech companies poaching talent is raising concerns about market competition and innovation. This practice is drawing attention from regulators and industry observers alike.

Whistleblowers and SEC Investigate OpenAI Over NDAs

Whistleblowers have prompted an SEC investigation into OpenAI over allegations of illegal non-disclosure agreements. This investigation could have significant implications for OpenAI's operational transparency and legal practices.




The Evolution Of Artificial Intelligence: From Basic AI To ASI – Welcome2TheBronx

In the realm of artificial intelligence (AI), we currently operate at the level of Large Language Models (LLMs), while Artificial General Intelligence (AGI) and Artificial Superintelligence (ASI) remain in the future. Understanding these levels is crucial, as each represents a significant advance in the capabilities of AI. Let us explore these levels in detail.

Basic AI, often referred to as Narrow AI or Weak AI, represents the most fundamental level of artificial intelligence. This type of AI is designed to perform specific tasks and operates within a predefined set of parameters. It lacks the ability to understand broader concepts or learn beyond its initial programming.

Basic AI systems excel at repetitive or narrowly defined tasks but cannot adapt to new tasks or situations. Common examples of Basic AI include spam filters, product recommendation engines, and voice assistants that execute scripted commands.

The primary limitation of Basic AI is its lack of generalization. These systems cannot transfer knowledge from one domain to another or improve their performance through learning beyond their initial programming. They operate purely based on the data and instructions they have been given.

Large Language Models (LLMs) represent a more advanced form of AI, specializing in understanding and generating natural language. These models can grasp the context and meaning of text, allowing them to produce coherent and contextually relevant responses.

LLMs, such as GPT (Generative Pre-trained Transformer), are trained on vast amounts of text data. They learn patterns, grammar, and context from this data, enabling them to generate human-like text. Applications of LLMs include chatbots, machine translation, and text summarization.

The key advantage of LLMs is their ability to understand and generate natural language, which allows for more dynamic and flexible interactions with users. LLMs can be fine-tuned for specific tasks, improving their performance and accuracy over time.

Despite their advanced capabilities, LLMs still operate within the confines of their training data. They can generate impressive results but do not possess true understanding or consciousness; their responses are based on patterns learned from data rather than genuine comprehension.
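The idea that a model "learns patterns from data" can be sketched with a toy bigram table. This is only an illustration of the principle: real LLMs use neural networks trained on billions of tokens, not a lookup table, and the corpus here is invented.

```python
import random

# Toy sketch of next-token generation: count which word follows which,
# then sample continuations from those counts. Purely illustrative.
corpus = "the cat sat on the mat because the cat was tired".split()

# "Pattern learning" step: record every observed successor of each word.
follows = {}
for prev, nxt in zip(corpus, corpus[1:]):
    follows.setdefault(prev, []).append(nxt)

def generate(start, n, seed=0):
    """Generate up to n tokens after `start` by sampling observed successors."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(n):
        options = follows.get(out[-1])
        if not options:  # dead end: no observed successor
            break
        out.append(rng.choice(options))
    return " ".join(out)

print(generate("the", 4))
```

Scaled up by many orders of magnitude, with a neural network replacing the lookup table, this "predict a plausible next token" loop is the core generation mechanism behind models like GPT.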

Artificial General Intelligence (AGI) represents a significant leap in AI development. Unlike Narrow AI, AGI has the ability to understand, learn, and apply knowledge across a wide range of tasks, much like a human being.

AGI systems would possess cognitive abilities comparable to human intelligence. They could learn from experience, adapt to new situations, and perform varied tasks without requiring task-specific programming. A hypothetical example would be a single system that could diagnose an illness, draft a legal brief, and then learn a new language, all without being retrained for each task.

The development of AGI holds immense potential. It could revolutionize industries by performing tasks that currently require human intelligence. However, achieving AGI poses significant challenges, including ensuring safety, ethical considerations, and the sheer complexity of creating an AI that can understand and interact with the world at a human level.

The advent of AGI would raise profound ethical and societal questions. Issues such as job displacement, privacy, and the moral status of intelligent machines would need careful consideration. Ensuring that AGI systems are aligned with human values and do not pose risks to society is a critical concern.

Artificial Superintelligence (ASI) represents the pinnacle of AI development. ASI would surpass human intelligence in every aspect, from creativity to problem-solving abilities, and would be capable of driving unprecedented advancements in science and technology.

ASI would possess cognitive abilities far beyond those of humans. It could solve complex problems, create new technologies, and make discoveries that are currently beyond human reach, with potential applications ranging from curing diseases to accelerating scientific discovery itself.

The development of ASI also presents significant risks. The immense power and intelligence of ASI could be misused or produce unintended consequences, so ensuring that ASI is developed and controlled responsibly is paramount. Key considerations include alignment with human values, robust control mechanisms, and international cooperation on governance.

The journey from Basic AI to ASI represents a profound evolution in the field of artificial intelligence. Each level (Basic AI, LLMs, AGI, and ASI) brings unique capabilities and challenges. While we currently operate at the LLM level, the future holds the promise of AGI and ASI, which could transform our world in unimaginable ways.

Understanding these different levels is crucial for navigating the ethical, societal, and technological implications of AI development. As we progress towards more advanced forms of AI, it is essential to ensure that these technologies are developed responsibly, with a focus on enhancing human well-being and addressing global challenges.


What Elon Musk and Ilya Sutskever Feared About OpenAI Is Becoming Reality – Observer

OpenAI CEO Sam Altman has previously discussed his desire to achieve human-level reasoning in A.I. (Photo: Justin Sullivan/Getty Images)

As part of OpenAI's path toward artificial general intelligence (A.G.I.), a term for technology matching the intelligence of humans, the company is reportedly attempting to enable A.I. models to perform advanced reasoning. The work is taking place under a secretive project code-named Strawberry, as reported by Reuters, which noted that the project was previously known as Q* (or "Q Star"). While its name may have changed, the project isn't exactly new: researchers and co-founders of OpenAI have previously warned against the initiative, and concerns over it reportedly played a part in the brief ousting of Sam Altman as OpenAI's CEO in November.

Strawberry uses a unique method of post-training A.I. models, a process that improves their performance after they have been trained on datasets, according to Reuters, which cited internal OpenAI documents and a person familiar with the project. With the help of deep-research datasets, the company aims to create models that display human-level reasoning. OpenAI is reportedly looking into how Strawberry can allow models to complete tasks over extended periods of time, search the web on their own, act on their findings, and perform the work of engineers. OpenAI did not respond to requests for comment from Observer.

Altman, who has previously reiterated OpenAI's desire to create models able to reason, briefly lost control of his company last year when his board fired him for four days. Shortly before the ousting, several OpenAI employees had become concerned over breakthroughs presented by what was then known as Q*, a project spearheaded by Ilya Sutskever, OpenAI's former chief scientist. Sutskever himself had reportedly begun to worry about the project's technology, as did OpenAI employees working on A.I. safety at the time. After his reinstatement, Altman referred to news reports about Q* as an "unfortunate leak" in an interview with the Verge.

Elon Musk, another OpenAI co-founder, has also raised the alarm about Q* in the past. The billionaire, who severed ties with the company in 2018, referred to the project in a lawsuit filed against OpenAI and Altman that has since been dropped. While discussing OpenAI's close partnership with Microsoft (MSFT), Musk's suit claimed that the terms of the deal dictate that Microsoft only has rights to OpenAI's pre-A.G.I. technology and that it is up to OpenAI's board to determine when the company has achieved A.G.I.

Musk argued that OpenAI's GPT-4 model constitutes A.G.I., which he believes poses a grave threat to humanity, according to the suit. Court filings stated that OpenAI is currently developing a model known as Q* that has an even stronger claim to A.G.I.

Recent internal meetings have suggested that OpenAI is making rapid progress toward the type of human-level reasoning Strawberry is targeting. In an OpenAI all-hands meeting held earlier this month, the company unveiled a five-tiered system to track its progress toward A.G.I., as reported by Bloomberg. While the company said it is currently on the first level, known as "chatbots," it revealed that it has nearly reached the second level, "reasoners," which involves technology that can display human-level problem-solving. The subsequent tiers consist of A.I. systems acting as "agents" that can take actions, "innovators" that aid in invention, and "organizations" that can do the work of an entire organization.
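As a rough sketch, the reported five-tier scale could be modeled as a simple enumeration. The level names follow the Bloomberg report; the code itself is purely illustrative and not any official OpenAI artifact.

```python
from enum import IntEnum

# Hypothetical model of OpenAI's reported five-level AGI progress scale.
class AGILevel(IntEnum):
    CHATBOTS = 1       # conversational AI (where OpenAI says it is today)
    REASONERS = 2      # human-level problem-solving
    AGENTS = 3         # systems that can take actions
    INNOVATORS = 4     # AI that aids in invention
    ORGANIZATIONS = 5  # AI doing the work of an entire organization

current = AGILevel.CHATBOTS
print(f"Current: level {current.value} ({current.name.lower()})")
print(f"Levels remaining to the top tier: {AGILevel.ORGANIZATIONS - current}")
```

Using `IntEnum` makes the ordering explicit, so "progress" toward A.G.I. can be expressed as simple comparisons between levels.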


Companies are losing faith in AI, and AI is losing money – Android Headlines

We just can't stop hearing about AI technology nowadays. That's because it's supposed to be the next step in human achievement, and also because companies just don't shut up about it! For all of its potential pitfalls and challenges, the tech industry seems confident in AI tech. At least, that's how it seems to the public. Behind the scenes, it seems that companies are losing faith in AI.

It's behind closed doors where the real news happens. We've seen Sundar Pichai, Sam Altman, and Satya Nadella, among countless others, on stage talking about how much faith they have in their AI tech. That's all fine and dandy, but do you think they're going to dedicate a keynote to any issues with their AI tech? Of course not! It's their job to make us think that everything is A-OK.

But the thing is, things aren't always A-OK in the world of AI tech. There's a ton of doubt and tension throughout the tech industry regarding AI, and we only know about a fraction of it. We only hear what slips through the cracks: testimony from an employee at a tech firm here, an exclusive leak there.

The fact of the matter is that the companies propping up this technology (the ones injecting billions of dollars into AI companies) are starting to shy away. They're not as likely to invest so much money in it. Sure, you can't go online without seeing an ad for some new AI service, and you can't go on social media without seeing some new AI-generated video that makes you fear for the film industry. But the people making that possible might be stepping back a bit.

It's money that makes the world go around, and it's what makes your chatbot so smart. In case you don't know, AI is an extremely expensive technology to nurture. It costs money to train models, run data centers, secure GPUs, and so on. If you're looking to make an AI start-up, you'll need some major investors.

Companies like Microsoft, Google, and Amazon, among many others, have been investing billions in AI start-ups to make the dream of AGI (artificial general intelligence) materialize. So why are these investments slowing down?

The 2024 report from Stanford University's Institute for Human-Centered AI revealed something a bit surprising: investments in AI have been dropping year over year. According to the report, investment actually peaked a year before the big AI boom. Per the report (via Gizmodo), 2021 saw AI investments of about $337 billion. That fell by more than $100 billion, to $234 billion, in 2022. That was the year of the AI boom, so you'd expect the numbers to soar the next year. However, that's not the case: in 2023, investments dropped by around another $40 billion.
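The year-over-year figures above can be sanity-checked in a couple of lines (values in billions of USD, rounded as the article reports them; the 2023 figure is derived from the stated ~$40 billion drop):

```python
# AI investment figures as cited from the Stanford HAI report, in $B.
investment = {
    2021: 337,
    2022: 234,
    2023: 234 - 40,  # article states a ~$40B drop from 2022
}

drop_2022 = investment[2021] - investment[2022]
print(f"2021 -> 2022 drop: ${drop_2022}B")      # more than $100B, as stated
print(f"2023 estimate: about ${investment[2023]}B")
```

The arithmetic bears out the article's claims: the 2021-to-2022 decline is $103 billion, and the implied 2023 total is roughly $194 billion.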

Even with the potential of generative AI, companies still seem wary of the technology. AI has infected just about every tech and creative industry on the planet, so there's a ton of money to be made, right?

"The count of billion-dollar investments has slowed and is all but over," Gartner analyst John-David Lovelock told TechCrunch earlier this year. Companies are still investing in AI start-ups, but the age of $13 billion investments like the one we saw between Microsoft and OpenAI might be gone. Why?

Well, why did these companies invest in AI in the first place? They're pouring money into the technology because it has the potential to be a massive moneymaker. It has POTENTIAL. It shows all of the signs, and companies are hopeful. However, the fact of the matter is that no one really knows what's going to happen with AI technology. We're still in the early stages of generative AI development, even though it's been in production for years. AI employees, companies, and investors are all dreaming of a world where AI spits out money like a broken slot machine. Well, guess what: that's a dream.

AI is a gigantic money void. Companies invest a ton of money into it in the hopes that it will turn a profit in the future. However, the journey to profitability is taking longer than expected. If you've invested $5 billion in a company and it's still not turning a profit, you're less likely to invest that much again.

Companies are starting to realize that AI isn't going to start making money soon. Several AI companies offer their services via monthly subscriptions. That's a model that needs millions, if not hundreds of millions, of customers to see some sort of return, depending on how much has been invested. Disney+, with more than 200 million subscribers, long struggled to turn a profit.
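A rough break-even sketch shows why subscription revenue recoups billion-dollar investments so slowly. Every number here is an illustrative assumption (the $5 billion echoes the article's hypothetical; the price and margin are invented), not a reported figure:

```python
# Hypothetical break-even sketch for a subscription-funded AI service.
INVESTMENT = 5_000_000_000  # $5B invested, as in the article's example
MONTHLY_PRICE = 20          # assumed subscription price, $/month
MARGIN = 0.5                # assumed fraction of revenue left after serving costs

def subscribers_to_recoup(investment, monthly_price, margin, years=5):
    """Paying subscribers needed to recoup `investment` over `years`."""
    revenue_per_user = monthly_price * 12 * years * margin
    return investment / revenue_per_user

n = subscribers_to_recoup(INVESTMENT, MONTHLY_PRICE, MARGIN)
print(f"{n:,.0f} paying subscribers needed over 5 years")
```

Under these assumptions the service needs roughly 8.3 million paying subscribers sustained for five years just to break even, which is why the article's "millions, if not hundreds of millions" framing is plausible once larger investments or thinner margins are plugged in.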

Investors don't know when, or if, AI technology will become a cash cow; what they do know is that they're burning a ton of money today. Companies are losing faith, and money, over AI.

There are reasons why you shouldn't trust everything that AI produces. There are people who use AI to spread misinformation, but AI can spread it on its own sometimes. The thing is, AI hallucinations are still a pretty big issue in the AI space, and that's something companies are looking at. This is another reason why companies are losing faith in AI.

AI hallucinations occur when an AI model basically makes up information, producing responses with no rhyme or reason. It's still one of the main problems holding AI technology back. General users are losing faith in the technology because of this, and major companies are slowing down their development as well.

According to a recent study from Lucidworks (via Reuters), manufacturers are getting pretty wary of AI technology because of accuracy issues. Earlier this year, the company surveyed 2,500 leaders who have a say in AI decisions. About 58% of those leaders planned to increase spending on AI, a massive drop from 93% last year. Back in 2023, the world was still getting a feel for what AI had to offer, and companies were still trying to get in as early as possible.

Now, companies are starting to see the true cost of AI. Not only that, but they're also starting to see just how badly AI can mess up: 44% of the manufacturing respondents expressed some sort of concern over AI accuracy.

So, these companies are holding onto their dollar bills just a little bit tighter.

It's tough to say what this means for the AI industry as a whole. Companies like Google, Microsoft, and OpenAI are going to continue dumping gallons of green into their AI machines: OpenAI has probably the most popular AI tool on the market, Google was an AI company for years before ChatGPT, and Microsoft is still going crazy over AI. However, it seems that the rest of the industry is starting to lose some of the hype for AI.

At the end of the day, it all comes down to the almighty dollar. It depends on how much money companies are still willing to spend on AI technology.

Maybe the money that companies were investing is like Meta Threads' user base. Remember when Threads was new? Its user base shot up to over 100 million within a week. Then, as people learned more about the app and what it was missing, its user base dropped. After Meta made improvements and added features, people started to rejoin.

Well, this might be what we see with AI spending. During the initial period when ChatGPT was wowing the world, everyone jumped on board and wrote giant checks to fund this revolutionary new technology. However, after learning a bit more about the associated costs and the AI inaccuracies, they're backing off. As AI technology gets better, who knows whether we'll see investments pick up again?

Right now, it's anyone's guess. Companies are losing faith in AI, and that doesn't bode well for it. For all we know, this could be the start of the slow heat death of AI technology.


AGI isn’t here (yet): How to make informed, strategic decisions in the meantime – VentureBeat


Ever since the launch of ChatGPT in November 2022, the ubiquity of words like "inference," "reasoning," and "training data" is indicative of how much AI has taken over our consciousness. These words, previously heard only in the halls of computer science labs or in big tech company conference rooms, are now overheard at bars and on the subway.

There has been a lot written (and even more that will be written) on how to make AI agents and copilots better decision makers. Yet we sometimes forget that, at least in the near term, AI will augment human decision-making rather than fully replace it. A nice example is the enterprise data corner of the AI world, with players (as of the time of this article's publication) ranging from ChatGPT to Glean to Perplexity. It's not hard to conjure up a scenario of a product marketing manager asking her text-to-SQL AI tool, "What customer segments have given us the lowest NPS rating?", getting the answer she needs, maybe asking a few follow-up questions ("What if you segment it by geo?"), then using that insight to tailor her promotions strategy planning.

This is AI augmenting the human.
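A text-to-SQL tool in that scenario might translate the manager's question into a query like the following. The schema, table name, and data are entirely hypothetical, invented here to make the sketch runnable:

```python
import sqlite3

# Hypothetical NPS data a text-to-SQL tool might query. All names and
# values are illustrative, not from any real product.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE nps_responses (segment TEXT, score INTEGER)")
conn.executemany(
    "INSERT INTO nps_responses VALUES (?, ?)",
    [("enterprise", 9), ("enterprise", 8), ("smb", 4), ("smb", 6), ("consumer", 7)],
)

# "What customer segments have given us the lowest NPS rating?"
query = """
    SELECT segment, AVG(score) AS avg_nps
    FROM nps_responses
    GROUP BY segment
    ORDER BY avg_nps ASC
    LIMIT 1
"""
print(conn.execute(query).fetchone())  # -> ('smb', 5.0)
```

The follow-up "what if you segment it by geo?" would just add a hypothetical `geo` column to the `GROUP BY`; the human still decides what the numbers mean for the promotions strategy.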

Looking even further out, there will likely come a world where a CEO can say, "Design a promotions strategy for me given the existing data, industry-wide best practices on the matter, and what we learned from the last launch," and the AI will produce one comparable to the work of a good human product marketing manager. There may even come a world where the AI is self-directed: it decides that a promotions strategy would be a good idea and starts working on it autonomously to share with the CEO; that is, it acts as an autonomous CMO.


Overall, it's safe to say that until artificial general intelligence (AGI) is here, humans will likely be in the loop when it comes to making decisions of significance. While everyone is opining on what AI will change about our professional lives, I wanted to return to what it won't change (anytime soon): good human decision making. Imagine your business intelligence team and its bevy of AI agents putting together a piece of analysis for you on a new promotions strategy. How do you leverage that data to make the best possible decision? Here are a few time- (and lab-) tested ideas that I live by:

Before seeing the data:

While looking at the data:

While making the decision:

At this point, if you're thinking "this sounds like a lot of extra work," you will find that this approach very quickly becomes second nature to your executive team, and any additional time it incurs is high ROI: it ensures all the expertise at your organization is expressed, and it sets guardrails so the decision's downside is limited and you learn from it whether it goes well or poorly.

As long as there are humans in the loop, working with data and analyses generated by human and AI agents will remain a critically valuable skill set: in particular, navigating the minefield of cognitive biases while working with data.

Sid Rajgarhia is on the investment team at First Round Capital and has spent the last decade working on data-driven decision making at software companies.

