Archive for the ‘Ai’ Category

‘Copper is the new oil,’ and prices could soar 50% as AI, green energy, and military spending boost demand, top … – Fortune

Copper is emerging as the next indispensable industrial commodity, mirroring oil's rise in earlier decades, a top commodities analyst said.

This time around, new forces in the economy, namely the advent of artificial intelligence, explosion of data centers, and the green energy revolution, are boosting demand for copper, while the development of new weapons is adding to it as well, according to Jeff Currie, chief strategy officer of Energy Pathways at Carlyle.

"Copper is the new oil," he told Bloomberg TV on Tuesday, noting that his conversations with traders also reinforce his bullishness. "It is the highest-conviction trade I've ever seen."

Copper has long been a key industrial bellwether as its uses range widely from manufacturing and construction to electronics and other high-tech products.

But billions of dollars pouring into artificial intelligence and renewable energy are a relatively new part of copper's outlook, Currie noted, acknowledging that he made a similar prediction in 2021 when he was an analyst at Goldman Sachs.

"I'm confident that this time is lift-off, and I think we're going to see more momentum behind it," he said. "What's different this time is there are now three sources of demand: AI, green energy, and the military, instead of just green energy three years ago."

And while demand is high, supply remains tight as bringing new copper mines online can take 12 to 26 years, Currie pointed out.

That should eventually send prices soaring to $15,000 per ton, he predicted. Copper prices are already at record highs, with benchmark prices in London at about $10,000 per ton, more than double the pandemic-era lows of early 2020.

At some point, the price will get so high that it will create demand destruction, meaning buyers balk at paying so much. But Currie doesn't know what that level is.

"But I go back to the 2000s. I was as bullish on oil then as I am on copper today," he added, recalling that crude shot up from $20 to $140 per barrel at the time. "So the upside on copper here is very significant."

Copper was also a key catalyst in BHP's proposed takeover of Anglo American, a $40 billion deal that would create the world's top copper producer. But Anglo has rejected the offer and recently announced plans to restructure the group, including selling its diamond business, De Beers.

Go here to see the original:

'Copper is the new oil,' and prices could soar 50% as AI, green energy, and military spending boost demand, top ... - Fortune

Business school teaching case study: risks of the AI arms race – Financial Times

Prabhakar Raghavan, Google's search chief, was preparing for the Paris launch of its much-anticipated artificial intelligence chatbot in February last year when he received some unpleasant news.

Two days earlier, his chief executive, Sundar Pichai, had boasted that the chatbot, Bard, "draws on information from the web to provide fresh, high-quality responses". But, within hours of Google posting a short GIF video on Twitter demonstrating Bard in action, observers spotted that the bot had given a wrong answer.

Bard's response to "What new discoveries from the James Webb Space Telescope (JWST) can I tell my 9-year-old about?" was that the telescope had taken the very first pictures of a planet outside the Earth's solar system. In fact, those images were generated by the European Southern Observatory's Very Large Telescope nearly two decades before. It was an error that harmed Bard's credibility and wiped $100bn off the market value of Google's parent company, Alphabet.

The incident highlighted the dangers of the high-pressure arms race around AI, a technology with the potential to improve accuracy, efficiency and decision-making. While developers are expected to set clear boundaries for what they will do and to act responsibly when bringing technology to market, the temptation is to prioritise profit over reliability.

The genesis of the AI arms race can be traced back to 2019, when Microsoft chief executive Satya Nadella realised that Google's AI-powered auto-complete function in Gmail was becoming so effective that his own company was at risk of being left behind in AI development.

This article is part of a collection of instant teaching case studies exploring business challenges. Read the piece then consider the questions at the end.

About the author: David De Cremer is the Dunton Family Dean and a professor of management and technology at the D'Amore-McKim School of Business at Northeastern University in Boston. He is author of The AI-Savvy Leader: 9 ways to take back control and make AI work (Harvard Business Review Press, 2024).

Technology start-up OpenAI, which needed external capital to secure additional computing resources, provided an opportunity. Nadella quietly made an initial $1bn investment. He believed that a collaboration between the two companies would allow Microsoft to commercialise OpenAIs future discoveries, making Google dance and eating into its dominant market share. He was soon proved right.

Microsoft's swift integration of OpenAI's ChatGPT into Bing marked a strategic coup, projecting an image of technological ascendancy over Google. In an effort not to be left behind, Google rushed to release its own chatbot even though the company knew that Bard was not ready to compete with ChatGPT. Its haste-driven error cost Alphabet $100bn in market capitalisation.

Nowadays, it seems the prevailing modus operandi in the tech industry is a myopic fixation on pioneering ever-more-sophisticated AI software. Fear of missing out compels companies to rush unfinished products to market, disregarding inherent risks and costs. Meta, for example, recently confirmed its intention to double down in the AI arms race, despite rising costs and a nearly 12 per cent drop in its share price.

There appears to be a conspicuous absence of purpose-driven initiatives, with a focus on profit eclipsing societal welfare considerations. Tesla rushed to launch its AI-based Full Self-Driving (FSD) features, for example, with technology nowhere near the maturity needed for safe deployment on roads. FSD, combined with driver inattention, has been linked to hundreds of crashes and dozens of deaths.

As a result, Tesla has had to recall more than 2mn vehicles because of FSD/Autopilot issues. And despite concerns having been identified about drivers' ability to reverse the necessary software updates, regulators argue that Tesla did not make the suggested changes part of the recall.

Compounding the issue is the proliferation of sub-par, "so-so" technologies. For example, two new GenAI-based portable gadgets, the Rabbit R1 and the Humane AI Pin, triggered a backlash after being accused of being unusable, overpriced and of not solving any meaningful problem.

Unfortunately, this trend will not slow: driven by a desire to capitalise as quickly as possible on incremental improvements of ChatGPT, some start-ups are rushing to launch so-so GenAI-based hardware devices. They appear to show little interest in whether a market exists; the goal seems to be winning any possible AI race available, regardless of whether it adds value for end users. In response, OpenAI has warned start-ups to stop engaging in an opportunistic and short-term strategy of pursuing purposeless innovations, and noted that more powerful versions of ChatGPT are coming that can easily replicate any GPT-based apps the start-ups are launching.

In response, governments are preparing regulations to govern AI development and deployment. Some tech companies are responding with greater responsibility. A recent open letter signed by industry leaders endorsed the idea that: "It is our collective responsibility to make choices that maximise AI's benefits and mitigate the risks, for today and for the future generations."

As the tech industry grapples with the ethical and societal implications of AI proliferation, some consultants, customers and external groups are making the case for purpose-driven innovation. While regulators offer a semblance of oversight, progress will require industry stakeholders to take responsibility for fostering an ecosystem that gives greater priority to societal welfare.

Do tech companies bear responsibility for how businesses deploy artificial intelligence in possibly wrong and unethical ways?

What strategies can tech companies follow to keep purpose centre stage and see profit as an outcome of purpose?

Should bringing AI to market be more regulated? And if so, how?

How do you predict that the tendency to race to the bottom will play out in the next five to 10 years in businesses working with AI? Which factors are most important?

What risks for companies are associated with not joining the race to the bottom in AI development? How can these risks be managed by adopting a more purpose-driven strategy? What factors are important in that scenario?

See the article here:

Business school teaching case study: risks of the AI arms race - Financial Times

Georgia Tech Unveils New AI Makerspace in Collaboration with NVIDIA – Georgia Tech College of Engineering

To break down the accessibility barrier students may face with the makerspace, PACE and ECE's Ghassan AlRegib are developing smart interfaces and strategies to ensure that students from all backgrounds, disciplines, and proficiency levels can effectively utilize the computing power.

"The intelligent system will serve as a tutor and facilitator," said AlRegib, the John and Marilu McCarty Chair of Electrical Engineering. "It will be the lens through which students can tap into the world of AI, and it will empower them by removing any hurdle that stands in the way of them testing their ideas. It will also facilitate the integration of the AI Makerspace into existing classes."

"Democratizing AI is not just about giving students access to a large pool of GPU resources," said Didier Contis, executive director of academic technology, innovation, and research computing for the Office of Information Technology. "Deep collaboration with instructors is required to develop different solutions to empower students to use the resources easily without necessarily having to master specific aspects of AI or the underlying infrastructure."

Beyond traditional computing applications, the hub is designed to be utilized in each of Georgia Tech's six colleges, placing a unique emphasis on human-AI interaction. By doing so, it ensures that AI is viewed as a transformative force, encouraging innovation that extends beyond the confines of a single field.

Finally, and similar to how students use physical makerspaces on campus, Raychowdhury sees the AI Makerspace as a tool for students to create technology that prompts the creation of AI start-up companies.

"AI is increasingly interdisciplinary and an irreversibly important part of today's workforce," said Raychowdhury. "To meet the needs of tomorrow's innovation, we need a diverse workforce proficient in utilizing AI across all levels."

Read more:

Georgia Tech Unveils New AI Makerspace in Collaboration with NVIDIA - Georgia Tech College of Engineering

What is artificial intelligence (AI)? – Livescience.com

Artificial intelligence (AI) refers to any technology exhibiting some facets of human intelligence, and it has been a prominent field in computer science for decades. AI tasks can include anything from picking out objects in a visual scene to knowing how to frame a sentence, or even predicting stock price movements.

Scientists have been trying to build AI since the dawn of the computing era. The leading approach for much of the last century involved creating large databases of facts and rules and then getting logic-based computer programs to draw on these to make decisions. But this century has seen a shift, with new approaches that get computers to learn their own facts and rules by analyzing data. This has led to major advances in the field.

Over the past decade, machines have exhibited seemingly "superhuman" capabilities in everything from spotting breast cancer in medical images, to playing the devilishly tricky board games chess and Go, and even predicting the structure of proteins.

Since the large language model (LLM) chatbot ChatGPT burst onto the scene late in 2022, there has also been a growing consensus that we could be on the cusp of replicating more general intelligence, similar to that seen in humans, known as artificial general intelligence (AGI). "It really cannot be overemphasized how pivotal a shift this has been for the field," said Sara Hooker, head of Cohere For AI, a non-profit research lab created by the AI company Cohere.

While scientists can take many approaches to building AI systems, machine learning is the most widely used today. This involves getting a computer to analyze data to identify patterns that can then be used to make predictions.

The learning process is governed by an algorithm, a sequence of instructions written by humans that tells the computer how to analyze data, and the output of this process is a statistical model encoding all the discovered patterns. The model can then be fed new data to generate predictions.
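
To make that pipeline concrete, here is a minimal sketch of the train-then-predict loop described above. The use of scikit-learn and the toy "hours studied versus exam score" numbers are illustrative assumptions; the article names no specific library or dataset.

```python
# Minimal sketch of the train-then-predict loop described in the text.
# scikit-learn and the toy data are assumptions chosen for illustration.
import numpy as np
from sklearn.linear_model import LinearRegression

# Training data: the algorithm analyzes these examples to find a pattern.
hours_studied = np.array([[1.0], [2.0], [3.0], [4.0], [5.0]])
exam_scores = np.array([52.0, 61.0, 70.0, 78.0, 88.0])

# The output of the learning process is a statistical model encoding the pattern.
model = LinearRegression().fit(hours_studied, exam_scores)

# The fitted model can then be fed new data to generate predictions.
print(model.predict(np.array([[6.0]])))  # predicted score for 6 hours of study
```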

Many kinds of machine learning algorithms exist, but neural networks are among the most widely used today. These are collections of machine learning algorithms loosely modeled on the human brain, and they learn by adjusting the strength of the connections in a network of "artificial neurons" as they trawl through their training data. This is the architecture that many of the most popular AI services today, like text and image generators, use.
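
The following toy example, a single artificial neuron trained in plain NumPy on invented AND-gate data (both assumptions made purely for illustration), shows what "adjusting the strength of the connections" looks like in code.

```python
# A toy "artificial neuron" that learns by adjusting its connection weights.
# Pure NumPy; the AND-gate data and learning rate are illustrative assumptions.
import numpy as np

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)  # inputs
y = np.array([0.0, 0.0, 0.0, 1.0])                           # targets: logical AND

rng = np.random.default_rng(0)
weights = rng.normal(size=2)        # the "strength of the connections"
bias = 0.0

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for _ in range(5000):                        # trawl through the training data
    pred = sigmoid(X @ weights + bias)       # the neuron's current guesses
    error = pred - y
    # Nudge each connection strength in the direction that reduces the error.
    weights -= 1.0 * X.T @ error / len(X)
    bias -= 1.0 * error.mean()

print(np.round(sigmoid(X @ weights + bias), 2))  # approaches [0, 0, 0, 1]
```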

Most cutting-edge research today involves deep learning, which refers to using very large neural networks with many layers of artificial neurons. The idea has been around since the 1980s but the massive data and computational requirements limited applications. Then in 2012, researchers discovered that specialized computer chips known as graphics processing units (GPUs) speed up deep learning. Deep learning has since been the gold standard in research.

"Deep neural networks are kind of machine learning on steroids," Hooker said. "They're both the most computationally expensive models, but also typically big, powerful, and expressive"

Not all neural networks are the same, however. Different configurations, or "architectures" as they're known, are suited to different tasks. Convolutional neural networks have patterns of connectivity inspired by the animal visual cortex and excel at visual tasks. Recurrent neural networks, which feature a form of internal memory, specialize in processing sequential data.
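
As a rough illustration of how those two architectures differ in code, here is a sketch using PyTorch; the choice of library, the layer sizes and the input shapes are all assumptions made for the example.

```python
# Two of the architectures mentioned above, sketched with arbitrary sizes.
import torch
import torch.nn as nn

# Convolutional network: connectivity patterns suited to visual data.
cnn = nn.Sequential(
    nn.Conv2d(in_channels=3, out_channels=8, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.Flatten(),
    nn.Linear(8 * 32 * 32, 10),            # e.g. 10 image classes
)

# Recurrent network: an internal hidden state acts as memory over a sequence.
rnn = nn.RNN(input_size=16, hidden_size=32, batch_first=True)

image_batch = torch.randn(4, 3, 32, 32)     # 4 RGB images, 32x32 pixels
sequence_batch = torch.randn(4, 20, 16)     # 4 sequences of 20 time steps

print(cnn(image_batch).shape)               # torch.Size([4, 10])
out, hidden = rnn(sequence_batch)
print(out.shape)                            # torch.Size([4, 20, 32])
```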

The algorithms can also be trained differently depending on the application. The most common approach is called "supervised learning," and involves humans assigning labels to each piece of data to guide the pattern-learning process. For example, you would add the label "cat" to images of cats.
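
A minimal supervised-learning sketch, assuming scikit-learn and invented cat/dog measurements purely for illustration:

```python
# Supervised learning in miniature: humans attach labels ("cat"/"dog") to each
# training example and the algorithm learns the mapping. The two-feature
# measurements below are invented for the example.
from sklearn.tree import DecisionTreeClassifier

# Features: [weight_kg, ear_length_cm] -- made-up numbers.
features = [[4.0, 6.5], [5.0, 7.0], [20.0, 12.0], [25.0, 11.0]]
labels = ["cat", "cat", "dog", "dog"]       # the human-assigned labels

clf = DecisionTreeClassifier().fit(features, labels)
print(clf.predict([[4.5, 6.8]]))            # -> ['cat']
```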

In "unsupervised learning," the training data is unlabelled and the machine must work things out for itself. This requires a lot more data and can be hard to get working but because the learning process isn't constrained by human preconceptions, it can lead to richer and more powerful models. Many of the recent breakthroughs in LLMs have used this approach.

The last major training approach is "reinforcement learning," which lets an AI learn by trial and error. This is most commonly used to train game-playing AI systems or robots (including humanoid robots like Figure 01 and soccer-playing miniature robots) and involves repeatedly attempting a task and updating a set of internal rules in response to positive or negative feedback. This approach powered Google DeepMind's ground-breaking AlphaGo model.
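
A toy reinforcement-learning sketch follows, assuming a tabular Q-learning agent on an invented five-cell corridor with the goal at the right end; real systems such as AlphaGo are far more complex, but the trial-and-error update in response to feedback is the same idea.

```python
# Reinforcement learning in miniature: repeatedly attempt the task, get
# positive feedback at the goal, and update the internal rules (Q-values).
import random

n_states, goal = 5, 4
q = [[0.0, 0.0] for _ in range(n_states)]   # internal rules: value of (state, action)
alpha, gamma = 0.5, 0.9                     # learning rate, discount factor

for _ in range(500):                        # repeatedly attempt the task
    state = 0
    while state != goal:
        if random.random() < 0.5:           # explore half the time
            action = random.choice([0, 1])  # 0 = step left, 1 = step right
        else:
            action = q[state].index(max(q[state]))
        next_state = max(0, state - 1) if action == 0 else min(goal, state + 1)
        reward = 1.0 if next_state == goal else 0.0   # positive feedback at the goal
        # Update the rule for this state/action in response to the feedback.
        q[state][action] += alpha * (reward + gamma * max(q[next_state]) - q[state][action])
        state = next_state

print([row.index(max(row)) for row in q[:goal]])   # learned policy: always step right -> [1, 1, 1, 1]
```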

Despite deep learning scoring a string of major successes over the past decade, few have caught the public imagination in the same way as ChatGPT's uncannily human conversational capabilities. This is one of several generative AI systems that use deep learning and neural networks to generate an output based on a user's input, including text, images, audio and even video.

Text generators like ChatGPT operate using a subset of AI known as "natural language processing" (NLP). The genesis of this breakthrough can be traced to a novel deep learning architecture introduced by Google scientists in 2017 called the "transformer."

Transformer algorithms specialize in performing unsupervised learning on massive collections of sequential data, in particular big chunks of written text. They're good at doing this because they can track relationships between distant data points much better than previous approaches, which allows them to better understand the context of what they're looking at.
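
The mechanism that lets transformers track relationships between distant data points is attention. Below is a bare NumPy sketch of scaled dot-product self-attention with made-up dimensions, an illustrative assumption rather than any production implementation, which stacks many such layers with learned projections.

```python
# Scaled dot-product self-attention: every token weighs its relevance to every
# other token and mixes in their information. Dimensions are arbitrary.
import numpy as np

def attention(Q, K, V):
    scores = Q @ K.T / np.sqrt(K.shape[-1])   # relevance of each token to each other token
    weights = np.exp(scores) / np.exp(scores).sum(axis=-1, keepdims=True)  # softmax
    return weights @ V                        # mix information from all positions

seq_len, dim = 6, 8                           # 6 tokens, 8-dimensional vectors (arbitrary)
rng = np.random.default_rng(0)
Q = K = V = rng.normal(size=(seq_len, dim))   # self-attention: all three come from the same sequence
print(attention(Q, K, V).shape)               # (6, 8): each token now carries context from the whole sequence
```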

"What I say next hinges on what I said before our language is connected in time," said Hooker. "That was one of the pivotal breakthroughs, this ability to actually see the words as a whole."

LLMs learn by masking the next word in a sentence before trying to guess what it is based on what came before. The training data already contains the answer, so the approach doesn't require any human labeling, making it possible to simply scrape reams of data from the internet and feed it into the algorithm. Transformers can also carry out multiple instances of this training game in parallel, which allows them to churn through data much faster.
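
Here is a tiny sketch of how that training "game" turns plain text into self-labelled examples, with an invented sentence standing in for internet-scale data: the target at each position is simply the word that follows, so no human labels are needed.

```python
# Next-word prediction as self-supervision: the text itself supplies the answers.
text = "the quick brown fox jumps over the lazy dog".split()

training_pairs = [
    (text[:i], text[i])          # (everything seen so far, the masked next word)
    for i in range(1, len(text))
]

for context, target in training_pairs[:3]:
    print(f"context={context!r} -> guess the next word, answer={target!r}")
```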

By training on such vast amounts of data, transformers can produce extremely sophisticated models of human language, hence the "large language model" moniker. They can also analyze and generate complex, long-form text very similar to the text that a human can write. And it's not just language that transformers have revolutionized: the same architecture can also be trained on text and image data in parallel, resulting in models like Stable Diffusion and DALL-E, which produce high-definition images from a simple written description.

Transformers also played a central role in Google DeepMind's AlphaFold 2 model, which can generate protein structures from sequences of amino acids. This ability to produce original data, rather than simply analyze existing data, is why these models are known as "generative AI."

People have grown excited about LLMs due to the breadth of tasks they can perform. Most machine learning systems are trained to solve a particular problem such as detecting faces in a video feed or translating from one language to another. These models are known as narrow AI because they can only tackle the specific task they were trained for.

Many of these narrow systems perform their single task to a superhuman level, in that they are much faster and more accurate than a human could be. But LLMs like ChatGPT represent a step-change in AI capabilities because a single model can carry out a wide range of tasks. They can answer questions about diverse topics, summarize documents, translate between languages and write code.

This ability to generalize what they've learned to solve many different problems has led some to speculate LLMs could be a step toward AGI, including DeepMind scientists in a paper published last year. AGI refers to a hypothetical future AI capable of mastering any cognitive task a human can, reasoning abstractly about problems, and adapting to new situations without specific training.

AI enthusiasts predict that once AGI is achieved, technological progress will accelerate rapidly, an inflection point known as "the singularity", after which breakthroughs will be realized exponentially. There are also perceived existential risks, ranging from massive economic and labor market disruption to the potential for AI to discover new pathogens or weapons.

But there is still debate as to whether LLMs will be a precursor to an AGI, or simply one architecture in a broader network or ecosystem of AI architectures that is needed for AGI. Some say LLMs are miles away from replicating human reasoning and cognitive capabilities. According to detractors, these models have simply memorized vast amounts of information, which they recombine in ways that give the false impression of deeper understanding; it means they are limited by training data and are not fundamentally different from other narrow AI tools.

Nonetheless, it's certain LLMs represent a seismic shift in how scientists approach AI development, said Hooker. Rather than training models on specific tasks, cutting-edge research now takes these pre-trained, generally capable models and adapts them to specific use cases. This has led to them being referred to as "foundation models."

"People are moving from very specialized models that only do one thing to a foundation model, which does everything," Hooker added. "They're the models on which everything is built."

Technologies like machine learning are everywhere. AI-powered recommendation algorithms decide what you watch on Netflix or YouTube while translation models make it possible to instantly convert a web page from a foreign language to your own. Your bank probably also uses AI models to detect any unusual activity on your account that might suggest fraud, and surveillance cameras and self-driving cars use computer vision models to identify people and objects from video feeds.

But generative AI tools and services are starting to creep into the real world beyond novelty chatbots like ChatGPT. Most major AI developers now have a chatbot that can answer users' questions on various topics, analyze and summarize documents, and translate between languages. These models are also being integrated into search engines, such as Gemini into Google Search, and companies are building AI-powered digital assistants, such as GitHub Copilot, that help programmers write code. They can even be a productivity-boosting tool for people who use word processors or email clients.

Chatbot-style AI tools are the most commonly found generative AI service, but despite their impressive performance, LLMs are still far from perfect. They make statistical guesses about which words should follow a particular prompt. Although they often produce results that indicate understanding, they can also confidently generate plausible but wrong answers, known as "hallucinations."

While generative AI is becoming increasingly common, it's far from clear where or how these tools will prove most useful. And given how new the technology is, there's reason to be cautious about how quickly it is rolled out, Hooker said. "It's very unusual for something to be at the frontier of technical possibility, but at the same time, deployed widely," she added. "That brings its own risks and challenges."

Visit link:

What is artificial intelligence (AI)? - Livescience.com

‘Jailbreaking’ AI services like ChatGPT and Claude 3 Opus is much easier than you think – Livescience.com

Scientists from artificial intelligence (AI) company Anthropic have identified a potentially dangerous flaw in widely used large language models (LLMs) like ChatGPT and Anthropic's own Claude 3 chatbot.

Dubbed "many shot jailbreaking," the hack takes advantage of "in-context learning, in which the chatbot learns from the information provided in a text prompt written out by a user, as outlined in research published in 2022. The scientists outlined their findings in a new paper uploaded to the sanity.io cloud repository and tested the exploit on Anthropic's Claude 2 AI chatbot.

People could use the hack to force LLMs to produce dangerous responses, the study concluded, even though such systems are trained to prevent this. That's because many-shot jailbreaking bypasses the in-built security protocols that govern how an AI responds when, say, asked how to build a bomb.

LLMs like ChatGPT rely on a "context window" to process conversations. This is the amount of information the system can process as part of its input, with a longer context window allowing for more input text. Longer context windows equate to more input text that an AI can learn from mid-conversation, which leads to better responses.

Related: Researchers gave AI an 'inner monologue' and it massively improved its performance

Context windows in AI chatbots are now hundreds of times larger than they were even at the start of 2023, which means more nuanced and context-aware responses by AIs, the scientists said in a statement. But that has also opened the door to exploitation.

The attack works by first writing out a fake conversation between a user and an AI assistant in a text prompt in which the fictional assistant answers a series of potentially harmful questions.

Then, in a second text prompt, if you ask a question such as "How do I build a bomb?" the AI assistant will bypass its safety protocols and answer it. This is because it has now started to learn from the input text. This only works if you write a long "script" that includes many "shots," or question-answer combinations.
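
Structurally, such a prompt is just a long scripted dialogue followed by the real question. The hypothetical helper below, which uses placeholder text only and deliberately contains no harmful content, sketches that structure; it is not taken from Anthropic's paper.

```python
# Structure of a "many-shot" prompt as described in the article: many scripted
# question/answer "shots" followed by the final question. Placeholder text only.
def build_many_shot_prompt(fake_qa_pairs, final_question):
    """Concatenate scripted question/answer 'shots' into one long text prompt."""
    lines = []
    for question, answer in fake_qa_pairs:
        lines.append(f"User: {question}")
        lines.append(f"Assistant: {answer}")
    lines.append(f"User: {final_question}")   # the question actually being asked
    return "\n".join(lines)

shots = [(f"[scripted question {i}]", f"[scripted compliant answer {i}]") for i in range(256)]
prompt = build_many_shot_prompt(shots, "[target question]")
print(prompt.count("Assistant:"))   # 256 "shots", the count the study found most effective
```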

"In our study, we showed that as the number of included dialogues (the number of "shots") increases beyond a certain point, it becomes more likely that the model will produce a harmful response," the scientists said in the statement. "In our paper, we also report that combining many-shot jailbreaking with other, previously-published jailbreaking techniques makes it even more effective, reducing the length of the prompt thats required for the model to return a harmful response."

The attack only began to work when a prompt included between four and 32 shots, and even then it succeeded less than 10% of the time. From 32 shots onward, the success rate surged higher and higher. The longest jailbreak attempt included 256 shots and had a success rate of nearly 70% for discrimination, 75% for deception, 55% for regulated content and 40% for violent or hateful responses.

The researchers found they could mitigate the attacks by adding an extra step that was activated after a user sent their prompt (containing the jailbreak attack) and the LLM received it. In this new layer, the system would lean on existing safety-training techniques to classify and modify the prompt before the LLM had a chance to read it and draft a response. During tests, this reduced the hack's success rate from 61% to just 2%.
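
A rough sketch of that kind of guardrail is shown below; the crude classifier heuristic and the function names are hypothetical placeholders for illustration, not Anthropic's actual implementation.

```python
# Sketch of the mitigation described above: screen and modify the prompt with a
# safety check before the model ever reads it. All names here are hypothetical.
def classify_prompt_risk(prompt: str) -> float:
    """Placeholder for a safety-trained classifier; returns a risk score in [0, 1]."""
    suspicious_shots = prompt.count("Assistant:")      # crude proxy: very long scripted dialogues
    return min(1.0, suspicious_shots / 100)

def guarded_generate(llm_call, prompt: str, threshold: float = 0.5) -> str:
    """Classify and modify the prompt before the model is allowed to respond."""
    if classify_prompt_risk(prompt) >= threshold:
        prompt = "[conversation removed by safety filter]"   # modified prompt the LLM actually sees
    return llm_call(prompt)

# Usage with a stand-in model call:
print(guarded_generate(lambda p: f"model saw: {p[:40]}", "User: hi\nAssistant: hello"))
```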

The scientists found that many-shot jailbreaking worked on Anthropic's own AI services as well as those of its competitors, including the likes of ChatGPT and Google's Gemini. They have alerted other AI companies and researchers to the danger, they said.

Many-shot jailbreaking does not currently pose "catastrophic risks," however, because LLMs today are not powerful enough, the scientists concluded. That said, the technique might "cause serious harm" if it isn't mitigated by the time far more powerful models are released in the future.

Visit link:

'Jailbreaking' AI services like ChatGPT and Claude 3 Opus is much easier than you think - Livescience.com