Archive for the ‘Alphago’ Category

If I had to pick one AI tool… this would be it. – Exponential View

There are so many new artificial intelligence products out there. Which ones are really worth your time?

If I had to pick one, it wouldn't be ChatGPT or Claude. It would be Perplexity.ai.

Since 1 October, I've logged more than 268 queries on Perplexity from my laptop alone (I use it on my phone, too). It's displacing a large number of my Google searches.

I decided to speak to the co-founder and CEO of Perplexity, Aravind Srinivas. Aravind and his team are fresh off a $500 million funding round led by IVP.

You can watch our discussion in the video embedded in this post. The full hour-long discussion and transcript are open to paying members of Exponential View.

Of the many brilliant insights in our conversation, I was particularly excited to cover the following areas:

Google's innovator's dilemma.

The fuzzy art of shipping products built on AI models.

AI as ignition for a new era of human entrepreneurship.

Mapping out the route to AGI.

Going from autocomplete to autopilot in the coming years.

Safety in a world with billions of AIs.

AI open-source: democratizing progress or losing control?

Beyond the technology: how do we get the public behind this journey?


Azeem Azhar: Aravind, thanks for taking a few moments off the rocket ship to speak to me.

Link:
If I had to pick one AI tool... this would be it. - Exponential View

For the first time, AI produces better weather predictions — and it’s … – ZME Science


Predicting the weather is notoriously difficult. Not only are there a million and one parameters to consider, but there's also a good degree of chaotic behavior in the atmosphere. Yet DeepMind's scientists (the same group that brought us AlphaGo and AlphaFold) have developed a system that could revolutionize weather forecasting. This advanced AI model leverages vast amounts of data to generate highly accurate predictions.

Weather forecasting, an indispensable tool in our daily lives, has undergone tremendous advancements over the years. Today's 6-day forecast is as good as (if not better than) the 3-day forecast from 30 years ago. Storms and extreme weather events rarely catch people off-guard. You may not notice it because the improvement is gradual, but weather forecasting has progressed greatly.

This is more than just a convenience; it's a lifesaver. Weather forecasts help people prepare for extreme events, saving lives and money. They are indispensable for farmers protecting their crops, and they significantly impact the global economy.

This is exactly where AI enters the room.

DeepMind scientists now claim they've made a remarkable leap in weather forecasting with their GraphCast model. GraphCast is a sophisticated machine-learning algorithm that outperforms conventional weather forecasting around 90% of the time.

"We believe this marks a turning point in weather forecasting," Google's researchers wrote in a study published Tuesday.

Crucially, GraphCast offers warnings much faster than standard models. For instance, in September, GraphCast accurately predicted that Hurricane Lee would make landfall in Nova Scotia nine days in advance. Currently used models predicted it only six days in advance.

The method that GraphCast uses is significantly different. Current forecasts typically rely on a large set of carefully defined physics equations. These are then transformed into algorithms and run on supercomputers, which simulate the atmosphere. As mentioned, scientists have used this approach with great results so far.

However, this approach requires a lot of expertise and computing power. Machine learning offers a different approach. Instead of running equations on the current weather conditions, you look at historical data: you see what type of conditions led to what type of weather. It gets even better: you can mix conventional methods with this new AI approach and get accurate, fast forecasts.

"Crucially, GraphCast and traditional approaches go hand-in-hand: we trained GraphCast on four decades of weather reanalysis data, from the ECMWF's ERA5 dataset. This trove is based on historical weather observations such as satellite images, radar, and weather stations using a traditional numerical weather prediction (NWP) to fill in the blanks where the observations are incomplete, to reconstruct a rich record of global historical weather," writes lead author Remi Lam, from DeepMind.

While GraphCast's training was computationally intensive, the resulting forecasting model is highly efficient. Making 10-day forecasts with GraphCast takes less than a minute on a single Google TPU v4 machine. For comparison, a 10-day forecast using a conventional approach can take hours of computation on a supercomputer with hundreds of machines.
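
To make the idea concrete, here is a toy Python sketch of the general pattern such learned forecasters follow: a model predicts the atmospheric state a few hours ahead, and its output is fed back in repeatedly to build a multi-day forecast. The grid size, six-hour step, and `learned_step` function are illustrative assumptions only, not DeepMind's actual model or interface.

```python
import numpy as np

# Toy sketch of autoregressive forecasting, the pattern GraphCast-style models use:
# a learned model maps the current state to the state ~6 hours ahead, and each
# prediction is fed back in to roll the forecast out to 10 days.
# `learned_step` is a stand-in (persistence plus noise), NOT a real trained model.

GRID = (181, 360)          # hypothetical lat/lon grid
STEP_HOURS = 6
FORECAST_DAYS = 10

def learned_step(state: np.ndarray) -> np.ndarray:
    """Placeholder for a trained one-step forecaster."""
    return state + 0.01 * np.random.randn(*state.shape)

def rollout(initial_state: np.ndarray, days: int = FORECAST_DAYS) -> list:
    """Roll the one-step model forward autoregressively."""
    steps = days * 24 // STEP_HOURS
    states = [initial_state]
    for _ in range(steps):
        states.append(learned_step(states[-1]))
    return states

forecast = rollout(np.zeros(GRID))
print(f"Produced {len(forecast) - 1} six-hour steps (~{FORECAST_DAYS} days).")
```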

The algorithm isn't perfect; it still lags behind conventional models in some regards (especially precipitation forecasting). But considering how easy it is to use, it's at least an excellent complement to existing forecasting tools. There's another exciting bit about it: it's open source. This means that companies and researchers can use it and adapt it to better suit their needs.

"By open-sourcing the model code for GraphCast, we are enabling scientists and forecasters around the world to benefit billions of people in their everyday lives. GraphCast is already being used by weather agencies," adds Lam.

The significance of this development cannot be overstated. As our planet faces increasingly unpredictable weather patterns due to climate change, the ability to accurately and quickly predict weather events becomes a critical tool in mitigating risks. The implications are far-reaching, from urban planning and disaster management to agriculture and air travel.

Moreover, the open-source nature of GraphCast democratizes access to cutting-edge forecasting technology. By making this powerful tool available to a wide range of users, from small-scale farmers in remote areas to large meteorological organizations, the potential for innovation and localized weather solutions increases exponentially.

No doubt, we're witnessing another field where machine learning is making a difference. The marriage of AI and weather forecasting is not just a fleeting trend but a fundamental shift in how we understand and anticipate the whims of nature.

Read more here:
For the first time, AI produces better weather predictions -- and it's ... - ZME Science

Understanding the World of Artificial Intelligence: A Comprehensive … – Medium

Welcome to the fascinating world of Artificial Intelligence (AI). As technology continues to evolve at an unprecedented pace, AI stands at the forefront, reshaping our lives and industries. Let's dive deep into the core concepts that make AI the marvel it is today.

Algorithms are the unsung heroes of the digital age. Think of them as a chef's recipe, detailing step-by-step instructions for a computer to whip up a delightful dish. From ancient Babylonian clay tablets to today's sophisticated computer systems, algorithms have been guiding processes and decisions. For instance, the age-old Euclidean algorithm for finding the greatest common divisor is still very much in use. Even our daily activities, like brushing our teeth, can be broken down into a series of algorithmic steps.
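
As a concrete illustration, here is that age-old recipe written out in a few lines of Python; the function and the test values are just an example.

```python
# The Euclidean algorithm: a recipe-like sequence of steps that finds the
# greatest common divisor of two integers by repeated division with remainder.

def gcd(a: int, b: int) -> int:
    while b != 0:
        a, b = b, a % b   # replace (a, b) with (b, remainder of a divided by b)
    return abs(a)

print(gcd(252, 105))  # -> 21
```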

Machine Learning (ML) is like giving computers a brain of their own. Instead of spoon-feeding them every piece of information, we let them learn from patterns and data. Imagine showing a computer millions of pictures of cats and dogs. Over time, it starts recognizing the subtle differences and can classify new images with remarkable accuracy. However, while these models are pattern-recognition champions, they might stumble when faced with tasks requiring intricate reasoning.
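
A stripped-down sketch of the same idea: instead of hard-coded rules, the program compares a new example to labeled examples it has already seen. The two-number feature vectors below are made up for illustration; a real image classifier would learn far richer features from millions of pictures.

```python
# A toy version of learning from examples rather than explicit rules:
# classify a new point by finding the most similar labeled example.

labeled_examples = [
    ((0.9, 0.2), "cat"),   # hypothetical feature vectors, not real image data
    ((0.8, 0.3), "cat"),
    ((0.2, 0.9), "dog"),
    ((0.3, 0.8), "dog"),
]

def classify(features):
    def distance(example):
        (x, y), _ = example
        return (x - features[0]) ** 2 + (y - features[1]) ** 2
    return min(labeled_examples, key=distance)[1]

print(classify((0.85, 0.25)))  # -> "cat"
print(classify((0.25, 0.85)))  # -> "dog"
```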

Natural Language Processing (NLP) is the art and science of making machines understand and respond to human language. If you've ever chatted with Siri or Alexa, you've experienced NLP in action. Today's advanced NLP systems can even discern the context of words. For instance, they can figure out whether "club" refers to a sandwich, a golf game, or a nightlife venue based on surrounding text.
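
Here is a deliberately crude sketch of context-based disambiguation. Real NLP systems learn these associations from data rather than relying on hand-written clue lists; the lists and sentences below are purely illustrative.

```python
# Crude sketch of using surrounding words to disambiguate "club",
# as modern NLP systems do (far more subtly) with learned context.

context_clues = {
    "sandwich": {"turkey", "bacon", "lunch", "toast", "mayo"},
    "golf": {"swing", "iron", "course", "tee", "driver"},
    "nightlife": {"dance", "music", "dj", "night", "drinks"},
}

def guess_sense(sentence: str) -> str:
    words = set(sentence.lower().split())
    scores = {sense: len(words & clues) for sense, clues in context_clues.items()}
    return max(scores, key=scores.get)

print(guess_sense("We ordered a club with turkey and bacon for lunch"))  # -> sandwich
print(guess_sense("He chose a different club for the long tee shot"))    # -> golf
```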

Neural Networks take inspiration from the human brain. Just as our brain has neurons that transmit signals, AI has artificial neurons, or nodes, that communicate. These networks continuously learn and adapt. For instance, platforms like Pinterest use neural networks to curate content that resonates with users' preferences.

Deep Learning is like Neural Networks on steroids. The "deep" signifies the multiple layers of artificial neurons. These layers enable the system to process information in a more intricate manner, making it adept at handling complex tasks.
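
The sketch below shows the "stacked layers" idea in a few lines of NumPy: each layer transforms the previous layer's output before passing it on. The weights here are random placeholders; in practice they are learned from data during training.

```python
import numpy as np

# Bare-bones illustration of "deep" = stacked layers: each layer applies
# weights, a bias, and a nonlinearity, and its output feeds the next layer.

rng = np.random.default_rng(0)

def layer(x, in_dim, out_dim):
    W = rng.normal(size=(in_dim, out_dim)) * 0.1
    b = np.zeros(out_dim)
    return np.maximum(0.0, x @ W + b)   # ReLU activation

x = rng.normal(size=(1, 8))      # a single 8-feature input
h1 = layer(x, 8, 16)             # layer 1
h2 = layer(h1, 16, 16)           # layer 2
out = layer(h2, 16, 1)           # output layer
print(out.shape)                 # (1, 1)
```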

Large Language Models (LLMs) are the maestros of text. They can summarize, create, and even predict text. These models are trained on vast amounts of data, making them incredibly versatile. They owe their efficiency to the transformer model, a groundbreaking development by Google in 2017.
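
Since the transformer is mentioned, here is a minimal NumPy sketch of its central operation, scaled dot-product attention, using toy sizes and random inputs. Real LLMs stack many such layers with learned weights; this is only a sketch of the core computation.

```python
import numpy as np

# The heart of the transformer is "attention": every token takes a weighted
# average of the other tokens' values, with weights from query/key similarity.

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention(Q, K, V):
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)   # similarity of each query to each key
    return softmax(scores) @ V        # weighted average of the values

rng = np.random.default_rng(1)
tokens, d_model = 5, 16
Q = rng.normal(size=(tokens, d_model))
K = rng.normal(size=(tokens, d_model))
V = rng.normal(size=(tokens, d_model))
print(attention(Q, K, V).shape)       # (5, 16)
```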

Generative AI can craft content, be it text, images, or even audio. By feeding specific prompts into foundation models, we get outputs tailored to our needs. These models have given birth to innovations like OpenAI's ChatGPT and Google Bard.

Chatbots are our digital conversationalists. Powered by Generative AI, they can engage in meaningful dialogues, answer queries, and even generate content in the style of famous personalities. ChatGPT, for instance, can discuss topics ranging from history to music and offer insights on a plethora of subjects.

Hallucination in AI is when a model produces outputs that might sound plausible but aren't rooted in reality. It's essential to differentiate between hallucinations and biases, as the former is an output error, while the latter stems from skewed training data.

Artificial General Intelligence (AGI) is the zenith of AI development. It's the dream of creating machines that can think, learn, and adapt just like humans. While we're still on the journey towards AGI, advancements like DeepMind's AlphaGo and MuZero show promising strides in that direction.

The realm of AI is vast and ever-evolving. As we continue to harness its potential, we're not just reshaping technology but also redefining the boundaries of human-machine collaboration. Embrace the journey, for the future is AI!

Here is the original post:
Understanding the World of Artificial Intelligence: A Comprehensive ... - Medium

On AI and the soul-stirring char siu rice – asianews.network

October 11, 2023

KUALA LUMPUR

Limitations of traditional programming

Firstly, let's consider traditional computer programming.

Here, the computer acts essentially as a puppet, mimicking precisely the set of explicit human-generated instructions.

Take a point-of-sale system at a supermarket as an example: scan a box of Cheerios, and it charges $3; scan a Red Bull, and it's $2.50.

This robotic repetition of specific commands is probably the most familiar aspect of computers for many people.
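
In code, that puppet-like behavior can be nothing more than a lookup table plus fixed rules, as in this hypothetical sketch (the prices are the ones from the example above; the rest is invented for illustration):

```python
# Traditional programming in miniature: every behavior is spelled out
# explicitly by a human. The program cannot handle anything not listed.

PRICES = {
    "cheerios": 3.00,
    "red bull": 2.50,
}

def scan(item: str) -> float:
    return PRICES[item.lower()]   # unknown items simply raise an error

total = scan("Cheerios") + scan("Red Bull")
print(f"Total: ${total:.2f}")     # -> Total: $5.50
```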

This is akin to rote learning from a textbook, start to finish.

But this programmed obedience has limitations, similar to how following a fixed recipe restricts culinary creativity.

Traditional programming struggles when faced with complex or extensive data.

A set recipe may create a delicious Beef Wellington, but it lacks the capacity to innovate or adapt.

Furthermore, not all data fits neatly into an "A corresponds to B" model.

Take YouTube videos: their underlying messages can't be easily boiled down into basic algorithms.

This rigidity led to the advent of machine learning, or AI, which emerged to discern patterns in data without being explicitly programmed to do so.

Remarkably, the core tenets of machine learning are not entirely new.

Groundwork was being laid as far back as the mid-20th century by pioneers like Alan Turing.

Laksa - Penang + Ipoh

During my childhood, my mother saw the value in non-traditional learning methods.

She enrolled me in a memory training course that discouraged rote memorization.

Instead, the emphasis was on creating mind maps and making associative connections between different pieces of information.

Machine learning models operate on a similar principle. They generate their own sort of mind maps, condensing vast data landscapes into more easily navigated territories.

This allows them to form generalizations and adapt to new information.

For instance, if you type "King - Man + Woman" into ChatGPT, it responds with "Queen."

This demonstrates that the machine isn't just memorizing words but understands the relationships between them.

In this case, it deconstructs "King" into something like "royalty + man."

When you subtract "man" and add "woman," the equation becomes "royalty + woman," which matches "Queen."

For a more localized twist, try typing "Laksa - Penang + Ipoh" into ChatGPT. You'll get "Hor Fun." Isn't that fun?
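
To see why the arithmetic works, here is a toy sketch with hand-made two-dimensional "word vectors." Real models learn vectors with hundreds of dimensions, and ChatGPT's internals are far more elaborate, but the subtraction-and-addition logic is the same.

```python
import numpy as np

# Toy word vectors illustrating "King - Man + Woman ≈ Queen."
# The two dimensions loosely mean "royalty" and "gender"; they are made up,
# not embeddings taken from a real model.

vectors = {
    "king":  np.array([1.0,  1.0]),   # royalty + male
    "queen": np.array([1.0, -1.0]),   # royalty + female
    "man":   np.array([0.0,  1.0]),
    "woman": np.array([0.0, -1.0]),
}

def nearest(v):
    def cosine(a, b):
        return a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9)
    return max(vectors, key=lambda w: cosine(vectors[w], v))

result = vectors["king"] - vectors["man"] + vectors["woman"]
print(nearest(result))   # -> "queen"
```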

Knowledge graphs and cognitive processes

Machine learning fundamentally boils down to compressing a broad swath of world information into an internal architecture.

This enables machine learning to exhibit what we commonly recognize as intelligence, a mechanism strikingly similar to human cognition.

This idea of internal compression and reconstruction is not unique to machines.

For example, a common misconception is that our eyes function like high-definition cameras, capturing every detail within their view.

The reality is quite different. Just as machine learning models process fragmented data, our brains take in fragmented visual input and then reconstruct it into a more complete picture based on pre-existing knowledge.

Our brain's role in filling in these perceptual gaps also makes us susceptible to optical illusions.

You might see two people of identical height appear differently depending on their surroundings.

This phenomenon stems from our brain's reliance on built-in rules to complete the picture, and manipulating these rules can produce distortions.

Speaking of rule-breaking, recall the Go match between AlphaGo and Lee Sedol.

The human side was losing until Sedol executed a move that AlphaGo's internal knowledge graph hadn't anticipated.

This led to several mistakes by the AI, allowing Sedol to win that round.

Here too, the core concept of data reconstruction is at play.

Beyond chess: The revolution in deep learning

The creation and optimization of knowledge graphs have always been a cornerstone of machine learning.

However, for a long time, this area remained our blind spot.

In the realm of chess, before the advent of deep learning, we leaned heavily on human experience.

We developed chess algorithms based on what we thought were optimal rules, akin to following a fixed recipe for a complex dish like Beef Wellington.

We believed our method was fool-proof.

This belief was challenged by Rich Sutton, a luminary in machine learning, in his blog post "The Bitter Lesson."

According to Sutton, our tendency to assume that we have the world all figured out is inherently flawed and short-sighted.

In contrast, recent advancements in machine learning, including AlphaGo Zero and the ChatGPT you're interacting with now, adopt a more flexible, "Char Siu Rice" approach.

They learn from raw data with minimal human oversight.

Sutton argues that given the continued exponential growth in computing power, evidenced by Moore's Law, this method of autonomous learning is the most sustainable path forward for AI development.

While the concept of computers "learning on their own" might unnerve some people, let's demystify that notion.

Far from edging towards human-like self-awareness or sentience, these machines are engaging in advanced forms of data analysis and pattern recognition.

Machine learning models perform the complex dance of parsing, categorizing, and linking large sets of data, akin to an expert chef intuitively knowing how to meld flavors and techniques.

These principles are now entrenched in our daily lives.

When you search for something on Google or receive video recommendations on TikTok, it's these very algorithms at work.

So, instead of indulging in unwarranted fears about the future of machine learning, let's appreciate the advancements that bring both simplicity and complexity into our lives, much like a perfect bowl of Char Siu Rice.

(Yuan-Sen Ting graduated from Chong Hwa Independent High School in Kuala Lumpur before earning his degree from Harvard University in 2017. Subsequently, he was honored with a Hubble Fellowship from NASA in 2019, allowing him to pursue postdoctoral research at the Institute for Advanced Study in Princeton. Currently, he serves as an associate professor at the Australian National University, splitting his time between the School of Computing and the Research School of Astrophysics and Astronomy. His primary focus is on utilizing advanced machine learning techniques for statistical inference in the realm of astronomical big data.)

Continue reading here:
On AI and the soul-stirring char siu rice - asianews.network

Nvidia's Text-to-3D AI Tool Debuts While Its Hardware Business Hits Regulatory Headwinds – Decrypt

Renowned for its technological prowess, Nvidia has arrived at the crossroads of innovation and entrenched interests. As the computer chip maker moves solidly into artificial intelligence, releasing a new application that could redefine 3D modeling, it concurrently faces geopolitical hurdles that threaten its dominance in hardware.

Nvidia joined forces with 3D software publisher Masterpiece Studio to release Masterpiece X, aiming to revolutionize the 3D modeling field by making it as easy as creating a two-dimensional image with MidJourney or Stable Diffusion.

"For years, we've worked hard to create cutting-edge 3D tools that are intuitive but also tools that would enable and empower more and more people to start creating 3D. Masterpiece Studio said in an official announcement, Generative AI enables entirely new possibilities.

The studio says its solution makes it possible to create 3D models with no local hardware or software required, as everything happens in the cloud. "All you need is a keyboard, a browser, a little imagination, and just a few words," they wrote.

As a quick experiment, Decrypt took Masterpiece X for a spin. Our efforts to digitally sculpt our AI mascot, Gen, fell short. The envisioned "child robot" bore more resemblance to a chubby pigeon, while the render of an elegant teacher avatar seemed more like a tipsy vagabond.

Although far from perfect, these results hint at the software's vast potential and exciting advancements on the horizon. It is easier to reach a desired result starting from a pre-existing model instead of having to create a design from scratch.

The AI industry's dependency on Nvidia is notable. A considerable portion of the sector is tethered to Nvidia's cutting-edge technology in software and hardware, underscoring the firm's monumental influence.

This dominance has significantly contributed to Nvidia's financial performance, with the company ranking among the 10 top-performing stocks of 2023. Astonishingly, Nvidia's stock has surged by over 200% during the year, hitting an all-time high in September 2023.

However, geopolitical challenges loom large. A recent report from Reuters highlighted the U.S. administration's efforts to tighten restrictions on AI chip exports to China. The restrictions have, in the past, hindered Nvidia from delivering its top-tier AI chips to Chinese consumers, chips that are the gold standard for various AI applications.

In this case, the company's powerful H800 chips may be in the bullseye of the US government, even though Nvidia specifically designed them to comply with current export restrictions. They are less powerful and sophisticated than the current top-of-the-line H100 lineup. However, regulators seem determined to close any possible loophole to avoid giving China any advantage in the AI race.

Undeterred by global challenges, China continues to showcase its technological resilience. The release of Huawei's Mate 60 series, equipped with the Kirin 9000S chip, exemplifies its determination. This phone features a 14nm chip designed to perform on par with its 7nm counterparts, and it boasts 5G capabilities. While the U.S. took measures to restrict China's access to certain technologies related to 5G and hardware development due to national security concerns, companies from that country managed to innovate and move forward.

Like a high-wire artist, Nvidia is walking a fine line between rising AI hype and geopolitical gravity. For now, Nvidia wobbles forward, with one foot planted in the promises of AI and the other mired in the perils of nationalism, while the whole AI industry is watching to see what happens.

See the article here:
Nvidias Text-to-3D AI Tool Debuts While Its Hardware Business Hits Regulatory Headwinds - Decrypt