Archive for the ‘Alphazero’ Category

The Race for AGI: Approaches of Big Tech Giants – Fagen wasanni

Big tech companies like OpenAI, Google DeepMind, Meta (formerly Facebook), and Tesla are all on a quest to achieve Artificial General Intelligence (AGI). While their visions for AGI differ in some aspects, they are all determined to build a safer, more beneficial form of AI.

OpenAI's mission statement encapsulates its goal of ensuring that AGI benefits all of humanity. Sam Altman, CEO of OpenAI, believes that AGI may not need a physical body and that it should contribute to the advancement of scientific knowledge. He sees AI as a tool that amplifies human capabilities and participates in a human feedback loop.

OpenAI's key focus has been on transformer models, such as the GPT series. These models, trained on large datasets, have been instrumental in OpenAI's pursuit of AGI. Their transformer models extend beyond text generation and include text-to-image and voice-to-text models. OpenAI is continually expanding the capabilities of the GPT paradigm, although the exact path to AGI remains uncertain.

Google DeepMind, on the other hand, places its bets on reinforcement learning. Demis Hassabis, CEO of DeepMind, believes that AGI is just a few years away and that maximizing total reward through reinforcement learning can lead to true intelligence. DeepMind has developed models like AlphaFold and AlphaZero, which have showcased the potential of this approach.

Meta's Yann LeCun questions the effectiveness of supervised and reinforcement learning for achieving AGI, citing their limitations in reasoning with commonsense knowledge. He champions self-supervised learning, which does not rely on labeled data for training. Meta has dedicated significant research effort to self-supervised learning and has seen promising results in language understanding models.
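
To make the contrast with labeled training concrete, here is a minimal, illustrative sketch (a toy example, not Meta's code) of how a self-supervised objective manufactures its own training targets from raw, unlabeled text by masking a token and asking the model to predict it:

```python
import random

# Self-supervised learning needs no human labels: the training
# targets are carved out of the raw data itself. Here we turn an
# unlabeled sentence into a masked-prediction example, BERT-style.
sentence = "the cat sat on the mat".split()

def make_example(tokens):
    i = random.randrange(len(tokens))
    inputs = tokens.copy()
    inputs[i] = "[MASK]"          # hide one token...
    return inputs, tokens[i]      # ...and make it the prediction target

inputs, target = make_example(sentence)
print(inputs, "->", target)       # e.g. ['the', 'cat', '[MASK]', ...] -> 'sat'
```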

Elon Musk's Tesla aims to build AGI that can comprehend the universe. Musk believes that a physical form may be essential for AGI, as seen through his investments in robotics. Tesla's Optimus robot, powered by a self-driving computer, is a step towards that vision.

Both Google and OpenAI have incorporated multimodality functions into their models, allowing for the processing of textual descriptions associated with images. These companies are also exploring research avenues like causality, which could have a significant impact on achieving AGI.

While the leaders in big tech have different interpretations of AGI and superintelligence, their approaches reflect a shared ambition to develop AGI that benefits humanity. The race for AGI is still ongoing, and the path to its realization remains a combination of innovation, research, and exploration.

Read more:
The Race for AGI: Approaches of Big Tech Giants - Fagen wasanni

Book Review: Re-engineering the Chess Classics by GM Matthew … – Chess.com

Matthew Sadler is a very strong grandmaster (2694 at age 49) and one of the leading computer chess experts. In 2019, he wrote the award-winning Game Changer with Natasha Regan about AlphaZero, and in 2021 he published The Silicon Road to Chess Improvement on how to use chess engines to improve your own game. In addition, Matthew has kept the world apprised of the latest engine developments through his tweets and recaps of the Top Chess Engine Championship.

For this latest book, Re-engineering the Chess Classics, he teamed up with Steve Giddins to evaluate 40 classical games through the eyes of Stockfish, Leela Chess Zero, and Komodo Dragon. The games span the period from 1852 to 1998 and include games from all the World Champions of that era.

Over the last five years, chess has been revolutionized by AlphaZero's research, the subsequent implementation of its concepts in Leela Chess Zero, and finally the inclusion of neural network technology in Stockfish (NNUE). The development of chess engines has been so strong that any opening analysis from before 2020 has lost much of its value. Can the classics stand the test of time?

The themes that emerge from analyzing the forty classic games will not surprise you:

Consider the position after 1.e4 e5 2.Nf3 Nc6 3.Bb5 d6 4.d4 Bd7 5.Nc3 Nge7 6.d5. The engine assessment after 6.d5 is over +2.5 for White, a decisive advantage.
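
You can reproduce this kind of assessment yourself. Here is a minimal sketch using the python-chess library, assuming a Stockfish binary is installed and on your PATH (the depth limit is an arbitrary choice):

```python
import chess
import chess.engine

# Reach the position after 1.e4 e5 2.Nf3 Nc6 3.Bb5 d6 4.d4 Bd7 5.Nc3 Nge7 6.d5
board = chess.Board()
for san in ["e4", "e5", "Nf3", "Nc6", "Bb5", "d6",
            "d4", "Bd7", "Nc3", "Nge7", "d5"]:
    board.push_san(san)

engine = chess.engine.SimpleEngine.popen_uci("stockfish")  # assumes binary on PATH
info = engine.analyse(board, chess.engine.Limit(depth=25))
print(info["score"].white())  # centipawns from White's view; +250 means +2.5
engine.quit()
```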

The engines' preference for space has also made some openings, like the King's Indian, hardly playable at the engine level.

Let's assume White moves his h-pawn up the board. At the cost of three tempi (h4-h5-h6), White creates dark-square weaknesses on the kingside. The advanced h-pawn restricts the opponent's king (mating ideas on g7 but also on the back rank). Furthermore, the pawn adds an attacker that can support other attacking pieces and tie down defenders. Finally, in the endgame, the h7-pawn might become a target.

The advance of the rook's pawn has also impacted opening theory. For example, 1.d4 Nf6 2.c4 g6 3.Nc3 d5 4.Nf3 Bg7 5.h4 is now a popular Grünfeld Defence variation.

Mistakes come easily in bad positions, but not when you are an engine!

Humans tend to concentrate on one area of the board and devote all their efforts to breaking through on that side, whereas engines are masters at switching plans and creating threats over the whole board.

This was the traditional strength of chess engines and still is.

Interestingly, we play less well than engines because humans play with baggage. In bad positions, we stress out and cannot find the most stubborn defence. When we attack, we focus on breaking through and lack the agility to see the whole board and switch strategy when necessary. Engines play without memory or ego and look with objectivity at every position.

The development of the strongest engines has led to a reevaluation of the relative importance of material, activity, and space. If you want to see how the latest chess concepts impact 40 classics, this book is for you!

The book is currently on introductory offer at ForwardChess for $23.79 and can be pre-ordered at Amazon in hardcover for $34.95.

Continued here:
Book Review: Re-engineering the Chess Classics by GM Matthew ... - Chess.com

The Sparrow Effect: How DeepMind is Rewriting the AI Script – CityLife

The Sparrow Effect, a term coined to describe the incredible impact of DeepMind's artificial intelligence (AI) technology, is rewriting the AI script and transforming the way we think about machine learning. DeepMind, a London-based AI research lab acquired by Google in 2014, has been at the forefront of AI development, making groundbreaking strides in areas such as natural language processing, computer vision, and reinforcement learning. With its innovative approach to AI research and development, DeepMind is pushing the boundaries of what machines can do and revolutionizing the field of AI.

One of the most notable achievements of DeepMind is the development of AlphaGo, an AI program that stunned the world by defeating the world champion Go player, Lee Sedol, in 2016. Go, an ancient Chinese board game, is considered one of the most complex games in the world, with more possible board configurations than there are atoms in the universe. AlphaGo's victory was a watershed moment in AI history, as it demonstrated that machines could not only learn to play complex games but also outperform human experts.

The success of AlphaGo was built on a technique called deep reinforcement learning, which combines deep neural networks with reinforcement learning algorithms. This approach allows AI systems to learn from their own experiences, rather than relying on pre-programmed rules or human input. By playing millions of games against itself, AlphaGo was able to develop its own strategies and refine its gameplay, ultimately surpassing human-level performance.
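
As a toy illustration of learning from self-play (a deliberately simplified sketch, not AlphaGo's actual architecture, which pairs deep networks with tree search), consider tabular value learning on tic-tac-toe, where the only training signal is the outcome of games the agent plays against itself:

```python
import random
from collections import defaultdict

Q = defaultdict(float)     # (board, move) -> value estimate, shared by both sides
ALPHA, EPSILON = 0.5, 0.1  # learning rate and exploration rate

LINES = [(0,1,2),(3,4,5),(6,7,8),(0,3,6),(1,4,7),(2,5,8),(0,4,8),(2,4,6)]

def winner(board):
    for a, b, c in LINES:
        if board[a] != "." and board[a] == board[b] == board[c]:
            return board[a]
    return None

def legal_moves(board):
    return [i for i, cell in enumerate(board) if cell == "."]

def choose(board):
    moves = legal_moves(board)
    if random.random() < EPSILON:                   # explore occasionally
        return random.choice(moves)
    return max(moves, key=lambda m: Q[(board, m)])  # otherwise exploit

for episode in range(50_000):
    board, player, history = "." * 9, "X", []
    while True:
        move = choose(board)
        history.append((board, move, player))
        board = board[:move] + player + board[move + 1:]
        win = winner(board)
        if win or not legal_moves(board):
            # The only feedback is the final result: +1 for the winner's
            # moves, -1 for the loser's, 0 for a draw (Monte Carlo update).
            for state, mv, who in history:
                reward = 0 if win is None else (1 if who == win else -1)
                Q[(state, mv)] += ALPHA * (reward - Q[(state, mv)])
            break
        player = "O" if player == "X" else "X"
```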

Following the success of AlphaGo, DeepMind turned its attention to other complex games, such as chess and shogi. In 2017, the company unveiled AlphaZero, an AI system that taught itself to play chess, shogi, and Go from scratch, given nothing but the rules of each game and no human strategies or example games. In a matter of hours of self-play, AlphaZero was able to defeat world-class AI opponents in all three games, showcasing the power of deep reinforcement learning and the potential for AI to master a wide range of tasks.

DeepMind's achievements in game-playing AI have far-reaching implications for the broader field of AI research. By demonstrating that machines can learn complex tasks without human intervention, DeepMind has opened the door to a new era of AI development, in which AI systems can learn and adapt to new challenges autonomously. This has the potential to revolutionize industries such as healthcare, finance, and transportation, where AI could be used to optimize processes, make more accurate predictions, and even save lives.

For example, DeepMind has already made significant progress in applying its AI technology to healthcare. In 2018, the company developed an AI system capable of diagnosing eye diseases with the same accuracy as human experts, potentially helping to prevent blindness in millions of people worldwide. Additionally, DeepMind has been working on AI models that can predict the progression of diseases such as Alzheimer's and Parkinson's, which could lead to earlier diagnoses and more effective treatments.

Despite the tremendous potential of DeepMind's AI technology, there are also concerns about the ethical implications of AI development. As AI systems become more powerful and autonomous, questions arise about the potential for job displacement, privacy violations, and even the possibility of AI systems making life-or-death decisions. To address these concerns, DeepMind has established an ethics and society research unit, which aims to ensure that AI is developed responsibly and in the best interests of humanity.

In conclusion, the Sparrow Effect, as exemplified by DeepMind's groundbreaking achievements in AI, is rewriting the AI script and opening up new possibilities for machine learning. By pushing the boundaries of what machines can do, DeepMind is not only revolutionizing the field of AI but also paving the way for a future in which AI systems can help us solve some of the world's most pressing challenges. However, as we continue to explore the potential of AI, it is crucial that we also consider the ethical implications of this powerful technology and work to ensure that it is developed responsibly and for the benefit of all.

Excerpt from:
The Sparrow Effect: How DeepMind is Rewriting the AI Script - CityLife

Vitalik Buterin Exclusive Interview: Longevity, AI and More – Lifespan.io News

Vitalik Buterin holding Zuzu, the puppy rescued by people of Zuzalu. Photo: Michelle Lai

Don't try finding Zuzalu on a map; it doesn't exist anymore. It was a pop-up city conceived by the tech entrepreneur Vitalik Buterin, creator of Ethereum, and a group of like-minded people to facilitate co-living and collaboration in fields like crypto, network states, AI, and longevity. It was also, in substantial part, funded by Vitalik.

Zuzalu, located on the Adriatic coast of Montenegro, began its short history on March 25 and wound down on May 25. It was a complex and memorable phenomenon, and I'm still wrapping my mind around it; a larger article is in the works.

Usually, I don't eat breakfast due to my intermittent fasting regimen, but in Zuzalu, breakfast, served at a particular local restaurant, was the healthiest meal of the day. Also, it was free (kudos to Vitalik, and more on that later). Most importantly, it was the place to meet new people.

This was also where, on one of my last days in Zuzalu, I sat down with Vitalik himself for a talk. Not the best setting for an interview, considering the steady hum of voices and utensils clanging in the background, but it was the only gap in Vitalik's busy schedule.

Vitalik is 29, slender and mild-mannered, with a soft, pensive smile. When he talks, his train of thought moves fast, fueled by intelligence and curiosity. He seems to be genuinely interested in how the world works and just as genuinely disinterested in his own status, something that was characteristic of Zuzalu as a whole.

Like any Zuzalu breakfast chat, ours was a bit all over the place, and we eventually ended up discussing the possibility of an AI-driven apocalypse (everyone's favorite topic there). Apologies to the longevity purists reading this. However, we started with Zuzalu itself.

Zuzalu intentionally does not mean anything in any language.

The idea came about six months ago. I was already thinking about many different topics at the same time. I reviewed Balaji's book last year, so I was thinking about network states, but also about crypto, real-world applications of Ethereum, zero-knowledge proofs, and so on.

I am also a fan of the longevity space. I read Aubrey's book when I was a teenager, and I know how important this is. The idea came together, as an experiment, to try doing things in all those areas at the same time.

I thought we'd take 200 people, some from the Ethereum space, some from longevity, some philosophers, people just interested in building societies, and so on, bring them together for two months, and see what happens. The rationale behind the size is that it's a large enough leap from the things people do already.

We have big conferences, but they only last a week, and we have hacker houses, but those only have ten people. So, let's do something with two hundred people that would last for two months. It's a big enough jump to create something new, but it's still manageable. It's not something crazy like going from 0 to 5,000.

I knew a couple of locals here in Montenegro, having been introduced to the country last year. The government has been very open to becoming more crypto-friendly. On my first visit, they gave me citizenship, something that no other country has done. They did a lot, and I just happened to know people here who are very good at logistics and organization. From there, people started joining in. The team and the organization started growing very quickly.

I think it worked. Many people reported how much they enjoyed the experience, how happy they were, how this gave them a feeling of community and family. Maybe things are different now, but when I did a poll a month ago, a third of the people here were digital nomads. One of the problems digital nomads always face is loneliness. You don't have company, you're going to unfamiliar places, it can be hard. Some of those people enjoy the digital nomad experience, they like to travel like that, but others are doing it out of necessity.

Yes, and also from places like China. So, that part was a success. On many other things, there were some successes and some things we can learn from. The big idea was that 200 people is already an economy of scale. It enables you to do things collectively that take too much effort to do as a person.

For instance, if you want food that's different from what most other people eat, usually you have to go get it yourself. You go to a restaurant, and even if you order a salad or fish, you don't know what oil they use, and so on. Here, because we represent so many people, we talked to this restaurant, and we told them what menu to use for breakfast. It's not perfect, but we tried to follow Bryan Johnson's Blueprint menu as much as we could, although many ingredients were very hard to get. But it's still much better than the average breakfast [at this point, I'm nodding with my mouth full].

For some things beyond that, at least for the first half of Zuzalu, there weren't enough champions to push many of the ideas, but that has improved a lot recently. People are forming clubs for exercise, such as the cold plunge club, hiking, and others.

Exactly. If you're one person, you will not be able to have a gym, but as a group, you can make that happen. The biomarker testing that we organized also comes to mind. People enjoy doing things together.

I feel like it's trying to be. I think the challenge that all these co-living projects have is that if you make co-living the primary meme, you're going to mostly attract people who want to be very close with other people, who enjoy collective cooking and stuff like this. But for many other people, it's not a good fit.

Here, it's much more moderate in a lot of ways. People have their own apartments. If you want to retreat to your apartment and not talk to other people, you can. You are not obligated to show up for any of the events. You don't have to eat at restaurants three times a day, don't have to talk to people all the time. Our model gives people more choice without pushing them into a lifestyle that's not compatible with them.

Then, there's this interesting thing I have noticed: I have one friend here who is an extreme introvert. Normally, he goes off by himself and doesn't really talk to people, but here he just did; he started talking to people more because these were people he wanted to talk to.

On the education side, one of the big weaknesses was that we tried to organize distinct weeks, each with its own theme. There was a synthetic biology week, then public goods, then zero-knowledge proofs, then free cities and network states, and now longevity. Some aspects of that were interesting for people, but there's a reason why college courses run in parallel and not in series. People learn better when learning is spaced over a long period of time. We didn't do that, and that probably was a mistake.

I would say yes. I think there were two big cross-pollination events here. One is the intersection between longevity and crypto, such as the decentralized science space.

Exactly, it has been happening. It has brought many different people from those groups together. I know that a lot of connections were made between science people and public goods people. I think that a lot of people realized that funding science is a natural fit for some of the work that public goods people have been doing.

The second cross-pollination event happened between the longevity people and people building new cities. There are people from Prospera here, from VitaDAO, and now, they are working much more closely together than ever before.

This is probably a fair question. It is true that longevity as a field has been around for many years, and we still don't have the magic pill for immortality or anything close to that. There are very fundamental reasons why that's true for longevity, while AI is seeing much more progress. I think we just know a lot less about the body, as it's an incredibly complicated machine.

The way I see this question is that if you look at the difference between the first computer and what we have now, the difference is huge. By the standards of the 1950s, today's computers feel like magic. There's a common phrase that people always overestimate the short term and underestimate the long term, and I personally expect the longevity field to have a similar kind of progress. There are a few decades that might look useless from the outside, but they're laying the foundations, and then the gains become faster than most people expect.

It's not just my intersection. I feel like a lot of people got into those things at the same time. There's definitely a pretty significant cluster of the crypto space that's also interested in longevity, especially older Ethereum people.

You could say that. One of the big criticisms of the longevity space is this idea that you're extending life, but is the life you're extending worth living? It's the misconception that we're basically trying to keep 80-year-olds barely alive. I'm trying to show that this is not the case, that the longevity space is specifically about repairing damage before it develops into a pathology.

But then people see someone like Bryan Johnson. He is a multimillionaire who literally puts his life into being as healthy as possible. He follows an extremely customized menu, takes a huge number of supplements, spends entire days doing exercises, and so on. People look at that and think, first, that it is only accessible to rich people and, second, that it is something you'd only do if you don't care about actually living your life. Neither of those things is necessarily true.

To me, a part of the motivation was to show people a different model. It's also a personal struggle for me. I can't dedicate my entire life to being healthy. I have Ethereum stuff, I need to travel everywhere, I'm a nomad, all my supplies are in a 40-liter backpack, so I have to compromise between a lot of things.

What we tried to show here is that if we do things in groups with economies of scale, it can really help the average person to maintain a reasonable lifestyle routine, including things like exercise and diet.

There are people here who are pretty intense about health stuff, as we said: cold plunges, sauna, gym. I know someone who runs for two and a half hours every day. Still, they don't look like they're willing to sacrifice their lives to extend their lifespans.

I totally agree, and that's an argument that not enough people are making. Bryan's example creates an impression that you have to go out of your way to stay healthy, but I think the extent to which it's true is exaggerated. If you look at Aubrey, he is pretty normie in his personal lifestyle, but the people who make news are usually on the extreme ends of things. I think it's good that they exist, and we've learned a lot from Bryan, but someone has to make a different case.

I would say, absolutely. We did a poll about one and a half weeks into the experiment, and one of the questions was, if there was another Zuzalu, would you show up? Zero people voted no.

I think it's going to be renewed anyway, with or without us. When we asked who was thinking of making their own Zuzalu, about 15 people raised their hands. It's going to happen, and the question is, what role are we taking in this experiment?

Scaling is a big challenge. There's a difference between doing this for two hundred people and doing something that includes thousands or tens of thousands of people. Once you have that number of people, it's not one village anymore; you will have interactions between villages, you will have conflicts.

There's also the question of what the long-term goal of this is. If you want to create a biotech-friendly network state, you can't jump locations every two months. The equipment is not going to move, and you can't convince a new country to install favorable regulations every two months. Convincing even one is hard.

On the other hand, if your goal is to, say, create a new type of university, then moving every two months would be great. Giving people new experiences would make learning even more enjoyable.

So, different groups have different needs, and figuring out what makes sense for people is a learning process. That's true for cities too. You have big cities and small cities, cities focused on particular industries, university towns, resource-gathering towns, trade towns. All of these look different. Any new category of institution based on in-person co-living will have to account for this diversity.

Overall, it feels like the basic format has been validated; it turned out to be something that a lot of people like and enjoy more than their usual lives. People are willing to spend a lot of time here rather than in big cities. In the future, with a better choice of location and better preparation, this could be much cheaper than big cities, more enjoyable, and more useful professionally for many people. So, many things were proven, but there was probably also a huge number of small mistakes.

I think there's some chance that the arguments that AI doomers make are correct, but that chance is far from 100%. I think it's good to worry about those things. I'm happy that people are taking the problem of AI alignment seriously. It's a small amount of work that could make a big difference, so it's obviously worth doing.

It's harder for me to be convinced that taking that step is a good idea, because it has its own risks. The very first question is: how do you even enforce it? We have all those different countries that are going to have their own ideas. If some countries try to enforce a slowdown when others do not want to go along, that could itself lead to serious conflicts.

Also, slowing down AI obviously slows down longevity research. Many people think longevity is fundamentally hard, and we will need strong AI to make this problem solvable.

It's easier for me to be convinced that we need a medium level of extra carefulness and a slowing down of some specific things than to be convinced of more drastic attempts to greatly slow AI progress or stop it outright.

I agree with that, and that's a big part of why I do take them seriously. They have powerful arguments, and many people who argue against the doomers have only very basic counterarguments that the doomers already thought of and responded to ten years ago. I'm definitely not going to just dismiss their arguments. If people do suggest pragmatic ways to either slow down AI research or put a lot of resources into solving this problem, I'll be very open to that.

I guess it's hard for me to accept either of the extreme positions: either that we're clearly going to be totally fine, or that there's a greater than 50% chance we'll all die, because there are just so many unknowns. For example, five years ago, when the best AI was AlphaZero, I don't think it was even within many people's space of possibilities that we were going to switch away from goal-directed reinforcement learning and toward this really weird paradigm of managing to solve thousands of problems by, like, predicting text on the internet. So, I expect similar things that are outside of our current imagination to happen another few times before we get to the singularity.

If I had to predict a concrete place where the AI doomer story is wrong, if it has to be wrong somewhere, I would say it's in the idea of a fast take-off: that AI capabilities will pile up so fast that we won't be able to adapt to problems as they come. We may well have a surprisingly long period of approximately human-level AI. But then again, these are only speculations, and you should not take me for a specialist.

I think yes, but it was also kind of chaotic. Many people have not been exposed to deep AI issues at all, and then Nate [Soares, head of MIRI] comes in with these very deep, radical arguments on why AI is going to destroy the world. There's a big disconnect between what one side believes and what the other side believes, something you can't resolve in a three-day conference.

I think Nate would say that this is the entire problem they're trying to solve.

As I understand his argument, it's basically that even if we make a definition of values that works really well from our point of view, and even if we train on ten million examples that make sense to us, the AI will be much more computationally powerful than we are, and it will find some really weird way to satisfy its model of those values that totally goes against the original intention. Just how tractable or intractable that problem is, is one of the things that are very hard for me to judge, because it's so abstract.

Yes, I think there's a big chance that the alignment will turn out to be much simpler than we expected, and the time period during which a combination of human and AI will continue to be smarter than AI alone will be much longer than we expected.

I also think there's a big chance that there are no easy strategies for destroying the entire world. The few counterexamples, like biolabs, can be dealt with individually instead of dealing with them on the AI side. There's also some chance that human intelligence is much closer to the ceiling of what AI can possibly achieve.

Still, I think there are many different totally unknown things that could happen, and our prediction power is limited. People generally did not predict that we would go that fast from a more goal-directed AI like AlphaZero to a less goal-directed AI like ChatGPT. It shows you how easy it is to have all kinds of surprises.

I also don't want all that I'm saying here to be misinterpreted as my definite statement when, in reality, my thoughts on this are going in all kinds of different directions, and I could easily disagree with myself a year from now.

I'd say probably. I don't know what such a merger would look like, though.

[Long pause] I'm curious about it.

More:
Vitalik Buterin Exclusive Interview: Longevity, AI and More - Lifespan.io News

How to play chess against ChatGPT (and why you probably shouldn’t) – Android Authority

Modern AI chatbots like ChatGPT and Bing Chat can mimic human conversation to a surprising extent. And if that wasn't enough, the latest language models also boast impressive logical reasoning abilities. GPT-4, for example, scores around the 90th percentile of human test-takers on exams ranging from biology to world history. So with these impressive credentials, you may wonder: can ChatGPT play chess? And if so, what's the best way to challenge one of the world's most capable chatbots to a game?

Here are two different ways to play chess with ChatGPT. Later, we'll also recommend some alternative chess AIs you can try instead.

To play a game of chess versus ChatGPT, you have two options. You can type in your own prompts and match the moves manually on a website like chess.com. Alternatively, third-party services like ChessGPT communicate with the chatbot behind the scenes and translate the moves to a chessboard. Keep reading to learn more.

Are you looking for the most straightforward way to play chess with ChatGPT? Just ask the chatbot to play along. But what about the chessboard? Unfortunately, ChatGPT can't draw or render images just yet, so you'll have to make do with just text. Seasoned chess players will already be familiar with algebraic chess notation, which uses a system of coordinates to identify the location of squares on a board. And that's exactly what we'll use to play chess with ChatGPT.

You can learn more about how the algebraic notation works on Chess.com. We'll also use the same website to play the game versus ChatGPT: prompt the chatbot for its move in algebraic notation, make the move on the board, and type your reply back into the chat.
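
If you'd like help keeping the chat and the board in sync, a short script using the python-chess library (an assumption on my part; the article itself just uses the website) can track the game locally and flag any illegal move ChatGPT suggests:

```python
import chess

board = chess.Board()
while not board.is_game_over():
    side = "White" if board.turn == chess.WHITE else "Black"
    san = input(f"{side} move in algebraic notation (e.g. Nf3): ")
    try:
        board.push_san(san)   # raises ValueError on illegal or malformed moves
    except ValueError:
        print("Illegal move here, ask ChatGPT to try again.")
print("Result:", board.result())
```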

OpenAI, the creator of ChatGPT, also allows third-party developers to communicate with the chatbot through code. This has led to the rise of many ChatGPT-powered websites, including a handful centered around board games like chess. In fact, the popular chess.com website had a GPT-powered AI opponent until very recently.
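
Under the hood, such services presumably make calls along these lines. This is a hedged sketch using OpenAI's official Python client; the model name, prompt wording, and move-history format are my assumptions, and you need your own API key:

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

history = "1. e4 e5 2. Nf3"  # moves played so far, in standard notation
response = client.chat.completions.create(
    model="gpt-4",  # assumed model choice
    messages=[
        {"role": "system",
         "content": "You are playing Black in a chess game. Reply with a "
                    "single legal move in algebraic notation, nothing else."},
        {"role": "user", "content": f"The game so far: {history}. Your move?"},
    ],
)
print(response.choices[0].message.content)  # e.g. "Nc6"
```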

Why use a third-party website rather than ChatGPT directly? Primarily because you don't need to switch back and forth between a chessboard and the chatbot. ChessGPT.ai, for example, pitches itself as "ChatGPT hooked up with a chessboard," achieved with a bit of creative prompt engineering.

Another service, creatively named Chess vs. GPT, even lets you check match replays and read the chat log as the game progresses.

With the instructions on how to play chess with ChatGPT out of the way, just how capable is the chatbot? I played a few games using the above methods and found that the chatbot just didn't perform as well as I expected.

On more than one occasion, ChatGPT forgot the board's state and suggested illegal or nonsensical moves. It would also conjure up pieces that no longer existed on the board. In that respect, the second method worked a lot better, as it would automatically tell ChatGPT to try again.
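
That automatic retry is straightforward to sketch: keep a real board object as the source of truth and re-prompt whenever the model's reply fails a legality check. Again a hedged sketch combining python-chess with OpenAI's client (model name and prompt wording are assumptions):

```python
import chess
from openai import OpenAI

client = OpenAI()
board = chess.Board()

def ask_for_move(feedback=""):
    """Ask the model for a move; `feedback` reports a prior illegal attempt."""
    response = client.chat.completions.create(
        model="gpt-4",  # assumed model choice
        messages=[{"role": "user", "content":
                   f"Chess position (FEN): {board.fen()}. {feedback}"
                   "Reply with one legal move in algebraic notation only."}],
    )
    return response.choices[0].message.content.strip()

feedback = ""
for attempt in range(5):          # give up after a few illegal suggestions
    san = ask_for_move(feedback)
    try:
        board.push_san(san)       # validates legality against the real board
        break
    except ValueError:
        feedback = f"Your previous reply '{san}' was illegal. "
```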

ChatGPT doesn't play chess very well, so consider trying a different AI.

If you were hoping to beat an AI at chess, chances are that you will win against ChatGPT quite easily once it fumbles. That's hardly surprising, though, if you remember that the GPT in ChatGPT denotes a large language model, not a general-purpose artificial intelligence program. The world already has several chess-optimized AIs that can outperform just about any human. Some examples include AlphaZero and the open-source Stockfish engine. You can play against the latter via a free app, completely offline.

So should you play chess versus ChatGPT? I'd recommend it for entertainment, but not much else. If you do manage to play a full game without errors, consider yourself lucky!

View original post here:
How to play chess against ChatGPT (and why you probably shouldn't) - Android Authority