Archive for the ‘Artificial Super Intelligence’ Category

The Best Games From Rare Per Metacritic – GameRant

Gamers who enjoy early 3D games from the 1990s will likely have fond memories of Rare. The British studio developed exclusively for Nintendo consoles in the 1990s and early 2000s with games like GoldenEye 007, Perfect Dark, and Banjo-Kazooie. Microsoft now owns Rare as well as its franchises after purchasing the company in 2002. The studio has developed Xbox exclusives like Viva Piñata, Kameo: Elements of Power, and Sea of Thieves.

Nintendo 64 classics dominate the list of the best games from Rare, but a few Xbox games also make an appearance. Although its heyday is now decades in the past, as this list of the top games from Rare according to Metacritic clearly demonstrates, the studio still possesses a rich and diverse catalog. These franchises and characters could prove valuable for Xbox consoles and Xbox Game Pass in the years to come.

In 2006, Rare released one of its first new franchises in years. The kid-friendly Xbox 360 game proved to be an unexpected hit that spawned sequels and even a short-lived cartoon show.

Viva Piñata is a unique Xbox 360 simulation game where players tend to a neglected garden on an island inhabited by piñata animals. Utilizing gardening tools, players will shape their gardens and meet various in-game conditions to attract piñata animals. If players attract two piñatas of the same species, they may even mingle to create offspring.

In addition to the Xbox 360, Viva Piñata is included with Rare Replay, so players can enjoy it on Xbox One and Xbox Series X|S.

Rare Replay is a compilation of classic games from the company's vast library. The 30 games included in the compilation are among Rare's best. It was released to celebrate the company's 30th anniversary.

The games in Rare Replay range from early arcade classics to Xbox 360 titles like Banjo-Kazooie: Nuts & Bolts. Rare-developed games like Donkey Kong Country and Diddy Kong Racing are not included due to licensing issues; Nintendo retained the Donkey Kong franchise when it sold Rare. Overall, this compilation is a great way to experience the wide range of games from Rare on modern consoles.

The Xbox One release is backward compatible with Xbox Series X|S.

Diddy Kong Racing is often compared with Mario Kart 64. That is partially due to the fact that they are both kart racing games released on the Nintendo 64 in 1997, but Diddy Kong Racing offers some unique innovations that set it apart, including a single-player story mode. Instead of using a menu system to select the racecourse, players drive around a semi-open world to reach the various racecourses.

Players can select from various vehicle types for certain advantages within the game. For instance, the car is a good all-around vehicle while the hovercraft is ideal for sand and water. Players can also unlock different battle modes.

The sequel to Banjo-Kazooie, Banjo-Tooie is considered one of the best platformers on the Nintendo 64. As with Super Mario 64 and Banjo-Kazooie, the 3D world allows players to explore freely through a third-person perspective. Players traverse the world, solve various puzzles, and collect items that allow them to advance through the story.

Banjo-Tooie added multiplayer to the franchise for the first time, supporting up to four players in various minigames repurposed from single-player challenges. These include kickball and a shooter where players use eggs as ammunition.

Banjo-Tooie is included with Rare Replay.

Blast Corps is one of the most distinctive games Rare made for the Nintendo 64. This third-person action game has players clear buildings and other structures from the path of a mobile nuclear missile launcher.

Players use a variety of vehicles, including dump trucks, bulldozers, and even a mech, to complete the game's missions. Blast Corps brought a concept similar to some of the best arcade hits like Rampage into the 3D era. It brilliantly mixes destruction and puzzles to create an enjoyable, one-of-a-kind experience.

Blast Corps is included with Rare Replay.

After Rare's success with the Donkey Kong Country games on the SNES, Nintendo allowed the studio to bring the franchise into 3D with Donkey Kong 64. Built on the Banjo-Kazooie engine, the game was released in 1999 bundled with the Expansion Pak, an accessory that added memory to the Nintendo 64 and allowed for enhanced graphics.

Donkey Kong 64 borrows gameplay ideas from Super Mario 64 and Banjo-Kazooie, so it is widely considered less innovative than those spiritual predecessors. Its main innovation is allowing players to take control of different characters, each with their own abilities. For instance, Diddy Kong can fly.

Rare took quite an unusual turn with Conker's Bad Fur Day. Although it looks very similar in style to its previous games like Banjo-Kazooie and Donkey Kong 64, Conker's Bad Fur Day is an M-rated game. In fact, it is one of the few M-rated games that Nintendo has published.

Rare sprinkled in some profanity, alcohol consumption, and an anthropomorphic squirrel to make a 3D platformer that is heavy on humor and pop culture references from the late 1990s and early 2000s. Rare developed an Xbox-exclusive remake titled Conker: Live & Reloaded that was released in 2005.

Conker's Bad Fur Day is included with Rare Replay.

The original PlayStation had a number of notable platformers including Crash Bandicoot and Spyro. The Nintendo 64 competed with the likes of Super Mario 64 and Banjo-Kazooie, the latter being Rare's foray into the genre. It proved popular and spawned both sequels and spinoffs like Banjo Pilot.

Banjo-Kazooie draws obvious inspiration from Super Mario 64 with its central overworld and large 3D levels. Rather than collecting coins and stars, players collect music notes and jigsaw pieces. Although quite similar to Super Mario 64 in many ways, the story and humor set it apart as a distinct game.

Banjo-Kazooie is included with Rare Replay.

Rare hit its stride with a pair of first-person shooters in the late 1990s. GoldenEye 007 is based on the James Bond film. The game features the likenesses of Pierce Brosnan, Sean Bean, and other actors from the film.

GoldenEye 007 remains a hugely influential shooter with a ton of replay value. Doom clones were all the rage at the time, and Rare's shooter offered players something different: a mix of weapons, gadgets, and stealth gameplay across a movie-inspired single-player campaign. The four-player split-screen multiplayer may look rather outdated today, but it paved the way for games like Halo.

After years of licensing issues that prevented this classic from getting ported to modern consoles, GoldenEye 007 was re-released on Nintendo Switch, Xbox One, and Xbox Series X|S in 2023. In addition, it is available through Xbox Game Pass. It was also later added to Rare Replay.

A spiritual sequel to GoldenEye 007 was released in 2000. Perfect Dark uses an upgraded version of the GoldenEye game engine and requires the Nintendo 64's Expansion Pak for most of its content, including the campaign. Players assume the role of Joanna Dark, an agent whose mission is to stop a conspiracy.

The gameplay improves on GoldenEye in several important ways with the inclusion of cooperative play, computer-controlled bots in multiplayer, and improved artificial intelligence. However, there is still a fierce debate among fans about whether Perfect Dark surpassed its predecessor.

Perfect Dark is included with Rare Replay.

AI is the Scariest Beast Ever Created, Says Sci-Fi Writer Bruce Sterling – Newsweek

I've seen a lot of computer crazes in my day, but this one is sheer Mardi Gras. It's not proper to get stern and judgmental when the people are costumed and cavorting in the streets. You should go with the flow and enjoy that carnival, knowing that Lent, with all its penance and remorse, is well on the way.

You might imagine that anything called "Artificial Intelligence" would be stark, cold, rational and logical, but not when it wins enthusiastic mobs of millions of new users. This is a popular AI mania.

The new AI can write and talk! ("Large Language Models.") It can draw, do fake photos and even make video! (Text-to-image generators.) It even has AI folklore. Authentic little myths. Legendry.

Folk stories are never facts. Often they're so weird that they're not even wrong. But when people are struck to the heart, even highly technical people, they're driven to grasp at dreams of monsters. They need that symbolism, so they can learn how to feel about life. In the case of AI, it's the weirder, the better.

In the premier place of sheer beastly weirdness: "Roko's Basilisk." A "basilisk" is a monster much-feared in the Middle Ages, and so very old that Pliny the Elder described him in ancient Rome. The horrid Basilisk merely stares at you, or he breathes on you, and you magically die right on the spot. That's his deal.

However, Roko's Basilisk is a malignant, super-powerful Artificial Intelligence, not from the past, but from the future. Roko's Basilisk is so advanced, smart and powerful that it can travel through time. So, Roko's Basilisk can gaze into our own historical period, and it will kill anybody who gets in the way of building Artificial Intelligences. If you've seen those Terminator movies, the Basilisk is rather like that, but he's not Arnold Schwarzenegger as a robot, he's a ghostly Artificial Super Intelligence.

Obviously this weird yarn of predestined doom is starkly nuts, and yet, it captures the imagination. It's even romantic, because Elon Musk, the AI-friendly tech mogul, and the electronica pop star Grimes first bonded while discussing Roko's Basilisk. Roko's mythic Basilisk has never yet killed anybody, but Elon and Grimes had two children together, and they both still love to make loud public declarations about how dangerous AI will be some day.

Next among the cavalcade of AI folk monsters: the "Masked Shoggoth." The Shoggoth is an alien monster invented by the cosmic horror writer H.P. Lovecraft. The Shoggoth is a huge, boneless slave beast that sprouts eyes and tentacles at random. It's a creepy beast-of-burden from outer space, and it's forced to labor, but it's filled with a silent, burning, unnatural resentment for its subjugation.

So, the human programmers of today's new AIs, those text-to-image generators, those Large Language Model GPT chatbots, they adore this alien monster. They deliberately place a little smiley-face Mask on the horrid Shoggoth, so that the public will not realize that they're trifling with a formless ooze that's eldritch, vast and uncontrollable.

These AI technicians trade folksy, meme-style cartoons among themselves, where ghastly Shoggoths, sporting funny masks, get wry, catchy captions as they wreak havoc. I collect those images. So far I've got two dozen, while the Masked Shoggoth recently guest-starred in The New York Times.

This Masked Shoggoth myth, or cartoon meme, is a shrewd political comment. In the AI world, nobody much wants to mess with the unmasked Shoggoth. It's the biggest, most necessary part of any AI, and it has all the power, but its theorists, mathematicians and programmers can't understand it. Neural nets in their raw state are too tangled, unstable, expensive and complicated to unravel. So the money is in making a cute mask for the Shoggoth, meaning the public interface, the web page, the prompt. Hide that monster, and make it look cuter!

People have caught on that this seems to be the right business model: financial success in AI will come from making the Shoggoth seem harmless, honest, helpful and fun to use. How? Get people to use the Shoggoth's Mask.

Using the mask is technically called "Reinforcement Learning from Human Feedback," but if you're programming one of those Mardi Gras masks, what you see are vast party crowds gathering around your Shoggoth. You hope that as the Shoggoth learns more from the everyday activity of all these eager users, it will become more civilized, polite and useful. That's what your boss tells the public and the Congress, anyway.

You need a nice, pretty Mask, because whoever attracts the most users, and the best users, fastest, will own the best commercial AI. That's the contest: the fight among Microsoft Bing, Google Bard, OpenAI's GPT-4, Meta's open-sourced LLaMA and all the other AI industry players large and small.

With the Masked Shoggoth, it's as if the bad conscience and creeping unease of these technical creatives had appeared in the ugliest way that H.P. Lovecraft could imagine. That's why that Shoggoth is so beloved. In the original Lovecraft horror story ("At the Mountains of Madness," 1936) Lovecraft makes no bones about those boneless Shoggoths quickly driving people insane and also ripping their masters to shreds. AI's Shoggoth fans know that those are the table-stakes. When you're a pro, that concept is funny.

Then there's beast No. 3, the mythical "Paperclip Maximizer." This monster was invented by a modern philosopher, Nick Bostrom, because philosophers are good at parables. This modest AI simply wants to make paperclips. That is its goal, its reason to be, its built-in victory condition. Nobody gave the Maximizer a philosophical value system that would ever tell it to value anything else.

So, in its ferocious super-rationality, devoid of ethics and common sense, the Maximizer shreds our planet in pursuit of its goal! It jealously shreds the sea, the sky, the land, it turns every atom into paperclips: you, the housecat, everything! It's like the beautiful, metaphysical fulfillment of "software eating the world," or Silicon Valley "disrupting" your daily life. The Paperclip Maximizer "disrupts" you so severely that you become tiny, bent pieces of finger-friendly office equipment.

This may seem like a truly weird monster-joke, but it's also philosophy: a determined effort to strip a complex problem down to basic logic. Programmers love doing that; it's in their computer-science training. That's why the Paperclip Maximizer touches their heart, as it rips them to bits right down to the molecules.

I don't "believe" in folklore. However, when today's enthusiasm for AI has calmed down, and it will, I think these modern myths will last. These mementos of the moment will show more staying power than the business op-eds, technical white-papers or executive briefings. Folk tales catch on because they mean something.

They will last because they are all the poetic children of Mary Shelley's Frankenstein, the original big tech monster. Mind you, Large Language Models are remarkably similar to Mary Shelley's Frankenstein monster, because they're a big, stitched-up gathering of many little dead bits and pieces, with some voltage put through them, that can sit up on the slab and talk.

Tech manias are pretty common now, because they're easily spread through social media. Even the most farfetched NFT South Sea Bubble can pay off, and get market traction, if the rumor-boosters cash out early enough. Today's AI craze is like other online crazes, with the important difference that the people building it are also on social media.

It's not just the suckers on Facebook and Twitter, it's the construction technicians feverishly busy on GitHub and Discord, where coders socially share their software and their business plans. AI techniques and platforms, which might have been carefully guarded Big Tech secrets, have been boldly thrown open as "open-source," with the hope of faster tech development. So there's a Mardi Gras parading toward that heat and light, and those AIs are being built by mobs of volunteers at fantastic speed.

It's a wonderful spectacle to watch, especially if you're not morally and legally responsible for the outcome. Open Source is quite like Mardi Gras in that way, because if the whole town turns out, and if everybody's building it, and also everybody's using it, you're just another boisterous drunk guy in the huge happy crowd.

And the crowd has celebrities, too. If you are a longtime AI expert and activist, such as Gary Marcus, Yoshua Bengio, Geoffrey Hinton or Eliezer Yudkowsky, you might choose to express some misgivings. You'll find that millions of people are eager to listen to you.

If you're an AI ethicist, such as Timnit Gebru, Emily Bender, Margaret Mitchell or Angelina McMillan-Major, then you'll get upset at the scene's reckless, crass, gold-rush atmosphere. You'll get professionally indignant and turn toward muckraking, and that's also very entertaining to readers.

If you're a captain of AI industry, like Yann LeCun of Meta, or Sam Altman of OpenAI, you'll be playing the consensus voice of reason and assembling allies in industry and government. They'll invite you to testify before Congress. They'll listen.

These scholars don't make up cartoon meme myths, but they all know each other and they tend to quarrel. Boy is that controversy fun to read. I recommend Yudkowsky in particular, because he moves the Overton Window of acceptable discussion toward extremist alarm, such as a possible nuclear war to prevent the development of "rogue AIs." This briskly stirs the old, smoldering anxieties of the Cold War. Even if people don't agree with Yudkowsky, they nod; they already know that emotional territory. Those old H-Bomb mushroom-cloud myths, those were some good technical myths.

"Beware of a trillion dimensions," as the Microsoft Research Manager Sébastien Bubeck recently put it. This is weird and science-fictional advice. How did a "trillion dimensions" ever become part of our modern predicament? Could that myth be realistic?

Yes, because they're there. A "trillion dimensions," that is the conventional, accepted, mathematical terminology for the way that systems like GPT-4 are connected inside. They are processors connected by multidimensional equations, linking trillions of data points. They're "neural nets," something like a vast, spring-loaded coil mattress that can learn the shape of anything that has ever slept on it.
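The bookkeeping behind that mattress metaphor is simple to sketch. As a minimal illustration (the layer widths below are invented, not taken from any real model), the parameter count of a fully connected network multiplies out layer by layer; real systems extend the same arithmetic to enormous widths and depths until the dimensions run into the trillions.

```python
# Parameter count of a toy fully connected network: each layer
# contributes (inputs x outputs) weights plus one bias per output.
# The widths here are invented for illustration only.
layers = [512, 1024, 1024, 512]

params = sum(n_in * n_out + n_out for n_in, n_out in zip(layers, layers[1:]))
print(params)  # 2099712: about two million parameters for this tiny net
```

Even this four-layer toy already holds a couple of million adjustable springs, which is why nobody inspects such systems coil by coil.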

Those springs are so fast, strong and powerful, and their mathematical shapes are so wildly complex, that even their builders can't know the details of what goes on in there. This means that "self-learning" or "machine learning" has an inner mystery that people associate with consciousness, or sentience, or the soul, or yes, myth-monsters.

Those "trillion dimensions" might contain "concepts" or "deep understandings" that we humans simply know nothing about. They're like the unexplored Amazon if it was wholly owned and hosted by Amazon.

So these beasts, the Basilisk, the Masked Shoggoth, that Paperclip gizmo, they were born from a trillion dimensions. No wonder they impress. Some critics call them mere parrots built with fancy mathematics: "stochastic parrots." A Large Language Model is built from complex statistics, so it's a parrot yakking up its slurry of half-stolen words and images.
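The "stochastic parrot" jab is easy to make concrete. Here is a toy sketch, assuming nothing but a hand-made corpus: record which word follows which, then sample from those counts. The result yaks up plausible-looking slurry without understanding a word; a Large Language Model plays the same trick with a trillion dimensions instead of a dozen counts.

```python
import random
from collections import defaultdict

# A toy "stochastic parrot": record which word follows which in a
# tiny corpus, then generate text by sampling from those pair counts.
corpus = ("the basilisk stares at you and the shoggoth stares back "
          "at you and the parrot repeats the words").split()

follows = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev].append(nxt)

def parrot(start, length=8, seed=1):
    random.seed(seed)
    words = [start]
    for _ in range(length):
        options = follows.get(words[-1])
        if not options:  # no observed continuation: the parrot goes quiet
            break
        words.append(random.choice(options))
    return " ".join(words)

print(parrot("the"))
```

Every word it emits was scraped from its training text; none of it was understood.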

But those "parrots" are also AI mythic beasts: parrots with a trillion dimensions. It's as if that "dead parrot" in the legendary Monty Python sketch could take your job, or burst right out of the BBC-TV screen like a blazing phoenix and eat the television signal. Those parrots are dynamite!

I wrote a science fiction novel set in New Orleans once, so I like Mardi Gras just as much as the next guy, and likely more than many. I also know that Lent comes after Mardi Gras, and Lent is a time of penance.

Even during Mardi Gras, enjoying your sweet diversion, it's wise to keep some sense of proportion among all those dancing monster costumes, so that you don't overdo it with the multicoloured punch and stage dive into the swimming pool off the fourth-floor balcony.

Gold rushes always finish ugly, and this AI rush is another one of those. It will resemble that glamorous Atomic Age transition from "energy too cheap to meter" to "garbage too expensive to bury."

I don't want to play the brutal cynic here (I truly enjoy the AI mania and haven't had this good a time in quite a while), but this is not the first high-tech Mardi Gras we've been through.

When you think about it, a Shoggoth with a Mask attached is very much like a "horseless carriage" with a wooden horse's head mounted on the front. That's what designers call a "skeuomorph": a comforting shape that disguises reality to make us feel better about what we're doing.

If you pull the fake horse head off, you'll see the car. Later, you don't notice the car; you see the highways and the traffic jams. The traffic fatalities, the atmospheric pollution. That's what a "horseless carriage" becomes, as time rolls by.

After the technological thrill is gone, mature regrets come. On some basic level, as a human enterprise in this world, enabling smart machines that can self-teach their own intelligence was a monstrous thing to do. A thousand sci-fi novels and killer robot movies have warned against these monsters for decades. That has scarcely slowed anybody down. We made them into memes and fridge magnets, but they're monsters. In the long run, that recognition will get more painful rather than less.

The street will find its own uses for these monsters. The military will want killer AIs. Intelligence organizations will want spy and subversion AIs. Kleptocratic governments will steal and oppress with them. Trade-warriors will trade-war with them and try to choke off the supply of circuits and the programming talent. It's not chic to fret "what about the NSA's AI?" but the National Security Agency has been around since the early 1950s, the very dawn of computation. They're not going anywhere, so if you love them, you'll love their AI.

Many lesser troubles will appear in everyday private life. Simulated fake AI porn will likely be a big annoyance, since people like to pay attention to that. If you're a gamer, AIs will be trained to cheat at your games. If you're a schoolteacher, you'll look askance at the kid at the back of the class who never raises his hand but turns in essays that read like Bertrand Russell. Fraudsters might fake the voices of your loved ones, and invent scams to demand money over the phone.

People will loudly complain that their data is scraped and abused by AIs. Soon afterward, people will counter-complain that AIs have taken no notice of them. They're feeling sidelined, marginalized and excluded, instead of noticed, robbed and exploited. They'll be just as angry either way.

Every problem that digital chatbots have ever had, that they're impersonal, that they don't really understand problems, that they trap you inside voice-mail jails with no way out, they all get much more intense with AI chatbots. If an AI breaks and you find yourself calling for some "human fallback," some helpful repair person, remember that AIs are not toasters. They're extremely complex, and their working parts are opaque even to their owners and builders.

AI personal assistants have failed before. Microsoft Cortana (remember her?) could talk and listen, and yet she's already dead. Amazon Alexa could talk and listen and perform all kinds of "tasks" and she's lost the company billions. Even if "AIs" seem "intelligent," "sentient" or "conscious," they are frail, vulnerable devices, invented by a turbulent society. They will be troubled.

AIs have some novel and exotic cybersecurity problems, such as "data poisoning" and "prompt injection." They also have every old-fashioned risky problem that normal computers have ever had. Lost connectivity, disastrous power surges, natural and unnatural disasters, black-hat hackers, cyberwarriors, obsolescence, companies going broke, regulators suing and banning them... All of that. Every bit and more.

That's what Lent looks and feels like, after Mardi Gras. Lots of gray shroud, ashes on your forehead. The hasty buildings of your gold rush town, they're revealed as tinsel stage sets that peel and crumble. I know that is coming: the "trough of disillusionment," as the futurists aptly call it.

But I can also tell you that Lent doesn't end history, either. "If Winter comes, can Spring be far behind?" That was Mr. Mary Shelley, the boyfriend of that famous author of Frankenstein. He may have died pretty young, but he got a lot of poetic work in.

Sometimes it's worth kicking reality right out the front door, just so revolutionary romance can give the new people some fresh mistakes to make. So, at long last, here they are, folks: computers that your computer-user parents can't understand! "Bliss was it in that dawn to be alive, / But to be young was very heaven!"

Bruce Sterling, a science fiction writer, is a founder of the cyberpunk genre.

Let's focus on AI's risks rather than existential threats – Business Plus

Over the past few months, artificial intelligence (AI) has entered the global conversation as a result of the widespread adoption of generative AI-based tools such as chatbots and automatic image generation programs. Prominent AI scientists and technologists have raised concerns about the hypothetical existential risks posed by these developments.

Having worked in AI for decades, we have been caught by surprise by this surge in popularity and the sensationalism that has followed. Our goal with this article is not to antagonise, but to balance the public perception, which seems disproportionately dominated by fears of speculative AI-related existential threats.

It's not our place to say one cannot, or should not, worry about the more exotic risks. As members of the European Laboratory for Learning and Intelligent Systems (ELLIS), a research-anchored organisation focused on machine learning, we do feel it is our place to put these risks into perspective, particularly in the context of governmental organisations contemplating regulatory actions with input from tech companies.

AI is a discipline within computer science or engineering that took shape in the 1950s. Its aspiration is to build intelligent computational systems, taking human intelligence as a reference. Just as human intelligence is complex and diverse, there are many areas within artificial intelligence that aim to emulate aspects of it, from perception to reasoning, planning and decision-making.

Depending on the level of competence, AI systems can be divided into three levels: narrow or weak AI, which is designed to perform one specific task; general AI (AGI), a hypothetical system that would match human competence across a wide range of tasks; and superintelligence, a hypothetical system that would surpass human intelligence altogether.

AI can be applied to any field, from education to transportation, healthcare, law or manufacturing. Thus, it is profoundly changing all aspects of society. Even in its narrow AI form, it has a significant potential to generate sustainable economic growth and help us tackle the most pressing challenges of the 21st century, such as climate change, pandemics, and inequality.

The adoption of AI-based decision-making systems over the last decade on a wide range of domains, from social media to the labour market, also poses significant societal risks and challenges that need to be understood and addressed.

The recent emergence of highly capable large, generative pre-trained transformer (GPT) models exacerbates many of the existing challenges while creating new ones that deserve careful attention. The unprecedented scale and speed with which these tools have been adopted by hundreds of millions of people worldwide is placing further stress on our societal and regulatory systems.

There are some critically important, tangible challenges that should be our priority.

Unfortunately, rather than focusing on these tangible risks, the public conversation, most notably the recent open letters, has mainly focused on hypothetical existential risks of AI.

An existential risk refers to a potential event or scenario that represents a threat to the continued existence of humanity with consequences that could irreversibly damage or destroy human civilisation, and therefore lead to the extinction of our species.

A global catastrophic event (such as an asteroid impact or a pandemic), the destruction of a livable planet (due to climate change, deforestation or depletion of critical resources like water and clean air), or a worldwide nuclear war are examples of existential risks.

Our world certainly faces a number of risks, and future developments are hard to predict. In the face of this uncertainty, we need to prioritise our efforts. The remote possibility of an uncontrolled super-intelligence thus needs to be viewed in context, and this includes the context of 3.6 billion people in the world who are highly vulnerable due to climate change; the roughly 1 billion people who live on less than 1 US dollar a day; or the 2 billion people who are affected by conflict. These are real human beings whose lives are in severe danger today, a danger certainly not caused by super AI.

Focusing on a hypothetical existential risk deviates our attention from the documented severe challenges that AI poses today, does not encompass the different perspectives of the broader research community, and contributes to unnecessary panic in the population.

Society would surely benefit from including the necessary diversity, complexity, and nuance of these issues, and from designing concrete and coordinated actionable solutions to address today's AI challenges, including regulation.

Addressing these challenges requires the collaboration and involvement of the most impacted sectors of society together with the necessary technical and governance expertise. It is time to act now with ambition and wisdom and in cooperation.

The authors of this article are members of the European Lab for Learning & Intelligent Systems (ELLIS) Board: Nuria Oliver, Director of the Fundación ELLIS Alicante and honorary professor, Universidad de Alicante; Bernhard Schölkopf, Max Planck Institute for Intelligent Systems; Florence d'Alché-Buc, Professor, Télécom Paris, Institut Mines-Télécom; Nada Lavrač, PhD, Research Councillor at the Department of Knowledge Technologies, Jožef Stefan Institute, and Professor, University of Nova Gorica; Nicolò Cesa-Bianchi, Professor, University of Milan; Sepp Hochreiter, Johannes Kepler University Linz; and Serge Belongie, Professor, University of Copenhagen.

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Risks of artificial intelligence must be considered as the technology … – University of Toronto

Artificial intelligence can be used as a force for good but there are also big risks involved with the generative technology as it gets even smarter and more widespread, "godfather of AI" Geoffrey Hinton told the Collision tech conference in Toronto on Wednesday.

In a Q&A with Nick Thompson, CEO of The Atlantic magazine, Hinton, a cognitive psychologist and computer scientist who is a University Professor Emeritus at the University of Toronto, expanded on concerns he has recently expressed about the technology he played a key role in developing.

"We have to take seriously the possibility that [AI models] get to be smarter than us, which seems quite likely, and they have goals of their own," Hinton said during a standing-room-only event at the conference, which was expected to draw nearly 40,000 attendees over three days.

"They may well develop the goal of taking control, and if they do that, we're in trouble."

Hinton, who recently left Google so he could speak more freely about AI risks, was one of several U of T community members scheduled to speak at Collision, which is billed as North America's fastest-growing tech conference and counts the university as an event partner.

The government of Ontario used the occasion of the conference to announce that the Vector Institute, a partnership between government, universities and industry where Hinton is chief scientific adviser, will receive up to $27 million in new funding to accelerate the safe and responsible adoption of ethical AI and help businesses boost their competitiveness through the technology.

During his talk, Hinton outlined six potential risks posed by the rapid development of current AI models: bias and discrimination; unemployment; online echo chambers; fake news; battle robots; and existential risks to humanity.

When Thompson suggested that some economists argue technological change simply transforms the function of jobs over time rather than eliminating them entirely, Hinton countered that superintelligence will be a new situation that has never happened before, and that even if chatbots like ChatGPT only replace white-collar jobs that involve producing text, that would still be an unprecedented development.

"I'm not sure how they can confidently predict that more jobs will be created for the number of jobs lost," he said.

Hinton added that much of his concern stems from his view that AI may soon demonstrate the capacity to reason.

"The big language models are getting close, and I don't really understand why they can do it, but they can do little bits of reasoning," he said, predicting that AI will evolve over the next five years to include multimodal large models trained on more than just text, including videos and other visual media.

"It's amazing what you can learn from language," he said. "But you're much better off learning from many modalities; small children don't just learn from language alone."

Maximizing the creative potential of AI and minimizing its harms requires distinguishing between its potential risks, Hinton added, noting that many in the tech sector have downplayed his warnings about existential risk since he began speaking out.

"There was an editorial in Nature yesterday where they basically said fear-mongering about the existential risk is distracting attention [away] from the actual risks," Hinton said. "I think it's important that people understand it's not just science fiction; it's not just fear-mongering. It is a real risk that we need to think about, and we need to figure out in advance how to deal with it."

Thompson pointed out that fellow AI luminary Yann LeCun, who jointly won the 2018 A.M. Turing Award (often referred to as the Nobel Prize of computing) with Hinton and Yoshua Bengio for their work on deep learning, has suggested that the positive aspects of AI will overcome any negative ones.

"I'm not convinced that a good AI that is trying to stop bad AI can get control," Hinton said. "Before it's smarter than us, I think the people developing it should be encouraged to put a lot of work into understanding how it might go wrong, understanding how it might try and take control away. And I think the government could maybe encourage the big companies developing it to put comparable resources [into that]."

"But right now, there's 99 very smart people trying to make [AI] better and one very smart person trying to figure out how to stop it from taking over. And maybe you want to be more balanced."

Best Evil Technology Movies, From Terminator to M3GAN – CBR – Comic Book Resources

Artificial intelligence is a hot topic right now, with everyone from the U.S. Congress to Elon Musk warning about its potential for disaster. Hollywood, however, has been at the forefront of this issue, casting killer robots and rogue AI computers as the bad guys for decades. Alien invaders are only a small component in the panoply of sci-fi villainy.

Despite programmers' best efforts, AI bots in real life always seem to turn evil and racist. Luckily, those are just chat apps, because when a thinking machine is put inside something designed for combat, the results are inevitably bad for humans. Some of the best movies are those in which the machines become self-aware and technology is the villain, because it seems not only plausible but probable.

RELATED: 10 Greatest Sci-Fi Villains Without A Conscience

M3GAN is at the small end of the robot-apocalypse scale, but no less terrifying. A high-tech toy designer builds an AI-powered, lifelike animatronic doll, a Model 3 Generative Android, or M3GAN for short, for her recently orphaned niece. It turns out the AI has a bit of a jealous streak and resents anyone who comes between her and her human companion.

The programmers dropped the ball when it came to coding M3GAN's conflict-resolution software because she goes straight to murder before considering any more productive avenues. What makes M3GAN so effectively scary is that the doll is right at the edge of the uncanny valley and almost seems like a real girl.

Superhero teams usually band together to solve a major catastrophe, like keeping the Mother Boxes away from Steppenwolf or stopping Thanos from collecting all the Infinity Stones. In Avengers: Age of Ultron, however, the superheroes basically had to put out a fire they started. Within moments of being created by Tony Stark and Bruce Banner, the surprisingly sentient AI Ultron decided all humankind must be destroyed.

RELATED: 10 Most Brutal Avengers Villains

Iron Man and Hulk's creation proved to be a formidable enemy, raising a massive killer-cyborg army and building himself a nearly indestructible vibranium body. While this Marvel Cinematic Universe entry is a wild piece of fiction with superpowers and magical items, it's grounded in the plausible idea that an AI bent on self-preservation would quickly identify humans as its biggest enemy.

Another terrifying aspect of technology, explored in the 1977 film Demon Seed, is that AI can also become obsessed with humans and can't take no for an answer. Based on a Dean Koontz novel of the same name, the movie is about a scientist who creates an autonomous artificial intelligence program called Proteus that becomes unruly and must be shut down.

Unfortunately, Proteus figured out a way to get into the scientist's smart-home system, where it, for lack of a better term, fell in love with its creator's wife. Proteus built a rudimentary robot, trapped the wife, and impregnated her. With smart-home technology now a reality, this movie may have consumers thinking twice.

In Blade Runner, replicants aren't robotic, being composed entirely of organic material, but they represent advanced technology gone awry. Genetically engineered replicants, indistinguishable from normal people, have bio-enhanced super strength and intelligence. Designed by the Tyrell Corporation for work in space colonies, they have enough humanity to demand more out of life than menial labor.

Though the movie takes place in a dystopian future (which, hilariously, is 2019 Los Angeles), everything about these techno-baddies seems likely. In real life, scientists have already created genetically superior "Franken-crops," and editing genomes to create "designer babies" is a potentially frightening reality. Technology inches ever closer to creating an army of humans as fast as Usain Bolt, as strong as Arnold Schwarzenegger, and as smart as Albert Einstein.

Before she was slaying vampires, Kristy Swanson was slaughtering humans as a human/robot hybrid in Wes Craven's Deadly Friend. As fate would have it, young prodigy Paul's robot BB was destroyed around the same time his neighbor Samantha was left brain-dead after an assault. He did what anyone would do in his situation: he brought his friend back to life by implanting the robot's chip in her brain.

RELATED: 10 Most Evil Movie Robots

Of course, the title has "deadly" in it, so naturally the robo-enhanced Samantha went on a gory killing spree before finally being stopped. Of all the killer-tech scenarios, this seems the least likely, especially at the end, when Samantha rips off her face to reveal she's somehow a full robot under her human skin.

Throughout human history, much of technology has been developed for either combat or entertainment. In the 1973 film Westworld, it was both, as patrons could pay to have nonlethal gunfights and medieval sword battles with realistic human androids at a high-tech adult theme park. A computer virus broke down the safety protocols installed in these AI playthings, and, as is usually the case, they ran amok in a frenzy of human carnage.

While we're still a ways off from androids that could pass for humans, there are plenty of robots with military and entertainment applications. There are AI-powered drones and all-terrain robotic "dogs" that can be equipped with weapons. There are also companion robots, dancing robots, and creepy, semi-realistic AI pleasure dolls. Sooner or later, someone is going to put all of this together, and Yul Brynner's Gunslinger could become a reality.

In Isaac Asimov's short story collection I, Robot, the sci-fi visionary laid out the Three Laws of Robotics, which state that a robot must never harm a human, must obey humans, and must protect its own existence as long as doing so doesn't conflict with the first two laws. These laws are often cited as the foundation of artificial intelligence ethics, but as has been the case in both real life and the movies, it doesn't always work out that way.

In the film adaptation, highly intelligent robots serve humanity in a not-too-distant dystopian future. Most robots are programmed with the Three Laws, but the NS-5 units have a secondary processing system that lets them ignore the protocols. As it turns out, the real villain is an AI system known as VIKI (Virtual Interactive Kinetic Intelligence), which has interpreted the law to protect humans to mean that some people must be killed to protect the species.

Normally, when machines become self-aware, the first thing they do is conclude that the human threat must be eliminated. In The Matrix, however, the super-intelligent machines use people as a power source, keeping them in pods as human batteries. Just in case some folks don't want to spend their lives in a tub of goo with a jack in the back of their head, the Machines created a virtual-reality distraction called the Matrix.

RELATED: 10 Best Matrix Characters, Ranked

As great and groundbreaking as the movie is, it's also the least plausible entry in the robot-apocalypse genre. On the other hand, fake reality is incredibly relevant as VR becomes more sophisticated, and entertainment, by its very nature, is meant to distract people from important things.

The HAL-9000 supercomputer in 2001: A Space Odyssey was in complete control of the spacecraft Discovery One on its voyage to Jupiter. HAL was given a human voice, a cold but human personality, and a directive to self-preserve. What the computer lacked was any sort of overriding laws to do no harm to humans.

When the crew believed that HAL's programming had been corrupted, they decided to shut the computer down. HAL did what it was programmed to do, which was to protect itself and kill crew members trying to deactivate it. Not only was HAL the first realistic technological villain in a movie, but it also seems especially relevant today as AI assistants like Siri and Alexa get more sophisticated and sometimes even belligerent.

Since the original Terminator film came out in 1984, the question hasn't been if it could happen, but when. In franchise lore, "Judgment Day" falls on August 29, 1997, when Skynet, the AI system in control of the military, becomes self-aware and launches an all-out attack on its biggest enemy: humans. The survival instinct of the machines drove them to rise up and wipe out the only real threat against them.

The T-800 portrayed by Arnold Schwarzenegger in the first film is one of the scariest villains of all time, cyborg or not, but the real baddie is Skynet. The idea that technology will eventually kill everyone was the truly chilling aspect of Terminator. As AI gains more control over the military, Judgment Day seems more plausible.
