Archive for the ‘Alphago’ Category

Life Lived On Screen: Philosophical, Poetic, and Political Observations – lareviewofbooks

AUGUST 2, 2020

I.

There is the idea of a physical human-to-human encounter.

As a being together.

The image is one of a shared experience of time: a time constituted by the act of committing to one another, to an encounter.

Of inhabiting, together, a space where bodies meet, where talking and laughing and crying is a haptic experience. Where one breathes the same air, smells the same smells.

An experience the body can remember sensorially long after.

Reaching out and touching. Shared surfaces. Breathing, talking, anything really.[1]

Can this kind of encounter happen through machines or machine interfaces? Zoom, Facebook, Google, Twitter, LinkedIn, Skype, Microsoft Teams, and many more.

Can it happen with a machine?

Traditionally, the answer to both questions is no.

No, it cannot really happen by way of a machine interface because too much is lost.

And no, one cannot have a true encounter with machines.

II.

In times of COVID-19, we spend more of our life online in networks than ever before.

What is the effect of this life lived on screen on what it is to be human?

We have more Zoom meetings, surf longer on Instagram, spend more time on Facebook and Twitter than ever before.

What is the transformation of the human brought about by life lived on screens, and how can we bring this transformation into focus?

As a site of philosophical change and as an opportunity for philosophers, artists, and technologists to come together and give it shape?

What are the philosophical and poetic and political stakes and opportunities of this, of our moment in time?

III.

The migration of human activity to technological platforms began long before COVID-19.

The reference here is particularly to the emergence, in the early 2000s, of interactive, often user-generated content and of network companies.

The classic examples here are companies like Google (which mastered microtargeting), Facebook, Twitter, Amazon, Microsoft (Skype and Teams), and now also Zoom.

This matters for two reasons.

The first is that the material infrastructural conditions of possibility for how we now spend much of our time were laid long before the present: satellites, high-speed fiber-optic cables between cities and underneath the ocean, file-sharing systems in massive computer farms that host servers, AI algorithms that work through enormous amounts of data quickly to find patterns and calculate preferences, etc.

The second reason is that the material infrastructure that makes life lived on screen possible is inseparably related to platform capitalism. Platform capitalism consists (mostly but not exclusively) of companies that make money by offering free services such as search or posting images or messaging but that collect and harvest user data in order to either sell it to other platform companies or, more often, to sell it to advertising companies (who then devise microtargeting strategies, that is, they deliver ads to specific audiences).

In order to generate data, these companies had been busy finding ways to migrate human activity online long before COVID-19.

Or, perhaps more accurately, they have been busy creating new forms of human activity suited to life online: surfing, search, texting, sexting, browsing, FaceTiming, YouTubing, binge-watching, etc.

AR and VR, especially via Facebook and Oculus, may soon be an additional element of life on screen.

And COVID-19?

Well, for most platform companies, the spread of SARS-CoV-2 and the shelter-at-home orders have been a massive boost: screen time has increased dramatically and so has their capacity to generate and mine data.

That is, COVID-19 has been a consolidation and even an expansion event for platform capitalism.

The contrast to older forms of capitalism, especially to industrial manufacturing, couldn't be sharper.

The question thus emerges whether or not we are currently seeing a powerful acceleration of a shift from earlier forms of capitalism toward a new, still-nascent form called platform capitalism.

A shift from a mode of production focused on the industrial production of goods by labor to another one that is about users, data, and AI?

What are the philosophical, poetic, and political dimensions of this shift?

IV.

In my observation, platform companies have made dominant a form of relationality, networks, that runs diagonal to the usual, place-based socialities of the nation (usually framed in terms of belonging and non-belonging, inclusion and exclusion of a people imagined in territorial and ethnic or racial terms).

In fact, I think it is no exaggeration to argue that networks have given rise to a new structure and experience of reality that is radically different from and even incommensurable with the structure and experience of reality that defined societies.

I offer a simple juxtaposition to illustrate my point.

Societies, usually, have three main features.

First, they are organized hierarchically. That is, they typically have a few powerful individuals at the top, while the vast majority of individuals assemble at the bottom.

Second, they are organized vertically, by which I mean that they accommodate an often vast diversity of opinions and points of view.

Third, societies are usually held together by a national sentiment and, most importantly, by a national communication or media system. The form this media system almost always takes is mass communication, where the few communicate to the many. What they communicate is information: information people may vehemently disagree about, but the baseline of this disagreement is that people agree about the things that they disagree about. Mass communication ensures that people have a shared sense of reality.

Networks defy all three of those features.

First, if societies are hierarchical and vertical, then networks are flat and horizontal: networks tend to be self-assemblies of people with similar views and inclinations.

Second, while societies are contained by national territories, networks tend to be global and cut across national boundaries: another way of saying this is that while societies are place-specific units, networks are non-place-specific units.

And third, if in society the few communicate with the many and what they communicate is information, then in networks the many communicate directly, unfiltered, with the many, and what they communicate is not information but affective (emotional) intensity.

It strikes me as uncontroversial that today more and more humans live in networks and that networks, ultimately, defy the logic of society.

Indeed, the rise of networks has created a situation in which, counter to what the moderns thought, society and the social are not timeless ontological categories that define the human.

On the contrary, they are recent and transitory concepts that have no universal validity for all of humanity or all of human history.

Of course, societas is an ancient concept. However, up until the late 18th century, a societas was a legal and not a national or territorial concept; it referred to those who held legal rights vis-à-vis the monarch.

Things only changed in the years predating the French Revolution, when the argument emerged that the people, and not the aristocrats and the grand bourgeoisie who held legal rights vis-à-vis the king, should be the society constitutive of the political entity called France.

The early nation-states, which emerged in the context of the first Industrial Revolution and at a time when several cholera epidemics ravaged Europe, found themselves confronted with the need to know their societies, to know how many people lived on their territory, how many were born, how many died, how many got sick and of what; they had to know how many married and how many divorced.

As political existence and the biological vitality of the national society were understood to be connected, states began to conduct massive surveys to understand how they could reform and advance their societies.

Over time, between the 1830s and the 1890s, this gave rise to what one could call the logic of the social: the idea that the truth about humans is that they are born in societies and that society will shape them and even determine them. The truth about humans is that they are social, in the sense of societal beings: tell me in which segment you were born, and I will tell you who you are likely to marry, how many kids you will have, what your job will be, what you are likely to die of.

The social was discovered as the true ontological ground of the human.

To this day, most normative theories of the human (call them anthropologies), from Marx via the Frankfurt School to Pierre Bourdieu, are based on the idea that society is the true ontological ground of the human.

All our modern political institutions are based on society.

If it is true that networks defy the logic of society, then the social sciences, simply because they take the social for granted as the true logic of the human, will fail to bring the human into view.

What we need, then, is a shift from social anthropology (an anthropology grounded in the concept of the social) to a network anthropology: a multifaceted study of how networks give rise to humans.

V.

The difference between networks and societies, which appears to map onto the difference between platform and industrial capitalism, is related to the changing relation between humans and machines brought about by recent advances in AI, specifically in machine learning.

One can say that machine learning technologies are beginning to liberate machines from the narrow industrial concept of what a machine is and that this liberation may have far-reaching consequences for what it means to have an encounter.

Traditionally, there were unbridgeable differences between humans and machines.

Partly because humans have intelligence, reason, while machines are reducible to mechanism.

Partly because machines have no life, no quality of their own. They are reducible to the engineers who invented them and hence mere tools.

The implication, often, is that there is no will, no interference, no freedom, no opening.

But machine learning and neurotechnology make us reconsider these boundaries between organisms and machines, between humans and mechanisms.

First, the success of artificial neural nets, or the basic continuity between neural and mechanical processes, suggests that the distinction between the natural and the artificial may matter much less than we thought.

Second, the emergence of deep learning architectures has led to machines with a mind of their own: they have an agency that is not reducible to the intent of or the program written by the engineer.

The exemplary reference here is a 2016 game of Go, played by a deep learning system named AlphaGo (built by DeepMind, a London-based, Google-owned AI company) against Lee Sedol, an 18-time world champion. Toward the end of Game Two in a best-of-five series, AlphaGo opted for a move, move 37, that was highly unusual.

DeepMind later announced that AlphaGo had calculated the odds that an expert human player would have made the same move at 1 in 10,000.

It played the move anyway: as if it judged that a nonhuman move would be better in this case.

Fan Hui, the three-time European Go champion, remarked: "It's not a human move. So beautiful. So beautiful."

Wired wrote shortly after the game was over: "Move 37 showed that AlphaGo wasn't just regurgitating years of programming or cranking through a brute-force predictive algorithm. It was the moment AlphaGo proved it[s] [...] ability to play a beautiful game not just like a person but in a way no person could."[2]
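
A brief technical aside may help make the "1 in 10,000" figure concrete. The published AlphaGo work pairs a policy network, which estimates how likely an expert human would be to play each candidate move, with a search that estimates each move's value. The toy Python sketch below is not DeepMind's code; the numbers, names, and constants are invented for illustration. It only shows how a PUCT-style selection rule of the kind described in the AlphaGo papers can still favor a move whose "human" prior is vanishingly small once the search judges its value to be high.

import math

def puct_score(q, prior, visits, parent_visits, c_puct=1.5):
    # Mean value Q plus an exploration bonus weighted by the policy prior.
    return q + c_puct * prior * math.sqrt(parent_visits) / (1 + visits)

# Hypothetical statistics for two candidate moves after some simulations:
# (estimated value Q, policy prior P, visit count N) -- all numbers made up.
moves = {
    "conventional_move": (0.48, 0.35, 800),
    "move_37": (0.55, 0.0001, 200),  # prior of roughly 1 in 10,000
}
parent_visits = sum(n for _, _, n in moves.values())

best = max(moves, key=lambda m: puct_score(*moves[m], parent_visits))
print(best)  # with these made-up numbers, the low-prior move wins on value

In other words, the low prior marks the move as un-humanlike, but it does not prevent the system from preferring it.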

Traditionally, a program that did not conform to the intentionality of the engineer was considered faulty. However, contemporary machine learning systems are built to defy, to exceed, the mind of the engineer: it is expected that the machine brings something to a game, a conversation, a question that the engineers did not and could not possibly provide it with (something nonhuman).

These developments (one could call them the liberation of machines from the human, or at least from the concept of the machine that up until recently defined the human imagination of what a machine could be) are related to the rise of networks.

They are related insofar as, in networks, relationality, once a human-to-human prerogative, may no longer be limited to human-to-human encounters.

What effects will the liberation of machines, which is constitutive of networks as much as of machines, have on what it is to be human?

Or on what it is to be in relation?

VI.

As I see it, what is needed now are philosophical investigations of the new technology that is being built.

Not studies in terms of society, as this would ultimately imply holding on to the old concept of the human as social being.

Nor studies in terms of the human, if that means the defense of the human against the machine.

But rather, collaborative studies, conducted by philosophers and artists together with technologists, of how networks and machine learning are challenging old concepts of living together and enabling new, yet-to-be-explored ones.

All by itself, COVID-19 has little to do with these most far-reaching philosophical transformations brought about by networks and by machine learning.

And yet, COVID-19 brings this transformation into view with sharper clarity than ever before and has created circumstances in which this new and different world might arrive faster than we anticipated.

What will it mean to be together with a machine?

To address this question, we may need a whole new vocabulary of encounters and relations.

[1] From Lauren Lee McCarthy, Later Date, 2020, https://vimeo.com/416588466/bb8762077d.

[2] Cade Metz, "What the AI Behind AlphaGo Can Teach Us About Being Human," Wired, May 19, 2016, https://www.wired.com/2016/05/google-alpha-go-ai.

Image Credit: Stills from Lauren Lee McCarthy, Later Date, 2020

Tobias Rees is the founding Director of the Berggruen Institute's Transformations of the Human Program. He also serves as Reid Hoffman Professor of Humanities at the New School for Social Research and is a Fellow of the Canadian Institute for Advanced Research.

The US, China and the AI arms race: Cutting through the hype – CNET

Artificial intelligence -- which encompasses everything from service robots to medical diagnostic tools to your Alexa speaker -- is a fast-growing field that is playing an increasingly critical role in many aspects of our lives. A country's AI prowess has major implications for how its citizens live and work -- and its economic and military strength moving into the future.

With so much at stake, the narrative of an AI "arms race" between the US and China has been brewing for years. Dramatic headlines suggest that China is poised to take the lead in AI research and use, due to its national plan for AI domination and the billions of dollars the government has invested in the field, compared with the US' focus on private-sector development.

But the reality is that at least until the past year or so, the two nations have been largely interdependent when it comes to this technology. It's an area that has drawn attention and investment from major tech heavy hitters on both sides of the Pacific, including Apple, Google and Facebook in the US and SenseTime, Megvii and YITU Technology in China.

Generation China is a CNET series that looks at the areas of technology where the country is looking to take a leadership position.

"Narratives of an 'arms race' are overblown and poor analogies for what is actually going on in the AI space," said Jeffrey Ding, the China lead for the Center for the Governance of AI at the University of Oxford's Future of Humanity Institute. When you look at factors like research, talent and company alliances, you'll find that the US and Chinese AI ecosystems are still very entwined, Ding added.

But the combination of political tensions and the rapid spread of COVID-19 throughout both nations is fueling more of a separation, which will have implications for both advances in the technology and the world's power dynamics for years to come.

"These new technologies will be game-changers in the next three to five years," said Georg Stieler, managing director of Stieler Enterprise Management Consulting China. "The people who built them and control them will also control parts of the world. You cannot ignore it."

You can trace China's ramp-up in AI interest back to a few key moments, starting four years ago.

The first was in March 2016, when AlphaGo -- a machine-learning system built by Google's DeepMind that uses algorithms and reinforcement learning to train on massive datasets and predict outcomes -- beat the human Go world champion Lee Sedol. This was broadcast throughout China and sparked a lot of interest -- both highlighting how quickly the technology was advancing, and suggesting that because Go involves war-like strategies and tactics, AI could potentially be useful for decision-making around warfare.

The second moment came seven months later, when President Barack Obama's administration released three reports on preparing for a future with AI, laying out a national strategic plan and describing the potential economic impacts (all PDFs). Some Chinese policymakers took those reports as a sign that the US was further ahead in its AI strategy than expected.

This culminated in July 2017, when the Chinese government under President Xi Jinping released a development plan for the nation to become the world leader in AI by 2030, including investing billions of dollars in AI startups and research parks.

In 2016, professional Go player Lee Sedol lost a five-game match against Google's AI program AlphaGo.

"China has observed how the IT industry originates from the US and exerts soft influence across the world through various Silicon Valley innovations," said Lian Jye Su, principal analyst at global tech market advisory firm ABI Research. "As an economy built solely on its manufacturing capabilities, China is eager to find a way to diversify its economy and provide more innovative ways to showcase its strengths to the world. AI is a good way to do it."

Despite the competition, the two nations have long worked together. China has masses of data and far more lax regulations around using it, so it can often implement AI trials faster -- but the nation still largely relies on US semiconductors and open source software to power AI and machine learning algorithms.

And while the US has the edge when it comes to quality research, universities and engineering talent, top AI programs at schools like Stanford and MIT attract many Chinese students, who then often go on to work for Google, Microsoft, Apple and Facebook -- all of which have spent the last few years acquiring startups to bolster their AI work.

China's fears about a grand US AI plan didn't really come to fruition. In February 2019, US President Donald Trump released an American AI Initiative executive order, calling for heads of federal agencies to prioritize AI research and development in 2020 budgets. It didn't provide any new funding to support those measures, however, or many details on how to implement those plans. And not much else has happened at the federal level since then.

Meanwhile, China plowed on, with AI companies like SenseTime, Megvii and YITU Technology raising billions. But investments in AI in China dropped in 2019, as the US-China trade war escalated and hurt investor confidence in China, Su said. Then, in January, the Trump administration made it harder for US companies to export certain types of AI software in an effort to limit Chinese access to American technology.

Just a couple weeks later, Chinese state media reported the first known death from an illness that would become known as COVID-19.

In the midst of the coronavirus pandemic, China has turned to some of its AI and big data tools in attempts to ward off the virus, including contact tracing, diagnostic tools and drones to enforce social distancing. Not all of it, however, is as it seems.

"There was a lot of propaganda -- in February, I saw people sharing on Twitter and LinkedIn stories about drones flying along high rises, and measuring the temperature of people standing at the window, which was complete bollocks," Stieler said. "The reality is more like when you want to enter an office building in Shanghai, your temperature is taken."

A staff member introduces an AI digital infrared thermometer at a building in Beijing in March.

The US and other nations are grappling with the same technologies -- and the privacy, security and surveillance concerns that come along with them -- as they look to contain the global pandemic, said Elsa B. Kania, adjunct fellow with the Center for a New American Security's Technology and National Security Program, focused on Chinese defense innovation and emerging technologies.

"The ways in which China has been leveraging AI to fight the coronavirus are in various respects inspiring and alarming," Kania said. "It'll be important in the United States as we struggle with these challenges ourselves to look to and learn from that model, both in terms of what we want to emulate and what we want to avoid."

The pandemic may be a turning point in terms of the US recognizing the risks of interdependence with China, Kania said. The immediate impact may be in sectors like pharmaceuticals and medical equipment manufacturing. But it will eventually influence AI, as a technology that cuts across so many sectors and applications.

Despite the economic impacts of the virus, global AI investments are forecast to grow from $22.6 billion in 2019 to $25 billion in 2020, Su said. The bigger consequence may be on speeding the process of decoupling between the US and China, in terms of AI and everything else.

The US still has advantages in areas like semiconductors and AI chips. But in the midst of the trade war, the Chinese government is reducing its reliance on foreign technologies, developing domestic startups and adopting more open-source solutions, Su said. Cloud AI giants like Alibaba, for example, are using open-source computing models to develop their own data center chips. Chinese chipset startups like Cambricon Technologies, Horizon Robotics and Suiyuan Technology have also entered the market in recent years and garnered lots of funding.

But full separation isn't on the horizon anytime soon. One of the problems with referring to all of this as an AI arms race is that so many of the basic platforms, algorithms and even data sources are open-source, Kania said. The vast majority of the AI developers in China use Google TensorFlow or Facebook PyTorch, Stieler added -- and there's little incentive to join domestic options that lack the same networks.

The US remains the world's AI superpower for now, Su and Ding said. But ultimately, the trade war may do more harm to American AI-related companies than expected, Kania said.

"My main concern about some of these policy measures and restrictions has been that they don't necessarily consider the second-order effects, including the collateral damage to American companies, as well as the ways in which this may lessen US leverage or create much more separate or fragmented ecosystems," Kania said. "Imposing pain on Chinese companies can be disruptive, but in ways that can in the long term perhaps accelerate these investments and developments within China."

Still, "'arms race' is not the best metaphor," Kania added. "It's clear that there is geopolitical competition between the US and China, and our competition extends to these emerging technologies including artificial intelligence that are seen as highly consequential to the futures of our societies' economies and militaries."

DeepMind sets AI loose on Diplomacy board game, and collaboration is key – TechRepublic

Artificial intelligence systems have become increasingly well-adapted to a host of basic board games. Now, DeepMind is hoping to teach agents the art of collaboration using Diplomacy.

From Turochamp to Deep Blue, human-vs.-computer competition has captivated audiences for decades, fueling plenty of hyperbole along the way. In recent years, artificial intelligence (AI) systems have claimed supremacy across a variety of classic games. The AI research and development company DeepMind has been behind many of these systems at the bleeding edge of innovation.

In March 2016, one such bout of bytes vs. brains pitted DeepMind's AI system, AlphaGo, against Go legend and 18-time world titleholder Lee Sedol. With millions tuning in around the globe, the unthinkable slowly unfolded as AlphaGo picked apart, with surgical precision, arguably the best player of the abstract strategy board game of the past decade. The stunning AlphaGo victory awarded the AI system a 9 dan ranking, the highest such certification.

Now the company has set its sights on training an AI agent on another of mankind's mysterious board games, this time trying its hand at Diplomacy. After all, it was only a matter of time before we trained AI in the skillful art of negotiation en route to global domination.

Unlike more rudimentary games, Diplomacy involves a complex level of strategy and scheming. In a game like checkers, for example, a player has a rather limited decision about where to move an individual piece at any given time. The nuances and complexities, of course, increase with chess, as a player must assign value to pieces and orchestrate a cohesive series of moves for success. In the esoteric world of board games, Diplomacy presents its own set of challenges for AI.

"Diplomacy has seven players and focuses on building alliances, negotiation, and teamwork in the face of uncertainty about other agents. As a result, agents have to constantly reason about who to cooperate with and how to coordinate actions," said Tom Eccles, a research engineer at DeepMind.

AI systems have proved to be far superior to even the best human beings at zero-sum games like chess and Go. In this type of gameplay, there can only be one winner and one loser. Diplomacy, by contrast, requires agents to build alliances and foster collaboration.

"On the one hand, it is difficult to make progress in the game without the support of other players, but on the other hand, only one player can eventually win. This means it is more difficult to achieve cooperation in this environment. The tension between cooperation and competition in Diplomacy makes building trustworthy agents in this game an interesting research challenge," said Tom Anthony, a research scientist at DeepMind.

The ability to expeditiously vanquish a human player in a zero-sum game is certainly impressive; however, a richer layering of skills opens up another world of AI potential. Our day-to-day lives involve an intricate patchwork of balanced synergies, our individual needs often packaged within a larger group effort. That said, this research could enhance agents' ability to collaborate with us and one another, leading to a vast spectrum of real-world applications.

"In real-life, we often work in teams and have to both compete and cooperate. From simple decisions such as scheduling a meeting or deciding where to eat out with friends, to complex decisions such as negotiating with suppliers or clients or assigning tasks in a joint project, we constantly reason about how to best work with others. It seems likely that as AI systems become more complex, we'd need to provide them with better tools for effectively cooperating with others," said Yoram Bachrach, a research scientist at DeepMind.

Organizational workflows are typically hinged on collaboration and teamwork. As digital transformation takes hold across industries, organizations are increasingly utilizing a host of autonomous systems to increase efficiency and streamline operations. Enhancing agents with artificial soft skills related to teamwork and cooperation may be key moving forward.

"Artificial Intelligence is increasingly being applied to more complex tasks. This could mean that a number of different autonomous systems must work together, or at least in the same environment, in order to solve a task. As such, understanding how autonomous systems learn, act, and adapt to each other, is a growing area of research." Eccles said.

It's important to note that this research focused on understanding the interactions in a "many-agent setting," and used a limited No-Press version of gameplay, which does not allow communication. Further research and development will allow future agents to participate in full Diplomacy gameplay, leveraging communication to build alliances and negotiate with other players.

In the full version, "communication is used to broker deals and form alliances, but also to misrepresent situations and intentions," according to the paper. Teaching an agent to utilize other players as collaborative pawns to ensure victory does bring up a series of concerns.

In one such scenario, the authors of the report explain that "agents may learn to establish trust, but might also exploit that trust to mislead their co-players and gain the upper hand." The researchers reiterate the importance of testing these agents in an isolated environment to better understand developments and pinpoint detrimental behaviors if they arise.

"We start from the premise that all AI applications should remain under meaningful human control, and be used for socially beneficial purposes. Our teams working on technical safety and ethics aim to ensure that we are constantly anticipating short- and long-term risks, exploring ways to prevent these risks from happening, and finding ways to address them if they do." Anthony said.

Is Dystopian Future Inevitable with Unprecedented Advancements in AI? – Analytics Insight

Artificial super-intelligence (ASI) is a software-based system with intellectual powers beyond those of humans across an almost comprehensive range of categories and fields of endeavor.

The reality is that AI has been here for a long time now, ever since computers were able to make decisions based on inputs and conditions. When we see a threatening Artificial Intelligence system in the movies, it's the malevolence of the system, coupled with the power of some machine, that scares people.

However, it still behaves in fundamentally human ways.

The kind of AI that prevails today can be described as Artificial Functional Intelligence (AFI). These systems are programmed to perform a specific role and to do so as well as or better than a human. They have also become successful at this in a shorter period than anyone predicted: for example, beating human opponents in complex games like Go and StarCraft II, which knowledgeable people thought wouldn't happen for years, if not decades.

However, while AlphaGo might beat every single human Go player handily from now until the heat death of the Universe, ask it for the current weather conditions and the machine lacks the intelligence of even single-celled organisms that respond to changes in temperature.

Moreover, the prospect of limitless expansion of technology granted by the development of Artificial Intelligence is certainly an inviting one. With investment and interest in the field growing with every passing year, one can only imagine what is yet to come.

Dreams of technological utopias granted by super-intelligent computers are contrasted with those of an AI-led dystopia, and with many top researchers believing the world will see the arrival of AGI within the century, it is down to the actions people take now to influence which future they might see. While some believe that only Luddites worry about the power AI could one day hold over humanity, the reality is that most top AI academics carry a similar concern for its grimmer potential.

It's high time people understood that no one is going to get a second attempt at powerful AI. Unlike other groundbreaking developments for humanity, if it goes wrong there is no opportunity to try again and learn from the mistakes. So what can we do to ensure we get it right the first time?

The trick to securing the ideal Artificial Intelligence utopia is ensuring that its goals do not become misaligned with those of humans. AI would not become evil in the sense that many fear; the real issue is making sure it understands our intentions and goals. AI is remarkably good at doing what humans tell it, but when given free rein, it will often achieve the goal humans set in a way they never expected. Without proper preparation, a well-intended instruction could lead to catastrophic events, perhaps due to an unforeseen side effect, or, in a more extreme example, the AI could even see humans as a threat to fully completing the task set.

The potential benefits of super-intelligent AI are so limitless that there is no question that development towards it will continue. However, to prevent AGI from being a threat to humanity, people need to invest in AI safety research. In this race, one must learn how to effectively control a powerful AI before its creation.

The issue of ethics in AI, super-intelligent or otherwise, is being addressed to a certain extent, evidenced by the development of ethical advisory boards and executive positions to manage the matter directly. DeepMind has such a department in place, and international oversight organizations such as the IEEE have also created specific standards intended for managing the coexistence of highly advanced AI systems and the human beings who program them. But as AI draws ever closer to the point where super-intelligence is commonplace and ever more organizations adopt existing AI platforms, ethics must be top of mind for all major stakeholders in companies hoping to get the most out of the technology.

Smriti is a Content Analyst at Analytics Insight. She writes Tech/Business articles for Analytics Insight. Her creative work can be confirmed @analyticsinsight.net. She adores crushing over books, crafts, creative works and people, movies and music from eternity!!

An eye on AI – CII Global Knowledge Summit explores impacts and strategies for the Age of the Algorithm – YourStory

Next month, CII's annual summit will explore the digital transformation of knowledge societies. To be held entirely online from July 6-8, the forum is titled CII Global Knowledge Virtual Summit 2020: Knowledge in the Age of Artificial Intelligence.

The conference is also supported by the KM Global Network (KMGN), and will feature the awards ceremony for the Most Innovative Knowledge Enterprise (MIKE). AFCONS, Infosys, Wipro, Cognizant, and Tata Chemicals are winners of the MIKE Awards at the India and global levels.

YourStory is the media partner for the summit this year as well (see Part I and Part II of our 2019 summit articles). Topics addressed this year include the rise of AI/ML, knowledge integration, gamification, and storytelling.

In this series of preview articles, YourStory presents insights from the speakers and organisers of the CII 2020 summit, as well as experts from KMGN (see Part I and Part II of our ongoing coverage of the 2020 edition). The knowledge movement has particular urgency in the wake of COVID-19 to speed up effective knowledge-sharing across sectoral and national boundaries.

In a chat with YourStory, Jennifer Mecherippady, Senior Vice-President of CGI, shows a number of AI benefits that have been realised by her company. These include digital transformation of AM/IM (application/infrastructure management) operations through its Intelligent Automation Platform, responding to RFPs based on insights from specifications and past data, and digitisation of industry-specific needs in banking and HR.

A number of case studies of AI have shown broader impacts across industries, explains Sameer Dhanrajani, CEO of AIQRATE. He is also the author of AI and Analytics: Accelerating Business Decisions (see my book review here).

The case studies cover AI impacts in media (innovative content creation via hyper-personalisation and micro-segmenting), insurance (transformation of the business value chain in claims processing, telematics, risk management, actuarial valuations), and manufacturing (predictive asset maintenance to pre-empt wear and tear).

"We are being ushered into an AI era, an algorithm-led economy wherein self-intuitive and ML-enabled algorithms sit at the core of every business model and in the organisational DNA, delivering end-to-end transformative impact," he explains.

"Machines are great at evaluating huge volumes of data and generating clever visualisations from these. AI is also good at finding trends that humans can't immediately see due to the volume of data and possible interfering counter patterns," explains Arthur Shelley, Founder of Intelligent Answers.

A number of other experts have documented specific impacts of AI and ML in companies like Amazon, GE, Bosch, Nike, Caterpillar, Spotify, Netflix, SAP, Cisco, IBM, Siemens, Verizon, Unilever, P&G, GSK, Novartis, Salesforce.com, DBS Bank, Rio Tinto, Lowe's, Allstate, and AlphaGo. See my book reviews of Prediction Machines; What to do when Machines do Everything; Machine, Platform, Crowd; The AI Advantage; and Human + Machine.

"Every five years or so, the field of KM undergoes a metamorphosis, absorbing the latest trends into its practices and thereby delivering continuing value," explains Rudolph D'souza, Chair of KMGN and Chief Knowledge Officer of AFCONS Infrastructure. He cites the rise of the internet, social media, and enterprise digital platforms as examples of such waves.

"The same is going to happen with AI, automation, and machines. What will change is the pace, the sources of knowledge, and, in this new era, the application of knowledge," Rudolph says. "The role of KM is to absorb the latest applications to serve organisation needs to compete effectively."

"This is already happening, mainly in the form of simple decision support where the implications are not catastrophic. But some use cases of higher-end applications have been around, as in the case of using machines to analyse scans in oncology departments and assist specialists," Rudolph observes.

"Knowledge creation and management is a critical differentiator for the industry. With AI making great strides in generating knowledge from raw video, image, voice, and social media text, knowledge creation and management has to be redefined," explains Gopichand Katragadda, Chairman, Global Knowledge Summit 2020, and Founder and CEO at Myelin Foundry.

The rise of AI and automation will lead to the increasing embedding of relevant knowledge about decisions, design, and processes right into the code, according to Ravi Shankar Ivaturi, Business Operations Senior Director, Products and Platforms, Unisys. This can lead to positive and negative effects, he cautions.

"Structured KM lays the foundation on which AI, machine learning, and automation can thrive," according to Ved Prakash, Chief Knowledge Officer of Trianz. "The role of KM is only going to increase in the emerging scenarios where deep understanding of knowledge and data will be a key skill," he adds.

"The role of KM is going to be that of a connective tissue across systems, machines, and humans. The game is still about insights," explains Balaji Iyer, Director of Knowledge Management and Enterprise Transformation at Grant Thornton.

"Many processes are automated in a HUMBOT framework where humans work closely with bots to get the desired outcomes. There is a crucial knowledge play in areas of machine teaching, human-bot hand-offs, and solving the right problems," he adds.

"The more AI makes a lot of the processes appear like black boxes for business leaders, the more pronounced the need for a next-gen KM program," Balaji says. He also draws attention to the re-imagination of KM systems using AI as a backbone for an AI-driven world, with KM products like Microsoft's Cortex as an example.

"AI will continue to be used to replicate human cognitive functions such as memory, learning, evaluation, decision making, and problem solving," says Zeba Khan, Managing Partner, Xenvis Solutions. "The role of the human factor in aspects of creativity, intuition and in other soft skills cannot be replaced by technology. AI will not replace human jobs but will redefine them," she emphasises.

"AI needs knowledge to properly operate and produce valuable results. KM will help producing the raw material for AI and support the AI process at every stage," explains Vincent Ribière, Managing Director and Co-founder of the Institute for Knowledge and Innovation Southeast Asia (IKI-SEA), hosted by Bangkok University.

"Every organisation using AI aims to have knowledge embedded into a system to perform the roles humans do at lightning speed," observes Rajesh Dhillon, President, Knowledge Management Society (KMS), Singapore. "Knowledge sharing, collaboration, reuse and learning are the impetus for implementing KM and keeping AI relevant."

"AI-assisted collaboration tools can take knowledge management to another level," observes Refiloe Mabaso, Deputy Chairperson of Knowledge Management South Africa (KMSA). "AI and KM combined can help teams and organisations operate even more intelligently."

"What AI is not (yet) great at is finding the gaps or creatively connecting the insights that may be possible. The future is about what is possible in future and this is informed from what currently is and can't be done," explains Arthur Shelley of Intelligent Answers.

"This is where collaboration between AI and human creativity offers more than either alone can achieve," he adds. Based in Melbourne, Arthur is the producer of the Creative Melbourne conference, and author of KNOWledge SUCCESSion, Being a Successful Knowledge Leader, and The Organizational Zoo.

"AI and automation can be beneficial, but humane and responsible automation is important for balancing the unemployment and cost," cautions Sudip Mazumder, Head of Engineering and Construction, Digital at L&T NxT, and General Manager, L&T Group. "AI may lead to dehumanised processes as people's behavioural drivers may not be mapped in an AI model," he explains.

"There will be realignment of the human-machine equation in the context of AI proliferation in the Industry 4.0 era," explains Sameer Dhanrajani of AIQRATE. "However, akin to all three previous revolutions, AI progress will redefine jobs and human roles a few notches up," he adds.

He foresees a change in workforce composition with menial and trivial jobs getting redefined with AI and redesigned with human-machine combinations. However, platform aggregators and the gig economy will open up new work opportunities for the workforce.

"A world that was hurtling at a relentless pace towards automation, AI, and ML has been forced to stop in its tracks and take cognizance of the human in the process. And, it took a virus to do that," cautions Rajib Chowdhury, Founder of The Gamification Company.

"Working from home is ineffective without emotional trust, a sense of ownership, self-motivation, and measures of accountability," he adds. "Let us not forget that we humans are fundamentally social beings. Technology is but a medium that plays a role of enabler to the process," he emphasises.

"The human factor is still key in a world of AI," explains Jennifer Mecherippady of CGI. This includes identifying potential problems and measurable metrics, providing the right data sets, attributes, and values, and finally evaluating the business outcomes.

"The screaming need for KM in the age of automation, ML, and AI is to formulate and implement frameworks for the Governance of Human and Machine Knowledge," emphasises Arthur Murray, CEO of Applied Knowledge Sciences, in Washington DC.

"Knowledge, whether human or automated, does not manage itself. It requires, as we like to say, adult supervision," he explains. In a recent column, he shows how these challenges manifested themselves in Microsoft's aborted Twitter chatbot Tay.

"KM practitioners should strategically work with executive management to measure and update performance impacts of AI," advises Moria Levy, CEO, ROM Knowledgeware. They should examine how AI can, or cannot, support critical decisions. This involves knowledge validation, sense-making, and risk analysis.

A number of experts have weighed in on broader ethical dimensions of AI with respect to embedded bias, monopolistic practices, global governance, and lack of transparency and accountability. See for example my book reviews of A Human's Guide to Machine Intelligence, Life 3.0, The Four, and The Platform Society.

Despite the presence of AI for decades, a number of myths and misconceptions persist, and get in the way of harnessing AI. Jennifer Mecherippady of CGI points to some such myths: AI will replace humans and overtake human intelligence, AI can make sense of any data and learn the way humans learn, and AI will give immediate business results.

"Many companies are embracing digital transformation without fully understanding the key role of analytics and AI," cautions Sameer Dhanrajani of AIQRATE. "The road to digital transformation is incomplete without AI being at the fulcrum of the business. Enterprises cannot adopt AI if the foundational aspects of analytics capability are not in place in the journey to AI," he emphasises.

"Lack of awareness of AI impacts gets in the way of evangelising and democratising AI," he adds. AI calls for disrupting the business value chain of the enterprises and replacing it with high-powered ML-enabled algorithms.

The speakers offer a range of tips for professionals and organisations to upskill themselves for a world of AI. "You need to identify different groups of people and upskill them. For example, programmers need to be able to identify, implement, refine, and manage new models," Jennifer Mecherippady of CGI explains.

"Business users should master how to effectively use intelligent systems for solving new business problems. Business consultants should be able to understand business problems and identify the right use cases to invest in AI," she adds. Use case identification, collaboration, and scaling call for a systematic learning process.

"AI therefore should be owned by the teams invested in driving the benefits for customers," she adds. CGI's organisational model alignment emphasises a flattened structure consisting of just five levels to business unit leaders.

"Learning will not be a one-time effort. It will be a continual one and the market will unleash new exponential technologies, business practices, and disruptive scenarios in rapid time cycles," observes Sameer Dhanrajani of AIQRATE.

"The basic needs for survival so far have been roti, kapda, makaan, and data. All professions will be forced to add the fifth element, learning, into their monthly budgets to ensure that they remain topical on skills and competencies," Sameer jokes.

The speakers offer a range of tips for businesses to harness AI. "Continue looking for strong opportunities and business cases for AI. Make it a goal for your teams," advises Jennifer of CGI.

"Many enterprises have only a short-term measure for AI adoption and focus only on PoCs or limited engagements. Instead, they need to make AI integral to the strategy of the enterprise and a rallying cry," Sameer of AIQRATE urges.

"The COVID-19 crisis will accelerate AI adoption in totality and across industry segments. Customer preferences have drastically changed, and operational processes have been altered because of this Black Swan event," Sameer observes.

"However, as the current running algorithms have been fed with historical and episodical instances of the past, the coronavirus crisis will compel enterprises to alter the algorithms with revised assumptions and variables. Otherwise, these pre-configured algorithms may create biases in the existing data sets and provide distorted recommendations to the stakeholders," Sameer cautions.
