Archive for the ‘Alphago’ Category

How Artificial Intelligence Will Guide the Future of Agriculture – Growing Produce

New automated harvesters like the Harvest CROO Robotics strawberry robot utilize AI to capture images of ripe berries ready to pick. Photo by Frank Giles

Artificial intelligence, or AI as it is more commonly called, has become more prominent in conversations about technology these days. But what does it mean? And how might it shape the future of agriculture?

In many ways, AI is already at work in agricultural research and in-field applications, but there is much more to come. Researchers in the field are excited about its potential power to process massive amounts of data and learn from it at a pace that far outstrips the capability of the human mind.

The newly installed University of Florida Vice President of Agriculture and Natural Resources, Scott Angle, sees AI as a unifying element of technology as it advances.

"Robotics, visioning, automation, and genetic breakthroughs will need advanced AI to benefit growers," he says. Fortunately, UF recognized this early on and is developing a program to significantly ramp up AI research at the university.

Jim Carroll is a global futurist who specializes in technology and explaining it in a way that non-computer scientists can understand. He says first and foremost, AI is not some out-of-control robot that will terrorize and destroy our way of life like it is often portrayed in the media and popular culture.

"This isn't new," Carroll says. "I actually found articles in Popular Mechanics magazine in the 1930s that spoke of 'Giant Robot Brains' that would steal all our jobs."

What is AI, really? The best way to think about it is that it's an algorithm at heart: a computer that is really good at processing data, whether that be pure data, images, or other information. It has been trained and learns how to recognize patterns, trends, and insights in that information. The more it does it and gets the right scores, the better it gets. It's not really that scary.

John McCarthy is considered one of the founding fathers of AI and is credited with coining the term in 1955. He was joined by Alan Turing, Marvin Minsky, Allen Newell, and Herbert Simon in the early development of the technology.

Back in 1955, AI entered the academic world as a new discipline, and in subsequent years it has experienced momentum in fits and starts. The technology went through a phase of frozen funding that some called the AI winter. Some of this was because AI research was divided into subfields that didn't communicate with each other. Robotics went down one path while machine learning went down another. How and where would artificial neural networks be applied to practical effect?

But, as computing power has increased exponentially over time, AI, as Angle notes, is becoming a unifying technology that can tie all the subfields together. What once could only be imagined is becoming reality.

Dr. Yiannis Ampatzidis, an Assistant Professor who teaches precision agriculture and machine learning at UF/IFAS, says applications are already at work in agriculture including imaging, robotics, and big data analysis.

"In precision agriculture, AI is used for detecting plant diseases and pests, plant stress, poor plant nutrition, and poor water management," Ampatzidis says. "These detection technologies could be aerial [using drones] or ground-based."

The imaging technology used to detect plant stress also could be deployed for precision spraying applications. Currently, John Deere is working to commercialize a weed sprayer from Blue River Technology that detects weeds and applies herbicides only to the weed.

Ampatzidis notes AI is utilized in robotics as well. The technology is used in the blossoming sector of robot harvesters, where it is used to detect ripe fruit for picking. Florida's Harvest CROO Robotics is one example. Its robot strawberry harvester was used in commercial harvest during the 2019-2020 strawberry season in Florida.

Ampatzidis says AI holds great potential in the analytics of big data. In many ways, it is the key to unlocking the power of the massive amounts of data being generated on farms and in ag research. He and his team at UF/IFAS have developed the AgroView cloud-based technology that uses AI algorithms to process, analyze, and visualize data being collected from aerial- and ground-based platforms.

"The amount of these data is huge, and it's very difficult for a human brain to process and analyze them," he says. "AI algorithms can detect patterns in these data that can help growers make smart decisions. For example, AgroView can detect and count citrus trees, estimate tree height and canopy size, and measure plant nutrient levels."
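
As a rough illustration of the kind of bookkeeping such a tool performs once an AI model has found the trees, here is a minimal Python sketch. It assumes a detector has already returned bounding boxes for each canopy; the box coordinates, the pixel scale, and the helper function are hypothetical and are not drawn from AgroView.

```python
# Hypothetical post-processing of detector output for an aerial tree survey.
# The bounding boxes, pixel scale, and function below are illustrative only;
# they are not AgroView's actual pipeline.

# Each detection: (x_min, y_min, x_max, y_max) in image pixels for one canopy.
detections = [
    (120, 340, 260, 480),
    (300, 310, 430, 450),
    (510, 600, 650, 740),
]

METERS_PER_PIXEL = 0.05  # assumed ground sampling distance of the imagery

def canopy_stats(boxes, m_per_px):
    """Count detected trees and estimate the average canopy width in meters."""
    count = len(boxes)
    widths_m = [(x2 - x1) * m_per_px for (x1, y1, x2, y2) in boxes]
    avg_width = sum(widths_m) / count if count else 0.0
    return count, avg_width

trees, avg_canopy_m = canopy_stats(detections, METERS_PER_PIXEL)
print(f"{trees} trees detected, average canopy width ~{avg_canopy_m:.1f} m")
```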

Carroll adds that there is a tremendous amount of data in the imagery being collected today.

"An AI system can often do a better analysis at a lower cost," he says. "It's similar to what we are talking about in the medical field. An AI system can read the information from X-rays and be far more accurate in a diagnosis."

So, are robots and AI coming to steal all our jobs? That's a complicated question yet to be fully played out as the technology advances. Ampatzidis believes the technology will replace repetitive jobs and ones that agriculture is already struggling to fill with human labor.

"It will replace jobs in factories, in agriculture [hand harvesters and some packinghouse jobs], vehicle drivers, bookkeepers, etc.," Ampatzidis says. "It also will replace many white-collar jobs in the fields of law, healthcare, accounting, hospitality, etc."

Of course, AI also could create new jobs in the areas of computer science, automation, robotics, data analytics, and computer gaming.

Carroll adds that people should not fear the potential creative destruction brought on by the technologies enabled by AI. "I always tell my audiences, 'Don't fear the future,'" he says. "I then observe that some people see the future and see a threat. Innovators see the same future and see an opportunity."

Yiannis Ampatzidis, an Assistant Professor who teaches precision agriculture and machine learning at UF/IFAS, says AI applications are already at work in agriculture. Photo by Frank Giles

In July, the University of Florida announced a $70 million public-private partnership with NVIDIA, a multinational technology company, to build the world's fastest AI supercomputer in academia. The system is expected to be operating in early 2021. UF faculty and staff will have the tools to apply AI in multiple fields, such as dealing with major challenges like rising sea levels, population aging, data security, personalized medicine, urban transportation, and food insecurity. UF expects to educate 30,000 AI-supporting graduates by 2030.

AlphaGo, a 2017 documentary film, probably does about as good a job as any in illustrating the potential power of AI. The film documents a team of scientists who built a supercomputer to master the board game Go that originated in Asia more than 3,000 years ago. It also is considered one of the most complex games known to man. The conventional wisdom was that no computer would be capable of learning the vast number of solutions in the game and the reasoning required to win.

The computer, AlphaGo, not only mastered the game in short order, it took down human masters and champions of the game.

To learn more about the film, visit AlphaGoMovie.com.

Giles is editor of Florida Grower, a Meister Media Worldwide publication. See all author stories here.

Read the rest here:
How Artificial Intelligence Will Guide the Future of Agriculture - Growing Produce

The world of Artificial… – The American Bazaar

Sophia. Source: https://www.hansonrobotics.com/press/

Humans are the most advanced form of Artificial Intelligence (AI), with an ability to reproduce.

Artificial Intelligence (AI) is no longer a theory but is part of our everyday life. Services like TikTok, Netflix, YouTube, Uber, Google Home Mini, and Amazon Echo are just a few instances of AI in our daily life.

This field of knowledge has always attracted me in strange ways. I have been an avid reader, and I read a variety of non-fiction subjects. I love to watch movies, not particularly sci-fi, but I liked Innerspace, Flubber, Robocop, Terminator, Avatar, Ex Machina, and Chappie.

When I think of Artificial Intelligence, I see it from a lay perspective. I do not have an IT background. I am a researcher and a communicator, and I consider myself a happy person who loves to learn and solve problems through simple and creative ideas. My thoughts on AI may sound different, but I'm happy to discuss them.

Humans are the most advanced form of AI that we may know to exist. My understanding is that the only thing that differentiates humans and Artificial Intelligence is the capability to reproduce. While humans have this ability to multiply through male and female union and transfer their abilities through tiny cells, machines lack that function. Transfer of cells to a newborn is no different from the transfer of data to a machine. It's breathtaking how a tiny cell in a human body has all the necessary information about not only that particular individual but also their ancestry.

Allow me to give an introduction to the recorded history of AI. Before that, I would like to take a moment to share with you a recent achievement that I feel proud to have accomplished. I finished a course in AI from Algebra University in Croatia in July. I was able to attend this course through a generous initiative and bursary from Humber College (Toronto). Such initiatives help intellectually curious minds like me to learn. I would also like to note that the views expressed here are my own understanding and judgment.

What is AI?

AI is a branch of computer science that is based on computer programming, like several other coding programs. What differentiates Artificial Intelligence, however, is its aim, which is to mimic human behavior. And this is where things become fascinating, as we develop artificial beings.

Origins

I have divided the origins of AI into three phases so that I can explain them better and you don't miss the sequence of events that led to the step-by-step development of AI.

Phase 1

AI is not a recent concept. Scientists were already brainstorming about it and discussing the thinking capabilities of machines even before the term Artificial Intelligence was coined.

I would like to start from 1950 with Alan Turing, a British intellectual who helped bring WWII to an end by decoding German messages. Turing released a paper in October 1950, "Computing Machinery and Intelligence," that can be considered among the first hints of thinking machines. Turing starts the paper thus: "I propose to consider the question, 'Can machines think?'" Turing's work was also the beginning of Natural Language Processing (NLP). Twenty-first-century mortals can relate it to the invention of Apple's Siri. The A.M. Turing Award is considered the Nobel of computing. The life and death of Turing were unusual in their own way. I will leave it at that, but if you are interested in delving deeper, here is one article by The New York Times.

Five years later, in 1955, John McCarthy, an Assistant Professor of Mathematics at Dartmouth College, and his team proposed a research project in which they used the term "Artificial Intelligence" for the first time.

McCarthy explained the proposal, saying, "The study is to proceed on the basis of the conjecture that every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it." He continued, "An attempt will be made to find how to make machines use language, form abstractions and concepts, solve kinds of problems now reserved for humans, and improve themselves."

It started with a few simple logical thoughts that germinated into a whole new branch of computer science over the coming decades. AI can also be related to the concept of Associationism, which is traced back to Aristotle around 300 BC. But discussing that in detail would be outside the scope of this article.

It was in 1958 that we saw the first model replicating the brain's neuron system. This was the year when psychologist Frank Rosenblatt developed a program called the Perceptron. Rosenblatt wrote in his article, "Stories about the creation of machines having human qualities have long been a fascinating province in the realm of science fiction. Yet we are now about to witness the birth of such a machine: a machine capable of perceiving, recognizing, and identifying its surroundings without any human training or control."

A New York Times article published in 1958 introduced the invention to the general public, saying, "The Navy revealed the embryo of an electronic computer today that it expects will be able to walk, talk, see, write, reproduce itself and be conscious of its existence."

My investigation into one of Rosenblatt's papers hints that even in the 1940s scientists talked about artificial neurons. Notice the reference section of Rosenblatt's paper published in 1958: it lists Warren S. McCulloch and Walter H. Pitts's paper of 1943. If you are interested in more details, I would suggest an article published in Medium.
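
For readers who want to see how small Rosenblatt's core idea really is, here is a minimal Python sketch of a single-layer perceptron: a weighted sum passed through a hard threshold, with the weights nudged whenever the output is wrong. The OR-gate training data, learning rate, and epoch count are my illustrative choices, not taken from Rosenblatt's paper.

```python
# A minimal sketch of a Rosenblatt-style perceptron.

def step(x):
    return 1 if x >= 0 else 0

def train_perceptron(samples, epochs=10, lr=0.1):
    w = [0.0, 0.0]   # one weight per input
    b = 0.0          # bias (threshold)
    for _ in range(epochs):
        for (x1, x2), target in samples:
            y = step(w[0] * x1 + w[1] * x2 + b)
            error = target - y
            # Error-correction rule: adjust weights toward the right answer.
            w[0] += lr * error * x1
            w[1] += lr * error * x2
            b += lr * error
    return w, b

# The logical OR function is linearly separable, so a single layer suffices.
or_data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]
w, b = train_perceptron(or_data)
print([step(w[0] * x1 + w[1] * x2 + b) for (x1, x2), _ in or_data])  # [0, 1, 1, 1]
```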

The first AI conference took place in 1959. However, by this time, the leading researchers in Artificial Intelligence had already exhausted the computing capabilities of the era. It is, therefore, no surprise that not much could be achieved in AI in the next decade.

Thankfully, the IT industry was catching up quickly and preparing the ground for stronger computers. Gordon Moore, the co-founder of Intel, made a few predictions in a 1965 article. Moore predicted huge growth in integrated circuits, more components per chip, and reduced costs. "Integrated circuits will lead to such wonders as home computers, or at least terminals connected to a central computer, automatic controls for automobiles, and personal portable communications equipment," Moore predicted. Although scientists had been toiling hard to launch the Internet, it was not until the late 1960s that the invention started showing some promise. "On October 29, 1969, ARPAnet delivered its first message: a node-to-node communication from one computer to another," notes History.com.

With the Internet in the public domain, computer companies had a reason to accelerate their own developments. In 1971, Intel introduced its first microprocessor. It was a huge breakthrough. Intel impressively compared the size and computing abilities of the new hardware, saying, "This revolutionary microprocessor, the size of a little fingernail, delivered the same computing power as the first electronic computer built in 1946, which filled an entire room."

Around the 1970s, more popular versions of languages came into use, for instance, C and SQL. I mention these two because I remember that when I did my Diploma in Network-Centered Computing in 2002, the advanced versions of these languages were still alive and kicking. Britannica has a list of computer programming languages if you care to read more on when the different languages came into being.

These advancements created a perfect amalgamation of resources to trigger the next phase in AI.

Phase 2

In the late 1970s, we see another AI enthusiast coming on the scene with several research papers on AI. Geoffrey Hinton, a Canadian researcher, had confidence in Rosenblatt's work on the Perceptron. He resolved an inherent problem with Rosenblatt's model, which was made up of a single-layer perceptron. "To be fair to Rosenblatt, he was well aware of the limitations of this approach; he just didn't know how to learn multiple layers of features efficiently," Hinton noted in a 2006 paper.

This multi-layer approach can be referred to as a Deep Neural Network.

Another scientist, Yann LeCun, who studied under Hinton and worked with him, was making strides in AI, especially Deep Learning (DL, explained later in the article) and Backpropagation Learning (BL). BL can be referred to as machines learning from their mistakes or learning from trial and error.
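
A compact sketch of the idea Hinton and LeCun pushed forward: stack more than one layer and use backpropagation so the network can learn from its mistakes. It is trained here on XOR, the classic function a single-layer perceptron cannot represent. The layer sizes, learning rate, and iteration count are arbitrary choices of mine, and NumPy is assumed to be available.

```python
# A two-layer network trained with backpropagation on XOR.
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)  # XOR targets

W1, b1 = rng.normal(size=(2, 4)), np.zeros(4)   # input -> hidden
W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)   # hidden -> output

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for _ in range(10000):
    # Forward pass: matrix multiplications plus a squashing nonlinearity.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)

    # Backward pass (backpropagation): send the output error back through
    # both layers to get a gradient for every weight.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)

    # "Learning from mistakes": nudge each weight against its gradient.
    W2 -= 0.5 * h.T @ d_out
    b2 -= 0.5 * d_out.sum(axis=0)
    W1 -= 0.5 * X.T @ d_h
    b1 -= 0.5 * d_h.sum(axis=0)

print(out.round(2).ravel())  # typically converges toward [0, 1, 1, 0]
```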

As in Phase 1, the developments of Phase 2 stalled here due to very limited computing power and insufficient data. This was around the late 1990s. As the Internet was fairly recent, there was not much data available to feed the machines.

Phase 3

In the early 21st century, computer processing speed entered a new level. In 2011, IBM's Watson defeated its human competitors in the game of Jeopardy. Watson was quite impressive in its performance. On September 30, 2012, Hinton and his team released the object recognition program called AlexNet and tested it on ImageNet. The success rate was above 75 percent, which no such machine had achieved before. This object recognition sent ripples across the industry. By 2018, image recognition programming had become 97% accurate! In other words, computers were recognizing objects more accurately than humans.

In 2015, Tesla introduced its self-driving AI car. The company boasts about its Autopilot technology on its website, saying, "All new Tesla cars come standard with advanced hardware capable of providing Autopilot features today, and full self-driving capabilities in the future, through software updates designed to improve functionality over time."

Go enthusiasts will also remember the 2016 incident when Google-owned DeepMind's AlphaGo defeated the human Go world champion Lee Se-dol. This incident came at least a decade sooner than expected. We know that Go is considered one of the most complex games in human history. And AI could learn it in just three days, to a level that let it beat a world champion who, I would assume, must have spent decades to achieve that proficiency!

The next phase shall be to work on the Singularity. The Singularity can be understood as machines building better machines, all by themselves. In 1993, the scientist Vernor Vinge published an essay in which he wrote, "Within thirty years, we will have the technological means to create superhuman intelligence. Shortly after, the human era will be ended." Scientists are already working on the concept of technological singularity. If these achievements can be used in a controlled way, they can help several industries, for instance, healthcare, automobiles, and oil exploration.

I would also like to add here that Canadian universities are contributing significantly to developments in Artificial Intelligence. Along with Hinton and LeCun, I would like to mention Richard Sutton. Sutton, a Professor at the University of Alberta, is of the view that advancements toward the singularity can be expected around 2040. This makes me feel that when AI no longer needs human help, it will be a kind of species in and of itself.

To get to the next phase, however, we would need more computer power to achieve the goals of tomorrow.

Now that we have some background on the genesis of AI and some information on the experts who nourished this advancement all these years, it is time to understand a few key terms of AI. By the way, if you ask me, every scientist behind these developments is a whole topic in themselves. I have tried to put a good number of researched sources in the article to generate your interest and support your knowledge of AI.

Big Data

With the Internet of Things (IoT), we are saving tons of data every second from every corner of the world. Consider, for instance, Google. It seems that it starts tracking our intentions as soon as we type the first letter on our keyboard. Now think for a second how much data is generated by all the internet users all over the world. It's already making predictions of our likes, dislikes, actions, everything.

The concept of big data is important because it forms the memory of Artificial Intelligence. It's like a parent sharing their experience with their child. If the child can learn from that experience, they develop cognitive abilities and venture into making their own judgments and decisions. Similarly, big data is the human experience that is shared with machines, and they develop on that experience. This can be supervised as well as unsupervised learning.

Symbolic Reasoning and Machine Learning

The basis of all these processes is mathematical patterns. I think that this is because math is something that is certain and easy to understand for all humans. 2 + 2 will always be 4 unless there is something we haven't figured out in the equation.

Symbolic reasoning is the traditional method of getting work done through machines. According to Pathmind, to build a symbolic reasoning system, "first humans must learn the rules by which two phenomena relate, and then hard-code those relationships into a static program." Symbolic reasoning in AI is also known as Good Old-Fashioned AI (GOFAI).

Machine Learning (ML) refers to the activity where we feed big data to machines and they identify patterns and understand the data by themselves. The outcomes are not predetermined, because here machines are not programmed toward specific outcomes. It's like a human brain, where we are free to develop our own thoughts. A video by ColdFusion explains ML thus: "ML systems analyze vast amounts of data and learn from their past mistakes. The result is an algorithm that completes its task effectively." ML works well with supervised learning.
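
A toy contrast between the two approaches, with an invented fruit-ripeness example: in the symbolic version a human hard-codes the rule, while in the machine-learning version the rule is derived from labeled examples. The data and the 0.7 cut-off are made up purely for illustration.

```python
# Symbolic reasoning (GOFAI): a human writes the rule and hard-codes it.
def is_ripe_symbolic(redness):
    return redness > 0.7  # rule chosen by a human expert

# Machine learning: the rule is derived from labeled examples instead.
labeled = [(0.2, False), (0.4, False), (0.6, False), (0.75, True), (0.9, True)]

def learn_threshold(examples):
    """Pick the cut-off that misclassifies the fewest training examples."""
    candidates = [r for r, _ in examples]
    return min(candidates,
               key=lambda t: sum((r > t) != ripe for r, ripe in examples))

threshold = learn_threshold(labeled)

def is_ripe_learned(redness):
    return redness > threshold

print(is_ripe_symbolic(0.8), is_ripe_learned(0.8))  # True True
```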

Here I would like to take a quick tangent for all those creative individuals who need some motivation. I feel that all inventions were born out of creativity. Of course, creativity comes with some basic understanding and knowledge. Out of more than 7 billion brains, somewhere someone is thinking out of the box, verifying their thoughts, and trying to communicate their ideas. Creativity is vital for success. This may also explain why some of the most important inventions took place in a garage (Google and Microsoft). Take, for instance, a small creative tool like a pizza cutter. Someone must have thought about it. Every time I use it, I marvel at how convenient and efficient it is to slice a pizza without disturbing the toppings with that running cutter. Always stay creative and avoid preconceived ideas and stereotypes.

Alright, back to the topic!

Deep Learning

Deep Learning (DL) is a subset of ML. This technology "attempts to mimic the activity of neurons in our brain using matrix mathematics," explains ColdFusion. I found this article that describes DL well. With better computers and big data, it is now possible to venture into DL. Better computers provide the muscle, and the big data provides the experience, to a neural network. Together, they help a machine think and execute tasks just like a human would do. I would suggest reading the paper titled "Deep Learning" by LeCun, Bengio, and Hinton (2015) for a deeper perspective on DL.

The ability of DL makes it a perfect companion for unsupervised learning. As big data is mostly unlabelled, DL processes it to identify patterns and make predictions. This not only saves a lot of time but also generates results that are completely new to a human brain. DL offers another benefit: it can work offline, which means, for instance, that a self-driving car can make instantaneous decisions while on the road.
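
To make the unsupervised idea concrete, here is a tiny sketch that finds structure in data with no labels at all. It uses simple k-means-style clustering rather than a deep network, purely to show pattern discovery without supervision; the sensor readings are invented.

```python
# Unsupervised learning in miniature: no labels are given, yet the algorithm
# still discovers two distinct groups in the readings.
import numpy as np

readings = np.array([1.1, 0.9, 1.3, 1.0, 8.2, 7.9, 8.5, 8.0])  # unlabeled data

centers = np.array([readings.min(), readings.max()], dtype=float)
for _ in range(10):
    # Assign each reading to its nearest center, then move the centers.
    assign = np.abs(readings[:, None] - centers[None, :]).argmin(axis=1)
    centers = np.array([readings[assign == k].mean() for k in (0, 1)])

print(centers)  # two discovered groups, roughly [1.1, 8.2]
```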

What next?

I think that the most important future development will be AI coding AI to perfection, all by itself.

Neural nets designing neural nets has already started. Early signs of self-production are in vision. Google has already created programs that can produce their own code. This is called Automated Machine Learning, or AutoML. Sundar Pichai, CEO of Google and Alphabet, shared the experiment in his blog. "Today, designing neural nets is extremely time intensive, and requires an expertise that limits its use to a smaller community of scientists and engineers. That's why we've created an approach called AutoML, showing that it's possible for neural nets to design neural nets," said Pichai (2017).
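
The flavor of that outer loop can be sketched in a few lines: propose a model configuration, train it, evaluate it, keep the best. In this toy version a polynomial fit stands in for a neural network and the search is random; none of this reflects Google's actual AutoML system, and the data are synthetic.

```python
# The controller loop of AutoML-style search in miniature.
import numpy as np

rng = np.random.default_rng(1)
x = np.linspace(-1, 1, 40)
y = np.sin(3 * x) + rng.normal(scale=0.1, size=x.shape)   # noisy target
x_train, y_train, x_val, y_val = x[::2], y[::2], x[1::2], y[1::2]

best_degree, best_error = None, float("inf")
for _ in range(20):
    degree = int(rng.integers(1, 10))               # "design" sampled at random
    coeffs = np.polyfit(x_train, y_train, degree)   # "train" the candidate
    error = np.mean((np.polyval(coeffs, x_val) - y_val) ** 2)  # validate
    if error < best_error:
        best_degree, best_error = degree, error

print(f"search chose degree {best_degree} (validation MSE {best_error:.4f})")
```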

Full AI capabilities will also trigger several other programs, like fully automated self-driving cars and full-service assistance in sectors like health care and hospitality.

Among the several useful programs of AI, ColdFusion has identified the five most impressive ones in terms of image outputs. These are AI generating an image from text (Plug and Play Generative Networks: Conditional Iterative Generation of Images in Latent Space), AI reading lip movements from a video with 95% accuracy (LipNet), Artificial Intelligence creating new images from just a few inputs (Pix2Pix), AI improving the pixels of an image (Google Brain's Pixel Recursive Super Resolution), and AI adding color to black-and-white photos and videos (Let There Be Color). In the future, these technologies could be used for more advanced functions, in areas like law enforcement.

AI can already generate images of non-existing humans and add sound and body movements to the videos of individuals! In the coming years, these tools can be used for gaming purposes, or maybe fully capable multi-dimensional assistance like the one we see in the movie Iron Man. Of course, all these developments would require new AI laws to avoid misuse; however, that is a topic for another discussion.

Humans are advanced AI

Artificial Intelligence is getting so good at mimicking humans that it seems that humans themselves are some sort of AI. The way Artificial Intelligence learns from data, retains information, and then develops analytical, problem-solving, and judgment capabilities is no different from a parent nurturing their child with their experience (data) and then the child remembering the knowledge and using their own judgment to make decisions.

We may want to remember here that there are a lot of things that even humans have not figured out with all their technology. A lot of things are still hidden from us in plain sight. For instance, we still don't know about all the living species in the Amazon rain forest. Astrology and astronomy are two other fields where, I think, very little is known. Air, water, land, and celestial bodies control human behavior, and science has evidence for this. All this hints that we as humans are not in total control of ourselves. This feels similar to AI, which so far requires external intervention, like from humans, to develop it.

I think that our past has answers to a lot of questions that may unravel our future. Take, for example, the Great Pyramid at Giza, Egypt, which we still marvel at for its mathematical accuracy and its alignment with the earth's equator as well as the movements of celestial bodies. By the way, we could compare the measurements only because we have already reached a level where we know the numbers relating to the equator.

Also, think of India's knowledge of astrology. It has so many diagrams of planetary movements that are believed to impact human behavior. These sketches have survived several thousand years. One of India's languages, Vedic, is considered more than 4,000 years old, perhaps one of the oldest in human history. This was actually a question asked of IBM Watson during the 2011 Jeopardy competition. Understanding the literature in this language might unlock a wealth of information.

I feel that with the kind of technology we have in AI, we should put some of it at work to unearth our wisdom from the past. It is a possibility that if we overlook it, we may waste resources by reinventing the wheel.

More here:
The world of Artificial... - The American Bazaar

Life Lived On Screen: Philosophical, Poetic, and Political Observations – lareviewofbooks

AUGUST 2, 2020

I.

THERE IS THE idea of a physical human to human encounter.

As a being together.

The image is one of a shared experience of time, a time constituted by the act of committing to one another, to an encounter.

Of inhabiting, together, a space where bodies meet, where talking and laughing and crying is a haptic experience. Where one breathes the same air, smells the same smells.

An experience the body can remember sensorially long after.

Reaching out and touching. Shared surfaces. Breathing, talking, anything really.[1]

Can this kind of encounter happen through machines or machine interfaces? Zoom, Facebook, Google, Twitter, LinkedIn, Skype, Microsoft Teams, and many more.

Can it happen with a machine?

Traditionally, the answer to both questions is no.

No, it cannot really happen by way of a machine interface because too much is lost.

And no, one cannot have a true encounter with machines.

II.

In times of COVID-19, we spend more of our life online in networks than ever before.

What is the effect of this life lived on screen on what it is to be human?

We have more Zoom meetings, surf longer on Instagram, spend more time on Facebook and Twitter than ever before.

What is the transformation of the human brought about by life lived on screens and how to bring this transformation into focus?

As a site of philosophical change and as an opportunity for philosophers, artists, and technologists to come together and give shape?

What are the philosophical and poetic and political stakes and opportunities of this, of our moment in time?

III.

The migration of human activity to technological platforms began long before COVID-19.

The reference, here, is particularly to the emergence in the early 2000s of interactive, often user-generated content, and the emergence of network companies.

The classic examples here are companies like Google (which mastered microtargeting), Facebook, Twitter, Amazon, Microsoft (Skype and Teams), and now also Zoom.

This matters for two reasons.

The first is that the material infrastructural conditions of possibility for how we now spend much of our time has been laid long before the present: satellites, high-speed fiber-optic cables between cities and underneath the ocean, file sharing systems in massive computer farms that host servers, AI algorithms that work through enormous amounts of data quickly to find patterns and calculate preferences, etc.

The second reason is that the material infrastructure that makes life lived on screen possible is inseparably related to platform capitalism. Platform capitalism consists (mostly but not exclusively) of companies that make money by offering free services such as search or posting images or messaging but that collect and harvest user data in order to either sell it to other platform companies or, more often, to sell it to advertising companies (who then devise microtargeting strategies, that is, they deliver ads to specific audiences).

In order to generate data, these companies have been busy finding ways long before COVID-19 to migrate human activity online.

Or, perhaps more accurately, they have been busy creating new forms of human activity suited to life online: surfing, search, texting, sexting, browsing, FaceTiming, YouTubing, binge watching, etc.

AR and VR, especially via Facebook and Oculus, may soon be an additional element of life on screen.

And COVID-19?

Well, for most platform companies, the spread of SARS-CoV-2 and the shelter-at-home orders have been a massive boost: screen time has increased dramatically and so has their capacity to generate and mine data.

That is, COVID-19 has been a consolidation and even an expansion event for platform capitalism.

The contrast to older forms of capitalism, especially to industrial manufacturing, couldn't be sharper.

The question thus emerges whether or not we are currently seeing a powerful acceleration of a shift from earlier forms of capitalism toward a new, still-nascent form called platform capitalism.

A shift from a mode of production focused on the industrial production of goods by labor to another one that is about users, data, and AI?

What are the philosophical, poetic, and political dimensions of this shift?

IV.

In my observation, platform companies have made dominant a form of relationality, networks, that runs diagonal to the usual, place-based socialities of the nation (usually framed in terms of belonging and non-belonging, inclusion and exclusion of a people imagined in territorial and ethnic or racial terms).

In fact, I think it is no exaggeration to argue that networks have given rise to a new structure and experience of reality that is radically different from and even incommensurable with the structure and experience of reality that defined societies.

I offer a simple juxtaposition to illustrate my point.

Societies, usually, have three main features.

First, they are organized hierarchically. That is, they typically have a few powerful individuals at the top, while the vast majority of individuals assemble at the bottom.

Second, they are organized vertically, by which I mean that they accommodate an often vast diversity of opinions and points of view.

Third, societies are usually held together by a national sentiment and, most importantly, by a national communication or media system. The form this media system almost always takes is mass communication, where the few communicate to the many. What they communicate is information, information people may vehemently disagree about, but the baseline of this disagreement is that people agree about the things that they disagree about. Mass communication assures that people have a shared sense of reality.

Networks defy all three of those features.

First, if societies are hierarchical and vertical, then networks are flat and horizontal: networks tend to be self-assemblies of people with similar views and inclinations.

Second, while societies are contained by national territories, networks tend to be global and cut across national boundaries: another way of saying this is that while societies are place-specific units, networks are non-place-specific units.

And third, if in society the few communicate with the many and what they communicate is information, then in networks the many communicate directly, unfiltered, with the many, and what they communicate is not information but affective (emotional) intensity.

It strikes me as uncontroversial that today more and more humans live in networks and that networks, ultimately, defy the logic of society.

Indeed, the rise of networks has created a situation in which, counter to what the moderns thought, society and the social are not timeless ontological categories that define the human.

On the contrary, they are recent and transitory concepts that have no universal validity for all of humanity or all of human history.

Of course, societas is an ancient concept. However, up until the late 18th century, a societas was a legal and not a national or territorial concept; it referred to those who held legal rights vis-à-vis the monarch.

Things only changed in the years predating the French Revolution, when the argument emerged that the people, and not the aristocrats and the grand bourgeoisie who held legal rights vis-à-vis the king, should be the society constitutive of the political entity called France.

The early nation-states, which emerged in the context of the first Industrial Revolution and at a time when several cholera epidemics ravaged Europe, found themselves confronted with the need to know their societies, to know how many people lived on their territory, how many were born, how many died, how many got sick and of what; they had to know how many married and how many divorced.

As political existence and the biological vitality of the national society were understood to be connected, states began to conduct massive surveys to understand how they could reform and advance their societies.

Over time, between the 1830s and the 1890s, this gave rise to what one could call the logic of the social: the idea that the truth about humans is that they are born in societies and that society will shape them and even determine them. The truth about humans is that they are social, in the sense of societal being: tell me in which segment you were born, and I will tell you whom you are likely to marry, how many kids you will have, what your job will be, what you are likely to die of.

The social was discovered as the true ontological ground of the human.

To this day, most normative theories of the human (call them anthropology), from Marx via the Frankfurt School to Pierre Bourdieu, are based on the idea that society is the true ontological ground of the human.

All our modern political institutions are based on society.

If it is true that networks defy the logic of society, then the social sciences, simply because they take the social for granted as the true logic of the human, will fail to bring the human into view.

What we need, then, is a shift from social anthropology (an anthropology that grounds in the concept of the social) to a network anthropology: a multifaceted study of how networks give rise to humans.

V.

The difference between networks and societies, which appears to map onto the difference between platform and industrial capitalism, is related to the changing relation between humans and machines brought about by recent advances in AI, specifically in machine learning.

One can say that machine learning technologies are beginning to liberate machines from the narrow industrial concept of what a machine is and that this liberation may have far-reaching consequences for what it means to have an encounter.

Traditionally, there were unbridgeable differences between humans and machines.

Partly because humans have intelligence, reason, while machines are reducible to mechanism.

Partly because machines have no life, no quality of their own. They are reducible to the engineers who invented them and hence mere tools.

The implication, often, is that there is no will, no interference, no freedom, no opening.

But machine learning and neurotechnology make us reconsider these boundaries between organisms and machines, between humans and mechanisms.

First, the success of artificial neural nets, or the basic continuity between neural and mechanical processes, suggests that the distinction between the natural and the artificial may perhaps matter much less than we thought.

Second, the emergence of deep learning architectures has led to machines with a mind of their own: they have an agency that is not reducible to the intent of or the program written by the engineer.

The exemplary reference here is a 2016 game of Go, played by a deep learning system named AlphaGo (built by DeepMind, a London-based, Google-owned AI company) against Lee Sedol, an 18-time world champion. Toward the end of Game Two in a best-of-five series, AlphaGo opted for a move, Move 37, that was highly unusual.

DeepMind later announced that AlphaGo had calculated the odds that an expert human player would have made the same move at 1 in 10,000.

It played the move anyway: as if it judged that a nonhuman move would be better in this case.

Fan Hui, the three-time European Go champion, remarked: "It's not a human move. So beautiful. So beautiful."

Wired wrote shortly after the game was over: "Move 37 showed that AlphaGo wasn't just regurgitating years of programming or cranking through a brute-force predictive algorithm. It was the moment AlphaGo proved it[s] [...] ability to play a beautiful game not just like a person but in a way no person could."[2]
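
A simplified, invented illustration (not AlphaGo's actual architecture) of the two quantities at play in Move 37: a policy estimate of how likely a human expert is to choose each move, and a separate value estimate of how promising the move looks, with the agent free to pick a move the policy considers extremely unlikely. All numbers below are made up.

```python
import numpy as np

moves = ["A", "B", "C", "move_37"]
policy_logits = np.array([4.0, 3.1, 2.5, -4.7])               # human-likeness scores
human_prob = np.exp(policy_logits) / np.exp(policy_logits).sum()  # softmax

search_value = np.array([0.48, 0.47, 0.46, 0.55])  # estimated win probability per move

for m, p, v in zip(moves, human_prob, search_value):
    print(f"{m:>8}: human prob {p:.4f}, estimated value {v:.2f}")

# The policy rates move_37 as extremely unlikely for a human (about 1 in
# 10,000 in this toy example), yet the agent still selects it because its
# value estimate is the highest.
print("chosen:", moves[int(search_value.argmax())])
```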

Traditionally, a program that doesn't conform to the intentionality of the engineer was considered faulty. However, contemporary machine learning systems are built to defy, to exceed, the mind of the engineer: it is expected that the machine brings something to a game, a conversation, a question that the engineers did not and could not possibly provide it with (something nonhuman).

These developments, one could call them the liberation of machines from the human, or at least from the concept of the machine that up until recently defined the human imagination of what a machine could be, are related to the rise of networks.

They are related insofar as in networks, relationality, once a human-to-human prerogative, may no longer be limited to human-to-human encounters.

What effects will the liberation of machines, which is constitutive of networks as much as of machines, have on what it is to be human?

Or on what it is to be in relation?

VI.

As I see it, what is needed now are philosophical investigations of the new technology that is being built.

Not studies in terms of society, as this would ultimately imply holding on to the old concept of the human as social being.

Nor studies in terms of the human, if that means the defense of the human against the machine.

But rather, collaborative studies, conducted jointly by philosophers and artists in collaboration with technologists, of how networks and machine learning are challenging old, and enabling new, yet-to-be-explored concepts of living together.

All by itself, COVID-19 has little to do with these most far-reaching philosophical transformations brought about by networks and by machine learning.

And yet, COVID-19 brings this transformation into view with sharper clarity than ever before and has led to circumstances due to which this new and different world might come faster than we anticipated.

What will it mean to be together with a machine?

To address this question, we may need a whole new vocabulary of encounters and relations.

[1] From Lauren Lee McCarthy, Later Date, 2020, https://vimeo.com/416588466/bb8762077d.

[2] Cade Metz, What the AI Behind AlphaGo Can Teach Us About Being Human, Wired, May 19, 2016, https://www.wired.com/2016/05/google-alpha-go-ai.

Image Credit: Stills from Lauren Lee McCarthy, Later Date, 2020

Tobias Rees is the founding Director of the Berggruen Institute's Transformations of the Human Program. He also serves as Reid Hoffman Professor of Humanities at the New School for Social Research and is a Fellow of the Canadian Institute for Advanced Research.

More here:
Life Lived On Screen: Philosophical, Poetic, and Political Observations - lareviewofbooks

The US, China and the AI arms race: Cutting through the hype – CNET

Prasit photo/Getty Images

Artificial intelligence -- which encompasses everything from service robots to medical diagnostic tools to your Alexa speaker -- is a fast-growing field that is increasingly playing a more critical role in many aspects of our lives. A country's AI prowess has major implications for how its citizens live and work -- and its economic and military strength moving into the future.

With so much at stake, the narrative of an AI "arms race" between the US and China has been brewing for years. Dramatic headlines suggest that China is poised to take the lead in AI research and use, due to its national plan for AI domination and the billions of dollars the government has invested in the field, compared with the US' focus on private-sector development.


But the reality is that at least until the past year or so, the two nations have been largely interdependent when it comes to this technology. It's an area that has drawn attention and investment from major tech heavy hitters on both sides of the Pacific, including Apple, Google and Facebook in the US and SenseTime, Megvii and YITU Technology in China.


"Narratives of an 'arms race' are overblown and poor analogies for what is actually going on in the AI space," said Jeffrey Ding, the China lead for the Center for the Governance of AI at the University of Oxford's Future of Humanity Institute. When you look at factors like research, talent and company alliances, you'll find that the US and Chinese AI ecosystems are still very entwined, Ding added.

But the combination of political tensions and the rapid spread of COVID-19 throughout both nations is fueling more of a separation, which will have implications for both advances in the technology and the world's power dynamics for years to come.

"These new technologies will be game-changers in the next three to five years," said Georg Stieler, managing director of Stieler Enterprise Management Consulting China. "The people who built them and control them will also control parts of the world. You cannot ignore it."

You can trace China's ramp up in AI interest back to a few key moments starting four years ago.

The first was in March 2016, when AlphaGo -- a machine-learning system built by Google's DeepMind that uses algorithms and reinforcement learning to train on massive datasets and predict outcomes -- beat the human Go world champion Lee Sedol. This was broadcast throughout China and sparked a lot of interest -- both highlighting how quickly the technology was advancing, and suggesting that because Go involves war-like strategies and tactics, AI could potentially be useful for decision-making around warfare.

The second moment came seven months later, when President Barack Obama's administration released three reports on preparing for a future with AI, laying out a national strategic plan and describing the potential economic impacts (all PDFs). Some Chinese policymakers took those reports as a sign that the US was further ahead in its AI strategy than expected.

This culminated in July 2017, when the Chinese government under President Xi Jinping released a development plan for the nation to become the world leader in AI by 2030, including investing billions of dollars in AI startups and research parks.

In 2016, professional Go player Lee Sedol lost a five-game match against Google's AI program AlphaGo.

"China has observed how the IT industry originates from the US and exerts soft influence across the world through various Silicon Valley innovations," said Lian Jye Su, principal analyst at global tech market advisory firm ABI Research. "As an economy built solely on its manufacturing capabilities, China is eager to find a way to diversify its economy and provide more innovative ways to showcase its strengths to the world. AI is a good way to do it."

Despite the competition, the two nations have long worked together. China has masses of data and far more lax regulations around using it, so it can often implement AI trials faster -- but the nation still largely relies on US semiconductors and open source software to power AI and machine learning algorithms.

And while the US has the edge when it comes to quality research, universities and engineering talent, top AI programs at schools like Stanford and MIT attract many Chinese students, who then often go on to work for Google, Microsoft, Apple and Facebook -- all of which have spent the last few years acquiring startups to bolster their AI work.

China's fears about a grand US AI plan didn't really come to fruition. In February 2019, US President Donald Trump released an American AI Initiative executive order, calling for heads of federal agencies to prioritize AI research and development in 2020 budgets. It didn't provide any new funding to support those measures, however, or many details on how to implement those plans. And not much else has happened at the federal level since then.

Meanwhile, China plowed on, with AI companies like SenseTime, Megvii and YITU Technology raising billions. But investments in AI in China dropped in 2019, as the US-China trade war escalated and hurt investor confidence in China, Su said. Then, in January, the Trump administration made it harder for US companies to export certain types of AI software in an effort to limit Chinese access to American technology.

Just a couple weeks later, Chinese state media reported the first known death from an illness that would become known as COVID-19.

In the midst of the coronavirus pandemic, China has turned to some of its AI and big data tools in attempts to ward off the virus, including contact tracing, diagnostic tools and drones to enforce social distancing. Not all of it, however, is as it seems.

"There was a lot of propaganda -- in February, I saw people sharing on Twitter and LinkedIn stories about drones flying along high rises, and measuring the temperature of people standing at the window, which was complete bollocks," Stieler said. "The reality is more like when you want to enter an office building in Shanghai, your temperature is taken."

A staff member introduces an AI digital infrared thermometer at a building in Beijing in March.

The US and other nations are grappling with the same technologies -- and the privacy, security and surveillance concerns that come along with them -- as they look to contain the global pandemic, said Elsa B. Kania, adjunct fellow with the Center for a New American Security's Technology and National Security Program, focused on Chinese defense innovation and emerging technologies.

"The ways in which China has been leveraging AI to fight the coronavirus are in various respects inspiring and alarming," Kania said. "It'll be important in the United States as we struggle with these challenges ourselves to look to and learn from that model, both in terms of what we want to emulate and what we want to avoid."

The pandemic may be a turning point in terms of the US recognizing the risks of interdependence with China, Kania said. The immediate impact may be in sectors like pharmaceuticals and medical equipment manufacturing. But it will eventually influence AI, as a technology that cuts across so many sectors and applications.

Despite the economic impacts of the virus, global AI investments are forecast to grow from $22.6 billion in 2019 to $25 billion in 2020, Su said. The bigger consequence may be on speeding the process of decoupling between the US and China, in terms of AI and everything else.

The US still has advantages in areas like semiconductors and AI chips. But in the midst of the trade war, the Chinese government is reducing its reliance on foreign technologies, developing domestic startups and adopting more open-source solutions, Su said. Cloud AI giants like Alibaba, for example, are using open-source computing models to develop their own data center chips. Chinese chipset startups like Cambricon Technologies, Horizon Robotics and Suiyuan Technology have also entered the market in recent years and garnered lots of funding.

But full separation isn't on the horizon anytime soon. One of the problems with referring to all of this as an AI arms race is that so many of the basic platforms, algorithms and even data sources are open-source, Kania said. The vast majority of the AI developers in China use Google TensorFlow or Facebook PyTorch, Stieler added -- and there's little incentive to join domestic options that lack the same networks.

The US remains the world's AI superpower for now, Su and Ding said. But ultimately, the trade war may do more harm to American AI-related companies than expected, Kania said.


"My main concern about some of these policy measures and restrictions has been that they don't necessarily consider the second-order effects, including the collateral damage to American companies, as well as the ways in which this may lessen US leverage or create much more separate or fragmented ecosystems," Kania said. "Imposing pain on Chinese companies can be disruptive, but in ways that can in the long term perhaps accelerate these investments and developments within China."

Still, "'arms race' is not the best metaphor," Kania added. "It's clear that there is geopolitical competition between the US and China, and our competition extends to these emerging technologies including artificial intelligence that are seen as highly consequential to the futures of our societies' economies and militaries."

Excerpt from:
The US, China and the AI arms race: Cutting through the hype - CNET

DeepMind sets AI loose on Diplomacy board game, and collaboration is key – TechRepublic

Artificial intelligence systems have become increasingly well-adapted to a host of basic board games. Now, DeepMind is hoping to teach agents the art of collaboration using Diplomacy.

IMAGE: iStock/MaksimTkachenko

From Turochamp to Deep Blue, human-vs.-computer competition has captivated audiences for decades, fueling plenty of hyperbole along the way. In recent years, artificial intelligence (AI) systems have claimed supremacy across a variety of classic games. The AI research and development company DeepMind has been behind many of these systems at the bleeding edge of innovation.

In March 2016, one such bout of bytes vs. brains pitted DeepMind's AI system, AlphaGo, against Go legend and 18-time world titleholder Lee Sedol. With millions tuning in around the globe, the unthinkable slowly unfolded as AlphaGo picked apart arguably the best player of the abstract strategy board game of the past decade with surgical precision. The stunning AlphaGo victory awarded the AI system a 9 dan ranking, the highest such certification.

Now the company has set its sights on training an AI agent on another of mankind's mysterious board games; this time trying its hand at Diplomacy. After all, it was only a matter of time before we taught AI the skillful art of negotiation en route to global domination.

Unlike more rudimentary games, Diplomacy involves a complex level of strategy and scheming. In a game like checkers, for example, a player has a rather limited decision about where to move an individual piece at any given time. The nuances and complexities, of course, increase with chess as a player must assign value to pieces and orchestrate a cohesive series of moves for success. In the esoteric world of board games, Diplomacy presents its own set of challenges for AI.

"Diplomacy has seven players and focuses on building alliances, negotiation, and teamwork in the face of uncertainty about other agents. As a result, agents have to constantly reason about who to cooperate with and how to coordinate actions," said Tom Eccles, a research engineer at DeepMind.


AI systems have proved to be far superior to even the best human beings at zero-sum games like chess and Go. In this type of gameplay, there can only be one winner and one loser. Dissimilarly, Diplomacy requires agents to build alliances and foster collaboration.

"On the one hand, it is difficult to make progress in the game without the support of other players, but on the other hand, only one player can eventually win. This means it is more difficult to achieve cooperation in this environment. The tension between cooperation and competition in Diplomacy makes building trustworthy agents in this game an interesting research challenge," said Tom Anthony, a research scientist at DeepMind.

The ability to expeditiously vanquish a human player in a zero-sum game is certainly impressive; however, a richer layering of skills opens up another world of AI potential. Our day-to-day lives involve an intricate patchwork of balanced synergies, our individual needs often packaged within a larger group effort. That said, this research could enhance agents' ability to collaborate with us and one another, leading to a vast spectrum of real-world applications.

"In real-life, we often work in teams and have to both compete and cooperate. From simple decisions such as scheduling a meeting or deciding where to eat out with friends, to complex decisions such as negotiating with suppliers or clients or assigning tasks in a joint project, we constantly reason about how to best work with others. It seems likely that as AI systems become more complex, we'd need to provide them with better tools for effectively cooperating with others," said Yoram Bachrach, a research scientist at DeepMind.

Organizational workflows are typically hinged on collaboration and teamwork. As digital transformation takes hold across industries, organizations are increasingly utilizing a host of autonomous systems to increase efficiency and streamline operations. Enhancing agents with artificial soft skills related to teamwork and cooperation may be key moving forward.

"Artificial Intelligence is increasingly being applied to more complex tasks. This could mean that a number of different autonomous systems must work together, or at least in the same environment, in order to solve a task. As such, understanding how autonomous systems learn, act, and adapt to each other, is a growing area of research." Eccles said.


It's important to note that this research focused on understanding the interactions in a "many-agent setting," and used a limited No-Press version of gameplay, which does not allow communication. Further research and development will allow future agents to participate in full Diplomacy gameplay, leveraging communication to build alliances and negotiate with other players.

In the full version, "communication is used to broker deals and form alliances, but also to misrepresent situations and intentions," according to the paper. Teaching an agent to utilize other players as collaborative pawns to ensure victory does bring up a series of concerns.

In one such scenario, the authors of the report explain that "agents may learn to establish trust, but might also exploit that trust to mislead their co-players and gain the upper hand." The researchers reiterate the importance of testing these agents in an isolated environment to better understand developments and pinpoint detrimental behaviors if they arise.

"We start from the premise that all AI applications should remain under meaningful human control, and be used for socially beneficial purposes. Our teams working on technical safety and ethics aim to ensure that we are constantly anticipating short- and long-term risks, exploring ways to prevent these risks from happening, and finding ways to address them if they do." Anthony said.


Read more here:
DeepMind sets AI loose on Diplomacy board game, and collaboration is key - TechRepublic