
To understand AI’s problems look at the shortcuts taken to create it – EastMojo

"A machine can only do whatever we know how to order it to perform," wrote the 19th-century computing pioneer Ada Lovelace. This reassuring statement was made in relation to Charles Babbage's description of the first mechanical computer.

Lady Lovelace could not have known that in 2016, a program called AlphaGo, designed to play and improve at the board game Go, would not only be able to defeat all of its creators, but would do it in ways that they could not explain.


In 2023, the AI chatbot ChatGPT is taking this to another level, holding conversations in multiple languages, solving riddles and even passing legal and medical exams. Our machines are now able to do things that we, their makers, do not know how to order them to do.

This has provoked both excitement and concern about the potential of this technology. Our anxiety comes from not knowing what to expect from these new machines, both in terms of their immediate behaviour and of their future evolution.

We can make some sense of them, and the risks, if we consider that all their successes, and most of their problems, come directly from the particular recipe we are following to create them.

The reason why machines are now able to do things that we, their makers, do not fully understand is because they have become capable of learning from experience. AlphaGo became so good by playing more games of Go than a human could fit into a lifetime. Likewise, no human could read as many books as ChatGPT has absorbed.

It's important to understand that machines have become intelligent without thinking in a human way. This realisation alone can greatly reduce confusion, and therefore anxiety.


Intelligence is not exclusively a human ability, as any biologist will tell you, and our specific brand of it is neither its pinnacle nor its destination. It may be difficult to accept for some, but intelligence has more to do with chickens crossing the road safely than with writing poetry.

In other words, we should not necessarily expect machine intelligence to evolve towards some form of consciousness. Intelligence is the ability to do the right thing in unfamiliar situations, and this can be found in machines, for example those that recommend a new book to a user.

If we want to understand how to handle AI, we can return to a crisis that hit the industry from the late 1980s, when many researchers were still trying to mimic what we thought humans do. For example, they were trying to understand the rules of language or human reasoning, to program them into machines.

That didn't work, so they ended up taking some shortcuts. This move might well turn out to be one of the most consequential decisions in our history.

The first shortcut was to rely on making decisions based on statistical patterns found in data. This removed the need to actually understand the complex phenomena that we wanted the machines to emulate, such as language. The auto-complete feature in your messaging app can guess the next word without understanding your goals.
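The auto-complete idea can be sketched as a toy bigram model, the simplest kind of statistical language model: it predicts the next word purely from co-occurrence counts, with no understanding of meaning or goals. This is a minimal illustration with invented function names and data, not any real app's implementation.

```python
from collections import Counter, defaultdict

def train_bigram(corpus):
    """Count, for each word, which words follow it in the corpus."""
    following = defaultdict(Counter)
    words = corpus.split()
    for prev, nxt in zip(words, words[1:]):
        following[prev][nxt] += 1
    return following

def suggest(model, word):
    """Suggest the continuation seen most often after `word` (None if unseen)."""
    if word not in model:
        return None
    return model[word].most_common(1)[0][0]

model = train_bigram("the cat sat on the mat the cat ate the fish")
print(suggest(model, "the"))  # 'cat' -- the most frequent follower of 'the'
```

The model "knows" nothing about cats or fish; it only reflects statistical patterns in the data it was trained on, which is the essence of the first shortcut.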


While others had similar ideas before, the first to make this method really work, and stick, was probably Frederick Jelinek at IBM, who invented statistical language models, the ancestors of all GPTs, while working on machine translation.

In the early 1990s, he summed up that first shortcut by quipping: "Whenever I fire a linguist, our system's performance goes up." Though the comment may have been made jokingly, it reflected a real-world shift in the focus of AI away from attempts to emulate the rules of language.

This approach rapidly spread to other domains, introducing a new problem: sourcing the data necessary to train statistical algorithms.

Creating the data specifically for training tasks would have been expensive. A second shortcut became necessary: data could be harvested from the web instead.

As for knowing the intent of users, such as in content recommendation systems, a third shortcut was found: constantly observe users' behaviour and infer from it what they might click on.
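The third shortcut can be sketched as a toy collaborative filter: intent is never asked for, only inferred from a log of observed clicks, by recommending what similar users clicked. The click log, user names and item names below are all invented for illustration.

```python
from collections import Counter

def recommend(click_log, user, k=1):
    """Recommend items clicked by users whose click history overlaps `user`'s.
    Intent is inferred purely from observed behaviour, never stated."""
    seen = click_log[user]
    scores = Counter()
    for other, items in click_log.items():
        if other == user:
            continue
        overlap = len(seen & items)          # how similar is this user to us?
        if overlap:
            for item in items - seen:        # score items we haven't clicked yet
                scores[item] += overlap
    return [item for item, _ in scores.most_common(k)]

clicks = {
    "alice": {"go_book", "chess_book"},
    "bob":   {"go_book", "ai_book"},
    "carol": {"chess_book", "ai_book"},
}
print(recommend(clicks, "alice"))  # ['ai_book']
```

Note that nothing in the code models what alice actually wants; the system only learns what people like her tend to click, which is exactly the property that later causes trouble with harmful but clickable content.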


By the end of this process, AI was transformed and a new recipe was born. Today, this method is found in all online translation, recommendations and question-answering tools.

For all its success, this recipe also creates problems. How can we be sure that important decisions are made fairly, when we cannot inspect a machine's inner workings?

How can we stop machines from amassing our personal data, when this is the very fuel that makes them operate? How can a machine be expected to stop harmful content from reaching users, when it is designed to learn what makes people click?

It doesn't help that we have deployed all this in a very influential position at the very centre of our digital infrastructure, and have delegated many important decisions to AI.

For instance, algorithms, rather than human decision makers, dictate what we're shown on social media in real time. In 2022, the coroner who ruled on the tragic death of 14-year-old Molly Russell partly blamed an algorithm for showing harmful material to the child without being asked to.


As these concerns derive from the same shortcuts that made the technology possible, it will be challenging to find good solutions. This is also why the initial decisions of the Italian privacy authority to block ChatGPT created alarm.

Initially, the authority raised the issues of personal data being gathered from the web without a legal basis, and of the information provided by the chatbot containing errors. This could have represented a serious challenge to the entire approach, and the fact that it was solved by adding legal disclaimers, or changing the terms and conditions, might be a preview of future regulatory struggles.


We need good laws, not doomsaying. The paradigm of AI shifted long ago, but it was not followed by a corresponding shift in our legislation and culture. That time has now come.

An important conversation has started about what we should want from AI, and this will require the involvement of different types of scholars. Hopefully, it will be based on the technical reality of what we have built, and why, rather than on sci-fi fantasies or doomsday scenarios.

Nello Cristianini, Professor of Artificial Intelligence, University of Bath

ADVERTISEMENT

CONTINUE READING BELOW

This article is republished from The Conversation under a Creative Commons license. Read the original article.




Terence Tao Leads White House’s Generative AI Working Group … – Pandaily

On May 13th, Terence Tao, an award-winning Australia-born Chinese mathematician, announced that he and physicist Laura Greene will co-chair a working group studying the impacts of generative artificial intelligence technology for the President's Council of Advisors on Science and Technology (PCAST). The group will hold a public meeting during the PCAST conference on May 19th, where Demis Hassabis, founder of DeepMind and creator of AlphaGo, as well as Stanford University professor Fei-Fei Li, among others, will give speeches.

According to Terence Tao's blog, the group mainly researches the impact of generative AI technology in scientific and social fields, including text-based large language models such as ChatGPT, image generators like DALL-E 2 and Midjourney, as well as scientific application models for protein design or weather forecasting. It is worth mentioning that Lisa Su, CEO of AMD, and Phil Venables, Chief Information Security Officer of Google Cloud, are also members of this working group.

According to an article posted on the official website of the White House, PCAST develops evidence-based recommendations for the President on matters involving science, technology, and innovation policy, as well as on matters involving scientific and technological information that is needed to inform policy affecting the economy, worker empowerment, education, energy, the environment, public health, national and homeland security, racial equity, and other topics.


After the emergence of ChatGPT, top mathematicians like Terence Tao also paid great attention to it and began exploring how artificial intelligence could help them complete their work. In a Nature article titled "How will AI change mathematics? Rise of chatbots highlights discussion", Andrew Granville, a number theorist at McGill University in Canada, said: "We are studying a very specific question: will machines change mathematics?" Mathematician Kevin Buzzard agreed, noting that even Fields Medal winners and other very famous mathematicians are now interested in this field, which shows it has become popular in an unprecedented way.

Previously, Terence Tao wrote on the decentralized social network Mastodon: "Today was the first day that I could definitively say that #GPT4 has saved me a significant amount of tedious work." In his experimentation, Tao discovered many useful features of ChatGPT, such as searching for formulas, parsing documents with code formatting, rewriting sentences in academic papers, and sometimes even semantically searching incomplete maths problems to generate hints.


Why we should be concerned about advanced AI – Epigram

By Gaurav Yadav, Second year, Law

In 1955, four scientists coined the term artificial intelligence (AI) and embarked on a summer research project aimed at developing machines capable of using language, forming abstractions and solving problems typically reserved for humans. Their ultimate goal was to create machines rivalling human intelligence. The past decade has witnessed a remarkable transformation in AI capabilities, but this rapid progress should prompt more caution than enthusiasm.

The foundation of AI lies in machine learning, a process by which machines learn from data without explicit programming. Using vast datasets and statistical methods, algorithms identify patterns and relationships in the data, later using these patterns to make predictions or decisions on previously unseen data. The current paradigm in machine learning involves developing artificial neural networks that mimic the human brain's structure.
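The learning-from-data-without-explicit-programming idea can be made concrete with the simplest possible learner, a nearest-neighbour classifier: no rule about the classes is ever written down; the behaviour comes entirely from labelled examples. A minimal sketch with made-up features and labels:

```python
def nearest_neighbour(train, point):
    """Classify `point` by the label of the closest training example.
    No rules are programmed: behaviour comes entirely from the data."""
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(train, key=lambda ex: dist2(ex[0], point))[1]

# Toy dataset of (feature vector, label) pairs -- entirely invented.
train = [((1.0, 1.0), "cat"), ((1.2, 0.9), "cat"),
         ((5.0, 5.0), "dog"), ((4.8, 5.2), "dog")]
print(nearest_neighbour(train, (1.1, 1.0)))  # 'cat'
```

Change the training data and the predictions change with it; that dependence on data, rather than on programmed rules, is the defining feature of machine learning described above.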

AI systems can be divided into two categories: 'narrow' and 'general'. A 'narrow' AI system excels in specific tasks, such as image recognition or strategic games like chess or Go, whereas artificial general intelligence (AGI) refers to a system proficient across a wide range of tasks, comparable to a human being.

A growing number of people worry that the emergence of advanced AI could lead to an existential crisis. Advanced AI broadly means AI systems capable of performing all cognitive tasks typically completed by humans; envision, for example, an AI system managing a company's operations as its CEO.

Daniel Eth, a former research scholar at the Future of Humanity Institute, University of Oxford, describes the potential outcome for advanced AI as one that could involve a single AGI surpassing human experts in most fields and disciplines. Another possibility entails an ecosystem of specialised AI systems, collectively capable of virtually all cognitive tasks. While researchers may disagree on the necessity of AGI or whether current AI models are approaching advanced capabilities, a general consensus exists that advanced or transformative AI systems are theoretically feasible.

Though certain aspects of this discussion might evoke a science fiction feel, recent AI breakthroughs seem to have blurred the line between fantasy and reality. Notable examples include large language models like GPT-4 and AlphaGo's landmark victory over Lee Sedol. These advancements underscore the potential for transformative AI systems in the future. AI systems can now recognise images, produce videos, excel at StarCraft, and produce text that is indistinguishable from human writing. The state of the art in AI is now a moving target, with AI capabilities advancing year after year.

Why should we be concerned about advanced AI?

If advanced AI is unaligned with human goals, it could pose significant risks for humanity. The 'alignment problem', the problem of aligning an AI's goals with human objectives, is difficult because of the black-box nature of neural networks: it is incredibly hard to know what is going on inside an AI when it is producing outputs. AI systems might develop goals that diverge from ours, and such divergence is challenging to detect and counteract.

For instance, a reinforcement learning model (another form of machine learning) controlling a boat in a racing game maximised its score by circling and collecting power-ups rather than finishing the race. Given that its aim was to achieve the highest score possible, it found a way to do so even though that broke our understanding of how the game should be played.
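The incentive behind the boat-race incident can be caricatured in a few lines: over a long enough horizon, endlessly collecting small rewards strictly outscores the intended behaviour, so a pure score-maximiser prefers it. The policies and reward numbers below are invented for illustration, not taken from the actual game.

```python
def total_reward(policy, steps=100):
    """Sum reward over a fixed horizon for two toy policies."""
    score = 0
    for t in range(steps):
        if policy == "finish_race":
            score += 10      # one-off reward for crossing the line
            break            # episode ends
        else:                # "circle_powerups": 1 point per lap, forever
            score += 1
    return score

print(total_reward("finish_race"))      # 10
print(total_reward("circle_powerups"))  # 100 -- the 'wrong' behaviour scores higher
```

Nothing here is malicious: the agent optimises exactly the objective it was given. The problem is that the objective was a proxy for what we actually wanted, which is the heart of the alignment problem.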

It may seem far-fetched to argue that advanced AI systems could pose an existential risk to humanity based on this humorous example. However, if we entertain the idea that AI systems can develop goals misaligned with our intentions, it becomes easier to envision a scenario where an advanced AI system could lead to disastrous consequences for mankind.

Imagine a world where advanced AI systems gain prominence within our economic and political systems, taking control of or being granted authority over companies and institutions. As these systems accumulate power, they may eventually surpass human control, leaving us vulnerable to permanent disempowerment.

What do we do about this?

There is a growing field of professionals working in AI safety, focused on solving the alignment problem and ensuring that advanced AI systems do not spiral out of control.

Presently, their efforts encompass various approaches, such as interpretability work, which aims to decipher the inner workings of otherwise opaque AI systems. Another approach involves ensuring that AI systems are truthful with us. A specific branch of this work, known as eliciting latent knowledge, explores the extraction of "knowledge" from AI systems, effectively compelling them to be honest.

At the same time, significant work is being carried out in the realm of AI governance. This includes efforts to minimise the risks associated with advanced AI systems by focusing on policy development and fostering institutional change. Organisations such as the Centre for Governance of AI are actively engaged in projects addressing various aspects of AI governance. By promoting responsible AI research and implementation, these initiatives seek to ensure that advanced AI systems are developed and deployed in ways that align with human values and societal interests.

The field of AI safety remains alarmingly underfunded and understaffed, despite the potential risks of advanced AI systems. Benjamin Hilton estimates that merely 400 people globally are actively working to reduce the likelihood of AI-related existential catastrophes. This figure is strikingly low compared to the vast number of individuals working to advance AI capabilities, which Hilton suggests is approximately 1,000 times greater.

If this has piqued your interest or concern, you might want to consider pursuing a career in AI safety. To explore further, you could read the advice provided by 80,000 Hours, a website that supports students and graduates in switching into careers that tackle the world's most pressing problems, or deepen your understanding of the field of AI safety by enrolling in the AGI Safety Fundamentals course.

Featured image: Generated using DALL-E by OpenAI


Purdue President Chiang to grads: Let Boilermakers lead in … – Purdue University

Purdue President Mung Chiang made these remarks during the university's Spring Commencement ceremonies May 12-14.

Opening

Today is not just any graduation but the commencement at a special place called Purdue, with a history that is rich and distinct and an accelerating momentum of excellence at scale. There is nothing more exciting than to see thousands of Boilermakers celebrate a milestone in your lives with those who have supported you. And this commencement has a special meaning to me as my first in the new role serving our university.

President Emeritus Mitch Daniels gave 10 commencement speeches, each an original treatise, throughout the Daniels Decade. I was tempted to simply ask generative AI engines to write this one for me. But I thought it'd be more fun to say a few thematic words by a human for fellow humans before that becomes unfashionable.

AI at Purdue

Sometime back in the mid-20th century, AI was a hot topic for a while. Now it is again; so hot that no computation is too basic to self-anoint as AI and no challenge seems too grand to be out of its reach. But the more you know how tools such as machine learning work, the less mysterious they become.

For the moment, let's assume that AI will finally be transformational to every industry and to everyone: changing how we live, shaping what we believe in, displacing jobs. And disrupting education.

Well, after IBM's Deep Blue beat the world champion, we still play chess. After calculators, children are still taught how to add numbers. Human beings learn and do things not just as survival skills, but also for fun, or as a training of our mind.

That doesn't mean we don't adapt. Once calculators became prevalent, elementary schools pivoted to translating real-world problems into math formulations rather than training for the speed of adding numbers. Once online search became widely available, colleges taught students how to properly cite online sources.

Some have explored banning AI in education. That would be hard to enforce; it's also unhealthy, as students will need to function in an AI-infused workplace upon graduation. We would rather Purdue evolve by teaching AI and teaching with AI.

That's why Purdue offers multiple major and minor degrees, fellowships and scholarships in AI and in its applications. Some will be offered as affordable online credentials, so please consider coming back to get another Purdue degree and enjoy more final exams!

And that's why Purdue will explore the best way to use AI in serving our students: to streamline processes and enhance efficiency so that individualized experiences can be offered at scale in West Lafayette. Machines free up human time so that we can do less and watch Netflix on a couch, or we can do more and create more with the time saved.

Pausing AI research is even less practical, not least because AI is not a well-defined, clearly demarcated area in isolation. All universities and companies around the world would have to stop any research that involves math. My Ph.D. co-advisor, Professor Tom Cover, did groundbreaking work in the 1960s on neural networks and statistics, not realizing those would later become useful in what others call AI. We would rather Purdue advance AI research with nuanced appreciation of the pitfalls, limitations and unintended consequences in its deployment.

That's why Purdue just launched the universitywide Institute of Physical AI. Our faculty are the leaders at the intersection of virtual and physical, where the bytes of AI meet the atoms of what we grow, make and move, from agriculture tech to personalized health care. Some of Purdue's experts develop AI to check and contain AI through privacy-preserving cybersecurity and fake video detection.

Limitations and Limits

As it stands today, AI is good at following rules, not breaking rules; reinforcing patterns, not creating patterns; mimicking what's given, not imagining beyond their combinations. Even individualization algorithms, ironically, work by first grouping many individuals into a small number of similarity classes.
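The "grouping into similarity classes" point is essentially the assignment step of k-means-style clustering: each user is mapped to the nearest of a handful of centroids, and "personalisation" becomes class membership. A toy sketch, with invented users and centroids:

```python
def assign_to_classes(users, centroids):
    """Group each user's feature vector with its nearest centroid:
    'individualisation' as membership in a few similarity classes."""
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    groups = {i: [] for i in range(len(centroids))}
    for name, vec in users.items():
        best = min(range(len(centroids)), key=lambda i: dist2(vec, centroids[i]))
        groups[best].append(name)
    return groups

users = {"u1": (0.1, 0.2), "u2": (0.2, 0.1), "u3": (0.9, 0.8)}
centroids = [(0.0, 0.0), (1.0, 1.0)]
print(assign_to_classes(users, centroids))  # {0: ['u1', 'u2'], 1: ['u3']}
```

Every user ends up treated as a member of one of a small number of classes, which is the irony the speech points out: the "individualized" experience is built from coarse groupings.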

At least for now, the more we advance artificial intelligence, the more we marvel at human intelligence. Deep Blue vs. Kasparov, or AlphaGo vs. Lee, were not fair comparisons: the machines used four orders of magnitude more energy per second! Both the biological mechanisms that generate energy from food and the amount of work we do per joule must be the envy of machines. Can AI be as energy efficient as it is fast? Can it take in energy sources other than electricity? When someday it does, and when it is combined with sensors and robotics that touch the physical world, you'd have to wonder about the fundamental differences between humans and machines.

Can AI, one day, make AI? And stop AI?

Can AI laugh, cry and dream? Can it contain multitudes and contradictions like Walt Whitman?

Will AI be aware of itself, and will it have a soul, however awareness and souls are defined? Will it also be one of T.S. Eliot's "infinitely suffering things"?

Where does an AI life start and stop anyway? What constitutes the identity of one AI, and how can it live without having to die? Indeed, if the memory and logic chips sustain and merge, is AI all collectively one life? And if AI duplicates a humans mind and memory, is that human life going to stay on forever, too?

These questions will stay hypothetical until breakthroughs arrive that are more architectural than just compounding silicon chips' speed and feeding exploding volumes of data to black-box algorithms.

However, if, given sufficient time, some of these questions are bound to eventually become real, what then is uniquely human? What would still be artificial about artificial intelligence? Some of that eventuality might, with bumps and twists, show up faster than we had thought. Perhaps in your generation!

Freedoms and Rights

If Boilermakers must face these questions, perhaps it does less harm to consider off switches controlled by individual citizens than a ban by some bureaucracy. May the medicine be no worse than the disease, and may regulations by government agencies not be granular or static, for governments don't have a track record of understanding fast-changing technologies, let alone micromanaging them. Some might even argue that government access to data and arbitration of algorithms counts among the most worrisome uses of AI.

What we need are basic guardrails of accountability, in data usage compensation, intellectual property rights and legal liability.

We need skepticism in scrutinizing the dependence of AI engines' output on their input. Data tends to feed on itself, and machines often give humans what we want to see.

We need to preserve dissent even when its inconvenient, and avoid philosopher kings dressed in AI even when the alternative appears inefficient.

We need entrepreneurs in free markets to invent competing AI systems and independently maximize choices outside the big tech oligopoly. Some of them will invent ways to break big data.

Where, when and how is data collected, stored and used? Like many technologies, AI is born neutral but suffers the natural tendency of being abused, especially in the name of the collective good. Today's most urgent and gravest nightmare of AI is its abuse by authoritarian regimes to irreversibly lock in the Orwellian 1984: the surveillance state oppressing rights, aided and abetted by AI three-quarters of a century after that bleak prophecy.

We need verifiable principles of individual rights, reflecting the Constitution of our country, in the age of data and machines around the globe. For example, MOTA:

My worst fear about AI is that it shrinks individual freedom. Our best hope for AI is that it advances individual freedom. That it presents more options, not more homogeneity. That the freedom to choose and free will still prevail.

Let us preserve the rights that survived other alarming headlines in centuries past.

Let our students sharpen the ability to doubt, debate and dissent.

Let a university, like Purdue, present the vista of intellectual conflicts and the toil of critical thinking.

Closing

Now, about asking AI engines to write this speech. We did ask it to write a commencement speech for the president of Purdue University on the topic of AI, after I finished drafting my own.

I'm probably not intelligent enough, or didn't trust the circular clichés on the web, but what I wrote had almost no overlap with what AI did. I might be biased, but the AI version reads like a B- high school essay, a grammatically correct synthesis with little specificity, originality or humor. It's so toxically generic that even adding a human in the loop to build on it proved futile. It's so boring that you would have fallen asleep even faster than you just did. By the way, you can wake up now: I'm wrapping up at last.

Maybe most commencement speeches and strategic plans sound about the same: universities have made it too easy for language models! Maybe AI can remind us to try and be a little less boring in what we say and how we think. Maybe bots can murmur "Don't you ChatGPT me" whenever we're just echoing in an ever smaller and louder echo chamber, down to the templated syntax and tired words. Smarter AI might lead to more interesting humans.

Well, there were a few words of overlap between my draft and AI's. So, here's from both some bytes living in a chip and a human Boilermaker to you all on this 2023 Purdue Spring Commencement: Congratulations, and Boiler Up!


12 shots at staying ahead of AI in the workplace – pharmaphorum

Oliver Stohlmann's Corporate Survival Hacks series draws on his experiences of working in local, regional, and global life sciences communications to offer some little tips for enjoying a big business career. In this update, he shares expectations on how artificial intelligence (AI) may impact our workplaces, and what we may do to leverage this trend for the benefit of both people and business.

Regardless of where you are on the corporate ladder, whether you know it or not, your life is going to change: dramatically, and fast.

Indications of what artificial intelligence (AI) is already able to do, and how its broader application will change our work environment, are mind-boggling. What we'll experience in the next five to ten years is a massive explosion of AI usage in nearly all areas of life.

The beginning of the beginning?

A few examples? Generating flawless text or images is no longer an issue of skill or knowledge. Most AI-generated results are so impressive that a number of people and professions are already impacted by this.

As a teacher or university lecturer, it hardly makes sense today to have students draft their own essays or academic papers. According to Nature, it has become impossible even for scientists to differentiate with certainty between AI-created and original abstracts.

At a recent marketing seminar I was involved in, not one of 36 business students was able to provide a superior and better structured answer than ChatGPT to the question "Please explain SWOT analysis". Try it for yourself.

Authentic voice and imagery

In the US, the start-up DoNotPay was about to run a pilot in February in which AI would represent a client in a speeding-case court hearing. The chatbot would run on a smartphone, listening to what was being said in court, before whispering instructions into the defendant's earpiece on how best to answer the judge's questions. The experiment was stopped at the last minute by state bar associations concerned about the "robot lawyer" practicing law without a license. However, if these objections can be resolved, this may be the way forward in many comparable settings. It's not a matter of AI capability.

If you cannot or do not wish to attend meetings in person, VALL-E is able to read any text in your voice and tonality, or anyone else's. All you need to do is submit a three-second original voice sample. Soon the human ear will not be able to differentiate between the authentic sound of a person's voice and AI imitations of it.

DALL-E 2 is an AI system that can create realistic images and artwork to your exact specifications, from a description in natural language. The need for graphic designers, photographers, illustrators, and even classic painters will fade.

Shifting from the what to the how

In the future, the best speakers will be those able to authentically repeat what those little ear pods tell them with exceptional charisma, intonation, natural gestures, and facial expressions. Neither content nor expertise will be a bottleneck. An AI-enabled speaker will be able to talk about absolutely any subject at any level of expertise. And yes, they'll be able to answer any question, too, even the provocative ones.

The best business consultants, trainers, and leadership coaches will be those with outstanding social, didactic, and motivational skills. Professional education will continue to matter, but it will focus much more on supporting executives in how to run their business, team, and customer relations, not on transferring knowledge. Being an expert knowledgeable about the what will not suffice. Most consultants, trainers, and coaches will be replaced by social learning environments. Facilitators may guide customised knowledge acquisition, while coaches and consultants will largely focus on optimising executives' acumen, personality, and other soft components of effective leadership.

More human in Human Resources

The best people managers will be those who naturally adopt and apply the latest intelligence on people management that their employer's AI-powered HR function equips them with. Human touch will not be lacking. It'll be delivered in a personalised way, allowing the manager to tailor their approach to team members with diverse engagement drivers and needs. Data collection and evaluation will run fully automated in the background, providing the manager with individual strength assessments, goal recommendations, performance tracking, corrective interventions, and development recommendations customised to each team member, while calibrating across large organisations in real time.

The best HR representatives will be those who lend these automated processes and decisions a trustworthy, fair, and human face. Decisions will be facilitated and employee conversations prepared flawlessly by AI systems running in the background. The number of real people employed in human resources will shrink. Those left, however, will primarily focus on interfacing with internal clients and employees. The quality of these interactions, and that of preparing materials and compelling scripts to enable powerful conversations, will materially increase.

Language creation and translation

The best writers will be… Whoops, I started this sentence wrong: there'll be no need for writers. Or very few, outstanding ones at best. Already today, AI-generated texts are of a quality, clarity, and artistic beauty that beats 80% of human professional writers. Try it out: ask ChatGPT to draft an introduction for the website of a company called Human Hips that designs and replaces human hip implants. See what happens.

I just made up that company name. If it existed, they could use the resulting draft for their website straight away. Yes, it could be improved by a great writer, with more details added to reflect the specialty offerings of that enterprise. However, AI is on track to produce superior texts compared to most human writers, based on minimal input and cost, and faster than anyone else could.

The best translators will be… Sorry, got this wrong again: translators will disappear. AI already supplies great, and will deliver perfect, translations into any and all global languages in split seconds, for any length and complexity of written or spoken word. Roles that translate texts or simultaneously interpret the spoken word will be a concept of the past.

Seizing the AI revolution

The best employees, those who retain well-paid jobs and climb the career ladder, will be those able to competently navigate the avalanche of AI-led and AI-augmented applications. They can select the relevant ones to add business value and adapt key features to meet specific business and customer needs. They're able to use AI to achieve outcomes faster and more efficiently, at lower cost and better quality than what's imaginable today.

The best executives will be algorithm-based. Of course, it's a scary prospect to remove thinking humans with deep backgrounds and long experience from positions of power. However, just imagine how much better, faster, fairer, and more ethical fact-based decision-making could become once typical human flaws are removed from the equation. These include one's individual values and beliefs, ideologies, biases, personal relationships, and interdependencies, including corruption and other temptations; plus cultural and institutional norms, value systems, expectations, and the pressures that typically result from them. Scary, but likely in the future.

The best politicians will be… You get my drift!

But there's an upside - many, actually

I would be remiss if I didn't at least briefly point out the phenomenally positive, life-enhancing, and sometimes life-saving opportunities AI brings to society, too.

Apart from GPS systems navigating us to our destinations safely, faster, and more reliably, our cars are already equipped with many other AI-based safety features that serve to prevent accidents before they happen. An armada of sensors, connected and communicating with smart control centres, constantly watches over not only the cars we use but also buses, trains, ships, planes, trucks, agricultural machinery, and more, keeping operations, passengers, and freight safe. They also ensure that buildings, roads, rail tracks, bridges, tunnels, airports, harbours, stations, wind turbines, and all other infrastructure are constantly monitored and maintained preventatively, before fatigue, vibrations, climate, or other forces can lead to damage or disaster.

As much as I don't like the idea of machines taking over, they most certainly make safer drivers than I am. My future driverless car won't get distracted, nor will it become tired, and it will be able to detect approaching obstacles, stopping traffic, or the deer about to cross the road earlier than I could. In the same way, pilots have for years been using autopilots that can not only keep planes stable in the air but also take off and land them safely in the harshest weather conditions.

Human health: an AI beneficiary

In medicine, AI-augmented surgery can already operate more precisely than the human hand could, with trained physicians informing and supervising the process and intervening as needed. Implants are precision-measured, designed to your individual specifications, and tailor-made as a unique product to provide an optimal, long-lasting fit. That's not to mention the fast, minimally invasive precision surgery that spares patients pain and time while reducing demands on hospital capacity and cost.

Innovative medical therapies will be designed, developed, and clinically trialled much faster, driven by AI-led processes, and made available to the right patients: those who benefit from treatment and who will have been pre-determined with the aid of biomarkers or other tests conducted by means of, you guessed it, AI, at rocket speed and precision.

These are just examples. The fast-increasing use of AI will radically change the way we work and live. But it will also usher in a world of opportunities that we and future generations will greatly benefit from.

Buckle up!

However, in case you find the above scenarios unsettling: most do not even touch on the true potential of artificial intelligence. What we've been talking about so far is mostly the seamless automation of individual steps and processes, so that results can be achieved faster, more efficiently, and more accurately than any human brain could manage.

Fasten your seatbelts for when true self-learning algorithms, with the capacity and capability to continuously learn from errors and instantly apply their insights to improve approaches in real time, are ready for mass application.

For instance, DeepMind's AlphaGo system, who… apologies: that famously defeated the world's Go champion Lee Se-dol in 2016. Three years later, the South Korean attributed his retirement from the complex board game to the rise of AI, saying that it was "an entity that cannot be defeated".

Well, for a bit of hope, read this recent update on how the story continued, with a comprehensive defeat of a top-ranked AI system in the same game. However, you may also notice that even that human victory over AI was owed to yet more artificial intelligence support.

Whichever way you look at the rise of AI, its diverse applications, future possibilities, or the potential need for regulation: it's going to be a fast ride.

About the author

Oliver Stohlmann is a communications leader with more than 20 years' experience working at local, regional, and global levels for several of the world's premier life sciences corporations. Most recently, he was Johnson & Johnson's global head of external innovation communication. He currently works for Exscientia plc and as an independent leadership coach, trainer, team developer, and communications consultant.

Read the rest here:
12 shots at staying ahead of AI in the workplace - pharmaphorum