Why it’s time to address the ethical dilemmas of artificial intelligence – Economic Times

The Future of Life Institute (FLI) was founded in March 2014 by eminent futurologists and researchers to reduce catastrophic and existential risks to humankind from advanced technologies like artificial intelligence (AI). Elon Musk, who is on FLI's advisory board, donated $10 million to jump-start research on AI safety because, in his words, 'with artificial intelligence, we are summoning the devil'. For something that everyone is singing hosannas to these days, and treating as a solution to almost all challenges faced by industry or healthcare or education, why this cautionary tale?

AI's perceived risk isn't only from autonomous weapon systems, produced by countries like the US, China, Israel and Turkey, that can track and target humans and assets without human intervention. It's equally about the deployment of AI and allied technologies for mass surveillance, adverse health interventions, contentious arrests and the infringement of fundamental rights. Not to mention the vulnerabilities that dominant governments and businesses can insidiously create.

AI came into global focus in 1997 when IBM's Deep Blue beat world chess champion Garry Kasparov. We came to accept that the outcome was inevitable, considering it was a game based on logic, and that the computer's ability to reference past games, evaluate options and select the most effective move instantly is superior to anything humans could ever do. When Google DeepMind's AlphaGo program bested the world's best Go player Lee Sedol in 2016, we learnt that AI could master games based on intuition too.

AI, AI, Sir

As the United Nations Educational, Scientific and Cultural Organisation (Unesco) sharpened its focus on the ethical dilemmas that AI could create, it embarked on developing a legal, global document on the subject. Situations discussed include how a search engine can become an echo chamber upholding real-life biases and prejudices - like when we search for the 'greatest leaders of all time' and get a list of only male personalities. Or the quandary when a car brakes to avoid a jaywalker and shifts the risk from the pedestrian to the travellers in the car. Or when AI is deployed to study 346 Rembrandt paintings pixel by pixel, leveraging deep-learning algorithms to produce a magnificent, 3D-printed masterpiece that could deceive the best art experts and connoisseurs.

Then there is the AI-aided application of justice in legislation, administration, adjudication and arbitration. Unesco's quest to provide an ethical framework to ensure emerging technologies benefit humanity at large is, indeed, a noble one.

Interestingly, computer scientists at the Vienna University of Technology (TU Wien), Austria, are studying Indian Vedic texts and applying them to mathematical logic. The idea is to develop reasoning tools that address deontic concepts - those relating to duty and obligation, such as prohibitions and commitments - to implement ethics in AI.

Logicians at the Institute of Logic and Computation at TU Wien and the Austrian Academy of Sciences are also drawing on the Mimamsa, the school of exegesis that interprets the Vedas and suggests how to maintain harmony in the world, to resolve many innate contradictions. Essentially, because classical logic is of limited use when dealing with ethics, a deontic logic needs to be developed that can be expressed in mathematical formulae, creating a framework that computers can comprehend and respond to.
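To illustrate what such a formal treatment looks like, the standard operators of deontic logic (these are textbook definitions, not formulas from the TU Wien project itself) can be written as:

```latex
O(p) \;\; \text{($p$ is obligatory)} \\
F(p) \;\equiv\; O(\neg p) \;\; \text{($p$ is forbidden: its negation is obligatory)} \\
P(p) \;\equiv\; \neg O(\neg p) \;\; \text{($p$ is permitted: its negation is not obligatory)}
```

Once duties and prohibitions are expressed in operators like these, contradictions between rules become mathematically detectable, which is what makes the framework machine-checkable.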

Isaac Asimov's iconic 1950 book, I, Robot, sets out the three rules all robots must be programmed with - the Three Laws of Robotics: (1) never harm a human, or through inaction allow a human to come to harm; (2) obey humans, unless this violates the first law; (3) protect its own existence, unless this violates the first or second laws. In the 2004 film adaptation, a larger threat is envisaged: AI-enabled robots rebel and try to enslave and control all humans - to protect humanity for its own good, by their own logic.
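The Three Laws are, in effect, a prioritised rule system: a lower law only applies when no higher law is violated. A minimal sketch of that priority ordering, with entirely hypothetical action attributes (none of this comes from Asimov or any real robotics API), might look like:

```python
# Hypothetical sketch: the Three Laws of Robotics as a lexicographic
# priority check. Attribute names are illustrative assumptions.
from dataclasses import dataclass


@dataclass
class Action:
    harms_human: bool       # would the action injure a human?
    allows_harm: bool       # would it let a human come to harm through inaction?
    ordered_by_human: bool  # was the action commanded by a human?
    endangers_self: bool    # does it risk the robot's own existence?


def law_score(a: Action) -> tuple:
    # Lower tuples sort first; False < True in Python, so the action
    # violating the fewest high-priority laws wins.
    return (
        a.harms_human or a.allows_harm,  # First Law dominates everything
        not a.ordered_by_human,          # Second Law: prefer obeying orders
        a.endangers_self,                # Third Law: prefer self-preservation
    )


def choose(actions: list) -> Action:
    """Pick the action most consistent with the Three Laws, in order."""
    return min(actions, key=law_score)
```

The lexicographic tuple captures the "unless this violates" clauses: a self-sacrificing action that obeys a human order still ranks above a safe one that disobeys, because the second element of the tuple is compared before the third.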

Artificially Real

In the real world, there is little doubt that AI has to be mobilised for the greater good, guided by the right human intention, so that it can be leveraged to manage larger forces of nature, like climate change and natural disasters, that we can't otherwise control. AI must be a means to nourish humanity in multifarious ways, rather than unobtrusively aid its destruction. The Three Laws of Robotics must clearly be augmented, so that expanded algorithms help the AI engine respect privacy and not discriminate in terms of race, gender, age, colour, wealth, religion, power or politics.

We're seeing the mainstreaming of AI in an age of exponential digital transformation. How we steer its future will shape the next stage of human evolution. The time is opportune for governments to confer - to shape equitable outcomes, a risk-management strategy and pre-emptive contingency plans.
