Archive for the ‘AlphaGo’ Category

Examining the world through signals and systems – MIT News

There's a mesmerizing video animation on YouTube of simulated, self-driving traffic streaming through a six-lane, four-way intersection. Dozens of cars flow through the streets, pausing, turning, slowing, and speeding up to avoid colliding with their neighbors. And not a single car stops. But what if even one of those vehicles was not autonomous? What if only one was?

In the coming decades, autonomous vehicles will play a growing role in society, whether keeping drivers safer, making deliveries, or increasing accessibility and mobility for elderly or disabled passengers.

But MIT Assistant Professor Cathy Wu argues that autonomous vehicles are just part of a complex transport system that may involve individual self-driving cars, delivery fleets, human drivers, and a range of last-mile solutions to get passengers to their doorstep, not to mention road infrastructure like highways, roundabouts, and, yes, intersections.

Transport today accounts for about one-third of U.S. energy consumption. The decisions we make today about autonomous vehicles could have a big impact on this number, ranging from a 40 percent decrease in energy use to a doubling of energy consumption.

So how can we better understand the problem of integrating autonomous vehicles into the transportation system? Equally important, how can we use this understanding to guide us toward better-functioning systems?

Wu, who joined the Laboratory for Information and Decision Systems (LIDS) and MIT in 2019, is the Gilbert W. Winslow Assistant Professor of Civil and Environmental Engineering as well as a core faculty member of the MIT Institute for Data, Systems, and Society. Growing up in a Philadelphia-area family of electrical engineers, Wu sought a field that would enable her to harness engineering skills to solve societal challenges.

During her years as an undergraduate at MIT, she reached out to Professor Seth Teller of the Computer Science and Artificial Intelligence Laboratory to discuss her interest in self-driving cars.

Teller, who passed away in 2014, met her questions with warm advice, says Wu. "He told me, 'If you have an idea of what your passion in life is, then you have to go after it as hard as you possibly can. Only then can you hope to find your true passion.'"

"Anyone can tell you to go after your dreams, but his insight was that dreams and ambitions are not always clear from the start. It takes hard work to find and pursue your passion."

Chasing that passion, Wu would go on to work with Teller, as well as in Professor Daniela Rus's Distributed Robotics Laboratory, and finally as a graduate student at the University of California at Berkeley, where she won the IEEE Intelligent Transportation Systems Society's best PhD award in 2019.

In graduate school, Wu had an epiphany: She realized that for autonomous vehicles to fulfill their promise of fewer accidents, time saved, lower emissions, and greater socioeconomic and physical accessibility, these goals must be explicitly designed for, whether in physical infrastructure, in the algorithms used by vehicles and sensors, or in deliberate policy decisions.

At LIDS, Wu uses a type of machine learning called reinforcement learning to study how traffic systems behave, and how autonomous vehicles in those systems ought to behave to get the best possible outcomes.

Reinforcement learning, most famously used by AlphaGo, DeepMind's human-beating Go program, is a powerful class of methods that captures the idea behind trial and error: given an objective, a learning agent repeatedly attempts to achieve it, failing and learning from its mistakes in the process.

In a traffic system, the objectives might be to maximize the overall average velocity of vehicles, to minimize travel time, to minimize energy consumption, and so on.
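To make that concrete, here is a minimal, self-contained sketch of how such an objective becomes a reinforcement-learning reward. It is not Wu's actual simulator or code; the toy intersection dynamics and numbers are invented for illustration. A tabular Q-learning agent picks a signal phase, and the reward it tries to maximize is simply the resulting average vehicle speed.

```python
# A minimal sketch (not the researchers' actual setup) of turning a traffic
# objective into a reinforcement-learning reward. A toy agent picks a signal
# phase; a stand-in "simulator" returns the resulting average vehicle speed,
# which the agent uses directly as its reward.
import random
from collections import defaultdict

ACTIONS = ["ns_green", "ew_green"]          # toy signal phases

def simulate_step(state, action):
    """Stand-in for a traffic simulator: returns (next_state, avg_speed)."""
    ns_queue, ew_queue = state
    if action == "ns_green":
        ns_queue = max(0, ns_queue - 3); ew_queue = min(10, ew_queue + 1)
    else:
        ew_queue = max(0, ew_queue - 3); ns_queue = min(10, ns_queue + 1)
    avg_speed = 1.0 / (1 + ns_queue + ew_queue)   # fewer queued cars -> faster traffic
    return (ns_queue, ew_queue), avg_speed

q = defaultdict(float)                       # Q[(state, action)] value estimates
alpha, gamma, epsilon = 0.1, 0.95, 0.1

state = (5, 5)
for step in range(10_000):
    # Epsilon-greedy trial and error: mostly exploit, sometimes explore.
    if random.random() < epsilon:
        action = random.choice(ACTIONS)
    else:
        action = max(ACTIONS, key=lambda a: q[(state, a)])
    next_state, reward = simulate_step(state, action)   # reward = average velocity
    best_next = max(q[(next_state, a)] for a in ACTIONS)
    q[(state, action)] += alpha * (reward + gamma * best_next - q[(state, action)])
    state = next_state

print("Learned preference at queues (5, 5):",
      max(ACTIONS, key=lambda a: q[((5, 5), a)]))
```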

When studying common components of traffic networks such as grid roads, bottlenecks, and on- and off-ramps, Wu and her colleagues have found that reinforcement learning can match, and in some cases exceed, the performance of current traffic control strategies. More importantly, reinforcement learning can shed new light on complex networked systems, which have long evaded classical control techniques. For instance, if just 5 to 10 percent of vehicles on the road were autonomous and used reinforcement learning, that could eliminate congestion and boost vehicle speeds by 30 to 140 percent. And the learning from one scenario often translates well to others. These insights could soon help inform public policy or business decisions.

In the course of this research, Wu and her colleagues helped improve a class of reinforcement learning methods called policy gradient methods. Their advancements turned out to be a general improvement to most existing deep reinforcement learning methods.
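For readers unfamiliar with the term, the sketch below shows the vanilla policy gradient idea (REINFORCE) on a two-action toy problem. It is not the improved method from Wu's work, only the baseline class of algorithms such work builds on, and the environment and payoffs are invented.

```python
# A bare-bones REINFORCE (policy gradient) sketch: nudge the policy's
# parameters in the direction that makes high-reward actions more probable.
import numpy as np

rng = np.random.default_rng(0)
theta = np.zeros(2)                 # one logit per action

def softmax(x):
    z = np.exp(x - x.max())
    return z / z.sum()

def reward(action):
    # Hypothetical environment: action 1 pays off more on average.
    return rng.normal(1.0 if action == 1 else 0.2, 0.1)

learning_rate = 0.1
for episode in range(2000):
    probs = softmax(theta)
    action = rng.choice(2, p=probs)
    r = reward(action)
    # Gradient of log pi(action) w.r.t. theta for a softmax policy:
    grad_log_pi = -probs
    grad_log_pi[action] += 1.0
    theta += learning_rate * r * grad_log_pi   # REINFORCE update

print("Action probabilities after training:", softmax(theta))
```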

But reinforcement learning techniques will need to be continually improved to keep up with the scale of infrastructure, shifts in that infrastructure, and changing behavior patterns. And research findings will need to be translated into action by urban planners, automakers, and other organizations.

Today, Wu is collaborating with public agencies in Taiwan and Indonesia to use insights from her work to guide better dialogues and decisions. Are there ways, by changing traffic signals or nudging drivers' behavior, to achieve lower emissions or smoother traffic?

"I'm surprised by this work every day," says Wu. "We set out to answer a question about self-driving cars, and it turns out you can pull apart the insights, apply them in other ways, and then this leads to new, exciting questions to answer."

Wu is happy to have found her intellectual home at LIDS. She describes it as a very deep, intellectual, friendly, and welcoming place. And she counts among her research inspirations MIT course 6.003 (Signals and Systems), a class she encourages everyone to take, taught in the tradition of professors Alan Oppenheim (Research Laboratory of Electronics) and Alan Willsky (LIDS). "The course taught me that so much in this world could be fruitfully examined through the lens of signals and systems, be it electronics or institutions or society," she says. "I am just realizing as I'm saying this that I've been empowered by LIDS thinking all along!"

Research and teaching through a pandemic haven't been easy, but Wu is making the best of a challenging first year as faculty. ("I've been working from home in Cambridge; my short walking commute is irrelevant at this point," she says wryly.) To unwind, she enjoys running, listening to podcasts covering topics ranging from science to history, and reverse-engineering her favorite Trader Joe's frozen foods.

She's also been working on two Covid-related projects born at MIT: One explores how data from the environment, such as data collected by internet-of-things-connected thermometers, can help identify emerging community outbreaks. Another asks whether it's possible to ascertain how contagious the virus is on public transport, and how different factors might decrease the transmission risk.

"Both are in their early stages," Wu says. "We hope to contribute a bit to the pool of knowledge that can help decision-makers somewhere. It's been very enlightening and rewarding to do this and see all the other efforts going on around MIT."

More here:
Examining the world through signals and systems - MIT News

DeepMind Proposes Graph-Theoretic Investigation of the Multiplayer Games Landscape – Synced

In the mid-1960s, computer science and AI researchers adopted the pet name "drosophila" for the game of chess, a reference to the fruit flies commonly used in genetic research. American evolutionary biologist Thomas Morgan made critical contributions to that field by studying flies in his famous fly room, and AI researchers today believe multiplayer games like chess can provide similarly accessible and relatively simple experimental environments for shaping useful knowledge about complex systems.

In recent years, researchers have made multiplayer games a hot testbed for AI research, using reinforcement learning techniques to create superhuman agents in chess, Go, StarCraft II, and other games.

This progress, however, can be better informed by characterizing games and their topological landscape, proposes the paper "Navigating the Landscape of Multiplayer Games," recently published in Nature Communications. In the work, researchers from DeepMind and Universidade de Lisboa introduce a graph-based toolkit for analyzing and comparing games in this regard.

Understanding and decomposing the characterizing features of games can be leveraged for the downstream training of agents via curriculum learning, which seeks to enable agents to learn increasingly complex tasks. The researchers say it has become increasingly important to identify a framework that can taxonomize, characterize, and decompose complex AI tasks, and they turned to multiplayer games for reference. They define the core challenge as the "Problem Problem": the engineering problem of generating large numbers of interesting adaptive environments to support research.

The researchers start with a fundamental question: What makes a game interesting enough for an AI agent to learn to play? They propose that answering this requires techniques that can characterize the topological landscape of games, interesting or not, and enable discovery over that landscape.

The team combined graph theory and game theory to analyze the structure of general-sum, multiplayer games. They used the new toolkit to characterize games, looking first at motivating examples and canonical games with well-defined structures, then extending to larger-scale empirical games datasets. The games' graph representations can offer researchers various insights, such as the strong transitive relationships revealed in AlphaGo, the DeepMind program that defeated Go grandmaster Lee Sedol in 2016.
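As a rough illustration of what a graph view of a game can reveal (a hypothetical sketch, not the authors' released toolkit), the snippet below builds a directed "improving response" graph from a small payoff table and checks whether the strategies form a transitive ranking or a cycle, as in rock-paper-scissors.

```python
# A hypothetical sketch of a graph view of a game: build a directed graph
# from a payoff table and check whether the game is transitive (a strict
# ranking of strategies) or cyclic (like rock-paper-scissors).
from itertools import permutations

def beats_graph(payoff):
    """payoff[a][b] > 0 means strategy a beats strategy b."""
    edges = {s: set() for s in payoff}
    for a, b in permutations(payoff, 2):
        if payoff[a][b] > 0:
            edges[b].add(a)      # edge b -> a: a is an improving response to b
    return edges

def has_cycle(edges):
    visited, on_stack = set(), set()
    def dfs(node):
        visited.add(node); on_stack.add(node)
        for nxt in edges[node]:
            if nxt in on_stack or (nxt not in visited and dfs(nxt)):
                return True
        on_stack.discard(node)
        return False
    return any(dfs(s) for s in edges if s not in visited)

rps = {  # rock-paper-scissors: every strategy is beaten by another
    "rock":     {"rock": 0, "paper": -1, "scissors": 1},
    "paper":    {"rock": 1, "paper": 0, "scissors": -1},
    "scissors": {"rock": -1, "paper": 1, "scissors": 0},
}
print("cyclic (non-transitive)?", has_cycle(beats_graph(rps)))   # True
```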

The study surveys the landscape of games and develops techniques to help with understanding the space of games, the downstream training of agents in game settings, and interest-improving algorithmic development. The team says the work opens paths for further exploration of the theoretical properties of graph-based games analysis and the Problem Problem and task theory, and can benefit related studies on the geometry and structure of games.

The paper "Navigating the Landscape of Multiplayer Games" is published in Nature Communications.

Reporter: Fangyu Cai | Editor: Michael Sarazen




Read the original here:
DeepMind Proposes Graph-Theoretic Investigation of the Multiplayer Games Landscape - Synced

There’s No Turning Back on AI in the Military – WIRED

For countless Americans, the United States military epitomizes nonpareil technological advantage. Thankfully, in many cases, we live up to it.

But our present digital reality is quite different, even sobering. Fighting terrorists for nearly 20 years after 9/11, we remained a flip-phone military in what is now a smartphone world. Infrastructure to support a robust digital force remains painfully absent. Consequently, service members lead personal lives digitally connected to almost everything and military lives connected to almost nothing. Imagine having some of the world's best hardware (stealth fighters or space planes) supported by the world's worst data plan.

Meanwhile, the accelerating global information age remains dizzying. The year 2020 is on track to produce 59 zettabytes of data. That's a one with 21 zeroes after it, over 50 times the number of stars in the observable universe. On average, every person online contributes 1.7 megabytes of content per second, and counting. Taglines like "Data is the new oil" emphasize the economic import, but not its full potential. "Data is more" reverently captures its ever-evolving, artificially intelligent future.


Will Roper is the Air Force and Space Force acquisition executive.

The rise of artificial intelligence has come a long way since 1945, when visionary mathematician Alan Turing hypothesized that machines would one day perform intelligent functions, like playing chess. Aided by meteoric advances in data processing (a million-billion-fold over the past 70 years), Turing's vision was achieved only 52 years later, when IBM's Deep Blue defeated the reigning world chess champion, Garry Kasparov, with select moves described as almost human. But this impressive feat would be dwarfed in 2016, when Google's AlphaGo shocked the world with a beyond-human, even "beautiful" move on its way to defeating 18-time world Go champion Lee Sedol. That now famous move 37 of game two was the death knell of human preeminence in strategy games. Machines now teach the world's elite how to play.

China took more notice of this than usual. We've become frustratingly accustomed to them copying or stealing US military secrets; two decades of post-9/11 operations provides a lot of time to watch and learn. But China's ambitions far outstrip merely copying or surpassing our military. AlphaGo's victory was a Sputnik moment for the Chinese Communist Party, triggering its own NASA-like response: a national Mega-Project in AI. Though there is no moon in this digital space race, its giant leap may be the next industrial revolution. The synergy of 5G and cloud-to-edge AI could radically evolve the internet of things, enabling ubiquitous AI and all the economic and military advantages it could bestow. It's not just our military that needs digital urgency: our nation must wake up fast. The only thing worse than fearing AI itself is fearing not having it.

There is a gleam of hope. The Air Force and Space Force had their own move 37 moment last month during the first AI-enabled shoot-down of a cruise missile at blistering machine speeds. Though it happened in a literal flash, this watershed event was seven years in the making, integrating technologies as diverse as hypervelocity guns, fighters, computing clouds, virtual reality, 4G LTE and 5G, and even Project Maven, the Pentagon's first AI initiative. In the blink of a digital eye, we birthed an internet of military things.

Working at unprecedented speeds (at least for the Pentagon), the Air Force and Space Force are expanding this IoT.mil across the military, and not a moment too soon. With AI surpassing human performance in more than just chess and Go, traditional roles in warfare are not far behind. "Whose AI will overtake them?" is an operative question in the digital space race. Another is how our military finally got off the launch pad.

More than seven years ago, I spearheaded the development of hypervelocity guns to defeat missile attacks with low-cost, rapid-fire projectiles. I also launched Project Maven to pursue machine-speed targeting of potential threats. But with no defense plug-and-play infrastructure, these systems remained stuck in airplane mode. The Air Force and Space Force later offered me the much-needed chance to create that digital infrastructure (cloud, software platforms, enterprise data, even coding skills) from the ground up. We had to become a good software company to become a software-enabled force.

See more here:
There's No Turning Back on AI in the Military - WIRED

The Military’s Mission: Artificial Intelligence in the Cockpit – The Cipher Brief

The Defense Advanced Research Projects Agency (DARPA) recently hosted the AlphaDogfight Trials, pitting artificial intelligence technology from eight different organizations against human pilots. In the end, the winning AI, made by Heron Systems, faced off against a human F-16 pilot in a simulated dogfight, with the AI system scoring a 5-0 victory.

The simulation was part of an effort to better understand how to integrate AI systems into piloted aircraft, in part to increase the lethality of the Air Force. The event also relaunched questions about the future of AI in aviation technology and how human pilots will remain relevant in an age of ongoing advancements in drone and artificial intelligence technology.


The Experts:

The Cipher Brief spoke with our experts, General Philip M. Breedlove (Ret.) and Lt. Col. Tucker Cinco Hamilton, to get their take on the trials and the path ahead for AI in aviation.

General Philip M. Breedlove, Former Supreme Allied Commander, NATO & Command Pilot

Gen. Breedlove retired as NATO Supreme Allied Commander and is a command pilot with 3,500 flying hours, primarily in the F-16. He flew combat missions in Operation Joint Forge/Joint Guardian. Prior to his position as SACEUR, he served as Commander, U.S. Air Forces in Europe; Commander, U.S. Air Forces Africa; Commander, Air Component Command, Ramstein; and Director, Joint Air Power Competence Centre, Kalkar, Germany.

Lt. Col. Tucker Cinco Hamilton, Director, Dept. of the Air Force AI Accelerator at MIT

Cinco Hamilton is Director, Department of the Air Force-MIT Accelerator and previously served as Director of the F-35 Integrated Test Force at Edwards AFB, responsible for the developmental flight test of the F-35. He has logged over 2,100 hours as a test pilot in more than 30 types of aircraft.

How significant was this test between AI and human pilots?

Tucker Cinco Hamilton: It was significant along the same lines as when DeepMind Technologies' AlphaGo won the game of Go against a grandmaster. It was an important moment that revealed technological capability, but it must be understood in the context of the demonstration. Equally, it did not prove that fighter pilots are no longer needed on the battlefield. What I hope people took away from the demonstration was that AI/ML technology is immensely capable and vitally important to understand and cultivate, and that with an ethical and focused developmental approach we can bolster the human-machine interaction.

General Breedlove: Technology is moving fast, but in some cases, policy might not move so fast. For instance, technology exists now to put sensors on these aircraft that are better than the human eye. They can see better. They can see better in bad conditions. And especially when you start to layer a blend of visual, radar, and infrared sensing together, it is my belief that we can actually achieve a more reliable discerning capability than the human eye. I do not believe that our limitations are going to be on the ability of the machine to do what it needs to do. The real limitations are going to be on what we allow it to do in a policy format.

How will fighter pilots of the future think about data and technology in the cockpit?

General Breedlove: Some folks believe that we're never going to move forward with this technology because fighter pilots don't want to give up control. I think for most young fighter pilots and for most of the really savvy older fighter pilots, that's not true. We want to be effective, efficient, lethal killing machines when our nation needs us to be. If we can get into an engagement where we can use these capabilities to make us more effective and more efficient killing machines, then I think you're going to see people, young people, and even people like me, absolutely embracing it.

Tucker Cinco Hamilton: I think the future fighter aircraft will be manned, yet linked into AI/ML-powered autonomous systems that bolster the fighter pilot's battlefield safety, awareness, and capability. The future I see is one in which an operator is still fully engaged with battlefield decision-making, yet supported by technology through human-machine teaming.

As we develop and integrate AI/ML capability, we must do so ethically. This is an imperative. Our warfighter and our society deserve transparent, ethically curated, and ethically executed algorithms. In addition, data must be transparently and ethically collected and used. Before AI/ML capability fully makes its way into combat applications, we need to have established a strong and thoughtful ethical foundation.

Looking Ahead:

General Breedlove: Humans are training machines to do things, and machines are executing what they've been trained to do, as opposed to actually making independent, non-human-aided decisions. I do believe we're in a timeframe now where there may be a person in the loop in certain parts of the engagement, but we're probably not very far off from a point in time when the human says, "Yep, that's the target. Hit it." Or the human takes the aircraft to a point where only the bad element is in front of it, the decision concerning collateral damage has already been made, and then the human turns it completely over. But as for the high-end extreme of a "launch an airplane and then see what happens next" kind of scenario, I think we're still a long way away from that. I think there are going to be humans in the engagement loop for a long time.

Tucker Cinco Hamilton: Autonomous systems are here to stay, whether helping manage our engine operation or saving us from ground collision with the Automatic Ground Collision Avoidance System. As aircraft software continues to become more agile, these autonomous systems will play a part in currently fielded physical systems. This type of advancement is important and needed. However, AI/ML-powered autonomous systems have limitations, and that's exactly where the operator comes in. We need to focus on creating capability that bolsters our fighter pilots, allowing them to best prosecute the attack, not remove them from the cockpit. Whether that is through keeping them safe, pinpointing and identifying the correct target, alerting them to incoming threats, or garnering knowledge of the battlefield, it's all about human-machine teaming. That teaming is exactly what the recent DARPA demonstration was about, proving that an AI-powered system can help in situations even as dynamic as dogfighting.

Cipher Brief Intern Ben McNally contributed research for this report

Read more from General Breedlove (Ret.) on the future of AI in the cockpit exclusively in The Cipher Brief

Read more expert-driven national security insight, perspective and analysis in The Cipher Brief

Read more here:
The Military's Mission: Artificial Intelligence in the Cockpit - The Cipher Brief

8 Examples of Artificial Intelligence in our Everyday Lives – Edgy Labs

The applications of artificial intelligence have grown over the past decade. Here are examples of artificial intelligence that we use in our everyday lives.


The words "artificial intelligence" may seem like a far-off concept that has nothing to do with us. But the truth is that we encounter several examples of artificial intelligence in our daily lives.

From Netflix's movie recommendations to Amazon's Alexa, we now rely on various AI models without knowing it. In this post, we'll consider eight examples of how we're already using artificial intelligence.

Artificial intelligence is an expansive branch of computer science that focuses on building smart machines. Thanks to AI, these machines can learn from experience, adjust to new inputs, and perform human-like tasks. For example, chess-playing computers and self-driving cars rely heavily on natural language processing and deep learning to function.

American computer scientist John McCarthy coined the term "artificial intelligence" back in 1956. At the time, McCarthy only created the term to distinguish the AI field from cybernetics.

However, AI is more popular than ever today.

Hollywood movies tend to depict artificial intelligence as a villainous technology that is destined to take over the world.

One example is the artificial superintelligence system, Skynet, from the film franchise Terminator. There's also VIKI, an AI supercomputer from the movie I, Robot, which deemed that humans can't be trusted with their own survival.

Hollywood has also depicted AI as superintelligent robots, as in the movies I Am Mother and Ex Machina.

However, current AI technologies are not as sinister, or quite as advanced. That said, these depictions raise an essential question: are artificial intelligence and robotics the same thing?

No, not exactly. Artificial intelligence and robotics are two entirely separate fields. Robotics is a branch of technology that deals with physical robots: programmable machines designed to perform a series of tasks. On the other hand, AI involves developing programs to complete tasks that would otherwise require human intelligence. However, the two fields can overlap to create artificially intelligent robots.

Most robots are not artificially intelligent. For example, industrial robots are usually programmed to perform the same repetitive tasks. As a result, they typically have limited functionality.

However, introducing an AI algorithm to an industrial robot can enable it to perform more complex tasks. For instance, it can use a path-finding algorithm to navigate around a warehouse autonomously.

To understand how that's possible, we must address another question: what are the types of artificial intelligence?

The four artificial intelligence types are reactive machines, limited memory, theory of mind, and self-aware AI. These types form a kind of hierarchy, where the simplest level requires only basic functioning and the most advanced level is, well, all-knowing. Other subsets of AI include big data, machine learning, and natural language processing.

The simplest types of AI systems are reactive. They can neither learn from experiences nor form memories. Instead, reactive machines react to some inputs with some output.

Examples of artificial intelligence machines in this category include Google's AlphaGo and IBM's chess-playing supercomputer, Deep Blue.

Deep Blue can identify chess pieces and knows how each of them moves. While the machine can choose the most optimal move from several possibilities, it can't predict the opponent's moves.

A reactive machine doesn't rely on an internal concept of the world. Instead, it perceives the world directly and acts on what it sees.
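A minimal sketch of that idea follows; the game and scoring function are invented purely for illustration. The agent keeps no history and forms no model of the past: it simply scores the moves available in the current position and returns the best one.

```python
# A toy "reactive" agent: no memory, no model of past games; it reacts to the
# current board (the input) with the highest-scoring move (the output).
def legal_moves(board):
    return [i for i, cell in enumerate(board) if cell == " "]

def evaluate(board, player):
    """Crude immediate score: +10 for each completed line on a 3x3 board."""
    lines = [(0,1,2),(3,4,5),(6,7,8),(0,3,6),(1,4,7),(2,5,8),(0,4,8),(2,4,6)]
    return sum(10 for line in lines if all(board[i] == player for i in line))

def reactive_move(board, player):
    # No stored state, no lookahead beyond the present position.
    best_move, best_score = None, float("-inf")
    for move in legal_moves(board):
        candidate = board.copy()
        candidate[move] = player
        score = evaluate(candidate, player)
        if score > best_score:
            best_move, best_score = move, score
    return best_move

board = ["X", "X", " ",
         "O", "O", " ",
         " ", " ", " "]
print(reactive_move(board, "X"))   # completes the top row -> 2
```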

Limited memory refers to an AI's ability to store previous data and use it to make better predictions. In other words, these types of artificial intelligence can look at the recent past to make immediate decisions.

Note that limited memory is required to create every machine learning model. However, the model can get deployed as a reactive machine type.

One significant example of artificial intelligence in this category is the self-driving car.

Self-driving cars are limited memory AI systems that make immediate decisions using data from the recent past.

For example, self-driving cars use sensors to identify steep roads, traffic signals, and civilians crossing the streets. The vehicles can then use this information to make better driving decisions and avoid accidents.
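The toy sketch below illustrates the limited-memory idea under invented numbers: a controller keeps only a short rolling window of recent distance readings and decides, from that window alone, whether to brake.

```python
# A toy "limited memory" controller: a short rolling window of recent sensor
# readings drives the immediate decision. Readings and threshold are made up.
from collections import deque

class LimitedMemoryBraking:
    def __init__(self, window=5, closing_threshold=2.0):
        self.recent_distances = deque(maxlen=window)   # metres to obstacle
        self.closing_threshold = closing_threshold     # m/s considered "closing fast"

    def update(self, distance_m):
        self.recent_distances.append(distance_m)
        if len(self.recent_distances) < 2:
            return "maintain"
        # Approximate closing speed from the short history (one reading per second).
        closing_speed = (self.recent_distances[0] - self.recent_distances[-1]) / (
            len(self.recent_distances) - 1
        )
        return "brake" if closing_speed > self.closing_threshold else "maintain"

controller = LimitedMemoryBraking()
for reading in [30, 27, 23, 18, 12]:        # obstacle approaching quickly
    decision = controller.update(reading)
print(decision)                              # "brake"
```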

In psychology, theory of mind refers to the ability to attribute mental states (beliefs, intent, desires, emotions, knowledge) to oneself and others. It's the fundamental reason we can have social interactions.

Unfortunately, we're yet to reach the theory of mind artificial intelligence type. Although voice assistants exhibit such capabilities, it's still a one-way relationship.

For example, you could yell angrily at Google Maps to take you in another direction. However, it'll neither show concern for your distress nor offer emotional support. Instead, the map application will return the same traffic report and ETA.

An AI system with Theory of Mind would understand that humans have thoughts, feelings, and expectations for how to be treated. That way, it can adjust its response accordingly.

The final step of AI development is to build self-aware machines that can form representations of themselves. It's an extension and advancement of theory of mind AI.

A self-aware machine has human-level consciousness, with the ability to think, desire, and understand its feelings. At the moment, these types of artificial intelligence only exist in movies and comic book pages. Self-aware machines do not exist.

Although self-aware machines are still decades away, several artificial intelligence examples already exist in our everyday lives.

Several examples of artificial intelligence impact our lives today. These include FaceID on iPhones, the search algorithm on Google, and the recommendation algorithm on Netflix. You'll also find other examples of how AI is in use today on social media, in digital assistants like Alexa, and in ride-hailing apps such as Uber.

Virtual filters on Snapchat and the FaceID unlock on iPhones are two examples of AI applications today. While the former uses face detection technology to identify any face, the latter relies on face recognition.

So, how does it work?

The TrueDepth camera on Apple devices projects over 30,000 invisible dots to create a depth map of your face. It also captures an infrared image of the user's face.

After that, a machine learning algorithm compares the scan of your face with previously enrolled facial data. That way, it can determine whether to unlock the device or not.
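Apple has not published FaceID's internals, so the following is only a generic sketch of how the comparison step is commonly done in face recognition: each scan is reduced to a numeric embedding, and the device unlocks only if the new embedding is close enough to the enrolled one. The vectors and threshold are placeholders.

```python
# Generic face-matching sketch (not Apple's actual algorithm): compare a new
# scan's embedding to the enrolled embedding and unlock above a threshold.
import math

def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

def should_unlock(new_scan_embedding, enrolled_embedding, threshold=0.95):
    """Unlock only when the new scan closely matches the enrolled face."""
    return cosine_similarity(new_scan_embedding, enrolled_embedding) >= threshold

enrolled = [0.12, 0.87, 0.33, 0.51]          # stored at enrollment
tonight  = [0.11, 0.88, 0.35, 0.50]          # new scan, slight lighting change
stranger = [0.90, 0.10, 0.75, 0.05]

print(should_unlock(tonight, enrolled))      # True  (same person)
print(should_unlock(stranger, enrolled))     # False (different face)
```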

According to Apple, FaceID automatically adapts to changes in the user's appearance. These include wearing cosmetic makeup, growing facial hair, or wearing hats, glasses, or contact lenses.

The Cupertino-based tech giant also stated that the chance of fooling FaceID is one in a million.

Several text editors today rely on artificial intelligence to provide the best writing experience.

For example, document editors use an NLP algorithm to identify incorrect grammar usage and suggest corrections. Besides auto-correction, some writing tools also provide readability and plagiarism grades.

However, editors such as INK take AI usage a bit further to provide specialized functions. INK uses artificial intelligence to offer smart web content optimization recommendations.

Just recently, INK has released a study showing how its AI-powered writing platform can improve content relevance and help drive traffic to sites. You can read their full study here.

Social media platforms such as Facebook, Twitter, and Instagram rely heavily on artificial intelligence for various tasks.

Currently, these social media platforms use AI to personalize what you see in your feed. The model identifies users' interests and recommends similar content to keep them engaged.

Also, researchers trained AI models to recognize hate keywords, phrases, and symbols in different languages. That way, the algorithm can swiftly take down social media posts that contain hate speech.

Other examples of artificial intelligence in social media are still emerging.

There are plans, for instance, for social media platforms to use artificial intelligence to identify mental health problems. An algorithm could analyze the content a user posts and consumes to detect suicidal tendencies.

Getting answers to queries from a customer service representative can be very time-consuming. That's where artificial intelligence comes in.

Computer scientists train chat robots, or chatbots, to impersonate the conversational styles of customer representatives using natural language processing.

Chatbots can now answer questions that require a detailed response in place of a specific yes or no answer. What's more, the bots can learn from previous bad ratings to ensure maximum customer satisfaction.

As a result, machines now perform basic tasks such as answering FAQs or taking and tracking orders.

Media streaming platforms such as Netflix, YouTube, and Spotify rely on smart recommendation systems that are powered by AI.

First, the system collects data on users interests and behavior using various online activities. After that, machine learning and deep learning algorithms analyze the data to predict preferences.

That's why you'll always find movies that you're likely to watch among Netflix's recommendations, and you won't have to search any further.
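As a toy illustration of that pipeline (not any platform's real system; the ratings are invented), the sketch below finds the user whose history most resembles yours and suggests a title they rated highly that you have not watched.

```python
# A toy neighbour-based recommender: pick the most similar user and borrow
# their highest-rated unseen title. All data is invented for illustration.
RATINGS = {
    "alice": {"Stranger Things": 5, "The Crown": 2, "Dark": 5},
    "bob":   {"Stranger Things": 4, "Dark": 5, "Black Mirror": 5},
    "carol": {"The Crown": 5, "Bridgerton": 4},
}

def similarity(user_a, user_b):
    """Agreement on commonly rated titles (deliberately simple)."""
    common = set(RATINGS[user_a]) & set(RATINGS[user_b])
    return sum(1 / (1 + abs(RATINGS[user_a][t] - RATINGS[user_b][t])) for t in common)

def recommend(user):
    others = [u for u in RATINGS if u != user]
    nearest = max(others, key=lambda u: similarity(user, u))
    unseen = [t for t in RATINGS[nearest] if t not in RATINGS[user]]
    # Suggest the neighbour's highest-rated title the user hasn't watched.
    return max(unseen, key=lambda t: RATINGS[nearest][t]) if unseen else None

print(recommend("alice"))   # "Black Mirror", borrowed from the most similar user
```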

Search algorithms ensure that the top results on the search engine result page (SERP) have the answers to our queries. But how does this happen?

Search companies usually include some type of quality control algorithm to recognize high-quality content. It then provides a list of search results that best answer the query and offers the best user experience.

Since search engines are made entirely of code, they rely on natural language processing (NLP) technology to understand queries.

Last year, Google announced Bidirectional Encoder Representations from Transformers (BERT), an NLP pre-training technique. Now, the technology powers almost every English-based query on Google Search.
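Real search stacks are vastly more sophisticated, but the following sketch captures the basic ranking idea using invented pages: score each page by how many query terms it shares with the query, normalize mildly by page length, and return the best matches first.

```python
# A deliberately simple ranking sketch (nothing like a production search
# engine): term-overlap scoring with mild length normalisation.
import re
from collections import Counter

PAGES = {
    "page_a": "How reinforcement learning helps autonomous cars avoid congestion",
    "page_b": "Banana bread recipe with walnuts and brown butter",
    "page_c": "Reinforcement learning tutorial: agents, rewards, and policies",
}

def tokens(text):
    return re.findall(r"[a-z]+", text.lower())

def score(query, page_text):
    query_terms = set(tokens(query))
    page_counts = Counter(tokens(page_text))
    overlap = sum(page_counts[t] for t in query_terms)
    return overlap / (1 + len(page_counts))       # mild length normalisation

def search(query, top_k=2):
    ranked = sorted(PAGES, key=lambda p: score(query, PAGES[p]), reverse=True)
    return ranked[:top_k]

print(search("reinforcement learning rewards"))   # ['page_c', 'page_a']
```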

In October 2011, Apple's Siri became the first digital assistant to come standard on a smartphone. However, voice assistants have come a long way since then.

Today, Google Assistant incorporates advanced NLP and ML to become well-versed in human language. Not only does it understand complex commands, but it also provides satisfactory outputs.

Also, digital assistants now have adaptive capabilities for analyzing user preferences, habits, and schedules. That way, they can organize and plan actions such as reminders, prompts, and schedules.

Various smart home devices now use AI applications to conserve energy.

For example, smart thermostats such as Nest use our daily habits and heating/cooling preferences to adjust home temperatures. Likewise, smart refrigerators can create shopping lists based on what's absent from the fridge's shelves.

The way we use artificial intelligence at home is still evolving. More AI solutions now analyze human behavior and function accordingly.

We encounter AI daily, whether we're surfing the internet or listening to music on Spotify.

Other examples of artificial intelligence are visible in smart email apps, e-commerce, smart keyboard apps, as well as banking and finance. Artificial intelligence now plays a significant role in our decisions and lifestyle.

The media may have portrayed AI as a competitor to human workers or a concept that'll eventually take over the world. But that's not the case.

Instead, artificial intelligence is helping humans become more productive and live better lives.

More:
8 Examples of Artificial Intelligence in our Everyday Lives - Edgy Labs