Archive for the ‘Artificial Intelligence’ Category

CDAC, IITs to jointly offer online course on artificial intelligence – The Indian Express

Students with basic knowledge of machine learning can apply for an online course on applied artificial intelligence (AI) offered by select Indian Institutes of Technology (IITs).

The course will teach ways to implement AI for industrial use and in domains such as healthcare, smart city applications and more.

The course, which includes demonstrations, code walkthroughs and industrial use cases, is part of the ongoing National Supercomputing Mission (NSM). This six-year-old mission is being jointly led by the Centre for Development of Advanced Computing (CDAC) and the Indian Institute of Science, under the aegis of the Department of Science and Technology and the Ministry of Electronics and IT.

The online course, to be jointly conducted by IITs Kharagpur, Madras, Palakkad and Goa, will cover topics such as fundamentals of AI accelerators and system setup, accelerated deep learning, end-to-end accelerated data science and industrial use cases of accelerated AI.

For registrations and further details, applicants can visit iitgoa.ac.in/aishikshaai/schedule.php

The 33-session course will commence on January 31 and is best suited for students in their third and fourth years of engineering from any stream, science postgraduates, PhD scholars and working professionals.

Read the original:
CDAC, IITs to jointly offer online course on artificial intelligence - The Indian Express

Artificial Intelligence Used To Search for the Next SARS-COV-2 – SciTechDaily

Rhinolophus rouxi, which inhabits parts of South Asia, was identified as a likely but undetected betacoronavirus host by the study authors. Credit: Brock and Sherri Fenton

Daniel Becker, an assistant professor of biology in the University of Oklahoma's Dodge Family College of Arts and Sciences, has been leading a proactive modeling study over the last year and a half to identify bat species that are likely to carry betacoronaviruses, including but not limited to SARS-like viruses.

The study, "Optimizing predictive models to prioritize viral discovery in zoonotic reservoirs," which was published in The Lancet Microbe, was guided by Becker; Greg Albery, a postdoctoral fellow at Georgetown University's Bansal Lab; and Colin J. Carlson, an assistant research professor at Georgetown's Center for Global Health Science and Security.

It also included collaborators from the University of Idaho, Louisiana State University, University of California, Berkeley, Colorado State University, Pacific Lutheran University, Icahn School of Medicine at Mount Sinai, University of Glasgow, Université de Montréal, University of Toronto, Ghent University, University College Dublin, Cary Institute of Ecosystem Studies, and the American Museum of Natural History.

Becker and colleagues' study is part of the broader efforts of an international research team called the Verena Consortium (viralemergence.org), which works to predict which viruses could infect humans, which animals host them, and where they could emerge. Albery and Carlson co-founded the consortium in 2020, with Becker as a founding member.

Despite global investments in disease surveillance, it remains difficult to identify and monitor wildlife reservoirs of viruses that could someday infect humans. Statistical models are increasingly being used to prioritize which wildlife species to sample in the field, but the predictions being generated from any one model can be highly uncertain. Scientists also rarely track the success or failure of their predictions after they make them, making it hard to learn and make better models in the future. Together, these limitations mean that there is high uncertainty in which models may be best suited to the task.

In this study, researchers used bat hosts of betacoronaviruses, a large group of viruses that includes those responsible for SARS and COVID-19, as a case study for how to dynamically use data to compare and validate these predictive models of likely reservoir hosts. The study is the first to prove that machine learning models can optimize wildlife sampling for undiscovered viruses and illustrates how these models are best implemented through a dynamic process of prediction, data collection, validation and updating.

In the first quarter of 2020, researchers trained eight different statistical models that predicted which kinds of animals could host betacoronaviruses. Over more than a year, the team then tracked discovery of 40 new bat hosts of betacoronaviruses to validate initial predictions and dynamically update their models. The researchers found that models harnessing data on bat ecology and evolution performed extremely well at predicting new hosts of betacoronaviruses. In contrast, cutting-edge models from network science that used high-level mathematics but less biological data performed roughly as well or worse than expected at random.
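To make that prediction-validation-update cycle concrete, here is a minimal sketch in Python with scikit-learn. It is not the study's actual pipeline: the trait columns, candidate models and data below are invented purely for illustration.

```python
# Minimal sketch of a predict-validate-update loop for ranking candidate
# reservoir hosts. This is NOT the study's code; the trait columns and the
# choice of models are illustrative assumptions.
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

def train_candidates(traits: pd.DataFrame, known_host: pd.Series) -> dict:
    """Fit several candidate models on species traits vs. known host status."""
    models = {
        "random_forest": RandomForestClassifier(n_estimators=500, random_state=0),
        "logistic": LogisticRegression(max_iter=1000),
    }
    for model in models.values():
        model.fit(traits, known_host)
    return models

def validate(models: dict, traits: pd.DataFrame, new_hosts: pd.Series) -> dict:
    """Score each model's earlier predictions against newly discovered hosts."""
    return {name: roc_auc_score(new_hosts, m.predict_proba(traits)[:, 1])
            for name, m in models.items()}

# Example with made-up data: 300 bat species, 4 ecological/phylogenetic traits.
rng = np.random.default_rng(0)
traits = pd.DataFrame(rng.normal(size=(300, 4)),
                      columns=["body_mass", "range_size", "diet_breadth", "phylo_pc1"])
known_host = pd.Series(rng.integers(0, 2, size=300))        # status in early 2020
newly_discovered = pd.Series(rng.integers(0, 2, size=300))  # hosts found later

models = train_candidates(traits, known_host)
print(validate(models, traits, newly_discovered))  # compare, then update the best model
```

In practice, the best-performing model would then be re-fit on the combined old and new host records before producing the next round of sampling priorities.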

Importantly, their revised models predicted over 400 bat species globally that could be undetected hosts of betacoronaviruses, not only in Southeast Asia but also in sub-Saharan Africa and the Western Hemisphere. Although 21 species of horseshoe bats (in the Rhinolophus genus) are known to be hosts of SARS-like viruses, the researchers found that at least half of the plausible betacoronavirus reservoirs in this bat genus might still be undetected.

"One of the most important things our study gives us is a data-driven shortlist of which bat species should be studied further," said Becker, who added that his team is now working with field biologists and museums to put their predictions to use. "After identifying these likely hosts, the next step is to invest in monitoring to understand where and when betacoronaviruses are likely to spill over."

Becker added that although the origins of SARS-CoV-2 remain uncertain, the spillover of other viruses from bats has been triggered by forms of habitat disturbance, such as agriculture or urbanization.

"Bat conservation is therefore an important part of public health, and our study shows that learning more about the ecology of these animals can help us better predict future spillover events," he said.

For more on this research, see "Shall We Play a Game? Researchers Use AI To Search for the Next COVID/SARS-Like Virus."

Reference: "Optimising predictive models to prioritise viral discovery in zoonotic reservoirs" by Daniel J. Becker, PhD; Gregory F. Albery, PhD; Anna R. Sjodin, PhD; Timothée Poisot, PhD; Laura M. Bergner, PhD; Binqi Chen; Lily E. Cohen, MPhil; Tad A. Dallas, PhD; Evan A. Eskew, PhD; Anna C. Fagre, DVM; Maxwell J. Farrell, PhD; Sarah Guth, BA; Barbara A. Han, PhD; Nancy B. Simmons, PhD; Michiel Stock, PhD; Emma C. Teeling, PhD; and Colin J. Carlson, PhD, 10 January 2022, The Lancet Microbe. DOI: 10.1016/S2666-5247(21)00245-7

Visit link:
Artificial Intelligence Used To Search for the Next SARS-COV-2 - SciTechDaily

Artificial Intelligence (AI) – United States Department of …

A global technology revolution is now underway. The world's leading powers are racing to develop and deploy new technologies like artificial intelligence and quantum computing that could shape everything about our lives, from where we get energy, to how we do our jobs, to how wars are fought. We want America to maintain our scientific and technological edge, because it's critical to our thriving in the 21st-century economy.

Investments in AI have led to transformative advances now impacting our everyday lives, including mapping technologies, voice-assisted smart phones, handwriting recognition for mail delivery, financial trading, smart logistics, spam filtering, language translation, and more. AI advances are also providing great benefits to our social wellbeing in areas such as precision medicine, environmental sustainability, education, and public welfare.

The term artificial intelligence means a machine-based system that can, for a given set of human-defined objectives, make predictions, recommendations or decisions influencing real or virtual environments.

The Department of State focuses on AI because it is at the center of the global technological revolution; advances in AI technology present both great opportunities and challenges. The United States, along with our partners and allies, can both further our scientific and technological capabilities and promote democracy and human rights by working together to identify and seize the opportunities while meeting the challenges by promoting shared norms and agreements on the responsible use of AI.

Together with our allies and partners, the Department of State promotes an international policy environment and works to build partnerships that further our capabilities in AI technologies, protect our national and economic security, and promote our values. Accordingly, the Department engages in various bilateral and multilateral discussions to support responsible development, deployment, use, and governance of trustworthy AI technologies.

The Department provides policy guidance to implement trustworthy AI through the Organization for Economic Cooperation and Development (OECD) AI Policy Observatory, a platform established in February 2020 to facilitate dialogue between stakeholders and provide evidence-based policy analysis in the areas where AI has the most impact. The State Department provides leadership and support to the OECD Network of Experts on AI (ONE AI), which informs this analysis. The United States has 47 AI initiatives associated with the Observatory that help contribute to COVID-19 response, invest in workforce training, promote safety guidance for automated transportation technologies, and more.

The OECD's Recommendation on Artificial Intelligence is the backbone of the activities at the Global Partnership on Artificial Intelligence (GPAI) and the OECD AI Policy Observatory. In May 2019, the United States joined together with likeminded democracies of the world in adopting the OECD Recommendation on Artificial Intelligence, the first set of intergovernmental principles for trustworthy AI. The principles promote inclusive growth, human-centered values, transparency, safety and security, and accountability. The Recommendation also encourages national policies and international cooperation to invest in research and development and support the broader digital ecosystem for AI. The Department of State champions the principles as the benchmark for trustworthy AI, which helps governments design national legislation.

GPAI is a voluntary, multi-stakeholder initiative launched in June 2020 for the advancement of AI in a manner consistent with democratic values and human rights. GPAI's mandate is focused on project-oriented collaboration, which it supports through working groups looking at responsible AI, data governance, the future of work, and commercialization and innovation. As a founding member, the United States has played a critical role in guiding GPAI and ensuring it complements the work of the OECD.

In the context of military operations in armed conflict, the United States believes that international humanitarian law (IHL) provides a robust and appropriate framework for the regulation of all weapons, including those using autonomous functions provided by technologies such as AI. Building a better common understanding of the potential risks and benefits that are presented by weapons with autonomous functions, in particular their potential to strengthen compliance with IHL and mitigate risk of harm to civilians, should be the focus of international discussion. The United States supports the progress in this area made by the Convention on Certain Conventional Weapons, Group of Governmental Experts on Emerging Technologies in the Area of Lethal Autonomous Weapon Systems (GGE on LAWS), which adopted by consensus 11 Guiding Principles on responsible development and use of LAWS in 2019. The State Department will continue to work with our colleagues at the Department of Defense to engage the international community within the LAWS GGE.

Learn more about what specific bureaus and offices are doing to support this policy issue:

The Global Engagement Center has developed a dedicated effort for the U.S. Government to identify, assess, test and implement technologies against the problems of foreign propaganda and disinformation, in cooperation with foreign partners, private industry and academia.

The Office of the Under Secretary for Management uses AI technologies within the Department of State to advance traditional diplomatic activities, applying machine learning to internal information technology and management consultant functions.

The Office of the Under Secretary of State for Economic Growth, Energy, and the Environment engages internationally to support the U.S. science and technology (S&T) enterprise through global AI research and development (R&D) partnerships, setting fair rules of the road for economic competition, advocating for U.S. companies, and enabling foreign policy and regulatory environments that benefit U.S. capabilities in AI.

The Office of the Under Secretary of State for Arms Control and International Security focuses on the security implications of AI, including potential applications in weapon systems, its impact on U.S. military interoperability with its allies and partners, its impact on stability, and export controls related to AI.

The Office of the Under Secretary for Civilian Security, Democracy, and Human Rights and its component bureaus and offices focus on issues related to AI and governance, human rights (including religious freedom), and law enforcement and crime, among others.

The Office of the Legal Adviser leads on issues relating to AI in weapon systems (LAWS), in particular at the Group of Governmental Experts on Lethal Autonomous Weapons Systems convened under the auspices of the Convention on Certain Conventional Weapons.

For more information on federal programs and policy on artificial intelligence, visit ai.gov.

Read the original here:
Artificial Intelligence (AI) - United States Department of ...

What is Artificial Intelligence (AI)? – India | IBM

Artificial intelligence leverages computers and machines to mimic the problem-solving and decision-making capabilities of the human mind.

While a number of definitions of artificial intelligence (AI) have surfaced over the last few decades, John McCarthy offers the following definition in his 2004 paper (PDF, 106 KB) (link resides outside IBM): "It is the science and engineering of making intelligent machines, especially intelligent computer programs. It is related to the similar task of using computers to understand human intelligence, but AI does not have to confine itself to methods that are biologically observable."

However, decades before this definition, the birth of the artificial intelligence conversation was marked by Alan Turing's seminal work, "Computing Machinery and Intelligence" (PDF, 89.8 KB) (link resides outside of IBM), which was published in 1950. In this paper, Turing, often referred to as the "father of computer science," asks the following question: "Can machines think?" From there, he offers a test, now famously known as the "Turing Test," in which a human interrogator tries to distinguish between a computer's and a human's text responses. While this test has undergone much scrutiny since its publication, it remains an important part of the history of AI, as well as an ongoing concept within philosophy, as it draws on ideas from linguistics.

Stuart Russell and Peter Norvig then published Artificial Intelligence: A Modern Approach (link resides outside IBM), which became one of the leading textbooks in the study of AI. In it, they delve into four potential goals or definitions of AI, which differentiate computer systems on the basis of rationality and of thinking vs. acting:

Human approach:
Systems that think like humans
Systems that act like humans

Ideal approach:
Systems that think rationally
Systems that act rationally

Alan Turing's definition would have fallen under the category of systems that act like humans.

In its simplest form, artificial intelligence is a field that combines computer science and robust datasets to enable problem-solving. It also encompasses the sub-fields of machine learning and deep learning, which are frequently mentioned in conjunction with artificial intelligence. These disciplines consist of AI algorithms that seek to create expert systems which make predictions or classifications based on input data.

Today, a lot of hype still surrounds AI development, which is expected of any new emerging technology in the market. As noted in Gartner's hype cycle (link resides outside IBM), product innovations like self-driving cars and personal assistants follow a typical progression of innovation, from overenthusiasm through a period of disillusionment to an eventual understanding of the innovation's relevance and role in a market or domain. As Lex Fridman noted in his 2019 MIT lecture (link resides outside IBM), we are at the peak of inflated expectations, approaching the trough of disillusionment.

As conversations emerge around the ethics of AI, we can begin to see the initial glimpses of the trough of disillusionment. To learn more about where IBM stands within the conversation around AI ethics, read more here.

Weak AI, also called narrow AI or artificial narrow intelligence (ANI), is AI trained and focused to perform specific tasks. Weak AI drives most of the AI that surrounds us today. "Narrow" might be a more accurate descriptor for this type of AI, as it is anything but weak; it enables some very robust applications, such as Apple's Siri, Amazon's Alexa, IBM Watson, and autonomous vehicles.

Strong AI is made up of artificial general intelligence (AGI) and artificial super intelligence (ASI). Artificial general intelligence (AGI), or general AI, is a theoretical form of AI in which a machine would have intelligence equal to that of humans; it would have a self-aware consciousness with the ability to solve problems, learn, and plan for the future. Artificial super intelligence (ASI), also known as superintelligence, would surpass the intelligence and ability of the human brain. While strong AI is still entirely theoretical, with no practical examples in use today, that doesn't mean AI researchers aren't exploring its development. In the meantime, the best examples of ASI might be from science fiction, such as HAL, the superhuman, rogue computer assistant in 2001: A Space Odyssey.

Since deep learning and machine learning tend to be used interchangeably, it's worth noting the nuances between the two. As mentioned above, both deep learning and machine learning are sub-fields of artificial intelligence, and deep learning is actually a sub-field of machine learning.

Deep learning is built on neural networks. The "deep" in deep learning refers to a neural network comprising more than three layers, inclusive of the input and output layers; such a network can be considered a deep learning algorithm.
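As an illustration only (the article names no particular library), the following sketch assumes scikit-learn and builds a classifier with three hidden layers, which makes more than three layers in total once the input and output layers are counted.

```python
# A minimal "deep" network sketch: an input layer, three hidden layers and an
# output layer, using scikit-learn's MLPClassifier on synthetic data.
# Illustrative only; the surrounding article does not prescribe this library.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

deep_net = MLPClassifier(hidden_layer_sizes=(64, 32, 16),  # three hidden layers
                         activation="relu", max_iter=500, random_state=0)
deep_net.fit(X_train, y_train)
print("test accuracy:", deep_net.score(X_test, y_test))
```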

The way in which deep learning and machine learning differ is in how each algorithm learns. Deep learning automates much of the feature extraction piece of the process, eliminating some of the manual human intervention required and enabling the use of larger data sets. You can think of deep learning as "scalable machine learning," as Lex Fridman noted in the same MIT lecture referenced above. Classical, or "non-deep," machine learning is more dependent on human intervention to learn. Human experts determine the hierarchy of features to understand the differences between data inputs, usually requiring more structured data to learn.

"Deep" machine learning can leverage labeled datasets, also known as supervised learning, to inform its algorithm, but it doesnt necessarily require a labeled dataset. It can ingest unstructured data in its raw form (e.g. text, images), and it can automatically determine the hierarchy of features which distinguish different categories of data from one another. Unlike machine learning, it doesn't require human intervention to process data, allowing us to scale machine learning in more interesting ways.

There are numerous real-world applications of AI systems today. Below are some of the most common examples:

The idea of 'a machine that thinks' dates back to ancient Greece. But since the advent of electronic computing (and relative to some of the topics discussed in this article) important events and milestones in the evolution of artificial intelligence include the following:

IBM has been a leader in advancing AI-driven technologies for enterprises and has pioneered the future of machine learning systems for multiple industries. Based on decades of AI research, years of experience working with organizations of all sizes, and on learnings from over 30,000 IBM Watson engagements, IBM has developed the AI Ladder for successful artificial intelligence deployments:

IBM Watson gives enterprises the AI tools they need to transform their business systems and workflows, while significantly improving automation and efficiency. For more information on how IBM can help you complete your AI journey, explore the IBM portfolio of managed services and solutions.

Sign up for an IBMid and create your IBM Cloud account.

Visit link:
What is Artificial Intelligence (AI)? - India | IBM

What is Artificial Intelligence (AI)? – AI Definition and …

Artificial intelligence is the simulation of human intelligence processes by machines, especially computer systems. Specific applications of AI include expert systems, natural language processing, speech recognition and machine vision.

As the hype around AI has accelerated, vendors have been scrambling to promote how their products and services use AI. Often what they refer to as AI is simply one component of AI, such as machine learning. AI requires a foundation of specialized hardware and software for writing and training machine learning algorithms. No one programming language is synonymous with AI, but a few, including Python, R and Java, are popular.

In general, AI systems work by ingesting large amounts of labeled training data, analyzing the data for correlations and patterns, and using these patterns to make predictions about future states. In this way, a chatbot that is fed examples of text chats can learn to produce lifelike exchanges with people, or an image recognition tool can learn to identify and describe objects in images by reviewing millions of examples.
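A minimal sketch of that ingest-labeled-data, learn-patterns, predict workflow, assuming Python with scikit-learn and invented chatbot-style training sentences, might look like this:

```python
# Minimal sketch of the labeled-data -> patterns -> predictions workflow
# described above, using a tiny text-classification example. The example
# sentences and labels are made up for illustration.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

train_texts = ["where is my order", "the app keeps crashing",
               "I want a refund", "cannot log in to my account"]
train_labels = ["shipping", "technical", "billing", "technical"]

model = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
model.fit(train_texts, train_labels)                     # learn correlations
print(model.predict(["my payment was charged twice"]))   # predict a new case
```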

AI programming focuses on three cognitive skills: learning, reasoning and self-correction.

Learning processes. This aspect of AI programming focuses on acquiring data and creating rules for how to turn the data into actionable information. The rules, which are called algorithms, provide computing devices with step-by-step instructions for how to complete a specific task.

Reasoning processes. This aspect of AI programming focuses on choosing the right algorithm to reach a desired outcome.

Self-correction processes. This aspect of AI programming is designed to continually fine-tune algorithms and ensure they provide the most accurate results possible.
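One hedged way to picture how these three aspects fit together, assuming Python with scikit-learn and synthetic data, is sketched below: cross-validated model selection stands in for reasoning, fitting for learning, and incremental updates for self-correction.

```python
# Hedged sketch mapping the three aspects above onto a scikit-learn workflow.
# The mapping and the data are illustrative, not a formal definition.
import numpy as np
from sklearn.linear_model import SGDClassifier
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
X, y = rng.normal(size=(500, 10)), rng.integers(0, 2, size=500)

# Reasoning: choose the algorithm most likely to reach the desired outcome.
candidates = {"sgd": SGDClassifier(random_state=0),
              "tree": DecisionTreeClassifier(random_state=0)}
scores = {name: cross_val_score(est, X, y, cv=5).mean()
          for name, est in candidates.items()}
best_name = max(scores, key=scores.get)

# Learning: turn the data into rules (fitted model parameters).
best = candidates[best_name].fit(X, y)

# Self-correction: keep refining as new data arrives (SGD supports partial_fit).
if best_name == "sgd":
    X_new, y_new = rng.normal(size=(50, 10)), rng.integers(0, 2, size=50)
    best.partial_fit(X_new, y_new)
```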

AI is important because it can give enterprises insights into their operations that they may not have been aware of previously and because, in some cases, AI can perform tasks better than humans. Particularly when it comes to repetitive, detail-oriented tasks like analyzing large numbers of legal documents to ensure relevant fields are filled in properly, AI tools often complete jobs quickly and with relatively few errors.

This has helped fuel an explosion in efficiency and opened the door to entirely new business opportunities for some larger enterprises. Prior to the current wave of AI, it would have been hard to imagine using computer software to connect riders to taxis, but today Uber has become one of the largest companies in the world by doing just that. It utilizes sophisticated machine learning algorithms to predict when people are likely to need rides in certain areas, which helps proactively get drivers on the road before they're needed. As another example, Google has become one of the largest players for a range of online services by using machine learning to understand how people use their services and then improving them. In 2017, the company's CEO, Sundar Pichai, pronounced that Google would operate as an "AI first" company.

Today's largest and most successful enterprises have used AI to improve their operations and gain an advantage over their competitors.

Artificial neural networks and deep learning artificial intelligence technologies are quickly evolving, primarily because AI processes large amounts of data much faster and makes predictions more accurately than humanly possible.

While the huge volume of data being created on a daily basis would bury a human researcher, AI applications that use machine learning can take that data and quickly turn it into actionable information. As of this writing, the primary disadvantage of using AI is that it is expensive to process the large amounts of data that AI programming requires.

Advantages

Disadvantages

AI can be categorized as either weak or strong.

Arend Hintze, an assistant professor of integrative biology and computer science and engineering at Michigan State University, explained in a 2016 article that AI can be categorized into four types, beginning with the task-specific intelligent systems in wide use today and progressing to sentient systems, which do not yet exist. The categories are as follows: reactive machines, limited memory, theory of mind and self-awareness.

AI is incorporated into a variety of different types of technology. Here are six examples:

Artificial intelligence has made its way into a wide variety of markets. Here are nine examples.

AI in healthcare. The biggest bets are on improving patient outcomes and reducing costs. Companies are applying machine learning to make better and faster diagnoses than humans. One of the best-known healthcare technologies is IBM Watson. It understands natural language and can respond to questions asked of it. The system mines patient data and other available data sources to form a hypothesis, which it then presents with a confidence scoring schema. Other AI applications include using online virtual health assistants and chatbots to help patients and healthcare customers find medical information, schedule appointments, understand the billing process and complete other administrative processes. An array of AI technologies is also being used to predict, fight and understand pandemics such as COVID-19.

AI in business. Machine learning algorithms are being integrated into analytics and customer relationship management (CRM) platforms to uncover information on how to better serve customers. Chatbots have been incorporated into websites to provide immediate service to customers. Automation of job positions has also become a talking point among academics and IT analysts.

AI in education. AI can automate grading, giving educators more time. It can assess students and adapt to their needs, helping them work at their own pace. AI tutors can provide additional support to students, ensuring they stay on track. And it could change where and how students learn, perhaps even replacing some teachers.

AI in finance. AI in personal finance applications, such as Intuit Mint or TurboTax, is disrupting financial institutions. Applications such as these collect personal data and provide financial advice. Other programs, such as IBM Watson, have been applied to the process of buying a home. Today, artificial intelligence software performs much of the trading on Wall Street.

AI in law. The discovery process -- sifting through documents -- in law is often overwhelming for humans. Using AI to help automate the legal industry's labor-intensive processes is saving time and improving client service. Law firms are using machine learning to describe data and predict outcomes, computer vision to classify and extract information from documents and natural language processing to interpret requests for information.

AI in manufacturing. Manufacturing has been at the forefront of incorporating robots into the workflow. For example, industrial robots that were at one time programmed to perform single tasks and were separated from human workers increasingly function as cobots: smaller, multitasking robots that collaborate with humans and take on responsibility for more parts of the job in warehouses, on factory floors and in other workspaces.

AI in banking. Banks are successfully employing chatbots to make their customers aware of services and offerings and to handle transactions that don't require human intervention. AI virtual assistants are being used to improve and cut the costs of compliance with banking regulations. Banking organizations are also using AI to improve their decision-making for loans, and to set credit limits and identify investment opportunities.

AI in transportation. In addition to AI's fundamental role in operating autonomous vehicles, AI technologies are used in transportation to manage traffic, predict flight delays, and make ocean shipping safer and more efficient.

Security. AI and machine learning are at the top of the buzzword list security vendors use today to differentiate their offerings. Those terms also represent truly viable technologies. Organizations use machine learning in security information and event management (SIEM) software and related areas to detect anomalies and identify suspicious activities that indicate threats. By analyzing data and using logic to identify similarities to known malicious code, AI can provide alerts to new and emerging attacks much sooner than human employees and previous technology iterations. The maturing technology is playing a big role in helping organizations fight off cyber attacks.
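As a hedged illustration of that anomaly-detection idea (not any vendor's actual SIEM pipeline), the sketch below flags an unusual event with scikit-learn's IsolationForest; the event features and values are invented.

```python
# Hedged sketch of anomaly detection on made-up event features
# (e.g. hourly login count, GB of outbound data). Real SIEM pipelines are
# far more involved; this only shows the core detection step.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
normal_events = rng.normal(loc=[20, 1.0], scale=[5, 0.3], size=(1000, 2))
suspicious = np.array([[400, 9.5]])   # a burst of logins and outbound data

detector = IsolationForest(contamination=0.01, random_state=0).fit(normal_events)
print(detector.predict(suspicious))   # -1 flags the event as anomalous
```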

Some industry experts believe the term artificial intelligence is too closely linked to popular culture, and this has caused the general public to have improbable expectations about how AI will change the workplace and life in general.

While AI tools present a range of new functionality for businesses, the use of artificial intelligence also raises ethical questions because, for better or worse, an AI system will reinforce what it has already learned.

This can be problematic because machine learning algorithms, which underpin many of the most advanced AI tools, are only as smart as the data they are given in training. Because a human being selects what data is used to train an AI program, the potential for machine learning bias is inherent and must be monitored closely.

Anyone looking to use machine learning as part of real-world, in-production systems needs to factor ethics into their AI training processes and strive to avoid bias. This is especially true when using AI algorithms that are inherently unexplainable in deep learning and generative adversarial network (GAN) applications.

Explainability is a potential stumbling block to using AI in industries that operate under strict regulatory compliance requirements. For example, financial institutions in the United States operate under regulations that require them to explain their credit-issuing decisions. When a decision to refuse credit is made by AI programming, however, it can be difficult to explain how the decision was arrived at because the AI tools used to make such decisions operate by teasing out subtle correlations between thousands of variables. When the decision-making process cannot be explained, the program may be referred to as black box AI.
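To illustrate the contrast, the hedged sketch below fits a simple, inspectable model whose coefficients can be read off directly, which is the kind of transparency a deep "black box" model lacks. The feature names and data are invented and do not reflect how any lender actually scores credit.

```python
# Hedged sketch of the explainability contrast: a linear credit model whose
# coefficients can be inspected, versus a model treated as a black box.
# Feature names and data are invented for illustration.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
features = pd.DataFrame(rng.normal(size=(500, 3)),
                        columns=["income", "debt_ratio", "late_payments"])
approved = (features["income"] - features["debt_ratio"]
            - features["late_payments"]
            + rng.normal(scale=0.5, size=500) > 0).astype(int)

model = LogisticRegression().fit(features, approved)
# Each coefficient shows how a feature pushes the decision, supporting the
# kind of explanation lenders are required to provide.
print(dict(zip(features.columns, model.coef_[0].round(2))))
```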

Despite potential risks, there are currently few regulations governing the use of AI tools, and where laws do exist, they typically pertain to AI indirectly. For example, as previously mentioned, United States Fair Lending regulations require financial institutions to explain credit decisions to potential customers. This limits the extent to which lenders can use deep learning algorithms, which by their nature are opaque and lack explainability.

The European Union's General Data Protection Regulation (GDPR) puts strict limits on how enterprises can use consumer data, which impedes the training and functionality of many consumer-facing AI applications.

In October 2016, the National Science and Technology Council issued a report examining the potential role governmental regulation might play in AI development, but it did not recommend specific legislation be considered.

Crafting laws to regulate AI will not be easy, in part because AI comprises a variety of technologies that companies use for different ends, and partly because regulations can come at the cost of AI progress and development. The rapid evolution of AI technologies is another obstacle to forming meaningful regulation of AI. Technology breakthroughs and novel applications can make existing laws instantly obsolete. For example, existing laws regulating the privacy of conversations and recorded conversations do not cover the challenge posed by voice assistants like Amazon's Alexa and Apple's Siri that gather but do not distribute conversation -- except to the companies' technology teams which use it to improve machine learning algorithms. And, of course, the laws that governments do manage to craft to regulate AI don't stop criminals from using the technology with malicious intent.

The terms AI and cognitive computing are sometimes used interchangeably, but, generally speaking, the label AI is used in reference to machines that replace human intelligence by simulating how we sense, learn, process and react to information in the environment.

The label cognitive computing is used in reference to products and services that mimic and augment human thought processes.

The concept of inanimate objects endowed with intelligence has been around since ancient times. The Greek god Hephaestus was depicted in myths as forging robot-like servants out of gold. Engineers in ancient Egypt built statues of gods animated by priests. Throughout the centuries, thinkers from Aristotle to the 13th-century Spanish theologian Ramon Llull to René Descartes and Thomas Bayes used the tools and logic of their times to describe human thought processes as symbols, laying the foundation for AI concepts such as general knowledge representation.

The late 19th and first half of the 20th centuries brought forth the foundational work that would give rise to the modern computer. In 1836, Cambridge University mathematician Charles Babbage and Augusta Ada Byron, Countess of Lovelace, invented the first design for a programmable machine.

1940s. Princeton mathematician John von Neumann conceived the architecture for the stored-program computer -- the idea that a computer's program and the data it processes can be kept in the computer's memory. And Warren McCulloch and Walter Pitts laid the foundation for neural networks.

1950s. With the advent of modern computers, scientists could test their ideas about machine intelligence. One method for determining whether a computer has intelligence was devised by the British mathematician and World War II code-breaker Alan Turing. The Turing Test focused on a computer's ability to fool interrogators into believing its responses to their questions were made by a human being.

1956. The modern field of artificial intelligence is widely cited as starting this year during a summer conference at Dartmouth College. Sponsored by the Defense Advanced Research Projects Agency (DARPA), the conference was attended by 10 luminaries in the field, including AI pioneers Marvin Minsky, Oliver Selfridge and John McCarthy, who is credited with coining the term artificial intelligence. Also in attendance were Allen Newell, a computer scientist, and Herbert A. Simon, an economist, political scientist and cognitive psychologist, who presented their groundbreaking Logic Theorist, a computer program capable of proving certain mathematical theorems and referred to as the first AI program.

1950s and 1960s. In the wake of the Dartmouth College conference, leaders in the fledgling field of AI predicted that a man-made intelligence equivalent to the human brain was around the corner, attracting major government and industry support. Indeed, nearly 20 years of well-funded basic research generated significant advances in AI. For example, in the late 1950s, Newell and Simon published the General Problem Solver (GPS) algorithm, which fell short of solving complex problems but laid the foundations for developing more sophisticated cognitive architectures, and McCarthy developed Lisp, a language for AI programming that is still used today. In the mid-1960s, MIT professor Joseph Weizenbaum developed ELIZA, an early natural language processing program that laid the foundation for today's chatbots.

1970s and 1980s. But the achievement of artificial general intelligence proved elusive, not imminent, hampered by limitations in computer processing and memory and by the complexity of the problem. Government and corporations backed away from their support of AI research, leading to a fallow period lasting from 1974 to 1980 and known as the first "AI Winter." In the 1980s, research on deep learning techniques and industry's adoption of Edward Feigenbaum's expert systems sparked a new wave of AI enthusiasm, only to be followed by another collapse of government funding and industry support. The second AI winter lasted until the mid-1990s.

1990s through today. Increases in computational power and an explosion of data sparked an AI renaissance in the late 1990s that has continued to present times. The latest focus on AI has given rise to breakthroughs in natural language processing, computer vision, robotics, machine learning, deep learning and more. Moreover, AI is becoming ever more tangible, powering cars, diagnosing disease and cementing its role in popular culture. In 1997, IBM's Deep Blue defeated Russian chess grandmaster Garry Kasparov, becoming the first computer program to beat a world chess champion. Fourteen years later, IBM's Watson captivated the public when it defeated two former champions on the game show Jeopardy!. More recently, the historic defeat of 18-time World Go champion Lee Sedol by Google DeepMind's AlphaGo stunned the Go community and marked a major milestone in the development of intelligent machines.

Because hardware, software and staffing costs for AI can be expensive, many vendors are including AI components in their standard offerings or providing access to artificial intelligence as a service (AIaaS) platforms. AIaaS allows individuals and companies to experiment with AI for various business purposes and sample multiple platforms before making a commitment.

Popular AI cloud offerings include the following:

See the original post:
What is Artificial Intelligence (AI)? - AI Definition and ...