Media Search:



Focus on the song rather than its singer – Morning Star Online

Victor Grayson: In Search of Britain's Lost Revolutionary by Harry Taylor (Pluto Press, £16.99)

HARRY TAYLOR hasn't solved every riddle involving Victor Grayson, but his investigation of this complex and enigmatic figure is thorough and enthralling. Grayson, a charismatic and fiery socialist orator, was seduced into serving the establishment he hated before his mysterious disappearance in September 1920.

The book builds on earlier biographies by socialist journalist Reg Groves and Labour peer David Clark, but roundly refutes a bizarre conspiracy theory concocted by right-wing journalist Donald McCormick. The motives for further investigation are Taylor's discovery of fresh information, and his conviction that Grayson's story holds lessons for the British left in the wake of the defeat of Jeremy Corbyn. There's a sense in which we're haunted by Grayson.

We begin with a childhood of privation and challenge in working-class Liverpool. In a pivotal episode, the teenage Grayson stows away on a ship bound for Chile and is put ashore in Pembrokeshire. The 165-mile walk to his home city reveals it has no monopoly on destitution.

The book explores the tension between Grayson's Christianity and socialism, the bisexuality that led to him being blackmailed, his attractiveness to older and wealthier female admirers, and his natural flair for oratory. Taylor rejects the established image of Grayson as a hot-headed troublemaker with no grasp of ideology or policy. His calls for direct action were consonant with the beliefs of the Labour leadership at the time, and he had a working knowledge of Marx's writing.

The tragic sweep of Grayson's life is punctuated with fascinating cameos. For example, I was surprised to discover he received support from writers Hilaire Belloc and GK Chesterton, whose suspicion of socialism was offset by disillusionment with the party-political system.

Grayson's legend centres on an astonishing by-election victory at Colne Valley in 1907, achieved without official Labour Party support, but with a blend of Christian socialism, Marxist economics, and dazzling eloquence. In Parliament, his provocative attempts to secure an unemployment debate led to Labour MPs unanimously supporting Liberals and Tories in voting for his suspension. His behaviour was attributed to drunkenness, but Taylor finds evidence of a premeditated and principled stand.

Grayson's drinking did eventually derail his career. Later, he mustered working-class support for the first world war on behalf of his old adversary Churchill, served at Passchendaele and condemned striking workers.

Taylor's compelling scholarship assesses Grayson's career in the context of his era, but we are challenged to draw conclusions relevant to our own. The author focuses on the need for a Labour Party responsive to the needs of working people and based on strong structures and organisation.

Another moral to be drawn about progress towards socialism is the need to focus on the song rather than its singer.

Read the original post:
Focus on the song rather than its singer - Morning Star Online

Norman Mailer is reaping the anti-whiteness he sowed – UnHerd

Debate

14:30

by Eric Kaufmann

Not so hip anymore

Random House, the publishing giant, recently cancelled plans to publish a collection of Norman Mailer's political writings on the centenary of his birth in 2023, after a junior staffer objected to the title of his 1957 essay, The White Negro. Mailer's essay celebrated what he took to be the uninhibited, expressive ethos of the African-American hipster, with his jazz, style and dance. This Hip sensibility was contrasted to the spiritually repressed and boring Square quality of white America.

Left-modernism is the dominant ideology in western elite culture, sometimes referred to as the successor ideology. It's an uneasy compound of liberalism and socialism, in which the cartridge of liberalism, with its historic concern for the rights of minorities and desire to be free of social constraints, is plugged into the slot in socialism's victim-oppressor console once reserved for the working class.

Mailer's Beat Generation exemplified the Left-modernist ethos, valorising the downtrodden as spiritually superior to the white middle class. Mailer's critique in The White Negro recalled Carl Van Vechten's 1926 novel Nigger Heaven. Van Vechten belonged to America's first generation of cultural Leftists, the Young Intellectuals, who brought drug-taking, modern art and critiques of white Protestantism to New York's Greenwich Village in the 1912-17 period.

Van Vechten's innovation was a form of slumming in which white bohemians started going up to Harlem to see black jazz. Like Mailer, Young Intellectuals like Van Vechten viewed African-Americans as a source of spiritual depth and liberation from the oppressive structures of Protestant white America. As Mailer wrote in his essay, "In such places as Greenwich Village, a ménage-à-trois was completed: the bohemian and the juvenile delinquent came face-to-face with the Negro."

The phenomenon of WASP Americans turning against their own ethnic group began in pre-World War I Greenwich Village. For Randolph Bourne, a key figure in the Young Intellectuals, writing in 1917, The Anglo-Saxon element is guilty of what every dominant race is guilty of in every European country: the imposition of its own culture upon the minority peoples. Bourne equated Anglo-Saxondom with masculine domination, blending anti-whiteness and feminism into a kind of wokeness avant la lettre.

By the 1920s, in the wake of immigration restriction and the prohibition of alcohol, the Left-modernist critique of the country's WASP ethnic majority had become a staple of the American literary world, featuring in novels such as Main Street or even The Great Gatsby.

Where socialism believes in equality-in-similarity, Left-modernism celebrates equality-in-diversity, with little emphasis on community. As radical fifties avatar C. Wright Mills confided, he could appreciate liberty and equality, but not fraternity. Left-modernism appealed to bohemian intellectuals because it allowed them to combine artistic experimentation and self-expression with egalitarian politics. When the Soviet Union banned artistic experimentation in favour of socialist realism in 1938, this helped alienate a significant section of the western cultural Left, many of whom turned against communism.

From the 1910s to the 1960s, Left-modernism largely managed to keep its twin balls of radical Leftism and modernist individualism in the air. But with the victory of civil rights and the rise of minority social movements, and as Left-modernism acquired institutional power through the expansion of universities and television, the ideology wobbled on its axis, elevating its Leftist superego over its modernist id.

Viewed through its new politically-correct lens, the anti-white romanticisation of black Americans exemplified by Van Vechten or Mailer became a personification of the very whiteness they railed against: a micro-aggression rooted in colonialist domination and cultural appropriation.

Mailer chose to ride the shark of anti-whiteness, so his estate shouldn't be surprised when it turns on him.

See the article here:
Norman Mailer is reaping the anti-whiteness he sowed - UnHerd

Artificial Intelligence (AI) – United States Department of …

A global technology revolution is now underway. The world's leading powers are racing to develop and deploy new technologies like artificial intelligence and quantum computing that could shape everything about our lives, from where we get energy, to how we do our jobs, to how wars are fought. We want America to maintain our scientific and technological edge, because it's critical to thriving in the 21st century economy.

Investments in AI have led to transformative advances now impacting our everyday lives, including mapping technologies, voice-assisted smart phones, handwriting recognition for mail delivery, financial trading, smart logistics, spam filtering, language translation, and more. AI advances are also providing great benefits to our social wellbeing in areas such as precision medicine, environmental sustainability, education, and public welfare.

The term artificial intelligence means a machine-based system that can, for a given set of human-defined objectives, make predictions, recommendations or decisions influencing real or virtual environments.

The Department of State focuses on AI because it is at the center of the global technological revolution; advances in AI technology present both great opportunities and challenges. The United States, along with our partners and allies, can both further our scientific and technological capabilities and promote democracy and human rights by working together to identify and seize the opportunities while meeting the challenges by promoting shared norms and agreements on the responsible use of AI.

Together with our allies and partners, the Department of State promotes an international policy environment and works to build partnerships that further our capabilities in AI technologies, protect our national and economic security, and promote our values. Accordingly, the Department engages in various bilateral and multilateral discussions to support responsible development, deployment, use, and governance of trustworthy AI technologies.

The Department provides policy guidance to implement trustworthy AI through the Organization for Economic Cooperation and Development (OECD) AI Policy Observatory, a platform established in February 2020 to facilitate dialogue between stakeholders and provide evidence-based policy analysis in the areas where AI has the most impact. The State Department provides leadership and support to the OECD Network of Experts on AI (ONE AI), which informs this analysis. The United States has 47 AI initiatives associated with the Observatory that help contribute to COVID-19 response, invest in workforce training, promote safety guidance for automated transportation technologies, and more.

The OECD's Recommendation on Artificial Intelligence is the backbone of the activities at the Global Partnership on Artificial Intelligence (GPAI) and the OECD AI Policy Observatory. In May 2019, the United States joined together with likeminded democracies of the world in adopting the OECD Recommendation on Artificial Intelligence, the first set of intergovernmental principles for trustworthy AI. The principles promote inclusive growth, human-centered values, transparency, safety and security, and accountability. The Recommendation also encourages national policies and international cooperation to invest in research and development and support the broader digital ecosystem for AI. The Department of State champions the principles as the benchmark for trustworthy AI, which helps governments design national legislation.

GPAI is a voluntary, multi-stakeholder initiative launched in June 2020 for the advancement of AI in a manner consistent with democratic values and human rights. GPAI's mandate is focused on project-oriented collaboration, which it supports through working groups looking at responsible AI, data governance, the future of work, and commercialization and innovation. As a founding member, the United States has played a critical role in guiding GPAI and ensuring it complements the work of the OECD.

In the context of military operations in armed conflict, the United States believes that international humanitarian law (IHL) provides a robust and appropriate framework for the regulation of all weapons, including those using autonomous functions provided by technologies such as AI. Building a better common understanding of the potential risks and benefits that are presented by weapons with autonomous functions, in particular their potential to strengthen compliance with IHL and mitigate risk of harm to civilians, should be the focus of international discussion. The United States supports the progress in this area made by the Convention on Certain Conventional Weapons, Group of Governmental Experts on Emerging Technologies in the Area of Lethal Autonomous Weapon Systems (GGE on LAWS), which adopted by consensus 11 Guiding Principles on responsible development and use of LAWS in 2019. The State Department will continue to work with our colleagues at the Department of Defense to engage the international community within the LAWS GGE.

Learn more about what specific bureaus and offices are doing to support this policy issue:

The Global Engagement Center has developed a dedicated effort for the U.S. Government to identify, assess, test and implement technologies against the problems of foreign propaganda and disinformation, in cooperation with foreign partners, private industry and academia.

The Office of the Under Secretary for Management uses AI technologies within the Department of State to advance traditional diplomatic activities, applying machine learning to internal information technology and management consultant functions.

The Office of the Under Secretary of State for Economic Growth, Energy, and the Environment engages internationally to support the U.S. science and technology (S&T) enterprise through global AI research and development (R&D) partnerships, setting fair rules of the road for economic competition, advocating for U.S. companies, and enabling foreign policy and regulatory environments that benefit U.S. capabilities in AI.

The Office of the Under Secretary of State for Arms Control and International Security focuses on the security implications of AI, including potential applications in weapon systems, its impact on U.S. military interoperability with its allies and partners, its impact on stability, and export controls related to AI.

The Office of the Under Secretary for Civilian Security, Democracy, and Human Rights and its component bureaus and offices focus on issues related to AI and governance, human rights, including religious freedom, and law enforcement and crime, among others.

The Office of the Legal Adviser leads on issues relating to AI in weapon systems (LAWS), in particular at the Group of Governmental Experts on Lethal Autonomous Weapons Systems convened under the auspices of the Convention on Certain Conventional Weapons.

For more information on federal programs and policy on artificial intelligence, visit ai.gov.

Read the original here:
Artificial Intelligence (AI) - United States Department of ...

What is Artificial Intelligence (AI)? – India | IBM

Artificial intelligence leverages computers and machines to mimic the problem-solving and decision-making capabilities of the human mind.

While a number of definitions of artificial intelligence (AI) have surfaced over the last few decades, John McCarthy offers the following definition in his 2004 paper (PDF, 106 KB) (link resides outside IBM): "It is the science and engineering of making intelligent machines, especially intelligent computer programs. It is related to the similar task of using computers to understand human intelligence, but AI does not have to confine itself to methods that are biologically observable."

However, decades before this definition, the birth of the artificial intelligence conversation was marked by Alan Turing's seminal work, "Computing Machinery and Intelligence" (PDF, 89.8 KB) (link resides outside of IBM), published in 1950. In this paper, Turing, often referred to as the "father of computer science," asks the following question: "Can machines think?" From there, he offers a test, now famously known as the "Turing Test," in which a human interrogator tries to distinguish between a computer's and a human's text responses. While this test has undergone much scrutiny since its publication, it remains an important part of the history of AI, as well as an ongoing concept within philosophy, as it draws on ideas from linguistics.

Stuart Russell and Peter Norvig then published Artificial Intelligence: A Modern Approach (link resides outside IBM), which became one of the leading textbooks in the study of AI. In it, they delve into four potential goals or definitions of AI, differentiating computer systems on the basis of rationality and of thinking vs. acting:

Human approach: systems that think like humans, and systems that act like humans.

Ideal approach: systems that think rationally, and systems that act rationally.

Alan Turing's definition would have fallen under the category of systems that act like humans.

In its simplest form, artificial intelligence is a field that combines computer science and robust datasets to enable problem-solving. It also encompasses the sub-fields of machine learning and deep learning, which are frequently mentioned in conjunction with artificial intelligence. These disciplines comprise AI algorithms that seek to create expert systems which make predictions or classifications based on input data.
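As a deliberately tiny sketch of "predictions or classifications based on input data," the snippet below implements a one-nearest-neighbour classifier in Python; the feature vectors and labels are invented purely for illustration:

```python
def nearest_neighbour(train, query):
    """Return the label of the training point closest to `query`."""
    def dist(a, b):
        # Squared Euclidean distance between two feature vectors.
        return sum((x - y) ** 2 for x, y in zip(a, b))
    point, label = min(train, key=lambda pl: dist(pl[0], query))
    return label

# Toy labelled dataset: (feature vector, label) pairs.
train = [
    ((1.0, 1.0), "spam"),
    ((1.2, 0.8), "spam"),
    ((8.0, 9.0), "ham"),
    ((9.1, 8.7), "ham"),
]

print(nearest_neighbour(train, (1.1, 0.9)))  # near the "spam" cluster
print(nearest_neighbour(train, (8.5, 9.2)))  # near the "ham" cluster
```

The classifier never encodes a rule for what "spam" looks like; the prediction falls out of the labelled examples alone, which is the core idea the paragraph describes.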

Today, a lot of hype still surrounds AI development, as is expected of any emerging technology in the market. As noted in Gartner's hype cycle (link resides outside IBM), product innovations like self-driving cars and personal assistants follow a typical progression of innovation, from overenthusiasm through a period of disillusionment to an eventual understanding of the innovation's relevance and role in a market or domain. As Lex Fridman notes here (link resides outside IBM) in his 2019 MIT lecture, we are at the peak of inflated expectations, approaching the trough of disillusionment.

As conversations emerge around the ethics of AI, we can begin to see the initial glimpses of the trough of disillusionment. To read more on where IBM stands within the conversation around AI ethics, read more here.

Weak AI, also called narrow AI or artificial narrow intelligence (ANI), is AI trained and focused to perform specific tasks. Weak AI drives most of the AI that surrounds us today. "Narrow" might be a more accurate descriptor for this type of AI, as it is anything but weak; it enables some very robust applications, such as Apple's Siri, Amazon's Alexa, IBM Watson, and autonomous vehicles.

Strong AI is made up of artificial general intelligence (AGI) and artificial super intelligence (ASI). Artificial general intelligence (AGI), or general AI, is a theoretical form of AI in which a machine would have intelligence equal to that of humans; it would have a self-aware consciousness with the ability to solve problems, learn, and plan for the future. Artificial super intelligence (ASI), also known as superintelligence, would surpass the intelligence and ability of the human brain. While strong AI is still entirely theoretical, with no practical examples in use today, that doesn't mean AI researchers aren't exploring its development. In the meantime, the best examples of ASI might be from science fiction, such as HAL, the superhuman, rogue computer assistant in 2001: A Space Odyssey.

Since deep learning and machine learning tend to be used interchangeably, it's worth noting the nuances between the two. As mentioned above, both deep learning and machine learning are sub-fields of artificial intelligence, and deep learning is actually a sub-field of machine learning.

Deep learning is built from neural networks. The "deep" in deep learning refers to depth: a neural network comprising more than three layers, inclusive of the input and the output, can be considered a deep learning algorithm.
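To make the layer count concrete, here is a minimal sketch (Python with NumPy) of a forward pass through a network with an input layer, two hidden layers, and an output layer. The weights are random stand-ins, not learned values, and the layer sizes are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    # Standard non-linearity applied after each hidden layer.
    return np.maximum(0, x)

# input (4) -> hidden (8) -> hidden (8) -> output (2): four layers in total.
layer_sizes = [4, 8, 8, 2]
weights = [rng.normal(size=(m, n))
           for m, n in zip(layer_sizes, layer_sizes[1:])]

x = rng.normal(size=(1, 4))      # one example with 4 input features
for w in weights[:-1]:
    x = relu(x @ w)              # hidden layers: linear map + non-linearity
output = x @ weights[-1]         # output layer (no activation here)
print(output.shape)              # (1, 2)
```

With learned rather than random weights, exactly this computation is what produces a deep network's predictions.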

The way in which deep learning and machine learning differ is in how each algorithm learns. Deep learning automates much of the feature-extraction piece of the process, eliminating some of the manual human intervention required and enabling the use of larger data sets. You can think of deep learning as "scalable machine learning," as Lex Fridman noted in the same MIT lecture cited above. Classical, or "non-deep," machine learning is more dependent on human intervention to learn: human experts determine the hierarchy of features needed to understand the differences between data inputs, usually requiring more structured data to learn from.

"Deep" machine learning can leverage labeled datasets, also known as supervised learning, to inform its algorithm, but it doesnt necessarily require a labeled dataset. It can ingest unstructured data in its raw form (e.g. text, images), and it can automatically determine the hierarchy of features which distinguish different categories of data from one another. Unlike machine learning, it doesn't require human intervention to process data, allowing us to scale machine learning in more interesting ways.

There are numerous real-world applications of AI systems today. Below are some of the most common examples:

The idea of 'a machine that thinks' dates back to ancient Greece. But since the advent of electronic computing (and relative to some of the topics discussed in this article) important events and milestones in the evolution of artificial intelligence include the following:

IBM has been a leader in advancing AI-driven technologies for enterprises and has pioneered the future of machine learning systems for multiple industries. Based on decades of AI research, years of experience working with organizations of all sizes, and on learnings from over 30,000 IBM Watson engagements, IBM has developed the AI Ladder for successful artificial intelligence deployments:

IBM Watson gives enterprises the AI tools they need to transform their business systems and workflows, while significantly improving automation and efficiency. For more information on how IBM can help you complete your AI journey, explore the IBM portfolio of managed services and solutions.

Sign up for an IBMid and create your IBM Cloud account.

Visit link:
What is Artificial Intelligence (AI)? - India | IBM

What is Artificial Intelligence (AI)? – AI Definition and …

Artificial intelligence is the simulation of human intelligence processes by machines, especially computer systems. Specific applications of AI include expert systems, natural language processing, speech recognition and machine vision.

As the hype around AI has accelerated, vendors have been scrambling to promote how their products and services use AI. Often what they refer to as AI is simply one component of AI, such as machine learning. AI requires a foundation of specialized hardware and software for writing and training machine learning algorithms. No one programming language is synonymous with AI, but a few, including Python, R and Java, are popular.

In general, AI systems work by ingesting large amounts of labeled training data, analyzing the data for correlations and patterns, and using these patterns to make predictions about future states. In this way, a chatbot that is fed examples of text chats can learn to produce lifelike exchanges with people, or an image recognition tool can learn to identify and describe objects in images by reviewing millions of examples.
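That ingest-analyse-predict loop can be sketched in miniature. The toy "chatbot" below counts which words appear under each label in a handful of invented training chats, then labels new text by word overlap; real systems use far richer models, but the three stages are the same:

```python
from collections import Counter

# Stage 1: ingest labelled training data (all examples invented).
training = [
    ("hi how are you", "greeting"),
    ("hello there friend", "greeting"),
    ("bye see you later", "farewell"),
    ("goodbye take care", "farewell"),
]

# Stage 2: analyse the data for patterns — word counts per label.
word_counts = {}
for text, label in training:
    word_counts.setdefault(label, Counter()).update(text.split())

# Stage 3: use the patterns to make predictions about new input.
def predict(text):
    scores = {label: sum(counts[w] for w in text.split())
              for label, counts in word_counts.items()}
    return max(scores, key=scores.get)

print(predict("hello how are you"))  # -> greeting
print(predict("goodbye see you"))    # -> farewell
```

Feeding the model more labelled chats improves its scores without changing a line of the prediction code, which is the scaling property the paragraph describes.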

AI programming focuses on three cognitive skills: learning, reasoning and self-correction.

Learning processes. This aspect of AI programming focuses on acquiring data and creating rules for how to turn the data into actionable information. The rules, which are called algorithms, provide computing devices with step-by-step instructions for how to complete a specific task.

Reasoning processes. This aspect of AI programming focuses on choosing the right algorithm to reach a desired outcome.

Self-correction processes. This aspect of AI programming is designed to continually fine-tune algorithms and ensure they provide the most accurate results possible.
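The learning and self-correction processes above can be loosely illustrated with the simplest trainable model: a line fitted by gradient descent, which repeatedly fine-tunes its two parameters to shrink its prediction error. The data here is synthetic (y = 2x + 1):

```python
# Synthetic data whose true relationship is y = 2x + 1.
data = [(x, 2 * x + 1) for x in range(10)]

w, b = 0.0, 0.0   # initial parameter guesses
lr = 0.01         # learning rate

for _ in range(2000):
    # Gradients of mean squared error with respect to w and b.
    gw = sum(2 * (w * x + b - y) * x for x, y in data) / len(data)
    gb = sum(2 * (w * x + b - y) for x, y in data) / len(data)
    # Self-correction step: adjust parameters against the gradient.
    w -= lr * gw
    b -= lr * gb

print(round(w, 2), round(b, 2))  # close to the true values 2 and 1
```

Each pass applies the same step-by-step rule, measures how wrong the current parameters are, and corrects them slightly; "ensure they provide the most accurate results possible" is, in this tiny setting, just this loop run to convergence.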

AI is important because it can give enterprises insights into their operations that they may not have been aware of previously and because, in some cases, AI can perform tasks better than humans. Particularly when it comes to repetitive, detail-oriented tasks like analyzing large numbers of legal documents to ensure relevant fields are filled in properly, AI tools often complete jobs quickly and with relatively few errors.

This has helped fuel an explosion in efficiency and opened the door to entirely new business opportunities for some larger enterprises. Prior to the current wave of AI, it would have been hard to imagine using computer software to connect riders to taxis, but today Uber has become one of the largest companies in the world by doing just that. It utilizes sophisticated machine learning algorithms to predict when people are likely to need rides in certain areas, which helps proactively get drivers on the road before they're needed. As another example, Google has become one of the largest players for a range of online services by using machine learning to understand how people use their services and then improving them. In 2017, the company's CEO, Sundar Pichai, pronounced that Google would operate as an "AI first" company.

Today's largest and most successful enterprises have used AI to improve their operations and gain an advantage over their competitors.

Artificial neural networks and deep learning AI technologies are quickly evolving, primarily because AI processes large amounts of data much faster and makes predictions more accurately than is humanly possible.

While the huge volume of data being created on a daily basis would bury a human researcher, AI applications that use machine learning can take that data and quickly turn it into actionable information. As of this writing, the primary disadvantage of using AI is that it is expensive to process the large amounts of data that AI programming requires.

Advantages

Disadvantages

AI can be categorized as either weak or strong.

Arend Hintze, an assistant professor of integrative biology and computer science and engineering at Michigan State University, explained in a 2016 article that AI can be categorized into four types, beginning with the task-specific intelligent systems in wide use today and progressing to sentient systems, which do not yet exist. The categories are as follows:

AI is incorporated into a variety of different types of technology. Here are six examples:

Artificial intelligence has made its way into a wide variety of markets. Here are nine examples.

AI in healthcare. The biggest bets are on improving patient outcomes and reducing costs. Companies are applying machine learning to make better and faster diagnoses than humans. One of the best-known healthcare technologies is IBM Watson. It understands natural language and can respond to questions asked of it. The system mines patient data and other available data sources to form a hypothesis, which it then presents with a confidence scoring schema. Other AI applications include using online virtual health assistants and chatbots to help patients and healthcare customers find medical information, schedule appointments, understand the billing process and complete other administrative processes. An array of AI technologies is also being used to predict, fight and understand pandemics such as COVID-19.

AI in business. Machine learning algorithms are being integrated into analytics and customer relationship management (CRM) platforms to uncover information on how to better serve customers. Chatbots have been incorporated into websites to provide immediate service to customers. Automation of job positions has also become a talking point among academics and IT analysts.

AI in education. AI can automate grading, giving educators more time. It can assess students and adapt to their needs, helping them work at their own pace. AI tutors can provide additional support to students, ensuring they stay on track. And it could change where and how students learn, perhaps even replacing some teachers.

AI in finance. AI in personal finance applications, such as Intuit Mint or TurboTax, is disrupting financial institutions. Applications such as these collect personal data and provide financial advice. Other programs, such as IBM Watson, have been applied to the process of buying a home. Today, artificial intelligence software performs much of the trading on Wall Street.

AI in law. The discovery process -- sifting through documents -- in law is often overwhelming for humans. Using AI to help automate the legal industry's labor-intensive processes is saving time and improving client service. Law firms are using machine learning to describe data and predict outcomes, computer vision to classify and extract information from documents and natural language processing to interpret requests for information.

AI in manufacturing. Manufacturing has been at the forefront of incorporating robots into the workflow. For example, the industrial robots that were at one time programmed to perform single tasks and separated from human workers increasingly function as cobots: smaller, multitasking robots that collaborate with humans and take on responsibility for more parts of the job in warehouses, factory floors and other workspaces.

AI in banking. Banks are successfully employing chatbots to make their customers aware of services and offerings and to handle transactions that don't require human intervention. AI virtual assistants are being used to improve and cut the costs of compliance with banking regulations. Banking organizations are also using AI to improve their decision-making for loans, and to set credit limits and identify investment opportunities.

AI in transportation. In addition to AI's fundamental role in operating autonomous vehicles, AI technologies are used in transportation to manage traffic, predict flight delays, and make ocean shipping safer and more efficient.

Security. AI and machine learning are at the top of the buzzword list security vendors use today to differentiate their offerings. Those terms also represent truly viable technologies. Organizations use machine learning in security information and event management (SIEM) software and related areas to detect anomalies and identify suspicious activities that indicate threats. By analyzing data and using logic to identify similarities to known malicious code, AI can provide alerts to new and emerging attacks much sooner than human employees and previous technology iterations. The maturing technology is playing a big role in helping organizations fight off cyber attacks.
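The anomaly-detection idea above can be sketched with a simple statistical baseline: flag any value that sits far outside the range seen so far. Real SIEM tooling is far more sophisticated, and the hourly login counts below are invented:

```python
import statistics

def anomalies(values, threshold=3.0):
    """Return values more than `threshold` standard deviations from the mean."""
    mean = statistics.mean(values)
    stdev = statistics.pstdev(values)
    return [v for v in values if abs(v - mean) > threshold * stdev]

# Eleven normal hours, then a suspicious burst of activity.
logins_per_hour = [12, 9, 11, 10, 13, 8, 11, 12, 10, 9, 11, 250]
print(anomalies(logins_per_hour))  # -> [250]
```

Machine-learning detectors replace the mean-and-stdev baseline with models learned from historical telemetry, but the output is the same kind of alert: activity that does not look like what came before.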

Some industry experts believe the term artificial intelligence is too closely linked to popular culture, and this has caused the general public to have improbable expectations about how AI will change the workplace and life in general.

While AI tools present a range of new functionality for businesses, the use of artificial intelligence also raises ethical questions because, for better or worse, an AI system will reinforce what it has already learned.

This can be problematic because machine learning algorithms, which underpin many of the most advanced AI tools, are only as smart as the data they are given in training. Because a human being selects what data is used to train an AI program, the potential for machine learning bias is inherent and must be monitored closely.

Anyone looking to use machine learning as part of real-world, in-production systems needs to factor ethics into their AI training processes and strive to avoid bias. This is especially true when using AI algorithms that are inherently unexplainable in deep learning and generative adversarial network (GAN) applications.

Explainability is a potential stumbling block to using AI in industries that operate under strict regulatory compliance requirements. For example, financial institutions in the United States operate under regulations that require them to explain their credit-issuing decisions. When a decision to refuse credit is made by AI programming, however, it can be difficult to explain how the decision was arrived at because the AI tools used to make such decisions operate by teasing out subtle correlations between thousands of variables. When the decision-making process cannot be explained, the program may be referred to as black box AI.

Despite potential risks, there are currently few regulations governing the use of AI tools, and where laws do exist, they typically pertain to AI indirectly. For example, as previously mentioned, United States Fair Lending regulations require financial institutions to explain credit decisions to potential customers. This limits the extent to which lenders can use deep learning algorithms, which by their nature are opaque and lack explainability.

The European Union's General Data Protection Regulation (GDPR) puts strict limits on how enterprises can use consumer data, which impedes the training and functionality of many consumer-facing AI applications.

In October 2016, the National Science and Technology Council issued a report examining the potential role governmental regulation might play in AI development, but it did not recommend specific legislation be considered.

Crafting laws to regulate AI will not be easy, in part because AI comprises a variety of technologies that companies use for different ends, and partly because regulations can come at the cost of AI progress and development. The rapid evolution of AI technologies is another obstacle to forming meaningful regulation of AI. Technology breakthroughs and novel applications can make existing laws instantly obsolete. For example, existing laws regulating the privacy of conversations and recorded conversations do not cover the challenge posed by voice assistants like Amazon's Alexa and Apple's Siri, which gather but do not distribute conversations -- except to the companies' technology teams, which use them to improve machine learning algorithms. And, of course, the laws that governments do manage to craft to regulate AI don't stop criminals from using the technology with malicious intent.

The terms AI and cognitive computing are sometimes used interchangeably, but, generally speaking, the label AI is used in reference to machines that replace human intelligence by simulating how we sense, learn, process and react to information in the environment.

The label cognitive computing is used in reference to products and services that mimic and augment human thought processes.

The concept of inanimate objects endowed with intelligence has been around since ancient times. The Greek god Hephaestus was depicted in myths as forging robot-like servants out of gold. Engineers in ancient Egypt built statues of gods animated by priests. Throughout the centuries, thinkers from Aristotle to the 13th-century Spanish theologian Ramon Llull to René Descartes and Thomas Bayes used the tools and logic of their times to describe human thought processes as symbols, laying the foundation for AI concepts such as general knowledge representation.

The late 19th and first half of the 20th centuries brought forth the foundational work that would give rise to the modern computer. In 1836, Cambridge University mathematician Charles Babbage and Augusta Ada Byron, Countess of Lovelace, invented the first design for a programmable machine.

1940s. Princeton mathematician John von Neumann conceived the architecture for the stored-program computer -- the idea that a computer's program and the data it processes can be kept in the computer's memory. And Warren McCulloch and Walter Pitts laid the foundation for neural networks.

1950s. With the advent of modern computers, scientists could test their ideas about machine intelligence. One method for determining whether a computer has intelligence was devised by the British mathematician and World War II code-breaker Alan Turing. The Turing Test focused on a computer's ability to fool interrogators into believing its responses to their questions were made by a human being.

1956. The modern field of artificial intelligence is widely cited as starting this year during a summer conference at Dartmouth College. Funded by the Rockefeller Foundation, the conference was attended by 10 luminaries in the field, including AI pioneers Marvin Minsky, Oliver Selfridge and John McCarthy, who is credited with coining the term artificial intelligence. Also in attendance were Allen Newell, a computer scientist, and Herbert A. Simon, an economist, political scientist and cognitive psychologist, who presented their groundbreaking Logic Theorist, a computer program capable of proving certain mathematical theorems and referred to as the first AI program.

1950s and 1960s. In the wake of the Dartmouth College conference, leaders in the fledgling field of AI predicted that a man-made intelligence equivalent to the human brain was around the corner, attracting major government and industry support. Indeed, nearly 20 years of well-funded basic research generated significant advances in AI: For example, in the late 1950s, Newell and Simon published the General Problem Solver (GPS) algorithm, which fell short of solving complex problems but laid the foundations for developing more sophisticated cognitive architectures; McCarthy developed Lisp, a language for AI programming that is still used today. In the mid-1960s MIT Professor Joseph Weizenbaum developed ELIZA, an early natural language processing program that laid the foundation for today's chatbots.
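ELIZA's core trick, matching a pattern in the user's input and reflecting part of it back as a question, can be sketched in a few lines. The rules below are illustrative inventions, not Weizenbaum's original script, but the mechanism is the one today's rule-based chatbots inherited.

```python
import re

# A few ELIZA-style rules: match a pattern, echo the captured words
# back inside a canned question. (Illustrative rules only.)
RULES = [
    (re.compile(r"i need (.*)", re.I), "Why do you need {0}?"),
    (re.compile(r"i am (.*)", re.I), "How long have you been {0}?"),
    (re.compile(r"because (.*)", re.I), "Is that the real reason?"),
]

def respond(text):
    """Return the first matching rule's reflection, or a generic prompt."""
    for pattern, template in RULES:
        match = pattern.search(text)
        if match:
            return template.format(*match.groups())
    return "Please tell me more."

print(respond("I am feeling stuck"))  # How long have you been feeling stuck?
```

There is no understanding anywhere in this loop, which is exactly the point Weizenbaum himself stressed: pattern reflection alone was enough to make users feel heard.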

1970s and 1980s. But the achievement of artificial general intelligence proved elusive, not imminent, hampered by limitations in computer processing and memory and by the complexity of the problem. Government and corporations backed away from their support of AI research, leading to a fallow period lasting from 1974 to 1980 and known as the first "AI winter." In the 1980s, research on deep learning techniques and industry's adoption of Edward Feigenbaum's expert systems sparked a new wave of AI enthusiasm, only to be followed by another collapse of government funding and industry support. The second AI winter lasted until the mid-1990s.

1990s through today. Increases in computational power and an explosion of data sparked an AI renaissance in the late 1990s that has continued to present times. The latest focus on AI has given rise to breakthroughs in natural language processing, computer vision, robotics, machine learning, deep learning and more. Moreover, AI is becoming ever more tangible, powering cars, diagnosing disease and cementing its role in popular culture. In 1997, IBM's Deep Blue defeated Russian chess grandmaster Garry Kasparov, becoming the first computer program to beat a world chess champion. Fourteen years later, IBM's Watson captivated the public when it defeated two former champions on the game show Jeopardy!. More recently, the historic defeat of 18-time World Go champion Lee Sedol by Google DeepMind's AlphaGo stunned the Go community and marked a major milestone in the development of intelligent machines.

Because hardware, software and staffing costs for AI can be expensive, many vendors are including AI components in their standard offerings or providing access to artificial intelligence as a service (AIaaS) platforms. AIaaS allows individuals and companies to experiment with AI for various business purposes and sample multiple platforms before making a commitment.

Popular AI cloud offerings are available from the major providers, including Amazon Web Services, Google, IBM and Microsoft.
