Archive for the ‘Artificial Intelligence’ Category

Artificial Intelligence: What it is and why it matters | SAS

The term artificial intelligence was coined in 1956, but AI has become more popular today thanks to increased data volumes, advanced algorithms, and improvements in computing power and storage.

Early AI research in the 1950s explored topics like problem solving and symbolic methods. In the 1960s, the US Department of Defense took interest in this type of work and began training computers to mimic basic human reasoning. For example, the Defense Advanced Research Projects Agency (DARPA) completed street mapping projects in the 1970s. And DARPA produced intelligent personal assistants in 2003, long before Siri, Alexa or Cortana were household names.

This early work paved the way for the automation and formal reasoning that we see in computers today, including decision support systems and smart search systems that can be designed to complement and augment human abilities.

While Hollywood movies and science fiction novels depict AI as human-like robots that take over the world, the current evolution of AI technologies isn't that scary or quite that smart. Instead, AI has evolved to provide many specific benefits in every industry. Keep reading for modern examples of artificial intelligence in health care, retail and more.

Why is artificial intelligence important?

More:
Artificial Intelligence: What it is and why it matters | SAS

How Artificial Intelligence Will Make Decisions In Tomorrow's Wars – Forbes

A US-built drone piloted by artificial intelligence. (Photo by Cristina Young/U.S. Navy via Getty Images)

Artificial intelligence isn't only a consumer and business-centric technology. Yes, companies use AI to automate various tasks, while consumers use AI to make their daily routines easier. But governments, and in particular militaries, also have a massive interest in the speed and scale offered by AI. Nation states are already using artificial intelligence to monitor their own citizens, and as the U.K.'s Ministry of Defence (MoD) revealed last week, they'll also be using AI to make decisions related to national security and warfare.

The MoD's Defence and Security Accelerator (DASA) has announced the initial injection of £4 million in funding for new projects and startups exploring how to use AI in the context of the British Navy. In particular, the DASA is looking to support AI- and machine learning-based technology that will "revolutionise the way warships make decisions and process thousands of strands of intelligence and data."

In this first wave of funding, the MoD will share £1 million among nine projects as part of DASA's Intelligent Ship: The Next Generation competition. However, while the first developmental forays will be made in the context of the navy, the U.K. government intends any breakthroughs to form the basis of technology that will be used across the entire spectrum of British defensive and offensive capabilities.

"The astonishing pace at which global threats are evolving requires new approaches and fresh-thinking to the way we develop our ideas and technology," said U.K. Defence Minister James Heappey. "The funding will research pioneering projects into how AI and automation can support our armed forces in their essential day-to-day work."

More specifically, the project will be looking at how four concepts (automation, autonomy, machine learning, and AI) can be integrated into U.K. military systems and how they can be exploited to increase British responsiveness to potential and actual threats.

"This DASA competition has the potential to lead the transformation of our defence platforms, leading to a sea change in the relationships between AI and human teams," explains Julia Tagg, the technical lead at the MoD's Defence Science and Technology Laborator (Dstl). "This will ensure U.K. defense remains an effective, capable force for good in a rapidly changing technological landscape."

On the one hand, such an adaptation is a necessary response to the ever-changing nature of inter-state conflict. Instead of open armed warfare between states and their manned armies, geopolitical rivalry is increasingly being fought out in terms of such phenomena as cyber-warfare, micro-aggressive standoffs, and trade wars. As Julia Tagg explains, this explosion of multiple smaller events requires defence forces to be much more aware of what's happening in the world around them.

"Crews are already facing information overload with thousands of sources of data, intelligence, and information," she says. "By harnessing automation, autonomy, machine learning and artificial intelligence with the real-life skill and experience of our men and women, we can revolutionise the way future fleets are put together and operate to keep the U.K. safe."

That said, the most interesting, and worrying, element of the Intelligent Ship project is the focus on introducing AI-enabled "autonomy" to the U.K.'s defence capabilities. As a number of reports from the likes of The Economist, MIT Technology Review and Foreign Affairs have argued, AI-powered systems potentially come with a number of serious weaknesses. Like any code-based system, they're likely to contain bugs that can be attacked by enemies, while the existence of biases in data (as seen in the context of law and employment) indicates that algorithms may simply perpetuate the prejudices and mistakes of past human decision-making.
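To see the bias problem concretely, consider a minimal, purely hypothetical Python sketch (all data here is synthetic and the numbers are made up): a model trained on skewed historical decisions quietly learns to reproduce the skew.

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

# Two groups with identical underlying qualification scores.
group = rng.integers(0, 2, n)            # 0 = group A, 1 = group B
skill = rng.normal(0.0, 1.0, n)          # same skill distribution for both groups

# Biased historical labels: group B needed a higher score to be approved.
approved = (skill - 0.8 * group + rng.normal(0.0, 0.5, n)) > 0

model = LogisticRegression().fit(np.column_stack([group, skill]), approved)

# Two equally skilled candidates, differing only in group membership.
probe = np.array([[0, 1.0], [1, 1.0]])
print(model.predict_proba(probe)[:, 1])  # group B's approval probability comes out lower

The model is never told to discriminate; it simply learns the pattern baked into its training labels. That is precisely the failure mode the reports above warn about, and the stakes rise sharply when the decisions are military rather than commercial.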

It's for such reasons that the increasing fondness of militaries for AI is concerning. Not only is the British government stepping up its investment in military AI, but the United States government earmarked $927 million for "Artificial Intelligence/Machine Learning investments to expand military advantage" in last year's budget. As for China, its government has reportedly invested "tens of billions of dollars" in AI capabilities, while Russia has recently outlined an ambitious general AI strategy for 2030. It's even developing robot soldiers, according to some reports.

So besides being the future of everything else, AI is likely to be the future of warfare. It will increasingly process defence-related information, filter that data for the greatest threats, make defence decisions based on its programmed algorithms, and perhaps even direct combat robots. This will most likely make national militaries stronger and more capable, but it could come at the cost of innocent lives, and perhaps even at the cost of escalation into open warfare. As the example of Stanislav Petrov in 1983 proves, automated defence systems can't always be trusted.

Read the original post:
How Artificial Intelligence Will Make Decisions In Tomorrow's Wars - Forbes

University to Help Society Wrap Their Minds Around Artificial Intelligence – Governing

(TNS) Researchers at the University of Michigan have been exploring the need to set ethics standards and policies when it comes to the use of artificial intelligence, and they now have their own place to do so.

The university has created a new Center for Ethics, Society and Computing (ESC) that will focus on AI, data usage, augmented and virtual reality, privacy, open data and identity.

According to the center's website, the name and abbreviation allude to the ESC key on a computer keyboard, which was added to interrupt a program when it produced unwanted results.

"In the same way, the Center for Ethics, Society and Computing (ESC, pronounced 'escape') is dedicated to intervening when digital media and computing technologies reproduce inequality, exclusion, corruption, deception, racism or sexism," the center's mission statement reads.

"The center will bring together scholars who are committed to feminist, justice-focused, inclusive and interdisciplinary approaches to computing," the university said in a news release.

Associate Director Silvia Lindtner said the center has been in a soft launch phase since March 2019. "The idea for ESC was born out of making a critical engagement with the politics and ethics of computing a central aspect of technology research and design," she said.

"We established ESC to build on and give legitimacy to the long-term scholarship and activism in technology, engineering and design, and to create an interdisciplinary space to explore and apply critical, justice-oriented and feminist approaches to computing and technology research," Lindtner said.

Director Christian Sandvig said the center is hosting a visiting artist working on robotics this term, and that the center includes faculty from computer science, architecture, music and business schools.

"We are fairly unique because we are aggressively pursuing research approaches and topics beyond what people normally think about as computing," Sandvig said.

Lindtner said the university's public nature allows the center to engage deeply with the broader public, policy experts and actors in the social justice movement.

"This is a topic that used to be on the fringes, but more recently has gotten broader attention as we have experienced many unintended consequences of technology," Lindtner said.

Some of the concerns the center will be tackling include gender and racial stereotyping in AI and data-based algorithms, as well as an overall lack of accountability and digital justice.

Sandvig said a lot of companies are now rushing to nominal ethics conversations as a solution to the negative perceptions of their products, but ESC is not interested in "ethics-washing."

"We're looking ahead to difficult debates about the future path we are steering with technology in society," Sandvig said. "We need to make it normal that there is an extensive program of research about this topic (ethics, justice, technology, people and the future) and it must be central to the enterprise of developing technology and training students."

The center is sponsored by the School of Information; the Center for Political Studies at the Institute for Social Research; and the Department of Communication Studies in the College of Literature, Science, and the Arts at UM.

©2020 MLive.com, Walker, Mich. Distributed by Tribune Content Agency, LLC.

See the original post:
University to Help Society Wrap Their Minds Around Artificial Intelligence - Governing

Google CEO: ‘Artificial intelligence needs to be regulated’ | TheHill – The Hill

Google CEO Sundar Pichai is calling for governments around the world to regulate artificial intelligence, saying the sensitive technology should not be used to "support mass surveillance or violate human rights."

However, Pichai, the top executive at Google as well as its parent company Alphabet, also argued that governments should not go too far as they work to rein in high-stakes technologies like facial recognition and self-driving vehicles.

His speech in Europe and companion op-ed come as Europe weighs new ethics rules for artificial intelligence and the White House urges a light-touch approach to regulating technology.

"There is no question in my mind that artificial intelligence needs to be regulated," Pichai wrote in theFinancial Times. "It is too important not to. The only question is how to approach it."

Since 2018, Google has touted its AI principles as a potential framework for government regulation. The guidelines urge tech companies to ensure artificial intelligence technologies incorporate privacy features, contribute to the greater social good and do not reflect "unfair" human biases.

Critics have pushed back on the tech industry's stated support for AI regulation, claiming the companies are trying to dictate the terms of regulation in their own favor.

"Sensible regulation must also take a proportionate approach, balancing potential harms, especially in high-risk areas, with social opportunities," Pichai wrote.

Governments around the world have found themselves behind the curve as artificial intelligence advances at lightning speed, opening up new frontiers for potential regulation. Several cities in the U.S. have taken the lead by imposing all-out bans on facial recognition technology, which often misidentifies people of color at higher rates.

Pichai has thrown his support behind a temporary ban on facial recognition technology, which he says can be used for "nefarious" purposes.

"I think it is important that governments and regulations tackle it sooner rather than later and give a framework for it, Pichai said at a conference in Brussels this week.It can be immediate, but maybe theres a waiting period before we really think about how its being used. ... Its up to governments to chart the course.

Microsoft has also released its own ideas around how to regulate facial recognition tech, and says it abides by a strict set of AI ethics standards.

In 2018, Pichai spent his speech in Davos, Switzerland, touting the enormous potential of artificial intelligence, presenting a rosier view of the technology before it experienced an intense backlash over the past several years.

Now, as Europe and the U.S. creep closer to instituting rules around many of the products that Google creates, Pichai is raising his voice around what he sees as the best approach to AI.

"Googles role starts with recognizing the need for a principled and regulated approach to applying AI, but it doesnt end there," Pichai wrote. "We want to be a helpful and engaged partner to regulators as they grapple with the inevitable tensions and trade-offs. We offer our expertise, experience and tools as we navigate these issues together."

Read the original post:
Google CEO: 'Artificial intelligence needs to be regulated' | TheHill - The Hill

How the Pentagon’s JAIC Picks Its Artificial Intelligence-Driven Projects – Nextgov

The Pentagon launched its Joint Artificial Intelligence Center in 2018 to strategically unify and accelerate AI applications across the nation's defense and military enterprise. Insiders at the center have now spent about nine months executing that defense-driven AI support.

At an ACT-IAC forum in Washington Wednesday, Rachael Martin, the JAIC's mission chief of Intelligent Business Automation Augmentation and Analytics, highlighted insiders' early approach to automation and innovation.

"Our mission is to transform the [Defense] business process through AI technologies, to improve efficiency and accuracy, but really to do all those things so that we can improve our overall warfighter support," Martin said.

Within her specific mission area, Martin and the team explore and develop automated applications that support a range of efforts across the Pentagon, such as business administration, human capital management, acquisitions, finance and budget training, and beyond. Because the enterprise is vast, the center is selective in determining the projects and programs best fit to be taken under its wing.

"For the JAIC, there are a couple of key principles that we want to go by, or that we're going to adhere to when we're looking at a project and whether we support it," Martin explained.

The first principle to be evaluated is mission impact. In this review, insiders pose questions like "who cares?" she said. They assess the user base that would most benefit from the project and what the ultimate outcome would be across Defense if the JAIC opted to support it. Next, according to Martin, officials review data-readiness. Here, insiders address factors like where the data to be used is stored and whether it's actually prepped for AI, or for more advanced analysis and modeling to run on top of it.

The third factor that's assessed is technology maturity. Martin said that, contrary to what many seem to think, the JAIC is not a research organization; instead, it seeks to apply and accelerate the adoption of already-existing solutions across the department, where those improvements are needed most. Insiders are therefore not interested in spending heaps of time researching new, emerging AI and automation applications. Instead, they aim to identify what already exists and is ready to be deployed at this moment.

"So that's a big one for us that we like to emphasize," Martin said.

The final assessment is whether the JAIC can identify Defense insiders who will actually use whatever they are set to build. When developing something new, Martin said, insiders want those it'll eventually touch to weigh in on the development every step of the way.

"We're not in the business of coming up with good ideas and then creating something and trying to hoist it on somebody else," Martin said. "We really believe in a very user-centric approach."
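Taken together, Martin's four screens amount to a simple gate: a proposal has to clear every one of them before the center takes it on. As a purely illustrative Python sketch (the field names and the all-or-nothing logic are assumptions for illustration, not the JAIC's actual process), the checklist could be encoded like this:

from dataclasses import dataclass

@dataclass
class ProjectProposal:
    # The four screens described above, reduced to yes/no questions.
    mission_impact: bool     # clear user base and Defense-wide outcome?
    data_ready: bool         # data located, accessible, and prepped for modeling?
    tech_mature: bool        # covered by an already-existing solution, not new research?
    users_identified: bool   # eventual users committed to weigh in throughout?

def jaic_would_support(proposal: ProjectProposal) -> bool:
    # Hypothetical gate: every principle must hold for the project to proceed.
    return all((proposal.mission_impact, proposal.data_ready,
                proposal.tech_mature, proposal.users_identified))

# Example: a promising idea that still requires basic research is turned away.
print(jaic_would_support(ProjectProposal(True, True, False, True)))  # False

The conjunctive structure matters: by Martin's account, no single strength, however impressive, compensates for unprepared data or the absence of a committed user.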

Excerpt from:
How the Pentagon's JAIC Picks Its Artificial Intelligence-Driven Projects - Nextgov