Archive for the ‘Artificial Intelligence’ Category

The Use of Artificial Intelligence as a Strategy to Analyse Urban Informality – ArchDaily


Within the Latin American and Caribbean region, at least 25% of the population is recorded as living in informal settlements. Since their expansion is one of the major problems afflicting the region's cities, this article presents a project, supported by the IDB, that shows how new technologies can help identify and detect these areas in order to intervene in them and reduce urban informality.

Informal settlements, also known as slums, shantytowns, camps or favelas depending on the country, are uncontrolled occupations of land where, in many cases, the conditions for a dignified life are not met. Built largely through self-construction, these sites are generally the result of a continuously growing housing deficit.

For decades, the ability to collect information about the Earth's surface through satellite imagery has contributed to the analysis and production of increasingly accurate and useful maps for urban planning. These maps reveal not only the growth of cities but also the speed at which they are growing and the characteristics of their buildings.

Advances in artificial intelligence make it possible to process large amounts of this information. When a satellite or aerial image is taken of a neighbourhood where a municipal team has previously demarcated informal areas, the image is processed by an algorithm that learns the characteristic visual patterns of those areas as observed from space. The algorithm then identifies other areas with similar characteristics in other images, automatically recognising the districts where informality predominates. It is worth noting that while satellites can report both where and how informal settlements are growing, specialised equipment and processing infrastructure are also required.
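
As a rough illustration of that workflow, the sketch below trains a classifier on tiles cut from imagery that a municipal team has already labelled and then applies it to held-out tiles. It is only a minimal sketch: the random placeholder tiles, the hand-rolled per-band statistics and the random-forest model are assumptions made for brevity, not the pipeline actually used by the DNP, IDB or GIM.

```python
# Minimal sketch of the tile-classification workflow described above.
# The placeholder data and the model choice are assumptions; the actual
# IDB/DNP/GIM pipeline is not described in detail in this article.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split

def tile_features(tile: np.ndarray) -> np.ndarray:
    """Summarise an image tile (H x W x bands) as simple per-band statistics.

    Real systems typically run convolutional networks on raw pixels; means
    and standard deviations keep this sketch short.
    """
    return np.concatenate([tile.mean(axis=(0, 1)), tile.std(axis=(0, 1))])

# Stand-ins for tiles cut from a satellite scene and for the ground truth a
# municipal team traced: 1 = informal area, 0 = formal area.
rng = np.random.default_rng(0)
tiles = rng.random((500, 64, 64, 4))       # placeholder 4-band imagery
labels = rng.integers(0, 2, size=500)      # placeholder demarcation labels

X = np.stack([tile_features(t) for t in tiles])
X_train, X_test, y_train, y_test = train_test_split(
    X, labels, test_size=0.25, random_state=0
)

model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X_train, y_train)

# Apply the trained model to tiles from districts the team has not mapped.
print(classification_report(y_test, model.predict(X_test)))
```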

This particular project highlights the role of artificial intelligence in detecting and acting on informal settlements in Colombia, where in 2018 the population exceeded 48 million inhabitants, with three out of four people residing in cities. The population is estimated to grow by 28% by 2050, with the urban share remaining the same or increasing, so there is a real need to build new urban homes.

The Government of Colombia appointed the National Planning Department (DNP) to support the Ministry of Housing in defining new methodologies to address the problem of informal housing. In 2021, supported by the Housing and Urban Development Division of the IDB and the company GIM, the DNP carried out a pilot project that used artificial intelligence to obtain detailed information on informal housing in Colombia. The Mayor's Office of Barranquilla provided the data for the project.

In practice, the areas delimited by the algorithm's maps agreed with those produced by local specialists about 85% of the time, which was sufficient to recognise and prioritise the areas in need of intervention.
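
The article does not say how that 85% figure was computed, so the sketch below simply shows two common ways of scoring agreement between an algorithm-produced mask and a specialist-drawn one: plain pixel agreement and intersection-over-union. The masks here are random placeholders standing in for rasterised maps of one district.

```python
# Hedged sketch: comparing an algorithm-produced mask against a map drawn by
# local specialists. The metric behind the ~85% figure is not specified in
# the article; these are two plausible readings, nothing more.
import numpy as np

def pixel_agreement(pred: np.ndarray, reference: np.ndarray) -> float:
    """Share of pixels on which the two binary masks agree."""
    return float((pred == reference).mean())

def intersection_over_union(pred: np.ndarray, reference: np.ndarray) -> float:
    """Overlap of the two masks divided by their combined footprint."""
    intersection = np.logical_and(pred, reference).sum()
    union = np.logical_or(pred, reference).sum()
    return float(intersection / union) if union else 1.0

rng = np.random.default_rng(1)
algorithm_mask = rng.random((512, 512)) > 0.6    # placeholder AI output
specialist_mask = rng.random((512, 512)) > 0.6   # placeholder reference map

print(f"pixel agreement: {pixel_agreement(algorithm_mask, specialist_mask):.2%}")
print(f"IoU:             {intersection_over_union(algorithm_mask, specialist_mask):.2%}")
```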

The idea is to be able to use this system in other regions. The IDB seeks to bring the technology used in Barranquilla to all of Latin America and the Caribbean through a software package called AMISAI (Automated Mapping of Informal Settlements with Artificial Intelligence), which is part of the Open Urban Planning Toolbox, a catalogue of open-source digital tools for urban planning.

Source: Luz Adriana Moreno González, Véronique de Laet, Héctor Antonio Vázquez Brust, Patricio Zambrano Barragán, Can Artificial Intelligence Help Reduce Urban Informality?

Read more:
The Use of Artificial Intelligence as a Strategy to Analyse Urban Informality - ArchDaily

Global Artificial Intelligence in Contact Centers Market Report 2022-2036: Use Cases for AI Today and the Exciting Future of this Technology – PR…

DUBLIN, March 14, 2022 /PRNewswire/ -- The "The State of Artificial Intelligence in Contact Centers" report has been added to ResearchAndMarkets.com's offering.

This Report provides a maturity model for the service experience in contact centers, looking ahead to the next 15 years.

It describes how AI can and should be used, application by application, to enhance contact center performance and provides recommendations and best practices for implementing AI-enabled solutions. It offers both a strategic perspective and tactical guidance to help companies realize the maximum benefits from their AI initiatives.

Artificial intelligence is being added to all of the systems and applications used by contact center agents. It has already introduced a basic form of human-like understanding and intelligence into self-service solutions and is on its way to delivering practical and quantifiable improvements to many other applications.

The State of Artificial Intelligence in the Contact Center analyzes how artificial intelligence (AI) can be applied to transform the customer experience (CX), drive a new era in servicing, and significantly improve the performance of contact centers. It explains AI, its underlying technologies and how it enhances contact center systems and applications.

The Report provides use cases for AI today and anticipates the exciting future of this technology, also analyzing the value proposition and payback for its adoption in each application.

Key Topics Covered:

1. Executive Summary

2. Introduction

3. Contact Center AI Defined and Explained
3.1 Rules vs. AI
3.2 Where Automation Fits in the World of AI
3.3 Data is a Key to the Success of AI Initiatives

4. The Role of AI in Enhancing the CX

5. The Vision for AI in Contact Centers
5.1 Operational Impact of the AI Hub in Contact Centers

6. Contact Center AI-Enabled Applications
6.1 Contact Center Portfolio of AI-Enabled Systems and Applications
6.2 AI-Enabled Systems and Applications for Contact Centers
6.2.1 Intelligent Virtual Agent/Conversational AI
6.2.2 Interaction (Speech and Text) Analytics
6.2.3 Analytics-Enabled Quality Management
6.2.4 Virtual Assistant
6.3 Targeted AI Systems and Applications for Contact Centers
6.3.1 Transcription
6.3.2 Real-Time Guidance
6.3.3 Predictive Behavioral Routing
6.3.4 Predictive Analytics
6.4 Emerging AI Systems and Applications for Contact Centers
6.4.1 Workforce Management
6.4.2 Customer Journey Analytics
6.4.3 Customer Relationship Management
6.4.4 Contact Center Performance Management
6.4.5 Automatic Call Distributor
6.4.6 Dialer/Campaign Management
6.5 Contributing AI Systems and Applications for Contact Centers
6.5.1 Robotic Process Automation
6.5.2 Intelligent Hiring
6.5.3 Desktop Analytics
6.5.4 Knowledge Management
6.5.5 Voice Biometrics
6.5.6 Voice-of-the-Customer/Surveying

7. The Contact Center AI Journey
7.1 The Contact Center Maturity Model
7.1.1 Reactive Contact Centers, 2021
7.1.2 Responsive Contact Centers, 2022 - 2025
7.1.3 Real-Time Contact Centers, 2026 - 2030
7.1.4 Proactive Contact Centers, 2031 - 2035
7.1.5 Predictive Contact Centers, 2036
7.2 Role and Contributions of AI in Contact Centers

8. Final Thoughts

For more information about this report visit https://www.researchandmarkets.com/r/pt0g92

Media Contact:

Research and Markets
Laura Wood, Senior Manager
[emailprotected]

For E.S.T Office Hours Call +1-917-300-0470
For U.S./CAN Toll Free Call +1-800-526-8630
For GMT Office Hours Call +353-1-416-8900

U.S. Fax: 646-607-1907 Fax (outside U.S.): +353-1-481-1716

SOURCE Research and Markets

View original post here:
Global Artificial Intelligence in Contact Centers Market Report 2022-2036: Use Cases for AI Today and the Exciting Future of this Technology - PR...

How Should Local Governments Approach AI and Algorithms? – Government Technology

How can government agencies avoid causing more harm than good when they use artificial intelligence and machine learning? A new report attempts to answer this question with a framework and best practices to follow for agencies pursuing algorithm-based tools.

The report comes from the Pittsburgh Task Force on Public Algorithms. The task force studied municipal and county governments' use of AI, machine learning and other algorithm-based systems that make or assist with decisions impacting residents' opportunities, access, liberties, rights and/or safety.

Local governments have adopted automated systems to support everything from traffic signal changes to child abuse and neglect investigations. Government use of such tools is likely to grow as the technologies mature and agencies become more familiar with them, predicts the task force.

This status quo leaves little room for public or third-party oversight, and residents often have little information on these tools, who designed them or whom to contact with complaints.

"The goal isn't to quash tech adoption, just to make it responsible," said David Hickton, task force member and founding director of the University of Pittsburgh Institute for Cyber Law, Policy and Security.

The task force included members of academia, community organizations and civil rights groups, and received advice from local officials.

"We hope that these recommendations, if implemented, will offer transparency into government algorithmic systems, facilitate public participation in the development of such systems, empower outside scrutiny of agency systems, and create an environment where appropriate systems can responsibly flourish," the report states.

While automated systems are often intended to reduce human error and bias, algorithms make mistakes, too. After all, an algorithm reflects human judgments: developers choose what factors the algorithm will assess, how heavily each factor is weighted, and what data the tool will use to make decisions.

Governments should therefore avoid adopting automated decision-making systems until they have consulted the residents who would be most impacted, through multiple channels, not just public comment sessions.

Residents must understand the tools and the ways they'll be used, believe the proposed approach tackles the issue at hand in a productive way, and agree that the potential benefits of an algorithmic system outweigh the risk of errors, the task force said.

"Sufficient transparency allows the public to ensure that a system is making trade-offs consistent with public policy," the report states. A common trade-off is balancing the risk of false positives and false negatives. A programmer may choose to weigh those in a manner different than policymakers or the public might prefer.

Constituents and officials must decide how to balance the risk of an automated system making a mistake. For instance, Philadelphia probation officials have used an algorithm to predict the likelihood that people released on probation will reoffend, and have required individuals on probation to receive more or less supervision based on its findings. In this case, accepting more false positives increases the chance that people will be inaccurately flagged as higher risk and subjected to unnecessarily intensive supervision, while accepting more false negatives may lead to less oversight for individuals who are likely to reoffend.
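
A hypothetical sketch of that trade-off, using entirely synthetic risk scores rather than data from any real probation system, shows how moving the decision threshold shifts the balance between people wrongly flagged (false positives) and people wrongly cleared (false negatives).

```python
# Illustrative sketch of the trade-off described above: raising or lowering a
# risk threshold changes how many people are wrongly flagged as high risk
# (false positives) versus wrongly cleared (false negatives).
# The outcomes and scores below are synthetic, not from any real system.
import numpy as np

rng = np.random.default_rng(42)
n = 10_000
reoffended = rng.integers(0, 2, size=n)  # hypothetical ground-truth outcomes
# Synthetic risk scores that are only loosely informative, on purpose.
risk_score = np.clip(0.5 * reoffended + rng.normal(0.3, 0.25, size=n), 0, 1)

for threshold in (0.3, 0.5, 0.7):
    flagged = risk_score >= threshold
    false_positives = int(np.sum(flagged & (reoffended == 0)))
    false_negatives = int(np.sum(~flagged & (reoffended == 1)))
    print(f"threshold {threshold:.1f}: "
          f"{false_positives} wrongly flagged, {false_negatives} wrongly cleared")
```

Which threshold is "right" is not a technical question; it is exactly the policy choice the task force says constituents and officials should make together.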

For example, an individual may be flagged by a pretrial risk assessment algorithm as unlikely to make their court date. But there's a big difference between officials jailing the person before the court date and officials following up with texted court date reminders and transportation assistance.

Community members told the task force that the safest use of algorithms may be to identify root problems (especially in marginalized communities) and allocate services, training and resources to strengthen community support systems.

Residents also emphasized that issues can be complex and often require decision-makers to consider individual circumstances, even if also using algorithms for help.

Systems should be vetted before adoption and reviewed regularly, such as monthly, to see if they're performing well or need updates. Ideally, independent specialists would evaluate sensitive tools and the training employees receive on them, and in-house staff would examine the workings of vendor-provided algorithms.

Contract terms should require vendors to provide details that can help evaluate their algorithms' fairness and effectiveness. This step could prevent companies from hiding behind claims of trade secrecy.

Local governments face few official limitations on how they can use automated decision-making systems, Hickton said, but residents could put pressure on elected officials to make changes. Governments could also appoint officials or boards charged with overseeing and reviewing algorithms to improve accountability.

"I can't predict where this will all go, but I'm hopeful that what we've done is put a spotlight on a problem and that we are giving the public greater access and equity in the discussion and the solutions," he said.

See the original post here:
How Should Local Governments Approach AI and Algorithms? - Government Technology

The Vulnerability of AI Systems May Explain Why Russia Isn’t Using Them Extensively in Ukraine – Forbes

Photo caption: Output of an artificial intelligence system from Google Vision performing facial recognition on a photograph of a man in San Ramon, California, November 22, 2019. (Photo by Smith Collection/Gado/Getty Images)

The news that Ukraine is using facial recognition software to uncover Russian assailants and identify Ukrainians killed in the ongoing war is noteworthy largely because it's one of the few documented uses of artificial intelligence in the conflict. A Georgetown University think tank is trying to figure out why, while advising U.S. policymakers of the risks of AI.

The CEO of the controversial American facial recognition company Clearview AI told Reuters that Ukraine's defense ministry began using its imaging software Saturday after Clearview offered it for free. The reportedly powerful recognition tool relies on artificial intelligence algorithms and a massive quantity of image training data scraped from social media and the internet.

But aside from Russian influence campaigns with their much-discussed deep fakes and misinformation-spreading bots, the lack of known tactical use (at least publicly) of AI by the Russian military has surprised many observers. Andrew Lohn isn't one of them.

Lohn, a senior fellow with Georgetown University's Center for Security and Emerging Technology, works on its Cyber-AI Project, which is seeking to draw policymakers' attention to the growing body of academic research showing that AI and machine-learning (ML) algorithms can be attacked in a variety of basic, readily exploitable ways.

"We have perhaps the most aggressive cyber actor in the world in Russia, who has twice turned off the power to Ukraine and used cyber-attacks in Georgia more than a decade ago. Most of us expected the digital domain to play a much larger role. It's been small so far," Lohn says.

"We have a whole bunch of hypotheses [for limited AI use] but we don't have answers. Our program is trying to collect all the information we can from this encounter to figure out which are most likely."

They range from the potential effectiveness of Ukrainian cyber and counter-information operations, to an unexpected shortfall in Russian preparedness for digital warfare in Ukraine, to Russia's need to preserve or simplify the digital operating environment for its own tactical reasons.

All probably play some role, Lohn believes, but just as crucial may be a dawning recognition of the limits and vulnerability of AI/ML. The willingness to deploy AI tools in combat is a confidence game.

Junk In, Junk Out

Artificial intelligence and machine learning require vast amounts of data, both for training and to interpret for alerts, insights or action. Even when AI/ML systems have access to an unimpeded base of data, they are only as good as the information and assumptions that underlie them. If for no other reason than natural variability, both can be significantly flawed. "Whether AI/ML systems work as advertised is a huge question," Lohn acknowledges.

The tech community refers to unanticipated information as "out of distribution" data. "AI/ML may perform at what is deemed to be an acceptable level in a laboratory or in otherwise controlled conditions," Lohn explains. "Then when you throw it into the real world, some of what it experiences is different in some way. You don't know how well it will perform in those circumstances."

In circumstances where life, death and military objectives are at stake, having confidence in the performance of artificial intelligence in the face of disrupted, deceptive, often random data is a tough ask.

Lohn recently wrote a paper assessing the performance of AI/ML when such systems scoop in out-of-distribution data. While their performance doesn't fall off quite as quickly as he anticipated, he says that if they operate in an environment where there's a lot of conflicting data, "they're garbage."
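
The toy experiment below illustrates the kind of degradation Lohn describes, though it is not the methodology of his paper: a simple classifier is trained on clean handwritten-digit data and then scored on copies of the test set corrupted with increasing amounts of noise.

```python
# Toy demonstration of performance falling off on out-of-distribution inputs.
# This is only an illustration, not a reproduction of Lohn's analysis.
import numpy as np
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = LogisticRegression(max_iter=5_000)
model.fit(X_train, y_train)

# Evaluate on progressively noisier copies of the same test set.
rng = np.random.default_rng(0)
for noise_scale in (0.0, 2.0, 4.0, 8.0):
    corrupted = X_test + rng.normal(0.0, noise_scale, size=X_test.shape)
    accuracy = model.score(corrupted, y_test)
    print(f"noise scale {noise_scale:>4}: accuracy {accuracy:.1%}")
```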

He also points out that the accuracy rate of AI/ML is impressively high only when compared to low expectations. For example, image classifiers can work at 94%, 98% or 99.9% accuracy. The numbers are striking until one considers that safety-critical systems such as cars, airplanes, healthcare devices and weapons are typically certified out to 5 or 6 decimal points (99.999999%) of accuracy.

Lohn says AI/ML systems may still be better than humans at some tasks, but the AI/ML community has yet to figure out what accuracy standards to put in place for system components. "Testing for AI systems is very challenging," he adds.

For a start, the artificial intelligence development community lacks a test culture similar to what has become so familiar for military aerospace, land, maritime, space or weapons systems; a kind of test-safety regime that holistically assesses the systems-of-systems that make up the above.

The absence of such a back end, combined with specific conditions in Ukraine, may go some distance toward explaining the limited application of AI/ML on the battlefield. Alongside it lies the very real vulnerability of AI/ML to the compromised information and active manipulation that adversaries already seek to feed it.

Bad Data, Spoofed Data & Classical Hacks

Attacking AI/ML systems isn't hard. It doesn't even require access to their software or databases. Age-old deceptions like camouflage, subtle visual environment changes or randomized data can be enough to throw off artificial intelligence.

As a recent article in the Armed Forces Communications and Electronics Association's (AFCEA) magazine noted, researchers from Chinese e-commerce giant Tencent managed to get a Tesla sedan's autopilot (self-driving) feature to switch lanes into oncoming traffic simply by using inconspicuous stickers on the roadway. McAfee Security researchers used similarly discreet stickers on speed limit signs to get a Tesla to speed up to 85 miles per hour in a 35 mile-an-hour zone.

Photo caption: An Israeli soldier is seen during a military exercise in the Israeli Arab village of Abu Gosh on October 20, 2013 in Abu Gosh, Israel. (Photo by Lior Mizrahi/Getty Images)

Such deceptions have probably already been examined and used by militaries and other threat actors, Lohn says, but the AI/ML community is reluctant to openly discuss exploits that can warp its technology. The quirk of digital AI/ML systems is that their ability to sift quickly through vast data sets, from images to electromagnetic signals, is a feature that can be used against them.

"It's like coming up with an optical illusion that tricks a human, except with a machine you get to try it a million times within a second and then determine what's the best way to effect this optical trick," Lohn says.

The fact that AI/ML systems tend to be optimized to zero in on certain data to bolster their accuracy may also be problematic.

"We're finding that [AI/ML] systems may be performing so well because they're looking for features that are not resilient," Lohn explains. "Humans have learned to not pay attention to things that aren't reliable. Machines see something in the corner that gives them high accuracy, something humans miss or have chosen not to see. But it's easy to trick."

The ability to spoof AI/ML from outside joins with the ability to attack its deployment pipeline. The supply chain databases on which AI/ML rely are often open public databases of images or software information libraries like GitHub.

"Anyone can contribute to these big public databases in many instances," Lohn says. "So there are avenues [to mislead AI] without even having to infiltrate."

The National Security Agency has recognized the potential of such data poisoning. In January, Neal Ziring, director of NSA's Cybersecurity Directorate, explained during a Billington CyberSecurity webinar that research into detecting data poisoning or other cyber attacks is not mature. Some attacks work by simply seeding specially crafted images into AI/ML training sets, which have been harvested from social media or other platforms.

According to Ziring, a doctored image can be indistinguishable to human eyes from a genuine image. Poisoned images typically contain data that can train the AI/ML to misidentify whole categories of items.

"The mathematics of these systems, depending on what type of model you're using, can be very susceptible to shifts in the way recognition or classification is done, based on even a small number of training items," he explained.

Stanford cryptography professor Dan Boneh told AFCEA that one technique for crafting poisoned images is known as the fast gradient sign method (FGSM). The method identifies key data points in training images, leading an attacker to make targeted pixel-level changes called perturbations in an image. The modifications turn the image into an adversarial example, providing data inputs that make the AI/ML misidentify it by fooling the model being used. A single corrupt image in a training set can be enough to poison an algorithm, causing misidentification of thousands of images.
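
A minimal sketch of FGSM as Boneh describes it might look like the following. The model and the input image are throwaway placeholders; a real attack would start from a trained network and a genuine image, and would tune the perturbation size rather than use a fixed value.

```python
# Minimal sketch of the fast gradient sign method (FGSM) mentioned above:
# nudge each pixel in the direction that increases the model's loss.
# The model and input below are placeholders, not any system from the article.
import torch
import torch.nn as nn

def fgsm_perturb(model: nn.Module, image: torch.Tensor,
                 label: torch.Tensor, epsilon: float) -> torch.Tensor:
    """Return an adversarial copy of `image` produced by one FGSM step."""
    image = image.clone().detach().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(image), label)
    loss.backward()
    # Each pixel shifts by +/- epsilon, following the sign of the gradient.
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0.0, 1.0).detach()

# Placeholder classifier and input standing in for a real image model.
model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))
image = torch.rand(1, 3, 32, 32)
label = torch.tensor([3])

adversarial = fgsm_perturb(model, image, label, epsilon=0.03)
print("prediction before:", model(image).argmax(dim=1).item())
print("prediction after: ", model(adversarial).argmax(dim=1).item())
```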

FGSM attacks are white box attacks, where the attacker has access to the source code of the AI/ML. They can be conducted on open-source AI/ML for which there are several publicly accessible repositories.

"You typically want to try the AI a bunch of times and tweak your inputs so they yield the maximum wrong answer," Lohn says. "It's easier to do if you have the AI itself and can [query] it. That's a white box attack."

"If you don't have that, you can design your own AI that does the same [task] and you can query that a million times. You'll still be pretty effective at [inducing] the wrong answers. That's a black box attack. It's surprisingly effective."

Black box attacks, where the attacker only has access to the AI/ML's inputs, training data and outputs, make it harder to generate a desired wrong answer. But they're effective at producing random misinterpretation, "creating chaos," Lohn explains.
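
The transfer idea Lohn outlines can be sketched in the same spirit, with the caveat that both models below are untrained placeholders: the attacker crafts the perturbation against a surrogate they fully control and only ever queries the target.

```python
# Sketch of the black-box/transfer idea described above: the attacker cannot
# inspect the target model, so they craft the perturbation against their own
# surrogate trained on the same task and then send it to the target.
# Both models here are tiny placeholders; a real attack would also need data
# that approximates the target's training distribution.
import torch
import torch.nn as nn

torch.manual_seed(0)
target = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))     # query-only
surrogate = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))  # attacker-owned

image = torch.rand(1, 3, 32, 32)
label = torch.tensor([3])

# Craft the perturbation entirely against the surrogate (white-box access).
image_adv = image.clone().detach().requires_grad_(True)
loss = nn.functional.cross_entropy(surrogate(image_adv), label)
loss.backward()
transferred = (image_adv + 0.05 * image_adv.grad.sign()).clamp(0, 1).detach()

# The target is only ever queried, never inspected.
print("target on clean input:      ", target(image).argmax(dim=1).item())
print("target on transferred input:", target(transferred).argmax(dim=1).item())
```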

DARPA has taken up the problem of increasingly complex attacks on AI/ML that don't require inside access to, or knowledge of, the systems being threatened. It recently launched a program called Guaranteeing AI Robustness against Deception (GARD), aimed at the development of theoretical foundations for defensible ML and the creation and testing of defensible systems.

More classical exploits, wherein attackers seek to penetrate and manipulate the software and networks that AI/ML run on, remain a concern. The tech firms and defense contractors crafting artificial intelligence systems for the military have themselves been targets of active hacking and espionage for years. While Lohn says there has been less reporting of algorithm and software manipulation, that would potentially be doable as well.

"It may be harder for an adversary to get in and change things without being noticed if the defender is careful, but it's still possible."

Since 2018, the Army Research Laboratory (ARL), along with research partners in the Internet of Battlefield Things Collaborative Research Alliance, has looked at methods to harden the Army's machine learning algorithms and make them less susceptible to adversarial machine learning techniques. In 2019, the collaborative developed a tool it calls the Attribution-Based Confidence Metric for Deep Neural Networks to provide a sort of quality assurance for applied AI/ML.

Despite the work, ARL scientist Brian Jalaian told its public affairs office that, "While we had some success, we did not have an approach to detect the strongest state-of-the-art attacks such as [adversarial] patches that add noise to imagery, such that they lead to incorrect predictions."

If the U.S. AI/ML community is facing such problems, the Russians probably are too. Andrew Lohn acknowledges that there are few standards for AI/ML development, testing and performance, certainly nothing like the Cybersecurity Maturity Model Certification (CMMC) that DoD and others adopted nearly a decade ago.

Lohn and CSET are trying to communicate these issues to U.S. policymakers, not to dissuade the deployment of AI/ML systems, Lohn stresses, but to make them aware of the limitations and operational risks (including ethical considerations) of employing artificial intelligence.

Thus far, he says, policymakers "are difficult to paint with a broad brush. Some of those I've talked with are gung-ho, others are very reticent. I think they're beginning to become more aware of the risks and concerns."

He also points out that the progress we've made in AI/ML over the last couple of decades may be slowing. In another recent paper, he concluded that advances in the formulation of new algorithms have been overshadowed by advances in computational power, which has been the driving force in AI/ML development.

"We've figured out how to string together more computers to do a [computational] run. For a variety of reasons, it looks like we're basically at the edge of our ability to do that. We may already be experiencing a breakdown in progress."

Policymakers looking at Ukraine, and at the world before Russia's invasion, were already asking about the reliability of AI/ML for defense applications, trying to gauge the level of confidence they should place in it. Lohn says he's basically been telling them the following:

"Self-driving cars can do some things that are pretty impressive. They also have giant limitations. A battlefield is different. If you're in a permissive environment with an application similar to existing commercial applications that have proven successful, then you're probably going to have good odds. If you're in a non-permissive environment, you're accepting a lot of risk."

The rest is here:
The Vulnerability of AI Systems May Explain Why Russia Isn't Using Them Extensively in Ukraine - Forbes

Award-winner warns of the failures of artificial intelligence – The Australian Financial Review

On a positive note, he says AI has been identified as a key enabler for 79 per cent (134) of the targets under the United Nations Sustainable Development Goals (SDGs). However, 35 per cent (59 targets) may experience a negative impact from AI.

Unfortunately, he says, unless we start to address the inequities associated with the development of AI right now, we're in grave danger of not achieving the UN's SDGs and, more pertinently, if AI is not properly governed and proper ethics are applied from the beginning, it will have not only a negative physical impact but also a significant social impact globally.

"There are significant risks to human dignity and human autonomy," he warns.

"If AI is not properly governed and it's not underpinned by ethics, it can create socio-economic inequality and impact on human dignity."

A part of the problem at present is that most AI is being developed for a commercial outcome, with estimates putting its commercial worth at $15 trillion a year by 2030.

Unfortunately, the path we're on poses some significant challenges.

Samarawickrama says AI ethics is underpinned by human ethics and the underlying AI decision-making is driven by data and a hypothesis created by humans.

The danger is that much AI is built off the back of the wrong hypothesis because there is an unintentional bias built into the initial algorithm. Every conclusion the AI reaches flows from that hypothesis, which means every decision it makes, and the quality of that decision, rests on a human's ethics and biases.

For Samarawickrama, this huge flaw in AI can only be rectified if diversity, inclusion and socio-economic inequality are taken into account from the very beginning of the AI process.

We can only get to that point if we ensure we have good AI governance and ethics.

The alternative is we're basically set up to fail if we do not have that diversity of data.

Much of his work in Australia is with the Australian Red Cross and its parent, the International Federation of Red Cross and Red Crescent Societies (IFRC), where he has built a framework connecting AI to the seven Red Cross principles in a bid to link AI to the IFRC's global goal of mitigating human suffering.

And while this is enhancing data literacy across the Red Cross, it also has potential uses in many other organisations, because it's about increasing diversity and social justice around AI.

It's a complex problem to solve because there are a lot of perspectives as to what mitigating human suffering involves. It goes beyond socio-economic inequality and bias.

For example, the International Committee of the Red Cross is concerned about autonomous weapons and their impact on human suffering.

Samarawickrama says that if we are going to achieve the UN SDGs as well as reap the benefits of a $15 trillion-a-year global AI economy by 2030, we have to work hard to get AI right now by focussing on AI governance and ethics.

If we don't, we create a risk of failing to achieve those goals, and we need to reduce those risks by ensuring AI can bring the benefits and value it promises to all of us.

"It's why the Red Cross is a good place to start, because it's all about reducing human suffering, wherever it's found, and we need to link that to AI," Samarawickrama says.

Excerpt from:
Award-winner warns of the failures of artificial intelligence - The Australian Financial Review