Archive for the ‘Artificial Intelligence’ Category

Artificial Intelligence, Virtual Reality, Haptics, Robotics, and Display … – PR Newswire

Contributors to the SIGGRAPH Emerging Technologies, Immersive Pavilion, and VR Theater Demonstrate These Advancements and Evolution of Technology

CHICAGO, July 12, 2023 /PRNewswire/ -- SIGGRAPH 2023, the premier conference and exhibition on computer graphics and interactive techniques, marks its 50th year of breakthroughs and innovation by highlighting the development of emerging technologies over the years. This year's contributions will showcase the developments in virtual reality, artificial intelligence, haptics, robotics, and display technologies. The 50th annual conference runs 6–10 August 2023 in person in Los Angeles, with a companion Virtual Access component.

SIGGRAPH conferences often launch experimental developments and projects that are later adopted by the mainstream in day-to-day use. These technologies evolve to become more complete and ready to deploy in real-life applications. Scientists, engineers, artists, designers, programmers, researchers, and inventors, among others, conceive of innovations that have the potential to shift the way consumers work, socialize, and play in mixed reality and shared spaces.

"Technology is important to the SIGGRAPH community, and we have always worked to be the catalyst for technological advancements that improve and enhance the way people live," said Dr. Mashhuda Glencross, SIGGRAPH 2023 Emerging Technologies Chair. "As technology shifts to change the way in which we interact with computer generated environments, data, and people, our community responds and amazes with more innovation. Historically, these emerging technologies and developments that are first shown at SIGGRAPH have had major impact in the real world, and I expect no less with this year's Emerging Technologies contributions."

Advancements in display technologies are a key focus of the SIGGRAPH 2023 Emerging Technologies installations, with developments in virtual reality headsets, near-eye displays, and an AI-mediated video conferencing system. The "AI-mediated 3D Video Conferencing" project by Michael Stengel et al. is a 3D video conferencing system that can reconstruct and autostereoscopically display a life-sized talking head using consumer-grade computing resources and minimal capture equipment. The "Retinal-resolution Varifocal VR" installation from Yang Zhao et al. showcases a virtual reality head-mounted display that achieves near-retinal resolution while supporting, and matching the dynamics of, a wide range of eye accommodation. "Neural Holographic Near-eye Displays for Virtual Reality," from Suyeon Choi et al., can produce full 3D depth cues, correct for visual aberrations, and lower power consumption. A prototype holographic display, the installation will demonstrate how Neural Holography algorithms have taken significant strides toward unlocking this potential.

The Immersive Pavilion celebrates the evolution of augmented, virtual, and mixed realities, and makes the connection between past, present, and future advancements. This year's content demonstrates the opportunities for collaboration, more immersive storytelling, and emotional connection in a different reality. With "Heightened Empathy: A Multi-user Interactive Experience in a Bioresponsive Virtual Reality," by Mark Armstrong et al., users are immersed in a VR representation of each other's emotional states while also reflecting this to the audience; its three modes are designed to stimulate cognitive, emotional, and compassionate empathy. From Ke-Fan Lin et al., "Actualities: Seamless Live Performance With the Physical and Virtual Audiences in Multiverse" is a live performance provided to on-site and online audiences synchronously, allowing virtual and physical audiences to interact with each other in the multiverse. Both audiences are part of the performance and can influence the visuals shown on the screen or on personal devices.

The SIGGRAPH 2023 VR Theater strives to create memorable experiences in a world of immersive storytelling through its jury-selected short-form works. The VR Theater presents the creations of those working in a medium without walls or frames. This year's showcase includes "Lustration," from executive producer Nathan Anderson, a four-part animated series that follows a group of characters from both the real world and the afterlife. Viewers can actively explore the environment, with an illustrative art style, changing scenes, and shifting camera angles and perspectives that keep them engaged. From director Charuvit Wannissorn, "Luna: Episode 1 Left Behind" is an interactive VR story about a robot and a little girl trying to survive an AI apocalypse. With a branching narrative and creative use of voice, the story utilizes a unique facet of VR in which the audience embodies a character and feels like part of the story.

Learn more about how SIGGRAPH 2023 is demonstrating the evolution of technology by reviewing Emerging Technologies, Immersive Pavilion, and VR Theater content on the full program. For more information about the conference, opportunities, or to register to attend in person or online, go to s2023.SIGGRAPH.org/register.

About ACM, ACM SIGGRAPH, and SIGGRAPH 2023

ACM, the Association for Computing Machinery, is the world's largest educational and scientific computing society, uniting educators, researchers, and professionals to inspire dialogue, share resources, and address the field's challenges. ACM SIGGRAPH is a special interest group within ACM that serves as an interdisciplinary community for members in research, technology, and applications in computer graphics and interactive techniques. The SIGGRAPH conference is the world's leading annual interdisciplinary educational experience showcasing the latest in computer graphics and interactive techniques. SIGGRAPH 2023, the 50th annual conference hosted by ACM SIGGRAPH, will take place live 6–10 August at the Los Angeles Convention Center, along with a Virtual Access option.

SOURCE SIGGRAPH


Artificial Intelligence History: The Turing Test & Fears Of A.I. – BBC History Magazine

Mary Shelley's 1818 novel Frankenstein, the urtext for science fiction, is all about creating artificial life. And Fritz Lang's seminal 1927 film Metropolis established an astonishing number of fantasy horror tropes with its Maschinenmensch, the "machine human" robot that wreaks murderous chaos.

Actually creating AI, however, remained firmly in the realm of science fiction until the advent of the first digital computers soon after the end of the Second World War. Central to this story is Alan Turing, the brilliant British mathematician best known for his work cracking Nazi ciphers at Bletchley Park. Though his code-breaking work was vital for the Allied war effort, Turing deserves to be at least as well known for his work on the development of computers and AI.

While studying for his PhD in the 1930s, he produced a design for a mathematical device now known as a Turing machine, providing a blueprint for computers that is still standard today. In 1948, Turing took a job at Manchester University to work on Britain's first computer, the so-called Manchester "Baby". The advent of computers sparked a wave of curiosity about these "electronic brains", which seemed to be capable of dazzling intellectual feats.

Turing apparently became frustrated by dogmatic arguments that intelligent machines were impossible and, in a 1950 article in the journal Mind, sought to settle the debate. He proposed a method, which he called the "Imitation Game" but which is now known as the Turing test, for detecting a machine's ability to display intelligence. A human interrogator engages in conversations with another person and a machine, but the dialogue is conducted via teleprinter, so the interrogator doesn't know which is which. Turing argued that if a machine couldn't be reliably distinguished from a person through such a test, that machine should be considered intelligent.

At the same time, on the other side of the Atlantic, US academic John McCarthy had become interested in the possibility of intelligent machines. In 1955, while applying for funding for a scientific conference the following year, he coined the term "artificial intelligence".

McCarthy had grand expectations for his event: he thought that, having brought together researchers with relevant interests, AI would be developed within just a few weeks. In the event, they made little progress at the conference, but McCarthy's delegates gave birth to a new field, and an unbroken thread connects those scientists through their academic descendants down to today's AI.

At the end of the 1950s, only a handful of digital computers existed worldwide. Even so, McCarthy and his colleagues had by then constructed computer programs that could learn, solve problems, complete logic puzzles and play games. They assumed that progress would continue to be swift, particularly because computers were rapidly becoming faster and cheaper.

But momentum waned and, by the 1970s, research funding agencies had become frustrated by over-optimistic predictions of progress. Cuts followed, and AI acquired a poor reputation. A new wave of ideas prompted a decade of excitement in the 1980s but, once again, progress stalled and, once again, AI researchers were accused of overinflating expectations of breakthroughs.

Things really began to change this century with the development of a new class of deep learning AI systems based on neural network technology, itself a very old idea. Animal brains and nervous systems comprise huge numbers of cells called neurons, connected to one another in vast networks: the human brain, for example, contains tens of billions of neurons, each of which has, on average, of the order of 7,000 connections. Each neuron recognises simple patterns in data received by its network connections, prompting it to communicate with its neighbours via electro-chemical signals.

Human intelligence somehow arises from these interactions. In the 1940s, US researchers Warren McCulloch and Walter Pitts were struck by the idea that electrical circuits might simulate such systems, and the field of neural networks was born. Although they've been studied continuously since McCulloch and Pitts' proposal, it took further scientific advances to make neural networks a practical reality.

Notably, scientists had to work out how to "train", or configure, networks. The required breakthroughs were delivered by British-born researcher Geoffrey Hinton and colleagues in the 1980s. This work prompted a short-lived flurry of interest in the field, but it died down when it became clear that computer technology of the time was not powerful enough to build useful neural networks.

Come the new century, that situation changed: today we live in an age of abundant, cheap computer power and data, both of which are essential for building the deep-learning networks that underpin recent advances in AI.

Neural networks represent the core technology underpinning ChatGPT, the AI program released by OpenAI in November 2022. ChatGPT, the neural networks of which comprise around a trillion components each, immediately went viral, and is now used by hundreds of millions of people every day. Some of its success can be attributed to the fact that it feels exactly like the kind of AI we have seen in the movies. Using ChatGPT involves simply having a conversation with something that seems both knowledgeable and smart.

What its neural networks are doing, however, is quite basic. When you type something, ChatGPT simply tries to predict what text should appear next. To do this, it has been trained using vast amounts of data (including all of the text published on the world wide web). Somehow, those huge neural networks and data enable it to provide extraordinarily impressive responses, to all intents and purposes passing Turing's test.
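To make "predict what comes next" concrete, here is a deliberately tiny Python sketch. It is not how ChatGPT works internally; it only illustrates the same principle, that generation amounts to repeatedly asking which word is most likely to follow the text so far. The toy corpus and the pick-the-likeliest-word rule are assumptions made purely for illustration.

```python
from collections import Counter, defaultdict

# Toy corpus standing in for the vast training text described above.
corpus = "the cat sat on the mat and the cat slept on the mat".split()

# Count which word tends to follow which.
next_word_counts = defaultdict(Counter)
for current, following in zip(corpus, corpus[1:]):
    next_word_counts[current][following] += 1

def continue_text(prompt_words, steps=5):
    """Repeatedly append the most likely next word - the essence of text generation."""
    words = list(prompt_words)
    for _ in range(steps):
        candidates = next_word_counts.get(words[-1])
        if not candidates:
            break
        words.append(candidates.most_common(1)[0][0])
    return " ".join(words)

print(continue_text(["the", "cat"]))  # prints the prompt plus a short predicted continuation
```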

The success of ChatGPT has brought to the fore a primal fear: that we might bring something to life and then lose control. This is the nightmare of Frankenstein, Metropolis and The Terminator. With the unnerving ability of ChatGPT, you might believe that such scenarios could be close at hand. However, though ChatGPT is remarkable, we shouldn't credit it with too much real intelligence. It is not actually a mind; it only tries to suggest text that might appear next.

It isn't wondering why you are asking it about curry recipes or the performance of Liverpool Football Club; in fact, it isn't wondering anything. It doesn't have any beliefs or desires, nor any purpose other than to predict words. ChatGPT is not going to crawl out of the computer and take over.

That doesn't mean, of course, that there are no potential dangers in AI. One of the most immediate is that ChatGPT or its like may be used to generate disinformation on an industrial scale to influence forthcoming US and UK elections. We also don't know the extent to which such systems acquire the countless human biases we all display, and which are likely evident in their training data. The program, after all, is doing its best to predict what we would write, so the large-scale adoption of this technology may essentially serve to hold up a mirror to our prejudices. We may not like what we see.

Michael Wooldridge is professor of computer science at the University of Oxford, and author of The Road to Conscious Machines: The Story of AI (Pelican, 2020)

This article was first published in the August 2023 issue of BBC History Magazine


Our emerging regulatory approach to Big Tech and Artificial … – FCA

Speaker: Nikhil Rathi, Chief Executive
Location: Economist Impact, Finance transformed: exploring the intersection of finance 2.0 and web3, London
Delivered: 12 July 2023
Note: this is the speech as drafted and may differ from the delivered version

Depending on who you speak to, AI could either lead to the destruction of civilisation or the cure for cancer, or both.

It could either displace today's jobs or enable an explosion in future productivity.

The truth probably embraces both scenarios. At the FCA we are determined that, with the right guardrails in place, AI can offer opportunity.

The Prime Minister said he wants to make the UK the home of global AI safety regulation.

We stand ready to make this a reality for financial services, having been a key thought leader on the topic, including most recently hosting 97 global regulators to discuss regulatory use of data and AI.

Today, we published our feedback statement on Big Tech in Financial Services.

We have announced a call for further input on the role of Big Tech firms as gatekeepers of data and the implications of the ensuing data-sharing asymmetry between Big Tech firms and financial services firms.

We are also considering the risks that Big Tech may pose to operational resilience in payments, retail services and financial infrastructure. And we are mindful of the risk that Big Tech could pose in manipulating consumer behavioural biases.

Partnerships with Big Tech can offer opportunities, particularly by increasing competition for customers and stimulating innovation, but we need to test further whether the entrenched power of Big Tech could also introduce significant risks to market functioning.

What does it mean for competition if Big Tech firms have access to unique and comprehensive data sets such as browsing data, biometrics and social media?

Coupled with anonymised financial transaction data, over time this could result in a longitudinal data set that could not be rivalled by that held by a financial services firm, and it will be a data set that could cover many countries and demographics.

Separately, with so many financial services using Critical Third Parties (indeed, as of 2020, nearly two thirds of UK firms used the same few cloud service providers), we must be clear where responsibility lies when things go wrong. Principally this will be with the outsourcing firm, but we want to mitigate the potential systemic impact that could be triggered by a Critical Third Party.

Together with the Bank of England and PRA, we will therefore be regulating these Critical Third Parties, setting standards for their services, including AI services, to the UK financial sector. That also means making sure they meet those standards and ensuring resilience.

The use of AI can benefit markets, but it can also cause imbalances and risks that affect the integrity, price discovery, transparency and fairness of markets if unleashed unfettered.

Misinformation fuelled by social media can impact price formation across global markets.

Generative AI can affect our markets in ways and at a scale not seen before. For example, on Monday 22 May this year, a suspected AI-generated image purporting to show the Pentagon in the aftermath of an explosion spread across social media just as US markets opened.

It jolted global financial markets until US officials quickly clarified it was a hoax.

We have observed how intraday volatility has doubled compared with levels during the 2008 financial crisis.

This surge in intraday short-term trading across markets and asset classes suggests investors are increasingly turning to highly automated strategies.

Just last week, an online scam video used a deepfake, computer-generated video of respected personal finance campaigner Martin Lewis to endorse an investment scheme.

There are other risks too, with cyber fraud, cyber attacks and identity fraud increasing in scale, sophistication and effectiveness. This means that as AI is further adopted, investment in fraud prevention and operational and cyber resilience will have to accelerate at the same time. We will take a robust line on this: full support for beneficial innovation alongside proportionate protections.

Another area that we are examining is the explainability, or otherwise, of AI models.

To make a great cup of tea, do you just need to know to boil the kettle and then pour the boiling water over the teabag (AFTER the milk of course, I am a Northerner), or do you need to understand why the molecules in the water move more quickly after you have imbued them with energy through the warmer temperature? And do you need to know the correct name for this process (Brownian motion, by the way), or do you just need to know that you have made a decent cup of tea?

Firms in most regulatory regimes are required to have adequate systems and controls. Many in the financial services industry themselves feel that they want to be able to explain their AI models, or prove that the machines behaved in the way they were instructed to, in order to protect their customers and their reputations, particularly in the event that things go wrong.

AI models such as ChatGPT can actually invent fake case studies, sometimes referred to as 'hallucination bias'. This was visible in a recent New York court case, in which one set of lawyers submitted citations based on fake case material.

There are also potential problems around data bias. AI model outcomes depend heavily on the accuracy of data inputs. So what happens when the input data is wrong or skewed and generates a bias?

Poor quality or historically biased data sets can have exponentially worse effects when coupled with AI, which augments the bias. But what of human biases? It was not long ago that unmarried women were routinely turned down for mortgages. There are tales of bank managers rejecting customers' loan applications if they dared to dress down for the meeting.

Therefore can we really conclude that a human decision-maker is always more transparent and less biased than an AI model? Both need controls and checks.

Speculation abounds about large asset managers in the US edging towards unleashing AI based investment advisors for the mass market.

Some argue that autonomous investment funds can outperform human-led funds.

The investment management industry is also facing considerable competitive and cost pressures, with a PwC survey this week citing one in six asset and wealth managers expecting to disappear or be swallowed by a rival by 2027. Some say they need to accelerate tech enablement to survive. But it is intriguing that one Chinese hedge fund that was poised to use a completely automated investment model, effectively using AI as a fund manager, has recently dropped the idea, despite it apparently being able to outperform the market significantly.

And what of the opportunities of AI? There are many.

In the UK, we had the lowest annual growth in worker productivity in the first quarter this year for a decade.

There is optimism that AI can boost productivity and in April, a study by the National Bureau of Economic Research in the US found that productivity was boosted by 14% when over 5000 customer support agents used an AI conversational tool.

Many of the jobs our children will do have not yet been invented but will be created by technology.

And what of the other benefits of AI in financial services? Such as:

As a data-led regulator, we are training our staff to make sure they can maximise the benefits from AI.

We have invested in our tech horizon scanning and synthetic data capabilities, and this summer have established our Digital Sandbox to be the first of its kind used by any global regulator, using real transaction, social media, and other synthetic data to support Fintech and other innovations to develop safely.

Internally, the FCA has developed its supervision technology. We are using AI methods for firm segmentation, the monitoring of portfolios and to identify risky behaviours.

If there is one thing we know about AI, it is that it transcends borders and needs a globally co-ordinated approach.

The FCA plays an influential role internationally both bilaterally and within global standard setting bodies and will be seeking to use those relationships to manage the risks and opportunities of innovations and AI.

The FCA is a founding member and convenor of the Global Financial Innovation Network, where over 80 international regulators collaborate and share approaches to complex emerging areas of regulation, including ESG, AI, and Crypto.

We are also one of four regulators that form the UK Digital Regulation Cooperation Forum, pooling insight and experience on issues such as AI and algorithmic processing.

Separately, we are also hosting the global techsprint on the identification of Greenwashing in our Digital Sandbox, and we will be extending this global techsprint approach to include Artificial Intelligence risks and innovation opportunities.

We still have questions to answer about where accountability should sit: with users, with the firms, or with the AI developers? And we must have a debate about societal risk appetite.

What should be offered in terms of compensation or redress if customers lose out due to AI going wrong? Or should there be an acceptance for those who consent to new innovations that they will have to swallow a degree of risk?

Any regulation must be proportionate enough to foster beneficial innovation but robust enough to avoid a race to the bottom and a loss in trust and confidence, which when it happens can be deleterious for financial services and very hard to win back.

One way to strike the balance and make sure we maximise innovation but minimise risk is to work with us, through our upcoming AI Sandbox.

While the FCA does not regulate technology, we do regulate the effect on and use of tech in financial services.

We are already seeing AI-based business models coming through our Authorisations gateway both from new entrants and within the 50,000 firms we already regulate.

And with these developments, it is critical we do not lose sight of our duty to protect the most vulnerable and to safeguard financial inclusion and access.

Our outcomes-based approach not only serves to protect but also to encourage beneficial innovation.

Thanks to this outcomes-based approach, we already have frameworks in place to address many of the issues that come with AI.

The Consumer Duty, coming into force this month, stipulates that firms must design products and services that aim to secure good consumer outcomes. And they have to demonstrate how all parts of their supply chain, from sales to after-sales, distribution and digital infrastructure, deliver these.

The Senior Managers & Certification Regime also gives us a clear framework to respond to innovations in AI. This makes clear that senior managers are ultimately accountable for the activities of the firm.

There have recently been suggestions in Parliament that there should be a bespoke SMCR-type regime for the most senior individuals managing AI systems, individuals who may not typically have performed roles subject to regulatory scrutiny but who will now be increasingly central to firms' decision-making and the safety of markets. This will be an important part of the future regulatory debate.

We will remain super vigilant on how firms mitigate cyber-risks and fraud given the likelihood that these will rise.

Our Big Tech feedback statement sets out our focus on the risks to competition.

We are open to innovation and testing the boundaries before deciding whether and what new regulations are needed. For example, we will work with regulatory partners such as the Information Commissioner's Office to test consent models, provided that the risks are properly explained and demonstrably understood.

We will link our approach to our new secondary objective to support economic growth and international competitiveness. As the PM has set out, adoption of AI could be key to the UK's future competitiveness, nowhere more so than in financial services.

The UK is a leader in fintech with London being in the top 3 centres in the world and number 1 in Europe.

We have world-class talent and are ensuring the development of further skills, with our world class universities.

We want to support inward investment with pro-innovation regulation and transparent engagement.

International and industry collaboration is key on this issue, and we stand ready to lead and help make the UK the global home of AI regulation and safety.


One Of The Most Important Uses Of Artificial Intelligence Is Fraud … – Finextra

Online shopping has quickly become one of the primary means of buying furniture, groceries, and clothes that were previously bought offline. Unfortunately, in global business environments that generate high volumes of data, detecting fraudsters can be challenging.

AI-based fraud detection has proven effective at combating fraud in banking and insurance. When fraud does occur, some banks reimburse consumers, while others claim the transaction was made unilaterally by the customer. Either way, banks face financial losses or a loss of customer trust.

AI and Fraud Detection

Artificial intelligence fraud detection technology has dramatically assisted businesses in enhancing internal security and streamlining corporate operations. AI's efficiency makes it a formidable force against financial crime: its data analysis capabilities allow it to uncover patterns in transactions that indicate fraudulent behavior and to flag them in real time.

AI models can help detect fraud by flagging suspicious transactions for further scrutiny or rejecting them outright, and by rating the likelihood that each transaction is fraudulent so that investigators can focus on the cases most likely to involve fraud. These models often also attach reason codes to the transactions they flag.

Reason codes aid investigators by quickly pinpointing the problem and expediting investigations. Investigative teams can also feed their findings on suspicious transactions back to the AI, improving its understanding and preventing it from repeatedly flagging patterns that do not turn out to be fraud.
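As a rough illustration of how a risk score and reason codes might travel together, here is a minimal Python sketch. The rules, thresholds, reason-code names (R01 to R03) and profile fields are illustrative assumptions, not any vendor's actual scheme.

```python
# A transaction is scored against the customer's profile; every rule that fires
# adds to the risk score and contributes a human-readable reason code.
def score_transaction(txn, profile):
    score, reasons = 0.0, []
    if txn["amount"] > 5 * profile["avg_amount"]:
        score += 0.4
        reasons.append("R01: amount far above this customer's average")
    if txn["country"] != profile["home_country"]:
        score += 0.3
        reasons.append("R02: transaction outside home country")
    if txn["hour"] not in profile["usual_hours"]:
        score += 0.2
        reasons.append("R03: unusual time of day for this customer")
    return score, reasons

profile = {"avg_amount": 40.0, "home_country": "GB", "usual_hours": range(8, 22)}
txn = {"amount": 900.0, "country": "RO", "hour": 3}

score, reasons = score_transaction(txn, profile)
if score >= 0.5:  # review threshold (assumed)
    print(f"Flag for review, risk score {score:.2f}")
    for reason in reasons:
        print(" ", reason)
```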

The Role of ML and AI in Fraud Detection

Machine learning refers to analytical approaches that learn patterns from data sets automatically, without human assistance. Artificial intelligence (AI) is the broader term for analytical techniques applied to tasks ranging from driving cars safely to detecting fraud; machine learning is one method used to build such models.

AI refers to technology capable of performing tasks that require intelligence, such as analyzing data or understanding human language. AI algorithms are designed to recognize and predict patterns in real-time. AI often incorporates different ML models.

Machine learning, AI's subset, uses algorithms to process large datasets so that systems can operate more autonomously, and their performance improves over time as more data comes their way. It takes two main forms: unsupervised machine learning (UML), whose algorithms look for hidden patterns in unlabeled data, and supervised machine learning (SML), whose algorithms use labeled data to anticipate future events.

In fraud detection, SML algorithms train supervised machine-learning models on transactional data labeled as fraudulent or not, while UML employs feature-based anomaly detection to find transactions that differ significantly from the norm. Unsupervised models tend to be simpler but less accurate than supervised ones.
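The unsupervised route described above can be sketched in a few lines of Python using scikit-learn's IsolationForest, a standard anomaly detector. The synthetic features (amount, hour of day, distance from home) and the 1% contamination rate are assumptions chosen purely for illustration.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Synthetic transactions: [amount, hour of day, distance from home in km]
normal = np.column_stack([
    rng.gamma(2.0, 30.0, 5000),    # typical purchase amounts
    rng.integers(8, 22, 5000),     # daytime activity
    rng.exponential(5.0, 5000),    # usually close to home
])
odd = np.array([[4000.0, 3, 800.0]])  # large, night-time, far away

# Learn what "normal" looks like without any fraud labels, then score new transactions.
detector = IsolationForest(contamination=0.01, random_state=0).fit(normal)
print(detector.predict(odd))  # -1 means "anomalous", 1 means "looks normal"
```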

Fraud detection and prevention tools such as these can be highly efficient because they automatically discover patterns across vast numbers of transactions. When employed effectively, machine learning can differentiate fraudulent activity from legitimate conduct while adapting to previously unknown fraud techniques.

Data management becomes intricate when recognizing patterns in data and applying data science techniques to distinguish normal from abnormal behavior, often within milliseconds. Continuously improving classification and differentiation requires a deep understanding of the data, and hundreds of measures may need to be computed within milliseconds for each transaction.

Without proper domain data and fraud-specific approaches, machine-learning algorithms can easily be deployed inaccurately, leading to costly miscalculations that are difficult, and expensive in time and resources, to rectify. As with humans, an improperly built machine-learning model may exhibit undesirable traits.

Is Fraud Detection Using Artificial Intelligence Possible?

AI can play an invaluable role in managing fraud by detecting suspicious activities and preventing future fraudulent schemes from emerging. Fraud losses are estimated at an average of 6.055% of global gross domestic product annually, cyber breaches cost businesses between 3% and 10%, and global digital fraud losses are expected to exceed $343 billion by 2027.

Given these estimates, every organization should establish an efficient fraud management system to identify, prevent, detect, and respond appropriately to possible fraudulent activity, encompassing both detection and prevention strategies within its walls.

Artificial intelligence plays a pivotal role in managing fraud. AI technology, such as machine learning (ML) algorithms, can analyze large data sets to detect anomalies that suggest possible fraud.

AI fraud management systems have proven highly successful at recognizing and stopping various fraud types (payment fraud, identity fraud, and phishing, to name but three), adapting quickly to emerging patterns of fraudulent behavior and becoming better detectors over time. AI fraud prevention solutions may also integrate with additional security measures such as identity verification and biometric authentication for enhanced protection against such schemes.

What are the Benefits of AI in Fraud Detection?

AI fraud detection offers a way to enhance customer service without negatively affecting the accuracy and speed of operations. We discuss its key benefits below:

Accuracy: Artificial intelligence software can quickly sort through large volumes of data, identifying patterns and anomalies that would otherwise be difficult for humans to recognize. AI algorithms also learn and improve over time as they continuously process new information alongside previously analyzed datasets.

Real-time monitoring: AI algorithms allow real-time tracking, enabling organizations to detect and respond immediately to fraud attempts.

Reduced false positives: Fraud detection often produces false positives, where legitimate transactions are mistakenly marked as fraudulent. AI algorithms that learn from feedback can reduce false positives significantly.

Increased efficiency: Human intervention is not as necessary when repetitive duties like evaluating transactions or confirming identity are automated by AI systems.

Cost reduction: Fraudulent actions can have a serious negative impact on an organization's finances and reputation. By helping to curb fraudulent activity, AI algorithms save organizations money and protect their brand.

AI-based Uses for Fraud Detection and Prevention

Combining AI Models that are Supervised and Unsupervised

As organized crime has proven incredibly adaptive and sophisticated, traditional defense methods will not suffice; each use case should include tailor-made approaches to anomaly detection that best suit its unique circumstances.

Therefore, supervised and unsupervised models must be combined in any comprehensive next-generation fraud strategy. Supervised learning is a form of machine learning in which models are created from numerous "labeled" transactions.

Every transaction is labeled as fraud or not, and models are trained on large volumes of transaction data to identify the patterns that best represent lawful activity; the accuracy of a supervised algorithm corresponds directly to the relevance and cleanliness of its training data. Unsupervised models are used to detect unusual behavior when transaction labels are few or nonexistent; in these cases the model must self-learn to uncover patterns that traditional analytics cannot.
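Here is a minimal sketch of what combining the two families might look like, again using scikit-learn: a supervised classifier trained on labeled history plus an unsupervised anomaly score, blended into a single risk number. The synthetic data, stand-in labels and 70/30 blend weights are assumptions for illustration only.

```python
import numpy as np
from sklearn.ensemble import IsolationForest
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
X = rng.normal(size=(2000, 4))                # transaction features (synthetic)
y = (X[:, 0] + X[:, 1] > 2.5).astype(int)     # stand-in fraud labels for the demo

clf = LogisticRegression().fit(X, y)          # supervised: learns known fraud patterns
iso = IsolationForest(random_state=1).fit(X)  # unsupervised: learns what normal traffic looks like

def risk(txns):
    supervised = clf.predict_proba(txns)[:, 1]  # probability of a known fraud pattern
    novelty = -iso.decision_function(txns)      # positive roughly means "unlike normal traffic"
    return 0.7 * supervised + 0.3 * np.clip(novelty, 0.0, 1.0)

print(risk(X[:3]))  # blended risk scores for the first three transactions
```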

In Action: Behavioral Analytics

Machine learning techniques are used in behavioral analytics to predict and understand behavior more closely across all transactions. Data is then utilized to create profiles highlighting each user, merchant, or account's activities and behavior.

Profiles can be updated in real time to reflect each transaction made, which allows analytic functions to predict future behavior accurately. Profiles capture both financial transactions and non-financial events, such as address changes, requests for duplicate cards, and password resets. Financial transaction data helps build patterns that show an individual's average spending velocity, their preferred hours and days for transacting, and the distance between payment locations.

Profiles provide a virtual snapshot of current activity, which helps prevent transactions from being abandoned because of false positives. An effective corporate fraud and credit solution consists of analytical models and profiles that offer real-time insights into transaction trends.
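Below is a minimal Python sketch of such a profile, updated as each transaction streams in and keeping the kind of running statistics described above (average spend, usual hours, last payment location). The field names and the crude "unusual" test are illustrative assumptions.

```python
from collections import Counter
from dataclasses import dataclass, field
from typing import Optional, Tuple

@dataclass
class CustomerProfile:
    n: int = 0
    avg_amount: float = 0.0
    hour_counts: Counter = field(default_factory=Counter)
    last_location: Optional[Tuple[float, float]] = None

    def update(self, amount: float, hour: int, location: Tuple[float, float]) -> None:
        """Fold one new transaction into the running profile."""
        self.n += 1
        self.avg_amount += (amount - self.avg_amount) / self.n  # running mean of spend
        self.hour_counts[hour] += 1
        self.last_location = location

    def looks_unusual(self, amount: float, hour: int) -> bool:
        """Very rough 'does this look like this customer?' check."""
        return amount > 5 * max(self.avg_amount, 1.0) or self.hour_counts[hour] == 0

profile = CustomerProfile()
for amount, hour, loc in [(30.0, 12, (51.5, -0.1)), (18.0, 13, (51.5, -0.1))]:
    profile.update(amount, hour, loc)

print(profile.looks_unusual(amount=600.0, hour=3))  # True: large amount at an unseen hour
```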

Develop Models with Large Datasets

Studies have demonstrated that data volume and variety play a bigger role in a machine-learning model's success than raw algorithmic intelligence, in much the way that accumulated experience underpins human expertise.

As expected, increasing the data set used for creating the features of a machine-learning model can improve the accuracy of its predictions. Consider that doctors are trained across thousands of patients; that accumulated knowledge allows them to diagnose correctly in their areas of specialization.

Fraud detection models can benefit significantly from processing millions of transactions (both valid and fraudulent), as well as from studying these instances in depth. To best detect fraud, one must evaluate large volumes of data to assess risk at individual levels and calculate it effectively.

Self-Learning AI and Adaptive Analytics

Machine learning can help combat fraudsters who make it challenging for consumers to protect their accounts. Fraud detection experts should look for adaptive AI solutions that sharpen judgments and reactions on marginal cases, enhancing performance and ensuring maximum protection of funds.

Accuracy is crucial for transactions that score just above or just below a particular threshold, because that is where false positives (legitimate transactions scoring highly) and false negatives (fraudulent transactions scoring low) occur.

Adaptive analytics offers businesses a more accurate picture of the danger areas within a company. It increases sensitivity to fraud trends by adapting automatically to the dispositions of recent cases: an analyst tells the adaptive system whether a particular flagged transaction was in fact legitimate, and the system differentiates frauds more accurately as a result.

In this way the models reflect the evolving fraud landscape, from new fraud tactics and patterns to subtle misconduct practices that may have lain dormant for extended periods, and adaptive modeling makes the model adjustments automatically.

This adaptive modeling method automatically adjusts the predictor characteristics within fraud models to improve detection rates and forestall future attacks. It is an indispensable way of improving fraud detection while mitigating new threats.
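As a rough sketch of that adaptive loop, the snippet below uses scikit-learn's SGDClassifier as a simple online model whose weights are nudged each time an analyst confirms whether a flagged transaction was really fraud. Real adaptive analytics systems are considerably more sophisticated; this only illustrates the feedback mechanism, and the synthetic data and feature values are assumptions.

```python
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(2)
model = SGDClassifier(loss="log_loss", random_state=2)  # "log_loss" needs scikit-learn >= 1.1

# Initial fit on historical labelled transactions (synthetic stand-ins here).
X_hist = rng.normal(size=(500, 3))
y_hist = (X_hist[:, 0] > 1.5).astype(int)
model.partial_fit(X_hist, y_hist, classes=np.array([0, 1]))

def record_analyst_disposition(txn_features, is_fraud):
    """Fold one confirmed case back into the model, nudging future scores."""
    model.partial_fit(txn_features.reshape(1, -1), np.array([int(is_fraud)]))

# An investigator reviews a flagged transaction and confirms it was legitimate.
record_analyst_disposition(np.array([2.0, -0.3, 0.1]), is_fraud=False)
print(model.predict_proba(np.array([[2.0, -0.3, 0.1]]))[0, 1])  # updated fraud probability
```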

What Dangers could Arise from the Application of AI in Fraud Detection?

AI technologies can also pose certain risks, although these are manageable in part through AI solutions that can explain their decisions. Below, we discuss the potential dangers of AI fraud detection:

Biased algorithms: AI algorithms may produce skewed or incorrect outcomes if their training data contains bias.

False positive or false negative results: Automated systems can err in both directions: false negatives miss genuinely fraudulent activity, while false positives wrongly flag legitimate activity as fraud.

Absence of transparency: AI algorithms can be challenging to decipher, making it hard to determine why a particular transaction was marked as fraudulent.

Explainable AI can be used to reduce some of these inherent risks. The term refers to AI systems that communicate their decision-making process clearly enough for humans to understand. Explainable AI has proven particularly helpful for fraud detection because it offers clear explanations for why certain transactions or activities were flagged as potentially illicit.

Bottom Line

As part of its AI fraud detection strategy, an artificial intelligence development company can identify automated fraud and complex attacks more rapidly and efficiently by employing supervised and unsupervised machine learning approaches.

Since card-not-present transactions remain prevalent online, the banking and retail industries face constant fraud threats. Data breaches can result from various crimes, such as email phishing, financial fraud, identity theft, document falsification, and false accounts created by criminals targeting vulnerable users.


Artificial intelligence must be grounded in human rights, says High … – OHCHR

HIGH LEVEL SIDE EVENT OF THE 53rd SESSION OF THE HUMAN RIGHTS COUNCIL on

What should the limits be? A human-rights perspective on what's next for artificial intelligence and new and emerging technologies

Opening Statement by Volker Türk

UN High Commissioner for Human Rights

It is great that we are having a discussion about human rights and AI.

We all know how much our world and the state of human rights is being tested at the moment. The triple planetary crisis is threatening our existence. Old conflicts have been raging for years, with no end in sight. New ones continue to erupt, many with far-reaching global consequences. We are still reeling from the consequences of the COVID-19 pandemic, which exposed and deepened a raft of inequalities the world over.

But the question before us today, what the limits should be on artificial intelligence and emerging technologies, is one of the most pressing faced by society, governments and the private sector.

We have all seen and followed over recent months the remarkable developments in generative AI, with ChatGPT and other programmes now readily accessible to the broader public.

We know that AI has the potential to be enormously beneficial to humanity. It could improve strategic foresight and forecasting, democratize access to knowledge, turbocharge scientific progress, and increase capacity for processing vast amounts of information.

But in order to harness this potential, we need to ensure that the benefits outweigh the risks, and we need limits.

When we speak of limits, what we are really talking about is regulation.

To be effective, to be humane, to put people at the heart of the development of new technologies, any solution any regulation must be grounded in respect for human rights.

Two schools of thought are shaping the current development of AI regulation.

The first is risk-based only, focusing largely on self-regulation and self-assessment by AI developers. Instead of relying on detailed rules, risk-based regulation emphasizes identifying and mitigating risks to achieve outcomes.

This approach transfers a lot of responsibility to the private sector. Some would say too much; we hear that from the private sector itself.

It also results in clear gaps in regulation.

The other approach embeds human rights in AI's entire lifecycle. From beginning to end, human rights principles are included in the collection and selection of data, as well as the design, development, deployment and use of the resulting models, tools and services.

This is not a warning about the future: we are already seeing the harmful impacts of AI today, and not only generative AI.

AI has the potential to strengthen authoritarian governance.

It can operate lethal autonomous weapons.

It can form the basis for more powerful tools of societal control, surveillance, and censorship.

Facial recognition systems, for example, can turn into mass surveillance of our public spaces, destroying any concept of privacy.

AI systems that are used in the criminal justice system to predict future criminal behaviour have already been shown to reinforce discrimination and to undermine rights, including the presumption of innocence.

Victims and experts, including many of you in this room, have raised the alarm bell for quite some time, but policy makers and developers of AI have not acted enough or fast enough on those concerns.

We need urgent action by governments and by companies. And at the international level, the United Nations can play a central role in convening key stakeholders and advising on progress.

There is absolutely no time to waste.

The world waited too long on climate change. We cannot afford to repeat that same mistake.

What could regulation look like?

The starting point should be the harms that people experience and will likely experience.

This requires listening to those who are affected, as well as to those who have already spent many years identifying and responding to harms. Women, minority groups, marginalized people, in particular, are disproportionately affected by bias in AI. We must make serious efforts to bring them to the table for any discussion on governance.

Attention is also needed to the use of AI in public and private services where there is a heightened risk of abuse of power or privacy intrusions: justice, law enforcement, migration, social protection, and financial services.

Second, regulations need to require assessment of the human rights risks and impacts of AI systems before, during, and after their use. Transparency guarantees, independent oversight, and access to effective remedies are needed, particularly when the State itself is using AI technologies.

AI technologies that cannot be operated in compliance with international human rights law must be banned or suspended until such adequate safeguards are in place.

Third, existing regulations and safeguards need to be implemented, for example frameworks on data protection, competition law, and sectoral regulations, including for health, tech or financial markets. A human rights perspective on the development and use of AI will have limited impact if respect for human rights is inadequate in the broader regulatory and institutional landscape.

And fourth, we need to resist the temptation to let the AI industry itself assert that self-regulation is sufficient, or to claim that it should be for them to define the applicable legal framework. I think we have learnt our lesson from social media platforms in that regard. Whilst their input is important, it is essential that the full democratic process, laws shaped by all stakeholders, is brought to bear on an issue in which all people, everywhere, will be affected far into the future.

At the same time, companies must live up to their responsibilities to respect human rights in line with the Guiding Principles on Business and Human Rights. Companies are responsible for the products they are racing to put on the market. My Office is working with a number of companies, civil society organizations and AI experts to develop guidance on how to tackle generative AI. But a lot more needs to be done along these lines.

Finally, while it would not be a quick fix, it may be valuable to explore the establishment of an international advisory body for particularly high-risk technologies, one that could offer perspectives on how regulatory standards could be aligned with universal human rights and rule of law frameworks. The body could publicly share the outcomes of its deliberations and offer recommendations on AI governance. This is something that the Secretary-General of the United Nations has also proposed as part of the Global Digital Compact for the Summit of the Future next year.

The human rights framework provides an essential foundation that can provide guardrails for efforts to exploit the enormous potential of AI, while preventing and mitigating its enormous risks.

I look forward to discussing these issues with you.
