Archive for the ‘Artificial Intelligence’ Category

Demystifying AI: The Probability Theory Behind LLMs Like OpenAI’s ChatGPT – PYMNTS.com

When a paradigm shift occurs, it is not always obvious to those affected by it.

But there is no eye-of-the-storm equivalent when it comes to generative artificial intelligence (AI).

The technology is here. There are already various commercial products available for deployment, and organizations that can effectively leverage it in support of their business goals are likely to outperform peers that fail to adopt the innovation.

Still, as with many innovations, uncertainty and institutional inertia reign supreme, which is why understanding how the large language models (LLMs) powering AI work is critical not just to piercing the black box of the technology's supposed inscrutability, but also to applying AI tools correctly within an enterprise setting.

The most important thing to understand about the foundational models powering today's AI interfaces and giving them their ability to generate responses is the simple fact that LLMs, like Google's Bard, Anthropic's Claude, OpenAI's ChatGPT and others, are just adding one word at a time.

Underneath the layers of sophisticated algorithmic calculations, that's all there is to it.

That's because, at a fundamental level, generative AI models are built to generate "reasonable continuations" of text by drawing from a ranked list of words, each given a different weighted probability based on the data set the model was trained on.
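To make that word-at-a-time idea concrete, here is a minimal Python sketch of sampling a continuation from a ranked, weighted word list. The prompt, the candidate words and their probabilities are invented for illustration and are not drawn from any real model.

```python
import random

# Toy illustration only: a hand-made probability table standing in for what a
# real LLM computes over a large vocabulary with billions of parameters.
# The numbers below are invented for the example.
next_word_probs = {
    "The cat sat on the": {"mat": 0.55, "sofa": 0.25, "floor": 0.15, "moon": 0.05},
}

def continue_text(prompt: str, probs: dict[str, dict[str, float]]) -> str:
    """Pick one 'reasonable continuation' by sampling the weighted word list."""
    ranked = probs[prompt]
    words = list(ranked)
    weights = list(ranked.values())
    choice = random.choices(words, weights=weights, k=1)[0]
    return f"{prompt} {choice}"

print(continue_text("The cat sat on the", next_word_probs))
# Most runs print "...mat"; occasionally a lower-probability word is chosen,
# which is one reason the same prompt can yield different outputs.
```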

Read more: There Are a Lot of Generative AI Acronyms – Here's What They All Mean

While news of AI that can surpass human intelligence is helping fuel the technology's hype, the reality is driven far more by math than by myth.

"It is important for everyone to understand that AI learns from data ... at the end of the day, [AI] is merely probabilities and statistics," Akli Adjaoute, AI pioneer and founder and general partner at venture capital fund Exponion, told PYMNTS in November.

But where do the probabilities that determine an AI system's output come from?

The answer lies within the AI model's training data. Peeking into the inner workings of an AI model reveals not only that the next "reasonable" word is being identified, weighted and then generated, but that this process operates on sub-word chunks rather than whole words, as AI models break text apart into more manageable tokens.
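As a rough illustration of tokenization, the sketch below uses the open-source tiktoken library and its cl100k_base encoding; the specific splits vary by model and encoding, so treat the output as indicative rather than as how any particular product works.

```python
# Requires: pip install tiktoken
import tiktoken

# cl100k_base is one of the publicly available OpenAI encodings; used here
# only to show that text becomes sub-word tokens, not whole words or letters.
enc = tiktoken.get_encoding("cl100k_base")

text = "Demystifying artificial intelligence"
token_ids = enc.encode(text)
pieces = [enc.decode([t]) for t in token_ids]

print(token_ids)  # a list of integer token IDs
print(pieces)     # the same text split into sub-word chunks
```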

That is a big part of why prompt engineering for AI models is an emerging skill set. After all, different prompts produce different outputs based on the probabilities inherent to each reasonable continuation, meaning that to get the best output, you need a clear idea of where to point the provided input or query.

It also means that the data informing the weight given to each probabilistic outcome must be relevant to the query. The more relevant, the better.
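One way to see how training data directly sets those weights is to build the simplest possible language model, a bigram counter, over a tiny invented corpus. Everything here is a toy assumption for illustration; real LLMs use vastly larger data sets and far more elaborate statistics, but the principle that the probabilities come from the data is the same.

```python
from collections import Counter, defaultdict

# A deliberately tiny "training set". The point is only that every probability
# the model can ever assign is derived from counts over this data.
corpus = (
    "the payment was declined the payment was approved "
    "the payment was flagged for fraud the transfer was approved"
).split()

# Count which word follows which (a bigram model, the simplest possible LM).
follower_counts: dict[str, Counter] = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follower_counts[prev][nxt] += 1

def next_word_distribution(word: str) -> dict[str, float]:
    counts = follower_counts[word]
    total = sum(counts.values())
    return {w: c / total for w, c in counts.items()}

print(next_word_distribution("was"))
# {'declined': 0.25, 'approved': 0.5, 'flagged': 0.25} -- the weights mirror the
# training data, which is why relevant, domain-specific data shapes the output.
```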

See also: Tailoring AI Solutions by Industry Key to Scalability

While PYMNTS Intelligence has found that more than eight in 10 business leaders (84%) believe generative AI will positively impact the workforce, generative AI systems are only as good as the data they're trained on. That's why the largest AI players are in an arms race to acquire the best training data sets.

"There's a long way to go before there's a futuristic version of AI where machines think and make decisions. Humans will be around for quite a while," Tony Wimmer, head of data and analytics at J.P. Morgan Payments, told PYMNTS in March. "And the more that we can write software that has payments data at the heart of it to help humans, the better payments will get."

That's why, to train an AI model to perform to the necessary standard, many enterprises are relying on their own internal data to avoid compromising model outputs. By creating vertically specialized LLMs trained for industry use cases, organizations can deploy AI systems that are able to find the signal within the noise, as well as to be further fine-tuned to business-specific goals with real-time data.

As Akli Adjaoute told PYMNTS back in November, "If you go into a field where the data is real, particularly in the payments industry, whether it's credit risk, whether it's delinquency, whether it's AML [anti-money laundering], whether it's fraud prevention, anything that touches payments, AI can bring a lot of benefit."

Read the rest here:
Demystifying AI: The Probability Theory Behind LLMs Like OpenAI's ChatGPT - PYMNTS.com

The Urgent but Difficult Task of Regulating Artificial Intelligence – Amnesty International

By David Nolan, Hajira Maryam & Michael Kleinman, Amnesty Tech

The year 2023 marked a new era of AI hype, rapidly steering policy makers towards discussions on the safety and regulation of new artificial intelligence (AI) technologies. The feverish year in tech started with the launch of ChatGPT in late 2022 and ended with a landmark agreement on the EU AI Act being reached. Whilst the final text is still being ironed out in technical meetings over the coming weeks, early signs indicate the Western world's first AI rulebook goes some way toward protecting people from the harms of AI but still falls short in a number of crucial areas, failing to ensure human rights protections, especially for the most marginalised. This came soon after the UK Government hosted an inaugural AI Safety Summit in November 2023, where global leaders, key industry players and select civil society groups gathered to discuss the risks of AI. Although the growing momentum and debate on AI governance is welcome and urgently needed, the key question for 2024 is whether these discussions will generate concrete commitments, focus on the most important present-day AI risks and, critically, translate into further substantive action in other jurisdictions.

Whilst AI developments do present new opportunities and benefits, we must not ignore the documented dangers posed by AI tools when they are used as a means of societal control, mass surveillance and discrimination. All too often, AI systems are trained on massive amounts of private and public data, data which reflects societal injustices and often leads to biased outcomes that exacerbate inequalities. From predictive policing tools, to automated systems used in public-sector decision-making to determine who can access healthcare and social assistance, to monitoring the movement of migrants and refugees, AI has flagrantly and consistently undermined the human rights of the most marginalised in society. Other forms of AI, such as fraud detection algorithms, have disproportionately impacted ethnic minorities, who have endured devastating financial problems, as Amnesty International has already documented, while facial recognition technology has been used by police and security forces to target racialised communities and entrench Israel's system of apartheid.

So, what makes regulation of AI complex and challenging? First, there is the vague nature of the term "AI" itself, which makes efforts to regulate this technology more cumbersome. There is no widespread consensus on the definition of AI, because the term does not refer to a singular technology but rather encapsulates a myriad of technological applications and methods. The use of AI systems across many different domains in the public and private sectors means that a large number of varied stakeholders are involved in their development and deployment; such systems are a product of labour, data, software and financial inputs, and any regulation must grapple with both upstream and downstream harms. Further, these systems cannot be strictly considered hardware or software: their impact comes down to the context in which they are developed and implemented, and regulation must take this into account.

Alongside the EU legislative process, the UK, US and others have set out their own roadmaps and approaches to identifying the key risks AI technologies present, and how they intend to mitigate them. Whilst these legislative processes carry many complexities, this should not delay efforts to protect people from the present and future harms of AI, and there are crucial elements that we, at Amnesty, know any proposed regulatory approach must contain. Regulation must be legally binding and must centre the already documented harms to people subjected to these systems. Commitments and principles on the responsible development and use of AI, the core of the current pro-innovation regulatory framework being pursued by the UK, do not offer adequate protection against the risks of emerging technology and must be put on a statutory footing.

Similarly, any regulation must include broader accountability mechanisms over and above the technical evaluations being pushed by industry. Whilst these may be a useful string in any regulatory bow, particularly in testing for algorithmic bias, bans and prohibitions cannot be off the table for systems fundamentally incompatible with human rights, no matter how accurate or technically efficacious they purport to be.

Others must learn from the EU process and ensure there are no loopholes that allow public and private sector players to circumvent regulatory obligations; removing any exemptions for AI used within national security or law enforcement is critical to achieving this. It is also important that where future regulation limits or prohibits the use of certain AI systems in one jurisdiction, no loopholes or regulatory gaps allow the same systems to be exported to other countries where they could be used to harm the human rights of marginalised groups. This remains a glaring gap in the UK, US and EU approaches, as they fail to take into account the global power imbalances of these technologies, especially their impact on communities in the Global Majority, whose voices are not represented in these discussions. There have already been documented cases of outsourced workers being exploited in Kenya and Pakistan by companies developing AI tools.

As we enter 2024, now is the time not only to ensure that AI systems are rights-respecting by design, but also to guarantee that those who are impacted by these technologies are meaningfully involved in decision-making on how AI should be regulated, and that their experiences are continually surfaced and centred within these discussions. More than lip service by lawmakers, we need binding regulation that holds companies and other key industry players to account and ensures that profits do not come at the expense of human rights protections. International, regional and national governance efforts must complement and catalyse each other, and global discussions must not come at the expense of meaningful national regulation or binding regulatory standards; these are not mutually exclusive. This is the level at which accountability is served, and we must learn from past attempts to regulate tech, which means ensuring robust mechanisms are introduced to allow victims of AI-inflicted rights violations to seek justice.

Read the original here:
The Urgent but Difficult Task of Regulating Artificial Intelligence - Amnesty International

1 Spectacular Artificial Intelligence (AI) Growth Stock Down 35% to Buy Hand Over Fist in 2024 – Yahoo Finance

Read the original here:
1 Spectacular Artificial Intelligence (AI) Growth Stock Down 35% to Buy Hand Over Fist in 2024 - Yahoo Finance

The Diabetic Cyborg Life 01/22: Artificial Intelligence Makes for Lies in New Hampshire – Medium

AI is being used to make bogus calls to voters ahead of the primary tomorrow, Tuesday, January 23, 2024. It seems the GOP or other liars are using the technology to make it sound as if Joe Biden is the one spreading misinformation.

The call, which was sent Sunday, said, "Your vote makes a difference in November, not this Tuesday."

It means we should all be aware that this tactic might be repeated in November.

Originally posted here:
The Diabetic Cyborg Life 01/22: Artificial Intelligence Makes for Lies in New Hampshire - Medium

Comparing Student Reactions To Lectures In Artificial Intelligence And Physics – Science 2.0

In the past two weeks I visited two schools in Veneto to engage students with the topic of Artificial Intelligence, which is something everybody seems happy to hear about these days: on the 10th of January I visited a school in Vicenza, and on the 17th a school in Venice. In both cases there were about 50-60 students, but there was a crucial difference: while the school in Venezia (the "Liceo Marco Foscarini", where I have given lectures in the past within the project called "Art and Science") was a classical lyceum and the high-schoolers who came to listen to my presentation were between 16 and 18 years old, the one in Vicenza was a middle school, and its attending students were between 11 and 13 years old. Since the contents of the lecture could withstand virtually no change - I was too busy during these first few post-Christmas weeks - giving the same talk twice was an effective way to spot differences in the reaction of the two audiences. To be honest, I approached the first event worried that the content I was presenting would be a bit overwhelming for those young kids, so maybe in hindsight the impression I got was biased by this "low expectations" attitude.

To make matters worse, because my lecture was the first in a series organized by a local academy, with the co-participation of the Comune of Vicenza, my talk had to follow speeches from the school director, the mayor of Vicenza, and a couple of other introductions - something that I was sure would further sap the young audience's stamina and willingness to listen to a frontal lecture. In fact, I was completely flabbergasted.

Not only did the middle schoolers in Vicenza follow the 80-minute-long talk I had prepared with attention and in full silence; they also interrupted a few times with witty questions (as I had begged them to do, in fact). At the end of the presentation, I was hit by a rapid succession of questions ranging over the full contents of the lecture - from artificial intelligence to particle physics, to details about the SWGO experiment, astrophysics, and what not. I counted about 20 questions and then lost track. This continued after the end of the event, when some of the students, still not completely satisfied, came to meet me and ask for more detail.

Above, a moment during the lecture in Vicenza

When I gave the same lecture in Venice, I must say I did again receive several interesting questions. But in comparison, the Foscarini teenagers were clearly a bit less enthusiastic about the topic of the lecture as a whole. Maybe my assessment comes from the bias I mentioned earlier; and in part, I have to say I have much more experience with high-schoolers than with younger students, so I knew better what to expect and was not surprised by the outcome.

This comparison seems to align with something once observed by none other than Carl Sagan. I have to thank Phil Warnell here, who, commenting on Facebook on a post I wrote there about my experience with middle schoolers, cited a piece from Sagan that is quite relevant:

I cannot but concur with what Sagan says in these two quotes. I also believe that part of the unwillingness of high-schoolers to ask questions is due to the judgment of their peers. Until we are 12 or 13, for the most part we have not yet experienced the negative feedback that being participative in school events can bring, and we do not yet fear the reaction of our friends and not-so-friendly schoolmates. That kind of experience seems to grow a shell around teenagers, making them a bit less willing to expose themselves and speak up to discuss what they did not understand, or to express enthusiasm. I think that is a bit sad, but it is of course part of the early trajectory of experiences that form us and equip us with the vaccines we will need for the rest of our lives.

See original here:
Comparing Student Reactions To Lectures In Artificial Intelligence And Physics - Science 2.0
