Archive for the ‘Artificial Intelligence’ Category

Artificial Intelligence and Cybersecurity: Key Topics at the 78th … – Mayer Brown

Recently, world leaders and key stakeholders gathered for the 78th session of the United Nations General Assembly (UNGA) to discuss global challenges with the goal of furthering peace, security, and sustainable development. A key topic of discussion was the digital revolution, focusing on the opportunities and challenges presented by artificial intelligence (AI), as well as the continued importance of strengthening global cybersecurity.

Throughout the UNGA, world leaders highlighted potential risks associated with AI. In US President Joe Biden's remarks to the UNGA, he stated that AI holds "enormous potential and enormous peril" and noted that "[t]ogether with leaders around the world, the United States is working to strengthen rules and policies so AI technologies are safe before they're released to the public." UN Secretary-General António Guterres also referred to AI as an "emerging threat[]" that requires "new innovative forms of governance, with input from experts building this technology and from those monitoring its abuses." When speaking to the Security Council in July 2023, Secretary-General Guterres had offered a similar warning, stating that AI tools can also be used by those with malicious intent, for example to target critical infrastructure or to produce disinformation, hate speech, and deepfakes, and that malfunctioning AI systems pose particular risks in contexts such as nuclear weapons and biotechnology.

World leaders called for new guardrails and governance frameworks to address these risks. Secretary-General Guterres announced the establishment of a High-Level Advisory Body on Artificial Intelligence, which will comprise government and private sector experts. This builds on Secretary-General Guterres's prior backing of an international AI watchdog body similar to the International Atomic Energy Agency. The High-Level Advisory Body on Artificial Intelligence will be tasked with analyzing and developing recommendations for the international governance of AI. An interim report on AI governance is scheduled to be released at the end of 2023, with recommendations finalized by mid-2024.

In addition to emphasizing the importance of AI governance, political leaders discussed the role of AI in accelerating the achievement of the UN's Sustainable Development Goals, which focus on addressing poverty, inequality, climate change, environmental degradation, peace, and justice. US Secretary of State Antony Blinken, along with foreign ministers and secretaries of state from Japan, Kenya, Singapore, Spain, Morocco, and the United Kingdom, met with several private sector AI developers to discuss AI's potential to advance the UN's Sustainable Development Goals. The discussion noted the importance of partnership among governments, the private sector, and other stakeholders in responsibly harnessing AI to achieve these goals.

Beyond AI, concerns about global cybersecurity continued to be a theme at this year's UNGA. For example, the US State Department led a side dialogue focused on securing cyberspace from significantly destructive attacks. Ambassador at Large for Cyberspace and Digital Policy Nathaniel C. Fick, who moderated the discussion, and Deputy Secretary of State Richard R. Verma focused on how member states could cooperate to respond to and recover from cyberattacks, and emphasized the United States' commitment to collaborating with other countries to strengthen cybersecurity. As this effort to enhance global collaboration on cybersecurity continues, nation states will need to determine the ways in which private sector entities (critical infrastructure operators and cybersecurity firms, for example) will play a role in this process.

The 78th session highlighted AI and cybersecurity as prominent global challenges, as well as important opportunities for cross-border collaboration between member states and the private sector. While these initiatives run in parallel with actions by governments in Europe, the United States, and other regions, they may also affect how individual countries approach AI and cybersecurity. Moreover, although similar discussions on AI are occurring in other international fora, such as the Organization for Economic Cooperation and Development and the G-7 (through the Hiroshima AI Process), the broad reach of the UN-led effort could allow it to have significant influence over global AI standards. Companies interested in global AI and cybersecurity policy will likely benefit from considering how the positions shared at the UNGA could inform key policy debates affecting their business in the months ahead.

See the original post:
Artificial Intelligence and Cybersecurity: Key Topics at the 78th ... - Mayer Brown

How artificial intelligence can help read mammograms Beaufort … – The Island News

October is Breast Cancer Awareness Month.

Breast cancer is the second most common cancer for women in the United States, making early detection very important. And that might be easier to do with artificial intelligence, sometimes referred to as AI. It's now being used to help doctors read mammograms.

"It's not anything that the patient would be able to see. It's something that we see on the detection side," explained Laura Dean, MD, diagnostic radiology specialist for Cleveland Clinic. "So, it's basically just an algorithm or annotations that are embedded into the patient images that we see when we're reviewing all of the imaging for the patient."

Dr. Dean said AI can help spot more subtle findings on breast imaging.

Research shows it can also help radiologists be more efficient and accurate.

She said another benefit is that artificial intelligence is constantly learning from known or proven cancers, and that information can then be applied when analyzing images.

Dr. Dean uses AI in her own practice and said there have been multiple occasions where it has detected something she couldn't see.

"I think everyone, and me included, we tend to be a little bit skeptical initially when we have a task that a computer is performing. It takes a little bit of time to learn trust, to kind of learn how to apply that to our practice," she said. "But I think it's really exciting to see how this has helped aid our detection of breast cancer. We, of course, want to find breast cancer as early as we possibly can."

In addition to self-checks at home, women are encouraged to start getting annual mammograms for breast cancer when they turn 40.

Those who are at an increased risk may need to have screening sooner.

However, it's best to talk with your physician.

Source: October 2, 2023; ccnewsservice@ccf.org

The rest is here:
How artificial intelligence can help read mammograms Beaufort ... - The Island News

AI tech boom: Is the artificial intelligence market already saturated? – Cointelegraph

From voice assistants to algorithms predicting global market trends, artificial intelligence (AI) is seeing explosive growth. But as with any emerging technology, there comes a point where innovation risks giving way to oversaturation.

The rapid proliferation of AI tools and solutions in recent months has ignited discussions among industry experts and investors alike. Are we witnessing the zenith of AI's golden age, or are we on the precipice of a market saturated beyond capacity?

The tech landscape has always been dynamic, with innovations often outpacing the market's ability to adapt.

The late 1990s saw the dot-com bubble, a period marked by exuberant optimism around internet-based companies. Startups with little more than a web presence achieved staggering valuations, only for many to crash spectacularly when the bubble burst.

In 2017, the world witnessed a surge in initial coin offerings (ICOs), a fundraising method where new cryptocurrency projects sold their underlying tokens to investors.

This period was marked by immense enthusiasm for the potential of blockchain and decentralized technologies. However, excitement often overshadowed the practicality and viability of many projects.

As a result, investments were made in ventures that either had limited real-world applications or, in some cases, no genuine ties to cryptocurrency whatsoever.

A notable example from 2017's blockchain naming trend was the company previously known as Long Island Iced Tea Corp. The company made soft drinks and had little to do with blockchain. In a bid to capitalize on the blockchain hype, it rebranded itself as Long Blockchain Corp.

Following this rebranding, the company's stock price soared, with shares rising by an astonishing 275% in just one day. This increase, despite no substantial shift in the company's business model or operations, highlighted the speculative nature of the market at the time and the lengths to which companies would go to ride the blockchain wave.

The enthusiasm was short-lived, however. According to Bitcoin.com, almost half of the projects offering ICOs in 2017 had failed by February 2018.

While the dot-com and blockchain bubbles were characterized by speculation and, at times, a lack of authentic value, the AI wave is fundamentally different.

Companies like Microsoft and Google are not just dabbling in AI; they're integrating it into products and services that millions use daily, showcasing real-world applications that are actively improving industries.

Michael Koch, co-founder and CEO of HubKonnect, an AI platform for local store marketing campaigns, told Cointelegraph:

"Google's generative AI, Google Bard, attracted over 140 million visitors in May alone, sports teams are receiving real-time analytics, and AI chatbots are becoming more time- and cost-efficient."

The allure of artificial intelligence has led to a surge in AI-driven tools, solutions and startups. According to Precedence Research, the global artificial intelligence market was valued at $454 billion in 2022 and is projected to grow to $538 billion in 2023.

Venture capital (VC) has been a significant funding source for the AI sector in 2023. Data from PitchBook indicates that generative AI startups raised over $1.7 billion in Q1 of 2023, with an additional $10.7 billion worth of deals announced that were not yet completed.

Some of the most notable raises included Google-backed Anthropic, which secured $450 million at a reported $5 billion valuation. Builder.AI raised $250 million, and Mistral AI managed to raise $113 million without a product or even a proof-of-concept. With venture capital being poured into these AI startups, one can draw some similarities to the ICO bust: in that situation, there was also a lot of hype without actual use cases or proof of viability. However, what distinguishes AI is its multitude of use cases and real-life examples of success. Take, for instance, ChatGPT, which reached 100 million users in just two months, demonstrating AI's tangible impact.

Yet, with this rapid growth and these high valuations, some feel the AI market is overheating. JPMorgan's chief markets strategist, Marko Kolanovic, believes the AI market is near its saturation point. As reported by Forbes, Kolanovic said the recent market uptick is the result of an AI-driven bubble, and that the hype around the technology was driven by the popularization of chatbots that often fail at basic questions, rather than by AI-powered earnings growth.

Leif-Nissen Lundbæk, founder and CEO of generative AI company Xayn, has a contrasting view and believes we are only at the tip of the iceberg. He told Cointelegraph:

The sheer volume of companies entering the AI space has raised concerns about a potentially saturated market. Companies worldwide are now utilizing AI as part of their core functionalities. From 10Web's no-code website builder to RainbowAI's weather app, and from ICarbonX's AI providing personalized health analyses to SherpaAI's virtual personal assistant, the stage has been set for countless others to follow suit.

Lundbæk recognizes that the influx of new companies could lead to the market becoming saturated in some areas but does not see it as a pertinent issue, stating, "The business-to-customer market is perhaps a bit more saturated but has not yet reached full capacity, while the business-to-business market is only in its infancy, even though AI has been around for a while. The vast majority of corporations are only using AI or machine learning for a few visible projects, if at all, that are easier to implement with lower risk, but aren't applying it yet on a large scale."

Koch says that the influx of newcomers might give the illusion of an oversaturated AI market, but he views initial saturation as a necessary phase to foster future advancements.

He stated: "AI will never be saturated because we are only on the first off-ramp of the AI super highway. It seems saturated because people from other industries are trying to step into the space, but when it comes down to innovation, there's already a select group of companies that are so far ahead and that have been in the AI space for decades. To be able to drive innovation forward, saturation will arise at a basic level, but there are elite players and companies that are leading the future of AI."

The rapid growth, high valuations and influx of new entrants into the AI realm have sparked debates about market saturation. Historical tech bubbles, such as the dot-com era and the blockchain hype, serve as reminders of the potential repercussions of unchecked growth and speculation.

However, the depth of AI's potential is far from fully realized. The technology's tangible impact speaks to its practical and transformative nature.

It's evident that the AI market is multifaceted. As with any burgeoning technology, the challenge is to strike a balance between rapid growth and sustainable development.

See original here:
AI tech boom: Is the artificial intelligence market already saturated? - Cointelegraph

Lies, Damn Lies, and Generative Artificial Intelligence: How GAI … – Public Knowledge

By Lisa Macpherson, August 7, 2023

Generative artificial intelligence (AI) has exploded into popular consciousness since the release of ChatGPT to the general public for testing in November 2022. The term refers to machine learning systems that can be used to create new content in response to human prompts after being trained on vast amounts of data. Outputs of generative artificial intelligence may include audio (e.g., Amazon Polly and Murf.AI), code (e.g., CoPilot), images (e.g., Stable Diffusion, Midjourney, and Dall-E), text (e.g., ChatGPT, Llama), and videos (e.g., Synthesia). As has been the case for many advances in science and technology, we're hearing from all sides about the short- and long-term risks as well as the societal and economic benefits of these capabilities.

In this post, we'll discuss the specific risk that broad use of generative artificial intelligence systems will further distort the integrity of our news environment through the creation and spread of false information. We'll also discuss a range of solutions that have been proposed to protect the integrity of our information environment.

Highlighting the Risks of Generative AI for Disinformation

Generative artificial intelligence systems can compound the existing challenges in our information environment in at least three ways: by increasing the number of parties that can create credible disinformation narratives, by making those narratives less expensive to create, and by making them more difficult to detect. If social media made it cheaper and easier to spread disinformation, generative AI will now make it easier to produce. And traditional cues that alert researchers to false information, like language and syntax issues and cultural gaffes in foreign intelligence operations, will be missing.

ChatGPT, the consumer-facing application of the generative pre-trained transformer (GPT), has already been described as "the most powerful tool for spreading misinformation that has ever been on the internet." Researchers at OpenAI, ChatGPT's parent company, have conveyed their own concerns that their systems could be misused by malicious actors motivated by the pursuit of monetary gain, a particular political agenda, and/or a desire to create chaos or confusion. Image generators, like Stability AI's Stable Diffusion, create such realistic images that they may undermine the classic entreaty to "believe your own eyes" in order to determine what is true and what is not.

This isn't just about "hallucinations," which occur when a generative model puts out factually incorrect or nonsensical information. Researchers have already proven that bad actors can use machine-generated propaganda to sway opinions. The impact of generative models on our information environment can be cumulative: researchers are finding that the use of content from large language models to train other models pollutes the information environment and results in content that is further and further from reality. It all adds a scary new twist to the classic description of the internet as "five websites, each consisting of screenshots of text from the other four." What if all those websites were actually training each other on false information, then feeding it to us?

These risks have already created momentum among policymakers to regulate generative AI. The Federal Trade Commission recently demanded that OpenAI provide detailed descriptions of all complaints it has received of its products making "false, misleading, disparaging or harmful" statements about people. The White House, House, and Senate are holding hearings or calling for comments about the risks of generative AI in order to steer potential policy interventions. Legislators have called for content authenticity standards; notifications to users when generative AI is used to create content; impact and risk assessments; and certification of high-impact AI systems. And inevitably, we've already heard "generative AI" and "Section 230" used together in a sentence. (Our position is that the large language models associated with generative AI do not enjoy Section 230 protections.)

So what should we do? It's already clear that a range of solutions will be both desirable and necessary in order to protect the integrity of our information environment and help restore trust in institutions, but (spoiler alert) few of them pertain specifically to disinformation generated by AI.

Technical Solutions

The explosion of focus on generative AI has ignited a parallel explosion in technological solutions to track digital provenance and ensure content authenticity, that is, tools to help detect what content is created with AI. These tools, some of which come from the creators of AI systems, can be applied in different places on the value chain. For example, Adobe's Firefly generative technology, which will be integrated into Google's Bard chatbot, attaches "nutrition labels" to the content it produces, including the date an image was made and the digital tools used to create it. The Coalition for Content Provenance and Authenticity, a consortium of major technology, media, and consumer products companies, has launched an interoperable verification standard for certifying the source and history (that is, provenance) of media content. Various systems for so-called digital watermarking (modifications of generated text or media that are invisible to people but can be detected algorithmically, often using cryptographic techniques) have also been proposed. Several companies, including Meta for its new Llama 2 product, encourage the use of classifiers that detect and filter outputs based on the meaning conveyed by the words chosen. An alternative technical approach to detect inauthentic content that can be used downstream is the use of digital forensics tactics, like tracking the network or device address or conducting reverse image searches for content that has already been posted and shared.
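
To make the watermarking idea a little more concrete, here is a minimal, purely illustrative sketch of how a statistical text watermark might be checked. It is not any vendor's actual scheme: real systems bias the model's token probabilities at generation time, and the secret key, whitespace tokenization, and 50% threshold below are assumptions chosen only for illustration.

```python
import hashlib

# Illustrative sketch only: real watermarking schemes operate on model token
# probabilities during generation. This toy detector just measures how often
# adjacent word pairs fall into a keyed, pseudo-random "green" half of hash space.

def is_green(prev_token: str, token: str, key: str = "shared-secret") -> bool:
    """A pair counts as 'green' if a keyed hash lands in the lower half of the hash space."""
    digest = hashlib.sha256(f"{key}|{prev_token}|{token}".encode()).digest()
    return digest[0] < 128  # roughly 50% of pairs by chance

def green_fraction(text: str, key: str = "shared-secret") -> float:
    """Fraction of adjacent token pairs that are 'green'.

    Text from a generator nudged to prefer green tokens should score well above 0.5;
    ordinary human-written text should hover near 0.5.
    """
    tokens = text.split()
    pairs = list(zip(tokens, tokens[1:]))
    if not pairs:
        return 0.0
    return sum(is_green(p, t, key) for p, t in pairs) / len(pairs)

if __name__ == "__main__":
    sample = "The quick brown fox jumps over the lazy dog"
    print(f"green fraction: {green_fraction(sample):.2f}")  # around 0.5 for unwatermarked text
```

The design point the sketch illustrates is that detection requires the same key used at generation time, which is also why opt-in schemes offer little protection against actors who simply use systems without watermarks.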

While each of these solutions has its own strengths and weaknesses, even in aggregate they are imperfect and may be outpaced by developments in the technology itself. Early tools, like OpenAI's own classifier, have already been retired because of their low rate of accuracy. Opt-in standards won't be adopted by bad actors; in fact, bad actors may copy, resave, shrink, or crop images, which obscures the signals that AI detectors rely on. Bad actors may also favor earlier, more basic versions of generative AI systems that lack the protections of newer versions. Like the content moderation systems of the dominant platforms, most of the detectors currently struggle with writing that is not in English and can sustain or amplify moderation bias against marginalized groups. In another parallel to content moderation, development of classifier systems can take a heavy toll on human workers. In short, it is unlikely these tools would win a technological arms race with motivated generators of disinformation. And some of these methods raise concerns that they may encourage platforms to detect and moderate certain forms of content too aggressively, threatening free expression.

Content Moderation Solutions

Another range of solutions has to do with how downstream companies, such as search engines and social media platforms, moderate content created by generative AI. Most of their approaches are really extensions of their existing strategies to mitigate disinformation. These include using fact-checking partnerships to verify the veracity of content; labeling problematic content as a means of adding friction to sharing; downranking content from repeat offenders; upranking trusted sources of information; and fingerprinting and sharing of known AI-created content across platforms (similar to processes that already exist for fingerprinting non-consensual intimate images and child sexual abuse materials). In their efforts to avoid partisan debates about censorship and bias, several of the major platforms have also shifted their emphasis from the content of posts to account and behavioral signals, like detecting networks of accounts that amplify each other's messages, large groups of accounts that are created at the same time, and hashtag flooding.
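
As a rough illustration of the fingerprint-sharing idea (not how any particular platform program actually works), the sketch below hashes a piece of media and checks it against a shared set of known fingerprints. Production systems typically use perceptual hashes that tolerate resizing or re-encoding; the exact cryptographic hash used here, and all the names in the example, are assumptions for illustration and only catch byte-identical copies.

```python
import hashlib

# Toy illustration of hash-based fingerprint matching across platforms.
# Real hash-sharing programs rely on perceptual hashes that survive resizing or
# re-encoding; an exact SHA-256 digest only catches byte-identical copies.

def fingerprint(content: bytes) -> str:
    """Return the hex digest used as a shareable fingerprint for a piece of media."""
    return hashlib.sha256(content).hexdigest()

def is_known(content: bytes, shared_hashes: set[str]) -> bool:
    """Check an upload against a hash set contributed by participating platforms."""
    return fingerprint(content) in shared_hashes

if __name__ == "__main__":
    shared_hashes = {fingerprint(b"previously flagged synthetic image bytes")}
    print(is_known(b"previously flagged synthetic image bytes", shared_hashes))  # True
    print(is_known(b"a slightly edited copy", shared_hashes))  # False: exact hashing misses edits
```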

All of these methods may be helpful if lower cost, higher volume, and more difficult detection are the hallmarks of generative AI in disinformation. Platforms may also use risk assessments to determine where the potential harms are severe enough to warrant specific policies related to AI-generated content. (Elections and public health information are the most prevalent examples; when the stakes are that high, it may warrant prohibitions on certain uses of generative AI or manipulated media.) They could add information about AI-generated content (such as its prevalence, or the type moderated) to existing transparency reports. We would also favor policies that call for more accountability, including legal liability, for paid advertising. We don't have the same concerns about over-moderation of commercial speech.

But all these methods carry the same limits and risks as they do for other forms of content. That includes the risk of over-moderation, which invariably has a particular impact on marginalized communities. As generative AI comes into broader use, users may actually be posting content that is beneficial and entertaining, making strict moderation policies by search and social media platforms undesirable as well as legally problematic. Even when strict policies and enforcement are warranted, their value depends on platforms' willingness and ability to enforce them, including in languages other than English. Do we really want platforms to be the main line of defense against harmful narratives of disinformation, given the platforms' history, including on topics of enormous public importance like COVID-19 and elections?

AI Industry Self-Regulation

Until or unless there are government regulations, the field of AI will be governed largely by the ethical frameworks, codes, and practices of its developers and users. (There are exceptions, such as when AI systems have outcomes that are discriminatory.) Virtually every AI developer has articulated its own principles for responsible AI development. These principles may encompass each stage of the product development process, from pretraining and training of data sets to setting boundaries for outputs, and incorporate principles like privacy and security, equity and inclusion, and transparency. Developers also articulate use policies that ostensibly govern what users can generate. For example, OpenAI's usage policies disallow disinformation, as well as hateful, harassing, or violent content and coordinated inauthentic behavior, among other things.

But these policies, no matter how well-intentioned, have significant limits. For example, researchers recently found that the guardrails of both closed systems, like ChatGPT, and open-sourced systems, like Meta's Llama 2 product, can be coaxed into generating biased, false, and violative responses. And, as in every other industry, voluntary standards and self-regulation are subject to daily trade-offs with growth and profit motives. This will be the case even when voluntary standards are agreed to collectively (as with a new industry-led body to develop safety standards) or secured by the White House (as with a new set of commitments announced last week). For the most part, we're talking about the same companies, even some of the same people, whose voluntary standards have proven insufficient to safeguard our privacy, moderate content that threatens democracy, ensure equitable outcomes, and prohibit harassment and hate speech.

Regulatory Solutions

Any discussion of how to regulate disinformation in the United States (no matter how virulent, and no matter how it's created) is bounded by the simple fact that most of it is constitutionally protected speech. Regardless, policymakers are actively exploring whether, or how, to regulate generative (and other) AI. New research shows public support for the federal government taking steps to restrict false information and extremely violent content online. In Public Knowledge's view: proceed with caution. While there may be room and precedent for content standards for the most destructive "lawful but awful" disinformation (such as networked disinformation that threatens national security and public health and safety), in general user speech is protected speech and free expression values are paramount.

One framework, which begins by comparing AI to nuclear weapons, is grounded in the idea of incremental regulation; that is, regulation that recognizes and accounts for a breadth of use cases and potential benefits as well as harms. It encourages us to focus on applications of the technology, not bans or restrictions on the technology itself. Every sector and use case comes with its own set of ethical dilemmas, technical complexities, stakeholders and policy challenges, and potential transformational benefits from AI. For example, in the case of disinformation, Public Knowledge advocates for solutions that address the harms associated with disinformation, whether they originate with generative AI, Photoshop, troll farms, or your uncle Frank. The resulting policy solutions would encompass things like requirements for risk assessment frameworks and mitigation strategies; transparency on algorithmic decision-making and its outcomes; access to data for qualified researchers; guarantees of due process in content moderation; impact assessments that show how algorithmic systems perform against tests for bias; and enforcement of accountability for the platforms' business models (e.g., paid advertising).

We also need to account for the rapidity of innovation in this sector. One solution that Public Knowledge has favored is an expert and dedicated administrative agency for digital platforms. A dedicated agency should have the authority to conduct oversight and auditing of AI and other algorithmic decision-making products in order to protect consumers and promote civic discourse and democracy. But such an agency should also have broader authorities, including to enhance competition and empower the public to choose platforms and services whose policies align with their values. Data privacy protections are also relevant here, as they would disallow the customization and targeting of content that can make disinformation narratives so potent and so polarizing. But let's implement protections that cover all the data collection, exploitation, and surveillance uses we've discussed for so many years.

The Best Time To Act

To paraphrase an old expression, the best time to act to protect the integrity of our information environment was, well, in 2016; but the second-best time is now. There's been a lot of freaking out about the heightened risks of disinformation due to generative AI as the United States and 49 other countries enter another election cycle in 2024. But generative AI is only one of the new threats in our information environment.

Virtually all of the major platforms have rolled back disinformation policies and protections before the 2024 election cycle. A U.S. District Court judge recently issued a ruling and preliminary injunction limiting contact between Biden administration officials and social media platforms over certain online content, even content relating to national security and public health and safety. There is a powerful new counter-narrative in Congress and the judicial system about the government's role in content moderation and its equation with censorship. Social media platforms, and media in general, seem to be fragmenting. This could be good or bad: Will the popularity of alternative, sometimes highly partisan, platforms send the conspiracy theorists back underground, made less dangerous because they are less able to find one another, connect, and communicate? Could more cohesive online communities with more in common increase the civility of these platforms? Or will the end of a few dominant digital gatekeepers mean even greater sequestering and polarization? And what happens if Twitter, or X, does implode like the Titan submersible, and its wonky, highly influential user base of journalists, politicians, and experts disbands and can't find one another to connect the dots on world events?

It will take a whole-of-society approach to restore trust in our information environment, and we need to accelerate solutions that have already been proposed. We favor solutions that equip civil society to identify false information and allow all Americans to make informed choices about what information they share. We should enable research into how disinformation is seeded and spread and how to counteract it. Policymakers should create incentives for the technology platforms to change their policies and product design, and they should foster more competition and choice among media outlets. Civil society should convene stakeholders, including from the communities most impacted by misinformation, to research and design solutions, all while protecting privacy and freedom of expression. And we should use policy to address the collapse of local news, since it has opened information voids that disinformation rushes in to fill.

Let's not waste a crisis, even if it's a false one. Let's focus the explosion of attention on generative AI and its threats to democracy into productive solutions to the challenges and harms of disinformation we've been facing for years.

Excerpt from:
Lies, Damn Lies, and Generative Artificial Intelligence: How GAI ... - Public Knowledge

Artificial Intelligence Will Drive Evolution, Not Extinction, for … – Wealth Management

Artificial Intelligence, which has been advancing for decades, has exploded onto the scene recently, creating exciting and extraordinary use cases in every field from healthcare to manufacturing.

Just look at ChatGPT: The popular chatbot from OpenAI that jumpstarted the current AI conversation is estimated to have had 100 million monthly active users just two months after its launch last November, making it the fastest-growing consumer application in history.

At the same time, AI has quickly become one of the most innovative technologies of the 21st century, with the potential to both enhance and disrupt major industries, including wealth management. And while AI will have many time- and cost-saving uses for the broader financial services sector, what does it mean for the future of financial advisors?

Every five to 10 years, a new technology grabs headlines and stirs up all types of dire predictions about the financial advice business. Most of these prognostications have revolved around the imminent disintermediation of financial advisors from their clients.

They never came to pass.

Many fintech innovations have greatly accelerated the evolution of financial services over the past 20 years, and I fully expect AI to provide even more fuel to increase the speed of our industry's evolution.

However, I believe the need for personal, face-to-face advice (even if it's over Zoom) remains as strong today as ever and will continue to be in strong demand going forward.

Technology Doesn't Always Fulfill Its Promise

Let's start with the fact that there are many examples of innovative technologies that were expected to fundamentally transform multiple industry sectors but never did.

A few years ago, I wrote about how wealthtech entrepreneurs and fintech founders should take some lessons from the failure of Google Glass. Does anybody remember how that specific technology was supposed to revolutionize the way we interact with each other and the world around us? And yet, that never happened.

The basic failure was that no one asked for the innovative product. Not only were the glasses aesthetically displeasing, but more to the point, the whole concept of individuals recording everything they see was a bit creepy.

Here's another example: Remember Segways? The high-tech scooter had such an exciting futuristic promise that none other than Steve Jobs said it was a technology that would do better than the personal computer. But in reality, a top speed of 13 mph and the need to keep your balance meant it never caught on with the public at large.

Going back to the dot-com era, for every success, it's easy to find cautionary tales of start-ups jumping on a transformative new technology, raising huge sums of capital, and never living up to their hype as customers stayed away in droves.

And we'll have to keep an eye on crypto to see where we go from here. While there definitely seems to be a future for blockchain technology, the proliferation of cryptocurrencies, NFTs, and other digital assets has gone too far, too fast. Despite their promise, crypto doesn't need to be a significant allocation in every investor's portfolio, as the true believers kept exclaiming just a few short years ago. In fact, it's safe to say financial advisors will not lose the next generation of clients if they take a slow-walk approach to crypto right now.

The Robo Advisor Revolution That Wasn't

One of the more recent innovations that was going to mark the end of the financial advisor was the robo advisor. Back in 2015, it was said these automated platforms would completely supplant human advisors within five years.

While it has been shown there is a client type for robo advisors, many investors, especially those with higher assets and more complex financial lives, do not want to trust the assets they've spent a lifetime building to a faceless algorithm.

It's just too far of a leap. Instead, robo advisors have been incorporated into larger wealth management organizations' tech stacks to increase scalability, especially for smaller accounts.

Evolution Versus Disruption

All of which brings us to the present-day prophecies about what AI means for wealth management.

In the past few months, we've seen diametrically opposed headlines about the impact of generative learning and AI. On one hand, we're told it will permeate every aspect of our lives and make the world a better place. The opposite view (one held by leading researchers in the space) predicts the proliferation of AI may result in human extinction.

Regardless of your view, it is clear that AI will continue to expand and impact our lives. It seems to me there will be both benefits and risks for the wealth management industry.

There has often been a fear-driven response from the broader industry whenever a meaningful new technology emerges. But the reality is that people are not going to be completely replaced by AI due to current cultural expectations of service.

Client-facing advisors and support staff will continue to be important to the investing public. Although some tech-savvy consumers are willing to discuss their financial security with a chatbot, I believe most high-net-worth and ultra-high-net-worth individuals and families will want a person supporting their accounts.

Where AI will potentially have a significant impact is on back- and middle-office solutions, driven by the need for greater efficiencies in the face of escalating regulatory complexities, rising technology costs, and margin compression. In due course, you will see AI as an extension of the broader toolkit advisors have at their disposal to provide an enhanced, yet highly scalable, client experience.

As with past successful technology innovations (including the desktop computer, portfolio management and client service software/platforms, the internet, and cell phones), AI will ultimately prove to be a substantial benefit to independent advisors by improving the service experience for existing clients while scaling their practices to support larger client bases.

For wealth management firms, AI-led efficiency and scalability will be of even greater importance as they deal with an aging advisor population leaving the business just as the generational transfer of wealth creates higher demand for services.

Wealth Management Remains Relationship-Driven

At its core, wealth management remains a relationship-driven business. Clients entrust their assets and financial futures to people after a level of confidence has been built and a sense of reliability established.

Regardless of how much generative learning an AI bot might have, it cannot ask all the right questions, pick up on non-verbal cues or interpret any number of intangible signals. It takes an experienced financial professional with a keen understanding of behavioral finance to be a successful advisor.

Unlike buying the latest gadget that may or may not deliver on its promise, wealth management is much more consequential for individuals and families.

And so long as AI is utilized for evolutionary (versus revolutionary) change, it will be embraced by firms, advisors and clients alike.

Adam Malamed is CEO of Sanctuary Wealth.

Visit link:
Artificial Intelligence Will Drive Evolution, Not Extinction, for ... - Wealth Management