Archive for the ‘Artificial Intelligence’ Category

What is safe artificial intelligence in mortgage lending? – National Mortgage News

If you have people making decisions from data, you probably need artificial intelligence, but not the kind that's making headlines right now.

The sensational headlines this week are about generative chatbots, programs like ChatGPT that carry on natural-sounding conversations in (written) English. They're amazingly lifelike and seem to be thinking for themselves. But the things they say are often false, and even when telling the truth, they can't tell you where they got their information. They're working from large tables of how words are commonly used, not information about the outside world. So despite the "wow" factor, they're not, by themselves, the right tool for anything in mortgage lending that I can see.

Chatbots do have their uses. You might want to have a web page that takes customers' questions in plain English and answers them. Generative technology can be useful on the input side, for recognizing different ways of wording a question, but the answers have to be controlled. When a customer asks for his loan balance, the chatbot must actually look up the balance, not just make up something that uses words in a plausible way. Even if the computer misunderstands the question, it must not spout falsehoods.

But chatbots are just one tiny part of AI. They are one application of machine learning, which itself is still not the whole of AI, but let's look at that next.

Machine learning means getting software to recognize patterns and train itself from data. Machine learning is very useful for finding statistical regularities and estimating probabilities. It is basically statistical regression, greatly expanded into many dimensions. Neural networks are one kind of machine learning, and they are multi-layer statistical models, not models of brains.
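The "regression in many dimensions" idea can be sketched with a minimal logistic model that turns several borrower features into a default probability. The features, weights, and threshold below are invented assumptions for illustration, not a real credit model.

```python
import math

# Illustrative, hand-set weights: a real model would fit these from data.
WEIGHTS = {"debt_to_income": 3.0, "late_payments": 0.8, "years_employed": -0.2}
BIAS = -2.0

def default_probability(features):
    """Weighted sum of features pushed through the logistic function,
    yielding a probability between 0 and 1, not a certainty."""
    z = BIAS + sum(WEIGHTS[name] * value for name, value in features.items())
    return 1.0 / (1.0 + math.exp(-z))

borrower = {"debt_to_income": 0.45, "late_payments": 1, "years_employed": 6}
p = default_probability(borrower)
print(f"Estimated default probability: {p:.2f}")
```

A neural network generalizes this by stacking many such weighted layers, but the output is still a statistical estimate of the same kind.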

The results of machine learning are only probable, not certain. You have to be ready to live with inaccuracy. Fortunately, people recognize that the answers aren't coming from a conscious human mind, and it's easier for humans to be cautious. Machine learning will tell you whether a borrower is probably a good risk. It will not tell you for certain exactly what that borrower will do. That is easy to understand, and useful.

Apart from inaccuracy, the big risk with machine learning is that it will learn the wrong things, specifically discriminatory decisioning. If you tell a computer to find patterns, it will find them, whether or not they are patterns society wants to perpetuate. If the data used to train a machine learning model reflects historic racial bias, it may discover this and perpetuate it in its predictions. It has no way to know you don't want it to use that knowledge. It might even detect race indirectly, from location (old-fashioned illegal redlining), or choice of hairdressers, or anything else.

How strongly you guard against this depends on what you are using machine learning for. If you're just plotting an advertising strategy or making predictions internally, the prejudiced computer may not violate laws or regulations, but if it's making decisions about people, it certainly will. The cure is to block inappropriate information from being used, so the machine is only learning from data you're entitled to use, and also to test the results to see if the system is in fact biased. You usually cannot look at the machine learning system to find out what it learned, because the patterns are hidden in matrices of numbers.
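Testing the results for bias can be done without looking inside the model at all. One common heuristic is the "four-fifths rule": flag the system if any group's approval rate falls below 80% of the highest group's rate. The decisions below are hypothetical data, and this is only one of several fairness tests used in practice.

```python
def approval_rate(decisions):
    """Fraction of applicants approved (1 = approved, 0 = denied)."""
    return sum(decisions) / len(decisions)

def passes_four_fifths(decisions_by_group, threshold=0.8):
    """Return (passes, rates): fails if any group's approval rate is
    below `threshold` times the highest group's approval rate."""
    rates = {g: approval_rate(d) for g, d in decisions_by_group.items()}
    highest = max(rates.values())
    return all(r >= threshold * highest for r in rates.values()), rates

# Hypothetical model outputs for two applicant groups
decisions = {
    "group_a": [1, 1, 1, 0, 1, 1, 1, 1, 0, 1],  # 80% approved
    "group_b": [1, 0, 0, 1, 0, 1, 0, 0, 1, 0],  # 40% approved
}
ok, rates = passes_four_fifths(decisions)
print(rates, "passes:", ok)
```

Because the test only consumes the model's decisions, it works even when the learned patterns themselves are buried in matrices of numbers.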

But even that isn't all of AI. Traditionally, AI comprises all uses of computers that are based on the study of human thought. That includes some technologies that are not in today's limelight but are very applicable to finance. They revolve around knowledge-based systems and explicit rules for reasoning.

One time-honored method is knowledge engineering: Get a human expert, such as a loan underwriter, to work through a lot of examples and tell you how to analyze them. Then write a computer program that does the same thing, and refine it, with help both from the human expert and from statistical tests. The result is likely to be a rule-based, knowledge-based system, using well-established techniques to reason from explicit knowledge. And it can well be more accurate and reliable than the human expert because it never forgets anything. On the other hand, unlike the human expert, it knows nothing that was not built into it.
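A knowledge-engineered system can be sketched as a short list of explicit rules elicited from an expert, applied deterministically. The rule names and thresholds below are invented for illustration; a real underwriting system would encode far more rules, refined against test cases.

```python
# Explicit rules an expert underwriter might articulate (thresholds invented).
RULES = [
    ("debt-to-income above 43%",  lambda a: a["dti"] > 0.43),
    ("credit score below 620",    lambda a: a["credit_score"] < 620),
    ("loan exceeds 97% of value", lambda a: a["loan_to_value"] > 0.97),
]

def underwrite(applicant):
    """Return (decision, reasons): every rule that fires becomes a stated
    reason, so the decision is fully explainable, unlike a black-box model."""
    reasons = [name for name, rule in RULES if rule(applicant)]
    return ("deny" if reasons else "approve"), reasons

decision, reasons = underwrite(
    {"dti": 0.50, "credit_score": 700, "loan_to_value": 0.80}
)
print(decision, reasons)
```

The payoff is transparency: the system can always say exactly which rule drove a decision, which is precisely what hidden matrices of learned weights cannot do.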

Knowledge engineering mixes well with machine learning approaches that output understandable rules, such as decision trees. There are also ways to probe a machine learning system to extract explicit knowledge from it; this is called explainable AI (XAI).

Of course, knowledge-based systems face a pitfall of their own that we recognized long ago: "As soon as it works reliably, it's no longer called AI!" But we're in business to make good decisions, not to impress people with magic.

Here is the original post:
What is safe artificial intelligence in mortgage lending? - National Mortgage News

Using an Artificial Intelligence Interface to Plan a Disney Cruise – DCL Fan

Avid Disney vacation planners know that using technology to help plan a Disney vacation is as old as tech itself. When the internet still had that new car smell, we Disney planners flocked to Disney's fledgling website and independent chat groups to help craft magical vacations.

Many decades later, we still utilize online resources and discussion boards, like DISboards, but today we have social media, vlogs, podcasts, and even TikToks. This brings me to the new kid on the planning block: Artificial Intelligence.

ChatGPT is an online platform that allows almost anyone with an internet connection to have a chat dialogue with an artificial intelligence bot. But let's let ChatGPT answer the question.

You will find pros and cons of utilizing artificial intelligence like ChatGPT. So I thought, why not begin my AI adventure exploring a subject with which I am extremely familiar: Disney Cruise Line.

Below you will see screenshots of questions I have asked, along with the answers generated by ChatGPT.

If you can use a search engine, you can use ChatGPT. I simply asked, "What is Rotational Dining on Disney Cruise Line?" and this is the answer provided.

I also asked, "What Port Adventures are available on Disney's Castaway Cay?"

So far, so good, but these are fairly simple questions.

ChatGPT provides users with answers that are conversational and authoritative, meaning it really thinks it knows what it is talking about. However, if you ask ChatGPT, it will sometimes admit the answers provided are simply wrong.

For example, I asked ChatGPT, "How much should I tip on Disney Cruise Line?" and this is the response given.

Current gratuities charged are $14.50 per guest per day. Also, bar, beverage, and spa services gratuities are actually 18%, not 15%. It is a small detail, but one you wouldn't catch unless you had previously sailed with Disney Cruise Line or had already done your research.

ChatGPT does offer a feedback option. You can utilize the Thumbs Up or Thumbs Down icons at the top of your answer, or you can provide the correct information in the chat field. That does not mean the answer will be corrected the next time you ask. It only means that the artificial intelligence will group your response with the information it has already gathered from various sources.

For me, the best feature of ChatGPT is that I can ask the program a question as if I were talking with a person. You can ask for ideas like:

This AI is aware that it has limitations. For vacation planners, this tool can help spark ideas on what activities are available at destinations worldwide. However, it cannot provide real-time travel information, nor can it give you travel quotes or help you book travel.

That last suggestion about contacting a travel agent is good advice. Let's see what ChatGPT has to say about that.

Visit the human agents over at Dreams Unlimited Travel, the official sponsor of DCL Fan, to request your free, no-obligation quote from one of our experienced Disney Cruise Line travel planners who will be happy to assist you when planning a Disney Cruise Line vacation for the humans in your traveling party.

Melanie is the mom of three young adults. She is a native Floridian who now lives in North Carolina. She is a Gold Castaway Club Member who has sailed on all four of the current ships at least once and is ready to set sail on the Disney Wish this fall.

Here is the original post:
Using an Artificial Intelligence Interface to Plan a Disney Cruise - DCL Fan

Artificial Intelligence: Key Practices to Help Ensure Accountability in … – Government Accountability Office

What GAO Found

Artificial intelligence (AI) is evolving at a rapid pace and the federal government cannot afford to be reactive to its complexities, risks, and societal consequences. Federal guidance has focused on ensuring AI is responsible, equitable, traceable, reliable, and governable. Third-party assessments and audits are important to achieving these goals. However, a critical mass of workforce expertise is needed to enable federal agencies to accelerate the delivery and adoption of AI.

Participants in an October 2021 roundtable convened by GAO discussed agencies' needs for digital services staff, the types of work that a more technical workforce could execute in areas such as artificial intelligence, and challenges associated with current hiring methods. They noted such staff would require a variety of digital and government-related skills. Participants also discussed challenges associated with existing policies, infrastructure, laws, and regulations that may hinder agency recruitment and retention of digital services staff.

During a September 2020 Comptroller General Forum on AI, experts discussed approaches to ensure federal workers have the skills and expertise needed for AI implementation. Experts also discussed how principles and frameworks on the use of AI can be operationalized into practices for managers and supervisors of these systems, as well as third-party assessors. Following the forum, GAO developed an AI Accountability Framework of key practices to help ensure responsible AI use by federal agencies and other entities involved in AI systems. The Framework is organized around four complementary principles: governance, data, performance, and monitoring.

Artificial Intelligence (AI) Accountability Framework

To help managers ensure accountability and the responsible use of AI in government programs and processes, GAO has developed an AI Accountability Framework. Separately, GAO has identified mission-critical gaps in federal workforce skills and expertise in science and technology as high-risk areas since 2001.

This testimony summarizes two related reports: GAO-22-105388 and GAO-21-519SP. The first report addresses the digital skills needed to modernize the federal government. The second report describes discussions by experts on the types of risks and challenges in applying AI systems in the public sector.

To develop the June 2021 AI Framework, GAO convened a Comptroller General Forum in September 2020 with AI experts from across the federal government, industry, and nonprofit sectors. The Framework was informed by an extensive literature review, and the key practices were independently validated by program officials and subject matter experts.

For the November 2021 report on digital workforce skills, GAO convened a roundtable discussion in October 2021 comprised of chief technology officers, chief data officers, and chief information officers, among others. Participants discussed ways to develop a dedicated talent pool to help meet the federal government's needs for digital expertise.

For more information, contact Taka Ariga at (202) 512-6888 or arigat@gao.gov.

Excerpt from:
Artificial Intelligence: Key Practices to Help Ensure Accountability in ... - Government Accountability Office

Carleton Experts Available: Artificial Intelligence | Carleton Newsroom – Carleton Newsroom

Carleton University experts are available to discuss artificial intelligence (AI).

If you are interested in speaking with the experts below, please feel free to reach out directly. If you require other assistance, please email Steven Reid, Media Relations Officer, at steven.reid3@carleton.ca.

Mohamed Al Guindy, Professor of Finance, Sprott School of Business at Carleton University

Email: mohamed.alguindy@carleton.ca

Al Guindy's research focuses on how technology, including artificial intelligence, affects financial markets and economics. He can also discuss AI generally. Al Guindy's research also includes a study on cryptocurrency adoption in Canada. His work has been featured in Yahoo Finance, Investment Relations Magazine, and the Harvard Law School Forum on Corporate Governance and Financial Regulation.

For more on Al Guindy: https://sprott.carleton.ca/profile/mohamed-al-guindy/

Jim Davies, Professor, Department of Cognitive Science at Carleton University

Email: jim.davies@carleton.ca

Davies is available to discuss a number of topics involving AI, including:

As director of the Science of Imagination Laboratory, Davies explores computational modelling and artificial intelligence applied to human visual imagination. His work has shown how people use visual thinking to solve problems and how they visualize imagined situations and worlds. He is co-host of the award-winning Minding the Brain podcast.

For more on Davies visit: https://carleton.ca/cognitivescience/people/davies-jim/

Ksenia Yadav, Professor, Department of Electronics at Carleton University

Email: kseniayadav@cunet.carleton.ca

Yadav is available to discuss a number of AI and machine learning (ML) related subjects, including how these new tools may enable people to solve complex and previously intractable problems across a number of fields. The United Nations has predicted that efforts to address some of the most pressing environmental, social and economic problems of our civilization will be among the biggest beneficiaries of these technologies.

She can also discuss the potential for AI and ML to be misused in various ways, including malicious attacks, misinformation, as well as propagation of biases and discrimination.

Yadav's current research involves the use of AI and ML in the design and manufacturing of electronic components. She advises on educational challenges in rapidly evolving technological fields, lowering barriers to STEM in underrepresented populations, and effective collaborations between the public and private sectors.

For more information on Yadav visit: https://carleton.ca/doe/people/ksenia-yadav/

Media Contact: Steven Reid (he/him), Media Relations Officer, Carleton University, 613-265-6613, Steven.Reid3@carleton.ca

Thursday, May 18, 2023 in Experts Available

Read more:
Carleton Experts Available: Artificial Intelligence | Carleton Newsroom - Carleton Newsroom

Artificial Intelligence Wants Your Name, Image and Likeness … – JD Supra

Innovations in artificial intelligence (AI) have made it easier than ever to replicate a person's name, image, and likeness (NIL), particularly if that person is a celebrity. AI algorithms require massive amounts of "training data" (videos, images, and soundbites) to create "deepfake" renderings of a persona in a way that feels real. The vast amount of training data available for celebrities and public figures makes them easy targets. So, how can celebrities protect their NIL from unauthorized AI uses?

The right of publicity is the primary tool for celebrity NIL protection. The right of publicity protects against unauthorized commercial exploitation of an individual's persona, from appearance and voice to signature catchphrase. Past right of publicity cases provide some context for how this doctrine could be applied to AI-generated works.

In the 1980s and 1990s, Bette Midler and Tom Waits, respectively, successfully sued over the use of sound-a-like musicians in commercial ads. The courts, as the Waits opinion put it, recognized the "right of publicity to control the use of [their] identity as embodied in [their] voice." Using the same rationale, deepfake ads and endorsements that use AI technology to replicate a celebrity's voice or appearance would similarly violate publicity rights.

Those lawsuits are just around the corner. Earlier this year, a finalist on the television show "Big Brother" filed a class action lawsuit against the developer of Reface, a subscription-based mobile application that allows users to "face-swap" with celebrities. Using traditional principles of right of publicity, the plaintiff is seeking accountability for unauthorized commercial uses of his NIL in the AI-technology space.

The right of publicity is not without limitations. First, because it is governed by state statutory and common law, protections vary by jurisdiction. California's right of publicity statute, for example, covers the use of a person's NIL in any manner, while laws in other states only protect against use of NIL in certain contexts.

In 2020, New York expanded its right of publicity laws to specifically prohibit certain deepfake content. Second, the right of publicity specifically applies to commercial uses. The doctrine might stop AI users from profiting from celebrity image in the advertising and sales context, but creative uses, like deepfake memes, parody videos, and perhaps even uses of AI-generated NIL in film and television, may fall outside the scope of the right of publicity.

The Lanham Act provides another avenue for addressing unauthorized AI-generated NIL. Section 43(a) of the Lanham Act is aimed at protecting consumers from false and misleading statements, or misrepresentations of fact, made in connection with goods and services. Like the right of publicity, courts have applied the Lanham Act in cases involving the unauthorized use of celebrity NIL to falsely suggest that the celebrity sponsors or endorses a product or service. For example, the Lanham Act applies to circumstances that imply sponsorship, including the sound-a-like cases referenced above, and in cases involving celebrity look-a-likes like White v. Samsung Electronics and Wendt v. Host Int'l Inc.

Under this framework, celebrity plaintiffs may have recourse in the event their NIL is used, for example, in deepfake sponsored social media posts, or in digitally altered ad campaigns featuring celebrity lookalikes. And, because the Lanham Act is a federal statute with nationwide applicability, it may offer greater predictability and flexibility to celebrity plaintiffs seeking redress.

AI technology also creates unique issues with enforcement and recovery. Because of the wide availability of AI technology, it can be difficult to identify the source of infringing content. Tech-savvy deepfake developers take care to avoid detection. And while deepfake content is most easily shared on social media, social media providers are immunized from liability for certain non-IP tort claims (including right of publicity claims) arising out of user-generated content under Section 230 of the Communications Decency Act.

As AI technology advances, legislators and courts will continue to face new questions about scope of persona rights, the applicability of existing legal protections, and the practicality of recovery. While AI-specific regulation may be on the horizon, existing legal frameworks can be mobilized to combat misappropriation at the intersection of celebrity NIL and emergent technology.

Here is the original post:
Artificial Intelligence Wants Your Name, Image and Likeness ... - JD Supra