Archive for the ‘Artificial Super Intelligence’ Category

Artificial Intelligence isn’t taking over anything – Talon Marks

Let's all take a deep breath and relax about all of this AI stuff, because the worries about it taking over songwriting are ridiculous.

Artists do not need to worry about AI songwriting taking over because the difference in quality is hugely noticeable.

At first, AI music seemed to be something that real songwriters should be concerned about, when somebody going by the name Ghostwriter created a song called "Heart on My Sleeve" with AI vocals of Drake and The Weeknd.

To be fair, when the song came out it left a lot of people amazed because of how close the sound was to the two artists.

The song was so good that it was even eligible for a Grammy Award, which is super impressive, but then again, the Grammys in recent years have been viewed by many as a joke due to some outlandish winners.

Since then, the AI songwriter Ghostwriter hasn't had a song blow up as big as that one, nor has there been an AI song that has come anywhere near its quality.

That's simply because it is not the real artist. No matter how good it may sound, people know it's not the actual artist, so why should we care?

Drake is releasing his newest album, For All the Dogs, on Oct. 6. Do you think that if Ghostwriter released an AI Drake album on the same day, more people would tune into that one?

Of course not, because AI music doesn't come close to the actual artist in terms of quality.

For the most part, AI music is being used in another way, and that is AI covers. That's where people take an artist, for example Juice WRLD, and make a cover of him singing "Love Yourself" by Justin Bieber.

The covers sound amazing, and they aren't harmful to the artist because the song was already released, so what should the artist be worried about?

The only real reason an artist would get concerned over an AI cover is if the cover were doing better than the actual song in terms of numbers, and that isn't going to happen.

Mainly because most covers being made are of popular songs that are already well established in terms of numbers.

As for photos, well, that may be a different story, but it's still something people shouldn't be too worried about.

You see, with AI photos, people can create a photo of anybody they want doing anything possible, and it looks real, almost too real.

The only reason this is something to look out for is that AI can generate a photo of anything, while an actual photographer has to bust their ass to get a great photo, or at least a decent one.

It is for sure something to be concerned about, but what will be the downfall of these fakes is that we have sources, including the actual person being depicted in the photo.

An AI photo editor could create a photo of a celebrity that is damaging to the celebrity's reputation, but all it takes is for the source to speak up and deny that the photo is them.

The more the photos are shown to be false, the more people will catch on to this. The same goes for the music side of AI.


Read the rest here:

Artificial Intelligence isn't taking over anything - Talon Marks

AI and You: The Chatbots Are Talking to Each Other, AI Helps … – CNET

After taking time off, I returned this week to find my inbox flooded with news about AI tools, issues, missteps and adventures. And the thing that stood out was how much investment there is in having AI chatbots pretend to be someone else.

In the case of Meta, CEO Mark Zuckerberg expanded the cast of AI characters the tech giant's more than 3 billion users can interact with on popular Meta platforms like Facebook, Instagram, Messenger and WhatsApp. Those characters are based on real-life celebrities, athletes and artists, including musician Snoop Dogg, famous person Kylie Jenner, ex-quarterback Tom Brady, tennis star Naomi Osaka, other famous person Paris Hilton and celebrated English novelist Jane Austen.

"The characters are a way for people to have fun, learn things, talk recipes or just pass the time all within the context of connecting with friends and family," company executives told The New York Times about all these pretend friends you can now converse with.

Said Zuckerberg, "People aren't going to want to interact with one single super intelligent AI; people will want to interact with a bunch of different ones."

But let's not pretend that pretend buddies are just about helping you connect with family and friends. As we know, it's all about the money, and right now tech companies are in a land grab that's currently pitting Meta against other AI juggernauts, including OpenAI's ChatGPT, Microsoft's Bing and Google's Bard. It's a point the Times noted as well: "For Meta, widespread acceptance of its new AI products could significantly increase engagement across its many apps, most of which rely on advertising to make money. More time spent in Meta's apps means more ads shown to its users."

To be sure, Meta wasn't the first to come up with the idea of creating personalities or characters to put a human face on conversational AI chatbots (see ELIZA, which was born in the mid-1960s). And it's an approach that seems to be paying off.

Two-year-old Character.ai, which lets you interact with chatbots based on famous people like Taylor Swift and Albert Einstein and fictional characters such as Nintendo's Super Mario, is one of the most visited AI sites and is reportedly seeking funding that would put the startup's valuation at $5 billion to $6 billion, according to Bloomberg. This week Character.ai, which also lets you create your own personality-driven chatbots, introduced a new feature for subscribers, called Character Group Chat, that lets you and your friends chat with multiple AI characters at the same time. (Now's your chance to add Swift and Mario to your group chats.)

But using famous people to hawk AI is only fun if those people are in on it, and by that I mean get paid for their AI avatars. Earlier this month, actor Tom Hanks warned people about a dental ad that used his likeness without his approval. "Beware!!" Hanks told his 9.5 million Instagram followers. "There's a video out there promoting some dental plan with an AI version of me. I have nothing to do with it."

Hanks in an April podcast predicted the perils posed by AI. "Right now if I wanted to, I could get together and pitch a series of seven movies that would star me in them in which I would be 32 years old from now until kingdom come. Anybody can now re-create themselves at any age they are by way of AI or deepfake technology ... I can tell you that there [are] discussions going on in all of the guilds, all of the agencies, and all of the legal firms to come up with the legal ramifications of my face and my voice and everybody else's being our intellectual property."

Of course, he was right about all those discussions. The Writers Guild of America just ended the writers strike with Hollywood after agreeing to terms on the use of AI in film and TV. But actors, represented by SAG-AFTRA, are still battling it out, with one of the sticking points being the use of "digital replicas."

Here are the other doings in AI worth your attention.

OpenAI is rolling out new voice and image capabilities in ChatGPT that let you "have a voice conversation or show ChatGPT what you're talking about." The new capabilities are available to people who pay to use the chatbot (ChatGPT Plus costs $20 per month.)

Says the company, "Snap a picture of a landmark while traveling and have a live conversation about what's interesting about it. When you're home, snap pictures of your fridge and pantry to figure out what's for dinner (and ask follow up questions for a step by step recipe). After dinner, help your child with a math problem by taking a photo, circling the problem set, and having it share hints with both of you."

So what's it like to talk to ChatGPT? Wall Street Journal reviewer Joanna Stern describes it as similar to the movie Her, in which Joaquin Phoenix falls in love with an AI operating system named Samantha, voiced by Scarlett Johansson.

"The natural voice, the conversational tone and the eloquent answers are almost indistinguishable from a human at times," Stern writes. "But you're definitely still talking to a machine. The response time ... can be extremely slow, and the connection can fail restarting the app helps. A few times it abruptly cut off the conversation (I thought only rude humans did that!)"

A rude AI? Maybe the chatbots are getting more human after all.

Speaking of more humanlike AIs, a company called Fantasy is creating "synthetic humans" for clients including Ford, Google, LG and Spotify to help them "learn about audiences, think through product concepts and even generate new ideas," reported Wired.

"Fantasy uses the kind of machine learning technology that powers chatbots like OpenAI's ChatGPT and Google's Bard to create its synthetic humans," according to Wired. "The company gives each agent dozens of characteristics drawn from ethnographic research on real people, feeding them into commercial large language models like OpenAI's GPT and Anthropic's Claude. Its agents can also be set up to have knowledge of existing product lines or businesses, so they can converse about a client's offerings."

Humans aren't cut out of the loop completely. Fantasy told Wired that for oil and gas company BP, it's created focus groups made up of both real people and synthetic humans and asked them to discuss a topic or product idea. The result? "Whereas a human may get tired of answering questions or not want to answer that many ways, a synthetic human can keep going," Roger Rohatgi, BP's global head of design, told the publication.

So, the end goal may be to just have the bots talking among themselves. But there's a hitch: Training AI characters is no easy feat. Wired spoke with Michael Bernstein, an associate professor at Stanford University who helped create a community of chatbots called Smallville, and it paraphrased him thus:

"Anyone hoping to use AI to model real humans, Bernstein says, should remember to question how faithfully language models actually mirror real behavior. Characters generated this way are not as complex or intelligent as real people and may tend to be more stereotypical and less varied than information sampled from real populations. How to make the models reflect reality more faithfully is 'still an open research question,' he says."

Deloitte updated its report on the "State of Ethics and Trust in Technology" for 2023, and you can download the 53-page report here. It's worth reading, if only as a reminder that the way AI tools and systems are developed, deployed and used is entirely up to us humans.

Deloitte's TL;DR? Organizations should "develop trustworthy and ethical principles for emerging technologies" and work collaboratively with "other businesses, government agencies, and industry leaders to create uniform, ethically robust regulations for emerging technologies."

And if they don't? Deloitte lists the damage from ethical missteps, including reputational harm, human damage and regulatory penalties. The researcher also found that financial damage and employee dissatisfaction go hand in hand. "Unethical behavior or lack of visible attention to ethics can decrease a company's ability to attract and keep talent. One study found employees of companies involved in ethical breaches lost an average of 50% in cumulative earnings over the subsequent decade compared to workers in other companies."

The researcher also found that 56% of professionals are unsure if their companies have ethical guidelines for AI use, according to a summary of the findings by CNET sister site ZDNET.

One of the challenges in removing brain tumors is for surgeons to determine how much around the margins of the tumor they need to remove to ensure they've excised all the bad stuff. It's tricky business, to say the least, because they need to strike a "delicate balance between maximizing the extent of resection and minimizing risk of neurological damage," according to a new study.

That report, published in Nature this week, offers news about a fascinating advance in tumor detection, thanks to an AI neural network. Scientists in the Netherlands developed a deep learning system called Sturgeon that aims to assist surgeons in finding that delicate balance by helping to get a detailed profile of the tumor during surgery.

You can read the Nature report, but I'll share the plain English summary provided by New York Times science writer Benjamin Mueller: "The method involves a computer scanning segments of a tumor's DNA and alighting on certain chemical modifications that can yield a detailed diagnosis of the type and even subtype of the brain tumor. That diagnosis, generated during the early stages of an hours-long surgery, can help surgeons decide how aggressively to operate."

In tests on frozen tumor samples from prior brain cancer operations, Sturgeon accurately diagnosed 45 of 50 cases within 40 minutes of starting that DNA sequencing, the Times said. And then it was tested during 25 live brain surgeries, most of which were on children, and delivered 18 correct diagnoses.

The Times noted that some brain tumors are difficult to diagnose, and that not all cancers can be diagnosed by way of the chemical modifications the new AI method analyzes. Still, it's encouraging to see what could be possible with new AI technologies as the research continues.

Given all the talk above about how AIs are being used to create pretend versions of real people (Super Mario aside), the word I'd pick for the week would be "anthropomorphism," which is about ascribing humanlike qualities to nonhuman things. But I covered that in the Aug. 19 edition of AI and You.

So instead, I offer up the Council of Europe's definition of "artificial intelligence":

A set of sciences, theories and techniques whose purpose is to reproduce by a machine the cognitive abilities of a human being. Current developments aim to be able to entrust a machine with complex tasks previously delegated to a human.

However, the term artificial intelligence is criticized by experts who distinguish between "strong" AI (able to contextualize very different specialized problems completely independently) and "weak" or "moderate" AI (which performs extremely well in its field of training). According to some experts, "strong" AI would require advances in basic research to be able to model the world as a whole, not just improvements in the performance of existing systems.

For comparison, here's the US State Department quoting the National Artificial Intelligence Act of 2020:

The term "artificial intelligence" means a machine-based system that can, for a given set of human-defined objectives, make predictions, recommendations or decisions influencing real or virtual environments.

Editors' note: CNET is using an AI engine to help create some stories. For more, see this post.

See the rest here:

AI and You: The Chatbots Are Talking to Each Other, AI Helps ... - CNET

How to Build a Chatbot Using Streamlit and Llama 2 – MUO – MakeUseOf

Llama 2 is an open-source large language model (LLM) developed by Meta. It is a competent open-source large language model, arguably better than some closed models like GPT-3.5 and PaLM 2. It consists of three pre-trained and fine-tuned generative text model sizes, including the 7 billion, 13 billion, and 70 billion parameter models.

You will explore Llama 2's conversational capabilities by building a chatbot using Streamlit and Llama 2.

How different is Llama 2 from its predecessor large language model, Llama 1?

Llama 2 significantly outperforms its predecessor in all respects. Those improvements make it a potent tool for many applications, such as chatbots, virtual assistants, and natural language comprehension.

To start building your application, you have to set up a development environment. This is to isolate your project from the existing projects on your machine.

First, start by creating a virtual environment using the Pipenv library as follows:
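For example, a minimal sketch of that step from a terminal, assuming Pipenv is already installed and you are inside the project directory:

```bash
# Create and activate a virtual environment for the project
pipenv shell
```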

Next, install the necessary libraries to build the chatbot.
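A plausible install command for those packages (python-dotenv is an assumption on my part, used later to load the .env file):

```bash
# Install the web framework, the Replicate client, and a .env loader
pipenv install streamlit replicate python-dotenv
```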

Streamlit: It is an open-source web app framework that renders machine learning and data science applications quickly.

Replicate: It is a cloud platform that provides access to large open-source machine-learning models for deployment.

To get a Replicate token key, you must first register an account on Replicate using your GitHub account.

Once you have accessed the dashboard, navigate to the Explore button and search for "Llama 2 chat" to see the llama-2-70b-chat model.

Click on the llama-2-70b-chat model to view the Llama 2 API endpoints. Click the API button on the llama-2-70b-chat model's navigation bar. On the right side of the page, click on the Python button. This will provide you with access to the API token for Python applications.

Copy the REPLICATE_API_TOKEN and store it safely for future use.

First, create a Python file called llama_chatbot.py and an env file (.env). You will write your code in llama_chatbot.py and store your secret keys and API tokens in the .env file.

In the llama_chatbot.py file, import the libraries as follows.
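A sketch of those imports, assuming python-dotenv is used to read the .env file created below:

```python
# llama_chatbot.py
import os

import replicate
import streamlit as st
from dotenv import load_dotenv

# Pull REPLICATE_API_TOKEN and the model endpoints out of the .env file
load_dotenv()
```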

Next, set the global variables for the llama-2-70b-chat model.
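Something along these lines works; the variable names and environment-variable keys are assumptions, and the actual "owner/name:version" strings must be copied from the model pages on Replicate:

```python
# The replicate client reads REPLICATE_API_TOKEN from the environment,
# which load_dotenv() has already populated from the .env file.

# Map friendly model names to the Replicate endpoints stored in .env
MODEL_ENDPOINTS = {
    "llama-2-7b-chat": os.environ.get("REPLICATE_MODEL_ENDPOINT7B", ""),
    "llama-2-13b-chat": os.environ.get("REPLICATE_MODEL_ENDPOINT13B", ""),
    "llama-2-70b-chat": os.environ.get("REPLICATE_MODEL_ENDPOINT70B", ""),
}
DEFAULT_MODEL = "llama-2-70b-chat"
```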

In the .env file, add the Replicate token and model endpoints in the following format:
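With the variable names assumed above, the .env file might look like this (the values shown are placeholders, not real identifiers):

```
REPLICATE_API_TOKEN='your-replicate-api-token'
REPLICATE_MODEL_ENDPOINT7B='owner/llama-2-7b-chat:version-hash'
REPLICATE_MODEL_ENDPOINT13B='owner/llama-2-13b-chat:version-hash'
REPLICATE_MODEL_ENDPOINT70B='owner/llama-2-70b-chat:version-hash'
```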

Paste your Replicate token and save the .env file.

Create a pre-prompt to start the Llama 2 model depending on what task you want it to do. In this case, you want the model to act as an assistant.
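A pre-prompt along these lines keeps the model in the assistant role; the exact wording is an assumption and can be tuned to your use case:

```python
# System-style instructions prepended to every conversation
PRE_PROMPT = (
    "You are a helpful assistant. You do not respond as 'User' or pretend "
    "to be 'User'. You respond once as 'Assistant'.\n\n"
)
```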

Set up the page configuration for your chatbot as follows:
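A minimal page configuration might look like this; the title, icon and layout are arbitrary choices:

```python
# Basic Streamlit page settings
st.set_page_config(
    page_title="Llama 2 Chatbot",
    page_icon="🦙",
    layout="wide",
)
```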

Write a function that initializes and sets up session state variables.
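A sketch of such a function; the function name and the default values are assumptions:

```python
def setup_session_state():
    """Create the session-state entries the app relies on, if missing."""
    defaults = {
        "chat_dialogue": [],   # list of {"role": ..., "content": ...} dicts
        "pre_prompt": PRE_PROMPT,
        "temperature": 0.1,
        "top_p": 0.9,
        "max_seq_len": 512,
    }
    for key, value in defaults.items():
        if key not in st.session_state:
            st.session_state[key] = value

    # Let the user pick which Llama 2 size to talk to, and store its endpoint
    model_names = list(MODEL_ENDPOINTS.keys())
    model_choice = st.sidebar.selectbox(
        "Choose a Llama 2 model", model_names, index=model_names.index(DEFAULT_MODEL)
    )
    st.session_state["llm"] = MODEL_ENDPOINTS[model_choice]
```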

The function sets the essential variables like chat_dialogue, pre_prompt, llm, top_p, max_seq_len, and temperature in the session state. It also handles the selection of the Llama 2 model based on the user's choice.

Write a function to render the sidebar content of the Streamlit app.
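One way to sketch that sidebar; the slider ranges are assumptions and simply feed the session-state values used when calling the model:

```python
def render_sidebar():
    """Show a header plus the tunable generation parameters in the sidebar."""
    st.sidebar.header("Llama 2 Chatbot Settings")
    st.session_state["temperature"] = st.sidebar.slider(
        "Temperature", min_value=0.01, max_value=5.0, value=0.1, step=0.01
    )
    st.session_state["top_p"] = st.sidebar.slider(
        "Top P", min_value=0.01, max_value=1.0, value=0.9, step=0.01
    )
    st.session_state["max_seq_len"] = st.sidebar.slider(
        "Max sequence length", min_value=64, max_value=4096, value=512, step=8
    )
    st.session_state["pre_prompt"] = st.sidebar.text_area(
        "Prompt before the chat starts", value=PRE_PROMPT
    )
```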

The function displays the header and the setting variables of the Llama 2 chatbot for adjustments.

Write the function that renders the chat history in the main content area of the Streamlit app.
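A sketch, assuming a Streamlit version recent enough to provide st.chat_message:

```python
def render_chat_history():
    """Replay every stored message in the main chat area."""
    for message in st.session_state["chat_dialogue"]:
        with st.chat_message(message["role"]):
            st.markdown(message["content"])
```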

The function iterates through the chat_dialogue saved in the session state, displaying each message with the corresponding role (user or assistant).

Handle the user's input using the function below.
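For example (st.chat_input also requires a recent Streamlit release; the placeholder text is arbitrary):

```python
def handle_user_input():
    """Read a new message from the chat box and add it to the dialogue."""
    user_input = st.chat_input("Ask the Llama 2 assistant anything...")
    if user_input:
        st.session_state["chat_dialogue"].append(
            {"role": "user", "content": user_input}
        )
        with st.chat_message("user"):
            st.markdown(user_input)
    return user_input
```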

This function presents the user with an input field where they can enter their messages and questions. The message is added to the chat_dialogue in the session state with the user role once the user submits the message.

Write a function that generates responses from the Llama 2 model and displays them in the chat area.
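A sketch of that function; it assumes the debounce_replicate_run helper defined in utils.py below and streams tokens into a placeholder so the reply appears as it is generated:

```python
def generate_assistant_response():
    """Build the conversation prompt, query the model, and stream the reply."""
    history = st.session_state["pre_prompt"]
    for message in st.session_state["chat_dialogue"]:
        speaker = "User" if message["role"] == "user" else "Assistant"
        history += f"{speaker}: {message['content']}\n"
    history += "Assistant: "

    with st.chat_message("assistant"):
        placeholder = st.empty()
        full_response = ""
        for token in debounce_replicate_run(
            st.session_state["llm"],
            history,
            st.session_state["max_seq_len"],
            st.session_state["temperature"],
            st.session_state["top_p"],
        ):
            full_response += str(token)
            placeholder.markdown(full_response)

    st.session_state["chat_dialogue"].append(
        {"role": "assistant", "content": full_response}
    )
```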

The function creates a conversation history string that includes both user and assistant messages before calling the debounce_replicate_run function to obtain the assistant's response. It continually modifies the response in the UI to give a real-time chat experience.

Write the main function responsible for rendering the entire Streamlit app.
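A minimal version of that function, wiring together the helpers sketched above:

```python
def render_app():
    """Draw the whole app in the order Streamlit should render it."""
    setup_session_state()
    render_sidebar()
    st.title("Llama 2 Chatbot")
    render_chat_history()
    # Only call the model when the user has just submitted a new message
    if handle_user_input():
        generate_assistant_response()
```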

It calls all the defined functions to set up the session state, render the sidebar, chat history, handle user input, and generate assistant responses in a logical order.

Write a function to invoke the render_app function and start the application when the script is executed.
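A conventional entry point looks like this:

```python
def main():
    render_app()


if __name__ == "__main__":
    main()
```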

Now your application should be ready for execution.

Create a utils.py file in your project directory and add the function below:
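A sketch of such a helper. The function name matches the one referenced earlier, the five-second window is an arbitrary choice, and the input keys follow the Llama 2 chat models hosted on Replicate (check your model's API page for the exact schema):

```python
# utils.py
import time

import replicate

# Remember when the model was last called so rapid re-submissions are ignored
_last_call_time = 0.0
_DEBOUNCE_SECONDS = 5.0


def debounce_replicate_run(llm, prompt, max_len, temperature, top_p):
    """Call the Replicate model unless the previous call was too recent."""
    global _last_call_time
    now = time.time()
    if now - _last_call_time < _DEBOUNCE_SECONDS:
        return ["Hold on: requests are being sent too quickly. Try again shortly."]
    _last_call_time = now

    # replicate.run returns an iterator that yields the output as it streams
    return replicate.run(
        llm,
        input={
            "prompt": prompt,
            "max_length": max_len,
            "temperature": temperature,
            "top_p": top_p,
            "repetition_penalty": 1,
        },
    )
```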

The function implements a debounce mechanism that prevents frequent and excessive API queries triggered by a user's input.

Next, import the debounce response function into your llama_chatbot.py file as follows:
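That import, at the top of llama_chatbot.py, would look like:

```python
from utils import debounce_replicate_run
```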

Now run the application:
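From the project directory, still inside the Pipenv environment:

```bash
streamlit run llama_chatbot.py
```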

Expected output:

The output shows a conversation between the model and a human.

Some real-world examples of Llama 2 applications include:

With closed models like GPT-3.5 and GPT-4, it is pretty difficult for small players to build anything of substance using LLMs since accessing the GPT model API can be quite expensive.

Opening up advanced large language models like Llama 2 to the developer community is just the beginning of a new era of AI. It will lead to more creative and innovative implementation of the models in real-world applications, leading to an accelerated race toward achieving Artificial Super Intelligence (ASI).

More here:

How to Build a Chatbot Using Streamlit and Llama 2 - MUO - MakeUseOf

ONU’s Polar SURF undergraduate research projects expand into the … – Northern News

Topics such as climate change, cultural politics, and teacher evaluation comments presented deep research dives this summer for several Ohio Northern University College of Arts & Sciences undergraduates and professors. Polar SURF (Summer Undergraduate Research Fellowships) is an innovative ONU program that offers summer research opportunities for students interested in in-depth academic exploration, begun by professors and scaled to students' capabilities. The result was seven projects, ranging in focus from the sciences to the humanities, that introduced students to formal research methods typically reserved for graduate school studies.

According to Brad Wile, Ph.D., associate dean for faculty affairs and chemistry professor, Polar SURF provides a shorter summer experience for capable students and committed professors compared to externally funded experiences like the National Science Foundation's Research Experiences for Undergraduates (REU) program. Funded with endowed College support, Polar SURF also opens the field to disciplines beyond science. This past summer featured projects in areas such as communications, political science, toxicology, art and ethics. Wile said SURF allows students to extrapolate from professors' research ideas and existing work and run with those, with faculty guidance. In some cases, students are required to produce a formal research paper on their findings. Publication in professional journals is a possibility for some projects. "Having the interested approaches that we've seen these students and faculty take over the summer has been great," Wile said.

Three Summer Polar SURF 2023 projects are highlighted below.

Automating Identification of Toxic Student Evaluations

Koen Suzelis and Gabriel Mott worked with John Curiel, Ph.D., assistant professor of political science, to develop a solution to unhelpful toxic comments that students contribute to professor evaluations. While studies have shown the most negative comments, such as those that are racist, are typically not as prevalent as neutral or positive constructive input that instructors can use to improve their teaching, they still have an outsized impact.

"Mostly students with strong feelings tend to write comments," the three wrote in their paper's abstract. "Among the most recallable are toxic comments, comments that are unhelpful/hurtful in harassment, outrage, or personal attacks. These in turn demoralize professors while unduly influencing administrator hiring/firing decisions. They act as a potent poison pill for many faculty across universities. To date, cost constraints prevent universities from systematically identifying and quarantining toxic comments," they continued.

Suzelis, Mott, and Curiel created an automated machine learning tool that rather effectively and affordably flags nonproductive toxic comments in student evaluations. They collected hundreds of evaluations from ONU, Ohio State University, and the University of North Carolina at Chapel Hill, and divided them into three categories: outrage, personal attacks, and prejudicial and bigoted comments. The paper also addresses reframing evaluation questions. Their method, which incorporates artificial intelligence, seeks to "consistently, efficiently, and affordably flag toxic comments and excise those that would unduly bias university administrators against faculty while at the same time allowing for comments with the potential to offer meaningful feedback to remain," they wrote.
The result is a tool that any school or individual educator will be able to use, and one that potentially could have multiple uses for any organization wanting to aggregate and isolate other written content.

Scrutinizing a Super Bowl Ad

The "He Gets Us" ad campaign, which first ran during the January 2023 Super Bowl, resulted in an intriguing research project for Devin Gelband and Megan Wood, Ph.D., assistant professor of communication and culture. Launched in 2022 by Christian philanthropy foundation The Signatry, the $100 million marketing campaign intends to overcome ideological divides by encouraging people to find commonality with Jesus. The group's publicity approach describes Jesus as a figure of radical forgiveness, compassion, and love, and portrays him as an immigrant, a refugee, a feminist, and a radical activist, explain Gelband and Wood in their research paper. Yet, they note, skeptics point out that some heavyweight behind-the-scenes donors and the campaign's parent foundation have strong ties to conservative political projects and far-right ideologies that appear at odds with the campaign's inclusive messaging.

Gelband and Wood's research explores the media and cultural politics of the "He Gets Us" campaign, going beyond the common "culture-war" frame to investigate how the campaign's use of "third way" rhetoric illuminates a contextually significant set of tensions within the relationship between evangelical Christianity and the right wing of U.S. politics. Historical context and precedent undergird their hypothesis, which posits that the campaign is an effort to solve American Christianity's growing image problem with business savvy by relativizing and obscuring political differences to draw popular support, while its benefactors fund candidates and policies that re-entrench those political divides and further decimate the rights of Americans.

The two use a methodological approach called articulation, drawn from cultural studies, which helps researchers explore the relationship between a cultural phenomenon like the ad campaign and the social, economic, and political context of its production and reception. Gelband's and Wood's mutual interest in the study of popular culture helped them pinpoint a collaborative research project. "We had a very lively conversation about this Super Bowl campaign, which had just aired at the time," Wood said. Devin plans to parlay this project into another unique research question he will take on for his senior capstone this semester.

Climate Change and Politics

Mott also conducted an interdisciplinary analysis of environmental rhetoric in State of the Union addresses. Forrest Clingerman, Ph.D., religion and philosophy professor and Honors Program director, suggested Mott undertake a SURF project, and "I was immediately interested," Mott said. "I used basic numeric analysis, political theory, and linguistic theory to investigate why presidents say the things they do, especially in the context of the State of the Union," Mott explained. "I've found that presidents tend to discuss the environment more overall over time, but their normative arguments are extremely varied. Overall, Democrats discussed the environment more, with more noticeable tonal characteristics," he concluded.

Mott, who is wrapping up his research, said the project has been enjoyable. "The greatest difficulty I've felt is trying to find direction for my project over the summer months we weren't in person, but the faculty members were all very helpful and supportive," he said.
"I've worked with a lot of new methods (humanities) that were interesting and challenging to adapt to relative to my usual practices, which are much more logically analytical." Mott hopes to publish his work in an undergraduate journal.

"Polar SURF is a great opportunity to experiment with interdisciplinary questions in humanities research," said Clingerman. Working with faculty members Clingerman, Jonathan Spelman, associate professor of philosophy, and Emily Jay, BFA '10, adjunct art professor, there was a collaborative group that fostered Mott's work and the work of three other undergraduates: Margaret Kurtz, who conducted an analysis of local churches' views on climate change; Madeline Alexander, who studied the ethics of activism; and Aubrey Davis, who used locally sourced materials such as clay to examine environmental sustainability through artistic expression. Such an approach presents students with a broader context and multiple perspectives with which to investigate and formulate responses. Polar SURF's flexibility also allows students to still work during the summer "while having this opportunity to do something really unique," Clingerman said.

Read more from the original source:

ONU's Polar SURF undergraduate research projects expand into the ... - Northern News

Why Artificial Intelligence Needs to Consider the Unique Needs of … – Women’s eNews

Artificial intelligence (AI) is making headlines everywhere. Yet AI applications and implications for older adults, particularly older women, have not been adequately contemplated.

It's no longer a moonshot idea from a science fiction movie. AI is already part of our daily lives: Apple's Siri, Amazon's Alexa, self-driving cars. And now ChatGPT, an AI chatbot that has human-like conversations, composes music, creates art and writes essays. It has disrupted the world as we know it. Pundits who are not easily impressed often describe these advancements as "scary good."

Many leaders have asked for a pause on AI development until we gain a better understanding of its impact. This is a good idea but for reasons well beyond those often identified.

We need to ask: How can we ensure that AI's reach considers the unique needs of different populations? For example, many countries are becoming super-aged societies where women make up the majority of the older population. Is AI taking the needs of older adults into account?

Without thinking through these questions, we may leave older adults, particularly women, and other marginalized populations, open to discriminatory outcomes.

The needs of older women are often invisible to decision-makers. Older women are a unique population, and often gendered ageism, discrimination based on their age and sex, causes their needs to be neglected. Research has already demonstrated that older women are more likely to experience adverse health outcomes and face poverty and discrimination based on age and sex.

AI perpetuates this discrimination in the virtual world by replicating discriminatory practices in the real world. What's worse is that AI automates this discrimination: it speeds it up and makes the impact more widely felt.

AI models use historical data. In healthcare, large data sets composed of personal and biomedical information are currently being used to train AI, but these data have, in many cases, excluded older adults and women, making technologies exclusionary by design.

For example, AI has a valuable use in drug research and development, which uses massive data sets, or big data. But AI is only as good as the data it gets, and much of the world has not collected drug data properly. In the United States, until the 1990s, women and minorities were not required to be included in National Institutes of Health (NIH)-funded studies. And up until 2019, older adults were not required to be included in NIH-funded studies, leaving a gap in our understanding of the health needs of older women in particular.

Excluding older women from drug data collection has been specifically detrimental because they are more likely to have chronic conditions, conditions that may require drugs, and are more likely to experience harmful side effects from medications.

Also, AI-powered systems are often designed based on ageist assumptions. Stereotypes such as older adults being technophobes result in their exclusion from participation in the design of advanced technologies.

For example, women make up the majority of the residents in long-term care homes. A study found that biases held by technology developers towards older adults hindered the appropriate utilization of AI in long-term care.

There also needs to be further thought given to loss of autonomy and privacy, and the effects of limiting human companionship because of AI. Older women are more likely to experience loneliness, yet AI is already being used in the form of companion robots. Their impact on older womens wellbeing, especially loss of human contact, is not well studied.

This is how older women get left out of properly benefiting from advancements in technology.

The World Health Organization's (WHO) timely policy brief addresses "Ageism in Artificial Intelligence for Health" and outlines eight important considerations to ensure that AI technologies for health address ageism. These include participatory design of AI technology with older people and age-inclusive data.

We would add the need to consider the differences between women and men throughout. All levels of government also need to think about how AI is impacting our lives and get innovative with policy and legal frameworks to prevent systemic discrimination.

Ethical guidelines and the ongoing evaluation of AI systems can help prevent the perpetuation of gendered ageism and promote fair and equitable outcomes.

It's time we rethink our approach and reimagine our practices, so that everyone can participate and take advantage of what AI has to offer.

About the Authors: Surbhi Kalia is a Strategy Consultant and Dr. Paula Rochon is a geriatrician and the founding director of the Women's Age Lab at Women's College Hospital.

Visit link:

Why Artificial Intelligence Needs to Consider the Unique Needs of ... - Women's eNews