Archive for the ‘Artificial Intelligence’ Category

Digital Dr. Dolittle: decoding animal conversations with artificial … – KUOW News and Information

We could be talking to animals in the next year using AI. But are we ready?

Whenever I'm out doing field work or on a hike, I've not only got my eyes wide open, but my ears too. There's a lot going on in a forest or under the sea - the sounds of nature. So many of those sounds are about communication. And some species seem more chatty than others. Birds and whales seem to have a lot more to say than bears or mountain lions.

Personally, I love to chat with ravens. I like to think that we have lovely conversations. I know I'm fooling myself, but there's something happening that might change that.

There's a tech company out of Silicon Valley that is hoping to make that dream of communicating with animals a reality. Earth Species Project is a non-profit working to develop machine learning that can decode animal language. Basically, artificial intelligence that can speak whale or monkey... or perhaps even raven?

"We are awash in meanings and signals. And what we're gonna have to do is use these brand new big telescopes of AI to discover what's been there all along," said Aza Raskin, co-founder of Earth Species Project.

So we are doing something a bit different on The Wild today - fun to mix things up now and then. For this episode I'm not outdoors among the wild creatures, but in my home studio, talking with two fascinating people about the latest developments in technology that are being created to talk to wild animals. We'll also explore the ethics of this technology... something Karen Bakker, a professor at the University of British Columbia, knows a lot about.

"We could lure every animal on the planet to their deaths with this technology, if it develops as Aza suggests it might," said Bakker.

What are the downsides to playing the role of Digital Dr. Dolittle?

Guests:

Aza Raskin, co-founder of Earth Species Project and co-founder of the Center for Humane Technology.

Karen Bakker, professor at the University of British Columbia where she researches digital innovation and environmental governance. She also leads the Smart Earth Project.

Original post:
Digital Dr. Dolittle: decoding animal conversations with artificial ... - KUOW News and Information

Podcast: Now Is the Best Time To Embrace Artificial Intelligence – Reason

In this week's The Reason Roundtable, editors Matt Welch, Katherine Mangu-Ward, Nick Gillespie, and special guest Elizabeth Nolan Brown unpack the ubiquitous sense that politicians of every stripe have abandoned a commitment to free expression. They also examine the fast evolution of artificial intelligence chatbots like ChatGPT.

0:42: Politicians choose the culture war over the First Amendment

20:04: Artificial intelligence and large language model (LLM) chatbots like ChatGPT

36:13: Weekly Listener Question

44:27: This week's cultural recommendations

Mentioned in this podcast:

"Congress Asks Is TikTok Really 'An Extension of' the Chinese Communist Party?" by Elizabeth Nolan Brown

"TikTok Is Too Popular To Ban," by Elizabeth Nolan Brown

"Utah Law Gives Parents Full Access to Teens' Social Media," by Elizabeth Nolan Brown

"Florida's War on Drag Targets Theater's Liquor License," by Scott Shackford

"Welcoming Our New Chatbot Overlords," by Ronald Bailey

"Maybe A.I. Will Be a ThreatTo Governments," by Peter Suderman

"The Luddites' Veto," by Ronald Bailey

"Artificial Intelligence Will Change JobsFor the Better," by Jordan McGillis

"The Robot Revolution Is Here," by Katherine Mangu-Ward

"The Earl Weaver Case for Rand Paul's Libertarianism," by Matt Welch

"Rand Paul Tries (Again!) To Make It Harder for Police To Take Your Stuff," by Scott Shackford

Send your questions to roundtable@reason.com. Be sure to include your social media handle and the correct pronunciation of your name.

Today's sponsor:

Audio production by Ian Keyser

Assistant production by Hunt Beaty

Music: "Angeline," by The Brothers Steve

See the rest here:
Podcast: Now Is the Best Time To Embrace Artificial Intelligence - Reason

Artificial Intelligence will not save banks from short-sightedness – SWI swissinfo.ch in English

Banks like Credit Suisse use sophisticated models to analyse and predict risks, but too often they are ignored or bypassed by humans, says risk management expert Didier Sornette.

This content was published on March 28, 2023.

The collapse of Credit Suisse has once again exposed the high-stakes risk culture in the financial sector. The many sophisticated artificial intelligence (AI) tools used by the banking system to predict and manage risks aren't enough to save banks from failure.

According to Didier Sornette, honorary professor of entrepreneurial risks at the federal technology institute ETH Zurich, the tools aren't the problem but rather the short-sightedness of bank executives who prioritise profits.

SWI swissinfo.ch: Banks use AI models to predict risks and evaluate the performance of their investments, yet these models couldn't save Credit Suisse or Silicon Valley Bank from collapse. Why didn't they act on the predictions? And why didn't decision-makers intervene earlier?

Didier Sornette: I have made so many successful predictions in the past that were systematically ignored by managers and decision-makers. Why? Because it is so much easier to say that the crisis is an act of God and could not have been foreseen, and to wash your hands of any responsibility.

Acting on predictions means stopping the dance - in other words, taking painful measures. This is why policymakers are essentially reactive, always behind the curve. It is political suicide to impose pain in order to confront a problem and solve it before it explodes in your face. This is the fundamental problem of risk control.

Credit Suisse had very weak risk controls and culture for decades. Instead, business units were always left to decide what to do and therefore inevitably accumulated a portfolio of latent risks - or, I'd say, lots of far out-of-the-money put options [when an option has no intrinsic value]. Then, when a handful of random events occurred that were symptomatic of the fundamental lack of controls, people started to get worried. When a large US bank [Silicon Valley Bank] with $220 billion (CHF202 billion) of assets quickly went insolvent, people started to reassess their willingness to leave uninsured deposits at any poorly run bank - and voilà.

SWI: This means that risk prediction and management won't work if the problem is not solved at the systemic level?

D.S.: The policy of zero or negative interest rates is the root cause of all this. It has led to positions of these banks that are vulnerable to rising rates. The huge debts of countries have also made them vulnerable. We live in a world that has become very vulnerable because of the short-sighted and irresponsible policies of the big central banks, which have not considered the long-term consequences of their "firefighting" interventions.

The shock is a systemic one, starting from Silicon Valley Bank, Signature Bank, etc., with Credit Suisse being only an episode revealing the major problem of the system: the consequences of the catastrophic policies of the central banks since 2008, which flooded the markets with easy money and led to huge excesses in financial institutions. We are now seeing some of the consequences.

SWI: What role can AI-based risk prediction play, for example, in the case of the surviving giant UBS?

D.S.: AI and mathematical models are irrelevant in the sense that (risk control) tools are useful only if there is a will to use them!

When there is a problem, many people always blame the models, the risk methods, etc. This is wrong. The problems lie with humans who simply ignore models and bypass them. There were so many instances in the last 20 years. Again and again, the same kind of story repeats itself, with nobody learning the lessons. So AI can't do much, because the problem is not about more "intelligence" but about greed and short-sightedness.

Despite the apparent financial gains, this is probably a bad and dangerous deal for UBS. The reason is that it takes decades to create the right risk culture and they are now likely to create huge morale damage via the big headcount reductions. Additionally, no regulator will be giving them an indemnity for inherited regulatory or client Anti-Money Laundering violations from the Credit Suisse side, which we know had very weak compliance. They will have to deal with surprising problems there for years.

SWI: Could we envision a more rigorous form of oversight of the banking system by governments or even taxpayers using data collected by AI systems?

D.S.: Collecting data is not the purview of AI systems. Collecting clean and relevant data is the most difficult challenge, much more difficult than machine learning and AI techniques. Most data is noisy, incomplete, inconsistent, and very costly to obtain and manage. This requires huge investments and a long-term view that is almost always missing. Hence crises occur every five years or so.

SWI: Lately, we've been hearing more and more about behavioral finance. Is there more psychology and irrationality in the financial system than we think?

D.S.: There is greed, fear, hope and... sex. Joking aside, people in banking and finance are in general superrational when it comes to optimising their goals and getting rich. It is not irrationality, it is betting and taking big risks where the gains are privatised and the losses are socialised.

Strong regulations need to be imposed. In a sense, we need to make "banking boring" to tame the beasts that tend to destabilise the financial system by construction.

SWI: Is there a future in which machine learning can prevent the failure of "too big to fail" banks like Credit Suisse, or is that pure science fiction?

D.S.: Yes, an AI can prevent a future failure if the AI takes power and enslaves humans to follow its risk management prescriptions, with incentives dictated by the AI, as in many scenarios depicting the dangers of superintelligent AI. I am not kidding.

The interview was conducted in writing. It has been edited for clarity and brevity.

In compliance with the JTI standards

More: SWI swissinfo.ch certified by the Journalism Trust Initiative

Originally posted here:
Artificial Intelligence will not save banks from short-sightedness - SWI swissinfo.ch in English

Most Jobs Soon To Be Influenced By Artificial Intelligence, Research Out Of OpenAI And University Of Pennsylvania Suggests – Forbes

As artificial intelligence opens up and becomes democratized through platforms offering generative AI, it's likely to alter tasks within at least 80% of all jobs, a new analysis suggests. Jobs requiring college education will see the highest impacts, and in many cases, at least half of people's tasks may be affected by AI. It's extremely important to add that affected occupations will be significantly influenced or augmented by generative AI, not replaced.

That's the word from a paper published by a team of researchers from OpenAI, OpenResearch, and the University of Pennsylvania. The researchers included Tyna Eloundou with OpenAI, Sam Manning with OpenResearch and OpenAI, Pamela Mishkin with OpenAI, and Daniel Rock, assistant professor at the University of Pennsylvania, also affiliated with OpenAI and OpenResearch.

The research looked at the potential implications of GPT (Generative Pre-trained Transformer) models and related technologies on occupations, assessing their exposure to GPT capabilities. "Our findings indicate that approximately 80% of the U.S. workforce could have at least 10% of their work tasks affected by the introduction of GPTs, while around 19% of workers may see at least 50% of their tasks impacted," Eloundou and her colleagues estimate. The influence spans all wage levels, with higher-income jobs potentially facing greater exposure, particularly jobs requiring college degrees. At the same time, they observe, considering each job as a bundle of tasks, it would be rare to find any occupation for which AI tools could do nearly all of the work.
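To make those threshold figures concrete, the following is a minimal sketch of the kind of arithmetic involved, written in Python. The occupations, employment counts, and exposure fractions below are entirely hypothetical illustrations; the paper's actual rubric, occupation data, and results are not reproduced here.

# Minimal sketch: share of workers whose occupations cross task-exposure thresholds.
# All occupations, employment counts, and exposure fractions are hypothetical.
occupations = [
    # (occupation, workers employed, fraction of tasks exposed to LLMs)
    ("technical_writer", 50_000, 0.76),
    ("web_developer", 180_000, 0.55),
    ("registered_nurse", 3_000_000, 0.12),
    ("line_cook", 2_400_000, 0.04),
]

total_workers = sum(workers for _, workers, _ in occupations)

def share_with_exposure_at_least(threshold):
    """Fraction of all workers whose occupation has >= `threshold` of its tasks exposed."""
    exposed = sum(workers for _, workers, frac in occupations if frac >= threshold)
    return exposed / total_workers

print(f"Workers with at least 10% of tasks exposed: {share_with_exposure_at_least(0.10):.0%}")
print(f"Workers with at least 50% of tasks exposed: {share_with_exposure_at_least(0.50):.0%}")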

The researchers base their study on GPT-4, and use the terms large language models (LLMs) and GPTs interchangeably.

Their findings suggest that programming and writing skills are more likely to be influenced by generative AI. On the other hand, occupations or tasks involving science and critical thinking skills are less likely to be influenced. Occupations that are seeing or will see a high degree of AI-based influence and augmentation (again, emphasis on influence and augment) include the following:

"GPTs are improving in capabilities over time with the ability to complete or be helpful for an increasingly complex set of tasks and use-cases," Eloundou and her co-authors point out. They caution, however, that the definition of a task is very fluid. "It is unclear to what extent occupations can be entirely broken down into tasks, and whether this approach systematically omits certain categories of skills or tasks that are tacitly required for competent performance of a job," they add. "Additionally, tasks can be composed of sub-tasks, some of which are more automatable than others."

There are more implications to AI than simply taking over tasks, of course. "While the technical capacity for GPTs to make human labor more efficient appears evident, it is important to recognize that social, economic, regulatory, and other factors will influence actual labor productivity outcomes," the team states. There will be broader implications as AI progresses, including its potential to augment or displace human labor, its impact on job quality, impacts on inequality, skill development, and numerous other outcomes.

Still, "accurately predicting future LLM applications remains a significant challenge, even for experts," Eloundou and her co-authors caution. "The discovery of new emergent capabilities, changes in human perception biases, and shifts in technological development can all affect the accuracy and reliability of predictions regarding the potential impact of GPTs on worker tasks and the development of GPT-powered software."

An important takeaway from this study is that generative AI, not to mention AI in all forms, is reshaping the workplace in ways that currently cannot be imagined. Yes, some occupations may eventually disappear, but those that can harness the productivity and power of AI to create new innovations and services that improve the lives of customers or people will be well-placed for the economy of the mid-to-late 2020s and beyond.

I am an author, independent researcher and speaker exploring innovation, information technology trends and markets. I served as co-chair of the AI Summit in 2021 and 2022, and have also participated in the IEEE International Conference on Edge Computing and the International SOA and Cloud Symposium series. I am also a co-author of the SOA Manifesto, which outlines the values and guiding principles of service orientation in business and IT. I also regularly contribute to Harvard Business Review and CNET on topics shaping business and technology careers.

Much of my research work is in conjunction with Forbes Insights and Unisphere Research/Information Today, Inc., covering topics such as artificial intelligence, cloud computing, digital transformation, and big data analytics.

In a previous life, I served as communications and research manager of the Administrative Management Society (AMS), an international professional association dedicated to advancing knowledge within the IT and business management fields. I am a graduate of Temple University.

Link:
Most Jobs Soon To Be Influenced By Artificial Intelligence, Research Out Of OpenAI And University Of Pennsylvania Suggests - Forbes

Artificial Intelligence Paints Quite a Picture of Cat Country’s Jahna – catcountry1073.com

In the old days, books, movies and television shows would have stories of computers that could think for themselves. They could make assumptions, solve problems, and answer questions.

Fast forward to 2023 and that fiction has now become reality.

It's called ChatGPT.

ChatGPT uses algorithms, artificial intelligence, and other state-of-the-art computer "stuff" to generate human-like text. You can ask it to write an article, answer questions and more. It draws on information from the internet to work out its answers.
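For readers curious how a request like this can be sent to a chat model from code rather than through the chat window, here is a minimal sketch assuming the official OpenAI Python client; the model name and prompt wording are illustrative assumptions, not what the show actually used.

# Minimal sketch of sending a prompt to a chat model, assuming the official
# OpenAI Python client (pip install openai) and an API key in the
# OPENAI_API_KEY environment variable. Model and prompt are illustrative only.
from openai import OpenAI

client = OpenAI()

prompt = (
    "Provide three possible captions for a photo of the co-host "
    "of a morning radio show."
)

response = client.chat.completions.create(
    model="gpt-3.5-turbo",  # assumed model choice for illustration
    messages=[{"role": "user", "content": prompt}],
)

# The generated text comes back in the first choice's message content.
print(response.choices[0].message.content)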

ChatGPT can also do some creative writing.

So, I used ChatGPT to help me with a project centering around my Cat Country Morning Show co-host Jahna.

The results blew me away!

The first "project" I asked for ChatGPT to do was to provide a caption for this photo of Jahna:

Courtesy of Jahna

As of yet, ChatGPT can't "see" the photo, so I couldn't use it directly. Instead, I asked: "Provide three possible captions for a photo of Jahna, the co-host of the radio show, Cat Country Mornings with Joe and Jahna."

Within seconds, Chat spit out this:

I must admit, Chat did pretty good, right?

Then, I decided to go a little further. I asked Chat, "Provide three creative fictional stories about 32-year-old Jahna."

Mind you, that's the only information I provided. The response I received from Chat was absolutely stunning!

Sure, here are three creative fictional stories about 32-year-old Jahna:

Holy crap!

Find out more about ChatGPT here.

Originally posted here:
Artificial Intelligence Paints Quite a Picture of Cat Country's Jahna - catcountry1073.com