Archive for the ‘Artificial Intelligence’ Category

Opinion: The Long, Uncertain Road to Artificial General Intelligence – Undark Magazine

Last month, DeepMind, a subsidiary of technology giant Alphabet, set Silicon Valley abuzz when it announced Gato, perhaps the most versatile artificial intelligence model in existence. Billed as a generalist agent, Gato can perform over 600 different tasks. It can drive a robot, caption images, identify objects in pictures, and more. It is probably the most advanced AI system on the planet that isn't dedicated to a singular function. And, to some computing experts, it is evidence that the industry is on the verge of reaching a long-awaited, much-hyped milestone: Artificial General Intelligence.

Unlike ordinary AI, Artificial General Intelligence wouldn't require giant troves of data to learn a task. Whereas ordinary artificial intelligence has to be pre-trained or programmed to solve a specific set of problems, a general intelligence can learn through intuition and experience.

An AGI would in theory be capable of learning anything that a human can, if given the same access to information. Basically, if you put an AGI on a chip and then put that chip into a robot, the robot could learn to play tennis the same way you or I do: by swinging a racket around and getting a feel for the game. That doesn't necessarily mean the robot would be sentient or capable of cognition. It wouldn't have thoughts or emotions; it'd just be really good at learning to do new tasks without human aid.

This would be huge for humanity. Think about everything you could accomplish if you had a machine with the intellectual capacity of a human and the loyalty of a trusted canine companion: a machine that could be physically adapted to suit any purpose. That's the promise of AGI. It's C-3PO without the emotions, Lt. Commander Data without the curiosity, and Rosey the Robot without the personality. In the hands of the right developers, it could epitomize the idea of human-centered AI.

But how close, really, is the dream of AGI? And does Gato actually move us closer to it?

For a certain group of scientists and developers (I'll call this group the Scaling-Uber-Alles crowd, adopting a term coined by world-renowned AI expert Gary Marcus), Gato and similar systems based on transformer models of deep learning have already given us the blueprint for building AGI. Essentially, these transformers use humongous databases and billions or trillions of adjustable parameters to predict what will happen next in a sequence.
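
The core mechanic, "predict what will happen next in a sequence," can be shown with a deliberately tiny sketch. The bigram counter below is purely illustrative: real transformers learn billions of parameters across attention layers rather than a frequency table, and the corpus and function names here are invented for the example.

```python
from collections import Counter, defaultdict

def train_bigram(tokens):
    """Count, for each token, which token follows it and how often."""
    follow = defaultdict(Counter)
    for cur, nxt in zip(tokens, tokens[1:]):
        follow[cur][nxt] += 1
    return follow

def predict_next(follow, token):
    """Return the most frequent continuation seen in training, or None."""
    if token not in follow:
        return None
    return follow[token].most_common(1)[0][0]

corpus = "the cat sat on the mat and the cat slept".split()
model = train_bigram(corpus)
print(predict_next(model, "the"))  # "cat" ("the cat" appears twice, "the mat" once)
```

Scaling this idea up, from frequency counts over a toy corpus to transformer layers trained on web-scale text, is essentially what the Scaling-Uber-Alles position bets on.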

The Scaling-Uber-Alles crowd, which includes notable names such as OpenAI's Ilya Sutskever and the University of Texas at Austin's Alex Dimakis, believes that transformers will inevitably lead to AGI; all that remains is to make them bigger and faster. As Nando de Freitas, a member of the team that created Gato, recently tweeted: "It's all about scale now! The Game is Over! It's about making these models bigger, safer, compute efficient, faster at sampling, smarter memory..." De Freitas and company understand that they'll have to create new algorithms and architectures to support this growth, but they also seem to believe that an AGI will emerge on its own if we keep making models like Gato bigger.

Call me old-fashioned, but when a developer tells me their plan is to wait for an AGI to magically emerge from the miasma of big data like a mudfish from primordial soup, I tend to think they're skipping a few steps. Apparently, I'm not alone. A host of pundits and scientists, including Marcus, have argued that something fundamental is missing from the grandiose plans to build Gato-like AI into full-fledged generally intelligent machines.

I recently explained my thinking in a trilogy of essays for The Next Web's Neural vertical, where I'm an editor. In short, a key premise of AGI is that it should be able to obtain its own data. But deep learning models, such as transformer AIs, are little more than machines designed to make inferences relative to the databases that have already been supplied to them. They're librarians and, as such, they are only as good as their training libraries.

A general intelligence could theoretically figure things out even if it had a tiny database. It would intuit the methodology to accomplish its task based on nothing more than its ability to choose which external data was and wasn't important, like a human deciding where to place their attention.

Gato is cool, and there's nothing quite like it. But, essentially, it is a clever package that arguably presents the illusion of a general AI through the expert use of big data. Its giant database, for example, probably contains datasets built on the entire contents of websites such as Reddit and Wikipedia. It's amazing that humans have managed to do so much with simple algorithms just by forcing them to parse more data.

In fact, Gato is such an impressive way to fake general intelligence, it makes me wonder if we might be barking up the wrong tree. Many of the tasks Gato is capable of today were once believed to be something only an AGI could do. It feels like the more we accomplish with regular AI, the harder the challenge of building a general agent appears to be.

For those reasons, I'm skeptical that deep learning alone is the path to AGI. I believe we'll need more than bigger databases and additional parameters to tweak. We'll need an entirely new conceptual approach to machine learning.

I do think that humanity will eventually succeed in the quest to build AGI. My best guess is that we will knock on AGI's door sometime around the early-to-mid 2100s and that, when we do, we'll find that it looks quite different from what the scientists at DeepMind are envisioning.

But the beautiful thing about science is that you have to show your work, and, right now, DeepMind is doing just that. It's got every opportunity to prove me and the other naysayers wrong.

I truly, deeply hope it succeeds.

Tristan Greene is a futurist who believes in the power of human-centered technology. He's currently the editor of The Next Web's futurism vertical, Neural.

Oregon is dropping an artificial intelligence tool used in child welfare system – NPR

Sen. Ron Wyden, D-Ore., speaks during a Senate Finance Committee hearing on Oct. 19, 2021. Wyden says he has long been concerned about the algorithms used by his state's child welfare system. (Mandel Ngan/AP)

Child welfare officials in Oregon will stop using an algorithm to help decide which families are investigated by social workers, opting instead for a new process that officials say will make better, more racially equitable decisions.

The move comes weeks after an Associated Press review of a separate algorithmic tool in Pennsylvania, which had originally inspired Oregon officials to develop their model and which was found to have flagged a disproportionate number of Black children for "mandatory" neglect investigations when it was first put in place.

Oregon's Department of Human Services announced to staff via email last month that after "extensive analysis" the agency's hotline workers would stop using the algorithm at the end of June to reduce disparities concerning which families are investigated for child abuse and neglect by child protective services.

"We are committed to continuous quality improvement and equity," Lacey Andresen, the agency's deputy director, said in the May 19 email.

Jake Sunderland, a department spokesman, said the existing algorithm would "no longer be necessary," since it can't be used with the state's new screening process. He declined to provide further details about why Oregon decided to replace the algorithm and would not elaborate on any related disparities that influenced the policy change.

Hotline workers' decisions about reports of child abuse and neglect mark a critical moment in the investigations process, when social workers first decide if families should face state intervention. The stakes are high: not attending to an allegation could end with a child's death, but scrutinizing a family's life could set them up for separation.

From California to Colorado and Pennsylvania, as child welfare agencies use or consider implementing algorithms, an AP review identified concerns about transparency, reliability and racial disparities in the use of the technology, including their potential to harden bias in the child welfare system.

U.S. Sen. Ron Wyden, an Oregon Democrat, said he had long been concerned about the algorithms used by his state's child welfare system and reached out to the department again following the AP story to ask questions about racial bias, a prevailing concern with the growing use of artificial intelligence tools in child protective services.

"Making decisions about what should happen to children and families is far too important a task to give untested algorithms," Wyden said in a statement. "I'm glad the Oregon Department of Human Services is taking the concerns I raised about racial bias seriously and is pausing the use of its screening tool."

Sunderland said Oregon child welfare officials had long been considering changing their investigations process before making the announcement last month.

He added that the state decided recently that the algorithm would be completely replaced by its new program, called the Structured Decision Making model, which aligns with many other child welfare jurisdictions across the country.

Oregon's Safety at Screening Tool was inspired by the influential Allegheny Family Screening Tool, which is named for the county surrounding Pittsburgh and is aimed at predicting the risk that children face of winding up in foster care or being investigated in the future. It was first implemented in 2018. Social workers view the numerical risk scores the algorithm generates (the higher the number, the greater the risk) as they decide if a different social worker should go out to investigate the family.

But Oregon officials tweaked their original algorithm to only draw from internal child welfare data in calculating a family's risk, and tried to deliberately address racial bias in its design with a "fairness correction."

In response to Carnegie Mellon University researchers' findings that Allegheny County's algorithm initially flagged a disproportionate number of Black families for "mandatory" child neglect investigations, county officials called the research "hypothetical," and noted that social workers can always override the tool, which was never intended to be used on its own.

Wyden is a chief sponsor of a bill that seeks to establish transparency and national oversight of software, algorithms and other automated systems.

"With the livelihoods and safety of children and families at stake, technology used by the state must be equitable and I will continue to watchdog," Wyden said.

The second tool that Oregon developed, an algorithm to help decide when foster care children can be reunified with their families, remains on hiatus as researchers rework the model. Sunderland said the pilot was paused months ago due to inadequate data but that there is "no expectation that it will be unpaused soon."

In recent years, while under scrutiny by a crisis oversight board ordered by the governor, the state agency, currently preparing to hire its eighth new child welfare director in six years, considered three additional algorithms, including predictive models that sought to assess a child's risk for death and severe injury, whether children should be placed in foster care, and if so, where. Sunderland said the child welfare department never built those tools, however.

Evaluating brain MRI scans with the help of artificial intelligence – MIT Technology Review

Greece is just one example of a population where the share of older people is expanding, and with it the incidence of neurodegenerative diseases. Among these, Alzheimer's disease is the most prevalent, accounting for 70% of neurodegenerative disease cases in Greece. According to estimates published by the Alzheimer Society of Greece, 197,000 people are suffering from the disease at present. This number is expected to rise to 354,000 by 2050.

Dr. Andreas Papadopoulos, a physician and scientific coordinator at Iatropolis Medical Group, a leading diagnostic provider near Athens, Greece, explains the key role of early diagnosis: "The likelihood of developing Alzheimer's may be only 1% to 2% at age 65. But then it doubles every five years. Existing drugs cannot reverse the course of the degeneration; they can only slow it down. This is why it's crucial to make the right diagnosis in the preliminary stages, when the first mild cognitive disorder appears, and to filter out Alzheimer's patients."

Diseases like Alzheimer's and other neurodegenerative pathologies characteristically have a very slow progression, which makes it difficult to recognize and quantify pathological changes on brain MRI images at an early stage. In evaluating scans, some radiologists describe the process as one of "guesstimation," as visual changes in the highly complex anatomy of the brain are not always possible to observe well with the human eye. This is where technical innovations such as artificial intelligence can offer support in interpreting clinical images.

One such tool is the AI-Rad Companion Brain MR. Part of a family of AI-based, decision-support solutions for imaging, AI-Rad Companion Brain MR is a brain volumetry software that provides automatic volumetric quantification of different brain segments. "It is able to segment them from each other: it isolates the hippocampi and the lobes of the brain and quantifies white matter and gray matter volumes for each segment individually," says Dr. Papadopoulos. In total, it has the capacity to segment, measure volumes, and highlight more than 40 regions of the brain.
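
Once a segmentation mask exists, the volumetric quantification step itself is simple arithmetic: each region's volume is its voxel count multiplied by the physical voxel size. The sketch below illustrates only that final step, with made-up region labels and a toy mask; it is not the product's actual API.

```python
from math import prod

# Hypothetical region labels; a real brain volumetry mask has 40+ regions.
LABELS = {1: "left hippocampus", 2: "right hippocampus"}

def region_volumes_ml(mask_voxels, voxel_dims_mm):
    """Volume per region = voxel count * physical voxel volume (mm^3 -> mL)."""
    voxel_ml = prod(voxel_dims_mm) / 1000.0
    return {name: mask_voxels.count(label) * voxel_ml
            for label, name in LABELS.items()}

# Toy mask: 8 voxels of label 1 in an otherwise empty 1000-voxel volume,
# with 1 mm isotropic voxels.
mask = [1] * 8 + [0] * 992
volumes = region_volumes_ml(mask, voxel_dims_mm=(1.0, 1.0, 1.0))
print(volumes["left hippocampus"])  # 0.008 (mL)
```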

"Calculating volumetric properties manually can be an extremely laborious and time-consuming task. More importantly, it also involves a degree of precise observation that humans are simply not able to achieve," says Dr. Papadopoulos. Papadopoulos has always been an early adopter and has welcomed technological innovations in imaging throughout his career. This AI-powered tool means that he can now also compare the quantifications with normative data from a healthy population. And it's not all about the automation: the software displays the data in a structured report and generates a highlighted deviation map based on user settings. This allows the user to also monitor volumetric changes manually, with all the key data prepared automatically in advance.
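
The comparison against normative data is easy to sketch: measure how far a region's volume falls from a healthy-population distribution. The exact statistic the software uses is not described here, so this example uses a plain z-score with invented numbers.

```python
def volume_deviation(measured_ml, normative_mean_ml, normative_sd_ml):
    """Standardized deviation (z-score) of a measured region volume
    relative to a healthy reference population."""
    return (measured_ml - normative_mean_ml) / normative_sd_ml

# Hypothetical values: a 3.1 mL hippocampus vs. a normative 3.9 +/- 0.4 mL
z = volume_deviation(3.1, 3.9, 0.4)
print(round(z, 1))  # -2.0, i.e. two standard deviations below the norm
```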

The opportunity for more accurate observation and evaluation of volumetric changes in the brain encourages Papadopoulos when he considers how important the early detection of neurodegenerative diseases is. He explains: "In the early stages, the volumetric changes are small. In the hippocampus, for example, there is a volume reduction of 10% to 15%, which is very difficult for the eye to detect. But the objective calculations provided by the system could prove a big help."

The aim of AI is to relieve physicians of a considerable burden and, ultimately, to save time when optimally embedded in the workflow. An extremely valuable role for this particular AI-powered postprocessing tool is that it can visualize a deviation of the different structures that might be hard to identify with the naked eye. Papadopoulos already recognizes that the greatest advantage in his work is the objective framework that AI-Rad Companion Brain MR provides on which he can base his subjective assessment during an examination.

AI-Rad Companion from Siemens Healthineers supports clinicians in their daily routine of diagnostic decision-making. To maintain a continuous value stream, our AI-powered tools include regular software updates and upgrades that are deployed to the customers via the cloud. Customers can decide whether they want to integrate a fully cloud-based approach into their working environment, leveraging all the benefits of the cloud, or a hybrid approach that allows them to process imaging data within their own hospital IT setup.

The upcoming software version of AI-Rad Companion Brain MR will contain new algorithms that are capable of segmenting, quantifying, and visualizing white matter hyperintensities (WMH). Along with the McDonald criteria, reporting WMH aids in multiple sclerosis (MS) evaluation.

Quick Study: Artificial Intelligence Ethics and Bias – InformationWeek

Mention artificial intelligence to pretty much anyone and there's a good chance that the term that once seemed magical now spawns a queasy feeling. It generates thoughts of a computer stealing your job, technology companies spying on us, and racial, gender and economic bias.

So, how do we bring the magic back to AI? Maybe it comes down to people and things that humans actually do pretty well: thinking and planning. That's one finding that will become clear from this Quick Study, packed with InformationWeek articles focused on AI ethics and bias.

Yes, there are ways to develop and utilize AI in ethical manners, but they involve thinking through how your organization will use AI, how you will test it, and what your training data looks like. In these articles AI experts and companies that have succeeded with AI share their advice.

What You Need to Know About AI Ethics

Honesty is the best policy. The same is true when it comes to artificial intelligence. With that in mind, a growing number of enterprises are starting to pay attention to how AI can be kept from making potentially harmful decisions.

Why AI Ethics Is Even More Important Now

Contact-tracing apps are fueling more AI ethics discussions, particularly around privacy. The longer term challenge is approaching AI ethics holistically.

Data Innovation in 2021: Supply Chain, Ethical AI, Data Pros in High Demand

Year in Review: In year two of the pandemic, enterprise data innovation pros put a focus on supply chain, ethical AI, automation, and more. From automation to the supply chain to responsible/ethical AI, enterprises made progress in their efforts during 2021, but more work needs to be done.

The Tech Talent Chasm

How a changing world is forcing businesses to rethink everything, and in recruiting IT talent understand that great candidates want their employers to take AI ethics seriously.

3 Components CIOs Need to Create an Ethical AI Framework

CIOs shouldn't wait for an ethical AI framework to be mandatory. Whether buying the technology or building it, they need processes in place to embed ethics into their AI systems, according to PwC.

Why You Should Have an AI & Ethics Board

Guidelines are great -- but they need to be enforced. An ethics board is one way to ensure these principles are woven into product development and uses of internal data, according to the chief data officer of ADP.

How and Why Enterprises Must Tackle Ethical AI

Artificial intelligence is becoming more common in enterprises, but ensuring ethical and responsible AI is not always a priority. Here's how organizations can make sure that they are avoiding bias and protecting the rights of the individual.

Common AI Ethics Mistakes Companies Are Making

More organizations are embracing the concept of responsible AI, but faulty assumptions can impede success.

How IT Pros Can Lead the Fight for Data Ethics

Maintaining ethics means being alert on a continuum for issues. Here's how IT teams can play a pivotal role in protecting data ethics.

Ex-Googler's Ethical AI Startup Models More Inclusive Approach

Backed by big foundations, ethical AI startup DAIR promises a focus on AI directed by and in service of the many rather than controlled just by a few giant tech companies. How do its goals align with your enterprise's own AI ethics program?

The Cost of AI Bias: Lower Revenue, Lost Customers

A survey shows tech leadership's growing concern about AI bias and AI ethics, as negative events impact revenue, customer losses, and more.

What We Can Do About Biased AI

Biased artificial intelligence is a real issue. But how does it occur, what are the ramifications -- and what can we do about it?

How Fighting AI Bias Can Make Fintech Even More Inclusive

Digitized presumptions, encoded by very human creators, can introduce prejudice in new financial technology meant to be more accessible.

I'm Not a Cat: The Human Side of Artificial Intelligence

Unconscious biases will be reflected in the data that feeds your AI and ML algorithms. Here are three simple actions to dismantle unconscious bias in AI.

When A Good Machine Learning Model Is So Bad

IT teams must work with managers who oversee data scientists, data engineers, and analysts to develop points of intervention that complement model ensemble techniques.

Artificial Intelligence Model Can Successfully Predict the Reoccurrence of Crohn's Disease – SciTechDaily

A new study finds that an artificial intelligence model can predict whether Crohn's disease will recur after surgery.

A deep learning model trained to analyze histological images of surgical specimens accurately classified patients with and without Crohn's disease recurrence, investigators report in The American Journal of Pathology.

According to researchers, more than 500,000 individuals in the United States have Crohn's disease, a chronic inflammatory bowel disease that damages the lining of the digestive system. It can cause digestive system inflammation, which may result in abdominal pain, severe diarrhea, exhaustion, weight loss, and malnutrition.

Many people end up needing surgery to treat their Crohn's disease. Even after a successful operation, recurrence is common. Now, researchers are reporting that their AI tool is highly accurate at predicting the postoperative recurrence of Crohn's disease. It also linked recurrence with the histology of subserosal adipose cells and mast cell infiltration.

Using an artificial intelligence (AI) tool that simulates how humans visualize and is trained to identify and categorize pictures, researchers created a model that predicts the postoperative recurrence of Crohn's disease with high accuracy by evaluating histological images. The AI tool also identified previously unknown differences in adipose cells and substantial disparities in the degree of mast cell infiltration in the subserosa, or outer lining of the gut, when comparing individuals with and without disease recurrence. Elsevier's The American Journal of Pathology published the findings.

The 10-year rate of postoperative symptomatic recurrence of Crohn's disease, a chronic inflammatory gastrointestinal illness, is believed to be 40%. Although there are scoring methods to measure Crohn's disease activity and the existence of postoperative recurrence, no scoring system has been devised to predict whether Crohn's disease will return.

Sixty-eight patients with Crohn's disease were classified according to the presence or absence of postoperative recurrence within two years. The investigators performed histological analysis of surgical specimens using deep learning EfficientNet-b5, a commercially available AI model designed to perform image classification. They achieved a highly accurate prediction of postoperative recurrence (AUC = 0.995) and discovered morphological differences in adipose cells between the two groups. Credit: The American Journal of Pathology

"Most analyses of histopathological images using AI in the past have targeted malignant tumors," explained lead investigators Takahiro Matsui, MD, Ph.D., and Eiichi Morii, MD, Ph.D., Department of Pathology, Osaka University Graduate School of Medicine, Osaka, Japan. "We aimed to obtain clinically useful information for a wider variety of diseases by analyzing histopathology images using AI. We focused on Crohn's disease, in which postoperative recurrence is a clinical problem."

The research involved 68 Crohn's disease patients who underwent bowel resection between January 2007 and July 2018. They were divided into two groups based on whether or not they had postoperative disease recurrence within two years after surgery. Each group was divided into two subgroups, one for training and the other for validation of an AI model. Whole-slide pictures of surgical specimens were cropped into tile images for training, labeled for the presence or absence of postsurgical recurrence, and then processed using EfficientNet-b5, a commercially available AI model built to perform image classification. When the model was tested with unlabeled images, the findings indicated that the deep learning model accurately classified them according to the presence or absence of disease recurrence.
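
The reported accuracy (AUC = 0.995, per the figure caption above) has a concrete reading: pick one recurrence patient and one non-recurrence patient at random, and the model scores the former higher about 99.5% of the time. Here is a minimal sketch of that rank-based AUC computation, using invented labels and scores rather than the study's data.

```python
def roc_auc(labels, scores):
    """AUC = probability that a randomly chosen positive case scores
    higher than a randomly chosen negative one (ties count half)."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Hypothetical per-patient recurrence scores from a classifier
labels = [1, 1, 1, 0, 0, 0]   # 1 = recurrence within two years
scores = [0.9, 0.8, 0.4, 0.5, 0.2, 0.1]
print(round(roc_auc(labels, scores), 3))  # 0.889
```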

Following that, prediction heat maps were created to identify areas and histological features from which the machine learning algorithm could accurately predict recurrence. All layers of the intestinal wall were shown in the images. The heat maps revealed that the algorithm made correct predictions from the subserosal adipose tissue layer. However, the model was less precise in other regions, such as the mucosal and proper muscular layers. The images with the most accurate predictions, drawn from both the non-recurrence and recurrence test datasets, all contained adipose tissue.

Because the machine learning model achieved accurate predictions from images of subserosal tissue, the investigators hypothesized that subserosal adipose cell morphologies differed between the recurrence and non-recurrence groups. Adipose cells in the recurrence group had a significantly smaller cell size, higher flattening, and smaller center-to-center cell distance values than those in the non-recurrence group.
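
The morphological measures named above are straightforward to compute once each cell has been segmented and fitted with an ellipse. The helper functions below are hypothetical illustrations of two of them, flattening and center-to-center distance; they are not the paper's actual code, and the measurements are invented.

```python
import math

def flattening(major_axis, minor_axis):
    """Ellipse flattening: 0 for a circle, approaching 1 for a squashed cell."""
    return 1.0 - minor_axis / major_axis

def mean_nearest_center_distance(centers):
    """Mean distance from each cell center to its nearest neighbor, a proxy
    for how tightly packed (shrunken) the adipocytes are."""
    nearest = [min(math.hypot(x1 - x2, y1 - y2)
                   for j, (x2, y2) in enumerate(centers) if j != i)
               for i, (x1, y1) in enumerate(centers)]
    return sum(nearest) / len(nearest)

# Hypothetical measurements in micrometers
print(round(flattening(50.0, 40.0), 3))                                    # 0.2
print(round(mean_nearest_center_distance([(0, 0), (0, 30), (40, 0)]), 2))  # 33.33
```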

"These features, defined as adipocyte shrinkage, are important histological characteristics associated with Crohn's disease recurrence," said Dr. Matsui and Dr. Morii.

The investigators also hypothesized that the differences in adipocyte morphology between the two groups were associated with some degree or type of inflammatory condition in the tissue. They found that the recurrence group had a significantly higher number of mast cells infiltrating the subserosal adipose tissue, indicating that the cells are associated with the recurrence of Crohns disease and the adipocyte shrinkage phenomenon.

To the investigators' knowledge, these findings are the first to link postoperative recurrence of Crohn's disease with the histology of subserosal adipose cells and mast cell infiltration. Dr. Matsui and Dr. Morii observed, "Our findings enable stratification by the prognosis of postoperative Crohn's disease patients. Many drugs, including biologicals, are used to prevent Crohn's disease recurrence, and proper stratification can enable more intensive and successful treatment of high-risk patients."

Reference: "Deep Learning Analysis of Histologic Images from Intestinal Specimen Reveals Adipocyte Shrinkage and Mast Cell Infiltration to Predict Postoperative Crohn Disease" by Hiroki Kiyokawa, Masatoshi Abe, Takahiro Matsui, Masako Kurashige, Kenji Ohshima, Shinichiro Tahara, Satoshi Nojima, Takayuki Ogino, Yuki Sekido, Tsunekazu Mizushima and Eiichi Morii, 28 March 2022, The American Journal of Pathology. DOI: 10.1016/j.ajpath.2022.03.006
