Archive for the ‘Artificial Intelligence’ Category

The potential of artificial intelligence to bring equity in health care – MIT News

Health care is at a junction, a point where artificial intelligence tools are being introduced to all areas of the space. This introduction comes with great expectations: AI has the potential to greatly improve existing technologies, sharpen personalized medicine, and, with an influx of big data, benefit historically underserved populations.

But in order to do those things, the health care community must ensure that AI tools are trustworthy, and that they don't end up perpetuating biases that exist in the current system. Researchers at the MIT Abdul Latif Jameel Clinic for Machine Learning in Health (Jameel Clinic), an initiative to support AI research in health care, call for creating a robust infrastructure that can aid scientists and clinicians in pursuing this mission.

Fair and equitable AI for health care

The Jameel Clinic recently hosted the AI for Health Care Equity Conference to assess current state-of-the-art work in this space, including new machine learning techniques that support fairness, personalization, and inclusiveness; identify key areas of impact in health care delivery; and discuss regulatory and policy implications.

Nearly 1,400 people virtually attended the conference to hear from thought leaders in academia, industry, and government who are working to improve health care equity and further understand the technical challenges in this space and paths forward.

During the event, Regina Barzilay, the School of Engineering Distinguished Professor of AI and Health and the AI faculty lead for Jameel Clinic, and Bilal Mateen, clinical technology lead at the Wellcome Trust, announced the Wellcome Fund grant conferred to Jameel Clinic to create a community platform supporting equitable AI tools in health care.

The project's ultimate goal is not to solve an academic question or reach a specific research benchmark, but to actually improve the lives of patients worldwide. Researchers at Jameel Clinic insist that AI tools should not be designed with a single population in mind, but instead be crafted to be iterative and inclusive, to serve any community or subpopulation. To do this, a given AI tool needs to be studied and validated across many populations, usually in multiple cities and countries. Also on the project wish list is creating open access for the scientific community at large, while honoring patient privacy, to democratize the effort.

"What became increasingly evident to us as a funder is that the nature of science has fundamentally changed over the last few years, and is substantially more computational by design than it ever was previously," says Mateen.

The clinical perspective

This call to action is a response to health care in 2020. At the conference, Collin Stultz, a professor of electrical engineering and computer science and a cardiologist at Massachusetts General Hospital, spoke on how health care providers typically prescribe treatments and why these treatments are often incorrect.

In simplistic terms, a doctor collects information on their patient, then uses that information to create a treatment plan. "The decisions providers make can improve the quality of patients' lives or make them live longer, but this does not happen in a vacuum," says Stultz.

Instead, he says that a complex web of forces can influence how a patient receives treatment. These forces go from being hyper-specific to universal, ranging from factors unique to an individual patient, to bias from a provider, such as knowledge gleaned from flawed clinical trials, to broad structural problems, like uneven access to care.

Datasets and algorithms

A central question of the conference revolved around how race is represented in datasets, since it's a variable that can be fluid, self-reported, and defined in non-specific terms.

"The inequities we're trying to address are large, striking, and persistent," says Sharrelle Barber, an assistant professor of epidemiology and biostatistics at Drexel University. "We have to think about what that variable really is. Really, it's a marker of structural racism," says Barber. "It's not biological, it's not genetic. We've been saying that over and over again."

Some aspects of health are purely determined by biology, such as hereditary conditions like cystic fibrosis, but the majority of conditions are not straightforward. According to Massachusetts General Hospital oncologist T. Salewa Oseni, when it comes to patient health and outcomes, research tends to assume biological factors have outsized influence, but socioeconomic factors should be considered just as seriously.

Even as machine learning researchers detect preexisting biases in the health care system, they must also address weaknesses in algorithms themselves, as highlighted by a series of speakers at the conference. They must grapple with important questions that arise in all stages of development, from the initial framing of what the technology is trying to solve to overseeing deployment in the real world.

Irene Chen, a PhD student at MIT studying machine learning, examines all steps of the development pipeline through the lens of ethics. As a first-year doctoral student, Chen was alarmed to find an out-of-the-box algorithm, which happened to project patient mortality, churning out significantly different predictions based on race. This kind of algorithm can have real impacts, too; it guides how hospitals allocate resources to patients.

Chen set about understanding why this algorithm produced such uneven results. In later work, she defined three specific sources of bias that could be disentangled from any model. The first is bias in the statistical sense: perhaps the model is simply not a good fit for the research question. The second is variance, which is controlled by sample size. The last source is noise, which has nothing to do with tweaking the model or increasing the sample size; instead, it indicates that something happened during the data collection process, a step well before model development. Many systemic inequities, such as limited health insurance or a historic mistrust of medicine in certain groups, get rolled up into noise.
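
This three-way split mirrors the classical decomposition of expected prediction error into squared bias, variance, and irreducible noise, and each term points to its own remedy: a better-specified model, more data, or better data collection. A minimal simulation sketch of the decomposition follows; the data-generating function, model choice, and numbers are illustrative, not drawn from Chen's work:

```python
import numpy as np

rng = np.random.default_rng(0)
noise_std = 0.3                      # irreducible noise in the labels

def true_f(x):
    return x ** 2                    # the real (unknown) relationship

def sample_dataset(n):
    x = rng.uniform(-1, 1, n)
    y = true_f(x) + rng.normal(0, noise_std, n)
    return x, y

# Fit a deliberately misspecified (linear) model on many resampled datasets
# and examine its predictions at a single test point.
n, trials, x_test = 50, 2000, 0.8
preds = []
for _ in range(trials):
    x, y = sample_dataset(n)
    slope, intercept = np.polyfit(x, y, deg=1)
    preds.append(slope * x_test + intercept)
preds = np.array(preds)

bias_sq = (preds.mean() - true_f(x_test)) ** 2  # fix: choose a better model
variance = preds.var()                          # fix: collect more samples
noise = noise_std ** 2                          # fix: improve data collection itself
print(f"bias^2={bias_sq:.3f}  variance={variance:.3f}  noise={noise:.3f}")
```

Raising `n` shrinks only the variance term, swapping in a quadratic fit removes most of the bias term, and nothing done to the model touches the noise term, which mirrors the point that noise-driven disparities must be addressed at data collection.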

"Once you identify which component it is, you can propose a fix," says Chen.

Marzyeh Ghassemi, an assistant professor at the University of Toronto and an incoming professor at MIT, has studied the trade-off between anonymizing highly personal health data and ensuring that all patients are fairly represented. In cases like differential privacy, a technique that guarantees the same level of privacy for every data point, individuals who are too unique in their cohort start to lose predictive influence in the model. In health data, where trials often underrepresent certain populations, "minorities are the ones that look unique," says Ghassemi.
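
Mechanically, differential privacy is usually achieved by adding calibrated random noise to any statistic released from the data, so that no single individual's record can shift the output by much. Below is a minimal sketch of the standard Laplace mechanism for a private mean; the function name, bounds, and data are illustrative, and this is not Ghassemi's implementation:

```python
import numpy as np

def private_mean(values, lower, upper, epsilon, rng=None):
    """Epsilon-differentially-private mean via the Laplace mechanism."""
    rng = rng or np.random.default_rng()
    clipped = np.clip(values, lower, upper)       # bound each person's influence
    sensitivity = (upper - lower) / len(clipped)  # max effect of one record on the mean
    noise = rng.laplace(0.0, sensitivity / epsilon)
    return clipped.mean() + noise

# Smaller epsilon = stronger privacy = more noise. Small, distinctive
# subgroups are hit hardest, since their statistics drown in the noise
# first -- the fairness trade-off described above.
ages = np.array([34, 41, 29, 52, 47, 38])
print(private_mean(ages, lower=18, upper=90, epsilon=1.0))
```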

"We need to create more data, and it needs to be diverse data," she says. "These robust, private, fair, high-quality algorithms we're trying to train require large-scale datasets for research use."

Beyond Jameel Clinic, other organizations are recognizing the power of harnessing diverse data to create more equitable health care. Anthony Philippakis, chief data officer at the Broad Institute of MIT and Harvard, presented on the All of Us research program, an unprecedented project from the National Institutes of Health that aims to bridge the gap for historically under-recognized populations by collecting observational and longitudinal health data on over 1 million Americans. The database is meant to uncover how diseases present across different sub-populations.

One of the largest questions of the conference, and of AI in general, revolves around policy. Kadija Ferryman, a cultural anthropologist and bioethicist at New York University, points out that AI regulation is in its infancy, which can be a good thing. "There's a lot of opportunities for policy to be created with these ideas around fairness and justice, as opposed to having policies that have been developed, and then working to try to undo some of the policy regulations," says Ferryman.

Even before policy comes into play, there are certain best practices for developers to keep in mind. Najat Khan, chief data science officer at Janssen R&D, encourages researchers to be extremely systematic and thorough up front when choosing datasets and algorithms; a detailed feasibility assessment of data sources, types, missingness, diversity, and other considerations is key. Even large, common datasets contain inherent bias.

Even more fundamental is opening the door to a diverse group of future researchers.

"We have to ensure that we are developing and investing back in data science talent that are diverse in both their backgrounds and experiences, and ensuring they have opportunities to work on really important problems for patients that they care about," says Khan. "If we do this right, you'll see ... and we are already starting to see ... a fundamental shift in the talent that we have: a more bilingual, diverse talent pool."

The AI for Health Care Equity Conference was co-organized by MIT's Jameel Clinic; Department of Electrical Engineering and Computer Science; Institute for Data, Systems, and Society; Institute for Medical Engineering and Science; and the MIT Schwarzman College of Computing.

Go here to read the rest:
The potential of artificial intelligence to bring equity in health care - MIT News

Insights on the Artificial Intelligence in Marketing Global Market to 2028 – by Offering, Application, End-use – GlobeNewswire

Dublin, June 02, 2021 (GLOBE NEWSWIRE) -- The "Artificial Intelligence in Marketing Market Forecast to 2028 - COVID-19 Impact and Global Analysis By Offering, Application, End-Use Industry, and Geography" report has been added to ResearchAndMarkets.com's offering.

The global artificial intelligence in marketing market was valued at US$ 12,044.46 million in 2020 and is projected to reach US$ 107,535.57 million by 2028; it is expected to grow at a CAGR of 31.4% from 2020 to 2028.
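
As a sanity check, the two valuations and the stated growth rate are mutually consistent under the standard compound-annual-growth formula (the small gap comes from rounding the CAGR to one decimal place):

\[
V_{2028} = V_{2020}\,(1+r)^{8} = 12{,}044.46 \times (1.314)^{8} \approx \text{US\$ } 107{,}000 \text{ million},
\]

which is in line with the reported US$ 107,535.57 million; the exact figure corresponds to a CAGR of roughly 31.5%.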

The rising adoption of customer-centric marketing strategies and the increasing use of social media platforms for advertising are among the factors boosting growth in the artificial intelligence in marketing market. However, a scarcity of personnel well-versed in AI hinders that growth. Further, the surge in adoption of cloud-based applications and services creates notable opportunities for market players.

The use of artificial intelligence in marketing helps marketers use customer data to draw important insights about buying behavior, preferences, and more. It is used in applications such as dynamic pricing, social media advertising, and sales and marketing automation. Artificial intelligence relies on techniques such as machine learning to identify these patterns, which helps companies plan their next move accordingly. In recent years, there has been an unprecedented increase in social media engagement. According to DIGITAL 2021, roughly half a billion new users joined the world's social media networks in the year leading up to the start of 2021. In January 2021, there were 4.20 billion social media users worldwide, an increase of 490 million over the previous year, representing year-on-year growth of more than 13%. During 2020, more than 1.3 million new users joined social media on average every day, i.e., roughly 15 new users every second.
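
The report's user figures are internally consistent, as a quick check shows:

\[
\frac{490}{4{,}200-490} \approx 13.2\% \text{ year-on-year growth},
\qquad
\frac{490 \text{ million}}{365 \text{ days}} \approx 1.34 \text{ million per day},
\qquad
\frac{1.34\times10^{6}}{86{,}400 \text{ s}} \approx 15.5 \text{ users per second}.
\]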

Many companies have realized these platforms' tremendous potential and are using them for ecommerce, customer support, marketing, and public relations, among other functions. Artificial intelligence has become an integral part of social media networks today. Social networks such as Facebook, LinkedIn, Instagram, and Snapchat allow marketers to run paid advertising to platform users based on demographic and behavioral targeting. For instance, according to DIGITAL 2020, in January 2020 the potential number of people that marketers could reach using advertisements was 1.95 billion on Facebook, 928.5 million on Instagram, 663.3 million on LinkedIn, 381.5 million on Snapchat, 339.6 million on Twitter, and 169.0 million on Pinterest. Moreover, in January 2019, a total of US$ 89.91 billion was spent on social media ads. In the same month, the total global digital ad spend was US$ 333.3 billion, accounting for 50.1% of total global ad expenditure. Of the total digital ad spend, Google, Facebook, Alibaba, and Amazon accounted for 31.1%, 20.2%, 8.8%, and 4.2%, respectively. Thus, the increasing use of social media for advertising is bolstering the AI in marketing market's growth.

Based on offering, the artificial intelligence in marketing market is segmented into solutions and services. In 2020, the solutions segment held the larger market share, and it is further projected to account for a larger share during 2021-2028. However, the services segment is expected to register a higher CAGR in the market during the forecast period.

The COVID-19 outbreak has affected every business globally since December 2019. The continuous growth in the number of infected patients has led governments to restrict the movement of people and goods. On the other hand, the pandemic is anticipated to accelerate private 5G and LTE adoption. Consumer data consumption is expected to grow as social distancing continues. Moreover, as enterprises pivot to digital models and operate virtually, the rate of data consumption will continue to boom, creating demand for a connectivity-centric ecosystem.

The Industrial Bank of Korea (IBK), European Association for Artificial Intelligence (EurAI), European Lab for Learning & Intelligent Systems (ELLIS), Organization for Economic Co-operation and Development, and Association for the Advancement of Artificial Intelligence (AAAI) are among the prime secondary sources referred to while preparing this report.

Key Topics Covered:

1. Introduction

2. Key Takeaways

3. Research Methodology
3.1 Coverage
3.2 Secondary Research
3.3 Primary Research

4. Artificial Intelligence in Marketing Market Landscape
4.1 Market Overview
4.2 Ecosystem Analysis
4.3 Expert Opinion
4.4 PEST Analysis
4.4.1 Artificial Intelligence in Marketing Market - North America PEST Analysis
4.4.2 Artificial Intelligence in Marketing Market - Europe PEST Analysis
4.4.3 Artificial Intelligence in Marketing Market - APAC PEST Analysis
4.4.4 Artificial Intelligence in Marketing Market - MEA PEST Analysis
4.4.5 Artificial Intelligence in Marketing Market - SAM PEST Analysis

5. Artificial Intelligence in Marketing Market - Key Industry Dynamics
5.1 Market Drivers
5.1.1 Rising Adoption of Customer-Centric Marketing Strategies
5.1.2 Increasing Use of Social Media for Advertising
5.2 Market Restraints
5.2.1 Limited Number of Artificial Intelligence (AI) Experts
5.3 Market Opportunities
5.3.1 Growth in Adoption of Cloud-Based Applications and Services
5.4 Future Trends
5.4.1 Dynamic Personalized Ad Serving
5.5 Impact Analysis of Drivers and Restraints

6. Artificial Intelligence in Marketing Market - Global Market Analysis

7. Artificial Intelligence in Marketing Market - By Offering

8. Artificial Intelligence in Marketing Market - By Application

9. Artificial Intelligence in Marketing Market - By End-Use Industry

10. Artificial Intelligence in Marketing Market - Geographic Analysis

11. Impact of COVID-19 Pandemic
11.1 Overview
11.2 Impact of COVID-19 Pandemic on Global Artificial Intelligence in Marketing Market
11.2.1 North America: Impact Assessment of COVID-19 Pandemic
11.2.2 Europe: Impact Assessment of COVID-19 Pandemic
11.2.3 Asia-Pacific: Impact Assessment of COVID-19 Pandemic
11.2.4 Middle East and Africa: Impact Assessment of COVID-19 Pandemic
11.2.5 South America: Impact Assessment of COVID-19 Pandemic

12. Artificial Intelligence in Marketing Market - Industry Landscape
12.1 Overview
12.2 Growth Strategies Done by the Companies in the Market, (%)
12.3 Organic Developments
12.3.1 Overview
12.4 Inorganic Developments
12.4.1 Overview

13. Company Profiles (each profile includes Key Facts, Business Description, Products and Services, Financial Overview, SWOT Analysis, and Key Developments)
13.1 Affectiva
13.2 Appier Inc.
13.3 Bidalgo
13.4 Novantas (Amplero), Inc.
13.5 CognitiveScale
13.6 SAS Institute Inc.
13.7 SAP SE
13.8 Salesforce.com, inc.
13.9 Oracle Corporation
13.10 IBM Corporation
13.11 Amazon Web Services
13.12 Adobe
13.13 Accenture
13.14 Microsoft Corporation
13.15 Xilinx, Inc.

14. Artificial Intelligence in Marketing Market - Company Profiles

For more information about this report visit https://www.researchandmarkets.com/r/xrvozg

More:
Insights on the Artificial Intelligence in Marketing Global Market to 2028 - by Offering, Application, End-use - GlobeNewswire

Artificial Intelligence and the Labor Shortage Crisis in the US – IoT For All

As US businesses begin to emerge from Covid, many are now facing a labor shortage crisis. After nearly 18 months of lockdowns, and with vaccination rates increasing, Americans are heading out in droves to their favorite restaurants, bars, and retail establishments. While this is a positive sign, it's presenting a big problem for businesses across the country as they struggle to keep up with the surge in demand.

According to a May 6, 2021 Department of Labor report, 16.2 million Americans are claiming unemployment benefits. Not all the news is negative: April's ADP payroll report states that 742,000 jobs were created, and iCIMS's April report indicates that job openings are up 22%, hiring is up 18%, and job applications have decreased by 23%.

Some economists attribute the labor shortage to the federal government's expanded unemployment benefits of $300 per week. As we hear about positive trends in the job market, frustrated business owners are left wondering if the federal government has gone too far with unemployment assistance programs. Are capable Americans content to sit at home collecting unemployment rather than finding work?

While many restaurant and retail establishments employ high school and college students, most are staffed by adults outside of those demographics. A United States Census Bureau study indicates that these low-skilled workers are younger, less likely to have a college degree, and more likely to live in poverty. According to a report by Data USA, the average salary for restaurant workers is $22,426.

While the $300 in additional benefits was instituted at the start of the pandemic, is it still necessary as the economy comes roaring back to life?

At $45,188 or $40,976 in annualized benefits for Kentucky and Kansas, respectively, what would motivate anyone to find work before benefits expire, given current pay in these low-skilled jobs?

On March 4 of this year, Tech Talks published an article on how AI can help SMBs and workers make the $15 minimum wage transition. The current administration's push to raise the minimum wage fell flat on March 5. When presented with the dichotomy of not working or working, most will go with the former when the pay is significantly higher.

There are two ways out of this conundrum: reduce unemployment benefits or raise the minimum wage. It's not an easy answer, as there are many complexities involved, such as virus concerns, access to childcare, and social unrest. This comes at a time when America is getting back on its feet. Many businesses will not be able to serve their clientele as we head into the busy spring and summer months.

It's not just restaurants and retail. We see staffing issues in the manufacturing and supply chain arenas. If not addressed, this labor issue can lead to higher prices for consumers, product shortages, or, worse, force businesses that were lucky enough to survive Covid to shut down.

Talk to any small or mid-size business owner, and they'll say their biggest expense is labor. Oftentimes, it represents 20-30 percent of their gross earnings. According to JPMorgan Chase, outside of big brands like Walmart, McDonald's, and Amazon, these fearless entrepreneurs represent nearly 99 percent of America's 28.7 million firms.

Artificial intelligence, the ability of a computer to think and act like a human, has become more prominent in recent years. Businesses accelerated their rate of technological adoption to survive during the pandemic, and AI-driven platforms are proving to be adequate replacements for repetitive tasks that can easily be automated.

AI will not replace the need for humans in these lines of work. It can, however, significantly reduce the need for labor. Consider a business that would need five workers in each of these situations. With properly placed AI platforms, the need for these types of employees can be reduced by as much as 60-70 percent.

A full-time employee paid a minimum wage salary will earn $600 a week, or $2,400 a month. Multiply this by three employees, and those labor costs total $7,200 a month plus benefits. Many of the AI tools that can help drive top- and bottom-line growth cost a fraction of that labor expense.
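
The arithmetic implicitly assumes the proposed $15 hourly wage at 40 hours a week, and the three employees match the earlier example of a five-worker staff cut by 60 percent:

\[
\$15 \times 40 = \$600 \text{ per week},
\qquad
\$600 \times 4 = \$2{,}400 \text{ per month},
\qquad
3 \times \$2{,}400 = \$7{,}200 \text{ per month}.
\]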

Labor Shortage + Higher Wages = Inflationary Pressures

There is no end in sight to this labor dilemma. Businesses will need to turn to AI-driven automation to remain competitive and to keep both labor costs and prices in check.

Read the original post:
Artificial Intelligence and the Labor Shortage Crisis in the US - IoT For All

Artificial intelligence system can predict the impact of research – Chemistry World

An artificial intelligence system trained on almost 40 years of the scientific literature correctly identified 19 out of 20 research papers that have had the greatest scientific impact on biotechnology, and selected 50 recent papers it predicts will be among the top 5% of biotechnology papers in the future.1

Scientists say the system could be used to find hidden gems of research overlooked by other methods, and even to guide funding decisions so that money is more likely to target promising research.

But it has sparked outrage among some members of the scientific community, who claim it will entrench existing biases.

"Our goal is to build tools that help us discover the most interesting, exciting and impactful research, especially research that might be overlooked with existing publication metrics," says James Weis, a computer scientist at the Massachusetts Institute of Technology and the lead author of a new study about the system.

The study describes a machine-learning system called Delphi (Dynamic Early-warning by Learning to Predict High Impact) that was trained with metrics drawn from more than 1.6 million papers published in 42 biotechnology-related journals between 1982 and 2019.

The system assessed 29 different features of the papers in the journals, which resulted in more than 7.8 million individual machine-learning nodes and 201 million relationships.

The features included regular metrics, such as the h-index of an author's research productivity and the number of citations a research paper generated in the five years after its publication. But they also included things like how an author's h-index had changed over time, the number and rankings of a paper's co-authors, and several metrics about the journals themselves.
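
The h-index that Delphi consumes as a feature is itself easy to compute: an author has index h if h of their papers each have at least h citations. A short sketch follows; the function name and sample data are illustrative, not part of the study's code:

```python
def h_index(citations):
    """Largest h such that the author has h papers with at least h citations each."""
    h = 0
    for rank, cites in enumerate(sorted(citations, reverse=True), start=1):
        if cites >= rank:
            h = rank   # the rank-th most-cited paper still has >= rank citations
        else:
            break
    return h

print(h_index([10, 8, 5, 4, 3]))  # -> 4
```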

The researchers then used the system to correctly identify 19 of the 20 seminal biotechnology papers from 1980 to 2014 in a blinded study, and to select another 50 papers published in 2018 that they predict will be among the top 5% of impactful biotechnology research papers in the years to come.

Weis says the important paper that the Delphi system missed involved the foundational development of chromosome conformation capture methods for analysing the spatial organisation of chromosomes within a cell, in part because a large number of the resulting citations appeared in non-biotechnology journals and so were not in their database.

"We don't expect to be able to identify all foundational technologies early," Weis says. "Our hope is primarily to find technologies that have been overlooked by current metrics."

As with all machine learning systems, due care needs to be taken to reduce systemic biases and to ensure that malicious actors cannot manipulate it, he says. But by considering a broad range of features and using only those that hold real signal about future impact, "we think that Delphi holds the potential to reduce bias by obviating reliance on simpler metrics," he says. Weis adds that this will also make Delphi harder to game.

Weis says the Delphi prototype can be easily expanded into other scientific fields, initially by including additional disciplines and academic journals, and potentially other sources of high-quality research, such as the online preprint archive arXiv.

The intent is not to create a replacement for existing methods of judging the importance of research, but to improve them, he says. "We view Delphi as an additional tool to be integrated into the researcher's toolkit, not as a replacement for human-level expertise and intuition."

The system has already attracted some criticism. Andreas Bender, a chemist at the University of Cambridge, wrote on Twitter that Delphi will only serve to perpetuate existing academic biases, while Daniel Koch, a molecular biophysicist at King's College London, tweeted: "Unfortunately, once again impactful is defined mostly by citation-based metrics, so what's optimized is scientific self-reference."

Lutz Bornmann, a sociologist of science at the Max Planck Society headquarters in Munich who has studied how research impact can be measured,2 notes that many of the publication features assessed by the Delphi system rely heavily on quantifying the research citations that result from them. "However, the proposed method sounds interesting and led to first promising empirical results," he says. "Further extensive empirical tests are necessary to confirm these first results."

See more here:
Artificial intelligence system can predict the impact of research - Chemistry World

Artificial intelligence system could help counter the spread of disinformation – MIT News

Disinformation campaigns are not new; think of wartime propaganda used to sway public opinion against an enemy. What is new, however, is the use of the internet and social media to spread these campaigns. The spread of disinformation via social media has the power to change elections, strengthen conspiracy theories, and sow discord.

Steven Smith, a staff member from MIT Lincoln Laboratory's Artificial Intelligence Software Architectures and Algorithms Group, is part of a team that set out to better understand these campaigns by launching the Reconnaissance of Influence Operations (RIO) program. Their goal was to create a system that would automatically detect disinformation narratives as well as the individuals who are spreading those narratives within social media networks. Earlier this year, the team published a paper on their work in the Proceedings of the National Academy of Sciences, and they received an R&D 100 award last fall.

The project originated in 2014 when Smith and colleagues were studying how malicious groups could exploit social media. They noticed increased and unusual activity in social media data from accounts that had the appearance of pushing pro-Russian narratives.

"We were kind of scratching our heads," Smith says of the data. So the team applied for internal funding through the laboratorys Technology Office and launched the program in order to study whether similar techniques would be used in the 2017 French elections.

In the 30 days leading up to the election, the RIO team collected real-time social media data to search for and analyze the spread of disinformation. In total, they compiled 28 million Twitter posts from 1 million accounts. Then, using the RIO system, they were able to detect disinformation accounts with 96 percent precision.
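
Precision here measures how many of the accounts RIO flagged were genuinely disinformation accounts:

\[
\text{precision} = \frac{\text{true positives}}{\text{true positives} + \text{false positives}} = 0.96,
\]

that is, 96 of every 100 flagged accounts were correctly identified. The figure by itself says nothing about how many disinformation accounts went undetected, which is measured by recall.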

What makes the RIO system unique is that it combines multiple analytics techniques in order to create a comprehensive view of where and how the disinformation narratives are spreading.

"If you are trying to answer the question of who is influential on a social network, traditionally, people look at activity counts," says Edward Kao, who is another member of the research team. On Twitter, for example, analysts would consider the number of tweets and retweets. "What we found is that in many cases this is not sufficient. It doesnt actually tell you the impact of the accounts on the social network."

As part of Kao's PhD work in the laboratory's Lincoln Scholars program, a tuition fellowship program, he developed a statistical approach, now used in RIO, to help determine not only whether a social media account is spreading disinformation but also how much the account causes the network as a whole to change and amplify the message.
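
Kao's published statistical model is not reproduced here, but the gap he describes between raw activity counts and actual network impact can be illustrated with a generic network-aware measure such as PageRank. The toy retweet graph and account names below are invented for illustration; this shows the general idea, not the RIO method:

```python
import networkx as nx

# Toy retweet graph: an edge u -> v means account u retweeted account v.
edges = [
    ("a", "hub"), ("b", "hub"), ("c", "hub"),  # "hub" is widely amplified
    ("hub", "source"),                          # "hub" amplifies "source" once
    ("d", "noisy"), ("e", "noisy"),             # "noisy" has only marginal amplifiers
]
G = nx.DiGraph(edges)

activity = dict(G.in_degree())   # the traditional count: times retweeted
influence = nx.pagerank(G)       # weighs WHO amplifies you, not just how often

for node in sorted(G.nodes):
    print(f"{node:>6}: retweets={activity[node]}  pagerank={influence[node]:.3f}")
```

Here "source" is retweeted only once, but by a highly amplified account, so PageRank ranks it above "noisy", which collects more retweets from marginal accounts; that is exactly the kind of mismatch between activity counts and network impact that Kao describes.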

Erika Mackin, another research team member, also applied a new machine learning approach that helps RIO to classify these accounts by looking into data related to behaviors such as whether the account interacts with foreign media and what languages it uses. This approach allows RIO to detect hostile accounts that are active in diverse campaigns, ranging from the 2017 French presidential elections to the spread of Covid-19 disinformation.

Another unique aspect of RIO is that it can detect and quantify the impact of accounts operated by both bots and humans, whereas most automated systems in use today detect bots only. RIO also has the ability to help those using the system to forecast how different countermeasures might halt the spread of a particular disinformation campaign.

The team envisions RIO being used by both government and industry as well as beyond social media and in the realm of traditional media such as newspapers and television. Currently, they are working with West Point student Joseph Schlessinger, who is also a graduate student at MIT and a military fellow at Lincoln Laboratory, to understand how narratives spread across European media outlets. A new follow-on program is also underway to dive into the cognitive aspects of influence operations and how individual attitudes and behaviors are affected by disinformation.

"Defending against disinformation is not only a matter of national security, but also about protecting democracy," says Kao.

See the rest here:
Artificial intelligence system could help counter the spread of disinformation - MIT News