Media Search:

Artificial intelligence system can predict the impact of research – Chemistry World

An artificial intelligence system trained on almost 40 years of the scientific literature correctly identified 19 of the 20 research papers that have had the greatest scientific impact on biotechnology, and selected 50 recent papers that it predicts will be among the top 5% of biotechnology papers in the future [1].

Scientists say the system could be used to find hidden gems of research overlooked by other methods, and even to guide funding decisions so that money is most likely to target promising research.

But it has sparked outrage among some members of the scientific community, who claim it will entrench existing biases.

"Our goal is to build tools that help us discover the most interesting, exciting and impactful research, especially research that might be overlooked with existing publication metrics," says James Weis, a computer scientist at the Massachusetts Institute of Technology and the lead author of a new study about the system.

The study describes a machine-learning system called Delphi (Dynamic Early-warning by Learning to Predict High Impact) that was trained with metrics drawn from more than 1.6 million papers published in 42 biotechnology-related journals between 1982 and 2019.

The system assessed 29 different features of the papers in the journals, which resulted in more than 7.8 million individual machine-learning nodes and 201 million relationships.

The features included regular metrics, such as the h-index of an author's research productivity and the number of citations a research paper generated in the five years after its publication. But they also included things like how an author's h-index had changed over time, the number and rankings of a paper's co-authors, and several metrics about the journals themselves.
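The study's exact pipeline is not reproduced here, but the general shape of the task it describes, turning per-paper metrics into a feature vector and training a classifier to flag likely top-5% papers, can be sketched. Below is a minimal illustration in Python; the feature names, simulated data, and choice of a gradient-boosting classifier are all assumptions for illustration, not Delphi's actual design.

```python
# Minimal sketch of feature-based impact prediction. The feature names,
# simulated data, and model choice are illustrative assumptions, not the
# study's actual pipeline.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import precision_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_papers = 10_000

# Hypothetical per-paper features of the kind the article lists:
X = np.column_stack([
    rng.poisson(20, n_papers),        # author h-index
    rng.normal(1.0, 0.5, n_papers),   # change in h-index over time
    rng.poisson(15, n_papers),        # citations within five years
    rng.integers(1, 20, n_papers),    # number of co-authors
    rng.normal(5.0, 2.0, n_papers),   # a journal-level metric
])

# Label: whether the paper lands in the top 5% by eventual impact.
# Simulated here; in the study the labels come from observed outcomes.
impact = X @ np.array([0.2, 2.0, 0.5, 0.1, 0.3]) + rng.normal(0, 3, n_papers)
y = (impact >= np.quantile(impact, 0.95)).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
clf = GradientBoostingClassifier().fit(X_tr, y_tr)
print("held-out precision:",
      precision_score(y_te, clf.predict(X_te), zero_division=0))
```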

The researchers then used the system to correctly identify 19 of the 20 seminal biotechnology papers from 1980 to 2014 in a blinded study, and to select another 50 papers published in 2018 that they predict will be among the top 5% of impactful biotechnology research papers in the years to come.

Weis says the important paper that the Delphi system missed involved the foundational development of chromosome conformation capture methods for analysing the spatial organisation of chromosomes within a cell, in part because a large number of the resulting citations were in non-biotechnology journals and so were not in the team's database.

"We don't expect to be able to identify all foundational technologies early," Weis says. "Our hope is primarily to find technologies that have been overlooked by current metrics."

"As with all machine learning systems, due care needs to be taken to reduce systemic biases and to ensure that malicious actors cannot manipulate it," he says. "But by considering a broad range of features and using only those that hold real signal about future impact, we think that Delphi holds the potential to reduce bias by obviating reliance on simpler metrics." Weis adds that this will also make Delphi harder to game.

Weis says the Delphi prototype can easily be expanded into other scientific fields, initially by including additional disciplines and academic journals, and potentially other sources of high-quality research such as the online preprint archive arXiv.

"The intent is not to create a replacement for existing methods for judging the importance of research, but to improve them," he says. "We view Delphi as an additional tool to be integrated into the researcher's toolkit, not as a replacement for human-level expertise and intuition."

The system has already attracted some criticism. Andreas Bender, a chemist at the University of Cambridge, wrote on Twitter that Delphi "will only serve to perpetuate existing academic biases", while Daniel Koch, a molecular biophysicist at King's College London, tweeted: "Unfortunately, once again 'impactful' is defined mostly by citation-based metrics, so what's optimized is scientific self-reference."

Lutz Bornmann, a sociologist of science at the Max Planck Society headquarters in Munich who has studied how research impact can be measured [2], notes that many of the publication features assessed by the Delphi system rely heavily on quantifying the citations that research generates. "However, the proposed method sounds interesting and led to promising first empirical results," he says. "Further extensive empirical tests are necessary to confirm these first results."

See more here:
Artificial intelligence system can predict the impact of research - Chemistry World

Artificial intelligence system could help counter the spread of disinformation – MIT News

Disinformation campaigns are not new; think of wartime propaganda used to sway public opinion against an enemy. What is new, however, is the use of the internet and social media to spread these campaigns. The spread of disinformation via social media has the power to change elections, strengthen conspiracy theories, and sow discord.

Steven Smith, a staff member from MIT Lincoln Laboratory's Artificial Intelligence Software Architectures and Algorithms Group, is part of a team that set out to better understand these campaigns by launching the Reconnaissance of Influence Operations (RIO) program. Their goal was to create a system that would automatically detect disinformation narratives, as well as the individuals spreading those narratives within social media networks. Earlier this year, the team published a paper on the work in the Proceedings of the National Academy of Sciences, and the team received an R&D 100 Award last fall.

The project originated in 2014 when Smith and colleagues were studying how malicious groups could exploit social media. They noticed increased and unusual activity in social media data from accounts that had the appearance of pushing pro-Russian narratives.

"We were kind of scratching our heads," Smith says of the data. So the team applied for internal funding through the laboratorys Technology Office and launched the program in order to study whether similar techniques would be used in the 2017 French elections.

In the 30 days leading up to the election, the RIO team collected real-time social media data to search for and analyze the spread of disinformation. In total, they compiled 28 million Twitter posts from 1 million accounts. Then, using the RIO system, they were able to detect disinformation accounts with 96 percent precision.

What makes the RIO system unique is that it combines multiple analytics techniques in order to create a comprehensive view of where and how the disinformation narratives are spreading.

"If you are trying to answer the question of who is influential on a social network, traditionally, people look at activity counts," says Edward Kao, who is another member of the research team. On Twitter, for example, analysts would consider the number of tweets and retweets. "What we found is that in many cases this is not sufficient. It doesnt actually tell you the impact of the accounts on the social network."

As part of Kao's PhD work in the laboratory's Lincoln Scholars program, a tuition fellowship program, he developed a statistical approach, now used in RIO, to help determine not only whether a social media account is spreading disinformation but also how much the account causes the network as a whole to change and amplify the message.
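The article doesn't detail Kao's estimator, but the distinction he draws, raw activity versus network-wide impact, can be illustrated. The sketch below uses PageRank over a toy retweet graph as a stand-in for RIO's actual statistical measure; the graph and the choice of PageRank are illustrative assumptions, not the published method.

```python
# Toy contrast between raw activity counts and network influence.
# PageRank is a stand-in illustration; RIO's actual statistical estimator
# of network impact is not described in the article.
import networkx as nx

# Directed retweet graph: an edge u -> v means u retweeted v,
# i.e., attention flows toward v.
G = nx.DiGraph([
    ("a", "c"), ("b", "c"), ("d", "c"),  # several accounts amplify c
    ("c", "e"),                          # c itself passes the narrative to e
    ("f", "a"), ("f", "b"), ("f", "d"),  # f posts a lot but is never amplified
])

activity = {n: G.out_degree(n) for n in G}  # how much each account posts
influence = nx.pagerank(G)                  # how much the network amplifies it

for node in sorted(G):
    print(f"{node}: activity={activity[node]}  influence={influence[node]:.3f}")
# f has the highest activity count, yet c and e dominate on influence:
# the gap Kao describes between posting a lot and actually moving a network.
```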

Erika Mackin, another research team member, also applied a new machine learning approach that helps RIO to classify these accounts by looking into data related to behaviors such as whether the account interacts with foreign media and what languages it uses. This approach allows RIO to detect hostile accounts that are active in diverse campaigns, ranging from the 2017 French presidential elections to the spread of Covid-19 disinformation.
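Again as a hedged sketch only: a behavior-based classifier of the kind Mackin describes might look like the following, where the feature encoding (foreign-media interaction rate, language count, posting rate) and the logistic-regression model are assumptions rather than RIO's actual design.

```python
# Hedged sketch of behavior-based account classification. The features
# and the logistic-regression model are assumptions, not RIO's design.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Per-account features:
# [share of interactions with foreign media, distinct languages, posts/day]
X = np.array([
    [0.05, 1, 3.0],
    [0.02, 1, 1.5],
    [0.60, 4, 40.0],
    [0.75, 3, 55.0],
    [0.10, 2, 5.0],
    [0.70, 5, 35.0],
])
y = np.array([0, 0, 1, 1, 0, 1])  # 1 = labeled hostile in training data

model = make_pipeline(StandardScaler(), LogisticRegression())
model.fit(X, y)

new_account = np.array([[0.55, 4, 30.0]])
print("P(hostile) =", model.predict_proba(new_account)[0, 1])
```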

Another unique aspect of RIO is that it can detect and quantify the impact of accounts operated by both bots and humans, whereas most automated systems in use today detect bots only. RIO also has the ability to help those using the system to forecast how different countermeasures might halt the spread of a particular disinformation campaign.

The team envisions RIO being used by both government and industry as well as beyond social media and in the realm of traditional media such as newspapers and television. Currently, they are working with West Point student Joseph Schlessinger, who is also a graduate student at MIT and a military fellow at Lincoln Laboratory, to understand how narratives spread across European media outlets. A new follow-on program is also underway to dive into the cognitive aspects of influence operations and how individual attitudes and behaviors are affected by disinformation.

"Defending against disinformation is not only a matter of national security, but also about protecting democracy," says Kao.

See the rest here:
Artificial intelligence system could help counter the spread of disinformation - MIT News

The United Nations needs to start regulating the ‘Wild West’ of artificial intelligence – The Conversation CA

The European Commission recently published a proposal for a regulation on artificial intelligence (AI). This is the first document of its kind to attempt to tame the multi-tentacled beast that is artificial intelligence.

"The sun is starting to set on the Wild West days of artificial intelligence," writes Jeremy Kahn. He may have a point.

When this regulation comes into effect, it will change the way we conduct AI research and development. Until now, AI has had few rules or regulations: if you could think it, you could build it. That is no longer the case, at least in the European Union.

There is, however, a notable exception in the regulation: it does not apply to international organizations like the United Nations.

Naturally, the European Union does not have jurisdiction over the United Nations, which is governed by international law. The exclusion therefore does not come as a surprise, but does point to a gap in AI regulation. The United Nations therefore needs its own regulation for artificial intelligence, and urgently so.

Artificial intelligence technologies have been used increasingly by the United Nations. Several research and development labs, including the Global Pulse Lab, the Jetson initiative by the UN High Commissioner for Refugees (UNHCR), UNICEF's Innovation Labs and the Centre for Humanitarian Data, have focused their work on developing artificial intelligence solutions that would support the UN's mission, notably in terms of anticipating and responding to humanitarian crises.

United Nations agencies have also used biometric identification to manage humanitarian logistics and refugee claims. The UNHCR developed a biometrics database containing the information of 7.1 million refugees. The World Food Programme has also used biometric identification in aid distribution to refugees, coming under some criticism in 2019 for its use of this technology in Yemen.

In parallel, the United Nations has partnered with private companies that provide analytical services. A notable example is the World Food Programme, which in 2019 signed a contract worth US$45 million with Palantir, an American firm specializing in data collection and artificial intelligence modelling.

In 2014, the United States Bureau of Immigration and Customs Enforcement (ICE) awarded Palantir a US$20-billion contract to track undocumented immigrants in the U.S., especially family members of children who had crossed the border alone. Several human rights watchdogs, including Amnesty International, have raised concerns about Palantir over human rights violations.

Like most AI initiatives developed in recent years, this work has happened largely without regulatory oversight. There have been many attempts to set up ethical modes of operation, such as the Office for the Co-ordination of Humanitarian Affairs' Peer Review Framework, which sets out a method for overseeing the technical development and implementation of AI models.

In the absence of regulation, however, tools such as these, without legal backing, are merely best practices with no means of enforcement.

Under the European Commission's AI regulation proposal, developers of high-risk systems must go through an authorization process before going to market, just like a new drug or car. They are required to put together a detailed package before the AI is available for use, involving a description of the models and data used, along with an explanation of how accuracy, privacy and discriminatory impacts will be addressed.

The AI applications in question include biometric identification, the categorization and evaluation of people's eligibility for public assistance benefits and services, and the dispatch of emergency first-response services. All of these are current uses of AI by the United Nations.

Conversely, the lack of regulation at the United Nations can be considered a challenge for agencies seeking to adopt more effective and novel technologies. As such, many systems seem to have been developed and later abandoned without being integrated into actual decision-making systems.

An example of this is the Jetson tool, which was developed by UNHCR to predict the arrival of internally displaced persons at refugee camps in Somalia. The tool does not appear to have been updated since 2019, and seems unlikely to transition into the humanitarian organization's operations. Unless, that is, it can be properly certified by a new regulatory system.

Trust in AI is difficult to obtain, particularly in United Nations work, which is highly political and affects very vulnerable populations. The onus has largely been on data scientists to develop the credibility of their tools.

A regulatory framework like the one proposed by the European Commission would take the pressure off data scientists in the humanitarian sector to individually justify their activities. Instead, agencies or research labs that wanted to develop an AI solution would work within a regulated system with built-in accountability. This would produce more effective, safer and more just applications and uses of AI technology.

Read this article:
The United Nations needs to start regulating the 'Wild West' of artificial intelligence - The Conversation CA

Artificial Intelligence Technology Solutions Files 10-K and Audited Financials – Yahoo Finance

Artificial Intelligence Technology Solutions, Inc., (OTCPK:AITX), a global leader in AI-driven security and productivity solutions for enterprise clients, filed its annual report on Form 10-K with the Securities and Exchange Commission for its fiscal year 2021 ended February 28, 2021. AITX is a full SEC reporting company that files detailed annual and quarterly reports.

"I am so pleased to share the results of such a pivotal year for the company," said Steve Reinharz, President and CEO of AITX. "The company has experienced significant improvement in all areas, and the financial state of AITX has never been stronger."

Key takeaways from the 10-K filing

Differences in Derivative Liability

The derivative liability is a function of the underlying value of convertible debt and associated interest, which decreased from approximately $9,521,000 at February 29, 2020 to approximately $943,000 at February 28, 2021 due to conversions, settlements and exchanges of debt during the year, and of the change in fair value of derivative liabilities, which fluctuates with the market price of the company's common stock. AITX therefore saw its derivative liability fall from $6,890,688 at February 29, 2020 to $446,466 at February 28, 2021.
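As a quick arithmetic check on the figures quoted above (the percentage reductions below are computed here, not taken from the 10-K):

```python
# Arithmetic check on the figures quoted above; the percentage
# reductions are computed here, not taken from the 10-K.
debt_2020, debt_2021 = 9_521_000, 943_000    # convertible debt + interest
liab_2020, liab_2021 = 6_890_688, 446_466    # derivative liability

print(f"underlying debt down {1 - debt_2021 / debt_2020:.0%}")       # ~90%
print(f"derivative liability down {1 - liab_2021 / liab_2020:.0%}")  # ~94%
```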

Debt Exchange & New Financing at Market Price

In December 2020, the company announced that it had exchanged approximately $7.7 M of current convertible debt and interest, which carried conversion rights at a discount of approximately 50% and bore interest at a default rate of 24%, for $7.7 M in promissory notes along with warrants. The new debt has three-year terms and bears interest at 12%. The exchange extended payment terms, improved interest rates, removed the associated derivative liability, and relieved the stress on the stock's market price caused by high-volume discounted conversions. The company also issued $825,000 in new convertible debt that converts at market price rather than at the heavily discounted conversion prices used previously. Reinharz added, "Cleaning up this debt is a big deal!"
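Why discounted conversions put stress on the market price can be shown with a quick, entirely hypothetical calculation; none of the numbers below come from the filing:

```python
# Hypothetical illustration of why a 50% conversion discount stresses the
# share price: the holder gets twice the shares per dollar converted that
# a market-price buyer would. No numbers here come from the filing.
market_price = 0.04   # hypothetical share price, dollars
discount = 0.50       # conversion discount on the old notes
converted = 100_000   # dollars of debt converted

shares_at_market = converted / market_price
shares_at_discount = converted / (market_price * (1 - discount))

print(f"shares issued converting at market price: {shares_at_market:,.0f}")
print(f"shares issued converting at 50% discount: {shares_at_discount:,.0f}")
# The extra shares create resale overhang, the "stress on market price"
# the company describes; notes converting at market price avoid it.
```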


"FY 2021 was just incredible for AITX," Reinharz added. "We couldnt have achieved this level of success without the tireless efforts of the entire team. From engineering to sales to production, every member contributed to this growth. We are well underway in a significant phase of growth and we expect to increase our recurring monthly revenue run rate by a factor 5 - 20 times when compared to the monthly revenue run rate at the end of next fiscal year ending February 28, 2022," Reinharz commented.

Reinharz also indicated that the company expects to release its first quarterly report, covering March, April and May (the 10-Q), as soon as it is available, as it will show substantial progress over the FY 2021 10-K in terms of revenue, debt and cash. "We are on track for an amazing year. Hold on, it's early," Reinharz concluded.

Follow Steve Reinharz on Twitter @SteveReinharz for future AITX and RAD updates.

AITX, through its subsidiary Robotic Assistance Devices, Inc. (RAD), is redefining the $25 billion (US) security and guarding services industry through its broad lineup of innovative, AI-driven Solutions-as-a-Service offerings. RAD solutions are specifically designed to provide businesses with cost savings of between 35% and 80% compared to the industry's existing and costly manned security guarding and monitoring model. RAD delivers these savings via a suite of stationary and mobile robotic solutions that complement, and at times directly replace, the need for human personnel in environments better suited for machines. All RAD technologies, AI-based analytics and software platforms are developed in-house.

CAUTIONARY DISCLOSURE ABOUT FORWARD-LOOKING STATEMENTS

This release contains "forward-looking statements" within the meaning of Section 27A of the Securities Act of 1933, as amended, and Section 21E of the Securities Exchange Act of 1934, as amended, and such forward-looking statements are made pursuant to the safe harbor provisions of the Private Securities Litigation Reform Act of 1995. Statements in this news release other than statements of historical fact are "forward-looking statements" that are based on current expectations and assumptions. Forward-looking statements involve risks and uncertainties that could cause actual results to differ materially from those expressed or implied by the statements, including, but not limited to, the following: the ability of Artificial Intelligence Technology Solutions to provide for its obligations, to provide working capital needs from operating revenues, to obtain additional financing needed for any future acquisitions, to meet competitive challenges and technological changes, to meet business and financial goals including projections and forecasts, and other risks. Artificial Intelligence Technology Solutions undertakes no duty to update any forward-looking statement(s) and/or to confirm the statement(s) to actual results or changes in Artificial Intelligence Technology Solutions' expectations.

About Artificial Intelligence Technology Solutions (AITX)

AITX is an innovator in the delivery of artificial intelligence-based solutions that empower organizations to gain new insight, solve complex challenges and fuel new business ideas. Through its next-generation robotic product offerings, AITX's RAD and RAD-M companies help organizations streamline operations, increase ROI and strengthen business. AITX technology improves the simplicity and economics of patrolling and guard services, and allows experienced personnel to focus on more strategic tasks. Customers augment the capabilities of existing staff and gain higher levels of situational awareness, all at drastically reduced cost. AITX solutions are well suited for use in multiple industries such as enterprises, government, transportation, critical infrastructure, education and healthcare. To learn more, visit http://www.aitx.ai and http://www.roboticassistancedevices.com, or follow Steve Reinharz on Twitter @SteveReinharz.

View source version on businesswire.com: https://www.businesswire.com/news/home/20210601005345/en/

Contacts

Investor Relations Contact:
The Waypoint Refinery, LLC
845-397-2956
www.thewaypointrefinery.com

Steve Reinharz
949-636-7060

Read more here:
Artificial Intelligence Technology Solutions Files 10-K and Audited Financials - Yahoo Finance

How Artificial Intelligence Is Cutting Wait Time at Red Lights – Motor Trend

Who hasn't been stuck seething at an interminable red light with zero cross traffic? When this happened one time too many to Uriel Katz, he co-founded Israel-based, Palo Alto, California-headquartered tech startup NoTraffic in 2017. The company claims its cloud- and artificial-intelligence-based traffic control system can halve rush-hour travel times in dense urban areas, reduce annual CO2 emissions by more than half a million tons in places like Phoenix/Maricopa County, and slash transportation budgets by 70 percent. That sounded mighty free-lunchy, so I got NoTraffic's VP of strategic partnerships, Tom Cooper, on the phone.

Here's how it works: Sensors perceive, identify, and analyze all traffic approaching each intersection, sharing data with the cloud, where light timing and traffic flow are adjusted continuously, prioritizing commuting patterns, emergency and evacuation traffic, a temporary parade of bicycles, whatever. Judicious allocation of "green time" means no green or walk-signal time gets wasted.
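NoTraffic's actual algorithm isn't public, but the core idea the company describes, splitting a signal cycle in proportion to sensed, priority-weighted demand, can be sketched. All phase names, weights, and timings below are hypothetical.

```python
# Minimal sketch of demand-proportional green-time allocation. Phase
# names, weights, and timings are hypothetical; NoTraffic's actual
# algorithm is not public.
CYCLE_S = 90      # total signal cycle, seconds
MIN_GREEN_S = 7   # safety floor so no approach is starved

def allocate_green(demand, priority, cycle=CYCLE_S, floor=MIN_GREEN_S):
    """Split the cycle in proportion to priority-weighted demand."""
    weights = {p: demand[p] * priority[p] for p in demand}
    spare = cycle - floor * len(demand)
    total = sum(weights.values()) or 1
    return {p: floor + spare * weights[p] / total for p in demand}

# Sensed vehicles waiting per approach, with an emergency-vehicle boost
# applied to the east-west phase.
demand = {"north-south": 12, "east-west": 3, "left-turn": 2}
priority = {"north-south": 1.0, "east-west": 5.0, "left-turn": 1.0}

for phase, secs in allocate_green(demand, priority).items():
    print(f"{phase}: {secs:.0f}s green")
```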

I assumed such features had long since evolved from the tape-drive traffic control system Michael Caine's team sabotaged in Turin to pull off The Italian Job in 1969. Turns out that while most such systems' electronics have evolved, their central intelligence and situational adaptability have not.

Intersections that employ traffic-sensing pavement loops, video cameras, or devices that enable emergency vehicle prioritization still typically rely on hourly traffic-flow predictions for timing. When legacy system suppliers like Siemens offer similar technology with centralized control, it typically requires costly installation of fiber-optic or other wired-network connections, as the latency inherent in cellular communications can't meet stringent standards set by the Advanced Transportation Controller (ATC) spec, the National Electrical Manufacturers Association (NEMA), Caltrans, and others for safety and conflict resolution.

By contrast, NoTraffic localizes all the safety-critical decision-making at the intersection, with a camera/radar sensor observing each approach that can identify vehicles, pedestrians, and bikers. These sensors are wired to a box inside the existing control cabinet that can also accept input signals from pressure loops or other existing infrastructure. The controller only requires AC power. It connects to the cloud via 4G/5G/LTE, but this connection merely allows for sharing of the data that constantly tailors the signal timing of nearby intersections; this is not nanosecond, fiber-optic-speed critical info. NoTraffic promises to instantly leapfrog legacy intersections to state-of-the-art intelligence, safety sensing, and connectivity.

Installation cost per intersection roughly equals what's budgeted every five years for maintaining and repairing today's inductive-loop and camera setups, but the NoTraffic gear allegedly lasts longer and is upgradable over the air. This accounts for that 70 percent cost savings.

NoTraffic's congestion-reduction claims don't require vehicle-to-infrastructure communications or Waze/Google/Apple Maps integration, but adding such features via over-the-air upgrades promises to further improve future traffic flow.

Hardening the system against Italian Job-like traffic system hacks is essential, so each control box is electrically isolated and firewalled. All input signals from the local sensors are fully encrypted. Ditto all cloud communications.

NoTraffic gear is up and running in California, Arizona, and on the East Coast, and the company plans to be in 41 markets by the end of 2021. Maricopa County has the greatest number of NoTraffic intersections, and projections indicate equipping all 4,000 signals in the area would save 7.8 centuries of wasted commuting time per year, valued at $1.2 billion in economic impact. Reducing that much idling time would save 531,929 tons of CO2 emissions, akin to taking 115,647 combustion-engine vehicles off the road. The company targets jurisdictions covering 80 percent of the nation's 320,000 traffic signals, noting that converting the entire U.S. traffic system could reduce CO2 by as much as removing 20 million combustion vehicles each year.
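Those two Maricopa County figures are mutually consistent with the EPA's commonly cited estimate of roughly 4.6 metric tons of CO2 per typical passenger vehicle per year, as a quick check shows:

```python
# Quick consistency check on the Maricopa County figures quoted above.
tons_co2_saved = 531_929
vehicles_equivalent = 115_647

print(f"{tons_co2_saved / vehicles_equivalent:.1f} tons CO2 per vehicle per year")
# Prints ~4.6, matching the EPA's commonly cited estimate for a typical
# passenger vehicle, so the two quoted figures line up.
```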

I fret that despite its obvious advantages, greedy municipalities might push to leverage NoTraffic cameras for red light enforcement, but Cooper noted the company's clients are traffic operations departments, which are not tasked with revenue generation. NoTraffic is neither conceived nor enabled to be an enforcement tool. Let's hope the system proves equally hackproof to government "revenuers" and gold thieves alike.

Read the original:
How Artificial Intelligence Is Cutting Wait Time at Red Lights - Motor Trend