Archive for the ‘Artificial Intelligence’ Category

How Artificial Intelligence Is Cutting Wait Time at Red Lights – Motor Trend

Who hasn't been stuck seething at an interminable red light with zero cross traffic? When this happened one time too many to Uriel Katz, he co-founded Israel-based, Palo Alto, California-headquartered tech startup NoTraffic in 2017. The company claims its cloud- and artificial-intelligence-based traffic control system can halve rush-hour travel times in dense urban areas, reduce annual CO2 emissions by a half-million tons in places like Phoenix/Maricopa County, and slash transportation budgets by 70 percent. That sounded mighty free-lunchy, so I got NoTraffic's VP of strategic partnerships, Tom Cooper, on the phone.

Here's how it works: Sensors perceive, identify, and analyze all traffic approaching each intersection, sharing data with the cloud. There, light timing and traffic flow are adjusted continuously, prioritizing commuting patterns, emergency and evacuation traffic, a temporary parade of bicycles, whatever. Judicious allocation of "green time" means no green or walk-signal time gets wasted.
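As a rough illustration of the idea, the sketch below splits a fixed signal cycle among approaches in proportion to detected demand, with a minimum green floor so no approach is starved. The allocation rule, cycle length, and demand numbers are hypothetical; NoTraffic's actual optimization is proprietary and far more involved.

```python
# Demand-proportional "green time" allocation. Purely illustrative; NoTraffic's
# real optimization is proprietary and accounts for far more than raw counts.

def allocate_green_time(demand_by_approach, cycle_s=90, min_green_s=7):
    """Split a signal cycle among approaches in proportion to detected demand.

    demand_by_approach: dict of approach name -> detected vehicles/pedestrians
    cycle_s: total cycle length in seconds (hypothetical)
    min_green_s: floor so no approach or walk signal is starved entirely
    """
    total = sum(demand_by_approach.values())
    greens = {}
    for approach, demand in demand_by_approach.items():
        if total == 0:
            share = cycle_s / len(demand_by_approach)  # no demand anywhere: split evenly
        else:
            share = cycle_s * demand / total
        greens[approach] = max(min_green_s, round(share))
    return greens

# Example: a heavy northbound commute; the empty side street gets only the minimum green
print(allocate_green_time({"north": 42, "south": 18, "east": 0, "west": 3}))
```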

I assumed such features had long since evolved from the tape-drive traffic control system Michael Caine's team sabotaged in Turin to pull off The Italian Job in 1969. Turns out that while most such systems' electronics have evolved, their central intelligence and situational adaptability have not.

Intersections that employ traffic-sensing pavement loops, video cameras, or devices that enable emergency vehicle prioritization still typically rely on hourly traffic-flow predictions for timing. When legacy system suppliers like Siemens offer similar technology with centralized control, it typically requires costly installation of fiber-optic or other wired-network connections, as the latency inherent in cellular communications can't meet stringent standards set by the Advanced Transportation Controller (ATC) specification, the National Electrical Manufacturers Association (NEMA), Caltrans, and others for safety and conflict resolution.

By contrast, NoTraffic localizes all the safety-critical decision-making at the intersection, with a camera/radar sensor observing each approach and identifying vehicles, pedestrians, and bikers. These sensors are wired to a box inside the existing control cabinet that can also accept input signals from pressure loops or other existing infrastructure. The controller only requires AC power. It connects to the cloud via 4G/5G/LTE, but this connection merely allows for sharing of the data that constantly tailors the signal timing of nearby intersections. This is not nanosecond, fiber-optic-speed critical info. NoTraffic promises to instantly leapfrog legacy intersections to state-of-the-art intelligence, safety sensing, and connectivity.

Installation cost per intersection roughly equals what's budgeted every five years for maintaining and repairing today's inductive-loop and camera installations, but the NoTraffic gear allegedly lasts longer and is upgradable over the air. This accounts for that 70 percent cost savings.

NoTraffic's congestion-reduction claims don't require vehicle-to-infrastructure communications or Waze/Google/Apple Maps integration, but adding such features via over-the-air upgrades promises to further improve future traffic flow.

Hardening the system against Italian Job-like traffic system hacks is essential, so each control box is electrically isolated and firewalled. All input signals from the local sensors are fully encrypted. Ditto all cloud communications.

NoTraffic gear is up and running in California, Arizona, and on the East Coast, and the company plans to be in 41 markets by the end of 2021. Maricopa County has the greatest number of NoTraffic intersections, and projections indicate equipping all 4,000 signals in the area would save 7.8 centuries of wasted commuting time per year, valued at $1.2 billion in economic impact. Reducing that much idling time would save 531,929 tons of CO2 emissions, akin to taking 115,647 combustion-engine vehicles off the road. The company targets jurisdictions covering 80 percent of the nation's 320,000 traffic signals, noting that converting the entire U.S. traffic system could reduce CO2 by as much as removing 20 million combustion vehicles each year.

I fret that despite its obvious advantages, greedy municipalities might push to leverage NoTraffic cameras for red light enforcement, but Cooper noted the company's clients are traffic operations departments, which are not tasked with revenue generation. NoTraffic is neither conceived nor enabled to be an enforcement tool. Let's hope the system proves equally hackproof to government "revenuers" and gold thieves alike.

Read the original:
How Artificial Intelligence Is Cutting Wait Time at Red Lights - Motor Trend

Is artificial intelligence the future of network security? – SecurityBrief Asia

Artificial intelligence must be the future for network security, according to Fortinet.

The threat landscape is constantly evolving and increasing in complexity. Continued digital innovation, technological developments, and the introduction of 5G, coupled with the challenges of accelerated remote working practices and a growing cybersecurity skills gap, have collectively exacerbated the challenges that CISOs face in protecting their companies' digital assets.

As CISOs assess their cybersecurity posture, it's essential that they consider how to leverage new and emerging technologies to best protect their infrastructure, the company says.

There have been significant developments in the artificial intelligence (AI) space that make it an increasingly strategic investment.

However, Fortinet says it can be challenging for CISOs to cut through the hype and understand which AI-based solution is best suited to their organisation.

"The continued investment in digital innovation and development is one of the key factors in maintaining an advantage over competitors," says Corne Mare, chief information security officer, Fortinet.

"AI-driven solutions have been commonplace for some time now but determining which solution is best for an organisation can still be a hurdle for many CISOs."

Mare says it is not enough to simply incorporate AI-driven solutions into a security strategy.

"CISOs must also be able to assess the company behind the solution and ensure it has the appropriate knowledge, skills, and resources to operationalise it.

"Adequate access to actionable threat intelligence is equally critical. Its easy for technology companies to promote their AI solutions and claim they are AI-driven," Mare says.

"CISOs should only engage companies that can strongly back up these claims and demonstrate proven experience to provide the best defence and strategy for their organisation.

AI-driven solutions on their own may not be effective enough to secure an organisational environment. However, enhancing AI solutions with machine learning, augmented intelligence, and analytics capabilities, among others, lets CISOs create a much stronger cybersecurity ecosystem for their organisation.

"As technological advancements see AI-driven solutions increase in their capabilities and complexities, so too do the capabilities of cybercriminals," says Mare.

"To reinforce a robust cybersecurity ecosystem, CISOs must develop strategic, proactive cybersecurity approaches that leverage AI-driven solutions to act on threat intelligence.

"Integrating other smart, digital solutions will help to deliver timely, accurate information that organisations can use to help prepare and protect their assets.

In addition to leveraging solutions like augmented intelligence, analytics, and machine learning combined with AI, CISOs should consider resourcing their IT and security teams with the right people to strengthen their security strategy.

However, there are also opportunities for CISOs to leverage their AI-driven security solutions to close the cybersecurity skills gap and mitigate resourcing challenges.

"Developing a robust cybersecurity posture for an organisation often requires investing in a wide variety of technologies and tools to defend against threats," says Mare.

"While IT is a very skilled workforce, employees can be stretched thin in organisations trying to manage a large volume of digital solutions in addition to their daily responsibilities.

"However, CISOs can improve efficiencies and strengthen their security operations by leveraging AI solutions and tools, particularly those with built-in automation and integration, to alleviate the pressure on IT teams without reducing the effectiveness of the security strategy.

See original here:
Is artificial intelligence the future of network security? - SecurityBrief Asia

Thales and Atos create a sovereign Big Data and Artificial Intelligence platform – Intelligent CIO ME

Atos and Thales have announced the creation of Athea, a joint venture that will develop a sovereign Big Data and Artificial Intelligence platform for public and private sector players in the defence, intelligence and internal state security communities. Athea will draw on the experience gained by both companies from the demonstration phase of the ARTEMIS programme, the Big Data platform of the French Ministry of Armed Forces. The contract to optimise and prepare the full-scale roll-out of the ARTEMIS platform was also awarded jointly to the two leaders by the French Defence Procurement Agency on April 30, 2021. The new joint venture will initially serve the French market before addressing European requirements at a later date.

With the exponential rise in the number of sources of information and increased pressure to respond more quickly to potential issues, state agencies need to manage ever-greater volumes of heterogeneous data and accelerate the development of new AI applications where security and sovereignty are key. Athea will create a solution to securely handle sensitive data on a nationwide scale and support the implementation of that solution within government programmes. The new entity will also provide expert appraisal, consulting, training and other services.

The joint venture will pool the companies' investments, expertise and experience to respond quickly and efficiently to demand for innovation. Athea will work with an ecosystem of large companies, SMEs, start-ups and research institutes specialising in Big Data and Artificial Intelligence. In conjunction with the recently created Defence Digital Agency, the joint entity will also provide secure solutions and open and modular technological building blocks, which encourage collaboration and stimulate the industrial and sovereign ecosystem, in order to support the development of trusted applications.

"This joint venture between Thales and Atos illustrates the commitment of both our companies to supporting the Digital Transformation of our customers by providing a secure and innovative solution based on French technology to process huge volumes of heterogeneous data. Together, we will capitalise on our respective areas of expertise to provide best-in-class Big Data and Artificial Intelligence solutions," said Marc Darmon, Executive Vice President, Secure Communications and Information Systems, Thales.

"Sensitive data capabilities have become a sovereignty issue for State agencies. By combining the expertise of two major players in defence and digital technologies with the flexibility of a dedicated entity, Athea will generate huge potential for innovation and stimulate the industrial and defence ecosystem, including innovative start-ups, to meet the needs of government agencies and other stakeholders in the sector. This new joint venture between Atos and Thales is an opportunity to combine a comprehensive understanding of the defence and security issues faced by European States with access to the latest innovations in Big Data and Artificial Intelligence," said Pierre Barnabé, Senior Executive Vice President, Big Data and Cybersecurity, Atos.


Read the original:
Thales and Atos create a sovereign Big Data and Artificial Intelligence platform - Intelligent CIO ME

Thales, Atos take on big data and artificial intelligence in new joint venture – DefenseNews.com

STUTTGART, Germany - Two major French technology companies are joining forces in an effort to become the European nation's premier institution for artificial intelligence and big-data efforts.

Thales and Atos announced Thursday the creation of a joint venture called Athea, along with plans to develop a flagship, sovereign, big-data and AI platform that could serve customers in the public and private sector.

This new partnership comes as nations across Europe, and beyond, are targeting AI and big data as key enabling technologies for future military capabilities.

"With the exponential rise in the number of sources of information, and increased pressure to respond more quickly to potential issues, state agencies need to manage ever-greater volumes of heterogeneous data and accelerate the development of new AI applications where security and sovereignty are key," the companies said in a news release.

The two teams began discussing the potential of a joint venture several months ago, per a Thales spokesperson.

"Together, we will capitalise on our respective areas of expertise to provide best-in-class big data and artificial intelligence solutions," Marc Darmon, executive vice president for secure communications and information systems at Thales, said in a statement.

Athea will draw on each company's work on Project Artemis, meant to provide the French military with a big-data processing capability, to build a system that securely handles sensitive data on a nationwide scale and that will also support the solution's implementation within government programs, per the joint news release.

Both Atos and Thales have worked on the demonstration phase for Artemis, awarded in 2017, and both were chosen in April to prepare the full-scale rollout of the program by the French military procurement agency DGA.

"Athea will generate huge potential for innovation, and stimulate the industrial and defence ecosystem, including innovative start-ups, to meet the needs of government agencies and other stakeholders in the sector," said Pierre Barnabé, senior executive vice president for big data and cybersecurity at Atos.

Athea will initially focus on the French market before addressing European requirements at a later date, the companies said. This indicates that the joint venture will not affect ongoing multinational projects, such as Thales' work on the Franco-German-Spanish Future Combat Air System or NATO's deployable combat cloud program.

For the air system, Thales is an industry partner for two of the program's seven technology pillars: the air combat cloud, for which the industry lead is Airbus, and the advanced sensors pillar, led by Indra. The company was also selected by the NATO Communications and Information Agency to develop and build the alliance's first theater-level, deployable defense cloud capability, dubbed Firefly, within the next two years.

A Thales spokesperson said Athea might very well work with NATO in the future as the alliance pursues new emerging and disruptive technologies, including AI and big data.

NATO has identified those two capabilities as the first tech areas to target under its recently established emerging and disruptive technology strategy. It plans to release a strategy dedicated solely to artificial intelligence this summer, aligned with the NATO Summit scheduled for June 14 in Brussels.

See the article here:
Thales, Atos take on big data and artificial intelligence in new joint venture - DefenseNews.com

Artificial Intelligence: Advancing Applications in the CPI – ChemEngOnline

A convergence of digital technologies and data science means that industrial AI is gaining ground and producing results for CPI users

As data accessibility and analysis capabilities have rapidly advanced in recent years, new digital platforms driven by artificial intelligence (AI) and machine learning (ML) are increasingly finding practical applications in industry.

"Data are so readily available now. Several years ago, we didn't have the manipulation capability, the broad platform or cloud capacity to really work with large volumes of data. We've got that now, so that has been huge in making AI more practical," says Paige Morse, industry marketing director for chemicals at Aspen Technology, Inc. (Bedford, Mass.; http://www.aspentech.com). While AI and ML have been part of the digitalization discussion for many years, these technologies have not seen a great deal of practical application in the chemical process industries (CPI) until relatively recently, says Don Mack, global alliance manager at Siemens Industry, Inc. (Alpharetta, Ga.; http://www.industry.usa.siemens.com). "In order for AI to work correctly, it needs data. Control systems and historians in chemical plants have a lot of data available, but in many cases, those data have just been sitting dormant, not really being put to good use. However, new digitalization tools enable us to address some use cases for AI that until recently just weren't possible."

This convergence of technologies, from smart sensors to high-performance computing and cloud storage, along with advances in data science, deep learning and access to free and open-source software, has enabled the field of industrial AI to move beyond pure research to practical applications with business benefits, says Samvith Rao, chemical and petroleum industry manager at MathWorks (Natick, Mass.; http://www.mathworks.com). Such business benefits are wide-ranging, spanning realms from maintenance to materials science to emerging applications like supply-chain logistics and augmented reality (AR). MathWorks recently collaborated with a Shell petroleum refinery to use AI to automatically incorporate tagged equipment information into operators' AR headsets. "All equipment in the refinery is tagged with a unique code. Shell wished to extract this data from the images acquired from cameras in the field. First, image recognition and computer-vision algorithms were applied, followed by deep-learning models for object detection to perform optical character recognition. Ultimately, equipment meta-data was projected onto AR headsets of operators in the field," explains Rao.
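A stripped-down sketch of the tag-reading step might look like the following: off-the-shelf OCR pulls equipment-tag-shaped strings out of a field photo and looks up their metadata. The file name, tag pattern, and metadata table here are hypothetical, and the production pipeline described above used trained deep-learning object-detection models rather than this bare OCR pass.

```python
# Sketch of the tag-reading step: OCR a field photo, pull out equipment-tag-shaped
# strings, and look up their metadata. File name, tag format, and the metadata table
# are hypothetical placeholders.
import re
import cv2                    # pip install opencv-python
import pytesseract            # pip install pytesseract (needs the Tesseract binary)

EQUIPMENT_DB = {              # hypothetical tag -> metadata table
    "P-1201A": "Feed pump, crude unit",
    "E-3405": "Shell-and-tube exchanger, preheat train",
}

img = cv2.imread("field_photo.jpg")          # hypothetical image from a field camera
if img is None:
    raise SystemExit("field_photo.jpg not found")

gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)  # OCR generally works better on grayscale
raw_text = pytesseract.image_to_string(gray)

# Anything shaped like an equipment tag, e.g. "P-1201A"
for tag in re.findall(r"[A-Z]{1,2}-\d{3,4}[A-Z]?", raw_text):
    print(tag, "->", EQUIPMENT_DB.get(tag, "unknown tag"))  # would feed the AR overlay
```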

Another major emerging area for industrial AI is in supply-chain management. "The application of AI in supply chains lets us look at different scenarios and consider alternatives. Feedback from data about what's actually occurred, including any surprising events, can be put into the model to develop better scenario options that appropriately reflect reality," says Morse.

With the wide variety of end-use applications and ever-expanding platform capabilities, determining the most streamlined way to adopt an AI-based platform into an existing process can seem daunting, but Colin Parris, senior vice president and chief technology officer at GE Digital (San Ramon, Calif.; http://www.ge.com/digital), classifies industrial AI into three discrete pillars that build upon each other to deliver value: early warning of problems, continuous prediction of problems and dynamic optimization. Data are, of course, paramount in realizing all three pillars. "For early warning, I have sensors to give the state of the plant, showing the anomalies when that state changes. Continuous prediction looks at condition-based data to avoid unplanned shutdowns. Here, I want to know the condition of the ball bearings, the corrosion in the pipes, understand the creep in the machines in order to then determine the best plan and not default to time-based maintenance, so I need a lot of data. And if I want to do optimization, I need even more data," says Parris. All of the data can culminate in a digital twin, which Parris defines as a living, learning model that is continuously updated to give an exact view of an asset (Figure 1). He emphasizes that model complexity is not a given. "I may be able to use a surrogate model, which is a slimmed-down version that doesn't need to know all the process nuances. I may only need to know about certain critical parts. The model will constantly use data and update itself to live."

FIGURE 1. A robust digital twin may look at an entire plant, or might be a slimmed-down model that considers only certain critical parts. The model should use AI to continuously update itself

GE Digital worked with Wacker Chemie AG (Munich, Germany; http://www.wacker.com) to apply a holistic AI hierarchy for asset-performance management (APM) at a polysilicon production plant in Tennessee. There are roughly 1,500 pressure vessels at the site, and maintenance on them takes six weeks, resulting in significant financial burden due to lost production time. "Regulatory compliance meant that these vessels were supposed to be maintained every two years. But, because we were able to actually capture the digital twin and show the current state of the asset, we helped the plant achieve API 580/581 certification, which says if a plant can show a certain level of condition-based capability, they can extend the maintenance interval anywhere from 5 to 10 years based upon the condition," explains Parris. With the early-warning and continuous-prediction pieces in place, the plant was experiencing improved availability and less downtime, and was able to begin looking at dynamic optimization. For Wacker, this included investigating specific product characteristics and intelligently adjusting the processes for higher-margin products. "That's the way it tends to work: you go in a stepwise fashion to ultimately get to optimization, but it's really hard to get to the optimization piece unless you first really understand the asset and have a digital twin that you know is learning as you make changes," adds Parris.

Furthermore, when implementing an advanced AI solution in a new or existing process, users must consider how the platform will be used and who will actually be using it. In the past, black-box AI solutions required users with some expertise in data science or advanced statistics, which often resulted in organizational data siloes, says Allison Buenemann, industry principal for chemicals at Seeq Corp. (Seattle, Wash.; http://www.seeq.com). Now, the industry has more self-service offerings in the advanced analytics and ML space, meaning that users in many different roles can access the most relevant data and insights for their own unique job needs. "For instance, front-line process experts can hit the ground running, solving problems using complex algorithms from an unintimidating low- or no-code application experience. Executives and management teams can expect an empowered workforce solving high-value business problems with rapid time to insight," adds Buenemann. This democratization of data analytics and ML across organizations means that all stakeholders can work together to drive business value. "Users must be able to document the thought process behind an analysis and they also must be able to structure analysis results for easy consumption," she explains.

The massive growth in sensor volume and associated data availability have certainly helped to promote the applicability of AI in industrial environments, but computing power and network connectivity are also critical pieces of the puzzle. Yokogawa Electric Corp. (Tokyo, Japan; http://www.yokogawa.com) recently announced a proof-of-concept project to utilize fifth-generation (5G) mobile communications for AI-enabled process controllers. The project will focus on using 5G to remotely control the level in a network of water tanks. One of the major benefits of 5G connectivity in autonomous, realtime plant control, according to Hirotsugu Gotou, manager, Yokogawa products control center, is its low-latency function, which means that the network can process a large volume of data with minimal delay. Yokogawas cloud-based AI controller system employs reinforcement-learning technology to determine the optimal operation parameters for a particular control loop.

Understanding reinforcement-learning schemes, which build upon modern predictive control, is crucial for autonomous process control. "Reinforcement learning is a type of machine learning in which a computer learns to perform a task through repeated trial-and-error interactions with a dynamic environment," explains MathWorks' Samvith Rao. Such a platform develops control policy in real time by interacting with the process, enabling the computer to make a series of decisions that maximize a reward metric for the task without human intervention and without being explicitly programmed to achieve the task. Robust mechanisms for safe operation of a fully trained model, and indeed, for safe operation of a plant, are high priorities for further investigation, he emphasizes.
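A minimal, self-contained illustration of that trial-and-error loop is tabular Q-learning applied to a toy single-tank level-control problem, sketched below. The tank dynamics, reward and tuning parameters are invented for illustration and have no connection to Yokogawa's controller.

```python
# Minimal sketch of reinforcement learning for level control on a single tank.
# Toy tank model and reward; hyperparameters are arbitrary illustrative values.
import random

LEVELS = range(0, 11)          # discretized tank level, 0..10
ACTIONS = [-1, 0, +1]          # close inlet a notch, hold, open inlet a notch
TARGET = 5
q = {(s, a): 0.0 for s in LEVELS for a in ACTIONS}

def step(level, action):
    """Toy dynamics: inflow follows the valve action, outflow is a constant draw."""
    inflow = max(0, min(2, 1 + action))
    outflow = 1
    new_level = max(0, min(10, level + inflow - outflow + random.choice([-1, 0, 1])))
    reward = -abs(new_level - TARGET)        # penalize deviation from the setpoint
    return new_level, reward

alpha, gamma, eps = 0.1, 0.9, 0.2
level = 3
for _ in range(20000):                       # repeated trial-and-error interactions
    if random.random() < eps:
        action = random.choice(ACTIONS)      # explore
    else:
        action = max(ACTIONS, key=lambda a: q[(level, a)])  # exploit current knowledge
    new_level, reward = step(level, action)
    best_next = max(q[(new_level, a)] for a in ACTIONS)
    q[(level, action)] += alpha * (reward + gamma * best_next - q[(level, action)])
    level = new_level

# Learned policy: which valve adjustment the agent prefers at each tank level
print({s: max(ACTIONS, key=lambda a: q[(s, a)]) for s in LEVELS})
```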

In Yokogawa's reinforcement-learning proof-of-concept, the AI controls tank level and continuously receives sensor data on flowrate and level. "Based on these data, the AI will learn about the operation and will repeat the process to derive the optimal operation parameters," explains Gotou. Yokogawa previously completed a demonstration project using its proprietary AI to control water level in a three-tank system (Figure 2), which showed that after 30 iterations of learning (taking less than 4 h), the AI agent was able to learn from its past decisions to determine the optimal control methods. Now, the company will work with mobile network provider NTT Docomo to construct a demonstration facility for cloud-based remote control of water tank level and evaluate the communication performance of the 5G network for realtime, autonomous process control. 5G networks are not yet widely adopted in industrial settings, but other projects are also exploring these technologies for IIoT applications. In April, GE Research announced an initiative to test Verizon's 5G platform in a variety of industrial applications, including realtime control of wind farms. And last year, Accenture and AT&T began a 5G proof-of-concept project to develop 5G use cases for IIoT applications at a petroleum refinery in Louisiana operated by Phillips 66.

FIGURE 2. This demonstration unit includes a three-tank network in which an autonomous, reinforcement-learning-based scheme monitors and controls water level

Another important factor is the collaborative environment that has been fostered through open-source AI platforms, explains Gino Hernandez, head of global digital business for ABB Energy Industries (Zurich, Switzerland; http://www.abb.com). "As things become more open and more distributed, I think it's going to enable users to apply the technologies in a more meaningful way. The more people talk about the different models and their successes using open-source type AI models, and being able to have platforms where they can import and run those models is going to be key," he notes. In the past, vendors kept their platforms closed, which limited users to developing models only for a specific digital architecture. Now, says Hernandez, more AI platforms enable users to import models, including their own proprietary algorithms, from various sources to develop a more robust analytics program. "Some users have rich domain expertise and want to build their own platforms. They are looking for environments where they not only have the ability to potentially use vendor-developed algorithms, but also use their own algorithms and have a sandbox in which they can import their own models and begin to integrate them," he explains.

As with any digital technology, cybersecurity and protecting proprietary intellectual property (IP) are paramount, but Hernandez also brings up the idea of sharable IP as a major area of opportunity for industrial AI. "We see a lot of open sharing with users looking at different models related to machinery health in the open-source space. There are definite advantages for companies being open to sharing machinery-health data in multi-tenant cloud environments, because it helps us as an industry to better capture, understand and very quickly identify when there are systemic problems within pumps, sensors, PLCs or other elements," continues Hernandez. He also believes that the industry is becoming more comfortable with the ability to securely lock certain components of proprietary data within a platform, but still be able to share other selections of more generic data within a cloud environment. Facilitating and expediting this collaborative conversation will be key in accelerating the adoption and evolution of predictive machinery-health monitoring, which is among the more mature use cases for industrial AI, notes Hernandez.

One of the most prominent uses for industrial AI continues to be predictive maintenance. "Everybody's looking at how to get more throughput, and the easiest way to do that is to reduce your downtime with predictive maintenance," explains Clayton French, product owner, Digital Enterprise Labs, at Siemens Industry. Siemens has worked with Sinopec Group's (Beijing, China; http://www.sinopecgroup.com) Qingdao Refinery using AI to investigate critical rotating-equipment components and predict potential causes of downtime. "We took six months of data and did a feasibility study, which found that eighteen hours before compressor failure, they would have been notified that the asset was having a problem, potentially saving around $300,000," says French. In another project, French notes that Siemens conducted a feasibility study in which AI was able to detect an equipment failure almost a month in advance. Such models integrate correlation analysis, pattern recognition, health monitoring and risk assessment, among others.

Furthermore, when an anomaly is detected and a countermeasure is initiated in the plant to fix the problem, the AI can record the instance in its database. Then, the next time it senses that a similar failure is about to occur, the AI will recommend a similar countermeasure, which can reduce maintenance time in the long term. "This shows that the AI is learning and taking in all of these inputs. It continues to get better after its initial implementation," adds French. He emphasizes, however, that users should practice prudence in applying AI: "Not everything turns out to be worthwhile. In some cases, the AI can only predict something a few minutes before it happens, so you can't do anything actionable. Our studies point out what is actionable so that users can target the most effective things to monitor."

TrendMiner N.V. (Hasselt, Belgium; http://www.trendminer.com) recently introduced its custom-built Anomaly Detection Model using ML optimized for learning normal operating conditions and detecting deviations on new incoming data, which ultimately helps to avoid sub-optimal process operation or downtime by allowing users to react at the advent of an anomaly, ahead of productivity losses or equipment malfunctions, explains TrendMiner director of products Nick Van Damme. The ML model interfaces with TrendMiner's self-service time-series analytics platform by collecting sensor data readings over a user-defined historical timeframe of the process or equipment being analyzed. Process and asset experts further prepare the data by leveraging built-in search and contextualization capabilities to filter out irrelevant data and confirm that the view is an accurate representation of normal, desired operating conditions. This prepared view is then used to train the Anomaly Detection Model to learn the desired process conditions by considering the unique relationships between the sensors. This will allow detection of anomalies on other historical data and, more importantly, on new incoming data for a process. "The trained model will return whether a new datapoint is an outlier or not based on a given threshold and return an anomaly score. The higher the anomaly score, the more likely that the datapoint is an outlier," adds Van Damme. In a batch process use-case, the model was trained to recognize a good batch profile and use that as a benchmark to alert users of deviations. A dashboard (Figure 3) provides visualizations of the operating zones learned by the model, with the latest process data points overlaid (shown in orange in Figure 3). Such a visualization enables users to quickly evaluate current process conditions versus normal operating behavior.
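The train-on-normal, score-new-data pattern can be illustrated with a generic outlier detector, as in the sketch below, which uses an isolation forest as a stand-in. TrendMiner's actual model and scoring are not public, and the sensor readings here are synthetic.

```python
# Minimal sketch of "train on normal operation, score new data" anomaly detection.
# An Isolation Forest stands in for the vendor's model; data are synthetic.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Training window: two sensor channels (e.g. flow, temperature) during normal operation
normal = rng.normal(loc=[100.0, 350.0], scale=[2.0, 5.0], size=(5000, 2))

model = IsolationForest(contamination=0.01, random_state=0).fit(normal)

# New incoming data: two ordinary points and one clear process deviation
new_points = np.array([[101.0, 348.0],
                       [ 99.5, 352.0],
                       [ 80.0, 400.0]])

scores = -model.score_samples(new_points)                     # higher = more anomalous
threshold = np.quantile(-model.score_samples(normal), 0.99)   # score cutoff from training data

for point, score in zip(new_points, scores):
    flag = "ANOMALY" if score > threshold else "normal"
    print(point, round(float(score), 3), flag)
```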

FIGURE 3. A visualization engine driven by ML develops a dashboard where current incoming data can be quickly benchmarked against established operating conditions

Another maintenance example from AspenTech involves fouling in ethylene furnaces. "Typically, an operator will do periodic cleanouts of coke buildup on the furnaces, but what would be better is to get a better indication of when you actually need to do a cleanout, versus just scheduling it. So what companies are doing is taking the relevant furnace operating data and being able to predict fouling to prevent unplanned downtime. Users can be sure they are cleaning out the furnace before a real operational issue occurs," notes Morse.

On the optimization side, she highlights a case where AspenTech helped a polyethylene producer to streamline transitions between product grades to maximize production value. As catalysts are changed out to accommodate different production slates, there is a transition period where the resulting product is an off-grade material. The customer was able to apply an AI hybrid-model concept to look at how reactors are actually performing, and was able to decrease the amount of transition: both in terms of volume throughput, so they weren't wasting feedstock making a product they didn't want, and by narrowing that transition time, so more reactor time was spent making the preferred product instead of transition-grade material.

Rockwell Automation, Inc. (Milwaukee, Wis.; http://www.rockwell.com) has also done extensive work using AI to optimize catalyst yield and product selectivity in traditional polymerization processes. "We started using pure neural networks to try to learn polymer reaction coefficients. We lean more and more into the actual reaction kinetics and the material balance around the reactors, trying to control the polymer chain length in the reactor. This is how you can get a specific property, such as melt flow or a melt index, on a polymer," says Pal Roach, oil and gas industry consultant at Rockwell Automation. In a particular example involving Chevron Phillips, an AI-driven advanced control model was applied to cut transition times between polymer grades by four hours. This change also led to a 50% reduction in product variability. In another case involving a distillation unit for long-chain alcohols, an AI-driven scheme applied to a nonlinear controller helped to cut energy consumption by around 35% and significantly reduce product-quality variability, as well as associated waste. "There are going to be more and more of these types of AI applications coming as the industry refocuses and transitions into greener energy and more environmental safety and governance consciousness," predicts Roach.

Beyond predictive maintenance, companies are also starting to use AI to translate business targets (such as financial, quality or environmental goals) into process-improvement actions. "Maintenance is key, because when you're shut down, you're not making any product and you're losing money. So, once you address that problem, the next question asks how can we run even better? Then you can start looking at process optimization," says Mack. The main problem for optimization, especially for complex production lines, is the correlation of the process variables with which operators are confronted, combined with the high number of DCS alarms that cannot all be evaluated. This issue is addressed by business-impact-driven anomaly detection. In the past, when operators would adjust setpoints for process variables, it would be loosely tied into business objectives, such as product quality. Now, process data can be aligned with specific business targets using AI. "Anomalies we might detect in the data could be affecting quality or throughput. Then, using AI, users can categorize and rank these anomalies and their impact on business goals. The end result is that the process control system, as it sees these issues occurring, will prioritize them based on the business objectives of the company," he says, adding that such an AI engine could similarly be tied to a company's sustainability goals.
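Conceptually, business-impact-driven ranking can be as simple as weighting each detected anomaly by how much the affected objective currently matters, as in the sketch below. The anomalies, severities, and weights are hypothetical and are not drawn from Siemens' implementation.

```python
# Minimal sketch of ranking detected anomalies by their impact on business targets.
# All entries and weights are hypothetical placeholders.
anomalies = [
    {"tag": "reactor_temp_drift",  "severity": 0.6, "affects": "quality"},
    {"tag": "feed_pump_vibration", "severity": 0.9, "affects": "throughput"},
    {"tag": "vent_valve_sticking", "severity": 0.4, "affects": "emissions"},
]

# How much each business objective currently matters to the operating company
business_weights = {"quality": 1.0, "throughput": 0.7, "emissions": 1.5}

ranked = sorted(anomalies,
                key=lambda a: a["severity"] * business_weights[a["affects"]],
                reverse=True)

for a in ranked:  # highest business impact first, e.g. for operator work queues
    print(a["tag"], round(a["severity"] * business_weights[a["affects"]], 2))
```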

As chemical manufacturers are increasingly looking toward more sustainable feedstock options, bio-based processes, such as fermentation, are reaching larger scales and necessitating more precise and predictive control. "We have used AI on corn-to-bioethanol fermentation optimization and seen yield increases from 2 to 5%, so that means you're getting more alcohol from the same amount of corn. And we've also seen overall production capacity increases as high as 20%," says Michael Tay, advanced analytics product manager at Rockwell Automation. To build the AI model for fermentation, Tay explained that Rockwell began with classic biofermentation modeling, tuning the Michaelis-Menten equations, which predict the enzymatic rate of reaction, as the fundamental architecture. This enabled realtime control of the temperature profile in the fermenter. "You try to keep temperatures high, but then as alcohol concentration increases, you have to cool the reactor more so that the yeast gets more life out of it, because as the alcohol concentration goes up, the yeast performance goes down. The AI is showing dynamic recognition and adaptation of the fermentation profiles, so that's sort of the key to those yield improvements. But you're also getting more alcohol out of every batch," he adds. In addition to temperature-driven optimization, Rockwell has also used AI to improve the enzyme-dosing step in biofermentation processes. "If you have this causally correct model that is based on biological fundamentals, driven by data and AI, then you can optimize your batch yield to ultimately get more out of the yeast, which is your catalyst in the reactor," says Tay. AspenTech is also working on developing accurate AI and simulation models for bio-based processes like fermentation, as well as looking at advanced chemical recycling models. "We're tuning those processes to be more efficient, and we're approaching predictability, but the feedstock variance will be something that we will be working on constantly," adds Paige Morse.
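For reference, the Michaelis-Menten rate law Tay mentions is a one-line formula; the short sketch below evaluates it with illustrative parameter values (not Rockwell's fitted constants) to show the saturating behavior that the temperature and dosing optimizations work around.

```python
# Michaelis-Menten rate law: v = Vmax * [S] / (Km + [S]).
# Parameter values are illustrative, not from any fitted fermentation model.

def michaelis_menten_rate(substrate_conc, v_max=1.0, k_m=0.5):
    """Enzymatic reaction rate as a function of substrate concentration."""
    return v_max * substrate_conc / (k_m + substrate_conc)

# Rate rises with substrate concentration but saturates toward Vmax
for s in [0.1, 0.5, 1.0, 5.0, 20.0]:
    print(s, round(michaelis_menten_rate(s), 3))
```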

While AI and other digital tools have historically targeted operational and financial objectives, many chemical companies are increasingly looking at process metrics that specifically consider environmental initiatives, such as reducing emissions and waste. Seeq worked with a CPI company to deploy an automated model of a sulfur oxides (SOx) detector's behavior during the time periods when its range was exceeded. Typically, accurate emissions reporting becomes more challenging when vent-stack analyzers peg out at their limits, necessitating complex, manual calculations and modeling. "Seeq's model development required event identification to isolate the data set for the time periods before, during and after a detector range exceedance occurred. Regression models were fit to the data before and after the exceedance, and then extrapolated forward and backward to generate a continuous modeled signal, which is used to calculate the maximum concentration of pollutant," says Buenemann. The solution also compiled relevant visualizations into a single auto-updating report displaying data for the most recent exceedance event alongside visualizations tracking year-to-date progression toward permit limits, which enabled the company to make proactive process adjustments based on the SOx emissions trajectory.
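The fit-and-extrapolate idea can be sketched on synthetic data as below: a hypothetical excursion is clipped at the analyzer's ceiling, separate regressions are fit to the rising and falling flanks, and the clipped window is filled in from the extrapolations. The data, polynomial model form, and detector limit are illustrative only, not Seeq's implementation.

```python
# Fit-and-extrapolate sketch for a pegged analyzer, on synthetic data.
# Excursion shape, polynomial fits, and detector limit are illustrative only.
import numpy as np

detector_max = 250.0                                      # ppm, analyzer pegs out here
t = np.arange(60.0)                                       # one reading per minute
true_sox = 200 + 120 * np.exp(-((t - 30) / 8.0) ** 2)     # hypothetical excursion
measured = np.minimum(true_sox, detector_max)             # what the pegged sensor reports

clipped = measured >= detector_max                        # the window to reconstruct
rising = (~clipped) & (t < 30)
falling = (~clipped) & (t >= 30)

# Regress on the flanks just outside the exceedance, then extrapolate into the gap
rise_fit = np.poly1d(np.polyfit(t[rising][-10:], measured[rising][-10:], 2))
fall_fit = np.poly1d(np.polyfit(t[falling][:10], measured[falling][:10], 2))

reconstructed = measured.copy()
gap = t[clipped]
mid = gap.mean()
reconstructed[clipped] = np.where(gap <= mid, rise_fit(gap), fall_fit(gap))

print("analyzer ceiling:", detector_max, "ppm")
print("modeled peak concentration:", round(float(reconstructed.max()), 1), "ppm")
```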

AI plays a major role in reducing waste by helping to ensure product quality, explains MathWorks' Rao, citing the example of Japanese film manufacturer Dexerials, which deployed an AI program for realtime detection of product defects. A deep-learning-based machine-vision system extracts the properties of product defects, such as color, shape and texture, from images, and classifies them according to the type of defect. The system was put in place to improve upon the manual inspection system, which was an error-ridden process with low accuracy. The AI system not only improved the accuracy, but also greatly reduced product and feedstock waste and frequent production stoppages.

Beyond improving day-to-day industrial operations, AI and ML technologies are also enabling advances in the synthesis of new materials and product formulations. In developing ML-powered digital technologies that encompass the chemical knowledge for synthetic processes and materials formulation, IBM (Armonk, N.Y.; http://www.ibm.com) took inspiration from sources very far removed from chemistry: image processing and language translation. "We learned that some of the technologies that have been developed for image processing were actually applicable in the context of materials formulation, so we took those concepts and brought them into the chemical space, allowing us to reduce the dimensionality of chemical problems," explains Teo Laino, distinguished researcher at IBM Research Europe. IBM is partnering with Evonik Industries AG (Essen, Germany; http://www.evonik.com) to apply such a scheme to aid in optimizing polymer formulations. "Quite often, when companies are working on formulating materials, such as polymers, the amount of data is relatively sparse compared to the dimensionality of the problem. The use of technologies that reduce the size of the problem means that there are fewer degrees of freedom, which are easier to match with available data. This is optimal, because users can make good use of data and can really see sensible benefits," he adds. Typically, optimizing a material to meet specific property requirements could take months, but IBM's platform for this inverse design process can significantly decrease that time, he says.

In designing a cognitive solution for chemical synthesis, IBM trained digital architectures that are normally used for translating between languages to create a digital solution that can optimize synthetic routes for molecules (Figure 4). "By starting with technologies typically used for natural language processing, we recast the problem of predicting the chemical reactivity between molecules as a translation problem between different languages," explains Laino. Notably, the ML scheme has been validated in a large number of test cases, since IBM first made the platform (IBM RXN for Chemistry, rxn.res.ibm.com) freely available online in 2018. "This is one of the most complicated tasks in the materials industry today, and it is where ML can help to greatly speed up the design process. You can reduce the number of tests and trials and go more directly to the domain of the material formulation that is going to satisfy your requirements," says Laino.

FIGURE 4. AI can be used to quickly determine synthetic routes for new molecules

"We built a community of more than 25,000 users that have been using the models almost 4 million times. You can use our digital models for driving and programming commercial automation hardware, and you can run chemical synthesis from home wherever you have a web browser. It's a fantastic way of providing a work-from-home environment, even for experimental chemists," says Laino. IBM calls this technology IBM RoboRXN (Figure 5) and is using its ML synthesis capabilities for in-house research related to designing novel materials for atmospheric carbon-capture applications. IBM's ML platform has also been adopted by Diamond Light Source (Oxfordshire, U.K.; http://www.diamond.ac.uk), the U.K.'s national synchrotron science facility, to operate its fully autonomous chemical laboratory. "They are coupling their own automated lab hardware with IBM's predictive platform to drive their chemical-synthesis experiments," adds Laino.

Some of IBM's other notable projects include its ten-year relationship with the Cleveland Clinic for deployment of AI for advancing life sciences research and drug design chemistry, and a collaboration with Mitsui Chemicals, Inc. (Tokyo; http://www.mitsuichemicals.com) to develop an AI-empowered blockchain platform promoting plastics circularity and materials traceability.

FIGURE 5. Open-source AI platforms enable experiments to be run remotely, bringing a new level of autonomous operations into chemistry laboratories

AI and ML are also proving to be effective technologies for accelerating the product-development cycle. Dow Polyurethanes (Midland, Mich.; http://www.dow.com/polyurethanes) and Microsoft collaborated to create the Predictive Intelligence platform for product formulation and development. The platform harnesses materials-science data captured from decades of formulations and experimental trials and applies AI and ML to rapidly develop optimal product formulations for customers, explains Alan Robinson, North America commercial vice president, Dow Polyurethanes. "Predictive Intelligence allows us to not only discover the chemistry and what a formulation needs to look like, but now we can also look at how we simulate trials. In the past, we'd be running numerous trials that take place over a period as long as 18 months, and now we can do that with a couple clicks of a button," says Robinson.

The demands of end-use polyurethane applications mean that finding the best chemistry for a particular product can be quite complex. "In a typical year we're releasing hundreds of new products, and in a typical formulation, there might be a dozen components that are individually mixed at different levels in different orders. We also have to think of all of the different tooling and equipment that the materials will be subject to, as well as the kinetics that have to be played out. So, the challenge was how to take all the kinetics, rheology and formulation data and create a system that could move us forward," explains Dave Parrillo, vice president R&D, Dow Industrial Intermediates & Infrastructure.

To build such a complex platform, Dow relied on theory-based neural networks that incorporated critical correlations for kinetics and rheology. "In a typical neural network, you feed it lots of data, which it learns from, and behind the scenes, it's tuning its knobs and weighing different influences. We can now influence those knobs with theoretical correlations so that the system not only learns, but gets smarter over time, and also starts to explore spaces where we might not have as much data. It folds theoretical, empirical, semi-empirical, and experimental information into a single tool," says Parrillo. One of the first major applications that Dow is trialing for the platform is polyurethane mattresses, with multiple applications to follow in 2022.
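The "theory plus data" pattern Parrillo describes can be sketched generically: a known correlation supplies the baseline prediction and a small data-driven term learns only the residual, as below. The correlation, data, and the polynomial standing in for the learned component are synthetic; Dow's models are proprietary.

```python
# Minimal sketch of a hybrid "theory plus data" model: a known correlation gives the
# baseline and a data-driven correction learns the residual. All values are synthetic.
import numpy as np

rng = np.random.default_rng(1)

def theory_baseline(x):
    """Stand-in for a known kinetics/rheology correlation."""
    return 2.0 * x + 0.5

# Synthetic lab measurements: theory plus a nonlinear effect the correlation misses
x = rng.uniform(0, 1, 200)
y = theory_baseline(x) + 0.3 * np.sin(4 * x) + rng.normal(0, 0.02, x.size)

# Fit only the residual (here with a simple polynomial as the learned component)
residual = y - theory_baseline(x)
coeffs = np.polyfit(x, residual, deg=5)

def hybrid_predict(x_new):
    return theory_baseline(x_new) + np.polyval(coeffs, x_new)

x_test = np.array([0.1, 0.5, 0.9])
print("theory only:", np.round(theory_baseline(x_test), 3))
print("hybrid     :", np.round(hybrid_predict(x_test), 3))
```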

Customers might be looking at a number of parameter constraints, from hardness and density to airflow and viscoelastic recovery. "We've actually asked the AI engine to give us a series of formulations and then benchmark those formulations in the laboratory, and the accuracy is extraordinarily high," emphasizes Parrillo. The Predictive Intelligence platform will be available to customers beginning later this year.

FIGURE 6. AI can be used to rapidly and accurately validate pharmaceutical products for defects, which reduces manual inspection requirements

Once a product formulation is developed and manufacturing has begun, inspection and validation are key. Stevanato Group (Padua, Italy; http://www.stevanatogroup.com) recently launched an AI platform focused on visual inspection of biopharmaceutical products, looking at both particle and cosmetic defects (Figure 6). "AI can improve overall inspection performance in terms of detection rate and false rejection rate. AI can help to reduce false rejects and costly interventions to parameterize the machine during production," explains Raffaele Pace, engineering vice president of operations at Stevanato Group. Recently, trials of the automatic inspection platform have produced promising results, including the ability to reduce falsely rejected products tenfold, with up to 99.9% accuracy, using deep learning (DL) techniques. "Unlike traditional rule-based systems, DL models can generalize their predictions and be more flexible regarding variations," adds Pace. He also mentions that such advanced inspection performance can help to reduce the number of gray items, which are flagged on the production line but not rejected outright. Typically, such items require manual re-inspection, which adds time to the process. "This helps the entire process become more lean and have less waste, while maintaining and improving quality," he continues. The company is currently working to enhance detection accuracy for both liquid and lyophilized products, and also developing an initiative to create pre-trained neural networks that could then adapt to specific defects and drugs. "Producing such models will entail training the system with thousands of images," notes Pace.

Mary Page Bailey

View post:
Artificial Intelligence: Advancing Applications in the CPI - ChemEngOnline