Archive for the ‘Machine Learning’ Category

PhD Position – Machine learning to increase geothermal energy efficiency, Karlsruhe Institute – ThinkGeoEnergy

The Karlsruhe Institute of Technology in Germany has an open PhD position for a project that will use machine learning to model scaling formation in cascade geothermal operations.

The Karlsruhe Institute of Technology (KIT) in Germany currently has an open PhD position in the upcoming Machine Learning for Enhancing Geothermal energy production (MALEG) project. Interested applicants may visit the official KIT page for more details on the application. Submissions will be accepted only until September 30, 2022.

The goal of the MALEG project is the design and optimization of cascade production schemes that achieve the highest possible energy output in geothermal facilities by preventing scaling. The enhanced scaling potential of lower return temperatures is a key challenge as cascade use becomes a more common strategy for increasing geothermal efficiency.

The research will focus on developing a machine learning tool to quantify the impact of enhanced cooling on the fluid-mineral equilibrium and to optimize operations economically. The tool will be based on results from widely applied deterministic models and on experimental data collected at geothermal plants in Germany, Austria, and Turkey by our international project partners. Once fully implemented, the MALEG tool will act as a digital twin of the power plant, ready to assess and predict scaling formation processes for geothermal production in different geological settings.

The ideal candidate holds a master's degree in geosciences or geophysics, with a sound interest in aqueous geochemistry and experience in numerical modeling.

Source: Karlsruhe Institute of Technology

Read more:
PhD Position - Machine learning to increase geothermal energy efficiency, Karlsruhe Institute - ThinkGeoEnergy

Prediction of mortality risk of health checkup participants using machine learning-based models: the J-SHC study | Scientific Reports – Nature.com

Participants

This study was conducted as part of the ongoing Study on the Design of a Comprehensive Medical System for Chronic Kidney Disease (CKD) Based on Individual Risk Assessment by Specific Health Examination (J-SHC Study). A specific health checkup is conducted annually for all residents aged 40–74 years covered by the National Health Insurance in Japan. In this study, a baseline survey was conducted in 685,889 people (42.7% male, aged 40–74 years) who participated in specific health checkups from 2008 to 2014 in eight regions (Yamagata, Fukushima, Niigata, Ibaraki, Toyonaka, Fukuoka, Miyazaki, and Okinawa prefectures). The details of this study have been described elsewhere [11]. Of the 685,889 baseline participants, 169,910 were excluded because baseline data on lifestyle information or blood tests were not available. In addition, 399,230 participants with survival follow-up of fewer than 5 years from the baseline survey were excluded. Therefore, 116,749 participants (42.4% men) with a known 5-year survival or mortality status were included in this study.
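The exclusion arithmetic above can be checked in a few lines of Python, using the counts reported in the study:

```python
# Reconstructing the cohort flow described above (counts from the study).
baseline = 685_889            # all baseline participants, 2008-2014
missing_data = 169_910        # excluded: no lifestyle or blood-test data
short_followup = 399_230      # excluded: <5 years of survival follow-up

final_cohort = baseline - missing_data - short_followup
print(final_cohort)  # 116749, matching the reported study population
```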

This study was conducted in accordance with the Declaration of Helsinki guidelines. This study was approved by the Ethics Committee of Yamagata University (Approval No. 2008103). All data were anonymized before analysis; therefore, the ethics committee of Yamagata University waived the need for informed consent from study participants.

For validating a predictive model, the most desirable approach is a prospective study on unseen data. In this study, the dates of the health checkups were available, so we divided the data into training and test datasets based on checkup date. The training dataset consisted of 85,361 participants who underwent checkups in 2008; the test dataset consisted of 31,388 participants who underwent checkups from 2009 to 2014. The datasets were temporally separated, with no overlapping participants. This evaluates the model in a manner similar to a prospective study and has the advantage of demonstrating temporal generalizability. For preprocessing, the extreme 0.01% of outliers were clipped and the data were normalized.
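A minimal sketch of this preprocessing pipeline on synthetic data; reading "0.01% outliers" as clipping at the 0.01th and 99.99th percentiles is an assumption, and all thresholds and statistics come from the training data only:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 5))                  # synthetic feature matrix
year = rng.integers(2008, 2015, size=1000)      # checkup year per participant

# Temporal split as described: 2008 participants train, 2009-2014 test.
X_train, X_test = X[year == 2008], X[year > 2008]

# Clip the extreme 0.01% tails, with thresholds taken from the training data.
lo, hi = np.percentile(X_train, [0.01, 99.99], axis=0)
X_train = np.clip(X_train, lo, hi)
X_test = np.clip(X_test, lo, hi)

# Normalise using training-set statistics only, to avoid leakage.
mu, sd = X_train.mean(axis=0), X_train.std(axis=0)
X_train = (X_train - mu) / sd
X_test = (X_test - mu) / sd
```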

Information on 38 variables was obtained during the baseline survey of the health checkups. When variables were highly correlated (correlation coefficient greater than 0.75), only one of them was included in the analysis. High correlations were found among body weight, abdominal circumference, and body mass index; between hemoglobin A1c (HbA1c) and fasting blood sugar; and between aspartate aminotransferase (AST) and alanine aminotransferase (ALT) levels. We therefore used body weight, HbA1c level, and AST level as explanatory variables. Finally, we used the following 34 variables to build the prediction models: age, sex, height, weight, systolic blood pressure, diastolic blood pressure, urine glucose, urine protein, urine occult blood, uric acid, triglycerides, high-density lipoprotein cholesterol (HDL-C), LDL-C, AST, γ-glutamyl transpeptidase (γ-GTP), estimated glomerular filtration rate (eGFR), HbA1c, smoking, alcohol consumption, medication (for hypertension, diabetes, and dyslipidemia), history of stroke, heart disease, and renal failure, weight gain (more than 10 kg since age 20), exercise (more than 30 min per session, more than 2 days per week), walking (more than 1 h per day), walking speed, eating speed, supper within 2 h of bedtime, skipping breakfast, late-night snacks, and sleep status.
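The correlation-screening rule described above can be sketched as a small helper; the function and the toy data are illustrative, not the study's code:

```python
import numpy as np
import pandas as pd

def drop_correlated(df, threshold=0.75):
    """Keep only one variable from each highly correlated pair (|r| > threshold)."""
    corr = df.corr().abs()
    # Look only at the upper triangle so each pair is considered once.
    upper = corr.where(np.triu(np.ones(corr.shape, dtype=bool), k=1))
    to_drop = [col for col in df.columns if (upper[col] > threshold).any()]
    return [col for col in df.columns if col not in to_drop]

# Toy example: BMI tracks weight almost perfectly, so only weight is kept.
df = pd.DataFrame({
    "weight": [60, 72, 55, 90, 68],
    "bmi":    [21.5, 25.1, 19.8, 30.2, 23.4],
    "age":    [30, 25, 50, 28, 60],
})
print(drop_correlated(df))  # ['weight', 'age']
```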

The values of each item in the training dataset were compared between the alive and deceased groups using the chi-square test, Student's t-test, and Mann–Whitney U test, and significant differences (P < 0.05) were marked with an asterisk (*) (Supplementary Tables S1 and S2).

We used two machine learning-based methods (a gradient-boosted decision tree [XGBoost] and a neural network) and one conventional method (logistic regression) to build the prediction models. All models were built in Python 3.7, using the XGBoost library for the GBDT, TensorFlow for the neural network, and Scikit-learn for logistic regression.

The data obtained in this study contained missing values. XGBoost handles missing values by design and can be trained and make predictions without imputation; the neural network and logistic regression models cannot. We therefore imputed missing values using the k-nearest neighbor method (k = 5), and the test data were imputed with an imputer fitted only on the training data.
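The leakage-free imputation scheme can be sketched with scikit-learn's KNNImputer; the toy arrays are illustrative:

```python
import numpy as np
from sklearn.impute import KNNImputer

# Toy data: one missing value in the training set, one in the test set.
X_train = np.array([
    [1.0, 2.0], [2.0, 4.0], [3.0, np.nan],
    [4.0, 8.0], [5.0, 10.0], [6.0, 12.0],
])
X_test = np.array([[3.5, np.nan]])

# Fit the imputer on the training data only; the test set is then filled by
# an imputer that has never seen the test data, avoiding leakage.
imputer = KNNImputer(n_neighbors=5)
X_train_filled = imputer.fit_transform(X_train)
X_test_filled = imputer.transform(X_test)
```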

The parameters required for each model were tuned on the training data using the RandomizedSearchCV class of the Scikit-learn library, repeating fivefold cross-validation for 5000 iterations.
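A scaled-down sketch of that tuning set-up; scikit-learn's GradientBoostingClassifier stands in for XGBoost here, and the parameter ranges and iteration count are illustrative rather than those of the paper:

```python
from scipy.stats import randint, uniform
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import RandomizedSearchCV

X, y = make_classification(n_samples=120, random_state=0)

# Sample candidate parameter settings and score each one with fivefold
# cross-validation (the study reports 5000 iterations; 5 are used here).
search = RandomizedSearchCV(
    GradientBoostingClassifier(random_state=0),
    param_distributions={
        "n_estimators": randint(20, 100),
        "learning_rate": uniform(0.01, 0.3),
        "max_depth": randint(2, 6),
    },
    n_iter=5,
    cv=5,
    scoring="roc_auc",
    random_state=0,
)
search.fit(X, y)
print(search.best_params_)
```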

The performance of each prediction model was evaluated on the test dataset by drawing a receiver operating characteristic (ROC) curve and computing the area under the curve (AUC). In addition, the accuracy, precision, recall, F1 score (the harmonic mean of precision and recall), and confusion matrix were calculated for each model. To assess the importance of the explanatory variables, we used SHAP (SHapley Additive exPlanations) values, which express the influence of each explanatory variable on the model's output [4,12]. The workflow of this study is shown in Fig. 5.
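The threshold metrics and AUC can be computed as follows on toy predictions (the SHAP step is omitted here, since it requires the shap package and a fitted model):

```python
import numpy as np
from sklearn.metrics import (accuracy_score, confusion_matrix, f1_score,
                             precision_score, recall_score, roc_auc_score)

# Toy labels and predicted probabilities standing in for model output.
y_true = np.array([0, 0, 1, 1, 0, 1, 0, 1])
y_prob = np.array([0.1, 0.4, 0.8, 0.7, 0.2, 0.9, 0.6, 0.3])
y_pred = (y_prob >= 0.5).astype(int)      # threshold probabilities at 0.5

auc = roc_auc_score(y_true, y_prob)       # ranking quality from the ROC curve
acc = accuracy_score(y_true, y_pred)
prec = precision_score(y_true, y_pred)
rec = recall_score(y_true, y_pred)
f1 = f1_score(y_true, y_pred)             # harmonic mean of precision and recall
cm = confusion_matrix(y_true, y_pred)     # rows: true class, cols: predicted
```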

Workflow diagram of development and performance evaluation of predictive models.

Read more here:
Prediction of mortality risk of health checkup participants using machine learning-based models: the J-SHC study | Scientific Reports - Nature.com

Industrial Automation Market to Generate Revenue of $289 Billion by 2028 | Growing Adoption of AI and Machine Learning to Play Key Role -…

Westford, USA, Aug. 24, 2022 (GLOBE NEWSWIRE) -- As the world becomes increasingly automated, businesses are turning to industrial automation solutions to increase efficiency and productivity. By automating routine tasks and processes, businesses can free up workforce time for more important duties. Growth of the industrial automation market is driven mainly by increasing demand for safe, reliable, and efficient manufacturing systems, which help manufacturers increase output and reduce costs. As per SkyQuest's findings, businesses save between 15% and 60% on labor costs, making automation one of the most cost-effective investments a business can make.

In addition to reducing labor costs, industrial automation can reduce environmental impact. For example, when a factory relies on manual processes, production often generates substantial waste. Automating these tasks can decrease waste and shrink the factory's environmental footprint.

Get sample copy of this report:

https://skyquestt.com/sample-request/industrial-automation-market

There are a number of different industrial automation technologies available, so businesses can find the right solution for their specific needs. Some of the most common types of industrial automation include robots, machine learning algorithms, computer-aided manufacturing (CAM), and wireless technology.

As per SkyQuest analysis, some of the leading companies in the industrial automation market are ABB Ltd., Siemens AG, Fanuc Corporation, Mitsubishi Electric Corporation, Kawasaki Heavy Industries Ltd., and Rexnord Corp. These companies offer a range of products and services that include controllers, drives, processing units, sensors, and software. They prefer to partner with larger manufacturers who can leverage their resources to develop and deploy advanced technology solutions across their entire manufacturing operations.

Increasing adoption of advanced manufacturing technologies, such as 3D printing, and the recent shift of production to Asia are driving the growth of this industry. Additionally, surging demand for smart machines that can automatically optimize processes and reduce variability is boosting the growth of the industrial automation market.

SkyQuest has published a report on the global industrial automation market. The report provides a detailed understanding of market trends, consumer analysis, the demand-supply gap, pricing, top players and their market shares, the competitive landscape, value chain analysis, and market dynamics. It will help market participants identify lucrative growth opportunities, target potential consumers, devise growth strategies, and see what competitors are doing and where opportunities lie to capitalize on the weaknesses of others.

Browse summary of the report and Complete Table of Contents (ToC):

https://skyquestt.com/report/industrial-automation-market

Industrial Automation to Account for a Whopping 37% of the Global Workforce Says Analyst at SkyQuest

As industrial automation technologies continue to develop, so does the industry's adoption of them. A recent study found that automation is growing even faster than anticipated; by 2025, industrial automation is predicted to account for a whopping 37% of the workforce.

According to a recent SkyQuest survey of the industrial automation market, chief executives of companies that have invested in industrial automation say the technology has been key to their success. In fact, almost two-thirds of respondents reported that industrial automation has helped them boost production and improve efficiency. Furthermore, nearly 50% of these businesses say the technology has increased their competitiveness and helped them attract new customers.

AI and Blockchain Technology are Trending in Industrial Automation Market

One of the most pressing issues facing industrial automation today is reliability. With a growing number of devices and systems interacting with one another, it's critical that these systems work as intended and without issue. Here are some of the top trends happening in industrial automation today:

Smart Manufacturing is Gaining Ground in the Industrial Automation Market

Manufacturing is not just a physical process; it is also a digital one. The rise of smart manufacturing technologies means factories can now control and monitor their processes in real time, enabling more efficient production and improved safety. Today, the automotive, electronics, and FMCG sectors contribute around 65% of the revenue of the global industrial automation market. The automotive sector is seeing steady growth owing to soaring demand for safety features, enhanced functionality, efficient fuel economy, and rising adoption of intelligent mobility solutions. The growing popularity of hybrid and electric vehicles is adding to this sector's momentum.

The most common types of smart manufacturing technologies are robotics, sensors, and machine learning algorithms. Robotics help factories automate tasks and functions so that they can be performed faster and with greater accuracy. Sensors enable factories to monitor conditions inside and outside the factory, and they can transmit this information to processors for analysis.

As per SkyQuest analysis, machine learning is being used to improve a variety of processes and operations. For example, it can be used to optimize production lines, predict the needs of customers and even determine when products need to be replaced. Additionally, it can be used to improve predictive maintenance and forecasting. Furthermore, it can also be used to develop autonomous systems.

Today, manufacturers across the global industrial automation market are opting for smart manufacturing due to the following key factors:

Speak to Analyst for your custom requirements:

https://skyquestt.com/speak-with-analyst/industrial-automation-market

Top Players in Global Industrial Automation Market

Related Reports in SkyQuest's Library:

Global Metaverse Infrastructure Market

Global Micro Mobile Data Center Market

Global Machine Learning Market

Global Location Based Services Market

Global Virtual Events Market

About Us:

SkyQuest Technology is a leading growth consulting firm providing market intelligence, commercialization, and technology services. It has 450+ happy clients globally.

Address:

1 Apache Way, Westford, Massachusetts 01886

Phone:

USA (+1) 617-230-0741

Email: sales@skyquestt.com


Go here to see the original:
Industrial Automation Market to Generate Revenue of $289 Billion by 2028 | Growing Adoption of AI and Machine Learning to Play Key Role -...

Tackling the reproducibility and driving machine learning with digitisation – Scientific Computing World

Dr Birthe Nielsen discusses the role of the Methods Database in supporting life sciences research by digitising methods data across different life science functions.

Reproducibility of experiment findings and data interoperability are two of the major barriers facing life sciences R&D today. Independently verifying findings by re-creating experiments and generating the same results is fundamental to progressing research to the next stage in its lifecycle - be it advancing a drug to clinical development or a product to market. Yet, in the field of biology alone, one study found that 70 per cent of researchers are unable to reproduce the findings of other scientists and 60 per cent are unable to reproduce their own findings.

This causes delays to the R&D process throughout the life sciences ecosystem. For example, biopharmaceutical companies often use external Contract Research Organisations (CROs) to conduct clinical studies. Without a centralised repository to provide consistent access, analytical methods are often shared with CROs via email or even physical documents, in no standard format and with inconsistent terminology. The result is unnecessary variability and several versions of the same analytical protocol, which makes it very challenging for a CRO to re-establish and revalidate methods without a labour-intensive process that is open to human interpretation, and thus error.

To tackle issues like this, the Pistoia Alliance launched the Methods Hub project. The project aims to overcome the issue of reproducibility by digitising methods data across different life science functions, and ensuring data is FAIR (Findable, Accessible, Interoperable, Reusable) from the point of creation. This will enable seamless and secure sharing within the R&D ecosystem, reduce experiment duplication, standardise formatting to make data machine-readable and increase reproducibility and efficiency. Robust data management is also the building block for machine learning and is the stepping-stone to realising the benefits of AI.

Digitisation of paper-based processes increases the efficiency and quality of methods data management. But it goes beyond manually keying method parameters into a computer or using an Electronic Lab Notebook (ELN): a digital, automated workflow increases efficiency, instrument usage, and productivity. Applying shared data standards ensures consistency and interoperability, in addition to fast and secure transfer of information between stakeholders.

One area that organisations need to address to comply with FAIR principles, and a key area in which the Methods Hub project helps, is how analytical methods are shared. This includes replacing free-text data capture with a common data model and standardised ontologies. For example, in a High-Performance Liquid Chromatography (HPLC) experiment, rather than manually typing out the analytical parameters (pump flow, injection volume, column temperature, etc.), the scientist will simply download a method that automatically populates the execution parameters in any given Chromatography Data System (CDS). This not only saves time during data entry, but the common format eliminates room for human interpretation or error.
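As an illustration of what such a machine-readable method might look like (the field names below are hypothetical, not the actual Methods Hub data model), a structured record can be serialised, transferred, and re-loaded without any manual re-keying:

```python
import json

# Hypothetical structured HPLC method record; field names are illustrative.
hplc_method = {
    "technique": "HPLC-UV",
    "parameters": {
        "pump_flow_ml_min": 1.0,
        "injection_volume_ul": 10.0,
        "column_temperature_c": 40.0,
    },
}

# A machine-readable record survives a round trip intact, so a receiving
# CDS can populate its execution parameters directly from the payload.
payload = json.dumps(hplc_method)
restored = json.loads(payload)
```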

Additionally, creating a centralised repository like the Methods Hub in a vendor-neutral format is a step towards greater cyber-resiliency in the industry. When information is stored locally on a PC or an ELN and is not backed up, a single cyberattack can wipe it out instantly. Creating shared spaces for these notes via the cloud protects data and ensures it can be easily restored.

A proof of concept (PoC) in the Methods Hub project was recently completed, demonstrating the value of methods digitisation. The PoC involved the digital transfer, via the cloud, of analytical HPLC methods, proving that analytical methods can be moved securely and easily between two different companies and CDS vendors. It was successfully tested in labs at Merck and GSK, where HPLC-UV information was transferred effectively between different systems. The PoC delivered a series of critical improvements to methods transfer: it eliminated manual keying of data and reduced risk, steps, and errors, while increasing overall flexibility and interoperability.

The Alliance project team is now working to extend the platform's functionality to connect analytical methods with results data, which would be an industry first. The team will also add support for columns and additional hardware, and for other analytical techniques such as mass spectrometry and nuclear magnetic resonance (NMR) spectroscopy. It also plans to identify new use cases and further develop the cloud platform that enables secure methods transfer.

If industry-wide data standards and approaches to data management are to be agreed on and implemented successfully, organisations must collaborate. The Alliance recognises methods data management is a big challenge for the industry, and the aim is to make Methods Hub an integral part of the system infrastructure in every analytical lab.

Tackling issues such as the digitisation of methods data doesn't just benefit individual companies; it will have a knock-on effect across the whole life sciences industry. Introducing shared standards accelerates R&D, improves quality, and reduces the cost and time burden on scientists and organisations. Ultimately, this ensures that new therapies and breakthroughs reach patients sooner. We are keen to welcome new contributors to the project so we can continue discussing common barriers to successful data management and work together to develop new solutions.

Dr Birthe Nielsen is the Pistoia Alliance Methods Database project manager

Read more here:
Tackling the reproducibility and driving machine learning with digitisation - Scientific Computing World

Cerebral Uses Machine Learning to Identify Patients in Mental Health Crisis, Vows ‘Just the Beginning’ for AI Investments – Behavioral Health Business

Digital mental health company Cerebral is using a machine learning algorithm to help pinpoint patients in crisis. And the recently unveiled initiative is just the beginning of Cerebral's use of machine learning-enabled solutions, according to the company.

"Machine learning (ML) and artificial intelligence (AI) are critical tools in the advancement of mental health care, but these benefits are only possible at scale," a team of Cerebral researchers explained in a company post. "Both technologies require many data points to test and validate hypotheses in order to prove that the systems are working effectively."

Dubbed the Crisis Message Detector 1 (CMD-1), the newly touted tool is designed to identify messages from patients experiencing a mental health crisis, then refer those patients to a crisis response specialist.

Specifically, the tool was trained to spot signs of suicidal ideation, homicidal ideation, non-suicidal self-injury, and domestic violence, according to the company post. If a patient's messages to Cerebral are flagged, a specialist reaches out directly and assesses the patient's risk level.

This mental health professional will then be able to call emergency contacts or local responders, if needed.
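The triage flow described above can be sketched in a few lines; the scoring function here is a stand-in for the trained model, since CMD-1 itself is not public:

```python
# Minimal sketch of the flag-and-route workflow: a classifier scores each
# incoming message, and anything above a threshold goes to a crisis
# specialist instead of the regular care-team queue.
def route_message(message, score_fn, threshold=0.5):
    """Send high-risk messages to a crisis specialist, others to the care team."""
    return "crisis_specialist" if score_fn(message) >= threshold else "care_team"

# Hypothetical usage with dummy scorers standing in for the model.
risky = route_message("I can't go on", lambda m: 0.92)
routine = route_message("Can I reschedule my appointment?", lambda m: 0.03)
```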

Cerebral is pitching this as an alternative to relying on patients to call 911 if there is an emergency. The company claims the tool is accurate in properly identifying individuals in crisis.

"During a week-long pilot, CMD-1 screened over 60,000 EMR messages and flagged more than 500 potential crises," the Cerebral researchers wrote. "The model successfully detected over 99% of all crisis messages and, as a result, crisis specialists were able to respond to patients in less than 9 minutes on average."

Cerebral receives several thousand patient messages each day via its online chat system and mobile app. The company says someone from the patient's care team reviews and addresses those messages, but that human-led process isn't always as fast as it needs to be.

The company plans to expand its machine learning initiatives in the upcoming months and focus on issues such as response times for medication concerns, scheduling issues and general support requests.

"Because of Cerebral's experience serving a quarter-million people (and counting), we are uniquely suited to develop and implement cutting-edge ML/AI tools to supplement the expertise of our clinicians and help improve clinical outcomes," the company's post added.

CMD-1 is being rolled out nationally and will be available 24/7.

In addition to demonstrating the company's interest in ML and AI, the patient-identification tool also reflects its previously discussed commitment to quality control moving forward.

Earlier this year, Cerebral came under fire for its practices in prescribing controlled substances. In June, news surfaced that the Department of Justice (DOJ) had launched an investigation into the company over a potential violation of the Controlled Substances Act.

In turn, the digital health company shook up its leadership team: founder Kyle Robertson stepped down as CEO, and Chief Medical Officer Dr. David Mou took on the role.

"I will say that we have made mistakes," Mou said at the American Telemedicine Association conference in May. "And I'll also admit that we will continue to make mistakes and learn."

Cerebral also announced layoffs effective July 1. In June, Mou told BHB that the layoffs reflected the company's priority of keeping behavioral health front and center while moving into value-based care.

As for the future, the company is looking to treat serious mental illness (SMI) and to develop its value-based care proposition, Mou previously told BHB.

Original post:
Cerebral Uses Machine Learning to Identify Patients in Mental Health Crisis, Vows 'Just the Beginning' for AI Investments - Behavioral Health Business