Archive for the ‘Machine Learning’ Category

The Future of ML Development Services: Trends and Predictions – FinSMEs

Enter the world of ML development services, a field in constant change driven by technological advances and data-driven innovation.

In recent years, ML has become a groundbreaking technology that has revolutionized sectors such as healthcare, finance, and transportation. Demand for ML development services has been growing rapidly as companies digitize, and that growth shows no sign of slowing. So what is the future of machine learning in this fast-moving field? In this post, we analyze the latest trends and offer some predictions on how ML development companies may change our world over the next few years. Prepare for a journey through today's technologies and their future possibilities!

Before going into the trends and predictions in detail, it is worth noting why machine learning is gaining popularity in today's digital reality. Its usefulness comes from an unmatched capacity to process vast amounts of data and make inferences or decisions without being explicitly programmed. The advent of big data brought enormous opportunities and challenges, and machine learning (ML) sits high on that list. It has already disrupted sectors such as healthcare and finance, especially where artificial intelligence is applied, and its potential applications extend to almost every other area, which shows the broad, transformative influence of the technology.

Recently, there has been a significant increase in cloud-based machine learning capabilities. Most vendors, enterprises, and individuals will find these platforms a cost-effective means of deploying ML-based applications. Cloud-based solutions for ML development have three main benefits: scalability, availability, and automation. They allow developers to deploy complex ML models without being distracted by infrastructure details. In addition, ML cloud platforms offer many tools, APIs, and pre-built models that speed up development. Industry-wide adoption of ML-oriented products has driven the growth of cloud-based platforms on which machine learning solutions can be built. Because the technology is evolving every day, we can expect these platforms to become more sophisticated and to give developers a wider choice of tools and AI capabilities.

Alongside these leaps in machine learning, there has been a growing conversation around one topic: interpretability. Producing outputs is not enough for AI; developers and users must be able to understand how those results were reached and which factors were involved. This is especially important in areas such as healthcare or finance, where decisions made by AI models can have significant consequences. As a result, there is a growing need for models that are transparent and interpretable. This is a key step in making artificial intelligence trustworthy and accountable for what it delivers.
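To make the idea of interpretability concrete, the short sketch below uses scikit-learn's permutation importance to show which input features a trained model actually relies on. It is a generic illustration on a public dataset, not a prescription for any particular ML service.

```python
# Minimal sketch: checking which features a model relies on with permutation
# importance. Dataset and model are illustrative, not from the article.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)

# Shuffle each feature in turn and measure how much the test score drops:
# the larger the drop, the more the model depends on that feature.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
ranking = sorted(zip(X.columns, result.importances_mean), key=lambda p: p[1], reverse=True)
for name, drop in ranking[:5]:
    print(f"{name}: {drop:.3f}")
```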

As technology continues to evolve at an exponential pace, businesses increasingly need ML to integrate with other growing technologies. AI solutions already support scalable development for machines in remote locations, as the rising popularity of the Industrial Internet in manufacturing and distribution shows. Integrating these technologies makes it possible to develop new capabilities, improve decision-making, and enhance customer service. In today's market, emerging technologies can no longer be treated as standalone elements; they are components of the larger technology stack in which they operate. Businesses that pursue an integration strategy, whether by building their own solutions or adopting existing software, stand to benefit because it makes their operations much simpler.

https://www.thewatchtower.com/blogs_on/supervised-machine-learning-its-advantages

Increased demand for personalized and customized ML solutions: With more companies embracing machine learning to gain an edge, the demand for tailored solutions will grow. Machine learning development services such as N-ix.com will need to customize their solutions to the specific needs and preferences of each client. Advancements in natural language processing (NLP): NLP has come a long way and continues to handle language with increasing effectiveness. With further advances ahead, NLP will reach even higher levels, offering more capable conversational AI and text analysis in the future.

Continued focus on ethics: As AI technologies blend into more sectors of life and work, there will be growing interest in the ethical principles governing how these systems are developed and deployed. Companies that provide ML services will need to align their operations with strict ethical standards and emerging government guidelines to establish trust with clients. In conclusion, the possibilities for machine learning development services are wide open. Technological progress and the wider adoption of AI solutions will keep the field moving forward, turning ML into an area with no obvious limit to growth and innovation. Machine learning is transforming the world right under our noses, and that is an exciting prospect for business owners and developers alike.

The landscape of ML development services has changed dramatically. With big data advancing rapidly and demand for intelligent software increasing, developers need to adapt quickly. ML algorithms are now being developed for sectors ranging from medical care to finance and beyond. As firms increasingly invest in approaches that support the full production value chain and strengthen client relations, this trend is here to stay. It is also clear that, as demand for ML development services rises, more innovative solutions will emerge to give businesses a competitive edge. While much about ML remains unknown, there is no denying that these technologies have the potential to reshape our lives and business operations.

Original post:
The Future of ML Development Services: Trends and Predictions - FinSMEs


CSRWire – Island Conservation Harnesses Machine Learning Solutions From Lenovo and NVIDIA To Restore Island … – CSRwire.com

Published 04-18-24

Submitted by Lenovo

Optimizing and accelerating image processing with AI helps conservation experts safeguard seabird nesting sites on Robinson Crusoe Island.

Around the world, biodiversity is under threat. We are now in what many scientists call the sixth mass extinction, and over the last century, hundreds of species of plants and animals have been lost forever.

Island ecosystems can be particularly vulnerable to human activity. On Robinson Crusoe Island in the South Pacific Ocean, native seabirds such as the pink-footed shearwater are easy prey for an invasive species: the South American coati. Introduced to the island by humans almost a century ago, coatis are housecat-sized mammals in the same family as raccoons, which hunt for shearwaters in their nesting sites throughout the island.

Protecting island ecosystems

Leading the fight against invasive species on Robinson Crusoe Island is Island Conservation: an international non-profit organization that restores island ecosystems to benefit wildlife, oceans, and communities. For many years, Island Conservation has been working side by side with island residents to help protect threatened and endangered species.

For Island Conservation, physically removing invasive coatis from shearwater nesting sites is only part of the challenge. To track coati activity, the organization also carefully monitors shearwater nesting sites using more than 70 remote camera traps.

Processing thousands of images a month

The organization's camera traps generate a massive amount of data, around 140,000 images every month, which must be collected and analyzed for signs of coati activity. In the past, the Island Conservation team relied heavily on manual processes to perform this task. To classify 10,000 images would take a trained expert roughly eight hours of non-stop work.

What's more, manual processing diverted valuable resources away from Island Conservation's vital work in the field. The organization knew that there had to be a better way.

Realizing the potential of machine learning

David Will, Head of Innovation at Island Conservation, recalls the challenge: "We started experimenting with machine learning [ML] models to accelerate image processing. We were convinced that automation was the way to go, but one of the big challenges was connectivity. Many of the ML solutions we looked at required us to move all of our photos to the cloud for processing. But on Robinson Crusoe Island, we just didn't have a reliable enough internet connection to do that."

As a temporary workaround, Island Conservation saved its camera trap images to SD cards and airmailed them to Santiago de Chile, where they could be uploaded to the cloud for processing. While airmail was the fastest and most frequent link between the island and the mainland, the service only ran once every two weeks, and there was a lag of up to three months between a camera trap capturing an image and Island Conservation receiving the analysis.

David Will comments: "The time between when we detected an invasive species on a camera and when we were able to respond meant we didn't have enough time to make the kind of decisions we needed to make to prevent extinctions on the island."

Tackling infrastructure challenges

That's when Lenovo entered the frame. Funded by the Lenovo Work for Humankind initiative with a mission to use technology for good, a global team of 16 volunteers traveled to the island. Using Lenovo's smarter technology, from devices to software and IT services to servers, the volunteers were able to do their own day jobs while volunteering to help upgrade the island's networking infrastructure: boosting its bandwidth from 1 Mbps to 200 Mbps.

Robinson Crusoe Island is plagued by harsh marine conditions and limited access. The team needed a sturdy system that brings compute to the data and allows remote management. The solution was Lenovo's ThinkEdge SE450 with NVIDIA A40 GPUs. The AI-optimized edge server provided a rugged design capable of withstanding extreme conditions while running quietly, allowing it to live comfortably in the new remote workspace. Lenovo worked with Island Conservation to tailor the server to its needs, adding additional graphics cards to increase the AI processing capability per node. "We took the supercomputer capability they had in Santiago and brought that into a form factor that is much smaller," says Charles Ferland, Vice President and General Manager of Edge Computing at Lenovo.

The ThinkEdge SE450 eliminated the need for on-site technicians. Unlike a data center, which needs staff on-site, the ThinkEdge server could be monitored and serviced remotely by Lenovo team members. It proved to be the perfect solution. The ThinkEdge server allows for full remote access and management of the device, speeding up decisions from a matter of months to days.

David Will comments, "Lenovo helped us run both the A40s at the same time, immensely speeding up processing, something we previously couldn't do. It has worked tremendously well, and almost all of our processing to date has been done on the ThinkEdge SE450."

Unleashing the power of automation

To automate both the detection and classification of coatis, Lenovo data scientists from the AI Center of Excellence built a custom AI script to detect and separate out the results for coatis and other species from MegaDetector, an open-source object detection model that identifies animals, people, and vehicles in camera trap images. Next, Lenovo data scientists trained an ML model on a custom dataset to give a multi-class classification result for nine species local to Robinson Crusoe Island, including shearwaters and coatis.
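The article does not publish the pipeline's code, but the detect-then-classify pattern it describes looks roughly like the sketch below, where off-the-shelf torchvision models stand in for MegaDetector and the custom nine-species classifier; they are illustrative substitutes, not the production models.

```python
# Sketch of a two-stage camera-trap pipeline: a detector proposes animal crops,
# then a species classifier labels each crop. Torchvision models stand in for
# MegaDetector and the custom classifier; the 0.5 threshold is illustrative.
import torch
import torchvision
from torchvision.transforms.functional import to_tensor

detector = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT").eval()
classifier = torchvision.models.resnet18(weights="DEFAULT").eval()  # stand-in for the 9-species model

@torch.no_grad()
def process(image):                       # image: a PIL image from a camera trap
    tensor = to_tensor(image)
    detections = detector([tensor])[0]    # dict with "boxes", "labels", "scores"
    results = []
    for box, score in zip(detections["boxes"], detections["scores"]):
        if score < 0.5:                   # drop low-confidence detections
            continue
        x0, y0, x1, y1 = box.int().tolist()
        crop = tensor[:, y0:y1, x0:x1].unsqueeze(0)
        crop = torch.nn.functional.interpolate(crop, size=(224, 224))
        species = classifier(crop).softmax(dim=1).argmax(dim=1).item()
        results.append((species, float(score), (x0, y0, x1, y1)))
    return results                        # per-crop (species id, detector score, box)
```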

This two-step GPU-enabled detector-and-classifier pipeline can provide results for 24,000 camera trap images in just one minute. Previously, this would have taken a trained expert twenty hours of labor, an astonishing 99.9% time saving. The model achieved 97.5% accuracy on a test dataset with approximately 400 classifications per second. Harnessing the power of NVIDIA's CUDA-enabled GPUs allowed a 160x speedup on MegaDetector compared to the previous implementation.
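The quoted throughput figures are internally consistent:

\[
\frac{24\,000\ \text{images}}{60\ \text{s}} = 400\ \text{images per second},
\qquad
1 - \frac{1\ \text{min}}{20\ \text{h} \times 60\ \text{min/h}} = 1 - \frac{1}{1200} \approx 99.9\%\ \text{time saved}.
\]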

Sachin Gopal Wani, AI Data Scientist at Lenovo, comments: "Delivering a solution that is easily interpretable by the user is a crucial part of our AI leadership. I made a custom script that generates outputs compatible with TimeLapse, a software the conservationists use worldwide to visualize their results. This enabled much faster visualization for a non-technical end-user without storing additional images. Our solution allows for the results to load with the original images overlapped with classification results, saving terabytes of disk space."
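The key idea in that quote, writing results as lightweight metadata that a viewer overlays on the original photos instead of saving annotated copies, can be sketched as follows. The field names are illustrative only; the actual TimeLapse-compatible schema used in the project is not described in the article.

```python
# Sketch: export detections as one JSON record per image, keyed by file path,
# so a viewer can overlay them on the originals. Field names are hypothetical.
import json

def export_results(results, path="detections.json"):
    """results: {image_path: [(species, confidence, (x0, y0, x1, y1)), ...]}"""
    records = [
        {
            "file": image_path,
            "detections": [
                {"species": s, "conf": round(c, 3), "bbox": list(b)}
                for s, c, b in dets
            ],
        }
        for image_path, dets in results.items()
    ]
    with open(path, "w") as f:
        json.dump(records, f, indent=2)  # a few KB per image vs. MBs for annotated copies
```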

With these ML capabilities, Island Conservation can filter out images that do not contain invasive species with a high degree of certainty. Using its newly upgraded internet connection, the organization can upload images of coati activity to the cloud, where volunteers on the mainland evaluate the images and send recommendations to the island rapidly.

"Using ML, we can expedite image processing, get results in minutes, and cut strategic decision time from three months to a matter of weeks," says David Will. "This shorter response time means more birds protected from direct predation and faster population recovery."

Looking to the future

Looking ahead, Island Conservation plans to continue its collaboration with the Lenovo AI Center of Excellence to develop Gen AI to detect other types of invasive species, including another big threat to native fauna: rodents.

"With Lenovo's support, we're now seeing how much easier it is to train our models to detect other invasive species on Robinson Crusoe Island," says David Will. "Recently, I set up a test environment to detect a new species. After training the model for just seven hours, we recorded 98% detection accuracy, an outstanding result."

As the project scope expands, Island Conservation plans to use more Lenovo ThinkEdge SE450 devices with NVIDIA A40 GPUs for new projects across other islands. Lenovo's ThinkEdge portfolio has been optimized for Edge AI inferencing, offering outstanding performance and ruggedization to securely process the data where it's created.

Backed by Lenovo and NVIDIA technology, Island Conservation is in a stronger position than ever to protect native species from invasive threats.

David Will says: "In many of our projects, we see that more than 30% of the total project cost is spent trying to remove the last 1% of invasives and confirm their absence. With Lenovo, we can make decisions based on hard data, not gut feeling, which means Island Conservation takes on new projects sooner."

Healing our oceans

Island Conservation's work with Lenovo on Robinson Crusoe Island will serve as a blueprint for future activities. The team plans to repurpose the AI application to detect different invasive species on different islands around the world, from the Caribbean to the South and West Pacific, the Central Indian Ocean, and the Eastern Tropical Pacific, with the aim of saving endangered species, increasing biodiversity, and increasing climate resilience.

In fact, Island Conservation, Re:wild, and Scripps Institution of Oceanography recently launched the Island-Ocean Connection Challenge to bring NGOs, governments, funders, island communities, and individuals together to begin holistically restoring 40 globally significant island-ocean ecosystems by 2030.

"Everything is interconnected in what is known as the land-and-sea cycle," says David Will. "Healthy oceans depend on healthy islands. Island and marine ecosystem elements cycle into one another, sharing nutrients vital to the plants and animals within them. Indigenous cultures have managed resources this way for centuries. Climate change, ocean degradation, invasive species, and biodiversity loss are causing entire land-sea ecosystems to collapse, and island communities are disproportionately impacted."

The Island-Ocean Connection Challenge marks the dawn of a new era of conservation that breaks down artificial silos and is focused on holistic restoration.

David Will concludes: "Our collective effort, supported by Lenovo and NVIDIA, is helping to bridge the digital divide on island communities, so they can harness cutting-edge technology to help restore, rewild, and protect their ecosystems, and don't get further left behind by AI advances."

Get involved today at http://www.jointheiocc.org.


Lenovo is a US$62 billion revenue global technology powerhouse, ranked #217 in the Fortune Global 500, employing 77,000 people around the world, and serving millions of customers every day in 180 markets. Focused on a bold vision to deliver Smarter Technology for All, Lenovo has built on its success as the world's largest PC company by further expanding into growth areas that fuel the advancement of New IT technologies (client, edge, cloud, network, and intelligence), including server, storage, mobile, software, solutions, and services. This transformation, together with Lenovo's world-changing innovation, is building a more inclusive, trustworthy, and smarter future for everyone, everywhere. Lenovo is listed on the Hong Kong stock exchange under Lenovo Group Limited (HKSE: 992) (ADR: LNVGY). To find out more visit https://www.lenovo.com, and read about the latest news via our StoryHub.


Read more:
CSRWire - Island Conservation Harnesses Machine Learning Solutions From Lenovo and NVIDIA To Restore Island ... - CSRwire.com


Investigation of the effectiveness of a classification method based on improved DAE feature extraction for hepatitis C … – Nature.com

In this subsection, we evaluate the feature extraction effect of the IDAE by conducting experiments on the hepatitis C dataset with different configurations to test its generalization ability. We would like to investigate the following three questions:

How effective is the IDAE at classifying the characteristics of hepatitis C?

If the depth of the neural network is increased, can the IDAE mitigate the gradient explosion or gradient vanishing problem while improving the classification of hepatitis C disease?

Does an IDAE of the same depth tend to converge more easily than other encoders on the hepatitis C dataset?

Firstly, from a public health standpoint, hepatitis C (HCV) is a global problem: chronic infection can lead to serious consequences such as cirrhosis and liver cancer, and the disease is highly insidious, leaving a large number of cases undiagnosed. It is worth noting that although traditional machine learning and deep learning algorithms are widely applied in healthcare, especially in research on acute conditions such as cancer, chronic infectious diseases such as hepatitis C remain largely unexplored in depth. In addition, the complex biological attributes of the hepatitis C virus and the significant differences among individual patients give rise to multilevel, nonlinear correlations among features. Applying deep learning methods to the hepatitis C dataset is therefore not only an important way to validate the efficacy of such algorithms, but also an urgent research direction needed to fill the existing research gaps.

The Helmholtz Center for Infection Research, the Institute of Clinical Chemistry at the Medical University of Hannover, and other research organizations provided data on people with hepatitis C, which was used to compile the information in this article. The collection includes demographic data, such as age, as well as test results for blood donors and hepatitis C patients. By examining the dataset, we can see that the primary features are the quantity of different blood components and liver function, and that the only categorical feature in the dataset is gender. Table 1 shows the precise definition of these fields.

This paper investigates the classification problem. Table 2 lists the description and sample size of the five main classification labels. To address the effect of class imbalance on classification, the training data are first oversampled with SMOTE32 and the model is then trained on the SMOTE-sampled data, with a sample size of 400 for each class.
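As a rough illustration of that resampling step, the sketch below uses the SMOTE implementation from the imbalanced-learn library to bring every class up to roughly 400 samples. X and y stand for the tabular HCV features and the five category labels; the exact SMOTE settings used in the paper are not given here.

```python
# Rough sketch of class balancing with SMOTE (imbalanced-learn). Classes below
# the target are oversampled; classes already larger than the target are left
# as they are, since SMOTE does not shrink classes.
from collections import Counter
from imblearn.over_sampling import SMOTE

def balance(X, y, per_class=400):
    counts = Counter(y)
    strategy = {label: max(per_class, n) for label, n in counts.items()}
    X_res, y_res = SMOTE(sampling_strategy=strategy, random_state=0).fit_resample(X, y)
    print(Counter(y_res))   # class sizes after resampling
    return X_res, y_res
```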

The aim of this paper is to investigate whether the IDAE can extract more representative and robust features, and we have chosen baseline models that include both traditional machine learning algorithms and various types of autoencoders, described in more detail below:

SVM: support vector machines classify data by constructing maximum-margin hyperplanes and use kernel functions to handle nonlinear problems, seeking the decision boundary that maximizes the margin on the training data.

KNN: the K Nearest Neighbors algorithm determines the class or predicted value of a new sample from its K nearest neighbors, found by calculating its distance to each sample in the training set.

RF: random forests utilize random feature selection and Bootstrap sampling techniques to construct and combine the prediction results of multiple decision trees to effectively handle classification and regression problems.

AE: an autoencoder is a neural network consisting of an encoder and a decoder that learns a compact, low-dimensional feature representation by reconstructing the training data, and is mainly used for dimensionality reduction, feature extraction, and generative learning tasks.

DAE: the denoising autoencoder is an autoencoder variant that excels at extracting features from noisy inputs; by reconstructing inputs to which noise has been added, it reveals the underlying structure of the data, learns higher-level features, and improves network robustness, and the resulting robust features benefit downstream tasks and model generalization.

SDAE: the stacked denoising autoencoder is a multilayer structure of denoising autoencoder layers connected in series; each layer corrupts its input with noise during training and learns to reconstruct the undisturbed original features, extracting a more abstract and robust representation layer by layer.

DIUDA: the Dual Input Unsupervised Denoising Autoencoder receives two different types of input data at the same time and fuses them for joint feature learning, further enhancing the model's generalization ability and its understanding of the data's intrinsic structure.

In this paper, 80% of the hepatitis C dataset is used for model training and the remaining 20% for testing. Since the samples are unbalanced, negative samples are resampled so that the classes are balanced. For all autoencoder-based methods, the learning rate is initialized to 0.001; the encoder and decoder each have 3 layers, with 10, 8, and 5 neurons in the encoder and 5, 8, and 10 neurons in the decoder; the MLP has 3 layers with 10, 8, and 5 neurons; and all models are trained until convergence, with a maximum of 200 training epochs. The machine learning methods all use the sklearn library, with the default hyperparameters of the corresponding algorithms.
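A minimal PyTorch sketch of a denoising autoencoder with the stated layer sizes is given below. The input width is an assumption standing in for the number of tabular HCV features, and the noise level is illustrative; this is the DAE baseline rather than the paper's improved IDAE.

```python
# Minimal denoising autoencoder with the layer sizes described above
# (encoder 10-8-5, decoder 5-8-10, Adam with lr = 0.001, up to 200 epochs).
# `in_dim` and `noise_std` are assumptions; this is not the improved IDAE.
import torch
import torch.nn as nn

class DAE(nn.Module):
    def __init__(self, in_dim=12, noise_std=0.1):
        super().__init__()
        self.noise_std = noise_std
        self.encoder = nn.Sequential(
            nn.Linear(in_dim, 10), nn.ReLU(),
            nn.Linear(10, 8), nn.ReLU(),
            nn.Linear(8, 5), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.Linear(5, 8), nn.ReLU(),
            nn.Linear(8, 10), nn.ReLU(),
            nn.Linear(10, in_dim),
        )

    def forward(self, x):
        noisy = x + self.noise_std * torch.randn_like(x)   # corrupt the input
        code = self.encoder(noisy)
        return self.decoder(code), code                    # reconstruction, 5-d features

def train_dae(model, loader, epochs=200, lr=1e-3):
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.MSELoss()
    for _ in range(epochs):
        for x in loader:                       # loader yields batches of feature vectors
            recon, _ = model(x)
            loss = loss_fn(recon, x)           # reconstruct the clean, uncorrupted input
            opt.zero_grad(); loss.backward(); opt.step()
```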

To answer the first question, we classified the hepatitis C data after feature extraction with the improved denoising autoencoder and compared it against traditional machine learning algorithms such as SVM, KNN, and random forest, with AE, DAE, SDAE, and DIUDA as baseline models. Each experiment was conducted 3 times to mitigate randomness. The average results for each metric are shown in Table 3. From the table, we can make the following observations.

The left figure shows the 3D visualisation of t-SNE with features extracted by DAE, and the right figure shows the 3D visualisation of t-SNE with features extracted by IDAE.

Firstly, the IDAE shows a significant improvement on the hepatitis C classification task compared with the machine learning algorithms and outperforms almost all machine learning baselines on every evaluation metric. These results validate the effectiveness of our proposed improved denoising autoencoder on the hepatitis C dataset. Secondly, the IDAE achieves higher accuracy than the traditional autoencoders (AE, DAE, SDAE, and DIUDA), with improvements of 0.011, 0.013, 0.010, and 0.007, respectively; for the other metrics, AUC-ROC improves by 0.11, 0.10, 0.06, and 0.04, and F1 by 0.13, 0.11, 0.042, and 0.032. From Fig. 5, it can be seen that the IDAE produces better clustering and clearer class boundaries in the 3D feature space. Both the experimental results and the visual analysis verify the advantages of the improved model in classification performance.

Finally, SVM and RF outperform KNN on the hepatitis C dataset because SVM can handle complex nonlinear relationships through radial basis function (RBF) kernels, and the ensemble algorithm can combine multiple weak learners to achieve nonlinear classification indirectly. KNN, by contrast, builds decision boundaries from linear measures such as Euclidean distance, which cannot effectively capture the essential structure of complex nonlinear data distributions, leading to poorer classification results.
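The machine learning baselines with sklearn defaults can be reproduced in a few lines; note that SVC uses the RBF kernel by default, which is the nonlinear behaviour credited above. The split mirrors the 80/20 protocol, while X and y stand for the features (raw or autoencoder codes) and the labels.

```python
# Scikit-learn baselines with default hyperparameters, as in the setup above.
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC

def run_baselines(X, y):
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
    models = {"SVM": SVC(),                    # defaults to the RBF kernel
              "KNN": KNeighborsClassifier(),
              "RF": RandomForestClassifier()}
    for name, model in models.items():
        model.fit(X_tr, y_tr)
        print(name, round(accuracy_score(y_te, model.predict(X_te)), 3))
```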

In summary, these results demonstrate the superiority of the improved denoising autoencoder for feature extraction on hepatitis C data. The behaviour of the machine learning baselines also indirectly confirms that the hepatitis C features may indeed exhibit complex nonlinear relationships.

To answer the second question, we analyze in this subsection the performance variation of different autoencoder algorithms at different depths. To perform the experiments in a constrained setting, we used a fixed learning rate of 0.001. The number of neurons in the encoder and decoder was kept constant, and the number of layers added to the encoder and decoder was set to {1, 2, 3, 4, 5, 6}. Each experiment was performed 3 times and the average results are shown in Fig. 6; we make the following observations:

Effects of various types of autoencoders at different depths.

Under different layer configurations, the IDAE proposed in this study shows significant advantages over the traditional AE, DAE, SDAE, and DIUDA in both feature extraction and classification performance. The experimental data show that the deeper the network, the greater the improvement: when the encoder reaches 6 layers, the IDAE improves accuracy by 0.112, 0.103, 0.041, and 0.021, AUC-ROC by 0.062, 0.042, 0.034, and 0.034, and F1 by 0.054, 0.051, 0.034, and 0.028 over those baselines, respectively.

It is worth noting that conventional autoencoders often encounter overfitting and vanishing gradients as the network deepens, so their performance on the hepatitis C classification task gradually plateaus or even declines slightly; this is largely attributable to the excessive complexity and gradient vanishing caused by an overly deep network structure, which prevent the model from finding the optimal solution. The improved DAE introduces a residual neural network, which optimizes the information flow between layers: directly connected paths address the vanishing gradient problem in deep learning, and the depth and width of the network can be expanded flexibly to balance model complexity and generalization ability. Experimental results show that the improved DAE further improves classification performance as network depth increases appropriately, and alleviates overfitting at the same depth. Taken together, the results reveal that the improved DAE does mitigate the risk of overfitting as the number of network layers grows, and also outperforms the other autoencoders on all metrics.
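The residual idea can be illustrated with a generic skip-connected block like the one below; it is not the paper's exact IDAE architecture, only the mechanism that keeps gradients flowing as encoders get deeper.

```python
# Generic residual (skip-connected) block: the input is added back to the
# block's output, giving gradients a direct path through deep encoders.
import torch.nn as nn

class ResidualBlock(nn.Module):
    def __init__(self, width):
        super().__init__()
        self.body = nn.Sequential(
            nn.Linear(width, width), nn.ReLU(),
            nn.Linear(width, width),
        )
        self.act = nn.ReLU()

    def forward(self, x):
        return self.act(x + self.body(x))      # skip connection: f(x) + x

# A deeper encoder can then stack blocks without starving earlier layers of gradient:
# encoder = nn.Sequential(nn.Linear(in_dim, 10), *[ResidualBlock(10) for _ in range(6)])
```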

To answer the third question, in this subsection we analyse the convergence speed of the different autoencoder algorithms. The experiments set the number of layers added to the encoder and decoder to {3, 6}, with the same number of neurons in each layer, and each experiment was performed three times; the average results are shown in Fig. 7. We observe that the convergence speed of the IDAE is again better than that of the other autoencoders at every depth, and the contrast is more pronounced at deeper layers. This is because the chain rule leads to vanishing gradients and overfitting in the conventional models, so their convergence tends to slow, whereas the IDAE adds direct paths between layers by incorporating techniques such as residual connections, allowing the signal to bypass the nonlinear transforms of some layers and propagate directly to later layers. This design effectively mitigates vanishing gradients as network depth increases, allowing the network to maintain a high gradient flow during training and to converge quickly even at greater depths. In summary, when dealing with complex, high-dimensional data such as the hepatitis C data, the IDAE can learn and extract features better as depth increases, improving training efficiency and overall performance.

Comparison of model convergence speed for different layers of autoencoders.

Excerpt from:
Investigation of the effectiveness of a classification method based on improved DAE feature extraction for hepatitis C ... - Nature.com


Machine Learning Uncovers New Ways to Kill Bacteria With Non-Antibiotic Drugs – ScienceAlert

Human history was forever changed with the discovery of antibiotics in 1928. Infectious diseases such as pneumonia, tuberculosis and sepsis were widespread and lethal until penicillin made them treatable.

Surgical procedures that once came with a high risk of infection became safer and more routine. Antibiotics marked a triumphant moment in science that transformed medical practice and saved countless lives.

But antibiotics have an inherent caveat: When overused, bacteria can evolve resistance to these drugs. The World Health Organization estimated that these superbugs caused 1.27 million deaths around the world in 2019 and will likely become an increasing threat to global public health in the coming years.

New discoveries are helping scientists face this challenge in innovative ways. Studies have found that nearly a quarter of drugs that aren't normally prescribed as antibiotics, such as medications used to treat cancer, diabetes and depression, can kill bacteria at doses typically prescribed for people.

Understanding the mechanisms underlying how certain drugs are toxic to bacteria may have far-reaching implications for medicine. If nonantibiotic drugs target bacteria in different ways from standard antibiotics, they could serve as leads in developing new antibiotics.

But if nonantibiotics kill bacteria in similar ways to known antibiotics, their prolonged use, such as in the treatment of chronic disease, might inadvertently promote antibiotic resistance.

In our recently published research, my colleagues and I developed a new machine learning method that not only identified how nonantibiotics kill bacteria but can also help find new bacterial targets for antibiotics.

Numerous scientists and physicians around the world are tackling the problem of drug resistance, including me and my colleagues in the Mitchell Lab at UMass Chan Medical School. We use the genetics of bacteria to study which mutations make bacteria more resistant or more sensitive to drugs.

When my team and I learned about the widespread antibacterial activity of nonantibiotics, we were consumed by the challenge it posed: figuring out how these drugs kill bacteria.

To answer this question, I used a genetic screening technique my colleagues recently developed to study how anticancer drugs target bacteria. This method identifies which specific genes and cellular processes change when bacteria mutate. Monitoring how these changes influence the survival of bacteria allows researchers to infer the mechanisms these drugs use to kill bacteria.

I collected and analyzed almost 2 million instances of toxicity between 200 drugs and thousands of mutant bacteria. Using a machine learning algorithm I developed to deduce similarities between different drugs, I grouped the drugs together in a network based on how they affected the mutant bacteria.
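The study's own algorithm is not detailed in this article, but grouping drugs by the similarity of their effects on mutants can be sketched in a generic way: correlate each drug's toxicity profile with every other drug's and cluster the resulting similarity matrix. The sketch below is an illustrative stand-in, not the method developed in the research.

```python
# Illustrative only: group drugs by how similarly they affect a panel of
# mutants by correlating toxicity profiles and clustering the result.
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage
from scipy.spatial.distance import squareform

def group_drugs(profiles, n_groups=10):
    """profiles: (n_drugs, n_mutants) array of toxicity scores."""
    corr = np.corrcoef(profiles)                  # drug-by-drug similarity
    dist = 1.0 - corr                             # turn similarity into distance
    np.fill_diagonal(dist, 0.0)
    condensed = squareform(dist, checks=False)    # condensed form expected by linkage
    tree = linkage(condensed, method="average")
    return fcluster(tree, t=n_groups, criterion="maxclust")   # one cluster label per drug

# e.g., 200 drugs screened against ~10,000 mutants, roughly the scale described above
labels = group_drugs(np.random.rand(200, 10_000))
```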

My maps clearly showed that known antibiotics were tightly grouped together by their known classes of killing mechanisms. For example, all antibiotics that target the cell wall (the thick protective layer surrounding bacterial cells) were grouped together and well separated from antibiotics that interfere with bacteria's DNA replication.

Intriguingly, when I added nonantibiotic drugs to my analysis, they formed separate hubs from antibiotics. This indicates that nonantibiotic and antibiotic drugs have different ways of killing bacterial cells. While these groupings don't reveal how each drug specifically kills bacteria, they show that those clustered together likely work in similar ways.

The last piece of the puzzle, whether we could find new drug targets in bacteria to kill them, came from the research of my colleague Carmen Li.

She grew hundreds of generations of bacteria that were exposed to different nonantibiotic drugs normally prescribed to treat anxiety, parasite infections and cancer.

Sequencing the genomes of bacteria that evolved and adapted to the presence of these drugs allowed us to pinpoint the specific bacterial protein that triclabendazole, a drug used to treat parasite infections, targets to kill the bacteria. Importantly, current antibiotics don't typically target this protein.

Additionally, we found that two other nonantibiotics that used a similar mechanism as triclabendazole also target the same protein. This demonstrated the power of my drug similarity maps to identify drugs with similar killing mechanisms, even when that mechanism was yet unknown.

Our findings open multiple opportunities for researchers to study how nonantibiotic drugs work differently from standard antibiotics. Our method of mapping and testing drugs also has the potential to address a critical bottleneck in developing antibiotics.

Searching for new antibiotics typically involves sinking considerable resources into screening thousands of chemicals that kill bacteria and figuring out how they work. Most of these chemicals are found to work similarly to existing antibiotics and are discarded.

Our work shows that combining genetic screening with machine learning can help uncover the chemical needle in the haystack that can kill bacteria in ways researchers haven't used before.

There are different ways to kill bacteria we haven't exploited yet, and there are still roads we can take to fight the threat of bacterial infections and antibiotic resistance.

Mariana Noto Guillen, Ph.D. Candidate in Systems Biology, UMass Chan Medical School

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Read the original post:
Machine Learning Uncovers New Ways to Kill Bacteria With Non-Antibiotic Drugs - ScienceAlert


Formal Interaction Model (FIM): A Mathematics-based Machine Learning Model that Formalizes How AI and Users Shape One Another – MarkTechPost

Read more from the original source:
Formal Interaction Model (FIM): A Mathematics-based Machine Learning Model that Formalizes How AI and Users Shape One Another - MarkTechPost
