Archive for the ‘Machine Learning’ Category

Machine-learning-powered extraction of molecular diffusivity from single-molecule images for super-resolution … – Nature.com


Continued here:
Machine-learning-powered extraction of molecular diffusivity from single-molecule images for super-resolution ... - Nature.com

Machine Learning and AI Combined Can Boost Energy and Chemical Production – Yahoo Finance

NORTHAMPTON, MA / ACCESSWIRE / March 28, 2023 / Schneider Electric


Today's energy-intensive processes are looking to artificial intelligence (AI) technologies, including machine learning (ML), to help deliver smart automation capabilities needed to decrease machine downtime, expand asset utilization, and unlock immediate insights into real-time process optimization.

Organizations with a "digital-first" mindset understand the potential ML offers to vastly increase daily decision-making accuracy, speed, and flexibility. According to a recent study, 84% of C-suite executives believe AI is necessary to achieve their growth objectives, yet 74% concede that significant barriers to implementation exist.

The core constraints to building automated analytics into automation and control applications are limited access to technical skills, diverse domain expertise, and deployment tools.

Numerous organizations, including Schneider Electric, have found that a fundamental enabler of successful and ground-breaking ML deployment is to team up with expert outside partners. Dynamic collaborations can significantly enhance the skills and abilities of cross-functional and interdisciplinary teams. Such cooperation is the core philosophy behind our Partnerships of the Future program, an initiative designed to develop mutually beneficial professional relationships to speed innovation and generate superior business outcomes for customers and partners alike.

A collaborative approach to digital development strategy pays off

With valuable input from specialists at Alkhorayef Petroleum, Schneider Electric was able to build edge-analytics-enabled AI capabilities into the EcoStruxure Autonomous Production Advisor platform for oil and gas production facilities. It's one example of the several successful digital co-innovation efforts Schneider is currently executing across multiple industrial segments. The goal of both partners was to build an AI-based solution incorporating ML and pattern-recognition models that could detect anomalies in the oil and gas extraction process and address several key challenges, including:


Harnessing and replicating the insights and expertise of the most proficient well operators so their abilities could be automated and deployed across a broader range of production conditions.

Actively managing equipment lifecycles to optimize well intervention schedules and generate maximum value from the physical asset base.

Monitoring and reacting to downhole conditions in real-time to optimize petroleum production, reduce unplanned downtime, maximize oil volumes, and improve safety.

Because traditional automation architectures and strategies wouldn't deliver the required capabilities, particularly for remote oil and gas wells, a new cooperative development approach was undertaken. Together, the team was able to leverage Schneider's expertise in IIoT-enabled control systems and AI-based process optimization with Alkhorayef Petroleum's knowledge and expertise of electrical submersible pumps to create novel techniques to capture and automate expert knowledge.

Cloud computing and edge analytics combined

EcoStruxure Autonomous Production Advisor merges the power and flexibility of cloud and edge computing with the value-generating capabilities of artificial intelligence and machine learning. In conjunction with remote terminal units (RTUs), the platform runs on industrial-grade edge controllers that combine supervised and unsupervised ML models running directly at the edge.

Replicating the actions of highly skilled human operators, the AI monitors the pump operation, assesses production variables, and analyzes the interactions and relationships between them to identify anomalous operations. As a next step, the AI model classifies the detected anomalous events (such as sand intrusion, interfering gas, or mechanical problems) as specific issues. Continuous validation of the AI model's event classifications by operators and experts helps to retrain the model, developing increasingly accurate diagnostic and predictive abilities.
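The detect-then-classify loop described above can be sketched in a few lines. This is an illustrative stand-in, not Schneider Electric's actual models: the telemetry is synthetic, the detector is a simple statistical threshold rather than the platform's trained ML models, and the event names and baseline values are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated pump telemetry: mostly steady intake pressure with two injected faults.
intake_pressure = rng.normal(loc=200.0, scale=5.0, size=500)
intake_pressure[100] = 260.0  # e.g. gas interference pushing pressure up
intake_pressure[300] = 120.0  # e.g. sand intrusion pulling pressure down

# Unsupervised step: flag readings far outside the normal operating envelope.
z = np.abs((intake_pressure - intake_pressure.mean()) / intake_pressure.std())
anomalies = np.where(z > 4.0)[0]

# Supervised step (stand-in): label each flagged event; in the real platform a
# trained classifier, validated and retrained by operators, plays this role.
def classify(value, baseline=200.0):
    return "gas_interference" if value > baseline else "sand_intrusion"

events = {int(i): classify(intake_pressure[i]) for i in anomalies}
print(events)
```

The human-in-the-loop retraining the article describes would correspond to operators correcting the labels in `events` and feeding them back as new training data.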

The implementation of machine learning models in industrial applications forms an exciting new area because they can be trained to optimize operations and asset performance in a variety of important areas, such as:

Identification of asset deterioration

Early detection of abnormal behavior

Prediction of equipment failure and smart alarming

Asset performance management (digital twin)

An additional benefit of AI models is that such a solution can be trained for image recognition, enabling it to be an automation aid for several applications, including:

Product quality

Man down and intrusion detection and alarming

Leakage detection and contactless flow measurement

Machine vision and object and shape detection

Vendor-agnostic hardware enables the platform to be deployed to existing architectures without requiring significant modifications.

Co-innovation delivers tangible results

In offshore and onshore wells in the Middle East, Africa, and Latin America, the EcoStruxure Autonomous Production Advisor model training process has proven to be very effective at capturing the skills and expertise of the most senior operators and having the system automate and reproduce them. In one use case run by Schneider Electric, the customer reported a 13% increase in production and a 33% reduction in energy consumption.

Innovation isn't just about technology; there's no "one size fits all" strategy for partnering to invent new solutions that deliver major dividends. Success depends on nurturing conditions for a dynamic, mutually beneficial partnership. With a depth of co-innovation experience unrivaled in the smart control systems and process automation space, Schneider Electric is ready to work with true partners looking to overcome our greatest challenges.

Click to learn more about EcoStruxure Autonomous Production Advisor and Alkhorayef Petroleum.

View additional multimedia and more ESG storytelling from Schneider Electric on 3blmedia.com.

Contact Info:
Spokesperson: Schneider Electric
Website: https://www.3blmedia.com/profiles/schneider-electric
Email: info@3blmedia.com

SOURCE: Schneider Electric

View source version on accesswire.com: https://www.accesswire.com/746202/Machine-Learning-and-AI-Combined-Can-Boost-Energy-and-Chemical-Production

Here is the original post:
Machine Learning and AI Combined Can Boost Energy and Chemical Production - Yahoo Finance

OpenXLA Project is Now Available to Accelerate and Simplify Machine Learning – MarkTechPost

Over the past few years, machine learning (ML) has completely revolutionized the technology industry. Ranging from 3D protein structure prediction and the detection of tumors in cells to identifying fraudulent credit card transactions and curating personalized experiences, there is hardly any industry that has not yet employed ML algorithms to enhance its use cases. Even though machine learning is a rapidly maturing discipline, a number of challenges still need to be resolved before these ML models can be developed and put into use.

ML development and deployment currently suffer for several reasons. Infrastructure and resource limitations are among the main causes, as executing ML models is frequently computationally intensive and requires a large amount of resources. Moreover, there is a lack of standardization when it comes to deploying ML models: deployment depends greatly on the framework and hardware being used and on the purpose for which the model is designed. As a result, developers spend a lot of time and effort ensuring that a model built with a specific framework functions properly on every piece of hardware, which requires a considerable amount of domain-specific knowledge. Such inconsistencies and inefficiencies slow developers down and place restrictions on model architecture, performance, and generalizability.

Several ML industry leaders, including Alibaba, Amazon Web Services, AMD, Apple, Cerebras, Google, Graphcore, Hugging Face, Intel, Meta, and NVIDIA, have teamed up to develop an open-source compiler and infrastructure ecosystem known as OpenXLA to close this gap by making ML frameworks compatible with a variety of hardware systems and increasing developers' productivity. Depending on the use case, developers can choose the framework of their choice (PyTorch, TensorFlow, etc.) and compile their models for high performance across multiple hardware backend options like GPU, CPU, etc., using OpenXLA's state-of-the-art compilers. The ecosystem focuses on providing its users with high performance, scalability, portability, and flexibility while remaining affordable. The OpenXLA Project, which consists of the XLA compiler (a domain-specific compiler that optimizes linear algebra operations to run across hardware) and StableHLO (a portable operation set that lets models from various ML frameworks be deployed across hardware), is now available to the general public and is accepting contributions from the community.

The OpenXLA community has done a fantastic job of bringing together the expertise of several developers and industry leaders across different fields in the ML world. Since ML infrastructure is so immense and varied, no single organization can tackle it alone at scale. Thus, experts well versed in different ML domains, such as frameworks, hardware, compilers, runtimes, and performance accuracy, have come together to accelerate the pace of ML model development and deployment. The OpenXLA Project achieves this vision in two ways: it provides a modular and uniform compiler interface that developers can use with any framework, and pluggable hardware-specific backends for model optimization. Developers can also leverage MLIR-based components from the extensible ML compiler platform, configuring them for their particular use cases and enabling hardware-specific customization throughout the compilation workflow.

OpenXLA can be employed for a spectrum of use cases. These include developing and delivering cutting-edge performance for a variety of established and new models, including, to mention a few, DeepMind's AlphaFold and multi-modal LLMs for Amazon. These models can be scaled with OpenXLA over numerous hosts and accelerators without exceeding deployment limits. One of the most significant features of the ecosystem is its support for a multitude of hardware devices, such as AMD and NVIDIA GPUs and x86 CPUs, and ML accelerators like Google TPUs, AWS Trainium and Inferentia, and many more. As mentioned previously, developers used to need domain-specific knowledge to write device-specific code so that models written in different frameworks would perform well across hardware. OpenXLA, however, ships several model enhancements that simplify a developer's job, like streamlined linear algebra operations and enhanced scheduling. Moreover, it comes with a number of modules that provide effective model parallelization across various hardware hosts and accelerators.
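As a concrete taste of this workflow, JAX already uses XLA, the compiler at the core of the OpenXLA Project, as its backend: a function wrapped in `jax.jit` is traced, lowered to XLA, and compiled for whatever backend is available (CPU, GPU, or TPU) with no code changes. A minimal sketch (the function and shapes are illustrative, not from the OpenXLA announcement):

```python
import jax
import jax.numpy as jnp

# jax.jit traces the Python function and hands it to XLA, which fuses and
# optimizes the linear-algebra operations for the available backend.
@jax.jit
def predict(w, x):
    return jnp.tanh(x @ w)

x = jnp.ones((4, 3))
w = jnp.zeros((3, 2))
out = predict(w, x)
print(out.shape)  # (4, 2)
```

The same decorated function runs unchanged on CPU, GPU, or TPU, which is exactly the framework-to-hardware portability the project aims to generalize across frameworks.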

The developers behind the OpenXLA Project are extremely excited to see how developers use it to enhance ML development and deployment for their preferred use case.

Check out the Project and Blog. All credit for this research goes to the researchers on this project.

Khushboo Gupta is a consulting intern at MarktechPost. She is currently pursuing her B.Tech from the Indian Institute of Technology (IIT) Goa. She is passionate about the fields of Machine Learning, Natural Language Processing, and Web Development. She enjoys learning more about the technical field by participating in several challenges.

Continue reading here:
OpenXLA Project is Now Available to Accelerate and Simplify Machine Learning - MarkTechPost

Can AI Help Find Life on Mars or Icy Worlds? – SciTechDaily

A recent study led by SETI Institute Senior Research Scientist Kim Warren-Rhodes and published in Nature Astronomy brings us closer to discovering extraterrestrial life by mapping scarce life forms in extreme environments. The interdisciplinary research focuses on life hidden within salt domes, rocks, and crystals at Salar de Pajonales, situated at the border of the Chilean Atacama Desert and Altiplano. This study could help pinpoint exact locations to search for life on other planets, despite the limited opportunities to collect samples or access remote sensing instruments.

Wouldn't discovering life on other worlds be made easier if we knew the exact locations to search? However, opportunities to collect samples or access remote sensing instruments are limited. A recent study, published in Nature Astronomy and led by SETI Institute Senior Research Scientist Kim Warren-Rhodes, brings us one step closer to finding extraterrestrial life. The interdisciplinary study maps the scarce life forms hidden within salt domes, rocks, and crystals at Salar de Pajonales, located at the boundary of the Chilean Atacama Desert and Altiplano.

Warren-Rhodes teamed up with Michael Phillips from the Johns Hopkins Applied Physics Lab and Freddie Kalaitzis from the University of Oxford to train a machine-learning model that could recognize patterns and rules associated with the distribution of life forms. This model was designed to predict and identify similar distributions in data it was not trained on. By combining statistical ecology with AI/ML, the scientists achieved a remarkable outcome: the ability to locate and detect biosignatures up to 87.5% of the time, compared to just 10% with a random search. This also reduced the search area by as much as 97%.
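The payoff of a probability map over blind search is easy to see in a toy reconstruction. Everything below is a synthetic stand-in, not the study's data: the grid, the hotspot, and the "CNN output" are invented, but the mechanism, sampling the highest-probability cells first, is the same one the team exploited.

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy landscape: biosignatures concentrated in one small patch of a 100x100 grid,
# mirroring the patchy "hotspot" distributions reported at Salar de Pajonales.
grid = np.zeros((100, 100), dtype=bool)
grid[40:45, 60:65] = True  # 25 target cells out of 10,000

# Stand-in for the CNN's output: predicted probability peaks near the hotspot.
yy, xx = np.mgrid[0:100, 0:100]
prob = np.exp(-((yy - 42) ** 2 + (xx - 62) ** 2) / (2 * 15.0 ** 2))

budget = 300  # number of cells we can afford to sample

# Model-guided search: visit the highest-probability cells first.
top = np.argsort(prob, axis=None)[::-1][:budget]
rows, cols = np.unravel_index(top, prob.shape)
guided_hits = int(grid[rows, cols].sum())

# Random search: visit the same number of cells uniformly at random.
random_hits = int(grid.ravel()[rng.choice(grid.size, size=budget, replace=False)].sum())

print(guided_hits, random_hits)  # guided search recovers all 25 targets
```

With the same sampling budget, the guided search finds every target while the random search finds almost none, which is the qualitative shape of the 87.5%-versus-10% result.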

Biosignature probability maps from CNN models and statistical ecology data. The colors in (a) indicate the probability of biosignature detection. In (b), a visible image of a gypsum dome geologic feature (left) is shown with biosignature probability maps for various microhabitats (e.g., sand versus alabaster) within it. Credit: M. Phillips, F. Kalaitzis, K. Warren-Rhodes.

"Our framework allows us to combine the power of statistical ecology with machine learning to discover and predict the patterns and rules by which nature survives and distributes itself in the harshest landscapes on Earth," said Rhodes. "We hope other astrobiology teams adapt our approach to mapping other habitable environments and biosignatures. With these models, we can design tailor-made roadmaps and algorithms to guide rovers to places with the highest probability of harboring past or present life, no matter how hidden or rare."

Video showing the major concepts of integrating datasets from orbit to the ground. The first frames zoom in from a global view to an orbital image of Salar de Pajonales. The salar is then overlain with an interpretation of its compositional variability derived from ASTER multispectral data. The next sequence of frames transitions to drone-derived images of the field site within Salar de Pajonales. Note features of interest that become identifiable in the scene, starting with polygonal networks of ridges, then individual gypsum domes and polygonal patterned ground, and ending with individual blades of selenite. The video ends with a first-person view of a set of gypsum domes studied in the article using machine learning techniques. Credit: M. Phillips

Ultimately, similar algorithms and machine learning models for many different types of habitable environments and biosignatures could be automated onboard planetary robots to efficiently guide mission planners to areas at any scale with the highest probability of containing life.

Rhodes and the SETI Institute NASA Astrobiology Institute (NAI) team used the Salar de Pajonales as a Mars analog. Pajonales is a high-altitude (3,541 m), high-UV, hyperarid dry salt lakebed considered inhospitable to many life forms but still habitable.

During the NAI project's field campaigns, the team collected over 7,765 images and 1,154 samples and tested instruments to detect photosynthetic microbes living within the salt domes, rocks, and alabaster crystals. These microbes exude pigments that represent one possible biosignature on NASA's Ladder of Life Detection.

At Pajonales, drone flight imagery connected simulated orbital (HiRISE) data to ground sampling and 3D topographical mapping to extract spatial patterns. The study's findings confirm (statistically) that microbial life at the Pajonales terrestrial analog site is not distributed randomly but concentrated in patchy biological hotspots strongly linked to water availability at km to cm scales.
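The claim that detections are patchy rather than random is exactly the kind of statement a quadrat test makes precise: under complete spatial randomness (CSR), the variance-to-mean ratio of per-cell counts is about 1, while clustering pushes it far above 1. A minimal sketch with synthetic points (not the study's data; the cluster centers and spreads are invented):

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic point pattern: 200 detections drawn from two tight clusters,
# versus the null model of complete spatial randomness on the unit square.
clustered = np.vstack([
    rng.normal([0.3, 0.3], 0.03, size=(100, 2)),
    rng.normal([0.7, 0.6], 0.03, size=(100, 2)),
])

def quadrat_vmr(points, bins=10):
    """Variance-to-mean ratio of quadrat counts: ~1 under CSR, >>1 when clustered."""
    counts, _, _ = np.histogram2d(points[:, 0], points[:, 1],
                                  bins=bins, range=[[0, 1], [0, 1]])
    return counts.var() / counts.mean()

vmr_clustered = quadrat_vmr(clustered)
vmr_random = quadrat_vmr(rng.uniform(size=(200, 2)))
print(round(float(vmr_clustered), 1), round(float(vmr_random), 1))
```

The clustered pattern produces a ratio far above 1 while the uniform pattern stays near 1, the statistical signature of the "patchy hotspots" the study reports.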

Next, the team trained convolutional neural networks (CNNs) to recognize and predict macro-scale geologic features at Pajonales (some of which, like patterned ground or polygonal networks, are also found on Mars) and micro-scale substrates (or micro-habitats) most likely to contain biosignatures.

Orbit-to-Ground study of biosignatures in the terrestrial Mars analog study site Salar de Pajonales, Chile. (b) Drone view of the site with macroscale geologic features (domes, aeolian cover, ridge networks, and patterned ground) in false color. (c) 3-D rendering of dome macrohabitats from drone imagery. (d) Orange and green bands of pigments of the photosynthetic microbial communities living in Ca-sulfate micro-habitats. These biosignatures are a feature of NASA's Ladder of Life Detection and are detectable by eye and by instruments such as Raman (e) and visible short-wave infrared spectroscopy. Credit: N. Cabrol, M. Phillips, K. Warren-Rhodes, J. Bishop, and D. Wettergreen.

Like the Perseverance team on Mars, the researchers tested how to effectively integrate a UAV/drone with ground-based rovers, drills, and instruments (e.g., VISIR on MastCam-Z and Raman on SuperCam on the Mars 2020 Perseverance rover).

The team's next research objective at Pajonales is to test the CNN's ability to predict the location and distribution of ancient stromatolite fossils and halite microbiomes with the same machine learning programs, to learn whether similar rules and models apply to other similar yet slightly different natural systems. From there, entirely new ecosystems, such as hot springs, permafrost soils, and rocks in the Dry Valleys, will be explored and mapped. As more evidence accrues, hypotheses about the convergence of life's means of surviving in extreme environments will be iteratively tested, and biosignature probability blueprints for Earth's key analog ecosystems and biomes will be inventoried.

"While the high rate of biosignature detection is a central result of this study, no less important is that it successfully integrated datasets at vastly different resolutions from orbit to the ground, and finally tied regional orbital data with microbial habitats," said Nathalie A. Cabrol, the PI of the SETI Institute NAI team. "With it, our team demonstrated a pathway that enables the transition from the scales and resolutions required to characterize habitability to those that can help us find life. In that strategy, drones were essential, but so was the implementation of microbial ecology field investigations that require extended periods (up to weeks) of in situ (and in place) mapping in small areas, a strategy that was critical to characterizing local environmental patterns favorable to life niches."

This study, led by the SETI Institute's NAI team, has paved the way for machine learning to assist scientists in the search for biosignatures in the universe. Their paper, "Orbit-to-Ground Framework to Decode and Predict Biosignature Patterns in Terrestrial Analogues," is the culmination of five years of the NASA-funded NAI project and a cooperative astrobiology research effort with over 50 team members from 17 institutions. In addition to the Johns Hopkins Applied Physics Lab and the University of Oxford, the Universidad Católica del Norte, Antofagasta, Chile supported this research.

Reference: "Orbit-to-ground framework to decode and predict biosignature patterns in terrestrial analogues" by Kimberley Warren-Rhodes, Nathalie A. Cabrol, Michael Phillips, Cinthya Tebes-Cayo, Freddie Kalaitzis, Diego Ayma, Cecilia Demergasso, Guillermo Chong-Diaz, Kevin Lee, Nancy Hinman, Kevin L. Rhodes, Linda Ng Boyle, Janice L. Bishop, Michael H. Hofmann, Neil Hutchinson, Camila Javiera, Jeffrey Moersch, Claire Mondro, Nora Nofke, Victor Parro, Connie Rodriguez, Pablo Sobron, Philippe Sarazzin, David Wettergreen, Kris Zacny and the SETI Institute NAI Team, 6 March 2023, Nature Astronomy. DOI: 10.1038/s41550-022-01882-x

The SETI NAI team project, entitled "Changing Planetary Environments and the Fingerprints of Life," was funded by the NASA Astrobiology Program.

See the rest here:
Can AI Help Find Life on Mars or Icy Worlds? - SciTechDaily

Artificial Intelligence Glossary: AI Terms Everyone Should Learn – The New York Times

We've compiled a list of phrases and concepts useful to understanding artificial intelligence, in particular the new breed of A.I.-enabled chatbots like ChatGPT, Bing and Bard.

If you don't understand these explanations, or would like to learn more, you might want to consider asking the chatbots themselves. Answering such questions is one of their most useful skills, and one of the best ways to understand A.I. is to use it. But keep in mind that they sometimes get things wrong.

Bing and Bard chatbots are being rolled out slowly, and you may need to get on their waiting lists for access. ChatGPT currently has no waiting list, but it requires setting up a free account.

For more on learning about A.I., check out The New York Times's five-part series on becoming an expert on chatbots.

Anthropomorphism: The tendency for people to attribute humanlike qualities or characteristics to an A.I. chatbot. For example, you may assume it is kind or cruel based on its answers, even though it is not capable of having emotions, or you may believe the A.I. is sentient because it is very good at mimicking human language.

Bias: A type of error that can occur in a large language model if its output is skewed by the model's training data. For example, a model may associate specific traits or professions with a certain race or gender, leading to inaccurate predictions and offensive responses.

A brave new world. A new crop of chatbots powered by artificial intelligence has ignited a scramble to determine whether the technology could upend the economics of the internet, turning today's powerhouses into has-beens and creating the industry's next giants. Here are the bots to know:

ChatGPT. ChatGPT, the artificial intelligence language model from a research lab, OpenAI, has been making headlines since November for its ability to respond to complex questions, write poetry, generate code, plan vacations and translate languages. GPT-4, the latest version introduced in mid-March, can even respond to images (and ace the Uniform Bar Exam).

Bing. Two months after ChatGPT's debut, Microsoft, OpenAI's primary investor and partner, added a similar chatbot, capable of having open-ended text conversations on virtually any topic, to its Bing internet search engine. But it was the bot's occasionally inaccurate, misleading and weird responses that drew much of the attention after its release.

Ernie. The search giant Baidu unveiled China's first major rival to ChatGPT in March. The debut of Ernie, short for Enhanced Representation through Knowledge Integration, turned out to be a flop after a promised live demonstration of the bot was revealed to have been recorded.

Emergent behavior: Unexpected or unintended abilities in a large language model, enabled by the model's learning patterns and rules from its training data. For example, models that are trained on programming and coding sites can write new code. Other examples include creative abilities like composing poetry, music and fictional stories.

Generative A.I.: Technology that creates content, including text, images, video and computer code, by identifying patterns in large quantities of training data, and then creating original material that has similar characteristics. Examples include ChatGPT for text and DALL-E and Midjourney for images.

Hallucination: A well-known phenomenon in large language models, in which the system provides an answer that is factually incorrect, irrelevant or nonsensical, because of limitations in its training data and architecture.

Large language model: A type of neural network that learns skills including generating prose, conducting conversations and writing computer code by analyzing vast amounts of text from across the internet. The basic function is to predict the next word in a sequence, but these models have surprised experts by learning new abilities.
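The "predict the next word" function can be illustrated with a toy bigram model, whose parameters are nothing more than counts of which word follows which in a tiny made-up corpus. Real large language models learn billions of parameters from internet-scale text, but the underlying task is the same:

```python
# Toy next-word prediction: count word pairs in a tiny corpus, then
# predict the most frequent follower. A real LLM replaces these counts
# with a learned neural network, but the task is identical.
corpus = "the cat sat on the mat the cat ran".split()

counts = {}
for prev, nxt in zip(corpus, corpus[1:]):
    counts.setdefault(prev, {})
    counts[prev][nxt] = counts[prev].get(nxt, 0) + 1

def predict_next(word):
    """Return the word most often seen after `word` (None if unseen)."""
    followers = counts.get(word, {})
    return max(followers, key=followers.get) if followers else None

print(predict_next("the"))  # "cat" follows "the" twice, "mat" once -> "cat"
```

Generating text is then just a loop: predict a word, append it, and predict again from the extended sequence.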

Natural language processing: Techniques used by large language models to understand and generate human language, including text classification and sentiment analysis. These methods often use a combination of machine learning algorithms, statistical models and linguistic rules.
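Sentiment analysis, one of the NLP tasks mentioned above, can be sketched in its simplest linguistic-rule form: score a sentence against hand-written lists of positive and negative words. The word lists here are invented for illustration; production systems use learned models rather than fixed lexicons:

```python
# Minimal lexicon-based sentiment analysis (hypothetical word lists).
POSITIVE = {"good", "great", "helpful", "excellent"}
NEGATIVE = {"bad", "wrong", "useless", "terrible"}

def sentiment(text):
    words = text.lower().split()
    score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    return "positive" if score > 0 else "negative" if score < 0 else "neutral"

print(sentiment("the chatbot gave a great and helpful answer"))  # positive
```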

Neural network: A mathematical system, modeled on the human brain, that learns skills by finding statistical patterns in data. It consists of layers of artificial neurons: The first layer receives the input data, and the last layer outputs the results. Even the experts who create neural networks don't always understand what happens in between.

Parameters: Numerical values that define a large language model's structure and behavior, like clues that help it guess what words come next. Systems like GPT-4 are thought to have hundreds of billions of parameters.
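The arithmetic behind those counts is simple. A single fully connected layer with m inputs and n outputs has m × n weights plus n biases; stacking many such layers is how the totals reach the billions. The layer sizes below are made up for illustration, not taken from any real model:

```python
# Parameter count of one fully connected layer: weights plus biases.
def dense_layer_params(n_inputs, n_outputs):
    return n_inputs * n_outputs + n_outputs

print(dense_layer_params(1024, 4096))  # 4,198,400 parameters in one layer
```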

Reinforcement learning: A technique that teaches an A.I. model to find the best result by trial and error, receiving rewards or punishments from an algorithm based on its results. This system can be enhanced by humans giving feedback on its performance, in the form of ratings, corrections and suggestions.
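Trial-and-error learning can be shown in miniature with a two-armed bandit: the agent tries actions, receives rewards from the environment, and shifts its estimates toward whichever action pays off more. This toy omits the human-feedback stage the entry mentions, and the payoff probabilities are invented:

```python
import random

# Epsilon-greedy learning on a two-armed bandit (a minimal RL sketch).
random.seed(0)
true_reward = {"A": 0.2, "B": 0.8}   # hidden payoff probabilities
estimate = {"A": 0.0, "B": 0.0}      # the agent's learned values
pulls = {"A": 0, "B": 0}

for step in range(1000):
    # Explore 10% of the time; otherwise exploit the best current estimate.
    if random.random() < 0.1:
        action = random.choice("AB")
    else:
        action = max(estimate, key=estimate.get)
    reward = 1 if random.random() < true_reward[action] else 0
    pulls[action] += 1
    # Running average: nudge the estimate toward the observed reward.
    estimate[action] += (reward - estimate[action]) / pulls[action]

print(max(estimate, key=estimate.get))  # the agent comes to prefer "B"
```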

Transformer model: A neural network architecture useful for understanding language that does not have to analyze words one at a time but can look at an entire sentence at once. This was an A.I. breakthrough, because it enabled models to understand context and long-term dependencies in language. Transformers use a technique called self-attention, which allows the model to focus on the particular words that are important in understanding the meaning of a sentence.
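Self-attention itself reduces to a small computation: score every word in the sentence against the word being processed, then run the scores through a softmax so they become weights summing to one. Real transformers derive the scores from learned query and key vectors; the relevance scores below are assumed values chosen by hand for the word "it":

```python
import math

# Softmax turns raw relevance scores into attention weights.
def softmax(scores):
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

sentence = ["the", "animal", "was", "tired", "so", "it", "slept"]
# Hypothetical relevance of each word to "it" (hand-picked, not learned):
scores = [0.1, 3.0, 0.1, 1.0, 0.1, 0.5, 0.2]

weights = softmax(scores)                      # weights sum to 1.0
focus = sentence[weights.index(max(weights))]
print(focus)  # "it" attends most strongly to "animal"
```

Because every word is scored against every other word at once, the model can link "it" back to "animal" without reading the sentence token by token.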

See original here:
Artificial Intelligence Glossary: AI Terms Everyone Should Learn - The New York Times