Archive for the ‘Machine Learning’ Category

Learning to grow machine-learning models | MIT News | Massachusetts Institute of Technology – MIT News

It's no secret that OpenAI's ChatGPT has some incredible capabilities. For instance, the chatbot can write poetry that resembles Shakespearean sonnets or debug code for a computer program. These abilities are made possible by the massive machine-learning model that ChatGPT is built upon. Researchers have found that when these types of models become large enough, extraordinary capabilities emerge.

But bigger models also require more time and money to train. The training process involves showing hundreds of billions of examples to a model. Gathering so much data is an involved process in itself. Then come the monetary and environmental costs of running many powerful computers for days or weeks to train a model that may have billions of parameters.

"It's been estimated that training models at the scale of what ChatGPT is hypothesized to run on could take millions of dollars, just for a single training run. Can we improve the efficiency of these training methods, so we can still get good models in less time and for less money? We propose to do this by leveraging smaller language models that have previously been trained," says Yoon Kim, an assistant professor in MIT's Department of Electrical Engineering and Computer Science and a member of the Computer Science and Artificial Intelligence Laboratory (CSAIL).

Rather than discarding a previous version of a model, Kim and his collaborators use it as the building blocks for a new model. Using machine learning, their method learns to grow a larger model from a smaller model in a way that encodes knowledge the smaller model has already gained. This enables faster training of the larger model.

Their technique saves about 50 percent of the computational cost required to train a large model, compared to methods that train a new model from scratch. Plus, the models trained using the MIT method performed as well as, or better than, models trained with other techniques that also use smaller models to enable faster training of larger models.

Reducing the time it takes to train huge models could help researchers make advancements faster with less expense, while also reducing the carbon emissions generated during the training process. It could also enable smaller research groups to work with these massive models, potentially opening the door to many new advances.

"As we look to democratize these types of technologies, making training faster and less expensive will become more important," says Kim, senior author of a paper on this technique.

Kim and his graduate student Lucas Torroba Hennigen wrote the paper with lead author Peihao Wang, a graduate student at the University of Texas at Austin, as well as others at the MIT-IBM Watson AI Lab and Columbia University. The research will be presented at the International Conference on Learning Representations.

The bigger the better

Large language models like GPT-3, which is at the core of ChatGPT, are built using a neural network architecture called a transformer. A neural network, loosely based on the human brain, is composed of layers of interconnected nodes, or neurons. Each neuron contains parameters, which are variables learned during the training process that the neuron uses to process data.

Transformer architectures are unique because, as these types of neural network models get bigger, they achieve much better results.

This has led to an arms race of companies trying to train larger and larger transformers on larger and larger datasets. "More so than other architectures, it seems that transformer networks get much better with scaling. We're just not exactly sure why this is the case," Kim says.

These models often have hundreds of millions or billions of learnable parameters. Training all these parameters from scratch is expensive, so researchers seek to accelerate the process.

One effective technique is known as model growth. Using the model growth method, researchers can increase the size of a transformer by copying neurons, or even entire layers of a previous version of the network, then stacking them on top. They can make a network wider by adding new neurons to a layer or make it deeper by adding additional layers of neurons.
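The copy-based growth described above can be sketched in a toy form. This is illustrative only: plain lists stand in for the framework's weight tensors, and the values are invented.

```python
# A layer with 2 neurons, each holding 2 input weights (invented values).
layer = [[0.1, 0.2],
         [0.3, 0.4]]

# Widen the layer: add new neurons by copying the existing ones.
wider = layer + [row[:] for row in layer]    # now 4 neurons

# Deepen the network: stack an extra copy of the whole layer on top.
deeper = [layer, [row[:] for row in layer]]  # now 2 layers

print(len(wider), len(deeper))  # 4 2
```

Real model-growth implementations apply the same copy-and-stack idea directly to a transformer's weight matrices rather than Python lists.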

In contrast to previous approaches for model growth, parameters associated with the new neurons in the expanded transformer are not just copies of the smaller network's parameters, Kim explains. Rather, they are learned combinations of the parameters of the smaller model.

Learning to grow

Kim and his collaborators use machine learning to learn a linear mapping of the parameters of the smaller model. This linear map is a mathematical operation that transforms a set of input values, in this case the smaller model's parameters, into a set of output values, in this case the parameters of the larger model.

Their method, which they call a learned Linear Growth Operator (LiGO), learns to expand the width and depth of a larger network from the parameters of a smaller network in a data-driven way.

But the smaller model may actually be quite large. Perhaps it has a hundred million parameters, and researchers might want to make a model with a billion parameters. So the LiGO technique breaks the linear map into smaller pieces that a machine-learning algorithm can handle.

LiGO also expands width and depth simultaneously, which makes it more efficient than other methods. "A user can tune how wide and deep they want the larger model to be when they input the smaller model and its parameters," Kim explains.
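The idea of growing weights as learned linear combinations can be sketched as follows. Here `A` and `B` are stand-ins for trained expansion matrices and the numbers are invented; this is not the paper's actual implementation, just the shape of the idea that every weight of the larger model is a linear combination of the smaller model's weights.

```python
def matmul(X, Y):
    # Plain-Python matrix multiply: rows of X against columns of Y.
    return [[sum(x * y for x, y in zip(row, col)) for col in zip(*Y)]
            for row in X]

W_small = [[1.0, 2.0],
           [3.0, 4.0]]                    # pretrained 2x2 weight matrix

A = [[1, 0], [0, 1], [1, 1], [1, -1]]    # 4x2: expands the rows to width 4
B = [[1, 0, 1], [0, 1, 1]]               # 2x3: expands the columns to width 3

# Each entry of W_large is a linear combination of W_small's entries.
W_large = matmul(matmul(A, W_small), B)  # a grown 4x3 weight matrix
print(len(W_large), len(W_large[0]))     # 4 3
```

In LiGO, matrices playing the role of `A` and `B` are themselves learned from data, so the grown model starts from an initialization that encodes the smaller model's knowledge.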

When they compared their technique to the process of training a new model from scratch, as well as to model-growth methods, it was faster than all the baselines. Their method saves about 50 percent of the computational costs required to train both vision and language models, while often improving performance.

The researchers also found they could use LiGO to accelerate transformer training even when they didn't have access to a smaller, pretrained model.

"I was surprised by how much better all the methods, including ours, did compared to the random initialization, train-from-scratch baselines," Kim says.

In the future, Kim and his collaborators are looking forward to applying LiGO to even larger models.

The work was funded, in part, by the MIT-IBM Watson AI Lab, Amazon, the IBM Research AI Hardware Center, Center for Computational Innovation at Rensselaer Polytechnic Institute, and the U.S. Army Research Office.

See the original post here:
Learning to grow machine-learning models | MIT News | Massachusetts Institute of Technology - MIT News

Dense reinforcement learning for safety validation of autonomous vehicles – Nature.com

View original post here:
Dense reinforcement learning for safety validation of autonomous vehicles - Nature.com

Biological research and self-driving labs in deep space supported by artificial intelligence – Nature.com

Read the original:
Biological research and self-driving labs in deep space supported by artificial intelligence - Nature.com

What Is OpenAI Gym and How Can You Use It? – MUO – MakeUseOf

If you can't build a machine learning model from scratch or lack the infrastructure, connecting your app to a working model can bridge the gap.

Artificial intelligence is here for everyone to use, one way or another. As for OpenAI Gym, it offers many explorable training grounds to feed your reinforcement learning agents.

What is OpenAI Gym, how does it work, and what can you build using it?

OpenAI Gym is a Pythonic API that provides simulated training environments for reinforcement learning agents to act based on environmental observations; each action comes with a positive or negative reward, which accrues at each time step. While the agent aims to maximize rewards, it gets penalized for each unexpected decision.

The time step is a discrete-time tick at which the environment transitions into another state. Time steps accumulate as the agent's actions change the environment's state.

The OpenAI Gym environments are based on the Markov Decision Process (MDP), a dynamic decision-making model used in reinforcement learning. Thus, it follows that rewards only come when the environment changes state. And the events in the next state only depend on the present state, as MDP doesn't account for past events.
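The Markov property can be made concrete with a toy transition table. The states, actions, and rewards below are invented for illustration; the point is that the next state and reward depend only on the current state and the chosen action, never on the history.

```python
# (state, action) -> (next_state, reward); a tiny hand-built MDP.
transitions = {
    ("start", "left"):  ("hole", -1.0),
    ("start", "right"): ("ice",   0.0),
    ("ice",   "left"):  ("start", 0.0),
    ("ice",   "right"): ("goal", +1.0),
}

def step(state, action):
    # Only the current state and action matter: the Markov property.
    return transitions[(state, action)]

state, total = "start", 0.0
for action in ["right", "right"]:   # a two-step trajectory
    state, reward = step(state, action)
    total += reward

print(state, total)  # goal 1.0
```

OpenAI Gym environments implement the same step logic behind a common interface, with the transition dynamics hidden inside each environment.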

Before moving on, let's dive into an example for a quick understanding of OpenAI Gym's application in reinforcement learning.

Assuming you intend to train a car in a racing game, you can spin up a racetrack in OpenAI Gym. In reinforcement learning, if the vehicle turns right instead of left, it might get a negative reward of -1. The racetrack changes at each time step and might get more complicated in subsequent states.

Negative rewards or penalties aren't bad for an agent in reinforcement learning. In some cases, a penalty encourages the agent to achieve its goal more quickly. Thus, the car learns about the track over time and masters its navigation using reward streaks.

For instance, we initiated the FrozenLake-v1 environment, where an agent gets penalized for falling into ice holes but rewarded for recovering a gift box.

Our first run generated few penalties and no rewards.

However, a third iteration produced a more complex environment, and the agent earned a few rewards.

The outcome above doesn't imply that the agent will improve in the next iteration. While it may successfully avoid more holes the next time, it may get no reward. But modifying a few parameters might improve its learning speed.

The OpenAI Gym API revolves around a few core components: an Env class whose reset method starts an episode and whose step method advances it, action_space and observation_space attributes that describe the valid actions and observations, and wrappers that modify an existing environment's behavior.
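The shape of that interface can be sketched without installing anything. `CoinFlipEnv` below is a made-up toy, but its reset method returning `(observation, info)` and its step method returning a 5-tuple mirror the Gymnasium API contract.

```python
import random

class CoinFlipEnv:
    """Toy class mimicking the shape of the Gymnasium Env interface."""

    def reset(self, seed=None):
        self._rng = random.Random(seed)
        self._steps = 0
        observation, info = 0, {}
        return observation, info

    def step(self, action):
        self._steps += 1
        observation = self._rng.randint(0, 1)         # next coin face
        reward = 1.0 if action == observation else -1.0
        terminated = self._steps >= 3                 # episode ends after 3 flips
        truncated = False
        return observation, reward, terminated, truncated, {}

env = CoinFlipEnv()
obs, info = env.reset(seed=42)
obs, reward, terminated, truncated, info = env.step(action=1)
print(reward in (1.0, -1.0), terminated)  # True False
```

Real environments follow the same loop: reset once, then repeatedly call step until the episode terminates or is truncated.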

Since OpenAI Gym allows you to spin up custom learning environments, here are some ways to use it in a real-life scenario.

You can leverage OpenAI Gym's gaming environments to reward desired behaviors, create gaming rewards, and increase complexity per game level.

Where there's a limited amount of data, resources, and time, OpenAI Gym can be handy for developing an image recognition system. On a deeper level, you can scale it to build a face recognition system, which rewards an agent for identifying faces correctly.

OpenAI Gym also offers intuitive environment models for 3D and 2D simulations, where you can implement desired behaviors into robots. Roboschool is an example of scaled robot simulation software built using OpenAI Gym.

You can also build marketing solutions like ad servers, stock trading bots, sales prediction bots, product recommender systems, and many more using the OpenAI Gym. For instance, you can build a custom OpenAI Gym model that penalizes ads based on impression and click rate.

Some ways to apply OpenAI Gym in natural language processing are multiple-choice questions involving sentence completion or building a spam classifier. For example, you can train an agent to learn sentence variations to avoid bias while marking participants.

OpenAI Gym supports Python 3.7 and later versions. To set up an OpenAI Gym environment, install gymnasium, the actively maintained fork of Gym:

pip install gymnasium

Next, spin up an environment. You can create a custom environment, though. But start by playing around with an existing one to master the OpenAI Gym concept.

The code below spins up FrozenLake-v1. The env.reset method returns the initial observation:

import gymnasium as gym

env = gym.make("FrozenLake-v1")
observation, info = env.reset()

Some environments require extra libraries to work. If you need to install another library, the exception message will recommend it.

For example, you'll install an additional library (gymnasium[toy-text]) to run the FrozenLake-v1 environment.

One of the setbacks in AI and machine learning development is the shortage of infrastructure and training datasets. But as you look to integrate machine learning models into your apps or devices, it's all easier now with ready-made AI models widely available online. While some of these tools are low-cost, others, including OpenAI Gym, are free and open-source.

The rest is here:
What Is OpenAI Gym and How Can You Use It? - MUO - MakeUseOf

Machine Learning Programs Predict Risk of Death Based on Results From Routine Hospital Tests – Neuroscience News

Summary: Using ECG data, a new machine learning algorithm was able to predict death within 5 years of a patient being admitted to hospital with 87% accuracy. The AI was able to sort patients into 5 categories ranging from low to high risk of death.

Source: University of Alberta

If you've ever been admitted to hospital or visited an emergency department, you've likely had an electrocardiogram, or ECG, a standard test involving tiny electrodes taped to your chest that checks your heart's rhythm and electrical activity.

Hospital ECGs are usually read by a doctor or nurse at your bedside, but now researchers are using artificial intelligence to glean even more information from those results to improve your care and the health-care system all at once.

In recently published findings, the research team built and trained machine learning programs based on 1.6 million ECGs done on 244,077 patients in northern Alberta between 2007 and 2020.

The algorithm predicted the risk of death from that point for each patient from all causes within one month, one year and five years with an 85 percent accuracy rate, sorting patients into five categories from lowest to highest risk.

The predictions were even more accurate when demographic information (age and sex) and six standard laboratory blood test results were included.

The study is a proof-of-concept for using routinely collected data to improve individual care and allow the health-care system to learn as it goes, according to principal investigator Padma Kaul, professor of medicine and co-director of the Canadian VIGOUR Centre.

"We wanted to know whether we could use new methods like artificial intelligence and machine learning to analyze the data and identify patients who are at higher risk for mortality," Kaul explains.

"These findings illustrate how machine learning models can be employed to convert data collected routinely in clinical practice to knowledge that can be used to augment decision-making at the point of care as part of a learning health-care system."

A clinician will order an electrocardiogram if you have high blood pressure or symptoms of heart disease, such as chest pain, shortness of breath or an irregular heartbeat. The first phase of the study examined ECG results in all patients, but Kaul and her team hope to refine these models for particular subgroups of patients.

They also plan to focus the predictions beyond all-cause mortality to look specifically at heart-related causes of death.

"We want to take data generated by the health-care system, convert it into knowledge and feed it back into the system so that we can improve care and outcomes. That's the definition of a learning health-care system."

Author: Ross Neitz
Source: University of Alberta
Contact: Ross Neitz, University of Alberta
Image: The image is in the public domain

Original Research: Open access. "Towards artificial intelligence-based learning health system for population-level mortality prediction using electrocardiograms" by Padma Kaul et al. npj Digital Medicine

Abstract

Towards artificial intelligence-based learning health system for population-level mortality prediction using electrocardiograms

The feasibility and value of linking electrocardiogram (ECG) data to longitudinal population-level administrative health data to facilitate the development of a learning healthcare system has not been fully explored. We developed ECG-based machine learning models to predict risk of mortality among patients presenting to an emergency department or hospital for any reason.

Using the 12-lead ECG traces and measurements from 1,605,268 ECGs from 748,773 healthcare episodes of 244,077 patients (2007–2020) in Alberta, Canada, we developed and validated ResNet-based Deep Learning (DL) and gradient boosting-based XGBoost (XGB) models to predict 30-day, 1-year, and 5-year mortality. The models for 30-day, 1-year, and 5-year mortality were trained on 146,173, 141,072, and 111,020 patients and evaluated on 97,144, 89,379, and 55,650 patients, respectively. In the evaluation cohort, 7.6%, 17.3%, and 32.9% patients died by 30-days, 1-year, and 5-years, respectively.

ResNet models based on ECG traces alone had good-to-excellent performance with area under receiver operating characteristic curve (AUROC) of 0.843 (95% CI: 0.838–0.848), 0.812 (0.808–0.816), and 0.798 (0.792–0.803) for 30-day, 1-year and 5-year prediction, respectively; and were superior to XGB models based on ECG measurements with AUROC of 0.782 (0.776–0.789), 0.784 (0.780–0.788), and 0.746 (0.740–0.751).

This study demonstrates the validity of ECG-based DL mortality prediction models at the population-level that can be leveraged for prognostication at point of care.

Here is the original post:
Machine Learning Programs Predict Risk of Death Based on Results From Routine Hospital Tests - Neuroscience News