Archive for the ‘Machine Learning’ Category

Generative deep learning for the development of a type 1 diabetes simulator | Communications Medicine – Nature.com

See the original post here:
Generative deep learning for the development of a type 1 diabetes simulator | Communications Medicine - Nature.com

Integrating core physics and machine learning for improved parameter prediction in boiling water reactor operations … – Nature.com

Low-fidelity and high-fidelity data

The LF model was built with the US NRC code Purdue Advanced Reactor Core Simulator (PARCS)19. It contains three different fuel bundle types, each with its own uranium enrichment and gadolinia concentration, and includes 560 fuel bundles encircled by reflectors. In addition to this radial layout, there are 26 axial planes: 24 fuel nodes plus a reflector node at the top and bottom planes.

In this work, the model was built in quarter symmetry to save computational time and further reduce the data complexity20. The symmetry was applied in the radial direction only; the axial discretization was explicitly modeled from the bottom to the top of the reactor, reflector to reflector, because the axial variation of a BWR is not symmetrical and must be modeled in sufficient detail. Accordingly, the boundary conditions were set to reflective on the west and north faces of the radial core and to vacuum (zero incoming neutron current) in the other directions.

For developing the ML model, the depletion steps were reduced to 12 steps, from the typical 30–40 depletion steps. The PARCS cross-section library was generated using CASMO-4 for fuel lattices and reflectors. The library includes group constants from eight lattice simulations over control rod positions, coolant density, and fuel temperature. Lattices were simulated at 23 kW/g of heavy metal power density to a burnup of 50 GWd/MT of initial heavy metal.

The HF data were collected using Serpent21 Monte Carlo simulations. The model was created to reproduce the PARCS solutions for the same core conditions but at higher resolution and with a state-of-the-art simulation approach: no diffusion approximation is made, and continuous-energy neutron transport is modeled in detailed geometry structures. Each Serpent calculation was run with 500,000 particles, 500 active cycles, and 100 inactive cycles. The other simulation settings were also optimized for depletion calculations.

The reactor model used in this work is based on cycle 1 of the Edwin Hatch Unit 1 nuclear power plant. The power plant, located near Baxley, Georgia, is a boiling water reactor of the BWR-4 design, developed by General Electric, with a net electrical output of approximately 876 MWe and 2436 MWth of thermal output. Since its commissioning in 1975, Unit 1 has operated with a core design containing uranium dioxide fuel assemblies, utilizing a direct cycle where water boils within the reactor vessel to generate steam that drives turbines.

The specification of cycle 1 of Hatch reactor unit 1 is presented in Table 5. While it is a commercial, large power plant, Hatch 1 is not as large as a typical 1,000 MWe LWR, and some BWR designs have about 700–800 assemblies. Nevertheless, because the core design was available for this work, the model is a viable test case.

There are 560 fuel bundles, each the size of a 7×7 GE lattice, in the Hatch 1 Cycle 1 model. Among these bundles there are three different fuel types with varying enrichments and burnable absorbers. Following the procedure for running the Serpent model, high-resolution simulations were obtained, as shown in the geometry representation in Fig. 6. In the figure, different colors represent different material definitions in Serpent. Because the materials were defined individually, the color scheme varies from pin to pin and assembly to assembly. Individual material definitions at the pin level were required to capture the isotopic concentrations and instantaneous state variables at different fuel exposures and core conditions.

Geometry representation of the full-size BWR core modeled in Serpent. Images were generated by the Serpent geometry plotter.

A total of 2400 data points were collected as samples for this work, covering various combinations of control blade patterns, core flow rates, and 12 different burnup steps. These data points come from 200 independent cycle runs of both PARCS and Serpent, which provide the LF and HF simulation data, respectively. The collected data were processed into a single HDF5 file.
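As a minimal sketch of how such a file might be read back for training, assuming hypothetical dataset names (the text only states that the samples were gathered into a single HDF5 file):

```python
import h5py
import numpy as np

# Minimal sketch of reading the processed dataset. The file path and dataset
# names ("core_flow_rate", "nodal_power_lf", ...) are hypothetical.
with h5py.File("bwr_lf_hf_data.h5", "r") as f:
    core_flow = np.array(f["core_flow_rate"])        # scalar per sample
    cr_pattern = np.array(f["control_rod_pattern"])  # 2D map per sample
    exposure = np.array(f["nodal_exposure"])         # 3D map per sample
    k_lf = np.array(f["k_eff_lf"])
    k_hf = np.array(f["k_eff_hf"])
    power_lf = np.array(f["nodal_power_lf"])         # e.g. (N, 14, 14, 26)
    power_hf = np.array(f["nodal_power_hf"])

# The training targets are the LF-to-HF errors (see Eq. 4 below).
e_k = k_hf - k_lf
e_p = power_hf - power_lf
```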

Data processing consists of a data split and data normalization. The data are separated into different sets with a training-validation-test ratio of 70:15:15. The training data are used to teach the network, the validation data to tune hyperparameters and prevent overfitting, and the test data to evaluate the model's generalization performance on unseen data. From the 2400 data points (200 cycles), the dataset was separated into:

Train Dataset: 140 runs or 1680 data points

Validation Dataset: 30 runs or 360 data points

Test Dataset: 30 runs or 360 data points

The data splitting process was not conducted randomly, but based on the average control blade position in a cycle run. Figure 7 presents the distribution of the average control rod inserted in the reactor. The maximum number of steps is 48 for fully withdrawn blades. In the plot, it can be inferred that the test data have the lowest average CR position (largest insertion), followed by the validation set, and the train data have the highest average CR position (smallest insertion).

Train-validation-test data split based on average control blade position in the BWR core. Image was generated using Python Matplotlib Library.

The purpose of the CR-based splitting is to demonstrate the generalization of the model to out-of-sample CR positions. Random splitting, on the other hand, is not preferred for small datasets such as this one, because the ML model tends to overfit (or imitate) the data. The fixed (CR-based) split used here tests whether the model can perform well on data with a different distribution than the training dataset.

After splitting the data, normalization is important for the ML model to ensure data integrity and avoid anomalies. Here, the data processing employs Min-Max scaling, a common normalization technique, to rescale the features to the range [0, 1]. This is achieved by subtracting the minimum value of each feature and then dividing by the range of that feature. The scaler is fit on the training data using the MinMaxScaler class from the scikit-learn package, and the same scaling is then applied to the validation and test data, as sketched below.
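A minimal sketch of the CR-based split and the Min-Max scaling, using synthetic placeholder arrays in place of the real feature matrix (array names and feature counts are illustrative, not taken from the paper):

```python
import numpy as np
from sklearn.preprocessing import MinMaxScaler

# Synthetic placeholders standing in for the real data: 200 cycle runs,
# 12 depletion steps per run, arbitrary number of features.
rng = np.random.default_rng(0)
n_runs, steps_per_run, n_features = 200, 12, 6
avg_cr_position = rng.uniform(0, 48, size=n_runs)        # avg blade position per run
cycle_ids = np.repeat(np.arange(n_runs), steps_per_run)  # run index of each sample
features = rng.normal(size=(n_runs * steps_per_run, n_features))

# Non-random split by average control rod position: the runs with the deepest
# insertion (lowest position) form the test set, the next group the validation
# set, and the remaining runs the training set (140/30/30 runs).
order = np.argsort(avg_cr_position)
test_runs, val_runs, train_runs = order[:30], order[30:60], order[60:]

train_mask = np.isin(cycle_ids, train_runs)
val_mask = np.isin(cycle_ids, val_runs)
test_mask = np.isin(cycle_ids, test_runs)

# Fit the Min-Max scaler on the training data only, then apply the same
# transformation to the validation and test sets.
scaler = MinMaxScaler(feature_range=(0, 1))
x_train = scaler.fit_transform(features[train_mask])
x_val = scaler.transform(features[val_mask])
x_test = scaler.transform(features[test_mask])
```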

The target parameters used here are the core eigenvalue (\(k_{\mathrm{eff}}\)) and the power distribution. The ML model provides a correction (via predicted errors) of the target parameters, which is used to obtain the predicted HF parameters of interest. The perturbed variables are the parameters that are varied during data collection and ML modeling; they are summarized in Table 6.

In this work, a neural network architecture called BWR-ComodoNet (Boiling Water Reactor Correction Model for Diffusion Solver Network) is built, based on a 3D-2D convolutional neural network (CNN) architecture. This means that the spatial data in the input and output are processed according to their actual dimensions, as 3D and 2D arrays, while the scalar data are processed using standard dense layers.

The architecture of BWR-ComodoNet is presented in Fig. 8. The three input features (core flow rate, control rod pattern, and nodal exposure) enter three different channels of the network. The scalar parameter goes directly into a dense layer in the encoding stage, while the 2D and 3D parameters enter 2D and 3D CNN layers, respectively. The encoding stage ends when all channels are concatenated into one array and connected to dense layers.

Architecture of BWR-ComodoNet using 3D-2D CNN-based encoder-decoder neural networks. Image was generated using draw.io diagram application.

The decoding stage follows the shape of the target data: the outputs are the \(k_{\mathrm{eff}}\) error (a scalar) and the 3D nodal power error. Since quarter symmetry is used in the calculation, the 3D nodal power has shape (14, 14, 26) in the x, y, and z dimensions, respectively. BWR-ComodoNet outputs predicted errors, so an additional post-processing step adds the predicted errors to the LF data to obtain the predicted HF data. A sketch of this type of architecture is given below.
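A minimal Keras sketch of a 3D-2D CNN encoder-decoder of this kind. Layer counts, filter sizes, and the assumed 14×14 radial map shapes are illustrative; the actual BWR-ComodoNet hyperparameters were chosen by Bayesian optimization and are not reproduced here.

```python
import tensorflow as tf
from tensorflow.keras import layers, Model

# --- encoder: three input channels ---
flow_in = layers.Input(shape=(1,), name="core_flow_rate")            # scalar
cr_in = layers.Input(shape=(14, 14, 1), name="control_rod_pattern")  # 2D map
exp_in = layers.Input(shape=(14, 14, 26, 1), name="nodal_exposure")  # 3D map

flow_x = layers.Dense(16, activation="relu")(flow_in)

cr_x = layers.Conv2D(16, 3, activation="relu", padding="same")(cr_in)
cr_x = layers.MaxPooling2D(2)(cr_x)
cr_x = layers.Flatten()(cr_x)

exp_x = layers.Conv3D(16, 3, activation="relu", padding="same")(exp_in)
exp_x = layers.MaxPooling3D(2)(exp_x)
exp_x = layers.Flatten()(exp_x)

# Concatenate all channels and connect to dense layers (end of encoding).
merged = layers.Concatenate()([flow_x, cr_x, exp_x])
latent = layers.Dense(128, activation="relu")(merged)
latent = layers.Dropout(0.2)(latent)

# --- decoder: scalar k_eff error head and 3D nodal power error head ---
k_err = layers.Dense(1, activation="linear", name="k_eff_error")(latent)

p_x = layers.Dense(14 * 14 * 26, activation="relu")(latent)
p_x = layers.Reshape((14, 14, 26, 1))(p_x)
p_err = layers.Conv3D(1, 3, padding="same", activation="linear",
                      name="nodal_power_error")(p_x)

model = Model(inputs=[flow_in, cr_in, exp_in], outputs=[k_err, p_err])
model.compile(optimizer="adam", loss="mse")
```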

The output parameters from the neural network model comprise the error in the effective neutron multiplication factor, \(k_{\mathrm{eff}}\), and the errors in nodal power, which are quantified as:

$$
\begin{aligned}
e_{k} &= k_H - k_L \\
\vec{e}_{P} &= \vec{P}_H - \vec{P}_L
\end{aligned}
$$

(4)

Here, \(e_k\) denotes the error in \(k_{\mathrm{eff}}\) and \(\vec{e}_{P}\) represents the nodal power error vector. The subscripts H and L indicate high-fidelity and low-fidelity data, respectively. According to the equation, the predicted high-fidelity data can be determined by adding the error predictions from the machine learning model to the low-fidelity solutions22.

Given the predicted errors \(\hat{e}_k\) and \(\hat{\vec{e}}_{P}\), the predicted high-fidelity data \(k_H\) and \(\vec{P}_H\) are defined as:

$$
\begin{aligned}
k_H &= k_L + \hat{e}_k = k_L + \mathscr{N}_k(\boldsymbol{\theta}, \mathbf{x}) \\
\vec{P}_H &= \vec{P}_L + \hat{\vec{e}}_{P} = \vec{P}_L + \mathscr{N}_P(\boldsymbol{\theta}, \mathbf{x})
\end{aligned}
$$

(5)

where \(\mathscr{N}_k(\boldsymbol{\theta}, \mathbf{x})\) and \(\mathscr{N}_P(\boldsymbol{\theta}, \mathbf{x})\) are the neural networks for \(k_{\mathrm{eff}}\) and power, with optimized weights \(\boldsymbol{\theta}\) and input features \(\mathbf{x}\). Although Eq. 5 appears to be a linear combination of low-fidelity parameters and predicted errors, it is important to note that the neural network responsible for predicting the errors is inherently non-linear. As a result, the predicted error is expected to encapsulate the non-linear discrepancies between the low-fidelity and high-fidelity data.
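As a small illustration of Eq. 5, the corrected high-fidelity estimates can be assembled from the low-fidelity solutions and the network outputs; the array names below follow the sketches above and are illustrative only.

```python
# Predict the errors for the test samples and add them to the LF solutions
# to obtain the predicted HF quantities (Eq. 5). flow_test, cr_test, exp_test,
# k_lf_test and power_lf_test are assumed to be prepared as in the sketches above.
e_k_pred, e_p_pred = model.predict([flow_test, cr_test, exp_test])

k_hf_pred = k_lf_test + e_k_pred.squeeze()             # corrected k_eff
power_hf_pred = power_lf_test + e_p_pred.squeeze(-1)   # corrected nodal power
```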

The machine learning architecture for predicting reactor parameters is constructed using the TensorFlow Python library. The optimization of the model is performed through Bayesian optimization, a technique that models the objective function (here, the validation loss to be minimized) with a Gaussian process (GP); this surrogate model is then used to optimize the function efficiently23. Hyperparameter tuning was conducted over 500 trials to determine the optimal configuration, including the number of layers and nodes, dropout values, and learning rates. A sketch using an off-the-shelf tuner follows.
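The text does not show its tuning code; one common way to run a Bayesian, GP-backed hyperparameter search over a Keras model is the KerasTuner library. The sketch below uses an assumed search space and a simplified dense model purely for illustration.

```python
import tensorflow as tf
import keras_tuner as kt
from tensorflow.keras import layers, Model

def build_model(hp):
    # Illustrative search space; the ranges actually used in the study
    # (layers, nodes, dropout, learning rates) are not reproduced here.
    inputs = layers.Input(shape=(6,))
    x = inputs
    for i in range(hp.Int("num_layers", 1, 4)):
        x = layers.Dense(hp.Int(f"units_{i}", 32, 256, step=32),
                         activation="relu")(x)
        x = layers.Dropout(hp.Float(f"dropout_{i}", 0.0, 0.5, step=0.1))(x)
    outputs = layers.Dense(1, activation="linear")(x)
    model = Model(inputs, outputs)
    lr = hp.Float("learning_rate", 1e-4, 1e-2, sampling="log")
    model.compile(optimizer=tf.keras.optimizers.Adam(lr), loss="mse")
    return model

# Bayesian optimization of the validation loss over 500 trials.
tuner = kt.BayesianOptimization(
    build_model,
    objective="val_loss",
    max_trials=500,
    overwrite=True,
)
# tuner.search(x_train, y_train, validation_data=(x_val, y_val), epochs=100)
# best_model = tuner.get_best_models(1)[0]
```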

The activation function employed for all layers is the Rectified Linear Unit (ReLU), chosen for its effectiveness in introducing non-linearity without significant computational cost. The output layer utilizes a linear activation function to directly predict the target data.

Regularization is implemented through dropout layers to prevent overfitting and improve model generalizability. Additionally, early stopping with a patience of 96 epochs, based on monitoring the validation loss, is employed to halt training if no improvement is observed. A learning rate schedule is also applied, reducing the learning rate by a factor of 0.1 every 100 epochs from its initial value. Training is conducted with a maximum of 512 epochs and a batch size of 64, allowing sufficient iterations to optimize the model while managing computational resources. These settings are sketched below.
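A minimal sketch of this training setup with Keras callbacks; the initial learning rate and the variable names are assumptions carried over from the sketches above.

```python
import tensorflow as tf

# Assumed starting learning rate; the tuned value is not stated in the text.
initial_lr = 1e-3

def step_decay(epoch, lr):
    # Reduce the learning rate by a factor of 0.1 every 100 epochs.
    return initial_lr * (0.1 ** (epoch // 100))

callbacks = [
    # Stop training if the validation loss does not improve for 96 epochs.
    tf.keras.callbacks.EarlyStopping(monitor="val_loss", patience=96,
                                     restore_best_weights=True),
    tf.keras.callbacks.LearningRateScheduler(step_decay),
]

history = model.fit(
    [flow_train, cr_train, exp_train],
    [e_k_train, e_p_train],
    validation_data=([flow_val, cr_val, exp_val], [e_k_val, e_p_val]),
    epochs=512,
    batch_size=64,
    callbacks=callbacks,
)
```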

It is important to note that the direct ML model mentioned in the results, which directly outputs \(k_{\mathrm{eff}}\) and nodal power, follows a different architecture and is independently optimized with distinct hyperparameters compared to the LF+ML model. This differentiation allows for tailored optimization to suit the specific objectives of each model.

See the original post here:
Integrating core physics and machine learning for improved parameter prediction in boiling water reactor operations ... - Nature.com

Top AI Certification Courses to Enroll in 2024 – Analytics Insight

Artificial intelligence (AI) is one of the most in-demand and rapidly evolving fields in the world, with applications and opportunities across various industries and domains. Whether you are a beginner or an experienced professional, acquiring an AI certification can help you boost your skills, knowledge, and career prospects in this exciting and competitive field.

CareerFoundry is an online portal that provides career-change opportunities in a variety of technology sectors, including UX design, UI design, web programming, and data analytics.

Their AI for Everyone course is a beginner-friendly and project-based course that covers the fundamentals of AI, machine learning, and deep learning.

Coursera is one of the most popular and reputable online learning platforms, offering courses, specializations, and degrees from top universities and organizations around the world. Their AI for Everyone course is a non-technical and introductory course that covers the basics of AI, machine learning, and deep learning.

Google is one of the leading and most innovative companies in the field of AI, machine learning, and deep learning, offering various tools and services to support and advance the development and deployment of AI solutions. Their TensorFlow Developer Certificate is a professional certification that validates your ability to build, train, and deploy machine learning models using TensorFlow, an open-source and widely used machine learning library.

edX is another popular and reputable online learning platform, offering courses, professional certificates, and degrees from top universities and organizations around the world. Their Professional Certificate in Machine Learning and Artificial Intelligence by Microsoft is a comprehensive and intermediate-level program that covers the key concepts and techniques of machine learning and artificial intelligence.

Coursera also offers specializations, which are collections of courses that focus on a specific topic or skill. The Natural Language Processing Specialization by National Research University Higher School of Economics is a specialized and advanced-level program that covers the theory and practice of natural language processing (NLP).

Udacity is another popular and reputable online learning platform, offering nanodegrees, which are project-based and career-oriented programs that focus on a specific topic or skill. Their AI Engineer Nanodegree is a comprehensive and intermediate-level program that covers a wide range of AI topics and techniques, such as computer vision, natural language processing, reinforcement learning, generative AI, and more.

IBM is another leading and innovative company in the field of AI, machine learning, and deep learning, offering various tools and services to support and advance the development and deployment of AI solutions. Their AI Engineering Professional Certificate is a comprehensive and intermediate-level program that covers the fundamentals and applications of machine learning and deep learning, using Python, TensorFlow, Keras, PyTorch, and other tools and frameworks.

Coursera also offers specializations from deeplearning.ai, an online education platform founded by Andrew Ng and dedicated to teaching and promoting deep learning, an area of AI that uses neural networks to learn from data and produce predictions or decisions. Their Deep Learning Specialization is a foundational and intermediate-level program that covers the basics and applications of neural networks and deep learning.

edX also offers MicroMasters programs, which are collections of graduate-level courses that focus on a specific topic or skill. The MicroMasters Program in Artificial Intelligence by Columbia University is an advanced and rigorous program that covers the theory and practice of artificial intelligence, machine learning, and deep learning, using Python, TensorFlow, PyTorch, and other tools and frameworks.

Coursera also offers professional certificates from IBM, which are collections of courses that focus on a specific topic or skill. Their IBM Applied AI Professional Certificate is a beginner-friendly and practical program that covers the basics and applications of artificial intelligence, machine learning, and deep learning.

Original post:
Top AI Certification Courses to Enroll in 2024 - Analytics Insight

Artificial Intelligence Market towards a USD 2,745 bn by 2032 – Market.us Scoop – Market News

Introduction

Artificial Intelligence (AI) is a transformative technology that aims to mimic human intelligence and perform tasks that typically require human cognitive abilities. It encompasses various subfields, such as machine learning, natural language processing, computer vision, and robotics. AI systems are designed to analyze vast amounts of data, learn from patterns, make predictions, and automate complex processes. The potential applications of AI are vast, ranging from healthcare and finance to transportation and manufacturing.

The global Artificial Intelligence (AI) market is set to reach approximately USD 2,745 billion by 2032, marking a substantial increase from USD 177 billion in 2023, with a steady CAGR of 36.8%.

The AI market has been experiencing rapid growth, driven by advancements in technology, increased data availability, and the need for automation and intelligent decision-making. Organizations across industries are recognizing the value of AI in improving efficiency, enhancing customer experiences, and gaining a competitive edge. The AI market encompasses a wide range of solutions, including AI software platforms, AI-enabled hardware, and AI services.

In conclusion, AI is a transformative technology with immense potential to revolutionize various industries. The AI market is experiencing significant growth, driven by technological advancements and the increasing demand for intelligent automation and decision-making capabilities. Gathering data from reliable sources and staying informed about emerging trends can provide valuable insights into the AI market, enabling organizations to leverage AI effectively and drive innovation in their respective fields.

Excerpt from:
Artificial Intelligence Market towards a USD 2,745 bn by 2032 - Market.us Scoop - Market News

The Top 3 Machine Learning Stocks to Buy in March 2024 – InvestorPlace

You may be hearing the term "AI bubble" a lot these days, especially regarding the stock market. Since OpenAI released its artificial-intelligence (AI) chatbot ChatGPT in Nov. 2022, it feels like every company in the world has been getting into the AI business.

Machine learning is a type of AI that allows computers to learn the way humans do and to use that learning to replicate human behavior. As you might imagine, machine learning has the potential to reduce the cost and time of human tasks and eliminate redundant work.

Companies are set to save billions of dollars by integrating machine learning tools and software into their businesses. As investors, it is important to look not only at which companies are successfully using machine learning, but also at the companies providing these tools. This article discusses three of the top machine-learning stocks to buy while the AI industry remains red-hot.

NVIDIA (NASDAQ:NVDA) is the global leader in producing GPUs that can power machine-learning computers. The stock has been on a tear over the past year, returning north of 240% to shareholders while surging up the list of the world's most valuable companies. Despite such unprecedented growth, analysts tracked by Yahoo Finance remain optimistic, with one-year targets ranging from an average of $852.10 to a high of $1,400.00.

When it comes to machine-learning GPUs, NVIDIA is second to none in the semiconductor industry. NVIDIA has more demand for its chips than it has supply, even at elevated prices, and its customers include some of the most powerful companies in the world.

You might think that a stock that has risen by more than 240% in one year is overinflated. The fact is that NVIDIA's revenue has grown so fast that its growth has kept pace with its stock valuation. Comparatively, NVIDIA's forward P/E ratio of 34.25x is still lower than the likes of Amazon and Tesla. As long as AI and machine learning are being adopted, NVIDIA's stock should continue to reap rewards for investors.

Tesla (NASDAQ:TSLA) is a company that needs no introduction. It is the largest manufacturer of electric vehicles in the world and single-handedly revolutionized the auto industry. While its stock has lagged behind its other Magnificent 7 counterparts in 2024 due to the high-interest-rate environment, its consensus one-year price target still reaches a high of $345.00.

So, how does an electric vehicle company operate in the machine-learning industry? Tesla, led by CEO Elon Musk, has long been trying to master self-driving technology. Tesla's FSD, or Full Self-Driving, software has faced some roadblocks from regulatory agencies like the NHTSA in America, but Musk remains confident that it will be available to all Tesla users in the future.

Tesla's stock still trades at a premium, especially since the company has reported declining operating margins and fairly stagnant revenue growth. The forward P/E ratio shows that TSLA is trading at about 65x forward earnings, nearly double that of NVIDIA. As mentioned, Tesla's stock could continue to struggle until interest rates begin to decline. Savvy long-term investors might be taking this period of consolidation as a time to load up on the high-growth stock.

Palantir (NYSE:PLTR) is a data analytics and software company that has a very polarizing following on social media. At one time, Palantir was looked at as a meme stock, but the company has since proven to be profitable and has exhibited impressive growth.

While the operations of Palantir have always been shrouded in mystery, the company has made clear progress in growing its customer base over the past few years. One way it has done this is by introducing its AIP, or Artificial Intelligence Platform. AIP uses machine learning to help large-scale enterprises unlock insights from large sets of data. From this analysis, companies can identify inefficiencies and operate at a higher level.

We did mention that Palantir's stock is trading at the high end of analyst estimates, right? Well, although it is a much smaller company, Palantir's valuation multiples currently dwarf those of both NVIDIA and Tesla: at its current price, Palantir's stock trades at about 25x sales and 79x future earnings. With the potential to be considered for S&P 500 inclusion later this year, and management guiding FY2024 revenue of around $2.6 billion, Palantir is a worthy company to consider for capitalizing on machine learning.

On the date of publication, Ian Hartana and Vayun Chugh did not hold (either directly or indirectly) any positions in the securities mentioned in this article. The opinions expressed in this article are those of the writer, subject to the InvestorPlace.com Publishing Guidelines.

Chandler Capital is the work of Ian Hartana and Vayun Chugh. Ian Hartana and Vayun Chugh are both self-taught investors whose work has been featured in Seeking Alpha. Their research primarily revolves around GARP stocks with a long-term investment perspective encompassing diverse sectors such as technology, energy, and healthcare.

View original post here:
The Top 3 Machine Learning Stocks to Buy in March 2024 - InvestorPlace