Archive for the ‘Machine Learning’ Category

Machine learning-based observation-constrained projections reveal elevated global socioeconomic risks from wildfire – Nature.com

Applying traditional EC for global fire carbon emissions

The recently developed emergent constraint (EC) approach has demonstrated robust capability in reducing the uncertainty in characterizing or projecting Earth system variables simulated by a multimodel ensemble25,26. The basic concept of EC is that, despite distinct model structures and parameters, various across-model relationships (emergent constraints) exist between pairs of quantities when outputs from multiple models are analyzed27. The EC concept is therefore especially useful for deriving the relationship between a variable that is difficult or impossible to measure (e.g., future wildfires) and a second, measurable variable (e.g., historical wildfires) across multiple ESMs. We start with global total values and find a significant linear relationship between historical and future global total fire carbon emissions across 38 ensemble members of 13 ESMs (Supplementary Fig. 2a). Because we are particularly interested in the spatial distribution of future wildfires, which is critical for quantifying future socioeconomic risks from wildfires, we further apply the EC concept to every grid cell of the globe, using either a single constraint variable (historical fire carbon emissions) or multiple constraint variables (the atmospheric and terrestrial variables in Supplementary Table 2), with the latter shown in Supplementary Fig. 2b. We find insignificant linear relationships between these historical fire-relevant variables and future wildfires in the historically fire-prone regions across the analyzed 38 members of 13 ESMs. The failure of the traditional EC concept in constraining fire carbon emissions at local scales could be attributed to the highly nonlinear interactions between fire and its drivers, which are likely inadequately captured by the linear relationship under the EC assumption. Therefore, we further develop an MLT-based constraint to deal with the complex response of wildfires to environmental and socioeconomic drivers.
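The traditional EC step described above can be illustrated with a minimal sketch: fit a linear relationship between historical and future values across ensemble members, then read off the constrained projection at the observed historical value. This is a Python illustration with synthetic numbers, not the study's actual data or code; the ensemble size (38) matches the text, but the values and units are invented.

```python
import numpy as np

# Synthetic ensemble: 38 members' historical and future global fire C emissions.
rng = np.random.default_rng(0)
n_members = 38
hist = rng.normal(2.0, 0.5, n_members)                 # historical values (made up)
future = 1.2 * hist + rng.normal(0.0, 0.1, n_members)  # across-model linear relation

# Fit the emergent relationship future = a * hist + b across members.
a, b = np.polyfit(hist, future, 1)

# Constrain the projection with a (hypothetical) observed historical value.
obs_hist = 2.1
constrained_future = a * obs_hist + b
unconstrained_future = future.mean()  # plain multimodel mean, for comparison
print(constrained_future, unconstrained_future)
```

The constrained estimate replaces the raw multimodel mean with the regression value at the observation, which is the essence of the EC approach that the per-grid-cell analysis then finds inadequate at local scales.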

MLT provide powerful tools for capturing the nonlinear and interactive roles among regulators of an Earth system feature, thereby facilitating an effective, multivariate constraint on wildfire activity, which represents an integrated function of climate, terrestrial ecosystem, and socioeconomic conditions. MLT have been widely applied for identifying empirical regulators32 and building prediction systems for global and regional fire activity35. To constrain the projected fire carbon emissions simulated by 13 ESMs using observational data, the current study establishes an MLT-based emergent relationship between future fire carbon emissions and historical fire carbon emissions, climate, terrestrial ecosystem, and socioeconomic drivers.

Here, we use MLT to examine the empirical relationships between historical, observed influencing factors of wildfires and future fire carbon emissions from ESMs and then feed observational data into the trained machine learning models (Supplementary Fig. 3). To train the MLT to use historical states for the prediction of future fire carbon emissions, the historical and future simulations from SSP5-8.5 (Shared Socioeconomic Pathway)36, a high-emission scenario, are analyzed for the 13 currently available ESMs in CMIP6 (Supplementary Table 1). A subset of these ESMs (i.e., nine ESMs that provide simulations under a lower-emission scenario, SSP2-4.5) is also analyzed to examine the dependence of fire regimes on the socioeconomic pathway. The training is conducted using the spatial sample of decadal-mean predictors and target variable, both individually from each ESM and from their aggregation, with the latter referred to as the multimodel mean and subsequently analyzed for projecting fire carbon emissions and their socioeconomic risks. Corresponding to the spatial resolution of the observational products of fire carbon emission, all model outputs are bilinearly interpolated to a 0.25° × 0.25° grid, resulting in a spatial sample of 11,325 points per model for the training. To perform the observational constraint, the historical observed predictors are then fed into the trained machine learning models. The historical predictors are listed in Supplementary Table 2 with their observational data sources, temporal coverages, and spatial resolutions. For the atmospheric and terrestrial variables, the annual mean value and the climatology of each of the 12 calendar months are included as predictors. This training and observational constraining is performed for target decades (2011–2020, 2021–2030, …, 2091–2100), and the historical period is always 2001–2010.
Future changes in fire carbon emission are quantified and expressed as the relative trend (% decade⁻¹) (i.e., the ratio between the absolute trend and the mean value during the 2010s), for both the default and observation-constrained ensembles.
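The relative-trend metric is straightforward to compute; the sketch below uses made-up decadal means for a single grid cell (the decade coding, units, and values are assumptions for illustration only):

```python
import numpy as np

# Hypothetical decadal-mean emissions for one grid cell (arbitrary units),
# one value per decade from the 2010s through the 2090s.
decadal_means = np.array([10.0, 10.8, 11.5, 12.1, 12.9, 13.4, 14.2, 14.9, 15.5])
decades = np.arange(len(decadal_means))  # 0 = 2011-2020, 1 = 2021-2030, ...

# Absolute trend (units per decade) from a least-squares fit.
abs_trend = np.polyfit(decades, decadal_means, 1)[0]

# Relative trend (% per decade), normalized by the 2010s mean.
rel_trend = 100.0 * abs_trend / decadal_means[0]
print(round(rel_trend, 2))
```

Normalizing by the 2010s mean, as in the text, expresses the change relative to the current state rather than in absolute emission units.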

The current spatial-sample training approach establishes a history-future relationship for each pixel using the entire global sample. To minimize local prediction errors for a given pixel, MLT search all pixels, regardless of their geographical location, to optimize the prediction model of future fires at the target pixel. In this way, a physically robust history-future relationship is established based on the global sample of locations, whereas influences of localized features, such as socioeconomic development, on wildfire trends are naturally damped in our approach (Supplementary Figs. 10 and 11). The reliability of MLT is degraded when the actual observational data space is insufficiently covered by the training (historical CMIP6 simulation) data space, namely the extrapolation uncertainty. Here, we further evaluate the data spaces of both the observations and the historical simulations of the climate and fire variables (Supplementary Fig. 14) and find that all the assessed variables largely overlap, indicating minimal extrapolation error in the current MLT application.
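A simple way to screen for the extrapolation risk discussed above, in the spirit of the data-space comparison in Supplementary Fig. 14, is to check what fraction of observed values falls inside the range spanned by the training (simulated) sample. This is a synthetic-data sketch of the idea, not the paper's exact diagnostic:

```python
import numpy as np

# Synthetic stand-ins for one variable's training and observed samples.
rng = np.random.default_rng(7)
sim = rng.normal(0.0, 1.0, 5000)   # training values (historical simulation)
obs = rng.normal(0.1, 0.9, 1000)   # observed values

# Fraction of observations covered by the simulated range; values outside
# this range would require the trained model to extrapolate.
lo, hi = sim.min(), sim.max()
covered = np.mean((obs >= lo) & (obs <= hi))
print(covered)
```

A coverage fraction near 1 for every predictor indicates the observational constraint stays within the data space the MLT were trained on.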

To minimize the projection uncertainty associated with the selected machine learning algorithms, this study examines three MLT: random forest (rf), support vector machine with a radial basis function kernel (svmRadialCost), and gradient boosting machine (gbm). These three algorithms differ substantially in their functioning. The average among these algorithms is thus believed to better capture the complex interrelation between the historical predictors and future fire carbon emissions than any single algorithm. The MLT analysis is performed using the caret, dplyr, randomForest, kernlab, and gbm packages in the R statistical software. The prediction model is fitted for each MLT using the training data set that targets each future decade, with parameters optimized for the minimum RMSE via 10-fold cross-validation; in other words, a randomly chosen nine-tenths of the entire spatial sample (n = 10,193) is used for model fitting and the remaining one-tenth (n = 1,132) for validation, and the process is repeated 10 times. For svmRadialCost, the optimal pair of cost parameter (C) and kernel parameter (sigma) is searched from 30 (tuneLength = 30) C candidates and their individually associated optimal sigma. For gbm, we set the complexity of trees (interaction.depth) to 3 and the learning rate (shrinkage) to 0.2, and let the train function search for the optimal number of trees from 10 to 200 with an increment of 5 (10, 15, 20, …, 200). For rf, the number of variables available for splitting at each tree node (mtry) is allowed to vary between 5 and 50 with an increment of 1 (5, 6, 7, …, 50); the number of trees is determined by the algorithm provided by the randomForest package and the train function of the caret package. The cross-validation R²s exceed 0.8 (n = 1,132) for all optimized MLT and all future periods.
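The paper performs this tuning in R with caret's train(); the numpy-only Python sketch below mimics the core loop, 10-fold cross-validation selecting the hyperparameter with minimum RMSE, using ridge regression as a stand-in for the actual MLT and synthetic data in place of the spatial sample:

```python
import numpy as np

# Synthetic regression problem standing in for the spatial training sample.
rng = np.random.default_rng(42)
n, p = 1000, 5
X = rng.normal(size=(n, p))
y = X @ np.array([1.0, -2.0, 0.5, 0.0, 3.0]) + rng.normal(scale=0.5, size=n)

def ridge_fit(X, y, lam):
    """Closed-form ridge solution (the tunable model in this sketch)."""
    k = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(k), X.T @ y)

# 10 folds, analogous to caret's 10-fold cross-validation.
folds = np.array_split(rng.permutation(n), 10)
lambdas = [0.01, 0.1, 1.0, 10.0, 100.0]  # candidate hyperparameter grid
cv_rmse = []
for lam in lambdas:
    errs = []
    for k in range(10):
        test_idx = folds[k]
        train_idx = np.concatenate([folds[j] for j in range(10) if j != k])
        w = ridge_fit(X[train_idx], y[train_idx], lam)
        resid = y[test_idx] - X[test_idx] @ w
        errs.append(np.sqrt(np.mean(resid ** 2)))
    cv_rmse.append(np.mean(errs))

# Keep the hyperparameter with minimum mean cross-validated RMSE.
best_lam = lambdas[int(np.argmin(cv_rmse))]
print(best_lam, min(cv_rmse))
```

The same select-by-minimum-RMSE logic applies whether the tuned parameter is sigma and C (svmRadialCost), the number of trees (gbm), or mtry (rf).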
The currently examined ESMs, MLT, and hundreds of observational data set combinations constitute a multimodel, multidata set ensemble of projected fire carbon emissions for the twenty-first century. This multimodel, multidata set ensemble allows natural quantification of uncertainty in the future projection derived from observational sources and MLT, compared with a previous single-MLT, single-observation approach67.

This MLT-based observational constraining approach is validated for a historical period using the emergent relationship between fire, climate, ecosystem, and socioeconomic conditions during 1997–2006 and fire carbon emissions during 2007–2016. The spatial correlation and RMSE with the observed decadal-mean fire carbon emission (n = 11,325) are evaluated and compared for the constrained and unconstrained ensembles, as reported in the main text (Figs. 1 and 2). The RMSE and R² produced by the traditional EC approach, which constrains fire carbon emissions during 2007–2016 with fire carbon emissions during 1997–2006, are reported along with the MLT-based observational constraint in Fig. 1e, f. The MLT-based observational constraining approach is also applied to the six ESMs that report burned area fraction, and validation is likewise conducted and reported in Supplementary Fig. 6.

Because the MLT are trained using the global spatial sample, we expect the performance of MLT to be sensitive to the spatial resolution of the training data set. This assumption is tested by varying the interpolation grids (1°, 2.5°, 5°, and 10° latitude by longitude) of the ESMs and fitting MLT using the training data at each specific resolution for the validation period (Supplementary Fig. 7). Observational data sets at 0.25° resolution are subsequently fed into the fitted MLT models, regardless of the input model data resolution. This sensitivity test sheds light on the importance of spatial resolution to our observational constraining and thereby implies potential accuracy improvements of our MLT-based observational constraint with the development of higher-resolution ESMs.

Here, we define the socioeconomic exposure to wildfires as the product of decadal-mean fire carbon emission and the number of people, amount of GDP, and agricultural area exposed to the burning in each grid cell, following a previous definition for extreme heat68. These exposure metrics measure the amount of population, GDP, and agricultural area affected by wildfires, whose severity is represented by the amount of fire carbon emission. The projected population at 1/8° × 1/8° resolution under SSP5-8.5 is obtained from the National Center for Atmospheric Research's Integrated Assessment Modeling Group and the City University of New York Institute for Demographic Research69. The projected GDP at 1-km resolution under SSP5 is disaggregated from national GDP projections using nighttime light and population70. The agricultural area projection at 0.05° × 0.05° resolution under SSP5-8.5 is obtained from the Global Change Analysis Model and a geospatial downscaling model (Demeter)71. All the projected socioeconomic variables are resampled to 0.25° × 0.25° resolution before the calculation of exposure to fire carbon emission. Future changes in socioeconomic exposure to wildfires are quantified as the relative trend (% decade⁻¹) (i.e., the ratio between the absolute trend and the mean value during the 2010s) for the default and observation-constrained ensembles. These relative changes provide direct implications for what the future would be like compared with the current state, regardless of the potential biases simulated by the default ESMs.
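The exposure definition is a cell-wise product; the toy example below illustrates it on a 2 × 2 grid with invented values, all fields assumed already resampled to the common 0.25° grid:

```python
import numpy as np

# Invented 2x2 fields on a common grid (units are placeholders).
fire_c = np.array([[0.0, 1.2], [0.4, 2.0]])       # decadal-mean fire C emission
population = np.array([[500, 20], [1000, 5]])     # people per cell
gdp = np.array([[3.0, 0.1], [8.0, 0.05]])         # GDP per cell (e.g., M$)
crop_area = np.array([[10.0, 0.5], [25.0, 0.0]])  # agricultural area (km^2)

# Exposure = fire severity x amount of each asset in the cell.
pop_exposure = fire_c * population
gdp_exposure = fire_c * gdp
agr_exposure = fire_c * crop_area
print(pop_exposure.sum(), gdp_exposure.sum(), agr_exposure.sum())
```

Cells with zero burning contribute nothing regardless of how many people or assets they hold, which is the intended behavior of an exposure metric.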

The mechanisms underlying the projected evolution in fire carbon emissions are explored in two tasks, addressing the importance of drivers from historical and dynamical perspectives. The first task assesses the relative contribution of each environmental and socioeconomic driver's historical distribution to the projected future wildfire distribution, to directly understand how the current observational constraint works (Supplementary Fig. 8). The second task examines the relative contribution of each driver's projected trend to the projected wildfire trends in a specific region, to disentangle the dynamical mechanisms underlying the future evolution of regional wildfires (Supplementary Fig. 9). These tasks benefit from the importance score provided as an output of MLT. Although the calculation of importance scores varies substantially by MLT, all the importance scores qualitatively reflect the relative importance of each predictor when making a prediction. For each tree in both rf and gbm, the prediction accuracy on the out-of-bag portion of the data is recorded. Then, the same is done after permuting each predictor variable. For rf, the differences are averaged for each tree and normalized by the standard error. For gbm, the importance order is first calculated for each tree and then summed over each boosting iteration. For svm, we estimate the contribution of a single variable by training the model on all variables except that specific variable. The difference in performance between that model and the one with all variables is then considered the marginal contribution of that particular variable; such marginal contributions are standardized to derive each variable's relative importance. Because we apply multiple MLT in this study, the average importance scores from these MLT are reported in the corresponding figures for robustness.
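The permutation idea behind the rf and gbm scores can be sketched as follows: shuffle one predictor, re-evaluate the model, and take the loss of skill as that predictor's importance. This synthetic example uses a fitted linear model purely for illustration, not the actual tree ensembles:

```python
import numpy as np

# Synthetic data: predictor 0 matters most, predictor 2 not at all.
rng = np.random.default_rng(1)
n = 2000
X = rng.normal(size=(n, 3))
y = 3.0 * X[:, 0] + 1.0 * X[:, 1] + 0.0 * X[:, 2] + rng.normal(scale=0.3, size=n)

# Fit a simple least-squares model as the stand-in predictor.
w, *_ = np.linalg.lstsq(X, y, rcond=None)

def rmse(Xm):
    return np.sqrt(np.mean((y - Xm @ w) ** 2))

base = rmse(X)
importance = []
for j in range(3):
    Xp = X.copy()
    Xp[:, j] = rng.permutation(Xp[:, j])   # break the link between X_j and y
    importance.append(rmse(Xp) - base)     # skill lost without predictor j
print([round(v, 3) for v in importance])
```

Predictors that carry real signal show a large RMSE increase when permuted; irrelevant ones show roughly none, which is the qualitative ranking the study averages across MLT.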

In the first task, the importance of each historical driver to future global wildfire distributions is examined in three MLT models (random forest, support vector machine, and gradient boosting machine) that are trained for projecting future fire carbon emissions (Supplementary Fig. 8). For the atmospheric and terrestrial variables that include the annual mean and monthly climatology as predictors, to account for the overall importance of a particular variable while considering the possible overlap of information contained in each month and the annual mean, the importance of each variable is represented by the highest importance score among these 13 predictors (annual mean, January, February, …, December). The importance score of each historical driver reflects the relative weight of each historical, environmental driver in determining the spatial pattern of fire carbon emissions in each future decade.
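Collapsing the 13 per-variable predictors (annual mean plus 12 monthly climatologies) to a single variable-level score by taking the maximum looks like this; the variable names and score values are hypothetical:

```python
# 13 importance scores per variable: [annual mean, Jan, Feb, ..., Dec].
# All numbers are invented for illustration.
scores = {
    "temperature": [0.30, 0.10, 0.12, 0.35, 0.28, 0.20,
                    0.15, 0.10, 0.10, 0.10, 0.10, 0.10, 0.10],
    "precipitation": [0.22] + [0.05] * 12,
}

# Variable-level importance = maximum over its 13 predictors, so a variable
# that matters strongly in one month is not diluted by quiet months.
variable_importance = {v: max(s) for v, s in scores.items()}
print(variable_importance)
```

Here "temperature" would be scored by its March climatology (0.35) rather than its annual mean, capturing seasonally concentrated influence.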

In the second task, the dynamical importance of each environmental driver's future evolution is assessed for targeted tropical regions (i.e., the Amazon and Congo) and major land cover types (tropical forests, other forests, shrublands, savannas, grasslands, and croplands) in both the default and constrained ensembles, through the importance of each driver's trend to the projected wildfire trend. For the default ensemble, the three MLT models (random forest, support vector machine, and gradient boosting machine) are used to predict the spatial distribution of simulated trends in fire carbon emissions using the simulated trends in the socioeconomic, atmospheric, and terrestrial variables considered in our observational constraint for wildfires, for each ESM and their multimodel mean. This analysis excludes flash rate, another predictor in constraining future wildfires, because it is not dynamically simulated by most ESMs. For the observation-constrained ensemble, we first constrain the projected atmospheric and terrestrial variables in each future decade, using an approach similar to that used to constrain future fire carbon emissions, for each individual ESM and their multimodel aggregation. In this constraint for environmental drivers, all the variables in Supplementary Table 2 are considered as predictors, thereby achieving self-consistency of the constrained future evolution of all these fire-relevant variables. Because the socioeconomic trends are prescribed by the SSPs, future socioeconomic developments are not constrained in the current approach. Then, the same three MLT models are used to predict the spatial distribution of constrained trends in fire carbon emissions using the constrained trends in those environmental and socioeconomic drivers. For computational efficiency, only the annual mean trends in the environmental drivers are constrained and analyzed in this task.
The importance scores of projected trends in socioeconomic and environmental drivers reflect their dynamic role in future evolution of wildfires in the target tropical regions. Here, the Amazon and Congo regions are shown as examples of how this analysis is applied to understand regional wildfire evolutions, though the mechanism underlying the future evolution of wildfires in other regions could be similarly explored.

Read the rest here:
Machine learning-based observation-constrained projections reveal elevated global socioeconomic risks from wildfire - Nature.com

OpenShift 4.10: Red Hat teams with Nvidia to add AI and machine learning – ZDNet


You can run Kubernetes straight from the code, but few companies have the nerves to do it. Instead, they turn to programs such as Red Hat's OpenShift, which make orchestrating containers much easier. Now, with its most recent update, Red Hat OpenShift 4.10 is also adding artificial intelligence (AI) and machine learning (ML) functionality to its bag of tricks.


Red Hat is pulling this off by allying with Nvidia. Specifically, this latest OpenShift version is certified to work with NVIDIA AI Enterprise 2.0, an AI software suite designed to help both experienced and new AI companies quickly get to work on AI development and deployment. It does this by providing proven, open-source containers and frameworks. These are certified to run on common data center platforms from both Red Hat and VMware vSphere with Tanzu. This setup uses NVIDIA-Certified servers configured with GPUs or CPU only, either on-premises or in the cloud. The idea is to give customers a ready-to-run AI platform so companies can focus on creating business value from AI, not on running the AI infrastructure.

Red Hat customers can now deploy Red Hat OpenShift on NVIDIA-Certified Systems with Nvidia AI Enterprise software, as well as on the previously supported Nvidia DGX A100 systems, another high-performance AI compute platform. This also enables organizations to quickly deploy an AI infrastructure to consolidate and accelerate the MLOps lifecycle.

Of course, there's more to this latest OpenShift update than AI support. Red Hat OpenShift 4.10 also supports a wider spectrum of cloud-native workloads across the open hybrid cloud by supporting additional public clouds and hardware architectures. These new features and capabilities include:

Installer-provisioned infrastructure (IPI) support for Azure Stack Hub, Alibaba Cloud, and IBM Cloud, with the latter two available as a technology preview. Users can now use the IPI process for fully automated, integrated, one-click installation of OpenShift 4.

Running Red Hat OpenShift on Arm processors. Arm support will be available in two ways: full-stack automation (IPI) for Amazon Web Services (AWS) and user-provisioned infrastructure (UPI) for bare metal on pre-existing infrastructure.

Red Hat OpenShift availability on NVIDIA LaunchPad. NVIDIA LaunchPad provides free access to curated labs for enterprise IT and AI professionals to experience Nvidia-accelerated systems and software.

Many customers have long been awaiting the day they could run OpenShift on ARM. It offers cost savings, reduced power consumption, and, in some scenarios, performance gains. It finally appeared as a beta last summer, and now it's ready to run in production. As Eddie Ramirez, ARM's vice president of Infrastructure Line of Business, said, "By adding support for Arm to OpenShift, Red Hat is providing software developers with compelling, new choices in AI processing and helping to unlock the benefits of high performing, cost-efficient Arm-based processors in hybrid cloud-based environments."

The brand-new OpenShift also includes three new compliance operators, so if your business works in retail, electrical utilities, or federal government contracting, you can make certain your Kubernetes clusters comply with those sectors' standards.

Finally, OpenShift 4.10 has improved its security by making sandboxed containers, based on Kata containers, generally available. Sandboxed containers provide an optional additional layer of isolation for workloads with stringent application-level security requirements. This complements OpenShift's older built-in security functionality such as SELinux, role-based access control (RBAC), projects, security context constraints (SCCs), and Kubernetes network policies.

Security improvements have also been made to keep OpenShift clusters running well in disconnected or air-gapped settings. With this, you can mirror OpenShift images and keep them up to date even though the clusters are usually disconnected.

NVIDIA AI Enterprise 2.0 on OpenShift 4.10, and OpenShift 4.10 itself, are now generally available.


Here is the original post:
OpenShift 4.10: Red Hat teams with Nvidia to add AI and machine learning - ZDNet

West Virginia University researchers believe machine learning may predict where need for COVID tests is greatest – WV News


Read this article:
West Virginia University researchers believe machine learning may predict where need for COVID tests is greatest - WV News

Alchemab Selected to Access NVIDIA Cambridge-1 Supercomputer to Advance Machine Learning Enabled Antibody Discovery – Business Wire

BOSTON & CAMBRIDGE, England--(BUSINESS WIRE)--Alchemab Therapeutics, a biotechnology company focused on the discovery and development of naturally occurring protective antibodies and immune repertoire-based patient stratification tools, has been selected by NVIDIA to harness the power of the UK's most powerful supercomputer, Cambridge-1. Alchemab will use the NVIDIA DGX SuperPOD supercomputing cluster, powered by NVIDIA DGX A100 systems, to gain greater understanding and insights from its extensive neurology and oncology datasets.

"We are honored to collaborate with NVIDIA to advance our work applying machine learning to the prediction of antibody structure and function," said Douglas A. Treco, PhD, Chief Executive Officer of Alchemab Therapeutics. "Using Cambridge-1, Alchemab will vastly accelerate our capabilities, and we are excited about the potential to collaborate with NVIDIA's world-leading team to better understand the language of antibodies."

Craig Rhodes, EMEA Industry Lead for Healthcare and Life Sciences at NVIDIA, commented: "Cambridge-1 enables the application of machine learning to help solve the most pressing clinical challenges, advance health research through digital biology, and unlock a deeper understanding of diseases. The system drives workloads that are scaled and optimised for supercomputing and will help extraordinary organisations like Alchemab, a member of the NVIDIA Inception program, to further their research on antibodies and other protective therapeutics for hard-to-treat diseases."

"Our collaboration with NVIDIA will unlock countless opportunities to advance Alchemab's state-of-the-art platform, facilitating the discovery of novel therapeutics and patient stratification techniques," said Jake Galson, PhD, Head of Technology at Alchemab Therapeutics. "Machine learning is accelerating research across multiple therapeutic areas and will be pivotal in helping Alchemab predict the function of novel antibodies based on their sequence alone."

An individual's antibody repertoire encodes information about past immune responses and the potential for future disease protection. Alchemab believes that deciphering the information stored in these antibody sequence datasets will transform the fundamental understanding of disease and enable the discovery of novel diagnostics and antibody therapeutics. Using self-supervised machine learning, Alchemab has developed the antibody-specific language model AntiBERTa (Antibody-specific Bi-directional Encoder Representation from Transformers), a 12-layer transformer model that provides a contextualized numeric representation of antibody sequences. AntiBERTa learns biologically relevant information and is primed for multiple downstream tasks that are improving our understanding of the language of antibodies.

Attend Alchemab's session on deciphering the language of antibodies on March 24 at GTC, a free-to-register global AI conference. Find more details on the Nvidia Inception program here. Find project updates and more information on Cambridge-1 projects here.

About Alchemab

Alchemab has developed a highly differentiated platform which enables the identification of novel drug targets, therapeutics and patient stratification tools by analysis of patient antibody repertoires. The platform uses well-defined patient samples, deep B cell sequencing and computational analysis to identify convergent protective antibody responses among individuals that are susceptible but resilient to specific diseases.

Alchemab is building a broad pipeline of protective therapeutics for hard-to-treat diseases, with an initial focus on neurodegenerative conditions and oncology. The highly specialized patient samples that power Alchemab's platform are made available through valued partnerships and collaborations with patient representative groups, biobanks, industry partners and academic institutions.

For more information, visit http://www.alchemab.com.

Originally posted here:
Alchemab Selected to Access NVIDIA Cambridge-1 Supercomputer to Advance Machine Learning Enabled Antibody Discovery - Business Wire

AI and Machine Learning: The Present and the Future – Marketscreener.com

We have heard the adage "data is the new oil." Data has become one of the most critical assets to enterprises globally. Digitalization of organizations has opened up a new horizon in customer outreach, customer service and customer interactions. Every interaction with a customer is now a data footprint, one with massive potential to be harnessed when viewed and analyzed in totality.

The collection and processing of data is facilitated by new technologies such as 5G mobile networks and edge computing (in a previous blog, I spoke about how edge is ushering in a business transformation; read it here). The time is ripe, then, for enterprises to tap into the transformative effects of artificial intelligence (AI) and machine learning (ML).

Early forays into AI were inhibited by a lack of computing and processing power, but today that barrier has largely been lifted thanks to progress in both IT infrastructure and software. Artificial intelligence has also evolved greatly as myriad industries recognize its ability to help businesses stay relevant, improve operations, gain competitive advantage and pursue new business directions. The AI space is growing exponentially: Gartner has predicted that the business value of AI will reach $5.1 billion by 2025.

For the digitally connected consumer, examples of AI are commonplace. Commonly used applications with AI at their core include Apple's Siri, Amazon's Alexa and navigation applications such as Waze and Google Maps that recommend best routes to take based on current traffic conditions.

What's perhaps less known is how AI and ML are being applied to great transformative effect in a variety of use cases today. With the vast number of data endpoints now in play, the convergence of AI and the internet of things (IoT), in which sensors installed in machines stream information to be processed and analyzed, has been greatly beneficial to industries.
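A common pattern at this AI-IoT convergence is streaming anomaly detection, where each incoming sensor reading is compared against a rolling statistical baseline. The sketch below is a minimal illustration of the idea; the function name, window size and threshold are assumptions for the example, not details from the article:

```python
from collections import deque
from statistics import mean, stdev

def detect_anomalies(readings, window=5, threshold=3.0):
    """Flag readings that deviate from the rolling mean of the trailing
    window by more than `threshold` standard deviations."""
    history = deque(maxlen=window)
    anomalies = []
    for i, value in enumerate(readings):
        if len(history) == window:
            mu, sigma = mean(history), stdev(history)
            if sigma > 0 and abs(value - mu) > threshold * sigma:
                anomalies.append((i, value))
        history.append(value)
    return anomalies

# Simulated vibration-sensor stream with one fault spike.
stream = [10.1, 10.3, 9.9, 10.0, 10.2, 10.1, 25.0, 10.0]
print(detect_anomalies(stream))  # the 25.0 spike at index 6 is flagged
```

In production this logic would typically run at the edge, close to the sensor, so that only the flagged events need to travel to a central platform.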

AI plays an instrumental role in the manufacturing industry, assisting in matters ranging from demand forecasting to quality assurance to predictive maintenance and, of course, cost savings. A McKinsey report revealed that 64% of respondents in the manufacturing sector who adopted some form of AI enjoyed cost savings of at least 10%, with 37% of respondents reporting cost savings of more than 20%.

A large global food manufacturer used machine learning to improve planning coordination across its marketing, sales, account management and supply chain, which resulted in a 20% reduction in forecast errors, a 30% reduction in lost sales, a 30% reduction in product obsolescence and a 50% reduction in demand planners' workload.

A premier automobile manufacturer, meanwhile, used automated image recognition: AI evaluates component images during production and compares them in milliseconds to hundreds of other images of the same sequence to detect deviations from the standard in real time. The AI application also checks whether all required parts have been mounted, and whether they have been mounted in the right place. It's also deployed in other parts of the manufacturing process, such as dust particle analysis at the paint shop, where vehicle surfaces are painted and dust particles on those surfaces must be eliminated. There, AI algorithms compare real-time data from dust particle sensors in the paint booths and dryers with a comprehensive database developed for dust particle analysis. The result: highly sensitive manufacturing systems benefit from even greater precision during the production process.
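At its simplest, a deviation check like this compares a candidate image against known-good reference images and rejects parts that differ beyond a tolerance. Real systems use trained vision models rather than raw pixel differences, so the tolerance value, the nested-list image encoding and the function names below are purely illustrative assumptions:

```python
def mean_abs_diff(img_a, img_b):
    """Mean absolute per-pixel difference between two equal-sized
    grayscale images, given as nested lists of 0-255 intensities."""
    total, count = 0, 0
    for row_a, row_b in zip(img_a, img_b):
        for pa, pb in zip(row_a, row_b):
            total += abs(pa - pb)
            count += 1
    return total / count

def deviates_from_standard(candidate, references, tolerance=10.0):
    """A part image passes if it is close to at least one reference image."""
    return min(mean_abs_diff(candidate, ref) for ref in references) > tolerance

reference = [[200, 200], [200, 200]]
good_part = [[198, 202], [201, 199]]
bad_part  = [[120, 90], [200, 30]]
print(deviates_from_standard(good_part, [reference]))  # False
print(deviates_from_standard(bad_part, [reference]))   # True
```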

Over in Japan, Konica Minolta, an imaging technology firm, embedded AI and ML into its Dynamic Digital Radiography (DDR) healthcare solution. Backed by IT infrastructure from Dell Technologies capable of processing up to 300 images in a single scan and animating those images in mere minutes, DDR enabled medical practitioners to make better predictions concerning lung ventilation and perfusion (oxygen and blood flow) in X-rays, so a patient's treatment plan could be more easily determined.

Governments' focus on smart cities, too, has given AI an opportunity to shine in many ways. From a citizen security standpoint, AI-backed security camera footage can be analyzed in real time to detect criminal behavior so it can be instantly reported and dealt with. Automatic number-plate recognition (ANPR), a technology that uses optical character recognition on images to read vehicle registration plates from camera footage, can be used to great effect for traffic management and to predict traffic for planning purposes. AI is also used to assist with predictive maintenance for public infrastructure, pollution control and waste management (where AI-powered robots can sort through rubbish and clean lakes and rivers).
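After an OCR engine extracts candidate text from a camera frame, an ANPR pipeline typically normalizes the text and validates it against jurisdiction-specific plate formats before logging a match. A minimal post-OCR sketch, assuming a UK-style two-letter/two-digit/three-letter format and a few common letter-for-digit OCR confusions (both assumptions for the example):

```python
import re

# Hypothetical plate pattern: two letters, two digits, three letters.
# Real deployments configure per-jurisdiction rules.
PLATE_RE = re.compile(r"^[A-Z]{2}\d{2}[A-Z]{3}$")

# Letters OCR commonly mistakes for digits in the numeric positions.
OCR_FIXES = str.maketrans({"O": "0", "I": "1", "S": "5", "B": "8"})

def normalize_plate(raw_text):
    """Clean raw OCR output and return the plate if it matches the
    expected format, else None."""
    text = re.sub(r"[^A-Za-z0-9]", "", raw_text).upper()
    if PLATE_RE.match(text):
        return text
    # Retry with confusion fixes applied only to the digit positions (2-3).
    fixed = text[:2] + text[2:4].translate(OCR_FIXES) + text[4:]
    return fixed if PLATE_RE.match(fixed) else None

print(normalize_plate("AB 12 CDE"))  # AB12CDE
print(normalize_plate("AB I2 CDE"))  # AB12CDE (letter I read as digit 1)
print(normalize_plate("HELLO"))      # None
```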

The future for artificial intelligence and machine learning will be unbelievably exciting. The potential is immense, and we have only scratched the surface. As Gartner puts it, there are four trends driving the AI industry - responsible AI, small and wide data, operationalization of AI platforms and efficient use of resources.

As we have seen with some of the customers quoted above, Dell Technologies continues to invest and work in this space, collaborating with our customers and our partners to fully harness the power of these evolving technologies. In times to come, we will see more analytics-driven, transformative business outcomes. Fasten your seat belts - this is taking off.

Originally posted here:
AI and Machine Learning: The Present and the Future - Marketscreener.com