Archive for the ‘Machine Learning’ Category

Alchemab Selected to Access NVIDIA Cambridge-1 Supercomputer to Advance Machine Learning Enabled Antibody Discovery – Business Wire

BOSTON & CAMBRIDGE, England--(BUSINESS WIRE)--Alchemab Therapeutics, a biotechnology company focused on the discovery and development of naturally occurring protective antibodies and immune repertoire-based patient stratification tools, has been selected by NVIDIA to harness the power of the UK's most powerful supercomputer, Cambridge-1. Alchemab will use the NVIDIA DGX SuperPOD supercomputing cluster, powered by NVIDIA DGX A100 systems, to gain greater understanding and insights from its extensive neurology and oncology datasets.

"We are honored to collaborate with NVIDIA to advance our work applying machine learning to the prediction of antibody structure and function," said Douglas A. Treco, PhD, Chief Executive Officer of Alchemab Therapeutics. "Using Cambridge-1 will vastly accelerate our capabilities, and we are excited about the potential to collaborate with NVIDIA's world-leading team to better understand the language of antibodies."

Craig Rhodes, EMEA Industry Lead for Healthcare and Life Sciences at NVIDIA, commented: "Cambridge-1 enables the application of machine learning to help solve the most pressing clinical challenges, advance health research through digital biology, and unlock a deeper understanding of diseases. The system drives workloads that are scaled and optimised for supercomputing and will help extraordinary organisations like Alchemab, a member of the NVIDIA Inception program, to further their research on antibodies and other protective therapeutics for hard-to-treat diseases."

"Our collaboration with NVIDIA will unlock countless opportunities to advance Alchemab's state-of-the-art platform, facilitating the discovery of novel therapeutics and patient stratification techniques," said Jake Galson, PhD, Head of Technology at Alchemab Therapeutics. "Machine learning is accelerating research across multiple therapeutic areas and will be pivotal in helping Alchemab predict the function of novel antibodies based on their sequence alone."

An individual's antibody repertoire encodes information about past immune responses and potential for future disease protection. Alchemab believes that deciphering the information stored in these antibody sequence datasets will transform the fundamental understanding of disease and enable the discovery of novel diagnostics and antibody therapeutics. Using self-supervised machine learning, Alchemab has developed the antibody-specific language model AntiBERTa (Antibody-specific Bi-directional Encoder Representation from Transformers), a 12-layer transformer model that provides a contextualized numeric representation of antibody sequences. AntiBERTa learns biologically relevant information and is primed for multiple downstream tasks that are improving our understanding of the language of antibodies.
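To make the idea of a contextualized sequence representation concrete, here is a minimal sketch of how embeddings are typically extracted from a BERT-style protein language model with the Hugging Face transformers library. The checkpoint name, the space-separated residue tokenization, and the mean-pooling step are illustrative assumptions, not Alchemab's published pipeline.

```python
# Minimal sketch: pulling contextualized embeddings from a BERT-style
# antibody/protein language model. The checkpoint name is a placeholder.
import torch
from transformers import AutoModel, AutoTokenizer

MODEL_NAME = "some-org/antibody-bert"  # hypothetical checkpoint

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModel.from_pretrained(MODEL_NAME)
model.eval()

# An antibody heavy-chain fragment written as space-separated residues
# (many protein language models tokenize one amino acid at a time).
sequence = "E V Q L V E S G G G L V Q P G G S L R L S C A A S"

inputs = tokenizer(sequence, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# outputs.last_hidden_state has shape (batch, seq_len, hidden): one
# contextualized vector per residue; mean-pool for a whole-sequence vector.
embedding = outputs.last_hidden_state.mean(dim=1)
print(embedding.shape)  # e.g. torch.Size([1, 768]) for a 12-layer BERT-base
```

A representation like this can then feed downstream tasks such as classification or similarity search over antibody sequences.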

Attend Alchemab's session on deciphering the language of antibodies on March 24 at GTC, a free-to-register global AI conference. Find more details on the NVIDIA Inception program here. Find project updates and more information on Cambridge-1 projects here.

About Alchemab

Alchemab has developed a highly differentiated platform which enables the identification of novel drug targets, therapeutics and patient stratification tools by analysis of patient antibody repertoires. The platform uses well-defined patient samples, deep B cell sequencing and computational analysis to identify convergent protective antibody responses among individuals that are susceptible but resilient to specific diseases.

Alchemab is building a broad pipeline of protective therapeutics for hard-to-treat diseases, with an initial focus on neurodegenerative conditions and oncology. The highly specialized patient samples that power Alchemab's platform are made available through valued partnerships and collaborations with patient representative groups, biobanks, industry partners and academic institutions.

For more information, visit http://www.alchemab.com.

Originally posted here:
Alchemab Selected to Access NVIDIA Cambridge-1 Supercomputer to Advance Machine Learning Enabled Antibody Discovery - Business Wire

AI and Machine Learning: The Present and the Future – Marketscreener.com

We have heard the adage "data is the new oil." Data has become one of the most critical assets to enterprises globally. Digitalization of organizations has opened up a new horizon in customer outreach, customer services and customer interactions. Every interaction with a customer is now a data footprint - with massive potential to be harnessed when viewed and analyzed in totality.

The collection and processing of data is facilitated by new technologies such as 5G mobile networks and edge computing (in a previous blog I spoke about how edge is ushering in a business transformation - read here). The time, then, is ripe for enterprises to tap into the transformative effects of artificial intelligence (AI) and machine learning (ML).

Early forays into AI were inhibited by a lack of computing and processing power, but today that barrier has largely been lifted due to progress in both the IT infrastructure and software spaces. Artificial intelligence has also evolved greatly as myriad industries recognize its ability to help businesses stay relevant, improve operations, gain competitive advantage and pursue new business directions. The AI space is growing exponentially: Gartner has predicted that the business value of AI will reach $5.1 billion by 2025.

For the digitally connected consumer, examples of AI are commonplace. Commonly used applications with AI at their core include Apple's Siri, Amazon's Alexa, and navigation applications such as Waze and Google Maps that recommend the best routes to take based on current traffic conditions.

What's perhaps less known is how AI and ML have been applied to great transformative effect in a variety of use cases today. With the vast number of data endpoints now available, the convergence of AI and the internet of things (IoT), in which sensors installed in machines stream information to be processed and analyzed, has been greatly beneficial to industries.

AI plays an instrumental role in the manufacturing industry, assisting in matters ranging from demand forecasting to quality assurance to predictive maintenance and, of course, cost savings. A McKinsey report revealed that 64% of respondents in the manufacturing sector who adopted some form of AI enjoyed cost savings of at least 10%, with 37% of respondents reporting cost savings of more than 20%.

A large global food manufacturer used machine learning to improve planning coordination across its marketing, sales, account management and supply chain, which resulted in a 20% reduction in forecast errors, a 30% reduction in lost sales, a 30% reduction in product obsolescence and a 50% reduction in demand planners' workload.

A premier automobile manufacturer, meanwhile, used automated image recognition, which uses AI to evaluate component images during production and compare them in milliseconds to hundreds of other images of the same sequence to determine deviations from the standard in real time. The AI application also checks whether all required parts have been mounted and whether they have been mounted in the right place. It's also deployed in other parts of the manufacturing process, such as dust particle analysis at the paint shop, where vehicle surfaces are painted and dust particle content on the surfaces needs to be eradicated. There, AI algorithms compare real-time data from dust particle sensors in the paint booths and dryers with a comprehensive database that was developed for dust particle analysis. The result: highly sensitive manufacturing systems benefited from even greater precision during the production process.
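As a rough illustration of how such image comparison can work (a generic sketch, not the manufacturer's actual system), one common pattern is to embed each component image with a pretrained CNN and flag images whose embeddings fall too far from reference images of correctly assembled parts. The file names and the similarity threshold below are placeholders.

```python
# Illustrative sketch: flag a component image as deviating when it sits
# too far from reference images of the same assembly step, using a
# pretrained CNN purely as a feature extractor.
import torch
import torchvision.models as models
import torchvision.transforms as T
from PIL import Image

device = "cuda" if torch.cuda.is_available() else "cpu"
backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
backbone.fc = torch.nn.Identity()  # keep the 512-d embedding
backbone.eval().to(device)

preprocess = T.Compose([
    T.Resize(256), T.CenterCrop(224), T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

def embed(path: str) -> torch.Tensor:
    img = preprocess(Image.open(path).convert("RGB")).unsqueeze(0).to(device)
    with torch.no_grad():
        return backbone(img).squeeze(0)

# References: images of correctly mounted parts for this production step.
reference_paths = ["ref_001.jpg", "ref_002.jpg"]  # placeholder file names
refs = torch.stack([embed(p) for p in reference_paths])

candidate = embed("component_under_test.jpg")  # placeholder file name
sims = torch.nn.functional.cosine_similarity(refs, candidate.unsqueeze(0))
THRESHOLD = 0.90  # in practice, tuned on held-out good/bad examples
if sims.max() < THRESHOLD:
    print("Deviation from standard: route for manual inspection")
```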

Over in Japan, Konica Minolta, an imaging technology firm, embedded AI and ML into its Dynamic Digital Radiography (DDR) healthcare solution. Backed by IT infrastructure from Dell Technologies capable of processing up to 300 images in a single scan and animating those images in mere minutes, DDR enabled medical practitioners to make better predictions concerning lung ventilation and perfusion (oxygen and blood flow) from X-rays, so a patient's treatment plan could be determined more easily.

Governments' focus on smart cities, too, has given AI an opportunity to shine in many ways. From a citizen security standpoint, AI-backed security camera footage can be analyzed in real time to detect criminal behavior so it can be instantly reported and dealt with. Automatic number-plate recognition (ANPR), a technology that uses optical character recognition on images to read vehicle registration plates from camera footage, can be used to great effect for traffic management and to predict traffic for planning purposes. AI is also used to assist with predictive maintenance for public infrastructure, pollution control and waste management (where AI-powered robots can sort through rubbish and clean lakes and rivers).
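A bare-bones version of the ANPR pipeline described above, localizing a roughly rectangular plate region and then running OCR on it, might look like the following sketch. It assumes OpenCV and the pytesseract binding to the Tesseract OCR engine are installed; the input file name is a placeholder, and production systems use trained plate detectors rather than contour heuristics.

```python
# Rough ANPR sketch: find a plate-like rectangle, then OCR the crop.
import cv2
import pytesseract

frame = cv2.imread("traffic_frame.jpg")  # placeholder file name
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
gray = cv2.bilateralFilter(gray, 11, 17, 17)  # denoise while keeping edges
edges = cv2.Canny(gray, 30, 200)

contours, _ = cv2.findContours(edges, cv2.RETR_LIST, cv2.CHAIN_APPROX_SIMPLE)
plate = None
for c in sorted(contours, key=cv2.contourArea, reverse=True)[:10]:
    approx = cv2.approxPolyDP(c, 0.02 * cv2.arcLength(c, True), True)
    if len(approx) == 4:  # number plates are roughly rectangular
        x, y, w, h = cv2.boundingRect(approx)
        plate = gray[y:y + h, x:x + w]
        break

if plate is not None:
    # --psm 7 tells Tesseract to treat the crop as a single line of text
    text = pytesseract.image_to_string(plate, config="--psm 7")
    print("Plate:", text.strip())
```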

The future for artificial intelligence and machine learning will be unbelievably exciting. The potential is immense, and we have only scratched the surface. As Gartner puts it, there are four trends driving the AI industry: responsible AI, small and wide data, operationalization of AI platforms, and efficient use of resources.

As we have seen with some of the customers quoted above, Dell Technologies continues to invest and work in this space, collaborating with our customers and partners to fully harness the power of these evolving technologies. In times to come, we will see more analytics-driven transformative business outcomes. Fasten your seat belts - this is taking off.

Originally posted here:
AI and Machine Learning: The Present and the Future - Marketscreener.com

Learning from the most advanced AI in manufacturing and operations – McKinsey


Making good use of data and analytics will not be done in any single bold move but through multiple coordinated actions. Despite the recent and significant advances in machine intelligence, the full scale of the opportunity is just beginning to unfold. But why are some companies doing better than others? How do companies identify where to get started based on their digital journeys?

In this episode of McKinsey Talks Operations, Bruce Lawler, managing director for the Massachusetts Institute of Technology's (MIT) Machine Intelligence for Manufacturing and Operations (MIMO) program, and Vijay D'Silva, senior partner emeritus at McKinsey, speak with McKinsey's Daphne Luchtenberg about how companies across industries and sizes can learn from leaders and integrate analytics and data to improve their operations. The following is an edited version of their conversation.

Daphne Luchtenberg: Earlier this year, McKinsey and MIT's Machine Intelligence for Manufacturing and Operations program studied 100 companies across sectors from automotive to mining. To discuss this and more, I'm joined by the authors, Vijay D'Silva, senior partner emeritus at McKinsey, and Bruce Lawler, managing director for MIT's MIMO.

Let's start with the why. What was the main driver behind the partnership, and why did we commission the research?

Vijay D'Silva: Over the past few years, we've had conversations with dozens and dozens of companies on the topic of automation and machine intelligence, and something came out of it. It was clear that we saw a rising level of attention paid to the topic. But at the same time, we saw many companies struggle while others succeeded. And it was really hard to tell why that was happening. We started by looking at the literature and saw a lot of what companies could do, or a point of view on what they should be doing in this space, but we didn't really find a lot on what actually was working for the leaders and what wasn't working for the rest. So we launched this research to try and address the question.

What we really wanted to do was get a firsthand account, across as many companies as we could find, of what drove both success and struggle across a fairly large set of companies. Based on the interviews and the surveys, we can now map out the journeys that companies should take, or could take, in accelerating progress in this space. What was particularly important was that we could define what success and failure looked like, in many cases industry by industry.

Daphne Luchtenberg: Bruce, a lot of people have had false starts, right? And we hear about bots and machine learning based on data analytics, but where did you and the team see practical examples where they were really starting to add value?

Bruce Lawler: We looked at over 100 companies in the study itself, and then we did deep-dive interviews with quite a few of them. And what we saw was that there really is a two- to threefold difference across every major operational indicator, and some examples of success stories came out. At Wayfair, for example, they use machine intelligence to optimize shipping, and they reduced their logistics cost by 7.5 percent, which in a low-margin business is huge.

A predictive maintenance company called Augury worked with Colgate-Palmolive to use predictive maintenance, and they saved 192 million tubes of toothpaste. They worked with Frito-Lay, and they saved a million pounds of product. Another example is Vistra, an energy generation company. They looked at their power plants and the overall efficiency, what they call the heat rate. They were able to reduce energy consumption by about 1 percent, which doesn't sound like a lot until you realize they generate enough energy for 20 million households. Finally, Amgen uses visual inspection to look at filled syringes, and they were able to cut false rejects by 60 percent.

Daphne Luchtenberg: That's amazing, right? Even while philosophically execs have bought into the idea of machine learning, if we get down to brass tacks, there are real examples of where it's been helpful in the context of efficiency and in operations.

Bruce Lawler: There are quite a few different use cases where the leaders focus. Those are in forecasting, transportation, logistics and predictive maintenance, as I mentioned. But close behind those were quite a few others in terms of inventory optimization, or process improvement, some early warning systems, cycle time reduction, or supply chain optimization. The bottom 50 percent did not have this type of focus. So I think a key takeaway from the study is the laser focus of the leaders on winning use cases. And second, they took a multidimensional approach.

Historically, people thought that if they hired a data scientist, that would be enough. But there actually were nine different areas that are required to be a leader, although you don't have to do them all at once. We'll give an example of Cooper Standard, which is doing very cutting-edge, real-time process control using machine learning. To be successful, they needed three big things: strategy, people, and data. Strategy: they had to decide, from an entire-company perspective, that this was important to them, that what they had today wasn't good enough, and that there were other solutions.

Second, they had to upskill the people they already had: typically, control engineers who did not understand data science and data scientists who didn't understand control engineering. They're almost exact opposite fields. Also, they gave people online access to data, and they very much empowered their frontline people as well.

On the topic of data, they had too much of it. It's a very complex process that they have, and they had to come up with new methods of data pipelining. They couldn't even use the cloud, because the data was moving so quickly they had to process it locally. And the process lines are running so quickly, they had to make local, real-time decisions.

Daphne Luchtenberg: Bruce, what other surprises did you and the team come across as you were completing the research?

Bruce Lawler: I think one of the main things was the efficacy and the efficiency of the leaders' ability to deploy at scale. For example, Bayer, an international pharmaceutical company, was able to use its governance process to triage the most valuable applications. They would then go to one plant where they were perfecting these applications. And once they'd achieved the results they'd hoped for, they would rapidly deploy them around the world to their facilities. They ended up being classified as what we call an executor in our study, even though their performance results were those of a leader.

Vijay D'Silva: I had the same observation that Bruce had. And there were two things in particular that surprised me. One was we always expected the leaders to invest more heavily than the others, because they were far more advanced and were spending more money. What was surprising was that the rate of increase in the investments, when we asked people to talk about future investments, was much higher for the leaders than for the rest. We were left with the feeling that not only was the gap large, but it was increasing.

The second thing that surprised me was the fact that leaders don't have to be large firms, and you didn't necessarily need deep pockets to become a leader. We found plenty of examples of leaders that were smaller firms, quite nimble, but able to pick their shots intelligently. That was one theme that came through across many of the companies we saw: the ability to focus their efforts where it mattered made them leaders.

Daphne Luchtenberg: Thanks, Vijay. Just to press a little further there: companies across industries and in a wide range of sizes, from blue-chip companies to greenfield sites, are all trying to integrate analytics and data to improve their operations. However, the results have been mixed. Why do some companies do so much better than others?

Vijay D'Silva: It's an interesting question, Daphne. We looked at nine different things, nine different levers that companies could pull. And out of nine, five really stood out to us as making the difference, and they were the following: governance, deployment, partnering, people, and data. Governance means the degree to which there is a top-down push from senior management, and also a purpose-driven approach to deploying the technology. Leading companies have strong governance to keep the digital programs on track and to document how the portfolio is doing. For example, a pharmaceutical company put a lot of effort into using AI in some of its plants across a number of use cases, and then had that work applied across the network. Leading firms will actually do this quite rigorously and regularly.

The second thing is, especially given the dearth of data science talent in the industry, leading firms are much more purposeful in terms of how they organize. The poor performers were more likely to spread their resources thin across multiple teams, or not have them at all. In contrast, leading companies like McDonald's, as Bruce mentioned earlier, would be more likely to have a center of excellence where they would concentrate their resources.

Deployment is literally to what degree use cases were deployed, and in what order. Leading companies had much more of it and were much more conscious of which ones mattered. And then, taking it into partnerships: partners were far more common across leading firms than the rest, which surprised us initially. But leaders were more reliant on academia, start-ups, or existing technology vendors or consultants, and used a wider range of partners than the rest. An example was Augury, the company Bruce mentioned before, used by both Colgate-Palmolive and PepsiCo's Frito-Lay, essentially using AI-driven systems and what's available out there in the market to generate impact. Analog Devices is a semiconductor firm that collaborated with MIT to use machine intelligence for quality control, catching defects in production runs.

The last one is data, specifically the democratization of data, where leaders normally put much more effort into making sure that data was accurate: 92 percent had processes to make sure that the data was available and accurate. But there was also the fact that it was available to the front line: over 50 percent of the leaders had data available to the front line, versus only 4 percent of the rest.


Daphne Luchtenberg: Thanks, Vijay. And Bruce, we've talked a bit about the four categories that the research settled on. Can you talk through what those four categories are and how you define them?

Bruce Lawler: The leaders really captured the largest gains and had the largest deployments. As a result, they have the most infrastructure and the most capabilities across the company.

Then there was the middle ground, what we call the planners and the executors. The planners have really good maturity on the enablers: they've invested in people, data infrastructure, data scientists, and their governance processes, but they haven't yet proceeded far enough along their journey to get the same results as the leaders.

Finally, we come to the executors. Executors were hyper-focused on very simply getting solid gains, typically broadly deployed, as in the Bayer example I gave earlier. To give you an idea of the differences, if I compare the leading companies to the emerging ones, leaders had about 9 percent average KPI improvement versus the emerging companies at 2 percent. Leaders had a payback period of a little over a year, where emerging companies were at two years. So, double. In terms of deployment, leaders were running 18 different use cases on average, versus six for the emerging companies.

Daphne Luchtenberg: How can companies get started on their digital journey? What do they do first?

Vijay D'Silva: We found a lot of companies that should not have started where they did. If there was one thing that we really learned from talking to the leaders, it's to start with what matters to you. There was plenty of evidence of companies starting on certain use cases and others trying to replicate that experience, which tended to fail unless it was a problem that really mattered to them. The context of each company and its strategy, we realized, was extremely important. The first thing was to start with a use case that really matters.

The second thing is around making sure that the data is available. And we've talked, over the course of this podcast, about how important data is. Leaders take data extremely seriously, very often baking it into the early parts of their processes: making sure that the accuracy of the data is right and the availability of the data is right. This has changed from a few years ago. Third, finding a vendor with a proven solution is often one of the fastest things that companies can do. There isn't a need to reinvent the wheel; the vendor landscape has simply exploded over the past few years, and there's plenty of help out there.

The fourth is driving to an early win. Momentum is extremely important here, and leaders realize the value of strong momentum to keep the engine running. So it's about starting with an early win to build up the momentum, then gradually becoming more sophisticated over time.

Daphne Luchtenberg: Thanks, Vijay. And Bruce, we talked earlier about the importance of engaging with a broader ecosystem, and that from that comes increased momentum. What did you see the leaders do in this area that was really interesting?

Bruce Lawler: This was another surprising finding. The leaders actually do work a lot with partners, even though they've invested extensively in their internal infrastructure; that's what helps them pick the best partners. Some of these partnerships are riskier, with longer timelines. For example, leaders tend to partner with start-ups, which is typically a little riskier, or with academia, which leads to longer timelines. I'll give you an example. Analog Devices worked with MIT on one of their ion implantation processes. That's part of the semiconductor manufacturing process, and it was important to them to really get this right, because of the way semiconductors are made: you lay down one layer, and it could be months before you finish the entire chip and can test it. In this case, it was worth taking the risk to determine whether a process step months earlier had actually ruined a product that you would then spend more time and money on.

Daphne Luchtenberg: I suppose it's a little bit counterintuitive, as we've been talking about bots and machine learning, that Vijay, both you and Bruce have talked about the importance of the people component. Why is that? Why does it turn out to be such an important indicator?

Vijay D'Silva: I cannot overemphasize how important this one factor turned out to be. I know it sounds trite, but as we dug into what different companies are doing, it was eye-opening in terms of what was happening on the people front in two key ways. One is in terms of building skills, and we talked about centers of excellence and the degree to which leaders are building skills to power some of these efforts. The leaders had thought about roles that the others hadn't even gotten to, for instance, machine-learning engineers versus simply data scientists and data engineers. There were four or five different categories of people that the leaders were building into the process, thinking three or four steps ahead.

The second thing is that there was greater emphasis on training their frontline employees. We saw this at McDonald's, which we mentioned before: even though there was a core group within the company developing applications, for forecasting footfall, for instance, there was a greater degree of emphasis on training the frontline staff to be able to get the most out of them. That was a theme we saw across multiple companies.

And then the third one is around access to data. The leaders were much more willing to give access to data to the front line and across the board, across the whole company, versus the rest of the companies, which would sometimes tend to be much more guarded about how they use data. That was the third thing: providing frontline employees, and employees in general, with the resources and the data they needed to succeed.

Daphne Luchtenberg: Bruce, a lot of our audience who follow McKinsey Talks Operations will be thinking about their own careers, their own personal development plans. How should they be thinking about building their own skills in this realm?

Bruce Lawler: This industry is moving so quickly that you cannot keep up with it alone. It's really a large and complex field, so no one person can know everything. What we found to be successful was a team approach. So I think learning who your trusted partners can be, whether they're vendors or even sometimes your customers, start-ups, academia, or your new employees, that's going to be what's important. And you really need to get outside points of view. Even if you're a digital native, it's a diverse space.

Daphne Luchtenberg: Thanks, Bruce. That's great. Vijay, we're coming to the end of our program, and we must thank you, Bruce, and the team for pulling this really interesting research piece together and giving us a kind of road map. Can you just give us a sense: regardless of what category an organization might feel they're in (a leader, a planner, an executor, or an emerging company), how should they be moving ahead? How should they be focusing on the next step?

Vijay D'Silva: There were four things we identified in the work that we did. The first one was having some sense of a North Star. There was always the risk that companies would bounce from one pilot to another pilot to a third. So having a clear-eyed view of what the end game is (the North Star, the goal, or whatever you call it) was extremely important, because that would guide a lot of future effort. The second thing we were struck by, across many companies we talked to, was that there wasn't enough clarity about where they stood versus their peers. The thing we felt was fairly important was to just take an honest self-assessment of where they stood compared with the state of the art today, or the state of practice.

The third one was having some sense of what a transition plan would be. There are many paths to becoming a leader, for instance whether you execute first or plan first, and having some sense of how to get there was important. Now, we recognize that the industry is changing so fast that the plan might change, but it was important to have a point of view, so that companies wouldn't spread their investment dollars too thinly. The last one was the importance of having use cases: a handful of use cases that matter to them, starting with those, and building up momentum from there. Having a clear sense of what those use cases are, and making sure the momentum and impact followed, was important.

Daphne Luchtenberg: Brilliant. Thanks, Vijay. And Bruce, we pride ourselves that this McKinsey Talks Operations series always gets pragmatic: it's not theoretical, but about what we can do next. So if I were to ask you what's the one thing that our listeners should know, should read, and should learn, how would you guide them?

Bruce Lawler: What they should know is the types of problems that make good machine-learning problems. For example: a very high-volume problem, with a large number of transactions or a large number of products; a high-rate problem, with short cycle times or short decision times; a high-complexity problem, where many interactions of different systems come together; or a highly sensitive process that requires very tight controls. As far as what you should read: any article that really describes how others have successfully used machine learning will give you ideas on what problems to solve. So, focus on the what, not the how. You want to be successful quickly, so learn from other examples. And as Vijay said, pick ones that are important to you, and then duplicate the methodology.

Last, what you should learn is what type of problem you are trying to solve and what types of problems are solvable by machine learning. For example, is it a classification problem: am I trying to classify dogs versus cats? Is it a clustering problem: am I trying to take groups of things and group them together, much like we did in this study? Prediction: am I trying to predict whether something will fail in the field in the future, even if it's working just fine now? Or anomaly detection: is something behaving really differently from everything else?
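As a hedged illustration of these four problem framings (not code from the study itself), here is how each might be set up with scikit-learn on synthetic data; real projects differ mainly in the features and labels.

```python
# The four problem types Bruce lists, each as a minimal scikit-learn setup.
from sklearn.datasets import make_classification, make_blobs, make_regression
from sklearn.ensemble import RandomForestClassifier, IsolationForest
from sklearn.cluster import KMeans
from sklearn.linear_model import LinearRegression

# 1. Classification: assign each item a label (dog vs. cat).
X, y = make_classification(n_samples=200, n_features=8, random_state=0)
clf = RandomForestClassifier(random_state=0).fit(X, y)

# 2. Clustering: group similar items together with no labels at all.
Xc, _ = make_blobs(n_samples=200, centers=4, random_state=0)
groups = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(Xc)

# 3. Prediction (regression): estimate a future quantity, e.g. time to failure.
Xr, yr = make_regression(n_samples=200, n_features=8, random_state=0)
reg = LinearRegression().fit(Xr, yr)

# 4. Anomaly detection: flag points that look unlike everything else.
iso = IsolationForest(random_state=0).fit(X)
flags = iso.predict(X)  # -1 marks anomalies, 1 marks normal points
```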


Daphne Luchtenberg: Bruce, can you say a bit more about the companies that participated?

Bruce Lawler: A little over half had 10,000 or more employees, so they are a little bit on the larger side. But 45 percent, actually, were under 10,000. And to break that down a little bit, 12 of them had just 50 to 199 employees, so they were quite small. As far as the range of industries, we covered everything from oil and gas to retail to healthcare and pharma, aerospace, and automotive. So, 17 total industry categories.

Daphne Luchtenberg: And Vijay, now that this research chapter has come to an end, what are the next steps? And what can our listeners look out for?

Vijay D'Silva: We published this on both the McKinsey and MIT websites, and we're very excited about that. We'd love your comments, and there's been a fair bit of debate that this has generated, which has been fantastic. And then, in parallel, we're going back to each of the companies that participated with our results, which include where they stand versus the others and what that might mean for them. It's a different story for each one, and that's going on as we speak. As this proceeds, our hope is that over time we expand this to a greater and greater share of industry, both in manufacturing and in operations more broadly.

As Bruce mentioned before, we've got 17 industries covered in this study. And over time we'd expect that to deepen as we get more and more companies in each of the industries, suspecting that the story and the implications will be quite different by industry and by company, depending on their size, the maturity they're at, and where they hope to get to.

Bruce Lawler: If I could just add that we are creating individual playbooks for each of the companies, so they can see exactly where they are on their journey and what immediate next steps they should be taking on their path toward being a leader, or certainly toward better KPI performance and faster paybacks.

Daphne Luchtenberg: Bruce, thank you so much for sharing these insights. Vijay, thank you very much for being part of this conversation. I'd summarize it this way: some of these efficiency and operational gains are definitely within reach, and those companies that haven't yet made the first move should do so forthwith. Would you agree, Vijay, Bruce?

Bruce Lawler: Absolutely.

Vijay DSilva: Absolutely.

Daphne Luchtenberg: Thank you so much for spending some time with us today. And we look forward to being back with you all soon for our next program of McKinsey Talks Operations.

You've been listening to McKinsey Talks Operations with me, Daphne Luchtenberg. If you like what you've heard, subscribe to our show on Apple Podcasts, Spotify, or wherever you listen. We'll be back with a new episode in a couple of weeks.

View post:
Learning from the most advanced AI in manufacturing and operations - McKinsey

New DDN Storage Appliance Doubles Performance for NVIDIA DGX AI Solutions and Speeds Up Analytics and Machine Learning in the Cloud by 100% – PR…

Next-Generation Flash and Hybrid DDN A3I AI400X2 Storage Appliances Deliver Enhanced Efficiency and Usability for NVIDIA DGX POD, DGX SuperPOD, and Enterprise AI Data Workloads

CHATSWORTH, Calif., March 22, 2022 /PRNewswire/ -- DDN, the global leader in artificial intelligence (AI) and multicloud data management solutions, today announced its next-generation flash and hybrid data platforms for NVIDIA DGX POD and DGX SuperPOD AI, analytics and deep learning computing infrastructure.

Powering thousands of NVIDIA DGX systems, including NVIDIA's Selene and Cambridge-1 DGX SuperPOD systems, DDN offers a broad range of optimized AI data storage solutions for applications such as autonomous vehicles, natural language processing, financial modeling, drug discovery, academic research, and government security.

The DDN A3I AI400X2 system delivers real-world performance of more than 90 GB/s and 3 million IOPS to an NVIDIA DGX A100 system. Available with 250TB and 500TB of all-NVMe usable capacity, and with the ability to scale orders of magnitude further, the DDN AI400X2 is the world's highest-performing and most efficient building block for AI infrastructures.

"DDN has been a market leader in AI, analytics and machine learning for many years and our collaboration with NVIDIA is leading the industry in performance, efficiency and ease of management at any scale," said Dr. James Coomer, vice president of products, DDN. "With our next-generation flash and hybrid DDN AI400X2 storage systems, we are effectively doubling performance, improving ease of use and greatly expanding support for all AI users globally."

NVIDIA DGX systems with DDN storage solutions have been implemented successfully by IT organizations worldwide. In 2021, DDN delivered more than 2.5 exabytes of AI, analytics and deep learning flash and hybrid storage solutions in the cloud and customers' data centers. DDN expects to achieve significant growth in its AI business in 2022.

"NVIDIA DGX SuperPOD provides enterprises with a proven, turnkey AI infrastructure solution for powering their most transformative work," said Charlie Boyle, vice president, DGX systems, NVIDIA. "From compute to networking to storage, every element of a DGX SuperPOD is selected to ensure it provides powerful performance, and DDN storage keeps pace with the needs of the most demanding AI workloads."

DDN is working closely with NVIDIA on next-generation Reference Architecture documents that integrate DDN AI400X2 appliances with NVIDIA DGX A100 systems. Customers will be able to quickly deploy and scale turnkey AI systems using standard DGX POD and DGX SuperPOD configurations. Backed by NVIDIA's leadership in accelerated computing and DDN's leadership in AI data management at scale, these integrated systems will deliver the fastest path to AI implementation for customers across market segments and industries.

DDN at NVIDIA GTC 2022

DDN will present at NVIDIA GTC during sessions highlighting how best to implement secure and highly efficient integrated systems that deliver the highest value in AI, analytics and deep learning applications across industries and use cases. Click here for more information about DDN at GTC.


About DDN

DDN is the world's largest private data storage company and the leading provider of intelligent technology and infrastructure solutions for Enterprise at Scale, AI and analytics, HPC, government, and academia customers. Through its DDN and Tintri divisions, the company delivers AI, Data Management software and hardware solutions, and unified analytics frameworks to solve complex business challenges for data-intensive, global organizations. DDN provides its enterprise customers with the most flexible, efficient, and reliable data storage solutions for on-premises and multi-cloud environments at any scale. Over the last two decades, DDN has established itself as the data management provider of choice for over 11,000 enterprises, government, and public-sector customers, including many of the world's leading financial services firms, life science organizations, manufacturing and energy companies, research facilities, and web and cloud service providers.

Contact: Press Relations at DDN, [emailprotected]

Walt & Company, on behalf of DDN: Sharon Sumrit, [emailprotected]

© 2022 All rights reserved. A3I and DDN are trademarks or registered trademarks owned by DataDirect Networks. All other trademarks are the property of their respective owners.

SOURCE DataDirect Networks (DDN)

The rest is here:
New DDN Storage Appliance Doubles Performance for NVIDIA DGX AI Solutions and Speeds Up Analytics and Machine Learning in the Cloud by 100% - PR...

Top 5 Deep Learning Frameworks that Techies Should Learn in 2022 – Analytics Insight

Deep learning frameworks are trending among machine learning developers

Deep learning frameworks help data scientists and ML developers with various critical tasks. As of today, both predictive analytics and machine learning are deeply integrated into business operations and have proven to be quite crucial. Integrating this advanced branch of ML can enhance efficiency and accuracy for the task at hand when it is trained with vast amounts of big data. In this video, we will explore the top deep learning frameworks that techies should learn this year.

TensorFlow: The open-source machine learning platform has a wide range of tools to enable model deployment on different types of devices. While TensorFlow.js facilitates model deployment in browsers, the Lite version is well-suited for mobile and embedded devices.

PyTorch: Developed by Facebook, it is a versatile framework designed to cover the entire process from research prototyping to production deployment. It offers a Python interface as well as a C++ frontend.

Keras: It is an open-source framework that can run on top of TensorFlow, Theano, Microsoft Cognitive Toolkit, and PlaidML. The Keras framework is known for its speed, with built-in support for parallelism in data processing and ML training. (A minimal Keras example follows this list.)

Sonnet: A high-level library used to build complex neural network structures in TensorFlow. It simplifies high-level architectural design by composing independent Python objects that are then connected to a computation graph.

MXNet: It is a highly scalable open-source deep learning framework designed to train and deploy deep neural networks. It is capable of fast model training and supports multiple programming languages, such as C, C++, Python, Julia, MATLAB, etc.
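To make the list concrete, here is a minimal, self-contained Keras example of the define-compile-fit workflow that the frameworks above all support in some form; the toy data and the model shape are arbitrary choices for illustration, not a recommended architecture.

```python
# A tiny Keras classifier: define, compile, train, and evaluate.
import numpy as np
from tensorflow import keras

# Toy data: 1,000 samples of 20 features, with a simple binary label.
X = np.random.rand(1000, 20).astype("float32")
y = (X.sum(axis=1) > 10).astype("int32")

model = keras.Sequential([
    keras.layers.Dense(32, activation="relu", input_shape=(20,)),
    keras.layers.Dense(16, activation="relu"),
    keras.layers.Dense(1, activation="sigmoid"),  # binary output
])
model.compile(optimizer="adam",
              loss="binary_crossentropy",
              metrics=["accuracy"])
model.fit(X, y, epochs=5, batch_size=32, verbose=0)
print(model.evaluate(X, y, verbose=0))  # [loss, accuracy]
```

The same workflow translates almost directly to PyTorch or MXNet, which is one reason switching costs between these frameworks are lower than they first appear.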


Read more from the original source:
Top 5 Deep Learning Frameworks that Techies Should Learn in 2022 - Analytics Insight