Archive for the ‘Machine Learning’ Category

Learning from the most advanced AI in manufacturing and operations – McKinsey


Making good use of data and analytics will not be done in any single bold move but through multiple coordinated actions. Despite the recent and significant advances in machine intelligence, the full scale of the opportunity is just beginning to unfold. But why are some companies doing better than others? How do companies identify where to get started based on their digital journeys?

In this episode of McKinsey Talks Operations, Bruce Lawler, managing director for the Massachusetts Institute of Technology's (MIT) Machine Intelligence for Manufacturing and Operations (MIMO) program, and Vijay D'Silva, senior partner emeritus at McKinsey, speak with McKinsey's Daphne Luchtenberg about how companies across industries and sizes can learn from leaders and integrate analytics and data to improve their operations. The following is an edited version of their conversation.

Daphne Luchtenberg: Earlier this year, McKinsey and MIT's Machine Intelligence for Manufacturing and Operations studied 100 companies across sectors from automotive to mining. To discuss this and more, I'm joined by the authors, Vijay D'Silva, senior partner emeritus at McKinsey, and Bruce Lawler, managing director for MIT's MIMO.

Let's start with the why. What was the main driver behind the partnership and why did we commission the research?

Vijay D'Silva: Over the past few years, we've had conversations with dozens and dozens of companies on the topic of automation and machine intelligence, and something stood out. It was clear that there was a rising level of attention paid to the topic. But at the same time, we saw many companies struggle while others succeeded. And it was really hard to tell why that was happening. We started by looking at the literature and saw a lot of what companies could do, or a point of view on what they should be doing in this space, but we didn't really find a lot on what actually was working for the leaders and what wasn't working for the rest. So we launched this research to try and address the question.

What we really wanted to do was get a firsthand account of both success and struggle across as many companies as we could find, a fairly large set of companies. Based on the interviews and the surveys, we can now map out the journeys that companies should take, or could take, in accelerating progress in this space. What was particularly important was that it let us define what success and failure look like, in many cases industry by industry.

Daphne Luchtenberg: Bruce, a lot of people have had false starts, right? And we hear about bots and machine learning based on data analytics, but where did you and the team see practical examples where they were really starting to add value?

Bruce Lawler: We looked at over 100 companies in the study itself, and then we did deep-dive interviews with quite a few of them. And what we saw was that there really is a two- to threefold difference across every major operational indicator, and some examples of success stories came out. At Wayfair, for example, they use machine intelligence to optimize shipping, and they reduced their logistics cost by 7.5 percent, which in a low-margin business is huge.

A predictive maintenance company called Augury worked with Colgate-Palmolive to use predictive maintenance, and they saved 192 million tubes of toothpaste. They worked with Frito-Lay and they saved a million pounds of product. Another example is Vistra, an energy generation company. They looked at their power plants and the overall efficiency, what they call the heat rate. They were able to reduce energy consumption by about 1 percent, which doesn't sound like a lot until you realize they generate enough energy for 20 million households. Finally, Amgen uses visual inspection to look at filled syringes, and they were able to cut false rejects by 60 percent.

Daphne Luchtenberg: That's amazing, right? Even while philosophically execs have bought into the idea of machine learning, if we get down to brass tacks, there are real examples of where it's been helpful in the context of efficiency and in operations.

Bruce Lawler: There are quite a few different use cases where the leaders focus. Those are in forecasting, transportation, logistics and predictive maintenance, as I mentioned. But close behind those were quite a few others in terms of inventory optimization, or process improvement, some early warning systems, cycle time reduction, or supply chain optimization. The bottom 50 percent did not have this type of focus. So I think a key takeaway from the study is the laser focus of the leaders on winning use cases. And second, they took a multidimensional approach.

Historically, people thought that if they hired a data scientist, that would be enough. But there actually were nine different areas that are required to be a leader, although you don't have to do them all at once. I'll give an example of Cooper Standard, which is doing very cutting-edge, real-time process control using machine learning. To be successful, they needed three big things: strategy, people, and data. Strategy: they had to decide, from an entire-company perspective, that this was important to them, that what they had today wasn't good enough, and that there were better solutions out there.

Second, they had to upskill the people that they already had, typically control engineers who did not understand data science and data scientists who didn't understand control engineering. They're almost exact opposite fields. Also, they gave people online access to data and they very much empowered their frontline people as well.

On the topic of data, they had too much of it. It's a very complex process that they have, and they had to come up with new methods of data pipelining. They couldn't even use the cloud because the data was moving so quickly; they had to process it locally. And because the process lines are running so quickly, they had to make local, real-time decisions.

Daphne Luchtenberg: Bruce, what other surprises did you and the team come across as you were completing the research?

Bruce Lawler: I think one of the main things was the efficacy and the efficiency of the leaders' ability to deploy at scale. For example, Bayer, an international pharmaceutical company, was able to use its governance process to triage the most valuable applications. They would then go to one plant where they were perfecting these applications. And once they'd achieved the results that they'd hoped for, they would rapidly deploy them around the world to their facilities. They ended up being classified as what we call an executor in our study, even though their performance results were those of a leader.

Vijay D'Silva: I had the same observation that Bruce had. And there were two things in particular that surprised me. One was we always expected the leaders to invest more heavily than the others, because they were far more advanced and were spending more money. What was surprising was that the rate of increase in the investments, when we asked people to talk about future investments, was much higher for the leaders than for the rest. We were left with the feeling that not only was the gap large, but it was increasing.

The second thing that surprised me was the fact that the leaders don't have to be large firms; you didn't necessarily need deep pockets to become a leader. We found plenty of examples of leaders that were smaller firms, quite nimble, that were able to pick their shots intelligently. That was one theme that came through across many of the companies that we saw: the ability to focus their efforts on where it mattered made them leaders.

Daphne Luchtenberg: Thanks, Vijay. Just to press a little further there: companies across industries and in a wide range of sizes, from blue-chip companies to greenfield sites, are all trying to integrate analytics and data to improve their operations. However, the results have been mixed. Why do some companies do so much better than others?

Vijay D'Silva: It's an interesting question, Daphne. We looked at nine different things, nine different levers that companies could pull. And out of the nine, five really stood out to us as making the difference, and they were the following: governance, deployment, partnering, people, and data. Governance means the degree to which there is a top-down push from senior management, and also a purpose-driven approach to deploying the technology. Leading companies have strong governance to keep the digital programs on track and to document how the portfolio is doing. For example, a pharmaceutical company put a lot of effort into using AI in some of its plants across a number of use cases, and then worked to apply that across the network. Leading firms will actually do this quite rigorously and regularly.

The second thing is, especially given the dearth of talent in data science in the industry, leading firms are much more purposeful in terms of how they organized. The poor performers were more likely to spread their resources thin across multiple teams or not have them at all. In contrast, leading companies like McDonald's, as Bruce mentioned earlier, would be more likely to have a center of excellence where they would concentrate their resources.

Deployment is literally the degree to which use cases were used, and in what order. Leading companies had much more of it and were much more conscious of which ones mattered. And then, as we took it into partnerships, partners were far more common across leading firms than the rest, which surprised us initially. They were more reliant on academia, start-ups, existing technology vendors, or consultants, and used a wider range of partners than the rest. An example was the company Augury that Bruce mentioned before, used by both Colgate-Palmolive and PepsiCo's Frito-Lay, essentially using AI-driven systems and what's available out there in the market to generate impact. Analog Devices is a semiconductor firm that collaborated with MIT to use machine intelligence for quality control, catching faults in production runs.

The last one is data, specifically the democratization of data, where leaders normally put much more effort into making sure that data was accurate. Ninety-two percent had processes to make sure that the data was available and accurate. But there was also the fact that it was available to the front line: over 50 percent of the leaders had data available to the front line versus only 4 percent of the rest.


Daphne Luchtenberg: Thanks, Vijay. And Bruce, we've talked a bit about the four categories that the research settled on. Can you talk through what those four categories are and how you define them?

Bruce Lawler: The leaders really captured the largest gains and had the largest deployments. As a result, they have the most infrastructure and the most capabilities across the company.

Then there was the middle ground, what we call the planners and the executors. The planners have really good maturity on the enablers: they've invested in people, data infrastructure, data scientists, and their governance processes, but they haven't yet proceeded far enough along their journey to get the same results as the leaders.

Finally, we come to the executors. Executors were hyper-focused on very simply getting solid gains, typically broadly deployed, as with the Bayer example I gave earlier. To give you an idea of the differences, if I compare the leaders to the emerging companies, for example, leaders had about 9 percent average KPI improvement versus the emerging companies at 2 percent. Leaders had a payback period of a little over a year, where emerging companies were at two years. So, double. In terms of deployment, leaders were doing 18 different use cases on average, where the emerging companies were at six.

Daphne Luchtenberg: How can companies get started on their digital journey? What do they do first?

Vijay D'Silva: We found a lot of bad examples of companies that should not have started the way they did. If there was one thing that we really learned from talking to the leaders, it's to start with what matters to you. There was plenty of evidence of companies starting on certain use cases and others trying to replicate that experience, which tended to fail unless it was a problem that really mattered to them. The context of each company and their strategy, we realized, was extremely important. The first thing was to start with a use case that really matters.

The second thing is around making sure that the data is available. We've talked over the course of this effort, and in this podcast, about how important data is. Leaders take data extremely seriously, very often baking it into the early parts of their processes. It's making sure that the accuracy of the data is right and the availability of the data is right. This has changed from a few years ago. The third is finding a vendor with a proven solution, which is often one of the fastest things that companies can do. There isn't a need to reinvent the wheel; the vendor landscape has simply exploded over the past few years and there's plenty of help out there.

The fourth is driving to an early win. Momentum is extremely important here, and leaders realize the value of strong momentum to keep the engine running. So start with an early win to build up the momentum, and gradually become more sophisticated over time.

Daphne Luchtenberg: Thanks, Vijay. And Bruce, we talked earlier about the importance of engaging with a broader ecosystem, and how increased momentum comes from that. What did you see the leaders do in this area that was really interesting?

Bruce Lawler: This was another surprising finding. The leaders actually do work a lot with partners, even though they've invested extensively in their internal infrastructure; that infrastructure helps them pick the best partners. Some of these partnerships are riskier, with longer timelines. For example, leaders tend to partner with start-ups, which is typically a little riskier, or they partner with academia, which leads to longer timelines. I'll give you an example. Analog Devices worked with MIT on one of their ion implantation processes. That's part of the semiconductor manufacturing process, and it was important to them to really get this right, because of the way semiconductors are made: you lay down one layer, and it could be months before you finish the entire chip and can test it. In this case, it was worth taking the risk to determine whether a process step months earlier actually ruined a product that you then spent more time and money on.

Daphne Luchtenberg: I suppose it's a little bit counterintuitive, as we've been talking about bots and machine learning, that, Vijay, both you and Bruce have talked about the importance of the people component. Why does that turn out to be such an important indicator?

Vijay D'Silva: I cannot overemphasize how important this one factor turned out to be. I know it sounds trite, but as we dug in through what different companies are doing, it was eye-opening in terms of what was happening on the people front, in two key ways. One is in terms of building skills, and we talked about centers of excellence and the degree to which leaders are building skills to power some of these efforts. The leaders had thought about roles that the others hadn't even gotten to, for instance, things like machine-learning engineers versus simply data scientists and data engineers. And there were four or five different categories of people that the leaders were building into the process, thinking three or four steps ahead.

The second thing is that there was greater emphasis on training their frontline employees. We saw this. We mentioned McDonald's before, where even though there was a core within the company that was developing applications for forecasting footfall, for instance, there was a greater degree of emphasis on training the frontline staff to be able to get the most out of it. That was a theme that we saw across multiple companies.

And then the third one is around access to data. The leaders were much more willing to give access to data to the front line and across the board in a particular firm, versus the rest of the companies, which would sometimes tend to be much more guarded around how they use data. That was the third thing: providing frontline employees, and employees in general, with the resources and the data that they needed to succeed.

Daphne Luchtenberg: Bruce, a lot of our audience who follow McKinsey Talks Operations will be thinking about their own careers, their own personal development plans. How should they be thinking about building their own skills in this realm?

Bruce Lawler: This industry is moving so quickly that you cannot keep up with it alone. It's really a large and complex field, so no one person can know everything. What we found to be successful was a team approach. So I think learning who your trusted partners can be, whether they're vendors or even sometimes your customers, start-ups, academia, or your new employees, that's going to be what's important. And you really need to get outside points of view. Even if you're a digital native, it's a diverse space.

Daphne Luchtenberg: Thanks, Bruce. That's great. Vijay, we're coming to the end of our program, and we must thank you, Bruce, and the team for pulling this really interesting research piece together and giving us kind of a road map. Can you just give us a sense: regardless of what category an organization might feel they're in (a leader, a planner, an executor, or an emerging company), how should they be moving ahead? How should they be focusing on the next step?

Vijay D'Silva: There were four things we identified in the work that we did. The first one was having some sense of a North Star. There was always the risk that companies would bounce from one pilot to another pilot to a third. So having a clear-eyed view of what the endgame is (the North Star, the goal, or whatever you call it) was extremely important, because that would guide a lot of future effort. The second thing we were struck by, across many companies we talked to, was that there wasn't enough clarity about where they stood versus their peers. The thing we felt was fairly important was to just take an honest self-assessment of where they stood compared with the state of the art, or the state of practice, today.

The third one was having some sense of what a transition plan would be. There are many paths to becoming a leader, whether you execute first or plan first, and having some sense of how to get there was important. Now, we recognize that the industry is changing so fast that the plan might change, but it was important to have a point of view, so that companies wouldn't spread their investment dollars too thinly. The last one was the importance of having use cases, a handful of use cases that matter to them, and starting with those and building up momentum from there. Having a clear sense of what those use cases are, and making sure the momentum and impact flow from them, was important.

Daphne Luchtenberg: Brilliant. Thanks, Vijay. And Bruce, we pride ourselves on this McKinsey Talks Operations series always getting pragmatic: it's not theoretical, it's about what we can do next. So if I were to ask you what's the one thing that our listeners should know, should read, and should learn, how would you guide them?

Bruce Lawler: What they should know is the types of problems that make good machine-learning problems. For example, is it a very high-volume problem, with a large number of transactions or a large number of products? Is it high rate, with short cycle times or short decision times? Is it high complexity, where there are many interactions of different systems coming together? Or is it a highly sensitive process that requires very tight controls? As far as what you should read: any article that really describes how others have successfully used machine learning will give you ideas on what problems to solve. So, focus on the what, not the how. You want to be successful quickly, so learn from other examples. And as Vijay said, pick ones that are important to you, and then duplicate the methodology.

Last, what you should learn is what type of problem you are trying to solve and what types of problems are solvable by machine learning. For example, is it a classification problem, where I am trying to classify dogs versus cats? Is it a clustering problem, where I am trying to take groups of things and group them together, very much like we did in this study? Is it prediction, where I am trying to predict whether something will fail in the field in the future, even if it's working just fine now? Or is it anomaly detection, spotting something that looks really different from everything else?
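To make those four problem families concrete, here is a minimal, hypothetical sketch using scikit-learn on synthetic data; the estimators, features, and labels are illustrative assumptions, not anything drawn from the study.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.ensemble import IsolationForest, RandomForestClassifier
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 4))             # 200 samples, 4 sensor-like features
y_class = (X[:, 0] > 0).astype(int)       # binary label, e.g. pass/fail
y_value = X @ np.array([1.0, -2.0, 0.5, 0.0])  # continuous target, e.g. yield

# Classification: assign each item to a known category (dogs vs. cats).
clf = RandomForestClassifier().fit(X, y_class)

# Clustering: group similar items without labels (as in grouping companies).
clusters = KMeans(n_clusters=3, n_init=10).fit_predict(X)

# Prediction (regression): estimate a quantity, e.g. remaining life of a part.
reg = LinearRegression().fit(X, y_value)

# Anomaly detection: flag items that look unlike everything else (-1 = anomaly).
outliers = IsolationForest(random_state=0).fit_predict(X)
```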


Daphne Luchtenberg: Bruce, can you say a bit more about the companies that participated?

Bruce Lawler: A little over half had 10,000 or more employees, so they are a little bit on the larger side. But 45 percent, actually, were under 10,000. And to break that down a little bit, 12 of them had just 50 to 199 employees, so they were quite small. And as far as the range of industries, we covered everything from oil and gas to retail to healthcare and pharma, aerospace, and automotive. So, 17 total categories of industry.

Daphne Luchtenberg: And Vijay, now that this research chapter has come to an end, what are the next steps? And what can our listeners look out for?

Vijay D'Silva: We published this on both the McKinsey and MIT websites, and we're very excited about that. We'd love your comments, and there's been a fair bit of debate that this has generated, which has been fantastic. In parallel, we're going back to each of the companies that participated with our results, which include where they stand versus the others and what that might mean for them. It's a different story for each one, which is going on as we speak. As this proceeds, our hope is that over time we expand this to a greater and greater share of the industry, both in manufacturing and in operations more broadly.

As Bruce mentioned before, we've got 17 industries covered in this study. And over time we'd expect that to deepen as we get more and more companies in each of the industries, suspecting that the story and the implications will be quite different by industry and by company, depending on their size, their maturity, and where they hope to get to.

Bruce Lawler: If I could just add: we are creating individual playbooks for each of the companies so they can see exactly where they are on their journey and what immediate next steps they should be taking on their path toward being a leader, or certainly toward better KPI performance and faster paybacks.

Daphne Luchtenberg: Bruce, thank you so much for sharing these insights. Vijay, thank you very much for being part of this conversation. I'd summarize it this way: some of these efficiency and operational gains are definitely within reach, and those companies that haven't yet made the first move should do so forthwith. Would you agree, Vijay, Bruce?

Bruce Lawler: Absolutely.

Vijay DSilva: Absolutely.

Daphne Luchtenberg: Thank you so much for spending some time with us today. And we look forward to being back with you all soon for our next program of McKinsey Talks Operations.

You've been listening to McKinsey Talks Operations with me, Daphne Luchtenberg. If you like what you've heard, subscribe to our show on Apple Podcasts, Spotify, or wherever you listen. We'll be back with a new episode in a couple of weeks.


New DDN Storage Appliance Doubles Performance for NVIDIA DGX AI Solutions and Speeds Up Analytics and Machine Learning in the Cloud by 100% – PR…

Next-Generation Flash and Hybrid DDN A3I AI400X2 Storage Appliances Deliver Enhanced Efficiency and Usability for NVIDIA DGX POD, DGX SuperPOD, and Enterprise AI Data Workloads

CHATSWORTH, Calif., March 22, 2022 /PRNewswire/ -- DDN, the global leader in artificial intelligence (AI) and multicloud data management solutions, today announced its next-generation flash and hybrid data platforms for NVIDIA DGX POD and DGX SuperPOD AI, analytics, and deep learning computing infrastructure.

Powering thousands of NVIDIA DGX systems, including NVIDIA's Selene and Cambridge-1 DGX SuperPOD systems, DDN offers a broad range of optimized AI data storage solutions for applications such as autonomous vehicles, natural language processing, financial modeling, drug discovery, academic research, and government security.

The DDN A3I AI400X2 system delivers real-world performance of more than 90 GB/s and 3 million IOPS to an NVIDIA DGX A100 system. Available with 250TB and 500TB of all-NVMe usable capacity, and with the ability to scale orders of magnitude further, the DDN AI400X2 is the world's highest-performing and most efficient building block for AI infrastructures.

"DDN has been a market leader in AI, analytics and machine learning for many years and our collaboration with NVIDIA is leading the industry in performance, efficiency and ease of management at any scale," said Dr. James Coomer, vice president of products, DDN. "With our next-generation flash and hybrid DDN AI400X2 storage systems, we are effectively doubling performance, improving ease of use and greatly expanding support for all AI users globally."

NVIDIA DGX systems with DDN storage solutions have been implemented successfully by IT organizations worldwide. In 2021, DDN delivered more than 2.5 exabytes of AI, analytics and deep learning flash and hybrid storage solutions in the cloud and customers' data centers. DDN expects to achieve significant growth in its AI business in 2022.

"NVIDIA DGX SuperPOD provides enterprises with a proven, turnkey AI infrastructure solution for powering their most transformative work," said Charlie Boyle, vice president, DGX systems, NVIDIA. "From compute to networking to storage, every element of a DGX SuperPOD is selected to ensure it provides powerful performance, and DDN storage keeps pace with the needs of the most demanding AI workloads."

DDN is working closely with NVIDIA on next-generation Reference Architecture documents that integrate DDN AI400X2 appliances with NVIDIA DGX A100 systems. Customers will be able to quickly deploy and scale turnkey AI systems using standard DGX POD and DGX SuperPOD configurations. Backed by NVIDIA's leadership in accelerated computing and DDN's leadership in AI data management at scale, these integrated systems will deliver the fastest path to AI implementation for customers across market segments and industries.

DDN at NVIDIA GTC 2022

DDN will present at NVIDIA GTC during sessions highlighting how best to implement secure and highly efficient integrated systems that deliver the highest value in AI, analytics, and deep learning applications across industries and use cases.


About DDN

DDN is the world's largest private data storage company and the leading provider of intelligent technology and infrastructure solutions for Enterprise at Scale, AI and analytics, HPC, government, and academia customers. Through its DDN and Tintri divisions, the company delivers AI, Data Management software and hardware solutions, and unified analytics frameworks to solve complex business challenges for data-intensive, global organizations. DDN provides its enterprise customers with the most flexible, efficient, and reliable data storage solutions for on-premises and multi-cloud environments at any scale. Over the last two decades, DDN has established itself as the data management provider of choice for over 11,000 enterprises, government, and public-sector customers, including many of the world's leading financial services firms, life science organizations, manufacturing and energy companies, research facilities, and web and cloud service providers.

Contact: Press Relations at DDN, [emailprotected]

Walt & Company, on behalf of DDN: Sharon Sumrit, [emailprotected]

© 2022 All rights reserved. A3I and DDN are trademarks or registered trademarks owned by DataDirect Networks. All other trademarks are the property of their respective owners.

SOURCE DataDirect Networks (DDN)


Top 5 Deep Learning Frameworks that Techies Should Learn in 2022 – Analytics Insight

Deep learning frameworks are trending among machine learning developers

Deep learning frameworks help data scientists and ML developers with various critical tasks. As of today, both predictive analytics and machine learning are deeply integrated into business operations and have proven to be quite crucial. Integrating this advanced branch of ML can enhance efficiency and accuracy for the task at hand when it is trained with vast amounts of big data. Here, we explore the top deep learning frameworks that techies should learn this year.

TensorFlow: This open-source machine learning platform has a wide range of tools to enable model deployment on different types of devices. While TensorFlow.js facilitates model deployment in browsers, the Lite version is well-suited for mobile and embedded devices.

PyTorch: Developed by Facebook, it is a versatile framework, originally designed to cover the entire process from research prototyping to production deployment. It offers a Python interface as well as a C++ frontend.

Keras: It is an open-source framework that can run on top of TensorFlow, Theano, Microsoft Cognitive Toolkit, and PlaidML. The Keras framework is known for its speed because of built-in support for parallel data processing and ML training.

Sonnet: A high-level library used to build complex neural network structures in TensorFlow. It simplifies high-level architectural design by creating independent Python objects that are then connected into a computation graph.

MXNet: It is a highly scalable open-source deep learning framework designed to train and deploy deep neural networks. It is capable of fast model training and supports multiple programming languages such as C, C++, Python, Julia, MATLAB, etc.
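To give a feel for these APIs, here is a minimal, hypothetical Keras example running on the TensorFlow backend described above; the dataset, layer sizes, and training settings are invented for illustration.

```python
import numpy as np
from tensorflow import keras

# Synthetic data: 1,000 samples, 20 features, binary labels.
X = np.random.rand(1000, 20).astype("float32")
y = (X.sum(axis=1) > 10).astype("int32")

# A small fully connected binary classifier.
model = keras.Sequential([
    keras.layers.Input(shape=(20,)),
    keras.layers.Dense(64, activation="relu"),
    keras.layers.Dense(1, activation="sigmoid"),  # probability of class 1
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(X, y, epochs=3, batch_size=32, verbose=0)
print(model.evaluate(X, y, verbose=0))  # [loss, accuracy]
```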


About the Author

Analytics Insight is an influential platform dedicated to insights, trends, and opinions from the world of data-driven technologies. It monitors developments, recognition, and achievements made by Artificial Intelligence, Big Data and Analytics companies across the globe.


Fresh4Cast leader argues for the crucial role of machine learning in moving the industry forward – Produce Business UK

Automation is touching every industry; you can't survive in the 21st century economy without the data and the insights that come from technologies like artificial intelligence and machine learning. The food automation market, for example, is expected to reach $29.4 billion by 2027.

Within the food space are produce and agriculture, sub-spaces that haven't seen quite as much advancement and adoption. That's changing now thanks to companies like Fresh4cast, a company that uses AI forecasting to help growers and distributors improve productivity, increase margins, and reduce waste. Its solution includes data sets built from historical records as well as trade statistics and weather, and a virtual assistant designed to automate tasks.

At the London Produce Show and Conference, we will be welcoming Fresh4cast's COO, Michele Dall'Olio.

Michele has based his career on the synergy between innovation and fresh produce. Starting with a degree in Agribusiness and a master's in Management and Marketing, he explored the complexity of fresh produce data while working as Head of Research for a leading Italian consultancy. He then moved to London and started a new journey with Fresh4cast, where he is now the COO.

Michele spoke to us about how growers and distributors can benefit from increased insights, how that can lead to less food waste, and what he'll be talking about at the London Produce Show.

Michele Dall'Olio, COO, Fresh4cast

Q: Let's kick this off with a little bit of an overview of yourself, of Fresh4cast, and of what you do.

A: I'm from Italy; I moved to London five years ago. I have always worked and studied in the fresh produce sector, from high school until now. In my career back in Italy, I worked with a lot of data as head of analysis in a leading consultancy there, and I basically developed into a more data-oriented person with Fresh4cast. When I moved to London five years ago, I joined as Head of Customer Development, and now I'm COO, so I'm specifically looking at all the operations and the planning internally, and I'm basically the interface between the customer and our production team.

Q: You said you've been in the produce space for a number of years, and I'm really fascinated by the idea of applying technologies like artificial intelligence and machine learning to sectors where that kind of technology really hasn't been applied before. I used to work for a motor company, for example, and that was a legacy space where the technology was very slow to develop because people were set in their ways. Do you feel like it was the same in the produce space? Was there a lack of innovation for a long time? And is that changing now?

A: We are definitely at a tipping point because, if you think about agriculture in general, and fresh produce is one of the subsectors of agriculture, it has always lagged a bit behind other sectors, for a variety of reasons. Service-based sectors are always more advanced when we look at software, for instance. So, we definitely are at a tipping point because, yes, as a sector it's a bit behind, but the benefit is that someone else has already explored those paths. If you're lagging a bit behind, you know what works and what doesn't; that's an important factor, especially in AI, because there's a lot of trial and error, and a lot of errors. There are a lot of very good examples that fresh produce can take inspiration from. So, the data is there, it's building up, and it's just waiting for a machine learning application or an algorithmic forecaster to untap its potential.

Q: What do you think are some of the reasons why the space was lagging behind before?

A: Well, there are a lot of reasons; it's a very difficult topic. If you think about innovation in general, not just technological innovation, it's driven by key factors such as the availability of talent, and being able to attract that talent to the sector. Compared to other sectors, of course, agriculture is a lower-margin sector, so innovation is there but it's not always the first priority. And so, people and resources are the main thing that I see changing at the moment. Until 10 years ago, you didn't see any fresh produce business having a data scientist in house, or a team of people analyzing data, or actually hiring companies such as Fresh4cast to build a data set, build machine learning forecasters, and so on. Nowadays, there are a lot of requests for this, so the mentality of the top management is changing. That should drive this tipping point of catching up with other sectors.

Q: It's funny what you said about being a little bit behind meaning that you get to actually see what works and what doesn't. I never thought of it that way before. Everybody else does this trial and error and then you come along and go, "Okay, well, now we know what works, and we can just apply it."

A: When we think about the future and the present, we think that now is the present for everyone, but that's not actually true, because some people are already in the future. So, we can basically copy or take a lot of inspiration from them.

Q: Talk about the ways that you apply AI and machine learning to the produce sector, and the ways that you use that data.

A: Fresh4cast has a three-step approach. First of all, we have the customer's data asset. As you know, machine learning feeds on data and learns from data, so that's the very first milestone. Building a data set is easier said than done, because it's very laborious and it requires different kinds of skills in the company, but we have different tools for that. Then, whenever we have a data set that we can work with, the second bit is that we display it back to the customer using business intelligence tools that we've built. So, there are very specific analytics, for instance, that help with understanding the seasonality in the fresh produce business, and so on. It's about understanding what happened in the past in order to understand what is going to happen in the future. And the third point is using algorithmic forecasting, machine learning forecasting, very different tools, in order to extract even more value from that data asset, letting the machine find correlations and build models that will predict what's going to happen in the future, given specific inputs.
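As a rough sketch of that third step, the snippet below shows what a simple algorithmic forecaster can look like: lagged weekly volumes plus a weather input feeding a gradient-boosted model. Fresh4cast's actual models are proprietary, so every column name and number here is an invented assumption.

```python
import pandas as pd
from sklearn.ensemble import GradientBoostingRegressor

# Hypothetical weekly history: harvested volume and a temperature forecast.
df = pd.DataFrame({
    "volume":        [120, 135, 150, 160, 155, 170, 180, 175, 190, 200],
    "temp_forecast": [ 14,  15,  17,  18,  18,  19,  21,  20,  22,  23],
})
df["lag_1"] = df["volume"].shift(1)  # last week's volume
df["lag_2"] = df["volume"].shift(2)  # volume two weeks ago
df = df.dropna()

features = ["lag_1", "lag_2", "temp_forecast"]
model = GradientBoostingRegressor(random_state=0).fit(df[features], df["volume"])

# Predict the coming week from the most recent observations.
next_week = pd.DataFrame([[200, 190, 24]], columns=features)
print(model.predict(next_week))
```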

Q: So, you get the data and you have to make these forecasts based on that data. And then what do the growers and distributors do with that? How do they put it to use? What are some use cases for them?

A: Well, it depends on the supply chain. So, in order to answer your question, I need to talk about the supply chain approach of Fresh4cast. We work with the whole supply chain; we don't work with only one aspect. We work with growers, with distributors, with data from retailers, for instance, and so on. And the important bit is that, for each point of the supply chain, the application changes. I'll give you two key examples. One is at production where, if a grower is going to plant a certain amount of strawberries, for instance, we give them the weather forecast and other inputs, so they know when to plant them and how much they are going to harvest; in a nutshell, how many strawberries will be ready next week or in four weeks' time, and at what quality. On the other side, the sales side, say there is a distributor that's supplying a big retailer; the distributor needs to foresee and start planning for how much the retailer is going to ask for in the next few weeks. So, we are talking about a forecast that tries to predict how much volume will be needed: if there is a big promo at Tesco, for instance, what is going to be the seasonality in the future, the cannibalization within the category, and so on.

This is usually something that a human could do, but not at scale. There are a lot of very small tasks that a human could do, but it would take so long that the data would already be old, so it wouldn't be effective to use that forecast because we would already have the actuals. A machine learning application, especially in fresh produce, is something that automates a lot of very small tasks in a clever way. It's like a proficient assistant: it gives you an output, and the human, at the end of the day, decides what to do with it and makes decisions using this information.

Q: You're telling growers when and how much to grow, and you're telling distributors and retailers how much they're going to sell, is that right? So, everybody in the supply chain is getting this data to know how much to expect and how much they should expect to sell?

A: Exactly. If you want to be demand driven, you need to have a forecast in all of the key steps of your supply chain, each feeding into the other. So, for instance, knowing what product you will have next week and how much you will sell next week: these two pieces of information together create synergy and allow you to plan better, for instance, your warehouse activities, like how many man-hours you need to pack the product.

Q: Where do you pull your data from? Like you said, you're using an existing database. Is any of your data proprietary?

A: We are a software as a service, first of all, so their data is confined inside the customer's walls. It doesn't go anywhere, and we only use the data for the customer. So, we don't do data aggregation with other customers or build models across customers. We do every application in isolation, because we also work with fierce competitors. So, that's the way to go. We provide some data such as weather and international trade, but it's all publicly available data; we don't have any proprietary data, we just have proprietary models that interpret the data.

Q: It's interesting that you don't aggregate that data. Wouldn't that be a more helpful way to get a broader view of the market?

A: We have a few cases where a few companies put their data together, but we need to have written consent. By default, we always work only with the data from the specific customer. And the reason why is that aggregation is useful for generic market trends. So, companies like Nielsen aggregate data across a lot of companies, so they have market trends. On our end, we tend to do the opposite: we specialize and fine-tune the forecasting model specifically on that customer's operations and that customer's data. Because even if one company sells the same thing as another one, it doesn't mean that their business structures and supply chains are similar. They could have a very different structure and, therefore, whenever you change something in the structure, the data reflects the operation. So, it would be a different kind of data.

Q: I would think that what one retailer sells would sell the same at another retailer, but it sounds like maybe that's not necessarily the case.

A: We don't work directly with retailers; our customers always specialize only in fresh produce. Some of our customers' data comes from the retailer, so we can forecast from that, but our customers are the growers and distributors. We can have data about the retailers, but they usually have their own forecasting systems internally. Just to clarify.

Q: I know that you also offer a virtual analyst for your customers and I'm very interested in learning more about that. I saw that it can send email reports and alerts, and prepare Excel reports and PowerPoint presentations. What's the technology behind that?

A: Saga is our virtual assistant, and you already mentioned a lot of the use cases that we use it for. It's basically a very proficient assistant that automates boring tasks. That means it's very quick at doing them, and it takes out that overhead of admin-based work that all employees have in their routine jobs. From sales to production, they always have to work with an Excel file, for instance. With Saga, if a grower sends their estimate to the central planning team, they CC Saga in their email; then Saga is able to see the attachment, incorporate the attachment into our database, display analytics, and come back with an email report, which is very bespoke, depending on the customer. Basically, it's good at interfacing, especially with email attachments, and preparing reports on the fly. So, again, it's all about automation, at the end of the day.
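To illustrate the kind of task such an assistant automates, here is a hypothetical Python sketch: ingest an emailed Excel estimate and send back a one-line summary. The file name, column name, addresses, and SMTP host are all assumptions for illustration, not details of how Saga actually works.

```python
import smtplib
from email.message import EmailMessage

import pandas as pd

# Read the attachment (assumed already saved locally) and total a
# hypothetical forecast column.
df = pd.read_excel("grower_estimate.xlsx")
total = df["forecast_kg"].sum()

# Compose and send a short confirmation report.
msg = EmailMessage()
msg["Subject"] = "Weekly estimate received"
msg["From"] = "assistant@example.com"
msg["To"] = "planning@example.com"
msg.set_content(f"Total forecast volume this week: {total:,.0f} kg.")

with smtplib.SMTP("localhost") as server:  # assumed local mail server
    server.send_message(msg)
```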

Q: I'm assuming that the whole point of that is to free employees up to do more complicated tasks rather than, like you said, repetitive boring stuff that takes up a lot of time but doesn't require much skill.

A: Exactly. The second point I mentioned before is the business intelligence bit. If you think about how much time you spend on getting the file out of the ERP, for instance, elaborating it with Excel, remapping, and so on, you will probably spend 80% of your time on transforming and manipulating the data and the remaining 20% on actually analyzing the data and making a decision from what you just discovered. With automation, you get rid of all the preparation, that whole 80%, and you have ready-made analytics, so you can focus your attention on making better decisions for the business. And maybe you have some extra time to have a coffee. That's a very Italian thing to say, I realize.

Q: Have you been able to actually measure improved productivity for your customers? And do you have any numbers you could share with me?

A: Productivity is quite difficult. I could share a couple of examples of what happens, but they would be customer-specific, so I would avoid that. What I can share, though, is the improvement our specialized business intelligence tools allow, where growers or planners improve their own accuracy. The key to improving is measuring at the very beginning; you need to measure and understand, and after that you can improve. We have a case study where growers were producing forecasts for their crops and, using our business intelligence tool, they were measuring the accuracy of their own forecasts on a daily and weekly basis. They managed to shave 20% off their total errors. So, just by looking at their data and having these tools that give you KPIs, or key performance indicators, on how good your forecast is, where your errors are, and so on, they could shave, without any other inputs, 20% of the errors out of their forecast activity.
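As a minimal sketch of that "measure first" idea, forecast accuracy can be scored against actuals with a metric such as mean absolute percentage error (MAPE); the numbers below are invented, and MAPE is just one common choice of KPI, not necessarily the one Fresh4cast uses.

```python
import numpy as np

actual = np.array([100.0, 120.0, 90.0, 110.0])    # observed weekly volumes
forecast = np.array([110.0, 115.0, 100.0, 100.0])  # what was predicted

# Mean absolute percentage error: average relative miss, as a percentage.
mape = np.mean(np.abs((actual - forecast) / actual)) * 100
print(f"MAPE: {mape:.1f}%")  # lower is better; track it weekly to spot drift
```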

Q: How do you measure the reduction in food waste?

A: The reduction in food waste depends, again, on the level of the supply chain we are talking about. I'm focusing a lot on the production side but, if you think about the sales side, if you have too much product, and you didn't know in advance, and you're not able to sell what's in your warehouse, you will have what's called an overstock. Usually that is not a big problem in other categories, but we are in fresh produce, so the shelf life, how long you can keep the product in the fridge, is very, very short. That's one of the reasons why the founder, Mihai Ciobanu, actually focused on fresh produce from the very beginning with forecasting: because it's very, very difficult to forecast. And, on top of that, if you get the forecast wrong, you can lose a lot of money, basically throwing away product that should have been sold.

Q: Give me a preview of what youll be talking about at the London Produce Show and Conference.

A: The presentation will be focused on how to leverage your own data assets and extract value from them. Specifically, we will look at how the forecasting activity, and specifically the machine learning tools, are helping both growers and distributors to improve efficiency and reduce waste in their own supply chains. We will have a couple of practical examples of how better forecasting is helping with these two topics.


Graph + AI Summit 2022: Industry's Only Open Conference For Accelerating Analytics and AI With Graph to Feature Speakers, Use Cases from World's Most…

TigerGraph, Inc.

Virtual Global Event to Take Place May 24-25, 2022; Call for Papers Open Through April 11

REDWOOD CITY, Calif., March 22, 2022 (GLOBE NEWSWIRE) -- TigerGraph, provider of a leading graph analytics platform, today announced the return of Graph + AI Summit, the only open industry conference devoted to democratizing and accelerating analytics, AI, and machine learning with graph algorithms. The virtual global event will take place May 24-25, 2022 and the call for speakers is open through April 11, 2022.

"Graph + AI Summit is a global celebration of the power of graph and AI, bringing together business leaders, domain experts, and developers to explore creative ways to solve problems with graph technology," said Yu Xu, CEO and Founder, TigerGraph. "We will be showcasing real-world examples of graph with AI and machine learning use cases from world-leading banks, retailers, and fintechs. We'll also be revealing all 15 winners of the Graph for All Million Dollar Challenge, an exciting initiative seeking world-changing graph implementations from around the globe. We're looking forward to connecting with global graph enthusiasts this year and hope you'll join us."

Past Graph + AI Summits have attracted thousands of attendees from 70+ countries. Data scientists, data engineers, architects, and business and IT executives from over 182 of the Fortune 500 companies participated in the last event alone. Past speakers from Amazon, Capgemini, Gartner, Google, Microsoft, UnitedHealth Group, JPMorgan Chase, Mastercard, NewDay, Intuit, Jaguar Land Rover, Pinterest, Stanford University, Forrester Research, Accenture, KPMG, Intel, Dell, and Xilinx along with many innovative startups shared how their organizations reaped the benefits of graph.

Graph + AI Summit 2022 Call for Papers Open Through April 11, 2022

Are you building cutting-edge graph technology solutions to help your organization adapt to an uncertain world? Maybe you're an expert in supercharging machine learning and artificial intelligence using graph algorithms. Or maybe you're a business leader who knows the value of overcoming the data silos created by legacy enterprise solutions. If any of these scenarios describe you, or if you have deep knowledge of graph technology, we want you to be a speaker at this year's Graph + AI Summit.


The conference will include keynote presentations from graph luminaries as well as industry and technology tracks. Each track will include beginner, intermediate, and advanced-level sessions. Our audience will benefit from a mix of formal presentations and interactive panel participation. Case studies are particularly welcome. Your submission may include one or more of the following topics:

Artificial intelligence use cases and case studies

Machine learning use cases and case studies

Graph neural networks

Combining Natural Language Processing (NLP) with graph

First-of-a-kind solutions combining AI, machine learning, and graph algorithms

Predictive analytics

Customer 360 and customer journey

Hyper-personalized recommendation engine

Fraud detection, anti-money laundering

Supply chain optimization

Cybersecurity

Industry-specific applications in the internet, eCommerce, banking, insurance, fintech, media, manufacturing, transportation, and healthcare industries.

Please submit your proposal by April 11, 2022 at 12:00 A.M./midnight PT here.

Registration

To register for the event, please visit https://www.tigergraph.com/graphaisummit/.

Graph for All Million Dollar Challenge Winners to be Featured at Graph + AI Summit 2022

Last month, TigerGraph launched the Graph for All Million Dollar Challenge, a global search for innovative ways to harness the power of graph technology and machine learning to solve real-world problems. The challenge brings together brilliant minds to build innovative solutions to better our future, with one question: How will you change the world with graph? Since the launch, the challenge has gained major traction worldwide, with over 1,000 registrations from 90+ countries so far. TigerGraph will reveal and feature all 15 winners of the challenge at the Graph + AI Summit 2022 event. For more information or to register for the challenge, please visit https://www.tigergraph.com/graph-for-all/.


About TigerGraph

TigerGraph is a platform for advanced analytics and machine learning on connected data. Based on the industry's first and only distributed native graph database, TigerGraph's proven technology supports advanced analytics and machine learning applications such as fraud detection, anti-money laundering (AML), entity resolution, customer 360, recommendations, knowledge graph, cybersecurity, supply chain, IoT, and network analysis. The company is headquartered in Redwood City, California, USA. Start free with tigergraph.com/cloud.

Media Contacts:

North America: Tanya Carlsson, Offleash PR, tanya@offleashpr.com, +1 (707) 529-6139

EMEA: Anne Harding, The Message Machine, anne@themessagemachine.com, +44 7887 682943
