Runway Raises $8.5M Series A to Build the Next Generation of Creative Tools – Business Wire

NEW YORK--(BUSINESS WIRE)--Runway, a start-up building the next generation of digital creative tools, announced today an $8.5 million Series A led by Amplify Partners with participation from Lux Capital and Compound Ventures. Using machine learning, the company is pioneering image and video creation techniques for synthetic content manipulation and media editing, allowing users to create and generate content with cutting-edge AI and graphics technology. With an active and growing community, Runway has emerged as a leader in the future of content production.

"Deep learning techniques are bringing a new paradigm to content creation with synthetic media and automation," Runway founder Cristobal Valenzuela explained. "With Runway, we're building the backbone of that creative revolution, allowing creators to do things that were impossible until very recently."

Today, Runway's user community includes designers, filmmakers, and other creative professionals at R/GA, New Balance, Google, and IBM. A favorite among educators, Runway has been incorporated into the design curriculum at NYU, RISD, and MIT. So far, Runway users have trained more than 50,000 AI models, uploaded over 24 million files to the platform, and run more than 900,000 models.

"By making sophisticated machine learning algorithms unimaginably accessible, Runway challenges a designer's own visual and muscle memories, pushing them out of their comfort zones to create unexpected, innovative work," said Onur Yuce Gun, Creative Manager of Computation Design at New Balance.

Most recently, Runway released Green Screen, a web tool that uses machine learning to automate the process of rotoscoping, saving users significant time when they want to remove objects from a background. Professional editors and content creators are using Green Screen to edit content faster and expand the visual effects available to their projects.

"The ways in which we distribute content have changed radically in recent years; however, the tools that creative professionals use to make content have not," said Sarah Catanzaro, Partner at Amplify Partners. "For the first time in decades, creatives now have a radically better suite of tools to generate and edit images, video, and other media with AI. Runway is not only automating routine work for creatives, but also enabling new forms of perception and creative expression. We are thrilled to partner with a talented team as they develop the next-gen creative toolkit."

The investment brings Runway's total funding since launch to $10.5M and will help Runway hire within its research and engineering teams as it continues building cutting-edge synthetic media tools while growing its community of creative users.

Valenzuela co-founded Runway with Anastasis Germanidis and Alejandro Matamala, all graduates of NYU's Interactive Telecommunications Program.

About Runway: Runway is building the next generation of creative tools that make machine learning easy and accessible for all types of creatives. With an active and growing community, Runway is pioneering how content and media are created. With a focus on video automation and synthetic media, Runway reduces the costs of creating visual media across creative industries. To learn more and sign up for a free account, visit http://www.runwayml.com.


What is Machine Learning? | IBM

Machine learning focuses on applications that learn from experience and improve their decision-making or predictive accuracy over time.

Machine learning is a branch of artificial intelligence (AI) focused on building applications that learn from data and improve their accuracy over time without being explicitly programmed to do so.

In data science, an algorithm is a sequence of statistical processing steps. In machine learning, algorithms are 'trained' to find patterns and features in massive amounts of data in order to make decisions and predictions based on new data. The better the algorithm, the more accurate the decisions and predictions will become as it processes more data.

Today, examples of machine learning are all around us. Digital assistants search the web and play music in response to our voice commands. Websites recommend products and movies and songs based on what we bought, watched, or listened to before. Robots vacuum our floors while we do . . . something better with our time. Spam detectors stop unwanted emails from reaching our inboxes. Medical image analysis systems help doctors spot tumors they might have missed. And the first self-driving cars are hitting the road.

We can expect more. As big data keeps getting bigger, as computing becomes more powerful and affordable, and as data scientists keep developing more capable algorithms, machine learning will drive greater and greater efficiency in our personal and work lives.

There are four basic steps for building a machine learning application (or model). These are typically performed by data scientists working closely with the business professionals for whom the model is being developed.

Training data is a data set representative of the data the machine learning model will ingest to solve the problem it's designed to solve. In some cases, the training data is labeled data, tagged to call out features and classifications the model will need to identify. Other data is unlabeled, and the model will need to extract those features and assign classifications on its own.

In either case, the training data needs to be properly prepared: randomized, de-duplicated, and checked for imbalances or biases that could impact the training. It should also be divided into two subsets: the training subset, which will be used to train the application, and the evaluation subset, used to test and refine it.
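Those preparation steps can be sketched in plain Python; the record format, de-duplication rule, and 80/20 split below are illustrative assumptions, not prescriptions:

```python
import random

def split_dataset(records, eval_fraction=0.2, seed=42):
    """Shuffle, de-duplicate, and split records into training and evaluation subsets."""
    rng = random.Random(seed)
    deduped = list(dict.fromkeys(records))  # drop exact duplicates, keep order
    rng.shuffle(deduped)                    # randomize before splitting
    cut = int(len(deduped) * (1 - eval_fraction))
    return deduped[:cut], deduped[cut:]

records = [("email %d" % i, i % 2) for i in range(100)] + [("email 0", 0)]  # one duplicate
train, evaluation = split_dataset(records)
print(len(train), len(evaluation))  # 80 20
```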

Again, an algorithm is a set of statistical processing steps. The type of algorithm depends on the type (labeled or unlabeled) and amount of data in the training data set and on the type of problem to be solved.

Common types of machine learning algorithms for use with labeled data include regression algorithms, decision trees, and instance-based algorithms such as k-nearest neighbor.

Algorithms for use with unlabeled data include clustering algorithms such as k-means, association algorithms, and neural networks.

Training the algorithm is an iterative process: it involves running variables through the algorithm, comparing the output with the results it should have produced, adjusting weights and biases within the algorithm that might yield a more accurate result, and running the variables again until the algorithm returns the correct result most of the time. The resulting trained, accurate algorithm is the machine learning model, an important distinction to note, because 'algorithm' and 'model' are often incorrectly used interchangeably, even by machine learning mavens.
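A minimal illustration of that iterative loop, fitting a one-variable linear model with gradient descent (the data, learning rate, and epoch count are arbitrary choices for the sketch):

```python
# Toy training loop: fit y = w*x + b to data generated from y = 2x + 1,
# adjusting the weight and bias until the error stops improving.
data = [(x, 2 * x + 1) for x in [i / 10 for i in range(-20, 21)]]

w, b, lr = 0.0, 0.0, 0.05
for epoch in range(500):                      # iterate: run, compare, adjust, repeat
    grad_w = grad_b = 0.0
    for x, y in data:
        err = (w * x + b) - y                 # compare output with the expected result
        grad_w += 2 * err * x / len(data)
        grad_b += 2 * err / len(data)
    w -= lr * grad_w                          # adjust the weight...
    b -= lr * grad_b                          # ...and the bias

print(round(w, 2), round(b, 2))  # 2.0 1.0
```

The loop itself is the "algorithm"; the final values of `w` and `b` together with the prediction rule are the "model".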

The final step is to use the model with new data and, in the best case, for it to improve in accuracy and effectiveness over time. Where the new data comes from will depend on the problem being solved. For example, a machine learning model designed to identify spam will ingest email messages, whereas a machine learning model that drives a robot vacuum cleaner will ingest data resulting from real-world interaction with moved furniture or new objects in the room.

Machine learning methods (also called machine learning styles) fall into four primary categories.

Supervised machine learning trains itself on a labeled dataset. That is, the data is labeled with the information that the machine learning model is being built to determine, and it may even be classified in the same ways the model is expected to classify new data. For example, a computer vision model designed to identify purebred German Shepherd dogs might be trained on a data set of various labeled dog images.

Supervised machine learning requires less training data than other machine learning methods and makes training easier, because the model's results can be compared to actual labeled results. But properly labeled data is expensive to prepare, and there's the danger of overfitting: creating a model so closely tied and biased to the training data that it doesn't handle variations in new data accurately.
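As a toy illustration of supervised learning, here is a tiny nearest-neighbor classifier trained on labeled points and checked against held-out labeled examples (the coordinates and labels are invented for the sketch):

```python
# Minimal supervised learning sketch: a 1-nearest-neighbor classifier
# trained on labeled examples, then checked against held-out labeled data.
train_set = [((1.0, 1.0), "dog"), ((1.2, 0.9), "dog"), ((5.0, 5.0), "cat"), ((5.2, 4.8), "cat")]
held_out  = [((0.9, 1.1), "dog"), ((5.1, 5.1), "cat")]

def predict(point):
    # the label of the closest training example wins
    def dist2(p, q):
        return (p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2
    return min(train_set, key=lambda ex: dist2(ex[0], point))[1]

correct = sum(predict(p) == label for p, label in held_out)
print(correct / len(held_out))  # 1.0
```

Because the held-out data is also labeled, the model's predictions can be scored directly, which is exactly what makes supervised training easier to evaluate.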

Learn more about supervised learning.

Unsupervised machine learning ingests unlabeled data, lots and lots of it, and uses algorithms to extract meaningful features needed to label, sort, and classify the data in real time, without human intervention. Unsupervised learning is less about automating decisions and predictions, and more about identifying patterns and relationships in data that humans would miss. Take spam detection, for example: people generate more email than a team of data scientists could ever hope to label or classify in their lifetimes. An unsupervised learning algorithm can analyze huge volumes of emails and uncover the features and patterns that indicate spam (and keep getting better at flagging spam over time).
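A minimal unsupervised example: k-means clustering groups unlabeled points with no human-provided labels (the points and the two-cluster setup are invented for the sketch):

```python
# Unsupervised learning sketch: k-means groups unlabeled points into clusters
# with no human-provided labels.
points = [(1.0, 1.0), (1.5, 1.2), (0.8, 0.9), (8.0, 8.0), (8.2, 7.9), (7.8, 8.1)]
centroids = [points[0], points[3]]  # naive initialization: pick two points

for _ in range(10):
    # assign each point to its nearest centroid
    clusters = [[], []]
    for p in points:
        d = [(p[0] - c[0]) ** 2 + (p[1] - c[1]) ** 2 for c in centroids]
        clusters[d.index(min(d))].append(p)
    # move each centroid to the mean of its cluster
    centroids = [
        (sum(p[0] for p in c) / len(c), sum(p[1] for p in c) / len(c))
        for c in clusters
    ]

print(sorted(len(c) for c in clusters))  # [3, 3]
```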

Learn more about unsupervised learning.

Semi-supervised learning offers a happy medium between supervised and unsupervised learning. During training, it uses a smaller labeled dataset to guide classification and feature extraction from a larger, unlabeled data set. Semi-supervised learning can solve the problem of having not enough labeled data (or not being able to afford to label enough data) to train a supervised learning algorithm.
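One simple way to realize this idea is self-training: pseudo-label the unlabeled points using a model fit on the small labeled seed, then fold them in. A toy sketch with a nearest-neighbor rule (the data and labels are invented for illustration):

```python
# Semi-supervised sketch: a few labeled points guide the labeling of a larger
# unlabeled set via self-training with a nearest-neighbor rule.
labeled = [((1.0, 1.0), "spam"), ((9.0, 9.0), "not spam")]    # small labeled seed
unlabeled = [(1.2, 0.8), (0.9, 1.3), (8.8, 9.1), (9.2, 8.7)]  # larger unlabeled set

def nearest_label(point, examples):
    def dist2(p, q):
        return (p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2
    return min(examples, key=lambda ex: dist2(ex[0], point))[1]

# Pseudo-label each unlabeled point using the labeled set so far, then fold it in.
for p in unlabeled:
    labeled.append((p, nearest_label(p, labeled)))

print(sum(1 for _, y in labeled if y == "spam"))  # 3
```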

Reinforcement machine learning is a behavioral machine learning model similar to supervised learning, but the algorithm isn't trained using sample data. This model learns as it goes, using trial and error. A sequence of successful outcomes is reinforced to develop the best recommendation or policy for a given problem.

The IBM Watson system that won the Jeopardy! challenge in 2011 is a good example. The system used reinforcement learning to decide whether to attempt an answer (or question, as it were), which square to select on the board, and how much to wager, especially on Daily Doubles.
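A minimal trial-and-error sketch: tabular Q-learning on a five-cell corridor, where the agent earns a reward only for reaching the rightmost cell (the environment and hyperparameters are invented for illustration):

```python
import random

# Reinforcement learning sketch: tabular Q-learning on a 5-cell corridor.
# The agent starts at cell 0; reaching cell 4 yields reward 1, other steps yield 0.
rng = random.Random(0)
n_states, actions = 5, [-1, +1]          # actions: move left or right
Q = [[0.0, 0.0] for _ in range(n_states)]
alpha, gamma, eps = 0.5, 0.9, 0.2        # learning rate, discount, exploration rate

for episode in range(200):
    s = 0
    while s != 4:
        # explore occasionally, otherwise exploit the best-known action
        a = rng.randrange(2) if rng.random() < eps else max((0, 1), key=lambda i: Q[s][i])
        s2 = min(max(s + actions[a], 0), n_states - 1)
        r = 1.0 if s2 == 4 else 0.0
        Q[s][a] += alpha * (r + gamma * max(Q[s2]) - Q[s][a])  # trial-and-error update
        s = s2

policy = [max((0, 1), key=lambda i: Q[s][i]) for s in range(n_states - 1)]
print(policy)  # [1, 1, 1, 1]: always move right
```

Successful outcomes (reaching cell 4) are reinforced backward through the Q-table until the learned policy always heads toward the reward.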

Learn more about reinforcement learning.

Deep learning is a subset of machine learning (all deep learning is machine learning, but not all machine learning is deep learning). Deep learning algorithms define an artificial neural network that is designed to learn the way the human brain learns. Deep learning models require large amounts of data that pass through multiple layers of calculations, applying weights and biases in each successive layer to continually adjust and improve the outcomes.

Deep learning models are typically unsupervised or semi-supervised. Reinforcement learning models can also be deep learning models. Certain types of deep learning models, including convolutional neural networks (CNNs) and recurrent neural networks (RNNs), are driving progress in areas such as computer vision, natural language processing (including speech recognition), and self-driving cars.
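As a small concrete example of data passing through successive layers with weights and biases adjusted on each pass, here is a two-layer network learning XOR in NumPy (the architecture and hyperparameters are arbitrary choices for the sketch):

```python
import numpy as np

# Deep learning sketch: a tiny two-layer neural network learning XOR.
# Data passes through successive layers; weights and biases are adjusted
# on each pass to reduce the output error.
rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1, b1 = rng.normal(size=(2, 8)), np.zeros(8)   # first layer: 2 -> 8
W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)   # second layer: 8 -> 1
sigmoid = lambda z: 1 / (1 + np.exp(-z))

for _ in range(5000):
    h = np.tanh(X @ W1 + b1)            # forward pass through the first layer
    out = sigmoid(h @ W2 + b2)          # forward pass through the second layer
    d_out = out - y                     # error signal at the output
    d_h = (d_out @ W2.T) * (1 - h**2)   # backpropagate through tanh
    W2 -= 0.5 * h.T @ d_out;  b2 -= 0.5 * d_out.sum(0)
    W1 -= 0.5 * X.T @ d_h;    b1 -= 0.5 * d_h.sum(0)

print((out.round().ravel() == y.ravel()).all())
```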

See the blog post AI vs. Machine Learning vs. Deep Learning vs. Neural Networks: What's the Difference? for a closer look at how the different concepts relate.

Learn more about deep learning.

As noted at the outset, machine learning is everywhere. Here are just a few examples you might encounter every day: voice-driven digital assistants, product and media recommendations, robot vacuums, spam detectors, medical image analysis, and self-driving cars.

IBM Watson Machine Learning supports the machine learning lifecycle end to end. It is available in a range of offerings that let you build machine learning models wherever your data lives and deploy them anywhere in your hybrid multicloud environment.

IBM Watson Machine Learning on IBM Cloud Pak for Data helps enterprise data science and AI teams speed AI development and deployment anywhere, on a cloud-native data and AI platform. IBM Watson Machine Learning Cloud, a managed service in the IBM Cloud environment, is the fastest way to move models from experimentation on the desktop to deployment for production workloads. For smaller teams looking to scale machine learning deployments, IBM Watson Machine Learning Server offers simple installation on any private or public cloud.

To get started, sign up for an IBMid and create your IBM Cloud account.


The 12 Coolest Machine-Learning Startups Of 2020 – CRN

Learning Curve

Artificial intelligence has been a hot technology area in recent years and machine learning, a subset of AI, is one of the most important segments of the whole AI arena.

Machine learning is the development of intelligent algorithms and statistical models that improve software through experience without the need to explicitly code those improvements. A predictive analysis application, for example, can become more accurate over time through the use of machine learning.

But machine learning has its challenges. Developing machine-learning models and systems requires a confluence of data science, data engineering and development skills. Obtaining and managing the data needed to develop and train machine-learning models is a significant task. And implementing machine-learning technology within real-world production systems can be a major hurdle.

Here's a look at a dozen startup companies, some that have been around for a few years and some just getting off the ground, that are addressing the challenges associated with machine learning.

AI.Reverie

Top Executive: Daeil Kim, Co-Founder, CEO

Headquarters: New York

AI.Reverie develops AI and machine-learning technology for data generation, data labeling and data enhancement tasks for the advancement of computer vision. The company's simulation platform is used to help acquire, curate and annotate the large amounts of data needed to train computer vision algorithms and improve AI applications.

In October AI.Reverie was named a Gartner Cool Vendor in AI core technologies.

Anodot

Top Executive: David Drai, Co-Founder, CEO

Headquarters: Redwood City, Calif.

Anodot's Deep 360 autonomous business monitoring platform uses machine learning to continuously monitor business metrics, detect significant anomalies and help forecast business performance.

Anodot's algorithms have a contextual understanding of business metrics, providing real-time alerts that help users cut incident costs by as much as 80 percent.

Anodot has been granted patents for technology and algorithms in such areas as anomaly score, seasonality and correlation. Earlier this year the company raised $35 million in Series C funding, bringing its total funding to $62.5 million.

BigML

Top Executive: Francisco Martin, Co-Founder, CEO

Headquarters: Corvallis, Ore.

BigML offers a comprehensive, managed machine-learning platform for easily building and sharing datasets and data models, and making highly automated, data-driven decisions. The company's programmable, scalable machine-learning platform automates classification, regression, time series forecasting, cluster analysis, anomaly detection, association discovery and topic modeling tasks.

The BigML Preferred Partner Program supports referral partners and partners that sell BigML and oversee implementation projects. Partner A1 Digital, for example, has developed a retail application on the BigML platform that helps retailers predict sales cannibalization, when promotions or other marketing activity for one product can lead to reduced demand for other products.

StormForge

Top Executive: Matt Provo, Founder, CEO

Headquarters: Cambridge, Mass.

StormForge provides machine learning-based, cloud-native application testing and performance optimization software that helps organizations optimize application performance in Kubernetes.

StormForge was founded under the name Carbon Relay and developed its Red Sky Ops tools that DevOps teams use to manage a large variety of application configurations in Kubernetes, automatically tuning them for optimized performance no matter what IT environment they're operating in.

This week the company acquired German company Stormforger and its performance testing-as-a-platform technology. The company has rebranded as StormForge and renamed its integrated product the StormForge Platform, a comprehensive system for DevOps and IT professionals that can proactively and automatically test, analyze, configure, optimize and release containerized applications.

In February the company said that it had raised $63 million in a funding round from Insight Partners.

Comet.ML

Top Executive: Gideon Mendels, Co-Founder, CEO

Headquarters: New York

Comet.ML provides a cloud-hosted machine-learning platform for building reliable machine-learning models that help data scientists and AI teams track datasets, code changes, experimentation history and production models.

Launched in 2017, Comet.ML has raised $6.8 million in venture financing, including $4.5 million in April 2020.

Dataiku

Top Executive: Florian Douetteau, Co-Founder, CEO

Headquarters: New York

Dataiku's goal with its Dataiku DSS (Data Science Studio) platform is to move AI and machine-learning use beyond lab experiments into widespread use within data-driven businesses. Dataiku DSS is used by data analysts and data scientists for a range of machine-learning, data science and data analysis tasks.

In August Dataiku raised an impressive $100 million in a Series D round of funding, bringing its total financing to $247 million.

Dataiku's partner ecosystem includes analytics consultants, service partners, technology partners and VARs.

DotData

Top Executive: Ryohei Fujimaki, Founder, CEO

Headquarters: San Mateo, Calif.

DotData says its DotData Enterprise machine-learning and data science platform is capable of reducing AI and business intelligence development projects from months to days. The company's goal is to make data science processes simple enough that almost anyone, not just data scientists, can benefit from them.

The DotData platform is based on the company's AutoML 2.0 engine that performs full-cycle automation of machine-learning and data science tasks. In July the company debuted DotData Stream, a containerized AI/ML model that enables real-time predictive capabilities.

Eightfold.AI

Top Executive: Ashutosh Garg, Co-Founder, CEO

Headquarters: Mountain View, Calif.

Eightfold.AI develops the Talent Intelligence Platform, a human resource management system that utilizes AI, deep learning and machine-learning technology for talent acquisition, management, development, experience and diversity. The Eightfold system, for example, uses AI and ML to better match candidate skills with job requirements and improves employee diversity by reducing unconscious bias.

In late October Eightfold.AI announced a $125 million round of financing, putting the startup's value at more than $1 billion.

H2O.ai

Top Executive: Sri Ambati, Co-Founder, CEO

Headquarters: Mountain View, Calif.

H2O.ai wants to democratize the use of artificial intelligence for a wide range of users.

The company's H2O open-source AI and machine-learning platform, H2O Driverless AI automatic machine-learning software, H2O MLOps and other tools are used to deploy AI-based applications in financial services, insurance, health care, telecommunications, retail, pharmaceutical and digital marketing.

H2O.ai recently teamed up with data science platform developer KNIME to integrate Driverless AI for AutoML with KNIME Server for workflow management across the entire data science life cycle, from data access to optimization and deployment.

Iguazio

Top Executive: Asaf Somekh, Co-Founder, CEO

Headquarters: New York

The Iguazio Data Science Platform for real-time machine learning applications automates and accelerates machine-learning workflow pipelines, helping businesses develop, deploy and manage AI applications at scale that improve business outcomes, what the company calls MLOps.

In early 2020 Iguazio raised $24 million in new financing, bringing its total funding to $72 million.

OctoML

Top Executive: Luis Ceze, Co-Founder, CEO

Headquarters: Seattle

OctoML's Software-as-a-Service Octomizer makes it easier for businesses and organizations to put deep learning models into production more quickly on different CPU and GPU hardware, including at the edge and in the cloud.

OctoML was founded by the team that developed the Apache TVM machine-learning compiler stack project at the University of Washington's Paul G. Allen School of Computer Science & Engineering. OctoML's Octomizer is based on the TVM stack.

Tecton

Top Executive: Mike Del Balso, Co-Founder, CEO

Headquarters: San Francisco

Tecton just emerged from stealth in April 2020 with its data platform for machine learning that enables data scientists to turn raw data into production-ready machine-learning features. The startups technology is designed to help businesses and organizations harness and refine vast amounts of data into the predictive signals that feed machine-learning models.

The company's three founders, CEO Mike Del Balso, CTO Kevin Stumpf and Engineering Vice President Jeremy Hermann, previously worked together at Uber, where they developed the company's Michelangelo machine-learning platform that the ride-sharing company used to scale its operations to thousands of production models serving millions of transactions per second, according to Tecton.

The company started with $25 million in seed and Series A funding co-led by Andreessen Horowitz and Sequoia.


Commentary: Pathmind applies AI, machine learning to industrial operations – FreightWaves

The views expressed here are solely those of the author and do not necessarily represent the views of FreightWaves or its affiliates.

In this installment of the AI in Supply Chain series (#AIinSupplyChain), we explore how Pathmind, an early-stage startup based in San Francisco, is helping companies apply simulation and reinforcement learning to industrial operations.

I asked Chris Nicholson, CEO and founder of Pathmind, "What is the problem that Pathmind solves for its customers? Who is the typical customer?"

Nicholson said: "The typical Pathmind customer is an industrial engineer working at a simulation consulting firm or on the simulation team of a large corporation with industrial operations to optimize. This ranges from manufacturing companies to the natural resources sector, such as mining and oil and gas. Our clients build simulations of physical systems for routing, job scheduling or price forecasting, and then search for strategies to get more efficient."

Pathmind's software is suited for manufacturing resource management, energy usage management optimization and logistics optimization.

As with every other startup that I have highlighted as a case in this #AIinSupplyChain series, I asked, "What is the secret sauce that makes Pathmind successful? What is unique about your approach? Deep learning seems to be all the rage these days. Does Pathmind use a form of deep learning? Reinforcement learning?"

Nicholson responded: "We automate tasks that our users find tedious or frustrating so that they can focus on what's interesting. For example, we set up and maintain a distributed computing cluster for training algorithms. We automatically select and tune the right reinforcement learning algorithms, so that our users can focus on building the right simulations and coaching their AI agents."

Echoing topics that we have discussed in earlier articles in this series, he continued: "Pathmind uses some of the latest deep reinforcement learning algorithms from OpenAI and DeepMind to find new optimization strategies for our users. Deep reinforcement learning has achieved breakthroughs in gaming, and it is beginning to show the same performance for industrial operations and supply chain."

On its website, Pathmind describes saving a large metals processor 10% of its expenditures on power. It also describes the use of its software to increase ore preparation by 19% at an open-pit mining site.

Given how difficult it is to obtain good quality data for AI and machine learning systems for industrial settings, I asked how Pathmind handles that problem.

"Simulations generate synthetic data, and lots of it," said Slin Lee, Pathmind's head of engineering. "The challenge is to build a simulation that reflects your underlying operations, but there are many tools to validate results."

"Once you pass the simulation stage, you can integrate your reinforcement learning policy into an ERP. Most companies have a lot of the data they need in those systems. And yes, there's always data cleansing to do," he added.

As the customer success examples Pathmind provides on its website suggest, mining companies are increasingly looking to adopt and implement new software to increase efficiencies in their internal operations. This is happening because the industry as a whole runs on very old technology, and deposits of ore are becoming increasingly difficult to access as existing mines reach maturity. Moreover, the growing trend toward the decarbonization of supply chains, and the regulations that will eventually follow to make decarbonization a requirement, provide an incentive for mining companies to seize the initiative in figuring out how to achieve that goal by implementing new technology.

The areas in which AI and machine learning are making the greatest inroads are mineral exploration (using geological data to make the search for new mineral deposits less prone to error and waste); predictive maintenance and safety (using data to preemptively repair expensive machinery before breakdowns occur); cyberphysical systems (creating digital models of the mining operation in order to quickly simulate various scenarios); and autonomous vehicles (using autonomous trucks and other autonomous vehicles and machinery to move resources within the area in which mining operations are taking place).

According to Statista, "The revenue of the top 40 global mining companies, which represent a vast majority of the whole industry, amounted to some 692 billion U.S. dollars in 2019. The net profit margin of the mining industry decreased from 25 percent in 2010 to nine percent in 2019."

The trend toward mining companies and other natural-resource-intensive industries adopting new technology is going to continue. So this is a topic we will continue to pay attention to in this column.

Conclusion

If you are a team working on innovations that you believe have the potential to significantly refashion global supply chains, we'd love to tell your story at FreightWaves. I am easy to reach on LinkedIn and Twitter. Alternatively, you can reach out to any member of the editorial team at FreightWaves at media@freightwaves.com.

Dig deeper into the #AIinSupplyChain Series with FreightWaves:

Commentary: Optimal Dynamics the decision layer of logistics? (July 7)

Commentary: Combine optimization, machine learning and simulation to move freight (July 17)

Commentary: SmartHop brings AI to owner-operators and brokers (July 22)

Commentary: Optimizing a truck fleet using artificial intelligence (July 28)

Commentary: FleetOps tries to solve data fragmentation issues in trucking (Aug. 5)

Commentary: Bulgaria's Transmetrics uses augmented intelligence to help customers (Aug. 11)

Commentary: Applying AI to decision-making in shipping and commodities markets (Aug. 27)

Commentary: The enabling technologies for the factories of the future (Sept. 3)

Commentary: The enabling technologies for the networks of the future (Sept. 10)

Commentary: Understanding the data issues that slow adoption of industrial AI (Sept. 16)

Commentary: How AI and machine learning improve supply chain visibility, shipping insurance (Sept. 24)

Commentary: How AI, machine learning are streamlining workflows in freight forwarding, customs brokerage (Oct. 1)

Commentary: Can AI and machine learning improve the economy? (Oct. 8)

Commentary: Savitude and StyleSage leverage AI, machine learning in fashion retail (Oct. 15)

Commentary: How Japan's ABEJA helps large companies operationalize AI, machine learning (Oct. 26)

Author's disclosure: I am not an investor in any early-stage startups mentioned in this article, either personally or through REFASHIOND Ventures. I have no other financial relationship with any entities mentioned in this article.


Comparison of machine learning algorithms for the prediction of five-year survival in oral squamous cell carcinoma – DocWire News


J Oral Pathol Med. 2020 Nov 21. doi: 10.1111/jop.13135. Online ahead of print.

ABSTRACT

BACKGROUND/AIM: Machine learning analyses of cancer outcomes for oral cancer remain sparse compared to other types of cancer like breast or lung. The purpose of the present study was to compare the performance of machine learning algorithms in the prediction of global, recurrence-free five-year survival in oral cancer patients based on clinical and histopathological data.

METHODS: Data was gathered retrospectively from 416 patients with oral squamous cell carcinoma. The dataset was divided into training and test dataset (75:25 split). Training performance of five machine learning algorithms (Logistic regression, K-nearest neighbours, Naïve Bayes, Decision tree and Random forest classifiers) for prediction was assessed by k-fold cross-validation. Variables used in the machine learning models were age, sex, pain symptoms, grade of lesion, lymphovascular invasion, extracapsular extension, perineural invasion, bone invasion and type of treatment. Variable importance was assessed and model performance on the testing data was assessed using receiver operating characteristic curves, accuracy, sensitivity, specificity and F1 score.
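A comparison of this shape can be sketched with scikit-learn; since the 416-patient dataset is not public, the snippet below substitutes synthetic data of the same size, so its scores will not match the paper's:

```python
# Sketch of the study design: five classifiers compared by k-fold cross-validation
# on a 75:25 train/test split. Synthetic data stands in for the clinical dataset.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split, cross_val_score
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import KNeighborsClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_samples=416, n_features=9, random_state=0)  # stand-in data
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)  # 75:25

models = {
    "Logistic regression": LogisticRegression(max_iter=1000),
    "K-nearest neighbours": KNeighborsClassifier(),
    "Naive Bayes": GaussianNB(),
    "Decision tree": DecisionTreeClassifier(random_state=0),
    "Random forest": RandomForestClassifier(random_state=0),
}

results = {}
for name, model in models.items():
    cv = cross_val_score(model, X_tr, y_tr, cv=5).mean()  # k-fold training performance
    test_acc = model.fit(X_tr, y_tr).score(X_te, y_te)    # held-out test accuracy
    results[name] = (round(cv, 3), round(test_acc, 3))

for name, (cv, acc) in results.items():
    print(f"{name}: cv={cv}, test={acc}")
```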

RESULTS: The best performing model was the Decision tree classifier, followed by the Logistic Regression model (accuracy 76% and 60%, respectively). The Naïve Bayes model did not display any predictive value with 0% specificity.

CONCLUSIONS: Machine learning presents a promising and accessible toolset for improving prediction of oral cancer outcomes. Our findings add to a growing body of evidence that Decision tree models are useful in predicting OSCC outcomes. We would advise that future similar studies explore a variety of machine learning models, including Logistic regression, to help evaluate model performance.

PMID:33220109 | DOI:10.1111/jop.13135
