Archive for the ‘Machine Learning’ Category

AMD’s DLSS-alternative doesn’t need machine learning to work – PC Gamer

After much pining, AMD PC enthusiasts (as well as console gamers, potentially) will finally be getting FidelityFX Super Resolution (FSR) this year. That's the red team's answer to Nvidia's DLSS, and could mean ray tracing isn't the restrictive force it is right now for the Radeon RX 6000-series cards. There's been no word as to an exact release date, but at some point in 2021 those harbouring an RDNA2 graphics card will be able to enjoy the new, resolution-based performance-improving tech, with no need for machine learning.

FSR is AMD's equivalent to Nvidia's DLSS (Deep Learning Super Sampling), which uses AI to sharpen up frames and stabilise frame rates at higher resolutions, and is essentially what allows GeForce cards to deliver decent performance when using ray-traced lighting effects. Though, as AMD's VP of graphics, Scott Herkelman, explains in his recent talk with PCWorld, "you don't need machine learning to do it."

Herkelman admitted there's still some work to be done, but it's coming along well. He explains that the company has made an effort to involve its followers in the design process, giving them a chance to really influence the direction the company goes with the technology.

This dedication to open development may have slowed the process, but it means developers are more ready and able to collaborate to improve the tech.

While AMD's focus is on getting FSR out to PC gamers first, it should also roll out as a cross-platform technology. That means this isn't just going to benefit PC gamers, but console gamers too, thanks to AMD components being packed inside the likes of the PlayStation 5 and Xbox Series X and S.

There was some potential for the FSR feature to have released alongside the Radeon RX 6700 XT, but it seems AMD is waiting for the entire lineup to be available (I use that term loosely) before hitting us with the new tech.

Still, the list of general FidelityFX-supporting games is growing, showing that the forerunner features for FSR are being taken seriously by developers. And, with each step, AMD comes closer to rolling out this impressive-sounding technological development.

Original post:
AMD's DLSS-alternative doesn't need machine learning to work - PC Gamer

Machine Learning Deployment Is The Biggest Tech Trend In 2021 – Analytics India Magazine

What good is an ML model if it isn't fast, doesn't scale, isn't accurate enough, takes weeks to deploy, and costs too much?

Having machine learning in a company's portfolio used to be an investor magnet. Now, the market is bullish on MLaaS, with a new breed of companies offering machine learning services (libraries/APIs/frameworks) to help other companies get their jobs done better and faster.

According to PwC, AI's potential global economic impact will be worth $15.7 trillion by 2030. And, as interest slowly shifts towards MLOps, it is possible that these companies, which promise to scale and accelerate ML deployment, might grab a bigger piece of the pie. Last week, OctoML raised $28 million. The Seattle-based startup offers a machine learning acceleration platform built on top of the open-source Apache TVM compiler framework project. The $28 million Series B funding brings the company's total funding to $47 million.

Image credits: OctoML

90% of machine learning models don't make it to production.

For OctoML's CEO, Luis Ceze, there is still a significant gap between building a model and making it production-ready. "Rapidly evolving ML models, ML frameworks and a Cambrian explosion of hardware backends make ML deployment challenging," wrote Ceze in a blog post. It is not easy to make sure your model runs fast enough, or to benchmark it across different deployment hardware. "Even if your determined machine learning team has hurtled through this gauntlet, they still have to go through a whole different set of challenges to package and deploy at the edge," explained Ceze.

Good performance from ML models requires long hours of manual optimization, and those long hours translate into hefty cloud bills. Added to this is model packaging, which varies across devices and platforms. According to Ceze, there are no modern CI/CD integrations to keep up with model changes.

"What good is an ML model if it isn't fast, doesn't scale, isn't accurate enough, takes weeks to deploy and costs too much?" questioned Ceze as he made a case for OctoML.

OctoML addresses these pain points with the open-source machine learning compiler framework Apache TVM, which, according to the team, has quickly become the go-to solution for developers and ML engineers looking to maximize ML model performance on any hardware backend. "With OctoML we are establishing the first Machine Learning Acceleration Platform that will automatically maximize model performance while enabling seamless deployment on any hardware, cloud provider, or edge device," said Ceze.

Be it MLOps or XOps, these services are designed to relieve developers of the technical debt that these mega ML models accumulate as their complexity changes. Apart from OctoML, there are a few other startups that have succeeded in convincing investors. Let's take a look at a couple of them:

Verta

Funding till date: $10 million

The team at Verta is building software for data science teams to address the problem of model management: how to track, version, and audit models used across products. Verta's MLOps software supports model development, deployment, operations, monitoring, and collaboration, enabling data scientists to manage models across their lifecycle. So far, the company has $10 million in funding, and it promises to make robust, scalable, mature deployable models a reality.

Algorithmia

Funding till date: $38.1 million

Image credits: Algorithmia

"We're obsessed with helping organizations get ML models into production because that's the only way they can generate business value," said the team at Algorithmia. Their enterprise MLOps platform manages all stages of the production ML lifecycle within existing operational processes, so users can put models into production quickly, securely, and cost-effectively. Unlike inefficient and expensive do-it-yourself MLOps management solutions that lock users into specific technology stacks, Algorithmia automates ML deployment, optimizes collaboration between operations and development, leverages existing SDLC and CI/CD systems, and provides advanced security and governance.

Algorithmia's funding (Source: Crunchbase)

Today, Algorithmia's services are used by over 130,000 engineers and data scientists, including at the United Nations, government intelligence agencies, and Fortune 500 companies.

"It's [MLOps] going to be an essential component for enterprises industrializing their AI efforts in the future," said Diego M. Oppenheimer, Algorithmia's CEO, in a recent interview with GitHub.

Databand

Funding till date: $14.5 million

Databand brings a similar flavor to the ML ecosystem. The team at Databand is trying to solve the problems that arise from increasing data workloads. The company, founded by Josh Benamram, Victor Shafran and Evgeny Shulman, helps data engineering teams catch data pipeline issues and trace the impact of those problems across end-to-end data flows. Databand's platform includes an application for visualizing pipeline metadata and an open-source library for integrating with Python, Java, Scala, or SQL data processes. Data pipeline monitoring is a key aspect of machine learning deployment, and it shows how targeting even a niche aspect of ML deployment can land big investors.

Image credits: Gartner

Modern-day software companies are in the process of embracing, or have already embraced, machine learning as a key tool. Now they are at a crucial juncture where they can either leverage the MLOps services offered by these startups or build everything on their own. But there are not many reasons for an organization looking to transition to ML to take on the pain of MLOps itself. As companies look to leverage ML minus the deployment headache, niche players like OctoML will continue to pop up. Even the latest Gartner survey lists scalability and acceleration of machine learning deployment as two driving forces that will continue to trend this year. According to Gartner, XOps, a variant of MLOps that deals with efficiencies across data, machine learning, models, and platforms, will try to implement best DevOps practices and ensure reliability, reusability and repeatability.

Continued here:
Machine Learning Deployment Is The Biggest Tech Trend In 2021 - Analytics India Magazine

Experts on how Artificial Intelligence & Machine Learning will impact India’s national security – Economic Times

To initiate a dialogue on technological developments in the aerospace and defence landscape, and to create an innovative roadmap for India's defence ecosystem, Economic Times Digital is hosting a one-of-a-kind Defence Summit, bringing experts and commentators together.

Maroof Raza, Media Commentator on Global, Military & Security issues, will open the ET AeroDef Summit 2021 and shed light on how Artificial Intelligence and Machine Learning will impact national security. Eminent speakers and industry experts like Deepak Hota, Former CMD, BEML; Anuj Prasad, Partner (Head - Aerospace and Defence), Cyril Amarchand Mangaldas; and Major General Rohit Gupta, SM (Retd), Head Aerospace & Defence, Primus Partners, will share their insights on initiatives like Make In India and how India can become a self-reliant military superpower.

Commodore Anil Jai Singh, IN (Retd), Senior Vice President, Thyssenkrupp Marine Systems India; Ratan Shrivastava, Aerospace & Defence Expert and MD, BowerGroupAsia (India) Ltd; and Abhishek Verma, Partner and Lead (Aerospace and Defence), KPMG, will deliberate on India's military prowess and share insights on the future of defence and warfare.

Discussion points

Excerpt from:
Experts on how Artificial Intelligence & Machine Learning will impact India's national security - Economic Times

This Boston Based Startup is Applying Machine Learning-Anchored Computation to Enhance Drug Discovery and Development – MarkTechPost

Valo Health is a drug development company headquartered in Boston, Massachusetts, that uses human-centric data and machine learning to strengthen and speed up the process of drug discovery and development. The Opal Computational Platform by Valo Health utilizes available clinical data and identifies different molecules to improve therapy results, making the procedure of drug discovery an end-to-end process applicable to a whole plethora of diseases. In its recent Series B round, the organization added another $100 million in funding, taking the total raised in the round to $300 million. This funding will be primarily utilized for R&D to further improve the technology and expand the drug development program.

The organization was formed in 2019 by David Berry under Flagship Pioneering, a Cambridge-based mega-giant that already counts companies like Moderna and Indigo Ag in its ambit. Valo Health's main aim is to use machine learning to identify and bring to light previously unknown molecular associations. The scope here is broad, and Valo Health claims that its products could be highly disruptive to the industry and bring in significant value, equipping people with better and enhanced treatment alternatives.

The entire process of drug discovery and development is being re-imagined by Valo Health. By using artificial intelligence and machine learning, the process's cost and time have potentially been cut in half. The technology at this organization blends an enormous amount of human data with machine learning computation, allowing it to go from simply making molecules to testing them in mere months. This enables medicinal chemistry optimization on a large scale.

Valo Health was formerly known as Integral Health, and it was launched after acquiring two firms, Forma Therapeutics and Numerate, which not only provided it with sound capital from discoveries but also gave it an essential asset in the form of engineers who had already created a base of 30,000 models and around 70 trillion molecules. The company unites human intelligence and machine intelligence to accelerate the entire process. It claims to have built the first closed-loop, human-centric, active-learning, end-to-end integrated drug discovery and development engine.

The company builds on the experience and products available from other firms in the market and tries to look at them from a different perspective altogether, keeping an eye out for new opportunities to improve them or extend them further. The company, as of now, is focusing on advancing new medicines, refining its discovery and development engine, and coming up with new programs.

Source Link: https://www.valohealth.com/


Visit link:
This Boston Based Startup is Applying Machine Learning-Anchored Computation to Enhance Drug Discovery and Development - MarkTechPost

Machine Learning Meets the Maestros | Duke Today – Duke Today

DURHAM, N.C. -- Even if you can't name the tunes, you've probably heard them: from the iconic dun-dun-dun-dunnnn opening of Beethoven's Fifth Symphony to the melody of "Ode to Joy," the German composer's symphonies are some of the best known and most widely performed in classical music.

Just as enthusiasts can recognize stylistic differences between one orchestra's version of Beethoven's hits and another's, now machines can, too.

A Duke University team has developed a machine learning algorithm that listens to multiple performances of the same piece and can tell the difference between, say, the Berlin Philharmonic and the London Symphony Orchestra, based on subtle differences in how they interpret a score.

In a study published in a recent issue of the journal Annals of Applied Statistics, the team set the algorithm loose on all nine Beethoven symphonies as performed by 10 different orchestras over nearly eight decades, from a 1939 recording of the NBC Symphony Orchestra conducted by Arturo Toscanini to Simon Rattle's version with the Berlin Philharmonic in 2016.

Although each follows the same fixed score (the published reference left by Beethoven about how to play the notes), every orchestra has a slightly different way of turning a score into sounds.

"The bars, dots and squiggles on the page are mere clues," said Anna Yanchenko, a Ph.D. student and musician working with statistical science professor Peter Hoff at Duke. They tell the musicians which instruments should be playing and which notes, and whether to play slow or fast, soft or loud. But just how fast is fast? And how loud is loud?

It's up to the conductor, and the individual musicians, to bring the music to life: to determine exactly how much to speed up or slow down, how long to hold the notes, and how much the volume should rise or fall over the course of a performance. For instance, if the score for a given piece says to play faster, one orchestra may double the tempo while another barely picks up the pace at all, Yanchenko said.

Hoff and Yanchenko converted each audio file into plots, called spectrograms and chromagrams, that essentially show how the notes an orchestra plays and their loudness vary over time. After aligning the plots, they calculated the timbre, tempo and volume changes for each movement, using new statistical methods they developed to look for consistent differences and similarities among orchestras in their playing.
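The alignment step can be illustrated with a toy sketch. The paper's actual statistical methods are more elaborate, but dynamic time warping (a standard technique for aligning two feature curves that unfold at different speeds) captures the basic idea of lining up two performances of the same passage before comparing them. Everything below is a made-up illustration under that assumption: the synthetic "loudness" curves and the function name are hypothetical, not the authors' code or data.

```python
import numpy as np

def dtw_align(a, b):
    """Dynamic time warping: align two 1-D feature curves (e.g. per-frame
    loudness) played at different tempos. Returns the total alignment cost
    and the warping path as (i, j) index pairs."""
    n, m = len(a), len(b)
    # D[i, j] = minimal cost of aligning a[:i] with b[:j]
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    # Backtrack from the end to recover which frames were matched
    path, i, j = [], n, m
    while i > 0 and j > 0:
        path.append((i - 1, j - 1))
        step = np.argmin([D[i - 1, j - 1], D[i - 1, j], D[i, j - 1]])
        if step == 0:
            i, j = i - 1, j - 1
        elif step == 1:
            i -= 1
        else:
            j -= 1
    return D[n, m], path[::-1]

# Two synthetic "performances" of the same loudness contour, one slower:
fast = np.sin(np.linspace(0, np.pi, 50))
slow = np.sin(np.linspace(0, np.pi, 80))  # same shape, stretched in time
cost, path = dtw_align(fast, slow)
print(f"alignment cost: {cost:.3f}, path length: {len(path)}")
```

Once two recordings are warped onto a common time axis like this, frame-by-frame comparisons of tempo and volume become meaningful; the low alignment cost here reflects that the two curves are the same shape at different speeds.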

Some of the results were expected. The 2012 Beethoven cycle with the Vienna Philharmonic, for example, has a strikingly similar sound to the Berlin Philharmonic's 2016 version, since the two orchestras were led by the same conductor.

But other findings were more surprising, such as the similarities between the symphonies conducted by Toscanini on regular modern instruments and those played on period instruments more akin to Beethoven's time.

The study also found that older recordings were more quirky and distinctive than newer ones, which tended to conform to more similar styles.

Yanchenko isn't using her code and mathematical models as a substitute for experiencing the music. On the contrary: she's a longtime concert-goer at the Boston Symphony Orchestra in her home state of Massachusetts. But she says her work helps her compare performance styles on a much larger scale than would be possible by ear alone.

Most previous AI efforts to look at how performance styles change across time and place have been limited to considering just a few pieces or instruments at a time. But the Duke team's method makes it possible to contrast many pieces involving scores of musicians and dozens of different instruments.

Rather than have people manually go through the audio and annotate it, the AI learns to spot patterns on its own and understands the special qualities of each orchestra automatically.

When she's not pursuing her Ph.D., Yanchenko plays trombone in Duke's Wind Symphony. Last semester, they celebrated Beethoven's 250th birthday pandemic-style with virtual performances of his symphonies in which all the musicians played their parts by video from home.

"I listened to a lot of Beethoven during this project," Yanchenko said. Her favorite has to be his Symphony No. 7.

"I really like the second movement," Yanchenko said. "Some people take it very slow, and some people take it more quickly. It's interesting to see how different conductors can see the same piece of music."

The team's source code and data are available online at https://github.com/aky4wn/HMDS.

CITATION: "Hierarchical Multidimensional Scaling for the Comparison of Musical Performance Styles," Anna K. Yanchenko, Peter D. Hoff. Annals of Applied Statistics, December 2020. DOI: 10.1214/20-AOAS1391

Visit link:
Machine Learning Meets the Maestros | Duke Today - Duke Today