Archive for the ‘Machine Learning’ Category

Cohere For AI Announces Non-Profit Lab Dedicated to Open Source Fundamental Research – GlobeNewswire

PALO ALTO, Calif. and TORONTO, June 14, 2022 (GLOBE NEWSWIRE) -- Today, Cohere For AI, a non-profit research lab and community, announced its official launch. Dedicated to contributing open source, fundamental machine learning research, the lab will focus on solving some of the most complex challenges in the field of machine learning.

Sara Hooker will serve as Head of Cohere For AI, bringing a wealth of knowledge across AI and machine learning, with a specialty in deep learning. Prior to Cohere For AI, Sara was a Research Scientist at Google Brain, where she focused on training models that go beyond top-line metrics to be interpretable, compact, fair, and robust. She also founded Delta Analytics, a non-profit that brings together researchers, data scientists, and software engineers to volunteer their skills for non-profits around the world.

“In order to realise the potential of machine learning, we need to make sure we’re working across a diverse set of people, disciplines, backgrounds, and geographies,” said Aidan Gomez, CEO and Cofounder at Cohere. “I’m so excited to have Sara at the helm of Cohere For AI and can’t wait to build the community together.”

Cohere For AI aims to create open collaboration with the broader machine learning community. The lab is committed to supporting fundamental research on machine learning topics, while also prioritizing good stewardship of open source scientific practices.

“This is the lab I wish had existed when I entered the field,” said Hooker, Head of Cohere For AI. “Depending on where you’re located, there’s often a lack of opportunities in machine learning. Cohere For AI aims to reimagine how, where, and by whom research is done. I’m inspired by the opportunity to make an impact in ways that don’t just advance progress on machine learning research, but also broaden access to the field.”

In addition to contributions to fundamental research, Cohere For AI will support a machine learning community where members can connect with each other, discover new colleagues, and spur open discussion and collaboration. The lab and community will work to create new points of entry to machine learning research and will, ultimately, reflect the diversity of its members’ experiences and interests.

To get involved, browse our open research positions at jobs.lever.co/cohere, and stay in the loop on new programs and lab developments by signing up here.

About Cohere For AI

Cohere For AI is a non-profit research lab and community dedicated to contributing fundamental research in machine learning, working to solve some of the field's most challenging problems. It supports responsible research across machine learning, while also prioritizing good stewardship of open source scientific practices. As a borderless research lab, Cohere For AI is community-driven and motivated by the opportunity to establish an inclusive, distributed community made up of brilliant research and engineering talent from across the globe.

Media Contact: press@cohere.ai

Read more:
Cohere For AI Announces Non-Profit Lab Dedicated to Open Source Fundamental Research - GlobeNewswire

How Microsoft Teams uses AI and machine learning to improve calls and meetings – Microsoft

As schools and workplaces begin resuming in-person operations, we project a permanent increase in the volume of online meetings and calls. And while communication and collaboration solutions have played a critical role in enabling continuity during these unprecedented times, early stress tests have revealed opportunities to improve and enhance meeting and call quality.

Disruptive echo effects, poor room acoustics, and choppy video are some common issues that hinder the effectiveness of online calls and meetings. Through AI and machine learning, which have become fundamental to our strategy for continual improvement, we’ve identified and are now delivering innovative enhancements in Microsoft Teams that address these audio and video challenges in ways that are both user-friendly and scalable across environments.

Today, we’re announcing the availability of new Teams features, including echo cancellation, adjusting audio in poor acoustic environments, and allowing users to speak and hear at the same time without interruptions. These build on recently released AI-powered features like expanded background noise suppression.

During calls and meetings, when a participant has their microphone too close to their speaker, it’s common for sound to loop between input and output devices, causing an unwanted echo effect. Now, Microsoft Teams uses AI to recognize the difference between sound from a speaker and the user’s voice, eliminating the echo without suppressing speech or inhibiting the ability of multiple parties to speak at the same time.
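The model Teams uses isn't public, but the classical technique that learned echo cancellers build on is an adaptive filter that estimates the speaker-to-microphone echo path and subtracts the predicted echo from the microphone signal. Below is a minimal sketch using a normalized LMS (NLMS) filter; the tap count, step size, and simulated echo path are illustrative values, not Teams parameters.

```python
import numpy as np

def nlms_echo_cancel(far_end, mic, n_taps=32, mu=0.5):
    """Estimate the echo path from loudspeaker (far_end) to microphone
    with a normalized LMS adaptive filter and subtract the predicted
    echo, leaving an estimate of the near-end speech."""
    w = np.zeros(n_taps)                                  # adaptive filter taps
    padded = np.concatenate([np.zeros(n_taps - 1), far_end])
    out = np.empty_like(mic)
    for n in range(len(mic)):
        x = padded[n:n + n_taps][::-1]                    # recent far-end samples
        echo_est = w @ x                                  # predicted echo
        e = mic[n] - echo_est                             # residual = near-end estimate
        w += mu * e * x / (x @ x + 1e-8)                  # NLMS tap update
        out[n] = e
    return out

# Simulate a mic that hears only a delayed, attenuated copy of the speaker.
rng = np.random.default_rng(0)
far = rng.standard_normal(4000)
mic = 0.6 * np.concatenate([np.zeros(5), far[:-5]])       # toy echo path
cleaned = nlms_echo_cancel(far, mic)
residual_power = float(np.mean(cleaned[2000:] ** 2))      # after convergence
echo_power = float(np.mean(mic ** 2))
print(residual_power < 0.01 * echo_power)                 # True
```

A learned model improves on a linear filter like this by also handling nonlinear speaker distortion and double-talk, which is where the "recognize the difference between sound from a speaker and the user's voice" part comes in.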

In specific environments, room acoustics can cause sound to bounce, or reverberate, causing the user’s voice to sound shallow, as if they’re speaking within a cavern. For the first time, Microsoft Teams uses a machine learning model to convert the captured audio signal to sound as if users are speaking into a close-range microphone.

A natural element of conversation is the ability to interrupt for clarification or validation. This is accomplished through full-duplex (two-way) transmission of audio, allowing users to speak and hear others at the same time. When not using a headset, and especially when using devices where the speaker and microphone are very close to each other, it is difficult to remove echo while maintaining full-duplex audio. Microsoft Teams uses a model trained with 30,000 hours of speech samples to retain desired voices while suppressing unwanted audio signals, resulting in more fluid dialogue.

Each of us has first-hand experience of a meeting disrupted by the unexpected sounds of a barking dog, a car alarm, or a slammed door. Over two years ago, we announced the release of AI-based noise suppression in Microsoft Teams as an optional feature for Windows users. Since then, we’ve continued a cycle of iterative development, testing, and evaluation to further optimize our model. After recording significant improvements across key user metrics, we have enabled machine learning-based noise suppression by default for Teams customers using Windows (including Microsoft Teams Rooms), as well as Mac and iOS users. A future release of this feature is planned for Teams Android and web clients.

These AI-driven audio enhancements are rolling out and are expected to be generally available in the coming months.

We have also recently released AI-based video and screen sharing quality optimization breakthroughs for Teams. From adjustments for low light to optimizations based on the type of content being shared, we now leverage AI to help you look and present your best.

The impact of presentations can often depend on an audience’s ability to read on-screen text or watch a shared video. But different types of shared content require varied approaches to ensure the highest video quality, particularly under bandwidth constraints. Teams now uses machine learning to detect and adjust the characteristics of the content presented in real time, optimizing the legibility of documents or smoothness of video playback.

Unexpected issues with network bandwidth can lead to choppy video that quickly shifts the focus of your presentation. AI-driven optimizations in Teams help adjust playback in challenging bandwidth conditions, so presenters can use video and screen sharing worry-free.

Though you can’t always control the surrounding lighting for your meetings, new AI-powered filters in Teams give you the option to adjust brightness and add a soft focus with a simple toggle in your device settings, to better accommodate low-light environments.

The past two years have made clear how important communication and collaboration platforms like Microsoft Teams are to maintaining safe, connected, and productive operations. In addition to bringing new features and capabilities to Teams, we’ll continue to explore new ways to use technology to make online calling and meeting experiences more natural, resilient, and efficient.

Visit the Tech Community Teams blog for more technical details about how we leverage AI and machine learning for audio quality improvements as well as video and screen sharing optimization in Microsoft Teams.

Read more from the original source:
How Microsoft Teams uses AI and machine learning to improve calls and meetings - Microsoft

Google to Make Chrome ‘More Helpful’ With New Machine Learning Additions – ExtremeTech

In a new blog post, Google says it’s going to bring new features to Chrome via on-device machine learning (ML). The goal is to improve the browsing experience, and to do so it’s adding several new ML models that will each focus on a different task. Google says it’ll begin by addressing how web notifications are handled, and that it also has ideas for an adaptive toolbar. These new features will lead to a safer, more accessible, and more personalized browsing experience, according to Google. Also, since the models run (and stay) on your device instead of in the cloud, the approach is theoretically better for your privacy.

First, there are web notifications, by which we mean things like “sign up for our newsletter” prompts. Google says these are updates from sites you care about, but adds that too many of them are a nuisance. It says that in an upcoming version of Chrome, the on-device ML will examine how you interact with notifications. If it finds you are denying permission to certain types of notification requests, it will silence similar ones in the future. If a notification is silenced automatically, Chrome will still surface an indicator for it, which would seemingly allow you to override Google’s prediction.
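Chrome's on-device model isn't described in detail, but the behavior the post describes, silencing the kinds of prompts a user habitually denies, can be illustrated with a toy frequency-based rule. The "category" feature, the 0.8 deny-rate threshold, and the minimum sample count below are all assumptions for illustration, not Chrome internals.

```python
from collections import defaultdict

class NotificationGatekeeper:
    """Toy model of the described behavior: once a user has denied enough
    prompts of a given kind, silence similar future prompts."""

    def __init__(self, threshold=0.8, min_samples=3):
        # category -> [deny count, total count]
        self.history = defaultdict(lambda: [0, 0])
        self.threshold = threshold
        self.min_samples = min_samples

    def record(self, category, denied):
        """Log the user's response to one permission prompt."""
        counts = self.history[category]
        counts[0] += int(denied)
        counts[1] += 1

    def should_silence(self, category):
        """Silence only after enough evidence of a high deny rate."""
        denies, total = self.history[category]
        return total >= self.min_samples and denies / total >= self.threshold

gk = NotificationGatekeeper()
for _ in range(4):
    gk.record("newsletter-signup", denied=True)
gk.record("calendar-reminder", denied=False)
print(gk.should_silence("newsletter-signup"))   # True
print(gk.should_silence("calendar-reminder"))   # False
```

The real model presumably learns from richer interaction features than a single deny rate, but the shape of the decision, predict "deny" and quiet the prompt while leaving an override visible, is the same.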

Google also wants Chrome to change what the toolbar does based on your past behavior. For example, it says some people like to use voice search in the morning on their train commute (this person sounds annoying). Other people routinely share links. In both of these situations, Chrome would anticipate your needs and add either a microphone button or share icon to the toolbar, making the process easier. You’ll be able to customize it manually as well. The screenshots provided note they’re from Chrome on Android. It’s unclear if this functionality will appear on other platforms.

In addition to these new features, Google is also touting the work machine learning is already doing for Chrome users. For example, when you arrive at a web page, it’s scanned and compared to a database of known phishing/malicious sites. If there’s a match, it gives you a warning, and you’ve probably seen this once or twice already. It’s a full-page, all-red page block, so you’d know it if you’ve seen it. Google says it rolled out new ML models in March of this year that increased the number of malicious sites it could detect by 2.5x.

Google doesn’t specify when these new features will launch, nor does it say if they will be mobile-only. All we know is that notification silencing will appear in the next release of Chrome. According to our browser, version 102 is the current one. For the adaptive toolbar, it says that will arrive in the near future. It’s also unclear if running these models on-device will incur some type of performance hit.


Read more:
Google to Make Chrome 'More Helpful' With New Machine Learning Additions - ExtremeTech

Can machine learning prolong the life of the ICE? – Automotive World

The automotive industry is steadily moving away from internal combustion engines (ICEs) in the wake of more stringent regulations. Some industry watchers regard electric vehicles (EVs) as the next step in vehicle development, despite high costs and infrastructural limitations in developing markets outside Europe and Asia. However, many markets remain deeply dependent on the conventional ICE vehicle. A 2020 study by Boston Consulting Group found that nearly 28% of ICE vehicles could still be on the road as late as 2035, while EVs may account for only 48% of vehicles registered on the road by this time.

If ICE vehicles are to remain compliant with ever more restrictive emissions regulations, they will require some enhancements and improvements. Enter Secondmind, a software and virtualisation company based in the UK. The company is employed by many mainstream manufacturers looking to reduce emissions from pre-existing ICEs without significant investment or development costs. Secondmind’s Managing Director, Gary Brotman, argues that software-based approaches are efficiently streamlining the process of vehicle development and could prolong the life of the ICE for some years to come.

Follow this link:
Can machine learning prolong the life of the ICE? - Automotive World

Artificial Intelligence and Machine Learning Are Headed for A Major Bottleneck – Here’s How We Solve It – Datanami


Artificial intelligence (AI) and machine learning (ML) are already changing the world, but the innovations we’re seeing so far are just a taste of what’s around the corner. We are on the precipice of a revolution that will affect every industry, from business and education to healthcare and entertainment. These new technologies will help solve some of the most challenging problems of our age and bring changes comparable in scale to the Renaissance, the Industrial Revolution, and the electronic age.

While the printing press, fossil fuels, and silicon drove these past epochal shifts, a new generation of algorithms that automate tasks previously thought impossible will drive the next revolution. These new technologies will allow self-driving cars to identify traffic patterns, automate energy balancing in smart power grids, enable real-time language translation, and pioneer complex analytical tools that detect cancer before any human could ever perceive it.

Well, that’s the promise of the AI and ML revolution, anyway. And to be clear, these things are all within our theoretical reach. But what the tech optimists tend to leave out is that our path to the bright, shiny AI future has some major potholes in it. One problem is looming especially large. We call it the dirty secret of AI and ML: right now, AI and ML don’t scale well.

Scale, the ability to expand a single machine’s capability to broader, more widespread applications, is the holy grail of every digital business. And right now, AI and ML don’t have it. While algorithms may hold the keys to our future, when it comes to creating them, we’re currently stuck in a painstaking, brute-force methodology.


Creating AI and ML algorithms isn’t the hard part anymore. You tell them what to learn, feed them the right data, and they learn how to parse novel data without your help. The labor-intensive piece comes when you want the algorithms to operate in the real world. Left to their own devices, AI will suck up as much time, compute, and data/bandwidth as you give it. To be truly effective, these algorithms need to run lean, especially now that businesses and consumers are showing an increasing appetite for low-latency operations at the edge. Getting your AI to run in an environment where speed, compute, and bandwidth are all constrained is the real magic trick here.

Thus, optimizing AI and ML algorithms has become the signature skill of today’s AI researchers and engineers. It’s expensive in terms of time, resources, money, and talent, but essential if you want performant AI. However, today, the primary way we’re addressing the problem is via brute force: throwing bodies at the problem. Unfortunately, the demand for these algorithms is exploding while the pool of qualified AI engineers remains relatively static. Even if it were economically feasible to hire them, there are not enough trained AI engineers to work on all the projects that will take the world to the resplendent AI/sci-fi future we’ve been promised.

But all is not lost. There is a way for us to get across the threshold to achieve the exponential AI advances we require. The answer to scaling AI and ML algorithms is actually a simple idea: train ML algorithms to tune ML algorithms, an approach the industry calls Automated Machine Learning, or AutoML. Tuning AI and ML algorithms may be more of an art than a science, but then again, so are driving, photo retouching, and instant language translation, all of which are addressable via AI and ML.
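As a stripped-down illustration of the idea, here is the simplest possible AutoML loop: random search over a hyperparameter, where each trial runs a cheap stand-in for an expensive model-training job and the loop keeps the best result. The toy objective and search range are illustrative only, not any particular AutoML product.

```python
import random

def train_loss(lr, steps=100):
    """Fit y = 3x by gradient descent on a single weight; return the final
    squared error. A cheap stand-in for an expensive model-training run."""
    data = [(x, 3.0 * x) for x in range(1, 6)]
    w = 0.0
    for _ in range(steps):
        grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
        w -= lr * grad
    return sum((w * x - y) ** 2 for x, y in data)

def random_search(trials=50, seed=0):
    """The simplest AutoML loop: sample a hyperparameter, run the training
    job, keep the best result. Real AutoML systems replace blind sampling
    with learned or model-based search strategies."""
    rng = random.Random(seed)
    best_lr, best_loss = None, float("inf")
    for _ in range(trials):
        lr = 10 ** rng.uniform(-4, -1)        # log-uniform learning rate
        loss = train_loss(lr)
        if loss < best_loss:
            best_lr, best_loss = lr, loss
    return best_lr, best_loss

best_lr, best_loss = random_search()
print(f"best lr {best_lr:.4f} -> loss {best_loss:.2e}")
```

The "art" the article mentions lives in the search strategy: production systems swap the random sampler for Bayesian optimization or a learned controller, but the outer loop looks just like this.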


AutoML will allow us to scale AI optimization so it can achieve full adoption throughout computing, including at the edge, where latency and compute are constrained. By using hardware awareness in AutoML, we can push performance even further. We believe this approach will also lead to a world where the barrier to entry for AI programmers is lower, allowing more people to enter the field and making better use of high-level programmers. It’s our hope that the resulting shift will alleviate the current talent bottleneck the industry is facing.

Over the next few years, we expect to automate various AI optimization techniques, such as pruning, distillation, and neural architecture search, to achieve 15-30x performance improvements. Google’s EfficientNet research has also yielded very promising results in the field of auto-scaling convolutional neural networks. Another example is DataRobot’s AutoML tools, which can be applied to automating the tedious and time-consuming manual work required for data preparation and model selection.
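Of the techniques listed, magnitude pruning is the simplest to sketch: zero out the smallest-magnitude weights, leaving a sparser, cheaper model for the optimization loop to evaluate. The tiny weight matrix and 50% sparsity level below are illustrative; choosing per-layer sparsity is exactly the kind of knob an AutoML search would turn automatically.

```python
import numpy as np

def magnitude_prune(weights, sparsity=0.5):
    """Zero out the smallest-magnitude fraction of weights, producing a
    sparser model. AutoML systems automate the choice of sparsity level
    rather than relying on hand-tuning."""
    flat = np.abs(weights).ravel()
    k = int(len(flat) * sparsity)             # number of weights to drop
    if k == 0:
        return weights.copy()
    threshold = np.partition(flat, k - 1)[k - 1]   # k-th smallest magnitude
    return np.where(np.abs(weights) <= threshold, 0.0, weights)

w = np.array([[0.90, -0.05],
              [0.02, -1.30]])
pruned = magnitude_prune(w, sparsity=0.5)
print(pruned)   # the two small weights are now exactly zero
```

Distillation and architecture search are harder to show in a few lines, but they plug into the same outer loop: apply a compression technique, measure accuracy and speed, and let the search decide how aggressively to apply it.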

There is one last hurdle to cross, though. AI automates tasks we always assumed we needed humans to do, offloading these difficult feats to a computer programmed by a clever AI engineer. The dream of AutoML is to offload the work another level, using AI algorithms to tune and create new AI algorithms. But there’s no such thing as a free lunch. We will now need even more highly skilled programmers to develop the AutoML routines at the meta-level. The good news is, we think we’ve got enough of them to do this.

But it’s not all about growing the field from the top. This innovation not only expands the pool of potential programmers, allowing lower-level programmers to create highly effective AI; it also provides a de facto training path to move them into higher- and higher-skilled positions. This in turn will create a robust talent pipeline that can supply the industry for years to come and ensure we have a good supply of hardcore AI developers for when we hit the next bottleneck. Because yes, there may come a day when we need Auto-AutoML, but for now, we want to take things one paradigm-shifting innovation at a time. It may sound glib, but we believe it wholeheartedly: the answer to the problems of AI is more AI.

About the authors: Nilesh Jain is a Principal Engineer at Intel Labs, where he leads the Emerging Visual/AI Systems Research Lab. He focuses on developing innovative technologies for edge/cloud systems for emerging workloads. His current research interests include visual computing and hardware-aware AutoML systems. He received an M.Sc. degree from the Oregon Graduate Institute/OHSU. He is also a Senior IEEE Member and has published over 15 papers and over 20 patents.

Ravi Iyer is an Intel Fellow in Intel Labs where he leads the Emerging Systems Lab. His research interests include developing innovative technologies, architectures and edge/cloud systems for emerging workloads. He has published over 150 papers and has over 40 patents granted. He received his Ph.D. in Computer Science from Texas A&M. He is also an IEEE Fellow.

Related Items:

Why Data Scientists and ML Engineers Shouldn’t Worry About the Rise of AutoML

AutoML Tools Emerge as Data Science Difference Makers

What is Feature Engineering and Why Does It Need To Be Automated?

More here:
Artificial Intelligence and Machine Learning Are Headed for A Major Bottleneck Here's How We Solve It - Datanami