Archive for the ‘Machine Learning’ Category

Technology is shaping learning in higher education – McKinsey

The COVID-19 pandemic forced a shift to remote learning overnight for most higher-education students, starting in the spring of 2020. To complement video lectures and engage students in the virtual classroom, educators adopted technologies that enabled more interactivity and hybrid models of online and in-person activities. These tools changed learning, teaching, and assessment in ways that may persist after the pandemic. Investors have taken note. Edtech start-ups raised record amounts of venture capital in 2020 and 2021, and market valuations for bigger players soared.

A study conducted by McKinsey in 2021 found that to engage most effectively with students, higher-education institutions can focus on eight dimensions of the learning experience. In this article, we describe the findings of a study of the learning technologies that can enable aspects of several of those eight dimensions (see sidebar "Eight dimensions of the online learning experience").

In November 2021, McKinsey surveyed 600 faculty members and 800 students from public and private nonprofit colleges and universities in the United States, including minority-serving institutions, about the use and impact of eight different classroom learning technologies (Exhibit 1). (For more on the learning technologies analyzed in this research, see sidebar "Descriptions of the eight learning technologies.") To supplement the survey, we interviewed industry experts and higher-education professionals who make decisions about classroom technology use. We discovered which learning tools and approaches have seen the highest uptake, how students and educators view them, the barriers to higher adoption, how institutions have successfully adopted innovative technologies, and the notable impacts on learning (for details about our methodology, see sidebar "About the research").

Exhibit 1

Survey respondents reported a 19 percent average increase in overall use of these learning technologies since the start of the COVID-19 pandemic. Technologies that enable connectivity and community building, such as social media-inspired discussion platforms and virtual study groups, saw the biggest uptick in use, at 49 percent, followed by group work tools, which grew by 29 percent (Exhibit 2). These technologies likely fill the void left by the lack of in-person experiences more effectively than individual-focused learning tools such as augmented reality and virtual reality (AR/VR). Classroom interaction technologies such as real-time chatting, polling, and breakout room discussions were the most widely used tools before the pandemic and remain so; 67 percent of survey respondents said they currently use these tools in the classroom.

Exhibit 2

The shift to more interactive and diverse learning models will likely continue. One industry expert told us, "The pandemic pushed the need for a new learning experience online. It recentered institutions to think about how they'll teach moving forward and has brought synchronous and hybrid learning into focus." Consequently, many US colleges and universities are actively investing to scale up their online and hybrid program offerings.

Some technologies lag behind in adoption. Tools enabling student progress monitoring, AR/VR, machine learning-powered teaching assistants (TAs), AI adaptive course delivery, and classroom exercises are currently used by less than half of survey respondents. Anecdotal evidence suggests that technologies such as AR/VR require a substantial investment in equipment and may be difficult to use at scale in classes with high enrollment. Our survey also revealed utilization disparities based on size. Small public institutions use machine learning-powered TAs, AR/VR, and technologies for monitoring student progress at double or more the rates of medium and large public institutions, perhaps because smaller, specialized schools can make more targeted and cost-effective investments. We also found that medium and large public institutions made greater use of connectivity and community-building tools than small public institutions did (57 to 59 percent, compared with 45 percent). Although the uptake of AI-powered tools has been slower, higher-education experts we interviewed predict their use will increase; these tools allow faculty to tailor courses to each student's progress, reduce their workload, and improve student engagement at scale (see sidebar "Differences in adoption by type of institution observed in the research").

While many colleges and universities are interested in using more technologies to support student learning, the top three barriers respondents cited are lack of awareness, inadequate deployment capabilities, and cost (Exhibit 3).

Exhibit 3

More than 60 percent of students said that all the classroom learning technologies they've used since COVID-19 began had improved their learning and grades (Exhibit 4). However, two technologies earned higher marks than the rest for boosting academic performance: 80 percent of students cited classroom exercises, and 71 percent cited machine learning-powered teaching assistants.

Exhibit 4

Although AR/VR is not yet widely used, 37 percent of students said they are most excited about its potential in the classroom. While 88 percent of students believe AR/VR will make learning more entertaining, just 5 percent said they think it will improve their ability to learn or master content (Exhibit 5). Industry experts confirmed that while there is significant enthusiasm for AR/VR, its ability to improve learning outcomes is uncertain. Some data look promising. For example, in a recent pilot study, students who used a VR tool to complete coursework for an introductory biology class improved their subject mastery by an average of two letter grades.

Exhibit 5

Faculty gave learning tools even higher marks than students did for ease of use, engagement, access to course resources, and instructor connectivity. They also expressed greater excitement than students did about the future use of these technologies. For example, while more than 30 percent of students expressed excitement about AR/VR and classroom interaction tools, more than 60 percent of faculty were excited about those, as well as about machine learning-powered teaching assistants and AI adaptive technology.

Eighty-one percent or more of faculty said they feel the eight learning technology tools are a good investment of time and effort relative to the value they provide (Exhibit 6). Expert interviews suggest that employing learning technologies can be a strain on faculty members, but those we surveyed said this strain is worthwhile.

Exhibit 6

While faculty surveyed were enthusiastic about new technologies, experts we interviewed stressed some underlying challenges. For example, digital-literacy gaps have become more pronounced since the pandemic because it forced the near-universal adoption of some technology solutions, deepening a divide that had gone largely unnoticed when adoption was sporadic. More tech-savvy instructors are comfortable with interaction- and engagement-focused solutions, while staff who are less familiar with these tools prefer technologies focused on content display and delivery.

According to experts we interviewed, learning new tools and features can bring on general fatigue. An associate vice president of e-learning at one university told us that faculty there found designing and executing a pilot study of VR for a computer science class difficult: "It's a completely new way of instruction. . . . I imagine that the faculty using it now will not use it again in the spring." Technical support and training help. A chief academic officer of e-learning who oversaw the introduction of virtual simulations for nursing and radiography students said that faculty holdouts were permitted to opt out but not to delay the program: "We structured it in a 'we're doing this together' way. People who didn't want to do it left, but we got a lot of support from vendors and training, which made it easy to implement simulations."

Despite the growing pains of digitizing the classroom learning experience, faculty and students believe there is a lot more they can gain. Faculty members are optimistic about the benefits, and students expect learning to stay entertaining and efficient. While adoption levels saw double-digit growth during the pandemic, many classrooms have yet to experience all the technologies. For institutions considering the investment, or those that have already started, there are several takeaways to keep in mind.

In an earlier article, we looked at the broader changes in higher education that have been prompted by the pandemic. But perhaps none has advanced as quickly as the adoption of digital learning tools. Faculty and students see substantial benefits, and adoption rates are a long way from saturation, so we can expect uptake to continue. Institutions that want to know how they stand in learning tech adoption can measure their own rates, benchmark them against the averages in this article, and use those comparisons to decide where they want to catch up or get ahead.

See more here:
Technology is shaping learning in higher education - McKinsey

Cohere For AI Announces Non-Profit Lab Dedicated to Open Source Fundamental Research – GlobeNewswire

PALO ALTO, Calif. and TORONTO, June 14, 2022 (GLOBE NEWSWIRE) -- Today, Cohere For AI, a non-profit research lab and community, announced its official launch. Dedicated to contributing open source, fundamental machine learning research, the lab will focus on solving some of the most complex challenges in the field of machine learning.

Sara Hooker will serve as Head of Cohere For AI, bringing a wealth of knowledge across AI and machine learning, with a specialty in deep learning. Prior to Cohere For AI, Sara was a Research Scientist at Google Brain, where she focused on training models that go beyond top-line metrics to be interpretable, compact, fair, and robust. She also founded Delta Analytics, a non-profit that brings together researchers, data scientists, and software engineers to volunteer their skills for non-profits around the world.

"In order to realise the potential of machine learning, we need to make sure we're working across a diverse set of people, disciplines, backgrounds, and geographies," said Aidan Gomez, CEO and Cofounder at Cohere. "I'm so excited to have Sara at the helm of Cohere For AI and can't wait to build the community together."

Cohere For AI aims to create open collaboration with the broader machine learning community. The lab is committed to supporting fundamental research on machine learning topics, while also prioritizing good stewardship of open source scientific practices.

"This is the lab I wish had existed when I entered the field," said Hooker, Head of Cohere For AI. "Depending on where you're located, there's often a lack of opportunities in machine learning. Cohere For AI aims to reimagine how, where, and by whom research is done. I'm inspired by the opportunity to make an impact in ways that don't just advance progress on machine learning research but also broaden access to the field."

In addition to contributions to fundamental research, Cohere For AI will support a machine learning community where members can connect with each other, discover new colleagues, and spur open discussion and collaboration. The lab and community will work to create new points of entry to machine learning research and will, ultimately, reflect the diversity of its members' experiences and interests.

To get involved, browse our open research positions at jobs.lever.co/cohere, and stay in the loop on new programs and lab developments by signing up here.

About Cohere For AI

Cohere For AI is a non-profit research lab and community dedicated to contributing fundamental research in machine learning, working to solve some of the field's most challenging problems. It supports responsible research across machine learning, while also prioritizing good stewardship of open source scientific practices. As a borderless research lab, Cohere For AI is community-driven and motivated by the opportunity to establish an inclusive, distributed community made up of brilliant research and engineering talent from across the globe.

Media Contact: press@cohere.ai

Read more:
Cohere For AI Announces Non-Profit Lab Dedicated to Open Source Fundamental Research - GlobeNewswire

How Microsoft Teams uses AI and machine learning to improve calls and meetings – Microsoft

As schools and workplaces begin resuming in-person operations, we project a permanent increase in the volume of online meetings and calls. And while communication and collaboration solutions have played a critical role in enabling continuity during these unprecedented times, early stress tests have revealed opportunities to improve and enhance meeting and call quality.

Disruptive echo effects, poor room acoustics, and choppy video are some common issues that hinder the effectiveness of online calls and meetings. Through AI and machine learning, which have become fundamental to our strategy for continual improvement, we've identified and are now delivering innovative enhancements in Microsoft Teams that address these audio and video challenges in ways that are both user-friendly and scalable across environments.

Today, we're announcing the availability of new Teams features, including echo cancellation, adjusting audio in poor acoustic environments, and allowing users to speak and hear at the same time without interruptions. These build on recently released AI-powered features such as expanded background noise suppression.

During calls and meetings, when a participant has their microphone too close to their speaker, it's common for sound to loop between input and output devices, causing an unwanted echo effect. Now, Microsoft Teams uses AI to recognize the difference between sound from a speaker and the user's voice, eliminating the echo without suppressing speech or inhibiting the ability of multiple parties to speak at the same time.
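Microsoft has not published the details of its echo-cancellation model, but a classical baseline for the same problem is an adaptive filter that estimates the echo path from the loudspeaker (far-end) signal and subtracts the predicted echo from the microphone signal. The sketch below uses the normalized least-mean-squares (NLMS) algorithm; it is an illustrative baseline rather than Teams' implementation, and all signal names are placeholders.

```python
import numpy as np

def nlms_echo_canceller(far_end, mic, filter_len=256, mu=0.5, eps=1e-6):
    """Classical NLMS acoustic echo cancellation (illustrative baseline,
    not Microsoft's model). Estimates the echo path from the loudspeaker
    (far-end) signal and subtracts the predicted echo from the microphone
    signal, leaving the near-end speech."""
    w = np.zeros(filter_len)      # adaptive estimate of the echo path
    buf = np.zeros(filter_len)    # most recent far-end samples
    out = np.zeros(len(mic))      # echo-reduced output
    for n in range(len(mic)):
        buf = np.roll(buf, 1)
        buf[0] = far_end[n]
        echo_hat = w @ buf        # predicted echo at this sample
        e = mic[n] - echo_hat     # residual = near-end speech + noise
        out[n] = e
        # Normalized LMS update: step size scaled by far-end signal energy
        w += (mu / (buf @ buf + eps)) * e * buf
    return out

if __name__ == "__main__":
    # Tiny usage example with synthetic signals
    rng = np.random.default_rng(0)
    far = rng.standard_normal(16000)          # loudspeaker signal
    room = rng.standard_normal(64) * 0.1      # unknown echo path
    near = rng.standard_normal(16000) * 0.05  # stand-in for local speech
    mic_signal = np.convolve(far, room)[:16000] + near
    cleaned = nlms_echo_canceller(far, mic_signal)
```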

In certain environments, room acoustics can cause sound to bounce, or reverberate, making the user's voice sound shallow, as if they're speaking within a cavern. For the first time, Microsoft Teams uses a machine learning model to convert the captured audio signal so it sounds as if the user were speaking into a close-range microphone.

A natural element of conversation is the ability to interrupt for clarification or validation. This is accomplished through full-duplex (two-way) transmission of audio, allowing users to speak and hear others at the same time. When not using a headset, and especially when using devices where the speaker and microphone are very close to each other, it is difficult to remove echo while maintaining full-duplex audio. Microsoft Teams uses a model trained on 30,000 hours of speech samples to retain desired voices while suppressing unwanted audio signals, resulting in more fluid dialogue.

Each of us has first-hand experience of a meeting disrupted by the unexpected sounds of a barking dog, a car alarm, or a slammed door. Over two years ago, we announced the release of AI-based noise suppression in Microsoft Teams as an optional feature for Windows users. Since then, we've continued a cycle of iterative development, testing, and evaluation to further optimize our model. After recording significant improvements across key user metrics, we have enabled machine learning-based noise suppression by default for Teams customers using Windows (including Microsoft Teams Rooms), as well as for Mac and iOS users. A future release of this feature is planned for Teams Android and web clients.

These AI-driven audio enhancements are rolling out and are expected to be generally available in the coming months.

We have also recently released AI-based optimizations for video and screen-sharing quality in Teams. From adjustments for low light to optimizations based on the type of content being shared, we now leverage AI to help you look and present your best.

The impact of a presentation can often depend on an audience's ability to read on-screen text or watch a shared video. But different types of shared content require varied approaches to ensure the highest video quality, particularly under bandwidth constraints. Teams now uses machine learning to detect and adjust the characteristics of the content presented in real time, optimizing the legibility of documents or the smoothness of video playback.
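Microsoft does not describe how its content detector works. As a rough illustration of the idea, a detector could estimate how much a shared frame changes between captures and how text-like it looks, then trade resolution against frame rate under the available bandwidth. The heuristic below is a hypothetical sketch, not Teams' model; the thresholds and setting names are invented for illustration.

```python
import numpy as np

def choose_share_settings(prev_frame, frame, bandwidth_kbps):
    """Illustrative heuristic (not Teams' actual model): classify shared
    content as static/text-like vs. motion video and pick encoder settings
    that favor legibility or smoothness under a bandwidth budget.
    Frames are 2D grayscale uint8 arrays."""
    # Fraction of pixels that changed noticeably between consecutive frames
    motion = np.mean(np.abs(frame.astype(int) - prev_frame.astype(int)) > 12)
    # Text and diagrams tend to have many sharp horizontal transitions
    edges = np.mean(np.abs(np.diff(frame.astype(int), axis=1)) > 40)

    if motion < 0.02 and edges > 0.05:
        # Static, text-heavy content: keep full resolution, drop frame rate
        return {"scale": 1.0, "fps": 5, "bitrate_kbps": min(bandwidth_kbps, 800)}
    # Motion video: lower resolution, keep playback smooth
    return {"scale": 0.5, "fps": 30, "bitrate_kbps": min(bandwidth_kbps, 1500)}
```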

Unexpected issues with network bandwidth can lead to choppy video that quickly pulls focus from your presentation. AI-driven optimizations in Teams help adjust playback in challenging bandwidth conditions, so presenters can use video and screen sharing worry-free.

Though you can't always control the lighting around you during meetings, new AI-powered filters in Teams let you adjust brightness and add a soft focus with a simple toggle in your device settings, to better accommodate low-light environments.
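As a loose illustration of what a brightness and soft-focus adjustment does computationally (not Teams' actual filter), the sketch below gamma-brightens a grayscale frame and blends it with a blurred copy of itself; the parameter values are arbitrary.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def low_light_filter(frame, gamma=0.6, softness=0.3, sigma=2.0):
    """Rough illustration of a brightness + soft-focus adjustment
    (not Teams' implementation). `frame` is a grayscale float array
    with values in [0, 1]."""
    brightened = np.clip(frame, 0.0, 1.0) ** gamma      # gamma < 1 lifts shadows
    blurred = gaussian_filter(brightened, sigma=sigma)  # soft-focus component
    return (1.0 - softness) * brightened + softness * blurred
```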

The past two years have made clear how important communication and collaboration platforms like Microsoft Teams are to maintaining safe, connected, and productive operations. In addition to bringing new features and capabilities to Teams, we'll continue to explore new ways to use technology to make online calling and meeting experiences more natural, resilient, and efficient.

Visit the Tech Community Teams blog for more technical details about how we leverage AI and machine learning for audio quality improvements as well as video and screen sharing optimization in Microsoft Teams.

Read more from the original source:
How Microsoft Teams uses AI and machine learning to improve calls and meetings - Microsoft

Google to Make Chrome ‘More Helpful’ With New Machine Learning Additions – ExtremeTech

In a new blog post, Google says it's going to bring new features to Chrome via on-device machine learning (ML). The goal is to improve the browsing experience, and to do so it's adding several new ML models that will each focus on a different task. Google says it'll begin by addressing how web notifications are handled, and that it also has ideas for an adaptive toolbar. These new features will lead to a safer, more accessible, and more personalized browsing experience, according to Google. Also, since the models run (and stay) on your device instead of in the cloud, it's theoretically better for your privacy.

First, there's web notifications, which we take to mean this kind of stuff: things like "sign up for our newsletter," for example. Google says these are updates from sites you care about, but adds that too many of them are a nuisance. It says that in an upcoming version of Chrome, the on-device ML will examine how you interact with notifications. If it finds you are denying permission to certain types of notification requests, it will silence similar ones in the future. If a notification is silenced automatically, Chrome will still add a notification for it, which would seemingly allow you to override Google's prediction.
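Google hasn't published the model behind this behavior. As an illustrative stand-in, the sketch below keeps a simple per-category tally of how often the user has denied similar prompts and silences new ones once the denial rate crosses a threshold; the class name, categories, and thresholds are hypothetical, and Chrome's on-device model is certainly more sophisticated.

```python
from collections import defaultdict

class PromptSilencer:
    """Illustrative stand-in for Chrome's on-device model (not Google's
    actual implementation): silence a notification prompt when the user
    has denied most similar prompts in the past."""

    def __init__(self, threshold=0.8, min_observations=5):
        self.counts = defaultdict(lambda: {"shown": 0, "denied": 0})
        self.threshold = threshold
        self.min_observations = min_observations

    def record(self, category, denied):
        # Called after the user responds to a prompt of this category
        self.counts[category]["shown"] += 1
        self.counts[category]["denied"] += int(denied)

    def should_silence(self, category):
        c = self.counts[category]
        if c["shown"] < self.min_observations:
            return False  # not enough history to predict anything yet
        return c["denied"] / c["shown"] >= self.threshold

# Usage: after several denied "newsletter" prompts, new ones are silenced
silencer = PromptSilencer()
for _ in range(6):
    silencer.record("newsletter", denied=True)
print(silencer.should_silence("newsletter"))  # True
```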

Google also wants Chrome to change what the toolbar does based on your past behavior. For example, it says some people like to use voice search in the morning on their train commute (this person sounds annoying). Other people routinely share links. In both of these situations, Chrome would anticipate your needs and add either a microphone button or a share icon to the toolbar, making the process easier. You'll be able to customize it manually as well. The screenshots provided note they're from Chrome on Android; it's unclear if this functionality will appear on other platforms.

In addition to these new features, Google is also touting the work machine learning is already doing for Chrome users. For example, when you arrive at a web page, it's scanned and compared against a database of known phishing/malicious sites. If there's a match, Chrome gives you a warning, and you've probably seen this once or twice already: it's a full-page, all-red block, so you'd know it if you've seen it. Google says it rolled out new ML models in March of this year that increased the number of malicious sites it could detect by 2.5X.

Google doesn't specify when these new features will launch, nor does it say whether they will be mobile-only. All we know is that notification silencing will appear in the next release of Chrome; according to our browser, version 102 is the current one. As for the adaptive toolbar, Google says that will arrive in the near future. It's also unclear if running these models on-device will incur some type of performance hit.

Read more:
Google to Make Chrome 'More Helpful' With New Machine Learning Additions - ExtremeTech

Can machine learning prolong the life of the ICE? – Automotive World

The automotive industry is steadily moving away from internal combustion engines (ICEs) in the wake of more stringent regulations. Some industry watchers regard electric vehicles (EVs) as the next step in vehicle development, despite high costs and infrastructural limitations in developing markets outside Europe and Asia. However, many markets remain deeply dependent on the conventional ICE vehicle. A 2020 study by Boston Consulting Group found that nearly 28% of ICE vehicles could still be on the road as late as 2035, while EVs may account for only 48% of registered vehicles by that time.

If ICE vehicles are to remain compliant with ever more restrictive emissions regulations, they will require some enhancements and improvements. Enter Secondmind, a software and virtualisation company based in the UK. The company is employed by many mainstream manufacturers looking to reduce emissions from pre-existing ICEs without significant investment or development costs. Secondmind's Managing Director, Gary Brotman, argues that software-based approaches are efficiently streamlining the process of vehicle development and could prolong the life of the ICE for some years to come.

Follow this link:
Can machine learning prolong the life of the ICE? - Automotive World