Archive for the ‘Machine Learning’ Category

WaterScope: Meet the Team Using Machine Learning To Ensure Water Is Safe To Drink – Yahoo Finance

Northampton, MA --News Direct-- Cisco Systems Inc.

Now that the Cisco Global Problem Solver Challenge 2021 winners have been officially announced, we are excited for you to learn more about each winning team and the story behind each innovation. The Cisco Global Problem Solver Challenge is an annual competition that awards cash prizes to early-stage tech entrepreneurs solving the world’s toughest problems. Now in its fifth year, the competition awarded its largest prize pool ever, $1 million USD, to 20 winning teams from around the world.

When Alexander Patto, Nalin Patel, Tianheng Zhao, and Richard Bowman joined a water purifier project at Cambridge University, they were tasked with answering the question: how do you tell whether the water is pure? They realized quickly that the process for testing the microbiology of water hadn’t changed in over 30 years. Globally, waterborne bacterial infections lead to over 500,000 diarrhea-related deaths each year, which is over 2,000 deaths every day (more than malaria and HIV combined). Current water testing equipment is bulky, expensive, and takes at least a day to give results. Alex and his team tried to work out how they might improve the process, and after about a month of working on the problem, they co-founded WaterScope.

What problem is your technology solution trying to solve?

Alex: Access to information that will give people better drinking water sources. It’s trying to solve inequality and, in particular, bacterial contamination. At the moment, if you were to go into Tanzania, and there was a public tap, there’s just no way of knowing whether the water is safe to drink. The community is quite removed from the testing facility that comes in. So, what we’re trying to do is make a test from which anyone can understand whether the water has bacterial contamination. Currently the systems are very complicated. The WaterScope system aims to be empowering for the community. It allows the community to put mechanisms in place, to clean the water locally, and to get sustainable change at a local level.


Can you explain how the solution works?

Alex: At the moment, there are two parts to the solution. First is the technology, which enables simple, portable bacterial testing. Then, once you have the data and the technology, and it’s being used, the second challenge is: how do you convert that into impact on the lives of people on the ground?

A person in the village would collect water from the source and they would filter it through our reusable cartridge. The cartridge has a disposable element to it which allows it to maintain the integrity of the test. The purpose of the cartridge is to take the lab into the field. It condenses the [testing] process into a small cartridge. Once they have the filtered sample, they put it into WaterScope’s imaging system, and they incubate it for up to 18 hours. Then, they take it out and capture an image, and at WaterScope we use machine learning to identify the bacteria. The importance of this method is that whoever is collecting the sample doesn’t have to be trained in microbiology.
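The interview does not describe the imaging model itself, but the core idea of spotting bacterial growth in a captured image can be sketched in a few lines. The snippet below is a minimal, hypothetical illustration, not WaterScope’s pipeline: the threshold, blob-size cutoff, and synthetic image are assumptions.

```python
# Minimal sketch of counting colony-like blobs in an image of an incubated
# sample. Not WaterScope's actual model; threshold and cutoff are invented.
import numpy as np
from scipy import ndimage

def count_colonies(image: np.ndarray, threshold: float = 0.6, min_pixels: int = 20) -> int:
    """Count bright, colony-sized blobs in a grayscale image scaled to [0, 1]."""
    mask = image > threshold                       # assume colonies are brighter than background
    labeled, n_blobs = ndimage.label(mask)         # connected-component labelling
    sizes = np.asarray(ndimage.sum(mask, labeled, range(1, n_blobs + 1)))
    return int(np.sum(sizes >= min_pixels))        # ignore specks below the size cutoff

# Synthetic stand-in for a captured image of the cartridge membrane.
rng = np.random.default_rng(0)
img = rng.uniform(0.0, 0.4, size=(256, 256))
img[40:48, 60:68] = 0.9                            # two fake "colonies"
img[150:160, 200:210] = 0.85
print(count_colonies(img))                         # -> 2
```

In practice a trained model (for example, a convolutional classifier) would replace the simple threshold so that untrained users get a reliable result, which is the point Alex makes above.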

After the results are captured, they are sent in real time to our database, which allows mapping and intervention by governing bodies. It allows for real-time intervention. It also gives locals the agency to purify and periodically clean their water supply.

What inspired you to develop this solution?

Alex: It just fell together. I was doing my PhD in genetics at Cambridge, and I found myself getting far removed from the impact I wanted to have. I was actively participating in outreach projects and bumped into three people who had similar inclinations. We found more scope based on some research that was being done in physics, and we thought, maybe we can have an impact. I didn’t expect it to become my full-time job. We got a bit of funding from the university and from the humanitarian innovation fund, and we managed to get a pilot done. Having looked at the scale of the problem, it just felt right to do what we’re doing full time.

How will winning a prize in the Cisco Global Problem Solver Challenge help you advance your business?

Alex: We’ve got prototypes that we’ve tested in the field. Now WaterScope is looking to convert these prototypes into post-production prototypes for manufacturing and to understand how we keep the costs down. We also want to keep those distribution channels open, allowing us to get it to people who need it. The other side is firming up the software, improving the machine learning, improving the way we use cloud technology, and fleshing out more of the community impact side of things. We’re aiming to commercialise by the end of 2022.

WaterScope is looking to use the funding to match-fund an implementation project where we’ll work with ten potential communities to understand how we can have an impact on the key community stakeholders.

How has the global pandemic impacted your work?

Alex: Quite significantly. We had a project funded by the United Kingdom government last year in which we were going to fly to Tanzania to train and collaborate with field partners on the system and run workshops with community members; then the pandemic hit. So, we had to think about how to still get that field data and community data from the system without leaving the UK. We ended up reaching out to more people and spent a lot of time building solid relationships over video conferencing. The benefit is that now we have great partners on the ground who are very familiar with our system, and that probably wouldn’t have happened before. We would normally have done an intensive week or two in the field and left again, so the pandemic changed our approach to trials. We now have that longevity with our partners. It’s also far more inclusive than it would have been, although it doesn’t beat meeting face to face and seeing someone use our technology. We’ve done a lot and we’re better for it, so we’re thankful for that.

Why did you decide to start your own social enterprise versus going to work for a company?

Alex: You get moments where my peers are out in London as consultants, earning a lot of money, and they enjoy that. I haven’t really thought about it too much. I find my days really fulfilling, I work with great people, and I’m so fortunate that we now have our own company. It’s liberating. I find it hard imagining what it would be like to work for another company now because I’m so used to working with the WaterScope team. Funding is a constant battle, though.

My family has been supportive of this. My dad’s a builder and my mum’s a renovator. They’ve always worked for themselves, since I was young. I grew up on a farm in Wales and I’m the first person in my family to go to university. I think my mum sent the Cisco challenge voting to all her friends. It’s also something they can all get behind. When I was in the nitty gritty of research, conversations around dinner might be on cells and proteins. It really wasn’t gripping. Now, it’s very easy to communicate the importance of what we’re doing, and people are naturally invested.

What advice do you have for other social entrepreneurs?

Alex: Get a good partner, a partner you can rely on. Get an advocate for your technology who can assess where it’s used. Fundraising is hard. You’ll need resilience, because you will apply for a lot of grants and funding streams and only get about 10 percent of them. You need to be able to handle rejection and failure. You’ve also got to build as strong a network as possible. Working with things like incubators certainly helps. We got into a fellowship here and there that put us in contact with like-minded people, which was really helpful because my previous contacts were all academics. Get an advisory board; they will help you get other people involved. Try not to say no to any opportunity that comes along. I give a couple of lectures at universities and talks at events; you always meet new people. As long as you’re open to those opportunities, it will come. Get involved with some universities; their networks are vast.

Stay tuned for more articles in our blog series, featuring interviews with every Cisco Global Problem Solver Challenge 2021 winning team!


Jordan Harrod: Brain researcher and AI-focused YouTuber – MIT News

Scientist, writer, policy advocate, YouTuber: before Jordan Harrod established her many successful career identities, her first role was as a student athlete. While she enjoyed competing in everything from figure skating to fencing, she also sustained injuries that left her with chronic pain. These experiences as a patient laid the groundwork for an interest in biomedical research and engineering. “I knew I wanted to make tools that would help people with health issues similar to mine,” she says.

Harrod went on to pursue her BS in biomedical engineering at Cornell University. Before graduating, she spent a summer at Stanford University doing machine-learning research for MRI reconstruction. “I didn’t know anything about machine learning before that, so I did a lot of learning on the fly,” she says. “I realized that I enjoyed playing with data in different ways. Machine learning was also becoming the new big thing at the time, so it felt like an exciting path to follow.”

Harrod looked for PhD programs that would combine her interests in helping patients, biomedical engineering, and machine learning. She came across the Harvard-MIT Program in Health Sciences and Technology (HST) and realized it would be the perfect fit. The interdisciplinary program requires students to perform clinical rotations and take introductory courses alongside medical students. “I’ve found that the clinical perspective was often underrated on the research side, so I wanted to make sure I’d have that. My goal was that my research would be translatable to the real world,” Harrod says.

Mapping the brain to understand consciousness

Today, Harrod collaborates with professors Emery Brown, an anesthesiologist, and Ed Boyden, a neuroscientist, to study how different parts of the brain relate to consciousness and arousal. They seek to understand how the brain operates under different states of consciousness and the way this affects the processing of signals associated with pain. By studying arousal in mice and applying statistical tools to analyze large datasets of activated brain regions, for example, Browns team hopes to improve the current understanding of anesthesia.

“This is another step toward creating better anesthesia regimens for individual patients,” says Harrod.

Since beginning her neuroscience research, Harrod has been amazed to learn how much about the brain still needs to be uncovered. In addition to understanding biological mechanisms, she believes there is still work to be done at a preliminary, cause-and-effect level. “We’re still learning how different arousal centers work together to modulate consciousness, or what happens if you turn one off,” says Harrod. “I don’t think I realized the magnitude or the difficulty of the challenge, let alone how hard it is to translate our research to brains in people.”

“I didn’t come into graduate school with a neuroscience background, so every day is an opportunity to learn new things about the brain. Even after three years, I’m still amazed by how much we have yet to discover.”

Sharing knowledge online and beyond

Outside of the lab, Harrod focuses her time on communicating research to the public and advocating for improved science policies. She is the chair of the External Affairs Board of the Graduate Student Council, an Early Career Policy Ambassador for the Society for Neuroscience, and the co-founder of the MIT Science Policy Review, which publishes peer-reviewed reports on different science policy issues.

“Most of our research is funded by taxpayers, yet most people don’t necessarily understand what’s going on in the research that they’re funding,” explains Harrod. “I wanted to create a way for people to better understand how different regulations affect them personally.”

In addition to her advocacy roles, Harrod also has a dedicated online presence. She writes articles for Massive Science and is well-known for her YouTube channel. Her videos, released weekly, investigate the different ways we interact with artificial intelligence daily. What began as a hobby three years ago has developed into an active community with 70,000 subscribers. “I hadn’t seen many other people talking about AI and machine learning in a casual way, so I decided to do it for fun,” she says. “It’s been a great way to keep me looped into broader questions in the field.”

Harrod’s most popular video focuses on how AI can be used to proctor online exams. With the shift to online learning during the pandemic, many students have used her video to understand how AI proctors can detect cheating. “As the audience grows, it’s been exciting to read the comments and see people get curious about AI applications they had never heard of before. I’ve also gotten to have interesting conversations with people who I wouldn’t have come across otherwise,” she says.

In the future, Harrod hopes to find a career that will allow her to balance her time between lab research, policy, and science communication. She plans on continuing to use her knowledge as a scientist to debunk hype and tell truthful stories to the public. “I’ve seen so many articles with headlines that could be misleading if someone only read the title. For example, a small study done in mice can be exaggerated to make mind-reading technology seem real, when the research still has a long way to go.”

“Since making my YouTube channel, I’ve learned it’s important to give people reasonable expectations about what’s real and what they’re going to encounter in their lives. They deserve to know the full picture so they can make informed decisions,” she says.


Here’s How Companies are Using AI, Machine Learning – Dice Insights

Companies widely expect that artificial intelligence (A.I.) and machine learning will fundamentally change their operations in coming years. To hear executives talk about it, apps will grow smarter, tech stacks will automatically adapt to vulnerabilities, and processes throughout organizations will become entirely automated.

Given the buzz around A.I., it’s easy for predictions to slip into the realm of the fantastical (In less than six months, we’ll have cars that drive themselves! And apps that predict what a user wants before they want it!). It’s worth taking a moment to see what companies are actually doing with A.I. at this juncture.

To that end, CompTIA recently asked 400 companies about their most common use-cases for A.I. Here’s what they said:

“The pandemic has accelerated digital transformation and changed how we work,” Khali Henderson, Senior Partner at BuzzTheory and vice chair of CompTIA’s Emerging Technology Community, wrote in a statement accompanying the data. “We learned, somewhat painfully, that traditional tech infrastructure doesn’t provide the agility, scalability and resilience we now require. Going forward, organizations will invest in technologies and services that power digital work, automation and human-machine collaboration. Emerging technologies like AI and IoT will be a big part of that investment, which IDC pegs at $656 billion globally this year.”

That predictive sales/lead scoring would top this list makes a lot of sense: if companies are going to invest in A.I., they’re likely to start with a process that can provide a rapid return on investment (and generate a lot of cash). According to CompTIA, A.I. helps with more effective prioritization of sales prospects via lead scoring and provides detailed, real-time analytics. It’s a similar story with CRM/service delivery optimization, where A.I. can help salespeople and technologists better identify potential customers and cross-selling opportunities.
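As a purely illustrative sketch of what lead scoring looks like in practice, the example below trains a simple classifier on historical leads and ranks new ones by predicted conversion probability. The feature names and synthetic data are assumptions, not CompTIA’s methodology or any particular vendor’s product.

```python
# Hedged sketch of predictive lead scoring: fit a model on historical leads
# (features + whether they converted), then rank new leads by score.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)

# Historical leads: [pages_viewed, emails_opened, company_size_log, days_since_contact]
X_hist = rng.normal(size=(500, 4))
# Synthetic "converted" labels; a real system would pull these from the CRM.
y_hist = (X_hist[:, 0] + 0.5 * X_hist[:, 1] - 0.3 * X_hist[:, 3]
          + rng.normal(scale=0.5, size=500)) > 0.5

model = LogisticRegression().fit(X_hist, y_hist)

# Score a batch of new leads and surface the most promising ones first.
X_new = rng.normal(size=(10, 4))
scores = model.predict_proba(X_new)[:, 1]
for rank, idx in enumerate(np.argsort(scores)[::-1], start=1):
    print(f"{rank:2d}. lead #{idx} score={scores[idx]:.2f}")
```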

Companies have spent years working on chatbots and digital assistants, hoping that automated systems can replace massive, human-powered call centers. So far, they’ve had mixed results; the early generations of chatbots were capable of conducting simple interactions with customers, but had a hard time with complex requests and the nuances of language. The emergence of more sophisticated systems like Google Duplex promises a future in which machines effectively chat with customers on a range of issues, provided customers can trust interacting with software in place of a human being.

As A.I. and machine learning gradually evolve, opportunities to work with the technology will increase. While many technologists tend to equate artificial intelligence with cutting-edge projects such as self-driving cars, this CompTIA data makes it clear that companies’ first use of A.I. and machine learning will probably involve sales and customer service. Be prepared.



Column: Simplifying live broadcast operations using AI and machine learning – NewscastStudio


Artificial intelligence and machine learning are seen as pillars of the next generation of technological advancement in broadcast media for a variety of reasons, including the ability to sift through mountains of data while identifying anomalies, spotting trends and alerting users to potential problems before they occur, without the need for human intervention. These models improve over time as they ingest more data, meaning that the more ML models are utilized across a variety of applications, the faster and more complex the insights derived from these tools become.

But to truly understand why machine learning provides enormous value for broadcasters, let’s break it down into use cases and components within broadcast media where AI and ML can have the greatest impact.

Imagine a live sporting event stops streaming, or frames start dropping for no apparent reason. Viewers are noticing quality problems and starting to complain. Technicians are baffled and customers may have just missed the play of the year. Revenue therefore takes a hit and executives want to know what is to blame.

These are situations every broadcaster wants to avoid, and in these tense moments there is no time to lose: viewers are flipping to other services and ad revenue is being lost by the second. What went wrong? Who or what is to blame, and how can we get this back up and running immediately while mitigating the risk in the future? Modern broadcasters need to know before problems happen, not be caught in a crisis trying to pick up the pieces after an incident.


The promise of our interconnected world means video workflows are interacting, intertwining, and integrating in new ways every day, simultaneously increasing information sharing, agility and connectivity while producing increasingly complex challenges and issues to diagnose. As more on-prem and cloud resources are connected with equipment from different vendors, sources, and partner organizations, distributing to new device types, there is an enormous, ever-expanding amount of log and telemetry data produced.

As a result, broadcast engineers have more information than they can effectively process. They routinely silence frequent alerts and alarms because, with so much data, it can be impossible to tell what is important and what is not. This inevitably leaves teams overwhelmed and lacking insights.

Advanced analytics and ML can help with these problems by making sense of overwhelming quantities of data, allowing human operators to sift through insignificant clutter and to focus on and understand where issues are likely to occur before failures are noticed. Advanced analytics provide media companies an unprecedented opportunity to leverage sophisticated event correlation, data aggregation, deep learning, and virtually limitless applications to improve broadcast workflows. The benefit is being able to do more with less, to innovate faster than the competition, and to prepare for the future, both by increasing the knowledge base and by opening the potential for cost reduction and time savings, homing in on the crucial details behind the data that matter most to users and the organization.

One of the biggest challenges facing broadcast operations engineers is recognizing when things are not working before the viewer’s experience is affected. In a perfect world, operators and engineers want to predict outages and identify potential issues ahead of time. Machine learning models can be orchestrated to recognize normal ranges based on hundreds to thousands of measurements, beyond the ability of a human operator, and alert the operator in real time when a stream anomaly occurs. While this process normally requires monitoring logs on dozens of machines and keeping track of the performance of network links between multiple locations and partners, using ML allows the system to identify patterns in large data sets and helps operators focus only on workflow anomalies, dramatically reducing workload.

Anomaly detection works by building a predictive model of what the next measurements related to a stream will be (for example, the round-trip time of packets on the network or the raw bitrate of the stream) and then determining how far the actual measurement falls from the expected value. As a tool for sorting normal streams from abnormal ones, this can be essential, especially when managing hundreds or thousands of concurrent channels. One benefit of identifying anomalous behavior would be enabling an operator to switch to a backup that uses a different network link before a failure occurs.
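To make that mechanism concrete, here is a minimal sketch in Python: an exponentially weighted moving average serves as the “predictive model” of the next measurement, and samples that deviate from it by more than a few standard deviations are flagged. The smoothing factor, threshold, warm-up period, and synthetic bitrate trace are illustrative assumptions, not any vendor’s implementation.

```python
# Minimal anomaly detector: forecast the next measurement with an EWMA and
# flag samples that deviate strongly from the forecast.
import numpy as np

def detect_anomalies(samples, alpha=0.1, z_threshold=5.0, warmup=20):
    """Yield indices of samples that deviate strongly from the running forecast."""
    mean, var = float(samples[0]), 0.0
    for i, x in enumerate(samples[1:], start=1):
        std = max(var, 1e-12) ** 0.5
        if i > warmup and abs(x - mean) > z_threshold * std:
            yield i                                   # far from the expected value
        # update the forecast of the next measurement
        mean = (1 - alpha) * mean + alpha * x
        var = (1 - alpha) * var + alpha * (x - mean) ** 2

# Synthetic bitrate trace (kbps) with a dip simulating a network problem.
rng = np.random.default_rng(1)
bitrate = rng.normal(5000, 50, size=300)
bitrate[200:205] = 2500                               # sudden drop
print(list(detect_anomalies(bitrate)))                # flags the drop at index 200
```

A production system would run one such model per measurement per stream and feed the flags into the alerting layer described below, rather than printing them.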

Anomaly detection can also be a vital component of reducing needless false alarms and wasted time. Functionality such as customizable alerting preferences and aggregated health scores, generated from threat-gauging data points, helps operators sift through and assimilate data trends so they can focus where they really need to. In addition, predictive and proactive alerting can be orders of magnitude less expensive and allows broadcasters to identify the root causes of instability and failure faster and more easily.
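As a rough illustration of what an aggregated health score with operator preferences might look like, the sketch below combines a few normalized per-metric “badness” values into a single 0-100 score and compares it against an alert threshold. The metric names, weights, and threshold are invented for the example.

```python
# Sketch of an aggregated health score with customizable alerting preferences.
from dataclasses import dataclass

@dataclass
class AlertPreferences:
    min_health: float = 75.0          # alert when the aggregate score drops below this
    muted_metrics: tuple = ()         # metrics the operator has chosen to ignore

# Per-metric "badness" in [0, 1], derived elsewhere from raw telemetry.
WEIGHTS = {"packet_loss": 0.4, "jitter": 0.2, "bitrate_drop": 0.3, "latency": 0.1}

def health_score(badness: dict, prefs: AlertPreferences) -> float:
    """Weighted 0-100 score; 100 means every metric looks nominal."""
    total, weight_sum = 0.0, 0.0
    for metric, weight in WEIGHTS.items():
        if metric in prefs.muted_metrics:
            continue
        total += weight * badness.get(metric, 0.0)
        weight_sum += weight
    return 100.0 * (1.0 - total / weight_sum)

prefs = AlertPreferences(min_health=80.0, muted_metrics=("jitter",))
score = health_score({"packet_loss": 0.05, "bitrate_drop": 0.3, "latency": 0.1}, prefs)
if score < prefs.min_health:
    print(f"ALERT: stream health {score:.1f} below {prefs.min_health}")
else:
    print(f"stream health {score:.1f} OK")
```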

A major challenge for any analytics system is data collection. When you have a video workflow composed of machines in disparate data centers running different operating systems and tools, it can be difficult to assimilate and standardize reliable, relevant data that can be used in any AI/ML system. While there are natural data aggregation points in most broadcast architectures (for example, if you are using a cloud operations and remote management platform or a common protocol stack), this is certainly not a given.

Although standards exist for how video data should be formatted and transmitted, few actually describe how machine data, network measurements, and other telemetry should be collected, transmitted and stored. It is therefore essential to select a technology partner that sends data to a common aggregation point, where it is parsed, normalized and put into a database, and that supports multiple protocols, in order to enable a robust AI/ML solution.
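What “parsed and normalized” means in practice can be shown with a small sketch: records arriving from different sources with different field names and units are mapped onto one common schema before storage. The vendor names, field names, and units below are hypothetical.

```python
# Sketch of normalizing heterogeneous telemetry into one common schema.
from datetime import datetime, timezone

# Per-source mapping: (our_field, source_field, conversion)
FIELD_MAPS = {
    "vendor_a": [("bitrate_kbps", "bitrateKbps", float),
                 ("rtt_ms", "roundTripMs", float)],
    "vendor_b": [("bitrate_kbps", "bitrate_bps", lambda v: float(v) / 1000.0),
                 ("rtt_ms", "rtt_seconds", lambda v: float(v) * 1000.0)],
}

def normalize(source: str, raw: dict) -> dict:
    """Convert a raw telemetry record into the common schema used downstream."""
    record = {"source": source,
              "ts": datetime.now(timezone.utc).isoformat()}
    for field, source_field, convert in FIELD_MAPS[source]:
        if source_field in raw:
            record[field] = convert(raw[source_field])
    return record

print(normalize("vendor_a", {"bitrateKbps": "4980", "roundTripMs": 42}))
print(normalize("vendor_b", {"bitrate_bps": 4_980_000, "rtt_seconds": 0.042}))
```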

Once you have a method for collecting real-time measurements from your video workflow, you can feed this data into an ML engine to detect patterns. From there you can train the system not only to understand normal operating behavior for anomaly detection, but also to recognize specific patterns leading up to video degradation events. With these patterns determined, you can also identify common metadata related to degradation events across systems, revealing, for example, that a degradation event is tied to a particular shared network segment.
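A hedged sketch of that pattern-learning step: summarize each short window of measurements into features, label each window by whether a degradation event followed, and train a classifier whose feature importances hint at likely causes. The feature names and synthetic data are assumptions for illustration, not a particular product’s pipeline.

```python
# Sketch of learning which measurement patterns precede degradation events.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(7)
n_windows = 2000

# Per-window features: [mean RTT, RTT trend, packet-loss rate, bitrate variance]
X = rng.normal(size=(n_windows, 4))
# Synthetic rule: rising RTT plus packet loss tends to precede degradation.
y = (0.8 * X[:, 1] + 0.6 * X[:, 2] + rng.normal(scale=0.5, size=n_windows)) > 1.0

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)
print("held-out accuracy:", round(clf.score(X_test, y_test), 3))

# Which signals does the model lean on? Useful when chasing a root cause.
for name, imp in zip(["rtt_mean", "rtt_trend", "loss_rate", "bitrate_var"],
                     clf.feature_importances_):
    print(f"{name:12s} importance={imp:.2f}")
```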

For example, if a particular ISP in a particular region continues to experience latency or blackout issues, the system learns to pick up on warning signs ahead of time and notifies the engineer before an outage, preventing issues proactively while simultaneously improving root-cause identification within the entire ecosystem. Developers can also see when errors are more often observed with particular encoder or network hardware settings. Unexpected changes in the structure of the video stream or the encoding quality might also be important signals of impending problems. By observing correlations, ML gives operators key insights into the causes of problems and how to solve them.

Predictive analytics, alerts and correlations are useful for automated failure prediction and alerting, but when all else fails, ML models can also be used to help operators concentrate on areas of concern following an outage, making retrospective root-cause analysis much easier and faster.

With workflows that consist of dozens of machines and network segments, it is inherently difficult to know where to look for problems. However, ML models, as we have seen, provide trend identification and help visualize issues using data aggregation. Even relatively straightforward visualizations of how a stream deviates from the norm are incredibly valuable, whether in the form of historical charts, customizable reports, or answers to questions as simple as how a particular stream compares to a similar recent stream.

Leveraging AI and ML to improve operational efficiency and quality provides a powerful advantage while preparing broadcasters for the future of live content delivery over IP. Selecting the right vendor for system monitoring and orchestration that integrates AI and ML capabilities can help your organization make sense of the vast amounts of data being sent across the media supply chain and be a powerful differentiator.

Just as experiments to test hypotheses are essential to the traditional learning process, the same goes for ML models. Building, training, deploying, and updating ML models are inherently complex tasks, meaning providers, in cooperation with their users, must continue to iterate, compare results, and adjust accordingly to understand the why behind the data, improving root-cause analysis and the customer experience.

Machine learning presents an unprecedented opportunity for sophisticated event correlation, data aggregation, deep learning, and virtually unlimited applications across broadcast media operations as it evolves year to year. As models become more informed and interconnected, problem-solving and resolution technology based on deep learning and AI will become increasingly essential. Broadcast organizations looking to position themselves for such a future would be wise to prepare for this eventuality by choosing the right vendor to integrate AI- and ML-enabled tools into their workflows.

Andrew leads Zixi’s Intelligent Data Platform initiative, bringing AI and ML to live broadcast operations. Before Zixi, he led the video platform product team at Brightcove, where he spent six years working with some of the largest broadcasters and media companies. Particular areas of interest include live streaming, analytics, ad integration, and video players. Andrew has an MBA from Babson College and a BA from Oberlin College.


Deep machine learning study finds that body shape is associated with income – PsyPost

A new study published in PLOS One has found a relationship between a person’s body shape and their family income. The findings provide more evidence for the “beauty premium,” a phenomenon in which people who are physically attractive tend to earn more than their less attractive counterparts.

Researchers have consistently found evidence for the beauty premium. But Suyong Song, an associate professor at The University of Iowa, and his colleagues observed that the measurements used to gauge physical appearance suffered from some important limitations.

“I have been curious about whether or not there is a physical attractiveness premium in labor market outcomes. One of the challenges is how researchers overcome reporting errors in body measures such as height or weight, as most previous studies often defined physical appearance from subjective opinions based on surveys,” Song explained.

“The other challenge is how to define body shapes from these body measures, as these measures are too simple to provide a complete description of body shapes. In this study, in collaboration with one of my coauthors (Stephen Baek at the University of Virginia), we use novel data which contains three-dimensional whole-body scans. Using a state-of-the-art machine learning technique, called a graphical autoencoder, we addressed these concerns.”

The researchers used deep machine learning methods to identify important physical features in whole-body scans of 2,383 individuals from North America.
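The paper’s graphical autoencoder operates on 3D mesh data, which is beyond a short example; the sketch below instead uses a plain autoencoder on flattened, synthetic scan coordinates to learn a low-dimensional shape embedding and then regresses family income on that embedding. All dimensions, data, and the simplified architecture are assumptions, not the authors’ model.

```python
# Simplified autoencoder-plus-regression sketch (not the study's graphical
# autoencoder): learn shape features, then relate them to income.
import numpy as np
import torch
from torch import nn

torch.manual_seed(0)
n_people, n_points = 512, 300                 # 300 surface points per (synthetic) scan
scans = torch.randn(n_people, n_points * 3)   # flattened (x, y, z) coordinates
income = np.random.default_rng(0).normal(70_000, 15_000, size=n_people)  # synthetic

latent_dim = 8
encoder = nn.Sequential(nn.Linear(n_points * 3, 128), nn.ReLU(), nn.Linear(128, latent_dim))
decoder = nn.Sequential(nn.Linear(latent_dim, 128), nn.ReLU(), nn.Linear(128, n_points * 3))
opt = torch.optim.Adam(list(encoder.parameters()) + list(decoder.parameters()), lr=1e-3)
loss_fn = nn.MSELoss()

for epoch in range(200):                      # train the autoencoder to reconstruct scans
    opt.zero_grad()
    recon = decoder(encoder(scans))
    loss = loss_fn(recon, scans)
    loss.backward()
    opt.step()

# Regress income on the learned shape features (ordinary least squares).
with torch.no_grad():
    Z = encoder(scans).numpy()
Z1 = np.column_stack([np.ones(n_people), Z])
coeffs, *_ = np.linalg.lstsq(Z1, income, rcond=None)
print("per-feature income associations:", np.round(coeffs[1:], 1))
```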

The data came from the Civilian American and European Surface Anthropometry Resource (CAESAR) project, a study conducted primarily by the U.S. Air Force from 1998 to 2000. The dataset included detailed demographic information, tape measure and caliper body measurements, and digital three-dimensional whole-body scans of participants.

“The findings showed that there is a statistically significant relationship between physical appearance and family income and that these associations differ across genders,” Song told PsyPost. “In particular, a male’s stature has a positive impact on family income, whereas a female’s obesity has a negative impact on family income.”

The researchers estimated that a one-centimeter increase in stature (height) is associated with an approximately $998 increase in family income for a male who earns the median family income of $70,000. For women, the researchers estimated that a one-unit decrease in obesity (measured in BMI) is associated with an approximately $934 increase in family income for a female who earns $70,000 in family income.

“The results show that the physical attractiveness premium continues to exist, and the relationship between body shapes and family income is heterogeneous across genders,” Song said.

“Our findings also highlight the importance of correctly measuring body shapes to provide adequate public policies for improving healthcare and mitigating discrimination and bias in the labor market. We suggest that (1) efforts to promote awareness of such discrimination must occur through workplace ethics/non-discrimination training; and (2) mechanisms to minimize the invasion of bias throughout hiring and promotion processes, such as blind interviews, should be encouraged.”

The new study avoids a major limitation of previous research that relied on self-reported attractiveness and body-mass index calculations, which do not distinguish between fat, muscle, or bone mass. But the new study has an important limitation of its own.

“One major caveat is that the data set only includes family income as opposed to individual income. This opens up additional channels through which physical appearance could affect family income,” Song explained. “In this study, we identified the combined association between body shapes and family income through the labor market and the marriage market. Thus, further investigations with a new survey on individual income would be an interesting direction for future research.”

The study, “Body shape matters: Evidence from machine learning on body shape-income relationship”, was published July 30, 2021.
