Archive for the ‘Machine Learning’ Category

Machine learning links material composition and performance in catalysts – University of Michigan News

From left to right, diagrams show an oxygen atom bonding with a metal, a metal oxide, and a perovskite. The new model could help chemical engineers design these three types of catalysts to improve the sustainability of fuel and fertilizer production as well as the manufacturing of household chemicals. Credit: Jacques Esterhuizen, Linic Lab, University of Michigan.

In a finding that could help pave the way toward cleaner fuels and a more sustainable chemical industry, researchers at the University of Michigan have used machine learning to predict how the compositions of metal alloys and metal oxides affect their electronic structures.

The electronic structure is key to understanding how the material will perform as a mediator, or catalyst, of chemical reactions.

"We're learning to identify the fingerprints of materials and connect them with the materials' performance," said Bryan Goldsmith, the Dow Corning Assistant Professor of Chemical Engineering.

A better ability to predict which metal and metal oxide compositions are best for guiding which reactions could improve large-scale chemical processes such as hydrogen production, production of other fuels and fertilizers, and manufacturing of household chemicals such as dish soap.

"The objective of our research is to develop predictive models that will connect the geometry of a catalyst to its performance. Such models are central for the design of new catalysts for critical chemical transformations," said Suljo Linic, the Martin Lewis Perl Collegiate Professor of Chemical Engineering.

One of the main approaches to predicting how a material will behave as a potential mediator of a chemical reaction is to analyze its electronic structure, specifically the density of states. This describes how many quantum states are available to the electrons in the reacting molecules and the energies of those states.

Usually, the electronic density of states is described with summary statistics: an average energy, or a skew that reveals whether more electronic states are above or below the average, and so on.

"That's OK, but those are just simple statistics. You might miss something. With principal component analysis, you just take in everything and find what's important. You're not just throwing away information," Goldsmith said.

Principal component analysis is a classic machine learning method taught in introductory data science courses. The researchers used the electronic density of states as input for the model, since the density of states is a good predictor of how a catalyst's surface will adsorb, or bond with, the atoms and molecules that serve as reactants. The model links the density of states with the composition of the material.
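
To make the idea concrete, here is a minimal sketch (not the authors' code) of how density-of-states curves could be compressed with principal component analysis in Python. The array sizes and data are placeholders for illustration; in the actual study the curves would come from electronic-structure calculations.

```python
# Minimal PCA sketch: each row is one material's density of states (DOS)
# sampled on an energy grid. The data here is random placeholder values.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
n_materials, n_energy_bins = 200, 500
dos = rng.random((n_materials, n_energy_bins))  # stand-in for computed DOS curves

pca = PCA(n_components=2)
scores = pca.fit_transform(dos)           # two numbers ("fingerprints") per material
print(pca.explained_variance_ratio_)      # variance captured by each component

# A full DOS curve can be approximately rebuilt from its two component scores.
dos_reconstructed = pca.inverse_transform(scores)
```

The two scores per material play the role of the fingerprints described above, and the reconstruction step shows how a full curve can be approximately rebuilt from them, which is what makes the approach interpretable rather than a black box.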

Unlike conventional machine learning, which is essentially a black box that takes in data and returns predictions, the team built an algorithm whose workings they could understand.

"We can see systematically what is changing in the density of states and correlate that with geometric properties of the material," said Jacques Esterhuizen, a doctoral student in chemical engineering and first author on the paper in Chem Catalysis.

This information helps chemical engineers design metal alloys to get the density of states they want for mediating a chemical reaction. The model accurately reflected correlations already observed between a material's composition and its density of states, and it also turned up new potential trends to be explored.

The model simplifies the density of states into two pieces, or principal components. One piece essentially covers how the atoms of the metal fit together. In a layered metal alloy, this includes whether the subsurface metal is pulling the surface atoms apart or squeezing them together, and the number of electrons that the subsurface metal contributes to bonding. The other piece is just the number of electrons that the surface metal atoms can contribute to bonding. From these two principal components, they can reconstruct the density of states in the material.

This concept also works for the reactivity of metal oxides. In this case, the concern is the ability of oxygen to interact with atoms and molecules, which is related to how stable the surface oxygen is. Stable surface oxygens are less likely to react, whereas unstable surface oxygens are more reactive. The model accurately captured the oxygen stability in metal oxides and perovskites, a class of metal oxides.

The study was supported by the Department of Energy and the University of Michigan.

Frontier Development Lab Transforms Space and Earth Science for NASA with Google Cloud Artificial Intelligence and Machine Learning Technology – SETI…

August 26, 2021, Mountain View, Calif. -- Frontier Development Lab (FDL), in partnership with the SETI Institute, NASA and private sector partners including Google Cloud, is transforming space and Earth science through the application of industry-leading artificial intelligence (AI) and machine learning (ML) tools.

FDL tackles knowledge gaps in space science by pairing ML experts with researchers in physics, astronomy, astrobiology, planetary science, space medicine and Earth science. These researchers have utilized Google Cloud compute resources and expertise since 2018, specifically AI/ML technology, to address research challenges in areas like astronaut health, lunar exploration, exoplanets, heliophysics, climate change and disaster response.

With access to compute resources provided by Google Cloud, FDL has been able to accelerate the typical ML pipeline by more than 700 times over the last five years, facilitating new discoveries and improved understanding of our planet, solar system and the universe. Throughout this period, Google Cloud's Office of the CTO (OCTO) has provided ongoing strategic guidance to FDL researchers on how to optimize AI/ML and how to use compute resources most efficiently.

"Unfettered on-demand access to massive super-compute resources has transformed the FDL program, enabling researchers to address highly complex challenges across a wide range of science domains, advancing new knowledge, new discoveries and improved understandings in previously unimaginable timeframes, said Bill Diamond, president and CEO, SETI Institute.This program, and the extraordinary results it achieves, would not be possible without the resources generously provided by Google Cloud.

"When I first met Bill Diamond and James Parr in 2017, they asked me a simple question: what could happen if we marry the best of Silicon Valley and the minds of NASA?" said Scott Penberthy, director of Applied AI at Google Cloud. "That was an irresistible challenge. We at Google Cloud simply shared some of our AI tricks and tools, one engineer to another, and they ran with it. I'm delighted to see what we've been able to accomplish together, and I am inspired by what we can achieve in the future. The possibilities are endless."

FDL leverages AI technologies to push the frontiers of science research and develop new tools to help solve some of humanity's biggest challenges. FDL teams are composed of doctoral and post-doctoral researchers who use AI/ML to tackle ground-breaking challenges. Cloud-based supercomputing resources mean that FDL teams achieve results in eight-week research sprints that would not be possible in even year-long programs with conventional compute capabilities.

"High-performance computing is normally constrained by the large amount of time, limited availability and cost of running AI experiments," said James Parr, director of FDL. "You're always in a queue. Having a common platform to integrate unstructured data and train neural networks in the cloud allows our FDL researchers from different backgrounds to work together on hugely complex problems with enormous data requirements, no matter where they are located."

Better integrating science and ML is the founding rationale and future north star of FDL's partnership with Google Cloud. ML is particularly powerful for space science when paired with a physical understanding of a problem space. The gap between what we know so far and what we collect as data is an exciting frontier for discovery, and something AI/ML and cloud technology are poised to transform.

In addition to Google Cloud, FDL is supported by partners including Lockheed Martin, Intel, Luxembourg Space Agency, MIT Portugal, Lawrence Berkeley National Lab, USGS, Microsoft, NVIDIA, Mayo Clinic, Planet and IBM.

About the SETI Institute
Founded in 1984, the SETI Institute is a non-profit, multidisciplinary research and education organization whose mission is to lead humanity's quest to understand the origins and prevalence of life and intelligence in the universe and to share that knowledge with the world. Our research encompasses the physical and biological sciences and leverages expertise in data analytics, machine learning and advanced signal detection technologies. The SETI Institute is a distinguished research partner for industry, academia and government agencies, including NASA and NSF.

Contact Information: Rebecca McDonald, Director of Communications, SETI Institute, rmcdonald@SETI.org


The dos and don'ts of machine learning research: read it, nerds – The Next Web


Machine learning is becoming an important tool in many industries and fields of science. But ML research and product development present several challenges that, if not addressed, can steer your project in the wrong direction.

In a paper recently published on the arXiv preprint server, Michael Lones, Associate Professor in the School of Mathematical and Computer Sciences, Heriot-Watt University, Edinburgh, provides a list of dos and don'ts for machine learning research.

The paper, which Lones describes as "lessons that were learnt whilst doing ML research in academia, and whilst supervising students doing ML research," covers the challenges of different stages of the machine learning research lifecycle. Although aimed at academic researchers, the paper's guidelines are also useful for developers who are creating machine learning models for real-world applications.

Here are my takeaways from the paper, though I recommend that anyone involved in machine learning research and development read it in full.

Machine learning models live and thrive on data. Accordingly, across the paper, Lones reiterates the importance of paying extra attention to data across all stages of the machine learning lifecycle. You must be careful of how you gather and prepare your data and how you use it to train and test your machine learning models.

No amount of computation power and advanced technology can help you if your data doesn't come from a reliable source and hasn't been gathered in a reliable manner. You should also do your own due diligence to check the provenance and quality of your data. "Do not assume that, because a data set has been used by a number of papers, it is of good quality," Lones writes.

Your dataset might have various problems that can lead to your model learning the wrong thing.

For example, if you're working on a classification problem and your dataset contains too many examples of one class and too few of another, the trained machine learning model might end up learning to predict every input as belonging to the overrepresented class. In this case, your dataset suffers from class imbalance.
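
As a quick illustration (not from Lones' paper), a few lines of Python are enough to surface this kind of imbalance before any training starts; the labels below are made up.

```python
# Count how often each class appears before training anything.
from collections import Counter

labels = ["benign"] * 900 + ["malign"] * 100   # placeholder labels
counts = Counter(labels)
total = sum(counts.values())
for cls, n in counts.items():
    print(f"{cls}: {n} ({n / total:.1%})")
# A heavily skewed split like 90/10 suggests that resampling, class weights,
# or imbalance-aware metrics are worth considering.
```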

While class imbalance can be spotted quickly with data exploration practices, finding other problems needs extra care and experience. For example, if all the pictures in your dataset were taken in daylight, then your machine learning model will perform poorly on dark photos. A more subtle example is the equipment used to capture the data. For instance, if you've taken all your training photos with the same camera, your model might end up learning to detect the unique visual footprint of your camera and will perform poorly on images taken with other equipment. Machine learning datasets can have all kinds of such biases.

The quantity of data is also an important issue. Make sure you have data in sufficient abundance. "If the signal is strong, then you can get away with less data; if it's weak, then you need more data," Lones writes.

In some fields, the lack of data can be compensated for with techniques such as cross-validation and data augmentation. But in general, you should know that the more complex your machine learning model, the more training data you'll need. For example, a few hundred training examples might be enough to train a simple regression model with a few parameters. But if you want to develop a deep neural network with millions of parameters, you'll need much more training data.
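
As a hedged sketch of one of those techniques, here is what k-fold cross-validation might look like with scikit-learn on a small synthetic regression problem; every sample gets used for both fitting and validation across the folds.

```python
# k-fold cross-validation on a small synthetic regression dataset.
from sklearn.datasets import make_regression
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_score

X, y = make_regression(n_samples=300, n_features=10, noise=0.5, random_state=0)
model = Ridge(alpha=1.0)

# Five folds: each sample serves as validation data exactly once,
# which makes better use of limited data than a single split.
scores = cross_val_score(model, X, y, cv=5, scoring="r2")
print(scores.mean(), scores.std())
```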

Another important point Lones makes in the paper is the need for a strong separation between training and test data. Machine learning engineers usually set aside part of their data to test the trained model. But sometimes the test data leaks into the training process, which can lead to machine learning models that don't generalize to data gathered from the real world.

"Don't allow test data to leak into the training process," he warns. "The best thing you can do to prevent these issues is to partition off a subset of your data right at the start of your project, and only use this independent test set once to measure the generality of a single model at the end of the project."
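
In practice, that advice could look something like the following sketch, using scikit-learn's train_test_split on synthetic data; the 80/20 ratio is an arbitrary illustrative choice.

```python
# Carve off an independent test set once, at the start of the project.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1000, n_features=20, random_state=0)

X_dev, X_test, y_dev, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=0
)
# All exploration, tuning and model selection happens on X_dev / y_dev;
# X_test / y_test are reserved for one final measurement of generality.
```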

In more complicated scenarios, you'll need a validation set: a second held-out set used to compare and tune models before the final evaluation. For example, if you're doing cross-validation or ensemble learning, the original test set might not provide a precise evaluation of your models. In this case, a validation set can be useful.

"If you have enough data, it's better to keep some aside and only use it once to provide an unbiased estimate of the final selected model instance," Lones writes.

Today, deep learning is all the rage. But not every problem needs deep learning. In fact, not every problem even needs machine learning. Sometimes, simple pattern-matching and rules will perform on par with the most complex machine learning models at a fraction of the data and computation costs.

But when it comes to problems that are specific to machine learning models, you should always have a roster of candidate algorithms to evaluate. "Generally speaking, there's no such thing as a single best ML model," Lones writes. "In fact, there's a proof of this, in the form of the No Free Lunch theorem, which shows that no ML approach is any better than any other when considered over every possible problem."

The first thing you should check is whether your model matches your problem type. For example, based on whether your intended output is categorical or continuous, youll need to choose the right machine learning algorithm along with the right structure. Data types (e.g., tabular data, images, unstructured text, etc.) can also be a defining factor in the class of model you use.
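
A toy sketch of that first check, with illustrative (not prescribed) model choices, might look like this:

```python
# The target type drives the model family: categorical -> classifier,
# continuous -> regressor. The specific models are illustrative defaults.
from sklearn.ensemble import GradientBoostingClassifier, GradientBoostingRegressor

def pick_model(target_is_categorical: bool):
    if target_is_categorical:
        return GradientBoostingClassifier()   # e.g., predicting a class label
    return GradientBoostingRegressor()        # e.g., predicting a numeric value

model = pick_model(target_is_categorical=True)
```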

One important point Lones makes in his paper is the need to avoid excessive complexity. For example, if your problem can be solved with a simple decision tree or regression model, there's no point in using deep learning.

Lones also warns against trying to reinvent the wheel. With machine learning being one of the hottest areas of research, there's always a solid chance that someone else has solved a problem similar to yours. In such cases, the wise thing to do is examine their work. This can save you a lot of time, because other researchers have already faced and solved challenges that you will likely meet down the road.

"To ignore previous studies is to potentially miss out on valuable information," Lones writes.

Examining papers and work by other researchers might also provide you with machine learning models that you can use and repurpose for your own problem. In fact, machine learning researchers often use each other's models to save time and computational resources and to start with a baseline trusted by the ML community.

"It's important to avoid 'not invented here syndrome', i.e., only using models that have been invented at your own institution, since this may cause you to omit the best model for a particular problem," Lones warns.

Having a solid idea of what your machine learning model will be used for can greatly impact its development. If you're doing machine learning purely for academic purposes and to push the boundaries of science, then there might be no limits to the type of data or machine learning algorithms you can use. But not all academic work remains confined to research labs.

"[For] many academic studies, the eventual goal is to produce an ML model that can be deployed in a real world situation. If this is the case, then it's worth thinking early on about how it is going to be deployed," Lones writes.

For example, if your model will be used in an application that runs on user devices and not on large server clusters, then you cant use large neural networks that require large amounts of memory and storage space. You must design machine learning models that can work in resource-constrained environments.

Another problem you might face is the need for explainability. In some domains, such as finance and healthcare, application developers are legally required to provide explanations of algorithmic decisions in case a user demands it. In such cases, using a black-box model might be impossible. For example, even though a deep neural network might give you a performance advantage, its lack of interpretability might make it useless. Instead, a more transparent model such as a decision tree might be a better choice even if it results in a performance hit. Alternatively, if deep learning is an absolute requirement for your application, then youll need to investigate techniques that can provide reliable interpretations of activations in the neural network.
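
As one hedged example of a more transparent alternative, a shallow decision tree can be trained and its learned rules printed for auditing; the data and depth limit below are purely illustrative.

```python
# Train a small decision tree whose decision rules are human-readable.
from sklearn.datasets import make_classification
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = make_classification(n_samples=500, n_features=5, random_state=0)
tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)

# Unlike the internals of a deep network, these rules can be shown to a user
# or a regulator as an explanation of how decisions are made.
print(export_text(tree, feature_names=[f"feature_{i}" for i in range(5)]))
```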

As a machine learning engineer, you might not have precise knowledge of the requirements of your model. Therefore, it is important to talk to domain experts because they can help to steer you in the right direction and determine whether youre solving a relevant problem or not.

"Failing to consider the opinion of domain experts can lead to projects which don't solve useful problems, or which solve useful problems in inappropriate ways," Lones writes.

For example, if you create a neural network that flags fraudulent banking transactions with very high accuracy but provides no explanation of its decision, then financial institutions wont be able to use it.

There are various ways to measure the performance of machine learning models, but not all of them are relevant to the problem you're solving.

For example, many ML engineers use the accuracy test to rate their models. The accuracy test measures the percentage of correct predictions the model makes. This number can be misleading in some cases.

For example, consider a dataset of X-ray scans used to train a machine learning model for cancer detection. Your data is imbalanced, with 90 percent of the training examples flagged as benign and a very small number classified as malignant. If your trained model scores 90 percent on the accuracy test, it might have just learned to label everything as benign. Used in a real-world application, this model could lead to missed cases with disastrous outcomes. In such a case, the ML team must use tests that are insensitive to class imbalance, or use a confusion matrix to check other metrics. More recent techniques can provide a detailed measure of a model's performance in various areas.
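
A small sketch makes the point: on a synthetic 90/10 dataset, a model that predicts "benign" for everything still scores 90 percent accuracy, while balanced accuracy and the confusion matrix expose the failure.

```python
# Why accuracy misleads on an imbalanced dataset (synthetic example).
import numpy as np
from sklearn.metrics import accuracy_score, balanced_accuracy_score, confusion_matrix

y_true = np.array([0] * 900 + [1] * 100)   # 0 = benign, 1 = malignant
y_pred = np.zeros_like(y_true)             # a model that labels everything benign

print(accuracy_score(y_true, y_pred))           # 0.90, looks fine
print(balanced_accuracy_score(y_true, y_pred))  # 0.50, no better than chance
print(confusion_matrix(y_true, y_pred))         # all 100 malignant cases are missed
```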

Depending on the application, ML developers might also want to measure several metrics. To return to the cancer detection example, in such a model it might be important to reduce false negatives as much as possible, even if it comes at the cost of lower accuracy or a slight increase in false positives. It is better to send a few healthy people to the hospital for further diagnosis than to miss critical cancer patients.
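
One common way to make that trade is to lower the decision threshold on the model's output scores, accepting more false positives in exchange for fewer false negatives; the numbers below are purely illustrative.

```python
# Lowering the decision threshold raises recall (fewer missed positives)
# at the cost of precision (more false alarms). Values are illustrative.
import numpy as np
from sklearn.metrics import precision_score, recall_score

y_true = np.array([0, 0, 1, 1, 0, 1, 0, 1])
scores = np.array([0.2, 0.4, 0.45, 0.8, 0.3, 0.55, 0.35, 0.6])  # model probabilities

for threshold in (0.5, 0.4):
    y_pred = (scores >= threshold).astype(int)
    print(threshold,
          "recall:", recall_score(y_true, y_pred),
          "precision:", precision_score(y_true, y_pred))
```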

In his paper, Lones warns that when comparing several machine learning models for a problem, bigger numbers do not necessarily mean better models. For example, performance differences might be due to your models being trained and tested on different partitions of your dataset, or on entirely different datasets.

"To really be sure of a fair comparison between two approaches, you should freshly implement all the models you're comparing, optimise each one to the same degree, carry out multiple evaluations and then use statistical tests to determine whether the differences in performance are significant," Lones writes.
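
One possible way to follow that advice (though not necessarily the exact test Lones prescribes) is to score both models on the same repeated cross-validation folds and run a paired t-test on the per-fold scores.

```python
# Compare two models on identical folds, then test whether the difference
# in per-fold scores is statistically significant.
from scipy.stats import ttest_rel
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import RepeatedStratifiedKFold, cross_val_score

X, y = make_classification(n_samples=600, n_features=20, random_state=0)
cv = RepeatedStratifiedKFold(n_splits=5, n_repeats=5, random_state=0)

scores_a = cross_val_score(LogisticRegression(max_iter=1000), X, y, cv=cv)
scores_b = cross_val_score(RandomForestClassifier(random_state=0), X, y, cv=cv)

t_stat, p_value = ttest_rel(scores_a, scores_b)
print(p_value)  # a small p-value suggests the gap is unlikely to be chance
```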

Lones also warns not to overestimate the capabilities of your models in your reports. "A common mistake is to make general statements that are not supported by the data used to train and evaluate models," he writes.

Therefore, any report of your model's performance must also include the kind of data it was trained and tested on. Validating your model on multiple datasets can provide a more realistic picture of its capabilities, but you should still be wary of the kinds of data errors discussed earlier.

Transparency can also contribute greatly to other ML research. If you fully describe the architecture of your models as well as the training and validation process, other researchers that read your findings can use them in future work or even help point out potential flaws in your methodology.

Finally, aim for reproducibility. If you publish your source code and model implementations, you give the machine learning community valuable tools that can be used in future work.

Interestingly, almost everything Lones wrote in his paper is also applicable to applied machine learning, the branch of ML that is concerned with integrating models into real products. However, I would like to add a few points that go beyond academic research and are important in real-world applications.

When it comes to data, machine learning engineers must weigh an extra set of considerations before integrating models into products, including data privacy and security, user consent, and regulatory constraints. Many a company has fallen into trouble for mining user data without consent.

Another important matter that ML engineers often forget in applied settings is model decay. Unlike academic research, machine learning models used in real-world applications must be retrained and updated regularly. As everyday data changes, machine learning models decay and their performance deteriorates. For example, as life habits changed in the wake of the COVID lockdowns, ML systems trained on older data started to fail and needed retraining. Likewise, language models need to be constantly updated as new trends appear and our speaking and writing habits change. These changes require the ML product team to devise a strategy for the continued collection of fresh data and the periodic retraining of their models.
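
A very crude sketch of what such monitoring could look like is below: it compares the recent distribution of one input feature against the training distribution with a two-sample test and flags when retraining might be needed. The data, feature, and threshold are all illustrative.

```python
# Crude drift check on a single feature; real monitoring would cover many
# features plus the label and prediction distributions.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)
training_feature = rng.normal(loc=0.0, scale=1.0, size=5000)  # data the model was trained on
recent_feature = rng.normal(loc=0.6, scale=1.0, size=1000)    # data seen in production this week

stat, p_value = ks_2samp(training_feature, recent_feature)
if p_value < 0.01:
    print("Distribution shift detected: schedule retraining on fresh data.")
```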

Finally, integration challenges will be an important part of every applied machine learning project. How will your machine learning system interact with other applications currently running in your organization? Is your data infrastructure ready to be plugged into the machine learning pipeline? Does your cloud or server infrastructure support the deployment and scaling of your model? These kinds of questions can make or break the deployment of an ML product.

For example, AI research lab OpenAI recently launched a test version of its Codex API model for public appraisal, but the launch failed because its servers couldn't scale to user demand.

Hopefully, this brief post will help you better assess your machine learning project and avoid mistakes. Read Lones' full paper, "How to avoid machine learning pitfalls: a guide for academic researchers," for more details about common mistakes in the ML research and development process.

This article was originally published by Ben Dickson on TechTalks, a publication that examines trends in technology, how they affect the way we live and do business, and the problems they solve. But we also discuss the evil side of technology, the darker implications of new tech, and what we need to look out for. You can read the original article here.


WaterScope: Meet the Team Using Machine Learning To Ensure Water Is Safe To Drink – Yahoo Finance

Northampton, MA --News Direct-- Cisco Systems Inc.

Now that the Cisco Global Problem Solver Challenge 2021 winners have been officially announced, we are excited for you to learn more about each winning team and the story behind each innovation. The Cisco Global Problem Solver Challenge is an annual competition that awards cash prizes to early-stage tech entrepreneurs solving the world's toughest problems. Now in its fifth year, the competition awarded its largest prize pool ever, $1 million USD, to 20 winning teams from around the world.

When Alexander Patto, Nalin Patel, Tianheng Zhao, and Richard Bowman joined a water purifier project at Cambridge University, they were tasked with answering the question, "How do you tell whether the water is pure?" They quickly realized that the process for testing the microbiology of water hadn't changed in over 30 years. Globally, waterborne bacterial infections lead to over 500,000 diarrhea-related deaths each year, well over 1,000 deaths every day (more than malaria and HIV combined). Current water testing equipment is bulky, expensive and takes at least a day to give results. Alex and his team tried to work out how they might improve the process, and after about a month of tackling the problem, they co-founded WaterScope.

What problem is your technology solution trying to solve?

Alex: Access to information that will give people better drinking water sources. It's trying to solve both inequality and, in particular, bacterial contamination. At the moment, if you were to go into Tanzania and there was a public tap, there's just no way of knowing whether the water is safe to drink. The community is quite removed from the testing facility that comes in. So, what we're trying to do is make a test through which anyone can understand whether the water has bacterial contamination. Currently the systems are very complicated. The WaterScope system aims to be empowering for the community. It allows the community to put mechanisms in place to clean the water locally and get sustainable change at a local level.

Can you explain how the solution works?

Alex: At the moment, there are two parts to the solution. First is the technology, which enables simple, portable bacterial testing. Then, once you have the data and the technology and it's being used, the second challenge is how you convert that to have impact back on the lives of people on the ground.

A person in the village would collect water from the source and filter it through our reusable cartridge. The cartridge has a disposable element to it which allows it to maintain the integrity of the test. The purpose of the cartridge is to take the lab into the field. It condenses the [testing] process into a small cartridge. Once they have the filtered sample, they put it into WaterScope's imaging system and incubate it for up to 18 hours. Then they take it out and capture an image, and at WaterScope we use machine learning to identify the bacteria. The importance of this method is that whoever is collecting the sample doesn't have to be trained in microbiology.

After the results are captured, they are sent in real time to our database, which then allows mapping and, in turn, intervention from those responsible for governance. It allows for real-time intervention. It also gives locals the agency to purify and periodically clean their water supply.

What inspired you to develop this solution?

Alex: It just fell together. I was doing my PhD in genetics at Cambridge, and I found myself getting far removed from the impact I wanted to have. I was actively participating in outreach projects and bumped into three people who had similar inclinations. We found more scope based on some research that was being done in physics, and we thought, maybe we can have an impact. I didn't expect it to become my full-time job. We got a bit of funding from the university and from the humanitarian innovation fund, and we managed to get a pilot done. Having looked at the scale of the problem, it just felt right to do what we're doing full time.

How will winning a prize in the Cisco Global Problem Solver Challenge help you advance your business?

Alex: We've got prototypes that we've tested in the field. Now WaterScope is looking to convert these prototypes to post-production prototypes for manufacturing and to understand how we keep the costs down. We also want to keep those distribution channels open, allowing us to get it to the people who need it. The other side is firming up the software, improving the machine learning, improving the way we use cloud technology, and fleshing out more of the community impact side of things. We're aiming to commercialise by the end of 2022.

WaterScope is looking to use the funding to match-fund an implementation project where we'll work with ten potential communities to understand how we can have an impact on the key community stakeholders.

How has the global pandemic impacted your work?

Alex: Quite significantly. We had a project funded by the United Kingdom government last year in which we were going to fly to Tanzania to train and collaborate with field partners on the system and run workshops with community members, and then the pandemic hit. So, we had to think about how to still get that field data and community data from the system without leaving the UK. We ended up reaching out to more people and spent a lot of time building solid relationships over video conferencing. The benefit is that now we have great partners on the ground who are very familiar with our system, and that probably wouldn't have happened before. We would normally have done an intensive week or two in the field and left again, so the pandemic changed our approach to trials. We now have that longevity with our partners. It's also far more inclusive than it would have been, though it doesn't beat meeting face to face and seeing someone use our technology. We've done a lot and we're better for it, so we're thankful for that.

Why did you decide to start your own social enterprise versus going to work for a company?

Alex: You get moments where my peers are out in London as consultants, earning a lot of money, and they enjoy that. I haven't really thought about it too much. I find my days really fulfilling, I work with great people, and I'm so fortunate that we now have our own company. It's liberating. I find it hard to imagine what it would be like to work for another company now because I'm so used to working with the WaterScope team. Funding is a constant battle, though.

My family has been supportive of this. My dad's a builder and my mum's a renovator. They've always worked for themselves, since I was young. I grew up on a farm in Wales and I'm the first person in my family to go to university. I think my mum sent the Cisco challenge voting to all her friends. It's also something they can all get behind. When I was in the nitty gritty of research, conversations around dinner might be on cells and proteins. It really wasn't gripping. Now, it's very easy to communicate the importance of what we're doing, and people are naturally invested.

What advice do you have for other social entrepreneurs?

Alex: Get a good partner. A partner you can rely on. Get an advocate for your technology in assessing where it's used. Fundraising is hard. You'll need resilience, because you will apply for a lot of grants and funding streams and you'll only get about 10 percent of them. You need to be able to handle rejection and failure. You've also got to build your network to be as strong as possible. Working in things like incubators certainly helps. We got into a fellowship here and there, and that put us in contact with like-minded people; it was really helpful because my previous contacts were all academics. Get an advisory board; they will help you get other people involved. Try not to say no to any opportunity that comes along. I give a couple of lectures at university and talks at events; you always meet new people. As long as you're open to those opportunities, it will come. Get involved with some universities, their networks are vast.

Stay tuned for more articles in our blog series, featuring interviews with every Cisco Global Problem Solver Challenge 2021 winning team!


Jordan Harrod: Brain researcher and AI-focused YouTuber – MIT News

Scientist, writer, policy advocate, YouTuber: before Jordan Harrod established her many successful career identities, her first role was as a student athlete. While she enjoyed competing in everything from figure skating to fencing, she also sustained injuries that left her with chronic pain. These experiences as a patient laid the groundwork for an interest in biomedical research and engineering. "I knew I wanted to make tools that would help people with health issues similar to myself," she says.

Harrod went on to pursue her BS in biomedical engineering at Cornell University. Before graduating, she spent a summer at Stanford University doing machine-learning research for MRI reconstruction. "I didn't know anything about machine learning before that, so I did a lot of learning on the fly," she says. "I realized that I enjoyed playing with data in different ways. Machine learning was also becoming the new big thing at the time, so it felt like an exciting path to follow."

Harrod looked for PhD programs that would combine her interests in helping patients, biomedical engineering, and machine learning. She came across the Harvard-MIT Program in Health Sciences and Technology (HST) and realized it would be the perfect fit. The interdisciplinary program requires students to perform clinical rotations and take introductory courses alongside medical students. "I've found that the clinical perspective was often underrated on the research side, so I wanted to make sure I'd have that. My goal was that my research would be translatable to the real world," Harrod says.

Mapping the brain to understand consciousness

Today, Harrod collaborates with professors Emery Brown, an anesthesiologist, and Ed Boyden, a neuroscientist, to study how different parts of the brain relate to consciousness and arousal. They seek to understand how the brain operates under different states of consciousness and the way this affects the processing of signals associated with pain. By studying arousal in mice and applying statistical tools to analyze large datasets of activated brain regions, for example, Brown's team hopes to improve the current understanding of anesthesia.

"This is another step toward creating better anesthesia regimens for individual patients," says Harrod.

Since beginning her neuroscience research, Harrod has been amazed to learn how much about the brain still needs to be uncovered. In addition to understanding biological mechanisms, she believes there is still work to be done at a preliminary cause-and-effect level. "We're still learning how different arousal centers work together to modulate consciousness, or what happens if you turn one off," says Harrod. "I don't think I realized the magnitude or the difficulty of the challenge, let alone how hard it is to translate our research to brains in people."

"I didn't come into graduate school with a neuroscience background, so every day is an opportunity to learn new things about the brain. Even after three years, I'm still amazed by how much we have yet to discover."

Sharing knowledge online and beyond

Outside of the lab, Harrod focuses her time on communicating research to the public and advocating for improved science policies. She is the chair of the External Affairs Board of the Graduate Student Council, an Early Career Policy Ambassador for the Society for Neuroscience, and the co-founder of the MIT Science Policy Review, which publishes peer-reviewed reports on different science policy issues.

"Most of our research is funded by taxpayers, yet most people don't necessarily understand what's going on in the research that they're funding," explains Harrod. "I wanted to create a way for people to better understand how different regulations affect them personally."

In addition to her advocacy roles, Harrod also has a dedicated online presence. She writes articles for Massive Science and is well known for her YouTube channel. Her videos, released weekly, investigate the different ways we interact with artificial intelligence daily. What began as a hobby three years ago has developed into an active community with 70,000 subscribers. "I hadn't seen many other people talking about AI and machine learning in a casual way, so I decided to do it for fun," she says. "It's been a great way to keep me looped into the broader questions in the field."

Harrod's most popular video focuses on how AI can be used to monitor online exam proctoring. With the shift to online learning during the pandemic, many students have used her video to understand how AI proctors can detect cheating. "As the audience has grown, it's been exciting to read the comments and see people get curious about AI applications they had never heard of before. I've also gotten to have interesting conversations with people who I wouldn't have come across otherwise," she says.

In the future, Harrod hopes to find a career that will allow her to balance her time between lab research, policy, and science communication. She plans on continuing to use her knowledge as a scientist to debunk hype and tell truthful stories to the public. "I've seen so many articles with headlines that could be misleading if someone only read the title. For example, a small study done in mice can be exaggerated to make mind-reading technology seem real, when the research still has a long way to go."

"Since making my YouTube channel, I've learned it's important to give people reasonable expectations about what's real and what they're going to encounter in their lives. They deserve to know the full picture so they can make informed decisions," she says.
