Archive for the ‘Machine Learning’ Category

Learning to think critically about machine learning | MIT News | Massachusetts Institute of Technology – MIT News

Students in the MIT course 6.036 (Introduction to Machine Learning) study the principles behind powerful models that help physicians diagnose disease or aid recruiters in screening job candidates.

Now, thanks to the Social and Ethical Responsibilities of Computing (SERC) framework, these students will also stop to ponder the implications of these artificial intelligence tools, which sometimes come with their share of unintended consequences.

Last winter, a team of SERC Scholars worked with instructor Leslie Kaelbling, the Panasonic Professor of Computer Science and Engineering, and the 6.036 teaching assistants to infuse weekly labs with material covering ethical computing, data and model bias, and fairness in machine learning. The process was initiated in the fall of 2019 by Jacob Andreas, the X Consortium Assistant Professor in the Department of Electrical Engineering and Computer Science. SERC Scholars collaborate in multidisciplinary teams to help postdocs and faculty develop new course material.

Because 6.036 is such a large course, more than 500 students who were enrolled in the 2021 spring term grappled with these ethical dimensions alongside their efforts to learn new computing techniques. For some, it may have been their first experience thinking critically in an academic setting about the potential negative impacts of machine learning.

The SERC Scholars evaluated each lab to develop concrete examples and ethics-related questions to fit that week’s material. Each brought a different toolset. Serena Booth is a graduate student in the Interactive Robotics Group of the Computer Science and Artificial Intelligence Laboratory (CSAIL). Marion Boulicault was a graduate student in the Department of Linguistics and Philosophy, and is now a postdoc in the MIT Schwarzman College of Computing, where SERC is based. And Rodrigo Ochigame was a graduate student in the Program in History, Anthropology, and Science, Technology, and Society (HASTS) and is now an assistant professor at Leiden University in the Netherlands. They collaborated closely with teaching assistant Dheekshita Kumar, MEng ’21, who was instrumental in developing the course materials.

They brainstormed and iterated on each lab, while working closely with the teaching assistants to ensure the content fit and would advance the core learning objectives of the course. At the same time, they helped the teaching assistants determine the best way to present the material and lead conversations on topics with social implications, such as race, gender, and surveillance.

“In a class like 6.036, we are dealing with 500 people who are not there to learn about ethics. They think they are there to learn the nuts and bolts of machine learning, like loss functions, activation functions, and things like that. We have this challenge of trying to get those students to really participate in these discussions in a very active and engaged way. We did that by tying the social questions very intimately with the technical content,” Booth says.

For instance, in a lab on how to represent input features for a machine learning model, they introduced different definitions of fairness, asked students to consider the pros and cons of each definition, then challenged them to think about the features that should be input into a model to make it fair.
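As an illustration of the kind of exercise described above (this sketch is not taken from the actual 6.036 labs), one commonly discussed statistical definition of fairness, demographic parity, compares a model’s positive-prediction rates across groups. The function and data below are invented for illustration.

```python
def demographic_parity_gap(predictions, groups):
    """Absolute difference in positive-prediction rate between two groups."""
    rates = {}
    for g in set(groups):
        preds_g = [p for p, grp in zip(predictions, groups) if grp == g]
        rates[g] = sum(preds_g) / len(preds_g)
    vals = list(rates.values())
    return abs(vals[0] - vals[1])

# A model that approves 75% of group "a" but only 25% of group "b"
preds  = [1, 1, 1, 0, 1, 0, 0, 0]
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]
gap = demographic_parity_gap(preds, groups)  # 0.75 - 0.25 = 0.5
```

Students weighing this definition against others would then ask which input features push the gap up or down, which is exactly the kind of question the lab poses.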

Four labs have now been published on MIT OpenCourseWare. A new team of SERC Scholars is revising the other eight, based on feedback from the instructors and students, with a focus on learning objectives, filling in gaps, and highlighting important concepts.

An intentional approach

The students’ efforts on 6.036 show how SERC aims to work with faculty in ways that work for them, says Julie Shah, associate dean of SERC and professor of aeronautics and astronautics. They adapted the SERC process due to the unique nature of this large course and tight time constraints.

SERC was established more than two years ago through the MIT Schwarzman College of Computing as an intentional approach to bring faculty from divergent disciplines together into a collaborative setting to co-create and launch new course material focused on social and responsible computing.

Each semester, the SERC team invites about a dozen faculty members to join an Action Group dedicated to developing new curricular materials (there are several SERC Action Groups, each with a different mission). They are purposeful in whom they invite, and seek to include faculty members who will likely form fruitful partnerships in smaller subgroups, says David Kaiser, associate dean of SERC, the Germeshausen Professor of the History of Science, and professor of physics.

These subgroups of two or three faculty members hone their shared interest over the course of the term to develop new ethics-related material. But rather than one discipline serving another, the process is a two-way street; every faculty member brings new material back to their course, Shah explains. Faculty are drawn to the Action Groups from all of MIT’s five schools.

“Part of this involves going outside your normal disciplinary boundaries and building a language, and then trusting and collaborating with someone new outside of your normal circles. That’s why I think our intentional approach has been so successful. It is good to pilot materials and bring new things back to your course, but building relationships is the core. That makes this something valuable for everybody,” she says.

Making an impact

Over the past two years, Shah and Kaiser have been impressed by the energy and enthusiasm surrounding these efforts.

They have worked with about 80 faculty members since the program started, and more than 2,100 students took courses that included new SERC content in the last year alone. Those students aren’t all necessarily engineers: about 500 were exposed to SERC content through courses offered in the School of Humanities, Arts, and Social Sciences, the Sloan School of Management, and the School of Architecture and Planning.

“Central to SERC is the principle that ethics and social responsibility in computing should be integrated into all areas of teaching at MIT, so it becomes just as relevant as the technical parts of the curriculum,” Shah says. “Technology, and AI in particular, now touches nearly every industry, so students in all disciplines should have training that helps them understand these tools, and think deeply about their power and pitfalls.”

“It is not someone else’s job to figure out the why or what happens when things go wrong. It is all of our responsibility, and we can all be equipped to do it. Let’s get used to that. Let’s build up that muscle of being able to pause and ask those tough questions, even if we can’t identify a single answer at the end of a problem set,” Kaiser says.

For the three SERC Scholars, it was uniquely challenging to carefully craft ethical questions when there was no answer key to refer to. But thinking deeply about such thorny problems also helped Booth, Boulicault, and Ochigame learn, grow, and see the world through the lens of other disciplines.

They are hopeful the undergraduates and teaching assistants in 6.036 take these important lessons to heart, and into their future careers.

“I was inspired and energized by this process, and I learned so much, not just the technical material, but also what you can achieve when you collaborate across disciplines. Just the scale of this effort felt exciting. If we have this cohort of 500 students who go out into the world with a better understanding of how to think about these sorts of problems, I feel like we could really make a difference,” Boulicault says.

Read more:
Learning to think critically about machine learning | MIT News | Massachusetts Institute of Technology - MIT News

Machine Learning Application in the Manufacturing Industry – IoT For All

To keep up with the latest changes in technology, manufacturers need to explore one of the most critical elements driving factories forward into the future: machine learning. Let’s talk about the most important applications and innovations that ML technology is providing in 2022.

Machine learning is a subfield of artificial intelligence, but not all AI technologies count as machine learning. There are various other types of AI that play a role in many industries, such as robotics, natural language processing, and computer vision. If you’re curious about how these technologies affect the manufacturing industry, check out our review below.

Machine learning algorithms use training data to teach software to solve a problem. This data may come from real-time IoT sensors on a factory floor, or it may come from other sources. Machine learning encompasses a variety of methods, such as neural networks and deep learning. Neural networks imitate biological neurons to discover patterns in a dataset and solve problems. Deep learning uses multiple layers of neural networks: the first layer takes raw data as input, and each layer passes processed information to the next.
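The layered structure described above can be sketched in a few lines: each layer multiplies its inputs by weights, adds a bias, and passes the result through an activation function to the next layer. The weights and sensor values below are arbitrary numbers chosen for illustration, not a trained model.

```python
def relu(x):
    """Standard activation: zero out negative values."""
    return [max(0.0, v) for v in x]

def dense(inputs, weights, biases):
    """One fully connected layer: output_j = sum_i(inputs_i * w_ji) + b_j."""
    return [sum(i * w for i, w in zip(inputs, row)) + b
            for row, b in zip(weights, biases)]

# Raw sensor readings -> hidden layer -> single output score
x = [0.5, -1.2, 3.0]  # e.g. temperature, vibration, load
hidden = relu(dense(x, [[0.2, 0.4, -0.1], [0.7, -0.3, 0.5]], [0.1, 0.0]))
score = dense(hidden, [[0.6, -0.8]], [0.05])[0]
```

Stacking more `dense`/`relu` pairs between the input and output is exactly what makes the network “deep.”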

Let’s start by imagining a box with assembly robots, IoT sensors, and other automated machinery. At one end you supply the materials necessary to complete the product; at the other end, the product rolls off the assembly line. The only intervention needed is routine maintenance of the equipment inside. This is the ideal future of manufacturing, and machine learning can help us understand how to achieve it.

Aside from the advanced robotics necessary for automated assembly to work, machine learning can help ensure: quality assurance, NDT analysis, and localizing the causes of defects, among other things.

You can think of this factory in a box example as a way of simplifying a larger factory, but in some cases it’s quite literal. Nokia is utilizing portable manufacturing sites in the form of retrofitted shipping containers with advanced automated assembly equipment. These portable containers can be used in any location, allowing manufacturers to assemble products on site instead of transporting them longer distances.

Using neural networks, high-resolution cameras, and powerful GPUs, real-time video processing combined with machine learning and computer vision can complete visual inspection tasks better than humans can. This technology ensures that the factory in a box is working correctly and that unusable products are eliminated from the system.

In the past, machine learning’s use in video analysis has been criticized because of poor video quality: blurry frames can make the inspection algorithm more error-prone. With high-quality cameras and greater graphical processing power, however, neural networks can more efficiently search for defects in real time without human intervention.

Using various IoT sensors, machine learning can help test the created products without damaging them. An algorithm can search for patterns in the real-time data that correlate with a defective version of the unit, enabling the system to flag potentially unwanted products.

Another way to detect defects in materials is non-destructive testing, which measures a material’s stability and integrity without causing damage. For example, an ultrasound machine can detect anomalies like cracks in a material, producing data that humans can analyze by hand to look for these outliers.

However, outlier detection, object detection, and segmentation algorithms can automate this process with much greater efficiency by analyzing the data for patterns that humans may not be able to see. Machine learning is also less prone to the kinds of errors that humans tend to make.
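A minimal sketch of the outlier-detection idea described above: flag ultrasound (or other NDT) readings that deviate strongly from the material’s typical response. The z-score threshold and the thickness readings below are invented for illustration; a production system would use far richer features.

```python
import statistics

def flag_outliers(readings, z_threshold=3.0):
    """Return indices of readings more than z_threshold deviations from the mean."""
    mean = statistics.fmean(readings)
    stdev = statistics.pstdev(readings)
    return [i for i, r in enumerate(readings)
            if stdev > 0 and abs(r - mean) / stdev > z_threshold]

# 20 thickness readings (mm) around 5.0, with one crack-like anomaly at the end
readings = [5.0, 5.1, 4.9, 5.0, 5.2, 4.8, 5.0, 5.1, 4.9, 5.0,
            5.1, 5.0, 4.9, 5.2, 5.0, 4.8, 5.1, 5.0, 4.9, 2.1]
suspects = flag_outliers(readings)  # flags index 19, the 2.1 mm reading
```

The same idea scales up: replace the z-score with an isolation forest or a learned segmentation model when the anomalies are subtler than a single extreme value.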

One of the core tenets of machine learning’s role in manufacturing is predictive maintenance. PwC reported that predictive maintenance will be one of the fastest-growing machine learning technologies in manufacturing, with a 38 percent increase in market value from 2020 to 2025.

With unscheduled maintenance having the potential to cut deeply into a business’s bottom line, predictive maintenance enables factories to make appropriate adjustments and corrections before machinery experiences more costly failures. We want our factory in a box to have as much uptime and as few delays as possible, and predictive maintenance can make that happen.

Extensive IoT sensors that record vital information about the operating conditions and status of a machine make predictive maintenance possible. This may include humidity, temperature, and more.

A machine learning algorithm can analyze patterns in data collected over time and reasonably predict when the machine may need maintenance. There are several approaches to achieving this goal.

Thanks to the IoT sensors powering predictive maintenance, machine learning can analyze the patterns in the data to see what parts of the machine need to be maintained to prevent a failure. If certain patterns lead to a trend of defects, it’s possible that hardware or software behaviors can be identified as causes of those defects. From here, engineers can devise solutions to correct the system and avoid those defects in the future. This reduces the margin of error in our factory in a box scenario.
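One simple version of the pattern analysis described above is drift detection: compare a machine’s recent sensor average against its historical baseline and schedule maintenance when the drift exceeds a tolerance. The bearing temperatures and the 10 percent tolerance below are invented for this sketch; real systems learn such thresholds from failure history.

```python
def needs_maintenance(history, recent, tolerance=0.10):
    """True if the recent average drifts more than `tolerance` from the baseline."""
    baseline = sum(history) / len(history)
    current = sum(recent) / len(recent)
    return abs(current - baseline) / baseline > tolerance

# Bearing temperature (deg C): long-run baseline vs. the last few readings
baseline_temps = [61, 60, 62, 61, 60, 61, 62, 61]
recent_temps = [67, 69, 70, 71]  # steadily climbing
flag = needs_maintenance(baseline_temps, recent_temps)  # True: schedule a check
```

More sophisticated approaches replace the fixed threshold with a survival model or a classifier trained on labeled failure events, but the input is the same stream of IoT readings.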

A digital twin is a virtual recreation of the production process built from real-time IoT sensor data. It can be created as a hypothetical representation of a system that doesn’t yet exist, or as a recreation of an existing system.

The digital twin is a sandbox for experimentation in which machine learning can be used to analyze patterns in a simulation to optimize the environment. This helps support quality assurance and predictive maintenance efforts as well. We can also use machine learning alongside digital twins for layout optimization. This works when planning the layout of a factory or for optimizing the existing layout.

If we want to optimize every part of the factory, we also need to pay attention to the energy it requires. The most common way to do this is to collect sequential data measurements, which data scientists can analyze with machine learning algorithms powered by autoregressive models and deep neural networks.
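The autoregressive idea mentioned above can be shown in miniature: fit an AR(1) model, y_t = a·y_{t-1} + b, to a sequence of energy readings by ordinary least squares, then forecast the next value. The usage numbers below are synthetic and constructed to follow the model exactly, so the fit recovers a = 0.5 and b = 10.

```python
def fit_ar1(series):
    """Least-squares fit of y_t = a * y_{t-1} + b."""
    x, y = series[:-1], series[1:]
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    var = sum((xi - mx) ** 2 for xi in x)
    a = cov / var
    b = my - a * mx
    return a, b

# Hourly energy use (kWh) decaying toward a steady state of 20
usage = [30.0, 25.0, 22.5, 21.25, 20.625, 20.3125]
a, b = fit_ar1(usage)
forecast = a * usage[-1] + b  # predicted next-hour usage: 20.15625
```

Deep neural networks generalize this by learning nonlinear functions of many past values and exogenous signals (shift schedules, ambient temperature), but the forecasting setup is the same.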

We’ve used machine learning to optimize the factory’s production processes, but what about the product itself? BMW introduced the BMW iX Flow at CES 2022 with a special e-ink wrap that can change the color (or, more accurately, the shade) of the car between black and white. BMW explained that “generative design processes are implemented to ensure the segments reflect the characteristic contours of the vehicle and the resulting variations in light and shadow.”

Generative design uses machine learning to optimize the design of a product, whether an automobile, electronic device, toy, or other item. Given data and a desired goal, machine learning can cycle through possible arrangements to find the best design.

ML algorithms can be trained to optimize a design for weight, shape, durability, cost, strength, and even aesthetic parameters.

Generative design processes can be based on several families of algorithms.

Let’s step away from the factory in a box example for a bit and look at a broader picture of needs in manufacturing. Production is only one element. Supply chain operations around a manufacturing center are also being improved with machine learning technologies, such as logistics route optimization and warehouse inventory control. These make up a cognitive supply chain that continues to evolve in the manufacturing industry.

AI-powered logistics solutions use object detection models instead of barcode detection, thus replacing manual scanning. Computer vision systems can detect shortages and overstock. By identifying these patterns, managers can be made aware of actionable situations. Computers can even be left to take action automatically to optimize inventory storage.

At MobiDev, we have researched a use case of creating a system capable of detecting objects for logistics. Read more about object detection using small datasets for automated item counting in logistics.

How much should a factory produce and ship out? This is a question that can be difficult to answer. However, with access to appropriate data, machine learning algorithms can help factories understand how much they should be making without overproducing. The future of machine learning in manufacturing depends on innovative decisions.

Visit link:
Machine Learning Application in the Manufacturing Industry - IoT For All

FDA Issues Advisory on Use of AI and Machine Learning for Large Vessel Occlusion in the Brain – Diagnostic Imaging

Suggesting that some radiologists may not be aware of the intended use of computer-aided triage and notification (CADt) devices, the Food and Drug Administration (FDA) has issued an advisory on the use of the imaging software for patients with suspected large vessel occlusion (LVO) in the brain.

Emphasizing proper use of CADt software, the FDA notes these devices are not intended to substitute for diagnostic assessment by radiologists. While CADt devices can help flag and prioritize brain imaging with findings that are suspicious for LVO, the advisory points out that an LVO, a common cause of acute ischemic strokes, may still be present even if it is not flagged by the CADt imaging software.

If there is any potential over-reliance on CADt software, Vivek Bansal, MD, said it may stem from a team of health-care providers striving to do the right thing for the patient under tight time constraints. While interventionalists, neurosurgeons, and neurologists all have strong knowledge of brain vessels, there may be different levels of experience, according to Dr. Bansal, the national subspecialty lead for neuroradiology at Radiology Partners. He added that while these specialists look closely at the images they take in the operating suite, they may not examine the actual CT images to the same degree.

In regard to the imaging, Dr. Bansal said one may be looking at “tiny branching vessels that are diving up and down into different slices of the images, and you have to scroll up and down to really trace them out vessel by vessel.” This can be challenging, and particularly hard to do on a smartphone in a brightly lit room, he pointed out.

“The clock is ticking, and time is brain. We are trying to race against the clock because every minute we take to arrive at a diagnosis, more brain cells may be dying (if the patient has a clot). The quicker we can get them to a diagnosis and the patient gets to a cath lab, the better the outcomes for the patient. I think that is the biggest challenge: trying to do something that is very meticulous in a very small amount of time,” explained Dr. Bansal.

The FDA advisory also maintained that it is important to be aware of the design capabilities of different CADt devices, many of which have artificial intelligence (AI) or machine learning technology. For example, the FDA cautioned that LVO CADt devices may not assess all intracranial vessels. Dr. Bansal said this is an important distinction with AI tools.

“While some AI tools are very good at looking at an M1 occlusion, which is the proximal part of the middle cerebral artery, the newer AI tools are capable of looking at M2 occlusions along with proximal anterior cerebral artery (ACA) and posterior cerebral artery (PCA) occlusions. All of these things are important in terms of patient care,” maintained Dr. Bansal, who is affiliated with the East Houston Pathology Group in Texas.

Dr. Bansal said the key is understanding the role of AI-enabled devices and their value in triaging cases.

“At any given moment, I might have 40 stat exams on my list. I’m cranking through them as fast as I can, but if AI tools are saying, ‘Hey, look at this one next,’ whether it is a potential large vessel occlusion or brain bleed, that is very helpful,” suggested Dr. Bansal. “Where we are at right now, I think the only way we can look at AI is as a triaging tool.”

Continue reading here:
FDA Issues Advisory on Use of AI and Machine Learning for Large Vessel Occlusion in the Brain - Diagnostic Imaging

AdTheorent Uses Machine Learning-Powered Predictive Advertising to Boost Donations and Drive Awareness for American Cancer Society – PR Newswire

AdTheorent's performance-first platform drove a 68% engagement rate and delivered a Return on Ad Spend that exceeded benchmark by 117%

NEW YORK, April 14, 2022 /PRNewswire/ -- AdTheorent Holding Company, Inc. ("AdTheorent" or the "Company") (Nasdaq: ADTH), a leading programmatic digital advertising company using advanced machine learning technology and privacy-forward solutions to deliver measurable value for advertisers and marketers, today announced campaign results from a recent digital fundraising campaign for American Cancer Society (ACS). The campaign goal was to drive cost-effective donations and positive Return on Ad Spend (RoAS), as well as raise awareness of ACS. The campaign drove strong donations revenue, yielding an overall campaign RoAS which was 2-times more efficient than the ACS target benchmark.

The Approach:

AdTheorent worked with Tombras, media agency of record for ACS, to drive efficient donations and achieve a strong RoAS, in addition to increasing awareness of the brand's core areas of focus: advocacy, discovery, and patient support. To achieve these dual objectives, AdTheorent leveraged a mix of cross-device rich media, interactive banners, and display tactics, targeted using AdTheorent's advanced predictive advertising platform. AdTheorent developed custom machine learning models fueled by non-individualized statistics to identify and reach consumers with the highest likelihood of completing the required campaign actions. AdTheorent's programmatic performance optimizers utilized myriad signals in the custom predictive models, such as ad position, publisher, geo-intelligence, non-individualized user device attributes, location DMA, time of day, and connection signal, to find the most qualified users and reach ACS' target audience of prospective, current, and lapsed donors with a national footprint. Additionally, AdTheorent utilized real-time contextual signals to identify and reach consumers engaging with content related to ACS or charitable donations. Through in-unit pixel placement, user engagement fueled targeting, allowing AdTheorent to optimize in real time and scale targeting to drive results for each tactic.

"Every dollar raised helps the American Cancer Society improve the lives of people with cancer and their families as the only organization that integrates advocacy, discovery and direct patient support," said Ben Devore, Director, Media Strategy at ACS. "Every bit of our campaign spend needs to be optimized for the best possible performance, so our key advertising goal was to reach the most probable donors, and then engage them in a way that would drive donations. AdTheorent helped us outperform our KPIs, with a very efficient return on ad spend and an exceptionally high engagement rate of nearly 70% throughout the duration of the campaign which helps our organization achieve greater impact, overall."

The Results:

The campaign exceeded all benchmarks across all tactics.

AdTheorent's data-driven platform identified targeting variables that yielded conversion lift, providing valuable insights for future flights of the campaign.

"AdTheorent Predictive Advertising uses advanced machine learning and data science to drive real-world performance and advertiser ROI in the most privacy-forward and efficient manner," said James Lawson, CEO at AdTheorent. "We are honored to work with Tombras and ACS to further ACS's vital mission. And we are proud of the results we have helped produce, driving donation revenue at an efficiency rate 2X greater than ACS expectations."

About AdTheorent

AdTheorent uses advanced machine learning technology and privacy-forward solutions to deliver impactful advertising campaigns for marketers. AdTheorent's industry-leading machine learning platform powers its predictive targeting, geo-intelligence, audience extension solutions, and in-house creative capability, Studio AT. Leveraging only non-sensitive data and focused on the predictive value of machine learning models, AdTheorent's product suite and flexible transaction models allow advertisers to identify the most qualified potential consumers, coupled with the optimal creative experience, to deliver superior results, measured by each advertiser's real-world business goals.

AdTheorent is consistently recognized with numerous technology, product, growth and workplace awards. AdTheorent was awarded "Best AI-Based Advertising Solution" (AI Breakthrough Awards) and "Most Innovative Product" (B.I.G. Innovation Awards) for four consecutive years. Additionally, AdTheorent is the only six-time recipient of Frost & Sullivan's "Digital Advertising Leadership Award." AdTheorent is headquartered in New York, with fourteen offices across the United States and Canada. For more information, visit adtheorent.com.

About Tombras

Tombras is a 430+ person full-service, independent advertising agency headquartered in Knoxville, Tennessee, connecting data and creativity for business results. Tombras has been named a FastCo Most Innovative Company, to the AdAge A-List, and a Most Effective Independent Agency by Effie Worldwide. It is one of the fastest-growing full-service independent agencies, with offices in New York, Atlanta, Washington, D.C., and Charlotte, NC, in addition to its Knoxville headquarters. Tombras works with notable brands including American Cancer Society, Big Lots, MoonPie, Mozilla Firefox, Orangetheory Fitness, Pernod Ricard, and others. More information: tombras.com.

About American Cancer Society

The American Cancer Society is on a mission to free the world from cancer. We invest in lifesaving research, provide 24/7 information and support, and work to ensure that individuals in every community have access to cancer prevention, detection, and treatment. For more information, visit cancer.org.

SOURCE AdTheorent

Read more:
AdTheorent Uses Machine Learning-Powered Predictive Advertising to Boost Donations and Drive Awareness for American Cancer Society - PR Newswire

A call for ethical use of AI in Earth system science | NCAR & UCAR News – University Corporation for Atmospheric Research

Apr 15, 2022 - by Laura Snider

Artificial intelligence holds vast potential to help solve a number of challenging problems in Earth system science, from improving prediction of severe weather events to increasing the efficiency of climate models. But as in all AI applications, the use of machine learning and other techniques in environmental science has the potential to introduce biases that could deepen inequities.

The authors of a new paper published in the journal Environmental Data Science argue that researchers must develop ethical, responsible, and trustworthy approaches to applying AI in Earth system science to ensure that unintentional consequences do not worsen environmental and climate injustice.

“It’s really exciting to see all the ways researchers are finding to creatively apply artificial intelligence in weather, climate, and other environmental science research,” said David John Gagne, a scientist at the National Center for Atmospheric Research (NCAR) and a paper co-author. “But we have a responsibility to ensure that we don’t cause more harm than good.”

The paper’s lead author is Amy McGovern of the University of Oklahoma. Other co-authors include Imme Ebert-Uphoff of Colorado State University and Ann Bostrom of the University of Washington.

A central bias that could be exacerbated by AI is related to where and how weather and climate data are collected. For example, hailstorms, tornadoes, and other severe weather events are more likely to be reported in areas with higher populations. Therefore, the severe weather datasets used to train machine learning models may not adequately represent the amount of severe weather that takes place in rural, sparsely populated parts of the country. The machine learning model, then, will also tend to underpredict severe weather in those regions.
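The reporting bias described above can be made concrete with a toy simulation: two regions with the same true storm rate, where storms only enter the training data with a probability tied to population density. All rates below are invented to make the effect visible.

```python
import random

random.seed(42)

def observed_storm_count(true_storms, report_prob):
    """Storms that make it into the dataset, given a reporting probability."""
    return sum(1 for _ in range(true_storms) if random.random() < report_prob)

true_storms = 1000                                    # same real rate in both regions
urban_reports = observed_storm_count(true_storms, 0.9)  # dense observer network
rural_reports = observed_storm_count(true_storms, 0.3)  # sparse observer network

# A model trained on reports alone would "learn" that the rural region sees
# roughly a third as many storms, even though the true rates are identical.
bias_ratio = rural_reports / urban_reports
```

Correcting for this typically means reweighting samples by estimated reporting probability or supplementing reports with population-independent sources such as radar.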

These relatively low-population areas may be home to communities that are already underserved by the weather community.

The authors list a range of other issues that can arise through the use of AI for environmental science, including the use of non-trustworthy models or applying a model to inappropriate situations.

Read the University of Oklahoma news release

See all News

Read the original here:
A call for ethical use of AI in Earth system science | NCAR & UCAR News - University Corporation for Atmospheric Research