Self-supervised learning is the future of AI – The Next Web
Despite the huge contributions of deep learning to the field of artificial intelligence, there's something very wrong with it: It requires huge amounts of data. This is one thing that both the pioneers and critics of deep learning agree on. In fact, deep learning didn't emerge as the leading AI technique until a few years ago because of the limited availability of useful data and the shortage of computing power to process that data.
Reducing the data-dependency of deep learning is currently among the top priorities of AI researchers.
In his keynote speech at the AAAI conference, computer scientist Yann LeCun discussed the limits of current deep learning techniques and presented the blueprint for self-supervised learning, his roadmap to solve deep learning's data problem. LeCun is one of the godfathers of deep learning and the inventor of convolutional neural networks (CNNs), one of the key elements that have spurred a revolution in artificial intelligence in the past decade.
Self-supervised learning is one of several plans to create data-efficient artificial intelligence systems. At this point, it's really hard to predict which technique will succeed in creating the next AI revolution (or if we'll end up adopting a totally different strategy). But here's what we know about LeCun's masterplan.
First, LeCun clarified that what is often referred to as the limitations of deep learning is, in fact, a limit of supervised learning. Supervised learning is the category of machine learning algorithms that require annotated training data. For instance, if you want to create an image classification model, you must train it on a vast number of images that have been labeled with their proper class.
"[Deep learning] is not supervised learning. It's not just neural networks. It's basically the idea of building a system by assembling parameterized modules into a computation graph," LeCun said in his AAAI speech. "You don't directly program the system. You define the architecture and you adjust those parameters. There can be billions."
Deep learning can be applied to different learning paradigms, LeCun added, including supervised learning, reinforcement learning, as well as unsupervised or self-supervised learning.
But the confusion surrounding deep learning and supervised learning is not without reason. For the moment, the majority of deep learning algorithms that have found their way into practical applications are based on supervised learning models, which says a lot about the current shortcomings of AI systems. Image classifiers, facial recognition systems, speech recognition systems, and many of the other AI applications we use every day have been trained on millions of labeled examples.
Reinforcement learning and unsupervised learning, the other categories of learning algorithms, have so far found very limited applications.
Supervised deep learning has given us plenty of very useful applications, especially in fields such as computer vision and some areas of natural language processing. Deep learning is playing an increasingly important role in sensitive applications, such as cancer detection. It is also proving to be extremely useful in areas where the scale of the problem is beyond being addressed with human efforts, such as, with some caveats, reviewing the huge amount of content being posted on social media every day.
"If you take deep learning from Facebook, Instagram, YouTube, etc., those companies crumble," LeCun says. "They are completely built around it."
But as mentioned, supervised learning is only applicable where there's enough quality data and the data can capture the entirety of possible scenarios. As soon as trained deep learning models face novel examples that differ from their training examples, they start to behave in unpredictable ways. In some cases, showing an object from a slightly different angle might be enough to confound a neural network into mistaking it for something else.
ImageNet vs reality: In ImageNet (left column) objects are neatly positioned, in ideal background and lighting conditions. In the real world, things are messier (source: objectnet.dev)
Deep reinforcement learning has shown remarkable results in games and simulation. In the past few years, reinforcement learning has conquered many games that were previously thought to be off-limits for artificial intelligence. AI programs have already decimated human world champions at StarCraft 2, Dota, and the ancient Chinese board game Go.
But the way these AI programs learn to solve problems is drastically different from that of humans. Basically, a reinforcement learning agent starts with a blank slate and is only provided with a basic set of actions it can perform in its environment. The AI is then left on its own to learn through trial-and-error how to generate the most rewards (e.g., win more games).
This model works when the problem space is simple and you have enough compute power to run as many trial-and-error sessions as needed. In most cases, reinforcement learning agents take an insane number of sessions to master games. The huge costs have limited reinforcement learning research to labs owned or funded by wealthy tech companies.
Reinforcement learning agents must be trained on hundreds of years' worth of sessions to master games, far more than a human could play in a lifetime (source: Yann LeCun).
Reinforcement learning systems are very bad at transfer learning. A bot that plays StarCraft 2 at grandmaster level needs to be trained from scratch if it wants to play Warcraft 3. In fact, even small changes to the StarCraft game environment can immensely degrade the performance of the AI. In contrast, humans are very good at extracting abstract concepts from one game and transferring them to another.
Reinforcement learning really shows its limits when it comes to solving real-world problems that can't be simulated accurately. "What if you want to train a car to drive itself? And it's very hard to simulate this accurately," LeCun said, adding that if we wanted to do it in real life, we would have to destroy many cars. And unlike simulated environments, real life doesn't allow you to run experiments in fast forward, and parallel experiments, when possible, would result in even greater costs.
LeCun breaks down the challenges of deep learning into three areas.
First, we need to develop AI systems that learn with fewer samples or fewer trials. "My suggestion is to use unsupervised learning, or I prefer to call it self-supervised learning because the algorithms we use are really akin to supervised learning, which is basically learning to fill in the blanks," LeCun says. "Basically, it's the idea of learning to represent the world before learning a task. This is what babies and animals do. We run about the world, we learn how it works before we learn any task. Once we have good representations of the world, learning a task requires few trials and few samples."
Babies develop concepts of gravity, dimensions, and object persistence in the first few months after birth. While there's debate over how many of these capabilities are hardwired into the brain and how many are learned, what is certain is that we develop many of our abilities simply by observing the world around us.
The second challenge is creating deep learning systems that can reason. Current deep learning systems are notoriously bad at reasoning and abstraction, which is why they need huge amounts of data to learn simple tasks.
"The question is, how do we go beyond feed-forward computation and System 1? How do we make reasoning compatible with gradient-based learning? How do we make reasoning differentiable? That's the bottom line," LeCun said.
System 1 covers the kinds of tasks that don't require active thinking, such as navigating a familiar area or making small calculations. System 2 is the more deliberate kind of thinking, which requires reasoning. Symbolic artificial intelligence, the classic approach to AI, has proven to be much better at reasoning and abstraction.
But LeCun doesn't suggest returning to symbolic AI or to hybrid artificial intelligence systems, as other scientists have suggested. His vision for the future of AI is much more in line with that of Yoshua Bengio, another deep learning pioneer, who introduced the concept of system 2 deep learning at NeurIPS 2019 and further discussed it at AAAI 2020. LeCun, however, did admit that nobody has a completely good answer to which approach will enable deep learning systems to reason.
The third challenge is to create deep learning systems that can learn and plan complex action sequences, and decompose tasks into subtasks. Deep learning systems are good at providing end-to-end solutions to problems but very bad at breaking them down into specific, interpretable, and modifiable steps. There have been advances in creating learning-based AI systems that can decompose images, speech, and text. Capsule networks, invented by Geoffrey Hinton, address some of these challenges.
But learning to reason about complex tasks is beyond today's AI. "We have no idea how to do this," LeCun admits.
The idea behind self-supervised learning is to develop a deep learning system that can learn to fill in the blanks.
"You show a system a piece of input, a text, a video, even an image, you suppress a piece of it, mask it, and you train a neural net or your favorite class of model to predict the piece that's missing. It could be the future of a video or the words missing in a text," LeCun says.
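The "fill in the blanks" setup LeCun describes can be sketched in a few lines. The snippet below is an illustration, not code from the talk (the function name and mask token are made up): it turns a single unlabeled sentence into supervised-style (input, target) pairs by masking one word at a time, which is the sense in which self-supervised learning manufactures its own labels.

```python
def make_fill_in_the_blank_pairs(sentence, mask_token="[MASK]"):
    """Yield (masked_input, missing_word) pairs, one per word position.

    No human annotation is needed: the "label" for each example is the
    word that was hidden from the input.
    """
    words = sentence.split()
    pairs = []
    for i, word in enumerate(words):
        masked = words[:i] + [mask_token] + words[i + 1:]
        pairs.append((" ".join(masked), word))
    return pairs

pairs = make_fill_in_the_blank_pairs("the cat sat on the mat")
print(pairs[1])  # ('the [MASK] sat on the mat', 'cat')
```

A real system would do this at scale over billions of sentences (or video frames), but the key point survives in the toy version: every piece of unlabeled data yields many training examples for free.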
The closest we have to self-supervised learning systems are Transformers, an architecture that has proven very successful in natural language processing. Transformers don't require labeled data. They are trained on large corpora of unstructured text such as Wikipedia articles. And they've proven to be much better than their predecessors at generating text, engaging in conversation, and answering questions. (But they are still very far from really understanding human language.)
Transformers have become very popular and are the underlying technology for nearly all state-of-the-art language models, including Google's BERT, Facebook's RoBERTa, OpenAI's GPT-2, and Google's Meena chatbot.
More recently, AI researchers have shown that transformers can perform integration and solve differential equations, problems that require symbol manipulation. This might be a hint that the evolution of transformers will enable neural networks to move beyond pattern recognition and statistical approximation tasks.
So far, transformers have proven their worth in dealing with discrete data such as words and mathematical symbols. "It's easy to train a system like this because there is some uncertainty about which word could be missing, but we can represent this uncertainty with a giant vector of probabilities over the entire dictionary, and so it's not a problem," LeCun says.
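The "giant vector of probabilities over the entire dictionary" is just a normalized distribution over the vocabulary, typically produced by a softmax over per-word scores. A minimal sketch, assuming a made-up four-word vocabulary and invented model scores:

```python
import numpy as np

def softmax(scores):
    # Subtract the max for numerical stability; the result is a
    # probability vector that sums to 1.
    e = np.exp(scores - scores.max())
    return e / e.sum()

vocab = ["cat", "dog", "mat", "sat"]
scores = np.array([2.0, 0.5, 0.1, -1.0])  # hypothetical model outputs
probs = softmax(scores)
print(dict(zip(vocab, probs.round(3))))
```

Because a dictionary is finite (tens of thousands of entries), this vector is cheap to compute and to train against. The analogous object for video would be a distribution over all possible frames, a continuous, astronomically large space, which is exactly the difficulty LeCun raises next.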
But the success of Transformers has not transferred to the domain of visual data. "It turns out to be much more difficult to represent uncertainty and prediction in images and video than it is in text because it's not discrete. We can produce distributions over all the words in the dictionary. We don't know how to represent distributions over all possible video frames," LeCun says.
For each video segment, there are countless possible futures. This makes it very hard for an AI system to predict a single outcome, say the next few frames in a video. The neural network ends up calculating the average of possible outcomes, which results in blurry output.
"This is the main technical problem we have to solve if we want to apply self-supervised learning to a wide variety of modalities like video," LeCun says.
LeCun's favored method to approach self-supervised learning is what he calls latent variable energy-based models. The key idea is to introduce a latent variable Z that computes the compatibility between a variable X (the current frame in a video) and a prediction Y (the future of the video) and to select the outcome with the best compatibility score. In his speech, LeCun further elaborated on energy-based models and other approaches to self-supervised learning.
Energy-based models use a latent variable Z to compute the compatibility between a variable X and a prediction Y and select the outcome with the best compatibility score (image credit: Yann LeCun).
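To make the mechanics concrete, here is a toy illustration of the energy-based idea, not LeCun's actual formulation: a made-up quadratic energy E(X, Y, Z) is minimized over the latent variable Z for each candidate prediction Y, and the Y with the lowest resulting energy (best compatibility) is selected.

```python
import numpy as np

def energy(x, y, z):
    # Toy quadratic energy: low when the prediction y matches the
    # observation x shifted by the latent variable z.
    return float(np.sum((y - (x + z)) ** 2))

def best_prediction(x, candidates, z_grid):
    # For each candidate future y, minimize the energy over z
    # (here by brute-force grid search), then keep the y whose
    # best-case energy is lowest.
    scored = [(min(energy(x, y, z) for z in z_grid), y) for y in candidates]
    return min(scored, key=lambda t: t[0])[1]

x = np.array([1.0, 2.0])                       # "current frame"
candidates = [np.array([1.5, 2.5]),            # plausible future
              np.array([4.0, 0.0])]            # implausible future
z_grid = [np.full(2, dz) for dz in np.linspace(-1, 1, 21)]
print(best_prediction(x, candidates, z_grid))  # picks [1.5, 2.5]
```

The latent variable is what lets the model handle multiple plausible futures: instead of averaging them into a blur, each setting of Z commits to one concrete outcome, and compatibility is judged after that commitment. Real energy-based models learn the energy function and minimize over Z with gradient methods rather than a grid.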
"I think self-supervised learning is the future. This is what's going to allow our AI systems, deep learning systems, to go to the next level, perhaps learn enough background knowledge about the world by observation, so that some sort of common sense may emerge," LeCun said in his speech at the AAAI Conference.
One of the key benefits of self-supervised learning is the immense gain in the amount of information the AI outputs. In reinforcement learning, training the AI system is performed at the scalar level; the model receives a single numerical value as reward or punishment for its actions. In supervised learning, the AI system predicts a category or a numerical value for each input.
In self-supervised learning, the output grows to a whole image or set of images. "It's a lot more information. To learn the same amount of knowledge about the world, you will require fewer samples," LeCun says.
We must still figure out how to deal with the uncertainty problem, but when the solution emerges, we will have unlocked a key component of the future of AI.
"If artificial intelligence is a cake, self-supervised learning is the bulk of the cake," LeCun says. "The next revolution in AI will not be supervised, nor purely reinforced."
This story is republished from TechTalks, the blog that explores how technology is solving problems and creating new ones.
Published April 5, 2020 05:00 UTC