MIT FUTURE OF AI
Self-Supervised Learning and Foundation Models
Sign up to receive more lecture material and updates.
ChatGPT, Copilot, CLIP, Dall-E, Stable-Diffusion, AlphaFold, self-driving cars – is now the time that AI lives up to its hype? What's the secret sauce behind these recent AI breakthroughs? It's called self-supervised learning, and it is changing everything. With its help, Meta's Yann LeCun now believes he sees a path to Artificial General Intelligence (AGI) in the form of foundation models. In this non-technical lecture series, we start with the history of AI, then examine what supervised learning and reinforcement learning are missing, and conclude with the deep practical and foundational implications of self-supervised learning. We cover applications in both science and business. Lectures (Thursdays, 2-3pm, room 24-121) will be recorded, and all backgrounds are welcome.
Lectures are created by Rickard Brüel-Gabrielsson
Reviews
Lecture 1: Introduction
Lecture 2: Algorithms
Lecture 4: Data & Stable-Diffusion
MIT Introductory Course on Self-Supervised Learning & Foundation Models Covering:
- ChatGPT
- Stable-Diffusion & Dall-E
- Neural Networks
- Supervised Learning
- Representation & Unsupervised Learning
- Reinforcement Learning
- Generative AI
- Self-Supervised Learning
- Foundation Models
- GANs (adversarial)
- Contrastive Learning
- Auto-encoders
- Denoising & Diffusion
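To give a flavor of the core idea behind several of the topics above, here is a toy sketch (not taken from the lectures) of what makes learning "self-supervised": the labels come from the data itself, with no human annotation. The function names and the next-character pretext task below are illustrative choices; real foundation models use the same principle at vastly larger scale with neural networks.

```python
# Toy self-supervised learning: turn raw text into (context, next-character)
# training pairs -- the labels are generated from the data itself.
from collections import Counter, defaultdict

def make_pairs(text, context=3):
    """Slice raw text into (context, next_char) pairs; no human labels needed."""
    return [(text[i:i + context], text[i + context])
            for i in range(len(text) - context)]

def train(pairs):
    """Count how often each next character follows each context window."""
    model = defaultdict(Counter)
    for ctx, nxt in pairs:
        model[ctx][nxt] += 1
    return model

def predict(model, ctx):
    """Return the most frequent continuation seen for this context."""
    return model[ctx].most_common(1)[0][0]

corpus = "self-supervised learning builds labels from the data itself. " * 3
model = train(make_pairs(corpus))
print(predict(model, "sel"))  # -> f
```

Large language models such as ChatGPT rest on essentially this pretext task (predict the next token), which is why no labeled dataset is required to pretrain them.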
from Hacker News https://ift.tt/HfVQCmq