AI Safety Fundamentals: Alignment
A podcast by BlueDot Impact
83 Episodes
- Public by Default: How We Manage Information Visibility at Get on Board
  Published: 12/05/2024
- Writing, Briefly
  Published: 12/05/2024
- Being the (Pareto) Best in the World
  Published: 04/05/2024
- How to Succeed as an Early-Stage Researcher: The “Lean Startup” Approach
  Published: 23/04/2024
- Become a Person who Actually Does Things
  Published: 17/04/2024
- Planning a High-Impact Career: A Summary of Everything You Need to Know in 7 Points
  Published: 16/04/2024
- Working in AI Alignment
  Published: 14/04/2024
- Computing Power and the Governance of AI
  Published: 07/04/2024
- AI Control: Improving Safety Despite Intentional Subversion
  Published: 07/04/2024
- Emerging Processes for Frontier AI Safety
  Published: 07/04/2024
- AI Watermarking Won’t Curb Disinformation
  Published: 07/04/2024
- Challenges in Evaluating AI Systems
  Published: 07/04/2024
- Interpretability in the Wild: A Circuit for Indirect Object Identification in GPT-2 Small
  Published: 01/04/2024
- Towards Monosemanticity: Decomposing Language Models With Dictionary Learning
  Published: 31/03/2024
- Zoom In: An Introduction to Circuits
  Published: 31/03/2024
- Weak-To-Strong Generalization: Eliciting Strong Capabilities With Weak Supervision
  Published: 26/03/2024
- Can We Scale Human Feedback for Complex AI Tasks?
  Published: 26/03/2024
- Machine Learning for Humans: Supervised Learning
  Published: 13/05/2023
- Visualizing the Deep Learning Revolution
  Published: 13/05/2023
- Four Background Claims
  Published: 13/05/2023
Listen to resources from the AI Safety Fundamentals: Alignment course! https://aisafetyfundamentals.com/alignment