Why AI Alignment Could Be Hard With Modern Deep Learning
AI Safety Fundamentals: Alignment - A podcast by BlueDot Impact
Why would we program AI that wants to harm us? Because we might not know how to do otherwise.

Source: https://www.cold-takes.com/why-ai-alignment-could-be-hard-with-modern-deep-learning/

Crossposted from the Cold Takes Audio podcast.

---

Learn more on the AI Safety Fundamentals website.