The Alignment Problem
The alignment problem is the problem of specifying the objective function of a powerful AI system in a way that is consistent with human values, and of ensuring that the system will remain aligned with human values even as it becomes vastly more intelligent than its creators. The alignment problem is a key challenge in AI safety.
Sign up for our newsletter
Get the latest OpenAI news delivered to your inbox.