+**AI Alignment** is the field of study concerned with ensuring that [Artificial Intelligence](/wiki/artificial_intelligence) systems act in accordance with human intentions and [Human Values](/wiki/human_values). It addresses the challenge of building AI that remains robustly beneficial, preventing unintended or harmful outcomes as system capabilities advance.
+## See also
+- [AI Safety](/wiki/ai_safety)
+- [Machine Ethics](/wiki/machine_ethics)
+- [Superintelligence](/wiki/superintelligence)
... 1 more lines