+**AI Safety** is the field dedicated to ensuring that advanced [Artificial Intelligence](/wiki/artificial_intelligence) systems benefit humanity rather than cause harm. It studies how to build AI that is robust, trustworthy, and aligned with human values, a challenge often termed the [alignment](/wiki/alignment) problem.
+## See also
+- [AI Ethics](/wiki/ai_ethics)
+- [Existential Risk](/wiki/existential_risk)
+- [Superintelligence](/wiki/superintelligence)
... 1 more lines