+The **vanishing gradient** problem hinders the training of deep [Neural Networks](/wiki/neural_networks). During [Backpropagation](/wiki/backpropagation), the gradient is repeatedly multiplied by each layer's weight matrix and activation derivative; with saturating activations such as the sigmoid (whose derivative never exceeds 0.25), it shrinks roughly exponentially as it propagates backward, preventing early layers from effectively adjusting their [Weight](/wiki/weight) parameters.
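+The effect is easy to observe numerically. Below is a minimal NumPy sketch of a deep sigmoid network where a unit gradient is pushed backward through the layers; the depth, width, and initialization scale are illustrative assumptions, not values from this article.
+
+```python
+import numpy as np
+
+rng = np.random.default_rng(0)
+
+def sigmoid(x):
+    return 1.0 / (1.0 + np.exp(-x))
+
+# Illustrative sizes: a deep stack of small dense layers with
+# sigmoid activations and a typical 1/sqrt(width) initialization.
+depth, width = 20, 32
+weights = [rng.normal(0.0, 1.0 / np.sqrt(width), (width, width))
+           for _ in range(depth)]
+
+# Forward pass, caching each layer's output for the backward pass.
+x = rng.normal(0.0, 1.0, (width,))
+activations = [x]
+for W in weights:
+    x = sigmoid(W @ x)
+    activations.append(x)
+
+# Backward pass: start from a unit gradient at the output and
+# propagate it through each layer, recording its norm.
+grad = np.ones(width)
+norms = []
+for W, a in zip(reversed(weights), reversed(activations[1:])):
+    grad = W.T @ (grad * a * (1.0 - a))  # sigmoid'(z) = a * (1 - a)
+    norms.append(np.linalg.norm(grad))
+
+# Print norms from the earliest layer to the last hidden layer.
+for layer, n in enumerate(reversed(norms), start=1):
+    print(f"layer {layer:2d}: gradient norm = {n:.3e}")
+```
+
+Running this prints gradient norms that drop by many orders of magnitude between the output and the first layer, which is why the early layers learn very slowly under saturating activations.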
+## See also
+- [Exploding Gradient](/wiki/exploding_gradient)
+- [Recurrent Neural Network](/wiki/recurrent_neural_network)
+- [Long Short-Term Memory](/wiki/long_short-term_memory)
... 1 more lines