1. Bias Terms (in Architecture):

These are parameters added to the weighted sum in each neuron of a neural network layer.

  • Purpose: The bias allows the activation function to be shifted left or right, enabling the model to better fit the data.
  • Equation:
    For a neuron, the output is $y = f(w_1x_1 + w_2x_2 + \dots + w_nx_n + b)$, where $b$ is the bias term.
  • Why it matters: Without a bias term, a neuron’s pre-activation is zero whenever its inputs are all zero, so its decision boundary is forced through the origin, which limits the model’s flexibility (see the sketch below).
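
A minimal sketch of the effect, assuming NumPy and tanh as an illustrative activation: with no bias, a zero input is pinned to a zero output; the bias shifts the activation away from the origin.

```python
import numpy as np

def neuron(x, w, b=0.0):
    # A single neuron: activation applied to the weighted sum plus bias.
    return np.tanh(np.dot(w, x) + b)

x = np.array([0.0, 0.0])     # all-zero input
w = np.array([0.5, -0.3])

print(neuron(x, w))          # 0.0   -- no bias: zero input forces zero output
print(neuron(x, w, b=1.0))   # ~0.76 -- the bias shifts the activation
```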

2. Bias (in Learning / Decision-Making):

This refers to systematic errors in a model’s predictions, caused by incorrect assumptions in the learning algorithm or by the data it learns from.

  • Examples:
    • A neural network trained on biased data (e.g., skewed by race or gender) can learn and reproduce that bias.
    • An underfit model has high bias: it fails to capture the underlying trend (a toy demonstration follows this list).
  • Common Sources:
    • Training data that reflects human or societal biases.
    • Model architecture choices that oversimplify the problem.
    • Loss functions or optimization objectives that don’t account for fairness.
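
As a toy illustration of high bias from underfitting (NumPy assumed; the data and polynomial degrees are hypothetical), fitting a straight line to a quadratic trend leaves a large, systematic error that a better-matched model class does not:

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(-3, 3, 100)
y = x**2 + rng.normal(0, 0.5, size=x.shape)   # quadratic ground truth plus noise

# Degree-1 fit: the model class is too simple for the trend (high bias).
linear = np.polyval(np.polyfit(x, y, deg=1), x)
# Degree-2 fit: the model class matches the trend.
quadratic = np.polyval(np.polyfit(x, y, deg=2), x)

print("linear MSE:   ", np.mean((y - linear) ** 2))     # large: underfitting
print("quadratic MSE:", np.mean((y - quadratic) ** 2))  # close to the noise variance
```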

Ways to Address Learning Bias:

  • Use fairness-aware algorithms or debiased datasets.
  • Apply regularization and cross-validation to control for underfitting/overfitting.
  • Perform bias audits and monitor model outputs across subgroups (a minimal audit is sketched below).
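
As a sketch of that last point (hypothetical labels, predictions, and group memberships; NumPy assumed), an audit simply compares a metric across subgroups and flags large gaps:

```python
import numpy as np

def audit_by_group(y_true, y_pred, groups):
    # Report accuracy separately for each subgroup; large gaps flag potential bias.
    for g in np.unique(groups):
        mask = groups == g
        acc = np.mean(y_true[mask] == y_pred[mask])
        print(f"group {g}: accuracy {acc:.2f} (n={mask.sum()})")

# Hypothetical data for illustration only.
y_true = np.array([1, 0, 1, 1, 0, 1, 0, 0])
y_pred = np.array([1, 0, 1, 0, 0, 0, 0, 1])
groups = np.array(["A", "A", "A", "A", "B", "B", "B", "B"])
audit_by_group(y_true, y_pred, groups)   # group A: 0.75, group B: 0.50
```

A real audit would use richer fairness metrics (e.g., gaps in false-positive rates), but the per-group comparison is the core idea.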