When fitting a linear model to data, we typically choose the weights that minimize the sum of squared errors between the model's predictions and the observed data. For a model with many features, overfitting becomes a problem: the fitted model has low training error but high test error. Overfitting can be controlled by adding a penalty term to the loss, for example the L1 or L2 norm of the weight vector. We derive each of these loss functions from a Bayesian perspective, by placing a particular prior on the model parameters.
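To make the overfitting problem and its L2 remedy concrete, here is a minimal numpy sketch (the data, polynomial degree, and penalty strength are illustrative choices, not from the text): an ordinary least-squares fit with many polynomial features versus a ridge fit that adds an L2 penalty. The closed-form solution `(XᵀX + λI)⁻¹Xᵀy` reduces to ordinary least squares when λ = 0.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: few noisy samples of a sine curve, evaluated on a clean test set.
x_train = rng.uniform(0, 1, 15)
y_train = np.sin(2 * np.pi * x_train) + rng.normal(0, 0.2, 15)
x_test = rng.uniform(0, 1, 200)
y_test = np.sin(2 * np.pi * x_test)

def design(x, degree=9):
    # Polynomial feature expansion: many features relative to data points.
    return np.vander(x, degree + 1)

def fit(X, y, lam=0.0):
    # Minimizes ||y - Xw||^2 + lam * ||w||^2 in closed form.
    # lam = 0 recovers ordinary least squares.
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)

def mse(w, X, y):
    return np.mean((X @ w - y) ** 2)

X_tr, X_te = design(x_train), design(x_test)
w_ols = fit(X_tr, y_train)            # unpenalized: fits the noise
w_ridge = fit(X_tr, y_train, lam=1e-3)  # L2 penalty shrinks the weights

print("OLS   train/test MSE:", mse(w_ols, X_tr, y_train), mse(w_ols, X_te, y_test))
print("ridge train/test MSE:", mse(w_ridge, X_tr, y_train), mse(w_ridge, X_te, y_test))
```

The unpenalized fit achieves the lower training error by construction, while the penalty pulls the weights toward zero, typically at the cost of a small increase in training error in exchange for better test error.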