22.06.21

Do algorithms dream of simulated sheep?

Dreams are a way to stop the brain overfitting, according to a new theory.

Overfitting is a concept in machine learning where a model works well on its training dataset but fails to generalise to new, unseen data. Many approaches have been developed to reduce overfitting, including weight regularisation, dropout, noise addition and early stopping. These are all part of the standard ML toolkit.
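
If you haven't met those tools before, here's a minimal sketch of what they look like in practice. It uses PyTorch with made-up toy data and hyperparameters, purely as an illustration of dropout, weight regularisation (L2 penalty) and early stopping; it's not code from the paper discussed below.

```python
# Illustrative only: dropout, weight regularisation and early stopping in PyTorch.
import torch
import torch.nn as nn

# Dropout randomly zeroes activations during training, so the network
# can't lean too heavily on any single feature.
model = nn.Sequential(
    nn.Linear(20, 64),
    nn.ReLU(),
    nn.Dropout(p=0.5),
    nn.Linear(64, 1),
)

# weight_decay adds an L2 penalty on the weights (weight regularisation).
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3, weight_decay=1e-4)
loss_fn = nn.MSELoss()

# Toy data, standing in for a real training/validation split.
x_train, y_train = torch.randn(256, 20), torch.randn(256, 1)
x_val, y_val = torch.randn(64, 20), torch.randn(64, 1)

# Early stopping: halt once validation loss stops improving.
best_val, patience, bad_epochs = float("inf"), 5, 0
for epoch in range(200):
    model.train()
    optimizer.zero_grad()
    loss = loss_fn(model(x_train), y_train)
    loss.backward()
    optimizer.step()

    model.eval()
    with torch.no_grad():
        val_loss = loss_fn(model(x_val), y_val).item()
    if val_loss < best_val - 1e-4:
        best_val, bad_epochs = val_loss, 0
    else:
        bad_epochs += 1
    if bad_epochs >= patience:
        break  # stop before the model memorises the training set
```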

A new paper proposes that the purpose of dreaming in humans and other animals is to stop the brain from overfitting - this is known as the Overfitted Brain Hypothesis. Dreams effectively generate data outside of our usual training set to push the limits of our perceptual models. If you look at the same computer monitor every day, everything might start to look like a monitor. Dreams mix things up a bit.

🛎️ Why this matters: It's a nice theory, though I've no idea if it's true. I'll sleep on it.


