Until recently, models learnt using labelled data and humans learnt using language. This is starting to change.
It is becoming possible to describe tasks in natural language so that machines can learn from them. This enables much faster learning from far less data, closer to how humans operate. Models like GPT-2, GPT-3, and BERT have opened up the possibility of interacting with and training computers in much the same way we interact with other people.
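To make the idea concrete, "describing a task in language" for a model like GPT-3 often amounts to assembling a few-shot prompt as plain text: a task description plus a handful of worked examples, with no labelled training set or gradient updates. A minimal sketch (the task, examples, and helper function here are made up for illustration):

```python
def build_prompt(task_description, examples, query):
    """Assemble a few-shot prompt: task description, worked examples, then the query."""
    lines = [task_description, ""]
    for text, label in examples:
        lines.append(f"Text: {text}")
        lines.append(f"Sentiment: {label}")
        lines.append("")
    lines.append(f"Text: {query}")
    lines.append("Sentiment:")  # the model is asked to continue from here
    return "\n".join(lines)

prompt = build_prompt(
    "Classify the sentiment of each text as positive or negative.",
    [("I loved this film.", "positive"),
     ("The plot made no sense.", "negative")],
    "A charming, funny story.",
)
print(prompt)
```

The resulting string is sent to the model as-is; the trailing "Sentiment:" invites it to complete the label, which is how the task gets "taught" purely through language.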
This research is in a similar vein to the PET paper linked a couple of weeks ago, and it is a really exciting avenue of research. As the authors say:
"We envision a future where in order to solve a machine learning task, we no longer have to collect a large labeled dataset, but instead interact naturally and expressively with a model in the same way that humans have interacted with each other for millennia—through language."