Write a short summary of language model prompting techniques:
One of the exciting outcomes of the recent wave of progress in NLP is the ability to steer models using a prompt. Rather than training a model on a large labelled dataset of film reviews, it is possible to prepend a prompt such as "Is the following movie review positive or negative?" to the input sequence, "This movie was amazing!".
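The idea above amounts to simple string concatenation before the model ever sees the text. A minimal sketch (the prompt and review strings come from the example above; any model call that would follow is omitted):

```python
def build_prompted_input(prompt: str, text: str) -> str:
    """Prepend a hand-written prompt to the raw input sequence."""
    return f"{prompt} {text}"

prompted = build_prompted_input(
    "Is the following movie review positive or negative?",
    "This movie was amazing!",
)
print(prompted)
# Is the following movie review positive or negative? This movie was amazing!
```

The model then classifies the review by completing or scoring this combined string, with no task-specific training.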
However, the choice of prompt can have a large impact on performance. A recent paper from Google AI proposes an approach they call 'prompt tuning': the prompt is represented as a sequence of continuous vectors, and the best-performing prompt is learned from the data. This still allows learning from a small dataset, but improves performance without the need to hand-write prompts.
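Mechanically, prompt tuning prepends a small matrix of learnable "soft prompt" vectors to the (frozen) input token embeddings; only those prompt vectors are updated during training. A hedged toy sketch, where the embedding table, dimensions, and token IDs are all invented stand-ins rather than a real model:

```python
import random

EMBED_DIM = 4    # toy embedding size (assumption, not from the paper)
PROMPT_LEN = 3   # number of learned prompt vectors (assumption)

random.seed(0)

# Frozen token-embedding table for a toy vocabulary of 10 tokens.
embedding_table = [
    [random.uniform(-1, 1) for _ in range(EMBED_DIM)] for _ in range(10)
]

# The only trainable parameters: PROMPT_LEN continuous prompt vectors,
# initialised to zero here for simplicity.
soft_prompt = [[0.0] * EMBED_DIM for _ in range(PROMPT_LEN)]

def embed_tokens(token_ids):
    """Look up frozen embeddings for the input tokens."""
    return [embedding_table[t] for t in token_ids]

def model_input(token_ids):
    """Prepend the learned soft prompt to the frozen input embeddings."""
    return soft_prompt + embed_tokens(token_ids)

seq = model_input([1, 2, 3])
print(len(seq))  # PROMPT_LEN + number of input tokens = 6
```

During training, gradients flow only into `soft_prompt`, so a single frozen model can serve many tasks, each with its own small set of prompt vectors.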