RE: In language modeling, is lower perplexity better?

See question above

2 Answers
Yes, in language modeling, lower perplexity is better. Perplexity measures how well a model predicts a sample of text: formally, it is the exponentiated average negative log-likelihood of the text under the model, PPL = exp(-(1/N) * sum_i log p(x_i | x_1 ... x_{i-1})). Intuitively, it is the model's effective branching factor: a perplexity of k means the model is, on average, as uncertain as if it were choosing uniformly among k tokens at each step. Lower perplexity therefore means the model's predicted distribution is closer to the actual distribution of the text. A perfect model would have a perplexity of 1, meaning it assigns probability 1 to every actual token.
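
To make that formula concrete, here is a minimal Python sketch of the computation (the function name and the toy log-probabilities are my own for illustration, not from any particular library):

    import math

    def perplexity(token_log_probs):
        """Perplexity = exp of the average negative log-likelihood.

        token_log_probs: natural-log probabilities the model assigned
        to each actual next token in the evaluation text.
        """
        n = len(token_log_probs)
        avg_nll = -sum(token_log_probs) / n
        return math.exp(avg_nll)

    # A model that assigns probability 1.0 to every actual token has
    # log-prob 0 everywhere, hence the ideal perplexity of 1.
    print(perplexity([0.0, 0.0, 0.0]))        # 1.0

    # A model that assigns probability 0.25 to each actual token is as
    # uncertain as a uniform choice among 4 options: perplexity 4.
    print(perplexity([math.log(0.25)] * 3))   # 4.0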
Answered on August 2, 2023.
