RE: In language modeling perplexity, is lower better?
Yes, lower perplexity is better. Perplexity measures how well a language model predicts a held-out sample: it is the exponentiated average negative log-likelihood the model assigns to the actual tokens, so you can read it as the effective number of equally likely choices the model is hesitating between at each step. Lower perplexity means the model assigns higher probability to the text that actually occurs, i.e., its predicted distribution is closer to the true one. The theoretical minimum is 1, which a model would reach only by predicting every token with probability 1. So in short: the lower the perplexity, the better the language model.
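To make the definition concrete, here is a minimal sketch of the computation in Python. The per-token probabilities in `probs` are made-up illustrative values standing in for whatever probabilities your model assigned to the actual next tokens:

    import math

    # Hypothetical probabilities a model assigned to the actual next
    # tokens in a 4-token sample (illustrative values, not from a real model).
    probs = [0.25, 0.10, 0.50, 0.05]

    # Perplexity = exp of the average negative log-likelihood per token.
    avg_nll = -sum(math.log(p) for p in probs) / len(probs)
    perplexity = math.exp(avg_nll)

    print(perplexity)  # ~6.32 here; a model that predicted every token
                       # with probability 1 would score exactly 1.0

In practice you rarely compute this by hand: the average negative log-likelihood is exactly the cross-entropy loss most frameworks already report, so perplexity is just exp(loss) on your evaluation set.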