The end of the "best open LLM"

Modeling the compute-versus-performance tradeoff across many open LLMs.
