Definition

The number of adjustable weights and biases in a neural network model, such as an LLM. A higher parameter count generally indicates a more complex model capable of learning more intricate patterns, but such models also require more compute and memory to train and run.
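A minimal sketch of what "counting parameters" means, assuming a toy feedforward network with made-up layer widths: each layer contributes one weight per input/output connection plus one bias per output unit.

```python
# Hypothetical layer widths, chosen only for illustration:
# input -> hidden -> output.
layer_sizes = [768, 3072, 768]

total = 0
for n_in, n_out in zip(layer_sizes, layer_sizes[1:]):
    weights = n_in * n_out  # one weight per connection
    biases = n_out          # one bias per output unit
    total += weights + biases

print(f"{total:,} parameters")  # -> 4,722,432 parameters
```

Even this tiny two-layer network has ~4.7M parameters; LLM parameter counts in the tens or hundreds of billions come from stacking many much wider layers.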

Why it matters (in Poovi’s context)

A key driver of both LLM capability and hardware requirements, as seen when testing models of different sizes (e.g., 70B vs. 405B parameters).

Key properties or components

  • Model complexity
  • Resource requirements
  • Performance trade-offs
  • Training data scale
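The resource-requirements trade-off above can be made concrete with a back-of-the-envelope estimate. This sketch assumes 2 bytes per parameter (fp16/bf16 weights) and ignores activations, KV cache, and runtime overhead, so the real footprint is higher:

```python
# Assumption: half-precision (fp16/bf16) weights, 2 bytes per parameter.
BYTES_PER_PARAM = 2

for params in (70e9, 405e9):  # the 70B and 405B sizes mentioned above
    gib = params * BYTES_PER_PARAM / 1024**3
    print(f"{params / 1e9:.0f}B parameters -> ~{gib:.0f} GiB of weights")
```

Under these assumptions, 70B parameters need roughly 130 GiB just for weights and 405B roughly 754 GiB, which is why the larger model demands multi-GPU or high-memory hardware.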

Contradictions or debates

None.

Sources