Most "speed tests" on the Internet are actually testing capacity. They flood the connection with as much data as will fit and report the peak number. That tells you the size of the pipe, not how fast your data actually travels through it.
This test measures throughput: the sustained rate at which data is actually delivered from one point to another over time. For a single session, throughput is governed by latency (round-trip time) at least as much as by line rate. It's the metric that defines what users actually experience.
The test is conducted in accordance with RFC 6349, the IETF framework for TCP throughput measurement, to produce results that reflect real-world application behavior.
Most speed tests aggregate multiple parallel connections and report the combined total. This inflates the result. It's like saying ten cars stuck in traffic at 1 mph is the same as one car cruising at 10 mph.
This test operates on a per-user (single session) basis, which is how the vast majority of real applications work. The result you see is what a single application session can actually achieve on your connection.
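The difference between an aggregated result and a per-session result can be sketched with simple arithmetic. The figures below are illustrative, not measurements: assume ten parallel connections that each sustain 8 Mbps at a given RTT.

```python
# Illustrative comparison: aggregated "speed test" number vs. what one
# application session actually gets. All figures are assumed examples.

per_session_mbps = 8.0    # assumed single-session throughput at this RTT
parallel_sessions = 10    # typical speed tests open many connections

aggregate_mbps = per_session_mbps * parallel_sessions

print(f"Reported by a parallel speed test:    {aggregate_mbps:.0f} Mbps")
print(f"Seen by a single application session: {per_session_mbps:.0f} Mbps")
```

The aggregate number is ten times larger, yet no individual application session moves data any faster than 8 Mbps.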
ISPs advertise "up to" speeds because the advertised rate is the theoretical maximum capacity, not guaranteed throughput. Actual throughput is constrained by latency, not just line rate.
When your computer requests data, each packet makes a round trip to the server and back. The time this takes (the round-trip time, or RTT) directly limits how fast data can flow.
For example, with a standard 64 KB TCP window and an RTT of 65 milliseconds, the maximum throughput for a single TCP session is window ÷ RTT: 65,536 bytes × 8 ÷ 0.065 s ≈ 8 Mbps. This doesn't change even if you pay for a 1 Gbps connection. The pipe may be wide, but data can only complete so many round trips per second over a given distance.
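The window-divided-by-RTT ceiling is easy to compute. Here is a minimal sketch, assuming the classic 64 KB TCP receive window (no window scaling) that underlies the ~8 Mbps figure above:

```python
# Upper bound on a single TCP session's throughput: window size / round-trip time.
# Assumes a fixed 64 KB window with no window scaling (an assumption, not a
# property of any particular connection).

def tcp_throughput_mbps(window_bytes: int, rtt_ms: float) -> float:
    """Maximum single-session throughput in Mbps for a given window and RTT."""
    bits_per_round_trip = window_bytes * 8
    rtt_seconds = rtt_ms / 1000
    return bits_per_round_trip / rtt_seconds / 1_000_000

# 64 KB window at 65 ms RTT: roughly 8 Mbps, regardless of line rate.
print(f"{tcp_throughput_mbps(65_536, 65):.2f} Mbps")
```

Doubling the RTT halves the ceiling; halving the RTT doubles it. This is why physical distance to the server matters more than the advertised capacity of the link.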