Why is USB the only protocol that lies about its speed?
PCIe 1.0 x1: 250MB/s, 2.5GT/s (Gbd)
100BASE-TX: 100Mbps, 125Mbd
1000BASE-T: 1Gbps, 125Mbd x4
10GBASE-T: 10Gbps, 800Mbd x4
TB3: 40Gbps, 20.625Gbd x2
USB 3.0 Gen1: 4Gbps, 5Gbd
It's not 5Gbps if you can only send 4Gbps!
If you're wondering about the math for 1G and 10G Ethernet, they use a denser physical layer where each symbol (baud) carries more than one bit, with more than enough headroom for overhead and error correction on top.
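If you want to check that arithmetic yourself, here's a rough Python sketch. The bits-per-symbol figures are just the usual line codings folded into one number (the 10GBASE-T value is the effective rate after its coding overhead), so treat it as back-of-the-envelope, not a spec reference:

# Usable bit rate = symbol rate per lane x lanes x data bits per symbol.
# "Data bits per symbol" already folds in the line coding (8b/10b, 4B5B, ...).
links = {
    "PCIe 1.0 x1": (2.5e9,    1, 8 / 10),   # 8b/10b
    "100BASE-TX":  (125e6,    1, 4 / 5),    # 4B5B (MLT-3 on the wire)
    "1000BASE-T":  (125e6,    4, 2.0),      # PAM-5, ~2 data bits/symbol/pair
    "10GBASE-T":   (800e6,    4, 3.125),    # PAM-16 + LDPC, effective value
    "TB3":         (20.625e9, 2, 64 / 66),  # 64b/66b
    "USB 3 Gen 1": (5e9,      1, 8 / 10),   # 8b/10b
}

for name, (baud, lanes, bits_per_symbol) in links.items():
    gbps = baud * lanes * bits_per_symbol / 1e9
    print(f"{name:12} {gbps:6.3f} Gbps usable")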
Also USB 2.0 is super cursed. It's actually 480Mbps if you send enough "0" bits. If your data has too many consecutive "1" bits, it drops to as low as ~411Mbps. Variable speed depending on the data?!
Of course it's also very inefficient for other reasons so you never get anywhere close to those speeds anyway...
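Where the ~411 number comes from, roughly: high speed USB stuffs a 0 bit after every six consecutive 1 bits, so all-1s data expands every 6 bits into 7 on the wire. Quick sketch of the arithmetic (the helper is just illustrative, not a real USB stack):

# USB 2.0 high speed: 480 Mbit/s on the wire. A 0 bit is stuffed after
# every six consecutive 1 bits, so all-1s data expands 6 -> 7 bits.
WIRE_RATE = 480e6

def stuff(bits):
    """Insert a 0 after every run of six 1s (USB 2.0 bit stuffing)."""
    out, run = [], 0
    for b in bits:
        out.append(b)
        run = run + 1 if b == 1 else 0
        if run == 6:
            out.append(0)   # stuffed bit, carries no data
            run = 0
    return out

for name, data in [("all zeros", [0] * 6000), ("all ones", [1] * 6000)]:
    wire = stuff(data)
    rate = WIRE_RATE * len(data) / len(wire)
    print(f"{name}: {rate / 1e6:.0f} Mbit/s of payload")
# all zeros: 480 Mbit/s, all ones: ~411 Mbit/s (480 * 6/7)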
@lina the reason for that in particular is "bitstuffing" which is an error detection pattern, which is somewhat important for a hardware protocol that relies on a potentially long cable length and variably capable hardware on the other end. basically, it gets shot in the foot by its own complexity
@spinach It's not an error detection pattern, it's there to make clock recovery possible. Errors can sometimes cause bit stuffing violations, but the primary purpose of bit stuffing is not error detection. The protocol has CRCs for that.
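To make the clock recovery point concrete: high speed USB uses NRZI, where a 0 bit is a transition and a 1 bit is no transition, so a long run of 1s leaves the line flat and the receiver has nothing to lock its clock onto. The stuffed 0 guarantees an edge at least every seven bit times. Tiny illustrative sketch, not real USB code:

# NRZI: a 0 bit toggles the line level, a 1 bit leaves it unchanged.
# Without bit stuffing, a long run of 1s produces no transitions at all,
# so the receiver has nothing to recover its clock from.

def nrzi(bits, level=1):
    out = []
    for b in bits:
        if b == 0:
            level ^= 1      # 0 = transition
        out.append(level)   # 1 = no transition
    return out

def transitions(levels):
    return sum(a != b for a, b in zip(levels, levels[1:]))

raw = [1] * 14                                # fourteen 1 bits in a row
stuffed = [1]*6 + [0] + [1]*6 + [0] + [1]*2   # same data after bit stuffing

print(transitions(nrzi(raw)))      # 0 -> no edges, receiver clock drifts
print(transitions(nrzi(stuffed)))  # 2 -> an edge at least every 7 bit times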
@lina yeah that's entirely fair. again though, the protocol still gets tripped up by its own complexity