We know that losing packets is not a good thing; retransmissions cause delays. We also know that TCP ensures reliable data delivery, masking the impact of packet loss. So why do some applications seem unaffected by a packet loss rate that cripples others? From a performance analysis perspective, how do you judge the relevance of packet loss and avoid chasing red herrings?
In Part II, we examined two closely related constraints – bandwidth and congestion. In Part III, we discussed TCP slow-start and introduced the congestion window (CWND). In Part IV, we’ll focus on packet loss, building on the concepts from those two previous entries.