What is Jitter and should we care about it?
This blog is Part 2 of a 3-part series and concentrates on jitter (variable latency). Part 1 dealt with distance, latencies and orbits; Part 3 will discuss errors, loss, the effect of atmospheric conditions and the choice of wavebands.
First, it’s worth noting that while the term jitter is used by network specialists and certain application performance engineers, it isn’t really a network term at all – it’s a communication engineer’s term – and essentially refers to the difference between when a timing signal should have been received and when it was actually received.
So, in an ideal world, if you transmit a signal (down a wire or as a wave) 1 million times a second (1 MHz) at even spacing, then that’s what you expect to receive: 1 pulse at exactly every microsecond (millionth of a second). That’s not to say the pulses can’t all be delayed (perhaps due to distance), but the expectation is that they are all delayed by the same amount, so the jitter is 0. Unfortunately, real life is not like that and signals can arrive relatively too early, or too late. This is jitter.
Packet Delay Variation (PDV)
In networks and application engineering, data is typically grouped together and transmitted as packets. So the correct term for packet-type jitter is Packet Delay Variation (PDV), but we’ll use the term jitter to mean the same thing, i.e. PDV.
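To make that definition concrete, one common way to quantify jitter is the running interarrival jitter estimate used by RTP receivers (RFC 3550): compare how the spacing between arrivals differs from the spacing between transmissions and keep a smoothed average. The blog doesn’t prescribe any particular measurement, so treat the Python sketch below, with its made-up timestamps, purely as an illustration.

```python
def interarrival_jitter(send_times, recv_times):
    """Running jitter (PDV) estimate in the style of RFC 3550: compare how the
    gap between arrivals differs from the gap between transmissions."""
    jitter = 0.0
    for i in range(1, len(send_times)):
        # Change in transit time between consecutive packets
        d = (recv_times[i] - recv_times[i - 1]) - (send_times[i] - send_times[i - 1])
        jitter += (abs(d) - jitter) / 16.0   # smoothed running average
    return jitter

# Hypothetical example: packets sent every 10 ms, arriving with variable delay
send = [0.000, 0.010, 0.020, 0.030, 0.040]
recv = [0.250, 0.262, 0.269, 0.281, 0.290]   # ~250 ms base latency plus PDV
print(f"jitter estimate: {interarrival_jitter(send, recv) * 1000:.2f} ms")
```

Note that a constant 250 ms delay on every packet contributes nothing to the estimate; only the variation does, which is exactly the point made above.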
What is affected by Jitter?
Let’s begin with what isn’t really affected by jitter – connection-oriented applications
Now, jitter doesn’t matter to most “standard” applications i.e. applications based on the TCP part of the IP protocol family. These include most “transactional” and “file transferring” applications like:
● Web – http and https
● Network file systems – CIFS (NetBIOS over TCP), NFS
● File transfer – ftp, sftp
● Custom TCP communications – various messengers, apps
● Video/Audio Services – Netflix, Internet radio (* we’ll come back to this one)
The reason for this is that they are not especially time-sensitive and are often themselves waiting for acknowledgement of successful packet delivery before they can transmit more data, i.e. they are inherently jittery themselves.
Streaming Applications/Apps
The jitter problem typically starts when you try to stream live audio or video, a telephone call, live telemetry or timing protocols over a network. As mentioned, the network doesn’t send a bit stream; rather, it generally sends a packet stream (the packet being the basic unit of transmission in most modern data networks) with a regular time gap between packets.
So, you try to send packets containing audio samples (for example), say 1,000 times a second, so that the playback system can play them back as sound, but they don’t arrive with that spacing due to jitter (PDV), and so the played sound is all over the place.
Wait, you say, that doesn’t actually happen in real life. No indeed, because two primary solutions to this have been adopted:
1. The application is not actually real-time!
A realization that the stream does not actually need to be real time, e.g. internet radio, Netflix (see, I said we’d be back here!). You can buffer (receive in advance) a large chunk of data (20 seconds, for example) and play the samples back evenly spaced; as discussed in the Streaming Audio Example below, each packet will likely even contain more than one sample.
You can do this because you know the encoding/decoding system (codec) and its data rate. You also know that it doesn’t matter if one consumer hears/sees the station/program a little later than another, in general.
2. It is real-time, but you can delay playback a bit
Our example here would be a telephone call. We clearly can’t delay the speech for many seconds as it interferes with our brain’s speech processing, as many of us have experienced when things go wrong on long distance telephone calls. But we can hold the packet playback back slightly.
The technique uses a jitter buffer (which should perhaps more properly be called an anti-jitter buffer!). Packets are stored in the jitter buffer (in the correct order) and then played back at an even rate, thus smoothing out the audio. The problem here is that this buffer cannot be too large, because “we” notice: humans will usually notice round-trip voice delays of over 250 ms. The ITU (International Telecommunication Union) recommends a maximum of 150 ms one-way latency (300 ms round trip). Remember (from Part 1) that a satellite phone call via a GEO satellite will easily exceed 300 ms even with no jitter present, though LEOs (like Iridium) and MEOs (like O3b) don’t suffer this basic very high latency because they are much closer to us.
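To show the idea (rather than a production VoIP implementation), here is a deliberately simplified jitter-buffer sketch in Python. The 20 ms packet interval, the 60 ms buffer depth and the arrival times are assumptions made up for the example: packets are re-ordered, held back briefly, then released on a steady clock, and anything that arrives after its playout slot would cause an audible glitch.

```python
import heapq

PACKET_INTERVAL = 0.020   # each packet carries 20 ms of audio (typical for VoIP codecs)
BUFFER_DEPTH    = 0.060   # hold the stream back ~60 ms before playout (illustrative)

def playout_schedule(arrivals):
    """arrivals: list of (arrival_time, sequence_number) pairs, in arrival order.
    Returns (sequence_number, playout_time) pairs on a steady clock;
    None means the packet arrived too late and would cause a glitch."""
    buffered = []
    for arrival_time, seq in arrivals:
        heapq.heappush(buffered, (seq, arrival_time))   # re-order by sequence number
    schedule = []
    playout_time = arrivals[0][0] + BUFFER_DEPTH        # first playout is delayed
    while buffered:
        seq, arrival_time = heapq.heappop(buffered)
        schedule.append((seq, None if arrival_time > playout_time
                              else round(playout_time, 3)))
        playout_time += PACKET_INTERVAL                 # steady, even playout clock
    return schedule

# Packets sent every 20 ms but arriving unevenly (times in seconds)
arrivals = [(0.100, 1), (0.128, 2), (0.135, 3), (0.230, 4), (0.232, 5)]
print(playout_schedule(arrivals))
# [(1, 0.16), (2, 0.18), (3, 0.2), (4, None), (5, 0.24)]
```

Choosing the buffer depth is exactly the trade-off described above: a deeper buffer absorbs more jitter but adds to the end-to-end delay that humans notice.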
Streaming Audio Example
As an example, let’s design a streaming protocol for uncompressed CD audio data over a network. Now, CD audio has a stereo bit rate of 1.4112 Mbps. This is made up of 44,100 samples per second, with each sample using 16 bits per channel (so for stereo, 2 channels, that’s 32 bits per sample). I mentioned that data networks use packets, so we could put every 32-bit (4-byte) sample into its own packet and then send them evenly spaced to get 44.1K samples per second.
The problem is that packets have a minimum size and lots of overhead (addressing, checksums, minimum packet size etc.). It would be like sending 4 passengers in a 40-seat coach – lots of overhead and waste. For Ethernet frames carrying IPv4 and UDP packets, the overhead would be 14 + 20 + 8 + 4 = 46 bytes, just to send 4 bytes of data! And that completely ignores the fact that “Layer 2” frames like Ethernet have a minimum size – 64 bytes in the case of Ethernet. So every 4 bytes of valuable data would travel in a 64-byte frame, 15 times as much header and padding as payload, for a bit rate of roughly 22.6 Mbps. Outrageously high for most networks and an impossible waste for a satellite network!
So instead we’ll decide to pack lots of audio samples into one larger packet, because the overhead will always be 46 bytes. A typical large packet is 1500 bytes of layer 2 payload; after the 28 bytes of IPv4 and UDP headers it can carry (1500 - 28)/4 = 368 audio samples, so our packet rate would need to be 44,100/368 ≈ 120 packets per second – a pretty modest rate of about 1 packet every 8 milliseconds. However, our audio playback system now has to cope with that: store (buffer) incoming packets, decode them and play back the individual samples at a steady rate. [In practice the codec we just “designed” above won’t be used, as one involving compression, like MP3, will use far less bandwidth.]
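If you want to check that arithmetic, a few lines of Python reproduce it. The header sizes are the standard Ethernet, IPv4 and UDP values quoted above; counting the Ethernet preamble or any optional headers would nudge the figures slightly.

```python
SAMPLE_RATE      = 44_100    # CD audio samples per second
BYTES_PER_SAMPLE = 4         # 16 bits x 2 channels = 32 bits
ETH_HDR, ETH_FCS, ETH_MIN_FRAME = 14, 4, 64
IPV4_HDR, UDP_HDR = 20, 8

# One sample per packet: the frame gets padded up to Ethernet's 64-byte minimum
naive_frame = max(ETH_HDR + IPV4_HDR + UDP_HDR + BYTES_PER_SAMPLE + ETH_FCS,
                  ETH_MIN_FRAME)
print(f"one sample per packet: {SAMPLE_RATE * naive_frame * 8 / 1e6:.1f} Mbps")  # ~22.6

# Fill a standard 1500-byte IP packet with samples instead
samples_per_packet = (1500 - IPV4_HDR - UDP_HDR) // BYTES_PER_SAMPLE             # 368
packets_per_second = SAMPLE_RATE / samples_per_packet                            # ~120
print(f"{samples_per_packet} samples/packet, "
      f"{packets_per_second:.1f} packets/s, "
      f"one every {1000 / packets_per_second:.1f} ms")                           # ~8.3 ms
```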
See below for how an original audio wave is sampled at various quality levels:
Applications/Protocols that can’t tolerate much jitter
It all comes down to how real-time your application needs to be. Perhaps it’s worth thinking about what “real-time” means. Techtarget.com has the following definition:
“Real-time is a level of computer responsiveness that a user senses as sufficiently immediate or that enables the computer to keep up with some external process (for example, to present visualizations of the weather as it constantly changes). Real-time is an adjective pertaining to computers or processes that operate in real time. Real time describes a human rather than a machine sense of time.”
I like that definition because they explain that real-time responsiveness is all relative to your need. If you’re controlling a space rocket you’ll likely need to respond far faster than if you’re having a conversation with a human. It’s all relative.
So, if you’re remote-controlling an aircraft, your “real time” might need to be pretty low latency, and therefore large jitter buffers to smooth out a jittery flow of packets coming from a control “joystick” won’t work.
These applications cannot, therefore, use TCP over IP: the acknowledgement process alone could slow things down too much and itself cause jitter. Instead, they tend to use an underlying datagram protocol like UDP, which is not acknowledged. Because these packets are not acknowledged, if their loss matters then some form of redundancy is required, e.g. a simple redundant design might send each packet twice, or send a middle packet containing data from both the preceding and the following packets. These schemes cope with a high packet error/loss rate but use a lot of extra bandwidth, so in practice schemes better suited to the likely error/loss rates are employed.
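As a toy illustration of the simplest “send it twice” form of redundancy mentioned above, here is a hypothetical UDP sender and receiver in Python. The port number, the 4-byte sequence header and the duplication factor are invented for the example; real systems use forward error correction schemes tuned to the expected loss rate.

```python
import socket
import struct

DEST = ("127.0.0.1", 50007)   # hypothetical receiver address and port

def send_with_redundancy(payloads, copies=2):
    """Send each datagram 'copies' times, prefixed with a 4-byte sequence number,
    so the receiver can discard duplicates and survive the loss of one copy."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    for seq, payload in enumerate(payloads):
        datagram = struct.pack("!I", seq) + payload
        for _ in range(copies):
            sock.sendto(datagram, DEST)
    sock.close()

def receive(expected_count, port=50007):
    """Collect datagrams, ignoring duplicate sequence numbers, until we have
    'expected_count' unique packets; no acknowledgements are ever sent."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(("", port))
    seen = {}
    while len(seen) < expected_count:
        data, _addr = sock.recvfrom(2048)
        seq = struct.unpack("!I", data[:4])[0]
        seen.setdefault(seq, data[4:])   # first copy wins, later copies are ignored
    sock.close()
    return [seen[s] for s in sorted(seen)]

# Usage: run receive(3) in one process, then in another:
# send_with_redundancy([b"roll+1", b"pitch-2", b"yaw+0"])
```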
What creates Jitter?
So, now that we’ve looked at whether we should care about jitter, it’s worth understanding why it occurs.
Networks are pretty good at creating jitter even without any help at all from satellite equipment. As packets pass through routers (the junctions, roundabouts and rotaries of a data network) they may need to wait behind packets from other streams, so they are queued and therefore delayed. The next packet may not be similarly delayed, or may be delayed even more – and all of a sudden you have jitter.
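You can get a feel for this with a very small simulation: packets leave the sender on a perfectly even clock, but each one picks up a different amount of queueing delay on the way. The 250 ms base latency and the 0 to 15 ms of random queueing delay below are invented purely for illustration.

```python
import random

random.seed(1)   # repeatable example

SEND_INTERVAL = 0.020   # packets sent every 20 ms, perfectly evenly spaced
BASE_LATENCY  = 0.250   # fixed propagation delay, e.g. a GEO hop (seconds)

send_times = [i * SEND_INTERVAL for i in range(10)]
# Each packet waits a different (random) amount of time in router queues
recv_times = [t + BASE_LATENCY + random.uniform(0.0, 0.015) for t in send_times]

gaps = [recv_times[i] - recv_times[i - 1] for i in range(1, len(recv_times))]
print("arrival gaps (ms):", [round(g * 1000, 1) for g in gaps])
# Departures 20 ms apart can arrive anywhere from ~5 ms to ~35 ms apart: that's jitter
```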
Satellite networks also throw a few other things into the mix, but which of these you get depends on the satcoms design, and they are evolving. Examples include:
● Jitter introduced by the Terminal/Modem – Depending on the transmission model you may need to wait until it is your turn (slot) to transmit data up to the satellite, if other users in the same area are sending.
● In older designs where satellites had large footprints, again you would have to wait for your turn (slot) before data was transmitted to you
● In systems like Iridium, a LEO satellite constellation, jitter will be highly variable (though base latency is relatively low) because the satellites are in constant motion. Packets go up to the nearest convenient satellite (in range) and then pass through the constellation’s mesh before landing at the receiving terminal.
To an extent, GEO transmission issues are mitigated by spot beams in modern GEO satcoms systems: rather than transmitting the same signal across one whole large area, the satellite divides it into many focused spots, each of which can carry separate transmissions and so is not waiting on all the others.
This jitter tends to be most prevalent in GEO satellite designs that were originally conceived more for broadcast communications (often TV) than for unicast (point to point); now that TV users want to watch “on demand”, these are gradually being replaced by more modern satellites.
So again, should we care about Jitter?
Summarising:
● To TCP-based applications – http, https, CIFS (NetBIOS), ftp, buffered video, buffered audio etc. – mean (average) latency (as well as bandwidth and loss, discussed further in Part 3) is the dominant characteristic; jitter simply feeds into the mean latency and, as a separate effect, can be ignored
● To UDP-based applications which are real time, it really does matter:
-Control systems will not work properly
-Humans have trouble with delays in live video, voice calls and video conferencing
-Telemetry may be out of date
In other words, the effect depends on the application.
How can you test your applications with Satellite Jitter (and Latency, Errors, Bandwidth Limitation)?
[If you read Part 1 then you can skip to “The End” – the arguments are similar and you can “also” simulate jitter. If you didn’t, please read on…] You need to test!
That may not be as formal as it sounds: we could say you need to try the application in the satellite network.
There are issues with testing, or trying things out, on actual (real) satellite networks though:
● Satellite time is expensive and the equipment is not at all easy to deploy
● It will be just about impossible to mimic your or your customers’ real locations
● If you find an issue which needs attention, getting it to the developers for a solution will be difficult (and if the developers say they’ve sorted it out it is likely to be very difficult to retest)
● You won’t be able to try out other satellite environments e.g. MEO or LEO without purchasing them
● You won’t be able to have a rainstorm appear just when you need it during your testing
Using Satellite Network Emulators
Because of the issues of “real network testing” in Satellite networks we’ve brought Satellite Network Emulation capabilities to our NE-ONE Professional and NE-ONE Enterprise Network Emulators.
People think of anything with the name “emulator” in it as some sort of complex mathematical device which predicts behaviours. They may be complex, but only internally: externally we make them very straightforward. And they don’t predict behaviour; you get to actually try out (“test”) your application using your real clients and servers, just as though they were in the satellite network.
All you need to do is plug them in between a client device and the servers and set them for the satellite situation you want. You can even try out other options like LEO or MEO within seconds.
Plugging them in is easy because they have Ethernet ports, so you don’t need any satellite equipment at all.
Want to know more? Click here.
“The End”
Part 3 concentrates on Errors, Loss, the effect of atmospheric conditions and choice of wavebands. It will follow soon.
If you missed Part 1, it’s already posted and looks at Latency