What Causes Satellite Loss and Error?


This blog is Part 3 of a three-part series and concentrates on Errors, Loss, the effect of atmospheric conditions and the choice of wavebands. Part 1 dealt with Distance, Latencies and Orbits, and Part 2 looked at Jitter.

Atmospheric Conditions and SATCOMS Data Transmission

 

There’s no doubt that in an ideal world the transmission from Ground to Satellite, and vice versa, would be error free, and if that were the case there would be nothing to say here. The bottom line is that these transmissions are not error free, and the problems arise primarily from atmospheric conditions. So what are these conditions? Well, it all starts with the Sun and works its way down from there. We may have:

 

  • Space Weather – Solar Flares and other geomagnetic effects
  • Ionospheric Scintillation – irregularities in the Earth’s ionosphere which affect the amplitude and phase of radio signals
  • Cloud – water droplets absorb and scatter radio signals
  • Rain – raindrops themselves absorb and scatter radio signals
  • Dust and Sand Storms

That’s not an exhaustive list, but you get the idea.

So any of these factors can introduce errors into the transmission stream, and the more of them you have at once, the harder they are to deal with.

 

Increasing Bandwidth vs Transmission Quality

Let’s talk about the physical transmission layer (OSI Layer 1) in SATCOMS, and note that data at this level may be measured in Bits or Symbols.

 

What’s a Symbol? SATCOMS looks at transmission at the lowest level in Hz (Hertz, cycles per second). We could send data one bit per cycle in the standard binary fashion, as happens in wired and optical circuits, or, if conditions allow, we can define several different signal “levels” per cycle, say 4, 16, 64, 256 or more, so that each cycle carries 2, 4, 6, 8 or more bits. Each of these multi-bit units is called a Symbol.

 

Technically, the way this is done is to modulate the signal. In a popular form of modulation, 64-QAM (64-level Quadrature Amplitude Modulation), both amplitude and phase are modulated, giving 64 distinct symbol states and therefore 6 bits per symbol.
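
A quick back-of-the-envelope sketch of the relationship (the 10 Msym/s symbol rate below is an arbitrary example figure, not a value for any real service):

```python
import math

# Bits per symbol is log2 of the number of modulation levels, and the raw
# (pre-FEC) bit rate is simply the symbol rate multiplied by bits per symbol.
def raw_bit_rate(symbol_rate_hz: float, modulation_levels: int) -> float:
    bits_per_symbol = math.log2(modulation_levels)
    return symbol_rate_hz * bits_per_symbol

# Example figures only: a 10 Msym/s carrier at increasing modulation orders
for levels in (4, 16, 64, 256):                     # QPSK, 16-QAM, 64-QAM, 256-QAM
    mbps = raw_bit_rate(10e6, levels) / 1e6
    print(f"{levels:>3}-level modulation -> {mbps:.0f} Mbps raw")
```

Squeezing more bits into each symbol is how extra capacity is won from the same spectrum, which is exactly why clean signal conditions matter so much.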

 

The problem is that the higher the modulation order, the better the transmission quality needs to be, and all of the atmospheric factors mentioned above can undermine it by disturbing amplitude and phase (and more besides). One solution is to use Forward Error Correction (FEC), but this decreases net throughput; see below for more on Forward Error Correction.

 

A Quick Look at how Data is Transmitted in Packets

 

As humans we tend to think of transmitting bytes of data. We have “data plans” for so many Gigabytes per month, but data is not normally transferred between systems in individual bytes. Instead it is transferred in packets (blocks of bytes), also known as Frames. Each packet consists of a:

  • Header – information on how to deliver the packet, e.g. the destination address (and more)
  • Data – the data we are actually sending

Now the Data may itself contain a sort of sub-packet, i.e. have its own Header and Data, and if you think that’s uncommon, it absolutely isn’t: in most businesses and homes, IP packets are carried inside Ethernet frames.
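
A minimal sketch of that nesting (the field names and addresses are simplified for illustration; real Ethernet and IP headers carry many more fields than this):

```python
from dataclasses import dataclass

@dataclass
class IPPacket:                    # the "inner" packet (Layer 3)
    src_ip: str
    dst_ip: str
    payload: bytes                 # the data we actually want to send

@dataclass
class EthernetFrame:               # the "outer" packet (Layer 2)
    src_mac: str
    dst_mac: str
    payload: IPPacket              # a whole IP packet rides inside the frame

frame = EthernetFrame(
    src_mac="aa:bb:cc:00:00:01",
    dst_mac="aa:bb:cc:00:00:02",
    payload=IPPacket(src_ip="192.168.1.10", dst_ip="10.0.0.5",
                     payload=b"GET / HTTP/1.1\r\n..."),
)
print(frame.payload.dst_ip)        # each layer reads only its own header
```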

 

Why is data transmitted this way? Because a typical network operates like the Post Office, handling traffic on behalf of many customers. A packet is to the network what a letter is to the post office: it carries address information, including sender information, so that packets can be delivered to a variety of destinations and the recipient knows where they came from.

 

If we sent one byte at a time, each byte would still need a header, so the header information would vastly exceed the actual data we were transmitting. What a waste of bandwidth that would be!
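
To put rough numbers on that (the header sizes below are typical figures for Ethernet + IPv4 + TCP, used purely for illustration):

```python
# Illustrative header sizes in bytes: Ethernet 18 (inc. CRC), IPv4 20, TCP 20
HEADERS = 18 + 20 + 20

def efficiency(payload_bytes: int) -> float:
    """Fraction of the bytes on the wire that is actual user data."""
    return payload_bytes / (payload_bytes + HEADERS)

print(f"1-byte payload:    {efficiency(1):.1%} useful data")      # roughly 1.7%
print(f"1460-byte payload: {efficiency(1460):.1%} useful data")   # roughly 96%
```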

 

The OSI network layer model (table below, courtesy of Wikipedia) lays out how these packets, and packets-in-packets, are carried, starting at the physical layer.

| | OSI Layer | Protocol Data Unit | Function |
|---|---|---|---|
| Host Layers | 7 – Application | Data | High-level APIs, including resource sharing and remote file access |
| Host Layers | 6 – Presentation | Data | Translation of data between a networking service and an application, including character encoding, data compression and encryption/decryption |
| Host Layers | 5 – Session | Data | Managing communication sessions, i.e. continuous exchange of information in the form of multiple back-and-forth transmissions between two nodes |
| Host Layers | 4 – Transport | Segment, Datagram | Reliable transmission of data segments between points on a network, including segmentation, acknowledgement and multiplexing |
| Media Layers | 3 – Network | Packet | Structuring and managing a multi-node network, including addressing, routing and traffic control |
| Media Layers | 2 – Data Link | Frame | Reliable transmission of data frames between two nodes connected by a physical layer |
| Media Layers | 1 – Physical | Bit, Symbol | Transmission and reception of raw bit streams over a physical medium |

So in our SATCOMS example the lowest layers (closest to the physical) are:

 

  1. Satellite Physical (SATPHY),  the radio (wireless) transmission as a bit or symbol stream
  2. Satellite Medium Access Control (SMAC) & Satellite Link Control (SLC)
  3. IPv4 or IPv6

From Layer 3 up the packets are the same as the ones our computers, devices, phones etc generate.

 

 

 

 

 


Forward Error Correction (FEC)

A standard network approach to dealing with errors is to send data, wait for an acknowledgement (ACK) from the receiver, and resend the data if no acknowledgement arrives. At least, that’s the approach at its simplest. This kind of scheme is used by the TCP part of the IP networking stack, for example.

The problem with this method is that if you have large round trip latencies of, say, 700ms (GEO orbit) then it will take over 700ms to get a retransmission of the data. This would seriously hamper transmission rates.

Enter Forward Error Correction (FEC): if we send some redundant data alongside the real data that allows the receiver to correct one or more errored bits, then we don’t need to retransmit the data at all, saving at least 700ms in the example above, at the expense of sending more data than strictly required.
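
To make that concrete, here is a minimal sketch of one of the simplest FEC schemes, a Hamming(7,4) code (real SATCOM links use far more powerful codes such as LDPC, but the principle is the same): three parity bits are sent with every four data bits, and the receiver can locate and repair any single flipped bit on its own.

```python
# Minimal Hamming(7,4) sketch: 4 data bits + 3 parity bits per codeword,
# allowing the receiver to correct any single errored bit without a resend.

def hamming74_encode(d):          # d = [d1, d2, d3, d4]
    p1 = d[0] ^ d[1] ^ d[3]
    p2 = d[0] ^ d[2] ^ d[3]
    p3 = d[1] ^ d[2] ^ d[3]
    # codeword positions 1..7: p1 p2 d1 p3 d2 d3 d4
    return [p1, p2, d[0], p3, d[1], d[2], d[3]]

def hamming74_decode(c):          # c = 7-bit codeword, possibly with 1 bit flipped
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]
    error_pos = s1 + 2 * s2 + 4 * s3          # 0 means no error detected
    if error_pos:
        c[error_pos - 1] ^= 1                 # correct the single errored bit
    return [c[2], c[4], c[5], c[6]]           # recover the original 4 data bits

# Simulate an atmospheric "hit" flipping one bit in transit
codeword = hamming74_encode([1, 0, 1, 1])
codeword[5] ^= 1                              # channel flips a bit
print(hamming74_decode(codeword))             # -> [1, 0, 1, 1], no retransmission
```

Note the cost: seven bits cross the link for every four bits of real data, which is exactly the throughput penalty mentioned above.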

 

What happens when atmospheric conditions disturb transmission

  1. Unlike standard Ethernet at Layer 2, which has no error correction at all, many SATCOM circuits support Forward Error Correction (FEC), as explained above. When FEC succeeds, little additional delay is incurred.
  2. If there are too many errors to correct, the encoding (modulation) level can be reduced, making the data more likely to be successfully decoded (this is Adaptive Coding and Modulation, ACM).
  3. If there are more errors than even the above can fix, bit errors get through at Layer 2; for IP, a checksum will then fail somewhere at Layer 3 or above, the packet will be discarded, and it will have to be retransmitted. The sketch below shows how even a small residual bit error rate turns into packet loss.
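
A hedged illustration of that last point (the frame size and bit error rates are arbitrary example values): a seemingly tiny residual bit error rate becomes a noticeable packet loss rate once it is multiplied across every bit in a frame.

```python
# Probability a frame survives is (1 - BER) raised to the number of bits in it,
# assuming independent bit errors (a simplification of real burst-error behaviour).
def packet_loss_rate(bit_error_rate: float, frame_bytes: int = 1500) -> float:
    bits = frame_bytes * 8
    return 1 - (1 - bit_error_rate) ** bits

for ber in (1e-7, 1e-6, 1e-5):
    print(f"BER {ber:.0e} -> {packet_loss_rate(ber):.2%} of 1500-byte frames lost")
```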

 

What about Wavebands?

 

There is no magic formula for how wavebands perform, because different SATCOMS providers may use different power outputs and therefore achieve different Signal-to-Noise Ratios, but the general trend is:

Higher frequencies mean higher throughput, but also higher susceptibility to attenuation by rain, cloud etc.

Here is a table we put together as a quick guide, though the more we looked into it the more complex it got, as you have to take individual services into account: transmit power, whether adaptive coding and modulation (ACM) is used, and so on.

| Waveband | Frequency | Throughput (Bandwidth) | Rain/Cloud Resilience |
|---|---|---|---|
| L-Band | 1-2 GHz | 400 kbps | Premium |
| C-Band (inc VSAT) | 4-8 GHz | Cost effective | Good |
| X-Band (inc VSAT) | 9-12 GHz | Similar to C | OK |
| Ku-Band (inc VSAT) | 12-18 GHz | 1-12 Mbps | Susceptible |
| Ku-Band HTS Spot Beam | 12-18 GHz | 80-200 Mbps | Susceptible |
| Ka-Band (inc VSAT) | 26.5-40 GHz | 30-50 Mbps | Very susceptible (but modern Ka has a lot of power to compensate) |

 

Application perspective on Layer 1 effects

The application is, in general, going to experience a few things:

  1. Lowering of the available Bandwidth where FEC repeatedly fails, e.g. through the ACM mentioned above stepping down the modulation
  2. Loss of data for unacknowledged Layer 4 protocols, e.g. when the transport layer (Layer 4) is UDP
  3. Re-transmission of data for guaranteed-delivery Layer 4 protocols, e.g. when the transport layer (Layer 4) is TCP

 

So, if we want to test an application for these effects, we need to be able to reproduce similar effects at Layers 2 (and 3), which will then have the same impact on the transport layer (Layer 4) and above.
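
Conceptually, something sitting in the Layer 2/3 path just has to apply the impairments we have been describing to every frame that passes through it. A toy sketch of the idea (purely illustrative; the loss, bit error and delay figures are arbitrary, and this is not how any real emulator is implemented):

```python
import random

# Toy Layer 2/3 impairment stage: each frame may be dropped, have bits flipped,
# or be delayed. This is all the transport layer and application ever "see".
def impair(frames, loss_rate=0.01, bit_error_rate=1e-6, delay_ms=300):
    for frame in frames:
        if random.random() < loss_rate:
            continue                                  # frame silently lost
        damaged = bytearray(frame)
        for bit in range(len(damaged) * 8):
            if random.random() < bit_error_rate:
                damaged[bit // 8] ^= 1 << (bit % 8)   # flip a bit "in flight"
        yield delay_ms, bytes(damaged)                # delivered late, maybe damaged

# Example: push three dummy frames through a "rainy GEO link"
for delay, frame in impair([b"frame-1", b"frame-2", b"frame-3"],
                           loss_rate=0.1, bit_error_rate=1e-3, delay_ms=350):
    print(delay, frame)
```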

 

So again, should we care about Atmospheric Effects?

Summarizing:

  • For TCP-based applications – http, https, cifs (NetBIOS), ftp, buffered video, buffered audio etc. – reduction in bandwidth and retransmission due to packet loss are significant factors

Fundamentally we will see a slowdown in transmission, which may be very significant (the worked example after this summary shows how sharply loss and latency cap TCP throughput)

  • For UDP-based applications – VoIP, Real Time Video, Telemetry etc.

Humans have trouble with breakup and quality loss in live voice calls, video calls and video conferencing

Telemetry may be lost

As ever, the consequence depends on the application.
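
One way to see why the slowdown can be so dramatic for TCP is the well-known Mathis approximation, which bounds steady-state TCP throughput by segment size, round-trip time and packet loss rate (the RTT and loss figures below are illustrative, not measurements of any particular service):

```python
from math import sqrt

# Mathis et al. rule of thumb: TCP throughput <= (MSS / RTT) * (1 / sqrt(loss))
def tcp_throughput_mbps(mss_bytes=1460, rtt_s=0.7, loss=0.01):
    return (mss_bytes * 8 / rtt_s) / sqrt(loss) / 1e6

print(f"GEO RTT 700ms, 0.01% loss: {tcp_throughput_mbps(loss=0.0001):.2f} Mbps")
print(f"GEO RTT 700ms, 1% loss:    {tcp_throughput_mbps(loss=0.01):.2f} Mbps")
print(f"LEO RTT 40ms,  1% loss:    {tcp_throughput_mbps(rtt_s=0.04, loss=0.01):.2f} Mbps")
```

However much raw bandwidth the link offers, a single TCP flow over a long, lossy path may only ever use a fraction of it.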

 

How can you test your applications with Satellite Bit Error, Loss and Bandwidth Limitations (as well as Latency, Jitter etc.)?

[If you read Part 1 or Part 2 then you can skip to “The End” – the arguments are similar and you can “also” simulate Bandwidth Restriction, Bit Errors, Loss, Latency and Jitter.  If you didn’t please read on… ]

 

You need to test!

That may not be as formal as it sounds: we could say you need to try the application in the satellite network.  

However, there are issues with testing (or trying things out) on actual satellite networks:

  • Satellite time is expensive and the equipment not at all easy to deploy
  • It will be just about impossible to mimic your or your customers’ real locations
  • If you find an issue which needs attention, getting it to the developers for a solution will be difficult (and if the developers say they have sorted it out it is likely to be very difficult to retest)
  • You won’t be able to try out other satellite environments e.g. MEO or LEO without purchasing them
  •  You won’t be able to have a rainstorm appear just when you need it during your testing

 

Using Satellite Network Emulators

Because of these issues with “real network testing” on satellite networks, we’ve brought Satellite Network Emulation capabilities to our NE-ONE Professional and NE-ONE Enterprise Network Emulators.

People think of anything with the name “emulator” in it as some sort of complex mathematical device which predicts behaviour. They may be complex, but only internally; externally, we make them very straightforward. And they don’t predict behaviour: you get to actually try out (“test”) your application using your real clients and servers, just as though they were in the satellite network.

All you need to do is plug them in between a client device and the servers and set them for the satellite situation you want.  You can even try out other options like LEO or MEO within seconds.

Plugging them in is easy because they have Ethernet ports; you don’t need any satellite equipment at all.

Want to know more? Click here.

“The End” 

 

 

 

 

 
