BLOG: Satellites Part 3

This blog is Part 3 of a 3 part blog and concentrates on Errors, Loss, the effect of atmospheric conditions and choice of wavebands. Part 1 dealt with Distance, Latencies and Orbits and Part 2 looked at Jitter.

Atmospheric Conditions and Satcoms Data Transmission

There’s no doubt that in an ideal world the transmission from Ground to Satellite and vice versa would be error free, and if that were the case there’d be nothing to say here.  The bottom line is that it’s not error free, and the problems occur primarily due to atmospheric conditions.  So what are these conditions?  Well, it all starts with the sun and goes down from there.  We may have:

  • Space Weather – Solar Flares etc – geomagnetic effects
    • Ionospheric scintillation – Irregularities in the Earth’s ionosphere which affect the amplitude and phase of radio signals
  • Cloud – Water droplets absorb and reflect radio signals
  • Rain – Raindrops themselves absorb and reflect radio signals
  • Dust and Sand Storms

Well that’s not an exhaustive list, but you get the idea.

So any of these factors can produce an error in the transmission stream, and the more you get, the harder they are to deal with.

Box out

A quick look at How Data is Transmitted in Packets

As humans we tend to think of transmitting bytes of data.  We have “data plans” for so many Gigabytes per month, but data is not normally transferred in bytes between systems.  Instead it is transferred in packets (blocks of bytes), aka Frames.  These packets consist of a:

  • Header – information on how to deliver the packet, e.g. the destination address (and more)
  • Data – the data we’re actually sending

Now the Data may itself contain a sort of sub-packet, i.e. have a Header and Data itself, and if you think that’s uncommon – no, it absolutely isn’t: in most businesses and homes IP packets are sent inside Ethernet frames.

Why is data transmitted this way?  Because a typical network operates like the “Post Office” handling network traffic on behalf of many customers.  A packet is, to the network, like a letter is to the post office – it contains address information, including sender information, so that packets can be delivered to a variety of destinations and the recipient knows where they came from.  
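
The “letter” analogy can be sketched in a few lines of Python.  This is purely illustrative – the field names below are my own invention, not any real protocol’s – but it shows that a packet is just addressing information wrapped around a payload:

```python
from dataclasses import dataclass

@dataclass
class Packet:
    """Minimal illustrative packet: addressing plus payload (not a real protocol)."""
    source: str        # who sent it - like a return address on an envelope
    destination: str   # where the network should deliver it
    payload: bytes     # the data we're actually sending

letter = Packet(source="10.0.0.2",
                destination="198.51.100.7",
                payload=b"hello over the satellite link")

# The network routes on the destination, like the post office on a postcode,
# and the recipient can reply because the source travels with the data.
print(letter.destination)
```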

If we sent 1 byte at a time it would still need a header and so the amount of header information would exceed the actual data we were transmitting by huge amounts – what a waste of bandwidth that would be!
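
A quick back-of-envelope calculation makes the waste concrete.  This Python sketch assumes the common minimum header sizes – 14 bytes for Ethernet II, 20 for IPv4 and 20 for TCP – and ignores preamble, FCS and options:

```python
# Rough per-packet overhead for a TCP/IPv4/Ethernet stack.
# Common minimum header sizes: Ethernet II = 14 bytes, IPv4 = 20, TCP = 20.
HEADERS = 14 + 20 + 20  # 54 bytes of headers on every packet

def overhead_percent(payload_bytes: int) -> float:
    """Percentage of each packet that is header rather than data."""
    total = HEADERS + payload_bytes
    return 100.0 * HEADERS / total

print(f"1-byte payload:    {overhead_percent(1):.1f}% overhead")
print(f"1460-byte payload: {overhead_percent(1460):.1f}% overhead")
```

With a 1-byte payload roughly 98% of everything sent is header; with a full 1460-byte payload the overhead drops to under 4%.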

This is the OSI Network Layer Model (table courtesy of Wikipedia):

Layer | Protocol data unit (PDU) | Function

Host layers:
7 – Application | Data | High-level APIs, including resource sharing, remote file access
6 – Presentation | Data | Translation of data between a networking service and an application; including character encoding, data compression and encryption/decryption
5 – Session | Data | Managing communication sessions, i.e. continuous exchange of information in the form of multiple back-and-forth transmissions between two nodes
4 – Transport | Segment, Datagram | Reliable transmission of data segments between points on a network, including segmentation, acknowledgement and multiplexing

Media layers:
3 – Network | Packet | Structuring and managing a multi-node network, including addressing, routing and traffic control
2 – Data link | Frame | Reliable transmission of data frames between two nodes connected by a physical layer
1 – Physical | Bit, Symbol | Transmission and reception of raw bit streams over a physical medium

So in our Satcoms example the lowest layers are:

  1. Satellite Physical (SATPHY) The radio (wireless) transmission as a bit or symbol stream
  2. Satellite Medium Access Control (SMAC) & Satellite Link Control (SLC)
  3. IPv4 or IPv6

So from layer 3 up the packets are the same as the ones our computers, devices, phones etc. generate.

Increasing Bandwidth vs Transmission Quality

Let’s talk about the physical transmission layer (OSI Layer 1) in Satcoms and note that it might be in Bits or Symbols.  

What’s a symbol?  Well, satcoms looks at transmission at the lowest level in Hz (Hertz – cycles per second).  Now we could send data 1 bit per cycle in the standard binary fashion, as happens in wired and optical circuits, or, if conditions allowed, we might use several different signal “levels” – distinct combinations of amplitude and phase – so that each transmitted state carries 2, 4, 6 or even more bits at once.  These states are called Symbols.

Technically, the method of doing this is to modulate the signal. The number of bits per symbol is the base-2 logarithm of the number of modulation states. So, for example, in 64-QAM modulation 64 = 2⁶, so the bits per symbol is 6. Forward Error Correction decreases throughput but is beyond the scope of this discussion.
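
The relationship is easy to check in code.  This Python sketch derives bits per symbol as log₂ of the modulation order and multiplies by an arbitrary, purely illustrative symbol rate to get the gross (pre-FEC) bit rate:

```python
import math

def bits_per_symbol(modulation_order: int) -> int:
    """Bits carried by each symbol, e.g. 64-QAM -> log2(64) = 6."""
    return int(math.log2(modulation_order))

# Gross (pre-FEC) bit rate = symbol rate x bits per symbol.
symbol_rate = 30_000_000  # 30 Msym/s - an arbitrary example figure

for order in (2, 4, 16, 64, 256):   # BPSK, QPSK, 16/64/256-QAM
    bps = bits_per_symbol(order)
    print(f"{order:>3}-state modulation: {bps} bits/symbol, "
          f"{symbol_rate * bps / 1e6:.0f} Mbit/s gross")
```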

The problem is that the more bits we pack into each symbol, the finer the distinctions in amplitude and phase the receiver has to resolve, so the better the transmission quality needs to be – and all the atmospheric factors mentioned above can dash that.

What Happens When Atmospheric Conditions Disturb Transmission

  1. Unlike standard Ethernet at Layer 2, which has no error correction at all, many Satcoms circuits support Forward Error Correction (FEC), where extra data is sent with the symbols which can be used to correct the damaged information.  When it works, not much additional delay is incurred.
  2. If there are too many errors to correct, the number of bits per symbol can be reduced, producing symbols that are more likely to be successfully decoded.
  3. If there are more errors than even the above can fix, then bit errors get through at layer 2; for IP, a checksum will fail somewhere at layer 3 or above and the packet will be discarded.
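
To make point 1 concrete, here is a toy Forward Error Correction scheme in Python – a simple 3× repetition code, far cruder than the codes real Satcoms links use, but it shows the principle: redundant data lets the receiver repair a corrupted bit without any retransmission, and when the damage exceeds what the code can fix, the errored data escapes to the layers above:

```python
def fec_encode(bits):
    """Toy repetition code: send every bit three times."""
    return [b for b in bits for _ in range(3)]

def fec_decode(received):
    """Majority vote over each group of three received bits."""
    decoded = []
    for i in range(0, len(received), 3):
        group = received[i:i + 3]
        decoded.append(1 if sum(group) >= 2 else 0)
    return decoded

data = [1, 0, 1, 1]
tx = fec_encode(data)            # 12 bits on the wire for 4 data bits

# One bit flipped by "rain" is repaired by the majority vote...
rx = tx.copy(); rx[0] ^= 1
print(fec_decode(rx) == data)    # True - corrected, no retransmission

# ...but two flips in the same group overwhelm the code: the bad bit
# gets through, and a higher-layer checksum will discard the packet.
rx2 = tx.copy(); rx2[0] ^= 1; rx2[1] ^= 1
print(fec_decode(rx2) == data)   # False
```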

What about Wavebands?

There is no magic formula for how wavebands perform, because different Satcoms providers may use different transmit power and therefore achieve better Signal-to-Noise Ratios, but the general trends are:

Higher Frequencies -> Higher Throughput & Higher susceptibility to attenuation by rain/cloud etc.  

Here’s a table I put together as a quick guide, but the more I looked into it the more complex it got, as you have to take individual services into account – transmit power, whether they use Adaptive Coding and Modulation (ACM), and so on.

Waveband | Frequency | Throughput (Bandwidth) | Rain/Cloud Resilience
L-Band | 1–2 GHz | 400 kbps | Premium
C-Band (inc VSAT) | 4–8 GHz | Cost effective | Good
X-Band (inc VSAT) | 9–12 GHz | Similar to C | OK
Ku-Band (inc VSAT) | 12–18 GHz | 1–12 Mbps | Susceptible
Ku-Band HTS Spot Beam | 12–18 GHz | 80–200 Mbps | Susceptible
Ka-Band (inc VSAT) | 26.5–40 GHz | 30–50 Mbps | Very susceptible (but modern Ka has a lot of power to compensate)

Application perspective on Layer 1 effects

The application is in general going to experience a few things:

  1. Lowering of the available bandwidth where FEC repeatedly fails, e.g. via the ACM mentioned above
  2. Loss of data for unacknowledged layer 4 protocols, e.g. where the Transport layer (4) protocol is UDP
  3. Re-transmission of data for guaranteed-delivery layer 4 protocols, e.g. where the Transport layer (4) protocol is TCP
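
For TCP (point 3), the combined effect of long GEO latency and residual loss can be estimated with the well-known Mathis et al. approximation, throughput ≈ (MSS/RTT) × (C/√p).  The figures in this Python sketch – a 1460-byte MSS, a ~550 ms GEO round trip and 1% packet loss – are illustrative assumptions, not measurements:

```python
import math

def mathis_throughput_bps(mss_bytes: float, rtt_s: float, loss: float) -> float:
    """Mathis et al. approximation of steady-state TCP throughput.

    throughput <= (MSS / RTT) * (C / sqrt(p)),  with C = sqrt(3/2).
    """
    C = math.sqrt(3 / 2)
    return (mss_bytes * 8 / rtt_s) * (C / math.sqrt(loss))

# Illustrative GEO figures: 1460-byte MSS, ~550 ms RTT, 1% packet loss.
bps = mathis_throughput_bps(1460, 0.550, 0.01)
print(f"~{bps / 1e6:.2f} Mbit/s per TCP connection")
```

Under these assumptions a single TCP connection is limited to roughly a quarter of a megabit per second, however much raw bandwidth the link offers – which is why loss and latency, not just bandwidth, dominate the user experience.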

So if we want to test an application for these effects, we need to be able to produce similar effects at layers 2 (and 3), which will have a similar impact on the Transport layer (layer 4) and above.
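
Conceptually, reproducing loss at layers 2/3 is no more than a per-packet decision, as in this toy Python sketch (an illustration of the idea only, not how any real emulator is implemented):

```python
import random

def impair(packets, loss_rate, rng):
    """Drop each packet independently with probability loss_rate."""
    return [p for p in packets if rng.random() >= loss_rate]

rng = random.Random(42)          # seeded so the run is repeatable
sent = list(range(10_000))
delivered = impair(sent, 0.01, rng)   # 1% random loss

print(f"delivered {len(delivered)} of {len(sent)} packets")
```

Whatever sits above – TCP retransmitting, or a UDP stream simply losing data – then reacts exactly as it would to real atmospheric loss.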

So again, should we care about Atmospheric Effects?

Summarising:

  •   To TCP-based applications – HTTP, HTTPS, CIFS (NetBIOS), FTP, buffered video, buffered audio etc. – reduction in bandwidth and retransmission due to packet loss are significant factors.  Fundamentally we will see a slowdown in transmission, which may be very significant.

  •   To UDP-based applications – VoIP, real-time video, telemetry etc. – humans have trouble with break-up and quality loss in live voice calls, video and video conferencing, and telemetry may be lost.

As ever the consequence depends on the application.

How can you test your applications with Satellite Bit Error, Loss and Bandwidth Limitations (as well as Latency, Jitter etc.)?

[If you read part 1 or part 2 then you can skip to “The End” – the arguments are similar and you can “also” simulate Bandwidth Restriction, Bit Errors, Loss, Latency and Jitter.  If you didn’t please read on… ]

You need to test!

That may not be as formal as it sounds: we could say you need to try the application in the satellite network.  

There are issues with testing or trying your application on actual (real) satellite networks though:

  •   Satellite time is expensive and the equipment not at all easy to deploy
  •   It will be just about impossible to mimic your or your customers’ real locations
  •   If you find an issue which needs attention, getting it to the developers for a solution will be difficult (and if the developers say they’ve sorted it out it is likely to be very difficult to retest)
  •   You won’t be able to try out other satellite environments e.g. MEO or LEO without purchasing them
  •   You won’t be able to have a rainstorm appear just when you need it during your testing

Using Satellite Network Emulators

Because of the issues of “real network testing” in Satellite networks, we’ve brought Satellite Network Emulation capabilities to our NE-ONE Professional and NE-ONE Enterprise Network Emulators.

People tend to think of anything with “emulator” in the name as some sort of complex mathematical device which predicts behaviours.  They may be complex, but only internally: externally we make them very straightforward.  And they don’t predict behaviour – you get to actually try out (“test”) your application using your real clients and servers, just as though they were in the satellite network.

All you need to do is plug them in between a client device and the servers and set them for the satellite situation you want.  You can even try out other options like LEO or MEO within seconds.

Plugging them in is easy because they have Ethernet ports – you don’t need any satellite equipment at all.

Want to know more?  Click here.