With a trend towards server consolidation, virtualization, cloud computing, increased remote working and greater reliance on wireless technology, more applications than ever before are being deployed over Cloud, Wide Area Networks (WANs), (A)DSL, GPRS, 3G/4G, Radio or Satellite networks, and more.
However, despite this trend, a glaring problem remains in mainstream application development: software performance testing is still frequently conducted only over the fast, reliable Local Area Networks (LANs) of the test lab.
Passing application performance tests in perfect LAN conditions is no guarantee of acceptable performance on non-LAN networks. LANs and WANs (for brevity, any further reference to WANs in this document also covers Wireless, Radio, Satellite and similar networks) are very different environments, and the belief that testing in LAN conditions gives a good indication of how the same application will perform over a WAN is simply unrealistic.
Let’s explore some of the ways in which LANs differ from the other networks applications will have to operate over, and why these differences may cause your current test environment to be unsuitable for robust product development.
In LANs, available bandwidth is rarely an issue. WANs, however, are expensive, so they offer lower bandwidths and are rarely over-specified. With more applications competing for capacity, the bandwidth available to any particular application will be much lower, and this may have a significant impact on its performance. In addition, network administrators can (and do) configure Quality of Service (QoS) on their networks to favour certain applications, such as Voice over IP (VoIP), over “conventional” applications.
This de facto de-prioritises everything else, so the software you are testing may have very restricted bandwidth available in the production environment because other applications have a higher priority. Your product will nevertheless need to, and be expected to, function at an acceptable level over this lower bandwidth, so bandwidth restrictions must be factored into your testing.
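To make the effect concrete, the following sketch estimates how long a fixed transfer takes as an application's share of a link shrinks. The payload size and bandwidth shares are illustrative assumptions, not measurements:

```python
# Illustrative sketch: transfer time for a fixed payload as the bandwidth
# available to one application shrinks. All figures are hypothetical
# examples, not measurements from a real network.

def transfer_time_seconds(payload_bytes: int, available_bps: float) -> float:
    """Ideal transfer time, ignoring protocol overhead and latency."""
    return (payload_bytes * 8) / available_bps

PAYLOAD = 5 * 1024 * 1024  # e.g. a 5 MB document

# On a LAN the application may get most of a 100 Mbit/s link; on a busy,
# QoS-managed WAN it may be left with a 2 Mbit/s or even 256 kbit/s share.
for label, bps in [("LAN share", 100e6), ("WAN share", 2e6), ("after QoS", 256e3)]:
    print(f"{label:10s}: {transfer_time_seconds(PAYLOAD, bps):8.1f} s")
```

The same payload that moves in under half a second on the LAN takes minutes once QoS leaves the application a narrow slice of the WAN, which is exactly the scenario LAN-only testing never exercises.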
While the impact of limited bandwidth on applications is fairly obvious to all of us, the effect of other network conditions such as latency, jitter, loss, errors and even quality of service is not so straightforward to assess, yet they have a very significant impact on product performance. Unlike limited bandwidth, however, you cannot simply go to your telco and buy your way out of these issues.
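These conditions can, however, be reproduced in the test lab. As one configuration sketch (the interface name `eth0` and the figures are placeholders; this assumes a Linux test host with root access), the standard `tc`/`netem` tooling can impose latency, jitter and loss on outgoing traffic:

```shell
# Configuration sketch: emulate WAN conditions on a Linux test host.
# Requires root; eth0 and all figures are placeholder examples.

# Add 80 ms of latency with 10 ms of jitter and 1% packet loss:
tc qdisc add dev eth0 root netem delay 80ms 10ms loss 1%

# Inspect the emulated conditions:
tc qdisc show dev eth0

# Remove the emulation when testing is complete:
tc qdisc del dev eth0 root
```

Running the same performance tests with and without such an emulation in place is a cheap first step towards the WAN-aware testing this document argues for.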
Most WAN traffic is split between TCP (a guaranteed-delivery protocol used by session-based applications) and UDP (a connectionless, non-guaranteed protocol typically used for VoIP, live video over IP and online gaming). The two respond to limited bandwidth, latency, jitter, loss and errors in different ways: TCP-based applications generally slow down markedly as latency and loss increase, irrespective of available bandwidth, while UDP-based applications become less real-time as latency rises, and lose sound, blocks of picture (“blocking”) or game positioning/control information as loss and errors rise.
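TCP's sensitivity to latency and loss can be estimated with the well-known Mathis approximation, throughput ≈ MSS / (RTT · √p), an upper bound independent of link bandwidth. A sketch (the MSS, round-trip times and loss rates below are illustrative assumptions):

```python
# Illustrative sketch of the Mathis et al. approximation for the upper
# bound on TCP throughput: throughput ~ MSS / (RTT * sqrt(p)).
# MSS, RTT and loss figures are example assumptions, not measurements.
import math

def mathis_throughput_bps(mss_bytes: int, rtt_s: float, loss_rate: float) -> float:
    """Approximate TCP throughput ceiling in bits per second."""
    return (mss_bytes * 8) / (rtt_s * math.sqrt(loss_rate))

MSS = 1460  # typical Ethernet TCP maximum segment size, in bytes

# A 1 ms LAN versus an 80 ms WAN, each at 0.01% and 1% packet loss:
for rtt in (0.001, 0.080):
    for loss in (0.0001, 0.01):
        bps = mathis_throughput_bps(MSS, rtt, loss)
        print(f"RTT {rtt * 1000:5.1f} ms, loss {loss:6.2%}: ~{bps / 1e6:9.2f} Mbit/s")
```

Under these example figures, an 80 ms round trip with 1% loss caps TCP at roughly 1.5 Mbit/s however fast the underlying link is, which is why latency and loss, not bandwidth, often dominate WAN performance.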
TCP connections carry HTTP (web) traffic, FTP, Microsoft networking and most other general-purpose traffic, including custom application protocols, so all of these are sensitive to higher latency (and loss), as described. As an example of the effect of latency: if you run a TCP-based application over a network between London and New York, the network does …