Attending Interop in Vegas last month, I was surprised by the number of vendors exhibiting their wares under the banner of APM – Application Performance Management. With all the different offerings, it was rather confusing. It got me thinking about a trip I’d taken the night before to an ice-cream parlour. I love ice-cream, and at the parlour there were plenty of flavours on offer. With APM, as with ice-cream, meeting customer demand is key – one flavour doesn’t suit everyone.
Looking at the offerings all touted under the umbrella of APM, I realised that some were obvious choices and others, well…
Application Performance Management (APM) tools are coming into sharp focus as a result of cloud computing, agile development, virtualization, and mobile device adoption. The industry is embracing “networked applications”, and as a result these apps will need to perform well across the network as well as avoid downtime. That means monitoring and managing from a network perspective, as well as from a server and application perspective, if we are to eliminate system outages and poor performance.
There are many APM solutions in the marketplace – some 200 at last count – across different disciplines and domains, with differing features, methodologies and options. The trick is to find the one that suits you best. Even though they all sit under the same umbrella, their approaches are very different: some monitor transactions across the network, some monitor applications on servers, some monitor the clients, and some perform synthetic transactions.
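To make the “synthetic transactions” flavour concrete, here’s a minimal sketch in Python using only the standard library: it fires one scripted request at a URL and grades the response time against a service-level target. The URL, timeout and 500 ms threshold are illustrative assumptions – real APM suites script entire multi-step user journeys and run them from many locations.

```python
import time
import urllib.request


def grade(status, elapsed_ms, slo_ms=500.0):
    """Turn a raw result into a verdict. The 500 ms SLO is an assumed example."""
    if status is None:
        return "DOWN"
    return "OK" if status == 200 and elapsed_ms <= slo_ms else "SLOW"


def synthetic_check(url, timeout=5.0, slo_ms=500.0):
    """Perform one synthetic transaction: request the page, time it, grade it."""
    start = time.monotonic()
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            resp.read()                       # pull the whole body, as a browser would
            status = resp.status
    except OSError:                           # DNS failure, refused, timeout, ...
        return "DOWN", None
    elapsed_ms = (time.monotonic() - start) * 1000.0
    return grade(status, elapsed_ms, slo_ms), elapsed_ms
```

Run on a schedule, something like `synthetic_check("https://www.example.com/login")` gives a crude outside-in view of availability and responsiveness, independent of what the servers themselves report.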
I’ve been pondering the on-going issues around transparency within the electric and heating utility services, and to be honest I’m a bit confused. I’m offered documents that try to explain it all, but they’re written in a way that is, frankly, less transparent than my bill!
Working for a company that offers intelligent real-time monitoring for cloud services, I’m used to better treatment than that. For me, transparency means I should know at a glance exactly what I’m getting, how well I’m being served and how much it’s costing me.
That the military have been remotely controlling UAVs (Unmanned Aerial Vehicles) from hundreds or even thousands of miles away from where the vehicle is actually operating is nothing new. So far, the UAVs that have been deployed have been quite modest in size, but eventually this technology will most likely be used to control much larger vehicles, including passenger airliners!
According to a recent (and excellent) article on The Economist web site called “This is your ground pilot speaking”, a small twin-engined Jetstream commuter aircraft will soon take off from an aerodrome in Lancashire, England and fly towards Scotland – but on this occasion the main pilot won’t be in the cockpit. Instead, they will remain firmly on the ground, flying the plane from there. As this is a test flight, there will be a pilot in the aircraft in case something goes wrong.
Testing has often been the poor relation, an afterthought, but things are changing: over the past few years, outages have really impacted customers. Last week a computer outage at United Airlines delayed thousands of travellers (see the full Boston Globe story), and earlier this year RBS suffered a failure caused by a software upgrade! Those incidents made the news, but for thousands of companies world-wide outages can and do happen; they may not be so dramatic, but they do damage reputation and end-user experience, and can incur heavy financial losses. Since it’s a given that outages – whatever the reason – can happen, it raises the question: can we do anything to reduce the chances of failure?
The BBC reports that, following a site rebuild, 10% of Barclays Online Banking customers are experiencing very long log-in times. So we called a few people we know who use the service and, while hardly a comprehensive sample, it appears they all encountered lengthy delays. Our very qualitative initial investigation suggests that the new version uses frameworks to deliver the application and that the resulting download may be quite sizeable – multiply that by many thousands of customers and a likely cause of the bottleneck is at the server end. Of course, a healthy dose of pre-deployment application performance testing could have identified that this was likely to happen, so you have to ask: did Barclays test the performance of their online banking web site under realistic network conditions?
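A rough back-of-envelope calculation shows why a framework-heavy relaunch can bottleneck at the server end. All figures below are illustrative assumptions, not Barclays numbers:

```python
# If every client must download the framework payload on first visit,
# the server's uplink has to push payload x users bits before the
# burst drains. Every constant here is an assumed, illustrative value.
PAYLOAD_BYTES = 2_000_000      # assumed ~2 MB framework download per client
CONCURRENT_USERS = 10_000      # assumed burst of users hitting the new site
UPLINK_BPS = 1e9               # assumed 1 Gbps server uplink

total_bits = PAYLOAD_BYTES * 8 * CONCURRENT_USERS
seconds_to_drain = total_bits / UPLINK_BPS
print(f"Time to serve the burst: {seconds_to_drain:.0f} s")  # 160 s
```

Over two and a half minutes just to serialise the payload onto the wire – before any server processing or client rendering – which is exactly the kind of figure a realistic pre-deployment load test would surface.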
In his latest blog for ComputerWorld (What the heck is 3.5G?) my colleague, Frank Puranik, has been exploring what we, as mobile phone users, may actually be getting in the way of link speeds (or should expect to receive when the newer 3.5G / 4G services are rolled out) when our mobile devices display symbols telling us we are receiving a 3G or H connection.
To say the situation is confusing is putting it mildly. It appears that some services that were actually 3G were not displayed as 3G, while new services being positioned as 4G/LTE are actually going to be 3G offerings, albeit considerably enhanced versions.
In a recent blog my colleague, Phil Bull, notes that there’s confusion over bandwidth and speed. The discussion started with an article on Network World’s website in which NetForecast said “No Matter What the FCC Says, Bandwidth Is Not Speed”. Basically, the FCC were bandying the terms “speed” and “bandwidth” about pretty interchangeably, and NetForecast took umbrage at this, saying that most users equate “speed” with their application’s performance, i.e. response time. They noted, as we have pointed out many times, that other factors – such as loss, re-ordering and latency – are just as important as bandwidth in delivering this “speed”.
And that’s totally correct, but it’s not complete! It’s worse than that, Jim: bandwidth is not equal to link speed either.
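A simple (and admittedly crude) response-time model makes the point: total time is roughly serialization delay (bytes over bandwidth) plus protocol round trips, so a chatty application multiplies the RTT many times over. With the illustrative numbers below, the lower-bandwidth but lower-latency link actually delivers the page faster:

```python
def response_time(size_bytes, bandwidth_bps, rtt_s, round_trips):
    """Crude model: serialization delay plus protocol round-trip delay.
    Ignores loss, re-ordering and TCP slow start, which only make the
    high-latency case worse."""
    return size_bytes * 8 / bandwidth_bps + round_trips * rtt_s

# Assumed example: a 1 MB page needing 20 request/response round trips.
fast_pipe = response_time(1_000_000, 100e6, 0.100, 20)  # 100 Mbps, 100 ms RTT
slow_pipe = response_time(1_000_000, 10e6, 0.010, 20)   #  10 Mbps,  10 ms RTT
print(f"100 Mbps / 100 ms RTT: {fast_pipe:.2f} s")      # 2.08 s
print(f" 10 Mbps /  10 ms RTT: {slow_pipe:.2f} s")      # 1.00 s
```

Ten times the bandwidth, yet twice the response time – which is precisely why users’ notion of “speed” cannot be read off the bandwidth figure alone.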