Don’t Let Your Proxy Be a Single Point of Failure


If there’s one thing I’ve learnt from working in numerous IT departments over the last twenty years, it’s that the proxy server never gets the attention it deserves. In fact, if a few hundred people didn’t periodically lose their access to Facebook or Twitter, many wouldn’t even know it existed. In most larger IT departments, each of the important application servers will have an individual or even a team that looks after it. Yet the proxy server is often ignored, despite being in many respects one of the most crucial.

If there’s one way to make the phone lines to IT support light up, it’s when something happens to the proxy server. Often the server itself is fine, but if that crucial link from the company network to the internet goes down, everyone notices, and quickly. You might think most employees can live without Facebook for a few hours (after all, just read the firm’s Acceptable Use Policy), yet many people’s job roles have come to include vital steps that can only be completed with internet connectivity.

While many application servers have extensive backup and business continuity plans to support them, the humble proxy is often forgotten, until, of course, something happens to it. Take a good look at your proxy and check whether it has some sort of resilience built in; the following overview of basic fault tolerance is a useful starting point.
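As a starting point, resilience can be checked with something as simple as a reachability probe. The sketch below (in Python) tests whether a proxy accepts TCP connections and falls back to a standby; the hostnames and port are hypothetical placeholders, not real endpoints.

```python
import socket

def proxy_is_up(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to the proxy succeeds within the timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Hypothetical primary and backup proxies (placeholder names and port).
PROXIES = [("proxy1.example.internal", 3128), ("proxy2.example.internal", 3128)]

def first_available(proxies):
    """Return the first reachable proxy, or None if every proxy is down."""
    for host, port in proxies:
        if proxy_is_up(host, port):
            return (host, port)
    return None  # every proxy is down -- time to alert IT support
```

Run on a schedule, a probe like this turns a silent proxy outage into an alert before the support phones start ringing.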

Building In Fault Tolerance

The key purpose of creating fault tolerance is to avoid (or at least minimize as far as possible) the possibility that the functionality of the system ever becomes unavailable because of a fault in one or more of its components.

Fault tolerance is necessary in systems that protect people’s safety (such as air traffic control hardware and software systems), and in systems on which security, data protection and integrity, or high-value transactions depend.

Redundancy

To remove a single point of failure and provide fault tolerance, fault-tolerant systems use the concept of “redundancy.” In practice, for a component such as a server’s power supply unit (PSU), this would mean equipping the system with one or more extra PSUs, which are redundant in the sense that they are not required to power the system while the primary PSU is functioning normally.

However, if the primary PSU fails (or a fault such as overheating indicates that it is about to fail), it can be taken out of service and one of the redundant PSUs can kick in without any interruption to the functioning of the overall system.
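The failover logic described above can be sketched in a few lines of Python. This is an illustrative model of N+1 redundancy, not real hardware management code; the unit names are made up.

```python
class RedundantSupply:
    """Sketch of N+1 redundancy: one active unit, spares standing by."""

    def __init__(self, units):
        self.units = list(units)  # unit names, in order of preference
        self.failed = set()

    @property
    def active(self):
        """The first healthy unit is the one currently carrying the load."""
        for unit in self.units:
            if unit not in self.failed:
                return unit
        return None  # total failure: no healthy unit left

    def report_fault(self, unit):
        """Mark a unit as failed; the next spare takes over transparently."""
        self.failed.add(unit)

psu = RedundantSupply(["PSU-primary", "PSU-spare"])
psu.report_fault("PSU-primary")
# The spare now carries the load; the system never stopped running.
```

The key property is that the consumer only ever asks for `active` and never needs to know a failover happened.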

Ideally, redundancy would be provided for all components in a system, but in practice this is usually too expensive. For that reason, designers calculate how likely a component is to fail, how important it is to the system, and how expensive it is to make redundant, before selecting the best candidates for redundancy.

An alternative approach is to treat redundancy at the system level, keeping an entire alternate computer system that can take over in the event of a system failure.

Diversity

In some cases, it may not be possible to provide redundancy, and an example of this is the main electrical supply which normally comes from the public electricity grid. If the main electricity supply fails (perhaps due to a power station failure or interruption to power lines during a storm) then it is usually not possible to access an alternative public electricity grid.

In this case fault tolerance can be achieved by diversity, which in practice means getting an electricity supply from another source entirely – most likely a backup electricity generator which kicks in automatically in case of a main power failure.

In some cases the “diverse” option (in this case the generator) may not have the same capacity as the primary option, which may necessitate a graceful degradation of service until the primary option can be restored.
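Diversity with graceful degradation can be modelled in a few lines. The sketch below switches from the grid to a smaller generator and sheds whatever load the backup cannot carry; the capacity figures are made up for illustration.

```python
def serve_load(demand_kw, grid_ok, grid_capacity_kw=500, generator_capacity_kw=200):
    """Pick a power source and shed load if the diverse backup can't carry it all.

    Returns (kilowatts actually served, whether service is degraded).
    Capacities are illustrative figures, not real ratings.
    """
    capacity = grid_capacity_kw if grid_ok else generator_capacity_kw
    served = min(demand_kw, capacity)
    degraded = served < demand_kw
    return served, degraded

# Normal running: the grid covers the full demand.
# serve_load(350, grid_ok=True)  -> (350, False)
# Grid outage: the generator carries what it can; the rest is shed
# until the primary supply is restored.
# serve_load(350, grid_ok=False) -> (200, True)
```

The same pattern applies to proxies: a backup proxy with less cache or bandwidth can keep essential traffic flowing in degraded mode until the primary returns.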

Source: https://www.enterprisestorageforum.com/storage-management/fault-tolerance.html

Proxies on the internet are, of course, a completely different beast from their corporate cousins. The proxy in a standard corporate network will be responsible for channelling all sorts of data, and will likely be configured to spend a great deal of its resources acting as a huge web cache too.

The commercial proxies that people use over the internet usually serve a much different purpose. Often the role is simply to hide the origin of the connection and provide alternative identities. This is why so much of the technology behind ‘external proxies’ is focused on utilising large banks of IP addresses efficiently, for example through rotating proxies.
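The rotation idea itself is simple round-robin over a pool of exit addresses. The sketch below is a minimal illustration, assuming a plain cycle strategy; the addresses are documentation-range placeholders, not real endpoints.

```python
import itertools

class RotatingProxyPool:
    """Minimal sketch of proxy rotation: each request goes out through
    the next address in the pool, spreading requests across many IPs."""

    def __init__(self, addresses):
        self._cycle = itertools.cycle(addresses)

    def next_proxy(self):
        """Return the exit address to use for the next request."""
        return next(self._cycle)

# Placeholder addresses from the documentation range 203.0.113.0/24.
pool = RotatingProxyPool([
    "203.0.113.10:8080",
    "203.0.113.11:8080",
    "203.0.113.12:8080",
])
# Three successive requests use three different exit IPs, then the cycle repeats.
```

Real rotating-proxy services layer health checks, geo-targeting and per-session stickiness on top, but the core is this simple rotation.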

It’s almost certain, though, that this focus comes at the expense of other factors, in most cases security and resilience. After all, the data transferred over these external services is meant to be anonymous, so the proxy owners will likely pay it little heed. Corporate proxies, by contrast, are potentially transferring commercially sensitive information both to and from the network.

Whatever the proxy, it deserves some respect for the functions it performs in all sorts of roles. Proxies are generally the workhorses of the server world, and they can cause a lot of very serious issues if they’re not looked after.
