What caused the ATO systems to crash

By Robert Merkel, Monash University

Many Australian Tax Office IT systems have been unavailable for days after a major fault, apparently caused by a problem with a large-scale storage server.

The ATO’s online systems, including its public website and portals for tax agents, were down for several days. On Thursday, the ATO reported that most services were back in operation, though users might experience slowdowns.

There were also reports that up to one petabyte of data was affected by the fault. The ATO has stated that no taxpayer data have been lost, although it is unclear whether any internal data have been lost.

Outage in a SAN

According to the ATO and media reports, the system outage was caused by a failure in a 3PAR StoreServ storage area network (SAN) made by Hewlett Packard Enterprise (HPE).

These devices contain racks full of hard disks and/or solid-state storage devices to store data on a gargantuan scale, and fast network interfaces to provide that data to the various “application servers” that provide the ATO’s online systems.

The two units purchased by the ATO were reportedly capable of storing up to a petabyte—that’s 1000 terabytes or 1 million gigabytes—of data each. They would have cost hundreds of thousands of dollars.

While these devices are expensive, they allow IT staff to allocate storage efficiently and flexibly to where it is needed, and thus (in theory) can improve reliability.

Multiple levels of redundancy, made redundant

Entrusting so much of the IT operations of a large organisation like the ATO to a single storage server requires a high degree of confidence that it will function reliably. As such, a number of levels of redundancy are incorporated into this kind of storage system.

As a first protection against a failure of a single disk (or solid-state storage device), data are “mirrored” across multiple physical disks. If monitoring systems detect a failure, operations can fall back on the mirrored data.

The faulty disk can be replaced and the full mirror restored, all without interrupting user operations. High-end systems such as these also incorporate redundancy into their controller electronics.
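To make the idea concrete, here is a minimal sketch of disk mirroring in Python. The Disk and MirroredVolume names are hypothetical, invented purely for illustration; real SAN firmware is vastly more sophisticated than this.

```python
# Toy illustration of disk mirroring (RAID 1 style). A simplified sketch
# for explanation only -- not how 3PAR firmware actually works.

class DiskFailure(Exception):
    """Raised when a simulated disk cannot serve a request."""

class Disk:
    def __init__(self):
        self.blocks = {}
        self.healthy = True

    def write(self, block_id, data):
        if not self.healthy:
            raise DiskFailure("write failed")
        self.blocks[block_id] = data

    def read(self, block_id):
        if not self.healthy:
            raise DiskFailure("read failed")
        return self.blocks[block_id]

class MirroredVolume:
    """Writes every block to both disks; reads fall back to the mirror."""

    def __init__(self, primary, mirror):
        self.disks = [primary, mirror]

    def write(self, block_id, data):
        for disk in self.disks:
            if disk.healthy:
                disk.write(block_id, data)

    def read(self, block_id):
        for disk in self.disks:
            try:
                return disk.read(block_id)
            except DiskFailure:
                continue  # fall back to the mirrored copy
        raise DiskFailure("all mirrors failed")

# One disk dies; the volume keeps serving data from its mirror,
# without interrupting the user's operations.
vol = MirroredVolume(Disk(), Disk())
vol.write("block-42", b"taxpayer record")
vol.disks[0].healthy = False
assert vol.read("block-42") == b"taxpayer record"
```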

However, if a major hardware failure occurs, such as a power failure that is not covered by a backup power supply, many such systems have a second level of redundancy. The entire contents of the SAN are “mirrored” to a second system, often in another physical location, and systems switch over to the backup automatically.
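A similarly hedged sketch of this second level: writes are applied synchronously to both sites, so when the primary site drops out, the secondary already holds an identical copy and can take over. The names here are again invented for illustration, not HPE’s actual replication mechanism.

```python
# Toy sketch of site-to-site synchronous replication with failover.

class Site:
    def __init__(self, name):
        self.name = name
        self.store = {}
        self.online = True

class ReplicatedSAN:
    def __init__(self, primary, secondary):
        self.primary = primary
        self.secondary = secondary

    def write(self, key, value):
        # Synchronous replication: the write lands on both sites before
        # it is acknowledged back to the application server.
        for site in (self.primary, self.secondary):
            if site.online:
                site.store[key] = value

    def read(self, key):
        # Automatic failover: fall back to the second site if the
        # primary is unavailable.
        site = self.primary if self.primary.online else self.secondary
        return site.store[key]

san = ReplicatedSAN(Site("data-centre-a"), Site("data-centre-b"))
san.write("lodgement-123", "received")
san.primary.online = False        # e.g. an uncovered power failure
print(san.read("lodgement-123"))  # still served, from the second site
```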

According to iTnews, all of this redundancy was made moot by the nature of the problem: corrupted data were being written to the SAN for some reason, and the corrupted data were then mirrored to the backup SAN.

In this situation, all the redundancy within and between the SANs does not help, as the bad data were replicated across the entire system. This is why keeping traditional backup snapshots (copies of data as they previously existed in the system) is so important, regardless of any amount of mirroring.
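The failure mode is easy to demonstrate in the same toy style: a corrupt write is replicated just as faithfully as a good one, and only a point-in-time snapshot allows recovery. This is a hypothetical sketch, not the ATO’s actual tooling.

```python
import copy

# Mirroring copies corruption just as faithfully as good data, so only an
# earlier point-in-time snapshot can bring the good data back.

store = {"ledger": "balanced"}
snapshot = copy.deepcopy(store)    # backup snapshot: data as it previously existed

store["ledger"] = "\x00corrupt"    # bad data written by a faulty layer...
replica = copy.deepcopy(store)     # ...dutifully mirrored to the backup SAN

assert replica["ledger"] == "\x00corrupt"  # both "redundant" copies are now bad
store = copy.deepcopy(snapshot)            # the snapshot still holds the old data
assert store["ledger"] == "balanced"
```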

The ATO appears to have comprehensive backups of the stored data. However, restoring all of it and returning the SANs to an operational configuration has had to be done manually, so it is not surprising that this has taken several days to complete.

Assessing the ATO’s response

While it is tempting to pile on to another large-scale government IT failure, a fair assessment should take into consideration the nature of the failure and the ATO’s response.

Firstly, it appears that the ATO heeded one of the key lessons of the Census website meltdown and communicated effectively with the public about what was going on. It responded to the failure by posting informative updates on social media and more comprehensive information on a functioning part of its website.

Secondly, it appears that its backup strategy was sufficient to get all systems back up and running without data loss, despite a nearly worst-case failure in its primary storage system.

If its incident response can be criticised, it is that services might have been restored much faster had more of the recovery process been automated. However, this appears to have been a highly unusual incident.

Restoring one set of application data due to corruption caused by the application itself is a relatively common situation. Restoring many different sets of data because of an apparent bug in the storage server is extremely rare.

Furthermore, while few people ever see them, SANs like this are very common devices in data centres. They provide a generic low-level storage service and are expected to provide it highly reliably.

Indeed, HPE markets its enterprise storage systems with a “99.9999% uptime guarantee”, which permits a device to be non-operational for only around 30 seconds per year.
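The arithmetic behind that figure is simple enough to check:

```python
# Downtime allowed per year under a 99.9999% ("six nines") uptime guarantee:
# roughly half a minute.
seconds_per_year = 365.25 * 24 * 60 * 60
allowed_downtime = seconds_per_year * (1 - 0.999999)
print(f"{allowed_downtime:.1f} seconds of downtime per year")  # ~31.6
```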

Over the past few days, the IT staff at the Australian Tax Office have probably had a few sleepless nights. It’s likely that engineers at HPE will have a few more trying to get to the bottom of why their enterprise storage system seems to have failed so comprehensively.

Robert Merkel is a lecturer in software engineering at Monash University.

This article was originally published on The Conversation. Read the original article.
