NYSE Disaster Recovery Blunder Triggered Major Trading Glitch

    The NYSE (New York Stock Exchange) is facing heavy criticism after an unexpected outage and IT error caused drastic price fluctuations in blue-chip stocks and forced billions of dollars’ worth of trades to be cancelled.

    The NYSE blamed the ‘technical’ glitch on a manual error with its disaster recovery (DR) system, according to The Register.

    Let’s Flesh Out the Reasons Behind the “Mayhem”

    The exchange’s secondary Chicago data centre is supposed to protect US stock markets in the event of an outage or when a natural disaster strikes the venue.

    As part of regular maintenance activities, the DR system should be tested daily and shut down manually after the closing bell.

    However, on Monday, January 23rd, after the market was closed, an NYSE staffer failed to turn off the disaster recovery system correctly.

    The result: the backup system, which is meant to be turned on only in the event of a disaster, was left running overnight.

    As a result, the exchange’s systems treated Tuesday’s session as a continuation of Monday’s, carrying over Monday’s prices.

    The consequence: at 9:30 am on Tuesday, as trading started, the NYSE software malfunctioned and skipped the day’s opening auctions, so opening prices were set incorrectly, leading to the debacle.
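The failure mode described above, a DR site left in a stale state that then poisons the next day’s open, lends itself to automated safeguards. The sketch below is purely illustrative: the `DRSite` class and the two check functions are hypothetical, not the NYSE’s or any vendor’s actual tooling, but they show how an automated end-of-day shutdown plus a pre-open state check could have caught the stale session before trading began.

```python
# Hypothetical, simplified model of a DR site's state. Real exchanges use
# dedicated orchestration platforms, not a script like this.
class DRSite:
    def __init__(self, name):
        self.name = name
        self.active = False
        self.session_date = None  # the trading day the site last ran for

    def start_test(self, session_date):
        # Daily DR test performed during the trading session.
        self.active = True
        self.session_date = session_date

    def shutdown(self):
        self.active = False
        self.session_date = None


def end_of_day_shutdown(site):
    """Automated replacement for the manual post-close shutdown:
    always deactivate the DR site, then verify it is really down."""
    site.shutdown()
    if site.active:
        raise RuntimeError(f"DR site {site.name} still active after close")


def pre_open_check(site, today):
    """Guard before the opening auction: refuse to open the market if
    the DR site is still running with a previous day's session."""
    if site.active and site.session_date != today:
        raise RuntimeError(
            f"DR site {site.name} carries stale session {site.session_date}"
        )


site = DRSite("chicago-dr")
site.start_test("2023-01-23")        # Monday's routine DR test
end_of_day_shutdown(site)            # automated shutdown after the bell
pre_open_check(site, "2023-01-24")   # Tuesday's open proceeds safely
```

Had the shutdown step been skipped, the `pre_open_check` would have raised before the opening auction rather than letting trades run on Monday’s prices.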

    The NYSE, in a statement, said, “The root cause was determined to be a manual error involving the exchange’s disaster recovery configuration at system start of day.”

    What Was the Consequence?

    The technical error triggered a string of events with catastrophic repercussions.

    What the NYSE described as a “system error” caused shares in over 250 firms to go haywire, with some firms seeing their stock prices swing by around 25%.

    As reported, a total of 84 stocks saw their valuations drastically plunge or surge until they reached the limits set to prevent securities from trading at extreme prices.

    According to a statement released on Wednesday morning, due to the system disruption, 81 stocks had short-sale restrictions (SSR) implemented “erroneously,” with Snap and Morgan Stanley being badly affected. 

    While Morgan Stanley’s share price dropped by 13%, Walmart saw an increase of 12% in its share price due to the error.

    Soon after NYSE halted the most egregious transactions, it stated: “Approximately 4,341 trades in 251 symbols should be busted (canceled).”

    NYSE Fielding Claims

    NYSE officials spent hours hunting down the cause of the turmoil and confirming that no such trading chaos would recur.

    “The issues around our market open on Tuesday are our collective responsibility, and we have moved swiftly and decisively to resolve them as a team,” explained a spokesperson from NYSE to Bloomberg. “A core value of ICE (Intercontinental Exchange, NYSE’s parent company) and the NYSE is our commitment to collaboration.”

    NYSE is evaluating the financial losses stemming from this “wreck” and is fielding claims from the affected businesses as per exchange regulations.

    However, NYSE is struggling not only with the cost but also with the long-term damage caused to the stature of the world’s largest stock exchange.

    On Wednesday, the multinational financial services company, Charles Schwab Corp, released a statement slamming the NYSE.

    “Unfortunately, the NYSE has not owned up to their full responsibility, and retail investors will have to go through a lengthy process to correct orders, with no guarantee of a reasonable outcome,” stated a spokesperson from the brokerage.

    Automated DR Systems Can Decrease System Failure Risks

    Analysts widely agree that automation could help prevent such manual errors.

    “Automation eliminates human error,” according to Dennis Hahn, an analyst at Omdia. “If this [DR system] required to be manually shutdown, this is ridiculous and asking for trouble.”

    In short, when it comes to disaster planning for data centres, one of the key elements is deploying an automated DR framework.

    Future-focused DR solutions like Protera enable customisable and automated backup scheduling. Plus, they free users from manually configuring each device while also allowing them to back up their business-critical data in multiple locations.

    The result: significantly minimised human intervention, ensuring business continuity.

    With customisable RPO (recovery point) and RTO (recovery time) targets, users can schedule backups—every day, every hour, or even every few minutes—based on their backup policies.
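RPO-driven scheduling of the kind described above can be sketched in a few lines. The `BackupScheduler` below is an invented illustration, not Protera’s real interface: it simply fires a backup to every configured location whenever the time since the last backup exceeds the configured RPO window.

```python
from datetime import datetime, timedelta

# Hypothetical scheduler showing RPO-driven backups to multiple
# locations. This API is invented for illustration only.
class BackupScheduler:
    def __init__(self, rpo: timedelta, targets):
        self.rpo = rpo            # max tolerable data-loss window
        self.targets = targets    # multiple backup locations
        self.last_backup = None

    def backup_due(self, now):
        # A backup is due if none has run yet, or if the elapsed time
        # since the last one has reached the RPO window.
        return self.last_backup is None or now - self.last_backup >= self.rpo

    def run_backup(self, now):
        # Replicate business-critical data to every configured target,
        # with no per-device manual configuration.
        self.last_backup = now
        return [f"backed up to {t}" for t in self.targets]


sched = BackupScheduler(rpo=timedelta(minutes=5),
                        targets=["eu-west", "us-east"])
t0 = datetime(2023, 1, 24, 9, 30)
if sched.backup_due(t0):
    results = sched.run_backup(t0)

# Two minutes later the RPO window has not elapsed, so nothing runs.
due_again = sched.backup_due(t0 + timedelta(minutes=2))
```

Tightening the `rpo` value from daily to a few minutes is all it takes to move from a nightly-backup policy to near-continuous protection, which is the flexibility the article attributes to such tools.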

    Wrapping Up

    Unfortunately, the NYSE backup blunder is not the only high-profile operational disruption caused by manual errors in January. The recent massive system crash at the NYSE has raised concern among US retail investors. The incident indicates that the NYSE should “come up with something better” and implement automation and best practices for disaster recovery management.
