We argue that our work responds to a serious imperative confronting the financial community (as well as other critical infrastructure providers). As noted above, many enterprises today opt for asynchronous or semi-synchronous remote mirroring solutions despite the risks they pose, because synchronous solutions are perceived as prohibitively expensive in terms of performance [22]. In effect, these enterprises have concluded that there is simply no way to maintain a geographically remote backup at the update rates seen within their datacenters. Faced with this apparent impossibility, they literally risk disaster.
Nor is it feasible simply to legislate a solution, because today's technical options are inadequate. Financial systems are under enormous competitive pressure to support very high transaction rates, and as clearing times continue to shrink toward immediate settlement, the amount of money at risk from even a small loss of data will continue to rise [20]. Asking a bank to operate in slow motion so that it can continuously and synchronously maintain a remote mirrored backup is simply not practical: the institution would fail because it could not compete.
Our work cannot completely eliminate this problem: for the largest transactions, synchronous mirroring (or some other means of guaranteeing that data will survive any possible outage) will remain necessary. Nonetheless, we believe there may be a very large class of applications with intermediate data-stability needs. Our hypothesis is that if the window of vulnerability can be reduced significantly, then even in a true disaster that takes the primary site offline and simultaneously disrupts the network, the challenge of restarting from the backup will be greatly reduced. Institutions betting on network-sync would still be making a bet, but we believe it is a much less extreme one, and much easier to justify.