Recent events - the pandemic, extreme weather, the Ukraine war and global supply chain disruptions - have strained the world to breaking point.
No one can fail to be aware of the consequences of a lack of resilience in our economy and society.
Similarly, digital resilience is the ability of a system to deal with operational software or hardware failures in ways that do not destroy or corrupt data, while preserving functionality and maintaining an acceptable level of service.
For example, the UK's Financial Conduct Authority defines operational resilience as the ability of firms, financial market infrastructure and the financial sector to prevent, adapt to, respond to, recover from and learn from business disruption.
It goes without saying that banking and financial systems are increasingly complex, interconnected and built out of a myriad of software components.
But there are many co-dependencies between software and other components in digital systems - hardware, networks and human factors.
Service breaches are increasing in scale. Some are due to network failures but increasingly they are caused by software failures.
And software is at the heart of digital resilience.
This week I want to examine the growing sources of software risk and how stakeholders need to adapt to tackle these changes.
Unlike hardware failures, software failures are harder to predict.
Software risks resemble those of global warming or pandemics: major shocks are certain, but their location and timing are not.
Both the risks of software failures and the scope of their potential impact on business services are increasing.
There are three drivers behind the increase in service breaches and faulty outputs.
First, technological factors are acting as causes or accelerators of risk.
The Internet of Things is a case in point. IoT has become the internet of everything, running enormous volumes of software. That growing volume alone suggests the risk is rising, while the increasing frequency and intensity of interactions between components adds complexity that can lead to unpredictable failures.
Next, businesses' demand for rapid time-to-market development of digital products and services has pushed aside considerations of service breaches and maintenance.
Finally, our economy and society have grown ever more dependent on digitalization. This dependence, combined with business operations built on common components, has spread the effects of software malfunctions more widely.
Since society and the economy are dependent on software reliant services, the resilience of these services becomes key.
Resilience encompasses factors like availability, correct functioning of systems and protection of data, lack of harm to life or health, and lack of material damage.
Software is now a utility but not treated as such by policymakers, end users and IT professionals.
Different stakeholders need to adapt to reflect these changes.
Governments in particular have a role in making knowledge of software risks commonplace. Policymakers should be aware of the consequences of software failures and consider appropriate mitigation.
IT leaders in organizations should work with the managers responsible for service delivery to understand their organization's exposure to software failures.
IT professionals should work with policymakers and others to explain that software is different, and that not all software failures can be predicted.
In addition, IT professionals should promote guidelines that help organizations improve the operational resilience of their systems, such as the Resilience Management Model developed by the Software Engineering Institute at Carnegie Mellon University.
But the first step is to raise awareness at all levels.
Dr Jolly Wong is a policy fellow at the Centre for Science and Policy, University of Cambridge
The scene in Sheung Shui on January 11, 2018, when a software error led to a service disruption on the East Rail line.