Throughout history, bank failures have been a regular occurrence. From the Panic of 1837 to the global financial crisis of 2008, banks have failed for myriad reasons, ranging from individual miscalculations to macroeconomic instability.
In the 19th century, bank failures were driven primarily by overexpansion and speculation. For example, when New York City banks loaned money to Western states during the period of rapid growth that followed the War of 1812, many borrowers defaulted on their loans, triggering banking panics. In addition, many state and local governments ran up unsustainable budget deficits in the aftermath of the war, which they sought to cover with bond sales. Heavily indebted banks often sold these bonds at steep discounts as a way of staying afloat, a strategy that ultimately led to many lending institutions' demise.
The Great Depression saw a near-unprecedented wave of bank failures throughout the United States as depositors, shaken by the stock market crash and widespread economic malaise, rushed to withdraw their money. It would take decades for confidence in the banking system to be restored; until then, it seemed as though almost any bank could go under at any time.
Since then, however, regulatory reforms have made banking much more secure through restrictions on speculative investments, enhanced capital-reserve requirements, and oversight from government entities such as the Federal Deposit Insurance Corporation (FDIC). Banks remain vulnerable in times of economic hardship, but they are now far better positioned against potential disaster than ever before.
Financial crises will no doubt recur, given capitalism's inherent volatility and risk. Even so, the prevailing wisdom is that individual banks are now far less likely to fail than at any point in history, thanks to new laws that protect depositors and require greater transparency from key players within our financial system.