Baltimore details?

Noticed this today: https://it.slashdot.org/story/19/05/22/208246/hackers-are-holding-baltimores-government-computers-hostage — does anyone have details on how they were infected and what got crypto’d?

I run a small municipal government network and computing infrastructure. I like to think we’ve done a lot to prevent and contain infections, and that we could recover in a reasonable amount of time:

- We’ve used TrueNAS/FreeNAS for a decade now. The primary NAS servers take hourly snapshots kept for two weeks, daily snapshots kept for two months, and weekly snapshots kept for two years (see the first sketch below).
- All data is replicated over a dedicated fiber pair (separate from our MAN backbone) to similar servers 3 miles away at another site, which act as “hot” standbys.
- Three “black case” machines full of drives at three different sites do nightly “pull” rsyncs of the NASs through the MAN, keep changed/deleted files for a period of time, and are configured to only accept SSH connections (see the second sketch below).
- Systems that aren’t file based, like database servers, are configured to do automatic data backups to the NAS at least once per day. Server images are archived at least once per month and are also kept on the NASs, so they can be spun up as VMs nearly anywhere from the last good snapshot or rsync’d archive.
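For anyone curious what that retention schedule looks like as automation, here’s a minimal pruning sketch you could cron alongside the snapshot tasks. The dataset name and the `hourly-`/`daily-`/`weekly-` snapshot naming convention are assumptions for illustration, not literally what we run:

```python
#!/usr/bin/env python3
"""Prune ZFS snapshots to the schedule above: hourly kept two weeks,
daily kept two months, weekly kept two years."""
import subprocess
from datetime import datetime, timedelta

DATASET = "tank/shares"  # hypothetical dataset name

# Snapshot-name prefix -> how long to keep it (matches the schedule above)
RETENTION = {
    "hourly": timedelta(weeks=2),
    "daily": timedelta(days=60),
    "weekly": timedelta(days=730),
}

def list_snapshots(dataset):
    """Yield (name, creation_time) for every snapshot under the dataset."""
    out = subprocess.run(
        ["zfs", "list", "-H", "-p", "-t", "snapshot",
         "-o", "name,creation", "-r", dataset],
        check=True, capture_output=True, text=True,
    ).stdout
    for line in out.splitlines():
        name, creation = line.split("\t")
        yield name, datetime.fromtimestamp(int(creation))

now = datetime.now()
for name, created in list_snapshots(DATASET):
    # Assumes snapshots are named like tank/shares@hourly-2019-05-22-1400
    prefix = name.split("@", 1)[1].split("-", 1)[0]
    keep_for = RETENTION.get(prefix)
    if keep_for and now - created > keep_for:
        subprocess.run(["zfs", "destroy", name], check=True)
```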
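And the nightly pull is conceptually just the following (hostnames, paths, and key name are made up). The important parts are that the backup box initiates the connection, so a compromised NAS has no credentials to reach into the backup copies, and that `--backup`/`--backup-dir` moves changed or deleted files aside instead of discarding them:

```python
#!/usr/bin/env python3
"""Nightly pull-style rsync, run from cron on the backup machine."""
import subprocess
from datetime import date

SOURCE = "backup@nas1.example.gov:/mnt/tank/shares/"  # hypothetical host/path
DEST = "/backup/nas1/current/"
# Files that changed or were deleted since last night get moved here,
# dated so they can age out on a predictable schedule.
ATTIC = f"/backup/nas1/attic/{date.today():%Y-%m-%d}/"

subprocess.run(
    [
        "rsync", "-a", "--delete",
        "--backup", f"--backup-dir={ATTIC}",
        "-e", "ssh -i /root/.ssh/backup_pull_key",
        SOURCE, DEST,
    ],
    check=True,
)
```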

All the traditional security measures (which are still the most effective ones, even when there are unpatched systems, zero days, etc.) are in place and have been since I took charge in 2007:

- Minimum user rights. If we have to spend some of our time installing a specialized application for a user, it’s time better spent than trying to recover from an infection.
- Segmented networks and data stores, so if someone gets infected, it is contained.
- Enforced strong passwords and tight network routing/firewall rules.
- Site-to-site VPNs for every location not on our private fiber network, so no known data flows in the open (though shadow IT becomes a concern in our modern “app” culture).
- USB drives disabled on some critical systems, since that has been the most successful attack vector we’ve witnessed (one way to do it is sketched below). Fortunately those infections were contained by the other measures and easily recovered from.
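On the USB point: for Windows machines, the standard trick is setting the USBSTOR driver’s start type to 4 (disabled) so mass-storage devices never mount. In practice this is usually pushed through Group Policy rather than a script; as a standalone sketch it’s one registry value:

```python
"""Disable the Windows USB mass-storage driver (USBSTOR) via the registry.
Run with administrative rights; takes effect on the next device plug-in."""
import winreg

KEY_PATH = r"SYSTEM\CurrentControlSet\Services\USBSTOR"
SERVICE_DISABLED = 4  # Start=4 means the driver is never loaded

with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, KEY_PATH, 0,
                    winreg.KEY_SET_VALUE) as key:
    winreg.SetValueEx(key, "Start", 0, winreg.REG_DWORD, SERVICE_DISABLED)
```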

So my question becomes: did Baltimore not have backups, or did they never even think about recovery (obviously no BC or DR plan)? Even if they had to rebuild servers from scratch (i.e. from data backups only), I would think most systems could be back up within a week. What system got infected that the damage was so wide reaching? It’s really interesting that it affected many systems in nearly every department. What was overlooked before this incident happened, and what can we all learn from it to prevent it from happening to anyone else?

Given the recent history of corruption and incompetence in Baltimore (I live in Baltimore County), I would speculate that they fell short on every point you mentioned. No one is talking, and quite frankly the citizens of the city are probably better off. One more point: they seem to be clueless about their own workflows, so they are having a hard time getting anything done.

By the way, that is why we are known as Baltimorons.

The Ars Technica article talks about their issues and how large an attack surface they have. Also, this attack appears to have been very targeted. As I understand it, it required persistent access over time, so the attackers probably disabled backups or other failsafes before deploying the ransomware.

I haven’t been following Baltimore’s fubar. I don’t do business with the city and never will, because I like to get paid, plus I don’t do kickbacks.
