FreeNAS Snapshot Crypto Scenario

So, something crept to mind today reading about more crypto/ransomware, and I thought I’d ask considering I’ve not yet had to deal with a scenario like I’ll give below. What would FreeNAS do in such a scenario? What would you do?

You have a single central FreeNAS system, with one large z3 pool, many datasets, all datasets snapped at least daily, and the pool at 80% Used.

You have a rogue computer with access to 75% of all datasets and it gets crypto/ransomwared. As a consequence, the datasets get crypto’d, but there are snapshots that go back two weeks.

Now you walk in the next day and realize what’s happened, and after throwing the bad employee in the nearest closet for a timeout, you praise the zfs gods for snapshots, then realize the pool is at 80% and get stressed again…

The questions then become:

With the pool at 80%, and crypto changing 75% of it, would it even properly snapshot that night?

If it would, what’s the outcome? If it wouldn’t, would it attempt to fill the pool first then fail?

Knowing your snapshots go back two weeks, in either case of the snapshot outcome from above, could you roll back all your datasets? If so, how, considering it’s 80%+ full?

I’ll also add one last wrench into the mix: suppose that central box has a remote backup/sync FreeNAS node; what happens with that if a sync is attempted that night?

I know the above is a sort-of worst case; however, Chance Favors the Prepared Mind…


When you roll back to a snapshot, all the current data is replaced by the data in the snapshot. If there are mass changes that would exceed the available space, the snapshot should fail, but the older snapshots are still kept. You have some good questions and perhaps I need to make a part two of this video.
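For anyone wanting to see what that looks like at the command line, here is a rough sketch (the pool, dataset, and snapshot names below are just examples, not anything from this thread):

```
# See how much space the live data, its snapshots, and new writes are using
zfs list -o space tank/shares/docs

# List the snapshots so you can pick the last clean one
zfs list -t snapshot -r tank/shares/docs

# Roll back; -r also destroys any snapshots newer than the one you target
zfs rollback -r tank/shares/docs@auto-20190401.0000-2w
```

The USEDSNAP and USEDDS columns from the first command give you an idea of how much of the pool the crypto’d writes versus the snapshots are pinning before you commit to the rollback.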


Thanks Tom, I did watch that when you released it, and it answers the initial questions most would have with ransomware and zfs; it’s why I switched years ago.

I, and I’m sure others, would thoroughly appreciate a part two that went over my worst-case scenario above. At the end of the day, we can’t control what others do, so prepare for the worst, hope for the best is about all we can do sometimes.

"Two is One, One is None. Have 2 FreeNAS systems would be the ideal but budgets always don’t allow so, 2nd choice would be an additional pool. Snapshots are a great idea but they are not backups. Third choice, disaster recovery service where you can restore and use till in house system is clean and up. Running a system at 80% is playing Russian Roulette with an AK.

I know these things, that’s why it’s a hypothetical, but possible, scenario. :wink:

I’m not sure an AK is a decent Russian Roulette gun though, they like to jam up. I’d rather play with an old 6-shooter that’s loaded. :crazy_face:


@faust
I like the way you think; it’s true Red Teaming, trying to analyze the potential faults in a system or systems of systems. Murphy’s ghost is always lurking, waiting for that moment.


And we all hate that damn :ghost: Murphy!
Unless it’s RoboCop’s Murphy, he’s cool, provided it’s Peter Weller of course. :robot:

Here’s something I would try…

FIRST, stop making more snapshots… we don’t need snapshots of infected data. Delete any other snapshots of the infected data; we don’t want to be recovering any of those. Next, check which was the last good data… and restore that.
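Something like this at the shell, after pausing the periodic snapshot task in the GUI (the dataset and snapshot names are placeholders; use whatever your snapshot tasks actually generate):

```
# Dry run first: see exactly which snapshots a destroy would remove
zfs destroy -nv tank/shares/docs@auto-20190402.0000-2w%auto-20190409.0000-2w

# Destroy the range of snapshots taken after the infection
zfs destroy -v tank/shares/docs@auto-20190402.0000-2w%auto-20190409.0000-2w

# Roll back to the last known-good snapshot
# (-r removes anything newer than it that is still left)
zfs rollback -r tank/shares/docs@auto-20190401.0000-2w
```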

If you have multiple datasets and you use multiple snapshots, you should be able to recover one dataset at a time, which should theoretically leave enough free space to recover said data.

I would do the smallest dataset first, thereby freeing more space along the way, enough to finally handle the largest dataset last.
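A quick-and-dirty loop for that ordering might look like the following; it assumes every dataset under tank/shares carries the same last-good snapshot name, which is an example value here:

```
#!/bin/sh
# Roll every dataset under tank/shares back to the same known-good snapshot,
# smallest dataset first, so each rollback frees space for the next one.
GOOD=auto-20190401.0000-2w

for ds in $(zfs list -H -r -o name -s used tank/shares); do
    if zfs list -t snapshot "${ds}@${GOOD}" >/dev/null 2>&1; then
        echo "Rolling back ${ds} to @${GOOD}"
        zfs rollback -r "${ds}@${GOOD}"
    fi
done
```

If the datasets were snapped by different tasks with different naming, you would have to pick the last clean snapshot per dataset by hand instead of using one GOOD name.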
