Backups need trusted repository storage: secure, reliable, and efficient. Here’s a fresh example of the data loss issues Veeam is seeing in support (via Anton Gostev’s Weekly Word) from users who chose “low-end” NAS as their backup storage. What makes this case worth highlighting is the use of the NFS protocol, which removes all the additional quirks of the SMB stack that Anton has covered here before, leaving very few moving parts.
To start, a couple of data points to help you better understand the output (and to explain why Veeam prefers dealing with NFS in such cases). NFS allows critical data writes to be issued with the special FILE_SYNC flag, which lets the NFS client specify that the data must be written to disk before the NFS server replies. So upon receiving a reply, the client can be sure the data has been successfully stored on persistent media, which in turn allows applications to, for example, finalize the corresponding transaction. Asynchronous (unstable) writes, on the other hand, can later be forced to disk with the COMMIT command. You can read more about these flags here.
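As a concrete illustration of the two write behaviours (a local sketch, not Veeam’s code): on a Linux NFS mount, opening a file with O_SYNC makes the kernel send each write as an NFS WRITE with the FILE_SYNC flag, while ordinary buffered writes go out as unstable writes that a later fsync() forces down with a COMMIT. The sketch uses a throwaway temp file so it runs anywhere:

```python
import os
import tempfile

# Create a scratch file; on a real NFS mount this would live on the NAS.
fd, path = tempfile.mkstemp()
os.close(fd)

# 1) Stable write: with O_SYNC, write() should not return until the data
#    is on persistent media (NFS: WRITE ... FILE_SYNC).
fd = os.open(path, os.O_WRONLY | os.O_SYNC)
os.write(fd, b"critical transaction record")
os.close(fd)

# 2) Unstable write plus explicit flush: the data may sit in caches
#    until fsync() forces it down (NFS: unstable WRITE, then COMMIT).
fd = os.open(path, os.O_WRONLY | os.O_APPEND)
os.write(fd, b" and an appended record")
os.fsync(fd)
os.close(fd)

size = os.path.getsize(path)
os.unlink(path)
print(size)  # 27 + 23 = 50 bytes
```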
So, the setup was simply Veeam issuing NFS commands against the storage device, with Wireshark capturing all the commands and their content. After running this for some time, they spotted obvious data loss. Below are the relevant parts of the Wireshark output.
First data loss was at rewriting some data in the file:
592900 248.749308 IP1 IP2 NFS V3 WRITE Call (Reply In 592908), FH: 0x9b5a7c0e Offset: 12288 Len: 4096 FILE_SYNC
592913 248.750430 IP1 IP2 NFS V3 COMMIT Call (Reply In 592918), FH: 0x9b5a7c0e
596098 249.549631 IP1 IP2 NFS V3 READ Call (Reply In 596101), FH: 0x9b5a7c0e Offset: 12288 Len: 4096
Everything looks good here, except that after the first operation synchronously wrote actual data, the Wireshark content drill-down showed the storage returned all zeroes when Veeam tried to read this data back a bit later. So it’s not just a single flipped bit – it’s a whole block of data gone.
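This kind of silent corruption is easy to check for from the client side. Here is a hypothetical verification sketch (not Veeam’s actual test harness): synchronously write a known payload at an offset, read it straight back, and compare. With O_SYNC on a Linux NFS mount, each write goes out as an NFS WRITE with the FILE_SYNC flag, matching the failing trace above (4096 bytes at offset 12288):

```python
import os
import tempfile

def verify_sync_write(path: str, offset: int, payload: bytes) -> bool:
    """Synchronously write payload at offset, read it straight back,
    and report whether the storage returned what was written."""
    fd = os.open(path, os.O_RDWR | os.O_CREAT | os.O_SYNC, 0o600)
    try:
        os.pwrite(fd, payload, offset)
        readback = os.pread(fd, len(payload), offset)
    finally:
        os.close(fd)
    return readback == payload

# Honest storage passes this check; the NAS in the capture handed back
# 4096 zero bytes instead. (Temp file used here so the sketch runs
# anywhere; point it at an NFS mount to test real storage.)
fd, path = tempfile.mkstemp()
os.close(fd)
ok = verify_sync_write(path, 12288, os.urandom(4096))
os.unlink(path)
print(ok)
```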
Second data loss was in appending new data to the existing file:
592021 248.652950 IP1 IP2 NFS V3 CREATE Call (Reply In 592023), DH: 0x64222626/06j00000000000000e7-0000.vindex Mode: GUARDED
592023 248.653470 IP2 IP1 NFS V3 CREATE Reply (Call In 592021) -> handle: [hash (CRC-32): 0x9b5a7c0e]
592857 248.746030 IP1 IP2 NFS V3 WRITE Call (Reply In 592861), FH: 0x9b5a7c0e Offset: 40960 Len: 4096 FILE_SYNC
596080 249.546216 IP2 IP1 NFS V3 LOOKUP Reply (Call In 596079), FH: 0x9b5a7c0e -> size: 40960
As you can see, here the data was lost completely: after synchronously writing 4096 more bytes at offset 40960, the file size should have grown to 45056, yet it remained 40960.
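The metadata side of this failure can be checked the same way. Below is a hypothetical sketch (again, not Veeam’s tooling) that stably appends a chunk at the current end of file and compares the expected size against what the filesystem reports back, mirroring the trace where LOOKUP should have returned 40960 + 4096 = 45056:

```python
import os
import tempfile

def append_and_check(path: str, chunk: bytes) -> tuple:
    """Stably append chunk at the current end of file and return the
    (expected, actual) file sizes afterwards."""
    fd = os.open(path, os.O_RDWR | os.O_CREAT | os.O_SYNC, 0o600)
    try:
        offset = os.fstat(fd).st_size   # current end of file
        os.pwrite(fd, chunk, offset)    # FILE_SYNC write on an NFS mount
        expected = offset + len(chunk)
        actual = os.fstat(fd).st_size
    finally:
        os.close(fd)
    return expected, actual

# On honest storage expected == actual; the NAS in the trace reported
# the old size, as if the append never happened.
fd, path = tempfile.mkstemp()
os.close(fd)
expected, actual = append_and_check(path, b"\xab" * 4096)
os.unlink(path)
print(expected, actual)
```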
The “You Had One Job” meme fits this situation too well. Veeam did not name the NAS vendor in this particular support case, because they have been seeing similar data losses with many vendors from this market segment. But don’t assume it was some no-name brand: on the contrary, it seems to be the default vendor that comes to mind when people think of cheap NAS. Veeam will say, though, that after seeing all this, the customer immediately decided to go ahead and buy some proper storage instead.
So if this does not convince you to finally stop deploying low-end NAS as your backup storage, I don’t know what will. Honestly, I don’t even know why these devices are still on the table after so many years of recommendations against them… especially when users admit that the cost difference of going with a general-purpose server is negligible, while the benefits go well beyond just having storage you can trust.
This week, I am in Prague for Veeam Vanguards Summit 2021: Learning, networking, and fun! 🙂
Step by Step Guide Veeam B&R 11 Upgrade: Guide.
Veeam CDP and Application consistency: Blog Post.
Veeam improves the engine in version 11: Blog Post.
Veeam B&R v11 and ReFS: Blog Post.
Veeam B&R 11 – Continuous Data Protection: Blog Post.
Microsoft Teams Backup with VBO v5: Blog Post.
Protect your Backup against Ransomware: Blog Post.