The ReFS filesystem is commonly used for virtualization, backup, and Microsoft Exchange because of its resiliency, real-time tier optimization, faster virtual machine operations, and great scalability. But until recently, ReFS didn't support data deduplication, which was available only on NTFS-formatted volumes. Data deduplication can provide significant savings on storage costs by using block-level technology to reduce the amount of space files take up on a disk.
What is Data Deduplication?
If you have been around storage or virtualization technologies for any length of time, you have no doubt heard about data deduplication, or deduplication for short. The term is certainly used as a buzzword among storage vendors, but it is also an important technical feature of today's modern storage solutions.
What exactly is Deduplication?
Most of today's complex storage systems break data into chunks using various technologies. When storing very similar servers, files, and other items, it is very probable that many of those chunks will be identical across a number of servers stored on the same volume, as is commonly the case in storage backing a Hyper-V virtualized environment.
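The idea can be illustrated with a minimal sketch: split each file into fixed-size chunks, hash every chunk, and store only one copy per unique hash. This is a toy model, not how ReFS or Windows Server deduplication is actually implemented (real systems use variable-size chunking, compression, and on-disk chunk stores), and the 64 KiB chunk size and the sample "VM disks" below are illustrative assumptions.

```python
import hashlib

CHUNK_SIZE = 64 * 1024  # 64 KiB fixed-size chunks (illustrative only)

def dedup_stats(files):
    """Chunk each file, hash every chunk, and report total vs. unique chunks."""
    hashes = []
    for data in files:
        for i in range(0, len(data), CHUNK_SIZE):
            hashes.append(hashlib.sha256(data[i:i + CHUNK_SIZE]).hexdigest())
    return len(hashes), len(set(hashes))

# Two hypothetical "VM disks" that share most of their content,
# like guests cloned from the same template:
base = b"A" * (256 * 1024)      # 4 identical chunks of common OS data
vm1 = base + b"vm1" * 1000      # small per-VM tail
vm2 = base + b"vm2" * 1000

total, unique = dedup_stats([vm1, vm2])
print(f"{total} chunks stored as {unique} unique chunks")
# → 10 chunks stored as 3 unique chunks
```

Here ten logical chunks collapse to three physical ones: the shared base blocks are stored once, and only the per-VM tails differ, which is exactly the kind of savings deduplication exploits on volumes full of similar virtual machines.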
One common complaint about ReFS has been a lack of support tools in case things do go wrong. Microsoft apparently recognized this as one of the blockers to ReFS adoption, because they added a tool called ReFSutil.exe to Windows Server 2019, designed for triaging corruption and salvaging data from corrupted volumes. I was unable to find much documentation regarding all available options, but for salvaging specifically the command line looks to be refsutil salvage. Unfortunately, it looks like this utility only works with Server 2019 – simply copying it to Server 2016 didn't work. But I assume attaching the impacted volume to a Server 2019 machine should work. In general, ReFS volume metadata corruption is not a common issue we see in support – it is certainly no more common than with other file systems. As a reminder, the most probable cause of ReFS volume corruption is a torn metadata write caused by using a non-HLK-certified RAID controller that does not handle the flush command appropriately, including during power loss scenarios.
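Based on Microsoft's command reference, an invocation would look roughly like the sketch below: a mode flag, the corrupted source volume, a working directory for logs, and a target directory for recovered files. The drive letters and paths here are hypothetical examples, and since documentation is sparse, flags should be verified with `refsutil salvage /?` on your own build.

```
:: Quick automatic mode (sketch; paths are examples, verify flags on your system)
refsutil salvage -QA D: C:\refs-work C:\refs-recovered

:: Full automatic mode scans the entire volume and takes considerably longer
refsutil salvage -FA D: C:\refs-work C:\refs-recovered
```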
Knowing that our volume had data, the first thing we wanted to do was to verify whether ReFSutil could see the corruption. Fascinatingly, ReFSutil thought everything was fine.