I have a question regarding the VBoxManage "--discard" option on Windows hosts. I recently upgraded my main system (based on Windows 8.1 x64 Update 1 Enterprise) to use a single large SSD drive. To minimize the write rate and maximize the free space, I have configured my Linux guests to see their VDIs as non-rotational media (SSDs) and manually set "--discard" to "on" with VBoxManage. After issuing "fstrim /" on the Linux guests, the system VDIs lost most of their weight (awesome x 1), which means the guests properly issue TRIM commands to the virtual disks and VirtualBox properly handles them by shrinking the VDI files (awesome x 2).
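For reference, here is roughly how I set it up; the VM name, controller name, port and VDI path are only example values, not my exact setup:

    VBoxManage storageattach "Linux guest" --storagectl "SATA" --port 0 --device 0 \
        --type hdd --medium "linux-guest.vdi" --nonrotational on --discard on

and then, inside the guest:

    fstrim /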
Now, can someone tell me how this VDI size trimming feature is implemented on Windows hosts? More importantly, does it affect the wear-out rate of the underlying host SSD?

The most straightforward implementation I could think of would be a mapping table from "virtual physical blocks" to VDI blocks. If a certain VDI block becomes empty and sits in the middle of the file, we *copy* the last VDI block into its place and update the mapping table accordingly. Then we can truncate the VDI image to free the unused space (a rough sketch of what I mean follows below). But this copy requires erase/write operations, and I guess it will wear the SSD out more than not enabling "--discard" on the VDI at all. Did I guess correctly, or is there something smarter at work under the hood?

I apologize if the question seems too technical. This is my first SSD drive and I am totally paranoid about its premature death.
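To make my guess concrete, here is a rough Python sketch of the scheme I have in mind. It is purely an illustration of my assumption, not how VirtualBox actually does it; the block size, the file layout and the mapping dictionary are all made up:

    import os

    BLOCK_SIZE = 4096  # made-up block size, purely for illustration

    def discard_block(path, mapping, virt_index):
        """Free one virtual block and shrink the backing file.

        `mapping` is a dict {virtual block index -> block slot inside the file}.
        This mimics my guess: copy the file's last slot over the freed slot,
        fix up the mapping, then truncate the file by one block.
        """
        freed_slot = mapping.pop(virt_index)  # slot that just became unused
        with open(path, "r+b") as f:
            file_blocks = os.path.getsize(path) // BLOCK_SIZE
            last_slot = file_blocks - 1
            if freed_slot != last_slot:
                # This is the extra write I am worried about: the last block
                # gets relocated into the hole left by the discarded block.
                f.seek(last_slot * BLOCK_SIZE)
                data = f.read(BLOCK_SIZE)
                f.seek(freed_slot * BLOCK_SIZE)
                f.write(data)
                # Point whichever virtual block lived in the last slot at its new home.
                for virt, slot in mapping.items():
                    if slot == last_slot:
                        mapping[virt] = freed_slot
                        break
            # Give the freed space back to the host file system.
            f.truncate(last_slot * BLOCK_SIZE)

If that is really what happens, then every discard of a block in the middle of the image costs one extra block write on the host SSD, which is exactly the extra wear I am afraid of.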
Thanks!