mpack wrote:What should it do, crash?
Yes, crash. Or swap.
I mean, that's already the case. Say I start my 4 GiB VM and it initially uses 1 GiB. Then I do some work on my 8 GiB host until 7 GiB out of 8 are in use. Then I go back to my VM and do some work there, e.g. open a lot of tabs in a browser; when host memory reaches 8 GiB, everything crashes, host and guests alike (assuming no swap is enabled). That's expected: if you don't want it to crash, you have to watch memory usage on the host.
So, how is that different from allowing the VM to return unused memory to the host?
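For the record, here's the arithmetic of that scenario as a tiny toy sketch (the numbers are the ones from my example above; obviously no hypervisor tracks memory this way, it's just the bookkeeping made explicit):

#include <stdio.h>

int main(void)
{
    int host_total = 8;  /* GiB of physical RAM on the host      */
    int host_other = 6;  /* GiB used by everything except the VM */
    int vm_size    = 4;  /* GiB configured for the VM            */

    /* The guest gradually touches more of its configured RAM. */
    for (int vm_touched = 1; vm_touched <= vm_size; vm_touched++) {
        int used = host_other + vm_touched;
        printf("VM touches %d GiB -> host uses %d/%d GiB: %s\n",
               vm_touched, used, host_total,
               used >= host_total ? "full, the next allocation OOMs (no swap)"
                                  : "ok");
    }
    return 0;
}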
mpack wrote:If it gives it up there is no guarantee that it can get it back
Again, in my scenario there was no guarantee either that the memory would be available to the guest when it needed it; in fact it wasn't, and everything crashed. No difference there.
Returning unused memory to the host would simply avoid having to restart the VMs in order to reclaim it.
It seems (don't take my word for it, I haven't tried) that other hypervisors, like Hyper-V or KVM, implement such a feature:
viewtopic.php?f=3&t=101406
WSL 2 (Windows Subsystem for Linux) uses the Dynamic Memory feature of Hyper-V, which is configured with multiple RAM sizes. If you were to use WSL 2, you'd configure Startup RAM and Minimum RAM to 4 GB, and Maximum RAM to 20 GB. The VM would then statically reserve 4 GB RAM from the host, and dynamically use up to 16 GB more on demand. If the host was low on physical memory, the dynamic memory could even be moved to the pagefile, severely slowing down the VM (but better than crashing).
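To make that concrete, here's a toy sketch in C of the startup/minimum/maximum policy the quote describes (my own illustration; the struct, the function and the clamping rule are assumptions about the general idea, not Hyper-V internals):

#include <stdio.h>

#define GIB (1024ULL * 1024ULL * 1024ULL)

typedef struct {
    unsigned long long startup; /* reserved from the host at boot  */
    unsigned long long minimum; /* commit never shrinks below this */
    unsigned long long maximum; /* commit never grows beyond this  */
} dyn_mem_cfg;

/* Pick the next commit size for the VM: track the guest's demand,
 * clamped between the configured minimum and maximum. */
static unsigned long long next_commit(const dyn_mem_cfg *cfg,
                                      unsigned long long demand)
{
    if (demand < cfg->minimum) return cfg->minimum;
    if (demand > cfg->maximum) return cfg->maximum;
    return demand;
}

int main(void)
{
    dyn_mem_cfg cfg = { 4 * GIB, 4 * GIB, 20 * GIB }; /* the numbers above */
    unsigned long long demands[] = { 2 * GIB, 9 * GIB, 30 * GIB };

    for (int i = 0; i < 3; i++)
        printf("guest demands %llu GiB -> host commits %llu GiB\n",
               demands[i] / GIB, next_commit(&cfg, demands[i]) / GIB);
    return 0;
}

The point is that the host-side reservation follows the guest's demand instead of staying pinned at the configured maximum.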
https://pve.proxmox.com/wiki/Dynamic_Memory_Management
Memory ballooning (KVM only) allows you to have your guest dynamically change its memory usage by evicting unused memory during run time. It reduces the impact your guest can have on memory usage of your host by giving up unused memory back to the host.
The Proxmox VE host can loan ballooned memory to a busy VM. The VM decides which processes or cache pages to swap out to free up memory for the balloon. The VM (Windows or Linux) knows best which memory regions it can give up without impacting performance of the VM.
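And here's a minimal sketch of what "giving up unused memory back to the host" boils down to on a Linux host, using plain madvise() on an anonymous mapping (again my own illustration of the mechanism, not VirtualBox or KVM code; a real virtio-balloon setup adds a guest driver and a lot of bookkeeping on top):

#include <stdio.h>
#include <string.h>
#include <sys/mman.h>

int main(void)
{
    size_t len = 64UL * 1024 * 1024; /* pretend this is 64 MiB of guest RAM */
    char *guest_ram = mmap(NULL, len, PROT_READ | PROT_WRITE,
                           MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (guest_ram == MAP_FAILED) { perror("mmap"); return 1; }

    memset(guest_ram, 0xAB, len); /* the guest touches all of it */

    /* The balloon "inflates": the guest reports the second half as unused
     * and the host drops the backing pages. The process RSS shrinks and
     * that physical memory is immediately reusable by the host. */
    if (madvise(guest_ram + len / 2, len / 2, MADV_DONTNEED) != 0)
        perror("madvise");

    /* Touching the range again works, but yields fresh zero-filled pages:
     * the memory only comes back if the host can still supply it. */
    printf("first byte of reclaimed half: %d\n", guest_ram[len / 2]);

    munmap(guest_ram, len);
    return 0;
}

Note the last part: after pages are given back, getting them again is just another allocation, which can fail or hit swap like any other. Which is exactly the "no guarantee" situation we already live with today.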
I guess there's a good reason why VirtualBox doesn't do it; I'm just trying to understand why. Is it simply not implemented yet? Does it have something to do with the fact that VirtualBox is a type 2 hypervisor? Do the VirtualBox developers think it's a bad idea? If so, why? Because it can crash? Again, it already crashes if a VM requires more memory than the host has, so no difference there.