Pretty sure you can't hard-limit the VFS cache currently. The max-size is really a target-size rather than a hard limit. It can and will expand beyond it, but is periodically pruned back down to the target size (how often is up to you, via a flag).
It will temporarily expand as much as it needs to accommodate files that are currently being written and/or not yet fully uploaded, even if that takes it over the target maximum. Usually that's not a problem - but on a RAM-disk it might well be, given the very limited size.
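If you want to experiment with how aggressively the cache gets pruned, the relevant knobs live on the mount itself. A minimal sketch, assuming a RAM-disk cache directory and a remote named `remote:` (both placeholders for your own setup):

```sh
# Hypothetical example: point the VFS cache at a RAM-disk and keep the
# *target* size small, with frequent pruning passes.
rclone mount remote: /mnt/remote \
  --vfs-cache-mode writes \
  --cache-dir /mnt/ramdisk/rclone-cache \
  --vfs-cache-max-size 2G \
  --vfs-cache-max-age 1h \
  --vfs-cache-poll-interval 1m
# Note: max-size is a pruning target, not a hard cap - files still being
# written or uploaded can push actual usage above 2G temporarily.
```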
I'm pretty sure a new flag would need to be implemented for this - perhaps to delay any transfers that couldn't be guaranteed to fit within an absolute cache limit. @ncw how feasible do you think this is?
Sidenote: If you are into this sort of tiered-storage stuff I highly recommend looking into Primocache. It is easily the best user-customizable solution that currently exists. HDDs, SSDs, RAM - tier them as you wish and let them fly.
That said, HDDs don't really wear down from writes the way SSDs do, so they make excellent cache-drives. HDD failure is fairly random, except that statistical data from major datacenter sources says you want to keep them as temperature-stable as feasible. That basically means "any minimal airflow" is great. Over-cooling might actually be detrimental, but leaving them without any airflow at all is bad. I run all my HDDs in bays with the fans set to the minimum speed they will spin at (inaudible), and their temperature changes very slowly over time, which is the most ideal scenario according to the best science on the topic.
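If you want to sanity-check your own drives, smartctl will show both temperature and head load/unload counts. A quick sketch - the device name is a placeholder, and attribute IDs/names vary a bit by vendor:

```sh
# Read SMART attributes from a drive (device name is a placeholder).
# On many drives, attribute 194 is Temperature_Celsius and 193 is
# Load_Cycle_Count - check the attribute names for your model.
sudo smartctl -A /dev/sdX
```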
TLDR: Don't be afraid to use an HDD as a cache-disk. Even an old one will be faster than your gigabit connection and won't be a bottleneck, and you probably aren't going to "wear it out" by writing to it.
Actually, the best advice is probably to not spin it down when idle. Load/unload cycles on the read heads are a significant statistical factor in failure. Keep it with some airflow and don't let it spin up and down all the time, and your HDD will last maximally.
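One way to do that on Linux is via hdparm. A rough sketch, assuming your drive honours these settings (support varies by model, and the device name is a placeholder):

```sh
# Stop a drive from spinning down / parking heads aggressively.
sudo hdparm -S 0 /dev/sdX    # disable the standby (spin-down) timer
sudo hdparm -B 255 /dev/sdX  # disable APM, which often causes head parking
# If the drive rejects 255, -B 254 is the least aggressive level with APM on.
```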