What I’d recommend is setting up a few test systems with 2-3 GB of swap or more, and monitoring what happens over the course of a week or so under varying memory-load conditions. As long as you haven’t encountered severe memory starvation during that week – in which case the test will not have been very useful – you will probably end up with some number of MB of swap occupied.
And
[… On Linux Kernel > 4.0] having a swap size of a few GB keeps your options open on modern kernels.
And finally
For laptop/desktop users who want to hibernate to swap, this also needs to be taken into account – in this case your swap file should be at least your physical RAM size.
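If you want to do that kind of monitoring, here's a minimal sketch of one way to do it – this assumes a Linux box with /proc/meminfo, and the 5-minute interval is just a number I picked:

```python
#!/usr/bin/env python3
"""Log swap usage over time by sampling /proc/meminfo.

A sketch of the week-long monitoring suggested above; leave it
running (e.g. in tmux, redirecting output to a file) and look at
the peak value afterwards.
"""
import time
from datetime import datetime


def swap_used_mb():
    """Return swap currently in use, in MB, parsed from /proc/meminfo."""
    fields = {}
    with open("/proc/meminfo") as f:
        for line in f:
            key, value = line.split(":", 1)
            fields[key] = int(value.split()[0])  # values are in kB
    return (fields["SwapTotal"] - fields["SwapFree"]) / 1024


if __name__ == "__main__":
    while True:
        print(f"{datetime.now().isoformat()} swap_used={swap_used_mb():.1f} MB",
              flush=True)
        time.sleep(300)  # sample every 5 minutes (arbitrary choice)
```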

Yeah, either proxy editing (where you edit low-res versions of your footage until export).
Or you could try a more suitable intermediate codec.
I presume you are editing h.264 or something else with “temporal compression”. Essentially there are a few full frames (keyframes) every second, and the other frames are stored as changes relative to them. Massively reduces file size, but makes random access expensive as hell: to show an arbitrary frame, the decoder has to go back to the last keyframe and replay every change since.
Something like ProRes, DNxHD… I’m sure there are more. They store every frame in full, so decoding never requires loading the last keyframe and applying the changes up to the current frame.
You will end up with massive files (compared to h.264 etc.), but they should run a lot better for editing.
And they are visually lossless, so you convert the source footage once and then just work away.
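If you want to try the intermediate-codec route, here's a sketch of the conversion step using ffmpeg's prores_ks encoder. It assumes ffmpeg is on your PATH, and the directory names are made up:

```python
#!/usr/bin/env python3
"""Transcode h.264 source clips to ProRes 422 for smoother editing.

A sketch, not a polished tool: ffmpeg must be installed, and the
paths here are hypothetical. ProRes is intra-frame only, so every
frame decodes independently and scrubbing stays cheap.
"""
import subprocess
from pathlib import Path

SOURCE_DIR = Path("source_footage")        # hypothetical input directory
OUT_DIR = Path("prores_intermediates")
OUT_DIR.mkdir(exist_ok=True)

for clip in SOURCE_DIR.glob("*.mp4"):
    out = OUT_DIR / clip.with_suffix(".mov").name
    subprocess.run([
        "ffmpeg", "-i", str(clip),
        "-c:v", "prores_ks",      # one of ffmpeg's ProRes encoders
        "-profile:v", "2",        # 2 = ProRes 422 (standard profile)
        "-c:a", "pcm_s16le",      # uncompressed audio, usual in .mov
        str(out),
    ], check=True)
```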
Really high-res projects will combine both of these: proxy editing with intermediate codecs.
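Combining the two just means downscaling while you transcode. Same assumptions as above; profile 0 is ffmpeg's lightest ProRes flavour (422 Proxy), and 540p is an arbitrary proxy resolution:

```python
#!/usr/bin/env python3
"""Generate low-res ProRes proxies: proxy editing + intermediate codec.

A sketch with the same assumptions as above (ffmpeg on PATH,
hypothetical directory names).
"""
import subprocess
from pathlib import Path

SOURCE_DIR = Path("source_footage")
PROXY_DIR = Path("proxies")
PROXY_DIR.mkdir(exist_ok=True)

for clip in SOURCE_DIR.glob("*.mp4"):
    subprocess.run([
        "ffmpeg", "-i", str(clip),
        "-vf", "scale=-2:540",    # downscale to 540p; -2 keeps the width even
        "-c:v", "prores_ks",
        "-profile:v", "0",        # 0 = ProRes 422 Proxy, the lightest profile
        "-c:a", "pcm_s16le",
        str(PROXY_DIR / (clip.stem + "_proxy.mov")),
    ], check=True)
```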