Before publishing any modelled scenario, it's always important to "sensitivity test" it: make small incremental changes to each input variable and check whether the result stays stable.
If the result changes wildly after a small change to one particular variable, you either need to be very confident in that variable, or you need to rework your model until it is more stable.
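As a rough illustration of what this looks like in practice, here is a minimal sketch in Python. The model itself is a made-up toy (simple exponential growth from an initial case count and a daily growth rate, nothing from any real published model); the point is the perturbation loop, which nudges each input up and down by 1% and reports how much the output moves.

```python
def model(initial_cases, growth_rate, days):
    """Toy exponential-growth model (hypothetical, for illustration only)."""
    return initial_cases * (1 + growth_rate) ** days

def sensitivity_test(model, params, perturbation=0.01):
    """Perturb each parameter up and down by a small fraction and
    return the relative change in the model's output for each case."""
    baseline = model(**params)
    results = {}
    for name, value in params.items():
        for sign in (+1, -1):
            perturbed = dict(params, **{name: value * (1 + sign * perturbation)})
            results[(name, sign)] = (model(**perturbed) - baseline) / baseline
    return results

params = {"initial_cases": 100, "growth_rate": 0.2, "days": 30}
for (name, sign), change in sensitivity_test(model, params).items():
    print(f"{name} {'+' if sign > 0 else '-'}1%: output changes {change:+.1%}")
```

Even this toy shows the asymmetry you're looking for: a 1% change in the initial case count moves the output by exactly 1%, but a 1% change in the growth rate moves it by roughly 5%, because the growth rate sits inside an exponent. That second kind of variable is the one you either need to be very confident in, or need to restructure the model around.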
I assume that all the published models have been checked in this way, but given how little we know about Coronavirus and the urgency to get these models published, I suspect that none of them are particularly stable or reliable...