As you say, this very much depends on a few things: the number of metrics turned on, their collection frequency, and the retention in your database (the number of days of history kept).
The first two determine your throughput, which drives how many cores you will need and, to a certain extent, how much memory.
The last one is all about storage capacity.
In my experience, I would expect an environment of this size to be classed as medium, and it will therefore probably need to cope with about 2,000-3,000 messages per second in your data engine.
You should estimate this as part of your monitoring governance work, so that you know roughly how much data you are collecting and have a more accurate number than my estimate.
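For example, here is a minimal sketch of that estimate in Python; the metric counts and collection intervals are purely hypothetical numbers I have made up for illustration, so substitute the figures from your own governance work:

    # Back-of-the-envelope throughput estimate.
    # The metric counts and collection intervals below are hypothetical.
    metric_groups = {
        # group name: (number of metrics, collection interval in seconds)
        "infrastructure": (120_000, 60),  # CPU, memory, disk, interface samples
        "application":    (15_000, 30),   # URL/port/process checks
    }

    msgs_per_sec = sum(count / interval for count, interval in metric_groups.values())
    print(f"Estimated data engine load: {msgs_per_sec:,.0f} messages/sec")  # ~2,500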
If it is around this amount, then I would like to see a primary hub with at least 16 cores and 24 GB of RAM, and a database server with at least 16-24 cores and 32-48 GB of RAM.
If you are keeping your raw data for around 60 days, hourly data for a further 60 days, and then daily data for up to a year, then you should allow at least 800 GB for the database. Make sure you also have plenty of space in your transaction log partition if your database is in the full recovery model.
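Continuing the sketch above, a rough storage calculation might look like this; the average row size and metric count here are again my own assumptions for illustration, not measured values:

    # Back-of-the-envelope storage estimate for the retention tiers above.
    # bytes_per_row and metric_count are assumptions for illustration only.
    msgs_per_sec  = 2_500     # raw samples per second, from the estimate above
    metric_count  = 135_000   # distinct metrics being rolled up
    bytes_per_row = 60        # assumed average row size including index overhead

    raw_rows    = msgs_per_sec * 86_400 * 60  # raw samples, kept 60 days
    hourly_rows = metric_count * 24 * 60      # hourly rollups, a further 60 days
    daily_rows  = metric_count * 365          # daily rollups, up to a year

    total_gb = (raw_rows + hourly_rows + daily_rows) * bytes_per_row / 1024**3
    print(f"Estimated data size: {total_gb:,.0f} GB")  # ~740 GB

With those assumed inputs the result lands in the same ballpark as the 800 GB above, but the real driver is your actual message rate and row size, so measure both before committing to storage.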
Please understand these are very rough figures and could vary by ±30% or more.
If you want some help getting more accurate figures, CA Services can help you.
We have calculators and have done this many times.
Regards
Rowan