Dear KV,
My team and I are selecting a new server platform for our project and trying to decide whether we need more cores or higher-frequency CPUs, which seems to be the main trade-off on current server systems. Our system is deployed on the highest-end, and therefore highest-frequency, servers we could buy two years ago, and we run these systems at 100% CPU utilization at all times. Our deployment does not consume much memory, just a lot of CPU cycles, so we are again leaning toward buying the latest, top-of-the-line servers from our vendor. We have looked at refactoring some of our software, but from a cost perspective, expensive servers are cheaper than expensive programmer time, which is better spent adding new features than reworking old code. In your opinion, what is more important in modern systems: frequency or core count?
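[Editor's note: the trade-off the letter writer describes is often framed with Amdahl's law, which says that adding cores speeds up only the parallel fraction of a workload, while a clock-frequency bump speeds up serial and parallel work alike (assuming the workload is CPU-bound, as described above). The following is a minimal back-of-envelope sketch; the core counts, clock gain, and parallel fractions are illustrative assumptions, not figures from the letter writer's system.]

```python
# Back-of-envelope comparison of a clock-frequency bump vs. doubling cores,
# using Amdahl's law. All hardware numbers here are hypothetical.

def amdahl_speedup(parallel_fraction: float, cores: int) -> float:
    """Speedup when the parallel fraction of the work runs on `cores` cores."""
    return 1.0 / ((1.0 - parallel_fraction) + parallel_fraction / cores)

def compare(parallel_fraction: float,
            base_cores: int = 16,
            more_cores: int = 32,
            freq_gain: float = 1.15) -> None:
    """Compare a hypothetical 15% clock bump against doubling the core count."""
    # A frequency increase helps serial and parallel work equally,
    # provided the workload is CPU-bound rather than memory-bound.
    freq_speedup = freq_gain
    # Extra cores help only the parallel fraction of the work.
    core_speedup = (amdahl_speedup(parallel_fraction, more_cores)
                    / amdahl_speedup(parallel_fraction, base_cores))
    print(f"parallel fraction {parallel_fraction:.0%}: "
          f"+{freq_gain - 1:.0%} clock -> {freq_speedup:.2f}x, "
          f"{base_cores}->{more_cores} cores -> {core_speedup:.2f}x")

if __name__ == "__main__":
    for p in (0.50, 0.90, 0.99):
        compare(p)
```

With a 50% parallel fraction, doubling cores yields only about a 1.03x gain, so the clock bump wins; at 99% parallel, doubling cores yields roughly 1.76x and easily beats it. The point of the sketch is that the answer depends on how parallel the workload actually is, not on the hardware catalog alone.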