@aparrish /me unpacks old internet uncle mode
Actually, scale used to mean that it supports more users/things *on the same hardware*. It was a question of how well optimized the software was. I still support this idea of scalability as a measure of good engineering (more or less; separate discussion).
Scaling to multiple cores (vertically) or machines (horizontally) has often been used to avoid optimization. Which is kinda dumb, unless you can just throw money at your problem.
> Scaling to multiple cores (vertically) or machines (horizontally) has often been used to avoid optimization.
Often, but not always.
Sometimes a task done in parallel with multiple low-power cores completes faster than when done with one high-power core. Compression, for instance. If you still don't use pigz, try it.
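For context, pigz gets its speedup by compressing independent chunks on separate cores and concatenating the results. A minimal Python sketch of that idea (the function name and chunk size are my own illustrative choices, not pigz's actual interface):

```python
# Sketch of chunked parallel compression: split the input, gzip each
# chunk on its own core, then concatenate (concatenated gzip members
# form a valid gzip stream, and gzip.decompress handles multi-member data).
import gzip
from concurrent.futures import ThreadPoolExecutor

def parallel_gzip(data: bytes, chunk_size: int = 128 * 1024) -> bytes:
    chunks = [data[i:i + chunk_size] for i in range(0, len(data), chunk_size)]
    # Threads are enough here: CPython's zlib releases the GIL while compressing.
    with ThreadPoolExecutor() as pool:
        members = pool.map(gzip.compress, chunks)
    return b"".join(members)
```

Round-tripping works because a gzip decompressor reads all members in sequence: `gzip.decompress(parallel_gzip(data)) == data`. (Chunking per member costs a little compression ratio versus one stream, which is the same trade pigz makes.)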
A lot of the time, running a task across multiple machines provides fault tolerance and the ability to safely maintain one node while the others keep working.
Let's say you have N cores. You may complete a task in (T/N)+D time instead of T time by spreading it across those N cores. The D is a delta you introduce by managing the parallelization; it's often small enough, but...
... well, first off, this is wall time, not CPU time. That may be all you care about, sure. But then you may want to...
@drq @aparrish ... scale to more tasks, M. Now your total wall time is M*((T/N)+D), or M(T/N) + MD. When M=N it becomes particularly visible that parallelization can hurt: instead of just running one task per core (each finishing in time T, all in parallel), the extra parallelization introduces an overhead of MD.
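The arithmetic above can be made concrete with made-up numbers (T, N, D, M below are illustrative values, not measurements):

```python
# Toy model of the thread's wall-time arithmetic.
# T = single-core time for one task, N = cores,
# D = overhead of managing parallelization, M = number of tasks.
T, N, D = 8.0, 4, 0.5
M = N  # the case discussed: as many tasks as cores

# One task spread across N cores: T/N + D instead of T.
one_task_parallel = T / N + D          # 8/4 + 0.5 = 2.5

# M tasks, each parallelized in turn: M*(T/N) + M*D.
all_parallelized = M * (T / N + D)     # 4 * 2.5 = 10.0

# M tasks, one core each, all running at once: just T, no overhead.
one_core_each = T                      # 8.0
```

With these numbers, parallelizing every task is slower overall (10.0 vs 8.0 units of wall time): the M·D overhead term is exactly what bites when M=N.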
It may still be the better choice for reasons of CPU utilization, or because the wall time of a single task matters to you, etc.
Anyway, optimizing that...