(just thinking about https://runyourown.social — the "More on scale" passage in particular, and how it's weird that Darius has to present "the notion that software does not have to scale" as some kind of tendentious heresy instead of, like, a default so obvious that it goes unstated)
@aparrish /me unpacks old internet uncle mode
Actually, scale used to mean that the software supports more users/things *on the same hardware*. It was a question of how well-optimized the software was. I still support this idea of scalability as a measure of good engineering (more or less; that's a separate discussion).
Scaling to multiple cores (vertically) or machines (horizontally) has often been used to avoid optimization. Which is kinda dumb, unless you can just throw money at your problem.
> Scaling to multiple cores (vertically) or machines (horizontally) has often been used to avoid optimization.
Often, but not always.
Sometimes a task done in parallel on multiple low-power cores completes faster than on one high-power core. Compression, for instance. If you still don't use pigz, try it.
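A minimal Python sketch of the same block-parallel idea (not pigz itself; the chunk size, the random payload, and the `compress_chunk` helper are just placeholders for illustration):

```python
# Sketch: compress independent blocks of data on a process pool,
# the same idea pigz applies to gzip streams.
import os
import time
import zlib
from multiprocessing import Pool

CHUNK = 1 << 20  # 1 MiB blocks; arbitrary choice for the demo

def compress_chunk(chunk: bytes) -> bytes:
    return zlib.compress(chunk, level=6)

def main() -> None:
    data = os.urandom(64 * CHUNK)  # stand-in payload; real files compress better
    chunks = [data[i:i + CHUNK] for i in range(0, len(data), CHUNK)]

    t0 = time.perf_counter()
    serial = [compress_chunk(c) for c in chunks]  # one core
    t1 = time.perf_counter()

    with Pool() as pool:  # one worker per core by default
        parallel = pool.map(compress_chunk, chunks)
    t2 = time.perf_counter()

    assert serial == parallel  # same output, just produced faster
    print(f"serial:   {t1 - t0:.2f}s")
    print(f"parallel: {t2 - t1:.2f}s")

if __name__ == "__main__":
    main()
```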
And a lot of the time, running a task across multiple machines buys you fault tolerance, plus the ability to safely take one node down for maintenance while the others keep working.
@drq @aparrish ... scale to more tasks, M. Now your total wall time is M*((T/N)+D), or M(T/N) + MD. The M=N case makes it particularly visible that parallelization can hurt: instead of just giving each task its own single core (so all finish in time T, in parallel), the extra parallelization adds an overhead of MD.
It may still be the better choice for reasons of CPU utilisation, or because your single-task wall time matters, etc.
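Plugging hypothetical numbers into that (T, N, D, M as above; the values themselves are made up):

```python
# M tasks that each take time T on one core, N cores,
# and a per-task parallelization overhead D.
T, N, D, M = 10.0, 8, 0.5, 8  # the M == N case, where the overhead shows

# Strategy A: parallelize every task across all N cores and run the
# tasks one after another: each costs T/N + D, total M*(T/N) + M*D.
parallelized = M * (T / N + D)

# Strategy B: give each task its own core and run all M at once;
# with M == N they all finish after time T, with no extra overhead.
one_core_per_task = T

print(f"parallelized:      {parallelized:.1f}")       # 14.0, i.e. T + M*D
print(f"one core per task: {one_core_per_task:.1f}")  # 10.0
```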
Anyway, optimizing that...