(just thinking about https://runyourown.social — the "More on scale" passage in particular, and how it's weird that Darius has to present "the notion that software does not have to scale" as some kind of tendentious heresy instead of, like, a default so obvious that it goes unstated)
@aparrish /me unpacks old internet uncle mode
Actually, scale used to mean supporting more users/things *on the same hardware*. It was a question of how well-optimized the software was. I still support this idea of scalability as a measure of good engineering (more or less; that's a separate discussion).
Scaling to multiple cores (vertically) or machines (horizontally) has often been used to avoid optimization. Which is kinda dumb, unless you can just throw money at your problem.
> Scaling to multiple cores (vertically) or machines (horizontally) has often been used to avoid optimization.
Often, but not always.
Sometimes a task done in parallel on multiple low-power cores completes faster than on one high-power core. Compression, for instance: if you still don't use pigz, try it.
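A rough sketch of the idea behind pigz, in Python rather than C: split the input into chunks, compress each chunk on its own thread, and concatenate the resulting gzip members (concatenated members are a valid gzip stream, which is why pigz output is readable by plain gunzip). The chunk size and worker count here are arbitrary, not pigz's actual defaults.

```python
import gzip
from concurrent.futures import ThreadPoolExecutor

def compress_parallel(data: bytes, chunk_size: int = 128 * 1024, workers: int = 4) -> bytes:
    """Compress chunks in parallel, return the concatenated gzip members."""
    chunks = [data[i:i + chunk_size] for i in range(0, len(data), chunk_size)]
    # gzip.compress releases the GIL inside zlib, so threads genuinely
    # run in parallel here despite this being Python.
    with ThreadPoolExecutor(max_workers=workers) as pool:
        members = pool.map(gzip.compress, chunks)
    return b"".join(members)

payload = b"scaling discussion " * 100_000
packed = compress_parallel(payload)
# Multi-member gzip streams round-trip through a single decompress call.
assert gzip.decompress(packed) == payload
```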
A lot of times, running a task across multiple machines gives you fault tolerance and the ability to safely take one node down for maintenance while the others keep working.
Let's say you have N cores. You may complete a task in (T/N)+D time instead of T time by spreading it across those N cores. The D is the overhead you introduce by managing the parallelization; it's often small enough, but...
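The (T/N)+D model above is trivial to play with; here's a sketch (the concrete numbers are made up for illustration). The takeaway: as N grows, wall time approaches D, so the coordination overhead sets the floor on how fast you can go.

```python
def wall_time(T: float, N: int, D: float) -> float:
    # Serial work T split evenly over N cores, plus coordination overhead D.
    return T / N + D

def speedup(T: float, N: int, D: float) -> float:
    # How much faster the parallel run is than the serial one.
    return T / wall_time(T, N, D)

# Example: a 10-second task on 4 cores with 0.5s of overhead
# finishes in 10/4 + 0.5 = 3.0 seconds of wall time (~3.3x speedup).
print(wall_time(10, 4, 0.5), speedup(10, 4, 0.5))
```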
... well, first off, this is wall time, not CPU time. That may be all you care about, sure. But then you may want to...
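The wall-time/CPU-time distinction can be seen directly with the two clocks Python's stdlib exposes: `perf_counter` measures wall time, while `process_time` sums CPU time across all threads of the process. A machine-dependent sketch (the measured numbers will vary):

```python
import time
import zlib
from concurrent.futures import ThreadPoolExecutor

# Compress the same buffer on several threads and compare the two clocks.
data = bytes(range(256)) * 40_000  # ~10 MB of not-too-compressible input
wall_start, cpu_start = time.perf_counter(), time.process_time()
with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(zlib.compress, [data] * 4))
wall = time.perf_counter() - wall_start
cpu = time.process_time() - cpu_start
# On a multi-core machine, cpu can exceed wall: the process burned CPU
# on several cores at once. Saving wall time is not saving work.
print(f"wall: {wall:.3f}s, cpu: {cpu:.3f}s")
```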