when you think about it, the idea that software should scale is actually really weird. "sure this garden is nice, but how nice can it be if it doesn't grow to cover the entire surface of the earth?"

(just thinking about runyourown.social — the "More on scale" passage in particular, and how it's weird that Darius has to present "the notion that software does not have to scale" as some kind of tendentious heresy instead of, like, a default so obvious that it goes unstated)

@aparrish /me unpacks old internet uncle mode

Actually, "scaling" used to mean that the software supports more users/things *on the same hardware*. It was a question of how well optimized the software was. I still support this idea of scalability as a measure of good engineering (more or less; that's a separate discussion).

Scaling to multiple cores (vertically) or machines (horizontally) has often been used to avoid optimization. Which is kinda dumb, unless you can just throw money at your problem.

@jens @aparrish
> Scaling to multiple cores (vertically) or machines (horizontally) has often been used to avoid optimization.

Often, but not always.
Sometimes a task done in parallel on multiple low-power cores completes faster than on one high-power core. Compression, for instance. If you don't use pigz yet, try it.
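
For the curious, a minimal timing sketch (assuming pigz is installed; "bigfile" is just a placeholder name):

```python
import subprocess, time

# Rough comparison: single-threaded gzip vs. pigz on 8 threads.
# -k keeps the input file, -f overwrites stale output,
# and pigz's -p sets the thread count.
for cmd in (["gzip", "-k", "-f", "bigfile"],
            ["pigz", "-k", "-f", "-p", "8", "bigfile"]):
    start = time.perf_counter()
    subprocess.run(cmd, check=True)
    print(cmd[0], round(time.perf_counter() - start, 2), "s")
```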

A lot of the time, running a task on multiple machines buys you fault tolerance and the ability to safely take one node down for maintenance while the others keep working.

@drq @aparrish Of course some tasks benefit from parallelization. But that's rarely the case when people throw around the term "scale".

Let's say you have N cores. You may complete a task in (T/N)+D time instead of T time by spreading it across those N cores. The D is a delta you introduce by managing the parallelization; it's often small enough, but...

... well, first off, this is wall time, not CPU time. That may be all you care about, sure. But then you may want to...

@drq @aparrish ... scale to more tasks, M. Now your total wall time is M*((T/N)+D), or M(T/N) + MD. With M=N, the M(T/N) term is just T again, so it becomes particularly visible that parallelization can hurt: instead of just using a single core per task (each finishing in time T, in parallel), your extra parallelization introduces a pure overhead of MD.

It may still be the better choice for reasons of CPU utilisation, or because your single-task wall time matters, etc.

Anyway, optimizing that...

@drq @aparrish ... initial single-core T is always going to get you benefits without such a cost.
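
To put toy numbers on that (T, N, D, M as defined upthread; the values are made up purely for illustration):

```python
# Toy model of the wall-time argument above.
T = 100.0  # single-core time for one task
N = 8      # cores
D = 2.0    # per-task overhead of managing the parallelization
M = N      # number of tasks: the M=N case

one_task_parallel = T / N + D         # one task spread over N cores: 14.5
all_tasks_parallel = M * (T / N + D)  # M such tasks back to back: 116.0
one_core_per_task = T                 # each task pinned to its own core: 100.0

# With M=N, M*(T/N) collapses back to T, so M*D is the whole difference:
print(all_tasks_parallel - one_core_per_task)  # 16.0 == M * D
```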
