Keywords:
-
Abstract:
We consider the following scheduling problem:
Our goal is to execute a given amount of
arbitrarily decomposable work on a distributed machine
as quickly as possible.
The work is maintained by a central scheduler
that can assign chunks of work of an arbitrary size
to idle processors.
The difficulty is that the processing time required for
a chunk is not exactly predictable---usually, the larger
the chunk, the less predictable its processing time---and
that processors suffer a delay for each assignment.
Our objective is to minimize the total wasted time of the schedule,
that is, the sum of all delays plus the idle times of processors
waiting for the last processor to finish.
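The objective can be illustrated with a toy computation (hypothetical numbers, not from the paper): wasted time is the sum of all assignment delays plus the idle time each processor spends waiting for the last one to finish.

```python
# Toy illustration of the wasted-time objective (hypothetical numbers).
def wasted_time(finish_times, delays):
    """finish_times[p]: when processor p completes its last chunk;
    delays: the per-assignment overheads incurred across all processors."""
    makespan = max(finish_times)
    # Idle time: how long each processor waits for the last one to finish.
    idle = sum(makespan - f for f in finish_times)
    return sum(delays) + idle

# Three processors finishing at times 10, 12, 12, with four chunk
# assignments costing overhead 1 each: 4 (delays) + 2 (idle) = 6.
print(wasted_time([10, 12, 12], [1, 1, 1, 1]))  # -> 6
```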
We introduce a new deterministic model for this setting,
based on estimated ranges $[\alpha(w),\beta(w)]$ for processing times
of chunks of size $w$.
Depending on $\alpha$, $\beta$, and a measure for the overall
deviation from these estimates, we can prove matching
upper and lower bounds on the wasted time, the former being
achieved by our new \emph{balancing} strategy.
This is in sharp contrast with previous work that,
even under the strong assumption of independent,
approximately normally distributed chunk processing times,
proposed only heuristic scheduling schemes supported
merely by empirical evidence.
Our model naturally subsumes this stochastic setting, and
our generic analysis also applies to most of the existing
schemes, proving them to be non-optimal.
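The trade-off the abstract describes can be sketched in a small simulation. The sketch below is a naive fixed-chunk-size scheduler, not the paper's balancing strategy; the range estimates α(w) = w and β(w) = w + √w, the delay value, and all other parameters are illustrative assumptions. Actual processing times are pessimistically set to β(w).

```python
import heapq
import math

# Hypothetical range estimates: a chunk of size w takes between
# alpha(w) and beta(w) time units (illustrative choices only).
def alpha(w):
    return w

def beta(w):
    return w + math.sqrt(w)

def simulate(total_work, num_procs, chunk_size, delay, actual=beta):
    """Naive scheduler: hand out fixed-size chunks to idle processors.
    'actual' picks the real processing time within [alpha(w), beta(w)].
    Returns the wasted time: assignment delays plus final idle times."""
    free = [(0.0, p) for p in range(num_procs)]  # (time available, proc)
    heapq.heapify(free)
    remaining, overhead = total_work, 0.0
    while remaining > 0:
        t, p = heapq.heappop(free)      # next idle processor
        w = min(chunk_size, remaining)  # assign one chunk
        remaining -= w
        overhead += delay               # each assignment costs a delay
        heapq.heappush(free, (t + delay + actual(w), p))
    finish = [t for t, _ in free]
    makespan = max(finish)
    idle = sum(makespan - t for t in finish)
    return overhead + idle

# Small chunks: many delays; large chunks: imbalance at the end.
print(simulate(1000, 4, 10, 1.0))
print(simulate(1000, 4, 300, 1.0))
```

Small chunks accumulate many per-assignment delays, while large chunks leave processors idle waiting for the last chunk to finish; balancing these two losses against the uncertainty in processing times is exactly the problem the abstract's strategy addresses.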