Journal of Scheduling, Volume 21, Issue 5, pp 505–516

Models and algorithms for energy-efficient scheduling with immediate start of jobs

  • Akiyoshi Shioura
  • Natalia V. Shakhlevich
  • Vitaly A. Strusevich
  • Bernhard Primas
Open Access


Abstract

We study a scheduling model with speed scaling for machines and the immediate start requirement for jobs. Speed scaling improves the system performance, but incurs an energy cost. The immediate start condition implies that each job should be started exactly at its release time. Such a condition is typical for modern Cloud computing systems with abundant resources. We consider two cost functions, one that represents the quality of service and the other that corresponds to the cost of running. We demonstrate that the basic scheduling model to minimize the aggregated cost function with n jobs is solvable in \(O(n\log n)\) time in the single-machine case and in \(O(n^{2}m)\) time in the case of m parallel machines. We also address additional features, e.g., the cost of job rejection or the cost of initiating a machine. In the case of a single machine, we present algorithms for minimizing one of the cost functions subject to an upper bound on the value of the other, as well as for finding a Pareto-optimal solution.


Keywords: Speed scaling · Energy minimization · Immediate start · Bicriteria optimization

1 Introduction

In this paper, we study scheduling models that address two important aspects of modern computing systems: machine speed scaling for time and energy optimization and the requirement to start jobs immediately at the time they are submitted to the system. The first aspect, speed scaling, has been the subject of intensive research since the 1990s, see Yao et al. (1995), and has become particularly important recently, with the increased attention to energy-saving demands, see the surveys by Albers (2009, 2010a), Jing et al. (2013) and Gerards et al. (2016). It reflects the ability of modern computing systems to change their clock speeds through the technique known as Dynamic Voltage and Frequency Scaling (DVFS). The higher the speed, the better the performance from the users’ perspective, but energy usage and other computation costs increase. The goal is to select the right speed value from the full spectrum of available speeds to achieve a desired trade-off between performance and energy. DVFS techniques have been successfully applied in Cloud data centers to reduce the energy usage, see, e.g., von Laszewski et al. (2009), Wu et al. (2014), do Lago et al. (2011).

The second aspect, the immediate start condition, is motivated by the advancements of modern Cloud computing systems, and it is widely accepted by practitioners. This feature is not typical for the traditional scheduling research dealing with scenarios arising from manufacturing. In such systems, jobs compete for limited resources. They often have to wait until resources become available, and job starting times can be delayed if the system is busy.

In modern computing systems (Clouds and data centers), processing units are no longer scarce resources, but quite opposite, abundant resources; see, e.g., Kushida et al. (2015). Clouds give the illusion of infinite computing resources available on demand. Cloud providers agree with customers on a service-level agreement (SLA) and sell computing services to customers as utilities. Special mechanisms allow Cloud providers to ensure that the actual demand for resources is met at practically any point in time (Armbrust et al. 2009, 2010; Jennings and Stadler 2015). In the modern competitive market, Cloud providers achieve high availability of resources, promise their customers instant access to resources and allow customers to monitor how that promise is kept (Aceto et al. 2013). These features are unprecedented in the history of IT and have now become a standard.

The infrastructure for Cloud computing systems is provided by data centers. Data centers execute a large number of computing processes (which we call jobs from now on). In order to guarantee on-demand access, the execution of a job needs to be started immediately upon its submission to the system. Customers experiencing waiting times in order to get their jobs started become unsatisfied with the service and are likely to change the provider next time (Armbrust et al. 2010). It is therefore in the interest of providers to start job execution as soon as jobs are submitted to the system. This phenomenon is our motivation for what we call the immediate start condition.

The optimization criteria are typically of two types: those related to the system performance and the quality-of-service (QoS) provision, as well as those related to the operational cost of the processing system. The criteria of the first type may represent the mean flow times of jobs, total (or mean) tardiness or a more general function F defined as the sum of penalty functions of job completion times. The second objective G is the sum of operational costs for using individual resources, each of which depends on the time the resource is used. It can be linear, to model the monetary resource usage cost, or convex, to model the energy consumption cost.

Observing the immediate start condition is one of the key priorities for resource providers, and it is usually included in the QoS protocols. The case study presented by Garg et al. (2013) characterizes a possible waste of execution time due to the resource unavailability. It is estimated as low as 0.5% for customers of Amazon EC2 and 0.1% for Windows Azure customers. The Rackspace web hosting company guarantees 100% network availability and 99.95–99.99% platform availability; see Rackspace (2015); in reality Rackspace often achieves 100% availability of its resources (Garg et al. 2013). Nowadays, special software is being developed in order to strengthen service-level agreements (SLAs) for customers by fixing the maximum response time, which can be as small as a few seconds (Iqbal et al. 2009). Due to a strong competition in the area, the customers choose providers who are prepared to demonstrate that handling the submitted jobs is their top priority.

The immediate start requirement is not an artificially strong assumption, but a fact of today’s practice. It is widely accepted in distributed computing, yet generally overlooked by the scheduling community, where the traditional perception remains that of limited resources and acceptable delays to starting times.

In this paper, we initiate the study of the immediate start off-line scheduling models, assuming that accurate job characteristics can be available in advance through historical analysis, prediction techniques or a combination of both; see, e.g., Moreno et al. (2014) for off-line scenarios in Cloud computing. To satisfy the immediate start condition, we recommend a policy of changing the processing speeds, so that a certain measure of the schedule quality and the cost of speeds (normally understood as energy) are both taken into consideration. The owners of submitted tasks and the providers of processing facilities both want an early completion of tasks, as recorded in the SLAs, and this in practice leads to a no-preemption requirement; see Tian and Zhao (2015). We understand that the models that we address in this paper are rather idealized and simple, but we see our work as a necessary step that should be made before more advanced and practically relevant models are investigated.

In the remainder of this section, we provide a formal definition of the model under study and discuss the relevant literature.

1.1 Definitions and notation

Formally, in the models under consideration, we are given a set of jobs \( N=\left\{ 1,2,\ldots ,n\right\} \). A job \(j\in N\) can be understood as a computational task characterized by its volume or work \(\gamma _{j}\), measured in millions of instructions to be performed on a computing device. Each job \(j\in N\) is associated with a release date \(r_{j}\) before which it is not available. For completeness, assume that \(r_{n+1}=+\,\infty \).

It is also possible that job j is given a due date \(d_{j}\), before which it is desired to complete that job, and/or a deadline \(\bar{d}_{j}\). The due dates \(d_{j}\) are seen as “soft”, i.e., it is possible to violate them, and usually a certain penalty is associated with such a violation. On the other hand, the deadlines \(\bar{d}_{j}\) are “hard”, i.e., in any feasible schedule job j must be completed by time \(\bar{d}_{j}\). If for a job \(j\in N\) no deadline is given, we assume that \(\bar{d}_{j}=+\,\infty \). Additionally, job j can be given weight \(w_{j}\), which indicates its relative importance.

Each job is processed without preemption either on a single machine or on one of m parallel machines. For processing job \(j\in N\), the corresponding machine has to be assigned speed \(s_{j}\), measured in millions of instructions performed per time unit, so that the actual processing time of job j is defined by
$$\begin{aligned} p_{j}=\frac{\gamma _{j}}{s_{j}}. \end{aligned}$$ (1)
The main feature of the models that we study in this paper is the requirement of the immediate start or dispatch of each job, i.e., job \(j\in N \) must start its processing immediately upon its arrival at time \(r_{j}\). Without loss of generality, we assume that all release dates are distinct and the jobs are numbered in increasing order of their release dates, i.e.,
$$\begin{aligned} r_{1}<r_{2}<\cdots <r_{n}. \end{aligned}$$ (2)
For a fixed schedule, let \(C_{j}\) denote the completion time of job \(j\in N\). Provided that job \(j\in N\) is processed at speed \(s_{j}\), we have that
$$\begin{aligned} C_{j} =r_{j}+p_{j}, \end{aligned}$$
where the actual processing time \(p_{j}\) is defined by (1).
We associate each job \(j\in N\) with two types of penalties:
  (i)
    a traditional scheduling cost function \(f_{j}\left( C_{j}\right) \) , where \(f_{j}\) is a non-decreasing function, so that \(f_{j}\left( C_{j}\right) \) represents the penalty for completing job j at time \(C_{j}\) ; the total cost is then
    $$\begin{aligned} F=\sum _{j=1}^{n}f_{j}\left( C_{j}\right) =\sum _{j=1}^{n}f_{j}\left( r_{j}+p_{j}\right) ; \end{aligned}$$
  (ii)
    the speed cost function \(g_{j}\left( s_{j}\right) \), which is often interpreted as the energy that is required for running job j for one time unit at speed \(s_{j}\); the operational cost is then
    $$\begin{aligned} G=\sum _{j=1}^{n}\frac{\gamma _{j}}{s_{j}}g_{j}\left( s_{j}\right) =\sum _{j=1}^{n}p_{j}g_{j}\left( \frac{\gamma _{j}}{p_{j}}\right) . \end{aligned}$$
Notice that our model is formulated for a homogeneous distributed system: all physical machines have the same speed and energy characteristics, and the cost functions \(g_j\) are independent of machines.
It is widely accepted in the literature on power-aware scheduling that energy is proportional to the cube of speed; see, e.g., Brooks et al. (2000) and Pruhs et al. (2008). This is why in most illustrative examples presented in the remainder of this paper we assume that
$$\begin{aligned} g_{j}\left( s_{j}\right) =\beta _{j}s_{j}^{3},\quad j\in N, \end{aligned}$$ (3)
so that
$$\begin{aligned} G=\sum _{j=1}^{n}\beta _{j}\gamma _{j}s_{j}^{2}=\sum _{j=1}^{n}\frac{\beta _{j}\gamma _{j}^{3}}{p_{j}^{2}}. \end{aligned}$$ (4)
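As a quick sanity check of the two equivalent forms of G above, the following sketch (with invented job data) evaluates the speed-based and time-based expressions and confirms that they coincide:

```python
# Cross-check the two equivalent forms of the energy cost G for the
# cubic speed cost g_j(s) = beta_j * s^3 (job data below is illustrative).
jobs = [  # (gamma_j, beta_j, s_j): work, energy coefficient, chosen speed
    (4.0, 0.5, 2.0),
    (9.0, 1.0, 3.0),
]

# Form 1: G = sum (gamma_j / s_j) * g_j(s_j) = sum beta_j * gamma_j * s_j^2
G_speed = sum(beta * gamma * s ** 2 for gamma, beta, s in jobs)

# Form 2: G = sum beta_j * gamma_j^3 / p_j^2  with  p_j = gamma_j / s_j
G_time = sum(beta * gamma ** 3 / (gamma / s) ** 2 for gamma, beta, s in jobs)

assert abs(G_speed - G_time) < 1e-9
print(G_speed)  # 0.5*4*2^2 + 1*9*3^2 = 89.0
```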
Depending on the type of the objective function, in this paper we address several different models with immediate start:
  • \(\Pi _{+}\): it is required to find a feasible schedule that minimizes the aggregated cost \(F+G\);

  • \(\Pi _{1}\): it is required to find a feasible schedule that minimizes one of the cost functions subject to an upper bound on the other function, e.g., to minimize total energy G subject to an upper bound on the value of F;

  • \(\Pi _{2}\): it is required to find feasible schedules that simultaneously minimize two cost components, e.g., to find the Pareto-optimal solutions for the problem of minimizing total cost F and total energy G.

1.2 Related work

Both features, machine speed scaling and the immediate start condition, have a long history of study. However, so far they have been considered separately and in different contexts. One point of difference is related to preemption, the ability to interrupt and resume job processing at any time. This feature is typically accepted in speed scaling research in order to avoid intractable cases, while it is forbidden in the immediate start model on a single machine and on parallel identical machines. Notice that a preemptive version with immediate start would require an additional condition of immediate migration and restart after every interruption, which makes preemption redundant. In what follows, we provide further details about the two streams of research.

The speed scaling research stems from the seminal paper by Yao et al. (1995), who developed an \(O(n^{3})\)-time algorithm for preemptive scheduling of n jobs on a single machine within the time windows \(\left[ r_{j},\bar{d}_{j}\right] \) given for each job \(j\in N\). Note that in that paper time windows are treated in the traditional sense, without the immediate start requirement. Subsequent papers by Li et al. (2006, 2014), Albers et al. (2011, 2014) and Angel et al. (2012) proposed improved algorithms for the single-machine problem and extended this line of research to the multi-machine model. The running times of the current fastest algorithms are \(O(n^{2})\) and \(O(n^{4})\) for the single-machine and parallel-machine cases, see Shioura et al. (2015).

Speed scaling problems which involve not only the speed cost function G, but also a scheduling cost function \(F=\sum _{j=1}^{n}f_{j}(C_{j})\) have been under study since the paper by Pruhs et al. (2008). The two most popular functions are the total completion time \(F_{1}=\sum _{j=1}^{n}C_{j}\) and the total rejection cost \(F_{2}=\sum _{j=1}^{n}w_{j}\mathrm {sgn}\left( \max \left\{ C_{j}-\overline{d}_{j},0\right\} \right) \), where \(w_{j}\) is the cost incurred if job j cannot be processed before its deadline and therefore is rejected. Without the immediate start condition, the tractable cases of problems \(\Pi _{+}\) and \(\Pi _{1}\) with objectives \(F_{1}\) and \( F_{2}\) are very limited.

In the case of function \(F_{1}\), the version of problem \(\Pi _{+}\) with equal release dates is solvable in \(O\left( n^{2}m^{2}(n+\log m)\right) \) time, where m is the number of machines; see Bampis et al. (2015). Notice that preemptions are redundant in that model. If jobs are available at arbitrary release times \(r_{j}\), then problem \(\Pi _{1}\) is NP-hard even if there is only one machine and preemption is allowed, see Barcelo (2015). For problems with arbitrary release dates and equal-work jobs, allowing preemption makes no difference to an optimal solution, and due to the nonlinear nature of the problem an optimal value of the objective can be found within a chosen accuracy \(\varepsilon \). For example, for problem \(\Pi _{1}\) on a single machine an algorithm by Pruhs et al. (2008) takes \(O(n^{2}\log \frac{\overline{G}}{\varepsilon })\) time, where \(\overline{G}\) is the upper bound on the speed cost function (energy), while for problem \(\Pi _{+}\) on parallel machines an algorithm by Albers and Fujiwara (2007) requires \(O(n^{3}\log \frac{1}{\varepsilon })\) time. The difficulties associated with arbitrary-length jobs are discussed by Pruhs et al. (2008), Bunde (2009) and Barcelo et al. (2013). For the problem of preemptive scheduling on a single discretely controllable machine, Antoniadis et al. (2014) provide an algorithm with time complexity \(O(n^{4}k)\), where k is the number of possible speed values of the processor.

In the speed scaling research, the problems of minimizing the total rejection cost \(F_{2}\) are typically studied as those of maximizing the throughput, defined as the number of jobs that can be processed by their deadlines. Polynomial-time algorithms are known only for special cases, where various conditions are imposed, in addition to the assumption that all jobs have equal weights \(w_{j}\). Notice that strict assumptions of those models make preemption redundant. The single-machine problem \(\Pi _{1}\) with \(w_{j}=1\) for all \(j\in N\) is solvable in \(O\left( n^{4}\log n\log \left( \sum \gamma _{j}\right) \right) \) time and in \(O\left( n^{6}\log n\log \left( \sum \gamma _{j}\right) \right) \) time, depending on whether the jobs are available simultaneously (\(r_{j}=0\) for all \(j\in N\)) or not; in the latter case, it is further required that release dates and deadlines are agreeable, see Angel et al. (2013). The parallel-machine problem \(\Pi _{1}\) with the jobs of equal size and equal weight (\(\gamma _{j}=w_{j}=1\), \(j\in N\)) is solvable in \( O(n^{12m+9})\) time or in \(O(n^{4m+4}m)\) time, if additionally release dates and deadlines are agreeable, see Angel et al. (2016).

Research on speed scaling problems extends to the design of approximation algorithms and the study of their online versions. Without providing a comprehensive list of results of this type, we refer an interested reader to the survey papers by Albers (2009, 2010a, b) and Bampis (2016).

As far as the immediate start condition is concerned, the most relevant problems studied in the literature fall into the category of interval scheduling. In such models, each job is characterized by time intervals where it can be processed (Kovalyov et al. 2007). One of the most well-studied versions of interval scheduling assumes that there is only one interval per job \([r_j,\overline{d}_j]\). In interval scheduling, there is no freedom in selecting job starting times and no preemption is allowed: every job \(j\in N\) should start precisely at a given time \(r_{j}\) and complete at a given deadline \(\bar{d}_{j}\). There is also no control over machine speeds, which are fixed and cannot be changed. The decision making consists in (i) selecting a subset of jobs that can be processed within their time intervals and (ii) assigning them to the machines for processing without preemption. The two typical objectives are the job rejection cost, which is defined similarly to the function \(F_{2}\), and the machine usage cost defined typically as the (weighted) number of machines which are selected to process the jobs. Note that unlike the operational cost function G used in our model, the machine usage cost in interval scheduling does not take into account the actual time of using a machine.

Within the broad range of interval scheduling results (see the survey papers by Kolen et al. (2007) and Kovalyov et al. (2007)), those relevant to our study deal with identical parallel machines or uniform machines. In the case of identical parallel machines, the fastest algorithms for minimizing the job rejection cost have time complexity \(O(n\log n)\) if all jobs have equal weights (Carlisle and Lloyd 1995) and \(O(mn\log n)\) if job weights are allowed to be different (Bouzina and Emmons 1996); the fastest algorithm for minimizing the machine usage cost is of time complexity \(O(n\log n)\) if machine weights are equal (Gupta et al. 1979).

The version of the problem with uniform machines is less studied. For uniform machines, both problems, with job rejection cost and machine usage cost, are strongly NP-hard; see Nakajima et al. (1982) and Bekki and Azizoğlu (2008). Polynomial-time algorithms, all of time complexity \(O(n\log n)\), are known for the problem of minimizing the machine usage cost, if there are only two types of machines, slow and fast (Nakajima et al. 1982), and for the problem of minimizing the job rejection cost, in one of the following two cases: if all jobs are available simultaneously and have equal weights, or if all jobs have equal volume and there are only two processing machines (Bekki and Azizoğlu 2008).

One more problem related to our study is a relaxed version of interval scheduling, where the jobs are allowed to start at any time after their release dates \(r_{j}\), but they are required to complete exactly at their deadlines \(\overline{d}_{j}\). Such a problem can be considered as a counterpart of our problem, where the jobs are required to start at release dates \(r_{j}\), but they are allowed to complete at any time before deadlines \(\overline{d}_{j}\).

For the model with fixed job completion times, the scheduling cost function \( F=\sum f_{j}\left( C_{j}\right) \) can be only of type \(F_{2}\) representing the job rejection cost, since for any accepted job j, \(C_{j}=\overline{d} _{j}\) and therefore there is no scope for optimizing a function \( f_{j}(C_{j})\). Prior study focuses on the model with identical parallel machines, where machine speeds are equal and cannot be changed. Algorithms of time complexities \(O(n\log n)\) and \(O(n^{2}m)\) are proposed by Lann and Mosheiov (2003) and Hiraishi et al. (2002) for the case of equal-weight jobs and for the general case, respectively. The latter result is generalized further to the case of controllable processing times (Leyvand et al. 2010), where a job consuming \(x_j\) amount of resources gets a compressed processing time given by an arbitrary decreasing function \( p^{\prime }_{j}(x_{j})\); the associated resource consumption cost is linear,
$$\begin{aligned} G^{\prime }= \sum _{j=1}^n \beta ^{\prime }_{j} x_{j}. \end{aligned}$$
The two typical examples of \(p^{\prime }_{j}\) are
$$\begin{aligned} p^{\prime }_{j}(x_{j})=\overline{p}_j - a_j x_j \quad \mathrm { and } \quad p^{\prime }_{j}(x_{j})=\left( \tfrac{\theta _{j}}{x_{j}} \right) ^{k}, \end{aligned}$$
where \(\overline{p}_j\) and \(\theta _{j}\) are given job-related parameters, while k is a positive constant; see Shabtay and Steiner (2007). The second model is linked to the power-aware model: if \(k=1/2\), \( \theta _{j}=\gamma _{j}^{3}\) and \(x_{j}=\gamma _{j}s_{j}^{2}\), then we get \( G=G^{\prime }\) assuming a cubic power function (compare \(p^{\prime }_{j}(x_{j})=\tfrac{\gamma _{j}}{s_{j}}\) with (1) and \(G^{\prime }=\sum _{j=1}^{n}\beta ^{\prime }_{j} \gamma _{j}s_{j}^{2}\) with (4)). Notice that as observed above, the research with fixed completion times is limited to only one type of scheduling objective: job rejection cost.
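The link stated above can be verified by directly expanding the choices \(k=1/2\), \(\theta _{j}=\gamma _{j}^{3}\) and \(x_{j}=\gamma _{j}s_{j}^{2}\):

```latex
\begin{align*}
p'_{j}(x_{j}) &= \left(\frac{\theta_{j}}{x_{j}}\right)^{1/2}
  = \left(\frac{\gamma_{j}^{3}}{\gamma_{j}s_{j}^{2}}\right)^{1/2}
  = \frac{\gamma_{j}}{s_{j}} = p_{j}, \\
G' &= \sum_{j=1}^{n}\beta'_{j}x_{j}
  = \sum_{j=1}^{n}\beta'_{j}\gamma_{j}s_{j}^{2},
\end{align*}
```

which coincides with the actual processing time and with the cubic-power form of G when \(\beta '_{j}=\beta _{j}\).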

As demonstrated in Leyvand et al. (2010), the counterpart of problem \(\Pi _{+}\) with fixed completion times can be solved in \(O(mn^{2})\) time, while the counterparts of problems \(\Pi _{1}\) and \(\Pi _{2}\) are NP-hard. For discrete versions of NP-hard problems, Leyvand et al. (2010) develop algorithms of time complexity \(O(mn^{m+1}X_{\max })\), where \(X_{\max }\) is the maximum resource usage cost, \(X_{\max }=\sum _{j=1}^{n}\beta ^{\prime }_{j}\max \left\{ x_{j}\right\} \), assuming that resource amounts \(x_{j}\) are allowed to take only discrete values from a given range.

We study the most general versions of \(\Pi _{+}\), \(\Pi _{1}\) and \(\Pi _{2}\) with arbitrary functions \(f_{j}\left( C_{j}\right) \), reflecting diverse needs of customer-oriented quality-of-service provisioning in distributed systems. Problem \(\Pi _{+}\) is solvable in O(n) time on a single machine (Sect. 2) and in \(O(n^{2}m)\) time on m parallel machines (Sect. 3). The \(\Pi _{1}\) model of minimizing energy G on a single machine subject to an upper bound on the total flow time is handled in Sect. 4; we formulate it as a nonlinear resource allocation problem with continuous variables and explain how it can be solved in \(O(n\log n)\) time. In Sect. 5, we present a method, also of time complexity \(O(n\log n)\), for finding Pareto-optimal solutions for the \(\Pi _{2}\) model, in which the functions F and G have to be simultaneously minimized on a single machine. Conclusions are presented in Sect. 6.

2 Problem \(\Pi _{+}\) on a single machine

In this section, we consider the problem of minimizing the sum of the performance cost function F and total energy G on a single machine, provided that each job \(j\in N\) starts immediately at time \(r_{j}\).

It is clear that in the single-machine case, in order to guarantee the immediate start of job \(j+1\), each job j, \(1\le j\le n-1\), must be completed no later than time \(r_{j+1}\). Taking into account deadlines \(\bar{d}_{j}\), we conclude that in a feasible schedule each job \(j\in N\) must be completed by its imposed deadline \(D_{j}\) given by
$$\begin{aligned} D_{j}=\min \left\{ \bar{d}_{j},r_{j+1}\right\} ,~1\le j\le n. \end{aligned}$$
Recall that \(r_{n+1}=+\,\infty \).
Due to the immediate start condition, the actual processing time \(p_{j}\) of job \(j\in N\) should not exceed
$$\begin{aligned} u_{j}=D_{j}-r_{j}. \end{aligned}$$
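The imposed deadlines \(D_{j}\) and upper bounds \(u_{j}\) can be computed in a single pass; a minimal sketch, in which the function name and job data are illustrative:

```python
import math

# Imposed deadlines D_j = min(dbar_j, r_{j+1}) and processing-time upper
# bounds u_j = D_j - r_j for the single-machine immediate-start model.
# Jobs are assumed numbered by increasing release dates; r_{n+1} = +inf.
def upper_bounds(r, dbar):
    n = len(r)
    r_ext = list(r) + [math.inf]
    D = [min(dbar[j], r_ext[j + 1]) for j in range(n)]
    return [D[j] - r[j] for j in range(n)]

# Illustrative data: three jobs, the second job has no hard deadline.
u = upper_bounds([0.0, 3.0, 5.0], [4.0, math.inf, 12.0])
print(u)  # [3.0, 2.0, 7.0]
```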
In order to minimize the sum of total cost F and total energy G, we need to solve a problem, which in terms of the decision variables \(p_{j}\), \(j\in N\), can be formulated as
$$\begin{aligned} \begin{array}{cc} \mathrm {Minimize} &{} \displaystyle Z=\sum _{j=1}^{n}f_{j}\left( r_{j}+p_{j}\right) +\sum _{j=1}^{n}p_{j}g_{j}\left( \frac{\gamma _{j}}{p_{j}} \right) \\ \mathrm {subject~to} &{} 0<p_{j}\le u_{j},~j\in N. \end{array} \end{aligned}$$ (5)
Due to a separable structure of the objective in (5), the optimal processing times can be found independently for each job \(j\in N\) by solving the following n problems with a single decision variable p:
$$\begin{aligned} \begin{array}{cc} \mathrm {Minimize} &{} \displaystyle Z_{j}=f_{j}\left( r_{j}+p\right) +pg_{j}\left( \frac{\gamma _{j}}{p}\right) \\ \mathrm {subject~to} &{} 0<p\le u_{j}. \end{array} \end{aligned}$$ (6)
For problem (6), let \(Z_{j}^{*}\) denote the smallest value of the objective function \(Z_{j}\), and \(p_{j}^{*}\) be the value of p that minimizes \(Z_{j}\). In the schedule that minimizes \(F+G\), the jobs are processed in the order of their numbering, and the actual processing time of job \(j\in N\) is equal to \(p_{j}^{*}\).

For most practically relevant cases, we may assume that for each \(j\in N\) problem (6) can be solved in constant time. Under this assumption, we obtain the following statement.

Theorem 1

The problem \(\Pi _{+}\) of minimizing the sum of total cost F and total energy G on a single machine is solvable in \(O\left( n\right) \) time, provided that the jobs are numbered in accordance with (2) and for each \(j\in N\) problem (6) can be solved in constant time.

Below we present several illustrations, taking two popular scheduling performance measures and, as agreed in Sect. 1, a cubic speed cost function (3). Notice that for the latter function, \( pg_{j}\left( \frac{\gamma _{j}}{p}\right) =\frac{\beta _{j}\gamma _{j}^{3}}{p^{2}}\), \(j\in N\).

For job \(j\in N\), suppose that \(f_{j}\left( C_{j}\right) =w_{j}C_{j}\), i.e., F represents the weighted sum of the completion times. Then, problem (6) can be written as
$$\begin{aligned} \begin{array}{cc} \mathrm {Minimize} &{} \displaystyle Z_{j}=w_{j}p+w_{j}r_{j}+\frac{\beta _{j}\gamma _{j}^{3}}{p^{2}} \\ \mathrm {subject~to} &{} 0<p\le u_{j}, \end{array} \end{aligned}$$
so that \(p_{j}^{*}=\min \left\{ \gamma _{j}\left( \frac{2\beta _{j}}{ w_{j}}\right) ^{1/3},u_{j}\right\} \).
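A small numerical check of this closed form (all parameter values invented for illustration): the formula is compared against a dense grid search over the feasible interval.

```python
# Closed-form minimizer of Z_j = w*(r + p) + beta*gamma^3 / p^2 on (0, u],
# checked against a dense grid search (parameter values illustrative).
def p_star(gamma, beta, w, u):
    return min(gamma * (2.0 * beta / w) ** (1.0 / 3.0), u)

gamma, beta, w, r, u = 2.0, 0.5, 4.0, 1.0, 10.0

def Z(p):
    return w * (r + p) + beta * gamma ** 3 / p ** 2

p_opt = p_star(gamma, beta, w, u)

# Grid search over (0, u] with step 0.001 should not beat the closed form.
grid_best = min(Z(k / 1000.0) for k in range(1, int(u * 1000) + 1))
assert Z(p_opt) <= grid_best + 1e-6
assert grid_best - Z(p_opt) < 1e-4
```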

For another illustration, assume that job \(j\in N\) is given a “soft” due date \(d_{j}\), but no “hard” deadline \(\bar{d}_{j}\), i.e., \( D_{j}=r_{j+1},\) \(1\le j\le n-1\). Suppose that \(f_{j}\left( C_{j}\right) =w_{j}\max \left\{ C_{j}-d_{j},0\right\} \), i.e., F represents total weighted tardiness.

If for a job \(j\in N\), the inequality \(r_{j+1}\le d_{j}\) holds, then job j will be completed before its due date and problem (6) can be written as
$$\begin{aligned} \begin{array}{cc} \mathrm {Minimize} &{} \displaystyle \frac{\beta _{j}\gamma _{j}^{3}}{p^{2}} \\ \mathrm {subject~to} &{} 0<p\le r_{j+1}-r_{j}, \end{array} \end{aligned}$$
so that \(p_{j}^{*}=r_{j+1}-r_{j}\), since the objective is decreasing in p. Otherwise, i.e., if \(r_{j+1}>d_{j}\), in order to solve problem (6), we need to solve two problems:
$$\begin{aligned} \begin{array}{cc} \mathrm {Minimize} &{} \displaystyle \frac{\beta _{j}\gamma _{j}^{3}}{p^{2}} \\ \mathrm {subject~to} &{} 0<p\le d_{j}-r_{j}, \end{array} \end{aligned}$$
which corresponds to an early completion of job j so that no tardiness occurs, and
$$\begin{aligned} \begin{array}{cc} \mathrm {Minimize} &{} \displaystyle w_{j}(p+r_{j}-d_{j})+\frac{\beta _{j}\gamma _{j}^{3}}{p^{2}} \\ \mathrm {subject~to} &{} d_{j}-r_{j}\le p\le r_{j+1}-r_{j}, \end{array} \end{aligned}$$
where job j completes after its due date. For job \(j\in N\), the optimal actual processing time \(p_{j}^{*}\) is equal to the value of p that delivers the lowest value of the objective function in these two problems.
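For the cubic energy cost, the comparison of the two subproblems can be sketched as follows; the helper `solve_tardiness_job` and all parameter values are illustrative, and we assume \(d_{j}>r_{j}\) and \(r_{j+1}>d_{j}\) so that both subproblems arise:

```python
# Solve problem (6) for one job with f_j(C) = w*max(C - d, 0) and cubic
# energy by comparing the early-completion and tardy subproblems
# (parameter values illustrative; assumes r < d < r_next).
def solve_tardiness_job(gamma, beta, w, r, d, r_next):
    energy = lambda p: beta * gamma ** 3 / p ** 2

    # Subproblem 1: finish by the due date. The objective is just the
    # energy, which decreases in p, so the largest feasible p is best.
    p1 = d - r
    z1 = energy(p1)

    # Subproblem 2: finish tardy. The convex objective
    # w*(p + r - d) + energy(p) has unconstrained minimizer
    # gamma*(2*beta/w)**(1/3), clamped to [d - r, r_next - r].
    p2 = min(max(gamma * (2.0 * beta / w) ** (1.0 / 3.0), d - r), r_next - r)
    z2 = w * (p2 + r - d) + energy(p2)

    return (p1, z1) if z1 <= z2 else (p2, z2)

p_opt, z_opt = solve_tardiness_job(2.0, 0.5, 4.0, 0.0, 1.0, 5.0)
```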

In the presented examples, which can be extended to most traditionally used objective functions, the actual processing time \(p_{j}^{*}\) of each job is essentially written in closed form, which justifies our assumption that each problem (6) can be solved in constant time.

3 Problem \(\Pi _{+}\) on parallel machines

In this section, we study the problem of finding an immediate start schedule for processing the jobs of set N on m parallel machines \(M_{1}\), \(M_{2}\),\(\ldots ,M_{m}\) that minimizes the sum of the total cost F and total energy G. Adapting the ideas from the work by Hiraishi et al. (2002) and Leyvand et al. (2010), we reduce our problem to a minimum-cost flow problem in a special network (Fig. 1).
Fig. 1

Network \(H=(V,T)\)

Introduce a bipartite network \(H=(V,T)\). The node set \(V=\left\{ s,t\right\} \cup N_{A}\cup N_{B}\) consists of a source s, a sink t, and two sets \(N_{A}=\left\{ A_{1},A_{2},\ldots ,A_{n}\right\} \) and \(N_{B}=\left\{ B_{1},B_{2},\ldots ,B_{n}\right\} \), where each pair of nodes \(A_{j}\) and \(B_{j}\) is associated with job \(j\in N\). The set T of arcs is defined as \(T=T_{A}\cup T_{AB}\cup T_{BA}\cup T_{B}\), where
$$\begin{aligned} \begin{array}{ll} T_{A}=\{(s,A_{j})\mid A_{j}\in N_{A}\}, &{} \\ T_{B}=\{\left( B_{j},t\right) \mid B_{j}\in N_{B}\}, &{} \\ T_{AB}=\{(A_{j},B_{j})\mid j\in N\}, &{} \\ T_{BA}= \{(B_{j},A_{k})\mid B_{j}\in N_{B},\ A_{k}\in N_{A},~1\le j<k\le n\}. &{} \end{array} \end{aligned}$$
Each arc \(\left( q,q^{\prime }\right) \in T\) is associated with capacity \(\mu (q,q^{\prime })\) and cost \(c(q,q^{\prime })\). Recall that a feasible flow \(x:T\rightarrow \mathbb {R}\) satisfies the capacity constraint
$$\begin{aligned} 0\le x(q,q^{\prime })\le \mu (q,q^{\prime }),\qquad (q,q^{\prime })\in T, \end{aligned}$$
i.e., the flow on an arc cannot be larger than its capacity, and the flow balance constraint
$$\begin{aligned} \displaystyle \sum _{q^{\prime }:~(q^{\prime },q)\in T}x(q^{\prime },q)=\sum _{q^{\prime }:~(q,q^{\prime })\in T}x(q,q^{\prime }), \end{aligned}$$
for \(q\in V\backslash \left\{ s,t\right\} \), i.e., for each node q other than the source and the sink the flow that enters the node must be equal to the flow that leaves the node.
The value of a flow x is equal to the total flow on the arcs that leave the source (or, equivalently, enter the sink):
$$\begin{aligned} \text{ the } \text{ value } \text{ of } \text{ flow } x=\sum _{j\in N}x(s,A_{j})=\sum _{j\in N}x(B_{j},t). \end{aligned}$$
For network H, let us set all arc capacities to 1. By appropriately defining the costs on the arcs of network H, we reduce the original problem of minimizing the objective function \(F+G\) on m parallel machines to finding the minimum-cost flow of value m in H.

Suppose a flow of value m in network H is found. Since the network is acyclic, the arcs with a flow equal to 1 form m paths from s to t, and the order of arcs of set \(T_{BA}\) in each path defines the sequence of jobs on a machine. A path starts with an arc \(\left( s,A_{j}\right) \), proceeds with pairs of arcs of the form \((A_{j},B_{j})\), \((B_{j},A_{k})\), and concludes with the final pair \((A_{\ell },B_{\ell })\), \((B_{\ell },t)\). An arc \(\left( s,A_{j}\right) \) implies that job j is the first on some machine. A pair \((A_{j},B_{j})\), \((B_{j},A_{k})\) corresponds to scheduling two jobs, j and k, one after another on the same machine, while a pair \((A_{\ell },B_{\ell })\), \((B_{\ell },t)\) corresponds to assigning job \(\ell \) as the last job on a machine.

The arc costs reflect the selected sequence of jobs on a machine. If a job \(j\in N\) has no “hard” deadline, define \(\bar{d}_{j}=+\,\infty \). For the final pair of the chain \((A_{\ell },B_{\ell })\), \((B_{\ell },t)\), the cost of scheduling job \(\ell \) as the last job on a machine is equal to the contribution of job \(\ell \in N\) to the objective function. It can be found as the optimal value \(Z_{\ell }^{*}\) for problem (6) with \(j=\ell \) and \(u_{j}=\bar{d}_{j}-r_{j}\). Thus, for each \(j\in N\), we compute the value \(Z_{j}^{*}\) and assign this value as the cost of the arc \((B_{j},t)\).

If a pair of arcs \((A_{j},B_{j})\), \((B_{j},A_{k})\) with \(r_{j}<r_{k}\) belongs to a certain path from s to t, then job j is sequenced on some machine immediately before job k, and therefore must complete before time \( \min \left\{ \bar{d}_{j},r_{k}\right\} \). The cost associated with the job sequence \(\left( j,k\right) \) is equal to the smallest value \(Z_{j,k}^{*} \) of the objective function \(Z_{j,k}\) for the problem
$$\begin{aligned} \begin{array}{cc} \mathrm {Minimize} &{} \displaystyle Z_{j,k}=f_{j}\left( r_{j}+p\right) +pg_{j}\left( \frac{\gamma _{j}}{p}\right) \\ \mathrm {subject~to} &{} 0<p\le \min \left\{ \bar{d}_{j}-r_{j},r_{k}-r_{j} \right\} , \end{array} \end{aligned}$$
which can be seen as problem (6), where \(u_{j}=\min \{\bar{d} _{j}-r_{j},r_{k}-r_{j}\}\). For each pair \(\left( j,k\right) \) where \(1\le j<k\le n\), we compute the value \(Z_{j,k}^{*}\) and assign this value as a cost of the arc \((B_{j},A_{k})\).
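To make the arc costs concrete, take total flow time costs \(f_{j}(C_{j})=C_{j}-r_{j}\) and cubic energy \(g_{j}(s)=s^{3}\); then the objective of problem (6) becomes \(p+\gamma _{j}^{3}/p^{2}\), whose unique unconstrained minimizer is \(p=\root 3 \of {2}\,\gamma _{j}\), clamped at the upper bound. A minimal sketch (the function name and this particular choice of \(f_{j}\) and \(g_{j}\) are illustrative, not the only ones the model allows):

```python
def arc_cost(gamma, u):
    """Smallest value of p + gamma**3 / p**2 over 0 < p <= u.

    Illustrative instance of problem (6) with flow-time cost
    f_j(C_j) = C_j - r_j and cubic energy g_j(s) = s**3.
    Returns the optimal processing time and the cost Z*.
    """
    # unconstrained minimiser 2**(1/3) * gamma, clamped at the bound u
    p = min(u, 2.0 ** (1.0 / 3.0) * gamma)
    return p, p + gamma ** 3 / p ** 2
```

For a tight interval (\(u\le \root 3 \of {2}\,\gamma _{j}\)) the bound is active and the job runs at the fastest speed allowed by the interval.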

For each arc \((A_{j},B_{j})\in T_{AB}\), the cost is set equal to \(-M\), where M is a large positive number. This guarantees that every arc \( (A_{j},B_{j})\in T_{AB}\) receives a flow of 1, so that each job \(j\in N\) will be scheduled. If we ignore the costs of the arcs \((A_{j},B_{j})\in T_{AB} \), the total cost of the found flow is equal to the optimal value of the function \(F+G\).

Thus, if one of the paths from s to t visits the sequence of nodes \((s,A_{j_{1}},B_{j_{1}},A_{j_{2}},B_{j_{2}},\ldots ,A_{j_{y}},B_{j_{y}},t)\), then in the associated schedule the sequence of jobs \((j_{1},j_{2},\ldots ,j_{y})\) is processed on some machine. The actual processing time \(p_{j_{i}}^{*}\) of job \(j_{i}\), \(1\le i\le y-1\), is equal to the value of p that delivers the optimal value \(Z_{j_{i},j_{i+1}}^{*}\), while for the last job \(j_{y}\) the actual processing time \(p_{j_{y}}^{*}\) is defined by the value of p that delivers the optimal value \(Z_{j_{y}}^{*}\).

As in Sect. 2, we may assume that determining the cost of each arc of network H takes constant time, so that all the costs can be found in \(O\left( n^{2}\right) \) time. The required flow can be found in \(O\left( n^{2}m\right) \) time by applying the successive shortest path algorithm, similar to the Ford–Fulkerson algorithm; see Ahuja et al. (1993).
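The successive shortest path method can be sketched compactly for unit augmentations (illustrative Python; Bellman–Ford is used for the shortest paths because the arcs of cost \(-M\) make some costs negative; the toy instance in the test is ours, not the network H itself, and the sketch assumes at most one input arc per ordered node pair):

```python
def min_cost_flow(num_nodes, arcs, s, t, value):
    """Successive shortest paths, one unit of flow per augmentation.

    `arcs` is a list of (tail, head, capacity, cost).  Sends `value`
    units from s to t along shortest residual paths and returns the
    total cost; a teaching sketch, not a tuned implementation.
    """
    INF = float("inf")
    cap = [dict() for _ in range(num_nodes)]
    cost = [dict() for _ in range(num_nodes)]
    for a, b, c, w in arcs:
        cap[a][b] = c
        cap[b].setdefault(a, 0)       # residual (reverse) arc
        cost[a][b] = w
        cost[b][a] = -w
    total = 0.0
    for _ in range(value):
        # Bellman-Ford from s on the residual network
        dist = [INF] * num_nodes
        pred = [-1] * num_nodes
        dist[s] = 0.0
        for _ in range(num_nodes - 1):
            for a in range(num_nodes):
                if dist[a] == INF:
                    continue
                for b, c in cap[a].items():
                    if c > 0 and dist[a] + cost[a][b] < dist[b]:
                        dist[b] = dist[a] + cost[a][b]
                        pred[b] = a
        if dist[t] == INF:
            raise ValueError("requested flow value is infeasible")
        total += dist[t]
        b = t                          # augment one unit along the path
        while b != s:
            a = pred[b]
            cap[a][b] -= 1
            cap[b][a] += 1
            b = a
    return total
```

Each augmentation increases the flow value by exactly one, which is why intermediate flow values come for free, a fact used below when the number of machines is a decision variable.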

Theorem 2

The problem \(\Pi _{+}\) of minimizing the sum of total cost F and total energy G on m parallel machines is solvable in \(O\left( n^{2}m\right) \) time by finding the minimum-cost flow of value m in network H, provided that the cost of each arc of H can be computed in constant time.

The described approach can be extended to the problem of determining the optimal number of parallel machines to be used. This aspect is particularly important in modern computing systems, as there are overheads related to the initialization of virtual machines in Clouds and to the activation of machines that are in sleep mode.

Suppose that using v parallel machines incurs cost \(\sigma _{v}\), \(1\le v\le m\), and we wish to minimize \(F+G\) plus the cost \(\sigma _{v}\) of the machines used. This can be done by solving a sequence of flow problems in network H, trying flow values \(1,2,\ldots \) up to the upper bound m on the number of machines. For each tried value of v, \(1\le v\le m\), the function \(F+G+\sigma _{v}\) is evaluated and the best option is taken. The running time for solving the resulting problem remains \(O\left( n^{2}m\right) \), since the successive shortest path algorithm for finding the min-cost flow of value m iteratively finds the min-cost flows of all intermediate values 1, \(2,\ldots ,m-1\).
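Since the successive shortest path algorithm produces the cost of the v-th augmenting path as a by-product, the selection of v reduces to a single pass over prefix sums (illustrative Python; the inputs are placeholders for the quantities just described):

```python
def best_machine_count(path_costs, sigma):
    """Pick the number of machines v minimising (F + G) + sigma_v.

    `path_costs[v-1]` is the cost of the v-th augmenting path, so the
    min-cost flow of value v costs sum(path_costs[:v]); `sigma[v-1]`
    is the cost of running v machines.  Both lists are illustrative.
    """
    best_v, best_total, running = None, float("inf"), 0.0
    for v, c in enumerate(path_costs, start=1):
        running += c                      # min cost of a flow of value v
        total = running + sigma[v - 1]
        if total < best_total:
            best_v, best_total = v, total
    return best_v, best_total
```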

Theorem 3

The problem \(\Pi _{+}\) of minimizing the sum of total cost F, total energy G and the cost \(\sigma _{v}\) for using \(v\le m\) machines, where v is a decision variable, is solvable in \(O\left( n^{2}m\right) \) time, under the assumptions of Theorem 2.

A drawback of the model with the aggregated objective function is that it schedules all arrived jobs. If the interval available for processing a job is rather short, this can only be achieved by applying a very high speed, which may be unacceptably expensive. It may therefore be beneficial not to accept certain jobs and to pay an agreed rejection fee instead.

Suppose that the cost of rejecting job \(j\in N\) is \(\delta _{j}\). Let \(N_{A}\) be the set of accepted jobs and \(N_{R}=N\backslash N_{A}\) the set of rejected jobs. If we want to minimize the sum of the performance function for the accepted jobs, the total energy used and the total rejection penalty, we need to solve the problem with the objective function
$$\begin{aligned} Z=\sum _{j\in N_{A}}f_{j}\left( p_{j}+r_{j}\right) +\sum _{j\in N_{A}}p_{j}g_{j}\left( \frac{\gamma _{j}}{p_{j}}\right) +\sum _{j\in N_{R}}\delta _{j}, \end{aligned}$$
which is equivalent (up to the additive constant \(\sum _{j\in N}\delta _{j}\)) to
$$\begin{aligned} Z^{\prime }=\sum _{j\in N_{A}}f_{j}\left( p_{j}+r_{j}\right) +\sum _{j\in N_{A}}p_{j}g_{j}\left( \frac{\gamma _{j}}{p_{j}}\right) -\sum _{j\in N_{A}}\delta _{j}. \end{aligned}$$
In the network model, we replace the cost on an arc \(\left( A_{j},B_{j}\right) \in T_{AB}\) by \(-\,\delta _{j}\), keeping the cost of an arc \(\left( B_{j},A_{k}\right) \in T_{BA}\) the same as in the basic model. Recall that the latter cost is found by solving problem (9). Since an optimal solution may use fewer than m machines, we add an extra arc \(\left( s,t\right) \) of capacity m and zero cost.

For example, suppose that the minimum-cost flow of value \(\ell \), \(\ell \le m\), in the modified network is found, and one of the paths from s to t visits the sequence of nodes \((s,A_{j_{1}},B_{j_{1}},A_{j_{2}},B_{j_{2}}, \ldots ,A_{j_{y}},B_{j_{y}},t)\). Then, the sequence of accepted jobs \(\left( j_{1},j_{2},\ldots ,j_{y}\right) \) is processed on some machine, and the contribution of job \(j_{i}\) is equal to the cost of the arc that leaves node \(B_{j_{i}}\), found by solving problem (9), plus the cost \(-\,\delta _{j_{i}}\) of the arc that enters node \(B_{j_{i}}\), \(1\le i\le y\). The described adjustments do not change the time complexity of the approach.
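The equivalence of Z and \(Z^{\prime }\) up to the constant \(\sum _{j\in N}\delta _{j}\) is easy to verify numerically (illustrative Python; the per-job cost inputs stand in for the terms \(f_{j}(p_{j}+r_{j})\) and \(p_{j}g_{j}(\gamma _{j}/p_{j})\)):

```python
def rejection_costs(f_cost, e_cost, delta, accepted):
    """Return the pair (Z, Z') for a given acceptance decision.

    f_cost[j], e_cost[j]: performance and energy cost of job j if
    accepted; delta[j]: rejection fee; `accepted`: set of accepted
    jobs.  All inputs are illustrative placeholders.
    """
    n = len(delta)
    z = sum(f_cost[j] + e_cost[j] for j in accepted) \
        + sum(delta[j] for j in range(n) if j not in accepted)
    z_prime = sum(f_cost[j] + e_cost[j] - delta[j] for j in accepted)
    return z, z_prime
```

For every choice of the accepted set, \(Z-Z^{\prime }=\sum _{j\in N}\delta _{j}\), so minimizing \(Z^{\prime }\) (the quantity the modified network computes) also minimizes Z.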

Theorem 4

The problem \(\Pi _{+}\) in which it is required to determine the set \(N_{R}\) of rejected jobs to minimize the sum of total cost F, total energy G and the cost \(\sum _{j\in N_{R}}\delta _{j}\) is solvable in \(O\left( n^{2}m\right) \) time, under the assumptions of Theorem 2.

4 Problem \(\Pi _{1}\) on a single machine

In this section, we consider the problem of minimizing total energy G subject to a constraint on total cost F on a single machine. The presented solution approach is based on Karush–Kuhn–Tucker (KKT) reasoning in relation to the associated Lagrange function. This approach works for a wide range of functions G and F; however, below for simplicity it is presented for the case that \(F=\sum _{j\in N}\left( C_{j}-r_{j}\right) \), i.e., F represents total flow time. Moreover, a natural interpretation of the obtained results occurs if for each \(j\in N\) the energy function \(g_{j}\) is polynomial, strictly convex, decreasing in \( p_{j}\) and job-independent, e.g., satisfies (3) with \(\beta _{j}=1 \).

Due to the immediate start condition, \(C_{j}-r_{j}=p_{j}\); let P be a given upper bound on \(F=\sum _{j\in N}p_{j}\). Let \(u_{j}\) be an upper bound on the actual processing time \(p_{j}\), defined as in Sect. 2. Denote
$$\begin{aligned} G_{j}\left( p_{j}\right) =p_{j}g_{j}\left( \frac{\gamma _{j}}{p_{j}}\right) ,~j\in N. \end{aligned}$$
Then, the problem we study in this section can be formulated as
$$\begin{aligned} \begin{array}{cc} \mathrm {Minimize} &{} \displaystyle G=\sum _{j=1}^{n}G_{j}\left( p_{j}\right) \\ \mathrm {subject~to} &{} \sum _{j=1}^{n}p_{j}\le P, \\ &{} 0<p_{j}\le u_{j},~j\in N. \end{array} \end{aligned}$$
Such a problem can be classified as a nonlinear resource allocation problem with continuous decision variables; see the survey by Patriksson (2008). Note that we can limit our consideration to the case of \(P<\sum _{j\in N}u_{j}\); otherwise, in an optimal solution \(p_{j}=u_{j}\) for all \(j\in N\).
For a vector \({\mathbf {p=}}\left( p_{1},p_{2},\ldots ,p_{n}\right) \) and a non-negative Lagrange multiplier \(\lambda \), introduce the Lagrangian function
$$\begin{aligned} L\left( {\mathbf {p}},\lambda \right) =\sum _{j=1}^{n}G_{j}\left( p_{j}\right) +\lambda \left( \sum _{j=1}^{n}p_{j}-P\right) \end{aligned}$$
and its dual
$$\begin{aligned} Q\left( \lambda \right) =-\,\lambda P+\sum _{j=1}^{n}\min _{0<p_{j}\le u_{j}}\left\{ G_{j}\left( p_{j}\right) +\lambda p_{j}\right\} , \end{aligned}$$
see Patriksson (2008). The derivative of the Lagrangian dual \(Q\left( \lambda \right) \) is equal to
$$\begin{aligned} Q^{\prime }\left( \lambda \right) =-P+\sum _{j=1}^{n}p_{j}\left( \lambda \right) , \end{aligned}$$
where vector \({\mathbf {p}}\left( \lambda \right) {\mathbf {=}}\left( p_{1}\left( \lambda \right) ,p_{2}\left( \lambda \right) ,\ldots ,p_{n}\left( \lambda \right) \right) \) delivers the unique minimum to the Lagrangian function for a given \(\lambda \).

The KKT conditions guarantee that there exists a value \(\lambda ^{*}\) such that \(Q^{\prime }\left( \lambda ^{*}\right) =0\). Such a multiplier \( \lambda ^{*}\) and vector \({\mathbf {p}}\left( \lambda ^{*}\right) \) deliver the minimum to the Lagrangian function, so that vector \({\mathbf {p}} \left( \lambda ^{*}\right) \) is a solution to problem (10), i.e., defines the optimal values of the actual processing times.
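For the cubic case \(G_{j}(p_{j})=\gamma _{j}^{3}/p_{j}^{2}\), the minimizer \({\mathbf {p}}(\lambda )\) and the dual derivative \(Q^{\prime }(\lambda )\) admit closed forms: setting the derivative of \(G_{j}(p)+\lambda p\) to zero gives \(p_{j}=\gamma _{j}\left( 2/\lambda \right) ^{1/3}\), clamped at \(u_{j}\). An illustrative Python sketch (since \(Q^{\prime }\) is nonincreasing in \(\lambda \), the root \(\lambda ^{*}\) could also be located by bisection):

```python
def p_of_lambda(gammas, us, lam):
    """Minimiser p(lambda) of the Lagrangian, cubic case G_j = gamma_j**3 / p**2.

    The stationary point gamma_j * (2/lam)**(1/3) is clamped at the
    upper bound u_j; a sketch for the job-independent case beta_j = 1.
    """
    r = (2.0 / lam) ** (1.0 / 3.0)
    return [min(u, g * r) for g, u in zip(gammas, us)]

def q_prime(gammas, us, lam, P):
    """Derivative of the Lagrangian dual: Q'(lambda) = -P + sum_j p_j(lambda)."""
    return -P + sum(p_of_lambda(gammas, us, lam))
```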

By differentiating the functions \(G_{j}\left( p_{j}\right) \), define
$$\begin{aligned} \lambda _{j}=-G_{j}^{\prime }\left( u_{j}\right) ,~1\le j\le n. \end{aligned}$$
For a polynomial energy function, e.g., \(G_{j}\left( p_{j}\right) =p_{j} \frac{\gamma _{j}^{3}}{p_{j}^{3}}=\frac{\gamma _{j}^{3}}{p_{j}^{2}}\), \(j\in N\), the values \(\lambda _{j}\) admit a natural interpretation. Indeed, \(\lambda _{j}=-G_{j}^{\prime }\left( u_{j}\right) =\frac{2\gamma _{j}^{3}}{u_{j}^{3}}=2s_{j}^{3}\), i.e., if for a job j the actual processing time is equal to \(u_{j}\), then this job is processed at speed \(\root 3 \of {\lambda _{j}/2}\).
Let \(\pi =\left( \pi \left( 1\right) ,\pi \left( 2\right) ,\ldots ,\pi \left( n\right) \right) \) be a permutation of the numbers \(1,2,\ldots ,n\) such that
$$\begin{aligned} \lambda _{\pi \left( 1\right) }\ge \lambda _{\pi \left( 2\right) }\ge \cdots \ge \lambda _{\pi \left( n\right) }. \end{aligned}$$
For a k, \(1\le k\le n\), in accordance with the KKT reasoning for the resource allocation problem (Patriksson 2008), define
$$\begin{aligned} p_{\pi \left( j\right) }\left( \lambda _{\pi \left( k\right) }\right) =\left\{ \begin{array}{ll} u_{\pi \left( j\right) }, &{} \mathrm {if~}1\le j\le k, \\ p_{\pi \left( j\right) }^{0}, &{} \mathrm {if~}k+1\le j\le n, \end{array} \right. \end{aligned}$$
where the values \(p_{\pi \left( j\right) }^{0}\) for \(k+1\le j\le n\) are solutions of the system of equations
$$\begin{aligned} G_{\pi \left( j\right) }^{\prime }\left( p_{\pi \left( j\right) }\right) =-\,\lambda _{\pi \left( k\right) },~k+1\le j\le n. \end{aligned}$$
By applying binary search with respect to k, we find a value \(k^{*}\) such that either
$$\begin{aligned} \sum _{j=1}^{n}p_{j}\left( \lambda _{\pi \left( k^{*}\right) }\right) =P \end{aligned}$$
or
$$\begin{aligned} \sum _{j=1}^{n}p_{j}\left( \lambda _{\pi \left( k^{*}\right) }\right)<P<\sum _{j=1}^{n}p_{j}\left( \lambda _{\pi \left( k^{*}+1\right) }\right) . \end{aligned}$$
In the former case, we define \(\lambda ^{*}=\lambda _{\pi \left( k^{*}\right) }\) and \({\mathbf {p}}\left( \lambda ^{*}\right) ={\mathbf {p}} \left( \lambda _{\pi \left( k^{*}\right) }\right) \); otherwise, solve the system of equations
$$\begin{aligned} \begin{array}{ll} G_{\pi \left( j\right) }^{\prime }\left( p_{\pi \left( j\right) }^{*}\right) =-\,\lambda ^{*}, \quad k^{*}+1\le j\le n, \\ \sum _{j=1}^{k^{*}}u_{\pi \left( j\right) }+\sum _{j=k^{*}+1}^{n}p_{\pi \left( j\right) }^{*}=P. &{} \end{array} \end{aligned}$$
Having solved the latter system, we determine \(\lambda ^{*}\) and the values \(p_{\pi \left( j\right) }^{*},\) \(k^{*}+1\le j\le n\). The components of the solution vector \({\mathbf {p}}\left( \lambda ^{*}\right) \) are defined by
$$\begin{aligned} p_{\pi \left( j\right) }\left( \lambda ^{*}\right) =\left\{ \begin{array}{ll} u_{\pi \left( j\right) }, &{} \mathrm {if~}1\le j\le k^{*}, \\ p_{\pi \left( j\right) }^{*}, &{} \mathrm {if~}k^{*}+1\le j\le n. \end{array} \right. \end{aligned}$$
The search for the value \(k^{*}\) takes at most \(\log n\) iterations, and system (12) has to be solved in each iteration. Additionally, system (13) has to be solved at most once. If the energy functions are cubic, we may assume that solving systems (12) and (13) requires time that is linear in the number of decision variables. Indeed, the solution to (12) is given by
$$\begin{aligned} p_{\pi \left( j\right) }^{0}=\root 3 \of {\frac{2}{\lambda _{\pi \left( k\right) }}}\gamma _{\pi \left( j\right) }=\frac{u_{\pi (k)}}{\gamma _{\pi (k)}} \times \gamma _{\pi (j)}, \quad k+1\le j\le n. \end{aligned}$$
The solution to (13) is given by
$$\begin{aligned} \begin{array}{l} \lambda ^{*}=\frac{2\left( \sum _{j=k^{*}+1}^{n}\gamma _{\pi \left( j\right) }\right) ^{3}}{\left( P-\sum _{j=1}^{k^{*}}u_{\pi \left( j\right) }\right) ^{3}}; \\ p_{\pi \left( j\right) }\left( \lambda ^{*}\right) =\root 3 \of {\frac{2}{ \lambda ^{*}}}\gamma _{\pi \left( j\right) },\quad k^{*}+1\le j\le n. \end{array} \end{aligned}$$
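Under the cubic-energy assumption, the whole procedure can be sketched as follows (illustrative Python; for brevity a linear scan over k replaces the binary search, which affects only the running time, not correctness):

```python
def optimal_times_cubic(gammas, us, P):
    """Minimise sum_j gamma_j**3 / p_j**2 subject to sum_j p_j <= P, 0 < p_j <= u_j.

    Sketch of the KKT recipe of this section for cubic energy: sort jobs
    by the speeds s_j = gamma_j / u_j, fix the first k* jobs at their
    upper bounds, and run the remaining jobs at a common speed.
    """
    n = len(gammas)
    if sum(us) <= P:
        return list(us)                      # budget not binding
    order = sorted(range(n), key=lambda j: gammas[j] / us[j], reverse=True)
    U = [0.0] * (n + 1)                      # prefix sums of u over pi
    Gam = [0.0] * (n + 1)                    # prefix sums of gamma over pi
    for i, j in enumerate(order, start=1):
        U[i] = U[i - 1] + us[j]
        Gam[i] = Gam[i - 1] + gammas[j]
    total_gamma = Gam[n]
    # largest k with total time T(k) <= P, where jobs 1..k are at their
    # bounds and the rest run at speed s_{pi(k)}
    k_star = 0
    for k in range(1, n):
        jk = order[k - 1]
        T = U[k] + us[jk] / gammas[jk] * (total_gamma - Gam[k])
        if T <= P:
            k_star = k
        else:
            break
    rate = (P - U[k_star]) / (total_gamma - Gam[k_star])   # = (2/lambda*)**(1/3)
    p = [0.0] * n
    for i, j in enumerate(order):
        p[j] = us[j] if i < k_star else gammas[j] * rate
    return p
```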
Thus, we have proved the following statement.

Theorem 5

The problem \(\Pi _{1}\) of minimizing total energy G on a single machine, subject to the bounded total flow time \(F\le P\), reduces to the nonlinear resource allocation problem and can be solved in \(O\left( n\log n\right) \) time, provided that energy functions \(g_{j}\) are polynomial, strictly convex, decreasing in \(p_{j}\) and job-independent.

The following remark is useful for justifying the solution method for the bicriteria problem, presented in the next section. Simultaneous equations (13) imply that in an optimal solution for each job \(\pi \left( j\right) \), \(1\le j\le k^{*}\), the equality \(p_{\pi \left( j\right) }\left( \lambda ^{*}\right) =u_{\pi \left( j\right) }\) holds, i.e., each of these jobs fully uses the interval \(\left[ r_{\pi \left( j\right) },r_{\pi \left( j\right) }+u_{\pi \left( j\right) }\right] \) available for its processing. The processing speed of job \(\pi \left( j\right) \), \(1\le j\le k^{*}\), is \(\frac{\gamma _{\pi \left( j\right) }}{u_{\pi \left( j\right) }}=\root 3 \of {\lambda _{\pi \left( j\right) }/2}\). Besides, for \(k^{*}+1\le j\le n\), due to (13), it follows that \(G_{\pi \left( j\right) }^{\prime }\left( p_{\pi \left( j\right) }^{*}\right) =-\,\lambda ^{*}\), so that all jobs \(\pi \left( j\right) \), \(k^{*}+1\le j\le n\), are processed at the same speed \(\root 3 \of {\lambda ^{*}/2}\) and none of these jobs fully uses the available interval. Moreover, since \(\lambda _{\pi \left( 1\right) }\ge \cdots \ge \lambda _{\pi \left( k^{*}\right) }>\lambda ^{*}>\lambda _{\pi \left( k^{*}+1\right) }\ge \cdots \ge \lambda _{\pi \left( n\right) }\), we conclude that the common speed at which each job \(\pi \left( j\right) \), \(k^{*}+1\le j\le n\), is processed is less than the processing speed of the jobs \(\pi \left( j\right) \), \(1\le j\le k^{*}\).

5 Problem \(\Pi _{2}\) on a single machine

In this section, we describe an approach to solving the bicriteria problem, in which it is required to simultaneously minimize total cost F and total energy G on a single machine. Recall that a schedule \(S^{\prime }\) is called Pareto-optimal if there exists no schedule \(S^{\prime \prime }\) such that \(F(S^{\prime \prime })\le F(S^{\prime })\) and \(G(S^{\prime \prime })\le G(S^{\prime })\), where at least one of these inequalities is strict.

Although the outlined approach can be extended to deal with rather general cost functions, below we present it for \(F=\sum _{j=1}^{n}\left( C_{j}-r_{j}\right) \) and \(G=\sum _{j=1}^{n}p_{j}g_{j}\left( \frac{\gamma _{j}}{p_{j}}\right) =\sum _{j=1}^{n}\frac{\gamma _{j}^{3}}{p_{j}^{2}}\). The solution of the problem of finding the Pareto optimum is given in the space of variables F and G by (i) a sequence of break-points \(F_{0},F_{1},F_{2},\ldots ,F_{\nu }\) of the variable F and (ii) an explicit formula that expresses variable G as a function of variable \(F\in \left[ F_{k},F_{k+1}\right] \) for all \(k=0,1,\ldots ,\nu -1\). As we show below, \(\nu = n\).

In line with the reasoning presented in Sect. 4, compute
$$\begin{aligned} s_{j}=\frac{\gamma _{j}}{u_{j}},~j\in N. \end{aligned}$$
The value \(s_{j}\) represents the speed at which job \(j\in N\) has to be processed to get the actual processing time \(u_{j}\). Determine a permutation \(\pi =\left( \pi \left( 1\right) ,\pi \left( 2\right) ,\ldots ,\pi \left( n\right) \right) \) of the numbers \(1,2,\ldots ,n\) such that
$$\begin{aligned} s_{\pi \left( 1\right) }\ge s_{\pi \left( 2\right) }\ge \cdots \ge s_{\pi \left( n\right) }. \end{aligned}$$
For completeness, define \(s_{\pi \left( 0\right) }=+\,\infty \). Denote
$$\begin{aligned} \begin{array}{l} U_{k}\left( \pi \right) =\sum _{j=1}^{k}u_{\pi \left( j\right) },~\varGamma _{k}\left( \pi \right) =\sum _{j=1}^{k}\gamma _{\pi \left( j\right) }, \\ ~R_{k}\left( \pi \right) =\sum _{j=1}^{k}\frac{\gamma _{\pi \left( j\right) }^{3}}{u_{\pi \left( j\right) }^{2}},~1\le k\le n. \end{array} \end{aligned}$$
Additionally, for completeness, define \(U_{0}\left( \pi \right) =\varGamma _{0}\left( \pi \right) =R_{0}\left( \pi \right) =0\).
Denote \(\varGamma =\sum _{j\in N}\gamma _{j}.\) Introduce \(F_{0}=0\) and
$$\begin{aligned} \begin{array}{ll} F_{k} &{} =\sum _{j=1}^{k}u_{\pi \left( j\right) }+\frac{u_{\pi \left( k\right) }}{\gamma _{\pi \left( k\right) }}\sum _{j=k+1}^{n}\gamma _{\pi \left( j\right) } \\ &{} =U_{k}\left( \pi \right) +\frac{\varGamma -\varGamma _{k}\left( \pi \right) }{ s_{\pi \left( k\right) }},\qquad 1\le k\le n. \end{array} \end{aligned}$$
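For the cubic-energy case, the break-points (15) can be computed directly from the sorted speeds (illustrative Python; the function name and inputs are ours):

```python
def break_points(gammas, us):
    """Break-points F_0, ..., F_n of formula (15), cubic-energy case.

    Jobs are sorted by the speed s_j = gamma_j / u_j needed to realise
    the actual processing time u_j, in nonincreasing order.
    """
    order = sorted(range(len(gammas)),
                   key=lambda j: gammas[j] / us[j], reverse=True)
    total_gamma = sum(gammas)
    F = [0.0]
    U = Gam = 0.0
    for j in order:                       # j runs through pi(1), ..., pi(n)
        U += us[j]
        Gam += gammas[j]
        s = gammas[j] / us[j]
        F.append(U + (total_gamma - Gam) / s)
    return F
```

After the \(O(n\log n)\) sort, the pass is linear, matching the overall bound of Theorem 6.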

Theorem 6

For the bicriteria problem \(\Pi _{2}\) of minimizing total flow time F and total energy G on a single machine, the values \(F_{k}\), \(0\le k\le n,\) defined by (15) correspond to the break-points of the variable F, and the variable G can be expressed as
$$\begin{aligned} G=R_{k}(\pi )+\frac{\left( \varGamma -\varGamma _{k}\left( \pi \right) \right) ^{3} }{\left( F-U_{k}\left( \pi \right) \right) ^{2}},~F\in (F_{k},F_{k+1}], \end{aligned}$$
for \(0\le k\le n-1\). Problem \(\Pi _{2}\) is solvable in \(O(n \log n)\) time.


The fact that the values \(F_{k}\), \(0\le k\le \nu \), are indeed break-points and that \(\nu = n\) follows from the structure of an optimal solution of the problem of minimizing total energy G subject to an upper bound on the sum of actual processing times; see (11) and (14) from Sect. 4. For \(F\in (F_{k},F_{k+1}]\), considering the jobs in accordance with the permutation \(\pi \), the actual processing times of the first k jobs are fixed at their upper bounds, while the actual processing times of the remaining jobs are obtained by running these jobs at a common speed s that decreases starting from \(s_{\pi \left( k\right) }\). The next break-point \(F_{k+1}\) occurs when s becomes equal to \(s_{\pi \left( k+1\right) }\). Note that break-points \(F_{k}\) and \(F_{k+1}\) coincide if \(s_{\pi \left( k\right) } = s_{\pi \left( k+1\right) }\), but we count them separately so that indeed \(\nu = n\). The last break-point \(F_{n}\) corresponds to the situation in which the actual processing time of job \(\pi (n)\) is equal to its largest possible value \(u_{\pi (n)}\).

Consider the interval \((F_{0},F_{1}]\). In this interval, the jobs are run with a speed \(s\ge s_{\pi \left( 1\right) }\), so that for \(F\in (F_{0},F_{1}]\), \(F=\sum _{j\in N}p_{j}=\sum _{j\in N}\gamma _{j}/s=\varGamma /s\). We deduce that
$$\begin{aligned} \begin{array}{ll} G &{} =\sum _{j=1}^{n}\frac{\gamma _{j}^{3}}{p_{j}^{2}}=s^{2}\sum _{j=1}^{n} \gamma _{j}=\frac{\varGamma ^{3}}{F^{2}} \\ &{} =R_{0}(\pi )+\frac{\left( \varGamma -\varGamma _{0}\left( \pi \right) \right) ^{3}}{\left( F-U_{0}\left( \pi \right) \right) ^{2}},\qquad F\in (F_{0},F_{1}], \end{array} \end{aligned}$$
which complies with (16) for \(k=0\).
Now consider the next interval \((F_{1},F_{2}]\). It follows that \(F\in (F_{1},F_{2}]\) can be written as
$$\begin{aligned} F=u_{\pi \left( 1\right) }+\frac{1}{s}\sum _{j=2}^{n}\gamma _{\pi \left( j\right) }=U_{1}\left( \pi \right) +\frac{\varGamma -\varGamma _{1}\left( \pi \right) }{s}, \end{aligned}$$
as a function of speed s, where s decreases from \(s_{\pi \left( 1\right) }\) to \(s_{\pi \left( 2\right) }\), so that
$$\begin{aligned} s=\frac{\varGamma -\varGamma _{1}\left( \pi \right) }{F-U_{1}\left( \pi \right) } \end{aligned}$$
and
$$\begin{aligned} G=\frac{\gamma _{\pi \left( 1\right) }^{3}}{u_{\pi \left( 1\right) }^{2}} +s^{2}\sum _{j=2}^{n}\gamma _{\pi (j)}=R_{1}(\pi )+\frac{\left( \varGamma -\varGamma _{1}\left( \pi \right) \right) ^{3}}{\left( F-U_{1}\left( \pi \right) \right) ^{2}}, \end{aligned}$$
which complies with (16) for \(k=1\).
Consider an interval \((F_{k},F_{k+1}]\) for some k, \(0\le k\le n-1\). It follows that \(F\in (F_{k},F_{k+1}]\) can be written as
$$\begin{aligned} F=\sum _{j=1}^{k}u_{\pi \left( j\right) }+\frac{1}{s}\sum _{j=k+1}^{n}\gamma _{\pi (j)}=U_{k}\left( \pi \right) +\frac{\varGamma -\varGamma _{k}\left( \pi \right) }{s}, \end{aligned}$$
where s decreases from \(s_{\pi \left( k\right) }\) to \(s_{\pi \left( k+1\right) }\), so that
$$\begin{aligned} s= & {} \frac{\varGamma -\varGamma _{k}\left( \pi \right) }{F-U_{k}\left( \pi \right) } \text{ and } \\ G= & {} \sum _{j=1}^{k}\frac{\gamma _{\pi \left( j\right) }^{3}}{u_{\pi \left( j\right) }^{2}}+s^{2}\sum _{j=k+1}^{n}\gamma _{\pi \left( j\right) }=R_{k}(\pi )+\frac{\left( \varGamma -\varGamma _{k}\left( \pi \right) \right) ^{3}}{\left( F-U_{k}\left( \pi \right) \right) ^{2}}. \end{aligned}$$
Computing G for all values of k, \(0\le k\le n-1\), takes \(O\left( n\log n\right) \) time. This proves the theorem. \(\square \)
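The piecewise formula (16) can be evaluated as follows (illustrative Python sketch for the cubic-energy case; the function locates the segment containing a given F and applies (16)):

```python
def pareto_G(gammas, us, F):
    """Total energy G on the efficient frontier at total flow time F > 0.

    Walks through the segments (F_k, F_{k+1}] in the order of the
    permutation pi and returns R_k + (Gamma - Gamma_k)**3 / (F - U_k)**2.
    """
    order = sorted(range(len(gammas)),
                   key=lambda j: gammas[j] / us[j], reverse=True)
    total_gamma = sum(gammas)
    U = Gam = R = 0.0          # running values of U_k, Gamma_k, R_k
    for j in order:
        s = gammas[j] / us[j]
        F_next = U + us[j] + (total_gamma - Gam - gammas[j]) / s   # F_{k+1}
        if F <= F_next:        # F lies in the current segment
            return R + (total_gamma - Gam) ** 3 / (F - U) ** 2
        U += us[j]
        Gam += gammas[j]
        R += gammas[j] ** 3 / us[j] ** 2
    return R                   # F beyond F_n: every job at its upper bound
```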

6 Conclusions

In this paper, we address several versions of a scheduling model that combines the well-established feature of speed scaling with a requirement of immediate job start, which is typical for modern Cloud computing systems. Both objectives are of the min-sum type, one depending on the job completion times and the other on the machine usage cost. We show that the single-machine model with n jobs can be solved in \(O(n\log n)\) time for the two single-criterion versions of our problem, \(\Pi _{+}\) and \(\Pi _{1}\), as well as for the most general bicriteria version \(\Pi _{2}\). The single-criterion version \(\Pi _{+}\) of the multi-machine model with n jobs and m machines is solvable in \(O(n^{2}m)\) time.

The presented results for immediate start models can be naturally generalized to handle problems that combine a max-type scheduling objective \(F^{\max }=\max _{j\in N}\left\{ f_{j}(C_{j})\right\} \) with the energy component G. For example, for \(f_{j}(C_{j})=C_{j}\) or \(f_{j}(C_{j})=C_{j}-d_{j}\) the objective \(F^{\max }\) becomes the makespan \(C_{\max }\) or the maximum lateness \(L_{\max }\), respectively.
  • For problem \(\Pi _{1}^{\max }\) (minimizing energy G subject to an upper bound \(\overline{F}\) on the value of \(F^{\max }\)), define deadlines induced by a given value of \(\overline{F}\), eliminate \(F^{\max }\) from consideration by setting \(f_{j}(C_{j})=0\), \(j\in N\), and solve problem \(\Pi _{+}\) to minimize \(G+0\) using the techniques from Sects. 2–3.

  • As far as problem \(\Pi _{+}^{\max }\) is concerned, the function \(F^{\max }\) is convex in \(p_{j}\) for the most popular min-max scheduling objectives, such as \(F^{\max }\in \left\{ C_{\max },L_{\max }\right\} \). Since the energy component G is also convex in \(p_{j}\), the objective \(F^{\max }+G\) is convex and its minimum can be found by a numerical method of convex optimization.
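In special cases the convex problem even separates. Consider one machine, jobs indexed by release times, no deadlines, cubic energy, and \(F^{\max }=C_{\max }\): with immediate starts, only the last job affects \(C_{\max }\), every earlier job stretches to the gap before the next release, and the last job admits a closed-form solution. The following sketch illustrates this special case only (the function name and the separability observation are ours, not part of the general method):

```python
def makespan_plus_energy(rs, gammas):
    """Minimise C_max + G on one machine with immediate starts.

    Assumes jobs are indexed by release time rs[0] < rs[1] < ..., no
    deadlines, and cubic energy gamma_j**3 / p_j**2.  Jobs j < n fill
    their release gaps (energy decreases in p_j); the last job solves
    min p + gamma**3 / p**2 in closed form.
    """
    n = len(rs)
    p = [rs[j + 1] - rs[j] for j in range(n - 1)]   # fill the whole gap
    p.append(2.0 ** (1.0 / 3.0) * gammas[-1])        # unconstrained minimiser
    cmax = rs[-1] + p[-1]
    energy = sum(g ** 3 / q ** 2 for g, q in zip(gammas, p))
    return p, cmax + energy
```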

To summarize, our study can be considered as a first attempt to explore fundamental properties of the new model with the immediate start condition. Future research may elaborate further on the applied aspects of our study: the basic system model can be enhanced to address a range of issues typical for modern Cloud computing systems, such as heterogeneous physical machines with different speed characteristics and energy usage functions, the introduction of virtual machines with possible allocation of several virtual machines to one physical machine, and the possibility of migrating virtual machines and their associated tasks. Our study may also serve as a basis for the development of online algorithms for problems with the immediate start condition. Note that online versions of the traditional models of power-aware scheduling, without immediate start, have been proposed by Albers and Fujiwara (2007), Bansal et al. (2010), Chan et al. (2013), and Lam et al. (2008, 2012).



This research was supported by the EPSRC-funded project EP/J019755/1 “Submodular Optimisation Techniques for Scheduling with Controllable Parameters”. The first author was partially supported by JSPS KAKENHI Grant Numbers 15K00030 and 15H00848.


  1. Aceto, G., Botta, A., de Donato, W., & Pescapè, A. (2013). Cloud monitoring: A survey. Computer Networks, 57, 2093–2115.
  2. Ahuja, R. K., Magnanti, T. L., & Orlin, J. B. (1993). Network flows: Theory, algorithms and applications. Englewood Cliffs: Prentice Hall.
  3. Albers, S. (2009). Algorithms for energy saving. Lecture Notes in Computer Science, 5760, 173–186.
  4. Albers, S. (2010a). Energy-efficient algorithms. Communications of the ACM, 53, 86–96.
  5. Albers, S. (2010b). Algorithms for energy management. Lecture Notes in Computer Science, 6072, 1–11.
  6. Albers, S., & Fujiwara, H. (2007). Energy-efficient algorithms for flow time minimization. ACM Transactions on Algorithms, 3, 49:1–49:17.
  7. Albers, S., Antoniadis, A., & Greiner, G. (2011). On multiprocessor speed scaling with migration. In Proceedings of the symposium on parallelism in algorithms and architectures (SPAA) (pp. 279–288).
  8. Albers, S., Müller, F., & Schmelzer, S. (2014). Speed scaling on parallel processors. Algorithmica, 68, 404–425.
  9. Angel, E., Bampis, E., Kacem, F., & Letsios, D. (2012). Speed scaling on parallel processors with migration. Lecture Notes in Computer Science, 7484, 128–140.
  10. Angel, E., Bampis, E., Chau, V., & Letsios, D. (2013). Throughput maximization for speed-scaling with agreeable deadlines. Lecture Notes in Computer Science, 7876, 10–19.
  11. Angel, E., Bampis, E., Chau, V., & Thang, N. K. (2016). Throughput maximization in multiprocessor speed-scaling. Theoretical Computer Science, 630, 1–12.
  12. Antoniadis, A., Barcelo, N., Consuegra, M., Kling, P., Nugent, M., Pruhs, K., & Scquizzato, M. (2014). Efficient computation of optimal energy and fractional weighted flow trade-off schedules. In Proceedings of the 31st international symposium on theoretical aspects of computer science (STACS ’14) (pp. 63–74).
  13. Armbrust, M., Fox, A., Griffith, R., Joseph, A. D., Katz, R. H., Konwinski, A., et al. (2009). Above the clouds: A Berkeley view of cloud computing. Technical Report UCB/EECS-2009-28, EECS Department, University of California, Berkeley.
  14. Armbrust, M., Fox, A., Griffith, R., Joseph, A. D., Katz, R. H., Konwinski, A., et al. (2010). A view of cloud computing. Communications of the ACM, 53, 55–58.
  15. Bampis, E. (2016). Algorithmic issues in energy-efficient computation. Lecture Notes in Computer Science, 9869, 3–14.
  16. Bampis, E., Letsios, D., & Lucarelli, G. (2015). Green scheduling, flows and matchings. Theoretical Computer Science, 579, 126–136.
  17. Bansal, N., Pruhs, K., & Stein, C. (2010). Speed scaling for weighted flow time. SIAM Journal on Computing, 39, 1294–1308.
  18. Barcelo, N. (2015). The complexity of speed-scaling. Ph.D. thesis, University of Pittsburgh.
  19. Barcelo, N., Cole, D., Letsios, D., Nugent, M., & Pruhs, K. (2013). Optimal energy trade-off schedules. Sustainable Computing: Informatics and Systems, 3, 207–217.
  20. Bekki, Ö. B., & Azizoğlu, M. (2008). Operational fixed interval scheduling problem on uniform parallel machines. International Journal of Production Economics, 112, 756–768.
  21. Bouzina, K. I., & Emmons, H. (1996). Interval scheduling on identical machines. Journal of Global Optimization, 9, 379–393.
  22. Brooks, D. M., Bose, P., Schuster, S. E., Jacobson, H., Kudva, P. N., Buyuktosunoglu, A., et al. (2000). Power-aware microarchitecture: Design and modeling challenges for next-generation microprocessors. IEEE Micro, 20, 26–44.
  23. Bunde, D. P. (2009). Power-aware scheduling for makespan and flow. Journal of Scheduling, 12, 489–500.
  24. Carlisle, M. C., & Lloyd, E. L. (1995). On the \(k\)-coloring of intervals. Discrete Applied Mathematics, 59, 225–235.
  25. Chan, S.-H., Lam, T.-W., & Lee, L.-K. (2013). Scheduling for weighted flow time and energy with rejection penalty. Theoretical Computer Science, 470, 93–104.
  26. Do Lago, D. G., Madeira, E. R. M., & Bittencourt, L. F. (2011). Power-aware virtual machine scheduling on clouds using active cooling control and DVFS. In Proceedings of the 9th international workshop on middleware for grids, clouds and e-science (MGC ’11) (pp. 1–6).
  27. Garg, S. K., Versteeg, S., & Buyya, R. (2013). A framework for ranking of cloud computing services. Future Generation Computer Systems, 29, 1012–1023.
  28. Gerards, M. E. T., Hurink, J. L., & Hölzenspies, P. K. F. (2016). A survey of offline algorithms for energy minimization under deadline constraints. Journal of Scheduling, 19, 3–19.
  29. Gupta, U. I., Lee, D. T., & Leung, J. Y.-T. (1979). An optimal solution for the channel-assignment problem. IEEE Transactions on Computers, 28, 807–810.
  30. Hiraishi, K., Levner, E., & Vlach, M. (2002). Scheduling of parallel identical machines to maximize the weighted number of just-in-time jobs. Computers and Operations Research, 29, 841–848.
  31. Iqbal, W., Dailey, M., & Carrera, D. (2009). SLA-driven adaptive resource management for web applications on a heterogeneous compute cloud. Lecture Notes in Computer Science, 5931, 243–253.
  32. Jennings, B., & Stadler, R. (2015). Resource management in clouds: Survey and research challenges. Journal of Network and Systems Management, 23, 567–619.
  33. Jing, S.-Y., Ali, S., She, K., & Zhong, Y. (2013). State-of-the-art research study for green cloud computing. Journal of Supercomputing, 65, 445–468.
  34. Kolen, A. W. J., Lenstra, J. K., Papadimitriou, C. H., & Spieksma, F. C. R. (2007). Interval scheduling: A survey. Naval Research Logistics, 54, 530–543.
  35. Kovalyov, M. Y., Ng, C. T., & Cheng, T. C. E. (2007). Fixed interval scheduling: Models, applications, computational complexity and algorithms. European Journal of Operational Research, 178, 331–342.
  36. Kushida, K. E., Murray, J., & Zysman, J. (2015). Cloud computing: From scarcity to abundance. Journal of Industry, Competition and Trade, 15, 5–19.
  37. Lam, T.-W., Lee, L.-K., To, I. K. K., & Wong, P. W. H. (2008). Speed scaling functions for flow time scheduling based on active job count. Lecture Notes in Computer Science, 5193, 647–659.
  38. Lam, T. W., Lee, L. K., To, I. K. K., & Wong, P. W. H. (2012). Improved multi-processor scheduling for flow time and energy. Journal of Scheduling, 15, 105–116.
  39. Lann, A., & Mosheiov, G. (2003). A note on the maximum number of on-time jobs on parallel identical machines. Computers and Operations Research, 30, 1745–1749.
  40. Leyvand, Y., Shabtay, D., Steiner, G., & Yedidsion, L. (2010). Just-in-time scheduling with controllable processing times on parallel machines. Journal of Combinatorial Optimization, 19, 347–368.
  41. Li, M., Yao, A. C., & Yao, F. F. (2006). Discrete and continuous min-energy schedules for variable voltage processors. Proceedings of the National Academy of Sciences of the United States of America, 103, 3983–3987.
  42. Li, M., Yao, F. F., & Yuan, H. (2014). An \(O(n^{2})\) algorithm for computing optimal continuous voltage schedules. arXiv:1408.5995v1.
  43. Moreno, I. S., Garraghan, P., Townend, P., & Xu, J. (2014). Analysis, modeling and simulation of workload patterns in a large-scale utility cloud. IEEE Transactions on Cloud Computing, 2, 208–221.
  44. Nakajima, K., Hakimi, S. L., & Lenstra, J. K. (1982). Complexity results for scheduling tasks in fixed intervals on two types of machines. SIAM Journal on Computing, 11, 512–520.CrossRefGoogle Scholar
  45. Patriksson, M. (2008). A survey on the continuous nonlinear resource allocation. European Journal of Operational Research, 185, 1–46.CrossRefGoogle Scholar
  46. Pruhs, K., Uthaisombut, P., & Woeginger, G. (2008). Getting the best response for your erg. ACM Transactions on Algorithms, 4, 38:1–38:17.CrossRefGoogle Scholar
  47. Rackspace: Our 100% Network Uptime Guarantee, Resource Document. Rackspace US Inc. Accessed December 17, 2015.
  48. Shabtay, D., Bensoussan, Y., & Kaspi, M. (2012). A bicriteria approach to maximize the weighted number of just-in-time jobs and to minimize the total resource consumption cost in a two-machine flow-shop scheduling system. International Journal of Production Economics, 136, 67–74.CrossRefGoogle Scholar
  49. Shabtay, D., & Steiner, G. (2007). A survey of scheduling with controllable processing times. Discrete Applied Mathematics, 155, 1643–1666.CrossRefGoogle Scholar
  50. Shioura, A., Shakhlevich, N.V., & Strusevich, V.A. (2015). Energy saving computational models with speed scaling via submodular optimization. In Proceedings of the 3rd international conference on green computing, technology and innovation (ICGCTI2015) Google Scholar
  51. Tian, W., & Zhao, Y. (2015). Optimized cloud resource management and scheduling: Theory and practices. Los Altos: Morgan Kaufmann.Google Scholar
  52. Von Laszewski, G., Wang, L., Younge, A. J., & He, X. (2009). Power-aware scheduling of virtual machines in DVFS-enabled clusters. In IEEE international conference on cluster computing and workshops (CLUSTER ’09) (pp. 1–10).Google Scholar
  53. Wu, C.-M., Chang, R.-S., & Chan, H.-Y. (2014). A green energy-efficient scheduling algorithm using the DVFS technique for cloud datacenters. Future Generation Computer Systems, 37, 141–147.CrossRefGoogle Scholar
  54. Yao, F. F., Demers, A. J., & Shenker, S. (1995). A scheduling model for reduced CPU energy. In Proceedings of the 36th IEEE symposium on foundations of computer science (FOCS ’95) (pp. 374–382).Google Scholar

Copyright information

© The Author(s) 2017

Open Access. This article is distributed under the terms of the Creative Commons Attribution 4.0 International License, which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.

Authors and Affiliations

  • Akiyoshi Shioura (1)
  • Natalia V. Shakhlevich (2), Email author
  • Vitaly A. Strusevich (3)
  • Bernhard Primas (2)

  1. Department of Industrial Engineering and Economics, School of Engineering, Tokyo Institute of Technology, Tokyo, Japan
  2. School of Computing, University of Leeds, Leeds, UK
  3. Department of Mathematical Sciences, Faculty of Architecture, Computing & Humanities, University of Greenwich, Old Royal Naval College, London, UK
