Demand response in the residential context

Technological improvements, changing environmental conditions, altered consumption patterns, and rising pollution levels (e.g., carbon emissions) challenge traditional energy grids (Lawrence et al. 2017). Renewable but intermittent energy sources like photovoltaics (PV), wind power, and hydro power, together with new appliances such as electric vehicles (EVs), result in more volatile energy generation and consumption, affecting both the supply side and the demand side (Seidel et al. 2013). Critical peaks, contingencies, volatile load profiles, unreliable market performance, and inefficient infrastructure use (Siano 2014) can result in blackouts, brownouts, shortages, and a high spinning reserve, and can endanger the reliability of energy grids. The major transformations in the residential context, in particular the increased use of EVs and of decentralized energy generation and storage, contribute to this problem because the energy consumed in this sector amounts to more than a third of total consumption (Hu and Li 2013; Jovanovic et al. 2016). One possible solution to these challenges is the management of appliances in Smart Homes (Koolen et al. 2017).

Appliance management can be carried out through Demand Response (DR). DR optimizes consumption patterns using external signals (e.g., pricing signals, direct control signals). Optimization is then performed by shifting or managing loads based on incentive-based or price-based programs (Merkert et al. 2015); the latter are often realized through dynamic pricing (e.g., time-of-use pricing (TOUP), critical-peak pricing, and real-time pricing (RTP) (Siano 2014; Steen et al. 2012)). DR can also "transform domestic customers from static consumers into active participants" (Molderink et al. 2010, p. 109), which especially promotes the integration of decentralized energy generation (e.g., PV, storage) into the grid. Nevertheless, since DR is a complex task and pricing signals may change during the day (cf. RTP), it can hardly be carried out manually by users without assistance. Reacting to changing price signals in particular would require significant user effort in terms of time and participation, as consumption plans need to be adapted multiple times throughout the day. Therefore, applying DR in the residential context requires supporting infrastructure, including, for instance, a Smart Meter and a Home Energy Management System (HEMS) (e.g., Chaudhari et al. 2014).
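To illustrate the kind of decision a DR method automates, consider the following minimal sketch (although our prototype is implemented in Java, Python is used here for brevity). The two-level tariff mirrors the TOUP of our demonstration case (EUR 0.24/kWh off-peak, EUR 0.36/kWh on-peak); the appliance load and start times are assumptions chosen for illustration:

```python
# Illustrative sketch: cost of one deferrable appliance under a two-level
# time-of-use price (TOUP). Tariff as in the demonstration case; the
# appliance load and start times are assumptions.

price = [0.24 if h < 6 or h >= 22 else 0.36 for h in range(24)]  # EUR per kWh

def run_cost(start_hour: int, duration_h: int, load_kw: float) -> float:
    """Cost of running a constant load for duration_h hours from start_hour."""
    return sum(price[(start_hour + i) % 24] * load_kw for i in range(duration_h))

# A 2 kW appliance running for 3 hours: evening start vs. shifted night start.
print(f"18:00 start: EUR {run_cost(18, 3, 2.0):.2f}")  # peak hours -> EUR 2.16
print(f"22:00 start: EUR {run_cost(22, 3, 2.0):.2f}")  # off-peak -> EUR 1.44
# A DR method automates this shift for every appliance and every price update.
```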

A HEMS is a system that enables DR applications for residential users (Zhao et al. 2013) by monitoring and automatically managing appliances based on a user-specific set of requirements (REQs) (e.g., Hu and Li 2013; Zhao et al. 2013; Han et al. 2014; Kuzlu et al. 2012). A HEMS can be assigned to a single home or to multiple homes; in the latter case, the homes form a microgrid (e.g., Jiang and Fei 2015). REQs are composed of the user-specific scenario, including living behaviors, appliances, available storage, and energy-generation infrastructure, and consider environmental factors like weather, stock prices, and price signals (Fig. 1).

Fig. 1 Smart Home context

To address these REQs, various DR methods can be applied that differ, for instance, in their objectives, optimization methods, and communication models (Kosek et al. 2013). Overviews can be found in Balijepalli et al. (2011), Al-Sumaiti et al. (2014), and Gerwig et al. (2015). DR methods are understood here as algorithms developed to solve optimization problems in the DR context in light of constraints and desired goals. However, Khadar et al. (2017) note that no thorough comparison of DR method performance has been conducted yet. Moreover, to the best of our knowledge, no approaches have been developed so far that support such a comparison and the selection of the best-performing DR method for a user-specific scenario.

Consequently, a solution is required that assists with the comparison of DR methods and with the selection of the best one for a user-specific scenario. As HEMSs have heterogeneous structures and scenarios can change as new appliances and infrastructure components are developed, the solution should be adaptable to all kinds of HEMSs. A framework is suitable for this task, since it provides a reusable structure and can be adapted to fit individual needs (Johnson and Foote 1988). In this study, we develop a decision support framework that assists in selecting the best DR method for a user-specific scenario.

The application of the framework allows practitioners to increase the efficiency of the DR method selection process and to further enhance DR-related benefits, such as cost minimization, load profile flattening, and peak load reduction. Researchers can use the framework to compare the effects of existing DR methods in varying scenarios and thus inform the enhancement of their methods and underlying algorithms.

We proceed by describing related work in Section "Related Work" and the chosen research design in Section "Research Design". In the next step, we gather seven REQs based on a literature analysis, expert workshops, and expert interviews (Section "Gathering Requirements"), derive solution concepts (SOCs) (Section "Deriving Solution Concepts"), and assemble them into a decision support framework (Section "Assembling the Framework"). To demonstrate our framework's applicability, we implement a software prototype and conduct a simulation study with illustrative examples extracted from artificial data that build on real-world scenarios (Section "Demonstration"). Finally, we discuss our findings and provide an outlook on future research (Section "Conclusion").

Related work

Based on an extensive literature analysis (see Appendix 1), we identified a set of contributions to the domain of DR methods that include comparisons, benchmarks, and evaluations of DR methods. Although some of these studies perform benchmarking, most provide evaluations, especially argumentative comparisons of types of methods (e.g., Sianaki et al. 2010), proofs that a particular method can find an optimum (e.g., Lab 2018), evaluations against the status quo (e.g., Gerwig et al. 2015), and descriptions of methods with a variation of different input factors (e.g., Mohsenian-Rad et al. 2010). Among these, proving a method's ability to find an optimum is often not possible, since a complex optimization problem is solved with a heuristic or the solving process is too complex and time-consuming. Decision support systems are often used in the energy field (e.g., Sellak et al. 2017), but we found no system or framework in the extant literature that meets our research goal.

Moreover, most working groups and research projects that address this topic focus on the organizational and grid levels and do not compare or evaluate DR methods. For instance, "MOSAIK", a flexible simulation framework for smart grids (MOSAIK 2018), uses a co-simulation approach to simulate methods and scenarios for a large energy grid with multiple elements like different consumers, decentralized energy generation, and EVs (Schloegl et al. 2015). "DER-CAM" focuses on minimizing costs by optimizing on-site generation and heat and power systems in buildings and microgrids (Lab 2018) and is based on an optimization problem with multiple objectives. However, the authors state that the optimization problem must be solved with a heuristic, a greedy-based solving strategy. So even if a method's ability to find the optimal solution is proven, the solution is often found with a heuristic, and the results differ depending on the chosen algorithm. Other projects, such as Smart Grid Algorithm Engineering, Smart Nord, and D-Flex (Blank et al. 2015), address related questions, but none realizes a way to select a DR method for individual scenarios. Beyond the literature search and the analysis of existing projects and research groups, the energy informatics field might offer helpful insights (Goebel et al. 2014).

Our research aims at increasing energy efficiency by supporting the user-specific selection of DR methods that are used on HEMSs in residential buildings (here, Smart Homes) (Goebel et al. 2014). Similar research activities do not compare, evaluate, or benchmark DR methods, although they address, for example, distributed and renewable energy resources (e.g., Berthold et al. 2017; Dyson et al. 2014), islanded grids (e.g., Vergara et al. 2018; Ma and Billanes 2016), grids in general (e.g., Steinbrink et al. 2017), the energy grid in smart cities (e.g., Masera et al. 2018; Stoyanova et al. 2017), security concerns (e.g., Fulli et al. 2017), EVs and batteries in general (e.g., Park et al. 2016; Ma et al. 2010), energy contracts (e.g., Basmadjian et al. 2016), or data and privacy concerns (e.g., Cupelli et al. 2018).

Leaving the immediate field of DR methods, Ketter et al. (2016) present competitive benchmarking, an approach for comparing solutions to wicked problems. For demonstration, they select the topics of sustainable energy and financial market stability (Ketter et al. 2016) and the Trading Agent Competition (TAC) (Ketter et al. 2011). The TAC, which has several customer models that represent households, businesses, and so forth, takes a supplier perspective, while the demand perspective is "fixed". However, fixing the demand perspective does not sufficiently answer our research question, as our focus lies on the Smart Home context. In this context, the supplier is the black box and delivers information like a cost function or penalty costs for high peaks (O'Connell et al. 2014). Although benchmarking is frequently addressed in the DR field, to the best of our knowledge, the field currently offers no support for decision-making in our context.

Focusing on benchmarking procedures in general, which aim at identifying best practices and improving performance (e.g., Andersen and Pettersen 1995; Camp 2006), Lugauer et al. (2012) analyze 35 benchmarking frameworks in terms of four phases (planning, collecting, analyzing, and improving) based on Deming's (1982) "plan-do-check-act" cycle. The planning phase determines what to benchmark and identifies who is involved and which resources are needed. In the collecting phase, the benchmarking objects are analyzed by defining the required data and measurements. In the analyzing phase, the performance of the benchmarking objects is compared, and gaps between their performances are identified. During the improving phase, the actions necessary to close these gaps are identified in order to improve, in the original business context, for example, business processes.

Research design

Our research began with the identification of a practical problem. An industry partner that plans to design and implement a HEMS asked what additional services could be offered to customers based on this HEMS. The partner considered the delivery of optimized energy consumption plans to energy users useful, because users can realize cost benefits. As the HEMS should be used in diverse surroundings, an "optimal" DR method could not be chosen easily. Because of the high individuality of user scenarios and the variety of DR methods, a decision-making approach must be implemented on these HEMSs that supports the selection of the method that best fits each scenario. To develop the framework, we carried out a staged research design (Fig. 2) that consists of gathering REQs (Section "Gathering Requirements"), suggesting SOCs (Section "Deriving Solution Concepts"), assembling the decision support framework from these SOCs (Section "Assembling the Framework"), and demonstrating the applicability of the framework by simulating various scenarios (Section "Demonstration"). In the following, each of these phases is described in detail.

Fig. 2 Staged research design

Phase 1: Gathering Requirements

In this phase, REQs are derived inductively from the knowledge base. We build on existing literature (literature analysis) and expert knowledge (expert interviews) and derive several REQs based on Deming's general benchmarking approach (Deming 1982).

Phase 2: Deriving Solution Concepts

We suggested SOCs that address each REQ individually by searching for information in existing research and by interviewing experts. These SOCs address the REQs in isolation and must still be combined to realize the decision support framework.

Phase 3: Assembling the framework

After gathering REQs and deriving SOCs, we used Deming’s (Deming 1982) benchmarking process as guidance and matched each SOC to it. We also analyzed information flows from both inside and outside the framework and added, for example, event triggers for changes in the scenario.

Phase 4: Demonstration

For the demonstration, we implemented a software prototype that features four DR methods and a database suitable for conducting simulations. The aim of this research is not to evaluate the methods implemented as examples but to show that our framework is feasible and applicable. As the framework should be reusable and adaptable, it is not HEMS-specific and is thus (re)usable on many kinds of systems. We used artificial data, which have minor disadvantages but are of sufficient quality (Cao et al. 2013) to conduct meaningful DR simulations. As the simulation is an artificial setting, we impose several events to simulate changes in the scenario (e.g., additional appliances, defects of appliances, altered user behavior).

Decision support framework

Gathering requirements

This section presents seven REQs for a decision support framework, grounded in general benchmarking approaches that follow a plan-collect-analyze-improve cycle (Lugauer et al. 2012), a literature analysis, a review of several DR methods (e.g., those found in Balijepalli et al. 2011; Al-Sumaiti et al. 2014; Gerwig et al. 2015; Barbato and Capone 2014), and expert interviews (Behrens et al. 2017).

Inspired by the first benchmarking phase, the planning phase (Camp 2006), we define the benchmarking objects and benchmarking goals (Heib et al. 1997). Benchmarking objects refer to the entities to be compared, such as companies, divisions in a company, or even products, processes, and functions (Heib et al. 1997). In our study, these benchmarking objects are DR methods. Benchmarking goals are the objectives that the benchmarking process seeks to achieve, such as improving the energy efficiency of Smart Homes by selecting the best DR method for a certain situation. However, "best" depends on how the term is defined. In relation to DR (Gellings 1985), "best" can be the method that reduces peak loads and costs or the method that ensures a flattened load profile. These objectives may differ across homes or over time according to the users' preferences. Consequently:

  • REQ1—Objective(s) Specification: Provide features that allow users to specify their individual objectives.

The benchmarking approach requires collecting suitable data. In addition to the individual objectives, these data comprise the user's scenario, which can be very heterogeneous (Hoogsteen et al. 2016) based on, for example, living behaviors, daily routines, existing infrastructure, and appliances (Pflugradt 2017). Various data sources, such as recorded data, simulated data, and predicted data, can be used (e.g., Monacchi et al. 2014; Behrens et al. 2014; Behrens et al. 2016). To identify the best method for an individual scenario, this specific scenario must be represented first, regardless of which data are chosen (e.g., forecasting data, past data, manual data from the user). DR can be divided into two phases: data acquisition (often broken down to forecasting) and optimization (Simmhan et al. 2011) (Fig. 3). User-specific data (e.g., deadlines for finishing appliances and times during which the user cannot run appliances) must be added in or after the first phase and used in the second.

Fig. 3 DR phases, data sources, and scenario data

An adjustment between these two phases is needed, for example, to identify the possible usage intervals of appliances (e.g., Behrens et al. 2017). This can be done automatically (e.g., by predicting the user's behavior and needs) or manually by the user. Consequently:

  • REQ2—Scenario Building: Provide features that allow users to build individual scenarios.

Another part of collecting data is ensuring its quality, which has been identified as a problem in DR research (e.g., Cao et al. 2013; Barta et al. 2014). Data on energy use is often recorded in heterogeneous ways, so the data structures are not standardized and information (e.g., whether an appliance is deferrable) may be missing (Behrens et al. 2016). Moreover, gaps in the data occur (Monacchi et al. 2014; Barta et al. 2014), such as missing recordings and zero values, which are often caused by malfunctioning Smart Meters (Hoogsteen et al. 2016). Regardless of which data are chosen, they must be in a suitable format and of high quality (Cao et al. 2013). High quality here means that the needed information is available, the recording frequency is adequate, and missing values can be predicted or filled without falsifying the data. Thus, data quality needs to be ensured through suitable preprocessing (cf. Barta et al. 2014). Consequently:

  • REQ3—Data Preprocessing: Provide features that allow the quality of the data to be ensured.

The second DR phase deals with optimization by DR methods and therefore delivers data for the analysis (the third benchmarking phase). However, not all DR methods have directly comparable characteristics, so an additional demarcation is needed. Differences regarding their functionality and the appliances, infrastructure, and constraints they consider (Kosek et al. 2013; Gerwig et al. 2015; Behrens et al. 2017) must be addressed. Benchmarking must guarantee that the only methods compared are those that, for example, fulfill the same constraints (Behrens et al. 2017). The methods themselves and the additional information need to be stored so that the HEMS can access them. Consequently:

  • REQ4—Method Comparability: Provide features that ensure the comparability of DR methods.

The calculation of assessment criteria is the last step of the data-collection phase. Assessment criteria like measures of energy consumption and savings must be calculated and analyzed (Gerwig et al. 2015). However, many possible criteria are offered in the DR field (e.g., Balijepalli et al. 2011; Al-Sumaiti et al. 2014), so appropriate choices must be made. Hence, these criteria must first be identified. Based on these definitions, and after the methods have calculated their results for a certain scenario, the assessment criteria need to be computed. To ensure a uniform calculation of the criteria's measurements and their comparability, we need a centralized computation, as the methods might calculate measurements differently or might even omit single measurements. The DR methods must transfer their results in a suitable format to our framework, which then calculates all assessment criteria for all comparable methods. Consequently:

  • REQ5—Result Calculation: Provide features that allow DR methods to be assessed based on established criteria.

Analysis, the third benchmarking phase, identifies best practices and performance gaps to enable improvement (e.g., Lugauer et al. 2012; American Productivity and Quality Center 1993). In the DR context, this means we need a way to identify the best DR method to enable decision-making. "Best" in this case depends heavily on the user's objectives (REQ1) and the user-specific scenario (REQ2). Based on this objective function, the calculated assessment criteria (REQ5) are interpreted. As we have multiple criteria, we must add a suitable decision-making process (e.g., Ho et al. 2010; de Boer et al. 2001). Multiple criteria are difficult to handle because, for example, their units of measure and their levels of importance may differ. Suitable decision-making must therefore be supported by identifying the best DR method, which can then be selected by the user and deployed on the HEMS. Consequently:

  • REQ6—Decision Support: Provide features that allow the best DR method for a specified scenario to be identified.

Improving is the last step of the benchmarking approach; in the DR field, it is needed especially when changes occur that may impact the effectiveness of the chosen DR method. Changes can include the addition of a new appliance or a new operating status ("Appliance Event") (cf. Bui et al. 2017) or changes in the objectives ("Objective Event"). Such changes can occur because of unexpected malfunctions, unexpected use patterns, or insufficient forecasts. Monitoring and a suitable reaction mechanism are needed to facilitate the ability to react to such events. Consequently, the selection needs to be repeated and is a continuous task, as also stated in the benchmarking phases of Lugauer et al. (2012). We also need a starting point, the "Initialization", as no data is set beforehand. The last requirement therefore faces the challenge of reacting to events to realize the most beneficial method selection. Consequently:

  • REQ7—Event Reaction: Provide features that allow users to react to events.

Deriving solution concepts

We suggest possible SOCs to address the REQs; they are derived inductively from a literature review and interviews with experts.

SOC1—Let the user select the objectives to be achieved

The user should be presented with a variety of options so he or she can choose which objectives are most important. Our literature search identified several DR objectives, shown in Table 1, each of which can be reached using DR methods and can be provided for decision-making. The user can select the objective(s) to be reached (Fig. 7 in Appendix 2). As the "welfare" objective is difficult to measure but important for users (cf. Jovanovic et al. 2016), we consider it a given. The user can state his or her preferences based on usage patterns during scenario-building (REQ2) or select an automated way to predict preferences, for example, via forecasting based on historical data.

Table 1 DR objectives derived from the literature

SOC2—Provide energy consumption data and let the users specify their energy consumption profile

The framework should provide several options with which the user can specify their energy composition: a predefined dataset (e.g., from the last day or from similar households), where only single loads are adapted to the user's habits (e.g., an EV is added or the use intervals are changed); an empty dataset with which to build a scenario "bottom up" by adding all appliances plus additional information; past data recorded from the HEMS; or forecasted data. This step can be done manually (by the user) or automatically, where the user provides a starting point and the framework then uses forecasting data. As privacy and data security are important topics (for example, due to the General Data Protection Regulation), the data must not leave the HEMS. Based on the data provider, the user-specific scenario can be built (e.g., Hoogsteen et al. 2016). Besides selecting the data, additional user-specific information regarding the user's welfare is needed. Defining the user's welfare is not easy, so a utility function, known for example from microeconomics, is often used in the DR context (e.g., Li et al. 2012; Samadi et al. 2012; Mas-Colell et al. 1995). A questionnaire can be used to derive a utility function (Jovanovic et al. 2016). However, no standard has yet been defined for measuring the user's welfare in a real use case, so we let the user define the times when an appliance may run (e.g., the EV must be charged by the morning or the washing machine must be finished by 6:00 p.m.) (e.g., Jovanovic et al. 2016).
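As an illustration of what such a scenario could look like in code, the following minimal sketch represents appliances with user-stated usage windows; the class and field names are illustrative assumptions, not a prescribed data model:

```python
# Illustrative sketch of a user-specific scenario (SOC2). Class and field
# names are assumptions for illustration, not a prescribed data model.
from dataclasses import dataclass, field

@dataclass
class Appliance:
    name: str
    load_kw: float           # constant load while running (a simplification)
    duration_h: int          # required run time in hours
    deferrable: bool         # may the HEMS shift this load?
    allowed_hours: set       # hours of day in which the appliance may run

@dataclass
class Scenario:
    appliances: list = field(default_factory=list)
    has_pv: bool = False
    has_storage: bool = False

# Welfare is approximated by explicitly stated usage windows, e.g., "the
# washing machine must be finished by 6:00 p.m.".
scenario = Scenario(appliances=[
    Appliance("washing machine", 2.0, 2, True, set(range(7, 17))),
    Appliance("EV (AC)", 3.6, 4, True, set(range(18, 24)) | set(range(0, 7))),
])
```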

SOC3—Sample the data, fill gaps, and reject unreliable data

To provide appropriate data, gaps and data-quality issues (e.g., time resolution) must be addressed. Most DR methods use a fifteen- or thirty-minute cycle to optimize load positioning (Abdulla et al. 2017), as cycles of less than one minute or continuous recordings are not required (and are difficult to achieve). We suggest down-sampling to a frequency of one minute (e.g., Hoogsteen et al. 2016; Palensky and Dietrich 2011), yielding 1440 recording points over 24 h, which is suitable for most DR methods. The gaps can be characterized along two dimensions: the number of gaps within a certain day and the number of (completely) missing days in the data. Gaps can be filled by writing in zero values or by approximating the missing values (Cao et al. 2013). However, if so many values are missing that interpolating them or filling them with zeros would falsify the data, the data is insufficient for the next steps and is sorted out.
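A minimal sketch of such a preprocessing step, assuming pandas for resampling and an illustrative rejection threshold (the 5% value is our assumption, not a result of this study), could look as follows:

```python
# Illustrative sketch of the preprocessing suggested in SOC3: resample raw
# readings to a 1-minute grid (1440 points per day), interpolate small gaps,
# and reject days with too many missing values.
from typing import Optional

import pandas as pd

MAX_MISSING_SHARE = 0.05  # assumed threshold for sorting a day out

def preprocess_day(raw: pd.Series) -> Optional[pd.Series]:
    """Clean one day of power readings; `raw` is indexed by timestamps."""
    day = raw.index[0].normalize()
    grid = pd.date_range(day, periods=1440, freq="1min")
    resampled = raw.resample("1min").mean().reindex(grid)
    if resampled.isna().mean() > MAX_MISSING_SHARE:
        return None  # too many gaps: data quality insufficient, sort out
    return resampled.interpolate(limit_direction="both")
```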

SOC4—Add metadata to the methods and form groups

Suitable metadata for DR methods must be identified in terms of their classification and characteristics. DR methods can be classified along two dimensions: the place of decision-making and the communication used (Kosek et al. 2013). The loads addressed and the constraints met must also be determined, as, for example, methods that do not consider the same constraints cannot be compared (cf. Behrens et al. 2017). Every method must be comparable with other methods from the same "class" and suitable for the scenario. Therefore, we add additional data (metadata) based on the DR method's classification (Kosek et al. 2013), the appliances addressed, and the constraints considered (Behrens et al. 2017). The methods themselves must be available in a repository, either locally on the HEMS or in the cloud.
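The following sketch illustrates how such metadata and a comparability check could be realized; the classification dimensions follow Kosek et al. (2013), while the concrete field names and values are illustrative assumptions:

```python
# Illustrative sketch of DR-method metadata and a comparability check (SOC4).
from dataclasses import dataclass

@dataclass(frozen=True)
class MethodMeta:
    name: str
    decision_place: str          # e.g., "central" or "distributed"
    communication: str           # e.g., "one-way" or "two-way"
    constraints: frozenset       # constraints the method satisfies
    loads: frozenset             # load types the method can schedule

def comparable(a: MethodMeta, b: MethodMeta) -> bool:
    """Only methods of the same class fulfilling the same constraints compare."""
    return (a.decision_place == b.decision_place
            and a.communication == b.communication
            and a.constraints == b.constraints)

def candidates(methods: list, required_loads: frozenset) -> list:
    """Filter the repository for methods suitable for the scenario's loads."""
    return [m for m in methods if required_loads <= m.loads]
```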

SOC5—Establish measurements and provide needed data

To provide the indicators, measurement information, and DR objective assignments needed to rate the methods, we analyzed several DR methods (found in the literature reviews of Balijepalli et al. 2011; Al-Sumaiti et al. 2014; Gerwig et al. 2015). The criteria we derived with which to rate the methods consist of one or more DR objectives, an indicator, and a measurement unit (Table 2). Additional information, such as a cost function, is also needed. The energy consumption is required and must be transferred to the framework in a suitable format to calculate the indicators, so an interface must be designed. Other information, such as PV or battery use, could also be required. Our literature analysis showed that one indicator can represent multiple objectives, so we enable the user to state criteria for each stated objective and support a "default criterion" (underlined objective in Table 2).

Table 2 Assessment criteria derived from literature (underlined objective number = default)

SOC6—Identify the best method(s) based on user objectives and scenarios

To support decision-making, we rank the DR methods based on their results (SOC5) and the stated user objectives (SOC1). As we may have multiple criteria and objectives, we match a multi-objective function with multiple criteria. Additional difficulties occur because individual objectives might be weighted differently when their importance to the users differs. Hence, three cases are derived: only a single objective is selected; multiple objectives with different weights are selected; and multiple objectives are selected but no weights are given. To make the results comparable, a single number is formed from the criteria (e.g., Charnes et al. 1978; Cooper et al. 2006) using one of three strategies (sketched in the code after this list):

  • If the user selects only a single objective, we can compare the criteria directly, since we enable only one measure per objective.

  • If the user selects two or more objectives and weights for each objective, a combined value can be calculated for each DR method with the formula ∑ᵢ Criteriaᵢ ∗ Weightᵢ.

  • If the user selects two or more objectives but no weights, equal weights are assumed, and the comparative value is calculated as in the second strategy.
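The following sketch illustrates the three strategies; the normalization of all criteria to a common "higher is better" scale is assumed to have happened beforehand, since their units of measure may differ (this normalization step is our assumption, not prescribed by the framework):

```python
# Illustrative sketch of the three ranking strategies (SOC6).
from typing import Optional

def score(criteria: dict, weights: Optional[dict]) -> float:
    if len(criteria) == 1:               # strategy 1: single objective
        return next(iter(criteria.values()))
    if weights is None:                  # strategy 3: no weights -> equal weights
        weights = {k: 1.0 / len(criteria) for k in criteria}
    return sum(criteria[k] * weights[k] for k in criteria)  # strategy 2: sum(C_i * w_i)

def rank(results: dict, weights: Optional[dict] = None) -> list:
    """Order DR methods by combined score, best first.
    `results` maps method name -> {criterion name -> normalized value}."""
    return sorted(results, key=lambda m: score(results[m], weights), reverse=True)

# Example: two methods, two equally weighted (normalized) criteria.
print(rank({"greedy (min)": {"savings": 0.8, "peak": 0.5},
            "greedy (max)": {"savings": 0.6, "peak": 0.9}}))
```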

SOC7—React to events by repeating the decision support process

Reacting to events (here, "Appliance Events" and "Objective Events") requires two actions: implementing a monitoring strategy and implementing a suitable reaction strategy. The monitoring determines whether changes have occurred, such as a new appliance being added or a change in the infrastructure (e.g., because of a malfunction). The framework repeats the whole selection process if the user changes the objectives ("Objective Event"), but the process can jump to the data-collection phase and rebuild the database if the event is an "Appliance Event". When the process runs for the first time, an "Initialization" is conducted, meaning that the process starts at the beginning with the user stating his or her objective(s). In the monitoring phase, while the DR method is in use, the framework receives event triggers and additional information from the HEMS. After receiving such an event, a specific phase is called and the information (e.g., the load profile of the new appliance) is handed over.
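A minimal sketch of the resulting dispatch logic, with the event and phase names taken from the framework description and the dispatch mechanism itself being an illustrative assumption, could look as follows:

```python
# Illustrative sketch of the event reaction (SOC7): an "Objective Event"
# restarts the decision process from the planning phase, an "Appliance Event"
# jumps back to data collection, and the "Initialization" starts the process
# from the beginning.

def phase_after_event(event_type: str) -> str:
    """Return the framework phase to (re-)enter when an event is received."""
    if event_type in ("ObjectiveEvent", "Initialization"):
        return "planning"         # user (re-)states his or her objective(s)
    if event_type == "ApplianceEvent":
        # the HEMS hands over, e.g., the load profile of the new appliance
        return "data_collection"  # rebuild the scenario database
    raise ValueError(f"unknown event type: {event_type}")
```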

Assembling the framework

The next step in developing the decision support framework is to assemble the SOCs into the framework and to embed the framework into the HEMS and its environment. Next, we describe how we assembled the framework and how the framework interacts with the HEMS and the environment.

Assemble the Solution Concepts

The REQs and SOCs are inspired by the benchmarking phases of Lugauer et al. (2012), so the framework is divided into four phases (see Fig. 4 for a visualization of the process and Table 3 for an overview of the REQs and SOCs). The first phase, Planning, involves SOC1, the specification of the objectives. The second phase, Demand Response, involves collecting data, including building the scenario (SOC2), preparing the data (SOC3), ensuring the comparability of methods (SOC4), and calculating the measures (SOC5). We divided this second phase into two sub-phases: data acquisition (SOC2 and SOC3) and optimization of the consumption plan (SOC4 and SOC5). The third phase, Decision and Deployment, identifies the best DR method based on the collected data, the stated objectives, and so forth (SOC6); this method is then deployed on the HEMS. In the last phase, Monitoring, the framework reacts to events that occur during the DR method's use. If an event occurs, the framework guides either the HEMS (automated) or the user (manual) back to the Planning phase or the Demand Response phase (SOC7). When the decision process is first run, it begins with SOC1 (objective specification).

Fig. 4 Decision support framework for DR methods with the SOCs addressed

Table 3 Derived REQs and SOCs (overview)

Embed the Framework

The framework requires inputs from the HEMS, the users, and the environment, so it must interact with those three entities. Both the inputs and the interactions are specified for each phase of the framework. In the Planning phase, users state their objectives, potentially on the HEMS' (graphical) user interface (e.g., Han et al. 2014; Khadar et al. 2017; Yener et al. 2017), as this interface is a suitable way for users to communicate their objectives to the framework. This phase occurs either on the initial use of the DR method or when an "Objective Event" occurs. In the Demand Response phase, the framework provides suitable data to the user for the purpose of scenario-building, again potentially via the HEMS' interface, and the user specifies individual appliances, intervals, etc. The HEMS can transfer these data to the framework. Even if an automated process of scenario-building (e.g., through forecasting) is chosen, the user can monitor and alter the scenario. In the third phase, Decision and Deployment, the framework shows the ranked DR methods in the HEMS' interface based on their results and the users' stated preferences. Then either the user chooses his or her favored method, or the framework performs this choice automatically. The method is then deployed on the HEMS. Finally, in the Monitoring phase, the deployed method is used on the HEMS, and any change events that occur are reported to the framework, together with additional information (e.g., a new objective given, a new appliance installed). If an "Objective Event" occurs, for example because the user changes the objectives in the HEMS' interface, the altered objective(s) need to be reported to the framework. If an "Appliance Event" occurs, the altered appliance(s) need to be reported, for example the consumption profile of the (new) appliance. The framework then continues with the decision process and hands over the new information.

Demonstration

To demonstrate the framework's feasibility and applicability, we conducted several simulations. We instantiated the SOCs (Section "Instantiation of Framework") and designed an illustrative scenario (Section "Demonstration Case"). The instantiation consists of the framework itself (visualized with a graphical user interface that simulates the HEMS interface) and four DR methods, each of which addresses the same optimization problem and fulfills the same constraints (for the description of the optimization problem and the constraints, see Appendix 4). As examples, we implemented two solving strategies with two variations per strategy: greedy-based (min and max) (Sianaki et al. 2010) and multi-agent-system-based (with and without communication) (Mohsenian-Rad et al. 2010) (for additional information, see Appendix 3).

We chose an artificial setting for the demonstration scenario. Consequently, events are triggered manually, and the users' objective(s), appliances, and so on are simulated. To consider multiple homes and the effects of a method selection, we simulated a microgrid consisting of several homes with different configurations (e.g., varying appliances and usage patterns). To simulate changes, we introduce two Appliance Events: the addition of an EV with regular domestic (AC) charging and, subsequently, the installation of corresponding fast-charging (DC) infrastructure.

Instantiation of framework

We implemented the software prototype in Java so we could use common libraries to visualize results and read data. At the beginning of the decision process, the user must select the objectives to be achieved (SOC1). In our demonstration case, we use cost minimization, represented by savings, as the measurement (see Fig. 7 in Appendix 2). As user-specific data for the selection process (SOC2), we use artificial data, for example, for use intervals. We use a TOUP, and as no storage or generation infrastructure is part of the demonstration scenario, these data are not added (see Fig. 8 in Appendix 2). The data itself must be in a suitable format (SOC3), so we chose a 15-minute interval to match the proposed DR methods. As the data are simulated, there are no gaps. Each DR method has a meta file that provides additional metadata (SOC4). Once a method has optimized, the resulting consumption plan is sent to the framework. Next, the indicators are calculated and written into an output file for documentation, and the measurements are calculated with the information given (SOC5). With these criteria and based on the objectives from SOC1, a ranked list of methods is created (SOC6), and the results are displayed (see Fig. 9 in Appendix 2). The selected method is deployed on the HEMS (see Fig. 10 in Appendix 2). Afterwards, the framework switches to the monitoring phase (SOC7), where two events occur and the analysis is repeated.

Demonstration case

To build a suitable foundation for this test case, appropriate data must be selected and prepared (SOC2 and SOC3). Initially, we used recorded data from naturalistic homes (analyzed, for example, in Monacchi et al. 2014; Behrens et al. 2016), which cover existing scenarios of real users, so no assumptions needed to be made (cf. Barta et al. 2014). However, we had to add information about, for example, the times of use of certain appliances, so we added artificial data using the LoadProfileGenerator (LPG) (Pflugradt 2017) to ensure sufficient data quality (cf. Cao et al. 2013; Hoogsteen et al. 2016). The LPG has several predefined load profiles and uses a behavior model to simulate the data.

We used seven predefined households from the LPG as a demonstration case. The households' inhabitants differ in terms of their ages, jobs, habits, and daily routines. Consequently, the consumption amounts and patterns in these households differ.

Basic scenario details

We built the following scenario (SOC2) of seven homes:

  1. A couple (male, age 40; female, age 35), both of whom work outside the home, with three children (male, age 13; male, age 6; female, age 4), requires 4001.24 kWh throughout the year (min (day): 0.16 watts; max (day): 8914.97 watts).

  2. A working woman (age 30) with two children (males, ages 11 and 7) requires 3277.88 kWh throughout the year (min: 0.16 watts; max: 7364.95 watts).

  3. A multigenerational home with a working couple (male, age 40; female, age 32), two children (female, age 15; male, age 4), and two seniors (male, age 70; female, age 68) requires 8279.49 kWh throughout the year (min: 0.50 watts; max: 13,624.86 watts).

  4. A working woman, age 25, requires 1778.51 kWh throughout the year (min: 0.16 watts; max: 8644.84 watts).

  5. A working man, age 26, requires 1512.73 kWh throughout the year (min: 0.16 watts; max: 7208.11 watts).

  6. A working couple (male, age 25; female, age 23) requires 2448.96 kWh throughout the year (min: 0.16 watts; max: 6370.82 watts).

  7. A couple (working man, age 45; female homemaker, age 43) with two children (male, age 20; female, age 14) requires 4112.76 kWh throughout the year (min: 0.16 watts; max: 10,643.47 watts).

Deferrable appliances within the homes are washing machines, dryers, and dishwashers. The duration, time, and quantity of appliance use, which depend on the lifestyles and behavior of the residents, are provided. We added the following information, which is visualized in Fig. 5:

  • The deferrable appliances may not run between 12 a.m. and 7 a.m.

  • The working residents leave at 7 a.m. and return at 6 p.m., thus an EV cannot be charged in between.

  • Our test case uses a TOUP. The price from 0:00–06:00 and 22:00–24:00 is €0.24 per kWh, and the price from 06:00–22:00 is €0.36 per kWh.

Fig. 5 Summarized consumption of all households, EV charging times, and TOUP borders

Scenario variations

Events can call for adjustments to the chosen DR method. For initialization, the user first states the objectives to be achieved. In our scenario, we choose cost minimization as the only goal, so we have just one objective. To measure it, the savings are considered as the criterion, that is, how much cost (in percent) can be saved compared to not performing DR. In the second step, we assemble the scenario details, which means the user adds the appliances and usage patterns (e.g., intervals) to the scenario. In the demonstration case, the predefined consumption plans are used. After the implemented DR methods have calculated an optimized consumption plan, a ranked order is derived, and the user deploys the favored method on the HEMS. As a starting point, we assume that our users have no EVs. Later, they buy an EV (we assume a Volkswagen e-Up) and charge it with 3.6 kW (AC) (Appliance Event 1). The alteration is monitored by the HEMS. If changes occur in a Smart Home or microgrid (here, events; see, for example, Bui et al. 2017), the scenario needs to be re-analyzed to ensure that the best method is still selected. Correspondingly, the decision process starts again from the second phase, and the scenario must be adjusted. Later on, a fast-charging station is installed (Appliance Event 2). As a result, we have three variations of our scenario, caused by two Appliance Events (see Table 4 for a summarized scenario description).

Table 4 Scenario Description

Results of the demonstration case

After we built the demonstration case, the implemented DR methods calculate the new consumption plans and transfer them to the framework. The measurements are calculated, and the best method is identified based on the selected objective, in this case, cost savings (see Fig. 6). The results indicate that the performance of a particular DR method depends on the scenario. Without an EV, the best methods are the two greedy methods. This changes when an EV is considered, as the EV shifts load to the morning hours, which flattens the load profile. At this point, greedy (max) is most beneficial, but after the households switch to DC fast charging, greedy (min) generates the most savings. Each event (integrating an EV and changing the charging mode) requires adjustments in the grid; if the methods had not been re-selected after the events occurred, we would have lost 3.1% (Event 1) and 0.7% (Event 2) in savings. The overall savings from performing DR at all are much higher (up to 34.2%). Therefore, we generated additional savings of about EUR 1.00 per day just by selecting the best method over the second-best. Comparing the best to the worst methods, we save 1.6% in scenario 1, 14.2% in scenario 2, and 0.8% in scenario 3. While these savings may not sound significant, they come on top of the savings that even the "wrong" DR method would create, and they result solely from re-selecting the method with our framework (see Appendix 5 for more detailed results).

Fig. 6 Results of method benchmarking: without EV (top), EV charged with 3.6 kW AC (middle), and EV charged with DC (bottom). The best method in each case is highlighted in light gray. Values are the savings [%] of each method (for more detailed results, see Appendix 5)

Conclusion

This study focuses on the Smart Home context, which poses several challenges. To address them, we gathered a set of REQs for a decision support framework for DR methods in the residential context, derived SOCs to fulfill the REQs, and assembled them into a framework. To demonstrate the framework's feasibility and applicability, we implemented a software prototype and conducted simulations with different scenarios. The results indicate that our framework supports the identification of the best DR method for a user-specific scenario and thus enables decision-making. The framework thereby aims at increasing the efficiency of DR method usage in Smart Homes and thus helps realize DR-related objectives such as improving grid reliability, minimizing energy costs, and reducing peak loads. To this end, the decision process is structured to enable a systematic and scenario-specific selection. To support this decision process, the framework contains advice for (technical) implementations on different HEMSs, derived from the literature and from experts.

From a research perspective, our results can be used to support the management of complex energy structures in changing scenarios and to investigate whether such a framework can increase user acceptance by encouraging more sustainable behavior and consumption. Moreover, researchers can compare DR methods in specific scenarios (e.g., to compare methods they develop with other methods). Enabling evaluation with benchmarking datasets increases the comparability of methods.

For practice, the integration of the framework into a HEMS can support the management of increasing complexity in the Smart Home context and in urban infrastructures (e.g., Smart Cities). Consequently, new appliances and new infrastructure can be used more efficiently; for example, a PV panel can be combined with energy storage when the best method can be selected more efficiently. Moreover, user acceptance may increase, as users can see the advantages of performing DR and of choosing the best-performing DR method, and can see that their preferences (welfare) are considered.

The demonstration case aims at showing the framework's feasibility and applicability by answering the research question, that is, by identifying the best DR method for a specific situation. It thereby covers only a limited selection of all possible scenario specifications. Other microgrid consumption patterns, cost functions, and infrastructures would deliver other DR method results. However, the decision process remains the same and would also support the selection of the best-performing DR method in those cases.

Moreover, as intelligent energy management systems, especially HEMSs, have not yet been widely implemented, a demonstration and evaluation with real-world data was not possible. Consequently, as the rarely available (real) data also lacks necessary information, we relied on artificial data, with corresponding limitations such as insufficient event integration, abrupt changes in user behavior, etc.

The developed framework focuses on the residential context. Other contexts discussed in the literature are the industrial and commercial contexts (Gellings and Chamberlin 1987). Future research can apply our framework to these contexts to investigate its applicability there. For example, the available DR methods will differ, appliances might have other restrictions, and the available infrastructure is more diverse. In general, we assume that our approach is capable of providing decision support in other contexts as well. Most aspects of these contexts, such as objectives and criteria, are similar to the residential context. However, changes to the REQs might be necessary. For example, other constraints or variations like deadlines (e.g., production deadlines) would have to be taken into account in more detail (e.g., because the supply chain needs more precise planning or because machines can switch among multiple modes (Behrens et al. 2017)). Moreover, pricing schemes might differ from the residential context, and flexibility might change, as machines react differently to changes than a private user does.

Real data lack additional information, for example, usage times, deadlines of single appliances, etc. Since recording the additional information required from the user is difficult, we plan to design and implement a tool that enables users to transfer user-specific data, such as behavior patterns and preferences. To increase user acceptance and penetration, consumers without knowledge in the field should also be able to use the framework, supported by an assistant that guides the user.

Our approach can be combined with the TAC of Ketter et al. (2011). Ketter et al. (2016) called the entire field a "wicked problem". As the TAC is a "system within a system", the users in the TAC might change their strategy when they have such benchmarking as decision support; they would then react to external factors like price signals as well as to events. A challenge would be to implement agents not only on the supplier side but also on the consumer side to reach the best result for both sides, or to deal with decision-making on the supplier side. Agents on the supplier side would then benefit from our framework and its improved decision-making (selecting the best-performing DR method).

DR objectives are manifold and differ in their indicators and dimensions. The consideration of multiple indicators with various dimensions and no predefined weightings can impede the decision process. One method that can be used in this case (multiple indicators, various dimensions, no predefined weights) is Data Envelopment Analysis (e.g., Charnes et al. 1978; Vine et al. 1994; Suganthi and Samuel 2012). So far, our framework can only consider a single objective (and indicator), or multiple ones if the user defines weights or equal weights are assumed for each objective. It seems possible to consider efficiency as well, so that not only the quality of a method's result but also the effort needed to achieve it can be taken into account.