In the session on Flexibility in Distribution Grids that we organized as part of the Sustainable Urban Energy Systems conference in Delft last week, we discussed this challenge with representatives of two distribution network operators (Fons Jansen and Willem van den Reek), a professor of regulation in energy markets (Machiel Mulder), and the head of new energy business at an aggregator (Jorg van Heesbeen). First, each of them gave their vision on these challenges.

Willem van den Reek and Fons Jansen presented the plans of Alliander and Enexis. The core task of these publicly owned distribution system operators is to provide access to the grid and to distribute power at efficient costs. One possible, but very expensive, approach to the identified challenge is to reinforce the network. However, this should only be done if it is indeed the most efficient solution. Their plans therefore contained two new ingredients to prevent or postpone such large investments:

- new tariffs for households that are more related to the capacity used (a capacity charge for a fixed capacity “bundle” with a fixed (high) price for any “outside of bundle” capacity use), and, where this is insufficient to prevent overloading the network,
- a new market where the network operator pays some of the owners of flexible load to shift their consumption in time.
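As a rough sketch of how such a bundle tariff could work (all names, rates, and the 4 kW bundle size below are made-up assumptions for illustration, not the operators' actual proposal):

```python
# Hypothetical sketch of a "capacity bundle" tariff: a fixed fee for a
# contracted capacity, plus a high price for every kW drawn above it.
# All rates and the bundle size are illustrative assumptions.

def monthly_network_bill(peak_kw_per_day, bundle_kw=4.0,
                         bundle_fee=25.0, excess_price_per_kw=2.0):
    """Fixed fee for the bundle, plus a charge for each day's peak above it."""
    excess = sum(max(0.0, p - bundle_kw) for p in peak_kw_per_day)
    return bundle_fee + excess_price_per_kw * excess

# A household that stays inside its 4 kW bundle pays only the fixed fee;
# charging an EV at 7 kW on two evenings adds 2 * (7 - 4) * 2.0 = 12.0.
print(monthly_network_bill([3.5] * 30))              # 25.0
print(monthly_network_bill([3.5] * 28 + [7.0] * 2))  # 37.0
```

The high out-of-bundle price gives households an incentive to keep their peak load inside the bundle, for example by spreading EV charging over more hours.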

The discussion in this session focused mostly on the new tariff because, as I understood it, the market for flexibility should only be a temporary solution, in a few places, while the network there is being reinforced. (Also, Laurens de Vries and I shared our opinion on these flex markets before.)

Machiel Mulder fully agreed with the idea of tariffs that depend on the capacity used. He furthermore explained that the economically most efficient tariff would price the use of the network depending on the congestion (peak pricing): when and where there is no congestion, the tariff can be a relatively low fixed amount, but at moments and locations where congestion occurs, the tariffs should be so high that a sufficient volume of flexible loads shifts to less congested times. Machiel also presented the results of a study on how fair people consider such congestion pricing (yes, there is a similarity here to the #vroempoen). In this study (“Assessing fairness of dynamic grid tariffs”, 2017) he showed that people consider it less fair than payments based on the total energy used (a transport charge) or on the maximal capacity allowed (a capacity charge). However, when it is properly explained that this leads to the most efficient energy system, and that in this way the people/loads that demand the most from the network indeed also pay the most, it is on average considered “reasonably” fair.

Finally, Jorg van Heesbeen explained that Jedlix provides smart charging services to users of electric cars, and how trading such flexible demand on the (wholesale) electricity market reduces costs for users, helps balance the system, and supports the integration of renewable generation. He also said he expects that aggregators like Jedlix will always try to minimise the total electricity costs for their users, and will consequently make use of new opportunities like flex markets and deal with new tariff systems, even if these are quite involved.

After these introductions, the discussion was kicked off by the observation that there seems to be one main tension between the theoretically optimal tariff (the dynamic peak tariff) and the capacity charge proposed by the network operators. On the one hand, the network operators prefer a tariff that is fair, and conclude from this that it should not differ depending on circumstances such as time and location. On the other hand, the theoretically most efficient solution for the energy system as a whole (and thus, on average, the cheapest for its users) explicitly differentiates based on the (peak) use of the network, and thus differs from time to time and from location to location.

All parties present seemed to subscribe to the need for some kind of (price/tariff) incentive to avoid peaks that could overload the network (at least at places where reinforcement can easily be avoided in this way). To me it didn’t seem like the participants agreed on how to do this, but some criteria for a good tariff design surfaced in the course of the discussion:

- The tariff design should support an efficient electricity system.
- The tariff design should be fair. For example, we don’t want people without electric cars to pay more so that people with a lot of money can make better use of the network.
- The tariff design should be simple. For example, it should not be too complex for a typical end-user to optimally adapt to the new tariff design.

The most efficient design does not appear to be simultaneously the simplest and the fairest, so we are facing a trade-off among these criteria. My main conclusion from the discussion is that we are missing essential information about the properties of the proposed tariff designs to make this trade-off appropriately. More specifically, for each of the proposed tariff designs, I conclude that we need to answer the following questions:

- What should these tariffs be, given current and future market conditions?
- What is the effect on the total system efficiency?
- What is the effect on the total energy costs for the different user groups? (Including some examples.)
- Can we make decision-making sufficiently easy for all user groups?

Only with this knowledge can we properly continue the discussion about a new tariff design and move forward with a transition to a more efficient and sustainable electricity system.

The difficulty with the power system is that supply and demand need to be in balance at all times. While in other markets it is relatively easy to store goods in times of low demand and low prices, this is very difficult with electrical energy. Power produced at noon is a different product than power produced in the evening, with different demand (because of the way people live), different supply (because of electrical energy coming from wind and sun), and a different price. The match of generation and demand is made through electricity markets, which are therefore defined on so-called programme time units (PTUs). In each PTU the amount of electrical energy sold (and thus generated) and bought (and thus used) should be equal. The length of these PTUs typically is one hour, or in some cases 30 or even 15 minutes.

However, this granularity is insufficient to guarantee that power generated equals power consumed at every instant. This can be observed from the following figure of the average frequency of the European system (winter 2016/2017). In the evening hours, the average frequency drops to nearly 49.9 Hz at the start of every hour. These temporary episodes of shortage are equivalent to a generation dip of about 2 GW (and additionally we see some overproduction before the end of each PTU). They are resolved via expensive reserves, while reserves are of course intended to handle unforeseen events.

A well-established solution is to use a finer granularity for these markets. As long as optimal bidding and market clearing can still be done within one such period, this leads to significantly smaller frequency dips or (in our simulations) lower unserved energy demand. However, given the desire for optimal solutions, in 2013 hourly time slots seemed to be the smallest PTU possible, and since the runtime scales exponentially in the number of periods, this direction of increasing granularity certainly has its computational limitations.

Changing the market rules to trade *linear power trajectories* within each PTU, instead of a total energy amount, brings the advantages of a finer granularity without increasing the computational load: the instantaneous power balance is followed with more precision. In our experiments, we subsequently dispatch the energy plan ideally, avoiding imbalances as much as possible, very much like the real-time dispatch market in the US. In these simulations, the reduction in unserved energy demand is similar to doubling the number of periods. If the energy plan were not dispatched ideally, as in the EU markets, the performance of energy-based trading would be even worse than what we compare to in the paper, and thus trading power would be even more advantageous.
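A small numerical sketch may help see why a linear trajectory tracks demand better than an energy block; the one-hour ramp below is an illustrative example, not data from the paper:

```python
# Illustrative sketch: track a demand ramp from 1 GW to 2 GW over one hour,
# either with a constant-power energy block or with a linear power
# trajectory that delivers the same total energy.
import numpy as np

t = np.linspace(0.0, 1.0, 601)       # one PTU of one hour
demand = 1.0 + t                     # ramp: 1 GW -> 2 GW

block = np.full_like(t, 1.5)         # energy block: 1.5 GWh as flat power
trajectory = 1.0 + t                 # linear trajectory: same 1.5 GWh

print(np.max(np.abs(block - demand)))       # 0.5 GW instantaneous imbalance
print(np.max(np.abs(trajectory - demand)))  # 0.0 GW
# Both deliver the same energy over the PTU:
print(np.trapz(block, t), np.trapz(trajectory, t))
```

The energy delivered is identical, but only the trajectory eliminates the within-PTU imbalance that would otherwise have to be covered by reserves.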

Moreover, trading power trajectories instead of energy amounts can be combined with any increase in granularity. How to compute the payments based on such power trajectories is also explained in our paper on “Trading power instead of energy in day-ahead electricity markets”. Many thanks to the main author, Rens Philipsen, and my other co-authors: Laurens de Vries and German Morales.

| Acronym | Workshop | Organizers |
|---|---|---|
| COPLAS | Constraint Satisfaction Techniques for Planning and Scheduling | Roman Bartak, Miguel Salido |
| HSDIP | Heuristics and Search for Domain-Independent Planning | Daniel Gnad, Michael Katz, Nir Lipovetzky, Guillem Francès, Christian Muise, Miquel Ramírez, Silvan Sievers |
| DMAP | Distributed and Multi-Agent Planning | Antonin Komenda, Michal Stolba, Michal Pechoucek, Daniel Fišer |
| KEPS | Knowledge Engineering for Planning and Scheduling | Lukas Chrpa, Ron Petrick, Mauro Vallati, Tiago Vaquero |
| PlanRob | Planning and Robotics | Alberto Finzi, Erez Karpas, Goldie Nejat, Andre Orlandini, Siddharth Srivastava |
| PlanSOpt | Planning, Search and Optimization | Michael Cashmore, Andre A. Cire, Bram Ridder, Chiara Piacentini |
| SPARK | Scheduling and Planning Applications woRKshop | Sara Bernardini, Simon Parkinson, Kartik Talamadupula |
| Hierarchical Planning | Hierarchical Planning | Pascal Bercher, Daniel Höller, Susanne Biundo, Ron Alford |
| IntEx | Integrated Planning, Acting and Execution | Mak Roberts, Tiago Vaquero, Sara Bernardini, Tim Niemueller, Simone Fratini |
| UISP | User Interfaces and Scheduling and Planning | Jeremy D. Frank, Richard G. Freedman, J. Benton, Ronald P. A. Petrick |
| XAIP | EXplainable AI Planning | Susanne Biundo, Pat Langley, Daniele Magazzeni, David Smith |

Many intelligent systems aim to make sequences of decisions based on predictions from data gathered in the past. This brings three general challenges for such planning algorithms to the table. First, for computing good decisions, the model underlying the predictions needs to be usable by the decision-making algorithm. Second, in many cases it should be possible to explain the choices made by such an intelligent system; simply “this is what the system has learned” is insufficient for important decisions. Third, as soon as the system starts making decisions, this influences the environment and thus the data collected: past performance is no guarantee of future results.

One example of a domain with an enormous potential for planning algorithms is the smart grid. Renewables and flexible demand such as heat pumps and charging of electric vehicles bring more uncertainty, and also computational challenges for several parties in the electricity sector. There is the new role of an aggregator to schedule and trade the flexibility of demand in the electricity markets. The electricity markets themselves may need to be redesigned because of the intermittency of generation and flexible demand, resulting in more complexity in clearing the market. Distribution network operators need to coordinate the flexible demand in congested areas to prevent more demand shifting to a moment in time (with the lowest electricity price) than the capacity of the network allows, such as illustrated in the figure below.

Each of these new challenges in the smart grid gives rise to an interesting optimization problem. Interesting, because typically they are NP-hard: there is no known algorithm that can directly solve such realistically-sized problems quickly enough.

In our research we tackle such challenges by identifying and exploiting some structure in these problems. For example, to schedule a large number of heat pumps under a single network capacity constraint, we decouple the scheduling problems for the individual heat pumps as much as possible by representing the value of the network capacity to all heat pumps as an artificial “price” for each moment in time. See also this post on the recently accepted paper by Frits de Nijs for the case where the constraint is not exactly known in advance (because of non-flexible demand on the network).
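The idea can be illustrated with a deliberately tiny toy model; the three pumps, three slots, costs, capacities, and the subgradient step size below are all assumptions for illustration, not our actual algorithm:

```python
# Minimal sketch of price-based coordination (a toy with assumed numbers):
# each heat pump independently picks the time slot minimizing its own
# preference cost plus an artificial congestion price, and prices rise on
# overloaded slots until the shared capacity is respected.
import numpy as np

pref = np.array([[0.0, 1.0, 2.0],    # pump 0: strongly prefers slot 0
                 [0.1, 0.05, 2.0],   # pump 1: prefers slot 1
                 [0.0, 0.3, 1.0]])   # pump 2: also prefers slot 0
cap = np.array([1, 2, 2])            # network capacity per slot (in pumps)
price = np.zeros(3)
step = 0.05

for _ in range(20):
    choice = np.argmin(pref + price, axis=1)              # best response per pump
    load = np.bincount(choice, minlength=3)
    price = np.maximum(0.0, price + step * (load - cap))  # subgradient update

print(choice, load)  # pump 2 is priced out of the congested slot 0
assert np.all(load <= cap)
```

Each pump only optimizes against the price vector, so the per-pump problems stay decoupled; the price on the congested slot rises until one pump finds it cheaper to shift in time.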

Our proposed methods compute coordinated policies for a number of sequential decisions that are taken without further communication, while aiming to meet common stochastic time/history-dependent resource limits in expectation. These methods extend two known deterministic preallocation algorithms that compute policies for a given planning horizon by allocating resources to agents a priori: a mixed-integer linear program by Wu and Durfee (2010) that guarantees the resource constraint is never violated, and a so-called Constrained Markov Decision Process (CMDP) formulation by Altman (1999) that allows stochastic policies and only guarantees that the expected consumption does not exceed the amount available. Extensive experiments on two completely different domains show that both our extensions take more time to compute than their original versions, but simultaneously lead to fewer violations of the constraint and more efficient executions. Furthermore, we show that more frequent replanning and communication further improves results.

de Nijs, F., Spaan, M., & de Weerdt, M. (2018). Preallocation and Planning under Stochastic Resource Constraints. In Proceedings of the 32nd AAAI Conference on Artificial Intelligence.
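To give a flavour of the CMDP formulation mentioned above, here is a minimal occupancy-measure linear program with a single state and two actions; all numbers are illustrative assumptions, not the paper's domains:

```python
# Toy sketch of the CMDP idea: maximize expected discounted reward subject
# to a bound on expected discounted resource cost, as a linear program over
# occupancy measures. One state, two actions: a cheap action (reward 1,
# cost 0) and a resource-hungry action (reward 3, cost 1 resource unit).
from scipy.optimize import linprog

gamma = 0.9
rewards, costs = [1.0, 3.0], [0.0, 1.0]
budget = 4.0                             # expected discounted resource budget

# Occupancy measures x[a] >= 0 with sum_a x[a] = 1 / (1 - gamma).
res = linprog(c=[-r for r in rewards],   # maximize reward
              A_ub=[costs], b_ub=[budget],
              A_eq=[[1.0, 1.0]], b_eq=[1.0 / (1.0 - gamma)],
              bounds=[(0, None), (0, None)])

x = res.x
policy = [xi / sum(x) for xi in x]       # stochastic policy from occupancies
print(-res.fun, policy)                  # value 18.0; costly action 40% of the time
```

The optimal policy is stochastic (the LP splits occupancy between the two actions), and the cost constraint holds only in expectation; this is exactly the weakness that motivates the stronger preallocation guarantees in the paper.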

A number of the Dutch (electricity) network operators have joined forces to use flexible demand to prevent costly reinforcements of the network. I think this is a really good objective: we expect more electricity use in the grid because of (for example) heat pumps, electric cars and solar panels, and with the old custom of scaling the network to the worst case, very costly reinforcements of the network may seem necessary. However, in some areas an overload of the network will only last a few hours, and only a few times a year. In such cases, it may not be economical to reinforce the network. The alternative is to coordinate some of these loads: shifting them in time to prevent the congestion. The owners of loads that allow for this can then receive some compensation.

The problem that we discussed in our opinion article is that this situation cannot be seen in isolation: owners of flexible loads can monetize this flexibility by trading in electricity markets (and some already do). For example, currently there is significant value for flexibility in the (Dutch) imbalance market, and also in the long run and internationally there will be value for flexible demand, because of the shift to intermittent power generation such as wind, and the physical requirement that electrical supply and demand must be in balance at all times.

Typically the owners of flexible loads do not trade directly themselves, but hand over part of the control to a so-called aggregator: an electricity trading company with the objective of making the best use of this flexibility. Such aggregators will happily trade this flexibility with network operators at the right price. The consequence of this is that this price needs to be at least the amount that can be earned in the electricity markets.

This, however, may lead to significant problems in the long run: such an additional source of profit will draw more flexible electrical loads to a congested area. These loads will then all shift simultaneously to the moments where the most profit can be made in the electricity markets, leading to additional moments of congestion, and the network operator then needs to compensate all loads above the capacity.

We conclude in our opinion article that using a flexibility market to resolve congestion is not sustainable: it is like the idea of paying people for not using the highway during peak hours. The operator of the infrastructure is paying users for not doing something, and is thereby creating a market where an increase in supply (of flexibility) leads to an increase in costs for the buyer (the network operator), instead of the regular (stabilizing) property of markets that an increase in supply leads to lower costs for the buyer.

Flexibility is thus not the appropriate good to trade in a market to solve congestion. The scarce good here is in fact the network capacity. By having consumers pay a dynamic and localized price for their momentary network use, this price can be used to keep consumption within the network capacity (Philipsen et al., 2016). However, it is still an open question how to arrange this in the most efficient way. A very interesting multi-party optimization problem!

Laurens and I are happy to work on such interesting problems together. We would like to thank a few people and companies that made this possible. First, we are co-supervising the PhD project of Rens Philipsen (since October 2014) on market design for distribution systems within the GCP project. Also, at the beginning of this year (2017) Irma Stegmann started her master’s thesis project at an aggregator, i.e., the company Jedlix, with Laurens as supervisor. The topic of her thesis is “Flexibility trading for aggregators of electrical vehicles within the Universal Smart Energy Framework” (USEF). Around the same time we started a project with this company on Future-proof Flexible Charging, where we investigate how to optimize the use of flexibility in charging electric vehicles, also in the context of USEF. Finally, we were invited by Liander to a sounding board group (klankbordgroep) called DYNAMO on the development of a market for flexibility. All these discussions helped us shape our opinion. Thank you all for these collaborations!

Stochastic optimization (usually) aims at finding a decision for which the (weighted) sum of the objective values over a set of scenarios is optimal. Robust optimization aims at finding an optimal decision under the worst-case uncertainty realization over a continuous uncertainty set. In a unified stochastic-robust optimization, the aim is to combine the advantages of both (a good expected and a robust solution) and overcome the disadvantages of each (a high computational burden and expensive over-conservativeness).

The formulation we used is also adaptive (i.e., two stage); this means that we take into account that some decisions can be made after we have observed the actual value of the uncertain parameters (i.e., after the realization of the uncertainty). Generally, taking this adaptive part into account makes the problem harder.

An adaptive robust optimization model is a very good fit for the problem of determining which power generators to put online, given predictions for electricity demand and wind generation: we aim to minimize the costs for generators in expectation, including the cost of scenarios where the uncertain realization of wind creates an imbalance and we need to recover from this (adaptive decisions) at some extra cost, for example by redispatching generators, putting expensive reserve generators online if needed, or by wind curtailment (i.e., shutting down wind generators).

Our main contribution is an improved formulation for the problem where the uncertainty in wind generation is modeled by a so-called box uncertainty set. Such a box uncertainty set assumes a range for wind power generation for every time step and for different locations in the network. We show that the adaptive robust optimization model can then be represented by a single-level optimization program. Intuitively, this is possible by taking the lowest amount of wind power generation possible and optimizing for that scenario. In general, if the uncertainty is represented by a vector that describes upper bounds for a set of constraints (in our case, the maximum wind power that can be dispatched), the worst case must be a minimal element in the set of all such vectors possible, since all other vectors lead to strictly larger feasible regions and thus better solutions.
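The argument can be checked on a toy dispatch problem (the generator costs, capacities, and demand below are illustrative assumptions, not the model from the paper): under a box uncertainty set, the cost is maximal at the lower bound of wind.

```python
# Toy check: with a box uncertainty set on wind, dispatch cost is worst
# when wind is at its lower bound, since more wind only enlarges the
# feasible region. All numbers are illustrative assumptions.
from scipy.optimize import linprog

def dispatch_cost(wind_cap, demand=100.0):
    # Variables: cheap generator (<= 80 MW at cost 20), expensive generator
    # (<= 100 MW at cost 50), and free wind dispatch (<= available wind).
    # Balance constraint: total supply equals demand.
    res = linprog(c=[20.0, 50.0, 0.0],
                  A_eq=[[1.0, 1.0, 1.0]], b_eq=[demand],
                  bounds=[(0, 80), (0, 100), (0, wind_cap)])
    return res.fun

lo, hi = 10.0, 40.0                            # box uncertainty set for wind
print(dispatch_cost(lo), dispatch_cost(hi))    # 2100.0 1200.0
assert dispatch_cost(lo) >= dispatch_cost(hi)  # worst case at minimal wind
```

Because the cost is monotone in the wind bound, the robust problem collapses to a single deterministic dispatch at minimal wind, which is the intuition behind the single-level reformulation.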

In our paper we also considered the unified stochastic-robust model and reasoned that it makes sense to require that the actual wind dispatch in a scenario is never lower than that in the worst-case scenario (with minimal wind). These extra constraints (per scenario, per time step, and per location) appear to improve the robustness of the unified model at an acceptable cost in additional computation time.

A final important contribution, admittedly all work done by German, is an extensive set of experiments in which we show the difference in both quality (in terms of cost and wind curtailment) and run time for the stochastic, robust, and stochastic-robust formulations. The most significant result is that the run time of the stochastic-robust formulation is lower than that of a pure robust or a pure stochastic formulation, while it outperforms them significantly in all other aspects (costs, robustness and wind curtailment).

The ICAPS 2018 program committee invites paper submissions related to automated planning and scheduling. Relevant contributions include, but are not limited to:

- Applications and case studies of planning and scheduling techniques
- Uncertainty and stochasticity in planning and scheduling
- Partially observable and unobservable domains
- Conformant, contingent and adversarial planning
- Plan and schedule execution, monitoring and repair
- Continuous planning, on-line and real-time domains
- Plan recognition, plan management and goal reasoning
- Classical planning techniques and analysis
- Continuous state and action spaces
- Multi-agent and distributed planning
- Domain modelling, knowledge acquisition and engineering
- Learning for planning and scheduling
- Human computer interaction for planning and scheduling systems
- Mixed initiative planning and scheduling systems

Besides the main track, ICAPS 2018 will continue to host additional tracks in the following areas:

**Robotics:** Planning, execution, and coordination for individual or teams of robots, at the level of tasks, behaviors and motions; applications in autonomous, mixed-initiative, and human-robot interactive systems; preference for techniques demonstrated on actual robot systems.

**Novel Applications:** Emerging and deployed applications, case studies and lessons learned.

**Planning and Learning:** Research at the intersection of the fields of machine learning and planning & scheduling.

**ICAPS 2018 will host a new special track on operations research for planning & scheduling.**

Additionally, the ICAPS program will include journal presentations, workshops and tutorials, each having separate submission and notification dates, to be announced separately.

Authors may submit long papers (8 pages AAAI style plus up to one page of references) or short papers (4 pages plus up to one page of references). The type of paper must be indicated at submission time.

All papers, regardless of length, will be reviewed against the standard criteria of relevance, originality, significance, clarity and soundness, and are expected to meet the same high standards set by ICAPS. Short papers may be of narrower scope, for example by addressing a highly specific issue, or proposing or evaluating a small, yet important, extension of previous work or new idea.

Authors making multiple submissions must ensure that each submission has significant unique content. Papers submitted to ICAPS 2018 may not be submitted to other conferences or journals during the ICAPS 2018 review period nor may they be already under review or published in other conferences or journals. Overlength papers will be rejected without review.

All submissions will be made electronically, through the EasyChair conference system:

https://www.easychair.org/conferences/?conf=icaps2018

Submitted PDF papers should be anonymous for double-blind reviewing, adhere to the page limits of the relevant track CFP/submission type (long or short), and follow the AAAI author kit instructions for formatting: http://www.aaai.org/Publications/Author/icaps.php

In addition to the submitted PDF paper, authors can submit supplementary material (videos, technical proofs, additional experimental results) for their paper. Please make sure that the supporting material is also anonymized. Papers should be self-contained; reviewers are encouraged, but not obligated, to consider supporting material in their decision.

| Event | Date |
|---|---|
| Abstract submission | November 17, 2017 |
| Paper submission | November 21, 2017 |
| Author notification | January 29, 2018 |
| Summer School | June 20 – 23, 2018 |
| Conference | June 24 – 29, 2018 |

The reference timezone for all deadlines is UTC-12. That is, as long as there is still some place anywhere in the world where the deadline has not yet passed, you are on time!

For inquiries contact: pcchairs-icaps18@googlegroups.com

The aim of the conference is to bring together interested researchers from Constraint Programming (CP), Artificial Intelligence (AI), and Operations Research (OR) to present new techniques or applications and to provide an opportunity for researchers in one area to learn about techniques in the others. A main objective of this conference series is also to give these researchers the opportunity to show how the integration of techniques from different fields can lead to interesting results on large and complex problems. Therefore, papers that actively combine, integrate, or contrast approaches from more than one of the areas are especially solicited. High quality papers from a single area are also welcome, if they are of interest to other communities involved. Application papers showcasing CP/AI/OR techniques on novel and challenging applications or experience reports on such applications are strongly encouraged.

The program committee invites submissions that include but are not limited to the following topics:

- Inference and relaxation methods: constraint propagation, cutting planes, global constraints, graph algorithms, dynamic programming, Lagrangian and convex relaxations, heuristic functions based on constraint relaxation.
- Search methods: branch and bound, intelligent backtracking, incomplete search, randomized search, portfolios, column generation, Benders decompositions or any other decomposition methods, local search, meta-heuristics.
- Integration methods: solver communication, model transformations and solver selection, parallel and distributed solving, combining machine learning with combinatorial optimization.
- Modeling methods: comparison of models, symmetry breaking, uncertainty, dominance relationships.
- Innovative applications of CP/AI/OR techniques.
- Implementation of CP/AI/OR techniques and optimization systems.

Submissions are of two types: regular papers (submitted for publication and presentation) and extended abstracts (submitted for presentation only). Outstanding regular paper submissions may be invited to be published directly in the journal Constraints through a “fast track” process.

This year’s conference will also introduce a distinguished paper award and a student paper award.

These first two days consisted mainly of reading, listening, and learning. My expectations were more than confirmed: this practical problem is significantly more complex than my typical benchmark problem. In fact, this shunting problem can be seen as a combination of five (!) more basic/general problems, such as matching, flow shop, and path finding. In addition, there are many “dirty details” (exceptions, security-related conditions, etc.), real data is incomplete, the realised situation may actually differ from the problem input, and using all of this effectively will require a significant change in the organisation. I like a good challenge!

The current algorithmic state of the art is a very smart local search approach by Roel van den Broek, which he presented last Wednesday at the International Conference on Operations Research in Berlin. I have two concrete ideas to continue from this: I’d like to improve the quality of the results of the local search by embedding it in a branch-and-bound algorithm, and I’d like to get results that are a bit more robust to changes by applying ideas from robust and stochastic optimisation, while still using the local search procedure. I’m happy I also found a good student who wants to work with me, because this may be a bit more work than just one day a week. Contact me or stay tuned for more.
