Ephemeral Resource Constraints in Optimization

Chapter in: Evolutionary Constrained Optimization

Part of the book series: Infosys Science Foundation Series (ISFSASE)

Abstract

Constraints in optimization traditionally come in two types familiar to most readers: hard and soft. Hard constraints delineate absolutely between feasible and infeasible solutions, whereas soft constraints essentially specify additional objectives. In this chapter, we describe a third type of constraint, much less familiar and only investigated recently, which we call ephemeral resource constraints (ERCs). ERCs differ from the other constraints in three major ways. (i) The constraints are dynamic or temporary (i.e., they may be active or inactive) and occur only during optimization—they do not affect the feasibility of final solutions. (ii) Solutions violating the constraints cannot be evaluated on the objective function—in fact, that is their main defining property. (iii) The constraints that are active are usually a function of previously evaluated solutions, bringing a time-linkage aspect into the optimization. We explain with examples how these constraints arise in real-world optimization problems, especially when solution evaluation depends on experimental processes (i.e., in “closed-loop optimization”). Using a theoretical model based on Markov chains, the effects of these constraints on evolutionary search, e.g., drift effects on the search direction, are described. Next, a number of strategies for coping with ERCs are summarized, and evidence for their robustness is provided. In the final section, we look to the future and consider the many open questions in this new area.
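
To make the three defining properties above concrete, the following minimal sketch shows a steady-state EA in which a solution violating a currently active resource constraint cannot be evaluated, and in which activation of the constraint is itself triggered by previously evaluated solutions. The one-bit "commitment" constraint, the skip-on-violation policy, and all names are illustrative assumptions, not the formulation developed in the chapter.

```python
import random

DIM = 10        # bit-string length (illustrative)
POP_SIZE = 20
BUDGET = 500    # total time steps available for evaluations


def fitness(x):
    """Placeholder objective: OneMax (any benchmark could stand in here)."""
    return sum(x)


class CommitmentERC:
    """Toy 'commitment' ERC: once a solution with x[0] == 1 is evaluated,
    only solutions with x[0] == 1 can be evaluated for the next `duration`
    time steps, so the active constraint depends on past evaluations."""

    def __init__(self, duration=5):
        self.duration = duration
        self.active_until = -1

    def evaluable(self, x, t):
        # If no commitment is active, anything can be evaluated;
        # otherwise only solutions honouring the commitment can be.
        return t > self.active_until or x[0] == 1

    def update(self, x, t):
        # Evaluating a committing solution starts (or extends) the commitment period.
        if x[0] == 1:
            self.active_until = t + self.duration


def run(seed=1):
    random.seed(seed)
    erc = CommitmentERC()
    pop = [[random.randint(0, 1) for _ in range(DIM)] for _ in range(POP_SIZE)]
    fit = [fitness(x) for x in pop]   # assume the initial population could be evaluated freely
    for t in range(BUDGET):
        a, b = random.sample(range(POP_SIZE), 2)
        child = [random.choice(pair) for pair in zip(pop[a], pop[b])]  # uniform crossover
        i = random.randrange(DIM)
        child[i] = 1 - child[i]                                        # bit-flip mutation
        if not erc.evaluable(child, t):
            continue          # ERC active: this time step is lost, no evaluation possible
        f = fitness(child)
        erc.update(child, t)  # the evaluation itself may activate the ERC (time linkage)
        worst = min(range(POP_SIZE), key=lambda j: fit[j])
        if f >= fit[worst]:
            pop[worst], fit[worst] = child, f
    return max(fit)


if __name__ == "__main__":
    print(run())   # best OneMax value found within the evaluation budget
```

Note that the final returned solution is feasible regardless of the ERC: the constraint only restricted which solutions could be evaluated, and when, during the run.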


Notes

  1.

    When an EA is used, closed-loop optimization may also be referred to as evolutionary experimentation (Rechenberg 2000) or experimental evolution.

  2.

    Indeed, we can consider any optimization problem or benchmark.

  3.

    We leave out the variables \(t_{\text{ctf}}^{\text{start}}, t_{\text{ctf}}^{\text{end}}, c_{\text{order}}, c_{\text{time\_step}},\) and \(C\) from \(commCompERC(\ldots)\) for ease of presentation. They will be specified where appropriate.

  4.

    Note that for an EA optimizing a function, the number of selection steps shown on the x-axes of Fig. 4.6 is equivalent to the number of function evaluations performed.

  5.

    We obtain the zigzag-shaped line for SSGA (rri) during the constraint time frame because \(c_t(B)\) is plotted after each time step, which here consists of one selection step. For GGA the change in \(c_t(B)\) is smooth because a time step consists of \(\mu\) selection steps.

  6.

    For D-MAB we set the threshold parameter to \(\lambda _{\text {PH}}=0.1\), the tolerance parameter to \(\delta =0.01\), and the scaling factor to \(C=1\).

  7.

    RL-EA also employed the \(\epsilon\)-greedy action selection method (\(\epsilon =0.1\)), optimistic initial values for the action-value estimates (a generic sketch of this selection rule is given after these notes), and replacing eligibility traces, with the trace reset to 0 at the beginning of each EA run. The decay factor was set to \(\lambda =1\), the discount factor to \(\gamma =1\), and the learning rate to \(\alpha =0.1\).

  8.

    The instance considered is a uniform random 3-SAT problem and can be downloaded online at http://people.cs.ubc.ca/~hoos/SATLIB/benchm.html; the name of the instance is “uf50-218/uf50-01.cnf”. The instance consists of 50 variables and 218 clauses and is satisfiable. We treat this 3-SAT instance as a MAX-SAT optimization problem, with fitness calculated as the proportion of satisfied clauses (a sketch of this fitness computation also follows these notes).
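
The following is a minimal, generic sketch of the \(\epsilon\)-greedy selection rule with optimistic initial values mentioned in footnote 7. The constant step-size update, the reward interface, and all names are assumptions; the full RL-EA additionally uses eligibility traces, which are omitted here.

```python
import random


def make_epsilon_greedy(n_actions, epsilon=0.1, alpha=0.1, optimistic_value=1.0):
    """Generic epsilon-greedy selector with optimistic initialization.
    Optimistic initial estimates encourage early exploration of every action."""
    q = [optimistic_value] * n_actions   # action-value estimates

    def select():
        if random.random() < epsilon:                      # explore
            return random.randrange(n_actions)
        return max(range(n_actions), key=lambda a: q[a])   # exploit the current best

    def update(action, reward):
        q[action] += alpha * (reward - q[action])          # constant step-size update

    return select, update, q


# Example usage with four hypothetical actions (e.g., operator choices):
select, update, q = make_epsilon_greedy(n_actions=4)
a = select()             # pick an action
update(a, reward=0.7)    # feed back the observed reward
```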
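
A minimal sketch of the MAX-SAT fitness used in footnote 8, i.e., the proportion of satisfied clauses. The signed-integer clause encoding follows the DIMACS convention; the example clauses are invented.

```python
def maxsat_fitness(assignment, clauses):
    """Fitness = proportion of satisfied clauses.
    `assignment` maps a 1-based variable index to True/False; each clause is a
    list of nonzero ints, a negative literal meaning the negated variable."""
    satisfied = 0
    for clause in clauses:
        if any((lit > 0) == assignment[abs(lit)] for lit in clause):
            satisfied += 1
    return satisfied / len(clauses)


# Example: (x1 or not x2) and (x2 or x3)
clauses = [[1, -2], [2, 3]]
assignment = {1: False, 2: True, 3: False}
print(maxsat_fitness(assignment, clauses))  # 0.5: only the second clause is satisfied
```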

References

  • Allmendinger R (2012) Tuning evolutionary search for closed-loop optimization. PhD thesis, Department of Computer Science, University of Manchester, UK

  • Allmendinger R, Knowles J (2010) On-line purchasing strategies for an evolutionary algorithm performing resource-constrained optimization. In: Proceedings of parallel problem solving from nature, pp 161–170

  • Allmendinger R, Knowles J (2011) Policy learning in resource-constrained optimization. In: Proceedings of the genetic and evolutionary computation conference, pp 1971–1978

  • Allmendinger R, Knowles J (2013) On handling ephemeral resource constraints in evolutionary search. Evol Comput 21(3):497–531

  • Auger A, Doerr B (2011) Theory of randomized search heuristics. World Scientific, Singapore

  • Bäck T, Knowles J, Shir OM (2010) Experimental optimization by evolutionary algorithms. In: Proceedings of the genetic and evolutionary computation conference (companion), pp 2897–2916

  • Bedau MA (2010) Coping with complexity: machine learning optimization of highly synergistic biological and biochemical systems. In: Keynote talk at the international conference on genetic and evolutionary computation

  • Borodin A, El-Yaniv R (1998) Online computation and competitive analysis. Cambridge University Press, Cambridge

  • Bosman PAN (2005) Learning, anticipation and time-deception in evolutionary online dynamic optimization. In: Proceedings of genetic and evolutionary computation conference, pp 39–47

  • Bosman PAN, Poutré HL (2007) Learning and anticipation in online dynamic optimization with evolutionary algorithms: the stochastic case. In: Proceedings of genetic and evolutionary computation conference, pp 1165–1172

  • Branke J (2001) Evolutionary optimization in dynamic environments. Kluwer Academic Publishers, Dordrecht

  • Caschera F, Gazzola G, Bedau MA, Moreno CB, Buchanan A, Cawse J, Packard N, Hanczyc MM (2010) Automated discovery of novel drug formulations using predictive iterated high throughput experimentation. PLoS ONE 5(1):e8546

  • Chen T, He J, Sun G, Chen G, Yao X (2009) A new approach for analyzing average time complexity of population-based evolutionary algorithms on unimodal problems. IEEE Trans Syst Man Cybern B 39(5):1092–1106

  • Coello CAC (2002) Theoretical and numerical constraint-handling techniques used with evolutionary algorithms: a survey of the state of the art. Comput Methods Appl Mech Eng 191(11–12):1245–1287

  • Costa LD, Fialho A, Schoenauer M, Sebag M (2008) Adaptive operator selection with dynamic multi-armed bandits. In: Proceedings of genetic and evolutionary computation conference, pp 913–920

  • Davis TE, Principe JC (1993) A Markov chain framework for the simple genetic algorithm. Evol Comput 1(3):269–288

  • Doob JL (1953) Stochastic processes. Wiley, New York

  • Finkel DE, Kelley CT (2009) Convergence analysis of sampling methods for perturbed Lipschitz functions. Pac J Optim 5:339–350

  • Goldberg DE, Segrest P (1987) Finite Markov chain analysis of genetic algorithms. In: Proceedings of the international conference on genetic algorithms, pp 1–8

  • Hartland C, Gelly S, Baskiotis N, Teytaud O, Sebag M (2006) Multi-armed bandits, dynamic environments and meta-bandits. In: NIPS workshop online trading of exploration and exploitation

  • Hartland C, Baskiotis N, Gelly S, Sebag M, Teytaud O (2007) Change point detection and meta-bandits for online learning in dynamic environments. In: CAp, pp 237–250

  • He J, Yao X (2002) From an individual to a population: an analysis of the first hitting time of population-based evolutionary algorithms. IEEE Trans Evol Comput 6(5):495–511

  • Herdy M (1997) Evolutionary optimization based on subjective selection-evolving blends of coffee. In: European congress on intelligent techniques and soft computing, pp 640–644

  • Holland JH (1975) Adaptation in natural and artificial systems. MIT Press, Boston

  • Horn J (1993) Finite Markov chain analysis of genetic algorithms with niching. In: Proceedings of the international conference on genetic algorithms, pp 110–117

  • Jin Y (2011) Surrogate-assisted evolutionary computation: recent advances and future challenges. Swarm Evol Comput 1(2):61–70

  • Judson RS, Rabitz H (1992) Teaching lasers to control molecules. Phys Rev Lett 68(10):1500–1503

  • Kauffman S (1989) Adaptation on rugged fitness landscapes. In: Lecture notes in the sciences of complexity, pp 527–618

  • Kaufman L, Rousseeuw PJ (1990) Finding groups in data: an introduction to cluster analysis. Wiley, New York

  • King RD, Whelan KE, Jones FM, Reiser PGK, Bryant CH, Muggleton SH, Kell DB, Oliver SG (2004) Functional genomic hypothesis generation and experimentation by a robot scientist. Nature 427:247–252

  • Klockgether J, Schwefel H-P (1970) Two-phase nozzle and hollow core jet experiments. In: Engineering aspects of magnetohydrodynamics, pp 141–148

  • Knowles J (2009) Closed-loop evolutionary multiobjective optimization. IEEE Comput Intell Mag 4(3):77–91

  • Lehre PK (2011) Fitness-levels for non-elitist populations. In: Proceedings of the conference on genetic and evolutionary computation, pp 2075–2082

  • Liepins GE, Potter WD (1991) A genetic algorithm approach to multiple-fault diagnosis. In: Handbook of genetic algorithms, pp 237–250

  • Mahfoud SW (1991) Finite Markov chain models of an alternative selection strategy for the genetic algorithm. Complex Syst 7:155–170

  • Michalewicz Z, Schoenauer M (1996) Evolutionary algorithms for constrained parameter optimization problems. Evol Comput 4(1):1–32

  • Nakama T (2008) Theoretical analysis of genetic algorithms in noisy environments based on a Markov model. In: Proceedings of the genetic and evolutionary computation conference, pp 1001–1008

  • Nguyen TT (2010) Continuous dynamic optimisation using evolutionary algorithms. PhD thesis, University of Birmingham

  • Nix A, Vose MD (1992) Modeling genetic algorithms with Markov chains. Ann Math Artif Intell 5:79–88

  • Nocedal J, Wright SJ (1999) Numerical optimization. Springer, New York

  • Norris JR (1998) Markov chains (Cambridge Series in Statistical and Probabilistic Mathematics). Cambridge University Press, Cambridge

  • O’Hagan S, Dunn WB, Brown M, Knowles J, Kell DB (2005) Closed-loop, multiobjective optimization of analytical instrumentation: gas chromatography/time-of-flight mass spectrometry of the metabolomes of human serum and of yeast fermentations. Anal Chem 77(1):290–303

  • O’Hagan S, Dunn WB, Knowles J, Broadhurst D, Williams R, Ashworth JJ, Cameron M, Kell DB (2007) Closed-loop, multiobjective optimization of two-dimensional gas chromatography/mass spectrometry for serum metabolomics. Anal Chem 79(2):464–476

  • Pettinger JE, Everson RM (2003) Controlling genetic algorithms with reinforcement learning. Technical report, The University of Exeter

  • Rechenberg I (2000) Case studies in evolutionary experimentation and computation. Comput Methods Appl Mech Eng 2–4(186):125–140

  • Reeves CR, Rowe JE (2003) Genetic algorithms—principles and perspectives: a guide to GA theory. Kluwer Academic Publishers, Boston

  • Rummery GA, Niranjan M (1994) On-line Q-learning using connectionist systems. Technical report CUED/F-INFENG/TR 166, Cambridge University Engineering Department

  • Schwefel H-P (1968) Experimentelle Optimierung einer Zweiphasendüse, Teil 1. AEG Research Institute Project MHD-Staustrahlrohr 11.034/68, Technical report 35, Berlin

  • Schwefel H-P (1975) Evolutionsstrategie und numerische Optimierung. PhD thesis, Technical University of Berlin

  • Shir O, Bäck T (2009) Experimental optimization by evolutionary algorithms. In: Tutorial at the international conference on genetic and evolutionary computation

  • Shir OM (2008) Niching in derandomized evolution strategies and its applications in quantum control: a journey from organic diversity to conceptual quantum designs. PhD thesis, University of Leiden

  • Small BG, McColl BW, Allmendinger R, Pahle J, López-Castejón G, Rothwell NJ, Knowles J, Mendes P, Brough D, Kell DB (2011) Efficient discovery of anti-inflammatory small molecule combinations using evolutionary computing. Nat Chem Biol (to appear)

  • Sutton RS, Barto AG (1998) Reinforcement learning: an introduction. MIT Press, Cambridge

  • Syswerda G (1989) Uniform crossover in genetic algorithms. In: Proceedings of the international conference on genetic algorithms, pp 2–9

  • Syswerda G (1991) A study of reproduction in generational and steady state genetic algorithms. In: Foundations of genetic algorithms, pp 94–101

  • Thompson A (1996) Hardware evolution: automatic design of electronic circuits in reconfigurable hardware by artificial evolution. PhD thesis, University of Sussex

  • Vaidyanathan S, Broadhurst DI, Kell DB, Goodacre R (2003) Explanatory optimization of protein mass spectrometry via genetic search. Anal Chem 75(23):6679–6686

  • Vose MD, Liepins GE (1991) Punctuated equilibria in genetic search. Complex Syst 5:31–44

  • Zhang W (2001) Phase transitions and backbones of 3-SAT and maximum 3-SAT. In: Proceedings of the international conference on principles and practice of constraint programming, pp 153–167


Author information

Correspondence to Richard Allmendinger.



Copyright information

© 2015 Springer India

About this chapter

Cite this chapter

Allmendinger, R., Knowles, J. (2015). Ephemeral Resource Constraints in Optimization. In: Datta, R., Deb, K. (eds) Evolutionary Constrained Optimization. Infosys Science Foundation Series (ISFSASE). Springer, New Delhi. https://doi.org/10.1007/978-81-322-2184-5_4


  • DOI: https://doi.org/10.1007/978-81-322-2184-5_4

  • Published:

  • Publisher Name: Springer, New Delhi

  • Print ISBN: 978-81-322-2183-8

  • Online ISBN: 978-81-322-2184-5

  • eBook Packages: Engineering, Engineering (R0)
