
Learning with Delayed Rewards in Ant Systems for the Job-Shop Scheduling Problem

  • Conference paper

Part of the book series: Lecture Notes in Computer Science ((LNAI,volume 1424))

Abstract

We apply the idea of learning with delayed rewards to improve the performance of the Ant System. We discuss different mechanisms for handling delayed rewards in the Ant Algorithm (AA). The AA was first applied, in its classical form, to the Job-Shop Scheduling Problem (JSP) by A. Colorni and M. Dorigo. We adapt the idea of evolving the algorithm itself using methods from the learning process, and we emphasize the roles of co-operation and the stigmergy effect in this algorithm. Finally, we propose optimal values for the parameters used in this version of the AA, derived from our experiments.
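The delayed-reward mechanism the abstract alludes to can be illustrated with a minimal Ant System sketch on a toy JSP instance. This is not the authors' implementation: the instance, the parameter values (`ALPHA`, `RHO`, `Q`), and all names below are illustrative assumptions. It only shows the core idea that pheromone is deposited after a complete schedule has been built and evaluated, so the reward is delayed until the makespan of the whole solution is known.

```python
import random

# Toy JSP instance (illustrative): JOBS[j] is job j's list of
# (machine, processing_time) operations, which must run in order.
JOBS = [
    [(0, 3), (1, 2)],   # job 0: machine 0, then machine 1
    [(1, 4), (0, 1)],   # job 1: machine 1, then machine 0
]

ALPHA, RHO, Q = 1.0, 0.5, 10.0   # pheromone weight, evaporation rate, reward scale

def makespan(order):
    """Evaluate an operation order (one job index per operation) by
    simulating job- and machine-availability times."""
    next_op = [0] * len(JOBS)
    job_free = [0] * len(JOBS)
    mach_free = {}
    for j in order:
        m, p = JOBS[j][next_op[j]]
        start = max(job_free[j], mach_free.get(m, 0))
        job_free[j] = start + p
        mach_free[m] = start + p
        next_op[j] += 1
    return max(job_free)

def build_order(tau, rng):
    """One ant builds a feasible operation order, choosing the next job
    with probability proportional to pheromone tau[step][job]."""
    remaining = [len(ops) for ops in JOBS]
    order = []
    for step in range(sum(remaining)):
        cand = [j for j in range(len(JOBS)) if remaining[j] > 0]
        weights = [tau[step][j] ** ALPHA for j in cand]
        j = rng.choices(cand, weights=weights)[0]
        order.append(j)
        remaining[j] -= 1
    return order

def ant_system(n_ants=10, n_iters=30, seed=1):
    rng = random.Random(seed)
    n_steps = sum(len(ops) for ops in JOBS)
    tau = [[1.0] * len(JOBS) for _ in range(n_steps)]
    best_order, best_ms = None, float("inf")
    for _ in range(n_iters):
        solutions = [build_order(tau, rng) for _ in range(n_ants)]
        # Evaporation, then the DELAYED reward: pheromone is reinforced
        # only after each complete schedule is scored by its makespan.
        for step in range(n_steps):
            for j in range(len(JOBS)):
                tau[step][j] *= (1 - RHO)
        for order in solutions:
            ms = makespan(order)
            if ms < best_ms:
                best_order, best_ms = order, ms
            for step, j in enumerate(order):
                tau[step][j] += Q / ms
    return best_order, best_ms
```

On this two-job instance the machine-1 workload alone is 6 time units, so no schedule can finish earlier than makespan 6, and the sketch converges to such a schedule.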



References

  1. Boryczka M., Boryczka U.: Generative policies in ant system. In: Proceedings of the EUFIT’97 Conference, Aachen, September, 1997, 857–861.


  2. Colorni A., Dorigo M., Maniezzo V.: An Investigation of some Properties of an Ant Algorithm. In: Proceedings of the Parallel Problem Solving from Nature Conference (PPSN 92), Brussels, Belgium, Elsevier Publishing, 1992.


  3. Colorni A., Dorigo M., Maniezzo V., Trubian M.: Ant System for Job-Shop Scheduling. Belgian Journal of Operations Research, Statistics and Computer Science, 1994.


  4. Dorigo M., Bersini H.: A comparison of Q-learning and classifier systems. In: Proceedings of From Animals to Animats, Third International Conference on Simulation of Adaptive Behavior (SAB 94), Brighton, UK, August 8–12, 1994.


  5. Gambardella L.M., Dorigo M.: Ant-Q: A Reinforcement Learning approach to the traveling salesman problem. In: Proceedings of ML-95, Twelfth International Conference on Machine Learning, Morgan Kaufmann Publishers, 1995, 252–260.


  6. Graham R.L., Lawler E.L., Lenstra J.K., Rinnooy Kan A.H.G.: Optimization and approximation in deterministic sequencing and scheduling: a survey. Annals of Discrete Mathematics, 5 (1979), 287–326.


  7. Michalewicz Z.: Genetic Algorithms + Data Structures = Evolution Programs. Springer-Verlag, Berlin, 1996.


  8. Singh S., Norvig P., Cohn D.: Agents and Reinforcement Learning. Dr. Dobb's Journal, March 1997.




Copyright information

© 1998 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Boryczka, U. (1998). Learning with Delayed Rewards in Ant Systems for the Job-Shop Scheduling Problem. In: Polkowski, L., Skowron, A. (eds) Rough Sets and Current Trends in Computing. RSCTC 1998. Lecture Notes in Computer Science(), vol 1424. Springer, Berlin, Heidelberg. https://doi.org/10.1007/3-540-69115-4_37


  • DOI: https://doi.org/10.1007/3-540-69115-4_37


  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-540-64655-6

  • Online ISBN: 978-3-540-69115-0

  • eBook Packages: Springer Book Archive
