
Adversarial Prediction: Lossless Predictors and Fractal Like Adversaries

Conference paper

Part of the book series: Lecture Notes in Computer Science ((LNTCS,volume 7748))

Abstract

In this talk we will look at the classical prediction game where the adversary (or nature) produces a sequence of bits and a prediction algorithm tries to predict the future bit(s) from the past bits. This is like gambling on the future bits: the algorithm risks making mistakes while shooting for profit from right predictions. Say the algorithm gets a payoff of +1 on a right prediction and −1 on a wrong prediction (it may also make fractional bets c ≤ 1, in which case its payoff is +c or −c). We will see an algorithm [1] that performs well while almost never risking a net loss, where a loss is said to occur when the number of wrong predictions exceeds the number of right ones. Our algorithm incurs at most an exponentially small loss \(e^{-\Omega(\epsilon^2 T)}\) over T bits on any sequence (where ε is a constant parameter). Further, compared with the payoff that would have been achieved by predicting the majority bit (in hindsight), our algorithm's payoff is lower by at most O(εT) (a gap commonly known as regret). We will also see experimental results on how these algorithms perform on stock data. Our algorithms build upon several classical works on the experts problem [2–4].
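To make the payoff model concrete, here is a minimal, hypothetical sketch (not the algorithm of [1]) of a standard multiplicative-weights predictor with two constant experts ("always predict 0" and "always predict 1"), compared against the hindsight-majority benchmark. All names and the choice of ε are illustrative assumptions. Note that, unlike the lossless predictor of [1], this baseline can incur Θ(εT) net loss on an adversarial (e.g. alternating) sequence; that gap is exactly what the talk's algorithm addresses.

```python
import math

def mw_predict(bits, eps=0.1):
    """Multiplicative-weights prediction over two constant experts.
    The fractional bet c in [-1, 1] is the weighted vote of the experts
    (negative sign = predict 0, positive = predict 1); the payoff each
    round is +|c| on a right prediction and -|c| on a wrong one."""
    w = [1.0, 1.0]                 # weights of the "0" and "1" experts
    payoff = 0.0
    for b in bits:
        c = (w[1] - w[0]) / (w[0] + w[1])   # bet in [-1, 1]
        outcome = 1 if b == 1 else -1
        payoff += c * outcome               # +c or -c (scaled by |c|)
        # reward the expert that was right, punish the one that was wrong
        w[0] *= math.exp(-eps * outcome)
        w[1] *= math.exp(eps * outcome)
    return payoff

def majority_payoff(bits):
    """Payoff of betting c = 1 on the hindsight-majority bit every round."""
    ones = sum(bits)
    return abs(2 * ones - len(bits))
```

On a heavily biased sequence this predictor's payoff trails `majority_payoff` by only an additive term of roughly O(εT + (ln 2)/ε), matching the regret notion in the abstract.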

We will also see what kind of sequences are best from the adversary’s perspective. We will show that under a certain formulation of predictive payoff it is best for the adversary to generate a “fractal like” sequence [5].
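The abstract does not spell out the fractal construction of [5]. As a purely illustrative stand-in, the sketch below generates the Thue–Morse sequence, a classic self-similar binary sequence built by a copy-and-invert recursion; this is an assumption for illustration only, not the adversarial sequence of the paper.

```python
def thue_morse(n):
    """First 2**n bits of the Thue-Morse sequence: a self-similar
    sequence defined by S_0 = [0] and S_{k+1} = S_k + complement(S_k)."""
    s = [0]
    for _ in range(n):
        s = s + [1 - b for b in s]
    return s

# Every prefix of length 2**k is exactly balanced, so the hindsight-majority
# payoff |#ones - #zeros| is 0 at those horizons -- one way a self-similar
# sequence can keep a predictor's benchmark payoff low at every scale.
```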


References

  1. Kapralov, M., Panigrahy, R.: Prediction strategies without loss. In: Proceedings of NIPS (2011)
  2. Even-Dar, E., Kearns, M., Mansour, Y., Wortman, J.: Regret to the best vs. regret to the average. Machine Learning 72, 21–37 (2008)
  3. Cover, T.: Behaviour of sequential predictors of binary sequences. In: Transactions of the Fourth Prague Conference on Information Theory, Statistical Decision Functions, Random Processes (1965)
  4. Littlestone, N., Warmuth, M.: The weighted majority algorithm. In: FOCS (1989)
  5. Poppat, P., Panigrahy, R.: Fractal structures in adversarial prediction (manuscript, 2012)


Copyright information

© 2013 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Panigrahy, R. (2013). Adversarial Prediction: Lossless Predictors and Fractal Like Adversaries. In: Ghosh, S.K., Tokuyama, T. (eds) WALCOM: Algorithms and Computation. WALCOM 2013. Lecture Notes in Computer Science, vol 7748. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-36065-7_2


  • DOI: https://doi.org/10.1007/978-3-642-36065-7_2

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-642-36064-0

  • Online ISBN: 978-3-642-36065-7
