A Generalization of Forward-Backward Algorithm

  • Ai Azuma
  • Yuji Matsumoto
Part of the Lecture Notes in Computer Science book series (LNCS, volume 5781)

Abstract

Structured prediction has become very important in recent years. A simple but notable class of structured prediction is prediction over sequences, so-called sequential labeling. Sequential labeling often requires a summation over all possible output sequences, for example when estimating the parameters of a probabilistic model. Computing such a summation directly from its definition is infeasible in practice. The ordinary forward-backward algorithm provides an efficient way to compute it, but it applies only to a limited class of summations. In this paper, we propose a generalization of the forward-backward algorithm that can compute a much broader class of summations than the existing forward-backward algorithms. We show that this generalization subsumes several calculations required in past studies, and we discuss further possibilities it opens up.
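
For concreteness, the following is a minimal sketch, not taken from the paper, of the kind of summation the ordinary forward algorithm handles for a linear-chain model: the log of the sum, over all K^T label sequences, of the exponentiated sequence score. The function name and the particular parameterization (per-position emission scores plus a transition matrix) are illustrative assumptions.

    import itertools

    import numpy as np
    from scipy.special import logsumexp


    def forward_log_partition(emissions, transitions):
        """Log of the sum over all label sequences of the exponentiated score.

        emissions:   (T, K) array; emissions[t, y] is the score of label y at position t.
        transitions: (K, K) array; transitions[y, z] is the score of the transition y -> z.
        """
        T, K = emissions.shape
        alpha = emissions[0].copy()  # forward scores after position 0
        for t in range(1, T):
            # alpha_new[z] = logsumexp_y(alpha[y] + transitions[y, z]) + emissions[t, z]
            alpha = logsumexp(alpha[:, None] + transitions, axis=0) + emissions[t]
        return logsumexp(alpha)  # log Z: a sum over K**T sequences in O(T * K**2) time


    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        T, K = 4, 3
        em, tr = rng.normal(size=(T, K)), rng.normal(size=(K, K))
        # Brute force over all K**T sequences agrees with the forward recursion.
        brute = logsumexp([
            em[np.arange(T), s].sum() + sum(tr[s[i], s[i + 1]] for i in range(T - 1))
            for s in itertools.product(range(K), repeat=T)
        ])
        print(forward_log_partition(em, tr), brute)

The paper's contribution is a generalization of this recursion to a broader family of summations over the same set of output sequences; the sketch above only illustrates the baseline computation being generalized.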

Keywords

Discrete Fourier Transform · Directed Path · Directed Acyclic Graph · Conditional Random Field · Output Sequence


Copyright information

© Springer-Verlag Berlin Heidelberg 2009

Authors and Affiliations

  • Ai Azuma (1)
  • Yuji Matsumoto (1)
  1. Nara Institute of Science and Technology, Japan