Abstract
In the previous chapter we saw how the MED approach to learning combines generative modeling with discriminative methods such as SVMs. In this chapter we explore extensions of this framework spanning a wide variety of learning scenarios. One recurring theme is the introduction of additional (possibly intermediate) variables into the discriminant function \( \mathcal{L}(X;\Theta) \), and the solution for an augmented distribution \( P(\Theta, \ldots) \) involving these new terms (Figure 4.1). The resulting partition function typically involves more integrals, but as long as it remains analytic, the number of Lagrange multipliers and the complexity of the optimization stay essentially unchanged, just as when we introduced slack variables in Section 3. Once again, as we add more distributions to the prior, we must be careful to balance their competing goals (i.e., their variances) so that we still derive meaningful information from each component of the aggregate prior (the model prior, the margin prior, and the many further priors we will introduce shortly).
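To make the claim about the partition function concrete, the following is a brief sketch of the standard MED solution form, with the margin variables \( \gamma_t \) standing in for the kind of additional variables folded into the augmented distribution; here \( X_t \) are the training examples, \( y_t \in \{\pm 1\} \) their labels, \( P_0 \) the aggregate prior, and \( \lambda_t \) the Lagrange multipliers of the classification constraints (notation follows the usual MED formulation):

\[
P(\Theta, \gamma) \;=\; \frac{1}{Z(\lambda)}\, P_{0}(\Theta, \gamma)\,
\exp\!\Big( \sum_{t} \lambda_{t} \big[\, y_{t}\, \mathcal{L}(X_{t};\Theta) - \gamma_{t} \,\big] \Big),
\qquad
Z(\lambda) \;=\; \int P_{0}(\Theta, \gamma)\,
\exp\!\Big( \sum_{t} \lambda_{t} \big[\, y_{t}\, \mathcal{L}(X_{t};\Theta) - \gamma_{t} \,\big] \Big)\, d\Theta\, d\gamma .
\]

If the aggregate prior factorizes, \( P_{0}(\Theta,\gamma) = P_{0}(\Theta) \prod_{t} P_{0}(\gamma_{t}) \), the partition function factorizes as well, \( Z(\lambda) = Z_{\Theta}(\lambda) \prod_{t} Z_{\gamma_{t}}(\lambda_{t}) \): each additional prior contributes an extra (ideally closed-form) integral to \( Z \), yet the dual objective \( J(\lambda) = -\log Z(\lambda) \) still carries exactly one Lagrange multiplier \( \lambda_{t} \geq 0 \) per classification constraint.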
"Each problem that I solved became a rule which served afterwards to solve other problems."
(René Descartes, 1596-1650)
About this chapter
Jebara, T. (2004). Extensions to MED. In: Machine Learning. The International Series in Engineering and Computer Science, vol 755. Springer, Boston, MA. https://doi.org/10.1007/978-1-4419-9011-2_4. Print ISBN: 978-1-4613-4756-9. Online ISBN: 978-1-4419-9011-2. © 2004 Springer Science+Business Media New York.