
Introduction

Boosted Statistical Relational Learners

Part of the book series: SpringerBriefs in Computer Science (BRIEFSCOMPUTER)


Abstract

There is no doubt that the role of structure and relations within data is becoming increasingly important, as illustrated by Google, Facebook, the world wide web, and similar systems. In many learning and mining tasks, information about one object can help a learner reach conclusions about other, related objects and, in turn, improve its overall performance. However, relations are difficult to represent using a fixed set of propositional features, i.e., vectors of fixed dimension, which is the standard approach within statistical machine learning and data mining. To overcome this, Statistical Relational Learning (SRL) (Getoor and Taskar 2007) studies the combination of relational learning (e.g., inductive logic programming) and statistical machine learning. By combining the power of logic and probability, such approaches can perform robust and accurate reasoning and learning about complex relational data. Most of these methods essentially use first-order logic to capture domain knowledge and soften the rules using probabilities or weights, and they can be broadly classified into directed models and undirected models. The advantage of these formulations is that they can succinctly represent probabilistic dependencies among the attributes of different related objects, leading to a compact representation of learned models.
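For concreteness, one representative undirected formulation is Markov logic, in which each first-order clause carries a real-valued weight and the probability of a possible world is determined by the weighted counts of the clauses' true groundings. The clause and weight below are only an illustration of this idea and are not taken from this chapter:

    \[
      P(X = x) \;=\; \frac{1}{Z}\,\exp\Big(\sum_i w_i\, n_i(x)\Big),
      \qquad\text{e.g.}\quad
      1.5 :\ \mathrm{Friends}(x, y) \land \mathrm{Smokes}(x) \Rightarrow \mathrm{Smokes}(y)
    \]

Here w_i is the weight of clause i, n_i(x) is the number of true groundings of clause i in the world x, and Z is the normalizing constant (partition function). Raising a clause weight makes worlds that violate the clause less probable, rather than impossible, which is what is meant by softening a logical rule.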


Notes

  1. For instance, when learning to predict if someone (say x) is popular, it is possible to use the predicate Friends in several ways. Some possible ways are Friends(x,y), Friends(y,x), Friends(x,“Erdos”) and Friends(“Erdos”,x). Of course, the constant “Erdos” can be replaced with all possible constants in the database.
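    As a small sketch of how such candidate uses of Friends could be enumerated in practice (the function name and constant list below are hypothetical and not code from this book):

        # Illustrative sketch only: names and data are hypothetical.
        def candidate_friends_literals(target_var, constants, fresh_var="y"):
            """Enumerate candidate uses of the binary predicate Friends/2
            when learning a clause about the target variable, e.g. popular(x)."""
            literals = [
                ("Friends", target_var, fresh_var),  # Friends(x, y)
                ("Friends", fresh_var, target_var),  # Friends(y, x)
            ]
            for c in constants:  # every constant appearing in the database
                literals.append(("Friends", target_var, c))  # e.g. Friends(x, "Erdos")
                literals.append(("Friends", c, target_var))  # e.g. Friends("Erdos", x)
            return literals

        for lit in candidate_friends_literals("x", ["Erdos"]):
            print(lit)

    Running this prints the four kinds of literals listed above, with the variable-constant forms repeated once per constant in the database.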

Author information

Correspondence to Sriraam Natarajan.


Copyright information

© 2014 The Author(s)

About this chapter

Cite this chapter

Natarajan, S., Kersting, K., Khot, T., Shavlik, J. (2014). Introduction. In: Boosted Statistical Relational Learners. SpringerBriefs in Computer Science. Springer, Cham. https://doi.org/10.1007/978-3-319-13644-8_1


  • DOI: https://doi.org/10.1007/978-3-319-13644-8_1

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-319-13643-1

  • Online ISBN: 978-3-319-13644-8
