Proactive-DIEL in Evolving Referral Networks

  • Ashiqur R. KhudaBukhsh
  • Jaime G. Carbonell
  • Peter J. Jansen
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 10207)

Abstract

Distributed learning in expert referral networks is a new active learning paradigm in which experts—humans or automated agents—solve problems when they can, or refer them to colleagues with more appropriate expertise. Recent work augmented the basic learning-to-refer method with proactive skill posting, where experts may report their top skills to their colleagues, and proposed a modified algorithm, proactive-DIEL (Distributed Interval Estimation Learning), that exploits such one-time postings instead of an uninformed prior. This work extends the method in three main directions: (1) proactive-DIEL is shown to work on a referral network of automated agents, namely SAT solvers; (2) proactive-DIEL's reward mechanism is extended, with appropriate modifications, to another referral-learning algorithm, \(\epsilon\)-Greedy; and (3) the method is shown to be robust with respect to evolving networks in which experts join or drop out, requiring the learning method to re-estimate referral expertise. In all cases, the proposed method outperforms the state of the art.
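
To make the referral mechanics concrete, the following is a minimal sketch of interval-estimation-based referral with an informed prior, in the spirit of proactive-DIEL, together with an \(\epsilon\)-Greedy variant. It is a sketch under stated assumptions, not the paper's exact formulation: the names ColleagueEstimate, refer, and refer_eps_greedy, the [0, 1] reward scale, the prior-seeding scheme, and the 95% confidence-interval form are all illustrative choices introduced here.

    import math
    import random

    class ColleagueEstimate:
        """Running reward estimate for one colleague on one topic.

        A proactively posted top skill can seed an informed prior via
        prior_mean; otherwise the default acts as an uninformed prior.
        (Prior values here are illustrative assumptions.)
        """
        def __init__(self, prior_mean=0.5, prior_count=1):
            self.count = prior_count
            self.total = prior_mean * prior_count
            self.sq_total = (prior_mean ** 2) * prior_count

        def update(self, reward):
            # Fold an observed reward in [0, 1] into the running estimate.
            self.count += 1
            self.total += reward
            self.sq_total += reward ** 2

        def upper_bound(self):
            # Upper end of an approximate 95% confidence interval on the mean.
            mean = self.total / self.count
            var = max(self.sq_total / self.count - mean ** 2, 0.0)
            return mean + 1.96 * math.sqrt(var / self.count)

    def refer(estimates):
        # Interval-estimation rule: refer to the colleague whose
        # confidence-interval upper bound is highest (optimism under uncertainty).
        return max(estimates, key=lambda name: estimates[name].upper_bound())

    def refer_eps_greedy(estimates, eps=0.1):
        # epsilon-Greedy rule: explore a random colleague with probability eps,
        # otherwise exploit the highest estimated mean reward.
        if random.random() < eps:
            return random.choice(list(estimates))
        return max(estimates, key=lambda n: estimates[n].total / estimates[n].count)

A short usage example, again with hypothetical colleague names:

    # Colleague "B" proactively posted a top skill, so it starts with a higher prior.
    colleagues = {"A": ColleagueEstimate(), "B": ColleagueEstimate(prior_mean=0.8)}
    chosen = refer(colleagues)
    colleagues[chosen].update(1.0)  # referral succeeded; reinforce the estimate

Under this scheme, an expert who drops out of an evolving network is simply removed from the estimate table, and a new arrival enters with a fresh prior, which is one plausible way the learner can recover referral expertise.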

Keywords

Active learning · Evolving referral network · Proactive skill posting

Acknowledgements

This research is partially funded by National Science Foundation grant EAGER-1649225.

Copyright information

© Springer International Publishing AG 2017

Authors and Affiliations

  • Ashiqur R. KhudaBukhsh¹ (corresponding author)
  • Jaime G. Carbonell¹
  • Peter J. Jansen¹

  1. Carnegie Mellon University, Pittsburgh, USA
