A glimpse of Semantic Web trust
Trust management plays a significant role in the Semantic Web: it supports combining authoritative information, tailoring services, and increasing data security and user privacy. It helps people overcome mistrust and fear of risk when selecting services, and it gives users confidence to rely on Semantic Web services. This paper tries to give an insight into the different dimensions of the Semantic Web trust layer. It looks at trust from four different perspectives: the policies that can be applied, the content that should be proven, the reputation of sources, and the origins of acquired information or services. Finally, given the emergence of the Semantic Web of things, trust models and trust management within distributed systems are reviewed.
Keywords: Semantic Web, Trust, Policy, Reputation, Content, Semantic Web of things
In the Information era, with the transformation of the Internet into the Internet of things and the Semantic Web, cooperation between humans and computers is the prime solution for social challenges. The main purpose behind the creation of the Semantic Web was to assist not only human interactions with machines but also the interactions of machines with each other. In fact, the Semantic Web can be viewed as a network of linked information that facilitates machine processing globally. The Semantic Web can also be considered a set of relationships between entities on the Web; these connections can be seen as graphs in which predicates are taken as edges and classes as nodes.
From the beginning, it was clear that the reliability and security of information in an immense and open information space such as the Web would become a challenge. Almost unsupervised and uncontrolled, the nature of the Web allows anyone to say anything about any subject, and this makes the Web a unique source of information. However, it is the user's responsibility to distinguish right from wrong. On the other hand, in an agent-based environment, where computers have to choose among multiple alternative sources answering a query, this is achieved through harder and computationally intensive processes. Therefore, the necessity of a mechanism that provides secure data interaction and identifies the truth of content and the trustworthiness of its origin is obvious.
Getting information from the Web becomes more common every day, and users acquire information from sources ranging from personal Web pages and governmental institutions to scientific portals. Human users tend to decide whether to trust a source using different methods, such as relying on their previous experiences or on other users' opinions. The Semantic Web, however, seeks a different goal: to give computer agents the ability to interact with Web content and with other agents, and to make decisions about choosing the right service or information provider. In this case, how will a computer agent be able to trust an information source? How can it identify the correctness of acquired information? And how can it achieve secure communication?
This paper provides an overview of the research works related to the security and trust layers of the Semantic Web stack. Section 2 studies various definitions of trust and reputation. Section 3 categorizes trust from different perspectives. Section 4 reviews the prominent approaches in distributed trust. Section 5 presents different trust and reputation test beds. Section 6 introduces open challenges. Section 7 concludes the paper.
This section reviews the definitions of two preliminary concepts, namely trust and reputation.
Trust carries different meanings depending on the context and the area in which it is used. In computer networks it refers to mechanisms that ensure security and access control. In distributed and agent-based systems it is considered a tool to measure reliability. In game theory and policy, it is viewed as the rate of correct decisions made by a system under uncertain conditions. Trust can be clustered into two main categories, namely reliability trust and decision trust, each with a different description. When person A asks person B to perform a certain task, the reliability (probability) of the trusted person B, as seen by the trusting person A, depends on the performance of the expected task.
Decision trust is the degree to which the trusting party is willing to depend on the trusted party under certain circumstances to acquire a sense of security, even against the possibility of odds. Decisions made under this class depend on the degree of risk accepted by the trusting party and the previous negative or positive experiences it has had with the trusted party. Trust is also defined as the firm belief in the competence of an entity to act dependably, securely, and reliably within a specified context.
The general opinion regarding a person or thing is called reputation. Reputation can be based on accumulated ratings or scores given by community members to a person. Different approaches can be used to calculate the rating of an entity, such as averaging: for example, it is possible to calculate the average of the reputation scores given by community members regarding the behavior of an entity. Members of a certain group usually receive almost the same rating from other users; when a group is well known for a certain subject, all members of that group usually receive the same credit as their group.
In addition, reputation can be considered the personal beliefs or experiences of an entity regarding the performance of another entity on certain subjects. In this case, reputation ratings should be based on firsthand experiences, or on a weighted measure divided by the total number of references provided by a single individual, such as the approach used by Google's PageRank. A reputation approach can be either centralized, given by an authority, or distributed, based on the knowledge of the crowd.
3 Categorization of trust
This section categorizes trust and the approaches that provide it, based on three perspectives.
3.1 Policy-based trust
The only vertical layer of the Semantic Web stack, the digital signature layer, utilizes XML digital signature facilities such as signed references, signed info and digest values to mark any Web content. Along with the proof and logic layers and the trust layer itself, these layers are responsible for the trustworthiness of Semantic Web processes. As the structure of XML documents is graph-like, the main challenge is to designate which parts of a document are accessible to which users, by enforcing policies on users and documents. In other words, Semantic Web agents are required to protect information and Web services from unauthorized access. To satisfy this need, a broad range of security policies exists, covering authentication, data integrity and privacy, access control, authorization and confidentiality.
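The digest step behind an XML signature's digest values can be illustrated with a minimal sketch. It assumes the fragment is already canonicalized; a real XML Signature implementation would first apply C14N canonicalization and then sign the resulting SignedInfo element.

```python
import base64
import hashlib

def digest_value(xml_fragment: str) -> str:
    """Compute a base64-encoded SHA-256 digest over an XML fragment.

    Assumption: the fragment is already in canonical form; full XML
    Signature processing canonicalizes (C14N) before hashing.
    """
    digest = hashlib.sha256(xml_fragment.encode("utf-8")).digest()
    return base64.b64encode(digest).decode("ascii")

# Any change to the content yields a different digest, which is how
# tampering with signed Web content is detected.
print(digest_value("<statement>example content</statement>"))
```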
The XML nature of the Semantic Web gives it the advantage of using metadata instead of the data itself. There are many advantages in using information about data instead of the whole data: one is the relatively small size of metadata, another the ability to make data more discoverable. Besides its many benefits, metadata has disadvantages. It can be created by a number of sources, such as automated tools, the data owner itself and other users on the Web; because of the non-uniform trustworthiness of metadata generators on the Web, it is imperative for each user to understand the trustworthiness level of each piece of metadata in order to take full advantage of it.
Currently, the Web is equipped with a variety of tools to ensure the security of information exchange, such as digital signatures, public keys, Web certificates and encryption. Several security standards have been introduced to guarantee the safety of content exchanged in a Web of trust between business partners. For example, the WS-SecurityPolicy standard for XML-based Web services describes ways of attaching signatures, encryption headers or security tokens to a SOAP message, and the SAML standard provided by the OASIS Security Services technical committee provides a means to authorize and authenticate, but is unable to give any suggestion regarding trust.
The Kerberos ticket-issuing system, originally created at MIT for Project Athena, is one of the most widely used trusted third-party authentication technologies. WS-Trust, an extension of the previously mentioned WS-Security, designates ways of acquiring trust through authorization, identity proof and entity performance.
Another challenge in establishing trust is to provide a means to reveal credentials while preventing loss of privacy and control over information. To deal with this issue, different mechanisms and policies have been introduced, such as TrustBuilder, which was designed to provide a mechanism for credential tradeoffs that does not cause loss of privacy, since trust decisions are actions that require accepting a certain amount of risk of revealing credentials in return for the advantage of earning trust. Another system suggested to facilitate the negotiation of credential exchange is PeerTrust, a more recent policy and trust negotiation language [9, 10].
Table 1 Prominent trust policies and systems: Kohl and Neuman (Kerberos); Winslett et al. (TrustBuilder); Li et al.; Bonatti et al.; distributed trust management: Blaze et al. (PolicyMaker) and Blaze et al. (KeyNote).
3.2 Reputation-based trust
The purpose of reputation-based trust is to make trust decisions based on personal experiences, the experiences of others, or in some cases a combination of personal and global experiences. In reputation-based trust, members of a community judge other members of their network based on their transactions, product quality and service consistency [17, 18]. In other words, members of a community implement collaborative sanctioning, a team effort that gives poor-quality service providers in the network an incentive to provide better services. A trust network can be considered a graph in which members are nodes and edges are weighted according to the trust performance perceived by members from other users. Through a trust network, users are able to trust resources directly, via personal experiences, or indirectly, via other trusted users, using trust propagation methods [19, 20].
In a distributed system, there is no reputation center; instead there are multiple reputation bases where each member can submit its experience regarding other members, and members can obtain the information they need about a certain member of the community from other users who have had previous experience with that particular member. A peer-to-peer system is an example of such a distributed system.
The local model is based on the transitive nature of reputation. Although in this model trust is personal and varies from node to node, whenever a node has no information regarding a new trustee, it can rely on the experience of close trusted nodes. If someone has no information about someone else, he or she usually trusts friends and relatives more than unknown people or sources. According to the small-world hypothesis, there will be a path from trustor to trustee through a chain of trusted close friends [3, 23].
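The small-world idea above can be sketched as a search for the strongest chain of trusted intermediaries, with trust discounted multiplicatively at each hop. The decay factor and cut-off threshold are illustrative assumptions, not parameters of any specific published model.

```python
def chain_trust(graph, source, target, decay=0.9, threshold=0.1):
    """Best multiplicative trust from source to target via intermediaries.

    graph: dict mapping node -> {neighbor: direct trust in [0, 1]}.
    Trust along a path is the product of edge trusts times a per-hop
    decay; the strongest path value is returned (0.0 if none exceeds
    the threshold). Depth-first search, fine for small toy graphs.
    """
    best = 0.0
    stack = [(source, 1.0, {source})]
    while stack:
        node, value, visited = stack.pop()
        for nxt, t in graph.get(node, {}).items():
            if nxt in visited:
                continue
            v = value * t * decay
            if v < threshold:
                continue  # chain too weak to bother extending
            if nxt == target:
                best = max(best, v)
            else:
                stack.append((nxt, v, visited | {nxt}))
    return best

# Alice has never met Carol, but trusts her through Bob.
web = {"alice": {"bob": 0.9}, "bob": {"carol": 0.8}, "carol": {}}
print(chain_trust(web, "alice", "carol"))  # 0.9*0.9 * 0.8*0.9 = 0.5832
```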
In the following, some of the reputation calculation models are reviewed:
Subtraction or ratings average One of the simplest ways of calculating reputation is to subtract the aggregated negative ratings from the aggregated positive ratings given by users. These methods are also known as simple summation and average methods. The advantage of this approach is its simplicity, but owing to its primitive mathematics it imperfectly reflects users' opinions regarding a particular member or information resource. An advanced version of this method is the calculation of an average of ratings, or a weighted average in which certain factors each receive an assigned weight.
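Both variants can be sketched in a few lines; the sample ratings and weights below are purely illustrative.

```python
def summation_score(ratings):
    """Simple summation: (# positive) - (# negative), eBay-forum style."""
    return sum(1 if r > 0 else -1 for r in ratings)

def weighted_average(ratings, weights):
    """Weighted average of ratings, e.g. weighting recent or expert raters higher."""
    return sum(r * w for r, w in zip(ratings, weights)) / sum(weights)

ratings = [1, 1, -1, 1]                              # +1 positive, -1 negative
print(summation_score(ratings))                      # 2
print(weighted_average([5, 4, 1], [0.5, 0.3, 0.2]))  # 3.9
```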
Bayesian and opinion models Because opinions can also be mapped into Beta PDFs, and hence the operators are the same as in the Bayesian method, this model can be named both opinion-based and Bayesian-based.
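The core of the Bayesian approach is the expected value of a Beta posterior: with r positive and s negative past interactions, reputation is the mean of Beta(r + 1, s + 1), which is 0.5 in the absence of evidence.

```python
def beta_reputation(r, s):
    """Expected value of the Beta(r+1, s+1) posterior over behavior.

    r: count of positive (satisfactory) interactions,
    s: count of negative ones.
    """
    return (r + 1) / (r + s + 2)

print(beta_reputation(0, 0))  # 0.5 -- no evidence, maximum uncertainty
print(beta_reputation(8, 2))  # 0.75 -- mostly positive history
```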
Fuzzy logic-based model In this method, the reputation of network members is indicated using linguistic fuzzy concepts, meaning that the value of the membership function illustrates approximately how well agents fit the concept of trustworthiness. Reasoning in this method is done through fuzzy logic and fuzzy measures.
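As an illustration, a single ramp-shaped membership function for the linguistic concept "trustworthy"; the breakpoints 0.4 and 0.8 are arbitrary choices for the sketch, not values from any published model.

```python
def mu_trustworthy(score):
    """Membership degree of a score in the fuzzy concept 'trustworthy'."""
    if score <= 0.4:
        return 0.0                     # clearly not trustworthy
    if score >= 0.8:
        return 1.0                     # fully trustworthy
    return (score - 0.4) / 0.4         # linear ramp in between

# Degree to which each agent fits the 'trustworthy' concept:
for s in (0.3, 0.6, 0.9):
    print(s, round(mu_trustworthy(s), 2))
```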
Flow model In the flow method, reputation is calculated through transitive iteration along chains of members in the network. Some models assume a constant reputation weight for the whole trust network, which can be distributed among the members evenly or unevenly. Since the total weight of the network is constant, each member's reputation can only be increased at the cost of other members. Therefore, the degree of increase or decrease of each node's reputation is a function of the input and output flow of reputation score within the network. Table 2 summarizes the reputation calculation models and their prominent examples.
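A constant-total flow iteration can be sketched in the style of PageRank; the damping value and iteration count below are illustrative. The total reputation in the network sums to 1 throughout, so one node's gain is necessarily another's loss.

```python
def flow_reputation(links, damping=0.85, iters=50):
    """PageRank-style reputation flow over an endorsement graph.

    links: dict node -> list of nodes it endorses. Nodes with no
    endorsements spread their reputation evenly over everyone.
    """
    nodes = list(links)
    n = len(nodes)
    rep = {v: 1.0 / n for v in nodes}
    for _ in range(iters):
        nxt = {v: (1.0 - damping) / n for v in nodes}
        for v, outs in links.items():
            targets = outs if outs else nodes
            share = rep[v] / len(targets)
            for u in targets:
                nxt[u] += damping * share
        rep = nxt
    return rep

rep = flow_reputation({"a": ["b"], "b": ["c"], "c": ["a"]})
print(rep)  # symmetric 3-cycle: each node holds 1/3 of the total
```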
Multi-context models Since trust and reputation are multi-context in nature, the creation of multi-dimensional models to calculate trust and reputation is important. Multi-dimensional models have a modular structure; agents created in such an architecture are capable of utilizing several logics in a way that maximizes their representational power. Some well-known multi-dimensional models are REGRET, SPORAS and HISTOS.
REGRET model Within the REGRET model it is possible to calculate multi-dimensional reputation, taking into account dimensions such as the social dimension, the ontological hierarchy and the individual dimension. This model is a natural extension of previous widely used models and is flexible enough to be applied to societies with different social structures and to agents that belong to more than one group at a time.
SPORAS and HISTOS models SPORAS evolved from the online reputation models that utilize simple summation and average methods. Within SPORAS, only the most recent rating between two users is considered, and users with higher reputation values receive much smaller rating changes than users with low reputation values after each update iteration. Although SPORAS has the same characteristics as the simple summation and average models, it is more robust to changes in user behavior and hence more reliable. HISTOS was introduced in response to the lack of personalization in SPORAS; it can deal with direct information as well as witness information.
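The damping behavior described for SPORAS can be sketched as follows. This is a simplified illustration of the idea that high-reputation users receive smaller updates, not the exact published SPORAS formula; max_rep and theta are assumed constants.

```python
def sporas_style_update(rep, rating, max_rep=3000.0, theta=10.0):
    """SPORAS-style update sketch: move reputation toward the new rating,
    with a step size that shrinks as reputation approaches the maximum.
    """
    damping = 1.0 - rep / max_rep          # high reputation -> small changes
    return rep + (1.0 / theta) * damping * (rating - rep)

print(sporas_style_update(100.0, 2000.0))   # newcomer: large jump
print(sporas_style_update(2900.0, 2000.0))  # established user: tiny change
```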
Table 2 Prominent examples of reputation calculation models: subtraction or ratings average: eBay reputation forum, Resnick and Zeckhauser, Schneider et al.; Bayesian and opinion models: Esfandiari and Chandrasekharan, Shekarpour and Katebi; fuzzy logic-based model: Carbo et al.; flow model: Page et al., Ziegler and Lausen, Advogato's reputation scheme.
3.3 Content-based trust
Web contents are represented as axioms and ontologies within the Semantic Web. In the following, the possibilities of using the content of Web transactions to gain trust are explored. The content of the information exchanged on the Web was never considered in the Semantic Web trust layer, which relies instead on authentication, identification and proof checking. However, the Semantic Web makes it possible to interact with and utilize Web content directly. Thus, it provides a unique opportunity to use the content of Web resources as a means of judging the identity of their creators.
While all other types of trust assessment are concerned with an information provider's legitimacy based on its reputation, behavior and implemented policies, content-based trust is concerned with the nature of the content provided on the Web. In real life, one may choose to trust information provided by a trusted resource; however, if many low-trust resources all provide the same information and it conflicts with the information given by the trusted resource, people might choose to believe the information coming from the many, even if those sources do not look legitimate. Therefore, it can be said that reputation and certification are each only one of the dimensions that together create the phenomenon called trust.
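The many-versus-one intuition can be made concrete with a toy scoring rule (purely illustrative, not a method from the literature): treat each source's trust as the probability that it is right, and score a claim by the probability that not all of its sources are wrong.

```python
def belief(claims, source_trust):
    """Score each claim by combining the trust of the sources asserting it.

    claims: dict claim -> list of sources asserting it.
    score = 1 - prod(1 - trust_i): the probability that at least one of
    the (assumed independent) sources is right. Many weak sources can
    then outweigh a single strong one.
    """
    scores = {}
    for claim, sources in claims.items():
        p_all_wrong = 1.0
        for s in sources:
            p_all_wrong *= 1.0 - source_trust[s]
        scores[claim] = 1.0 - p_all_wrong
    return scores

trust = {"gov": 0.9, "blog1": 0.5, "blog2": 0.5, "blog3": 0.5, "blog4": 0.5}
scores = belief({"A": ["gov"], "B": ["blog1", "blog2", "blog3", "blog4"]}, trust)
print(scores)  # claim B (four weak sources) edges out claim A (one strong)
```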
Authority Information providers trusted on a particular subject may not be trusted in other subject areas. People may trust information provided by the World Health Organization about diseases, but economic information provided by the same organization will not be trusted by users.
Transitivity of legitimation Relations with highly trusted and authorized entities on the network can transfer some of their trust to the entities related to them, for example, the certificates provided by universities to medical students.
Pedigree Content generated by entities may receive credit and trust from its creators. Information provided by a scientific web site is more likely to be accepted by a user than information from an anonymous resource.
Bias Sometimes the information provided by resources may be incomplete or insufficient under certain conditions; for example, a drug manufacturer may ignore the side effects related to a certain treatment condition and focus on trial outcomes. Detecting bias requires expertise.
Motivation to provide accurate information If an information provider has a motivation and interest in providing more accurate information, then users are more likely to believe that information.
Deceptive behavior Encountering information resources with sinister goals is a natural event on the Web; therefore, users should be alert to the fact that resources and their associates may not be what they appear to be.
Based on what was mentioned regarding content-based trust, this method tries to introduce new trust metrics that use the content of the information provided by trusted suppliers. In a semantically enabled Web, not only humans but also agents will need to choose whether to trust certain resources while facing many alternatives. Humans carry out this process every day: people choose resources and information in their everyday Web activities, but the rationale behind their decisions is unknown due to the complexity of human behavior. It would therefore be advantageous for automated systems and agents to utilize the capabilities of the Semantic Web and make trust judgments based on the content of the information provided by resources.
4 Trust in Semantic Web of things
In the literature, Li et al. introduced a new language for behavior-based trust management and constructed a hypothetical semantics for it; in addition, they illustrated that the use of graphs in credentials functions accurately. Ghorbanimoghaddam highlighted the advantages of using trust in distributed systems and explored the weaknesses of the different trust methods introduced in related work. According to research on on-off attacks, using an adaptive forgetting pattern instead of fixed forgetting factors is more effective. Nitti et al. introduced a protocol for dynamic trust management, a solution for dealing with misbehaving nodes that functions dynamically; this protocol is also able to designate suitable parameters for each condition of a dynamically changing network. Liu et al. first explored the reasons why traditional security mechanisms fail at managing trust, and then introduced a holistic model for managing trust within distributed systems.
Table 3 Prominent works in distributed trust, listing for each work its trust computation, performance/trust evaluation metrics and trust management model:

Bao and Chen: trust values updated using direct observations and indirect recommendations; social trust and QoS metrics (honesty, cooperativeness and community-interest); effectiveness of the trust management protocol evaluated in a service composition application, in comparison with ideal service composition (upper bound) and random service composition (lower bound); community-based social environment.

Jingpei Wang et al.: fuzzy set theory and a formal semantics-based language; quality of service (QoS); (1) extracting trust information and (2) trust-based decision making (two types: access control policy based on trust, and self-organized decision); provides a service of self-organizing a set of items based on their trust status to take an informed decision; based on a layered service model.

Dong Chen et al.: direct observations and indirect reputation, evaluated in the NS-3 simulator; metrics: (1) end-to-end packet forwarding ratio (EPFR), (2) average energy consumption (AEC), (3) package delivery ratio (PDR) and detection probability (DP); confidentiality, integrity and availability; based on a fuzzy reputation model (TRM).

Ben Saied et al.: assigns dynamic trust scores to cooperating nodes according to different contexts and different functions; trust values updated by events and time; deters a class of common attacks designed to target trust management systems; a context-aware and multiservice approach.

Ray Chen et al.: direct user satisfaction from past interaction experiences and recommendations from others; social relationships (friendship, social contact and community of interest); SOA-based service-oriented architecture.

Hui Xia: direct trust, recommendation trust, incentive function and active degree; network interaction quality, adaptability, malicious node identification and attack resistance; smaller attack effects on the introduced model compared with TSF2 and CFStrust; based on a fuzzy approach; both direct observations and indirect recommendations in the NS-3.17 network simulator; uses social network theory (a social hierarchy structured using balanced connectivity criteria and a K-means clustering algorithm).

Nitti and Atzori: past direct (direct interactions) or indirect (through intermediate nodes) experiences; credibility and centrality; isolation of almost any malicious node in the network; based on the Social Internet of Things (SIoT) paradigm and the behavior of the objects.
5 Trust and reputation test beds
In order to observe the performance and behavior of the introduced trust and reputation models, they must be tested within a certain environment called a test bed. Since each model tries to cover certain aspects of reputation and trust, no test bed offers an environment in which all the presented models can be compared with each other, which makes the comparison process complicated. Each proposed model is presented with a testing environment exclusively designed for that model. Some test beds are based on the prisoner's dilemma, such as the playground designed by Marsh: in this test bed agents have freedom of movement, and an interaction is recorded using the prisoner's dilemma whenever an agent makes a move. Schillo et al. suggested a disclosed iterated prisoner's dilemma with partner selection and a standard payoff matrix. Castelfranchi et al. presented a test bed designed to observe the effects of interactions between artificial agent populations following different criteria, for aggregation control purposes. The ART test bed presented by Fullam et al. responds to the shortcomings of previously introduced test beds; within the ART test bed, researchers are able to compare different subjective metrics and conduct their research using flexible parameters.
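A prisoner's-dilemma-based test bed ultimately logs each interaction against the standard payoff matrix; a minimal sketch (agent strategies and movement omitted, payoff values are the conventional ones):

```python
# Standard prisoner's dilemma payoffs: (my payoff, opponent's payoff)
PAYOFF = {
    ("C", "C"): (3, 3),  # mutual cooperation
    ("C", "D"): (0, 5),  # sucker's payoff vs temptation to defect
    ("D", "C"): (5, 0),
    ("D", "D"): (1, 1),  # mutual defection
}

def play_round(history, move_a, move_b):
    """Record one interaction, as Marsh-style test beds log each agent move."""
    pay_a, pay_b = PAYOFF[(move_a, move_b)]
    history.append({"a": move_a, "b": move_b, "pay_a": pay_a, "pay_b": pay_b})
    return pay_a, pay_b

history = []
play_round(history, "C", "C")   # both cooperate
play_round(history, "C", "D")   # agent a is exploited
total_a = sum(h["pay_a"] for h in history)
print(total_a)  # 3 + 0 = 3
```

Trust models plugged into such a test bed would read `history` to decide each agent's next move.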
6 Open challenges and issues
- Performance improvement for Semantic Web trust algorithms.
- Seamless integration and cooperation of various trust management models to achieve holistic trust management in the Semantic Web.
- Power-efficient trust management models, as well as faster and less energy-consuming mechanisms to support semantically enabled devices within the IoT.
- Approaches to overcome the difficulties of transmitting and computing trust among different networks.
- Privacy of humans and confidentiality of business processes.
- Autonomic trust management algorithms.
- Trustworthy data fusion.
This paper tried to give an insight into the different dimensions of the Semantic Web trust layer. How intelligent agents should trust different resources on the Web when more than one choice is available depends on the reputation metrics and calculation methods mentioned here. How to decide whether supplied content is relevant, using the nature of the Semantic Web, was also explored, and the different policies that can be imposed on a network to facilitate and secure information exchange were reviewed. As for distributed systems, trust properties must be improved in order to achieve robust trust management. Valid ratings for comments provided by nodes, the honesty of the recommendations provided by each node within semantic networks, and the evaluation of past experience with a particular node one intends to communicate with could be handled using fuzzy logic approaches; context-aware approaches are also good at deterring malicious information within the Semantic Web space. As a result, it seems that a combination of context-aware and fuzzy approaches could be useful in designing an effective trust management model in this scope.
Compliance with ethical standards
Conflict of interest
On behalf of all authors, the corresponding author states that there is no conflict of interest.
4. Schneider J et al (2000) Disseminating trust information in wearable communities. In: Proceedings of the 2nd international symposium on handheld and ubiquitous computing (HUC2K)
5. Resnick P, Zeckhauser R (2002) Trust among strangers in internet transactions: empirical analysis of eBay's reputation system. In: Baye MR (ed) The economics of the internet and e-commerce, advances in applied microeconomics, vol 11. Elsevier Science, Amsterdam
7. Kohl JT, Clifford Neuman B, T'so TY (1994) The evolution of the Kerberos authentication system. In: Distributed open systems. IEEE Computer Society Press, pp 78–94
9. Leithead T, Nejdl W, Olmedilla D, Seamons KE, Winslett M, Yu T, Zhang CC (2004) How to exploit ontologies for trust negotiation. In: ISWC workshop on trust, security, and reputation on the semantic web, volume 127 of CEUR workshop proceedings. Technical University of Aachen (RWTH), Hiroshima, Japan
10. Nejdl W, Olmedilla D, Winslett M (2004) PeerTrust: automated trust negotiation for peers on the semantic web. In: Jonker W, Petković M (eds) Secure data management. SDM 2004. Lecture notes in computer science, vol 3178. Springer, Berlin
12. Bonatti PA, De Coi JL, Olmedilla D, Sauro L (2008) Policy-driven negotiations and explanations: exploiting logic-programming for trust management, privacy & security. In: Garcia de la Banda M, Pontelli E (eds) Logic programming. ICLP 2008. Lecture notes in computer science, vol 5366. Springer, Berlin
13. SAML (2005) http://www.oasis-open.org/committees/tchome.php?wgabbrev=security. Accessed 21 Mar 2019
14. WS-Trust (2005) http://www-128.ibm.com/developerworks/library/specification/ws-trust/. Accessed 23 Mar 2019
15. Blaze M, Feigenbaum J, Strauss M (1998) Compliance checking in the PolicyMaker trust-management system. In: Proceedings of financial cryptography '98. Lecture notes in computer science, vol 1465. Springer, Berlin, pp 254–274
16. Blaze M, Feigenbaum J, Ioannidis J, Keromytis A (1998) The KeyNote trust management system. http://www.cis.upenn.edu/~angelos/keynote.html
18. Riloff E, Wiebe J, Phillips W (2005) Exploiting subjectivity classification to improve information extraction. In: Proceedings of the 20th national conference on artificial intelligence
19. Hui Xia ZJ, Ju L, Li X, Zhu Y (2011) A subjective trust management model with multiple decision factors for MANET based on AHP and fuzzy logic rules. In: IEEE/ACM international conference on green computing and communications
20. Guha R, Kumar R, Raghavan P, Tomkins A (2004) Propagation of trust and distrust. In: WWW '04: Proceedings of the 13th international conference on world wide web. ACM Press, New York, pp 403–412
21. Sabater J, Sierra C (2005) REGRET: a reputation model for gregarious societies. In: Proceedings of the 4th international workshop on deception, fraud and trust in agent societies, at the 5th international conference on autonomous agents (AGENTS'01). ACM Press, Montreal, pp 61–69
22. Kamvar SD, Schlosser MT, Garcia-Molina H (2003) The EigenTrust algorithm for reputation management in P2P networks. In: WWW '03: Proceedings of the 12th international conference on world wide web. ACM Press, New York, pp 640–651
24. Carbo J, Molina J, Dalliva J (2002) Comparing predication of SPORAS vs fuzzy reputation agent system. In: 3rd international conference on fuzzy systems and fuzzy sets
25. Esfandiari B, Chandrasekharan S (2001) On how agents make friends: mechanisms for trust acquisition. In: Proceedings of the fourth workshop on deception, fraud and trust in agent societies, pp 27–34
26. Page L, Brin S, Motwani R, Winograd T (1998) The PageRank citation ranking: bringing order to the web. Technical report, Stanford Digital Library Technologies Project
27. Ziegler C-N, Lausen G (2004) Spreading activation models for trust propagation. In: Proceedings of the IEEE international conference on e-technology, e-commerce, and e-service (EEE'04), Taipei, March
28. Levien R (2004) Attack resistant trust metrics. Ph.D. thesis, University of California at Berkeley
30. Chen IR, Guo J, Bao F (2014) Trust management for SOA-based IoT and its application to service composition. IEEE Trans Serv Comput 99(1):1
31. Bonatti P, Olmedilla D (2005) Driving and monitoring provisional trust negotiation with metapolicies. In: POLICY '05: Proceedings of the sixth IEEE international workshop on policies for distributed systems and networks. IEEE Computer Society, Washington, DC, pp 14–23
33. Ghorbanimoghaddam M (2015) Trust metrics in recommender systems: a survey. Adv Comput Intell Int J (ACII) 2(3):1245–1257
34. Michele Nitti RG, Atzori L, Iera A, Morabito G (2012) A subjective model for trustworthiness evaluation in the social internet of things. In: 23rd annual IEEE international symposium on personal, indoor and mobile radio communications
35. Liu L, Loper ML, Ozkaya Y, Yasar A, Yigitoglu E (2016) Machine to machine trust in the IoT era. TRUST@AAMAS
36. Bao F, Chen I-R (2012) Dynamic trust management for internet of things applications. In: Self-IoT '12, San Jose, California. ACM 978-1-4503-1753
37. Jingpei Wang SB, Yu Y, Xinxin N (2013) Distributed trust management mechanism for the internet of things. In: Proceedings of the 2nd international conference on computer science and electronics engineering (ICCSEE 2013)
39. Yinan X (2016) A study on trust management algorithms for the social internet of things. Master of Science dissertation in digital media technology, School of Computer Science and Engineering
41. Marsh S (1994) Formalizing trust as a computational concept. PhD thesis, Department of Mathematics and Computer Science, University of Stirling
43. Castelfranchi C, Conte R, Paolucci M (1998) Normative reputation and the costs of compliance. J Artif Soc Soc Simul 1(3):1–3
44. Fullam KK, Klos T, Muller G, Sabater-Mir J, Barber KS, Vercouter L (2006) The agent reputation and trust (ART) testbed. In: Stølen K, Winsborough WH, Martinelli F, Massacci F (eds) Trust management. iTrust 2006. Lecture notes in computer science, vol 3986. Springer, Berlin