Obligation as Optimal Goal Satisfaction
Abstract
Formalising deontic concepts, such as obligation, prohibition and permission, is normally carried out in a modal logic with a possible world semantics, in which some worlds are better than others. The main focus in these logics is on inferring logical consequences, for example inferring that the obligation O q is a logical consequence of the obligations O p and O (p → q). In this paper we propose a non-modal approach in which obligations are preferred ways of satisfying goals expressed in first-order logic. To say that p is obligatory, but may be violated, resulting in a less than ideal situation s, means that the task is to satisfy the goal p ∨ s, and that models in which p is true are preferred to models in which s is true. Whereas, in modal logic, the preference relation between possible worlds is part of the semantics of the logic, in this non-modal approach, the preference relation between first-order models is external to the logic. Although our main focus is on satisfying goals, we also formulate a notion of logical consequence, which is comparable to the notion of logical consequence in modal deontic logic. In this formalisation, an obligation O p is a logical consequence of goals G, when p is true in all best models of G. We show how this non-modal approach to the treatment of deontic concepts deals with problems of contrary-to-duty obligations and normative conflicts, and argue that the approach is useful for many other applications, including abductive explanations, defeasible reasoning, combinatorial optimisation, and reactive systems of the production system variety.
Keywords
Deontic logic · Abductive logic programming · Normative conflicts · Contrary-to-duty obligations · Goals · Preferences
1 Introduction
There are two ways to understand such natural language sentences as birds can fly. One is to understand them literally, but only as defeasible assumptions. The other is to understand them as approximations to more precisely stated sentences, such as a bird can fly if the bird is normal, with an extra condition the bird is normal, which is defeasible, but is assumed to hold by default.
In this paper, we explore the second approach, applied to natural language sentences involving deontic attitudes. In contrast to modal approaches, which aim to stay close to the literal expression of natural language sentences, our approach uses a non-modal logic, in which implicit alternatives are made explicit. For example, instead of understanding the sentence you should wear a helmet if you are driving a motorcycle as it is expressed literally, we understand it instead as saying that you have a choice: if you are driving a motorcycle, then you will drive with a helmet or you will risk suffering undesirable consequences that are worse than the discomfort of wearing a helmet.
This is not an entirely new idea. Herbert Bohnert [8] suggested a similar approach for imperative sentences, treating the command do a, for example, as an elliptical statement of a non-modal, declarative sentence either you will do a or else s, where s is a sanction or “some future situation of directly unpleasant character”. Alan Ross Anderson [2] built upon Bohnert’s idea, but reformulated it in alethic modal logic, reducing deontic sentences of the form O p (meaning p is obligatory) to alethic sentences of the form N (¬p → s) (meaning it is necessarily the case that if p does not hold, then s holds). A similar reduction of deontic logic to alethic logic was also proposed by Stig Kanger [35]. Our non-modal approach, using abductive logic programming (ALP) [34], is similar in spirit, in the sense that goals in ALP (whether they represent the personal goals of an individual agent, the social goals of a society of agents, the dictates of a powerful authority, or physical constraints) are hard constraints that must be satisfied.
G is true in the model represented by P ∪ Δ.
For simplicity, we consider only logic programs, which are sets of definite clauses of the form conclusion ← condition_1 ∧ … ∧ condition_n, where conclusion and each condition_i is an atomic formula, and all variables are universally quantified. Any logic program P (or P ∪ Δ) of this form has a unique minimal model [17]. The logic program can be regarded as a definition of this model, and the model can be regarded as the intended model of the logic program.
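The unique minimal model of a definite-clause program can be computed bottom-up, by applying the clauses repeatedly until nothing new is derived. The following Python sketch is illustrative and is our own construction, not the paper's: it handles only ground programs, represents clauses as (conclusion, conditions) pairs, and borrows the bird example that appears later in the paper.

```python
# Minimal-model computation for a ground definite-clause program,
# as the least fixed point of the immediate-consequence operator.

def minimal_model(program):
    """program: list of (conclusion, [condition, ...]) ground clauses.
    Returns the unique minimal Herbrand model as a set of atoms."""
    model = set()
    changed = True
    while changed:
        changed = False
        for head, body in program:
            if head not in model and all(b in model for b in body):
                model.add(head)
                changed = True
    return model

# canfly(tweety) ← bird(tweety) ∧ normalbird(tweety), plus two facts
P = [("bird(tweety)", []),
     ("normalbird(tweety)", []),
     ("canfly(tweety)", ["bird(tweety)", "normalbird(tweety)"])]

print(sorted(minimal_model(P)))
# → ['bird(tweety)', 'canfly(tweety)', 'normalbird(tweety)']
```

The iteration terminates because the model grows monotonically within a finite set of ground atoms.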
In ordinary abduction, the goals G represent a set of observations, and Δ represents external events that explain G. In deontic applications, the goals G represent obligations, augmented if necessary with less desirable alternatives, and Δ represents actions and possibly other assumptions that together with P satisfy G.
In general, there can be many Δ ⊆ A that satisfy the same goals G. In some cases, the choice between them may not matter; but in other cases, where some Δ are better than others, it may be required to generate some best Δ. For example, in ordinary abduction, it is normally required to generate the best explanation of the observations. In deontic applications, it is similarly required to generate a best, more complete model of the world. However, due to practical limitations of incomplete knowledge and lack of computational resources, it may not always be feasible to generate a best Δ. In some cases, it may not even be possible to decide whether one Δ is better than another. In other cases, it may be enough simply to satisfy the goals [63] without attempting to optimise them. Nonetheless, the aim of generating a best solution represents a normative ideal, against which other, more practical solutions can be compared.
This focus on satisfying goals in ALP contrasts with the more common focus on determining logical consequence in most formal logics. We will argue that some of the problems that arise in deontic logic in particular are due to this focus on logical consequence, and that they can be solved by shifting focus to goal satisfaction. To facilitate the argument, we employ the following definition, adapted from [31]:
The normative abductive task is to satisfy G by generating some Δ ⊆ A, such that G is true in the model M represented by P ∪ Δ and there does not exist any Δ’ ⊆ A such that G is true in the model M’ represented by P ∪ Δ’ and M < M’.
An obligation O p is a logical consequence of a normative abductive framework 〈P, G, A, <〉 if and only if p is true in all best models of G.
In this paper, we show how ALP deals with contrary-to-duty obligations, which arise when the violation of a primary obligation p invokes a secondary obligation q. We represent such contrary-to-duty obligations by means of a goal of the form ¬p → q (or equivalently p ∨ q), together with an indication that models in which p is true are better than models in which q is true.
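The normative abductive task can be illustrated with a brute-force Python sketch of the simplest contrary-to-duty pattern: the soft obligation p becomes the hard goal p ∨ s, and models avoiding the sanction s are preferred. The atoms, the empty program and the preference relation are all illustrative choices of ours, not taken from the paper.

```python
from itertools import chain, combinations

def minimal_model(program, delta):
    """Minimal model of program ∪ delta (ground clauses and atoms)."""
    model = set(delta)
    changed = True
    while changed:
        changed = False
        for head, body in program:
            if head not in model and all(b in model for b in body):
                model.add(head)
                changed = True
    return model

def subsets(atoms):
    return chain.from_iterable(combinations(atoms, r)
                               for r in range(len(atoms) + 1))

def best_solutions(program, abducibles, goal, better):
    """All Delta ⊆ abducibles whose minimal model satisfies goal and is
    not bettered by the model of any other goal-satisfying Delta."""
    candidates = [(set(d), minimal_model(program, d))
                  for d in subsets(abducibles)]
    candidates = [(d, m) for d, m in candidates if goal(m)]
    return [d for d, m in candidates
            if not any(better(m2, m) for _, m2 in candidates)]

P = []                                  # no background beliefs needed here
A = ["p", "s"]                          # abducible atoms
goal = lambda m: "p" in m or "s" in m   # the hard goal p ∨ s
better = lambda m1, m2: "s" not in m1 and "s" in m2   # avoid the sanction

print(best_solutions(P, A, goal, better))   # → [{'p'}]
```

The only best solution fulfils the primary obligation p; the sub-ideal alternative s satisfies the goal but is eliminated by the preference.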
We also address the problem of reasoning with conflicting norms, which arise when two obligations p and q are incompatible. We represent such conflicting norms by goals of the form p ∨ sanction1 and q ∨ sanction2, where models in which p is true are better than models in which sanction1 is true, models in which q is true are better than models in which sanction2 is true, but models in which both p and sanction2 are true and models in which both q and sanction1 are true may be incomparable.
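The treatment of conflicting norms can be sketched in the same brute-force style. Here the goals are p ∨ sanction1 and q ∨ sanction2, with p and q incompatible; a model counts as better only if it does at least as well on both norms and strictly better on one, so the two resolutions of the conflict come out incomparable. All atom names and the dominance relation are illustrative choices of ours.

```python
from itertools import chain, combinations

ATOMS = ["p", "q", "sanction1", "sanction2"]

def all_models():
    """Every subset of the atoms, as a candidate model."""
    return [set(s) for s in chain.from_iterable(
        combinations(ATOMS, r) for r in range(len(ATOMS) + 1))]

def satisfies_goals(m):
    return (("p" in m or "sanction1" in m) and
            ("q" in m or "sanction2" in m) and
            not ("p" in m and "q" in m))       # p and q are incompatible

def better(m1, m2):
    """m1 dominates m2: at least as good on both norms, strictly on one."""
    p1, p2 = "p" in m1, "p" in m2
    q1, q2 = "q" in m1, "q" in m2
    return p1 >= p2 and q1 >= q2 and (p1 > p2 or q1 > q2)

candidates = [m for m in all_models() if satisfies_goals(m)]
best = [m for m in candidates
        if not any(better(m2, m) for m2 in candidates)]
```

Every best model satisfies exactly one of p and q: the conflict has two incomparable optimal resolutions, rather than none.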
At various places in this paper, we compare the ALP approach with that of standard deontic logic and some of its variants, production systems, Horty’s default deontic logic, constrained optimisation, SBVR (Semantics of Business Vocabulary and Rules) [61] and SCIFF [1]. The comparison with constrained optimisation shows that the separation in ALP between goals and preferences is a standard approach in practical problem solving systems. One of the advantages of the separation is that it shows how the normative ideal of generating a best solution can be approximated in practice by strategies designed to find the best solution possible within the computational resources available. The comparison with SBVR, on the other hand, shows that the syntactic limitations of ALP compared with modal deontic logics do not seem to be a limitation in practice, because they are shared with other approaches, such as SBVR, developed for applying deontic reasoning to practical applications.
The application of ALP to deontic reasoning has previously been explored in [38], which applies ALP to deontic interpretations of the Wason selection task [70] and to moral dilemmas in so-called trolley problems [65]. However, the approach most closely related to the one of this paper is that of SCIFF [1], which uses ALP to specify and verify interaction in multi-agent systems. Alberti et al. [1] compare SCIFF with modal deontic logics, but do not discuss the treatment of conflicting obligations or contrary-to-duty obligations.
Although we compare our approach with standard deontic logic, we do not compare it in detail with the myriad of other logics that have been developed for deontic reasoning. These other logics include defeasible deontic logics [48], stit logics of agency [30], input-output logics [43, 44] and preference-based deontic logics, such as [26] and [66]. As Paul Bartha [7] puts it, “Attempts to address these problems have resulted in an almost bewildering proliferation of different systems of deontic logic - at least one per deontic logician, as some have quipped - so that innovation inevitably meets with a certain amount of skepticism and fatigue.” Instead, we broaden our comparison to include such related work as production systems, constrained optimisation and SBVR, which have received little attention in the literature on deontic logic.
Many of the issues addressed in this paper are controversial, for example whether first-order logic is adequate for knowledge representation and problem solving or whether other logics are necessary; and whether a single universal logic is possible for human reasoning or whether many logics are needed for different purposes. Although it is possible to address these issues with theorems and their proofs, we pursue a more informal approach in this paper. We assume only that the reader has a general background in formal logic, but no specific knowledge of deontic logic or ALP. The next two sections provide brief introductions to both deontic logic and ALP.
2 Deontic Logic
Deontic logic is concerned with representing and reasoning about norms of behaviour. However, many authors have denied “the very possibility of the logic of norms and imperatives” [27]. Makinson [44], in particular, states that “there is a singular tension between the philosophy of norms and the formal work of deontic logicians. … Declarative statements may bear truth-values, i.e. are capable of being true or false, whilst norms are items of another kind. They assign obligations, permissions, and prohibitions. They may be applied or not, respected or not … But it makes no sense to describe norms as true or as false.” However, Jørgensen [33], while acknowledging this distinction between norms and declarative sentences, noted that there are “inferences in which one or both premises as well as the conclusion are imperative sentences, and yet the conclusion is just as inescapable as the conclusion of any syllogism containing sentences in the indicative mood only”. The resulting conundrum has come to be known as Jørgensen’s dilemma.
Despite these philosophical concerns, deontic logic has been a thriving research area, owing in large part to its formalisation in modal logic by von Wright [69]. The best known formalisation, which is commonly used as a basis for comparison with other deontic logics, is standard deontic logic (SDL). SDL is a propositional logic with a modal operator O representing obligation, where O p means that p is obligatory.
SDL includes the following axiom schemas and rules of inference:
D: ¬ (O p ∧ O ¬ p)
K: O p ∧ O (p → q) → O q
NEC: If p is a theorem, then O p is a theorem.
RM: If p → q is a theorem, then O p → O q is a theorem.
2.1 Ross’s Paradox
It is obligatory that the letter is mailed.
If the letter is mailed, then the letter is mailed or the letter is burned.
Therefore, it is obligatory that the letter is mailed or the letter is burned.
i.e. O mail, mail → mail ∨ burn. Therefore O (mail ∨ burn).
Thus, it seems that, if you are obliged to mail a letter, then you can satisfy the obligation either by mailing it or by burning it.
2.2 The Good Samaritan Paradox
It ought to be the case that Jones helps Smith who has been robbed.
If Smith has been robbed and Jones helps Smith, then Smith has been robbed.
Therefore, it ought to be the case that Smith has been robbed.
i.e. O (rob ∧ help), rob ∧ help → rob. Therefore O rob.
But concluding that a person ought to be robbed if the person ought to be helped when he is robbed seems hardly good advertising for being a Good Samaritan.
2.3 Chisholm’s Paradox
It ought to be that Jones goes to assist his neighbours.
It ought to be that, if Jones goes, then he tells them he is coming.
If Jones doesn’t go, then he ought not tell them he is coming.
Jones doesn’t go.
i.e. O go, O (go → tell), ¬ go → O ¬ tell, ¬ go.
Much of the discussion concerning the Paradox concerns the representation of conditional obligations of the kind involved in the second and third sentences. For example, the second and third sentences can be represented in the alternative forms go → O tell and O (¬ go → ¬ tell), respectively. Different representations lead to different problems. See, for example, the discussion in [11].
McNamara [46] claims, in the context of discussing the Paradox, that there is nearly universal agreement that such conditional obligations cannot be faithfully represented “by a composite of some sort of unary deontic operator and a material conditional”. One of the most common responses to the problem is to employ a dyadic deontic logic, like that of [26], in which conditional obligations are expressed using a binary deontic operator O (q/p), representing that the obligation q is conditional on p.
Another reaction to the Paradox is to formalise it in an action or temporal logic, e.g. [47], so that the obligation for Jones to assist his neighbours holds only until he doesn’t go, at which time he has the new obligation not to tell that he is coming. However, as [55] points out, the solution doesn’t work for contrary-to-duty obligations not involving change of state, as in Forrester’s paradox.
2.4 Forrester’s Paradox
It is forbidden for a person to kill. i.e. O ¬ kill
But if a person kills, the killing ought to be gentle. i.e. kill → O kill gently
If a person kills gently, then the person kills. i.e. kill gently → kill
Suppose, regrettably, that Smith kills Jones. Then he ought to kill him gently. But, by RM, Smith ought to kill Jones, which contradicts the first obligation, that Smith ought not to kill Jones.
2.5 Conflicting Obligations
Join the French resistance. i.e. O join
Stay at home and look after his aged mother. i.e. O stay
Joining and staying are incompatible. i.e. ¬ (join ∧ stay)
Together with RM, D implies that these obligations are inconsistent. But as Hilpinen and McNamara [27] put it, such moral dilemmas “seem not only logically coherent but all too familiar”.
Don’t eat with your fingers.
If you are served cold asparagus, eat it with your fingers.
You are served cold asparagus.
i.e. O ¬ fingers, asparagus → O fingers, asparagus.
In SDL and most other modal deontic logics, it follows that you should both eat with your fingers and not eat with your fingers, which is clearly impossible. However, intuitively, the first obligation is a general rule, which is defeated by the second obligation, which is an exception to the rule. Horty [32] shows how to formalise such defeasible rules using default logic [57]. Our ALP representation of conflicting obligations, in Section 5, can be viewed, in part, as a variant of Horty’s solution, using Poole’s [51] transformation of default rules into strict rules with abductive hypotheses.
3 Abductive Logic Programming
Abduction was identified by Charles Sanders Peirce [49] as a form of reasoning in which assumptions are generated in order to deduce conclusions: for example, to generate the assumption q, to deduce the conclusion p, using the belief q → p. Peirce focused on the use of abductive reasoning to generate explanations q for observations p. In Artificial Intelligence, abduction has also been used for many other purposes, including natural language understanding [28], fault diagnosis [53] and planning [18].
Poole et al. [50] developed a form of abduction, called Theorist, and showed that it can be used for non-monotonic, default reasoning: for example, to generate the assumption normalbird(tweety), to deduce canfly(tweety), using the beliefs bird(tweety) and ∀X (bird(X) ∧ normalbird(X) → canfly(X)). Poole [51] showed, more generally, that, by making implicit assumptions, like normalbird(X), explicit, default rules in default logic [57] can be translated into “hard” or “strict” rules in an abductive framework.^{1} Bondarenko et al. [9] showed that abduction with an argumentation interpretation can be used to generalize many other existing formalisms for default reasoning. Poole [52] showed that, by associating probabilities with assumptions, abductive logic programs can also represent Bayesian networks.
the task is to extend a “theory” P, which is a logic program,
with a set of assumptions Δ ⊆ A,
which are ground (i.e. variable-free) atomic sentences,
so that the extended logic program P ∪ Δ both solves a goal G and satisfies integrity constraints I.
This characterisation of ALP distinguishes between goals G, which are “one-off”, and integrity constraints I, which are “persistent”. It reflects the historical origins of ALP, in which logic programs are used to solve existentially quantified goals, but are extended with assumptions, which are restricted by integrity constraints, which are universally quantified.
However, in this paper, we employ a variant of ALP in which the emphasis is shifted from logic programs to integrity constraints, which can be arbitrary sentences of first-order logic (FOL), in the spirit of the related framework FO(ID) [16], in which FOL is extended with logic programs, viewed as inductive definitions. Moreover, we do not distinguish formally between goals and integrity constraints and between solving a goal and satisfying integrity constraints.
the task is to satisfy G, by generating some Δ ⊆ A such that
G is true in the minimal model min(P ∪ Δ) defined by P ∪ Δ.^{2}
It is the task of satisfying G that gives G its goal-like nature. As mentioned in the Introduction, a sentence in G can represent a personal goal of an individual agent, a social goal of a society of agents, a dictate of a powerful authority, or a physical or logical constraint. It can also represent an observation to be explained. Despite these different uses of sentences in G, they all have the same formal properties; and we use the two terms goal and integrity constraint interchangeably.
In this paper, we understand the term goal satisfaction in a model-theoretic sense, which contrasts with the theorem-proving view in which goal satisfaction means that G is a theorem that is a logical consequence of P ∪ Δ or of the completion of P ∪ Δ [13]. These two different uses of logic, for theorem-proving and for satisfiability, have analogues in modal logic, where there has also been a shift away from theorem-proving to model checking [24] and to model generation [6]. The corresponding shift from a theorem-proving semantics for ALP to a model generation semantics plays an important role in the ALP approach to reasoning about obligations.
3.1 Logic Programs (LP) as Definitions of Minimal Models
Not only is the Herbrand model min(P) a model of P, but it is the unique minimal model of P, in the sense that min(P) ⊆ M’ for any other Herbrand model M’ of P [17].
We also say, somewhat loosely, that P is a set of beliefs, and that min(P) is a model of the world. This is a different notion from the notion of belief in epistemic logic, in which it is possible to distinguish between a statement p about the world and a statement B p of a belief about the world. In our simplified approach, there is no distinction between the “world” and “beliefs” about the world, which are represented in the unadorned form p rather than B p.
3.2 Goals
G1: ∃E, T threat(E, T).
G2: ¬ ∃E, T (threat(E, T) ∧ T ≠ 11 ∧ T ≠ 13).
G3: ∀E, T (threat(E, T) → fire(E, T) ∨ flood(E, T)).
G4: ¬ ∃Agent1, Agent2, Action (happens(do(Agent1, Action)) ∧ harms(Action, Agent2)).
G5: ∀Agent, Action (promise(do(Agent, Action)) → happens(do(Agent, Action))).
G6: ∀E, T1 [threat(E, T1) → ∃T2 [eliminate(E, T1, T2) ∧ T1 < T2 < T1 + 3]].
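Goals like G1–G6 can be checked directly against a finite model, represented as a set of ground facts. The following Python sketch checks a G3-style goal; the particular facts, and the representation of atoms as tuples, are illustrative choices of ours.

```python
# Checking a universally quantified goal against a finite Herbrand
# model, represented as a set of (predicate, argument, argument) facts.

model = {("threat", "e1", 11), ("fire", "e1", 11),
         ("threat", "e2", 13), ("flood", "e2", 13)}

def holds(pred, *args):
    return (pred, *args) in model

def g3_holds():
    """∀E, T (threat(E, T) → fire(E, T) ∨ flood(E, T))"""
    return all(holds("fire", e, t) or holds("flood", e, t)
               for (p, e, t) in model if p == "threat")

print(g3_holds())   # → True
```

Adding a threat with neither a fire nor a flood would make the goal false, which is what drives the generation of further assumptions in abduction.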
3.3 Integrity Constraints
Logic programs can be used both as programs for performing computation and as databases for query-answering. When a logic program P is used as a database, then query-answering is the task of determining whether a query Q expressed as a sentence in FOL is true in the minimal model of the database, or of generating instances of Q that are true in the minimal model. Such queries do not involve a commitment to the truth of Q. In contrast, integrity constraints specify necessary properties of the database. In this respect, integrity constraints are like necessary truths in alethic modal logic, and the database P is like a set of contingent truths.
3.4 Abductive Explanations
3.5 Reduction of Soft Constraints to Hard Constraints
Database integrity constraints can be hard constraints, which represent physical or logical properties of the application domain, or soft constraints, which represent ideal behaviour and states of affairs, but which may nonetheless be violated. However, in ALP, all integrity constraints are hard constraints. Soft constraints need to be represented as hard constraints, by including less desirable alternatives explicitly. This reformulation of soft constraints as hard constraints in ALP is like the Andersonian reduction of deontic logic to alethic modal logic [2], but with the obvious difference that in ALP hard constraints are represented in FOL.
For example, a typical library database [62] might contain facts about books that are held by the library, about eligible borrowers, and about books that are out on loan. Some integrity constraints, for example that a book cannot be simultaneously out on loan and available for loan, are hard constraints, which reflect physical reality. Other constraints, for example that a person is not allowed to keep a book after the return date, are soft constraints, which may be violated in practice.
This way of representing alternatives is similar to the way in which defeasible rules, such as canfly(X) ← bird(X), are turned into strict rules by adding a single extra defeasible condition, such as normalbird(X) or ¬abnormalbird(X). The various alternative ways in which a bird can fail to be normal can be represented separately.
The reformulation of soft constraints as hard constraints is also like the Andersonian reduction. However, while the Andersonian reduction employs a single propositional constant s, representing a single, general, abstract sanction, the ALP reductions of both soft constraints to hard constraints and of default rules to strict rules, employ an additional condition, such as dealwiththreat(E, T1) or normalbird(X), which is specific to the constraint or rule to which it is added.
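The reduction just described can be sketched mechanically: each soft constraint is turned into a hard constraint by adding its own, constraint-specific violation alternative, rather than Anderson's single global sanction. The helper function and all names below are illustrative constructions of ours.

```python
# Hardening a soft constraint: the hard version is the disjunction
# soft_check(model) ∨ violated(name), with a violation atom specific
# to this constraint.

def harden(name, soft_check):
    """Return the hard constraint as a predicate on models, together
    with the violation atom introduced for this constraint."""
    violation = f"violated({name})"
    return (lambda model: soft_check(model) or violation in model), violation

# Library example: a borrowed book ought to be returned by the due date.
on_time = lambda m: "returned_by_due_date(book1)" in m
hard, v = harden("return_rule", on_time)

print(hard({"returned_by_due_date(book1)"}))   # → True  (ideal model)
print(hard({v}))                               # → True  (violation recorded)
print(hard(set()))                             # → False (hard constraint fails)
```

The hard constraint is always satisfiable, but the sub-ideal models that satisfy it via the violation atom can be ranked below the ideal ones by the preference relation.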
4 The Separation of Goals from Preferences in ALP
Not only does the Andersonian reduction, O p ⇔ N (¬ p → s) ⇔ N (p ∨ s), treat the disjunction p ∨ s as a hard constraint, but by defining O p in terms of p ∨ s it also builds into the semantics a preference for p over s. Van Benthem et al. [66] generalise this simple preference into a more general binary relation M1 ≤ M2 between possible worlds M1 and M2, representing that M2 is at least as good as M1.
Our modeltheoretic semantics of ALP, when applied to normative tasks, similarly employs a preference ordering among minimal models, which are like possible worlds, but the ordering is separate from and external to the logic. This separation of goals from preferences is an inherent feature of abductive reasoning, where generating possible explanations is a distinct activity from preferring one explanation to another. It is also a feature of most problemsolving frameworks in Artificial Intelligence and constrained optimization e.g. [15], where it is standard practice to separate the specification of constraints from the optimisation criteria. In ALP, this separation has the advantage of simplifying the logic, because the semantics does not need to take preferences into account.
4.1 The Map Colouring Problem
Every country ought to be assigned a colour.
It is forbidden to assign two different colours to the same country.
It is forbidden to assign the same colour to two adjacent countries.
For simplicity, assume that these are hard constraints, so we don’t have to worry about how to deal with failures of compliance.
So far, there is not much difference between the modal and the ALP representations. But now suppose that it is deemed desirable to colour the map using as few colours as possible. In ALP and other problemsolving frameworks, this optimisation criterion could be formalised by means of a cost function, which is represented separately, possibly in a metalanguage, as in [59, 60]. Such cost functions are employed in search strategies such as branch and bound, to generate solutions incrementally by successive approximation. Suboptimal solutions found early in the search are used as a bound to abandon partial solutions that are already worse than the best solution found so far. (For example, if a solution has already been found using five colours, then there is no point trying to extend a partial solution that already uses five colours.) Once a solution has been found, whether it is optimal or not, the search can be terminated with the best solution found so far. (Or it can be acted upon tentatively, until a better solution has been found.) Such anytime problem solving strategies are essential for practical applications.
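The branch-and-bound strategy described above can be sketched in a few lines of Python. The constraints (every country is coloured; adjacent countries differ) are kept separate from the optimisation criterion (use as few colours as possible), which is applied only to prune partial solutions. The map and the search order are illustrative.

```python
# Branch and bound for map colouring: suboptimal solutions found early
# serve as a bound for abandoning partial solutions already worse than
# the best solution found so far.

def colour_map(countries, adjacent):
    best = {"solution": None, "cost": len(countries) + 1}

    def extend(assignment):
        cost = len(set(assignment.values()))
        if cost >= best["cost"]:
            return                      # bound: cannot beat the best so far
        if len(assignment) == len(countries):
            best["solution"], best["cost"] = dict(assignment), cost
            return
        country = countries[len(assignment)]
        for c in range(len(countries)):  # candidate colours, tried in order
            if all(assignment.get(n) != c for n in adjacent[country]):
                assignment[country] = c
                extend(assignment)
                del assignment[country]

    extend({})
    return best["solution"], best["cost"]

edges = [("a", "b"), ("a", "c"), ("b", "c"), ("a", "d")]
adjacent = {x: [] for x in "abcd"}
for x, y in edges:
    adjacent[x].append(y)
    adjacent[y].append(x)

solution, cost = colour_map(["a", "b", "c", "d"], adjacent)
print(cost)   # → 3
```

Because a, b and c are mutually adjacent, three colours are necessary, and the bound prunes every branch that would use a fourth. The search is also anytime: it could be stopped after the first complete solution.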
Using deontic logic, it would be necessary to incorporate the optimisation criterion (fewest colours, in this example) into the object-level statement of the goal (colour the map, subject to the constraints). It is hard to see how this could be done; and, even if it could, it is hard to see how the resulting deontic representation would then be used to find solutions for difficult problems in practice.
4.2 Decision Theory
Decisions depend upon beliefs and goals, but we can think about beliefs and goals separately, without even knowing what decisions they will affect.
Conversely, classical decision theory is concerned with choosing between alternative actions, without even considering the goals that motivate the actions and the beliefs that imply their possible consequences. Normative decision theory, which is concerned with maximising the utility (or goodness) of the expected consequences of actions, is a theoretical ideal, against which other, more practical, prescriptive approaches can be evaluated. Baron [5, page 231] argues that the fundamental normative basis of decision making (namely, maximising the utility of consequences) is the same, whether it is concerned with the personal goals of individual agents or with moral judgements concerning others.
Arguably, classical decision theory, which not only separates thinking about goals and beliefs from deciding between actions but ignores the relationship between thinking and deciding, is too extreme. Deontic logic is the opposite extreme, entangling, in a primary obligation O p and a secondary obligation (in one or other of the forms O (¬p → q), ¬p → O q or O (q / ¬p)), the representation of a goal p ∨ q together with a preference for one alternative, p, over another, q. In contrast with these two extremes, ALP, like practical decision analysis [25, 36], separates thinking about goals and beliefs from deciding between alternative actions, but without ignoring their relationship.
4.3 Algorithm = Logic + Control
In an emergency, press the alarm signal button, to alert the driver.
It might be tempting to represent the sentence as an anankastic conditional [67] in a modal logic, for example as:
If there is an emergency and you want to alert the driver, then you should press the alarm signal button.
However, the same procedure can also be understood both logically and more simply as a definite clause:
The driver is alerted to an emergency, if you press the alarm signal button.
together with an indication that the clause should be used backwards, to reduce a goal matching the conclusion of the clause to the subgoals corresponding to the conditions of the clause. The use of the imperative verb press in the English sentence suggests that the belief represents the preferred method for achieving the conclusion. There are of course other ways of trying to alert the driver, like crying out loud, which might also work, and which might even be necessary if the preferred method fails. For example:
A person is alerted to a possible danger, if you cry out loud and the person is within earshot.
In ALP agents [40], logic programs are used both backwards, to reduce goals to subgoals, and forwards, to infer logical consequences of candidate actions.
You are liable to a fifty pound penalty, if you use the alarm signal button improperly.
The clause can be used backwards or forwards. But read as an English sentence, its clear intention is to be used in the forward direction, to derive the likely, undesirable consequence of using the alarm signal button when there isn’t an emergency. However, there is nothing to prevent a person from using the clause backwards, if he perversely wants to incur a penalty, or if he wants to use the penalty for some other mischievous purpose.
Different logic programming languages employ different control strategies. Some logic programming formalisms, including Datalog and Answer Set Programming, are entirely declarative, leaving the issue of control to the implementation, beyond the influence of the “programmer”. Prolog, on the other hand, uses clauses backwards as goal-reduction procedures, and tries them one at a time, sequentially, in the order in which they are written. By determining the order in which clauses are written, the programmer can impose a preference for one goal-reduction procedure over another. For example, the order in which the clauses are written in the earlier program P3 prefers eliminating a threat, over escaping from the threat, over submitting to the threat.
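How sequential clause ordering encodes a preference can be sketched with a toy backward-chaining interpreter: it tries the clauses for a goal in the order they are written and commits to the first that succeeds. The program below is an illustrative stand-in for the paper's P3, in a situation where eliminating the threat is not possible but escaping and submitting both are.

```python
# Prolog-style backward chaining: clause order expresses preference.
# Eliminating is preferred to escaping, escaping to submitting.

program = [
    ("deal_with_threat", ["eliminate_threat"]),   # preferred, but impossible here
    ("deal_with_threat", ["escape_from_threat"]),
    ("deal_with_threat", ["submit_to_threat"]),
    ("escape_from_threat", []),                   # possible in this situation
    ("submit_to_threat", []),                     # also possible
]

def solve(goal, actions):
    """Prove goal by trying its clauses in written order, collecting
    the leaf subgoals (the chosen actions) in actions."""
    for head, body in program:
        if head == goal:
            attempt = []
            if all(solve(b, attempt) for b in body):
                actions.extend(attempt if body else [goal])
                return True
    return False

chosen = []
solve("deal_with_threat", chosen)
print(chosen)   # → ['escape_from_threat']
```

The interpreter settles on escaping, the most preferred alternative that succeeds, without any explicit preference relation: the ordering of the clauses does the work at "compile time".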
The sequential ordering of alternatives, as in Prolog, is sufficient for many practical applications, and it has been developed in one form or another in many other logical frameworks. For example, Brewka et al. [10] employ a non-commutative form of disjunction to indicate “alternative, ranked options for problem solutions”. In the domain of deontic logic, Governatori and Rotolo [23] employ a similar, non-commutative modal connective a ⊗ b, to represent a as a primary obligation and b as a secondary obligation if a is violated. Sequential ordering is also a common conflict resolution strategy in many production system languages.
4.4 Production Systems
Production systems have been used widely for modelling human thinking in cognitive science, and were popular for implementing expert systems in the 1980s. In recent years, they have been used in many commercial systems for representing and executing business rules.
A production system is a set of condition-action rules (or production rules) of the form IF conditions THEN actions, which can be understood either imperatively as expressing that if the conditions hold then do the actions, or in deontic terms as expressing that if the conditions hold then the actions should be performed.
For example:

IF there is a threat THEN eliminate the threat.
IF there is a threat THEN escape from the threat.
IF there is a threat THEN submit to the threat.

If IF-THEN were logical if-then, then this would be logically equivalent to:

IF there is a threat THEN eliminate the threat AND escape from the threat AND submit to the threat.

which is not physically possible. Production systems use “conflict resolution” strategies to decide between such contrary actions.
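As an illustrative sketch of conflict resolution by rule ordering, the threat example can be encoded as a tiny production system interpreter in Python. The extra conditions weapon and exit, which make the three rules distinguishable, are our own additions for illustration, not part of the paper's example:

```python
# A minimal production system interpreter. Conflict resolution here is
# simply rule order: the first rule whose condition holds fires.
def run_production_system(rules, facts):
    """rules: list of (condition, action) pairs; returns the chosen action."""
    for condition, action in rules:
        if condition(facts):
            return action
    return None

# Illustrative encoding: prefer eliminating a threat, over escaping
# from it, over submitting to it, by writing the rules in that order.
rules = [
    (lambda f: "threat" in f and "weapon" in f, "eliminate the threat"),
    (lambda f: "threat" in f and "exit" in f,   "escape from the threat"),
    (lambda f: "threat" in f,                   "submit to the threat"),
]

print(run_production_system(rules, {"threat", "exit"}))   # escape from the threat
print(run_production_system(rules, {"threat"}))           # submit to the threat
```

Reordering the list of rules changes which action is chosen, just as reordering clauses changes the behaviour of a Prolog program.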
In this example, the production rules can be reformulated in logical terms by replacing AND by OR, and by treating the resulting sentence as a goal to be made true. Conflict resolution then becomes a separate strategy for choosing between alternatives [40]. The ALP reconstruction of production systems in [40] is similar to the ALP reconstruction of deontic logic proposed in this paper.
Production rules of the form IF conditions THEN actions are purely reactive. The actions are performed only after the conditions have been recognised. But in ALP, goals of the logical form antecedent → consequent can be made true in any way that conforms to the truth table for material implication. They can be made true reactively, by making consequent true when antecedent becomes true; preventatively, by making antecedent false, avoiding the need to make consequent true; or proactively, by making consequent true without waiting for antecedent to become true [38, 41, 42]. These alternative ways of making antecedent → consequent goals true are, once again, a separate matter of preferring one model over another.
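The three strategies correspond to the three classes of truth assignments that satisfy antecedent → consequent; a minimal sketch (the Python encoding is ours):

```python
from itertools import chain, combinations

# All truth assignments over two atoms that make "antecedent -> consequent"
# true, with a model represented as the set of atoms that are true.
atoms = ["antecedent", "consequent"]

def subsets(xs):
    return chain.from_iterable(combinations(xs, r) for r in range(len(xs) + 1))

models = [set(m) for m in subsets(atoms)
          if not ("antecedent" in m and "consequent" not in m)]

# The three models correspond to the three strategies in the text:
#   {}                            - preventive: antecedent is kept false
#   {"consequent"}                - proactive: consequent is made true in advance
#   {"antecedent", "consequent"}  - reactive: consequent is made true when
#                                   antecedent becomes true
print(models)
```

Which of these models is generated is then a matter for the preference relation, not for the logic itself.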
In many cases, it is possible to identify the best way of making goals true, at “compile time”, before they need to be considered in practice; and, for this purpose, it is often sufficient to order the rules sequentially in order of preference. But in other cases, it is better to decide at “run time”, taking all the facts about the current situation into account. Separating goals from preferences, as in ALP, leaves these options open, whereas combining goals and preferences inextricably into the syntax and semantics of the logic, as in modal deontic logic (and Prolog), forces decisions to be made at compile time and closes the door to other, more flexible possibilities.
4.5 Normative ALP Frameworks
In this paper, we are neutral about the manner in which preferences are specified, and assume only that the specification of preferences induces a strict partial ordering < between models, where M < M’ means that M’ is better than M. The ordering can be defined directly on the models themselves, or it can be induced more practically by a cost function, by an ordering of clauses or rules, or by a priority ordering of candidate assumptions A. In particular, it can take into account that “other things being equal” it is normally better to make the consequents of conditional goals true earlier rather than later.
Given a normative ALP framework 〈P, G, A, <〉, the task is to satisfy 〈P, G, A, <〉 by generating some Δ ⊆ A such that:

G is true in M = min(P ∪ Δ) and there does not exist any Δ’ ⊆ A
such that G is true in M’ = min(P ∪ Δ’) and M < M’.

If there are several such best Δ that satisfy 〈P, G, A, <〉, then an agent can choose freely between them. But, if because of limited computational resources the agent is unable to generate a best Δ, then the definition can nonetheless serve as a normative ideal, against which other more practical solutions can be compared.^{4}
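For small, ground propositional frameworks, this definition can be prototyped directly by brute-force search over the subsets of A; a sketch, with goals encoded as Python predicates over models (the encoding is ours):

```python
from itertools import chain, combinations

def min_model(program, delta):
    # min(P ∪ Δ): close the chosen assumptions Δ under the ground rules of P
    # by exhaustive forward chaining (cf. footnote 3).
    model = set(delta)
    changed = True
    while changed:
        changed = False
        for head, body in program:          # a rule: head ← body
            if set(body) <= model and head not in model:
                model.add(head)
                changed = True
    return model

def best_solutions(program, goals, assumptions, better):
    # Generate every best Δ ⊆ A: the goals are all true in min(P ∪ Δ), and no
    # alternative Δ' yields a strictly better model. better(m1, m2) encodes
    # the strict partial order m1 < m2, i.e. "m2 is better than m1".
    deltas = chain.from_iterable(
        combinations(assumptions, r) for r in range(len(assumptions) + 1))
    candidates = [(set(d), min_model(program, d)) for d in deltas]
    satisfying = [(d, m) for d, m in candidates if all(g(m) for g in goals)]
    return [(d, m) for d, m in satisfying
            if not any(better(m, m2) for _, m2 in satisfying)]

# Example: an obligation p with sanction s, represented as the goal p ∨ s,
# where sanction-free models are preferred.
P = []
G = [lambda m: "p" in m or "s" in m]
A = ["p", "s"]
better = lambda m, m2: "s" in m and "s" not in m2
print(best_solutions(P, G, A, better))      # [({'p'}, {'p'})]
```

The exhaustive search is exponential in |A|, of course; it is meant only to make the definition concrete, not to suggest an implementation strategy.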
5 ALP as a Deontic Logic
The ALP distinction between logic programs, representing beliefs, and integrity constraints, representing goals, can be viewed as a weak modal logic, in which beliefs p are expressed without a modal operator (and are not distinguished from what is actually the case), and goals p are implicitly prefixed with a modal operator, O p, expressing that p must be true. Viewed in this way, there are no nested modalities, and there are no mixed sentences, such as p → O q.
Although these syntactic restrictions may seem very limiting, they are shared with several other approaches to deontic logic, such as that of Horty [29, 31]. Moreover, they are also shared with the deontic modal logic of SBVR (Section 6), which has been developed specifically to deal with practical applications.
Arguably, the syntactic restrictions of ALP have an advantage over the more liberal syntax of modal deontic logics, because there is no need to choose between different ways of representing conditional obligations, which in effect are all represented implicitly in the same form O (p → q).
Although the main focus in ALP is on satisfying goals as well as possible, we can define a notion of logical consequence, following the lead of Horty [29, 31], building on van Fraassen [68], referred to as vFH below.
5.1 The van Fraassen-Horty (vFH) Non-Modal Deontic Logic
O p is a logical consequence of {O q | q ∈ G} if and only if
there exists some G’ ⊆ G such that
G’ is consistent and p is a non-modal logical consequence of G’,
i.e. p is true in all classical models of G’.

This is a “credulous semantics” (because of the qualification some G’), which is more like the notion of satisfaction than like the usual notion of logical consequence. However, Horty [29, 31] significantly extends this basic semantics, reformulating it in default logic with priorities between default rules, and considering both credulous and sceptical variants. Our notion of logical consequence in ALP, presented in the next section, 5.2, is an adaptation of Horty’s “sceptical semantics”.
Consider, for example, the following pair of norms:

Don’t eat with your fingers.
If you are served cold asparagus, eat it with your fingers.

i.e. true ⇒ ¬ fingers, asparagus ⇒ fingers.

Here the second rule has priority over the first. Horty shows that, in both its credulous and sceptical versions, the default theory implies both O ¬ fingers and O (fingers / asparagus). Both of these logical consequences also hold when the same example is formulated in dyadic deontic logics. But the deontic logic formulations also imply the intuitively unintended consequence O ¬ asparagus, which is not implied by the default theory with the vFH semantics. The default logic is defeasible, because, given the additional, “hard” information asparagus, the obligation O ¬ fingers no longer holds, and the contrary obligation O fingers holds instead. We give an ALP representation of the example in Section 5.3.
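The basic (unprioritised) vFH definition can be sketched propositionally. The sketch below illustrates its tolerance of normative conflicts, without any deontic explosion; it does not capture Horty's prioritised extension, which the asparagus example requires. The encoding is ours:

```python
from itertools import combinations, product

def models_of(formulas, atoms):
    # All truth assignments over `atoms` that satisfy every formula.
    return [m for vals in product([False, True], repeat=len(atoms))
            for m in [dict(zip(atoms, vals))]
            if all(f(m) for f in formulas)]

def vfh_obliged(p, goals, atoms):
    # O p follows (credulously) iff some consistent subset G' of the goals
    # classically entails p, i.e. p is true in all models of G'.
    for r in range(len(goals) + 1):
        for gp in combinations(goals, r):
            ms = models_of(list(gp), atoms)
            if ms and all(p(m) for m in ms):
                return True
    return False

# A normative conflict: the goals p and ¬p. Both O p and O ¬p follow,
# but an unrelated O q does not -- there is no deontic explosion.
atoms = ["p", "q"]
goals = [lambda m: m["p"], lambda m: not m["p"]]
print(vfh_obliged(lambda m: m["p"], goals, atoms))       # True
print(vfh_obliged(lambda m: not m["p"], goals, atoms))   # True
print(vfh_obliged(lambda m: m["q"], goals, atoms))       # False
```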
5.2 Normative ALP Frameworks and Implied Obligations
O p is a logical consequence of 〈P, G, A, <〉 if and only if, for all Δ ⊆ A,
if G is true in M = min(P ∪ Δ) and there does not exist any Δ’ ⊆ A
such that G is true in M’ = min(P ∪ Δ’) and M < M’,
then p is true in M.

Viewed in vFH terms, this is a sceptical semantics, because, for O p to be a logical consequence of 〈P, G, A, <〉, p must be true in all best models of G. In contrast, the semantics of satisfying obligations is credulous, because, to satisfy 〈P, G, A, <〉, it suffices to generate some best model of G.

The modal operators F for prohibition and P for permission can be defined in terms of obligation O:

〈P, G, A, <〉 implies F p if and only if G implies O ¬ p.
〈P, G, A, <〉 implies P p if and only if G does not imply O ¬ p.
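For small ground frameworks, these definitions of O, F and P can be prototyped by enumerating the best models with the same brute-force search over subsets of A; a self-contained sketch (the encoding is ours):

```python
from itertools import chain, combinations

def best_models(program, goals, assumptions, better):
    # All best models M = min(P ∪ Δ) in which the goals are true.
    def close(delta):                       # min(P ∪ Δ) by forward chaining
        m, changed = set(delta), True
        while changed:
            changed = False
            for head, body in program:      # a rule: head ← body
                if set(body) <= m and head not in m:
                    m.add(head)
                    changed = True
        return m
    deltas = chain.from_iterable(
        combinations(assumptions, r) for r in range(len(assumptions) + 1))
    sat = [close(d) for d in deltas]
    sat = [m for m in sat if all(g(m) for g in goals)]
    return [m for m in sat if not any(better(m, m2) for m2 in sat)]

def obliged(p, frame):       # O p: p is true in all best models
    return all(p(m) for m in best_models(*frame))

def forbidden(p, frame):     # F p: defined as O ¬p
    return obliged(lambda m: not p(m), frame)

def permitted(p, frame):     # P p: weak permission, i.e. not F p
    return not forbidden(p, frame)

# The goal p ∨ s with sanction s, preferring sanction-free models:
frame = ([], [lambda m: "p" in m or "s" in m], ["p", "s"],
         lambda m, m2: "s" in m and "s" not in m2)
print(obliged(lambda m: "p" in m, frame))      # True: p holds in every best model
print(forbidden(lambda m: "p" in m, frame))    # False
print(permitted(lambda m: "p" in m, frame))    # True
```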
The main difference between the vFH and ALP approaches is that in vFH obligations are soft constraints, whereas in ALP they are hard constraints. In addition, vFH represents conditional obligations in the dyadic form O p/q, but ALP represents them, in effect, in the form O (q → p) with ordinary material implication.
Prakken [56] proposes an alternative approach, which is also based on default logic, but is combined with SDL. He argues that, by comparison, the vFH approach has several limitations. Perhaps the most serious is that all defaults in vFH are deontic defaults, but that “factual” defaults are also necessary. This limitation does not apply to the ALP approach, because ALP combines goals/constraints, to represent deontic defaults, with logic programs, to represent factual defaults.
Prakken also points out that the vFH approach can represent only weak permissions P p, which hold implicitly when O ¬p does not hold. SDL and many other deontic logics can also represent strong permissions. The difference is that, if P p is a weak permission, and ¬p later becomes obligatory, then there is no conflict, because the weak permission P p simply no longer holds. But if P p is a strong permission, then the presence or introduction of the obligation O ¬p introduces a normative conflict.
For example:

vehicle → liable to fine
vehicle ∧ authorized → ¬ liable to fine

This captures the normative conflict between an obligation and a permission, which is the defining characteristic of strong permission, but it does not capture the more common use of strong permission to override an obligation. For this, we need to represent the obligation and permission as a rule and an exception:

¬ exception ∧ vehicle → liable to fine
vehicle ∧ authorized → ¬ liable to fine
exception ← authorized

In the remainder of this section we briefly show how normative ALP frameworks deal with the other problems of deontic logic presented earlier in the paper.
5.3 Normative Conflicts in ALP Frameworks
The asparagus example can be represented by the normative ALP framework:

P = {exception ← asparagus},
G = {¬ exception ∧ fingers → sanction, asparagus → fingers},
A = {fingers, asparagus, sanction},
M < M’ if sanction ∈ M and sanction ∉ M’.

In this representation, the soft obligation not to eat with fingers is reformulated as a hard obligation by adding both a sanction and an extra condition ¬ exception. But for simplicity the obligation to eat asparagus with fingers is treated as a hard obligation, without the addition of any sanctions or exceptions. The only best models that satisfy G are M1 = {} and M2 = {asparagus, exception, fingers}. Consequently, as in the vFH representation, 〈P, G, A, <〉 implies both O (¬ exception → ¬ fingers) and O (asparagus → fingers), but not O (¬ asparagus).
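This framework is small enough to check exhaustively; a Python sketch of the computation (the encoding is ours):

```python
from itertools import chain, combinations

# Direct encoding of the framework: P = {exception ← asparagus},
# A = {fingers, asparagus, sanction}, with sanction-free models preferred.
assumptions = ["fingers", "asparagus", "sanction"]

def model(delta):               # min(P ∪ Δ)
    m = set(delta)
    if "asparagus" in m:
        m.add("exception")      # the single program rule: exception ← asparagus
    return m

def goals_hold(m):              # G = {¬exception ∧ fingers → sanction, asparagus → fingers}
    g1 = "exception" in m or "fingers" not in m or "sanction" in m
    g2 = "asparagus" not in m or "fingers" in m
    return g1 and g2

deltas = chain.from_iterable(
    combinations(assumptions, r) for r in range(len(assumptions) + 1))
sat = [model(d) for d in deltas if goals_hold(model(d))]
best = [m for m in sat
        if not any("sanction" in m and "sanction" not in m2 for m2 in sat)]

print(sorted(sorted(m) for m in best))
# [[], ['asparagus', 'exception', 'fingers']] -- i.e. M1 and M2 from the text
print(all("exception" in m or "fingers" not in m for m in best))   # O(¬exception → ¬fingers): True
print(all("asparagus" not in m or "fingers" in m for m in best))   # O(asparagus → fingers): True
print(all("asparagus" not in m for m in best))                     # O(¬asparagus) fails: False
```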
Suppose, however, that eating cold asparagus with fingers is not an obligation, but simply a strong permission overriding the obligation not to eat with fingers. Then it suffices to replace the goal asparagus → fingers by the goal asparagus → ¬sanction. There are then three best models that satisfy the new goals, the two models, M1 and M2, above plus the additional model M3 = {asparagus, exception}, in which the strong permission is not exercised.
5.4 Sartre’s Dilemma
Sartre’s dilemma can be represented by the framework:

P = {}, G = {join ∨ sanction1, stay ∨ sanction2, ¬ (join ∧ stay)},
A = {join, stay, sanction1, sanction2},
M < M’ if M contains more of the sanctions, sanction1 and sanction2, than M’.

There are three models that satisfy the goals: M1 = {join, sanction2}, M2 = {stay, sanction1} and M3 = {sanction1, sanction2}. All of these models involve sanctions, so all are less than ideal. But none the less, there are two equally best models, M1 and M2, and join ∨ stay is true in both of them. So O (join ∨ stay) is a logical consequence.
5.5 Ross’s Paradox
Suppose first that mail is a hard constraint:

P = {}, G = {mail, ¬ (mail ∧ burn)},
A = {mail, burn}, and < = {}.

Then M = {mail} is the only minimal model that satisfies G. But mail ∨ burn is true in M. So 〈P, G, A, <〉 implies both O mail and O (mail ∨ burn) as logical consequences. But it is not possible to make mail true by making burn true, because there is no model that satisfies 〈P, G, A, <〉 and also contains the action burn.

Suppose instead that mail is a soft constraint, as in the more realistic framework:

P’ = {}, G’ = {mail ∨ sanction, ¬ (mail ∧ burn)},
A’ = {mail, burn, sanction}, and
M <’ M’ if sanction ∈ M and sanction ∉ M’.

There is only one best model that satisfies the modified framework, namely the same model M = {mail} as before. So, as in the simpler framework 〈P, G, A, <〉, the more realistic framework 〈P’, G’, A’, <’〉 implies both O mail and O (mail ∨ burn). But it is not possible to satisfy the obligation mail by performing the action burn, because there is no best model that contains the action burn.
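The claims about the more realistic framework can likewise be checked exhaustively; a sketch (the encoding is ours):

```python
from itertools import chain, combinations

# The "more realistic" framework: P' = {}, G' = {mail ∨ sanction, ¬(mail ∧ burn)},
# A' = {mail, burn, sanction}, with sanction-free models preferred.
assumptions = ["mail", "burn", "sanction"]

def goals_hold(m):
    return ("mail" in m or "sanction" in m) and not ("mail" in m and "burn" in m)

deltas = chain.from_iterable(
    combinations(assumptions, r) for r in range(len(assumptions) + 1))
sat = [set(d) for d in deltas if goals_hold(set(d))]      # P' is empty, so min(P' ∪ Δ) = Δ
best = [m for m in sat
        if not any("sanction" in m and "sanction" not in m2 for m2 in sat)]

print(best == [{"mail"}])                             # True: the only best model
print(all("mail" in m or "burn" in m for m in best))  # True: O(mail ∨ burn) is implied
print(any("burn" in m for m in best))                 # False: no best model burns the letter
```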
So, no matter whether mail is regarded as a hard or soft constraint, O (mail) implies O (mail ∨ burn), but in neither case does generating a model that makes burn true satisfy the obligation O (mail). Viewed in this way, Ross’s Paradox is not a paradox at all, but rather, as Fox [20] also argues, a confusion between satisfying an obligation and implying the obligation as a logical consequence of other obligations. Arguably, the “paradox” also suggests that the focus in deontic logic on inferring logical consequences is misdirected, and that it should be directed towards satisfying obligations instead.
5.6 The Good Samaritan Paradox
Suppose, however, that we now observe that Smith has just been robbed, and we treat the observation simply as a fact to be accepted, adding rob to P”, obtaining the updated framework 〈P” ∪ {rob}, G”, A”, < ”〉. M3 is now the only (and best) minimal model that satisfies the updated framework, which implies O (rob → help), O (rob) and O (help).
The implication O (rob) is undoubtedly unintuitive. None the less, it faithfully reflects the definition of logical consequence, which restricts the set of possible models satisfying G in a framework 〈P, G, A, <〉 to models that also satisfy P by construction. But if, as in this case, some sentence p ∈ P is already true, then it is not necessary to satisfy G by generating a model that makes p true, by adding p to Δ. So although p is true in all models that satisfy G and therefore O (p) is a logical consequence, it is not the case that it is necessary or obligatory to make p true.
The moral of the story is that, as in Ross’s Paradox, goal satisfaction is more appropriate than logical consequence for reasoning about obligations. Moreover, the moral does not depend upon possible sanctions or exceptions. So it applies as much to deontic logic as it does to ALP.
5.7 Chisholm’s Paradox
Now suppose we observe that Jones doesn’t go. We can represent this simply by removing go from the set of candidate assumptions, obtaining the updated framework 〈P, G, A – {go}, <〉. The only (and best) minimal model that satisfies the updated framework is the less than ideal model M1 = {}, which implies O (¬ go) and O (¬ tell).
5.8 Forrester’s Paradox
However, if we observe kill and update the framework to 〈P ∪ {kill}, G, A, <〉, then M2 = {kill gently, kill} is now the only (and best) model that makes G true, and O (kill gently) is a consequence. If instead we observe kill violently and update the framework to 〈P ∪ {kill violently}, G, A, <〉, then the goals are unsatisfiable, because there is no longer any model that makes the goals true.
If instead we observe kill violently, then there is a best model, M3 = {kill violently, kill, penalty, severe penalty}, and O (kill violently) is a logical consequence. But this doesn’t mean it is necessary to satisfy the goals by making kill violently true, because kill violently is already and unavoidably true.
6 Comparison with SBVR (Semantics of Business Vocabulary and Rules)
“most statements of business rules include only one modal operator, and this operator is the main operator of the whole rule statement. For these cases, we simply tag the constraint as being of the modality corresponding to its main operator, without committing to any particular modal logic” [61, p. 108].

This simplified, tagged form of modal logic in SBVR is similar to the “tagging” of sentences in ALP as either goals or beliefs. Sentences tagged as obligations in SBVR correspond to goals in ALP, and sentences tagged as necessities in SBVR correspond in ALP to beliefs (representing definitions). The correspondence is not exact, because “goals” in ALP include some integrity constraints that would be tagged in SBVR by an alethic modal operator representing necessity.
“For each Person, it is obligatory that that Person is a husband of at most one Person” can be rewritten as “It is obligatory that each Person is a husband of at most one Person” [61, p. 109].
Similarly, “For each Invoice, if that Invoice was issued on Date1 then it is obligatory that that Invoice is paid on Date2 where Date2 <= Date1 + 30 days” can be rewritten as “It is obligatory that each Invoice that was issued on Date1 is paid on Date2 where Date2 <= Date1 + 30 days”^{5} [61, p. 116].

This similar use of tagging in both ALP and SBVR supports the thesis that goals in non-modal ALP are adequate for representing the goal component of obligations in many practical applications.
SBVR also associates enforcement levels with business rules: “Depending on enforcement level, violating the rule could well invite response, which might be anything from immediate prevention and/or severe sanction, to mild tutelage” [61, page 171]. An enforcement level is “a position in a graded or ordered scale of values that specifies the severity of action imposed in order to put or keep an operative business rule in force” [61, page 176].

To the best of our knowledge, such responses to violations are not represented in the SBVR formalism. This avoids the problems associated with contrary-to-duty obligations, which arise in ordinary deontic logics, and which are addressed with ALP in this paper.
7 Abductive Expectations in SCIFF
H(e, t) expresses that an event e happens at a time t. E(e, t) is an abducible predicate representing an obligation that e happens at t. EN(e, t) is an abducible predicate representing a prohibition that e happens at t. Abductive solutions are restricted to those whose obligations actually happen and whose prohibitions do not happen.
SCIFF uses a theorem-proving view of goal satisfaction, which is adequate when the sequence of interactions between agents is finite, and logic programs are written in if-and-only-if form [13]. This contrasts with the model-generation view in this paper. For our intended applications, in which the sequence of interactions is conceptually never-ending, the model-generation view determines the truth value of any goal expressed in FOL, but the theorem-proving view is incomplete.
The situation is analogous to that of arithmetic, where the standard model of arithmetic is the minimal model of the definite clause definitions of addition and multiplication [14, 39]. The model-generation view of goal satisfaction determines the truth value of any sentence of arithmetic in this minimal model, but the theorem-proving view is incomplete.
8 Conclusions
This paper concerns the more general controversy about the adequacy of classical first-order logic compared with other formal logics, and compared with modal logics in particular. We have argued that, in the case of representing and reasoning about deontic attitudes, the use of FOL for representing goals in ALP is a viable alternative to the use of modal logics. We have seen that the ALP approach is related to the vFH non-modal representation of obligations in default logic, and that, like the default logic approach, the ALP approach also tolerates normative conflicts. We have argued that, although the syntax of the deontic operators is restricted in comparison with that of modal deontic logics, it is nonetheless adequate both for many practical applications and for many of the problematic examples that have been studied in the philosophical literature.
The ALP approach represents obligations as hard goals or constraints, by representing sanctions and exceptions as additional, explicit alternatives. This is similar to the way in which abduction in Theorist turns defeasible rules into strict rules, by turning assumptions about normality into explicit defeasible conditions. We have argued that the ALP approach has the advantage that similar techniques apply to a wide range of applications, not only to satisfying obligations, but also to default reasoning, explaining observations and combinatorial optimisation.
In general, the ALP approach focuses on solving or satisfying goals, in contrast with modal deontic logics, which focus on inferring logical consequences. However, we have defined a notion of logical consequence for obligations in ALP, by adapting the sceptical version of Horty’s definition of logical consequence; and we have applied the definition to some of the examples that have proved problematic for modal deontic logic. We have argued that the ALP approach provides a satisfactory solution of the problems, and that, in cases where the ALP solution may not seem entirely intuitive, it is rather that logical consequence is a less appropriate consideration than goal satisfaction. Moreover, the goal satisfaction semantics makes it possible to distinguish between the normative ideal of satisfying goals in the best way possible, and the more practical objective of satisfying the goals in the best way possible given the resources that are available at the time.
The ALP approach of this paper is similar to the ALP approach of SCIFF. The main difference is between the theorem-proving semantics of SCIFF and the model-generation semantics that we use in this paper.
This paper also concerns the controversy about whether a single logic, such as ALP, might be adequate for formalising human reasoning, or whether many logics are needed for different purposes. One of the strongest arguments for ALP is that it subsumes production systems [40], which have been widely promoted as a generalpurpose theory of human thinking [64]. Other arguments include its use for abductive reasoning, default reasoning [50] and probabilistic reasoning, with the power of Bayesian networks [52].
The application of ALP to deontic reasoning in this paper is a further test of its generality. Conversely, the generality of ALP is an argument for its application to deontic reasoning. Of course, both of these claims – for ALP as a generalpurpose logic, and for ALP as a logic for deontic reasoning – need further testing. For this purpose, extending the ALP approach from single agent to multiagent systems, where different agents have different goals and beliefs, is perhaps the most important challenge for the future.
Footnotes
1. This translation is similar to the use of an abnormality predicate in circumscription [45], expressing the default rule in the form ∀X (bird(X) ∧ ¬abnormal-bird(X) → can-fly(X)).
2. This is similar to the minimisation of abnormality predicates in circumscription. However, circumscription is a sceptical approach, in which the task is to derive sentences that are true in all minimal models.
3. In general, min(P) is the set of all facts derived by exhaustively applying modus ponens to the ground program obtained from P by replacing all variables in P by variable-free terms.
4. As in preference-based deontic logics, we may want to exclude infinite sequences of increasingly better Δ. Alternatively, we may accept that there is no absolutely best Δ, and simply generate the best Δ possible in the given circumstances.
5. To be more precise, the quantification of Date1 and Date2 should be specified, i.e. it is obligatory that for each Invoice and for each Date1, if the Invoice is issued on Date1, then there exists some Date2 on which the Invoice is paid, where Date2 <= Date1 + 30 days.
Acknowledgements
Many thanks to Christoph Benzmueller, Harold Boley, Tom Blackson, Hendrik Decker, Guido Governatori, Henry Prakken, Fariba Sadri, Giovanni Sartor, Markus Schacher, Marek Sergot, Leon Van Der Torre, as well as the anonymous referees, for helpful comments on earlier drafts of the paper. Kowalski also thanks the Japanese Society for the Advancement of Science for its support in the initial phase of this work.
References
1. Alberti, M., Gavanelli, M., Lamma, E., Mello, P., Torroni, P., & Sartor, G. (2006). Mapping deontic operators to abductive expectations. Computational & Mathematical Organization Theory, 12(2–3), 205–225.
2. Anderson, A.R. (1958). A reduction of deontic logic to alethic modal logic. Mind, 67(265), 100–103.
3. Anderson, A.R. (1966). The formal analysis of normative systems. In Rescher, N. (Ed.), The logic of decision and action. Pittsburgh: University of Pittsburgh Press.
4. Asher, N., & Bonevac, D. (2005). Free choice permission is strong permission. Synthese, 145(3), 303–323.
5. Baron, J. (2008). Thinking and deciding, 3rd edn. Cambridge University Press.
6. Barringer, H., Fisher, M., Gabbay, D., Owens, R., & Reynolds, M. (1996). The imperative future: principles of executable temporal logic. Wiley.
7. Bartha, P. (2002). Review of Agency and deontic logic. In Notre Dame philosophical reviews.
8. Bohnert, H.G. (1945). The semiotic status of commands. Philosophy of Science, 12, 302–315.
9. Bondarenko, A., Dung, P.M., Kowalski, R.A., & Toni, F. (1997). An abstract, argumentation-theoretic approach to default reasoning. Artificial Intelligence, 93(1), 63–101.
10. Brewka, G., Benferhat, S., & Le Berre, D. (2004). Qualitative choice logic. Artificial Intelligence, 157(1), 203–237.
11. Carmo, J., & Jones, A.J. (2002). Deontic logic and contrary-to-duties. In Handbook of philosophical logic (pp. 265–343). Springer.
12. Chisholm, R.M. (1963). Contrary-to-duty imperatives and deontic logic. Analysis, 24(2), 33–36.
13. Clark, K.L. (1978). Negation as failure. In Logic and data bases (pp. 293–322). Springer.
14. Davis, M. (1980). The mathematics of non-monotonic reasoning. Artificial Intelligence, 13(1), 73–80.
15. Dechter, R. (2003). Constraint processing. Morgan Kaufmann.
16. Denecker, M. (2000). Extending classical logic with inductive definitions. In Computational logic—CL 2000 (pp. 703–717). Springer.
17. Van Emden, M.H., & Kowalski, R.A. (1976). The semantics of predicate logic as a programming language. Journal of the ACM, 23(4), 733–742.
18. Eshghi, K. (1988). Abductive planning with event calculus. In ICLP/SLP (pp. 562–579).
19. Forrester, J.W. (1984). Gentle murder, or the adverbial Samaritan. Journal of Philosophy, 81(4), 193–197.
20. Fox, C. (2015). The semantics of imperatives. The Handbook of Contemporary Semantic Theory, 3, 314.
21. Fung, T.H., & Kowalski, R. (1997). The IFF proof procedure for abductive logic programming. The Journal of Logic Programming, 33(2), 151–165.
22. Goble, L. (1991). Murder most gentle: the paradox deepens. Philosophical Studies, 64(2), 217–227.
23. Governatori, G., & Rotolo, A. (2006). Logic of violations: a Gentzen system for reasoning with contrary-to-duty obligations. Australasian Journal of Logic, 4, 193–215.
24. Halpern, J.Y., & Vardi, M.Y. (1991). Model checking vs. theorem proving: a manifesto. Artificial Intelligence and Mathematical Theory of Computation, 212, 151–176.
25. Hammond, J.S., Keeney, R.L., & Raiffa, H. (1999). Smart choices: a practical guide to making better life choices. Boston: Harvard Business School Press.
26. Hansson, B. (1969). An analysis of some deontic logics. Nous, 373–398.
27. Hilpinen, R., & McNamara, P. (2013). Deontic logic: a historical survey and introduction. In D. Gabbay, J. Horty, X. Parent, R. van der Meyden, L. van der Torre (Eds.), Handbook of deontic logic and normative systems (pp. 3–136). College Publications.
28. Hobbs, J.R., Stickel, M., Martin, P., & Edwards, D. (1988). Interpretation as abduction. In Proceedings of the 26th annual meeting of the Association for Computational Linguistics (pp. 95–103).
29. Horty, J.F. (1993). Deontic logic as founded on nonmonotonic logic. Annals of Mathematics and Artificial Intelligence, 9(1–2), 69–91.
30. Horty, J.F. (2001). Agency and deontic logic. Oxford: Oxford University Press.
31. Horty, J.F. (2012). Reasons as defaults. Oxford University Press.
32. Horty, J.F. (2014). Deontic modals: why abandon the classical semantics? Pacific Philosophical Quarterly, 95(4), 424–460.
33. Jørgensen, J. (1937). Imperatives and logic. Erkenntnis, 7(1), 288–296.
34. Kakas, A.C., Kowalski, R., & Toni, F. (1998). The role of logic programming in abduction. In Handbook of logic in artificial intelligence and logic programming 5 (pp. 235–324). Oxford University Press.
35. Kanger, S. (1971). New foundations for ethical theory. In Hilpinen, R. (Ed.), Deontic logic: introductory and systematic readings (pp. 36–58). Dordrecht: D. Reidel.
36. Keeney, R.L. (1992). Value-focused thinking: a path to creative decision making. Harvard University Press.
37. Kowalski, R. (1979). Algorithm = logic + control. Communications of the ACM, 22(7), 424–436.
38. Kowalski, R. (2011). Computational logic and human thinking: how to be artificially intelligent. Cambridge University Press.
39. Kowalski, R. (2014). Logic programming. In Siekmann, J. (Ed.), Computational logic, volume 9 of The history of logic series, edited by Dov Gabbay & John Woods (pp. 523–569). Elsevier.
40. Kowalski, R., & Sadri, F. (2009). Integrating logic programming and production systems in abductive logic programming agents. In Proceedings of the third international conference on web reasoning and rule systems. Chantilly.
41. Kowalski, R., & Sadri, F. (2014). A logical characterization of a reactive system language. In Proceedings of RuleML 2014. Springer.
42. Kowalski, R., & Sadri, F. (2016). Programming in logic without logic programming. Theory and Practice of Logic Programming, 16(3), 269–295.
43. Makinson, D., & Van Der Torre, L. (2000). Input/output logics. Journal of Philosophical Logic, 29(4), 383–408.
44. Makinson, D. (1999). On a fundamental problem of deontic logic. In Norms, logics and information systems: new studies on deontic logic and computer science (pp. 29–54).
45. McCarthy, J. (1986). Applications of circumscription to formalising common sense knowledge. Artificial Intelligence, 28(1), 89–116.
46. McNamara, P. (2006). Deontic logic. Handbook of the History of Logic, 7, 197–289.
47. Meyer, J.J.C. (1988). A different approach to deontic logic: deontic logic viewed as a variant of dynamic logic. Notre Dame Journal of Formal Logic, 29(1), 109–136.
48. Nute, D. (Ed.) (1997). Defeasible deontic logic: essays in nonmonotonic normative reasoning. Boston: Kluwer Academic Publishers.
49. Peirce, C.S. (1931). Collected papers. Hartshorn, C., & Weiss, P. (Eds.). Cambridge: Harvard University Press.
50. Poole, D., Goebel, R., & Aleliunas, R. (1987). Theorist: a logical reasoning system for defaults and diagnosis (pp. 331–352). New York: Springer.
51. Poole, D. (1988). A logical framework for default reasoning. Artificial Intelligence, 36(1), 27–47.
52. Poole, D. (1997). The independent choice logic for modelling multiple agents under uncertainty. Artificial Intelligence, 94(1), 7–56.
53. Pople, H.E. (1973). On the mechanization of abductive logic. In IJCAI (Vol. 73, pp. 147–152).
54. Prior, A.N. (1958). Escapism: the logical basis of ethics. In Essays in moral philosophy.
55. Prakken, H., & Sergot, M. (1996). Contrary-to-duty obligations. Studia Logica, 57(1), 91–115.
56. Prakken, H. (1996). Two approaches to the formalisation of defeasible deontic reasoning. Studia Logica, 57(1), 73–90.
57. Reiter, R. (1980). A logic for default reasoning. Artificial Intelligence, 13(1), 81–132.
58. Ross, A. (1941). Imperatives and logic. Theoria, 7, 53–71.
59. Satoh, K. (1990). Formalizing soft constraints by interpretation ordering. In Proceedings of the ninth European conference on artificial intelligence (pp. 585–590). Stockholm.
60. Satoh, K., & Aiba, A. (1993). Computing soft constraints by hierarchical constraint logic programming. Transactions of Information Processing Society of Japan, 34(7), 1555–1569.
61. SBVR (Semantics of Business Vocabulary and Business Rules), Version 1.2. (2013). OMG Document Number: formal/2013-11-04. Standard document http://www.omg.org/spec/SBVR/1.2/PDF/.
62. Sergot, M.J. (1982). Prospects for representing the law as logic programs. In Logic Programming (pp. 33–42).
63. Simon, H.A. (1972). Theories of bounded rationality. Decision and Organization, 1(1), 161–176.
64. Thagard, P. (1996). Mind: introduction to cognitive science. Cambridge: MIT Press.
65. Thomson, J.J. (1985). Double effect, triple effect and the trolley problem: squaring the circle in looping cases. Yale Law Journal, 94, 6.
66. Van Benthem, J., Grossi, D., & Liu, F. (2014). Priority structures in deontic logic. Theoria, 80(2), 116–152.
67. von Fintel, K., & Iatridou, S. (2005). What to do if you want to go to Harlem: anankastic conditionals and related matters. Manuscript, MIT.
68. Van Fraassen, B. (1973). Values and the heart’s command. The Journal of Philosophy, 70, 5–19.
69. Von Wright, G.H. (1951). Deontic logic. Mind, 60(237), 1–15.
70. Wason, P.C. (1968). Reasoning about a rule. The Quarterly Journal of Experimental Psychology, 20(3), 273–281.
Copyright information
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.