1 Introduction

Negation is one of the most central linguistic phenomena. Therefore, negation modeling is essential to various common tasks in natural language processing, such as relation extraction (Sanchez-Graillet and Poesio 2007), recognition of textual entailment (Harabagiu et al. 2006) and particularly sentiment analysis (Wiegand et al. 2010). In the latter task, negation typically inverts the polarity of polar expressions. For example, in (1), the negated positive polar expression like conveys negative polarity.

  1. (1)

    I do [not [like]\(^+\)]\(^-\) this new Nokia model.

While most research on negation has been carried out on English language data, little research has looked into the behaviour of negation in German. This is surprising since German negation is even harder to handle than English negation. For example, since German displays a more flexible word order than English, the German negation word nicht (not) may appear either to the left (2) or to the right (3) of a polar expression it modifies. In English, however, there is a strong tendency for a negation word to precede the polar expression it negates (1).

  1. (2)

    Der Kuchen ist [nicht [köstlich]\(^+\)]\(^-\).

    (The cake is not delicious.)

  2. (3)

    Ich [[mag]\(^+\) den Kuchen nicht]\(^-\).

    (I do not like the cake.)

To make the task even more difficult, negation is expressed not only by function words, such as the particle nicht (not), but also by content words, such as verbs (4), nouns (5) or adjectives (6). (2)–(6) also show that these different negation word types have different scopes.

  1. (4)

    [[Dieses Bemühen]\(^+\) scheiterte\(^{verb}\)]\(^-\).

    (This effort failed.)

  2. (5)

    [Das Scheitern\(^{noun}\) [dieser Bemühungen]\(^+\)]\(^-\) war vorhersehbar.

    (The failure of these efforts was foreseeable.)

  3. (6)

    Angesichts [dieser gescheiterten\(^{adj}\) [Bemühungen]\(^+\)]\(^-\) ist nun ein Umdenken erforderlich.

    (These failed efforts now require a change of thinking.)

In this paper, we follow a rule-based approach to negation modeling for fine-grained sentiment analysis that largely draws information from lexicons. We focus on the task of identifying the scope of negation words with regard to polarity classification. In other words, given a mention of a negation word and a polar expression, we want to automatically determine whether the negation word negates the polar expression.

We do not claim to have full knowledge of all German negation words. (Given that content words can perform implicit negation, we assume the overall vocabulary of negation words to be fairly large.) Instead, we propose a typology of negation words and assign a characteristic scope to each type. We then provide a formalism that is able to compute the respective scope of every possible negation word, once the negation word has been assigned to its type.

Our approach heavily relies on syntactic knowledge, particularly information contained in a dependency parse. We demonstrate that the analyses that state-of-the-art parsers produce for German are insufficient for our task and require further normalization.

The contributions of this paper are:

  • We present the first comprehensive study on German negation modeling for fine-grained sentiment analysis.

  • Instead of having one generic scope for all types of negation words, we formulate different types of scopes for different types of negation words.

  • We substantially go beyond negation (function) words, that is, we also consider negation verbs, nouns and adjectives.

  • We introduce a new dataset\(^{1}\) comprising German sentences in which negation words are manually annotated with respect to the polar expressions they negate.

  • We publicly release a tool\(^{1}\) for fine-grained German sentiment analysis that implements our proposed approach.

2 Data and Annotation

In order to evaluate negation in context, we built a small focused dataset comprising sentences with negation. To keep the annotation effort manageable, we extracted those sentences in which a negated polar expression is likely. We therefore extracted from a corpus only those sentences in which some negation word co-occurs with at least one polar expression according to the sentiment lexicon of the PolArt system (Klenner et al. 2009). In order not to bias the scope of negation in those sentences, we did not impose any restriction regarding the relation between negation words and polar expressions. To recognize negation words, we also created a negation lexicon, drawing on several resources. First, we used all negation expressions from the PolArt system. In addition, we translated a large list of English negation verbs to German and manually added morphologically related nouns and adjectives where they exist, e.g., for the verb stagnieren (stagnate) we also added the noun Stagnation (stagnation).

In total, we sampled 500 sentences from the DeWaC corpus (Baroni et al. 2009). We manually annotated every polar expression in those sentences. (Note that we did not restrict the annotation to those polar expressions we could automatically identify with the help of the PolArt sentiment lexicon.) We also marked every negation word that negates a polar expression. The dataset comes in TIGER/SALSA format (Erk and Padó 2004). Figure 1 illustrates the annotation of our dataset. Polar expressions evoke a frame SubjectiveExpression. If a polar expression is negated, its negation word is labeled as a frame element Shifter of that frame.\(^{2}\)

Fig. 1. Example sentence annotation from dataset (translation: The shock of Erfurt seems to have faded away in the public). Polar expressions (e.g., Schock) evoke a frame SubjectiveExpression; the word that negates a polar expression (e.g., verklungen) is assigned the frame element Shifter of that frame.

Of the 500 sentences, we removed 67 sentences which contained obvious errors (i.e., misspellings, grammatical mistakes or incorrect sentence boundaries). We excluded those sentences since the methods we are going to examine rely on a correct syntactic parse. Erroneous sentences are likely to produce spurious syntactic analyses.

On a sample of 200 sentences, we measured an inter-annotator agreement of \(\kappa =0.87\), which can be considered almost perfect according to Landis and Koch (1977).

Table 1 provides some statistics of our dataset. Even though every sentence contains at least one polar expression (in most cases more than one) and a negation word, there are only 282 cases in which a polar expression is within the scope of a negation word, i.e., is actually negated. This shows that it is not trivial to determine whether a polar expression has been negated. It is also worth pointing out that a negation word is about as likely to precede the polar expression it negates as to follow it.

Table 1. Statistics of negation detection dataset.

3 Baselines

3.1 Baseline I: Window-Based Scope

Our first baseline applies a simple window-based approach to the scope detection of negation. It is inspired by various works on polarity classification of English language data (Wilson et al. 2005; Wiegand et al. 2009). The scope is taken to be a span of n tokens around the negation word. While on English data it typically suffices to scan only the tokens succeeding the negation word, on German data we examine three different window types: one that assumes the polar expression to succeed the negation word, one that assumes it to precede the negation word, and one in which both directions are examined.

Figure 2 shows the performance of these different window-based scopes on our dataset. It shows that for German negation one needs to look in both directions. (This is in line with our statistics from Table 1.) All three window types reach their maximum at \(n\!=\!4\). In our forthcoming experiments, we use the best window-based scope (i.e., both directions with \(n\!=\!4\)) as a baseline.

Fig. 2. Illustration of window-based scope using different window sizes.
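
The window-based scope can be made concrete with a short sketch. The following Python snippet is purely illustrative (the function and its parameters are our own naming, not part of the released tool) and assumes tokenized sentences with 0-based token indices.

```python
def in_window_scope(neg_idx, polar_idx, n=4, direction="both"):
    """Check whether the polar expression at polar_idx falls within a
    window of n tokens around the negation word at neg_idx."""
    offset = polar_idx - neg_idx   # positive: polar expression follows the negation word
    if direction == "right":       # polar expression succeeds the negation word
        return 0 < offset <= n
    if direction == "left":        # polar expression precedes the negation word
        return -n <= offset < 0
    return offset != 0 and abs(offset) <= n   # both directions

# Example (3): "Ich mag den Kuchen nicht" -- nicht (index 4) negates mag (index 1)
print(in_window_scope(neg_idx=4, polar_idx=1))  # True with the best setting (both, n=4)
```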

3.2 Baseline II: Clause-Based Scope

Our second baseline models the scope on the basis of syntactic information. Instead of using a window of fixed size, we scan all words in the clause in which the negation word occurs for a polar expression. The scope of a negation typically does not exceed clause boundaries. For example, in (7) the negation word niemand (nobody) does not negate the polar expression entsetzlich (appalling) in the subordinate clause. From a linguistic perspective, this scope is more adequate than the window-based approach.

  1. (7)

    [Niemand wird etwas zu dem Ereignis sagen wollen]\(_{main\_clause}\), [weil es sich dabei um eine [entsetzliche]\(^-\) Angelegenheit handelt]\(_{subordinate\_clause}\).

    (Nobody will want to comment on this incident, since it is an appalling affair.)
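
A minimal sketch of the clause-based scope, under the simplifying assumption that clause boundaries are already available (e.g., from a syntactic parse) and represented as token-index spans; the representation and names are our own.

```python
def in_clause_scope(neg_idx, polar_idx, clause_spans):
    """clause_spans: list of (start, end) token-index pairs, one per clause.
    The polar expression only falls into the scope of the negation word
    if both tokens belong to the same clause."""
    for start, end in clause_spans:
        if start <= neg_idx < end:
            return start <= polar_idx < end
    return False

# Example (7): niemand is in the main clause, entsetzliche in the subordinate
# clause, so the polar expression is not negated (token indices are illustrative).
print(in_clause_scope(neg_idx=0, polar_idx=13, clause_spans=[(0, 9), (9, 18)]))  # False
```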

4 Our Approach

Our approach is fundamentally different from the previous baselines in that we define individual scopes for different types of words. Our framework allows arbitrary scopes to be defined for every possible negation word. A scope is defined in terms of a grammatical relation. For instance, we could specify that the subject of the negation word aufhören (subside) is the expression that is negated, as in (8).

  1. (8)

    [[Die Schmerzen]\(^-\) hören auf\(^{verb}\)]\(^+\).

    (The pain subsides.)

We do not have the knowledge to explicitly enumerate the scope of every possible negation word from our negation lexicon (Sect. 2). Instead, we grouped words with similar scope characteristics (Sects. 4.1 and 4.2) and assigned to each group one scope that covers all of its members.

Our framework allows the specification of a priority scope list for a negation word, i.e., a list with more than one argument position (see also Table 2). We process such a list from left to right and apply the first argument position that matches for the specific negation word in a given sentence. The advantage of such a list is that it can accommodate negation words that may negate different arguments. The flexibility we gain with priority lists is essential for identifying the correct scope of negation content words, as we will explain in Sect. 4.2.
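
The following sketch illustrates how such a priority scope list could be processed. The dependency triples and the objp-* wildcard mirror the relation labels used in this paper, while the function itself is a simplified assumption rather than the released implementation.

```python
def apply_priority_list(neg_token, priority_list, dependencies):
    """Process the priority scope list from left to right and return the first
    argument of the negation word that is realized in the sentence.
    dependencies: iterable of (head, relation, dependent) triples."""
    for wanted in priority_list:
        for head, rel, dep in dependencies:
            # objp-* leaves the preposition underspecified (cf. Sect. 4.2)
            matches = rel == wanted or (wanted == "objp-*" and rel.startswith("objp"))
            if head == neg_token and matches:
                return dep
    return None  # none of the listed argument positions is present

# Example (22): obja precedes objd in the priority list, so the accusative
# object ("Ärger") rather than the dative object ("uns") is returned.
deps = [("ersparte", "subj", "Das"), ("ersparte", "objd", "uns"), ("ersparte", "obja", "Ärger")]
print(apply_priority_list("ersparte", ["obja", "objd", "objp-*", "subj"], deps))  # Ärger
```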

We do not claim that our proposed approach perfectly models the scope of every German negation word. But we show that with relatively little lexical knowledge we can largely outperform a traditional approach that treats all negation words in the same way. Our proposed method should therefore be regarded as a strong baseline for future research.

Table 2 summarizes the different negation word types that we discuss in detail below.

Table 2. The different negation word types and their scopes.

4.1 Scope for Negation Function Words

The lexical units most commonly associated with negation are negation particles, such as nicht (not), negation adverbs, such as niemals (never), indefinite pronouns, such as kein (no), and a few prepositions, such as ohne (without). Even though these negation words constitute only a handful of lexical units, they are known to have a large impact. This is because they are function words and therefore occur frequently. We call these words negation function words. Regarding the scope of these words, we distinguish between three types.

Negation Adverbs and Indefinite Pronouns. These negation words exhibit behaviour similar (in terms of scope) to sentential adverbs. As a consequence, they have a wide scope: the entire clause in which they are embedded, as in (9) and (10). We use the same clause definition as for our baseline in Sect. 3.2.

  1. (9)

    [Noch nie wollte Kiew [Frieden]\(^+\)]\(^-\).

    (Kiev never wanted peace.)

  2. (10)

    [Kein Mensch möchte sie dabei [unterstützen]\(^+\)]\(^-\).

    (No one wants to support them with that.)

Negation Particle. The particle nicht (not) has a narrow scope. We only include the word which governs it in the dependency graph (11).

(11) [dependency-graph figure]

Negation Prepositions. Negation prepositions also have a narrow scope. However, unlike the negation particle, their scope does not comprise the word which governs them but the words which they govern, i.e., their dependents (hence the reverse relation), e.g., Hass (hatred) in (12).

  1. (12)

    Wir bauen eine Welt ganz [ohne\(^{prep}\) [Hass]\(_{dependent}^-\)]\(^+\).

    (We create a world without hatred.)

4.2 Scope for Negation Content Words

In the following, we describe the remaining words, all of which are content words. We therefore refer to these words as negation content words.

Negation Nouns. Negation nouns typically reverse the polarity of one of two types of dependents, either a genitive modifier (13) or a prepositional object (14). Note that we leave the preposition underspecified so that it can match any potential preposition.

  1. (13)

    Das Gericht beschloss [die Aufhebung\(^{noun}\) [der Strafe]\(_{gmod}^-\)]\(^+\).

    (The court decided to lift the sentence.)

  2. (14)

    Qi Gong dient auch zur [Vorbeugung\(^{noun}\) [vor Krankheiten]\(_{objp\text{-}vor}^-\)]\(^+\).

    (Qi gong is also used for preventing diseases.)

Negation Adjectives. There are two major constructions in which adjectives may occur: they may be used predicatively or attributively. Negation adjectives occur in both constructions. Therefore, polar expressions negated by an adjective may be in two different argument positions, namely a noun in subject position in the predicative case (15) or a noun that is modified by an attributive adjective (16).

  1. (15)

    [[Diese Bemühungen]\(^+_{subj}\) sind gescheitert\(^{pred\_adj}\)]\(^-\).

    (These efforts failed.)

  2. (16)

    Das sind alles [korrigierbare\(^{attr\_adj}\) [Fehler]\(^-_{attr\text{-}rev}\)]\(^+\).

    (These are recoverable errors.)

Negation Verbs. For this study, we distinguish between two major verb groups, transitive verbs and intransitive verbs. In the case of transitive negation verbs, it is the object that is negated (17), while for intransitive verbs, it is the subject that is negated (18).

  1. (17)

    Dieses Medikament [lindert\(^{transitive\_verb}\) [die Schmerzen]\(_{obja}^-\)]\(^+\).

    (This drug relieves the pain.)

  2. (18)

    [[Die Schmerzen]\(_{subj}^-\) hören auf\(^{intransitive\_verb}\)]\(^+\).

    (The pain subsides.)

Note that by transitive verbs we understand all verbs that have at least two arguments. By arguments we do not only mean the subject and the (direct) accusative object but also all other types of objects, for instance, a dative object (19), a prepositional object (20) or an object clause (21).

  1. (19)

    Die Menschheit [entging\(^{transitive\_verb}\) [einer Katastrophe]\(_{objd}^-\)]\(^+\).

    (Mankind averted disaster.)

  2. (20)

    Wir [kämpfen\(^{transitive\_verb}\) [gegen dieses Problem]\(_{objp\text{-}gegen}^-\) an]\(^+\).

    (We fight against this problem.)

  3. (21)

    Ich [bezweifle\(^{transitive\_verb}\), [dass dies eine gute Idee ist]\(_{objc}^+\)]\(^-\).

    (I doubt that this is a good idea.)

For sentences in which a negation verb has more than one object, the ordering of the objects on our priority scope list decides which type of object is given priority. For example, in the case of ditransitive verbs, the accusative object is more likely to be negated than the dative object (22).

  1. (22)

    Das [ersparte\(^{transitive\_verb}\) [uns]\(_{objd}\) [viel Ärger]\(_{obja}^-\)]\(^+\).

    (This saved us a lot of trouble.)

In principle, the arguments of verbs to be negated could be most adequately described in terms of semantic roles. In the terminology of FrameNet (Baker et al. 1998), we are basically looking for Theme or Patient; in the terminology of PropBank (Palmer et al. 2005) it is A1. Unfortunately, automatic semantic role labeling for German is still in its infancy. As a consequence, we need to approximate semantic roles with dependency relations.

Using a priority scope list also partly allows us to model sense ambiguity. Some verbs may be used both intransitively and transitively. We simply add the subject position at the end of the priority list. This allows the German negation verb abnehmen (take from/decrease) to negate its accusative object in (23) while it negates its subject in (24).

  1. (23)

    Sie [nahm\(^{transitive\_verb}\) ihm [eine große Last]\(_{obja}^-\) ab]\(^+\).

    (She took a great burden from him.)

  2. (24)

    [[Seine Wut]\(_{subj}^-\) nahm\(^{intransitive\_verb}\) deutlich ab]\(^+\).

    (His anger notably decreased.)
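
To make the typology of Sects. 4.1 and 4.2 concrete, the sketch below shows how a few negation lexicon entries with type-specific priority scope lists might be represented. The relation labels are the ones used in the examples above; the dictionary format and the labels "clause", "governor" and "dependent" are our own illustrative shorthand, not the format of the released resource.

```python
# Illustrative negation lexicon entries (not the released resource).
# "governor" marks a reverse relation: the scope is the word governing the
# negation word rather than one of its dependents.
NEGATION_LEXICON = {
    # negation function words
    "niemals":     {"type": "adverb",             "scope": ["clause"]},
    "kein":        {"type": "indefinite_pronoun", "scope": ["clause"]},
    "nicht":       {"type": "particle",           "scope": ["governor"]},
    "ohne":        {"type": "preposition",        "scope": ["dependent"]},
    # negation content words
    "Aufhebung":   {"type": "noun",               "scope": ["gmod", "objp-*"]},
    "gescheitert": {"type": "adjective",          "scope": ["subj", "attr-rev"]},
    "lindern":     {"type": "transitive_verb",    "scope": ["objg", "obja", "objd", "objc", "obji", "s", "objp-*"]},
    "aufhören":    {"type": "intransitive_verb",  "scope": ["subj"]},
    "abnehmen":    {"type": "verb",               "scope": ["obja", "subj"]},  # subject appended last, cf. (23)/(24)
}
```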

4.3 Normalization of the Dependency Graph

Our previous examples have shown that in order to model scope, we largely rely on a syntactic analysis, particularly on a dependency parse. We employ ParZu (Sennrich et al. 2009). We chose that particular parser because of its fine-grained label inventory, which is essential for our approach. Still, our rules cannot be applied immediately to the original output of that parser.

Our rules are defined for active-voice constructions. The parse of passive-voice constructions would be misleading since ParZu provides dependency structures that describe the surface structure. For example, in (25) we would not be able to correctly establish the scope of bremsen over Fortschritt, since Fortschritt is marked as the surface subject. By normalizing the dependency relation labels to active voice (i.e., the deep structure), as indicated in (26), our rules work correctly, since Fortschritt becomes an accusative object. It would be uneconomical to operate directly on the surface representation, as it would mean writing redundant rules for negation scopes.

  1. (25)

    [[Der Fortschritt]\(_{subj\_surface}^+\) wurde [von der Kirche]\(_{objp\_surface}\) stets gebremst\(^{verb}\)]\(^-\).

    (Progress was held off by the church.)

  2. (26)

    [[Der Fortschritt]\(_{obja\_deep}^+\) wurde [von der Kirche]\(_{subj\_deep}\) stets gebremst\(^{verb}\)]\(^-\).

    (Progress was held off by the church.)
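
A sketch of the active-voice normalization under simplifying assumptions: the parse is given as (head, relation, dependent) triples, passive predicates have already been identified, and the agentive von-phrase is assumed to carry an objp-von label (in analogy to objp-vor and objp-gegen above); everything else is illustrative.

```python
def to_active_voice(dependencies, passive_predicates):
    """Relabel surface relations of passive clauses with their deep,
    active-voice counterparts (cf. (25) vs. (26))."""
    normalized = []
    for head, rel, dep in dependencies:
        if head in passive_predicates:
            if rel == "subj":
                rel = "obja"      # surface subject -> deep accusative object
            elif rel == "objp-von":
                rel = "subj"      # von-phrase (agent) -> deep subject
        normalized.append((head, rel, dep))
    return normalized

# Example (25): "Der Fortschritt wurde von der Kirche gebremst"
deps = [("gebremst", "subj", "Fortschritt"), ("gebremst", "objp-von", "Kirche")]
print(to_active_voice(deps, passive_predicates={"gebremst"}))
# [('gebremst', 'obja', 'Fortschritt'), ('gebremst', 'subj', 'Kirche')]
```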

Another major problem is that for several tensed predicates, such as wird versiegt sein (will have dried up) in (27), ParZu adds several auxiliary edges accommodating the auxiliary verbs of the predicate. As a consequence, a full verb and its arguments may no longer be directly related. For instance, in (27) the negation verb versiegen (dry up) is not directly related to its polar subject Zuversicht (confidence); neither is the adjective korrigierbar (recoverable) in (29) directly related to its polar subject Fehler (error). In a further normalization step we therefore remove the edges involving the auxiliary verbs so that the full verb and its argument (28) or the predicate adjective and its argument (30) are directly connected.

(27)–(30) [dependency-graph figures]
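
The auxiliary-edge normalization can be sketched as follows: arguments attached to an auxiliary verb are re-attached to the full verb (or predicative adjective) it supports, and the edges of the auxiliary chain itself are dropped. The graph representation, the toy dependency triples and all names are assumptions for illustration; the actual ParZu analysis may differ in detail.

```python
def collapse_auxiliaries(dependencies, auxiliaries, content_heads):
    """Re-attach arguments of auxiliary verbs to the full verb or predicative
    adjective, so that, e.g., a negation verb and its polar subject become
    directly connected (cf. (27)/(28) and (29)/(30)).
    auxiliaries: set of auxiliary tokens; content_heads: maps each auxiliary
    to the content word (full verb / predicate adjective) it supports."""
    collapsed = []
    for head, rel, dep in dependencies:
        if dep in auxiliaries or dep in content_heads.values():
            continue                     # drop the edges of the auxiliary chain itself
        if head in auxiliaries:
            head = content_heads[head]   # re-attach the argument to the content word
        collapsed.append((head, rel, dep))
    return collapsed

# Toy parse of "Die Zuversicht wird versiegt sein": the subject attached to
# the auxiliary "wird" is moved down to the full verb "versiegt".
deps = [("wird", "subj", "Zuversicht"), ("wird", "aux", "sein"), ("sein", "aux", "versiegt")]
heads = {"wird": "versiegt", "sein": "versiegt"}
print(collapse_auxiliaries(deps, auxiliaries={"wird", "sein"}, content_heads=heads))
# [('versiegt', 'subj', 'Zuversicht')]
```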

4.4 Scope Expansion

Most of our negation rules assume that the negation word and the polar expression it negates are in a direct syntactic relation (31). However, there are also cases of negation in which there is no such direct relationship. For example, in (32) the polar expression that is negated is not the accusative object of the negation verb but its attributive adjective. To account for this, we implemented a scope expansion in which indirect relationships are also allowed (i.e., we include the dependents of the words that match the direct syntactic relation).

(31)–(32) [dependency-graph figures]
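
A sketch of the scope expansion: once a word matches the direct syntactic relation of a negation rule, its dependents are added to the scope as well, so that a polar expression realized, e.g., as an attributive adjective of the matched object still counts as negated. The toy triples below are constructed in the spirit of (32) and are not taken from the dataset.

```python
def expand_scope(matched_word, dependencies):
    """Return the matched word together with its direct dependents, so that
    indirectly related polar expressions also fall into the negation scope."""
    expanded = {matched_word}
    for head, _rel, dep in dependencies:
        if head == matched_word:
            expanded.add(dep)
    return expanded

# Hypothetical parse: the negated polar expression is the attributive adjective
# of the accusative object rather than the object noun itself.
deps = [("verhindert", "obja", "Ausbau"), ("Ausbau", "attr", "sinnvollen")]
print(expand_scope("Ausbau", deps))  # {'Ausbau', 'sinnvollen'}
```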

5 Experiments

5.1 Intrinsic Evaluation on Negation Dataset

In this section, we evaluate on the dataset we specially created for the task of German negation detection for fine-grained sentiment analysis (Sect. 2). The task is to identify for each polar expression the negation word in whose scope it falls.

Since the focus of our work is neither to automatically detect polar expressions nor to detect negation words, in our first set of experiments, we consider them as given. That is, we read them off from the gold standard. The specific task therefore becomes to decide whether a given polar expression is negated by a given negation word.

Table 3 compares the different negation detection approaches. It clearly shows that our proposed method outperforms the two baseline methods, that is, the window-based approach (Sect. 3.1) and the clause-based approach (Sect. 3.2). Table 3 also displays the performance of our proposed method with individual components switched off, i.e., normalization (Sect. 4.3) or scope expansion (Sect. 4.4). The table shows that both functionalities clearly have a beneficial effect. With regard to the normalization, however, the active-voice conversion contributes only a minor share to the overall performance; it is the conflation of relation edges in the dependency graph (27)–(30) that has the biggest impact.

Table 3. Comparison of different approaches.

Table 4 compares different verb rules. First, we evaluate a set of single verb rules, that is, we ignore the distinction between transitive and intransitive verbs. The performance is fairly competitive if we use the largest possible priority list objg, obja, objd, objc, obji, s, objp-*, subj. If we distinguish between transitive and intransitive verbs but only have two atomic rules and no priority list (i.e., obja vs. subj), then this is worse than having only one rule but a priority list (i.e., objg, obja, objd, objc, obji, s, objp-*, subj). From that we conclude that many negated polar expressions are realized as some type of object but not necessarily as an accusative object (i.e., obja). Accounting for intransitive verbs has a relatively marginal impact, since the scores of "1 rule: obja" and "2 atomic rules: obja for trans.; subj for intrans." are not that far apart. We assume that the reason for this is that (deep) subjects are relatively rarely negated.

Table 4. Impact of different verbs rules.

Our previous experiments all assumed knowledge of polar expressions in a sentence as given. We now want to examine how performance changes if we detect all polar expressions automatically. For this experiment, we employ the sentiment lexicon of the PolArt system (Klenner et al. 2009). The detection of polar expressions based on a lexicon has two disadvantages. Firstly, all existing sentiment lexicons only have a limited coverage. Secondly, lexicon look-up does not account for word-sense ambiguity, that is, some words may only convey subjectivity in certain contexts (Akkaya et al. 2009).

Table 5 compares the performance of our two baselines and our proposed method based on the manual detection of polar expressions and on the automatic detection of those expressions. It comes as no surprise that the performance based on automatic detection is lower than that based on manual detection. However, by and large, the relative differences between our three approaches to determining the scope of negation are similar for both detection types. In other words, no matter how the polar expressions are detected, our proposed method always largely outperforms the two baseline classifiers.

We refrain from carrying out a similar experiment by detecting negation words automatically, since our dataset is biased towards the negation words we know. Moreover, inspection of our data revealed that polar expressions tend to be much more ambiguous than negation words.

Table 5. Comparison of manual and automatic detection of polar expressions.

5.2 Extrinsic Evaluation on Sentence-Level Polarity Classification

In this section, we evaluate our negation modeling approach on the task of sentence-level polarity classification. The task is to correctly classify the overall polarity of a given sentence.

We consider two datasets: the Multi-layered Reference Corpus for German Sentiment Analysis (MLSA) (Clematide et al. 2012) and the Heidelberg Sentiment Treebank (HeiST) (Haas and Versley 2015). MLSA contains 270 sentences from the DeWaC Corpus (Baroni et al. 2009) which is a collection of German-language documents of various genres obtained from the web. HeiST contains 1184 sentences from German movie reviews.

We run two types of evaluations: a three-class setting in which the sentences are to be labeled as either positive, negative or neutral, and a two-class setting where we remove the neutral instances and the classifier just has to distinguish between positive and negative polarity. For HeiST, we remove 253 (neutral) sentences in the two-class setting while for MLSA, we remove 91 sentences.

The polarity classification algorithm we follow is kept simple. For each sentence we sum the scores associated with the polar expressions occurring in that sentence according to the sentiment lexicon of the PolArt system. In case a polar expression is within the scope of a negation, we move its polarity score in the opposite direction by the absolute value of 1.3. This is an ad-hoc value; however, it complies with the recent elicitation study by Kiritchenko and Mohammad (2016) in that the score of a negated polar expression should not be represented as its inverse: a negated polar expression (e.g., not excellent) has a lower polar intensity than a (plain) polar expression of the opposite polarity with the same intensity (e.g., abysmal). Our scoring is illustrated in Table 6. The final sentence-level polarity is derived from the sign of the sum of scores.
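
A minimal sketch of this scoring scheme, assuming a lexicon that maps the polar expressions of a sentence to scores and a set of expressions identified as negated; the shift by 1.3 follows the description above, the rest of the interface is our own.

```python
def sentence_polarity(polar_scores, negated, shift=1.3):
    """polar_scores: dict mapping each polar expression in the sentence to its
    lexicon score; negated: set of polar expressions within a negation scope.
    A negated expression is shifted towards the opposite polarity by 1.3
    instead of having its score inverted."""
    total = 0.0
    for expression, score in polar_scores.items():
        if expression in negated:
            score = score - shift if score > 0 else score + shift
        total += score
    if total > 0:
        return "positive"
    if total < 0:
        return "negative"
    return "neutral"

# "nicht gut" with gut = +1.0: the negated score becomes -0.3, i.e., mildly negative.
print(sentence_polarity({"gut": 1.0}, negated={"gut"}))  # negative
```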

Table 6. Illustration of negation scores.

In our experiments, we examine two different configurations, one where no negation modeling is considered and another where our proposed negation modeling is incorporated. We evaluate in terms of macro-average precision, recall and F-score. Table 7 shows the evaluation on HeiST while Table 8 shows the evaluation on MLSA. In both cases our proposed negation modeling outperforms the polarity classifier in which no negation modeling is incorporated.
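
For reference, the macro-averaged scores are computed as unweighted means over the set of classes \(C\) (two or three classes, depending on the setting); we assume the common per-class averaging:

\[
P_{macro} = \frac{1}{|C|}\sum_{c \in C} P_c, \qquad
R_{macro} = \frac{1}{|C|}\sum_{c \in C} R_c, \qquad
F_{macro} = \frac{1}{|C|}\sum_{c \in C} \frac{2\,P_c R_c}{P_c + R_c}
\]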

Table 7. Polarity classification on HeiST.
Table 8. Polarity classification on MLSA.

6 Related Work

The most notable work dealing with different types of negation is Wilson et al. (2005), who point out that negation is expressed by other words than the commonly associated negation (function) words not, never, no, etc. Since that work is carried out on English data, the scope modeling is kept simple using a window-based approach. More recently, Socher et al. (2013) proposed the Recursive Neural Tensor Network (RNTN) for sentiment analysis. The RNTN is a compositional sentence-level polarity classifier providing polarity values for each node in a constituency parse of a sentence. The authors claim that this method allows learning negation directly from labeled training data without explicit knowledge of negation words and their scopes. However, there has been no empirical examination of how reliably the RNTN actually models negation. Moreover, on German data that approach only produced results inferior to conventional SVMs trained on bags of words (Haas and Versley 2015). For a detailed summary of negation modeling in sentiment analysis, we refer the reader to Wiegand et al. (2010).

Next to sentiment analysis, negation modeling has also been studied in the biomedical domain. Most of this work focuses on supervised classification on the (English) BioScope corpus (Szarvas et al. 2008), such as Morante et al. (2008) or Zou et al. (2013). The approach most closely related to ours, however, is the descriptive work by Morante (2010), who analyzes the individual negation words within the BioScope corpus and their scopes. This is one of the very few prominent research efforts that explicitly enumerate the different scopes of different negation words.

As far as German NLP is concerned, we are only aware of two research efforts that address negation. PolArt (Klenner et al. 2009) is a system that carries out sentence-level polarity classification. It matches polar expressions from a sentiment lexicon and then computes the sentence-level polarity compositionally on the basis of rules operating on syntactic constituents. This algorithm also incorporates negation modeling. However, the underlying lexicon includes only 22 polar shifters, the majority of which are negation function words. The scope detection is further restricted by the fact that syntactic information is drawn from a chunk parser, which only produces very flat output structures. Our work substantially differs from Klenner et al. (2009) in that we devised a framework for negation scope detection that handles many more types of negation words and allows the specification of individual scopes. Moreover, unlike Klenner et al. (2009), we employ a dependency parser and further normalize its output, so we exploit much more accurate syntactic information.

Cotik et al. (2016) propose a method for negation detection in clinical reports. This approach is an adaptation of NegEx (Chapman et al. 2001), a simple negation detection algorithm that operates on a set of negation cues embedded in lexical patterns (i.e., word token sequences). This method operates on the string level, that is, unlike our approach, no form of syntactic parsing is considered. Cotik et al. (2016) consider a set of 167 German negation phrases. These are highly domain-specific phrases, most of which include a common negation function word, e.g., trifft fuer den Patienten nicht zu (does not apply for the patient) or keine Beschwerden ueber (no complaints of). Due to the domain specificity of that approach, the negation cues and the scope detection mechanism cannot be applied to our dataset. Our approach also differs from Cotik et al. (2016) in that it is aimed at processing unrestricted text.

7 Conclusion

We presented an approach for modeling German negation in open-domain fine-grained sentiment analysis. Unlike most previous work in sentiment analysis, we assume that negation can be conveyed by many lexical units and that different negation words have different scopes.

We examined our approach on a new dataset comprising sentences with mentions of polar expressions and various negation words. We identified different types of negation words that share similar scopes and showed that negation modeling based on these types largely outperforms traditional negation models that assume the same scope for all negation words, no matter whether a window-based or a clause-based scope is employed.

Our proposed method is only a first approximation of a more advanced negation handling for German. By making our implementation publicly available, we hope to stimulate further research in that direction using our new tool as a basis.