Encyclopedia of Complexity and Systems Science

Living Edition
| Editors: Robert A. Meyers

Information System Design Using Fuzzy and Rough Set Theory

  • Theresa Beaubouef
  • Frederick Petry
Living reference work entry
DOI: https://doi.org/10.1007/978-3-642-27737-5_458-4


Rough Sets

Rough set theory is a technique for dealing with uncertainty and for identifying cause-effect relationships in databases. It is based on a partitioning of some domain into equivalence classes and the defining of lower and upper approximation regions based on this partitioning to denote certain and possible inclusion in the rough set.

Fuzzy Sets

Fuzzy set theory is another technique for dealing with uncertainty. It is based on the concept of measuring the degree of inclusion in a set through the use of a membership value. Where elements can either belong or not belong to a regular set, with fuzzy sets elements can belong to the set to a certain degree with zero indicating not an element, one indicating complete membership, and values between zero and one indicating partial or uncertain membership in the set.

Information Theory

Information theory involves the study of measuring the information content of a signal. In databases information theoretic measures can be used to measure the information content of data. Entropy is one such measure.


Database

A collection of data and the application programs that make use of these data for some enterprise is a database.

Information System

An information system is a database enhanced with additional tools that can be used by management for planning and decision-making.

Data Mining

Data mining involves the discovery of patterns or rules in a set of data. These patterns generate some knowledge and information from the raw data that can be used for making decisions. There are many approaches to data mining, and uncertainty management techniques play a vital role in knowledge discovery.

Definition of the Subject and Its Importance

Databases and information systems are ubiquitous in this age of information and technology. Computers have revolutionized the way data can be manipulated and stored, allowing for very large databases with sophisticated capabilities. With so much money and manpower invested in the design and daily use of these systems, it is imperative that they be as correct, secure, and adaptable to the changing needs of the enterprise as possible. Therefore it is important to understand the design and implementation of such systems and to be able to utilize all their capabilities.

Scientists and business executives alike know the value of information. The challenge has been to produce relevant information for an ever-changing uncertain world from data and facts stored on computers and archival devices. These data are considered to be exact, certain, factual values. The real world, however, is uncertain, inexact, and fraught with errors. It is a challenge, then, to extract useful and relevant information from ordinary databases. Uncertainty management techniques such as rough and fuzzy sets can help. These are emerging topics of importance in both the areas of data science/big data (Cady 2017; Dhar 2013) and the Internet of things (Höller et al. 2014; Stankovic 2014).


Databases are recognized for their ability to store and update data in an efficient manner, providing reliability and the elimination of data redundancy. The relational database model, in particular, has well-established mechanisms built into the model for properly designing the database and maintaining data integrity and consistency. Data alone, however, are only facts. What is needed is information. Knowledge discovery attempts to derive information from the pure facts, discovering high-level regularities in the data. It is defined as the nontrivial extraction of implicit, previously unknown, and potentially useful information from data (Frawley et al. 1991; Han et al. 1992).

An innovative technique in the field of uncertainty and knowledge discovery is based on rough sets. Rough set theory, introduced and further developed mathematically by Pawlak (1984), provides a framework for the representation of uncertainty. It has been used in various applications such as the rough querying of crisp data (Beaubouef and Petry 1994b), uncertainty management in databases (Beaubouef and Petry 2007a), the mining of spatial data (Beaubouef and Petry 2002), and improved information retrieval (Srinivasan 1991). These techniques may readily be extended for use with object-oriented, spatial, and other complex databases and may be integrated with additional data mining techniques for a comprehensive knowledge discovery approach.

Fuzzy Sets and Rough Sets

Fuzzy Set Theory

Fuzzy set theory (Zadeh 1965) is another approach for managing uncertainty. It has been around for a few years longer than rough sets and also has well-developed theory, properties, and applications. Applications involving fuzzy logic are diverse and plentiful, ranging from fuzzy control systems in industry to fuzzy logic in databases (Buckles and Petry 1982a; Petry 1996).


Characteristic Functions: Conventionally we can specify a set C by its characteristic function, CharC(x). If U is the universal set from which values of C are taken, then we can represent C as
$$ \mathrm{C}=\left\{\mathrm{x}|\mathrm{x}\in \mathrm{U}\ {\mathrm{and}\ \mathrm{C}\mathrm{har}}_{\mathrm{C}}\left(\mathrm{x}\right)=1\right\} $$
This is the representation for a crisp or non-fuzzy set. For an ordinary set C, the characteristic function is of the form:
$$ {\mathrm{C}\mathrm{har}}_{\mathrm{C}}\left(\mathrm{x}\right):\mathrm{U}\to \left\{0,1\right\} $$
However, for a fuzzy set A we have
$$ {\mathrm{Char}}_{\mathrm{A}}\left(\mathrm{x}\right):\mathrm{U}\to \left[0,1\right] $$

That is, for a fuzzy set the characteristic function takes on all values between 0 and 1, not just the discrete values 0 and 1 that represent the binary choice of membership in a conventional crisp set such as C. For a fuzzy set, the characteristic function is usually called the membership function and denoted μA(x).
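The contrast between the two kinds of characteristic function can be sketched with a small example. The set "tall" and its breakpoints below are hypothetical illustrations, not taken from the text:

```python
# Crisp set "tall": membership is 0 or 1 at a sharp 180 cm threshold.
def char_tall_crisp(height_cm):
    return 1 if height_cm >= 180 else 0

# Fuzzy set "tall": membership rises gradually from 0 at 160 cm to 1 at
# 190 cm (the breakpoints are illustrative assumptions).
def mu_tall_fuzzy(height_cm):
    if height_cm <= 160:
        return 0.0
    if height_cm >= 190:
        return 1.0
    return (height_cm - 160) / 30.0

print(char_tall_crisp(175))          # 0
print(round(mu_tall_fuzzy(175), 2))  # 0.5
```

A height of 175 cm is simply "not tall" in the crisp set, but "tall to degree 0.5" in the fuzzy set.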


Support and α-Cuts

The support of a fuzzy set, A, is a subset of the universe set, U,
$$ \mathrm{Supp}\left(\mathrm{A}\right)=\left\{\mathrm{x}|\mathrm{x}\in \mathrm{U}\ \mathrm{and}\ {\upmu}_{\mathrm{A}}\left(\mathrm{x}\right)>0\right\} $$
So a fuzzy set A can be written as
$$ \mathrm{A}=\left\{{\upmu}_{\mathrm{A}}\left(\mathrm{x}\right)/\mathrm{x}|\mathrm{x}\in \mathrm{Supp}\ \left(\mathrm{A}\right)\right\} $$
which means that only those fuzzy elements whose membership function is greater than zero contribute to A.

A concept related to the support is the α-cut. The α-cut of a fuzzy set is a nonfuzzy subset of the universe whose elements have a membership grade greater than or equal to some value α: Aα = {x | μA(x) ≥ α} for 0 ≤ α ≤ 1. Notice that the α-cuts of a set are subsets of its support. The values of α can be chosen arbitrarily but are usually picked to select desired subsets of the universe.
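For a finite fuzzy set stored as element-to-grade pairs, the support and an α-cut can be computed directly; the set A below is an illustrative assumption:

```python
# A finite fuzzy set as a dict mapping elements to membership grades.
A = {"a": 0.0, "b": 0.3, "c": 0.7, "d": 1.0}

def support(fuzzy):
    # Elements with membership grade strictly greater than zero.
    return {x for x, mu in fuzzy.items() if mu > 0}

def alpha_cut(fuzzy, alpha):
    # Elements with membership grade at least alpha.
    return {x for x, mu in fuzzy.items() if mu >= alpha}

print(sorted(support(A)))         # ['b', 'c', 'd']
print(sorted(alpha_cut(A, 0.5)))  # ['c', 'd']
```

Every α-cut with α > 0 is a subset of the support, as the text notes.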

All of the basic set operations have counterparts for fuzzy sets, and there are additional operations, based on the membership values of a fuzzy set, that have no correspondence in crisp sets. We will use the membership functions μA and μB to represent the fuzzy sets A and B involved in the operations to be illustrated.
$$ \mathrm{Set}\ \mathrm{Equality}\ \mathrm{A}=\mathrm{B}\iff {\upmu}_{\mathrm{A}}\left(\mathrm{x}\right)={\upmu}_{\mathrm{B}}\left(\mathrm{x}\right) $$
$$ \mathrm{Set}\ \mathrm{Containment}\ \mathrm{A}\subseteq \mathrm{B}\iff {\upmu}_{\mathrm{A}}\left(\mathrm{x}\right)\le {\upmu}_{\mathrm{B}}\left(\mathrm{x}\right) $$
$$ \mathrm{Set}\ \mathrm{Complement}\ \overline{\mathrm{A}}=\left\{\left(1-{\upmu}_{\mathrm{A}}\left(\mathrm{x}\right)\right)/\mathrm{x}\right\} $$

For ordinary crisp sets A ∩ \( \overline{\mathrm{A}} \) = Ø; however, this does not generally hold for a fuzzy set and its complement. This may seem to violate the law of the excluded middle, but it is simply the essential nature of fuzzy sets: because fuzzy sets have imprecise boundaries, we cannot place an element exclusively in a set or in its complement.

Set Union and Set Intersection
$$ \mathrm{A}\cup \mathrm{B}\iff {\upmu}_{\mathrm{A}\cup \mathrm{B}}\left(\mathrm{x}\right)=\operatorname{Max}\ \left({\upmu}_{\mathrm{A}}\left(\mathrm{x}\right),\, {\upmu}_{\mathrm{B}}\left(\mathrm{x}\right)\right) $$
$$ \mathrm{A}\cap \mathrm{B}\iff {\upmu}_{\mathrm{A}\cap \mathrm{B}}\left(\mathrm{x}\right)=\operatorname{Min}\ \left(\ {\upmu}_{\mathrm{A}}\left(\mathrm{x}\right),{\upmu}_{\mathrm{B}}\left(\mathrm{x}\right)\right) $$
The Max and Min functions used for these operations have been given an axiomatic justification in the fuzzy set literature. With these definitions, the standard crisp-set properties of commutativity, associativity, and so forth continue to hold for fuzzy sets. A number of alternative functions have been proposed to represent set union and intersection (Cady 2017; Dubois and Prade 1992; Mendel 2017). For example, in the case of intersection, a product definition, μA (x) ∗ μB (x), has been considered.
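The Max/Min operations, the complement, and the overlap of a fuzzy set with its complement can be sketched for finite fuzzy sets; the sets A and B below are illustrative assumptions:

```python
A = {"x1": 0.2, "x2": 0.8}
B = {"x1": 0.6, "x2": 0.4}

def f_union(A, B):      # membership is the Max of the two grades
    return {x: max(A.get(x, 0.0), B.get(x, 0.0)) for x in A.keys() | B.keys()}

def f_intersect(A, B):  # membership is the Min of the two grades
    return {x: min(A.get(x, 0.0), B.get(x, 0.0)) for x in A.keys() | B.keys()}

def f_complement(A):    # grade 1 - mu(x) for each element
    return {x: 1.0 - mu for x, mu in A.items()}

print(f_union(A, B)["x1"])  # 0.6
# A fuzzy set and its complement overlap: their intersection is nonempty.
overlap = f_intersect(A, f_complement(A))
print(all(mu > 0 for mu in overlap.values()))  # True
```

The final check illustrates the point about the excluded middle: every element here has a nonzero grade in both A and its complement.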
$$ \mathrm{Concentration}\quad \mathrm{CON}\ \left(\mathrm{A}\right):{\upmu}_{\mathrm{CON}\ \left(\mathrm{A}\right)}\left(\mathrm{x}\right)={\left({\upmu}_{\mathrm{A}}\left(\mathrm{x}\right)\right)}^2 $$
The concentration operation concentrates fuzzy elements by reducing the membership grade proportionally more for elements that have smaller membership grades. This operation and the following ones of DIL and INT have no counterpart in ordinary set operations and are commonly used to represent linguistic hedges.
$$ \mathrm{Dilation}\quad \mathrm{DIL}\ \left(\mathrm{A}\right):{\upmu}_{\mathrm{DIL}\left(\mathrm{A}\right)}\left(\mathrm{x}\right)={\left({\upmu}_{\mathrm{A}}\left(\mathrm{x}\right)\right)}^{1/2} $$

The dilation operation dilates fuzzy elements by increasing the membership grade more for the elements with smaller membership grades.

Intensification (INT (A)). The intensification operation is like contrast intensification of a picture. It raises the membership grade of those elements within the crossover points and reduces the membership grade of those outside the crossover points.

The operators such as CON and DIL are commonly used to represent linguistic hedges that act as modifiers to linguistic variables represented in fuzzy sets. The CON operator can be used to approximate the effect of the linguistic modifier hedge “Very.” That is,
$$ \mathrm{Very}\ \left(\mathrm{A}\right)=\mathrm{CON}\ \left(\mathrm{A}\right) $$

Another example is that the dilation operation may be used to represent the linguistic modifier More-or-less.

The exponents such as μ2 for CON or μ1/2 for DIL can be viewed as specific values of parameterized exponents. So the hedge Extremely might be modeled by μ3 or other values that might be obtained by a consensus of opinions.
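The hedge operators are one-line transformations of the membership grades; here "Very A" and "More-or-less A" are built from CON and DIL as described, with an illustrative fuzzy set A:

```python
A = {"slow": 0.25, "moderate": 0.64, "fast": 1.0}

def con(A):   # CON: squares each grade, modeling "Very"
    return {x: mu ** 2 for x, mu in A.items()}

def dil(A):   # DIL: square root of each grade, modeling "More-or-less"
    return {x: mu ** 0.5 for x, mu in A.items()}

print(con(A)["slow"])  # 0.0625
print(dil(A)["fast"])  # 1.0
```

Note that CON lowers small grades proportionally more than large ones (0.25 drops to 0.0625, while 1.0 is unchanged), while DIL raises them, exactly as the text describes.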

Rough Set Theory

Rough set theory, introduced by Pawlak (1982, 1991), is a technique for dealing with uncertainty and for identifying cause-effect relationships in databases. An extensive theory for rough sets and their properties has been developed, and they have become a well-established approach for the management of uncertainty in a variety of applications. Rough sets involve the following:
  • U is the universe, which cannot be empty.

  • R is the indiscernibility relation or equivalence relation.

  • A = (U, R), an ordered pair, is called an approximation space.

  • [x]R denotes the equivalence class of R containing x, for any element x of U.

  • Elementary sets in A – the equivalence classes of R.

  • Definable set in A – any finite union of elementary sets in A.

Given an approximation space defined on some universe U that has an equivalence relation R imposed upon it, U is partitioned into equivalence classes called elementary sets that may be used to define other sets in A. A rough set X, where X ⊆ U, can be defined in terms of the definable sets in A by the following:
  • Lower approximation of X in A is the set RX = {x ∈ U / [x]R ⊆ X}.

  • Upper approximation of X in A is the set \( \overline{R}X \) = {x ∈ U / [x]R ∩ X ≠ ∅}.

POSR(X) = RX denotes the R-positive region of X, those elements which certainly belong to the rough set. The R-negative region of X, NEGR(X) = U − \( \overline{\mathrm{R}}\mathrm{X} \), contains the elements which do not belong to the rough set, and the boundary or R-borderline region of X, BNR(X) = \( \overline{\mathrm{R}}\mathrm{X} \) − RX, contains those elements which may or may not belong to the set. X is R-definable if and only if RX = \( \overline{\mathrm{R}}\mathrm{X} \). Otherwise, RX ≠ \( \overline{\mathrm{R}}\mathrm{X} \) and X is rough with respect to R. A rough set in A is the family of subsets of U having the same upper and lower approximations.
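The approximation regions can be computed mechanically from a partition of the universe; the universe, classes, and target set below are a toy illustration:

```python
U = {1, 2, 3, 4, 5, 6}
# The equivalence classes of the indiscernibility relation R partition U.
classes = [{1, 2}, {3, 4}, {5, 6}]
X = {1, 2, 3}  # the set to approximate

lower = set().union(*[c for c in classes if c <= X])  # RX: classes inside X
upper = set().union(*[c for c in classes if c & X])   # classes meeting X

positive = lower            # certainly in X
negative = U - upper        # certainly not in X
boundary = upper - lower    # possibly in X

print(sorted(lower))     # [1, 2]
print(sorted(upper))     # [1, 2, 3, 4]
print(sorted(boundary))  # [3, 4]
print(sorted(negative))  # [5, 6]
```

Here X is rough with respect to R, since the lower and upper approximations differ.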

Because there are advantages to both fuzzy set and rough set theories, several researchers have studied various ways of combining the two theories (Chanas and Kuchta 1992; Dubois and Prade 1987, 1992; Nanda and Majumdar 1992). Others have investigated the interrelations between the two theories (Chanas and Kuchta 1992; Pawlak 1985; Wygralak 1989). Fuzzy sets and rough sets are not equivalent, but complementary.

It has been shown in Wygralak (1989) that rough sets can be expressed by a fuzzy membership function μ : U → {0, 0.5, 1} representing the negative, boundary, and positive regions. In this model, all elements of the lower approximation, or positive region, have a membership value of one. Elements of the boundary region are assigned a membership value of 0.5, and elements not belonging to the rough set have a membership value of zero. Rough set definitions of union and intersection can be modified so that the fuzzy model satisfies all the properties of rough sets (Beaubouef and Petry 2000).

We integrate fuzziness into the rough set model in order to quantify levels of roughness in boundary region areas through the use of fuzzy membership values. Therefore, we do not require membership values of elements of the boundary region to equal 0.5, but allow them to range from zero to one, noninclusive. Additionally, the union and intersection operators for fuzzy rough sets are comparable to those for ordinary fuzzy sets, where MIN and MAX are used to obtain membership values of redundant elements.

Let U be a universe, X a rough set in U.


A fuzzy rough set Y in U is a membership function μY(x) which associates a grade of membership from the interval [0,1] with every element of U where
$$ {\upmu}_{\mathrm{Y}}\left(\mathrm{RX}\right)=1,\quad {\upmu}_{\mathrm{Y}}\left(\mathrm{U}-\overline{\mathrm{R}}\mathrm{X}\right)=0,\quad \mathrm{and}\quad 0<{\upmu}_{\mathrm{Y}}\left(\overline{\mathrm{R}}\mathrm{X}-\mathrm{RX}\right)<1. $$


The union of two fuzzy rough sets A and B is a fuzzy rough set C where
$$ \mathrm{C}=\left\{\mathrm{x}|\mathrm{x}\in \mathrm{A}\ \mathrm{OR}\ \mathrm{x}\in \mathrm{B}\right\},\mathrm{where}\ {\upmu}_{\mathrm{C}}\left(\mathrm{x}\right)=\operatorname{MAX}\left[{\upmu}_{\mathrm{A}}\left(\mathrm{x}\right),{\upmu}_{\mathrm{B}}\left(\mathrm{x}\right)\right]. $$


The intersection of two fuzzy rough sets A and B is a fuzzy rough set C where
$$ \mathrm{C}=\left\{\mathrm{x}|\mathrm{x}\in \mathrm{A}\ \mathrm{AND}\ \mathrm{x}\in \mathrm{B}\right\},\mathrm{where}\ {\upmu}_{\mathrm{C}}\left(\mathrm{x}\right)=\operatorname{MIN}\left[{\upmu}_{\mathrm{A}}\left(\mathrm{x}\right),{\upmu}_{\mathrm{B}}\left(\mathrm{x}\right)\right]. $$
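A minimal sketch of a membership function satisfying the fuzzy rough set definition above, built from given approximation regions; the boundary grade of 0.5 is an arbitrary illustrative choice (any value strictly between 0 and 1 would satisfy the definition):

```python
U = {1, 2, 3, 4, 5, 6}
lower = {1, 2}         # RX: certain members
upper = {1, 2, 3, 4}   # R-bar X: possible members

def mu_Y(x):
    # 1 on the lower approximation, 0 outside the upper approximation,
    # and a grade strictly between 0 and 1 on the boundary region.
    if x in lower:
        return 1.0
    if x not in upper:
        return 0.0
    return 0.5

# The three defining constraints hold for every element of U.
assert all(mu_Y(x) == 1.0 for x in lower)
assert all(mu_Y(x) == 0.0 for x in U - upper)
assert all(0 < mu_Y(x) < 1 for x in upper - lower)
```

The union and intersection definitions then combine such functions elementwise with MAX and MIN, just as for ordinary fuzzy sets.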

Rough Relational Database

The rough relational database model (Beaubouef et al. 1995) is an extension of the standard relational database model of Codd (Petry 1996). It captures all the essential features of rough sets theory including indiscernibility of elements denoted by equivalence classes and lower and upper approximation regions for defining sets which are indefinable in terms of the indiscernibility.

Every attribute domain is partitioned by some equivalence relation designated by the database designer or user. Within each domain, values that are considered indiscernible belong to the same equivalence class. The query mechanism uses this information to retrieve information based on equivalence with the class to which a value belongs, rather than strict equality, making the exact wording of queries less critical.

Recall is also improved in the rough relational database because rough relations provide possible matches to the query in addition to the certain matches which are obtained in the standard relational database. This is accomplished by using set containment in addition to equality of attributes in the calculation of lower and upper approximation regions of the query result.

The rough relational database has several features in common with the ordinary relational database. Both models represent data as a collection of relations containing tuples. These relations are sets. The tuples of a relation are its elements and, like elements of sets in general, are unordered and nonduplicated. A tuple ti takes the form (di1, di2, ..., dim), where dij is a domain value of a particular domain set Dj. In the ordinary relational database, dij ∈ Dj. In the rough database, however, as in other non-first normal form extensions to the relational model (Makinouchi 1977; Roth et al. 1987), dij ⊆ Dj, and although dij need not be a singleton, dij ≠ ∅. Let P(Di) denote powerset(Di) − {∅}, the set of all nonempty subsets of Di.


A rough relation R is a subset of the set cross product P(D1) × P(D2) × ⋅ ⋅ ⋅ × P(Dm).

A rough tuple t is any member of R, which implies that it is also a member of P(D1) × P(D2) × ⋅ ⋅ ⋅ × P(Dm). If ti is some arbitrary tuple, then ti = (di1, di2, ..., dim) where dij ⊆ Dj. A tuple in this model differs from that of ordinary databases in that the tuple components may be sets of domain values rather than single values. The set braces are omitted from singletons for notational simplicity.

Let [dxy] denote the equivalence class to which dxy belongs. When dxy is a set of values, the equivalence class is formed by taking the union of equivalence classes of members of the set; if dxy = {c1, c2, …, cn}, then [dxy] = [c1] ∪ [c2] ∪ … ∪ [cn].


Tuples ti = (di1, di2, …, dim) and tk = (dk1, dk2, …, dkm) are redundant if [dij] = [dkj] for all j = 1,…, m.

In the rough relational database, redundant tuples are removed in the merging process since duplicates are not allowed in sets, the structure upon which the relational model is based.
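Redundancy under set-valued tuple components can be checked by comparing unions of equivalence classes, as in the definition above. The domain values and class assignments below are hypothetical:

```python
# Hypothetical equivalence classes on a domain of town names:
# indiscernible values share a class label.
eq = {"VENTRESS": 0, "TORBERT": 0, "MORGANZA": 1}

def value_class(d):
    """Equivalence class of a tuple component; a set-valued component
    takes the union of its members' classes."""
    vals = d if isinstance(d, (set, frozenset)) else {d}
    return {eq[v] for v in vals}

def redundant(t1, t2):
    # Tuples are redundant if components are classwise indiscernible.
    return all(value_class(a) == value_class(b) for a, b in zip(t1, t2))

t1 = ("VENTRESS",)
t2 = ({"VENTRESS", "TORBERT"},)
print(redundant(t1, t2))  # True: both components map to class {0}
```

Such redundant tuples would be merged into one during relational operations.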

There are two basic types of relational operators. The first type arises from the fact that relations are considered sets of tuples. Therefore, operations which can be applied to sets also apply to relations. The most useful of these for database purposes are set difference, union, and intersection. Operators which do not come from set theory, but which are useful for retrieval of relational data, are select, project, and join.

In the rough relational database, relations are rough sets as opposed to ordinary sets. Therefore, new rough operators (—, ∪, ∩, σ, π, ⋈), which are comparable to the standard relational operators, must be developed for the rough relational database. Moreover, a mechanism must exist within the database to mark tuples of a rough relation as belonging to the lower or upper approximation of that rough relation. Properties of the rough relational operators can be found in Beaubouef et al. (1995).

Information Theory

In communication theory, Shannon (1948) introduced the concept of entropy which was used to characterize the information content of signals. Since then, variations of these information theoretic measures have been successfully applied to applications in many diverse fields. In particular, the representation of uncertain information by entropy measures has been applied to all areas of databases, including fuzzy database querying (Buckles and Petry 1983), data allocation (Fung and Lam 1980), classification in rule-based systems (Quinlan 1986), and measuring uncertainty in rough and fuzzy rough relational databases (Beaubouef et al. 1998).

In fuzzy set theory, the representation of uncertain information measures has been extensively studied (Bhandari and Pal 1993; de Luca and Termini 1972; Klir and Folger 1988). This entry relates the concepts of information theory to rough sets and compares these information theoretic measures to established rough set metrics of uncertainty. The measures are then applied to the rough relational database model (Beaubouef et al. 1995). The information content of both stored relational schemas and rough relations is expressed as types of rough entropy.

Rough set theory (Pawlak 1982) inherently models two types of uncertainty. The first type of uncertainty arises from the indiscernibility relation that is imposed on the universe, partitioning all values into a finite set of equivalence classes. If every equivalence class contains only one value, then there is no loss of information caused by the partitioning. In any coarser partitioning, however, there are fewer classes, and each class will contain a larger number of members. Our knowledge, or information, about a particular value decreases as the granularity of the partitioning becomes coarser.

Uncertainty is also modeled through the approximation regions of rough sets where elements of the lower approximation region have total participation in the rough set and those of the upper approximation region have uncertain participation in the rough set. Equivalently, the lower approximation is the certain region, and the boundary area of the upper approximation region is the possible region.

Pawlak (1991) discusses two numerical characterizations of the imprecision of a rough set X: accuracy and roughness. Accuracy measures the degree of completeness of knowledge about the given rough set X. It is simply the ratio of the number of elements in the lower approximation of X, RX, to the number of elements in the upper approximation, \( \overline{\mathrm{R}}\mathrm{X} \), defined as a ratio of the two set cardinalities as follows:
$$ {\upalpha}_{\mathrm{R}}\left(\mathrm{X}\right)=\operatorname{card}\left(\mathrm{RX}\right)/\operatorname{card}\left(\overline{\mathrm{R}}\mathrm{X}\right),\, \mathrm{where}\ 0\le {\upalpha}_{\mathrm{R}}\left(\mathrm{X}\right)\le 1. $$

The second measure, roughness, represents the degree of incompleteness of knowledge about the rough set. It is calculated by subtracting the accuracy from 1: ρR(X) = 1 - αR(X).
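The two measures reduce to a single cardinality ratio; sketched here with the approximation regions treated as plain sets (the element names are illustrative):

```python
def accuracy(lower, upper):
    # Ratio of lower- to upper-approximation cardinalities.
    return len(lower) / len(upper)

def roughness(lower, upper):
    # Degree of incompleteness of knowledge: 1 minus accuracy.
    return 1 - accuracy(lower, upper)

lower = {"A11", "A12", "A21", "A22"}
upper = lower | {"B11", "B12", "B13", "C1", "C2"}
print(round(accuracy(lower, upper), 3))   # 0.444
print(round(roughness(lower, upper), 3))  # 0.556
```

With four certain elements out of nine possible ones, accuracy is 4/9 and roughness 5/9.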

These measures require knowledge of the number of elements in each of the approximation regions and are good metrics for uncertainty as it arises from the boundary region, implicitly taking into account equivalence classes as they belong wholly or partially to the set. However, accuracy and roughness measures do not necessarily provide us with information on the uncertainty related to the granularity of the indiscernibility relation for those values that are totally included in the lower approximation region. For example,

Let the rough set X be defined as follows: X = {A11, A12, A21, A22, B11, C1}

with lower and upper approximation regions defined as
$$ \mathrm{RX}=\left\{\mathrm{A}11,\mathrm{A}12,\mathrm{A}21,\mathrm{A}22\right\}\, \mathrm{and}\ \overline{\mathrm{R}}\mathrm{X}=\left\{\mathrm{A}11,\mathrm{A}12,\mathrm{A}21,\mathrm{A}22,\mathrm{B}11,\mathrm{B}12,\mathrm{B}13,\mathrm{C}1,\mathrm{C}2\right\} $$
These approximation regions may result from one of several partitionings. Consider, for example, the following indiscernibility relations:
$$ {\mathrm{A}}_1=\left\{\left[\mathrm{A}11,\mathrm{A}12,\mathrm{A}21,\mathrm{A}22\right],\left[\mathrm{B}11,\mathrm{B}12,\mathrm{B}13\right],\left[\mathrm{C}1,\mathrm{C}2\right]\right\}, $$
$$ {\mathrm{A}}_2=\left\{\left[\mathrm{A}11,\mathrm{A}12\right],\left[\mathrm{A}21,\mathrm{A}22\right],\left[\mathrm{B}11,\mathrm{B}12,\mathrm{B}13\right],\left[\mathrm{C}1,\mathrm{C}2\right]\right\}, $$
$$ {\mathrm{A}}_3=\left\{\left[\mathrm{A}11\right],\left[\mathrm{A}12\right],\left[\mathrm{A}21\right],\left[\mathrm{A}22\right],\left[\mathrm{B}11,\mathrm{B}12,\mathrm{B}13\right],\left[\mathrm{C}1,\mathrm{C}2\right]\right\}. $$

All three of the above partitionings result in the same upper and lower approximation regions for the given set X and hence the same accuracy measure (4/9 = 0.444) since only those classes belonging to the lower approximation region were repartitioned. It is obvious, however, that there is more uncertainty in A1 than in A2 and more uncertainty in A2 than in A3. Therefore, a more comprehensive measure of uncertainty is needed.

We derive such a measure from techniques used for measuring entropy in classical information theory. Countless variations of the classical entropy have been developed, each tailored for a particular application domain or for measuring a particular type of uncertainty. Our rough entropy is defined such that we may apply it to rough databases. We define the entropy of a rough set X as follows:


The rough entropy Er(X) of a rough set X is calculated by
$$ {\mathrm{E}}_{\mathrm{r}}\left(\mathrm{X}\right)=-\left({\uprho}_{\mathrm{R}}\left(\mathrm{X}\right)\right)\ \left[\sum {\mathrm{Q}}_{\mathrm{i}}\;\log \left({\mathrm{P}}_{\mathrm{i}}\right)\right]\ \mathrm{for}\ \mathrm{i}=1,\dots \, \mathrm{n}\ \mathrm{equivalence}\ \mathrm{classes}. $$

The term ρR(X) denotes the roughness of the set X. The second term is the summation over the probabilities of the equivalence classes belonging either wholly or in part to the rough set X. There is no ordering associated with individual class members, so the probability of any one value of a class being named is the reciprocal of the number of elements in the class. If ci is the cardinality of equivalence class i and all members of a given equivalence class are equally likely, then Pi = 1/ci represents the probability of one of the values in class i. Qi denotes the probability of equivalence class i within the universe; it is computed by dividing the number of elements in class i by the total number of elements in all equivalence classes combined. The entropy of the sample rough set X, Er(X), is given below (using base-10 logarithms) for each of the possible indiscernibility relations A1, A2, and A3.

Using A1: –(5/9)[(4/9)log(1/4) + (3/9)log(1/3) + (2/9)log(1/2)] = 0.274

Using A2: –(5/9)[(2/9)log(1/2) + (2/9)log(1/2) + (3/9)log(1/3) + (2/9)log(1/2)] = 0.20

Using A3: –(5/9)[(1/9)log(1) + (1/9)log(1) + (1/9)log(1) + (1/9)log(1) + (3/9)log(1/3) + (2/9)log(1/2)] = 0.126

From the above calculations, it is clear that although each of the partitionings results in identical roughness measures, the entropy decreases as the classes become smaller through finer partitionings.
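The three entropies can be checked numerically. The helper below takes the roughness and the sizes of the upper-approximation equivalence classes; base-10 logarithms are used, since that is what reproduces the A1 figure above (an assumption inferred from the arithmetic, not stated explicitly in the text):

```python
from math import log10

def rough_entropy(roughness, class_sizes):
    """Er = -roughness * sum(Qi * log10(Pi)), with Qi = size/total
    and Pi = 1/size for each equivalence class."""
    total = sum(class_sizes)
    return -roughness * sum((c / total) * log10(1 / c) for c in class_sizes)

rho = 5 / 9  # roughness of the example rough set X
print(round(rough_entropy(rho, [4, 3, 2]), 3))           # A1: 0.274
print(round(rough_entropy(rho, [2, 2, 3, 2]), 3))        # A2: 0.2
print(round(rough_entropy(rho, [1, 1, 1, 1, 3, 2]), 3))  # A3: 0.126
```

The entropy falls as the lower-approximation classes are split into smaller pieces, even though the roughness 5/9 is identical in all three cases.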

Entropy and the Rough Relational Database

The basic concepts of rough sets and their information-theoretic measures carry over to the rough relational database model (Beaubouef et al. 1995). Recall that in the rough relational database, all domains are partitioned into equivalence classes and relations are not restricted to first normal form. We therefore have a type of rough set for each attribute of a relation. This results in a rough relation, since any tuple having a value for an attribute that belongs to the boundary region of its domain is a tuple belonging to the boundary region of the rough relation.

There are two things to consider when measuring uncertainty in databases: uncertainty or entropy of a rough relation that exists in a database at some given time and the entropy of a relation schema for an existing relation or query result. We must consider both since the approximation regions only come about by set values for attributes in given tuples. Without the extension of a database containing actual values, we only know about indiscernibility of attributes. We cannot consider the approximation regions.

We define the entropy for a rough relation schema as follows:


The rough schema entropy for a rough relation schema S is
$$ {\mathrm{E}}_{\mathrm{s}}\left(\mathrm{S}\right)=-{\sum}_{\mathrm{j}}\left[\sum {\mathrm{Q}}_{\mathrm{i}}\;\log \left({\mathrm{P}}_{\mathrm{i}}\right)\right]\quad \mathrm{for}\ \mathrm{i}=1,\dots \mathrm{n};\mathrm{j}=1,\dots, \mathrm{m} $$
where there are n equivalence classes of domain j, and m attributes in the schema R(A1, A2, …,Am).

This is similar to the definition of entropy for rough sets without factoring in roughness since there are no elements in the boundary region (lower approximation = upper approximation). However, because a relation is a cross product among the domains, we must take the sum of all these entropies to obtain the entropy of the schema. The schema entropy provides a measure of the uncertainty inherent in the definition of the rough relation schema taking into account the partitioning of the domains on which the attributes of the schema are defined.
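A sketch of the schema entropy as a sum of per-domain entropy terms, again with base-10 logarithms; the two attribute domains and their partition class sizes are hypothetical:

```python
from math import log10

def domain_term(class_sizes):
    # sum(Qi * log10(Pi)) over the equivalence classes of one domain.
    total = sum(class_sizes)
    return sum((c / total) * log10(1 / c) for c in class_sizes)

def schema_entropy(domains):
    """Es(S): negated sum of the entropy terms of every attribute domain.
    `domains` holds one list of equivalence-class sizes per attribute."""
    return -sum(domain_term(sizes) for sizes in domains)

# Two hypothetical attribute domains of four values each.
print(round(schema_entropy([[2, 2], [1, 3]]), 3))  # 0.659
```

A domain partitioned into singletons contributes nothing (log 1 = 0), so schemas over finer partitionings have lower schema entropy.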

We extend the schema entropy Es(S) to define the entropy of an actual rough relation instance ER(R) of some database D by multiplying each term in the sum by the roughness of the rough set of values for the domain of the given attribute.


The rough relation entropy of a particular extension of a schema is
$$ {\mathrm{E}}_{\mathrm{R}}\left(\mathrm{R}\right)=-{\sum}_{\mathrm{j}}{\mathrm{D}\uprho}_{\mathrm{j}}\left(\mathrm{R}\right)\, \left[\sum {\mathrm{D}\mathrm{Q}}_{\mathrm{i}}\;\log \left({\mathrm{D}\mathrm{P}}_{\mathrm{i}}\right)\right]\quad \mathrm{for}\ \mathrm{i}=1,\dots \mathrm{n};\mathrm{j}=1,\dots, \mathrm{m} $$
where Dρj(R) represents a type of database roughness for the rough set of values of the domain for attribute j of the relation, m is the number of attributes in the database relation, and n is the number of equivalence classes for a given domain for the database.

We obtain the Dρj(R) values by letting the non-singleton domain values represent elements of the boundary region, computing the original rough set accuracy and subtracting it from one to obtain the roughness. DQi is the probability of a tuple in the database relation having a value from class i, and DPi is the probability of a value for class i occurring in the database relation out of all the values which are given.

Information theoretic measures again prove to be a useful metric for quantifying information content. In rough sets and the rough relational database, this is especially useful since in ordinary rough sets, Pawlak’s measure of roughness does not seem to capture the information content as precisely as our rough entropy measure.

In rough relational databases, knowledge about entropy can either guide the database user toward less uncertain data or act as a measure of the uncertainty of a data set or relation. As rough relations become larger in terms of the number of tuples or attributes, the automatic calculation of some measure of entropy becomes a necessity. Our rough relation entropy measure fulfills this need.

Rough Fuzzy Relational Database

The fuzzy rough relational database, as in the ordinary relational database, represents data as a collection of relations containing tuples. Because a relation is considered a set having the tuples as its members, the tuples are unordered. In addition, there can be no duplicate tuples in a relation. A tuple ti takes the form (di1, di2, …, dim, d), where dij is a domain value of a particular domain set Dj and d ∈ Dμ, where Dμ is the interval [0,1], the domain for fuzzy membership values. In the ordinary relational database, dij ∈ Dj. In the fuzzy rough relational database, however, except for the fuzzy membership value, dij ⊆ Dj, and although dij is not restricted to be a singleton, dij ≠ ∅. Let P(Di) denote the set of all non-null members of the powerset of Di.


A fuzzy rough relation R is a subset of the set cross product P(D1) × P(D2) × ⋅ ⋅ ⋅ × P(Dm) × Dμ.

For a specific relation, R, membership is determined semantically. Given that D1 is the set of names of nuclear/chemical plants and D2 is the set of locations, and assuming that RIVERB is the only nuclear power plant located in VENTRESS, tuples such as (RIVERB, VENTRESS, 1) are elements of P(D1) × P(D2) × Dμ. However, only the element (RIVERB, VENTRESS, 1) of those listed above is a member of the relation R(PLANT, LOCATION, μ), which associates each plant with the town or community in which it is located. A fuzzy rough tuple t is any member of R. If ti is some arbitrary tuple, then ti = (di1, di2, …, dim, d) where dij ⊆ Dj and d ∈ Dμ.


An interpretation α = (a1, a2, …, am, aμ) of a fuzzy rough tuple ti = (di1, di2, …, dim, d) is any value assignment such that aj ∈ dij for all j.

The interpretation space is the cross product D1 × D2 × ⋅ ⋅ ⋅ × Dm × Dμ, but is limited for a given relation R to the set of those tuples which are valid according to the underlying semantics of R. In an ordinary relational database, because domain values are atomic, there is only one possible interpretation for each tuple ti. Moreover, the interpretation of ti is equivalent to the tuple ti. In the fuzzy rough relational database, this is not always the case.

Let [dxy] denote the equivalence class to which dxy belongs. When dxy is a set of values, the equivalence class is formed by taking the union of equivalence classes of members of the set; if dxy = {c1, c2, …, cn}, then [dxy] = [c1] ∪ [c2] ∪ … ∪ [cn].
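This class-union rule can be illustrated with a minimal Python sketch. The partition of location names below is a hypothetical assumption for illustration only (only VENTRESS and ADDIS appear elsewhere in this entry), not part of the model itself:

```python
# Hypothetical indiscernibility classes over a domain of location names.
PARTITION = [
    frozenset({"VENTRESS", "ADDIS"}),
    frozenset({"BATON ROUGE"}),
]

def eq_class(value):
    """Return the equivalence class [value] under the partition."""
    for cls in PARTITION:
        if value in cls:
            return cls
    return frozenset({value})  # a value not listed forms its own class

def eq_class_of_set(values):
    """[d] for a set-valued d is the union of its members' classes."""
    out = frozenset()
    for v in values:
        out |= eq_class(v)
    return out
```

For example, a set-valued attribute {VENTRESS, BATON ROUGE} yields the class union {VENTRESS, ADDIS, BATON ROUGE} under the assumed partition.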


Tuples ti = (di1, di2, …, din, d) and tk = (dk1, dk2, …, dkn, d) are redundant if [dij] = [dkj] for all j = 1, …, n.

If a relation contains only those tuples of a lower approximation, i.e., those tuples having a μ value equal to one, the interpretation α of a tuple is unique. This follows immediately from the definition of redundancy. In fuzzy rough relations, there are no redundant tuples. The merging process used in relational database operations removes duplicate tuples since duplicates are not allowed in sets, the structure upon which the relational model is based.

Tuples may be redundant in all values except μ. As in the union of fuzzy rough sets where the maximum membership value of an element is retained, it is the convention of the fuzzy rough relational database to retain the tuple having the higher μ value when removing redundant tuples during merging. If we are supplied with identical data from two sources, one certain and the other uncertain, we would want to retain the data that is certain, avoiding loss of information.
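The merging convention can be sketched as follows. The tuple layout (attribute values followed by a trailing μ) is an assumption for illustration, and redundancy is keyed on the raw attribute values; the full model would instead key on the tuple of equivalence classes:

```python
def merge(tuples):
    """Remove redundant tuples, retaining the one with the higher mu.
    Tuples are (attribute values ..., mu); set-valued attributes are
    frozen so they can serve as dictionary keys."""
    best = {}
    for *attrs, mu in tuples:
        key = tuple(frozenset(a) if isinstance(a, (set, frozenset)) else a
                    for a in attrs)
        if key not in best or mu > best[key]:
            best[key] = mu
    return [(*k, mu) for k, mu in best.items()]
```

Merging (MODERN, 1) with (MODERN, 0.02) retains (MODERN, 1), keeping the certain information as described above.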

Recall that the rough relational database is in non-first normal form; there are some attribute values that are sets. Another definition, which will be used for upper approximation tuples, is necessary for some of the alternate definitions of operators to be presented. This definition captures redundancy between elements of attribute values that are sets:


Two sub-tuples X = (dx1, dx2, …, dxm) and Y = (dy1, dy2, …, dym) are roughly redundant, X ≈R Y, if for some [p] ⊆ [dxj] and [q] ⊆ [dyj], [p] = [q] for all j = 1, …, m.

In order for any database to be useful, a mechanism for operating on the basic elements and retrieving specified data must be provided. The concepts of redundancy and merging play a key role in the operations defined.

We must first design our database using some type of semantic model. We use a variation of the entity-relationship diagram that we call a fuzzy rough E-R diagram. This diagram is similar to the standard E-R diagram in that entity types are depicted in rectangles, relationships with diamonds, and attributes with ovals. However, in the fuzzy rough model, it is understood that membership values exist for all instances of entity types and relationships. Attributes which allow values where we want to be able to define equivalences are denoted with an asterisk (∗) above the oval. These values are defined in the indiscernibility relation, which is not actually part of the database design, but inherent in the fuzzy rough model.

Our fuzzy rough E-R model (Beaubouef and Petry 2000) is similar to the second and third levels of fuzziness defined by Zvieli and Chen (1986). However, in our model, all entity and relationship occurrences (second level) are of the fuzzy type, so we do not mark an “f” beside each one. Zvieli and Chen’s third level considers attributes that may be fuzzy. They use triangles instead of ovals to represent these attributes. We do not introduce fuzziness at the attribute level of our model in this paper, only roughness or indiscernibility, and denote those attributes with the “∗.” From the fuzzy rough E-R diagram, we design the structure of the fuzzy rough relational database. If we have a priori information about the types of queries that will be involved, we can make intelligent choices that will maximize computer resources.

We next formally define the fuzzy rough relational database operators and discuss issues relating to the real-world problems of data representation and modeling. We may view indiscernibility as being modeled through the use of the indiscernibility relation, imprecision through the use of non-first normal form constructs, and degree of uncertainty and fuzziness through the use of tuple membership values, which are given as the value for the μ attribute in every fuzzy rough relation.

Fuzzy Rough Relational Operators

In Beaubouef et al. (1995), we defined several operators for the rough relational algebra. We now define similar operators for the fuzzy rough relational database as in Beaubouef and Petry (1994a). Recall that for all of these operators, the indiscernibility relation is used for equivalence of attribute values rather than equality of values.


The fuzzy rough relational difference operator is very much like the ordinary difference operator in relational databases and in sets in general. It is a binary operator that returns those elements of the first argument that are not contained in the second argument.

In the fuzzy rough relational database, the difference operator is applied to two fuzzy rough relations and, as in the rough relational database, indiscernibility, rather than equality of attribute values, is used in the elimination of redundant tuples. Hence, the difference operator is somewhat more complex. Let X and Y be two union compatible fuzzy rough relations.


The fuzzy rough difference, X - Y, between X and Y is a fuzzy rough relation T where
$$ \mathrm{T}=\left\{\mathrm{t}\left({\mathrm{d}}_1,\dots, {\mathrm{d}}_{\mathrm{n}},{\upmu}_{\mathrm{i}}\right)\in \mathrm{X}|\mathrm{t}\left({\mathrm{d}}_1,\dots, {\mathrm{d}}_{\mathrm{n}},{\upmu}_{\mathrm{i}}\right)\notin \mathrm{Y}\right\}\cup \left\{\mathrm{t}\left({\mathrm{d}}_1,\dots, {\mathrm{d}}_{\mathrm{n}},{\upmu}_{\mathrm{i}}\right)\in \mathrm{X}|\mathrm{t}\left({\mathrm{d}}_1,\dots, {\mathrm{d}}_{\mathrm{n}},{\upmu}_{\mathrm{j}}\right)\in \mathrm{Y}\ \mathrm{and}\ {\upmu}_{\mathrm{i}}>{\upmu}_{\mathrm{j}}\right\} $$

The resulting fuzzy rough relation contains all those tuples which are in the lower approximation of X, but not redundant with a tuple in the lower approximation of Y. It also contains those tuples belonging to upper approximation regions of both X and Y, but which have a higher μ value in X than in Y. For example, let X contain the tuple (MODERN, 1) and Y contain the tuple (MODERN, .02). It would not be desirable to subtract out certain information with possible information, so X - Y yields (MODERN, 1).
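A sketch of the difference operator under these conventions follows. For simplicity, redundancy is reduced to equality of attribute values; the full model would compare equivalence classes under the indiscernibility relation:

```python
def fuzzy_rough_difference(X, Y):
    """X - Y: a tuple of X survives unless Y contains a redundant tuple
    whose membership value is at least as high. Tuples are laid out as
    (attribute values ..., mu)."""
    y_mu = {}
    for *attrs, mu in Y:
        key = tuple(attrs)
        y_mu[key] = max(mu, y_mu.get(key, 0.0))
    return [(*attrs, mu) for *attrs, mu in X
            if tuple(attrs) not in y_mu or mu > y_mu[tuple(attrs)]]
```

As in the text's example, subtracting a relation containing (MODERN, 0.02) from one containing (MODERN, 1) retains (MODERN, 1).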


Because relations in databases are considered as sets, the union operator can be applied to any two union-compatible relations to result in a third relation which has as its tuples all the tuples contained in either or both of the two original relations. The union operator can be extended to apply to fuzzy rough relations. Let X and Y be two union compatible fuzzy rough relations.


The fuzzy rough union of X and Y, X ∪ Y is a fuzzy rough relation T where
$$ \mathrm{T}=\left\{\mathrm{t}|\mathrm{t}\in \mathrm{X}\ \mathrm{OR}\ \mathrm{t}\in \mathrm{Y}\right\}\quad \mathrm{and}\quad {\upmu}_{\mathrm{T}}\left(\mathrm{t}\right)=\operatorname{MAX}\left[{\upmu}_{\mathrm{X}}\left(\mathrm{t}\right),{\upmu}_{\mathrm{Y}}\left(\mathrm{t}\right)\right]. $$

The resulting relation T contains all tuples in either X or Y or both, merged together and having redundant tuples removed. If X contains a tuple that is redundant with a tuple in Y except for the μ value, the merging process will retain only that tuple with the higher μ value.


The fuzzy rough intersection, another binary operator on fuzzy rough relations, can be defined similarly.


The fuzzy rough intersection of X and Y, X ∩ Y is a fuzzy rough relation T where
$$ \mathrm{T}=\left\{\mathrm{t}|\mathrm{t}\in \mathrm{X}\ \mathrm{AND}\ \mathrm{t}\in \mathrm{Y}\right\}\quad \mathrm{and}\quad {\upmu}_{\mathrm{T}}\left(\mathrm{t}\right)=\operatorname{MIN}\ \left[{\upmu}_{\mathrm{X}}\left(\mathrm{t}\right),{\upmu}_{\mathrm{Y}}\left(\mathrm{t}\right)\right]. $$

In intersection, the MIN operator is used in the merging of equivalent tuples having different μ values, and the result contains all tuples that are members of both of the original fuzzy rough relations.
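The MAX and MIN merging rules for union and intersection can be sketched together. As before, redundancy is simplified to equality of attribute values, a stand-in for the full indiscernibility comparison:

```python
def _max_merge(rel):
    """Group tuples by attribute values, keeping the maximum mu."""
    d = {}
    for *attrs, mu in rel:
        key = tuple(attrs)
        d[key] = max(mu, d.get(key, 0.0))
    return d

def fuzzy_rough_union(X, Y):
    """All tuples of X or Y; equivalent tuples keep the MAX mu."""
    t = _max_merge(X)
    for key, mu in _max_merge(Y).items():
        t[key] = max(mu, t.get(key, 0.0))
    return [(*k, mu) for k, mu in t.items()]

def fuzzy_rough_intersection(X, Y):
    """Tuples present in both X and Y; equivalent tuples keep the MIN mu."""
    gx, gy = _max_merge(X), _max_merge(Y)
    return [(*k, min(gx[k], gy[k])) for k in gx if k in gy]
```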


An alternate fuzzy rough intersection of X and Y based on rough redundancy (≈R), X ∩A Y, is a fuzzy rough relation T where
$$ \mathrm{T}=\left\{\mathrm{t}|\mathrm{t}\in \mathrm{X},\mathrm{and}\exists \mathrm{s}\in \mathrm{Y}|\mathrm{t}{\approx}_{\mathbf{R}}\mathrm{s}\right\}\cup \left\{\mathrm{s}|\mathrm{s}\in \mathrm{Y},\mathrm{and}\exists \mathrm{t}\in \mathrm{X}|\mathrm{s}{\approx}_{\mathbf{R}}\mathrm{t}\right\}\ \mathrm{and} $$
$$ {\upmu}_{\mathrm{T}}\left(\mathrm{t}\right)=\operatorname{MIN}\left[{\upmu}_{\mathrm{X}}\left(\mathrm{t}\right),{\upmu}_{\mathrm{Y}}\left(\mathrm{t}\right)\right]. $$


The select operator for the fuzzy rough relational database model, σ, is a unary operator which takes a fuzzy rough relation X as its argument and returns a fuzzy rough relation containing a subset of the tuples of X, selected on the basis of values for a specified attribute. The operation σA = a(X), for example, returns those tuples in X where attribute A is equivalent to the class [a]. In general, select returns a subset of the tuples that match some selection criteria.

Let R be a relation schema, X a fuzzy rough relation on that schema, and A an attribute in R, a = {ai} and b = {bj}, where ai,bj ∈ dom(A) and ∪x is interpreted as “the union over all x.”


The fuzzy rough selection, σA = a(X), of tuples from X is a fuzzy rough relation Y having the same schema as X and where
$$ \mathrm{Y}=\left\{\mathrm{t}\in \mathrm{X}|{\cup}_{\mathrm{i}}\left[{\mathrm{a}}_{\mathrm{i}}\right]\subseteq {\cup}_{\mathrm{j}}\left[{\mathrm{b}}_{\mathrm{j}}\right]\right\}, $$
where ai ∈ a, bj ∈ t(A), and where membership values for tuples are calculated by multiplying the original membership value by
$$ \operatorname{card}\left(\mathbf{a}\right)/\operatorname{card}\left(\mathbf{b}\right) $$
where card(x) returns the cardinality, or number of elements, in x.

Assume we want to retrieve those elements where CITY = “ADDIS” from the following fuzzy rough tuples:


The result of the selection is the following:

where the μ for the second tuple is the product of the original membership value 0.7 and 1/3.
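Although the example relations are not reproduced here, the selection semantics can be sketched. The city values below are hypothetical except for ADDIS, and the second tuple reproduces the 0.7 × 1/3 scaling noted above; with the default eq_class, every value forms its own singleton class:

```python
def select_eq(X, attr_index, a, eq_class=lambda v: {v}):
    """Sketch of sigma_{A = a}(X): keep a tuple when the classes of the
    selection constant a are covered by the classes of the tuple's
    attribute value b, scaling mu by card(a)/card(b)."""
    result = []
    for *attrs, mu in X:
        b = attrs[attr_index]
        classes_a = set().union(*(eq_class(v) for v in a))
        classes_b = set().union(*(eq_class(v) for v in b))
        if classes_a <= classes_b:
            result.append((*attrs, mu * len(a) / len(b)))
    return result
```

Selecting CITY = {ADDIS} keeps a tuple whose CITY value is the three-element set {ADDIS, PLAQUEMINE, VENTRESS} with its μ scaled by 1/3.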


Project is a unary fuzzy rough relational operator. It returns a relation that contains a subset of the columns of the original relation. Let X be a fuzzy rough relation with schema A, and let B be a subset of A. The fuzzy rough projection of X onto B is a fuzzy rough relation Y obtained by omitting the columns of X which correspond to attributes in A – B and removing redundant tuples. Recall that the definition of redundancy accounts for indiscernibility, which is central to rough set theory, and that higher μ values have priority over lower ones.


The fuzzy rough projection of X onto B, πB(X), is a fuzzy rough relation Y with schema Y(B) where
$$ \mathrm{Y}\left(\mathrm{B}\right)=\left\{\mathrm{t}\left(\mathrm{B}\right)|\mathrm{t}\in \mathrm{X}\right\}. $$


Join is a binary operator that takes related tuples from two relations and combines them into single tuples of the resulting relation. It uses common attributes to combine the two relations into one, usually larger, relation. Let X(A1, A2, …, Am) and Y(B1, B2, …, Bn) be fuzzy rough relations with m and n attributes, respectively, and A ∪ B = C, the schema of the resulting fuzzy rough relation T.


The fuzzy rough join, X ⋈<JOIN CONDITION>Y, of two relations X and Y, is a relation T(C1, C2, …, Cm + n) where
$$ \mathrm{T}=\left\{\mathrm{t}|\exists {\mathrm{t}}_{\mathrm{X}}\in \mathrm{X},{\mathrm{t}}_{\mathrm{Y}}\in \mathrm{Y}\ \mathrm{for}\ {\mathrm{t}}_{\mathrm{X}}=\mathrm{t}\left(\mathrm{A}\right),{\mathrm{t}}_{\mathrm{Y}}=\mathrm{t}\left(\mathrm{B}\right)\right\},\mathrm{and}\ \mathrm{where} $$
$$ {\mathrm{t}}_{\mathrm{X}}\left(\mathrm{A}\cap \mathrm{B}\right)={\mathrm{t}}_{\mathrm{Y}}\left(\mathrm{A}\cap \mathrm{B}\right),\, \upmu =1 $$
$$ {\mathrm{t}}_{\mathrm{X}}\left(\mathrm{A}\cap \mathrm{B}\right)\subseteq {\mathrm{t}}_{\mathrm{Y}}\left(\mathrm{A}\cap \mathrm{B}\right)\ \mathrm{or}\ {\mathrm{t}}_{\mathrm{Y}}\left(\mathrm{A}\cap \mathrm{B}\right)\subseteq {\mathrm{t}}_{\mathrm{X}}\left(\mathrm{A}\cap \mathrm{B}\right),\upmu =\operatorname{MIN}\left({\upmu}_{\mathrm{X}},{\upmu}_{\mathrm{Y}}\right) $$

<JOIN CONDITION> is a conjunction of one or more conditions of the form A = B.

Only those tuples which resulted from the “joining” of tuples that were both in lower approximations in the original relations belong to the lower approximation of the resulting fuzzy rough relation. All other “joined” tuples belong to the upper approximation only (the boundary region) and have membership values less than one. The fuzzy membership value of the resultant tuple is simply calculated as in Buckles and Petry (1985) by taking the minimum of the membership values of the original tuples. Taking the minimum value also follows the logic of Ola and Ozsoyoglu (1993), where, in joins of tuples with different levels of information uncertainty, the resultant tuple can have no greater certainty than that of its least certain component.

Fuzzy and rough set techniques integrated into the underlying data model result in databases that can more accurately represent real-world enterprises since they incorporate uncertainty management directly into the data model itself. This is useful as is for obtaining greater information through the querying of rough and fuzzy databases. Additional benefits may be realized when they are used in the process of data mining.

Rough Set Modeling of Spatial Data

Many of the problems associated with data are prevalent in all types of database systems. Spatial databases and GIS contain descriptive as well as positional data (Jing and Wenwen 2016). The various forms of uncertainty occur in both types of data, so many of the issues apply to ordinary databases as well, such as integration of data from multiple sources, time-variant data, uncertain data, imprecision in measurement, inconsistent wording of descriptive data, and “binning” or grouping of data into fixed categories, which also are employed in spatial contexts (Petry et al. 2005; Tavana et al. 2016).

First consider an example of the use of rough sets in representing spatially related data. Let U = {tower, stream, creek, river, forest, woodland, pasture, meadow}, and let the equivalence relation R be defined as follows:
$$ \mathrm{R}\ast =\left\{\left[\mathrm{tower}\right],\left[\mathrm{stream},\mathrm{creek},\mathrm{river}\right],\left[\mathrm{forest},\mathrm{woodland}\right],\left[\mathrm{pasture},\mathrm{meadow}\right]\right\}. $$
Given some set X = {tower, stream, creek, river, forest, pasture}, we would like to define it in terms of its lower and upper approximations:
$$ \underline{\mathrm{R}}\mathrm{X}=\left\{\mathrm{tower},\mathrm{stream},\mathrm{creek},\mathrm{river}\right\}, $$
$$ \overline{\mathrm{R}}\mathrm{X}=\left\{\mathrm{tower},\mathrm{stream},\mathrm{creek},\mathrm{river},\mathrm{forest},\mathrm{woodland},\mathrm{pasture},\mathrm{meadow}\right\}. $$
A rough set in the approximation space A is the collection of subsets of U having the same upper and lower approximations. In the example given, the rough set is
$$ {\displaystyle \begin{array}{l}\left\{\left\{\mathrm{tower},\mathrm{stream},\mathrm{creek},\mathrm{river},\mathrm{forest},\mathrm{pasture}\right\}\right.\\ {}\, \left\{\mathrm{tower},\mathrm{stream},\mathrm{creek},\mathrm{river},\mathrm{forest},\mathrm{meadow}\right\}\\ {}\, \left\{\mathrm{tower},\mathrm{stream},\mathrm{creek},\mathrm{river},\mathrm{woodland},\mathrm{pasture}\right\}\\ {}\, \left.\, \left\{\mathrm{tower},\mathrm{stream},\mathrm{creek},\mathrm{river},\mathrm{woodland},\mathrm{meadow}\right\}\right\}.\end{array}} $$
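The lower and upper approximations of this example can be reproduced with a short sketch:

```python
def approximations(partition, X):
    """Lower and upper approximations of X under a partition of U."""
    lower, upper = set(), set()
    for cls in partition:
        if cls <= X:       # class entirely inside X: certain
            lower |= cls
        if cls & X:        # class overlapping X: possible
            upper |= cls
    return lower, upper

# The partition R* and set X from the terrain example above.
R_star = [{"tower"}, {"stream", "creek", "river"},
          {"forest", "woodland"}, {"pasture", "meadow"}]
X = {"tower", "stream", "creek", "river", "forest", "pasture"}
```

Applying `approximations(R_star, X)` yields the lower approximation {tower, stream, creek, river} and the upper approximation containing all eight terrain features, as shown above.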

Often spatial data is associated with a particular grid. The positions are set up in a regular matrix-like structure, and data is affiliated with point locations on the grid. This is the case for raster data and for other types of non-vector type data such as topography or sea surface temperature data. There is a trade-off between the resolution or the scale of the grid and the amount of system resources necessary to store and process the data. Higher resolutions provide more information, but at a cost of memory space and execution time.

If we approach these data issues from a rough set point of view, it can be seen that there is indiscernibility inherent in the process of gridding or rasterizing data. A data item at a particular grid point in essence may represent data near the point as well. This is due to the fact that often point data must be mapped to the grid using techniques such as nearest-neighbor, averaging, or statistics. The rough set indiscernibility relation may be set up so that the entire spatial area is partitioned into equivalence classes where each point on the grid belongs to an equivalence class. If the resolution of the grid changes, then, in fact, this is changing the granularity of the partitioning, resulting in fewer, but larger classes.

The approximation regions of rough sets are beneficial whenever information concerning spatial data regions is accessed. Consider a region such as a forest. One can reasonably conclude that any grid point identified as FOREST that is surrounded on all sides by grid points also identified as FOREST is, in fact, a point represented by the feature FOREST. However, consider points identified as FOREST that are adjacent to points identified as MEADOW. Is it not possible that these points represent meadow area as well as forest area but were identified as FOREST in the classification process? Likewise, points identified as MEADOW but adjacent to FOREST points may represent areas that contain part of the forest. This uncertainty maps naturally to the use of the approximation regions of the rough set theory, where the lower approximation region represents certain data and the boundary region of the upper approximation represents uncertain data. It applies to spatial database querying and spatial database mining operations.

If we force a finer granulation of the partitioning, a smaller boundary region results. This occurs when the resolution is increased. As the partitioning becomes finer and finer, finally a point is reached where the boundary region is nonexistent. Then the upper and lower approximation regions are the same, and there is no uncertainty in the spatial data as can be determined by the representation of the model.

In Worboys (1998a), Worboys models imprecision in spatial data based on the resolution at which the data is represented and addresses issues related to the integration of such data. This approach relies on indiscernibility – a core concept for rough sets – but does not carry over the entire framework and is described only as “reminiscent of the theory of rough sets” (Worboys 1998b). Ahlqvist and colleagues (Ahlqvist et al. 2000) used a rough set approach to define a rough classification of spatial data and to represent spatial locations. They also proposed a measure for the quality of a rough classification compared to a crisp classification and evaluated their technique on actual data from vegetation map layers. They considered the combination of fuzzy and rough set approaches for reclassification as required by the integration of geographic data. Another research group in a mapping and GIS context (Wang et al. 2002) has developed an approach using a rough raster space for the field representation of a spatial entity and evaluated it on a classification case study for remote sensing images. Bittner and Stell (2003) consider K-labeled partitions, which can represent maps, and then develop their relationship to rough sets to approximate map objects with vague boundaries. Additionally, they investigate stratified partitions, which can be used to capture levels of detail or granularity, such as in the consideration of scale transformations in maps, and extend this approach using the concepts of stratified rough sets. An additional approach for dealing with uncertain spatial data uses a Dempster-Shafer representation (Shafer 1976): by considering uncertainty in a spatial location as having a range that is most probable and an outer range that is possible, and using nested intervals around a point, the uncertainty can be modeled (Elmore et al. 2017b).

Data Mining in Rough Databases

Association rules capture the idea of certain data items commonly occurring together and have been often considered in the analysis of a “market basket” of purchases. For example, a delicatessen retailer might analyze the previous year’s sales and observe that of all purchases 30% were of both cheese and crackers and, for any of the sales that included cheese, 75% also included crackers. Then it is possible to conclude a rule of the form:
$$ \mathrm{Cheese}\to \mathrm{Crackers} $$
This rule is said to have a 75% degree of confidence and a 30% degree of support. This particular form of data mining is largely based on the Apriori algorithm developed by Agrawal et al. (1993). Let the database of possible data items (Han and Kamber 2006) be
$$ \mathrm{D}=\left\{{\mathrm{d}}_1,{\mathrm{d}}_2,\dots {\mathrm{d}}_{\mathrm{n}}\right\} $$
and the relevant set of transactions (sales, query results, etc.) are
$$ \mathrm{T}=\left\{{\mathrm{T}}_1,{\mathrm{T}}_2,\dots \right\} $$
where Ti ⊆ D. We are interested in discovering if there is a relationship between two sets of items (called itemsets) Xj, Xk; Xj, Xk ⊆ D. For such a relationship to be determined, the entire set of transactions in T must be examined and a count made of the number of transactions containing these sets, where a transaction Ti contains Xm if Xm ⊆ Ti. This count, called the support count of Xm, SCT (Xm), will be appropriately modified in the case of rough sets.
There are then two measures used in determining rules, based on the percentage of Ti’s in T that:
  1. Contain both Xj and Xk (i.e., Xj ∪ Xk) – called the support s.

  2. Contain Xk, given that they contain Xj – called the confidence c.


The support and confidence can be interpreted as probabilities:
  1. s – Prob (Xj ∪ Xk) and

  2. c – Prob (Xk | Xj)


We assume the system user has provided minimum values for these in order to generate only sufficiently interesting rules. A rule whose support and confidence exceeds these minimums is called a strong rule.
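The two measures can be sketched directly; the transaction layout (a list of item sets) is an assumption for illustration:

```python
def support_confidence(transactions, Xj, Xk):
    """Support and confidence for the rule Xj -> Xk over a list of
    transactions, each a set of items."""
    n = len(transactions)
    both = sum(1 for t in transactions if Xj <= t and Xk <= t)
    antecedent = sum(1 for t in transactions if Xj <= t)
    return both / n, (both / antecedent if antecedent else 0.0)
```

With ten hypothetical transactions of which three contain both cheese and crackers and a fourth contains only cheese, this reproduces the 30% support and 75% confidence of the delicatessen example.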

The result of a query is then
$$ \mathrm{T}=\left\{\underline{\mathrm{R}}\mathrm{T},\overline{\mathrm{R}}\mathrm{T}\right\} $$
and so we must take into account the lower approximation, \( \underline{\mathrm{R}}\mathrm{T} \), and upper approximation, \( \overline{\mathrm{R}}\mathrm{T} \), results of the rough query in developing the association rules.
Recall that in order to generate frequent itemsets, we must count the number of transactions Tj that support an itemset Xj. In the ordinary data mining algorithm, one simply counts the occurrence of a value as 1 if in the set or 0 if not. But now since the query result T is a rough set, we must modify the support count SCT. So we define the rough support count, RSCT, for the set Xj, to count differently in the upper and lower approximations:
$$ {\mathrm{RSC}}_{\mathrm{T}}\left({\mathrm{X}}_{\mathrm{j}}\right)=\sum \limits_{\mathrm{i}}{\mathrm{W}}_{{\mathrm{T}}_{\mathrm{i}}}\left({\mathrm{X}}_{\mathrm{j}}\right);{\mathrm{X}}_{\mathrm{j}}\subseteq {\mathrm{T}}_{\mathrm{i}} $$
$$ \mathrm{where}\ {\mathrm{W}}_{{\mathrm{T}}_{\mathrm{i}}}\left({\mathrm{X}}_{\mathrm{j}}\right)=\left\{\begin{array}{l}1\quad \mathrm{if}\, {\mathrm{T}}_{\mathrm{i}}\in \underline {\mathrm{R}}\mathrm{T}\\ {}\mathrm{a}\quad \mathrm{if}\, {\mathrm{T}}_{\mathrm{i}}\in \overline{\mathrm{R}}\mathrm{T},\quad 0<\mathrm{a}<1\end{array}\right. $$

The value, a, can be a subjective value obtained from the user, depending on a relative assessment of the roughness of the query result T. For the data mining example of the next section, we chose a neutral default value of a = ½. Note that WTi(Xj) is included in the summation only if all of the values of the itemset Xj are included in the transaction, i.e., it is a subset of the transaction.

Finally to produce the association rules from the set of relevant data T retrieved from the spatial database, we must consider the frequent itemsets. For the purposes of generating a rule such as Xj → Xk, we can now extend the approach to rough support and confidence as follows:
$$ \mathrm{RS}={\mathrm{RSC}}_{\mathrm{T}}\left({\mathrm{X}}_{\mathrm{j}}\cup {\mathrm{X}}_{\mathrm{k}}\right)/\mid \mathrm{T}\mid $$
$$ \mathrm{RC}={\mathrm{RSC}}_{\mathrm{T}}\left({\mathrm{X}}_{\mathrm{j}}\cup {\mathrm{X}}_{\mathrm{k}}\right)/{\mathrm{RSC}}_{\mathrm{T}}\left({\mathrm{X}}_{\mathrm{j}}\right) $$
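These rough measures can be sketched as follows. We read the weighting as 1 for transactions in the lower approximation and a for those only in the boundary of the upper approximation (since the lower approximation is contained in the upper), and take the neutral default a = ½:

```python
def rough_support_count(lower, boundary, itemset, a=0.5):
    """RSC_T: weight 1 for transactions in the lower approximation,
    weight a (0 < a < 1) for transactions only in the boundary region."""
    return (sum(1 for t in lower if itemset <= t)
            + sum(a for t in boundary if itemset <= t))

def rough_rule_measures(lower, boundary, Xj, Xk, a=0.5):
    """Rough support RS and rough confidence RC for the rule Xj -> Xk."""
    n = len(lower) + len(boundary)
    both = rough_support_count(lower, boundary, Xj | Xk, a)
    ante = rough_support_count(lower, boundary, Xj, a)
    return both / n, (both / ante if ante else 0.0)
```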

In the spatial data mining area, there have only been a few efforts using rough sets. In the research described in (Beaubouef and Petry 2002; Bhattacharya and Bhatnagar 2012), approaches for attribute induction knowledge discovery (Raschia and Mouaddib 2002) in rough spatial data are investigated. In Bittner (2000) Bittner considers rough sets for spatiotemporal data and how to discover characteristic configurations of spatial objects focusing on the use of topological relationships for characterizations. In a survey of uncertainty-based spatial data mining, Shi et al. (2003) provide a brief general comparison of fuzzy and rough set approaches for spatial data mining.

Aggregation of Uncertain Information

Uncertainty arising from multiple sources and of many forms appears in the everyday activities and decisions of humans. We want to examine approaches that can be used to combine these uncertainties into forms that can become useful for decision-making. Effective decision-making should be able to make use of all the available, relevant information about such combined uncertainty (Ferson and Kreinovich 2001). In this section we describe approaches for combining separately possibilistic uncertainty, probabilistic uncertainty, and situations where both forms of uncertainty appear.

To formalize the discussion, let V be a discrete variable taking values in a space X that has both aleatory and epistemic sources of uncertainty (Parsons 2001). Let P be a probability distribution P: X → [0, 1] such that pk ∈ [0,1], \( \sum \limits_{\mathrm{k}=1}^{\mathrm{n}} \)pk = 1, that models the aleatory uncertainty. Then the epistemic uncertainty can be modeled by a possibility distribution (Makinouchi 1977) such that Π: X → [0, 1], where π(xk) gives the possibility that xk is the value of V, k = 1…n. A usual requirement here is the normality condition, Maxx [π(x)] = 1; that is, at least one element in X must be fully possible. Abbreviating our notation so that pk = p(xk) and πk = π(xk), we have P = {p1, p2, …, pn} and Π = {π1, π2, …, πn}.


Shannon Entropy

Shannon entropy has been the most broadly applied measure of randomness or information content (Shannon 1948). For a probability distribution P = {p1, p2, …..pn} this is given as
$$ \mathrm{S}\left(\mathrm{P}\right)=-\sum \limits_{\mathrm{i}=1}^{\mathrm{n}}{\mathrm{p}}_{\mathrm{i}}\;\ln\;\left({\mathrm{p}}_{\mathrm{i}}\right). $$


Gini Index

The Gini index, G(P), also known as the Gini coefficient, is a measure of statistical dispersion developed by Gini (1912), and is defined as
$$ \mathrm{G}\left(\mathrm{P}\right)\equiv 1-\sum \limits_{\mathrm{i}=1}^{\mathrm{n}}{{\mathrm{p}}_{\mathrm{i}}}^2 $$

Some practitioners use G(P) rather than S(P) since it does not involve a logarithm, making analytic solutions simpler. The Gini index is used in the consideration of inequalities in various areas such as economics, ecology, and engineering (Aristondo et al. 2012). A very important application of the Gini index is as a splitting criterion for decision tree induction in machine learning and data mining (Breiman et al. 1984).

It is accepted in practice for diagnostic test selection that the Shannon and Gini measures are interchangeable (Sent and van de Gaag 2007). The specific relationship of Shannon entropy and the Gini index has been discussed in the literature (Eliazar and Sokolov 2010). Theoretical support for this practice is provided in Yager’s independent consideration of alternative measures of entropy (Yager 1995), where he derives the same form for an entropy measure as the Gini measure.
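Both measures are straightforward to compute; a small sketch:

```python
import math

def shannon_entropy(p):
    """S(P) = -sum p_i ln(p_i), with zero-probability terms contributing 0."""
    return -sum(pi * math.log(pi) for pi in p if pi > 0)

def gini_index(p):
    """G(P) = 1 - sum p_i^2."""
    return 1.0 - sum(pi * pi for pi in p)
```

Both measures vanish for a degenerate distribution (one outcome certain) and are maximized by the uniform distribution, which is why they can serve interchangeably in practice as dispersion or impurity criteria.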



The concept of specificity in the framework of possibility theory has a function analogous to the entropy measures for probability we have discussed above. Yager (1982) gave a formal definition of the specificity of a fuzzy subset which was extended for possibility distributions as
$$ \mathrm{Sp}\left(\Pi \right)=\underset{0}{\overset{\alpha_{\mathrm{max}}}{\int }}\frac{1}{\operatorname{card}\left({\Pi}_{\alpha}\right)}\,\mathrm{d}\alpha $$
where αmax = Maxx Π(x) and Πα is the α-possibility level of Π, the subset of elements having possibility of at least α. A linear specificity measure provides an intuitive measure of the specificity of a possibility distribution (Pedrycz and Gomide 1996). Let πm = \( \overset{\mathrm{n}}{\underset{\mathrm{k}=1}{\operatorname{Max}}} \) πk; the specificity measure is then this maximum value minus the average of the other possibility values (Yager 1992):
$$ \mathrm{Sp}\left(\Pi \right)={\uppi}_{\mathrm{m}}-\left(\sum \limits_{\mathrm{k}=1,\, \mathrm{k}\ne \mathrm{m}}^{\mathrm{n}}{\uppi}_{\mathrm{k}}\right)/\left(\mathrm{n}-1\right) $$
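A sketch of the linear specificity measure, averaging the remaining possibility values over n − 1 so that a single fully possible element gives specificity 1 and total ignorance (all values 1) gives specificity 0:

```python
def linear_specificity(pi):
    """Linear specificity: the largest possibility value minus the
    average of the remaining values."""
    m = max(pi)
    idx = pi.index(m)
    rest = [v for j, v in enumerate(pi) if j != idx]
    if not rest:
        return m
    return m - sum(rest) / len(rest)
```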



A number of consistency measures relating probability and possibility distributions have been proposed, in particular the Zadeh (1978) consistency measure,
$$ {\mathrm{C}}_{\mathrm{Z}}\left(\mathrm{P},\Pi \right)=\sum \limits_{\mathrm{i}=1}^{\mathrm{n}}{\mathrm{p}}_{\mathrm{i}}\ast {\uppi}_{\mathrm{i}} $$

This measure does not represent an inherent relationship but rather represents the intuition that a lowering of an event’s possibility tends to lower its probability, but not the converse.

There is some research related to our specific concern of aggregating probabilistic and possibilistic uncertainty. First, a related approach is the possibilistic conditioning of probability distributions using the approach of Yager (2012). This form of aggregation makes it very amenable to apply the information measures (Elmore et al. 2014).

In possibilistic conditioning, a function F dependent on both P and Π is used to find a new conditioned probability distribution such that
$$ \hat{\mathrm{P}}=\mathrm{F}\ \left(\mathrm{P},\Pi \right) $$
where \( \hat{\mathrm{P}} \) = {\( \hat{\mathrm{p}} \)1, \( \hat{\mathrm{p}} \)2, …, \( \hat{\mathrm{p}} \)n} with
$$ {\hat{\mathrm{p}}}_{\mathrm{i}}={\mathrm{p}}_{\mathrm{i}}\;{\uppi}_{\mathrm{i}}/\mathrm{K};\quad \mathrm{K}=\sum \limits_{\mathrm{i}=1}^{\mathrm{n}}{\mathrm{p}}_{\mathrm{i}}\ast {\uppi}_{\mathrm{i}} $$
A strength of conditioned probability is that it also captures Zadeh’s concept of consistency between the possibility and the original probability distribution. Note that K is of the form of the consistency expression:
$$ {\mathrm{C}}_{\mathrm{Z}}\left(\mathrm{p},\, \Pi \right)=\mathrm{K}. $$
Another approach uses transformation of possibility distributions into probability distributions. A number of possibility transformations have been considered (Beaubouef and Petry 2007b; Bhandari and Pal 1993; Elmore et al. 2017a), and here we will utilize one initially suggested by Dubois and Prade (1983). For this we must assume a possibility distribution where
$$ {\uppi}_1\ge {\uppi}_2\ge \dots \ge {\uppi}_{\mathrm{n}}\ \mathrm{and}\ {\uppi}_1=1. $$
Then we can obtain an associated probability distribution P1 = (p11, …, p1n) (taking πn+1 = 0), where
$$ {\mathrm{p}}_{1\mathrm{j}}=\sum \limits_{\mathrm{k}=\mathrm{j}}^{\mathrm{n}}\left({\uppi}_{\mathrm{k}}-{\uppi}_{\mathrm{k}+1}\right)/\mathrm{k},\quad \mathrm{for}\ \mathrm{j}=1\ \mathrm{to}\ \mathrm{n} $$
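A direct rendering of this transformation (our own sketch, with the 1-based indices of the formula shifted to Python's 0-based indexing) assumes the possibilities are already sorted in nonincreasing order with π1 = 1:

```python
def dubois_prade_transform(poss):
    """Possibility -> probability: p1_j = sum_{k>=j} (pi_k - pi_{k+1}) / k."""
    n = len(poss)
    ext = list(poss) + [0.0]                        # pi_{n+1} = 0
    return [sum((ext[k] - ext[k + 1]) / (k + 1)     # k + 1 is the 1-based divisor
                for k in range(j, n))
            for j in range(n)]

p1 = dubois_prade_transform([1.0, 0.5, 0.5])
print(p1)  # approximately [0.667, 0.167, 0.167]; the values sum to 1
```

The telescoping differences guarantee that the resulting values always sum to π1 = 1, so the output is a proper probability distribution.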

The distribution P1 from the transformation can then be aggregated with some other probability distribution P2 using a spectrum of operators such as min, max, and mean (Petry et al. 2015). The aggregated probability distribution can then be evaluated with the Gini information measure to assess whether it has enhanced information content over that of the initial distribution P2. Additional recent approaches have considered an intelligent quality-based approach to fusing multi-source probabilistic information (Yager and Petry 2016) and fuzzy Choquet integration of homogenous possibility and probability distributions (Anderson et al. 2016).
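Returning to the aggregation step above, the sketch below (our own illustration) combines a transformed distribution P1 with another distribution P2 pointwise, renormalizes, and compares information content. We take the Gini measure in the common form G(P) = 1 − Σ p_i², where a lower value indicates a more concentrated, hence more informative, distribution; this specific form and the renormalization are our assumptions, not details fixed by the cited works.

```python
def gini(p):
    # Gini measure G(P) = 1 - sum p_i^2; lower value = more concentrated
    return 1.0 - sum(p_i * p_i for p_i in p)

def aggregate(p1, p2, op=min):
    # Pointwise combination, renormalized so the result is a distribution.
    raw = [op(a, b) for a, b in zip(p1, p2)]
    total = sum(raw)
    return [r / total for r in raw]

p1 = [2/3, 1/6, 1/6]        # e.g., obtained from a possibility transformation
p2 = [0.5, 0.3, 0.2]
agg = aggregate(p1, p2, op=min)
print(gini(agg) < gini(p2))  # True: aggregation concentrated the mass here
```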

Future Directions

There are several other approaches to uncertainty representation that may be more suitable for certain applications. Type-2 fuzzy sets have attracted considerable recent interest (Mendel and John 2002); whereas the membership functions of ordinary fuzzy sets are crisp, in type-2 fuzzy sets the membership functions are themselves fuzzy. Intuitionistic sets, introduced by Atanassov (1986, 2000), are another generalization of fuzzy sets. Two characteristic functions capture both the ordinary degree of membership in the intuitionistic set and the degree of non-membership of elements in the set, and these sets can be used in database design (Beaubouef and Petry 2007b). Related to the concepts introduced by rough sets is the idea of granularity for managing complex data by abstraction using information granules, as discussed by Lin (1997, 1999). A granular set approach has also been introduced (Ligeza 2002), in which a set and a number of its disjoint subsets constitute a semi-partition. A Dempster-Shafer approach analogous to rough sets can be used to model uncertainty in spatial, temporal, and spatiotemporal application domains (Elmore et al. 2017a, b). Some prior database research on ordered relations (Ginsburg and Hull 1983), although not presented in the context of data uncertainty, may provide approaches to extend our work in this area. A main emphasis for future work is the incorporation of some of these research topics into mainstream databases, commercial GIS products, and semi-structured data on the semantic web.



Acknowledgments

The authors would like to thank the Naval Research Laboratory’s Base Program, Program Element No. 0602435 N, for sponsoring this research.


Primary Literature

  1. Agrawal R, Imielinski T, Swami A (1993) Mining association rules between sets of items in large databases. In: Proceedings of the 1993 ACM-SIGMOD international conference on management of data. ACM Press, New York, pp 207–216
  2. Ahlqvist O, Keukelaar J, Oukbir K (2000) Rough classification and accuracy assessment. Int J Geogr Inf Sci 14:475–496
  3. Anderson D, Elmore P, Petry F, Havens T (2016) Fuzzy Choquet integration of homogenous possibility and probability distributions. Inf Sci 363:24–39
  4. Aristondo O, Garcia-Lapresta J, de la Vega C, Pereira R (2012) The Gini index, the dual decomposition of aggregation functions and the consistent measurement of inequality. Int J Intell Syst 27:132–152
  5. Atanassov K (1986) Intuitionistic fuzzy sets. Fuzzy Sets Syst 20:87–96
  6. Atanassov K (2000) Intuitionistic fuzzy sets: theory and applications. Physica Verlag, Heidelberg
  7. Beaubouef T, Petry F (1994a) Fuzzy set quantification of roughness in a rough relational database model. In: Proceedings of the third IEEE international conference on fuzzy systems, Orlando, pp 172–177
  8. Beaubouef T, Petry F (1994b) Rough querying of crisp data in relational databases. In: Proceedings of the third international workshop on rough sets and soft computing (RSSC’94), San Jose, Hershey, pp 368–375
  9. Beaubouef T, Petry F (2000) Fuzzy rough set techniques for uncertainty processing in a relational database. Int J Intell Syst 15:389–424
  10. Beaubouef T, Petry F (2002) A rough set foundation for spatial data mining involving vague regions. In: Proceedings of FUZZ-IEEE’02, Honolulu, pp 767–772
  11. Beaubouef T, Petry F (2007a) Rough sets: a versatile theory for approaches to uncertainty management in databases. In: Rough computing: theories, technologies and applications. Idea Group, Inc
  12. Beaubouef T, Petry F (2007b) Intuitionistic rough sets for database applications. In: Peters JF et al (eds) Transactions on rough sets VI. LNCS 4374. Springer, Berlin/New York, pp 26–30
  13. Beaubouef T, Petry F, Buckles B (1995) Extension of the relational database and its algebra with rough set techniques. Comput Intell 11:233–245
  14. Beaubouef T, Petry F, Arora G (1998) Information-theoretic measures of uncertainty for rough sets and rough relational databases. Inf Sci 109:185–195
  15. Bhandari D, Pal NR (1993) Some new information measures for fuzzy sets. Inf Sci 67:209–228
  16. Bhattacharya S, Bhatnagar V (2012) Fuzzy data mining: a literature survey and classification framework. Int J Netw Virt Org 11:382–408
  17. Bittner T (2000) Rough sets in spatio-temporal data mining. In: Proceedings of international workshop on temporal, spatial and spatio-temporal data mining. Springer, Berlin/Heidelberg, pp 89–104
  18. Bittner T, Stell J (2003) Stratified rough sets and vagueness. In: Kuhn W, Worboys M, Timpf S (eds) Spatial information theory: cognitive and computational foundations of geographic information science international conference (COSIT’03), pp 286–303
  19. Breiman L, Friedman J, Olshen R, Stone C (1984) Classification and regression trees. Wadsworth & Brooks/Cole, Monterey
  20. Buckles B, Petry F (1982) A fuzzy representation for relational data bases. Int J Fuzzy Sets Syst 7(3):213–226
  21. Buckles B, Petry F (1983) Information-theoretical characterization of fuzzy relational databases. IEEE Trans Syst Man Cybern 13:74–77
  22. Buckles BP, Petry F (1985) Uncertainty models in information and database systems. J Inf Sci 11:77–87
  23. Cady F (2017) The data science handbook. Wiley, New York
  24. Chanas S, Kuchta D (1992) Further remarks on the relation between rough and fuzzy sets. Fuzzy Sets Syst 47:391–394
  25. de Luca A, Termini S (1972) A definition of a nonprobabilistic entropy in the setting of fuzzy set theory. Inf Control 20:301–312
  26. Dhar V (2013) Data science and prediction. Commun ACM 56(12):64–73
  27. Dubois D, Prade H (1983) Unfair coins and necessity measures: towards a possibilistic interpretation of histograms. Fuzzy Sets Syst 10:15–27
  28. Dubois D, Prade H (1987) Twofold fuzzy sets and rough sets – some issues in knowledge representation. Fuzzy Sets Syst 23:3–18
  29. Dubois D, Prade H (1992) Putting rough sets and fuzzy sets together. In: Slowinski R (ed) Intelligent decision support: handbook of applications and advances of the rough sets theory. Kluwer Academic Publishers, Boston
  30. Eliazar I, Sokolov I (2010) Maximization of statistical heterogeneity: from Shannon’s entropy to Gini’s index. Phys A 389:3023–3038
  31. Elmore P, Petry F, Yager R (2014) Comparative measures of aggregated uncertainty representations. J Ambient Intell Humaniz Comput 5(6):809–819
  32. Elmore P, Petry F, Yager R (2017a) Dempster-Shafer approach to temporal uncertainty. IEEE Trans Emerg Topics Comput Intell 1(5):316–325
  33. Elmore P, Petry F, Yager R (2017b) Geospatial modeling using Dempster-Shafer theory. IEEE Trans Cybern 47(6):1551–1561
  34. Ferson S, Kreinovich V (2001) Representation, elicitation, and aggregation of uncertainty in risk analysis – from traditional probabilistic techniques to more general, more realistic approaches: a survey. University of Texas at El Paso computer science tech report #11-1-2001
  35. Frawley W, Piatetsky-Shapiro G, Matheus C (1991) Knowledge discovery in databases: an overview. In: Piatetsky-Shapiro G, Frawley W (eds) Knowledge discovery in databases. AAAI/MIT Press, Menlo Park, pp 1–27
  36. Fung KT, Lam CM (1980) The database entropy concept and its application to the data allocation problem. Infor 18(4):354–363
  37. Gini C (1912) Variabilita e mutabilita (Variability and mutability). Tipografia di Paolo Cuppini, Bologna, p 156
  38. Ginsburg S, Hull R (1983) Order dependency in the relational model. Theor Comput Sci 26:146–195
  39. Han J, Kamber M (2006) Data mining: concepts and techniques, 2nd edn. Morgan Kaufman, San Diego
  40. Han J, Cai Y, Cercone N (1992) Knowledge discovery in databases: an attribute-oriented approach. In: Proceedings of 18th VLDB conference, Vancouver, pp 547–559
  41. Höller J, Tsiatsis V, Mulligan C, Karnouskos S, Avesand S, Boyle D (2014) From machine-to-machine to the internet of things: introduction to a new age of intelligence. Academic Press, Waltham
  42. Jing L, Wenwen Z (2016) Overview on the using rough set theory on GIS spatial relationships constraint. Int J Adv Res Artif Intell:11–15
  43. Klir GJ, Folger TA (1988) Fuzzy sets, uncertainty, and information. Prentice Hall, Englewood Cliffs
  44. Ligeza A (2002) Granular sets and granular relation. In: Intelligent information systems. Physica Verlag, Heidelberg, pp 331–340
  45. Lin TY (1997) Granular computing: from rough sets and neighborhood systems to information granulation and computing in words. Eur Congr Intell Tech Soft Comput 8-12:1602–1606
  46. Lin TY (1999) Granular computing: fuzzy logic and rough sets. In: Zadeh L, Kacprzyk J (eds) Computing with words in information/intelligent systems. Physica-Verlag, Heidelberg, pp 183–200
  47. Makinouchi A (1977) A consideration on normal form of not-necessarily normalized relation in the relational data model. In: Proceedings of the 3rd international conference VLDB, pp 447–453
  48. Mendel J (2017) Uncertain rule-based fuzzy systems, 2nd edn. Springer, Berlin
  49. Mendel J, John R (2002) Type-2 fuzzy sets made simple. IEEE Trans Fuzzy Syst 10:117–127
  50. Nanda S, Majumdar S (1992) Fuzzy rough sets. Fuzzy Sets Syst 45:157–160
  51. Ola A, Ozsoyoglu G (1993) Incomplete relational database models based on intervals. IEEE Trans Knowl Data Eng 5:293–308
  52. Parsons S (2001) Qualitative methods for reasoning under uncertainty. MIT Press, Cambridge
  53. Pawlak Z (1982) Rough sets. Int J Comput Inform Sci 11:341–356
  54. Pawlak Z (1984) Rough sets. Int J Man-Mach Stud 21:127–134
  55. Pawlak Z (1985) Rough sets and fuzzy sets. Fuzzy Sets Syst 17:99–102
  56. Pawlak Z (1991) Rough sets: theoretical aspects of reasoning about data. Kluwer Academic Publishers, Norwell
  57. Pedrycz W, Gomide F (1996) An introduction to fuzzy sets: analysis and design. MIT Press, Boston
  58. Petry F (1996) Fuzzy databases: principles and applications. Kluwer Press, Boston
  59. Petry F, Robinson V, Cobb M (2005) Fuzzy modeling with spatial information for geographic problems. Springer, Berlin/Heidelberg
  60. Petry F, Elmore P, Yager R (2015) Combining uncertain information of differing modalities. Inf Sci 322:237–256
  61. Quinlan JR (1986) Induction of decision trees. Mach Learn 1:81–106
  62. Raschia G, Mouaddib N (2002) SAINTETIQ: a fuzzy set-based approach to database summarization. Fuzzy Sets Syst 129:137–162
  63. Roth M, Korth H, Batory D (1987) SQL/NF: a query language for non-1NF databases. Inf Syst 12:99–114
  64. Sent D, van de Gaag L (2007) On the behavior of information measures for test selection. In: Carbonell J, Siekmann J (eds) Lecture notes in AI 4594. Springer, Berlin
  65. Shafer G (1976) A mathematical theory of evidence. Princeton University Press, Princeton
  66. Shannon CE (1948) The mathematical theory of communication. Bell Syst Tech J 27:379–422
  67. Shi W, Wang S, Li D, Wang X (2003) Uncertainty-based spatial data mining. In: Proceedings of Asia GIS Association, Wuhan, pp 124–135
  68. Srinivasan P (1991) The importance of rough approximations for information retrieval. Int J Man-Mach Stud 34:657–671
  69. Stankovic J (2014) Research directions for the internet of things. IEEE Internet Things J 1(1):3–9
  70. Tavana M, Liu W, Elmore P, Petry F, Bourgeois BS (2016) A practical taxonomy of methods and literature for managing uncertain spatial data in geographic information systems. Measurement 82:123–162
  71. Wang S, Li D, Shi W, Wang X (2002) Rough spatial description. International Archives of Photogrammetry and Remote Sensing, XXXII, Commission II, pp 503–510
  72. Worboys M (1998a) Computation with imprecise geospatial data. Comput Environ Urban Syst 22:85–106
  73. Worboys M (1998b) Imprecision in finite resolution spatial data. GeoInformatica 2:257–280
  74. Wygralak M (1989) Rough sets and fuzzy sets – some remarks on interrelations. Fuzzy Sets Syst 29:241–243
  75. Yager R (1982) Measuring tranquility and anxiety in decision making. Int J Gen Syst 8:139–146
  76. Yager R (1992) On the specificity of a possibility distribution. Fuzzy Sets Syst 50:279–292
  77. Yager R (1995) Measures of entropy and fuzziness related to aggregation operators. Inf Sci 82:147–166
  78. Yager R (2012) Conditional approach to possibility-probability fusion. IEEE Trans Fuzzy Syst 20:46–56
  79. Yager R, Petry F (2016) An intelligent quality based approach to fusing multi-source probabilistic information. Inf Fusion 31:127–136
  80. Zadeh L (1965) Fuzzy sets. Inf Control 8:338–353
  81. Zadeh L (1978) Fuzzy sets as a basis for a theory of possibility. Fuzzy Sets Syst 1:3–28
  82. Zvieli A, Chen P (1986) Entity-relationship modeling and fuzzy databases. In: Proceedings of international conference on data engineering, pp 320–327

Books and Reviews

  1. Aczel J, Daroczy Z (1975) On measures of information and their characterization. Academic Press, New York
  2. Angryk R, Petry F (2007) Attribute-oriented fuzzy generalization in proximity and similarity-based relational database systems. Int J Intell Syst 22:763–781
  3. Arora G, Petry F, Beaubouef T (1997) Information measure of type β under similarity relations. In: Sixth IEEE international conference on fuzzy systems, Barcelona, pp 857–862
  4. Arora G, Petry F, Beaubouef T (2001) A note on new parametric measures of information for fuzzy sets. J Comb Inf Syst Sci 26:167–174
  5. Beaubouef T, Petry F (2001a) Vague regions and spatial relationships: a rough set approach. In: Fourth international conference on computational intelligence and multimedia applications, Yokosuka City, pp 313–318
  6. Beaubouef T, Petry F (2001b) Vagueness in spatial data: rough set and egg-yolk approaches. In: 14th international conference on industrial & engineering applications of artificial intelligence, pp 367–373
  7. Beaubouef T, Petry F (2003) Rough set uncertainty in an object oriented data model. In: Bouchon-Meunier B, Foulloy L, Yager R (eds) Intelligent systems for information processing: from representation to applications. Elsevier, Amsterdam, pp 37–46
  8. Beaubouef T, Petry F (2005a) Normalization in a rough relational database. In: International conference on rough sets, fuzzy sets, data mining and granular computing, pp 257–265
  9. Beaubouef T, Petry F (2005b) Representation of spatial data in an OODB using rough and fuzzy set modeling. Soft Comput J 9:364–373
  10. Beaubouef T, Petry F (2007) An attribute-oriented approach for knowledge discovery in rough relational databases. In: Proceedings of FLAIRS’07, pp 507–508
  11. Beaubouef T, Petry F, Arora G (1998) Information measures for rough and fuzzy sets and application to uncertainty in relational databases. In: Pal S, Skowron A (eds) Rough-fuzzy hybridization: a new trend in decision-making. Springer, Singapore, pp 200–214
  12. Beaubouef T, Ladner R, Petry F (2004) Rough set spatial data modeling for data mining. Int J Intell Syst 19:567–584
  13. Beaubouef T, Petry F, Ladner R (2007) Spatial data methods and vague regions: a rough set approach. Appl Soft Comput J 7:425–440
  14. Buckles B, Petry F (1982) Security and fuzzy databases. In: Proceedings 1982 IEEE international conference on cybernetics and society, pp 622–625
  15. Codd E (1970) A relational model of data for large shared data banks. Commun ACM 13:377–387
  16. Ebanks B (1983) On measures of fuzziness and their representations. J Math Anal Appl 94:24–37
  17. Grzymala-Busse J (1991) Managing uncertainty in expert systems. Kluwer Academic Publishers, Boston
  18. Han J, Nishio S, Kawano H, Wang W (1998) Generalization-based data mining in object-oriented databases using an object-cube model. Data Knowl Eng 25:55–97
  19. Havrda J, Charvat F (1967) Quantification methods of classification processes: concepts of structural α entropy. Kybernetica 3:149–172
  20. Kapur J, Kesavan H (1992) Entropy optimization principles with applications. Academic Press, New York
  21. Slowinski R (1992) A generalization of the indiscernibility relation for rough sets analysis of quantitative information. In: 1st international workshop on rough sets: state of the art and perspectives, Poland, pp 41–48

Copyright information

© This is a U.S. Government work and not under copyright protection in the US; foreign copyright protection may apply. 2020

Authors and Affiliations

  1. Southeastern Louisiana University, Hammond, USA
  2. Naval Research Lab, Stennis Space Center, Mississippi, USA