
Dynamic maintenance of approximations under fuzzy rough sets

Original Article

Abstract

The lower and upper approximations are basic concepts in rough set theory. The approximations of a concept need to be updated for dynamic data mining and related tasks. Most existing incremental methods are based on the classical rough set model and are limited to describing crisp concepts. This paper presents two new dynamic methods for incrementally updating the approximations of a concept under fuzzy rough sets, which can describe fuzzy concepts: one starts from the boundary set, the other is based on the cut sets of a fuzzy set. Illustrative examples are given, and two algorithms corresponding to the two incremental methods are put forward. The experimental results show that the two incremental methods effectively reduce computing time in comparison with the traditional non-incremental method.

Keywords

Fuzzy rough sets · Lower approximation · Upper approximation · Data mining

1 Introduction

Rough set theory, introduced by Pawlak [1] in 1982, is a powerful mathematical tool for dealing with intelligent systems characterized by insufficient and incomplete information. Rough set theory describes a crisp subset by two definable subsets called the lower and upper approximations. By using the lower and upper approximations, the knowledge hidden in information systems can be discovered and expressed in the form of decision rules. The classical rough set theory can describe only crisp sets. To describe both crisp and fuzzy concepts, Dubois and Prade [2] extended the basic idea of rough sets to a new model called fuzzy rough sets. Fuzzy rough set theory can not only describe the indiscernibility among objects but also handle the fuzziness among objects [3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14]. In fuzzy rough sets, a fuzzy similarity relation is employed to describe the degree of similarity between two objects, instead of the equivalence relation used in rough sets. A fuzzy rough set is the approximation of a fuzzy set (or a crisp set) in a fuzzy approximation space. In particular, the approximation of a fuzzy set in a crisp approximation space is named a rough fuzzy set. Rough fuzzy sets are special cases of fuzzy rough sets.

Generally, computing approximations is a necessary step in knowledge representation and reduction based on fuzzy rough sets. In the traditional fuzzy rough set model, the universe \(U\), the fuzzy similarity relation \(R\) and the described set \(X\) are the three essential factors of the model. Among them, the universe \(U\) and the similarity relation \(R\) are relatively fixed, so the model is suitable for processing static data. Due to the dynamic characteristics of data collection, the size of an information system varies with the number of objects or attributes (features). That is, the universe \(U\) and the relation \(R\) are variable. One can retrain the system from scratch when the information system varies, which is known as a non-incremental approach [15]. However, the non-incremental approach becomes very costly or even intractable as the data size grows. Alternatively, one can apply an incremental learning scheme [15]. The essence of incremental learning is to allow the learning process to take place in a continuous and progressive manner rather than as a one-shot experience [16]. Research on updating knowledge incrementally has shown its importance in many areas, such as clinical decision making, intrusion detection, stock evaluation, and text categorization. Some incremental learning methods with respect to rough set theory have been proposed [17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45]. The variation of information systems includes the object set (sample set), the attribute set and the attribute values.

The variation of objects has been widely considered in incremental learning. Bang et al. proposed an incremental inductive learning algorithm to find a minimal set of rules for a decision table without recalculating the whole sample set when a new sample is added into the universe [17]. Zheng et al. presented an effective incremental approach for knowledge acquisition based on the rule tree [18]. Wang et al. constructed an incremental rule acquisition algorithm based on variable precision rough sets while inserting new objects into the information system [19]. Zhang et al. proposed an incremental rule acquisition algorithm based on neighborhood rough sets when the object set evolves over time [20]. Li et al. proposed a dynamic maintenance method of approximations for dominance-based rough sets [21]. In addition, Luo et al. proposed an incremental approach for updating approximations in set-valued ordered information systems [22]. Zhang et al. proposed a composite rough set model for dynamic data mining [23]. Zeng et al. proposed an incremental approach for updating approximations of Gaussian kernelized fuzzy rough sets under the variation of the object set [24]. Wang et al. proposed a novel incremental simplified algorithm which can efficiently update approximations of dominance-based rough set approach (DRSA) when objects and attributes increase simultaneously [25]. Luo et al. focused on efficient updating of probabilistic approximations with incremental objects in a dynamic information table [26].

When attribute values change in an information system, some incremental learning methods have also been proposed. For example, Chen proposed a rough set-based method for updating decision rules on attribute values’ coarsening and refining [27]. Luo et al. proposed fast algorithms for computing rough approximations in set-valued decision systems while updating criteria values [28]. Zeng et al. proposed an incremental approach for updating approximations of Gaussian kernelized fuzzy rough sets under the variation of attribute values [29].

Under the variation of the attribute set, Chan first put forward an incremental method for updating the approximations of a crisp concept based on the lower and upper boundary sets [30]. Then Liu proposed a new incremental method for computing the positive region [31]. Li presented an incremental method for updating the approximations of a concept under the characteristic relation-based rough sets [32]. Zhang et al. discussed the strategies and propositions under variable precision rough sets [33]. Li et al. introduced a dominance matrix to calculate P-dominating sets and P-dominated sets in dominance-based rough sets, and proposed an incremental algorithm for updating approximations when the attribute set varies [34]. Luo et al. focused on maintaining approximations dynamically in set-valued ordered decision systems under attribute generalization [35]. Liu et al. proposed an incremental method for updating approximations in probabilistic rough sets under the variation of attributes [36]. Zhang et al. developed incremental approaches for updating rough approximations in interval-valued information systems under attribute generalization, which refers to the dynamic changing of attributes [37]. Most incremental algorithms focus on rough sets; that is, they are suitable for the traditional approximation space and limited to describing crisp concepts. Zeng et al. presented a fuzzy rough set approach for incrementally updating approximations [38]. However, Ref. [38] is aimed at fuzzy rough sets constructed by using the HD distance and the Gaussian kernel, which differ from the traditional fuzzy rough sets. These two kinds of fuzzy rough sets are suitable for different information systems, and the implementation methods and details of updating approximations are completely different. In previous work, the author presented an incremental method for fast computation of rough fuzzy approximations [11].
Although rough fuzzy sets are special cases of fuzzy rough sets, they aim at completely different approximation spaces: rough fuzzy sets are suitable for the traditional crisp approximation space, while fuzzy rough sets are suitable for a fuzzy approximation space. To improve the performance of related algorithms and to process data mining and knowledge acquisition in a fuzzy approximation space more effectively, this paper studies incremental methods for fast computation of the fuzzy rough approximations. In Ref. [11] and this paper, a redefined boundary set and cut sets are both used as tools for incremental computation of the approximations. However, because a fuzzy similarity relation is used in fuzzy rough sets instead of the equivalence relation used in rough fuzzy sets, the implementation method and details of updating approximations are different.

In order to incrementally update the approximations of a crisp set in a crisp approximation space, Chan [30] took the boundary set of a crisp set as a springboard. Let \(X\) be a subset of \(U\), \(A\) be an attribute set, \(P \subseteq A\), \(a \in A\) and \(a \notin P\). The lower approximation of \(X\) after adding \(a\) to \(P\) can be updated in terms of \(\underline {P \cup \{ a\} } X=\underline P X \cup \underline {\{ a\} } X \cup Y\), where \(Y\) is composed of the elements of the lower boundary set satisfying certain conditions. That is, when a new attribute is added to the attribute set \(P\), we do not have to compute the lower approximation from scratch. The lower approximation \(\underline {P \cup \{ a\} } X\) consists of three parts: \(\underline P X\), \(\underline {\{ a\} } X\) and \(Y\). When \(P\) and \(a\) are determined, \(\underline P X\) and \(\underline {\{ a\} } X\) are fixed and can be considered as known, so it only remains to calculate the set \(Y\); the union of the three sets is \(\underline {P \cup \{ a\} } X\). The computation of the set \(Y\) is therefore the key step, and the lower boundary set plays the role of a bridge. In this paper, in order to incrementally update the approximations of a fuzzy or crisp set in a fuzzy approximation space, the idea of Ref. [30] is extended, and the boundary set is still considered as a bridge. However, according to the existing definition, in a fuzzy approximation space the boundary set of a fuzzy or crisp set is itself a fuzzy set, which is not helpful for analyzing the structure of approximations. So we first redefine the boundary set by converting the fuzzy set into a crisp set. Then, based on the redefined boundary set, a method of incrementally updating the lower and upper approximations is proposed and proved. The other incremental method is based on the relation between a fuzzy set and its cut sets, namely that a fuzzy set can be reconstructed from its cut sets.
One can first incrementally update the approximations of cut sets and then obtain the lower and upper approximations of the fuzzy set.

The rest of this paper is organized as follows. Section 2 introduces the basic concepts of fuzzy rough sets and some correlative propositions. In Sects. 3 and 4, the boundary set of a fuzzy set is redefined and some important properties are obtained. Then the update of the lower and upper approximations is carried out by two means: one starts from the boundary set, the other is based on the cut sets of a fuzzy set. Illustrative examples are given. The detailed descriptions of the two incremental algorithms corresponding to Sects. 3 and 4, respectively, are given in Sect. 5. In Sect. 6, the performance of the two incremental methods is evaluated on six data sets from UCI. Section 7 concludes the paper.

2 Preliminaries

In this section, we briefly introduce the basic concepts of fuzzy rough sets and some correlative propositions.

2.1 Fuzzy rough sets

In order to describe a crisp or fuzzy concept in a fuzzy approximation space, Dubois and Prade introduced an extended notion called fuzzy rough sets. An equivalence relation is a basic notion in Pawlak’s rough set theory [1]. In fuzzy rough sets, a fuzzy similarity relation is used to replace the equivalence relation. Let \(U\) be a nonempty universe. A fuzzy binary relation \(R\) on \(U\) is called a fuzzy similarity relation if \(R\) satisfies reflexivity (\(R(x,x)=1\)), symmetry (\(R(x,y)=R(y,x)\)) and sup-min transitivity (\(R(x,y) \geqslant \mathop {\sup }\limits_{z \in U} \min \{ R(x,z),R(z,y)\}\)). Using the fuzzy similarity relation, the fuzzy equivalence class \({[x]_R}\) can be defined by \({\mu _{{{[x]}_R}}}(y)={\mu _R}(x,y)\) for all \(y \in U\).

Definition 1

[2]. Let \(U\) be a finite non-empty universe, and \(R\) be a fuzzy similarity relation defined on \(U\). The pair \((U,R)\) is called an approximation space. \(F\) is a fuzzy set on \(U\), the upper and lower approximations of \(F\) with respect to \((U,R)\), denoted by \({\overline {apr} _R}(F)\) and \({\underline {apr} _R}(F)\) respectively, are defined as follows:
$${\mu _{{{\overline {apr} }_R}(F)}}(x)=\mathop {\sup }\limits_{y \in U} \min \{ {\mu _R}(x,y),{\mu _F}(y)\}$$
$${\mu _{{{\underline {apr} }_R}(F)}}(x)=\mathop {\inf }\limits_{y \in U} \max \{ 1 - {\mu _R}(x,y),{\mu _F}(y)\}$$
The fuzzy set pair (\({\underline {apr} _R}(F)\), \({\overline {apr} _R}(F)\)) is named a fuzzy rough set. For \(\forall y \in U\), \({\mu _{{{[x]}_R}}}(y)={\mu _R}(x,y)\), so the upper and lower approximations of \(F\) can also be represented as:
$$\begin{aligned} \mu _{{\overline{{apr}} _{R} (F)}} (x) = \mathop {\sup }\limits_{{y \in U}} \min \{ \mu _{{[x]_{R} }} (y),\mu _{F} (y)\} , \hfill \\ \mu _{{\underline{{apr}} _{R} (F)}} (x) = \mathop {\inf }\limits_{{y \in U}} \max \{ 1 - \mu _{{[x]_{R} }} (y),\mu _{F} (y)\} . \hfill \\ \end{aligned}$$
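To make Definition 1 concrete, the two membership formulas can be evaluated directly on a similarity matrix. The following sketch is ours; the function names and the toy relation and fuzzy set are assumptions for illustration, not taken from the paper:

```python
def upper_approx(R, F):
    """mu_upper(x) = sup_{y in U} min(mu_R(x, y), mu_F(y))."""
    n = len(F)
    return [max(min(R[x][y], F[y]) for y in range(n)) for x in range(n)]

def lower_approx(R, F):
    """mu_lower(x) = inf_{y in U} max(1 - mu_R(x, y), mu_F(y))."""
    n = len(F)
    return [min(max(1 - R[x][y], F[y]) for y in range(n)) for x in range(n)]

# A toy fuzzy similarity relation on U = {x1, x2, x3} and a fuzzy set F.
R = [[1.0, 0.4, 0.0],
     [0.4, 1.0, 0.0],
     [0.0, 0.0, 1.0]]
F = [1.0, 0.3, 0.7]

# The lower approximation never exceeds F, which never exceeds the upper one.
assert all(l <= f <= u
           for l, f, u in zip(lower_approx(R, F), F, upper_approx(R, F)))
```

Here `upper_approx(R, F)` returns the membership vector of \({\overline {apr} _R}(F)\) and `lower_approx(R, F)` that of \({\underline {apr} _R}(F)\).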

Remark 1

Specially, when \(F\) is a crisp set on the universe \(U\), the approximations are represented as: \({\mu _{{{\overline {apr} }_R}(F)}}(x)=\mathop {\sup }\nolimits_{y \in F} {\mu _{{{[x]}_R}}}(y)\), \({\mu _{{{\underline {apr} }_R}(F)}}(x)=\mathop {\inf }\nolimits_{y \notin F} \{ 1 - {\mu _{{{[x]}_R}}}(y)\}\). The boundary set of \(F\) is the fuzzy set defined by \({\mu _{B{N_R}(F)}}(x)=\mathop {\sup }\nolimits_{y \in F} {\mu _{{{[x]}_R}}}(y) \wedge (1 - \mathop {\inf }\nolimits_{y \notin F} \{ 1 - {\mu _{{{[x]}_R}}}(y)\} )\).

Remark 2

When \(R\) is an equivalence relation on the universe \(U\), and \(F\) is a crisp set on the universe \(U\), the upper and lower approximations are represented as:
$${\mu _{{{\overline {apr} }_R}(F)}}(x)=\{ x|{[x]_R} \cap F \ne \varnothing \} ,{\mu _{{{\underline {apr} }_R}(F)}}(x)=\{ x|{[x]_R} \subseteq F\} .$$

In order to keep consistent with Definition 1, we still use \({\mu _{{{\underline {apr} }_R}(F)}}(x)\) and \({\mu _{{{\overline {apr} }_R}(F)}}(x)\) in Remark 2. In fact, \({\mu _{{{\underline {apr} }_R}(F)}}(x)\) and \({\mu _{{{\overline {apr} }_R}(F)}}(x)\) represent two sets instead of membership functions.

In this paper, we mainly study the upper and lower approximations of a crisp set \(F\) in a fuzzy approximation space.

2.2 Cut set

The cut set is an important concept used throughout this paper. We briefly review the notion of a cut set and then prove some correlative properties.

Definition 2

[46] Let \(U\) be a finite non-empty universe, a fuzzy set \(F\) on \(U\) is defined by a membership function \({\mu _F}:U \to [0,1]\). Given a number \(\alpha \in [0,1]\), an \(\alpha\)-cut, or \(\alpha\)-level set, of a fuzzy set is defined by: \({F_\alpha }=\{ x \in U|{\mu _F}(x) \geqslant \alpha \}\), which is a subset of \(U\). An \(\alpha\)-strong cut set of a fuzzy set is defined by \({F_{{\alpha ^+}}}=\{ x \in U|{\mu _F}(x)>\alpha \}\). Through \(\alpha\)-cut sets or \(\alpha\)-strong cut sets, a fuzzy set determines a family of nested subsets of \(U\). Conversely, a fuzzy set \(F\) can be reconstructed from its \(\alpha\)-cut sets as follows: \({\mu _F}(x)=\sup \{ \alpha |x \in {F_\alpha }\}\).
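The decomposition and reconstruction in Definition 2 can be sketched as follows (the helper names and the sample fuzzy set are ours, chosen for illustration):

```python
def alpha_cut(F, alpha):
    """F_alpha = {x : mu_F(x) >= alpha}, returned as a set of indices."""
    return {x for x, mu in enumerate(F) if mu >= alpha}

def strong_alpha_cut(F, alpha):
    """F_{alpha+} = {x : mu_F(x) > alpha}."""
    return {x for x, mu in enumerate(F) if mu > alpha}

def reconstruct(n, cuts):
    """mu_F(x) = sup{alpha : x in F_alpha}; cuts maps alpha -> F_alpha."""
    return [max((a for a, s in cuts.items() if x in s), default=0.0)
            for x in range(n)]

F = [0.2, 0.5, 0.9, 0.5]
levels = sorted(set(F))                       # only these alphas matter
cuts = {a: alpha_cut(F, a) for a in levels}   # a nested family of subsets
assert reconstruct(len(F), cuts) == F         # F recovered from its cuts
```

Only the finitely many membership levels actually occurring in \(F\) need to be enumerated, since the cut sets are constant between consecutive levels.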

Given a fuzzy approximation space \((U,R)\), where \(R\) is a fuzzy similarity relation, the \(\alpha\)-cut sets and \(\alpha\)-strong cut sets of \(R\) are denoted as \({R_\alpha }\) and \({R_{{\alpha ^+}}}\) respectively; they are equivalence relations on \(U\). Under the equivalence relation \({R_{{\alpha ^+}}}\), for a given crisp set \(F\), the upper and lower approximations of \(F\) are denoted as \({\overline {apr} _{{R_{{\alpha ^+}}}}}(F)\) and \({\underline {apr} _{{R_{{\alpha ^+}}}}}(F)\). There are close relations between \({\underline {apr} _R}(F)\), \({\overline {apr} _R}(F)\) and \({\overline {apr} _{{R_{{\alpha ^+}}}}}(F)\), \({\underline {apr} _{{R_{{\alpha ^+}}}}}(F)\). The following theorem shows that they can be transformed into each other.

Theorem 1

Let \({\underline {apr} _{{R_{{\alpha ^+}}}}}(F)\) and \({\overline {apr} _{{R_\alpha }}}(F)\), \(\alpha \in [0,1]\), be a family of lower and upper approximations of \(F\), respectively. Then there exists a pair of fuzzy sets \({\underline {apr} _R}(F)\) and \({\overline {apr} _R}(F)\) such that:
$${({\underline {apr} _R}(F))_{1 - \alpha }}={\underline {apr} _{{R_{{\alpha ^+}}}}}(F),\quad {({\overline {apr} _R}(F))_\alpha }={\overline {apr} _{{R_\alpha }}}(F).$$

Proof

For \(\forall \alpha \in [0,1]\), \(x \in {\underline {apr} _{{R_{{\alpha ^+}}}}}(F) \Leftrightarrow {\mu _{{{\underline {apr} }_{{R_{{\alpha ^+}}}}}(F)}}(x)=1 \Leftrightarrow \mathop {\inf }\limits_{y \notin F} \{ 1 - {\mu _{{{[x]}_{{R_{{\alpha ^+}}}}}}}(y)\} =1 \Leftrightarrow (\forall y \notin F)(1 - {\mu _{{{[x]}_{{R_{{\alpha ^+}}}}}}}(y)=1) \Leftrightarrow (\forall y \notin F)({\mu _R}(x,y) \leqslant \alpha ) \Leftrightarrow (\forall y \notin F)(1 - {\mu _{{{[x]}_R}}}(y) \geqslant 1 - \alpha ) \Leftrightarrow \mathop {\inf }\limits_{y \notin F} \{ 1 - {\mu _{{{[x]}_R}}}(y)\} \geqslant 1 - \alpha \Leftrightarrow x \in {({\underline {apr} _R}(F))_{1 - \alpha }}.\) Similarly, \(x \in {\overline {apr} _{{R_\alpha }}}(F) \Leftrightarrow {\mu _{{{\overline {apr} }_{{R_\alpha }}}(F)}}(x)=1 \Leftrightarrow \mathop {\sup }\limits_{y \in F} {\mu _{{{[x]}_{{R_\alpha }}}}}(y)=1 \Leftrightarrow (\exists y \in F)({\mu _{{{[x]}_{{R_\alpha }}}}}(y)=1) \Leftrightarrow (\exists y \in F)({\mu _R}(x,y) \geqslant \alpha ) \Leftrightarrow (\exists y \in F)({\mu _{{{[x]}_R}}}(y) \geqslant \alpha ) \Leftrightarrow \mathop {\sup }\limits_{y \in F} {\mu _{{{[x]}_R}}}(y) \geqslant \alpha \Leftrightarrow {\mu _{{{\overline {apr} }_R}(F)}}(x) \geqslant \alpha \Leftrightarrow x \in {({\overline {apr} _R}(F))_\alpha }.\)

\({\underline {apr} _R}(F)\) and \({\overline {apr} _R}(F)\) are defined by the following membership functions:
$$\begin{array}{*{20}{l}} \mu _{{\underline{{apr}} _{R} (F)}} (x) = \sup \{ 1 - \alpha |x \in (\underline{{apr}} _{R} (F))_{{1 - \alpha }} \} = \sup \{ 1 - \alpha |x \in \underline{{apr}} _{{R_{{\alpha ^{ + } }} }} (F)\} ,\\\mu _{{\overline{{apr}} _{R} (F)}} (x) = \sup \{ \alpha |x \in (\overline{{apr}} _{R} (F))_{\alpha } \} = \sup \{ \alpha |x \in \overline{{apr}} _{{R_{\alpha } }} (F)\} . \\ \end{array}$$
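Theorem 1 can be checked numerically for a crisp \(F\): the \((1-\alpha)\)-cut of the fuzzy lower approximation coincides with the Pawlak lower approximation under \(R_{\alpha^+}\), and the \(\alpha\)-cut of the fuzzy upper approximation with the Pawlak upper approximation under \(R_\alpha\). The sketch below is ours (helper names are assumptions) and reuses the similarity matrix constructed later in Sect. 2.3:

```python
def fuzzy_lower(R, F, n):
    """mu(x) = inf_{y not in F} (1 - R(x, y)); F is a crisp set of indices."""
    out = []
    for x in range(n):
        vals = [1 - R[x][y] for y in range(n) if y not in F]
        out.append(min(vals) if vals else 1.0)
    return out

def fuzzy_upper(R, F, n):
    """mu(x) = sup_{y in F} R(x, y)."""
    return [max((R[x][y] for y in F), default=0.0) for x in range(n)]

def pawlak_lower_strong(R, F, n, alpha):
    """Lower approximation of F under the strong cut relation R_{alpha+}."""
    return {x for x in range(n)
            if all(y in F for y in range(n) if R[x][y] > alpha)}

def pawlak_upper(R, F, n, alpha):
    """Upper approximation of F under the cut relation R_alpha."""
    return {x for x in range(n) if any(R[x][y] >= alpha for y in F)}

# Similarity matrix from Sect. 2.3 and the crisp set F = {x1, x4}.
R = [[1.0, 0.2, 0.3, 0.5],
     [0.2, 1.0, 0.0, 0.0],
     [0.3, 0.0, 1.0, 0.2],
     [0.5, 0.0, 0.2, 1.0]]
F, n = {0, 3}, 4
lo, up = fuzzy_lower(R, F, n), fuzzy_upper(R, F, n)
for alpha in (0.1, 0.3, 0.4, 0.7):
    # (lower)_{1-alpha} equals the lower approximation under R_{alpha+}.
    assert {x for x in range(n) if lo[x] >= 1 - alpha} \
        == pawlak_lower_strong(R, F, n, alpha)
    # (upper)_alpha equals the upper approximation under R_alpha.
    assert {x for x in range(n) if up[x] >= alpha} == pawlak_upper(R, F, n, alpha)
```

The identity holds for every \(\alpha\); the loop spot-checks a few representative levels.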

2.3 Fuzzy similarity relation

Knowledge is usually expressed in the form of a fuzzy information system. To construct a fuzzy similarity relation from a fuzzy information system, we adopt the method in Ref. [47]. To explain the computation process, an example is given. Table 1 describes a fuzzy information system \(S=(U,A)\), where \(U=\{ {x_1},{x_2},{x_3},{x_4}\}\) is a set of four objects and each object is described by a set of fuzzy attributes \(A=\{ {A_1},{A_2},{A_3}\}\); the membership degrees are given in Table 1. Under each attribute \({A_k}\), an object \({x_i}\) is described by \(m\) sub-values \(\{ {x_{i1}},{x_{i2}}, \ldots ,{x_{im}}\}\) (here \(m=3\)). Define \({r_{ij}}=\mathop {\min }\limits_{k=1,2,3} {A_k}({x_i},{x_j})\), where \({A_k}({x_i},{x_j})=\left\{ {\begin{array}{*{20}{l}} {1,\quad i=j} \\ {\mathop {\min }\limits_{l=1}^m (1 - |{x_{il}} - {x_{jl}}|),\quad i \ne j} \end{array}} \right.\) and \({x_{il}}\), \({x_{jl}}\) are the sub-values of \({x_i}\), \({x_j}\) under \({A_k}\). As an example, \({r_{12}}\) is computed in the following.

Table 1

A fuzzy information system (columns \({A_{k1}}\), \({A_{k2}}\), \({A_{k3}}\) are the sub-values of attribute \({A_k}\))

| \(U\) | \({A_{11}}\) | \({A_{12}}\) | \({A_{13}}\) | \({A_{21}}\) | \({A_{22}}\) | \({A_{23}}\) | \({A_{31}}\) | \({A_{32}}\) | \({A_{33}}\) |
|---|---|---|---|---|---|---|---|---|---|
| \({x_1}\) | 0.3 | 0.7 | 0.0 | 0.2 | 0.7 | 0.1 | 0.3 | 0.7 | 0.0 |
| \({x_2}\) | 1.0 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 | 0.6 | 0.3 | 0.1 |
| \({x_3}\) | 0.0 | 0.3 | 0.7 | 0.0 | 0.7 | 0.3 | 0.5 | 0.4 | 0.1 |
| \({x_4}\) | 0.8 | 0.2 | 0.0 | 0.0 | 0.7 | 0.3 | 0.2 | 0.6 | 0.2 |

$${A_1}({x_1},{x_2})=(1 - |0.3 - 1.0|) \wedge (1 - |0.7 - 0.0|) \wedge (1 - |0.0 - 0.0|)=0.3$$
$${A_2}({x_1},{x_2})=(1 - |0.2 - 1.0|) \wedge (1 - |0.7 - 0.0|) \wedge (1 - |0.1 - 0.0|)=0.2$$
$${A_3}({x_1},{x_2})=(1 - |0.3 - 0.6|) \wedge (1 - |0.7 - 0.3|) \wedge (1 - |0.0 - 0.1|)=0.6$$

\({r_{12}}=\mathop {\min }\limits_{k=1,2,3} {A_k}({x_1},{x_2})=\min \{ 0.3,0.2,0.6\} =0.2\). Similarly, we can obtain \({r_{13}},{r_{14}}, \ldots ,{r_{34}}\) and \(R=\left( {\begin{array}{*{20}{c}} 1&{0.2}&{0.3}&{0.5} \\ {0.2}&1&0&0 \\ {0.3}&0&1&{0.2} \\ {0.5}&0&{0.2}&1 \end{array}} \right)\).
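The construction above can be reproduced programmatically; the helper names below are ours, the sub-values are read off Table 1, and the resulting matrix agrees with \(R\) as given in the text:

```python
# Each object carries three fuzzy attributes A1, A2, A3,
# each with three sub-values, as in Table 1.
data = {
    0: [[0.3, 0.7, 0.0], [0.2, 0.7, 0.1], [0.3, 0.7, 0.0]],  # x1
    1: [[1.0, 0.0, 0.0], [1.0, 0.0, 0.0], [0.6, 0.3, 0.1]],  # x2
    2: [[0.0, 0.3, 0.7], [0.0, 0.7, 0.3], [0.5, 0.4, 0.1]],  # x3
    3: [[0.8, 0.2, 0.0], [0.0, 0.7, 0.3], [0.2, 0.6, 0.2]],  # x4
}

def A_k(i, j, k):
    """A_k(x_i, x_j) = min_l (1 - |x_il - x_jl|), and 1 when i == j."""
    if i == j:
        return 1.0
    return min(1 - abs(a - b) for a, b in zip(data[i][k], data[j][k]))

def r(i, j):
    """r_ij = min_k A_k(x_i, x_j), rounded to tame float noise."""
    return round(min(A_k(i, j, k) for k in range(3)), 6)

R = [[r(i, j) for j in range(4)] for i in range(4)]
# R reproduces the matrix in the text: r12 = 0.2, r13 = 0.3, r14 = 0.5, etc.
```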

3 Updating approximations incrementally based on boundary sets

In this section, we first redefine the boundary set of a fuzzy set. Then based on the redefined boundary set, a method of incrementally updating the lower and upper approximations is proposed and proved.

Let \(U\) be a finite non-empty universe and \(F\) be a crisp set on \(U\). Since Chan [30] took the boundary set as a springboard to incrementally update the approximations in Pawlak’s rough sets, it is natural to consider the boundary set of \(F\) when incrementally updating the approximations of \(F\). However, according to Definition 1 and Remark 1, in a fuzzy approximation space the boundary set of \(F\) is a fuzzy set, which is not helpful for analyzing the structure of approximations. Because the cut set is a good bridge between a fuzzy set and a crisp set, a fuzzy set can be converted into a crisp set by using cut sets. The lower and upper boundary sets of \(F\) are redefined as follows.

Definition 3

Let \(U\) be a finite non-empty universe, \(R\) be a fuzzy similarity relation defined on \(U\). \(F\) is a crisp set on \(U\), the lower boundary of \(F\) is defined as: \({\underline {BN} _R}(F)=\mathop \cup \nolimits_{\alpha \in (0,1]} {\underline {BN} _{{R_\alpha }}}(F)\), where \({\underline {BN} _{{R_\alpha }}}(F)=F - {({\underline {apr} _R}(F))_{1 - \alpha }}\) \(=F - {\underline {apr} _{{R_{{\alpha ^+}}}}}(F)\) is an \(\alpha\) lower boundary of \(F\); the upper boundary of \(F\) is defined as: \({\overline {BN} _R}(F)=\mathop \cup \limits_{\alpha \in (0,1]} {\overline {BN} _{{R_\alpha }}}(F)\), where \({\overline {BN} _{{R_\alpha }}}(F)={({\overline {apr} _R}(F))_\alpha } - F={\overline {apr} _{{R_\alpha }}}(F) - F\) is an \(\alpha\) upper boundary of \(F\).
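Definition 3 can be sketched for a fixed \(\alpha\) (function names are ours): the \(\alpha\) lower boundary is \(F\) minus the Pawlak lower approximation under \(R_{\alpha^+}\), and the \(\alpha\) upper boundary is the Pawlak upper approximation under \(R_\alpha\) minus \(F\). The similarity matrix from Sect. 2.3 is reused for illustration:

```python
def lower_boundary(R, F, n, alpha):
    """alpha lower boundary: F minus the lower approximation under R_{alpha+}."""
    lower = {x for x in range(n)
             if all(y in F for y in range(n) if R[x][y] > alpha)}
    return F - lower

def upper_boundary(R, F, n, alpha):
    """alpha upper boundary: the upper approximation under R_alpha minus F."""
    upper = {x for x in range(n) if any(R[x][y] >= alpha for y in F)}
    return upper - F

# Similarity matrix from Sect. 2.3 and the crisp set F = {x1, x4}.
R = [[1.0, 0.2, 0.3, 0.5],
     [0.2, 1.0, 0.0, 0.0],
     [0.3, 0.0, 1.0, 0.2],
     [0.5, 0.0, 0.2, 1.0]]
F, n = {0, 3}, 4
```

For example, at \(\alpha=0.2\) the lower boundary is \(\{x_1\}\) and the upper boundary is \(\{x_2, x_3\}\), while at \(\alpha=0.3\) the lower boundary is empty.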

Theorem 2

Let \({R_1},{R_2}\) be two fuzzy similarity relations and \({R_1} \subseteq {R_2}\) (for \(\forall x,y \in U\), \(R_{1} (x,y) \le R_{2} (x,y)\) ), the following properties hold:

(1) \({\underline {apr} _{{R_{1{\alpha ^+}}}}}(F) \supseteq {\underline {apr} _{{R_{2{\alpha ^+}}}}}(F)\);

(2) \({\overline {apr} _{{R_{1\alpha }}}}(F) \subseteq {\overline {apr} _{{R_{2\alpha }}}}(F)\);

(3) \({\underline {BN} _{{R_{1\alpha }}}}(F) \subseteq {\underline {BN} _{{R_{2\alpha }}}}(F)\);

(4) \({\overline {BN} _{{R_{1\alpha }}}}(F) \subseteq {\overline {BN} _{{R_{2\alpha }}}}(F)\).

Proof

(1) If \({R_1} \subseteq {R_2}\), then for \(\forall \alpha \in (0,1]\), one obtains \({R_{1{\alpha ^+}}} \subseteq {R_{2{\alpha ^+}}}\). That is, the partition induced by \({R_{1{\alpha ^+}}}\) is finer than the one induced by \({R_{2{\alpha ^+}}}\). According to the property of the lower approximation in Pawlak’s rough sets, one can obtain \({\underline {apr} _{{R_{1{\alpha ^+}}}}}(F) \supseteq {\underline {apr} _{{R_{2{\alpha ^+}}}}}(F)\).

(2) If \({R_1} \subseteq {R_2}\), then for \(\forall \alpha \in (0,1]\), one obtains \({R_{1\alpha }} \subseteq {R_{2\alpha }}\). According to the property of the upper approximation in Pawlak’s rough sets, one can obtain \({\overline {apr} _{{R_{1\alpha }}}}(F) \subseteq {\overline {apr} _{{R_{2\alpha }}}}(F)\).

(3) According to (1), one obtains \(F - {\underline {apr} _{{R_{1{\alpha ^+}}}}}(F) \subseteq F - {\underline {apr} _{{R_{2{\alpha ^+}}}}}(F)\), that is \({\underline {BN} _{{R_{1\alpha }}}}(F) \subseteq {\underline {BN} _{{R_{2\alpha }}}}(F)\).

(4) According to (2), one obtains \({\overline {apr} _{{R_{1\alpha }}}}(F) - F \subseteq {\overline {apr} _{{R_{2\alpha }}}}(F) - F\), that is \({\overline {BN} _{{R_{1\alpha }}}}(F) \subseteq {\overline {BN} _{{R_{2\alpha }}}}(F)\).

Theorem 3

Let \(S=(U,A)\) be a fuzzy information system, where \(U=\{ {x_1},{x_2}, \ldots ,{x_n}\}\) is the universe and \(A=\{ {A_1},{A_2}, \ldots ,{A_m}\} =\{ {A_k}|k=1,2, \ldots ,m\}\) is a fuzzy attribute set. The fuzzy similarity relation corresponding to \(A\) is denoted as \(R\). When attributes \({A_{m+1}},{A_{m+2}}, \ldots ,{A_{m+h}}\) are added to \(A\), the updated fuzzy attribute set is denoted as \(A{^\prime},\) and corresponding fuzzy similarity relation is denoted as \(R{^\prime}.\) The following properties hold:

(1) \({\underline {apr} _{R{'_{{\alpha ^+}}}}}(F) \supseteq {\underline {apr} _{{R_{{\alpha ^+}}}}}(F)\);

(2) \({\overline {apr} _{R{'_\alpha }}}(F) \subseteq {\overline {apr} _{{R_\alpha }}}(F)\);

(3) \({\underline {BN} _{R{'_\alpha }}}(F) \subseteq {\underline {BN} _{{R_\alpha }}}(F)\);

(4) \({\overline {BN} _{R{'_\alpha }}}(F) \subseteq {\overline {BN} _{{R_\alpha }}}(F)\).

Proof

According to the fuzzy similarity relation construction method in Sect. 2.3, the elements of \(R\) are \({r_{ij}}=\mathop {\min }\limits_{k=1,2, \ldots ,m} {A_k}({x_i},{x_j})\). When attributes \({A_{m+1}},{A_{m+2}}, \ldots ,{A_{m+h}}\) are added to \(A\), the elements of the updated \(R{^\prime}\) are \({r_{ij}}^\prime =\mathop {\min }\limits_{k=1,2, \ldots ,m+h} {A_k}({x_i},{x_j})\). Clearly, we have \({r_{ij}}^\prime \leqslant {r_{ij}}\), that is, \(R{^\prime} \subseteq R\). According to Theorem 2, one can obtain \({\underline {apr} _{R{'_{{\alpha ^+}}}}}(F) \supseteq {\underline {apr} _{{R_{{\alpha ^+}}}}}(F)\) and \({\overline {apr} _{R{'_\alpha }}}(F) \subseteq {\overline {apr} _{{R_\alpha }}}(F)\). According to (1) and (2), one can prove (3) and (4).

Theorem 3 shows that the lower approximation is monotonically increasing and the upper approximation is monotonically decreasing when attributes are added.
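The monotonicity in Theorem 3 can be illustrated numerically: adding attributes can only lower the entries of the similarity matrix (\(r'_{ij} \leqslant r_{ij}\)), which enlarges the lower approximations of the cut relations and shrinks the upper ones. The sketch below is ours; the `extra` matrix is a hypothetical similarity contribution of the new attributes:

```python
def lower_strong(R, F, n, alpha):
    """Lower approximation of the crisp set F under R_{alpha+}."""
    return {x for x in range(n)
            if all(y in F for y in range(n) if R[x][y] > alpha)}

def upper_cut(R, F, n, alpha):
    """Upper approximation of the crisp set F under R_alpha."""
    return {x for x in range(n) if any(R[x][y] >= alpha for y in F)}

R = [[1.0, 0.2, 0.3, 0.5],
     [0.2, 1.0, 0.0, 0.0],
     [0.3, 0.0, 1.0, 0.2],
     [0.5, 0.0, 0.2, 1.0]]
# Hypothetical similarity degrees contributed by added attributes;
# the refined relation is the entrywise minimum, so R_new <= R pointwise.
extra = [[1.0, 0.1, 0.3, 0.2],
         [0.1, 1.0, 0.0, 0.0],
         [0.3, 0.0, 1.0, 0.1],
         [0.2, 0.0, 0.1, 1.0]]
R_new = [[min(a, b) for a, b in zip(ra, rb)] for ra, rb in zip(R, extra)]

F, n = {0, 3}, 4
for alpha in (0.1, 0.2, 0.3, 0.5):
    # Lower approximations grow, upper approximations shrink (<= is subset).
    assert lower_strong(R, F, n, alpha) <= lower_strong(R_new, F, n, alpha)
    assert upper_cut(R_new, F, n, alpha) <= upper_cut(R, F, n, alpha)
```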

We will incrementally update the approximations of a set \(F\) based on the lower and upper boundary sets in the following propositions.

Proposition 1

Let \(U\) be a finite non-empty universe, \(F\) be a crisp set on \(U\), and \({R_1},{R_2}\) be fuzzy similarity relations defined on \(U\). The lower approximations of \(F\) under \({R_1}\) and \({R_2}\) are known. When \({R_1}\) is combined with \({R_2}\), a new fuzzy similarity relation \({R_1} \cap {R_2}\) is obtained. Then under the fuzzy similarity relation \({R_1} \cap {R_2}\) , the lower approximation of \(F\) can be updated as:
$${\mu _{{{\underline {apr} }_{{R_1} \cap {R_2}}}(F)}}(x)=\left\{ {\begin{array}{*{20}{l}} {\mathop {\inf }\limits_{y \notin F} \{ 1 - {\mu _{{{[x]}_{{R_1} \cap {R_2}}}}}(y)\}, \quad x \in Y} \\ {{\mu _{{{\underline {apr} }_{{R_1}}}(F)}}(x) \vee {\mu _{{{\underline {apr} }_{{R_2}}}(F)}}(x), \quad x \notin Y} \end{array}} \right.,$$
where, \(Y=\mathop \cup \nolimits_{\alpha \in (0,1]} {Y_\alpha }\), \({Y_\alpha }=\{ x \in {\underline {BN} _{{R_{1\alpha }}}}(F) \cap {\underline {BN} _{{R_{2\alpha }}}}(F)|{[x]_{{{({R_1} \cap {R_2})}_{{\alpha ^+}}}}} \subseteq F\}\).

Proof

Partition \(U\) into two subsets, \(Y\) and \(U - Y\), where \(Y\) is composed of the elements of the lower boundary set satisfying certain conditions. For the elements in \(Y\), compute their lower approximations directly; for the elements not in \(Y\), update their lower approximations from \({\underline {apr} _{{R_1}}}(F)\) and \({\underline {apr} _{{R_2}}}(F)\).

(1) Consider the conditions satisfied by the elements in \({Y_\alpha }\): if \(x \in {\underline {apr} _{{{({R_1} \cap {R_2})}_{{\alpha ^+}}}}}(F)\), then \({[x]_{{{({R_1} \cap {R_2})}_{{\alpha ^+}}}}} \subseteq F\). It is clear that \({R_1} \cap {R_2} \subseteq {R_1}\) and \({R_1} \cap {R_2} \subseteq {R_2}\). In terms of Theorem 2, one can obtain \({\underline {BN} _{{R_{1\alpha }}}}(F) \supseteq {\underline {BN} _{{{({R_1} \cap {R_2})}_\alpha }}}(F)\) and \({\underline {BN} _{{R_{2\alpha }}}}(F) \supseteq {\underline {BN} _{{{({R_1} \cap {R_2})}_\alpha }}}(F)\), which implies that \({\underline {BN} _{{{({R_1} \cap {R_2})}_\alpha }}}(F) \subseteq {\underline {BN} _{{R_{1\alpha }}}}(F) \cap {\underline {BN} _{{R_{2\alpha }}}}(F)\). So for \(\forall x \in {\underline {BN} _{{{({R_1} \cap {R_2})}_\alpha }}}(F)\), one obtains \(x \in {\underline {BN} _{{R_{1\alpha }}}}(F) \cap {\underline {BN} _{{R_{2\alpha }}}}(F)\), namely, \(x \in {Y_\alpha }=\{ x \in {\underline {BN} _{{R_{1\alpha }}}}(F) \cap {\underline {BN} _{{R_{2\alpha }}}}(F)|{[x]_{{{({R_1} \cap {R_2})}_{{\alpha ^+}}}}} \subseteq F\}\). For the elements in \(Y\), compute their lower approximations directly.

(2) For \(\forall x \in {\underline {apr} _{{{({R_1} \cap {R_2})}_{{\alpha ^+}}}}}(F)\) with \(x \notin {Y_\alpha }\), \(\forall \alpha \in (0,1]\), one can obtain \(x \notin {\underline {BN} _{{R_{1\alpha }}}}(F) \cap {\underline {BN} _{{R_{2\alpha }}}}(F)\), then \(x \notin (F - {\underline {apr} _{{R_{1{\alpha ^+}}}}}(F)) \cap (F - {\underline {apr} _{{R_{2{\alpha ^+}}}}}(F))\), that is \(x \in {\underline {apr} _{{R_{1{\alpha ^+}}}}}(F) \cup {\underline {apr} _{{R_{2{\alpha ^+}}}}}(F)\). One can further obtain \(x \in {({\underline {apr} _{{R_1}}}(F))_{1 - \alpha }} \cup {({\underline {apr} _{{R_2}}}(F))_{1 - \alpha}}\), namely, \(x \in {({\underline {apr} _{{R_1}}}(F) \cup {\underline {apr} _{{R_2}}}(F))_{1 - \alpha }}\), so \({\underline {apr} _{{{({R_1} \cap {R_2})}_{{\alpha ^+}}}}}(F) \subseteq {({\underline {apr} _{{R_1}}}(F) \cup {\underline {apr} _{{R_2}}}(F))_{1 - \alpha }}.\) According to Theorem 1, one obtains \({\underline {apr} _{{{({R_1} \cap {R_2})}_{{\alpha ^+}}}}}(F)={({\underline {apr} _{{R_1} \cap {R_2}}}(F))_{1 - \alpha }}\), then \({({\underline {apr} _{{R_1} \cap {R_2}}}(F))_{1 - \alpha }} \subseteq {({\underline {apr} _{{R_1}}}(F) \cup {\underline {apr} _{{R_2}}}(F))_{1 - \alpha }}\).

On the other hand, one can obtain \({\underline {apr} _{{R_{1{\alpha ^+}}}}}(F) \subseteq {\underline {apr} _{{{({R_1} \cap {R_2})}_{{\alpha ^+}}}}}(F)\) and \({\underline {apr} _{{R_{2{\alpha ^+}}}}}(F) \subseteq {\underline {apr} _{{{({R_1} \cap {R_2})}_{{\alpha ^+}}}}}(F)\) from Theorem 2, then \({\underline {apr} _{{R_{1{\alpha ^+}}}}}(F) \cup {\underline {apr} _{{R_{2{\alpha ^+}}}}}(F) \subseteq {\underline {apr} _{{{({R_1} \cap {R_2})}_{{\alpha ^+}}}}}(F)\), that is \({({\underline {apr} _{{R_1}}}(F))_{1 - \alpha }} \cup {({\underline {apr} _{{R_2}}}(F))_{1 - \alpha }} \subseteq {({\underline {apr} _{{R_1} \cap {R_2}}}(F))_{1 - \alpha }}\), then \({({\underline {apr} _{{R_1}}}(F) \cup {\underline {apr} _{{R_2}}}(F))_{1 - \alpha }}\) \(\subseteq {({\underline {apr} _{{R_1} \cap {R_2}}}(F))_{1 - \alpha }}\).

Thus \({({\underline {apr} _{{R_1} \cap {R_2}}}(F))_{1 - \alpha }}={({\underline {apr} _{{R_1}}}(F) \cup {\underline {apr} _{{R_2}}}(F))_{1 - \alpha }},\forall \alpha \in (0,1]\). Therefore, one can derive \({\underline {apr} _{{R_1} \cap {R_2}}}(F)={\underline {apr} _{{R_1}}}(F) \cup {\underline {apr} _{{R_2}}}(F)\).

We will illustrate how to determine the value range of \(\alpha\) in the algorithms UAFRB and UAFRC below.

Proposition 2

Let \(U\) be a finite non-empty universe, \(F\) be a crisp set on \(U\), and \({R_1},{R_2}\) be fuzzy similarity relations defined on \(U\). The upper approximations of \(F\) under \({R_1}\) and \({R_2}\) are known. When \({R_1}\) is combined with \({R_2}\), a new fuzzy similarity relation \({R_1} \cap {R_2}\) is obtained. Then under the fuzzy similarity relation \({R_1} \cap {R_2}\), the upper approximation of \(F\) can be updated as:
$${\mu _{{{\overline {apr} }_{{R_1} \cap {R_2}}}(F)}}(x)=\left\{ {\begin{array}{*{20}{l}} {\mathop {\sup }\limits_{y \in F} {\mu _{{{[x]}_{{R_1} \cap {R_2}}}}}(y), \quad x \in Z} \\ {{\mu _{{{\overline {apr} }_{{R_1}}}(F)}}(x) \wedge {\mu _{{{\overline {apr} }_{{R_2}}}(F)}}(x), \quad x \notin Z} \end{array}} \right.,$$
where, \(Z=\mathop \cup \nolimits_{\alpha \in (0,1]} {Z_\alpha }\), \({Z_\alpha }=\{ x \in {\overline {BN} _{{R_{1\alpha }}}}(F) \cap {\overline {BN} _{{R_{2\alpha }}}}(F)|{[x]_{{R_{1\alpha }}}} \cap {[x]_{{R_{2\alpha }}}} \subseteq {\overline {BN} _{{R_{1\alpha }}}}(F)\) \(\cap {\overline {BN} _{{R_{2\alpha }}}}(F)\}\).

Proof

Partition \(U\) into two subsets, \(Z\) and \(U - Z\), where \(Z\) is composed of the elements in the upper boundary set satisfying certain conditions. For the elements in \(Z\), compute their upper approximations directly. For the elements not in \(Z\), update their upper approximations by \({\overline {apr} _{{R_1}}}(F)\) and \({\overline {apr} _{{R_2}}}(F)\).

According to Theorem 2, one obtains \({\overline {apr} _{{{({R_1} \cap {R_2})}_\alpha }}}(F) \subseteq {\overline {apr} _{{R_{1\alpha }}}}(F)\) and \({\overline {apr} _{{{({R_1} \cap {R_2})}_\alpha }}}(F) \subseteq {\overline {apr} _{{R_{2\alpha }}}}(F)\). Then \({\overline {apr} _{{{({R_1} \cap {R_2})}_\alpha }}}(F) \subseteq {\overline {apr} _{{R_{1\alpha }}}}(F) \cap {\overline {apr} _{{R_{2\alpha }}}}(F)\) .

Conversely, if \(x \in {\overline {apr} _{{R_{1\alpha }}}}(F) \cap {\overline {apr} _{{R_{2\alpha }}}}(F)\) and \(x \notin {Z_\alpha }\), one can obtain \(x \notin {\overline {BN} _{{R_{1\alpha }}}}(F) \cap {\overline {BN} _{{R_{2\alpha }}}}(F).\) That is, \(x \notin ({\overline {apr} _{{R_{1\alpha }}}}(F) - F) \cap ({\overline {apr} _{{R_{2\alpha }}}}(F) - F),\) so \(x \in F \subseteq {\overline {apr} _{{{({R_1} \cap {R_2})}_\alpha }}}(F)\), namely, \({\overline {apr} _{{R_{1\alpha }}}}(F) \cap {\overline {apr} _{{R_{2\alpha }}}}(F) \subseteq {\overline {apr} _{{{({R_1} \cap {R_2})}_\alpha }}}(F)\).

Consequently, outside \({Z_\alpha }\) one can obtain \({\overline {apr} _{{{({R_1} \cap {R_2})}_\alpha }}}(F)={\overline {apr} _{{R_{1\alpha }}}}(F) \cap {\overline {apr} _{{R_{2\alpha }}}}(F)\) . According to Theorem 1, one obtains \({\overline {apr} _{{R_{1\alpha }}}}(F)={({\overline {apr} _{{R_1}}}(F))_\alpha }\) and \({\overline {apr} _{{R_{2\alpha }}}}(F)={({\overline {apr} _{{R_2}}}(F))_\alpha }\), so \({\overline {apr} _{{{({R_1} \cap {R_2})}_\alpha }}}(F)={({\overline {apr} _{{R_1} \cap {R_2}}}(F))_\alpha}.\) One can further obtain \({({\overline {apr} _{{R_1} \cap {R_2}}}(F))_\alpha }={({\overline {apr} _{{R_1}}}(F))_\alpha } \cap {({\overline {apr} _{{R_2}}}(F))_\alpha }={({\overline {apr} _{{R_1}}}(F) \cap {\overline {apr} _{{R_2}}}(F))_\alpha}\) for the elements outside \(Z\). Therefore, for \(x \notin Z\), one obtains \({\mu _{{{\overline {apr} }_{{R_1} \cap {R_2}}}(F)}}(x)={\mu _{{{\overline {apr} }_{{R_1}}}(F)}}(x) \wedge {\mu _{{{\overline {apr} }_{{R_2}}}(F)}}(x)\), which is the second case of the update formula.

An example is given to illustrate Propositions 1 and 2.

Example 1

Let \(U=\{ {x_1},{x_2},{x_3},{x_4},{x_5}\}\) be a finite non-empty universe, \(F=\{ {x_1},{x_2},{x_4}\}\) be a set on \(U\), and let
$$R_{1} = \left[ {\begin{array}{*{20}{c}} 1&{0.4}&{0.8}&{0.5}&{0.5} \\ {0.4}&1&{0.4}&{0.4}&{0.4} \\ {0.8}&{0.4}&1&{0.5}&{0.5} \\ {0.5}&{0.4}&{0.5}&1&{0.6} \\ {0.5}&{0.4}&{0.5}&{0.6}&1 \end{array}} \right],\quad R_{2} = \left[ {\begin{array}{*{20}{c}} 1&{0.8}&{0.4}&{0.5}&{0.8} \\ {0.8}&1&{0.4}&{0.5}&{0.9} \\ {0.4}&{0.4}&1&{0.4}&{0.4} \\ {0.5}&{0.5}&{0.4}&1&{0.5} \\ {0.8}&{0.9}&{0.4}&{0.5}&1 \end{array}} \right]$$
be fuzzy similarity relations.

Under the fuzzy similarity relation \({R_1}\), the lower and upper approximations of \(F\) are \({\mu _{{{\underline {apr} }_{{R_1}}}(F)}}({x_1})=0.2\), \({\mu _{{{\underline {apr} }_{{R_1}}}(F)}}({x_2})=0.6\), \({\mu _{{{\underline {apr} }_{{R_1}}}(F)}}({x_3})=0\), \({\mu _{{{\underline {apr} }_{{R_1}}}(F)}}({x_4})=0.4\), \({\mu _{{{\underline {apr} }_{{R_1}}}(F)}}({x_5})=0\); \({\mu _{{{\overline {apr} }_{{R_1}}}(F)}}({x_1})=1\), \({\mu _{{{\overline {apr} }_{{R_1}}}(F)}}({x_2})=1\), \({\mu _{{{\overline {apr} }_{{R_1}}}(F)}}({x_3})=0.8\), \({\mu _{{{\overline {apr} }_{{R_1}}}(F)}}({x_4})=1\), \({\mu _{{{\overline {apr} }_{{R_1}}}(F)}}({x_5})=0.6\);

Under the fuzzy similarity relation \({R_2}\), the lower and upper approximations of \(F\) are \({\mu _{{{\underline {apr} }_{{R_2}}}(F)}}({x_1})=0.2\), \({\mu _{{{\underline {apr} }_{{R_2}}}(F)}}({x_2})=0.1\), \({\mu _{{{\underline {apr} }_{{R_2}}}(F)}}({x_3})=0,{\mu _{{{\underline {apr} }_{{R_2}}}(F)}}({x_4})=0.5\), \({\mu _{{{\underline {apr} }_{{R_2}}}(F)}}({x_5})=0\); \({\mu _{{{\overline {apr} }_{{R_2}}}(F)}}({x_1})=1\), \({\mu _{{{\overline {apr} }_{{R_2}}}(F)}}({x_2})=1\), \({\mu _{{{\overline {apr} }_{{R_2}}}(F)}}({x_3})=0.4\), \({\mu _{{{\overline {apr} }_{{R_2}}}(F)}}({x_4})=1\), \({\mu _{{{\overline {apr} }_{{R_2}}}(F)}}({x_5})=0.9\).
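These values can be reproduced with a short sketch. Assuming Definition 1 takes the usual form for a crisp concept \(F\) (lower membership \(\mathop {\inf }\nolimits_{y \notin F} \{ 1 - {\mu _{{{[x]}_R}}}(y)\}\), upper membership \(\mathop {\sup }\nolimits_{y \in F} {\mu _{{{[x]}_R}}}(y)\), the form used in the computations of this example), a minimal Python check is:

```python
# Fuzzy rough approximations of a crisp set F under a fuzzy
# similarity relation R (a sketch; the inf/sup form of Definition 1
# for a crisp concept is assumed, matching the example computations).

def lower_approx(R, F):
    """mu(x) = inf over y outside F of (1 - R[x][y])."""
    n = len(R)
    out = [y for y in range(n) if y not in F]
    return [round(min(1 - R[x][y] for y in out), 4) for x in range(n)]

def upper_approx(R, F):
    """mu(x) = sup over y in F of R[x][y]."""
    return [max(R[x][y] for y in F) for x in range(len(R))]

# R1, R2 and F = {x1, x2, x4} from Example 1 (0-based indices).
R1 = [[1.0, 0.4, 0.8, 0.5, 0.5],
      [0.4, 1.0, 0.4, 0.4, 0.4],
      [0.8, 0.4, 1.0, 0.5, 0.5],
      [0.5, 0.4, 0.5, 1.0, 0.6],
      [0.5, 0.4, 0.5, 0.6, 1.0]]
R2 = [[1.0, 0.8, 0.4, 0.5, 0.8],
      [0.8, 1.0, 0.4, 0.5, 0.9],
      [0.4, 0.4, 1.0, 0.4, 0.4],
      [0.5, 0.5, 0.4, 1.0, 0.5],
      [0.8, 0.9, 0.4, 0.5, 1.0]]
F = {0, 1, 3}

print(lower_approx(R1, F))  # [0.2, 0.6, 0.0, 0.4, 0.0]
print(upper_approx(R1, F))  # [1.0, 1.0, 0.8, 1.0, 0.6]
print(lower_approx(R2, F))  # [0.2, 0.1, 0.0, 0.5, 0.0]
print(upper_approx(R2, F))  # [1.0, 1.0, 0.4, 1.0, 0.9]
```

Running the sketch reproduces exactly the memberships listed above for \({R_1}\) and \({R_2}\).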

We compute the lower and upper approximations of \(F\) under the fuzzy similarity relation \({R_1} \cap {R_2}\) by the incremental method.

1. Compute the lower approximation. According to Proposition 1, the computation includes two steps: (1) compute \(Y\); (2) update the lower approximations.

We first compute \(Y\): when \(\alpha =0.8\), \({Y_\alpha }=\varnothing\); when \(\alpha =0.6\), \({Y_\alpha }=\{ {x_1}\}\); when \(\alpha =0.5\), \({Y_\alpha }=\{ {x_1}\}\); when \(\alpha =0.4,\) \({Y_\alpha }=\varnothing\). Then \(Y=\{ {x_1}\}\).

For \({x_1}\), compute the lower approximation directly, one can obtain \({\mu _{{{\underline {apr} }_{{R_1} \cap {R_2}}}(F)}}({x_1})=\mathop {\inf }\limits_{y \notin F} \{ 1 - {\mu _{{{[{x_1}]}_{{R_1} \cap {R_2}}}}}(y)\} =0.5\).

For the elements not in \(Y\), updating their lower approximations by \({\underline {apr} _{{R_1}}}(F)\) and \({\underline {apr} _{{R_2}}}(F)\), one can obtain:
$${\mu _{{{\underline {apr} }_{{R_1} \cap {R_2}}}(F)}}({x_2})={\mu _{{{\underline {apr} }_{{R_1}}}(F)}}({x_2}) \vee {\mu _{{{\underline {apr} }_{{R_2}}}(F)}}({x_2})=0.6 \vee 0.1=0.6;$$
$${\mu _{{{\underline {apr} }_{{R_1} \cap {R_2}}}(F)}}({x_3})={\mu _{{{\underline {apr} }_{{R_1}}}(F)}}({x_3}) \vee {\mu _{{{\underline {apr} }_{{R_2}}}(F)}}({x_3})=0 \vee 0=0;$$
$${\mu _{{{\underline {apr} }_{{R_1} \cap {R_2}}}(F)}}({x_4})={\mu _{{{\underline {apr} }_{{R_1}}}(F)}}({x_4}) \vee {\mu _{{{\underline {apr} }_{{R_2}}}(F)}}({x_4})=0.4 \vee 0.5=0.5;$$
$${\mu _{{{\underline {apr} }_{{R_1} \cap {R_2}}}(F)}}({x_5})={\mu _{{{\underline {apr} }_{{R_1}}}(F)}}({x_5}) \vee {\mu _{{{\underline {apr} }_{{R_2}}}(F)}}({x_5})=0 \vee 0=0$$
2. Compute the upper approximation.

We first compute \(Z\): when \(\alpha =1\), \({Z_\alpha }=\varnothing\); when \(\alpha =0.8\), \({Z_\alpha }=\varnothing\); when \(\alpha =0.6\), \({Z_\alpha }=\{ {x_5}\}\); when \(\alpha =0.5\), \({Z_\alpha }=\varnothing\); when \(\alpha =0.4\), \({Z_\alpha }=\varnothing\). Then \(Z=\{ {x_5}\}\).

For \({x_5}\), compute the upper approximation directly, one obtains \({\mu _{{{\overline {apr} }_{{R_1} \cap {R_2}}}(F)}}({x_5})=\mathop {\sup }\nolimits_{y \in F} {\mu _{{{[{x_5}]}_{{R_1} \cap {R_2}}}}}(y)=0.5\).

For the elements not in \(Z\), updating their upper approximations by \({\overline {apr} _{{R_1}}}(F)\) and \({\overline {apr} _{{R_2}}}(F)\), one can obtain:
$${\mu _{{{\overline {apr} }_{{R_1} \cap {R_2}}}(F)}}({x_1})={\mu _{{{\overline {apr} }_{{R_1}}}(F)}}({x_1}) \wedge {\mu _{{{\overline {apr} }_{{R_2}}}(F)}}({x_1})=1 \wedge 1=1;$$
$${\mu _{{{\overline {apr} }_{{R_1} \cap {R_2}}}(F)}}({x_2})={\mu _{{{\overline {apr} }_{{R_1}}}(F)}}({x_2}) \wedge {\mu _{{{\overline {apr} }_{{R_2}}}(F)}}({x_2})=1 \wedge 1=1;$$
$${\mu _{{{\overline {apr} }_{{R_1} \cap {R_2}}}(F)}}({x_3})={\mu _{{{\overline {apr} }_{{R_1}}}(F)}}({x_3}) \wedge {\mu _{{{\overline {apr} }_{{R_2}}}(F)}}({x_3})=0.8 \wedge 0.4=0.4;$$
$${\mu _{{{\overline {apr} }_{{R_1} \cap {R_2}}}(F)}}({x_4})={\mu _{{{\overline {apr} }_{{R_1}}}(F)}}({x_4}) \wedge {\mu _{{{\overline {apr} }_{{R_2}}}(F)}}({x_4})=1 \wedge 1=1.$$

Remark 3

To verify the result of the incremental method, one can also directly compute the lower approximations of \(F\) under the fuzzy similarity relation \({R_1} \cap {R_2}\).

First, computing \({R_1} \cap {R_2}\), one obtains

\({R_1} \cap {R_2}=\left[ {\begin{array}{*{20}{c}} 1&{0.4}&{0.4}&{0.5}&{0.5} \\ {0.4}&1&{0.4}&{0.4}&{0.4} \\ {0.4}&{0.4}&1&{0.4}&{0.4} \\ {0.5}&{0.4}&{0.4}&1&{0.5} \\ {0.5}&{0.4}&{0.4}&{0.5}&1 \end{array}} \right]\). According to Definition 1, one can directly compute the lower approximations:

\({\mu _{{{\underline {apr} }_{{R_1} \cap {R_2}}}(F)}}({x_1})=\mathop {\inf }\limits_{y \notin F} \{ 1 - {\mu _{{{[{x_1}]}_{{R_1} \cap {R_2}}}}}(y)\} =0.5\); \({\mu _{{{\underline {apr} }_{{R_1} \cap {R_2}}}(F)}}({x_2})=0.6\); \({\mu _{{{\underline {apr} }_{{R_1} \cap {R_2}}}(F)}}({x_3})=0\); \({\mu _{{{\underline {apr} }_{{R_1} \cap {R_2}}}(F)}}({x_4})=0.5;\) \({\mu _{{{\underline {apr} }_{{R_1} \cap {R_2}}}(F)}}({x_5})=0\). Clearly, the lower approximations computed by the incremental method are the same as those computed by Definition 1.

In addition, we notice that for \({x_1},\) \({\mu _{{{\underline {apr} }_{{R_1}}}(F)}}({x_1}) \vee {\mu _{{{\underline {apr} }_{{R_2}}}(F)}}({x_1})=0.2,\) while \({\mu _{{{\underline {apr} }_{{R_1} \cap {R_2}}}(F)}}({x_1})=\mathop {\inf }\limits_{y \notin F} \{ 1 - {\mu _{{{[{x_1}]}_{{R_1} \cap {R_2}}}}}(y)\} =0.5\). That is, \({\mu _{{{\underline {apr} }_{{R_1} \cap {R_2}}}(F)}}({x_1}) \ne {\mu _{{{\underline {apr} }_{{R_1}}}(F)}}({x_1}) \vee {\mu _{{{\underline {apr} }_{{R_2}}}(F)}}({x_1})\), which shows that the definition of the lower boundary set is necessary.

In the same way, one can directly compute the upper approximations of \(F\) under the fuzzy similarity relation \({R_1} \cap {R_2}\).

One obtains \({\mu _{{{\overline {apr} }_{{R_1} \cap {R_2}}}(F)}}({x_1})=\mathop {\sup }\nolimits_{y \in F} {\mu _{{{[{x_1}]}_{{R_1} \cap {R_2}}}}}(y)=1\); \({\mu _{{{\overline {apr} }_{{R_1} \cap {R_2}}}(F)}}({x_2})=1\); \({\mu _{{{\overline {apr} }_{{R_1} \cap {R_2}}}(F)}}({x_3})=0.4\); \({\mu _{{{\overline {apr} }_{{R_1} \cap {R_2}}}(F)}}({x_4})=1\); \({\mu _{{{\overline {apr} }_{{R_1} \cap {R_2}}}(F)}}({x_5})=0.5.\) The upper approximations computed by the incremental method are the same as those computed by Definition 1.

Notice that for \({x_5}\), \({\mu _{{{\overline {apr} }_{{R_1}}}(F)}}({x_5}) \wedge {\mu _{{{\overline {apr} }_{{R_2}}}(F)}}({x_5})=0.6\), and \({\mu _{{{\overline {apr} }_{{R_1} \cap {R_2}}}(F)}}({x_5})=\mathop {\sup }\nolimits_{y \in F} {\mu _{{{[{x_5}]}_{{R_1} \cap {R_2}}}}}(y)=0.5\). That is, \({\mu _{{{\overline {apr} }_{{R_1} \cap {R_2}}}(F)}}({x_5})\) \(\ne {\mu _{{{\overline {apr} }_{{R_1}}}(F)}}({x_5}) \wedge {\mu _{{{\overline {apr} }_{{R_2}}}(F)}}({x_5})\), which shows that the upper boundary set can effectively select the elements that need to be computed directly.

When the number of objects in the boundary sets is far less than the cardinality of the universe, then for most objects a single \(\vee\) or \(\wedge\) operation suffices to incrementally update their approximations instead of calculating the approximations directly. This fact shows that the complexity of the proposed method is reduced, since the main cost of computing approximations is related to the number of objects in the corresponding boundary sets.
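The boundary-set update of Propositions 1 and 2 can be sketched as follows. The sets \(Y=\{ {x_1}\}\) and \(Z=\{ {x_5}\}\) are taken as already determined above, and the elementwise minimum plays the role of \({R_1} \cap {R_2}\); this is an illustrative sketch, not the full algorithms UAFRB/UAFRC:

```python
# Incremental update of the approximations under R1 ∩ R2
# (Propositions 1 and 2, with Y = {x1}, Z = {x5} from Example 1).

R1 = [[1.0, 0.4, 0.8, 0.5, 0.5],
      [0.4, 1.0, 0.4, 0.4, 0.4],
      [0.8, 0.4, 1.0, 0.5, 0.5],
      [0.5, 0.4, 0.5, 1.0, 0.6],
      [0.5, 0.4, 0.5, 0.6, 1.0]]
R2 = [[1.0, 0.8, 0.4, 0.5, 0.8],
      [0.8, 1.0, 0.4, 0.5, 0.9],
      [0.4, 0.4, 1.0, 0.4, 0.4],
      [0.5, 0.5, 0.4, 1.0, 0.5],
      [0.8, 0.9, 0.4, 0.5, 1.0]]
F = {0, 1, 3}
n = len(R1)
R12 = [[min(a, b) for a, b in zip(r, s)] for r, s in zip(R1, R2)]

# Approximations already known under R1 and R2 (listed in Example 1).
low1, low2 = [0.2, 0.6, 0.0, 0.4, 0.0], [0.2, 0.1, 0.0, 0.5, 0.0]
up1, up2 = [1.0, 1.0, 0.8, 1.0, 0.6], [1.0, 1.0, 0.4, 1.0, 0.9]
Y, Z = {0}, {4}   # boundary-set selections determined in Example 1

out = [y for y in range(n) if y not in F]
# Elements in Y (resp. Z) are recomputed directly; all others are
# combined with a single max (∨) resp. min (∧).
low12 = [round(min(1 - R12[x][y] for y in out), 4) if x in Y
         else max(low1[x], low2[x]) for x in range(n)]
up12 = [max(R12[x][y] for y in F) if x in Z
        else min(up1[x], up2[x]) for x in range(n)]
print(low12)  # [0.5, 0.6, 0.0, 0.5, 0.0]
print(up12)   # [1.0, 1.0, 0.4, 1.0, 0.5]
```

Only the boundary elements \(x_1\) and \(x_5\) require a direct computation; the other four memberships are obtained with one comparison each.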

4 Updating approximations incrementally based on cut sets

Let \(U\) be a finite non-empty universe and \(F\) be a crisp set on \(U\). Theorem 1 suggests a new way of updating approximations: the lower and upper approximations of \(F\) under the similarity relation \(R\) can be obtained from the lower and upper approximations of \(F\) under the equivalence relation \({R_\alpha }\) or \({R_{{\alpha ^+}}}\). In order to incrementally update the approximations of \(F\) under \(R\), we first consider updating the approximations of \(F\) under \({R_\alpha }\) or \({R_{{\alpha ^+}}}\). The following propositions show how to update the approximations of \(F\) under \(R\).

Proposition 1′

Let \(U\) be a finite non-empty universe, \(F\) be a crisp set on \(U\), and \({R_1},{R_2}\) be fuzzy similarity relations defined on \(U\). The lower approximations of \(F\) under \({R_1}\) and \({R_2}\) are known. When \({R_1}\) is combined with \({R_2}\), a new fuzzy similarity relation \({R_1} \cap {R_2}\) is obtained. Then under the fuzzy similarity relation \({R_1} \cap {R_2}\), the lower approximation of \(F\) can be updated as:
$${\mu _{{{\underline {apr} }_{{R_1} \cap {R_2}}}(F)}}(x) =\sup \{ 1 - \alpha |x \in {({\underline {apr} _{{R_1} \cap {R_2}}}(F))_{1 - \alpha }}\} =\sup \{ 1 - \alpha |x \in {\underline {apr} _{{{({R_1} \cap {R_2})}_{{\alpha ^+}}}}}(F)\} ,$$
where, \({\underline {apr} _{{{({R_1} \cap {R_2})}_{{\alpha ^+}}}}}(F)={\underline {apr} _{{R_{1{\alpha ^+}}}}}(F) \cup {\underline {apr} _{{R_{2{\alpha ^+}}}}}(F) \cup {Y_\alpha }{^\prime}\), \({Y_\alpha }'=\{ x \in {\underline {BN} _{{R_{1\alpha }}}}(F) \cap {\underline {BN} _{{R_{2\alpha }}}}(F)|{[x]_{{{({R_1} \cap {R_2})}_{{\alpha ^+}}}}} \subseteq F\}\).

Proof

First, if \(x \in {\underline {apr} _{{{({R_1} \cap {R_2})}_{{\alpha ^+}}}}}(F),\) then by definition \({[x]_{{{({R_1} \cap {R_2})}_{{\alpha ^+}}}}} \subseteq F\).

Next, suppose \(x \in {\underline {apr} _{{{({R_1} \cap {R_2})}_{{\alpha ^+}}}}}(F)\). Clearly, \({\underline {apr} _{{{({R_1} \cap {R_2})}_{{\alpha ^+}}}}}(F) \subseteq F\), then \(x \in F\). According to Theorem 2, one can obtain \({\underline {apr} _{{R_{1{\alpha ^+}}}}}(F) \subseteq {\underline {apr} _{{{({R_1} \cap {R_2})}_{{\alpha ^+}}}}}(F)\) and \({\underline {apr} _{{R_{2{\alpha ^+}}}}}(F) \subseteq {\underline {apr} _{{{({R_1} \cap {R_2})}_{{\alpha ^+}}}}}(F)\). If \(x \notin {\underline {apr} _{{R_{1{\alpha ^+}}}}}(F) \cup {\underline {apr} _{{R_{2{\alpha ^+}}}}}(F),\) then, because \(x \in F\), one has \(x \in (F - {\underline {apr} _{{R_{1{\alpha ^+}}}}}(F)) \cap (F - {\underline {apr} _{{R_{2{\alpha ^+}}}}}(F))\). According to the definition of the lower boundary set, one can obtain \({\underline {BN} _{{R_{1\alpha }}}}(F)=F - {\underline {apr} _{{R_{1{\alpha ^+}}}}}(F)\) and \({\underline {BN} _{{R_{2\alpha }}}}(F)=F - {\underline {apr} _{{R_{2{\alpha ^+}}}}}(F).\) Therefore, one obtains \(x \in {\underline {BN} _{{R_{1\alpha }}}}(F) \cap {\underline {BN} _{{R_{2\alpha }}}}(F)\); combined with \({[x]_{{{({R_1} \cap {R_2})}_{{\alpha ^+}}}}} \subseteq F\), this gives \(x \in {Y_\alpha }'\). Then one obtains \({\underline {apr} _{{{({R_1} \cap {R_2})}_{{\alpha ^+}}}}}(F) \subseteq {\underline {apr} _{{R_{1{\alpha ^+}}}}}(F) \cup {\underline {apr} _{{R_{2{\alpha ^+}}}}}(F) \cup {Y_\alpha }{^\prime}\) .

On the other hand, notice that \({\underline {apr} _{{{({R_1} \cap {R_2})}_{{\alpha ^+}}}}}(F) \supseteq {\underline {apr} _{{R_{1{\alpha ^+}}}}}(F) \cup {\underline {apr} _{{R_{2{\alpha ^+}}}}}(F) \cup {Y_\alpha }{^\prime},\) so one obtains \({\underline {apr} _{{{({R_1} \cap {R_2})}_{{\alpha ^+}}}}}(F)={\underline {apr} _{{R_{1{\alpha ^+}}}}}(F) \cup {\underline {apr} _{{R_{2{\alpha ^+}}}}}(F) \cup {Y_\alpha }{^\prime}\) .

According to Theorem 1, one obtains \({\mu _{{{\underline {apr} }_{{R_1} \cap {R_2}}}(F)}}(x)=\sup \{ 1 - \alpha |x \in {({\underline {apr} _{{R_1} \cap {R_2}}}(F))_{1 - \alpha }}\} =\sup \{ 1 - \alpha |x \in {\underline {apr} _{{{({R_1} \cap {R_2})}_{{\alpha ^+}}}}}(F)\} .\)
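Proposition 1′ can be read as a recipe: enumerate the finitely many membership values of \({R_1} \cap {R_2}\) as candidate levels, form the strong cut \({R_{{\alpha ^+}}}=\{ (x,y)|R(x,y)>\alpha \}\), and recover the membership as the supremum of \(1 - \alpha\) over the cuts whose classical lower approximation contains \(x\). A minimal sketch on the data of Example 1 (the strong-cut convention above is assumed):

```python
# Membership of the fuzzy lower approximation recovered from strong
# alpha-cuts (the reconstruction behind Proposition 1').

R12 = [[1.0, 0.4, 0.4, 0.5, 0.5],   # R1 ∩ R2 from Example 1
       [0.4, 1.0, 0.4, 0.4, 0.4],
       [0.4, 0.4, 1.0, 0.4, 0.4],
       [0.5, 0.4, 0.4, 1.0, 0.5],
       [0.5, 0.4, 0.4, 0.5, 1.0]]
F = {0, 1, 3}
n = len(R12)
levels = sorted({v for row in R12 for v in row})  # candidate alphas

def strong_cut_class(x, a):
    """Class of x under the strong cut R_{a+} = {(x, y): R(x, y) > a}."""
    return {y for y in range(n) if R12[x][y] > a}

# mu(x) = sup{1 - a : the class of x under R_{a+} is contained in F}
mu = []
for x in range(n):
    vals = [1 - a for a in levels if strong_cut_class(x, a) <= F]
    mu.append(round(max(vals), 4) if vals else 0.0)
print(mu)  # [0.5, 0.6, 0.0, 0.5, 0.0]
```

The result agrees with the lower approximation obtained in Example 1, including \({\mu}({x_1})=0.5\), which no single \(\vee\) of the old memberships could produce.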

Proposition 2′

Let \(U\) be a finite non-empty universe, \(F\) be a crisp set on \(U\), and \({R_1},{R_2}\) be fuzzy similarity relations defined on \(U\). The upper approximations of \(F\) under \({R_1}\) and \({R_2}\) are known. When \({R_1}\) is combined with \({R_2}\), a new fuzzy similarity relation \({R_1} \cap {R_2}\) is obtained. Then under the fuzzy similarity relation \({R_1} \cap {R_2}\), the upper approximation of \(F\) can be updated as:
$$\mu _{{\overline{{apr}} _{{R_{1} \cap R_{2} }} (F)}} (x){\text{ }} = \sup \{ \alpha |x \in (\overline{{apr}} _{{R_{1} \cap R_{2} }} (F))_{\alpha } \} = \sup \{ \alpha |x \in \overline{{apr}} _{{(R_{1} \cap R_{2} )_{\alpha } }} (F)\} ,$$
where, \({\overline {apr} _{{{({R_1} \cap {R_2})}_\alpha }}}(F)={\overline {apr} _{{R_{1\alpha }}}}(F) \cap {\overline {apr} _{{R_{2\alpha }}}}(F) - {Z_\alpha }^\prime ={({\overline {apr} _{{R_1}}}(F))_\alpha } \cap {({\overline {apr} _{{R_2}}}(F))_\alpha } - {Z_\alpha }^\prime ,\) \({Z_\alpha }^\prime =\{ x \in {\overline {BN} _{{R_{1\alpha }}}}(F) \cap {\overline {BN} _{{R_{2\alpha }}}}(F)|{[x]_{{{({R_1} \cap {R_2})}_\alpha }}} \subseteq {\overline {BN} _{{R_{1\alpha }}}}(F) \cap {\overline {BN} _{{R_{2\alpha }}}}(F)\}\).

Proof

We first prove \({\overline {apr} _{{{({R_1} \cap {R_2})}_\alpha }}}(F) \subseteq {\overline {apr} _{{R_{1\alpha }}}}(F) \cap {\overline {apr} _{{R_{2\alpha }}}}(F) - {Z_\alpha }^\prime\). Let \(x \in {\overline {apr} _{{{({R_1} \cap {R_2})}_\alpha }}}(F)\) and \(x \notin F;\) then one obtains \(x \in {\overline {BN} _{{{({R_1} \cap {R_2})}_\alpha }}}(F)\). Since \({\overline {BN} _{{R_{1\alpha }}}}(F)\) \(\supseteq {\overline {BN} _{{{({R_1} \cap {R_2})}_\alpha }}}(F)\) and \({\overline {BN} _{{R_{2\alpha }}}}(F) \supseteq {\overline {BN} _{{{({R_1} \cap {R_2})}_\alpha }}}(F),\) one obtains \({\overline {BN} _{{{({R_1} \cap {R_2})}_\alpha }}}(F) \subseteq {\overline {BN} _{{R_{1\alpha }}}}(F) \cap {\overline {BN} _{{R_{2\alpha }}}}(F)\). Therefore, for any \(x \in {\overline {BN} _{{{({R_1} \cap {R_2})}_\alpha }}}(F)\), one can obtain \(x \in {\overline {BN} _{{R_{1\alpha }}}}(F) \cap {\overline {BN} _{{R_{2\alpha }}}}(F)\).

Then, notice that \(x \in {\overline {apr} _{{{({R_1} \cap {R_2})}_\alpha }}}(F)\), so one obtains \({[x]_{{{({R_1} \cap {R_2})}_\alpha }}} \cap F \ne \varnothing\). Because \(F \cap ({\overline {BN} _{{R_{1\alpha }}}}(F) \cap {\overline {BN} _{{R_{2\alpha }}}}(F))=\varnothing\), one has \({[x]_{{{({R_1} \cap {R_2})}_\alpha }}} \not\subset ({\overline {BN} _{{R_{1\alpha }}}}(F) \cap {\overline {BN} _{{R_{2\alpha }}}}(F))\), namely, \(x \notin {Z_\alpha }^\prime\). So \(x \in {\overline {BN} _{{R_{1\alpha }}}}(F) \cap {\overline {BN} _{{R_{2\alpha }}}}(F) - {Z_\alpha }^\prime\) is obtained. Therefore, \({\overline {apr} _{{{({R_1} \cap {R_2})}_\alpha }}}(F) \subseteq ({\overline {BN} _{{R_{1\alpha }}}}(F) \cap {\overline {BN} _{{R_{2\alpha }}}}(F) - {Z_\alpha }^\prime ) \cup F={\overline {apr} _{{R_{1\alpha }}}}(F) \cap {\overline {apr} _{{R_{2\alpha }}}}(F) - {Z_\alpha }^\prime\).

On the other hand, let \(x \in {\overline {apr} _{{R_{1\alpha }}}}(F) \cap {\overline {apr} _{{R_{2\alpha }}}}(F) - {Z_\alpha }^\prime\) and \(x \notin F\). If \({[x]_{{{({R_1} \cap {R_2})}_\alpha }}} \cap F=\varnothing,\) then one obtains \({[x]_{{{({R_1} \cap {R_2})}_\alpha }}} \subseteq {\overline {BN} _{{R_{1\alpha }}}}(F) \cap {\overline {BN} _{{R_{2\alpha }}}}(F)\), namely, \(x \in {Z_\alpha }^\prime\), which contradicts the assumption that \(x \in {\overline {apr} _{{R_{1\alpha }}}}(F) \cap {\overline {apr} _{{R_{2\alpha }}}}(F) - {Z_\alpha }^\prime\). Therefore one obtains \({[x]_{{{({R_1} \cap {R_2})}_\alpha }}} \cap F \ne \varnothing\), namely, \(x \in {\overline {apr} _{{{({R_1} \cap {R_2})}_\alpha }}}(F)\). Then one can obtain \({\overline {apr} _{{{({R_1} \cap {R_2})}_\alpha }}}(F) \supseteq {\overline {apr} _{{R_{1\alpha }}}}(F) \cap {\overline {apr} _{{R_{2\alpha }}}}(F) - {Z_\alpha }^\prime\) .

Consequently, one obtains \({\overline {apr} _{{{({R_1} \cap {R_2})}_\alpha }}}(F)={\overline {apr} _{{R_{1\alpha }}}}(F) \cap {\overline {apr} _{{R_{2\alpha }}}}(F) - {Z_\alpha }^\prime\). According to Theorem 1, one can obtain \({\mu _{{{\overline {apr} }_{{R_1} \cap {R_2}}}(F)}}(x)=\sup \{ \alpha |x \in {({\overline {apr} _{{R_1} \cap {R_2}}}(F))_\alpha }\} =\sup \{ \alpha |\) \(x \in {\overline {apr} _{{{({R_1} \cap {R_2})}_\alpha }}}(F)\}\).

Example 2

The universe \(U\), the crisp set \(F\), and the fuzzy similarity relations \({R_1},{R_2}\) in Example 1 are reused. According to Propositions 1′ and 2′, we incrementally update the lower and upper approximations of \(F\) under the fuzzy similarity relation \({R_1} \cap {R_2}\).

1. Computing the lower approximations, one obtains:
$${\underline {apr} _{{{({R_1} \cap {R_2})}_{{{0.8}^+}}}}}(F) ={\underline {apr} _{{R_{{1_{0.8}}^+}}}}(F) \cup {\underline {apr} _{{R_{{2_{0.8}}^+}}}}(F) \cup {Y_{0.8}}^\prime =\{ {x_1},{x_2},{x_4}\} \cup \{ {x_1},{x_4}\} \cup \varnothing =\{ {x_1},{x_2},{x_4}\} ;$$
$${\underline {apr} _{{{({R_1} \cap {R_2})}_{{{0.6}^+}}}}}(F) ={\underline {apr} _{{R_{{1_{0.6}}^+}}}}(F) \cup {\underline {apr} _{{R_{{2_{0.6}}^+}}}}(F) \cup {Y_{0.6}}^\prime =\{ {x_2},{x_4}\} \cup \{ {x_4}\} \cup \{ {x_1}|{[{x_1}]_{{{({R_1} \cap {R_2})}_{{{0.6}^+}}}}} \subseteq F\} =\{ {x_1},{x_2},{x_4}\} ;$$
$${\underline {apr} _{{{({R_1} \cap {R_2})}_{{{0.5}^+}}}}}(F) ={\underline {apr} _{{R_{{1_{0.5}}^+}}}}(F) \cup {\underline {apr} _{{R_{{2_{0.5}}^+}}}}(F) \cup {Y_{0.5}}^\prime =\{ {x_2}\} \cup \{ {x_4}\} \cup \{ {x_1}|{[{x_1}]_{{{({R_1} \cap {R_2})}_{{{0.5}^+}}}}} \subseteq F\} =\{ {x_1},{x_2},{x_4}\} ;$$
$${\underline {apr} _{{{({R_1} \cap {R_2})}_{{{0.4}^+}}}}}(F) ={\underline {apr} _{{R_{{1_{0.4}}^+}}}}(F) \cup {\underline {apr} _{{R_{{2_{0.4}}^+}}}}(F) \cup {Y_{0.4}}^\prime =\{ {x_2}\} \cup \varnothing \cup \varnothing =\{ {x_2}\} .$$

According to Proposition 1′, one can obtain: \({\mu _{{{\underline {apr} }_{{R_1} \cap {R_2}}}(F)}}({x_1})=\sup \{ 1 - 0.8,1 - 0.6,1 - 0.5\} =0.5;\) \({\mu _{{{\underline {apr} }_{{R_1} \cap {R_2}}}(F)}}({x_2})=0.6\); \({\mu _{{{\underline {apr} }_{{R_1} \cap {R_2}}}(F)}}({x_3})=0\); \({\mu _{{{\underline {apr} }_{{R_1} \cap {R_2}}}(F)}}({x_4})=0.5\), \({\mu _{{{\underline {apr} }_{{R_1} \cap {R_2}}}(F)}}({x_5})=0\).

2. Computing the upper approximations, one obtains:
$${\overline {apr} _{{{({R_1} \cap {R_2})}_1}}}(F) ={\overline {apr} _{{R_{{1_1}}}}}(F) \cap {\overline {apr} _{{R_{{2_1}}}}}(F) - {Z_1}^\prime =\{ {x_1},{x_2},{x_4}\} \cap \{ {x_1},{x_2},{x_4}\} - \varnothing =\{ {x_1},{x_2},{x_4}\} ;$$
$${\overline {apr} _{{{({R_1} \cap {R_2})}_{0.8}}}}(F) ={\overline {apr} _{{R_{{1_{0.8}}}}}}(F) \cap {\overline {apr} _{{R_{{2_{0.8}}}}}}(F) - {Z_{0.8}}^\prime =\{ {x_1},{x_2},{x_3},{x_4}\} \cap \{ {x_1},{x_2},{x_4},{x_5}\} - \varnothing =\{ {x_1},{x_2},{x_4}\} ;$$
$${\overline {apr} _{{{({R_1} \cap {R_2})}_{0.6}}}}(F)={\overline {apr} _{{R_{{1_{0.6}}}}}}(F) \cap {\overline {apr} _{{R_{{2_{0.6}}}}}}(F) - {Z_{0.6}}^\prime =\{ {x_1},{x_2},{x_3},{x_4},{x_5}\} \cap \{ {x_1},{x_2},{x_4},{x_5}\} - \{ {x_5}\} =\{ {x_1},{x_2},{x_4}\} ;$$
$${\overline {apr} _{{{({R_1} \cap {R_2})}_{0.5}}}}(F) ={\overline {apr} _{{R_{{1_{0.5}}}}}}(F) \cap {\overline {apr} _{{R_{{2_{0.5}}}}}}(F) - {Z_{0.5}}^\prime =\{ {x_1},{x_2},{x_3},{x_4},{x_5}\} \cap \{ {x_1},{x_2},{x_4},{x_5}\} - \varnothing =\{ {x_1},{x_2},{x_4},{x_5}\} ;$$
$${\overline {apr} _{{{({R_1} \cap {R_2})}_{0.4}}}}(F) ={\overline {apr} _{{R_{{1_{0.4}}}}}}(F) \cap {\overline {apr} _{{R_{{2_{0.4}}}}}}(F) - {Z_{0.4}}^\prime =\{ {x_1},{x_2},{x_3},{x_4},{x_5}\} \cap \{ {x_1},{x_2},{x_3},{x_4},{x_5}\} - \varnothing =\{ {x_1},{x_2},{x_3},{x_4},{x_5}\} .$$
According to Proposition 2′, one can obtain:
$$\mu _{{\overline{{apr}} _{{R_{1} \cap R_{2} }} (F)}} (x_{1} ) = 1;\,\,\,\mu _{{\overline{{apr}} _{{R_{1} \cap R_{2} }} (F)}} (x_{2} ) = 1;\,\,\mu _{{\overline{{apr}} _{{R_{1} \cap R_{2} }} (F)}} (x_{3} ) = 0.4;\,\,\mu _{{\overline{{apr}} _{{R_{1} \cap R_{2} }} (F)}} (x_{4} ) = 1;\,\,\mu _{{\overline{{apr}} _{{R_{1} \cap R_{2} }} (F)}} (x_{5} ) = 0.5.$$

Clearly, the approximations computed by the incremental method are the same as those directly computed by Definition 1.
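Proposition 2′ admits the same kind of sketch for the upper approximation: \(x\) lies in the classical upper approximation at level \(\alpha\) iff \({[x]_\alpha }\) meets \(F\), i.e. iff \(R(x,y) \ge \alpha\) for some \(y \in F\), and the membership is the largest such \(\alpha\) (the weak-cut convention \({R_\alpha }=\{ (x,y)|R(x,y) \ge \alpha \}\) is assumed):

```python
# Membership of the fuzzy upper approximation recovered from
# alpha-cuts (the reconstruction behind Proposition 2').

R12 = [[1.0, 0.4, 0.4, 0.5, 0.5],   # R1 ∩ R2 from Example 1
       [0.4, 1.0, 0.4, 0.4, 0.4],
       [0.4, 0.4, 1.0, 0.4, 0.4],
       [0.5, 0.4, 0.4, 1.0, 0.5],
       [0.5, 0.4, 0.4, 0.5, 1.0]]
F = {0, 1, 3}
n = len(R12)
levels = sorted({v for row in R12 for v in row})  # candidate alphas

# mu(x) = sup{a : [x]_a meets F}, i.e. the largest level a such that
# R12[x][y] >= a for some y in F.
mu = [max(a for a in levels if any(R12[x][y] >= a for y in F))
      for x in range(n)]
print(mu)  # [1.0, 1.0, 0.4, 1.0, 0.5]
```

The result coincides with the upper approximation computed in Example 2.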

5 Two incremental algorithms for updating approximations

In this section, two incremental algorithms corresponding to Sects. 3 and 4, respectively, are designed as follows.

Algorithm UAFRB is a summary of Propositions 1 and 2. The key concept is the boundary set that runs through the whole algorithm. For the elements in the boundary set, compute their approximations directly. For the elements not in the boundary set, update their approximations by using the original information.

Algorithm UAFRC is a summary of Propositions 1′ and 2′. The common ground between UAFRB and UAFRC is that both involve the boundary sets. Therefore, the number of objects in the boundary sets has an important effect on the time complexities of the algorithms: the complexity is greatly reduced when the number of objects in the boundary sets is far less than that in the corresponding approximations. On the other hand, the complexities partially depend on the number of distinct values of \(\alpha\); when this number is small, the benefit of the algorithms is more pronounced. In general, the number of objects in the boundary sets is less than the cardinality of \(U\), and the number of distinct values of \(\alpha\) is usually much smaller than the cardinality of \(U\). Therefore, the efficiency of the related algorithms can be improved.

6 Experimental evaluation

In this section, we employ the method of directly computing fuzzy rough approximations (denoted as DCFR) and the proposed incremental methods of updating fuzzy rough approximations (the one based on the boundary sets is denoted as UAFRB, the other based on the cut sets is denoted as UAFRC) for a performance comparison.

We download six data sets from the UCI Machine Learning Repository [48] to test the proposed methods. The data sets are outlined in Table 2. All six have continuous condition attributes and categorical class attributes, and their sample sizes range from 197 to 5473. The algorithms are coded in Matlab 7.1.

Table 2

Data description

| No. | Data set | Abbreviation | Samples | Attributes | Classes |
|---|---|---|---|---|---|
| 1 | Parkinsons | Parkinsons | 197 | 23 | 2 |
| 2 | Water treatment plant | Water | 521 | 38 | 13 |
| 3 | Image segmentation | Image | 2310 | 19 | 7 |
| 4 | Yeast | Yeast | 1484 | 8 | 10 |
| 5 | Wine quality | Wine | 4898 | 12 | 10 |
| 6 | Page blocks | Page | 5473 | 10 | 5 |

6.1 Fuzzification of the dataset

Considering that DCFR, UAFRB and UAFRC mainly deal with fuzzy data, fuzzification of the data sets is necessary. A simple algorithm [49] is used to generate triangular membership functions defined as follows:
$${T_1}(x)=\left\{ {\begin{array}{*{20}{l}} {1,x \le {m_1}} \\ {({m_2} - x)/({m_2} - {m_1}),{m_1} < x < {m_2}} \\ {0,{m_2} \le x} \end{array}} \right.,$$
$${T_k}(x)=\left\{ {\begin{array}{*{20}{l}} {1,x \ge {m_k}} \\ {(x - {m_{k - 1}})/({m_k} - {m_{k - 1}}),{m_{k - 1}} < x< {m_k}} \\ {0,x \le {m_{k - 1}}} \end{array}} \right.,$$
$${T_i}(x)=\left\{ {\begin{array}{*{20}{l}} {0,x \ge {m_{i+1}}} \\ {({m_{i+1}} - x)/({m_{i+1}} - {m_i}),{m_i} \leqslant x \leqslant {m_{i+1}}} \\ {(x - {m_{i - 1}})/({m_i} - {m_{i - 1}}),{m_{i - 1}}<x<{m_i}} \\ {0,x \leqslant {m_{i - 1}}} \end{array}} \right..$$
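A sketch of these membership functions (the helper name `membership` is ours; the three branches implement \(T_1\), \(T_i\) and \(T_k\) exactly as defined above, with 0-based term indices):

```python
# Triangular membership functions over sorted centers ms = [m_1,...,m_k]
# (a sketch of T_1, T_i, T_k above; term index i is 0-based).

def membership(ms, i, x):
    k = len(ms)
    if i == 0:                               # left shoulder T_1
        if x <= ms[0]:
            return 1.0
        return (ms[1] - x) / (ms[1] - ms[0]) if x < ms[1] else 0.0
    if i == k - 1:                           # right shoulder T_k
        if x >= ms[-1]:
            return 1.0
        return (x - ms[-2]) / (ms[-1] - ms[-2]) if x > ms[-2] else 0.0
    if ms[i - 1] < x < ms[i]:                # interior T_i, rising edge
        return (x - ms[i - 1]) / (ms[i] - ms[i - 1])
    if ms[i] <= x < ms[i + 1]:               # falling edge
        return (ms[i + 1] - x) / (ms[i + 1] - ms[i])
    return 0.0

ms = [0.0, 0.5, 1.0]   # three linguistic terms, as used in Sect. 6.1
print(membership(ms, 0, 0.25), membership(ms, 1, 0.25))  # 0.5 0.5
print(membership(ms, 1, 0.5))                            # 1.0
```

Note that adjacent terms indeed cross at the membership value 0.5 halfway between two centers, which is the design constraint stated next.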
The slopes of the triangular membership functions are selected such that adjacent membership functions cross at the membership value 0.5. In this case, the only parameter to be determined is \(M=\{ {m_i},i=1,2, \ldots ,k\}\). The centers \({m_i}\) can be calculated using the feature-map algorithm of Kohonen [50]. At time 0, the centers \({m_i}[0]\) are initially set to be evenly distributed over the range of the universe \(U\), such as
$${m_i}[0]=\min \{ x,x \in U\} +(\max \{ x,x \in U\} - \min \{ x,x \in U\} ) \times (i - 1)/(k - 1),i=1,2, \ldots ,k$$

The total distance between \(U\) and \(M\) is defined as \(D(U,M)=\sum\nolimits_{x \in U} {\mathop {\min }\limits_i ||x - {m_i}||}\). The centers are then adjusted iteratively in order to reduce \(D(U,M)\).
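The initialization and the objective can be sketched as follows (only the even spacing of the centers and \(D(U,M)\) are shown; the iterative Kohonen adjustment itself is omitted):

```python
# Even initialization of the centers and the total distance D(U, M)
# (a sketch; the feature-map updates that reduce D are not shown).

def init_centers(xs, k):
    """m_i[0] = min + (max - min) * (i - 1)/(k - 1), i = 1..k."""
    lo, hi = min(xs), max(xs)
    return [lo + (hi - lo) * i / (k - 1) for i in range(k)]

def total_distance(xs, ms):
    """D(U, M) = sum over x in U of min_i |x - m_i|."""
    return sum(min(abs(x - m) for m in ms) for x in xs)

xs = [0, 2, 4, 5, 10]          # toy attribute values
ms = init_centers(xs, 3)
print(ms)                      # [0.0, 5.0, 10.0]
print(total_distance(xs, ms))  # 3.0
```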

After the fuzzification of data, each attribute has three linguistic terms. Six fuzzy information systems can be obtained.

6.2 Performance evaluation of updating fuzzy rough approximations incrementally

To compare DCFR with UAFRB and UAFRC, some attributes are added to the original attribute set, then the computation of approximations is performed using the three algorithms.

In data set “Parkinsons”, we randomly select two attribute sets \({P_1}\) and \({Q_1}\) (\({P_1} \cap {Q_1}=\varnothing\)) and an equivalence class \({A_1}\) (namely, healthy). In data set “Water”, we randomly select two attribute sets \({P_2}\) and \({Q_2}\) (\({P_2} \cap {Q_2}=\varnothing\)) and an equivalence class \({B_1}\) (namely, Normal situation). In data set “Image”, we randomly select two attribute sets \({P_3}\) and \({Q_3}\) (\({P_3} \cap {Q_3}=\varnothing\)) and an equivalence class \({C_3}\) (namely, foliage). In data set “Yeast”, we randomly select two attribute sets \({P_4}\) and \({Q_4}\) (\({P_4} \cap {Q_4}=\varnothing\)) and an equivalence class \({D_2}\) (namely, NUC). In data set “Wine”, we randomly select two attribute sets \({P_5}\) and \({Q_5}\) (\({P_5} \cap {Q_5}=\varnothing\)) and an equivalence class \({E_6}\) (namely, 6). In data set “Page”, we randomly select two attribute sets \({P_6}\) and \({Q_6}\) (\({P_6} \cap {Q_6}=\varnothing\)) and an equivalence class \({F_2}\) (namely, horiz. line).

where \({P_1}\)  = {MDVP:Fo(Hz), MDVP:Fhi(Hz), MDVP:Flo(Hz), MDVP:Jitter(%), MDVP:Jitter(Abs), MDVP:RAP, MDVP:PPQ, Jitter:DDP, MDVP:Shimmer, MDVP:Shimmer(dB)},

\({Q_1}\) = {Shimmer:APQ3, Shimmer:APQ5, MDVP:APQ, Shimmer:DDA, NHR, HNR},

\({P_2}\) = {Q-E, ZN-E, PH-E, DBO-E, DQO-E, SS-E, SSV-E, SED-E, COND-E, PH-P,DBO-P, SS-P, SSV-P, SED-P},

\({Q_2}\) = {COND-P, PH-D, DBO-D, DQO-D, SS-D, SSV-D, SED-D, COND-D, PH-S, DBO-S},

\({P_3}\) = {region-centroid-col, region-centroid-row, region-pixel-count, short-line-density-5, short-line-density-2, vedge-mean, vegde-sd, hedge-mean, hedge-sd},

\({Q_3}\) = {intensity-mean, rawred-mean, rawblue-mean, rawgreen-mean, exred-mean, exblue-mean, exgreen-mean},

\({P_4}\) = {mcg, gvh, alm, mit},

\({Q_4}\) = {erl, pox, vac},

\({P_5}\) = {fixed acidity, volatile acidity, citric acid, residual sugar, chlorides, free sulfur dioxide, total sulfur dioxide},

\({Q_5}\) = {density, pH, sulphates, alcohol},

\({P_6}\) = {height, length, area, eccen, p_black, p_and},

\({Q_6}\) = {mean_tr, blackpix, blackand}.

By using the method in Sect. 2.3, one can obtain two fuzzy similarity relations \({R_1}\) and \({R_2}\) corresponding to \({P_1}\) and \({Q_1}\), respectively. Under the fuzzy similarity relations \({R_1}\) and \({R_2},\) the lower approximations of \({A_1}\) can be obtained. When \({Q_1}\) is added to \({P_1}\), the lower approximation of \({A_1}\) is computed using the three algorithms. Analogously, when \({Q_i}\) \((i=2, \ldots ,6)\) is added to the attribute set \({P_i}\) \((i=2, \ldots ,6),\) the lower approximations of \({B_1}\), \({C_3}\), \({D_2}\), \({E_6}\) and \({F_2}\) are computed. The times for computing the lower and upper approximations with DCFR, UAFRB and UAFRC are listed in Tables 3 and 4, which show that UAFRB and UAFRC outperform DCFR.

Table 3

Time comparison of DCFR with UAFRB and UAFRC when computing the lower approximations (seconds)

|       | \({P_1} \cup {Q_1}\) | \({P_2} \cup {Q_2}\) | \({P_3} \cup {Q_3}\) | \({P_4} \cup {Q_4}\) | \({P_5} \cup {Q_5}\) | \({P_6} \cup {Q_6}\) |
|-------|---------|---------|---------|---------|---------|---------|
| DCFR  | 134.43  | 732.58  | 2522.94 | 1526.34 | 8180.41 | 8540.94 |
| UAFRB | 70.61   | 390.23  | 1621.21 | 883.55  | 5765.34 | 5563.78 |
| UAFRC | 78.84   | 406.32  | 1703.32 | 905.89  | 5943.28 | 5879.62 |

Table 4

Time comparison of DCFR with UAFRB and UAFRC when computing the upper approximations (seconds)

|       | \({P_1} \cup {Q_1}\) | \({P_2} \cup {Q_2}\) | \({P_3} \cup {Q_3}\) | \({P_4} \cup {Q_4}\) | \({P_5} \cup {Q_5}\) | \({P_6} \cup {Q_6}\) |
|-------|---------|---------|---------|---------|---------|---------|
| DCFR  | 146.88  | 728.60  | 2698.95 | 1598.90 | 8334.77 | 8470.98 |
| UAFRB | 73.14   | 387.29  | 1679.26 | 912.65  | 5698.48 | 5497.36 |
| UAFRC | 80.19   | 398.30  | 1767.31 | 932.76  | 5887.57 | 5722.29 |

7 Conclusions

This paper presents two incremental methods for updating fuzzy rough approximations. One redefines the lower and upper boundary sets of a fuzzy rough set by using cut sets and then incrementally updates the lower and upper approximations based on these boundary sets; the other incrementally updates the approximations using cut sets directly. The proposed methods reveal the structures of the lower and upper approximations more clearly. Furthermore, for data sets whose boundary sets contain only a few objects, the incremental methods for computing the approximations are especially effective.

Several remarks are given as follows.

  1.

    The variation of information systems includes not only the attribute set but also the object set and attribute values. We first consider the case when the object set changes. Suppose \(U = \{ {x_1},{x_2}, \ldots ,{x_n}\}\) is the universe and \(F\) is a crisp set on \(U\). When some new objects \({x_{n+1}},{x_{n+2}}, \ldots ,{x_{n+m}}\) are added to \(U\), the new universe is denoted as \(U{^\prime} =\{ {x_1},{x_2}, \ldots ,{x_n},{x_{n+1}}, \ldots ,{x_{n+m}}\}\). In order to incrementally update the lower and upper approximations, the fuzzy similarity relation \(R\) needs to be updated; the new fuzzy similarity relation, denoted \({R^\prime }\), is an \((n+m) \times (n+m)\) matrix. The upper and lower approximations can be updated as:

    (1)
      When \(F \subseteq U,\)
      $${\mu _{{{\overline {apr} }_{R'}}(F)}}(x)=\left\{ {\begin{array}{*{20}{l}} {\mathop {\sup }\limits_{y \in F} {\mu _{{{[x]}_R}}}(y),x \in U} \\ {\mathop {\sup }\limits_{y \in F} {\mu _{{{[x]}_{R'}}}}(y),x \in {U^\prime } - U} \end{array}} \right.,$$
      $${\mu _{{{\underline {apr} }_{R'}}(F)}}(x)=\left\{ {\begin{array}{*{20}{l}} {\mathop {\inf }\limits_{y \notin F,y \in U} \{ 1 - {\mu _{{{[x]}_R}}}(y)\} \wedge \mathop {\inf }\limits_{y \notin F,y \in U' - U} \{ 1 - {\mu _{{{[x]}_{R'}}}}(y)\} ,x \in U} \\ {\mathop {\inf }\limits_{y \notin F,y \in U'} \{ 1 - {\mu _{{{[x]}_{R'}}}}(y)\} ,x \in {U^\prime } - U} \end{array}} \right..$$
       
    (2)
      When \(F \not\subset U,\)
      $${\mu _{{{\overline {apr} }_{R'}}(F)}}(x)=\left\{ {\begin{array}{*{20}{l}} {\mathop {\sup }\limits_{y \in F \cap U} {\mu _{{{[x]}_R}}}(y) \vee \mathop {\sup }\limits_{y \in F \cap (U' - U)} {\mu _{{{[x]}_{R'}}}}(y),x \in U} \\ {\mathop {\sup }\limits_{y \in F \cap U'} {\mu _{{{[x]}_{R'}}}}(y),x \in {U^\prime } - U} \end{array}} \right.,$$
      $${\mu _{{{\underline {apr} }_{R'}}(F)}}(x)=\left\{ {\begin{array}{*{20}{l}} {\mathop {\inf }\limits_{y \notin F,y \in U} \{ 1 - {\mu _{{{[x]}_R}}}(y)\} \wedge \mathop {\inf }\limits_{y \notin F,y \in U' - U} \{ 1 - {\mu _{{{[x]}_{R'}}}}(y)\} ,x \in U} \\ {\mathop {\inf }\limits_{y \notin F,y \in U'} \{ 1 - {\mu _{{{[x]}_{R'}}}}(y)\} ,x \in {U^\prime } - U} \end{array}} \right..$$
       
     

When attribute values change, the universe and the attribute set are invariant. We can obtain a new fuzzy similarity relation \({R^\prime }\) according to the method in Ref. [49]. The numbers of rows and columns of \({R^\prime }\) are the same as those of \(R\), and the upper and lower approximations can be updated analogously. Further details will be studied in future work.
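The object-insertion formulas of case (1) above (\(F \subseteq U\)) can be sketched as follows. The helper name and the assumption that \(R'\) is supplied as a full \((n+m) \times (n+m)\) matrix whose top-left \(n \times n\) block agrees with \(R\) are illustrative, not part of the paper's algorithms.

```python
import numpy as np

def update_on_new_objects(R_new, F_old_mask, lower_old, upper_old, n_old):
    """Incrementally update approximations when m new objects join U
    (case F subset of U). R_new is the (n+m)x(n+m) updated fuzzy
    similarity matrix; lower_old/upper_old are the memberships already
    computed on the original n_old objects."""
    n_total = R_new.shape[0]
    F = np.zeros(n_total, dtype=bool)
    F[:n_old] = F_old_mask                   # F ⊆ U: no new object lies in F
    notF = ~F

    upper = np.empty(n_total)
    upper[:n_old] = upper_old                # unchanged for x in U
    upper[n_old:] = R_new[n_old:, :][:, F].max(axis=1)

    lower = np.empty(n_total)
    # x in U: old infimum combined with the infimum over the new objects
    new_part = (1.0 - R_new[:n_old, n_old:]).min(axis=1)
    lower[:n_old] = np.minimum(lower_old, new_part)
    # x in U' - U: full infimum over all y outside F
    lower[n_old:] = (1.0 - R_new[n_old:, :][:, notF]).min(axis=1)
    return lower, upper
```

Only the last \(m\) rows and columns of \(R'\) are touched for the old objects, which is where the incremental saving comes from.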

  2.

    The incremental strategy only considers the combination of fuzzy similarity relations, which corresponds to adding attributes; it does not consider deleting attributes, for the following reasons. According to Definition 1, the upper and lower approximations can also be described by the fuzzy equivalence classes generated by the fuzzy similarity relation. When fuzzy attributes are added, one can incrementally update the fuzzy equivalence classes by computing the cartesian product of the original fuzzy equivalence classes and those induced by the newly added attributes. When fuzzy attributes are deleted, however, how to incrementally construct the new fuzzy equivalence classes from the original ones remains an open question. In essence, incremental updating of fuzzy equivalence classes reduces to incremental updating of the fuzzy similarity relation. At present, most methods recompute the fuzzy similarity relation from scratch after attributes are added or deleted; there is almost no way to update it incrementally. Our further work is therefore to study how to incrementally construct the fuzzy similarity relation after deleting fuzzy attributes, and then to incrementally update the upper and lower approximations of the fuzzy rough set. Completing this work would make the incremental methods proposed in this paper easier to apply to data mining, knowledge acquisition and other practical problems based on fuzzy rough set theory.
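The combination step for added attributes can be sketched as intersecting the relations induced by \(P\) and by \(Q\); this sketch assumes min is the chosen t-norm, as is common for fuzzy similarity relations, and the function names are illustrative.

```python
import numpy as np

def combine_relations(R_P, R_Q):
    """Fuzzy similarity relation induced by P ∪ Q, obtained by
    intersecting (elementwise min) the relations induced by P and Q."""
    return np.minimum(R_P, R_Q)

def fuzzy_equivalence_class(R, x):
    """Fuzzy equivalence class [x]_R: the membership of every object
    in the class of x, i.e. the x-th row of R."""
    return R[x]
```

Because the combined class of each object is just the elementwise min of its classes under \(P\) and \(Q\), adding attributes needs no recomputation from raw data; the missing inverse of this operation is what makes attribute deletion hard.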

     
  3.

    An interesting direction for future work is to extend the methods to generalized fuzzy rough sets. This is not a simple parallel extension, because two aspects must be considered: on the one hand, a family of \(\alpha\)-cut sets \({({F_\alpha })_\alpha },\alpha \in [0,1]\), represents a fuzzy set \(F\); on the other hand, a family of \(\beta\)-cut sets \({({R_\beta })_\beta },\beta \in [0,1]\), represents a fuzzy similarity relation \(R\). Each \(\alpha\)-cut set \({F_\alpha }\) is a crisp set, and each \(\beta\)-cut relation \({R_\beta }\) is an equivalence relation. For a fixed pair of numbers \((\alpha ,\beta ) \in [0,1] \times [0,1]\), we obtain a submodel in which a crisp set \({F_\alpha }\) is approximated in a crisp approximation space \((U,{R_\beta })\); the result is a rough set \(({\underline {apr} _{{R_\beta }}}({F_\alpha }),{\overline {apr} _{{R_\beta }}}({F_\alpha }))\). For a fixed \(\beta\), we obtain a submodel in which a fuzzy set \(F\) is approximated in a crisp approximation space \((U,{R_\beta })\); the result is a rough fuzzy set \(({\underline {apr} _{{R_\beta }}}(F),{\overline {apr} _{{R_\beta }}}(F))\). For a fixed \(\alpha\), we obtain a submodel in which a crisp set \({F_\alpha }\) is approximated in a fuzzy approximation space \((U,R)\), namely \({(U,{R_\beta })_\beta },\beta \in [0,1]\); the result is a special fuzzy rough set \(({\underline {apr} _R}({F_\alpha }),{\overline {apr} _R}({F_\alpha }))\). In the generalized fuzzy rough set model, neither \(\alpha\) nor \(\beta\) is fixed. Rough sets and rough fuzzy sets can therefore be viewed as special cases of fuzzy rough sets. Notice that the incremental update methods proposed in this paper are based on the redefined boundary set, whose definition depends on cut sets. Analogously, we may redefine the boundary set corresponding to generalized fuzzy rough sets. One difficulty, however, is how to account for the relationships between the different \(\alpha\)-cut sets of a fuzzy set and between the different \(\beta\)-cut relations of a fuzzy similarity relation; because of this, redefining a boundary set is not easy. In this paper, we have proposed and proved two methods for incrementally updating the fuzzy rough approximations for a special type of fuzzy rough sets. A further work is to study the structure of boundary sets corresponding to generalized fuzzy rough sets, and then to update the approximations incrementally.
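The submodel for a fixed pair \((\alpha ,\beta )\) can be sketched as follows: take the \(\alpha\)-cut of \(F\) and the \(\beta\)-cut of \(R\), then compute classical rough approximations. The sketch assumes \(R\) is a fuzzy similarity (min-transitive) relation, so that each \(\beta\)-cut is an equivalence relation; the function name is illustrative.

```python
import numpy as np

def cut_submodel(mu_F, R, alpha, beta):
    """For fixed (alpha, beta), approximate the crisp alpha-cut F_alpha
    in the crisp approximation space (U, R_beta). mu_F is the membership
    vector of F; R must be min-transitive for R_beta to be an
    equivalence relation (assumed here)."""
    F_a = mu_F >= alpha                      # crisp alpha-cut of F
    R_b = R >= beta                          # crisp beta-cut relation
    n = len(mu_F)
    # classical lower/upper approximations of F_a under R_b
    lower = np.array([F_a[R_b[i]].all() for i in range(n)])
    upper = np.array([F_a[R_b[i]].any() for i in range(n)])
    return lower, upper
```

Sweeping \(\beta\) with \(F\) fixed recovers a rough fuzzy set, and sweeping \(\alpha\) with \(R\) fixed recovers the special fuzzy rough set discussed above.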

     
  4.

    The efficient computation of approximations is very important for improving the performance of related algorithms. A typical example is learning classification rules; take the classification method based on fuzzy rough sets in Ref. [5] as an example. The method consists of two steps: (1) a dimensionality reduction algorithm computes a reduct of the attribute set; (2) an existing fuzzy rule induction algorithm (RIA) generates rules. The reduction algorithm follows the idea of relative reduction in Pawlak's rough sets to keep the dependency degree invariant: it starts from an empty set and adds, one at a time, the attribute that yields the greatest increase of \({\gamma _C}(D)\) (the dependency degree of the set of condition attributes \(C\) with respect to the set of decision attributes \(D\)), until the maximum possible value for the data set is reached. Computing the dependency degree requires computing the lower approximations. Because the reduction algorithm adds one attribute at a time, after each addition we can use the algorithm UAFRB or UAFRC to update the lower approximations, and hence the dependency degree, incrementally; alternatively, we can compute the lower approximations directly (DCFR). The fuzzy rule induction algorithm (RIA) [5] is then used to generate fuzzy rules. The reducts selected by DCFR, UAFRB and UAFRC are the same, but for data sets whose boundary sets contain only a few objects, UAFRB and UAFRC take less time to find the reduct. Therefore, the total time to generate fuzzy rules, i.e., the sum of the attribute reduction time and the rule extraction time, is effectively reduced.
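The greedy reduction loop described above can be sketched as follows. The dependency degree here takes the common fuzzy-rough form (mean positive-region membership, where the positive region of an object is the lower approximation of its own decision class); it may differ in detail from Ref. [5], and all names are illustrative.

```python
import numpy as np

def dependency_degree(R_cond, decision):
    """gamma_C(D): mean membership of each object in the lower
    approximation of its own decision class under R_cond."""
    n = len(decision)
    gamma = 0.0
    for x in range(n):
        other = decision != decision[x]
        gamma += (1.0 - R_cond[x, other]).min() if other.any() else 1.0
    return gamma / n

def greedy_reduct(relations, decision):
    """QuickReduct-style search: start from the empty set, repeatedly
    add the attribute (given by its similarity matrix) that most
    increases gamma, stop when no attribute improves it.
    `relations` maps attribute name -> fuzzy similarity matrix."""
    n = len(decision)
    chosen, R = [], np.ones((n, n))          # empty set: everything similar
    best = dependency_degree(R, decision)
    improved = True
    while improved and len(chosen) < len(relations):
        improved = False
        for a, Ra in relations.items():
            if a in chosen:
                continue
            g = dependency_degree(np.minimum(R, Ra), decision)
            if g > best + 1e-12:
                best, cand, R_cand, improved = g, a, np.minimum(R, Ra), True
        if improved:
            chosen.append(cand)
            R = R_cand
    return chosen, best
```

Each candidate evaluation inside the loop is exactly the lower-approximation computation that UAFRB or UAFRC can perform incrementally after an attribute is added.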

     


Acknowledgements

This work has been supported by the National Natural Science Foundation of China (No. 61071162).

References

  1. Pawlak Z (1982) Rough sets. Int J Comput Inf Sci 11(5):341–356
  2. Dubois D, Prade H (1990) Rough fuzzy sets and fuzzy rough sets. Int J Gen Syst 17(2–3):191–209
  3. Wang X, Hong J (1999) Learning optimization in simplifying fuzzy rules. Fuzzy Sets Syst 106(3):349–356
  4. Sarkar M (2002) Rough–fuzzy functions in classification. Fuzzy Sets Syst 132(3):353–369
  5. Shen Q, Jensen R (2004) Selecting informative features with fuzzy-rough sets and its application for complex systems monitoring. Pattern Recognit 37:1351–1363
  6. Asharaf S, Narasimha Murty M (2003) An adaptive rough fuzzy single pass algorithm for clustering large data sets. Pattern Recognit 36(12):3015–3018
  7. Asharaf S, Narasimha Murty M (2004) A rough fuzzy approach to web usage categorization. Fuzzy Sets Syst 148(1):119–129
  8. Mi J, Zhang W (2004) An axiomatic characterization of a fuzzy generalization of rough sets. Inf Sci 160(1–4):235–249
  9. Wu W, Zhang W (2004) Constructive and axiomatic approaches of fuzzy approximation operators. Inf Sci 159(3–4):233–254
  10. Huynh V-N, Nakamori Y (2005) A roughness measure for fuzzy sets. Inf Sci 173(1–3):255–275
  11. Cheng Y (2011) The incremental method for fast computing the rough fuzzy approximations. Data Knowl Eng 70:84–100
  12. Cheng Y, Miao D, Feng Q (2011) Positive approximation and converse approximation in interval-valued fuzzy rough sets. Inf Sci 181:2086–2110
  13. Cheng Y, Miao D (2011) Rules induction based on granulation in interval-valued fuzzy information system. Expert Syst Appl 38:12249–12261
  14. Cheng Y (2012) A new approach for rule extraction in fuzzy information systems. J Comput Inf Syst 21:8795–8805
  15. Michalski RS (1985) Knowledge repair mechanisms: evolution vs. revolution. In: Proceedings of the 3rd international machine learning workshop, pp 116–119
  16. Bouchachia A, Mittermeir R (2007) Towards incremental fuzzy classifiers. Soft Comput 11(2):193–207
  17. Bang W, Bien Z (1999) New incremental learning algorithm in the framework of rough set theory. Int J Fuzzy Syst 1:25–36
  18. Zheng Z, Wang G (2004) RRIA: a rough set and rule tree based incremental knowledge acquisition algorithm. Fundam Inf 59(2–3):299–313
  19. Wang L, Wu Y, Wang G (2005) An incremental rule acquisition algorithm based on variable precision rough set model. J Chongqing Univ Posts Telecommun Nat Sci 17(6):709–713
  20. Zhang J, Li T, Ruan D, Liu D (2012) Neighborhood rough sets for dynamic data mining. Int J Intell Syst 27:317–342
  21. Li S, Li T, Liu D (2013) Dynamic maintenance of approximations in dominance-based rough set approach under the variation of the object set. Int J Intell Syst 28(8):729–751
  22. Luo C, Li T, Chen H, Liu D (2013) Incremental approaches for updating approximations in set-valued ordered information systems. Knowl-Based Syst 50:218–233
  23. Zhang J, Li T, Chen H (2014) Composite rough sets for dynamic data mining. Inf Sci 257:81–100
  24. Zeng A, Li T, Luo C (2013) An incremental approach for updating approximations of Gaussian kernelized fuzzy rough sets under the variation of the object set. Comput Sci (in Chinese) 40(7):20–27
  25. Wang S, Li T, Luo C, Fujita H (2016) Efficient updating rough approximations with multi-dimensional variation of ordered data. Inf Sci 372:690–708
  26. Luo C, Li T, Chen H, Fujita H, Yi Z (2016) Efficient updating of probabilistic approximations with incremental objects. Knowl-Based Syst 109:71–83
  27. Chen H, Li T, Luo C, Horng S-J, Wang G (2014) A rough set-based method for updating decision rules on attribute values' coarsening and refining. IEEE Trans Knowl Data Eng 26(12):2886–2899
  28. Luo C, Li T, Chen H, Lu L (2015) Fast algorithms for computing rough approximations in set-valued decision systems while updating criteria values. Inf Sci 299:221–242
  29. Zeng A, Li T, Hua J, Chen H, Luo C (2017) Dynamical updating fuzzy rough approximations for hybrid data under the variation of attribute values. Inf Sci 378:363–388
  30. Chan CC (1998) A rough set approach to attribute generalization in data mining. Inf Sci 107(1–4):177–194
  31. Liu S, Sheng Q, Shi Z (2003) A new method for fast computing positive region. J Comput Res Dev (in Chinese) 40(5):637–642
  32. Li T, Ruan D, Geert W (2007) A rough sets based characteristic relation approach for dynamic attribute generalization in data mining. Knowl-Based Syst 20(5):485–494
  33. Zhang J, Li T, Liu D (2010) An approach for incremental updating approximations in variable precision rough sets while attribute generalizing. In: Proceedings of 2010 IEEE international conference on intelligent systems and knowledge engineering, pp 77–81
  34. Li S, Li T, Liu D (2013) Incremental updating approximations in dominance-based rough sets approach under the variation of the attribute set. Knowl-Based Syst 40:17–26
  35. Luo C, Li T, Chen H (2014) Dynamic maintenance of approximations in set-valued ordered decision systems under the attribute generalization. Inf Sci 257:210–228
  36. Liu D, Li T, Zhang J (2015) Incremental updating approximations in probabilistic rough sets under the variation of attributes. Knowl-Based Syst 73:81–96
  37. Zhang Y, Li T, Luo C (2016) Incremental updating of rough approximations in interval-valued information systems under attribute generalization. Inf Sci 373:461–475
  38. Zeng A, Li T, Liu D, Zhang J, Chen H (2015) A fuzzy rough set approach for incremental feature selection on hybrid information systems. Fuzzy Sets Syst 258:39–60
  39. Chen H, Li T, Luo C, Horng S-J, Wang G (2015) A decision-theoretic rough set approach for dynamic data mining. IEEE Trans Fuzzy Syst 23(6):1958–1970
  40. Liu D, Li T, Zhang J (2014) A rough set-based incremental approach for learning knowledge in dynamic incomplete information systems. Int J Approx Reason 55:1764–1786
  41. Zhang J, Wong J-S, Pan Y, Li T (2015) A parallel matrix-based method for computing approximations in incomplete information systems. IEEE Trans Knowl Data Eng 27(2):326–339
  42. Luo C, Li T, Chen H, Lu L (2016) Matrix approach to decision-theoretic rough sets for evolving data. Knowl-Based Syst 99:123–134
  43. Zhang J, Zhu Y, Pan Y, Li T (2016) Efficient parallel Boolean matrix based algorithms for computing composite rough set approximations. Inf Sci 329:287–302
  44. Liu D, Li T, Ruan D (2009) An incremental approach for inducing knowledge from dynamic information systems. Fundam Inf 94:245–260
  45. Liu D, Li T, Ruan D, Zhang J (2011) Incremental learning optimization on knowledge discovery in dynamic business intelligent systems. J Glob Optim 51:325–344
  46. Yao Y (1997) Combination of rough and fuzzy sets based on level sets. In: Rough sets and data mining: analysis for imprecise data. Kluwer Academic, Dordrecht, pp 301–321
  47. Wang X, Tsang ECC, Zhao S, Chen D, Yeung DS (2007) Learning fuzzy rules from fuzzy samples based on rough set technique. Inf Sci 177:4493–4514
  48.
  49. Yuan Y, Shaw MJ (1995) Induction of fuzzy decision trees. Fuzzy Sets Syst 69:125–139
  50. Kohonen T (1988) Self-organization and associative memory. Springer, Berlin

Copyright information

© Springer-Verlag Berlin Heidelberg 2017

Authors and Affiliations

  1. College of Computer Science, Sichuan University, Chengdu, China
  2. Department of Information Engineering, Sichuan College of Architectural Technology, Chengdu, China
