Markov frameworks and stock market decision making

Abstract

In this paper, we present applications of the Markov rough approximation framework (MRAF). The concept of MRAF is defined using rough sets and Markov chains. MRAF is used to obtain the probability distribution function of various reference points in a rough approximation framework. We consider a set to be approximated together with its dynamicity, and the effect of this dynamicity on rough approximations is described with the help of Markov chains. An extension to Pawlak’s decision algorithm is presented and used for predictions in a stock market environment. In addition, the suitability of the algorithm is illustrated in a multi-criteria medical diagnosis problem. Finally, the definition of fuzzy tolerance relation is extended to higher dimensions using reference points, and basic results are established.

Introduction

Pawlak (1982) introduced the notion of rough sets by defining the lower approximation of a set X as the collection of all the elements of the universe whose equivalence classes are contained in X,  and the upper approximation of X as the set of all the elements of the universe whose equivalence classes have a non-empty intersection with X. We often consider the universe to be an algebraic structure and study the corresponding algebraic properties of rough approximations. The concept of rough approximation framework was defined by Ciucci (2008), basically as a collection of rough approximations of the set. A rough approximation framework is said to be regular if all the approximations of the set are inscribed in one another. An illustration of rough approximation framework was given by Kedukodi et al. (2010) using reference points.

In Markov chains, the probabilities which ultimately decide the stability of a system are represented in terms of a matrix known as the transition probability matrix. Markov chains have been used in several applications to predict future possibilities in dynamic and uncertain systems. One such area that has been the focus of intense research is the prediction of the performance of stock markets. In a typical stock market environment, customers either SELL, BUY or HOLD a particular stock by assessing and predicting its performance using the previous and current performance of the stock, inputs from rating agencies, etc. (Tavana et al. 2017). Assessing the past performance of a stock from the available empirical data and predicting its future performance or value is a challenging task, owing to the market’s very dynamic nature and the multiple variables that affect performance.

Despite this, a variety of mathematical models have been proposed in the literature (Aleksandar et al. 2018; Xiongwen et al. 2020; Emrah and Taylan 2017; Sudan et al. 2019; Gour et al. 2018; Prasenjit and Ranadive 2019). Such methods often draw on ideas from Markov chains, fuzzy sets (Chan 2015; Choji et al. 2013; Rezaie et al. 2013), rough sets, artificial neural networks (Suk et al. 2010) and other approaches (Chen et al. 2007; Gong et al. 2019; Markovic et al. 2017).

Recently, Koppula et al. (2018) introduced the concept of Markov rough approximation framework (MRAF) by using Markov chains and rough sets. MRAF helps to assign probabilities to the various reference points in a rough approximation framework. In the present work, we present explicit examples of MRAF. The first example focuses on explaining the mathematical ideas involved, and the second demonstrates how to apply the concept in a practical situation. In the second example, we use MRAF along with the Pawlak decision algorithm (Pawlak 2002) to analyze data from different rating agencies (reference points) in the stock market environment. We arrive at a recommendation on the future prospects of a set of stocks along with the probabilities of the suggested recommendation being correct. Usually, the rating agencies (experts) do not get evaluated for the quality of their recommendations. However, the algorithm proposed in this paper takes care of this aspect through the Markov chain transition probability matrix, which yields a telescopic assignment of weights. This method naturally evaluates the quality of recommendations made by the rating agencies (experts). Further, the suitability of the proposed algorithm is validated by evaluating its efficacy in arriving at a decision in a multi-criteria medical problem. We extend the fuzzy similarity-based rough set approach used in multi-criteria decision problems to identify the key or crucial attribute among multiple attributes.

Basic definitions and preliminaries

In this section, we provide basic definitions and results. We refer to Pawlak (1982), Ciucci (2008), Kedukodi et al. (2010) and Davaaz (2004, 2006) for basic definitions of rough sets and rough approximation frameworks.

Definition 2.1

(Medhi 2009) The stochastic process \(\{X_{n},~n=0,1,2,\ldots \}\) is called a Markov chain if, for \(j,k,j_{1},\ldots ,j_{n-1}\in N,\) \(Pr\{X_{n}=k\mid X_{n-1}=j,X_{n-2}=j_{1},\ldots ,X_{0}=j_{n-1}\} = Pr\{X_{n}=k\mid X_{n-1}=j\}=P_{jk}\) (say) whenever the first member is defined. The transition probabilities \(P_{jk}\) satisfy \(P_{jk}\ge 0,~\Sigma _{k}P_{jk}=1\) for all j.

Definition 2.2

(Ching and Ng 2006) A vector \(\pi =(\pi _{0},\pi _{1},\ldots ,\pi _{k-1})^{t}\) is said to be a stationary distribution of a finite Markov chain if it satisfies:

(i) \(\pi _{i}\ge 0\) and \(\Sigma _{i=0}^{k-1}\pi _{i}=1;\)

(ii) \(\Sigma _{j=0}^{k-1}P_{ij}\pi _{j}=\pi _{i}.\)
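The two conditions above can be checked numerically. The following sketch (with an illustrative \(3\times 3\) matrix, not one taken from the paper) finds a stationary distribution by power iteration; it uses the row convention \(\pi P=\pi \), which is condition (ii) written for row vectors:

```python
import numpy as np

# An illustrative row-stochastic transition matrix P:
# entry P[j, k] = Pr{X_n = k | X_{n-1} = j}, each row sums to 1.
P = np.array([
    [0.5, 0.3, 0.2],
    [0.2, 0.6, 0.2],
    [0.3, 0.3, 0.4],
])

def stationary_distribution(P, tol=1e-12, max_iter=10_000):
    """Find pi with pi @ P = pi by power iteration, starting uniform."""
    pi = np.full(P.shape[0], 1.0 / P.shape[0])
    for _ in range(max_iter):
        nxt = pi @ P
        if np.abs(nxt - pi).max() < tol:
            break
        pi = nxt
    return pi

pi = stationary_distribution(P)
assert np.allclose(pi @ P, pi)    # condition (ii): stationarity
assert np.isclose(pi.sum(), 1.0)  # condition (i): pi is a probability vector
```

For an irreducible, aperiodic chain such as this one, power iteration converges to the unique stationary distribution.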

The results on fuzzy ideals of semirings, near-rings and the ideals of seminear-rings can be found in Bhavanari et al. (2010), Jagadeesha et al. (2016a, b), Kedukodi et al. (2007, 2009), Kedukodi et al. (2017), Koppula et al. (2019), Kuncham et al. (2016, 2017), Nayak et al. (2018) and Akram and Dudek (2008).

Definition 2.3

(Koppula et al. 2018) Let \(S=\{B_{1},B_{2},\ldots ,B_{m}\}\) be a collection of reference points in the universe U. Consider a sequence of trails \(Y_{n},~n\ge 1\) of selection of reference points from the set S. Let the trails dependence be connected by a Markov chain with a transition probability matrix

$$\begin{aligned} C=(p_{ij})_{m\times m},\quad p_{ij}\ge 0,\quad \Sigma _{j=1}^{m}p_{ij}=1 ~\text { for all } i, \end{aligned}$$

where \(p_{ij}\) denotes the probability that the reference point \(B_{j}\) is selected in a trail immediately after \(B_{i}.\)

Let E be the rough approximation framework formed by S. Then (E, C) is called a Markov rough approximation framework (MRAF).

The size of MRAF is the size of the rough approximation framework E.

Definition 2.4

(Huang 1992) A quadruple \(\hbox {IS}=(U, \hbox {AT}, V, h)\) is called an information system, where \(U=\{u_{1},u_{2},\ldots ,u_{n}\}\) is a non-empty finite set of objects, called the universe of discourse, and \(\hbox {AT}=\{a_{1},a_{2},\ldots ,a_{n}\}\) is a non-empty finite set of attributes. \(V=\cup _{a\in \mathrm{AT}}V_{a},\) where \(V_{a}\) is the set of attribute values associated with each attribute \(a\in \hbox {AT},\) and \(h:U\times \hbox {AT} \rightarrow V\) is an information function that assigns particular values to the objects against the attribute set such that \(\forall ~ a\in \hbox {AT}, ~\forall ~u\in U, h(u,a)\in V_{a}.\)

Definition 2.5

(Huang 1992) In an information system, if each attribute has a single entity as its attribute value, then the system is called a single-valued information system; otherwise it is known as a set-valued information system. A set-valued information system is a generalization of the single-valued information system, in which an object can have more than one attribute value.

Definition 2.6

(Orlowska 1985) For a set-valued information system \(\hbox {IS}=(U, \hbox {AT}, V, h),\) \(b\in \hbox {AT}\) and \(u_{i}, u_{j}\in U,\) a tolerance relation is defined as

$$\begin{aligned} T_{b}=\{(u_{i}, u_{j})\mid b(u_{i})\cap b(u_{j})\ne \phi \} \end{aligned}$$

For \(B\subseteq \hbox {AT},\) a tolerance relation is defined as

$$\begin{aligned} T_{B}=\{(u_{i}, u_{j})\mid b(u_{i})\cap b(u_{j})\ne \phi , \forall ~b\in B \}=\bigcap _{b\in B}T_{b}, \end{aligned}$$

where \((u_{i}, u_{j})\in T_{B}\) implies that \(u_{i}\) and \(u_{j}\) are indiscernible (tolerant) with respect to the set of attributes B.

Definition 2.7

(Shivani et al. 2020) Let \(\hbox {SVIS}=(U, \hbox {AT}, V, h)\) be a set-valued information system. For each \(b\in \hbox {AT},\) a fuzzy relation \({R}_{b}\) is defined as

$$\begin{aligned} \mu _{R_{b}}(u_{i}, u_{j})=2 \dfrac{|b(u_{i})\cap b(u_{j})|}{|b(u_{i})|+|b(u_{j})|}. \end{aligned}$$

For a set of attributes \(B\subseteq \hbox {AT},\) a fuzzy relation \(R_{B}\) can be defined as \(\mu _{R_{B}}(u_{i}, u_{j})=\inf \limits _{b\in B} \mu _{R_{b}}(u_{i},u_{j}).\)

Definition 2.8

(Shivani et al. 2020) A binary relation using a threshold value \(\alpha \) is defined as \(T_{b}^{\prime \prime }=\{(u_{i}, u_{j})\mid \mu _{R_{b}}(u_{i}, u_{j})\ge \alpha \},\) where \(\alpha \in (0,1)\) is a similarity threshold that gives the level of similarity required for the insertion of objects into tolerance classes.

For a set of attributes \(B\subseteq \hbox {AT},\) the binary relation is defined as \(T_{B}^{\prime \prime }=\{(u_{i}, u_{j})\mid \mu _{R_{B}}(u_{i}, u_{j})\ge \alpha \}.\)
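Definitions 2.7 and 2.8 are straightforward to compute: \(\mu _{R_{b}}\) is the Dice similarity coefficient of the two attribute-value sets, thresholded at \(\alpha \). A minimal sketch, with hypothetical attribute values:

```python
def mu(A, B):
    """Fuzzy similarity of Definition 2.7: 2|A ∩ B| / (|A| + |B|), the Dice coefficient."""
    return 2 * len(A & B) / (len(A) + len(B))

def tolerant(A, B, alpha):
    """Threshold relation of Definition 2.8: (u_i, u_j) related iff mu >= alpha."""
    return mu(A, B) >= alpha

# Hypothetical set-valued entries b(u_1), b(u_2) for one attribute b:
b_u1 = {"high", "medium"}
b_u2 = {"medium"}

print(mu(b_u1, b_u2))             # 2*1/(2+1) = 0.666...
print(tolerant(b_u1, b_u2, 0.5))  # True
print(tolerant(b_u1, b_u2, 0.7))  # False
```

For a set of attributes B, one would take the infimum of `mu` over all attributes in B before thresholding, as in Definition 2.7.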

Examples of Markov frameworks

We begin with a mathematical example.

Proposition 3.1

Let \(U=\mathbb {R}\times \mathbb {R}\) and \(\overline{a},\overline{b}\in U.\) Define a relation R on U by \(\overline{a} R \overline{b}\) if there exists a square S centered at the origin with radius \(r>0\) (the radius of a square is defined to be the distance from the center (0, 0) of the square to one of its vertices) such that \(\overline{a},\overline{b}\) lie on the square. Then R is an equivalence relation on U.

Proof

(i) Let \(\overline{a}=(x,y)\in U\) and let \(\delta _{x},~\delta _{y}\) be the perpendicular distances from \(\overline{a}\) to the \(Y\)-axis and \(X\)-axis, respectively (so \(\delta _{x}=|x|\) and \(\delta _{y}=|y|\)). Take \(\delta _{\overline{a}}= \max \{\delta _{x},\delta _{y}\}.\) Form a square S with centre (0, 0) and radius \(\sqrt{2}\delta _{\overline{a}}.\) We claim that \(\overline{a}\) lies on S.

Suppose \(\delta _{y}\ge \delta _{x}.\) Then \(\delta _{\overline{a}}=\delta _{y},\) so \(|y|=\delta _{\overline{a}}\) and \(- \delta _{\overline{a}}\le x\le \delta _{\overline{a}}.\) Hence \(\overline{a}\) lies on the side \(y=\delta _{y}\) or the side \(y=-\delta _{y}\) of S.

Similarly, when \(\delta _{x}\ge \delta _{y},\) \(\overline{a}\) lies on the side \(x=\delta _{x}\) or the side \(x=-\delta _{x}\) of the square S. Therefore \(\overline{a}~R~\overline{a}.\) Hence R is reflexive.

(ii) Let \(\overline{a} R \overline{b}\). Then there exists a square centered at the origin with radius r such that \(\overline{a},\overline{b}\) lie on the square. This implies \(\overline{b} R \overline{a}.\) Hence R is symmetric.

(iii) Let \(\overline{a} R \overline{b}\) and \(\overline{b} R \overline{c}.\) Then there exist a square \(S_{1}\) centered at the origin with radius \(k_{1}\) such that \(\overline{a},\overline{b}\) lie on \(S_{1}\) and a square \(S_{2}\) centered at the origin with radius \(k_{2}\) such that \(\overline{b},\overline{c}\) lie on \(S_{2}.\) This implies \(k_{1}=k_{2}\) (because \(\overline{b}\) lies on both \(S_{1}\) and \(S_{2}\)). Hence \(S_{1}=S_{2}.\) Therefore \(\overline{a} R \overline{c}.\) Hence R is transitive, and R is an equivalence relation on U. \(\square \)
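Concretely, a point \((x,y)\) lies on exactly one axis-aligned square centered at the origin, namely the one with half-side \(\max \{|x|,|y|\}\) (radius \(\sqrt{2}\max \{|x|,|y|\}\)), so the relation R can be tested by comparing Chebyshev norms. A small sketch:

```python
def square_class(p):
    """Chebyshev norm max(|x|, |y|): the half-side of the unique origin-centered,
    axis-aligned square through p, which indexes the equivalence class of p."""
    x, y = p
    return max(abs(x), abs(y))

def related(p, q):
    """p R q iff p and q lie on the same origin-centered square."""
    return square_class(p) == square_class(q)

# (3, 1) and (-3, 2) both lie on the square with half-side 3:
print(related((3, 1), (-3, 2)))  # True
print(related((3, 1), (1, 1)))   # False
```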

Notation 3.2

Let \(\overline{a}\in U.\) Denote the rotations (in anticlockwise sense) of a square centered at the origin with radius \(\sqrt{2}\delta _{\overline{a}}\) by an angle \(\theta \) (reference points) as \(S_{(\overline{a},~\theta )},~0\le \theta \le 2\pi .\)

In particular, the square defined by Proposition 3.1 is denoted by \(S_{(\overline{a},~0)}\) (as shown in Fig. 1).

Proposition 3.3

Let \(U=\mathbb {R}\times \mathbb {R}\) and \(\overline{x},~\overline{y}\in U.\) Define a relation \(R_{\theta }\) on U by \(\overline{x}R_{\theta }\overline{y}\) if there exists \(\theta \) such that \( S_{(\overline{x},~\theta )}=S_{(\overline{y},~\theta )}.\) Then \(R_{\theta }\) defines a MRAF of size greater than or equal to 1 on U corresponding to the set \(X_{k}=\{(x,y)\mid x^{2}+y^{2}\le k\}.\)

Proof

First we prove that \(R_{\theta }\) is an equivalence relation on U.

(i) Let \(\overline{x}\in U.\) Take \(\theta =0.\) By Proposition 3.1, \(\overline{x}\) lies on the square \(S_{(\overline{x},0)}.\) This implies \(\overline{x} R_{\theta } \overline{x}.\) Hence \(R_{\theta }\) is reflexive.

(ii) Let \(\overline{x}R_{\theta }\overline{y}.\) Then there exists \(\theta \) such that \( S_{(\overline{x},~\theta )}=S_{(\overline{y},~\theta )}.\) This implies \(\overline{y}R_{\theta }\overline{x}.\) Hence \(R_{\theta }\) is symmetric.

(iii) Let \(\overline{x}R_{\theta }\overline{y}\) and \(\overline{y}R_{\theta }\overline{z}.\) Then there exists \(\theta \) such that \(S_{(\overline{x},~\theta )}=S_{(\overline{y},~\theta )}\) and \(S_{(\overline{y},~\theta )}=S_{(\overline{z},~\theta )}.\) This implies \(S_{(\overline{x},~\theta )}=S_{(\overline{z},~\theta )}.\) Hence \(R_{\theta }\) is transitive. Therefore \(R_{\theta }\) is an equivalence relation on U.

Fig. 1
figure1

Square with rotation zero degrees

The equivalence classes corresponding to the above equivalence relation are given by \([\overline{x}]_{\theta }=\{\overline{y}\in U~\mid \overline{x}~R_{\theta } ~\overline{y}\} =\{S_{(\overline{x},~\theta )}\mid 0\le \theta \le 2\pi \}.\)

The sets \(L(X_{k})=\cup \{\overline{x}\in U\mid [\overline{x}]_{\theta }\subseteq X_{k}\}=\cup \{\overline{x}\in U\mid S_{(\overline{x},\theta )}\subseteq X_{k}\} =\{(x,y)\mid |x|=\sqrt{\frac{k}{2}},~|y|\le \sqrt{\frac{k}{2}}\) or \(|y|=\sqrt{\frac{k}{2}},~|x|\le \sqrt{\frac{k}{2}} \}\) and \(U(X_{k})=\cup \{\overline{x}\in U\mid [\overline{x}]_{\theta }\cap X_{k} \ne \phi \}=\cup \{\overline{x}\in U\mid S_{(\overline{x},\theta )}\cap X_{k}\ne \phi \}=\{(x,y)\mid |x|=k, ~|y|\le k\) or \(|y|=k,~|x|\le k\}\) are, respectively, the lower approximation and upper approximations of the set \(X_{k}.\) Note that whenever the number of rotations of a square is zero, we get MRAF of size 1. If the number of rotations of a square is more than one, then we get a MRAF of size greater than one. \(\square \)

Suppose we take four rotations of the square, given by \(\theta \in \{0,22.5,45,67.5\}\) (in degrees), in Proposition 3.3 with the transition probability matrix

$$\begin{aligned} T=\begin{pmatrix} a_{1} & b_{1} & c_{1} & d_{1}\\ a_{2} & b_{2} & c_{2} & d_{2}\\ a_{3} & b_{3} & c_{3} & d_{3}\\ a_{4} & b_{4} & c_{4} & d_{4} \end{pmatrix}, \end{aligned}$$

where \(a_{i}+b_{i}+c_{i}+d_{i}=1,~i=1,2,3,4.\) Then \((S_{\theta }, T)\) is a MRAF of size 4. A visual representation is shown in Fig. 2, wherein a circle is approximated by a MRAF of size 4.

Fig. 2
figure2

MRAF of size 4

Proposition 3.4

Let \(U=\mathbb {R}\times \mathbb {R}\) and \(~\overline{a},\overline{b}\in U.\) Define a relation \(R_{0}\) on U by \(\overline{a} R_{0} \overline{ b}\) if there exists a circle centered at the origin with radius \(r(>0)\) such that \(\overline{a},\overline{b}\) lie on the circle. Then \(R_{0}\) defines MRAF of size 1 on U corresponding to the set \(X_{k}=\{(x,y)\mid |x|=k,~|y|\le k\) or \(|y|=k,~|x|\le k\}.\)

Proof

We first prove that \(R_{0}\) is an equivalence relation on U.

(i) Let \(\overline{a}=(x,y)\in U.\) Then \(\overline{a}\) lies on the circle centered at the origin with radius \(\sqrt{(x-0)^{2}+(y-0)^{2}}\). Therefore \(\overline{a}~R_{0}~\overline{a}.\) Hence \(R_{0}\) is reflexive.

(ii) Let \(\overline{a}~ R_{0}~ \overline{b}.\) Then there exists a circle centered at the origin with radius r such that \(\overline{a},\overline{b}\) lie on the circle. This implies \(\overline{b}~ R_{0}~ \overline{a}.\) Hence \(R_{0}\) is symmetric.

(iii) Let \(\overline{a}~ R_{0}~ \overline{b}\) and \(\overline{b}~ R_{0}~ \overline{c}.\) Then there exist a circle \(C_{1}\) centered at the origin with radius \(k_{1}\) such that \(\overline{a},\overline{b}\) lie on \(C_{1}\) and a circle \(C_{2}\) centered at the origin with radius \(k_{2}\) such that \(\overline{b},\overline{c}\) lie on \(C_{2}.\) This implies \(k_{1}=k_{2}\) (because \(\overline{b}\) lies on both \(C_{1}\) and \(C_{2}\)). Hence \(C_{1}=C_{2}.\) Therefore \(\overline{a} R_{0} \overline{c}.\) Hence \(R_{0}\) is transitive. Therefore \(R_{0}\) is an equivalence relation on U.

Let \(\overline{a}\in U.\) Denote \(C_{\overline{a}}\) as the circle centered at the origin and radius is the distance between (0, 0) and \(\overline{a}.\) Then the equivalence classes corresponding to the above equivalence relation are given by

$$\begin{aligned}{}[\overline{a}]=\{\overline{b}\in U~\mid \overline{a}~R_{0} ~\overline{b}\}=C_{\overline{a}}. \end{aligned}$$

The sets

$$\begin{aligned} L(X_{k})= & {} \cup \{\overline{a}\in U\mid [\overline{a}]\subseteq X_{k}\}=\cup \{\overline{a}\in U\mid C_{\overline{a}}\subseteq X_{k}\}\\= & {} \{(x,y)\mid x^{2}+y^{2}\le k\} \end{aligned}$$

and

$$\begin{aligned} U(X_{k})= & {} \cup \{\overline{a}\in U\mid [\overline{a}]\cap X_{k} \ne \phi \}=\cup \{\overline{a}\in U\mid C_{\overline{a}}\cap X_{k}\ne \phi \}\\= & {} \{(x,y)\mid x^{2}+y^{2}\le \sqrt{2}k\} \end{aligned}$$

are, respectively, the lower and upper approximations of the set \(X_{k}.\) We get a MRAF of size 1 because all rotations of a circle yield the same circle; thus the transition probability matrix is trivial, having the single entry 1. \(\square \)

A visual representation of MRAF of size 1 is shown in Fig. 3, wherein a collection of squares are approximated by MRAF of size 1.

Fig. 3
figure3

MRAF of size 1

Fig. 4
figure4

Telescoping MRAF (Recursive implementation of Figs. 2b and 3b)

Figure 4 represents the telescoping MRAF, wherein circles and squares are alternately rough approximations of each other.

Example 3.5

Now we illustrate an application of MRAF in the stock market environment. In the process, we give an extension to Pawlak’s decision algorithm and address the problem of decision making in a situation involving data from several sources (rating agencies/experts) with varying criteria (attributes) for a particular stock (company/service provider). Usually, listed companies in a share market release their quarterly performance reports in terms of EBITDA (Earnings Before Interest, Tax, Depreciation and Amortization), EBITDA Margin, EBIT (Earnings Before Interest and Tax), EBIT Margin, PAT (Profit After Tax), Adjusted PAT, Revenue, Sales, etc. (attributes). Among these, various rating agencies/experts select a few parameters as performance indicators of a given stock, predict its future performance for the next quarter and suggest that existing or prospective customers BUY, SELL or HOLD the shares of a given company. From the customer’s point of view, deciding whether to SELL, HOLD or BUY the shares of a particular company is a challenging task, owing to the disparity in the performance indicators selected by the various rating agencies/experts (reference points) and to their sometimes contrary views on the same stock. MRAF together with Pawlak’s decision algorithm is well suited to such problems.

Algorithm 3.6

Extension of Pawlak’s Decision Algorithm:

Step 1: Gather information from multiple sources (reference points) \(R_{1},R_{2},\ldots ,R_{p}\) for \(U=\{u_{1},u_{2},\ldots ,u_{m}\}\) with respect to the attributes \(A=\{a_{1}^{k},a_{2}^{k},\ldots ,a_{n}^{k}\}\) and tabulate them as a decision table for the reference point \(R_{k}.\)

Step 2: Form decision rules associated with the decision table for each reference point \(R_{k}\) satisfying the conditions given by Pawlak (2002).

Step 3: Compute \(Support_{s}(\Phi ,\Psi )=Supp_{s}(\Phi ,\Psi ) \times P(R_{k})\ne 0,\) where \(Supp_{s}(\Phi ,\Psi )\) is given by Pawlak (2002) and \(P(R_{k})\) is the probability of the reference point \(R_{k}\) obtained from MRAF.

Step 4: Combine the decision rules from the various decision algorithms with respect to various reference points and form decision table.

Step 5: Verify whether the decision rules satisfy Pawlak’s (2002) decision algorithm.

Step 6: Calculate strength, certainty and coverage factors for the decision algorithm given in Step 5 by using the following formulae.

$$\begin{aligned} \hbox {Cer}_{s}(\Phi ,\Psi )= & {} \frac{|\parallel \Phi \wedge \Psi \parallel _{S}|}{|\parallel \Phi \parallel _{S}|} =\frac{\mathrm{support}_{s}(\Phi ,\Psi )}{|\parallel \Phi \parallel _{S}|}\\ \hbox {Cov}_{s}(\Phi , \Psi )= & {} \frac{|\parallel \Phi \wedge \Psi \parallel _{S}|}{|\parallel \Psi \parallel _{S}|}=\frac{\mathrm{support}_{s}(\Phi ,\Psi )}{|\parallel \Psi \parallel _{S}|}, \end{aligned}$$

where \(\parallel \Phi \parallel _{s}\ne \phi ,~\parallel \Psi \parallel _{s}\ne \phi \).

Step 7: Arrive at decision using the values of certainty and coverage factors.
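The computations in Steps 3 and 6 can be sketched as follows. The rules and weighted supports here are made-up illustrations (chosen so that the first rule’s certainty comes out to 10/11 ≈ 0.909, echoing the BUY rule discussed later), not the paper’s actual Fig. 8 data:

```python
# Illustrative decision rules (condition Phi -> decision Psi) with weighted supports,
# as in Step 3: support_s = raw support x P(R_k). All values are hypothetical.
rules = [
    {"phi": "EBITDA=inline & PAT=beat", "psi": "BUY",  "support": 10/6},
    {"phi": "EBITDA=miss & PAT=inline", "psi": "HOLD", "support": 4/6},
    {"phi": "EBITDA=miss & PAT=miss",   "psi": "SELL", "support": 2/6},
    {"phi": "EBITDA=inline & PAT=beat", "psi": "HOLD", "support": 1/6},
]

def certainty(rule, rules):
    """Step 6: support of the rule divided by the total support of its condition Phi."""
    phi_total = sum(r["support"] for r in rules if r["phi"] == rule["phi"])
    return rule["support"] / phi_total

def coverage(rule, rules):
    """Step 6: support of the rule divided by the total support of its decision Psi."""
    psi_total = sum(r["support"] for r in rules if r["psi"] == rule["psi"])
    return rule["support"] / psi_total

print(certainty(rules[0], rules))  # 10/11 ≈ 0.909
print(coverage(rules[0], rules))   # BUY arises only from this rule -> 1.0
```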

Now to demonstrate the utility of the above algorithm with a stock market case scenario as described above, we have collected relevant data of 15 companies \([C_{1}, C_{2}, C_{3},\ldots , C_{15}]\) from four different rating agencies \((R_{1},R_{2},R_{3}\) and \(R_{4})\) with respect to various attributes. Representative data obtained for a single company \((C_{4})\) are presented in Fig. 5.

Fig. 5
figure5

Quarterly performance of a company according to rating agencies (CMP current market price, TP target price, Rec. recommendation)

In Fig. 5, as the target price projections by the various rating agencies differ, an expected target price is computed using MRAF. For simplicity, we consider the MRAF given in the example of Koppula et al. (2018) and the corresponding stationary probability distribution [1/6, 2/6, 1/6, 2/6] for the rating agencies \(R_{1},R_{2},R_{3}\) and \(R_{4}\), respectively. Both the current market price and the calculated expected target price are compared as per the conditions in Fig. 6.
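The expected target price is simply the weighted sum of the agencies’ projections under the stationary distribution. A sketch with hypothetical target prices:

```python
# Stationary probabilities of the four rating agencies R1..R4 (from the text):
pi = [1/6, 2/6, 1/6, 2/6]

# Hypothetical target-price projections for one company by the four agencies:
tp = [250.0, 270.0, 240.0, 265.0]

# Expected target price E(TP) under the MRAF distribution:
e_tp = sum(p * t for p, t in zip(pi, tp))
print(e_tp)  # (250 + 2*270 + 240 + 2*265) / 6 = 260.0
```

E(TP) is then compared with the current market price according to the criteria in Fig. 6.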

Fig. 6
figure6

Criteria for company performance based on E(TP) and CMP

Subsequently, all the companies are grouped based on the similarity among the various attributes, along with the likely scenario (Fig. 6) of the stock; the grouping is presented in Fig. 7.

Fig. 7
figure7

Grouping of companies based on similarity criteria (the numeral in brackets indicates the number of companies satisfying the corresponding decision criteria)

From Fig. 7, a decision algorithm is formed as shown in Fig. 8.

Fig. 8
figure8

Decision algorithm associated with decision Fig. 7

The certainty and coverage factors for the above decision algorithm are calculated using Algorithm 3.6 and are shown in Fig. 9.

Fig. 9
figure9

Certainty and coverage factors for the decision algorithm (Fig. 8). Support is obtained by multiplying the number (Fig. 7) with \(P(R_{i}),~i=1,2,3,4\) as defined in Algorithm 3.6

Information in Fig. 9 can be used to arrive at best possible decision as per the decision algorithm presented in Fig. 8.

For example, if (EBITDA, inline), (PAT, beat) and (CMP-TP, closer to E(TP)) then the Recommendation is BUY with the probability 0.909.

If (EBITDA, miss), (PAT, inline) and (CMP-TP, lower to E(TP)) then the Recommendation is HOLD with probability 0.667.

If Recommendation is SELL, then (EBITDA margin, miss), (EBIT margin, miss) and (CMP-TP, higher to E(TP)) with probability 0.5.

To validate the results of Example 3.5, we have considered three companies (viz. ACC Limited, Wipro Limited and Maruti Suzuki India Limited designated as \(C_{4}, C_{9}, C_{15}\), respectively) and the data from their quarterly reports. Also, recommendation reports from four different rating agencies (reference points) are noted. From these data, expected values for various parameters are computed and a recommendation is suggested based on MRAF. Subsequently, actual share prices of these companies are plotted and compared with the suggestion arrived from MRAF. The validation is shown in Fig. 10.

Fig. 10
figure10

Graph depicting the actual share prices of selected companies from July to November of 2016 along with benchmark index Nifty (Source: www.nseindia.com)

The graph indicates that the recommendation derived from MRAF and extended Pawlak’s decision algorithm seems to be appropriate for the selected companies as indicated by share price variations of these companies. (An increase in share price of Maruti Suzuki Ltd was observed and an appropriate recommendation ‘BUY’ is obtained from the algorithm. Similarly, changes in share price of ACC cements are not significant and the recommendation obtained is ‘HOLD’.) Example 3.5 can be extended to a telescoping MRAF over a period of 4 quarters in a typical financial year.

It is to be noted that the rating agencies (experts) do not get evaluated for their recommendations. However, in our method Markov chain transition probability matrix yields a telescopic way of assignment of weights which naturally evaluates the quality of recommendations made by the rating agencies (experts). The above example indicates the potential applications of MRAF in real world situations.

Example 3.7

In order to demonstrate that the proposed method can be useful in other similarly uncertain and dynamic situations, we consider Example 4.3 from Bingzhen et al. (2019), which describes a multi-criteria decision-making problem and utilizes the multi-granulation vague rough set method to arrive at a decision in medical diagnosis. With the help of MRAF, a probability distribution was assigned to the experts \(R_{1}, R_{2}, R_{3}\) as [1/3, 1/6, 1/2]. The expected values of the criteria are then calculated using the probabilities of \(R_{1}, R_{2}, R_{3}\) and are shown in Fig. 11.

Fig. 11
figure11

The expected values of criteria

The expected values of the various criteria are labeled Very Low, Low, Average, High and Very High if 0.1 \(\le \) expected value < 0.25, 0.25 \(\le \) expected value < 0.34, 0.34 \(\le \) expected value < 0.49, 0.49 \(\le \) expected value < 0.64 and expected value \(\ge \) 0.64, respectively. The Recommendation value is taken as the sum of all the expected values corresponding to each alternative and is denoted ‘Yes’ if it is \(\ge \) 2 and ‘No’ if it is < 2, as shown in Fig. 12.
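The binning and recommendation rule above can be sketched directly; the expected values used below are hypothetical:

```python
def linguistic(ev):
    """Map an expected criterion value to the linguistic term defined in the text."""
    if ev >= 0.64: return "Very High"
    if ev >= 0.49: return "High"
    if ev >= 0.34: return "Average"
    if ev >= 0.25: return "Low"
    if ev >= 0.10: return "Very Low"
    return None  # below the smallest bin given in the text

def recommendation(expected_values):
    """'Yes' iff the sum of expected values for the alternative is at least 2."""
    return "Yes" if sum(expected_values) >= 2 else "No"

# Hypothetical expected values of five criteria for one alternative:
evs = [0.40, 0.70, 0.30, 0.20, 0.55]
print([linguistic(v) for v in evs])  # ['Average', 'Very High', 'Low', 'Very Low', 'High']
print(recommendation(evs))           # sum = 2.15 -> 'Yes'
```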

Fig. 12
figure12

The linguistic terms to the expected values

The decision rules are formed for the criteria given in decision Fig. 12 and are presented in Fig. 13.

Fig. 13
figure13

The decision algorithm associated with Fig. 12

Coverage and certainty factors for the above decision algorithm are calculated using the proposed algorithm and are shown in Fig. 14.

Fig. 14
figure14

Certainty and coverage factors for the decision algorithm in Fig. 13

From the above values, we can observe that if (\(E(y_{1})\), Average), (\(E(y_{2})\),Very High) and (\(E(y_{4})\), Very Low), then the Recommendation is Yes to say that the patient has gastroenteritis with probability 1. This is in agreement with the decision obtained using multi-granulation vague rough set theory, thus confirming the usefulness of MRAF and Pawlak’s decision algorithm.

Fuzzy tolerance relation

Shivani et al. (2020) introduced a novel approach for attribute selection in set-valued information systems based on tolerance rough set theory and defined a fuzzy relation between two objects using a similarity threshold. In this paper, we generalize this relation for alternative selection in set-valued information systems by using the concepts of Markov rough approximation framework (MRAF) and reference points.

Definition 3.8

Let \((U, \hbox {AL}, V, h)\) be a set-valued information system, where \(U=\{u_{1},u_{2},\ldots ,u_{n}\}\) is a non-empty finite set of objects (criteria).

\(\hbox {AL}=\{a_{1}, a_{2},\ldots ,a_{n}\}\) is a non-empty set of alternatives, \(V=\bigcup _{a\in AL} V_{a},\) where \(V_{a}\) is the set of criteria values associated with each alternative \(a\in AL,\) and \(h:U\times AL\rightarrow V\) is such that \(\forall ~a\in AL, \forall ~u\in U, h(u,a)=V_{a}.\) Let \(X=\{R_{1},R_{2},\ldots ,R_{n}\}\) be a collection of reference points from the universe U. Then the fuzzy relation is defined as

$$\begin{aligned} \sum _{k=1}^{n} \mu _{{a}}(u_{i}, u_{j})_{R_{k}}P(R_{k})=\sum _{k=1}^{n} 2\dfrac{|a(u_{i})_{R_{k}}\cap a(u_{j})_{R_{k}}|}{|a(u_{i})_{R_{k}}|+|a(u_{j})_{R_{k}}|}P(R_{k}), \end{aligned}$$

where \(P(R_{k}), ~k=1,2,\ldots ,n\) is the probability distribution for the reference points \(R_{k},~k=1,2,\ldots n\) obtained from MRAF.

The binary relation using a threshold value \(\alpha \in (0,1)\) is defined as

$$\begin{aligned} T_{(a, \alpha )}=\left\{ (u_{i}, u_{j})\mid \sum _{k=1}^{n} \mu _{{a}}(u_{i}, u_{j})_{R_{k}}P(R_{k})\ge \alpha \right\} . \end{aligned}$$
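A minimal sketch of Definition 3.8, with hypothetical set-valued entries under three reference points and the probability vector [1/3, 1/6, 1/2] used in Example 3.7:

```python
def dice(A, B):
    """Per-reference-point similarity 2|A ∩ B| / (|A| + |B|)."""
    return 2 * len(A & B) / (len(A) + len(B))

def weighted_mu(values_i, values_j, probs):
    """Definition 3.8: sum_k dice(a(u_i)_{R_k}, a(u_j)_{R_k}) * P(R_k)."""
    return sum(dice(A, B) * p for A, B, p in zip(values_i, values_j, probs))

def in_T(values_i, values_j, probs, alpha):
    """(u_i, u_j) in T_(a, alpha) iff the weighted similarity reaches alpha."""
    return weighted_mu(values_i, values_j, probs) >= alpha

# Hypothetical entries a(u_i)_{R_k} of two objects under three reference points:
probs = [1/3, 1/6, 1/2]
u1 = [{"x", "y"}, {"x"}, {"y", "z"}]
u2 = [{"y"}, {"x"}, {"z"}]

print(weighted_mu(u1, u2, probs))  # (2/3)(1/3) + 1*(1/6) + (2/3)(1/2) = 13/18 ≈ 0.722
print(in_T(u1, u2, probs, 0.5))    # True
```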

Lemma 3.9

\(T_{(a, \alpha )}\) is a tolerance relation.

Proof

(1) Reflexive: Consider

$$\begin{aligned}&\sum _{k=1}^{n} \mu _{{a}}(u_{i}, u_{i})_{R_{k}}P(R_{k})\\&\quad =\sum _{k=1}^{n} 2\dfrac{|a(u_{i})_{R_{k}}\cap a(u_{i})_{R_{k}}|}{|a(u_{i})_{R_{k}}|+|a(u_{i})_{R_{k}}|}P(R_{k})\\&\quad =2\dfrac{|a(u_{i})_{R_{1}}\cap a(u_{i})_{R_{1}}|}{|a(u_{i})_{R_{1}}|+|a(u_{i})_{R_{1}}|}P(R_{1})\\&\qquad +2\dfrac{|a(u_{i})_{R_{2}}\cap a(u_{i})_{R_{2}}|}{|a(u_{i})_{R_{2}}|+|a(u_{i})_{R_{2}}|}P(R_{2})+\cdots \\&\qquad +2\dfrac{|a(u_{i})_{R_{n}}\cap a(u_{i})_{R_{n}}|}{|a(u_{i})_{R_{n}}|+|a(u_{i})_{R_{n}}|}P(R_{n}).\\&\quad = \dfrac{2|a(u_{i})_{R_{1}}|}{2|a(u_{i})_{R_{1}}|}P(R_{1})+\dfrac{2|a(u_{i})_{R_{2}}|}{2|a(u_{i})_{R_{2}}|}P(R_{2})+\cdots \\&\qquad +\dfrac{2|a(u_{i})_{R_{n}}|}{2|a(u_{i})_{R_{n}}|}P(R_{n})\\&\quad = P(R_{1})+P(R_{2})+\cdots +P(R_{n})\\&\quad =1\ge \alpha . \end{aligned}$$

This implies \((u_{i}, u_{i})\in T_{(a, \alpha )}\). Hence \(T_{(a, \alpha )}\) is reflexive.

(2) Symmetry: Let \((u_{i},u_{j})\in T_{(a, \alpha )}\).

Then

$$\begin{aligned}&\sum _{k=1}^{n} \mu _{{a}}(u_{i}, u_{j})_{R_{k}}P(R_{k})\ge \alpha \\&\quad \Rightarrow [2\dfrac{|a(u_{i})_{R_{1}}\cap a(u_{j})_{R_{1}}|}{|a(u_{i})_{R_{1}}|+|a(u_{j})_{R_{1}}|}P(R_{1})\\&\qquad +2\dfrac{|a(u_{i})_{R_{2}}\cap a(u_{j})_{R_{2}}|}{|a(u_{i})_{R_{2}}|+|a(u_{j})_{R_{2}}|}P(R_{2})+\cdots \\&\qquad + 2\dfrac{|a(u_{i})_{R_{n}}\cap a(u_{j})_{R_{n}}|}{|a(u_{i})_{R_{n}}|+|a(u_{j})_{R_{n}}|}P(R_{n})]\ge \alpha \\&\quad \Rightarrow [2\dfrac{|a(u_{j})_{R_{1}}\cap a(u_{i})_{R_{1}}|}{|a(u_{j})_{R_{1}}|+|a(u_{i})_{R_{1}}|}P(R_{1})\\&\qquad +2\dfrac{|a(u_{j})_{R_{2}}\cap a(u_{i})_{R_{2}}|}{|a(u_{j})_{R_{2}}|+|a(u_{i})_{R_{2}}|}P(R_{2})+\cdots \\&\qquad +2\dfrac{|a(u_{j})_{R_{n}}\cap a(u_{i})_{R_{n}}|}{|a(u_{j})_{R_{n}}|+|a(u_{i})_{R_{n}}|}P(R_{n})]\ge \alpha \\&\quad \Rightarrow \sum _{k=1}^{n} \mu _{{a}}(u_{j}, u_{i})_{R_{k}}P(R_{k}) \ge \alpha . \end{aligned}$$

This implies \((u_{j}, u_{i})\in T_{(a, \alpha )}.\) Hence \(T_{(a, \alpha )}\) is symmetric. Thus \(T_{(a, \alpha )}\) is a tolerance relation. \(\square \)

The tolerance class for an object \(u_{i}\) with respect to \(T_{(a, \alpha )}\) is \([T_{(a, \alpha )}](u_{i})=\{u_{j}\in U\mid u_{i} T_{(a, \alpha )} u_{j}\}.\)

The lower and upper approximations of a non-empty subset \(B\subseteq U\) are defined as:

$$\begin{aligned} \underline{T}_{(a, \alpha )}(B)= & {} \{u_{i}\in U\mid [T_{(a, \alpha )}](u_{i})\subseteq B\}.\\ \overline{T}_{(a, \alpha )}(B)= & {} \{u_{i}\in U\mid [T_{(a, \alpha )}](u_{i})\cap B \ne \phi \}. \end{aligned}$$
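These tolerance classes and approximations can be sketched generically; the universe and relation below are toy assumptions. The output also illustrates the inclusion \(\underline{T}_{(a, \alpha )}(B)\subseteq B \subseteq \overline{T}_{(a, \alpha )}(B)\) stated next:

```python
def tolerance_class(u, universe, related):
    """[T](u): all objects tolerant to u."""
    return {v for v in universe if related(u, v)}

def lower_approx(B, universe, related):
    """Objects whose whole tolerance class sits inside B."""
    return {u for u in universe if tolerance_class(u, universe, related) <= B}

def upper_approx(B, universe, related):
    """Objects whose tolerance class meets B."""
    return {u for u in universe if tolerance_class(u, universe, related) & B}

# Toy universe with a hypothetical reflexive, symmetric relation given as pairs:
U = {"u1", "u2", "u3", "u4"}
pairs = {("u1", "u2"), ("u3", "u4")}
def related(a, b):
    return a == b or (a, b) in pairs or (b, a) in pairs

B = {"u1", "u2", "u3"}
print(lower_approx(B, U, related))  # {'u1', 'u2'}: their classes sit inside B
print(upper_approx(B, U, related))  # all of U: every class meets B
```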

Properties of lower and upper approximations with respect to the fuzzy tolerance relation. In the following, we assume that \((U, \hbox {AL}, V, h)\) is a set-valued information system, \(a\in AL,\) \(B\subseteq U\) and \(\alpha \in (0,1).\)

Theorem 3.10

\(\underline{T}_{(a, \alpha )}(B)\subseteq B \subseteq \overline{T}_{(a, \alpha )}(B).\)

Notation 3.11

Let \(\{R_{1}, R_{2},\ldots ,R_{n}\}\) be the reference points in the universe U. Then

$$\begin{aligned}&\mu _{{a}}(u_{i}, u_{j})_{\min } = \hbox {Minimum of}\\&\quad \{\mu _{{a}}(u_{i}, u_{j})_{R_{1}}, \mu _{{a}}(u_{i}, u_{j})_{R_{2}},\ldots ,\mu _{{a}}(u_{i}, u_{j})_{R_{n}}\}\\&\mu _{{a}}(u_{i}, u_{j})_{\max } = \hbox {Maximum of}\\&\quad \{\mu _{{a}}(u_{i}, u_{j})_{R_{1}}, \mu _{{a}}(u_{i}, u_{j})_{R_{2}},\ldots ,\mu _{{a}}(u_{i}, u_{j})_{R_{n}}\} \end{aligned}$$

The binary relations using a threshold value \(\alpha \in (0,1)\) are defined as

$$\begin{aligned} T_{(a, \alpha )\max }= & {} \{(u_{i}, u_{j})\mid \mu _{{a}}(u_{i}, u_{j})_{\max }\ge \alpha \}.\\ T_{(a, \alpha )\min }= & {} \{(u_{i}, u_{j})\mid \mu _{{a}}(u_{i}, u_{j})_{\min }\ge \alpha \}. \end{aligned}$$

The tolerance classes for an attribute \(u_{i}\) with respect to \(T_{(a, \alpha )\max }\) and \(T_{(a, \alpha )\min }\) are

$$\begin{aligned} {[}{T}_{(a, \alpha )\max }{]}(u_{i})= & {} \{u_{j}\in U\mid u_{i} {T}_{(a, \alpha )\max } u_{j}\}.\\ {[}{T}_{(a, \alpha )\min }{]}(u_{i})= & {} \{u_{j}\in U\mid u_{i} {T}_{(a, \alpha )\min } u_{j}\}. \end{aligned}$$

The lower and upper approximations of a non-empty subset \(B\subseteq U\) are

$$\begin{aligned}&\underline{T}_{(a, \alpha )\max }(B)=\{u_{i}\in U\mid [T_{(a, \alpha )\max }](u_{i})\subseteq B\}.\\&\overline{T}_{(a, \alpha )\max }(B)=\{u_{i}\in U\mid [T_{(a, \alpha )\max }](u_{i})\cap B \ne \phi \}. \\&\underline{T}_{(a, \alpha )\min }(B)=\{u_{i}\in U\mid [T_{(a, \alpha )\min }](u_{i})\subseteq B\}.\\&\overline{T}_{(a, \alpha )\min }(B)=\{u_{i}\in U\mid [T_{(a, \alpha )\min }](u_{i})\cap B \ne \phi \} \end{aligned}$$

respectively.
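The chain of inclusions in Theorem 3.12 below rests on the fact that the weighted similarity is a convex combination of the per-reference-point similarities, so it always lies between \(\mu _{a}(u_{i},u_{j})_{\min }\) and \(\mu _{a}(u_{i},u_{j})_{\max }.\) A small numerical check, with hypothetical similarity values and assumed MRAF probabilities:

```python
# Numerical check (hypothetical values): a probability-weighted similarity is
# a convex combination of the per-reference-point similarities, hence it is
# bounded below by their minimum and above by their maximum.

def weighted_sim(vals, P):
    """sum_k mu_a(ui,uj)_{R_k} P(R_k) for one pair (ui, uj)."""
    return sum(v * p for v, p in zip(vals, P))

P = [0.25, 0.75]  # MRAF probabilities for reference points R1, R2 (assumed)

# per-reference-point similarities mu_a(ui,uj)_{R_k} for two pairs
mu_R = {("u1", "u2"): [0.5, 0.8], ("u1", "u3"): [0.3, 0.6]}

for pair, vals in mu_R.items():
    w = weighted_sim(vals, P)
    assert min(vals) <= w <= max(vals)
    print(pair, min(vals), round(w, 3), max(vals))
```

With a fixed threshold \(\alpha ,\) this bound is exactly why \((u_{i},u_{j})\in T_{(a, \alpha )\min }\) forces \((u_{i},u_{j})\in T_{(a, \alpha )},\) which in turn forces \((u_{i},u_{j})\in T_{(a, \alpha )\max }.\)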

Theorem 3.12

\(\underline{T}_{(a, \alpha )\max }(B) \subseteq \underline{T}_{(a, \alpha )}(B) \subseteq \underline{T}_{(a, \alpha )\min }(B)\subseteq B \subseteq \overline{T}_{(a, \alpha )\min }(B) \subseteq \overline{T}_{(a, \alpha )}(B) \subseteq \overline{T}_{(a, \alpha )\max }(B).\)

Proof

By the fundamental theorem of linear programming, a linear function attains its extreme values at the vertices of its feasible region. Here, since \(P(R_{k})\ge 0\) and \(\sum _{k=1}^{n}P(R_{k})=1,\) the expression \(\sum _{k=1}^{n} \mu _{{a}}(u_{i}, u_{j})_{R_{k}}P(R_{k})\) is a convex combination of the values \(\mu _{{a}}(u_{i}, u_{j})_{R_{k}}.\) Hence its minimum value is \(\mu _{{a}}(u_{i}, u_{j})_{\min }\) and its maximum value is \(\mu _{{a}}(u_{i}, u_{j})_{\max }.\)

Now, we prove that \(\underline{T}_{(a, \alpha )\max }(B) \subseteq \underline{T}_{(a, \alpha )}(B).\) Let \(z\in \underline{T}_{(a, \alpha )\max }(B)\Rightarrow [T_{(a, \alpha )\max }](z)\subseteq B.\) Now, we prove that \([T_{(a, \alpha )}](z) \subseteq B.\) Let \(y\in [T_{(a, \alpha )}](z).\) Then

$$\begin{aligned} \sum _{k=1}^{n} \mu _{{a}}(y,z)_{R_{k}}P(R_{k}) \ge \alpha . \end{aligned}$$

As

$$\begin{aligned} \mu _{a}(y,z)_{\max }\ge \sum _{k=1}^{n} \mu _{{a}}(y,z)_{R_{k}}P(R_{k}) \ge \alpha , \end{aligned}$$

we get \(\mu _{a}(y,z)_{\max }\ge \alpha .\) This implies

$$\begin{aligned} y\in [T_{(a, \alpha )\max }](z). \end{aligned}$$

This gives

$$\begin{aligned}{}[T_{(a, \alpha )}](z)\subseteq [T_{(a, \alpha )\max }](z). \end{aligned}$$

As \([T_{(a, \alpha )\max }](z)\subseteq B,\) we get \([T_{(a, \alpha )}](z)\subseteq B.\) This implies \(z\in \underline{T}_{(a, \alpha )}(B).\) Now, we prove that \(\underline{T}_{(a, \alpha )}(B) \subseteq \underline{T}_{(a, \alpha )\min }(B).\) Let \(z\in \underline{T}_{(a, \alpha )}(B)\Rightarrow [T_{(a, \alpha )}](z)\subseteq B.\) Now, we prove that \([T_{(a, \alpha )\min }](z) \subseteq B.\) Let \(y\in [T_{(a, \alpha )\min }](z).\) Then \(\mu _{a}(y,z)_{\min }\ge \alpha .\) As

$$\begin{aligned} \sum _{k=1}^{n} \mu _{{a}}(y,z)_{R_{k}}P(R_{k}) \ge \mu _{a}(y,z)_{\min } \ge \alpha , \end{aligned}$$

we get

$$\begin{aligned} \sum _{k=1}^{n} \mu _{{a}}(y, z)_{R_{k}}P(R_{k})\ge \alpha . \end{aligned}$$

This implies

$$\begin{aligned} y\in [T_{(a, \alpha )}](z). \end{aligned}$$

This gives

$$\begin{aligned}{}[T_{(a, \alpha )\min }](z)\subseteq [T_{(a, \alpha )}](z). \end{aligned}$$

As \([T_{(a, \alpha )}](z)\subseteq B,\) we get \([T_{(a, \alpha )\min }](z)\subseteq B.\) This implies

$$\begin{aligned} z\in \underline{T}_{(a, \alpha )\min }(B). \end{aligned}$$

Clearly, from Theorem 3.10, we have \(\underline{T}_{(a, \alpha )\min }(B)\subseteq B \subseteq \overline{T}_{(a, \alpha )\min }(B).\) Now, we prove that \(\overline{T}_{(a, \alpha )\min }(B) \subseteq \overline{T}_{(a, \alpha )}(B).\)

Let \(z \in \overline{T}_{(a, \alpha )\min }(B).\) Then \([T_{(a, \alpha )\min }](z) \cap B \ne \phi .\) Now, take \(y\in [T_{(a, \alpha )\min }](z).\) Then \(\mu _{a}(y,z)_{\min }\ge \alpha .\) As \(\sum _{k=1}^{n} \mu _{{a}}(y, z)_{R_{k}}P(R_{k}) \ge \mu _{a}(y,z)_{\min } \ge \alpha ,\) we get \(y\in [T_{(a, \alpha )}](z).\) This implies \([T_{(a, \alpha )\min }](z) \subseteq [T_{(a, \alpha )}](z).\) As \([T_{(a, \alpha )\min }](z) \cap B \ne \phi ,\) we get \([T_{(a, \alpha )}](z) \cap B \ne \phi .\) Hence \(z\in \overline{T}_{(a, \alpha )}(B).\) Now, we prove that \(\overline{T}_{(a, \alpha )}(B) \subseteq \overline{T}_{(a, \alpha )\max }(B).\) Let \(z \in \overline{T}_{(a, \alpha )}(B).\) Then \([T_{(a, \alpha )}](z) \cap B \ne \phi .\) Now, take \(y\in [T_{(a, \alpha )}](z).\) Then \(\sum _{k=1}^{n} \mu _{{a}}(y,z)_{R_{k}}P(R_{k})\ge \alpha .\) As \(\mu _{a}(y,z)_{\max } \ge \sum _{k=1}^{n} \mu _{{a}}(y, z)_{R_{k}}P(R_{k}) \ge \alpha ,\) we get \(y\in [T_{(a, \alpha )\max }](z).\) This implies \([T_{(a, \alpha )}](z) \subseteq [T_{(a, \alpha )\max }](z).\) As \([T_{(a, \alpha )}](z) \cap B \ne \phi ,\) we get \([T_{(a, \alpha )\max }](z) \cap B \ne \phi .\) Hence \(z\in \overline{T}_{(a, \alpha )\max }(B).\) \(\square \)

Here we provide an example of selecting the best alternative in a multi-criteria decision-making problem by using the tolerance relation and the proposed algorithm.

Example 3.13

The set-valued information system with alternatives \(a_{1}, a_{2}, a_{3}, a_{4}\) and their corresponding attributes \(u_{1},u_{2},u_{3},u_{4},u_{5}\) with respect to the reference points \(R_{1}\) and \(R_{2}\) is given in Fig. 15.

Fig. 15
figure15

Set-valued information system with respect to the reference points \(R_{1}\) and \(R_{2}\)

By using MRAF, the probabilities assigned to the reference points \([R_{1},R_{2}]\) are \([\dfrac{1}{4}, \dfrac{3}{4}].\) Then the values obtained by the tolerance relation (Definition 3.8) are presented in Fig. 16.

In order to apply the decision algorithm, the sum of each column in Fig. 16 is compiled, and the compiled values \((u_{i},a_{j})\) are denoted as Low, Average, Good and Very Good if \((u_{i},a_{j}) \le 1.75,\) \(1.75< (u_{i},a_{j}) < 2.5,\) \(2.5 \le (u_{i},a_{j}) < 3.25\) and \(3.25 \le (u_{i},a_{j}) < 4,\) respectively. The value of Recommendation is taken as the sum of all the values corresponding to each alternative, and is denoted as 'Yes' if the Recommendation value is \(\ge 14\) and 'No' if it is \(< 14\); the results are presented in Fig. 17.
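The compilation step can be sketched as follows; the column sums used here are hypothetical stand-ins for the Fig. 16 values, which appear only in the figure.

```python
# Sketch of the compilation step: label each compiled column sum (u_i, a_j)
# and aggregate into a recommendation. The input sums are hypothetical.

def grade(v):
    """Label a compiled value per the thresholds in the text."""
    if v <= 1.75:
        return "Low"
    elif v < 2.5:
        return "Average"
    elif v < 3.25:
        return "Good"
    return "Very Good"

def recommend(column_sums):
    """'Yes' when the total over all attributes reaches 14."""
    return "Yes" if sum(column_sums) >= 14 else "No"

sums_a = [3.0, 2.9, 2.6, 3.1, 2.8]  # hypothetical (u_i, a_j) values for one alternative
print([grade(v) for v in sums_a], recommend(sums_a))
```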

Fig. 16
figure16

Tolerance relation values

Fig. 17
figure17

Compilation values of \((u_{i},a_{j})\)

Fig. 18
figure18

Set-valued information system with respect to the reference points \(R_{1}\) and \(R_{2}\)

Fig. 19
figure19

Tolerance relation values

Then, by using the proposed algorithm, the alternatives \(a_{2}\) and \(a_{3}\) can be selected with probability 1.

The above example clearly shows that the fuzzy tolerance relation can be used to arrive at the best alternative in multi-criteria decision problems.

Now, we generalize the fuzzy tolerance relation given by Shivani et al. (2020) for selecting attributes in a set-valued information system by using MRAF.

Definition 3.14

Let \((U, \hbox {AT}, V, h)\) be a set-valued information system, where \(U=\{u_{1},u_{2},\ldots ,u_{n}\}\) is a non-empty finite set of objects, \(\hbox {AT}=\{b_{1}, b_{2},\ldots ,b_{m}\}\) is a non-empty set of attributes, \(V=\bigcup _{b\in \mathrm{AT}} V_{b},\) where \(V_{b}\) is the set of alternative values associated with each attribute \(b\in \mathrm{AT},\) and \(h:U\times \mathrm{AT}\rightarrow V\) is such that \(h(u,b)\subseteq V_{b}\) for all \(b\in \mathrm{AT}\) and \(u\in U.\) Let \(X=\{R_{1},R_{2},\ldots ,R_{n}\}\) be a collection of reference points from the universe U. Then the fuzzy relation is defined as \(\sum _{k=1}^{n} \mu _{{b}}(u_{i}, u_{j})_{R_{k}}P(R_{k})=\sum _{k=1}^{n} 2\dfrac{|b(u_{i})_{R_{k}}\cap b(u_{j})_{R_{k}}|}{|b(u_{i})_{R_{k}}|+|b(u_{j})_{R_{k}}|}P(R_{k}),\) where \(P(R_{k}),~k=1,2,\ldots ,n,\) is the probability distribution for the reference points \(R_{k},~k=1,2,\ldots ,n,\) obtained from MRAF.

For a set of attributes \(B\subseteq \mathrm{AT},\) a fuzzy relation is defined as \(\mu _{B}(u_{i}, u_{j})=\inf \limits _{b\in B} \left( \sum _{k=1}^{n} \mu _{{b}}(u_{i}, u_{j})_{R_{k}}P(R_{k})\right) .\)
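A minimal sketch of this relation: a probability-weighted Dice coefficient of the attribute value sets under each reference point, followed by the infimum over a set of attributes. The value sets, the second attribute's score and the probabilities below are hypothetical.

```python
# Sketch of Definition 3.14: mu_b(ui, uj) = sum_k 2|b(ui)_{R_k} ∩ b(uj)_{R_k}|
# / (|b(ui)_{R_k}| + |b(uj)_{R_k}|) * P(R_k); for a set B of attributes,
# mu_B is the infimum over b in B. All values here are hypothetical.

def weighted_mu(sets_i, sets_j, P):
    """Probability-weighted Dice coefficient over the reference points."""
    total = 0.0
    for S, T, p in zip(sets_i, sets_j, P):
        total += 2 * len(S & T) / (len(S) + len(T)) * p
    return total

P = [1 / 3, 2 / 3]  # assumed MRAF probabilities for R1, R2

# b(u1)_{R_k} and b(u2)_{R_k} for one attribute b (hypothetical value sets)
b_u1 = [{1, 2}, {1, 3}]
b_u2 = [{2, 3}, {1, 2, 3}]
mu_b = weighted_mu(b_u1, b_u2, P)  # = 1/6 + 8/15 = 0.7

# mu_B(u1, u2) = infimum over the attributes in B; second score assumed
mu_B = min([mu_b, 0.5])
print(round(mu_b, 3), mu_B)
```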

The binary relation using a threshold value \(\alpha \in (0,1)\) is defined as

$$\begin{aligned} T_{(b, \alpha )}=\left\{ (u_{i}, u_{j})\mid \sum _{k=1}^{n} \mu _{{b}}(u_{i}, u_{j})_{R_{k}}P(R_{k})\ge \alpha \right\} . \end{aligned}$$

For a set of attributes \(B\subseteq AT,\) we define a relation as

$$\begin{aligned} T_{(B, \alpha )}=\{(u_{i}, u_{j})\mid \mu _{{B}}(u_{i}, u_{j})\ge \alpha \}. \end{aligned}$$

Theorem 3.15

\(T_{(b, \alpha )}\) is a tolerance relation.

Example 3.16

The set-valued information system with criteria \(b_{1}, b_{2}, b_{3}, b_{4}\) and their corresponding alternatives \(u_{1},u_{2},u_{3},u_{4},u_{5}\) with respect to the reference points \(R_{1}\) and \(R_{2}\) is given in Fig. 18.

By using MRAF, the probabilities assigned to the reference points \([R_{1},R_{2}]\) are \([\dfrac{1}{3}, \dfrac{2}{3}].\) Then the values obtained by the tolerance relation are presented in Fig. 19.

Fig. 20
figure20

Compilation values of \((u_{i},b_{j})\)

In order to apply the decision algorithm, the sum of each column in Fig. 19 is compiled, and the compiled values \((u_{i},b_{j})\) are denoted as Low, Average and Good if \(1 \le (u_{i},b_{j}) < 2,\) \(2 \le (u_{i},b_{j}) < 3\) and \((u_{i},b_{j}) \ge 3,\) respectively. The value of Recommendation is taken as the sum of all the values corresponding to each alternative, and is denoted as 'Yes' if the Recommendation (Rec.) value is \(\ge 15\) and 'No' if it is \(< 15\); the results are presented in Fig. 20.

Then, by using the proposed algorithm, the attributes \(b_{1}\) and \(b_{2}\) can be selected with probability 1.

Conclusions

We have provided applications of Markov frameworks in dealing with decision problems in dynamic and uncertain environments. The MRAF enables the possibility of assigning probabilities in a telescopic manner, and Pawlak's decision algorithm enables arriving at decisions in multi-criteria decision problems. This method can also be used to supplement other multi-criteria decision-making methods for deeper analysis of the data, especially when the values of the ranked alternatives are very close to each other. The proposed algorithm is illustrated in the stock market environment to give the recommendation SELL, BUY or HOLD for a particular stock, and it is validated on a multi-criteria medical diagnosis problem. In addition, the fuzzy tolerance relation is used along with MRAF as a tool to ascertain critical attributes as well as alternatives in multi-criteria decision problems. The methods presented in this paper can be used in the development of other methods that deal with multi-criteria decision problems.

References

  1. Akram M, Dudek WA (2008) Intuitionistic fuzzy left k-ideals of semirings. Soft Comput 12:881–890


  2. Aleksandar R, Vlado S, Bratislav P, Sanja M (2018) An automated system for stock market trading based on logical clustering. Tech Gaz 25(4):970–978


  3. Bingzhen S, Weimin M, Xiangtang C, Xiong Z (2019) Multigranulation vague rough set over two universes and its application to group decision making. Soft Comput 23:8927–8956


  4. Bhavanari S, Kuncham SP, Kedukodi BS (2010) Graph of a nearring with respect to an ideal. Commun Algebra 38:1957–1962


  5. Chan KC (2015) Market share modelling and forecasting using Markov chains and alternative models. Int J Innov Comput Inf Control 11(4):1205–1218


  6. Chen MC, Lin CL, Chen AP (2007) Constructing a dynamic stock portfolio decision-making assistance model: using the Taiwan 50 Index constituents as an example. Soft Comput 11(12):1149–1156


  7. Choji DN, Eduno SN, Kassem GT (2013) Markov chain model application on share price movement in stock market. Comput Eng Intell Syst 4:84–95


  8. Ching W, Ng MK (2006) Markov chains: models, algorithms and applications. Springer, New York


  9. Ciucci D (2008) A unifying abstract approach for rough models. Lect Notes Artif Int 5009:371–378


  10. Davvaz B (2004) Roughness in rings. Inf Sci 164:147–163


  11. Davvaz B (2006) Roughness based on fuzzy ideals. Inf Sci 176:2417–2437


  12. Emrah O, Taylan AA (2017) Financial performance evaluation of Turkish construction companies in Istanbul stock exchange (BIST). Int J Acad Res Account Finance Manag Sci 7(3):108–113


  13. Gong XL, Liu XH, Xiong X, Zhuang XT (2019) Forecasting stock volatility process using improved least square support vector machine approach. Soft Comput. https://doi.org/10.1007/s00500-018-03743-0


  14. Gour SMT, Rupak B, Seema S (2018) Stock portfolio selection using Dempster–Shafer evidence theory. J King Saud Univ Comput Inf Sci 30:223–235


  15. Huang SY (ed) (1992) Intelligent decision support: handbook of applications and advances of rough sets theory. Springer, Berlin


  16. Jagadeesha B, Kedukodi BS, Kuncham SP (2016a) Implications on a lattice. Fuzzy Inf Eng 8(4):411–425


  17. Jagadeesha B, Kedukodi BS, Kuncham SP (2016b) Interval valued L-fuzzy ideals based on t-norms and t-conorms. J Intell Fuzzy Syst 28(6):2631–2641


  18. Kedukodi BS, Kuncham SP, Bhavanari S (2007) C-prime fuzzy ideals of nearrings. Soochow J Math 33(4):891–901


  19. Kedukodi BS, Kuncham SP, Bhavanari S (2009) Equiprime, 3-prime and C-prime fuzzy ideals of nearrings. Soft Comput 13(10):933–944


  20. Kedukodi BS, Kuncham SP, Bhavanari S (2010) Reference points and roughness. Inf Sci 180(17):3348–3361


  21. Kedukodi BS, Kuncham SP, Jagadeesha B (2017) Interval valued L-fuzzy prime ideals, triangular norms and partially ordered groups. Soft Comput. https://doi.org/10.1007/s00500-017-2798-x


  22. Koppula K, Kedukodi BS, Kuncham SP (2018) Markov chains and rough sets. Soft Comput 23(15):6441–6453


  23. Koppula K, Kedukodi BS, Kuncham SP (2019) On prime strong ideals of a seminearring. Mat Vesnik. http://www.vesnik.math.rs/inpress/mv2019_003.pdf

  24. Kuncham SP, Jagadeesha B, Kedukodi BS (2016) Interval valued L-fuzzy cosets of nearrings and isomorphism theorems. Afrika Math 27(3):393–408


  25. Kuncham SP, Kedukodi BS, Harikrishnan P, Bhavanari S (2017) Nearrings, nearfields and related topics. World Scientific, Singapore. ISBN 978-981-3207-35-6


  26. Markovic I, Stojanovic M, Stankovic J, Stankovic M (2017) Stock market trend prediction using AHP and weighted kernel LS-SVM. Soft Comput 21(18):5387–5398


  27. Medhi J (2009) Stochastic processes. New Age International Publishers Limited, New Delhi


  28. Nayak H, Kuncham SP, Kedukodi BS (2018) \(\theta \Gamma \) N-group. Mat Vesnik 70:64–78


  29. Orlowska E (1985) Logic of nondeterministic information. Stud Log 44(1):91–100


  30. Pawlak Z (1982) Rough sets. Int J Comput Inf Sci 11:341–356


  31. Pawlak Z (2002) Rough sets and intelligent data analysis. Inf Sci 147(1–4):1–12


  32. Prasenjit M, Ranadive AS (2019) Fuzzy multi-granulation decision-theoretic rough sets based on fuzzy preference relation. Soft Comput 23:85–99. https://doi.org/10.1007/s00500-018-3411-7


  33. Rezaie K, Majazi DV, Hatami SL, Shirkouhi N (2013) Efficiency appraisal and ranking of decision-making units using data envelopment analysis in fuzzy environment: a case study of Tehran stock exchange. Neural Comput Appl 23(Suppl 1):1. https://doi.org/10.1007/s00521-012-1209-6


  34. Shivani S, Shivam S, Tanmoy S, Gaurav S (2020) A fuzzy similarity-based rough set approach for attribute selection in set-valued information systems. Soft Comput 24:4675–4691


  35. Sudan J, Raghvendra K, Le HS, Jyotir MC, Manju K, Yadav Navneet, Florentin S (2019) Neutrosophic soft set decision making for stock trending analysis. Evol Syst 10:621–627


  36. Suk JL, Jae JA, Kyong JO, Tae YK (2010) Using rough set to support investment strategies of real-time trading in futures market. Appl Intell 32:364–377


  37. Tavana M, Khanjani SR, Di CD (2017) A chance-constrained portfolio selection model with random-rough variables. Neural Comput Appl. https://doi.org/10.1007/s00521-017-3014-8


  38. Xiongwen P, Yanqiang Z, Pan W, Weiwei L, Victor C (2020) An innovative neural network approach for stock market prediction. J Supercomput. https://doi.org/10.1007/s11227-017-2228-y



Acknowledgements

The authors acknowledge anonymous reviewers and the editor for their valuable comments and suggestions. All authors acknowledge the support and encouragement of Manipal Institute of Technology, Manipal Academy of Higher Education (MAHE), India.

Funding

Open access funding provided by Manipal Academy of Higher Education, Manipal.

Author information

Corresponding author

Correspondence to Babushri Srinivas Kedukodi.

Ethics declarations

Conflict of interest

The authors declare that they have no conflict of interest.

Ethical standard

This article does not contain any studies with human participants or animals performed by any of the authors.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Communicated by V. Loia.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.



Cite this article

Koppula, K., Kedukodi, B.S. & Kuncham, S.P. Markov frameworks and stock market decision making. Soft Comput 24, 16413–16424 (2020). https://doi.org/10.1007/s00500-020-04950-4


Keywords

  • Rough set
  • Markov chain
  • Rough approximation framework