Formalizing the Informal, Precisiating the Imprecise: How Fuzzy Logic Can Help Mathematicians and Physicists by Formalizing Their Intuitive Ideas

Part of the book series: Studies in Fuzziness and Soft Computing (STUDFUZZ, volume 325)

Abstract

Fuzzy methodology transforms expert ideas—formulated in terms of words from natural language—into precise rules and formulas. In this paper, we show that by applying this methodology to intuitive physical and mathematical ideas, we can get known fundamental physical equations and known mathematical techniques for solving these equations. This fact makes us confident that in the future, fuzzy techniques will help physicists and mathematicians to transform their imprecise ideas into new physical equations and new techniques for solving these equations.


References

  1. Aczél, J.: Lectures on Functional Equations and Their Applications. Dover Publications, New York (2006)
  2. Akhiezer, A.I., Berestetsky, V.B.: Quantum Electrodynamics. Interscience Publishers, New York (1965)
  3. del Castillo-Mussot, M., Costa Dias, R.: Fuzzy sets and physics. Revista Mexicana de Física 39(2), 295–303 (1993)
  4. Feynman, R., Leighton, R., Sands, M.: The Feynman Lectures on Physics. Addison Wesley, Boston (2005)
  5. Gutierrez, E., Kreinovich, V.: Fundamental physical equations can be derived by applying fuzzy methodology to informal physical ideas. In: Proceedings of the 30th Annual Conference of the North American Fuzzy Information Processing Society NAFIPS’2011, El Paso, Texas, 18–20 March 2011
  6. Klir, G., Yuan, B.: Fuzzy Sets and Fuzzy Logic: Theory and Applications. Prentice Hall, Upper Saddle River, New Jersey (1995)
  7. Kreinovich, V., Chang, C.-C., Reznik, L., Solopchenko, G.N.: Inverse problems: fuzzy representation of uncertainty generates a regularization. In: Proceedings of the 1992 Conference of the North American Fuzzy Information Processing Society NAFIPS’92, Puerto Vallarta, Mexico, 15–17 December 1992, NASA Johnson Space Center, Houston, TX, pp. 418–426 (1992)
  8. Kreinovich, V., Quintana, C., Reznik, L.: Gaussian membership functions are most adequate in representing uncertainty in measurements. In: Proceedings of the 1992 Conference of the North American Fuzzy Information Processing Society NAFIPS’92, Puerto Vallarta, Mexico, 15–17 December 1992, NASA Johnson Space Center, Houston, TX, pp. 618–625 (1992)
  9. Nguyen, H.T., Kreinovich, V., Bouchon-Meunier, B.: Soft computing explains heuristic numerical methods in data processing and in logic programming. In: Medsker, L. (ed.) Frontiers in Soft Computing and Decision Systems, pp. 30–35. AAAI Press, Palo Alto (1997). Publication No. FS-97-04
  10. Nguyen, H.T., Walker, E.A.: A First Course in Fuzzy Logic. CRC Press, Boca Raton (2006)
  11. Ross, T.J.: Fuzzy Logic with Engineering Applications. Wiley, New York (2010)
  12. Zadeh, L.A.: Fuzzy sets. Inf. Control 8, 338–353 (1965)

Acknowledgments

This work was supported in part by the National Science Foundation grants HRD-0734825, HRD-1242122, and DUE-0926721.

Author information

Correspondence to Vladik Kreinovich.

Appendices

Appendix 1: Variational Equations

General derivation. Let us recall how we can transform the Least Action Principle into a differential equation. Let us first do it on the example of a Newton-type situation, where we need to find a function \(x(t)\) that minimizes the following expression: \(S=\int L(x,\dot{x})\,dt\rightarrow \min \). Minimizing means, in particular, that if we take any function \(\varDelta x(t)\) and consider the function \(S(\alpha )\mathop {=}\limits ^\mathrm{def}S(x+\alpha \cdot \varDelta x)\), then this function must attain its minimum at \(\alpha =0\). Thus, the derivative of \(S(\alpha )\) at \(\alpha =0\) must be 0. Differentiating the expression

$$S(\alpha )=\int L(x+\alpha \cdot \varDelta x,\dot{x}+\alpha \cdot \varDelta \dot{x})\,dt$$

and equating the derivative to 0, we conclude that

$$\int \left( \frac{\partial L}{\partial x}\cdot \varDelta x+ \frac{\partial L}{\partial \dot{x}}\cdot \varDelta \dot{x}\right) \,dt= \int \left( \frac{\partial L}{\partial x}\cdot \varDelta x\right) \,dt + \int \left( \frac{\partial L}{\partial \dot{x}}\cdot \varDelta \dot{x}\right) \,dt=0.$$

Integrating the second term by parts, we conclude that

$$\int \left( \frac{\partial L}{\partial x} -\frac{d}{dt}\left( \frac{\partial L}{\partial \dot{x}}\right) \right) \cdot \varDelta x\,dt=0.$$

This must be true for every function \(\varDelta x(t)\), in particular for a function that is equal to 0 everywhere except for a small vicinity of a moment \(t\). For this function, the integral is proportional to the value of the expression \(\displaystyle \frac{\partial L}{\partial x} -\displaystyle \frac{d}{dt}\left( \displaystyle \frac{\partial L}{\partial \dot{x}}\right) \) at the point \(t\). Since the integral is 0, this expression must also be equal to 0:

$$\frac{\partial L}{\partial x}-\frac{d}{dt}\left( \frac{\partial L}{\partial \dot{x}}\right) =0.$$

The resulting equations are known as Euler-Lagrange equations.

Case of Newton’s laws. In particular, in Newton’s case, when

$$L=V(x)-\displaystyle \frac{1}{2}\cdot m\cdot \sum \limits _{i=1}^3 \left( \frac{dx_i}{dt}\right) ^2,$$

for each of the components \(x_i(t)\), we have \(\displaystyle \frac{\partial L}{\partial x_i}=\displaystyle \frac{\partial V}{\partial x_i}\) and \(\displaystyle \frac{\partial L}{\partial \dot{x}_i}=-m\cdot \displaystyle \frac{dx_i}{dt}.\) Thus, the Euler-Lagrange equations lead to \(\displaystyle \frac{\partial V}{\partial x_i}+m\cdot \displaystyle \frac{d}{dt}\left( \displaystyle \frac{dx_i}{dt}\right) =0,\) i.e., to Newton’s equations \(m\cdot \displaystyle \frac{d^2x_i}{dt^2}=-\displaystyle \frac{\partial V}{\partial x_i}.\)
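
As a sanity check, here is a minimal SymPy sketch (our illustration, not part of the original derivation; it assumes SymPy is available and uses the concrete potential \(V(x)=\frac{1}{2}\cdot k\cdot x^2\) as an example) confirming that the Euler-Lagrange equation for this Lagrangian reduces to Newton’s equation:

```python
# Minimal symbolic check (assumes SymPy): the Euler-Lagrange equation for
# L = V(x) - (1/2)*m*(dx/dt)^2, with the example potential V(x) = (1/2)*k*x^2,
# reduces to Newton's equation m*x'' = -dV/dx = -k*x.
from sympy import Function, Rational, symbols
from sympy.calculus.euler import euler_equations

t, m, k = symbols('t m k', positive=True)
x = Function('x')

L = Rational(1, 2)*k*x(t)**2 - Rational(1, 2)*m*x(t).diff(t)**2

# euler_equations returns [Eq(dL/dx - d/dt(dL/d(dx/dt)), 0)]
print(euler_equations(L, x(t), t))
# Expected (up to SymPy's term ordering):
#   [Eq(k*x(t) + m*Derivative(x(t), (t, 2)), 0)],
# i.e., m*x'' = -k*x, which is Newton's equation for this potential.
```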

General case. In the general case, Euler-Lagrange equations take the form \(\displaystyle \frac{\partial L}{\partial \varphi }-\sum \limits _{i=1}^3 \displaystyle \frac{\partial }{\partial x_i}\left( \displaystyle \frac{\partial L}{\partial \varphi _{,i}}\right) =0,\) where \(\varphi _{,i}\mathop {=}\limits ^\mathrm{def}\displaystyle \frac{\partial \varphi }{\partial x_i}.\)

Appendix 2: Proof of Proposition 1

\(1^\circ \). Let us first apply the condition \(\mu (x+x_0, a+x_0,\sigma )=\mu (x,a,\sigma )\) with \(x_0=-a\). Then, we get \(\mu (x,a,\sigma )=\mu (x-a,0,\sigma )\), or, equivalently,

$$\mu (x,a,\sigma )=\mu _1(x-a,\sigma ),$$

where we denoted \(\mu _1(z,\sigma )\mathop {=}\limits ^\mathrm{def}\mu (z,0,\sigma )\).

\(2^\circ \). In terms of the function \(\mu _1\), the condition \(\mu (\lambda \cdot x,\lambda \cdot a,\lambda \cdot \sigma )=\mu (x,a,\sigma )\) takes the form \(\mu _1(\lambda \cdot (x-a),\lambda \cdot \sigma )=\mu _1(x-a,\sigma )\). Let us apply this condition for \(\lambda =\sigma ^{-1}\). Then, we conclude that \(\mu _1(z,\sigma )=\mu _1\left( \displaystyle \frac{z}{\sigma },1\right) \), or, equivalently, \(\mu _1(z,\sigma )=\mu _0\left( \displaystyle \frac{z}{\sigma }\right) \), where we denoted \(\mu _0(z)\mathop {=}\limits ^\mathrm{def}\mu _1(z,1)\).

Substituting this expression for \(\mu _1(z,\sigma )\) in terms of \(\mu _0\) in the expression for \(\mu \) in terms of \(\mu _1\), we conclude that \(\mu (x,a,\sigma )=\mu _0\left( \displaystyle \frac{x-a}{\sigma }\right) .\)

\(3^\circ \). Substituting the expression for \(\mu \) in terms of \(\mu _0\) into the condition \(\mu (-x,-a,\sigma )=\mu (x,a,\sigma )\), we conclude that \(\mu _0(-z)=\mu _0(z)\). Thus, \(\mu (x,a,\sigma )=\mu _0\left( \displaystyle \frac{|x-a|}{\sigma }\right) .\)

\(4^\circ \). For \(a_1=a_2=0\) and \(\sigma _1=\sigma _2=1\), the fusion condition implies that

$$ \mu _0(x)\cdot \mu _0(x)=C\cdot \mu _0\left( \displaystyle \frac{x-a}{\sigma }\right) $$

for some \(a\), \(C\), and \(\sigma \). The left-hand side attains its maximum (\(=\)1) at \(x=0\), the right-hand side attains its maximum (which is equal to \(C\)) at \(x=a\). Since these two sides are one and the same function, we conclude that \(a=0\) and \(C=1\), i.e., that \(\mu _0^2(x)=\mu _0(k_2\cdot x)\) for some constant \(k_2\) (\(=1/\sigma \)). For the auxiliary function \(\ell (x)\mathop {=}\limits ^\mathrm{def}\ln (\mu _0(x))\), we conclude that \(2\cdot \ell (x)=\ell (k_2\cdot x)\).

Similarly, if we consider 3, 4, etc. terms, we conclude that \(3\cdot \ell (x)=\ell (k_3\cdot x)\), \(4\cdot \ell (x)=\ell (k_4\cdot x)\), etc.
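
As a concrete illustration (this example is ours, added for clarity): for the Gaussian function \(\mu _0(x)=\exp (-x^2)\), we have \(\ell (x)=-x^2\), and indeed

$$n\cdot \ell (x)=-n\cdot x^2=-\left( \sqrt{n}\cdot x\right) ^2=\ell \left( \sqrt{n}\cdot x\right) ,$$

so in this case \(k_n=\sqrt{n}\).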

\(5^\circ \). The function \(\mu _0(x)\) for \(x>0\) is monotonically decreasing from 1 to 0. Therefore, \(\ell (x)\) is monotonically decreasing from 0 to \(-\infty \). Since \(\mu \) (and thus, \(\mu _0\)) is continuous, the function \(\ell (x)\) is also continuous, and hence, there exists an inverse function \(i(x)=\ell ^{-1}(x)\), i.e., a function such that \(i(\ell (x))=x\) for every \(x\).

For this inverse function, the equality \(n\cdot \ell (x)=\ell (k_n\cdot x)\) turns into \(i(n\cdot \ell (x))=i(\ell (k_n\cdot x))=k_n\cdot x=k_n\cdot i(\ell (x))\). So, if we denote \(\ell (x)\) by \(X\), we conclude that for every \(n\), there exists a \(k_n\) such that \(i(n\cdot X)=k_n\cdot i(X)\).

If we substitute \(Y=n\cdot X\), we conclude that \(i(Y)=k_n\cdot i\left( \displaystyle \frac{Y}{n}\right) \), and therefore, \(i\left( \displaystyle \frac{Y}{n}\right) =\displaystyle \frac{1}{k_n}\cdot i(Y)\).

From these two equalities, we conclude that \(i\left( \displaystyle \frac{m}{n}\,\cdot \, X\right) =\displaystyle \frac{1}{k_n}\,\cdot \, i(m\cdot X)=\displaystyle \frac{k_m}{k_n}\,\cdot \, i(X)\). So, for every rational number \(r\), there exists a real number \(k(r)\) such that \(i(r\cdot X)=k(r)\cdot i(X)\). Therefore, the ratio \(\displaystyle \frac{i(r\cdot X)}{i(X)}\) is constant for all rational \(r\).

\(6^\circ \). Since \(i(X)\) is a continuous function, and any real number can be represented as a limit of a sequence of rational numbers, we conclude that the ratio \(\displaystyle \frac{i(r\cdot X)}{i(X)}\) is constant for real values of \(r\) as well. Therefore, for every real number \(r\), there exists a \(k(r)\) such that \(i(r\cdot X)=k(r)\cdot i(X)\).

We have thus arrived at a functional equation for which all monotonic solutions are known: they are \(i(X)=A\cdot X^p\) for some \(A\) and \(p\); see, e.g., [1]. Therefore, the inverse function \(\ell (x)\) (for \(x>0\)) takes a similar form \(\ell (x)=B\cdot x^m\) for some \(B\) and \(m\). Taking into consideration that \(\mu _0(x)\) and hence \(\ell (x)\) are even functions, we conclude that \(\ell (x)=B\cdot |x|^m\) for all \(x\).
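
As a quick check of sufficiency (a verification we add for clarity), every function of this form does satisfy the above functional equation:

$$i(r\cdot X)=A\cdot (r\cdot X)^p=r^p\cdot A\cdot X^p=k(r)\cdot i(X),\qquad k(r)=r^p;$$

the result cited from [1] states that there are no other monotonic solutions.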

\(7^\circ \). Now, for every \(a_1>0\), if we take \(a_2=-a_1\) and \(\sigma _1=\sigma _2=1\), then the fusion property implies that \(\mu _0(x-a_1)\cdot \mu _0(x+a_1)=C\cdot \mu _0\left( \displaystyle \frac{x-a}{\sigma }\right) \) for some \(a\) and \(\sigma \). The left-hand side of this equation is an even function, so the right-hand side must also be even, and therefore \(a=0\). So, \(\mu _0(x-a_1)\cdot \mu _0(x+a_1)=C\cdot \mu _0\left( \displaystyle \frac{x}{\sigma }\right) \). For \(x=0\), we get \(\mu _0(a_1)\cdot \mu _0(a_1)=C\). Turning to logarithms, we conclude that for every \(a_1\), there exists a \(k(a_1)\) (\(=1/\sigma \)) such that \(\ell (x-a_1)+\ell (x+a_1)=\ell (k(a_1)\cdot x)+2\cdot \ell (a_1)\). If we substitute here \(\ell (x)=B\cdot |x|^m\) and divide both sides by \(B\), we conclude that \(|x-a_1|^m+|x+a_1|^m=(k(a_1))^m\cdot |x|^m+2\cdot a_1^m\).

Let us show that this equality is satisfied only when \(m=2\).

\(8^\circ \). When \(x>0\) and \(a_1\) is sufficiently small, then \(x+a_1\), \(x\), and \(x-a_1\) are all positive, and, therefore, \((x-a_1)^m+(x+a_1)^m=(k(a_1))^m\cdot x^m+2\cdot a_1^m\). If we move \(2\cdot a_1^m\) to the left-hand side and divide both sides by \(x^m\), we conclude that \(\left( 1-\displaystyle \frac{a_1}{x}\right) ^m+\left( 1+\displaystyle \frac{a_1}{x}\right) ^m-2\cdot \left( \displaystyle \frac{a_1}{x}\right) ^m=(k(a_1))^m\). The left-hand side of the resulting equality depends only on the ratio \(z=\displaystyle \frac{a_1}{x}\), the right-hand side only on \(a_1\). Therefore, if we choose any positive real number \(\lambda \) and take \(a_1^{\prime }=\lambda \cdot a_1\) and \(x^{\prime }=\lambda \cdot x\) instead of \(a_1\) and \(x\), then the left-hand side will still be the same, and therefore, the right-hand side must be the same, i.e., \((k(a_1))^m=(k(\lambda \cdot a_1))^m\). Since \(\lambda \) was an arbitrary number, we conclude that \(k(a_1)\) does not depend on \(a_1\) at all, i.e., that \((k(a_1))^m\) is a constant. Let us denote this constant by \(k\).

So the equation takes the form \((1-z)^m+(1+z)^m=k+2\cdot z^m\). When \(z\rightarrow 0\), then the left-hand side tends to 2 and right-hand side to \(k\), so from their equality we conclude that \(k=2\), i.e., that \((1-z)^m+(1+z)^m=2+2\cdot z^m\).

The left-hand side is an analytical function of \(z\) for \(z\) close to 0. Therefore the right-hand side must also be a regular analytical function in the neighborhood of 0 (i.e., it must have a Taylor expansion for \(z=0\)). Hence, \(m\) must be an integer.

The values \(m<2\) are impossible: for \(m=0\), the function \(\ell (x)=B\cdot |x|^0\) would be a constant, which contradicts the fact that \(\ell (x)\) is monotonically decreasing from 0 to \(-\infty \); and for \(m=1\), our equality turns into \(1-z+1+z=2+2\cdot z\), which is true only for \(z=0\). So \(m\ge 2\).

Since both sides are analytical in \(z\), the second derivatives of both sides at \(z=0\) must be equal to each other. The second derivative of the left-hand side at \(z=0\) is equal to \(2\cdot m\cdot (m-1)\). The second derivative of the right-hand side is equal to \(2\cdot m\cdot (m-1)\cdot z^{m-2}\).

If \(m>2\), then this derivative equals 0 at \(z=0\) and therefore cannot be equal to \(2\cdot m\cdot (m-1)\). So \(m\ge 2\), and \(m\) cannot be greater than 2. So, \(m=2\). Thus, \(\ell (x)=B\cdot x^2\), and hence \(\mu _0(x)=\exp (-\beta \cdot x^2)\) for some \(\beta >0\). The proposition is proven.
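
As an illustrative numerical sanity check (our addition, assuming NumPy is available), one can verify that the resulting Gaussian membership functions indeed satisfy the fusion property used in this proof: the product \(\mu _0\left( \frac{x-a_1}{\sigma _1}\right) \cdot \mu _0\left( \frac{x-a_2}{\sigma _2}\right) \) coincides with \(C\cdot \mu _0\left( \frac{x-a}{\sigma }\right) \) for suitable \(a\), \(\sigma \), and \(C\), obtained by completing the square in the exponent:

```python
# Numerical check (assumes NumPy) that Gaussian membership functions satisfy
# the fusion property: the product of two Gaussians from the family
# mu(x, a, sigma) = mu0((x - a)/sigma), mu0(z) = exp(-z^2), is C times
# another member of the same family.
import numpy as np

def mu0(z):
    return np.exp(-z**2)

a1, s1 = 1.0, 2.0                      # first membership function: center, width
a2, s2 = -0.5, 0.7                     # second membership function

# Fused parameters, obtained by completing the square in the exponent:
A = 1/s1**2 + 1/s2**2
a = (a1/s1**2 + a2/s2**2) / A          # fused center
sigma = 1/np.sqrt(A)                   # fused width
C = np.exp(-(a1**2/s1**2 + a2**2/s2**2 - A*a**2))   # fused height

x = np.linspace(-5.0, 5.0, 1001)
lhs = mu0((x - a1)/s1) * mu0((x - a2)/s2)
rhs = C * mu0((x - a)/sigma)
print(np.max(np.abs(lhs - rhs)))       # ~1e-16: the two sides coincide
```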

Appendix 3: Proof of Proposition 2

Let us assume that \(s_k\rightarrow x\), and let us prove that in this case, the ratios

$$X_N\mathop {=}\limits ^\mathrm{def}\frac{\sum \limits _{k=0}^N s_k\cdot (s_{k+1}-s_k)^{-2}}{\sum \limits _{k=0}^N (s_{k+1}-s_k)^{-2}}$$

also tend to \(x\), i.e., that for every \(\varepsilon >0\), there exists an \(n\) for which, for all \(N\ge n\), we have \(|X_N-x|\le \varepsilon \).

Since \(s_k\rightarrow x\), there exists an integer \(n_0\) such that for all \(k\ge n_0\), we have \(|s_k-x|\le \displaystyle \frac{\varepsilon }{2}\). In particular, this means that for such \(k\), we have \(s_k\le x+\displaystyle \frac{\varepsilon }{2}\). We can represent the numerator \(\mathcal{N}\) of the ratio \(X_N\) as

$$\mathcal{N}=\mathcal{N}_0+\sum \limits _{k=n_0+1}^{N} s_k\cdot (s_{k+1}-s_k)^{-2},$$

where \(\mathcal{N}_0\mathop {=}\limits ^\mathrm{def}\sum \limits _{k=0}^{n_0} s_k\cdot (s_{k+1}-s_k)^{-2}\). Since \(s_k\le x+\displaystyle \frac{\varepsilon }{2}\), we conclude that

$$\mathcal{N}\le \mathcal{N}_0+\left( x+\frac{\varepsilon }{2}\right) \cdot \varDelta ,$$

where we denoted \(\varDelta \mathop {=}\limits ^\mathrm{def}\sum \limits _{k=n_0+1}^{N} (s_{k+1}-s_k)^{-2}\). Similarly, for the denominator \(\mathcal D\) of the ratio \(X_N\), we get an expression \(\mathcal{D}=\mathcal{D}_0+\varDelta \), where

$$\mathcal{D}_0\mathop {=}\limits ^\mathrm{def}\sum \limits _{k=0}^{n_0} (s_{k+1}-s_k)^{-2}.$$

Thus,

$$ X_N=\frac{\mathcal{N}}{\mathcal{D}}\le \frac{\mathcal{N}_0+ \left( x+\displaystyle \frac{\varepsilon }{2}\right) \cdot \varDelta }{\mathcal{D}_0+\varDelta }. $$

The right-hand side of this inequality can be represented as

$$\frac{\mathcal{N}_0+\left( x+\displaystyle \frac{\varepsilon }{2}\right) \cdot \varDelta }{\mathcal{D}_0+\varDelta }= x+\displaystyle \frac{\varepsilon }{2}+\frac{\mathcal{N}_0-\mathcal{D}_0\cdot \left( x+\displaystyle \frac{\varepsilon }{2}\right) }{\mathcal{D}_0+\varDelta }.$$

Here, for \(k\ge n_0\), the inequalities \(|s_k-x|\le \displaystyle \frac{\varepsilon }{2}\) and \(|s_{k+1}-x|\le \displaystyle \frac{\varepsilon }{2}\) imply that

$$|s_{k+1}-s_k|\le |s_k-x|+|s_{k+1}-x|\le \displaystyle \frac{\varepsilon }{2}+\displaystyle \frac{\varepsilon }{2}=\varepsilon .$$

Thus, \((s_{k+1}-s_k)^{-2}\ge \varepsilon ^{-2}\) and so, \(\varDelta \ge (N-n_0)\cdot \varepsilon ^{-2}\). When \(N\rightarrow \infty \), we have \(\varDelta \rightarrow \infty \) and thus,

$$\frac{\mathcal{N}_0-\mathcal{D}_0\cdot \left( x+\displaystyle \frac{\varepsilon }{2}\right) }{\mathcal{D}_0+\varDelta }\le \frac{\varepsilon }{2}$$

for sufficiently large \(N\). For such \(N\), we get \(X_N=\displaystyle \frac{\mathcal{N}}{\mathcal{D}}\le x+\displaystyle \frac{\varepsilon }{2}+\displaystyle \frac{\varepsilon }{2}=x+\varepsilon \). Similarly, for sufficiently large \(N\), we get \(X_N\ge x-\varepsilon \). The proposition is proven.
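
The following sketch (our illustration, assuming NumPy; the particular sequence \(s_k=x+1/(k+1)\) is just an example) shows this convergence numerically: the weighted averages \(X_N\) approach the same limit as the partial sums \(s_k\) themselves:

```python
# Numerical illustration (assumes NumPy) of Proposition 2: if s_k -> x, then
# X_N = sum(s_k * (s_{k+1}-s_k)^(-2)) / sum((s_{k+1}-s_k)^(-2)) -> x as well.
import numpy as np

x_limit = 3.0
k = np.arange(0, 2001)
s = x_limit + 1.0/(k + 1)              # example sequence converging to x_limit

def X(N):
    """Weighted average X_N over k = 0, ..., N."""
    w = (s[1:N+2] - s[0:N+1])**(-2)    # weights (s_{k+1} - s_k)^{-2}
    return np.sum(s[0:N+1]*w) / np.sum(w)

for N in (10, 100, 1000):
    print(N, X(N))                     # the values approach x_limit = 3.0 as N grows
```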

Appendix 4: Example: Applying Formula (6) to the Divergent Geometric Series \(\sum z^i\) for \(|z|\ge 1\)

When \(|z|>1\), the series \(\sum z^i\) diverges. Here, \(s_0=1\), \(s_1=1+z\), ..., and, in general, \(s_k=1+z+\cdots +z^k=\displaystyle \frac{z^{k+1}-1}{z-1}.\) Thus, \(s_{k+1}-s_k=\displaystyle \frac{z^{k+2}-z^{k+1}}{z-1}=z^{k+1}\). So, the denominator \(\mathcal D\) of formula (6) has the form \(\mathcal{D}=\sum \limits _{k=0}^N z^{-2\cdot (k+1)}\). In the limit, when \(N\rightarrow \infty \), we get \(\mathcal{D}\rightarrow \displaystyle \frac{z^{-2}}{1-z^{-2}}\).

For the numerator, we similarly have

$$\mathcal{N}=\sum _{k=0}^N \frac{z^{k+1}-1}{z-1}\cdot z^{-2\cdot (k+1)}=\frac{1}{z-1}\cdot \left( \sum _{k=0}^N z^{-(k+1)}-\sum _{k=0}^N z^{-2\cdot (k+1)}\right) .$$

In the limit, when \(N\rightarrow \infty \), we get \(\mathcal{N}\rightarrow \displaystyle \frac{1}{z-1}\cdot \left( \displaystyle \frac{z^{-1}}{1-z^{-1}} -\displaystyle \frac{z^{-2}}{1-z^{-2}}\right) .\) Thus,

$$ x=\lim _{N\rightarrow \infty }X_N=\lim _{N\rightarrow \infty } \frac{\mathcal{N}}{\mathcal{D}}= \frac{\displaystyle \frac{1}{z-1}\cdot \left( \displaystyle \frac{z^{-1}}{1-z^{-1}}-\displaystyle \frac{z^{-2}}{1-z^{-2}}\right) }{\displaystyle \frac{z^{-2}}{1-z^{-2}}}= $$
$$ \displaystyle \frac{1}{z-1}\cdot \left( \displaystyle \frac{z^{-1}}{1-z^{-1}} -\displaystyle \frac{z^{-2}}{1-z^{-2}}\right) \cdot \displaystyle \frac{1-z^{-2}}{z^{-2}}. $$

Here,

$$ \displaystyle \frac{1}{z-1}= \displaystyle \frac{1}{\displaystyle \frac{1}{z^{-1}}-1}= \frac{z^{-1}}{1-z^{-1}}.$$

Therefore,

$$ x=\frac{z^{-1}}{1-z^{-1}}\cdot \left( \displaystyle \frac{z^{-1}}{1-z^{-1}} -\displaystyle \frac{z^{-2}}{1-z^{-2}}\right) \cdot \displaystyle \frac{1-z^{-2}}{z^{-2}}. $$

Combining the two fractions in parentheses, we get

$$ \displaystyle \frac{z^{-1}}{1-z^{-1}} -\displaystyle \frac{z^{-2}}{1-z^{-2}}=\frac{z^{-1}\cdot (1+z^{-1})-z^{-2}}{1-z^{-2}}=\frac{z^{-1}}{1-z^{-2}}.$$

Thus,

$$x=\frac{z^{-1}}{1-z^{-1}}\cdot \frac{z^{-1}}{1-z^{-2}}\cdot \displaystyle \frac{1-z^{-2}}{z^{-2}}.$$

The product of the terms \(z^{-1}\) and \(z^{-1}\) in the numerator cancels the term \(z^{-2}\) in the denominator, and the terms \(1-z^{-2}\) in the numerator and in the denominator cancel as well. Thus, we get \(x=\displaystyle \frac{1}{1-z^{-1}}\).

For example, for \(z=2\), we get \(x=1+2+4+\cdots =\displaystyle \frac{1}{1-1/2}=2\).
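
A quick numerical check of this computation (our addition, assuming NumPy): for \(z=2\), the weighted averages \(X_N\) from formula (6) indeed approach \(\displaystyle \frac{1}{1-z^{-1}}=2\), even though the partial sums \(s_k\) themselves diverge:

```python
# Numerical check (assumes NumPy): applying the regularizing formula (6)
# to the divergent geometric series 1 + z + z^2 + ... for z = 2.
import numpy as np

z = 2.0
N = 60
k = np.arange(0, N + 2)
s = (z**(k + 1) - 1)/(z - 1)           # partial sums s_k = 1 + z + ... + z^k

w = (s[1:N+2] - s[0:N+1])**(-2)        # weights (s_{k+1} - s_k)^{-2} = z^{-2(k+1)}
X_N = np.sum(s[0:N+1]*w) / np.sum(w)
print(X_N)                             # ~2.0 = 1/(1 - 1/z)
```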

Copyright information

© 2015 Springer International Publishing Switzerland

Cite this chapter

Kosheleva, O., Reiser, R., Kreinovich, V. (2015). Formalizing the Informal, Precisiating the Imprecise: How Fuzzy Logic Can Help Mathematicians and Physicists by Formalizing Their Intuitive Ideas. In: Seising, R., Trillas, E., Kacprzyk, J. (eds) Towards the Future of Fuzzy Logic. Studies in Fuzziness and Soft Computing, vol 325. Springer, Cham. https://doi.org/10.1007/978-3-319-18750-1_14

  • DOI: https://doi.org/10.1007/978-3-319-18750-1_14

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-319-18749-5

  • Online ISBN: 978-3-319-18750-1
