Trading Multifractal Markets

Part of the book series: SpringerBriefs in Finance (BRIEFSFINANCE)

Abstract

Each market phase has its trading opportunities. A dynamic management approach for trading in multifractal financial markets is introduced in this chapter to allow us to profit from a market’s characteristics. An offensive approach is presented, based on the notion of diversification at the strategy level between directional and volatility strategies, and on a macro-design approach. Tools such as cyclical and psychological analysis, fundamental convergent analysis, and the estimation of risks allow us to evaluate market biases in order to establish an accurate estimate of the prevailing state of the system and the risk toward which it is heading. Once a market’s characteristics are grasped, risk forecasting models can be enhanced. Models can be built on the basis of multifractal markets but are not limited to using only fractal tools such as, for example, the Hurst exponent. In fact, fractal thinking allows us to discern the most appropriate way of developing models. Be it technical analysis, behavioral finance, cycle analysis, power laws, thermodynamics, or econophysics, all of these are useful as long as we know how to implement them in our models while remaining aware of their limits. A strategic investment decision must be based not only on the best information available, but also on the possibility of error in the systems of calculation and in the development of management strategies. The art of successful tail risk management lies in the ability to hedge against sudden market drifts or any specific micro market risk, as well as against long periods of low volatility, and to “time” the volatility so as to profit from its clustering behavior without having to rely on seismic events for gains. That said, total hedging is not possible in absolute terms, and even when it is, it does not work at all times. It is important to think in terms of affordable risks before thinking of potential gains. The chapter includes a discussion of recent developments in the various techniques for forecasting risk, highlighting their advantages, applications, and limitations.


Notes

  1.

    The standard portfolio management types are: conservative, balanced, growth-oriented/dynamic, and aggressive.

  2.

    Appendix B Equities valuation: FCV.

  3.

    The sensitivity of a position to the variation of a risk factor measures the impact of that risk factor. Here we list some capital asset pricing model (CAPM) derivatives whose relevance is very limited and which should be used cautiously when assessing risk. The beta (β) is the sensitivity coefficient of a stock with respect to its reference index. In mathematical terms, β is the slope of the line resulting from the linear regression of the stock's price variation on that of the market over a given period of time: \( dV = \alpha + \beta\,dI + \varepsilon,\quad \beta = dV/dI, \) where ε is the specific risk and β the systemic risk. As for bonds, duration measures the sensitivity of a bond's market price to yield variations. Convexity, on the other hand, measures the sensitivity of the duration of a bond to changes in yield. We also have to assess the sensitivity of options to various factors: Delta measures an option's sensitivity to changes in the price of the underlying asset; Gamma measures the Delta's sensitivity to changes in the price of the underlying asset; Vega measures an option's sensitivity to changes in the volatility of the underlying asset; Theta measures an option's sensitivity to time decay; and Rho measures an option's sensitivity to changes in the risk-free interest rate.
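    As an illustration of the beta just described, the minimal Python sketch below estimates β by regressing a stock's variations on those of its reference index; the return series and figures are hypothetical, not data from the chapter.

        import numpy as np

        # Hypothetical daily variations for a stock (dV) and its reference index (dI).
        stock_returns = np.array([0.012, -0.004, 0.008, -0.015, 0.010, 0.003])
        index_returns = np.array([0.009, -0.002, 0.005, -0.011, 0.007, 0.002])

        # beta = cov(dV, dI) / var(dI); alpha is the regression intercept.
        beta = np.cov(stock_returns, index_returns, ddof=1)[0, 1] / np.var(index_returns, ddof=1)
        alpha = stock_returns.mean() - beta * index_returns.mean()

        # epsilon: the residual (specific) risk left once the index effect is removed.
        residuals = stock_returns - (alpha + beta * index_returns)
        print(f"beta={beta:.2f}, alpha={alpha:.4f}, specific risk={residuals.std(ddof=1):.4f}")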

  4.

    Martin Neil, Professor of Computer Science and Statistics, and Norman Fenton, Professor of Risk and Information Management, School of Electronic Engineering and Computer Science, Queen Mary University of London. Their book "Risk Assessment and Decision Analysis with Bayesian Networks" (Taylor and Francis Group, 2012) is much more of a "how-to" guide using existing Bayesian technology.

  5.

    The Financial Services Authority (FSA) introduced the reverse stress test in December 2008. An underlying aim of this requirement is to ensure that a firm could survive long enough after risks have materialized either to restructure the business or to transfer it.

  6.

    See Harrington, Weiss and Bhaktavatsalam (2010) for more details.

  7.

    See Jones (2009) for more details.

  8.

    According to Smith (2009), to compute the probability of a specific event, a predictive distribution may be much more meaningful than a posterior or likelihood-based interval for some parameter. Bayesian methods are used as a device for taking account of model uncertainty in extreme risk calculations. Only a Bayesian approach adequately provides an operational solution to the problem of calculating predictive distributions, rather than merely making inferences about unknown parameters, in the presence of those unknown parameters (Smith 1998).

  9.

    In 2011, for instance, factors such as the aging population and the ever-changing opportunities and innovations in health care technology influenced the identification of promising health care stocks.

  10.

    The price of equity is a function of the discounted forecast revenues and the yields corresponding to the investor’s investment horizon

    $$ V_0 = \sum_{i=1}^{n} \frac{D_i}{(1+t)^i} + \frac{V_n}{(1+t)^n} $$

    where V0 is the value of the share on the starting date;

    Di is dividend to be received in year i, with i varying from 1 to n;

    Vn is the expected value of the share in year n; and t is the rate of discount.

    An investor decides to buy a stock based on whether the market price is less than or equal to V0.
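    A minimal Python sketch of this valuation formula, with hypothetical dividends, terminal value, and discount rate:

        def dividend_discount_value(dividends, terminal_value, discount_rate):
            """V0 = sum of D_i / (1 + t)**i for i = 1..n, plus V_n / (1 + t)**n."""
            n = len(dividends)
            pv_dividends = sum(d / (1 + discount_rate) ** i
                               for i, d in enumerate(dividends, start=1))
            pv_terminal = terminal_value / (1 + discount_rate) ** n
            return pv_dividends + pv_terminal

        # Hypothetical inputs: three years of forecast dividends, an expected
        # resale price of 60 in year 3, and a 9% discount rate.
        v0 = dividend_discount_value([2.0, 2.2, 2.4], terminal_value=60.0, discount_rate=0.09)
        print(f"V0 = {v0:.2f}")  # buy if the market price is at or below V0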

  11.

    The adjusted price earnings ratio (aPER): The most common method for measuring the difference between the fundamental and market values is to calculate the fundamental PER and to compare it to the equity market PER. Another method involves comparing the equity market PER to that of its peer group. In using either of these methods, note that the ratios are influenced by: (1) the phase of the economic cycle (it is, for example, completely normal for the PER to increase in a growth environment); (2) the quality of management, the position of the company in its industry, and its long-term potential; (3) interest rates; and (4) speculation or rumors surrounding the company, which affect its price premium.

    The self-financing ratio: The PER for some sectors, including the media, pharmaceutical, and technology sectors, cannot be assessed accurately because companies in these sectors invest heavily in research and development and have significant financial needs. As such, it is preferable to base our analysis on the self-financing ratio.

    Dividends: Companies that derive most of their yield from dividends are more sensitive to a change in interest rates than others. In order to keep attracting investors, these well-established and mature companies offer high dividends to compete with certificates of deposit. This method, however, is criticized because it can sometimes be difficult to identify a specific sector of activity for a given company. The peer group is not always easy to establish and, if improperly identified, can lead to false market conclusions. Difficulties also arise from the weakness of all statistical and historical approaches and from the fact that the ratios are static. Take, for example, the crash of 1990. In How Can You Tell a Bear Market Is Over by Birinyi Associates, several indicators, including the PER, dividend yields, and short- and long-term treasury bonds, were analyzed in order to identify those which could have detected the floor of the S&P 500 crash of October 1990. The three-month treasury bonds signaled a purchase on 10 May 1991. The long-term treasury bonds gave the same signal in March 1993. Neither the PER nor the dividend yields gave such signals regarding the end of the 1990 bear market.

  12.

    This is, for financial markets, the equivalent of the Richter scale in geology. The list of scaling laws is on page 11 of the Dupuis and Olsen paper.

  13.

    These are either local minima where the share price falls before starting to rise again (also known as an “uptrend”) or local maxima where the price peaks before falling (a “downtrend”).

  14.

    We analyze the 531 most traded stocks in U.S. markets during the two-year period 2001–2002, at a one-minute time resolution.

Author information


Correspondence to Yasmine Hayek Kobeissi.

Appendices

Appendix B: Equities Valuation (Hayek 2010)

The valuation of equities revolves around two key elements: the financial soundness of the company and its market price. Our aim in this section is to identify stocks with the greatest promise in terms of revenue, earnings, and cash flow, as well as a positive market predisposition.Footnote 9 To do so, we study the financial strength of the company and rate its credit risk profile.

5.1.1 Financial Strength and Credit Rating

The credit rating process involves quantitative and qualitative analysis of a company’s balance sheet, operating performance, and business profile. This information is gathered from official company data and meetings with the company’s management team. There are a number of ratios to choose from for assessing credit risk, and it is up to the analyst to identify the more pertinent ratios for a company, its operations, and the industry in which it competes. It is important to look into the company’s overall standing, including its financial past and present situation, to estimate its future outlook. Some of the most widely used ratios for assessing a company’s strength are earnings per share, the flow ratio, and the cash burn rate. These ratios assess how a company uses its available cash, which provides insight into the life expectancy of firms. However, other methods, such as evaluating the credibility of the management, measuring the number of projects in development, and examining the quality of the technological platforms, are more appropriate. It is also essential to pay adequate attention to the results of products in their first and second phases of development, including their efficiency testing.
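As a rough illustration of how such cash-use measures speak to a firm's life expectancy, the short Python sketch below computes an average cash burn rate and the implied runway from hypothetical quarter-end cash balances; the figures are assumptions, not data from the chapter.

    def burn_rate_and_runway(cash_balances):
        """Average quarterly cash burn and the implied runway in quarters."""
        burns = [prev - curr for prev, curr in zip(cash_balances, cash_balances[1:])]
        avg_burn = sum(burns) / len(burns)
        if avg_burn <= 0:  # cash is growing, so there is no finite runway
            return avg_burn, float("inf")
        return avg_burn, cash_balances[-1] / avg_burn

    # Hypothetical quarter-end cash balances (in millions).
    avg_burn, runway = burn_rate_and_runway([120.0, 104.0, 91.0, 79.0])
    print(f"average burn: {avg_burn:.1f}m per quarter, runway: {runway:.1f} quarters")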

Once the company’s financial solidity is assessed, its fundamental value is compared to its market value in order to measure the value gap.

5.1.2 The Fundamental Convergent Price

The fundamental convergent price (FCV) introduced here is the equity value adjusted for market opinion. It is a forward-looking valuation, hence the notion of convergence of the fundamental value (FV) toward the market price and vice versa.

The standard formulaFootnote 10 for equity valuation assumes that the following factors are known with certainty: the sale price of the share in n years; future dividends; and exact discount rates. In reality, however, there is always a difference between FV and the market price, because of market disequilibrium.

Valuing a company on its own would be useless if we did not add its market premium or discount, since it is through the market that we execute our trades. Our goal is to calculate a value according to our own valuation while discerning the market’s perception. Consequently, most of the distortions and obstructions to the establishment of this value can be eliminated so that the FCV can be estimated. As Keynes (1936) states:

It is not a case of choosing those [faces] that, to the best of one’s judgment, are really the prettiest, nor even those that average opinion genuinely thinks the prettiest. We have reached the third degree where we devote our intelligences to anticipating what average opinion expects the average opinion to be. And there are some, I believe, who practice the fourth, fifth and higher degrees. (p. 140)

People price shares not based on what they think their fundamental value is, but rather on what they think everyone else thinks their value or predicted average value is. The FCV is calculated as:

$$ \text{FCV} = \text{FV} + \text{MPD} $$

where FV is adjusted and evaluated according to our own judgment; and

MPD is the evaluation of the market premium or discount, based on our estimates concerning market judgment and forecasts.

The FCV takes into account the phenomenon of convergence and the distortion between the fundamental and market values. At one point in time, we may face two scenarios: either the price approaches the value or the value is modified until it justifies the price. Any of these scenarios can be self-validating. Tvede (2002) describes this process as follows:

If a stock rises, the company’s credit rating improves, as well as its opportunities to finance activities with loans or by issuing new stock… Price fluctuations thus have impact on true value… (p. 7).

5.1.3 The Fundamental Value

In assessing FV, it is necessary to determine a margin of fluctuation for the price, depending on the discount rate, the valuation methods used, and the growth rate of the forecasted price. The discount rate is the minimum acceptable rate of return, or “hurdle” rate. It is composed of two elements: the risk-free rate and the risk premium. These reflect both objective elements, which are proportional to the volatility of the share, and subjective elements, which depend on the sector, the quality of the company’s management, and the company’s competitive advantage. As it is impossible to forecast future dividends from one year to another in the long term, we introduce a second method by calculating V0 as follows:

$$ V_0 = \text{NP} \times \text{PER}(\text{sector}) $$

where NP refers to net profit; and PER (sector) refers to the price earnings ratio of the sector.

Accordingly, V0 reflects FV of the equity based on the company’s net profit realized or forecast according to the rate of growth (g) and the market reference measure, PER, for the sector. We use PER in this calculation because it is the measure most widely used by the market, to minimize the difference between calculated and market price. In deciding which valuation method to apply to forecast dividends, we need to look into analysts’ forecasts. Nonetheless, even if all analysts produce the same forecasts, different fundamental valuations for a company can result. This difficulty in arriving at the appropriate valuation method justifies and reinforces the existence of a spread between FV and market price of the equity.
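To make the two relations above concrete, here is a minimal Python sketch combining V0 = NP × PER(sector) with FCV = FV + MPD; the net profit, sector PER, and market premium figures are hypothetical.

    def fundamental_value(net_profit, sector_per):
        """V0 = NP x PER(sector): fundamental value from net profit and the sector PER."""
        return net_profit * sector_per

    def fundamental_convergent_price(fv, market_premium_discount):
        """FCV = FV + MPD: adjust the fundamental value for market opinion."""
        return fv + market_premium_discount

    # Hypothetical inputs: net profit per share of 3.1, a sector PER of 14,
    # and an estimated market premium of +2.5 per share.
    fv = fundamental_value(net_profit=3.1, sector_per=14.0)
    fcv = fundamental_convergent_price(fv, market_premium_discount=2.5)
    print(f"FV = {fv:.2f}, FCV = {fcv:.2f}")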

5.1.4 Market Premium/Discount and Limits of Technical Analysis

The market premium or discount (MPD) calculation is influenced by:

  • The analysis of economic and financial cycles, where we assess the so-called inner forces which alternately inspire and discourage human beings. These forces cause intense emotions, including fear, anxiety, and greed.

  • The technical analysis indicators, in which we examine the behavior that drives decision-making and consequently determines the variations in prices. According to Mandelbrot (2008), it is important to keep in perspective the importance given to these indicators; they are a means to an end:

    The short term has essentially been given up to the black arts of the traders, who most of the time make bets on the bets that others are making. Investors love to find patterns and statistical mirages where none exist. (p. 4).

Technical analysis is valuable when approached in the following way: if part of one’s knowledge, hopes, forecasts, revenues, anxieties, and doubts is integrated into the price, then the price chart of a stock also contains part of the existing information with respect to this stock. The strength of this hypothesis lies in its ability to recognize a trend, reveal the characteristics of that trend, and determine its persistence or reversal. Technical analysis is not only another way to interpret the market; it also sometimes influences market psychology and behavior. In reality, market trends are self-fulfilling in the sense that:

…Even if a pattern is entirely coincidental, it may start generating valid signals if enough dealers use it (Tvede 2002, p. 49)

Nonetheless, it is important to be aware that this type of analysis is self-destructive. Technical analysts recognize long-term dependency, where events of the past are reflected in today’s prices. However, we need to remember that changes in initial conditions lead to different results; we cannot copy and paste events and outcomes based on historical behaviors. The key is not to become radical but rather to try to take in all the other indicators. Our mind can play tricks on us, and we end up imagining patterns where there are none. In essence, technical analysts can sometimes be right when trying to guess crowd behavior. We can use their analysis as a tool to trace trade opportunities established by other indicators.

Once the FCV is defined, we can compare it to the market price. The choice of method for avoiding the forecasting trap rests on the choice of financial ratios. In the short and medium term, it is necessary to concentrate on known or predictable ratios and to compare them to the peer industrial group, or to situate them historically, in order to detect a relative under- or overvaluation.Footnote 11

There is no single correct method for market and stock analysis. The breakdown of the price into objective and subjective elements and the instability of the economic and financial systems make the calculation of the market value difficult. We must be conscious of the difficulty of this exercise, approach the FCV by situating it within a margin, and use forecasts to regularly monitor and manage the portfolio.

Given the forecasting dilemma, it is better to forego long-term forecasts and to use short- and medium-term forecasts in estimating the FCV. The gap between the equity market price and the fundamental value creates uncertainty in the financial environment and makes all portfolios vulnerable to risk factors. Bear in mind that, while it is important to find out whether a share is correctly valued, it is critical to know how this valuation will be modified over time with the movement of the economic and financial cycles (the strange attractors). In this sense, Bernd Engelmann and Daniel Porath’s article “Do not Forget the Economy when Estimating Default Probabilities” (Engelmann and Porath 2012) is quite interesting, as it introduces techniques to integrate macroeconomic information into a rating model and then illustrates how the macroeconomic variables improve the performance of a model for small- and medium-sized companies.

Rating systems without macroeconomic information run the risk of yielding imprecise probability default (PD) estimations….Modular approaches, like e.g. the Bayes approach generally are more flexible. In principle the Bayes approach can be applied to any rating system, even to pure expert ratings and incorporate forward looking components.

Appendix C: In the Lab

The research and most intriguing in-the-lab findings that we came across are presented hereunder.

Predicting Risk/Return Performance Using Upper Partial Moment/Lower Partial Moment Metrics (Viole and Nawrocki 2011d)

In their paper, Viole and Nawrocki develop a better explanatory/predictive measure that takes lower as well as higher moments into account. Below is a summary of their rationale.

  • Below target analysis alone is akin to only hiring a defensive coordinator.

  • Diversification is the panacea of risk management techniques for reducing the non-systemic risk of an individual position. The preferred method to reduce non-systemic risk is to add investments with the greatest historical marginal non-systemic risk net of systemic risk. This has been quantified by subtracting the systemic benchmark from the investment, as in the equation below.

    $$ \underbrace{\left(\frac{UPM(q,l,x)}{LPM(n,h,x)}\right)}_{\text{Non-systemic}} \;-\; \underbrace{\left(\frac{UPM(q,l,y)}{LPM(n,h,y)}\right)}_{\text{Systemic}} $$
  • This ratio answers the question, when comparing and ranking multiple investments simultaneously: Which investment historically goes up more than the market when the market goes up and historically loses less when the market loses? And should I even be invested in this asset class?

  • Systemic risk. Autocorrelation/dependence/serial correlation (ρ(x)) is an important tool for identifying increasing distributional risks, such as muted entropic environments, and for lending a predictive ability to an explanatory metric. The autocorrelation formula for a one-period lag, for investment x at time t, is:

    $$ |\rho(x)| = |\mathrm{cov}(x_t, x_{t-1})| $$
  • The absolute value is used because an autocorrelation of −1 or 1 is equally dangerous to investors. A one-period lag is used because we aim to err on the side of caution: where there’s smoke, there’s fire. If a ten-period lag presents autocorrelation, it will obviously be noticed in the one period prior. The risk is that, between lag differences, a bifurcation abruptly ceases, thus leaving the investor waiting for a confirmation to avoid the very event that has just transpired, effectively rendering this metric explanatory.

    $$ \underbrace{\left(\frac{UPM(q,l,x)}{LPM(n,h,x)}\right)}_{\text{Explanatory}} \;-\; |\rho(x)|\,\underbrace{\left(\frac{UPM(q,l,x)}{LPM(n,h,x)}\right)}_{\text{Predictive}} $$
  • An observed autocorrelation reading of one denotes a dubious situation. The increased autocorrelation influence can be subtracted from itself to compensate for an increased likelihood of an unstable investment, thus lowering the metric to reflect this probable risk. The trick is to know when to exit prior to the bubble; ρ(x) is our predictive metric using the autocorrelation coefficient. It is not intended to pick an inflection point (since one never knows the top until after one has seen it), but the translation of its deltas onto your position will properly manage anticipated risks. Replacing the investment with a senior-ranked investment is then a viable interpretation of the data.

Our empirical test generates rank correlations between the performance measures and asset returns for both an explanatory period and an out-of-sample predictive period. On balance, we were able to generate minimal explanatory correlations with no out-of-sample predictive correlations using the explanatory model and no explanatory correlations with significant out-of-sample correlations when using the predictive model. Our use of statistical certainty ρ(x) as a punitive variable in quantifying risk humbles the methodology in so far as admitting we are beholden to an uncertain future per Frank Knight. This is an important point as we use certainty as a punitive variable and distinguish between explanatory and predictive functions for the metrics; it has always been just explanatory in nature. Coupled with the degrees in the UPM/LPM measure it is the most behavioral statistic of behavioral finance we have come across.
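To make the UPM/LPM construction above concrete, the following minimal Python sketch computes the explanatory UPM/LPM ratio and the autocorrelation-penalized predictive metric. The degrees, targets, and return series are assumptions chosen for illustration, a normalized lag-one correlation stands in for ρ(x), and this is a sketch rather than Viole and Nawrocki's own code.

    import numpy as np

    def upm(q, target, x):
        """Upper partial moment of degree q above the target."""
        return np.mean(np.maximum(x - target, 0.0) ** q)

    def lpm(n, target, x):
        """Lower partial moment of degree n below the target."""
        return np.mean(np.maximum(target - x, 0.0) ** n)

    def predictive_metric(x, q=1, n=1, l=0.0, h=0.0):
        """Explanatory UPM/LPM ratio penalized by |rho(x)| at a one-period lag."""
        ratio = upm(q, l, x) / lpm(n, h, x)
        rho = abs(np.corrcoef(x[1:], x[:-1])[0, 1])  # |lag-1 autocorrelation|
        return ratio - rho * ratio

    # Hypothetical return series for a single asset.
    x = np.array([0.01, -0.02, 0.015, 0.004, -0.007, 0.011, -0.003, 0.006])
    print(f"predictive metric: {predictive_metric(x):.3f}")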

5.2.1 Do Bayesian Methods Help Model Uncertainty?

As per discussion with Professor Neil, the Bayesian approach uses the prior and likelihood distributions to produce the posterior distribution for parameters of interest. The posterior predictive distribution is the posterior probability of the event/proposition based on the parameters. So, from a purely mechanical perspective it is adaptive to real data and can respond non-monotonically, if necessary, by selecting hypotheses that are better supported by empirical events. These properties are fundamental to “common sense” reasoning i.e. the model can change “its mind”. It favors predictions that are most often correct and admits to uncertainty (by virtue of the fact that everything is a distribution of belief).
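As a minimal, generic illustration of the posterior predictive idea (a Beta-Binomial toy model, not the Bayesian-network machinery discussed here), consider the sketch below; the prior and the observed counts are hypothetical.

    # Toy Beta-Binomial model of the probability of a "loss day".
    # Prior Beta(a, b); after observing k losses in n days, the posterior is
    # Beta(a + k, b + n - k), and the posterior predictive probability that
    # the next day is a loss is the posterior mean (a + k) / (a + b + n).
    def posterior_predictive_loss(a, b, losses, days):
        a_post, b_post = a + losses, b + (days - losses)
        return a_post / (a_post + b_post)

    # Hypothetical: a weakly informative Beta(2, 2) prior and 9 losses in 60 days.
    print(f"P(next day is a loss) = {posterior_predictive_loss(2, 2, 9, 60):.3f}")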

For extreme risk calculations, I would say that one might need to mix imagination, history, data, and insight to produce a causal explanatory model. For this, pure Bayesian statistics is not enough (i.e., just using Bayesian parameter updating as an alternative to maximum likelihood won’t cut it). That is why there is a lot of interest in using Bayesian networks to model the causality structure that might better predict extremes and norms, as mixture models of market epochs/states. Data then help identify which hypothetical state/epoch the market is in, and the beauty of the Bayesian approach is that this data can take any form (expert opinion, market data, harbingers, etc.).

Doing this is computationally hard, especially if you are aiming at an alternative to CAPM.

Any model should strive to be explanatory as well as prognostic, hence our stress on causal attribution and the explicit role of systematic causal factors in the models. What we really need is a way of building into our models discrete measures (“in control”, “out of control”, etc.) to track, monitor, and signal predicted risks. The research work of Professors Neil and Fenton fuses causal modeling, copulas, and portfolio aggregation into a single Bayesian framework; early results are promising, but there is still a lot to be done. Portfolio rebalancing starts by setting up the proper portfolio control tools and periodic portfolio follow-up procedures at the outset.

5.2.2 Scale of Market Quakes in High Frequency Data

Dupuis and Olsen (2011) propose a different way to analyze high-frequency data: an approach in which the time series is dissected based on market events where the direction of the trend changes from up to down or vice versa. Physical time is replaced by intrinsic time, which ticks at every occurrence of a directional change in price, so that each directional change represents a new intrinsic time unit. The scale of market quakes (SMQ) defines a tick-by-tick metric to quantify market evolution on a continuous basis.
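A minimal sketch of the directional-change dissection on which intrinsic time is built: a new intrinsic-time event is recorded whenever the price reverses by more than a chosen threshold from its running extreme. The threshold and the price path below are hypothetical, and this is an illustrative sketch rather than Olsen Ltd.'s implementation.

    def directional_change_events(prices, threshold=0.005):
        """Indices where the price reverses by `threshold` (a relative move)
        from its running extreme, i.e. the ticks of intrinsic time."""
        events, extreme, mode = [], prices[0], "up"  # start by assuming an up run
        for i, p in enumerate(prices[1:], start=1):
            if mode == "up":
                extreme = max(extreme, p)
                if p <= extreme * (1 - threshold):  # reversal down
                    events.append(i)
                    mode, extreme = "down", p
            else:
                extreme = min(extreme, p)
                if p >= extreme * (1 + threshold):  # reversal up
                    events.append(i)
                    mode, extreme = "up", p
        return events

    # Hypothetical tick prices with a 0.5% directional-change threshold.
    ticks = [100.0, 100.2, 100.6, 100.1, 99.9, 100.0, 100.5, 100.9, 100.3]
    print(directional_change_events(ticks))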

For the stable distributions used in this framework, the logarithm of the characteristic function is

$$ \log f(t) = i\,\delta\,t - \gamma\,|t|^{\alpha}\left(1 + i\,\beta\,\frac{t}{|t|}\,\tan\frac{\alpha\pi}{2}\right) $$

For \( 1 < \alpha < 2 \), δ (the location parameter) is defined as the mean, and the variance is infinite.

Estimators that involve only first powers of the stable variable and have finite expectation, such as fractile ranges and the absolute mean deviation, are more appropriate measures of variability for these distributions than the variance.

In the Olsen Ltd. model, the fractile ranges are estimated in intrinsic time λ, and the absolute mean deviation ϖ(δi) is the quantile average of the scale of market quakes (SMQ). Since α and β remain constant under addition, the means δi and the scale parameters γi can be identified (12 scaling lawsFootnote 12 have been discovered by Olsen Ltd.).

…The scaling laws are powerful tools for model building: they are a frame of reference to relate different values to each other… The scale of market quakes is an objective measure of the impact of political and economic events in foreign exchange and used as a support tool for decision makers and commentators in financial markets or as an input for an economic model measuring the impact of fundamental economic events. The scale of market quakes can be used in different ways; decision makers can use the indicator as a tool to filter the significance of market events. The output of the SMQ can be used as an input to forecasting or trading models to identify regime shifts and change the input factors.

Their bet is that imbalances will correct themselves… this is where the model finds its limit: what happens when imbalances do not correct themselves, or take longer to do so?

5.2.3 Market Behavior Near the “Switching Points”

Preis and Stanley (2011) examine concepts from physics to discover whether there are general laws describing market behavior near the “switching points” in the data. To do so, they analyzed massive data sets (transactions recorded every 10 ms on the German DAX futures market, and daily closing prices of stocks in the S&P 500 index in the US) comprising three fluctuating quantities: the price of each transaction, the volume, and the time between each transaction and the next. The aim was to find out whether there are regularities either just before or just after a switching point.Footnote 13 Under the panic hypothesis, their analysis revealed that the volume of each transaction increases dramatically as the end of a trend is reached, while the time interval between transactions drops:

In other words, as prices start to rise or fall, stock is sold more frequently and in larger chunks. Traders become tense and panic because they are scared of missing a trend switch.

5.2.4 Cascade Dynamics of Price Volatility

Alexander M. Petersen, Fengzhong Wang, Shlomo Havlin, and H. Eugene Stanley studied the cascade dynamics of price volatility (Petersen, Wang, Havlin, and Stanley 2010) immediately before and immediately after 219 market shocks.Footnote 14 The results of their paper are of potential interest for traders modeling derivatives (option pricing and volatility trading) on short time scales around expected market shocks, e.g., earnings reports.

We define the time of a market shock Tc to be the time for which the market volatility V(Tc) has a peak that exceeds a predetermined threshold. The cascade of high volatility “aftershocks” triggered by the “main shock” is quantitatively similar to earthquakes and solar flares, which have been described by three empirical laws—the Omori law, the productivity law, and the Bath law. We find quantitative relations between the main shock magnitude M = log10 V(Tc) and the parameters quantifying the decay of volatility aftershocks as well as the volatility preshocks….Information that could be used in hedging, since we observe a crossover in the cascade dynamics for M 0.5. Knowledge of the Omori response dynamics provides a time window over which aftershocks can be expected. Similarly, the productivity law provides a more quantitative value for the number of aftershocks to expect. Finally, the Bath law provides conditional expectation of the largest aftershock and even the largest preshock, given the size of the main shock. Of particular importance, from the inequality of the productivity law scaling exponents and the pdf scaling exponent for price volatility, we find that the role of small fluctuations is larger than the role of extremely large fluctuations in accounting for the prevalence of aftershocks.
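As a rough illustration of the Omori-type decay mentioned in the excerpt, the sketch below models the rate of volatility aftershocks as a power-law decay in the time elapsed since the main shock; the exponent and constants are hypothetical, not the values estimated by Petersen et al.

    def omori_rate(t_since_shock, k=50.0, tau=1.0, p=0.8):
        """Omori-style aftershock rate: n(t) = k / (t + tau) ** p."""
        return k / (t_since_shock + tau) ** p

    def expected_aftershocks(horizon_minutes, **kw):
        """Crude expected aftershock count: sum the rate minute by minute."""
        return sum(omori_rate(t, **kw) for t in range(1, horizon_minutes + 1))

    # Hypothetical parameters: expected counts over the first 10 and 60 minutes.
    print(f"first 10 min: {expected_aftershocks(10):.0f} aftershocks")
    print(f"first 60 min: {expected_aftershocks(60):.0f} aftershocks")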


Copyright information

© 2013 The Author(s)

About this chapter

Cite this chapter

Hayek Kobeissi, Y. (2013). Trading Multifractal Markets. In: Multifractal Financial Markets. SpringerBriefs in Finance. Springer, New York, NY. https://doi.org/10.1007/978-1-4614-4490-9_5
