Measuring short-term risk of initial public offering of equity securities: a hybrid Bayesian and Data-Envelopment-Analysis-based approach

Abstract

This paper offers a methodology to estimate an unconditional probability density function (PDF) for the stock price of an initial public offering (IPO) at a short-term post-IPO horizon. The resulting PDF is unique to the IPO of interest (IPOI) and serves to model the short-term post-market uncertainty associated with its price. Such a methodology is unprecedented in the IPO risk literature, since the ex ante quantification of the short-term uncertainty associated with the stock price of a newly public firm has been viewed as infeasible owing to the lack of sufficient accounting and market history at the IPO stage. This gap is addressed here by recognizing that most IPO cases share two characteristics, scarce hard data and abundant soft data (strong prior belief), and that Bayesian inference can be combined with Data Envelopment Analysis (DEA) to develop a risk-quantification setting that serves both. In this setting, DEA quantifies the prior belief, which is subsequently updated in the Bayesian phase. To our knowledge, this paper is the first to approach IPO risk analysis from this perspective. It develops an iterative process that uses DEA to construct a multi-dimensional similarity metric for finding the ‘comparables’ to the IPOI, and thereby the closest comparable to it, whereupon Bayesian inference utilizes the information available from these comparables to sequentially update and revise the IPOI’s prior PDF. The validity of the proposed risk methodology is examined by backtesting analyses.

Change history

  • 26 November 2019

    This erratum is published because several typographical errors made during the proofing stage were overlooked.

References

  1. Abdou, K., & Dicle, M. F. (2007). Do risk factors matter in the IPO valuation? Journal of Financial Regulation and Compliance, 15(1), 63–89.

  2. Abidin, S. N. Z., & Jaffar, M. M. (2012). A review on geometric Brownian motion in forecasting the share prices in Bursa Malaysia. World Applied Sciences Journal, 17, 87–93.

  3. Alford, A. W. (1992). The effect of the set of comparable firms on the accuracy of the price-earnings valuation method. Journal of Accounting Research, 30(1), 94–108.

  4. Anadol, B., Paradi, J. C., Simak, P., & Yang, X. (2014). Valuing private companies: A DEA approach. International Journal of Business and Management, 9(12), 16.

  5. Arnold, T., Fishe, R. P. H., & North, D. (2010). The effects of ambiguous information on initial and subsequent IPO returns. Financial Management, 39(4), 1497–1519.

  6. Asquith, D., Jones, J. D., & Kieschnick, R. (1998). Evidence on price stabilization and underpricing in early IPO returns. The Journal of Finance, 53(5), 1759–1773.

  7. Banker, R., Charnes, A., Cooper, W., Swarts, J., & Thomas, D. (1989). An introduction to data envelopment analysis with some of its models and their uses. Research in Governmental and Nonprofit Accounting, 5, 125–163.

  8. Baron, D. P. (1982). A model of the demand for investment banking advising and distribution services for new issues. The Journal of Finance, 37(4), 955–976.

  9. Baron, D. P., & Holmstrom, B. (1980). The investment banking contract for new issues under asymmetric information: Delegation and the incentive problem. The Journal of Finance, 35(5), 1115–1138.

  10. Beatty, R. P., & Ritter, J. R. (1986). Investment banking, reputation, and the underpricing of initial public offerings. Journal of Financial Economics, 15(1–2), 213–232.

  11. Berger, J. (2006). The case for objective Bayesian analysis. Bayesian Analysis, 1(3), 385–402.

  12. Bhojraj, S., & Lee, C. M. C. (2002). Who is my peer? A valuation-based approach to the selection of comparable firms. Journal of Accounting Research, 40(2), 407–439.

  13. Boatsman, J. R., & Baskin, E. F. (1981). Asset valuation with incomplete markets. The Accounting Review, 56(1), 38–53.

  14. Charnes, A., Cooper, W. W., Lewin, A. Y., Morey, R. C., & Rousseau, J. (1984). Sensitivity and stability analysis in DEA. Annals of Operations Research, 2(1), 139–156.

  15. Chernick, M. R. (2008). Bootstrap methods: A guide for practitioners and researchers. Hoboken: Wiley.

  16. Cooper, W. W., Seiford, L. M., & Tone, K. (2007). Data envelopment analysis: A comprehensive text with models, applications, references and DEA-Solver software. Berlin: Springer.

  17. De Sousa, M., & Stosic, B. (2005). Technical efficiency of the Brazilian municipalities: Correcting nonparametric frontier measurements for outliers. Journal of Productivity Analysis, 24(2), 157–181.

  18. Draho, J. (2004). The IPO decision: Why and how companies go public. Cheltenham: Edward Elgar Publishing Limited.

  19. Dyson, R. G., Allen, R., Camanho, A. S., Podinovski, V. V., Sarrico, C. S., & Shale, E. A. (2001). Pitfalls and protocols in DEA. European Journal of Operational Research, 132(2), 245–259.

  20. Emrouznejad, A., Anouze, A. L., & Thanassoulis, E. (2010). A semi-oriented radial measure for measuring the efficiency of decision making units with negative data, using DEA. European Journal of Operational Research, 200(1), 297–304.

  21. Emrouznejad, A., Parker, B. R., & Tavares, G. (2008). Evaluation of research in efficiency and productivity: A survey and analysis of the first 30 years of scholarly literature in DEA. Socio-Economic Planning Sciences, 42(3), 151–157.

  22. Emrouznejad, A., & Podinovski, V. (2004). Data envelopment analysis and performance management. Birmingham: Aston Business School, Aston University.

  23. Gibbons, J. D., & Chakraborti, S. (2005). Nonparametric statistical inference. Abingdon: Taylor & Francis e-Library.

  24. Hardle, W. K., & Simar, L. (2012). Applied multivariate statistical analysis. Berlin: Springer.

  25. Houston, J., James, C., & Karceski, J. (2006). What a difference a month makes: Stock analyst valuations following initial public offerings. Journal of Financial and Quantitative Analysis, 41(1), 111–137.

  26. Hu, Y. (2018). Short-horizon market efficiency, order imbalance, and speculative trading: Evidence from the Chinese stock market. Annals of Operations Research, 281(1–2), 253–274.

  27. Ibbotson, R. G., & Ritter, J. R. (1995). Initial public offerings. In North-Holland handbooks of operations research and management science: Finance (Vol. 9, pp. 993–1016). Elsevier.

  28. Jain, B. A., & Nag, B. N. (1998). A neural network model to predict long-run operating performance of new ventures. Annals of Operations Research, 78, 83–110.

  29. Kay, S. (1993). Fundamentals of statistical signal processing, Volume I: Estimation theory. Upper Saddle River: Prentice Hall.

  30. Kim, M., & Ritter, J. R. (1999). Valuing IPOs. Journal of Financial Economics, 53(3), 409–437.

  31. McGuinness, P. (1993). Investor- and issuer-related perspectives of IPO underpricing. Omega, 21(3), 377–392.

  32. Ohlson, J. A. (1995). Earnings, book values, and dividends in equity valuation. Contemporary Accounting Research, 11(2), 661–687.

  33. Paradi, J. C., Sherman, H. D., & Tam, F. K. (2018). Securities market applications: Risk measurement of IPOs. In C. C. Price, J. Zhu, & F. S. Hillier (Eds.), Data envelopment analysis in the financial services industry: A guide for practitioners and analysts working in operations research using DEA (pp. 187–206). Springer.

  34. Prem, K. P., Ng, D., Pasman, H. J., Sawyer, M., Guo, Y., & Mannan, M. S. (2010). Risk measures constituting a risk metrics which enables improved decision making: Value-at-risk. Journal of Loss Prevention in the Process Industries, 23(2), 211–219.

  35. Schultz, P. H., & Zaman, M. A. (1994). Aftermarket support and underpricing of initial public offerings. Journal of Financial Economics, 35(2), 199–219.

  36. Seidl, I., & Sommersguter-Reichmann, M. (2011). Visualizing production surfaces in 3D diagrams. Advances in Operations Research. https://doi.org/10.1155/2011/424989.

  37. Sharp, J. A., Meng, W., & Liu, W. (2007). A modified slacks-based measure model for data envelopment analysis with ‘natural’ negative outputs and inputs. Journal of the Operational Research Society, 58(12), 1672–1677.

  38. Silva Portela, M. C. A., & Thanassoulis, E. (2001). Decomposing school and school-type efficiency. European Journal of Operational Research, 132(2), 357–373.

  39. Silva Portela, M. C. A., Thanassoulis, E., & Simpson, G. (2004). Negative data in DEA: A directional distance approach applied to bank branches. Journal of the Operational Research Society, 55(10), 1111–1121.

  40. Simak, P. C. (2000). Inverse and negative DEA and their application to credit risk evaluation. Toronto: University of Toronto.

  41. Sorkhi, S. (2015). A hybrid Bayesian and data-envelopment-analysis-based approach to measure the short-term risk of initial public offerings. Ph.D. thesis, University of Toronto.

  42. Teoh, S. H., Wong, T. J., & Rao, G. R. (1998). Are accruals during initial public offerings opportunistic? Review of Accounting Studies, 3(1–2), 175–208.

  43. Thanassoulis, E. (2001). Introduction to the theory and application of data envelopment analysis: A foundation text with integrated software. Boston: Springer.

  44. Thanassoulis, E., Portela, M. C., & Despic, O. (2008). Data envelopment analysis: The mathematical programming approach to efficiency analysis. In H. O. Fried, C. A. Knox Lovell, & S. S. Schmidt (Eds.), The measurement of productive efficiency and productivity change (pp. 251–420). Elsevier.

  45. Titman, S., & Trueman, B. (1986). Information quality and the valuation of new issues. Journal of Accounting and Economics, 8(2), 159–172.

  46. Tone, K. (2002). A slacks-based measure of super-efficiency in data envelopment analysis. European Journal of Operational Research, 143(1), 32–41.

  47. Zhong, H., Liu, C., & Zhong, J. (2018). Which startup to invest in: A personalized portfolio strategy. Annals of Operations Research, 263(1–2), 336–339.

  48. Zhu, J. (2000). Multi-factor performance measure model with an application to Fortune 500 companies. European Journal of Operational Research, 123(1), 105–124.

Acknowledgements

This work was supported by Ontario Graduate Scholarship; Queen Elizabeth II Graduate Scholarships in Science & Technology; and grants to the Center for Management of Technology and Entrepreneurship from the Financial Services Industry.

Author information

Corresponding author

Correspondence to Shabnam Sorkhi.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

The authors would like to note that a summary of the methodology presented in this paper has been published as a book chapter in Paradi et al. (2018): “Data Envelopment Analysis in the Financial Services Industry: A Guide for Practitioners and Analysts Working in Operations Research Using DEA.” The methodology comprises two main phases, referred to as Phase I (Sect. 3.1 in this paper) and Phase II (Sect. 3.2). The book chapter elaborates on Phase I and de-scopes a detailed illustration of Phase II, whose full particulars are given here and which is viewed as the authors’ primary contribution. Regarding the chronological sequence of events, the initial intention was to publish the current paper in advance of the 2018 edition of the book. However, owing to the lengthy review process, the paper was delayed while the book appeared in print much earlier than expected, and the current paper had to be cited as a working paper in the book. The authors aim to update the citation in any future revision of the book.

Appendix: Robustness testing of phase I

As indicated in Sect. 4, the three tests presented here aim to demonstrate the robustness of Phase I and can be considered additional checks on its accuracy. Phase I incorporates certain steps to address specific characteristics of the input data, such as negative data and non-discretionary factors. To this end, we have benefited from important works published by Emrouznejad et al. (2010), Silva Portela et al. (2004) and Sharp et al. (2007). In addition, we needed to account for the cases where the IPO of Interest (IPOI) is itself an efficient DMU. Recall that the set of comparables for an ordinary (inefficient) IPO consists of its efficient peers as well as other inefficient peers that share the same efficient peers. The notion of a slacks-based measure of ‘super-efficiency’ in DEA was utilized to tackle the cases of an efficient IPOI (Tone 2002; Cooper et al. 2007). A detailed description of these steps can be found in Paradi et al. (2018). The robustness testing, which concerns only Phase I, focuses on examining the contribution of these additional steps. Undoubtedly, a DEA model capable of handling efficient IPOIs or negative data is a more inclusive model, able to cover a larger sample of IPOs. It is, therefore, interesting to study and visualize how the addition of such steps impacts the results, and whether the added complexity yields the expected benefits.
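To make the comparables definition above concrete, the following sketch (not the authors' Phase I implementation; the function names and toy structure are ours) uses the degenerate CRS case with a single input and a single output, where DEA efficiency reduces to each DMU's productivity ratio relative to the best performer and no LP solver is needed:

```python
def crs_efficiency(inputs, outputs):
    """CRS efficiency in the one-input/one-output case: each DMU's
    output/input ratio divided by the best ratio in the sample."""
    ratios = [y / x for x, y in zip(inputs, outputs)]
    best = max(ratios)
    return [r / best for r in ratios]

def comparables(efficiency, peers, i, tol=1e-9):
    """Comparables of an inefficient DMU i, per the definition above:
    its efficient peers plus other inefficient DMUs that share those
    peers. `peers[j]` is the set of efficient reference DMUs of DMU j
    (obtained from the optimal DEA weights in the general case).
    An efficient IPOI would instead need the super-efficiency treatment."""
    reference = peers[i]
    comp = set(reference)
    for j, p in enumerate(peers):
        if j != i and efficiency[j] < 1 - tol and (p & reference):
            comp.add(j)
    return comp
```

In the general multi-input/multi-output setting the ratio shortcut no longer applies and the efficiency scores and reference sets come from solving the DEA linear programs, but the comparables logic is unchanged.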

The first test consists of three different “Runs.” Figure 1 visualizes the results. In Run 1, we exclude the two steps of outlier detection and efficient-IPOI treatment, where excluding the latter deprives the model of its capacity to handle efficient IPOIs. In Run 2, the model is enriched by adding the efficient-IPOI treatment capability. In Run 3, outliers are detected as well, using the “Jackstrap” approach proposed by De Sousa and Stosic (2005). The graph makes clear how the incorporation of the two steps of outlier detection and efficient-IPOI treatment smooths out the erratic changes in the number of comparables per IPOI, making the model more robust in selecting comparables. The outcome depicted in the lower pane seems more intuitive: these IPOs are from the same industry, so a similar count of comparables per IPO is more in line with expectations. Furthermore, under Run 1 the number of comparables grows with the sample size. While this observation alone does not provide enough evidence to conclude instability, it raises the question of whether the model is sufficiently sensitive to the differences between the DMUs/firms. Moreover, the increasing trend shown in the first pane of Fig. 1 can be interpreted as indicating that a more recent IPOI is more likely to be linked with a larger group of comparables. This upward trend seems incompatible with practical intuition: given sufficient data per IPOI, one does not expect a randomly selected later IPOI to be associated with more comparables than a randomly selected earlier one that took place a few years before.

Fig. 1

Number of comparables associated with individual IPOIs

A comprehensive theoretical discussion of how the addition of the steps described above improves the model’s soundness is beyond the scope of this paper; nevertheless, through the exemplary graphs shown in Fig. 2 we illustrate some of the underlying theoretical concepts, which add to the sophistication of the model but clearly increase its robustness.

Fig. 2

The data presented in Table 1.5 of Cooper et al. (2007) have been used to construct this graph. Note that the output ‘Inpatients’ has been eliminated, which facilitates a 3D visualization. The figure illustrates how the shape of the frontier changes upon the exclusion of the efficient DMU G. The production possibility set, which is capped by the efficient frontier, spans a smaller space after the removal of DMU G. The segmentation of the frontier changes as well; the number of efficient hyperplanes decreases in this example

In Fig. 2, once the efficient DMU G is excluded, the production possibility set shrinks. Moreover, the inefficient DMUs that were previously associated with either of the two hyperplanes GLJ and DGJ are now jointly enveloped by the new and larger hyperplane DLJ. It is this mathematical property that is primarily responsible for the differences between the first and second panes of Fig. 1. Under Run 1, once an IPOI is identified as an efficient unit, it is excluded from the pool of candidates of any succeeding IPOI. Since the eliminated efficient IPO could have remained an efficient unit had it been preserved in the pool of candidates, its removal can impact the comparables associated with subsequent IPOIs.
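The effect of excluding an efficient DMU can be seen even in a toy single-input/single-output CRS example (the numbers below are hypothetical, not from the paper's data): removing the efficient DMU re-bases the frontier, so the remaining DMUs are enveloped differently and their scores change, mirroring the exclusion of DMU G in Fig. 2.

```python
def eff(inputs, outputs):
    """One-input/one-output CRS efficiency: productivity ratio
    relative to the best performer in the current pool."""
    ratios = [y / x for x, y in zip(inputs, outputs)]
    best = max(ratios)
    return [r / best for r in ratios]

scores_full = eff([2, 4, 3], [2.0, 2.0, 2.7])  # DMU 0 defines the frontier
scores_wo_0 = eff([4, 3], [2.0, 2.7])          # DMU 0 excluded from the pool
```

With DMU 0 removed, DMU 2 becomes efficient and DMU 1's score rises, i.e. both remaining units are now enveloped by a frontier the excluded unit no longer shapes.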

The second test of model robustness examines whether the size of the set of comparables of a randomly picked IPOI changes significantly as its pool of candidates grows. First, the algorithm identifies the set of comparables for a given IPO of interest. Recall that this IPO is no longer the IPO of interest (IPOI) in any of the subsequent iterations but rather a candidate comparable in the pool of candidates of any succeeding IPOI. Nevertheless, we can continue to find and record the number of comparables for it using the same definition applied to IPOIs. Once all the iterations have been executed, a ratio called mean-to-union is calculated for each IPO, which acts as another summary measure of the robustness of the model. The numerator of this ratio is the average size of all the comparable sets identified for a particular IPO across all the iterations. The denominator is the size of the union of all the comparable sets identified for it across all the consecutive iterations. The closer this ratio is to unity, the smaller the variability in the size of the set of comparables selected for the IPO. The ratio equals unity if the set of comparables remains intact for an IPO through all the executions. Ratios greater than 70% were observed for 72%, 66% and 65% of the IPOIs in Run 1, Run 2 and Run 3, respectively.
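The mean-to-union ratio described above can be sketched as follows; `comparable_sets` is a hypothetical record of the comparable set identified for one IPO at each iteration in which it appears in the candidate pool.

```python
def mean_to_union(comparable_sets):
    """Mean size of the per-iteration comparable sets divided by the
    size of their union; 1.0 means the set never changed."""
    mean_size = sum(len(s) for s in comparable_sets) / len(comparable_sets)
    union_size = len(set().union(*comparable_sets))
    return mean_size / union_size

# A perfectly stable IPO scores 1.0; churn in its comparables pulls
# the ratio below 1.0.
stable = mean_to_union([{'A', 'B'}, {'A', 'B'}])  # 2 / 2 = 1.0
churn = mean_to_union([{'A', 'B'}, {'B', 'C'}])   # 2 / 3
```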

In this connection, we note that it is not just the size of the comparables set that is expected to remain stable across the iterations but also the composition of the set. The details of the testing carried out to assess this aspect can be found in Sorkhi (2015). Broadly, the outcome indicates that the composition of the set of comparables for a given IPO either remains intact across iterations or is altered by newly added IPOs rather than by existing ones. Further scrutiny of the data reveals that, compared to the retained former comparables, the excluded former comparables tend to be positioned farther from the IPO under investigation.

About this article

Cite this article

Sorkhi, S., Paradi, J.C. Measuring short-term risk of initial public offering of equity securities: a hybrid Bayesian and Data-Envelopment-Analysis-based approach. Ann Oper Res 288, 733–753 (2020). https://doi.org/10.1007/s10479-019-03439-0

Keywords

  • Data Envelopment Analysis
  • Initial public offerings
  • Bayesian
  • Financial risk
  • Investment decision processes