
Review of Accounting Studies, Volume 21, Issue 4, pp 1327–1360

Analyst information precision and small earnings surprises

  • Sanjay W. Bissessur
  • David Veenman
Open Access Article

Abstract

This study proposes and tests an alternative to the extant earnings management explanation for zero and small positive earnings surprises (i.e., analyst forecast errors). We argue that analysts’ ability to strategically induce slight pessimism in earnings forecasts varies with the precision of their information. Accordingly, we predict that the probability that a firm reports a small positive instead of a small negative earnings surprise is negatively related to earnings forecast uncertainty, and we present evidence consistent with this prediction. Our findings have important implications for the earnings management interpretation of the asymmetry around zero in the frequency distribution of earnings surprises. We demonstrate how empirically controlling for earnings forecast uncertainty can materially change inferences in studies that employ the incidence of zero and small positive earnings surprises to categorize firms as suspected of managing earnings.

Keywords

Earnings surprise · Analyst forecast error · Forecast uncertainty · Analyst incentives · Earnings management · Discontinuity · Benchmark beating

JEL classification

D80 M41 G24 G29 

1 Introduction

The literature provides robust evidence of an asymmetry around zero in the frequency distribution of U.S. firms’ earnings surprises (i.e., analyst forecast errors), suggesting a systematic tendency of firms to meet or just beat rather than just miss analyst expectations (Degeorge et al. 1999; Brown 2001; Abarbanell and Lehavy 2003; Dechow et al. 2003). The common explanation for this is that firms systematically manage earnings to meet expectations (e.g., Degeorge et al. 1999), and subsequent research frequently categorizes firms as suspected of managing earnings when their earnings meet or just beat analyst expectations (e.g., Cheng and Warfield 2005; Lim and Tan 2008; Fang et al. 2015).

However, empirical evidence on the earnings management explanation is mixed. We show that strategic analyst forecast pessimism provides an important alternative explanation. Strategic forecast pessimism refers to sell-side analysts understating their public forecasts relative to their private expectations about a firm’s earnings. Incentives to issue pessimistic forecasts are rooted in analysts’ relations with firms’ management (e.g., Francis and Philbrick 1993; Lim 2001). One way in which analysts can maintain good relations with managers is by slightly understating their forecasts and thus helping firms meet or just beat expectations. Empirical studies show that analysts obtain informational benefits from such behavior, by documenting relations between forecast pessimism and analysts’ subsequent forecast accuracy and career outcomes (Ke and Yu 2006; Hilary and Hsu 2013). Recent research also suggests that, even after SEC Regulation Fair Disclosure (Reg FD), private communication with management remains widespread and continues to figure in analyst research (Brown et al. 2015).1

Our premise is that ex ante forecast pessimism (i.e., the forecast is lower than the analyst’s expectation) can differ from ex post forecast pessimism (i.e., a positive earnings surprise) because analysts generally face uncertainty about the earnings outcome. This uncertainty can be defined as the imprecision of the information signals analysts use when generating a point estimate earnings expectation. This precision is determined by both the precision of public information (i.e., the predictability of earnings given publicly available information) and the precision of analysts’ unique private information.

When analysts’ information about earnings is relatively precise, the range of earnings outcomes around their point estimate is relatively small. A smaller range leads to a greater likelihood that a slightly understated forecast will actually induce a positive surprise when earnings are announced.2 Thus, given the incentives to please managers, the ability of analysts to be strategically pessimistic relates positively to the precision of their information. Accordingly, we hypothesize that the likelihood that a firm’s earnings just beat analyst expectations (i.e., the firm reports a small positive earnings surprise) is negatively associated with earnings forecast uncertainty.
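To make this intuition concrete: under the simplifying assumption that the analyst's uncertainty about the earnings outcome is normally distributed around the private point estimate (our assumption for illustration, not the paper's model), the probability that an understated forecast still produces a positive surprise is Φ(b/σ), where b is the understatement and σ the uncertainty, both in cents:

```python
from statistics import NormalDist

def prob_positive_surprise(bias_cents: float, sigma_cents: float) -> float:
    """P(actual > forecast) when the forecast understates the analyst's
    point estimate by `bias_cents` and the earnings outcome is assumed
    normal with standard deviation `sigma_cents` around that estimate:
    P(actual > estimate - bias) = Phi(bias / sigma)."""
    return NormalDist().cdf(bias_cents / sigma_cents)

# The same 1-cent understatement is a near-sure "beat" for a precisely
# forecastable firm but close to a coin flip under high uncertainty.
print(prob_positive_surprise(1.0, 0.5))   # sigma = 0.5 cents
print(prob_positive_surprise(1.0, 10.0))  # sigma = 10 cents
```

As σ grows, the pessimism required for a given beat probability grows proportionally, which is the mechanism behind the prediction above.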

We test our hypothesis using a large sample of U.S. firms’ quarterly earnings surprises over the period 1993–2013. Earnings forecast uncertainty is measured using analyst forecast dispersion, which is the standard deviation of individual analysts’ latest forecasts before earnings announcements (e.g., Barron and Stuerke 1998; Clement et al. 2003). In initial descriptive analyses, we document a strong link between the sign of small earnings surprises and levels of earnings forecast uncertainty. Specifically, we find that earnings that ultimately just beat analyst expectations are associated with substantially lower forecast dispersion before earnings announcements than earnings that just miss expectations. The average dispersion in firm-quarters with an earnings surprise of −1 or −2 cents per share is about 50 % higher than the average dispersion in firm-quarters with a small positive surprise of similar magnitude.

We formally test our hypothesis using a multiple logistic regression framework that explains the likelihood that firms report a small positive (1 or 2 cents) versus a small negative (–1 or –2 cents) earnings surprise, and control for an array of other determinants of earnings surprises. Regression results reveal a strong negative relation between earnings forecast uncertainty and the likelihood that firms just beat instead of just miss analyst expectations. This relation is statistically and economically highly significant and—given the link between uncertainty and analysts’ ability to issue slightly pessimistic forecasts—supports the notion that strategic forecast pessimism helps shape the distribution of earnings surprises around zero.
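The shape of such a test can be illustrated on simulated data: if the probability of a beat declines in dispersion, a logistic regression of the beat indicator on dispersion recovers a negative slope. The data-generating process and Newton-Raphson fit below are our own illustration, not the paper's specification (which includes an array of controls and fixed effects):

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated firm-quarters: beat probability declines in forecast dispersion
# (intercept 1.0 and slope -0.5 are assumed purely for illustration).
n = 5000
disp = rng.exponential(scale=2.0, size=n)              # dispersion in cents
p_true = 1.0 / (1.0 + np.exp(-(1.0 - 0.5 * disp)))
beat = (rng.random(n) < p_true).astype(float)

# Univariate logit fit by Newton-Raphson on the log-likelihood.
X = np.column_stack([np.ones(n), disp])
w = np.zeros(2)
for _ in range(25):
    p = 1.0 / (1.0 + np.exp(-X @ w))
    hessian = X.T @ (X * (p * (1.0 - p))[:, None])
    w += np.linalg.solve(hessian, X.T @ (beat - p))

print(w)  # fitted [intercept, slope]; slope should be near the assumed -0.5
```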

We also examine other subsets of the earnings surprise distribution that are commonly used to capture earnings management. We predict that earnings forecast uncertainty becomes even more salient when considering the likelihood that a firm reports a 0 or 1 cent earnings surprise because, statistically, firms are more likely to report a small earnings surprise when analysts face little uncertainty. When we introduce variables that capture whether firms report a 0 or 1 cent surprise relative to (1) all other surprises and (2) all negative surprises, respectively, the empirical results confirm that the relation with earnings forecast uncertainty strengthens. For instance, we find that an inter-quartile reduction in forecast dispersion increases the conditional probability that a firm reports a 0 or 1 cent surprise from 14 to 35 %.

We illustrate the implications of our findings by examining potential research settings that make use of the above subsets of the earnings surprise distribution. Specifically, we examine the consequences of controlling for earnings forecast uncertainty for the relation between small positive earnings-surprise incidence and variables that capture constraints on firms’ ability to manage earnings. We show that each of these variables is both negatively related to small positive surprise incidence and positively related to forecast dispersion. After controlling for dispersion, most of the significant relations with small positive earnings-surprise incidence disappear. In other words, controlling for analysts’ ability to strategically issue pessimistic forecasts can overturn conclusions about earnings management.

We perform several additional tests that rule out alternative explanations. First, we introduce an alternative measure of earnings forecast uncertainty that exploits information on the relative precision of individual analysts’ prior forecasts issued for other firms and again find a strong negative association with the likelihood of small positive earnings surprises. Second, we show that our results are not driven by firms’ use of (and ability to use) earnings management, expectations management, or the relation between dispersion and future firm performance. Third, we show that our findings are not simply an artifact of previously documented relations between uncertainty and analyst forecast optimism.

This study contributes to the earnings discontinuity literature.3 The common explanation for the asymmetry in the earnings surprise distribution is earnings management, although empirical evidence for this explanation is limited and mixed.4 Durtschi and Easton (2005) provide an alternative perspective and interpret the earnings surprise distribution in terms of the magnitude of optimistic versus pessimistic analyst forecast errors. They show that pessimistic forecast errors are smaller than optimistic ones, and conclude that forecast error patterns provide an alternative explanation for the asymmetry around zero earnings surprise: positive surprises cluster around zero, while negative surprises spread away from zero. By showing how forecast uncertainty maps into both the magnitude and sign of forecast errors, we provide an explanation for why forecast pessimism is concentrated in smaller earnings surprises.

Our findings have important implications for research designs that employ the incidence of zero and small positive earnings surprises to capture constructs related to earnings management.5 We show that these designs are more likely to sort samples based on earnings forecast uncertainty than on earnings management, and that controlling for dispersion can materially change inferences in settings where a variable of interest relates to uncertainty. We expect these implications to apply to a wider range of settings, given that variables of interest are commonly related to a firm’s information environment. Moreover, the relation between forecast uncertainty and the sign of earnings surprises is likely to be important for studies that rely on signed earnings surprises in other contexts, such as when modeling predictable variation in analysts’ forecast errors (e.g., Hughes et al. 2008; Mohanram and Gode 2013; So 2013).

2 Background and hypothesis development

Degeorge et al. (1999) document a strong asymmetry around zero in the frequency distribution of U.S. firms’ quarterly earnings surprises and show that firms are substantially more likely to just beat rather than just miss analyst forecasts. Brown (2001) finds this asymmetry has strengthened over time, and Dechow et al. (2003) find a similar pattern for annual earnings surprises. The financial media also commonly acknowledge the tendency of firms to beat rather than miss analyst earnings forecasts, even in economic downturns when firms’ earnings are weak (Zweig 2011; Jakab 2013).

In this study, we examine the role of predictable variation in sell-side analyst forecast errors in explaining this phenomenon. Forecast errors arise because the information analysts use to generate their forecasts is not perfect. Analysts have incentives to minimize their forecast errors by generating and acquiring better information, as prior research suggests that forecast accuracy matters to them because of career and reputational concerns.6 This suggests that rational analysts can be expected to issue forecasts that reflect their best estimate of a firm’s future earnings.

At the same time, analysts face incentives to issue forecasts that deviate from their best estimate (e.g., Lim 2001; Hong and Kubik 2003). Such strategically biased forecasts may arise from analysts’ attempts to win trading commissions for the brokerages they work for (e.g., Cowen et al. 2006), from efforts to acquire investment banking deals (e.g., Lin and McNichols 1998), and from attempts to ingratiate themselves with firm managers to obtain privileged access to information (e.g., Francis and Philbrick 1993). For instance, Lim (2001) models analysts’ trade-off between issuing their best-estimate forecasts and issuing biased forecasts that increase access to management. Access to management increases the precision of analysts’ estimates and thereby reduces expected forecast errors.

Whether strategic forecast bias is optimistic (i.e., the forecast is above the analyst’s best estimate) or pessimistic (i.e., the forecast is below the best estimate) depends on analysts’ specific incentives and the forecast horizon. Brokerage trading commissions and investment banking typically induce incentives for forecast optimism.7 Incentives to please managers lead to both optimism in long-horizon forecasts to support high market valuations (Dechow et al. 2000; Bradshaw et al. 2006) and pessimism in short-horizon forecasts to help firms meet or beat expectations. Ke and Yu (2006) and Hilary and Hsu (2013) present empirical evidence suggesting that analysts obtain benefits from short-horizon pessimism in terms of greater access to management and more consistent and influential forecasts.

Meeting analysts’ earnings expectations matters to firms because of the negative consequences of missing expectations (e.g., Skinner and Sloan 2002; Frankel et al. 2010). In addition, managers prefer small positive earnings surprises and therefore slightly understated analyst earnings forecasts. Survey evidence by Graham et al. (2005) suggests that managers believe that large earnings surprises negatively influence investor perceptions of the predictability of firm performance and increase the cost of capital. Based on their interviews with CFOs, Graham et al. (2005, 43) note, “When asked about whether they would prefer to meet or to beat the earnings target, several CFOs say they would rather meet (or slightly beat) the earnings target rather than positively surprising the market in a big way every quarter.”

The costs that firms incur from more volatile earnings surprises are unlikely to be overcome by the benefits of reporting large positive surprises, for two reasons. First, Freeman and Tse (1992) document that the marginal stock price benefits of beating expectations decrease sharply with the magnitude of earnings surprises. In other words, the marginal benefits to managers of beating expectations by a large versus a small amount are much smaller than the marginal benefits of meeting versus missing expectations.8 Second, stock prices respond to analyst forecast revisions (e.g., Lys and Sohn 1990; Stickel 1991), which implies that when analysts are overly pessimistic before earnings announcements, the negative pricing effects of understated forecasts are likely to outweigh the positive consequences of beating expectations by a greater margin.

Given managers’ preference for small positive earnings surprises, the extent to which analysts can please managers depends on their ability to induce such surprises. This ability, in turn, depends on the precision of analysts’ information. Consider the contrast between the required strategic bias for a firm with relatively low uncertainty—for example, with an expected range of possible earnings outcomes of 18–22 cents per share—and the required strategic bias for a firm with relatively high uncertainty—for example, with an expected range of possible earnings outcomes of 0–40 cents per share. Assume the analyst privately has an ex ante 20 cents point estimate expectation for both firms (i.e., the mid-point of both ranges). To obtain a similar probability of a small positive earnings surprise for both firms ex post, the analyst would have to pessimistically bias the forecast for the high uncertainty firm by a much larger amount than for the low uncertainty firm. Such larger bias is costly to the analyst because of the expected increase in forecast error. As a result, the analyst’s forecast is much less likely to display ex post pessimism for the high uncertainty firm than for the low uncertainty firm.
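Treating each range loosely as ±2 standard deviations around the 20-cent estimate (so σ ≈ 1 cent versus σ ≈ 10 cents, purely for illustration under the same normality assumption as above), the understatement needed for a given ex post beat probability scales linearly with σ:

```python
from statistics import NormalDist

def required_bias(sigma_cents: float, beat_prob: float) -> float:
    """Understatement (in cents) needed so that earnings beat the forecast
    with probability `beat_prob`, assuming a normal earnings outcome with
    std dev `sigma_cents` around the analyst's private point estimate."""
    return sigma_cents * NormalDist().inv_cdf(beat_prob)

# Low-uncertainty firm (range 18-22 cents, sigma ~ 1) versus
# high-uncertainty firm (range 0-40 cents, sigma ~ 10):
for sigma in (1.0, 10.0):
    print(f"sigma = {sigma:4.1f} cents -> bias for 80% beat: "
          f"{required_bias(sigma, 0.80):.2f} cents")
```

The tenfold larger bias for the high-uncertainty firm carries a tenfold larger expected forecast error, which is why ex post pessimism concentrates among low-uncertainty firms.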

To summarize, given the existence of incentives for analysts to induce strategic pessimism in their forecasts, we posit that analysts’ ability to slightly understate their forecasts and please managers is greater when their information signals about a firm’s future earnings are more precise. We therefore predict that when earnings forecast uncertainty is relatively low, firms are more likely to report earnings that just beat rather than just miss consensus analyst expectations. Our main hypothesis is stated below (in alternative form):

H1

The likelihood that a firm reports earnings that just beat instead of just miss consensus analyst expectations is negatively related to earnings forecast uncertainty.

3 Research design

3.1 Sample selection

Table 1 presents the sample selection procedure. Our initial sample consists of all quarterly earnings per share (EPS) forecasts in the I/B/E/S unadjusted detail file for U.S. firms with fiscal quarters ending in 1993–2013.9 We require actual EPS data from I/B/E/S and retain each analyst’s latest forecast before the quarterly earnings announcement. The sample is restricted to firm-quarters with at least three individual forecasts, as in, for example, Ke and Yu (2006). We drop late earnings announcements that occur more than 180 calendar days after fiscal quarter-end, and, following standard I/B/E/S methodology, we remove stale forecasts made more than 180 days before the quarterly earnings announcement.10 We merge the I/B/E/S sample with CRSP and Compustat and restrict the sample to firms that have nonnegative total assets and are listed on NYSE, AMEX, or NASDAQ. We delete utilities and financial services firms (SIC codes 4400–4999 and 6000–6499) and obtain a final sample of 118,730 firm-quarter observations after eliminating observations with missing data to compute our main regression variables.11
Table 1

Sample selection

Panel A: Initial selection of earnings surprise sample

  Description                                                                        No. obs.
  One-quarter-ahead forecasts of EPS on I/B/E/S 1993–2013 (“FPI” = 6)               2,486,904
   Less: Analyst code missing or equal to 0 or 1, or CUSIP code missing               −24,694
   Less: No CUSIP-PERMNO match with CRSP “stocknames” file                            −39,207
   Less: Missing actual EPS value or announcement date on I/B/E/S                     −15,511
   Less: Earnings announcement date more than 180 days after fiscal quarter end        −4,906
   Less: Individual forecast more than 180 days before earnings announcement date     −22,995
   Less: Retain only last forecast by each individual analyst                        −640,562
  Cleaned sample of individual forecasts                                            1,739,029
   Unique firm-quarters with earnings surprise data                                   301,027

Panel B: Final sample of firm-quarters

  Firm-quarters in CRSP/COMPUSTAT with positive total assets 1993–2013                463,527
   Less: Missing SIC code or exchange code on CRSP                                       −417
   Less: Firm not listed on NYSE, AMEX, or NASDAQ                                      −3,473
   Less: Utilities and financial firms (SIC codes 4400–4999 and 6000–6499)            −94,722
   Less: Missing data for control variables in COMPUSTAT                              −51,919
   Less: No quarterly earnings surprise data available on I/B/E/S                    −120,041
  CRSP/COMPUSTAT firm-quarters with I/B/E/S earnings surprise data                    192,955
   Less: Missing prior return (CRSP) and firm-specific ERC data                       −28,320
   Less: Fewer than three individual analyst forecasts available for firm-quarter     −45,905
  Final sample of firm-quarter observations                                           118,730

EPS forecast and actual data are obtained from the I/B/E/S unadjusted detail files. We include all fiscal quarters ending in the first calendar quarter of 1993 through the fourth calendar quarter of 2013.

The quarterly earnings surprise (SURPRISE) is defined as actual EPS less our consensus forecast, which is calculated as the mean of individual analysts’ latest EPS forecasts.12 Before taking the difference between actual earnings and the consensus forecast, both figures are rounded to cents per share to ensure consistency with the presentation of these numbers in press releases and financial media. Based on SURPRISE, we create three dependent variables.

The first variable captures the incidence of small positive versus small negative surprises. Just beat versus just miss is an indicator variable that equals 1 for positive earnings surprises of 1 or 2 cents per share and 0 for negative surprises of 1 or 2 cents per share. Although we are unaware of prior literature using this specific classification, it is best suited to testing H1 in our setting. The design of Singer and You (2011) comes closest to our classification, except theirs focuses on price-scaled earnings surprises. McVay et al. (2006), Brown and Pinello (2007), and Shon and Veliotis (2013) do focus on unscaled surprises, but they include 0 cent surprises as small “beats.” Our choice to focus on 1 and 2 cent surprises, though arbitrary, is driven by our motivation to obtain a reasonably small interval around zero while also maintaining enough observations for the analyses. Results are not affected by choosing alternative intervals around zero and extend to comparisons of positive versus negative surprises of any magnitude.

Given managers’ (Graham et al. 2005), investors’ (Keung et al. 2010), and academics’ emphasis on zero and small earnings surprises, we also create a variable Meet/just beat versus all other, which is an indicator variable that equals 1 for 0 and 1 cent earnings surprises and 0 for all other surprises (e.g., Cheng and Warfield 2005; Lim and Tan 2008; Brochet et al. 2015). We create a similar variable, Meet/just beat versus miss, that equals 0 only for negative earnings surprise observations. This classification comports with studies such as that of McVay et al. (2006), except that it includes misses of any magnitude in the 0 category. This variable can provide important additional insights, since a common assumption is that firms would have missed expectations in the absence of earnings management.
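A minimal sketch of these definitions (the function names are ours; the paper builds SURPRISE from I/B/E/S unadjusted actuals and each analyst's latest forecast):

```python
def surprise_cents(actual_eps: float, forecasts: list[float]) -> int:
    """Quarterly earnings surprise: actual EPS minus the mean of the
    individual analysts' latest forecasts, with both figures rounded to
    whole cents before differencing (all inputs in cents per share)."""
    consensus = sum(forecasts) / len(forecasts)
    return round(actual_eps) - round(consensus)

def classify(s: int) -> dict:
    """The three dependent variables; None marks firm-quarters that fall
    outside the comparison group for that variable."""
    return {
        "just_beat_vs_just_miss":
            1 if s in (1, 2) else 0 if s in (-1, -2) else None,
        "meet_just_beat_vs_all_other":
            1 if s in (0, 1) else 0,
        "meet_just_beat_vs_miss":
            1 if s in (0, 1) else 0 if s < 0 else None,
    }

print(surprise_cents(21.0, [19.6, 20.1, 20.3]))  # consensus 20c -> surprise +1c
print(classify(1))
```

Observations returned as None fall outside the relevant comparison group and would be dropped from that analysis.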

3.2 Measuring earnings forecast uncertainty

Earnings forecast uncertainty is measured as the dispersion in the individual analysts’ forecasts used to construct the consensus forecast (DISP). The intuition for this measurement is that when analysts’ information precision increases, forecast dispersion decreases. Dispersion has been widely used in the literature as an empirical proxy for forecast uncertainty (e.g., Barron and Stuerke 1998; Kinney et al. 2002; Clement et al. 2003; Ke and Yu 2006). Consistent with our focus on unscaled earnings surprises, we use the unscaled standard deviation measured in cents per share (Cheong and Thomas 2011).13
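Under these choices, DISP for a firm-quarter is simply the unscaled standard deviation of the latest individual forecasts, in cents per share. A sketch (the sample (n−1) standard deviation is our assumption; the paper does not specify the divisor):

```python
from statistics import stdev

def forecast_dispersion(forecasts: list[float]) -> float:
    """DISP: standard deviation of individual analysts' latest EPS
    forecasts for a firm-quarter, left unscaled and measured in cents
    per share.  Requires at least three forecasts, matching the sample
    restriction."""
    if len(forecasts) < 3:
        raise ValueError("firm-quarter needs at least three forecasts")
    return stdev(forecasts)

print(forecast_dispersion([18.0, 20.0, 22.0]))  # 2.0 cents
```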

The use of dispersion as a measure of earnings forecast uncertainty has benefits and drawbacks. A clear advantage of dispersion is that it is an ex ante measure and available in real-time (Sheng and Thevenot 2012). Other proxies, such as realized earnings volatility, rely on time-series data and result in stale measures of uncertainty. A drawback of dispersion is that it is not a perfect measure of earnings forecast uncertainty (Barry and Jennings 1992; Abarbanell et al. 1995; Sheng and Thevenot 2012). In addition, while our study focuses on the uncertainty faced by individual analysts, the use of dispersion hinges on the assumption that inter-analyst variation in forecasts captures average intra-analyst uncertainty.

Barron et al. (1998) show how dispersion is conceptually related to earnings forecast uncertainty, and Barron and Stuerke (1998) provide empirical evidence that validates the use of dispersion as a proxy for earnings forecast uncertainty.14 Evidence suggesting that lower quality disclosures are associated with greater dispersion (Lang and Lundholm 1996; Lehavy et al. 2011; Rajgopal and Venkatachalam 2011) further supports the link between dispersion and the precision of the information available to the average analyst. Lahiri and Sheng (2010) and Sheng and Thevenot (2012) provide evidence suggesting that dispersion approximates earnings forecast uncertainty for short-horizon forecasts, the forecasts we consider in this study.

3.3 Control variables

Prior studies show that firm size relates to forecast accuracy (Lang and Lundholm 1996) and analyst optimism (Das et al. 1998). We therefore include the natural logarithm of market value of equity (MV) at the beginning of the fiscal quarter to control for firm size. Also, the evidence of Skinner and Sloan (2002) suggests that growth firms are associated with asymmetrically negative stock market reactions to negative earnings surprises, which implies that growth is associated with managers’ incentives to avoid missing expectations. We control for growth using the book-to-market ratio (BTM) at the end of the previous fiscal quarter and a variable that captures the fraction of the most recent eight quarterly earnings numbers that exceed earnings of the same quarter in the previous year (GROWTH).

We further control for market-based incentives to meet expectations by including fixed effects for each of the 84 calendar quarters in our sample, because the incidence and market reward for zero or small positive earnings surprises vary over the sample period (Keung et al. 2010).15 Next, we calculate a firm-specific measure of stock price sensitivity to earnings news based on the earnings response coefficient (ERC) obtained from firm-specific time-series regressions of earnings announcement returns on earnings surprises. Kinney et al. (2002) show that ERCs are larger for firms with lower forecast dispersion, which implies stronger stock price declines when firms miss expectations and dispersion is relatively low. Higher ERCs imply greater price sensitivity to earnings news and potentially greater incentives to meet expectations.

Managers have incentives to meet expectations and maintain high stock prices around mergers and acquisitions (M&As). For instance, the evidence of Erickson and Wang (1999) suggests firms manage earnings around M&As to inflate stock prices in stock-for-stock mergers. At the same time, Erickson et al. (2012) show that M&A announcements lead to increases in forecast dispersion. We control for M&As by creating an indicator variable capturing whether or not M&A announcements take place in the 90 days before the quarterly earnings announcement (MNA). We further control for managers’ trading incentives to meet expectations (Richardson et al. 2004; McVay et al. 2006) by controlling for seasoned equity offering (SEO) and insider selling (INSELL) activity. We control for institutional ownership (INST), because Matsumoto (2002) suggests that managerial incentives to avoid missing expectations increase with institutional ownership.

Prior research suggests that, on average, analysts’ forecasts do not fully incorporate public news (e.g., Lys and Sohn 1990; Abarbanell 1991). We control for prior news by including variables that capture firms’ recent stock price performance (LAGRET) and the prior quarter’s seasonally differenced earnings change (LAGDQEPS). If analysts underreact to news, these variables will be positively correlated with the earnings surprise, because analyst underreaction to good (bad) news should lead to a forecast that is too low (high), which induces a positive (negative) earnings surprise.

We also control for the number of shares outstanding (SHRS), because earnings surprises more easily round to zero when a greater number of shares is used to compute EPS. Although forecast errors and dispersion tend not to vary with price per share (Cheong and Thomas 2011), we also include the natural logarithm of lagged price to control for any unobserved scale-related effects (PRICE). Since a rounded zero earnings surprise can also be driven by stock splits over the prior period, we control for splits over the year before the earnings announcement (SPLIT). We further control for the natural logarithm of analyst following by including NUMEST, the number of analysts contributing to our consensus. Lastly, HORIZON controls for the average age of the forecasts in the consensus. This control is included because late-quarter revisions in forecasts are more likely influenced by guidance (e.g., Richardson et al. 2004), and because later forecasts are likely to be based on more precise information than forecasts made earlier in the quarter.

4 Results

4.1 Descriptive statistics

Panel A of Table 2 presents descriptive statistics on earnings surprises and our dependent variables.16 The median firm-quarter has a positive earnings surprise of 1 cent per share. The first and third quartile values of SURPRISE are −1 and +4 cents, respectively. These figures corroborate the asymmetry in the earnings-surprise distribution and comport with other studies (Cheong and Thomas 2014). Just beat versus just miss equals 1 for 67.5 % of observations, while 26.3 % of sample firm-quarters have a 0 or 1 cent earnings surprise (Meet/just beat vs. all other). Meet/just beat versus miss equals 1 in 47.5 % of cases, which suggests zero and small positive earnings surprises occur almost as frequently as negative surprises of any amount.
Table 2

Descriptive statistics

Panel A: Descriptive statistics for quarterly earnings surprise variables

                                                  n     Mean  St. dev.     p25  Median    p75
  SURPRISE                                  118,730    1.137    10.317  −1.000   1.000  4.000
   Just beat (1) versus just miss (0)        39,227    0.675     0.468   0.000   1.000  1.000
   Meet/just beat (1) versus all other (0)  118,730    0.263     0.440   0.000   0.000  1.000
   Meet/just beat (1) versus miss (0)        65,809    0.475     0.499   0.000   0.000  1.000

Panel B: Mean earnings forecast uncertainty and firm characteristics by quarterly earnings surprise bin

  SURPRISE:     ≤−3      −2      −1       0       1       2      ≥3   Mean −1¢   Mean +1¢      Diff.
                                                                        or −2¢     or +2¢
  DISP        6.969   3.185   2.487   1.667   1.725   2.060   4.447      2.754      1.868   0.885***
  MV           2974    3387    4068    5618    5418    5373    5221       3808       5399   −1591***
  BTM         0.576   0.503   0.462   0.424   0.425   0.436   0.484      0.478      0.430   0.048***
  GROWTH      0.546   0.592   0.621   0.660   0.655   0.642   0.606      0.610      0.649  −0.039***
  ERC         7.523  10.745  13.216  16.177  16.258  15.842  11.467     12.274     16.080  −3.806***
  MNA         0.126   0.144   0.168   0.189   0.188   0.179   0.160      0.159      0.184  −0.025***
  SEO         0.065   0.059   0.052   0.048   0.048   0.049   0.050      0.054      0.049   0.006***
  INSELL      0.378   0.398   0.418   0.461   0.478   0.492   0.482      0.411      0.484  −0.073***
  INST        0.532   0.529   0.539   0.546   0.571   0.589   0.614      0.535      0.579  −0.044***
  LAGRET     −0.037  −0.033  −0.023  −0.007   0.015   0.026   0.038     −0.027      0.020  −0.046***
  LAGDQEPS   −0.070  −0.009  −0.009   0.023   0.033   0.030   0.039     −0.009      0.032  −0.041***
  SHRS           92     111     131     164     157     151     135        123        154     −31***
  PRICE      25.619  23.764  24.777  26.836  27.774  28.802  32.384     24.391     28.214  −3.823***
  SPLIT       0.048   0.068   0.087   0.114   0.115   0.113   0.075      0.080      0.114  −0.034***
  NUMEST      7.787   7.921   8.231   8.376   8.579   8.803   9.216      8.113      8.675  −0.562***
  HORIZON    60.848  59.850  60.630  62.366  64.268  64.634  64.151     60.333     64.424  −4.092***

See Table 1 for sample selection details and the “Appendix” for variable descriptions. All continuous variables are winsorized to the 1st and 99th percentiles of their distributions. Test statistics in Panel B are calculated using standard errors adjusted for two-way clustering by firm and quarter. ***, **, and * reflect statistical significance at the level of 0.01, 0.05, and 0.10, respectively

Panel B presents means of forecast dispersion by cents of earnings surprise. We find a strong V-shaped pattern for DISP across the earnings surprise distribution, which supports the use of forecast dispersion as a proxy for the average precision of information in analysts’ forecasts. Greater dispersion is associated with larger (ex post) earnings surprises. More importantly, there is a strong asymmetry in the V-shape based on the average dispersion for positive versus negative earnings surprises. Small negative surprises are associated with greater forecast uncertainty than small positive earnings surprises. The average dispersion in the −1 and −2 cents bins (2.754) is about 50 % higher than the average dispersion in the +1 and +2 cents bins (1.868). This difference is statistically highly significant. As a result of the asymmetry, we find that the 0 and +1 cent earnings surprise bins are associated with the lowest dispersion.

Figure 1 presents graphical evidence on the link between dispersion and small earnings surprises. Panel A visualizes the strongly asymmetric nature of the V-shaped relation between earnings surprises and forecast dispersion around zero. Panel B illustrates the relation between dispersion and the likelihood that firms just beat versus just miss expectations. When uncertainty is lowest (i.e., dispersion decile 1), firms are more likely to just beat than to just miss expectations. As uncertainty increases, the likelihood of just beating versus just missing decreases monotonically; in fact, firms in the highest two deciles are more likely to just miss than to just beat expectations. When we instead compare 0 and +1 cent surprises to all other surprises, or to negative surprises, the relation with dispersion is even stronger.
Fig. 1

Relation between forecast dispersion and small earnings surprises. Panel A: Asymmetric distribution of forecast dispersion across earnings surprise bins. Panel B: Frequency of zero and small positive earnings surprises by deciles of forecast dispersion. Panel A is based on the 111,181 firm-quarters with an absolute earnings surprise smaller than or equal to 20¢ per share. Panel B is based on the full sample of 118,730 firm-quarters. See Table 1 for sample selection details. See the “Appendix” for information on the construction of the forecast dispersion measure and the earnings surprise category variables. Deciles based on forecast dispersion are constructed each calendar quarter

Panel B of Table 2 further shows that the 0 and +1 cent earnings surprise bins are also associated with the lowest average BTM, highest GROWTH, highest ERC, and most frequent MNA.17 Consistent with analyst underreaction to news, LAGRET and LAGDQEPS are positively correlated with small positive earnings surprises. Zero and small positive surprises are also associated with the highest SHRS and SPLIT, consistent with surprises rounding to the target level more easily when more shares are outstanding. HORIZON is greater for small positive than for small negative surprises, which is inconsistent with the interpretation that small positive surprises are induced by late downward forecast revisions resulting from expectations management.18 All differences in average characteristics between small positive and small negative surprise firm-quarters are statistically significant, as indicated by the two rightmost columns of Panel B.

Due to the mechanical relations among MV, SHRS, and PRICE, untabulated variance inflation factors indicate potential multicollinearity. Therefore, in our analyses to follow, we exclude SHRS as a control variable. Results are qualitatively similar regardless of whether SHRS is included. Variance inflation factors for the other variables indicate no cause for concern. After excluding SHRS, the highest variance inflation factor is only 1.65 (for PRICE). Due to the right-skewness in variables DISP, MV, PRICE, and NUMEST, we use the natural logarithm of these variables in our regression analyses.19 Because DISP can take on the value of zero, ln(DISP) is defined as the natural logarithm of 0.001 plus the standard deviation in forecasts.20
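To make the transformation concrete, the zero-safe log of dispersion can be computed as below. This is a minimal sketch; the function name and inputs are ours, not the authors' code.

```python
import numpy as np

def log_dispersion(disp, floor=0.001):
    """Zero-safe log transform: ln(0.001 + forecast standard deviation).

    Raw dispersion can be exactly zero when all analysts issue the
    same forecast, so a small constant is added before taking logs.
    """
    disp = np.asarray(disp, dtype=float)
    return np.log(floor + disp)

# A dispersion of zero now maps to ln(0.001) instead of -inf.
ln_disp = log_dispersion([0.0, 1.868, 2.754])
```

The same log transformation is applied to the other right-skewed variables (MV, PRICE, NUMEST), which are strictly positive and therefore need no floor.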

4.2 Main results

Table 3 presents results of testing our hypothesis. All models are estimated using logit regression, and standard errors are clustered by firm and time (year-quarter). Estimation results indicate that forecast uncertainty (ln(DISP)) is significantly negatively related to the likelihood of a small positive versus a small negative earnings surprise. This finding supports H1 and suggests that, after controlling for other determinants of earnings surprises, less forecast uncertainty is associated with a greater likelihood that a firm reports earnings that just beat analyst expectations. Focusing on the other two columns of Table 3, we find that the relation with forecast uncertainty is also highly significant when we compare zero and small positive earnings surprises to all other firm-quarters or to negative surprises.
Table 3

Earnings forecast uncertainty and small positive earnings surprises

Logit regression

Panel A: Regression results

Dependent variable:     Just beat (1) vs.     Meet/just beat (1)    Meet/just beat (1)
                        just miss (0)         vs. all other (0)     vs. miss (0)
                        Coeff.    z-stat      Coeff.    z-stat      Coeff.    z-stat
ln(DISP)                −0.524    −22.79***   −0.854    −55.63***   −1.079    −53.78***
ln(MV)                  0.015     0.87        0.122     8.37***     0.097     6.65***
BTM                     0.053     1.10        −0.213    −5.74***    −0.191    −4.63***
GROWTH                  0.085     1.49        0.389     9.03***     0.385     7.91***
ERC                     0.000     0.34        0.004     6.54***     0.003     4.29***
MNA                     0.003     0.10        0.093     4.21***     0.087     3.49***
SEO                     −0.172    −1.12       −0.169    −1.92*      −0.423    −3.57***
INSELL                  0.102     1.93*       0.036     1.02        0.131     3.09***
INST                    0.207     4.33***     0.110     3.27***     0.245     6.29***
LAGRET                  1.241     15.76***    −0.332    −6.74***    0.861     13.86***
LAGDQEPS                0.072     3.41***     0.001     0.08        0.069     4.46***
ln(PRICE)               0.291     7.64***     −0.308    −13.84***   0.011     0.32
SPLIT                   0.121     2.43**      −0.007    −0.26       0.105     2.67***
ln(NUMEST)              0.178     4.56***     0.143     4.86***     0.310     9.01***
HORIZON                 0.006     7.86***     −0.003    −5.19***    0.001     0.91
Quarter fixed effects   Included              Included              Included
n                       39,227                118,730               65,809
Pseudo R2               0.067                 0.138                 0.212

Panel B: Marginal effects analysis

Dependent variable:                        Just beat (1)       Meet/just beat (1)   Meet/just beat (1)
                                           vs. just miss (0)   vs. all other (0)    vs. miss (0)
Probability with low DISP (p25 value)      0.730               0.346                0.638
Probability with high DISP (p75 value)     0.567               0.140                0.284
Difference p25−p75                         0.163               0.206                0.354
Percentage difference                      28.7 %              147.0 %              124.3 %
Probability with low DISP (p10 value)      0.777               0.445                0.748
Probability with high DISP (p90 value)     0.478               0.083                0.160
Difference p10−p90                         0.299               0.361                0.589
Percentage difference                      62.5 %              433.0 %              368.0 %

See Table 1 for sample selection details and the “Appendix” for variable descriptions. Test statistics are calculated using standard errors adjusted for two-way clustering by firm and quarter. ***, **, and * reflect statistical significance at the level of 0.01, 0.05, and 0.10, respectively. Marginal effects are calculated as the effect of an inter-quartile range (or 10th to 90th percentile) shift in ln(DISP) on the likelihood that the dependent variable equals 1, while holding all other variables constant at their means. The number of shares outstanding variable (SHRS) is excluded from the regressions to avoid multicollinearity problems

Focusing on the coefficients on the control variables in the Just beat versus just miss estimation, we find no evidence to suggest that the incidence of small positive versus small negative earnings surprises is significantly associated with our variables for management incentives to meet expectations. Of the incentive variables, only INST is significantly positively associated with the likelihood of a small positive versus a small negative surprise at p < 0.05. The significant positive coefficients on LAGRET and LAGDQEPS are consistent with earnings surprises being partly determined by the staleness of information in the consensus forecast.

In Panel B, we quantify the effect of a change in the variable of interest (ln(DISP)) on the conditional probability of a small positive earnings surprise. This test measures the marginal effect of a change in earnings forecast uncertainty on the likelihood of observing a small positive earnings surprise, while controlling for the influence of other factors. Results suggest that an interquartile move from high to low dispersion is associated with a 28.7 % increase in the conditional probability that firms report a small positive instead of a small negative earnings surprise. When we widen the shift in forecast dispersion to a move from the 90th to the 10th percentile, this figure increases to 62.5 %.21 The relation we document is thus economically as well as statistically significant.
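The marginal-effect arithmetic behind this comparison can be sketched as follows. The coefficient on ln(DISP) is taken from the first column of Table 3; the intercept-plus-controls term and the percentile values of ln(DISP) below are placeholders for illustration, not the paper's estimates.

```python
import numpy as np

def logit_prob(base, b_disp, ln_disp):
    """Predicted probability from a logit model, holding all other
    covariates fixed inside the `base` term (intercept + controls
    evaluated at their means)."""
    xb = base + b_disp * ln_disp
    return 1.0 / (1.0 + np.exp(-xb))

# Coefficient on ln(DISP) from the just-beat specification of Table 3;
# the percentile values and controls-at-means term are hypothetical.
b_disp = -0.524
ln_disp_p25, ln_disp_p75 = -1.0, 0.5
base = 0.8

p_low_uncertainty = logit_prob(base, b_disp, ln_disp_p25)    # low DISP
p_high_uncertainty = logit_prob(base, b_disp, ln_disp_p75)   # high DISP
marginal_effect = p_low_uncertainty - p_high_uncertainty
```

With the actual percentiles and control means, this type of calculation yields the probabilities reported in Panel B of Table 3 (e.g., 0.730 versus 0.567 in the just-beat specification).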

The Meet/just beat versus all other indicator variable is affected by both the sign and the magnitude of analysts’ forecast errors, since greater forecast uncertainty is associated with larger absolute forecast errors. Our results suggest that a move from high to low earnings forecast uncertainty is associated with a large increase in the conditional probability of identifying a zero or small positive earnings surprise, from 14.0 to 34.6 %. That is, the probability that a firm is identified as meeting or just beating expectations more than doubles. Similarly, a move from high to low forecast uncertainty more than doubles the likelihood that a firm is identified as meeting or just beating instead of missing expectations.

Overall, these findings provide strong support for our hypothesis and suggest that variation in strategic analyst-forecast pessimism may explain the systematic tendency of firms to report zero and small positive earnings surprises. This conclusion follows from our conceptual reasoning in Sect. 2, which suggests that variation in earnings forecast uncertainty allows us to empirically identify variation in the analysts’ role in shaping small earnings surprises. Our findings are particularly important for studies that use zero and small positive earnings surprises as a screen for earnings management, since variables of interest are often correlated with a firm’s information environment. Because earnings forecast uncertainty also relates to the information environment, omitted correlated variable problems are likely to arise if uncertainty is not controlled for. We illustrate this issue in the following section.

4.3 Implications for research

We illustrate the implications of controlling for forecast uncertainty for relations between earnings surprises and three variables identified as constraints on managers’ ability to manage earnings: balance sheet “bloat” (Barton and Simko 2002), the fourth fiscal quarter (Brown and Pinello 2007), and analyst cash flow forecasts (McInnis and Collins 2011).22 If these variables capture constraints on earnings management and our dependent variables for zero and small positive earnings surprise incidence measure variation in earnings management, we should observe negative associations between the constraint variables and our dependent variables. However, these variables are also conceptually related to earnings forecast uncertainty.

Barton and Simko (2002) suggest that prior-period earnings management translates from the income statement to the balance sheet as an overstatement of net operating assets. A firm’s ability to manage earnings upward is therefore constrained by the level of overstated net operating assets (balance sheet “bloat”). At the same time, balance sheet bloat is expected to be positively related to forecast uncertainty because past earnings management degrades earnings quality. Prior studies such as Rajgopal and Venkatachalam (2011) provide evidence that earnings quality is negatively related to forecast dispersion.

Because the fourth fiscal quarter is subject to the annual audit, prior research suggests that firms are more constrained in their ability to manage earnings in the fourth quarter (Brown and Pinello 2007). At the same time, fourth quarter earnings are likely to be more difficult to forecast, given relatively more unpredictable year-end accounting adjustments. Consistent with this notion, Mikhail et al. (1997) document significantly less accurate analyst forecasts in the fourth fiscal quarter. Lastly, McInnis and Collins (2011) suggest that analysts’ initiation of cash flow forecasts constrains managers’ ability to manage earnings to meet or beat expectations. However, DeFond and Hung (2003) show that analysts are more likely to forecast cash flows when earnings are volatile and difficult to predict.

Table 4, Panel A, presents descriptive statistics confirming that each of the three earnings management constraint variables (continuous variable BLOAT, the indicator variable Q4, and an indicator variable for cash flow forecasts, CFF) is positively correlated with forecast dispersion. Mean dispersion is substantially higher for firms in the highest quartile of balance sheet bloat (4.76) compared to firms in the other three quartiles (3.45). The fourth fiscal quarter (Q4) is associated with higher mean dispersion (4.06) than the other three quarters (3.68). Lastly, average DISP for quarters with cash flow forecasts (4.43) is substantially higher compared to firm-quarters without such forecasts (3.25).23
Table 4

Illustration of the implications of controlling for forecast dispersion

Panel A: Relation between forecast dispersion and variables for constraints on earnings management

BLOAT quartile   Mean DISP     Fiscal quarter   Mean DISP     Cash flow forecasts   Mean DISP
1                3.60          1                3.71          0                     3.25
2                3.32          2                3.60          1                     4.43
3                3.43          3                3.74
4                4.76          4                4.06

Logit regression

Panel B: Relation between earnings management constraints and meeting/just beating before dispersion controls

Dependent variable:     Just beat (1) vs.     Meet/just beat (1)    Meet/just beat (1)
                        just miss (0)         vs. all other (0)     vs. miss (0)
                        Coeff.    z-stat      Coeff.    z-stat      Coeff.    z-stat
BLOAT                   −0.016    −5.90***    0.000     0.17        −0.012    −4.30***
Q4                      −0.110    −3.08***    −0.091    −3.53***    −0.188    −6.84***
CFF                     −0.017    −0.39       −0.092    −2.78***    −0.099    −2.52**
Control variables       Included              Included              Included
Quarter fixed effects   Included              Included              Included
n                       38,610                116,839               64,708
Pseudo R2               0.042                 0.058                 0.094

Panel C: Relation between earnings management constraints and meeting/just beating after dispersion controls

Dependent variable:     Just beat (1) vs.     Meet/just beat (1)    Meet/just beat (1)
                        just miss (0)         vs. all other (0)     vs. miss (0)
                        Coeff.    z-stat      Coeff.    z-stat      Coeff.    z-stat
BLOAT                   −0.015    −5.08***    0.006     3.35***     −0.008    −3.80***
Q4                      −0.053    −1.44       0.022     0.95        −0.024    −0.93
CFF                     0.026     0.62        −0.013    −0.44       0.020     0.57
ln(DISP)                −0.526    −23.07***   −0.857    −55.51***   −1.084    −54.38***
Control variables       Included              Included              Included
Quarter fixed effects   Included              Included              Included
n                       38,610                116,839               64,708
Pseudo R2               0.068                 0.138                 0.213

Analyses are based on a reduced sample of 116,839 firm-quarters with available data to compute BLOAT. See Table 1 for sample selection details and the “Appendix” for variable descriptions. Test statistics are calculated using standard errors adjusted for two-way clustering by firm and quarter. ***, **, and * reflect statistical significance at the level of 0.01, 0.05, and 0.10, respectively

Panel B presents results of testing the relations of BLOAT, Q4, and CFF with our dependent variables without controlling for dispersion but including our main set of control variables. Consistent with the conjecture that balance sheet bloat constrains firms’ ability to meet or just beat expectations, the coefficients on BLOAT are significantly negative in the first and third specifications. Similarly, coefficients on Q4 are significantly negative, suggesting it is more difficult to meet or just beat expectations in the fourth fiscal quarter. The coefficient on CFF is significantly negative in the second and third specifications, suggesting the ability to meet or just beat expectations is constrained by analysts’ cash flow forecasts.

In Panel C, after controlling for dispersion, five of the seven coefficients that were significantly negative in Panel B are no longer significantly negative: all coefficient estimates on Q4 and CFF turn insignificant. For BLOAT, the coefficient in the meet/just beat specification even switches from statistically insignificant to significantly positive. These findings highlight that, given our previous results on the importance of forecast uncertainty in explaining the incidence of zero and small positive earnings surprises, researchers should control for forecast uncertainty when examining variables that capture the incidence of zero and small positive earnings surprises.

5 Additional analyses

5.1 Alternative uncertainty measure

To assess the robustness of our main findings, we introduce a different measure of earnings forecast uncertainty. Specifically, we measure the uncertainty component of each analyst’s private information by computing the average individual (relative) forecast ability of analysts in the consensus. For each firm-quarter, we first identify the individual analysts that contribute to a consensus forecast. For each of these analysts, we calculate the fraction of all their absolute forecast errors over the preceding 5 years (excluding forecasts for firm i) that exceeded the median absolute consensus forecast error. We then define a new variable, INACCR, as the firm-quarter average of these figures across the individual analysts that contribute to the consensus forecast for the current firm-quarter.
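A stylized sketch of this construction is shown below. The data layout, column names, and toy values are hypothetical; the actual implementation operates on five years of individual analyst forecast histories.

```python
import pandas as pd

# Hypothetical history of past absolute forecast errors: one row per
# analyst-forecast, with the median absolute consensus forecast error
# for the corresponding firm-quarter attached to each row.
hist = pd.DataFrame({
    "analyst": ["a1", "a1", "a2", "a2", "a3"],
    "firm":    ["X",  "Y",  "X",  "Z",  "Y"],
    "abs_err": [0.05, 0.01, 0.09, 0.02, 0.04],
    "median_consensus_abs_err": [0.03, 0.03, 0.03, 0.03, 0.03],
})

def analyst_inaccuracy(history, exclude_firm):
    """Per analyst: fraction of past absolute forecast errors (excluding
    forecasts for the focal firm) that exceeded the median absolute
    consensus forecast error."""
    h = history[history["firm"] != exclude_firm]
    beat_median = h["abs_err"] > h["median_consensus_abs_err"]
    return beat_median.groupby(h["analyst"]).mean()

# INACCR for firm X = average of these fractions across the analysts
# contributing to X's current-quarter consensus (hypothetical set).
frac = analyst_inaccuracy(hist, exclude_firm="X")
contributors = ["a1", "a2", "a3"]
inaccr_X = frac.loc[contributors].mean()
```

A higher value means the consensus is composed of analysts whose past forecasts were relatively less accurate, i.e., of lower ex ante information precision.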

Panel A of Table 5 replicates the descriptive analysis of Table 2, Panel B, for our alternative measure of earnings forecast uncertainty. We compute the quarterly decile rank of this variable to enhance the interpretation of the variation in this variable. Consistent with results for dispersion, the 0 and +1 cent surprise bins are associated with the lowest level of earnings forecast uncertainty based on our alternative measure. In addition, average INACCR is significantly lower for small positive earnings surprises, which confirms the prediction that firms are more likely to report earnings that just beat expectations when analysts’ information is more precise.
Table 5

Earnings forecast uncertainty measured by analysts’ relative information imprecision

 

Panel A: Average rank of relative analyst information imprecision across earnings surprise bins

            SURPRISE bin (¢)                                        Mean +1¢     Mean −1¢
Variable    ≦−3     −2      −1      0       1       2       ≧3      or +2¢       or −2¢       Diff.
INACCR      6.143   5.589   5.292   4.868   4.905   5.051   5.773   4.967        5.405        −0.438***

Logit regression

Panel B: Logit regression results with alternative measure

Dependent variable:     Just beat (1) vs.     Meet/just beat (1)    Meet/just beat (1)
                        just miss (0)         vs. all other (0)     vs. miss (0)
                        Coeff.    z-stat      Coeff.    z-stat      Coeff.    z-stat
INACCR                  −0.047    −9.99***    −0.081    −20.40***   −0.096    −19.79***
Control variables       Included              Included              Included
Quarter fixed effects   Included              Included              Included
n                       39,227                118,730               65,809
Pseudo R2               0.044                 0.066                 0.104

See Table 1 for sample selection details and the “Appendix” for variable descriptions. INACCR is the decile rank of a firm-quarter measure based on the past relative (to other analysts in the consensus) absolute forecast inaccuracy of individual analysts. A higher rank implies that the consensus is composed by analysts with relatively lower (ex ante) information precision. Test statistics are calculated using standard errors adjusted for two-way clustering by firm and quarter. ***, **, and * reflect statistical significance at the level of 0.01, 0.05, and 0.10, respectively

Panel B presents results of replicating our main tests using the alternative measure. The coefficient on INACCR is significantly negative in each of the estimations. For instance, results for the Just beat versus just miss specification suggest that firms characterized by low uncertainty due to analysts’ (lack of) private information (INACCR) are significantly more likely to report earnings that just beat rather than just miss expectations. Overall, we conclude that these findings are consistent with our results using forecast dispersion, and that they further support our interpretation that variation in the precision of analysts’ information has important implications for the likelihood of identifying firms reporting a (zero or) small positive earnings surprise.

5.2 Earnings management as alternative explanation

The findings of Kinney et al. (2002) suggest that ERCs are substantially greater when forecast dispersion is relatively low. This could imply that managers have greater incentives to manage earnings to meet expectations when dispersion is low.24 If earnings management explained our main findings, we should observe (1) a significantly positive association between our dependent variables and proxies for earnings management and (2) a significantly stronger (i.e., more negative) relation between dispersion and our dependent variables in cases where earnings management is observed. We test this alternative explanation in Panel A of Table 6.
Table 6

Earnings management and expectations management as alternative explanations

Logit regression

Panel A: Earnings management as moderating factor based on AAERs

Dependent variable:     Just beat (1) vs.     Meet/just beat (1)    Meet/just beat (1)
                        just miss (0)         vs. all other (0)     vs. miss (0)
                        Coeff.    z-stat      Coeff.    z-stat      Coeff.    z-stat
ln(DISP)                −0.565    −22.48***   −0.882    −52.63***   −1.122    −52.48***
AAER                    −0.078    −0.61       0.208     2.18**      0.093     0.81
ln(DISP)*AAER           −0.203    −1.30       0.035     0.33        −0.027    −0.20
Control variables       Included              Included              Included
Quarter fixed effects   Included              Included              Included
n                       28,287                78,101                45,898
Pseudo R2               0.079                 0.137                 0.218

Panel B: Expectations management as moderating factor based on public earnings guidance

Dependent variable:     Just beat (1) vs.     Meet/just beat (1)    Meet/just beat (1)
                        just miss (0)         vs. all other (0)     vs. miss (0)
                        Coeff.    z-stat      Coeff.    z-stat      Coeff.    z-stat
ln(DISP)                −0.456    −14.71***   −0.811    −41.91***   −0.993    −33.37***
GUIDE                   0.166     4.01***     0.137     5.34***     0.379     10.25***
ln(DISP)*GUIDE          −0.191    −4.89***    −0.008    −0.31       −0.106    −2.84***
Control variables       Included              Included              Included
Quarter fixed effects   Included              Included              Included
n                       26,137                83,103                43,689
Pseudo R2               0.057                 0.131                 0.207

See Table 1 for sample selection details and the “Appendix” for variable descriptions. In Panel A, AAER is an indicator variable capturing whether or not the firm-quarter had a material misstatement as identified in an SEC Accounting and Auditing Enforcement Release (AAER). The AAER data are based on misstatements which originally resulted in an overstatement of net income. AAER is set to missing for fiscal quarters after 2007 due to limited data coverage. In Panel B, GUIDE is an indicator capturing whether management issues earnings guidance for the fiscal-quarter end. This variable is used only for fiscal quarters ending in the post-Reg FD period (i.e., fiscal quarters ending in November 2000 and later) in order to capture public guidance. The interaction variables in Panels A and B are computed based on mean-adjusted main effects to reduce the influence of multicollinearity. Test statistics are calculated using standard errors adjusted for two-way clustering by firm and quarter. ***, **, and * reflect statistical significance at the level of 0.01, 0.05, and 0.10, respectively

We measure earnings management using an indicator variable (AAER) that captures whether a firm-quarter is identified as materially overstated in an SEC Accounting and Auditing Enforcement Release (AAER) based on the extended dataset of Dechow et al. (2011). Panel A of Table 6 shows that this AAER indicator is significantly positively associated with the meet/just beat classification (Meet/just beat vs. all other) but not with the incidence of a small positive versus a small negative earnings surprise. Focusing on the interactions of dispersion with AAER, we find that none of the coefficients is significantly negative. These results are not consistent with the conjecture that our documented negative relation between dispersion and small earnings surprise incidence is explained by earnings management.25

5.3 The role of expectations management

Our study relates to research that suggests managers try to walk down analyst expectations with guidance (i.e., expectations management). For instance, Cotter et al. (2006) find that guidance reduces forecast dispersion and increases the likelihood that firms meet or beat analyst expectations. Note, however, that our prediction and findings should also hold in the absence of guidance. Uncertainty before earnings announcements varies for reasons other than guidance, leading to the same conceptual prediction that high uncertainty impairs analysts’ ability to slightly understate their forecasts. Moreover, many firms do not issue guidance. Nevertheless, we explore the role of earnings guidance in this section.

We examine the moderating effect of a variable that captures public guidance. If guidance explains our findings, we should observe (1) a more negative association between dispersion and our dependent variables when earnings guidance occurs, and (2) no relation between dispersion and our dependent variables when earnings guidance does not occur. To examine these predictions, we obtain data on public guidance from the I/B/E/S Guidance database. Given data coverage and the importance of public guidance post-Reg FD, we restrict the analyses to the period starting in November 2000 (n = 83,103).26 Indicator variable GUIDE equals 1 for firm-quarters for which earnings guidance was issued and 0 otherwise.

Panel B of Table 6 presents the results of our main analyses augmented with an interaction term between dispersion and GUIDE. The negative and significant coefficients on the interaction term in the first and third specifications are consistent with earnings guidance contributing to our findings and suggest that guidance can be viewed as one mechanism through which forecast uncertainty predicts the likelihood of a small positive earnings surprise. The interaction is insignificant in the specification comparing zero and small positive surprises to all other firm-quarters. More importantly, the coefficient on dispersion itself remains strongly negative and significant, which suggests that our results hold in cases where guidance is not observed. We conclude that our findings are unlikely to be explained solely by expectations management.

5.4 Other sensitivity analyses

5.4.1 Dispersion as proxy for a moving earnings target

Dispersion could also proxy for the difficulty managers face in managing earnings to meet expectations. If lower dispersion captures a more stable earnings target that is easier to identify, it could be associated with a greater ability to manage earnings. To test this conjecture, we construct a new variable based on the variability in the consensus forecast in the period before each firm-quarter’s earnings announcement. Specifically, we compute a daily consensus for each of the 360 days before the earnings announcement based on individual forecasts outstanding prior to that day. We construct a variable TARGETVAR as a firm-quarter’s time series coefficient of variation in the daily changes in the consensus forecast.
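The construction can be sketched as below. The exact scaling is an assumption on our part: because daily changes are signed and can average near zero, we divide the standard deviation of the changes by their mean absolute value to keep the ratio well defined; the paper's precise definition may differ.

```python
import numpy as np

def target_variability(daily_consensus):
    """Sketch of TARGETVAR: variability of day-to-day changes in the
    consensus forecast over the pre-announcement window, expressed as
    a coefficient of variation (assumed scaling, see lead-in)."""
    changes = np.diff(np.asarray(daily_consensus, dtype=float))
    mean_abs_change = np.abs(changes).mean()
    if mean_abs_change == 0.0:
        return 0.0  # perfectly stable earnings target
    return changes.std() / mean_abs_change

stable_target = target_variability([1.00, 1.00, 1.00, 1.00, 1.00])
moving_target = target_variability([1.00, 1.10, 0.90, 1.20, 0.80])
```

A stable consensus maps to zero, while a consensus that moves around before the announcement yields a positive value.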

If ability to manage earnings explains our findings, we should observe a negative association between TARGETVAR and the incidence of small positive earnings surprises. Moreover, this relation should subsume the effect of dispersion. The first regression in Panel A of Table 7 shows that dispersion and TARGETVAR are significantly positively related. The second regression confirms the existence of a significant negative association between TARGETVAR and the incidence of small positive earnings surprises. However, after including dispersion in the rightmost column, we find the coefficient on dispersion remains strongly negative, while the coefficient on TARGETVAR switches from negative to positive. We therefore conclude that it is unlikely that the ability to manage earnings, as a result of variability in the earnings target to meet or beat, explains our main findings.
Table 7

Additional sensitivity tests

 

                        OLS regression        Logit regression      Logit regression
Dependent variable:     ln(DISP)              Just beat (1) vs.     Just beat (1) vs.
                                              just miss (0)         just miss (0)
                        Coeff.    t-stat      Coeff.    z-stat      Coeff.    z-stat

Panel A: Dispersion as proxy for a moving earnings target

ln(TARGETVAR)           0.226     40.53***    −0.029    −2.62***    0.050     4.18***
ln(DISP)                                                            −0.544    −22.45***
Control variables       Included              Included              Included
Quarter fixed effects   Included              Included              Included
n                       118,730               39,227                39,227
Adjusted R2 | Pseudo R2 0.311                 0.041                 0.068

Panel B: Dispersion and the role of actual future performance

Logit regression; dependent variable in all columns: Just beat (1) versus just miss (0)

                        Performance           Performance           Performance           Performance
                        quartile 1            quartile 2            quartile 3            quartile 4
                        Coeff.    z-stat      Coeff.    z-stat      Coeff.    z-stat      Coeff.    z-stat
ln(DISP)                −0.411    −11.92***   −0.608    −14.93***   −0.546    −14.39***   −0.553    −13.06***
Control variables       Included              Included              Included              Included
Quarter fixed effects   Included              Included              Included              Included
n                       8,766                 11,651                10,995                7,815
Pseudo R2               0.071                 0.087                 0.065                 0.063

Panel A presents tests of whether dispersion merely proxies for firms’ difficulty in managing earnings, based on the variability in the consensus forecast target (TARGETVAR). Panel B presents tests partitioned by the firm-quarter’s earnings performance, where earnings performance is measured as the actual earnings per share used to construct the earnings surprise, scaled by share price at the end of the previous fiscal quarter. Quartiles are formed each calendar quarter. Test statistics are calculated using standard errors adjusted for two-way clustering by firm and quarter. ***, **, and * reflect statistical significance at the level of 0.01, 0.05, and 0.10, respectively

5.4.2 Dispersion and future firm performance

Minton et al. (2002) show that higher forecast uncertainty firms tend to have lower future operating cash flows and earnings. To the extent that such lower performance is not fully anticipated in analysts’ earnings forecasts, our findings could capture the relation between uncertainty and ex post performance. To test this possibility, we split our sample into quartile portfolios, each calendar quarter, based on ex post earnings performance, and estimate our regressions for each of these portfolios. Panel B of Table 7 presents results of these estimations and suggests our findings are robust to partitioning on performance and are not driven by any particular portfolio of performance. We conclude that our findings are not explained by the link between forecast uncertainty and future performance.

5.4.3 Earnings forecast uncertainty and forecast optimism

Prior related research presents evidence on a relation between forecast uncertainty and optimism in longer-horizon forecasts (Ackert and Athanassakos 1997; Das et al. 1998; Lim 2001). With longer-horizon forecasts, managers prefer analysts to be overly optimistic (see the discussion in Sect. 2), and these prior studies conjecture that analysts have more to gain from pleasing managers with forecast optimism when uncertainty is relatively high. To test whether our findings are simply an outcome of the previously documented empirical relation between uncertainty and analysts’ forecast optimism, we use the same daily consensus forecasts as in subsection 5.4.1 and analyze the correlation between dispersion and just beating versus just missing over the forecast horizon.

Our prediction on forecast uncertainty and forecast pessimism implies that the negative association between dispersion and small positive earnings surprise incidence should be strongest based on forecasts measured just before the earnings announcement. This is because forecast pessimism is desirable for managers only for those short-horizon forecasts. If, instead, the negative association between dispersion and small positive earnings surprise incidence is simply an artifact of the positive relation between uncertainty and forecast optimism, we should observe the negative correlation to be driven by longer-horizon forecasts.

Figure 2 plots the cross-sectional correlation between dispersion and an indicator variable for small positive versus small negative earnings surprise incidence for each of the 360 days before the quarterly earnings announcement. The figure shows that the relation between dispersion and small positive earnings surprise incidence is driven by short-horizon forecasts. For longer-horizon forecasts, the correlation is virtually zero. Shortly before the earnings announcement, however, the correlation strengthens to around −0.23. These results are more consistent with the predicted link between uncertainty and pessimism than with a link between uncertainty and optimism. In addition, a plot of the indicator variable for small positive surprise incidence itself reveals that the temporal pattern in small positive earnings surprise frequency virtually mirrors that of the cross-sectional correlations. Overall, we conclude that our findings are not an artifact of the previously documented association between uncertainty and forecast optimism.
Fig. 2

Additional sensitivity test: Relation between forecast dispersion and small earnings surprises over the forecast horizon. For each day in the period from 360 days to 1 day before the quarterly earnings announcement date, a consensus (mean) forecast is computed from outstanding individual forecasts. For this purpose, we additionally use individual forecasts with “FPI” equal to 7 (two quarters ahead), 8 (three quarters ahead), 9 (four quarters ahead), and N (five quarters ahead) to construct a daily consensus. At each point in time, we remove stale forecasts by including only forecasts made in the most recent 180 days when computing the daily consensus. If the same analyst has multiple forecasts outstanding for a firm-quarter, only the most recent forecast is retained. The figure displays the daily Spearman correlation between dispersion and an indicator variable for small positive earnings surprises of 1 or 2 cents (“just beat”) versus small negative earnings surprises of −1 or −2 cents (“just miss”)
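The daily consensus construction described in the caption can be sketched as follows. The staleness window and per-analyst last-forecast rule follow the description above; the tuple data layout, the function name, and the use of the population standard deviation for dispersion are illustrative assumptions on our part (the paper does not specify population versus sample standard deviation, and I/B/E/S detail files have a richer layout).

```python
from datetime import date
from statistics import mean, pstdev

def daily_consensus(forecasts, as_of, stale_days=180):
    """Consensus from forecasts outstanding as of `as_of`.

    `forecasts` is a list of (analyst_id, forecast_date, eps) tuples
    (hypothetical layout). Forecasts older than `stale_days` are
    dropped, and only each analyst's most recent forecast is kept,
    as in the caption above.
    Returns (consensus_mean, dispersion, n_analysts) or None.
    """
    latest = {}
    for analyst, fdate, eps in forecasts:
        age = (as_of - fdate).days
        if age < 0 or age > stale_days:
            continue  # skip future-dated and stale forecasts
        if analyst not in latest or fdate > latest[analyst][0]:
            latest[analyst] = (fdate, eps)  # keep most recent per analyst
    if not latest:
        return None
    vals = [eps for _, eps in latest.values()]
    return mean(vals), pstdev(vals), len(vals)

# Analyst A revises upward; analyst C's forecast is stale (>180 days old).
cons = daily_consensus(
    [("A", date(2015, 1, 10), 0.20), ("A", date(2015, 2, 1), 0.22),
     ("B", date(2015, 2, 5), 0.18), ("C", date(2014, 6, 1), 0.30)],
    as_of=date(2015, 3, 1))
# consensus mean ≈ 0.20, dispersion ≈ 0.02, based on n = 2 forecasts
```

The daily Spearman correlation in the figure would then be computed, each day, across firms between this dispersion measure and the just-beat/just-miss indicator.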

6 Conclusions

Prior studies link variation in the incidence of zero and small positive earnings surprises to earnings management. We examine an alternative explanation based on strategic analyst forecast pessimism. We argue that earnings forecast uncertainty (i.e., lack of precision in analysts’ information signals about a firm’s earnings) is negatively associated with analysts’ ability to slightly understate their forecasts and help firms meet or just beat expectations. Consistent with this prediction, we show that firms with relatively low forecast uncertainty are significantly more likely to report earnings that just beat rather than just miss analyst expectations. Our results are robust to the inclusion of an array of control variables, the use of an alternative measure of forecast uncertainty, and tests of potential alternative explanations.

Our study contributes to the literature on the interpretation of the asymmetry around zero in the frequency distribution of earnings surprises and provides evidence on a relatively unexplored explanation for the asymmetry based on analysts’ forecasts. We highlight the importance of our findings for studies that employ small earnings surprises to measure constructs related to earnings management and show that controlling for dispersion can materially change inferences in settings where a variable of interest is correlated with uncertainty. Overall, our findings are consistent with the notion that strategic analyst forecast pessimism plays a key role in explaining zero and small positive earnings surprises. These findings matter for researchers who select firms suspected of earnings management based on zero and small positive earnings surprises: our results suggest that such samples are more likely sorted on variation in forecast uncertainty than on earnings management.

Footnotes

  1.

    Evidence in studies such as those of Soltes (2014), Green et al. (2014), and Brown et al. (2015) suggests that private communication between managers and analysts occurs frequently post-Reg FD. Even if managers do not selectively disclose material private information, analysts can still benefit from nonverbal cues and nonmaterial information disclosures that are valuable in combination with their private information. For instance, Mayew (2008) shows in a post-Reg FD setting that managers reward supportive analysts by allowing them to ask questions during conference calls. Asking questions allows these analysts to convert the public information revealed from management’s responses into material private information. We conclude that analysts’ incentives to please management and obtain access to information remain important post-Reg FD.

  2.

    To illustrate this argument, consider the case where an analyst’s point-estimate is 20 cents, with an expected range of possible earnings outcomes of 18–22 cents per share (i.e., relatively low uncertainty). Assume the analyst is willing to induce strategic pessimism of 2 cents per share. In this case, the likelihood that a forecast understated by 2 cents per share will result in a positive earnings surprise is high. (The firm is expected to meet or beat the 18 cents estimate.) In contrast, at the same level of strategic pessimism of 2 cents per share, this likelihood is much lower in the case where the expected range of possible earnings outcomes would be 0–40 cents per share (i.e., relatively high uncertainty).
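The arithmetic in this illustration can be made explicit under a uniform-distribution assumption (our assumption, purely for illustration; `prob_beat_uniform` is a hypothetical helper, with EPS expressed in cents):

```python
def prob_beat_uniform(lo, hi, point_estimate, pessimism):
    """P(actual >= forecast) when actual EPS (in cents) is assumed
    uniform on [lo, hi] and the forecast equals the analyst's point
    estimate understated by `pessimism` cents."""
    forecast = point_estimate - pessimism
    if forecast <= lo:
        return 1.0  # the firm meets or beats with certainty
    if forecast >= hi:
        return 0.0
    return (hi - forecast) / (hi - lo)

# Low uncertainty: outcomes in [18, 22]; a forecast of 18 is always met.
low = prob_beat_uniform(18, 22, point_estimate=20, pessimism=2)   # 1.0
# High uncertainty: outcomes in [0, 40]; the same 18-cent forecast is
# met or beaten only 55% of the time.
high = prob_beat_uniform(0, 40, point_estimate=20, pessimism=2)   # 0.55
```

The same 2 cents of strategic pessimism thus guarantees a nonnegative surprise under low uncertainty but leaves a 45 % chance of a miss under high uncertainty.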

  3.

    By focusing on earnings surprises, our research differs fundamentally from prior work debating the validity of economic versus artifactual explanations for “discontinuities” around zero earnings and zero earnings change (Durtschi and Easton 2005, 2009; Jorgensen et al. 2014; Burgstahler and Chuk 2015).

  4.

    See Dechow et al. (2010, 364–366) for an overview of the mixed and mostly indirect evidence on the earnings management explanation. Badertscher et al. (2012) examine restatement firms that have likely managed earnings and find no evidence of elevated discretionary accruals for the 0 and +1 cent earnings surprise bins. Moreover, they conclude: “Roughly 55 % of the observations where earnings are deemed to be opportunistically managed do not fall into the zero or just beat (0.01) earnings surprise bins that are typically used to infer earnings management. Thus, these cases would be missed in studies that use the zero bin and the bin just to the right of zero to infer earnings management” (pp. 346–347). The survey evidence of Brown et al. (2015) further suggests that sell-side analysts do not support the common conjecture that firms’ consistent meeting or beating of expectations is a red flag for financial misreporting.

  5.

    Variables based on the incidence of zero and small positive earnings surprises are commonly used to empirically proxy for constructs such as earnings management (e.g., Cheng and Warfield 2005; Brochet et al. 2015; Fang et al. 2015) and audit quality (e.g., Lim and Tan 2008; Reichelt and Wang 2010). Zero and small positive earnings surprises are also used to select samples of “suspect” firms (Balsam et al. 2002; Cohen et al. 2008). Other studies rely on the likelihood of firms reporting a nonnegative earnings surprise regardless of magnitude (e.g., Matsumoto 2002) or the likelihood of meeting/just beating versus just missing expectations (e.g., McVay et al. 2006).

  6.

    Although analysts are generally not directly compensated for their forecast accuracy, prior research finds that larger errors (relative to analysts’ peers) are associated with greater analyst turnover (Mikhail et al. 1999; Groysberg et al. 2011), while smaller errors are associated with promotions (Hong and Kubik 2003) and Institutional Investor All-Star rankings (Stickel 1992; Leone and Wu 2007). In addition, analysts face incentives to protect their reputations with clients by issuing accurate forecasts (Jackson 2005; Cowen et al. 2006; Ljungqvist et al. 2007; Fang and Yasuda 2009).

  7.

    While both forecast optimism and pessimism can induce trading commissions when triggering investor belief revision, the effect is stronger for forecast optimism. Cowen et al. (2006) argue this is due to constraints on, and costs of, short-selling. Beyer and Guttman (2011) show that as a result of investor risk aversion, analysts are more likely to bias their forecasts upward in response to trading incentives.

  8.

    In this regard, theoretical work such as that of Beyer (2008) assumes that managers’ utility with respect to meeting versus missing expectations is asymmetric. In her study, managers are assumed to be indifferent between meeting and beating analysts’ expectations, while the costs of missing expectations increase with the amount of the miss.

  9.

    The sample period starts in 1993 because, until the early 1990s, I/B/E/S data suffer from a mismatch between the definitions of forecasted and actual earnings (Cohen et al. 2007).

  10.

    We impose the 180-day restriction for earnings announcement timing because some earnings announcements in the I/B/E/S data are extremely delayed relative to the fiscal quarter-end, reflecting either extreme cases of firms in trouble or data errors. Our results are not sensitive to excluding this restriction or using alternative numbers of days.

  11.

    To adjust for stock splits between the analyst forecast and earnings announcement dates, cumulative stock split factors are obtained from CRSP and merged separately with the forecast and announcement dates. If the cumulative split factor at the forecast date differs from the cumulative split factor at the earnings announcement date, the forecast of earnings per share is multiplied by the ratio of the cumulative price adjustment factor at the announcement date to the cumulative price adjustment factor at the forecast date. This ensures that differences in split levels do not erroneously drive firms towards (or away from) zero earnings surprise.
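The stated adjustment rule can be sketched as follows; the function name and the factor values in the example are hypothetical, and we simply implement the ratio rule described above rather than any specific CRSP field semantics:

```python
def split_adjust_forecast(eps_forecast, cfac_at_forecast, cfac_at_announce):
    """Restate an EPS forecast onto the share basis of the announcement
    date, per the rule described above: multiply the forecast by the
    ratio of the cumulative adjustment factor at the announcement date
    to the factor at the forecast date. A no-op when the factors match
    (i.e., no split occurred between the two dates)."""
    if cfac_at_forecast == cfac_at_announce:
        return eps_forecast
    return eps_forecast * (cfac_at_announce / cfac_at_forecast)

# Hypothetical 2-for-1 split between the two dates: a 40-cent pre-split
# forecast becomes 20 cents on the post-split share basis, so it is
# compared against actual EPS on the same basis.
adjusted = split_adjust_forecast(0.40, cfac_at_forecast=2.0,
                                 cfac_at_announce=1.0)  # 0.20
```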

  12.

    We focus on unscaled earnings surprises per share, because managers and the investment community are mainly concerned with earnings per share rather than scaled earnings numbers. Also, Degeorge et al. (1999) and Cheong and Thomas (2011) show that earnings surprise magnitude does not vary with scale (i.e., share price). This lack of variation in scale results in an asymmetry around zero in both the unscaled and scaled distributions.

  13.

    Our results are qualitatively similar if we scale the standard deviation of forecasts by the absolute value of the consensus, i.e., use the coefficient of variation (Diether et al. 2002).

  14.

    Barron et al. (1998) show that the expected squared error in consensus (average) forecasts can be viewed as the sum of common uncertainty and 1/N times idiosyncratic uncertainty (where N is the number of analysts). If uncertainty is not purely idiosyncratic, the earnings forecast uncertainty faced by individual analysts is expected to increase consensus forecast error magnitude. At the same time, even if uncertainty is purely idiosyncratic, which maps into forecast dispersion, our sample’s median firm-quarter has seven analysts issuing a forecast. This suggests that idiosyncratic uncertainty is unlikely to be fully diversified in the consensus for the average firm-quarter. Also, prior empirical research confirms that forecast dispersion is positively associated with the magnitude of earnings surprises (e.g., Kinney et al. 2002).
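In the Barron et al. (1998) framework, this decomposition can be written as follows (notation ours):

```latex
% Expected squared error of the consensus (mean) forecast \bar{F}
% relative to actual earnings A, for N analysts, where
% C = common uncertainty and D = idiosyncratic uncertainty
% (expected forecast dispersion equals D):
\mathbb{E}\!\left[(\bar{F}-A)^2\right] \;=\; C \;+\; \frac{D}{N}
```

With the sample's median of N = 7 analysts, a fraction 1/7 (roughly 14 %) of idiosyncratic uncertainty survives aggregation into the consensus, which is why dispersion remains informative about consensus forecast error magnitude.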

  15.

    We include calendar-quarter fixed effects as opposed to fiscal-quarter fixed effects because one firm’s first fiscal quarter can occur at a different point in time than another firm’s first fiscal quarter. Controlling for calendar-quarter fixed effects allows us to better capture general time trends. Nevertheless, results are virtually the same when using fiscal-quarter fixed effects.

  16.

    All continuous variables are winsorized to the 1st and 99th percentiles of their distributions. Our results are qualitatively similar when we exclude extreme observations rather than winsorize them.

  17.

    The firm-specific earnings response coefficients (ERC) based on time series data are relatively high compared to those based on pooled samples in prior research but more consistent with values predicted by theory. This finding is consistent with the evidence of, for example, Kinney et al. (2002) and Cheong and Thomas (2014), who suggest that the pooling of observations across firms results in significant downward biases in ERCs.

  18.

    The average HORIZON of >60 calendar days is consistent with most analysts issuing forecasts after the previous quarterly earnings announcement.

  19.

    Even though ERC is relatively right-skewed, we do not use its logarithm because the estimated earnings response coefficient is negative for some firm-quarter observations. Nevertheless, our results are qualitatively similar if we set negative values equal to zero and take the natural logarithm of ERC.

  20.

Our findings are not sensitive to this specific design choice. We obtain qualitatively similar results when we i) split ln(DISP) into a variable that captures the log of dispersion when dispersion is nonzero and an indicator variable for zero dispersion or ii) use the quarterly decile rank of dispersion as the test variable.

  21.

    To further gauge the strength of our findings, we also ran (untabulated) quarterly logit regressions. Out of the 84 quarterly estimations, the coefficient on dispersion in the model explaining Just beat versus just miss incidence is negative in 83 cases and significantly negative (p value <0.05) in 74 cases.

  22.

    We extend, rather than replicate, these studies to illustrate the potential implications of our findings. While Barton and Simko (2002) and Brown and Pinello (2007) focus on the association between earnings management constraints and nonnegative earnings surprises, we focus on zero and small positive surprises. In untabulated tests, we find the results of Barton and Simko (2002) and Brown and Pinello (2007) to be robust when using an indicator for nonnegative earnings surprise as dependent variable and controlling for forecast dispersion. Barton and Simko (2002) control for dispersion, while Brown and Pinello (2007) control for the absolute forecast error. McInnis and Collins (2011) examine differences in the frequency of nonnegative earnings surprises around analysts’ cash flow forecast initiations. We examine differences in the frequency of zero and small positive earnings surprises between firms with and without cash flow forecasts. McInnis and Collins (2011) control for forecast uncertainty by matching their treatment and control firms using earnings volatility as one of the covariates.

  23.

    All differences in dispersion are statistically significant. The significant positive associations between DISP and BLOAT, Q4, and CFF are robust to controlling for our firm characteristic control variables in a multiple OLS regression.

  24.

    Payne and Robb (2000) argue that managers have stronger incentives to manage earnings to meet analyst expectations when dispersion is low. They find a significant negative association between dispersion and discretionary accruals and conclude that firms use more income-increasing earnings management tactics when dispersion is low. However, the results of their tests do not support their additional conjecture that firms use more income-increasing discretionary accruals to meet or beat expectations when dispersion is low.

  25.

    In untabulated analyses, we also used alternative earnings-management variables based on restatements. Specifically, we used two indicator variables that capture whether the firm-quarter was misstated, as identified in a subsequent restatement, based on the Audit Analytics Non-Reliance Restatement database. The key difference between the two variables is that the first captures all restatements, while the second captures only more severe restatements, as indicated by SEC investigation, indications of fraud, or class action lawsuits. For both restatement variables, we also find no significant negative coefficients.

  26.

    Chuk et al. (2013) investigate the reliability of the widely used I/B/E/S (previously “CIG”) data for public guidance and find that these data suffer from coverage biases. They recommend that researchers limit the use of data from years before 1998 and that they examine subsets of data to determine whether results using the data are driven by coverage bias. Our focus is on post-Reg FD, which is consistent with the first recommendation. Following the second recommendation, we find our results to be qualitatively similar if we split our sample based on median analyst coverage. This suggests our results are not likely to be driven by data coverage biases.

Acknowledgments

We thank two anonymous reviewers, Peter Easton (the editor), Mark Bradshaw, Hans Christensen, Lili Dai, Michael Erkens, Joachim Gassen, Tuan Ho, Steven Huddart, Joost Impink, Wim Janssen, Thomas Keusch, Felix Lamp, Wayne Landsman, Edith Leung, Mike Mao, Per Olsson, Erik Peek, Marlene Plumlee, Jeroen van Raak, Gordon Richardson, Mario Schabus, Holly Skaife, Jeroen Suijs, Siew Hong Teoh, Beverly Walther, Dan Wangerin, seminar participants at Humboldt University Berlin, University of Bologna, Maastricht University, London School of Economics, WHU Otto Beisheim School of Management, University of Toronto, University of Graz, INSEAD, Tilburg University, HEC Paris, and Lancaster University, participants at the IAAER conference at VU University Amsterdam and EAA annual meeting, and PhD students of the Limperg Capital Markets course for helpful comments and discussions. We thank Thomson Reuters for providing access to I/B/E/S analyst and management earnings forecast data. We thank the Center for Financial Reporting and Management at the Haas School of Business for sharing the AAER data. This paper was previously titled “The role of ex-ante uncertainty in explaining why firms meet or just beat analysts’ earnings forecasts.”

References

  1. Abarbanell, J. S. (1991). Do analysts’ earnings forecasts incorporate information in prior stock price changes? Journal of Accounting and Economics, 14(2), 147–165.
  2. Abarbanell, J. S., Lanen, W. N., & Verrecchia, R. E. (1995). Analysts’ forecasts as proxies for investor beliefs in empirical research. Journal of Accounting and Economics, 20(1), 31–60.
  3. Abarbanell, J., & Lehavy, R. (2003). Biased forecasts or biased earnings? The role of reported earnings in explaining apparent bias and over/underreaction in analysts’ earnings forecasts. Journal of Accounting and Economics, 36(1–3), 105–146.
  4. Ackert, L. F., & Athanassakos, G. (1997). Prior uncertainty, analyst bias, and subsequent abnormal returns. Journal of Financial Research, 20(2), 263–273.
  5. Badertscher, B. A., Collins, D. W., & Lys, T. Z. (2012). Discretionary accounting choices and the predictive ability of accruals with respect to future cash flows. Journal of Accounting and Economics, 53(1–2), 330–352.
  6. Balsam, S., Bartov, E., & Marquardt, C. (2002). Accruals management, investor sophistication, and equity valuation: evidence from 10-Q filings. Journal of Accounting Research, 40(4), 987–1012.
  7. Barron, O. E., Kim, O., Lim, S. C., & Stevens, D. E. (1998). Using analysts’ forecasts to measure properties of analysts’ information environment. The Accounting Review, 73(4), 421–433.
  8. Barron, O. E., & Stuerke, P. S. (1998). Dispersion in analysts’ earnings forecasts as a measure of uncertainty. Journal of Accounting, Auditing & Finance, 13(3), 245–270.
  9. Barry, C. B., & Jennings, R. H. (1992). Information and diversity of analyst opinion. Journal of Financial and Quantitative Analysis, 27(2), 169–183.
  10. Barton, J., & Simko, P. J. (2002). The balance sheet as an earnings management constraint. The Accounting Review, 77, 1–27.
  11. Beyer, A. (2008). Financial analysts’ forecast revisions and managers’ reporting behavior. Journal of Accounting and Economics, 46(2–3), 334–348.
  12. Beyer, A., & Guttman, I. (2011). The effect of trading volume on analysts’ forecast bias. The Accounting Review, 86(2), 451–481.
  13. Bradshaw, M. T., Richardson, S. A., & Sloan, R. G. (2006). The relation between corporate financing activities, analysts’ forecasts and stock returns. Journal of Accounting and Economics, 42(1–2), 53–85.
  14. Brochet, F., Loumioti, M., & Serafeim, G. (2015). Speaking of the short-term: Disclosure horizon and managerial myopia. Review of Accounting Studies, 20(3), 1122–1163.
  15. Brown, L. D. (2001). A temporal analysis of earnings surprises: Profits versus losses. Journal of Accounting Research, 39(2), 221–241.
  16. Brown, L. D., Call, A. C., Clement, M. B., & Sharp, N. Y. (2015). Inside the “black box” of sell-side financial analysts. Journal of Accounting Research, 53(1), 1–47.
  17. Brown, L. D., & Pinello, A. S. (2007). To what extent does the financial reporting process curb earnings surprise games? Journal of Accounting Research, 45(5), 947–981.
  18. Burgstahler, D., & Chuk, E. (2015). Do scaling and selection explain earnings discontinuities? Journal of Accounting and Economics, 60(1), 168–186.
  19. Cheng, Q., & Warfield, T. D. (2005). Equity incentives and earnings management. The Accounting Review, 80(2), 441–476.
  20. Cheong, F. S., & Thomas, J. (2011). Why do EPS forecast error and dispersion not vary with scale? Implications for analyst and managerial behavior. Journal of Accounting Research, 49(2), 359–401.
  21. Cheong, F. S., & Thomas, J. (2014). Research implications of the unusual properties of distributions for variables derived from actual and forecast EPS. Working paper (June 2014). http://faculty.som.yale.edu/jakethomas/papers/smoothingresearch.pdf.
  22. Chuk, E., Matsumoto, D., & Miller, G. S. (2013). Assessing methods of identifying management forecasts: CIG vs. researcher collected. Journal of Accounting and Economics, 55(1), 23–42.
  23. Clement, M., Frankel, R., & Miller, J. (2003). Confirming management earnings forecasts, earnings uncertainty, and stock returns. Journal of Accounting Research, 41(4), 653–679.
  24. Cohen, D. A., Dey, A., & Lys, T. Z. (2008). Real and accrual-based earnings management in the pre- and post-Sarbanes-Oxley periods. The Accounting Review, 83(3), 757–787.
  25. Cohen, D. A., Hann, R. N., & Ogneva, M. (2007). Another look at GAAP versus the street: An empirical assessment of measurement error bias. Review of Accounting Studies, 12, 271–303.
  26. Cotter, J., Tuna, I., & Wysocki, P. D. (2006). Expectations management and beatable targets: How do analysts react to explicit earnings guidance? Contemporary Accounting Research, 23(3), 593–624.
  27. Cowen, A., Groysberg, B., & Healy, P. (2006). Which types of analyst firms are more optimistic? Journal of Accounting and Economics, 41(1–2), 119–146.
  28. Das, S., Levine, C. B., & Sivaramakrishnan, K. (1998). Earnings predictability and bias in analysts’ earnings forecasts. The Accounting Review, 73(2), 277–294.
  29. Dechow, P. M., Ge, W., Larson, C. R., & Sloan, R. G. (2011). Predicting material accounting misstatements. Contemporary Accounting Research, 28(1), 17–82.
  30. Dechow, P., Ge, W., & Schrand, C. (2010). Understanding earnings quality: A review of the proxies, their determinants and their consequences. Journal of Accounting and Economics, 50(2–3), 344–401.
  31. Dechow, P. M., Hutton, A. P., & Sloan, R. G. (2000). The relation between analysts’ forecasts of long-term earnings growth and stock price performance following equity offerings. Contemporary Accounting Research, 17(1), 1–32.
  32. Dechow, P. M., Richardson, S. A., & Tuna, I. (2003). Why are earnings kinky? An examination of the earnings management explanation. Review of Accounting Studies, 8(2), 355–384.
  33. DeFond, M. L., & Hung, M. (2003). An empirical analysis of analysts’ cash flow forecasts. Journal of Accounting and Economics, 35(1), 73–100.
  34. Degeorge, F., Patel, J., & Zeckhauser, R. (1999). Earnings management to exceed thresholds. The Journal of Business, 72(1), 1–33.
  35. Diether, K. B., Malloy, C. J., & Scherbina, A. (2002). Differences of opinion and the cross section of stock returns. The Journal of Finance, 57(5), 2113–2141.
  36. Durtschi, C., & Easton, P. (2005). Earnings management? The shapes of the frequency distributions of earnings metrics are not evidence ipso facto. Journal of Accounting Research, 43(4), 557–592.
  37. Durtschi, C., & Easton, P. (2009). Earnings management? Erroneous inferences based on earnings frequency distributions. Journal of Accounting Research, 47(5), 1249–1281.
  38. Erickson, M., & Wang, S. (1999). Earnings management by acquiring firms in stock for stock mergers. Journal of Accounting and Economics, 27(2), 149–176.
  39. Erickson, M., Wang, S.-W., & Zhang, X. F. (2012). The change in information uncertainty and acquirer wealth losses. Review of Accounting Studies, 17(4), 913–943.
  40. Fang, V. W., Huang, A. H., & Karpoff, J. M. (2015). Short selling and earnings management: A controlled experiment. The Journal of Finance, 71(3), 1251–1294.
  41. Fang, L., & Yasuda, A. (2009). The effectiveness of reputation as a disciplinary mechanism in sell-side research. Review of Financial Studies, 22(9), 3735–3777.
  42. Francis, J., & Philbrick, D. (1993). Analysts’ decisions as products of a multi-task environment. Journal of Accounting Research, 31(2), 216–230.
  43. Frankel, R., Mayew, W. J., & Sun, Y. (2010). Do pennies matter? Investor relations consequences of small negative earnings surprises. Review of Accounting Studies, 15(1), 220–242.
  44. Freeman, R. N., & Tse, S. Y. (1992). A nonlinear model of security price responses to unexpected earnings. Journal of Accounting Research, 30(2), 185–209.
  45. Graham, J. R., Harvey, C. R., & Rajgopal, S. (2005). The economic implications of corporate financial reporting. Journal of Accounting and Economics, 40(1–3), 3–73.
  46. Green, T. C., Jame, R., Markov, S., & Subasi, M. (2014). Access to management and the informativeness of analyst research. Journal of Financial Economics, 114(2), 239–255.
  47. Groysberg, B., Healy, P. M., & Maber, D. A. (2011). What drives sell-side analyst compensation at high-status investment banks? Journal of Accounting Research, 49(4), 969–1000.
  48. Hilary, G., & Hsu, C. (2013). Analyst forecast consistency. The Journal of Finance, 68(1), 271–297.
  49. Hong, H., & Kubik, J. D. (2003). Analyzing the analysts: career concerns and biased earnings forecasts. The Journal of Finance, 58(1), 313–351.
  50. Hughes, J., Liu, J., & Su, W. (2008). On the relation between predictable market returns and predictable analyst forecast errors. Review of Accounting Studies, 13(2–3), 266–291.
  51. Jackson, A. R. (2005). Trade generation, reputation, and sell-side analysts. The Journal of Finance, 60(2), 673–717.
  52. Jakab, S. (2013). Ahead of the tape: Don’t stay late at market’s surprise party. The Wall Street Journal. http://www.wsj.com/articles/SB10001424127887323706704578230002990343068.
  53. Jorgensen, B. N., Lee, Y. G., & Rock, S. (2014). The shapes of scaled earnings histograms are not due to scaling and sample selection: Evidence from distributions of reported earnings per share. Contemporary Accounting Research, 31(2), 498–521.
  54. Ke, B., & Yu, Y. (2006). The effect of issuing biased earnings forecasts on analysts’ access to management and survival. Journal of Accounting Research, 44(5), 965–999.
  55. Keung, E., Lin, Z.-X., & Shih, M. (2010). Does the stock market see a zero or small positive earnings surprise as a red flag? Journal of Accounting Research, 48(1), 91–121.
  56. Kinney, W., Burgstahler, D., & Martin, R. (2002). Earnings surprise “materiality” as measured by stock returns. Journal of Accounting Research, 40(5), 1297–1329.
  57. Lahiri, K., & Sheng, X. (2010). Measuring forecast uncertainty by disagreement: The missing link. Journal of Applied Econometrics, 25(4), 514–538.
  58. Lang, M. H., & Lundholm, R. J. (1996). Corporate disclosure policy and analyst behavior. The Accounting Review, 71(4), 467–492.
  59. Lehavy, R., Li, F., & Merkley, K. (2011). The effect of annual report readability on analyst following and the properties of their earnings forecasts. The Accounting Review, 86(3), 1087–1115.
  60. Leone, A. J., & Wu, J. S. (2007). What does it take to become a superstar? Evidence from institutional investor rankings of financial analysts. Working paper (May 2007). http://papers.ssrn.com/sol3/papers.cfm?abstract_id=313594.
  61. Lim, T. (2001). Rationality and analysts’ forecast bias. The Journal of Finance, 56(1), 369–385.
  62. Lim, C.-Y., & Tan, H.-T. (2008). Non-audit service fees and audit quality: The impact of auditor specialization. Journal of Accounting Research, 46(1), 199–246.
  63. Lin, H., & McNichols, M. F. (1998). Underwriting relationships, analysts’ earnings forecasts and investment recommendations. Journal of Accounting and Economics, 25(1), 101–127.
  64. Ljungqvist, A., Marston, F., Starks, L. T., Wei, K. D., & Yan, H. (2007). Conflicts of interest in sell-side research and the moderating role of institutional investors. Journal of Financial Economics, 85(2), 420–456.
  65. Lys, T., & Sohn, S. (1990). The association between revisions of financial analysts’ earnings forecasts and security-price changes. Journal of Accounting and Economics, 13(4), 341–363.
  66. Matsumoto, D. A. (2002). Management’s incentives to avoid negative earnings surprises. The Accounting Review, 77(3), 483–514.
  67. Mayew, W. J. (2008). Evidence of management discrimination among analysts during earnings conference calls. Journal of Accounting Research, 46(3), 627–659.
  68. McInnis, J., & Collins, D. W. (2011). The effect of cash flow forecasts on accrual quality and benchmark beating. Journal of Accounting and Economics, 51(3), 219–239.
  69. McVay, S., Nagar, V., & Tang, V. (2006). Trading incentives to meet the analyst forecast. Review of Accounting Studies, 11(4), 575–598.
  70. Mikhail, M. B., Walther, B. R., & Willis, R. H. (1997). Do security analysts improve their performance with experience? Journal of Accounting Research, 35, 131.
  71. Mikhail, M. B., Walther, B. R., & Willis, R. H. (1999). Does forecast accuracy matter to security analysts? The Accounting Review, 74(2), 185–200.
  72. Minton, B. A., Schrand, C. M., & Walther, B. R. (2002). The role of volatility in forecasting. Review of Accounting Studies, 7(2–3), 195–215.
  73. Mohanram, P., & Gode, D. (2013). Removing predictable analyst forecast errors to improve implied cost of equity estimates. Review of Accounting Studies, 18(2), 443–478.
  74. Payne, J. L., & Robb, S. W. G. (2000). Earnings management: the effect of ex ante earnings expectations. Journal of Accounting, Auditing & Finance, 15(4), 371–392.
  75. Rajgopal, S., & Venkatachalam, M. (2011). Financial reporting quality and idiosyncratic return volatility. Journal of Accounting and Economics, 51(1–2), 1–20.
  76. Reichelt, K. J., & Wang, D. (2010). National and office-specific measures of auditor industry expertise and effects on audit quality. Journal of Accounting Research, 48(3), 647–686.
  77. Richardson, S. A., Teoh, S. H., & Wysocki, P. D. (2004). The walk-down to beatable analyst forecasts: the role of equity issuance and insider trading incentives. Contemporary Accounting Research, 21(4), 885–924.
  78. Sheng, X., & Thevenot, M. (2012). A new measure of earnings forecast uncertainty. Journal of Accounting and Economics, 53(1–2), 21–33.
  79. Shon, J., & Veliotis, S. (2013). Insiders’ sales under rule 10b5-1 plans and meeting or beating earnings expectations. Management Science, 59(9), 1988–2002.
  80. Singer, Z., & You, H. (2011). The effect of Section 404 of the Sarbanes-Oxley Act on earnings quality. Journal of Accounting, Auditing & Finance, 26(3), 556–589.
  81. Skinner, D., & Sloan, R. (2002). Earnings surprises, growth expectations, and stock returns or don’t let an earnings torpedo sink your portfolio. Review of Accounting Studies, 7(2), 289–312.
  82. So, E. C. (2013). A new approach to predicting analyst forecast errors: do investors overweight analyst forecasts? Journal of Financial Economics, 108(3), 615–640.
  83. Soltes, E. (2014). Private interaction between firm management and sell-side analysts. Journal of Accounting Research, 52(1), 245–272.CrossRefGoogle Scholar
  84. Stickel, S. E. (1991). Common stock returns surrounding earnings forecast revisions: more puzzling evidence. The Accounting Review, 66(2), 402–416.Google Scholar
  85. Stickel, S. E. (1992). Reputation and performance among security analysts. The Journal of Finance, 47(5), 1811.CrossRefGoogle Scholar
  86. Zweig, J. (2011). The intelligent investor: Why you shouldn’t buy those quarterly earnings surprises. The Wall Street Journal. http://www.wsj.com/articles/SB10001424052702303763404576419783497869132.

Copyright information

© The Author(s) 2016

Open Access. This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.

Authors and Affiliations

  1. Amsterdam Business School, University of Amsterdam, Amsterdam, The Netherlands