1 Introduction

The standard model (SM) description of electroweak and strong interactions can be tested through measurements of the \(\mathrm{W^+W^-}\) production cross section at a hadron collider. The s-channel and t-channel \(\mathrm {q} \overline {\mathrm {q}} \) annihilation diagrams, shown in Fig. 1, correspond to the dominant process in the SM at present energies. The gluon–gluon diagrams, which contain a loop at lowest order, contribute only 3 % of the total cross section [1] at \(\sqrt{s} = 7~\mathrm{TeV}\). The WWγ and WWZ triple gauge-boson couplings (TGCs) [2], responsible for s-channel \(\mathrm{W^+W^-}\) production, are sensitive to possible new physics processes at a higher mass scale. Anomalous values of the TGCs would change the \(\mathrm{W^+W^-}\) production rate and potentially certain kinematic distributions with respect to the SM prediction. Aside from tests of the SM, \(\mathrm{W^+W^-}\) production represents an important background source for new particle searches, e.g. for Higgs boson searches [3–5]. Next-to-leading-order (NLO) calculations of \(\mathrm{W^+W^-}\) production in pp collisions at \(\sqrt{s} = 7~\mathrm{TeV}\) predict a cross section of \(\sigma_{\mathrm{NLO}}(\mathrm{pp}\to\mathrm{W^+W^-}) = 47.0 \pm 2.0~\mathrm{pb}\) [1].

Fig. 1

Leading-order Feynman diagrams for \(\mathrm {q} \overline {\mathrm {q}} \) annihilation, for s-channel (top) and t-channel (bottom) production of W pairs. The triple gauge-boson vertex corresponds to the WWγ(Z) interaction in the first diagram

This paper reports a measurement of the \(\mathrm{W^+W^-}\) cross section in the \(\mathrm {W}^{+} \mathrm {W}^{-} \to\ell^{+}\nu\ell^{-} \overline {\nu } \) final state in pp collisions at \(\sqrt{s} = 7~\mathrm{TeV}\) and constraints on anomalous triple gauge-boson couplings. The measurement is performed with the Compact Muon Solenoid (CMS) detector at the Large Hadron Collider (LHC) using the full 2011 data sample, corresponding to an integrated luminosity of \(4.92 \pm 0.11~\mathrm{fb}^{-1}\), more than two orders of magnitude larger than the data used in the first measurements with the CMS [6] and ATLAS [7] experiments at the LHC, and comparable in size to the data sets more recently analysed by ATLAS [8, 9].

2 The CMS detector and simulations

The CMS detector is described in detail elsewhere [10] so only the key components for this analysis are summarised here. A superconducting solenoid occupies the central region of the CMS detector, providing an axial magnetic field of 3.8 T parallel to the beam direction. A silicon pixel and strip tracker, a crystal electromagnetic calorimeter, and a brass/scintillator hadron calorimeter are located within the solenoid. A quartz-fiber Cherenkov calorimeter extends the coverage to |η|<5.0, where pseudorapidity is defined as η=−ln[tan(θ/2)], and θ is the polar angle of the particle trajectory with respect to the anticlockwise-beam direction. Muons are measured in gas-ionisation detectors embedded in the steel magnetic-flux-return yoke outside the solenoid. The first level of the CMS trigger system, composed of custom hardware processors, is designed to select the most interesting events in less than 3 μs using information from the calorimeters and muon detectors. The high-level trigger processor farm further decreases the rate of stored events to a few hundred hertz for subsequent analysis.

This measurement exploits \(\mathrm{W^+W^-}\) pairs in which both bosons decay leptonically, yielding an experimental signature of two isolated, high transverse momentum (\(p_{\mathrm{T}}\)), oppositely charged leptons (electrons or muons) and large missing transverse energy (\(E_{\mathrm {T}}^{\mathrm {miss}} \)) due to the undetected neutrinos. The \(E_{\mathrm {T}}^{\mathrm {miss}} \) is defined as the modulus of the vectorial sum of the transverse momenta of all reconstructed particles, charged and neutral, in the event. This variable, together with the full event selection, is explained in detail in Sect. 3.

Several SM processes constitute backgrounds for the \(\mathrm{W^+W^-}\) sample. These include W+jets and quantum chromodynamics (QCD) multijet events where at least one of the jets is misidentified as a lepton, top-quark production (\(\mathrm {t}\overline {\mathrm {t}} \) and tW), Drell–Yan \(\mathrm{Z}/\gamma^{*}\to\ell^{+}\ell^{-}\) production, and diboson production (\(\mathrm{W}\gamma^{(*)}\), WZ, and ZZ) processes.

A number of Monte Carlo (MC) event generators are used to simulate the signal and backgrounds. The \(\mathrm {q} \overline {\mathrm {q}} \to \mathrm {W}^{+} \mathrm {W}^{-} \) signal, W+jets, WZ, and \(\mathrm{W}\gamma^{(*)}\) processes are generated using the MadGraph 5.1.3 [11] event generator. The \(\mathrm{gg}\to\mathrm{W^+W^-}\) signal component is simulated using gg2ww [12]. The powheg 2.0 program [13] provides event samples for the Drell–Yan, \(\mathrm {t}\overline {\mathrm {t}} \), and tW processes. The remaining background processes are simulated using pythia 6.424 [14].

The default set of parton distribution functions (PDFs) used to produce the LO MC samples is CTEQ6L [15], while CT10 [16] is used for NLO generators. The NLO calculations are used for background cross sections. For all processes, the detector response is simulated using a detailed description of the CMS detector, based on the Geant4 package [17].

The simulated samples include the effects of multiple pp interactions in each beam crossing (pileup), and are reweighted to match the pileup distribution as measured in data.

3 Event selection

This measurement considers signal candidates in three final states: \(\mathrm{e^+e^-}\), \(\mu^+\mu^-\), and \(\mathrm{e}^{\pm}\mu^{\mp}\). The \(\mathrm{W}\to\ell\nu\) (\(\ell = \mathrm{e}\) or \(\mu\)) decays are the main signal components; \(\mathrm{W}\to\tau\nu_{\tau}\) events with leptonic τ decays are included, although the analysis is not optimised for this final state. The trigger requires the presence of one or two high-\(p_{\mathrm{T}}\) electrons or muons. For single-lepton triggers the \(p_{\mathrm{T}}\) threshold is 27 (15) GeV for electrons (muons). For double-lepton triggers with leptons of the same flavour, the \(p_{\mathrm{T}}\) thresholds are lowered to 18 and 8 GeV for the first and second electrons, respectively, and to 7 GeV for each of the two muons. Different-flavour lepton triggers are also used. The overall trigger efficiency for signal events is measured to be approximately 98 % using data.

Two oppositely charged lepton candidates are required, both with \(p_{\mathrm{T}} > 20~\mathrm{GeV}\). Electron candidates are selected using a multivariate approach that exploits correlations between the selection variables described in Ref. [18] to improve the identification performance, while muon candidates [19] are identified using a selection close to that described in Ref. [6]. Charged leptons from W boson decays are expected to be isolated from any other activity in the event. The lepton candidates are required to be consistent with originating at the primary vertex of the event, which is chosen as the vertex with the highest \(\sum p_{\mathrm {T}} ^{2}\) of its associated tracks. This criterion provides the correct assignment for the primary vertex in more than 99 % of events for the pileup distribution observed in the data; the assignment efficiency is determined in simulation, by checking how often the vertex with the highest \(\sum p_{\mathrm{T}}^{2}\) of its constituent tracks is consistent with the vertex formed by the two leptons, and verified in data.

The particle-flow (PF) technique [20], which combines the information from all CMS subdetectors to reconstruct each individual particle, is used to calculate the isolation variable. For each lepton candidate, a cone around the lepton direction at the event vertex is constructed, with radius \(\Delta R = \sqrt{(\Delta\eta)^{2} + (\Delta\phi)^{2}}\), where Δη and Δϕ are the distances from the lepton track in η and azimuthal angle, ϕ (in radians), respectively; ΔR takes a value of 0.4 (0.3) for electrons (muons). The scalar sum of the transverse momenta of the particles reconstructed with the PF algorithm that are contained within the cone is calculated, excluding the contribution from the lepton candidate itself. If this sum exceeds approximately 10 % of the candidate \(p_{\mathrm{T}}\), the lepton is rejected; the exact requirement depends on the lepton flavour and on η.
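The relative-isolation requirement described above can be summarised in a short sketch. The cone sizes and the approximately 10 % relative threshold are taken from the text; the function names, the simple dictionary event format, and the single flat threshold (in place of the flavour- and η-dependent values used in the analysis) are illustrative assumptions.

```python
import math

def delta_r(eta1, phi1, eta2, phi2):
    """Separation in (eta, phi), with the azimuthal difference wrapped to [-pi, pi]."""
    dphi = math.remainder(phi1 - phi2, 2.0 * math.pi)
    return math.hypot(eta1 - eta2, dphi)

def relative_isolation(lepton, pf_candidates, cone):
    """Scalar sum of PF-candidate pT inside the cone, excluding the lepton
    candidate itself, divided by the lepton pT."""
    iso = sum(c["pt"] for c in pf_candidates
              if c is not lepton and
              delta_r(lepton["eta"], lepton["phi"], c["eta"], c["phi"]) < cone)
    return iso / lepton["pt"]

def passes_isolation(lepton, pf_candidates, threshold=0.10):
    """Cone of 0.4 (0.3) for electrons (muons); ~10 % relative threshold
    as a stand-in for the flavour- and eta-dependent values."""
    cone = 0.4 if lepton["flavour"] == "e" else 0.3
    return relative_isolation(lepton, pf_candidates, cone) < threshold
```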

Jets are reconstructed from calorimeter and tracker information using the PF technique [21]. The anti-\(k_{\mathrm{T}}\) clustering algorithm [22] with a distance parameter of 0.5, as implemented in the FastJet package [23, 24], is used. To correct for the contribution to the jet energy from pileup, a median energy density ρ, or energy per unit area of jet, is determined event by event. The pileup contribution to the jet energy is estimated as the product of ρ and the area of the jet and subsequently subtracted [25] from the jet transverse energy \(E_{\mathrm{T}}\). Jet energy corrections are also applied as a function of the jet \(E_{\mathrm{T}}\) and η [26]. To reduce the background from top-quark decays, a jet veto is applied: events with one or more jets with corrected \(E_{\mathrm{T}} > 30~\mathrm{GeV}\) and |η|<5.0 are rejected.
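A compact sketch of the ρ × area pileup subtraction and the jet veto follows. The per-jet correction factor and the dictionary jet format are illustrative assumptions, standing in for the full jet-energy-correction chain of Ref. [26].

```python
def corrected_jet_et(raw_et, jet_area, rho, jec=1.0):
    """Subtract the estimated pileup contribution (rho x jet area) from the
    raw jet ET, then apply a residual jet-energy-correction factor."""
    return max(raw_et - rho * jet_area, 0.0) * jec

def passes_jet_veto(jets, rho, et_threshold=30.0, eta_max=5.0):
    """Reject the event if any corrected jet has ET > 30 GeV within |eta| < 5."""
    for jet in jets:
        et = corrected_jet_et(jet["raw_et"], jet["area"], rho, jet.get("jec", 1.0))
        if et > et_threshold and abs(jet["eta"]) < eta_max:
            return False
    return True
```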

To further suppress the top-quark background, two top-quark tagging techniques based on soft-muon and b-jet tagging [27, 28] are applied. The first method vetoes events containing muons from b-quark decays, which can be either low-\(p_{\mathrm{T}}\) muons or nonisolated high-\(p_{\mathrm{T}}\) muons. The second method uses information from tracks with large impact parameter within jets, and vetoes events with a b-jet tagging discriminator value above the chosen threshold. Once the full event selection is applied, these tagging techniques together reduce the \(\mathrm {t}\overline {\mathrm {t}} \) background by about a factor of two.

The Drell–Yan background has a production cross section several orders of magnitude larger than that of the \(\mathrm{W^+W^-}\) process. To suppress Drell–Yan events, two different \(E_{\mathrm {T}}^{\mathrm {miss}} \) vectors are used [29]. The first is reconstructed using the particle-flow algorithm, while the second uses only the charged-particle candidates associated with the primary vertex and is therefore less sensitive to pileup. The projected \(E_{\mathrm {T}}^{\mathrm {miss}} \) is defined as the component of \(E_{\mathrm {T}}^{\mathrm {miss}} \) transverse to the direction of the nearest lepton, if that lepton is closer than π/2 in azimuthal angle, and the full \(E_{\mathrm {T}}^{\mathrm {miss}} \) otherwise. A lower cut on this observable efficiently rejects \(\mathrm{Z}/\gamma^{*}\to\tau^{+}\tau^{-}\) background events, in which the \(E_{\mathrm {T}}^{\mathrm {miss}} \) is preferentially aligned with the leptons, as well as \(\mathrm{Z}/\gamma^{*}\to\ell^{+}\ell^{-}\) events with mismeasured \(E_{\mathrm {T}}^{\mathrm {miss}} \) associated with poorly reconstructed leptons or jets. The minimum of the projections of the two \(E_{\mathrm {T}}^{\mathrm {miss}} \) vectors is used, exploiting the correlation between them in events with significant genuine \(E_{\mathrm {T}}^{\mathrm {miss}} \), as in the signal, and the lack of correlation otherwise, as in Drell–Yan events. The requirement in the \(\mathrm{e^+e^-}\) and \(\mu^+\mu^-\) final states is projected \(E_{\mathrm {T}}^{\mathrm {miss}} > ( 37 + N_{\mathrm{vtx}}/ 2 )~\mathrm{GeV}\), which depends on the number of reconstructed primary vertices (\(N_{\mathrm{vtx}}\)); in this way the dependence of the Drell–Yan background on pileup is minimised. For the \(\mathrm{e}^{\pm}\mu^{\mp}\) final state, which has smaller contamination from \(\mathrm{Z}/\gamma^{*}\to\ell^{+}\ell^{-}\) decays, the threshold is lowered to 20 GeV. These requirements remove more than 99 % of the Drell–Yan background; the actual number of accepted background events is obtained from the data, as explained below.
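The projected-\(E_{\mathrm {T}}^{\mathrm {miss}}\) selection can be expressed as a short sketch. The definitions and thresholds follow the text; the function names and argument layout are illustrative assumptions.

```python
import math

def projected_met(met, met_phi, lepton_phis):
    """Component of ETmiss transverse to the nearest lepton when that lepton
    is within pi/2 in azimuth; the full ETmiss otherwise."""
    dphi_min = min(abs(math.remainder(met_phi - phi, 2.0 * math.pi))
                   for phi in lepton_phis)
    return met * math.sin(dphi_min) if dphi_min < math.pi / 2.0 else met

def passes_met_requirement(pf_met, pf_met_phi, trk_met, trk_met_phi,
                           lepton_phis, n_vtx, same_flavour):
    """Minimum of the two projected ETmiss values; pileup-dependent threshold
    for same-flavour dileptons, flat 20 GeV threshold for e-mu."""
    min_proj = min(projected_met(pf_met, pf_met_phi, lepton_phis),
                   projected_met(trk_met, trk_met_phi, lepton_phis))
    threshold = (37.0 + n_vtx / 2.0) if same_flavour else 20.0
    return min_proj > threshold
```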

Remaining \(\mathrm{Z}/\gamma^{*}\to\ell^{+}\ell^{-}\) events in which the Z boson recoils against a jet are reduced by requiring the angle in the transverse plane between the dilepton system and the most energetic jet to be smaller than 165 degrees. This selection is applied only in the \(\mathrm{e^+e^-}\) and \(\mu^+\mu^-\) final states, and only when the leading jet has \(E_{\mathrm{T}} > 15~\mathrm{GeV}\).

To further reduce the Drell–Yan background in the \(\mathrm{e^+e^-}\) and \(\mu^+\mu^-\) final states, events with a dilepton mass within ±15 GeV of the Z boson mass are rejected. Events with dilepton masses below 20 GeV are also rejected to suppress contributions from low-mass resonances. A similar requirement, with the threshold lowered to 12 GeV, is applied in the \(\mathrm{e}^{\pm}\mu^{\mp}\) final state. Finally, the transverse momentum of the dilepton system (\(p_{\mathrm {T}} ^{\ell\ell}\)) is required to be above 45 GeV to reduce both the Drell–Yan background and the contribution from misidentified leptons.

To reduce the background from other diboson processes, such as WZ or ZZ production, any event with an additional third lepton with \(p_{\mathrm{T}} > 10~\mathrm{GeV}\) passing the identification and isolation requirements is rejected. The \(\mathrm{W}\gamma^{(*)}\) background, in which the photon is misidentified as an electron, is suppressed by stringent photon-conversion rejection requirements [18].

4 Estimation of backgrounds

A combination of techniques is used to determine the contributions from backgrounds that remain after the \(\mathrm{W^+W^-}\) selection. The major contribution at this level comes from the top-quark processes, followed by the W+jets background.

The normalisation of the top-quark background is estimated from data by counting top-quark-tagged events, with the requirements explained in Sect. 3, and applying the corresponding tagging efficiency. The top-quark tagging efficiency (\(\epsilon_{\mathrm{top}\ \mathrm{tagged}}\)) is measured in a data sample, dominated by \(\mathrm {t}\overline {\mathrm {t}} \) and tW events, that is selected from a phase space close to that for \(\mathrm{W^+W^-}\) events, but instead requiring one jet with \(E_{\mathrm{T}} > 30~\mathrm{GeV}\). The residual number of top-quark events (\(N_{\mathrm{not}\ \mathrm{tagged}}\)) in the signal sample is given by

$$N_{\mathrm{not}\ \mathrm{tagged}} = N_{\mathrm{tagged}} \times(1-\epsilon_{\mathrm{top}\ \mathrm{tagged}}) / \epsilon_{\mathrm{top}\ \mathrm{tagged}}, $$

where \(N_{\mathrm{tagged}}\) is the number of tagged events. The total uncertainty in this background estimate is about 18 %, with the main contribution coming from the statistical and systematic uncertainties in the measurement of \(\epsilon_{\mathrm{top}\ \mathrm{tagged}}\).
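A minimal numerical sketch of this tag-and-extrapolate estimate, using the formula above with purely illustrative yields and tagging efficiency (not values from the paper):

```python
def untagged_top_yield(n_tagged, eff_top_tag):
    """N_not_tagged = N_tagged * (1 - eps_top_tagged) / eps_top_tagged."""
    return n_tagged * (1.0 - eff_top_tag) / eff_top_tag

# Illustrative only: with 200 tagged events and a 60 % tagging efficiency,
# about 133 untagged top-quark events would remain in the signal sample.
print(round(untagged_top_yield(200, 0.60)))  # -> 133
```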

The W+jets and QCD multijet backgrounds with jets misidentified as leptons are estimated by counting the number of events containing one lepton that satisfies the nominal selection criteria and another lepton that satisfies relaxed requirements on impact parameter and isolation but fails the nominal criteria. This sample, enriched in W+jets events, is extrapolated to the signal region using the efficiencies for such loosely identified leptons to pass the tight selection. These efficiencies are measured in data using multijet events and are parametrised as functions of the \(p_{\mathrm{T}}\) and η of the lepton candidate. The QCD multijet background is found to be negligible. The systematic uncertainties stemming from this efficiency determination dominate the overall uncertainty, which is estimated to be about 36 %. The main contribution to this uncertainty comes from the difference between the jet \(p_{\mathrm{T}}\) spectrum in the sample used to measure the efficiencies, composed mainly of QCD multijet events, and that in the sample, primarily W+jets, from which the extrapolation is performed.
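A sketch of this loose-to-tight extrapolation is given below. The f/(1−f) weighting is the standard "tight + fail" estimate implied by the text; the fake-rate lookup function and the event format are illustrative assumptions.

```python
def fake_weight(fake_rate):
    """Weight extrapolating a loose-but-not-tight lepton to the tight
    selection: f / (1 - f), with f the misidentification probability."""
    return fake_rate / (1.0 - fake_rate)

def wjets_estimate(tight_plus_loose_events, fake_rate):
    """Sum the weights over events with one tight lepton and one
    loose-but-not-tight lepton; f is parametrised in pT and |eta|."""
    return sum(fake_weight(fake_rate(ev["loose_pt"], abs(ev["loose_eta"])))
               for ev in tight_plus_loose_events)
```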

The residual Drell–Yan contribution to the \(\mathrm{e^+e^-}\) and \(\mu^+\mu^-\) final states outside of the Z boson mass window (\(N_{\mathrm{out}}^{\ell\ell,\mathrm{exp}}\)) is estimated by normalising the simulation to the observed number of events inside the Z boson mass window in data (\(N_{\mathrm{in}}^{\ell\ell}\)). The contribution in this region from other processes where the two leptons do not come from a Z boson (\(N_{\mathrm{in}}^{\text{non-Z}}\)) is subtracted before performing the normalisation. This contribution is estimated from the number of \(\mathrm{e}^{\pm}\mu^{\mp}\) data events within the Z boson mass window. The WZ and ZZ contributions in the Z mass window (\(N_{\mathrm{in}}^{ \mathrm {Z} \mathrm{V}}\)), where the leptons come from the same Z boson as in Drell–Yan production, are also subtracted using simulation. The residual background in the \(\mathrm{W^+W^-}\) data outside the Z boson mass window is thus expressed as

$$N_{\mathrm{out}}^{\ell\ell,\mathrm{exp}} = R^{\ell\ell}_{\mathrm{out}/\mathrm{in}} \bigl(N_{\mathrm{in}}^{\ell\ell} - N_{\mathrm{in}}^{\text{non-Z}} - N_{\mathrm{in}}^{\mathrm{ZV}}\bigr), $$

with

$$R^{\ell\ell}_{\mathrm{out}/\mathrm{in}} = N_{\mathrm{out}}^{\ell \ell ,\mathrm{MC}}/N_{\mathrm{in}}^{\ell\ell,\mathrm{MC}}. $$

The systematic uncertainty in the final Drell–Yan estimate is derived from the dependence of \(R^{\ell\ell}_{\mathrm{out}/\mathrm{in}}\) on the value of the \(E_{\mathrm {T}}^{\mathrm {miss}} \) requirement.
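Putting the two relations above together, a minimal sketch of the data-driven Drell–Yan estimate follows; the flavour-dependent scaling of the \(\mathrm{e}^{\pm}\mu^{\mp}\) yield used in the analysis is omitted for simplicity, and all numerical inputs are placeholders.

```python
def drell_yan_out_estimate(n_in_data, n_in_nonz, n_in_zv_mc, n_out_mc, n_in_mc):
    """N_out^exp = R_out/in * (N_in - N_in^non-Z - N_in^ZV), with
    R_out/in = N_out^MC / N_in^MC taken from simulation."""
    r_out_in = n_out_mc / n_in_mc
    return r_out_in * (n_in_data - n_in_nonz - n_in_zv_mc)

# Placeholder numbers, for illustration only.
print(drell_yan_out_estimate(n_in_data=5000, n_in_nonz=150, n_in_zv_mc=80,
                             n_out_mc=40, n_in_mc=4000))  # -> 47.7
```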

Finally, a control sample with three reconstructed leptons is defined to rescale the simulation-based estimate of the \(\mathrm{W}\gamma^{*}\) background coming from asymmetric \(\gamma^{*}\) decays, where one lepton escapes detection [30].

Other backgrounds are estimated from simulation. The Wγ background estimate is cross-checked in data using the events passing all the selection requirements except that the two leptons must have the same charge; this sample is dominated by W+jets and Wγ events. The \(\mathrm{Z}/\gamma^{*}\to\tau^{+}\tau^{-}\) contamination is also cross-checked using \(\mathrm{Z}/\gamma^{*}\to\mathrm{e^+e^-}\) and \(\mathrm{Z}/\gamma^{*}\to\mu^{+}\mu^{-}\) events selected in data, where the leptons are replaced with simulated τ-lepton decays, and the results are consistent with the simulation. Other minor backgrounds are WZ and ZZ diboson production where the two selected leptons come from different bosons.

The estimated event yields for all processes after the event selection are summarised in Table 1. The distributions of the key analysis variables are shown in Fig. 2.

Fig. 2

Distributions of the maximum lepton transverse momentum (\(p_{\mathrm{T}}^{\mathrm{max}}\)), the minimum lepton transverse momentum (\(p_{\mathrm{T}}^{\mathrm{min}}\)), the dilepton transverse momentum (\(p_{\mathrm {T}} ^{\ell\ell}\)) and the dilepton invariant mass (\(M_{\ell\ell}\)) at the final selection level. Some of the backgrounds have been rescaled to the estimates based on control samples in data, as described in the text. All leptonic channels are combined, and the uncertainty band corresponds to the statistical and systematic uncertainties in the predicted yield. The last bin includes the overflow. In the box below each distribution, the ratio of the observed CMS event yield to the total SM prediction is shown

Table 1 Signal and background predictions, compared to the observed yield in data. The prediction for the \(\mathrm{W^+W^-}\) process assumes the SM cross section value

5 Efficiencies and systematic uncertainties

The signal efficiency, which includes the acceptance of the detector, is estimated using simulation, including both the \(\mathrm {q} \overline {\mathrm {q}} \to \mathrm {W}^{+} \mathrm {W}^{-} \) and \(\mathrm{gg}\to\mathrm{W^+W^-}\) processes. Residual discrepancies in the lepton reconstruction and identification efficiencies between data and simulation are corrected by determining data-to-simulation scale factors measured using \(\mathrm{Z}/\gamma^{*}\to\ell^{+}\ell^{-}\) events in the Z peak region [31] that are recorded with unbiased triggers. These factors depend on the lepton \(p_{\mathrm{T}}\) and |η| and are within 4 % (2 %) of unity for electrons (muons). Effects due to \(\mathrm{W}\to\tau\nu_{\tau}\) decays with τ leptons decaying into lower-energy electrons or muons are included in the signal efficiency.

The experimental uncertainties in lepton reconstruction and identification efficiency, momentum scale and resolution, \(E_{\mathrm {T}}^{\mathrm {miss}} \) modelling, and jet energy scale are applied to the reconstructed objects in simulated events by smearing and scaling the relevant observables and propagating the effects to the kinematic variables used in the analysis. A relative uncertainty of 2.3 % in the signal efficiency due to multiple collisions within a bunch crossing is taken from the observed variation in the efficiency in a comparison of two different pileup scenarios in simulation, reweighted to the observed data.

The relative uncertainty in the signal efficiency due to variations in the PDFs and the value of \(\alpha_{s}\) is 2.3 % (0.8 %) for \(\mathrm {q} \overline {\mathrm {q}} \) (gg) production, following the PDF4LHC prescription [16, 32–36]. The effect of higher-order corrections, studied using the mcfm program [1], is found to be 1.5 % (30 %) for \(\mathrm {q} \overline {\mathrm {q}} \) annihilation (gg), obtained by varying the renormalisation (\(\mu_{R}\)) and factorisation (\(\mu_{F}\)) scales in the range \((\mu_{0}/2, 2\mu_{0})\), with \(\mu_{0}\) equal to the mass of the W boson, and setting \(\mu_{R} = \mu_{F}\). The \(\mathrm{W^+W^-}\) jet veto efficiency in data is estimated from simulation, multiplied by a data-to-simulation scale factor derived from \(\mathrm{Z}/\gamma^{*}\to\ell^{+}\ell^{-}\) events in the Z peak,

$$\epsilon_{ \mathrm {W}^{+} \mathrm {W}^{-} }^{\mathrm{data}} = \epsilon_{ \mathrm {W}^{+} \mathrm {W}^{-} }^\mathrm{MC} \times\epsilon_{ \mathrm {Z} }^{\mathrm{data}}/\epsilon_{ \mathrm {Z} }^\mathrm{MC}, $$

where \(\epsilon_{ \mathrm {W}^{+} \mathrm {W}^{-} }^{\mathrm{data}}\) and \(\epsilon_{ \mathrm {W}^{+} \mathrm {W}^{-} }^{\mathrm{MC}}\) (\(\epsilon_{ \mathrm {Z} }^{\mathrm{data}}\) and \(\epsilon_{ \mathrm {Z} }^{\mathrm{MC}}\)) are the jet veto efficiencies for the \(\mathrm{W^+W^-}\) (Z) process in data and simulation, respectively. The uncertainty in this efficiency is factorised into the uncertainty in the Z efficiency in data and the uncertainty in the ratio of the \(\mathrm{W^+W^-}\) efficiency to the Z efficiency in simulation (\(\epsilon_{ \mathrm {W}^{+} \mathrm {W}^{-} }^{\mathrm{MC}}/\epsilon _{ \mathrm {Z} }^{\mathrm{MC}}\)). The former, dominated by the statistical uncertainty, is 0.3 %. The latter, dominated by theoretical uncertainties from higher-order corrections, is 4.6 %. The data-to-simulation correction factor derived from the \(\mathrm{Z}/\gamma^{*}\to\ell^{+}\ell^{-}\) events is close to unity.
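As a minimal sketch, the jet-veto efficiency correction of the formula above amounts to a single multiplication; the numbers in the usage line are placeholders, not values from the paper.

```python
def ww_jet_veto_eff_in_data(eff_ww_mc, eff_z_data, eff_z_mc):
    """eps_WW^data = eps_WW^MC x (eps_Z^data / eps_Z^MC): the simulated WW
    jet-veto efficiency rescaled by the Z-based data/MC scale factor."""
    return eff_ww_mc * (eff_z_data / eff_z_mc)

# Placeholder values: a scale factor close to unity changes the simulated
# efficiency only slightly.
print(ww_jet_veto_eff_in_data(eff_ww_mc=0.60, eff_z_data=0.59, eff_z_mc=0.60))  # -> 0.59
```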

The uncertainties in the W+jets and top-quark background predictions are evaluated to be 36 % and 18 %, respectively, as described in Sect. 4. The total uncertainty in the \(\mathrm{Z}/\gamma^{*}\to\ell^{+}\ell^{-}\) normalisation is about 50 %, including both statistical and systematic contributions.

The theoretical uncertainties in the diboson cross sections are calculated by varying the renormalisation and factorisation scales using the mcfm program [1]. The effects of variations in the PDFs and the value of \(\alpha_{s}\) on the predicted cross sections are derived by following the same prescription as for the signal acceptance. Including the experimental uncertainties gives a systematic uncertainty of around 10 % for the WZ and ZZ processes. In the case of the \(\mathrm{W}\gamma^{(*)}\) backgrounds, it rises to 30 %, owing to the limited knowledge of the overall normalisation. The total uncertainty in the background estimates is about 15 %, dominated by the systematic uncertainties in the normalisation of the top-quark and W+jets backgrounds. A 2.2 % uncertainty is assigned to the integrated luminosity measurement [37]. A summary of the uncertainties is given in Table 2. For simplicity, averages of the estimates for the WZ and ZZ backgrounds are shown.

Table 2 Relative systematic uncertainties in the estimated signal and background yields, in units of percent

6 The \(\mathrm{W^+W^-}\) cross section measurement

The number of events observed in the signal region is \(N_{\mathrm{data}} = 1134\). The \(\mathrm{W^+W^-}\) yield is calculated by subtracting the expected contributions of the various SM background processes, \(N_{\mathrm{bkg}} = 247 \pm 15~\text{(stat.)} \pm 30~\text{(syst.)}\) events. The inclusive cross section is obtained from the expression

$$ \sigma_{ \mathrm {W}^{+} \mathrm {W}^{-} } = \frac{N_{\mathrm{data}}-N_{\mathrm{bkg}} }{\mathcal {L}_{\mathrm{int}} \cdot\epsilon\cdot ( 3 \cdot\mathcal{B}( \mathrm {W} \to\ell \overline {\nu } ) )^2}, $$
(1)

where the signal selection efficiency ϵ, including the detector acceptance and averaging over all lepton flavours, is found to be (3.28±0.02 (stat.)±0.26 (syst.)) % using simulation and taking into account the two production modes. As shown in Eq. (1), the efficiency is corrected by the branching fraction for a W boson decaying to each lepton family, \(\mathcal{B}( \mathrm {W} \to\ell \overline {\nu } ) = (10.80 \pm0.09)~\%\) [38], to estimate the final inclusive efficiency for the signal.

The \(\mathrm{W^+W^-}\) production cross section in pp collision data at \(\sqrt{s} = 7~\mathrm{TeV}\) is measured to be

$$ \sigma_{ \mathrm {W}^{+} \mathrm {W}^{-} } = 52.4 \pm2.0 \ \text {(stat.)} \pm4.5 \ \text {(syst.)} \pm1.2 \ \text {(lum.)} ~\mathrm{pb}. $$

The statistical uncertainty is due to the total number of observed events. The systematic uncertainty includes the uncertainties in the background prediction, both the statistical component from the limited number of events and the systematic one, as well as the uncertainty in the signal efficiency.
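As a quick arithmetic cross-check of Eq. (1), plugging the rounded numbers quoted above into the formula reproduces the central value (a sketch; only the published inputs are used).

```python
# Inputs as quoted in the text.
n_data, n_bkg = 1134, 247.0          # observed events and estimated background
lumi_pb = 4920.0                     # integrated luminosity, 4.92 fb^-1 in pb^-1
eff = 0.0328                         # signal efficiency including acceptance
br_w_lnu = 0.1080                    # B(W -> l nu) per lepton flavour

sigma_ww = (n_data - n_bkg) / (lumi_pb * eff * (3.0 * br_w_lnu) ** 2)
print(f"sigma(W+W-) = {sigma_ww:.1f} pb")  # ~52.4 pb, the quoted central value
```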

This measurement is consistent with the SM expectation of 47.0±2.0 pb, based on \(\mathrm {q} \overline {\mathrm {q}} \) annihilation and gluon–gluon fusion. For the event selection used in the analysis, the expected theoretical cross section may be larger by as much as 5 % because of additional \(\mathrm{W^+W^-}\) production processes, such as diffractive production [39], double parton scattering, QED exclusive production [40] and Higgs boson production with decay to \(\mathrm{W^+W^-}\). The dominant contribution of about 4 % would come from SM Higgs boson production, assuming its mass to be near 125 GeV [4].

The measured \(\mathrm{W^+W^-}\) cross section can be presented in terms of a ratio to the Z boson production cross section in the same data set. The \(\mathrm{W^+W^-}\) to Z cross section ratio, \(\sigma_{ \mathrm {W}^{+} \mathrm {W}^{-} }/\sigma_{ \mathrm {Z} }\), provides a good cross-check of this \(\mathrm{W^+W^-}\) cross section measurement, using the precisely known Z boson production cross section as a reference. This ratio has the advantage that some systematic effects cancel. More precise comparisons between measurements from different data-taking periods are possible because the ratio is independent of the integrated luminosity. The PDF uncertainty in the theoretical cross section prediction is also largely cancelled in this ratio, since both \(\mathrm{W^+W^-}\) pairs and Z bosons are produced mainly via \(\mathrm {q} \overline {\mathrm {q}} \) annihilation. The estimated theoretical value for this ratio is \([1.63 \pm 0.07~(\text{theor.})] \times 10^{-3}\) [31], where the scale uncertainty is treated as uncorrelated between the two processes, while the PDF uncertainty is assumed fully correlated.

The Z boson production cross section is measured in the \(\mathrm{e^+e^-}\) and \(\mu^+\mu^-\) final states using events passing the same lepton selection as in the \(\mathrm{W^+W^-}\) measurement and lying within the Z mass window, where the purity of the sample is about 99.8 % [31]. Nonresonant backgrounds (including \(\mathrm{Z}/\gamma^{*}\to\tau^{+}\tau^{-}\)) are estimated from data, while the resonant component of the WZ and ZZ processes is normalised to NLO cross sections using MC samples. The correlation of theoretical and experimental uncertainties between the two processes is taken into account. An additional 2 % uncertainty in the shape of the Z resonance due to final-state radiation and higher-order effects is assigned. The latter is based on the difference between the next-to-next-to-leading-order prediction from the fewz 2.0 [41] simulation code and the MC generator used in the analysis, and on the renormalisation and factorisation scale variations given by fewz.

The ratio of the inclusive \(\mathrm{W^+W^-}\) cross section to the Z boson cross section in the dilepton mass range between 60 and 120 GeV is measured to be

$$ \sigma_{ \mathrm {W}^{+} \mathrm {W}^{-} }/\sigma_{ \mathrm {Z} } = \bigl[ 1.79 \pm0.16\ (\text{stat.} {\oplus}\text{syst.}) \bigr] \times10^{-3}, $$

in agreement with the theoretical expectation. The Z boson cross section resulting from this ratio, assuming the standard model value for the \(\mathrm{W^+W^-}\) cross section, is 1.1 % higher than the inclusive Z boson cross section measured by CMS using the 2010 data set [31], corresponding to an integrated luminosity of \(36~\mathrm{pb}^{-1}\), but well within the systematic uncertainties of both measurements.

7 Limits on the anomalous triple gauge-boson couplings

A search for anomalous TGCs is performed using the effective Lagrangian approach with the LEP parametrisation [2] without form factors. The most general form of such a Lagrangian has 14 complex couplings (seven for WWZ and seven for WWγ). Assuming electromagnetic gauge invariance and charge and parity symmetry conservation, this number is reduced to five real couplings: \(\Delta\kappa_{\mathrm{Z}}\), \(\Delta g_{1}^{Z}\), \(\Delta\kappa_{\gamma}\), \(\lambda_{\mathrm{Z}}\) and \(\lambda_{\gamma}\). Applying gauge invariance constraints leads to

$$\begin{aligned} &\Delta\kappa_ \mathrm {Z} = \Delta g_1^ \mathrm {Z} - \Delta \kappa_\gamma{\tan }^2(\theta_{ \mathrm {W} }), \\ &\lambda_ \mathrm {Z} = \lambda_\gamma, \end{aligned}$$

which reduces the number of independent couplings to three. In the SM, all five couplings are zero. The coupling constants \(\Delta g_{1}^{ \mathrm {Z} }\) and \(\Delta\kappa_{\gamma}\) parametrise the differences from the standard model values of 1 for both \(g_{1}^{ \mathrm {Z} }\) and \(\kappa_{\gamma}\), which are measures of the WWZ and WWγ coupling strengths, respectively.

The presence of anomalous TGCs would enhance the production rate of diboson processes at high boson \(p_{\mathrm{T}}\) and high invariant mass. The effect of these couplings is ascertained by evaluating the expected distribution of \(p_{\mathrm{T}}^{\mathrm{max}}\), the transverse momentum of the leading (highest-\(p_{\mathrm{T}}\)) lepton, and comparing it to the measured distribution using a maximum-likelihood fit. The \(p_{\mathrm{T}}^{\mathrm{max}}\) distribution is a very sensitive observable for these searches and is widely used in fully leptonic final states, since the total mass of the event cannot be fully reconstructed. The likelihood L is defined as the product of a Poisson probability distribution function for the observed number of events (\(N_{\mathrm{obs}}\)) and the per-event probability density \(P(p_{\mathrm{T}})\):

$$ L = \mathrm{e}^{-N_{\mathrm{exp}}}(N_{\mathrm{exp}})^{N_{\mathrm{obs}}} \prod _{i=1}^{N_{\mathrm{obs}}} P(p_{\mathrm{T}i}), $$
(2)

where \(N_{\mathrm{exp}}\) is the expected number of signal and background events. The leading lepton \(p_{\mathrm{T}}\) distributions with anomalous couplings are simulated using the mcfm NLO generator, taking into account the detector effects. The distributions are corrected for the acceptance and lepton reconstruction efficiency, as described in Sect. 5. The uncertainties in the quoted integrated luminosity, signal selection and background fraction are assumed to be Gaussian. These uncertainties are incorporated in the likelihood function in Eq. (2) by introducing nuisance parameters with Gaussian constraints. A set of points with nonzero anomalous couplings is simulated, and the distributions for intermediate values are obtained by interpolation, assuming a quadratic dependence of the differential cross section on the anomalous couplings.
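The two numerical ingredients described above, the quadratic dependence on the coupling and the unbinned extended likelihood of Eq. (2), can be sketched as follows. Nuisance parameters and their Gaussian constraints are omitted, and all function names and inputs are illustrative.

```python
import math
import numpy as np

def quadratic_interpolation(coupling, sample_couplings, sample_values):
    """Fit y(c) = a + b*c + c2*c^2 through the simulated coupling points and
    evaluate it at an arbitrary coupling value."""
    c2, b, a = np.polyfit(sample_couplings, sample_values, 2)
    return a + b * coupling + c2 * coupling ** 2

def negative_log_likelihood(coupling, observed_pts, expected_yield, pt_density):
    """-log L for Eq. (2): extended Poisson term for the total yield plus the
    sum of per-event log densities of the leading-lepton pT."""
    n_exp = expected_yield(coupling)
    nll = n_exp - len(observed_pts) * math.log(n_exp)
    for pt in observed_pts:
        nll -= math.log(pt_density(pt, coupling))
    return nll
```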

Figure 3 shows the measured leading lepton \(p_{\mathrm{T}}\) distributions in data and the predictions for the SM \(\mathrm{W^+W^-}\) signal and background processes, as well as the expected distributions with nonzero anomalous couplings, in the two-dimensional \(\lambda_{\mathrm{Z}}\)–\(\Delta g_{1}^{ \mathrm {Z} }\) model.

Fig. 3

Leading lepton \(p_{\mathrm{T}}\) distribution in data (points with error bars) overlaid with the best fit using a two-dimensional \(\lambda_{\mathrm{Z}}\)–\(\Delta g_{1}^{ \mathrm {Z} }\) model (solid histogram) and two expected distributions with an anomalous coupling value \(\lambda_{\mathrm{Z}} \neq 0\) (dashed and dotted histograms). In the SM, \(\lambda_{\mathrm{Z}} = 0\). The last bin includes the overflow

No evidence for anomalous couplings is found. The 95 % confidence level (CL) intervals of allowed anomalous coupling values, obtained by setting the other two couplings to their SM values, are

$$\begin{aligned} &-0.048 \leq\lambda_{ \mathrm {Z} } \leq0.048, \\ &-0.095 \leq\Delta g^{ \mathrm {Z} }_1 \leq0.095, \\ &-0.21 \leq\Delta\kappa_\gamma\leq0.22. \end{aligned}$$

The results presented here are comparable with the measurements performed by the ATLAS Collaboration [8] using the LEP parametrisation. They are also comparable with those obtained at the Tevatron [42, 43], which are based on the HISZ parametrisation [44] and the LEP parametrisation with form factors, but they are not as precise as the combination of the LEP experiments [45–47]. Recently, CMS has set limits on these couplings [48] using a different final-state channel. Our measurements clearly demonstrate that both the WWZ and WWγ couplings exist, as predicted in the standard model (\(g_{1}^{ \mathrm {Z} } = 1\), \(\kappa_{\gamma} = 1\)). Figure 4 displays the contour plots at the 68 % and 95 % CL for the \(\Delta\kappa_{\gamma} = 0\) and \(\Delta g_{1}^{ \mathrm {Z} } = 0\) scenarios.

Fig. 4

The 68 % (solid line) and 95 % CL (dashed line) limit contours, as well as the central value (point) of the fit results using unbinned fits, for \(\Delta\kappa_{\gamma} = 0\) (top) and \(\Delta g_{1}^{ \mathrm {Z} } = 0\) (bottom). The one-dimensional 95 % CL limit for each coupling is also shown

8 Summary

This paper reports a measurement of the \(\mathrm{W^+W^-}\) cross section in the \(\mathrm {W}^{+} \mathrm {W}^{-} \to\ell^{+}\nu\ell^{-} \overline {\nu } \) decay channel in proton–proton collisions at a centre-of-mass energy of 7 TeV, using the full 2011 CMS data set. The \(\mathrm{W^+W^-}\) cross section is measured to be \(52.4 \pm 2.0~\text{(stat.)} \pm 4.5~\text{(syst.)} \pm 1.2~\text{(lum.)}~\mathrm{pb}\), consistent with the NLO theoretical prediction \(\sigma_{\mathrm{NLO}}(\mathrm{pp}\to\mathrm{W^+W^-}) = 47.0 \pm 2.0~\mathrm{pb}\). No evidence for anomalous WWZ and WWγ triple gauge-boson couplings is found, and stringent limits on their magnitudes are set.