1 Introduction

An overview of the FCC-ee project can be found in the FCC-ee conceptual design report (CDR) [1] and in this issue [2]. Two initial concepts for the FCC-ee detectors were described in the CDR: CLD, based on the work done for a detector for the CLIC collider, and IDEA, a new and possibly more cost-effective concept. They are typical of the detectors used in collider experiments, with a cylindrical “barrel” region closed at the extremities by two “endcaps”, and with an onion-like structure consisting of: subsystems that determine the momenta of charged particles by measuring the curvature of their trajectories in a magnetic field; calorimeter(s) that fully absorb electrons, photons and hadrons and allow their energies to be measured; a solenoid, surrounding the calorimeter or situated inside it, that provides the aforementioned magnetic field; and a muon detection system. A careful determination of the requirements that the FCC-ee physics programme sets on the detectors is now in order, and may call for a further optimisation or a redefinition of these concepts, or for the development of new ones. The goal of this paper is to review our current understanding of these detector requirements and of the processes that will be used to further define them.

Detectors for FCC-ee have to comply with the tight constraints imposed by the invasive machine-detector interface [1]: the last focusing quadrupole is only two metres from the interaction point (IP); the magnetic field of the experiment is constrained to be below 2 T for the run at the Z resonance; and the angular coverage of the detector cannot extend below 100 mrad from the beam axis, as this space is used by machine magnets. The experimental environment, which differs from that of a linear collider in particular at the Z peak (high physics event rates, small bunch spacing), also sets important constraints, preventing, for example, the use of pulsed electronics. The detector requirements imposed by the physics programme [2, 3] at a centre-of-mass energy \(\sqrt{s}\) of 240 GeV and above have already been studied extensively for the linear colliders, but will have to be revisited in the context of the FCC-ee environment. Moreover, the huge statistics anticipated at the Z resonance (the so-called “Tera-Z” run) come with specific challenges, as the systematic uncertainties of the measurements should be commensurate with their very small statistical uncertainties. In addition, the specific discovery potential for very weakly coupled particles, offered by the huge FCC-ee statistics, should be kept in mind when designing the detectors.

2 Control of acceptances

One of the strongest requirements imposed by the Tera-Z programme concerns the determination of the acceptances, which, generally, have to be known with an accuracy in the range from a few \(10^{-6}\) to \(10^{-4}\). For example, for the luminosity measurement [4], the goal is to reach an uncertainty of \(10^{-4}\) from low-angle Bhabha events, which would match the anticipated theoretical precision on the Bhabha cross section. With the luminosity monitor (LumiCal) only 1 m from the interaction point, and the measurement starting at an angle of 65 mrad, the inner radius of the LumiCal must be known to within \(1.6\,\upmu \)m [1]. A second example is provided by the measurement of \(R_\ell \), the ratio of the hadronic to leptonic Z decay widths, for which the lepton acceptance is a key systematic uncertainty. To match the anticipated statistical uncertainty, a careful control of the acceptance boundaries is again mandatory. For example, the innermost radius of the endcap calorimeter should be known to \(\mathcal{{O}}(15)\) \(\upmu \)m, which poses constraints on the mechanical assembly of the calorimeter. In this respect, a hermetic calorimeter would be better suited than a petal design.
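The arithmetic behind this tolerance can be sketched as follows. To leading order, the Bhabha cross section integrated over the LumiCal acceptance scales as \(1/\theta _{\min }^2 - 1/\theta _{\max }^2\), so the required tolerance on the inner acceptance angle (and hence on the inner radius) follows from simple error propagation. A minimal numerical sketch, assuming for illustration an outer acceptance angle of about 90 mrad (a value not quoted in the text):

```python
import numpy as np

z = 1.0            # distance of the LumiCal from the IP [m] (from the text)
theta_min = 65e-3  # inner acceptance angle [rad] (from the text)
theta_max = 90e-3  # outer acceptance angle [rad] (assumed, for illustration)
target = 1e-4      # target relative uncertainty on the luminosity

# Relative change of the accepted Bhabha cross section per unit change of
# theta_min: d(sigma)/sigma = (2/theta_min^3) d(theta_min) / (1/theta_min^2 - 1/theta_max^2)
sensitivity = (2.0 / theta_min**3) / (1.0 / theta_min**2 - 1.0 / theta_max**2)

d_theta = target / sensitivity   # tolerance on theta_min [rad]
d_radius = z * d_theta           # tolerance on the inner radius [m]
print(f"inner-radius tolerance: {d_radius * 1e6:.2f} um")  # ~1.6 um
```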

Fig. 1 Distribution of the Higgs recoil mass in ZH events where the Z decays into muons, assuming: (black) an ideal momentum resolution, such that the resolution on the recoil mass is determined by the beam energy spread; (blue) the momentum resolution of the IDEA detector; (red) that of the CLD detector

3 Measurement of the tracks of charged particles

Angular resolutions A precise determination of the beam energy spread (BES) is crucial for several key measurements at FCC-ee. In particular, the BES affects the Z lineshape, and it would have a huge effect on the extracted Z width if it were not accounted for. The BES is determined from the measurement of the longitudinal bunch length \(\sigma _s\). As accelerator diagnostics on \(\sigma _s\) cannot provide the level of precision required for FCC-ee measurements, the BES has to be determined by the experiments. In head-on collisions, as at LEP, the measurement of the longitudinal size of the luminous region with the tracking system provides a measurement of \(\sigma _s\) (see, e.g., Ref. [5]). At FCC-ee, because of the crossing angle, the longitudinal size of the luminous region depends on both \(\sigma _s\) and the transverse bunch size, and is actually driven by the latter, such that this method cannot be used. However, the BES can be measured at the level of a few per-mil from the scattering angles of dimuon events [6]. Because of the constrained kinematics of such events, the longitudinal imbalance can be reconstructed event by event, and the BES can be extracted from the width of its distribution. To ensure that the BES has a negligible effect on the extracted Z width, muon tracks from Z decays must be measured with an angular resolution of 0.1 mrad or better, a requirement that is fulfilled [7] by the detector concepts presented in the CDR.
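The principle of the width-based extraction can be illustrated with a toy Monte Carlo. For (nearly) head-on beams, the longitudinal momentum of the dimuon system equals the energy imbalance of the two beams, whose spread is \(\sqrt{2}\) times the single-beam BES; the actual method of Ref. [6] reconstructs this imbalance from the muon angles, with the crossing angle taken into account. A minimal sketch, with made-up event counts:

```python
import numpy as np

rng = np.random.default_rng(1)

e_nom = 45.6          # nominal beam energy at the Z pole [GeV]
bes = 0.0013 * e_nom  # beam energy spread, 0.13% of the beam energy (from the text)
n_events = 1_000_000

# Independent Gaussian fluctuations of the two beam energies
e_plus = rng.normal(e_nom, bes, n_events)
e_minus = rng.normal(e_nom, bes, n_events)

# For (nearly) collinear beams, the longitudinal momentum of the dimuon
# system equals the beam energy imbalance; in practice this imbalance is
# reconstructed event by event from the muon angles.
p_z = e_plus - e_minus

# The width of the imbalance distribution gives back the BES
bes_measured = p_z.std() / np.sqrt(2.0)
print(f"input BES: {bes*1e3:.1f} MeV, extracted: {bes_measured*1e3:.1f} MeV")
```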

Momentum resolution The beam energy spread, which amounts to \(0.13\%\) and \(0.16\%\) of the beam energy at \(\sqrt{s} = 91.2\) GeV and \(\sqrt{s}=240\) GeV, respectively, also sets a target for the track momentum resolution. This is illustrated in Fig. 1, which shows the Higgs recoil mass in ZH events where the Z decays into muons. The goal is that the reconstruction of the recoil mass be limited by the BES and not by the detector resolution. The very light tracker of IDEA, with a resolution of \(\mathcal{{O}}(0.15\%)\) for central, 50 GeV muons, is close to reaching this goal. The (heavier) full-silicon tracker of CLD performs somewhat worse because, in the momentum range of interest, its resolution is dominated by multiple scattering. The determination of the Higgs mass (for which a precision of a few MeV would be needed in view of a possible run at the Higgs resonance) will clearly benefit from the better momentum resolution offered by a light, gaseous tracker [8]. A momentum resolution comparable to the BES for beam-energy muons is also important for Z physics. For example, the analysis strategy for searching for lepton-flavour-violating Z decays into \(\tau \mu \) demands a clear tau decay in one hemisphere and a beam-energy muon in the other, in order to suppress the \(\mathrm{Z} \rightarrow \tau \tau \) background: the sensitivity improves linearly with the momentum resolution on the muon [9]. A precise measurement of the \(\tau \) mass will also put some constraints on the track momentum resolution. Requirements are also expected from flavour physics, where momentum resolution is often key to reducing the backgrounds. On the other hand, the measurement of the Higgs coupling to muons is unlikely to be a good benchmark process for determining this requirement: because of the very low statistics, a resolution four times better than the exquisite one assumed in Ref. [10] would be needed merely to reach the precision anticipated at the HL-LHC for this coupling.
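For reference, the recoil mass shown in Fig. 1 follows from four-momentum conservation alone: \(m_{\mathrm{recoil}}^2 = (\sqrt{s} - E_{\mu \mu })^2 - |\vec{p}_{\mu \mu }|^2 = s - 2\sqrt{s}\,E_{\mu \mu } + m_{\mu \mu }^2\). The recoil-mass resolution thus receives contributions both from the muon momentum resolution (through \(E_{\mu \mu }\) and \(m_{\mu \mu }\)) and from the event-by-event spread of \(\sqrt{s}\), i.e. the BES, which is why the two contributions are compared in Fig. 1.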

Stability of the track momentum scale As shown in Ref. [6], a control of the point-to-point uncertainty on the centre-of-mass energy in the lineshape scan, at the level of 40 keV, can be achieved in situ from the invariant mass distribution of dimuon events. Such a precision demands that the scale of the momentum measurements, and in particular the detector magnetic field, be stable at (or that its variations be monitored to) the level of 40 keV/91 GeV, hence to a few \(10^{-7}\). Such a precise monitoring may be difficult to achieve with NMR probes, but the large statistics of well-known resonances (\(J/\psi \rightarrow \mu \mu \), \(\mathrm{D}^0 \rightarrow \mathrm{K} \pi \), etc.) may provide an in situ monitoring down to this challenging precision.
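A back-of-the-envelope estimate of what such in situ monitoring implies, assuming (hypothetically) a relative dimuon mass resolution of \(0.3\%\) at the \(J/\psi \); the scale uncertainty then improves as \(1/\sqrt{N}\):

```python
m_z = 91.1876e9   # Z mass [eV], setting the scale of sqrt(s)
target_ev = 40e3  # target stability of the momentum scale [eV]
scale_target = target_ev / m_z
print(f"required relative scale stability: {scale_target:.1e}")  # ~4.4e-7

# Statistical cost of monitoring the scale with J/psi -> mu mu, assuming
# a (hypothetical) relative mass resolution of 0.3% per decay:
sigma_rel = 3e-3
n_needed = (sigma_rel / scale_target) ** 2
print(f"J/psi -> mumu decays per monitoring period: {n_needed:.1e}")  # ~5e7
```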

4 Requirements on the vertex detector

The requirements on the resolution of the track impact parameter are currently believed to be similar to, or better than, those derived in the context of the ILC [11] or CLIC [12], typically \(\sigma = a \oplus b/(p \sin ^{3/2} \theta )\) with \(a \simeq 5\,\upmu \)m and \(b \simeq 15\,\upmu \)m GeV, p being the track momentum in GeV. These requirements will have to be reached despite the additional constraints set by the FCC-ee environment on the readout electronics of the detector: (i) its power budget is smaller than for a detector operating at a linear collider (since power pulsing of the electronics is not possible with collisions occurring every \(\sim 20\) ns); and (ii) it should be fast enough, better than about 1 \(\upmu \)s, for the integrated background to remain negligible. In addition to the measurement of the Higgs couplings to pairs of b quarks, c quarks and gluons, which demands high-performance flavour tagging, requirements on the vertex detector will come from the measurement of heavy-quark electroweak observables (\(R_\mathrm{b}\), \(R_\mathrm{c}\) and the heavy-quark forward–backward asymmetries), for which a huge improvement is expected compared to LEP. These measurements will indeed benefit not only from the large luminosity increase, but also from decades of improvements in detector technology, which currently lead to b-tagging efficiencies three times larger than those achieved at LEP for the same mis-tag rate. Moreover, the rich flavour physics programme at the Z pole, which will complement and in several cases surpass the physics reach of the LHCb and B-factory experiments [13], is expected to set demanding goals on the resolutions with which vertices are reconstructed. For example, excellent vertexing is key to extracting a signal of \(\mathrm B \rightarrow K^* \tau \tau \) when both \(\tau \)s decay into three charged pions, as it allows this decay to be fully reconstructed. Assuming that the positions of the primary vertex, of the B decay vertex and of the \(\tau \) decay vertices can be reconstructed with resolutions of, respectively, \(3\,\upmu \)m, \(7\,\upmu \)m and \(5\,\upmu \)m in each Cartesian coordinate, more than one thousand events will be reconstructed [1], opening the door to measurements of the angular properties of this decay, which is likely to be unique to FCC-ee. An excellent secondary-vertex reconstruction will also be crucial for the sensitivity to new long-lived particle signatures and for the reduction of SM backgrounds.
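For orientation, the quoted parametrisation can be evaluated numerically; a short sketch (the \(\oplus \) denotes a sum in quadrature):

```python
import numpy as np

def d0_resolution(p_gev, theta, a=5.0, b=15.0):
    """Transverse impact-parameter resolution [um], parametrised as
    sigma = a (+) b / (p sin^{3/2} theta), with p in GeV and the two
    terms added in quadrature."""
    return np.hypot(a, b / (p_gev * np.sin(theta) ** 1.5))

# Resolution for central (theta = 90 deg) tracks at a few momenta:
for p in (1.0, 10.0, 100.0):
    print(f"p = {p:6.1f} GeV: sigma(d0) = {d0_resolution(p, np.pi / 2):.2f} um")
```

The multiple-scattering term \(b\) drives the resolution for soft tracks, which is precisely where the flavour-physics requirements bite, while the asymptotic term \(a\) dominates above a few tens of GeV.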

5 Requirements on the electromagnetic calorimeter

Since about \(25\%\) of a jet's energy is carried by photons, a good photon energy resolution is needed for a good measurement of jets. The corresponding requirement is, however, not very stringent: a stochastic term of 15–20% is enough to ensure a jet energy resolution better than \(3\%\) for 50 GeV jets, using a particle-flow reconstruction algorithm [14]. The measurement of the Higgs to \(\gamma \gamma \) coupling obviously benefits from a good electromagnetic resolution. This measurement is, however, severely statistics-limited at FCC-ee. With a resolution of \(\sigma (E)/E = 15\%\)/\(\sqrt{E} \oplus 1\%\), the anticipated precision on the \(\mathrm {H} \gamma \gamma \) coupling amounts to \(\mathcal{{O}} (3\%)\) [10], a factor of two worse than what HL-LHC should achieve. Even with an exquisite stochastic term and a constant term well below \(1\%\), it will be difficult to achieve a precision significantly better than that of the anticipated HL-LHC measurement. On the other hand, requirements of a resolution much better than \(15\%\)/\(\sqrt{E}\), in particular at rather low energies, are expected from flavour physics, as many important measurements of CP violation rely on the reconstruction of decays with several \(\pi ^0\)’s in the final state, as in \(B^0 \rightarrow \pi ^+ \pi ^- \pi ^0 \pi ^0\). The extraction of a \(B_s \rightarrow D_s K\) signal in modes with neutral pions, which would considerably increase the statistics collected in modes with charged tracks only (since the branching fraction for the decay \(D_s^{\pm } \rightarrow \phi \rho ^{\pm }\) is twice as large as that for \(D_s^{\pm } \rightarrow \phi \pi ^{\pm }\)), is likely to require an exquisite resolution of \(5\%\)/\(\sqrt{E}\) or better, which can be achieved with crystal calorimetry [15]. Such a resolution is also required for a precise measurement of the \(\mathrm{Z} \nu _e {\bar{\nu }}_e\) coupling, exploiting radiative return events with a single photon in the final state [16]. Moreover, the electromagnetic resolution is key to pushing the sensitivity to rare or forbidden processes, like the \(\tau \rightarrow \mu \gamma \) or \(\mathrm{Z} \rightarrow \tau e\) decays [9], and its role in searches for long-lived resonances (e.g., dark photons) decaying into electrons should be studied too, as electron tracks resulting from such decays will be poorly measured if they are very short.
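To make the gap between the two regimes concrete, a quick comparison of the quoted stochastic terms at the low photon energies relevant for \(\pi ^0\) reconstruction (constant term fixed at \(1\%\) for illustration):

```python
import numpy as np

def ecal_resolution(e_gev, stochastic, constant=0.01):
    """Relative EM energy resolution: stochastic/sqrt(E) (+) constant,
    with the two terms added in quadrature."""
    return np.hypot(stochastic / np.sqrt(e_gev), constant)

# Compare the two stochastic terms quoted in the text for soft photons,
# as relevant for final states with several pi0's:
for e in (0.5, 1.0, 5.0):
    r15 = ecal_resolution(e, 0.15)
    r05 = ecal_resolution(e, 0.05)
    print(f"E = {e:4.1f} GeV: 15%/sqrt(E) -> {r15:5.1%}, 5%/sqrt(E) -> {r05:5.1%}")
```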

In addition, an excellent electromagnetic resolution, together with a fine granularity of the calorimeter, would allow photons to be clustered into \(\pi ^0\)’s prior to the jet clustering, which improves the jet resolution [14].

A high granularity of the electromagnetic calorimeter indeed plays a crucial role in the identification of individual photons in jets, and of close-by photons coming, for example, from the decay of \(\pi ^0\)’s or of low-mass axions or axion-like particles (ALPs); more generally, it is key to an optimal particle-flow reconstruction. Requirements on the granularity will be studied using as benchmarks the measurement of the tau polarisation and the sensitivity to low-mass ALPs, which could be copiously produced in Z decays [17, 18].

6 Jets and resolution of hadronic systems

Among the many advantages of an electron–positron collider such as the FCC-ee, the most cited ones are the cleanliness of the final state, the precise knowledge of the centre-of-mass energy, and momentum conservation in all three directions of space. This is in stark contrast with the situation at a hadron collider such as the LHC, where many measurements and searches must resort to the selection of leptonic final states to control backgrounds and uncertainties, sacrificing a large fraction of the event statistics. At a lepton collider, instead, hadronic final states are very important players in the overall physics programme, often driving the statistical precision for many (sometimes rare) processes. Uncertainties on the jet properties propagate directly to the measurement precision and need to be understood and controlled in order to achieve a high precision or a better background rejection.

A jet is a complex object that derives from clustering the products of the fragmentation and hadronisation of a parton. Optimal reconstruction approaches, such as particle flow, use information from all sub-detectors, beyond just that of the calorimeter, imposing requirements on the design of the overall detector. The starting set of jet clustering algorithms being considered for the current studies includes the ones historically developed for LEP, the ones used at the LHC, and also those optimised for the linear colliders. In particular, the exclusive jet reconstruction that performs so well at a lepton collider needs to be optimised analysis by analysis, a conceptually different approach from the inclusive clustering used at hadron colliders.

In addition, it is important to note that the jet energy resolution is less critical at a lepton collider, where the precisely known beam energy (via four-momentum conservation) and/or particle masses overconstrain the event kinematics. For example, in events with four jets and no missing energy, the energies of the jets can be reconstructed from the sole measurement of their angles, as sketched below. The angular resolution then provides a more robust estimate of the energy, even more so if the system is boosted.
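The four-jet statement can be made explicit: with the four jet directions \(\vec{n}_i\) measured and massless jets assumed, momentum balance and energy conservation give four linear equations for the four unknown energies. A minimal illustration, with a made-up, momentum-conserving toy event:

```python
import numpy as np

def jet_energies_from_angles(directions, sqrt_s):
    """Solve for the jet energies in a 4-jet event from the jet
    directions alone, using four-momentum conservation with no missing
    energy (massless-jet approximation):
        sum_i E_i * n_i = 0   (momentum balance, 3 equations)
        sum_i E_i = sqrt(s)   (energy conservation, 1 equation)
    `directions` is a (4, 3) array of unit vectors n_i."""
    A = np.vstack([np.asarray(directions).T, np.ones(4)])
    return np.linalg.solve(A, np.array([0.0, 0.0, 0.0, sqrt_s]))

# Build a toy momentum-conserving 4-jet event to test the idea:
p = np.array([[ 60.0,   0.0,  30.0],
              [-40.0,  30.0,   0.0],
              [-10.0, -50.0, -20.0]])
p = np.vstack([p, -p.sum(axis=0)])   # fourth jet balances the other three
e_true = np.linalg.norm(p, axis=1)   # massless jets: E = |p|
n = p / e_true[:, None]              # unit direction vectors

e_reco = jet_energies_from_angles(n, e_true.sum())
print(np.allclose(e_reco, e_true))   # True: angles determine the energies
```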

Moreover, the jet energy resolution is affected both by the choice of the algorithm and by the stochastic nature of the fragmentation process itself, and the two effects cannot be fully disentangled. For this reason, it might be better to exploit the resolution not on a jet, but on a colour-singlet object such as a W, Z or Higgs boson, and use the mass of the particle itself as a well-defined quantity to assess the detector performance. At a lepton collider, the reference quantity need not even be the mass of a specific particle: the visible mass of the event, or the missing mass, can serve the same purpose. For example, the ILC/CLIC studies concluded that a resolution of \(\sigma (E_\mathrm{jet})\)/\(E_\mathrm{jet} \simeq 30\%\)/\(\sqrt{E_\mathrm{jet}}\) is needed to separate Z and W bosons in their hadronic decays, a statement that is valid irrespective of the overall event environment. This requirement can possibly be challenged and reviewed, in the case of the FCC-ee, for those cases where kinematic fits can be used to constrain fully hadronic final states in specific processes, such as ZH production.
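A back-of-the-envelope check of the \(30\%/\sqrt{E}\) figure, deliberately crude (natural boson widths and reconstruction tails are ignored, so the separation is overstated):

```python
import numpy as np

m_w, m_z = 80.4, 91.2   # boson masses [GeV]
stoch = 0.30            # hadronic-system stochastic term, 30%/sqrt(E)

# Crude dijet-mass resolution: sigma(m)/m ~ sigma(E)/E with
# sigma(E) = stoch * sqrt(E) for the full hadronic system.
sigma_w, sigma_z = stoch * np.sqrt(m_w), stoch * np.sqrt(m_z)

sep = (m_z - m_w) / (0.5 * (sigma_w + sigma_z))
print(f"sigma(m_W) ~ {sigma_w:.1f} GeV, sigma(m_Z) ~ {sigma_z:.1f} GeV")
print(f"W/Z mass-peak separation ~ {sep:.1f} sigma (widths/tails ignored)")
```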

In practical terms, the jet energy resolution is relevant in those events where the kinematic constraints are insufficient to allow a reconstruction of the jet energies from their angles, such as multi-jet events accompanied by missing energy, or when a strong background rejection is needed in the early stages of an analysis. A benchmark, where the hadronic resolution on the missing mass is crucial to distinguish between similar processes, is the separation of \(\mathrm e^+e^-\rightarrow H\nu \nu \) (via WW fusion) from \(\mathrm e^+e^- \rightarrow ZH\) with \(\mathrm{Z}\rightarrow \nu \bar{\nu }\), which is instrumental for the determination of the Higgs width.

When factorising the requirements that precise jet reconstruction imposes on the detector, it has to be noted that, in the case of a particle-flow reconstruction, the most important characteristic of the calorimetry is not just its energy resolution, but the granularity that allows the best matching with the tracker information and the best identification capabilities for neutral particles (photons and neutral hadrons). Once the list of particle candidates is defined, they need to be clustered into jet objects.

The clustering algorithms will evolve to integrate the fact that, with the new technologies of future detectors, the limiting factor might not be the detector resolution, but instead the correct assignment of the particle candidates to the jets, and then of the jets to their parent particle. This assignment ambiguity could end up being one of the dominant systematic uncertainties in multi-jet final states (four or more jets, as in ZH or \(\mathrm{t}\bar{\mathrm{t}}\) events). Theoretical progress on QCD and jet substructure will be an important factor as well to improve the accuracy of the jet definition.

Thinking ahead to the possibilities for future algorithmic developments, one should consider a global event reconstruction that feeds all the particle candidates into a deep neural network, which would take care of associating them without an intermediate clustering step. An evolution of this approach could also result in a different strategy for the optimisation of the detector design [19, 20].
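Purely as an illustration of what such a global reconstruction could look like, a minimal permutation-invariant (“Deep Sets”-style) sketch in PyTorch; the input features, dimensions and targets are all placeholders, not a prescription:

```python
import torch
import torch.nn as nn

class GlobalEventNet(nn.Module):
    """Illustrative permutation-invariant model: each particle candidate
    (e.g. [E, px, py, pz, charge, particle-id]) is embedded independently,
    the embeddings are summed over the event, and a second network predicts
    event-level targets directly, with no intermediate jet-clustering step."""
    def __init__(self, n_features=6, n_hidden=64, n_outputs=4):
        super().__init__()
        self.phi = nn.Sequential(               # per-particle embedding
            nn.Linear(n_features, n_hidden), nn.ReLU(),
            nn.Linear(n_hidden, n_hidden), nn.ReLU())
        self.rho = nn.Sequential(               # event-level head
            nn.Linear(n_hidden, n_hidden), nn.ReLU(),
            nn.Linear(n_hidden, n_outputs))

    def forward(self, candidates):
        # candidates: (batch, n_particles, n_features)
        return self.rho(self.phi(candidates).sum(dim=1))

# One (random) event with 50 particle candidates:
model = GlobalEventNet()
event = torch.randn(1, 50, 6)
print(model(event).shape)  # torch.Size([1, 4])
```

The summation over candidates makes the prediction independent of the particle ordering, which is the minimal property any such global architecture must have.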

7 Particle identification

Excellent lepton and photon identification capabilities are essential for many analyses. In particular, a good \(\mathrm e / \gamma \), \(\gamma / \pi ^0\) and \(\mathrm e / \pi \) separation, and an excellent separation of photons from neutral hadrons, are key ingredients for an effective particle-flow reconstruction. This separation should remain effective in collimated topologies, as required, for example, by a precise measurement of the \(\tau \) polarisation. Moreover, charged-particle identification will be mandatory for the flavour physics programme. A separation of pions from kaons, in a momentum range that extends up to at least 10 GeV, will be vital for time-dependent CP violation measurements. Separation at higher momenta would be extremely useful too; for example, the spectrum of the kaon in the \(\mathrm B_s \rightarrow D_s K\) decay, a process that suffers from an order-of-magnitude larger background from \(\mathrm B_s \rightarrow D_s \pi \), extends up to \(\sim 30\) GeV. The precise determination of the branching ratios of the \(\tau \), and of the \(\tau \) polarisation, will also benefit from a separation of pions from kaons up to \(\sim 40\) GeV. Candidate technologies are being reviewed. With the IDEA drift chamber, the “cluster counting” method looks promising and may cover the whole momentum range of interest. For a detector with a full-silicon tracker, no ideal solution exists yet, as it is not easy to cover the whole momentum range and, at the same time, comply with the space and hermeticity constraints of the experiment.
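As one example of the difficulty for full-silicon trackers, consider time-of-flight, one conceivable option: a minimal estimate with a hypothetical 2 m lever arm and 30 ps timing resolution shows that the separation power collapses well below the 30–40 GeV region of interest:

```python
import numpy as np

C = 0.299792458               # speed of light [m/ns]
M_PI, M_K = 0.13957, 0.49368  # charged pion and kaon masses [GeV]

def tof_ns(p, m, length):
    """Time of flight [ns] for momentum p [GeV] and mass m [GeV]."""
    beta = p / np.hypot(p, m)  # beta = p / E
    return length / (beta * C)

# Hypothetical set-up: 2 m flight distance, 30 ps timing resolution
length, sigma_t = 2.0, 0.030   # [m], [ns]
for p in (2.0, 5.0, 10.0, 30.0):
    dt = tof_ns(p, M_K, length) - tof_ns(p, M_PI, length)
    print(f"p = {p:5.1f} GeV: pi/K separation ~ {dt / sigma_t:5.1f} sigma")
```

The separation falls roughly as \(1/p^2\): good below a few GeV, marginal at 5–10 GeV, negligible at 30 GeV, consistent with the statement that no single technique easily covers the whole range in a silicon tracker.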

8 Detection of new, long-lived particles

Some new physics processes can produce very long-lived particles (LLPs) that might decay far from the primary interaction point, producing a secondary decay vertex containing charged and neutral SM particles. Other exotic models might produce particles that would give rise to “short”, “broken” or “stopped” track signatures. In addition, more complex unusual signatures, such as “emerging jets” or “dark showers”, can also be expected. A review of the models that have been considered can be found in Ref. [21] (see also [3, 17, 22–25]).

These peculiar experimental signatures are very distinct from those of standard model processes and, if observed, they would be a clear sign of new physics. Unfortunately, standard trigger and reconstruction techniques at colliders are typically unable to recognise them efficiently. The large statistics and the clean environment of the FCC-ee Tera-Z run make it the ideal playground to optimise these types of searches, and it has been shown that FCC-ee can be competitive with, and complementary to, the discovery reach of non-collider experiments in mass and coupling range [26]. The need for an efficient detection of such signatures might call for the proposal of a dedicated detector with optimised design choices, in addition to improved reconstruction techniques in the general-purpose detectors. The variety of signatures imposes requirements on several different experimental aspects. First and foremost, a very large, light and granular tracking system is needed, allowing the efficient reconstruction of charged tracks of potentially short or variable length that start significantly away from the primary collision vertex. This extends to the choice of calorimeter technology, which should also allow emerging or dark-shower jets, possibly starting beyond the tracker radius, to be disentangled.

Timing information will also be essential for sensitivity to heavy particles with \(\beta < 1\), leading to out-of-time or even stopped decays. Once appropriate design and technology choices guarantee the detection of these particle signatures, most of the effort has to be spent on the optimisation and development of new reconstruction algorithms. The requirement on track and vertex reconstruction is to be able to identify tracks, and measure their charge and momentum, from a small number of hits and at large displacements, and to reconstruct (possibly multiple) vertices, significantly displaced within the tracking volume and containing a small number of tracks. The requirement on jet reconstruction is to be able to identify multiple sub-components, with fewer particles than normal hadronic jets and possibly starting in the calorimeter at different depths. This is a case where new machine-learning techniques could be employed, profiting from the particle-flow event reconstruction, which allows the networks to be fed with more granular information, such as the particle candidates themselves. Finally, a similarly challenging aspect is the identification and the reduction (or control) of the backgrounds to these unusual signatures. These backgrounds are generally dominated by machine, detector or other external factors, and tend to require specific techniques to be evaluated and monitored. In this respect, the data acquisition system has a role to play, by profiting from both in-time and off-time information for the study of potentially asynchronous signals and backgrounds.
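Two quantities set the scale for these design choices: the lab-frame decay length of an LLP, \(L = \beta \gamma c\tau \), which must fit within the instrumented volume, and the arrival-time delay of a slow (\(\beta < 1\)) particle, which drives the timing requirement. A small sketch with hypothetical numbers:

```python
import numpy as np

C = 0.299792458   # speed of light [m/ns]

def lab_decay_length(p, m, c_tau):
    """Mean lab-frame decay length [m] for momentum p [GeV],
    mass m [GeV] and proper decay length c*tau [m]:
    L = beta * gamma * c * tau = (p/m) * c_tau."""
    return (p / m) * c_tau

def time_delay(p, m, length):
    """Arrival delay [ns] relative to a beta = 1 particle over `length` [m]."""
    beta = p / np.hypot(p, m)
    return (length / C) * (1.0 / beta - 1.0)

# Hypothetical heavy neutral particle: m = 30 GeV, p = 30 GeV, c*tau = 1 m
m, p, c_tau = 30.0, 30.0, 1.0
print(f"mean decay length: {lab_decay_length(p, m, c_tau):.2f} m")   # 1.00 m
print(f"delay over 2 m:    {time_delay(p, m, 2.0):.2f} ns")         # ~2.8 ns
```

A nanosecond-scale delay over a couple of metres is enormous compared with the timing resolutions discussed for fast detectors, which is why timing is such a powerful handle on these signatures.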

9 Conclusions

An initial list of the requirements that an FCC-ee detector should fulfil, in order to match the physics programme offered by the huge statistics that will be collected, has been established. A first list of benchmark processes, which will allow additional requirements to be quantified and better defined, has been identified. These benchmarks will be studied carefully in order to complete the “wish-list” of detector requirements. The implementation of these requirements into detector designs is likely to come with compromises; for example, a dedicated particle-identification detector in front of the calorimeter would unavoidably degrade the electromagnetic resolution. The availability of four interaction points offers very interesting opportunities, as complementary detectors could be designed with a view to ideally covering the whole FCC-ee physics programme.