Cosmology and Fundamental Physics with the Euclid Satellite
Abstract
Euclid is a European Space Agency medium-class mission selected for launch in 2019 within the Cosmic Vision 2015–2025 program. The main goal of Euclid is to understand the origin of the accelerated expansion of the universe. Euclid will explore the expansion history of the universe and the evolution of cosmic structures by measuring shapes and redshifts of galaxies as well as the distribution of clusters of galaxies over a large fraction of the sky.
Although the main driver for Euclid is the nature of dark energy, Euclid science covers a vast range of topics, from cosmology to galaxy evolution to planetary research. In this review we focus on cosmology and fundamental physics, with a strong emphasis on science beyond the current standard models. We discuss five broad topics: dark energy and modified gravity, dark matter, initial conditions, basic assumptions and questions of methodology in the data analysis.
This review has been planned and carried out within Euclid’s Theory Working Group and is meant to provide a guide to the scientific themes that will underlie the activity of the group during the preparation of the Euclid mission.
Keywords
dark energy, cosmology, galaxy evolution
List of acronyms
 AGN
Active Galactic Nucleus
 ALP
Axion-Like Particle
 BAO
Baryonic Acoustic Oscillations
 BBKS
Bardeen-Bond-Kaiser-Szalay
 BOSS
Baryon Oscillation Spectroscopic Survey
 BPol
B-Polarization Satellite
 BigBOSS
Baryon Oscillation Spectroscopic Survey
 CAMB
Code for Anisotropies in the Microwave Background
 CDE
Coupled Dark Energy
 CDM
Cold Dark Matter
 CDMS
Cryogenic Dark Matter Search
 CL
Confidence Level
 CMB
Cosmic Microwave Background
 COMBO-17
Classifying Objects by Medium-Band Observations
 COSMOS
Cosmological Evolution Survey
 CPL
Chevallier-Polarski-Linder
 CQ
Coupled Quintessence
 CRESST
Cryogenic Rare Event Search with Superconducting Thermometers
 DE
Dark Energy
 DES
Dark Energy Survey
 DETF
Dark Energy Task Force
 DGP
Dvali-Gabadadze-Porrati
 DM
Dark Matter
 EBI
Eddington-Born-Infeld
 EDE
Early Dark Energy
 EROS
Expérience pour la Recherche d’Objets Sombres
 eROSITA
Extended ROentgen Survey with an Imaging Telescope Array
 FCDM
Fuzzy Cold Dark Matter
 FFT
Fast Fourier Transform
 FLRW
Friedmann-Lemaître-Robertson-Walker
 FoM
Figure of Merit
 FoG
Fingers of God
 GEA
Generalized Einstein-Aether
 GR
General Relativity
 HETDEX
Hobby-Eberly Telescope Dark Energy Experiment
 ICM
Intracluster Medium
 IH
Inverted Hierarchy
 IR
Infrared
 ISW
Integrated Sachs-Wolfe
 KL
Kullback-Leibler divergence
 LCDM
Lambda Cold Dark Matter
 LHC
Large Hadron Collider
 LRG
Luminous Red Galaxy
 LSB
Low Surface Brightness
 LSS
Large Scale Structure
 LSST
Large Synoptic Survey Telescope
 LTB
Lemaître-Tolman-Bondi
 MACHO
MAssive Compact Halo Object
 MCMC
Markov Chain Monte Carlo
 MCP
Mini-Charged Particles
 MF
Mass Function
 MG
Modified Gravity
 MOND
MOdified Newtonian Dynamics
 MaVaNs
Mass Varying Neutrinos
 NFW
Navarro-Frenk-White
 NH
Normal Hierarchy
 PCA
Principal Component Analysis
 PDF
Probability Distribution Function
 PGB
Pseudo-Goldstone Boson
 PKDGRAV
Parallel KD tree GRAVity code
 PPF
Parameterized Post-Friedmann
 PPN
Parameterized Post-Newtonian
 PPOD
Predictive Posterior Odds Distribution
 PSF
Point Spread Function
 QCD
Quantum ChromoDynamics
 RSD
Redshift Space Distortions
 RG
Renormalization Group
 SD
Savage-Dickey
 SDSS
Sloan Digital Sky Survey
 SIDM
Self-Interacting Dark Matter
 SN
Supernova
 TeVeS
Tensor-Vector-Scalar
 UDM
Unified Dark Matter
 UV
Ultraviolet
 WDM
Warm Dark Matter
 WFXT
Wide-Field X-Ray Telescope
 WIMP
Weakly Interacting Massive Particle
 WKB
Wentzel-Kramers-Brillouin
 WL
Weak Lensing
 WLS
Weak Lensing Survey
 WMAP
Wilkinson Microwave Anisotropy Probe
 XMM-Newton
X-ray Multi-Mirror Mission
 vDVZ
van Dam-Veltman-Zakharov
List of symbols
 c_{a}
Adiabatic sound speed p. 36
 D_{A}(z)
Angular diameter distance p. 81
 \(\rlap{/}\partial\)
Angular spin raising operator p. 76
 \(\Pi _j^i\)
Anisotropic stress perturbation tensor p. 21
 σ
Uncertainty
 Bo
Bayes factor p. 213
 b
Bias (ratio of galaxy to total matter perturbations) p. 82
 B_{Φ}(k_{1}, k_{2}, k_{3})
Bispectrum of the Bardeen potential p. 202
 g(X)
Born-Infeld kinetic term p. 151
 b
Bulleticity p. 133
 ζ
Comoving curvature perturbation p. 164
 r(z)
Comoving distance
 \({\mathcal H}\)
Conformal Hubble parameter, \({\mathcal H} = aH\) p. 18
 η, τ
Conformal time p. 18
 κ
Convergence p. 76
 t
Cosmic time p. 33
 Λ
Cosmological constant
 Θ
Cosmological parameters p. 207
 r_{c}
Crossover scale p. 44
 □
d’Alembertian, □ = ∇_{μ}∇^{μ}
 F
Derivative of f(R) p. 39
 θ
Divergence of velocity field p. 22
 μ
Direction cosine p. 166
 π
Effective anisotropic stress p. 38
 η(a, k)
Effective anisotropic stress parameterization p. 24
 ρ
Energy density
 T_{μν}
Energy momentum tensor p. 21
 w
Equation of state p. 19
 F_{αβ}
Fisher information matrix p. 211
 σ_{8}
Fluctuation amplitude at 8 h^{−1} Mpc
 u^{μ}
Fourvelocity p. 21
 Ω_{m}
Fractional matter density
 f_{sky}
Fraction of sky observed p. 101
 Δ_{M}
Gauge invariant comoving density contrast p. 23
 τ(z)
Generic opacity parameter p. 185
 ϖ
Gravitational slip parameter p. 25
 G(a)
Growth function/Growth factor p. 26
 γ
Growth index/Shear p. 26/p. 76
 f_{g}
Growth rate p. 23
 b_{eff}
Halo effective linear bias factor p. 143
 h
Hubble constant in units of 100 km/s/Mpc
 H(z)
Hubble parameter
 ζ_{i}
Killing field p. 188
 δ_{ij}
Kronecker delta
 f(R)
Lagrangian in modified gravity p. 39
 P_{l}(μ)
Legendre polynomials p. 83
 \({\mathcal L}(\Theta)\)
Likelihood function p. 207
 β(z)
Linear redshift-space distortion parameter p. 82
 D_{L}(z)
Luminosity distance p. 184
 Q(a, k)
Mass screening effect p. 24
 δ_{m}
Matter density perturbation
 g_{μν}
Metric tensor p. 21
 μ
Modified gravity function: μ = Q/η p. 25
 C_{ℓ}
Multipole power spectrum p. 166
 G
Newton’s gravitational constant
 N
Number of e-folds, N = ln a p. 165
 P(k)
Matter power spectrum
 p
Pressure
 δp
Pressure perturbation
 χ(z)
Radial, dimensionless comoving distance p. 81
 z
Redshift
 R
Ricci scalar
 ϕ
Scalar field p. 32
 A
Scalar potential p. 21
 Ψ, Φ
Scalar potentials p. 21
 n_{s}
Scalar spectral index p. 164
 a
Scale factor
 f_{a}
Scale of Peccei-Quinn symmetry breaking p. 155
 ℓ
Spherical harmonic multipoles
 c_{s}
Sound speed p. 106
 Σ
Total neutrino mass/Inverse covariance matrix/PPN parameter p. 136/p. 210/p. 25
 \(H_T^{ij}\)
Trace-free distortion p. 21
 T(k)
Transfer function p. 174
 B_{i}
Vector shift p. 21
 k
Wavenumber
1 Introduction
Euclid^{1} [551, 760, 239] is an ESA medium-class mission selected for the second launch slot (expected for 2019) of the Cosmic Vision 2015–2025 program. The main goal of Euclid is to understand the physical origin of the accelerated expansion of the universe. Euclid is a satellite equipped with a 1.2 m telescope and three imaging and spectroscopic instruments working in the visible and near-infrared wavelength domains. These instruments will explore the expansion history of the universe and the evolution of cosmic structures by measuring shapes and redshifts of galaxies over a large fraction of the sky. The satellite will be launched by a Soyuz ST-2.1B rocket and transferred to the L2 Lagrange point for a six-year mission that will cover at least 15 000 square degrees of sky. Euclid plans to image a billion galaxies and measure nearly 100 million galaxy redshifts.
These impressive numbers will allow Euclid to realize a detailed reconstruction of the clustering of galaxies out to redshift 2 and of the pattern of light distortion from weak lensing out to redshift 3. The two main probes, redshift clustering and weak lensing, are complemented by a number of additional cosmological probes: cross-correlation between the cosmic microwave background and the large-scale structure; the luminosity distance to type Ia supernovae; the abundance and properties of galaxy clusters; and strong lensing. To extract the maximum information, also in the nonlinear regime of perturbations, these probes will require accurate high-resolution numerical simulations. Besides cosmology, Euclid will provide an exceptional dataset for galaxy evolution, galaxy structure, and planetary searches. All Euclid data will be publicly released after a relatively short proprietary period and will constitute for many years the ultimate survey database for astrophysics.
A huge enterprise like Euclid requires careful planning, not only of the technology but also of the scientific exploitation of future data. Many ideas and models that today seem to be abstract exercises for theorists will finally become testable with the Euclid surveys. The main science driver of Euclid is clearly the nature of dark energy, the enigmatic substance that is driving the accelerated expansion of the universe. As we discuss in detail in Part 1, under the label “dark energy” we include a wide variety of hypotheses, from extra-dimensional physics to higher-order gravity, from new fields and new forces to large violations of homogeneity and isotropy. The simplest explanation, Einstein’s famous cosmological constant, is currently still acceptable from the observational point of view, but it is not the only one, nor necessarily the most satisfying, as we will argue. Therefore, it is important to identify the main observables that will help distinguish the cosmological constant from the alternatives and to forecast Euclid’s performance in testing the various models.
Since clustering and weak lensing also depend on the properties of dark matter, Euclid is a dark matter probe as well. In Part 2 we focus on the models of dark matter that can be tested with Euclid data, from massive neutrinos to ultralight scalar fields. We show that Euclid can measure the neutrino mass to a very high precision, making it one of the most sensitive neutrino experiments of its time, and it can help identify new light fields in the cosmic fluid.
The evolution of perturbations depends not only on the fields and forces active during the cosmic eras, but also on the initial conditions. By reconstructing the initial conditions we open a window on the inflationary physics that created the perturbations, and give ourselves the chance of determining whether a single inflaton drove the expansion or a mixture of fields. In Part 3 we review the choices of initial conditions and their impact on Euclid science. In particular we discuss deviations from simple scale invariance, mixed isocurvature-adiabatic initial conditions, non-Gaussianity, and the combined forecasts of Euclid and CMB experiments.
Practically all of cosmology is built on the Copernican Principle, a very fruitful idea postulating a homogeneous and isotropic background. Although this assumption has been confirmed time and again since the beginning of modern cosmology, Euclid’s capabilities can push the test to new levels. In Part 4 we challenge some of the basic cosmological assumptions and predict how well Euclid can constrain them. We explore the basic relation between luminosity and angular diameter distance that holds in any metric theory of gravity if the universe is transparent to light, and the existence of large violations of homogeneity and isotropy, either due to local voids or to the cumulative stochastic effects of perturbations, or to intrinsically anisotropic vector fields or spacetime geometry.
Finally, in Part 5 we review some of the statistical methods that are used to forecast the performance of probes like Euclid, and we discuss some possible future developments.
This review has been planned and carried out within Euclid’s Theory Working Group and is meant to provide a guide to the scientific themes that will underlie the activity of the group during the preparation of the mission. At the same time, this review will help us and the community at large to identify the areas that deserve closer attention, to improve the development of Euclid science and to offer new scientific challenges and opportunities.
2 Dark Energy
2.1 Introduction
With the discovery of cosmic acceleration at the end of the 1990s, and its possible explanation in terms of a cosmological constant, cosmology has returned to its roots in Einstein’s famous 1917 paper that simultaneously inaugurated modern cosmology and the history of the constant Λ. Perhaps cosmology is approaching a robust and allencompassing standard model, like its cousin, the very successful standard model of particle physics. In this scenario, the cosmological standard model could essentially close the search for a broad picture of cosmic evolution, leaving to future generations only the task of filling in a number of important, but not crucial, details.
The cosmological constant is still in remarkably good agreement with almost all cosmological data more than ten years after the observational discovery of the accelerated expansion rate of the universe. However, our knowledge of the universe’s evolution is so incomplete that it would be premature to claim that we are close to understanding the ingredients of the cosmological standard model. If we ask ourselves what we know for certain about the expansion rate at redshifts larger than unity, or the growth rate of matter fluctuations, or about the properties of gravity on large scales and at early times, or about the influence of extra dimensions (or their absence) on our four dimensional world, the answer would be surprisingly disappointing.
Our present knowledge can be succinctly summarized as follows: we live in a universe that is consistent with the presence of a cosmological constant in the field equations of general relativity, and as of 2012, the value of this constant corresponds to a fractional energy density today of Ω_{Λ} ≈ 0.73. However, far from being disheartening, this current lack of knowledge points to an exciting future. A decade of research on dark energy has taught many cosmologists that this ignorance can be overcome by the same tools that revealed it, together with many more that have been developed in recent years.
Why then is the cosmological constant not the end of the story as far as cosmic acceleration is concerned? There are at least three reasons. The first is that we have no simple way to explain its small but nonzero value. In fact, its value is unexpectedly small with respect to any physically meaningful scale, except the current horizon scale. The second reason is that this value is not only small, but also surprisingly close to another unrelated quantity, the present matter-energy density. That this happens just by coincidence is hard to accept, as the matter density is diluted rapidly with the expansion of space. Why is it that we happen to live at the precise, fleeting epoch when the energy densities of matter and the cosmological constant are of comparable magnitude? Finally, observations of coherent acoustic oscillations in the cosmic microwave background (CMB) have turned the notion of accelerated expansion in the very early universe (inflation) into an integral part of the cosmological standard model. Yet the simple truth that we exist as observers demonstrates that this early accelerated expansion was of finite duration, and hence cannot be ascribed to a true, constant Λ; this sheds doubt on the nature of the current accelerated expansion. The very fact that we know so little about the past dynamics of the universe forces us to enlarge the theoretical parameter space and to consider phenomenology that a simple cosmological constant cannot accommodate.
These motivations have led many scientists to challenge one of the most basic tenets of physics: Einstein’s law of gravity. Einstein’s theory of general relativity (GR) is a supremely successful theory on scales ranging from the size of our solar system down to micrometers, the shortest distances at which GR has been probed in the laboratory so far. Although specific predictions about such diverse phenomena as the gravitational redshift of light, energy loss from binary pulsars, the rate of precession of the perihelia of bound orbits, and light deflection by the sun are not unique to GR, it must be regarded as highly significant that GR is consistent with each of these tests and more. We can securely state that GR has been tested to high accuracy at these distance scales.
The success of GR on larger scales is less clear. On astrophysical and cosmological scales, tests of GR are complicated by the existence of invisible components like dark matter and by the effects of spacetime geometry. We do not know whether the physics underlying the apparent cosmological constant originates from modifications to GR (i.e., an extended theory of gravity), or from a new fluid or field in our universe that we have not yet detected directly. The latter phenomena are generally referred to as ‘dark energy’ models.
If we only consider observations of the expansion rate of the universe we cannot discriminate between a theory of modified gravity and a dark-energy model. However, it is likely that these two alternatives will cause perturbations around the ‘background’ universe to behave differently. Only by improving our knowledge of the growth of structure in the universe can we hope to progress towards breaking the degeneracy between dark energy and modified gravity. Part 1 of this review is dedicated to this effort. We begin with a review of the background and linear perturbation equations in a general setting, defining quantities that will be employed throughout. We then explore the nonlinear effects of dark energy, making use of analytical tools such as the spherical collapse model, perturbation theory and numerical N-body simulations. We discuss a number of competing models proposed in the literature and demonstrate what the Euclid survey will be able to tell us about them.
2.2 Background evolution
However, when GR is modified or when an interaction with other species is active, dark energy may very well have a nonnegligible contribution at early times. Therefore, it is important, already at the background level, to understand the best way to characterize the main features of the evolution of quintessence and dark energy in general, pointing out which parameterizations are more suitable and which ranges of parameters are of interest to disentangle quintessence or modified gravity from a cosmological constant scenario.
In the following we briefly discuss how to describe the cosmic expansion rate in terms of a small number of parameters. This will set the stage for the more detailed cases discussed in the subsequent sections. Even within specific physical models it is often convenient to reduce the information to a few phenomenological parameters.
Two important points are left for later: from Eq. (1.2.3) we can easily see that w_{ ϕ } ≥ −1 as long as ρ_{ ϕ } > 0, i.e., uncoupled canonical scalar field dark energy never crosses w_{ ϕ } = −1. However, this is not necessarily the case for noncanonical scalar fields or for cases where GR is modified. We postpone to Section 1.4.5 the discussion of how to parametrize this ‘phantom crossing’ to avoid singularities, as it also requires the study of perturbations.
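The inequality can be seen from the standard expressions for the energy density and pressure of a canonical, uncoupled scalar field (a short derivation consistent with Eq. (1.2.3)):

```latex
\rho_\phi = \tfrac{1}{2}\dot{\phi}^2 + V(\phi), \qquad
p_\phi   = \tfrac{1}{2}\dot{\phi}^2 - V(\phi),
\qquad\Longrightarrow\qquad
1 + w_\phi \;=\; \frac{\rho_\phi + p_\phi}{\rho_\phi}
           \;=\; \frac{\dot{\phi}^2}{\rho_\phi} \;\geq\; 0
\quad \text{whenever } \rho_\phi > 0 .
```

Equality, w_ϕ = −1, is reached only when the field is frozen (\(\dot{\phi} = 0\)), so a canonical field can approach but never cross the phantom divide.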
The second deferred part on the background expansion concerns a basic statistical question: what is a sensible precision target for a measurement of dark energy, e.g., of its equation of state? In other words, how close to w_{ ϕ } = −1 should we go before we can be satisfied and declare that dark energy is the cosmological constant? We will address this question in Section 1.5.
2.2.1 Parametrization of the background evolution
The second approach is to start from a simple expression for w without assuming any specific dark-energy model (but still checking afterwards whether known theoretical dark-energy models can be represented). This is what has been done by [470, 623, 953] (linear and logarithmic parametrization in z), [229], [584] (linear and power-law parametrization in a), [322], [97] (rapidly varying equation of state).
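As a concrete illustration, the linear-in-a form of [229, 584] (the Chevallier-Polarski-Linder parametrization) can be turned into an expansion history with a few lines of code. This is a minimal sketch; the default parameter values below are illustrative, not Euclid forecasts:

```python
import math

def w_cpl(a, w0=-1.0, wa=0.0):
    """CPL equation of state: w(a) = w0 + wa * (1 - a)."""
    return w0 + wa * (1.0 - a)

def rho_de_ratio(a, w0=-1.0, wa=0.0):
    """rho_DE(a) / rho_DE(today): the exact solution of
    d ln(rho) / d ln(a) = -3 * (1 + w(a)) for the CPL form."""
    return a ** (-3.0 * (1.0 + w0 + wa)) * math.exp(-3.0 * wa * (1.0 - a))

def hubble(z, h=0.7, om=0.27, w0=-1.0, wa=0.0):
    """H(z) in km/s/Mpc for a flat universe with matter + CPL dark energy."""
    a = 1.0 / (1.0 + z)
    return 100.0 * h * math.sqrt(
        om * a ** -3 + (1.0 - om) * rho_de_ratio(a, w0, wa))
```

For w0 = −1, wa = 0 the dark-energy density is constant and H(z) reduces to the ΛCDM expansion rate.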
Note that the measurement of ρ_{ X } (z) is straightforward once H (z) is measured from baryon acoustic oscillations, and Ω_{ m } is constrained tightly by the combined data from galaxy clustering, weak lensing, and cosmic microwave background data — although strictly speaking this requires a choice of perturbation evolution for the dark energy as well, and in addition one that is not degenerate with the evolution of dark matter perturbations; see [534].
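Explicitly, assuming spatial flatness and negligible radiation, the Friedmann equation can be inverted to give

```latex
\rho_X(z) \;=\; \frac{3H^2(z)}{8\pi G} - \rho_{m,0}\,(1+z)^3
          \;=\; \frac{3H_0^2}{8\pi G}\left[\frac{H^2(z)}{H_0^2}
                - \Omega_m\,(1+z)^3\right],
```

which makes clear why the tight external constraint on Ω_m matters: any uncertainty in Ω_m propagates directly into the reconstructed ρ_X(z).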
Another useful possibility is to adopt the principal component approach [468], which avoids any assumption about the form of w and assumes it to be constant or linear in redshift bins, then derives which combination of parameters is best constrained by each experiment.
For a crosscheck of the results using more complicated parameterizations, one can use simple polynomial parameterizations of w and ρ_{DE}(z)/ρ_{DE}(0) [939].
2.3 Perturbations
This section is devoted to a discussion of linear perturbation theory in dark-energy models. Since we will discuss a number of nonstandard models in later sections, we present here the main equations in a general form that can be adapted to various contexts. This section will identify which perturbation functions the Euclid survey [551] will try to measure and how they can help us to characterize the nature of dark energy and the properties of gravity.
2.3.1 Cosmological perturbation theory
Here we provide the perturbation equations in a darkenergy dominated universe for a general fluid, focusing on scalar perturbations.
The problem here is not only to parameterize the pressure perturbation and the anisotropic stress for the dark energy (there is no unique way to do so; see below, especially Section 1.4.5 for what to do when w crosses −1) but rather that we need to run the perturbation equations for each model we assume, make predictions, and compare the results with observations. Clearly, this approach would take too much time. In the following Section 1.3.2 we show a general approach to understanding the observed late-time accelerated expansion of the universe through the evolution of the matter density contrast.
In the following, whenever there is no risk of confusion, we remove the overbars from the background quantities.
2.3.2 Modified growth parameters
Even if the expansion history, H(z), of the FLRW background has been measured (at least up to redshifts ∼ 1 by supernova data), it is not yet possible to identify the physics causing the recent acceleration of the expansion of the universe. Information on the growth of structure at different scales and different redshifts is needed to discriminate between models of dark energy (DE) and modified gravity (MG). A definition of what we mean by DE and MG will be postponed to Section 1.4.

Model parameters capture the degrees of freedom of DE/MG and modify the evolution equations of the energymomentum content of the fiducial model. They can be associated with physical meanings and have uniquelypredicted behavior in specific theories of DE and MG.

Trigger relations are derived directly from observations and only hold in the fiducial model. They are constructed to break down if the fiducial model does not describe the growth of structure correctly.
For a largescale structure and weak lensing survey the crucial quantities are the matterdensity contrast and the gravitational potentials and we therefore focus on scalar perturbations in the Newtonian gauge with the metric (1.3.8).
We describe the matter perturbations using the gauge-invariant comoving density contrast Δ_{ M } ≡ δ_{ M } + 3aHθ_{ M }/k^{2}, where δ_{ M } and θ_{ M } are the matter density contrast and the divergence of the fluid velocity for matter, respectively. The discussion can be generalized to include multiple fluids.
Clearly, if the actual theory of structure growth is not the ΛCDM scenario, the constraints (1.3.19) will be modified, the growth equation (1.3.20) will be different, and finally the growth factor (1.3.21) is changed, i.e., the growth index is different from γ_{Λ} and may become time and scale dependent. Therefore, the inconsistency of these three points of view can be used to test the ΛCDM paradigm.
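This consistency test can be made concrete with a minimal numerical sketch: integrating the standard subhorizon growth-rate equation, f′ + f² + (2 − (3/2)Ω_m)f = (3/2)Ω_m (with ′ = d/d ln a), for a flat ΛCDM background and comparing against the growth-index fit f ≈ Ω_m(a)^γ with γ ≈ 0.545. The value of Ω_m below is illustrative:

```python
import math

def omega_m(a, om0=0.27):
    """Fractional matter density Omega_m(a) in flat LambdaCDM."""
    return om0 / (om0 + (1.0 - om0) * a ** 3)

def growth_rate(a_end=1.0, om0=0.27, n=20000):
    """Integrate df/dlna = 1.5*Om - f^2 - (2 - 1.5*Om)*f with explicit
    Euler steps, from deep matter domination (f = 1) to a_end."""
    lna0, lna1 = math.log(1e-3), math.log(a_end)
    dlna = (lna1 - lna0) / n
    f, lna = 1.0, lna0          # f = 1 is the matter-dominated fixed point
    for _ in range(n):
        om = omega_m(math.exp(lna), om0)
        f += dlna * (1.5 * om - f * f - (2.0 - 1.5 * om) * f)
        lna += dlna
    return f

# numerical growth rate today vs. the Omega_m^gamma fit
f_num = growth_rate()
f_fit = omega_m(1.0) ** 0.545
```

If the measured growth rate deviated from this solution while H(z) remained ΛCDM-like, the three points of view above would become mutually inconsistent.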
2.3.2.1 Two new degrees of freedom
Given an MG or DE theory, the scale and timedependence of the functions Q and η can be derived and predictions projected into the (Q, η) plane. This is also true for interacting dark sector models, although in this case the identification of the total matter density contrast (DM plus baryonic matter) and the galaxy bias become somewhat contrived [see, e.g., 848, for an overview of predictions for different MG/DE models].
Many different names and combinations of the above defined functions (Q, η) have been used in the literature, some of which are more closely related to actual observables and are less correlated than others in certain situations [see, e.g., 41, 667, 848, 737, 278, 277, 363].
Any combination of two variables out of {Q, η, μ, Σ, …} is a valid alternative to (Q, η). It turns out that the pair (μ, Σ) is particularly well suited when CMB, WL and LSS data are combined as it is less correlated than others [see 980, 277, 68].
2.3.2.2 Parameterizations and nonparametric approaches
So far we have defined two free functions that can encode any departure of the growth of linear perturbations from ΛCDM. However, these free functions are not measurable, but have to be inferred via their impact on the observables. Therefore, one needs to specify a parameterization of, e.g., (Q, η) such that departures from ΛCDM can be quantified. Alternatively, one can use nonparametric approaches to infer the time and scaledependence of the modified growth functions from the observations.
Daniel et al. [278, 277] investigate the modified growth parameters binned in z and k. The functions are taken to be constant in each bin. This approach is simple and only mildly dependent on the size and number of the bins. However, the bins can be correlated, and therefore the data might not be used in the most efficient way with fixed bins. Slightly more sophisticated than simple binning is a principal component analysis (PCA) of the binned (or pixelized) modified growth functions. In PCA, uncorrelated linear combinations of the original pixels are constructed. In the limit of a large number of pixels the model dependence disappears. At the moment, however, computational cost limits the number of pixels to only a few. Zhao et al. [982, 980] employ a PCA in the (μ, η) plane and find that the observables are more sensitive to the scale-variation of the modified growth parameters than to their time-dependence and average values. This suggests that simple, monotonically or mildly-varying parameterizations, as well as purely time-dependent parameterizations, are poorly suited to detect departures from ΛCDM.
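The PCA step itself is straightforward: given a Fisher matrix for the binned growth functions, its eigenvectors are the uncorrelated linear combinations and its eigenvalues set their uncertainties. The sketch below uses a small, purely illustrative Fisher matrix (the numbers are invented, not a real Euclid forecast):

```python
import numpy as np

# Toy Fisher matrix for a growth function binned in 4 redshift bins;
# neighbouring bins are partially degenerate.  Illustrative numbers only.
F = np.array([[40.0, -12.0,   2.0,  0.0],
              [-12.0, 30.0,  -8.0,  1.0],
              [  2.0, -8.0,  20.0, -5.0],
              [  0.0,  1.0,  -5.0, 10.0]])

# Eigenvectors of the Fisher matrix are uncorrelated linear combinations
# of the original bins ("principal components"); each eigenvalue equals
# 1/sigma^2 for the corresponding combination.
evals, evecs = np.linalg.eigh(F)
order = np.argsort(evals)[::-1]        # best-constrained modes first
sigmas = 1.0 / np.sqrt(evals[order])   # uncertainty of each mode
modes = evecs[:, order]                # columns are the components
```

The best-constrained modes (smallest σ) are the combinations of bins the data actually pin down; poorly constrained modes can be discarded or given priors.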
2.3.2.3 Trigger relations

As only one additional parameter is introduced, a second parameter, such as η, is needed to close the system and be general enough to capture all possible modifications.

The growth factor is a solution of the growth equation on subHubble scales and, therefore, is not general enough to be consistent on all scales.

The framework is designed to describe the evolution of the matter density contrast and is not easily extended to describe all other energy-momentum components or integrated into a CMB-Boltzmann code.
2.4 Models of dark energy and modified gravity
In this section we review a number of popular models of dynamical DE and MG. This section is more technical than the rest and it is meant to provide a quick but selfcontained review of the current research in the theoretical foundations of DE models. The selection of models is of course somewhat arbitrary but we tried to cover the most wellstudied cases and those that introduce new and interesting observable phenomena.
2.4.1 Quintessence
In this review we refer to scalar field models with canonical kinetic energy in Einstein’s gravity as “quintessence models”. Scalar fields are obvious candidates for dark energy, as they are for the inflaton, for many reasons: they are the simplest fields since they lack internal degrees of freedom, do not introduce preferred directions, are typically weakly clustered (as discussed later on), and can easily drive an accelerated expansion. If the kinetic energy has a canonical form, the only degree of freedom is then provided by the field potential (and of course by the initial conditions). The typical requirement is that the potentials are flat enough to lead to slow-roll inflation today with an energy scale \({\rho _{\rm DE}} \simeq {10^{-123}}\,m_{\rm pl}^4\) and a mass scale m_{ ϕ } ≲ 10^{−33} eV.
Quintessence models are the prototypical DE models [195] and as such are the most studied ones. Since they have been explored in many reviews of DE, we limit ourselves here to a few remarks.^{2}
During radiation or matter dominated epochs, the energy density ρ_{ M } of the fluid dominates over that of quintessence, i.e., ρ_{ M } ≫ ρ_{ ϕ }. If the potential is steep so that the condition \({\dot \phi ^2}/2 \gg V\left(\phi \right)\) is always satisfied, the field equation of state is given by w_{ ϕ } ≃ 1 from Eq. (1.4.6). In this case the energy density of the field evolves as ρ_{ ϕ } ∝ a^{−6}, which decreases much faster than the background fluid density.
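This scaling follows at once from the continuity equation:

```latex
\dot{\rho}_\phi + 3H\,(1 + w_\phi)\,\rho_\phi = 0
\qquad\Longrightarrow\qquad
\rho_\phi \propto a^{-3(1+w_\phi)} = a^{-6}
\quad \text{for } w_\phi \simeq 1 .
```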
However, in order to study the evolution of the perturbations of a quintessence field it is not even necessary to compute the field evolution explicitly. Rewriting the perturbation equations of the field in terms of the perturbations of the density contrast δ_{ ϕ } and the velocity θ_{ ϕ } in the conformal Newtonian gauge, one finds [see, e.g., 536, Appendix A] that they correspond precisely to those of a fluid, (1.3.17) and (1.3.18), with π = 0 and \(\delta p = c_s^2\delta \rho + 3aH\left({c_s^2 - c_a^2} \right)\left({1 + w} \right)\rho\,\theta/{k^2}\) with \(c_s^2 = 1\). The adiabatic sound speed, c_{ a }, is defined in Eq. (1.4.31). The large value of the sound speed \(c_s^2\), equal to the speed of light, means that quintessence models do not cluster significantly inside the horizon [see 785, 786, and Section 1.8.6 for a detailed analytical discussion of quintessence clustering and its detectability with future probes, for arbitrary \(c_s^2\)].
 (i) Freezing models

V (ϕ) = M^{4+n}ϕ^{−n} (n > 0),

\(V\left(\phi \right) = {M^{4 + n}}{\phi ^{-n}}\exp \left({\alpha {\phi ^2}/m_{{\rm{pl}}}^2} \right)\).

The former potential does not possess a minimum and hence the field rolls down the potential toward infinity. This appears, for example, in the fermion condensate model as a dynamical supersymmetry breaking [138]. The latter potential has a minimum at which the field is eventually trapped (corresponding to w_{ ϕ } = −1). This potential can be constructed in the framework of supergravity [170].
 (ii) Thawing models

V (ϕ) = V_{0} + M^{4−n}ϕ^{ n } (n > 0),

V (ϕ) = M^{4} cos^{2} (ϕ/f).

The former potential is similar to that of chaotic inflation (n = 2, 4) used in the early universe (with V_{0} = 0) [577], while the mass scale M is very different. The model with n = 1 was proposed by [487] in connection with the possibility of allowing for negative values of V (ϕ). The universe will collapse in the future if the system enters the region with V (ϕ) < 0. The latter potential appears as the potential for a Pseudo-Nambu-Goldstone Boson (PNGB). This was introduced by [370] in response to the first tentative suggestions that the universe may be dominated by the cosmological constant. In this model the field is nearly frozen at the potential maximum during the period in which the field mass m_{ ϕ } is smaller than H, but it begins to roll down around the present epoch (m_{ ϕ } ≃ H_{0}).
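For any of the potentials above, the background evolution can be sketched by integrating the Klein-Gordon equation in e-folds N = ln a. The example below uses the inverse power-law (freezing) potential; all parameter values (the potential normalization, the initial field value, Ω_m) are illustrative assumptions, not tuned to data:

```python
import math

def w_phi_today(n=1, M4n=0.5, phi_init=5.0, om0=0.3,
                steps=40000, lna_init=-7.0):
    """Integrate phi'' + (3 + H'/H) phi' + V_phi / H^2 = 0 (' = d/dN)
    for V(phi) = M4n * phi**(-n), units 8*pi*G = 1, and return the
    field's equation of state w_phi at a = 1.  Illustrative only."""
    V = lambda p: M4n * p ** (-n)
    Vp = lambda p: -n * M4n * p ** (-n - 1)
    rho_m0 = 3.0 * om0                  # matter density today
    phi, dphi = phi_init, 0.0           # field frozen initially
    dN = -lna_init / steps
    lna = lna_init
    for _ in range(steps):
        rho_m = rho_m0 * math.exp(-3.0 * lna)
        H2 = (rho_m + V(phi)) / (3.0 - 0.5 * dphi ** 2)   # Friedmann
        hp_over_h = -0.5 * (rho_m / H2 + dphi ** 2)       # H'/H
        ddphi = -(3.0 + hp_over_h) * dphi - Vp(phi) / H2
        phi += dN * dphi
        dphi += dN * ddphi
        lna += dN
    H2 = (rho_m0 + V(phi)) / (3.0 - 0.5 * dphi ** 2)
    kinetic = 0.5 * dphi ** 2 * H2      # (1/2) (dphi/dt)^2
    return (kinetic - V(phi)) / (kinetic + V(phi))
```

Starting frozen deep in matter domination, the field barely rolls, so w_ϕ today stays close to −1: the characteristic freezing behavior.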
Potentials can also be classified in several other ways, e.g., on the basis of the existence of special solutions. For instance, tracker solutions have approximately constant w_{ ϕ } and Ω_{ ϕ } along special attractors. A wide range of initial conditions converge to a common, cosmic evolutionary tracker. Early DE models contain instead solutions in which DE was not negligible even during the last scattering. While in the specific Euclid forecast section (1.8) we will not explicitly consider these models, it is worthwhile to note that the combination of observations of the CMB and of large-scale structure (such as Euclid) can dramatically constrain these models, improving the inverse-area figure of merit compared to current constraints, as discussed in [467].
2.4.2 K-essence
2.4.3 A definition of modified gravity
In this review we often make reference to DE and MG models. Although in an increasing number of publications a similar dichotomy is employed, there is currently no consensus on where to draw the line between the two classes. Here we will introduce an operational definition for the purpose of this document.
Roughly speaking, what most people have in mind when talking about standard dark energy are models of minimally-coupled scalar fields with standard kinetic energy in 4-dimensional Einstein gravity, the only functional degree of freedom being the scalar potential. Often, this class of models is referred to simply as “quintessence”. However, once we depart from this picture a simple classification is not easy to draw. One problem is that, as we have seen in the previous sections, both at the background and at the perturbation level different models can have the same observational signatures [537]. This problem is not due to the use of perturbation theory: any modification to Einstein’s equations can be interpreted as standard Einstein gravity with a modified “matter” source, containing an arbitrary mixture of scalars, vectors and tensors [457, 535].
The simplest example can be discussed by looking at Eqs. (1.3.23). One can modify gravity and obtain a modified Poisson equation, and therefore Q ≠ 1, or one can introduce a clustering dark energy (for example a k-essence model with a small sound speed) that also induces the same Q ≠ 1 (see Eq. 1.3.23). This degeneracy extends to the anisotropic stress η: at first order there is in general a one-to-one relation between a fluid with arbitrary equation of state, sound speed, and anisotropic stress and a modification of the Einstein-Hilbert Lagrangian.

Standard dark energy: These are models in which dark energy lives in standard Einstein gravity and does not cluster appreciably on sub-horizon scales. As already noted, the prime example of a standard dark-energy model is a minimally-coupled scalar field with standard kinetic energy, for which the sound speed equals the speed of light.

Clustering dark energy: In clustering dark-energy models, there is an additional contribution to the Poisson equation due to the dark-energy perturbation, which induces Q ≠ 1. However, in this class we require η = 1, i.e., no extra effective anisotropic stress is induced by the extra dark component. A typical example is a k-essence model with a low sound speed, \(c_s^2 \ll 1\).

Explicit modified-gravity models: These are models in which the Einstein equations are modified from the start, for example scalar-tensor and f (R) type theories, Dvali-Gabadadze-Porrati (DGP) gravity, as well as interacting dark energy, in which effectively a fifth force is introduced in addition to gravity. Generically these models change the clustering and/or induce a nonzero anisotropic stress. Since our definitions are based on the phenomenological parameters, we also add to this class dark-energy models that live in Einstein gravity but have a non-vanishing anisotropic stress, since they cannot be distinguished from modified gravity by cosmological observations.
Notice that both clustering dark energy and explicit modified-gravity models lead to deviations from what is often called “general relativity” (or, as here, standard dark energy) in the literature when constraining extra perturbation parameters like the growth index γ. For this reason we generically call both of these classes MG models. In other words, in this review we use the simple and by now extremely popular (although admittedly somewhat misleading) expression “modified gravity” to denote models in which gravity is modified and/or dark energy clusters or interacts with other fields. Whenever useful, we will remind the reader of the actual meaning of the expression “modified gravity” in this review.
Therefore, on sub-horizon scales and at first order in perturbation theory our definition of MG is straightforward: models with Q = η = 1 (see Eq. 1.3.23) are standard DE, otherwise they are MG models. In this sense the definition is rather convenient: we can use it to quantify, for instance, how well Euclid will distinguish between standard dynamical dark energy and modified gravity by forecasting the errors on Q, η, or on related quantities like the growth index γ.
On the other hand, it is clear that this definition is only a practical way to group different models and should not be taken as a fundamental one. We do not try to set a precise threshold on, for instance, how much dark energy should cluster before we call it modified gravity: the boundary between the classes is therefore left undetermined but we think this will not harm the understanding of this document.
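The operational definition above can be phrased as a few lines of code. A sketch follows (Python; the numerical tolerance is an arbitrary choice of ours, since the text deliberately avoids setting a physical threshold):

```python
def classify(Q, eta, tol=1e-3):
    """Operational classification from the phenomenological parameters
    (Q, eta) of Eq. (1.3.23). `tol` is an arbitrary numerical tolerance,
    not a physical threshold (the text leaves that boundary open)."""
    if abs(eta - 1.0) > tol:
        # extra effective anisotropic stress: explicit modified gravity
        return "modified gravity"
    if abs(Q - 1.0) > tol:
        # modified Poisson equation but no anisotropic stress
        return "clustering dark energy"
    return "standard dark energy"

print(classify(1.0, 1.0))    # e.g. minimally-coupled quintessence
print(classify(1.1, 1.0))    # e.g. k-essence with small sound speed
print(classify(1.1, 0.9))    # e.g. f(R), DGP, scalar-tensor theories
```

In the review's generic usage, the last two classes are both grouped under "MG models"; the function simply makes the Q = η = 1 criterion explicit.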
2.4.4 Coupled dark-energy models
A first class of models in which dark energy shows dynamics, in connection with the presence of a fifth force different from gravity, is that of ‘interacting dark energy’: we consider the possibility that dark energy, seen as a dynamical scalar field, may interact with other components in the universe. This class of models effectively falls into the “explicit modified-gravity models” of the classification above, because the gravitational attraction between dark-matter particles is modified by the presence of a fifth force. However, we note that the anisotropic stress for DE is still zero in the Einstein frame, while it is, in general, nonzero in the Jordan frame. In some cases (when a universal coupling is present) such an interaction can be explicitly recast as a non-minimal coupling to gravity, after a redefinition of the metric and matter fields (Weyl scaling). We would like to identify whether interactions (couplings) of dark energy with matter fields, neutrinos or gravity itself can affect the universe in an observable way. In particular, we consider the following couplings:
 1.
couplings between dark energy and baryons;
 2.
couplings between dark energy and dark matter (coupled quintessence);
 3.
couplings between dark energy and neutrinos (growing neutrinos, MaVaNs);
 4.
universal couplings with all species (scalar-tensor theories and f (R)).
Whenever one of these couplings is active, the dynamics of the coupled species α acquires three main ingredients:
 1.
a fifth force ∇[Φ_{ α } + βϕ ] with an effective \({\tilde G_\alpha} = {G_N}\left[ {1 + 2{\beta ^2}\left(\phi \right)} \right]\);
 2.
a velocity-dependent term \({\tilde H_{{\rm{V}}\alpha}} \equiv H\left({1 - \beta \left(\phi \right){{\dot \phi} \over H}} \right){{\rm{V}}_\alpha}\);
 3.
a time-dependent mass for each particle α, evolving according to Eq. (1.4.25).
The relative significance of these key ingredients can lead to a variety of potentially observable effects, especially on structure formation. We will recall some of them in the following subsections as well as, in more detail, for two specific couplings in the dark-matter sections (2.11, 2.9) of this report.
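The first ingredient can be quantified immediately. Here is a sketch of the effective gravitational strength felt by coupled particles (the β values are illustrative; 0.06 is of the order of the CMB bound quoted below for a constant coupling):

```python
# Effective gravitational strength felt by coupled (dark-matter)
# particles in coupled quintessence: G_eff = G_N * (1 + 2*beta^2),
# item 1 of the list above. The beta values are illustrative only.
def g_eff_over_gn(beta):
    return 1.0 + 2.0 * beta ** 2

for beta in (0.0, 0.06, 0.1, 1.0):
    print(beta, g_eff_over_gn(beta))
```

For β = 0.06 the fifth force enhances the effective attraction by under one percent, while a gravitational-strength coupling β ∼ 1 would triple it; this is why structure formation is a sensitive probe of β.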
2.4.4.1 Dark energy and baryons
A coupling between dark energy and baryons is active when the baryon mass is a function of the dark-energy scalar field: m_{ b } = m_{ b } (ϕ). Such a coupling is constrained to be very small: the main bounds come from tests of the equivalence principle and solar-system constraints [130]. More generally, depending on the coupling, bounds on the variation of fundamental constants over cosmological timescales may also have to be considered ([631, 303, 304, 639] and references therein). It is presumably very difficult to have significant cosmological effects due to a coupling to baryons only. However, uncoupled baryons can still play a role in the presence of a coupling to dark matter (see Section 1.6 on nonlinear aspects).
2.4.4.2 Dark energy and dark matter
An interaction between dark energy and dark matter (CDM) is active when the CDM mass is a function of the dark-energy scalar field: m_{ c } = m_{ c } (ϕ). In this case the coupling is not affected by tests of the equivalence principle and solar-system constraints, and can therefore be stronger than the one with baryons. One may argue that dark-matter particles are themselves coupled to baryons, which leads, through quantum corrections, to a direct coupling between dark energy and baryons. The strength of such couplings can still be small, and was discussed in [304] for the case of neutrino-dark-energy couplings. Also, quantum corrections are often invoked as spoiling the flatness of a quintessence potential. However, it may be misleading to calculate quantum corrections up to a cutoff scale, as contributions above the cutoff can possibly compensate terms below the cutoff, as discussed in [958].
Typical values of β presently allowed by observations (within current CMB data) are within the range 0 < β < 0.06 (at 95% CL for a constant coupling and an exponential potential) [114, 47, 35, 44], or possibly larger [539, 531] if neutrinos are taken into account or for more realistic time-dependent choices of the coupling. This framework is generally referred to as ‘coupled quintessence’ (CQ). Various choices of couplings have been investigated in the literature, including constant and varying β (ϕ) [33, 619, 35, 518, 414, 747, 748, 724, 377].
The presence of a coupling (and therefore of a fifth force acting among dark-matter particles) modifies the background expansion and linear perturbations [34, 33, 35], therefore affecting the CMB and the cross-correlation of CMB and LSS [44, 35, 47, 45, 114, 539, 531, 970, 612, 42].
Furthermore, structure formation itself is modified [604, 618, 518, 611, 870, 3, 666, 129, 962, 79, 76, 77, 80, 565, 562, 75, 980, 640].
An alternative approach has also been investigated in the literature [619, 916, 915, 613, 387, 388, 193, 794, 192]: here the authors take Eq. (1.4.21) as a starting point, and the coupling is introduced by choosing directly a covariant stress-energy tensor on the RHS of the equation, treating dark energy as a fluid in the absence of a starting action. The advantage of this approach is that a good parameterization allows us to investigate several models of dark energy at the same time. Problems connected to instabilities of some parameterizations, or to the definition of a physically-motivated speed of sound for the density fluctuations, can be found in [916]. It is also possible to take both a covariant form for the coupling and a quintessence dark-energy scalar field, starting again directly from Eq. (1.4.21); this has been done, e.g., in [145, 144]. At the background level only, [235, 237, 302, 695] have also considered the constraints that can be obtained when starting from a fixed present ratio of dark energy and dark matter. The disadvantage of this approach is that it is not clear how to perturb a coupling that has been defined as a background quantity.
A Yukawa-like interaction was investigated in [357, 279], where it was pointed out that coupled dark energy behaves as a fluid with an effective equation of state w ≲ −1, though remaining well defined and without the presence of ghosts [279].
For an illustration of observable effects related to dark-energy-dark-matter interactions see also Section (2.11) of this report.
2.4.4.3 Dark energy and neutrinos
A coupling between dark energy and neutrinos can be even stronger than the one with dark matter when compared to gravitational strength: typical values of β are of order 50–100 or even more, such that even the small fraction of cosmic energy density in neutrinos can have a substantial influence on the time evolution of the quintessence field. In this scenario neutrino masses change in time, depending on the value of the dark-energy scalar field ϕ. Such a coupling has been investigated within MaVaNs [356, 714, 135, 12, 952, 280, 874, 856, 139, 178, 177] and, more recently, within growing neutrino cosmologies [36, 957, 668, 963, 962, 727, 179, 78]. In the latter case, DE properties are related to the neutrino mass and to a cosmological event, i.e., neutrinos becoming non-relativistic. This leads to the formation of stable neutrino lumps [668, 963, 78] at very large scales only (∼ 100 Mpc and beyond), as well as to signatures in the CMB spectra [727]. For an illustration of observable effects related to this case see Section (2.9) of this report.
2.4.4.4 Scalar-tensor theories
Scalar-tensor theories [954, 471, 472, 276, 216, 217, 955, 912, 722, 354, 146, 764, 721, 797, 646, 725, 726, 205, 54] extend GR by introducing a non-minimal coupling between a scalar field (acting also as dark energy) and the metric tensor (gravity); they are also sometimes referred to as “extended quintessence”. We include scalar-tensor theories among “interacting cosmologies” because, via a Weyl transformation, they are equivalent to a GR framework (with minimal coupling to gravity) in which the dark-energy scalar field ϕ is coupled (universally) to all species [954, 608, 936, 351, 724, 219]. In other words, these theories correspond to the case where, in the action (1.4.20), the mass of all species (baryons, dark matter, …) is a function m = m (ϕ), with the same coupling for every species α. Indeed, a description of the coupling via an action such as (1.4.20) was originally motivated by extensions of GR such as scalar-tensor theories. Typically the strength of the scalar-mediated interaction is required to be orders of magnitude weaker than gravity (see [553, 725] and references therein for recent constraints). It is possible to tune this coupling to be as small as required, for example by choosing a suitably flat potential V (ϕ) for the scalar field. However, this leads back to naturalness and fine-tuning problems.
In Sections 1.4.6 and 1.4.7 we will discuss in more detail a number of ways in which new scalar degrees of freedom can naturally couple to standard-model fields while still being in agreement with observations. We mention here only that the presence of chameleon mechanisms [171, 672, 670, 172, 464, 173, 282] can, for example, modify the coupling depending on the environment. In this way, a small (screened) coupling in high-density regions, in agreement with observations, is still compatible with a larger coupling (β ∼ 1) active in low-density regions. In other words, a dynamical mechanism ensures that the effects of the coupling are screened in laboratory and solar-system tests of gravity.
However, it is important to remark that screening mechanisms are meant to protect the scalar field in high-density regions (and therefore allow for larger couplings in low-density environments), but they do not address problems related to the self-acceleration of the DE scalar field, which still usually requires some fine-tuning to match present observations on w. f (R) theories, which can be mapped into a subclass of scalar-tensor theories, will be discussed in more detail in Section 1.4.6.
2.4.5 Phantom crossing
In this section we turn to the evolution of the perturbations of a general dark-energy fluid with an evolving equation-of-state parameter w. Current limits on the equation-of-state parameter w = p/ρ of the dark energy indicate that p ≈ −ρ, and so do not exclude p < −ρ, a region of parameter space often called phantom energy. Even though the region for which w < −1 may be unphysical at the quantum level, it is still important to probe it, not least to test for coupled dark energy and alternative theories of gravity or higher-dimensional models that can give rise to an effective or apparent phantom energy.
Although there is no problem in considering w < −1 for the background evolution, there are apparent divergences appearing in the perturbations when a model tries to cross the limit w = −1. This is a potential headache for experiments like Euclid that directly probe the perturbations through measurements of galaxy clustering and weak lensing. To analyze the Euclid data, we need to be able to consider models that cross the phantom divide w = −1 at the level of first-order perturbations (since the only dark-energy model that has no perturbations at all is the cosmological constant).
However, at the level of cosmological first-order perturbation theory, there is no fundamental limitation that prevents an effective fluid from crossing the phantom divide.
2.4.5.1 Parameterizing the pressure perturbation
The apparent divergence at the crossing appears because for w = −1 the energy-momentum tensor Eq. (1.3.3) reads T^{ μν } = pg^{ μν }. Normally the four-velocity u^{ μ } is the unique timelike eigenvector of the energy-momentum tensor, but now every vector is an eigenvector, so the problem of fixing a unique rest frame is no longer well posed. Then, even though the pressure perturbation looks fine for the observer in the rest frame, because it does not diverge, the badly-defined gauge transformation to the Newtonian frame does, as it contains the adiabatic sound speed \(c_a^2\).
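The role of the adiabatic sound speed can be made concrete with a small numerical sketch. Using \(c_a^2 = \dot p/\dot\rho = w - w'/[3(1+w)]\) (prime denoting d/d ln a), and a CPL form for w(a) chosen purely for illustration:

```python
# The adiabatic sound speed c_a^2 = dot(p)/dot(rho) = w - w'/(3(1+w)),
# with prime = d/dln a, diverges where w crosses -1: the divergence sits
# in the gauge transformation, not in the rest-frame pressure
# perturbation. The CPL parameters below are illustrative assumptions.
w0, wa = -1.1, 0.5    # w(a) = w0 + wa*(1 - a); this crosses -1 at a = 0.8

def w_of_a(a):
    return w0 + wa * (1.0 - a)

def ca2(a):
    dw_dlna = -wa * a
    return w_of_a(a) - dw_dlna / (3.0 * (1.0 + w_of_a(a)))

for a in (0.5, 0.75, 0.79, 0.799):
    print(a, ca2(a))    # |c_a^2| blows up as a -> 0.8
```

Away from the crossing c_a^2 stays of order unity, but it grows without bound as 1 + w → 0, which is exactly the pathology discussed above.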
2.4.5.2 Regularizing the divergences
We have seen that neither barotropic fluids nor canonical scalar fields, for which the pressure perturbation is of the type (1.4.34), can cross the phantom divide. However, there is a simple model [called the quintom model; 360, 451] consisting of two fluids of the same type as in the previous Section 1.4.5.1, but with a constant w on either side of w = −1. The combination of the two fluids then effectively crosses the phantom divide if we start with w_{tot} > −1, as the energy density in the fluid with w < −1 grows with time while the other decays, so that this fluid will eventually dominate and we end up with w_{tot} < −1.
This result appears to be related also to the behavior found for coupled dark-energy models (originally introduced to solve the coincidence problem), where dark matter and dark energy interact not only through gravity [33]. The effective dark energy in these models can also cross the phantom divide without divergences [462, 279, 534].
However, in this class of models other instabilities may arise at the perturbation level, regardless of the coupling used [cf. 916].
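A minimal numerical sketch of the quintom mechanism (pure Python; the constant equations of state and present densities are illustrative assumptions):

```python
# Two-fluid "quintom" sketch: each fluid has a constant w (one above,
# one below -1) and is separately well behaved, yet the total effective
# equation of state crosses the phantom divide. Densities are in units
# of the present critical density; all values are illustrative.
w1, w2 = -0.9, -1.1
rho1_0, rho2_0 = 0.5, 0.2

def w_tot(a):
    rho1 = rho1_0 * a ** (-3.0 * (1.0 + w1))   # dilutes as a**-0.3
    rho2 = rho2_0 * a ** (-3.0 * (1.0 + w2))   # grows as a**+0.3
    return (w1 * rho1 + w2 * rho2) / (rho1 + rho2)

for a in (0.3, 1.0, 3.0, 10.0):
    print(a, w_tot(a))   # crosses w_tot = -1 at a = 2.5**(1/0.6), about 4.6
```

With these numbers w_tot > −1 today and the phantom fluid takes over in the future, exactly the behavior described in the text; each individual fluid never crosses w = −1, so no single-fluid divergence arises.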
2.4.5.3 A word on perturbations when w = −1
For instance, if we set w = −1 and δp = γδρ (where γ can be a generic function) in Eqs. (1.4.28) and (1.4.29), we have δ ≠ 0 and V ≠ 0. However, the solutions are decaying modes due to the \(-{1 \over a}\left({1 - 3w} \right)V\) term, so they are not important at late times; it is nonetheless interesting to notice that they are in general not zero.
It is also interesting to notice that when w = −1 the perturbation equations tell us that dark-energy perturbations are not influenced through Ψ and Φ′ (see Eqs. (1.4.28) and (1.4.29)). Since Φ and Ψ are the quantities directly entering the metric, they must remain finite, and indeed much smaller than 1 for perturbation theory to hold. Since, in the absence of direct couplings, the dark energy only feels the other constituents through the terms (1 + w)Ψ and (1 + w)Φ′, it decouples completely in the limit w = −1 and just evolves on its own. But its perturbations still enter the Poisson equation, and so the dark-matter perturbations will feel the effects of the dark-energy perturbations.
Although this situation may seem contrived, it might be that the acceleration of the universe is just an observed effect of a modified theory of gravity. As was shown in [537], any modified-gravity theory can be described as an effective fluid both at the background and at the perturbation level; in such a situation it is imperative to describe its perturbations properly, as this effective fluid may manifest unexpected behavior.
2.4.6 f(R) gravity

(I) The metric formalism
The first approach is the metric formalism, in which the connections \(\Gamma _{\beta \gamma}^\alpha\) are the usual connections defined in terms of the metric g_{ μν }. The field equations can be obtained by varying the action (1.4.44) with respect to g_{ μν }:$$F(R){R_{\mu \nu}}(g) - {1 \over 2}f(R){g_{\mu \nu}} - {\nabla _\mu}{\nabla _\nu}F(R) + {g_{\mu \nu}}\square F(R) = {\kappa ^2}{T_{\mu \nu}}\,,$$(1.4.45)where F (R) ≡ ∂f/∂R (we also use the notation f_{,R} ≡ ∂f/∂R, f_{,RR} ≡ ∂^{2}f/∂R^{2}), and T_{ μν } is the matter energy-momentum tensor. The trace of Eq. (1.4.45) is given by$$3\,\square F(R) + F(R)R - 2f(R) = {\kappa ^2}T\,,$$(1.4.46)where T = g^{ μν }T_{ μν } = −ρ + 3P. Here ρ and P are the energy density and the pressure of the matter, respectively.
(II) The Palatini formalism
The second approach is the Palatini formalism, where \(\Gamma _{\beta \gamma}^\alpha\) and g_{ μν } are treated as independent variables. Varying the action (1.4.44) with respect to g_{ μν } gives$$F(R){R_{\mu \nu}}(\Gamma) - {1 \over 2}f(R){g_{\mu \nu}} = {\kappa ^2}{T_{\mu \nu}}\,,$$(1.4.47)where R_{ μν } (Γ) is the Ricci tensor corresponding to the connections \(\Gamma _{\beta \gamma}^\alpha\). In general this is different from the Ricci tensor R_{ μν } (g) corresponding to the metric connections. Taking the trace of Eq. (1.4.47), we obtain$$F(R)R - 2f(R) = {\kappa ^2}T\,,$$(1.4.48)where R (T) = g^{ μν }R_{ μν } (Γ) is directly related to T. Taking the variation of the action (1.4.44) with respect to the connection, and using Eq. (1.4.47), we find$$\begin{array}{*{20}c} {{R_{\mu \nu}}(g) - {1 \over 2}{g_{\mu \nu}}R(g) = {{{\kappa ^2}{T_{\mu \nu}}} \over F} - {{FR(T) - f} \over {2F}}{g_{\mu \nu}} + {1 \over F}({\nabla _\mu}{\nabla _\nu}F - {g_{\mu \nu}}\square F)\quad \quad \quad}\\ {- {3 \over {2{F^2}}}\left[ {{\partial _\mu}F{\partial _\nu}F - {1 \over 2}{g_{\mu \nu}}{{(\nabla F)}^2}} \right]\,.}\\ \end{array}$$(1.4.49)
In modified gravity models where F (R) is a function of R, the term □F (R) does not vanish in Eq. (1.4.46). This means that, in the metric formalism, there is a propagating scalar degree of freedom, ψ ≡ F (R). The trace equation (1.4.46) governs the dynamics of the scalar field ψ, dubbed the “scalaron” [862]. In the Palatini formalism the kinetic term □F (R) is not present in Eq. (1.4.48), which means that the scalar-field degree of freedom does not propagate freely [32, 563, 567, 566].
It is important to realize that the dynamics of f (R) dark-energy models differs between the two formalisms. Here we confine ourselves to the metric case only.
Already in the early 1980s it was known that the model f (R) = R + αR^{2} can be responsible for inflation in the early universe [862]. This comes from the fact that the presence of the quadratic term αR^{2} gives rise to an asymptotically exact de Sitter solution. Inflation ends when the term αR^{2} becomes smaller than the linear term R. Since αR^{2} is negligibly small relative to R at the present epoch, this model is not suitable for realizing the present cosmic acceleration.
Since late-time acceleration requires a modification for small R, models of the type f (R) = R − α/R^{ n } (α > 0, n > 0) were proposed as candidates for dark energy [204, 212, 687]. While late-time cosmic acceleration is possible in these models, it has become clear that they do not satisfy local gravity constraints because of the instability associated with negative values of f_{,RR} [230, 319, 852, 697, 355]. Moreover, a standard matter epoch is not present because of a large coupling between the Ricci scalar and non-relativistic matter [43].
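The sign of f_{,RR} for this class of models can be checked directly. A sketch (Python, finite differences; α, n and the sampled R values are arbitrary illustrative choices):

```python
# Numerical check of the instability noted above: for f(R) = R - alpha/R^n
# (alpha > 0, n > 0) the second derivative f_RR is negative for R > 0,
# in conflict with the viability condition f_RR > 0 listed below.
# alpha, n and the R values are arbitrary illustrative choices.
def f(R, alpha=1.0, n=1.0):
    return R - alpha / R ** n

def f_RR(R, h=1e-4, **kw):
    # central finite difference for the second derivative
    return (f(R + h, **kw) - 2.0 * f(R, **kw) + f(R - h, **kw)) / h ** 2

for R in (0.5, 1.0, 5.0):
    print(R, f_RR(R))    # negative everywhere: unstable (tachyonic) scalaron
```

For the default choice n = 1, α = 1 the analytic result is f_{,RR} = −2/R^3, and the finite-difference values reproduce it; the negativity for all R > 0 is the instability cited in the text.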

(i) f_{,R} > 0 for R ≥ R_{0} (> 0), where R_{0} is the Ricci scalar at the present epoch. Strictly speaking, if the final attractor is a de Sitter point with the Ricci scalar R_{1} (> 0), then the condition f_{,R} > 0 needs to hold for R ≥ R_{1}.
This is required to avoid a negative effective gravitational constant.

(ii) f_{,RR} > 0 for R ≥ R_{0}.
This is required for consistency with local gravity tests [319, 697, 355, 683], for the presence of the matterdominated epoch [43, 39], and for the stability of cosmological perturbations [213, 849, 110, 358].

(iii) f (R) → R − 2Λ for R ≫ R_{0}.
This is required for consistency with local gravity tests [48, 456, 864, 53, 904] and for the presence of the matterdominated epoch [39].

(iv) \(0 < {{Rf_{,RR}} \over {f_{,R}}}(r = -2) < 1\) at \(r = -{{Rf_{,R}} \over f} = -2\).
This is required for the stability of the latetime de Sitter point [678, 39].
The dynamics of the full system can be investigated by analyzing the stability properties of the critical phase-space points, as in, e.g., [39]. The general conclusion is that only models with a characteristic function m (r) positive and close to the ΛCDM value m = 0 are cosmologically viable. That is, only for these models does one find a sequence of a long decelerated matter epoch followed by a stable accelerated attractor.
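As an illustration of this criterion, one can evaluate m and r numerically for sample Lagrangians (a sketch with assumed parameter values; ΛCDM sits exactly on the line m = 0):

```python
# The characteristic functions m = R*f_RR/f_R and r = -R*f_R/f used in
# the viability analysis above, evaluated with finite differences.
# LCDM (f = R - 2*Lambda) gives m = 0 identically; viable models must
# stay close to it with m >= 0. Parameter values are illustrative.
def m_and_r(f, R, h=1e-5):
    f_R = (f(R + h) - f(R - h)) / (2.0 * h)
    f_RR = (f(R + h) - 2.0 * f(R) + f(R - h)) / h ** 2
    return R * f_RR / f_R, -R * f_R / f(R)

lcdm = lambda R: R - 2.0 * 0.7        # Lambda = 0.7 in assumed units
quad = lambda R: R + 0.01 * R ** 2    # small quadratic correction

m_lcdm, r_lcdm = m_and_r(lcdm, 3.0)
m_quad, r_quad = m_and_r(quad, 3.0)
print(m_lcdm, r_lcdm)    # m = 0: the LCDM line
print(m_quad, r_quad)    # m small and positive: close to LCDM
```

The second model has m = 2αR/(1 + 2αR), small and positive for small αR, which is the kind of deviation from the ΛCDM line the stability analysis allows.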
Euclid forecasts for f (R) models will be presented in Section 1.8.7.
2.4.7 Massive gravity and higher-dimensional models
Instead of introducing new scalar degrees of freedom as in f (R) theories, another philosophy in modifying gravity is to modify the graviton itself. In this case the new degrees of freedom belong to the gravitational sector; examples include massive gravity and higher-dimensional frameworks, such as the Dvali-Gabadadze-Porrati (DGP) model [326] and its extensions. The new degrees of freedom can be responsible for a late-time speed-up of the universe, as summarized below for a selection of models. We note here that while such self-accelerating solutions are interesting in their own right, they do not tackle the old cosmological-constant problem: why the observed cosmological constant is so much smaller than expected in the first place. Instead of answering this question directly, an alternative approach is the idea of degravitation [see 327, 328, 58, 330], where the cosmological constant could be as large as expected from standard field theory, but would simply gravitate very little (see the corresponding paragraph in Section 1.4.7.1 below).
2.4.7.1 Self-acceleration
As shown in [686], such theories can allow for self-accelerating de Sitter solutions without any ghosts, unlike the DGP model. In the presence of compact sources, these solutions can support spherically-symmetric, Vainshtein-like nonlinear perturbations that are also stable against small fluctuations. However, this is constrained to the subset of the third-order Galileon, which contains only \({\mathcal L}_\pi ^{\left(1 \right)}\), \({\mathcal L}_\pi ^{\left(2 \right)}\) and \({\mathcal L}_\pi ^{\left(3 \right)}\) [669].
The Galileon terms described above form a subset of the “generalized Galileons”. A generalized Galileon model allows nonlinear derivative interactions of the scalar field π in the Lagrangian while insisting that the equations of motion remain at most second order in derivatives, thus removing any ghost-like instabilities. However, unlike the pure Galileon models, generalized Galileons do not impose the symmetry of Eq. (1.4.75). These theories were first written down by Horndeski [445] and later rediscovered by Deffayet et al. [300]. They are a linear combination of Lagrangians constructed by multiplying the Galileon Lagrangians \({\mathcal L}_\pi ^{\left(n \right)}\) by an arbitrary scalar function of the scalar π and its first derivatives. Just like the Galileon, generalized Galileons can give rise to cosmological acceleration and to Vainshtein screening. However, as they lack the Galileon symmetry, these theories are not protected from quantum corrections. Many other theories can also be found within the spectrum of generalized Galileon models, including k-essence.
Degravitation. The idea behind degravitation is to modify gravity in the IR, such that the vacuum energy has a weaker effect on the geometry, thereby reconciling a natural value for the vacuum energy, as expected from particle physics, with the observed late-time acceleration. Such modifications of gravity typically arise in models of massive gravity [327, 328, 58, 330], i.e., where gravity is mediated by a massive spin-2 field. The extra-dimensional DGP scenario presented previously represents a specific model of soft-mass gravity, where gravity weakens at large distances, with a force law going as 1/r. Nevertheless, this weakening is too mild to achieve degravitation and tackle the cosmological-constant problem. An obvious way out is to extend the DGP model to higher dimensions, thereby diluting gravity more efficiently at large distances. This is achieved in models of cascading gravity, as presented below. An alternative to cascading gravity is to work directly with theories of constant-mass gravity (hard-mass graviton).
Cascading gravity. Cascading gravity is an explicit realization of the idea of degravitation, where gravity behaves as a high-pass filter: sources with characteristic wavelength (in space and in time) shorter than a characteristic scale r_{ c } behave as expected from GR, while the effect of sources with longer wavelengths is weakened. This could explain why a large cosmological constant does not backreact as much as anticipated from standard GR. Since the DGP model does not modify gravity enough in the IR, “cascading gravity” relies on the presence of at least two infinite extra dimensions, while our world is confined to a four-dimensional brane [293]. As in DGP, four-dimensional gravity is recovered at short distances thanks to an induced Einstein-Hilbert term on the brane, with associated Planck scale M_{4}. The brane we live in is then embedded in a five-dimensional brane, which bears a five-dimensional Planck scale M_{5}, itself embedded in six dimensions (with Planck scale M_{6}). From a four-dimensional perspective, the relevant scales are the 5d and 6d masses \({m_5} = M_5^3/M_4^2\) and \({m_6} = M_6^4/M_5^3\), which characterize the transitions from the 4d to 5d and from the 5d to 6d behavior, respectively.
Such theories, embedded in more than one extra dimension, involve at least one additional scalar field that typically enters as a ghost. This ghost is independent of the ghost present in the self-accelerating branch of DGP, but is completely generic to any codimension-two and higher framework with brane-localized kinetic terms. However, there are two ways to cure the ghost, both of which are natural when considering a realistic higher-codimensional scenario, namely smoothing out the brane, or including a brane tension [293, 290, 294].
When the issue associated with the ghost is properly taken into account, such models give rise to a theory of massive gravity (soft-mass graviton) composed of one helicity-2 mode, helicity-1 modes that decouple, and two helicity-0 modes. In order for this theory to be consistent with standard GR in four dimensions, both helicity-0 modes should decouple from the theory. As in DGP, this decoupling does not happen in a trivial way, but relies on a phenomenon of strong coupling: close enough to any source, both scalar modes are strongly coupled and therefore freeze.
The resulting theory appears as a theory of a massless spin-2 field in four dimensions, in other words as GR. For m_{6} ≤ m_{5}, the respective Vainshtein or strong-coupling scale, i.e., the distance from a source M within which each mode is strongly coupled, is \(r_i^3 = M/m_i^2M_4^2\), where i = 5, 6. Around a source M, one thus recovers four-dimensional gravity for r ≪ r_{5}, five-dimensional gravity for r_{5} ≪ r ≪ r_{6}, and finally six-dimensional gravity at larger distances r ≫ r_{6}.
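The hierarchy of scales can be illustrated numerically (natural units; the Planck-scale values and source mass below are arbitrary assumptions, chosen only to exhibit m_6 < m_5 and hence r_5 < r_6):

```python
# Crossover masses and Vainshtein radii in cascading gravity, from the
# scalings quoted above: m5 = M5^3/M4^2, m6 = M6^4/M5^3 and
# r_i^3 = M/(m_i^2 * M4^2). All numbers are purely illustrative
# (natural units with M4 = 1), not physical values.
def crossover_masses(M4, M5, M6):
    return M5 ** 3 / M4 ** 2, M6 ** 4 / M5 ** 3

def vainshtein_radius(M_src, m_i, M4):
    return (M_src / (m_i ** 2 * M4 ** 2)) ** (1.0 / 3.0)

M4, M5, M6 = 1.0, 1e-2, 1e-4      # assumed hierarchy of Planck scales
m5, m6 = crossover_masses(M4, M5, M6)
M_src = 1e10                       # source mass (illustrative)
r5 = vainshtein_radius(M_src, m5, M4)
r6 = vainshtein_radius(M_src, m6, M4)
print(m5, m6, r5, r6)   # m6 < m5 implies r5 < r6: GR inside r5,
                        # 5d gravity between r5 and r6, 6d outside r6
```

A smaller crossover mass means a larger strong-coupling radius, so the 6d regime only appears outside the larger radius r_6, matching the cascade described above.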
Massive gravity. While laboratory experiments, solar-system tests and cosmological observations have all been in complete agreement with GR for almost a century now, these bounds do not eliminate the possibility for the graviton to bear a small hard mass m ≲ 6 × 10^{−32} eV [400]. The question of whether gravity could be mediated by a hard-mass graviton is thus not a purely abstract one. Since the degravitation mechanism is also expected to be present if the graviton bears a hard mass, such models can play an important role for late-time cosmology, and more precisely when the age of the universe becomes of the order of the graviton Compton wavelength.
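For orientation, the quoted bound translates into a Compton wavelength of cosmological size. A quick check (constants from standard tables; H_0 = 70 km/s/Mpc is an assumed fiducial, not a value from the text):

```python
# Order-of-magnitude check of the scale at which a hard graviton mass
# m ~ 6e-32 eV (the bound quoted above) would matter: its reduced
# Compton wavelength, compared with the Hubble radius c/H0.
HBAR_C_EV_M = 1.9732e-7        # hbar*c in eV*m (standard value)
MPC_IN_M = 3.0857e22           # meters per megaparsec
M_GRAVITON_EV = 6e-32          # upper bound on the graviton mass, from the text

lam_m = HBAR_C_EV_M / M_GRAVITON_EV     # reduced Compton wavelength in meters
lam_mpc = lam_m / MPC_IN_M
hubble_radius_mpc = 2.9979e5 / 70.0     # c/H0 for H0 = 70 km/s/Mpc (assumed)
print(lam_mpc, hubble_radius_mpc)       # ~1e2 Mpc versus ~4e3 Mpc
```

A graviton saturating the bound would thus have a Compton wavelength of order a hundred Mpc, i.e., already within the large-scale-structure regime probed by a survey like Euclid.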
2.4.7.2 Observations
All the models of modified gravity presented in this section have in common the presence of at least one additional helicity-0 degree of freedom that is not an arbitrary scalar, but descends from a full-fledged spin-two field. As such it has no potential and enters the Lagrangian via very specific derivative terms fixed by symmetries. However, tests of gravity severely constrain the presence of additional scalar degrees of freedom. As is well known, in theories of massive gravity the helicity-0 mode can evade fifth-force constraints in the vicinity of matter if the helicity-0 mode interactions are important enough to freeze out the field fluctuations [913]. This Vainshtein mechanism is similar in spirit but different in practice to the chameleon and symmetron mechanisms presented in detail in Sections 1.4.7.3 and 1.4.7.4 below. One key difference relies on the presence of derivative interactions rather than a specific potential: rather than becoming massive in dense regions, in the Vainshtein mechanism the helicity-0 mode becomes weakly coupled to matter (and light, i.e., to sources in general) at high energy. This screening of the scalar mode can nevertheless have distinct signatures in cosmology, and in particular for structure formation.
2.4.7.3 Screening mechanisms
While quintessence introduces a new degree of freedom to explain the late-time acceleration of the universe, the idea behind modified gravity is instead to tackle the core of the cosmological constant problem and its tuning issues, as well as to screen any fifth forces that would come from the introduction of extra degrees of freedom. As mentioned in Section 1.4.4.1, the strength with which these new degrees of freedom can couple to the fields of the standard model is very tightly constrained by searches for fifth forces and violations of the weak equivalence principle. Typically the strength of the scalar-mediated interaction is required to be orders of magnitude weaker than gravity. It is possible to tune this coupling to be as small as is required, but this introduces additional naturalness problems. Here we discuss in more detail a number of ways in which new scalar degrees of freedom can naturally couple to standard model fields, whilst still being in agreement with observations, because a dynamical mechanism ensures that their effects are screened in laboratory and solar system tests of gravity. This is done by making some property of the field dependent on the background environment under consideration. These models typically fall into two classes: either the field becomes massive in a dense environment, so that the scalar force is suppressed because the Compton wavelength of the interaction is small, or the coupling to matter becomes weaker in dense environments to ensure that the effects of the scalar are suppressed. Both types of behavior require the presence of nonlinearities.
Density-dependent masses: The chameleon. The chameleon [499] is the archetypal model of a scalar field with a mass that depends on its environment, becoming heavy in dense environments and light in diffuse ones. The ingredients for constructing a chameleon model are a conformal coupling between the scalar field and the matter fields of the standard model, and a potential for the scalar field that includes relevant self-interaction terms.
The environmental dependence of the mass of the field allows the chameleon to avoid the constraints of fifth-force experiments through what is known as the thin-shell effect. If a dense object is embedded in a diffuse background, the chameleon is massive inside the object, where its Compton wavelength is small. If the Compton wavelength is smaller than the size of the object, then the scalar-mediated force felt by an observer at infinity is sourced not by the entire object, but only by a thin shell of matter (of depth comparable to the Compton wavelength) at the surface. This leads to a natural suppression of the force without the need to fine-tune the coupling constant.
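As an illustration of the density-dependent mass, consider the simplest textbook chameleon with a runaway potential V(φ) = M⁵/φ and a linear matter coupling; this specific potential and all parameter values below are illustrative assumptions, not taken from the text.

```python
import math

# Illustrative chameleon sketch (hypothetical units and parameters):
# bare potential V(phi) = M^5 / phi plus a matter coupling beta*rho*phi/M_pl,
# giving V_eff(phi) = M^5/phi + beta*rho*phi/M_pl.

M5 = 1.0      # M^5, sets the potential scale
BETA = 1.0    # matter coupling strength
M_PL = 1.0    # reduced Planck mass

def phi_min(rho):
    """Field value minimizing V_eff: dV_eff/dphi = -M^5/phi^2 + beta*rho/M_pl = 0."""
    return math.sqrt(M5 * M_PL / (BETA * rho))

def m_eff(rho):
    """Effective mass from the curvature of V_eff: m^2 = 2 M^5 / phi_min^3."""
    return math.sqrt(2.0 * M5 / phi_min(rho) ** 3)

# The chameleon property: the field is heavy in dense environments
# (short Compton wavelength, suppressed force) and light in diffuse ones.
for rho in (1e-6, 1.0, 1e6):
    print(f"rho = {rho:8.0e}  phi_min = {phi_min(rho):.3e}  m_eff = {m_eff(rho):.3e}")
```

For this potential m_eff scales as ρ^{3/4}, so the force range shrinks rapidly with increasing density.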
2.4.7.4 Density-dependent couplings
The Vainshtein mechanism. In models such as DGP and the Galileon, the effects of the scalar field are screened by the Vainshtein mechanism [913, 299]. This occurs when nonlinear, higher-derivative operators are present in the Lagrangian for a scalar field, arranged in such a way that the equations of motion for the field are still second order, such as the interactions presented in Eq. (1.4.73).
Inside the Vainshtein radius, when the nonlinear, higher-derivative terms become important, they cause the kinetic terms for scalar fluctuations to become large. This can be interpreted as a relative weakening of the coupling between the scalar field and matter. In this way the strength of the interaction is suppressed in the vicinity of massive objects.
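For a sense of scale, the Vainshtein radius can be estimated from the relation r³ = M/(m²M_4²) quoted earlier, rewritten in dimensionful form as r_V³ ~ r_s λ_g² (Schwarzschild radius times Compton wavelength squared, dropping numerical prefactors); the sketch below applies it to the Sun.

```python
import math

# Order-of-magnitude sketch of the Vainshtein radius r_V around a source,
# using r_V^3 ~ r_s * lambda_g^2, the dimensionful version of
# r^3 = M / (m^2 M_4^2); numerical prefactors are dropped.
# r_s: Schwarzschild radius of the source; lambda_g: graviton Compton wavelength.

G = 6.674e-11           # m^3 kg^-1 s^-2
C = 2.998e8             # m/s
HBAR_C_EV_M = 1.973e-7  # (hbar * c) in eV * m

def vainshtein_radius(mass_kg, m_graviton_ev):
    r_s = 2.0 * G * mass_kg / C**2          # Schwarzschild radius [m]
    lambda_g = HBAR_C_EV_M / m_graviton_ev  # Compton wavelength [m]
    return (r_s * lambda_g**2) ** (1.0 / 3.0)

# For a graviton mass near the observational bound, m ~ 1e-32 eV, the Compton
# wavelength is of order the Hubble radius, and the Vainshtein radius of the
# Sun comes out astrophysically large, which is why solar-system tests
# recover GR inside it.
M_SUN = 1.989e30  # kg
r_v = vainshtein_radius(M_SUN, 1e-32)
print(f"Vainshtein radius of the Sun: {r_v:.2e} m")
```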
The symmetron. In the symmetron model the scalar potential is symmetric under φ → −φ, and the coupling to matter makes the effective potential density-dependent. In sufficiently dense environments, ρ > μ^{2}M^{2}, the field sits in the minimum at the origin. As the local density drops, the symmetry is spontaneously broken and the field falls into one of the two new minima with a nonzero vacuum expectation value. In high-density, symmetry-restoring environments, the scalar field vacuum expectation value is near zero and fluctuations of the field do not couple to matter. Thus, the symmetron force in the exterior of a massive object is suppressed because the field does not couple to the core of the object.
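The density dependence of the symmetron vacuum expectation value can be sketched with the standard symmetry-breaking effective potential; the form V_eff = ½(ρ/M² − μ²)φ² + ¼λφ⁴ and the parameter values below are assumptions consistent with the ρ > μ²M² condition quoted above, not taken from the text.

```python
import math

# Sketch of the symmetron VEV as a function of the local matter density,
# assuming V_eff(phi) = (1/2)(rho/M^2 - mu^2) phi^2 + (1/4) lam phi^4
# (illustrative units; mu, M, lam are hypothetical values).

MU = 1.0    # bare tachyonic mass scale
M = 1.0     # coupling scale (rho/M^2 term)
LAM = 0.1   # quartic self-coupling

def vev(rho):
    """Field value at the minimum of V_eff for local density rho."""
    m2_eff = rho / M**2 - MU**2          # effective quadratic coefficient
    if m2_eff >= 0.0:
        return 0.0                       # symmetry restored: field sits at the origin
    return math.sqrt(-m2_eff / LAM)      # symmetry broken: nonzero VEV

# Dense environment (rho > mu^2 M^2): VEV = 0, fluctuations decouple.
# Diffuse environment: nonzero VEV, and the fifth force switches on.
print("dense:  ", vev(10.0))
print("diffuse:", vev(0.0))
```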
The Olive-Pospelov model. The Olive-Pospelov model [696] again uses a scalar conformally coupled to matter. In this construction both the coupling function and the scalar field potential are chosen to have quadratic minima. If the background field takes the value that minimizes the coupling function, then fluctuations of the scalar field decouple from matter. In non-relativistic environments the scalar field feels an effective potential, which is a combination of these two functions. In high-density environments the field is very close to the value that minimizes the coupling function. In low-density environments the field relaxes to the minimum of the bare potential. Thus, the interactions of the scalar field are suppressed in dense environments.
2.4.8 Einstein Aether and its generalizations
Indeed, when the geometry is of the form (1.4.82), anisotropic stresses are negligible, and A^{ μ } is aligned with the flow of time t^{ μ }, one can find appropriate values of the c_{ A } and ℓ such that K is dominated by a term equal to \(\vert \nabla \Psi {\vert ^2}/a_0^2\). This then leads to a modification of the time-time component of Einstein's equations: instead of reducing to Poisson's equation, one recovers an equation of the form (1.4.81). Therefore the models are successful covariant realizations of MOND.
Returning to the original motivation behind the theory, the next step is to look at the theory on cosmological scales and see whether the GEA models are realistic alternatives to dark matter. As emphasized, the additional structure in spacetime is dynamical and so possesses independent degrees of freedom. As the model is assumed to be uncoupled to other matter, the gravitational field equations would regard the influence of these degrees of freedom as a type of dark matter (possibly coupled nonminimally to gravity, and not necessarily ‘cold’).
The possibility that the model may then be a viable alternative to the dark sector in background cosmology and linear cosmological perturbations has been explored in depth in [989, 564] and [991]. As an alternative to dark matter, it was found that the GEA models could replicate some but not all of the following features of cold dark matter: influence on the background dynamics of the universe; negligible sound speed of perturbations; growth rate of dark matter 'overdensity'; absence of anisotropic stress and contribution to the cosmological Poisson equation; effective minimal coupling to the gravitational field. When compared to the data from large-scale structure and the CMB, the model fared significantly less well than the Concordance Model and so is excluded. If one relaxes the requirement that the vector field be responsible for the effects of cosmological dark matter, one can regard the model as responsible only for the effects of dark energy. It was found [991] that the most stringent current constraints on the model's success as dark energy come from limits on the size of large-scale CMB anisotropy. Specifically, possible variation in w (z) of the 'dark energy', along with new degrees of freedom sourcing anisotropic stress in the perturbations, was found to lead to new, non-standard time variation of the potentials Φ and Ψ. These time variations source large-scale anisotropies via the integrated Sachs-Wolfe effect, and the parameter space of the model is constrained by requiring that this effect not become too pronounced.
In spite of this, given the status of current experimental bounds it is conceivable that a more successful alternative to the dark sector may share some of these points of departure from the Concordance Model and yet fare significantly better at the level of the background and linear perturbations.
2.4.9 The TensorVectorScalar theory of gravity
Although no further studies of accelerated expansion in TeVeS have been performed, it is very plausible that certain choices of the free function will inevitably lead to acceleration. It is easy to see that the scalar field action has the same form as a k-essence/k-inflation [61] action, which has been considered as a candidate theory for acceleration. It is unknown in general whether this has similar features to uncoupled k-essence, although Zhao's study indicates that this is a promising research direction [984].
In TeVeS, cold dark matter is absent. Therefore, in order to get acceptable values for the physical Hubble constant today (i.e., around H_{0} ∼ 70 km/s/Mpc), we have to supplement the absence of CDM with something else. Possibilities include the scalar field itself, massive neutrinos [841, 364] and a cosmological constant. At the same time, one has to get the right angular diameter distance to recombination [364]. These two requirements can place severe constraints on the allowed free functions.
Until TeVeS was proposed and studied in detail, MOND-type theories were assumed to be fatally flawed: their lack of a dark matter component would necessarily prevent the formation of large-scale structure compatible with current observational data. Without dark matter, it is well known that, since baryons are coupled to photons before recombination, they do not have enough time to grow into structures on their own. In particular, on scales smaller than the diffusion-damping scale, perturbations in such a universe are exponentially damped by Silk damping. CDM solves all of these problems because it does not couple to photons and can therefore start creating potential wells early on, into which the baryons later fall.
It is premature to claim (as in [843, 855]) that only a theory with CDM can fit CMB observations; a prime example to the contrary is the EBI theory [83]. Nevertheless, [841] numerically solved the linear Boltzmann equation in TeVeS and calculated the resulting CMB angular power spectrum. Using initial conditions close to adiabatic, the spectrum thus found provides a very poor fit compared to the ΛCDM model (see the left panel of Figure 1). The CMB seems to put TeVeS in trouble, at least for the Bekenstein free function. The result of [318] has a further direct consequence. The difference Φ − Ψ, sometimes named the gravitational slip (see Section 1.3.2), has additional contributions coming from the perturbed vector field α. Since the vector field is required to grow in order to drive structure formation, it will inevitably lead to a growing Φ − Ψ. If the difference Φ − Ψ can be measured observationally, it can provide a substantial test that can distinguish TeVeS from ΛCDM.
2.5 Generic properties of dark energy and modified gravity models
This section explores some generic issues that are not connected to particular models (although we use some specific models as examples). First, we ask to which precision we should measure w in order to make significant progress in understanding dark energy. Second, we discuss the role of the anisotropic stress in distinguishing between dark energy and modified gravity models. Finally, we present some general consistency relations among the perturbation variables that all models of modified gravity should fulfill.
2.5.1 To which precision should we measure w ?

Since w is so close to −1, do we not already know that the dark energy is a cosmological constant?

To which precision should we measure w ? Or equivalently, why is the Euclid target precision of about 0.01 on w_{0} and 0.1 on w_{ a } interesting?
In this section we will attempt to answer these questions at least partially, in two different ways. We will start by examining whether we can draw useful lessons from inflation, and then we will look at what we can learn from arguments based on Bayesian model comparison.
In the second part we will consider the Bayesian evidence in favor of a true cosmological constant if we keep finding w = −1; we will see that for priors on w_{0} and w_{ a } of order unity, a precision like the one for Euclid is necessary to favor a true cosmological constant decisively. We will also discuss how this conclusion changes depending on the choice of priors.
2.5.1.1 Lessons from inflation

Why is the universe very close to being spatially flat?

Why do we observe homogeneity and isotropy on scales that were naively never in causal contact?

What created the initial fluctuations?
While there is no conclusive proof that an inflationary phase took place in the early universe, it is surprisingly difficult to create the observed fluctuation spectrum in alternative scenarios that are strictly causal and only act on subhorizon scales [854, 803].
If, however, inflation took place, then it seems natural to ask whether its observed properties appear similar to our current knowledge about the dark energy, and if so, whether we can use inflation to learn something about the dark energy. The first lesson to draw from inflation is that it was not due to a pure cosmological constant. This is immediately clear since we exist: inflation ended. We can go even further: if Planck confirms the WMAP observation of a deviation from a scale-invariant initial spectrum (n_{ s } ≠ 1) [526], then this excludes an exactly exponential expansion during the observable epoch and, thus, also a temporary, effective cosmological constant.
As argued above, we conclude that inflation was not due to a cosmological constant. However, an observer back then would nonetheless have found w ≈ −1. Thus, an observation of w ≈ −1 (at least down to an error of about 0.02, see Figure 2) does not provide a very strong reason to believe that we are dealing with a cosmological constant.
2.5.1.2 Higgs-Dilaton Inflation: a connection between the early and late universe acceleration
Despite previous arguments, it is natural to ask for a connection between the two known acceleration periods. In fact, in the last few years there has been a renewal of model building in inflationary cosmology by considering the fundamental Higgs as the inflaton field [133]. Such an elegant and economical model can give rise to the observed amplitude of CMB anisotropies when we include a large non-minimal coupling of the Higgs to the scalar curvature. In the context of quantum field theory, the running of the Higgs mass from the electroweak scale to the Planck scale is affected by this non-minimal coupling in such a way that the beta function of the Higgs' self-coupling vanishes at an intermediate scale (μ ∼ 10^{15} GeV) if the mass of the Higgs is precisely 126 GeV, as measured at the LHC. This partial fixed point (the other beta functions do not vanish) suggests an enhancement of symmetry at that scale, and the presence of a Nambu-Goldstone boson (the dilaton field) associated with the breaking of scale invariance [820]. In a subsequent paper [383], the Higgs-Dilaton scenario was explored in full detail. The model predicts a bound on the scalar spectral index, n_{ s } < 0.97, with negligible associated running, −0.0006 < d ln n_{ s }/d ln k < 0.00015, and a tensor-to-scalar ratio, 0.0009 < r < 0.0033, which, although out of reach of the Planck satellite mission, is within the capabilities of future CMB satellite projects like PRISM [52]. Moreover, the model predicts that, after inflation, the dilaton plays the role of a thawing quintessence field, whose slow motion determines a concrete relation between the early-universe fluctuations and the equation of state of dark energy, 3(1 + w) = 1 − n_{ s } > 0.03, which could be within reach of the Euclid satellite mission [383].
Furthermore, within the HDI model, there is also a relation between the running of the scalar tilt and the variation of w (a), d ln n_{ s }/d ln k = 3w_{ a }, a prediction that can easily be ruled out with future surveys.
These relationships between early and late universe acceleration parameters constitute a fundamental physics connection within a very concrete and economical model, where the Higgs plays the role of the inflaton and the dilaton is a thawing quintessence field, whose dynamics has almost no freedom and satisfies all of the present constraints [383].
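The two HDI relations quoted above can be turned into a tiny calculator mapping inflationary observables onto dark-energy parameters; the input values below are hypothetical, chosen within the quoted bounds.

```python
# Illustrative check of the Higgs-Dilaton (HDI) consistency relations quoted
# above: 3(1 + w) = 1 - n_s and d ln n_s / d ln k = 3 w_a. Given a measured
# spectral tilt and running, they fix the dark-energy parameters (w, w_a).

def hdi_dark_energy(n_s, running):
    """Map inflationary observables to (w, w_a) via the HDI relations."""
    w = -1.0 + (1.0 - n_s) / 3.0   # from 3(1 + w) = 1 - n_s
    w_a = running / 3.0            # from d ln n_s / d ln k = 3 w_a
    return w, w_a

# Example: n_s = 0.96 (inside the predicted n_s < 0.97) and a tiny running,
# both hypothetical inputs for illustration.
w, w_a = hdi_dark_energy(0.96, -0.0003)
print(f"w   = {w:.5f}")    # slightly above -1: thawing quintessence
print(f"w_a = {w_a:.6f}")
```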
2.5.1.3 When should we stop: Bayesian model comparison
In the previous section we saw that inflation provides an argument why an observation of w ≈ −1 need not support a cosmological constant strongly. Let us now investigate this argument more precisely with Bayesian model comparison. One model, M_{0}, posits that the accelerated expansion is due to a cosmological constant. The other models assume that the dark energy is dynamical, in a way that is well parametrized either by an arbitrary constant w (model M_{1}) or by a linear fit w (a) = w_{0} + (1 − a)w_{ a } (model M_{2}). Under the assumption that no deviation from w = −1 will be detected in the future, at which point should we stop trying to measure w ever more accurately? The relevant target here is to quantify at what point we will be able to rule out an entire class of theoretical dark-energy models (when compared to ΛCDM) at a certain threshold for the strength of evidence.
Here we use the constant and linear parametrizations of w because, on the one hand, the constant w can be considered an effective quantity, averaged over redshift with the appropriate weighting factor for the observable, see [838], and, on the other hand, the precision targets for observations are conventionally phrased in terms of the figure of merit (FoM) given by \(1/\sqrt {\vert{\rm{Cov}}\left({{w_0},{w_a}} \right)\vert}\). We will, therefore, find a direct link between the model probability and the FoM. It would be an interesting exercise to repeat the calculations with a more general model, using e.g. PCA, although we would expect to reach a similar conclusion.
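The FoM can be computed directly from the (w_0, w_a) covariance matrix as the inverse square root of its determinant; a minimal sketch with purely illustrative covariance values:

```python
# Sketch of the DETF figure of merit, FoM = 1 / sqrt(|Cov(w0, wa)|),
# for a hypothetical 2x2 parameter covariance (illustrative numbers only).

def figure_of_merit(cov):
    """FoM from the (w0, wa) covariance: inverse square root of its determinant."""
    det = cov[0][0] * cov[1][1] - cov[0][1] * cov[1][0]
    return 1.0 / det ** 0.5

# Example: sigma(w0) = 0.1, sigma(wa) = 0.3, uncorrelated.
cov = [[0.1 ** 2, 0.0],
       [0.0, 0.3 ** 2]]
print(f"FoM = {figure_of_merit(cov):.1f}")  # larger FoM = tighter constraints
```

A correlation between w_0 and w_a shrinks the determinant and so raises the FoM for the same marginal errors.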
Table 1: Strength of evidence disfavoring the three benchmark models against a cosmological constant model, using an indicative accuracy on w = −1 from present data, σ ∼ 0.1.
Model  (Δ_{+}, Δ_{−})  ln B today (σ = 0.1) 

Phantom  (0, 10)  4.4 (strongly disfavored) 
Fluidlike  (2/3, 0)  1.7 (slightly disfavored) 
Small departures  (0.01, 0.01)  0.0 (inconclusive) 
Table 2: Required accuracy for future surveys in order to disfavor the three benchmark models against w = −1 for two different strengths of evidence.
Model  (Δ_{+}, Δ_{−})  Required σ for odds > 20 : 1  Required σ for odds > 150 : 1
Phantom  (0, 10)  0.4  5 · 10^{−2} 
Fluidlike  (2/3, 0)  3 · 10^{−2}  3 · 10^{−3} 
Small departures  (0.01, 0.01)  4 · 10^{−4}  5 · 10^{−5} 
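The ln B values in Table 1 can be approximately reproduced with a standard nested-model evidence calculation. The sketch below assumes a flat prior of width Δ_+ + Δ_− around w = −1 for the dynamical model and a Gaussian likelihood of width σ centered on w = −1; these assumptions match the flat-prior setup described in the text, though the original analysis may use different conventions.

```python
import math

# Approximate reconstruction of the ln B column of Table 1, assuming:
#  - model M0: w = -1 exactly (evidence = peak likelihood, normalized to 1);
#  - model M1: flat prior on w over [-1 - dm, -1 + dp], Gaussian likelihood
#    of width sigma centered on w = -1 (data fully compatible with Lambda).
# A sketch of the standard nested-model evidence ratio, not necessarily the
# exact convention of the original analysis.

def ln_bayes_factor(dp, dm, sigma):
    """ln B in favor of w = -1 over the flat-prior dynamical model."""
    phi = lambda x: 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))  # normal CDF
    # Integral of the (unnormalized) Gaussian likelihood over the prior range:
    integral = math.sqrt(2.0 * math.pi) * sigma * (phi(dp / sigma) - phi(-dm / sigma))
    evidence_m1 = integral / (dp + dm)   # likelihood averaged over the flat prior
    return -math.log(evidence_m1)        # ln(Z0 / Z1) with Z0 = 1

for name, dp, dm in [("Phantom", 0.0, 10.0),
                     ("Fluid-like", 2.0 / 3.0, 0.0),
                     ("Small departures", 0.01, 0.01)]:
    print(f"{name:18s} ln B = {ln_bayes_factor(dp, dm, 0.1):4.1f}")
```

With σ = 0.1 this recovers ln B ≈ 4.4, 1.7 and 0.0 for the three benchmark models, matching Table 1.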
By considering the model M_{2} we can also provide a direct link with the target DETF FoM: Let us choose (fairly arbitrarily) a flat probability distribution for the prior, of width Δw_{0} and Δw_{ a } in the darkenergy parameters, so that the value of the prior is 1/(Δw_{0}Δw_{ a }) everywhere. Let us assume that the likelihood is Gaussian in w_{0} and w_{ a } and centered on ΛCDM (i.e., the data fully supports Λ as the dark energy).
As above, we need to distinguish different cases depending on the width of the prior. If one accepts the argument of the previous section that we expect only a small deviation from w = −1, and sets a prior width of order 0.01 on both w_{0} and w_{ a }, then the posterior is dominated by the prior, and the ratio will be of order 1 if the future data are compatible with w = −1. Since the precision of the experiment is comparable to the expected deviation, both ΛCDM and evolving dark energy are equally probable (as argued above and shown for model M_{1} in Table 1), and we have to wait for a detection of w ≠ −1 or a significant further increase in precision (cf. the last row in Table 2).
A similar analysis could be easily carried out to compare the cosmological constant model against departures from Einstein gravity, thus giving some useful insight into the potential of future surveys in terms of Bayesian model selection.
To summarize, we used inflation as a dark-energy prototype to show that the current experimental bounds of w ≈ −1.0 ± 0.1 are not yet sufficient to significantly favor a cosmological constant over other models. In addition, even when expecting a deviation from w = −1 of order unity, our current knowledge of w does not allow us to favor Λ strongly in a Bayesian context. We showed that we need to reach percent-level accuracy both to have any chance of observing a deviation of w from −1 if the dark energy is similar to inflation, and because it is at this point that a cosmological constant starts to be favored decisively for prior widths of order 1. In either scenario, we do not expect to be able to improve our knowledge much with a lower-precision measurement of w. The dark energy can of course be quite different from the inflaton and may lead to larger deviations from w = −1. This would indeed be the preferred situation for Euclid, as we would then be able to investigate the physical origin of the accelerated expansion much more easily. We can, however, have departures from ΛCDM even if w is very close to −1 today. In fact, most present models of modified gravity and dynamical dark energy have a value of w_{0} that is asymptotically close to −1 (in the sense that large departures from this value are already excluded). For example, early dark-energy parameterizations (Ω_{ e }) test the amount of dark energy in the past, which can still be non-negligible (e.g., [723]). Similarly, a fifth force can lead to a background similar to ΛCDM but with different effects on perturbations and structure formation [79].
2.5.2 The effective anisotropic stress as evidence for modified gravity
As discussed in Section 1.4, all dark energy and modified gravity models can be described with the same effective metric degrees of freedom. This makes it impossible in principle to distinguish clearly between the two possibilities with cosmological observations alone. But while the cleanest tests would come from laboratory experiments, this may well be impossible to achieve. We would expect that model comparison analyses would still favor the correct model as it should provide the most elegant and economical description of the data. However, we may not know the correct model a priori, and it would be more useful if we could identify generic differences between the different classes of explanations, based on the phenomenological description that can be used directly to analyze the data.
Looking at the effective energy-momentum tensor of the dark-energy sector, we can either try to find a hint in the form of the pressure perturbation δp or in the effective anisotropic stress π. Whilst all scalar-field dark-energy models affect δp (for multiple fields with different sound speeds potentially in quite complex ways), they generically have π = 0. The opposite is also true: modified gravity models generically have π ≠ 0 [537]. Radiation and neutrinos contribute to the anisotropic stress on cosmological scales, but their contribution is safely negligible in the late-time universe. In the following sections we will first look at models with a single extra degree of freedom, for which we will find that π ≠ 0 is a firm prediction. We will then consider the f (R, G) case as an example of multiple degrees of freedom [782].
2.5.2.1 Modified gravity models with a single degree of freedom
In the prototypical scalar-tensor theory, where the scalar φ is coupled to R through F (φ)R, we find that π ∝ (F ′/F)δφ. This is very similar to the f (R) case, for which π ∝ (F ′/F)δR (where now F = df/dR). In both cases the generic model with vanishing anisotropic stress is given by F ′ = 0, which corresponds to a constant coupling (for scalar-tensor) or f (R) = R + const (GR with a cosmological constant). In both cases we recover the GR limit. The other possibility, δφ = 0 or δR = 0, imposes a very specific evolution on the perturbations that in general does not agree with observations.
In all of these examples only the GR limit has consistently vanishing effective anisotropic stress in situations compatible with observational results (matter-dominated evolution with a transition towards a state with w ≪ −1/3).
2.5.2.2 Balancing multiple degrees of freedom
In summary, none of the standard examples with a single extra degree of freedom discussed above allows for a viable model with π = 0. While finely balanced solutions can be constructed for models with several degrees of freedom, one would need to link the motion in model space to the evolution of the universe, in order to preserve π = 0. This requires even more fine tuning, and in some cases is not possible at all, most notably for evolution to a de Sitter state. The effective anisotropic stress appears therefore to be a very good quantity to look at when searching for generic conclusions on the nature of the accelerated expansion from cosmological observations.
2.5.3 Parameterized frameworks for theories of modified gravity
As explained in earlier sections of this report, modified-gravity models cannot be distinguished from dark-energy models using the FLRW background equations alone. But by comparing the background expansion rate of the universe with observables that depend on linear perturbations of an FLRW spacetime, we can hope to distinguish between these two categories of explanations. An efficient way to do this is via a parameterized, model-independent framework that describes cosmological perturbation theory in modified gravity. We present here one such framework, the parameterized post-Friedmann formalism [73], which implements possible extensions to the linearized gravitational field equations.
The parameterized post-Friedmann (PPF) approach is inspired by the parameterized post-Newtonian (PPN) formalism [961, 960], which uses a set of parameters to summarize leading-order deviations from the metric of GR. PPN was developed in the 1970s for testing alternative gravity theories in the solar system or binary systems, and is valid in weak-field, low-velocity scenarios. PPN itself cannot be applied to cosmology, because we do not know the exact form of the linearized metric for our Hubble volume. Furthermore, PPN can only test for constant deviations from GR, whereas the cosmological data we collect contain inherent redshift dependence.
For these reasons the PPF framework is a parameterization of the gravitational field equations (instead of the metric) in terms of a set of functions of redshift. A theory of modified gravity can be analytically mapped onto these PPF functions, which in turn can be constrained by data.
In principle there could also be new terms containing matter perturbations on the RHS of Eq. (1.5.12). However, for theories that maintain the weak equivalence principle — i.e., those with a Jordan frame where matter is uncoupled to any new fields — these matter terms can be eliminated in favor of additional contributions to \(\delta U_{\mu \nu}^{{\rm{metric}}}\) and \(\delta U_{\mu \nu}^{{\rm{d}}{\rm{.o}}{\rm{.f}}{\rm{.}}}\).
The tensor \(\delta U_{\mu \nu}^{{\rm{metric}}}\) is then expanded in terms of two gauge-invariant perturbation variables \(\hat \Phi\) and \(\hat \Gamma\). \(\hat \Phi\) is one of the standard gauge-invariant Bardeen potentials, while \(\hat \Gamma\) is the following combination of the Bardeen potentials: \(\hat \Gamma = (1/k)\left({\dot {\hat \Phi} + {\mathcal H}\hat \Psi} \right)\). We use \(\hat \Gamma\) instead of the usual Bardeen potential \(\hat \Psi\) because \(\hat \Gamma\) has the same derivative order as \({\hat \Phi}\) (whereas \({\hat \Psi}\) does not). We then deduce that the only possible structure of \(\delta U_{\mu \nu}^{{\rm{metric}}}\) that maintains the gauge invariance of the field equations is a linear combination of \(\hat \Phi\), \(\hat \Gamma\) and their derivatives, multiplied by functions of the cosmological background (see Eqs. (1.5.13)–(1.5.17) below).
\(\delta U_{\mu \nu}^{{\rm{d}}{\rm{.o}}{\rm{.f}}{\rm{.}}}\) is similarly expanded in a set of gauge-invariant potentials \(\left\{{{{\hat x}_i}} \right\}\) that contain the new degrees of freedom. [73] presented an algorithm for constructing the relevant gauge-invariant quantities in any theory.
The final terms in Eqs. (1.5.13)–(1.5.16) are present to ensure the gauge invariance of the modified field equations, as is required for any theory governed by a covariant action. The quantities M_{Δ}, M_{Θ} and M_{ P } are all predetermined functions of the background. ϵ and ν are off-diagonal metric perturbations, so these terms vanish in the conformal Newtonian gauge. The gauge-fixing terms should be regarded as a piece of mathematical bookkeeping; there is no constrainable freedom associated with them.
Let us make a comment about the number of coefficient functions employed in the PPF formalism. One may justifiably question whether the number of unknown functions in Eqs. (1.5.13)–(1.5.17) could ever be constrained. In reality, the PPF coefficients are not all independent. The form shown above represents a fully agnostic description of the extended field equations. However, as one begins to impose restrictions in theory space (even the simple requirement that the modified field equations must originate from a covariant action), constraint relations between the PPF coefficients begin to emerge. These constraints remove freedom from the parameterization.
Even so, degeneracies will exist between the PPF coefficients. It is likely that a subset of them can be well constrained, while another subset may have relatively little impact on current observables and so cannot be tested. In this case it is justifiable to drop the untestable terms. Note that this realization would in itself be an interesting statement — that there are parts of the gravitational field equations that are essentially unknowable.
Finally, we note that there is also a completely different, complementary approach to parameterizing modifications to gravity. Instead of parameterizing the linearized field equations, one could choose to parameterize the perturbed gravitational action. This approach has been used recently to apply the standard techniques of effective field theory to modified gravity; see [107, 142, 411] and references therein.
2.6 Nonlinear aspects
In this section we discuss how the nonlinear evolution of cosmic structures in different non-standard cosmological models can be studied by means of numerical simulations based on N-body algorithms and of analytical approaches based on the spherical collapse model.
2.6.1 N-body simulations of dark energy and modified gravity
Here we discuss the numerical methods presently available for this type of analysis, and we review the main results obtained so far for different classes of alternative cosmologies. These can be grouped into models where structure formation is affected only through a modified expansion history (such as quintessence and early dark-energy models, Section 1.4.1) and models where particles experience modified gravitational forces, either for individual particle species (interacting dark-energy models and growing neutrino models, Section 1.4.4.4) or for all types of particles in the universe (modified gravity models).
2.6.1.1 Quintessence and early dark-energy models
In general, in the context of flat FLRW cosmologies, any dynamical evolution of the dark-energy density (ρ_{DE} ≠ const. = ρ_{Λ}) implies a modification of the cosmic expansion history with respect to the standard ΛCDM cosmology. In other words, if the dark energy is a dynamical quantity, i.e., if its equation-of-state parameter is not exactly w = −1, then for any given set of cosmological parameters (H_{0}, Ω_{CDM}, Ω_{b}, Ω_{DE}, Ω_{rad}), the redshift evolution of the Hubble function H (z) will differ from the standard ΛCDM case H_{Λ}(z).
Early dark energy (EDE) is, therefore, a common prediction of scalar field models of dark energy, and observational constraints put firm bounds on the allowed range of Ω_{ ϕ } at early times, and consequently on the potential slope α.
As we have seen in Section 1.2.1, a completely phenomenological parametrization of EDE, independent of any specific model of dynamical dark energy, has been proposed by [956] as a function of the present dark-energy density Ω_{DE}, its value at early times Ω_{EDE}, and the present value of the equation-of-state parameter w_{0}. From Eq. 1.2.4, the full expansion history of the corresponding EDE model can be derived.
A modified expansion history indirectly influences the growth of density perturbations and, ultimately, the formation of cosmic structures. While this effect can be investigated analytically in the linear regime, N-body simulations are required to extend the analysis to the nonlinear stages of structure formation. For standard quintessence and EDE models, the only modification that needs to be implemented into standard N-body algorithms is the computation of the correct Hubble function H(z) for the specific model under investigation, since this is the only way in which these non-standard cosmological models can alter structure formation.
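To make this point concrete, here is a minimal sketch of the single modification required: swapping in a model-dependent Hubble function. For illustration it uses the common CPL form w(a) = w_0 + w_a(1 − a) rather than the specific EDE parametrization of Eq. (1.2.4); all parameter values and the function name are placeholders, and radiation is neglected.

```python
import numpy as np

def hubble(z, h0=70.0, om=0.3, ode=0.7, w0=-0.9, wa=0.1):
    """H(z) in km/s/Mpc for a flat cosmology with CPL equation of state
    w(a) = w0 + wa*(1 - a); reduces to LCDM for w0 = -1, wa = 0.
    Radiation is neglected for simplicity."""
    a = 1.0 / (1.0 + z)
    # dark-energy density evolution:
    # rho_DE(a)/rho_DE0 = a^{-3(1 + w0 + wa)} * exp(-3 wa (1 - a))
    de_evol = a ** (-3.0 * (1.0 + w0 + wa)) * np.exp(-3.0 * wa * (1.0 - a))
    return h0 * np.sqrt(om * a ** -3 + ode * de_evol)
```

In an N-body code, this function would simply replace the ΛCDM expansion rate in the time-stepping of the comoving equations of motion; no other change is needed for uncoupled quintessence or EDE models.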
This has been done in the independent studies of [406] and [367], where a modified expansion history consistent with EDE models described by the parametrization of Eq. (1.2.4) has been implemented in the widely used N-body code Gadget-2 [857], and the properties of nonlinear structures forming in these EDE cosmologies have been analyzed. Both studies have shown that the standard formalism for the computation of the halo mass function still holds for EDE models at z = 0. In other words, both the standard fitting formulae for the number density of collapsed objects as a function of mass, and their key parameter δ_{c} = 1.686, the linear overdensity at collapse of a spherical density perturbation, remain unchanged in EDE cosmologies.
The work of [406], however, also investigated the internal properties of collapsed halos in EDE models, finding a slight increase of halo concentrations, due to the earlier onset of structure formation, and, most importantly, a significant increase of the line-of-sight velocity dispersion of massive halos. The latter effect could mimic a higher σ_{8} normalization in cluster mass estimates based on galaxy velocity-dispersion measurements and therefore represents a potentially detectable signature of EDE models.
2.6.1.2 Interacting darkenergy models
While such a direct interaction with baryonic particles (α = b) is tightly constrained by observational bounds, and is suppressed for relativistic particles (α = r) for symmetry reasons (1 − 3w_{r} = 0), a selective interaction with cold dark matter (CDM hereafter) or with massive neutrinos is still observationally viable (see Section 1.4.4).
As a consequence of these new terms in the Newtonian acceleration equation, the growth of density perturbations in interacting dark-energy models will be affected not only by the different Hubble expansion due to the dynamical nature of dark energy, but also by a direct modification of the effective gravitational interactions on sub-horizon scales. Therefore, linear perturbations of coupled species will grow at a higher rate in these cosmologies. In particular, for the case of a coupling to CDM, a different amplitude of the matter power spectrum will be reached at z = 0 with respect to ΛCDM if the normalization is fixed by CMB measurements at high redshift.
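The enhanced linear growth of the coupled species can be sketched numerically. The snippet below integrates the standard linear growth equation with the fifth force entering as an effective gravitational constant G_eff/G = 1 + 2β² acting on CDM, on a flat ΛCDM background; the constant coupling β, the parameter values, and the function name are illustrative assumptions, not the full coupled-quintessence system.

```python
import numpy as np
from scipy.integrate import solve_ivp

def growth_factor(beta=0.1, om0=0.3, a_ini=1e-3):
    """Linear growth delta(a=1) of coupled CDM perturbations.
    Integrates, in x = ln(a),
        delta'' + (2 + dlnH/dlnx) delta' = 1.5 * Omega_m(a) * (G_eff/G) * delta
    with G_eff/G = 1 + 2 beta^2 (constant coupling, illustrative) on a
    flat LCDM background, starting from matter-domination growth delta ~ a."""
    geff = 1.0 + 2.0 * beta ** 2

    def omega_m(a):
        return om0 * a ** -3 / (om0 * a ** -3 + 1.0 - om0)

    def rhs(lna, y):
        a = np.exp(lna)
        delta, ddelta = y
        dlnH_dlna = -1.5 * omega_m(a)  # flat LCDM
        return [ddelta,
                -(2.0 + dlnH_dlna) * ddelta + 1.5 * omega_m(a) * geff * delta]

    sol = solve_ivp(rhs, [np.log(a_ini), 0.0], [a_ini, a_ini], rtol=1e-8)
    return sol.y[0, -1]
```

For β > 0 the source term is stronger than in GR, so the coupled species reaches a larger amplitude at z = 0, which is the effect described above.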
Clearly, the new acceleration equation (1.6.6) will also influence the formation and evolution of nonlinear structures, and a consistent implementation of all the above-mentioned effects into an N-body algorithm is required in order to investigate this regime.
For the case of a coupling to CDM (a coupling with neutrinos will be discussed in the next section) this has been done, e.g., by [604, 870] with 1D or 3D grid-based field solvers, and more recently by means of a suitable modification by [79] of the TreePM hydrodynamic N-body code Gadget-2 [857]. These studies have identified several characteristic effects of a DE-CDM coupling:

- The suppression of power at small scales in the power spectrum of interacting dark-energy models as compared to ΛCDM;

- The development of a gravitational bias in the amplitude of density perturbations of uncoupled baryons and coupled CDM particles, defined as P_{b}(k)/P_{c}(k) < 1, which determines a significant decrease of the baryonic content of massive halos at low redshifts, in accordance with a large number of observations [79, 75];

- The increase of the number density of high-mass objects at any redshift as compared to ΛCDM [see 77];

- An enhanced ISW effect [33, 35, 612]; such effects may be partially reduced when taking into account nonlinearities, as described in [727];

- Less steep inner halo density profiles (depending on the interplay between the fifth force and velocity-dependent terms) [79, 76, 565, 562, 75];

- Emptier voids when a coupling is active [80];

- Small-scale power that can be either suppressed or enhanced when a growing coupling function is considered, depending on the magnitude of the coupling time derivative dβ(ϕ)/dϕ;

- Inner overdensities of CDM halos, and consequently halo concentrations, that can either decrease (as always happens for constant couplings) or increase, again depending on the rate of change of the coupling strength.
All these effects are characteristic features of interacting dark-energy models and could provide a direct way to test these scenarios observationally. Higher-resolution studies will be required in order to quantify the impact of a DE-CDM interaction on the statistical properties of halo substructures and on the redshift evolution of the internal properties of CDM halos.
As discussed in Section 1.6.1, when a variable coupling β(ϕ) is active, the relative balance of the fifth force and the other dynamical effects depends on the specific time evolution of the coupling strength. Under such conditions, certain cases may also lead to the opposite effect of larger halo inner overdensities and higher concentrations, as in the case of a steeply growing coupling function [see 76]. Alternatively, the coupling can be introduced by directly choosing a covariant stress-energy tensor, treating dark energy as a fluid in the absence of a starting action [619, 916, 193, 794, 915, 613, 387, 192, 388].
2.6.1.3 Growing neutrinos
In the case of a coupling between the dark-energy scalar field ϕ and the relic fraction of massive neutrinos, all the basic equations (1.6.5)–(1.6.8) above still hold. However, such models are found to be cosmologically viable only for large negative values of the coupling β [as shown by 36], which, according to Eq. (1.6.5), determines a neutrino mass that grows in time (hence these models have been dubbed “growing neutrinos”). An exponential growth of the neutrino mass implies that cosmological bounds on the neutrino mass are no longer applicable, and that neutrinos remain relativistic much longer than in the standard scenario, which keeps them effectively uncoupled until recent epochs, according to Eqs. (1.6.3) and (1.6.4). However, as soon as neutrinos become non-relativistic at redshift z_{nr}, due to the exponential growth of their mass, the pressure terms 1 − 3w_{ν} in Eqs. (1.6.3) and (1.6.4) no longer vanish and the coupling to the DE scalar field ϕ becomes active.
Therefore, while before z_{nr} the model behaves as a standard ΛCDM scenario, after z_{nr} the non-relativistic massive neutrinos obey the modified Newtonian equation (1.6.6), and a fast growth of neutrino density perturbations takes place due to the strong fifth force described by Eq. (1.6.8).
The growth of neutrino overdensities in the context of growing neutrinos models has been studied in the linear regime by [668], predicting the formation of very large neutrino lumps at the scale of superclusters and above (10–100 Mpc/h) at redshift z ≈ 1.
The analysis has been extended to the nonlinear regime in [963] by following the spherical collapse of a neutrino lump in growing neutrino cosmologies. This study identified the onset of virialization in the nonlinear evolution of the neutrino halo at z ≈ 1.3, and provided a first estimate of the associated gravitational potential at virialization, of order Φ_{ν} ≈ 10^{−6} for a neutrino lump with radius R ≈ 15 Mpc.
An estimate of the potential impact of such very large nonlinear structures on the CMB angular power spectrum, through the integrated Sachs-Wolfe effect, has been attempted by [727]. This study has shown that the linear approximation fails to predict the global impact of the model on CMB anisotropies at low multipoles, that the effects under consideration are very sensitive to the details of the transition between the linear and nonlinear regimes and of the virialization of nonlinear neutrino lumps, and that they also depend significantly on possible backreaction effects of the evolved neutrino density field on the local scalar field evolution.
A full nonlinear treatment by means of specifically designed N-body simulations is therefore required in order to follow in further detail the evolution of a cosmological sample of neutrino lumps beyond virialization, and to assess the impact of growing neutrino models on potentially observable quantities such as the low-multipole CMB power spectrum or the statistical properties of the CDM large-scale structure.
2.6.1.4 Modified gravity
Modified gravity models, presented in Section 1.4, represent a different perspective to account for the nature of the dark components of the universe. Although most of the viable modifications of GR are constructed in order to provide an identical cosmic expansion history to the standard ΛCDM model, their effects on the growth of density perturbations could lead to observationally testable predictions capable of distinguishing modified gravity models from standard GR plus a cosmological constant.
Since a modification of the theory of gravity would affect all test masses in the universe, including standard baryonic matter, all viable modified gravity models must recover GR asymptotically in solar-system environments, where deviations from GR are tightly constrained. Such a mechanism, often referred to as the “chameleon effect”, makes the modified gravitational laws environment-dependent in the Newtonian limit and represents the main difference between modified gravity models and the interacting dark-energy scenarios discussed above.
While the linear growth of density perturbations in modified gravity theories can be studied [see, e.g., 456, 674, 32, 54] by parametrizing the scale dependence of the modified Poisson and Euler equations in Fourier space (see the discussion in Section 1.3), the nonlinear evolution of the chameleon effect makes the implementation of these theories into nonlinear N-body algorithms much more challenging. For this reason, very little work has been done so far in this direction. A few attempts to solve the modified gravity interactions in the nonlinear regime by means of mesh-based iterative relaxation schemes have been carried out by [700, 701, 800, 500, 981, 281, 964] and showed an enhancement of the power spectrum amplitude at intermediate and small scales. These studies also showed that this nonlinear enhancement of small-scale power cannot be accurately reproduced by applying the linear perturbed equations of each specific modified gravity theory to the standard nonlinear fitting formulae [as, e.g., 844].
Higher-resolution simulations and new numerical approaches will be necessary in order to extend these first results to smaller scales and to evaluate the deviations of specific modified gravity models from the standard GR predictions at a potentially detectable precision level.
2.6.2 The spherical collapse model
A popular analytical approach to studying the nonlinear clustering of dark matter without resorting to N-body simulations is the spherical collapse model, first studied by [413]. In this approach, one studies the collapse of a spherical overdensity and determines its critical overdensity for collapse as a function of redshift. Combining this information with extended Press-Schechter theory ([743, 147]; see [976] for a review), one obtains a statistical model for the formation of structures which allows one to predict the abundance of virialized objects as a function of their mass. Although it fails to match the details of N-body simulations, this simple model works surprisingly well and can give useful insights into the physics of structure formation. Improved models accounting for the complexity of the collapse exist in the literature and offer a better fit to numerical simulations. For instance, [823] showed that a significant improvement can be obtained by considering an ellipsoidal collapse model. Furthermore, recent theoretical developments and new improvements in excursion set theory have been undertaken by [609] and other authors (see, e.g., [821]).
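The Press-Schechter recipe described above can be made concrete with a short sketch. The snippet below evaluates the standard multiplicity function, i.e., the fraction of mass in collapsed halos per logarithmic interval of the peak height ν = δ_c/σ(M), using the spherical-collapse threshold δ_c = 1.686; the function name is illustrative.

```python
import numpy as np

DELTA_C = 1.686  # linear overdensity at collapse, spherical model

def press_schechter_multiplicity(sigma):
    """Press-Schechter multiplicity function f(nu) = sqrt(2/pi) nu exp(-nu^2/2),
    with nu = delta_c / sigma(M). Multiplying by rho_mean/M * |dln sigma/dM|
    gives the halo mass function dn/dM."""
    nu = DELTA_C / sigma
    return np.sqrt(2.0 / np.pi) * nu * np.exp(-0.5 * nu ** 2)
```

Because the abundance of massive halos is exponentially sensitive to δ_c/σ(M), even the modest changes to δ_c predicted by some non-standard models translate into sizable changes in cluster counts, which is why this quantity recurs throughout this section.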
In the following we will discuss the spherical collapse model in the context of dark-energy and modified gravity models.
2.6.2.1 Clustering dark energy
In its standard version, quintessence is described by a minimally-coupled canonical field with speed of sound c_{s} = 1. As mentioned above, in this case clustering can only take place on scales larger than the horizon, where sound waves have no time to propagate. However, observations on such large scales are strongly limited by cosmic variance and this effect is difficult to observe. A minimally-coupled scalar field with fluctuations characterized by a practically zero speed of sound can instead cluster on all observable scales. There are several theoretical motivations to consider this case. In the limit of zero sound speed one recovers the Ghost Condensate theory proposed by [56] in the context of modifications of gravity, which is invariant under the shift symmetry of the field ϕ → ϕ + constant. Thus, there is no fine-tuning in assuming that the speed of sound is very small: quintessence models with vanishing speed of sound should be thought of as deformations of this particular limit where shift symmetry is recovered. Moreover, it has been shown that minimally-coupled quintessence with an equation of state w < −1 can be free from ghosts and gradient instabilities only if the speed of sound is very tiny, ∣c_{s}∣ ≲ 10^{−15}. Stability can be guaranteed by the presence of higher-derivative operators, although their effect is absent on cosmologically relevant scales [260, 228, 259].
The fact that the speed of sound of quintessence may vanish opens up new observational consequences. Indeed, the absence of quintessence pressure gradients allows instabilities to develop on all scales, including scales where dark matter perturbations become nonlinear. Thus, we expect quintessence to modify the growth history of dark matter not only through its different background evolution but also by actively participating in the structure formation mechanism, in both the linear and nonlinear regimes, and by contributing to the total mass of virialized halos.
Following [258], in the limit of zero sound speed pressure gradients are negligible and, as long as the fluid approximation is valid, quintessence follows geodesics remaining comoving with the dark matter (see also [574] for a more recent model with identical phenomenology). In particular, one can study the effect of quintessence with vanishing sound speed on the structure formation in the nonlinear regime, in the context of the spherical collapse model. The zero speed of sound limit represents the natural counterpart of the opposite case c_{ s } = 1. Indeed, in both cases there are no characteristic length scales associated with the quintessence clustering and the spherical collapse remains independent of the size of the object (see [95, 671, 692] for a study of the spherical collapse when c_{ s } of quintessence is small but finite).
2.6.2.2 Coupled dark energy
We now consider spherical collapse within coupled dark-energy cosmologies. The presence of an interaction that couples the cosmon dynamics to another species introduces a new force acting between particles (CDM or neutrinos, in the examples mentioned in Section 1.4.4) and mediated by dark-energy fluctuations. Whenever such a coupling is active, spherical collapse, whose concept is intrinsically based on gravitational attraction via the Friedmann equations, has to be suitably modified in order to account for other external forces. As shown in [962], the inclusion of the fifth force within the spherical-collapse picture deserves particular caution. Here we summarize the main results on this topic and refer to [962] for a detailed illustration of spherical collapse in the presence of a fifth force.
2.6.2.3 Early dark energy
2.7 Observational properties of dark energy and modified gravity
Both scalar-field dark-energy models and modifications of gravity can in principle lead to any desired expansion history H(z), or equivalently any evolution of the effective dark-energy equation-of-state parameter w(z). For canonical scalar fields, this can be achieved by selecting the appropriate potential V(φ) along the evolution of the scalar field φ(t), as was done, e.g., in [102]. For modified gravity models, the same procedure can be followed, for example, for f(R)-type models [e.g. 736]. The expansion history on its own thus cannot tell us much about the physical nature of the mechanism behind the accelerated expansion (although of course a clear measurement showing that w ≠ −1 would be a sensational discovery). A smoking gun for modifications of gravity can thus only appear at the perturbation level.
In the next subsections we explore how dark energy or modified gravity effects can be detected through weak lensing and redshift surveys.
2.7.1 General remarks
Quite generally, cosmological observations fall into two categories: geometrical probes and structure formation probes. While the former provide a measurement of the Hubble function, the latter are a test of the gravitational theory in an almost Newtonian limit on sub-horizon scales. Furthermore, possible effects on the geodesics of test particles need to be derived: photons follow null geodesics, while the massive particles that constitute the cosmic large-scale structure move along timelike geodesics in the non-relativistic limit.
In some special cases, modified gravity models predict a strong deviation from the standard Friedmann equation, as in, e.g., DGP, Eq. (1.4.74). While the Friedmann equation is not known explicitly in more general models of massive gravity (cascading gravity or hard mass gravity), similar modifications are expected to arise and provide characteristic features [see, e.g., 11, 478] that could distinguish these models from other scenarios of modified gravity or with additional dynamical degrees of freedom.
In general, however, the most interesting signatures of modified gravity models are to be found in the perturbation sector. For instance, in DGP, growth functions differ from those in dark-energy models by a few percent for identical Hubble functions; for that reason, an observation of both the Hubble function and the growth function gives a handle on constraining the gravitational theory [592]. The growth function can be estimated both through weak lensing and through galaxy clustering and redshift distortions.
Concerning the interaction of light with the cosmic large-scale structure, general models predict a modified coupling and a difference between the two metric potentials. These effects are present in the anisotropy pattern of the CMB, as shown in [792], where smaller fluctuations were found on large angular scales: this can alleviate the tension between the CMB and the ΛCDM model at low multipoles, where the CMB spectrum acquires a smaller amplitude due to the ISW effect, but it provides a worse fit to supernova data. An effect inexplicable in GR is an anticorrelation between the CMB temperature and the density of galaxies at high redshift, due to a sign change in the integrated Sachs-Wolfe effect; interestingly, this behavior is very common in modified gravity theories.
A very powerful probe of structure growth is of course weak lensing, but to evaluate the lensing effect it is important to understand the nonlinear structure-formation dynamics, as a good part of the total signal is generated by small structures. Only recently has it become possible to perform structure-formation simulations in modified gravity models, although still without a mechanism by which GR is recovered on very small scales, as required by local tests of gravity.
In contrast, the number density of collapsed objects depends only weakly on nonlinear physics and can be used to investigate modified gravity cosmologies. One needs to solve the dynamical equations for a spherically symmetric matter distribution. Modified gravity theories typically lower the collapse threshold for density fluctuations in the large-scale structure, leading to a higher comoving number density of galaxies and clusters of galaxies. This probe is degenerate with respect to dark-energy cosmologies, which generically give the same trends.
2.7.2 Observing modified gravity with weak lensing
The magnification matrix is a 2 × 2 matrix that relates the true shape of a galaxy to its image. It contains two distinct parts: the convergence, defined as the trace of the matrix, modifies the size of the image, whereas the shear, defined as the symmetric traceless part, distorts the shape of the image. At small scales the shear and the convergence are not independent: they satisfy a consistency relation and therefore contain the same information on matter density perturbations. More precisely, the shear and the convergence are both related to the sum of the two Bardeen potentials, Φ + Ψ, integrated along the photon trajectory. At large scales, however, this consistency relation no longer holds. Various relativistic effects contribute to the convergence, see [150]. Some of these effects are generated along the photon trajectory, whereas others are due to perturbations of the galaxies' redshifts. These relativistic effects provide independent information on the two Bardeen potentials, breaking their degeneracy. The convergence is therefore a useful quantity that can increase the discriminatory power of weak lensing.
The convergence can be measured through its effect on the galaxy number density, see e.g. [175]. The standard method extracts the magnification from correlations of distant quasars with foreground clusters, see [804, 657]. Recently, [977, 978] designed a new method that permits accurate measurement of autocorrelations of the magnification as a function of galaxy redshift. This method potentially allows measurements of the relativistic effects in the convergence.
2.7.2.1 Magnification matrix
2.7.2.2 Observable quantities
Figure 9 shows that \(C_\ell^{\rm vel}\) peaks at rather small ℓ, between 30 and 120 depending on the redshift. This corresponds to rather large angles, θ ∼ 90–360 arcmin. This behavior differs from the standard term (Figure 9), which peaks at large ℓ. Therefore, it is important to have large sky surveys to detect the velocity contribution. The relative importance of \(C_\ell^{\rm vel}\) and \(C_\ell^{\rm st}\) depends strongly on the redshift of the source. At small redshift, z_{S} = 0.2, the velocity contribution is about 4 · 10^{−5} and is hence larger than the standard contribution, which reaches 10^{−6}. At redshift z_{S} = 0.5, \(C_\ell^{\rm vel}\) is about 20% of \(C_\ell^{\rm st}\), whereas at redshift z_{S} = 1 it is about 1% of \(C_\ell^{\rm st}\). At redshift z_{S} = 1.5 and above, \(C_\ell^{\rm vel}\) becomes very small with respect to \(C_\ell^{\rm st}\): \(C_\ell^{\rm vel} \le 10^{-4}\, C_\ell^{\rm st}\). The enhancement of \(C_\ell^{\rm vel}\) at small redshift, together with its fast decrease at large redshift, is due to the prefactor \(\left(\frac{1}{\mathcal{H}_S \chi_S} - 1\right)^2\) in Eq. (1.7.15). Thanks to this enhancement we see that, if the magnification can be measured with an accuracy of 10%, the velocity contribution is observable up to redshifts z ≤ 0.6. If the accuracy reaches 1%, the velocity contribution becomes interesting up to redshifts of order 1.
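The redshift scaling driven by this prefactor can be checked directly. The sketch below evaluates (1/(𝓗_S χ_S) − 1)² in a flat ΛCDM background, with 𝓗 the conformal Hubble rate and χ the comoving distance; the parameter values and the function name are illustrative.

```python
import numpy as np
from scipy.integrate import quad

def velocity_prefactor(zs, om=0.3, h0=70.0):
    """Evaluate (1/(H_conf * chi) - 1)^2 at source redshift zs in flat LCDM.
    This factor boosts the velocity term of the convergence at low redshift
    and suppresses it at high redshift."""
    c = 299792.458  # speed of light, km/s
    E = lambda z: np.sqrt(om * (1 + z) ** 3 + 1 - om)
    chi, _ = quad(lambda z: c / (h0 * E(z)), 0.0, zs)  # comoving distance [Mpc]
    Hc = h0 * E(zs) / (1.0 + zs)                       # conformal Hubble [km/s/Mpc]
    return (c / (Hc * chi) - 1.0) ** 2
```

At low z, χ is small, so 1/(𝓗χ) is large and the prefactor is of order tens; by z ≈ 1.5, 𝓗χ approaches unity and the prefactor drops by four orders of magnitude, matching the trend quoted above.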
2.7.3 Observing modified gravity with redshift surveys
Wide-deep galaxy redshift surveys have the power to yield information on both H(z) and f_{g}(z) through measurements of baryon acoustic oscillations (BAO) and redshift-space distortions. In particular, if gravity is not modified and matter interacts only gravitationally, then a measurement of the expansion rate is directly linked to a unique prediction for the growth rate. Otherwise, galaxy redshift surveys provide a unique and crucial way to perform a combined analysis of H(z) and f_{g}(z) to test gravity. As a wide-deep survey, Euclid allows us to measure H(z) directly from BAO, but also indirectly through the angular diameter distance D_{A}(z) (and possibly distance ratios from weak lensing). Most importantly, the Euclid survey enables us to measure the cosmic growth history using two independent methods: f_{g}(z) from galaxy clustering, and G(z) from weak lensing. In the following we discuss the estimation of H(z), D_{A}(z) and f_{g}(z) from galaxy clustering.
The characteristic scale of the BAO is set by the sound horizon at decoupling; consequently, one can obtain the angular diameter distance and the Hubble parameter separately. The BAO scale along the line of sight, s_{∥}(z), measures H(z) through H(z) = c Δz/s_{∥}(z), while the tangential mode measures the angular diameter distance, D_{A}(z) = s_{⊥}/[Δθ (1 + z)].
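These two relations can be packaged into a trivial helper, sketched below; the function name and the toy values in the usage are illustrative, not Euclid specifications.

```python
def bao_distances(delta_z, delta_theta, z, s_par, s_perp):
    """Recover H(z) [km/s/Mpc] and D_A(z) [Mpc] from the BAO scale measured
    along (s_par) and across (s_perp) the line of sight, both in comoving Mpc.
    delta_z is the redshift extent of the radial BAO feature,
    delta_theta its angular extent in radians."""
    c = 299792.458  # speed of light, km/s
    H = c * delta_z / s_par                  # H(z) = c dz / s_parallel(z)
    DA = s_perp / (delta_theta * (1.0 + z))  # D_A(z) = s_perp / [dtheta (1+z)]
    return H, DA
```

A round trip with a fiducial H(z) = 70 km/s/Mpc and a toy sound-horizon scale of 150 Mpc recovers the input exactly, which makes this a convenient building block for the Fisher forecasts discussed later.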
One can then use the power spectrum to derive predictions on the parameter constraining power of the survey (see e.g., [46, 418, 938, 945, 308]).
In general, bias can be measured from weak lensing through the comparison of the shear-shear and shear-galaxy correlation functions. A combined constraint on bias and the growth factor G(z) can be derived from weak lensing by comparing the cross-correlations of multiple redshift slices.
Of course, if bias is assumed to be linear (b_{2} = 0) and scale independent, or is parametrized in some simple way, e.g., with a power law scale dependence, then it is possible to estimate it even from linear galaxy clustering alone, as we will see in Section 1.8.3.
2.7.4 Cosmological bulk flows
As we have seen, the additional redshift induced by the galaxy peculiar velocity field generates the redshift distortion in the power spectrum. In this section we discuss a related effect on the luminosity of the galaxies and on its use to measure the peculiar velocity in large volumes, the socalled bulk flow.
Over the years the bulk flow has been estimated from the measured peculiar velocities of a large variety of objects, ranging from galaxies [397, 398, 301, 256, 271, 788] to clusters of galaxies [549, 165, 461] and SNe Ia [766]. Conflicting results, triggered by the use of error-prone distance indicators, have fueled a long-lasting and still ongoing controversy over the amplitude and convergence of the bulk flow. For example, the recent claim of a bulk flow of 407 ± 81 km s^{−1} within R = 50 h^{−1} Mpc [947], inconsistent with expectations from the ΛCDM model, has been seriously challenged by the reanalysis of the same data by [694], who found a bulk-flow amplitude consistent with ΛCDM expectations and were thereby able to set the strongest constraints on modified gravity models so far. On larger scales, [493] claimed the detection of a dipole anisotropy, attributed to the kinetic SZ decrement, in the WMAP temperature map at the positions of X-ray galaxy clusters. When interpreted as a coherent motion, this signal would indicate a gigantic bulk flow of 1028 ± 265 km s^{−1} within R = 528 h^{−1} Mpc. This highly debated result has been seriously questioned by independent analyses of the WMAP data [see, e.g., 699].
The large, homogeneous dataset expected from Euclid has the potential to settle these issues. The idea is to measure bulk flows in large redshift surveys from the apparent dimming or brightening of galaxies due to their peculiar motion. The method, originally proposed by [875], has recently been extended by [693], who propose to estimate the bulk flow by minimizing systematic variations in galaxy luminosities with respect to a reference luminosity function measured from the whole survey. It turns out that, if applied to the photo-z catalog expected from Euclid, this method would be able to detect at 5σ significance a bulk flow like that of [947] over ∼ 50 independent spherical volumes at z ≥ 0.2, provided that the systematic magnitude offset over the corresponding areas of the sky does not exceed the expected random magnitude errors of 0.02–0.04 mag. Additionally, photometric or spectroscopic redshifts could be used to validate or disprove with very large (> 7σ) significance the claimed bulk-flow detection of [493] at z = 0.5.
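The core of such a luminosity-based estimator can be reduced to a toy problem: a coherent bulk flow imprints a dipole pattern on per-galaxy magnitude offsets, and a least-squares fit over the sky recovers the dipole vector. The sketch below is only an illustration of that idea, not the estimator of [693], which additionally weights galaxies by the reference luminosity function; all names and data here are hypothetical.

```python
import numpy as np

def fit_magnitude_dipole(nhat, dm):
    """Least-squares fit of a dipole pattern dm_i ~ d . nhat_i to per-galaxy
    magnitude offsets dm (relative to a reference luminosity function),
    given unit line-of-sight vectors nhat of shape (N, 3).
    Returns the best-fit dipole vector d in magnitude units."""
    d, *_ = np.linalg.lstsq(nhat, dm, rcond=None)
    return d
```

With noiseless synthetic offsets generated from a known dipole, the fit recovers the input vector to machine precision; with realistic magnitude errors of 0.02–0.04 mag, the recoverable signal scales with the number of galaxies per volume, which is why the Euclid sample size matters.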
As for the bulk flow case, despite the many measurements of cosmological dipoles using galaxies [972, 283, 654, 868, 801, 513], there is still no general consensus on the scale of convergence, or even on the convergence itself. Even recent analyses measuring the acceleration of the Local Group from the 2MASS redshift catalogs have provided conflicting results: [344] found that the galaxy dipole converges beyond R = 60 h^{−1} Mpc, whereas [552] find no convergence within R = 120 h^{−1} Mpc.
Once again, Euclid will be in the position to resolve this controversy by measuring the galaxy and cluster dipoles not only at the LG position and out to very large radii, but also in several independent and truly all-sky spherical samples carved out from the observed areas with ∣b∣ > 20°. In particular, coupling photometry with photo-z, one expects to be able to estimate the convergence scale of the flux-weighted dipole over about 100 independent spheres of radius 200 h^{−1} Mpc out to z = 0.5 and, beyond that, to compare number-weighted and flux-weighted dipoles over a larger number of similar volumes using spectroscopic redshifts.
2.8 Forecasts for Euclid
Here we describe forecasts for the constraints on modified gravity parameters which Euclid observations should be able to achieve. We begin by reviewing the relevant works in the literature. Then, after defining our “Euclid model”, i.e., the main specifics of the redshift and weak lensing surveys, we illustrate a number of Euclid forecasts obtained through a Fisher matrix approach.
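For Gaussian data with a parameter-dependent mean, the Fisher matrix approach used in these forecasts reduces to a simple matrix expression, F_ij = Σ_k (∂μ_k/∂θ_i)(∂μ_k/∂θ_j)/σ_k², with marginalized 1σ errors given by the square roots of the diagonal of F⁻¹. A minimal sketch (illustrative function name, toy inputs, not a Euclid likelihood):

```python
import numpy as np

def fisher_errors(dmu_dtheta, sigma):
    """Marginalized 1-sigma parameter errors from a Gaussian Fisher matrix.
    dmu_dtheta: (N_data, N_param) array of derivatives of the model mean
                with respect to each parameter, at the fiducial point.
    sigma:      (N_data,) array of per-datum standard deviations.
    Returns sqrt(diag(F^-1)) with F = sum_k w_k w_k^T, w_k = dmu_k / sigma_k."""
    w = dmu_dtheta / sigma[:, None]
    F = w.T @ w
    return np.sqrt(np.diag(np.linalg.inv(F)))
```

For a single parameter measured by four independent unit-variance data points, each with unit sensitivity, the Fisher information is 4 and the forecast error is 0.5, illustrating the familiar 1/√N scaling that underlies the survey-size arguments in this section.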
2.8.1 A review of forecasts for parametrized modified gravity with Euclid
Heavens et al. [429] have used the Bayesian evidence to distinguish between models, using the Fisher matrices for the parameters of interest. This study calculates the ratio of evidences B for a 3D weak lensing analysis of the full Euclid survey, for a dark-energy model with varying equation of state and for modified gravity with an additionally varying growth parameter γ. They find that Euclid can decisively distinguish between, e.g., DGP and dark energy, with ∣ln B∣ ≃ 50. In addition, they find that it will be possible to distinguish any departure from GR with a difference in γ greater than ≃ 0.03. A phenomenological extension of the DGP model [332, 11] has also been tested with Euclid: specifically, [199] found that it will be possible to discriminate this modification of gravity from ΛCDM at the 3σ level over a wide range of angular scales, approximately 1000 ≲ ℓ ≲ 4000.
Amendola et al. [41] derive Euclid weak lensing constraints for a more general parameterization that includes evolution. In particular, Σ(z) is investigated by dividing the Euclid weak lensing survey into three redshift bins with equal numbers of galaxies in each bin, and approximating Σ as constant within each bin. Since Σ_{1}, i.e., the value of Σ in the a = 1 (present-day) bin, is degenerate with the amplitude of matter fluctuations, it is set to unity. The study finds that a deviation of Σ from unity (i.e., from GR) of 3% can be detected in the second redshift bin, and a deviation of 10% is still detected in the furthest redshift bin.
Beynon et al. [132] make forecasts for modified gravity with Euclid weak lensing, including the prescription of [457] for interpolating between the linear spectrum predicted by modified gravity and GR on small scales, as required by solar-system tests. This requires parameters A (a measure of the abruptness of the transition between these two regimes), α_{1} (controlling the k-dependence of the transition) and α_{2} (controlling the z-dependence of the transition).
Finally, Song et al. [848] have presented forecasts for measuring Σ and μ using both imaging and spectroscopic surveys. They combine 20,000 square-degree lensing data (corresponding to [550] rather than to the updated [551]) with the peculiar-velocity dispersion measured from redshift-space distortions in the spectroscopic survey, together with stringent background-expansion measurements from the CMB and supernovae. They find that for simple models of the redshift evolution of Σ and μ, both quantities can be measured to 20% accuracy.
2.8.2 Euclid surveys
The Euclid mission will produce a catalog of up to 100 million galaxy redshifts and an imaging survey that should allow the estimation of galaxy ellipticities for up to 2 billion galaxy images. Here we discuss these surveys and fix their main properties into a “Euclid model”, i.e., an approximation to the real Euclid survey that will be used as the reference mission in the following.
2.8.2.1 Modeling the Redshift Survey.
The main goals of next generation redshift surveys will be to constrain the darkenergy parameters and to explore models alternative to standard Einstein gravity. For these purposes they will need to consider very large volumes that encompass z ∼ 1, i.e., the epoch at which dark energy started dominating the energy budget, spanning a range of epochs large enough to provide a sufficient leverage to discriminate among competing models at different redshifts.
Here we consider a survey covering a large fraction of the extragalactic sky, corresponding to ∼ 15000 deg^{2}, capable of measuring a large number of galaxy redshifts out to z ∼ 2. A promising observational strategy is to target Hα emitters at near-infrared wavelengths (which implies z > 0.5), since they guarantee both a relatively dense sampling (the space density of this population is expected to increase out to z ∼ 2) and an efficient method to measure the redshift of the object. The limiting flux of the survey should be a trade-off between the requirement of minimizing the shot noise and the requirements of minimizing the contamination by other lines (chief among them the [O II] line) and maximizing the so-called efficiency ε, i.e., the fraction of successfully measured redshifts. To minimize shot noise one should obviously strive for a low limiting flux. Indeed, [389] found that a limiting flux f_{Hα} ≥ 1 × 10^{−16} erg cm^{−2} s^{−1} would be able to balance shot noise and cosmic variance out to z = 1.5. However, simulated observations of mock Hα galaxy spectra have shown that ε ranges between 30% and 60% (depending on the redshift) for a limiting flux f_{Hα} ≥ 3 × 10^{−16} erg cm^{−2} s^{−1} [551]. Moreover, contamination from the [O II] line drops from 12% to 1% when the limiting flux increases from 1 × 10^{−16} to 5 × 10^{−16} erg cm^{−2} s^{−1} [389].
Taking all this into account, in order to reach the toplevel science requirement on the number density of Hα galaxies, the average effective Hα line flux limit from a 1arcsec diameter source shall be lower than or equal to 3 × 10^{−16} erg cm^{−2} s^{−1}. However, a slitless spectroscopic survey has a success rate in measuring redshifts that is a function of the emission line flux. As such, the Euclid survey cannot be characterized by a single flux limit, as in conventional slit spectroscopy.
We use the number density of Hα galaxies at a given redshift, n (z), estimated using the latest empirical data (see Figure 3.2 of [551]), where the values account for the redshift- and flux-dependent success rate; we refer to this as our reference efficiency ε_{ r }. We consider three cases:

Reference case (ref.). Galaxy number density n (z), which includes the efficiency ε_{ r } (column n_{2}(z) in Table 3).

Pessimistic case (pess.). Galaxy number density n (z) · 0.5, i.e., efficiency is ε_{ r } · 0.5 (column n_{3}(z) in Table 3).

Optimistic case (opt.). Galaxy number density n (z) · 1.4, i.e., efficiency is ε_{ r } · 1.4 (column n_{1}(z) in Table 3).
Table 3. Expected galaxy number densities in units of (h/Mpc)^{3} for the Euclid survey. Note that the galaxy number densities n(z) depend on the fiducial cosmology adopted in the computation of the survey volume, needed for the conversion from the galaxy numbers dN/dz to n(z).

| z | n_{1}(z) × 10^{−3} | n_{2}(z) × 10^{−3} | n_{3}(z) × 10^{−3} |
|---|---|---|---|
| 0.65–0.75 | 1.75 | 1.25 | 0.63 |
| 0.75–0.85 | 2.68 | 1.92 | 0.96 |
| 0.85–0.95 | 2.56 | 1.83 | 0.91 |
| 0.95–1.05 | 2.35 | 1.68 | 0.84 |
| 1.05–1.15 | 2.12 | 1.51 | 0.76 |
| 1.15–1.25 | 1.88 | 1.35 | 0.67 |
| 1.25–1.35 | 1.68 | 1.20 | 0.60 |
| 1.35–1.45 | 1.40 | 1.00 | 0.50 |
| 1.45–1.55 | 1.12 | 0.80 | 0.40 |
| 1.55–1.65 | 0.81 | 0.58 | 0.29 |
| 1.65–1.75 | 0.53 | 0.38 | 0.19 |
| 1.75–1.85 | 0.49 | 0.35 | 0.18 |
| 1.85–1.95 | 0.29 | 0.21 | 0.10 |
| 1.95–2.05 | 0.16 | 0.11 | 0.06 |
The total number of observed galaxies ranges from 3 · 10^{7} (pess.) to 9 · 10^{7} (opt.). For all cases we assume that the error on the measured redshift is Δz = 0.001(1 + z), independent of the limiting flux of the survey.
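For reference, the survey model above can be condensed into a few lines of Python. The helper names `number_density` and `redshift_error` are our own, with the reference densities taken from column n_{2}(z) of Table 3:

```python
import numpy as np

# Bin centers of Table 3 and reference number densities n2(z) in (h/Mpc)^3.
z_centers = np.arange(0.7, 2.01, 0.1)
n_ref = 1e-3 * np.array([1.25, 1.92, 1.83, 1.68, 1.51, 1.35, 1.20,
                         1.00, 0.80, 0.58, 0.38, 0.35, 0.21, 0.11])

def number_density(z, case="ref"):
    """Galaxy number density in (h/Mpc)^3 for the three survey scenarios."""
    scale = {"ref": 1.0, "pess": 0.5, "opt": 1.4}[case]
    return scale * np.interp(z, z_centers, n_ref)

def redshift_error(z):
    """Assumed error on the measured redshift, independent of limiting flux."""
    return 0.001 * (1.0 + z)
```

Interpolating between the tabulated bin centers is our own simplification; the Fisher analysis below works directly with the binned values.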
2.8.3 Forecasts for the growth rate from the redshift survey
In this section we forecast the constraints that future observations can put on the growth rate and on a scaleindependent bias, employing the Fisher matrix method presented in Section 1.7.3. We use the representative Euclid survey presented in Section 1.8.2. We assess how well one can constrain the bias function from the analysis of the power spectrum itself and evaluate the impact that treating bias as a free parameter has on the estimates of the growth factor. We estimate how errors depend on the parametrization of the growth factor and on the number and type of degrees of freedom in the analysis. Finally, we explicitly explore the case of coupling between dark energy and dark matter and assess the ability of measuring the coupling constant. Our parametrization is defined as follows. More details can be found in [308].

f-parameterization. This is in fact a non-parametric model in which the growth rate itself is modeled as a stepwise function f_{ g }(z) = f_{ i }, specified in different redshift bins. The errors are derived on f_{ i } in each i-th redshift bin of the survey.
γ-parameterization. As a second case we assume$${f_g} \equiv {\Omega _m}{(z)^{\gamma (z)}}\,,$$(1.8.4)where the γ(z) function is parametrized as$$\gamma (z) = {\gamma _0} + {\gamma _1}{z \over {1 + z}}\,.$$(1.8.5)As shown by [969, 372], this parameterization is more accurate than that of Eq. (1.8.3) for both ΛCDM and DGP models. Furthermore, it is especially effective in distinguishing between a wCDM model (i.e., a dark-energy model with a constant equation of state), which has a negative γ_{1} (−0.020 ≲ γ_{1} ≲ −0.016), and a DGP model, which instead has a positive γ_{1} (0.035 < γ_{1} < 0.042). In addition, modified gravity models show a strongly evolving γ(z) [378, 673, 372], in contrast with conventional dark-energy models. As a special case we also consider γ = constant (only when w is also assumed constant), to compare our results with those of previous works.
η-parameterization. To explore models in which perturbations grow faster than in the ΛCDM case, as for a coupling between dark energy and dark matter [307], we consider a model in which γ is constant and the growth rate varies as$${f_g} \equiv {\Omega _m}{(z)^\gamma}(1 + \eta)\,,$$(1.8.6)where η quantifies the strength of the coupling. The example of the coupled quintessence model worked out by [307] illustrates this point. In that model, the numerical solution for the growth rate can be fitted by the formula (1.8.6) with \(\eta = c\beta _c^2\), where β_{ c } is the dark energy-dark matter coupling constant, with best-fit values γ = 0.56 and c = 2.1. In this simple case, observational constraints on η can be readily transformed into constraints on β_{ c }.
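The three parameterizations can be sketched in a few lines, assuming the fiducial background defined below (Ω_{m,0} = 0.271, constant w = −0.95); the helper names are ours:

```python
import numpy as np

def omega_m(z, om0=0.271, w=-0.95):
    """Omega_m(z) for a flat background with constant dark-energy w."""
    a = 1.0 / (1.0 + z)
    matter = om0 * a**-3
    dark_energy = (1.0 - om0) * a**(-3.0 * (1.0 + w))
    return matter / (matter + dark_energy)

def growth_rate(z, gamma0=0.545, gamma1=0.0, eta=0.0):
    """f_g of Eqs. (1.8.4)-(1.8.6): Omega_m(z)^gamma(z) * (1 + eta),
    with gamma(z) = gamma0 + gamma1 * z / (1 + z) from Eq. (1.8.5).
    gamma1 = eta = 0 recovers the constant-gamma case."""
    gamma = gamma0 + gamma1 * z / (1.0 + z)
    return omega_m(z)**gamma * (1.0 + eta)
```

With the fiducial values this reproduces, to within a couple of percent, the fiducial growth rates listed in Table 4.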
Reference Cosmological Models. We assume as reference model a “pseudo” ΛCDM, where the growth rate values are obtained from Eq. (1.8.3) with γ = 0.545 and Ω_{ m }(z) is given by the standard evolution. Ω_{ m }(z) is then completely specified by setting Ω_{m,0} = 0.271, Ω_{ k } = 0, w_{0} = −0.95, w_{1} = 0. When the corresponding parameterizations are employed, we choose as fiducial values γ_{1} = 0 and η = 0. We also assume a primordial slope n_{ s } = 0.966 and a present normalization σ_{8} = 0.809.

DGP model. We consider the flat space case studied in [602]. When we adopt this model then we set γ_{0} = 0.663, γ_{1} = 0.041 [372] or γ = 0.68 [587] and w = −0.8 when γ and w are assumed constant.

f (R) model. Here we consider different classes of f (R) models: i) the one proposed in [456], depending on two parameters, n and λ, which we fix to n = 0.5, 1, 2 and λ = 3. For the model with n = 2 we assume γ_{0} = 0.43, γ_{1} = −0.2, values that apply quite generally in the limit of small scales (provided they are still linear, see [378]), or γ = 0.4 and w = −0.99. Unless specified otherwise, we will always refer to this specific model when we mention comparisons to a single f (R) model. ii) The model proposed in [864], fixing λ = 3 and n = 2, which shows a very similar behavior to the previous one. iii) The one proposed in [904], fixing λ = 1.

Coupled darkenergy (CDE) model. This is the coupled model proposed by [33, 955]. In this case we assume γ_{0} = 0.56, η = 0.056 (this value comes from putting β_{ c } = 0.16 as coupling, which is of the order of the maximal value allowed by CMB constraints) [44]. As already explained, this model cannot be reproduced by a constant γ. Forecasts on coupled quintessence based on [42, 33, 724] are discussed in more detail in Section 1.8.8.
For the fiducial values of the bias parameters in every bin, we assume \(b(z) = \sqrt {1 + z}\) (already used in [753]), since this function provides a good fit to Hα line galaxies with luminosity L_{ Hα } = 10^{42} erg s^{−1} h^{−2} modeled by [698] using the semi-analytic GALFORM models of [108]. For the sake of comparison, we will also consider the case of constant b = 1, corresponding to the rather unphysical case of a redshift-independent population of unbiased mass tracers.
We then express the growth function G (z) and the redshift-distortion parameter β (z) in terms of the growth rate f_{ g } (see Eqs. (1.8.8), (1.8.7)). When we compute the derivatives of the spectrum in the Fisher matrix, b (z) and f_{ g } (z) are treated as independent parameters in each redshift bin. In this way we can compute the errors on b (and f_{ g }) self-consistently by marginalizing over all other parameters.
We are now ready to present the main results of the Fisher matrix analysis. Note that in all tables below we always quote errors at the 68% probability level, and draw in the plots the probability regions at 68% and/or 95% (denoted for brevity as the 1 and 2σ values). Moreover, in all figures, all the parameters that are not shown have been marginalized over, or fixed to a fiducial value when so indicated.
Results for the f-parameterization. The total number of parameters entering the Fisher matrix analysis is 45: 5 parameters describing the background cosmology (Ω_{m,0}h^{2}, Ω_{b,0}h^{2}, h, n, Ω_{ k }) plus 5 z-dependent parameters specified in 8 redshift bins evenly spaced in the range z = [0.5, 2.1]. They are P_{s}(z), D (z), H (z), f_{ g } (z), b (z). However, since we are not interested in constraining D (z) and H (z), we always project them onto the set of parameters they depend on (as explained in [815]) instead of marginalizing over them, thus extracting more information on the background parameters.
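For a Gaussian likelihood with parameter-independent covariance, the Fisher matrix used throughout this analysis reduces to F_{ij} = ∂_i m^T C^{−1} ∂_j m, with derivatives taken at the fiducial point. A generic numerical sketch (not the actual Euclid pipeline) is:

```python
import numpy as np

def fisher_matrix(model, theta0, inv_cov, step=1e-4):
    """Gaussian Fisher matrix from central-difference derivatives.

    model: theta -> data vector; theta0: fiducial parameters;
    inv_cov: inverse data covariance matrix.
    """
    theta0 = np.asarray(theta0, float)
    derivs = []
    for i in range(len(theta0)):
        h = step * max(1.0, abs(theta0[i]))
        up, dn = theta0.copy(), theta0.copy()
        up[i] += h
        dn[i] -= h
        derivs.append((model(up) - model(dn)) / (2.0 * h))
    D = np.array(derivs)                 # rows: d(model)/d(theta_i)
    return D @ inv_cov @ D.T

def marginalized_errors(F):
    """1-sigma errors after marginalizing over all other parameters."""
    return np.sqrt(np.diag(np.linalg.inv(F)))
```

For the power-spectrum analysis, m would be the binned spectrum and C its covariance; projecting D(z) and H(z) onto the background parameters amounts to a Jacobian transformation F → J^T F J.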
Table 4. 1σ marginalized errors for the bias and the growth rates in each redshift bin.

| z | σ_{ b } (ref.) | σ_{ b } (opt.) | σ_{ b } (pess.) | b^{F} | \(f_g^F\) | \({\sigma _{{f_g}}}\) (ref.) | \({\sigma _{{f_g}}}\) (opt.) | \({\sigma _{{f_g}}}\) (pess.) |
|---|---|---|---|---|---|---|---|---|
| 0.7 | 0.016 | 0.015 | 0.019 | 1.30 | 0.76 | 0.011 | 0.010 | 0.012 |
| 0.8 | 0.014 | 0.014 | 0.017 | 1.34 | 0.80 | 0.010 | 0.009 | 0.011 |
| 0.9 | 0.014 | 0.013 | 0.017 | 1.38 | 0.82 | 0.009 | 0.009 | 0.011 |
| 1.0 | 0.013 | 0.012 | 0.016 | 1.41 | 0.84 | 0.009 | 0.008 | 0.011 |
| 1.1 | 0.013 | 0.012 | 0.016 | 1.45 | 0.86 | 0.009 | 0.008 | 0.011 |
| 1.2 | 0.013 | 0.012 | 0.016 | 1.48 | 0.87 | 0.009 | 0.009 | 0.011 |
| 1.3 | 0.013 | 0.012 | 0.016 | 1.52 | 0.88 | 0.010 | 0.009 | 0.012 |
| 1.4 | 0.013 | 0.012 | 0.016 | 1.55 | 0.89 | 0.010 | 0.009 | 0.013 |
| 1.5 | 0.013 | 0.012 | 0.016 | 1.58 | 0.91 | 0.011 | 0.010 | 0.014 |
| 1.6 | 0.013 | 0.012 | 0.016 | 1.61 | 0.91 | 0.012 | 0.011 | 0.016 |
| 1.7 | 0.014 | 0.013 | 0.017 | 1.64 | 0.92 | 0.014 | 0.012 | 0.018 |
| 1.8 | 0.014 | 0.013 | 0.018 | 1.67 | 0.93 | 0.014 | 0.013 | 0.019 |
| 1.9 | 0.016 | 0.014 | 0.021 | 1.70 | 0.93 | 0.017 | 0.015 | 0.025 |
| 2.0 | 0.019 | 0.016 | 0.028 | 1.73 | 0.94 | 0.023 | 0.019 | 0.037 |
Table 4 illustrates one important result: through the analysis of the redshift-space galaxy power spectrum in a next-generation Euclid-like survey, it will be possible to measure galaxy biasing in Δz = 0.1 redshift bins with less than 1.6% error, provided that the bias function is independent of scale. We also tested a different choice for the fiducial form of the bias, b (z) = 1, finding that the precision in measuring the bias as well as the other parameters has very little dependence on the form of b (z). Given the robustness of the results to the choice of b (z), in the following we only consider the \(b(z) = \sqrt {1 + z}\) case.
To summarize the results:
 1. The ability to measure the biasing function is not very sensitive to the characteristics of the survey (b (z) can be constrained to within 1% in the Optimistic scenario and to within 1.6% in the Pessimistic one), provided that the bias function is independent of scale. Moreover, we checked that the precision in measuring the bias has very little dependence on the form of b (z).
 2. The growth rate f_{ g } can be estimated to within 1–2.5% in each bin for the Reference case survey, with no need to estimate the bias function b (z) from some dedicated, independent analysis using higher-order statistics [925] or a full-PDF analysis [825].
 3. The estimated errors on f_{ g } depend weakly on the fiducial model of b (z).

γ-parameterization. We start by considering the case of constant γ and w, in which we set γ = γ^{ F } = 0.545 and w = w^{ F } = −0.95. As we will discuss in the next section, this simple case will allow us to cross-check our results with those in the literature. In Figure 16 we show the marginalized probability regions, at the 1 and 2σ levels, for γ and w. The regions with different shades of green illustrate the Reference case for the survey, whereas the blue long-dashed and the black short-dashed ellipses refer to the Optimistic and Pessimistic cases, respectively. Errors on γ and w are listed in Table 5 together with the corresponding figures of merit (FoM), defined as the square root of the Fisher-matrix determinant, and therefore equal to the inverse of the product of the errors at the pivot point, see [21]. Contours are centered on the fiducial model. The blue triangle and the blue square represent the predictions of the flat DGP and the f (R) models, respectively. It is clear that, in the case of constant γ and w, the measurement of the growth rate in a Euclid-like survey will allow us to discriminate among these models. These results have been obtained by fixing the curvature to its fiducial value Ω_{ k } = 0. If instead we consider curvature as a free parameter and marginalize over it, the errors on γ and w increase significantly, as shown in Table 6, and yet the precision is good enough to distinguish the different models. For completeness, we also computed the fully marginalized errors over the other cosmological parameters for the reference survey, given in Table 7.
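The operational difference between fixing Ω_{ k } (Table 5) and marginalizing over it (Table 6), as well as the FoM quoted above, reduce to simple matrix operations on the Fisher matrix; a minimal sketch with our own helper names:

```python
import numpy as np

def fix_parameters(F, keep):
    """Fixing parameters = deleting their rows/columns from the Fisher matrix."""
    return F[np.ix_(keep, keep)]

def marginalize(F, keep):
    """Marginalizing = invert to the covariance, select the sub-block,
    and invert back to a reduced Fisher matrix."""
    C = np.linalg.inv(F)
    return np.linalg.inv(C[np.ix_(keep, keep)])

def figure_of_merit(F2):
    """FoM for a reduced 2x2 (gamma, w) Fisher block: sqrt(det F), i.e.,
    the inverse of the product of the errors at the pivot point."""
    return np.sqrt(np.linalg.det(F2))
```

For example, with uncorrelated errors σ_γ = 0.02 and σ_w = 0.017, `figure_of_merit` gives 1/(0.02 × 0.017) ≈ 2941, of the same order as the FoM = 3052 of Table 5 (which includes correlations).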
As a second step we considered the case in which γ and w evolve with redshift according to Eqs. (1.8.5) and (1.8.2), and then marginalized over the parameters γ_{1}, w_{1} and Ω_{ k }. The marginalized probability contours are shown in Figure 17, in which we show the three survey setups in three different panels to avoid overcrowding. Dashed contours refer to the z-dependent parameterizations, while red continuous contours refer to the case of constant γ and w obtained after marginalizing over Ω_{ k }. Allowing for time dependence increases the size of the confidence ellipses, since the Fisher matrix analysis now accounts for the additional uncertainties in the extra parameters γ_{1} and w_{1}; marginalized error values are in columns \({\sigma _{{\gamma _{{\rm{marg}}}},{\rm{1}}}},\,{\sigma _{{w_{{\rm{marg}}}},1}}\) of Table 8. The uncertainty ellipses are now larger and show that the DGP and fiducial models could be distinguished at the > 2σ level only if the redshift survey parameters are more favorable than in the Reference case.
We have also projected the marginalized ellipses for the parameters γ_{0} and γ_{1} and calculated their marginalized errors and figures of merit, which are reported in Table 9. The corresponding uncertainty contours are shown in the right panel of Figure 16. Once again we overplot the expected values in the f (R) and DGP scenarios to stress the fact that one is expected to be able to distinguish among competing models, irrespective of the survey's precise characteristics.

η-parameterization.
We have repeated the same analysis as for the γ-parameterization, this time taking into account the possibility of a coupling between DE and DM, i.e., we have modeled the growth factor according to Eq. (1.8.6) and the dark-energy equation of state as in Eq. (1.8.2), and marginalized over all parameters, including Ω_{ k }. The marginalized errors are shown in columns \({\sigma _{{\gamma _{{\rm{marg}}}}{\rm{,2}}}},\,{\sigma _{{w_{{\rm{marg}}}},2}}\) of Table 8, and the significance contours are shown in the three panels of Figure 18, which is analogous to Figure 17. Even though the ellipses are now larger, the errors are still small enough to distinguish the fiducial model from the f (R) and DGP scenarios at the > 1σ and > 2σ levels, respectively.
Marginalizing over all other parameters we can compute the uncertainties in the γ and η parameters, as listed in Table 10. The corresponding confidence ellipses are shown in the left panel of Figure 19. This plot shows that next-generation Euclid-like surveys will be able to distinguish the reference model with no coupling (central red dot) from the CDE model proposed by [44] (white square) only at the 1–1.5σ level.
Table 5. Numerical values for 1σ constraints on parameters in Figure 16 and figures of merit. Here we have fixed Ω_{ k } to its fiducial value, Ω_{ k } = 0.

| bias | case | σ_{ γ } | σ_{ w } | FoM |
|---|---|---|---|---|
| \(b = \sqrt {1 + z}\), Ω_{ k } fixed | ref. | 0.02 | 0.017 | 3052 |
| | opt. | 0.02 | 0.016 | 3509 |
| | pess. | 0.026 | 0.02 | 2106 |
Table 6. Numerical values for 1σ constraints on γ and w, as in Table 5 but marginalizing over Ω_{ k }, and figures of merit.

| bias | case | σ_{ γ } | σ_{ w } | FoM |
|---|---|---|---|---|
| \(b = \sqrt {1 + z}\) | ref. | 0.03 | 0.04 | 1342 |
| | opt. | 0.03 | 0.03 | 1589 |
| | pess. | 0.04 | 0.05 | 864 |
Table 7. Numerical values for marginalized 1σ constraints on cosmological parameters, using constant γ and w.

| case | σ_{ h } | \({\sigma _{{\Omega _m}{h^2}}}\) | \({\sigma _{{\Omega _b}{h^2}}}\) | \({\sigma _{{\Omega _k}}}\) | \({\sigma _{{n_s}}}\) | \({\sigma _{{\sigma _8}}}\) |
|---|---|---|---|---|---|---|
| ref. | 0.007 | 0.002 | 0.0004 | 0.008 | 0.03 | 0.006 |
Table 8. Numerical values for marginalized 1σ constraints on γ_{0} and w_{0}, for the γ-parameterization (columns "marg,1", marginalized over γ_{1}, w_{1} and Ω_{ k }) and the η-parameterization (columns "marg,2", marginalized over η and Ω_{ k }), with figures of merit.

| bias | case | \({\sigma _{{\gamma _{{\rm{marg,1}}}}}}\) | \({\sigma _{{w_{{\rm{marg,1}}}}}}\) | FoM | \({\sigma _{{\gamma _{{\rm{marg,2}}}}}}\) | \({\sigma _{{w_{{\rm{marg,2}}}}}}\) | FoM |
|---|---|---|---|---|---|---|---|
| \(b = \sqrt {1 + z}\) | ref. | 0.15 | 0.07 | 97 | 0.07 | 0.07 | 216 |
| | opt. | 0.14 | 0.06 | 112 | 0.07 | 0.06 | 249 |
| | pess. | 0.18 | 0.09 | 66 | 0.09 | 0.09 | 147 |
Table 9. Numerical values for 1σ constraints on parameters in the right panel of Figure 16 and figures of merit.

| bias | case | σ_{ γ0 } | σ_{ γ1 } | FoM |
|---|---|---|---|---|
| \(b = \sqrt {1 + z}\) | ref. | 0.15 | 0.4 | 87 |
| | opt. | 0.14 | 0.36 | 102 |
| | pess. | 0.18 | 0.48 | 58 |
Table 10. Numerical values for 1σ constraints on parameters in Figure 19 and figures of merit.

| bias | case | σ_{ γ } | σ_{ η } | FoM |
|---|---|---|---|---|
| \(b = \sqrt {1 + z}\) | ref. | 0.07 | 0.06 | 554 |
| | opt. | 0.07 | 0.06 | 650 |
| | pess. | 0.09 | 0.08 | 362 |
Table 11. 1σ marginalized errors for the parameters w_{0} and w_{1}, obtained with three different methods (reference case, see Figure 20).

| | \({\sigma _{{w_0}}}\) | \({\sigma _{{w_1}}}\) | FoM |
|---|---|---|---|
| γ_{0}, γ_{1}, Ω_{ k } fixed | 0.05 | 0.16 | 430 |
| γ_{0}, γ_{1} fixed | 0.06 | 0.26 | 148 |
| marginalization over all other parameters | 0.07 | 0.3 | 87 |
In summary:
 1. If both γ and w are assumed to be constant and Ω_{ k } = 0 is fixed, then a redshift survey described by our Reference case will be able to constrain these parameters to within 4% and 2%, respectively.
 2. Marginalizing over Ω_{ k } degrades these constraints to 5.3% and 4%, respectively.
 3. If w and γ are considered redshift-dependent and parametrized according to Eqs. (1.8.5) and (1.8.2), then the errors on γ_{0} and w_{0} obtained after marginalizing over γ_{1} and w_{1} increase by a factor ∼ 7, 5. However, with this precision we will still be able to distinguish the fiducial model from the DGP and f (R) scenarios with more than 2σ and 1σ significance, respectively.
 4. The ability to discriminate these models with a significance above 2σ is confirmed by the confidence contours drawn in the γ_{0}-γ_{1} plane, obtained after marginalizing over all other parameters.
 5. If we allow for a coupling between dark matter and dark energy, and we marginalize over η rather than over γ_{1}, then the errors on w_{0} are almost identical to those obtained in the case of the γ-parameterization, while the errors on γ_{0} decrease significantly.
However, our ability to separate the fiducial model from the CDE model is significantly hampered: the confidence contours plotted in the γ-η plane show that discrimination can only be performed with 1–1.5σ significance. Yet, this is still a remarkable improvement over the present situation, as can be appreciated from Figure 19, where we compare the constraints expected from next-generation data to the present ones. Moreover, the Reference survey will be able to constrain the parameter η to within 0.06. Recalling that we can write \(\eta = 2.1\beta _c^2\) [307], this means that the coupling parameter β_{ c } between dark energy and dark matter can be constrained to within 0.14, solely employing the growth-rate information. This is comparable to existing constraints from the CMB, but is complementary since it is obviously obtained at much smaller redshifts. A variable coupling could therefore be detected by comparing the redshift-survey results with the CMB ones.
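The mapping from the growth-rate constraint on η to the coupling β_{ c } uses η = cβ_{ c }^{2} with c = 2.1 [307]. A sketch of the linear error propagation (the helper name is ours; the resulting number depends on the fiducial β_{ c } at which the derivative is evaluated):

```python
import math

def beta_c_from_eta(eta, sigma_eta, c=2.1):
    """Invert eta = c * beta_c**2 and propagate the error linearly:
    beta_c = sqrt(eta / c), sigma_beta = sigma_eta / (2 * c * beta_c).
    Valid only for eta > 0; near eta = 0 the mapping is non-linear and a
    direct likelihood analysis in beta_c would be preferable."""
    beta_c = math.sqrt(eta / c)
    sigma_beta = sigma_eta / (2.0 * c * beta_c)
    return beta_c, sigma_beta
```

At the CDE fiducial β_{ c } = 0.16 this recovers η = 0.056 as quoted above.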
It is worth pointing out that, whenever we have performed statistical tests similar to those already discussed by other authors in the context of a Euclid-like survey, we found consistent results. Examples are the values of the FoM and the errors for w_{0}, w_{1}, similar to those in [945, 614], and the errors on constant γ and w [614]. However, note that all these values depend strictly on the parametrizations adopted and on the number of parameters fixed or marginalized over (see, e.g., [753]).
2.8.4 Weak lensing nonparametric measurement of expansion and growth rate
In this section we apply power spectrum tomography [448] to the Euclid weak lensing survey without using any parameterization of the Hubble parameter H (z) or the growth function G (z). Instead, we add the fiducial values of those functions at the centers of some redshift bins of our choice to the list of cosmological parameters. Using the Fisher matrix formalism, we can forecast the constraints that future surveys can put on H (z) and G (z). Although such a non-parametric approach is quite common for the equation-of-state ratio w (z) in supernova surveys [see, e.g., 22] and has also been used in redshift surveys [815], it has not been investigated for weak lensing surveys.
Values used in our computation: the fiducial model (WMAP7) and the survey parameters.

| fiducial model (WMAP7) | value |
|---|---|
| ω_{ m } | 0.1341 |
| ω_{ b } | 0.02258 |
| τ | 0.088 |
| n_{ s } | 0.963 |
| Ω_{ m } | 0.266 |
| w_{0} | −1 |
| w_{1} | 0 |
| γ | 0.547 |
| γ_{ppn} | 0 |
| σ_{8} | 0.801 |

| survey parameter | value |
|---|---|
| f_{sky} | 0.375 |
| z_{mean} | 0.9 |
| σ_{ z } | 0.05 |
| n_{ θ } | 30 |
| γ_{int} | 0.22 |
| ℓ_{max} | 5 × 10^{3} |
| Δlog_{10}ℓ | 0.02 |
Note that this is a fundamentally different FoM from the one defined by the Dark Energy Task Force. Our definition allows for a single large error without influencing the FoM significantly, and should stay almost constant after dividing a bin arbitrarily into two bins, assuming the error scales roughly as the inverse of the square root of the number of galaxies in a given bin.
Notice that here we assumed no prior information. Of course, one could improve the FoM by taking into account external constraints from other experiments.
2.8.5 Testing the nonlinear corrections for weak lensing forecasts
In order to fully exploit the potential of next-generation weak lensing surveys, the nonlinear power spectrum must be known to an accuracy of ∼ 1% [465, 469]. However, such precision goes beyond the claimed ±3% accuracy of the popular halofit code [844].
[651] showed that using halofit for non-ΛCDM models requires suitable corrections. In spite of that, halofit has often been used to calculate the spectra of models with a non-constant DE state parameter w (z). This procedure was dictated by the lack of appropriate extensions of halofit to non-ΛCDM cosmologies.
In this paragraph we quantify the effects of using the halofit code instead of Nbody outputs for nonlinear corrections for DE spectra, when the nature of DE is investigated through weak lensing surveys. Using a Fishermatrix approach, we evaluate the discrepancies in error forecasts for w_{0}, w_{ a } and Ω_{ m } and compare the related confidence ellipses. See [215] for further details.
The weak lensing survey is as specified in Section 1.8.2. Tests are performed assuming three different fiducial cosmologies: a ΛCDM model (w_{0} = −1, w_{ a } = 0) and two dynamical DE models, still consistent with the WMAP+BAO+SN combination [526] at 95% C.L., dubbed M1 (w_{0} = −0.67, w_{ a } = 2.28) and M3 (w_{0} = −1.18, w_{ a } = 0.89). In this way we explore the dependence of our results on the assumed fiducial model. For the other parameters we adopt the fiducial cosmology of Section 1.8.2.
The derivatives to calculate the Fisher matrix are evaluated by extracting the power spectra from the Nbody simulations of models close to the fiducial ones, obtained by considering parameter increments ±5%. For the ΛCDM case, two different initial seeds were also considered, to test the dependence on initial conditions, finding that Fisher matrix results are almost insensitive to it. For the other fiducial models, only one seed is used.
Nbody simulations are performed by using a modified version of pkdgrav [859] able to handle any DE state equation w (a), with N^{3} = 256^{3} particles in a box with side L = 256 h^{−1} Mpc. Transfer functions generated using the camb package are employed to create initial conditions, with a modified version of the PM software by [510], also able to handle suitable parameterizations of DE.
Matter power spectra are obtained by performing an FFT (Fast Fourier Transform) of the matter density fields, computed from the particle distribution through a Cloud-in-Cell algorithm on a regular grid with N_{ g } = 2048. This allows us to obtain nonlinear spectra over a large k-interval. In particular, our resolution allows us to work out spectra up to k ≃ 10 h Mpc^{−1}. However, for k > 2–3 h Mpc^{−1} neglecting baryon physics is no longer accurate [481, 774, 149, 976, 426]. For this reason, we consider WL spectra only up to ℓ_{max} = 2000.
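The measurement chain just described (Cloud-in-Cell assignment, FFT, spherical shell average) can be illustrated at toy scale. This is a minimal sketch: it omits the shot-noise subtraction and CIC window deconvolution that a production analysis would apply, and the grid size and function name are our own:

```python
import numpy as np

def power_spectrum(pos, box, ng=64):
    """Matter P(k) from particle positions: CIC deposit + FFT + shell average.

    pos: (N, 3) comoving positions in [0, box); box: side in h^-1 Mpc;
    ng: grid cells per side. Returns (k, P(k)) in h/Mpc and (h^-1 Mpc)^3.
    """
    delta = np.zeros((ng, ng, ng))
    x = pos / (box / ng)                      # positions in cell units
    i0 = np.floor(x).astype(int)
    f = x - i0
    # Cloud-in-Cell: spread each particle over its 8 neighbouring cells.
    for dx in (0, 1):
        for dy in (0, 1):
            for dz in (0, 1):
                w = (np.abs(1 - dx - f[:, 0]) * np.abs(1 - dy - f[:, 1])
                     * np.abs(1 - dz - f[:, 2]))
                np.add.at(delta, ((i0[:, 0] + dx) % ng,
                                  (i0[:, 1] + dy) % ng,
                                  (i0[:, 2] + dz) % ng), w)
    delta = delta / delta.mean() - 1.0        # density contrast
    dk = np.fft.rfftn(delta)
    pk3d = np.abs(dk)**2 * box**3 / ng**6     # standard P(k) normalization
    kf = 2.0 * np.pi / box                    # fundamental frequency
    kx = np.fft.fftfreq(ng, d=1.0 / ng) * kf
    kz = np.fft.rfftfreq(ng, d=1.0 / ng) * kf
    kgrid = np.sqrt(kx[:, None, None]**2 + kx[None, :, None]**2
                    + kz[None, None, :]**2)
    # Average |delta_k|^2 in spherical shells one fundamental mode wide.
    edges = (np.arange(ng // 2) + 0.5) * kf
    ibin = np.digitize(kgrid.ravel(), edges)
    pk_sum = np.bincount(ibin, weights=pk3d.ravel(), minlength=ng // 2 + 1)
    n_modes = np.bincount(ibin, minlength=ng // 2 + 1)
    k_centers = np.arange(1, ng // 2) * kf
    return k_centers, pk_sum[1:ng // 2] / np.maximum(n_modes[1:ng // 2], 1)
```

A basic sanity check is that a perfectly uniform particle lattice, which has no density fluctuations, yields P(k) = 0 on all scales.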
Particular attention has to be paid to the normalization of the matter power spectra. In fact, we found that, normalizing all models to the same linear σ_{8}(z = 0), the shear derivatives with respect to w_{0}, w_{ a } or Ω_{ m } were largely dominated by the normalization shift at z = 0, the σ_{8} and σ_{8,nl} values being quite different, with the shift itself depending on w_{0}, w_{ a } and Ω_{ m }. This would confuse the z-dependence of the growth factor through the observational z-range. This normalization problem was not met in previous analogous tests with the Fisher matrix, as halofit does not directly depend on the DE state equation.
As a matter of fact, one should keep in mind that, observing the galaxy distribution with future surveys, one can effectively measure σ_{8,nl}, and not its linear counterpart. For these reasons, we choose to normalize matter power spectra to σ_{8,nl}, assuming to know it with high precision.
As expected, the error on the Ω_{ m } estimate is not affected by the passage from simulations to halofit, since we are dealing with ΛCDM models only. On the contrary, using halofit leads to underestimates of the errors on w_{0} and w_{ a } by a substantial 30–40% (see [215] for further details).
This confirms that, when considering models different from ΛCDM, nonlinear correction obtained through halofit may be misleading. This is true even when the fiducial model is ΛCDM itself and we just consider mild deviations of w from −1.
The effect of baryon physics is another nonlinear correction to be considered. The details of a study on the impact of baryon physics on the power spectrum and on parameter estimation can be found in [813].
2.8.6 Forecasts for the darkenergy sound speed
As we have seen in Section 1.3.1, when dark energy clusters, the standard subhorizon Poisson equation that links matter fluctuations to the gravitational potential is modified and Q ≠ 1. The deviation from unity will depend on the degree of DE clustering and therefore on the sound speed c_{ s }. In this subsection we try to forecast the constraints that Euclid can put on a constant c_{ s } by measuring Q both via weak lensing and via redshift clustering. Here we assume standard Einstein gravity and zero anisotropic stress (and therefore we have Ψ = Φ) and we allow c_{ s } to assume different values in the range 0–1.
Depending on the scale relative to the causal and sound horizons, three regimes can be distinguished for the growth of perturbations:
 1. perturbations larger than the causal horizon (where perturbations are not causally connected and their growth is suppressed);
 2. perturbations smaller than the causal horizon but larger than the sound horizon, k ≪ aH/c_{ s } (this is the only regime where perturbations are free to grow, because the velocity dispersion, or equivalently the pressure perturbation, is smaller than the gravitational attraction);
 3. perturbations smaller than the sound horizon, k ≫ aH/c_{ s } (here perturbations stop growing because the pressure perturbation is larger than the gravitational attraction).
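Under the stated assumptions (flat background, constant w; here the w = −0.8 used later in this section), the regime of a given comoving mode k can be classified by comparing it with aH and aH/c_{ s }. The helper below is our own illustration, in c = 1 units so that c_{ s } is in units of the speed of light:

```python
import numpy as np

C_HUBBLE = 2997.9  # Hubble radius c/H0 in h^-1 Mpc

def perturbation_regime(k, z, cs, om0=0.271, w=-0.8):
    """Classify a comoving mode k [h/Mpc] against the causal and sound horizons."""
    a = 1.0 / (1.0 + z)
    E = np.sqrt(om0 * a**-3 + (1.0 - om0) * a**(-3.0 * (1.0 + w)))
    k_hor = a * E / C_HUBBLE        # aH in h/Mpc (c = 1)
    k_sound = k_hor / cs            # aH / c_s
    if k < k_hor:
        return "super-horizon: growth suppressed"
    if k < k_sound:
        return "sub-horizon, super-sound-horizon: perturbations free to grow"
    return "sub-sound-horizon: growth halted by pressure"
```

For c_{ s } = 1 the sound horizon coincides with the causal horizon, so the intermediate regime disappears; only for small c_{ s } does a wide window open in which dark-energy perturbations can grow.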

The growth of matter perturbations
There are two ways to influence the growth factor: first, at the background level, through a different Hubble expansion; second, at the perturbation level: if dark energy clusters, the gravitational potential changes through the Poisson equation, and this also affects the growth rate of dark matter. Both effects can be encoded in the growth index γ, and we therefore expect γ to be a function of w and \(c_s^2\) (or, equivalently, of w and Q).
The growth index depends on dark-energy perturbations (through Q) as [785]$$\gamma = {{3(1 - w - A(Q))} \over {5 - 6w}}\,,$$(1.8.24)where$$A(Q) = {{Q - 1} \over {1 - {\Omega _M}(a)}}\,.$$(1.8.25)Clearly, the key quantity here is the derivative of the growth factor with respect to the sound speed:$${{\partial \log G} \over {\partial \ln c_s^2}} \propto \int\nolimits_{{a_0}}^{{a_1}} {{{\partial \gamma} \over {\partial c_s^2}}{\rm{d}}a} \propto \int\nolimits_{{a_0}}^{{a_1}} {{{\partial Q} \over {\partial c_s^2}}{\rm{d}}a} \propto \int\nolimits_{{a_0}}^{{a_1}} {(Q - 1)\;{\rm{d}}a}\,.$$(1.8.26)From this equation we also notice that the derivative of the growth factor does not depend on Q − 1 like the derivative of Q, but rather on Q − Q_{0} (Q_{0} being the value of Q today), as it is an integral. The growth factor is thus not directly probing the deviation of Q from unity, but rather how Q evolves over time; see [786] for more details.
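Eqs. (1.8.24)–(1.8.25) translate directly into a one-line function; a sanity check is that for Q = 1 (no dark-energy clustering) and w = −1 the familiar ΛCDM value γ = 6/11 ≈ 0.545 is recovered:

```python
def growth_index(w, Q, omega_m_a):
    """Growth index gamma of Eq. (1.8.24), with A(Q) from Eq. (1.8.25);
    omega_m_a is Omega_M(a) at the epoch considered."""
    A = (Q - 1.0) / (1.0 - omega_m_a)             # Eq. (1.8.25)
    return 3.0 * (1.0 - w - A) / (5.0 - 6.0 * w)  # Eq. (1.8.24)
```

Note that Q > 1 lowers γ, i.e., clustering dark energy speeds up the growth of matter perturbations.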
Redshift space distortions
The distortion induced by redshift can be expressed in linear theory by the β factor, related to the bias factor and the growth rate via$$\beta (z,k) = {{{\Omega _m}{{\left(z \right)}^{\gamma (k,z)}}} \over {b(z)}}\,.$$(1.8.27)The derivative of the redshift-distortion parameter with respect to the sound speed is$${{\partial \log (1 + \beta {\mu ^2})} \over {\partial \log c_s^2}} = - {3 \over {5 - 6w}}{{\beta {\mu ^2}} \over {1 + \beta {\mu ^2}}}{x \over {1 + x}}(Q - 1)\,.$$(1.8.28)We see that the behavior versus \(c_s^2\) is similar to that of the Q derivative, so the same discussion applies. Once again, the effect is maximized for small c_{ s }. The β derivative is comparable to that of G at z = 0 but becomes more important at low redshifts.
Shape of the dark matter power spectrum
Quantifying the impact of the sound speed on the matter power spectrum is quite hard, as we need to run Boltzmann codes (such as camb, [559]) in order to get the full impact of dark-energy perturbations on the matter power spectrum. [786] proceeded in two ways: first using the camb output, and then considering the analytic expression from [337] (which does not include dark-energy perturbations, i.e., does not depend on c_{ s }).
They find that the impact of the derivative of the matter power spectrum with respect to the sound speed on the final errors is only relevant if high values of \(c_s^2\) are considered; as the sound speed decreases, the results are less and less affected, because for low values of the sound speed other observables, such as the growth factor, become the dominant source of information on \(c_s^2\).
Dark-energy perturbations enter the observables through three distinct effects:
- The direct contribution of the perturbations to the gravitational potential, through the factor Q.
- The impact of the dark-energy perturbations on the growth rate of the dark-matter perturbations, affecting the time dependence of Δ_{ M }, through G (a, k).
- A change in the shape of the matter power spectrum P (k), corresponding to the dark-energy-induced k-dependence of Δ_{ M }.
Once we decrease the sound speed, dark-energy perturbations are free to grow at smaller scales. In Figure 25 the confidence region for w_{0} and \(c_s^2\) in the case \(c_s^2 = {10^{-6}}\) is shown; we find σ (w_{0}) = 0.0286 and \(\sigma (c_s^2)/c_s^2 = 0.132\), i.e., the sound speed is now measured with a relative error of about 13%.
Impact on galaxy power spectrum. We now explore a second probe of clustering, the galaxy power spectrum. The procedure is the same as outlined in Section 1.7.3. We use the representative Euclid survey presented in Section 1.8.2, and also consider possible extended surveys to z_{max} = 2.5 and z_{max} = 4.
In conclusion, as perhaps expected, we find that dark-energy perturbations have a very small effect on dark matter clustering unless the sound speed is extremely small, c_{ s } ≤ 0.01. Recall that, in order to boost the observable effect, we always assumed w = −0.8; for values closer to −1 the sensitivity to \(c_s^2\) is further reduced. As a test, [786] performed the calculation for w = −0.9 and \(c_s^2 = {10^{-5}}\) and found \({\sigma _{c_s^2}}/c_s^2 = 2.6\) and \({\sigma _{c_s^2}}/c_s^2 = 1.09\) for the WL and galaxy power spectrum experiments, respectively.
Such small sound speeds are not in conflict with the fundamental expectation that dark energy is much smoother than dark matter: even with c_{ s } ≈ 0.01, dark-energy perturbations are more than one order of magnitude weaker than the dark matter ones (at least for the class of models investigated here) and safely below nonlinearity at the present time on all scales. Models of “cold” dark energy are interesting because they can cross the phantom divide [536] and contribute to cluster masses [258] (see also Section 1.6.2 of this review). A small c_{ s } could be realized, for instance, with scalar fields with non-standard kinetic terms.
2.8.7 Weak lensing constraints on f(R) gravity
In principle one has complete freedom to specify the function f (R), and so any expansion history can be reproduced. However, as discussed in Section 1.4.6, those that remain viable are the subset that very closely mimic the standard ΛCDM background expansion, as this restricted subclass of models can evade solar system constraints [230, 906, 410], have a standard matter era in which the scale factor evolves according to a (t) ∝ t^{2/3} [43] and can also be free of ghost and tachyon instabilities [682, 415].
Whilst these models are practically indistinguishable from ΛCDM at the level of background expansion, there is a significant difference in the evolution of perturbations relative to the standard GR behavior.
The evolution of linear density perturbations in f (R) gravity is markedly different from that in the standard ΛCDM scenario: δ_{m} ≡ δρ_{m}/ρ_{m} acquires a nontrivial scale dependence at late times. This is due to the presence of an additional scale M (a) in the equations; as a given mode crosses the modified-gravity ‘horizon’ k = aM (a), it feels an enhanced gravitational force due to the scalar field, which has the effect of increasing the power of small-scale modes.
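The scale dependence described above can be sketched with the standard quasi-static f(R) expression for the effective Newton constant, G_eff/G = (1 + 4x/3)/(1 + x) with x = k²/(a²M²), valid for |f_R| ≪ 1. The scalaron mass M below is an arbitrary illustrative input, not a specific Euclid model:

```python
# Scale-dependent growth in f(R) gravity: modes inside the scalaron
# "horizon" (k > a M) feel an enhanced Newton constant.  The 4/3
# enhancement is the standard quasi-static f(R) result for |f_R| << 1;
# the value of M here is purely illustrative.
import numpy as np

def Geff_over_G(k, a, M):
    """G_eff/G = (1 + 4x/3) / (1 + x), with x = k^2 / (a^2 M^2)."""
    x = (k / (a * M))**2
    return (1.0 + 4.0 * x / 3.0) / (1.0 + x)

k = np.logspace(-3, 1, 5)            # wavenumbers in h/Mpc, illustrative
print(Geff_over_G(k, a=1.0, M=0.1))  # -> 1 for k << aM, -> 4/3 for k >> aM
```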
2.8.8 Forecast constraints on coupled quintessence cosmologies
In this section we present forecasts for coupled quintessence cosmologies [33, 955, 724], obtained by combining Euclid weak lensing, the Euclid redshift survey (baryon acoustic oscillations, redshift distortions and full P (k) shape) and CMB constraints as expected from Planck (see also the next section for CMB priors). Results reported here were obtained in [42], and we refer to that work for details on the analysis and the Planck specifications (for weak lensing and CMB constraints on coupled quintessence with a different coupling see also [637, 284]). In [42] the coupling is the one described in Section 1.4.4.4, as induced by a scalar-tensor model. The slope α of the Ratra-Peebles potential is included as an additional parameter, and the Euclid specifications refer to the Euclid Definition phase [551].
1σ errors for the set Θ ≡ {β^{2}, α, Ω_{ c }, h, Ω_{ b }, n_{ s }, σ_{8}, log(A)} of cosmological parameters, combining CMB + P(k) (left column) and CMB + P(k) + WL (right column).
Parameter  σ_{ i } CMB + P(k)  σ_{ i } CMB + P(k) + WL 

β ^{2}  0.00051  0.00032 
α  0.055  0.032 
Ω_{ c }  0.0037  0.0010 
h  0.0080  0.0048 
Ω_{ b }  0.00047  0.00041 
n _{ s }  0.0057  0.0049 
σ _{8}  0.0049  0.0036 
log(A)  0.0051  0.0027 
1σ errors for β^{2}, for CMB, P(k), WL and CMB + P(k) + WL. For each line, only the parameter in the left column has been fixed to the reference value. The first line corresponds to the case in which we have marginalized over all parameters. Table reproduced by permission from [42], copyright by APS.
Fixed parameter  CMB  P(k)  WL  CMB + P(k) + WL 

(Marginalized on all params)  0.0094  0.0015  0.012  0.00032 
α  0.0093  0.00085  0.0098  0.00030 
Ω_{ c }  0.0026  0.00066  0.0093  0.00032 
h  0.0044  0.0013  0.011  0.00032 
Ω_{ b }  0.0087  0.0014  0.012  0.00030 
n _{ s }  0.0074  0.0014  0.012  0.00028 
σ _{8}  0.0094  0.00084  0.0053  0.00030 
log(A)  0.0090  0.0015  0.012  0.00032 
It is remarkable that the combination of CMB, power spectrum and weak lensing is already a powerful tool: a better knowledge of any one parameter does not much improve the constraints on β^{2}. CMB alone, instead, improves by a factor of 3 when Ω_{ c } is known and by a factor of 2 when h is known. The power spectrum is most sensitive to Ω_{ c }, knowledge of which improves the constraints on the coupling by more than a factor of 2. Weak lensing gains the most from a better knowledge of σ_{8}.
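The pattern in the table, where fixing a parameter helps a probe only when it is strongly degenerate with β², is the usual behavior of Fisher-matrix marginalization; a toy two-parameter sketch (illustrative numbers, not the actual coupled-quintessence forecasts) makes the point:

```python
# Marginalized vs fixed-parameter errors from a Fisher matrix: a toy
# two-parameter sketch of the logic behind the table above (the numbers
# are illustrative, not the coupled-quintessence forecasts).
import numpy as np

F = np.array([[4.0, 3.0],
              [3.0, 9.0]])  # toy Fisher matrix with a strong degeneracy

# Marginalizing over parameter 2 means taking the inverse first;
# fixing parameter 2 means dropping its row/column before inverting.
sigma_marg  = np.sqrt(np.linalg.inv(F)[0, 0])  # marginalized error
sigma_fixed = 1.0 / np.sqrt(F[0, 0])           # parameter 2 held fixed

# The fixed-parameter error is always <= the marginalized one, and the
# gap grows with the degree of correlation between the two parameters.
print(sigma_marg, sigma_fixed)
```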
2.8.9 Extra-Euclidean data and priors
Other dark-energy projects will enable cross-checks of the dark-energy constraints from Euclid. These include Planck, BOSS, WiggleZ, HETDEX, DES, Pan-STARRS, LSST, BigBOSS and SKA.
Planck will provide exquisite constraints on cosmological parameters, but not tight constraints on dark energy by itself, since CMB data alone are not sensitive to the nature of dark energy, which has to be probed at z < 2, where it becomes increasingly important in the cosmic expansion history and in the growth of large-scale structure. Planck data in combination with Euclid data provide powerful constraints on dark energy and tests of gravity. In the next Section 1.8.9.1, we discuss how to create a Gaussian approximation to the Planck parameter constraints that can be combined with Euclid forecasts in order to model the expected sensitivity until the actual Planck data become available towards the end of 2012.
The galaxy redshift surveys BOSS, WiggleZ, HETDEX, and BigBOSS are complementary to Euclid, since overlap in the redshift ranges of different galaxy redshift surveys, both space- and ground-based, is critical for understanding systematic effects such as bias through the use of multiple tracers of cosmic large-scale structure. Euclid will survey Hα emission-line galaxies at 0.5 < z < 2.0 over 20,000 square degrees. The use of multiple tracers of large-scale structure can reduce systematic effects and ultimately increase the precision of dark-energy measurements from galaxy redshift surveys [see, e.g., 811].
Currently ongoing or recently completed surveys which cover a sufficiently large volume to measure BAO at several redshifts, and thus have science goals in common with Euclid, are the Sloan Digital Sky Survey III Baryon Oscillation Spectroscopic Survey (BOSS for short) and the WiggleZ survey.
BOSS^{9} maps the redshifts of 1.5 million luminous red galaxies (LRGs) out to z ∼ 0.7 over 10,000 square degrees, measuring the BAO signal and the large-scale galaxy correlations, and extracting information on growth from redshift-space distortions. A simultaneous survey of 2.2 < z < 3.5 quasars measures the acoustic oscillations in the correlations of the Lyman-α forest. LRGs were chosen for their high bias, their approximately constant number density and, of course, the fact that they are bright: their spectra and redshifts can be measured with relatively short exposures on a 2.5 m ground-based telescope. The data-taking of BOSS will end in 2014.
The WiggleZ^{10} survey is now completed; it measured redshifts for almost 240,000 galaxies over 1000 square degrees at 0.2 < z < 1. The targets are luminous blue star-forming galaxies with spectra dominated by strong atomic emission lines, chosen because these lines allow a galaxy redshift to be measured in relatively short exposures on a 4 m class ground-based telescope.
Red quiescent galaxies inhabit dense cluster environments, while blue star-forming galaxies better trace lower-density regions such as sheets and filaments. It is believed that on large cosmological scales these details are unimportant and that galaxies are simply tracers of the underlying dark matter: different galaxy types will only have different ‘bias factors’. The fact that results from BOSS and WiggleZ so far agree well supports this assumption.
Between now and the availability of Euclid data, other wide-field spectroscopic galaxy redshift surveys will take place. Among them, eBOSS will extend BOSS operations, focusing on 3100 square degrees using a variety of tracers. Emission-line galaxies will be targeted in the redshift window 0.6 < z < 1, extending the WiggleZ survey to higher redshift and larger sky coverage. Quasars in the redshift range 1 < z < 2.2 will be used as tracers of the BAO feature instead of galaxies. The BAO LRG measurement will be extended to z ∼ 0.8, and the quasar number density at z > 2.2 of BOSS will be tripled, improving the BAO Lyman-α forest measurement.
HETDEX is expected to begin full science operations in 2014; it aims to survey 1 million Lyman-α emitting galaxies at 1.9 < z < 3.5 over 420 square degrees. The main science goal is to map the BAO feature over this redshift range.
Further in the future, we highlight here the proposed BigBOSS survey and the SuMIRe survey with Hyper Suprime-Cam on the Subaru telescope. The BigBOSS survey will target [OII] emission-line galaxies at 0.6 < z < 1.5 (and LRGs at z < 0.6) over 14,000 square degrees. The SuMIRe wide survey proposes to cover ∼ 2000 square degrees in the redshift range 0.6 < z < 1.6, targeting LRGs and [OII] emission-line galaxies. Both surveys will likely reach full science operations at roughly the same time as the Euclid launch.
Wide-field photometric surveys are also being carried out and planned. The ongoing Dark Energy Survey (DES)^{11} will cover 5000 square degrees out to z ∼ 1.3 and is expected to complete observations in 2017. The Panoramic Survey Telescope & Rapid Response System (Pan-STARRS), whose first phase is already ongoing at the single-mirror stage, will cover 30,000 square degrees in 5 photometric bands for redshifts up to z ∼ 1.5; the second phase of the survey is expected to be completed by the time Euclid launches. Further in the future, the Large Synoptic Survey Telescope (LSST) will cover redshifts 0.3 < z < 3.6 over 20,000 square degrees, but is expected to begin operations in 2021, after Euclid’s planned launch date. The galaxy imaging surveys DES, Pan-STARRS and LSST will complement the Euclid imaging survey in both the choice of band passes and the sky coverage.
SKA (which is expected to begin operations in 2020 and reach full operational capability in 2024) will survey neutral atomic hydrogen (HI) through the radio 21 cm line over a very wide area of the sky. It is expected to detect HI-emitting galaxies out to z ∼ 1.5, making it nicely complementary to Euclid. Such a galaxy redshift survey will of course offer the opportunity to measure the galaxy power spectrum (and therefore the BAO feature) out to z ∼ 1.5. The well-behaved point spread function of a synthesis array like the SKA should ensure superb image quality, enabling cosmic shear to be accurately measured and tomographic weak lensing to be used to constrain cosmology, and in particular dark energy. This weak lensing capability also makes SKA and Euclid very complementary. For more information see, e.g., [755, 140].
Moreover, having both spectroscopic and imaging capabilities, Euclid is uniquely poised to explore clustering with both the three-dimensional distribution of galaxies and weak gravitational lensing.
2.8.9.1 The Planck prior
In this scheme, l_{ a } describes the peak location through the angular diameter distance to decoupling and the size of the sound horizon at that time. If the geometry changes, either due to nonzero curvature or due to a different equation of state of dark energy, l_{ a } changes in the same way as the peak structure. R encodes similar information, but in addition contains the matter density which is connected with the peak height. In a given class of models (for example, quintessence dark energy), these parameters are “observables” related to the shape of the observed CMB spectrum, and constraints on them remain the same independent of (the prescription for) the equation of state of the dark energy.
As a caveat we note that if some assumptions regarding the evolution of perturbations are changed, then the corresponding R and l_{ a } constraints and covariance matrix will need to be recalculated under each such hypothesis, for instance, if massive neutrinos were to be included, or even if tensors were included in the analysis [255]. Further, R as defined in Eq. (1.8.38) can be badly constrained and is quite useless if the dark energy clusters as well, e.g., if it has a low sound speed, as in the model discussed in [534].
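As a rough numerical illustration of the two quantities defined above, the sketch below evaluates R and l_a for a flat ΛCDM background, using the standard definitions R = √Ω_m H₀ D(z_*)/c and l_a = π D(z_*)/r_s(z_*), with D the comoving distance to decoupling and r_s the comoving sound horizon. All parameter values are illustrative, not the fiducials of the simulated Planck data discussed below:

```python
# Rough sketch of the CMB shift parameters R and l_a for flat LCDM,
# using R = sqrt(Omega_m) H0 D(z*)/c and l_a = pi D(z*)/r_s(z*).
# Parameter values are illustrative only.
import numpy as np

h, Om, ob = 0.7, 0.3, 0.045        # Hubble, matter and baryon parameters
og    = 2.47e-5 / h**2             # photon density parameter
orad  = 4.15e-5 / h**2             # photons + massless neutrinos
zstar = 1090.0                     # decoupling redshift
c_H0  = 2997.92458 / h             # Hubble distance c/H0 in Mpc

def E(z):
    return np.sqrt(Om * (1 + z)**3 + orad * (1 + z)**4 + (1 - Om - orad))

def trapz(y, x):                   # simple trapezoidal rule
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

# comoving distance to last scattering
z = np.linspace(0.0, zstar, 200_000)
D = c_H0 * trapz(1.0 / E(z), z)

# comoving sound horizon at last scattering
z2 = np.logspace(np.log10(zstar), 8.0, 200_000)
Rb = 3.0 * ob / (4.0 * og * (1 + z2))   # baryon loading
cs = 1.0 / np.sqrt(3.0 * (1.0 + Rb))    # sound speed in units of c
rs = c_H0 * trapz(cs / E(z2), z2)

R  = np.sqrt(Om) * D / c_H0
la = np.pi * D / rs
print(R, la)   # of order R ~ 1.7 and l_a ~ 300 for these inputs
```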
In order to derive a Planck Fisher matrix, [676] simulated Planck data as described in [703] and derived constraints on the base parameter set {R, l_{ a }, Ω_{ b }h^{2}, n_{ s }} with an MCMC-based likelihood analysis. In addition to R and l_{ a } they used the baryon density Ω_{ b }h^{2} and, optionally, the spectral index of the scalar perturbations n_{ s }, as these are strongly correlated with R and l_{ a }, which means that information would be lost if these correlations were not included. As shown in [676], the resulting Fisher matrix loses some information relative to the full likelihood when only Planck data are considered, but it is very close to the full analysis as soon as extra data are used. Since this is the intended application here, it is perfectly sufficient for our purposes.
R, l_{ a }, Ω_{ b }h^{2} and n_{ s } estimated from Planck simulated data. Table reproduced by permission from [676], copyright by APS.
Parameter  mean  rms variance 

Ω_{ k } ≠ 0  
R  1.7016  0.0055 
l _{ a }  302.108  0.098 
Ω_{ b }h^{2}  0.02199  0.00017 
n _{ s }  0.9602  0.0038 
Covariance matrix for (R, l_{ a }, Ω_{ b }h^{2}, n_{ s }) from Planck. Table reproduced by permission from [676], copyright by APS.
R  l _{ a }  Ω_{ b }h^{2}  n _{ s }  

Ω_{ k } ≠ 0  
R  0.303492E−04  0.297688E−03  −0.545532E−06  −0.175976E−04 
l _{ a }  0.297688E−03  0.951881E−02  −0.759752E−05  −0.183814E−03 
Ω_{ b }h^{2}  −0.545532E−06  −0.759752E−05  0.279464E−07  0.238882E−06 
n _{ s }  −0.175976E−04  −0.183814E−03  0.238882E−06  0.147219E−04 
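As a consistency check of the two tables above, the square roots of the diagonal of the covariance matrix reproduce the quoted rms errors (this also fixes the sign convention of the exponents: e.g., 0.303492E−04 = 0.0055²), and inverting the covariance gives the Fisher matrix in the (R, l_a, Ω_b h², n_s) basis:

```python
# Consistency check of the Planck-prior tables: sqrt(diag(C)) should
# reproduce the quoted rms errors, and C^-1 is the Fisher matrix in the
# (R, l_a, Omega_b h^2, n_s) basis (exponents read as negative powers
# of ten, as required by the rms values).
import numpy as np

C = np.array([
    [ 0.303492e-4,  0.297688e-3, -0.545532e-6, -0.175976e-4],
    [ 0.297688e-3,  0.951881e-2, -0.759752e-5, -0.183814e-3],
    [-0.545532e-6, -0.759752e-5,  0.279464e-7,  0.238882e-6],
    [-0.175976e-4, -0.183814e-3,  0.238882e-6,  0.147219e-4],
])

rms = np.sqrt(np.diag(C))
print(rms)            # ~ [0.0055, 0.098, 0.00017, 0.0038], as quoted

F = np.linalg.inv(C)  # Fisher matrix to be combined with Euclid forecasts
```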
Fisher matrix for (w_{0}, w_{ a }, Ω_{DE}, Ω_{ k }, ω_{ m }, ω_{ b }, n_{ S }) derived from the covariance matrix for (R, l_{ a }, Ω_{ b }h^{2}, n_{ s }) from Planck. Table reproduced by permission from [676], copyright by APS.
w _{0}  w _{ a }  Ω_{DE}  Ω_{ k }  ω _{ m }  ω _{ b }  n _{ S }  

w _{0}  .172276E+06  .490320E+05  .674392E+06  −.208974E+07  .325219E+07  −.790504E+07  −.549427E+05 
w _{ a }  .490320E+05  .139551E+05  .191940E+06  −.594767E+06  .925615E+06  −.224987E+07  −.156374E+05 
Ω_{DE}  .674392E+06  .191940E+06  .263997E+07  −.818048E+07  .127310E+08  −.309450E+08  −.215078E+06 
Ω_{ k }  −.208974E+07  −.594767E+06  −.818048E+07  .253489E+08  −.394501E+08  .958892E+08  .666335E+06 
ω _{ m }  .325219E+07  .925615E+06  .127310E+08  −.394501E+08  .633564E+08  −.147973E+09  −.501247E+06 
ω _{ b }  −.790504E+07  −.224987E+07  −.309450E+08  .958892E+08  −.147973E+09  .405079E+09  .219009E+07 
n _{ S }  −.549427E+05  −.156374E+05  −.215078E+06  .666335E+06  −.501247E+06  .219009E+07  .242767E+06 
2.9 Summary and outlook
 1.
Euclid (RS) should be able to measure the main standard cosmological parameters to percent or subpercent level as detailed in Table 7 (all marginalized errors, including constant equation of state and constant growth rate, see Table 11 and Figure 20).
 2.
The two CPL parameters w_{0}, w_{1} should be measured with errors 0.06 and 0.26, respectively (fixing the growth rate to fiducial), see Table 11 and Figure 20.
 3.
The equation of state w and the growth rate parameter γ, both assumed constant, should be simultaneously constrained to within 0.04 and 0.03, respectively.
 4.
The growth function should be constrained to within 0.01–0.02 for each redshift bin from z = 0.7 to z = 2 (see Table 4).
 5.
A scaleindependent bias function b (z) should be constrained to within 0.02 for each redshift bin (see Table 4).
 6.
The growth rate parameters γ_{0}, γ_{1} defined in Eq. 1.8.5 should be measured to within 0.08, 0.17, respectively.
 7.
Euclid will achieve an accuracy on measurements of the dark-energy sound speed of \(\sigma (c_s^2)/c_s^2 = 2615\) (WLS) and \(\sigma (c_s^2)/c_s^2 = 50.05\) (RS) if \(c_s^2 = 1\), or \(\sigma (c_s^2)/c_s^2 = 0.132\) (WLS) and \(\sigma (c_s^2)/c_s^2 = 0.118\) (RS) if \(c_s^2 = {10^{-6}}\).
 8.
The coupling β^{2} between dark energy and dark matter can be constrained by Euclid (with Planck) to less than 3 · 10^{−4} (see Figure 30 and Table 13).
 9.
Any departure from GR greater than ≃ 0.03 in the growth index γ will be distinguished by the WLS [429].
 10.
Euclid WLS can detect deviations between 3% and 10% from the GR value of the modifiedgravity parameter Σ (Eq. 1.3.28), whilst with the RS there will be a 20% accuracy on both γ and μ (Eq. 1.3.27).
 11.
With the WLS, Euclid should provide an upper limit on the present dimensionless scalaron inverse mass μ ≡ H_{0}/M_{0} of the f (R) scalar (where the time-dependent scalar-field mass is defined in Eq. 1.8.37) of μ = 0.00 ± 1.10 × 10^{−3} for ℓ < 400 and μ = 0.0 ± 2.10 × 10^{−4} for ℓ < 10000.
 12.
The WLS will be able to rule out the DGP model growth index with a Bayes factor ∣ln B ∣ ≃ 50 [429], and viable phenomenological extensions could be detected at the 3σ level for 1000 ≲ ℓ ≲ 4000 [199].
 1.
The results of the redshift survey and weak lensing surveys should be combined in a statistically coherent way.
 2.
The set of possible priors to be combined with Euclid data should be better defined.
 3.
The forecasts for the parameters of the modified gravity and clustered dark-energy models should be extended to include more general cases.
 4.
We should estimate the errors on a general reconstruction of the modified gravity functions Σ, μ or of the metric potentials Ψ, Φ as a function of both scale and time.
3 Dark Matter and Neutrinos
3.1 Introduction
The identification of dark matter is one of the most important open problems in particle physics and cosmology. In standard cosmology, dark matter contributes 85% of all the matter in the universe, but we do not know what it is made of, as we have never observed dark matter particles in our laboratories. The foundations of the modern dark matter paradigm were laid in the 1970s and 1980s, after decades of slow accumulation of evidence. Back in the 1930s, it was noticed that the Coma cluster seemed to contain much more mass than could be inferred from its visible galaxies [992, 993], and a few years later it became clear that the Andromeda galaxy M31 rotates anomalously fast at large radii, as if most of its mass resides in its outer regions. Several other pieces of evidence provided further support to the dark matter hypothesis, including the so-called timing argument. In the 1970s, rotation curves were extended to larger radii and to many other spiral galaxies, demonstrating the presence of large amounts of mass on scales much larger than the size of galactic disks [712].
We are now in a position to determine the total abundance of dark matter, relative to normal baryonic matter, in the universe with exquisite accuracy; and we have a much better understanding of how dark matter is distributed in structures ranging from dwarf galaxies to clusters of galaxies, thanks to gravitational lensing observations [see 644, for a review] and, theoretically, to high-resolution numerical simulations made possible by modern supercomputers (such as, for example, the Millennium or Marenostrum simulations).
Originally, Zwicky thought of dark matter as most likely baryonic: missing cold gas, or low-mass stars. Rotation-curve observations could be explained by dark matter in the form of MAssive Compact Halo Objects (MACHOs, e.g., a halo of black holes or brown dwarfs). However, the MACHO and EROS experiments have shown that dark matter cannot be in the mass range 0.6 × 10^{−7} M_{⊙} < M < 15 M_{⊙} if it comprises massive compact objects [23, 889]. Gas measurements are now extremely sensitive, ruling out dark matter as undetected gas ([134, 238, 765]; but see [728]). And the CMB and Big Bang Nucleosynthesis require the total mass in baryons in the universe to be significantly less than the total matter density [759, 246, 909].
This is one of the most spectacular results in cosmology obtained at the end of the 20th century: dark matter has to be nonbaryonic. As a result, our expectation of the nature of dark matter shifted from an astrophysical explanation to particle physics, linking the smallest and largest scales that we can probe.
During the seventies, the possibility that neutrinos with a mass of a few tens of eV could be the dark matter particle was explored, but it was realized that such a light particle would erase primordial fluctuations on small scales, leading to a lack of structure formation on galactic scales and below. It was therefore postulated that the dark matter particle must be cold (low thermal energy, allowing structures to form on small scales), collisionless (or with a very low interaction cross section, because dark matter is observed to be pressureless) and stable over a long period of time: such a candidate is referred to as a weakly interacting massive particle (WIMP). This is the standard cold dark matter (CDM) picture [see 369, 719].
Particle physicists have proposed several possible dark matter candidates. Supersymmetry (SUSY) is an attractive extension of the Standard Model of particle physics. The lightest SUSY particle (the LSP) is stable, uncharged, and weakly interacting, providing a perfect WIMP candidate known as the neutralino. Specific realizations of SUSY each provide slightly different dark matter candidates [for a review see 482]. Another distinct dark matter candidate arising from extensions of the Standard Model is the axion, a hypothetical pseudo-Goldstone boson whose existence was postulated to solve the so-called strong CP problem in quantum chromodynamics [715]; axions also arise generically in string theory [965, 871] and are very well motivated dark matter candidates [for a review of axions in cosmology see 826]. Other well-known candidates are sterile neutrinos, which interact only gravitationally with ordinary matter, apart from a small mixing with the familiar neutrinos of the Standard Model (which should make them ultimately unstable), and candidates arising from technicolor [see, e.g., 412]. A wide array of other possibilities have been discussed in the literature, and they are currently being searched for with a variety of experimental strategies [for a complete review of dark matter in particle physics see 51].
There remain some possible discrepancies in the standard cold dark matter model, such as the missing satellites problem and the cusp-core controversy (see below for details and references), that have led some authors to question the CDM model and to propose alternative solutions. The physical mechanism by which one may reconcile the observations with the standard theory of structure formation is the suppression of the matter power spectrum at small scales. This can be achieved with dark matter particles with a strong self-scattering cross section, or with particles with a non-negligible velocity dispersion at the epoch of structure formation, referred to as warm dark matter (WDM) particles.
Another possibility is that the extra gravitational degrees of freedom arising in modified theories of gravity play the role of dark matter. In particular, this happens in the Einstein-Aether, TeVeS and bigravity models. These theories were developed following the idea that the presence of unknown dark components in the universe may be indicating that it is not the matter component that is exotic, but rather that gravity is not described by standard GR.

The main dark-matter science goals for Euclid discussed below include:
- the discovery of an exponential suppression in the power spectrum at small scales, which would rule out CDM and favor WDM candidates, or, in its absence, the determination of a lower limit on the mass of the WDM particle, m_{WDM}, of 2 keV;
- the determination of an upper limit on the dark matter self-interaction cross section σ/m ∼ 10^{−27} cm^{2} GeV^{−1} at 68% CL, an improvement of three orders of magnitude over the best constraint available today, which arises from the analysis of the dynamics of the bullet cluster;
- the measurement of the slope of the dark matter distribution within galaxies and clusters of galaxies with unprecedented accuracy;
- the determination of the properties of the only known (though certainly subdominant) non-baryonic dark matter particle, the standard neutrino, for which Euclid can provide information on the absolute mass scale, the normal or inverted hierarchy, and the Dirac or Majorana nature;
- the test of unified dark matter (UDM, or quartessence) models, through the detection of characteristic oscillatory features that these theories predict in the matter power spectrum, detectable through weak lensing or baryonic acoustic oscillation studies;
- a probe of the axiverse, i.e., of the legacy of string theory, through the presence of ultra-light scalar fields that can affect the growth of structure, introducing features in the matter power spectrum and modifying the growth rate of structures.
3.2 Dark matter halo properties
Dark matter was first proposed by [993] to explain the anomalously high velocity of galaxies in galaxy clusters. Since then, evidence for dark matter has been accumulating on all scales. The velocities of individual stars in dwarf galaxies suggest that these are the most dark matter dominated systems in the universe [e.g., 650, 509, 834, 635, 934]. Low surface brightness (LSB) and giant spiral galaxies rotate too fast to be supported by their stars and gas alone, indicating the presence of dark matter [286, 833, 153, 512]. Gravitationally lensed giant elliptical galaxies and galaxy clusters require dark matter to explain their observed image distributions [e.g., 761, 156, 935, 851, 244]. Finally, the temperature fluctuations in the cosmic microwave background (CMB) radiation indicate the need for dark matter in about the same amount as that required in galaxy clusters [e.g., 845, 968, 855].
While the case for particle dark matter is compelling, until we find direct evidence for such a particle, astrophysics remains a unique dark matter probe. Many varieties of dark matter candidates produce a noticeable change in the growth of structure in the universe [482, 865]. Warm dark matter (WDM) suppresses the growth of structure in the early universe producing a measurable effect on the smallscale matter power spectrum [143, 67, 87]. Selfinteracting dark matter (SIDM) changes the expected density distribution within bound dark matter structures [273, 440]. In both cases, the key information about dark matter is contained on very small scales. In this section, we discuss previous work that has attempted to measure the small scale matter distribution in the universe, and discuss how Euclid will revolutionize the field. We divide efforts into three main areas: measuring the halo mass function on large scales, but at high redshift; measuring the halo mass function on small scales through lens substructures; measuring the dark matter density profile within galaxies and galaxy clusters.
3.2.1 The halo mass function as a function of redshift
The baryonic mass function already turns up an interesting result. Overplotted in blue on Figure 32 is the dark matter mass function expected assuming that dark matter is ‘cold’, i.e., that it has no preferred scale. Notice that this has a different shape. On large scales, there should be bound dark matter structures with masses as large as 10^{14} M_{⊙}, yet the number of observed galaxies drops off exponentially above a baryonic mass of ∼ 10^{12} M_{⊙}. This discrepancy is well understood: such large dark matter haloes have been observed, but they no longer host a single galaxy; rather, they are bound collections of galaxies, i.e., galaxy clusters [see e.g. 993]. However, there is also a discrepancy at low masses that is not so well understood. There should be far more bound dark matter haloes than observed small galaxies. This is the well-known ‘missing satellite’ problem [662, 511].
The missing satellite problem could be telling us that dark matter is not cold. The red line on Figure 32 shows the expected dark matter mass function for WDM with a (thermal relic) mass of m_{WDM} = 1 keV. Notice that this gives an excellent match to the observed slope of the baryonic mass function on small scales. However, there may be a less exotic solution. It is likely that star formation becomes inefficient in galaxies on small scales: a combination of supernova feedback, reionization and ram-pressure stripping is sufficient to fully explain the observed distribution assuming pure CDM [529, 756, 603]. Such ‘baryon feedback’ solutions to the missing satellite problem are also supported by recent measurements of the orbits of the Milky Way’s dwarf galaxies [594].
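The suppression scale for a given thermal-relic mass can be sketched with a widely used WDM transfer-function fit, T(k) = [1 + (αk)^{2ν}]^{−5/ν} with ν = 1.12 and α ≈ 0.049 (m_WDM/keV)^{−1.11} h^{−1} Mpc; the normalization below assumes Ω_WDM = 0.25 and h = 0.7, so the numbers are indicative only:

```python
# Half-mode scale of the WDM transfer function, using a common
# thermal-relic fitting formula T(k) = [1 + (alpha k)^(2 nu)]^(-5/nu),
# nu = 1.12.  The alpha normalization assumes Omega_WDM = 0.25 and
# h = 0.7, so the numbers are indicative only.

def alpha_mpc_h(m_keV):
    """Breaking scale in h^-1 Mpc for a thermal relic of mass m_keV."""
    return 0.049 * m_keV**-1.11

def k_half(m_keV, nu=1.12):
    """Wavenumber (h/Mpc) at which T drops to 1/2 (half-mode scale)."""
    return (2.0**(nu / 5.0) - 1.0)**(1.0 / (2.0 * nu)) / alpha_mpc_h(m_keV)

print(k_half(1.0))  # ~9 h/Mpc for the 1 keV relic shown in Figure 32
```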
3.2.1.1 Weak and strong lensing measurements of the halo mass function
To make further progress on WDM constraints from astrophysics, we must avoid the issue of baryonic physics by probing the halo mass function directly. The only tool for achieving this is gravitational lensing. In weak lensing this means stacking data for a very large number of galaxies to obtain an averaged mass function. In strong lensing, this means simply finding enough systems with ‘good data.’ Good data ideally means multiple sources with wide redshift separation [776]; combining independent data from dynamics with lensing may also prove a promising route [see e.g. 893].
Euclid will measure the halo mass function down to ∼ 10^{13} M_{⊙} using weak lensing. It will simultaneously find thousands of strong lensing systems. However, in both cases the lowest mass scale is limited by the lensing critical density. This limits us to probing down to a halo mass of ∼ 10^{11} M_{⊙}, which gives poor constraints on the nature of dark matter. If such measurements can be made as a function of redshift, however, the constraints improve dramatically. We discuss this in the next section.
3.2.1.2 The advantage of going to high redshift
The utility of redshift information was illustrated recently by observations of the Lyman-α absorption spectra of quasars [927, 812]. Quasars act as cosmic ‘flashlights’ shining light from the very distant universe. Some of this light is absorbed by intervening neutral gas, leading to absorption features in the quasar spectra. Such features contain rich information about the matter distribution in the universe at high redshift. Thus, the Lyman-α forest measurements have been able to place a lower bound of m_{WDM} > 4 keV, probing scales of ∼ 1 Mpc. Key to the success of this measurement is that much of the neutral gas lies in between galaxies in filaments. Thus, linear approximations for the growth of structures in WDM versus CDM remain acceptable, while assuming that the baryons are a good tracer of the underlying matter field is also a good approximation. However, improving on these early results means probing smaller scales, where nonlinearities and baryon physics will creep in. For this reason, tighter bounds must come from techniques that probe either even higher redshifts or even smaller scales. Lensing from Euclid is an excellent candidate since it will achieve both while measuring the halo mass function directly rather than through the visible baryons.
3.2.2 The dark matter density profile
3.3 Euclid dark matter studies: wide-field X-ray complementarity
The predominant extragalactic X-ray sources are AGNs and galaxy clusters. For dark matter studies the latter are the more interesting targets. X-rays from clusters are emitted as thermal bremsstrahlung by the hot intracluster medium (ICM), which contains most of the baryons in the cluster. The thermal pressure of the ICM supports it against gravitational collapse, so that measuring the temperature through X-ray observations provides information about the mass of the cluster and its distribution. Hence, X-rays form a probe of the dark matter in clusters complementary to Euclid weak lensing measurements.
The ongoing X-ray missions XMM-Newton and Chandra have good enough angular resolution to measure the temperature and mass profiles in ∼ 10 radial bins for clusters at reasonable redshifts, although this requires long exposures. Many planned X-ray missions aim to improve the spectral coverage, spectral resolution, and/or collection area of the present missions, but they are nonetheless mostly suited for targeted observations of individual objects. Two notable exceptions are eROSITA^{12} [207, launch 2014] and the Wide Field X-ray Telescope^{13} [WFXT 390, 931, 789, 773, 152, 790, proposed], which will both conduct full-sky surveys and, in the case of WFXT, also smaller but deeper surveys of large fractions of the sky.
A sample of high-angular-resolution X-ray cluster observations can be used to test the prediction from N-body simulations of structure formation that dark matter haloes are described by the NFW profile [684] with a concentration parameter c. This parameter describes the steepness of the profile, which is related to the mass of the halo [685]. Weak or strong lensing measurements of the mass profile, such as those that will be provided by Euclid, can supplement the X-ray measurements and have different systematics. Euclid could provide wide-field weak lensing data for such a purpose with very good point spread function (PSF) properties, but it is likely that the depth of the Euclid survey will make dedicated deep-field observations a better choice for a lensing counterpart to the X-ray observations. However, if the WFXT mission becomes a reality, the sheer number of detected clusters with mass profiles would mean Euclid could play a much more important rôle.
X-ray observations of galaxy clusters can constrain cosmology by measuring the geometry of the universe through the baryon fraction f_{gas} [26] or by measuring the growth of structures by determining the high-mass tail of the mass function [622]. The latter method would make the most of the large number of clusters detected in full-sky surveys, and there would be several benefits to combining an X-ray and a lensing survey. It is not immediately clear which type of survey would better detect clusters at various redshifts and masses, and the combination of the two probes could improve understanding of the sample completeness. An X-ray survey alone cannot measure cluster masses with the precision required for cosmology. Instead, it requires a calibrated relation between the X-ray temperature and the cluster mass. Such a calibration, derived from a large sample of clusters, could be provided by Euclid. In any case, it is not yet clear whether the large size of a Euclid sample would be more beneficial than deeper observations of fewer clusters.
Finally, X-ray observations can also confirm the nature of possible ‘bullet-like’ merging clusters. In such systems the shock of the collision has displaced the ICM from the dark matter mass, which is identified through gravitational lensing. This offers the opportunity to study dark matter haloes with very few baryons and, e.g., search for signatures of decaying or annihilating dark matter.
3.4 Dark matter mapping
Gravitational lensing offers a unique way to chart dark matter structures in the universe as it is sensitive to all forms of matter. Weak lensing has been used to map the dark matter in galaxy clusters [see for example 245], with high-resolution reconstructions recovered for the most massive strong lensing clusters [see for example 164]. Several lensing studies have also mapped the projected surface mass density over degree-scale fields [386, 798, 532] to identify shear-selected groups and clusters. The minimum mass scale that can be identified is limited only by the intrinsic ellipticity noise in the lensing analysis and projection effects. Using a higher number density of galaxies in the shear measurement reduces this noise, and for this reason the Deep Field Euclid Survey will be truly unique for this area of research, permitting high-resolution reconstructions of dark matter in the field [645, 432] and the study of lenses at higher redshift.
There are several nonparametric methods to reconstruct dark matter in 2D, which can be broadly split into two categories: convergence techniques [486] and potential techniques [90]. In the former, one measures the projected surface mass density (or convergence) κ directly by applying a convolution to the measured shear under the assumption that κ ≪ 1. Potential techniques perform a χ^{2} minimization and are better suited to the cluster regime; they can also incorporate strong lensing information [163]. In the majority of methods, choices need to be made about smoothing scales to optimize signal-to-noise whilst preserving reconstruction resolution. Using a wavelet method circumvents this choice [860, 497] but makes the resulting significance of the reconstruction difficult to measure.
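For the convergence techniques just described, the flat-sky inversion of [486] (Kaiser-Squires) has a particularly compact form in Fourier space, κ̂ = D* γ̂ with D = (k_1² − k_2² + 2i k_1 k_2)/k². A minimal sketch, assuming an idealized regularly gridded, noise-free shear map with periodic boundaries (real surveys need masking, smoothing and noise treatment):

```python
import numpy as np

def kaiser_squires(gamma1, gamma2, pixel_scale=1.0):
    """Recover the convergence kappa from a shear map (flat-sky sketch).

    In Fourier space: kappa_hat = conj(D) * gamma_hat, with
    D = (k1^2 - k2^2 + 2j*k1*k2) / k^2.
    """
    n1, n2 = gamma1.shape
    k1 = np.fft.fftfreq(n1, d=pixel_scale)[:, None]
    k2 = np.fft.fftfreq(n2, d=pixel_scale)[None, :]
    k_sq = k1**2 + k2**2
    k_sq[0, 0] = 1.0                      # avoid 0/0; the k=0 mode is unconstrained anyway
    D = (k1**2 - k2**2 + 2j * k1 * k2) / k_sq
    gamma_hat = np.fft.fft2(gamma1 + 1j * gamma2)
    kappa_hat = np.conj(D) * gamma_hat
    kappa_hat[0, 0] = 0.0                 # mean convergence unrecoverable (mass-sheet degeneracy)
    return np.fft.ifft2(kappa_hat).real   # imaginary part is the B-mode, a useful noise check
```

The unconstrained k = 0 mode is the Fourier-space face of the mass-sheet degeneracy: only the mean-subtracted κ can be recovered from shear alone.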
3.4.1 Charting the universe in 3D
The lensing distortion depends on the total projected surface mass density along the line of sight and a geometrical factor that increases with source distance. This redshift dependence can be used to recover the full 3D gravitational potential of the matter density as described in [455, 72] and applied to the COMBO17 survey in [879] and the COSMOS survey in [645]. This work has been extended in [835] to reconstruct the full 3D mass density field and applied to the STAGES survey in [836].
All 3D mass reconstruction methods require the use of a prior based on the expected mean growth of matter density fluctuations. Without the inclusion of such a prior, [455] have shown that one is unable to reasonably constrain the radial matter distribution, even for densely sampled spacebased quality lensing data. Therefore 3D maps cannot be directly used to infer cosmological parameters.
The driving motivation behind the development of 3D reconstruction techniques was to enable an unbiased 3D comparison of mass and light. Dark haloes, for example, would only be detected in this manner. However, the detailed analysis of noise and the radial PSF in the 3D lensing reconstructions presented for the first time in [836] shows how inherently noisy the process is. Given that the method can resolve only the most massive structures in 3D, the future direction of its application to the Euclid Wide survey should be to reconstruct large-scale structures in the 3D density field. Using more heavily spatially smoothed data we can expect higher-quality 3D reconstructions, as on degree scales the significance of modes in a 3D mass density reconstruction is increased [835]. Adding additional information from flexion may also improve mass reconstruction, although flexion information alone is much less sensitive than shear [733].
3.5 Scattering cross sections
We now move towards discussing the particulate aspects of dark matter, starting with a discussion of the scattering cross-sections of dark matter. At present, many physical properties of the dark matter particle remain highly uncertain. Prospects for studying the scattering of dark matter with each of the three major constituents of the universe — itself, baryons, and dark energy — are outlined below.
3.5.1 Dark matter-dark matter interactions
Self-interacting dark matter (SIDM) was first postulated by [853], in an attempt to explain the apparent paucity of low-mass haloes within the Local Group. The key characteristic of this model is that CDM particles possess a large scattering cross-section, yet with negligible annihilation or dissipation. The process of elastic scattering erases small substructures and cuspy cores, whilst preserving the density profile of the haloes.
However, as highlighted by [399], cross-sections large enough to alleviate the structure formation issues would also allow significant heat transfer from particles within a large halo to the cooler subhaloes. This effect is most prominent close to the centers of clusters. As the subhalo evaporates, the galaxy residing within it would be disrupted. Requiring the evaporation timescale to exceed the Hubble time places an upper bound on the scattering cross-section of approximately σ_{ p }/m_{ p } ≲ 0.3 cm^{2} g^{−1} (neglecting any velocity dependence). Note the dependence on particle mass — a more massive CDM particle would be associated with a lower number density, thereby reducing the frequency of collisions.
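The scaling of this bound can be illustrated with a back-of-the-envelope estimate: demanding fewer than roughly one scattering per particle per Hubble time, (σ_p/m_p) ρ v t_H ≲ 1. The density and velocity below are illustrative assumed values for a cluster environment, not the numbers used in the cited analyses:

```python
# Order-of-magnitude sketch of the subhalo-evaporation bound on sigma/m.
# Condition: (sigma/m) * rho * v * t_H <~ 1 (fewer than ~1 scattering per Hubble time).

rho = 1.0e-25      # g/cm^3 -- assumed dark-matter density near a cluster core
v = 1.0e8          # cm/s (~1000 km/s) -- assumed typical relative velocity
t_H = 4.3e17       # s -- roughly the Hubble time

sigma_over_m_max = 1.0 / (rho * v * t_H)   # cm^2/g
print(f"sigma/m <~ {sigma_over_m_max:.2f} cm^2/g")
```

With these assumed inputs the estimate lands at a few tenths of cm² g⁻¹, the same order as the σ_p/m_p ≲ 0.3 cm² g⁻¹ bound quoted above.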
[658] have performed ray-tracing through N-body simulations, and have discovered that the ability of galaxy clusters to generate giant arcs from strong lensing is compromised if the dark matter is subject to just a few collisions per particle. This constraint translates to an upper bound σ_{ p }/m_{ p } ≲ 0.1 cm^{2} g^{−1}. Furthermore, more recent analyses of SIDM models [629, 750] utilize data from the Bullet Cluster to provide another independent limit on the scattering cross-section, though the upper bound remains unchanged. [643] have proposed that the tendency for baryonic and dark matter to become separated within dynamical systems, as seen in the Bullet Cluster, could be studied in greater detail if the analysis were extended over the full sky with Euclid. This concept is explored in further detail in the following section.
3.5.2 Dark matter-baryonic interactions
Currently, a number of efforts are underway to directly detect WIMPs via the recoil of atomic nuclei. Underground experiments such as CDMS, CRESST, XENON, EDELWEISS and ZEPLIN have pushed observational limits for the spin-independent WIMP-nucleon cross-section down to the σ ≲ 10^{−43} cm^{2} régime.^{14} A collection of the latest constraints can be found at http://dmtools.brown.edu.
Another opportunity to unearth the dark matter particle lies in accelerators such as the LHC. By 2018 it is possible these experiments will have yielded mass estimates for dark matter candidates, provided its mass is lighter than a few hundred GeV. However, the discovery of more detailed properties of the particle, which are essential to confirm the link to cosmological dark matter, would have to wait until the International Linear Collider is constructed.
3.5.3 Dark matter-dark energy interactions
It is clear that such dark sector interactions do not arise in the simplest models of dark matter and dark energy. However a rigorous refutation of GR will require not only a robust measure of the growth of cosmic structures, but confirmation that the anomalous dynamics are not simply due to physics within the dark sector.
3.6 Cross-section constraints from galaxy clusters
Clusters of galaxies present an interesting environment in which the dark matter density is high and where processes such as collisions present the possibility of distinguishing dark matter from baryonic matter, as the two components interact differently. For instance, particulate dark matter and baryonic matter may be temporarily separated during collisions between galaxy clusters, such as 1E 0657-56 [244, 164] and MACS J0025.4-1222 [162]. These ‘bullet clusters’ have provided astrophysical constraints on the interaction cross-section of hypothesized dark matter particles [750], and may ultimately prove the most useful laboratory in which to test for any velocity dependence of the cross-section. Unfortunately, the contribution of individual systems is limited by uncertainties in the collision velocity, impact parameter and angle with respect to the plane of the sky. Current constraints are three orders of magnitude weaker than constraints from the shapes of haloes [361] and, since collisions between two massive progenitors are rare [818, 819], the total observable number of such systems may be inadequate to investigate a physically interesting regime of dark matter properties.
Current constraints from bullet clusters on the cross-section of particulate dark matter are ∼ 18 orders of magnitude larger than that required to distinguish between plausible particle-physics dark matter candidates (for example from supersymmetric extensions to the standard model). In order to investigate a physically interesting régime of dark matter cross-section, and to provide smaller error bars, many more individual bullet clusters are required. However, collisions between two massive progenitors are rare, and ultimately the total observable number of such systems may be inadequate.
3.6.1 Bulleticity
Finally, a Fisher matrix calculation has shown that, under the assumption that systematic effects can be controlled, Euclid could use such a technique to constrain the relative particulate cross-sections to 6 × 10^{−27} cm^{2} GeV^{−1}.
The raw bulleticity measurement would constrain the relative cross-sections of the baryon-baryon interaction and the dark matter-dark matter interaction. However, since we know the baryonic cross-section relatively well, we can infer the dark matter-dark matter cross-section. The dark matter-dark matter interaction probed by Euclid using this technique will be complementary to the interactions constrained by direct detection and accelerator experiments, where the primary constraints will be on the dark matter-baryon interaction.
3.7 Constraints on warm dark matter
N-body simulations of large-scale structures that assume a ΛCDM cosmology appear to overpredict the power on small scales when compared to observations [744]: the ‘missing-satellite problem’ [494, 511, 869, 188], the ‘cusp-core problem’ [568, 833, 974] and the sizes of mini-voids [888]. These problems may be more or less solved by several different phenomena [e.g. 310]; however, one which could explain all of the above is warm dark matter (WDM) [143, 248, 159]. If the dark matter particle is very light, it can suppress the growth of structures on small scales via free streaming of the dark matter particles whilst they are relativistic in the early universe.
3.7.1 Warm dark matter particle candidates

Sterile neutrinos may be constructed to extend the standard model of particle physics. The standard model active (left-handed) neutrinos can then receive the observed small masses through, e.g., a seesaw mechanism. This implies that right-handed sterile neutrinos must be rather heavy, but the lightest of them naturally has a mass in the keV region, which makes it a suitable WDM candidate. The simplest model of sterile neutrinos as a WDM candidate assumes that these particles were produced at the same time as active neutrinos, but never thermalized and were thus produced with a much reduced abundance due to their weak coupling [see 136, and references therein].

The gravitino appears as the supersymmetric partner of the graviton in supergravity models. If it has a mass in the keV range, it will be a suitable WDM candidate. It belongs to a more general class of thermalized WDM candidates. It is assumed that this class of particles achieved a full thermal equilibrium, but at an earlier stage, when the number of degrees of freedom was much higher and hence their relative temperature with respect to the CMB is much reduced. Note that in order for the gravitino to be a good dark matter particle in general, it must be very stable, which in most models corresponds to it being the LSP [e.g. 151, 221].
3.7.2 Dark matter free-streaming
In order to extrapolate the matter power spectrum to later times one must take into account the nonlinear evolution of the matter density field. This is not straightforward in the WDM case [630] and most likely needs to be explored through further simulations [974].
3.7.3 Current constraints on the WDM particle from largescale structure
Measurements in the particle-physics energy domain can only reach masses uninteresting in the WDM context, since direct detectors look mainly for a WIMP, whose mass should be in the GeV-TeV range. However, as described above, cosmological observations are able to place constraints on light dark matter particles. Observations of the flux power spectrum of the Lyman-α forest, which can indirectly measure the fluctuations in the dark matter density on scales between ∼ 100 kpc and ∼ 10 Mpc, give limits of m_{WDM} > 4 keV or, equivalently, m_{νs} > 28 keV at 95% confidence level [927, 929, 812]. For the simplest sterile neutrino model, these lower limits are at odds with the upper limits derived from X-ray observations, which come from the lack of an observed diffuse X-ray background from sterile neutrino annihilation and set the limit m_{νs} < 1.8 keV at the 95% confidence level [161]. However, these results do not rule out the simplest sterile neutrino models: there exist theoretical means of evading the small-scale power constraints [see e.g. 160, and references therein]. The weak lensing power spectrum from Euclid will be able to constrain the dark matter particle mass down to about m_{WDM} ≈ 2 keV [630].
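The quoted equivalence between the thermal-relic and sterile-neutrino limits follows from mapping the two masses onto the same free-streaming suppression. One commonly used approximate relation (Viel et al. 2005, for non-resonantly produced sterile neutrinos at the standard dark matter density — an assumption here, not a statement from the text) reproduces the 4 keV ↔ 28 keV correspondence:

```python
def sterile_mass(m_wdm_kev):
    """Map a thermal-relic WDM mass (keV) to the equivalent non-resonant
    sterile-neutrino mass (keV) with the same free-streaming scale.

    Approximate relation from Viel et al. (2005), assuming the standard
    dark-matter density: m_nu_s ~= 4.43 keV * (m_wdm / 1 keV)^(4/3).
    """
    return 4.43 * m_wdm_kev ** (4.0 / 3.0)

print(f"m_wdm = 4 keV  ->  m_nu_s ~= {sterile_mass(4.0):.0f} keV")
```

For m_WDM = 4 keV this gives m_νs ≈ 28 keV, matching the pair of limits quoted above.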
3.8 Neutrino properties
The first significant evidence for a finite neutrino mass [373] indicated the incompleteness of the standard model of particle physics. Subsequent experiments have further strengthened this evidence and improved the determination of the neutrino mass splitting required to explain observations of neutrino oscillations.
As a summary of the last decade of neutrino experiments, two hierarchical neutrino mass splittings and three mixing angles have been measured. Furthermore, the standard model has three neutrinos: the motivation for considering deviations from the standard model in the form of extra sterile neutrinos has disappeared [655, 13]. Of course, deviations from the standard effective number of neutrino species could still indicate exotic physics, which we will discuss below (Section 3.8.4).

Relic neutrinos produced in the early universe interact so weakly that detecting them directly is impossible with foreseeable technology. But new cosmological probes such as Euclid offer the opportunity to detect (albeit indirectly) relic neutrinos, through the effect of their mass on the growth of cosmological perturbations.

Cosmology remains a key avenue to determine the absolute neutrino mass scale.
Particle physics experiments will be able to place lower limits on the effective neutrino mass, which depends on the hierarchy, with no rigorous limit achievable in the case of normal hierarchy [680]. By contrast, neutrino free streaming suppresses the small-scale clustering of large-scale cosmological structures by an amount that depends on the neutrino mass.

“What is the hierarchy (normal, inverted or degenerate)?” Neutrino oscillation data are unable to resolve, in a model-independent way, whether the mass spectrum consists of two light states with mass m and a heavy one with mass M — normal hierarchy — or two heavy states with mass M and a light one with mass m — inverted hierarchy. Cosmological observations, such as the data provided by Euclid, can determine the hierarchy, complementing data from particle physics experiments.

“Are neutrinos their own antiparticle?” If the answer is yes, then neutrinos are Majorana fermions; if not, they are Dirac. If neutrinos and antineutrinos are identical, there could have been a process in the early universe that affected the balance between particles and antiparticles, leading to the matter-antimatter asymmetry required for our existence [374]. This question can, in principle, be resolved if neutrinoless double-β decay is observed [see 680, and references therein]. However, if such experiments [ongoing and planned, e.g., 265] lead to a negative result, the implications for the nature of neutrinos depend on the hierarchy. As shown in [480], in this case cosmology can offer complementary information by helping determine the hierarchy.
3.8.1 Evidence of relic neutrinos
The hot big bang model predicts a background of relic neutrinos in the universe with an average number density of ∼ 100 N_{ ν } cm^{−3}, where N_{ ν } is the number of neutrino species. These neutrinos decoupled from the primordial plasma at redshift z ∼ 10^{10}, when the temperature was T ∼ O(MeV), but remain relativistic down to much lower redshifts depending on their mass. A detection of such a neutrino background would be an important confirmation of our understanding of the physics of the early universe.
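The quoted number density follows from the measured CMB temperature together with the standard per-species relation n_ν = (3/11) n_γ; a short numerical check:

```python
import math

# Relic photon and neutrino number densities today (standard hot big bang).
k_B, hbar, c = 1.380649e-23, 1.054571817e-34, 2.99792458e8   # SI constants
T_cmb = 2.725                                                # K, measured CMB temperature
zeta3 = 1.2020569031595943                                   # Riemann zeta(3)

# Photon number density: n_gamma = (2 zeta(3)/pi^2) * (k_B T / hbar c)^3
x = k_B * T_cmb / (hbar * c)                         # inverse meters
n_gamma = 2.0 * zeta3 / math.pi**2 * x**3 * 1e-6     # convert m^-3 -> cm^-3

# Each neutrino species (nu + nubar) carries n_nu = (3/11) n_gamma
n_nu = 3.0 / 11.0 * n_gamma
print(f"n_gamma ~= {n_gamma:.0f} cm^-3, n_nu ~= {n_nu:.0f} cm^-3 per species")
```

This gives n_γ ≈ 411 cm⁻³ and n_ν ≈ 112 cm⁻³ per species, i.e., the ∼ 100 N_ν cm⁻³ quoted above.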
Massive neutrinos affect cosmological observations in different ways. Primary CMB data alone can constrain the total neutrino mass Σ if it is above ∼ 1 eV [526, which finds Σ < 1.3 eV at 95% confidence], because such neutrinos become non-relativistic before recombination, leaving an imprint in the CMB. Neutrinos with masses Σ < 1 eV become non-relativistic after recombination, altering matter-radiation equality for fixed Ω_{ m }h^{2}; this effect is degenerate with other cosmological parameters from primary CMB data alone. After neutrinos become non-relativistic, their free streaming damps the small-scale power and modifies the shape of the matter power spectrum below the free-streaming length. The free-streaming length of each neutrino family depends on its mass.
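The size of this small-scale damping can be estimated with the standard relations Ω_ν h² = Σ/93.14 eV and the well-known linear-theory approximation ΔP/P ≈ −8 f_ν, with f_ν = Ω_ν/Ω_m; the fiducial Ω_m h² below is an assumed illustrative value:

```python
def power_suppression(sigma_ev, omega_m_h2=0.14):
    """Approximate small-scale linear power suppression from neutrino free streaming.

    Uses Omega_nu h^2 = Sigma / 93.14 eV and Delta P / P ~= -8 f_nu,
    where f_nu = Omega_nu / Omega_m.  omega_m_h2 is an assumed fiducial value.
    """
    f_nu = (sigma_ev / 93.14) / omega_m_h2
    return -8.0 * f_nu

for sigma in (0.054, 0.3):
    print(f"Sigma = {sigma} eV -> Delta P/P ~= {power_suppression(sigma):.3f}")
```

Even the oscillation-minimum Σ ≈ 0.054 eV suppresses the small-scale power by a few percent, which is what puts the cosmic neutrino background within reach of Euclid-quality clustering data.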
Current cosmological observations do not detect any small-scale power suppression and break many of the degeneracies of the primary CMB, yielding constraints of Σ < 0.3 eV [762] if we assume the neutrino mass to be constant. Detecting such an effect, however, would provide an (albeit indirect) detection of the cosmic neutrino background. As shown in the next section, the fact that oscillations predict a minimum total mass Σ ∼ 0.054 eV implies that Euclid has the statistical power to detect the cosmic neutrino background. We finally remark that the neutrino mass may also very well vary in time [957]; this might be tested by comparing (and not combining) measurements from the CMB at decoupling with low-z measurements. An inconsistency would point to a time-varying neutrino mass [959].
3.8.2 Neutrino mass
Particle physics experiments are sensitive to neutrino flavours, making a determination of the absolute neutrino mass scale very model-dependent. On the other hand, cosmology is not sensitive to neutrino flavour, but is sensitive to the total neutrino mass.
The small-scale power suppression caused by neutrinos leaves imprints on CMB lensing: forecasts indicate that Planck should be able to constrain the sum of neutrino masses Σ with a 1σ error of 0.13 eV [491, 557, 289].
Euclid’s measurement of the galaxy power spectrum, combined with Planck (primary CMB only) priors, should yield an error on Σ of 0.04 eV [for details see 211], which is in qualitative agreement with previous work [e.g. 779], assuming a minimal value for Σ and a constant neutrino mass. Euclid’s weak lensing should also yield an error on Σ of 0.05 eV [507]. While these two determinations are not fully independent (the cosmic-variance part of the error is in common, given that the lensing survey and the galaxy survey cover the same volume of the universe), the size of the error bars implies a more-than-1σ detection of even the minimum Σ allowed by oscillations. Moreover, the two independent techniques will offer cross-checks and robustness to systematics. The error on Σ depends on the fiducial model assumed, decreasing for fiducial models with larger Σ. Euclid will enable us not only to detect the effect of massive neutrinos on clustering but also to determine the absolute neutrino mass scale.
3.8.3 Hierarchy and the nature of neutrinos
Since cosmology is insensitive to flavour, one might expect that cosmology may not help in determining the neutrino mass hierarchy. However, for Σ < 0.1 eV, only the normal hierarchy is allowed, so a mass determination can help disentangle the hierarchy. There is also another effect: neutrinos of different masses become non-relativistic at slightly different epochs; the free-streaming length is slightly different for the different species, and thus the detailed shape of the small-scale power suppression depends on the individual neutrino masses and not just on their sum. As discussed in [480], in cosmology one can safely neglect the impact of the solar mass splitting. Thus, two masses characterize the neutrino mass spectrum: the lightest, m, and the heaviest, M. The mass splitting can be parameterized by Δ = (M − m)/Σ for normal hierarchy and Δ = (m − M)/Σ for inverted hierarchy. The absolute value of Δ determines the mass splitting, whilst the sign of Δ gives the hierarchy. Cosmological data are very sensitive to ∣Δ∣; the direction of the splitting — i.e., the sign of Δ — introduces a subdominant correction to the main effect. Nonetheless, [480] show that weak gravitational lensing from Euclid data will be able to determine the hierarchy (i.e., the mass splitting and its sign) if far enough away from the degenerate hierarchy (i.e., if Σ < 0.13 eV).
3.8.4 Number of neutrino species
Neutrinos decouple early in cosmic history and contribute a relativistic energy density with an effective number of species N_{ν,eff} = 3.046. Cosmology is sensitive to the physical energy density in relativistic particles in the early universe, which in the standard cosmological model includes only photons and neutrinos: ω_{rel} = ω_{ γ } + N_{ν,eff}ω_{ ν }, where ω_{ γ } denotes the energy density in photons and is exquisitely constrained from the CMB, and ω_{ ν } is the energy density in one neutrino species. Deviations from the standard value of N_{ν,eff} would signal non-standard neutrino features or additional relativistic species. N_{ν,eff} impacts the big bang nucleosynthesis epoch through its effect on the expansion rate; measurements of primordial light-element abundances can constrain N_{ν,eff} and rely on physics at T ∼ MeV [158]. In several non-standard models — e.g., decay of dark matter particles, axions, quintessence — the energy density in relativistic species can change at some later time. The energy density of free-streaming relativistic particles alters the epoch of matter-radiation equality and therefore leaves a signature in the CMB and in the matter transfer function. However, from CMB data alone there is a degeneracy between N_{ν,eff} and Ω_{ m }h^{2} (given by the combination of these two parameters that leaves matter-radiation equality unchanged) and between N_{ν,eff} and σ_{8} and/or n_{ s }. Large-scale structure surveys measuring the shape of the power spectrum on large scales can independently constrain the combination Ω_{ m }h and n_{ s }, thus breaking the CMB degeneracy. Furthermore, anisotropies in the neutrino background affect the CMB anisotropy angular power spectrum at the ∼ 20% level through the gravitational feedback of their free-streaming damping and anisotropic stress contributions. Detection of this effect is now possible by combining CMB and large-scale structure observations.
This yields an indication at more than 2σ level that there exists a neutrino background with characteristics compatible with what is expected under the cosmological standard model [901, 285].
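The standard relativistic energy budget behind these constraints can be evaluated directly. This sketch assumes the instantaneous-decoupling temperature ratio T_ν/T_γ = (4/11)^{1/3}, so that each neutrino species contributes a fraction (7/8)(4/11)^{4/3} of the photon energy density:

```python
def rel_density_ratio(n_eff=3.046):
    """rho_rel / rho_gamma for N_eff effective neutrino species.

    omega_rel = omega_gamma * (1 + (7/8) * (4/11)^(4/3) * N_eff),
    assuming the standard temperature ratio T_nu/T_gamma = (4/11)^(1/3).
    """
    per_species = 7.0 / 8.0 * (4.0 / 11.0) ** (4.0 / 3.0)   # ~0.227 per species
    return 1.0 + per_species * n_eff

print(f"rho_rel / rho_gamma = {rel_density_ratio():.3f} for N_eff = 3.046")
```

Each species adds ∼ 23% of the photon energy density, so a shift of ±0.1 in N_{ν,eff} (the Euclid+Planck forecast below) corresponds to a ∼ 1% change in the total relativistic density.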
The forecasted errors on N_{ν,eff} for Euclid (with a Planck prior) are ±0.1 at 1σ level [507], which is a factor ∼ 5 better than current constraints from CMB and LSS and about a factor ∼ 2 better than constraints from light element abundance and nucleosynthesis.
3.8.5 Model dependence
A recurring question is how model-dependent the neutrino constraints will be. It is important to recall that parameter fitting is usually done within the context of a ΛCDM model and that the neutrino effects are seen indirectly in the clustering. Considering more general cosmological models might degrade the neutrino constraints and, vice versa, including neutrinos in the model might degrade the dark-energy constraints. Below we discuss separately the two cases of varying the total neutrino mass Σ and the number of relativistic species N_{eff}.
3.8.6 Σ forecasted error bars and degeneracies
In [211] it is shown that, for a general model which allows for a non-flat universe and a redshift-dependent dark-energy equation of state, the 1σ spectroscopic errors on the neutrino mass Σ are in the range 0.036–0.056 eV, depending on the fiducial total neutrino mass Σ, for the combination Euclid+Planck.
On the other hand, looking at the effect that massive neutrinos have on the dark-energy parameter constraints, it is shown that the total CMB+LSS dark-energy FoM decreases only by ∼ 15%–25% with respect to the value obtained if neutrinos are assumed massless, when the forecasts are computed using the so-called “P (k)-method marginalized over growth information” (see Methodology section), which therefore proves quite robust in constraining the dark-energy equation of state.
As for the parameter correlations, at the LSS level the total neutrino mass Σ is correlated with all the cosmological parameters affecting the galaxy power spectrum shape and BAO positions. When Planck priors are added to the Euclid constraints, all degeneracies are either resolved or reduced, and the remaining dominant correlations between Σ and the other cosmological parameters are Σ–Ω_{de}, Σ–Ω_{ m }, and Σ–w_{ a }, with the Σ–Ω_{de} degeneracy being the largest.
3.8.6.1 Hierarchy dependence
In addition, the neutrino mass spectroscopic constraints depend on the neutrino hierarchy. In fact, the 1σ errors on the total neutrino mass for normal hierarchy are ∼ 17%–20% larger than for the inverted one. It appears that the matter power spectrum is less able to give information on the total neutrino mass when the normal hierarchy is assumed as the fiducial neutrino mass spectrum. This is similar to what was found in [480] for the constraints on the neutrino mass hierarchy itself, when a normal hierarchy is assumed as the fiducial one. On the other hand, when CMB information is included, the Σ errors decrease by ∼ 35% in favor of the normal hierarchy, at a given fiducial value Σ∣_{fid}. This difference arises from the changes in the free-streaming effect due to the assumed mass hierarchy, and is in agreement with the results of [556], which confirm that the expected errors on the neutrino masses depend not only on the sum of neutrino masses, but also on the order of the mass splitting between the neutrino mass states.
3.8.6.2 Growth and incoherent peculiar velocity dependence
The spectroscopic errors on Σ stay mostly unchanged whether growth information is included or marginalized over, and decrease only by 10%–20% when f_{g}σ_{8} measurements are added. This result is expected: unlike the dark-energy parameters, Σ affects the shape of the power spectrum via a redshift-dependent transfer function T(k, z), which is sampled over a very large range of scales including the P(k) turnover scale, so this effect dominates over the information extracted from measurements of f_{g}σ_{8}. The latter quantity, in turn, generates new correlations with Σ via the σ_{8} term, which is actually anticorrelated with M_{ν} [641]. On the other hand, if early dark energy is assumed negligible, the dark-energy parameters Ω_{de}, w_{0} and w_{a} do not enter the transfer function, and consequently growth information carries relatively more weight when added to constraints from H(z) and D_{A}(z) alone. The dark-energy FoM therefore does increase when growth information is included, even though it decreases by ∼ 50%–60% with respect to cosmologies where neutrinos are assumed massless, due to the correlation between Σ and the dark-energy parameters. As a confirmation of this degeneracy, when growth information is added and the dark-energy parameters Ω_{de}, w_{0}, w_{a} are held fixed to their fiducial values, the error σ(Σ) decreases from 0.056 eV to 0.028 eV for Euclid combined with Planck.
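The factor-of-two improvement obtained by fixing correlated parameters is a generic property of Fisher forecasts. A minimal sketch, with a 2×2 Fisher matrix whose entries are invented for illustration (not the actual Euclid+Planck matrix):

```python
# Toy 2x2 Fisher forecast illustrating why fixing a correlated dark-energy
# parameter shrinks the error on Sigma. Matrix entries are invented for
# illustration only.

def marginalized_sigma(F, i):
    """1-sigma error sqrt((F^-1)_ii), i.e., with the other parameter marginalized."""
    a, b, c, d = F[0][0], F[0][1], F[1][0], F[1][1]
    det = a * d - b * c
    cov_ii = (d if i == 0 else a) / det   # diagonal of the 2x2 inverse
    return cov_ii ** 0.5

def fixed_sigma(F, i):
    """1-sigma error 1/sqrt(F_ii), i.e., with the other parameter held fixed."""
    return (1.0 / F[i][i]) ** 0.5

# Parameter order: (Sigma [eV], w0), strongly correlated.
F = [[1200.0, 900.0],
     [900.0, 1000.0]]

s_marg = marginalized_sigma(F, 0)
s_fix = fixed_sigma(F, 0)
print(round(s_marg, 3), round(s_fix, 3))   # 0.051 0.029: the same ~factor-2 behavior
```

In two dimensions the marginalized error always exceeds the fixed one by 1/√(1 − r²), where r is the correlation coefficient, which is why breaking the Σ–dark-energy degeneracy is so effective.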
We expect the dark-energy parameter errors to be somewhat sensitive also to the effect of incoherent peculiar velocities, the so-called "Fingers of God" (FoG). This can be understood in terms of correlation functions in redshift space: the stretching effect due to random peculiar velocities counteracts the flattening effect due to large-scale bulk velocities. Consequently, these two competing effects act in opposite directions on the dark-energy parameter constraints (see methodology Section 5).
On the other hand, the neutrino-mass errors are found to be stable at σ(Σ) = 0.056 eV even when FoG effects are taken into account by marginalizing over σ_{υ}(z); they increase only by 10%–14% with respect to the case where FoG are neglected.
σ(M_{ν}) and σ(N_{eff}) marginalized errors from LSS+CMB

fiducial →          Σ = 0.3 eV^a   Σ = 0.2 eV^a   Σ = 0.125 eV^b   Σ = 0.125 eV^c   Σ = 0.05 eV^b   N_{eff} = 3.04^d

General cosmology
EUCLID+Planck       0.0361         0.0458         0.0322           0.0466           0.0563          0.0862

ΛCDM cosmology
EUCLID+Planck       0.0176         0.0198         0.0173           0.0218           0.0217          0.0224
3.8.7 N_{eff} forecasted errors and degeneracies
Regarding the spectroscopic errors on N_{eff}, [211] finds σ(N_{eff}) ∼ 0.56 from Euclid alone and σ(N_{eff}) ∼ 0.086 for Euclid+Planck. Concerning the effect of N_{eff} uncertainties on the dark-energy parameter errors, the CMB+LSS dark-energy FoM decreases only by ∼ 5% with respect to the value obtained holding N_{eff} fixed at its fiducial value, meaning that in this case too the "P(k) method marginalized over growth information" is not too sensitive to assumptions about the model cosmology when constraining the dark-energy equation of state.
Turning to the degeneracies between N_{eff} and the other cosmological parameters, note that the number of relativistic species gives two opposite contributions to the observed power spectrum P_{obs} (see methodology Section 5), and the overall sign of each correlation depends on which contribution dominates for that parameter. A larger N_{eff} value suppresses the transfer function T(k) on scales k ≤ k_{max}; on the other hand, a larger N_{eff} value also increases the Alcock-Paczynski prefactor in P_{obs}. For the dark-energy parameters Ω_{de}, w_{0}, w_{a} and the dark-matter density Ω_{m}, the Alcock-Paczynski prefactor dominates, so that N_{eff} is positively correlated with Ω_{de} and w_{a}, and anticorrelated with Ω_{m} and w_{0}. In contrast, for the other parameters the T(k) suppression produces the larger effect, and N_{eff} turns out to be anticorrelated with Ω_{b}, and positively correlated with h and n_{s}. The degree of correlation is very large in the n_{s}–N_{eff} case, of order ∼ 0.8 with and without Planck priors. For the remaining cosmological parameters, all correlations are reduced when CMB information is added, except for the covariance N_{eff}–Ω_{de}, as also happens for the M_{ν} correlations. To summarize, after the inclusion of Planck priors the remaining dominant degeneracies are N_{eff}–n_{s}, N_{eff}–Ω_{de}, and N_{eff}–h, and the forecasted error is σ(N_{eff}) ∼ 0.086 from Euclid+Planck. Finally, if the dark-energy parameters Ω_{de}, w_{0} and w_{a} are fixed to their fiducial values, σ(N_{eff}) decreases from 0.086 to 0.048 for the combination Euclid+Planck.
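The way a CMB prior reduces such degeneracies can be sketched with a toy 2×2 Fisher matrix; the numbers below are invented for illustration, with a Planck-like prior acting on one parameter only:

```python
# Sketch of how adding a prior on one parameter reduces a degeneracy.
# 2x2 toy for (N_eff, n_s); all numbers are illustrative.

def correlation(F):
    """Correlation coefficient r = C_01 / sqrt(C_00 C_11) of the covariance C = F^-1."""
    a, b, d = F[0][0], F[0][1], F[1][1]
    return -b / (a * d) ** 0.5     # closed form for a symmetric 2x2 Fisher matrix

F_lss = [[100.0, 85.0],
         [85.0, 100.0]]            # LSS alone: |r| = 0.85
prior_ns = 400.0                   # CMB-like prior, added in the Fisher sense
F_tot = [[100.0, 85.0],
         [85.0, 100.0 + prior_ns]]

print(correlation(F_lss), round(correlation(F_tot), 3))   # -0.85 -0.38
```

Priors add to the Fisher matrix on the diagonal, so any external constraint on one member of a degenerate pair shrinks the joint error ellipse and lowers |r|.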
3.8.8 Nonlinear effects of massive cosmological neutrinos on bias and RSD
In general, forecasted errors are obtained with techniques, such as the Fisher-matrix approach, that are not particularly well suited to quantify systematic effects. These techniques forecast only statistical errors, which are meaningful as long as they dominate over systematic errors. It is therefore important to consider sources of systematics and their possible effects on the recovered parameters. The sources of systematic error of major concern are the effects of nonlinearities and of galaxy bias.
The description of nonlinearities in the matter power spectrum in the presence of massive neutrinos has been addressed in several different ways: [966, 779, 778, 780] used perturbation theory, [555] the time-RG flow approach, and [167, 166, 168, 928] different schemes of N-body simulations. Another nonlinear scheme examined in the literature is the halo model, which has been applied to massive-neutrino cosmologies in [1, 421, 422].
Galaxy/halo bias, on the other hand, is known to be almost scale-independent only on large, linear scales, becoming nonlinear and scale-dependent on small scales and/or for very massive haloes. From the above discussion and references, it is clear that the effect of massive neutrinos on the galaxy power spectrum in the nonlinear regime must be explored via N-body simulations to encompass all the relevant effects.
Below we focus on the behavior of the DM-halo mass function (MF), the DM-halo bias, and the redshift-space distortions (RSD) in the presence of a cosmological background of massive neutrinos. To this aim, [168] and [641] analyzed a set of large N-body hydrodynamical simulations, run with an extended version of the code gadget-3 [928], which takes into account the effect of massive free-streaming neutrinos on the evolution of cosmic structures.
As is well known, massive neutrinos also strongly affect the spatial clustering of cosmic structures. A standard statistic generally used to quantify the degree of clustering of a population of sources is the two-point autocorrelation function. Although the free-streaming of massive neutrinos suppresses the matter power spectrum on scales k larger than the neutrino free-streaming scale, the halo bias is significantly enhanced. Physically, because of the neutrino-induced suppression of structure growth, the same halo bias would correspond, in a ΛCDM cosmology, to more massive haloes (than in a ΛCDMν cosmology), which are typically more clustered.
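The direction of this effect can be illustrated with the standard peak-background-split bias formula; this is a rough sketch only, where the assumed ∼ −4 f_ν suppression of σ(M) (from the rule of thumb ΔP/P ≈ −8 f_ν) and all numbers are purely illustrative:

```python
# Toy illustration of the bias enhancement: free-streaming neutrinos suppress
# sigma(M), raising the peak height nu = delta_c / sigma(M) and hence the halo
# bias, via the standard peak-background-split formula b = 1 + (nu^2 - 1)/delta_c.
# The sigma suppression factor below is a rough rule of thumb, not a fit.

DELTA_C = 1.686                  # spherical-collapse threshold

def bias(sigma_m):
    nu = DELTA_C / sigma_m       # peak height at this halo mass
    return 1.0 + (nu ** 2 - 1.0) / DELTA_C

sigma_lcdm = 1.0                 # rms fluctuation at a given halo mass, massless case
f_nu = 0.05                      # neutrino fraction Omega_nu / Omega_m (illustrative)
sigma_mnu = sigma_lcdm * (1.0 - 4.0 * f_nu)   # approximate suppression of sigma(M)

print(bias(sigma_lcdm) < bias(sigma_mnu))   # True: higher bias at fixed halo mass
```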
3.9 Coupling between dark energy and neutrinos
As we have seen in Section 1.4.4, it is interesting to consider the possibility that dark energy, seen as a dynamical scalar field (quintessence), may interact with other components in the universe. In this section we focus on the possibility that a coupling may exist between dark energy and neutrinos.
The idea of such a coupling has been addressed and developed by several authors, first within MaVaNs theories [356, 714, 135, 12, 952, 280, 874, 856, 139, 178, 177] and more recently within growing neutrino cosmologies [36, 957, 668, 963, 962, 727, 179]. It has been shown that neutrinos can play a crucial role in cosmology, naturally setting the desired scale for dark energy. Interestingly, a coupling between neutrinos and dark energy may help solve the 'why now' problem, explaining why dark energy dominates only in recent epochs. The coupling follows the description illustrated in Section 1.4.4 for a general interacting dark-energy cosmology, where now m_{ν} = m_{ν}(ϕ).
Typically, in growing neutrino cosmologies, the function m_{ν}(ϕ) is such that the neutrino mass grows with time from low, nearly massless values (when neutrinos are still relativistic) up to present masses in a range in agreement with current observations (see the previous section of this review for the latest bounds on neutrino masses). The key feature of growing neutrino models is that the amount of dark energy today is triggered by a cosmological event: the transition from relativistic to non-relativistic neutrinos at redshift z_{NR} ∼ 5–10. As long as neutrinos are relativistic, the coupling plays no role in the dynamics of the scalar field, which follows attractor solutions of the type described in Section 1.4.4. From then on, the evolution of dark energy resembles that of a cosmological constant, plus small oscillations of the coupled dark energy-neutrino fluid. As a consequence, when a coupling between dark energy and neutrinos is active, the amount of dark energy and its equation of state today are strictly connected to the present value of the neutrino mass.
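For orientation, the non-relativistic transition redshift for a *constant* neutrino mass follows from ⟨p⟩ ≈ 3.15 T_ν dropping below m; the time dependence of m_ν in growing neutrino models is exactly what moves z_NR down to the quoted ∼ 5–10. A back-of-the-envelope sketch of the constant-mass scaling only:

```python
# Constant-mass estimate of the non-relativistic transition: neutrinos go NR
# when their mean momentum <p> ~ 3.15 T_nu drops below m, giving
# z_NR ~ 1890 (m / 1 eV). Growing-neutrino models differ because m_nu = m_nu(phi).

T_NU0_EV = 1.676e-4        # present neutrino temperature, (4/11)^(1/3) x 2.725 K, in eV

def z_nr(m_ev):
    """Non-relativistic transition redshift for a constant mass m (in eV)."""
    return m_ev / (3.15 * T_NU0_EV) - 1.0

print(round(z_nr(1.0)))    # ~ 1890
print(round(z_nr(0.05)))   # ~ 94: light constant-mass neutrinos go NR much later
```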
The interaction between neutrinos and dark energy is a nice and concrete example of the significant imprint that dynamical coupled dark energy can leave on observables, in particular on structure formation and on the cosmic microwave background. This is because the coupling, playing a role only after neutrinos become non-relativistic, can reach relatively high values compared to gravitational attraction. Typical values of β are of order 50–100 or even more, such that even the small fraction of cosmic energy density in neutrinos can have a substantial influence on the time evolution of the quintessence field. During this time the fifth force can be of order 10^{2}–10^{4} times stronger than gravity. The neutrino contribution to the gravitational potential also indirectly influences dark matter and structure formation, as well as the CMB, via the integrated Sachs-Wolfe effect and the nonlinear Rees-Sciama effect, which is non-negligible at the scales where neutrinos form stable lumps. Furthermore, backreaction effects can substantially modify the growth of large-scale neutrino lumps, with effects much larger than in the dark-matter case. The presence of a fifth force due to an interaction between neutrinos and dark energy can lead to remarkably peculiar differences with respect to a cosmological constant scenario:

• existence of very large structures, of order 10–500 Mpc [12, 668, 963, 962, 727];
• enhanced ISW effect, drastically reduced when nonlinearities are taken into account [727]: information on the gravitational potential is a good means to constrain the range of allowed values of the coupling β;
• large-scale anisotropies and enhanced peculiar velocities [947, 69];
• the gravitational potential induced by the neutrino inhomogeneities can affect the BAO in the dark-matter spectra [179].
Investigation of structure formation on very large scales (of order 1–100 Mpc), as well as cross-correlation with the CMB, is crucial to disentangle coupled neutrino-quintessence cosmologies from a cosmological constant scenario. Detection of a population of very large-scale structures would pose serious difficulties for the standard framework and open the way to the existence of a new cosmological interaction stronger than gravity.
3.10 Unified Dark Matter
The appearance of two unknown components in the standard cosmological model, dark matter and dark energy, has prompted discussion of whether they are two facets of a single underlying dark component. This concept goes under the name of quartessence [615], or unified dark matter (UDM). A priori this is attractive, replacing two unknown components with one, and in principle it might explain the ‘why now?’ problem of why the energy densities of the two components are similar (also referred to as the coincidence problem). Many UDM models are characterized by a sound speed, whose value and evolution imprints oscillatory features on the matter power spectrum, which may be detectable through weak lensing or BAO signatures with Euclid.
The field is rich in UDM models [see 128 for a review and references to the literature]. These models can grow structure as well as provide the late-time acceleration of the universe. In many cases they have a non-canonical kinetic term in the Lagrangian, e.g., an arbitrary function of the square of the time derivative of the field in a homogeneous and isotropic background. Early models with acceleration driven by kinetic energy [k-inflation; 60, 384, 154] were generalized to more general Lagrangians [k-essence; e.g., 61, 62, 795]. For UDM, several models have been investigated, such as the generalized Chaplygin gas [488, 123, 137, 979, 741], although these may be tightly constrained due to the finite sound speed [e.g., 38, 124, 784, 985]. Vanishing-sound-speed models, however, evade these constraints [e.g., the silent Chaplygin gas of 50]. Other models consider a single fluid with a two-parameter equation of state [e.g., 74], models with canonical Lagrangians but a complex scalar field [55], models with a kinetic term in the energy-momentum tensor [379, 234], models based on a DBI action [236], models which violate the weak equivalence principle [375], and models with viscosity [321]. Finally, some models attempt to unify inflation as well as dark matter and dark energy [206, 688, 572, 575, 430].
A requirement for UDM models to be viable is that they must be able to cluster so that structure can form. A generic feature of UDM models is an effective sound speed, which may become significantly nonzero during the evolution of the universe; the resulting Jeans length may then be large enough to inhibit structure formation. The appearance of this sound speed also leads to observable consequences in the CMB, and generally speaking the sound speed must be small enough to allow structure formation and agreement with CMB measurements. In the limit of zero sound speed, many models recover the standard cosmological model. The models generally require fine-tuning, although some feature a fast transition between a dark-matter-only behavior and ΛCDM; such models [729] can have acceptable Jeans lengths even when the sound speed is not negligible.
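The dust-to-Λ interpolation that makes UDM attractive can be seen explicitly in the generalized Chaplygin gas mentioned above, whose background evolution has the well-known exact solution ρ(a) = [A + B a^(−3(1+α))]^(1/(1+α)); the parameter values in this sketch are illustrative choices, not fitted values:

```python
# Generalized Chaplygin gas as a UDM fluid: p = -A / rho^alpha, with exact
# background solution rho(a) = [A + B a^(-3(1+alpha))]^(1/(1+alpha)).
# It behaves as dust (w ~ 0) early on and approaches a cosmological constant.
# A, B, alpha are illustrative, not observationally fitted.

def w_gcg(a, A=0.7, B=0.3, alpha=0.5):
    """Equation of state w = p/rho of the generalized Chaplygin gas."""
    rho = (A + B * a ** (-3.0 * (1.0 + alpha))) ** (1.0 / (1.0 + alpha))
    p = -A / rho ** alpha
    return p / rho

print(abs(w_gcg(1e-3)) < 1e-6)   # True: dust-like at early times
print(w_gcg(1.0))                # -0.7 with these choices; w -> -1 in the future
```

The finite sound speed c_s² = −α w of this fluid is precisely what imprints the oscillatory features on P(k) that the text refers to.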
3.10.1 Theoretical background
3.10.2 Euclid observables
Of interest for Euclid are the weak-lensing and BAO signatures of these models, although the supernova Hubble diagram can also be used [885]. The observable effects come from the power spectrum and from the evolution of the equation-of-state parameter of the unified fluid, which affects distance measurements. The observational constraints on the generalized Chaplygin gas have been investigated [706], with the model already constrained to be close to ΛCDM by SDSS data and the CMB. The effect on BAO measurements for Euclid has yet to be calculated, but the weak-lensing effect has been considered for non-canonical UDM models [202]. The change in shape and the oscillatory features introduced in the power spectrum allow the sound-speed parameter to be constrained very well by Euclid, using 3D weak lensing [427, 506], with errors ∼ 10^{−5} [see also 198].
3.11 Dark energy and dark matter
In Section 1.4, we have illustrated the possibility that dark energy, seen as a dynamical scalar field (quintessence), may interact with other components in the universe. When starting from an action such as Eq. (1.4.20), the species which interact with quintessence are characterized by a mass function that changes in time [514, 33, 35, 724]. Here we consider the case in which the evolution of cold-dark-matter (CDM) particles depends on the evolution of the dark-energy scalar field. The general framework of Section 1.4 is then specified by the choice of the function m_{c} = m_{c}(ϕ). The coupling is not constrained by tests of the equivalence principle and solar-system constraints, and can therefore be stronger than the coupling with baryons. Typical values of β presently allowed by observations (within current CMB data) lie in the range 0 < β < 0.06 at 95% CL for a constant coupling and an exponential potential [114, 47, 35, 44], or possibly more if neutrinos are taken into account or more realistic time-dependent choices of the coupling are used [539, 531]. As mentioned in Section 1.4.4, this framework is generally referred to as 'coupled quintessence' (CQ). Various choices of couplings have been investigated in the literature, including constant β [33, 619, 35, 518, 414, 747, 748, 724] and varying couplings [76], with effects on supernovae, the CMB and the cross-correlation of the CMB and LSS [114, 47, 35, 44, 539, 531, 612]. In these models, a CDM particle coupled to the scalar field experiences:
 1. a fifth force ∇[Φ_{α} + βϕ], with an effective \({\tilde G_\alpha} = {G_N}\left[ {1 + 2{\beta ^2}\left(\phi \right)} \right]\);
 2. a velocity-dependent term \(\tilde H{{\bf{v}}_\alpha} \equiv H\left({1 - \beta \left(\phi \right){{\dot \phi} \over H}} \right){{\bf{v}}_\alpha}\);
 3. a time-dependent mass for each particle α, evolving according to Eq. (1.4.25).
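The effect of the fifth force (item 1) on linear structure growth can be sketched by integrating the standard growth equation with the source term boosted by (1 + 2β²); this is a minimal sketch assuming a flat ΛCDM background and ignoring the velocity-dependent term and the mass variation (items 2 and 3), with β and Ω_m chosen for illustration:

```python
import math

# Linear CDM growth with the fifth-force enhancement G_eff = G_N (1 + 2 beta^2):
#   d^2 D/dlna^2 + (2 + dlnH/dlna) dD/dlna = (3/2) Omega_m(a) (1 + 2 beta^2) D
# integrated with an explicit Euler scheme in a flat LCDM background.
# Items 2 and 3 of the coupling are deliberately ignored in this sketch.

def growth_today(beta, om0=0.3, n=20000):
    """Growth factor D(a=1), normalized so that D = a deep in the matter era."""
    lna, lna_end = math.log(1e-3), 0.0
    h = (lna_end - lna) / n
    D, dD = 1e-3, 1e-3                        # matter-era initial conditions: D = a
    for _ in range(n):
        a = math.exp(lna)
        E2 = om0 * a ** -3 + (1.0 - om0)      # (H/H0)^2, flat LCDM
        om_a = om0 * a ** -3 / E2             # Omega_m(a)
        dlnH = -1.5 * om0 * a ** -3 / E2      # dlnH/dlna
        d2D = 1.5 * om_a * (1.0 + 2.0 * beta ** 2) * D - (2.0 + dlnH) * dD
        D, dD = D + h * dD, dD + h * d2D      # explicit Euler step
        lna += h
    return D

print(growth_today(0.0) < growth_today(0.1))  # True: the coupling enhances growth
```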

These modifications leave several characteristic imprints:
• enhanced ISW effect [33, 35, 612]; such effects may be partially reduced when nonlinearities are taken into account, as described in [727];
• an increase in the number counts of massive clusters at high redshift [77];
• a scale-dependent bias between baryons and dark matter, which behave differently if only dark matter is coupled to dark energy [79, 75];
• less steep inner core halo profiles (depending on the interplay between the fifth force and velocity-dependent terms) [79, 76, 565, 562, 75];
• emptier voids when a coupling is active [80].
As discussed in subsection 1.6.1, when a variable coupling β(ϕ) is active, the relative balance of the fifth force and the other dynamical effects depends on the specific time evolution of the coupling strength. Under such conditions, certain cases may even lead to the opposite effect of larger halo inner overdensities and higher concentrations, as for a steeply growing coupling function [see 76]. Alternatively, the coupling can be introduced by directly choosing a covariant stress-energy tensor, treating dark energy as a fluid in the absence of a starting action [619, 916, 193, 794, 915, 613, 387, 192, 388]. For an illustration of nonlinear effects in the presence of a coupling, see Section 1.6.
3.12 Ultralight scalar fields
Ultralight scalar fields arise generically in high-energy physics, most commonly as axions or other axion-like particles (ALPs). They are the pseudo-Goldstone bosons (PGBs) of spontaneously broken symmetries. Their mass is protected to all loop orders by a shift symmetry, which is only weakly broken, through nonperturbative effects, to give the fields a mass and a potential. Commonly these effects are presumed to be caused by instantons, as in the case of the QCD axion, but the potential can also be generated in other ways, giving potentials that are useful, for example, in the study of quintessence [705]. Here we consider a general scenario, motivated by the suggestions of [63] and [452], where an ultralight scalar field constitutes some fraction of the dark matter, and we make no detailed assumptions about its origin.
Axions arise generically in string theory [871]. They are similar to the well-known QCD axion [715, 873, 872, 315, 742, 866, 911, 2, 316, 908, 932], and their cosmology has been extensively studied [see, for example, 84]. String axions are the Kaluza-Klein zero modes of antisymmetric tensor fields, whose number is given by the number of closed cycles in the compact space: for example, a two-form such as B_{MN} has a number of zero modes equal to the number of closed two-cycles. In any realistic compactification giving rise to the Standard Model of particle physics, the number of closed cycles will typically be in the region of hundreds. Since such large numbers of these particles are predicted by string theory, we are motivated to look for their general properties and the resulting cosmological phenomenology.
There will be a small thermal population of ALPs, but the majority of the cosmological population will be cold and non-thermally produced. Production of cosmological ALPs proceeds by the vacuum realignment mechanism. When the Peccei-Quinn-like U(1) symmetry is spontaneously broken at the scale f_{a}, the ALP acquires a vacuum expectation value, the misalignment angle θ_{i}, uncorrelated across different causal horizons. However, provided that inflation occurs after symmetry breaking, and with a reheat temperature T ≲ f_{a}, the field is homogenized over our entire causal volume. This is the scenario we consider. The field θ is a PGB and evolves according to the potential U acquired at the scale μ. However, a light field is frozen at θ_{i} until the much later time when the mass overcomes the Hubble drag and the field begins to roll towards the minimum of the potential, in exact analogy to the minimum of the instanton potential restoring \({\mathcal C}{\mathcal P}\) invariance in the Peccei-Quinn mechanism for the QCD axion. Coherent oscillations about the minimum of U lead to the production of the weakly coupled ALPs, and it is the value of the misalignment angle that determines the cosmological density in ALPs [579, 431, 826].
The underlying shift symmetry restricts U to be a periodic function of θ for true axions, but since all couplings in the expansion are suppressed by the high scale f_{a}, and the specific form of U is model-dependent, we make the simplification of considering only the quadratic mass term as relevant in the cosmological setting, though some discussion of the effects of anharmonicities will be made. In addition, [705] have constructed non-periodic potentials in string theory.
Scalar fields with masses in the range 10^{−33} eV < m < 10^{−22} eV are also well-motivated dark-matter candidates independently of their predicted existence in string theory, and constitute what Hu has dubbed "fuzzy cold dark matter" (FCDM) [452]. The Compton wavelength of the particles associated with ultralight scalar fields, λ_{c} = 1/m in natural units, is of the size of galaxies or clusters of galaxies, so the uncertainty principle prevents localization of the particles on any smaller scale. This naturally suppresses the formation of structure and offers a simple solution to the problem of "cuspy halos" and to the large number of dwarf galaxies that are expected in the standard CDM picture but not observed. Sikivie has argued [827] that axion dark matter fits the observed caustics in the dark-matter profiles of galaxies, which cannot be explained by ordinary dust CDM.
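The relevant length scale is easy to evaluate; a rough numerical sketch of λ_c = ħ/(mc) for the lighter end of this mass window (numbers for orientation only):

```python
# Compton wavelength lambda_c = hbar/(m c) = (hbar c)/(m c^2) of an ultralight
# field, converted to Mpc. Rough orientation numbers only.

HBAR_C_EV_M = 1.9732e-7    # hbar * c in eV * m
M_PER_MPC = 3.0857e22      # meters per Mpc

def compton_mpc(m_ev):
    """Compton wavelength in Mpc of a particle of mass m (in eV)."""
    return HBAR_C_EV_M / m_ev / M_PER_MPC

print(round(compton_mpc(1e-30), 2))   # ~ 6.39 Mpc: cluster scales
print(round(compton_mpc(1e-33)))      # ~ 6395 Mpc: comparable to the horizon
```

For the heavier fields in the window, the astrophysically relevant scale is the (larger) de Broglie wavelength, set by virial velocities rather than c.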
The large phase-space density of ultralight scalar fields causes them to form Bose-Einstein condensates [see 828, and references therein] and allows them to be treated as classical fields in a cosmological setting. This could lead to many interesting, and potentially observable, phenomena, such as the formation of vortices in the condensate, which may affect halo mass profiles [829, 484], and black-hole superradiance [63, 64, 772], which could provide direct tests of the "string axiverse" scenario of [63]. In this summary we are concerned with the large-scale indirect effects of ultralight scalar fields on structure formation via the matter power spectrum, in a cosmology where a fraction f = Ω_{a}/Ω_{m} of the dark matter is made up of such a field. The remaining dark matter could be a mixture of any other components, but for simplicity we here assume it to be CDM, so that (1 − f)Ω_{m} = Ω_{C}.
If ALPs exist in the high-energy completion of the standard model of particle physics, and are stable on cosmological time scales, then [882] have argued that, regardless of the specifics of the model, on general statistical grounds we indeed expect a scenario where they make up an order-one fraction of the CDM, alongside the standard WIMP candidate of the lightest supersymmetric particle. It must be noted, however, that there are objections to a population of light fields in the context of inflation [605, 606]. The problem with these objections is that they make assumptions about what we mean by "fine-tuning" of fundamental physical theories, which is also related to the problem of finding a measure on the landscape of string theory and inflation models [see, e.g., 583], the so-called "Goldilocks problem." Addressing these arguments in any detail is beyond the scope of this summary.
 In conformal time and in the synchronous gauge, with scalar perturbation h as defined in [599], a scalar field with a quadratic potential evolves according to the following equations for the homogeneous component, ϕ_{0}(τ), and the first-order perturbation, \({\phi _1}\left({\tau, \vec k} \right)\):
$${\ddot \phi _0} + 2{\mathcal H}{\dot \phi _0} + {m^2}{a^2}{\phi _0} = 0,$$(2.12.5)
$${\ddot \phi _1} + 2{\mathcal H}{\dot \phi _1} + ({m^2}{a^2} + {k^2}){\phi _1} = -{1 \over 2}{\dot \phi _0}\dot h;$$(2.12.6)
 In cosmology we are interested in the growth of density perturbations in the dark matter, and in how they affect the expansion of the universe and the growth of structure. The energy-momentum tensor for a scalar field is
$${T^\mu}_\nu = {\phi ^{;\mu}}{\phi _{;\nu}} - {1 \over 2}({\phi ^{;\alpha}}{\phi _{;\alpha}} + 2V)\delta _{\;\nu}^\mu,$$(2.12.7)
which to first order in the perturbations has the form of a perfect fluid, so we find the density and pressure components in terms of ϕ_{0}, ϕ_{1}:
$${\rho _a} = {{{a^{-2}}} \over 2}\dot \phi _0^2 + {{{m^2}} \over 2}\phi _0^2,$$(2.12.8)
$$\delta {\rho _a} = {a^{-2}}{\dot \phi _0}{\dot \phi _1} + {m^2}{\phi _0}{\phi _1},$$(2.12.9)
$${P_a} = {{{a^{-2}}} \over 2}\dot \phi _0^2 - {{{m^2}} \over 2}\phi _0^2,$$(2.12.10)
$$\delta {P_a} = {a^{-2}}{\dot \phi _0}{\dot \phi _1} - {m^2}{\phi _0}{\phi _1},$$(2.12.11)
$$(\rho + P){\theta _a} = {a^{-2}}{k^2}{\dot \phi _0}{\phi _1};$$(2.12.12)
 The scalar field receives an initial value after symmetry breaking and at early times it remains frozen at this value by the Hubble drag. A frozen scalar field behaves as a cosmological constant; once it begins oscillating it will behave as matter. A field begins oscillating when$$H(t) < m;$$(2.12.13)
 Do oscillations begin in the radiation- or matter-dominated era? The scale factor at which oscillations begin, a_{osc}, is given by
$$\begin{array}{*{20}c} {{a_{{\rm{osc}}}} = {{\left({{{{t_{{\rm{eq}}}}} \over {{t_0}}}} \right)}^{1/6}}{{\left({{1 \over {m{t_0}}}} \right)}^{1/2}},\qquad m \gtrsim {{10}^{-27}}{\rm{eV}},} \\ {{a_{{\rm{osc}}}} = {{\left({{1 \over {m{t_0}}}} \right)}^{2/3}},\qquad m \lesssim {{10}^{-27}}{\rm{eV}};\quad \quad \quad \;} \\ \end{array}$$(2.12.14)
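A quick numerical sketch of this branch structure, assigning the radiation-era branch (the one carrying the t_eq factor) to the heavier masses; t_0 and a_eq below are rough fiducial numbers, not values taken from the text:

```python
# Numerical sketch of the a_osc branches (Eq. 2.12.14). The radiation-era
# branch is assumed to apply to the heavier masses; t0 and a_eq are rough
# fiducial numbers for illustration.

HBAR_EV_S = 6.582e-16          # hbar in eV s
T0_S = 4.35e17                 # age of the universe in s
A_EQ = 2.9e-4                  # scale factor at matter-radiation equality
TEQ_OVER_T0 = A_EQ ** 1.5      # a ~ t^(2/3) during matter domination

def a_osc(m_ev):
    """Scale factor at which H ~ m and the field starts to oscillate."""
    mt0 = m_ev / HBAR_EV_S * T0_S            # dimensionless combination m * t0
    if m_ev <= 1e-27:                        # light: oscillations start in matter era
        return (1.0 / mt0) ** (2.0 / 3.0)
    return TEQ_OVER_T0 ** (1.0 / 6.0) * (1.0 / mt0) ** 0.5   # radiation era

print(a_osc(1e-28) > A_EQ)   # True: self-consistent with the matter-era branch
print(a_osc(1e-26) < A_EQ)   # True: self-consistent with the radiation-era branch
```

The check confirms the internal consistency of the branch assignment: lighter fields start oscillating after equality, heavier ones before.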
 If oscillations begin in the matter-dominated era, then the epoch of equality will not be the same as that inferred from the matter density today. Only CDM contributes to the matter density at equality, so the scale factor of equality is given by
$${a_{{\rm{eq}}}} \simeq {{{\Omega _r}} \over {{\Omega _m}}}{1 \over {(1 - f)}};$$(2.12.15)
 The energy density today in such an ultralight field can be estimated from the time when oscillations set in, and depends on its initial value as
$${\Omega _a} = {1 \over 6}\;{\left({{1 \over {{t_0}}}} \right)^2}{\phi _0}{({t_i})^2},$$(2.12.16)
while fields that begin oscillating in the radiation era also acquire a mass dependence in the final density, Ω_{a} ∼ m^{1/2};
 In the context of generalized dark matter [447] we can see the effect of the Compton scale of these fields through the fluid dynamics of the classical field. The sound speed of a field with momentum k and mass m, at a time when the scale factor of the FLRW metric is a, is given by
$$\begin{array}{*{20}c} {c_s^2 = {{{k^2}} \over {4{m^2}{a^2}}},\quad k < 2ma,} \\ {c_s^2 = 1,\quad k > 2ma.\quad \quad \;\;} \\ \end{array}$$(2.12.17)
On large scales the pressure becomes negligible, the sound speed goes to zero, and the field behaves as ordinary dust CDM, collapsing under gravity to form structure. On small scales, however, set by λ_{c}, the sound speed becomes relativistic, suppressing the formation of structure;
 This scale-dependent sound speed will affect the growth of overdensities, so we ask: are the perturbations on a given scale at a given time relativistic? The scale
$${k_R} = ma(t)$$(2.12.18)
separates the two regimes. On small scales, k > k_{R}, the sound speed is relativistic, and structure formation is suppressed in modes that entered the horizon while relativistic.
 The time dependence of the scale k_{R} and the finite size of the horizon mean that suppression of structure formation accumulates on scales larger than k_{R}. For ultralight fields that began oscillating in the matter-dominated regime, we calculate that suppression of structure begins at a scale
$${k_m}\sim {\left({{m \over {{{10}^{-33}}\;{\rm{eV}}}}} \right)^{1/3}}\left({{{100\;{\rm{km}}\;{{\rm{s}}^{-1}}} \over c}} \right)\;h\;{\rm{Mp}}{{\rm{c}}^{-1}},$$(2.12.19)
which is altered to k_{m} ∼ m^{1/2} for heavier fields that begin oscillating in the radiation era [37];
 The suppression leads to steps in the matter power spectrum, whose size depends on f. Following [37], the amount of suppression can be estimated as
$$S(a) = {\left({{{{a_{{\rm{osc}}}}} \over a}} \right)^{2\left(1 - {1 \over 4}\left(-1 + \sqrt {25 - 24f}\right)\right)}}.$$(2.12.20)
As one would expect, a larger f gives rise to greater suppression of structure, as do lighter fields, which free-stream on larger scales.
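The two estimates above are direct to implement; a small sketch with illustrative input values:

```python
# Direct implementation of the suppression scale and suppression factor
# discussed in the text; plugged-in values are illustrative.

def k_m(m_ev):
    """Suppression scale in h/Mpc for fields oscillating in the matter era."""
    return (m_ev / 1e-33) ** (1.0 / 3.0) * (100.0 / 3.0e5)   # (100 km/s)/c

def suppression(a, a_osc, f):
    """Power-spectrum step S(a) = (a_osc/a)^(2(1 - (1/4)(-1 + sqrt(25 - 24 f))))."""
    exponent = 2.0 * (1.0 - 0.25 * (-1.0 + (25.0 - 24.0 * f) ** 0.5))
    return (a_osc / a) ** exponent

print(round(k_m(1e-24), 2))                # ~ 0.33 h/Mpc: quasi-linear scales today
print(suppression(1.0, 1e-2, 0.0))         # 1.0: no ultralight component, no step
print(suppression(1.0, 1e-2, 0.1) < 1.0)   # True: suppression grows with f
```

For f = 0 the exponent vanishes identically, so S = 1 and pure CDM growth is recovered, which is a useful sanity check on any implementation.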
Numerical solutions of the perturbation equations indeed show that the effect of ultralight fields on the growth of structure is approximately as expected, with steps appearing in the matter power spectrum. However, the fits become less reliable in some of the most interesting regimes: when the field begins oscillating around the epoch of equality, so that the suppression of structure occurs near the turnover of the power spectrum, and for the lightest fields, which are still undergoing the transition from cosmological-constant to matter-like behavior today [632]. These uncertainties are caused by the uncertainty in the background expansion during such an epoch. In both cases a change in the expansion rate away from the expectation of the simplest ΛCDM model is expected. During the matter and radiation eras the scale factor grows as a ∼ τ^{p}, and p can be altered away from the ΛCDM expectation by \({\mathcal O}\left({10} \right)\%\) by oscillations during the scalar-field transition, which can last over an order of magnitude of growth in the scale factor, before the expected behavior is recovered once the scalar field oscillates sufficiently rapidly and behaves as CDM.
The combined CMB and large-scale-structure likelihood analysis of [37] has shown that ultralight fields with masses around 10^{−30}–10^{−24} eV might account for up to 10% of the dark matter abundance.
3.12.1 Requirements
Ultralight fields are similar in many ways to massive neutrinos [37], the major difference being that their non-thermal production breaks the link between the scale of suppression, k_{m}, and the fraction of dark matter, f_{ax}, through the dependence of f_{ax} on the initial field value ϕ_{i}. Therefore, an accurate measurement of the matter power spectrum in the low-k region, where massive neutrinos at the WMAP limits on Ω_{ν} are expected to suppress structure, will determine whether the expected relationship between Ω_{ν} and k_{m} holds. These measurements will limit the abundance of ultralight fields that begin oscillating in the matter-dominated era.
Another powerful test of the possible abundance of ultralight fields beginning oscillations in the matter era will be an accurate measurement of the position of the turnover in the matter power spectrum, since this gives a handle on the species present at equality. Ultralight fields with masses such that they begin oscillations in the radiation-dominated era may suppress structure at scales where the BAO are relevant, and thus distort them. An accurate measurement of the BAO that fits the profile in P (k) expected from standard ΛCDM would place severe limits on ultralight fields in this mass regime.
Finally, the expected suppression of structure caused by ultralight fields should be properly taken into account in N-body simulations. The nonlinear regime of P (k) needs to be explored further, both analytically and numerically, for cosmologies containing exotic components such as ultralight fields, especially to constrain those fields heavy enough that k_{ m } falls around the scale where nonlinearities become significant, i.e., those that begin oscillating deep inside the radiation-dominated regime. For lighter fields the effects in the nonlinear regime should be well modelled by using the linear P (k) as N-body input and shifting the other variables, such as Ω_{ c }, accordingly.
3.13 Dark-matter surrogates in theories of modified gravity
3.13.1 Extra fields in modified gravity
The idea that the dark universe may be a signal of modified gravity has led to the development of a plethora of theories: polynomials in curvature invariants, preferred reference frames, UV and IR modifications, and extra dimensions all lead to significant modifications of the gravitational sector. A universal feature that seems to emerge in such theories is the existence of fields that may serve as a proxy for dark matter. This should not be unexpected: on a case-by-case basis, one can see that modifications of gravity generically lead to extra degrees of freedom.
For example, polynomials in curvature invariants lead to higher-derivative theories, which inevitably imply extra (often unstable) solutions that can play the role of dark matter. This becomes patently obvious when mapping such theories onto the Einstein frame with an additional scalar field (scalar-tensor theories). Einstein-Aether theories [989] explicitly introduce an extra timelike vector field. The timelike constraint gives the vector a non-vanishing background value, modifying the background expansion; perturbations in the vector field can, under certain conditions, grow, mimicking the effect of pressureless dark matter. The vector field plays the same role in TeVeS [117], where two extra fields are introduced to modify the gravitational dynamics. The same effects come into play in bigravity models [83], where two metrics are explicitly introduced: the scalar modes of the second metric can play the role of dark matter.
In what follows we briefly focus on three of the above cases in which extra gravitational degrees of freedom play the role of dark matter: Einstein-Aether models, TeVeS models and bigravity models. We will look at the Einstein-Aether model more carefully, and then briefly discuss the other two cases.
3.13.2 Vector dark matter in Einstein-Aether models
As we have seen in a previous section, Einstein-Aether models introduce a timelike vector field A^{ a } into the gravitational dynamics. The four-vector A^{ a } can be expanded as \({A^\mu} = \left({1 + \epsilon X, \epsilon {\partial ^j}Z} \right) = \left({1 + \epsilon X,{\epsilon \over {{a^2}}}{\partial _j}Z} \right)\) [989]. In Fourier space we have \({A^\mu} = \left({1 - \epsilon \Psi, i{\epsilon \over a}{k_j}V} \right)\), where, for computational convenience, we have defined V ≡ Z/a and used the fact that the constraint fixes X = −Ψ.
3.13.3 Scalar and tensors in TeVeS
We have already come across the effect of the extra fields of TeVeS. Recall that in TeVeS, as well as a metric (tensor) field, there is a timelike vector field and a scalar field, both of which map the two frames onto each other. While at the background level the extra fields modify the overall dynamics, they do not contribute significantly to the overall energy density. This is not so at the perturbative level. The field equations for the scalar modes of all three fields can be found in the conformal Newtonian gauge in [841]. While the perturbations in the scalar field have a negligible effect, the spacelike perturbation in the vector field has an intriguing property: it leads to growth. [318] have shown that the growing vector field feeds into the Einstein equations and gives rise to a growing mode in the gravitational potentials and in the baryon density. The baryons are thus aided by the vector field, producing an effect akin to that of pressureless dark matter. This closely parallels the role of the vector field in Einstein-Aether models; in fact, it is possible to map TeVeS models onto a specific subclass of Einstein-Aether models, so the discussion above for Einstein-Aether scenarios carries over to TeVeS.
3.13.4 Tensor dark matter in models of bigravity
In bigravity theories [83], one considers two metrics: a dynamical metric g_{ μν } and a background metric \({\tilde g_{\alpha \beta}}\). As in TeVeS, the dynamical metric is used to construct the energy-momentum tensor of the non-gravitational fields and defines the geodesic equations of test particles. The equations governing its evolution are usually not the Einstein field equations, but may be defined in terms of the background metric.
3.14 Outlook
Dark matter dominates the matter content of the universe, and only through astrophysical and cosmological observations can the nature of dark matter on large scales be determined. In this review we have discussed a number of observational techniques available to Euclid: dark matter mapping; complementarity with other astronomical observations (e.g., X-ray and CMB experiments); cluster- and galaxy-scale dark matter halo mapping; and power spectrum analyses. The techniques described will allow Euclid to constrain a variety of dark matter candidates and their microphysical properties. We have discussed warm dark matter scenarios, axion-like dark matter, scalar field dark matter models (as well as possible interactions with dark energy and scattering with ordinary matter) and massive neutrinos (the only known component of dark matter). In summary:

- The weak lensing power spectrum from Euclid will be able to constrain the warm dark matter particle mass to about m_{WDM} > 2 keV [630];
- The galaxy power spectrum, with priors from Planck (primary CMB only), will yield an error on the sum of neutrino masses Σ of 0.04 eV (see Table 18; [211]);
- Euclid's weak lensing should also yield an error on Σ of 0.05 eV [507];
- [480] have shown that weak gravitational lensing from Euclid data will be able to determine the neutrino hierarchy (if Σ > 0.13 eV);
- The forecast errors on the effective number of neutrino species N_{ν,eff} for Euclid (with a Planck prior) are ±0.1 from weak lensing [507] and ±0.086 from galaxy clustering [211];
- The sound speed of unified dark energy-dark matter models can be constrained to ∼ 10^{−5} using 3D weak lensing [202];
- Recently, [633] showed that current and next-generation galaxy surveys alone should be able to unambiguously detect a fraction of dark matter in axions at the level of 1% of the total.
We envisage a number of future scenarios, all of which give Euclid an imperative to confirm or identify the nature of dark matter. In the event that a dark matter candidate is discovered in direct detection experiments or at an accelerator (e.g., the LHC), a primary goal for Euclid will be to confirm, or refute, the existence of this particle on large scales. If no direct discovery is made, astronomical observations will remain our only way to determine the nature of dark matter.
4 Initial Conditions
4.1 Introduction
The exact origin of the primordial perturbations that seeded the formation of the large-scale structure of the universe is still unknown. Our current understanding of the initial conditions is based on inflation, a phase of accelerated expansion preceding the standard evolution of the universe [416, 861, 863, 791]. In particular, inflation explains why the universe is so precisely flat, homogeneous and isotropic. During this phase, scales much smaller than the Hubble radius are inflated to super-horizon sizes, so that regions appearing today as causally disconnected were in fact very close in the past. This mechanism is also at the origin of the cosmic large-scale structure: vacuum quantum fluctuations of any light field present during inflation are amplified by the accelerated expansion and freeze out on super-Hubble scales, acquiring a quasi-scale-invariant spectrum [675, 425, 863, 417, 86].
From the early development of inflation, the simplest proposal, based on a weakly coupled single field rolling along its potential [576, 20], has gained strength, and many models have been built on this picture (see for instance [581] for a review). Although some inflationary potentials are now excluded by current data (see for instance [525]), this scenario has been extremely successful in passing many observational tests: it predicts perfectly adiabatic and almost Gaussian fluctuations with a quasi-scale-invariant spectrum and a small amount of gravitational waves.
While current data have ruled out some classes of inflationary models, the next qualitative step forward is investigating the physics responsible for inflation: we still lack a complete understanding of the high-energy physics describing it. Most likely the physics of inflation lies far out of reach of terrestrial experiments, at energies many orders of magnitude above the center-of-mass energy of the Large Hadron Collider (LHC). Thus, cosmological tests of inflation offer a unique opportunity to learn about ultra-high-energy physics, by targeting observations that directly probe the dynamics of inflation. One route is to accurately measure the shape of the primordial power spectrum of scalar perturbations produced during the phase of accelerated expansion, which is directly related to the shape of the inflaton potential, and to constrain the amplitude of the corresponding stochastic gravitational-wave background, which is related instead to the energy scale of inflation.
A complementary approach is offered by constraining, or exploring, how much the distribution of primordial density perturbations departs from Gaussian statistics and purely adiabatic fluctuations. Indeed, future large-scale structure surveys like Euclid can probe these features with unprecedented accuracy, thus providing a way to test aspects of inflationary physics that are not easily accessible otherwise. Non-Gaussianity is a very sensitive probe of self-couplings and interactions between the fields generating the primordial perturbations, whereas the presence of isocurvature modes can teach us about the number of fields present during inflation and their role in reheating and in generating the matter in the universe.
Furthermore, non-minimal scenarios, or proposals even radically different from single-field inflation, are still compatible with the data. In order to learn something about the physics of the early universe we need to rule out or confirm the conventional slow-roll scenario and possibly discriminate between non-conventional models. Non-Gaussianities and isocurvature perturbations currently represent the best tools we have to accomplish this task. Any deviation from conventional Gaussian and adiabatic initial perturbations would represent an important breakthrough in our understanding of the early universe. In this section we review what we can learn by constraining the initial conditions with a large-scale structure survey like Euclid.
4.2 Constraining inflation
The spectrum of cosmological perturbations represents an important source of information on the early universe. During inflation scalar (compressional) and tensor (purely gravitational) fluctuations are produced. The shape and the amplitude of the power spectrum of scalar fluctuations can be related to the dynamics of the inflationary phase, providing a window on the inflaton potential. Inflation generically predicts a deviation from a purely scaleinvariant spectrum. Together with future CMB experiments such as Planck, Euclid will improve our constraints on the scalar spectral index and its running, helping to pin down the model of inflation.
4.2.1 Primordial perturbations from inflation
4.2.2 Forecast constraints on the power spectrum
Instrument specifics for the Planck satellite with 30 months of integration.

Channel frequency (GHz)          70     100    143
Resolution (arcmin)              14     10     7.1
Sensitivity, intensity (μK)      8.8    4.7    4.1
Sensitivity, polarization (μK)   12.5   7.5    7.8
To produce the mock data we use a fiducial ΛCDM model with Ω_{ c }h^{2} = 0.1128, Ω_{ b }h^{2} = 0.022, h = 0.72, σ_{8} = 0.8 and τ = 0.09, where τ is the reionization optical depth. As mentioned above, we take the fiducial values for the spectral index, running and tensor-to-scalar ratio, defined at the pivot scale k_{*} = 0.05 Mpc^{−1}, as given by chaotic inflation with a quadratic potential, i.e., n_{ s } = 0.968, α_{ s } = 0 and r = 0.128. We have checked that for Planck data r is almost orthogonal to n_{ s } and α_{ s }, so our results are not sensitive to the fiducial value of r.
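These fiducial values follow from the standard slow-roll predictions for a quadratic potential, n_{ s } = 1 − 2/N and r = 8/N; a quick sanity check (the number of e-folds N ≈ 62.5 is inferred here from the quoted n_{ s }, not stated in the text):

```python
# Slow-roll predictions for chaotic inflation with V = (1/2) m^2 phi^2:
# epsilon = eta = 1/(2N), so n_s = 1 - 6*eps + 2*eta = 1 - 4*eps and r = 16*eps.
N = 62.5                       # e-folds before the end of inflation (inferred)
epsilon = 1.0 / (2.0 * N)
n_s = 1.0 - 4.0 * epsilon      # = 1 - 2/N = 0.968
r = 16.0 * epsilon             # = 8/N   = 0.128
```

Both numbers reproduce the fiducial values quoted above for the same N, which is the internal consistency one expects from a single-parameter potential.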
The fiducial Euclid spectroscopically selected galaxies are split into 14 redshift bins. The redshift ranges and expected numbers of observed galaxies per unit volume \({\bar n_{{\rm{obs}}}}\) are taken from [551] and shown in the third column of Table 3 in Section 1.8.2 (n_{2}(z)). The number density of galaxies that can be used is \(\bar n = \varepsilon {\bar n_{{\rm{obs}}}}\), where ε is the fraction of galaxies with measured redshift. The boundaries of the wavenumber range used in the analysis, k_{min} and k_{max}, vary in the ranges (0.00435–0.00334)h Mpc^{−1} and (0.16004–0.23644)h Mpc^{−1}, respectively, for 0.7 ≤ z ≤ 2. The IR cutoff k_{min} is chosen such that k_{min}r = 2π, where r is the comoving distance of the redshift slice. The UV cutoff is the smaller of \({H \over {{v_p}\left({1 + z} \right)}}\) and \({\pi \over {2R}}\), where R is chosen such that the r.m.s. linear density fluctuation of the matter field in a sphere of radius R is 0.5. In each redshift bin we use 30 k-bins uniform in ln k and 20 uniform μ-bins.
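The IR-cutoff prescription can be sketched numerically: k_min = 2π/r(z), with r(z) the comoving distance computed in a flat ΛCDM model (a simplified sketch with Ω_m = 0.271 assumed; the exact boundaries quoted above additionally reflect the detailed binning and fiducial choices of the analysis):

```python
import numpy as np

# IR cutoff k_min * r(z) = 2*pi, with r(z) the comoving distance in a flat
# LCDM model.  Units: Mpc/h, using c/H0 = 2997.9 Mpc/h.
Om = 0.271

def comoving_distance(z, n=4096):
    """Comoving distance in Mpc/h, by trapezoidal integration of 1/E(z)."""
    zz = np.linspace(0.0, z, n)
    f = 1.0 / np.sqrt(Om * (1.0 + zz) ** 3 + (1.0 - Om))
    return 2997.9 * np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(zz))

def k_min(z):
    return 2.0 * np.pi / comoving_distance(z)   # h/Mpc
```

With these assumptions k_min(0.7) ≈ 3.5 × 10^{−3} h Mpc^{−1}, decreasing toward higher redshift as the distance to the slice grows.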
For the fiducial values of the bias in each of the 14 redshift bins of width Δz = 0.1 in the range (0.7–2), we use those derived from [698], i.e., (1.083, 1.125, 1.104, 1.126, 1.208, 1.243, 1.282, 1.292, 1.363, 1.497, 1.486, 1.491, 1.573, 1.568), and we allow v_{ p } to vary independently in each redshift bin, with v_{ p } = 400 km/s as the fiducial value. We then marginalize over b and v_{ p } in the 14 redshift bins, for a total of 28 nuisance parameters.
Cosmological parameters: marginalized constraints from Planck alone and from Planck + Euclid.

Parameter            Planck constraint        Planck + Euclid constraint
Ω_{ b }h^{2}         0.02227 ± 0.00011        0.02227 ± 0.00008
Ω_{ c }h^{2}         0.1116 ± 0.0008          0.1116 ± 0.0002
θ                    1.0392 ± 0.0002          1.0392 ± 0.0002
τ_{re}               0.085 ± 0.004            0.085 ± 0.003
n_{ s }              0.966 ± 0.003            0.966 ± 0.002
α_{ s }              −0.000 ± 0.005           −0.000 ± 0.003
ln(10^{10}A_{ s })   3.078 ± 0.009            3.077 ± 0.006
r                    0.128 ± 0.018            0.127 (+0.019, −0.018)
Ω_{ m }              0.271 (+0.005, −0.004)   0.271 ± 0.001
σ_{8}                0.808 ± 0.005            0.808 ± 0.003
h                    0.703 ± 0.004            0.703 ± 0.001
A more extensive and in-depth analysis of the constraints on inflationary models that a survey like Euclid can provide is presented in [460]. In particular, they find that for models where the primordial power spectrum is not featureless (i.e., not simply a power law with a small running) a survey like Euclid will be crucial to detect and measure features. What we measure with the CMB is the angular power spectrum of the anisotropies in 2D multipole space, which is a projection of the power spectrum in 3D momentum space. Features at large ℓ and of small width in momentum space get smoothed in this projection, but this does not happen for large-scale structure surveys. The main limitation on the width of features measurable with large-scale structure comes from the size of the survey volume: the narrowest detectable feature is of the order of the inverse cubic root of this volume, and the error is determined by the number of modes contained in this volume. Euclid, with its large surveyed volume and the sheer number of sampled, cosmic-variance-dominated modes, offers a unique opportunity to probe inflationary models whose potential is not featureless. In addition, the increased statistical power would enable a Bayesian model selection on the space of inflationary models (e.g., [334, 691] and references therein).
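As a rough numerical illustration of the volume argument above (the effective volume of 20 (Gpc/h)³ and the cutoff k_max = 0.2 h/Mpc are assumed round numbers for illustration, not values given in the text):

```python
import numpy as np

# Order-of-magnitude version of the volume argument for feature detection.
V = 20.0e9        # assumed effective survey volume in (Mpc/h)^3, i.e., 20 (Gpc/h)^3
k_max = 0.2       # assumed smallest scale used, h/Mpc

# Narrowest resolvable feature width ~ 2*pi / V^(1/3), in h/Mpc
dk_feature = 2.0 * np.pi / V ** (1.0 / 3.0)

# Number of independent Fourier modes up to k_max (sets the statistical error)
n_modes = V * k_max ** 3 / (6.0 * np.pi ** 2)
```

With these assumptions the narrowest resolvable feature is a few × 10^{−3} h Mpc^{−1} wide, and the number of independent modes is of order a few million, which is what drives the statistical power relative to the CMB.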
4.3 Probing the early universe with non-Gaussianities
In this section we review the theoretical motivations for and implications of looking into primordial non-Gaussianity; less theoretically oriented readers can go directly to Section 4.4.
4.3.1 Local non-Gaussianity
Furthermore, [264] showed that, irrespective of slow-roll and of the particular inflaton Lagrangian or dynamics, in single-field inflation, or more generally when only adiabatic fluctuations are present, there exists a consistency relation involving the 3-point function of scalar perturbations in the squeezed limit (see also [806, 226, 227]). In this limit, when the short-wavelength modes are inside the Hubble radius during inflation, the long mode is far outside the horizon, and its only effect on the short modes is to rescale the unperturbed history of the universe. This implies that the 3-point function is simply proportional to the 2-point function of the long-wavelength mode times the 2-point function of the short-wavelength modes times their deviation from scale invariance. In terms of local non-Gaussianity this translates into the same \(f_{{\rm{NL}}}^{{\rm{local}}}\) found in [616]. Thus, a convincing detection of local non-Gaussianity would rule out all classes of single-field inflationary models.
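The consistency relation referred to here can be written explicitly in its standard form (e.g., [264]):

\[
\lim_{k_3 \to 0} \langle \zeta_{\mathbf{k}_1} \zeta_{\mathbf{k}_2} \zeta_{\mathbf{k}_3} \rangle
= (2\pi)^3 \delta^{(3)}(\mathbf{k}_1 + \mathbf{k}_2 + \mathbf{k}_3)\, (1 - n_s)\, P_\zeta(k_1)\, P_\zeta(k_3)\,,
\]

so that the effective local amplitude is \(f_{{\rm NL}}^{{\rm local}} = \frac{5}{12}\left(1 - n_s\right)\), which is slow-roll suppressed, of order 10^{−2} for n_{ s } ≃ 0.96.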
To evade the consistency relation and produce large local non-Gaussianity one can go beyond the single-field case and consider scenarios where a second field plays a role in generating perturbations. In this case, because of non-adiabatic fluctuations, scalar perturbations can evolve outside the horizon, invalidating the argument behind the consistency relation and possibly generating a large \(f_{{\rm{NL}}}^{{\rm{local}}}\), as in [582]. The curvaton scenario is one such mechanism. The curvaton is a light scalar field that acquires scale-invariant fluctuations during inflation and decays after inflation, but well before nucleosynthesis [661, 664, 598, 342]. If it comes to dominate the universe before decaying, it affects the expansion history and thus imprints its perturbations on super-horizon scales. The way the expansion history depends on the value of the curvaton field at the end of its decay can be highly nonlinear, leading to large non-Gaussianity. Indeed, the nonlinear parameter \(f_{{\rm{NL}}}^{{\rm{local}}}\) is inversely proportional to the curvaton abundance before the decay [597].
Models exist where both curvaton and inflaton fluctuations contribute to the cosmological perturbations [545]. Interestingly, curvaton fluctuations could be negligible in the 2-point function and yet detectable through their non-Gaussian signature in the 3-point function, as studied in [155]. We shall come back to this point when discussing isocurvature perturbations. Other models generating local non-Gaussianity are the so-called modulated reheating models, in which a light field modulates the decay of the inflaton [329, 515]. Indeed, non-Gaussianity could be a powerful window into the physics of reheating and preheating, the transition from inflation to the standard radiation-dominated era (see e.g., [148, 222]).
In the examples above only one field is responsible for the dynamics of inflation, while the others are spectators. When the inflationary dynamics is dominated by several fields along the ∼ 60 e-foldings of expansion from Hubble crossing to the end of inflation, we are truly in the multi-field case. For instance, a well-studied model is double inflation with two massive non-interacting scalar fields [739]. In this case, the overall expansion of the universe is affected by each of the fields while it is in slow-roll; thus, the final non-Gaussianity is slow-roll suppressed, as in single-field inflation [768, 19, 926].
Because the slow-roll conditions are enforced on the fields while they dominate the inflationary dynamics, it seems difficult to produce large non-Gaussianity in multi-field inflation; however, by tuning the initial conditions it is possible to construct models leading to an observable signal (see [191, 876]). Non-Gaussianity can also be generated at the end of inflation, where large-scale perturbations may depend nonlinearly on the non-adiabatic modes, especially if there is an abrupt change in the equation of state (see e.g., [126, 596]). Hybrid models [580], where inflation is ended by a tachyonic instability triggered by a waterfall field decaying to the true vacuum, are natural realizations of this mechanism [343, 88].
4.3.2 Shapes: what do they tell us?
As explained above, local non-Gaussianity is expected in models where nonlinearities develop outside the Hubble radius. However, this is not the only type of non-Gaussianity. Single-field models with derivative interactions yield a negligible 3-point function in the squeezed limit, yet can lead to observable non-Gaussianities: since the interactions contain time derivatives and gradients, they vanish outside the horizon and are unable to produce a signal in the squeezed limit, but correlations will be larger in other configurations, for instance between modes of comparable wavelength. In order to study the observational signatures of these models we need to go beyond the local case and study the shape of non-Gaussianity [70].
An example of a model containing large derivative interactions was proposed by [830, 25]. Based on the Dirac-Born-Infeld Lagrangian, \({\mathcal L} = -f{\left(\phi \right)^{-1}}\sqrt {1 - f\left(\phi \right)X} + f{\left(\phi \right)^{-1}} - V\left(\phi \right)\), with X = −g^{ μν }∂_{ μ }ϕ∂_{ ν }ϕ, it is called DBI inflation. This Lagrangian is string-theory motivated: ϕ describes the low-energy radial dynamics of a D3-brane in a warped throat, f (ϕ)^{−1} is the warped brane tension and V (ϕ) the interaction field potential. In this model the non-Gaussianity is dominated by derivative interactions of the field perturbations, so we do not need to take into account mixing with gravity. An estimate of the non-Gaussianity is given by the ratio of the third-order and second-order Lagrangians, \({{\mathcal L}_3}\) and \({{\mathcal L}_2}\), divided by the amplitude of scalar fluctuations. This gives \({f_{{\rm{NL}}}} \sim \left({{{\mathcal L}_3}/{{\mathcal L}_2}} \right){\Phi ^{-1}} \sim - 1/c_s^2\), where \(c_s^2 = {\left[ {1 + 2X\left({{\partial ^2}{\mathcal L}/\partial {X^2}} \right)/\left({\partial {\mathcal L}/\partial X} \right)} \right]^{-1}} < 1\) is the speed of sound of linear fluctuations, and we have assumed that this is small, as is the case for DBI inflation. Thus, the non-Gaussianity can be quite large if c_{ s } ≪ 1.
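The scaling f_NL ∼ −1/c_s² can be made concrete with a small numerical illustration; the O(1) coefficient used here, −35/108, is the commonly quoted equilateral result for DBI inflation and is model-dependent in general:

```python
# Order-of-magnitude non-Gaussianity from a small sound speed, using the
# standard DBI estimate f_NL ~ -(35/108) * (1/c_s^2 - 1).  The exact O(1)
# coefficient differs between single-field models with small c_s.
def f_nl_dbi(c_s):
    return -(35.0 / 108.0) * (1.0 / c_s**2 - 1.0)

f_slow = f_nl_dbi(1.0)   # canonical field, c_s = 1: no enhancement
f_dbi = f_nl_dbi(0.1)    # strongly non-canonical: |f_NL| ~ 1/c_s^2 ~ 30
```

For c_s = 0.1 this gives |f_NL| of order a few tens, while a canonical field (c_s = 1) produces no enhancement at all, which is why bispectrum constraints translate directly into lower bounds on c_s.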
The interplay between theory and observations, reflected in the relation between derivative interactions and the shape of non-Gaussianity, has motivated the study of inflation from a new perspective, the effective field theory of inflation ([228]; see also [951]). Inflationary models can be viewed as effective field theories in the presence of symmetries. Once the symmetries are defined, the Lagrangian contains every possible operator respecting them. As each operator leads to a particular non-Gaussian signal, constraining non-Gaussianity directly constrains the coefficients in front of these operators, similarly to what is done in high-energy physics with particle accelerators. For instance, the operator \({{\mathcal L}_3}\) discussed in the context of DBI inflation leads to non-Gaussianity controlled by the speed of sound of linear perturbations, and this operator can be quite generic in single-field models. Current constraints on non-Gaussianity allow the speed of sound of the inflaton field during inflation to be constrained to c_{ s } ≥ 0.01 [228, 814]. Another well-studied example is ghost inflation [57], based on ghost condensation, a model proposed by [56] to modify gravity in the infrared. This model is motivated by a shift symmetry and exploits the fact that, in the limit where this symmetry is exact, higher-derivative operators play an important role in the dynamics, generating large non-Gaussianity with an approximately equilateral shape.
This approach has made it possible to construct operators, or combinations of operators, leading to new shapes orthogonal to the equilateral one. An example is the orthogonal shape proposed in [814], generated by a particular combination of two operators already present in DBI inflation. It peaks both on equilateral-triangle configurations and on flattened-triangle configurations (where the two lowest-k sides are exactly equal to half of the highest-k side), with opposite signs in the two limits. The orthogonal and equilateral shapes are not an exhaustive list. For instance, [258] have shown that the presence in the inflationary theory of an approximate Galilean symmetry (proposed by [686] in the context of modified gravity) generates third-order operators with two derivatives on each field, and a particular combination of these operators produces a shape that is approximately orthogonal to the three shapes discussed above.
Non-Gaussianity is also sensitive to deviations of the inflaton fluctuations from the initial adiabatic Bunch-Davies vacuum. Indeed, considering excited states above it, as done in [226, 444, 653], leads to a shape which is maximized in the collinear limit, corresponding to enfolded or squashed triangles in momentum space, although one can show that this shape can be written as a combination of the equilateral and orthogonal ones [814].
4.3.3 Beyond shapes: scale dependence and the squeezed limit
There is, however, a way to generate large non-Gaussianity in single-field inflation: one can temporarily break scale invariance, for instance by introducing features in the potential as in [225]. This can lead to large non-Gaussianity, typically associated with scale dependence. Such signatures could even teach us something about string theory. In axion monodromy, a model recently proposed by [831] based on a particular string compactification mechanism, the inflaton potential is approximately linear but periodically modulated; these modulations lead to tiny oscillations in the power spectrum of cosmological fluctuations and to large non-Gaussianity (see for instance [366]).
This is not the only example of scale dependence. While the amplitude of the non-Gaussian signal is generally taken to be constant, several models besides the above example predict a scale dependence. For example, models like Dirac-Born-Infeld (DBI) inflation, e.g., [25, 223, 224, 111], can be characterized by a primordial bispectrum whose amplitude varies significantly over the range of scales accessible to cosmological probes.
In view of measurements, it is also worth considering the so-called squeezed limit of non-Gaussianity, the limit in which one of the momenta is much smaller than the other two. Observationally, this is because some probes (for example the halo bias, Section 4.4.2, accessible to large-scale structure surveys like Euclid) are sensitive to this limit. Most importantly, from the theoretical point of view, there are consistency relations valid in this limit that identify different classes of inflation, e.g., [262, 257] and references therein.
The scale dependence of non-Gaussianity, the shapes of non-Gaussianity and the behavior of the squeezed limit are all promising avenues, where the combination of CMB data and large-scale structure surveys such as Euclid can provide powerful constraints, as illustrated, e.g., in [809, 690, 807].
4.3.4 Beyond inflation
As explained above, the search for non-Gaussianity could represent a unique way to rule out the simplest inflationary models and distinguish between different scenarios of inflation. Interestingly, it could also open a window on new scenarios, alternative to inflation. There have been numerous attempts to construct models alternative to inflation that can explain the initial conditions of our universe. In order to solve the cosmological problems and generate large-scale primordial fluctuations, most of them require a phase during which today's observable scales exited the Hubble radius. This can happen in bouncing cosmologies, in which the present era of expansion is preceded by a contracting phase. Examples are the pre-big bang [385] and the ekpyrotic scenario [498].
In the latter, the 4d effective dynamics corresponds to a cosmology driven by a scalar field with a steep exponential potential V (ϕ) = exp(−cϕ), with c ≫ 1. Leaving aside the problem of realizing the bounce, it has been shown that the adiabatic mode in this model generically leads to a steep blue spectrum for the curvature perturbations [595, 261]. Thus, at least a second field is required to generate an almost scale-invariant spectrum of perturbations [365, 263, 182, 528]. If two fields are present, both with exponential potentials and steepness coefficients c_{1} and c_{2}, the non-adiabatic component has negative mass and acquires a quasi-scale-invariant spectrum of fluctuations with tilt \({n_s} - 1 = 4\left({c_1^{-2} + c_2^{-2}} \right)\), with c_{1}, c_{2} ≫ 1. One then needs to convert the non-adiabatic fluctuation into a curvature perturbation, similarly to what the curvaton mechanism does.
As the Hubble rate increases during the collapse, one expects nonlinearities in the fields to become more and more important, leading to non-Gaussianity in the produced perturbations. As the nonlinearities grow large on super-Hubble scales, one expects the signal to be of the local type. The precise amplitude of the non-Gaussianity in the observable curvature perturbations depends on the mechanism that converts the non-adiabatic mode into the observable perturbations. The tachyonic instability itself can lead to a phase transition to an ekpyrotic phase dominated by just one field, ϕ_{1}. In this case [527] have found that \(f_{{\rm{NL}}}^{{\rm{local}}} = - \left({5/12} \right)c_1^2\). Current constraints on \(f_{{\rm{NL}}}^{{\rm{local}}}\) (the WMAP 7-year data impose \(- 10 < f_{{\rm{NL}}}^{{\rm{local}}} < 74\) at 95% confidence) then imply an unacceptably large value of the scalar spectral index: even for f_{NL} = −10, one finds c_{1} ≃ 5, which gives n_{ s } − 1 ≥ 0.17, excluded by observations (recall that the WMAP 7-year data give n_{ s } = 0.963 ± 0.014 at 68% confidence). Thus, one needs to modify the potential to accommodate a red spectrum, or to consider alternative conversion mechanisms that change the value of the generated non-Gaussianity [183, 554].
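The numbers in this argument follow directly from the two formulas quoted: the WMAP bound on f_NL^local fixes the largest allowed c_1, and the tilt formula then bounds the spectral index from below:

```python
import math

# f_NL^local = -(5/12) c_1^2  =>  c_1 = sqrt(-12 f_NL / 5)
f_nl = -10.0                          # most negative value allowed by WMAP7
c1 = math.sqrt(-12.0 * f_nl / 5.0)    # = sqrt(24) ~ 4.9

# n_s - 1 = 4 (c_1^-2 + c_2^-2) >= 4 / c_1^2, since c_2^-2 > 0
tilt_min = 4.0 / c1**2                # = 1/6 ~ 0.17, a blue tilt excluded by data
```

The lower bound on the tilt is independent of c_2, which is why the conclusion cannot be evaded by tuning the second field's steepness.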
4.4 Primordial non-Gaussianity and large-scale structure
As we have seen, even the simplest inflationary models predict deviations from Gaussian initial conditions. Confirming or ruling out the simplest inflationary model is an important goal, and in this section we show how Euclid can help achieve it. Moreover, Euclid data (alone or in combination with CMB experiments like Planck) can be used to explore the primordial bispectrum and thus the interactions of the fields during inflation.
4.4.1 Constraining primordial non-Gaussianity and gravity from 3-point statistics
From the observational point of view, it is important to note that photometric surveys are not well suited for extracting a primordial signal out of the galaxy bispectrum. Although they can in general cover larger volumes than spectroscopic surveys, the projection effects due to the photo-z smearing along the line of sight are expected to significantly suppress the sensitivity of the measured bispectrum to the shape of the primordial one (see, e.g., [921]). [808] have shown that, if the evolution of the bias parameters is known a priori, spectroscopic surveys like Euclid would be able to give constraints on the f_{NL} parameter that are competitive with CMB studies. While the gravitationally-induced non-Gaussian signal in the bispectrum has been detected to high statistical significance (see, e.g., [925, 533] and references therein), the identification of nonlinear biasing (i.e., b_{2} ≠ 0) is still controversial, and there has so far been no detection of any extra (primordial) bispectrum contribution.
Of course, one could also consider higher-order correlations. One advantage of considering, e.g., the trispectrum is that, contrary to the bispectrum, it has very weak nonlinear growth [920]; the disadvantage is that the signal is delocalized: the number of possible configurations grows rapidly with the dimensionality n of the n-point function.
Finally, it has been proposed to measure the level of primordial non-Gaussianity using Minkowski functionals applied either to the galaxy distribution or to weak-lensing maps (see, e.g., [433, 679] and references therein). The potential of this approach compared to more traditional methods needs to be explored further in the near future.
4.4.2 Non-Gaussian halo bias
The discussion above neglects an important fact that went unnoticed until 2008: the presence of even a small non-Gaussianity can have a large effect on the clustering of dark matter halos [272, 647]. The argument goes as follows. The clustering of the peaks in a Gaussian random field is completely specified by the field's power spectrum. Thus, assuming that halos form out of linear density peaks, for Gaussian initial conditions the clustering of dark matter halos is completely specified by the linear matter power spectrum. For a non-Gaussian field, on the other hand, the clustering of the peaks depends on all higher-order correlations, not just on the power spectrum. Therefore, for non-Gaussian initial conditions, the clustering of dark matter halos depends on the linear bispectrum (and higher-order moments).
One can also understand the effect in the peak-background-split framework: overdense patches of the (linear) universe collapse to form dark matter halos if their overdensity lies above a critical collapse threshold. Short-wavelength modes define the overdense patches, while the long-wavelength modes determine the spatial distribution of the collapsing ones by modulating their height above and below the critical threshold. In the Gaussian case, long- and short-wavelength modes are uncorrelated, yielding the well-known linear, scale-independent peak bias. In the non-Gaussian case, however, long- and short-wavelength modes are coupled, yielding a different spatial pattern of regions that cross the collapse threshold.
The presence of this effect is extremely important for observational studies, as it makes it possible to detect primordial non-Gaussianity from 2-point statistics of the galaxy distribution such as the power spectrum. Combining current LSS data gives constraints on f_{NL} which are comparable to the CMB ones [842, 971]. Similarly, planned galaxy surveys are expected to progressively improve upon existing limits [210, 209, 393]. For example, Euclid could reach an error on f_{NL} of ∼ 5 (see below for further details), which is comparable with the BPol forecast errors.
The scale dependence of the halo bias changes for different shapes of primordial non-Gaussianity [799, 933]. For instance, orthogonal and folded models produce an effective bias that scales as k^{−1}, while the scale dependence becomes extremely weak for equilateral models. Therefore, measurements of the galaxy power spectrum on the largest possible scales can constrain the shape and the amplitude of primordial non-Gaussianity and thus shed new light on the dynamics of inflation.
On scales comparable with the Hubble radius, matter and halo clustering are affected by general-relativity effects: the Poisson equation gets a quadratic correction that acts effectively as a non-zero local f_{NL} [94, 731]. This contribution is peculiar to the inflationary initial conditions because it requires perturbations on super-horizon scales, and it is mimicked in the halo bias by a local f_{NL} = −1.6 [922]. This is at a level detectable by a survey like Euclid.
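The scale-dependent bias correction discussed in this subsection can be sketched numerically. The snippet below implements the standard local-type formula Δb(k) = 3 f_NL (b − 1) δ_c Ω_m H_0² / [c² k² T(k) D(z)] in the spirit of [272, 647]; the fiducial numbers, the T(k) → 1 approximation on very large scales, and the growth-factor normalization are illustrative assumptions, not Euclid specifications:

```python
import numpy as np

C_KMS = 299792.458  # speed of light [km/s]

def delta_b(k, f_nl, b_gauss, growth, omega_m=0.27, h0=70.0,
            delta_c=1.686, transfer=1.0):
    """Scale-dependent halo-bias correction Delta b(k) for local-type f_NL.

    k        : wavenumber [1/Mpc]
    b_gauss  : Gaussian (large-scale) halo bias
    growth   : linear growth factor D(z), normalized to (1+z)^-1 in
               matter domination (a common convention for this formula)
    transfer : matter transfer function T(k), ~1 on very large scales
    """
    return (3.0 * f_nl * (b_gauss - 1.0) * delta_c * omega_m
            * (h0 / C_KMS) ** 2 / (k ** 2 * transfer * growth))

k = np.array([1e-3, 1e-2])   # 1/Mpc: very large scales
db = delta_b(k, f_nl=10.0, b_gauss=2.0, growth=0.5)
print(db[0] / db[1])         # ~ 100: the correction scales as k^-2
```

The k^{−2} scaling is what makes the very largest scales of the Euclid survey the most sensitive to local-type f_{NL}.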
4.4.3 Number counts of nonlinear structures
Even a small deviation from Gaussianity in the initial conditions can have a strong impact on those statistics which probe the tails of the linear density distribution. This is the case for the abundance of the most extreme nonlinear objects existing at a given cosmic epoch, massive dark matter halos and voids, as they correspond to the highest and lowest density peaks (the rarest events) in the underlying linear density field.
Thus small values of f_{NL} are potentially detectable by measuring the abundance of massive dark matter halos as traced by galaxies and galaxy clusters at z ≳ 1 [649]. This approach has recently received renewed attention (e.g., [591, 407, 730, 609, 275, 918, 732] and references therein) and might represent a promising tool for Euclid science. In Euclid, galaxy clusters at high redshift can be identified either by lensing studies or by building group catalogs based on the spectroscopic and photometric galaxy data. The main challenge here is to determine the corresponding halo mass with sufficient accuracy to allow comparison with the theoretical models.
While galaxy clusters form at the highest overdensities of the primordial density field and probe the high-density tail of the PDF, voids form in the low-density regions and thus probe the low-density tail of the PDF. Most of the volume of the evolved universe is underdense, so it seems interesting to pay attention to the distribution of underdense regions. The derivation of the non-Gaussian void probability function proceeds in parallel to the treatment for halos, with the only subtlety that the critical threshold is now negative and that its numerical value depends on the precise definition of a void (and may depend on the observables used to find voids), e.g., [490]. Note that while a positive skewness (f_{NL} > 0) boosts the number of halos at the high-mass end (and slightly suppresses the number of low-mass halos), it is a negative skewness that will increase the void size distribution at the largest-void end (and slightly decrease it for small void sizes). In addition, voids may probe slightly larger scales than halos, making the two approaches highly complementary.
Even though a number of observational techniques to detect voids in galaxy surveys have been proposed (see, e.g., [247] and references therein), the challenge here is to match the theoretical predictions to a particular void-identification criterion based on a specific galaxy sample. We envision that mock galaxy catalogs based on numerical simulations will be employed to calibrate these studies for Euclid.
4.4.4 Forecasts for Euclid
A number of authors have used the Fisher-matrix formalism to explore the potential of Euclid in determining the level and the shape of primordial non-Gaussianity [210, 209, 393]. In what follows, unless specifically mentioned, we will focus on the local type of non-Gaussianity, which has been the most widely studied so far.
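To illustrate the Fisher-matrix formalism behind these forecasts, the toy example below computes a one-parameter forecast error from a handful of band-power measurements. All numbers are invented for illustration and are not the Euclid survey values; in the real analyses the full multi-parameter matrix is built and the marginalized error is the square root of the corresponding diagonal element of the inverse matrix:

```python
import numpy as np

# Toy Fisher forecast for a single parameter theta from band-power
# measurements P_i with Gaussian errors sigma_i:
#   F = sum_i (dP_i/dtheta)^2 / sigma_i^2,   sigma(theta) = 1/sqrt(F)

def fisher_1param(dP_dtheta, sigma_P):
    """Fisher information for one parameter from independent band powers."""
    deriv = np.asarray(dP_dtheta, dtype=float)
    err = np.asarray(sigma_P, dtype=float)
    return np.sum((deriv / err) ** 2)

# e.g., 5 band powers whose response to f_NL grows toward large scales (~k^-2);
# all values are illustrative placeholders
dP_dfnl = np.array([8.0, 2.0, 0.5, 0.12, 0.03])   # dP/df_NL, arbitrary units
sigma_P = np.array([6.0, 1.5, 0.5, 0.2, 0.1])     # band-power errors

F = fisher_1param(dP_dfnl, sigma_P)
print(1.0 / np.sqrt(F))   # forecast 1-sigma error on f_NL
```

In the tables below the same logic is applied with eight cosmological parameters plus nuisance parameters, so the quoted errors are marginalized rather than single-parameter errors.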
                                      Photometric survey   Spectroscopic survey
Surveyed area (deg^{2})               15,000               15,000
Galaxy density (arcmin^{−2})          30                   1.2
Median redshift                       0.8                  1.0
Number of redshift bins               12                   12
Redshift uncertainty σ_{z}/(1 + z)    0.05                 0.001
Intrinsic ellipticity noise γ         0.247                –
Gaussian linear bias param.           \(\sqrt {1 + z}\)    \(\sqrt {1 + z}\)
Table 22: Forecast 1σ errors for the nonlinearity parameter f_{NL} based on two-point statistics (power spectra) of the Euclid redshift and weak-lensing surveys. Results are obtained using the Fisher-matrix formalism, marginalizing over eight cosmological parameters (Ω_{Λ}, Ω_{m}, Ω_{b}, h, n_{s}, σ_{8}, w_{0}, w_{a}) plus a large number of nuisance parameters that account for galaxy biasing, nonlinear redshift-space distortions and shot noise (see [393] for details). Results within parentheses include the forecast priors on the cosmological parameters from the power spectrum of CMB temperature anisotropies measured with the Planck satellite (note that no prior is assumed on f_{NL}). The label “Galaxy clustering” refers to the anisotropic power spectrum P(k_{∥}, k_{⊥}) for spectroscopic data and to the angular power spectrum C_{ℓ} for photometric data. The combined analysis of clustering and lensing data is based on angular power spectra and includes all possible cross-correlations between different redshift bins and probes. Nonlinear power spectra are computed using the halo model. This introduces possible inaccuracies in the forecasts for weak-lensing data in the equilateral and orthogonal shapes (see main text for details).
Bispectrum shape                 local        orthogonal   equilateral
Fiducial f_NL                    0            0            0
Galaxy clustering (spectr. z)    4.1 (4.0)    54 (11)      220 (35)
Galaxy clustering (photom. z)    5.8 (5.5)    38 (9.6)     140 (37)
Weak lensing                     73 (27)      9.6 (3.5)    34 (13)
Combined                         4.7 (4.5)    4.0 (2.2)    16 (7.5)
The amplitude and shape of the matter power spectrum in the mildly nonlinear regime depend (at the level of a few per cent) on the level of primordial non-Gaussianity [877, 730, 392]. Measuring this signal with the Euclid weak-lensing survey gives Δf_{NL} ≃ 70 (30 with Planck priors) [393]. On the other hand, counting nonlinear structures in terms of peaks in the weak-lensing maps (convergence or shear) should give limits in the same ballpark ([627] find Δf_{NL} = 13 assuming perfect knowledge of all the cosmological parameters).
Finally, by combining the clustering and lensing angular power spectra (and accounting for all possible cross-correlations) one should achieve Δf_{NL} ≃ 5 (4.5 with Planck priors) [393]. This matches what is expected from both the Planck mission and the proposed BPol satellite.
Note that the forecast errors on f_{NL} are somewhat sensitive to the assumed fiducial values of the galaxy bias. In our study we have adopted the approximation \(b\left(z \right) = \sqrt {1 + z}\) [753]. On the other hand, using semianalytic models of galaxy formation, [698] found bias values which are nearly 10–15% lower at all redshifts. Adopting this slightly different bias, the constraint on f_{NL} already degrades by 50% with respect to our fiducial case.
                                 \(\Delta f_{{\rm{NL}}}^{({\rm{piv}})}\)   \(\Delta {n_{{f_{{\rm{NL}}}}}}\)
Galaxy clustering (spectr. z)    9.3 (7.2)                                 0.28 (0.21)
Galaxy clustering (photom. z)    25 (18)                                   0.38 (0.26)
Weak lensing                     134 (82)                                  0.66 (0.59)
Combined                         8.9 (7.4)                                 0.18 (0.14)
Finally, we briefly comment on how well Euclid data could constrain the amplitude of forms of primordial non-Gaussianity other than the local one. In particular, we consider the equilateral and orthogonal shapes introduced in Section 3.3.2. Table 22 summarizes the resulting constraints on the amplitude of the primordial bispectrum, f_{NL}. The forecast errors from galaxy clustering grow progressively larger as one moves from the local to the orthogonal and finally to the equilateral model. This reflects the fact that the scale-dependent part of the galaxy bias for k → 0 approximately scales as k^{−2}, k^{−1}, and k^{0} for the local, orthogonal, and equilateral shapes, respectively [799, 933, 802, 305, 306]. On the other hand, the lensing constraints (which, in this case, come from the very nonlinear scales) appear to be much stronger for the non-local shapes. A note of caution is in order here. In [393], the nonlinear matter power spectrum is computed using a halo model which has been tested against N-body simulations only for non-Gaussianity of the local type.^{17} In consequence, the weak-lensing forecasts might be less reliable than in the local case (see the detailed discussion in [393]). This does not apply to the forecasts based on galaxy clustering, which are robust as they are based on the scale dependence of the galaxy bias on very large scales.
4.4.5 Complementarity
The CMB bispectrum is very sensitive to the shape of non-Gaussianity; halo bias and the mass function, the most promising approaches to constraining f_{NL} with a survey like Euclid, are much less so. However, it is the complementarity between CMB and LSS that matters. One can envision several scenarios. If non-Gaussianity is local with negative f_{NL} and the CMB obtains a detection, then the halo-bias approach should also give a high-significance detection (the GR correction and the primordial contribution add up); if it is local but with positive f_{NL}, the halo-bias approach could give a lower statistical significance, since the GR correction has the opposite sign. If the CMB detects f_{NL} at the level of 10 with a form close to local, but halo bias does not detect it, then the CMB bispectrum is produced by secondary effects (e.g., [620]). If the CMB detects non-Gaussianity that is not of the local type, then halo bias can help discriminate between the equilateral and enfolded shapes: a halo-bias signal indicates the enfolded type, while no signal indicates the equilateral type. Thus even a non-detection of the halo-bias effect, in combination with CMB constraints, can have important discriminating power.
4.5 Isocurvature modes
At some time well after inflation but deep in the radiation era, the universe is filled with several components. For instance, in the standard picture right before recombination there are four components: baryons, cold dark matter, photons and neutrinos. One can study the distribution of super-Hubble fluctuations between the different species, which represents the initial conditions for the subsequent evolution. So far we have mostly investigated adiabatic initial conditions; in this section we explore more generally the possibility of isocurvature initial conditions. Although CMB data are the most sensitive probe of isocurvature perturbations, we discuss here the impact on Euclid results.
4.5.1 The origin of isocurvature perturbations
A sufficient condition for having purely adiabatic perturbations is that all the components in the universe were created by a single degree of freedom, such as during reheating after single-field inflation.^{19} Even if inflation has been driven by several fields, thermal equilibrium may erase isocurvature perturbations if it is established before any non-zero conserved quantum number was created (see [950]). Thus, a detection of non-adiabatic fluctuations would imply that several scalar fields were present during inflation and that either some of the species were not in thermal equilibrium afterwards or that some non-zero conserved quantum number was created before thermal equilibrium.
The presence of many fields is not unexpected. Indeed, in all extensions of the Standard Model scalar fields are rather ubiquitous. In particular, in string theory dimensionless couplings are functions of moduli, i.e., of scalar fields describing the compactification. Another reason to consider the relevant role of a second field besides the inflaton is that this can allow one to circumvent the necessity of slow roll (see, e.g., [331]), enlarging the space of inflationary models.
Departure from thermal equilibrium is one of the necessary conditions for the generation of the baryon asymmetry and thus of the matter in the universe. Interestingly, the oscillation and decay of a scalar field require departure from thermal equilibrium. Thus, the baryon asymmetry can be generated by this process; examples are the decay of a right-handed sneutrino [419] or the Affleck–Dine scenario [10]. If the source of the baryon-number asymmetry in the universe is the condensation of a scalar field after inflation, one expects the generation of baryon isocurvature perturbations [664]. This scalar field can also totally or partially generate adiabatic density perturbations through the curvaton mechanism.
In summary, given our ignorance about inflation, reheating, and the generation of matter in the universe, a discovery of the presence of isocurvature initial conditions would have radical implications on both the inflationary process and on the mechanisms of generation of matter in the universe.
Let us concentrate on the non-adiabatic perturbation between cold dark matter (or baryons, which are also non-relativistic) and radiation, \({\mathcal S} = {{\mathcal S}_{{\rm{cdm}},\gamma}}\). Constraints on the amplitude of the non-adiabatic component are given in terms of the parameter α, defined at a given scale k_{0} by \({P_{\mathcal S}}/{P_\zeta} \equiv \alpha /\left({1 - \alpha} \right)\); see, e.g., [120, 113, 525]. As discussed in [542], adiabatic and entropy perturbations may be correlated. To measure the amplitude of the correlation one defines a cross-correlation coefficient, \(\beta \equiv - {P_{{\mathcal S},\zeta}}/\sqrt {{P_{\mathcal S}}{P_\zeta}}\), where \({P_{{\mathcal S},\zeta}}\) is the cross-correlation power spectrum between \({\mathcal S}\) and ζ, and for the definition of β we have adopted the sign convention of [525]. Observables, such as for instance the CMB anisotropies, depend on linear combinations of ζ and \({\mathcal S}\). Thus, constraints on α will depend considerably on the cross-correlation coefficient β (see for instance the discussion in [403]).
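In these conventions, α and β follow directly from the three spectra. A minimal numerical illustration (the 5% power ratio is an arbitrary example value, not a measured one):

```python
import math

# alpha and beta in the conventions of the text:
#   P_S / P_zeta = alpha / (1 - alpha)
#   beta = -P_{S,zeta} / sqrt(P_S * P_zeta)   (sign convention of [525])

def iso_fraction(P_S, P_zeta):
    """Isocurvature fraction alpha in [0, 1)."""
    r = P_S / P_zeta
    return r / (1.0 + r)

def correlation(P_cross, P_S, P_zeta):
    """Cross-correlation coefficient beta in [-1, 1]."""
    return -P_cross / math.sqrt(P_S * P_zeta)

# Example: a 5% isocurvature-to-curvature power ratio, fully anti-correlated
P_zeta, P_S = 1.0, 0.05
alpha = iso_fraction(P_S, P_zeta)                        # ~ 0.048
beta = correlation(math.sqrt(P_S * P_zeta), P_S, P_zeta)
print(alpha, beta)                                       # beta = -1: anti-correlated
```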
If part of the cold dark matter is created out of equilibrium from a field other than the inflaton, totally uncorrelated isocurvature perturbations, with β = 0, are produced, as discussed for instance in [336, 582]. The axion is a well-known example of such a field. The axion is the Nambu–Goldstone boson associated with the Peccei–Quinn [715] mechanism proposed to solve the strong-CP problem in QCD. As it acquires a mass through QCD non-perturbative effects, when the Hubble rate drops below its mass the axion starts oscillating coherently, behaving as cold dark matter [742, 2, 316]. During inflation, the axion is practically massless and acquires fluctuations which are totally uncorrelated from the photons produced by the inflaton decay [805, 578, 579, 910]. As constraints on α_{β=0} are currently very strong (see, e.g., [118, 526]), axions can only represent a small fraction of the total dark matter.
Totally uncorrelated isocurvature perturbations can also be produced in the curvaton mechanism if the dark matter or baryons are created from inflaton decay, before the curvaton decay, and remain decoupled from the products of curvaton reheating [546]. This scenario is ruled out if the curvaton is entirely responsible for the curvature perturbations. However, in models where the final curvature perturbation is a mix of the inflaton and curvaton perturbations [545], such an entropy contribution is still allowed.
When dark matter or baryons are produced solely from the curvaton decay, such as discussed by [597], the isocurvature perturbations are totally anticorrelated, with β = −1. For instance, some fraction of the curvaton decays to produce CDM particles or the outofequilibrium curvaton decay generates the primordial baryon asymmetry [419, 10].
If present, isocurvature fields are not constrained by the slow-roll conditions imposed on the inflaton field to drive inflation. Thus, they can be highly non-Gaussian [582, 126]. Even though negligible in the two-point function, their presence could be detected in the three-point function of the primordial curvature and isocurvature perturbations and their cross-correlations, as studied in [495, 546].
4.5.2 Constraining isocurvature perturbations
Even if pure isocurvature models have been ruled out, current observations allow for mixed adiabatic and isocurvature contributions (e.g., [267, 896, 525, 914]). As shown in [902, 40, 914, 544, 184, 847], the issue of initial conditions is a very delicate problem: for current cosmological data, relaxing the assumption of adiabaticity reduces our ability to do precision cosmology, since it compromises the accuracy of parameter constraints. Generally, allowing for isocurvature modes introduces new degeneracies in the parameter space which considerably weaken the constraints.
The cosmic microwave background (CMB), being our window on the early universe, is the preferred data set for learning about initial conditions. Up to now, however, the CMB temperature power spectrum alone, which is the best-constrained CMB observable so far, has not been able to break the degeneracy between the nature of the initial perturbations (i.e., the amount and properties of an isocurvature component) and the cosmological parameters, e.g., [538, 902]. Even if the precision measurement of the first CMB acoustic peak at ℓ ≃ 220 ruled out the possibility of a dominant isocurvature mode, allowing for isocurvature perturbations together with the adiabatic ones introduces additional degeneracies in the interpretation of the CMB data that current experiments cannot break. Adding external data sets somewhat alleviates the issue for some degeneracy directions, e.g., [903, 120, 323]. As shown in [184], the precision polarization measurements of the next CMB experiments like Planck will be crucial to lift such degeneracies, i.e., to distinguish the effects of the isocurvature modes from those due to variations of the cosmological parameters.
It is important to keep in mind that analyzing the CMB data with the prior assumption of purely adiabatic initial conditions, when the real universe contains even a small isocurvature contribution, could lead to an incorrect determination of the cosmological parameters and of the inferred value of the sound horizon at radiation drag. The sound horizon at radiation drag is the standard ruler that is used to extract information about the expansion history of the universe from measurements of the baryon acoustic oscillations. Even for a CMB experiment like Planck, a small but non-zero isocurvature contribution, still allowed by Planck data, can, if ignored, introduce a systematic error in the interpretation of the BAO signal that is comparable to, if not larger than, the statistical errors. In fact, [621] shows that even a tiny amount of isocurvature perturbation, if not accounted for, could affect the calibration of standard rulers from CMB observations such as those provided by the Planck mission, affect the BAO interpretation, and introduce biases in the recovered dark energy properties that are larger than the forecast statistical errors from future surveys. In addition, it will introduce a mismatch between the expansion history as inferred from the CMB and as measured by BAO surveys. Such a mismatch has been proposed as a signature of deviations from a ΛCDM cosmology, in the form of deviations from Einstein's gravity (e.g., [8, 476]), couplings in the dark sector (e.g., [589]), or time-evolving dark energy.
For the above reasons, extending the work of [621], [208] adopted a general fiducial cosmology which includes a varying dark energy equation of state parameter and curvature. In addition to BAO measurements, in this case the information from the shape of the galaxy power spectrum is included, and a joint analysis of a Planck-like CMB probe and a Euclid-type survey is considered. This makes it possible to break the degeneracies that affect the CMB and BAO combination. As a result, most of the systematic biases in the cosmological parameters arising from an incorrect assumption on the isocurvature fraction parameter f_{iso} become negligible with respect to the statistical errors. The combination of CMB and LSS gives a statistical error σ(f_{iso}) ∼ 0.008, even when curvature and a varying dark energy equation of state are included, which is smaller than the error obtained from CMB alone when flatness and a cosmological constant are assumed. These results confirm the synergy and complementarity between CMB and LSS, and the great potential of future and planned galaxy surveys.
4.6 Summary and outlook
We have summarized aspects of the initial conditions for the growth of cosmological perturbations that Euclid will enable us to probe. In particular we have considered the shape of the primordial power spectrum and its connection to inflationary models, primordial nonGaussianity and isocurvature perturbations.
A survey like Euclid will greatly improve our knowledge of the initial conditions for the growth of perturbations and will help shed light on the mechanism for the generation of primordial perturbations. The addition of Euclid data will improve the Planck satellite’s cosmic microwave background constraints on parameters describing the shape of the primordial power spectrum by a factor of 2–3.
Primordial non-Gaussianity can be tested by Euclid in three different and complementary ways: via the galaxy bispectrum, via number counts of nonlinear structures, and via the non-Gaussian halo bias. These approaches are highly competitive with and complementary to CMB constraints. In combination with Planck, Euclid will test not only a possible scale dependence of non-Gaussianity but also its shape. The shape of non-Gaussianity is the key to constraining and classifying possible deviations from the simplest single-field slow-roll inflation.
Isocurvature modes affect the interpretation of large-scale structure clustering in two ways. First, the shape of the power spectrum is modified on small scales by the extra perturbations, although this effect can be mimicked by a scale-dependent bias. More importantly, isocurvature modes can lead to an incorrect inferred value of the sound horizon at radiation drag from CMB data, which in turn predicts an incorrect location of the baryon acoustic feature. It is through this effect that Euclid BAO measurements improve constraints on isocurvature modes.
5 Testing the Basic Cosmological Hypotheses
5.1 Introduction
The standard cosmological analyses implicitly make several assumptions, none of which are seriously challenged by current data. Nevertheless, Euclid offers the possibility of testing some of these basic hypotheses. Examples of the standard assumptions are that photon number is conserved, that the Copernican principle holds (i.e., we are not at a special place in the universe) and that the universe is homogeneous and isotropic, at least on large enough scales. These are the pillars on which standard cosmology is built, so it is important to take the opportunity offered by Euclid observations to test these basic hypotheses.
5.2 Transparency and Etherington Relation
The Etherington relation [352] implies that, in a cosmology based on a metric theory of gravity, distance measures are unique: the luminosity distance is (1 + z)^{2} times the angular diameter distance. This is valid in any cosmological background where photons travel on null geodesics and where, crucially, photon number is conserved. There are several scenarios in which the Etherington relation would be violated: for instance we can have deviations from a metric theory of gravity, photons not travelling along unique null geodesics, variations of fundamental constants, etc. We follow here the approach of [65].
5.2.1 Violation of photon conservation
A change in the photon flux during propagation towards the Earth will affect the supernova (SN) luminosity-distance measures D_{L}(z) but not the determinations of the angular diameter distance. BAO measurements will not be affected, so D_{A}(z) and H(z) from BAO can be combined with supernova measurements of D_{L}(z) to constrain deviations from photon-number conservation. Photon conservation can be violated by simple astrophysical effects or by exotic physics. Amongst the former we find, for instance, attenuation due to interstellar dust, gas and/or plasmas. Most known sources of attenuation are expected to be clustered and can typically be constrained down to the 0.1% level [656, 663]. Unclustered sources of attenuation are, however, much more difficult to constrain. For example, grey dust [14] has been invoked to explain the observed dimming of type Ia supernovae without resorting to cosmic acceleration. More exotic sources of photon-conservation violation involve a coupling of photons to particles beyond the standard model of particle physics. Such couplings would mean that, while passing through the intergalactic medium, a photon could disappear, or even (re)appear, by interacting with such exotic particles, modifying the apparent luminosity of sources. Recently, [65] considered the mixing of photons with scalars, known as axion-like particles, with chameleons, and the possibility of minicharged particles, which have a tiny and unquantized electric charge. In particular, the implications of these particles for the SN luminosity have been described in a number of publications [270, 665, 190, 16], and a detailed discussion of the proposed approach can be found in [100, 101, 66, 65].
For particular models of exotic matter-photon coupling, namely axion-like particles (ALPs), chameleons, and minicharged particles (MCPs), the appropriate parametrization of τ(z) is used instead.
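The logic of the transparency test can be sketched numerically. With photon-number violation parametrized by an optical depth τ(z), the observed SN luminosity distance becomes D_L(z) = (1+z)² D_A(z) e^{τ(z)/2}, while the BAO-derived D_A(z) is unaffected; the redshift, D_A value, and τ in this snippet are invented for illustration:

```python
import math

# Distance-duality (Etherington) test: photon loss with optical depth tau(z)
# dims SNe but leaves BAO untouched,
#   D_L_obs(z) = (1+z)^2 * D_A(z) * exp(tau(z)/2).

def duality_ratio(D_L_obs, D_A, z):
    """eta(z) = D_L / [(1+z)^2 D_A]; eta = 1 if photon number is conserved."""
    return D_L_obs / ((1.0 + z) ** 2 * D_A)

def tau_from_eta(eta):
    """Optical depth inferred from a measured duality ratio: tau = 2 ln eta."""
    return 2.0 * math.log(eta)

# Example: z = 0.5, D_A = 1250 Mpc from BAO, and a 2% flux attenuation
z, D_A, tau_true = 0.5, 1250.0, 0.02
D_L_obs = (1.0 + z) ** 2 * D_A * math.exp(tau_true / 2.0)

eta = duality_ratio(D_L_obs, D_A, z)
print(eta, tau_from_eta(eta))   # eta slightly above 1; recovers tau = 0.02
```

In practice one fits a parametrized τ(z), such as the model-specific forms for ALPs, chameleons, and MCPs discussed below, to the combined SN and BAO data rather than inverting a single measurement.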
5.2.2 Axionlike particles
Axion-like particles (ALPs) can arise in field-theoretic extensions of the standard model as Goldstone bosons when a global shift symmetry, present in the high-energy sector, is spontaneously broken. Interestingly, these fields also arise naturally in string theory (for a review see [871]). Chameleon scalar fields are another very interesting type of ALP [169]. They were originally invoked to explain the current accelerated expansion of the universe with a quintessence field that can couple to matter without giving rise to large fifth forces or unacceptable violations of the weak equivalence principle. A chameleon model with only matter couplings will nevertheless induce a coupling to photons.
5.2.3 Minicharged particles
New particles with a small unquantized charge have been investigated in several extensions of the standard model [443, 105]. In particular, they arise naturally in extensions of the standard model which contain at least one additional U(1) hidden sector gauge group [443, 181]. The gauge boson of this additional U(1) is known as a hidden photon, and hidden sector particles, charged under the hidden U(1), get an induced electric charge proportional to the small mixing angle between the kinetic terms of the two photons. In string theory, such hidden U(1)s and the required kinetic mixing are a generic feature [5, 4, 311, 6, 402]. Hidden photons are not necessary however to explain minicharged particles, and explicit braneworld scenarios have been constructed [105] where MCPs arise without the need for hidden photons.
5.3 Beyond homogeneity and isotropy
The crucial ingredient that kick-started dark energy research was the 1998 interpretation of standard-candle observations in terms of the cosmic acceleration required to explain the data in the context of the FLRW metric. What we observe, however, is merely that distant sources (z > 0.3) are dimmer than we would predict in a matter-only universe calibrated through "nearby" sources. That is, we observe a different evolution of the luminosity rather than directly an increase in the expansion rate. Could this be caused by a strong inhomogeneity rather than by an accelerating universe?
In addition, cosmic acceleration seems to be a recent phenomenon at least for standard darkenergy models, which gives rise to the coincidence problem. The epoch in which dark energy begins to play a role is close to the epoch in which most of the cosmic structures formed out of the slow linear gravitational growth. We are led to ask again: can the acceleration be caused by strong inhomogeneities rather than by a dark energy component?
Finally, one must note that throughout the standard treatment of dark energy one always assumes a perfectly isotropic expansion. Could it be that some of the properties of the acceleration depend critically on this assumption?
In order to investigate these issues, in this section we explore radical deviations from homogeneity and isotropy and see how Euclid can test them.
5.3.1 Anisotropic models
In recent times there has been a resurgence of interest in anisotropic cosmologies, classified in terms of Bianchi solutions to general relativity. This has been mainly motivated by hints of anomalies in the cosmic microwave background (CMB) distribution observed on the full sky by the WMAP satellite [288, 930, 268, 349]. The CMB is very well described as a highly isotropic (in a statistical sense) Gaussian random field, and the anomalies are a posteriori statistics, so their significance should be corrected at least for the so-called look-elsewhere effect (see, e.g., [740, 122] and references therein). Nevertheless, recent analyses have shown that local deviations from Gaussianity in some directions (the so-called cold spots, see [268]) cannot be excluded at high confidence levels. Furthermore, the CMB angular power spectrum extracted from the WMAP maps has shown in the past a quadrupole power lower than expected from the best-fit cosmological model [335]. Several explanations for this anomaly have been proposed (see, e.g., [905, 243, 297, 203, 408]), including the possibility that the universe is expanding with different velocities along different directions. While deviations from homogeneity and isotropy are constrained to be very small by cosmological observations, these constraints usually assume the nonexistence of anisotropic sources in the late universe. Conversely, as suggested in [520, 519, 106, 233, 251], dark energy with anisotropic pressure acts as a late-time source of anisotropy. Even if one considers no anisotropic-pressure fields, small departures from isotropy cannot be excluded, and it is interesting to devise possible strategies to detect them.
The effect of assuming an anisotropic cosmological model on the CMB pattern has been studied by [249, 89, 638, 601, 189, 516]. In these works the Bianchi solutions describing the anisotropic line element were treated as small perturbations of a Friedmann-Robertson-Walker (FRW) background. Such early studies did not consider the possible presence of a nonzero cosmological constant or dark energy, and were recently updated by [652, 477].
One difficulty with the anisotropic models that have been shown to fit the large-scale CMB pattern is that they require very unrealistic choices of the cosmological parameters. For example, the Bianchi VII_h template used in [477] requires an open universe, a hypothesis which is excluded by most cosmological observations. An additional problem is that an inflationary phase (required to explain a number of features of the cosmological model) isotropizes the universe very efficiently, leaving a residual anisotropy that is negligible for any practical application. These difficulties vanish if an anisotropic expansion takes place only well after the decoupling between matter and radiation, for example at the time of dark-energy domination [520, 519, 106, 233, 251].
Bianchi models are described by homogeneous but anisotropic metrics. If the anisotropy is slight, the dynamics of any Bianchi model can be decomposed into an isotropic FRW background plus a linear perturbation that breaks isotropy; homogeneity, on the other hand, is preserved with respect to three Killing vector fields.
Table: Bianchi models containing the FRW limit and their structure constants.

Type     a     n_1   n_2   n_3
I        0     0     0     0
V        1     0     0     0
VII_0    0     0     1     1
VII_h    √h    0     1     1
IX       0     1     1     1
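As a concrete illustration of the table above, the simplest entry, Bianchi I, can be written in its standard textbook form (this explicit line element is not taken from the review itself):

```latex
% Bianchi I line element: homogeneous but anisotropic,
% with one independent scale factor per spatial axis
\[
  \mathrm{d}s^{2} = -\,\mathrm{d}t^{2}
  + a_{1}^{2}(t)\,\mathrm{d}x^{2}
  + a_{2}^{2}(t)\,\mathrm{d}y^{2}
  + a_{3}^{2}(t)\,\mathrm{d}z^{2}
\]
```

It reduces to the flat FRW metric in the isotropic limit a_1 = a_2 = a_3, consistent with type I being the entry whose structure constants all vanish.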
5.3.1.1 Late-time anisotropy
The CMB provides very tight constraints on Bianchi models at the time of recombination [189, 516, 638], of the order of the quadrupole value, i.e., ∼ 10^{−5}. In standard cosmologies with a cosmological constant the anisotropy parameters scale as the inverse of the comoving volume. This implies an isotropization of the expansion from recombination up to the present, leading to the constraints typically derived on the shear today, namely ∼ 10^{−9}–10^{−10}. However, this is only true if the anisotropic expansion is not generated by an anisotropic source arising after decoupling, e.g., vector fields representing anisotropic dark energy [519].
For example, the effect of cosmic parallax [749] has recently been proposed as a tool to assess the presence of an anisotropic expansion of the universe. It is essentially the change over time of the angular separation in the sky between far-off sources, induced by an anisotropic expansion.
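The geometric essence of cosmic parallax can be sketched with a toy model (this is not the full analysis of [749], which traces photon geodesics): place comoving sources in a Bianchi I-like space whose axes are stretched by independent scale factors, and track the angle under which an observer at the origin sees a pair of them. The source coordinates and scale-factor values below are purely illustrative.

```python
import math

def angle(u, v):
    """Angle (radians) between two position vectors seen from the origin."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(a * a for a in v))
    return math.acos(dot / norm)

def separation(src1, src2, scale):
    """Angular separation of two comoving sources after stretching each
    spatial axis by its own scale factor (anisotropic if they differ)."""
    p1 = [s * c for s, c in zip(scale, src1)]
    p2 = [s * c for s, c in zip(scale, src2)]
    return angle(p1, p2)

# Two comoving sources at illustrative coordinates.
s1, s2 = (1.0, 0.2, 0.1), (0.3, 1.0, 0.4)
theta0 = separation(s1, s2, (1.0, 1.0, 1.0))

# Isotropic expansion: every axis stretched equally -> angles are frozen.
iso_drift = separation(s1, s2, (1.1, 1.1, 1.1)) - theta0

# Anisotropic expansion: one axis stretched more -> the pair drifts on the sky.
aniso_drift = separation(s1, s2, (1.1, 1.0, 1.0)) - theta0

print(iso_drift)    # 0 (up to rounding): no cosmic parallax
print(aniso_drift)  # nonzero: a cosmic-parallax signal
```

The contrast between the two cases is the point: a purely isotropic expansion rescales all position vectors by the same factor and leaves every angle unchanged, so any secular drift in angular separations signals anisotropy.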
An anisotropic distribution of sources in the Euclid survey might constrain the anisotropy at present, when the dark-energy density is of order 74% and therefore not yet in the final dark-energy-dominated attractor phase (Section 4.3.8).
5.3.2 Late-time inhomogeneity
5.3.3 Inhomogeneous models: Large voids
Nonlinear inhomogeneous models are traditionally studied either with higher-order perturbation theory or with N-body codes. Both approaches have their limits. A perturbation expansion obviously breaks down when the perturbations are deep in the nonlinear regime. N-body codes, on the other hand, are intrinsically Newtonian and, at the moment, are unable to take full relativistic effects into account. Nevertheless, these codes can still account for the general-relativistic behavior of gravitational collapse in the case of inhomogeneous large-void models, as shown recently in [30], where the growth of the void follows the full nonlinear GR solution down to large density contrasts (of order one).
A possibility to make progress is to proceed with the most extreme simplification: radial symmetry. By assuming that the inhomogeneity is radial (i.e., we are at the center of a large void or halo) the dynamical equations can be solved exactly and one can make definite observable predictions.
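A deliberately crude illustration of how a radial inhomogeneity can mimic acceleration (this is not an exact radial solution such as LTB, just a two-zone caricature) places the observer at the center of an underdense region expanding at a higher rate H_in out to some redshift z_edge, beyond which the rate drops to H_out. All numerical values below are hypothetical; the point is only that distant sources then appear dimmer than a naive extrapolation of the locally measured rate would predict.

```python
import math

C = 299792.458  # speed of light in km/s

def d_lum(z, hubble, n=2000):
    """Luminosity distance in Mpc: d_L = (1+z) c * integral_0^z dz'/H(z')
    (midpoint rule; H in km/s/Mpc). Deceleration is ignored in this toy."""
    dz = z / n
    chi = sum(dz / hubble((i + 0.5) * dz) for i in range(n))
    return (1 + z) * C * chi

# Hypothetical two-zone model: a faster-expanding local void out to z_edge.
def H_void(z, h_in=74.0, h_out=60.0, z_edge=0.2):
    return h_in if z < z_edge else h_out

def H_homog(z):
    return 74.0  # naive homogeneous extrapolation of the locally measured rate

# Extra dimming of a z = 1 source relative to the homogeneous extrapolation:
extra_dimming = 5 * math.log10(d_lum(1.0, H_void) / d_lum(1.0, H_homog))
print(round(extra_dimming, 2))  # apparent "acceleration", in magnitudes
```

Because photons from high-redshift sources spend most of their path in the slower-expanding exterior, distances integrate to larger values than the local rate suggests, producing a dimming that an observer assuming homogeneity would attribute to acceleration.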
It is however clear from the start that these models are highly controversial, since the observer needs to be located at the center of the void with a tolerance of about a few percent of the void scale radius, see [141, 242], disfavoring the long-held Copernican principle (CP). Notwithstanding this, the idea that we live near the center of a huge void is attractive for another important reason: a void creates an apparent acceleration field that could in principle match the supernovae observations [891, 892, 220, 474]. Since we observe that nearby SNe Ia recede faster than the H(z) predicted by the Einstein-de Sitter universe, we could assume that we live in the middle of a huge spherical region which is expanding faster because it is emptier than its surroundings. The transi