Michael Peskin: "Supersymmetry at a Linear Collider"
- Machine parameters necessary for incisive Susy studies
- What is the region of parameter space for which the lightest chargino
and lightest two neutralinos would not be observed in a 0.5 (1) TeV LC?
From simulation studies, and from the experience of LEP 2, the chargino
should be observed at the LC with relatively small luminosity samples at
masses up to 90% of the beam energy. So, roughly, a 500 GeV collider
covers chargino masses up to 250 GeV, and so on. It is typical that the second
lightest neutralino is almost degenerate with the chargino. However, the
production cross section for this particle is more model-dependent, and
luminosity samples of 20 fb^-1 rather than a fraction of an
fb^-1 are needed to find it in some regions of parameter space.
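Taken literally, the 90%-of-beam-energy rule gives a quick estimate of the reach (a toy arithmetic sketch of my own; the quoted 250 GeV figure rounds up to the kinematic pair-production limit of ECM/2):

```python
def chargino_reach(ecm_gev, fraction=0.9):
    """Rough chargino mass reach: the quoted fraction (90%) of the
    beam energy, which is half the center-of-mass energy."""
    beam_energy_gev = ecm_gev / 2.0
    return fraction * beam_energy_gev

# A 500 GeV collider reaches about 225 GeV, close to the kinematic
# pair-production limit of 250 GeV; a 1 TeV machine reaches 450 GeV.
print(chargino_reach(500))   # 225.0
print(chargino_reach(1000))  # 450.0
```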
To assess in how much of the underlying parameter space one can observe the
chargino, one needs a model. In minimal SUGRA, one can use criteria based
on the idea that it requires fine-tuning for the electroweak
symmetry-breaking scale to be very far below the superparticle mass scale. Detailed
studies based on this idea claim that almost all `natural' models have a
chargino below 250 GeV. However, the conclusion is model-dependent. A
cartoon version of the argument is as follows: The direct consequence of
naturalness is that the parameters mu and mgluino should both be much less
than 1 TeV. The model is needed to extend this conclusion to the other
parameters of the theory. Since minimal SUGRA contains the relation
m2 = (1/3.5) m3
a bound on the gluino mass translates into a bound on the chargino mass.
The relation just quoted also is true in the simplest gauge-mediated
models, but there are other models in which it is violated.
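The arithmetic of the cartoon argument can be made explicit (a sketch of my own; the 1 TeV naturalness bound and the identification of the lighter chargino mass with m2 in the gaugino region are the assumptions stated above):

```python
def chargino_bound_gev(m_gluino_bound_gev):
    """Translate a gluino mass bound into a chargino mass bound
    using the gaugino unification relation m2 = (1/3.5) m3 and the
    approximation mchargino ~ m2 in the gaugino region."""
    return m_gluino_bound_gev / 3.5

# A naturalness bound mgluino < 1 TeV implies a chargino below
# roughly 286 GeV, comfortably within reach of a 1 TeV LC.
print(round(chargino_bound_gev(1000.0)))  # 286
```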
One can go on and on along this line. A simpler argument is that, since in
SUSY models the Z and W masses are consequences of supersymmetry breaking,
it is not stupid that the W mass and the chargino mass should be of the
same order. A 1 TeV LC can probe to mchargino = 6 mW.
- What loss is there if the heavier chargino and heavier two neutralinos
cannot be produced in the LC? What is the loss in physics reach if the LC
cannot produce the squarks and gluino?
One of the key measurements for the LC is the measurement of mu from the
chargino mass spectrum. In the extreme limit mu >> m2, mu is the mass of
the heavier chargino, and one has to observe this particle to measure mu
accurately. In less extreme cases, it is possible to determine mu from the
cross section for the lighter chargino.
It is less important, at least initially, to reach the squarks and gluino.
As long as one can reach the sleptons, there is plenty of precision
information available to the LC on flavor-dependence of scalar masses.
Combining this information with the squark and gluino masses from the LHC
would already provide a useful discrimination of models. Eventually, one
would like to measure the details of squark spectroscopy that are
inaccessible to the LHC---the detailed dependence of the squark masses on
flavor and the qL/qR mass differences. As for the
gluino, I do not know
a method for a precision gluino mass measurement at the LC in the standard
scenarios. However, it would be interesting to measure the
gluino-quark-squark coupling constant (e.g. by e+e- -> q sqbar gluino).
- What is the likely gain for Susy studies if the energy were increased
to 1.5 or 2 TeV? For mSugra, what is the likely tradeoff between
increased energy and higher luminosity?
In a large part of the mSUGRA parameter space, one needs energies above 1
TeV to find the heavier chargino and the heavy Higgs boson states H, A. An
energy of 2 TeV is needed to cover the region allowed by the naturalness
criterion with reasonable safety. Presumably, the best strategy is to
study the lighter chargino and use its properties to predict the location
of the heavier one.
It is also possible (in mSUGRA, but not in gauge-mediated models) that the
sleptons will not be found at a 1 TeV collider. In mSUGRA, the sleptons
and squarks could consistently be almost degenerate at 2 TeV. Then LHC and
LC would learn a lot about the gauge sector of SUSY, but the matter sector
could only be studied in detail at a later stage.
From these remarks, it is clear that the tradeoff between increased energy
and increased luminosity depends crucially on the explicit scenario.
- How will discovery of supersymmetry at LHC modify the desired design
of a LC?
If the LHC can set the mass scale for superparticles, that would be very
helpful. However, it is still most likely that the first LC running would
be at 350 GeV or below, to do precision Higgs studies. (If SUSY were
discovered, precision top studies would be less interesting.) The
accelerator physicists tell me that it would be important to run the LC at
lower energy (usually they say 500 GeV) to understand how to maximize its
performance before going to higher energy where event rates are lower.
- Of what utility are e-e- and gamma-gamma collisions for supersymmetry studies?
It would be interesting to study SUSY production in gamma-gamma collisions,
but I do not know of a crucial experiment that would drive one to use this
technology. (The situation is different with the Higgs, where one can
study the light and heavy Higgses as resonances in gamma-gamma.)
For e-e-, there are compelling arguments. If the sleptons are accessible,
e-e- is actually superior to e+e- for measuring the neutralino spectrum
from t-channel exchange, and for searching for CP violating phases, slepton
number violation, etc.
- Sparticle properties:
- what precision is expected for sparticle mass determinations, and to
what extent are model dependent assumptions needed?
Measurements of the sparticle masses from kinematic distributions (e.g.,
endpoints) should reach the level of 1% accuracy. For higher precision,
one can map the production thresholds for the various SUSY states. The
TESLA group claims that 0.1% accuracy can be achieved with this method.
Both methods are model-independent.
- What are the expectations for LC to measure quantum numbers of
sparticle states? branching ratios? coupling constants or coupling relations?
In e+e- -> S Sbar, there
are characteristic angular distributions that depend on the spin of S, so
the spin should be found unambiguously. Similarly, for given spin, the
cross sections (measurable separately for e- beams of the two
polarizations) depend only on the electroweak quantum numbers, so these can
be identified. Branching ratio measurements are typically statistics
limited; this means 1% errors with high luminosity. Coupling constant
relations can be checked to 1% accuracy; I'll give some examples in my talk.
- How do you make the case that the increased precision (over LHC)
for sparticle properties is required?
The importance of increased precision depends on whether the true model of
the SUSY spectrum is one that the theorists guessed in advance or one that
must be inferred from the data. In the case of mSUGRA, the most precise
tests of unification will be the comparison of m1 and
m2 and of the L and R slepton masses. The first of these might
be done at the few-% level at LHC, the second probably not with any
significant accuracy. The LC will do both at the few-per-mil level. In
the case of a weird new spectrum, the advantage of the LC will not be
increased precision but the ability to measure more states in a model-independent way.
- How can CP-violating phases beyond CKM be measured?
If the production model is simple and understood precisely, one can infer
the presence of phases from the form of the differential cross section.
This measures cos(delta), not sin(delta), but the presence of the phase can be clear. The
nicest example is a phase between m1 and m2, which
profoundly alters the differential cross sections in
e-e- -> selectrons.
By the way, the large statistics of the LHC make it a wonderful place to
look for CP-violating signatures (e.g., a difference between the electron
and positron energy spectra in event samples which require various SUSY
signatures). The role of the LC in these analyses would be to provide a
precise neutralino spectrum which would be needed to make an accurate
model of the production.
- Could a LC make measurements that enable subsequent reanalysis of
LHC data to make new advances? (or vice versa) We are trying here
to establish the potential synergy of having both types of machines.
(any examples of such complementarity?)
The paragraph above gives a characteristic example: To interpret inclusive
SUSY measurements at LHC, one would want to feed back precision values of
the chargino and neutralino masses into a detailed model of squark decays.
Small differences in the mass values can make dramatic changes if one is
near a kinematic threshold for a 2-body decay channel.
It is also possible that measurements at an LC might reveal that the wrong
hypothesis was chosen for an LHC decay-chain analysis. For example, the
shape of the dilepton spectrum in chi2 -> chi1 l+ l- is very
model-dependent and can look similar for the cases where the direct decay
to sleptons is open or forbidden. For the points that Ian discussed
yesterday, the situation is clear and there are a number of cross-checks.
I am not sure it is always so. The situation could also be made more
difficult by having multiple thresholds (due to neutralino mixing) or
leptonic decays mainly to tau.
- LC specific detection issues
- what are the limitations on sparticle mass determination due to beamstrahlung?
For the LC at 0.5 to 1 TeV, both beamstrahlung and initial state radiation
lower the effective center-of-mass energy of e+e-
collisions, and the two effects are roughly equal in magnitude. The width
of the distribution is about 10% of ECM. It is possible to measure the
distribution of e+e- annihilation energies and the
boosts of the annihilating system by measuring acollinear Bhabha events in
the detector endcaps. There is plenty of rate for a detailed study (about
100 R). Note that, because the e- energy distributions from both ISR and
beamstrahlung are strongly peaked at x=1, the actual positions of
thresholds and endpoints are not affected. One only needs to determine
what function to use in fitting the leading edges of distributions.
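Why the endpoints survive can be seen with a toy model of the luminosity spectrum (my own sketch; the x weights are invented for illustration): because the spectrum has finite weight at x = 1, smearing redistributes events below an endpoint but never moves the endpoint itself.

```python
# Toy spectrum of x = (effective ECM)/(nominal ECM): most weight
# at x = 1, with a tail from ISR and beamstrahlung.  The weights
# here are invented for illustration only.
spectrum = {1.00: 0.60, 0.95: 0.25, 0.85: 0.15}

endpoint_gev = 250.0  # some kinematic endpoint at full energy

# A collision at reduced energy scales the endpoint down by x, so
# the observed maximum is set by the x = 1 piece of the spectrum:
observed_max = max(x * endpoint_gev for x in spectrum)
print(observed_max)  # 250.0 -- the endpoint position is unchanged
```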
- can you outline the program needed to fully explore supersymmetry with
an Ecm <= 1 TeV LC? What running
time and luminosity
is required, taking into account the set of machine energies,
the beam polarization states and the particle types needed?
(for this you may want to focus on some specific Susy model).
Running time should be devoted to each level of the SUSY spectrum that
appears within the range of the machine. The first running for precision
should be done at an energy at which only the lightest charged superpartner
is visible. The budget for integrated luminosity is roughly the following:
- 20 fb^-1 for a mass measurement by the endpoint technique (1% accuracy)
- 100 fb^-1 for a differential cross section measurement at a
fixed energy, suitable to measure mixing angles
- 100 fb^-1 for a high-precision mass measurement by threshold
scan (0.1% accuracy)
Based on the details of the spectrum (revealed by the LHC and the LC
results at lower energies), one can decide which higher energies should be
explored.
- What is the case for polarizing both electron and positron beams,
and what is the marginal benefit from increases in polarizations?
In simple e+e- annihilation, without additional
t-channel contributions, the e-L always annihilates
an e+R and vice versa. So electron polarization
suffices if positron polarization is not available.
Polarization is useful both for asymmetry measurements and to remove
background. Naively, one would say that asymmetries are proportional to P
and a marginal increase from 80% to 90% is not so useful. However, there
are examples (e.g. chargino pair production in the gaugino limit) where
the e-L and e-R cross sections
differ by a factor 10. Then, with 90% polarization, one can adopt a
strategy of taking much more data with R rather than L beams. In processes
for which W+W- pair production is a background (e.g.,
slepton production, leading to acoplanar lepton pairs) it is useful to
extinguish W production by choosing R polarization. The background is then
proportional to (1-P), so a change from 80% to 90% makes a factor of 2 improvement.
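The factor of 2 is just the ratio of residual left-handed beam fractions (a one-line check, assuming the background scales exactly as 1 - P):

```python
def residual_background(p):
    """W pair background surviving right-handed beam polarization P,
    assuming it scales as the residual e-L beam fraction (1 - P)."""
    return 1.0 - p

# Raising the polarization from 80% to 90% halves the background.
improvement = residual_background(0.80) / residual_background(0.90)
print(improvement)  # 2.0
```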
There are a number of reasons why it is desirable to have positron
polarization in addition to electron polarization: (1) for selectron pair
production, which has t-channel exchange contributions, the positron
polarization controls which se+ state is produced; (2) in
extinguishing W pair background, it may not be achievable to increase the
polarization from 90% to 95%; adding 50% positron polarization does the
same job; (3) single W production (gamma e -> W nu) is sometimes a
relevant background; e-R polarization removes single W- but not
single W+; (4) with both beams polarized, it is possible to
determine both polarizations (at the expense of luminosity) without an
external polarization monitor (`the Blondel scheme'); (5) for studies in
which the only important cross sections come from e-L
e+R (e.g., WW scattering), positron polarization
increases the cross section. Typically, though, there is a trade-off
between positron polarization and luminosity. None of these arguments is
worth a factor 2 in luminosity.
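Point (2) can be checked with a back-of-the-envelope count of beam combinations (a sketch assuming independent beams and that only e-L e+R collisions produce the W pair background):

```python
def ww_background_weight(p_electron_r, p_positron_l=0.0):
    """Relative weight of e-L e+R collisions (the W pair background).
    p_electron_r: degree of right-handed e- polarization.
    p_positron_l: degree of left-handed e+ polarization (0 = unpolarized)."""
    frac_e_minus_left = (1.0 - p_electron_r) / 2.0
    frac_e_plus_right = (1.0 - p_positron_l) / 2.0
    return frac_e_minus_left * frac_e_plus_right

# 95% electron polarization alone, and 90% electron polarization
# plus 50% positron polarization, leave the same residual background:
print(ww_background_weight(0.95))        # ~ 0.0125
print(ww_background_weight(0.90, 0.50))  # ~ 0.0125
```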
- Understanding the Susy Model;
- Can measurements distinguish between mSUGRA, GMSB, Anomaly mediated
supersymmetry (etc.) ? For what regions of parameter space? What
integrated luminosity is needed?
This question can be answered at many different levels of sophistication.
First, there are characteristic signatures of GMSB (direct photons or
leptons) and AMSB (light long-lived chargino) which will be seen if they
are present. In GMSB, the delayed decays are visible for a supersymmetry
breaking scale up to about 10^4 TeV, while in these models the scale
can go up to 10^8 TeV. If pure AMSB is mixed with mass
contributions from SUGRA at a few percent of full strength, the chargino is no longer long-lived.
Second, there are characteristic differences in the spectrum. In minimal
SUGRA, the mass ratio of slR : slL : sq is never
larger than 0.6 : 1 : 3, while in GMSB, one finds at tree level 0.4 : 1 :
6. Both cases, in their simplest form, predict gaugino unification, which
is violated in AMSB. Including higher-order corrections, or RG running in
the case of a high SUSY breaking scale in GMSB, can make these differences smaller.
As an extreme case, Bagger, Matchev, Pierce, and Zhang found a region of
parameter space in which the spectrum predictions for mSUGRA and GMSB
overlap to 2% accuracy for all particles visible at the LHC. This happens
as a result of large tan(beta) in the GMSB models, which can be diagnosed
with precision superspectrum measurements or with heavy Higgs measurements.
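As a crude illustration of how the quoted ratios discriminate models (my own sketch; the benchmark triples are the extreme values quoted above, and the input masses are invented):

```python
# Benchmark (slR : slL : sq) mass ratios, normalized to slL = 1,
# as quoted above for minimal SUGRA and tree-level GMSB.
BENCHMARKS = {"mSUGRA": (0.6, 1.0, 3.0), "GMSB": (0.4, 1.0, 6.0)}

def closest_model(m_sl_r, m_sl_l, m_sq):
    """Compare measured mass ratios to the benchmark triples and
    return the closer model (crude least-squares discriminator)."""
    measured = (m_sl_r / m_sl_l, 1.0, m_sq / m_sl_l)
    def distance(name):
        return sum((a - b) ** 2 for a, b in zip(measured, BENCHMARKS[name]))
    return min(BENCHMARKS, key=distance)

# Invented spectra (masses in GeV):
print(closest_model(120.0, 200.0, 610.0))   # mSUGRA-like ratios
print(closest_model(80.0, 200.0, 1200.0))   # GMSB-like ratios
```

In practice the higher-order corrections mentioned above blur these tree-level ratios, so a real analysis would fit the full spectrum rather than two ratios.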
- How well, and with what model assumptions, can unification at the GUT scale be tested?
In published analyses by the JLC group, unification of m1 and
m2 and of slL and slR masses in minimal SUGRA is
demonstrated to about 10% accuracy. This can probably be pushed to the
percent level with large data samples. At the other extreme, it is very
difficult to demonstrate the unification of Higgs and stop masses with the
other scalar masses, due to an infrared attractor in the renormalization group evolution.
In his talk on Wednesday, Peter Zerwas presented a new, rather
sophisticated study of this question by the TESLA group. Hopefully, it
will be published soon.
- What is the set of measurements needed to fully explore R-parity violating
SUSY and how well will the measurement of the Yukawa couplings LLL, LQQ,
QQQ types be made?
R-parity violating couplings may or may not be large enough to affect the
production amplitudes for SUSY particles. One case that could give a
remarkable signature is a sneutrino produced as a resonance in
e+e-. For this (e e sneutrino)
coupling, one is sensitive down to 0.01. It should be possible to observe
couplings that would allow t-channel exchanges to contribute to SUSY
production (e.g., an e q sq coupling allowing t-channel quark exchange to
produce squarks, or vice versa). To my knowledge, there has been no study
of the limits one could set. I would guess a limit of 0.03 if squarks are
accessible (similarly for sleptons).
If the R-parity violating couplings are smaller, the production dynamics of
SUSY particles is exactly as in the standard scenario, but the signatures
are different. It is still easy to find SUSY particles, so the masses and
production parameters can be measured. From the decays, it is not possible
to obtain the overall magnitude of the R-parity violating couplings, but it
is possible to obtain ratios of these couplings by measuring the decay
branching ratios. Since the R-parity violating couplings should depend
strongly on flavor, this should be interesting information.