# Difference: CombineTutorial (3 vs. 4)

#### Revision 4: 2016-11-24 - MingshuiChen

# Higgs combine tutorial based on CMSDAS2014HiggsCombExercise


• Datacards: in order to perform a statistical analysis of the results, datacards are used to encode the outcomes of the Higgs searches (i.e. the observed and expected events after the selection, and their distribution in whatever variables are used in the analysis to discriminate between signal and background). A description of how these are built and interpreted will be given.
• Higgs Couplings
• The basics: how to define a simple model to interpret Higgs search results by fitting some parameters of interest (e.g. the signal strength), and how to use basic statistics to define the uncertainties on the extracted parameters in one and two dimensions.
• First fits to Higgs search results: use what you learned before to perform a first fit to the CMS datacards and produce signal strength plots like these two

• Higgs Coupling models: models to interpret the data for a Higgs with couplings different from the SM ones, and how they are realized in the statistical toolkit
• Coupling fits from the CMS combination: finally, you will combine all the CMS datacards and fit them to evaluate whether the Higgs boson we see is compatible with the SM one.
• Higgs JCP

• Shell commands will be embedded in a grey box, e.g.
combine -M MaxLikelihoodFit --justFit comb_hww.txt -m 125.7 --robustFit=1
• Datacards will be embedded in a yellow box, e.g.
observation 70920
• Python code will be embedded in a light blue box, e.g.

%CODE{"python"}% return self.getHiggsSignalYieldScale(processSource, foundDecay, foundEnergy) %ENDCODE%

• ROOT commands and C++ macros will be embedded in a light green box, e.g.

%CODE{"cpp"}% limit->Draw("2*deltaNLL:r"); %ENDCODE%

• Quizzes and important messages will be highlighted in red

# Intro I: The Higgs at CMS

Measurements of the Higgs properties in CMS exploit the different production and decay modes expected for the SMH boson.

The SMH boson is produced through gluon-fusion (ggH, red), vector-boson fusion (VBF, blue), associated vector-boson production (VH, purple - where the Higgs is radiated off an energetic V=W,Z boson), and in associated production with a top-quark pair (ttH, green):
As can be seen, the latter three modes have very clear signatures that can be targeted experimentally by, e.g., requiring vector-boson or top-quark candidates in the event.
At the LHC, given the very large gluon luminosities, the SMH boson is mainly produced through gluon-fusion and, at much lower rates, the more distinctive processes:
At a mass of mH ~ 125 GeV, the SMH boson decays into almost every possible mode except into a pair of top quarks:
Here, the decay into a pair of photons is also loop-mediated, as in the case of gluon-fusion production, except that in this case the loop is sensitive not only to the colour charge but also to the electric charge:
While the top quark and the W boson dominate the contributions to the diphoton decay, contributions have also been calculated for other quarks and even the tau lepton. In this way, the diphoton branching fraction is sensitive not only to the SMH couplings to those SM particles but also to possible BSM particles.

In terms of decay rates, although some decays seem very rare, in practice, in order to observe the ZZ or WW decays one needs to take into account the branching fractions of the Z and W. After that, the picture is a little different:
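The effect of folding in the vector-boson branching fractions can be made concrete with rough numbers. The branching fractions below are approximate values for mH ≈ 125 GeV, used purely for illustration:

```python
# Approximate branching fractions (illustrative values, not official numbers)
BR_H_ZZ = 0.026    # H -> ZZ
BR_H_WW = 0.215    # H -> WW
BR_Z_ll = 0.0673   # Z -> ee or mumu (~3.37% per lepton flavour)
BR_W_lv = 0.216    # W -> e nu or mu nu (~10.8% per lepton flavour)

# Effective branching fractions for the fully leptonic final states
BR_H_4l   = BR_H_ZZ * BR_Z_ll**2   # H -> ZZ -> 4l
BR_H_2l2v = BR_H_WW * BR_W_lv**2   # H -> WW -> l nu l nu

print(f"BR(H->ZZ->4l)   ~ {BR_H_4l:.2e}")
print(f"BR(H->WW->lvlv) ~ {BR_H_2l2v:.2e}")
```

So even though BR(H→ZZ) is sizeable, the fully leptonic final state keeps only roughly one event in ten thousand Higgs decays.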


Finally, while it is somewhat straightforward to identify the decay signatures and experimentally select pure samples of H→γγ or H→ττ, it is much more complicated to distinguish between production modes. In a nutshell, only leptonic decays of Vs in VH production are extremely pure with the present amount of data. Both VBF and tt̄H selections inevitably include some gluon-fusion events where one or more jets were radiated from the gluon or quark lines. With more data, stricter cuts can be applied and purer production tags developed.
But for the reasons above we usually refer to a "dijet tag" and not to "VBF production" when discussing the experimental selections, since such selections will contain some events other than VBF.

In CMS, a large number of production and decay modes are studied, exploring different experimental selections targeting the different production modes. The following table summarises the existing and planned analyses:

|          | ZZ | WW | γγ | ττ | bb̄ | Zγ | μμ | Invisible |
|----------|----|----|----|----|----|----|----|-----------|
| ggH tag  | ★  | ★  | ★  | ★  | ☠  | ☆  | ☆  | ☠         |
| VBF tag  | ☆  | ★  | ★  | ★  | ☆  | ☆  | ☆  | ☆         |
| VH tag   | ☆  | ☆  | ☆  | ☆  | ★  | ★  |    |           |
| tt̄H tag | ☆  | ☆  | ☆  | ☆  | ☆  |    |    |           |
There are several notes to this table:
• The H→Zγ decay is a rare, loop-mediated decay process which is also sensitive to BSM physics, as are ggH production and H→γγ, with the addition that the Z knows left from right, given the V−A structure of the electroweak interaction.
• In the SM one expects an invisible decay of the SMH from H→ZZ→2ν2ν at the level of 0.1% for mH=125 GeV. In BSM alternatives, the Higgs boson could decay into massive invisible particles with mχ<mH/2 at a substantial rate. By searching for those invisible decays one can set stringent limits on this possibility.
• Combinations denoted with the skull (☠) are not thought to be measurable, mostly because of the lack of distinctive features with which to tag the production of the SMH.
• Many rows and columns are extremely condensed and do not give the full flavour of the signatures explored. For instance, Z bosons are explored in many decay signatures, such as 2ℓ or bb̄, and they appear in both the production tag and the decay mode. On the other hand, there are analyses such as tt̄H with H→multileptons, where there are contributions from ZZ, WW and ττ which cannot at present be discerned.
• Open stars (☆) indicate that no observation has been made in this particular channel and mostly only limits were set.
From this rich picture of measurements, one can explore the most important aspects of the scalar coupling structure of the new boson.
But not everything ends with the measurements of σ×BR.
In fact there is a much richer and more complex structure in terms of the tensor couplings. This is a more difficult arena, where the spin (J) and charge-parity (CP) properties of the boson are involved.
The only channel where all information about the boson is available in the detector is H→ZZ→4ℓ, where the leptons are electrons or muons (and not neutrinos). This does not mean that other channels cannot contribute to this picture (like H→WW→ℓνℓν or H→γγ), but the fact that all information is readily available renders the analysis much more straightforward in the H→ZZ→4ℓ channels.
The following is an illustration from http://arxiv.org/pdf/1208.4018.pdf :
Since the spin-parity tensor structure is much more complicated than the scalar structure of the couplings, much more data will be needed to perform precise measurements. For the moment the strategy is to perform hypothesis tests between the 0+ (SMH-like) hypothesis and other, alternative hypotheses such as pseudoscalar (0−) bosons (or admixtures), spin-2 bosons like the graviton (2+), or exotic spin-1 bosons.

As an example, the following figure illustrates the differences in one observable of the ZZ (left), WW (middle), and γγ (right) decays for different JCP hypotheses (0+ red circles, 0- magenta squares, and two types of spin-2 particle in green and blue):


# Intro II: Datacards

CMS_vbfhinv_zvv_stat gmN 12 - 8.4899 - - - - -
The gamma distribution, in the form we use it, is characterized by two parameters: N, the number of events in the control region (or in MC), and α, the weight by which each event is multiplied to determine the expected yield in the signal region (if the weights are not all the same, one can take α to be the average weight). By definition, the prediction in the signal region is α·N.
In the datacards, the uncertainty is encoded writing gmN immediately followed by the number of events N, and then putting the factor α in the column corresponding to the process. So, in this case N=12, α=8.5 and the expected yield is about 102 events.
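That arithmetic can be cross-checked with a few lines (the helper below is hypothetical, not part of combine; the 1/sqrt(N+1) relative uncertainty matches the convention used in the breakdown later on):

```python
import math

def gmN_yield(N, alpha):
    """Expected yield and relative statistical uncertainty for a gmN
    (gamma-distributed) background prediction: yield = alpha * N, with a
    relative uncertainty of 1/sqrt(N+1) from the control-region count."""
    expected = alpha * N
    rel_unc = 1.0 / math.sqrt(N + 1)
    return expected, rel_unc

y, r = gmN_yield(12, 8.4899)
print(f"expected yield = {y:.1f} events, relative stat. unc. = {r:.0%}")
# about 102 events, with a ~28% relative uncertainty
```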

• 10.3709*(1.30341-1) from CMS_vbfhinv_mc_norm
And the grand total is sqrt((10.3709*(1.04-1))^2+(101.879*1/sqrt(12+1))^2+(101.879*(1.25331-1) )^2+(67.2274*(1.23748-1) )^2+(68.2064*(1.29736-1))^2+(54.4137*(1.44471-1))^2+(36.8432*(1.84373-1))^2+(10.3709*(1.30341-1) )^2) which gives 60.8 events
So, keeping only the relevant decimals, the total expected background is 340 ± 18 (stat) ± 61 (syst).
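The quadrature sum above is easy to reproduce; each term is yield × (κ − 1) for the log-normal entries, plus the gmN statistical term, with the numbers copied from the breakdown above:

```python
import math

# Individual absolute uncertainties, copied from the breakdown above
terms = [
    10.3709 * (1.04 - 1),
    101.879 / math.sqrt(12 + 1),     # gmN statistical term on the Z(vv) yield
    101.879 * (1.25331 - 1),
    67.2274 * (1.23748 - 1),
    68.2064 * (1.29736 - 1),
    54.4137 * (1.44471 - 1),
    36.8432 * (1.84373 - 1),
    10.3709 * (1.30341 - 1),         # CMS_vbfhinv_mc_norm
]
total = math.sqrt(sum(t * t for t in terms))
print(f"total uncertainty = {total:.1f} events")  # -> 60.8
```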

## A shape analysis using templates: H→ττ

%CODE{"cpp"}% /// root -l root -l htt_mt.input_8TeV.root /// Attaching file htt_mt.input_8TeV.root as _file0...
_file0->cd("muTau_vbf"); QCD->Draw("HIST");
%ENDCODE%
Note that we're not drawing the error bars on the histograms, since combine is not using them so it would be misleading to show them.
We can also check the normalization of the histogram, to see that it matches the expected yield in the datacard as it should.

%CODE{"cpp"}%

cout << QCD->Integral() << endl; /// ----> 39.2563
%ENDCODE%

Note that underflow and overflow bins are not used by combine.

Similarly we can look at how the 199 events observed in data are distributed: the first rule again will give muTau_vbf/data_obs

%CODE{"cpp"}%

cout << data_obs->Integral() << endl; /// ----> 199 data_obs->Draw();
%ENDCODE%
We can also look at how the signal looks for different Higgs boson mass hypotheses; we will use the useful Form("pattern", args) method of ROOT to create the proper string with the mass written inside.

%CODE{"cpp"}%

qqH110->Draw("HIST"); for (int m = 110; m <= 140; m += 10) { TH1F *h = (TH1F *) gDirectory->Get(Form("qqH%d",m)); h->SetLineColor(206+4*(m-110)/10); h->SetLineWidth(2); h->Draw("HIST SAME"); }
%ENDCODE%
and in a similar way we can see how the mean and RMS of the signal vary with the mass

%CODE{"cpp"}% for (int m = 110; m <= 140; m += 5) { TH1F *h = (TH1F *) gDirectory->Get(Form("qqH%d",m)); printf("for mH=%3d, signal mean = %5.1f, rms = %5.1f\n", m, h->GetMean(), h->GetRMS()); }
%ENDCODE%

for mH=110, signal mean = 100.5, rms =  15.1
for mH=115, signal mean = 104.6, rms =  16.8
for mH=120, signal mean = 108.8, rms =  17.5


This tells us that there is a shape uncertainty called CMS_scale_t_mutau_8TeV which affects the signals and the ZTT background (this background is determined by embedding simulated taus into Z→μμ events, so any data/MC discrepancy in the energy response for taus will affect its prediction just as it does for the simulated taus in the signal MC).
For shape analyses using templates, the effect of a shape-affecting systematic uncertainty is encoded in the datacard by providing two additional templates corresponding to ±Nσ shifts of the corresponding parameter. For each process affected by this uncertainty, the column in the datacard will contain a value 1/N. In this case, as is often done, ±1σ shapes have been used, and so the columns of the datacard contain a value of 1.

The associated shapes are determined using the last column of the shapes line, which for this datacard reads $CHANNEL/$PROCESS_$SYSTEMATIC, where $SYSTEMATIC is taken to be the name of the uncertainty in the card, and the postfixes Up and Down are added to get the up and down templates. Let's see how much the mean of the expected ZTT distribution changes for the ±1σ shifts, and also look at the shapes in ROOT. We can also see that the tau energy scale affects not only the shape of the background but also its normalization (probably because taus reconstructed as more energetic have a higher probability of passing the pT cuts or other kinematic cuts of the analysis).
%CODE{"cpp"}% /// root -l htt_mt.input_8TeV.root /// Attaching file htt_mt.input_8TeV.root as _file0...
_file0->cd("muTau_vbf"); TH1F *nominal = (TH1F *) gDirectory->Get("ZTT"); TH1F *tau_up = (TH1F *) gDirectory->Get("ZTT_CMS_scale_t_mutau_8TeVUp"); TH1F *tau_dn = (TH1F *) gDirectory->Get("ZTT_CMS_scale_t_mutau_8TeVDown");

printf("Nominal shape: mean = %6.2f (norm: %7.3f events)\n", nominal->GetMean(), nominal->Integral()); // --> Nominal shape: mean = 88.03 (norm: 94.286 events) printf("Up shape: mean = %6.2f (norm: %7.3f events)\n", tau_up->GetMean(), tau_up->Integral()); // --> Up shape: mean = 88.86 (norm: 97.172 events) printf("Down shape: mean = %6.2f (norm: %7.3f events)\n", tau_dn->GetMean(), tau_dn->Integral()); // --> Down shape: mean = 87.20 (norm: 91.995 events)

nominal->SetLineWidth(2); nominal->SetLineColor(kBlack); tau_up->SetLineWidth(2); tau_up->SetLineColor(kBlue); tau_dn->SetLineWidth(2); tau_dn->SetLineColor(kRed); nominal->Draw("HIST"); tau_up->Draw("HIST SAME"); tau_dn->Draw("HIST SAME"); nominal->GetYaxis()->SetRangeUser(0,45);
%ENDCODE%

## A parametric shape analysis: H→γγ

%CODE{"cpp"}% TFile *fDat = TFile::Open("hgg.inputbkgdata_8TeV_MVA.root");
RooAbsData *data = cms_hgg_workspace->data("roohist_data_mass_cat0"); data->Print("") // --> RooDataHist::roohist_data_mass_cat0[CMS_hgg_mass] = 160 bins (1449 weights)
// so, we have a binned dataset, whose variable is called CMS_hgg_mass:
RooRealVar *mass = cms_hgg_workspace->var("CMS_hgg_mass"); mass->Print("");
// RooRealVar::CMS_hgg_mass = 140 L(100 - 180)

// we can make a plot of the dataset with the following

RooPlot *plot = mass->frame(); data->plotOn(plot); plot->Draw();
%ENDCODE%

%CODE{"cpp"}% TFile *sig = TFile::Open("hgg.inputsig_8TeV_MVA.root");
RooAbsPdf *ggH = wsig_8TeV->pdf("hggpdfrel_ggh_cat0"); ggH->Print(""); // --> RooAddPdf::hggpdfrel_ggh_cat0[ hist_func_frac_g0_ggh_cat0 * hgg_gaus_g0_ggh_cat0 + hist_func_frac_g1_ggh_cat0 * hgg_gaus_g1_ggh_cat0 + const_func_frac_g2_ggh_cat0 * hgg_gaus_g2_ggh_cat0 ] = 0.604097 // this appears to be a linear combination of multiple Gaussian pdfs (hgg_gaus_g0_ggh_cat0, hgg_gaus_g1_ggh_cat0, ...) with coefficients hist_func_frac_g0_ggh_cat0, hist_func_frac_g1_ggh_cat0
// let's get the list of the parameters that describe it
RooArgSet *params = ggH->getParameters(*data); params->Print(""); // --> RooArgSet::parameters = (CMS_hgg_globalscale,CMS_hgg_nuissancedeltamcat0,CMS_hgg_nuissancedeltasmearcat0,MH)
// MH is a special parameter, which combine and text2workspace set to the Higgs mass hypothesis. // now since we're just looking into the input workspace, we can set it by hand
wsig_8TeV->var("MH")->setVal(125.7);
// Now we can make a plot of the pdf, and show also the contributions from the different gaussians from which it is composed
RooPlot *plot = wsig_8TeV->var("CMS_hgg_mass")->frame(); ggH->plotOn(plot); ggH->plotOn(plot, RooFit::Components("hgg_gaus_g0_ggh_cat0"), RooFit::LineColor(kRed)); ggH->plotOn(plot, RooFit::Components("hgg_gaus_g1_ggh_cat0"), RooFit::LineColor(209)); ggH->plotOn(plot, RooFit::Components("hgg_gaus_g2_ggh_cat0"), RooFit::LineColor(222)); plot->Draw();
%ENDCODE%

%CODE{"cpp"}% TFile *sig = TFile::Open("hgg.inputsig_8TeV_MVA.root");
wsig_8TeV->var("MH")->setVal(125.7); RooAbsPdf *ggH = wsig_8TeV->pdf("hggpdfrel_ggh_cat0"); RooAbsReal *ggH_norm = wsig_8TeV->function("hggpdfrel_ggh_cat0_norm"); cout << ggH_norm->getVal()*19620.0000 << endl; // --> 12.3902
%ENDCODE%

This approach can also be used to make a background freely floating, by associating to it a floating RooRealVar as normalization term. However, this can be achieved in a more transparent way by instead putting a normalization uncertainty on that background using a flat pdf: e.g. to leave the background floating between 50% and 200% of its input prediction, a lnU systematic can be used with κ=2 (lnU has a syntax like lnN, but produces a uniform pdf between 1/κ and κ rather than a log-normal; see SWGuideHiggsAnalysisCombinedLimit)
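The lnN/lnU convention can be sketched numerically: both multiply the yield by κ^θ, and they differ only in the pdf assumed for θ (Gaussian for lnN, uniform in [−1, 1] for lnU, so the lnU yield floats freely between 1/κ and κ times the input prediction). This is a schematic illustration of the convention, not combine's internal code:

```python
def scaled_yield(nominal, kappa, theta):
    # both lnN and lnU multiply the nominal yield by kappa**theta;
    # they differ in the pdf assumed for theta (Gaussian vs uniform in [-1, 1])
    return nominal * kappa ** theta

nominal, kappa = 100.0, 2.0  # lnU with kappa = 2
lo = scaled_yield(nominal, kappa, -1.0)  # lower edge of the flat range
hi = scaled_yield(nominal, kappa, +1.0)  # upper edge of the flat range
print(f"background floats freely between {lo:.0f} and {hi:.0f} events")
```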

The part of the H→γγ datacard related to the systematics starts with many lines of log-normals that should already be familiar, except possibly for the notation with two numbers separated by a slash (e.g. 0.950/1.099). This notation is used for asymmetrical uncertainties: a log-normal with 0.950/1.099 means that at -1σ the yield is scaled down by a factor 0.95, while at +1σ the yield is scaled up by a factor 1.099.
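One can check what the 0.950/1.099 asymmetric log-normal does to a yield at a few values of the nuisance parameter. The sketch below uses a simple κ^θ extrapolation with the appropriate κ on each side; combine interpolates more smoothly between the two regimes:

```python
def asym_lognormal_scale(kappa_down, kappa_up, theta):
    """Scale factor for an asymmetric log-normal written 'kd/ku' in the card:
    at theta = -1 the yield is multiplied by kd, at theta = +1 by ku."""
    if theta >= 0:
        return kappa_up ** theta
    return kappa_down ** (-theta)

kd, ku = 0.950, 1.099
for theta in (-2, -1, 0, 1, 2):
    print(f"theta = {theta:+d}: yield scaled by {asym_lognormal_scale(kd, ku, theta):.3f}")
```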

The last part of the datacard contains some lines that use a different syntax, e.g.


CMS_hgg_globalscale  param  0.0 0.004717
These lines encode uncertainties on the parameters of the signal and background pdfs. The example line quoted here informs text2workspace that the parameter CMS_hgg_globalscale is to be assigned a Gaussian uncertainty of ±0.004717 around its mean value of zero (0.0).
The effect can be visualized from RooFit, e.g.

%CODE{"cpp"}% TFile *sig = TFile::Open("hgg.inputsig_8TeV_MVA.root");

wsig_8TeV->var("MH")->setVal(125.7); RooAbsPdf *ggH = wsig_8TeV->pdf("hggpdfrel_ggh_cat0");
// prepare the canvas
RooPlot *plot = wsig_8TeV->var("CMS_hgg_mass")->frame();
// plot nominal pdf
ggH->plotOn(plot, RooFit::LineColor(kBlack));
// plot minus 3 sigma pdf
wsig_8TeV->var("CMS_hgg_globalscale")->setVal(-3*0.004717); ggH->plotOn(plot, RooFit::LineColor(kBlue));
// plot plus 3 sigma pdf
wsig_8TeV->var("CMS_hgg_globalscale")->setVal(+3*0.004717); ggH->plotOn(plot, RooFit::LineColor(kRed)); plot->Draw();
%ENDCODE%

%CODE{"cpp"}% TFile *fDat = TFile::Open("hgg.inputbkgdata_8TeV_MVA.root");
cout << cms_hgg_workspace->data("roohist_data_mass_cat0")->sumEntries() << endl;
// .... and so on %ENDCODE%
%CODE{"cpp"}% TFile *fSig = TFile::Open("hgg.inputsig_8TeV_MVA.root");
wsig_8TeV->var("MH")->setVal(125.7); cout << wsig_8TeV->function("hggpdfrel_ggh_cat0_norm")->getVal()*19620.0000 << endl; cout << wsig_8TeV->function("hggpdfrel_vbf_cat0_norm")->getVal()*19620.0000 + wsig_8TeV->function("hggpdfrel_wh_cat0_norm")->getVal()*12753.0000 + wsig_8TeV->function("hggpdfrel_zh_cat0_norm")->getVal()*6867.0000 << endl; // .... and so on
%ENDCODE%

category N(ggH) N(VBF+VH)
The interesting difference however is the observed dataset:

%CODE{"cpp"}%

gFile = TFile::Open("hzz4l_4muS_8TeV_0.input.root");
RooAbsData *data = w->data("data_obs"); data->Print("") // --> RooDataSet::data_obs[CMS_zz4l_mass,melaLD,CMS_zz4l_PToverM] = 13 entries
%ENDCODE%

In the H→γγ case the dataset was a RooDataHist, with one entry per bin, each entry having a weight equal to the number of events in that bin. Here instead it is a RooDataSet, with one entry per event.
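The practical consequence of the two dataset types shows up in the likelihood: a RooDataHist enters a binned (Poisson-per-bin) likelihood, while a RooDataSet enters an unbinned one, where the pdf is evaluated at each event's coordinates. A toy illustration of the structure in plain Python (hypothetical pdf and data, just to show the difference):

```python
import math

def binned_nll(observed_counts, expected_counts):
    """Poisson negative log-likelihood summed over bins (constant terms
    dropped), as used for a binned dataset like a RooDataHist."""
    return sum(e - n * math.log(e) for n, e in zip(observed_counts, expected_counts))

def unbinned_extended_nll(events, pdf, n_expected):
    """Extended unbinned NLL: the pdf is evaluated at each event's
    coordinates, as for a RooDataSet with one entry per event."""
    return n_expected - sum(math.log(n_expected * pdf(x)) for x in events)

# toy example: flat pdf on [0, 1], 3 observed events
flat_pdf = lambda x: 1.0
print(unbinned_extended_nll([0.1, 0.5, 0.9], flat_pdf, 3.0))
print(binned_nll([2, 1], [1.5, 1.5]))
```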

Let's explore its contents, both building some binned 1D projections and by making a 2D scatter plot.

%CODE{"cpp"}%

w->var("CMS_zz4l_mass")->Print(""); // --> RooRealVar::CMS_zz4l_mass = 114.157 L(104 - 142) B(200) RooPlot *mass = w->var("CMS_zz4l_mass")->frame(RooFit::Bins(19)); data->plotOn(mass); mass->Draw();

RooPlot *mela = w->var("melaLD")->frame(RooFit::Bins(10)); data->plotOn(mela); mela->Draw();

TGraph *points = new TGraph(data->numEntries()); for (int i = 0; i < data->numEntries(); ++i) { const RooArgSet *ev = data->get(i); points->SetPoint(i, ev->getRealValue("CMS_zz4l_mass"), ev->getRealValue("melaLD")); } points->SetMarkerStyle(20); points->GetXaxis()->SetTitle("m_{4l} (GeV)"); points->GetYaxis()->SetTitle("K_{D}"); points->Draw("AP");
%ENDCODE%

1D (mass) 1D (MELA KD) 2D

## Combining datacards

Now we can open the hww2j.root file in ROOT and see its contents. Start a root prompt with the command root.exe -b -l -n and
%CODE{"cpp"}% // Load the libraries needed to interpret the likelihood gSystem->Load("libHiggsAnalysisCombinedLimit.so"); // Open the file gFile = TFile::Open("hww2j.root"); // Retrieve the workspace from the file RooWorkspace *w = (RooWorkspace *) gFile->Get("w"); // Retrieve the ModelConfig from the workspace RooStats::ModelConfig *mc = (RooStats::ModelConfig *) w->genobj("ModelConfig");

// Now we get the list of parameters of the workspace mc->GetParametersOfInterest()->Print("V"); // the output will be // 1) RooRealVar:: r = 1

// Now we get the expected signal yield for ggH and DYTT cout << w->function("n_exp_final_binof_2j_proc_ggH")->getVal() << endl; cout << w->function("n_exp_final_binof_2j_proc_DYTT")->getVal() << endl; // the output will be 1.44, and 1.879, as expected

// Now we change the value of r to be 2.0... we'd expect the ggH yield to grow to 2.88 and the DYTT to remain the same w->var("r")->setVal(2.0); cout << w->function("n_exp_final_binof_2j_proc_ggH")->getVal() << endl; // ----> 2.88 cout << w->function("n_exp_final_binof_2j_proc_DYTT")->getVal() << endl; // ----> 1.879

// We can also get the number of observed events, that should match the "observation" line in the datacard cout << w->data("data_obs")->sumEntries() << endl; // ----> 24
%ENDCODE%

### One-dimensional fits and profile likelihood

Line: 880 to 894
The output file from combine will contain a tree with a single entry, corresponding to this value.
%CODE{"cpp"}% // root.exe -b -l higgsCombineHWW_FIT.MultiDimFit.mH125.7.root // Attaching file higgsCombineHWW_FIT.MultiDimFit.mH125.7.root as _file0... root [1] limit->Scan("r"); //************************ //* Row * r * //************************ //* 0 * 0.5986483 * //************************
%ENDCODE%

Now, to compare different values of μ, we will use the negative log of the profile likelihood function λ
q(\mu) = - 2 \ln \lambda(\mu) = - 2 \ln \frac{L(\,\mathrm{data}\,|\, \mu\cdot s(\hat{\theta}_\mu) + b(\hat{\theta}_\mu)\,)}{L(\,\mathrm{data}\,|\, \hat{\mu}\cdot s(\hat{\theta}) + b(\hat{\theta})\,)}
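To make the definition concrete, here is a minimal standalone Python 3 sketch (not part of the combine package) of q(μ) for a single-bin counting experiment with no nuisance parameters, so the profiling reduces to plugging in the global best fit. The yields s, b, n are hypothetical, loosely inspired by the counting card above.

```python
import math

def nll(n, mu, s, b):
    """Negative log-likelihood of a Poisson counting experiment
    with expected yield mu*s + b (n-independent constants dropped)."""
    lam = mu * s + b
    return lam - n * math.log(lam)

def q(mu, n, s, b):
    """q(mu) = -2 ln lambda(mu); with no nuisance parameters the
    profiled fits are just the single global best fit mu_hat."""
    mu_hat = max(0.0, (n - b) / s)  # best fit, bounded at zero
    return 2.0 * (nll(n, mu, s, b) - nll(n, mu_hat, s, b))

s, b, n = 1.44, 22.0, 24          # hypothetical yields
print(q((n - b) / s, n, s, b))    # ~0 at the minimum by construction
print(q(0.0, n, s, b))            # any other mu gives q > 0
```

By construction q vanishes at the best fit and grows as μ moves away from it, which is exactly the shape the scans below display.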

Line: 898 to 913
%CODE{"cpp"}% // root -l higgsCombineHWW_SCAN_MU.MultiDimFit.mH125.7.root
limit->GetEntries() // (const Long64_t)101

limit->Scan("r:2*deltaNLL") //************************************ //* Row * r * 2*deltaNL * //************************************ //* 0 * 0.5986473 * 0 * //* 1 * 0.0149999 * 5.9730005 * //* 2 * 0.0450000 * 5.3444776 * //* 3 * 0.0750000 * 4.7531828 * //* 4 * 0.1049999 * 4.1995768 * //* 5 * 0.1350000 * 3.6845009 *

limit->Draw("2*deltaNLL:r");
%ENDCODE%
The quality of the default plot is poor, so we will retrieve that information as a TGraph object and make it a bit prettier; also, we will zoom into the region closer to the minimum, which is more interesting.
%CODE{"cpp"}% // Draw limit->Draw("2*deltaNLL:r");

// Make a copy of the graph TGraph *graph = gROOT->FindObject("Graph")->Clone();

// sort the points for increasing values of r graph->Sort();

// set the style graph->SetLineWidth(2); graph->SetLineColor(214); graph->SetMarkerStyle(7); graph->SetTitle("Likelihood scan as function of #mu;#mu;-2 #Delta ln L"); graph->GetXaxis()->SetRangeUser(0,2); graph->GetYaxis()->SetRangeUser(0,8); c1->SetGridy();

// draw as a smooth line plus the points graph->Draw("APC");
%ENDCODE%

Now, from the scan you could read the boundaries of the ±1σ interval by looking at where q(μ) crosses the horizontal line at 1. However, since that's a very common task, for models with just one parameter combine can do that for you using the MaxLikelihoodFit module
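Conceptually, reading off those boundaries is just a threshold-crossing search with linear interpolation between scan points. A minimal standalone Python 3 sketch on a hypothetical parabola-shaped scan (the real input would be the (r, 2*deltaNLL) pairs from the tree):

```python
def crossings(points, threshold=1.0):
    """Find where a likelihood scan crosses a threshold by linear
    interpolation between adjacent (r, q) scan points."""
    out = []
    for (r1, q1), (r2, q2) in zip(points, points[1:]):
        if (q1 - threshold) * (q2 - threshold) < 0:  # sign change across the line
            out.append(r1 + (threshold - q1) * (r2 - r1) / (q2 - q1))
    return out

# hypothetical scan: parabola with minimum at 0.6 and sigma = 0.25
scan = [(0.1 * i, ((0.1 * i - 0.6) / 0.25) ** 2) for i in range(21)]
lo, hi = crossings(scan)
print(lo, hi)  # close to 0.35 and 0.85, i.e. mu_hat -/+ 1 sigma
```

With a finer scan grid the interpolated crossings converge to the exact ±1σ boundaries.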


combine -M MaxLikelihoodFit comb_hww.root -m 125.7 -n HWW_FIT_MU --justFit --robustFit=1

Line: 1027 to 1045
class="twikiHelp" }%
%CODE{"cpp"}% /// root -l higgsCombineHWW_SCAN_MU.MultiDimFit.mH125.7.root higgsCombineHWW_SCAN_MU_ASIMOV.MultiDimFit.mH125.7.root // Attaching file higgsCombineHWW_SCAN_MU.MultiDimFit.mH125.7.root as _file0... // Attaching file higgsCombineHWW_SCAN_MU_ASIMOV.MultiDimFit.mH125.7.root as _file1... _file0->cd(); limit->Draw("2*deltaNLL:r"); TGraph *graph = gROOT->FindObject("Graph")->Clone(); graph->Sort();

_file1->cd(); limit->Draw("2*deltaNLL:r"); TGraph *graphA = gROOT->FindObject("Graph")->Clone(); graphA->Sort();

graph->SetLineWidth(3); graph->SetLineColor(1); graphA->SetLineWidth(1); graphA->SetLineColor(4); graphA->SetLineStyle(2); graph->Draw("AC"); graphA->Draw("C SAME");

graph->SetTitle("Likelihood scan as function of #mu;#mu;-2 #Delta ln L"); graph->GetXaxis()->SetRangeUser(0,2); graph->GetYaxis()->SetRangeUser(0,8);
%ENDCODE%

</>

<--/twistyPlugin-->
Line: 1101 to 1121
A simple plot from the output of the bidimensional scan can be done with

%CODE{"cpp"}% // plot the likelihood in the 20x20 grid, selecting the interesting region of small deltaNLL

limit->Draw("2*deltaNLL:RV:RF>>h2(20,-1,3,20,-2,5)","deltaNLL > 0 && deltaNLL < 11","PROF COLZ");

// make the plot less horrible gStyle->SetOptStat(0); h2->SetContour(200); h2->SetTitle("Profile Likelihood in 2D;#mu_{ggH,ttH};#mu_{VBF,VH}");
/// Now we add the best fit point
limit->Draw("RV:RF","deltaNLL == 0","P SAME"); TGraph *bestFit = gROOT->FindObject("Graph"); bestFit->SetMarkerStyle(34); bestFit->SetMarkerSize(3); c1->Update();
%ENDCODE%
Contours for different confidence levels can be constructed starting from the 2D likelihood scan, using the thresholds for a chi-square distribution with two degrees of freedom (pdf review). In particular, the 1σ region is given by q ≤ 2.30, the 95% CL one by q ≤ 5.99 and the 3σ one by q ≤ 11.83; the C++ function ROOT::Math::chisquared_quantile(cl,ndf) can be used to compute the threshold for each CL.
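As a quick cross-check of these numbers, for two degrees of freedom the chi-square quantile has a closed form, q = -2 ln(1 - CL), so the thresholds can be reproduced without ROOT in a standalone Python 3 sketch:

```python
import math

def chi2_quantile_2dof(cl):
    # For ndf = 2 the chi-square CDF is 1 - exp(-q/2),
    # so the quantile inverts in closed form to q = -2 ln(1 - CL)
    return -2.0 * math.log(1.0 - cl)

for cl in (0.6827, 0.95, 0.9973):  # 1 sigma, 95% CL, 3 sigma
    print("CL = %.4f  ->  q = %.2f" % (cl, chi2_quantile_2dof(cl)))
# reproduces the 2.30, 5.99 and 11.83 thresholds quoted above
```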
Line: 1133 to 1153
%CODE{"cpp"}% // root -l higgsCombineHGG_RVRF_SCAN_2D.MultiDimFit.mH125.7.root contours2D.cxx // Attaching file higgsCombineHGG_RVRF_SCAN_2D.MultiDimFit.mH125.7.root as _file0... // Processing contours2D.cxx...
// before we had done // limit->Draw("2*deltaNLL:RV:RF>>h2(20,-1,3,20,-2,5)","deltaNLL > 0 && deltaNLL < 11","PROF COLZ");
// now we do contour2D("RF",20,-1.,3., "RV",20,-2.,5., 1.,1.); %ENDCODE%
Line: 1149 to 1170
combine -M MultiDimFit rVrF_hgg.root -m 125.7 --algo=grid --points=50 --setPhysicsModelParameterRanges RF=-1,3 -P RF --floatOtherPOI=1 -n HGG_RVRF_SCAN_1D_RF_PROF
%CODE{"cpp"}% // root -l higgsCombineHGG_RVRF_SCAN_1D_RF_* /// Attaching file higgsCombineHGG_RVRF_SCAN_1D_RF_FIX.MultiDimFit.mH125.7.root as _file0... /// Attaching file higgsCombineHGG_RVRF_SCAN_1D_RF_PROF.MultiDimFit.mH125.7.root as _file1... _file0->cd(); limit->Draw("2*deltaNLL:RF"); TGraph *graphFix = gROOT->FindObject("Graph")->Clone(); graphFix->Sort();

_file1->cd(); limit->Draw("2*deltaNLL:RF"); TGraph *graphProf = gROOT->FindObject("Graph")->Clone(); graphProf->Sort();

graphProf->SetLineWidth(3); graphProf->Draw("AC");

graphFix->SetLineWidth(2); graphFix->SetLineStyle(9); graphFix->SetLineColor(206); graphFix->Draw("C SAME");
%ENDCODE%

Line: 1219 to 1242
%ENDCODE%

 H → WW H → ZZ H → γγ H → bb H → ττ
A macro to make all the plots including the summary one is below:
%CODE{"cpp"}% // run with root.exe -b -l tdrstyle.cc contours2D.cxx plotall_rvrf.cxx -q void plotall_rvrf() { TCanvas *c1 = new TCanvas("c1","c1"); TFile *fOut = TFile::Open("rvrf_all.root", "RECREATE"); const int decays = 5; const char *decay[decays] = { "hww", "hzz", "hgg", "hbb", "htt" }; const int color[decays] = { 4 , 2 , 209 , 67 , 221 }; for (int i = 0; i < decays; ++i) { gFile = TFile::Open(Form("higgsCombine%s_RVRF_SCAN_2D.MultiDimFit.mH125.7.root", decay[i])); contour2D("RF",60,-1,3,"RV",60,-2,5,1.0,1.0,fOut,Form("rvrf_%s",decay[i])); c1->Print(Form("output_rvrf_%s.png",decay[i])); gFile->Close(); } fOut->cd(); TH2F *frame = new TH2F("frame",";#mu_{ggH,ttH};#mu_{VBF,VH}",60,-1.,3.,60,-2.,5.); frame->Draw("AXIS"); for (int i = 0; i < decays; ++i) { TGraph *best = fOut->Get(Form("rvrf_%s_best",decay[i])); best->SetMarkerColor(color[i]); best->SetMarkerStyle(34); // apparently ROOT forgets this best->SetMarkerSize(2.0); // so we do this again best->Draw("P SAME"); TList *c68 = fOut->Get(Form("rvrf_%s_c68",decay[i])); styleMultiGraph(c68, /*color=*/color[i], /*width=*/3, /*style=*/1); c68->Draw("L SAME"); } TMarker m; m.SetMarkerSize(3.0); m.SetMarkerColor(97); m.SetMarkerStyle(33); m.DrawMarker(1.0,1.0); m.SetMarkerSize(1.8); m.SetMarkerColor(89); m.SetMarkerStyle(33); m.DrawMarker(1.0,1.0); c1->Print("output_rvrf_all.png"); fOut->Close(); }
%ENDCODE%
</>
<--/twistyPlugin-->
Line: 1244 to 1266
The loop-induced effective couplings of the Higgs boson to gluons and to photons are computed in terms of κV, κf by assuming that only the SM particles participate in the loop.
• for the gluon coupling, at the leading order the process is induced only by loops containing quarks, so the coupling will scale as κf
• for the photon coupling, the W and quark loops both contribute, and interfere destructively in the SM. The effective coupling scales as |α κV + β κf|, with α ~ 1.28 and β ~ -0.28 (for a SM Higgs boson mass of 125.7 GeV)
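As a quick numerical check of the interference just described, at the SM point (κV = κf = 1) the effective photon coupling modifier is 1 by construction, while flipping the relative sign of the couplings enhances the rate. A standalone Python 3 sketch using the α, β values quoted above:

```python
ALPHA, BETA = 1.28, -0.28  # W and quark loop weights quoted above (mH = 125.7 GeV)

def kgamma_eff(kV, kf):
    """Effective H->gamma gamma coupling modifier: the W and quark
    loops interfere, destructively in the SM."""
    return abs(ALPHA * kV + BETA * kf)

print(kgamma_eff(1.0, 1.0))   # 1.0: the SM point by construction
print(kgamma_eff(1.0, -1.0))  # a relative sign flip *enhances* the coupling
```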
Cross sections and partial widths scale quadratically with the couplings, so in this model we'll have e.g. σVBF = κV2·σVBFSM and Γbb = κf2·ΓbbSM
In this model we assume that there are no BSM decays, so
\Gamma_\mathrm{tot} = \Gamma_\mathrm{ff} + \Gamma_\mathrm{gg} + \Gamma_\mathrm{VV} = |\kappa_\mathrm{f}|^2(\Gamma_\mathrm{ff}^\mathrm{SM} + \Gamma_\mathrm{gg}^\mathrm{SM}) + |\kappa_\mathrm{V}|^2\Gamma_\mathrm{VV}^\mathrm{SM}
where we have neglected the γγ and Zγ decays, which contribute only at the few-per-mille level. We can then divide by the SM width, so that we can express a scale factor for the total width κH in terms of SM BRs
\kappa_\mathrm{H}^2 := \frac{\Gamma_\mathrm{tot}}{\Gamma_\mathrm{tot}^\mathrm{SM}} = \ |\kappa_\mathrm{f}|^2 (\mathrm{BR}_\mathrm{ff}^\mathrm{SM} + \mathrm{BR}_\mathrm{gg}^\mathrm{SM}) + |\kappa_\mathrm{V}|^2\mathrm{BR}_\mathrm{VV}^\mathrm{SM}
Since in the SM for a 125.7 GeV Higgs we have BR(H→VV) = 0.254, to a good approximation
\kappa_\mathrm{H}^2 = \frac14 |\kappa_\mathrm{V}|^2 + \frac34 |\kappa_\mathrm{f}|^2
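This approximation is easy to check numerically. A standalone Python 3 sketch using the BR(H→VV) value quoted above:

```python
BR_VV_SM = 0.254  # SM BR(H->VV) at mH = 125.7 GeV, as quoted above

def kH2(kV, kf):
    """Total width scale factor kappa_H^2, neglecting the small
    gamma gamma and Z gamma partial widths."""
    return (1.0 - BR_VV_SM) * kf**2 + BR_VV_SM * kV**2

def kH2_approx(kV, kf):
    """The quarter/three-quarter approximation above."""
    return 0.25 * kV**2 + 0.75 * kf**2

print(kH2(1.0, 1.0))          # 1.0 at the SM point
print(kH2(1.2, 0.8))          # exact coefficients...
print(kH2_approx(1.2, 0.8))   # ...versus the 1/4 + 3/4 approximation
```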

• Calculate the scale factors for σ·BR of ggH with H→ZZ; VBF with H→WW; VBF with H→γγ; VH with H→bb; ttH with H→bb. Express the results in terms of κV, κf, κH, α, β.
Line: 1261 to 1283
class="twikiHelp" }%
 ggH, H→ZZ κf2 · κV2 / κH2 VBF, H→WW κV2 · κV2 / κH2 VBF, H→γγ κV2 · |ακV+βκf|2 / κH2 VH, H→bb κV2 · κf2 / κH2 ttH, H→bb κf2 · κf2 / κH2
It is instructive to note that in this benchmark model the search for VH, H→bb provides exactly the same kind of measurement as ggH, H→ZZ (or ggH, H→VV). The same is true also for another channel which is often intuitively associated with fermion couplings, VBF H→ττ. </>
<--/twistyPlugin-->
Line: 1280 to 1302
class="twikiHelp" }%
Process Scale factor dN/dκf2 dN/dκV2
 ggH, H→ZZ κf2 · κV2 / κH2 0.25 0.75 VBF, H→WW κV2 · κV2 / κH2 -0.75 1.75 VBF, H→γγ κV2 · |ακV+βκf|2 / κH2 -1.03 2.03 VH, H→bb κV2 · κf2 / κH2 0.25 0.75 ttH, H→bb κf2 · κf2 / κH2 1.25 -0.25

Among the channels considered, the ones with the strongest dependency on κV2 and κf2 are respectively VBF H→γγ and ttH, H→bb. More generally, you can deduce that the strongest dependency is achieved when considering final states with the same coupling both at production and decay (VBF and VH production with H→γγ,VV; ggH and ttH production with H→bb, ττ), while the channels with mixed production and decay have smaller coefficients, since there is a partial cancellation between the κf2 in the numerator and the ¾κf2 in the denominator (from κH2).
</>
<--/twistyPlugin-->
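The coefficients in the table above can be reproduced by differentiating the logarithm of each scale factor with respect to κf2 and κV2 at the SM point. A standalone Python 3 sketch using finite differences and the κH2 ≈ ¼κV2 + ¾κf2 approximation (only two representative channels are shown):

```python
import math

def kH2(kV2, kf2):
    # quarter/three-quarter approximation for the total width scale
    return 0.25 * kV2 + 0.75 * kf2

# scale factors written as functions of the *squared* couplings
scales = {
    "ggH,hzz": lambda kV2, kf2: kf2 * kV2 / kH2(kV2, kf2),
    "ttH,hbb": lambda kV2, kf2: kf2 * kf2 / kH2(kV2, kf2),
}

def dlogN(scale, wrt, eps=1e-6):
    """d ln N / d kappa^2 at the SM point, by central finite differences."""
    if wrt == "kf2":
        up, dn = scale(1.0, 1.0 + eps), scale(1.0, 1.0 - eps)
    else:
        up, dn = scale(1.0 + eps, 1.0), scale(1.0 - eps, 1.0)
    return (math.log(up) - math.log(dn)) / (2 * eps)

print(round(dlogN(scales["ggH,hzz"], "kf2"), 3))  # 0.25, as in the table
print(round(dlogN(scales["ggH,hzz"], "kV2"), 3))  # 0.75
print(round(dlogN(scales["ttH,hbb"], "kf2"), 3))  # 1.25
print(round(dlogN(scales["ttH,hbb"], "kV2"), 3))  # -0.25
```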

#### Implementation in combine

Line: 1351 to 1375
class="twikiHelp" }%
%CODE{"cpp"}% // run with root.exe -b -l tdrstyle.cc contours2D.cxx plotcvcf_hww.cxx -q void fillMultiGraph(TList *tmg, int fillColor, int fillStyle) { for (int i = 0; i < tmg->GetSize(); ++i) { TGraph *g = (TGraph *) tmg->At(i); g->SetFillColor(fillColor); g->SetFillStyle(fillStyle); } } void plotcvcf_hww() { TCanvas *c1 = new TCanvas("c1","c1"); TFile *fOut = TFile::Open("kvkf_plots_hww.root", "RECREATE"); const int states = 3; const char *state[states] = { "hww", "hww0j", "hww2j" }; for (int i = 0; i < states; ++i) { gFile = TFile::Open(Form("higgsCombine%s_CVCF_SCAN_2D_ASIMOV.MultiDimFit.mH125.7.root", state[i])); contour2D("CV",45,0.,2., "CF",45,0.,2., 1.0,1.0, fOut,Form("kvkf_%s",state[i])); gFile->Close(); } fOut->cd(); TH2F *frame = new TH2F("frame",";#kappa_{V};#kappa_{f}", 45,0.,2., 45,0.,2.); frame->GetXaxis()->SetDecimals(1); frame->GetYaxis()->SetDecimals(1); frame->Draw("AXIS");

TList *c68_comb = (TList *) fOut->Get("kvkf_hww_c68"); styleMultiGraph(c68_comb, /*color=*/1, /*width=*/3, /*style=*/1); fillMultiGraph(c68_comb, /*color=*/kGray, /*style=*/1001);

TList *c68_0j = (TList *) fOut->Get("kvkf_hww0j_c68"); styleMultiGraph(c68_0j, /*color=*/4, /*width=*/3, /*style=*/1); fillMultiGraph(c68_0j, /*color=*/4, /*style=*/3005);

TList *c68_2j = (TList *) fOut->Get("kvkf_hww2j_c68"); fillMultiGraph(c68_2j, /*color=*/2, /*style=*/3004); styleMultiGraph(c68_2j, /*color=*/2, /*width=*/3, /*style=*/1);

c68_2j->Draw("F SAME"); c68_0j->Draw("F SAME"); c68_comb->Draw("F SAME");

c68_2j->Draw("L SAME"); c68_0j->Draw("L SAME"); c68_comb->Draw("L SAME");

TGraph *best = fOut->Get("kvkf_hww_best"); best->SetMarkerColor(1); best->SetMarkerStyle(34); // apparently ROOT forgets this best->SetMarkerSize(2.0); // so we do this again best->Draw("P SAME");

TMarker m; m.SetMarkerSize(3.0); m.SetMarkerColor(97); m.SetMarkerStyle(33); m.DrawMarker(1.0,1.0); m.SetMarkerSize(1.8); m.SetMarkerColor(89); m.SetMarkerStyle(33); m.DrawMarker(1.0,1.0); TLegend *leg = new TLegend(0.20,0.80,0.45,0.95); leg->SetTextFont(42); leg->SetShadowColor(0); leg->SetFillColor(0); leg->AddEntry(best, "Comb.", "P"); leg->AddEntry(c68_comb->At(0), "(68% CL)", "LF"); leg->AddEntry(c68_0j->At(0), "0 jets", "LF"); leg->AddEntry(c68_2j->At(0), "2 jets", "LF"); leg->Draw(); c1->Print("output_kvkf_hww.png"); fOut->Close(); }
%ENDCODE%

</>

<--/twistyPlugin-->
Line: 1401 to 1425
showimgleft="/pub/TWiki/TWikiDocGraphics/toggleopen-small.gif" hideimgleft="/pub/TWiki/TWikiDocGraphics/toggleclose-small.gif" class="twikiHelp"
}% Γtot / ΓtotSM = 0.566·κb2 + 0.2541·κV2 + 0.0851·κg2 + 0.0623·κτ2 + 0.0285·κt2 + 0.00388·κγ2
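The coefficients above are just the SM branching fractions, so they sum to ~1 and the SM point gives Γtot/ΓtotSM = 1. A standalone Python 3 sketch of the total width scale factor:

```python
# SM BR coefficients of the total width quoted above (mH = 125.7 GeV)
BR = {"kb": 0.566, "kV": 0.2541, "kg": 0.0851, "ktau": 0.0623,
      "kt": 0.0285, "kgamma": 0.00388}

def width_scale(**kappas):
    """Gamma_tot / Gamma_tot^SM for a set of coupling modifiers;
    any modifier not passed defaults to its SM value of 1."""
    return sum(br * kappas.get(name, 1.0) ** 2 for name, br in BR.items())

print(width_scale())        # ~1.000 at the SM point
print(width_scale(kb=1.5))  # the bb partial width dominates the total
```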
</>
<--/twistyPlugin-->

#### Implementation in combine

Line: 1426 to 1450
## BRs, normalized to the SM ones: they scale as (coupling/coupling_SM)^2 / (total/total_SM) self.modelBuilder.factory_('expr::c6_BRscal_hvv("@0*@0/@1", kV, c6_Gscal_tot)') self.modelBuilder.factory_('expr::c6_BRscal_htt("@0*@0/@1", ktau, c6_Gscal_tot)') self.modelBuilder.factory_('expr::c6_BRscal_hbb("@0*@0/@1", kbottom, c6_Gscal_tot)') self.modelBuilder.factory_('expr::c6_BRscal_hgg("@0*@0/@1", kgamma, c6_Gscal_tot)') if self.doHZg: self.modelBuilder.factory_('expr::c6_BRscal_hzg("@0*@0/@1", kZgamma, c6_Gscal_tot)')
def getHiggsSignalYieldScale(self,production,decay,energy): name = "c6_XSBRscal_%s_%s" % (production,decay) print '[LOFullParametrization::C6]' print name, production, decay, energy if self.modelBuilder.out.function(name) == None: XSscal = "kgluon" if production in ["WH","ZH","VH","qqH"]: XSscal = "kV" if production == "ttH": XSscal = "ktop" BRscal = "hgg" if decay in ["hbb", "htt"]: BRscal = decay if decay in ["hww", "hzz"]: BRscal = "hvv" if self.doHZg and decay == "hzg": BRscal = "hzg" self.modelBuilder.factory_('expr::%s("@0*@0 * @1", %s, c6_BRscal_%s)' % (name, XSscal, BRscal)) return name
## The lines below are instead in HiggsCouplings.py from LOFullParametrization import C5, C6, C7, PartialWidthsModel c6 = C6() %ENDCODE%

#### Working example 1: κγ

We will take the full combination but excluding the H→inv search, and extract the κγ modifier, profiling the others. However, to profile them correctly we need to select ranges large enough that the parameters don't end up on the boundary. We can look at the past CMS Higgs combination, HIG-13-005, to see what the sensitivity to the other parameters was, e.g. from the summary plot

In addition to the observed outcome, we will also compute the Asimov one

text2workspace.py comb.txt -m 125.7  -P HiggsAnalysis.CombinedLimit.HiggsCouplings:c6 -o c6_comb.root --for-fit

RANGES="--setPhysicsModelParameterRanges kbottom=0,5:ktop=0,8:ktau=0,4:kgluon=0,4:kgamma=0,3"


Line: 1472 to 1499

#### Working example 2: κg vs κb degeneracy

The channels with the best sensitivity to the signal strength are mostly driven by the gluon fusion production cross section (e.g. H→WW, H→ZZ, H→γγ, boosted H→ττ). Conversely, H→bb is the dominant decay mode, so the total width is mostly sensitive to κb. The combination of these two features results in a strong correlation between the uncertainties of the two measurements of κg and κb: a simultaneous increase of both couplings would produce an increase in the gluon fusion production cross section but also a decrease of the branching ratio into the other modes (WW, ZZ, γγ and ττ), leaving the expected σ·BR unchanged, and so still providing a good fit to the data.
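This cancellation can be illustrated numerically with the width coefficients quoted earlier: for any κg, choosing κb such that the total width grows by the same factor as the gluon fusion cross section leaves the expected ggH, H→WW rate unchanged. A standalone Python 3 sketch of this toy (the small gg partial width is kept at its SM value for simplicity):

```python
import math

# bb coefficient of the total width (quoted earlier, mH = 125.7 GeV);
# all other partial widths are lumped together at their SM values
BR_BB, BR_REST = 0.566, 1.0 - 0.566

def ggH_hww_rate(kg, kb, kV=1.0):
    """sigma(ggH) x BR(H->WW), normalized to the SM, in a toy model
    where only kg and kb are varied."""
    width = BR_BB * kb**2 + BR_REST
    return kg**2 * kV**2 / width

kg = 1.3
# choose kb so that the total width scales exactly like kg^2
kb = math.sqrt((kg**2 - BR_REST) / BR_BB)
print(ggH_hww_rate(1.0, 1.0))  # 1: the SM point
print(ggH_hww_rate(kg, kb))    # also 1: the scaled point fits equally well
```

The whole (κg, κb) direction defined this way is therefore degenerate for ggH-driven channels, which is what the banana-shaped 2D scan below shows.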
We can see this feature with a 2D likelihood scan:
>
>

./parallelScan.py c6_comb.root -m 125.7 -M MultiDimFit --algo=grid -P kgluon -P kbottom --floatOtherPOI=1 $RANGES --points=400 -n C6_comb_SCAN_2D_kgluon_kbottom

Line: 1500 to 1529

A macro to make the summary plot is below:

%CODE{"cpp"}% // run with root.exe -b -l tdrstyle.cc contours2D.cxx plotcvcf_all.cxx -q void fillMultiGraph(TList *tmg, int fillColor, int fillStyle) { for (int i = 0; i < tmg->GetSize(); ++i) { TGraph *g = (TGraph *) tmg->At(i); g->SetFillColor(fillColor); g->SetFillStyle(fillStyle); } } void plotcvcf_all() { TCanvas *c1 = new TCanvas("c1","c1"); TFile *fOut = TFile::Open("kvkf_plots_all.root", "RECREATE"); const int decays = 6; const char *decay[decays] = { "hbb", "htt", "hzz", "hww", "hgg", "comb" }; const int color[decays] = { 67 , 221 , 2 , 4 , 209 , 1 }; const int fill[decays] = { 228 , 223 , 208 , 216 , 211 , 203 }; TList *c68[decays] = { 0,0,0,0,0, 0 }; for (int i = 0; i < decays; ++i) { gFile = TFile::Open(Form("higgsCombine%s_CVCF_SCAN_2D.MultiDimFit.mH125.7.root", decay[i])); contour2D("CV",45,0.,2., "CF",45,0.,2., 1.0,1.0, fOut,Form("kvkf_%s",decay[i])); gFile->Close(); } fOut->cd(); fOut->ls(); for (int i = 0; i < decays; ++i) { c68[i] = (TList *) fOut->Get(Form("kvkf_%s_c68",decay[i])); styleMultiGraph(c68[i], color[i], 3, 1); fillMultiGraph(c68[i], fill[i], 1001); } TH2F *frame = new TH2F("frame",";#kappa_{V};#kappa_{f}", 45,0.,2., 45,0.,2.); frame->GetXaxis()->SetDecimals(1); frame->GetYaxis()->SetDecimals(1); frame->Draw("AXIS"); for (int i = 0; i < decays; ++i) { c68[i]->Draw("F SAME"); } for (int i = 0; i < decays; ++i) { c68[i]->Draw("L SAME"); } TGraph *best = fOut->Get("kvkf_comb_best"); best->SetMarkerColor(1); best->SetMarkerStyle(34); // apparently ROOT forgets this best->SetMarkerSize(2.0); // so we do this again best->Draw("P SAME"); TMarker m; m.SetMarkerSize(3.0); m.SetMarkerColor(97); m.SetMarkerStyle(33); m.DrawMarker(1.0,1.0); m.SetMarkerSize(1.8); m.SetMarkerColor(89); m.SetMarkerStyle(33); m.DrawMarker(1.0,1.0); c1->Print("output_kvkf_all.png"); fOut->Close(); } %ENDCODE%

</>
<--/twistyPlugin-->

Line: 1543 to 1573

kgluon : +1.039 kgamma : +1.143 Done in 0.11 min (cpu), 0.11 min (real)
combine c6_comb.root -M MultiDimFit --algo=singles -m 125.7 $RANGES  >>> including systematics >>> method used to compute upper limit is MultiDimFit >>> random number generator seed is 123456  Line: 1617 to 1645 class="twikiHelp" }% Changed: < < %CODE{"python"}% class C6I(SMLikeHiggsModel): "as C6 but including direct searches for H->invis" def __init__(self): SMLikeHiggsModel.__init__(self) # not using 'super(x,self).__init__' since I don't understand it self.floatMass = False self.doHZg = False def setPhysicsOptions(self,physOptions): for po in physOptions: if po.startswith("higgsMassRange="): self.floatMass = True self.mHRange = po.replace("higgsMassRange=","").split(",") print 'The Higgs mass range:', self.mHRange if len(self.mHRange) = 2: raise RuntimeError, "Higgs mass range definition requires two extrema" elif float(self.mHRange[0]) >= float(self.mHRange[1]): raise RuntimeError, "Extrama for Higgs mass range defined with inverterd order. Second must be larger the first" if po = 'doHZg': self.doHZg = True def doParametersOfInterest(self): self.modelBuilder.doVar("kV[1,0.0,2.0]") self.modelBuilder.doVar("ktau[1,0.0,2.0]") self.modelBuilder.doVar("ktop[1,0.0,2.0]") self.modelBuilder.doVar("kbottom[1,0.0,2.0]") self.modelBuilder.doVar("kgluon[1,0.0,2.0]") self.modelBuilder.doVar("kgamma[1,0.0,2.0]") self.modelBuilder.doVar("BRInv[0,0,1]") pois = 'kV,ktau,ktop,kbottom,kgluon,kgamma,BRInv' if self.doHZg: self.modelBuilder.doVar("kZgamma[1,0.0,30.0]") pois + ",kZgamma" if self.floatMass: if self.modelBuilder.out.var("MH"): self.modelBuilder.out.var("MH").setRange(float(self.mHRange[0]),float(self.mHRange[1])) self.modelBuilder.out.var("MH").setConstant(False) else: self.modelBuilder.doVar("MH[%s,%s]" % (self.mHRange[0],self.mHRange[1])) self.modelBuilder.doSet("POI",pois+',MH') else: if self.modelBuilder.out.var("MH"): self.modelBuilder.out.var("MH").setVal(self.options.mass) self.modelBuilder.out.var("MH").setConstant(True) else: self.modelBuilder.doVar("MH[%g]" 
% self.options.mass) self.modelBuilder.doSet("POI",pois) self.SMH = SMHiggsBuilder(self.modelBuilder) self.setup()
>
>
%CODE{"python"}%
class C6I(SMLikeHiggsModel):
    "as C6 but including direct searches for H->invis"
    def __init__(self):
        SMLikeHiggsModel.__init__(self) # not using 'super(x,self).__init__' since I don't understand it
        self.floatMass = False
        self.doHZg = False
    def setPhysicsOptions(self,physOptions):
        for po in physOptions:
            if po.startswith("higgsMassRange="):
                self.floatMass = True
                self.mHRange = po.replace("higgsMassRange=","").split(",")
                print 'The Higgs mass range:', self.mHRange
                if len(self.mHRange) != 2:
                    raise RuntimeError, "Higgs mass range definition requires two extrema"
                elif float(self.mHRange[0]) >= float(self.mHRange[1]):
                    raise RuntimeError, "Extrema for Higgs mass range defined in inverted order. The second must be larger than the first"
            if po == 'doHZg':
                self.doHZg = True
    def doParametersOfInterest(self):
        self.modelBuilder.doVar("kV[1,0.0,2.0]")
        self.modelBuilder.doVar("ktau[1,0.0,2.0]")
        self.modelBuilder.doVar("ktop[1,0.0,2.0]")
        self.modelBuilder.doVar("kbottom[1,0.0,2.0]")
        self.modelBuilder.doVar("kgluon[1,0.0,2.0]")
        self.modelBuilder.doVar("kgamma[1,0.0,2.0]")
        self.modelBuilder.doVar("BRInv[0,0,1]")
        pois = 'kV,ktau,ktop,kbottom,kgluon,kgamma,BRInv'
        if self.doHZg:
            self.modelBuilder.doVar("kZgamma[1,0.0,30.0]")
            pois += ",kZgamma"
        if self.floatMass:
            if self.modelBuilder.out.var("MH"):
                self.modelBuilder.out.var("MH").setRange(float(self.mHRange[0]),float(self.mHRange[1]))
                self.modelBuilder.out.var("MH").setConstant(False)
            else:
                self.modelBuilder.doVar("MH[%s,%s]" % (self.mHRange[0],self.mHRange[1]))
            self.modelBuilder.doSet("POI",pois+',MH')
        else:
            if self.modelBuilder.out.var("MH"):
                self.modelBuilder.out.var("MH").setVal(self.options.mass)
                self.modelBuilder.out.var("MH").setConstant(True)
            else:
                self.modelBuilder.doVar("MH[%g]" % self.options.mass)
            self.modelBuilder.doSet("POI",pois)
        self.SMH = SMHiggsBuilder(self.modelBuilder)
        self.setup()
    def setup(self):
Line: 1629 to 1657

        ## The invisible BR is by construction BRInv
        self.modelBuilder.factory_('expr::c6i_BRscal_hinv("@0", BRInv)')
Changed:
<
<
def getHiggsSignalYieldScale(self,production,decay,energy): name = "c6i_XSBRscal_%s_%s" % (production,decay) if self.modelBuilder.out.function(name) = None: XSscal = "kgluon" if production in ["WH","ZH","VH","qqH"]: XSscal = "kV" if production = "ttH": XSscal = "ktop" BRscal = "hgg" if decay in ["hbb", "htt", "hinv"]: BRscal = decay if decay in ["hww", "hzz"]: BRscal = "hvv" if self.doHZg and decay == "hzg": BRscal = "hzg" self.modelBuilder.factory_('expr::%s("@0*@0 * @1", %s, c6i_BRscal_%s)' % (name, XSscal, BRscal)) return name
>
>
    def getHiggsSignalYieldScale(self,production,decay,energy):
        name = "c6i_XSBRscal_%s_%s" % (production,decay)
        if self.modelBuilder.out.function(name) == None:
            XSscal = "kgluon"
            if production in ["WH","ZH","VH","qqH"]: XSscal = "kV"
            if production == "ttH": XSscal = "ktop"
            BRscal = "hgg"
            if decay in ["hbb", "htt", "hinv"]: BRscal = decay
            if decay in ["hww", "hzz"]: BRscal = "hvv"
            if self.doHZg and decay == "hzg": BRscal = "hzg"
            self.modelBuilder.factory_('expr::%s("@0*@0 * @1", %s, c6i_BRscal_%s)' % (name, XSscal, BRscal))
        return name

c6i = C6I()
%ENDCODE%
Line: 1647 to 1675

hideimgleft="/pub/TWiki/TWikiDocGraphics/toggleclose-small.gif" class="twikiHelp" }%
Added:
>
>
text2workspace.py hinv_vbf_8TeV.txt -m 125.7 -P HiggsAnalysis.CombinedLimit.LOFullParametrization:c6i -o c6i_vbfinv.root --for-fit
combine -m 125.7 -M MultiDimFit c6i_vbfinv.root -n C6I_vbfinv_SCAN_2D_kV_BRInv -P kV -P BRInv --floatOtherPOI=0 --points=400 --algo=grid3x3 $RANGES


Changed:
<
<

text2workspace.py comb_hww.txt -m 125.7  -P HiggsAnalysis.CombinedLimit.LOFullParametrization:c6i -o c6i_hww.root --for-fit


>
>
text2workspace.py comb_hww.txt -m 125.7  -P HiggsAnalysis.CombinedLimit.LOFullParametrization:c6i -o c6i_hww.root --for-fit


./parallelScan.py -M MultiDimFit c6i_hww.root -n C6I_hww_SCAN_2D_kV_BRInv -P kV -P BRInv --floatOtherPOI=0 --points=100 --algo=grid3x3 $RANGES hadd -f higgsCombineC6I_hww_SCAN_2D_kV_BRInv.MultiDimFit.mH125.7.root higgsCombineC6I_hww_SCAN_2D_kV_BRInv.*.MultiDimFit.mH125.7.root Changed: < < HWW="hww0j_8TeV=hwwof_0j_shape_8TeV.txt hww2j_8TeV=hwwof_2j_shape_8TeV.txt"  > > HWW="hww0j_8TeV=hwwof_0j_shape_8TeV.txt hww2j_8TeV=hwwof_2j_shape_8TeV.txt"  HIN="hinv_8TeV=hinv_vbf_8TeV.txt" combineCards.py -S$HWW $HIN > comb_test2.txt text2workspace.py comb_test2.txt -m 125.7 -P LOFullParametrization:c6i -o c6i_test2.root --for-fit Line: 1665 to 1690 VBF H→inv H→WW Combined Changed: < < > > </> <--/twistyPlugin--> • Using this C6Inv model and taking the combination of all datacards do a 2D likelihood scan of κV vs BRInv, profiling all other nuisance parameters; do this twice, including and not including the VBF H→Inv search in the combination. Line: 1685 to 1708 ./parallelScan.py -m 125.7 -M MultiDimFit c6i_combx.root -n C6I_combx_SCAN_2D_kV_BRInv -P kV -P BRInv --floatOtherPOI=1 --points=400 --algo=grid$RANGES ---hadd
Changed:
<
<

The solid-filled areas are the 68% and 95% CL regions for the combination not including the direct search for H→invis, the hatched areas are the 68% and 95% CL regions including it. You can appreciate how without the constraint there is a full degeneracy between the measured coupling and the invisible BR, which is then broken by the direct search.
>
>

The solid-filled areas are the 68% and 95% CL regions for the combination not including the direct search for H→invis, the hatched areas are the 68% and 95% CL regions including it. You can appreciate how without the constraint there is a full degeneracy between the measured coupling and the invisible BR, which is then broken by the direct search.
</>
<--/twistyPlugin-->
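The degeneracy described above is easy to see numerically: schematically, the rate of a visible channel in the C6I model scales like κ² × (1 − BRInv), so a larger coupling can be compensated by a larger invisible branching fraction. A minimal sketch in plain Python (the function name is illustrative, and this ignores the κ-dependence of the total width in the full model):

```python
def visible_rate(kappa, br_inv):
    # schematic visible-channel rate: cross section ~ kappa^2,
    # visible branching ratio diluted by (1 - BRInv)
    return kappa ** 2 * (1.0 - br_inv)

# two very different (kappa, BRInv) points give the same visible rate:
# visible_rate(1.0, 0.0) and visible_rate(2.0, 0.75) are both 1.0
```

This is why the direct search for H→invis, which constrains BRInv independently of the couplings, breaks the degeneracy.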
Line: 1756 to 1782
Results:
 κV κg κγ BRInv
Changed:
<
<
>
>

 κt κb κτ
Changed:
<
<
>
>
Line: 1769 to 1794
class="twikiHelp" }%
Changed:
<
<
%CODE{"cpp"}% void plot1ds_c6i() { TCanvas c1 = new TCanvas("c1","c1"); gStyle->SetOptStat(0); TFile *fOut = TFile::Open("plot1d_c6i.root", "RECREATE"); const int params = 7; const char *param[params] = { "kV", "kbottom", "ktop", "ktau", "kgluon", "kgamma", "BRInv" }; const float min[params] = { 0, 0, 0, 0, 0, 0, 0, }; const float max[params] = { 2, 5, 8, 4, 4, 3, 1, }; const char *label[params] = { "#kappa_{V}", "#kappa_{b}", "#kappa_{t}", "#kappa_{#tau}", "#kappa_{g}", "#kappa_{#gamma}","BR_{Inv}" }; for (int i = 0; i < params; ++i) { cout << param[i] << endl; gFile = TFile::Open(Form("higgsCombineC6I_combx_SCAN_1D_%s.MultiDimFit.mH125.7.root", param[i])); TTree *limit = (TTree) gFile->Get("limit"); limit->Draw(Form("2*deltaNLL:%s",param[i]),""); TGraph gO = gROOT->FindObject("Graph")->Clone(); gO->Sort(); gFile->Close(); gFile = TFile::Open(Form("higgsCombineC6I_combx_SCAN_1D_%s_ASIMOV.MultiDimFit.mH125.7.root", param[i])); limit = (TTree) gFile->Get("limit"); limit->Draw(Form("2*deltaNLL:%s",param[i]),""); TGraph *gA = gROOT->FindObject("Graph")->Clone(); gA->Sort(); gFile->Close();
>
>
%CODE{"cpp"}% void plot1ds_c6i() { TCanvas c1 = new TCanvas("c1","c1"); gStyle->SetOptStat(0); TFile *fOut = TFile::Open("plot1d_c6i.root", "RECREATE"); const int params = 7; const char *param[params] = { "kV", "kbottom", "ktop", "ktau", "kgluon", "kgamma", "BRInv" }; const float min[params] = { 0, 0, 0, 0, 0, 0, 0, }; const float max[params] = { 2, 5, 8, 4, 4, 3, 1, }; const char *label[params] = { "#kappa_{V}", "#kappa_{b}", "#kappa_{t}", "#kappa_{#tau}", "#kappa_{g}", "#kappa_{#gamma}","BR_{Inv}" }; for (int i = 0; i < params; ++i) { cout << param[i] << endl; gFile = TFile::Open(Form("higgsCombineC6I_combx_SCAN_1D_%s.MultiDimFit.mH125.7.root", param[i])); TTree *limit = (TTree) gFile->Get("limit"); limit->Draw(Form("2*deltaNLL:%s",param[i]),""); TGraph gO = gROOT->FindObject("Graph")->Clone(); gO->Sort(); gFile->Close(); gFile = TFile::Open(Form("higgsCombineC6I_combx_SCAN_1D_%s_ASIMOV.MultiDimFit.mH125.7.root", param[i])); limit = (TTree) gFile->Get("limit"); limit->Draw(Form("2*deltaNLL:%s",param[i]),""); TGraph *gA = gROOT->FindObject("Graph")->Clone(); gA->Sort(); gFile->Close();

Changed:
<
<
gO->SetName(Form("c6i_1d_%s_obs", param[i])); gA->SetName(Form("c6i_1d_%s_asimov",param[i])); fOut->WriteTObject(gO); fOut->WriteTObject(gA);
>
>
gO->SetName(Form("c6i_1d_%s_obs", param[i])); gA->SetName(Form("c6i_1d_%s_asimov",param[i])); fOut->WriteTObject(gO); fOut->WriteTObject(gA);

Changed:
<
<
TH1F *frame = new TH1F("frame",Form(";%s;- 2 #Delta ln L",label[i]), 10, min[i], max[i]); frame->Draw(); frame->GetYaxis()->SetRangeUser(0,7); gO->SetLineWidth(3); gO->SetLineColor(1); gA->SetLineWidth(2); gA->SetLineColor(1); gA->SetLineStyle(2); gO->Draw("C SAME"); gA->Draw("C SAME");
>
>
TH1F *frame = new TH1F("frame",Form(";%s;- 2 #Delta ln L",label[i]), 10, min[i], max[i]); frame->Draw(); frame->GetYaxis()->SetRangeUser(0,7); gO->SetLineWidth(3); gO->SetLineColor(1); gA->SetLineWidth(2); gA->SetLineColor(1); gA->SetLineStyle(2); gO->Draw("C SAME"); gA->Draw("C SAME");

Changed:
<
<
c1->Print(Form("output_C6I_1D_%s.png", param[i])); delete frame; } fOut->Close(); }
>
>
c1->Print(Form("output_C6I_1D_%s.png", param[i])); delete frame; } fOut->Close(); }
%ENDCODE%

</>

<--/twistyPlugin-->
Line: 1794 to 1819
The definition of transverse mass mT(ll,ETmiss) used in this analysis is
mT(ll,ETmiss) := sqrt( 2 * pT(ll) * ETmiss * (1 - cos Δφ(ll,ETmiss)) )
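In code, the definition above reads as follows (a standalone sketch in plain Python; in the analysis this is of course computed from the reconstructed dilepton system and ETmiss):

```python
import math

def transverse_mass(pt_ll, met, dphi):
    # mT(ll, ETmiss) = sqrt( 2 * pT(ll) * ETmiss * (1 - cos dphi) )
    return math.sqrt(2.0 * pt_ll * met * (1.0 - math.cos(dphi)))

# back-to-back dilepton and ETmiss (dphi = pi): mT = 2*sqrt(pT*MET)
mt = transverse_mass(50.0, 50.0, math.pi)  # 100.0 GeV
```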

m(ll) Δφ(ll) mT(ll,ETmiss)
Changed:
<
<
>
>

Changed:
<
<
>
>

Line: 1880 to 1905
%CODE{"cpp"}% //// == rootlogon === { TString makeshared(gSystem->GetMakeSharedLib()); TString dummy = makeshared.ReplaceAll("-W ", ""); dummy.ReplaceAll("g++ ","g++ -std=c++0x "); dummy.ReplaceAll("-Wshadow "," "); dummy.ReplaceAll("-Wall ","-Wall -Wno-deprecated-declarations "); gSystem->SetMakeSharedLib(dummy); gSystem->Load("libGenVector.so"); // strangely this is needed if (getenv("CMSSW_BASE")) { gSystem->Load("libFWCoreFWLite.so"); gSystem->Load("libDataFormatsFWLite.so"); gSystem->SetIncludePath("-I$ROOFITSYS/include"); AutoLibraryLoader::enable(); } } %ENDCODE% Changed: < < %CODE{"cpp"}% //// =  makePlots_PAT.cxx++ = #include #include #include #include #include #include #include #include #include #include #include #include #include > > %CODE{"cpp"}% //// = makePlots_PAT.cxx++ = #include #include #include #include #include #include #include #include #include #include #include #include #include // hide all from ROOT #if defined( CINT) && defined( MAKECINT) #include #include #include "DataFormats/FWLite/interface/Event.h" #include "FWCore/FWLite/interface/AutoLibraryLoader.h" #include "DataFormats/Common/interface/Handle.h" #include "DataFormats/Math/interface/deltaR.h" #include "DataFormats/PatCandidates/interface/Muon.h" #include "DataFormats/PatCandidates/interface/Electron.h" #include "DataFormats/PatCandidates/interface/MET.h" #include "FWCore/Utilities/interface/InputTag.h" #endif Line: 1914 to 1939 stackPlots(hmll_sm, hmll_2p, "mll"); stackPlots(hdphill_sm, hdphill_2p, "dphill"); stackPlots(hmT_sm, hmT_2p, "mT"); stackPlots(hm2d_sm, hm2d_2p, "m2d"); Changed: < < output->Close(); } > > output->Close(); } /* Note: you can also compile it with gcc, though it's a bit a pain in the neck gcc -std=c++0x -Wno-deprecated -Wno-deprecated-declarations -lstdc++$(root-config --cflags --ldflags --libs) -lGenVector -I $CMSSW_RELEASE_BASE/src -L$CMSSW_RELEASE_BASE/lib/$SCRAM_ARCH -lDataFormatsPatCandidates -lDataFormatsFWLite -lDataFormatsCommon -lFWCoreUtilities 
-lFWCoreFWLite -lDataFormatsMath -I$(scramv1 tool tag clhepheader INCLUDE) makePlots_PAT.cxx ; ./a.out */ int main(int argc, char **argv) { makePlots_PAT(); } %ENDCODE%
Line: 1934 to 1959
You can find an example of such distributions below, for the ZZ+WW combination performed in HIG-13-005:
Changed:
<
<
>
>

### Measures of separation and the CLs criterion

Line: 1953 to 1978

• post-fit: what you expect after matching the signal yield to the best-fit signal yield in data.
To illustrate the difference between pre-fit and post-fit, look at the results below for the ZZ+WW combination performed in HIG-13-005. The left panel shows the pre-fit expected distributions, while the right panel displays the expected distributions after knowing the best-fit signal normalisation (as well as the observed likelihood ratio in data):
Changed:
<
<
>
>
It can be seen that the when going from left to right, the distributions P(q|A) and P(q|N) become closer, less discernible, and the separation is smaller. This is due to the fact that the WW channel has a deficit of signal.

Finally, in the following table you can find the results of the hypothesis test performed for HIG-13-005, including all the quantities discussed above:

Changed:
<
<
>
>

Line: 1974 to 1999
class="twikiHelp" }% The answer is deceivingly simple: since P( q < qobs | N ) < 0.5, one needs to have signed deviations to note that qobs is to the left of the median of P( q | N ). In this case, since P( q < qobs | N ) < 0.5 and P( q > qobs | A ) > 0.5, it means that the observation is somewhere in between the qexp(A) and qexp(N) as can be seen in the following figure:
Changed:
<
<
>
>
which is part of the plots for the individual ZZ and WW channels.
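The tail probabilities quoted above are simply fractions of the toy values of q on either side of the observation. A minimal sketch with illustrative toy arrays (not the real toy distributions, where thousands of toys are thrown per hypothesis):

```python
def tail_probabilities(q_null, q_alt, q_obs):
    # fraction of null-hypothesis toys below the observation,
    # and fraction of alternative-hypothesis toys above it
    p_below_null = sum(1 for q in q_null if q < q_obs) / float(len(q_null))
    p_above_alt = sum(1 for q in q_alt if q > q_obs) / float(len(q_alt))
    return p_below_null, p_above_alt

# an observation sitting between the bulks of the two toy distributions
p_n, p_a = tail_probabilities([1.0, 2.0, 3.0, 4.0], [-4.0, -3.0, -2.0, 2.0], 1.5)
```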
Line: 1982 to 2007

## Practice

Changed:
<
<
For the spin-2 (2+m in this case) vs spin-0 (0+) hypotheses test, the full H→WW→2a„¡°2ν analysis includes 4 channels corresponding to {0-jet,1-jet}⊗{7 TeV, 8 TeV}. For simplicity we will perform the exercise on the 0-jet 8 TeV data card.
>
>
For the spin-2 (2+m in this case) vs spin-0 (0+) hypothesis test, the full H→WW→2ℓ2ν analysis includes 4 channels corresponding to {0-jet,1-jet}⊗{7 TeV, 8 TeV}. For simplicity we will perform the exercise on the 0-jet 8 TeV datacard.

head jcp_hwwof_0j_8TeV.txt

[...]
Observation 5747


Line: 2004 to 2027
%CODE{"cpp"}% //Compare main signal and background
Changed:
<
<
histo_ggH->GetXaxis()->SetRangeUser(-1,1) histo_ggH->SetLineColor(kBlack) histo_ggH->DrawNormalized("H") histo_qqWW->SetLineColor(kGray) histo_qqWW->DrawNormalized("Hsame")
>
>
histo_ggH->GetXaxis()->SetRangeUser(-1,1) histo_ggH->SetLineColor(kBlack) histo_ggH->DrawNormalized("H") histo_qqWW->SetLineColor(kGray) histo_qqWW->DrawNormalized("Hsame")
%ENDCODE%
Changed:
<
<
>
>
You will notice multiple peaks. These correspond to an unrolled 2D histogram encoding the information of mT and mll, as explained in https://twiki.cern.ch/twiki/bin/view/CMSPublic/Hig12042TWiki. Within each peak mll increases, while successive peaks correspond to larger mT values.
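The unrolling itself is just a flattening of the 2D (mT, mll) binning into a single 1D index, so that each mT slice becomes one "peak" with mll increasing inside it. A sketch of the bookkeeping (bin conventions here are illustrative, not the exact ones of the analysis):

```python
def unrolled_bin(i_mt, i_mll, n_mll):
    # flatten 2D bin indices (i_mt, i_mll) into one 1D index:
    # consecutive mll bins first, then the next mT slice
    return i_mt * n_mll + i_mll

def rolled_back(bin_1d, n_mll):
    # invert the flattening: recover (i_mt, i_mll)
    return divmod(bin_1d, n_mll)
```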

You can compare the plots you obtained with the following:

Changed:
<
<
>
>
Let's now compare the different signal model shapes:
Deleted:
<
<

root -l jcp_hwwof_0j.input_8TeV.root

Line: 2026 to 2048
%CODE{"cpp"}% //Define a region where it is easy to see the differences between the alternate signals
Changed:
<
<
histo_ggH->;GetXaxis()->SetRangeUser(-1,0) histo_ggH->SetLineColor(kBlack) histo_ggH->DrawNormalized("H") histo_ggH_ALT->SetLineColor(kBlue) histo_ggH_ALT->DrawNormalized("Hsame") histo_qqbarH_ALT->SetLineColor(kRed) histo_qqbarH_ALT->DrawNormalized("Hsame")
>
>
histo_ggH->GetXaxis()->SetRangeUser(-1,0)
histo_ggH->SetLineColor(kBlack)
histo_ggH->DrawNormalized("H")
histo_ggH_ALT->SetLineColor(kBlue)
histo_ggH_ALT->DrawNormalized("Hsame")
histo_qqbarH_ALT->SetLineColor(kRed)
histo_qqbarH_ALT->DrawNormalized("Hsame")
%ENDCODE%
Changed:
<
<
>
>
You can see that the shapes are different, with different hypotheses having more or less signal in the different "peaks" (which correspond to different mT values), as shown in:
Changed:
<
<
>
>

### Building a nested model

Line: 2168 to 2187
combine jcp_hww_2poi.root -M MultiDimFit -m 125.7 -P x --floatOtherPOI=1 --algo=grid -n JCP_X_SCAN_Asimov --points=200 -t -1 --expectSignal=1 --setPhysicsModelParameters x=0,r=1
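Schematically, the nested model scanned here interpolates the signal between the two hypotheses, with x=0 corresponding to pure 0+ and x=1 to pure 2+m, on top of an overall signal strength r. A toy sketch of the per-bin expected signal yield (names are illustrative, not the actual workspace code):

```python
def nested_signal_yield(n_sm, n_alt, x, r=1.0):
    # linear admixture of the 0+ (SM-like) and 2+m (ALT) templates,
    # scaled by the overall signal strength r
    return r * ((1.0 - x) * n_sm + x * n_alt)

# x = 0 gives the SM template, x = 1 the ALT one, intermediate x mixes them
y = nested_signal_yield(10.0, 6.0, 0.5)  # 8.0
```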
Changed:
<
<

From there you can see that the data prefers the scalar hypothesis (x=0), and in particular the events are even more scalar-like than what expected on average for a scalar (the likelihood approaches zero with a non-zero slope). You can also see that for larger x the a-priori expected curve grows faster for increasing x (i.e. it has a smaller that the a-priori sensitivity is better) but that is in part compensated by the fact that the observed starts faster. For a gaussian likelihood, a difference in 2*Δ log L of about 3 would correspond to a p-value of about 10%, which is in the same ballpark as the CLs number; a precise comparison cannot be made, since the two kinds of tests are different and since we're considering a variable at the boundary of the interval in which it is defined (and so it is not really expected to behave as a gaussian)
>
>

From there you can see that the data prefers the scalar hypothesis (x=0), and in particular the events are even more scalar-like than what is expected on average for a scalar (the likelihood approaches zero with a non-zero slope). You can also see that the a-priori expected curve grows faster for increasing x (i.e. the a-priori sensitivity is better), but that is in part compensated by the fact that the observed curve starts rising faster. For a gaussian likelihood, a difference in 2*Δ log L of about 3 would correspond to a p-value of about 10%, which is in the same ballpark as the CLs number; a precise comparison cannot be made, since the two kinds of tests are different and since we're considering a variable at the boundary of the interval in which it is defined (and so it is not really expected to behave as a gaussian)
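As a quick cross-check of that "about 10%" figure: in the gaussian (asymptotic) approximation, 2*Δ log L behaves as a χ² with one degree of freedom, whose tail probability can be written with the standard library alone (boundary caveats ignored, as noted above):

```python
import math

def pvalue_chi2_1dof(q):
    # survival function of a chi-square with 1 dof: P(chi2 > q) = erfc(sqrt(q/2))
    return math.erfc(math.sqrt(q / 2.0))

p = pvalue_chi2_1dof(3.0)  # ~0.083, the "about 10%" ballpark quoted above
```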
In all the results above, fqq is not floated: it is a constant parameter set at zero. This means that everything we have done up to now tests gg→2+m vs 0+.
Line: 2193 to 2213
The output is for the two cases is:
Changed:
<
<
 pre-fit expected post-fit expected 0.058 Â¡À 0.011 0.008 Â¡À 0.004 0.207 Â¡À 0.020 0.074 Â¡À 0.012 0.089 Â¡À 0.017 0.031 Â¡À 0.010
>
>
 pre-fit expected post-fit expected 0.058 ± 0.011 0.008 ± 0.004 0.207 ± 0.020 0.074 ± 0.012 0.089 ± 0.017 0.031 ± 0.010
This means that:
• a 2+m graviton produced in qqbar annihilation would be easier to distinguish from the SM Higgs than a 2+m graviton produced in gluon fusion;
• also in the qqbar case the post-fit sensitivity is worse than the pre-fit one (understandable, since we had already seen that the post-fit signal strength for the SM is below unity, and the 2+m shapes are not that different);
• also in the qqbar case the observed exclusion is stronger than the expectation, meaning that the data is more SM-like than graviton-like (it is the same data, and the expected distributions for the graviton in the two production modes lie on the same side of the SM, both preferring smaller mT and larger m(ll)).
Changed:
<
<
Again, one can see that also from looking at the likelihood as function of x in th two cases:
>
>
Again, one can see that also from looking at the likelihood as a function of x in the two cases:
</>
<--/twistyPlugin-->
Line: 2224 to 2242
and let's explore:
Changed:
<
<
%CODE{"cpp"}% q->Print()
>
>
%CODE{"cpp"}% q->Print()
%ENDCODE%
>
>

******************************************************************************
*Tree    :q         : Test statistics                                        *
*Entries :     1993 : Total =           42889 bytes  File  Size =       8799 *


Line: 2256 to 2275
The type encodes which hypothesis the value of q corresponds to (the observation has type zero). To see that, run:

%CODE{"cpp"}%

Changed:
<
<
>
>
%ENDCODE%
Changed:
<
<
>
>
And finally, you can quickly visualise the results of the hypothesis test by running:

%CODE{"cpp"}%

Changed:
<
<
gPad->SetLogy(kFalse) q->Draw("-2*q>>hnull(50,-10,10)","type>0","goff") q->Draw("-2*q>>halt(50,-10,10)", "type<0","goff") q->Draw("-2*q>>hobs(50,-10,10)", "type==0","goff") hnull->SetLineColor(kBlue) halt->SetLineColor(kOrange) hobs->SetLineColor(kBlack) hobs->Scale(1e3) hobs->SetFillColor(kBlack) hnull->Draw() halt->Draw("same") hobs->Draw("same")
>
>
gPad->SetLogy(kFALSE)
q->Draw("-2*q>>hnull(50,-10,10)","type>0","goff")
q->Draw("-2*q>>halt(50,-10,10)", "type<0","goff")
q->Draw("-2*q>>hobs(50,-10,10)", "type==0","goff")
hnull->SetLineColor(kBlue)
halt->SetLineColor(kOrange)
hobs->SetLineColor(kBlack)
hobs->Scale(1e3)
hobs->SetFillColor(kBlack)
hnull->Draw()
halt->Draw("same")
hobs->Draw("same")
%ENDCODE%

Note how the variable q needs to be multiplied by -2 in order to obtain the test-statistic -2*ln(L(A)/L(N)).

Changed:
<
<
>
>

Deleted:
<
<
--

<--/commentPlugin-->