\chapter{Method}\label{cha:meth}
This chapter describes the methodology used to perform the tasks set out in \sectionref{sec:goals}.
\newpage
\section{Validation of smearing functions}
\subsection{Smearing functions}\label{sec:vali:subsec:smear}
One might assume that a full Monte Carlo simulation could easily model and emulate the whole process, from collision to detection and reconstruction in the upgraded LHC. This is possible, but it requires a great deal of computing power. Instead, a single generator-level simulation can be combined with a mathematical model to estimate the detector response. This approach was validated and used in this thesis to create the data needed for the further analysis.
This was done by simulating proton-proton collisions with Monte Carlo and applying the official truth-to-reco code, also known as the smearing functions, developed from previous studies \citep{ATL-PHYS-PUB-2013-004} to model how the detector and the reconstruction are affected by the increased luminosity and the pile-up that comes with it.
The code uses the experimental data from these previous studies to smear the reconstructed energies and momenta; it is from this that the name smearing functions comes.
The code does not, however, alter the direction of the momenta. Other experimental data show that only jets and $E^{miss}_T$ are affected by pile-up, as discussed further in \subsectionref{sec:vali:subsec:vali}.
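As an illustration, the core of such a smearing step can be sketched as below. This is a minimal sketch assuming a Gaussian response; the function names are illustrative and do not reproduce the actual truth-to-reco code, and \texttt{sigmaE} stands in for one of the resolution parametrisations given in \subsectionref{sec:vali:subsec:vali}.
\begin{verbatim}
// Minimal sketch of Gaussian smearing (illustrative, not the
// official truth-to-reco code).
#include <cmath>
#include <cstdio>
#include <random>

double sigmaE(double E) {
    // Electron resolution for |eta| < 1.4:
    // 0.3 (+) 0.1*sqrt(E) (+) 0.01*E, where (+) denotes addition
    // in quadrature and E is in GeV.
    return std::sqrt(0.3 * 0.3 + 0.01 * E + 0.0001 * E * E);
}

double smear(double Etruth, std::mt19937& rng) {
    // Draw the reconstructed energy from a Gaussian centred on the
    // truth energy; the direction of the momentum is left unchanged.
    std::normal_distribution<double> gauss(Etruth, sigmaE(Etruth));
    return gauss(rng);
}

int main() {
    std::mt19937 rng(42);
    std::printf("smeared E = %.2f GeV\n", smear(100.0, rng));
}
\end{verbatim}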
\subsection{Validation}\label{sec:vali:subsec:vali}
The samples used to validate each reconstructed object were taken from the following processes:
\begin{table}[h]
\begin{center}
\begin{tabular}{|l|l|}
\hline
Entity & Process \\ \hline
Electron & W$\rightarrow e\nu$ \\
Muon & W$\rightarrow \mu \nu$ \\
$\gamma$ & $\gamma$ + jet sample \\
Tau & W$\rightarrow \tau \nu$ \\
Jets & Jet sample \\
$E_T^{miss}$ & Z$\rightarrow \nu \nu$ + jet sample \\ \hline
\end{tabular}
\caption{Processes used to validate the smearing of each reconstructed object.}
\label{tab:ValProc}
\end{center}
\end{table}
That only jets and $E_T^{miss}$ depend on pile-up can be seen from the resolutions obtained with these samples: the lepton and photon resolutions show no pile-up dependence. The expected response was taken from \citep{ATL-PHYS-PUB-2013-004}. Running the smearing code on these common background processes confirmed that it produces resolutions consistent with those stated in the Letter of Intent for the Phase 2 upgrade.
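The validation itself amounts to comparing the spread of the smeared quantities with the published parametrisations. A minimal sketch of such a check follows; it is illustrative, and the actual validation code may differ.
\begin{verbatim}
// Compute the rms of the relative residual (Ereco - Etruth)/Etruth
// over a sample, to be compared with the published resolution.
#include <cmath>
#include <vector>

double relativeRms(const std::vector<double>& truth,
                   const std::vector<double>& reco) {
    double sum = 0.0, sum2 = 0.0;
    const std::size_t n = truth.size();
    for (std::size_t i = 0; i < n; ++i) {
        const double r = (reco[i] - truth[i]) / truth[i];
        sum += r;
        sum2 += r * r;
    }
    const double mean = sum / n;
    return std::sqrt(sum2 / n - mean * mean);
}
\end{verbatim}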
To validate the smearing code, comparisons were made with \citep{ATL-PHYS-PUB-2013-004}, which gives the following parametrisations of the expected rms, where $\oplus$ denotes addition in quadrature (a worked example follows the table and notes below):
\begin{table}[h]
\renewcommand{\arraystretch}{1.5} % Increase row height
\begin{center}
\begin{tabular}{|l|l|}
\hline
Process & Absolute rms \\ \hline
Electron \& photon & $\sigma=0.3\oplus 0.1\sqrt{E(GeV)}\oplus 0.01E(GeV)$, $|\eta|<$ 1.4 \\
 & $\sigma=0.3\oplus 0.15\sqrt{E(GeV)}\oplus 0.015E(GeV)$, 1.4 $<|\eta|<$ 2.47 \\ \hline
Muon & $\sigma=\frac{\sigma_{id} \sigma_{ms}}{\sigma_{id} \oplus \sigma_{ms}}$\\
 & $\sigma_{id}=P_T(a_1 \oplus a_2 P_T)$\\
 & $\sigma_{ms}=P_T(\frac{b_0}{P_T} \oplus b_1 \oplus b_2 P_T)$\\ \hline
Tau & $\sigma =(0.03\oplus \frac{0.76}{\sqrt{E(GeV)}})E(GeV)$ \\ \hline
Jet & $\sigma = P_T(GeV)(\frac{N}{P_T} \oplus \frac{S}{\sqrt{P_T}} \oplus C)$ \\ \hline
$E_T^{miss}$ & $\sigma = (0.4+0.09\sqrt{\mu})\sqrt{\sum E(GeV)+20\mu}$ \\ \hline
\end{tabular}
\caption{Expected absolute rms of the reconstructed quantities, from \citep{ATL-PHYS-PUB-2013-004}.}
\label{tab:SmearRms}
\end{center}
\renewcommand{\arraystretch}{1.0} % Restore row height
\end{table}
\begin{itemize}
\item For tau: the parametrisation is fixed to three-prong decays; a one-prong parametrisation exists but was not used in this thesis. Here, prong refers to the number of charged tracks from which the tau was reconstructed.
\item For jets: $N$, $S$, and $C$ depend on $\eta$, and $N$ additionally depends on the simulated pile-up. $\eta$ is the pseudorapidity discussed in \subsectionref{sec:eo:subsec:coord}.
\item All parameter values can be found in \citep{ATL-PHYS-PUB-2013-004}.
\end{itemize}
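As a worked example of how the parametrisations in \tableref{tab:SmearRms} are read, consider a central electron ($|\eta| < 1.4$) with $E = 100$ GeV:
\[
\sigma = 0.3 \oplus 0.1\sqrt{100} \oplus 0.01 \cdot 100 = \sqrt{0.3^2 + 1.0^2 + 1.0^2}~\mathrm{GeV} \approx 1.45~\mathrm{GeV},
\]
i.e.\ a relative resolution of about 1.5\,\%. Likewise, for $E_T^{miss}$ at pile-up $\mu = 140$ and an illustrative $\sum E = 1000$ GeV,
\[
\sigma = (0.4 + 0.09\sqrt{140})\sqrt{1000 + 20 \cdot 140}~\mathrm{GeV} \approx 90~\mathrm{GeV}.
\]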
Note that an rms can be quoted as relative or absolute; this distinction caused some difficulty when comparing with the published parametrisations, and is returned to in the discussion.
\section{Evaluating dark matter signals}
The main goal of this thesis is to investigate whether certain dark matter signals can be detected after the high-luminosity upgrade. One immediate worry is that the background will be large in comparison to the signal, making the signal undetectable.
The signal models used are those introduced in the introduction: effective operator models, in particular the D5 operator, and light vector mediator models, each treated in its own subsection below.
Each of these models has been evaluated in different signal regions, and the detectability has been quantified using a statistical p-value. This procedure was repeated at different pile-up values.
The background consists of the Standard Model processes listed in \tableref{tab:Compare1}, dominated by Z$\rightarrow\nu\nu$ and W$\rightarrow\ell\nu$ production in association with jets, simulated with Monte Carlo. All histograms were filled with $E_T^{miss}$, with each event weighted as in the analysis code (\texttt{main.C} and \texttt{mainclass.C}).
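A minimal sketch of such a per-event weight is given below; its exact form is an assumption, since the definition lives in \texttt{main.C}, but the standard choice scales each sample to the target luminosity.
\begin{verbatim}
// Sketch of a per-event luminosity weight (assumed form; the
// exact definition is in main.C / mainclass.C).
//   crossSection : process cross-section in fb
//   lumi         : target integrated luminosity in fb^-1
//   nGenerated   : number of generated MC events
double eventWeight(double crossSection, double lumi, double nGenerated) {
    // Scales the MC sample so that histogram integrals correspond
    // to the expected event yield at the target luminosity.
    return crossSection * lumi / nGenerated;
}
\end{verbatim}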
\subsection{Signal to background ratio}
For each signal model the weighted signal yield in a signal region is compared with the total weighted background yield in the same region, using the event weights described above. The signal models themselves are introduced in the introduction. Since the simulated events were already filtered before the analysis, the loosest signal regions are not considered here.
\subsection{Selection criteria}
The selection criteria define the signal regions: cuts on jet multiplicity, lepton vetoes, and thresholds on the leading jet $p_T$ and on $E_T^{miss}$ are chosen to suppress the Standard Model background while retaining as much of the signal as possible. The specific cuts used here follow the published monojet analysis and are listed in the next subsection.
\subsection{Comparing with published papers}
To verify that the background data was correct, it was compared with \citep{ATLAS-CONF-2012-147}, in which the integrated luminosity is 10 fb$^{-1}$; the expected values from the paper were therefore scaled up by a factor of 100 to the 1000 fb$^{-1}$ assumed here (for example, 270 expected events at 10 fb$^{-1}$ correspond to 27\,000 events at 1000 fb$^{-1}$). Somewhat unexpectedly, the lower centre-of-mass energy of the published analysis also meant that the cross-sections had to be lowered compared with those used for the upgrade. The signal region used in the article is defined by the following cuts (a sketch of their implementation follows the list):
\begin{itemize}
\item Jet veto: no more than 2 jets with $p_T > 30$~GeV and $|\eta| < 4.5$.
\item Lepton veto: no identified electron or muon; in addition, the leading jet must have $|\eta| < 2.0$ and the second-leading jet must satisfy $\Delta \phi (\mathrm{jet}, E_T^{miss}) > 0.5$.
\item Leading jet with $p_T > 500$~GeV and $E_T^{miss} > 500$~GeV.
\end{itemize}
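The sketch below shows how such a selection might be applied per event. The \texttt{Event} structure and its field names are hypothetical, not those of the actual analysis code.
\begin{verbatim}
// Illustrative event selection implementing the cuts above.
// Event and its fields are hypothetical; the real analysis code
// (main.C / mainclass.C) may differ.
#include <cmath>
#include <vector>

struct Jet { double pt, eta, phi; };          // GeV, -, rad
struct Event {
    std::vector<Jet> jets;                    // pT-ordered, leading first
    double metPt, metPhi;                     // E_T^miss magnitude and phi
    int nElectrons, nMuons;                   // identified leptons
};

const double kPi = 3.141592653589793;

double deltaPhi(double a, double b) {
    double d = std::fabs(a - b);
    return d > kPi ? 2.0 * kPi - d : d;
}

bool passSignalRegion(const Event& e) {
    // Count jets above threshold within acceptance.
    int nJets = 0;
    for (const Jet& j : e.jets)
        if (j.pt > 30.0 && std::fabs(j.eta) < 4.5) ++nJets;
    if (nJets > 2) return false;                        // jet veto
    if (e.nElectrons > 0 || e.nMuons > 0) return false; // lepton veto
    if (e.jets.empty() || std::fabs(e.jets[0].eta) > 2.0) return false;
    if (nJets == 2 && deltaPhi(e.jets[1].phi, e.metPhi) < 0.5)
        return false;                                   // second jet vs MET
    return e.jets[0].pt > 500.0 && e.metPt > 500.0;
}
\end{verbatim}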
The article defines several signal regions, differing only in the last item; unfortunately, since the simulated events were already filtered before the analysis, only one of the regions could be used.
\begin{table}[ht]
\begin{center}
\begin{tabular}{|l|l|l|}
\hline
Process & Simulated events & Expected events (Scaled to 1000 fb$^{-1}$) \\ \hline
Z$\rightarrow\nu\nu$&27675.1&27000 \\
W$\rightarrow\tau\nu$&6506.09&3900 \\
W$\rightarrow e\nu$&1660.06&1600 \\
W$\rightarrow\mu\nu$&2048.77&4200 \\ \hline
Total background&37890&36700 \\ \hline
\end{tabular}
\caption{Comparison of the simulated and expected events from \citep{ATLAS-CONF-2012-147}.}
\label{tab:Compare1}
\end{center}
\end{table}
In \tableref{tab:Compare1} a comparison has been made. The simulated and expected events agree in all cases except W$\rightarrow\tau\nu$ and W$\rightarrow\mu\nu$, and consequently the total as well. This can be explained by a better separation of $\mu$, $\tau$ and missing energy.
\subsection{Figures of merit}
The detectability of each signal is quantified with a statistical p-value: the probability that the background alone fluctuates up to, or above, the observed number of events. A lower p-value therefore corresponds to a more clearly detectable signal.
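For the large event counts involved, a Gaussian approximation to the Poisson fluctuation of the background is a natural choice; the sketch below is an illustration of such a p-value, not necessarily the exact figure of merit used.
\begin{verbatim}
// Illustrative p-value under a Gaussian approximation to the
// Poisson-fluctuating background (valid for large counts).
#include <cmath>

double pValue(double nObserved, double bExpected) {
    // Significance of the excess over the expected background.
    const double z = (nObserved - bExpected) / std::sqrt(bExpected);
    // One-sided tail probability of a standard Gaussian.
    return 0.5 * std::erfc(z / std::sqrt(2.0));
}
\end{verbatim}
In this approximation a signal of $s$ events on a background of $b$ corresponds to a significance of roughly $s/\sqrt{b}$.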
\subsection{D5 operators and M*}
\subsection{Light vector mediator models}
\section{Other selection criteria and observables}
\section{Mitigating the effect of the high luminosity}