\documentclass[11pt]{article}
\usepackage{geometry}
\usepackage[T1]{fontenc}
\usepackage{siunitx}
\usepackage{lineno} % \linenumbers
\usepackage{fancyhdr}
\newlength\bindingoffset
\setlength\bindingoffset{1cm}
\geometry{%
a4paper,
asymmetric, % twoside, but marginpar are always to the right
centering,
textwidth=360pt, % default LaTeX textwidth with 11pt (345pt for 10pt)
top=1.8cm,
bottom=1.8cm,
marginparwidth=3.4cm,
headsep=10pt,
footskip=17pt,
bindingoffset=\bindingoffset,
% showframe,
}
\addtolength\marginparwidth{-0.5\bindingoffset}
% \linenumbers % print line numbers (for the draft)
% small sans font for marginpar
\let\oldmarginpar\marginpar
\renewcommand\marginpar[1]{\oldmarginpar{\sffamily\scriptsize #1}}
\newlength\pagenumbermargin
\setlength\pagenumbermargin{2.9cm}
\addtolength\pagenumbermargin{-0.5\bindingoffset}
\newcommand\hdrside{RO,LE}
\newcommand\sharedstyle{%
\renewcommand\headrulewidth{0pt}
\fancyhf{}
\fancyhfoffset\pagenumbermargin
\fancyfoot[\hdrside]\thepage
}
\fancypagestyle{plain}\sharedstyle
\pagestyle{fancy}
\sharedstyle
\author{Giacomo Petrillo\\
Supervisors: Eugenio Paoloni, Simone Stracka\\
University of Pisa}
\title{Thesis abstract: Online processing of the large area SiPM detector
signals for the DarkSide20k experiment}
\begin{document}
\maketitle
DarkSide20k is a planned dual-phase liquid argon (LAr) time projection
chamber (TPC) designed to detect dark matter, the successor to DarkSide-50.
It will be the largest detector of its kind, with 20~metric tons of argon
in the fiducial volume. The predicted resulting upper bound on the
spin-independent WIMP-nucleon scattering cross-section, in case of no
discovery, is $\approx\SI{1e-47}{cm^2}$ at \SI{1}{TeV/c^2} WIMP mass, to be
compared with the current best limit $\approx\SI{1e-45}{cm^2}$ by XENON1T.
In this thesis we present reconstruction and characterization studies on
the photodetector modules (PDMs) that will be used in the TPC. These
studies are primarily meant to support the definition of the first
stages of the online processing chain.
Each PDM has a \SI{25}{cm^2} matrix of silicon photomultipliers (SiPMs),
instead of the usual photomultiplier tube (PMT). The SiPM has Geiger-mode
single photon response, i.e., each detected photon produces one fixed
amplitude pulse. Compared to a PMT, the photodetection efficiency is
expected to be higher, reaching \SI{50}{\percent} at room temperature,
along with a better filling factor, greater than \SI{85}{\percent} in
DarkSide20k compared to \SI{80}{\percent} in DarkSide-50, and a much
better single photon resolution. The pulse looks like a sharp peak
followed by a rather long exponential tail, a disadvantage because
pulses can pile up, leading to saturation.
SiPMs have three kinds of noise: 1) stationary electric noise, which scales
with the square root of the area; 2) a ``dark count rate'' (DCR) of pulses
independent of incident light, which scales with the area; 3) ``correlated
noise'', i.e., secondary pulses triggered by primary ones, which adds a
contribution proportional to the DCR and to the rate of photon-induced
pulses.
The first two stages in the readout chain will be the digitizers and the
front end processors (FEP). The digitizers find candidate pulses, and for
each one send a slice of waveform to the FEP, where the final
identification of pulses is decided. The performance of these stages is
mainly determined by the electric noise, characterized by the signal to
noise ratio (SNR), i.e., the ratio of the pulse amplitude to the noise
standard deviation. The SNR influences the fake rate, i.e., the rate of
random noise fluctuations high enough to be mistakenly identified as
pulses, and the temporal resolution of pulse detection.
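The dependence of the fake rate on the threshold can be illustrated with a
toy simulation (not the thesis analysis; the sampling frequency is the
DarkSide20k digitizer value quoted later, while the threshold and the white
Gaussian noise model are assumptions made here for illustration):

```python
# Toy sketch: estimate the fake rate of a simple threshold trigger on
# pure Gaussian noise by counting upward threshold crossings.
import numpy as np

rng = np.random.default_rng(0)
fs = 125e6            # sampling frequency, Sa/s (digitizer value)
n = 10_000_000        # number of simulated noise samples
k = 3.0               # threshold in units of the noise standard deviation

noise = rng.standard_normal(n)                   # unit-variance white noise
above = noise > k
crossings = np.count_nonzero(above[1:] & ~above[:-1])  # upward crossings
fake_rate = crossings * fs / n                   # crossings per second

# At a low threshold the rate is enormous, which is why a much higher
# threshold (e.g. 5 sigma, as in the text) is needed in practice.
print(f"fake rate at {k} sigma: {fake_rate:.3g} cps")
```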
By applying linear filters to digitized waveforms acquired from the PDMs
illuminated by a pulsed laser, both in a testing setup at Laboratori
Nazionali del Gran Sasso (LNGS) and in the small prototype TPC ``Proto0'',
we study the noise parameters of single pulse detection: SNR, temporal
resolution, fake rate.
We consider 1)~an autoregressive filter, which uses the least possible
computational resources, 2)~a matched filter without spectrum correction,
which gives almost optimal performance, 3)~a moving average, which is a
compromise between simplicity and performance. Simple filters are needed on
the digitizers, which must process all the incoming data, while the FEP
will probably use the optimal filter. We also study the baseline
computation and the filter length.
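Two of the filters compared above can be sketched in a few lines; this is a
minimal illustration, not the thesis code, and the pulse shape, tail
constant, and pre-filter SNR are assumptions made here:

```python
# Sketch of a moving average vs. a matched filter (white-noise case,
# i.e., no spectrum correction) applied to a noisy SiPM-like waveform.
import numpy as np

rng = np.random.default_rng(1)
fs = 125e6                        # sampling frequency, Sa/s
t = np.arange(2000) / fs
tau = 1e-6                        # assumed exponential tail constant
t0 = 500                          # pulse start sample
template = np.zeros_like(t)
template[t0:] = np.exp(-(t[t0:] - t[t0]) / tau)  # sharp rise, exp tail

snr_pre = 5.0                     # assumed pre-filter SNR
wave = snr_pre * template + rng.standard_normal(t.size)

# 1) moving average, 1 us long (125 samples at 125 MSa/s)
L = int(1e-6 * fs)
mavg = np.convolve(wave, np.ones(L) / L, mode="same")

# 2) matched filter: cross-correlate with the time-reversed template,
# normalized so the filtered noise keeps unit standard deviation.
kernel = template[t0:t0 + L][::-1]
kernel /= np.linalg.norm(kernel)
matched = np.convolve(wave, kernel, mode="same")

# Both filters raise the SNR by averaging the noise over ~L samples;
# the matched filter weights the samples by the pulse shape.
print(mavg.max(), matched.max())
```

The matched filter achieves the best white-noise SNR by construction, while
the moving average is cheap enough for the digitizers.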
Then, using a custom peak finder algorithm, we measure the DCR and study
the correlated noise, which consists of additional pulses produced
recursively by each pulse, divided into two main categories: afterpulses
(AP), which arrive with some delay after the parent pulse and have smaller
amplitude as the delay goes to zero, and direct cross talk (DiCT), which
manifests as an integer multiplication of the pulse amplitude because the
child pulses overlap with the parent.
The results follow.
While with an ideal filtering procedure the post-filter SNR reaches~20, it
may realistically be~13 with the resources of the digitizers, i.e., using a
moving average \SI{1}{\micro s} long for both the pulse and the baseline.
With the moving average as just described, the fake rate is \SI{10}{cps}
when the threshold is set to 5~standard deviations of the filtered noise,
where the filter includes the subtraction of the baseline. Since the
filtering procedure is not yet decided, we describe and test a general
procedure to measure low fake rates from only \SI{1}{ms} of recorded data,
without actually counting threshold crossings. Of the 25~PDMs we looked
at, all but one are within specifications; the exception has an anomalous
fake rate whose origin is still under investigation.
The temporal resolution mostly matters on the FEP, so we summarize the
results with the matched filter: 1)~upsampling is not necessary; 2)~at low
SNR the resolution diverges, at a rate that depends heavily on the noise
spectrum; with the Proto0 noise, the maximum allowed by the
specifications, \SI{10}{ns}, is reached at pre-filter SNR~2.6; 3)~it is
sufficient to send \SI{1}{\micro s} of waveform per pulse to the FEP;
4)~it is possible to lower the sampling frequency from \SI{125}{MSa/s} to
\SI{62.5}{MSa/s}. In the thesis we show detailed curves for all filter,
filter length, SNR, and sampling frequency choices.
We give upper bounds for the DCR of a \SI{25}{cm^2} SiPM tile at three
overvoltages, \SI{5.5}{V}, \SI{7.5}{V}, and \SI{9.5}{V} (the overvoltage
is the difference between the bias applied to the SiPM and the breakdown
voltage of the junction): respectively \SI{50}{cps}, \SI{170}{cps},
and~\SI{120}{cps}, to be compared with the DarkSide20k requirement of
\SI{250}{cps}. \SI{5.5}{V} is a fairly typical operating overvoltage,
while \SI{9.5}{V} is considered high. Increasing the overvoltage increases
the SNR, but also the DCR and the correlated noise.
The analysis of correlated noise, done on the same data, gives upper
bounds for the AP probabilities of \SI{2.5}{\percent}, \SI{3.5}{\percent},
and~\SI{6.5}{\percent}, and for the DiCT probabilities of
\SI{20}{\percent}, \SI{30}{\percent}, and~\SI{50}{\percent}. These are the
probabilities of each noise being generated by any given single pulse,
i.e., the stacked pulses produced by DiCT count separately. The
DarkSide20k specifications require the sum of the correlated noise
probabilities to be less than \SI{60}{\percent} to avoid performance
degradation, e.g., dynamic range reduction on ionization signals, where
there is a lot of pile-up. For this analysis we employ the common
procedures used to fit histograms with least squares, which we rederive in
the Bayesian framework in an appendix.
Measuring AP and DiCT requires models. We try the models found in the
literature and in the DarkSide20k simulation and call them into question,
but we do not search for better alternatives, since their level of
accuracy should suffice for the simulation requirements. We find that the
AP temporal distribution is well described by two exponential decays with
constants \SI{200}{ns} and \SI{1}{\micro s}, but not by a single one.
Finally, based on the peak finding algorithm used in the correlated noise
analysis, we suggest, but do not test, the following procedure to better
resolve multiple pulses: do a first pass with a short filter and pick
candidate peaks; do a second pass with a long filter, possibly picking
additional candidates; compute the pulse amplitudes by solving the linear
system for the superposition of pulses, using only the long filter peak
amplitudes; discard peaks with low amplitude and recompute. We sketch an
argument to justify this procedure.
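The amplitude step of the procedure above can be sketched as a linear
least-squares fit; this is a hedged illustration, not the thesis
implementation, and the pulse times, amplitudes, and template used here
are invented:

```python
# Given candidate pulse times and a known pulse template, the amplitudes
# of superposed (piled-up) pulses solve a linear system: one design
# matrix column per candidate pulse.
import numpy as np

rng = np.random.default_rng(2)
n = 1000
t = np.arange(n)
tau = 100.0                           # assumed tail constant, in samples

def template(t0):
    """Pulse starting at sample t0: step rise, exponential tail."""
    p = np.zeros(n)
    m = t >= t0
    p[m] = np.exp(-(t[m] - t0) / tau)
    return p

true_times = [200, 260, 600]          # two piled-up pulses plus a third
true_amps = [1.0, 0.5, 0.8]
wave = sum(a * template(t0) for a, t0 in zip(true_amps, true_times))
wave += 0.02 * rng.standard_normal(n)

# design matrix and least-squares solution for the amplitudes
A = np.column_stack([template(t0) for t0 in true_times])
amps, *_ = np.linalg.lstsq(A, wave, rcond=None)

# low-amplitude candidates would be discarded at this point and the
# system solved again with the remaining columns
print(np.round(amps, 2))
```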
\end{document}