How to generate a set of
CM2 ntuples using SimpleComposition and BtaTupleMaker
This tutorial will describe how to generate a set of analysis ntuples
using SimpleComposition and BtaTupleMaker .
These were written by Chris Roat and Chih-hsiang Cheng respectively as
a way of simplifying physics analysis on BaBar. They can be used for
many publication-quality BaBar analyses.
To do your own analysis, you will need to start from an
existing skim,
or produce your own. There are something like 120 skims available,
so you may find that an existing one is suitable. Ask your convener,
or others in your AWG. You should get your AWG and its conveners
involved at the earliest possible stage, and make regular reports
throughout your analysis. My observation is that analyses that don't
interact regularly with their AWGs never get finished.
In general, the ntuples you make should be stored on
your own disk
resources - your laptop, desktop, or university, for example. Keep two
copies - this type of disk space is cheap. (I just bought a 120 GB
external HD suitable for backups for $70 after rebate). On the other
hand, disk space at SLAC is extremely expensive (~$2k for the same
disk), and limited. (It is also way better disk.)
It is easy to produce large sets of ntuples with these
tools. If you
do this, you will be frustrated by how long it takes to make each and
every plot when analyzing them on your laptop. Work a bit with signal
and background MC to tighten your selection before making all your
ntuples. For an exclusive B decay, I would aim for a few tens of GB
at most (including both data and MC); my last couple have been a few
GB.
In this tutorial, we will start with a simple case, B+
-->
Jpsi K+ , with Jpsi --> mu+ mu- . This way we can
go through the full process without getting too bogged down in the
complexities of Composition. We will then go back and do a full B-->
Jpsi pi+ pi- K analysis, including charged and
neutral modes, and various intermediate resonances, in the second part
of this tutorial.
Before doing this workbook section, you should work
through the
introductory "Workbook core" sections. There are other sections
on Paw (1, 2)
and Root(1, 2, 3),
which may be useful. If you do not have a lot of experience with Unix,
you should definitely work through that tutorial.
You might google for an emacs tutorial as well.
This tutorial is written assuming you are running on a yakut
machine at SLAC. It is often faster to work at a different Tier A site,
but you will need to use different commands for the batch system and
for scratch space. It is based on analysis-31 and release
22.3.0 .
Commands that the user might enter are written in bold,
blue to make them easier to find on later browsing.
Note you should ssh yakut.slac.stanford.edu .
Don't
ssh to a specific yakut - different ones have different operating
systems. If you get complaints from ssh about changing IP addresses,
you need to install the full list of SLAC unix sites into your .ssh/known_hosts
file; see
http://www.slac.stanford.edu/comp/unix/ssh.html.
If you get complaints about "keys of a different kind are already known
for this host", try ssh -1 yakut.slac.stanford.edu .
OK, let's begin!
Build your executable in analysis-31 .
You will need about 150 MB of disk space for your
release,
including libraries and executable. To check your disk space, use the
command:
fs listquota
If you don't have that much, put
in a request to AFS-req before you begin.
Now type:
newrel -t analysis-31 Rel31
cd Rel31
srtpath <enter> <enter>
cond18boot
addpkg workdir
gmake workdir.setup
This will put the library and executable in your own disk
space. You should definitely do this when you are producing
ntuples for a real analysis. If you are just working through a
tutorial for a few days, you could put them in an AFS build scratch
area (e.g. /afs/slac/g/babar/build/h/hearty )
newrel -s my-build-scratch-dir -t analysis-31 Rel31
If you don't have such an area, from a flora machine (ssh flora ) run kinit and then the script bbrCreateBuildArea
to create it. You cannot use this area for ntuples or log files or
such. Conversely, you probably don't want to use your normal NFS
scratch space (e.g. $BFROOT/work/h/hearty ) for your
release because of poor
performance if someone else is heavily using the same disk space by,
for example, writing ntuples from 100 batch jobs simultaneously.
If you use a scratch area, your executable will vanish
six
days after you create it, and your jobs will all crash, and
you won't know why. This is probably OK, if you are just working
through the tutorial.
We are now ready to add tags. Check the
Extra Tags web site to see if you need any extra tags to make your
code work properly. At the time this tutorial is being written
(June 2006), there are no core bug fixes, and three optional tags for
analysis-31. The optional tags sound useful, so let's add them. We will
also pick up BtaTupleMaker, so that we can
remake the executable to make use of these new tags:
addpkg EmcCalibToo V01-00-02-01
addpkg EmcSequence V00-06-02-04-01
addpkg KanUtils V01-04-17-01
addpkg BtaTupleMaker
As the extra tags page reminds you, you need to use "checkdep" to check
for any dependencies whenever you add new tags:
checkdep
which will give you a message saying that everything is fine. In many
cases, you will have to add more tags:
Using glimpse index of 18.6.4
cvs diff: Diffing BtaTupleMaker
cvs diff: Diffing EmcCalibToo
cvs diff: Diffing EmcCalibToo/doc
cvs diff: Diffing EmcSequence
cvs diff: Diffing KanUtils
cvs diff: Diffing workdir
cvs diff: Diffing workdir/kumac
checkdep: You should add for recompilation: -- NOTHING
If you did have to add more tags, you would continue to
run checkdep and add tags until it was satisfied.
Before compiling and linking, create your .bbobjy file
in your release directory. Use the command,
GetFDID
to find the 4-digit FDID numbers assigned to you, and then make a file
called .bbobjy in Rel31, with the single line:
FD_NUMBER = ****
where **** is one of those numbers.
Now you are ready to compile and link:
bsub -q bldrecoq -o all.log gmake all

or

gmake all
Some people have had problems with gmake all due to Objectivity issues.
Unless you are using analysis-24 or 25, you don't need the database
import, so if you
know what you are doing, you can get away with
gmake lib
gmake BtaTupleMaker.bin
Afterwards, check that you actually got library and binary files for
each
package, and that the dates and sizes are reasonable. If anything is
funny, check for error messages in all.log for that
package. If you compile a second time, be sure to use a different name
than all.log, or else delete the file first. If you get confused about
what you have compiled or linked, you can always do a "gmake clean" and
start over.
ls -l lib/$BFARCH
total 9668
-rw-r--r--   1 hearty   ec   7988868 Jun  2 16:29 libBtaTupleMaker.a
-rw-r--r--   1 hearty   ec   1382736 Jun  2 16:30 libEmcCalibToo.a
-rw-r--r--   1 hearty   ec    219112 Jun  2 16:30 libEmcSequence.a
-rw-r--r--   1 hearty   ec    305852 Jun  2 16:30 libKanUtils.a
drwxr-xr-x   8 hearty   ec      2048 Jun  2 16:25 templates/
ls -l bin/$BFARCH
total 52955
-rwxr-xr-x   1 hearty   ec  54224091 Jun  2 16:31 BtaTupleApp*
-rw-r--r--   1 hearty   ec       331 Jun  2 16:33 Index
BtaTupleApp is our analysis executable (it is
the code that creates and fills the ntuple). If you used gmake all,
other binaries that you don't need are also created in the
directory. Go ahead and delete them to save some disk
space.
A typical problem is that your disk fills up and your
binary is not created. When you then run, you pick up the binary of
the same name from the release, which does not have the tags we just
added. Beware also of core files, which can be created by an
unsuccessful compilation or a failed job - these can take up
vast quantities of space and may need to be deleted to carry on. So you
should always check the size and date of the binary.
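A quick post-build check guards against both failure modes. Here is a sketch; the ls path matches the listings shown above, and the find command just sweeps core dumps out of the release directory:

```shell
# Confirm the binary exists, with today's date and a plausible size
# (BtaTupleApp is tens of MB in the listing above). If it is missing,
# check all.log for errors.
ls -l bin/$BFARCH/BtaTupleApp || echo "BtaTupleApp missing - check all.log"

# Core files from crashed jobs or failed builds quietly eat quota;
# find and delete any lurking under the release directory.
find . -name 'core*' -type f -print -exec rm -f {} \;
```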
That is it - unless new problems are found, and new tags
created, you
will not have to compile/link code again to analyze Runs 1-5 data.
Recall that every time you log on, you need to run srtpath
and cond18boot from your release directory (i.e., Rel31).
Analysis Code - General
Our core analysis code (the part used for every job we run)
consists of a single tcl file. There is an additional small
tcl file (a "snippet") that specializes the code for each
job, setting file names and so forth. There are four
components to this core code: a general part that is the
same for everyone, a tag-filter specific to this analysis,
the SimpleComposition section, and the ntuple-dumping
(BtaTupleMaker) part. These will be the topics of following
sections. Here is the code we will have at the end of the initial
section (note the non-standard file name and extension) Analysis-Simple.tcl.txt. Copy
this file to your workdir and rename it Analysis.tcl .
Let's go through the first part of Analysis.tcl .
Quotes from this file are shown in green to
distinguish them from sample output, commands that you might enter, etc.
#..Analysis.tcl
#
# Main tcl file for B+ --> Jpsi K+ [Jpsi --> mu+ mu-] workbook tutorial

#..General setup needed in all jobs
sourceFoundFile ErrLogger/ErrLog.tcl
sourceFoundFile FrameScripts/FwkCfgVar.tcl
sourceFoundFile FrameScripts/talkto.tcl
Lines starting with # are comments. sourceFoundFile
executes the tcl in the specified package or gives an error if it
can't find it. If you have checked out the package (i.e.,
you did addpkg xxx ) it will use that version, otherwise
the version in the release. The path for package xxx is workdir/PARENT/xxx .
These three files turn on error logging, and define
Framework
Configuration Variables (FwkCfgVar ), and "talkto", which
we will use to communicate with code modules.
The following are the Framework Configuration Variables
for our
analysis. They allow you to adjust parameters from one job to
another. For example, you will want to change the ntuple file name,
the number of events to run, and whether or not the input is data or
MC. The first couple (objectivity vs Kanga, and what type of data to
use) are not generally adjusted, but are here if you did need to. We
will discuss the specific meaning of each as they come up in the
tutorial.
The format is: FwkCfgVar variable_name
default_value
i.e., the default value is set in Analysis.tcl. You can
override the default in the tcl snippet for a
particular job:
set variable_name value_for_this_job
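For example, a per-job snippet might look like this (the file name is hypothetical; the variables are the FwkCfgVars declared in Analysis.tcl, and the last line assumes your Analysis.tcl sits in workdir):

```tcl
#..MyJob.tcl - hypothetical snippet for one particular job.
#  Override only what you need; everything else keeps the
#  default set by FwkCfgVar in Analysis.tcl.
set histFileName "Tutorial-job1.root"
set NEvents      10000
set FilterOnTag  "true"
set MCTruth      "false"

#..Now run the main analysis tcl
sourceFoundFile workdir/Analysis.tcl
```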
With FwkCfgVars, you will not need to use environment
variables, which were an ongoing source of problems and confusion.
#-------------- FwkCfgVars needed to control this job ---------------------
set ProdTclOnly true

#..allowed values of BetaMiniReadPersistence are "Kan", "Bdb"
FwkCfgVar BetaMiniReadPersistence Kan

#..allowed values of levelOfDetail are "micro", "cache", "extend" or "refit"
FwkCfgVar levelOfDetail "cache"

#..Select MC or Data
FwkCfgVar MCTruth "true"

#..Filter on tag bits only if requested
FwkCfgVar FilterOnTag "false"

#..Print Frequency
FwkCfgVar PrintFreq 1000

#..Ntuple type and name
FwkCfgVar BetaMiniTuple "root"
FwkCfgVar histFileName "Tutorial.root"

#..Number of Events defaults to 0 (run on full tcl file)
FwkCfgVar NEvents 0
There is a standard set of physics code you will need, which
includes particle ID and standard track definitions. It
includes a couple of things we don't need, which we might
disable later. btaMiniPhysics.tcl is the standard set to
use, but you can get more (btaMiniPhysProdSequence.tcl ) or
less (btaMini.tcl ) if it suits you better.
#--------------------------------------------------------------------------
#..General physics sequences

#..set to 'trace' to get info on a configuration problem
ErrLoggingLevel warning

#..btaMiniPhysics is the basic physics sequences
sourceFoundFile BetaMiniUser/btaMiniPhysics.tcl
At the end of the Analysis.tcl file, we use some of the
FwkCfgVars to control the actual running of the job. printFreq prints
a line into your log file at the specified interval. Useful to track
progress in batch jobs, but you don't want to bury important log file
information in millions of such lines.
If $NEvents is 0, all events in the collection are run.
If
it is negative, the "ev beg" command is not executed, so you
just get a framework prompt after Analysis.tcl is finished.
This is useful if you want to "mod talk" to a module before
running.
I generally include a "path list" so I can check the log
file to see what was actually run in the job.
#-----------------------------------------------------------------------
#..Run time options
mod talk EvtCounter
  printFreq set $PrintFreq
exit

path list

if { $NEvents>=0 } {
  ev beg -nev $NEvents
  exit
}
This general part of Analysis.tcl was derived from MyMiniAnalysis.tcl
in package BetaMiniUser. If you are switching to a new release, or want
extra information, take a look.
Analysis Code - Tag Filter (and a review of data
processing)
It is useful to briefly review data processing to put the
idea of a Tag Filter into context. Data is processed in PR
("prompt reconstruction"), which creates the AllEvents
collections from the XTC files. For new data, this is done
in Padova; for reprocessing, it can be done at SLAC or
another Tier A site as well.
At a later stage, the skim executable is run. It
includes a
block of physics code which sets a large number (>100) of
booleans (tag variables) to be true or false. Based on these
tag variables and other criteria, such as trigger or
BGFilter booleans, events are placed into various
collections called skims. Every event, whether selected by a
skim or not, is placed into the AllEventsSkim collection. It
differs from the original AllEvents collection because it
contains the tag variables set by the skim executable.
For "deep copy" skims, the event is copied into the new
collection. We can end up with multiple copies of the event
on disk, so we try to not to do this for skims that select a
significant fraction of all events. For these large skims,
we create "pointer collections", which consist of pointers
to the corresponding events, which are physically located in
the AllEventsSkim collection.
Several versions of the skim executable can be run on
the
same data set, as people get new ideas for skims. For
example, there is the original Release 14 skim of runs 1 -
4, identified by "R14" in the file name of the collection. A
small number of skims were then rerun on Runs 1 - 3, again
in Release 14, and are identified by "R14a" in the file
name. Similarly, the reskim of runs 1 - 5 in Release 16 is identified
by "R16a" in the file name. For analysis-31, you should be using R18b
or R18c.
In SP6 and earlier, the block of physics code used in
the skim executable is
also run as part of Moose (the executable used to produce MC), so the
resulting
AllEvents collections contain the corresponding booleans. Note that
the full skim executable is not run - there are no skims
created. These tag bits, of course, are the ones for the release used
to generate the MC in the first place. For example, SP6 corresponds to
the "R14a" set in data, while SP5 corresponds to Release 12. (See the skims
page for the details of each release). Therefore, the tag bit you
are interested in may not exist in a particular MC collection, or it
may exist, but correspond to a different version of the PID selectors
than the current release.
OK, we are ready to tackle Tag Filters. A Tag Filter
allows
you to check the values of the tag variables and read the
full event from disk only if the desired ones are true. This can easily
save an order of magnitude in wall-clock
time compared to running directly on the AllEventsSkim
collection in data or the AllEvents collection in SP. Of
course, it isn't useful if the tag variable doesn't exist in
the collection, such as in the data AllEvents collection. Or, as
mentioned above, it may exist in an SP AllEvents collection, but
correspond to something slightly different from what you expect.
Tag filters may not help at all if you are running on a
skim, or they may
help a bit. For example, the Jpsitoll skim includes both
Jpsi to l+l- and Psi2s to l+l- tag bits. If you are only
interested in Psi2s decays, you can gain with a tag filter.
The Tag Filter in Analysis.tcl checks for
the two Jpsi
to l+l- tag bits. The option andList set xxxx also exists. I
have set an option to crash the job if the tag bits don't exist in the
event, so that I will know there is something wrong with the
collection I am using. Note that the Tag Filter is appended to the
sequence (i.e., actually executed) only if the FwkCfgVar FilterOnTag
is equal to "true".
#----------------------------------------------------------------------------
#..Use $FilterOnTag to disable tag filtering if desired.
module clone TagFilterByName TagJpsill
module talk TagJpsill
  orList set JpsiELoose
  orList set JpsiMuLoose
  assertIfMissing set true
exit

if { $FilterOnTag=="true" } {
  sequence append BetaMiniReadSequence -a KanEventUpdateTag TagJpsill
}
The nano also contains some integers (e.g. numbers of tracks) and
floats (e.g. R2All). You can filter on these as well: see the
hypernews posting on this subject. The relevant tags are included
in analysis-31, so you don't have to add them. You can create and
append more than one filter, if, for example, you wanted to filter on
both the number of GoodTracksLoose and R2All. Here is a list of items
in the nano.
Filtering on Signal MC
Data and generic MC are both routinely skimmed, so you
should be able
to find the collections you want, containing the newest tag bits.
Signal MC (which I will define in the book keeping section later) is
not, so you need a different strategy. Here are a few options:
- Signal MC modes are skimmed if an AWG requests it.
So pass along your requests to your convener. You need to do this
before the appropriate deadline, normally in the middle of the skimming
cycle.
- It is quite straightforward to run
the Skim executable yourself. Be sure to use the same version that
was used to skim your data (see the section entitled "Skimming Releases").
- If the selection you apply in SimpleComposition when
creating your ntuple is tighter than the criteria used to create the
tag bit, and your signal MC collections are not too large, just run on
the AllEvents collection and ignore the tag bits.
- With a small amount of work, you can add your actual
skim selection code to your path as a filter. I have not tried this
myself but Will Roethel indicates in this
email that it is not too difficult.
- If you are confident that you understand the tag bits
in the SP AllEvents collection, and are not sensitive to changes in PID
selectors or EMC calibration, go ahead and use them.
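If you take the run-on-AllEvents option above, no code change is needed; the per-job snippet simply leaves the tag filter off via the FwkCfgVar declared earlier:

```tcl
#..Snippet fragment for signal MC whose tag bits are absent or stale
set FilterOnTag "false"
```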
Analysis Code - Candidate composition using
SimpleComposition
It might be useful to check the SimpleComposition
web site while working on this section. For now, we will do only B+
--> Jpsi K+ , with Jpsi --> mu+ mu- . Once
everything is working, we will expand to the full analysis. For
educational purposes, we are creating our own lists from scratch. If
you are writing code for a skim, you should instead start from
existing lists defined in the SimpleComposition package. This way, we
only need to do the combinatorics once per event, instead of doing it
once per skim. Note you should NEVER "talkto" one of these central
selectors-you would then be modifying the list for everyone, not just
your skim. Instead, you should refine or sublist the general list.
The first step is easy - we create a sequence called
AnalysisSequence and add it to the path, which is called Everything .
#--------------------------------------------------------------------------
#..Use SimpleComposition to make the candidates of interest

#..Create Analysis sequence and append it to the path.
#  All the composition modules get added to this sequence
sequence create AnalysisSequence
path append Everything AnalysisSequence
Now let's make a Jpsi from mu+ mu- .
We use the neural-net muon selector, which is
the
recommended one. There are actually 8 levels available, with names
like muNNxxx or muNNxxxFakeRate . (The
latter aims for lower fake rate at the cost of lower efficiency). For
plots of performance, see the release
18 selectors webpage. You probably want to take a look at the
PID web
page to understand the different selectors and their systematic
errors.
#------------------ Jpsi Lists --------------------------------------
#..Basic Jpsi to mu mu list, no mass constraint (for ntuple).
#  We may want to add a cut on the chisquare of the fit as well.
mod clone SmpMakerDefiner MyJpsiToMuMu
seq append AnalysisSequence MyJpsiToMuMu
talkto MyJpsiToMuMu {
  decayMode set "J/psi -> mu+ mu-"
  daughterListNames set muNNVeryLoose
  daughterListNames set muNNVeryLoose
  fittingAlgorithm set "Cascade"
  fitConstraints set "Geo"
  preFitSelectors set "Mass 2.8:3.4"
  postFitSelectors set "Mass 2.9:3.3"
}
The names in the decayMode must match PDT syntax; see the file
pdt.table in the package PDT (workdir/PARENT/PDT/pdt.table). We
are using the Cascade fitter, and applying a constraint ("Geo") to
force the two daughters to come from a common vertex. We are not
applying a mass constraint. Cascade is a good choice for standard
fitting of charged tracks. If you are working with something that has
a non-negligible lifetime, TreeFitter may be more appropriate. See the
fittersAndConstraints webpage. The question of what fitter to use
under what circumstances can be tricky, and if you are not sure, I
would suggest
posting a question to the Vertexing
and Composition HN.
We will now constrain the mass of the Jpsi candidates to
the
PDG value. In general, to get better resolution on the B
candidate, you should mass-constrain daughters with
width narrower than detector resolution: pi0, eta, eta', Ks,
D, D*, Ds, Ds*, Jpsi, chi_c1, chi_c2, psi2s. (I may have
missed some here.)
We could have done this in the step above by including the
line fitConstraints set "Mass". We don't, because we want
to calculate the unconstrained mass and store it in the ntuple
for use in distinguishing signal from background.
#..Now add the mass constraint
mod clone SmpRefitterDefiner MyJpsiToMuMuMass
seq append AnalysisSequence MyJpsiToMuMuMass
talkto MyJpsiToMuMuMass {
  unrefinedListName set MyJpsiToMuMu
  fittingAlgorithm set "Cascade"
  fitConstraints set "Geo"
  fitConstraints set "Mass"
}
We make the B+ candidates by combining a Jpsi
list and
a K+ list:
#------------------------ B+ ---------------------------------------
#..B+ --> Jpsi K+
mod clone SmpMakerDefiner BchtoJpsiKch
seq append AnalysisSequence BchtoJpsiKch
talkto BchtoJpsiKch {
  decayMode set "B+ -> J/psi K+"
  daughterListNames set "MyJpsiToMuMuMass"
  daughterListNames set "KLHVeryLoose"
  fittingAlgorithm set "Cascade"
  fitConstraints set "Geo"
  preFitSelectors set "DeltaE -0.20:0.20"
  preFitSelectors set "Mes 5.19:5.30"
  postFitSelectors set "ProbChiSq 0.001:"
  postFitSelectors set "DeltaE -0.12:0.12"
  postFitSelectors set "Mes 5.20:5.30"
  postFitSelectors set "CmsCosTheta"
  postFitSelectors set "Mmiss"
  createUsrData set true
}
To save some CPU time, we make loose cuts on DeltaE and Mes
before we do the fit.
The last line stores all Selector quantities as user
data associated with the B candidate. If we were writing a skim
(or sub skim), we could write out the list of B candidates
and the associated user data in the selected events. In our
case, we will store the quantities in the ntuple. This is
why we list postFitSelectors such as CmsCosTheta - we want
it calculated and stored, even if we don't want to cut on it
right now.
Mmiss is a kinematic variable that partners with the
candidate mass, in the same way Mes and DeltaE go together.
It may be a better choice if your final state contains
exactly one poorly measured particle, such as a single photon. Wouter
Hulsbergen gave an interesting
talk about the different kinematic variables.
Note that using a Selector to add UsrData can be
inefficient, particularly if you end up having to rerun the kinematic
fits at the end just to create UsrData. (This can happen if you combine
separate B0 and B+ lists to form the list to store in the ntuple, and
then want UsrData attached to this combined list). Chung Khim Lae
is working on a tcl-based
tool to add UsrData to lists, which is the better way to do this.
Analysis Code - Corrections to MC
When running on MC you need to consider corrections for
data/MC differences in tracking, photon and pi0
reconstruction, and particle ID.
There are a variety of related techniques for correcting
for
PID data/MC differences. You should use only one of these -
in other words, don't apply PID tweaking at a sub skim
level, then make an ntuple set storing PID weights. See pidonmc.html.
Post questions to the Particle
ID tools HN. The ntuple structure below uses PID weights, the
ratio of the
efficiency for a track to satisfy the specified selector
in real data to that for simulated data. It is a function of
the track momentum, theta, and phi. You should check the
status is 1 (OK) before using the weight. And you should ask
around to make sure there really are PID weights in the
conditions database if you are running on new data. (The
weight requires that the PID group has analyzed both data
and MC for a run period).
You need to tell the selectors to operate in
"weight" mode (as
opposed to "tweak" or "kill", for example). Somewhere in your
Analysis.tcl, before "ev beg", place:
pidCfg_mode weight *
The ascii tables used in earlier analysis releases
probably don't work, and if you are upgrading your tcl, you will need
to switch to the conditions database using this syntax.
The tracking correction consists of a weight that gives
the relative efficiency for the MC track compared to data. The
correction is available from the conditions database; no tcl is
required on your part. BtaTupleMaker knows how to read in the
correction table and store it in the ntuple, as described below. See
the tracking
efficiency web page for more details. Note that no correction is
required to the tracking resolution.
The efficiency corrections for photons and pi0's are
simple correction calculations that are not applied at run time. See
the neutrals
web page for details. However, it is necessary to apply
additional calibrators (corrections to energy) at run time. In general,
these may be applied to either data or MC. Once activated, the core
neutrals code will handle both cases correctly. To do so, place
the following lines somewhere before "ev beg":
talkto EmcNeutCorrLoader {
  correctionOn set true
  endcapShift set true
}
Analysis Code - Ntuple Structure
Take a look at the BtaTupleMaker
web page above to understand the options and syntax below, which I will
not explain.
First, add the ntuple-dumping module to the path:
#--------------------------------------------------------------------
#..Use BtuTupleMaker to write out ntuples for SimpleComposition job
path append Everything BtuTupleMaker
We determine the quantities to be stored in the ntuple by
setting tcl parameters that control the behavior of
BtuTupleMaker. (Note that while the executable is called
BtaTupleApp, and it is built using a gmake BtaTupleMaker
command, the module that is actually called and appended
to your path is named BtuTupleMaker.) I'll intersperse comments with the code
below.
talkto BtuTupleMaker {
This first block contains information per event (not per candidate). I
keep the center-of-mass four-momentum, the beam spot, primary vertex,
and a couple of other items from the Tag (number of charged tracks, and
R2 calculated using
both tracks and neutral clusters).
By default, every event that BtuTupleMaker sees (every
event
passing the TagFilter, if you have one) is dumped to the
ntuple. This can increase the size by a substantial factor,
so set this option to false, unless you have a really good
reason. (For example, I sometimes keep every signal MC event
so that I can study the MC truth distributions).
#..Event information to dump
eventBlockContents set "EventID CMp4 BeamSpot"
eventTagsInt set "nTracks"
eventTagsFloat set "R2All xPrimaryVtx yPrimaryVtx zPrimaryVtx"
writeEveryEvent set false
You definitely want to keep the MC truth. You don't need to
do anything differently when running on data - the block
will still be created, so that your code structure can be
the same - but the number of entries in the block will
always be 0.
#..MC truth info
fillMC set true
mcBlockContents set "Mass CMMomentum Momentum Vertex"
Now we are at the heart of the code. BtuTupleMaker starts
from a specified list, in our case, of B candidates. If you
were doing a recoil analysis, where you fully reconstruct
both B's in the event, you could form Upsilon(4S) candidates
from the two and store that list. If your analysis were of
inclusive Jpsi, that would be the list.
BtuTupleMaker stores information for particles on the
list and for all their daughters (and granddaughters, and so
forth), and includes links between the ntuple blocks for
each particle type. Every particle type in the decay chain
must be assigned to an ntuple block. You can have more than
one particle type per block, if you prefer. For example, I
prefer to put both B+ and B0 candidates into a single "B"
block; others put them in separate blocks.
Don't worry if you forget the ntpBlockConfigs command
for a
particular particle type - the code will just abort and give
you an error message; it won't do anything wrong. Note that
ntuples are packed, so that increasing the maximum size of
the block increases the ntuple size only for those events
with extra candidates.
The ntpBlockContents command specifies the items to
store.
UsrData(BchtoJpsiKch) stores the User data for our B list,
which is created by the "createUsrData set true" command in
the SimpleComposition block.
#..Particle blocks to store
listToDump set BchtoJpsiKch

ntpBlockConfigs set "B- B 2 50"
ntpBlockContents set "B: MCIdx Mass Momentum CMMomentum Vertex VtxChi2 UsrData(BchtoJpsiKch)"
(where the last line above was split for formatting purposes and
should be entered as a single line). We don't have any Usr data for
the Jpsi list, but we do want to store the unconstrained Jpsi
mass. This is an unusual item, in the sense that it is not obtained
directly from the Jpsi list used to make the B candidates
(MyJpsiToMuMuMass), but rather from MyJpsiToMuMu. However, the
ntpAuxListContents command exists for this purpose. BtuTupleMaker
will search through the list you specify and find the corresponding
Jpsi, and store the requested quantities, mass in this case. The
"Unc" is just a text string to allow you to distinguish ntuple
quantities from this auxiliary list.
ntpBlockConfigs set "J/psi Jpsi 2 50"
ntpBlockContents set "Jpsi: MCIdx Mass Momentum Vertex VtxChi2"
ntpAuxListContents set "Jpsi: MyJpsiToMuMu : Unc : Mass"
The interesting thing in the K+ and mu+ blocks is the
PIDWeights that are stored for each MC candidate. (Weights
are 1 for data). When filling histograms
with MC data, weight the event with the product of the PID
weights for the involved tracks (and the corresponding
tracking efficiency weights) to correct for MC/data
differences. You get not only the weight in your ntuple but also its
uncertainty and a status integer. You should check that the
status is 1 (OK) before using the weight.
ntpBlockConfigs set "K+ K 0 50"
ntpBlockContents set "K: MCIdx Momentum PIDWeight(KLHVeryLoose,KLHLoose,KLHTight)"

ntpBlockConfigs set "mu+ mu 0 50"
ntpBlockContents set "mu: MCIdx Momentum PIDWeight(muNNVeryLoose,muNNLoose)"
I find it useful to store every track and gamma in the
event, in addition to those used in forming a B candidate.
It can allow you to recover from a variety of mistakes. But
you should test how much this option increases the size of
your ntuples.
#..Want to save all CalorNeutrals in the gamma block
ntpBlockConfigs set "gamma gamma 0 60"
ntpBlockContents set "gamma: MCIdx Momentum"
gamExtraContents set EMC
fillAllCandsInList set "gamma CalorNeutral"
#..TRK block. Save all of them as well.
fillAllCandsInList set "TRK ChargedTracks"
#..remember to change this back to K pi mu e
ntpBlockToTrk set "K mu"
ntpBlockContents set "TRK: Doca DocaXY"
trkExtraContents set "BitMap:pSelectorsMap,KSelectorsMap,piSelectorsMap, muSelectorsMap,eSelectorsMap,TracksMap"
trkExtraContents set HOTS
trkExtraContents set Eff:ave
}
(where in the above the first trkExtraContents line was only split for
formatting purposes, and should be entered as a single line).
In the track block you can see the tracking efficiency
weight (Eff:ave), which corrects for the MC/data difference
in the efficiency for this track to be reconstructed as a
GoodTracksLoose in MC and data. A GoodTracksLoose track is
required to come from near the interaction point and to
have a minimum number of drift chamber hits, so that there
is a good momentum measurement. In most cases, you should
require your tracks to be GoodTracksLoose. If your tracks
have low pt --- slow pions from D* decays, for example ---
use GoodTracksVeryLoose instead. If they don't come from
the origin --- pions from Ks decays, for example --- then
just use ChargedTracks. You can't use the tracking efficiency tables
for anything other than GoodTracksLoose, but flat corrections (i.e.,
not a function of the particular track in question) are available from the tracking
efficiency web page. They are good enough for most
purposes.
If you aren't using GoodTracksLoose, consider checking
the
pt distribution of your candidates that don't satisfy
GoodTracksLoose. Any track with pt more than ~120 MeV ought
to have drift chamber hits if it is measured correctly.
You select the track type in SimpleComposition by
sublisting, which we will discuss later, or by using the
TracksMap bitmap. The bits set to 1 in this word indicate
which Track requirements are satisfied by the candidate. The
safest way to get the correspondence between the bits and
the track types is to "mod talk" to the module that creates this word,
TrkMicroDispatch, and type "show". (More on this later.)
You can likewise find out which muon ID selectors
selected this track using the muSelectorsMap bitmap. To get the correspondence
between these bits and the selectors, "mod talk" to MuonMicroDispatch.
(We will do this later). The bitmap is useful
if you want to tighten the criteria you used to make your B
candidates without having to rerun your jobs.
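Once you have the bit-to-selector correspondence from "mod talk", testing a bit in the stored word is ordinary bit arithmetic. Here is a hypothetical sketch; the value 68 and the bit assignment are made up for illustration, not taken from the real maps:

```shell
# Test whether a given selector bit is set in a stored bitmap word.
# The value and the bit assignment below are purely illustrative.
map=68   # example muSelectorsMap word: binary 1000100 (bits 2 and 6 set)
bit=6
echo $(( (map >> bit) & 1 ))
# prints 1
```

The same test works for TracksMap and the other selector maps, with the bit numbers taken from the corresponding dispatch module.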
We will next run some sample jobs on MC samples. We will
use bookkeeping tools to locate the collections.
Some subdirectories
I find it convenient to create a sub directory of workdir
called maketcl, which I use for creating the tcl files and
other bookkeeping activities. I then create another
subdirectory of workdir called tcl, which contains soft
links to the tcl files that are actually located in maketcl.
I do this so that I can then delete the links in "tcl" as
the corresponding job successfully finishes, without losing
the original file itself. But feel free to do whatever you
like best.
mkdir maketcl
mkdir tcl
We will also need directories to store the log files and
ntuples you create. You should probably put these in your
scratch (work) area and make soft links to workdir - you
won't have enough space in your own area. Recall that files
in this area vanish after 6 days. Be sure to move both the
ntuples and log files to permanent storage within that time.
mkdir $BFROOT/work/h/hearty/Rel31/
mkdir $BFROOT/work/h/hearty/Rel31/log
mkdir $BFROOT/work/h/hearty/Rel31/ntuples
ln -s $BFROOT/work/h/hearty/Rel31/log log
ln -s $BFROOT/work/h/hearty/Rel31/ntuples ntuples
I will be disappointed with you if you actually create your
directories in /h/hearty .
I find it useful to have a couple of other directories
called "failed" and "successful": the batch system writes to
"log" and "ntuples", then I manually move the files to
"failed" or "successful" as appropriate.
mkdir $BFROOT/work/h/hearty/Rel31/failed
mkdir $BFROOT/work/h/hearty/Rel31/successful
ln -s $BFROOT/work/h/hearty/Rel31/failed failed
ln -s $BFROOT/work/h/hearty/Rel31/successful successful
Other people keep all of their log files and ntuples in
their local area. To do this, you will want to gzip
everything as it is created, and keep careful tabs on your
disk space. You can use commands like zgrep, zcat, zless and
so forth to work with gzipped files.
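For example (the file name here is hypothetical), a gzipped log can be searched and read in place, without uncompressing it first:

```shell
# Compress a finished log file, then work with it directly.
echo "Successfully completed." > SP-989-Run1-1.log
gzip SP-989-Run1-1.log                 # leaves SP-989-Run1-1.log.gz
zgrep "Successfully completed" SP-989-Run1-1.log.gz
zcat SP-989-Run1-1.log.gz | wc -l      # prints 1
```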
While we are at it, let's create a subdirectory for tcl
snippets, which we will need shortly:
mkdir snippets
Will Roethel has written a prototype "analysis task
management" system that is undoubtedly a
better way to work. See SJMMain.htm .
I haven't tried it yet.
Finding the available MC modes
I do all of my bookkeeping work in the maketcl subdirectory of
workdir. Since many of the commands used are obscure
and
difficult to remember, I recommend keeping a record of them.
I use a file called commands.txt.
You will need to know the mode number of the MC sample you intend
to
use. You can get some information by browsing the old SP5 inventory
web site. You should also ask your conveners and other colleagues
in
your AWG.
Recall that SP5 corresponds to Runs 1-3, SP6 is Run 4,
and SP8 covers the full data set, including Run 5.
More generally, you can use BbkSPModes to get lists of
different types of MC. You can search the descriptions (also called
"runtypes") of all modes by:
BbkSPModes --runtype "J/psi" BbkSPModes --runtype "Jpsi" BbkSPModes --runtype "jpsi"
To search in the titles of the decay (.dec) files:
BbkSPModes --decfile "Jpsi"
The resulting output can be quite large. It is perhaps
easier to write this to a file so you can look at it in your
editor:
BbkSPModes --decfile "Jpsi" > Jpsi-2.txt BbkSPModes --decfile "jpsi" >> Jpsi-2.txt
Browsing through this, we find mode 1121, which is Inclusive
Jpsi (generic B events that contain a Jpsi --> e+e- or
mu+mu- decay), and 989, which is the exclusive decay B- -->
Jpsi K-. Mode 4817, B+ --> X3872 K+, might be interesting as
well. You may need to look at the actual .dec files to understand what
these modes are. You find them in the package ProdDecayFiles, which for
your analysis release is workdir/PARENT/ProdDecayFiles. (Note
that the MC from older releases would have been produced using the .dec
files of that release, $BFDIST/releases/xx.x.x/ProdDecayFiles).
You will undoubtedly want generic MC as well. Here are
the
mode numbers and the cross sections at 10.58 GeV, which you will need for
calculating the equivalent luminosity.
B+B-           1235    550 pb
B0B0           1237    550 pb
cc             1005   1300 pb
uds             998   2090 pb
tau tau kk2f   3429    890 pb
mu mu kk2f     3981   1150 pb
ee 30-150deg   2400  25100 pb
Making tcl files for Skimmed Generic MC
BbkDatasetTcl can do a lot of different things. To get
help, type BbkDatasetTcl -h, or check out the
relevant workbook section. It can be helpful to write all the
available datasets into a text file for later browsing. Note
that not all datasets are available at all Tier A sites; in
general, to get the skims you want for your analysis, you will need to
run at the Tier A site
appropriate for your Analysis Working Group.
BbkDatasetTcl > Datasets.txt
It is very much faster to run on skimmed generic MC than
on
the AllEventsSkim collection - you can save an order of
magnitude or more in time. To make tcl files for the Jpsitoll skim
of generic mode 1235 (B+B-) in Run 1, there are several possible
BbkDatasetTcl commands you could use:
BbkDatasetTcl -t -ds SP-1235-Jpsitoll-Run1-R18c
or
BbkDatasetTcl -t 100k -ds SP-1235-Jpsitoll-Run1-R18c
or
BbkDatasetTcl -t 100k --splitruns -ds SP-1235-Jpsitoll-Run1-R18c
Eventually we will use the third command, but let's look at all three.
Make and go to a directory called temp:
mkdir temp
cd temp
Now try the first command:
BbkDatasetTcl -t -ds SP-1235-Jpsitoll-Run1-R18c
BbkDatasetTcl: wrote SP-1235-Jpsitoll-Run1-R18c.tcl
Selected 18 collections, 3136044/56880000 events, ~0.0/pb, from bbkr18 at slac
This command produced the file SP-1235-Jpsitoll-Run1-R18c.tcl .
The Jpsitoll skim contains 3136044 events, from an original 56880000
B+B- events in Run 1. (So the skim rate is 5.5%; the equivalent
luminosity, assuming a B+B- cross section of 0.54 nb, is 105 fb-1, 5.4x
the luminosity for Run 1.)
Make sure you keep track of these numbers - you will
need them when
you want to scale your MC to the data luminosity. Write the numbers
in your log book, or put the output in your commands.txt
file. In any case, be sure to keep your tcl files.
The tcl file SP-1235-Jpsitoll-Run1-R18c.tcl is not a
practical way to analyze data - your job will certainly run
out of CPU time before it can go through the full file. So we come to
our second BbkDatasetTcl command:
BbkDatasetTcl -t 100k -ds SP-1235-Jpsitoll-Run1-R18c
BbkDatasetTcl: wrote SP-1235-Jpsitoll-Run1-R18c-1.tcl (1 collections, 131074/2326000 events, ~0.0/pb)
BbkDatasetTcl: wrote SP-1235-Jpsitoll-Run1-R18c-2.tcl (1 collections, 190830/3404000 events, ~0.0/pb)
BbkDatasetTcl: wrote SP-1235-Jpsitoll-Run1-R18c-3.tcl (1 collections, 139785/2502000 events, ~0.0/pb)
BbkDatasetTcl: wrote SP-1235-Jpsitoll-Run1-R18c-4.tcl (1 collections, 26253/468000 events, ~0.0/pb)
BbkDatasetTcl: wrote SP-1235-Jpsitoll-Run1-R18c-5.tcl (1 collections, 195736/3512000 events, ~0.0/pb)
BbkDatasetTcl: wrote SP-1235-Jpsitoll-Run1-R18c-6.tcl (1 collections, 193922/3484000 events, ~0.0/pb)
BbkDatasetTcl: wrote SP-1235-Jpsitoll-Run1-R18c-7.tcl (1 collections, 213505/3828000 events, ~0.0/pb)
BbkDatasetTcl: wrote SP-1235-Jpsitoll-Run1-R18c-8.tcl (1 collections, 25914/462000 events, ~0.0/pb)
BbkDatasetTcl: wrote SP-1235-Jpsitoll-Run1-R18c-9.tcl (1 collections, 261049/4748000 events, ~0.0/pb)
BbkDatasetTcl: wrote SP-1235-Jpsitoll-Run1-R18c-10.tcl (1 collections, 217191/3952000 events, ~0.0/pb)
BbkDatasetTcl: wrote SP-1235-Jpsitoll-Run1-R18c-11.tcl (1 collections, 196666/3574000 events, ~0.0/pb)
BbkDatasetTcl: wrote SP-1235-Jpsitoll-Run1-R18c-12.tcl (1 collections, 20254/368000 events, ~0.0/pb)
BbkDatasetTcl: wrote SP-1235-Jpsitoll-Run1-R18c-13.tcl (1 collections, 218016/3992000 events, ~0.0/pb)
BbkDatasetTcl: wrote SP-1235-Jpsitoll-Run1-R18c-14.tcl (1 collections, 199687/3652000 events, ~0.0/pb)
BbkDatasetTcl: wrote SP-1235-Jpsitoll-Run1-R18c-15.tcl (1 collections, 218003/3994000 events, ~0.0/pb)
BbkDatasetTcl: wrote SP-1235-Jpsitoll-Run1-R18c-16.tcl (1 collections, 207143/3786000 events, ~0.0/pb)
BbkDatasetTcl: wrote SP-1235-Jpsitoll-Run1-R18c-17.tcl (1 collections, 269130/4940000 events, ~0.0/pb)
BbkDatasetTcl: wrote SP-1235-Jpsitoll-Run1-R18c-18.tcl (1 collections, 211886/3888000 events, ~0.0/pb)
Selected 18 collections, 3136044/56880000 events, ~0.0/pb, from bbkr18 at slac
This version of the command splits the events into several tcl files.
However, it will not split collections, so the resulting number of
events in each file can vary quite a bit. No matter how small your
request, individual files can still
be larger than what you asked for. To get around this, use the third
BbkDatasetTcl command, which requests that the collections be broken into
smaller pieces:
BbkDatasetTcl -t 100k --splitruns -ds SP-1235-Jpsitoll-Run1-R18c
BbkDatasetTcl: wrote SP-1235-Jpsitoll-Run1-R18c-1.tcl (1 collections, 100000 events)
BbkDatasetTcl: wrote SP-1235-Jpsitoll-Run1-R18c-2.tcl (2 collections, 100000 events)
BbkDatasetTcl: wrote SP-1235-Jpsitoll-Run1-R18c-3.tcl (1 collections, 100000 events)
BbkDatasetTcl: wrote SP-1235-Jpsitoll-Run1-R18c-4.tcl (2 collections, 100000 events)
BbkDatasetTcl: wrote SP-1235-Jpsitoll-Run1-R18c-5.tcl (3 collections, 100000 events)
BbkDatasetTcl: wrote SP-1235-Jpsitoll-Run1-R18c-6.tcl (1 collections, 100000 events)
BbkDatasetTcl: wrote SP-1235-Jpsitoll-Run1-R18c-7.tcl (2 collections, 100000 events)
BbkDatasetTcl: wrote SP-1235-Jpsitoll-Run1-R18c-8.tcl (1 collections, 100000 events)
BbkDatasetTcl: wrote SP-1235-Jpsitoll-Run1-R18c-9.tcl (2 collections, 100000 events)
BbkDatasetTcl: wrote SP-1235-Jpsitoll-Run1-R18c-10.tcl (1 collections, 100000 events)
BbkDatasetTcl: wrote SP-1235-Jpsitoll-Run1-R18c-11.tcl (2 collections, 100000 events)
BbkDatasetTcl: wrote SP-1235-Jpsitoll-Run1-R18c-12.tcl (2 collections, 100000 events)
BbkDatasetTcl: wrote SP-1235-Jpsitoll-Run1-R18c-13.tcl (1 collections, 100000 events)
BbkDatasetTcl: wrote SP-1235-Jpsitoll-Run1-R18c-14.tcl (2 collections, 100000 events)
BbkDatasetTcl: wrote SP-1235-Jpsitoll-Run1-R18c-15.tcl (1 collections, 100000 events)
BbkDatasetTcl: wrote SP-1235-Jpsitoll-Run1-R18c-16.tcl (2 collections, 100000 events)
BbkDatasetTcl: wrote SP-1235-Jpsitoll-Run1-R18c-17.tcl (1 collections, 100000 events)
BbkDatasetTcl: wrote SP-1235-Jpsitoll-Run1-R18c-18.tcl (2 collections, 100000 events)
BbkDatasetTcl: wrote SP-1235-Jpsitoll-Run1-R18c-19.tcl (2 collections, 100000 events)
BbkDatasetTcl: wrote SP-1235-Jpsitoll-Run1-R18c-20.tcl (1 collections, 100000 events)
BbkDatasetTcl: wrote SP-1235-Jpsitoll-Run1-R18c-21.tcl (2 collections, 100000 events)
BbkDatasetTcl: wrote SP-1235-Jpsitoll-Run1-R18c-22.tcl (1 collections, 100000 events)
BbkDatasetTcl: wrote SP-1235-Jpsitoll-Run1-R18c-23.tcl (2 collections, 100000 events)
BbkDatasetTcl: wrote SP-1235-Jpsitoll-Run1-R18c-24.tcl (1 collections, 100000 events)
BbkDatasetTcl: wrote SP-1235-Jpsitoll-Run1-R18c-25.tcl (2 collections, 100000 events)
BbkDatasetTcl: wrote SP-1235-Jpsitoll-Run1-R18c-26.tcl (1 collections, 100000 events)
BbkDatasetTcl: wrote SP-1235-Jpsitoll-Run1-R18c-27.tcl (2 collections, 100000 events)
BbkDatasetTcl: wrote SP-1235-Jpsitoll-Run1-R18c-28.tcl (1 collections, 100000 events)
BbkDatasetTcl: wrote SP-1235-Jpsitoll-Run1-R18c-29.tcl (1 collections, 100000 events)
BbkDatasetTcl: wrote SP-1235-Jpsitoll-Run1-R18c-30.tcl (2 collections, 100000 events)
BbkDatasetTcl: wrote SP-1235-Jpsitoll-Run1-R18c-31.tcl (1 collections, 100000 events)
BbkDatasetTcl: wrote SP-1235-Jpsitoll-Run1-R18c-32.tcl (1 collections, 36044 events)
Selected 18 collections, 3136044/56880000 events, ~0.0/pb, from bbkr18 at slac
If you prefer a different file name, use the option --basename
blahblah.
How do you decide how many events to put in a tcl
file? There are a few criteria. Your job can fail
due to CPU time limits on the batch queue, or wall-clock limits. Try a
couple of your longest jobs before you submit a whole bunch.
The CPU limit for the kanga queue, which you should normally use,
is 720 "slac minutes" (see bqueues -l kanga).
However, 1 CPU min on a batch machine generally counts as more than 1
"slac minute". To get the scale factor CPUF for batch machine
barb0309 (for example), type bhosts -l barb0309. This reports CPUF = 2.11,
so for this machine the actual CPU limit is 720*60/2.11 = 20,474
sec. You can use other queues - short, long, or xlong, for
examples - but you will be sharing the resources with other SLAC
experiments like Glast and ILC. By default, we create root
ntuples, which do not have a particular size limit. If you are creating
hbook (paw) ntuples, keep them below 30 MB each.
Now that you've experimented a bit with the different
BbkDatasetTcl commands, you can choose the ones you want to use. In
this case we will use the third BbkDatasetTcl command, with the
"--splitruns" option to get the same number of events in each file.
Also, we will use the first BbkDatasetTcl command to make one big tcl
file with the information about the whole set.
First, you can delete your experimentation directory:
/bin/rm -r temp
Next, go to your maketcl directory, and issue the commands:
BbkDatasetTcl -t -ds SP-1235-Jpsitoll-Run1-R18c --basename info-SP-1235-Jpsitoll-Run1-R18c
BbkDatasetTcl -t 100k --splitruns -ds SP-1235-Jpsitoll-Run1-R18c
Making tcl files for Exclusive Mode (Signal) MC
Generally, exclusive MC is not skimmed. If you do need a
mode skimmed,
talk to your convener, or skim
it yourself. It is not difficult, and ought to work
out-of-the-box. For normal, unskimmed signal MC, there
is no skim name in the collections. You will probably need
to put fewer events per job as well. Let's make some B+ -->
Jpsi K+ signal MC tcl files:
BbkDatasetTcl -t -ds SP-989-Run1 --basename info-SP-989-Run1
BbkDatasetTcl -t 10k --splitruns -ds SP-989-Run1
Note that we did not specify a release in the command; BbkDatasetTcl
obtains the correct database from your release (hence the "from bbkr18
at slac" in the output above). Remember to copy and paste these
commands into your
"commands.txt" file. And if you are using the maketcl and
tcl structure that I use, make soft links to the tcl files
you made:
cd ../tcl
ln -s ../maketcl/SP-1235*.tcl .
ln -s ../maketcl/SP-989*.tcl .
Skimmed signal MC collections are not separated by run
block, unlike skimmed generic MC. In our case - and in most cases
- this doesn't matter,
but if you ever do need to split a collection by run blocks, you can
use the "condalias", the month to which the MC
corresponds. For example:
BbkDatasetTcl -t --condalias_select 200002-200010 -ds SP-989-ExclHllMini-R18b
with the condalias_select set to the range of the specific run
period.
        begin    end
Run1    200002   200010
Run2    200102   200206
Run3    200212   200306
Run4    200309   200407
Run5    200504   200607
Making tcl files for data; luminosity and B counting
The commands for data are essentially the same. Check
the Data Quality page
for information on the latest datasets.
To make tcl files, use BbkDatasetTcl as
before. The
format of the dataset names is slightly different - in particular, the
collection names don't start with an SP mode number.
BbkDatasetTcl -ds Jpsitoll-Run1-OnPeak-R18c-v03 --basename info-Jpsitoll-Run1-OnPeak-R18c
BbkDatasetTcl -t 100k --splitruns -ds Jpsitoll-Run1-OnPeak-R18c
Again, you will need soft links to these tcl files:
cd ../tcl
ln -s ../maketcl/Jpsitoll*.tcl .
To get the B counting (and exact luminosity) for your
sample, you can use the BbkLumi script on your single
info-Jpsitoll-Run1-OnPeak-R18c.tcl file (from maketcl):
BbkLumi -t info-Jpsitoll-Run1-OnPeak-R18c.tcl
====> info-Jpsitoll-Run1-OnPeak-R18c.tcl
Failed on dbname : bbkr14 trying bbkr18
Using B Counting release 18 from collections in TCL file
Using collections:
/store/PRskims/R18/18.6.3d/Jpsitoll/00/Jpsitoll_20006
/store/PRskims/R18/18.6.3d/Jpsitoll/00/Jpsitoll_20008
/store/PRskims/R18/18.6.3d/Jpsitoll/00/Jpsitoll_20009
/store/PRskims/R18/18.6.3d/Jpsitoll/00/Jpsitoll_20010
/store/PRskims/R18/18.6.3d/Jpsitoll/00/Jpsitoll_20012
/store/PRskims/R18/18.6.3d/Jpsitoll/00/Jpsitoll_20015
/store/PRskims/R18/18.6.3d/Jpsitoll/00/Jpsitoll_20017
/store/PRskims/R18/18.6.3d/Jpsitoll/00/Jpsitoll_20019
/store/PRskims/R18/18.6.3d/Jpsitoll/00/Jpsitoll_20020
/store/PRskims/R18/18.6.3d/Jpsitoll/00/Jpsitoll_20021
/store/PRskims/R18/18.6.3d/Jpsitoll/01/Jpsitoll_20174
/store/PRskims/R18/18.6.3d/Jpsitoll/01/Jpsitoll_20180
/store/PRskims/R18/18.6.3d/Jpsitoll/02/Jpsitoll_20206
/store/PRskims/R18/18.6.3d/Jpsitoll/02/Jpsitoll_20207
/store/PRskims/R18/18.6.3d/Jpsitoll/02/Jpsitoll_20208
/store/PRskims/R18/18.6.3d/Jpsitoll/99/Jpsitoll_19999
/store/PRskims/R18/18.6.3e/Jpsitoll/05/Jpsitoll_20500
/store/PRskims/R18/18.6.3e/Jpsitoll/05/Jpsitoll_20501
/store/PRskims/R18/18.6.3e/Jpsitoll/05/Jpsitoll_20502
/store/PRskims/R18/18.6.3e/Jpsitoll/05/Jpsitoll_20503
/store/PRskims/R18/18.6.3e/Jpsitoll/05/Jpsitoll_20504
/store/PRskims/R18/18.6.3e/Jpsitoll/05/Jpsitoll_20505
/store/PRskims/R18/18.6.3e/Jpsitoll/12/Jpsitoll_21267
==============================================
Run by hearty at Fri Jun 2 18:13:01 2006
First run = 9931 : Last Run 17106
== Your Run Selection Summary =============
***** NOTE only runs in B-counting release 18 considered *****
***** Use --OPR or --L3 options to see runs without B-counting *****
Number of Data Runs          2915
Number of Contributing Runs  2915
-------------------------------------------
Y(4s) Resonance        ON     OFF
Number Recorded      2915       0
== Your Luminosity (pb-1) Summary =========
Y(4s) Resonance           ON       OFF
Lumi Processed     19839.981     0.000
== Number of BBBar Events Summary =========
              Number  |   ERROR
                      |  (stat.)   (syst.)    (total)
Total     21462831.2  |  24532.1   236091.1   237362.3
==For On / Off subtraction======
Nmumu(ON)  =  9562871.0 +/- 3092.4 (stat)
Nmumu(OFF) =        0.0 +/-    0.0 (stat)
Nmh(ON)    = 67981988.0 +/- 8245.1 (stat)
Nmh(OFF)   =        0.0 +/-    0.0 (stat)
The BbkLumi script knows to remove runs that are declared bad
in the tcl file. Be sure to check for messages about runs missing B
counting information.
Updating your tcl files
Some time after you start your analysis, it may be that more
data or MC have been skimmed, and you would like to make tcl
files from just these collections. To do this, check the
time stamp in the last line of the last tcl file of the previous
set. For example, in SP-1235-Jpsitoll-Run1-R18c-32.tcl it is
## Last collection added to dataset: 2006/05/08-12:13:53-PDT
Then request only collections created after this time, and
start the numbering of these new tcl files at 33 (the last
set ended at 32).
BbkDatasetTcl -t 100k,33 --splitruns -ds SP-1235-Jpsitoll-Run1-R18c -s 2006/05/08-12:13:54-PDT
Here a second has been added to the time stamp to make
sure you don't get the same collections back.
Making tcl snippets
For each job you run, you need not just the tcl file we just made, but
a tcl "snippet" to set the FwkCfgVar flags to the values appropriate
for this job. This could be tedious to do manually, so with Enrico
Robutti's help, I wrote a perl script called make_snippet.
Copy this to your workdir and make it executable:
chmod a+x make_snippet
To make snippets for the tcl files we just created:
make_snippet -MC -tcldir snippets -logdir log -ntpdir ntuples tcl/SP-1235*.tcl
make_snippet -MC -tcldir snippets -logdir log -ntpdir ntuples tcl/SP-989*.tcl
make_snippet -data -tcldir snippets -logdir log -ntpdir ntuples tcl/Jpsitoll*.tcl
If you skip the flags specifying the directories, everything
will end up in workdir. Type make_snippet -h if you can't
remember the command options.
For each filename.tcl, it creates a file run_filename.tcl .
Let's look at one:
cat snippets/run_SP-1235-Jpsitoll-Run1-R18c-1.tcl
#..See Analysis.tcl for description of FwkCfgVars.
sourceFoundFile tcl/SP-1235-Jpsitoll-Run1-R18c-1.tcl
set MCTruth "true"
set FilterOnTag "false"
set BetaMiniTuple "root"
set histFileName ntuples/SP-1235-Jpsitoll-Run1-R18c-1.root
set NEvents 0
sourceFoundFile Analysis.tcl
You should edit make_snippet to match your own
taste. hbook vs root, for example, or FilterOnTag
"true" . The selection between MC and data is done via an option
flag of make_snippet - you don't need to edit this.
make_snippet also creates a script to submit all jobs
to the batch queue, called sub_SP-1235-Jpsitoll-Run1-R18c-1 in this
case. We will come back to it when we talk about the batch system.
Running jobs
Running jobs interactively
Check your path
Before running an actual job, it is a good idea to look at your path.
You might notice things are missing, or extra things you don't need. We
will use run_SP-989-Run1-1.tcl , located in the snippets
directory. (I generally find it useful to check out my code with signal
MC). By
default, the NEvents flag is set to run on all events ("0"). To set up
your job, but not execute an "ev beg" command, edit the file and change
NEvents to -1.
To run the job (from workdir), type
BtaTupleApp snippets/run_SP-989-Run1-1.tcl
Here is the resulting output, ending at a framework prompt: path.txt.
The first part lists the values of various FwkCfgVars.
Notice, for example, that PrintFreq is not set in the
snippet, so we just get the default value of 1000.
Most of the output is the path list, which starts with
Everything and ends with BtuTupleMaker. Read through the
one-line descriptions to get an idea of what is actually
happening in your job. Note your AnalysisSequence is a
pretty small part of the whole operation.
While we are here, let's check the meaning of the muon
ID bit map:
mod talk MuonMicroDispatch
MuonMicroDispatch> show
Current value of item(s) in the "MuonMicroDispatch" module:
Value of verbose for module MuonMicroDispatch: f
Value of production for module MuonMicroDispatch: f
Value of enableFrames for module MuonMicroDispatch: f
Value of outputMap for module MuonMicroDispatch: IfdStrKey(muSelectorsMap)
Value of inputList for module MuonMicroDispatch: IfdStrKey(ChargedTracks)
Value of inputMaps for module MuonMicroDispatch:
  inputMaps[0]=muMicroMinimumIonizing
  inputMaps[1]=muMicroVeryLoose
  inputMaps[2]=muMicroLoose
  inputMaps[3]=muMicroTight
  inputMaps[4]=muMicroVeryTight
  inputMaps[5]=muNNVeryLoose
  inputMaps[6]=muNNLoose
  inputMaps[7]=muNNTight
  inputMaps[8]=muNNVeryTight
  inputMaps[9]=muNNVeryLooseFakeRate
  inputMaps[10]=muNNLooseFakeRate
  inputMaps[11]=muNNTightFakeRate
  inputMaps[12]=muNNVeryTightFakeRate
  inputMaps[13]=muLikeVeryLoose
  inputMaps[14]=muLikeLoose
  inputMaps[15]=muLikeTight
I print out this sort of thing and put it in my log book.
(How did I know this was the correct module to check? From
the description in the path list.) You can do the same thing
for the other four charged particles, and for
TrkMicroDispatch, which fills the track bit map.
There are a lot of sequences listed under
SimpleComposition,
which you might think you want to disable to save time. In
fact, a key feature of SimpleComposition is that it doesn't
make the lists unless they are requested somewhere, so there
is essentially no overhead.
After you are finished browsing, type exit to end your
talk with MuonMicroDispatch, and exit again to exit the framework (i.e.,
end this job).
Run an interactive job
Now edit run_SP-989-Run1-1.tcl and set the
number of events to 100. It is convenient to run the job in the background,
and in this case, we will write the output to a log file. Since
this is signal MC, you might want to change "writeEveryEvent" to true
in your Analysis.tcl.
BtaTupleApp snippets/run_SP-989-Run1-1.tcl >& log/SP-989-Run1-1.log &
Here is the resulting file: SP-989-Run1-1.log
There are quite a few messages in here that look alarming,
but in fact, this job completed successfully.
From the log file, you can get the total number of
events processed and the CPU time:
EvtCounter:EvtCounter: total number of events=100
total number of events processed=100
total number of events skipped=0
EvtCounter:Total CPU usage: 26 User: 23 System: 3
Also a report from the Tag Filter, which is pretty boring in
this case, since we didn't turn it on:
TagJpsill:TagJpsill: endJob summary:
Events processed: 0
Events passed   : 0
Events prescaled: 0
Submit a batch job
Edit run_SP-989-Run1-2.tcl to run on 1000 events, then
submit it to the kanga queue, which you should be using for analysis
at SLAC. (Other Tier A sites will have different commands.) For a
test job like this you might also like more frequent event-number
messages: "set PrintFreq 10" will give a message every 10th
event.
bsub -q kanga -o log/SP-989-Run1-2.log ../bin/$BFARCH/BtaTupleApp snippets/run_SP-989-Run1-2.tcl
(where the above command should be entered as a single line).
Check on the status of your job:
bjobs
JOBID   USER    STAT  QUEUE  FROM_HOST  EXEC_HOST  JOB_NAME    SUBMIT_TIME
909247  hearty  RUN   kanga  yakut05    cob0178    *un1-2.tcl  Jun  2 21:26
Or check the log file:
bpeek 909247 | tail
(or bpeek -f 909247 )
Here is the resulting log file: SP-989-Run1-2.log
There is extra information at the end compared to the
interactive job. "Successfully completed" is what you want
to see at the end.
Dealing with many batch jobs
The script created by make_snippet, sub_SP-1235-Jpsitoll-Run1-R18c-1
for example, will submit all the jobs for you at once. You should
probably not put more than a couple of hundred jobs in the queue at
one time.
source sub_SP-1235-Jpsitoll-Run1-R18c-1
As the jobs finish, move the successful ones into the
successful directory (grep for "Successfully completed") and
the failures (grep for "exit code" or "Abort") into the
failed directory. If you have a few mysterious failures, you
might try just resubmitting them.
You might find it useful to make some small, temporary
scripts to do this sort of thing. For example,
grep "Successfully completed" log/*.log > move-2 cat move-2
log/SP-989-Run1-1.log:Successfully completed.
Then edit move-2 to change lines like
log/SP-989-Run1-1.log:Successfully completed.
into
mv log/SP-989-Run1-1.log successful/
mv ntuples/SP-989-Run1-1.hbook successful/
/bin/rm tcl/SP-989-Run1-1.tcl
/bin/rm snippets/run_SP-989-Run1-1.tcl
gzip maketcl/SP-989-Run1-1.tcl
To use the script, just say source move-2
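The hand edit can be automated. Here is a sketch that builds move-2 directly, assuming root ntuples and the directory layout used in this tutorial; the demo log files at the top are fabricated stand-ins for real batch output:

```shell
# Demo setup: fake log files standing in for real batch output.
mkdir -p log ntuples tcl snippets maketcl successful
echo "Successfully completed." > log/SP-989-Run1-1.log
echo "Exited with exit code 1." > log/SP-989-Run1-2.log

# Build move-2: one mv/rm/gzip group per successfully completed job.
grep -l "Successfully completed" log/*.log |
while read logfile; do
    name=$(basename "$logfile" .log)
    echo "mv log/$name.log successful/"
    echo "mv ntuples/$name.root successful/"
    echo "/bin/rm tcl/$name.tcl"
    echo "/bin/rm snippets/run_$name.tcl"
    echo "gzip maketcl/$name.tcl"
done > move-2
```

Review move-2 by hand before sourcing it, just as with the manually edited version.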
As I mentioned before, there might be a real task
manager to
do this sort of thing for you.
Be sure to keep track of all your failures so that you
can
correct the luminosity weighting of your MC sample.
If you are using hbook files (vs root), you will want to
gzip them before you transfer them to your local disks. Either way, you
will definitely want to gzip the log files.
Next steps
The Root
III chapter of the workbook gives an example of an analysis on this
type of ntuple.
Here are a few things to try before we go on to the more
complex SimpleComposition case.
Make a decent Jpsi mass plot (with binning suitable for
the
detector resolution), and observe how it changes if you
require muNNLoose on both legs instead of VeryLoose. It
might be interesting to check the scatterplot of momentum vs
theta for your muons to see where they fall in the detector.
What happens if you require GoodTracksLoose for both
muon
legs and the kaon? You can do this by checking the bitmap,
but you should also try sublisting in SimpleComposition. Another
approach (suitable for your own analysis code, not
for skimming), would be to change the input lists to the
muon and kaon selectors. Look through the path to figure out
where to do this.
Perhaps it would be worth adding a cut on the quality of
the
vertex of the Jpsi. Note that we don't want to make a cut on
the chisquare of the mass-constrained Jpsi - this would be
like making a cut on the Jpsi mass. You will need to add
this quantity to your ntuple, and will need to compare
signal MC to data to see if there is anything to gain.
Before going on to your own analysis, it might be a
reasonable time to review Vertexing
and Composition. This is a complex topic, and it can be hard to
pick the correct approach. Don't be afraid to ask questions. Here is a typical
exchange, in this case concerning the decay D0 --> Ks eta, which
has no charged tracks originating from the vertex point.
Before expanding our analysis, we should discuss the
issues
of self-conjugate modes and clones, which can be quite
confusing if you don't know the underlying issues. HN is
full of questions, including some from me. The tag of SimpleComposition
in analysis-31 has fixed
many of the self-conjugate issues, but it is still worth being
aware of them.
Self-Conjugate modes and Clones
Self-conjugate states are decays such as D0 --> pi+ pi- and D0-bar
--> pi+ pi-. SimpleComposition generally handles such cases
correctly; if you build a list such as B+ --> D0 pi+, that "D0"
list will also be used automatically to create B- --> D0-bar pi-
candidates. For B0 --> D0 pi0, however, you will get only B0
candidates (not B0-bar), since the B0 and B0-bar are considered
"clones". Two
candidates are considered to be clones if the final state
contains the same candidates (in any order) with the same
mass hypotheses. Note that the code does not check whether or not
the initial state is the same.
One case to be aware of: if you merge your self-conjugate D0 list
with other D0 lists and then make B0 --> D0 pi0 candidates, you may end
up with the same D0 / pi0 combination listed as both a B0 and a B0-bar.
This may or may not be what you want.
Your Analysis
Let's end with a few suggestions about your own analysis.
- Discuss it with your conveners and the AWG before you
get started, and frequently thereafter.
- Consider posting your analysis tcl and list of tags
to your AWG hypernews before you make a lot of ntuples - this might
save you a lot of problems.
- Along the same line, check every quantity in your
ntuple on a signal MC and a data job before you start.
- Save the tcl and executable you used to make your
ntuples until you have finished the analysis. Until the paper
appears in print, that is.
Author: Christopher Hearty Last significant update: June 2006
Page maintained by Adam Edwards
Last modified: January 2008