Podd Analyzer
A good general reference for Podd:
https://redmine.jlab.org/projects/podd/wiki
Specific information for analyzing PREX HRS data:
Brief instructions on running the Podd analyzer
This file: /adaqfs/home/a-onl/happexsp/prex/README
R. Michaels, May 3, 2019

I. Set up
==========

Type "goprex" on the a-onl account on aonl1 or aonl2 to go to the working
directory and set the environment.

We're using the "official" (Ole's) analyzer at the moment, but with the
ParityData class plugin.  I also have a secret version of the full software
on another account, but we'll try to use Ole's version because it will be
better supported.

The database is $DB_DIR = ./DB, which is a link, e.g. a link to the PREXI
or PREXII databases.  PREXI is there for test purposes, while PREXII is
forward-looking and needs to be updated.  To switch between these:

  rm DB ; ln -s DB_PREXI DB

Our detectors are defined in db_P.dat in the $DB_DIR.
The variables written to the ROOT output are defined in output.def.
Which run is analyzed is controlled by a link; run.dat is the link.
For example:

  run.dat -> /home/rom/scratch/apex/apex_5021.dat.0

Important files:

  db_run.dat      -- defines kinematics
  db_cratemap.dat -- cratemap.  We have apex and prexI maps, so far
  onlana.cuts     -- cuts file, see Podd documentation
  output.def      -- output definition, see Podd documentation
  rootlogon.C     -- loads libParity.so
  setup.C         -- sets up the analyzer (see the example below and the
                     sketch after section III)
  show*.C         -- various macro examples.  This needs work!
  Afile.root      -- the ROOT output.  May want to put this on a work disk
  bob.txt         -- debug output.  May want to turn off debugging

You can do the following to get familiar with the code.
To run the analyzer, type "analyzer":

  [a-onl@aonl1 prex]$ pwd
  /adaqfs/home/a-onl/happexsp/prex
  [a-onl@aonl1 prex]$ analyzer

  ************************************************
  *                                              *
  *            W E L C O M E  to  the            *
  *       H A L L A   C++  A N A L Y Z E R       *
  *                                              *
  *  Release      1.6.6          Apr 25 2019     *
  *  Based on ROOT 6.16/00       Jan 23 2019     *
  *                                              *
  *            For information visit             *
  *        http://hallaweb.jlab.org/podd/        *
  *                                              *
  ************************************************

  analyzer [0] .x setup.C
  analyzer [1] analyzer->Process(*run)

II. Exercises to become familiar
===================================

1. Run the analyzer, with run.dat -> runL_sieve.dat, and then:

     analyzer [2] .x showSieveLeft.C

2. rm run.dat ; ln -s happexsp_6840.dat.0 run.dat

     analyzer [7] TH1F *h1 = new TH1F("h1","",100,0.,0.02)
     (TH1F *) 0x194c30a0
     analyzer [8] T->Draw("P.q2R>>h1")

   There should also be EK_L.* and EK_R.* variables, and you can:

     analyzer [9] .x showQsqRight.C

   Look at the P tree (generated by ParityData):

     root [2] T->Print("P.*")

   Draw amplitudes for the upper Quartz detector on the R-HRS:

     root [4] T->Draw("P.upQadcR")

   (We need to define these for our new detector locations.)

3. Analyze some flash ADC data from APEX.  Link to some APEX data with
   mode 10 data:

     apex_5021.dat.0 -> /adaq1/data1/apex_5021.dat.0
     run.dat -> apex_5021.dat.0

   The flash ADC printout in the debug file "bob.txt" should be
   self-explanatory.  Also, the ROOT file contains snapshots, e.g.
   snap4->Draw().  This is still a work in progress!!

III. Working with ParityData
=============================

  directory:  ~a-onl/happexsp/ParityData
  compile:    make ; make install

Note, as I write this, the status is the following.  A very large debug
output file, bob.txt, is made; we can turn this off for production running.
The part that fills the global variables defined in db_P.dat is fairly
mature, but the FADC analysis needs a lot of work, though a baseline
version which works exists.  Each detector should go into a Fastbus ADC
and TDC, as well as a scaler (see IV) and possibly an FADC.
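The setup.C executed in section I is not reproduced in this README.  As a
point of reference, a minimal sketch of a typical Podd setup script is
shown below; the apparatus list, event limit, and the ParityData
constructor arguments are assumptions, so check the actual setup.C in the
working directory.

  // setup.C -- hedged sketch of a typical Podd setup script (unnamed ROOT
  // macro, executed with ".x setup.C").  The apparatus list, event limit,
  // and ParityData arguments are assumptions; only the file names match
  // the list in section I.
  {
    // Spectrometer apparatuses (add only the arm(s) being analyzed)
    gHaApps->Add( new THaHRS("L", "Left HRS") );
    gHaApps->Add( new THaHRS("R", "Right HRS") );

    // Parity plugin from libParity.so; fills the P.* variables of db_P.dat
    gHaPhysics->Add( new ParityData("P", "Parity data") );

    // The run to analyze -- run.dat is the link described in section I
    THaRun* run = new THaRun("run.dat");
    run->SetLastEvent(50000);              // limit events for a quick test

    // The analyzer itself, wired to the files listed in section I
    THaAnalyzer* analyzer = new THaAnalyzer;
    analyzer->SetEvent( new THaEvent );    // standard event header
    analyzer->SetOutFile("Afile.root");    // ROOT output
    analyzer->SetOdefFile("output.def");   // output definition
    analyzer->SetCutFile("onlana.cuts");   // cuts file
  }

Because this is an unnamed macro, "run" and "analyzer" remain available at
the prompt afterward, which is why the session in section I can then call
analyzer->Process(*run) directly.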
IV. Scalers
============

Scalers from event type 140 are inserted into the ROOT trees, both the "T"
tree and the dedicated scaler trees TSLeft and TSRight, which you can see
when you open the ROOT file and type ".ls".  For example:

  TSRight->Scan("RightLclock:RightT1_r:RightT2_r:RightL1A_R:RightL1A_R_r")

The setup of the scalers is controlled by database files, which are kept
in the working directory:

  db_LeftScalevt.dat
  db_RightScalevt.dat

The syntax is fairly straightforward.  For example, the following two lines
in db_RightScalevt.dat produce the global variables "T5" (counts) and
"T5_r" (rate) in the output:

  variable 3 4 1 T5   RHRS T5
  variable 3 4 2 T5_r RHRS T5 rate

The first two numbers mean slot 3 (indices start at 0) and channel 4; the
third number selects "counts" (1) or "rate" (2).  The string after the
variable name is just a comment.

We'll need to adjust these, and first test the scalers using xscaler.
Type "goxscaler" on the a-onl account to run xscaler.

V. Exercises to perform to become acquainted
=================================================

1. Generate the sieve slit pattern from PREX-I data.
2. Generate the Qsq distribution using both the EK class (built into Podd)
   and the private analysis in ParityData.  The results should be identical
   (see the sketch after this list).
3. Look at detector signals from PREX-I.
4. Look at FADC data from APEX.

---- now, to our own setup ----

5. Establish detector channels (pulser simulation) in xscaler.
6. Adjust the db_P.dat mapping to observe the Fastbus ADC and TDC for each
   detector.
7. Set up the FADC and observe snapshots of random events.
8. Add integrated FADC data and time-over-threshold data to the ROOT output.
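For exercise 2, a minimal sketch of the comparison is given below.  It
assumes the built-in kinematics module appears under the name EK_R with a
Q2 variable (following the EK_L.*/EK_R.* naming mentioned in section II)
and uses the P.q2R variable from ParityData; adjust the names to whatever
T->Print() actually shows.

  // Hedged sketch for exercise 2: compare Q^2 from the built-in EK module
  // with the ParityData result, after processing a run in the analyzer.
  // The variable name EK_R.Q2 is an assumption -- check with
  // T->Print("EK_R.*") and T->Print("P.*").
  TH1F* hek = new TH1F("hek", "Q^{2}, EK module (R-HRS)",  100, 0., 0.02);
  TH1F* hpd = new TH1F("hpd", "Q^{2}, ParityData (R-HRS)", 100, 0., 0.02);
  T->Project("hek", "EK_R.Q2");   // fill without drawing
  T->Project("hpd", "P.q2R");
  hek->SetLineColor(kBlue);
  hpd->SetLineColor(kRed);
  hek->Draw();
  hpd->Draw("same");              // the two should lie on top of each other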