JAPAN Doc Portal

From PREX Wiki

Documentation for the new PREX/CREX analysis is curated here.
 
==Important Resources==

*[[Analyzer_Meeting|Analyzer Meeting list]]
*[[Github_Guide|Contributing on Github]]
*[[CODA_Running_Quickstart|CODA Running Quickstart]]
*[[JLab_Computing|JLab Scientific Computing Guide]]
 
==Resources==

* [[Media:Counting_House_TipsTricks-Jun2017.pdf|Brad's Counting House computer guide]]
* [[JLab_Computing|JLab Computing]]
** [[Media:Computing_tools-tricks-Jun2019.pdf|Brad's scientific computing/ifarm guide]]
** [https://scicomp.jlab.org/docs/auger_slurm Auger/Slurm]
** [https://scicomp.jlab.org/docs/swif Swif]
* Source code: [https://github.com/JeffersonLab/japan JAPAN Github]
* [[Github_Guide|Guide to using Github]]
* List of cut bits in the JAPAN source code: [https://github.com/JeffersonLab/japan/blob/develop/Analysis/include/QwTypes.h#L165 QwTypes.h#L165]
  
 
==Using JAPAN on adaq cluster==

In the apar account, execute "gojapan" to set up the environment variables and change to the "official" copy of JAPAN (~apar/PREX/japan).

Some typical calls to the analyzer:

* ''./build/qwparity --config prexCH.conf --detectors prexCH_detectors_no_hel.map -r 1107''<br />This does the analysis for parity_CH mode data files (run 1107 in this particular case), where the helicity patterns are treated as always being quartets with polarities "+ - - +".
* ''./build/qwparity --config prex.conf -r 1036''<br />This analyzes the parity_ALL mode data files, using the injector helicity data to build quartets.
* ''./build/qwparity --config prexinj.conf -r ####''<br />This analyzes the parity_INJ mode data files. The current default analysis mode is quartets; if the DAQ is changed to octets or other patterns, we will need to update the injector helicity configuration used by the analyzer.
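When analyzing many runs, the calls above can be scripted. A minimal sketch in Python, assuming the binary path and config/map names shown above; the helper only builds the command lines (it does not execute qwparity):

```python
# Sketch: build qwparity command lines for a list of runs.
# The binary path and config names are taken from the examples above;
# adapt them to your own checkout. Commands are constructed, not run.

def qwparity_cmd(config, run, detectors=None, binary="./build/qwparity"):
    """Return the argument list for one analyzer invocation."""
    cmd = [binary, "--config", config, "-r", str(run)]
    if detectors is not None:
        cmd += ["--detectors", detectors]
    return cmd

if __name__ == "__main__":
    # Print the commands for a couple of runs; pipe into a shell or
    # pass each list to subprocess.run() on a machine with the build.
    for run in (1107, 1036):
        print(" ".join(qwparity_cmd("prexCH.conf", run,
                                    detectors="prexCH_detectors_no_hel.map")))
```

From here each list can be handed to subprocess.run() on a machine where the build exists.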
  
== Example install of JAPAN on ifarm (not in apar@adaq3, nor for the main DAQ data) ==

To analyze any CODA DAQ .dat file with [https://github.com/jeffersonlab/japan JAPAN], do the following:
  module load cmake/3.5.1
  source /site/12gev_phys/softenv.csh 2.3
  cd /directory/to/store/japan/
  git clone https://github.com/JeffersonLab/japan
  cd japan
  git checkout tedf-test-stand
  source setup_japan.tcsh
  mkdir build; cd build; cmake ..; make; cd ..
  ./build/qwparity --config tedf_testing.conf -r [number]

** The config file (contained within the Parity/prminput folder) should contain all of the necessary information for JAPAN to decode the data
** Care must be taken to ensure that the maps correspond to what configuration the DAQ channels were in during the data collection for that run
 
** Run numbers can be used automatically to do this, but the user must define the map files appropriately so it works
** It is important to turn off the beam current normalization for channel readouts
* The raw .dat data files should be copied over the network ([[JLab_Computing#Transferring_Huge_Files|see guide]]) into a folder on which you have read and write privileges (on the ifarm, ideally in the /volatile or /scratch directories; [[JLab_Computing|see guide]])
** The resultant ROOT files will only be accessible to you, so if you need to share them consider just using the standard apar@adaq3:~/PREX/japan install; if there are features you need to add to that instance of the analyzer, use the [https://github.com/JeffersonLab/japan/tree/develop Github repo].
* The ROOT output should go into ${QW_ROOTFILES} (an environment variable initialized by setup_japan.tcsh), for analysis done on the corresponding .dat file in ${QW_DATA}
* After analyzing your run, be sure to look at the JAPAN outputs and read the Error Summary list
** Any number above 0 indicates that an event has failed some check and the data was not saved to the ROOTfile (or it was, but has an error code assigned to it)
** See [https://github.com/JeffersonLab/japan/blob/4d495cb831d3cb10e58a822522d08390e7412602/Analysis/include/QwTypes.h#L163 the source code for a catalog of error codes]
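Since the error codes are packed into a bit mask, a small decoder can make the Error Summary easier to read. A sketch, where the bit names and positions below are HYPOTHETICAL placeholders, not the real assignments; the authoritative list lives in Analysis/include/QwTypes.h (linked above):

```python
# Sketch: decode a JAPAN-style error-flag bit mask into human-readable names.
# NOTE: these bit assignments are HYPOTHETICAL illustrations only; consult
# Analysis/include/QwTypes.h in the japan repo for the real definitions.

HYPOTHETICAL_ERROR_BITS = {
    0: "sample_size_mismatch",
    1: "sequence_number_mismatch",
    2: "event_cut_failed",
}

def decode_error_flag(flag):
    """Return the names of all set bits; unknown bits are reported by index."""
    names = []
    for bit in range(flag.bit_length()):
        if flag & (1 << bit):
            names.append(HYPOTHETICAL_ERROR_BITS.get(bit, f"bit_{bit}"))
    return names
```

Once the dictionary is filled in from QwTypes.h, this turns an opaque flag value into a list of cut names.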
 
== Using ifarm Scientific Computing Resources ==
 
 
The JLab computer center manages the various ifarm clusters and data storage. These are some of the resources offered there ([https://scicomp.jlab.org/docs/FarmUsersGuide general guide]):
 
 
* To log into the JLab servers, first ssh into login.jlab.org, which requires a CUE JLab computer account. From there you can ssh directly to another node on the JLab network, such as the jlabl1 workstations or the ifarm, by hostname.
 
* Standard nuclear physics computing programs and analysis software can be initialized into your coding environment by executing the "production" scripts. To execute these scripts upon logging into the JLab servers, place the following (or something similar) in your .login or any .*rc file you prefer:

  source /site/12gev_phys/softenv.csh 2.3
 
* If you have problems, make sure you are running the /bin/tcsh shell (execute "echo $SHELL" on the command line to see what shell you are running)

* Recently the production scripts have been updated to "softenv" scripts; the newest version is 2.3
 
* The [https://scicomp.jlab.org/docs/swif swif workflow program] can manage running batch jobs on the Auger batch farm system for you.
 
* See the "job management" section of Ciprian Gal's [https://github.com/sbujlab/prexSim prexSim simulation readme] for specific details on how to get a swif workflow up and running for you.
 
 
=== Getting Account Access Permissions ===
 
 
* Register as a JLab user, undergrad, or grad student ([https://misportal.jlab.org/jlabAccess/ here], or register from the [https://www.jlab.org/hr/jris/processing.html "online" link in here]).
 
** After registration, you have to "Register New Visit" as user group "Remote Access," even though you aren't necessarily visiting, and you will need to call the [https://cc.jlab.org/helpdesk/ JLab helpdesk] at some point.
 
** While filling out the Registration form you can request an account on the JLab Common User Environment (CUE). You must include Bob Michaels (rom@jlab.org) as your JLab sponsor for the account - be sure to request access to a-parity and moller12gev user groups ([https://cc.jlab.org/useraccounts here is a good starting link]).
 
** To set up your computing environment on the ifarm see above.
 
** To get access to swif and scientific computing resources follow the instructions [https://scicomp.jlab.org/docs/network_certificate here].
 
** Then to use swif see the guide below or readmes in relevant repositories.
 
* Jefferson Lab github access - Send an email with the following (and if this doesn't work, ask one of the senior members of the collaboration to add you themselves):
 
  Subject: Please add me to the JeffersonLab github organization
  To: <helpdesk@jlab.org>
  
  Hello,
  
  I'm a JLab user and my JLab user name is _______.
  Could you please add me to the JeffersonLab github organization?
  My github username is ______ and account id is ______
 
 
=== Introduction to SWIF ===
 
 
To use the ifarm's batch submission system ([https://scicomp.jlab.org/scicomp/index.html#/?username= online monitoring and documentation here]) one option is to use the Auger batch system manager called "swif" ([https://scicomp.jlab.org/docs/swif documented somewhat here]).
 
 
* To use swif, first you need access to the ifarm; then you need to create a certificate (see above). Execute:

  /site/bin/jcert -create

* To create a workflow on swif, run the following (where WorkFlowName is an identifier you give it to monitor its progress):

  swif create -workflow WorkFlowName

* To monitor the workflow, run:

  swif status -workflow Name

* To delete a workflow, run:

  swif cancel -workflow Name
  swif cancel -delete -workflow Name

* To add a job, run:

  swif add-jsub -workflow Name -script jobScript.xml
  swif run -workflow Name
 
* To create a script .xml file for running jobs see the description of its function and the python wrapper code included in Ciprian's prexSim code ([https://github.com/cipriangal/prexSim https://github.com/cipriangal/prexSim]) or Cameron's updated one to work with new remoll v2.0.0 data structures ([https://github.com/sbujlab/rad_analysis/blob/master/jlabSubmit.py jlabSubmit.py and its relatives])
 
* A suggested .login file for your ifarm use (that allows for batch job submission) is:

  source /site/env/syslogin
  source /site/env/sysapps
  if ( `hostname` !~ "jlabl"* && `hostname` !~ "adaq"* ) then
    source /site/12gev_phys/softenv.csh 2.3
  endif
 
* A sample .tcshrc file for using the default ifarm tcsh shell is:

  # ~/.tcshrc: executed by tcsh(1) for non-login shells.
  setenv PATH $PATH\:/site/bin
  set savehist = 100000
  set histfile = ~/.tcsh_hist
  alias root root -l
  alias gits git status
  alias swif /site/bin/swif
  alias swifs swif status -workflow
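The swif command sequence above can be wrapped in a small helper when setting up the same workflow repeatedly. A sketch that only assembles the command strings listed in this section (it does not execute them, and the jobScript.xml itself must still be written separately, e.g. with the prexSim python wrapper):

```python
# Sketch: assemble the swif command sequence described above for one
# workflow. Strings only; nothing is executed, and the XML job script
# (jobScript.xml) must be created separately.

def swif_commands(workflow, job_script="jobScript.xml"):
    """Return the create/add/run/status commands for a named workflow."""
    return [
        f"swif create -workflow {workflow}",
        f"swif add-jsub -workflow {workflow} -script {job_script}",
        f"swif run -workflow {workflow}",
        f"swif status -workflow {workflow}",
    ]

if __name__ == "__main__":
    for cmd in swif_commands("prexAna"):
        print(cmd)
```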
 
  
 
==Compilation==

The code as is compiles fine on ifarm machines (1401 or 1402). For local installation you need to have:

* mysql++
* ROOT with the Minuit2 library
* Boost libraries

Verified compilation with:

  -- The C compiler identification is GNU 4.8.5
  -- The CXX compiler identification is GNU 4.8.5
  -- Check for working C compiler: /bin/cc
  -- Check for working C compiler: /bin/cc -- works
  -- Detecting C compiler ABI info
  -- Detecting C compiler ABI info - done
  -- Detecting C compile features
  -- Detecting C compile features - done
  -- Check for working CXX compiler: /bin/c++
  -- Check for working CXX compiler: /bin/c++ -- works
  -- Detecting CXX compiler ABI info
  -- Detecting CXX compiler ABI info - done
  -- Detecting CXX compile features
  -- Detecting CXX compile features - done
  -- System name Linux
  -- Found ROOT 6.12/04 in /home/ciprian/root/root6/root/build
  -- Boost version: 1.53.0
  -- Found the following Boost libraries:
  --   program_options
  --   filesystem
  --   system
  --   regex
  No QwAnalysis dictionaries needed for ROOT 6.12/04.
  -- Configuring done
  -- Generating done
  -- Build files have been written to: /home/ciprian/prex/japan/build
== Minimal Working Configuration Files ==

For more detailed information on JAPAN configurations [[Japan_options_and_maps|see here]].

To get a minimal working version of a JAPAN config file, do the following steps in order (clone over SSH as shown, or over HTTPS from https://github.com/JeffersonLab/japan):

  git clone git@github.com:JeffersonLab/japan
  cd japan
  mkdir build
  cd build
  cmake ../
  make
  cd ../Parity/prminput

* The engine needs to know where to look for CODA files and where to put ROOT files; this can be done either with environment variables or command line options:

  setenv QW_DATA [path to CODA data files]
  setenv QW_ROOTFILES [path to where you want to store output files]

* or included when populating [config_name].conf with the following options (a conf file stands in place of a long string of command line options; by default JAPAN assumes that a qwparity.conf file exists, and fails if it is not overridden with --config [config_name].conf):

  vim [config_name].conf

* Fill in the CODA and ROOT file options:

  data = /path/to/coda/files
  rootfiles = /path/to/root/output/files

* The engine will search for a default data file named "QwRun_#.log", so you should override the prefix and extension with the following .conf options:

  codafile-stem = [CODA file stem (prex_ALL_)]
  codafile-ext = [extension (dat)]

* The engine will search for a default map file for the detector names and channel map in "detectors.map"; override it with:

  detectors = [map name (prex_detectors.map)]

* Because the ROCs for the Parity DAQ are < 31, we need to add a flag (either in a separate conf file or by itself):

  add-config = prexbankflag.conf

** Where prexbankflag.conf contains:

  allow-low-subbank-ids = yes

* The engine will automatically give the ROOT file prefix as "Qweak_#.trees.root"; it can be overridden with:

  rootfile-stem = [stem (prexALL_)]

* To use the analyzer on the command line, pass your config file and the run number (or else it will use run number 0):

  ./build/qwparity --conf [config name.conf] -r [run number]

With all of these in place the analyzer will behave in a user-defined and consistent way.
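The option names above can also be emitted programmatically when setting up several configurations. A minimal sketch; the keys come from this section, but the values are placeholders to be replaced with your own paths and map names:

```python
# Sketch: generate a minimal JAPAN .conf file from the options discussed
# above. The keys are the ones documented in this section; the values are
# example placeholders that must be adapted to your setup.

MINIMAL_CONF = {
    "data": "/path/to/coda/files",
    "rootfiles": "/path/to/root/output/files",
    "codafile-stem": "prex_ALL_",
    "codafile-ext": "dat",
    "detectors": "prex_detectors.map",
    "add-config": "prexbankflag.conf",
    "rootfile-stem": "prexALL_",
}

def render_conf(options):
    """Render options as 'key = value' lines, one option per line."""
    return "\n".join(f"{key} = {value}" for key, value in options.items()) + "\n"

if __name__ == "__main__":
    # Write the rendered text to your chosen [config_name].conf
    print(render_conf(MINIMAL_CONF), end="")
```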
== Basic level map files ==

The basic map file should contain at least one subsystem array, with at least a channel map and a set of event cuts:

  [QwBlindDetectorArray]
  name     = MainDets
  map      = prex_maindet.map
  param    = prex_maindet_pedestal.map
  eventcut = prex_maindet_eventcuts.map

where the event cuts file should contain at least the following line:

  EVENTCUTS = 3

and additional lines are used to give global-level event cuts for various subsystem array elements.

The subsystem array map tells the engine how to decode the CODA data file and should look like this:

  ROC=25
  Bank=0x5
  vqwk_buffer_offset=1
  sample_size=16564
  ! "!" means we are in a commented-out line
  ! Hardware name, module number, channel number, channel object type, channel name,  blinding status
  VQWK, 0, 0,  IntegrationPMT,  usl          not_blindable
  VQWK, 0, 1,  IntegrationPMT,  dsl          not_blindable

The options for possible subsystem arrays include:

* QwBeamLine
* QwHelicity
* QwDetectorArray
* QwBlindDetectorArray
* QwScaler

Any one of them can be called multiple times, with unique map files for each instance.

If no helicity subsystem is defined, JAPAN will default to a quartet pattern with a non-random, simple alternating pattern.

An example of a working non-PREX setup that uses the same CRL and JAPAN analysis is the [https://github.com/JeffersonLab/japan/tree/tedf-test-stand/Parity/prminput TEDf test stand analysis] (which is somewhat stale, but works nonetheless on a different data set and channel maps).

For more information on optional features of the configuration and map files, see [[Japan options and maps]].
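To sanity-check a channel map before handing it to the engine, the format shown above can be parsed with a few lines of Python. A sketch, assuming only the conventions visible in the example (lines starting with "!" are comments, settings use "=", and channel rows are comma-separated with a whitespace-separated blinding status after the channel name):

```python
# Sketch: parse VQWK channel-map rows of the form shown above.
# Assumes the layout visible in the example: "!" starts a comment,
# "key=value" lines are decoder settings, and channel rows read
#   Hardware, module, channel, object type, name  blinding-status

def parse_channel_map(text):
    """Return one dict per channel row; settings and comments are skipped."""
    channels = []
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("!") or "=" in line:
            continue  # skip blanks, comments, and ROC/Bank/... settings
        hw, module, chan, objtype, rest = [f.strip() for f in line.split(",")]
        name, *status = rest.split()
        channels.append({
            "hardware": hw, "module": int(module), "channel": int(chan),
            "type": objtype, "name": name,
            "blinding": status[0] if status else "blindable",
        })
    return channels
```

Running it over a map file and checking for duplicate channel numbers is a quick way to catch copy-paste mistakes before an analysis run.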
  
 
== Recently Asked Questions ==

'''Shift Taker How To'''

* [[Online_analysis|Online Analysis How To]]
  
'''How to establish pedestals for a channel'''

* For beam-off pedestals
** Plot values like "cav4bx.hw_sum_raw/cav4bx.num_samples" during a known period of beam off, and find the mean.
** Then add lines to the prexCH_beamline_pedestal.***.map for your channels
** In each line, the channel name is first, then the pedestal, then the gain
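The beam-off mean described above amounts to averaging hw_sum_raw/num_samples over the beam-off events. A sketch with made-up numbers; in practice the two arrays come from the evt tree for your channel (e.g. cav4bx):

```python
# Sketch: estimate a beam-off pedestal as the mean of
# hw_sum_raw / num_samples over a set of beam-off events.
# The inputs here are illustrative; real values come from the evt tree.

def pedestal(hw_sum_raw, num_samples):
    """Mean per-sample raw value over the supplied beam-off events."""
    per_sample = [s / n for s, n in zip(hw_sum_raw, num_samples)]
    return sum(per_sample) / len(per_sample)
```

The resulting number is what goes into the pedestal column of the map line, alongside the channel name and gain.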
  
'''I need to cut on non-zero ErrorFlags while working on the evt tree. Do I need to do the same thing while working on the mul tree, or does the mul tree itself cut on non-zero ErrorFlag?'''

* You would also need to apply the cuts in the mul tree
  
'''When I applied the ErrorFlag==0 cut, I was left with only a few entries out of the run. Is something wrong, or is this normal?'''

* For a run with a lot of beam-off time, it is likely normal
** Take a look at "yield_bcm_an_ds10" with and without the cut
** You'll see how many events have non-zero beam current
  
'''ErrorFlag==0 events are those with non-zero beam current, right?'''

* prexCH_beamline_eventcuts.map contains the condition for the ErrorFlag
* At the moment the ErrorFlag is set by the one "global" cut (a line marked with a "g" in the event cuts file), and that is on bcm_an_ds3
** Its value must be above 1 uA and below 1e6 uA, and there is an eye-balled stability cut as well
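The comparison suggested above (events passing ErrorFlag==0 versus events with non-zero beam current) can be sketched directly. The event list below is made up; in practice the values come from the evt or mul tree:

```python
# Sketch: compare the ErrorFlag==0 survivors against the number of events
# with non-zero beam-current yield, mimicking the check suggested above.
# Inputs are illustrative; real values come from the analysis trees.

def cut_report(error_flags, yields):
    """Count events passing ErrorFlag==0 and events with non-zero yield."""
    passing = sum(1 for flag in error_flags if flag == 0)
    beam_on = sum(1 for y in yields if y > 0)
    return {"total": len(error_flags),
            "errorflag_zero": passing,
            "beam_on": beam_on}
```

If errorflag_zero is far below beam_on, the global cut (or its stability window) is removing beam-on events and deserves a closer look.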
'''How do I use the HV GUI, especially for SAM running?'''

* See the following page and associated links: [[SAMs_HV_Settings|SAMs HV Settings]]

== Expert Notes ==

* [[Japan_options_and_maps|JAPAN Options and Maps]]
* [[JAPAN/Data Handler]] - Describes the functionality of the data handler classes.
  
 
[[Category:Analyzer]]

Latest revision as of 10:56, 18 October 2019




