JAPAN Doc Portal

From PREX Wiki

Revision as of 20:03, 20 February 2019

Documentation for the new PREX/CREX analysis framework (JAPAN) is curated here

Important Resources

Using JAPAN on adaq cluster

In the apar account, execute "gojapan" to set up the environment variables and change to the "official" copy of JAPAN (~apar/PREX/japan).

Some typical calls to the analyzer:

  • ./build/qwparity --config prexCH.conf --detectors prexCH_detectors_no_hel.map -r 1107
    This does the analysis for parity_CH mode data files (run 1107 in this particular case), where the helicity patterns are treated as always being quartets with polarities "+ - - +".
  • ./build/qwparity --config prex.conf -r 1036
    This analyzes the parity_ALL mode data files, using the injector helicity data to build quartets. The current default analysis mode is quartets; if the DAQ is changed to octets or other patterns, the injector helicity configuration used by the analyzer will need to be updated.
  • ./build/qwparity --config prexinj.conf -r ####
    This analyzes the parity_INJ mode data files. The current default analysis mode is quartets; if the DAQ is changed to octets or other patterns, the injector helicity configuration used by the analyzer will need to be updated.

Using JAPAN on ifarm to analyze TEDf test stand data

To analyze any CODA DAQ .dat file with JAPAN, do the following:

source /site/12gev_phys/softenv.csh 2.3
cd /directory/to/store/japan/
git clone https://github.com/JeffersonLab/japan
cd japan
git checkout tedf-test-stand
source setup_japan.tcsh
mkdir build; cd build; cmake ..; make; cd ..
./build/qwparity --config tedf_testing.conf -r [number]
    • The config file (contained in the Parity/prminput folder) should contain all of the necessary information for JAPAN to decode the data
    • Care must be taken to ensure that the maps correspond to the configuration the DAQ channels were in during data collection for that run
    • Run numbers can be used to select map files automatically, but the user must name the map files appropriately for this to work
    • It is important to turn off the beam current normalization for channel readouts
  • The ROOT output goes into ${QW_ROOTFILES} (an environment variable initialized by setup_japan.tcsh); the analysis is done on the corresponding .dat file in ${QW_DATA}
  • After analyzing your run, be sure to look at the JAPAN outputs and read the Error Summary list
    • Any number above 0 indicates that an event has failed some check and the data was not saved to the ROOTfile (or it was, but has an error code assigned to it)
    • See the source code for a catalog of error codes: https://github.com/JeffersonLab/japan/blob/4d495cb831d3cb10e58a822522d08390e7412602/Analysis/include/QwTypes.h#L163
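The run-number-based map selection mentioned above can be sketched as follows. This is a hypothetical illustration only, assuming the naming convention seen in map files such as prexCH_beamline_pedestal.1199-.map, where the ".1199-" suffix means "valid from run 1199 onward"; the function name and file list are invented, not JAPAN's actual implementation.

```python
import re

def select_map_file(map_files, run_number):
    """Pick the map file whose run-range suffix covers run_number.

    Assumes names like 'prexCH_beamline_pedestal.1199-.map', where
    '1199-' means 'valid from run 1199 onward' (until a map with a
    later starting run takes over).
    """
    best_start, best_file = -1, None
    for name in map_files:
        m = re.search(r"\.(\d+)-\.map$", name)
        if m is None:
            continue
        start = int(m.group(1))
        # Keep the map with the largest starting run that is <= run_number
        if start <= run_number and start > best_start:
            best_start, best_file = start, name
    return best_file

maps = ["prexCH_beamline_pedestal.1199-.map",
        "prexCH_beamline_pedestal.1230-.map"]
print(select_map_file(maps, 1210))  # → prexCH_beamline_pedestal.1199-.map
print(select_map_file(maps, 1250))  # → prexCH_beamline_pedestal.1230-.map
```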

Using ifarm Scientific Computing Resources

The JLab computer center manages the various ifarm clusters and data storage. These are some of the resources offered there (general guide: https://scicomp.jlab.org/docs/FarmUsersGuide):

To log into the JLab servers, first ssh into login.jlab.org, which requires a CUE JLab computer account. From there you can ssh to another node in the JLab network, such as the jlabl1 workstations or the ifarm, directly by hostname.

Standard nuclear physics computing programs and analysis software can be initialized into your environment by executing the "production" scripts. To execute these scripts upon logging into the JLab servers, place the following (or something similar) in a file called .login or any .*rc file you prefer:

source /site/12gev_phys/production.csh 2.0

If you have problems, make sure you are running the /bin/tcsh shell (execute "echo $SHELL" on the command line to see which shell you are running). Recently the production scripts have been updated to "softenv" scripts, and the newest version is 2.2 instead of 2.0.

The swif workflow program (https://scicomp.jlab.org/docs/swif) can manage running batch jobs on the Auger batch farm system for you. See the "job management" section of Ciprian Gal's prexSim simulation readme (https://github.com/sbujlab/prexSim) for specific details on how to get a swif workflow up and running.

Getting Account Access Permissions

  • Register as a JLab user, undergrad, or grad student (https://misportal.jlab.org/jlabAccess/, or register from the "online" link at https://www.jlab.org/hr/jris/processing.html).
    • After registration, you have to "Register New Visit" as user group "Remote Access," even though you aren't necessarily visiting, and you will need to call the JLab helpdesk (https://cc.jlab.org/helpdesk/) at some point.
    • While filling out the registration form you can request an account on the JLab Common User Environment (CUE). You must include Bob Michaels (rom@jlab.org) as your JLab sponsor for the account, and be sure to request access to the a-parity and moller12gev user groups (a good starting point: https://cc.jlab.org/useraccounts).
    • To set up your computing environment on the ifarm, see above.
    • To get access to swif and scientific computing resources, follow the instructions at https://scicomp.jlab.org/docs/network_certificate.
    • Then, to use swif, see the guide below or the readmes in the relevant repositories.
  • Jefferson Lab GitHub access: send an email like the following (and if this doesn't work, ask one of the senior members of the collaboration to add you themselves):
Subject: Please add me to the JeffersonLab github organization
To: <helpdesk@jlab.org>
Hello,
I'm a JLab user and my JLab user name is _______.
Could you please add me to the JeffersonLab github organization?
My github username is ______ and account id is ______

Introduction to SWIF

To use the ifarm's batch submission system (online monitoring and documentation: https://scicomp.jlab.org/scicomp/index.html), one option is the Auger batch system manager called "swif" (documented at https://scicomp.jlab.org/docs/swif).

To use swif you first need access to the ifarm; then you need to create a certificate:

/site/bin/jcert -create

To create a workflow on swif, run (where WorkFlowName is an identifier you give to it to monitor its progress):

swif create -workflow WorkFlowName

To monitor the workflow, run:

swif status -workflow Name

To delete a workflow, run:

swif cancel -workflow Name
swif cancel -delete -workflow Name

To add a job and start the workflow, run:

swif add-jsub -workflow Name -script jobScript.xml
swif run -workflow Name

To create a script .xml file for running jobs, see the description of its function and the python wrapper code included in Ciprian's prexSim code (https://github.com/cipriangal/prexSim) or Cameron's updated version for the new remoll v2.0.0 data structures (https://github.com/sbujlab/rad_analysis/blob/master/jlabSubmit.py and its relatives).

A suggested .login file for your ifarm use (that allows for batch job submission) is:
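The swif steps above are often driven from a small Python wrapper, in the style of the jlabSubmit.py scripts mentioned above. The sketch below is a hypothetical illustration (the function names, workflow name, and dry-run behavior are assumptions, not part of swif itself); it only assembles the command lines from the steps above and, unless told otherwise, prints them instead of invoking swif.

```python
import subprocess

def swif_commands(workflow, job_script):
    """Build the swif command lines corresponding to the steps above.

    Only constructs argument lists; it does not talk to the farm.
    """
    return [
        ["swif", "create", "-workflow", workflow],
        ["swif", "add-jsub", "-workflow", workflow, "-script", job_script],
        ["swif", "run", "-workflow", workflow],
        ["swif", "status", "-workflow", workflow],
    ]

def submit(workflow, job_script, dry_run=True):
    # On the ifarm you would set dry_run=False to actually invoke swif.
    for cmd in swif_commands(workflow, job_script):
        if dry_run:
            print(" ".join(cmd))
        else:
            subprocess.run(cmd, check=True)

submit("MyWorkFlow", "jobScript.xml")
```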

source /site/env/syslogin
source /site/env/sysapps
if ( `hostname` !~ "jlabl"* && `hostname` !~ "adaq"* )  then
source /site/12gev_phys/softenv.csh 2.3 
endif


A sample .tcshrc file for using the default ifarm tcsh shell is here:

# ~/.tcshrc: executed by tcsh(1) for non-login shells.
setenv PATH $PATH\:/site/bin 
set savehist = 100000
set histfile = ~/.tcsh_hist
alias root root -l
alias gits git status
alias swif /site/bin/swif
alias swifs swif status -workflow

Compilation

The code as-is compiles fine on the ifarm machines (ifarm1401 or ifarm1402). For a local installation you need to have:

  • mysql++
  • root with minuit2 library
  • boost libraries

Verified compilation with:

-- The C compiler identification is GNU 4.8.5
-- The CXX compiler identification is GNU 4.8.5
-- Check for working C compiler: /bin/cc
-- Check for working C compiler: /bin/cc -- works
-- Detecting C compiler ABI info
-- Detecting C compiler ABI info - done
-- Detecting C compile features
-- Detecting C compile features - done
-- Check for working CXX compiler: /bin/c++
-- Check for working CXX compiler: /bin/c++ -- works
-- Detecting CXX compiler ABI info
-- Detecting CXX compiler ABI info - done
-- Detecting CXX compile features
-- Detecting CXX compile features - done
-- System name Linux
-- Found ROOT 6.12/04 in /home/ciprian/root/root6/root/build
-- Boost version: 1.53.0
-- Found the following Boost libraries:
--   program_options
--   filesystem
--   system
--   regex
No QwAnalysis dictionaries needed for ROOT 6.12/04.
-- Configuring done
-- Generating done
-- Build files have been written to: /home/ciprian/prex/japan/build

Recently Asked Questions

How to establish pedestals for a channel:

  • For beam-off pedestals
    • Plot values like "cav4bx.hw_sum_raw/cav4bx.num_samples" during a known period of beam off, and find the mean
    • Then you will add lines to the prexCH_beamline_pedestal.1199-.map and prexCH_beamline_pedestal.1230-.map for your channels
    • In each line, the channel name is first, then the pedestal, then the gain
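The arithmetic implied by the pedestal map format can be sketched as follows. This is a hedged illustration only: it assumes the calibrated value is gain * (per-sample average - pedestal), which matches the beam-off recipe above (the mean of hw_sum_raw/num_samples during beam off is taken as the pedestal), but the exact ordering JAPAN applies internally should be checked against its source.

```python
def calibrate(hw_sum_raw, num_samples, pedestal, gain):
    """Apply a pedestal/gain calibration to a raw channel readout.

    Assumption: calibrated value = gain * (per-sample average - pedestal),
    where the pedestal is the beam-off mean of hw_sum_raw/num_samples.
    """
    per_sample_average = hw_sum_raw / num_samples
    return gain * (per_sample_average - pedestal)

# With beam off, the per-sample average equals the pedestal, so the
# calibrated value comes out to zero by construction:
print(calibrate(hw_sum_raw=5000.0, num_samples=100, pedestal=50.0, gain=2.0))  # → 0.0
```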

I need to do cuts on non-zero ErrorFlags while working on evt tree. Do I need to do the same thing while working on mul tree too or the mul tree itself makes cuts on non-zero ErrorFlag?

  • You would also need to do the cuts in the mul tree

When I applied the ErrorFlag==0 cut, I was left with only a few entries out of the run. Is there something wrong, or is it normal?

  • For a run that was mostly beam-off, it is likely normal
    • Take a look at "yield_bcm_an_ds10" with and without the cut
    • You'll see how many events have non-zero beam current
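The effect of the ErrorFlag cut can be illustrated with a toy event loop. Pure Python stands in for the ROOT tree here, and the event values and error-code constant are invented for illustration; on a real ROOTfile you would apply the same selection with, e.g., tree->Draw("yield_bcm_an_ds10", "ErrorFlag==0").

```python
# Toy stand-in for looping over tree entries: each event carries an
# ErrorFlag and a BCM yield. The numbers are invented for illustration;
# real error-code values are cataloged in Analysis/include/QwTypes.h.
events = [
    {"ErrorFlag": 0, "yield_bcm_an_ds10": 84.9},        # good, beam on
    {"ErrorFlag": 0, "yield_bcm_an_ds10": 85.1},        # good, beam on
    {"ErrorFlag": 0x4000000, "yield_bcm_an_ds10": 0.2}, # failed a cut (beam off)
    {"ErrorFlag": 0x4000000, "yield_bcm_an_ds10": 0.1}, # failed a cut (beam off)
]

# Any nonzero ErrorFlag means the event failed some check, so only
# ErrorFlag == 0 events survive the selection:
passing = [e for e in events if e["ErrorFlag"] == 0]
print(len(passing), "of", len(events), "events survive the cut")
```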

ErrorFlag==0 events are those with non-zero beam current, right?

  • prexCH_beamline_eventcuts.map contains the condition of ErrorFlag
  • At the moment the ErrorFlag is set by the one "global" cut (a line marked with a "g" in the event cuts file) and that is on bcm_an_ds3
    • Its value must be above 1 uA and below 1e6 uA, and there is an eye-balled stability cut as well
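The global bcm_an_ds3 cut described above amounts to a window on the beam current. A minimal sketch, using the thresholds quoted above (the eye-balled stability cut is not modeled, and the function name is an invention for illustration):

```python
def bcm_current_ok(current_uA, low=1.0, high=1e6):
    """Global event cut on bcm_an_ds3: the beam current must sit
    inside the (low, high) window, in microamps. The eye-balled
    stability cut mentioned above is not modeled here."""
    return low < current_uA < high

print(bcm_current_ok(85.0))   # beam on: passes
print(bcm_current_ok(0.05))   # beam off: fails
```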