Weekly Analysis Coordinator
- HOW TOs for shift crew
- All Expert Contacts
Weekly Analysis Coordinator (WAC)
Daily Meeting BlueJeans
- WAC Notes Index | CREX WAC Notes Index
- WAC Collection
- Old PREX I WAC instructions
- PREX II WAC Re-prompt Device Additions
- PREX II WAC Respin 1
| CREX Dates (2019-2020) | WAC name |
|---|---|
| 2019 12/02 - 2020 1/29 | Cameron Clarke |
| 2020 1/30 - 2/05 | Weibin Zhang |
| 2020 2/6 | Adam Zec |
| 2020 2/7 - 2/12 | Sakib Rahman |
| 2020 2/13 - 2/17 | Devi Adhikari |
| 2020 2/18 - 2/23 | Ryan Richards |
| 2020 2/24 - 2/25 | Devi Adhikari |
| 2020 2/26 - 3/3 | Victoria Owen |
| 2020 3/4 - 3/10 | Adam Zec |
| 2020 3/11 - 3/15 | Sakib Rahman |
| 2020 3/16 - 3/24 | Amali |
| PREX II Dates (2019) | WAC name |
|---|---|
| 06/24 - 07/06 | Tao Ye |
| 07/06 - 07/17 | Sakib Rahman |
| 07/17 - 07/23 | Cameron Clarke |
| 07/23 - 07/30 | Devi Adhikari |
| 07/30 - 08/06 | Victoria Owen |
| 08/06 - 08/13 | Adam Zec and Cameron Clarke |
| 08/13 - 08/20 | Robert Radloff |
| 08/20 - 08/27 | Ryan Richards |
| 08/27 - 09/03 | Adam Zec |
- Use command "gowac" to get a quick run down on scripts and set up the environment
- Calculate accumulated coulombs and alert the shift crew how many coulombs are left to collect if an IHWP flip is imminent
- Coordinate with the Analysis Expert
- Run the scripts for looking at daily/weekly data
- Daily: use gojapan ; cd ~/PREX/prompt ; ./prompt.sh [runnumber] &
- Slugly: How To
- Look closely at runs marked "junk" and update run entries every time something changes in the analysis chain (use PVDB tools)
- Collect each shift's data - How To
- Develop scripts to monitor data
- Monitor changes made to the regular japan operations branch (the directory that opens when the gojapan command is used at login). The operations branch is the engine for realtime analysis, so shift-crew plot macros should be updated in this branch. If map files are updated, propagate them into the japan_WAC_only folder.
- For a given change in the maps/pedestals/cuts, push it to operations and pull to wac only
- cd ~/PREX/japan/Parity/prminput
- git add [things that are new]
- git commit --author="[Name <email>]"
- git push -u origin operations
- [enter username and password for github]
- cd ~/PREX/japan_WAC_only/Parity/prminput
- git pull
- redo prompts with cd ~/PREX/prompt ; ./prompt.sh [runnumber]
- Pull changes from the regular japan operations branch into the japan_WAC_only folder after reviewing the commits made.
- Track developments and make presentations from WAC Notes
- WAC Notes table template
- PVDB parity run database
- PREX II onlinePlots, PREX II list of all runs - from prompt analysis
- CREX onlinePlots, CREX list of all runs - from prompt analysis
- List of Slow Tree EPICS variables
- Stale PREX II ResultsSummary
A stable grandroot file containing all slugs between 1-30 can be found at /chafs2/work1/apar/aggRootfiles/slugRootfiles/grandRootfile/archive/grand_aggregator_1-30.root
Useful cuts:
- ihwp==1 (in), ihwp==2 (out)
- wein==1 (right), wein==2 (left)
- hrs==0 (both arms), hrs==1 (right arm only), hrs==2 (left arm only)
To get an idea about variable nomenclature, look in the root tree or the grand_prototype.pdf
- Check the Feedback log: (/adaqfs/home/apar/PREX/japan_feedback/LogFiles/Feedback_PITA_log.txt)
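One of the WAC duties listed above is tracking accumulated coulombs for the shift crew. That bookkeeping amounts to integrating beam current over time; here is a minimal illustrative sketch (the function name `accumulated_coulombs` and the sample values are hypothetical, not part of any PREX tool):

```python
def accumulated_coulombs(times_s, currents_uA):
    """Trapezoidal integral of beam current over time, in coulombs.
    times_s: sample timestamps in seconds; currents_uA: currents in microamps."""
    total = 0.0
    for k in range(1, len(times_s)):
        dt = times_s[k] - times_s[k - 1]
        avg_uA = 0.5 * (currents_uA[k] + currents_uA[k - 1])
        total += avg_uA * dt * 1e-6  # microamp-seconds -> coulombs
    return total

# Example: 100 s of steady 70 uA beam accumulates about 0.007 C
print(accumulated_coulombs([0, 50, 100], [70.0, 70.0, 70.0]))
```

In practice the current samples would come from the BCM readback (e.g. the slow-tree hac_bcm_average variable mentioned below), not from hand-typed lists.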
Accessing onsite computers
With access to hallgw, one can reach the onsite computers.
- Do not connect remotely to the adaq1 computer unless necessary for some DAQ related process
- Focus on aonl[1-3], especially for analysis jobs
- For large analysis jobs focus on the ifarm installation of JAPAN
To get access to hallgw:
- Submit a request to the help desk for two-factor authentication
- If approved, you will receive an email guiding you through the steps for setting up the access PIN and the PIN generator app (MobilePASS)
- In a terminal type in: ssh email@example.com
- With access granted, type in: ssh username@hallgw
- HALOG - Accumulated Charge counting procedure
- HALOG - IHWP flipping procedure (preliminary)
- Hall A Web - Online Plots for Slugs
- HALOG - Procedure for defining BPM stability cuts on one run
- Collector Github Repo
- PREX Prompt Github Repo
- Stale PREX II Grand Slug Summary Sheet
- CREX Calibrations history spreadsheet
- CREX Run List spreadsheet
To calculate statistical precision so far
Expected error bar = sqrt(1 / sum(1/sigma_i^2))
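A minimal sketch of this combination in Python (the function name `combined_error` and the example values are illustrative):

```python
import math

def combined_error(sigmas):
    """Expected error bar from combining independent measurements
    with error bars sigma_i: sqrt(1 / sum(1/sigma_i^2))."""
    return math.sqrt(1.0 / sum(1.0 / s**2 for s in sigmas))

# Two runs with equal error bars combine as sigma/sqrt(2):
print(combined_error([10.0, 10.0]))  # ~7.07
```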
Known Problems and Solutions
1) What to do when a run has two distinct bpm12X response regions (Example: run 7842)?
Ans: Look at the variable in the mul tree called "BurstCounter" which encodes minirun index. For run 7842, you can look at mul->Draw("BurstCounter:yield_bpm12X","ErrorFlag==0","") to see that in the current root file, only BurstCounter==4 has the change in the 12X, and then mul->Scan("yield_bpm12X:pattern_number","ErrorFlag==0 && BurstCounter==4") lets you see that if you flag patterns between 50719 and 68050 (first and last entry in the scan output) as bad you'd cut out the events when the position is changing. You can multiply the pattern by 4 to get the event number range or use the event tree to get the range and put that in a prex_bad_events.$run_number$.map file. Then reprompt.
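The pattern-to-event conversion at the end of that answer is simple arithmetic: with quartet patterns, each pattern spans 4 events. A small illustrative helper (the function name `pattern_to_event_range` is hypothetical; pattern numbers are the run 7842 values from the text, and the exact event boundaries may be offset slightly in a real run):

```python
def pattern_to_event_range(first_pattern, last_pattern, events_per_pattern=4):
    """Approximate event-number range covered by a range of helicity patterns."""
    first_event = first_pattern * events_per_pattern
    last_event = (last_pattern + 1) * events_per_pattern - 1
    return first_event, last_event

print(pattern_to_event_range(50719, 68050))  # (202876, 272203)
```

The resulting range is what would go into the prex_bad_events.$run_number$.map file before re-prompting.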
2) What to do when the target position is changed while a run is ongoing (Example: run 7860)?
Ans: a) Start by looking at the slow tree to know when the target moved: slow->Draw("pcrex90BDSPOS.VAL:Entry$","","*") For run 7860, by zooming in, I see the target position has stabilized by EPICS event 15.
b) Check the beam current in the slow tree: slow->Draw("hac_bcm_average:Entry$","","*") For run 7860, there was no current until EPICS event 45 or so, thus we know we only had current on one target. If we had beam on one target, the target was then moved, and we wanted to analyze the data after the move, we'd need an eventNumberCut file to exclude the early events. Conversely, if we only wanted the early events, we could exclude all events after the target started moving. Either way, use the evt tree to find the CodaEventNumber at which you want the cut. For this run, I'm going to keep only events after 30000.
c) Create a special eventNumberCut.7860.conf file. The content should look like this, except the event range will be different for a different run.
# Cuts to keep only events with Ca-48 target in position
Then you should be able to replay the run.
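The selection in step (b) is just a threshold on CodaEventNumber. Schematically (the function name `keep_after` and the event numbers are illustrative, not JAPAN code):

```python
def keep_after(event_numbers, threshold):
    """Keep only events whose CodaEventNumber exceeds the threshold,
    mimicking an eventNumberCut that drops everything before the target settled."""
    return [e for e in event_numbers if e > threshold]

events = [12000, 25000, 30001, 45000]
print(keep_after(events, 30000))  # [30001, 45000]
```

In the real workflow this cut is expressed in the eventNumberCut.$run_number$.conf file rather than applied by hand.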
3) How to monitor dpp drift?
4) How to make regression minrun summary and grandaggregator plots?
a) Login to apar@adaq1.
c) Make a file named slug$number$.list inside the run_list folder, or update it. Each entry in the file corresponds to a run in the slug
d) ./EZ_WAC.sh run_list/slug$number$.list