WAC Notes November 27 2019



November 27th 2019

Daily Meeting BlueJeans

Agenda

  • We have a new web plots directory structure for the Respin and for CREX (~150 GB total; CREX has priority)
  • Burst definition:
    • Do we want 72k consecutive events, or 72k consecutive GOOD events that pass all cuts?
    • Do we want to keep the postpan definition of “good”, which only looks at the ErrorFlag, or switch to one that looks at all device error codes involved in the regression matrix? (A counting sketch follows these bullets.)
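
A minimal counting sketch of the two burst definitions being compared, assuming a japan/postpan-style multiplet tree; the tree name ("mul") and the ErrorFlag branch are assumptions for illustration, not the actual prompt code:

  // Sketch: count 72k-event bursts two ways, assuming a "mul" tree with a
  // UInt_t ErrorFlag branch (names are placeholders).
  #include "TFile.h"
  #include "TTree.h"
  #include <cstdio>

  void burst_counting(const char* fname)
  {
    TFile f(fname);
    TTree* mul = (TTree*)f.Get("mul");
    if (!mul) { printf("no mul tree\n"); return; }

    UInt_t errorFlag = 0;
    mul->SetBranchAddress("ErrorFlag", &errorFlag);

    const Long64_t kBurstSize = 72000;
    Long64_t nAll = 0, nGood = 0, burstsAll = 0, burstsGood = 0;

    for (Long64_t i = 0; i < mul->GetEntries(); ++i) {
      mul->GetEntry(i);

      // Definition 1: every consecutive recorded event counts toward the burst.
      if (++nAll == kBurstSize) { ++burstsAll; nAll = 0; }

      // Definition 2: only GOOD events count; here "good" means ErrorFlag == 0,
      // but one could instead require every device error code used in the
      // regression matrix to be clean.
      if (errorFlag == 0 && ++nGood == kBurstSize) { ++burstsGood; nGood = 0; }
    }
    printf("bursts (all events): %lld, bursts (good events): %lld\n",
           burstsAll, burstsGood);
  }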


Updating our online and prompt plots:

  • In PREX II the online plots were somewhat chaotic; ultimately we only paid attention to a few plots, and many shift workers were overwhelmed at times.
  • I have a very long and not necessarily useful summary of what kinds of things we could plot or are plotting here: [Online_Plots]
  • ATs need more attention during CREX
  • SAMs matter more, especially their dithering response and combo-BPM status (maybe even as a BPM projected from 4a and 4e; see the sketch below), due to the reduced Main Detector sensitivities and large widths
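
Since the projected-BPM idea comes up above, here is a minimal sketch of the straight-line extrapolation it implies; the z positions and readings below are placeholders, not survey values:

  // Sketch: project a "virtual" BPM at the target from bpm4a and bpm4e
  // readings by straight-line extrapolation (placeholder geometry).
  #include <cstdio>

  double project_bpm(double x4a, double x4e, double z4a, double z4e, double ztgt)
  {
    const double slope = (x4e - x4a) / (z4e - z4a);   // field-free drift
    return x4a + slope * (ztgt - z4a);
  }

  int main()
  {
    const double z4a = -7.5, z4e = -1.0, ztgt = 0.0;  // assumed z positions (m)
    const double x4a = 0.12, x4e = 0.10;              // example readings (mm)
    printf("projected x at target: %.4f mm\n",
           project_bpm(x4a, x4e, z4a, z4e, ztgt));
    return 0;
  }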


  • Software updates:
    • Our postpan regression, dithering and correction scripts, ROOT-script based plotting tools, and data-frame based aggregation system from the end of PREX are still in place and will work fine for CREX on day 1
    • New software: 2-pass “burst”-style mini runs with dithering, regression, combined objects, and aggregation, mostly within the Japan toolset:
      • Japan now supports regression on data frame objects (i.e., on combiner objects like us_avg and complicated combined BCMs and BPMs); a correction sketch follows this list
      • Japan now supports a BMOD data extraction tool for speeding up bmod analysis
      • A new OOP script has been produced to provide a user interface (config files and convenient output rootfiles) for bmod analysis
      • A new OOP script to softly merge disagreeing tree branch structures and to TChain japan’s “summary” (burst/mini run averages) trees into aggregated files (effectively replacing the slower but more configurable data frame based aggregator)
      • Mul plots for looking at distributions across an entire slug (currently assumes postpan and dither output trees from before)
      • The slug/pull/grand aggregation plotting tool can be easily modified to read these updated Japan 2 pass “burst” style data and give the same plots
      • The runwise plotting scripts for our prompt plots can also be easily modified to point to the Japan 2 pass trees (or we can continue using postpan for convenience and cross checking)
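
For reference, a minimal sketch of the linear regression correction applied to a combined object such as us_avg; the monitor names and slope values are invented for illustration, with the real slopes coming from the regression (or dithering) analysis:

  // Sketch: A_corr = A_raw - sum_i slope_i * diff_i for a combined detector.
  #include <cstdio>
  #include <vector>

  struct MonitorTerm {
    const char* name;   // beam-monitor difference (placeholder names)
    double slope;       // sensitivity dA/dM from the regression matrix
    double diff;        // measured monitor difference for this multiplet
  };

  int main()
  {
    const double A_raw = 1.8e-6;   // example raw us_avg asymmetry

    std::vector<MonitorTerm> terms = {
      { "diff_bpm4aX",  2.0e-3,  0.5e-3 },
      { "diff_bpm4eX", -1.5e-3,  0.3e-3 },
      { "diff_bpm12X",  4.0e-4, -0.2e-3 },
    };

    double A_corr = A_raw;
    for (const auto& t : terms) A_corr -= t.slope * t.diff;

    printf("raw %.3e -> corrected %.3e\n", A_raw, A_corr);
    return 0;
  }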


  • The new Japan tree “merger” aggregation tool:
    • It assumes the entries in burst trees correspond to mini runs and copies those entries into a new file, adding Short_t branches containing the run_number and BurstCounter variables (a minimal sketch follows this list)
    • It can either keep the different burst trees (burst, mulc, mulc_lrb, etc) separated, or it can attempt a lateral branch flattening (may be non-trivial)
    • Features:
      • It will be impervious to device list changes
      • But using the branch-merging feature will be slower than just doing a straight “hadd”
      • It brings literally every branch along for the ride
      • This assumes that the device list (and all combined, regressed, and dithered quantities) is already correctly placed in the trees
      • Updating things to be aggregated (as in above point) would require a full 2 pass respin of prompt, not just a respin of the merger/old aggregator tool
      • Slug plotting scripts will need to be changed if we don’t want to continue grabbing everything contained in the aggregated rootfiles (which now contain every branch, not just a subset of outputs from the configured aggregator files)
    • It can be done this way, but I’m not sure the amount of work to get it all working down the tool chain is worth the effort; maybe just keep both the old postpan+dataframe aggregator and the Japan 2 pass+merger approaches and see what convenience dictates later?
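
A minimal sketch of the copy-and-tag idea described above (not the actual merger code): clone one run's burst tree into a new file and add the Short_t run_number and BurstCounter branches, so the per-run outputs can later be combined with hadd or a TChain. The tree and file names are assumptions:

  // Sketch: copy a run's burst tree and tag each mini-run entry.
  #include "TFile.h"
  #include "TTree.h"
  #include <cstdio>

  void tag_burst_tree(const char* infile, Short_t runNumber, const char* outfile)
  {
    TFile fin(infile);
    TTree* burst = (TTree*)fin.Get("burst");     // assumed burst-tree name
    if (!burst) { printf("no burst tree in %s\n", infile); return; }

    TFile fout(outfile, "RECREATE");
    TTree* out = burst->CloneTree(0);            // same branches, no entries yet

    Short_t run = runNumber, burstCounter = 0;
    out->Branch("run_number",   &run,          "run_number/S");
    out->Branch("BurstCounter", &burstCounter, "BurstCounter/S");

    for (Long64_t i = 0; i < burst->GetEntries(); ++i) {
      burst->GetEntry(i);                        // one entry per mini run (assumed)
      burstCounter = (Short_t)i;
      out->Fill();
    }
    fout.Write();
  }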

Attendance

CH: Cameron, Devi

Phone: Ryan, Weibin