WAC Notes November 27 2019

From PREX Wiki
Revision as of 04:50, 13 December 2019 by Rradloff (Talk | contribs)




November 27th 2019

Daily Meeting BlueJeans


  • We have a new web plots directory structure for the Respin and for CREX (~150 GB total; CREX has priority)
  • Burst definition:
    • Do we want 72k consecutive events, or 72k consecutive GOOD events that pass all cuts?
    • Do we want to keep the postpan definition of “good”, which only looks at the ErrorFlag, or switch to one that looks at all device error codes involved in the regression matrix?
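To make the difference between the two candidate definitions concrete, here is a minimal sketch in plain Python (the event records with an error_flag field are hypothetical stand-ins for the real tree entries; a flag of 0 means the event passes all cuts):

```python
def bursts_by_raw_count(events, n=72000):
    """Cut a burst boundary every n events, good or not."""
    return [events[i:i + n] for i in range(0, len(events), n)]

def bursts_by_good_count(events, n=72000):
    """Cut a burst boundary every n GOOD events (error_flag == 0);
    bad events ride along inside whichever burst they fall in."""
    bursts, current, good = [], [], 0
    for ev in events:
        current.append(ev)
        if ev["error_flag"] == 0:
            good += 1
            if good == n:
                bursts.append(current)
                current, good = [], 0
    if current:
        bursts.append(current)  # trailing partial burst
    return bursts
```

With the GOOD-event definition every burst carries the same statistical weight, at the cost of bursts spanning unequal wall-clock time.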

Updating our online and prompt plots:

  • In PREX II the plot set was somewhat chaotic; ultimately we only paid attention to a few plots, and many shift workers were a bit overwhelmed at times.
  • I have a very long and not necessarily useful summary of what kinds of things we could plot or are plotting here: Online Plots
  • ATs need more attention during CREX
  • SAMs matter, especially their dithering response and combo-BPM status (maybe even as a projected BPM using BPMs 4a and 4e), due to reduced Main Detector sensitivities/large widths

  • Software updates:
    • Our postpan regression, dithering and correction scripts, rootscript-based plotting tools, and data-frame-based aggregation system from the end of PREX are still in place and will work fine for CREX on day 1
    • New Software - 2 pass “burst” style mini runs with dithering, regression, combined objects, and aggregation, mostly within the Japan toolset:
      • Japan now supports regression on data frame objects (i.e. on combiner objects like us_avg and complicated combined BCMs and BPMs)
      • Japan now supports a BMOD data extraction tool for speeding up bmod analysis
      • A new OOP script has been produced to provide a user interface (config files and convenient output rootfiles) for bmod analysis
      • A new OOP script to softly merge disagreeing tree branch structures and to TChain japan’s “summary” (burst/mini run averages) trees into aggregated files (effectively replacing the slower but more configurable data frame based aggregator)
      • Mul plots for looking at distributions across an entire slug (currently assumes postpan and dither output trees from before)
      • The slug/pull/grand aggregation plotting tool can be easily modified to read these updated Japan 2 pass “burst” style data and give the same plots
      • The runwise plotting scripts for our prompt plots can also be easily modified to point to the Japan 2 pass trees (or we can continue using postpan for convenience and cross checking)
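The regression/dithering corrections above all amount to removing the monitor-correlated part of a detector asymmetry, D_corr = D − β·M. A toy one-monitor sketch in plain Python (function names and the single-monitor restriction are illustrative; postpan and Japan solve the full multi-monitor matrix):

```python
def regression_slope(det, mon):
    """Least-squares slope dD/dM for one monitor: cov(det, mon) / var(mon)."""
    n = len(det)
    mean_d = sum(det) / n
    mean_m = sum(mon) / n
    cov = sum((d - mean_d) * (m - mean_m) for d, m in zip(det, mon))
    var = sum((m - mean_m) ** 2 for m in mon)
    return cov / var

def correct(det, mon):
    """Remove the monitor-correlated part: D_corr = D - beta * M."""
    beta = regression_slope(det, mon)
    return [d - beta * m for d, m in zip(det, mon)], beta
```

Dithering corrections use the same subtraction but take β from the driven beam-modulation response instead of from a fit to natural beam motion.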

  • The new Japan tree “merger” aggregation tool:
    • It assumes the entries in burst trees correspond to miniruns and copies those entries into a new file, with additional Short_t branches containing the run_number and BurstCounter variables
    • It can either keep the different burst trees (burst, mulc, mulc_lrb, etc) separated, or it can attempt a lateral branch flattening (may be non-trivial)
    • Features:
      • It will be impervious to device list changes
      • Using the branch-merging feature will be slower than just doing a straight “hadd”
      • It brings literally every branch along for the ride
      • This assumes that the device list (and all combined and regressed and dithered kinds of quantities) are already correctly placed in their trees
      • Updating things to be aggregated (as in above point) would require a full 2 pass respin of prompt, not just a respin of the merger/old aggregator tool
      • Slug plotting scripts will need to be changed if we don’t want to continue grabbing literally everything contained in the aggregated rootfiles (now containing literally every branch, not just a subset of outputs from the configured aggregator files)
    • It can be done this way, but I’m not sure the amount of work to get it all working down the tool chain is worth the effort. Maybe just do both the old postpan + data frame aggregator and the Japan 2 pass + merger approaches, and let convenience dictate later.
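A rough sketch of what the merger does, with plain Python dictionaries standing in for ROOT tree entries (the branch names run_number and BurstCounter come from the notes above; everything else here is illustrative):

```python
def merge_runs(runs):
    """Aggregate per-run burst-tree entries into one flat table.

    `runs` maps run_number -> list of minirun entries (dicts of
    branch name -> value).  Each output row keeps every original
    branch and gains run_number and BurstCounter branches
    (Short_t in the real tool)."""
    merged = []
    for run_number, entries in sorted(runs.items()):
        for burst_counter, entry in enumerate(entries):
            row = dict(entry)  # copy every branch as-is
            row["run_number"] = run_number
            row["BurstCounter"] = burst_counter
            merged.append(row)
    return merged
```

Because every branch is copied blindly, a device-list change between runs simply shows up as rows with different keys, which is what makes the tool impervious to such changes (and also what makes downstream plotting scripts responsible for selecting the branches they want).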


  • Our 150 GB of space for online/respin plots will need to be actively managed in order to accommodate online CREX plots and the bare minimum of diagnostic plots for PREX respins
  • Burst definition:
    • Good events are all events passing our set of cuts
    • The cut files should already include all of the standard monitors and detectors that matter in determining the ErrorFlag
    • All types of regression will be based on this -> the global ErrorFlag is what defines the burst
  • Online and prompt plots:
    • Update the config files to clean up Cameron's mess
    • Open a run-ranged dither-corrector combiner object in Japan online for doing SAM dithering, as a panguin online plotter / regressed-widths indicator (new feature)
    • We should make sure that the combined AT detector definitions from PREX (and maybe a few more) get put into the Japan combiner
    • ATs should be normalized, and blinding should be determined by committee (they were blinded in PREX due to their strong APV pickup)
    • We want to have positional SAM moments (monopole, vertical/horizontal dipoles of SAMs) - we already do, but double check
    • CombinedPMT objects do the combination event by event and require perfect mutual calibration in order to get a good combined asymmetry - no clear gain other than obtaining a new effective device (i.e. for dithering)
    • We should make a script that alerts the WAC, per slug, of which runs have been reprompted since the last time WAC plots were made; similarly, if a file needed for some plot is missing, it should give a useful warning
  • Software Updates:
    • Our old software is fine, but could use some updating of course (especially online plots file paths)
    • The new software datahandler data-sharing update works; priority is currently hardcoded to the order in which datahandlers are defined in the datahandler map file (so be careful)
      • Primary combiner should be the very first datahandler
      • Residual regression (regress the already regressed quantities) will be a bit awkward, but should work (needs to be set up)
      • Similarly, an eventwise correction number would need to be computed, but that could be done in a post-Japan script somewhere, or could come from the corrector class defining a residual itself (new code, in either the Japan LRB or dithering corrector script)
    • Merger tool as new aggregator:
      • We would like to use the merger for the updated pass 2 Japan based online analysis and prompt stuff
      • This would probably require either having much larger slug plots, or passing the requested device list parsing information down the line to the slug plotting script (which should be fairly easy to implement)
      • Having all of the data around at the aggregation stage is nice and saves some pain, and if a run is missing some desired calculations then our new ifarm respin features can update our dataset retroactively
    • Dithering combined BPMs can be done as a combined BPM subsystem in the map files instead of in a datahandler
    • Dithering corrector datahandler still needs to be put together - operations has such a parameter file already - some script for getting slopes into a text file (on the fly for CREX) would be useful (ask Victoria)
    • PREX II miniruns had been defined as 9k GOOD patterns (at 240 Hz, about 5 minutes), and CREX may need to extend to a 15-minute equivalent due to reduced statistical precision
      • Will longer miniruns cease to be meaningful compared to just doing a full run? Will the short time scale of the miniruns for slope stability verification be invalidated by extending to a longer time?
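The SAM positional moments mentioned above (monopole plus vertical/horizontal dipoles) are weighted sums around the SAM ring. A sketch in plain Python, assuming an 8-SAM ring with SAM i at angle 2πi/8 starting on the horizontal axis (the real geometry and weights come from the detector map, so treat this as illustrative only):

```python
import math

def sam_moments(asyms):
    """Positional moments of a SAM ring.

    monopole          = plain average of all SAMs
    horizontal dipole = cos-weighted average, sensitive to x-like motion
    vertical dipole   = sin-weighted average, sensitive to y-like motion
    """
    n = len(asyms)
    angles = [2 * math.pi * i / n for i in range(n)]
    monopole = sum(asyms) / n
    dip_h = sum(a * math.cos(t) for a, t in zip(asyms, angles)) / n
    dip_v = sum(a * math.sin(t) for a, t in zip(asyms, angles)) / n
    return monopole, dip_h, dip_v
```

A purely common-mode signal shows up only in the monopole, while a position-like signal (each SAM responding as the cosine of its azimuth) shows up only in the matching dipole, which is what makes these moments useful as a projected-BPM cross-check.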


CH: Cameron, Devi

Phone: Ryan, Weibin, Paul, Tao, Robert Radloff