BMOD 28Sept2021
Latest revision as of 09:49, 28 September 2021

BMOD_20Sept2021 << >> BMOD_30Sept2021

Logistic information

 BlueJeans calling instructions:
 Toll-Free Number (U.S. & Canada):  888-240-2560
 International toll number:         408-740-7256
 Bluejeans CODE:                 948942477
 Bluejeans link: https://bluejeans.com/948942477

Agenda

Sept 28

  • Sanity check of BPM independence: diff_bpm12X_RMS - y mod only, still bigger?
    • Check differences and ratios of BPM RMSs (and corrected RMSs) between OnlyBMOD and ErrorFlag, vs. coils and vs. directions (1,3,5; 2,4,6; 7 at once), like the existing plots but with no phase information required; this time do RMS diffs and ratios, and use consistent Y axes
    • Mod+noMod vs noMod, Apv central value, minirun pull plot 5+ sigma outliers check
  • Comparing cuts main detector means minirunwise:
    • Segments 1-3, BeamMod only,  Lagr vs reg, correction rms diff, pull plot. (Compare to noMod)
    • (noMod+Mod) vs noMod Apv central values vs err diff, segments 1-3 and combined.  (Table)
  • (Redo the full set of checks from the noMod data set with Mod and OnlyMod)
    • Corrections per monitor disagreement (slugs 186-8) minirunwise check
    • Difference between minirunwise slope*BPM diffs, compare to quadrature error bar difference (of diffs, then *slope) and to local distribution’s RMSs (from slug averaging outputs stddev).
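The quadrature comparison in the last bullet can be sketched as follows. This is a minimal illustration with made-up numbers; the function name is hypothetical, the use of sqrt(|e1² − e2²|) assumes one dataset is a subset of the other (so the error bars are highly correlated), and the slope's own uncertainty is neglected.

```python
import math

def correction_consistency(slope, d1, e1, d2, e2):
    """Compare the correction difference slope*(d1 - d2) between two
    datasets to the quadrature difference of their error bars.

    d1/e1, d2/e2: BPM difference mean and its error in each dataset.
    sqrt(|e1^2 - e2^2|) is the appropriate error of the difference when
    one dataset is nested inside the other (correlated errors); the
    slope's own error is ignored in this sketch."""
    delta = slope * (d1 - d2)
    quad_err = abs(slope) * math.sqrt(abs(e1**2 - e2**2))
    return delta, quad_err

# hypothetical numbers: slope in ppm/um, diffs and errors in um
delta, err = correction_consistency(0.8, 1.5e-3, 2.0e-3, 1.2e-3, 1.5e-3)
```

A |delta| much larger than quad_err (and than the local distribution's RMS) would flag an inconsistent correction.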

Cameron: I have tried to do all of these things: Directory of full plots, BMOD Inclusion story document

  • Look at outliers multiplet plot (and BMOD Only bad run) and minirun plot and see if there is any pathology and which should be clearly cut from new rerun
  • What is the cumulative effect of outliers (effect on mean, effect on minirunwise pull plot)
  • Remake averaging plots with newly cut out outliers
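The outlier checks above amount to computing pulls against the inverse-variance weighted mean and re-averaging with and without the flagged points. A minimal sketch (function names and the 5-sigma threshold are illustrative, not taken from the actual analysis scripts):

```python
import math

def weighted_mean(vals, errs):
    """Inverse-variance weighted mean and its error."""
    w = [1.0 / e**2 for e in errs]
    mean = sum(wi * v for wi, v in zip(w, vals)) / sum(w)
    return mean, math.sqrt(1.0 / sum(w))

def pulls(vals, errs):
    """Pull of each point relative to the weighted mean of all points."""
    mean, _ = weighted_mean(vals, errs)
    return [(v - mean) / e for v, e in zip(vals, errs)]

def outlier_effect(vals, errs, nsigma=5.0):
    """Weighted mean with and without |pull| > nsigma outliers:
    the 'cumulative effect of outliers' on the mean."""
    keep = [i for i, p in enumerate(pulls(vals, errs)) if abs(p) <= nsigma]
    full = weighted_mean(vals, errs)
    cut = weighted_mean([vals[i] for i in keep], [errs[i] for i in keep])
    return full, cut
```

Comparing `full` and `cut` shows directly whether the outliers pull the mean outside its error bar.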

Sept 20

  • To do:
    • Double check smoking gun causes of outliers multiplet and minirunwise
    • Throw away miniruns of bad problems (6567, 6983, 7211)
    • Pull plot minirunwise Mean and RMS errors (set opt stat 112211)
    • Also try multiplying NULL Pitts by wien sign (do both ways)
    • Also try minirunwise pull plot linear scale too (and add a gaussian fit)
    • Slides:
      • Apv detector, different time scales
      • NULL plots - Pittwise (both ways of doing wien sign)
      • Tables (BPMs, evMons, corrections, Apv and DD, wien wise and 6 slow controls wise)



  • Changing from "part" averaging to "wien" averaging, these are the runs that are CREX part 2 but belong to wien state 1, i.e. the one day's worth of running in a new optics setup in the old wien state before going to AT running:
    • Minirun count through the experiment (total are 8798), CREX "part", run number, Wien flip number
    • 1316 , 2 , 6328 , 1
    • 1317 , 2 , 6328 , 1
    • 1318 , 2 , 6328 , 1
    • 1319 , 2 , 6328 , 1
    • 1320 , 2 , 6328 , 1
    • 1321 , 2 , 6328 , 1
    • 1322 , 2 , 6329 , 1
    • 1323 , 2 , 6329 , 1
    • 1324 , 2 , 6329 , 1
    • 1325 , 2 , 6329 , 1
    • 1326 , 2 , 6329 , 1
    • 1327 , 2 , 6330 , 1
    • 1328 , 2 , 6330 , 1
    • 1329 , 2 , 6330 , 1
    • 1330 , 2 , 6330 , 1
    • 1331 , 2 , 6330 , 1
    • 1332 , 2 , 6331 , 1
    • 1333 , 2 , 6331 , 1
    • 1334 , 2 , 6331 , 1
    • 1335 , 2 , 6331 , 1
    • 1336 , 2 , 6331 , 1
    • 1337 , 2 , 6331 , 1
    • 1338 , 2 , 6331 , 1
    • 1339 , 2 , 6332 , 1
    • 1340 , 2 , 6332 , 1
    • 1341 , 2 , 6332 , 1
    • 1342 , 2 , 6332 , 1
    • 1343 , 2 , 6332 , 1
    • 1344 , 2 , 6332 , 1
    • 1345 , 2 , 6333 , 1
    • 1346 , 2 , 6334 , 1
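The re-labeling above can be sketched as a small lookup. This is a hypothetical helper; the assumption that the wien state otherwise equals the CREX part number is illustrative only (the real mapping comes from the slow-controls output with its 1,2,3 Wien variable).

```python
# Runs 6328-6334 from the table: CREX part 2, but still wien state 1.
WIEN1_PART2_RUNS = set(range(6328, 6335))

def wien_state(run, crex_part):
    """Assign a 1,2,3 wien-state label for averaging.

    Assumed mapping for this sketch: wien state == CREX part, except
    for the transition runs listed above, which stay in wien 1."""
    if crex_part == 2 and run in WIEN1_PART2_RUNS:
        return 1
    return crex_part
```

Averaging scripts would then group miniruns by `wien_state(run, part)` instead of by `part`.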

Sept 17


To look at:

  • Lagr asym main det, IncludeBMOD Lagr asym main det, Lagr asym DD, IncludeBMOD Lagr asym DD
    • corrections per monitor table
    • slug
    • Pitt
    • Null Pitt
    • slow-flips
    • parts - table, pdf
    • unweighted IHWP
    • unweighted slow-flips
  • BPMs vs. parts table - table example
  • evMons vs. parts table
  • Kinematics BPMs vs. parts table
  • Multiplet plot of Lagr asym main det > 135 uA - Rootfile lives here: /lustre19/expphy/volatile/halla/parity/crex-respin2/pruned_lagr_analysis/All_Lagr_Production.root
  • Multiplet plot of Lagr asym DD > 135 uA

What to plot next:

  • Calculate, per time step (parts, slugs, etc.), the maindet-weighted average mean value. The error bar (in the TGraphErrors) should be the minirun mean error weighted by the detector stat-weight (plot the mean_err itself, no rescale!). Don't fit that on the TGraph; do calculate the "global" average value and error and show it in the statbox/overlay.
  • Null Pitt - The In and Out states should each have 1/2 weight into the average.
  • Do Wien's not crex_parts (double check slow_controls output, and add a 1,2,3 Wien variable).
  • Check NULL pdf works
  • Get BCM, BPM tables, pdfs fast
  • Send data and scripts to Weibin
  • Check minirunwise pull and multipletwise mul_plot outliers
  • Make slugwise plots (add a main-det weighted mean+err to statbox, in addition to self weighted p0 fit value... no fit line at all)
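The weighting rules in the first two bullets can be sketched as below. The helper names are hypothetical; the global-error formula is standard propagation for an externally weighted mean, and the Null Pitt combination gives IN and OUT states half weight each as the bullet specifies.

```python
import math

def detweighted_average(values, errs, det_weights):
    """Average minirun mean values using external detector statistical
    weights (not the self errors), plus the corresponding global error.
    The per-point error bars to draw are errs themselves (no rescale)."""
    wsum = sum(det_weights)
    mean = sum(w * v for w, v in zip(det_weights, values)) / wsum
    # propagate the per-point errors through the fixed weights
    err = math.sqrt(sum((w * e)**2 for w, e in zip(det_weights, errs))) / wsum
    return mean, err

def null_pitt_average(a_in, e_in, a_out, e_out):
    """Null combination where IN and OUT each carry 1/2 weight."""
    mean = 0.5 * (a_in + a_out)
    err = 0.5 * math.sqrt(e_in**2 + e_out**2)
    return mean, err
```

The global mean and error from `detweighted_average` are what go in the statbox/overlay, with no fit line on the TGraph.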

Sept 10

https://docs.google.com/document/d/1tnJQ0Ic679W8Gxl1VNAS5D2oNhxCgXl0kK_V9ee4Ch8/edit


  • Correction certainty stress test: Giant Spike y-rms runs (7500-7715, slugs 185-187 ish)
    • Corrections (total and per evMon or bpm) agree between methods?
    • Cameron: type up a log of the slug-level correction agreement
    • Minirunwise correction agreement (write an aliasing loop for it)
  • Explanation of regression’s limitations: Generally large regression RMS 3rd period
    • Y mod only? Or…?
    • Y corrections dominate regression discrepancy?
    • Bmodonly, reg, coil1,3,5,7 => if good, then not X’s fault
    • Bmodonly, reg, coil2,4,6 => if bad, then Y’s fault
  • Sanity check of BPM independence: diff_bpm12X_RMS - y mod only, still bigger?
  • Curiosity: Why are OnlyBMOD X and Y BPM RMSs doing weird things? Quartet synching?
    • e.g. X-rms seg 2, phase of modulation?
      • Plot of: yield_beam_mod_ramp_mean and _rms vs. diff_bpm4eY_rms per minirun. If _mean or _rms are correlated with the BPMs' RMSs at all, this explains it. The 8-step phased bmod cycle can be 2 multiplets of phase 1 and 5, 2 and 6, 3 and 7, or 4 and 8. In yield_beam_mod_ramp units this appears as the following plot:

[Figures: Bmod phase, run 7785; Bmod phase, coil 7, run 7785]

      • The fact that the starting phase (or average phase, however it is calculated) of the multiplets within supercycles is unstable enough to span multiple starting-phase options is a sign of phase drift, so the plot described above (phase vs. BPM RMS) would be useful.
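The suggested phase-vs-RMS check reduces to asking whether two per-minirun quantities are correlated; a minimal stdlib sketch of the Pearson correlation coefficient (in practice the inputs would be read from the aggregator ROOT file, e.g. yield_beam_mod_ramp_mean and diff_bpm4eY_rms):

```python
def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length series,
    e.g. per-minirun ramp phase mean (or rms) vs. a BPM diff RMS."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5
```

An |r| near 1 between phase and BPM RMS would support the phase-drift explanation for the weird OnlyBMOD RMS behavior.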

Sept 8

BMOD Inclusion

Plan

Check all of these minirunwise and slugwise, for ErrorFlag, IncludeBMOD, and OnlyBMOD compared to each other

  • Check corrected RMS difference between all 3 cuts for the analysis techniques
    • 12BPM Eigen Lagr
    • 12BPM Eigen Reg
    • 5BPM Eigen Dit
    • Plots:
      • Corrected RMS for each cut, for each analysis technique
      • Corrected RMS difference between cuts, for each analysis technique
      • Corrected RMS ratio between cuts, for each analysis technique
  • Check the mean value corrected asymmetry difference (minirunwise differences) between all 3 cuts, and pull plot
    • Plots:
      • Asymmetry difference between cuts p0 fit, for each analysis technique
      • Asymmetry difference between cuts pull plots, for each analysis technique
  • Check the mean value corrected asymmetry difference (multipletwise if possible) of each 3 cuts between analysis techniques
    • Mean value difference: 12BPM Eigen Lagr vs. 12BPM Eigen Reg vs. 5BPM Eigen Dit, for each cut
    • RMS of that difference: 12BPM Eigen Lagr vs. 12BPM Eigen Reg vs. 5BPM Eigen Dit, for each cut
  • Check the BPM differences means and RMSs, check if any unreasonable noise is introduced, between all 3 cuts
General minirunwise grand plots (using self errors for weights everywhere)
  • Allbpms, ErrorFlag cut - pdf, txt
  • Allbpms, IncludeBMOD cut - pdf, txt
  • Allbpms, OnlyBMOD cut - pdf, txt
Specific minirunwise comparison plots
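The RMS difference and ratio between cuts reduce to a small calculation per analysis technique. A minimal sketch (the RMS here is the width about the distribution's own mean; whether to subtract the mean first is a choice for the real analysis, and the function names are illustrative):

```python
import math

def rms(xs):
    """Width of a distribution about its own mean (population RMS)."""
    m = sum(xs) / len(xs)
    return math.sqrt(sum((x - m) ** 2 for x in xs) / len(xs))

def compare_cuts(asym_a, asym_b):
    """Corrected-asymmetry RMS difference and ratio between two cut
    datasets (e.g. ErrorFlag vs. IncludeBMOD), for one technique."""
    ra, rb = rms(asym_a), rms(asym_b)
    return ra - rb, ra / rb
```

Run once per technique (12BPM Eigen Lagr, 12BPM Eigen Reg, 5BPM Eigen Dit) and per cut pair to fill the plots listed above.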

Sept 1

Next steps and priorities:

To finalize the beam corrections results:

  • Make tables of the position differences and evMon diffs (separate the 3 parts)
  • Augment all tables and plots, separating the 6 IHWP x Wien states out
  • Consider the target position and angle and E BPM definitions (if possible)
  • Make a Lagr mul plot (all multiplets in 1 histogram)
  • Archive plots, tables, and results of slug, pitt, etc. plots in a useful format, documented, publicly (haplog/ifarm tape/docdb)
  • Do NULL Pitt plots as well
  • Treat Main det double difference alongside A_PV, corrections, multipletwise differences, etc.
  • Make a multipletwise difference distribution RMS plot (minirunwise, slug, etc.) in addition to the mean disagreement numbers


To determine whether to include BMOD in the final dataset, do:

  • Produce aggregator data for Lagr-12BPM dataset with Include and Only BMOD cuts
  • Look at the corrected asym RMS, separately with OnlyBMOD and ErrorFlag cuts
  • Check if the IncludeBMOD corrected asym mean and correction per monitor means jump around outside of statistical fluctuations, compared to ErrorFlag cut dataset
  • Check the multipletwise difference between analysis methods means and RMSs, if OnlyBMOD is substantially worse than ErrorFlag cut dataset

Document, commit, etc. all necessary scripts, and make sure data is saved in safe places

August 25

Respin 2 beam corrections results - slides

August 18

Summary Respin2 webpage

Plan:

  • Data quality check: Minirun and Slug-wise check outliers (us_avg, usl, usr together) of
    • Check 5bpm dit vs. 12bpm Lagr (page 1) and 12bpm reg vs. 5bpm dit (page 9) - Check comparisons of USL vs. USR and pull plots...
    • Method-differences plots
    • Corrections per monitor
    • Method slopes differences (should be benign anyway) (page 17 lagr vs. reg, etc.) - scan
    • Split CREX into the 3 parts -> look for outliers from means that are ~2 sigma pull (scan the tree in part-averaging script, and plot the data in minirun-wise plot) and evaluate potential nefarious causes in the run-wise raw data (usl, usr, us_avg)
      • Lagr vs. reg slope comparison (allbpm eigenvector analysis)
      • Comparing correction per monitor also between Lagr vs. reg (prior times diff_evMon_mean)
      • Corrected asyms differences also between Lagr vs. reg
  • Pitt and slow control averaging plots (signed and unsigned)
  • Evaluate including BMOD in the final-polish/verified data-set (do slug avg, compare means and RMSs of corrected asyms between dataset)

July 21

Looking at results from respin 2 data:

  • dit - reg - lagr differences
  • impact of including error bars on lagrange sensitivities
  • slope plots
  • slope differences per monitor from different techniques (dit vs reg)
  • accumulated residuals (or, residual spread per slug RMS and averages)
  • net beam corrections (3 parts and slugwise)

Tasks for next week:

  • Try to zoom in on the parts where plain and eigen dit corrected asyms disagree
    • Double check the root files and slopes, etc. that there is nothing wrong in the analysis chain
    • The two methods should get identical corrected asyms
    • Verify no 1 arm running shenanigans or mislabeled data slipped in (and that run_avg == eigen_dit_run_avg_part_avgd)
  • Look at where slopes disagree between dit - reg
    • Verify the corrections and/or monitorwise diff RMSs are small (so that if the slopes disagree it has negligible effect on the corrections being made)
    • Make slug_avg plots of the slope disagreements (including RMSs)
    • Make slug_avg plots of the corrections per monitor too
  • Do a 12BPM Eigen Reg and Eigen Lagr comparison
    • See if the slopes agree better than the 5BPM case (where evMons 3 and 4 disagreed substantially between reg and dit)
  • Provide easy to parse tables of the 5 evMon's corrections per 3 CREX parts (and make the error calculation math clear as well)
  • Improve dit-reg-lagr plots
    • Add pull plots
    • Do slug averaging of these (including RMSs)
  • Multipletwise and runwise diff plots being == is suspicious
    • Double check the plotting script isn't doing anything funny
    • Double check the source aggregator root file and input device list are correct
    • Double check the math by plotting the distributions by hand for some example runs

July 14

  • Respin 2 outputs
  • Updated eigenvector definitions
  • Dit + reg + lagrange using eigenvectors
  • Residual sensitivities, done correctly this time

May 19

  • Respin 2: Look at dit vs. reg differences - HAPLOG
    • Look at respin2 dit-reg diff outputs
    • Checked by eye the 4+ sigma outlier miniruns
  • Respin 1: 3 CREX parts’ monitorwise net corrections (and uncertainties?) - HAPLOG
  • Respin 1: Residual dithering sensitivities - HAPLOG
    • Looking at plain BPMs, run_avg sensitivities
    • Looking at eigenvector monitors, run_avg sensitivities
    • Looking at plain BPMs, cyclewise sensitivities
    • Fractional residuals (needs more work in respin2 outputs to clean up outliers) - HAPLOG
  • Timescale dependent RMSs in asym->histogram filling - not discussed - HAPLOG

Attendance

Cameron, Paul, Dave Armstrong, Victoria, Weibin, Robert, Devi, Kent