DAQ Testing/20190322
March 22nd, 2019. Tester: Cameron Clarke
Goals
- Find the optimal settings for Online Japan running that allow online plots in real time at 240 Hz for long enough to be useful
Online Japan Speed Limit
Cameron made a first pass of tests on March 7th; see the HALOG entry: https://logbooks.jlab.org/entry/3664231
Today's testing used run 1410, an octet 240 Hz run containing 190k events (13 minutes' worth of real data), analyzed with Parity/prminput/prex.conf and simulate_online_disk_file.conf.
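As context for the numbers below, here is a standalone ROOT macro (not japan code; the tree and branch contents are invented) showing one way to measure the quantity being quoted, the average fill time in ms/event over each 10k-event block, as the output file grows. The absolute values and any slowdown depend on the analyzer's own tree and buffer handling, so this only illustrates the measurement, not japan's performance.
<pre>
// speed_probe.C -- standalone ROOT macro, not part of japan.  Reports the
// average fill time per 10k-event block, i.e. the ms/event figure quoted
// below.  Branch contents are invented placeholders.
#include "TFile.h"
#include "TTree.h"
#include "TStopwatch.h"
#include "TRandom3.h"
#include <cstdio>

void speed_probe(Long64_t nevents = 190000, Int_t compress = 0)
{
   TFile *fout = new TFile("speed_probe.root", "RECREATE", "", compress);
   TTree *tree = new TTree("mul", "dummy multiplet tree");
   Double_t payload[100];                     // stand-in for detector/monitor leaves
   tree->Branch("payload", payload, "payload[100]/D");

   TRandom3 rng(0);
   TStopwatch sw;
   const Long64_t block = 10000;              // report every 10k events

   for (Long64_t i = 0; i < nevents; ++i) {
      if (i % block == 0) { sw.Reset(); sw.Start(); }
      for (Int_t j = 0; j < 100; ++j) payload[j] = rng.Gaus();
      tree->Fill();
      if ((i + 1) % block == 0) {
         sw.Stop();
         printf("events %7lld : %.3f ms/event\n",
                (long long)(i + 1), 1000. * sw.RealTime() / block);
      }
   }
   tree->Write();
   fout->Close();
   delete fout;
}
</pre>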
- The analyzer runs through all of the data but slows down as the ROOT file grows. With the full tree trim (no evt tree) and no compression (25 MB/10k events):
- Adaq1 reaches 2.1 ms/event after 190k events; extrapolating, it would hit the 4 ms/event speed limit at ~25 minutes of data taking, which is probably OK
- Adaq3 reaches 1.7 ms/event after 190k events; extrapolating, it would hit the 4 ms/event speed limit at ~31 minutes of data taking, which is manageable
- Turning compression on (12 MB/10k events):
- Adaq1 hits 4 ms/event after 80k events, which is about 6 minutes or so (adaq1 fares better than adaq3 here, perhaps because of a faster processor)
- Adaq3 hits 4 ms/event after 50k events, which is only about 4 minutes
- Turning off the circular buffer entirely during "simulated" online running (I set it to 0, which may mean a buffer of length 0 rather than the buffer actually being off; see the ROOT sketch after this list):
- There is basically no slowdown at all, even as the ROOT file gets large
- Setting this option to 0 also appears to prevent the slowdown in regular (not "online") running mode
- Turning on tree splitting (after 50 MB) while leaving the circular buffer settings and compression on (also sketched after this list):
- This does in fact prevent the slowdown: each file that is split off returns the analyzer to the speed of a fresh .root file, so this should give us the speed we need
- I have added the rudimentary functionality needed to make this a macro-settable command in a new branch, "feature-tree-size-limits" (though it segfaults upon close, when re-passing through the ROOT file with the QwCombiner class, at QwRootFile.h line 462, regardless of which technique is used to reduce the file size)
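For reference, a minimal plain-ROOT sketch (standalone, not japan code; the file, tree, and branch names are invented) of the two underlying knobs from the compression and circular-buffer tests above. The assumption that japan's circular-buffer option maps onto TTree::SetCircular, where an argument of zero or less disables circularity, is mine.
<pre>
// buffer_and_compression.C -- standalone ROOT macro, not part of japan.
// Illustrates the compression and circular-buffer settings tested above.
#include "TFile.h"
#include "TTree.h"

void buffer_and_compression()
{
   // Compression is the 4th TFile argument: 0 = off (~25 MB/10k events in
   // the tests above), nonzero = compressed (~12 MB/10k events) at the cost
   // of extra CPU per basket written out.
   TFile *fout = new TFile("online.root", "RECREATE", "", 0);

   TTree *tree = new TTree("mul", "dummy multiplet tree");
   Double_t x = 0;
   tree->Branch("x", &x, "x/D");

   // TTree::SetCircular(N) keeps only the last N entries in memory (handy
   // for live monitoring); an argument <= 0 disables circularity, which is
   // presumably what setting the japan circular-buffer option to 0 does.
   tree->SetCircular(0);

   for (Long64_t i = 0; i < 100000; ++i) { x = i; tree->Fill(); }
   tree->Write();
   fout->Close();
   delete fout;
}
</pre>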
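A similar standalone sketch of the tree splitting tested above, assuming it corresponds to ROOT's TTree::SetMaxTreeSize mechanism: once the file holding the tree grows past the limit, ROOT closes it and continues filling into split_1.root, split_2.root, and so on, so each file starts at fresh-file speed.
<pre>
// tree_splitting.C -- standalone ROOT macro, not part of japan.
// Demonstrates automatic file splitting once the output exceeds ~50 MB.
#include "TFile.h"
#include "TTree.h"
#include "TRandom3.h"

void tree_splitting()
{
   // Static call: applies to every tree filled after this point.  When the
   // file being written grows past this size, ROOT closes it and continues
   // in split_1.root, split_2.root, ...
   TTree::SetMaxTreeSize(50LL * 1000 * 1000);   // ~50 MB

   TFile *fout = new TFile("split.root", "RECREATE");
   TTree *tree = new TTree("mul", "dummy multiplet tree");
   Double_t payload[100];
   tree->Branch("payload", payload, "payload[100]/D");

   TRandom3 rng(0);
   for (Long64_t i = 0; i < 200000; ++i) {      // ~160 MB uncompressed
      for (Int_t j = 0; j < 100; ++j) payload[j] = rng.Gaus();
      tree->Fill();
   }

   // After a split the tree lives in a *new* TFile, so close the file the
   // tree currently points to rather than the original pointer.
   tree->Write();
   TFile *current = tree->GetCurrentFile();
   current->Close();
}
</pre>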