From PREX Wiki
November 5th, 2018 Testers: Cameron
- Measuring ET bridge paradigm analyzer deadtime
- Check that forced blocking deadtime actually affects DAQ readouts
- Check whether the -nb, -n, -s, -q options and the chunk size can or cannot help
- Run 4556 - non-blocking - simple analysis - 961.524 Hz set rate - 28.7829 % deadtime in analyzer - 0 % deadtime in DAQ
- Run 4558 - non-blocking - 200 us delay ana - 961.524 Hz set rate - 76.0947 % deadtime in analyzer - 0 % deadtime in DAQ
- Run 4561 - switching to blocking mode to see if the TI board cares about CODA
- Run 4561 - blocking - 200 us delay ana - 961.524 Hz set rate - 0 % deadtime in analyzer - 0 % deadtime in DAQ
- Run 4561 - blocking - 2,000 us delay ana - 961.524 Hz set rate - 51 % deadtime in analyzer - 0 % deadtime in DAQ
- Run 4561 - blocking - 20,000 us delay ana - 961.524 Hz set rate - 82.1063 % deadtime in analyzer - 0.38 % deadtime in DAQ (after the run the ROC complains that "interrupt: Ev#172055: VQWK 0 timed out with timer=-200.logTask: 201 log messages lost." for a large number of events, probably the ones that were held up)
- Run 4561 - blocking - 200,000 us delay ana - 961.524 Hz set rate - 98.5522 % deadtime in analyzer - 0.101346 % deadtime in DAQ
- Run 4561 - blocking - 2,000,000 us delay ana - 961.524 Hz set rate - 99.949 % deadtime in analyzer - 0.0 % deadtime in DAQ
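The run 4561 delay scan can be compared against a simple non-paralyzable dead-time model (my own back-of-envelope arithmetic, not anything from the DAQ code): for trigger rate f and per-event processing delay d, the accepted fraction is 1/(1 + f·d), so the dead fraction is f·d/(1 + f·d). It agrees well at the largest delay but overshoots the measured analyzer deadtime at shorter delays, which is consistent with something buffering events:

```python
# Toy non-paralyzable dead-time model.  f is the set rate from the scan;
# the (delay, measured %) pairs are the run 4561 numbers logged above.
f = 961.524  # Hz, set rate

def dead_fraction(f_hz, delay_s):
    x = f_hz * delay_s
    return x / (1.0 + x)

for delay_us, measured_pct in [(200, 0.0), (2_000, 51.0),
                               (20_000, 82.1063), (200_000, 98.5522),
                               (2_000_000, 99.949)]:
    model_pct = 100.0 * dead_fraction(f, delay_us * 1e-6)
    print(f"{delay_us:>9} us  model {model_pct:6.2f}%  measured {measured_pct:7.4f}%")
```

At 2,000,000 us the model gives 99.95 %, matching the measured 99.949 %; at 20,000 us it predicts ~95 % against the measured 82 %, so the short-delay behavior is not a simple per-event dead time.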
- Run 4562 - blocking - swapped the TI busy output for the ADC read signal and got no scaler increments at any time delay
- Run 4563 - blocking - restored it... maybe the deadtime is different now? (Ask Ed or Carl?)
- Run 4564 - blocking - when running at 2 seconds of deadtime per event, the TI busy out reads full rate (0 % deadtime); but at a more reasonable delay the backlogged events seem to catch up and the DAQ deadtime becomes non-zero
- Run 4565 - blocking - 20,000 us delay ana - 961.524 Hz set rate - 73 % and falling deadtime in analyzer - 0.6 % deadtime in DAQ and rising
- I'm beginning to think there is some very large buffer, and that you have to run for a very long time before the backlogged data shows up. The DAQ seems to keep reading out data and storing it for the ET to come and grab, but the buffer gradually fills and slows things down... ROC 23 keeps reading events in spurts.
- When I end the run, ROC 23 doesn't stop reading events, so the data must be buffered somewhere and has to be emptied before the run can end.
- If I kill the etclient that is doing the time-delay analysis then ROC 23 ends immediately
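The spurty readout and slow run-end are consistent with a big intermediate buffer between ROC 23 and the slow ET client. A toy producer/consumer queue model (purely illustrative, not the actual ET internals; all the rates here are made up) shows the backlog growing while the slow client is attached and only draining after "end run", mimicking what ROC 23 does:

```python
from collections import deque

# Toy model: a producer adds events each tick at roughly the set rate,
# while a slow consumer drains fewer per tick, so a backlog accumulates.
# After the "run ends", a fast drain empties the buffer, which is why
# ROC 23 keeps reading events for a while after the run is stopped.
def simulate(prod_per_tick, cons_per_tick, prod_ticks, drain_per_tick):
    buf = deque()
    for _ in range(prod_ticks):
        buf.extend(range(prod_per_tick))           # events produced this tick
        for _ in range(min(cons_per_tick, len(buf))):
            buf.popleft()                          # slow consumer keeps up partially
    backlog = len(buf)                             # events stuck at end of run
    drain_ticks = 0
    while buf:                                     # run ended; fast drain
        for _ in range(min(drain_per_tick, len(buf))):
            buf.popleft()
        drain_ticks += 1
    return backlog, drain_ticks

backlog, drain = simulate(prod_per_tick=960, cons_per_tick=170,
                          prod_ticks=60, drain_per_tick=5000)
print(f"backlog at end of run: {backlog} events, drained in {drain} ticks")
```

Killing the slow consumer is the analogue of killing the etclient: the drain phase then runs at full speed and ROC 23 ends immediately.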
- Run 4566 - non-blocking - back to normal - except interestingly the beginning of the run picked up a huge number of leftover events from the prior run... something is happening.
- I understand the ET system less now I think.
- I talked to Carl Timmer and he pointed me to the et_monitor program and the Java monitor too.
- To use the Java monitor:
- Execute "java org.jlab.coda.et.monitorGui.monitor" in the java folder of et-12.0.
- Go to the connection menu and connect to the ET system.
- Enter the name of the sub ET, and know the port numbers if you want to connect remotely.
- If you want to connect directly know the TCP port.
- A local connection is probably easiest, since then you just need to get the port right.
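Putting the steps above into one launch line (a sketch: the class name is the one Carl gave, but the ET_HOME path and the classpath layout of the et-12.0 install are assumptions to adjust locally):

```shell
# Launch the ET Java monitor GUI.  ET_HOME is an assumed install location;
# point -cp at wherever the et-12.0 java classes/jar actually live.
ET_HOME=${ET_HOME:-$HOME/et-12.0}
echo "java -cp $ET_HOME/java org.jlab.coda.et.monitorGui.monitor"
```

Then use the connection menu in the GUI to attach to the ET system as described above.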
- This monitor shows where the events are and how the memory is being used, which should illuminate how the events are being moved around, and whether the scaler readouts are accurate or need some reset time to settle.
- We should look at file sizes as a function of time.
- We should keep messing with the parameters.
- We should take the TS busy out and use that in the scalers too (needs a non-binary ECL cable and translation).
- We should use the java monitorGui to look at the data flow.