DAQ Testing/20181102

November 2nd, 2018 Testers: Cameron and Bob

See Today's Meeting Notes

Goals

  • Begin working on Injector timing diagram
  • Measuring ET bridge paradigm analyzer deadtime
    • Figure out signals for scalers in the counting house
    • Revive software to manipulate the scaler and read it out with a network server
    • Incorporate scaler data into deadtime measurement in a simple online-analyzer

Injector Timing Diagram

In the past, several attempts at mapping out the injector timing have been made by Bob and others. Bob has several timing diagrams compiled here and described here.

The goal for today was to refresh Cameron and Bob on the status of the injector timing, both in the crates and wiring, and in the injector CRL code.

The plan moving forward is to put all of the signals on a scope, relative to the master pulse (MPS - Tsettle falling edge), and to parse through the CRL to see what the start, wait, and end times are for all data readouts.

The information needed is:

  • Where all of the Helicity signals are going
    • HEL, Tsettle (MPS, used as LNE - Load Next Event), Tstable (MPS), Pair Sync, and QRT (Multiplet Sync)
  • VQWK information
    • Ext gate (start), integration time, out of the TI, read starts and stops
  • Readout timing in the CRL
    • Start and end times, and when the flexio, scaler (SIS 3801 or 3800), VQWK, and STR7200 readouts occur
  • Flexio
    • NIM in strobe time
  • Scaler (SIS 3801)
    • LNE strobe time

We also need a timing diagram of the Counting House Parity DAQ.

120.0 Hz Noise in NIM to TTL level translator

There is a consistent 120.0 Hz noise in the NIM to TTL level translator in the CH01B03 NIM crate (beneath the primary Parity VME DAQ crate). It is noticeable as an aliased ~0.3 V wave in the NIM outputs of the module, but is not present in the TTL outputs. See here for a snapshot of a helicity signal in TTL and in NIM, and here (cleaner channel) and here (dirtier channel) for a zoomed-in look at the noise itself (when absolutely no signal is plugged into the particular input/output channel in question). This should be fixed. There are also several non-functioning fans in the NIM crates to the right of the Parity DAQ VME crate, which should get checked out (possibly due to a malfunctioning power supply).

Using Scalers to test Online Analysis Deadtime

Signals for scalers:

  • Bob helped Cameron route a Helicity Tsettle signal into the SIS3801 scaler.
    • There was some confusion due to the 120 Hz noise (described above).
  • Ciprian gave Cameron a Phillips 417 Pocket Pulser, which sends a ~10.5 kHz NIM signal under battery power.
    • Originally it was low on batteries, which led to several misleading false starts (including another stealth strike from the 120 Hz noise).
    • Now there is a ~10.5 kHz "clock" plugged into channel 12, and the helicity information is plugged into channel 13, with the ~500 kHz Unser 27 signal still in channel 10 as it was originally.

Software for Scalers

A couple of years ago (and before that as well) Bob implemented a client/server in apar@adaq3:~/scaler (with program code in /readout/ and /SISnohel/) for the Counting House scaler in order to support monitoring of the so-called Halo monitor. Briefly, the scheme is to continuously read the scalers and put the data into the EPICS variables "Aline_HaloN", where N = 1, 2, ..., 8 (a minimal sketch of this loop follows the list below).

  • client/server code
    • /adaqfs/home/apar/scaler/readout/README
  • putting scaler data into EPICS
    • /adaqfs/home/apar/scaler/scalepics/README
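
As a rough illustration of that scheme (not the actual scalepics code, which lives in the READMEs above), a minimal polling loop could look like the following, assuming the standard EPICS Channel Access C API; readHaloChannel() is a hypothetical stand-in for the real scaler readout:

  /* Hedged sketch: continuously poll the scaler and publish counts to the
   * Aline_HaloN EPICS variables.  Uses the standard EPICS Channel Access
   * C API; readHaloChannel() is a hypothetical stand-in for the real
   * SISnohel readout. */
  #include <stdio.h>
  #include <unistd.h>
  #include <cadef.h>                      /* EPICS Channel Access */

  #define NHALO 8

  /* Hypothetical placeholder for the real scaler readout. */
  static double readHaloChannel(int ichan)
  {
      (void)ichan;
      return 0.0;                         /* real code would return accumulated counts */
  }

  int main(void)
  {
      chid pv[NHALO];
      char name[32];
      int  i;

      ca_context_create(ca_disable_preemptive_callback);
      for (i = 0; i < NHALO; i++) {
          snprintf(name, sizeof(name), "Aline_Halo%d", i + 1);
          ca_create_channel(name, NULL, NULL, 10, &pv[i]);
      }
      ca_pend_io(2.0);                    /* wait for the PVs to connect */

      for (;;) {                          /* read scalers into EPICS forever */
          for (i = 0; i < NHALO; i++) {
              double counts = readHaloChannel(i);
              ca_put(DBR_DOUBLE, pv[i], &counts);
          }
          ca_flush_io();
          sleep(1);
      }
      return 0;
  }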

There is also a copy of the "xscaler" display in ~/scaler/xscaler but it probably needs a bit of work to connect to the Counting House server.

Things to do to get this working: first, we need a clock to normalize rates (the Phillips 417 Pocket Pulser is sufficient), possibly placed in channel 0 and referenced in the scacli.c code as "CLKCHAN 0".
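
For concreteness, the normalization itself is just a ratio; a minimal sketch (the ~10.5 kHz clock frequency and the CLKCHAN 0 placement come from the notes here, but the function itself is illustrative, not the actual scacli.c code):

  /* Sketch: convert raw scaler counts into rates using the pocket-pulser
   * clock channel.  CLKCHAN and the ~10.5 kHz figure follow the notes
   * above; everything else is illustrative. */
  #define CLKCHAN 0            /* clock plugged into scaler channel 0 */
  #define F_CLOCK 10500.0      /* Phillips 417 pocket pulser, ~10.5 kHz */

  /* Live time is inferred from the clock channel, so no wall clock is needed. */
  double channel_rate(unsigned long counts_ch, unsigned long counts_clk)
  {
      double elapsed = (double)counts_clk / F_CLOCK;                  /* seconds of counting */
      return (elapsed > 0.0) ? (double)counts_ch / elapsed : 0.0;     /* Hz */
  }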

The scaler software is organized as follows:

  • (In /SISnohel/) there is a program called the "producer" which adds up the FIFO signals until it is cleared by software requests (which effectively turns the SIS 3801 into a SIS 3800) and puts the data into memory for remote access.
    • This producer is spawned by the SISproducer() function using the scaler's .o library (currently the compiled /SISnohel library, as called and loaded in the ROC23 halladaq6.boot boot script).
    • A client then uses the Read3801(iscaler,ichannel) function to get the data out.
  • (In /readout/) there is scaler data server code.
    • It is compiled into a .o library for use by the server, and it is used by the scacli.c program, which is compiled into the "scread" executable.
    • The ROC 23 halladaq6.boot script also loads the scaser.o library and spawns an instance of it (in addition to the producer).
    • This is the same technique as followed by the Green Monster and BMW servers.
  • The scread program reads out the currently stored information; each channel's count increments for every signal it sees (see the sketch after this list).
    • Calling scread with the -c flag clears all of the data.
    • The -p # option reads channel number #.
    • Adding -r after -p # reports the frequency at which that channel is incrementing.
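
For reference, a hedged sketch of what a small client along these lines could look like, using the Read3801(iscaler, ichannel) accessor named above and mimicking what scread -p # -r reports; the return type, the scaler index, and the clock channel assignment are assumptions for illustration, not the actual scacli.c code:

  /* Sketch of a Read3801() client in the spirit of "scread -p <ch> -r".
   * Read3801(iscaler, ichannel) is the accessor named above; its return
   * type, the scaler index, and the clock channel are assumed here. */
  #include <stdio.h>
  #include <stdlib.h>

  extern unsigned long Read3801(int iscaler, int ichannel);  /* from the SISnohel library */

  #define ISCALER 0            /* assumed: the single SIS3801 served by the producer */
  #define CLKCHAN 0            /* assumed clock channel (pocket pulser, ~10.5 kHz) */
  #define F_CLOCK 10500.0

  int main(int argc, char **argv)
  {
      int ich = (argc > 1) ? atoi(argv[1]) : 12;       /* channel to report */
      unsigned long clk = Read3801(ISCALER, CLKCHAN);  /* accumulated clock ticks */
      unsigned long cnt = Read3801(ISCALER, ich);      /* accumulated channel counts */
      double elapsed = (double)clk / F_CLOCK;          /* live counting time, seconds */

      printf("channel %d: %lu counts", ich, cnt);
      if (elapsed > 0.0)
          printf(", %.3f Hz over %.1f s", (double)cnt / elapsed, elapsed);
      printf("\n");
      return 0;
  }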

Incorporating Scaler Server into Deadtime Measures

The simple ET client (~apar/et-12.0/src/cameronc/etclient_main.C) has been updated:

  • Added a clock (scaler channel 1 at ~ 10.5 kHz).
  • Added a helicity event counter and rate calculator (using the clock).
  • Now the number of events obtained by the ET client "simple analyzer" can be directly compared to the number of events that the scaler knows should have been processed based on the helicity trigger information (the bookkeeping is sketched after this list).
    • Added a TI board busy-out into the scaler to count the number of events that the ROC processed, so we can see how far back along the chain the deadtime goes.
    • This would be useful for seeing whether any given deadtime is just in the software, or whether it is in the crate itself after having been blocked by an ET blocking station, for instance.
    • Wrote a little bash script (~apar/et-12.0/src/cameronc/read_scaler.sh) that uses scread to reset the scaler, accepts a user-specified counting time, and then prints the clock count, the helicity flip count, and the CODA event processing count (this could be used simultaneously with a blocking-mode CODA run to quickly and easily see whether, and by how many events, CODA fails to process events due to blocking-mode-induced deadtime).
    • Measured the pocket pulser to provide a fairly reliable frequency of 10.4694 kHz, and measured the injector helicity flip rate at 29.5567 Hz (in 600 seconds of scaler counting).
    • (no data file, CODA glitch) Run 4548 lasted 150 minutes or so and included a lot of good beam data from the experiment, taken with the injector 30 Hz helicity information.
    • (no data file, CODA glitch) Run 4549 lasted a few minutes but ran at 1 kHz from the helicity control board in the Counting House DAQ, so it has 600,000 events (and was taken when there was no beam, due to the cone falling over again).
      • This test (see a screenshot of the output here) verifies that the deadtime Cameron has been measuring with the non-blocking etbridge and etclients has in fact been in the analyzer and has not been affecting the DAQ at all.
    • Run 4554 is a test run verifying that reverting back to the proper injector helicity and boot script settings worked (it did; see HALOG 3620085).
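
For reference, the bookkeeping behind these comparisons amounts to a couple of ratios; a minimal sketch (the counter names are illustrative; the inputs are just the numbers reported by the scaler, the TI busy-out channel, and the ET client):

  /* Sketch of the deadtime bookkeeping described above: compare the
   * helicity flips the scaler counted (events that should exist), the
   * TI busy-out counts (events the ROC processed), and the events the
   * ET-client analyzer actually received.  Names are illustrative. */
  #include <stdio.h>
  #include <stdlib.h>

  int main(int argc, char **argv)
  {
      if (argc != 4) {
          fprintf(stderr, "usage: %s <helicity_flips> <roc_events> <analyzer_events>\n", argv[0]);
          return 1;
      }
      double n_hel = atof(argv[1]);   /* scaler helicity count             */
      double n_roc = atof(argv[2]);   /* TI busy-out count (ROC processed) */
      double n_ana = atof(argv[3]);   /* events seen by the ET client      */

      /* Fractional losses at each stage of the chain. */
      printf("DAQ-level loss:      %.4f\n", (n_hel > 0.0) ? 1.0 - n_roc / n_hel : 0.0);
      printf("Analyzer-level loss: %.4f\n", (n_roc > 0.0) ? 1.0 - n_ana / n_roc : 0.0);
      return 0;
  }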

Results

  • We have a good idea of how to tackle the Injector (and Counting House) timing problems.
  • The ET client deadtime problem can be tackled with more mathematical rigour now that we have the scalers as a proper reference clock and counting diagnostic.

To Do

  • Go into the Injector (and CRL) and map out the timing.
  • Redo the tests with -q, -n, -r, and CHUNK size to see what affects the ET client analyzer's deadtime.