# Quickstart
# Getting this repository

## Obtain repository (requires internet connection)

- If you have a GitHub account that knows the public SSH keys of your destination computer, type the following in the destination directory to clone the cheetah repository:

  `git clone git@github.com:antonbarty/cheetah.git`

- You may also clone the cheetah repository without SSH keys by issuing this command in your destination directory instead:

  `git clone https://antonbarty@github.com/antonbarty/cheetah.git`
## Compiling this repository

Examine the repository carefully and you will find that it is missing most of the shared libraries needed for linking. Compiling cheetah requires two steps.
- First, the LCLS libraries must be compiled in your cloned cheetah distribution. Assuming you are already in the cheetah directory, issue the following commands:

  `cd release/pdsdata`, followed by `make x86_64-linux`. This second command creates the LCLS libraries necessary for compilation to complete (otherwise you will get an error such as 'cannot find libacqdata.so'). Remember to change the target CPU/OS combination if working on a different system (e.g. OS X), but we do not currently offer support for compilation on arbitrary OSes.

- Then run make in your cheetah main directory to compile the cheetah executable:

  `cd ../../`, then `make`. You may want to change the `LCLSDIR` and `HDF5DIR` fields in the Makefile if you encounter error messages during compilation.
## Library dependence

Cheetah depends on the following libraries:

- HDF5 library for input/output of the Hierarchical Data Format HDF (http://www.hdfgroup.org/HDF5/)

- giraffe library for correlation functionality (git@github.com:feldkamp/giraffe.git; email feldkamp-at-slac.stanford.edu if you have questions). This library in turn depends on further external libraries, such as FFTW and LIBTIFF.

All libraries necessary for a basic build should be found when you type `make` on LCLS's psexport machine (or related computers). If that is not the case, you may have to modify the include or library paths in the Makefile. You can also disable the correlation functionality altogether, and eliminate the dependence on the giraffe library, by removing the `CORRELATION_ENABLED` preprocessor flag in the Makefile.
# cheetah.ini as a workflow selector

The cheetah executable compiled above expects an initialization file named cheetah.ini in the same directory. Through this initialization file the user fine-tunes how cheetah processes the detector data in an xtc container.

Besides a number of detector and runtime parameters -- `nthreads`, `pixelSize`, files for detector geometry, and various masks -- cheetah currently has two primary, mutually exclusive operations: generating a darkcal and hit-finding. Here is a brief description of these two operations (detailed descriptions can be found in cheetah_comments.ini, which is included in this distribution).
## Generating darkcal

In cheetah.ini set `generateDarkcal=1`. This causes many of the hit-finder-only settings to be ignored. In darkcal-generation mode, cheetah only adds all detector frames into a running sum. This accumulated darkcal is what the detector measures when it sees no radiation (hence the name dark). The non-zero detector readout in the dark is nominally a result of thermal or electronic noise inherent to the measuring environment. The alternative operation, hit-finding, subtracts this darkcal from every detector frame before assessing whether that frame is a hit.
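A minimal cheetah.ini sketch for darkcal-generation mode might look as follows; the thread count is an illustrative value, not a recommendation:

```ini
; darkcal-generation mode: sum all detector frames into a running darkcal
generateDarkcal=1
; hit-finder-only settings are ignored in this mode
nthreads=8
```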
## Hit-finding

Hit-finding requires the user to specify criteria that discriminate hits from non-hits (or blanks). Hit-finding largely comprises the following sequentially applied groups of operations:
### Data reduction and masking

Here are the cheetah.ini variables, grouped according to their functions:
- `useBadPixelMask` and filename for `badpixelMask`. A binary mask which, when multiplied into each diffraction frame, suppresses the presumably bad/dead pixels (the locations of these pixels are determined elsewhere by the user).

- `useDarkcalSubtraction` and filename for `darkcal`. See the previous subsection on generating darkcal.

- `useSubtractPersistentBackground`, `bgMemory`, `startFrames` and `scaleBackground`. A largely constant background that persistently pollutes hits can be accumulated and subtracted from later diffraction frames. This works best when the constant background is sufficiently different from the desired diffraction signal and the latter also fluctuates significantly from frame to frame.

- `useAutoHotpixel`, `hotpixADC`, `hotpixfreq` and `hotpixmemory`. Creates a hot-pixel binary mask on the fly -- pixels that persistently exceed both `hotpixADC` in value and `hotpixfreq` in frequency when sampled over `hotpixmemory` frames are considered hot and hence masked out.

- `useGaincal`, filename for `gaincal`, and `invertGain`. Applies a known gain calibration across all detector pixels when `useGaincal=1`, with the option of dividing (`invertGain=1`) or multiplying by the specified `gaincal`.
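Collected into a single cheetah.ini fragment, the data-reduction settings above might read as follows; all file names and numeric values here are illustrative placeholders to be replaced with your own calibrations:

```ini
; data reduction and masking (illustrative values)
useBadPixelMask=1
badpixelMask=badpixels.h5
useDarkcalSubtraction=1
darkcal=darkcal.h5
useSubtractPersistentBackground=1
bgMemory=50
startFrames=100
scaleBackground=1
useAutoHotpixel=1
hotpixADC=500
hotpixfreq=0.9
hotpixmemory=50
useGaincal=1
gaincal=gaincal.h5
invertGain=1
```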
### Data selection based on hit-finder criteria

These come in several versions, depending on whether the hit-finder is specifically an icefinder, waterfinder, backgroundfinder or just a regular hitfinder. These hit-finders employ a small set of hit-finding algorithms (currently three) whose parameters can be tweaked. Short descriptions of the algorithms follow. In these descriptions, replace the asterisk with the tag icefinder, waterfinder, backgroundfinder or hitfinder, depending on the expected hit signatures. The algorithms assess data that have already been corrected by the data-reduction and masking steps above.
- Algorithm 1 simply counts the number of pixels above an `*ADC` threshold. A diffraction frame is a hit if the number of such pixels exceeds `*NAT`.

- Algorithm 2 counts how many pixels around each bright pixel (measured counts above the `*ADC` threshold) are similarly bright. These bright pixels are considered a cluster if they comprise at least `*Cluster` pixels. A diffraction frame is a hit if there are more than `*MinPixCount` such clusters. Bright pixels, once counted, are zeroed out to avoid double-counting.

- Algorithm 3 is a more sophisticated/selective version of Algorithm 2. Here clusters are recursively 'grown' outwards from bright pixels, where again a cluster is defined as pixels with values above `*ADC`. Only clusters with sizes between `*MinPixCount` and `*MaxPixCount` are considered legitimate. Finally, a diffraction frame is considered a hit only if it has between `*Npeaks` and `*NpeaksMax` such legitimate clusters.
Hits are not automatically saved unless `savehits=1`. Nevertheless, newly identified hits are always added to a running powder sum, but only at detector pixels whose corrected values are above `powderthresh`.
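As an illustration of the tag substitution, a fragment using the plain hitfinder prefix with Algorithm 1 might read as follows; all numeric values are placeholders to be tuned against your data:

```ini
; Algorithm 1 with the regular "hitfinder" tag (illustrative values)
hitfinderADC=500      ; a pixel counts as bright above this ADC threshold
hitfinderNAT=100      ; frame is a hit if more than this many pixels are bright
savehits=1            ; actually write identified hits to disk
powderthresh=50       ; only corrected values above this enter the powder sum
```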
## Special functions

### HDF5dump mode

Sometimes one needs to see the diffraction frames to get a sense of how features in the signal differ from those of the background. Doing so often allows the user to select an appropriate hit-finding algorithm and fine-tune its parameters, or even write a different hit-finding algorithm altogether. When one sets `hdf5dump=1`, all frames are written to the h5 format; the user must hard-terminate the cheetah program to end this hdf5dump procedure (press Ctrl+\).
### saveRaw mode

Cheetah outputs data in two different formats: Raw and Assembled. The Raw format is an array of pixel values arranged according to how the detector panels are read out. The Assembled format uses an input geometry file to place these panels in their physical arrangement on the detector. The geometry file is in raw format and contains the (x,y,z) location of each pixel. Setting `saveRaw=1` saves the Raw format in each H5 file in addition to the Assembled data. As much as possible we want to use the Raw format in scripts that process the output of cheetah, since then cheetah does not have to be rerun when the geometry is updated.
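The two special functions above correspond to cheetah.ini switches roughly as follows (a sketch, to be combined with the ordinary settings described earlier):

```ini
; output options (illustrative)
hdf5dump=0   ; set to 1 to dump every frame to h5 (terminate with Ctrl+\)
saveRaw=1    ; store the Raw panel layout alongside the Assembled data
```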
# Viewing the HDF5 output

## Anton's IDL H5-browser
Assuming you are using the bash shell, you should be set up to run the H5-browser with the following additions to your $HOME/.bashrc file:
export IDL_DIR='/reg/neh/home/barty/bin/itt/idl'
export IDL_STARTUP='~/idl/idlstartup.pro'
alias runidl="$IDL_DIR/bin/idl -vm='~barty/idl/fel_browser.sav'"
After these additions (and once the variables are sourced in your bash shell -- `source $HOME/.bashrc`), just type `runidl` and you should be greeted by the H5-browser running on the IDL virtual machine.
## Python H5-browser

To use the Python browser on SLAC's Unix computers, you first need to set up your Python environment. One can do so using Andy Salnikov's shell script: `test -f /reg/g/psdm/etc/ana_env.sh && . /reg/g/psdm/etc/ana_env.sh`.
There are two simplistic Python H5-viewers:

- viewRun.py, to sequentially view H5 output from a single run;
- viewAssembledSum.py, to view the Assembled sum from a single run.
These Python scripts require the user to specify input (H5 files) and output (png and txt files) directories. Be mindful to modify these directories (`write_dir` and `source_dir`) in the Python scripts before usage. To learn (or remind yourself) how to use these scripts, simply type `python viewRun.py --help` or `python viewAssembledSum.py --help`.
Values are initialised in this order: first from default values, then overwritten by the ini file, then overwritten by command-line arguments. The first command-line argument should always be the ini file. After that you can set arguments with `--commandname`, where the command name is any name you can put into the ini file. It is probably much better practice to use the ini file, though, since then there is a record of the parameters used.
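As a sketch of these precedence rules, suppose cheetah.ini contains:

```ini
; cheetah.ini -- overrides the compiled-in defaults
nthreads=8
```

Invoking the executable as, say, `cheetah cheetah.ini --nthreads=16` (the exact executable path and argument syntax depend on your build; this invocation is illustrative) would then run with 16 threads, since command-line arguments override the ini file, which in turn overrides the defaults.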