
texture_corr_steps.md

Steps for Texture Correction

  1. First, we need to prepare the grouping file, which will divide the detectors into small groups according to their polar and azimuthal angles. The MantidTotalScattering (MTS) reduction will then use the grouping file to reduce the data into those small groups.

    • Go to /SNS/NOM/shared/scripts/texture and run the texture_group_gen.py script like this,

      mantidpython texture_group_gen.py
      

      which will create the grouping file in XML format, named nom_texture_grouping.xml, together with a detector information file called nom_texture_pointer.json, which contains information about the generated small groups of detectors, such as the corresponding polar and azimuthal angles and the group ID. We will be using this JSON file at a later stage.
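
      For a quick sanity check of the pointer file, a few lines of Python suffice. Note that the key names used below ("polar" and "azimuthal") are assumptions made for illustration; check the actual JSON for the exact field names.

      import json

      # Load the detector information file produced by texture_group_gen.py.
      with open("nom_texture_pointer.json", "r") as f:
          det_map = json.load(f)

      # Print the polar and azimuthal angle of each small detector group.
      # NOTE: the key names "polar" and "azimuthal" are assumptions; check
      # the actual file for the exact field names.
      for group_id, info in det_map.items():
          print(group_id, info.get("polar"), info.get("azimuthal"))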

  2. Copy the grouping file from the previous step to the location where we want to run the MTS reduction. In my case, I was working in /SNS/users/y8z/Temp/Monika_IPTS-27956_Texture_Correction/silicon, and here is the input file for running MTS with the grouping file provided (a hedged sketch of such an input file is given at the end of this step). Then we need to run the MTS reduction with the command,

    mts silicon.json
    

    assuming the JSON input file is named silicon.json.

    mts is a system-wide available command which points to my local version of MantidTotalScattering.
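
    For orientation only, the skeleton of an MTS input file might look like the sketch below. Apart from OutputDir (see step-3) and the grouping file from step-1, every key name and value here is an assumption made for illustration; the actual silicon.json linked above is the authoritative reference.

    import json

    # Skeleton of an MTS input file. CAUTION: the key names below are
    # assumptions for illustration; consult the actual silicon.json for
    # the real schema expected by MTS.
    mts_input = {
        "Facility": "SNS",
        "Instrument": "NOM",
        "Title": "silicon_texture",
        "Sample": {"Runs": "12345"},  # placeholder run number
        "Merging": {
            "Grouping": {"Initial": "./nom_texture_grouping.xml"},  # file from step-1
        },
        "OutputDir": "./",  # SofQ will be created under this directory (see step-3)
    }

    with open("silicon.json", "w") as f:
        json.dump(mts_input, f, indent=4)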

  3. The reduced data will be saved into a SofQ directory, created as a sub-directory of the output directory specified by the OutputDir setting in the JSON input file used by mts; in my case, that was the working directory (where we ran the mts command).

  4. The reduced data is saved in the NeXus format, and we want to extract the data into plain text form. To do this, I created a Python script, wksp2data.py. In the script, we specify the location of the reduced NeXus file and the output directory for hosting the extracted plain text data files.
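
    The actual wksp2data.py lives with the other scripts; the sketch below only illustrates the idea, using the Mantid API to load the reduced NeXus file and dump each spectrum (one per small detector group) to a plain text file. The file names and the two-column output format are assumptions.

    import os

    from mantid.simpleapi import LoadNexusProcessed

    # Location of the reduced NeXus file and the output directory; the
    # names here are illustrative and should be adjusted to the actual
    # reduction output.
    nexus_file = "./SofQ/NOM_silicon.nxs"
    out_dir = "./texture_proc"
    os.makedirs(out_dir, exist_ok=True)

    ws = LoadNexusProcessed(Filename=nexus_file)

    # Write each spectrum (one per small detector group) to plain text.
    for i in range(ws.getNumberHistograms()):
        x = ws.readX(i)
        y = ws.readY(i)
        out_file = os.path.join(out_dir, f"NOM_Si_640e_bank{i + 1}.dat")
        with open(out_file, "w") as f:
            # zip truncates the extra bin edge if the data are histograms
            for xv, yv in zip(x, y):
                f.write(f"{xv:12.6f}{yv:16.6e}\n")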

  5. The reduced data for some of those small detector groups are just bad, for reasons we are not sure about at this moment. Basically, they are pretty noisy, so we want to remove them; otherwise, they will interfere with the spherical harmonics correction at a later stage. In the wksp2data.py script, I specified the output directory as ./texture_proc (hereafter, I will assume this is our output directory), meaning all the extracted data will be saved into texture_proc under the working directory. By 'working directory', I mean the directory where we ran the mts reduction and the wksp2data.py script. So, if things are working properly up to this stage, you should see the following files and sub-directories in the working directory,

    GSAS
    GSAS_unnorm
    Logs
    nom_texture_grouping.xml
    silicon.json
    SofQ
    texture_proc
    Topas
    Topas_unnorm
    wksp2data.py
    

    We want to run mantidpython wksp2data.py for the data extraction.

    We want to use the script remove_invalid_banks.py to remove those bad groups. The list of those bad groups is stored in this file. Copy these two files into the texture_proc directory, cd into texture_proc, and run python remove_invalid_banks.py (a minimal sketch of what the removal amounts to is given at the end of this step).

    Fortunately, this list of bad groups is consistent between runs, so we don't need to create it again. In case we do need to, the dirty way is to plot all the reduced data for those small groups (there could be several hundred of them) and visually pick out the bad ones. This is exactly what I did to arrive at the list we are using here.
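
    For reference, the removal essentially boils down to deleting the data files whose group (bank) ID is on the bad list. A minimal sketch, assuming the bad IDs sit in a plain text file with one integer per line (the file name here is hypothetical) and the naming pattern from step-7:

    import os

    stem_name = "NOM_Si_640e_"  # adjust to the actual stem name

    # Read the list of bad group IDs; "invalid_banks.txt" is a hypothetical
    # name for the bad-groups list file mentioned above.
    with open("invalid_banks.txt", "r") as f:
        bad_banks = [int(line.strip()) for line in f if line.strip()]

    # Remove the extracted data file for each bad group, if present.
    for bank in bad_banks:
        fname = f"{stem_name}bank{bank}.dat"
        if os.path.exists(fname):
            os.remove(fname)
            print(f"Removed {fname}")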

  6. cd back into the working directory. We then want to prepare another input JSON file. In my case, I called it ttheta_group_params_new_new_new.json; it can be any name we pick, and later on we will specify the file name in the script that we will be using for the spherical harmonics correction. To generate the file, we can refer to the version here.

    In the file, the 2Theta bands are divided into 3 groups, and within each group, the 2Theta bands share a similar available Q-range. For each 2Theta band, the LeftBound and RightBound entries specify the lower and upper limits of the available Q-range. For each group, PAParams specifies the Q-range to use for a single-peak fit for alignment purposes, and QChunkDef defines the Q-chunks for the 2Theta part of the correction. The input in the shared version here is for silicon, and the relevant entries should be adjusted according to the sample to run with. The principle for the chunk definition is that we want to include one (or several) full peak(s) in each chunk so we can do a reliable peak area integration.

    The LinLimit entry for each group relates to the linearly increasing behavior of the integrated intensities across the 2Theta angles that we observed for the Si standard sample. I am still playing around with this at the moment, and I found it is probably not necessary to worry about; for future purposes, I left the entry in the input in case we need it. For the moment, we can put in Infinity as the value for all chunks, just as in the example input file. The thing to keep in mind is that the number of entries in LinLimit should be the same as that in QChunkDef. Taking group-1 in my example for Si, the first entry in LinLimit is for the Q-chunk from 0 to 1.8, and so on.

    To populate proper values into the input JSON file at this stage, we can just start from the example I provided here, run the correction script once (to be covered in the next step), inspect the output corresponding to the first stage of the correction (over the azimuthal angle only), populate proper values into the input JSON file, and run the correction script again. A hedged sketch of the overall file layout is given below.
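
    The following sketch only illustrates a possible layout; the nesting, the key arrangement, and all numbers below are assumptions based on the description above, so refer to the shared example file for the authoritative layout.

    import json

    # Illustrative layout only: the nesting and all numerical values are
    # placeholders; see the shared example input file for the real layout.
    ttheta_groups = {
        "Group1": {
            "Bands": {
                "102": {"LeftBound": 0.5, "RightBound": 25.0},  # one entry per 2Theta band
            },
            "PAParams": [1.9, 2.3],                    # Q-range for the single-peak alignment fit
            "QChunkDef": [[0.0, 1.8], [1.8, 3.5]],     # each chunk should contain full peak(s)
            "LinLimit": [float("inf"), float("inf")],  # one entry per Q-chunk
        },
        # "Group2": {...}, "Group3": {...}
    }

    # float("inf") is written out as Infinity, matching the example input.
    with open("ttheta_group_params_new_new_new.json", "w") as f:
        json.dump(ttheta_groups, f, indent=4)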

  7. Now, we are ready to run the main correction script. I put the script here and gave it the name texture_proc_real_1step_not_aligned_2step_aligned.py. Lines here to here specify some directories and the locations of the input JSON files. The run_dir variable specifies where the extracted data for all those small groups live; here we want to stay consistent with the directory used in step-5, and in my case, I stayed with texture_proc. The out_dir variable specifies the output directory that will contain the corrected data; in my example, I used texture_proc_output under texture_proc. The det_map_file variable points to the detector information file mentioned in step-1, and if we are going to run everything on the analysis cluster, we can keep the value as in my example. The ttheta_group_file variable points to the input JSON file mentioned in step-6.

    Last, we need to change the stem_name parameter according to the information of our sample. We can check the data files extracted into the texture_proc directory: if my data files are named something like NOM_Si_640e_bank880.dat, my stem name here will be NOM_Si_640e_ (we could certainly figure this out automatically in the script, but I am not paying much attention to user-friendliness at this stage). An illustrative settings block is sketched right after the command below. This is everything we need to change to run on our sample, and the very next step is just to run the script,

    python texture_proc_real_1step_not_aligned_2step_aligned.py
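
    For reference, the settings block at the top of the script looks something like the following; the variable names are those described above, while the exact assignments shown are illustrative assumptions.

    # Illustrative settings block; adjust the paths to your own setup.
    run_dir = "./texture_proc"  # extracted per-group data from step-5
    out_dir = "./texture_proc/texture_proc_output"  # corrected outputs
    det_map_file = "/SNS/NOM/shared/scripts/texture/nom_texture_pointer.json"  # from step-1 (path is an assumption)
    ttheta_group_file = "./ttheta_group_params_new_new_new.json"  # from step-6
    stem_name = "NOM_Si_640e_"  # matches files like NOM_Si_640e_bank880.dat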
    

    We need another JSON file called output_group.json that controls how we want to output the finally corrected data into different groups. This file basically defines the 2Theta range that we want to use for each of the 6 output groups.
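
    As a hypothetical sketch of its shape, assuming a simple mapping from each output group to a 2Theta range in degrees (all angle values below are placeholders; the real output_group.json shared alongside the script is authoritative):

    import json

    # Hypothetical layout and angle values; only the idea (six output
    # groups, each covering a 2Theta range) comes from the text above.
    output_groups = {
        "Bank1": [0.0, 30.0],
        "Bank2": [30.0, 55.0],
        "Bank3": [55.0, 80.0],
        "Bank4": [80.0, 105.0],
        "Bank5": [105.0, 130.0],
        "Bank6": [130.0, 180.0],
    }

    with open("output_group.json", "w") as f:
        json.dump(output_groups, f, indent=4)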

    This script does not run with mantidpython, and to get it working, we have to install pystog. Here are the steps,

    conda create -n pystog
    conda activate pystog
    conda install anaconda::scipy
    conda install conda-forge::matplotlib
    conda install neutrons::pystog
    

    With the pystog environment set up properly, we should be able to run the script above without problems.

  8. Now, we cd into the directory containing the correction output, i.e., texture_proc/texture_proc_output under the working directory in my example. We should be able to see the following files,

    NOM_Si_640e_bank_1_merged_1na_2a_ave.dat
    NOM_Si_640e_bank_2_merged_1na_2a_ave.dat
    NOM_Si_640e_bank_3_merged_1na_2a_ave.dat
    NOM_Si_640e_bank_4_merged_1na_2a_ave.dat
    NOM_Si_640e_bank_5_merged_1na_2a_ave.dat
    NOM_Si_640e_bank_6_merged_1na_2a_ave.dat
    

    where NOM_Si_640e_ is our stem_name parameter in the correction script mentioned in step-7.

  9. The 6 output data files mentioned in step-8 are the final corrected data, having run through both the first stage of the correction over the azimuthal angle (by Q-points) and the second stage of the correction over the polar angle (by Q-chunks). Many other output files are generated to hold the outputs from the intermediate steps; most of them are for my own checking purposes, so we don't need to worry about them. The files that are indeed necessary to check are those named something like NOM_Si_640e_2theta_102.dat, where, again, NOM_Si_640e_ corresponds to the stem_name parameter in the correction script mentioned in step-7. Each such file contains the correction output from the first stage; we need to grab all such files, plot them, and inspect the data in order to populate the input parameters of the ttheta_group_params_new_new_new.json input JSON file mentioned in step-6 (a plotting sketch is given below).
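
    To gather and overplot all such first-stage outputs in one go, something like the following works; the glob pattern follows the naming convention above, a two-column (Q versus intensity) format is assumed, and the vertical offset is just for visual separation.

    import glob

    import matplotlib.pyplot as plt
    import numpy as np

    stem_name = "NOM_Si_640e_"  # adjust to the actual stem name

    # Collect all first-stage (azimuthal-only corrected) outputs and
    # overplot them with a vertical offset for inspection.
    fig, ax = plt.subplots()
    for i, fname in enumerate(sorted(glob.glob(f"{stem_name}2theta_*.dat"))):
        data = np.loadtxt(fname, comments="#")  # two columns assumed: Q, S(Q)
        ax.plot(data[:, 0], data[:, 1] + 0.5 * i, label=fname)

    ax.set_xlabel("Q (1/Angstrom)")
    ax.set_ylabel("S(Q) + offset")
    ax.legend(fontsize=6)
    plt.show()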

  10. For data inspection, we can use the web-based tool at https://addie.ornl.gov/plotter, and for data merging [i.e., going from the 6 groups to a single merged S(Q) dataset], we can use the tool here.


Yuanpeng Zhang @ 03/17/2025 14:18:09 EST

SNS-HFIR, ORNL