Steps for Texture Correction
----------------------------
1. First, we need to prepare the grouping file, which divides the detectors into small groups according to their polar and azimuthal angles. The MantidTotalScattering (MTS) reduction will then take the grouping file to reduce the data into those small groups. Go to /SNS/NOM/shared/scripts/texture and run the texture_group_gen.py script like this,

       mantidpython texture_group_gen.py

   which will create the grouping file in XML format, named nom_texture_grouping.xml, together with a detector information file called nom_texture_pointer.json, which contains the information about the generated detector groups, such as their polar and azimuthal angles and the group ID. We will use this JSON file at a later stage.
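The exact schema of nom_texture_pointer.json is not reproduced here, but the sketch below illustrates how such a detector pointer file could be consumed once generated. The keys ("polar", "azimuthal") and the angle values are assumptions for illustration, not necessarily the actual layout written by texture_group_gen.py.

```python
import json

# Hypothetical example of what nom_texture_pointer.json might contain --
# the real schema produced by texture_group_gen.py may differ.
pointer = {
    "1": {"polar": 15.2, "azimuthal": 45.0},
    "2": {"polar": 31.7, "azimuthal": 90.0},
}

# Round-trip through JSON, as we would when loading the real file from disk.
loaded = json.loads(json.dumps(pointer))

# Look up the angles for a given group ID.
group_id = "2"
print(loaded[group_id]["polar"], loaded[group_id]["azimuthal"])
```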
2. Copy the grouping file from the previous step to the location where we want to run the MTS reduction. In my case, I was working in /SNS/users/y8z/Temp/Monika_IPTS-27956_Texture_Correction/silicon, and here is the input file for running MTS with the grouping file provided. Then run the MTS reduction with the command,

       mts silicon.json

   assuming the JSON input file is named silicon.json. mts is a system-wide available command which points to my local version of MantidTotalScattering.
3. The reduced data will be saved into the SofQ directory under the working directory (where we ran the mts command). Depending on the OutputDir setting in the JSON input file used by mts, the reduced data will be saved into SofQ as a sub-directory of the specified output directory.
4. The reduced data is saved in NeXus format, and we want to extract the data in plain-text form. To do this, I created a Python script, wksp2data.py. In the script, we specify the location of the reduced NeXus file and the output directory for the extracted plain-text data files.
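wksp2data.py itself is not reproduced here; the sketch below only illustrates, with plain Python standing in for the Mantid workspace access, the kind of two-column plain-text output assumed for each detector group. Both the write_bank helper and the Q/S(Q) column format are assumptions for illustration.

```python
import io

def write_bank(stream, q, sq):
    """Write one detector group as two-column plain text: Q and S(Q).

    Assumed output format -- the real wksp2data.py may format differently.
    """
    for qi, si in zip(q, sq):
        stream.write(f"{qi:12.5f} {si:12.5f}\n")

# Dummy data standing in for one reduced detector group.
q = [0.0, 0.5, 1.0, 1.5, 2.0]
sq = [1.0, 1.1, 0.9, 1.0, 1.05]

buf = io.StringIO()
write_bank(buf, q, sq)
print(buf.getvalue().splitlines()[0])
```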
5. The reduced data for some of the small detector groups are simply bad, for reasons we are not yet sure about. They are quite noisy, so we want to remove them; otherwise they will interfere with the spherical harmonics correction at a later stage. In the wksp2data.py script, I specified the output directory as ./texture_proc (from here on, I will assume this is our output directory), meaning all the extracted data will be saved into texture_proc under the working directory. By 'working directory', I mean the directory where we ran the mts reduction and the wksp2data.py script. So, if things have worked properly up to this stage, you should see the following entries in the working directory,

       GSAS  GSAS_unnorm  Logs  nom_texture_grouping.xml  silicon.json
       SofQ  texture_proc  Topas  Topas_unnorm  wksp2data.py

   Run

       mantidpython wksp2data.py

   for the data extraction. We then want to use the script remove_invalid_banks.py to remove the bad groups. The list of bad groups is stored in this file. Copy these two files into the texture_proc directory, cd into texture_proc, and run

       python remove_invalid_banks.py

   Fortunately, the bad-groups list is consistent between runs, so we don't need to create it again. In case we do, the quick-and-dirty way is to plot all the reduced data for the small groups (there could be several hundred of them) and visually pick out the bad ones. This is exactly what I did to arrive at the list we are using here.
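The actual remove_invalid_banks.py and its bad-groups list are not reproduced here. The following is a minimal sketch of the idea, assuming the list is a plain Python list of group IDs and that the extracted files follow the bank-number naming pattern; both are assumptions, and the scratch directory merely stands in for texture_proc.

```python
import os
import tempfile

# Hypothetical list of bad group IDs; the real list ships alongside
# remove_invalid_banks.py and is consistent between runs.
bad_groups = [12, 345]

# Scratch directory with a few fake extracted data files.
workdir = tempfile.mkdtemp()
for gid in (12, 345, 880):
    open(os.path.join(workdir, f"NOM_Si_640e_bank{gid}.dat"), "w").close()

# Remove the files corresponding to bad groups.
for gid in bad_groups:
    path = os.path.join(workdir, f"NOM_Si_640e_bank{gid}.dat")
    if os.path.exists(path):
        os.remove(path)

remaining = sorted(os.listdir(workdir))
print(remaining)
```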
6. cd back into the working directory. We then want to prepare another input JSON file. In my case, I called it ttheta_group_params_new_new_new.json -- it can be any arbitrary name, and later on we will specify the file name in the script used for the spherical harmonics correction. To generate the file, we can refer to the version here. In the file, the 2Theta bands are divided into 3 groups, and within each group the 2Theta bands share a similar available Q-range. For each 2Theta band, the entries LeftBound and RightBound specify the lower and upper limits of the available Q-range. For each group, PAParams specifies the Q-range to use for a single-peak fit for alignment purposes. Each group also has a QChunkDef entry that defines the Q-chunks for the 2Theta part of the correction. The shared version here is for silicon, and the relevant entries should be adjusted according to the sample being run. The principle for the chunk definition is to include one (or several) full peak(s) in each chunk so that we can do a reliable peak-area integration. The LinLimit entry for each group relates to the linearly increasing behavior of the integrated intensities across the 2Theta angles that we observed for the Si standard sample. I am still experimenting with this, and I found it is probably not necessary to worry about; but for future use, I left the entry in the input in case we need it. For the moment, we can put in Infinity as the value for all chunks, just as in the example input file. Keep in mind that the number of entries in LinLimit should be the same as that in QChunkDef. Taking group-1 in my Si example, the first entry in LinLimit is for the Q-chunk from 0 to 1.8, and so on. To populate proper values in the input JSON file at this stage, we can just use the example I provided, run the correction script once (covered in the next step), inspect the output corresponding to the first stage of the correction (over the azimuthal angle only), populate proper values into the input JSON file, and run the correction script again.
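To make the described layout concrete, here is a hypothetical fragment using the key names from above (LeftBound, RightBound, PAParams, QChunkDef, LinLimit). The nesting and all numeric values other than the 0-to-1.8 first chunk and the Infinity placeholders are assumptions; refer to the shared example file for the authoritative layout.

```json
{
    "Group1": {
        "PAParams": [1.5, 2.5],
        "QChunkDef": [[0.0, 1.8], [1.8, 3.2]],
        "LinLimit": ["Infinity", "Infinity"],
        "Bands": [
            {"LeftBound": 0.5, "RightBound": 25.0}
        ]
    }
}
```

Note that LinLimit carries one entry per Q-chunk in QChunkDef, as required.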
7. Now we are ready to run the main correction script. I put the script here and named it texture_proc_real_1step_not_aligned_2step_aligned.py. Lines here to here specify some directories and the locations of the input JSON files. The run_dir variable specifies where the extracted data for all the small groups live -- this should be consistent with the directory used in step-5, and in my case I stayed with texture_proc. The out_dir variable specifies the output directory for the corrected data; in my example, I used texture_proc_output under texture_proc. The det_map_file variable points to the detector information file mentioned in step-1; if we are running everything on analysis, we can keep the value from my example. The ttheta_group_file variable points to the input JSON file mentioned in step-6. Last, we need to change the stem_name parameter according to our sample. We can check the data files extracted into the texture_proc directory: if my data files are named something like NOM_Si_640e_bank880.dat, my stem name will be NOM_Si_640e_ (we could certainly figure this out automatically in the script, but I am not worrying much about user-friendliness at this stage). That is everything we need to change to run on our sample, and then we simply run the script,

       python texture_proc_real_1step_not_aligned_2step_aligned.py
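Since the stem name could in principle be derived automatically, here is one way it might be done; the regex and the helper name are my own illustration based on the NOM_Si_640e_bank880.dat naming, not part of the correction script.

```python
import re

def guess_stem_name(filename):
    """Strip the 'bank<number>.dat' tail to recover the stem, e.g.
    'NOM_Si_640e_bank880.dat' -> 'NOM_Si_640e_'."""
    match = re.match(r"(.+_)bank\d+\.dat$", filename)
    return match.group(1) if match else None

print(guess_stem_name("NOM_Si_640e_bank880.dat"))
```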
   We also need another JSON file called output_group.json that controls how the finally corrected data are output into different groups. This file defines the 2Theta range to use for each of the 6 output groups. The correction script does not use mantidpython, and to get it working we have to install pystog. Here are the steps,

       conda create -n pystog
       conda activate pystog
       conda install anaconda::scipy
       conda install conda-forge::matplotlib
       conda install neutrons::pystog

   With the pystog environment set up properly, we should be able to run the script above without problems.
8. Now, cd into the directory containing the correction output, i.e., texture_proc/texture_proc_output under the working directory in my example. We should see the following files,

       NOM_Si_640e_bank_1_merged_1na_2a_ave.dat
       NOM_Si_640e_bank_2_merged_1na_2a_ave.dat
       NOM_Si_640e_bank_3_merged_1na_2a_ave.dat
       NOM_Si_640e_bank_4_merged_1na_2a_ave.dat
       NOM_Si_640e_bank_5_merged_1na_2a_ave.dat
       NOM_Si_640e_bank_6_merged_1na_2a_ave.dat

   where NOM_Si_640e_ is our stem_name parameter in the correction script mentioned in step-7.
9. The 6 output data files mentioned in step-8 are the final corrected data, having gone through both the first stage of the correction over the azimuthal angle (by Q-points) and the second stage over the polar angle (by Q-chunks). Many other output files are generated to hold the results of intermediate steps; most of them are for my own checking purposes, so we don't need to worry about them. The files that are actually necessary to check are those named something like NOM_Si_640e_2theta_102.dat, where, again, NOM_Si_640e_ corresponds to the stem_name parameter in the correction script mentioned in step-7. Each such file contains the correction output from the first stage; we need to grab all such files, plot them, and inspect the data in order to populate the input parameters for the ttheta_group_params_new_new_new.json input JSON file mentioned in step-6.
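To gather the first-stage output files for plotting, a glob over the <stem_name>2theta_*.dat pattern is enough. The sketch below uses a scratch directory with empty placeholder files standing in for texture_proc/texture_proc_output; only the file-naming pattern comes from the text above.

```python
import glob
import os
import tempfile

stem_name = "NOM_Si_640e_"

# Scratch directory with fake first-stage output files, standing in
# for texture_proc/texture_proc_output.
out_dir = tempfile.mkdtemp()
for angle in (102, 120, 150):
    open(os.path.join(out_dir, f"{stem_name}2theta_{angle}.dat"), "w").close()

# Collect all first-stage files for the chosen stem, ready for plotting.
files = sorted(glob.glob(os.path.join(out_dir, f"{stem_name}2theta_*.dat")))
print([os.path.basename(f) for f in files])
```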
10. For data inspection, we can use the web-based tool at https://addie.ornl.gov/plotter, and for data merging [i.e., from the 6 groups to a single merged S(Q) dataset], we can use the tool here.
Yuanpeng Zhang @ 03/17/2025 14:18:09 EST
SNS-HFIR, ORNL