Steps for Texture Correction
1. First, we need to prepare the grouping file, which will divide the detectors into small groups according to their polar and azimuthal angles. The MantidTotalScattering (MTS) reduction will then take the grouping file and reduce the data into those small groups. Go to `/SNS/NOM/shared/scripts/texture` and run the `texture_group_gen.py` script like this, `mantidpython texture_group_gen.py`, which will create the grouping file in XML format, named `nom_texture_grouping.xml`, together with a detector information file called `nom_texture_pointer.json`. The JSON file contains the information about the generated small groups of detectors, such as the corresponding polar and azimuthal angles and the group ID. We will be using this JSON file at a later stage.
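To illustrate how the detector information file can be used later, here is a minimal sketch of looking up a group's angles by group ID. The key names (`polar`, `azimuthal`) and values are assumptions for illustration only; the actual structure of `nom_texture_pointer.json` should be checked against the generated file.

```python
import json

# Hypothetical excerpt mimicking the structure of nom_texture_pointer.json.
# The real key names in the generated file may differ -- this only
# illustrates the group-ID -> (polar, azimuthal) lookup we rely on later.
pointer = {
    "1": {"polar": 31.0, "azimuthal": 10.0},
    "2": {"polar": 31.0, "azimuthal": 20.0},
}

# Round-trip through JSON, as we would when loading the real file with
# json.load(open("nom_texture_pointer.json")).
data = json.loads(json.dumps(pointer))

for gid, angles in sorted(data.items()):
    print(f"group {gid}: polar={angles['polar']}, azimuthal={angles['azimuthal']}")
```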
2. Copy the grouping file from the previous step to the location where we want to run the MTS reduction. In my case, I was working in `/SNS/users/y8z/Temp/Monika_IPTS-27956_Texture_Correction/silicon`, and here is the input file for running MTS with the grouping file provided. Then we need to run the MTS reduction with the command `mts silicon.json`, assuming the JSON input file is named `silicon.json`. `mts` is a system-wide available command which points to my local version of `MantidTotalScattering`.
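For orientation, a minimal MTS input JSON might look something like the fragment below. This is only a sketch under assumptions: the exact schema, the run number, and in particular the key used to pass the grouping file all depend on the MantidTotalScattering version, so treat every key and value here as hypothetical and refer to the actual `silicon.json` shared above.

```json
{
  "Facility": "SNS",
  "Instrument": "NOM",
  "Title": "silicon_texture",
  "Sample": {
    "Runs": "12345"
  },
  "OutputDir": ".",
  "AlignAndFocusArgs": {
    "GroupFilename": "nom_texture_grouping.xml"
  }
}
```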
3. The reduced data will be saved into a `SofQ` sub-directory of the output directory given by the `OutputDir` setting in the JSON input file used by `mts`. In my case, that was the working directory (where we ran the `mts` command), so the reduced data ended up in `SofQ` under the working directory.
4. The reduced data is saved in the NeXus format, and we want to extract the data in plain text form. To do this, I created a Python script, `wksp2data.py`. In the script, we specify the location of the reduced NeXus file and the output directory for hosting the extracted plain text data files. I specified the output directory as `./texture_proc` (in what follows, I will assume this as our output directory), meaning all the extracted data will be saved into `texture_proc` under the working directory. By 'working directory', I mean the directory where we ran the `mts` reduction and the `wksp2data.py` script. We want to run `mantidpython wksp2data.py` for the data extraction. If things are working properly up to this stage, you should see the following sub-directories and files in the working directory,

   ```
   GSAS  GSAS_unnorm  Logs  nom_texture_grouping.xml  silicon.json  SofQ  texture_proc  Topas  Topas_unnorm  wksp2data.py
   ```

5. The reduced data for some of those small detector groups are just bad, for reasons we are not sure about at this moment. Basically, they are pretty noisy, so we want to remove them; otherwise they will mess up the spherical harmonics corrections at a later stage. We want to use the script `remove_invalid_banks.py` to remove those bad groups. The list of the bad groups is stored in this file. Copy these two files into the `texture_proc` directory, `cd` into `texture_proc`, and run `python remove_invalid_banks.py`. Fortunately, this bad-groups list is consistent between runs, so we don't need to create the list again. In case we do need to, the dirty way is to plot all the reduced data for those small groups (there could be several hundred of them) and visually pick out the bad groups. This is exactly what I did to arrive at the list we are using here.
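The removal logic can be sketched as below. This is a hedged illustration, assuming the bad-group list maps to data files named `<stem>bank<ID>.dat`; the actual `remove_invalid_banks.py` may read its list and match files differently.

```python
import os
import tempfile

# Sketch of the bad-group removal logic: delete the extracted data file for
# each group ID on the bad list. File-naming convention is an assumption.
def remove_invalid_banks(data_dir, bad_ids, stem="NOM_Si_640e_"):
    removed = []
    for gid in bad_ids:
        path = os.path.join(data_dir, f"{stem}bank{gid}.dat")
        if os.path.exists(path):
            os.remove(path)
            removed.append(path)
    return removed

# Demo on throwaway files in a temporary directory (IDs are hypothetical).
tmp = tempfile.mkdtemp()
for gid in (878, 879, 880):
    open(os.path.join(tmp, f"NOM_Si_640e_bank{gid}.dat"), "w").close()

removed = remove_invalid_banks(tmp, [879])
print(len(removed))  # the single bad bank was removed
```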
6. `cd` back into the working directory. We then want to prepare another input JSON file. In my case, I called it `ttheta_group_params_new_new_new.json` -- it can be any arbitrary name that we pick, and later on we will specify the file name in the script that we will be using for the spherical harmonics correction. To generate the file, we can refer to the version here. In the file, the 2Theta bands are divided into 3 groups, and within each group the 2Theta bands share a similar available Q-range. For each 2Theta band, the entries `LeftBound` and `RightBound` specify the lower and upper limits of the available Q-range. For each group, `PAParams` specifies the Q-range to use for a single-peak fitting for alignment purposes. Each group also has a `QChunkDef` entry that defines the Q-chunks for the 2Theta part of the correction. The input in the shared version here is for silicon, and the relevant entries should be adjusted according to the sample being run. The principle for the chunk definition is that we want to include one (or several) full peak(s) in each chunk so we can do a reliable peak area integration. The `LinLimit` entry for each group relates to the linearly increasing behavior of the integrated intensities across the 2Theta angles that we observed for the Si standard sample. I am still playing around with this at the moment, and I found it is probably not necessary to worry about it; but for future purposes, I left the entry in the input in case we need it. For the moment, we can put in `Infinity` as the value for all chunks, just as in the example input file. The thing to keep in mind is that the number of entries in `LinLimit` should be the same as that in `QChunkDef`. Taking group-1 in my example for Si, the first entry in `LinLimit` is for the Q-chunk from `0` to `1.8`, and so on.

   To populate proper values into the input JSON file at this stage, we can just use the example I provided here, run the correction script once (covered in the next step), inspect the output corresponding to the first stage of the correction (over the azimuthal angle only), populate proper values into the input JSON file, and run the correction script again.
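To make the structure concrete, here is a hypothetical fragment in the spirit of `ttheta_group_params_new_new_new.json`. All key names, the band label, and the numbers are illustrative assumptions (only the first group-1 Q-chunk running from 0 to 1.8 is taken from the description above, and the representation of `QChunkDef` as `[start, end]` pairs is my guess); the shared example file is the authoritative reference.

```json
{
  "Group1": {
    "PAParams": [1.8, 2.2],
    "QChunkDef": [[0.0, 1.8], [1.8, 2.6], [2.6, 3.5]],
    "LinLimit": ["Infinity", "Infinity", "Infinity"],
    "Bands": {
      "102": {"LeftBound": 0.6, "RightBound": 25.0}
    }
  }
}
```

Note that `LinLimit` has exactly as many entries as `QChunkDef`, as required.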
7. Now we are ready to run the main correction script. I put the script here and gave it the name `texture_proc_real_1step_not_aligned_2step_aligned.py`. The lines from here to here specify some directories and the locations of the input JSON files. `run_dir` specifies where the extracted data for all those small groups live -- here we want to stay consistent with the directory used in step-5, and in my case that was `texture_proc`. `out_dir` specifies the output directory for the corrected data, and in my example I was using `texture_proc_output` under `texture_proc`. `det_map_file` points to the detector information file mentioned in step-1; if we are going to run everything on analysis, we can keep the value as in my example. The `ttheta_group_file` variable points to the input JSON file mentioned in step-6. Last, we need to change the `stem_name` parameter according to our sample. We can check the data files extracted into the `texture_proc` directory: if my data files are named something like `NOM_Si_640e_bank880.dat`, my stem name will be `NOM_Si_640e_` (we could for sure figure this out automatically in the script, but I am not considering user-friendliness too much at this stage). This is everything we need to change to run on our sample, and the very next step is just to run the script, `python texture_proc_real_1step_not_aligned_2step_aligned.py`.

   We need another JSON file called `output_group.json` that controls how we want to group the finally corrected data for output. This file basically defines the 2Theta range that we want to use for each of the 6 output groups.

   The correction script is not run with `mantidpython`, and to get it working we have to install `pystog`. Here are the steps,

   ```
   conda create -n pystog
   conda activate pystog
   conda install anaconda::scipy
   conda install conda-forge::matplotlib
   conda install neutrons::pystog
   ```

   With the `pystog` environment set up properly, we should be able to run the script above without problems.
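For reference, `output_group.json` could look something like the fragment below: six entries, each mapping an output group to a `[min, max]` 2Theta range in degrees. The group keys and the ranges shown are placeholders I made up to show the shape of the file; use the actual shared file for real values.

```json
{
  "1": [0.0, 30.0],
  "2": [30.0, 60.0],
  "3": [60.0, 90.0],
  "4": [90.0, 120.0],
  "5": [120.0, 150.0],
  "6": [150.0, 180.0]
}
```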
8. Now, we `cd` into the directory containing the correction output, i.e., `texture_proc/texture_proc_output` under the working directory in my example. We should be able to see the following files,

   ```
   NOM_Si_640e_bank_1_merged_1na_2a_ave.dat
   NOM_Si_640e_bank_2_merged_1na_2a_ave.dat
   NOM_Si_640e_bank_3_merged_1na_2a_ave.dat
   NOM_Si_640e_bank_4_merged_1na_2a_ave.dat
   NOM_Si_640e_bank_5_merged_1na_2a_ave.dat
   NOM_Si_640e_bank_6_merged_1na_2a_ave.dat
   ```

   where `NOM_Si_640e_` is our `stem_name` parameter in the correction script mentioned in step-7.
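As a quick sanity check that the run completed, the merged output files can be counted by pattern. The sketch below demonstrates this on throwaway files in a temporary directory; in practice, point the glob at `texture_proc/texture_proc_output` and your own stem name.

```python
import glob
import os
import tempfile

# Create dummy files following the naming pattern listed above, purely to
# demonstrate the glob; point out_dir at the real output directory instead.
out_dir = tempfile.mkdtemp()
stem = "NOM_Si_640e_"
for bank in range(1, 7):
    name = f"{stem}bank_{bank}_merged_1na_2a_ave.dat"
    with open(os.path.join(out_dir, name), "w") as f:
        f.write("0.5 1.0\n")

merged = sorted(glob.glob(os.path.join(out_dir, f"{stem}bank_*_merged_1na_2a_ave.dat")))
print(len(merged))  # expect one file per output group, i.e., 6
```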
9. The 6 output data files mentioned in step-8 are the final corrected data, having run through both the first stage of the correction over the azimuthal angle (by Q-points) and the second stage of the correction over the polar angle (by Q-chunks). Many other output files are generated to hold the results of the intermediate steps. Most of them are for my checking purposes, so we don't need to worry about them. The files that are indeed necessary to check are those named something like `NOM_Si_640e_2theta_102.dat`, where, again, `NOM_Si_640e_` corresponds to the `stem_name` parameter in the correction script mentioned in step-7. Each such file contains the correction output from the first stage; we need to grab all such files, plot them, and inspect the data in order to populate the input parameters of the `ttheta_group_params_new_new_new.json` input JSON file mentioned in step-6.
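Gathering those first-stage files for plotting can be sketched as below: collect the files matching the `<stem>2theta_<angle>.dat` pattern and sort them by the 2Theta value embedded in the name. The demo runs on throwaway files; the integer-angle assumption in the regex is mine and may need loosening for non-integer angles.

```python
import os
import re
import tempfile

# Collect first-stage output files (one per 2Theta band) and sort them by
# the 2Theta angle parsed out of the file name.
def collect_2theta_files(directory, stem="NOM_Si_640e_"):
    pattern = re.compile(re.escape(stem) + r"2theta_(\d+)\.dat$")
    found = []
    for name in os.listdir(directory):
        m = pattern.match(name)
        if m:
            found.append((int(m.group(1)), os.path.join(directory, name)))
    return [path for _, path in sorted(found)]

# Demo with throwaway files (angles are hypothetical).
tmp = tempfile.mkdtemp()
for angle in (102, 45, 150):
    open(os.path.join(tmp, f"NOM_Si_640e_2theta_{angle}.dat"), "w").close()

ordered = collect_2theta_files(tmp)
print([os.path.basename(p) for p in ordered])
```

Sorting numerically (rather than lexicographically) keeps the bands in angle order when plotting, so trends across 2Theta are easy to spot.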
10. For data inspection, we can use the web-based tool at https://addie.ornl.gov/plotter, and for data merging [i.e., from 6 groups to 1 merged S(Q) dataset], we can use the tool here.
Yuanpeng Zhang @ 03/17/2025 14:18:09 EST
SNS-HFIR, ORNL