Steps for Texture Correction
===

1. First, we need to prepare the grouping file, which divides the detectors into small groups according to their polar and azimuthal angles. The `MantidTotalScattering` (MTS) reduction will then take the grouping file and reduce the data into those small groups.

   - Go to `/SNS/NOM/shared/scripts/texture` and run the `texture_group_gen.py` script like this,

     ```bash
     mantidpython texture_group_gen.py
     ```

   which will create the grouping file in XML format, named `nom_texture_grouping.xml`, together with a detector information file called `nom_texture_pointer.json`, which contains the information about the generated small groups of detectors, such as the corresponding polar and azimuthal angles and the group IDs. We will be using this JSON file at a later stage -- a quick way to inspect it is sketched below.

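   Here is a minimal Python sketch for peeking into `nom_texture_pointer.json`. The exact schema of the file is an assumption on my part (a top-level mapping keyed by group ID), so adjust the access pattern to whatever the real file contains,

   ```python
   import json

   # Load the detector information file produced by texture_group_gen.py.
   with open("nom_texture_pointer.json", "r") as f:
       groups = json.load(f)

   # Assuming a top-level mapping keyed by group ID -- print a few entries
   # to see the actual fields (polar/azimuthal angles, etc.).
   print(f"Number of detector groups: {len(groups)}")
   for group_id, info in list(groups.items())[:5]:
       print(group_id, info)
   ```
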
2. Copy the grouping file from the previous step to the location where we want to run the `MTS` reduction. In my case, I was working in `/SNS/users/y8z/Temp/Monika_IPTS-27956_Texture_Correction/silicon`, and [here](https://pf.iris-home.net/yuanpeng/cc8c9e40970e450fa7558839f934eda1) is the input file for running `MTS` with the grouping file provided. Then we need to run the `MTS` reduction with the command,

   ```bash
   mts silicon.json
   ```

   assuming the JSON input file is named `silicon.json`.

   > `mts` is a system-wide available command which points to my local version of `MantidTotalScattering`.

3. The reduced data will be saved into the `SofQ` directory under our working directory (where we ran the `mts` command). More precisely, `SofQ` is created as a sub-directory of whatever output directory is specified by the `OutputDir` setting in the JSON input file used by `mts`.

4. The reduced data is saved in the NeXus format and we want to extract the data into plain-text form. To do this, I created a Python script [`wksp2data.py`](https://pf.iris-home.net/yuanpeng/626459d2140f4ba6a6a61f1a7249604a). In the script, we specify the location of the reduced NeXus file and the output directory for hosting the extracted plain-text data files. A rough sketch of what such an extraction can look like is given below.

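   This is not the actual [`wksp2data.py`](https://pf.iris-home.net/yuanpeng/626459d2140f4ba6a6a61f1a7249604a) script -- just a minimal sketch of the general extraction pattern using the Mantid API, with a hypothetical input file name; run it with `mantidpython`,

   ```python
   import os

   import numpy as np
   from mantid.simpleapi import Load

   nexus_file = "SofQ/NOM_Si_640e.nxs"  # hypothetical name -- point to the actual reduced file
   out_dir = "./texture_proc"
   os.makedirs(out_dir, exist_ok=True)

   # Each spectrum in the reduced workspace corresponds to one detector group.
   ws = Load(Filename=nexus_file)
   for i in range(ws.getNumberHistograms()):
       x, y, e = ws.readX(i), ws.readY(i), ws.readE(i)
       if len(x) == len(y) + 1:  # histogram data -> convert bin edges to centers
           x = 0.5 * (x[:-1] + x[1:])
       out_name = os.path.join(out_dir, f"NOM_Si_640e_bank{i + 1}.dat")
       np.savetxt(out_name, np.column_stack([x, y, e]))
   ```
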
5. The reduced data for some of those small detector groups are just bad, for reasons we are not sure about at the moment. Basically, they are pretty noisy, so we want to remove them; otherwise they will mess up the spherical harmonics correction at a later stage. In the [`wksp2data.py`](https://pf.iris-home.net/yuanpeng/626459d2140f4ba6a6a61f1a7249604a) script, I was specifying the output directory as `./texture_proc` (from here on, I will assume this is our output directory), meaning all the extracted data will be saved into `texture_proc` under the working directory. By 'working directory', I mean the directory where we ran the `mts` reduction and the `wksp2data.py` script. So, if things are working properly up to this stage, you should see the following sub-directories and files in the `working directory`,

   ```
   GSAS
   GSAS_unnorm
   Logs
   nom_texture_grouping.xml
   silicon.json
   SofQ
   texture_proc
   Topas
   Topas_unnorm
   wksp2data.py
   ```

   > We want to run `mantidpython wksp2data.py` for the data extraction.

   We want to use the script [`remove_invalid_banks.py`](https://pf.iris-home.net/yuanpeng/2bc13a89757a40ff97bf0cebd75d8d5f) to remove those `bad` groups. The list of those `bad` groups is stored in [this](https://pf.iris-home.net/yuanpeng/4dbf30801c5f40bb8326ee784d17f60a) file. Copy these two files into the `texture_proc` directory, `cd` into `texture_proc` and run `python remove_invalid_banks.py`. A sketch of what such a removal step can look like follows the note below.

   > Fortunately, this `bad` groups list is consistent between runs, so we don't need to create the list again. In case we do need to, the dirty way is to plot all the reduced data for those small groups (there could be several hundred of them) and visually pick out the bad ones. This is exactly what I did to arrive at the list we are using here.

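   The following is not the actual [`remove_invalid_banks.py`](https://pf.iris-home.net/yuanpeng/2bc13a89757a40ff97bf0cebd75d8d5f) script -- just a hedged sketch of the idea, assuming the bad-groups file simply lists one bank number per line,

   ```python
   import os

   # Hypothetical file name -- use the actual bad-groups list linked above.
   with open("invalid_banks.txt", "r") as f:
       bad_banks = {line.strip() for line in f if line.strip()}

   # Remove the extracted data file for each bad bank.
   for fname in os.listdir("."):
       for bank in bad_banks:
           if fname.endswith(f"_bank{bank}.dat"):
               os.remove(fname)
               print(f"Removed {fname}")
   ```
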
6. `cd` back into the `working directory`. We then want to prepare another input JSON file. In my case, I was calling it `ttheta_group_params_new_new_new.json` -- it can be any name we pick; later on, we will specify the file name in the script we use for the spherical harmonics correction. To generate the file, we can refer to the version [here](https://pf.iris-home.net/yuanpeng/570720edcacc408b87b240779a3e19cf). In the file, the 2Theta bands are divided into 3 groups, and within each group the 2Theta bands share a similar available Q-range. For each 2Theta band, the entries `LeftBound` and `RightBound` specify the lower and upper limits of the available Q-range. For each group, `PAParams` specifies the Q-range to use for a single-peak fit for alignment purposes. We also have the `QChunkDef` entry for each group to define the Q-chunks for the 2Theta part of the correction. The input in the shared version [here](https://pf.iris-home.net/yuanpeng/570720edcacc408b87b240779a3e19cf) is for silicon, and the relevant entries should be adjusted according to the sample being run. The principle for the chunk definition is that we want to include one (or several) full peak(s) in each chunk so we can do a reliable peak-area integration.

   The `LinLimit` entry for each group has to do with the linearly increasing behavior of the integrated intensities across the 2Theta angles that we observed for the Si standard sample. I am still playing around with this and found it is probably not something we need to worry about, but I left the entry in the input in case we need it in the future. For the moment, we can put in `Infinity` as the value for all chunks, just as in the example input file. The thing to keep in mind is that the number of entries in `LinLimit` should be the same as the number of entries in `QChunkDef` -- a quick consistency check is sketched below. Taking group-1 in my example for Si, the first entry in `LinLimit` is for the Q-chunk from `0` to `1.8`, and so on.

   To populate proper values for the input JSON file at this stage, we can just use the example I provided [here](https://pf.iris-home.net/yuanpeng/570720edcacc408b87b240779a3e19cf), run the correction script once (covered in the next step), inspect the output corresponding to the first stage of the correction (over the azimuthal angle only), populate proper values into the input JSON file, and run the correction script again.

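   The `LinLimit`/`QChunkDef` consistency mentioned above can be checked with a few lines of Python. This is only a sketch under an assumed file layout (a mapping of group name to group parameters); check the example file linked above for the real structure,

   ```python
   import json

   with open("ttheta_group_params_new_new_new.json", "r") as f:
       params = json.load(f)

   # Assumed layout: a mapping of group name -> group parameters.
   for name, group in params.items():
       chunks = group["QChunkDef"]
       limits = group["LinLimit"]
       # Each Q-chunk needs exactly one LinLimit entry.
       assert len(limits) == len(chunks), (
           f"Group {name}: {len(limits)} LinLimit entries "
           f"vs {len(chunks)} Q-chunk definitions"
       )
   print("All groups passed the LinLimit/QChunkDef consistency check.")
   ```
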
7. Now, we are ready to run the main correction script. I put the script [here](https://pf.iris-home.net/yuanpeng/6bc4050487974888962fc4c7f0c14989) and gave it the name `texture_proc_real_1step_not_aligned_2step_aligned.py`. Lines [here](https://pf.iris-home.net/yuanpeng/6bc4050487974888962fc4c7f0c14989#file-texture-proc-real-1step-not-aligned-2step-aligned-py-35) through [here](https://pf.iris-home.net/yuanpeng/6bc4050487974888962fc4c7f0c14989#file-texture-proc-real-1step-not-aligned-2step-aligned-py-50) specify some directories and the locations of the input JSON files. The `run_dir` variable specifies where the extracted data for all those small groups live -- here we want to stay consistent with the directory used in `step-5`, and in my case, I was staying with `texture_proc`. The `out_dir` variable specifies the output directory for the corrected data; in my example, I was using `texture_proc_output` under `texture_proc`. The `det_map_file` variable points to the detector information file mentioned in `step-1`, and if we are going to run everything on the analysis cluster, we can stay with the value in my example. The `ttheta_group_file` variable points to the input JSON file mentioned in `step-6`. Last, we need to change the `stem_name` parameter according to our sample. We can check the data files extracted into the `texture_proc` directory: if my data files have names like `NOM_Si_640e_bank880.dat`, my stem name here will be `NOM_Si_640e_` (we could certainly figure this out automatically in the script -- see the sketch at the end of this step -- but I am not focusing too much on user-friendliness at this stage). This is everything we need to change to run on our sample, and the very next step is just to run the script,

   ```bash
   python texture_proc_real_1step_not_aligned_2step_aligned.py
   ```

   > We need another JSON file called [`output_group.json`](https://pf.iris-home.net/yuanpeng/f94ab4bdc64a473289e335b9e0df5264) that controls how we want to group the finally corrected data on output. This file basically defines the 2Theta range to use for each of the 6 output groups.

   This script does not use `mantidpython`, and to get it working, we have to install `pystog`. Here are the steps,

   ```bash
   conda create -n pystog
   conda activate pystog
   conda install anaconda::scipy
   conda install conda-forge::matplotlib
   conda install neutrons::pystog
   ```

   With the `pystog` environment set up properly, we should be able to run the script above without problems.

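   As mentioned above, the stem name could be derived automatically. A hedged sketch of how that might be done, assuming the extracted files follow the `<stem_name>bankNNN.dat` pattern,

   ```python
   import glob
   import os
   import re

   # Grab any extracted data file and strip off the trailing "bankNNN.dat"
   # to recover the stem name, e.g., "NOM_Si_640e_".
   candidates = glob.glob("texture_proc/*bank*.dat")
   match = re.match(r"(.*)bank\d+\.dat$", os.path.basename(candidates[0]))
   stem_name = match.group(1)
   print(stem_name)
   ```
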
8. Now, we `cd` into the directory containing the correction output, i.e., `texture_proc/texture_proc_output` under the `working directory` in my example. We should be able to see the following files,

   ```
   NOM_Si_640e_bank_1_merged_1na_2a_ave.dat
   NOM_Si_640e_bank_2_merged_1na_2a_ave.dat
   NOM_Si_640e_bank_3_merged_1na_2a_ave.dat
   NOM_Si_640e_bank_4_merged_1na_2a_ave.dat
   NOM_Si_640e_bank_5_merged_1na_2a_ave.dat
   NOM_Si_640e_bank_6_merged_1na_2a_ave.dat
   ```

   where `NOM_Si_640e_` is our `stem_name` parameter in the correction script mentioned in `step-7`.

9. The 6 output data files mentioned in `step-8` are the final corrected data, having run through both the first stage of the correction over the azimuthal angle (by Q-points) and the second stage over the polar angle (by Q-chunks). Many other output files are generated to hold the results of the intermediate steps. Most of them are for my own checking purposes, so we don't need to worry about them. The files that are indeed necessary to check are those with names like `NOM_Si_640e_2theta_102.dat`, where, again, `NOM_Si_640e_` corresponds to the `stem_name` parameter in the correction script mentioned in `step-7`. Each such file contains the correction output from the first stage, and we need to grab all such files, plot them, and inspect the data in order to populate the input parameters for the `ttheta_group_params_new_new_new.json` input JSON file mentioned in `step-6` -- a plotting sketch for this follows below.

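   A hedged sketch of that plot-and-inspect pass, assuming the first-stage output files are plain-text columns of Q and S(Q),

   ```python
   import glob

   import matplotlib.pyplot as plt
   import numpy as np

   # Collect all first-stage outputs, e.g., NOM_Si_640e_2theta_102.dat.
   files = sorted(glob.glob("texture_proc/texture_proc_output/*_2theta_*.dat"))
   for fname in files:
       data = np.loadtxt(fname)
       plt.plot(data[:, 0], data[:, 1], lw=0.5)
   plt.xlabel("Q")
   plt.ylabel("S(Q)")
   plt.show()
   ```
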
10. For data inspection, we can use the web-based tool at [https://addie.ornl.gov/plotter](https://addie.ornl.gov/plotter), and for data merging [i.e., from the 6 groups to a single merged S(Q) dataset], we can use the tool [here](https://yr.iris-home.net/datamerge).

---

Yuanpeng Zhang @ 03/17/2025 14:18:09 EST

SNS-HFIR, ORNL

---