
Commit 06a2715

Merge pull request #449 from OpenCOMPES/spelling_fixes
fix spelling in all files using Code Spell Checker
2 parents 7c92427 + 57d6cd7 commit 06a2715

40 files changed (+191 / -192 lines)

docs/misc/contributing.rst

Lines changed: 1 addition & 1 deletion
@@ -73,7 +73,7 @@ Development Workflow

 3. **Write Tests:** If your contribution introduces new features or fixes a bug, add tests to cover your changes.

-4. **Run Tests:** To ensure no funtionality is broken, run the tests:
+4. **Run Tests:** To ensure no functionality is broken, run the tests:

    .. code-block:: bash

docs/misc/maintain.rst

Lines changed: 1 addition & 1 deletion
@@ -140,7 +140,7 @@ To create a release, follow these steps:
 c. **If you don't see update on PyPI:**

    - Visit the GitHub Actions page and monitor the Release workflow (https://github.com/OpenCOMPES/sed/actions/workflows/release.yml).
-   - Check if errors occured.
+   - Check if errors occurred.


 **Understanding the Release Workflow**

docs/sed/config.rst

Lines changed: 4 additions & 4 deletions
@@ -1,19 +1,19 @@
 Config
 ===================================================
-The config module contains a mechanis to collect configuration parameters from various sources and configuration files, and to combine them in a hierachical manner into a single, consistent configuration dictionary.
+The config module contains a mechanism to collect configuration parameters from various sources and configuration files, and to combine them in a hierarchical manner into a single, consistent configuration dictionary.
 It will load an (optional) provided config file, or alternatively use a passed python dictionary as initial config dictionary, and subsequently look for the following additional config files to load:

 * ``folder_config``: A config file of name :file:`sed_config.yaml` in the current working directory. This is mostly intended to pass calibration parameters of the workflow between different notebook instances.
-* ``user_config``: A config file provided by the user, stored as :file:`.sed/config.yaml` in the current user's home directly. This is intended to give a user the option for individual configuration modifications of system settings.
-* ``system_config``: A config file provided by the system administrator, stored as :file:`/etc/sed/config.yaml` on Linux-based systems, and :file:`%ALLUSERPROFILE%/sed/config.yaml` on Windows. This should provide all necessary default parameters for using the sed processor with a given setup. For an example for an mpes setup, see :ref:`example_config`
+* ``user_config``: A config file provided by the user, stored as :file:`.config/sed/config.yaml` in the current user's home directly. This is intended to give a user the option for individual configuration modifications of system settings.
+* ``system_config``: A config file provided by the system administrator, stored as :file:`/etc/sed/config.yaml` on Linux-based systems, and :file:`%ALLUSERSPROFILE%/sed/config.yaml` on Windows. This should provide all necessary default parameters for using the sed processor with a given setup. For an example for an mpes setup, see :ref:`example_config`
 * ``default_config``: The default configuration shipped with the package. Typically, all parameters here should be overwritten by any of the other configuration files.

 The config mechanism returns the combined dictionary, and reports the loaded configuration files. In order to disable or overwrite any of the configuration files, they can be also given as optional parameters (path to a file, or python dictionary).


 API
 ***************************************************
 .. automodule:: sed.core.config
     :members:
     :undoc-members:
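For orientation, the config hierarchy described in this file can be combined in code roughly as follows. This is a minimal sketch only: it assumes sed.core.config exposes a parse_config function whose keyword arguments mirror the file roles above, and the file name used is hypothetical.

from sed.core.config import parse_config  # assumed entry point of sed.core.config

# Combine the configuration hierarchy described above. Any level can be
# overridden by passing a path to a file or a python dictionary directly.
config = parse_config(
    config={"core": {"loader": "mpes"}},  # initial config: dict or path to a file
    folder_config="sed_config.yaml",      # hypothetical file in the working directory
)
print(config["core"]["loader"])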

docs/sed/dataset.rst

Lines changed: 1 addition & 1 deletion
@@ -64,7 +64,7 @@ Setting the “use_existing” keyword to False allows to download the data in a
 Interrupting extraction has similar behavior to download and just continues from where it stopped.
 ''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''

-Or if user deletes the extracted documents, it reextracts from zip file
+Or if user deletes the extracted documents, it re-extracts from zip file
 '''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''

 .. code:: python

sed/binning/binning.py

Lines changed: 7 additions & 7 deletions
@@ -45,7 +45,7 @@ def bin_partition(
             - an integer describing the number of bins for all dimensions. This
               requires "ranges" to be defined as well.
             - A sequence containing one entry of the following types for each
-              dimenstion:
+              dimension:

               - an integer describing the number of bins. This requires "ranges"
                 to be defined as well.
@@ -76,14 +76,14 @@ def bin_partition(
             jittering. To specify the jitter amplitude or method (normal or uniform
             noise) a dictionary can be passed. This should look like
             jitter={'axis':{'amplitude':0.5,'mode':'uniform'}}.
-            This example also shows the default behaviour, in case None is
+            This example also shows the default behavior, in case None is
             passed in the dictionary, or jitter is a list of strings.
             Warning: this is not the most performing approach. Applying jitter
             on the dataframe before calling the binning is much faster.
             Defaults to None.
         return_edges (bool, optional): If True, returns a list of D arrays
             describing the bin edges for each dimension, similar to the
-            behaviour of ``np.histogramdd``. Defaults to False.
+            behavior of ``np.histogramdd``. Defaults to False.
         skip_test (bool, optional): Turns off input check and data transformation.
             Defaults to False as it is intended for internal use only.
             Warning: setting this True might make error tracking difficult.
@@ -127,7 +127,7 @@ def bin_partition(
     else:
         bins = cast(list[int], bins)
     # shift ranges by half a bin size to align the bin centers to the given ranges,
-    # as the histogram functions interprete the ranges as limits for the edges.
+    # as the histogram functions interpret the ranges as limits for the edges.
     for i, nbins in enumerate(bins):
         halfbinsize = (ranges[i][1] - ranges[i][0]) / (nbins) / 2
         ranges[i] = (
@@ -221,7 +221,7 @@ def bin_dataframe(
             - an integer describing the number of bins for all dimensions. This
               requires "ranges" to be defined as well.
             - A sequence containing one entry of the following types for each
-              dimenstion:
+              dimension:

               - an integer describing the number of bins. This requires "ranges"
                 to be defined as well.
@@ -260,7 +260,7 @@ def bin_dataframe(
             jittering. To specify the jitter amplitude or method (normal or uniform
             noise) a dictionary can be passed. This should look like
             jitter={'axis':{'amplitude':0.5,'mode':'uniform'}}.
-            This example also shows the default behaviour, in case None is
+            This example also shows the default behavior, in case None is
             passed in the dictionary, or jitter is a list of strings.
             Warning: this is not the most performing approach. applying jitter
             on the dataframe before calling the binning is much faster.
@@ -466,7 +466,7 @@ def normalization_histogram_from_timed_dataframe(
     bin_centers: np.ndarray,
     time_unit: float,
 ) -> xr.DataArray:
-    """Get a normalization histogram from a timed datafram.
+    """Get a normalization histogram from a timed dataframe.

     Args:
         df (dask.dataframe.DataFrame): a dask.DataFrame on which to perform the
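To make the jitter dictionary format described in these docstrings concrete, here is a rough usage sketch. The import path and the exact bin_dataframe keyword arguments are assumptions based on the docstrings above, and the column names are hypothetical.

import dask.dataframe as dd
import numpy as np
import pandas as pd

from sed.binning import bin_dataframe  # assumed import path

# hypothetical dataframe with integer-valued detector coordinates "X" and "Y"
pdf = pd.DataFrame(np.random.randint(0, 512, size=(100_000, 2)), columns=["X", "Y"])
df = dd.from_pandas(pdf, npartitions=4)

# 2D histogram with per-axis bins and ranges; jitter smears out the integer steps
result = bin_dataframe(
    df,
    bins=[256, 256],
    axes=["X", "Y"],
    ranges=[(0, 512), (0, 512)],
    jitter={"X": {"amplitude": 0.5, "mode": "uniform"}},
)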

sed/binning/numba_bin.py

Lines changed: 4 additions & 4 deletions
@@ -22,7 +22,7 @@ def _hist_from_bin_range(
     bit integers.

     Args:
-        sample (np.ndarray): The data to be histogrammed with shape N,D.
+        sample (np.ndarray): The data to be histogram'd with shape N,D.
         bins (Sequence[int]): The number of bins for each dimension D.
         ranges (np.ndarray): A sequence of length D, each an optional (lower,
             upper) tuple giving the outer bin edges to be used if the edges are
@@ -47,7 +47,7 @@ def _hist_from_bin_range(

     for i in range(ndims):
         delta[i] = 1 / ((ranges[i, 1] - ranges[i, 0]) / bins[i])
-        strides[i] = hist.strides[i] // hist.itemsize  # pylint: disable=E1136
+        strides[i] = hist.strides[i] // hist.itemsize

     for t in range(sample.shape[0]):
         is_inside = True
@@ -155,7 +155,7 @@ def numba_histogramdd(
     bins: int | Sequence[int] | Sequence[np.ndarray] | np.ndarray,
     ranges: Sequence = None,
 ) -> tuple[np.ndarray, list[np.ndarray]]:
-    """Multidimensional histogramming function, powered by Numba.
+    """Multidimensional histogram function, powered by Numba.

     Behaves in total much like numpy.histogramdd. Returns uint32 arrays.
     This was chosen because it has a significant performance improvement over
@@ -165,7 +165,7 @@ def numba_histogramdd(
     sizes.

     Args:
-        sample (np.ndarray): The data to be histogrammed with shape N,D
+        sample (np.ndarray): The data to be histogram'd with shape N,D
         bins (int | Sequence[int] | Sequence[np.ndarray] | np.ndarray): The number
             of bins for each dimension D, or a sequence of bin edges on which to calculate
             the histogram.
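Since numba_histogramdd is documented to behave much like numpy.histogramdd, a usage sketch looks like the following (the import path is assumed, and the accepted form of ranges may differ in detail):

import numpy as np

from sed.binning.numba_bin import numba_histogramdd  # assumed import path

sample = np.random.rand(100_000, 2)  # N x D data to be histogrammed
hist, edges = numba_histogramdd(
    sample,
    bins=[50, 50],                    # number of bins per dimension
    ranges=[(0.0, 1.0), (0.0, 1.0)],  # (lower, upper) outer edges per dimension
)
# hist is a uint32 array of shape (50, 50); edges is a list of bin-edge arrays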

sed/binning/utils.py

Lines changed: 2 additions & 2 deletions
@@ -31,7 +31,7 @@ def simplify_binning_arguments(
             - an integer describing the number of bins for all dimensions. This
               requires "ranges" to be defined as well.
             - A sequence containing one entry of the following types for each
-              dimenstion:
+              dimension:

               - an integer describing the number of bins. This requires "ranges"
                 to be defined as well.
@@ -115,7 +115,7 @@ def simplify_binning_arguments(
                 f"Ranges must be a sequence, not {type(ranges)}.",
             )

-    # otherwise, all bins should by np.ndarrays here
+    # otherwise, all bins should be of type np.ndarray here
     elif all(isinstance(x, np.ndarray) for x in bins):
         bins = cast(list[np.ndarray], list(bins))
     else:
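The bin specifications described in this docstring can be written out as follows; a sketch of equivalent inputs using hypothetical axis names, which simplify_binning_arguments is meant to normalize to per-axis values.

import numpy as np

axes = ["X", "Y"]  # hypothetical axis names

# 1) one integer for all dimensions (requires ranges)
bins_a, ranges_a = 100, [(0, 512), (0, 512)]

# 2) one integer per dimension (requires ranges)
bins_b, ranges_b = [100, 200], [(0, 512), (0, 512)]

# 3) explicit bin edges per dimension (ranges not needed)
bins_c = [np.linspace(0, 512, 101), np.linspace(0, 512, 201)]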

sed/calibrator/delay.py

Lines changed: 2 additions & 2 deletions
@@ -101,7 +101,7 @@ def append_delay_axis(

     Returns:
         tuple[pd.DataFrame | dask.dataframe.DataFrame, dict]: dataframe with added column
-            and delay calibration metdata dictionary.
+            and delay calibration metadata dictionary.
     """
     # pylint: disable=duplicate-code
     if calibration is None:
@@ -406,7 +406,7 @@ def mm_to_ps(
     delay_mm: float | np.ndarray,
     time0_mm: float,
 ) -> float | np.ndarray:
-    """Converts a delaystage position in mm into a relative delay in picoseconds
+    """Converts a delay stage position in mm into a relative delay in picoseconds
     (double pass).

     Args:
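For reference, the double-pass conversion described in the mm_to_ps docstring boils down to the relation below; a minimal sketch of the physics, not necessarily the exact implementation or sign convention used in sed.

import numpy as np

SPEED_OF_LIGHT_MM_PER_PS = 0.299792458

def mm_to_ps_sketch(delay_mm, time0_mm):
    # Double pass: moving the stage by d adds 2*d of optical path,
    # so the relative delay is 2 * (position - time zero) / c.
    return (np.asarray(delay_mm) - time0_mm) * 2.0 / SPEED_OF_LIGHT_MM_PER_PS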

sed/calibrator/energy.py

Lines changed: 7 additions & 7 deletions
@@ -444,7 +444,7 @@ def add_ranges(
         traces (np.ndarray, optional): Collection of energy dispersion curves.
             Defaults to self.traces_normed.
         infer_others (bool, optional): Option to infer the feature detection range
-            in other traces from a given one using a time warp algorthm.
+            in other traces from a given one using a time warp algorithm.
             Defaults to True.
         mode (str, optional): Specification on how to change the feature ranges
             ('append' or 'replace'). Defaults to "replace".
@@ -1155,7 +1155,7 @@ def common_apply_func(apply: bool):  # noqa: ARG001
                 update(correction["amplitude"], x_center, y_center, diameter=correction["diameter"])
             except KeyError as exc:
                 raise ValueError(
-                    "Parameter 'diameter' required for correction type 'sperical', ",
+                    "Parameter 'diameter' required for correction type 'spherical', ",
                     "but not present!",
                 ) from exc

@@ -1337,7 +1337,7 @@ def apply_energy_correction(
             Defaults to config["energy"]["correction_type"].
         amplitude (float, optional): Amplitude of the time-of-flight correction
             term. Defaults to config["energy"]["correction"]["correction_type"].
-        correction (dict, optional): Correction dictionary containing paramters
+        correction (dict, optional): Correction dictionary containing parameters
             for the correction. Defaults to self.correction or
             config["energy"]["correction"].
         verbose (bool, optional): Option to print out diagnostic information.
@@ -1938,7 +1938,7 @@ def _datacheck_peakdetect(
     x_axis: np.ndarray,
     y_axis: np.ndarray,
 ) -> tuple[np.ndarray, np.ndarray]:
-    """Input format checking for 1D peakdtect algorithm
+    """Input format checking for 1D peakdetect algorithm

     Args:
         x_axis (np.ndarray): x-axis array
@@ -2108,7 +2108,7 @@ def fit_energy_calibration(
         binwidth (float): Time width of each original TOF bin in ns.
         binning (int): Binning factor of the TOF values.
         ref_id (int, optional): Reference dataset index. Defaults to 0.
-        ref_energy (float, optional): Energy value of the feature in the refence
+        ref_energy (float, optional): Energy value of the feature in the reference
            trace (eV). required to output the calibration. Defaults to None.
         t (list[float] | np.ndarray, optional): Array of TOF values. Required
             to calculate calibration trace. Defaults to None.
@@ -2130,7 +2130,7 @@ def fit_energy_calibration(
     Returns:
         dict: A dictionary of fitting parameters including the following,

-        - "coeffs": Fitted function coefficents.
+        - "coeffs": Fitted function coefficients.
         - "axis": Fitted energy axis.
     """
     vals = np.asarray(vals)
@@ -2247,7 +2247,7 @@ def poly_energy_calibration(
             each EDC.
         order (int, optional): Polynomial order of the fitting function. Defaults to 3.
         ref_id (int, optional): Reference dataset index. Defaults to 0.
-        ref_energy (float, optional): Energy value of the feature in the refence
+        ref_energy (float, optional): Energy value of the feature in the reference
            trace (eV). required to output the calibration. Defaults to None.
         t (list[float] | np.ndarray, optional): Array of TOF values. Required
             to calculate calibration trace. Defaults to None.
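The polynomial energy calibration documented above fits an energy-versus-TOF relation anchored to a reference feature of known energy. Below is a self-contained sketch of that idea using numpy.polyfit; the feature positions, bias values, and sign convention are made up, and the actual sed routine packages its results in a dictionary with "coeffs" and "axis" entries, which may be computed differently.

import numpy as np

# hypothetical TOF positions of the same feature measured at different bias voltages
tof_positions = np.array([6620.0, 6655.0, 6693.0, 6734.0, 6778.0])
bias_shifts = np.array([0.0, 1.0, 2.0, 3.0, 4.0])  # eV, relative to the reference trace
ref_energy = -0.5  # known energy (eV) of the feature in the reference trace

# energies of the feature in each trace, assuming the bias shifts the spectrum rigidly
feature_energies = ref_energy - bias_shifts

# 3rd-order polynomial mapping TOF to energy ("coeffs"), then an energy axis ("axis")
coeffs = np.polyfit(tof_positions, feature_energies, deg=3)
tof_axis = np.linspace(6000.0, 7000.0, 1001)
energy_axis = np.polyval(coeffs, tof_axis)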
