
Commit 7cd3692: merge fixes
1 parent 8e8ca20
File tree: 2 files changed (+11 −11 lines)


docs/src/inverse_problems/global_sensitivity_analysis.md

Lines changed: 9 additions & 9 deletions
@@ -1,14 +1,14 @@
# [Global Sensitivity Analysis](@id global_sensitivity_analysis)
- *Global sensitivity analysis* (GSA) is used to study the sensitivity of a function with respect of its input [^1]. Within the context of chemical reaction network modelling it is primarily used for two purposes:
+ *Global sensitivity analysis* (GSA) is used to study the sensitivity of a function's outputs with respect to its inputs [^1]. Within the context of chemical reaction network modelling it is primarily used for two purposes:
- [When fitting a model's parameters to data](@ref petab_parameter_fitting), it can be applied to the cost function of the optimisation problem. Here, GSA helps determine which parameters do, and do not, affect the model's fit to the data. This can be used to identify parameters that are less relevant for the observed data.
- - [When measuring some system behaviour or property](@ref TBA_AT_LATER_STAGE), it can help determine which parameters influence that property. E.g. for a model of a biofuel producing circuit in a synthetic organism, GSA could determine which system parameters has the largest impact on the total rate of biofuel production.
+ - [When measuring some system behaviour or property](@ref behaviour_optimisation), it can help determine which parameters influence that property. E.g. for a model of a biofuel-producing circuit in a synthetic organism, GSA could determine which system parameters have the largest impact on the total rate of biofuel production.

GSA can be carried out using the [GlobalSensitivity.jl](https://github.com/SciML/GlobalSensitivity.jl) package. This tutorial contains a brief introduction to how to use it for GSA on Catalyst models, with [GlobalSensitivity providing more complete documentation](https://docs.sciml.ai/GlobalSensitivity/stable/).

- #### Global vs local sensitivity
+ ### Global vs local sensitivity

A related concept to global sensitivity is *local sensitivity*. This, rather than measuring a function's sensitivity (with regard to its inputs) across its entire domain (or a large part of it), measures it at a specific point. This is equivalent to computing the function's gradient at a specific point in phase space, which is an important routine for most gradient-based optimisation methods (typically carried out through [*automatic differentiation*](https://en.wikipedia.org/wiki/Automatic_differentiation)). For most Catalyst-related functionalities, local sensitivities are computed using the [SciMLSensitivity.jl](https://github.com/SciML/SciMLSensitivity.jl) package. While certain GSA methods can utilise local sensitivities, this is not necessarily the case.

- While local sensitivities are primarily used as a subroutine of other methodologies (such as optimisation schemes), it also has direct uses. E.g., in the context of fitting parameters to data, local sensitivity analysis can be used to, at the parameter set of the optimal fit, [determine the cost function's sensitivity to the system parameters](@ref TBA_AT_LATER_STAGE).
+ While local sensitivities are primarily used as a subroutine of other methodologies (such as optimisation schemes), they also have direct uses. E.g., in the context of fitting parameters to data, local sensitivity analysis can be used to determine, at the parameter set of the optimal fit, the cost function's sensitivity to the system parameters.

## Basic example

We will consider a simple [SEIR model of an infectious disease](https://en.wikipedia.org/wiki/Compartmental_models_in_epidemiology). This is an expansion of the classic SIR model with an additional *exposed* state, $E$, denoting individuals who are latently infected but currently unable to transmit their infection to others.
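The local-sensitivity discussion above can be made concrete with a small sketch. This uses an illustrative toy function, not anything from the tutorial itself, and assumes only that ForwardDiff.jl is installed:

```julia
# Local sensitivity = the gradient of a function at one specific point.
using ForwardDiff

# Toy stand-in for e.g. a cost function of three parameters.
f(p) = p[1]^2 * sin(p[2]) + p[3]

# Local sensitivities at the point p = [1.0, 0.5, 2.0].
local_sens = ForwardDiff.gradient(f, [1.0, 0.5, 2.0])
```

Each entry of `local_sens` is the partial derivative of `f` with respect to one input, evaluated at that single point; a GSA method would instead aggregate such information across the whole input domain.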
@@ -43,14 +43,14 @@ using GlobalSensitivity
global_sens = gsa(peak_cases, Morris(), [(-3.0,-1.0), (-2.0,0.0), (-2.0,0.0)])
nothing # hide
```
- on the domain $10^β ∈ (-3.0,-1.0), 10^a ∈ (-2.0,0.0), 10^γ ∈ (-2.0,0.0)$. The output of `gsa` varies depending on which GSA approach is used. GlobalSensitivity implements a range of methods for GSA. Bellow, we will describe the most common ones, as well as how to apply them and interpret their outputs.
+ on the domain $β ∈ (-3.0,-1.0)$, $a ∈ (-2.0,0.0)$, $γ ∈ (-2.0,0.0)$ (which corresponds to $10^β ∈ (0.001,0.1)$, $10^a ∈ (0.01,1.0)$, $10^γ ∈ (0.01,1.0)$). The output of `gsa` varies depending on which GSA approach is used. GlobalSensitivity implements a range of methods for GSA. Below, we will describe the most common ones, as well as how to apply them and interpret their outputs.

!!! note
We should make a couple of notes about the example above:
- - Here, we write our parameters on the forms $10^β$, $10^a$, and $10^γ$, which transforms them into log-space. As [previously described](@ref TBA_AT_LATER_STAGE), this is advantageous in the context of inverse problems such as this one.
- - For simplicity, we create a new `ODEProblem` in each evaluation of the `peak_cases` function. For GSA, where a function is evaluated a large number of times, it is ideal to write it as performant as possible. As [previously described](@ref TBA_AT_LATER_STAGE), creating a single `ODEProblem` initially, and then using `remake` to modify it in each evaluations of `peak_cases` will increase performance.
- - Again, as [previously described in other inverse problem tutorials](@ref TBA_AT_LATER_STAGE), when exploring a function over large parameter spaces, we will likely simulate our model for unsuitable parameter sets. To reduce time spent on these, and to avoid excessive warning messages, we provide the `maxiters=100000` and `verbose=false` arguments to `solve`.
- - As we have encountered in [a few other cases](@ref TBA_AT_LATER_STAGE), the `gsa` function is not able to take parameter inputs of the map form usually used for Catalyst. Hence, in its third argument, we have to ensure that the i'th Tuple corresponds to the parameter bounds of the i'th parameter in the `parameters(seir_model)` vector.
+ - Here, we write our parameters on the forms $10^β$, $10^a$, and $10^γ$, which transforms them into log-space. As [previously described](@ref optimization_parameter_fitting_logarithmic_scale), this is advantageous in the context of inverse problems such as this one.
+ - For simplicity, we create a new `ODEProblem` in each evaluation of the `peak_cases` function. For GSA, where a function is evaluated a large number of times, it is ideal to make it as performant as possible. Creating a single `ODEProblem` initially, and then using `remake` to modify it in each evaluation of `peak_cases`, will increase performance.
+ - Again, as [previously described in other inverse problem tutorials](@ref optimization_parameter_fitting_basics), when exploring a function over large parameter spaces, we will likely simulate our model for unsuitable parameter sets. To reduce the time spent on these, and to avoid excessive warning messages, we provide the `maxiters=100000` and `verbose=false` arguments to `solve`.
+ - As we have encountered in [a few other cases](@ref optimization_parameter_fitting_basics), the `gsa` function cannot take parameter inputs of the map form usually used for Catalyst. Hence, in its third argument, we have to ensure that the i'th Tuple corresponds to the parameter bounds of the i'th parameter in the `parameters(seir_model)` vector.
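The `remake` advice in the notes above can be sketched as follows. The model definition, initial condition, and output measure here are hypothetical reconstructions from the tutorial's context (the actual `peak_cases` and problem setup are not shown in this diff), so treat this as a pattern sketch rather than the tutorial's code:

```julia
# Sketch: create one ODEProblem up front, then `remake` it in every
# evaluation instead of constructing a fresh problem each time.
using Catalyst, OrdinaryDiffEq

# Assumed SEIR-style model with log-space parameters (hypothetical names).
seir_model = @reaction_network begin
    10^β, S + I --> E + I
    10^a, E --> I
    10^γ, I --> R
end
u0 = [:S => 999.0, :E => 0.0, :I => 1.0, :R => 0.0]
oprob_base = ODEProblem(seir_model, u0, (0.0, 250.0),
                        [:β => -2.0, :a => -1.0, :γ => -1.0])

function peak_cases(p)
    # Reuse the base problem; only the parameter values change.
    oprob = remake(oprob_base; p = [:β => p[1], :a => p[2], :γ => p[3]])
    sol = solve(oprob, Tsit5(); maxiters = 100000, verbose = false)
    SciMLBase.successful_retcode(sol) || return 0.0
    return maximum(sol[:I])
end
```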


## Sobol's method based global sensitivity analysis
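For readers who want to try the `gsa` call pattern from the section above in isolation, here is a self-contained sketch on a toy function (the function itself is an arbitrary assumption, chosen so that one input provably has no effect):

```julia
# Minimal Morris-method GSA on a toy function of three inputs.
using GlobalSensitivity

f(p) = p[1]^2 + 0.5 * p[2]  # the third input is deliberately unused

res = gsa(f, Morris(), [(0.0, 1.0), (0.0, 1.0), (0.0, 1.0)])

# `res.means_star` holds the mean absolute elementary effect of each
# input; a value of (near) zero flags an input the output ignores.
res.means_star
```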

docs/src/inverse_problems/optimization_ode_param_fitting.md

Lines changed: 2 additions & 2 deletions
@@ -3,7 +3,7 @@ Fitting parameters to data involves solving an optimisation problem (that is, fi

This tutorial demonstrates both how to create parameter fitting cost functions using the [DiffEqParamEstim.jl](https://github.com/SciML/DiffEqParamEstim.jl) package, and how to use Optimization.jl to minimise these. Optimization.jl can also be used in other contexts, such as finding parameter sets that maximise the magnitude of some system behaviour. More details on how to use these packages can be found in their [respective](https://docs.sciml.ai/Optimization/stable/) [documentation](https://docs.sciml.ai/DiffEqParamEstim/stable/).
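That workflow can be sketched end to end as below. The network, data, and solver choices are illustrative assumptions only, not the tutorial's actual example:

```julia
# Sketch: build an L2 cost function with DiffEqParamEstim and minimise it
# with Optimization.jl (Nelder-Mead, via OptimizationOptimJL).
using Catalyst, OrdinaryDiffEq, DiffEqParamEstim
using Optimization, OptimizationOptimJL

rn = @reaction_network begin
    (kB, kD), S + E <--> SE
    kP, SE --> P + E
end
u0 = [:S => 1.0, :E => 0.2, :SE => 0.0, :P => 0.0]
oprob = ODEProblem(rn, u0, (0.0, 10.0), [:kB => 1.0, :kD => 0.1, :kP => 0.5])

# Hypothetical measurements of P (the 4th species) at times `ts`.
ts = collect(0.5:0.5:10.0)
data = rand(length(ts))

loss = build_loss_objective(oprob, Tsit5(), L2Loss(ts, data),
                            Optimization.AutoForwardDiff();
                            maxiters = 100000, verbose = false, save_idxs = 4)
optprob = OptimizationProblem(loss, [1.0, 0.1, 0.5])
optsol = solve(optprob, Optim.NelderMead())
```

Nelder-Mead needs no gradients, so the AD choice above mainly matters if a gradient-based optimiser is swapped in.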

- ## Basic example
+ ## [Basic example](@id optimization_parameter_fitting_basics)

Let us consider a simple catalysis network, where an enzyme ($E$) turns a substrate ($S$) into a product ($P$):
```@example diffeq_param_estim_1
@@ -142,7 +142,7 @@ optsol_fixed_kD = solve(optprob_fixed_kD, Optim.NelderMead())
nothing # hide
```

- ## Fitting parameters on the logarithmic scale
+ ## [Fitting parameters on the logarithmic scale](@id optimization_parameter_fitting_logarithmic_scale)
Often it can be advantageous to fit parameters on a [logarithmic, rather than linear, scale](https://journals.plos.org/ploscompbiol/article?id=10.1371/journal.pcbi.1008646). The best way to proceed is to simply replace each parameter in the model definition by its logarithmic version:
```@example diffeq_param_estim_2
using Catalyst

0 commit comments