More updates to the Getting Started page #604

Merged: 5 commits, May 8, 2025
36 changes: 29 additions & 7 deletions docs/src/tutorials/linear.md

A linear system $$Au=b$$ is specified by defining an `AbstractMatrix` or `AbstractSciMLOperator`.
For the sake of simplicity, this tutorial will start by only showcasing concrete matrices.
And specifically, we will start by using the basic Julia `Matrix` type.

The following defines a `Matrix` and a `LinearProblem` which is subsequently solved
by the default linear solver.

```@example linsys1
sol = solve(prob, KrylovJL_GMRES()) # Choosing algorithms is done the same way
sol.u
```
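
For reference, a minimal end-to-end version of the dense-`Matrix` workflow described above might look like the following sketch (the sizes and random data are purely illustrative):

```julia
using LinearSolve, LinearAlgebra

# Define a dense 4x4 system Au = b
A = rand(4, 4)
b = rand(4)

# Wrap it in a LinearProblem and solve with the default algorithm choice
prob = LinearProblem(A, b)
sol = solve(prob)

sol.u                 # the solution vector u
norm(A * sol.u - b)   # residual, should be near machine precision
```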

Similarly, structured matrix types, like banded matrices, can be provided using special matrix
types. While any `AbstractMatrix` type should be compatible via the general Julia interfaces,
LinearSolve.jl specifically tests with the following cases:

- [LinearAlgebra.jl](https://docs.julialang.org/en/v1/stdlib/LinearAlgebra/)

+ Symmetric
+ Hermitian
+ UpperTriangular
+ UnitUpperTriangular
+ LowerTriangular
+ UnitLowerTriangular
+ SymTridiagonal
+ Tridiagonal
+ Bidiagonal
+ Diagonal

- [BandedMatrices.jl](https://github.com/JuliaLinearAlgebra/BandedMatrices.jl) `BandedMatrix`
- [BlockDiagonals.jl](https://github.com/JuliaArrays/BlockDiagonals.jl) `BlockDiagonal`
- [CUDA.jl](https://cuda.juliagpu.org/stable/) (CUDA GPU-based dense and sparse matrices) `CuArray` (`GPUArray`)
- [FastAlmostBandedMatrices.jl](https://github.com/SciML/FastAlmostBandedMatrices.jl) `FastAlmostBandedMatrix`
- [Metal.jl](https://metal.juliagpu.org/stable/) (Apple M-series GPU-based dense matrices) `MetalArray`
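
As a brief sketch of how these structured types plug in (the sizes and data here are illustrative), a `Tridiagonal` matrix from the LinearAlgebra standard library can be passed to `LinearProblem` just like a dense `Matrix`:

```julia
using LinearSolve, LinearAlgebra

n = 100
# Diagonally dominant tridiagonal matrix: subdiagonal, diagonal, superdiagonal
A = Tridiagonal(rand(n - 1), rand(n) .+ 2, rand(n - 1))
b = rand(n)

prob = LinearProblem(A, b)
sol = solve(prob)  # the tridiagonal structure informs the solver choice
```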

!!! note

    Choosing the most specific matrix structure that matches your system will give the best
    performance. Thus if your matrix is symmetric, building it with `Symmetric(A)` will be
    faster than simply using `A`, and will generally lead to better automatic linear solver
    choices. Note that you can also choose the type for `b`, but generally a dense vector
    will be the fastest here and many solvers will not support a sparse `b`.
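
As an illustrative sketch of the point above (names and sizes are arbitrary), the symmetry can be declared through the `Symmetric` wrapper from the LinearAlgebra standard library:

```julia
using LinearSolve, LinearAlgebra

B = rand(100, 100)
A = Symmetric(B + B')   # declare the symmetry explicitly via the wrapper type
b = rand(100)

sol = solve(LinearProblem(A, b))  # solver selection can exploit the symmetry
```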

## Using Matrix-Free Operators
