CONTRIBUTING.md (4 additions, 4 deletions)
```diff
@@ -45,11 +45,11 @@ executorch
 │ └── <a href="devtools/visualization">visualization</a> - Visualization tools for representing model structure and performance metrics.
 ├── <a href="docs">docs</a> - Static docs tooling and documentation source files.
 ├── <a href="examples">examples</a> - Examples of various user flows, such as model export, delegates, and runtime execution.
-├── <a href="exir">exir</a> - Ahead-of-time library: model capture and lowering APIs. EXport Intermediate Representation (EXIR) is a format for representing the result of <a href="https://pytorch.org/docs/stable/export.html">torch.export</a>. This directory contains utilities and passes for lowering the EXIR graphs into different <a href="/docs/source/ir-exir.md">dialects</a> and eventually suitable to run on target hardware.
+├── <a href="exir">exir</a> - Ahead-of-time library: model capture and lowering APIs. EXport Intermediate Representation (EXIR) is a format for representing the result of <a href="https://pytorch.org/docs/stable/export.html">torch.export</a>. This directory contains utilities and passes for lowering the EXIR graphs into different <a href="docs/source/ir-exir.md">dialects</a> and eventually suitable to run on target hardware.
 │ ├── <a href="exir/_serialize">_serialize</a> - Serialize final export artifact.
 │ ├── <a href="exir/backend">backend</a> - Backend delegate ahead of time APIs.
 │ ├── <a href="exir/capture">capture</a> - Program capture.
-│ ├── <a href="exir/dialects">dialects</a> - Op sets for various dialects in the export process. Please refer to the <a href="/docs/source/ir-exir.md">EXIR spec</a> and the <a href="/docs/source/compiler-backend-dialect.md">backend dialect</a> doc for more details.
+│ ├── <a href="exir/dialects">dialects</a> - Op sets for various dialects in the export process. Please refer to the <a href="docs/source/ir-exir.md">EXIR spec</a> and the <a href="docs/source/compiler-backend-dialect.md">backend dialect</a> doc for more details.
 │ ├── <a href="exir/emit">emit</a> - Conversion from ExportedProgram to ExecuTorch execution instructions.
```

```diff
 │ ├── <a href="extension/memory_allocator">memory_allocator</a> - 1st party memory allocator implementations.
 │ ├── <a href="extension/module">module</a> - A simplified C++ wrapper for the runtime. An abstraction that deserializes and executes an ExecuTorch artifact (.pte file). Refer to the <a href="docs/source/extension-module.md">module documentation</a> for more information.
 │ ├── <a href="extension/parallel">parallel</a> - C++ threadpool integration.
-│ ├── <a href="extension/pybindings">pybindings</a> - Python API for executorch runtime. This is powering up the <a href="docs/source/runtime-python-api-reference.md">runtime Python API</a> for ExecuTorch.
+│ ├── <a href="extension/pybindings">pybindings</a> - Python API for executorch runtime. This is powering up the <a href="docs/source/runtime-python-api-reference.rst">runtime Python API</a> for ExecuTorch.
 │ ├── <a href="extension/pytree">pytree</a> - C++ and Python flattening and unflattening lib for pytrees.
 │ ├── <a href="extension/runner_util">runner_util</a> - Helpers for writing C++ PTE-execution tools.
 │ ├── <a href="extension/tensor">tensor</a> - Tensor maker and <code>TensorPtr</code>, details in <a href="docs/source/extension-tensor.md">this documentation</a>. For how to use <code>TensorPtr</code> and <code>Module</code>, please refer to the <a href="docs/source/using-executorch-cpp.md">"Using ExecuTorch with C++"</a> doc.
```
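The `extension/pytree` entry above refers to flattening a nested container of values into a flat list of leaves plus a structure spec, and rebuilding it later. A minimal pure-Python sketch of the idea (illustrative only; the actual ExecuTorch pytree library has its own C++/Python API):

```python
# Toy pytree flatten/unflatten for nested dicts, lists, and tuples.
# This sketches the concept; it is not the extension/pytree implementation.

def tree_flatten(tree):
    """Flatten a nested structure into (leaves, spec)."""
    if isinstance(tree, dict):
        keys = sorted(tree)
        leaves, specs = [], []
        for k in keys:
            sub_leaves, sub_spec = tree_flatten(tree[k])
            leaves.extend(sub_leaves)
            specs.append(sub_spec)
        return leaves, ("dict", keys, specs)
    if isinstance(tree, (list, tuple)):
        leaves, specs = [], []
        for item in tree:
            sub_leaves, sub_spec = tree_flatten(item)
            leaves.extend(sub_leaves)
            specs.append(sub_spec)
        return leaves, (type(tree).__name__, specs)
    return [tree], ("leaf",)

def tree_unflatten(leaves, spec):
    """Rebuild the nested structure from a flat leaf list and a spec."""
    leaves = list(leaves)

    def build(spec):
        tag = spec[0]
        if tag == "leaf":
            return leaves.pop(0)
        if tag == "dict":
            _, keys, specs = spec
            return {k: build(s) for k, s in zip(keys, specs)}
        _, specs = spec
        seq = [build(s) for s in specs]
        return tuple(seq) if tag == "tuple" else seq

    return build(spec)
```

For example, `tree_flatten({"a": [1, 2], "b": (3,)})` yields the leaves `[1, 2, 3]` plus a spec, and `tree_unflatten` on that pair reproduces the original structure.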
```diff
@@ -114,7 +114,7 @@ If you're completely new to open-source projects, GitHub, or ExecuTorch, please
 1. If you've changed APIs or added a new tool or feature, [update the
    documentation](#updating-documentation).
 1. If you added an experimental API or deprecated an existing API, follow the
-   [API Life Cycle and Deprecation Policy](/docs/source/api-life-cycle.md).
+   [API Life Cycle and Deprecation Policy](docs/source/api-life-cycle.md).
 1. Make sure your code follows the [style guides](#coding-style) and passes the
    [lint checks](#lintrunner).
 1. If you haven't already, complete the [Contributor License Agreement ("CLA")](#contributor-license-agreement-cla).
```
backends/apple/coreml/runtime/test/setup.md (8 additions, 8 deletions)
````diff
@@ -4,18 +4,18 @@ This is a tutorial for setting up tests for the **Core ML** backend.

 ## Running tests

-1. Follow the instructions described in [Setting Up ExecuTorch](/docs/source/getting-started-setup.md) to set up ExecuTorch environment.
+1. Follow the instructions described in [Setting Up ExecuTorch](../../../../../docs/source/getting-started-setup.rst) to set up ExecuTorch environment.

 2. Run `install_requirements.sh` to install dependencies required by the **Core ML** backend.

 ```bash
 cd executorch

-sh backends/apple/coreml/scripts/install_requirements.sh
+sh backends/apple/coreml/scripts/install_requirements.sh
-```
+```

-3. Follow the instructions described in [Building with CMake](/docs/source/runtime-build-and-cross-compilation.md#building-with-cmake) to set up CMake build system.
+3. Follow the instructions described in [Building with CMake](../../../../../docs/source/using-executorch-cpp.md#building-with-cmake) to set up CMake build system.
````
```diff
-1. Follow the instructions described in [Building with CMake](/docs/source/runtime-build-and-cross-compilation.md#building-with-cmake) to set up CMake build system.
+1. Follow the instructions described in [Building with CMake](../../../docs/source/using-executorch-cpp.md#building-with-cmake) to set up CMake build system.
```
backends/example/README.md (3 additions, 3 deletions)
```diff
@@ -17,16 +17,16 @@ In the following diagram, we show how to quantize a mobile net v2 model and lowe

 We can define patterns based on the operators supported by the backend, which will be used by the quantizer and delegate.

-
+

 ### Partitioner and Backend

 The way the partitioner and backend work is: the partitioner tags the nodes to lower to the backend, and the backend receives all tagged nodes and preprocesses them as a delegate.

-
+

 ### Memory format permute

 Some operators may have better performance in a memory format other than contiguous. One way to handle this is to insert `to_dim_op` to describe the memory format permutation, and to merge two opposite ones that are next to each other.
```
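The "Memory format permute" note above ends with merging adjacent opposite permutes. A toy sketch of that cancellation pass over a flat op list (the `("to_dim", perm)` tuples here are a hypothetical stand-in for ExecuTorch's `to_dim_op` nodes, not its actual IR):

```python
# Toy pass: drop pairs of adjacent, mutually inverse memory-format permutes.
# Ops are (name, perm) tuples; "to_dim" is an illustrative stand-in for
# the to_dim_op nodes mentioned in the text.

def inverse_perm(perm):
    """Return the permutation that undoes `perm`."""
    inv = [0] * len(perm)
    for i, p in enumerate(perm):
        inv[p] = i
    return tuple(inv)

def merge_opposite_permutes(ops):
    """Remove adjacent to_dim pairs whose permutations cancel out."""
    out = []
    for op in ops:
        if (out
                and op[0] == "to_dim"
                and out[-1][0] == "to_dim"
                and tuple(op[1]) == inverse_perm(out[-1][1])):
            out.pop()  # the two permutes cancel; emit neither
        else:
            out.append(op)
    return out
```

For example, an NCHW-to-NHWC permute `(0, 2, 3, 1)` immediately followed by its inverse `(0, 3, 1, 2)` is removed from the op list entirely.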
backends/vulkan/docs/android_demo.md (4 additions, 4 deletions)
```diff
@@ -1,6 +1,6 @@
 # Building and Running ExecuTorch with the Vulkan Backend

-The [ExecuTorch Vulkan Delegate](./native-delegates-executorch-vulkan-delegate.md)
+The [ExecuTorch Vulkan Delegate](../../../docs/source/native-delegates-executorch-vulkan-delegate.md)
 is a native GPU delegate for ExecuTorch.

 <!----This will show a grid card on the page----->
@@ -12,8 +12,8 @@ is a native GPU delegate for ExecuTorch.
 :::
 :::{grid-item-card} Prerequisites:
 :class-card: card-prerequisites
-* Follow [**Setting up ExecuTorch**](./getting-started-setup.md)
-* It is also recommended that you read through [**ExecuTorch Vulkan Delegate**](./native-delegates-executorch-vulkan-delegate.md) and follow the example in that page
+* Follow [**Setting up ExecuTorch**](../../../docs/source/getting-started-setup.rst)
+* It is also recommended that you read through [**ExecuTorch Vulkan Delegate**](../../../docs/source/native-delegates-executorch-vulkan-delegate.md) and follow the example in that page
 :::
 ::::
@@ -59,7 +59,7 @@ partially lower the Llama model to Vulkan.
 # The files will usually be downloaded to ~/.llama
```