diff --git a/docs/source/_static/img/android_studio.jpeg b/docs/source/_static/img/android_studio.jpeg
new file mode 100644
index 00000000000..e65aac0e374
Binary files /dev/null and b/docs/source/_static/img/android_studio.jpeg differ
diff --git a/docs/source/_static/img/android_studio.mp4 b/docs/source/_static/img/android_studio.mp4
new file mode 100644
index 00000000000..2303ef5863c
Binary files /dev/null and b/docs/source/_static/img/android_studio.mp4 differ
diff --git a/docs/source/backend-delegates-xnnpack-reference.md b/docs/source/backend-delegates-xnnpack-reference.md
index 52d208de219..62dead75a4e 100644
--- a/docs/source/backend-delegates-xnnpack-reference.md
+++ b/docs/source/backend-delegates-xnnpack-reference.md
@@ -142,5 +142,5 @@ def _qdq_quantized_linear(
 You can read more indepth explanations on PyTorch 2 quantization [here](https://pytorch.org/tutorials/prototype/pt2e_quant_ptq.html).
 
 ## See Also
-- [Integrating XNNPACK Delegate Android App](demo-apps-android.md)
+- [Integrating XNNPACK Delegate in Android AAR](using-executorch-android.md)
 - [Complete the Lowering to XNNPACK Tutorial](tutorial-xnnpack-delegate-lowering.md)
diff --git a/docs/source/backends-qualcomm.md b/docs/source/backends-qualcomm.md
index 7a2f749e185..71f1d3cd93c 100644
--- a/docs/source/backends-qualcomm.md
+++ b/docs/source/backends-qualcomm.md
@@ -351,11 +351,6 @@ The command-line arguments are written in [utils.py](https://github.com/pytorch/
 The model, inputs, and output location are passed to `qnn_executorch_runner` by `--model_path`, `--input_list_path`, and `--output_folder_path`.
 
 
-### Running a model via ExecuTorch's android demo-app
-
-An Android demo-app using Qualcomm AI Engine Direct Backend can be found in
-`examples`. Please refer to android demo app [tutorial](demo-apps-android.md).
-
 ## Supported model list
 
 Please refer to `$EXECUTORCH_ROOT/examples/qualcomm/scripts/` and `EXECUTORCH_ROOT/examples/qualcomm/oss_scripts/` to the list of supported models.
diff --git a/docs/source/index.md b/docs/source/index.md
index feb2461ae2a..47ea42a21ea 100644
--- a/docs/source/index.md
+++ b/docs/source/index.md
@@ -41,7 +41,7 @@ ExecuTorch provides support for:
 - [Building from Source](using-executorch-building-from-source)
 - [FAQs](using-executorch-faqs)
 #### Examples
-- [Android Demo Apps](demo-apps-android.md)
+- [Android Demo Apps](https://github.com/pytorch-labs/executorch-examples/tree/main/dl3/android/DeepLabV3Demo#executorch-android-demo-app)
 - [iOS Demo Apps](demo-apps-ios.md)
 #### Backends
 - [Overview](backends-overview)
@@ -142,7 +142,7 @@ using-executorch-faqs
 :caption: Examples
 :hidden:
 
-demo-apps-android.md
+Building an ExecuTorch Android Demo App <https://github.com/pytorch-labs/executorch-examples/tree/main/dl3/android/DeepLabV3Demo#executorch-android-demo-app>
 demo-apps-ios.md
 ```
 
diff --git a/docs/source/tutorial-xnnpack-delegate-lowering.md b/docs/source/tutorial-xnnpack-delegate-lowering.md
index a469edebd54..7fc97476ef7 100644
--- a/docs/source/tutorial-xnnpack-delegate-lowering.md
+++ b/docs/source/tutorial-xnnpack-delegate-lowering.md
@@ -176,7 +176,7 @@ Now you should be able to find the executable built at `./cmake-out/backends/xnn
 ```
 
 ## Building and Linking with the XNNPACK Backend
-You can build the XNNPACK backend [CMake target](https://github.com/pytorch/executorch/blob/main/backends/xnnpack/CMakeLists.txt#L83), and link it with your application binary such as an Android or iOS application. For more information on this you may take a look at this [resource](demo-apps-android.md) next.
+You can build the XNNPACK backend [CMake target](https://github.com/pytorch/executorch/blob/main/backends/xnnpack/CMakeLists.txt#L83), and link it with your application binary such as an Android or iOS application. For more information on this you may take a look at this [resource](./using-executorch-android.md) next.
 
 ## Profiling
 To enable profiling in the `xnn_executor_runner` pass the flags `-DEXECUTORCH_ENABLE_EVENT_TRACER=ON` and `-DEXECUTORCH_BUILD_DEVTOOLS=ON` to the build command (add `-DENABLE_XNNPACK_PROFILING=ON` for additional details). This will enable ETDump generation when running the inference and enables command line flags for profiling (see `xnn_executor_runner --help` for details).
diff --git a/docs/source/using-executorch-android.md b/docs/source/using-executorch-android.md
index 99b68008dc6..fc5223278c9 100644
--- a/docs/source/using-executorch-android.md
+++ b/docs/source/using-executorch-android.md
@@ -22,6 +22,8 @@ The AAR artifact contains the Java library for users to integrate with their Jav
   - LLaMa-specific Custom ops library.
 - Comes with two ABI variants, arm64-v8a and x86\_64.
 
+The AAR library can be used on any generic Android device with an arm64-v8a or x86_64 architecture. Because it contains no UI components, it can be used across form factors, including phones, tablets, and TV boxes.
+
 ## Using AAR from Maven Central
 
 ExecuTorch is available on [Maven Central](https://mvnrepository.com/artifact/org.pytorch/executorch-android).
@@ -38,6 +40,11 @@ dependencies {
 
 Note: `org.pytorch:executorch-android:0.5.1` corresponds to executorch v0.5.0.
 
+Click the screenshot below to watch a *demo video* showing how to add the package and run a simple ExecuTorch model in Android Studio.
+<a href="https://pytorch.org/executorch/main/_static/img/android_studio.mp4">
+  <img src="https://pytorch.org/executorch/main/_static/img/android_studio.jpeg" width="800" alt="Integrating and Running ExecuTorch on Android">
+</a>
+
 ## Using AAR file directly
 
 You can also directly specify an AAR file in the app. We upload pre-built AAR to S3 during each release, or as a snapshot.
@@ -103,6 +110,8 @@ export ANDROID_NDK=/path/to/ndk
 sh scripts/build_android_library.sh
 ```
 
+Currently, the XNNPACK backend is always built by the script.
+
 ### Optional environment variables
 
 Optionally, set these environment variables before running `build_android_library.sh`.
diff --git a/docs/source/using-executorch-building-from-source.md b/docs/source/using-executorch-building-from-source.md
index 6c73b616643..61f1ce78097 100644
--- a/docs/source/using-executorch-building-from-source.md
+++ b/docs/source/using-executorch-building-from-source.md
@@ -63,7 +63,7 @@ Or alternatively, [install conda on your machine](https://conda.io/projects/cond
    ./install_executorch.sh
    ```
 
-   Use the [`--pybind` flag](https://github.com/pytorch/executorch/blob/main/install_executorch.sh#L26-L29) to install with pybindings and dependencies for other backends. 
+   Use the [`--pybind` flag](https://github.com/pytorch/executorch/blob/main/install_executorch.sh#L26-L29) to install with pybindings and dependencies for other backends.
    ```bash
    ./install_executorch.sh --pybind <coreml | mps | xnnpack>
 
@@ -82,7 +82,7 @@ Or alternatively, [install conda on your machine](https://conda.io/projects/cond
    For development mode, run the command with `--editable`, which allows us to modify Python source code and see changes reflected immediately.
    ```bash
    ./install_executorch.sh --editable [--pybind xnnpack]
-   
+
    # Or you can directly do the following if dependencies are already installed
    # either via a previous invocation of `./install_executorch.sh` or by explicitly installing requirements via `./install_requirements.sh` first.
    pip install -e .
@@ -196,7 +196,7 @@ I 00:00:00.000612 executorch:executor_runner.cpp:138] Setting up planned buffer
 I 00:00:00.000669 executorch:executor_runner.cpp:161] Method loaded.
 I 00:00:00.000685 executorch:executor_runner.cpp:171] Inputs prepared.
 I 00:00:00.000764 executorch:executor_runner.cpp:180] Model executed successfully.
-I 00:00:00.000770 executorch:executor_runner.cpp:184] 1 outputs: 
+I 00:00:00.000770 executorch:executor_runner.cpp:184] 1 outputs:
 Output 0: tensor(sizes=[1], [2.])
 ```
 
@@ -206,6 +206,8 @@ Output 0: tensor(sizes=[1], [2.])
 Following are instructions on how to perform cross-compilation for Android and iOS.
 
 ### Android
+
+#### Building the `executor_runner` binary
 - Prerequisite: [Android NDK](https://developer.android.com/ndk), choose one of the following:
   - Option 1: Download Android Studio by following the instructions to [install ndk](https://developer.android.com/studio/projects/install-ndk).
   - Option 2: Download Android NDK directly from [here](https://developer.android.com/ndk/downloads).
@@ -243,7 +245,7 @@ sh scripts/build_android_library.sh
 ```
 
 This script will build the AAR, which contains the Java API and its corresponding JNI library. Please see
-[this documentation](./using-executorch-android.md#using-aar-file) for usage.
+[this documentation](./using-executorch-android.md#using-aar-file-directly) for usage.
 
 ### iOS
 
@@ -278,5 +280,5 @@ Check out the [iOS Demo App](demo-apps-ios.md) tutorial for more info.
 You have successfully cross-compiled `executor_runner` binary to iOS and Android platforms. You can start exploring advanced features and capabilities. Here is a list of sections you might want to read next:
 
 * [Selective build](kernel-library-selective-build.md) to build the runtime that links to only kernels used by the program, which can provide significant binary size savings.
-* Tutorials on building [Android](./demo-apps-android.md) and [iOS](./demo-apps-ios.md) demo apps.
+* Tutorials on building [Android](https://github.com/pytorch-labs/executorch-examples/tree/main/dl3/android/DeepLabV3Demo#executorch-android-demo-app) and [iOS](./demo-apps-ios.md) demo apps.
 * Tutorials on deploying applications to embedded devices such as [ARM Cortex-M/Ethos-U](backends-arm-ethos-u.md) and [XTensa HiFi DSP](./backends-cadence.md).