We were honored to host the PyTorch Shanghai Meetup on August 15, 2024. The Meetup received great attention from the industry: we invited senior PyTorch developers from Intel and Huawei as guest speakers, who shared their valuable experience and the latest technical trends, and the event also attracted PyTorch enthusiasts from many technology companies and well-known universities. More than 40 participants gathered to discuss and exchange the latest applications and technological advances of PyTorch.
This Meetup not only strengthened the connections between PyTorch community members, but also provided a platform for local AI technology enthusiasts to learn, communicate, and grow. We look forward to the next gathering and to continuing to promote the development of PyTorch technology locally.
PyTorch Board member Fred Li shared the latest updates from the PyTorch community. He reviewed the community's development history, explained in detail the growth path for community developers, encouraged everyone to delve deeper into the technology, and introduced the upcoming PyTorch Conference 2024.
## 2. Intel’s Journey with PyTorch: Democratizing AI with Ubiquitous Hardware and Open Software
PyTorch CPU module maintainer Jiong Gong shared Intel's six years of technical contributions to PyTorch and its ecosystem, and explored the remarkable advancements Intel has made in both software and hardware to democratize AI, ensuring accessibility and optimizing performance across a diverse range of Intel hardware platforms.
Fengchun Hua, a PyTorch contributor from Huawei, used the Huawei Ascend NPU as an example to demonstrate the latest achievements in multi-backend support for PyTorch applications. He introduced the hardware features of the Ascend NPU and the infrastructure of CANN (Compute Architecture for Neural Networks), explained the key achievements and innovations in the native-support work, and shared the current challenges and the plan for the next stage of work.
Yuanhao Ji, another PyTorch contributor from Huawei, then introduced the Autoload Device Extension proposal, explained its implementation details and its value in improving PyTorch's scalability, and presented the latest progress of the PyTorch Chinese community.
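For context, the idea behind the Autoload Device Extension proposal is that an out-of-tree backend package can be discovered and initialized automatically when `import torch` runs, via a Python entry point, so users no longer need an explicit `import vendor_backend` line. The sketch below shows how such a package might declare that entry point; the entry-point group name `torch.backends` follows the proposal as we understand it, and the package name `torch_foo` and hook `_autoload` are purely illustrative assumptions:

```python
# setup.py for a hypothetical out-of-tree PyTorch backend package.
# The entry point in the "torch.backends" group (group name assumed from the
# autoload proposal) points at an initialization hook that `import torch`
# can discover and call, so the backend registers itself automatically.
from setuptools import setup

setup(
    name="torch-foo-backend",        # hypothetical package
    packages=["torch_foo"],
    entry_points={
        "torch.backends": [
            "torch_foo = torch_foo:_autoload",  # hypothetical init hook
        ],
    },
)
```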
Eikan is a PyTorch contributor from Intel who focuses on the torch.compile stack for both Intel CPU and GPU. In this session, Eikan presented Intel's efforts on torch.compile for Intel GPUs. He provided updates on the current status of Intel GPUs within PyTorch, covering both functionality and performance, and used the Intel GPU as a case study to demonstrate how to integrate a new backend into Inductor using Triton.
## 5. PyTorch PrivateUse1 Evolution: Approaches and Insights
Jiawei Li, a PyTorch collaborator from Huawei, introduced PyTorch's Dispatch mechanism and emphasized the limitations of DispatchKey. Using the Huawei Ascend NPU as an example, he shared best practices for the PyTorch PrivateUse1 mechanism. He mentioned that, while using PrivateUse1, Huawei has also submitted many improvements and bug fixes for the mechanism to the PyTorch community. He noted that because upstream CI does not yet cover out-of-tree devices, changes in upstream code can affect their stability and quality; this insight resonated with the audience.
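For readers unfamiliar with the mechanism: PyTorch's dispatcher routes each operator call to a concrete kernel based on a dispatch key (roughly, the device/backend of the inputs), and PrivateUse1 is a reserved key that an out-of-tree backend such as the Ascend NPU can claim for its own kernels. The toy sketch below illustrates the general idea of key-based dispatch in plain Python; it is a deliberate simplification, not PyTorch's actual C++ dispatcher API:

```python
# Toy illustration of dispatch-key routing (NOT PyTorch's real dispatcher).
# Each operator has a table from dispatch key ("CPU", "CUDA", "PrivateUse1",
# ...) to a kernel; a call is routed by the key of its inputs.

KERNELS = {}  # (op_name, dispatch_key) -> kernel function


def register_kernel(op, key):
    """Decorator that registers a kernel for (op, key)."""
    def deco(fn):
        KERNELS[(op, key)] = fn
        return fn
    return deco


def dispatch(op, key, *args):
    """Route a call to the kernel registered for this dispatch key."""
    fn = KERNELS.get((op, key))
    if fn is None:
        raise NotImplementedError(f"no kernel for {op} on {key}")
    return fn(*args)


@register_kernel("add", "CPU")
def add_cpu(a, b):
    return a + b


# An out-of-tree backend claims the reserved PrivateUse1 key and plugs its
# kernels in without modifying the "core" dispatch code above.
@register_kernel("add", "PrivateUse1")
def add_npu(a, b):
    return a + b  # a real backend would call into the vendor runtime here


print(dispatch("add", "CPU", 1, 2))          # 3
print(dispatch("add", "PrivateUse1", 3, 4))  # 7
```

The key point of the design is that the core never needs to know the backend exists ahead of time; registering under a reserved key is enough to receive calls.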
`_get_started/installation/linux.md` (+5 −5):

```diff
@@ -1,7 +1,7 @@
 # Installing on Linux
 {:.no_toc}
 
-PyTorch can be installed and used on various Linux distributions. Depending on your system and compute requirements, your experience with PyTorch on Linux may vary in terms of processing time. It is recommended, but not required, that your Linux system has an NVIDIA or AMD GPU in order to harness the full power of PyTorch's [CUDA](https://developer.nvidia.com/cuda-zone)[support](https://pytorch.org/tutorials/beginner/blitz/tensor_tutorial.html?highlight=cuda#cuda-tensors) or [ROCm](https://docs.amd.com) support.
+PyTorch can be installed and used on various Linux distributions. Depending on your system and compute requirements, your experience with PyTorch on Linux may vary in terms of processing time. It is recommended, but not required, that your Linux system has an NVIDIA or AMD GPU in order to harness the full power of PyTorch's [CUDA](https://developer.nvidia.com/cuda-zone)[support](https://pytorch.org/tutorials/beginner/blitz/tensor_tutorial.html?highlight=cuda#cuda-tensors) or [ROCm](https://rocm.docs.amd.com/) support.
 
 ## Prerequisites
 {: #linux-prerequisites}
@@ -80,7 +80,7 @@ sudo apt install python3-pip
 
 #### No CUDA/ROCm
 
-To install PyTorch via Anaconda, and do not have a [CUDA-capable](https://developer.nvidia.com/cuda-zone) or [ROCm-capable](https://docs.amd.com) system or do not require CUDA/ROCm (i.e. GPU support), in the above selector, choose OS: Linux, Package: Conda, Language: Python and Compute Platform: CPU.
+To install PyTorch via Anaconda, and do not have a [CUDA-capable](https://developer.nvidia.com/cuda-zone) or [ROCm-capable](https://rocm.docs.amd.com/) system or do not require CUDA/ROCm (i.e. GPU support), in the above selector, choose OS: Linux, Package: Conda, Language: Python and Compute Platform: CPU.
 
 Then, run the command that is presented to you.
 
 #### With CUDA
@@ -98,7 +98,7 @@ PyTorch via Anaconda is not supported on ROCm currently. Please use pip instead.
 
 #### No CUDA
 
-To install PyTorch via pip, and do not have a [CUDA-capable](https://developer.nvidia.com/cuda-zone) or [ROCm-capable](https://docs.amd.com) system or do not require CUDA/ROCm (i.e. GPU support), in the above selector, choose OS: Linux, Package: Pip, Language: Python and Compute Platform: CPU.
+To install PyTorch via pip, and do not have a [CUDA-capable](https://developer.nvidia.com/cuda-zone) or [ROCm-capable](https://rocm.docs.amd.com/) system or do not require CUDA/ROCm (i.e. GPU support), in the above selector, choose OS: Linux, Package: Pip, Language: Python and Compute Platform: CPU.
 
 Then, run the command that is presented to you.
 
 #### With CUDA
@@ -108,7 +108,7 @@ Then, run the command that is presented to you.
 
 #### With ROCm
 
-To install PyTorch via pip, and do have a [ROCm-capable](https://docs.amd.com) system, in the above selector, choose OS: Linux, Package: Pip, Language: Python and the ROCm version supported.
+To install PyTorch via pip, and do have a [ROCm-capable](https://rocm.docs.amd.com/) system, in the above selector, choose OS: Linux, Package: Pip, Language: Python and the ROCm version supported.
 
 Then, run the command that is presented to you.
 
 ## Verification
@@ -151,7 +151,7 @@ For the majority of PyTorch users, installing from a pre-built binary via a pack
 1. Install [Anaconda](#anaconda) or [Pip](#pip)
 2. If you need to build PyTorch with GPU support
 a. for NVIDIA GPUs, install [CUDA](https://developer.nvidia.com/cuda-downloads), if your machine has a [CUDA-enabled GPU](https://developer.nvidia.com/cuda-gpus).
-b. for AMD GPUs, install [ROCm](https://docs.amd.com), if your machine has a [ROCm-enabled GPU](https://docs.amd.com)
+b. for AMD GPUs, install [ROCm](https://rocm.docs.amd.com/), if your machine has a [ROCm-enabled GPU](https://rocm.docs.amd.com/)
 3. Follow the steps described here: [https://github.com/pytorch/pytorch#from-source](https://github.com/pytorch/pytorch#from-source)
 
 You can verify the installation as described [above](#linux-verification).
```
`_includes/header.html` (+1 −1):

```diff
@@ -1,6 +1,6 @@
 <div class="hello-bar">
   <div class="container">
-    Join us in Silicon Valley September 18-19 at the 2024 PyTorch Conference. <a target="_blank" href="https://events.linuxfoundation.org/pytorch-conference/">Learn more</a>.
+    Join us in Silicon Valley September 18-19 at the 2024 PyTorch Conference. <a target="_blank" href="https://events.linuxfoundation.org/pytorch-conference/?utm_source=www&utm_medium=homepage&utm_campaign=Pytorch-Conference-2024&utm_content=hello">Learn more</a>.
```
`_mobile/android.md` (+5):

```diff
@@ -8,6 +8,11 @@ order: 3
 published: true
 ---
 
+<div class="note-card">
+<h4>Note</h4>
+<p>PyTorch Mobile is no longer actively supported. Please check out <a href="/executorch-overview">ExecuTorch</a>, PyTorch’s all-new on-device inference library. You can also review <a href="https://pytorch.org/executorch/stable/demo-apps-android.html">this page</a> to learn more about how to use ExecuTorch to build an Android app.</p>
+</div>
+
```
`_mobile/home.md` (+5):

```diff
@@ -9,6 +9,11 @@ published: true
 redirect_from: "/mobile/"
 ---
 
+<div class="note-card">
+<h4>Note</h4>
+<p>PyTorch Mobile is no longer actively supported. Please check out <a href="/executorch-overview">ExecuTorch</a>, PyTorch’s all-new on-device inference library.</p>
+</div>
+
 # PyTorch Mobile
 
 There is a growing need to execute ML models on edge devices to reduce latency, preserve privacy, and enable new interactive use cases.
```
`_mobile/ios.md` (+5):

```diff
@@ -8,6 +8,11 @@ order: 2
 published: true
 ---
 
+<div class="note-card">
+<h4>Note</h4>
+<p>PyTorch Mobile is no longer actively supported. Please check out <a href="/executorch-overview">ExecuTorch</a>, PyTorch’s all-new on-device inference library. You can also review <a href="https://pytorch.org/executorch/stable/demo-apps-ios.html">this page</a> to learn more about how to use ExecuTorch to build an iOS app.</p>
+</div>
+
 # iOS
 
 To get started with PyTorch on iOS, we recommend exploring the following [HelloWorld](https://github.com/pytorch/ios-demo-app/tree/master/HelloWorld).
```