
Commit 11fc16d

Thiago Crepaldi authored and facebook-github-bot committed

Remove HTML tags from README.md (pytorch#9296)

Summary: This change makes README.md compatible with both the GitHub and VSTS markdown engines. Images can be reduced if necessary.

Pull Request resolved: pytorch#9296
Differential Revision: D8874931
Pulled By: soumith
fbshipit-source-id: 0c530c1e00b06fc891301644c92c33007060bf27
1 parent 4ff636a commit 11fc16d

File tree

1 file changed (+11 −29 lines)


README.md

Lines changed: 11 additions & 29 deletions
@@ -1,4 +1,4 @@
-<p align="center"><img width="40%" src="docs/source/_static/img/pytorch-logo-dark.png" /></p>
+![PyTorch Logo](https://github.com/pytorch/pytorch/blob/master/docs/source/_static/img/pytorch-logo-dark.png)
 
 --------------------------------------------------------------------------------
 
@@ -34,32 +34,14 @@ See also the [ci.pytorch.org HUD](https://ezyang.github.io/pytorch-ci-hud/build/
 
 At a granular level, PyTorch is a library that consists of the following components:
 
-<table>
-<tr>
-    <td><b> torch </b></td>
-    <td> a Tensor library like NumPy, with strong GPU support </td>
-</tr>
-<tr>
-    <td><b> torch.autograd </b></td>
-    <td> a tape-based automatic differentiation library that supports all differentiable Tensor operations in torch </td>
-</tr>
-<tr>
-    <td><b> torch.nn </b></td>
-    <td> a neural networks library deeply integrated with autograd designed for maximum flexibility </td>
-</tr>
-<tr>
-    <td><b> torch.multiprocessing </b></td>
-    <td> Python multiprocessing, but with magical memory sharing of torch Tensors across processes. Useful for data loading and Hogwild training. </td>
-</tr>
-<tr>
-    <td><b> torch.utils </b></td>
-    <td> DataLoader, Trainer and other utility functions for convenience </td>
-</tr>
-<tr>
-    <td><b> torch.legacy(.nn/.optim) </b></td>
-    <td> legacy code that has been ported over from torch for backward compatibility reasons </td>
-</tr>
-</table>
+| Component | Description |
+| ---- | --- |
+| **torch** | a Tensor library like NumPy, with strong GPU support |
+| **torch.autograd** | a tape-based automatic differentiation library that supports all differentiable Tensor operations in torch |
+| **torch.nn** | a neural networks library deeply integrated with autograd designed for maximum flexibility |
+| **torch.multiprocessing** | Python multiprocessing, but with magical memory sharing of torch Tensors across processes. Useful for data loading and Hogwild training |
+| **torch.utils** | DataLoader, Trainer and other utility functions for convenience |
+| **torch.legacy(.nn/.optim)** | legacy code that has been ported over from torch for backward compatibility reasons |
 
 Usually one uses PyTorch either as:
 
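The new table above summarizes the core packages (torch, torch.autograd, and so on). A minimal sketch of the first two in use, assuming a standard PyTorch install (post-0.4 API with `requires_grad`):

```python
import torch

# torch: a Tensor library like NumPy
x = torch.ones(3, 3, requires_grad=True)

# torch.autograd: tape-based automatic differentiation.
# Operations on x are recorded; backward() replays the tape in reverse.
y = (x * x).sum()
y.backward()

print(x.grad)  # d(sum(x^2))/dx = 2x, so a 3x3 tensor of 2s
```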

@@ -72,7 +54,7 @@ Elaborating further:
 
 If you use NumPy, then you have used Tensors (a.k.a ndarray).
 
-<p align=center><img width="30%" src="docs/source/_static/img/tensor_illustration.png" /></p>
+![Tensor illustration](https://github.com/pytorch/pytorch/blob/master/docs/source/_static/img/tensor_illustration.png)
 
 PyTorch provides Tensors that can live either on the CPU or the GPU, and accelerate
 compute by a huge amount.
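The README text in this hunk notes that Tensors can live on either the CPU or the GPU behind the same API. A hedged sketch of that device-agnostic style (falling back to CPU when no GPU is present):

```python
import torch

a = torch.randn(100, 100)  # allocated on the CPU by default
b = torch.randn(100, 100)

# Move to the GPU only if one is available; the code below is identical either way.
device = "cuda" if torch.cuda.is_available() else "cpu"
a, b = a.to(device), b.to(device)

c = a @ b  # the same matmul runs on whichever device the inputs live on
print(c.device)
```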
@@ -99,7 +81,7 @@ from several research papers on this topic, as well as current and past work suc
 While this technique is not unique to PyTorch, it's one of the fastest implementations of it to date.
 You get the best of speed and flexibility for your crazy research.
 
-<p align=center><img width="80%" src="docs/source/_static/img/dynamic_graph.gif" /></p>
+![Dynamic graph](https://github.com/pytorch/pytorch/blob/master/docs/source/_static/img/dynamic_graph.gif)
 
 ### Python First
 
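This hunk sits in the README's section on dynamic computation graphs, where the graph is defined by running ordinary Python code. A small illustrative sketch (the loop bound is a hypothetical example, not from the README): plain Python control flow determines the graph's shape, and a fresh graph is recorded on every call.

```python
import torch

def f(x, n):
    # Define-by-run: a different n yields a differently shaped graph,
    # rebuilt from scratch each time f is called.
    for _ in range(n):
        x = x * 2
    return x.sum()

x = torch.ones(2, requires_grad=True)
f(x, 3).backward()
print(x.grad)  # each element was doubled 3 times, so the gradient is 8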

0 commit comments

Comments
 (0)