Commit cf34867

pre-commit-ci[bot] authored and lantiga committed

[pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci

1 parent cc457fe commit cf34867

File tree

5 files changed: +18 −22 lines

.github/workflows/README.md

Lines changed: 10 additions & 10 deletions
```diff
@@ -6,16 +6,16 @@ Brief description of all our automation tools used for boosting development perf
 
 ## Unit and Integration Testing
 
-| workflow file | action | accelerator |
-| -------------------------------------- | ----------------------------------------------------------------------------------------- | ----------- |
-| .github/workflows/ci-tests-fabric.yml | Run all tests except for accelerator-specific and standalone. | CPU |
-| .github/workflows/ci-tests-pytorch.yml | Run all tests except for accelerator-specific and standalone. | CPU |
-| .github/workflows/ci-tests-data.yml | Run unit and integration tests with data pipelining. | CPU |
-| .azure-pipelines/gpu-tests-fabric.yml | Run only GPU-specific tests, standalone\*, and examples. | GPU |
-| .azure-pipelines/gpu-tests-pytorch.yml | Run only GPU-specific tests, standalone\*, and examples. | GPU |
-| .azure-pipelines/gpu-benchmarks.yml | Run speed/memory benchmarks for parity with vanila PyTorch. | GPU |
-| .github/workflows/ci-tests-pytorch.yml | Run all tests except for accelerator-specific, standalone and slow tests. | CPU |
-| .github/workflows/tpu-tests.yml | Run only TPU-specific tests. Requires that the PR title contains '\[TPU\]' | TPU |
+| workflow file | action | accelerator |
+| -------------------------------------- | -------------------------------------------------------------------------- | ----------- |
+| .github/workflows/ci-tests-fabric.yml | Run all tests except for accelerator-specific and standalone. | CPU |
+| .github/workflows/ci-tests-pytorch.yml | Run all tests except for accelerator-specific and standalone. | CPU |
+| .github/workflows/ci-tests-data.yml | Run unit and integration tests with data pipelining. | CPU |
+| .azure-pipelines/gpu-tests-fabric.yml | Run only GPU-specific tests, standalone\*, and examples. | GPU |
+| .azure-pipelines/gpu-tests-pytorch.yml | Run only GPU-specific tests, standalone\*, and examples. | GPU |
+| .azure-pipelines/gpu-benchmarks.yml | Run speed/memory benchmarks for parity with vanila PyTorch. | GPU |
+| .github/workflows/ci-tests-pytorch.yml | Run all tests except for accelerator-specific, standalone and slow tests. | CPU |
+| .github/workflows/tpu-tests.yml | Run only TPU-specific tests. Requires that the PR title contains '\[TPU\]' | TPU |
 
 \* Each standalone test needs to be run in separate processes to avoid unwanted interactions between test cases.
 
```
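The standalone-test rule in the note above can be sketched in a few lines of Python: launch each test id in its own interpreter process, so global state set up by one test (a CUDA context, a `torch.distributed` process group) cannot leak into the next. This is a minimal illustration, not the repository's actual test runner; `run_standalone` and the stand-in subprocess command are assumptions.

```python
import subprocess
import sys


def run_standalone(test_ids):
    """Run each test id in a fresh interpreter process.

    Process-level isolation prevents global state (CUDA contexts,
    distributed process groups) from leaking between test cases.
    """
    returncodes = {}
    for test_id in test_ids:
        # A real runner would invoke `pytest <test_id>` here; a trivial
        # `-c` program stands in so the sketch is self-contained.
        proc = subprocess.run([sys.executable, "-c", "pass"])
        returncodes[test_id] = proc.returncode
    return returncodes
```

A nonzero return code from any child process would mark that standalone test as failed without disturbing the others.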

.github/workflows/ci-pkg-extend.yml

Lines changed: 0 additions & 2 deletions
```diff
@@ -26,7 +26,6 @@ defaults:
     shell: bash
 
 jobs:
-
   import-pkg:
     runs-on: ${{ matrix.os }}
     strategy:
@@ -50,4 +49,3 @@ jobs:
       - name: Try importing
         run: from lightning.${{ matrix.pkg-name }} import *
         shell: python
-
```
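The step at the end of this hunk uses `shell: python`, so the `run:` body is executed as a Python program rather than a shell script; a bare `from <pkg> import *` then serves as a smoke test that the package's public namespace imports cleanly. The same pattern, sketched here against a stdlib module (`math` is only a stand-in for the real package):

```python
# `from lightning.${{ matrix.pkg-name }} import *` under `shell: python`
# fails the CI step if the import raises. Same idea with a stdlib module:
from math import *  # noqa: F403 -- the star import *is* the test

# If the line above succeeded, every exported name is now in scope.
assert pi > 3.14  # noqa: F405
```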

examples/fabric/tensor_parallel/train.py

Lines changed: 1 addition & 2 deletions
```diff
@@ -1,14 +1,13 @@
 import lightning as L
 import torch
 import torch.nn.functional as F
+from data import RandomTokenDataset
 from lightning.fabric.strategies import ModelParallelStrategy
 from model import ModelArgs, Transformer
 from parallelism import parallelize
 from torch.distributed.tensor.parallel import loss_parallel
 from torch.utils.data import DataLoader
 
-from data import RandomTokenDataset
-
 
 def train():
     strategy = ModelParallelStrategy(
```

examples/pytorch/tensor_parallel/train.py

Lines changed: 1 addition & 2 deletions
```diff
@@ -1,14 +1,13 @@
 import lightning as L
 import torch
 import torch.nn.functional as F
+from data import RandomTokenDataset
 from lightning.pytorch.strategies import ModelParallelStrategy
 from model import ModelArgs, Transformer
 from parallelism import parallelize
 from torch.distributed.tensor.parallel import loss_parallel
 from torch.utils.data import DataLoader
 
-from data import RandomTokenDataset
-
 
 class Llama3(L.LightningModule):
     def __init__(self):
```

src/lightning/app/__init__.py

Lines changed: 6 additions & 6 deletions
```diff
@@ -13,12 +13,12 @@
 # Enable resolution at least for lower data namespace
 sys.modules["lightning.app"] = lightning_app
 
-from lightning_app.core.app import LightningApp  # noqa: E402
-from lightning_app.core.flow import LightningFlow  # noqa: E402
-from lightning_app.core.work import LightningWork  # noqa: E402
-from lightning_app.plugin.plugin import LightningPlugin  # noqa: E402
-from lightning_app.utilities.packaging.build_config import BuildConfig  # noqa: E402
-from lightning_app.utilities.packaging.cloud_compute import CloudCompute  # noqa: E402
+from lightning_app.core.app import LightningApp
+from lightning_app.core.flow import LightningFlow
+from lightning_app.core.work import LightningWork
+from lightning_app.plugin.plugin import LightningPlugin
+from lightning_app.utilities.packaging.build_config import BuildConfig
+from lightning_app.utilities.packaging.cloud_compute import CloudCompute
 
 if module_available("lightning_app.components.demo"):
     from lightning.app.components import demo  # noqa: F401
```
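The dropped `# noqa: E402` markers suppressed "module-level import not at top of file": these imports deliberately sit below an assignment to `sys.modules`, which registers the alias they rely on. A minimal sketch of that aliasing pattern follows; `my_alias` and `answer` are hypothetical names for illustration, not Lightning's.

```python
import sys
import types

# Register a module object under an alias *before* importing it. The
# import system consults sys.modules first, so the import below succeeds
# even though no file named my_alias.py exists on disk.
backend = types.ModuleType("my_alias")
backend.answer = 42
sys.modules["my_alias"] = backend

import my_alias  # executable code precedes this line, hence E402

print(my_alias.answer)  # -> 42
```

Because the alias must exist before the imports run, the imports cannot be moved to the top of the file; the linter is silenced (or configured) instead.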

0 commit comments