or run the script `./inference.sh`; you can change the parameters in the script, especially the data paths.
```shell
./inference.sh
```
#### inference with data workflow
AlphaFold's data pre-processing takes a lot of time, so we speed it up with a [ray](https://docs.ray.io/en/latest/workflows/concepts.html) workflow, which achieves about a 3x speedup. The `--enable_workflow` parameter turns this on and is set by default.
To reduce the memory usage of embedding representations, the `--inplace` parameter, which shares memory, is also set by default.
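The effect of in-place updates can be illustrated with a minimal NumPy sketch (an illustration only; the actual `--inplace` path operates on the model's embedding representations in PyTorch):

```python
import numpy as np

a = np.ones((1024, 1024), dtype=np.float32)
b = np.full((1024, 1024), 2.0, dtype=np.float32)

# Out-of-place: allocates a fresh result buffer in addition to a and b.
c = a + b

# In-place: writes the result back into a's existing buffer,
# avoiding the extra allocation -- the idea behind `--inplace`.
np.add(a, b, out=a)
```

Both compute the same values; the in-place version simply reuses an existing buffer instead of allocating a new one.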
#### inference with lower memory usage
AlphaFold's embedding representations take up a lot of memory as the sequence length increases. To reduce memory usage,
add the parameter `--chunk_size [N]` to the command line or the shell script `./inference.sh`.
The smaller you set N, the less memory is used, at the cost of speed. We can run inference on
a sequence of length 10000 in bf16 with 61 GB of memory on an Nvidia A100 (80 GB). For fp32, the maximum length is 8000.
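The memory/speed trade-off behind `--chunk_size` can be sketched as follows (a simplified NumPy illustration of the chunking technique, not the repository's actual implementation, which chunks the model's attention layers in PyTorch):

```python
import numpy as np

def chunked_attention(q, k, v, chunk_size):
    """softmax(q @ k.T) @ v, computed over row chunks of q.

    The full score matrix would be (n, n); chunking caps the live
    intermediate at (chunk_size, n). A smaller chunk_size lowers
    peak memory but adds loop overhead, i.e. reduces speed.
    """
    out = np.empty_like(q)
    for start in range(0, q.shape[0], chunk_size):
        qc = q[start:start + chunk_size]
        scores = qc @ k.T                             # the memory hot spot
        scores -= scores.max(axis=-1, keepdims=True)  # numerical stability
        w = np.exp(scores)
        w /= w.sum(axis=-1, keepdims=True)
        out[start:start + chunk_size] = w @ v
    return out

rng = np.random.default_rng(0)
q = rng.standard_normal((64, 16))
k = rng.standard_normal((64, 16))
v = rng.standard_normal((64, 16))
small = chunked_attention(q, k, v, chunk_size=4)   # low memory, more iterations
large = chunked_attention(q, k, v, chunk_size=64)  # one full pass
```

Any chunk size yields the same result; only the peak size of the intermediate score matrix changes.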
> You need to set `PYTORCH_CUDA_ALLOC_CONF=max_split_size_mb:15000` to run inference on such an extremely long sequence.
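If you launch from Python rather than the shell, the same setting can be applied in-process, provided it happens before the first CUDA allocation (a minimal sketch; the variable is read by PyTorch's CUDA caching allocator):

```python
import os

# Must be set before torch initializes CUDA, e.g. at the very top
# of the inference entry point, before `import torch`.
os.environ["PYTORCH_CUDA_ALLOC_CONF"] = "max_split_size_mb:15000"
```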