@@ -94,16 +94,31 @@ conda env create -f environment.yml
source activate deep3d_pytorch
```

- 2 . Install Nvdiffrast library (Only needed for training and testing with rendering/visualization):
- ```
- git clone -b 0.3.0 https://github.com/NVlabs/nvdiffrast
- cd nvdiffrast # ./Deep3DFaceRecon_pytorch/nvdiffrast
- pip install .
- ```
+ 2. Install a mesh renderer:
+ 1. Nvdiffrast library (necessary for training, optional for testing):
+ ```
+ git clone -b 0.3.0 https://github.com/NVlabs/nvdiffrast
+ cd nvdiffrast # ./Deep3DFaceRecon_pytorch/nvdiffrast
+ pip install .
+ cd .. # ./Deep3DFaceRecon_pytorch
+ ```
+ 2. Use the CPU renderer from 3DDFA-V3 instead for testing (it also works on macOS):
+ ```
+ git clone --depth=1 https://github.com/wang-zidu/3DDFA-V3
+ cp 3DDFA-V3/utils/cpu_renderer.py ./utils/
+ cp -r 3DDFA-V3/utils/cython_renderer ./utils/
+
+ pip install Cython
+
+ cd utils/cython_renderer/
+ python setup.py build_ext -i
+ cd ../.. # ./Deep3DFaceRecon_pytorch
+ ```
+ 3. Or skip installing a renderer entirely for inference/testing; in that case you need to run test.py with the "--renderer_type none --no_viz" options (see the example command after this list).
+
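+ If you go with option 3, the invocation looks like the following (a sketch assembled from the macOS examples later in this README; <model_name> and the image folder are placeholders):
+ ```
+ # reconstruct without any renderer and without visualization output
+ python test.py --name=<model_name> --epoch=20 --img_folder=<folder_to_test_images> --renderer_type none --no_viz
+ ```
+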
3. Install Arcface Pytorch:
```
- cd .. # ./Deep3DFaceRecon_pytorch
git clone https://github.com/deepinsight/insightface.git
cp -r ./insightface/recognition/arcface_torch ./models/
```
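+ As an optional sanity check (a suggestion added here, not part of the original instructions), you can confirm that the renderer and the copied Arcface code are importable from the repo root; the second command assumes the copied package is used as models.arcface_torch:
+ ```
+ # run from ./Deep3DFaceRecon_pytorch; both commands should exit without an ImportError
+ python -c "import nvdiffrast.torch"
+ python -c "from models.arcface_torch.backbones import get_model"
+ ```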
@@ -183,19 +198,29 @@ On **MacOS**, you can run the test script with CPU or Apple Silicon (M1, M2, M3
run with MPS:
```
# get reconstruction results of your custom images
- python test.py --name=<model_name> --epoch=20 --img_folder=<folder_to_test_images> --renderer_type none --device cpu --no_viz
+ python test.py --name=<model_name> --epoch=20 --img_folder=<folder_to_test_images> --renderer_type none --device mps
+
+ # no visualization
+ python test.py --name=<model_name> --epoch=20 --img_folder=<folder_to_test_images> --renderer_type none --device mps --no_viz
# get reconstruction results of example images
- python test.py --name=<model_name> --epoch=20 --img_folder=./datasets/examples --renderer_type none --device cpu --no_viz
+ python test.py --name=<model_name> --epoch=20 --img_folder=./datasets/examples --renderer_type none --device mps
+
+ # no visualization
+ python test.py --name=<model_name> --epoch=20 --img_folder=./datasets/examples --renderer_type none --device mps --no_viz
```

or run with CPU:
```
# get reconstruction results of your custom images
- python test.py --name=<model_name> --epoch=20 --img_folder=<folder_to_test_images> --renderer_type none --device mps --no_viz
+ python test.py --name=<model_name> --epoch=20 --img_folder=<folder_to_test_images> --renderer_type none --device cpu
+
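+ # no visualization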
+ python test.py --name=<model_name> --epoch=20 --img_folder=<folder_to_test_images> --renderer_type none --device cpu --no_viz
# get reconstruction results of example images
- python test.py --name=<model_name> --epoch=20 --img_folder=./datasets/examples --renderer_type none --device mps --no_viz
+ python test.py --name=<model_name> --epoch=20 --img_folder=./datasets/examples --renderer_type none --device cpu
+
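+ # no visualization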
+ python test.py --name=<model_name> --epoch=20 --img_folder=./datasets/examples --renderer_type none --device cpu --no_viz
```
**_Following [#108](https://github.com/sicxu/Deep3DFaceRecon_pytorch/issues/108), if you don't have OpenGL environment, you can simply add "--use_opengl False" to use CUDA context. Make sure you have updated the nvdiffrast to the latest version._**
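+ For example (a sketch based on the note above; <model_name> is a placeholder, and the flag assumes a CUDA machine with nvdiffrast installed):
+ ```
+ # use a CUDA context instead of OpenGL for rasterization
+ python test.py --name=<model_name> --epoch=20 --img_folder=./datasets/examples --use_opengl False
+ ```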