Hello, thank you for making this codebase open-source, it's great!
I'm having the following issue: I'm fine-tuning the FFHQ model on my own dataset. Since I'm training on Colab, I have to do this piecewise, so I end up training for as long as possible and then restarting from the latest snapshot.
The problem is that when I look at the losses, they seem to start from scratch every time. I've included a screenshot of the losses for two consecutive runs. I call train.py with the following arguments (other than the snapshot and data paths):
--augpipe=bg --gamma=10 --cfg=paper256 --mirror=1 --snap=10 --metrics=none
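For reference, here's roughly how I restart each session: I pick the most recent network-snapshot-*.pkl and pass it to --resume. This is just a sketch of my own Colab setup (the Drive paths and results layout are mine, not part of the repo):

```python
# Sketch of my restart step on Colab: find the newest snapshot across all
# previous run directories and resume training from it. RESULTS_DIR and the
# dataset path are placeholders from my own setup.
import glob
import os
import subprocess

RESULTS_DIR = "/content/drive/MyDrive/stylegan/results"  # my own location

snapshots = sorted(
    glob.glob(os.path.join(RESULTS_DIR, "*", "network-snapshot-*.pkl")),
    key=os.path.getmtime,
)
assert snapshots, "no snapshots found; start a fresh run instead"
latest = snapshots[-1]  # most recently written snapshot

subprocess.run([
    "python", "train.py",
    "--outdir", RESULTS_DIR,
    "--data", "/content/dataset.zip",  # my dataset path
    "--resume", latest,
    "--augpipe", "bg", "--gamma", "10", "--cfg", "paper256",
    "--mirror", "1", "--snap", "10", "--metrics", "none",
], check=True)
```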
Would you say this is normal? And if so, what's the best way to get a sense of progress, other than manually inspecting the outputs?
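For now, what I do is stitch together the losses from each run and plot them on one axis. This assumes the PyTorch version of the repo, which (as far as I can tell) writes a stats.jsonl per run directory; the stat names below ("Progress/kimg", "Loss/G/loss") are my reading of that file and may need adjusting:

```python
# Rough sketch: concatenate per-run stats.jsonl files and plot the generator
# loss against a cumulative kimg axis. Stat names are assumptions based on
# my own runs; RESULTS_DIR is a placeholder from my setup.
import glob
import json
import os

import matplotlib.pyplot as plt

RESULTS_DIR = "/content/drive/MyDrive/stylegan/results"  # my own location

kimg, g_loss, offset = [], [], 0.0
for run_dir in sorted(glob.glob(os.path.join(RESULTS_DIR, "*"))):
    stats_path = os.path.join(run_dir, "stats.jsonl")
    if not os.path.isfile(stats_path):
        continue
    last = 0.0
    with open(stats_path) as f:
        for line in f:
            entry = json.loads(line)
            if "Progress/kimg" in entry and "Loss/G/loss" in entry:
                last = entry["Progress/kimg"]["mean"]
                kimg.append(offset + last)  # kimg restarts at 0 on resume
                g_loss.append(entry["Loss/G/loss"]["mean"])
    offset += last  # shift the next run past the previous one

plt.plot(kimg, g_loss)
plt.xlabel("cumulative kimg")
plt.ylabel("Loss/G/loss (mean)")
plt.show()
```

Any pointers would be appreciated. Thanks!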