Hi guys,
Lately I have been working on and exploring the combination of the two (CML and DVC) to solve ML problems. One of the most interesting requests is the ability to resume training from a checkpoint in case something fails in the middle of a several-days-long training job. This is especially needed for spot instances, which the vendor can interrupt with only 30 seconds to take action.
The ideal solution we came up with was using the DVC cache. However, the CML and DVC integration is not yet smooth.
The requirements are:

- The DVC pipeline runs in the CI runners (hosted or self-hosted), never locally.
- Every batch, the checkpoints are stored in the DVC cache so they can be restored if the CI workflow has an issue; restarting the workflow should resume training from that stage.
So let's set up the DVC pipeline with a very easy to follow example:
train.sh
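The script body is not reproduced above; based on the behaviour described further down (step1 is written, the job sleeps, then step2 is written), a minimal sketch looks like this. The sleep duration is an assumption:

```bash
#!/bin/bash
# Toy "training": each step appends a checkpoint marker to model.data.
# The sleep leaves a window in which the runner can die, which is the
# failure mode we want to recover from.
set -e

echo step1 >> model.data
sleep 60   # assumed duration, long enough for the runner to be interrupted
echo step2 >> model.data
```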
our GitHub workflow file will be:
.github/workflows/cml.yaml
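The full workflow isn't included above either; here is a minimal sketch of the training job, assuming the stock CML container image, `actions/checkout@v2`, and S3 credentials provided as repository secrets (the image tag and secret names are assumptions, not taken from the actual repo):

```yaml
name: train
on: [push]
jobs:
  train:
    runs-on: ubuntu-latest
    container: dvcorg/cml-py3:latest   # assumed CML image
    steps:
      - uses: actions/checkout@v2
      - name: train
        env:
          AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }}
          AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
        run: |
          # try to restore checkpoints from a previous (failed) run;
          # tolerate the failure on the very first run, when the cache is empty
          dvc pull --run-cache -f || echo 'failed dvc pull :('
          # run the pipeline; the train stage appends to model.data
          dvc repro
          # save the outputs and the run cache so an interrupted job can resume
          dvc push --run-cache
```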
we set up DVC with:
```bash
dvc init
dvc remote add s3 s3://your-s3-bucket
dvc remote default s3
dvc run --no-exec -n train \
    --outs-persist model.data \
    ./train.sh
git add --all
git commit -m 'dvc'
dvc push
git push
```
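For reference, the `dvc run --no-exec` call above should leave a stage roughly like the following in dvc.yaml. `--outs-persist` maps to `persist: true`, which is what keeps model.data from being deleted before each repro, so the appended steps accumulate (a sketch, not copied from the actual repo):

```yaml
stages:
  train:
    cmd: ./train.sh
    outs:
      - model.data:
          persist: true
```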
Let's review it:
Ideally, if our runner dies after step1 (while sleeping), model.data should contain

step1

If we then restart the workflow, we should end up with model.data containing

step1
step1
step2

Well, that's the ideal scenario... In practice, DVC is not actually caching anything.
In fact, the issue seems to come from having created the pipeline with `--no-exec`, i.e. without an actual run.
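If that's the case, the run cache has nothing in it until the stage has actually executed and been pushed at least once; roughly this has to have happened on some earlier run before `dvc pull --run-cache` can restore anything (a sketch, assuming the default remote configured above):

```bash
dvc repro             # actually runs train.sh and records the result in the local run cache
dvc push --run-cache  # uploads the outputs and the run cache to the remote
```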
Please note the `dvc pull --run-cache -f || echo 'failed dvc pull :('` step in the workflow above.
I put the `||` there as a try/catch, since `dvc pull` will always fail the very first time, when there is nothing in the cache yet. But after restarting a failed workflow we should recover model.data with

step1

inside. DVC does not seem to handle well a deferred repro and a pull without ids or with an empty cache. In fact, is that why the cache is not working? This is the output we get:

WARNING: Some of the cache files do not exist neither locally nor on remote. Missing cache files:
name: model.data, md5: e0b7ab6cd3e2df496849e69c355045a7
WARNING: Cache 'e0b7ab6cd3e2df496849e69c355045a7' not found. File 'model.data' won't be created.
1 file failed
ERROR: failed to pull data from the cloud - Checkout failed for following targets:
model.data
Did you forget to fetch?
Having any troubles? Hit us up at https://dvc.org/support, we are always happy to help!
Any ideas @dmpetrov?