Containerized global-workflow errors #3241
gspetro-NOAA asked this question in Q&A (Unanswered)
Replies: 2 comments, 5 replies
-
I updated the patch in https://github.com/noaa-epic/global-workflow-patch that will fix this problem.
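In case it helps anyone following along, a rough sketch of how a patch from that repository might be applied to an existing global-workflow checkout is below; this assumes the repository ships plain `.patch` files, and the clone location and paths are purely illustrative (check the patch repository's own instructions for the real procedure).

```bash
# Illustrative only: fetch the patch repository and apply its patches to an
# existing global-workflow checkout before rebuilding.
git clone https://github.com/noaa-epic/global-workflow-patch.git /tmp/gw-patch
cd /path/to/global-workflow            # hypothetical location of the checkout
git apply /tmp/gw-patch/*.patch        # assumes the repo contains plain .patch files
```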
-
@gspetro-NOAA
-
Hello,
I am running the containerized global-workflow on the NOAA ParallelWorks AWS cloud platform, and I've run into a few errors. The first error (after running `./build_all.sh gfs`) related to `CMEPS/mediator/med_time_mod.F90`, and I solved the build error by checking out and using hash `4c6c6a41` (which still had that file) instead of HEAD of develop. However, I'm not sure whether the error was container-related or more general to GW, so I wanted to share the problem here in case anyone has ideas. Here is the error message:

Now, I have successfully built the GW using hash `4c6c6a41`.
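For reference, the checkout-and-rebuild sequence was roughly the following; the clone URL and directory layout are illustrative, and inside the container the paths may differ.

```bash
# Roughly what was done: pin the global-workflow clone to the older commit that
# still contains CMEPS/mediator/med_time_mod.F90, then rebuild.
git clone https://github.com/NOAA-EMC/global-workflow.git
cd global-workflow
git checkout 4c6c6a41
git submodule update --init --recursive   # or sorc/checkout.sh, depending on how that commit manages components
cd sorc
./build_all.sh gfs
```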
However, when I follow the steps laid out by @mark-a-potts here, I run into a Jinja2 error. After running `./setup_expt.py gfs forecast-only --start cold --pslot c48_atm --app ATM --resdetatmos 48 --idate 2021032312 --edate 2021032312 --comroot /contrib/Gillian.Petro/gw/comroot --icsdir=/contrib/Gillian.Petro/gw/ICSDIR/C48C48mx500/20240610 --expdir /contrib/Gillian.Petro/gw/expdir`, I am getting the following error and don't know what to make of it:

It seems like the Jinja2 in use should be new enough to include `pass_eval_context`. Has anyone seen this before?
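For what it's worth, `pass_eval_context` was added in Jinja2 3.0, so I assume the thing to verify is which Python interpreter and Jinja2 version `setup_expt.py` is actually picking up inside the container; something like the following (commands shown for illustration):

```bash
# Confirm which interpreter and Jinja2 version are on the path;
# pass_eval_context requires Jinja2 >= 3.0.
python3 -c "import sys, jinja2; print(sys.executable, jinja2.__version__)"
python3 -c "from jinja2 import pass_eval_context; print('pass_eval_context is available')"

# If an older Jinja2 shows up, upgrading it in the active environment may help:
pip install --upgrade "jinja2>=3.0"
```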
I realize I'm not working on a supported system, so I don't expect a huge amount of debugging help, but I thought that perhaps this sort of thing has come up on other systems, and someone might have ideas.