diff --git a/.gitignore b/.gitignore index dc7dcf33a2..5d778b370c 100644 --- a/.gitignore +++ b/.gitignore @@ -2,3 +2,4 @@ testing-project .mypy_cache poetry.lock +dev-link/ diff --git a/.travis.yml b/.travis.yml index 2a0fec738b..ad7e0349a3 100644 --- a/.travis.yml +++ b/.travis.yml @@ -9,4 +9,4 @@ services: - docker script: -- bash ./test.sh +- bash ./scripts/test.sh diff --git a/CONTRIBUTING.md b/CONTRIBUTING.md new file mode 100644 index 0000000000..d95d76171c --- /dev/null +++ b/CONTRIBUTING.md @@ -0,0 +1,83 @@ +# Contributing + +Here are some short guidelines to follow if you want to contribute to the development of the Full Stack FastAPI PostgreSQL project generator itself. + +After you clone the project, there are several scripts that can help during development. + +* `./scripts/dev-fsfp.sh`: + +Generate a new default project `dev-fsfp`. + +Call it from one level above the project directory. So, if the project is at `~/code/full-stack-fastapi-postgresql/`, call it from `~/code/`, like: + +```console +$ cd ~/code/ + +$ bash ./full-stack-fastapi-postgresql/scripts/dev-fsfp.sh +``` + +It will generate a new project with all the defaults at `~/code/dev-fsfp/`. + +You can go to that directory with a full new project, edit files and test things, for example: + +```console +$ cd ./dev-fsfp/ + +$ docker-compose up -d +``` + +It is generated outside of the project generator directory so that you can add Git to it and compare versions and changes. + +* `./scripts/dev-fsfp-back.sh`: + +Move the changes from a project `dev-fsfp` back to the project generator. + +You would call it after calling `./scripts/dev-fsfp.sh` and adding some modifications to `dev-fsfp`. + +Call it from one level above the project directory. So, if the project is at `~/code/full-stack-fastapi-postgresql/`, call it from `~/code/`, like: + +```console +$ cd ~/code/ + +$ bash ./full-stack-fastapi-postgresql/scripts/dev-fsfp-back.sh +``` + +The project generator will then also contain all the generated files with the generated variables, but it lets you compare the changes in `dev-fsfp` and the source in the project generator with Git, and see what to commit. + +* `./scripts/discard-dev-files.sh`: + +After using `./scripts/dev-fsfp-back.sh`, there will be a bunch of generated files with the variables for the generated project that you don't want to commit, like `README.md` and `.gitlab-ci.yml`. + +To discard all those changes at once, run `discard-dev-files.sh` from the root of the project, e.g.: + +```console +$ cd ~/code/full-stack-fastapi-postgresql/ + +$ bash ./scripts/discard-dev-files.sh +``` + +* `./scripts/test.sh`: + +Run the tests. It creates a project `testing-project` *inside* the project generator and runs its tests. + +Call it from the root of the project, e.g.: + +```console +$ cd ~/code/full-stack-fastapi-postgresql/ + +$ bash ./scripts/test.sh +``` + +* `./scripts/dev-link.sh`: + +Set up a local directory with links to the source files, for live development directly on them. + +This script generates a project `dev-link` *inside* the project generator, just to generate the `.env` and `./frontend/.env` files. + +Then it removes everything except those 2 files. + +Then it creates links to each of the source files and adds those 2 files back. + +The end result is that you can go into the `dev-link` directory and develop locally with it as if it were a generated project, with all the variables set. But all the changes are actually done directly in the source files. + +This is probably a much faster way to iterate than using `./scripts/dev-fsfp.sh`. 
But it has only been tested on Linux, so it might not work on other systems. diff --git a/README.md b/README.md index 3d6bac8bfe..2974cd0d0c 100644 --- a/README.md +++ b/README.md @@ -117,12 +117,12 @@ The input variables, with their default values (some auto generated) are: * `pgadmin_default_user_password`: PGAdmin default user password. Generate it with the method above. * `traefik_constraint_tag`: The tag to be used by the internal Traefik load balancer (for example, to divide requests between backend and frontend) for production. Used to separate this stack from any other stack you might have. This should identify each stack in each environment (production, staging, etc). -* `traefik_constraint_tag_staging`: The Traefik tag to be used while on staging. +* `traefik_constraint_tag_staging`: The Traefik tag to be used while on staging. * `traefik_public_constraint_tag`: The tag that should be used by stack services that should communicate with the public. -* `flower_auth`: Basic HTTP authentication for flower, in the form`user:password`. By default: "`root:changethis`". +* `flower_auth`: Basic HTTP authentication for Flower, in the form `user:password`. By default: "`admin:changethis`". -* `sentry_dsn`: Key URL (DSN) of Sentry, for live error reporting. If you are not using it yet, you should, is open source. E.g.: `https://1234abcd:5678ef@sentry.example.com/30`. +* `sentry_dsn`: Key URL (DSN) of Sentry, for live error reporting. You can use the open source version or a free account. E.g.: `https://1234abcd:5678ef@sentry.example.com/30`. * `docker_image_prefix`: Prefix to use for Docker image names. If you are using GitLab Docker registry it would be based on your code repository. E.g.: `git.example.com/development-team/my-awesome-project/`. * `docker_image_backend`: Docker image name for the backend. By default, it will be based on your Docker image prefix, e.g.: `git.example.com/development-team/my-awesome-project/backend`. And depending on your environment, a different tag will be appended ( `prod`, `stag`, `branch` ). So, the final image names used will be like: `git.example.com/development-team/my-awesome-project/backend:prod`. @@ -141,7 +141,7 @@ After using this generator, your new project (the directory created) will contai ## Sibling project generators -* Based on Couchbase: [https://github.com/tiangolo/full-stack-fastapi-couchbase](https://github.com/tiangolo/full-stack-fastapi-couchbase). +* Full Stack FastAPI Couchbase: [https://github.com/tiangolo/full-stack-fastapi-couchbase](https://github.com/tiangolo/full-stack-fastapi-couchbase). ## Release Notes diff --git a/dev-fsfp-config.yml b/dev-fsfp-config.yml deleted file mode 100644 index 8d9cf5f9ce..0000000000 --- a/dev-fsfp-config.yml +++ /dev/null @@ -1,2 +0,0 @@ -default_context: - "project_name": "Dev FSFP" diff --git a/dev-fsfp.sh b/dev-fsfp.sh deleted file mode 100644 index bb3554d75a..0000000000 --- a/dev-fsfp.sh +++ /dev/null @@ -1,10 +0,0 @@ -#! /usr/bin/env bash - -# Run this script from outside the project, to generate a dev-fsfp project - -# Exit in case of error -set -e - -rm -rf ./dev-fsfp - -cookiecutter --config-file ./full-stack-fastapi-postgresql/dev-fsfp-config.yml --no-input -f ./full-stack-fastapi-postgresql diff --git a/dev-fsfp-back.sh b/scripts/dev-fsfp-back.sh similarity index 75% rename from dev-fsfp-back.sh rename to scripts/dev-fsfp-back.sh index 4301707bf6..95e9b78102 100644 --- a/dev-fsfp-back.sh +++ b/scripts/dev-fsfp-back.sh @@ -5,6 +5,11 @@ # Exit in case of error set -e +if [ ! 
-d ./full-stack-fastapi-postgresql ] ; then + echo "Run this script from outside the project, to integrate a sibling dev-fsfp project with changes and review modifications" + exit 1 +fi + if [ $(uname -s) = "Linux" ]; then echo "Remove __pycache__ files" sudo find ./dev-fsfp/ -type d -name __pycache__ -exec rm -r {} \+ diff --git a/scripts/dev-fsfp.sh b/scripts/dev-fsfp.sh new file mode 100644 index 0000000000..9afbe30b15 --- /dev/null +++ b/scripts/dev-fsfp.sh @@ -0,0 +1,13 @@ +#! /usr/bin/env bash + +# Exit in case of error +set -e + +if [ ! -d ./full-stack-fastapi-postgresql ] ; then + echo "Run this script from outside the project, to generate a sibling dev-fsfp project with independent git" + exit 1 +fi + +rm -rf ./dev-fsfp + +cookiecutter --no-input -f ./full-stack-fastapi-postgresql project_name="Dev FSFP" diff --git a/scripts/dev-link.sh b/scripts/dev-link.sh new file mode 100644 index 0000000000..3b59f9d52a --- /dev/null +++ b/scripts/dev-link.sh @@ -0,0 +1,34 @@ +#! /usr/bin/env bash + +# Exit in case of error +set -e + +# Run this from the root of the project to generate a dev-link project +# It will contain a link to each of the files of the generator, except for +# .env and frontend/.env, that will be the generated ones +# This allows developing with a live stack while keeping the same source code +# Without having to generate dev-fsfp and integrating back all the files + +rm -rf dev-link +mkdir -p tmp-dev-link/frontend + +cookiecutter --no-input -f ./ project_name="Dev Link" + +mv ./dev-link/.env ./tmp-dev-link/ +mv ./dev-link/frontend/.env ./tmp-dev-link/frontend/ + +rm -rf ./dev-link/ +mkdir -p ./dev-link/ + +cd ./dev-link/ + +for f in ../\{\{cookiecutter.project_slug\}\}/* ; do + ln -s "$f" ./ +done + +cd .. + +mv ./tmp-dev-link/.env ./dev-link/ +mv ./tmp-dev-link/frontend/.env ./dev-link/frontend/ + +rm -rf ./tmp-dev-link diff --git a/scripts/discard-dev-files.sh b/scripts/discard-dev-files.sh index 47fc5d2971..7a07a70bb3 100644 --- a/scripts/discard-dev-files.sh +++ b/scripts/discard-dev-files.sh @@ -1,3 +1,7 @@ +#! 
/usr/bin/env bash + +set -e + rm -rf \{\{cookiecutter.project_slug\}\}/.git rm -rf \{\{cookiecutter.project_slug\}\}/backend/app/poetry.lock rm -rf \{\{cookiecutter.project_slug\}\}/frontend/node_modules @@ -5,7 +9,5 @@ rm -rf \{\{cookiecutter.project_slug\}\}/frontend/dist git checkout \{\{cookiecutter.project_slug\}\}/README.md git checkout \{\{cookiecutter.project_slug\}\}/.gitlab-ci.yml git checkout \{\{cookiecutter.project_slug\}\}/cookiecutter-config-file.yml -git checkout \{\{cookiecutter.project_slug\}\}/docker-compose.deploy.networks.yml git checkout \{\{cookiecutter.project_slug\}\}/.env git checkout \{\{cookiecutter.project_slug\}\}/frontend/.env - diff --git a/test.sh b/scripts/test.sh similarity index 58% rename from test.sh rename to scripts/test.sh index f783f27e77..36cc535fa8 100644 --- a/test.sh +++ b/scripts/test.sh @@ -3,9 +3,11 @@ # Exit in case of error set -e +# Run this from the root of the project + rm -rf ./testing-project -cookiecutter --config-file ./testing-config.yml --no-input -f ./ +cookiecutter --no-input -f ./ project_name="Testing Project" cd ./testing-project diff --git a/testing-config.yml b/testing-config.yml deleted file mode 100644 index d117463ab9..0000000000 --- a/testing-config.yml +++ /dev/null @@ -1,2 +0,0 @@ -default_context: - "project_name": "Testing Project" diff --git a/{{cookiecutter.project_slug}}/README.md b/{{cookiecutter.project_slug}}/README.md index 901fe3b9b8..f1539db836 100644 --- a/{{cookiecutter.project_slug}}/README.md +++ b/{{cookiecutter.project_slug}}/README.md @@ -2,12 +2,13 @@ ## Backend Requirements -* Docker -* Docker Compose +* [Docker](https://www.docker.com/). +* [Docker Compose](https://docs.docker.com/compose/install/). +* [Poetry](https://python-poetry.org/) for Python package and environment management. ## Frontend Requirements -* Node.js (with `npm`) +* Node.js (with `npm`). ## Backend local development @@ -53,61 +54,73 @@ If your Docker is not running in `localhost` (the URLs above wouldn't work) chec ### General workflow -Open your editor at `./backend/app/` (instead of the project root: `./`), so that you see an `./app/` directory with your code inside. That way, your editor will be able to find all the imports, etc. +By default, the dependencies are managed with [Poetry](https://python-poetry.org/), go there and install it. + +From `./backend/app/` you can install all the dependencies with: + +```console +$ poetry install +``` + +Then you can start a shell session with the new environment with: + +```console +$ poetry shell +``` + +Next, open your editor at `./backend/app/` (instead of the project root: `./`), so that you see an `./app/` directory with your code inside. That way, your editor will be able to find all the imports, etc. Make sure your editor uses the environment you just created with Poetry. Modify or add SQLAlchemy models in `./backend/app/app/models/`, Pydantic schemas in `./backend/app/app/schemas/`, API endpoints in `./backend/app/app/api/`, CRUD (Create, Read, Update, Delete) utils in `./backend/app/app/crud/`. The easiest might be to copy the ones for Items (models, endpoints, and CRUD utils) and update them to your needs. -Add and modify tasks to the Celery worker in `./backend/app/app/worker.py`. +Add and modify tasks to the Celery worker in `./backend/app/app/worker.py`. If you need to install any additional package to the worker, add it to the file `./backend/app/celeryworker.dockerfile`. 
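For orientation, here is a minimal sketch of what a task in `./backend/app/app/worker.py` might look like. It assumes the Celery app object is importable as `app.core.celery_app.celery_app`, as in the default generated project; the task name and body are just placeholders:

```python
from app.core.celery_app import celery_app  # assumed location of the Celery app in the generated project


@celery_app.task(acks_late=True)
def send_welcome_email(email_to: str) -> str:
    # Placeholder body: replace with the real work the task should do.
    return f"welcome email queued for {email_to}"
```

The generated utils endpoint enqueues its example task with `celery_app.send_task(...)`; importing the task and calling `.delay(...)` on it is an equivalent option.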
-There is an `.env` file that has some Docker Compose default values that allow you to just run `docker-compose up -d` and start working, while still being able to use and share the same Docker Compose files for deployment, avoiding repetition of code and configuration as much as possible. - ### Docker Compose Override -During development, you can change Docker Compose settings that will only affect the local development environment, in the files `docker-compose.dev.*.yml`. +During development, you can change Docker Compose settings that will only affect the local development environment, in the file `docker-compose.override.yml`. -The changes to those files only affect the local development environment, not the production environment. So, you can add "temporal" changes that help the development workflow. +The changes to that file only affect the local development environment, not the production environment. So, you can add "temporary" changes that help the development workflow. -For example, the directory with the backend code is mounted as a Docker "host volume" (in the file `docker-compose.dev.volumes.yml`), mapping the code you change live to the directory inside the container. That allows you to test your changes right away, without having to build the Docker image again. It should only be done during development, for production, you should build the Docker image with a recent version of the backend code. But during development, it allows you to iterate very fast. +For example, the directory with the backend code is mounted as a Docker "host volume", mapping the code you change live to the directory inside the container. That allows you to test your changes right away, without having to build the Docker image again. It should only be done during development, for production, you should build the Docker image with a recent version of the backend code. But during development, it allows you to iterate very fast. -There is a command override in the file `docker-compose.dev.command.yml` that runs `/start-reload.sh` (included in the base image) instead of the default `/start.sh` (also included in the base image). It starts a single server process (instead of multiple, as would be for production) and reloads the process whenever the code changes. As it is in `docker-compose.dev.command.yml`, it only applies to local development. Have in mind that if you have a syntax error and save the Python file, it will break and exit, and the container will stop. After that, you can restart the container by fixing the error and running again: +There is also a command override that runs `/start-reload.sh` (included in the base image) instead of the default `/start.sh` (also included in the base image). It starts a single server process (instead of multiple, as would be for production) and reloads the process whenever the code changes. Have in mind that if you have a syntax error and save the Python file, it will break and exit, and the container will stop. After that, you can restart the container by fixing the error and running again: -```bash -docker-compose up -d +```console +$ docker-compose up -d ``` -There is also a commented out `command` override (in the file `docker-compose.dev.command.yml`), you can uncomment it and comment the default one. It makes the backend container run a process that does "nothing", but keeps the process running. 
That allows you to get inside your living container and run commands inside, for example a Python interpreter to test installed dependencies, or start the development server that reloads when it detects changes, or start a Jupyter Notebook session. +There is also a commented out `command` override, you can uncomment it and comment the default one. It makes the backend container run a process that does "nothing", but keeps the container alive. That allows you to get inside your running container and execute commands inside, for example a Python interpreter to test installed dependencies, or start the development server that reloads when it detects changes, or start a Jupyter Notebook session. To get inside the container with a `bash` session you can start the stack with: -```bash -docker-compose up -d +```console +$ docker-compose up -d ``` and then `exec` inside the running container: -```bash -docker-compose exec backend bash +```console +$ docker-compose exec backend bash ``` You should see an output like: -``` +```console root@7f2607af31c3:/app# ``` that means that you are in a `bash` session inside your container, as a `root` user, under the `/app` directory. -There you use the script `/start-reload.sh` to run the debug live reloading server. You can run that script from inside the container with: +There you can use the script `/start-reload.sh` to run the debug live reloading server. You can run that script from inside the container with: -```bash -bash /start-reload.sh +```console +$ bash /start-reload.sh ``` ...it will look like: -```bash +```console root@7f2607af31c3:/app# bash /start-reload.sh ``` @@ -117,21 +130,18 @@ Nevertheless, if it doesn't detect a change but a syntax error, it will just sto ...this previous detail is what makes it useful to have the container alive doing nothing and then, in a Bash session, make it run the live reload server. - ### Backend tests To test the backend run: -```bash -DOMAIN=backend sh ./scripts/test.sh +```console +$ DOMAIN=backend sh ./scripts/test.sh ``` -The file `./scripts/test.sh` has the commands to generate a testing `docker-stack.yml` file from the needed Docker Compose files, start the stack and test it. +The file `./scripts/test.sh` has the commands to generate a testing `docker-stack.yml` file, start the stack and test it. The tests run with Pytest, modify and add tests to `./backend/app/app/tests/`. -If you need to install any additional package for the tests, add it to the file `./backend/app/tests.dockerfile`. - If you use GitLab CI the tests will run automatically. #### Local tests @@ -145,7 +155,7 @@ The `./backend/app` directory is mounted as a "host volume" inside the docker co You can rerun the test on live code: ```Bash -docker-compose exec backend-tests /tests-start.sh +docker-compose exec backend /app/tests-start.sh ``` #### Test running stack @@ -153,24 +163,24 @@ docker-compose exec backend-tests /tests-start.sh If your stack is already up and you just want to run the tests, you can use: ```bash -docker-compose exec backend-tests /tests-start.sh +docker-compose exec backend /app/tests-start.sh ``` -That `/tests-start.sh` script inside the `backend-tests` container calls `pytest`. If you need to pass extra arguments to `pytest`, you can pass them to that command and they will be forwarded. +That `/app/tests-start.sh` script just calls `pytest` after making sure that the rest of the stack is running. If you need to pass extra arguments to `pytest`, you can pass them to that command and they will be forwarded. 
For example, to stop on first error: ```bash -docker-compose exec backend-tests /tests-start.sh -x +docker-compose exec backend bash /app/tests-start.sh -x ``` ### Live development with Python Jupyter Notebooks If you know about Python [Jupyter Notebooks](http://jupyter.org/), you can take advantage of them during local development. -The `docker-compose.dev.build.yml` file sends a variable `env` with a value `dev` to the build process of the Docker image (during local development) and the `Dockerfile` has steps to then install and configure Jupyter inside your Docker container. +The `docker-compose.override.yml` file sends a variable `env` with a value `dev` to the build process of the Docker image (during local development) and the `Dockerfile` has steps to then install and configure Jupyter inside your Docker container. -So, you can enter into the Docker running container: +So, you can enter into the running Docker container: ```bash docker-compose exec backend bash @@ -180,7 +190,7 @@ And use the environment variable `$JUPYTER` to run a Jupyter Notebook with every It will output something like: -``` +```console root@73e0ec1f1ae6:/app# $JUPYTER [I 12:02:09.975 NotebookApp] Writing notebook server cookie secret to /root/.local/share/jupyter/runtime/notebook_cookie_secret [I 12:02:10.317 NotebookApp] Serving notebooks from local directory: /app @@ -203,36 +213,34 @@ http://localhost:8888/token=f20939a41524d021fbfc62b31be8ea4dd9232913476f4397 and then open it in your browser. -You will have a full Jupyter Notebook running inside your container, that has direct access to your database by the name container name, etc. So, you can just copy your backend code and run it directly, without needing to modify it. - -If you use tools like [Hydrogen](https://github.com/nteract/hydrogen) or [Visual Studio Code Jupyter](https://donjayamanne.github.io/pythonVSCodeDocs/docs/jupyter/), you can use that same modified URL. +You will have a full Jupyter Notebook running inside your container that has direct access to your database by the container name (`db`), etc. So, you can just run sections of your backend code directly, for example with [VS Code Python Jupyter Interactive Window](https://code.visualstudio.com/docs/python/jupyter-support-py) or [Hydrogen](https://github.com/nteract/hydrogen). ### Migrations -As during local development your app directory is mounted as a volume inside the container (set in the file `docker-compose.dev.volumes.yml`), you can also run the migrations with `alembic` commands inside the container and the migration code will be in your app directory (instead of being only inside the container). So you can add it to your git repository. +As during local development your app directory is mounted as a volume inside the container, you can also run the migrations with `alembic` commands inside the container and the migration code will be in your app directory (instead of being only inside the container). So you can add it to your git repository. Make sure you create a "revision" of your models and that you "upgrade" your database with that revision every time you change them. As this is what will update the tables in your database. Otherwise, your application will have errors. 
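As a concrete example, the kind of change that needs a new revision is adding a column to an existing model. A minimal sketch, assuming the default project layout with the declarative `Base` in `app.db.base_class` (adjust the imports to match your actual project):

```python
from sqlalchemy import Column, Integer, String

from app.db.base_class import Base  # assumed location of the declarative base in the generated project


class User(Base):
    id = Column(Integer, primary_key=True, index=True)
    full_name = Column(String, index=True)
    email = Column(String, unique=True, index=True, nullable=False)
    last_name = Column(String, index=True)  # new column that the Alembic revision below should detect
```

After a change like this, the steps below create the revision and apply it: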
* Start an interactive session in the backend container: -```bash -docker-compose exec backend bash +```console +$ docker-compose exec backend bash ``` * If you created a new model in `./backend/app/app/models/`, make sure to import it in `./backend/app/app/db/base.py`, that Python module (`base.py`) that imports all the models will be used by Alembic. * After changing a model (for example, adding a column), inside the container, create a revision, e.g.: -```bash -alembic revision --autogenerate -m "Add column last_name to User model" +```console +$ alembic revision --autogenerate -m "Add column last_name to User model" ``` * Commit to the git repository the files generated in the alembic directory. * After creating the revision, run the migration in the database (this is what will actually change the database): -```bash -alembic upgrade head +```console +$ alembic upgrade head ``` If you don't want to use migrations at all, uncomment the line in the file at `./backend/app/app/db/init_db.py` with: @@ -243,8 +251,8 @@ Base.metadata.create_all(bind=engine) and comment the line in the file `prestart.sh` that contains: -```bash -alembic upgrade head +```console +$ alembic upgrade head ``` If you don't want to start with the default models and want to remove them / modify them, from the beginning, without having any previous revision, you can remove the revision files (`.py` Python files) under `./backend/app/alembic/versions/`. And then create a first migration as described above. @@ -265,9 +273,9 @@ After performing those steps you should be able to open: http://local.dockertool Check all the corresponding available URLs in the section at the end. -### Develpment in `localhost` with a custom domain +### Development in `localhost` with a custom domain -You might want to use something different than `localhost` as the domain. For example, if you are having problems with cookies that need a subdomain, and Chrome is not allowing you to use `localhost`. +You might want to use something different than `localhost` as the domain. For example, if you are having problems with cookies that need a subdomain, and Chrome is not allowing you to use `localhost`. In that case, you have two options: you could use the instructions to modify your system `hosts` file with the instructions below in **Development with a custom IP** or you can just use `localhost.tiangolo.com`, it is set up to point to `localhost` (to the IP `127.0.0.1`) and all its subdomains too. And as it is an actual domain, the browsers will store the cookies you set during development, etc. @@ -314,7 +322,7 @@ Check all the corresponding available URLs in the section at the end. If you need to use your local stack with a different domain than `localhost`, you need to make sure the domain you use points to the IP where your stack is set up. See the different ways to achieve that in the sections above (i.e. using Docker Toolbox with `local.dockertoolbox.tiangolo.com`, using `localhost.tiangolo.com` or using `dev.{{cookiecutter.domain_main}}`). -To simplify your Docker Compose setup, for example, so that the API explorer, Swagger UI, knows where is your API, you should let it know you are using that domain for development. You will need to edit 1 line in 2 files. +To simplify your Docker Compose setup, for example, so that the API docs (Swagger UI) knows where is your API, you should let it know you are using that domain for development. You will need to edit 1 line in 2 files. * Open the file located at `./.env`. 
It would have a line like: @@ -328,7 +336,7 @@ DOMAIN=localhost DOMAIN=localhost.tiangolo.com ``` -That variable will be used by some of the local development `docker-compose.dev.*.yml` files, for example, to tell Swagger UI to use that domain for the API. +That variable will be used by the Docker Compose files. * Now open the file located at `./frontend/.env`. It would have a line like: @@ -364,11 +372,11 @@ npm run serve Then open your browser at http://localhost:8080 -Notice that this live server is not running inside Docker, it is for local development, and that is the recommended workflow. Once you are happy with your frontend, you can build the frontend Docker image and start it, to test it in a production-like environment. But compiling the image at every change will not be as productive as running the local development server. +Notice that this live server is not running inside Docker, it is for local development, and that is the recommended workflow. Once you are happy with your frontend, you can build the frontend Docker image and start it, to test it in a production-like environment. But compiling the image at every change will not be as productive as running the local development server with live reload. Check the file `package.json` to see other available options. -If you have Vue CLI installed, you can also run `vue ui` to control, configure, serve and analyse your application using a nice local web user interface. +If you have Vue CLI installed, you can also run `vue ui` to control, configure, serve, and analyze your application using a nice local web user interface. If you are only developing the frontend (e.g. other team members are developing the backend) and there is a staging environment already deployed, you can make your local development code use that staging API instead of a full local Docker Compose stack. @@ -396,7 +404,7 @@ But you have to configure a couple things first. ### Traefik network -This stack expects the public Traefik network to be named `traefik-public`, just as in the tutorial in DockerSwarm.rocks. +This stack expects the public Traefik network to be named `traefik-public`, just as in the tutorials in DockerSwarm.rocks. If you need to use a different Traefik public network name, update it in the `docker-compose.yml` files, in the section: @@ -422,11 +430,11 @@ To solve that, you can put constraints in the services that use one or more data #### Adding services with volumes -For each service that uses a volume (databases, services with uploaded files, etc) you should have a label constraint in your `docker-compose.deploy.volumes-placement.yml` file. +For each service that uses a volume (databases, services with uploaded files, etc) you should have a label constraint in your `docker-compose.yml` file. -To make sure that your labels are unique per volume per stack (for examlpe, that they are not the same for `prod` and `stag`) you should prefix them with the name of your stack and then use the same name of the volume. +To make sure that your labels are unique per volume per stack (for example, that they are not the same for `prod` and `stag`) you should prefix them with the name of your stack and then use the same name of the volume. -Then you need to have those constraints in your deployment Docker Compose file for the services that need to be fixed with each volume. +Then you need to have those constraints in your `docker-compose.yml` file for the services that need to be fixed with each volume. 
To be able to use different environments, like `prod` and `stag`, you should pass the name of the stack as an environment variable. Like: @@ -434,7 +442,7 @@ To be able to use different environments, like `prod` and `stag`, you should pas STACK_NAME={{cookiecutter.docker_swarm_stack_name_staging}} sh ./scripts/deploy.sh ``` -To use and expand that environment variable inside the `docker-compose.deploy.volumes-placement.yml` files you can add the constraints to the services like: +To use and expand that environment variable inside the `docker-compose.yml` file you can add the constraints to the services like: ```yaml version: '3' @@ -448,7 +456,7 @@ services: - node.labels.${STACK_NAME}.app-db-data == true ``` -note the `${STACK_NAME}`. In the script `./scripts/deploy.sh`, that `docker-compose.deploy.volumes-placement.yml` would be converted, and saved to a file `docker-stack.yml` containing: +note the `${STACK_NAME}`. In the script `./scripts/deploy.sh`, the `docker-compose.yml` would be converted, and saved to a file `docker-stack.yml` containing: ```yaml version: '3' @@ -466,7 +474,6 @@ If you add more volumes to your stack, you need to make sure you add the corresp Then you have to create those labels in some nodes in your Docker Swarm mode cluster. You can use `docker-auto-labels` to do it automatically. - #### `docker-auto-labels` You can use [`docker-auto-labels`](https://github.com/tiangolo/docker-auto-labels) to automatically read the placement constraint labels in your Docker stack (Docker Compose file) and assign them to a random Docker node in your Swarm mode cluster if those labels don't exist yet. @@ -493,13 +500,12 @@ If you don't want to use `docker-auto-labels` or for any reason you want to manu * Then check the available nodes with: -```bash -docker node ls -``` +```console +$ docker node ls -you would see an output like: -``` +// you would see an output like: + ID HOSTNAME STATUS AVAILABILITY MANAGER STATUS nfa3d4df2df34as2fd34230rm * dog.example.com Ready Active Reachable 2c2sd2342asdfasd42342304e cat.example.com Ready Active Leader @@ -534,7 +540,7 @@ Here are the steps in detail: 1. **Build your app images** -* Set these environment variables, prepended to the next command: +* Set these environment variables, right before the next command: * `TAG=prod` * `FRONTEND_ENV=production` * Use the provided `scripts/build.sh` file with those environment variables: @@ -581,9 +587,11 @@ If you change your mind and, for example, want to deploy everything to a differe #### Deployment Technical Details -For the 3 steps (build, push, deploy) you need a generated `docker-stack.yml`, it is generated using the `docker-compose` command with some of the `docker-compose.*.yml` files. As each of these steps uses different `docker-compose.*.yml` files, the generated `docker-stack.yml` file is slightly different. But it's all generated by the scripts. +Building and pushing is done with the `docker-compose.yml` file, using the `docker-compose` command. The file `docker-compose.yml` uses the file `.env` with default environment variables. And the scripts set some additional environment variables as well. + +The deployment requires using `docker stack` instead of `docker-compose`, and `docker stack` can't read environment variables or `.env` files. Because of that, the `deploy.sh` script generates a file `docker-stack.yml` with the configurations from `docker-compose.yml` and injects the environment variables into it. Then it uses that file to deploy the stack. 
-You can do the process by hand based on those same scripts if you wanted. The general structure of the scripts is like this: +You can do the process by hand based on those same scripts if you wanted. The general structure is like this: ```bash # Use the environment variables passed to this script, as TAG and FRONTEND_ENV @@ -594,17 +602,22 @@ TAG=${TAG} \ FRONTEND_ENV=${FRONTEND_ENV-production} \ # The actual comand that does the work: docker-compose docker-compose \ -# Pass the files that should be used at this stage, the set of files changes in each script / each stage --f docker-compose.deploy.build.yml \ --f docker-compose.deploy.images.yml \ -# Use the docker-compose sub command named "config", it just uses the docker-compose.*.yml files passed -# to it and prints their combined contents +# Pass the file that should be used, setting explicitly docker-compose.yml avoids the +# default of also using docker-compose.override.yml +-f docker-compose.yml \ +# Use the docker-compose sub command named "config", it just uses the docker-compose.yml +# file passed to it and prints their combined contents # Put those contents in a file "docker-stack.yml", with ">" config > docker-stack.yml -# The previous only generated a docker-stack.yml file, but didn't do anything with it -# Now this command uses that same file and does some operation with it, in this case, build it -docker-compose -f docker-stack.yml build +# The previous only generated a docker-stack.yml file, +# but didn't do anything with it yet + +# docker-auto-labels makes sure the labels used for constraints exist in the cluster +docker-auto-labels docker-stack.yml + +# Now this command uses that same file to deploy it +docker stack deploy -c docker-stack.yml --with-registry-auth "${STACK_NAME}" ``` ### Continuous Integration / Continuous Delivery @@ -618,67 +631,37 @@ GitLab CI is configured assuming 2 environments following GitLab flow: * `prod` (production) from the `production` branch. * `stag` (staging) from the `master` branch. -If you need to add more environments, for example, you could imagine using a client-approved `preprod` branch, you can just copy the configurations in `.gitlab-ci.yml` for `stag` and rename the corresponding variables. All the Docker Compose files are configured to support as many environments as you need, so that you only need to modify `.gitlab-ci.yml` (or whichever CI system configuration you are using). +If you need to add more environments, for example, you could imagine using a client-approved `preprod` branch, you can just copy the configurations in `.gitlab-ci.yml` for `stag` and rename the corresponding variables. The Docker Compose file and environment variables are configured to support as many environments as you need, so that you only need to modify `.gitlab-ci.yml` (or whichever CI system configuration you are using). +## Docker Compose files and env vars -## Docker Compose files +There is a main `docker-compose.yml` file with all the configurations that apply to the whole stack, it is used automatically by `docker-compose`. -There are several Docker Compose files, each with a specific purpose. +And there's also a `docker-compose.override.yml` with overrides for development, for example to mount the source code as a volume. It is used automatically by `docker-compose` to apply overrides on top of `docker-compose.yml`. -They are designed to support several "stages", like development, building, testing, and deployment. 
Also, allowing the deployment to different environments like staging and production (and you can add more environments very easily). - -They are designed to have the minimum repetition of code and configurations, so that if you need to change something, you have to change it in the minimum amount of places. That's why several of the files use environment variables that get auto-expanded. That way, if for example, you want to use a different domain, you can call the `docker-compose` command with a different `DOMAIN` environment variable instead of having to change the domain in several places inside the Docker Compose files. - -Also, if you want to have another deployment environment, say `preprod`, you just have to change environment variables, but you can keep using the same Docker Compose files. +These Docker Compose files use the `.env` file containing configurations to be injected as environment variables in the containers. -Because of that, for each "stage" (development, building, testing, deployment) you would use a different set of Docker Compose files. +They also use some additional configurations taken from environment variables set in the scripts before calling the `docker-compose` command. -But you probably don't have to worry about the different files, for building, testing and deployment, you would probably use a CI system (like GitLab CI) and the different configured files would be already set there. +It is all designed to support several "stages", like development, building, testing, and deployment. Also, allowing the deployment to different environments like staging and production (and you can add more environments very easily). -And for development, there's a `.env` file that will be automatically used by `docker-compose` locally, with the default configurations already set for local development. Including environment variables. So, for local development you can just run: +They are designed to have the minimum repetition of code and configurations, so that if you need to change something, you have to change it in the minimum amount of places. That's why files use environment variables that get auto-expanded. That way, if for example, you want to use a different domain, you can call the `docker-compose` command with a different `DOMAIN` environment variable instead of having to change the domain in several places inside the Docker Compose files. -```bash -docker-compose up -d -``` - -and it will do the right thing. - -They are also separated by the common tasks and functionalities they solve, and they are named accordinly. So, although there are many Docker Compose files, each one has a name that shows what should be in there, and the contents tend to be small and specific. That makes it easier to modify, or add configurations, as you can go directly to the relevant file. - -The `docker-compose.deploy.*.yml` files are only used at deployment, being it to production or any other environment. They build the images in production mode (not installing debugging packages), set configurations for Docker Swarm mode, etc. - -The `docker-compose.dev.*.yml` files are only used during development. They have overrides and tools for development, as mounting app volumes directly inside the container to iterate fast, map ports directly to your machine, install debugging packages, etc. +Also, if you want to have another deployment environment, say `preprod`, you just have to change environment variables, but you can keep using the same Docker Compose files. 
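As an illustrative sketch of that idea (the variable names `DOMAIN`, `STACK_NAME`, and `TAG` come from `.env` and the scripts; the `preprod` values here are hypothetical):

```console
$ DOMAIN=preprod.example.com STACK_NAME=my-project-preprod TAG=preprod bash ./scripts/deploy.sh
```

The same `docker-compose.yml` is reused; only the environment variables change.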
-The `docker-compose.test.yml` file is used for testing, during development and in a CI environment running tests, but not used in deployment to production (or staging or any other deployment environment of the final code). +### The .env file -The `docker-compose.shared.*.yml` files are used at several stages and contain stuff shared by several stages: development, testing, deployment. They have things like the databases or the environment variables, that are used by all the main services / containers, during development, testing and deployment. The file for `admin`, that has utils needed for development and production, like the Swagger UI interactive API documentation system. But this file is not used during testing (in CI environments) as this is not needed or used in that stage. +The `.env` file is the one that contains all your configurations, generated keys and passwords, etc. -The purpose of each Docker Compose file is: +Depending on your workflow, you could want to exclude it from Git, for example if your project is public. In that case, you would have to make sure to set up a way for your CI tools to obtain it while building or deploying your project. -* `docker-compose.deploy.build.yml`: build directories and `Dockerfile`s, for deployment (the building process for development has a little difference). -* `docker-compose.deploy.command.yml`: command overrides for images only during deployment. Initially only for the main Traefik proxy, making it run in a Docker Swarm mode cluster. -* `docker-compose.deploy.images.yml`: image names to be created, with environment variables for the specific tag. -* `docker-compose.deploy.labels.yml`: labels for deployment, the configurations to make the internal Traefik proxy serve some services on specific URLs, some with basic HTTP auth, etc. Also labels used in the internal Traefik proxy container to make it talk to the public Traefik proxy (outside of this stack) and make it send requests for this domain, generate HTTPS certificates, etc. -* `docker-compose.deploy.networks.yml`: networks that have to be used and shared by containers that need to be able to talk to the public Traefik proxy (when a service requires a domain for itself). -* `docker-compose.deploy.volumes-placement.yml`: volume declarations, volumes used by stateful services (as databases) and volume placement constraints, to make those services always run on the node that has their volumes, even after stack updates. -* `docker-compose.dev.build.yml`: build directories and `Dockerfile`s, for local development, sets a built-time argument that then is used in the `Dockerfile`s to install and configure helper tools exclusively for development. -* `docker-compose.dev.command.yml`: command overrides for local development. To tell the internal Traefik proxy to work with a local Docker in the host instead of a Docker Swarm mode cluster. And (commented out but ready to be used) overrides to make the containers run an infinite loop while keeping alive to be able to run the development server manually or do any other interactive work. -* `docker-compose.dev.env.yml`: development environment variable overrides. -* `docker-compose.dev.labels.yml`: local development labels, to be used by the local development Traefik proxy. They have to be declared in a different place than for deployment. -* `docker-compose.dev.networks.yml`: local development networks, to enable interactively talking to the backend. -* `docker-compose.dev.ports.yml`: local development port mappings. 
-* `docker-compose.dev.volumes.yml`: local development mounted volumes, mainly to map the development code directory inside the container, for fast development without needing to re-build the images. -* `docker-compose.shared.admin.yml`: additional services for administration or utilities with their configurations, like PGAdmin and Swagger, that are not needed during testing and use external images (don't need to be built or create images). -* `docker-compose.shared.base-images.yml`: base Docker images used without modification for shared services, as databases. Used in deployment, development, testing, etc. -* `docker-compose.shared.depends.yml`: dependencies between main services with `depends_on`, used in deployment, development, testing, etc. -* `docker-compose.shared.env.yml`: environment variables used by services, as database passwords, secret keys, etc. -* `docker-compose.test.yml`: specific additional container to be used only during testing, mainly the container that tests the backend and the APIs. +One way to do it could be to add each environment variable to your CI/CD system, and to update the `docker-compose.yml` file to read that specific env var instead of reading the `.env` file. ## URLs These are the URLs that will be used and generated by the project. -### Production +### Production URLs Production URLs, from the branch `production`. @@ -694,7 +677,7 @@ PGAdmin: https://pgadmin.{{cookiecutter.domain_main}} Flower: https://flower.{{cookiecutter.domain_main}} -### Staging +### Staging URLs Staging URLs, from the branch `master`. @@ -709,8 +692,8 @@ Automatic Alternative Docs (ReDoc): https://{{cookiecutter.domain_staging}}/redo PGAdmin: https://pgadmin.{{cookiecutter.domain_staging}} Flower: https://flower.{{cookiecutter.domain_staging}} - -### Development + +### Development URLs Development URLs, for local development. @@ -728,7 +711,7 @@ Flower: http://localhost:5555 Traefik UI: http://localhost:8090 -### Development with Docker Toolbox +### Development with Docker Toolbox URLs Development URLs, for local development. @@ -746,7 +729,7 @@ Flower: http://local.dockertoolbox.tiangolo.com:5555 Traefik UI: http://local.dockertoolbox.tiangolo.com:8090 -### Development with a custom IP +### Development with a custom IP URLs Development URLs, for local development. @@ -764,7 +747,7 @@ Flower: http://dev.{{cookiecutter.domain_main}}:5555 Traefik UI: http://dev.{{cookiecutter.domain_main}}:8090 -### Development in localhost with a custom domain +### Development in localhost with a custom domain URLs Development URLs, for local development. @@ -795,7 +778,7 @@ You can check the variables used during generation in the file `cookiecutter-con You can generate the project again with the same configurations used the first time. -That would be useful if, for example, the project generator (`tiangolo/full-stack-fastapi-postgresql`) was updated and you want to integrate or review the changes. +That would be useful if, for example, the project generator (`tiangolo/full-stack-fastapi-postgresql`) was updated and you wanted to integrate or review the changes. You could generate a new project with the same configurations as this one in a parallel directory. And compare the differences between the two, without having to overwrite your current code but being able to use the same variables used for your current project. 
@@ -805,8 +788,8 @@ You can use that file while generating a new project to reuse all those variable For example, run: -```bash -cookiecutter --config-file ./cookiecutter-config-file.yml --output-dir ../project-copy https://github.com/tiangolo/full-stack-fastapi-postgresql +```console +$ cookiecutter --config-file ./cookiecutter-config-file.yml --output-dir ../project-copy https://github.com/tiangolo/full-stack-fastapi-postgresql ``` That will use the file `cookiecutter-config-file.yml` in the current directory (in this project) to generate a new project inside a sibling directory `project-copy`.
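To then compare the newly generated copy against your current project, a recursive diff works well; this is just a sketch, and the exclude patterns are only suggestions:

```console
$ diff -r -x .git -x node_modules -x __pycache__ ./ ../project-copy/
```

`git diff --no-index ./ ../project-copy/` is an alternative if you prefer Git's diff output.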