diff --git a/Dockerfile b/Dockerfile index 442d73c0e..2f6ae68d5 100644 --- a/Dockerfile +++ b/Dockerfile @@ -21,7 +21,10 @@ ADD ./config/boto.cfg /etc/boto.cfg RUN pip install /docker-registry/ ENV DOCKER_REGISTRY_CONFIG /docker-registry/config/config_sample.yml +ENV SETTINGS_FLAVOR dev EXPOSE 5000 -CMD cd /docker-registry && ./setup-configs.sh && exec docker-registry +WORKDIR /docker-registry + +CMD exec docker-registry diff --git a/README.md b/README.md index abcc54163..f6364a4a5 100644 --- a/README.md +++ b/README.md @@ -3,8 +3,37 @@ Docker-Registry [![Build Status](https://travis-ci.org/dotcloud/docker-registry.png)](https://travis-ci.org/dotcloud/docker-registry) +Quick start +=========== + +The fastest way to get running is using the +[official image from the Docker index](https://index.docker.io/_/registry/): + +This example will launch a container on port 5000, and storing images within +the container itself: +``` +docker run -p 5000:5000 registry +``` + + +This example will launch a container on port 5000, and storing images in an +Amazon S3 bucket: +``` +docker run \ + -e SETTINGS_FLAVOR=s3 \ + -e AWS_BUCKET=acme-docker \ + -e STORAGE_PATH=/registry \ + -e AWS_KEY=AKIAHSHB43HS3J92MXZ \ + -e AWS_SECRET=xdDowwlK7TJajV1Y7EoOZrmuPEJlHYcNP2k4j49T \ + -e SEARCH_BACKEND=sqlalchemy \ + -p 5000:5000 \ + registry +``` + +See [config_sample.yml](config/config_sample.yml) for all available environment variables. + Create the configuration ------------------------- +======================== The Docker Registry comes with a sample configuration file, `config_sample.yml`. Copy this to `config.yml` to provide a basic @@ -20,20 +49,26 @@ Configuration flavors Docker Registry can run in several flavors. This enables you to run it in development mode, production mode or your own predefined mode. -In the config yaml file, you'll see a few sample flavors: +In the `config_sample.yml` file, you'll see several sample flavors: 1. `common`: used by all other flavors as base settings -1. `dev`: used for development -1. `prod`: used for production +1. `local`: stores data on the local filesystem +1. `s3`: stores data in an AWS S3 bucket +1. `gcs`: stores data in Google cloud storage +1. `swift`: stores data in OpenStack Swift +1. `glance`: stores data in OpenStack Glance, with a fallback to local storage +1. `glance-swift`: stores data in OpenStack Glance, with a fallback to Swift +1. `elliptics`: stores data in Elliptics key/value storage +1. `dev`: basic configuration using the `local` flavor 1. `test`: used by unit tests -1. `openstack`: to integrate with openstack +1. `prod`: production configuration (basically a synonym for the `s3` flavor) You can define your own flavors by adding a new top-level yaml key. You can specify which flavor to run by setting `SETTINGS_FLAVOR` in your environment: `export SETTINGS_FLAVOR=dev` -The default environment is `dev`. +The default flavor is `dev`. NOTE: it's possible to load environment variables from the config file with a simple syntax: `_env:VARIABLENAME[:DEFAULT]`. Check this syntax @@ -84,7 +119,12 @@ The default location of the config file is `config.yml`, located in the `config` subdirectory. If `DOCKER_REGISTRY_CONFIG` is a relative path, that path is expanded relative to the `config` subdirectory. -It is possible to mount the configuration file into the docker image +### Docker image +When building an image using the Dockerfile or using an image from the +[Docker index](https://index.docker.io/_/registry/), the default config is +`config_sample.yml`. 
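+In that case the flavor is selected with the `SETTINGS_FLAVOR` environment
+variable, which the Dockerfile sets to `dev` by default. A minimal sketch,
+reusing the stock image from the index:
+
+```
+docker run -p 5000:5000 -e SETTINGS_FLAVOR=dev registry
+```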
+ +It is also possible to mount the configuration file into the docker image ``` sudo docker run -p 5000:5000 -v /home/user/registry-conf:/registry-conf -e DOCKER_REGISTRY_CONFIG=/registry-conf/config.yml registry @@ -93,20 +133,23 @@ sudo docker run -p 5000:5000 -v /home/user/registry-conf:/registry-conf -e DOCKE Available configuration options =============================== -### General options +When using the `config_sample.yml`, you can pass all options through as environment variables. See [`config_sample.yml`](config/config_sample.yml) for the mapping. + +## General options 1. `loglevel`: string, level of debugging. Any of python's - [logging](http://docs.python.org/2/library/logging.html) module levels: - `debug`, `info`, `warn`, `error` or `critical` -1. If you are using `storage: s3` the + [logging](http://docs.python.org/2/library/logging.html) module levels: + `debug`, `info`, `warn`, `error` or `critical` +1. `storage_redirect`: Redirect resource requested if storage engine supports + this, e.g. S3 will redirect signed URLs, this can be used to offload the + server. +1. `boto_host`/`boto_port`: If you are using `storage: s3` the [standard boto config file locations](http://docs.pythonboto.org/en/latest/boto_config_tut.html#details) (`/etc/boto.cfg, ~/.boto`) will be used. If you are using a *non*-Amazon S3-compliant object store, in one of the boto config files' `[Credentials]` section, set `boto_host`, `boto_port` as appropriate for the service you are using. -1. `storage_redirect`: Redirect resource requested if storage engine supports - this, e.g. S3 will redirect signed URLs, this can be used to offload the - server. +1. `bugsnag`: The bugsnag API key ### Authentication options @@ -122,10 +165,158 @@ Available configuration options index. You should provide your own method of authentication (such as Basic auth). -### S3 options +#### Privileged access + +1. `privileged_key`: allows you to make direct requests to the registry by using + an RSA key pair. The value is the path to a file containing the public key. + If it is not set, privileged access is disabled. + +##### Generating keys with `openssl` + +You will need to install the python-rsa package (`pip install rsa`) in addition to using `openssl`. +Generating the public key using openssl will lead to producing a key in a format not supported by +the RSA library the registry is using. + +Generate private key: + + openssl genrsa -out private.pem 2048 + +Associated public key : + + pyrsa-priv2pub -i private.pem -o public.pem + + +### Search-engine options + +The Docker Registry can optionally index repository information in a +database for the `GET /v1/search` [endpoint][search-endpoint]. You +can configure the backend with a configuration like: + +The `search_backend` setting selects the search backend to use. If +`search_backend` is empty, no index is built, and the search endpoint always +returns empty results. + +1. `search_backend`: The name of the search backend engine to use. + Currently supported backends are: + 1. `sqlalchemy` + + +If `search_backend` is neither empty nor one of the supported backends, it +should point to a module. + +Example: + +```yaml +common: + search_backend: foo.registry.index.xapian +``` + +#### sqlalchemy + +1. 
`sqlalchemy_index_database`: The database URL + +Example: + +```yaml +common: + search_backend: sqlalchemy + sqlalchemy_index_database: sqlite:////tmp/docker-registry.db +``` + + +In this case, the module is imported, and an instance of it's `Index` +class is used as the search backend. + +### Mirroring Options + +All mirror options are placed in a `mirroring` section. + +1. `mirroring`: + 1. `source`: + 1. `source_index`: + 1. `tags_cache_ttl`: + +Example: + +```yaml +common: + mirroring: + source: https://registry-1.docker.io + source_index: https://index.docker.io + tags_cache_ttl: 864000 # 10 days +``` + +### Cache options + +It's possible to add an LRU cache to access small files. In this case you need +to spawn a [redis-server](http://redis.io/) configured in +[LRU mode](http://redis.io/topics/config). The config file "config_sample.yml" +shows an example to enable the LRU cache using the config directive `cache_lru`. + +Once this feature is enabled, all small files (tags, meta-data) will be cached +in Redis. When using a remote storage backend (like Amazon S3), it will speeds +things up dramatically since it will reduce roundtrips to S3. + +All config settings are placed in a `cache` or `cache_lru` section. + +1. `cache`/`cache_lru`: + 1. `host`: Host address of server + 1. `port`: Port server listens on + 1. `password`: Authentication password + + +### Email options + +Settings these options makes the Registry send an email on each code Exception: + +1. `email_exceptions`: + 1. `smtp_host`: hostname to connect to using SMTP + 1. `smtp_port`: port number to connect to using SMTP + 1. `smtp_login`: username to use when connecting to authenticated SMTP + 1. `smtp_password`: password to use when connecting to authenticated SMTP + 1. `smtp_secure`: boolean, true for TLS to using SMTP. this could be a path + to the TLS key file for client authentication. + 1. `from_addr`: email address to use when sending email + 1. `to_addr`: email address to send exceptions to + +Example: + +```yaml +test: + email_exceptions: + smtp_host: localhost +``` + +## Storage options + +1. `storage`: Selects the storage engine to use. The options for which are + defined below + +### storage: local + +1. `storage_path`: Path on the filesystem where to store data + +Example: + +```yaml +local: + storage: local + storage_path: /mnt/registry +``` -These options configure your S3 storage. These are used when `storage` is set -to `s3`. +#### Persistent storage +If you use any type of local store along with a registry running within a docker +remember to use a data volume for the `storage_path`. Please read the documentation +for [data volumes](http://docs.docker.io/en/latest/use/working_with_volumes/) for more information. + +Example: + +``` +docker run -p 5000 -v /tmp/registry:/tmp/registry registry +``` + +### storage: s3 +AWS Simple Storage Service options 1. `s3_access_key`: string, S3 access key 1. `s3_secret_key`: string, S3 secret key @@ -138,12 +329,22 @@ to `s3`. 1. `boto_bucket`: string, the bucket name 1. `storage_path`: string, the sub "folder" where image data will be stored. -### Elliptics options +Example: +```yaml +prod: + storage: s3 + s3_region: us-west-1 + s3_bucket: acme-docker + storage_path: /registry + s3_access_key: AKIAHSHB43HS3J92MXZ + s3_secret_key: xdDowwlK7TJajV1Y7EoOZrmuPEJlHYcNP2k4j49T +``` -These options configure your [Elliptics](http://reverbrain.com/elliptics/) storage. These are used when `storage` is set -to `elliptics`. 
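+
+If you are using a *non*-Amazon S3-compliant object store, remember that the
+`boto_host`/`boto_port` settings described under General options live in one of
+the boto config files (`/etc/boto.cfg` or `~/.boto`). A minimal sketch, with a
+purely illustrative endpoint:
+
+```
+[Credentials]
+boto_host = objects.example.com
+boto_port = 443
+```
+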
+### storage: elliptics +[Elliptics](http://reverbrain.com/elliptics/) key/value storage options 1. `elliptics_nodes`: Elliptics remotes + Can be a hash of `address: port`, or a list of `address:port` strings, or a single space delimited string. 1. `elliptics_wait_timeout`: time to wait for the operation complete 1. `elliptics_check_timeout`: timeout for pinging node 1. `elliptics_io_thread_num`: number of IO threads in processing pool @@ -159,10 +360,8 @@ Example: dev: storage: elliptics elliptics_nodes: - elliptics-host1: 1025 - elliptics-host2: 1025 - ... - hostN: port + elliptics-host1: 1025 + elliptics-host2: 1025 elliptics_wait_timeout: 60 elliptics_check_timeout: 60 elliptics_io_thread_num: 2 @@ -174,9 +373,8 @@ dev: elliptics_loglevel: debug ``` -### Google Cloud Storage options -These options configure your [Google Cloud Storage](https://cloud.google.com/products/cloud-storage/) storage. -These are used when `storage` is set to `gcs`. +### storage: gcs +[Google Cloud Storage](https://cloud.google.com/products/cloud-storage/) options 1. `boto_bucket`: string, the bucket name 1. `storage_path`: string, the sub "folder" where image data will be stored. @@ -209,117 +407,23 @@ dev: gs_secure: false ``` -### Search-engine options +### storage: swift +OpenStack Swift options -The Docker Registry can optionally index repository information in a -database for the `GET /v1/search` [endpoint][search-endpoint]. You -can configure the backend with a configuration like: - -```yaml -search_backend: "_env:SEARCH_BACKEND:" -``` - -The `search_backend` setting selects the search backend to use. If -`search_backend` is empty, no index is built, and the search endpoint -always returns empty results. Currently supported backends and their -backend-specific configuration options are: - -* `sqlalchemy': Use [SQLAlchemy][]. - * The backing database is selected with - `sqlalchemy_index_database`, which is passed through to - [create_engine][]. - -If `search_backend` is neither empty nor one of the above backends, it -should point to a module: - -```yaml -search_backend: foo.registry.index.xapian -``` - -In this case, the module is imported, and an instance of it's `Index` -class is used as the search backend. +1. `storage_path`: The prefix of where data will be stored +1. `swift_authurl`: Authentication url +1. `swift_container`: +1. `swift_user`: +1. `swift_password`: +1. `swift_tenant_name`: +1. `swift_region_name`: -### Email options +### storage: glance +OpenStack Glance options -Settings these options makes the Registry send an email on each code Exception: +1. `storage_alternate`: +1. `storage_path`: -1. `email_exceptions`: - 1. `smtp_host`: hostname to connect to using SMTP - 1. `smtp_port`: port number to connect to using SMTP - 1. `smtp_login`: username to use when connecting to authenticated SMTP - 1. `smtp_password`: password to use when connecting to authenticated SMTP - 1. `smtp_secure`: boolean, true for TLS to using SMTP. this could be a path - to the TLS key file for client authentication. - 1. `from_addr`: email address to use when sending email - 1. `to_addr`: email address to send exceptions to - -Example: - -```yaml -test: - email_exceptions: - smtp_host: localhost -``` - -### Performance on prod - -It's possible to add an LRU cache to access small files. In this case you need -to spawn a [redis-server](http://redis.io/) configured in -[LRU mode](http://redis.io/topics/config). 
The config file "config_sample.yml" -shows an example to enable the LRU cache using the config directive `cache_lru`. - -Once this feature is enabled, all small files (tags, meta-data) will be cached -in Redis. When using a remote storage backend (like Amazon S3), it will speeds -things up dramatically since it will reduce roundtrips to S3. - - -### Storage options - -`storage`: can be one of: - -1. `local`: store images on local storage - 1. `storage_path` local path to the image store -1. `s3`: store images on S3 - 1. `storage_path` is a subdir in your S3 bucket - 1. remember to set all `s3_*` options (see above) -1. `glance`: store images on Glance (OpenStack) - 1. `storage_alternate`: storage engine to use when Glance storage fails, - e.g. `local` - 1. If you use `storage_alternate` local, remeber to set `storage_path` -1. `elliptics`: store images in [Elliptics](http://reverbrain.com/elliptics/) key-value storage - -#### Persist local storage - -If you use any type of local store along with a registry running within a docker -remember to use a data volume for the `storage_path`. Please read the documentation -for [data volumes](http://docs.docker.io/en/latest/use/working_with_volumes/) for more information. - -Example: - -``` -docker run -p 5000 -v /tmp/registry:/tmp/registry registry -``` - -### Privileged access - -Privileged access allows you to make direct requests to the registry by using -an RSA key pair. The `privileged_key` config entry, if set, must indicate a -path to a file containing the public key. -If it is not set, privileged access is disabled. - -#### Generating keys with `openssl` - -You will need to install the python-rsa package (`pip install rsa`) in addition to using `openssl`. -Generating the public key using openssl will lead to producing a key in a format not supported by -the RSA library the registry is using. - -Generate private key: - - openssl genrsa -out private.pem 2048 - -Associated public key : - - pyrsa-priv2pub -i private.pem -o public.pem Run the Registry ---------------- @@ -335,8 +439,22 @@ Run registry: docker run -p 5000:5000 registry ``` +or + +``` +docker run \ + -e SETTINGS_FLAVOR=s3 \ + -e AWS_BUCKET=acme-docker \ + -e STORAGE_PATH=/registry \ + -e AWS_KEY=AKIAHSHB43HS3J92MXZ \ + -e AWS_SECRET=xdDowwlK7TJajV1Y7EoOZrmuPEJlHYcNP2k4j49T \ + -e SEARCH_BACKEND=sqlalchemy \ + -p 5000:5000 \ + registry +``` + NOTE: The container will try to allocate the port 5000. If the port -is already taken, find out which container is already using it by running "docker ps" +is already taken, find out which container is already using it by running `docker ps` ### Option 2 (Advanced) - Install the registry on an existing server diff --git a/config/config_s3.yml b/config/config_s3.yml deleted file mode 100644 index 9e45d8ef1..000000000 --- a/config/config_s3.yml +++ /dev/null @@ -1,18 +0,0 @@ -# Set env vars for AWS_* when launching - this config will refer to them. 
-# To specify prod flavor, set the environment variable SETTINGS_FLAVOR=prod - -# example launching with this config, in a docker image: -# docker run -p 5000:5000 -e SETTINGS_FLAVOR=prod -e AWS_KEY=X -e AWS_SECRET=Y -e AWS_BUCKET=images registry-image - -prod: - storage: s3 - boto_bucket: _env:AWS_BUCKET - s3_access_key: _env:AWS_KEY - s3_secret_key: _env:AWS_SECRET - s3_bucket: _env:AWS_BUCKET - s3_encrypt: true - s3_secure: true - s3_encrypt: true - s3_secure: true - storage_path: /images - storage_redirect: False # Redirect signed URL for image files diff --git a/config/config_sample.yml b/config/config_sample.yml index d2a58d831..d451a3201 100644 --- a/config/config_sample.yml +++ b/config/config_sample.yml @@ -1,49 +1,21 @@ # The `common' part is automatically included (and possibly overriden by all # other flavors) common: - # Bucket for storage - boto_bucket: REPLACEME - - # Amazon S3 Storage Configuration - s3_access_key: REPLACEME - s3_secret_key: REPLACEME - s3_bucket: REPLACEME - s3_encrypt: REPLACEME - s3_secure: REPLACEME - - # Google Cloud Storage Configuration - # See: - # https://developers.google.com/storage/docs/reference/v1/getting-startedv1#keys - # for details on access and secret keys. - gs_access_key: REPLACEME - gs_secret_key: REPLACEME - gs_secure: REPLACEME + loglevel: _env:LOGLEVEL:debug + storage_redirect: _env:STORAGE_REDIRECT + standalone: _env:STANDALONE + index_endpoint: _env:INDEX_ENDPOINT + disable_token_auth: _env:DISABLE_TOKEN_AUTH + privileged_key: _env:PRIVILEGED_KEY - # OAuth 2.0 authentication with the storage. - # Supported for Google Cloud Storage only. - # oauth2 can be set to true or false. If it is set to true, gs_access_key, - # gs_secret_key and gs_secure are not needed. - # Client ID and Client Secret must be set into OAUTH2_CLIENT_ID and - # OAUTH2_CLIENT_SECRET environment variables. - # See: https://developers.google.com/accounts/docs/OAuth2. - oauth2: REPLACEME - - search_backend: "_env:SEARCH_BACKEND:" - sqlalchemy_index_database: - "_env:SQLALCHEMY_INDEX_DATABASE:sqlite:////tmp/docker-registry.db" + search_backend: _env:SEARCH_BACKEND + sqlalchemy_index_database: _env:SQLALCHEMY_INDEX_DATABASE:sqlite:////tmp/docker-registry.db + mirroring: + source: _env:MIRROR_SOURCE # https://registry-1.docker.io + source_index: _env:MIRROR_SOURCE_INDEX # https://index.docker.io + tags_cache_ttl: _env:MIRROR_TAGS_CACHE_TTL # 864000 # seconds -# This is the default configuration when no flavor is specified -dev: - storage: local - storage_path: /tmp/registry - loglevel: debug - -# To specify another flavor, set the environment variable SETTINGS_FLAVOR -# $ export SETTINGS_FLAVOR=prod -prod: - storage: s3 - storage_path: "_env:STORAGE_PATH:/prod" # Enabling LRU cache for small files. This speeds up read/write on small files # when using a remote storage backend (like S3). 
cache: @@ -54,27 +26,62 @@ prod: host: _env:CACHE_LRU_REDIS_HOST port: _env:CACHE_LRU_REDIS_PORT password: _env:CACHE_LRU_REDIS_PASSWORD + # Enabling these options makes the Registry send an email on each code Exception email_exceptions: - smtp_host: REPLACEME - smtp_port: 25 - smtp_login: REPLACEME - smtp_password: REPLACEME - smtp_secure: false - from_addr: docker-registry@localdomain.local - to_addr: noise+dockerregistry@localdomain.local + smtp_host: _env:SMTP_HOST + smtp_port: _env:SMTP_PORT:25 + smtp_login: _env:SMTP_LOGIN + smtp_password: _env:SMTP_PASSWORD + smtp_secure: _env:SMTP_SECURE:false + from_addr: _env:SMTP_FROM_ADDR:docker-registry@localdomain.local + to_addr: _env:SMTP_TO_ADDR:noise+dockerregistry@localdomain.local + # Enable bugsnag (set the API key) - bugsnag: REPLACEME + bugsnag: _env:BUGSNAG -# This flavor is automatically used by unit tests -test: + + +local: &local storage: local - storage_path: /tmp/test + storage_path: _env:STORAGE_PATH:/tmp/registry + + +s3: &s3 + storage: s3 + s3_region: _env:AWS_REGION + s3_bucket: _env:AWS_BUCKET + boto_bucket: _env:AWS_BUCKET + storage_path: _env:STORAGE_PATH:/registry + s3_encrypt: _env:AWS_ENCRYPT:true + s3_secure: _env:AWS_SECURE:true + s3_access_key: _env:AWS_KEY + s3_secret_key: _env:AWS_SECRET + +# Google Cloud Storage Configuration +# See: +# https://developers.google.com/storage/docs/reference/v1/getting-startedv1#keys +# for details on access and secret keys. +gcs: + storage: gcs + boto_bucket: _env:GCS_BUCKET + storage_path: _env:STORAGE_PATH:/registry + gs_secure: _env:GCS_SECURE:true + gs_access_key: _env:GCS_KEY + gs_secret_key: _env:GCS_SECRET + # OAuth 2.0 authentication with the storage. + # oauth2 can be set to true or false. If it is set to true, gs_access_key, + # gs_secret_key and gs_secure are not needed. + # Client ID and Client Secret must be set into OAUTH2_CLIENT_ID and + # OAUTH2_CLIENT_SECRET environment variables. + # See: https://developers.google.com/accounts/docs/OAuth2. + oauth2: _env:GCS_OAUTH2:false # This flavor is for storing images in Openstack Swift -swift: +swift: &swift storage: swift - storage_path: /registry + storage_path: _env:STORAGE_PATH:/registry + # keystone authorization swift_authurl: _env:OS_AUTH_URL swift_container: _env:OS_CONTAINER swift_user: _env:OS_USERNAME @@ -84,22 +91,54 @@ swift: # This flavor stores the images in Glance (to integrate with openstack) # See also: https://github.com/dotcloud/openstack-docker -openstack: +glance: &glance storage: glance - storage_alternate: local - storage_path: /tmp/registry - loglevel: debug + storage_alternate: _env:GLANCE_STORAGE_ALTERNATE:local + storage_path: _env:STORAGE_PATH:/tmp/registry + +openstack: + <<: *glance # This flavor stores the images in Glance (to integrate with openstack) # and tags in Swift. 
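+# (The glance-swift flavor below reuses the swift settings defined above via the
+# YAML merge key `<<: *swift`, switching storage to glance with swift as the
+# alternate.)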
-openstack-swift: +glance-swift: &glance-swift + <<: *swift storage: glance - storage_path: /registry storage_alternate: swift - # keystone authorization - swift_authurl: REPLACEME - swift_container: REPLACEME - swift_user: REPLACEME - swift_password: REPLACEME - swift_tenant_name: REPLACEME - swift_region_name: REPLACEME + +openstack-swift: + <<: *glance-swift + +elliptics: + storage: elliptics + elliptics_nodes: _env:ELLIPTICS_NODES + elliptics_wait_timeout: _env:ELLIPTICS_WAIT_TIMEOUT:60 + elliptics_check_timeout: _env:ELLIPTICS_CHECK_TIMEOUT:60 + elliptics_io_thread_num: _env:ELLIPTICS_IO_THREAD_NUM:2 + elliptics_net_thread_num: _env:ELLIPTICS_NET_THREAD_NUM:2 + elliptics_nonblocking_io_thread_num: _env:ELLIPTICS_NONBLOCKING_IO_THREAD_NUM:2 + elliptics_groups: _env:ELLIPTICS_GROUPS + elliptics_verbosity: _env:ELLIPTICS_VERBOSITY:4 + elliptics_logfile: _env:ELLIPTICS_LOGFILE:/dev/stderr + elliptics_addr_family: _env:ELLIPTICS_ADDR_FAMILY:2 + + + +# This is the default configuration when no flavor is specified +dev: &dev + <<: *local + search_backend: _env:SEARCH_BACKEND:sqlalchemy + # gunicorn will set this automatically if unset in dev mode + secret_key: _env:SECRET_KEY:secret + +# This flavor is automatically used by unit tests +test: + <<: *dev + storage_path: _env:STORAGE_PATH:/tmp/test + +# To specify another flavor, set the environment variable SETTINGS_FLAVOR +# $ export SETTINGS_FLAVOR=prod +prod: + <<: *s3 + storage_path: _env:STORAGE_PATH:/prod + diff --git a/config/config_test.yml b/config/config_test.yml deleted file mode 100644 index 35606b333..000000000 --- a/config/config_test.yml +++ /dev/null @@ -1,19 +0,0 @@ -test: - boto_bucket: _env/S3_BUCKET - - loglevel: info - storage: local - storage_path: /tmp/test - s3_access_key: _env:S3_ACCESS_KEY - s3_secret_key: _env:S3_SECRET_KEY - s3_bucket: _env:S3_BUCKET - s3_encrypt: _env:S3_ENCRYPT - s3_secure: _env:S3_SECURE - storage_redirect: False - - gs_access_key: _env:GS_ACCESS_KEY - gs_secret_key: _env:GS_SECRET_KEY - gs_secure: false - - search_backend: sqlalchemy - sqlalchemy_index_database: sqlite:////tmp/docker-registry.db diff --git a/docker_registry/app.py b/docker_registry/app.py index 085253540..206cd9ad0 100644 --- a/docker_registry/app.py +++ b/docker_registry/app.py @@ -46,7 +46,7 @@ def after_request(response): def init(): # Configure the email exceptions info = cfg.email_exceptions - if info: + if info and 'smtp_host' in info: mailhost = info['smtp_host'] mailport = info.get('smtp_port') if mailport: diff --git a/docker_registry/lib/config.py b/docker_registry/lib/config.py index a4bbae68e..ba95330a5 100644 --- a/docker_registry/lib/config.py +++ b/docker_registry/lib/config.py @@ -26,13 +26,18 @@ def get(self, *args, **kwargs): def _walk_object(obj, callback): if not hasattr(obj, '__iter__'): return callback(obj) + obj_new = {} if isinstance(obj, dict): for i, value in obj.iteritems(): - obj[i] = _walk_object(value, callback) - return obj + value = _walk_object(value, callback) + if value or value == '': + obj_new[i] = value + return obj_new for i, value in enumerate(obj): - obj[i] = _walk_object(value, callback) - return obj + value = _walk_object(value, callback) + if value or value == '': + obj_new[i] = value + return obj_new def convert_env_vars(config): @@ -40,7 +45,7 @@ def _replace_env(s): if isinstance(s, basestring) and s.startswith('_env:'): parts = s.split(':', 2) varname = parts[1] - vardefault = '!ENV_NOT_FOUND' if len(parts) < 3 else parts[2] + vardefault = None if len(parts) < 3 else parts[2] 
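+            # With no ":DEFAULT" part, a missing variable now resolves to None
+            # instead of a sentinel string; _walk_object() then drops the key
+            # from the resulting config.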
return os.environ.get(varname, vardefault) return s diff --git a/docker_registry/storage/ellipticsbackend.py b/docker_registry/storage/ellipticsbackend.py index 8c198d82c..af59a1cd8 100644 --- a/docker_registry/storage/ellipticsbackend.py +++ b/docker_registry/storage/ellipticsbackend.py @@ -52,7 +52,12 @@ def __init__(self, config): logger.info("Using namespace %s", self.namespace) at_least_one = False - for host, port in config.get('elliptics_nodes').iteritems(): + nodes = config.get('elliptics_nodes') + if isinstance(nodes, basestring): + nodes = nodes.split() + if isinstance(nodes, list) or isinstance(nodes, tuple): + nodes = dict(node.split(':') for node in nodes) + for host, port in nodes.iteritems(): try: self._elliptics_node.add_remote(host, port, config.get( diff --git a/setup-configs.sh b/setup-configs.sh deleted file mode 100755 index 688bbcaba..000000000 --- a/setup-configs.sh +++ /dev/null @@ -1,39 +0,0 @@ -#!/bin/bash - -check() { - echo "Check: $1" - if [ "$1" == "" ]; then - echo "[ERROR] $2" - exit 1 - fi -} - -GUNICORN_WORKERS=${GUNICORN_WORKERS:-4} - -if [ "$SETTINGS_FLAVOR" = "prod" ] ; then - config=$(config/config.yml -elif [ "$SETTINGS_FLAVOR" = "openstack-swift" ] ; then - check "$OS_USERNAME" 'Please specify $OS_USERNAME (or source your keystone_adminrc file)' - check "$OS_PASSWORD" 'Please specify $OS_PASSWORD (or source your keystone_adminrc file)' - check "$OS_TENANT_NAME" 'Please specify $OS_TENANT_NAME (or source your keystone_adminrc file)' - check "$OS_AUTH_URL" 'Please specify $OS_AUTH_URL (or source your keystone_adminrc file)' - check "$OS_REGION_NAME" 'Please specify $OS_REGION_NAME (e.g,: RegionOne)' - check "$SWIFT_CONTAINER" 'Please specify $SWIFT_CONTAINER (e.g,: docker-registry)' - check "$OS_GLANCE_URL" 'Please specify $OS_GLANCE_URL (e.g,: http://10.129.184.9:9292)' - - config=$(config/config.yml -fi diff --git a/tox.ini b/tox.ini index eefe3638f..092c20e6d 100644 --- a/tox.ini +++ b/tox.ini @@ -4,7 +4,7 @@ skipsdist = True [testenv] setenv = SETTINGS_FLAVOR=test - DOCKER_REGISTRY_CONFIG=config_test.yml + DOCKER_REGISTRY_CONFIG=config_sample.yml PYTHONPATH=test deps = -r{toxinidir}/requirements.txt -r{toxinidir}/test-requirements.txt