What is the difference between these two docker-compose configurations for Vault?

I have been trying to set up Vault using a docker-compose file. I have three requirements:
1. Make sure the secrets and other configuration created are persisted after a container restart.
2. Be able to access the Vault web UI.
3. Start Vault in production mode.
I tried these two configuration segments.
vault-server:
  image: vault:latest
  ports:
    - "8204:8200"
  environment:
    VAULT_ADDR: "http://0.0.0.0:8204"
    VAULT_DEV_ROOT_TOKEN_ID: "vault-plaintext-root-token"
  cap_add:
    - IPC_LOCK
  volumes:
    - ./vault/logs:/vault/logs
    - ./vault/file:/vault/file:rw
vault_dev:
  hostname: vault
  container_name: vault
  image: vault:latest
  environment:
    VAULT_ADDR: "http://0.0.0.0:8205"
    VAULT_DEV_ROOT_TOKEN_ID: "vault-plaintext-root-token"
    VAULT_DEV_LISTEN_ADDRESS: "0.0.0.0:8205"
  ports:
    - "8205:8200"
  volumes:
    - ./vault/files:/vault/file:rw
  cap_add:
    - IPC_LOCK
  entrypoint: vault server -dev
My problems are:
With the vault-server configuration, I can access the Vault UI at http://localhost:8204/ui/, but the secrets created are not retained after a restart.
With the vault_dev configuration (which I thought would let me start Vault in production mode once I remove the -dev flag), I can't access the Vault UI at http://localhost:8205/ui/.
I really don't understand what the difference between these two configuration segments is, or how to achieve requirements 1 and 3.
For persistence, since the documentation says we need to add a volume, I tried mounting the file path, but the content still vanishes when the container is restarted.
Some documents say that in dev mode nothing persists and you have to run in prod mode, but I am unable to figure out how to modify the vault-server configuration to make it run in prod mode.
I would appreciate it if someone could help, as I have been going through several links for the past few days and am a bit lost at the moment.

I think what's causing your issue is VAULT_DEV_LISTEN_ADDRESS: "0.0.0.0:8205".
That causes Vault to listen on port 8205 rather than the default 8200. So when you do that, you need to map port 8205 rather than 8200. That would look like this:
ports:
  - "8205:8205"
But when you run in a container, there's rarely any reason to change port numbers inside the container, since there's typically only one process running so it can't really conflict with anything. So I'd just let it listen on the default 8200 and map it to 8205 on the host.
I looked at the docs for VAULT_ADDR and I think you should delete that as well, unless you have multiple Vault nodes and a load balancer in front of them.
So you'll end up with:
vault_dev:
  hostname: vault
  container_name: vault
  image: vault:latest
  environment:
    VAULT_DEV_ROOT_TOKEN_ID: "vault-plaintext-root-token"
  ports:
    - "8205:8200"
  volumes:
    - ./vault/files:/vault/file:rw
  cap_add:
    - IPC_LOCK
  entrypoint: vault server -dev
Then Vault should be reachable on http://localhost:8205/ui.
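If you want to sanity-check the listener without a browser, Vault's standard health endpoint works too. A quick check from the host, assuming the 8205:8200 mapping above, might look like this:
# Query Vault's health API through the published host port.
curl -s http://localhost:8205/v1/sys/health
# A dev-mode server should report "initialized": true and "sealed": false.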

If you look at the Docker Hub page for the vault image it documents:
Running the Vault container with no arguments will give you a Vault server in development mode.
/vault/file [is used] for writing persistent storage data when using the file data storage plugin. By default nothing is written here (a dev server uses an in-memory data store); the file data storage backend must be enabled in Vault's configuration before the container is started.
(H/T @ChrisBecke, who described this behavior in a comment; it is also in the well-commented Dockerfile.)
Later on that page is a section entitled "Running Vault in Server Mode for Development". The key point here is that you need to explicitly provide a command (server) so that the container does not start up in dev mode.
@HansKilian's answer on port setup is also important here. Incorporating that answer's simplifications and the need to explicitly run vault server without -dev, you should get something like:
version: '3.8' # most recent stable Compose file format
services:
  vault:
    image: vault:1.12.2
    command: server # run in server (non-dev) mode, per the note above
    environment:
      VAULT_LOCAL_CONFIG: >-
        {
          "storage": {
            "file": {"path": "/vault/file"}
          },
          "listener": [{
            "tcp": {
              "address": "0.0.0.0:8200",
              "tls_disable": true
            }
          }],
          "default_lease_ttl": "168h",
          "max_lease_ttl": "720h",
          "ui": true
        }
    ports:
      - "8204:8200"
    cap_add:
      - IPC_LOCK
    volumes:
      - vault_file:/vault/file:rw
volumes:
  vault_file:
The JSON block is copied from the documentation, which also notes
Disabling TLS and using the file storage backend are not recommended for production use.
The underlying Vault storage can't be usefully accessed from the host (if nothing else, it is encrypted) and I've chosen to store it in a named Docker volume instead.
Since this is not running in dev mode, you will need to go through the steps of initializing Vault, which will give you a set of critical credentials, and then you'll need to create user identities and add credentials to Vault. It sounds like you're not looking for a fully-automated setup here, so be aware that there are some manual steps involved with some "no really don't lose these keys" output.
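A rough sketch of that first-run sequence, assuming the compose file above (service name vault) and docker-compose v1 syntax; vault operator init/unseal and vault login are the standard CLI commands:
# Initialize once; this prints the unseal keys and the initial root token -- keep them safe.
docker-compose exec -e VAULT_ADDR=http://127.0.0.1:8200 vault vault operator init
# Unseal with three different unseal keys (the command prompts for one key per run).
docker-compose exec -e VAULT_ADDR=http://127.0.0.1:8200 vault vault operator unseal
# Log in with the initial root token before creating policies, auth methods, and users.
docker-compose exec -e VAULT_ADDR=http://127.0.0.1:8200 vault vault login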

Related

How to apply new TLS certificates for running Hashicorp Vault service

I am using HashiCorp Vault and Consul. I have Vault, Consul, and a Golang VaultManager service. These services are running as Docker containers. I am not using any container orchestration (like k8s or Podman); I am simply running the containers in a Linux environment using a docker-compose.yaml file.
Refer to the docker-compose file content below.
version: '3.6'
services:
  vault:
    image: imagename
    networks:
      - nwname
    command: server -config=/vault/config/vault-config.json
    cap_add:
      - IPC_LOCK
    restart: always
  consul:
    image: imagename
    networks:
      - nwname
    command: agent -server -bind 0.0.0.0 -client 0.0.0.0 -bootstrap-expect 1 -config-file=/consul/config/config.json
    restart: always
  vaultmanager:
    image: imagename
    devices:
      - "/dev/tpm0:/dev/tpm0"
    networks:
      - nwname
    restart: always
networks:
  nwname:
    name: nwname
    driver: bridge
For now the Vault service is using self-signed certificates for TLS communication, but we need to update the certificates (.crt and .key). Once the containers are up and running, during the VaultManager service startup I generate new certificates and put them into the same location from which the existing certificates were loaded.
So the Vault server needs to pick up the newly updated TLS certificates. How do we achieve this?
Note: the Vault, Consul, and VaultManager services are running in separate containers. From the VaultManager container we need to achieve this automatically, without manual intervention.
The VaultManager service is written in Go.
I have tried to restart the Vault container from the VaultManager container by using docker restart Vault, but
the docker command is not found inside the VaultManager container.
Please refer to the vault config below.
{
  "backend": {
    "consul": {
      "address": "consul:8500",
      "path": "vault/"
    }
  },
  "listener": {
    "tcp": {
      "address": "0.0.0.0:8200",
      "tls_disable": 0,
      "tls_cert_file": "/vault/config/certificate.crt",
      "tls_key_file": "/vault/config/private.key"
    }
  },
  "ui": true
}
Also, please advise how we can use a SIGHUP in this use case.
It is unclear what you're trying to do here. Nothing you're asking is unique to Vault; this is more of a general sysadmin question. It is also unclear which containerization platform you're leveraging.
The first option is to SIGHUP the Vault process. For some configuration changes, this is sufficient for Vault to pick up and process the changes. If you've registered Vault as a systemd service, you can use the systemctl reload vault.service command to send this signal.
The second option is to systemctl restart vault. This will incur downtime, and possibly require you to unseal again (unless you have auto-unseal configured). Only use this if the SIGHUP doesn't work.
The third option is to send a docker container restart vault command to the host. This assumes Docker; if you're using Kubernetes, Nomad, Cloud Foundry, or something else, you should put those details in your question. This has some of the same drawbacks as simply restarting the service, and should only be used if your container runs a script on boot that is doing something important (which hopefully isn't the case).
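Since everything here runs as plain Docker containers, the first and third options could be driven from the host roughly like this (a sketch assuming the Vault container is named vault):
# Option 1: send SIGHUP to PID 1 in the container so Vault reloads its
# listener TLS certificate and key files without downtime.
docker kill --signal=HUP vault
# Option 3: full container restart; Vault comes back sealed unless auto-unseal is configured.
docker restart vault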
If you're asking how to ssh from one container to another, you should add tags to your question, as this is beyond the scope of Hashicorp Vault.
Further reading:
https://support.hashicorp.com/hc/en-us/articles/5767318985107-Vault-SIGHUP-Behavior

Docker: Multiple Compositions

I've seen many examples of Docker Compose and they make perfect sense to me, but they all bundle their frontend and backend as separate containers in the same composition. In my use case I've developed a backend (in Django) and a frontend (in React) for a particular application. However, I want to be able to allow my backend API to be consumed by other client applications down the road, and thus I'd like to isolate them from one another.
Essentially, I envision it looking something like this. I would have a docker-compose file for my backend, which would consist of a PostgreSQL container and a webserver (Apache) container with a volume to my source code. I'm not going to get into implementation details, but because containers in the same composition exist on the same network, I can refer to the DB in the source code using the service alias in the file. That is one environment with 2 containers.
For my frontend, and any other future client applications that consume the backend, I would have a webserver (Apache) container to serve the compiled static build of the React source. That of course exists in its own environment, so my question is: how do I converge the two so that I can refer to the backend alias in my base URL (axios, fetch, etc.)? How do you ship both "environments" to a registry and then deploy from that registry such that they can continue to communicate with each other?
I feel like I'm probably missing the mark on how the Docker architecture works at large, but to my knowledge there is a default network, and Docker will execute the composition and run it on the default network unless otherwise specified or it's already in use. However, two separate compositions are two separate networks, no? I'd very much appreciate a lesson on the semantics, and thank you in advance.
There are a couple of ways to get multiple Compose files to connect together. The easiest is just to declare that one project's default network is the other's:
networks:
  default:
    external:
      name: other_default
(docker network ls will tell you the actual name once you've started the other Compose project.) This is also suggested in the Docker Networking in Compose documentation.
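In practice you bring the projects up in order, so that the shared network already exists before the second project references it. A rough sequence, assuming two project directories named backend and frontend (so the first project's default network comes out as backend_default):
# Start the backend project first; Compose creates its default network,
# named <project>_default after the project directory (here: backend_default).
(cd backend && docker-compose up -d)
# Confirm the generated network name before wiring the second project to it.
docker network ls
# Start the frontend project, whose Compose file declares backend_default as external.
(cd frontend && docker-compose up -d)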
An important architectural point is that your browser application will never be able to use the Docker hostnames. Your fetch() call runs in the browser, not in Docker, and so it needs to reach a published port. The best way to set this up is to have the Apache server that's serving the built UI code also run a reverse proxy, so that you can use a same-server relative URL /api/... to reach the backend. The Apache ProxyPass directive would be able to use the Docker-internal hostnames.
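As an illustration only, a same-server proxy rule baked into the frontend's Apache image could look roughly like this; it assumes mod_proxy and mod_proxy_http are enabled and that the backend service is reachable as backend on port 3000, so adjust the names and paths to your setup:
# Drop a proxy fragment into the image's Apache config (e.g. from a Dockerfile RUN step).
cat > /etc/apache2/conf-enabled/api-proxy.conf <<'EOF'
ProxyPreserveHost On
ProxyPass        /api/ http://backend:3000/api/
ProxyPassReverse /api/ http://backend:3000/api/
EOF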
You also mention "volume with your source code". This is not a Docker best practice. It's frequently used to make Docker simulate a local development environment, but it's not how you want to deploy or run your code in production. The Docker image should be self-contained, and your docker-compose.yml generally shouldn't need volumes: or a command:.
A skeleton layout for what you're proposing could look like:
version: '3'
services:
  db:
    image: postgres:12
    volumes:
      - pgdata:/var/lib/postgresql/data
  backend:
    image: my/backend
    environment:
      PGHOST: db
    # No ports: (not directly exposed) (but it could be)
    # No volumes: or command: (use what's in the image)
volumes:
  pgdata:

version: '3'
services:
  frontend:
    image: my/frontend
    environment:
      BACKEND_URL: http://backend:3000
    ports:
      - 8080:80
networks:
  default:
    external:
      name: backend_default

Spring Cloud Apps Running in Container - Runs on Local Machine, Fails on Google Cloud

This is a follow-up to an earlier question I asked on Stack Overflow. I am building a Spring Boot / Spring Cloud based service and running it in containers using Docker. I finally got it running on my local machine.
# Use postgres/example user/password credentials
version: '3.2'
services:
  db:
    image: postgres
    ports:
      - 5000:5432
    environment:
      POSTGRES_PASSWORD: example
    volumes:
      - type: volume
        source: psql_data
        target: /var/lib/postgresql/data
    networks:
      - app
    restart: always
  config:
    image: kellymarchewa/config_server
    ports:
      - 8888:8888
    networks:
      - app
    volumes:
      - /home/kelly/.ssh:/root/.ssh
    restart: always
  search:
    image: kellymarchewa/search_api
    networks:
      - app
    restart: always
    ports:
      - 8081:8081
    depends_on:
      - db
      - config
      - inventory
  inventory:
    image: kellymarchewa/inventory_api
    depends_on:
      - db
      - config
    # command: ["/home/kelly/workspace/git/wait-for-it/wait-for-it.sh", "config:8888"]
    ports:
      - 8082:8082
    networks:
      - app
    restart: always
volumes:
  psql_data:
networks:
  app:
Earlier, I was having difficulty related to the dependency of the clients on the config server (the config server was not fully started by the time the clients tried to access it). However, I resolved this issue using spring-retry. Now, although I can run it using docker-compose up on my local machine, running the same command (using the same docker-compose file) fails on a virtual machine hosted on Google Cloud.
inventory_1 | java.lang.IllegalStateException: Could not locate PropertySource and the fail fast property is set, failing
However, it appears to be querying the appropriate location:
inventory_1 | 2018-02-10 00:23:00.945 INFO 1 --- [ main] c.c.c.ConfigServicePropertySourceLocator : Fetching config from server at: http://config:8888
I am not sure what the issue is, since both are running from the same docker-compose file and the config server itself is starting.
The config server's application.properties:
server.port=8888
management.security.enabled=false
spring.cloud.config.server.git.uri=git@gitlab.com:leisurely-diversion/configuration.git
# spring.cloud.config.server.git.uri=${HOME}/workspace/eclipse/leisurely_diversion_config
Client bootstrap.properties:
spring.application.name=inventory-client
#spring.cloud.config.uri=http://localhost:8888
spring.cloud.config.uri=http://config:8888
management.security.enabled=false
spring.cloud.config.fail-fast=true
spring.cloud.config.retry.max-attempts=10
spring.cloud.config.retry.initial-interval=2000
EDIT:
Upon further examination, it appears as if the config server is failing to pull the git repository that stores the application properties. However, I am not sure why, given the following:
I have added SSH keys for GitLab to my VM.
I can pull the repository from my VM.
I am using volumes to reference /home/kelly/.ssh in my docker-compose file. The known_hosts file is included in this directory.
The above (using volumes for my SSH keys) worked fine on my development machine.
Any help would be appreciated.
Eventually, I was able to resolve the issue. While this was actually resolved a couple of days ago, I am posting the general solution in hopes that it may prove useful in the future.
First, I was able to confirm (by using curl to call one of my server's endpoints) that the underlying issue was the inability of the config server to pull the git repo.
Initially, I was a bit perplexed: my SSH keys were set up and I was able to git clone the repo from the VM. However, while looking over the Spring Cloud documentation, I discovered that the known_hosts file must be in ssh-rsa format. The VM's ssh utility was saving entries in a different format (even though both my development machine and the VM are running Debian 9). To resolve the issue, add the corresponding GitLab (or other host) entry in ssh-rsa format. Checking one's /etc/ssh/sshd_config may also be of value.
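For reference, one way to add the host key in ssh-rsa format is ssh-keyscan; a sketch, assuming the same /home/kelly/.ssh directory that the compose file mounts into the config container:
# Append GitLab's RSA host key to the known_hosts file that is mounted at /root/.ssh.
ssh-keyscan -t rsa gitlab.com >> /home/kelly/.ssh/known_hosts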

How to configure dns entries for Docker Compose

I am setting up a Spring application to run using Compose. The application needs to establish a connection to ActiveMQ, either running locally for developers or to existing instances for staging/production.
I set up the following, which is working great for local dev:
amq:
  image: rmohr/activemq:latest
  ports:
    - "61616:61616"
    - "8161:8161"
legacy-bridge:
  image: myco/myservice
  links:
    - amq
and in the application configuration I am declaring the AMQ connection as
broker-url=tcp://amq:61616
Running docker-compose up works great: ActiveMQ is fired up locally and my application container starts and connects to it.
Now I need to set this up for staging/production, where the ActiveMQ instances are running on existing hardware within the infrastructure. My thought is to either use Spring profiles to handle different configurations, in which case the application configuration entry broker-url=tcp://amq:61616 would become something like broker-url=tcp://some.host.here:61616, or to find some way to create a DNS entry within my production docker-compose.yml which points an amq DNS entry to the associated staging or production queues.
What is the best approach here, and if it is DNS, how do I set that up in Compose?
Thanks!
Using the extra_hosts flag
First thing that comes to mind is using Compose's extra_hosts flag:
legacy-bridge:
  image: myco/myservice
  extra_hosts:
    - "amq:1.2.3.4"
This will not create a DNS record, but an entry in the container's /etc/hosts file, effectively allowing you to continue using tcp://amq:61616 as your broker URL in your application.
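You can confirm the entry landed where the application expects it by inspecting the hosts file inside the container, for example:
# The extra_hosts entry should appear as a plain line in /etc/hosts inside the container.
docker-compose exec legacy-bridge cat /etc/hosts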
Using an ambassador container
If you're not content with directly specifying the production broker's IP address and would like to leverage existing DNS records, you can use the ambassador pattern:
amq-ambassador:
  image: svendowideit/ambassador
  command: ["your-amq-dns-name", "61616"]
  ports:
    - 61616
legacy-bridge:
  image: myco/myservice
  links:
    - "amq-ambassador:amq"

Fig (Docker): how to specify which services to run depending on the environment

I'm using Fig (and Docker) to set up my dev environment.
One of the services that I have configured is Adminer, which is a lightweight web database client. I need it for development, but don't want it running in production. How can I do that? A solution for Fig (preferable) or Docker will do.
Here's a part of my fig.yml:
db:
  image: postgres
adminer:
  image: clue/adminer
  links:
    - db
  ports:
    - "8081:80"
You could use multiple fig files. Fig uses fig.yml by default, but you can specify a different file with the -f flag (see the docs).
Thus, whatever you want your default to be could be fig.yml. Then, you could have fig-dev.yml (for example) for your development environment. Use fig -f fig-dev.yml up when using that one.
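Day-to-day usage would then look something like this (a sketch; fig-dev.yml is whatever name you choose for the development file that also defines Adminer):
# Default stack only, from fig.yml (no Adminer):
fig up
# Development stack from fig-dev.yml, including Adminer:
fig -f fig-dev.yml up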
