I can't seem to find a clear answer for this. I found https://stackoverflow.com/a/68864132/17183293, but it is not very clear, and it may also be outdated because "dockerComposeFile" no longer seemed to be a valid option.
I have a project with an existing docker-compose.yml file which spins up a MariaDB database. I added a generated devcontainer.json configuration file for Node, which looks like this:
// For format details, see https://aka.ms/devcontainer.json. For config options, see the README at:
// https://github.com/microsoft/vscode-dev-containers/tree/v0.202.3/containers/javascript-node
{
  "name": "Node.js",
  "runArgs": ["--init"],
  "build": {
    "dockerfile": "Dockerfile",
    // Update 'VARIANT' to pick a Node version: 16, 14, 12.
    // Append -bullseye or -buster to pin to an OS version.
    // Use -bullseye variants on local arm64/Apple Silicon.
    "args": { "VARIANT": "12" }
  },
  // Set *default* container specific settings.json values on container create.
  "settings": {},
  // Add the IDs of extensions you want installed when the container is created.
  "extensions": [
    "dbaeumer.vscode-eslint"
  ],
  // Use 'forwardPorts' to make a list of ports inside the container available locally.
  // "forwardPorts": [],
  // Use 'postCreateCommand' to run commands after the container is created.
  // "postCreateCommand": "yarn install",
  // Comment out to connect as root instead. More info: https://aka.ms/vscode-remote/containers/non-root.
  "remoteUser": "node"
}
It also generated a Dockerfile
# See here for image contents: https://github.com/microsoft/vscode-dev-containers/tree/v0.202.3/containers/javascript-node/.devcontainer/base.Dockerfile
# [Choice] Node.js version (use -bullseye variants on local arm64/Apple Silicon): 16, 14, 12, 16-bullseye, 14-bullseye, 12-bullseye, 16-buster, 14-buster, 12-buster
ARG VARIANT="16-bullseye"
FROM mcr.microsoft.com/vscode/devcontainers/javascript-node:0-${VARIANT}
# [Optional] Uncomment this section to install additional OS packages.
# RUN apt-get update && export DEBIAN_FRONTEND=noninteractive \
# && apt-get -y install --no-install-recommends <your-package-list-here>
# [Optional] Uncomment if you want to install an additional version of node using nvm
# ARG EXTRA_NODE_VERSION=10
# RUN su node -c "source /usr/local/share/nvm/nvm.sh && nvm install ${EXTRA_NODE_VERSION}"
# [Optional] Uncomment if you want to install more global node modules
# RUN su node -c "npm install -g <your-package-list-here>"
These files are inside my .devcontainer folder. Now, in my project's docker-compose.yml file:
version: '3.8'
services:
  mariadb:
    image: mariadb:10.1
    env_file: .env
    environment:
    ports:
      - 3306:3306
    volumes:
      - ./docker/mariadb.conf.d/:/etc/mysql/conf.d:z
      - ./docker/mariadb-init/:/docker-entrypoint-initdb.d:z
What I'd like to achieve is to spin up this mariadb instance so that my app inside the dev container can access it; ideally I'd also be able to access the database from my host operating system. I'd like to keep using the existing docker-compose.yml file so that people without the Dev Containers extension can run docker-compose up manually. How could I achieve this?
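The pattern that makes this work is a devcontainer.json whose "dockerComposeFile" lists both the project's existing compose file and a small dev-only override, plus a "service" naming the container to attach to. A minimal sketch (the "app" service name, the override filename, and the workspace path are assumptions, not taken from this project):

```jsonc
// .devcontainer/devcontainer.json (sketch)
{
  "name": "Node.js",
  // Reference the project's existing compose file plus a dev-only override.
  "dockerComposeFile": [
    "../docker-compose.yml",
    "docker-compose.extend.yml"
  ],
  // The service VS Code attaches to; defined in the override file.
  "service": "app",
  "workspaceFolder": "/workspace"
}
```

With this layout, contributors without the extension still run docker-compose up against the original file alone; the override and the app service only come into play for dev-container users.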
Full working example (combined from the answers mentioned in the question and other Stack Overflow posts):
Linux as host
Go as example language
zsh, oh-my-zsh, .zsh_history from host Linux
.devcontainer/devcontainer.json:
// For format details, see https://aka.ms/devcontainer.json. For config options, see the README at:
// https://github.com/microsoft/vscode-dev-containers/tree/v0.217.4/containers/go
{
  "name": "Go",
  "service": "workspace",
  "workspaceFolder": "/home/vscode/woskpaces/go-example/",
  "dockerComposeFile": [
    "docker-compose.yml",
    "docker-compose.workspace.yml"
  ],
  // Set *default* container specific settings.json values on container create.
  "settings": {
    "go.toolsManagement.checkForUpdates": "local",
    "go.useLanguageServer": true,
    "go.gopath": "/go",
    "go.goroot": "/usr/local/go"
  },
  // Add the IDs of extensions you want installed when the container is created.
  "extensions": [
    "golang.go"
  ],
  // Use 'forwardPorts' to make a list of ports inside the container available locally.
  // "forwardPorts": [],
  // Use 'postCreateCommand' to run commands after the container is created.
  // "postCreateCommand": "go version",
  // Comment out to connect as root instead. More info: https://aka.ms/vscode-remote/containers/non-root.
  "remoteUser": "vscode"
}
.devcontainer/docker-compose.workspace.yml:
version: '3'
networks:
  myNetwork:
    name: myNetwork
services:
  workspace:
    build:
      context: ./
      dockerfile: Dockerfile
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
      - ..:/home/vscode/woskpaces/go-example/
      - ~/.zshrc:/home/vscode/.zshrc
      - ~/.oh-my-zsh/:/home/vscode/.oh-my-zsh/
      - ~/.zsh_history:/home/vscode/.zsh_history
    depends_on:
      - kafka
    tty: true # <- keeps container running
    networks:
      - myNetwork
.devcontainer/docker-compose.yml:
version: '3'
networks:
  myNetwork:
    name: myNetwork
services:
  zookeeper:
    image: confluentinc/cp-zookeeper:latest
    environment:
      ZOOKEEPER_CLIENT_PORT: 2181
      ZOOKEEPER_TICK_TIME: 2000
    ports:
      - 22181:2181
    networks:
      - myNetwork
    tmpfs: "/datalog"
  kafka:
    image: confluentinc/cp-kafka:latest
    depends_on:
      - zookeeper
    ports:
      - 29092:29092
    environment:
      KAFKA_BROKER_ID: 1
      KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://kafka:9092,PLAINTEXT_HOST://localhost:29092
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: PLAINTEXT:PLAINTEXT,PLAINTEXT_HOST:PLAINTEXT
      KAFKA_INTER_BROKER_LISTENER_NAME: PLAINTEXT
      KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1
    networks:
      - myNetwork
.devcontainer/Dockerfile:
# See here for image contents: https://github.com/microsoft/vscode-dev-containers/tree/v0.209.6/containers/go/.devcontainer/base.Dockerfile
# [Choice] Go version (use -bullseye variants on local arm64/Apple Silicon): 1, 1.16, 1.17, 1-bullseye, 1.16-bullseye, 1.17-bullseye, 1-buster, 1.16-buster, 1.17-buster
ARG VARIANT="1.17-bullseye"
FROM mcr.microsoft.com/vscode/devcontainers/go:0-${VARIANT}
# [Choice] Node.js version: none, lts/*, 16, 14, 12, 10
ARG NODE_VERSION="none"
RUN if [ "${NODE_VERSION}" != "none" ]; then su vscode -c "umask 0002 && . /usr/local/share/nvm/nvm.sh && nvm install ${NODE_VERSION} 2>&1"; fi
# [Optional] Uncomment this section to install additional OS packages.
# RUN apt-get update && export DEBIAN_FRONTEND=noninteractive \
# && apt-get -y install --no-install-recommends <your-package-list-here>
# [Optional] Uncomment the next lines to use go get to install anything else you need
# USER vscode
# RUN go get -x <your-dependency-or-tool>
# [Optional] Uncomment this line to install global node packages.
# RUN su vscode -c "source /usr/local/share/nvm/nvm.sh && npm install -g <your-package-here>" 2>&1
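With this layout, contributors who don't use the Dev Containers extension can still bring up the same stack by hand, passing both compose files explicitly. A sketch, assuming the commands are run from the .devcontainer directory:

```shell
# Start everything, including the workspace container, without VS Code:
docker-compose -f docker-compose.yml -f docker-compose.workspace.yml up -d --build

# Or start only the backing services (zookeeper + kafka):
docker-compose -f docker-compose.yml up -d
```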
Related
I'm looking for a way to make the "devcontainer open ." command finish faster. I was looking at the log produced by that command and I noticed something:
[1895 ms] Start: Run: docker-compose --project-name hobby-on-rails -f /home/pedro/tempo-livre/hobby-on-rails/docker-compose.yml build
That got me wondering: I expected the build to run only once, as when you run docker-compose up, and not every time I open the project. I think I'm missing some configuration to tell the dev container that it doesn't need to build my web service again. Let's take a look at my devcontainer.json:
// For format details, see https://aka.ms/devcontainer.json. For config options, see the README at:
// https://github.com/microsoft/vscode-dev-containers/tree/v0.234.0/containers/docker-existing-docker-compose
// If you want to run as a non-root user in the container, see .devcontainer/docker-compose.yml.
{
  "name": "Hobby On Rails",
  // Update the 'dockerComposeFile' list if you have more compose files or use different names.
  // The .devcontainer/docker-compose.yml file contains any overrides you need/want to make.
  "dockerComposeFile": [
    "../docker-compose.yml"
  ],
  // The 'service' property is the name of the service for the container that VS Code should
  // use. Update this value and .devcontainer/docker-compose.yml to the real service name.
  "service": "web",
  // The optional 'workspaceFolder' property is the path VS Code should open by default when
  // connected. This is typically a file mount in .devcontainer/docker-compose.yml
  "workspaceFolder": "/opt/hobbyonrails",
  // Set *default* container specific settings.json values on container create.
  "settings": {},
  // Add the IDs of extensions you want installed when the container is created.
  "extensions": [
    "castwide.solargraph",
    "github.copilot",
    "misogi.ruby-rubocop"
  ]
  // Use 'forwardPorts' to make a list of ports inside the container available locally.
  // "forwardPorts": [],
  // Uncomment the next line if you want start specific services in your Docker Compose config.
  // "runServices": [],
  // Uncomment the next line if you want to keep your containers running after VS Code shuts down.
  // "shutdownAction": "none",
  // Uncomment the next line to run commands after the container is created - for example installing curl.
  // "postCreateCommand": "apt-get update && apt-get install -y curl",
  // Uncomment to connect as a non-root user if you've added one. See https://aka.ms/vscode-remote/containers/non-root.
  // "remoteUser": "vscode"
}
The web service is being built every time. Is it possible to prevent this from happening?
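One mitigating fact: docker-compose build goes through Docker's layer cache, so as long as the Dockerfile and the build context are unchanged, the rebuild triggered on open should finish in seconds with every step reported as cached. You can confirm whether the cache is being used by timing two consecutive builds (requires a running Docker daemon; service name taken from the compose file below):

```shell
time docker-compose -f docker-compose.yml build web
time docker-compose -f docker-compose.yml build web  # second run should be mostly cached
```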
In case you would like to take a look at my docker-compose.yml:
version: "3.7"
services:
  db:
    image: postgres:14.4
    container_name: hobbyonrails_db
    ports:
      - 5432:5432
    env_file:
      - ./.docker/env_files/.env
    healthcheck:
      test: ["CMD-SHELL", "pg_isready"]
      interval: 10s
      timeout: 5s
      retries: 5
    volumes:
      - ./.docker/volumes/postgres_data:/var/lib/postgresql/data
  web:
    image: hobbyonrails
    container_name: hobbyonrails_web
    env_file:
      - ./.docker/env_files/.env
    build:
      context: .
    depends_on:
      db:
        condition: service_healthy
    links:
      - db
    ports:
      - 3000:3000
    volumes:
      - .:/opt/hobbyonrails:cached
    stdin_open: true
    tty: true
  livereload:
    image: hobbyonrails
    container_name: hobbyonrails_livereload
    depends_on:
      - web
    ports:
      - 35729:35729
    command: ["bundle", "exec", "guard", "-i"]
    env_file:
      - ./.docker/env_files/.env
    volumes:
      - .:/opt/hobbyonrails:cached
  tailwindcsswatcher:
    image: hobbyonrails
    container_name: hobbyonrails_tailwindcsswatcher
    depends_on:
      - web
    ports:
      - 3035:3035
    command: ["bin/rails", "tailwindcss:watch"]
    env_file:
      - ./.docker/env_files/.env
    volumes:
      - .:/opt/hobbyonrails:cached
    tty: true
  selenium:
    container_name: hobbyonrails_selenium
    image: selenium/standalone-chrome:3.141.59
    ports:
      - 4444:4444
volumes:
  postgres_data:
What do you think?
I'm unable to mount a host directory (on a Raspberry Pi) to the api_service Docker container, even with chmod -R 777 on the host.
I was able to mount it by running the api_service from the command line with docker start --mount type=bind,src=/data/yarmp-data,target=/data/yarmp-data docker_api_service_1; with docker inspect containerId the Mounts section showed the mount was in place, and inside the container it was too. But I'd like to achieve this with docker-compose.
I tried different syntaxes in the docker-compose.yaml file without success, each time removing all containers and images, then running docker-compose build and docker-compose up.
What am I missing? Is there a way to trace the mount options at startup of the container?
Should the target directory have been created in the image before mounting it in docker-compose.yaml?
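One way to trace what was actually mounted is to inspect the container compose created and compare its Mounts section with the one from the working manual run (container name taken from the manual command above; this is the same inspection technique shown in the next question):

```shell
docker inspect --format '{{json .Mounts}}' docker_api_service_1
# An empty [] or a missing bind entry means compose never parsed the volume
# definition, which points at the YAML rather than at Docker itself.
```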
docker-compose.yaml
#Doc: https://github.com/compose-spec/compose-spec/blob/master/spec.md
version: '3.2'
services:
api_service:
    build: ./api_service
    restart: always
    ports:
      - target: 8080
          published: 8080
    depends_on:
      - postgres_db
    links:
      - postgres_db:yarmp-db-host # database is postgres_db hostname into this api_service
    volumes:
      - type: bind
          source: $HOST/data/yarmp-data #Host with this version not working
          source: /data/yarmp-data #Host absolute path not working
          #source: ./mount-test #not working either
          target: /data/yarmp-data
      #- /data/yarmp-data:/data/yarmp-data # not working either
postgres_db:
    build: ./postgres_db
    restart: always
    ports:
      - target: 5432
          published: 5432
    env_file:
      - postgres_db/pg-db-database.env # configure postgres
    volumes:
      - database-data:/var/lib/postgresql/data/
postgres_db/Dockerfile
FROM postgres:latest
LABEL maintainer="me@mail.com"
RUN mkdir -p /docker-entrypoint-initdb.d
COPY yarmp-dump.sql /docker-entrypoint-initdb.d/
api_service/Dockerfile
FROM arm32v7/adoptopenjdk
LABEL maintainer="me@mail.com"
RUN apt-get update
RUN apt-get -y install git curl vim
CMD ["/bin/bash"]
#csv files data
RUN mkdir -p /data/yarmp-data #Should I create it or not??
RUN mkdir -p /main-app
WORKDIR /main-app
# JAVA APP DATA
ADD my-api-0.0.1-SNAPSHOT.jar app.jar
EXPOSE 8080
ENTRYPOINT ["java", "-jar","/main-app/app.jar"]
It turns out my entire docker-compose.yaml file was not correct.
As pointed out by @xdhmoore, there was an indentation issue, among others.
I figured it out by:
validating the docker-compose.yaml with docker-compose config
Tabs are NOT permitted by the YAML spec; USE ONLY SPACES FOR INDENTATION
Note that vim's default configuration file /usr/share/vim/vim81/ftplugin/yaml.vim correctly replaces tabs with spaces.
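Since the YAML spec forbids tabs for indentation, a quick sanity check before reaching for docker-compose config is to grep for literal tab characters. A small self-contained demonstration (the sample file path is arbitrary):

```shell
# Write a compose-like file that accidentally uses a tab, then locate it.
printf 'services:\n\tapi_service:\n    build: .\n' > /tmp/sample-compose.yaml
grep -n "$(printf '\t')" /tmp/sample-compose.yaml   # prints the offending line number
```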
The long-syntax entries had been indented with tabs in my editor when two leading spaces were what was needed. Here is my final docker-compose.yaml:
docker-compose.yaml
version: '3.2'
services:
  api_service:
    build: ./api_service
    restart: always
    ports:
      - target: 8080
        published: 8080 #2 spaces before published
    depends_on:
      - postgres_db
    links:
      - postgres_db:yarmp-db-host
    volumes:
      - type: bind
        source: /data/yarmp-data #2 spaces before source, meaning same level as previous '- type:...' and add 2 spaces more
        target: /data/yarmp-data #2 spaces before target
  postgres_db:
    build: ./postgres_db
    restart: always
    ports:
      - target: 5432
        published: 5432 #2 spaces before published
    env_file:
      - postgres_db/pg-db-database.env # configure postgres
    volumes:
      - database-data:/var/lib/postgresql/data/
volumes:
  database-data:
This is based on the YAML in your answer. When I plug it into a YAML-to-JSON converter, I get:
{
  "version": "3.2",
  "services": null,
  "api_service": {
    "build": "./api_service",
    "restart": "always",
    "ports": [
      {
        "target": "8080\npublished: 8080"
      }
    ],
    "depends_on": [
      "postgres_db"
    ],
    "links": [
      "postgres_db:yarmp-db-host"
    ],
    "volumes": [
      {
        "type": "bind\nsource: /data/yarmp-data"
      }
    ]
  },
  "postgres_db": {
    "build": "./postgres_db",
    "restart": "always",
    "ports": [
      {
        "target": "5432\npublished: 5432"
      }
    ],
    "env_file": [
      "postgres_db/pg-db-database.env"
    ],
    "volumes": [
      "database-data:/var/lib/postgresql/data/"
    ]
  },
  "volumes": {
    "database-data": null
  }
}
You can see several places where the result is something like "type": "bind\nsource: /data/yarmp-data".
It appears that YAML is interpreting the source line here as the 2nd line of a multiline string. However, if you adjust the indentation to line up with the t in - type, you end up with:
...
"volumes": [
  {
    "type": "bind",
    "source": "/data/yarmp-data",
    "target": "/data/yarmp-data"
  }
]
...
The indentation in YAML is tricky (and it matters), so I've found the above and similar tools helpful to get what I want. It also helps me to think about YAML in terms of lists and objects and strings. Here - creates a new item in a list, and type: bind is a key-value in that item (not in the list). Then source: blarg is also a key-value in the same item, so it makes sense that it should line up with the t in type. Indenting more indicates you are continuing a multiline string, and I think if you indented less (like aligning with -), you would get an error or end up adding a key-value pair to one of the objects higher up the hierarchy.
Anyway, it's certainly confusing. I've found such online tools to be helpful.
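Spelled out on the volume entry in question (comments mark what each indentation level means; paths are the ones from the question):

```yaml
volumes:
  - type: bind                 # '-' opens a list item; 'type' is its first key
    source: /data/yarmp-data   # aligned with 'type' → another key of the same item
    target: /data/yarmp-data   # same level again
# Indenting 'source:' further would glue it onto the previous scalar,
# producing the "bind\nsource: ..." value seen in the JSON output above.
```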
This is the docker-compose command and the results:
$ docker-compose -f docker-compose-base.yml -f docker-compose-test.yml run api sh -c 'pwd && ls'
Starting test-db ... done
/usr/src/api
node_modules
I then inspected the most recent container id:
$ docker inspect --format='{{json .Mounts}}' e150beeef85c
[
  {
    "Type": "bind",
    "Source": "/home/circleci/project",
    "Destination": "/usr/src/api",
    "Mode": "rw",
    "RW": true,
    "Propagation": "rprivate"
  },
  {
    "Type": "volume",
    "Name": "4f86174ca322af6d15489da91f745861815a02f5b4e9e879ef5375663b9defff",
    "Source": "/var/lib/docker/volumes/4f86174ca322af6d15489da91f745861815a02f5b4e9e879ef5375663b9defff/_data",
    "Destination": "/usr/src/api/node_modules",
    "Driver": "local",
    "Mode": "",
    "RW": true,
    "Propagation": ""
  }
]
This means these files are not appearing in the container:
$ ls /home/circleci/project
Dockerfile docker-compose-base.yml docker-compose-prod.yml migrations nodemon-debug.json package-lock.json src test-db.env tsconfig.build.json tslint.json
README.md docker-compose-dev.yml docker-compose-test.yml nest-cli.json nodemon.json package.json test test.env tsconfig.json
Why could this be?
Update: I should mention that all this works fine on my local dev environment. The above is failing on CircleCI.
When I inspect the differences between the containers, the only major one I see is that my dev environment runs Docker 19 using the overlay2 graph driver, while the failing environment runs Docker 17 using the aufs graph driver.
Update 2: Actual docker-compose files:
# docker-compose-base.yml
version: '3'
services:
  api:
    build: .
    restart: on-failure
    container_name: api
# docker-compose-test.yml
version: '3'
networks:
  default:
    external:
      name: lb_lbnet
services:
  test-db:
    image: postgres:11
    container_name: test-db
    env_file:
      - ./test-db.env # uses POSTGRES_DB and POSTGRES_PASSWORD to create a fresh db with a password when first run
  api:
    restart: 'no'
    env_file:
      - test.env
    volumes:
      - ./:/usr/src/api
      - /usr/src/api/node_modules
    depends_on:
      - test-db
    ports:
      - 9229:9229
      - 3000:3000
    command: npm run start:debug
And finally Dockerfile:
FROM node:11
WORKDIR /usr/src/api
COPY package*.json ./
RUN npm install
COPY . .
# not using an execution list here so we get shell variable substitution
CMD npm run start:$NODE_ENV
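The comment above that CMD is worth spelling out: only the shell form runs through /bin/sh -c, which is what expands $NODE_ENV at container start. The exec form would hand the literal text through unexpanded:

```dockerfile
# shell form: runs as /bin/sh -c 'npm run start:$NODE_ENV', so the variable is substituted
CMD npm run start:$NODE_ENV

# exec form: no shell involved, so npm would receive the literal string "start:$NODE_ENV"
# CMD ["npm", "run", "start:$NODE_ENV"]
```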
As @allisongranemann pointed out, CircleCI states:
It is not possible to mount a volume from your job space into a
container in Remote Docker (and vice versa).
The original reason I wanted to mount the project directory into the container was that, in the development environment, I could change code quickly and run tests without rebuilding the container.
Given this limitation, the solution I went with was to remove the volume mounts from docker-compose-test.yml, as follows:
version: '3'
services:
  test-db:
    image: postgres:11
    container_name: test-db
    env_file:
      - ./test-db.env # uses POSTGRES_DB and POSTGRES_PASSWORD to create a fresh db with a password when first run
  api:
    restart: 'no'
    env_file:
      - test.env
    depends_on:
      - test-db
    ports:
      - 9229:9229
      - 3000:3000
    command: npm run start:debug
And I also added docker-compose-test-dev.yml that adds the volumes for the dev environment:
version: '3'
services:
  api:
    volumes:
      - ./:/usr/src/api
Finally, to run tests on the dev environment, I run:
docker-compose -f docker-compose-base.yml -f docker-compose-test.yml -f docker-compose-test-dev.yml run api npm run test:e2e
In a docker-compose setup I have two services that share the same mapped volume from the host. The mapped volume contains the application source files located on the host.
When the source files are changed on the host, HMR is not triggered, and even a manual refresh does not show the latest changes. However, if I edit a file directly in the container, HMR reloads and displays the changes, and those changes are then visible from the host, meaning the mapped volume is correct and pointing to the right place.
The question is: why isn't the webpack-dev-server watcher picking up the changes? How can I debug this? What solutions are there?
The docker-compose services in question:
node_dev_worker:
  build:
    context: .
    dockerfile: ./.docker/dockerFiles/node.yml
  image: foobar/node_dev:latest
  container_name: node_dev_worker
  working_dir: /home/node/app
  environment:
    - NODE_ENV=development
  volumes:
    - ./foobar-blog-ui/:/home/node/app
  networks:
    - foobar-wordpress-network
node_dev:
  image: foobar/node_dev:latest
  container_name: node_dev
  working_dir: /home/node/app
  ports:
    - 8000:8000
    - 9000:9000
  environment:
    - NODE_ENV=development
  volumes:
    - ./foobar-blog-ui/:/home/node/app
    - ./.docker/scripts/wait-for-it.sh:/home/node/wait-for-it.sh
  command: /bin/bash -c '/home/node/wait-for-it.sh wordpress-reverse-proxy:80 -t 10 -- npm run start'
  depends_on:
    - node_dev_worker
    - mysql
    - wordpress
  networks:
    - foobar-wordpress-network
The node.yml:
FROM node:8.16.0-slim
WORKDIR /home/node/app
RUN apt-get update
RUN apt-get install -y rsync vim git libpng-dev libjpeg-dev libxi6 build-essential libgl1-mesa-glx
CMD npm install
The webpack-dev-server configuration follows recommendations found online for container issues such as the one I'm describing. It is placed in gatsby-node.js, a build hook file provided by Gatsby:
devServer: {
  port: 8000,
  disableHostCheck: true,
  watchOptions: {
    poll: true,
    aggregateTimeout: 500
  }
}
The Linux distro inside the container (Docker image node:8.16.0-slim) is:
PRETTY_NAME="Debian GNU/Linux 9 (stretch)"
NAME="Debian GNU/Linux"
VERSION_ID="9"
VERSION="9 (stretch)"
ID=debian
Also, the browser does show that [HMR] is connected and listening, as follows:
[HMR] connected
[HMR] bundle rebuilt in 32899ms
The host in question is macOS 10.14.6 Mojave, running Docker 2.1.0.2.
Any hints on how to debug this issue?
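One way to narrow this down is to check, inside the container, whether filesystem events for the mounted path arrive at all. A hedged sketch using inotify-tools (the package name is the Debian one; node_dev is the container from the compose file above):

```shell
# Install the watcher tooling in the running container and listen for events
# on the mounted app directory.
docker exec -it node_dev bash -c \
  'apt-get update && apt-get install -y inotify-tools && inotifywait -m -r /home/node/app'
# Now edit a source file on the host: if nothing prints here, the events never
# cross the osxfs mount, and polling (watchOptions.poll) is the usual workaround.
```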
To fix this problem I checked Docker's documentation for my host system, macOS, which describes osxfs (https://docs.docker.com/docker-for-mac/osxfs/). Before anything else, I made sure my volume sits under one of the directories macOS allows to be mounted:
My volume sits under the /Users parent directory, so I'm good to go!
Note: I don't think it's related, but I did reset Docker to factory defaults before verifying the File Sharing tab.
Keep in mind the changes I raised in the original question, as they help and are recommended. Check your webpack-dev-server configuration:
devServer: {
  port: 8000,
  disableHostCheck: true,
  watchOptions: {
    poll: true,
    aggregateTimeout: 500
  }
}
It's also important to start the development server declaring the --host and --port explicitly, like:
gatsby develop -H 0.0.0.0 -p 8000
To complete the fix, and I believe this is the key, I set the GATSBY_WEBPACK_PUBLICPATH environment variable in my docker-compose.yml file, under the environment property:
node_dev:
  image: moola/node_dev:latest
  container_name: node_dev
  working_dir: /home/node/app
  ports:
    - 8000:8000
    - 9000:9000
  environment:
    - NODE_ENV=development
    - GATSBY_WEBPACK_PUBLICPATH=/
  volumes:
    - ./foobar-blog-ui/:/home/node/app
    - ./.docker/scripts/wait-for-it.sh:/home/node/wait-for-it.sh
  command: /bin/bash -c '/home/node/wait-for-it.sh wordpress-reverse-proxy:80 -t 10 -- npm run start'
  depends_on:
    - node_dev_worker
    - mysql
    - wordpress
  networks:
    - foobar-wordpress-network
I have a few simple Java applications and I want them to use Consul for runtime configuration. I can't understand the approach to use for this combination: Docker + Consul + apps.
I've managed to put together a docker-compose.yml file with the required containers: consul, jetty1, jetty2, jetty3. Each jetty gets a WAR application at build time. When I docker-compose up my stack, the applications start properly.
But I can't understand what I should do to make my apps read their configuration from the consul service.
I've made the following docker-compose.yml file:
version: '2'
services:
  consuldns:
    build: ./consul
    command: 'agent -server -bootstrap-expect=1 -ui -client=0.0.0.0 -node=consuldns -log-level=debug'
    ports:
      - '8300:8300'
      - '8301:8301'
      - '8302:8302'
      - '8400:8400'
      - '8500:8500'
      - '8600:53/udp'
    container_name: 'consuldns'
  jettyok1:
    build: ./jetty
    ports:
      - "8081:8080"
    container_name: jettyok1
    depends_on:
      - consuldns
  jettyok2:
    build: ./jetty
    ports:
      - "8082:8080"
    container_name: jettyok2
    depends_on:
      - consuldns
  jettyok3:
    build: ./jetty
    ports:
      - "8083:8080"
    container_name: jettyok3
    depends_on:
      - consuldns
I have two folders next to the docker-compose.yml file:
consul:
Dockerfile (copied from the official repo)
FROM consul:latest
ENV CONSUL_TEMPLATE_VERSION 0.18.1
ADD https://releases.hashicorp.com/consul-template/${CONSUL_TEMPLATE_VERSION}/consul-template_${CONSUL_TEMPLATE_VERSION}_linux_amd64.zip /
RUN unzip consul-template_${CONSUL_TEMPLATE_VERSION}_linux_amd64.zip && \
    mv consul-template /usr/local/bin/consul-template && \
    rm -rf /consul-template_${CONSUL_TEMPLATE_VERSION}_linux_amd64.zip && \
    mkdir -p /etc/consul-template/config.d/templates && \
    apk add --no-cache curl
RUN apk update && apk add --no-cache jq
RUN mkdir /etc/consul.d
RUN echo '{"service": {"name": "my_java_application", "tags": ["java"], "port": 8080}}' > /etc/consul.d/java.json
#RUN consul agent -data-dir /consul/data -config-dir /etc/consul.d
CMD ["agent", "-dev", "-client", "0.0.0.0"]
jetty:
Dockerfile (handmade)
FROM jetty:latest
ENV DEFAULT_SYSTEM_MESSAGE='dockerfile default message'
COPY always-healthy.war /var/lib/jetty/webapps/
always-healthy.war is a simple Spring Boot web app supporting a single GET endpoint:
package org.bajiepka.demo.controller;

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.core.env.Environment;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RestController;

@RestController
public class MessageController {

    @Autowired
    private Environment environment;

    @GetMapping("/message")
    public String getDefaultMessage() {
        return String.format("Current value: %s", environment.getProperty("DEFAULT_SYSTEM_MESSAGE"));
    }
}
Please point me in the right direction: what should I do to make my always-healthy apps read a value from the consul service, so that I can manage the DEFAULT_SYSTEM_MESSAGE env parameter of any jetty instance or always-healthy application?
If you are using Spring Cloud, you can use Spring Cloud Consul Config to have the app load its configuration from Consul at startup.
See samples and examples here and here.
If you are not using Spring Cloud, it is a bit more work: you have to fetch the configuration from Consul yourself using a REST client.
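For the non-Spring route, the plumbing is just HTTP against the agent's KV API. A minimal sketch with curl (the key name mirrors the env parameter from the question; the consul agent is assumed reachable on localhost:8500 as in the compose file above):

```shell
# Store a value in Consul's KV store...
curl -s -X PUT -d 'hello from consul' \
  http://localhost:8500/v1/kv/config/DEFAULT_SYSTEM_MESSAGE

# ...and read it back raw; this is what the app's REST client would do at startup.
curl -s http://localhost:8500/v1/kv/config/DEFAULT_SYSTEM_MESSAGE?raw
```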