I have a few simple Java applications and I want them to use Consul for runtime configuration. I can't understand the approach that should be used for this combination: Docker + Consul + apps.
I've managed to put together a working docker-compose.yml file with the required containers: consul, jetty1, jetty2, jetty3. Each Jetty gets a WAR application at build time. When I docker-compose up my stack, the applications start properly.
But I can't understand what I should do to make my apps read their configuration from the Consul service.
I've made the following docker-compose.yml file:
version: '2'
services:
  consuldns:
    build: ./consul
    command: 'agent -server -bootstrap-expect=1 -ui -client=0.0.0.0 -node=consuldns -log-level=debug'
    ports:
      - '8300:8300'
      - '8301:8301'
      - '8302:8302'
      - '8400:8400'
      - '8500:8500'
      - '8600:53/udp'
    container_name: 'consuldns'
  jettyok1:
    build: ./jetty
    ports:
      - "8081:8080"
    container_name: jettyok1
    depends_on:
      - consuldns
  jettyok2:
    build: ./jetty
    ports:
      - "8082:8080"
    container_name: jettyok2
    depends_on:
      - consuldns
  jettyok3:
    build: ./jetty
    ports:
      - "8083:8080"
    container_name: jettyok3
    depends_on:
      - consuldns
I have two folders next to the docker-compose.yml file:
- consul:
Dockerfile (copied from the official repo)
FROM consul:latest
ENV CONSUL_TEMPLATE_VERSION 0.18.1
ADD https://releases.hashicorp.com/consul-template/${CONSUL_TEMPLATE_VERSION}/consul-template_${CONSUL_TEMPLATE_VERSION}_linux_amd64.zip /
RUN unzip consul-template_${CONSUL_TEMPLATE_VERSION}_linux_amd64.zip && \
    mv consul-template /usr/local/bin/consul-template && \
    rm -rf /consul-template_${CONSUL_TEMPLATE_VERSION}_linux_amd64.zip && \
    mkdir -p /etc/consul-template/config.d/templates && \
    apk add --no-cache curl
RUN apk update && apk add --no-cache jq
RUN mkdir /etc/consul.d
RUN echo '{"service": {"name": "my_java_application", "tags": ["java"], "port": 8080}}' > /etc/consul.d/java.json
#RUN consul agent -data-dir /consul/data -config-dir /etc/consul.d
CMD ["agent", "-dev", "-client", "0.0.0.0"]
- jetty:
Dockerfile (handmade)
FROM jetty:latest
ENV DEFAULT_SYSTEM_MESSAGE='dockerfile default message'
COPY always-healthy.war /var/lib/jetty/webapps/
always-healthy.war is a simple Spring Boot web app with a single GET endpoint:
package org.bajiepka.demo.controller;

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RestController;
import org.springframework.core.env.Environment;

@RestController
public class MessageController {

    @Autowired
    private Environment environment;

    @GetMapping("/message")
    public String getDefaultMessage() {
        return String.format("Current value: %s", environment.getProperty("DEFAULT_SYSTEM_MESSAGE"));
    }
}
Please point me in the right direction: what should I do to make my always-healthy apps read a value from the Consul service, so that I can manage the DEFAULT_SYSTEM_MESSAGE env parameter of any Jetty instance or always-healthy application?
If you are using Spring Cloud, you can use Spring Cloud Consul Config to make the app load its configuration from Consul at startup.
See the Spring Cloud Consul documentation for samples and examples.
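For reference, a minimal sketch of what that setup can look like (this is an assumption-laden example, not your current setup: it assumes the spring-cloud-starter-consul-config dependency is on the classpath, and consuldns is the Consul service name from the compose file above):

# bootstrap.yml (sketch)
spring:
  application:
    name: always-healthy        # keys are looked up under config/always-healthy/
  cloud:
    consul:
      host: consuldns           # the Consul service name from docker-compose.yml
      port: 8500
      config:
        enabled: true

With that in place, a value stored in Consul's KV store under config/always-healthy/DEFAULT_SYSTEM_MESSAGE shows up as the DEFAULT_SYSTEM_MESSAGE property in the injected Environment, so the controller above would pick it up without code changes.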
If you are not using Spring Cloud, it is a bit more work: you have to fetch the configuration from Consul yourself using an HTTP client.
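As a rough illustration of that approach (a sketch, not a drop-in solution: the key config/DEFAULT_SYSTEM_MESSAGE is an assumed name, consuldns is the service name from the compose file, and java.net.http requires Java 11+):

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class ConsulKvClient {
    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();
        // "?raw" makes Consul return the plain value; without it the KV API
        // returns a JSON envelope with a base64-encoded Value field.
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("http://consuldns:8500/v1/kv/config/DEFAULT_SYSTEM_MESSAGE?raw"))
                .GET()
                .build();
        HttpResponse<String> response = client.send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println("DEFAULT_SYSTEM_MESSAGE = " + response.body());
    }
}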
Related
I have a project written in Django REST framework, with Celery for executing long-running tasks, Redis as a broker, and Flower for monitoring Celery tasks. I have written a Dockerfile and a docker-compose.yaml to create a network and run these services inside containers.
Dockerfile
FROM python:3.7-slim
ENV PYTHONUNBUFFERED 1
RUN apt-get update && \
    apt-get install python3-dev default-libmysqlclient-dev gcc -y && \
    apt-get install -y libssl-dev libffi-dev && \
    python -m pip install --upgrade pip && \
    mkdir /ibdax
WORKDIR /ibdax
COPY ./requirements.txt /requirements.txt
COPY . /ibdax
EXPOSE 80
EXPOSE 5555
ENV ENVIRONMENT=LOCAL
#install dependencies
RUN pip install -r /requirements.txt
RUN pip install django-phonenumber-field[phonenumbers]
RUN pip install drf-yasg[validation]
docker-compose.yaml
version: "3"
services:
redis:
container_name: redis-service
image: "redis:latest"
ports:
- "6379:6379"
restart: always
command: "redis-server"
ibdax-backend:
container_name: ibdax
build:
context: .
dockerfile: Dockerfile
image: "ibdax-django-service"
volumes:
- .:/ibdax
ports:
- "80:80"
expose:
- "80"
restart: always
env_file:
- .env.staging
command: >
sh -c "daphne -b 0.0.0.0 -p 80 ibdax.asgi:application"
links:
- redis
celery:
container_name: celery-container
image: "ibdax-django-service"
command: "watchmedo auto-restart -d . -p '*.py' -- celery -A ibdax worker -l INFO"
volumes:
- .:/ibdax
restart: always
env_file:
- .env.staging
links:
- redis
depends_on:
- ibdax-backend
flower:
container_name: flower
image: "ibdax-django-service"
command: "flower -A ibdax --port=5555 --basic_auth=${FLOWER_USERNAME}:${FLOWER_PASSWORD}"
volumes:
- .:/ibdax
ports:
- "5555:5555"
expose:
- "5555"
restart: always
env_file:
- .env
- .env.staging
links:
- redis
depends_on:
- ibdax-backend
This Dockerfile & docker-compose is working just fine and now I want to deploy this application to GKE. I came across Kompose which translate the docker-compose to kubernetes resources. I read the documentation and started following the steps and the first step was to run kompose convert. This returned few warnings and created few files as show below -
WARN Service "celery" won't be created because 'ports' is not specified
WARN Volume mount on the host "/Users/jeetpatel/Desktop/projects/ibdax" isn't supported - ignoring path on the host
WARN Volume mount on the host "/Users/jeetpatel/Desktop/projects/ibdax" isn't supported - ignoring path on the host
WARN Volume mount on the host "/Users/jeetpatel/Desktop/projects/ibdax" isn't supported - ignoring path on the host
INFO Kubernetes file "flower-service.yaml" created
INFO Kubernetes file "ibdax-backend-service.yaml" created
INFO Kubernetes file "redis-service.yaml" created
INFO Kubernetes file "celery-deployment.yaml" created
INFO Kubernetes file "env-dev-configmap.yaml" created
INFO Kubernetes file "celery-claim0-persistentvolumeclaim.yaml" created
INFO Kubernetes file "flower-deployment.yaml" created
INFO Kubernetes file "flower-claim0-persistentvolumeclaim.yaml" created
INFO Kubernetes file "ibdax-backend-deployment.yaml" created
INFO Kubernetes file "ibdax-backend-claim0-persistentvolumeclaim.yaml" created
INFO Kubernetes file "redis-deployment.yaml" created
I ignored the warnings and moved on to the next step, i.e. running the command
kubectl apply -f flower-service.yaml, ibdax-backend-service.yaml, redis-service.yaml, celery-deployment.yaml
but I get this error -
error: Unexpected args: [ibdax-backend-service.yaml, redis-service.yaml, celery-deployment.yaml]
Hence I planned to apply them one by one, like this -
kubectl apply -f flower-service.yaml
but I get this error -
The Service "flower" is invalid: spec.ports[1]: Duplicate value: core.ServicePort{Name:"", Protocol:"TCP", AppProtocol:(*string)(nil), Port:5555, TargetPort:intstr.IntOrString{Type:0, IntVal:0, StrVal:""}, NodePort:0}
Not sure where I am going wrong.
Also, the prerequisite for Kompose is to have a Kubernetes cluster, so I created an Autopilot cluster with a public network. Now I am not sure how this apply command will identify the cluster I created and deploy my application to it.
After kompose convert, your flower-service.yaml file has duplicate ports - that's what the error is saying.
...
ports:
  - name: "5555"
    port: 5555
    targetPort: 5555
  - name: 5555-tcp
    port: 5555
    targetPort: 5555
...
You can either delete the port named "5555" or the one named 5555-tcp.
For example, replace ports block with
ports:
  - name: 5555-tcp
    port: 5555
    targetPort: 5555
and deploy the service again.
I would also recommend changing the port name to something more descriptive.
The same thing happens with the ibdax-backend-service.yaml file.
...
ports:
  - name: "80"
    port: 80
    targetPort: 80
  - name: 80-tcp
    port: 80
    targetPort: 80
...
You can delete one of the definitions and redeploy the service (changing the port name to something more descriptive is also recommended), as sketched below.
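For example, a deduplicated block with a more descriptive name might look like this (the name http is just a conventional label for port 80, not something kompose generates):

ports:
  - name: http
    port: 80
    targetPort: 80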
kompose is not a perfect tool that will always give you a perfect result. You should check the generated files for any possible conflicts and/or missing fields.
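As a side note, the "Unexpected args" error from your first attempt is just kubectl argument parsing: each -f flag takes one path, so to apply several files in one command you can repeat the flag, or point it at the whole directory, for example:

kubectl apply -f flower-service.yaml -f ibdax-backend-service.yaml -f redis-service.yaml
kubectl apply -f .

As for cluster selection: kubectl talks to whatever cluster is the current context in your kubeconfig. For GKE, running gcloud container clusters get-credentials <cluster-name> (with the appropriate --region or --zone) sets that context up for you.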
I'm following this tutorial https://trailhead.salesforce.com/en/content/learn/modules/user-interface-api/install-sample-app?trail_id=force_com_dev_intermediate and I have never used Docker before.
Steps I followed:
Cloned the repo
Installed docker for windows and it is perfectly installed.
Tried to run this command in the repo: docker-compose build && docker-compose up -d
While running this command, I'm getting the following error.
E:\Salesforce\RecordViewer>docker-compose build && docker-compose up -d
(root) Additional property nginx is not allowed
I found this answer: https://stackoverflow.com/a/38717336/279771
Basically I needed to add a top-level services: key to the docker-compose.yml (the v2+ Compose file format nests all services under it, whereas the v1 format had them at the top level), so it looks like this:
services:
  web:
    build: .
    command: 'bash -c ''node app.js'''
    working_dir: /usr/src/app
    environment:
      PORT: 8050
      NGINX_PORT: 8443
    volumes:
      - './views:/app/user/views:ro'
  nginx:
    build: nginx
    ports:
      - '8080:80'
      - '8443:443'
    links:
      - web:web
    volumes_from:
      - web
I have created a Go backend server that contains an API. All works well when running it locally with go run server. I did however encounter issues when running it in Docker. I created a Dockerfile and ran the Linux container in host network mode, but I can't access the API in the browser. If it were working, I should be able to see a JSON response from http://localhost:8500/status. So I'm thinking I need permissions, or to add flags, or more installation-related scripts. I have been testing different flags and ports in Docker but can't identify the issue. See the code, Dockerfile, and commands below.
Note: when I run the program locally on Windows a security warning pops up; perhaps this is related to the issue?
I'm running Docker Desktop Community v2.2.0.5 stable.
Api code:
import (
    "log"
    "net/http"

    conf "server/conf"

    "github.com/gorilla/mux"
    "github.com/jinzhu/gorm"
    _ "github.com/jinzhu/gorm/dialects/mysql"
)
var db *gorm.DB

// Middleware used to add correct permissions for all requests
func middleware(next http.Handler) http.Handler {
    return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
        w.Header().Set("Content-Type", "application/json")
        w.Header().Set("Access-Control-Allow-Origin", "*")
        w.Header().Set("Access-Control-Allow-Headers", "Content-Type")
        next.ServeHTTP(w, r)
    })
}

func StartApi(config conf.Config, _db *gorm.DB) {
    log.Println("Start Api")
    db = _db
    router := mux.NewRouter()
    // Create
    router.HandleFunc("/login", login).Methods("POST", "OPTIONS")
    router.HandleFunc("/network", createNetwork).Methods("POST", "OPTIONS")
    // Read
    router.HandleFunc("/users/{networkName}", getUsers).Methods("GET", "OPTIONS")
    router.HandleFunc("/streams/{networkName}", getStreams).Methods("GET", "OPTIONS")
    router.HandleFunc("/status", showStatus).Methods("GET", "OPTIONS")
    log.Println("API started at port " + config.Backend.Api_port) // api port is stored in config
    log.Println(http.ListenAndServe(":"+config.Backend.Api_port, middleware(router)))
    log.Println("Api done!")
}
Dockerfile:
# Start from the latest golang base image
FROM golang:latest
WORKDIR /go/src/server
COPY ./src/server .
# Download packages
RUN go get .
# Compile code
#RUN go build -o server
RUN go install server
# Expose ports
EXPOSE 8500
EXPOSE 8600
#ENTRYPOINT ./server
CMD ["go", "run", "server"]
Docker-compose:
version: '3'
services:
  react:
    build: ./ReactFrontend/
    container_name: ReactFrontend
    tty: true
    ports:
      - 4000:3000
  backend:
    network_mode: host
    build: ./GoBackend/
    container_name: goserver
    ports:
      - 8500:8500
      - 8600:8600
Docker command:
docker run --net=host goserver
Any help is appreciated,
Cheers!
So I solved it for now as follows:
As mentioned here:
https://docs.docker.com/compose/networking/
Add to docker-compose:
networks:
  default:
    external:
      name: database_default
As mentioned here: Access to mysql container from other container
I can connect to the database as <db_container_name>:3306 from the backend.
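To make that concrete, here is a minimal sketch of what the connection could look like with the gorm v1 API used in the question (user, password, mydb and db_container_name are all placeholders):

// The host part of the DSN is the DB container's name on the shared
// network, not localhost.
db, err := gorm.Open("mysql", "user:password@tcp(db_container_name:3306)/mydb?charset=utf8&parseTime=True")
if err != nil {
    log.Fatal(err)
}
defer db.Close()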
To automate the process I created a .sh script that handles an extra setup/configuration step on container start. However, due to the structure of my config.yml it was hard to update with plain sed commands, so I created a small Python program to update all the config data. The Dockerfile, docker-compose file, setup.sh and update_config.py are shown below.
setup.sh:
#!/bin/bash
# Don't remove this!
# This file is used by the Dockerfile to replace configs
# Replace config on run
python3 update_config.py
# Start program
go run server
Dockerfile:
# Start from the latest golang base image
FROM golang:latest
WORKDIR /go/src/server
COPY ./src/server .
# Install python3 and YAML compatibility
RUN apt-get update && apt-get install -y python3-pip
RUN python3 --version
RUN pip3 install PyYAML
# Download packages
RUN go get .
# Compile code
#RUN go build -o server
RUN go install server
# Expose ports
EXPOSE 8500
EXPOSE 8600
# ENV
ENV DB_HOST "mysql:3306"
#CHMOD setup
RUN chmod +x setup.sh
CMD ["./setup.sh"]
Docker-compose
version: '3'
services:
  react:
    build: ./ReactFrontend/
    container_name: ReactFrontend
    tty: true
    ports:
      - 4000:3000
  backend:
    build: ./GoBackend/
    container_name: GoBackend
    environment:
      DB_HOST: mysql:3306 # Name or IP of DB container!
    ports:
      - 8500:8500
      - 8600:8600
networks:
  default:
    external:
      name: database_default
update_config.py:
import yaml
import os

"""
DON'T REMOVE
This file is used in the Dockerfile!
"""

fname = "/go/src/server/config.yml"

with open(fname, 'r') as stream:
    data = yaml.safe_load(stream)

# Add more updates here!
if os.environ.get('DB_HOST') is not None:
    data["database"]["ip"] = os.environ['DB_HOST']

# Updated data print
print("Updated Data", data)

# Write changes back to the config file
with open(fname, 'w') as stream:
    yaml.dump(data, stream)
An example docker command that now works, if we only want the container to run:
docker run -p 8500:8500 -p 8600:8600 --net database_default goserver
It is working fine, and we avoid using the unnecessary host network mode!
I am quite new to Docker but am trying to use Docker Compose to run automation tests against my application.
I have managed to get Docker Compose to run my application and my automation tests; however, at the moment the application is running on localhost when I need it to run against a specific domain, example.com.
From my research into Docker it seems you should be able to reach the application on a hostname by setting it within links, but I still don't seem to be able to.
Below are my Docker Compose files...
docker-compose.yml
abc:
  build: ./
  command: run container-dev
  ports:
    - "443:443"
  expose:
    - "443"
docker-compose.automation.yml
tests:
  build: test/integration/
  dockerfile: DockerfileUIAuto
  command: sh -c "Xvfb :1 -screen 0 1024x768x16 &>xvfb.log && sleep 20 && DISPLAY=:1.0 && ENVIRONMENT=qa BASE_URL=https://example.com npm run automation"
  links:
    - abc:example.com
  volumes:
    - /tmp:/tmp/
and I am using the following command to run them...
docker-compose -p tests -f docker-compose.yml -f docker-compose.automation.yml up --build
Is there something I'm missing to map example.com to localhost?
If the two containers are on the same Docker internal network, Docker will provide a DNS service where one can talk to the other by just its container name. As you show this with two separate docker-compose.yml files it's a little tricky, because Docker Compose wants to isolate each file into its own separate mini-Docker world.
The first step is to explicitly declare a network in the "first" docker-compose.yml file. By default Docker Compose will automatically create a network for you, but you need to control its name so that you can refer to it from elsewhere. This means you need a top-level networks: block, and also to attach the container to the network.
version: '3.5'
networks:
  abc:
    name: abc
services:
  abc:
    build: ./
    command: run container-dev
    ports:
      - "443:443"
    networks:
      abc:
        aliases:
          - example.com
Then in your test file, you can import that as an external network.
version: '3.5'
networks:
  abc:
    external: true
    name: abc
services:
  tests:
    build:
      context: test/integration/
      dockerfile: DockerfileUIAuto
    command: sh -c "Xvfb :1 -screen 0 1024x768x16 &>xvfb.log && sleep 20 && npm run automation"
    environment:
      DISPLAY: ":1.0"
      ENVIRONMENT: qa
      BASE_URL: "https://example.com"
    networks:
      - abc
Given the complexity of what you're showing for the "test" container, I would strongly consider running it not in Docker, or else writing a shell script that launches the X server, checks that it actually started, and then runs the test. The docker-compose.yml file isn't the only tool you have here.
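A rough sketch of such a wrapper script (everything here is illustrative, and it assumes xdpyinfo is available in the image):

#!/bin/sh
# Start the virtual X server in the background and remember its PID.
Xvfb :1 -screen 0 1024x768x16 >/tmp/xvfb.log 2>&1 &
XVFB_PID=$!
# Poll until the display actually accepts connections instead of sleeping blindly.
for i in $(seq 1 30); do
  if xdpyinfo -display :1 >/dev/null 2>&1; then
    break
  fi
  sleep 1
done
# Run the tests, then clean up the X server and propagate the exit code.
DISPLAY=:1.0 ENVIRONMENT=qa BASE_URL=https://example.com npm run automation
STATUS=$?
kill "$XVFB_PID"
exit "$STATUS"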
I have a MySQL DB running in one container and a web app running in another container. My use case is: once the DB container is up and running, the app container has to insert some initial data into the DB using Liquibase and then start the app. My docker-compose yml looks like below.
db:
  build: kdb
  user: "1000:50"
  volumes:
    - /data/mysql:/var/lib/mysql
  container_name: kdb
  environment:
    - MYSQL_ALLOW_EMPTY_PASSWORD=yes
  image: kdb
  ports:
    - "3307:3306"
k-api:
  container_name: k_api
  hostname: k-api
  domainname: i.com
  image: k_api
  volumes:
    - /Users/agu/work:/data
  build:
    context: ./api
    args:
      KB_API_WAR: k-web-1.2.9.war
      KB_API_URL: https://artifactory.b-aws.i.com
  ports:
    - "8097:8080"
  depends_on:
    - db
  #command: [/usr/local/bin/wait-for-it.sh, "db:3306","-s","-t","0","--","/bin/sh" "wait_for_liquibase.sh"]
  links:
    - "db:kdb_docker_host"
And in my Dockerfile for the API I have an entrypoint for a shell script called wait_for_liquibase.sh:
CMD ["wait_for_liquibase.sh"]
wait_for_liquibase.sh:
#!/bin/sh
set -e
#RUN liquibase
mvn clean install -X -PdropAll -Dcontexts=test -Dliquibase.user=XX -Dliquibase.pass=XX -Dliquibase.host=db -Dliquibase.port=3306 -Dliquibase.schema=knowledgebasedb -DpromptOnNonLocalDatabase=false -Dcontexts=test -f k/k-liquibase
/usr/local/tomcat/bin/catalina.sh run
The issue is that once the DB container is up and running, the app container is not able to reach the DB server to perform the Liquibase setup for the database. I see the below error.
Communication failure: Unknown database host -Dliquibase.host=db.
I am assuming you are using the version 1 compose file format.
You are giving an alias to your db service through links, so you will need to use that alias, kdb_docker_host, as the database host.
Also, the ports are mapped to the host machine; to expose ports between containers you will need to use the expose property.
expose:
  - 3306
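With that in place, the Liquibase invocation in wait_for_liquibase.sh would point at the alias instead; a sketch keeping the poster's other flags:

mvn clean install -X -PdropAll -Dcontexts=test -Dliquibase.user=XX -Dliquibase.pass=XX -Dliquibase.host=kdb_docker_host -Dliquibase.port=3306 -Dliquibase.schema=knowledgebasedb -DpromptOnNonLocalDatabase=false -f k/k-liquibase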
I used this in my Dockerfile:
RUN apt-get update
RUN apt-get install netcat -y
ADD wait-for-base.sh /wait-for-base.sh
CMD ["/wait-for-base.sh"]
and in wait-for-base.sh:
#!/bin/bash
while ! nc -z db 3306; do sleep 3; done
[my command to run]
In your case that would be /usr/local/tomcat/bin/catalina.sh run.