Configure a different hostname for every container when using docker-compose / Swarm? - docker

docker-compose.yml
services:
  {{ app }}{{ env_id }}-{{stage_name}}:
    image: "{{ registry_url }}/{{ app }}-{{ stage_name }}:{{ tag }}"
    ports:
      - {{ port }}:3000
    volumes:
      - /var/log/{{ app }}/logs:/app/logs
    networks:
      - net{{ env_id }}
    hostname: "{{contain_name}}"
    logging:
      driver: syslog
      options:
        tag: "{{ app }}"
    stop_grace_period: 20s
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:3000/version"]
      interval: 5s
      timeout: 10s
      retries: 3
      start_period: 5s
    deploy:
      replicas: 4
      update_config:
        parallelism: 1
        order: start-first
        failure_action: rollback
        monitor: 15s
      rollback_config:
        order: start-first
      restart_policy:
        condition: any
        delay: 5s
      resources:
        limits:
          memory: 7G
networks:
  net{{ env_id }}:
    name: {{ app }}{{ env_id }}_network
With this docker-compose.yml I can deploy a swarm stack with four containers, but the containers all get the same hostname. I want them to be named like
"contain_name1
contain_name2
contain_name3
contain_name4"
How can I do that?

Unfortunately, this functionality is not yet supported: https://github.com/docker/swarmkit/issues/1242
Kubernetes can solve this problem with a StatefulSet: https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/
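For comparison, a minimal StatefulSet sketch (the name web and the nginx image are placeholders, not taken from the compose file above). Each replica gets a stable, ordinal hostname: web-0, web-1, web-2, web-3.

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: web                 # pods are named web-0 .. web-3
spec:
  serviceName: web          # headless Service used for per-pod DNS
  replicas: 4
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: app
          image: nginx      # placeholder image
          ports:
            - containerPort: 3000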

Related

How to import an external file in a K8s manifest

I have a docker-compose.yml file that builds the Postgres image from an external Dockerfile, which installs a PostGIS configuration when the image is created.
This is the docker-compose.yml:
services:
  postgres:
    container_name: postgres_db
    build:
      context: .
      dockerfile: Dockerfile-db
    image: postgres
    environment:
      POSTGRES_USER: admin
      POSTGRES_PASSWORD: password
      PGDATA: /data/postgres
    volumes:
      - postgres:/data/postgres
    ports:
      - "5454:5454"
    networks:
      - postgres
The file I am importing is called Dockerfile-db:
FROM postgres:14.1
RUN apt-get update && apt-get install -y postgresql-14-postgis-3
CMD ["/usr/local/bin/docker-entrypoint.sh","postgres"]
How can I do the same import in a K8s manifest file? This is where I add the database:
spec:
  serviceName: zone-service-db-service
  selector:
    matchLabels:
      app: zone-service-db
  replicas: 1
  template:
    metadata:
      labels:
        app: zone-service-db
    spec:
      tolerations:
        - key: "podType"
          operator: "Equal"
          value: "isDB"
          effect: "NoSchedule"
      containers:
        - name: postgres
          image: postgres:latest
          imagePullPolicy: "IfNotPresent"
          ports:
            - containerPort: 5432
          envFrom:
            - configMapRef:
                name: zone-service-db
          volumeMounts:
            - mountPath: /var/lib/postgresql/data
              name: postgredb
          resources:
            requests:
              memory: '256Mi'
              cpu: '100m'
            limits:
              memory: '256Mi'
      volumes:
        - name: postgredb
          persistentVolumeClaim:
            claimName: zone-service-pv-claim
How can I reference Dockerfile-db from the K8s manifest so it is used when the Postgres container is created, making the extensions available in the image? Any help is appreciated.
I believe you are getting this error:
ERROR: type "geometry" does not exist
The file you have added above will mostly work with docker-compose, but for Kubernetes, to have Postgres and PostGIS work together you will have to use the postgis image instead of the postgres image, like this:
spec:
  serviceName: zone-service-db-service
  selector:
    matchLabels:
      app: zone-service-db
  replicas: 1
  template:
    metadata:
      labels:
        app: zone-service-db
    spec:
      tolerations:
        - key: "podType"
          operator: "Equal"
          value: "isDB"
          effect: "NoSchedule"
      containers:
        - name: postgres
          image: postgis/postgis:latest
          imagePullPolicy: "IfNotPresent"
          ports:
            - containerPort: 5432
          envFrom:
            - configMapRef:
                name: zone-service-db
          volumeMounts:
            - mountPath: /var/lib/postgresql/data
              name: postgredb
          resources:
            requests:
              memory: '256Mi'
              cpu: '100m'
            limits:
              memory: '256Mi'
      volumes:
        - name: postgredb
          persistentVolumeClaim:
            claimName: zone-service-pv-claim
Try this and advise. No need to import external files.
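If the application still reports missing geometry types afterwards, the extension can also be enabled explicitly in the target database. A hedged example; the pod name, user, and database are placeholders that come from your ConfigMap, not known here:

kubectl exec -it <postgres-pod> -- psql -U <db-user> -d <db-name> -c "CREATE EXTENSION IF NOT EXISTS postgis;"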

Healthcheck is failing when deploying an MSSQL database

The healthcheck is failing when deploying an MSSQL database on AWS ECS.
Below is a copy of the service from the docker-compose.yml file:
sql_server_db:
  image: 'mcr.microsoft.com/mssql/server:2017-latest'
  environment:
    SA_PASSWORD: Password123#
    ACCEPT_EULA: "Y"
  labels:
    - traefik.enable=false
  deploy:
    resources:
      limits:
        cpus: '1'
        memory: 8Gb
      reservations:
        cpus: '0.5'
        memory: 4GB
  healthcheck:
    test: ["/opt/mssql-tools/bin/sqlcmd", "-U", "sa", "-P", "Password123#", "-Q", "SELECT 1"]
    interval: 1m
    retries: 10
    start_period: 60s
I had the same issue; when checking the container's "inspect" output I was getting "Login fails for SA".
This was disturbing because the password was the same (I used the .env variable), but for some reason the special characters seem to mess up the check.
I simply created a one-liner script
/opt/mssql-tools/bin/sqlcmd -S localhost -U SA -P $SA_PASSWORD -Q "Select 1"
and then called it as the healthcheck
healthcheck:
  test: ["CMD", "bash", "/healthcheck.sh"]
and it works.
I don't really like it, but I will keep it until I find a better one (I am not sure it can actually fail).
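For reference, a minimal sketch of that wrapper approach (the /healthcheck.sh path, its contents, and how it gets into the image are assumptions; the answer above does not show them):

#!/bin/bash
# healthcheck.sh - assumed contents, mirroring the one-liner above;
# quoting $SA_PASSWORD keeps special characters from being re-interpreted by the shell
/opt/mssql-tools/bin/sqlcmd -S localhost -U SA -P "$SA_PASSWORD" -Q "SELECT 1"

and in the compose service (timings copied from the question):

healthcheck:
  test: ["CMD", "bash", "/healthcheck.sh"]
  interval: 1m
  retries: 10
  start_period: 60s

The script has to exist inside the container, for example baked into the image with a COPY in a small derived Dockerfile or bind-mounted as a volume.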

After converting docker-compose.yml into K8s using kompose.io, Kubernetes pods are in Pending state due to PVC

I used kompose.io to convert the docker-compose.yml (given at the end) into Kubernetes YAML deployments and then ran kubectl apply -f . to deploy them on my 4-node K8s cluster in CloudLab.
But most of the pods are stuck in Pending due to the following error:
Warning FailedScheduling 24h default-scheduler 0/4 nodes are available: 4 pod has unbound immediate PersistentVolumeClaims.
I would be grateful if anyone could give me some pointers on how to resolve this issue.
$ kubectl describe pod pgadmin-6897994987-q2kxj
Name:           pgadmin-6897994987-q2kxj
Namespace:      default
Priority:       0
Node:           <none>
Labels:         io.kompose.service=pgadmin
                pod-template-hash=6897994987
Annotations:    kompose.cmd: kompose convert -f docker-compose.yml
                kompose.version: 1.26.1 (a9d05d509)
Status:         Pending
IP:
IPs:            <none>
Controlled By:  ReplicaSet/pgadmin-6897994987
Containers:
  pgadmin-container:
    Image:      dpage/pgadmin4
    Port:       80/TCP
    Host Port:  0/TCP
    Environment:
      PGADMIN_DEFAULT_EMAIL:
      PGADMIN_DEFAULT_PASSWORD:
    Mounts:
      /root/.pgadmin from pgadmin-volume (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-lt9r6 (ro)
Conditions:
  Type           Status
  PodScheduled   False
Volumes:
  pgadmin-volume:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  pgadmin-volume
    ReadOnly:   false
  default-token-lt9r6:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-lt9r6
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                 node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason            Age  From               Message
  ----     ------            ---- ----               -------
  Warning  FailedScheduling  24h  default-scheduler  0/4 nodes are available: 4 pod has unbound immediate PersistentVolumeClaims.
kubectl get pods:
NAME READY STATUS RESTARTS AGE
callback-755dd5d9cd-jnsbq 0/1 Pending 0 24h
create-bucket-7bd9cdd74f-hddcp 0/1 Pending 0 24h
distribution-8556bcd7d7-lzs9k 0/1 Pending 0 24h
download-6dd7fbf4d5-mpvxm 0/1 Pending 0 24h
efgs-fake-77777895d4-7kt86 1/1 Running 0 24h
objectstore-5bd9cfdc9-jpvfh 0/1 Pending 0 24h
pgadmin-6897994987-q2kxj 0/1 Pending 0 24h
postgres-978d8867b-czdpj 0/1 Pending 0 24h
submission-5db76bc69d-hckl7 0/1 Pending 0 24h
upload-5748b4857d-w8n5c 0/1 Pending 0 24h
verification-fake-6f4d75944f-ssttg 1/1 Running 0 24h
docker-compose.yml
version: '3'
services:
  callback:
    build:
      context: ./
      dockerfile: ./services/callback/Dockerfile
    depends_on:
      - postgres
      - efgs-fake
    ports:
      - "8010:8080"
    environment:
      SPRING_PROFILES_ACTIVE: debug,disable-ssl-client-postgres
      POSTGRESQL_SERVICE_PORT: '5432'
      POSTGRESQL_SERVICE_HOST: postgres
      POSTGRESQL_DATABASE: ${POSTGRES_DB}
      POSTGRESQL_PASSWORD_CALLBACK: ${POSTGRES_CALLBACK_PASSWORD}
      POSTGRESQL_USER_CALLBACK: ${POSTGRES_CALLBACK_USER}
      POSTGRESQL_PASSWORD_FLYWAY: ${POSTGRES_FLYWAY_PASSWORD}
      POSTGRESQL_USER_FLYWAY: ${POSTGRES_FLYWAY_USER}
      SSL_CALLBACK_KEYSTORE_PATH: file:/secrets/ssl.p12
      SSL_CALLBACK_KEYSTORE_PASSWORD: 123456
      SSL_FEDERATION_TRUSTSTORE_PATH: file:/secrets/contains_efgs_truststore.jks
      SSL_FEDERATION_TRUSTSTORE_PASSWORD: 123456
      FEDERATION_GATEWAY_KEYSTORE_PATH: file:/secrets/ssl.p12
      FEDERATION_GATEWAY_KEYSTORE_PASS: 123456
      FEDERATION_GATEWAY_BASE_URL: https://efgs-fake:8014
      # for local testing: FEDERATION_GATEWAY_BASE_URL: https://host.docker.internal:8014
    volumes:
      - ./docker-compose-test-secrets:/secrets
  submission:
    build:
      context: ./
      dockerfile: ./services/submission/Dockerfile
    depends_on:
      - postgres
      - verification-fake
    ports:
      - "8000:8080"
      - "8006:8081"
    environment:
      SPRING_PROFILES_ACTIVE: debug,disable-ssl-client-postgres
      POSTGRESQL_SERVICE_PORT: '5432'
      POSTGRESQL_SERVICE_HOST: postgres
      POSTGRESQL_DATABASE: ${POSTGRES_DB}
      POSTGRESQL_PASSWORD_SUBMISION: ${POSTGRES_SUBMISSION_PASSWORD}
      POSTGRESQL_USER_SUBMISION: ${POSTGRES_SUBMISSION_USER}
      POSTGRESQL_PASSWORD_FLYWAY: ${POSTGRES_FLYWAY_PASSWORD}
      POSTGRESQL_USER_FLYWAY: ${POSTGRES_FLYWAY_USER}
      VERIFICATION_BASE_URL: http://verification-fake:8004
      SUPPORTED_COUNTRIES: DE,FR
      SSL_SUBMISSION_KEYSTORE_PATH: file:/secrets/ssl.p12
      SSL_SUBMISSION_KEYSTORE_PASSWORD: 123456
      SSL_VERIFICATION_TRUSTSTORE_PATH: file:/secrets/contains_efgs_truststore.jks
      SSL_VERIFICATION_TRUSTSTORE_PASSWORD: 123456
    volumes:
      - ./docker-compose-test-secrets:/secrets
  distribution:
    build:
      context: ./
      dockerfile: ./services/distribution/Dockerfile
    depends_on:
      - postgres
      - objectstore
      - create-bucket
    environment:
      SUPPORTED_COUNTRIES: DE,FR
      SPRING_PROFILES_ACTIVE: debug,signature-dev,testdata,disable-ssl-client-postgres,local-json-stats
      POSTGRESQL_SERVICE_PORT: '5432'
      POSTGRESQL_SERVICE_HOST: postgres
      POSTGRESQL_DATABASE: ${POSTGRES_DB}
      POSTGRESQL_PASSWORD_DISTRIBUTION: ${POSTGRES_DISTRIBUTION_PASSWORD}
      POSTGRESQL_USER_DISTRIBUTION: ${POSTGRES_DISTRIBUTION_USER}
      POSTGRESQL_PASSWORD_FLYWAY: ${POSTGRES_FLYWAY_PASSWORD}
      POSTGRESQL_USER_FLYWAY: ${POSTGRES_FLYWAY_USER}
      # Settings for the S3 compatible objectstore
      CWA_OBJECTSTORE_ACCESSKEY: ${OBJECTSTORE_ACCESSKEY}
      CWA_OBJECTSTORE_SECRETKEY: ${OBJECTSTORE_SECRETKEY}
      CWA_OBJECTSTORE_ENDPOINT: http://objectstore
      CWA_OBJECTSTORE_BUCKET: cwa
      CWA_OBJECTSTORE_PORT: 8000
      services.distribution.paths.output: /tmp/distribution
      # Settings for cryptographic artifacts
      VAULT_FILESIGNING_SECRET: ${SECRET_PRIVATE}
      FORCE_UPDATE_KEYFILES: 'false'
      STATISTICS_FILE_ACCESS_KEY_ID: fakeAccessKey
      STATISTICS_FILE_SECRET_ACCESS_KEY: secretKey
      STATISTICS_FILE_S3_ENDPOINT: https://localhost
      DSC_TRUST_STORE: /secrets/dsc_truststore
      DCC_TRUST_STORE: /secrets/dcc_truststore
    volumes:
      - ./docker-compose-test-secrets:/secrets
  download:
    build:
      context: ./
      dockerfile: ./services/download/Dockerfile
    depends_on:
      - postgres
    ports:
      - "8011:8080"
    environment:
      SPRING_PROFILES_ACTIVE: debug,disable-ssl-server,disable-ssl-client-postgres,disable-ssl-client-verification,disable-ssl-client-verification-verify-hostname,disable-ssl-efgs-verification
      POSTGRESQL_SERVICE_PORT: '5432'
      POSTGRESQL_SERVICE_HOST: postgres
      POSTGRESQL_DATABASE: ${POSTGRES_DB}
      POSTGRESQL_PASSWORD_CALLBACK: ${POSTGRES_CALLBACK_PASSWORD}
      POSTGRESQL_USER_CALLBACK: ${POSTGRES_CALLBACK_USER}
      POSTGRESQL_PASSWORD_FLYWAY: ${POSTGRES_FLYWAY_PASSWORD}
      POSTGRESQL_USER_FLYWAY: ${POSTGRES_FLYWAY_USER}
      FEDERATION_GATEWAY_KEYSTORE_PATH: file:/secrets/ssl.p12
      FEDERATION_GATEWAY_KEYSTORE_PASS: 123456
      SSL_FEDERATION_TRUSTSTORE_PATH: file:/secrets/contains_efgs_truststore.jks
      SSL_FEDERATION_TRUSTSTORE_PASSWORD: 123456
    volumes:
      - ./docker-compose-test-secrets:/secrets
  upload:
    build:
      context: ./
      dockerfile: ./services/upload/Dockerfile
    depends_on:
      - postgres
    ports:
      - "8012:8080"
    environment:
      SPRING_PROFILES_ACTIVE: disable-ssl-client-postgres, connect-efgs
      POSTGRESQL_SERVICE_PORT: '5432'
      POSTGRESQL_SERVICE_HOST: postgres
      POSTGRESQL_DATABASE: ${POSTGRES_DB}
      POSTGRESQL_PASSWORD_FLYWAY: ${POSTGRES_FLYWAY_PASSWORD}
      POSTGRESQL_USER_FLYWAY: ${POSTGRES_FLYWAY_USER}
      VAULT_EFGS_BATCHIGNING_SECRET: ${SECRET_PRIVATE}
      VAULT_EFGS_BATCHIGNING_CERTIFICATE: file:/secrets/efgs_signing_cert.pem
      SSL_FEDERATION_TRUSTSTORE_PATH: file:/secrets/contains_efgs_truststore.jks
      SSL_FEDERATION_TRUSTSTORE_PASSWORD: 123456
      FEDERATION_GATEWAY_KEYSTORE_PATH: file:/secrets/ssl.p12
      FEDERATION_GATEWAY_KEYSTORE_PASS: 123456
    volumes:
      - ./docker-compose-test-secrets:/secrets
  postgres:
    image: postgres:11.8
    restart: always
    ports:
      - "8001:5432"
    environment:
      PGDATA: /data/postgres
      POSTGRES_DB: ${POSTGRES_DB}
      POSTGRES_USER: ${POSTGRES_USER}
      POSTGRES_PASSWORD: ${POSTGRES_PASSWORD}
    volumes:
      - postgres_volume:/data/postgres
      - ./setup/setup-roles.sql:/docker-entrypoint-initdb.d/1-roles.sql
      - ./local-setup/create-users.sql:/docker-entrypoint-initdb.d/2-users.sql
      - ./local-setup/enable-test-data-docker-compose.sql:/docker-entrypoint-initdb.d/3-enable-testdata.sql
  pgadmin:
    container_name: pgadmin_container
    image: dpage/pgadmin4
    volumes:
      - pgadmin_volume:/root/.pgadmin
    ports:
      - "8002:80"
    restart: unless-stopped
    depends_on:
      - postgres
    environment:
      PGADMIN_DEFAULT_EMAIL: ${PGADMIN_DEFAULT_EMAIL}
      PGADMIN_DEFAULT_PASSWORD: ${PGADMIN_DEFAULT_PASSWORD}
  objectstore:
    image: "zenko/cloudserver"
    volumes:
      - objectstore_volume:/data
    ports:
      - "8003:8000"
    environment:
      ENDPOINT: objectstore
      REMOTE_MANAGEMENT_DISABLE: 1
      SCALITY_ACCESS_KEY_ID: ${OBJECTSTORE_ACCESSKEY}
      SCALITY_SECRET_ACCESS_KEY: ${OBJECTSTORE_SECRETKEY}
  create-bucket:
    image: amazon/aws-cli
    environment:
      - AWS_ACCESS_KEY_ID=${OBJECTSTORE_ACCESSKEY}
      - AWS_SECRET_ACCESS_KEY=${OBJECTSTORE_SECRETKEY}
    entrypoint: [ "/root/scripts/wait-for-it/wait-for-it.sh", "objectstore:8000", "-t", "30", "--" ]
    volumes:
      - ./scripts/wait-for-it:/root/scripts/wait-for-it
    command: aws s3api create-bucket --bucket cwa --endpoint-url http://objectstore:8000 --acl public-read
    depends_on:
      - objectstore
  verification-fake:
    image: roesslerj/cwa-verification-fake:0.0.5
    restart: always
    ports:
      - "8004:8004"
  efgs-fake:
    image: roesslerj/cwa-efgs-fake:0.0.5
    restart: always
    ports:
      - "8014:8014"
volumes:
  postgres_volume:
  pgadmin_volume:
  objectstore_volume:
akazad@node-0:~/cwa-server$ kubectl get pvc
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
callback-claim0 Pending 26h
create-bucket-claim0 Pending 26h
distribution-claim0 Pending 26h
download-claim0 Pending 26h
objectstore-volume Pending 26h
pgadmin-volume Pending 26h
postgres-claim1 Pending 26h
postgres-claim2 Pending 26h
postgres-claim3 Pending 26h
postgres-volume Pending 26h
submission-claim0 Pending 26h
upload-claim0 Pending 26h
akazad@node-0:~/cwa-server$ kubectl describe pvc pgadmin-volume
Name: pgadmin-volume
Namespace: default
StorageClass:
Status: Pending
Volume:
Labels: io.kompose.service=pgadmin-volume
Annotations: <none>
Finalizers: [kubernetes.io/pvc-protection]
Capacity:
Access Modes:
VolumeMode: Filesystem
Used By: pgadmin-6897994987-q2kxj
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal FailedBinding 69s (x6322 over 26h) persistentvolume-controller no persistent volumes available for this claim and no storage class is set
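The event message spells out the cause: the kompose-generated claims have no StorageClass set and there is no PersistentVolume for them to bind to. As an illustration only (the size and hostPath below are assumptions, and each Pending claim would need its own volume unless a default StorageClass with dynamic provisioning is installed), a PersistentVolume that such a claim could bind to might look like:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: pgadmin-volume-pv
spec:
  capacity:
    storage: 1Gi               # assumed size; must cover the claim's request
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: /mnt/data/pgadmin    # assumed node-local path, suitable for test clusters only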

How to achieve zero downtime with docker stack

Docker updates the container, but network registration takes 10 minutes to complete, so while the new container is being registered the page returns 502 because the internal network is still pointing at the old container. How can I delay the removal of the old container by 10 minutes or so after the update to the new container? Ideally I would like to push this config with docker stack, but I'll do whatever it takes. I should also note that I am unable to use replicas right now due to certain limitations of a security package I'm being forced to use.
version: '3.7'
services:
  xxx:
    image: ${xxx}/com.xxx:${xxx}
    environment:
      - SERVICE_NAME=xxx
      - xxx
      - _xxx
      - SPRING_PROFILES_ACTIVE=${xxx}
    networks:
      - xxx${xxx}
    healthcheck:
      interval: 1m
    deploy:
      mode: replicated
      replicas: 1
      resources:
        limits:
          cpus: '3'
          memory: 1024M
        reservations:
          cpus: '0.50'
          memory: 256M
      labels:
        - com.docker.lb.hosts=xxx${_xxx}.xxx.com
        - jenkins.url=${xxx}
        - com.docker.ucp.access.label=/${xxx}/xxx
        - com.docker.lb.network=xxx${_xxx}
        - com.docker.lb.port=8080
        - com.docker.lb.service_cluster=${xxx}
        - com.docker.lb.ssl_cert=xxx.cert
        - com.docker.lb.ssl_key=xxx.key
        - com.docker.lb.redirects=http://xxx${_xxx}.xxx.com/xxx,https://xxx${_xxx}.xxx.com/xxx
      restart_policy:
        condition: any
        delay: 5s
        max_attempts: 3
        window: 120s
      update_config:
        parallelism: 1
        delay: 10s
        order: start-first
        failure_action: rollback
      rollback_config:
        parallelism: 0
        order: stop-first
    secrets:
      - ${xxx}
networks:
  xxx${_xxx}:
    external: true
secrets:
  ${xxx}:
    external: true
  xxx.cert:
    external: true
  xxx.key:
    external: true
Use a proper healthcheck - see the reference here: https://docs.docker.com/compose/compose-file/#healthcheck
So:
You need to define a proper test so Docker knows when your new container is fully up (that goes in the test instruction of your healthcheck).
Use the start_period instruction to cover your 10-minute (or so) wait; otherwise, Docker Swarm would just kill your new container and never let it start.
Basically, once you get the healthcheck right, this should solve your issue.
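A sketch of what that could look like for this service (the endpoint and timings are assumptions, not from the question):

healthcheck:
  test: ["CMD", "curl", "-f", "http://localhost:8080/health"]   # assumed endpoint on the com.docker.lb.port above
  interval: 30s
  timeout: 10s
  retries: 3
  start_period: 10m    # failures during this window are not counted, covering the slow registration

With order: start-first in update_config, the old task should keep serving until the new one reports healthy, so nothing is removed during that window.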

How to set up a Docker log driver tag with the Ansible ecs_task_definition module?

I am trying to set a Docker log tag with Ansible for an Amazon ECS task definition but, unfortunately, I am getting the error mentioned below.
I want to display the container name in the Docker logs.
playbook.yml:
tasks:
  - name: Create task definition
    ecs_taskdefinition:
      containers:
        - name: hello-world-1
          cpu: "2"
          essential: true
          image: "nginx"
          memory: "128"
          portMappings:
            - containerPort: "80"
              hostPort: "0"
          logConfiguration:
            logDriver: syslog
            options:
              syslog-address: udp://127.0.0.1:514
              tag: '{{.Name}}'
      family: "{{ taskfamily_name }}"
      state: present
    register: task_output
error:
TASK [Create task definition] ***************************************************************************
task path: /home/ubuntu/ansible/ecs_random.yml:14
fatal: [localhost]: FAILED! => {
"msg": "template error while templating string: unexpected '.'. String: {{.Name}}"
}
The expression below works for me.
tag: "{{ '{{' }}.Name{{ '}}' }}"
task:
tasks:
  - name: Create task definition
    ecs_taskdefinition:
      containers:
        - name: hello-world-1
          cpu: "2"
          essential: true
          image: "nginx"
          memory: "128"
          portMappings:
            - containerPort: "80"
              hostPort: "0"
          logConfiguration:
            logDriver: syslog
            options:
              syslog-address: udp://127.0.0.1:514
              tag: "{{ '{{' }}.Name{{ '}}' }}"
Related Question:
Escaping double curly braces in Ansible
Related Documentation:
http://jinja.pocoo.org/docs/dev/templates/#escaping
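An equivalent way to keep Jinja from parsing the braces, per the escaping documentation linked above, is a raw block:

tag: "{% raw %}{{.Name}}{% endraw %}"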
