I've got a problem with my docker-compose setup. I'm trying to create multiple containers with one spring-cloud-config-server, and I want my webapps container to use the Spring Cloud Config server.
Here is the list of my containers:
1 for the database (db)
1 for the Spring Cloud Config server (configproperties)
1 for some webapps (webapps)
1 nginx for reverse proxying (cnginx)
I have already tried using localhost and my config server's service name as the host, both in the SPRING_CLOUD_CONFIG_URI environment variable and in bootstrap.properties.
My 'configproperties' container uses port 8085 in its server.xml.
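For illustration (the post does not include the file itself), a bootstrap.properties pointing at that config server would contain something like:

# hypothetical bootstrap.properties for one of the webapps
spring.cloud.config.uri=http://configproperties:8085/config-server
spring.profiles.active=docker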
Here is my docker-compose.yml:
version: '3'
services:
  cnginx:
    image: cnginx
    ports:
      - 80:80
    restart: always
    depends_on:
      - configproperties
      - cprocess
      - webapps
  db:
    image: postgres:9.3
    environment:
      POSTGRES_USER: xxx
      POSTGRES_PASSWORD: xxx
    restart: always
    command:
      - -c
      - max_prepared_transactions=100
  configproperties:
    restart: on-failure:2
    image: configproperties
    depends_on:
      - db
    expose:
      - "8085"
    environment:
      - SPRING_DATASOURCE_URL=jdbc:postgresql://db:5432/mydbName?currentSchema=public
  webapps:
    restart: on-failure:2
    image: webapps
    links:
      - "configproperties"
    environment:
      - SPRING_CLOUD_CONFIG_URI=http://configproperties:8085/config-server
      - SPRING_PROFILES_ACTIVE=docker
    depends_on:
      - configproperties
    expose:
      - "8085"
When I run docker-compose, my webapps deploy with errors like:
- Could not resolve placeholder 'repository.temps' in value "${repository.temps}"
But when I use Postman and send a request like this:
http://localhost/config-server/myApplication/docker/latest
it works properly: the request returns all the configuration for 'myApplication'.
I think I have missed something, but I can't find it...
Can anyone help me?
Regards,
To use the Spring Cloud Config server, we just need to start our webapps after the config server has finished initializing:
#!/bin/bash
CONFIG_SERVER_URL=$SPRING_CLOUD_CONFIG_URI/myApp/production/latest

# Check that the config server is up
echo 'Waiting for tomcat to be available'
attempt_counter=0
max_attempts=10
while [[ "$(curl -s -o /dev/null -w '%{http_code}' $CONFIG_SERVER_URL)" != "200" ]]; do
    if [ ${attempt_counter} -eq ${max_attempts} ]; then
        echo "Max attempts reached on $CONFIG_SERVER_URL"
        exit 1
    fi
    attempt_counter=$((attempt_counter+1))
    echo "Return code: $(curl -s -o /dev/null -w '%{http_code}' $CONFIG_SERVER_URL)"
    echo "Attempt to connect to $CONFIG_SERVER_URL : $attempt_counter/$max_attempts"
    sleep 15
done
echo 'Tomcat is available'
mv /usr/local/tomcat/webappsTmp/* /usr/local/tomcat/webapps/
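For completeness, one way to hook this script in (my own sketch, not part of the original answer) is to bake it into the webapps image and run it before Tomcat starts, assuming a standard Tomcat image where catalina.sh run starts the server:

# Hypothetical addition to the webapps Dockerfile
COPY wait-for-config.sh /usr/local/bin/wait-for-config.sh
RUN chmod +x /usr/local/bin/wait-for-config.sh
# Wait for the config server, then start Tomcat as usual
CMD ["/bin/bash", "-c", "/usr/local/bin/wait-for-config.sh && catalina.sh run"]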
I'm really confused about why I'm unable to make API requests to any site. For example, I want to run:
HTTParty.get("https://fakerapi.it/api/v1/persons")
It runs well on my machine (without Docker).
But if I run it inside Docker, I get:
SocketError (Failed to open TCP connection to fakerapi.it:443 (getaddrinfo: Name does not resolve))
It happens not only for this site, but for all sites.
So I guess there's something wrong with my Docker settings, but I'm not sure where to start.
I'm new to Docker, so any advice means a lot to me.
Below is my docker-compose.yaml:
version: '3.4'
services:
  db:
    image: mysql:8.0.17 # using the official mysql image from Docker Hub
    restart: always
    environment:
      MYSQL_ROOT_PASSWORD: root
    volumes:
      - db_data:/var/lib/mysql
    ports:
      - "3307:3306"
  backend:
    build:
      context: .
      dockerfile: backend-dev.Dockerfile
    ports:
      - "3001:3001"
    volumes:
      # the host repos are mapped to the container's repos
      - ./backend:/my-project
      # volume to cache gems
      - bundle:/bundle
    depends_on:
      - db
    stdin_open: true
    tty: true
    env_file: .env
    command: /bin/sh -c "rm -f tmp/pids/server.pid && rm -f tmp/pids/delayed_job.pid && bundle exec bin/delayed_job start && bundle exec rails s -p 3001 -b '0.0.0.0'"
  frontend:
    build:
      context: .
      dockerfile: frontend-dev.Dockerfile
    ports:
      - "3000:3000"
    links:
      - "backend:bb"
    depends_on:
      - backend
    volumes:
      # the host repos are mapped to the container's repos
      - ./frontend/:/my-project
    # env_file: .env
    environment:
      - NODE_ENV=development
    command: /bin/sh -c "yarn dev --port 3000"
volumes:
  db_data:
    driver: local
  bundle:
    driver: local
How I try to run it:
docker-compose run backend /bin/sh
Then, inside the container:
rails c
HTTParty.get("https://fakerapi.it/api/v1/persons")
Any idea how I can fix this?
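Not in the original question, but a quick way to narrow this down to DNS (rather than networking in general) would be something like:

# From the host: open a throwaway shell in the backend service
docker-compose run --rm backend /bin/sh
# Inside the container:
nslookup fakerapi.it   # fails if the container's DNS is broken
ping -c 1 8.8.8.8      # succeeds if IP-level connectivity is fine

If name resolution is the only thing failing, a common workaround is the dns: option on the service (e.g. pointing it at 8.8.8.8), though the root cause is often the Docker daemon's DNS configuration on the host.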
I'm developing a Docker infrastructure with Ansible and Docker Compose, and I have a problem with authentication via LDAP on my custom image of Gitea.
The error I get in the Gitea logs when I try to log in with one of the LDAP users is:
Do you think this is a network problem, or a problem with LDAP not finding the user?
The restoration of the LDIF backup works as expected, because it adds the user I'm trying to log in with:
Also, when I manually create a user in Gitea via the graphical interface, I find ansible-ldap among the authentication sources.
What could be the solution to this problem?
This is my configuration:
app.ini (of Gitea)
[DEFAULT]
RUN_USER = git
RUN_MODE = prod
...
[database]
PATH = /data/gitea/gitea.db
DB_TYPE = postgres
HOST = db:5432
NAME = gitea
USER = gitea
PASSWD = gitea
LOG_SQL = false
...
Dockerfile
FROM gitea/gitea:1.16.8
RUN apk add sudo
RUN chmod 777 /home
COPY entrypoint /usr/bin/custom_entrypoint
COPY gitea-cli.sh /usr/bin/gitea-cli.sh
ENTRYPOINT /usr/bin/custom_entrypoint
entrypoint
#!/bin/sh
set -e
while ! nc -z $GITEA__database__HOST; do sleep 1; done
chown -R 1000:1000 /data/gitea/conf
if ! [ -f /data/gitea.initialized ]; then
    gitea-cli.sh migrate
    gitea-cli.sh admin auth add-ldap --name ansible-ldap --host 127.0.0.1 --port 1389 --security-protocol unencrypted --user-search-base dc=ldap,dc=vcc,dc=unige,dc=it --admin-filter "(objectClass=giteaAdmin)" --user-filter "(&(objectClass=inetOrgPerson)(uid=%s))" --username-attribute uid --firstname-attribute givenName --surname-attribute surname --email-attribute mail --bind-dn cn=admin,dc=ldap,dc=vcc,dc=unige,dc=it --bind-password admin --allow-deactivate-all
    touch /data/gitea.initialized
fi
exec /usr/bin/entrypoint
gitea-cli.sh
#!/bin/sh
echo 'Started gitea-cli'
USER=git HOME=/data/git GITEA_WORK_DIR=/var/lib/gitea sudo -E -u git gitea --config /data/gitea/conf/app.ini "$@"
docker-compose.yaml
db:
  image: postgres:14.3
  restart: always
  hostname: db
  environment:
    POSTGRES_DB: gitea
    POSTGRES_USER: gitea
    POSTGRES_PASSWORD: gitea
  ports:
    - 5432:5432
  volumes:
    - /data/postgres:/var/lib/postgresql/data
  networks:
    - vcc
openldap:
  image: bitnami/openldap:2.5
  ports:
    - 1389:1389
    - 1636:1636
  environment:
    BITNAMI_DEBUG: "true"
    LDAP_LOGLEVEL: 4
    LDAP_ADMIN_USERNAME: admin
    LDAP_ADMIN_PASSWORD: admin
    LDAP_ROOT: dc=ldap,dc=vcc,dc=unige,dc=it
    LDAP_CUSTOM_LDIF_DIR: /bitnami/openldap/backup
    LDAP_CUSTOM_SCHEMA_FILE: /bitnami/openldap/schema/schema.ldif
  volumes:
    - /data/openldap/:/bitnami/openldap
  networks:
    - vcc
gitea:
  image: 127.0.0.1:5000/custom_gitea:51
  restart: always
  hostname: git.localdomain
  build: /data/gitea/custom
  ports:
    - 4000:4000
    - 222:22
  environment:
    USER: git
    USER_UID: 1000
    USER_GID: 1000
    GITEA__database__DB_TYPE: postgres
    GITEA__database__HOST: db:5432
    GITEA__database__NAME: gitea
    GITEA__database__USER: gitea
    GITEA__database__PASSWD: gitea
    GITEA__security__INSTALL_LOCK: "true"
    GITEA__security__SECRET_KEY: XQolFkmSxJWhxkZrkrGbPDbVrEwiZshnzPOY
  volumes:
    - /data/gitea:/data
    - /etc/timezone:/etc/timezone:ro
    - /etc/localtime:/etc/localtime:ro
    - /data/gitea/app.ini:/data/gitea/conf/app.ini
  deploy:
    mode: global
  depends_on:
    - db
    - openldap
    - openldap_admin
  networks:
    - vcc
The problem was the address 127.0.0.1 passed to --host in the entrypoint file; changing it to openldap (the name of the service in the docker-compose file) fixed it.
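For clarity, the corrected line in the entrypoint (only --host changes; everything else is identical to the original):

gitea-cli.sh admin auth add-ldap --name ansible-ldap --host openldap --port 1389 --security-protocol unencrypted --user-search-base dc=ldap,dc=vcc,dc=unige,dc=it --admin-filter "(objectClass=giteaAdmin)" --user-filter "(&(objectClass=inetOrgPerson)(uid=%s))" --username-attribute uid --firstname-attribute givenName --surname-attribute surname --email-attribute mail --bind-dn cn=admin,dc=ldap,dc=vcc,dc=unige,dc=it --bind-password admin --allow-deactivate-all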
I have a problem with my Nextcloud Docker stack.
I run fsck on every boot of my system, so the volume used by the stack is not yet mounted when the stack starts.
Is there a simple way to delay starting the stack until /srv/dev-disk-by-uuid-77365390-c57e-4b8a-846f-42fa099bf411/ is mounted?
My stack looks like this:
version: "2"
services:
nextcloud:
image: linuxserver/nextcloud
container_name: nextcloud
networks:
- homeserver
environment:
- PUID=1000
- PGID=1000
- TZ=Europe/Berlin
volumes:
- /srv/dev-disk-by-uuid-77365390-c57e-4b8a-846f-42fa099bf411/docker/appdata/nextcloud/config:/config
- /srv/dev-disk-by-uuid-77365390-c57e-4b8a-846f-42fa099bf411/docker/appdata/nextcloud/data:/data
depends_on:
- mariadb
restart: unless-stopped
mariadb:
image: yobasystems/alpine-mariadb:armhf
container_name: mariadb
networks:
- homeserver
environment:
- PUID=1000
- PGID=1000
- TZ=Europe/Berlin
- MYSQL_DATABASE=nextcloud
- MYSQL_USER=nextcloud
- MYSQL_PASSWORD=xxx
- MYSQL_ROOT_PASSWORD=
volumes:
- /srv/dev-disk-by-uuid-77365390-c57e-4b8a-846f-42fa099bf411/docker/appdata/mariadb/logs:/var/lib/mysql/mysql-bin
- /srv/dev-disk-by-uuid-77365390-c57e-4b8a-846f-42fa099bf411/docker/appdata/mariadb/mysql/_data:/var/lib/mysql
restart: unless-stopped
phpmyadmin:
container_name: phpmyadmin-nextcloud
image: phpmyadmin
networks:
- homeserver
restart: unless-stopped
environment:
- PMA_HOST=172.18.0.2
- PMA_PORT=3306
ports:
- 8182:80
networks:
homeserver:
external:
name: homeserver
Thanks a lot for your help!!
AFAIK there is no native solution for this. I would recommend taking a look at the solution in https://github.com/docker/compose/issues/374#issuecomment-310266246
Copy-pasted from the link:
// start.sh
#!/bin/sh
set -eu
docker volume create --name=gql-sync
echo "Building docker containers"
docker-compose build
echo "Running tests inside docker container"
docker-compose up -d pubsub
docker-compose up -d mongo
docker-compose up -d botms
docker-compose up -d events
docker-compose up -d identity
docker-compose up -d importer
docker-compose run status
docker-compose run testing
exit $?
// status.sh
#!/bin/sh
set -eu
echo "Attempting to connect to bots"
until nc -zv botms 3000; do
    printf '.'
    sleep 5
done
echo "Attempting to connect to events"
until nc -zv events 3000; do
    printf '.'
    sleep 5
done
echo "Attempting to connect to identity"
until nc -zv identity 3000; do
    printf '.'
    sleep 5
done
echo "Attempting to connect to importer"
until nc -zv importer 8080; do
    printf '.'
    sleep 5
done
echo "Was able to connect to all"
exit 0
// in my docker-compose file
status:
  image: yikaus/alpine-bash
  volumes:
    - "./internals/scripts:/scripts"
  command: "sh /scripts/status.sh"
  depends_on:
    - "mongo"
    - "importer"
    - "events"
    - "identity"
    - "botms"
I have the following docker-compose.yml:
version: "3.1"
services:
  db:
    container_name: ${MYSQL_CONTAINER}
    image: mysql:5.7.30
    volumes:
      - ${VOLUMES_DIR}/mysql_data:/var/lib/mysql
      - ./slow_log.cnf:/etc/mysql/my.cnf
      - ${VOLUMES_DIR}/mysql_logs:/var/log/mysql
    environment:
      - MYSQL_ROOT_PASSWORD=${MYSQL_PASSWORD}
      - MYSQL_USER=${MYSQL_USER}
    ports:
      - ${MYSQL_PORT}:3306
    entrypoint: ""
    command: bash -c "chown -R mysql:mysql /var/log/mysql && exec /entrypoint.sh mysqld --default-authentication-plugin=mysql_native_password"
    restart: on-failure
  backend:
    container_name: ${BACKEND_CONTAINER}
    image: ${BACKEND_IMAGE}
    depends_on:
      - db
    ports:
      - ${BACKEND_PORT}
    command: >
      bash -c "command A
      && command B
      && ... "
    restart: unless-stopped
I am scaling the backend service, so my startup command is sudo docker-compose -p ${COMPOSE_PROJECT_NAME} up -d --scale backend=10.
The problem I am facing is that command A and command B in the backend service run on every container startup (i.e. they are run 10 times).
But I want command A to run only once for all the backend containers, while command B should run in every container.
Any suggestions for accomplishing this?
I'm not entirely sure that there is an out-of-the-box solution for your requirement.
However, I can suggest a workaround: duplicate your backend service in docker-compose, and run one backend service with both command A and command B while the other runs only command B.
Then, when you want to scale, you scale the backend that runs only command B.
version: "3.1"
services:
db:
container_name: ${MYSQL_CONTAINER}
image: mysql:5.7.30
volumes:
- ${VOLUMES_DIR}/mysql_data:/var/lib/mysql
- ./slow_log.cnf:/etc/mysql/my.cnf
- ${VOLUMES_DIR}/mysql_logs:/var/log/mysql
environment:
- MYSQL_ROOT_PASSWORD=${MYSQL_PASSWORD}
- MYSQL_USER=${MYSQL_USER}
ports:
- ${MYSQL_PORT}:3306
entrypoint: ""
command: bash -c "chown -R mysql:mysql /var/log/mysql && exec /entrypoint.sh mysqld --default-authentication-plugin=mysql_native_password"
restart: on-failure
backend_default:
container_name: ${BACKEND_CONTAINER}
image: ${BACKEND_IMAGE}
depends_on:
- db
ports:
- ${BACKEND_PORT}
command: >
bash -c "command A
&& command B
&& ... "
restart: unless-stopped
backend:
container_name: ${BACKEND_CONTAINER}
image: ${BACKEND_IMAGE}
depends_on:
- db
ports:
- ${BACKEND_PORT}
command: >
bash -c "command B
&& ... "
restart: unless-stopped
Now you can use the scale option like below:
sudo docker-compose -p ${COMPOSE_PROJECT_NAME} up -d --scale backend=9
If there happens to be a scenario where you need only one backend to run, you can use profiles in docker-compose so that backend only runs when a specific profile is passed to the docker-compose command. That means only backend_default will run when that profile is not given, and hence the scale is 1; a sketch follows below.
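A sketch of how the profiles approach might look (the profile name scaled is my own, and profiles require docker-compose 1.28+ / the Compose spec):

backend:
  image: ${BACKEND_IMAGE}
  profiles:
    - scaled   # this service only starts when --profile scaled is passed
  # ... rest of the service as above

# scaled run:  docker-compose --profile scaled up -d --scale backend=9
# single run:  docker-compose up -d   (only backend_default starts)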
Hope this helps you. Cheers 🍻 !!!
If BACKEND_IMAGE is being built by you, you should do RUN command A in your Dockerfile. The RUN line is executed only once, at build time (so you will need to make sure that this meshes with your needs), while the ENTRYPOINT and CMD lines only run when the container starts. The command in the docker-compose file overrides the CMD line.
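A minimal sketch of that approach (the base image name is a placeholder, and command A / command B stand for the commands in the original compose file):

# Hypothetical Dockerfile for the backend image
FROM your-backend-base:latest
# "command A" runs exactly once, when the image is built
RUN command A
# "command B" still runs at startup of every container
CMD ["bash", "-c", "command B && ..."]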
I'm trying to set up a Shopware Docker container for development. I set up a Dockerfile for the Shopware initialization process, but every time I run the build process Shopware returns this error message:
mysql -u 'root' -p'root' -h 'dbs' --port='3306' -e "DROP DATABASE IF EXISTS `shopware6dev`"
ERROR 2005 (HY000): Unknown MySQL server host 'dbs' (-2)
I think Docker sets up the default network only after all build processes are done, but I need to connect to the database before all containers are ready. The depends_on option doesn't help here. I hope someone has an idea how to solve this problem.
This is my docker-compose file:
version: '3'
services:
  shopwaredev:
    build:
      context: ./docker/web
      dockerfile: Dockerfile
    volumes:
      - ./log:/var/log/apache2
    environment:
      - VIRTUAL_HOST=shopware6dev.test,www.shopware6dev.test
      - HTTPS_METHOD=noredirect
    restart: on-failure:10
    depends_on:
      - dbs
  adminer:
    image: adminer
    restart: on-failure:10
    ports:
      - 8080:8080
  dbs:
    image: "mysql:5.7"
    volumes:
      - ./mysql57:/var/lib/mysql
    environment:
      - MYSQL_ROOT_PASSWORD=root
      - MYSQL_DATABASE=shopware6dev
    restart: on-failure:10
  nginx-proxy:
    image: solution360/nginx-proxy
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - /var/run/docker.sock:/tmp/docker.sock:ro
      - ./ssl:/etc/nginx/certs
    restart: on-failure:10
and this is my Dockerfile for the shopwaredev web container:
FROM solution360/apache24-php74-shopware6
WORKDIR /var/www/html
RUN rm index.html
RUN git clone https://github.com/shopware/development.git .
RUN cp .psh.yaml.dist .psh.yaml
RUN sed -i 's|DB_USER: "app"|DB_USER: "root"|g' .psh.yaml
RUN sed -i 's|DB_PASSWORD: "app"|DB_PASSWORD: "root"|g' .psh.yaml
RUN sed -i 's|DB_HOST: "mysql"|DB_HOST: "dbs"|g' .psh.yaml
RUN sed -i 's|DB_NAME: "shopware"|DB_NAME: "shopware6dev"|g' .psh.yaml
RUN sed -i 's|APP_URL: "http://localhost:8000"|APP_URL: "http://shopware6dev.test"|g' .psh.yaml
RUN ./psh.phar install
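Not from the original post, but for context: the usual pattern for this situation is to keep network-dependent steps out of docker build (the compose network and the dbs container only exist at run time) and run them from an entrypoint instead. A sketch, assuming the sed setup stays in the Dockerfile while the install moves to a custom entrypoint script:

#!/bin/bash
# entrypoint.sh (hypothetical): replaces "RUN ./psh.phar install" in the Dockerfile,
# which would instead end with: COPY entrypoint.sh /  and  ENTRYPOINT ["/entrypoint.sh"]
set -e
# Wait until the dbs service accepts connections (assumes nc is available in the image)
until nc -z dbs 3306; do
    echo "Waiting for MySQL on dbs:3306..."
    sleep 3
done
./psh.phar install
# Hand off to the image's original startup command (assumption)
exec "$@"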