GitLab with docker-compose throwing weird errors - docker

My docker-compose file:
version: "3.8"
networks:
traefik_network:
external: true
services:
gitlab:
container_name: gitlab
image: 'gitlab/gitlab-ee:latest'
restart: always
hostname: 'my-server'
environment:
GITLAB_HOME: ./srv/gitlab
# GITLAB_OMNIBUS_CONFIG: |
# external_url 'https://my-server'
# # Add any other gitlab.rb configuration here, each on its own line
ports:
- '3380:80'
- '3443:443'
- '3322:22'
volumes:
- ./srv/gitlab/config:/etc/gitlab'
- ./srv/gitlab/logs:/var/log/gitlab'
- ./srv/gitlab/data:/var/opt/gitlab'
shm_size: '256m'
networks:
- traefik_network
I started with this guide, as it seemed to be the most recent thing I could find:
https://docs.gitlab.com/ee/install/docker.html#install-gitlab-using-docker-compose
I changed my volume directory after trying chmod and guessing that the issue might be related to file permissions; pointing the volumes at the project directory made no difference. Other than that, my file is pretty much the same as the recommended default. I eventually need this to run with SSL and NGINX, but I'd like to start by just getting the app running. Does anyone have a working compose file, or know what is causing my errors?
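For comparison, the linked guide treats GITLAB_HOME as a shell variable set on the host, not as a container environment variable, and uses it in the volume paths. A minimal sketch of that layout (the /srv/gitlab path is the docs' suggestion; adjust as needed):

export GITLAB_HOME=/srv/gitlab   # on the host, before docker-compose up

volumes:
  - '$GITLAB_HOME/config:/etc/gitlab'
  - '$GITLAB_HOME/logs:/var/log/gitlab'
  - '$GITLAB_HOME/data:/var/opt/gitlab'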
This is the first chunk of my output:
https://pastebin.com/mtzYWFDg
As you can see, after it gets through the startup steps it begins to loop every second, complaining about owners, symlinks, and "execute ruby block" steps. The looping errors won't stop until I kill the container. The ports are also unreachable. I have no idea what I could have done to cause any of these errors, and I'm pretty lost trying to find answers.

Related

docker-compose not loading definitions.json for RabbitMQ

I am experimenting with Docker to create a container for RabbitMQ on my Windows 11 laptop. Doing the basics, I can get it to run without error. From there I tried to expand it by adding the definitions.json to the compose YAML file. For the definitions.json, I simply downloaded the definitions straight from the UI.
My docker-compose.yml looks like this:
version: "3.8"
services:
rabbitmq:
image: rabbitmq:3-management
container_name: 'rabbitmq'
ports:
- 5672:5672
- 15672:15672
volumes:
- ./definitions.json:/etc/rabbitmq/definitions.json
- ~/.docker-conf/rabbitmq/data/:/var/lib/rabbitmq/
- ~/.docker-conf/rabbitmq/log/:/var/log/rabbitmq
networks:
- rabbitmq_go_net
networks:
rabbitmq_go_net:
driver: bridge
Now, when I run the compose file, it runs without any error at all, but none of the queues are visible in the UI. I have tried various things, but it appears the definitions.json is being ignored. As a further check, I reloaded the definitions through the UI and the queues reappeared.
So, how do you configure the docker compose file to load the definitions.json when creating a container from docker compose up?
Actually, the problem was the location where the definitions.json is meant to be stored. Some websites I have read have it located in the rabbitmq folder. However, I followed this link https://thomasdecaux.medium.com/deploy-rabbitmq-with-docker-static-configuration-23ad39cdbf39 and it worked. The other point is to make sure there is a rabbitmq.conf file that loads the definitions.json file - this is critical.
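For anyone else landing here, a minimal sketch of that arrangement, assuming the file names above. The load_definitions key (management.load_definitions on older RabbitMQ versions) is what makes RabbitMQ read the file at boot:

# rabbitmq.conf - mounted next to definitions.json
load_definitions = /etc/rabbitmq/definitions.json

volumes:
  - ./rabbitmq.conf:/etc/rabbitmq/rabbitmq.conf
  - ./definitions.json:/etc/rabbitmq/definitions.json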

docker-compose failing saying image doesn't exist

I am trying to take a very difficult, multi-step Docker setup and turn it into an easy docker-compose. I haven't really used docker-compose before. I am used to using a Dockerfile to build an image, then running something like
docker run --name mysql -v ${PWD}/sql-files:/docker-entrypoint-initdb.d ... -h mysql -d mariadb:10.4
Then running the web app in the same manner after building its simple Dockerfile. Trying to combine these into a docker-compose.yml file seems to be quite difficult. I'll post my docker-compose.yml file, edited to remove passwords and such, and the error I am getting; hopefully someone can figure out why it's failing, because I have no idea...
docker-compose.yml
version: "3.7"
services:
mysql:
image: mariadb:10.4
container_name: mysql
environment:
MYSQL_ROOT_PASSWORD: passwordd1
MYSQL_ALLOW_EMPTY_PASSWORD: "true"
volumes:
- ./sql-files:/docker-entrypoint-initdb.d
- ./ssl:/var/lib/mysql/ssl
- ./tls.cnf:/etc/mysql/conf.d/tls.cnf
healthcheck:
test: ["CMD", "mysqladmin ping"]
interval: 10s
timeout: 2s
retries: 10
web:
build: ./base/newdockerfile
container_name: web
hostname: dev.website.com
volumes:
- ./ssl:/etc/httpd/ssl
- ./webfiles:/var/www
depends_on:
mysql:
condition: service_healthy
ports:
- "8443:443"
- "8888:80"
entrypoint:
- "/sbin/httpd"
- "-D"
- "FOREGROUND"
The error I get when running docker-compose up in the terminal window is...
Service 'web' depends on service 'mysql' which is undefined.
Why would mysql be undefined? It's the first definition in the yml file and has a healthcheck associated with it. It also fails very quickly, within a few seconds, so there's no way all the healthchecks ran and failed, and I'm not getting any other errors in the terminal window. I do docker-compose up and within a couple of seconds I get that error. Any help would be greatly appreciated. Thank you.
According to this documentation:
depends_on does not wait for db and redis to be "ready" before starting web - only until they have been started. If you need to wait for a service to be ready, see Controlling startup order for more on this problem and strategies for solving it.
Designing your application to be resilient when the database is not available or not yet set up is something we all have to deal with. A healthcheck doesn't guarantee the database is ready before the next stage. The best approach is probably a wait-for-it script (or wait-for), run via the command after depends_on:
depends_on:
  - "db"
command: ["./wait-for-it.sh"]

Symfony Swiftmailer with docker mailcatcher

I have a Symfony app (v4.3) running in a Docker setup. This setup also has a container for the mailcatcher. No matter how I set the MAILER_URL in the .env file, no mail shows up in the mailcatcher. If I just call the regular PHP mail() function, the mail pops up in the mailcatcher. The setup has been used for other projects as well, where it worked without a flaw.
Only with the Symfony Swiftmailer do the mails not arrive.
My docker-compose file looks like this:
version: '3'
services:
  #######################################
  # PHP application Docker container
  #######################################
  app:
    container_name: project_app
    build:
      context: docker/web
    networks:
      - default
    volumes:
      - ../project:/project:cached
      - ./etc/httpd/vhost.d:/opt/docker/etc/httpd/vhost.d
    # cap and privileged needed for slowlog
    cap_add:
      - SYS_PTRACE
    privileged: true
    env_file:
      - docker/web/conf/environment.yml
      - docker/web/conf/environment.development.yml
    environment:
      - VIRTUAL_HOST=.project.docker
      - POSTFIX_RELAYHOST=[mail]:1025
  #######################################
  # Mailcatcher
  #######################################
  mail:
    image: schickling/mailcatcher
    container_name: project_mail
    networks:
      - default
    environment:
      - VIRTUAL_HOST=mail.project.docker
      - VIRTUAL_PORT=1080
I played around with the MAILER_URL setting for hours, but everything has failed so far.
I hope somebody here has an idea how to set the MAILER_URL.
Thank you
According to docker-compose.yml, your MAILER_URL should be:
smtp://project_mail:1025
Double-check that it has the correct value.
Then you can view the container logs using docker-compose logs -f mail to see whether your messages reach the service at all.
It will be something like:
==> SMTP: Received message from '<user@example.com>' (619 bytes)
Second: try to restart your containers. Sometimes changes in .env* files are not applied instantly.
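For a stock Symfony 4.3 Swiftmailer setup, the corresponding .env line would be something like the following sketch (mail should work as the host too, since it is the compose service name on the shared default network):

# .env
MAILER_URL=smtp://mail:1025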

Docker stack deploy extra_hosts ignored

docker stack deploy isn't respecting the extra_hosts parameter in my compose file. When I do a simple docker-compose up, the entry is created in /etc/hosts; however, when I do docker deploy --compose-file docker-compose.yml myapp, it ignores extra_hosts. Any insights?
Below is the docker-compose.yml:
version: '3'
services:
  web:
    image: user-service
    deploy:
      labels:
        - the label
    build:
      context: ./
    environment:
      DATABASE_URL: jdbc:postgresql://dbhost:5432/postgres
    ports:
      - 9002:9002
    extra_hosts:
      - "dbhost: ${DB_HOST}"
    networks:
      - wellness_swarm
    env_file:
      - .env
networks:
  wellness_swarm:
    external:
      name: wellness_swarm
Running docker-compose config also displays the compose file properly.
This may not be a direct answer to the question, as it doesn't use env variables, but what I found was that the extra_hosts block in the compose file is ignored in swarm mode if entered in the format above.
i.e. this works for me and puts entries in /etc/hosts in the container:
extra_hosts:
  retisdev: 10.48.161.44
  retistesting: 10.48.161.44
whereas when entered in the other format, it gets ignored when deploying as a service:
extra_hosts:
  - "retisdev=10.48.161.44"
  - "retistesting=10.48.161.44"
I think it's an ordering issue. The ${} variable you've got in the compose file is substituted during YAML processing, before the service definition is created. Then stack deploy processes the .env file into envvars for use inside the container, but the YAML variable is needed first...
To fix that, run the docker-compose config command first to process the YAML, then feed its output to stack deploy.
docker-compose config will show you the output you're likely wanting.
Then pipe it to get a one-liner:
docker-compose config | docker stack deploy -c - myapp
Note: ideally you wouldn't use extra_hosts at all, but rather put the envvar directly in the connection string. Your way seems like unnecessary complexity and isn't the usual way I see a connection string created.
e.g.
version: '3'
services:
  web:
    image: user-service
    deploy:
      labels:
        - the label
    build:
      context: ./
    environment:
      DATABASE_URL: jdbc:postgresql://${DB_HOST}:5432/postgres
    ports:
      - 9002:9002
    networks:
      - wellness_swarm
    env_file:
      - .env
networks:
  wellness_swarm:
    external:
      name: wellness_swarm
As I see from https://github.com/moby/moby/issues/29133, it seems this is by design: the compose command takes the environment variables mentioned in the .env file into consideration, but the deploy command ignores them :( why is that so, pretty lame reasons!

Restart Docker Containers Automatically When They Crash

I want to restart a container automatically if it crashes, and I am not sure how to go about doing this. I have a compose file, docker-compose-deps.yml, that has elasticsearch, redis, nats, and mongo. I run this in the terminal to set it up: docker-compose -f docker-compose-deps.yml up -d. After this I set up my containers by running docker-compose up -d. Is there a way to make these containers restart if they crash? I noticed that Docker has a built-in restart, but I don't know how to implement it.
After some feedback I added restart: always to my docker-compose file and my docker-compose-deps.yml file. Does this look correct, and is this how you would implement restart: always?
docker-compose sample
myproject-server:
  build: "../myproject-server"
  dockerfile: Dockerfile-dev
  restart: always
  ports:
    - 5880:5880
    - 6971:6971
  volumes:
    - "../myproject-server/src:/src"
  working_dir: "/src"
  external_links:
    - nats
    - mongo
    - elasticsearch
    - redis
myproject-associate:
  build: "../myproject-associate"
  dockerfile: Dockerfile-dev
  restart: always
  ports:
    - 5870:5870
  volumes:
    - "../myproject-associate/src:/src"
  working_dir: "/src"
  external_links:
    - nats
    - mongo
    - elasticsearch
    - redis
docker-compose-deps.yml sample
nats:
  image: nats
  container_name: nats
  restart: always
  ports:
    - 4222:4222
mongo:
  image: mongo
  container_name: mongo
  restart: always
  volumes:
    - "./data:/data"
  ports:
    - 27017:27017
If you're using Compose, it has a restart option analogous to the one on the docker run command, so you can use that. Here is a link to the documentation about this part:
https://docs.docker.com/compose/compose-file/
Where you deploy to also matters. Most container clusters like Kubernetes, Mesos, or ECS have some configuration you can use to auto-restart your containers. If you don't use any of these tools, you are probably starting your containers manually and can then just use the restart option as you would locally.
Looks good to me. What you want to understand when working with Docker restart policies is what each one means. The always policy means that if the container stops for any reason, Docker automatically restarts it.
So why would you ever want to use always as opposed to, say, on-failure?
In some cases you might have a container that you always want to ensure is running, such as a web server. If you are running a public web application, chances are you want that server to be available 100% of the time, so for a web application I expect you want always. On the other hand, if you are running a worker process that operates on a file and then naturally exits, that is a good use case for the on-failure policy: the worker container might simply be finished processing the file, and you probably want to let it close out and not have it restart.
That's where I would expect to use the on-failure policy. So it's not just about knowing the syntax, but about when to apply which policy and what each one means.
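To make the distinction concrete, a small sketch showing both policies side by side (the image names are placeholders):

web:
  image: my-web-server      # hypothetical long-running service
  restart: always           # restart no matter why it stopped
worker:
  image: my-file-worker     # hypothetical run-to-completion job
  restart: on-failure       # restart only on a non-zero exit code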
