symfony docker-compose output log to host - docker

I am setting up a Symfony 6 / Next.js app with docker-compose and would like the Monolog logs, which Symfony outputs to /var/log/, to show up in a file on the host within the app code, so that I can easily review them with VSCode instead of the awkward process of opening a shell.
Note that the logs are correctly written inside the container, which is fine.
So I have attempted to create a bind mount, but it either just does not work or it throws an error on docker-compose up.
Something like:
services:
  php:
    build:
      context: ./api
      target: api_platform_php
    depends_on:
      - database
    restart: unless-stopped
    volumes:
      - php_socket:/var/run/php
      - ./api/var/log:/srv/api/var/log   # ***HERE***
In this specific instance it will error: setfacl: var/log: Not supported
I did find a number of related questions on SO, including this one, but they either concern overall docker logs or are not directly applicable.

Not what you asked, but it's common practice to pipe the logs to the container output:
Monolog config:
path: "php://stderr"
then you can use
docker-compose logs -f --tail="all" php
from outside the container
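For reference, a minimal sketch of what that could look like in a Symfony project (the file location config/packages/prod/monolog.yaml and the handler name main are assumptions; adapt them to your setup):
# config/packages/prod/monolog.yaml (assumed location)
monolog:
    handlers:
        main:
            type: stream
            path: "php://stderr"   # write logs to the container's stderr instead of var/log
            level: debug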

Related

docker-compose not loading definitions.json for RabbitMQ

I am experimenting with Docker to create a container for RabbitMQ on my Windows 11 laptop. Doing the basics, I can get it to run without error. From there I tried to expand it by adding the definitions.json to the compose yaml file. For the definitions.json, I simply downloaded the definitions straight from the UI.
My docker-compose.yml looks like this:
version: "3.8"
services:
rabbitmq:
image: rabbitmq:3-management
container_name: 'rabbitmq'
ports:
- 5672:5672
- 15672:15672
volumes:
- ./definitions.json:/etc/rabbitmq/definitions.json
- ~/.docker-conf/rabbitmq/data/:/var/lib/rabbitmq/
- ~/.docker-conf/rabbitmq/log/:/var/log/rabbitmq
networks:
- rabbitmq_go_net
networks:
rabbitmq_go_net:
driver: bridge
Now, when I run the compose file, it runs without any error at all, but none of the queues are visible in the UI. I have tried various things, but it appears as though the definitions.json is being ignored. As a further check, I reloaded the definitions through the UI and the queues reappeared.
So, how do you configure the docker compose file to load the definitions.json when creating a container from docker compose up?
Actually, the problem was the location where the definitions.json is meant to be stored. Some websites I have read place it in the rabbitmq folder. However, I followed this link https://thomasdecaux.medium.com/deploy-rabbitmq-with-docker-static-configuration-23ad39cdbf39 and it worked. The other point to make is to ensure there is a rabbitmq.conf file that loads the definitions.json file - this is critical for the definitions to be loaded.
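A minimal sketch of that setup, following the linked article (exact file locations are an assumption): a rabbitmq.conf that tells RabbitMQ to load the definitions file, with both files mounted into the container.
# rabbitmq.conf
load_definitions = /etc/rabbitmq/definitions.json
And in docker-compose.yml:
    volumes:
      - ./rabbitmq.conf:/etc/rabbitmq/rabbitmq.conf
      - ./definitions.json:/etc/rabbitmq/definitions.json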

Portainer stacks and command line arguments

I have a Portainer stack running one container. Let's use microbin as an example.
The docker-compose yaml looks like this:
---
version: "3"
services:
  paste:
    image: danielszabo99/microbin:latest
    container_name: microbin
    restart: always
    ports:
      - "8525:8080"
    volumes:
      - /mnt/docker_volumes/microbin-data:/app/pasta_data
This particular container is hosted on Docker Hub, and the maintainer provides examples of command line arguments that can be appended to the Dockerfile to activate various features easily. One example would be:
--no-listing
Disables the /pastalist endpoint, essentially making all pastas private.
So this brings me to my issue. I don't want to maintain my own custom Dockerfile, and in the past I have always inserted environment variables into the docker-compose yaml to call features like this.
An example would be like this - I have a stack running for Authentik (a sso/saml/idp gateway with a pretty web interface). You can see the "environment:" section and the variables I am calling.
server:
  image: ${AUTHENTIK_IMAGE:-ghcr.io/goauthentik/server}:${AUTHENTIK_TAG:-2022.5.3}
  restart: unless-stopped
  command: server
  environment:
    AUTHENTIK_REDIS__HOST: redis
    AUTHENTIK_POSTGRESQL__HOST: postgresql
    AUTHENTIK_POSTGRESQL__USER: ${PG_USER:-authentik}
    AUTHENTIK_POSTGRESQL__NAME: ${PG_DB:-authentik}
    AUTHENTIK_POSTGRESQL__PASSWORD: ${PG_PASS}
    AUTHENTIK_ERROR_REPORTING__ENABLED: "true"
    # WORKERS: 2
  volumes:
    - ./media:/media
    - ./custom-templates:/templates
    - geoip:/geoip
  env_file:
    - stack.env
So - not knowing how the development side of making these containers and hosting them on docker-hub goes... is there a way for me to use these command line arguments for microbin as environment variables in my docker-compose yaml / stack configuration file, or am I going to have to wait on the maintainer to implement this as a feature?
Thanks for your help in advance.
You can pass command line arguments in your docker-compose.yml file using the command attribute. That assumes of course the process started within the Docker image can deal with those, but that seems to be the case for your image and should generally be the case.
version: "3"
services:
paste:
image: danielszabo99/microbin:latest
container_name: microbin
restart: always
ports:
- "8525:8080"
volumes:
- /mnt/docker_volumes/microbin-data:/app/pasta_data
command: my command line --args here
See Docker Compose Reference - command for details.
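For example, to disable the pasta list endpoint mentioned above, the service could be written like this (assuming, as the maintainer's README suggests, that the image's entrypoint passes these arguments straight to microbin):
services:
  paste:
    image: danielszabo99/microbin:latest
    command: --no-listing   # disables the /pastalist endpoint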

Docker-compose expected container is up to date

I created a docker-compose yaml file to run a docker image after it has been pulled locally, and this yaml file contains other services (mysql and phpmyadmin). When I run docker-compose up -d I get a conflict creating a container, because its name is already used by another container, whereas I expected it to tell me that the container is already running and up to date. I know the existing container must be removed or renamed before creating a new one, but I am aiming to get the newer version of the image and check whether the mysql and phpmyadmin services are up: if they are up, report that those containers are up to date, and if not, create them, as in the other environment below.
docker-compose.yml
version: '3.3'
services:
  app-prod:
    image: prod:1.0
    container_name: app-prod
    ports:
      - "81:80"
    links:
      - db-prod
    depends_on:
      - db-prod
      - phpmyadmin-prod
  db-prod:
    image: mysql:8
    container_name: db-prod
    environment:
      - MYSQL_ROOT_PASSWORD=secret
      - MYSQL_DATABASE=laravel
      - MYSQL_USER=user
      - MYSQL_PASSWORD=secret
    volumes:
      - db-prod:/var/lib/mysql
  phpmyadmin-prod:
    image: phpmyadmin/phpmyadmin:5.0.1
    restart: always
    environment:
      PMA_HOST: db-prod
    container_name: phpmyadmin-prod
    ports:
      - "5001:80"
volumes:
  db-prod:
Error
Creating phpmyadmin-prod ... error
Creating db-prod ...
ERROR: for phpmyadmin-prod Cannot create container for service phpmyadmin: Conflict. The container name "/phpmyadmin-prod" is already in use by container "5a52b27b64f7302bccb1c3a0eaeca8a33b3dfee5f1a279f6a809695Creating db-prod ... error
ERROR: for db-prod Cannot create container for service db: Conflict. The container name "/db-prod" is already in use by container "5d01c21efafa757008d1b4dbcc8d09b4341ac1457a0ca526ee57873cd028cf2b". You have to remove (or rename) that container to be able to reuse that name.
ERROR: for phpmyadmin Cannot create container for service phpmyadmin: Conflict. The container name "/phpmyadmin-prod" is already in use by container "5a52b27b64f7302bccb1c3a0eaeca8a33b3dfee5f1a279f6a809695f482500a9". You have to remove (or rename) that container to be able to reuse that name.
ERROR: for db Cannot create container for service db: Conflict. The container name "/db-prod" is already in use by container "5d01c21efafa757008d1b4dbcc8d09b4341ac1457a0ca526ee57873cd028cf2b". You have to remove (or rename) that container to be able to reuse that name.
ERROR: Encountered errors while bringing up the project.
Error: No such container: app-prod
Error: No such container: app-prod
While using another docker-compose file for the test environment, I got this output, which is what I expected:
db-stage is up-to-date
phpmyadmin-stage is up-to-date
Creating app-stage ... done
The docker-compose run command will create new containers. But in your case, two of them are already running, so you can use
docker-compose up -d
That specific error is easy to fix. You're trying to manually specify container_name: for every container, but the error message says you have existing containers with those same names already. Left to its own, Compose will assign non-conflicting names, and you can almost always just delete container_name: from the Compose file.
version: '3.8'
services:
  app:
    image: prod:1.0
    ports:
      - "81:80"
    depends_on: [db, phpmyadmin]
    # no container_name: or links:
  db: { ... }
  phpmyadmin: { ... }
The other obvious point of conflict for running the same Compose file in multiple places is the ports:; only one container or host process can listen on a given (first) port number. If you're trying to run the same Compose file multiple times on the same system you might need some way to replace the port numbers. This could be a place where using multiple Compose files fits in well: define a base docker-compose.yml that defines the services but not any of the ports, and an override file that assigns specific host ports.
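A sketch of that split (the file names and port numbers here are assumptions): define the service once in the base file and assign host ports only in a per-deployment override file, then pass both files to Compose.
# docker-compose.yml (base: services, no host ports)
services:
  app:
    image: prod:1.0

# docker-compose.prod.yml (override: host ports for this deployment)
services:
  app:
    ports:
      - "81:80"
Running docker-compose -f docker-compose.yml -f docker-compose.prod.yml up -d merges the two files, so each environment can use its own override with different ports.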
As I have several docker-compose files, I ran docker-compose -f <compose_file_path> -p <project_name> up -d, and then I tried to run docker-compose up -d in the same location without specifying the project name with -p <project_name>. That gives me the container conflict, because it brings the compose file up under a different project name but with the same container names.
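In other words, reusing the same -p on every invocation keeps Compose pointed at the same project and avoids the name clash (placeholders as above):
docker-compose -f <compose_file_path> -p <project_name> up -d
# later runs must repeat the same -p so Compose recognises the existing containers
docker-compose -f <compose_file_path> -p <project_name> up -d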

Can you use the current project_name in a docker compose file?

I see lots of questions around setting/changing the COMPOSE_PROJECT_NAME or PROJECT_NAME using ENV variables.
I'm fine with the default project name, but I would like to reference it in my compose file.
version: "3.7"
services:
app:
build: DockerFile
container_name: app
volumes:
- ./:/var/app
networks:
- the-net
npm:
image: ${project_name}_app
volumes:
- ./:/var/app
depends_on:
- app
entrypoint: [ 'npm' ]
networks:
- the-net
npm here is arbitrary; hopefully the fact that it could be run as its own container, or in other ways, does not distract from the question.
Is it possible to reference the project name without setting it manually first?
Unfortunately it is not possible.
As alluded to, you can create a .env file and populate it with COMPOSE_PROJECT_NAME=my_name, but the config option does not present itself in your environment by default.
Unfortunately the env substitution in docker-compose is fairly limited, meaning we cannot use the available PWD env variable and greedy match it at all
$ cd ~
$ pwd
/home/tqid
$ echo "Base Dir: ${PWD##*/}"
Base Dir: tqid
When we use this reference, compose has issues:
$ docker-compose up -d
ERROR: Invalid interpolation format for "image" option in service "demo": "${PWD##*/}"
It's probably better to be explicit anyway. COMPOSE_PROJECT_NAME is based on your directory name, and if someone clones the project into a new folder it gets out of whack. Including the .env file in source control would provide a reusable and consistent place to reference the name:
https://docs.docker.com/compose/reference/envvars/#compose_project_name
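A minimal sketch of that approach (the name my_name and the image naming pattern are assumptions): put COMPOSE_PROJECT_NAME in a committed .env file, then reference it through normal variable substitution.
# .env (kept in source control)
COMPOSE_PROJECT_NAME=my_name

# docker-compose.yml
services:
  npm:
    image: ${COMPOSE_PROJECT_NAME}_app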
Using the same image as another container was what I was after ... reuse the image and change the entry point.
Specify the same build: options for both containers.
This seems inefficient, in that it will trigger the build sequence twice and docker images will list both of them. However, the way Docker's layer caching works, if identical RUN commands are run on identical input images, the resulting layer will simply be reused, and the two final images will have the same image ID; they will literally be the same image with two names.
The context I've run into this the most is with a Python application where the same code base is used for a Django or Flask Web server, plus a Celery worker. The Docker-level setup is fairly language-independent, though: specify the same build: for both containers, and override the command: for the container(s) that need to do a non-default task.
version: '3.8'
services:
  app:
    build: .
    ports: ['3000:3000']
    environment:
      REDIS_HOST: redis
  worker:
    build: .                 # <-- same as app
    command: npm run worker  # <-- overrides Dockerfile CMD
    environment:
      REDIS_HOST: redis
  redis:
    image: redis
It is also valid to specify build: and image: together in the docker-compose.yml file; this sets the name of the image that will be built. It's frequently useful to specify this explicitly, because you will need to point at a specific Docker Hub or other registry location to push the built image. If you do this, then you'll know the image name and don't need to depend on the project name.
version: '3.8'
services:
  app:
    build: .
    image: registry.example.com/my/app:${TAG:-latest}
  worker:
    image: registry.example.com/my/app:${TAG:-latest}
    command: npm run worker
You will need to manually docker-compose build in this setup. Compose's workflow doesn't have a way to specify that one container's build must run before a different container can start.
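In practice that just means running the build step yourself before bringing the stack up, for example:
docker-compose build     # builds and tags registry.example.com/my/app for the app service
docker-compose up -d     # starts app and worker from the freshly built image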

How to reach services with the URL in Docker-Compose

I'm trying to set up an application environment with two different docker-compose.yml files. The first one creates services in the default network elastic-apm-stack_default. To reach the services of both docker-compose files I used the external option within the second docker-compose file. Both files look like this:
# elastic-apm-stack/docker-compose.yml
services:
  apm-server:
    image: docker.elastic.co/apm/apm-server:6.2.4
    build: ./apm_server
    ports:
      - 8200:8200
    depends_on:
      - elasticsearch
      - kibana
  ...

# sockshop/docker-compose.yml
services:
  front-end:
    ...
    networks:
      - elastic-apm-stack_default
  ...
networks:
  elastic-apm-stack_default:
    external: true
Now the front-end service in the second file needs to send data to the apm-server service in the first file. Therefore, I used the URL http://apm-server:8200 in the source code of the front-end service, but I always get a connection refused error. If I define all services in a single docker-compose file it works, but I want to keep the docker-compose files separate.
Could anyone help me? :)
Docker containers run on the default bridge network, whose gateway address on the host is 172.17.0.1.
So you may use the URL
http://172.17.0.1:8200
to get access to your apm-server container (this reaches the host, where port 8200 is published).
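For completeness, one way to check that both stacks really share the external network (the container names will depend on your project names, so treat them as assumptions):
docker network ls                                  # elastic-apm-stack_default should be listed
docker network inspect elastic-apm-stack_default   # both the apm-server and front-end containers should appear under "Containers"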
