How to run a Symfony console command inside a Docker container

I am trying to run a Symfony 3 console command inside my Docker container but am not able to get proper output.
docker-compose.yaml
version: '3.4'

services:
  app:
    build:
      context: .
      target: symfony_docker_php
      args:
        SYMFONY_VERSION: ${SYMFONY_VERSION:-}
        STABILITY: ${STABILITY:-stable}
    volumes:
      # Comment out the next line in production
      - ./:/srv/app:rw,cached
      # If you develop on Linux, comment out the following volumes to just use bind-mounted project directory from host
      - /srv/app/var/
      - /srv/app/var/cache/
      - /srv/app/var/logs/
      - /srv/app/var/sessions/
    environment:
      - SYMFONY_VERSION
  nginx:
    build:
      context: .
      target: symfony_docker_nginx
    depends_on:
      - app
    volumes:
      # Comment out the next line in production
      - ./docker/nginx/conf.d:/etc/nginx/conf.d:ro
      - ./public:/srv/app/public:ro
    ports:
      - '80:80'
My console command:
docker-compose exec nginx php bin/console
It returns the following response:
the input device is not a TTY. If you are using mintty, try prefixing the command with 'winpty'

Quoting from https://docs.docker.com/compose/reference/exec/:
To disable this behavior, you can either pass the -T flag to disable pseudo-tty allocation:
docker-compose exec -T nginx <command>
Or set the COMPOSE_INTERACTIVE_NO_CLI environment variable to 1:
export COMPOSE_INTERACTIVE_NO_CLI=1
For php bin/console to work, you need to run it in the app container (where PHP lives), not in nginx:
docker-compose exec -T app php bin/console
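Any console command can be appended the same way; for example, clearing the Symfony cache non-interactively:
docker-compose exec -T app php bin/console cache:clear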

Related

Docker-compose entrypoint could not locate Gemfile but works fine with docker

When I use docker-compose up, the container exits with code 10 and says Could not locate Gemfile or .bundle/ directory, but if I do docker run web entrypoint.sh, the Rails app seems to start without an issue. What could be the cause of this inconsistent behavior?
entrypoint.sh
#!/bin/bash
set -e

# Remove a stale PID file left behind by a previous server run
if [ -f tmp/pids/server.pid ]; then
  rm tmp/pids/server.pid
fi

bundle exec rails s -b 0.0.0.0 -p 8080
Relevant part of the docker-compose file:
docker-compose.yml
...
  web:
    build:
      context: "./api/"
      args:
        RUBY_VERSION: '2.7.2'
        BUNDLER_VERSION: '2.2.29'
    entrypoint: entrypoint.sh
    volumes:
      - .:/app
    tty: true
    stdin_open: true
    ports:
      - "8080:8080"
    environment:
      - RAILS_ENV=development
    depends_on:
      - mongodb
...
When you docker run web ..., you're running exactly what's in the image, no more and no less. On the other hand, the volumes: directive in the docker-compose.yml file replaces the container's /app directory with arbitrary content from the host. If your Dockerfile has a RUN bundle install step that puts content into /app/vendor in the image, the volumes: mount hides it.
You can frequently resolve problems like this by deleting volumes: from the Compose setup. Since you're running the code that's built into your image, this also means you're running the exact same image and environment you'll eventually run in production, which is a big benefit of using Docker here.
(You should also be able to delete the tty: and stdin_open: options, which aren't usually necessary, and the entrypoint: and those specific build: { args: }, which replicate settings that should be in the Dockerfile.)
(The Compose file suggests you're building a Docker image out of the api subdirectory, but then bind-mounting the current directory . -- api's parent directory -- over the image contents. That's probably the immediate cause of the inconsistency you see.)
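For reference, a minimal sketch of the web service with the volumes:, tty:, stdin_open:, entrypoint:, and build args removed, so the container runs exactly what the image contains:
  web:
    build:
      context: "./api/"
    ports:
      - "8080:8080"
    environment:
      - RAILS_ENV=development
    depends_on:
      - mongodb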

docker-compose COPY before running entrypoint

Using Docker Desktop with WSL 2, the ultimate aim is to run a shell command that generates local SSL certs before starting an nginx service.
To bring the stack up we have:
version: '3.6'

services:
  # Frontend
  rp:
    environment:
      - COMPOSE_CONVERT_WINDOWS_PATHS=1
    container_name: revproxy
    image: nginx:latest
    user: root
    restart: always
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - .\conf:/home/conf
      - .\scripts:/home/scripts
So far so good. Now we would like to add a pre-startup script, /home/scripts/certs.sh, that creates the SSL certs before launching the nginx server:
mkdir -p /home/ssl/certs
mkdir -p /home/ssl/private
openssl req -x509 -nodes -days 365 -subj "/C=CA/ST=QC/O=Company, Inc./CN=zero.url" -addext "subjectAltName=DNS:mydomain.com" -newkey rsa:2048 -keyout /home/ssl/private/nginx-zero.key -out /home/ssl/certs/nginx-zero.crt;
Now, adding the following to docker-compose.yml causes the container to bounce between running and restarting: the script recreates the certs, then the container exits, with no error message. I assume the exit code means the container is exiting correctly, and restart: always then triggers the restart.
command: /bin/sh -c "/home/scripts/certs.sh"
Following other answers, adding exec "$@" makes no difference.
As an alternative, I tried to copy the script into /docker-entrypoint.d, the folder of scripts the nginx image runs before launch. This produces an error on docker-compose up:
version: '3.6'

services:
  # Frontend
  rp:
    environment:
      - COMPOSE_CONVERT_WINDOWS_PATHS=1
    container_name: revproxy
    image: nginx:latest
    user: root
    restart: always
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - .\conf:/home/conf
      - .\scripts:/home/scripts
    COPY /home/scripts/certs.sh /docker-entrypoint.d/certs.sh
This generates an error:
ERROR: yaml.scanner.ScannerError: while scanning a simple key
in ".\docker-compose.yml", line 18, column 7
could not find expected ':'
in ".\docker-compose.yml", line 18, column 64
The terminal process "C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -Command docker-compose -f "docker-compose.yml" up -d --build" terminated with exit code: 1.
So what are the options for running a script before starting the primary docker-entrypoint.sh script?
UPDATE:
As per the suggestion in a comment, changing the format of the environment variable did not help:
version: '3.6'

services:
  # Frontend
  rp:
    environment:
      - COMPOSE_CONVERT_WINDOWS_PATHS: 1
    container_name: revproxy
    image: nginx:latest
    user: root
    restart: always
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - .\conf:/home/conf
      - .\dc_scripts:/home/scripts
    COPY /home/scripts/certs.sh /docker-entrypoint.d/certs.sh
ERROR: yaml.scanner.ScannerError: while scanning a simple key
in ".\docker-compose.yml", line 17, column 7
could not find expected ':'
in ".\docker-compose.yml", line 18, column 7
The terminal process "C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -Command docker-compose -f "docker-compose.yml" up -d --build" terminated with exit code: 1.
Dockerfiles are used to build images and contain a list of instructions like RUN, COPY and ENTRYPOINT. They have a very shell-script-like syntax, with one instruction per line (for the most part).
A docker-compose file, on the other hand, is a YAML-formatted file that is used to deploy built images to Docker as running services. You cannot put Dockerfile instructions like COPY in this file.
You can, for local deployments on non-Windows systems, map individual files in the volumes section:
volumes:
  - .\conf:/home/conf
  - .\scripts:/home/scripts
  - ./scripts/certs.sh:/usr/local/bin/certs.sh
But this syntax only works on Linux and macOS hosts, I believe.
An alternative is to restructure your project with a Dockerfile and a docker-compose.yml file.
With a Dockerfile
FROM nginx:latest

COPY --chmod=0755 scripts/certs.sh /usr/local/bin
ENTRYPOINT ["certs.sh"]
# Declaring a new ENTRYPOINT resets the CMD inherited from nginx:latest,
# so restate it to keep the original command line intact
CMD ["nginx", "-g", "daemon off;"]
In the docker-compose.yml, add a build: node with the path to the Dockerfile; "." will do. docker-compose build will be needed to force a rebuild if the Dockerfile changes after the first time.
version: '3.9'

services:
  revproxy:
    environment:
      COMPOSE_CONVERT_WINDOWS_PATHS: 1
    image: nginx:custom
    build: .
    user: root
    restart: always
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - .\conf:/home/conf
      - .\scripts:/home/scripts
Now that you've changed the entrypoint of the nginx container to your custom script, you need to chain to the original one and call it with the original command.
So, certs.sh needs to look like:
#!/bin/sh
# your cert setup here

# Hand control to the original entrypoint with the command line that was
# passed in. Note: "$@" holds only the arguments (the image's CMD), not the
# script name itself, so no shift is needed before chaining.
exec /docker-entrypoint.sh "$@"
docker inspect nginx:latest was used to discover the original entrypoint.
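For example, a --format template is one way to print just the relevant fields:
docker inspect --format '{{json .Config.Entrypoint}} {{json .Config.Cmd}}' nginx:latest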
Added after edit:
Also, COMPOSE_CONVERT_WINDOWS_PATHS doesn't look like an environment variable that nginx is going to care about; it is read by docker-compose itself. It should probably be set in your Windows user environment so it is available before running docker-compose.
C:\> set COMPOSE_CONVERT_WINDOWS_PATHS=1
C:\> docker-compose build
...
C:\> docker-compose up
...
Also, the nginx image documentation on Docker Hub indicates that /etc/nginx is the proper configuration folder for nginx, so I don't think that mapping things to /home/... is going to do anything; nginx should display its default page, however.
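If the intent is for nginx to pick that configuration up, a mapping along these lines (assuming ./conf holds your *.conf files) would be closer to what the image expects:
    volumes:
      - ./conf:/etc/nginx/conf.d:ro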

Issue with docker not acknowledging docker-compose.override.yml

I'm fairly new to Docker. I was trying to containerize a project for development and production versions. I came up with a very basic docker-compose configuration and then tried the override feature, which doesn't seem to work.
I added volume overrides to the web and celery services, but they do not actually mount in the containers; I can confirm this by looking at the inspect output of both containers.
Contents of the compose files:
docker-compose.yml
version: '3'

services:
  web:
    build: .
    command: python manage.py runserver 0.0.0.0:8000
    depends_on:
      - redis
  redis:
    image: redis:5.0.9-alpine
  celery:
    build: .
    command: celery worker -A facedetect.celeryapp -l INFO --concurrency=1 --without-gossip --without-heartbeat
    depends_on:
      - redis
    environment:
      - C_FORCE_ROOT=true
docker-compose.override.yml
version: '3'

services:
  web:
    volumes:
      - .:/code
    ports:
      - "8000:8000"
  celery:
    volumes:
      - .:/code
I use Docker with Pycharm on Windows 10.
Command executed to deploy the compose configuration:
"C:\Program Files\Docker Toolbox\docker-compose.exe" -f <full-path>/docker-compose.yml up -d
Command executed to inspect one of the containers:
docker container inspect <container_id>
Any help would be appreciated! :)
Just figured out that I had provided the docker-compose.yml file explicitly to the Run Configuration created in Pycharm as it was mandatory to provide at least one of these.
The command used by Pycharm explicitly mentions the .yml files using the -f option when running the configuration. Adding the docker-compose.override.yml file to the Run Configuration changed the command to
"C:\Program Files\Docker Toolbox\docker-compose.exe" -f <full_path>\docker-compose.yml -f <full_path>\docker-compose.override.yml up -d
This solved the issue. Thanks to Exadra37 for directing me to look at the command that was being executed.
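As a general sanity check, docker-compose config prints the fully merged configuration, which makes it easy to verify whether an override file is being picked up:
docker-compose -f docker-compose.yml -f docker-compose.override.yml config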

docker-compose run commands after up

I have the following docker-compose file
version: '3.2'

services:
  nd-db:
    image: postgres:9.6
    ports:
      - 5432:5432
    volumes:
      - type: volume
        source: nd-data
        target: /var/lib/postgresql/data
      - type: volume
        source: nd-sql
        target: /sql
    environment:
      - POSTGRES_USER="admin"
  nd-app:
    image: node-docker
    ports:
      - 3000:3000
    volumes:
      - type: volume
        source: ndapp-src
        target: /src/app
      - type: volume
        source: ndapp-public
        target: /src/public
    links:
      - nd-db

volumes:
  nd-data:
  nd-sql:
  ndapp-src:
  ndapp-public:
nd-app contains a migrations.sql and seeds.sql file. I want to run them once the container is up.
If I ran the commands manually they would look like this
docker exec nd-db psql admin admin -f /sql/migrations.sql
docker exec nd-db psql admin admin -f /sql/seeds.sql
When you run up with this docker-compose file, it will run the container entrypoint command for both the nd-db and nd-app containers as part of starting them up. In the case of nd-db, this does some prep work then starts the postgres database.
The entrypoint command is defined in the Dockerfile, and combines the configured ENTRYPOINT and CMD. What you might do is override the ENTRYPOINT in a custom Dockerfile, or override it in your docker-compose.yml.
Looking at the postgres:9.6 Dockerfile, it has the following two lines:
ENTRYPOINT ["docker-entrypoint.sh"]
CMD ["postgres"]
You could add the following to your nd-db configuration in docker-compose.yml to retain the existing entrypoint but also "daisy-chain" a custom migration-script.sh step.
entrypoint: ["docker-entrypoint.sh", "migration-script.sh"]
The custom script needs only one special behavior: it must do a pass-through execution of any following arguments, so the container continues on to start postgres:
#!/usr/bin/env bash
set -exo pipefail

# Run the one-time SQL setup
psql admin admin -f /sql/migrations.sql
psql admin admin -f /sql/seeds.sql

# Pass control to the remaining arguments ("postgres") so startup continues
exec "$@"
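One detail not shown above: migration-script.sh has to exist inside the nd-db container and be executable on the PATH. For local development, a bind mount along these lines (the host path is an assumption) is one way to get it there:
  nd-db:
    image: postgres:9.6
    entrypoint: ["docker-entrypoint.sh", "migration-script.sh"]
    volumes:
      - ./migration-script.sh:/usr/local/bin/migration-script.sh:ro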
Does docker-compose -f path/to/config.yml exec nd-db psql admin admin -f /sql/migrations.sql work?
I've found that you have to specify the config file and the service name when running commands from the laptop.

Set seccomp to unconfined in docker-compose

I need to be able to fork a process. As I understand it, I need to set security-opt. I have tried doing this with the docker command and it works fine. However, when I do this in a docker-compose file it seems to do nothing; maybe I'm not using compose right.
Docker
docker run --security-opt=seccomp:unconfined <id> dlv debug --listen=:2345 --headless --log ./cmd/main.go
Docker-compose
Setup
docker-compose.yml
networks:
backend:
services:
example:
build: .
security_opt:
- seccomp:unconfined
networks:
- backend
ports:
- "5002:5002"
Dockerfile
FROM golang:1.8
RUN go get -u github.com/derekparker/delve/cmd/dlv
RUN dlv debug --listen=:2345 --headless --log ./cmd/main.go
command
docker-compose -f docker-compose.yml up --build --abort-on-container-exit
Result
2017/09/04 15:58:33 server.go:73: Using API v1
2017/09/04 15:58:33 debugger.go:97: launching process with args: [/go/src/debug]
could not launch process: fork/exec /go/src/debug: operation not permitted
The compose syntax is correct, but security_opt is applied to the new instance of the container at run time; it is not available at build time, which is when the Dockerfile RUN command you are using executes.
The correct way should be:
Dockerfile:
FROM golang:1.8
RUN go get -u github.com/derekparker/delve/cmd/dlv
docker-compose.yml
networks:
  backend:

services:
  example:
    build: .
    security_opt:
      - seccomp:unconfined
    networks:
      - backend
    ports:
      - "5002:5002"
    entrypoint: ['/usr/local/bin/dlv', '--listen=:2345', '--headless=true', '--api-version=2', 'exec', 'cmd/main.go']
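Note that dlv listens on 2345 inside the container; to attach a debugger client from the host you would also need to publish that port, for example:
    ports:
      - "5002:5002"
      - "2345:2345"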
