I'm using Docker on macOS with docker-osx-dev (https://github.com/brikis98/docker-osx-dev)
and everything works; it solves the problem with slow volumes for me. But every time I bring up my docker-compose setup I run into a permission problem, and I'm forced to fix permissions through docker exec and chmod ... I spent a lot of time looking for a solution. I tried usermod with uid 501 and 1000, but nothing helped. Do you have any idea how to fix this?
My project settings: https://bitbucket.org/SmileSergey/symfonydocker/src
Thanks!
You can use http://docker-sync.io with the option shown in https://github.com/EugenMayer/docker-sync/blob/master/example/docker-sync.yml#L47 to map the OSX user to the uid you have in the container. This removes the issue entirely, although it replaces docker-osx-dev completely.
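For illustration, a minimal sketch of the relevant docker-sync.yml fragment. The option names (in particular sync_userid) are my recollection of the linked example, so verify them against that file; the sync name, source path, and uid are placeholders:
version: "2"
syncs:
  app-sync:
    src: './app'
    sync_userid: '1000'   # assumption: map synced files to the uid used inside the container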
As a quick workaround, you could add a command to the nginx part of your docker-compose.yml as follows:
nginx:
  image: nginx:latest
  ...
  volumes:
    - "./config/nginx.conf:/etc/nginx/conf.d/default.conf"
    - "./app:/var/www/symfony"
  command: /bin/bash -c "chmod -R 777 /var/www/symfony/ && nginx -g 'daemon off;'"
Background:
the official nginx Dockerfile specifies the following default command:
CMD ["nginx", "-g", "daemon off;"]
which executes
nginx -g 'daemon off;'
You have already found the quick & dirty workaround of running chmod -R 777 /var/www/symfony/ within the container.
With the command in the docker-compose file above, we execute the chmod command before executing the nginx -g 'daemon off;' default command.
Note that a CMD specified in the Dockerfile is replaced by the command defined in the docker-compose file (or by any command specified with docker run ... myimage mycommand).
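For comparison, the same override expressed with plain docker run (the volume path follows the compose snippet above; the argument after the image name replaces the image's CMD):
docker run --rm \
  -v "$PWD/app:/var/www/symfony" \
  nginx:latest \
  /bin/bash -c "chmod -R 777 /var/www/symfony/ && nginx -g 'daemon off;'"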
For dev on OSX, just use Vagrant + NFS shared folders. It solves all of these problems and doesn't require adding anything special to docker-compose for dev.
Related
I'm using this in my Dockerfile, without CMD or ENTRYPOINT; I'm relying on the underlying official nginx image's docker-entrypoint.sh:
FROM nginx:1.18
I've tried adding the following command in my docker-compose, but the container just keeps restarting with no error message.
command: >
sh -c 'echo "My Own Command" && /usr/sbin/nginx -g "daemon off;"'
It will work fine if I remove the command from my docker-compose.yml.
My final objective is to add some scripts so I can export secrets into environment variables, but I couldn't get the underlying docker-entrypoint.sh or the nginx -g "daemon off;" command to run and keep the container going.
I have a docker-compose file with a service called 'app'. When I try to run it, I don't see the service with docker ps, but I do with docker ps -a.
I looked at the logs:
docker logs my_app_1
python: can't open file '//apps/index.py': [Errno 2] No such file or directory
In order to debug I wanted to be able to see the home directory and the files and dirs contained there when the app attempts to run.
Is there a command I can add to docker-compose that would show me the pwd and ls -l of the container when it attempts to run index.py?
My Dockerfile:
FROM python:3
COPY . .
RUN pip install -r requirements.txt
CMD ["python", "apps/index.py"]
My docker-compose.yaml:
version: '3.1'
services:
  app:
    build:
      context: ./app
      dockerfile: ./Dockerfile
    depends_on:
      - db
    ports:
      - 8050:8050
My directory structure:
my_app:
* docker-compose.yaml
* app
    * Dockerfile
    * apps
        * index.py
You can add a RUN statement in the application Dockerfile to run these commands.
Example:
FROM python:3
COPY . .
RUN pip install -r requirements.txt
# Run your commands
RUN pwd && ls -l
CMD ["python", "apps/index.py"]
Then you can check the logs of the build process and view the results.
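For example, rebuilding the image prints each RUN step's output (the tag name here is illustrative; with BuildKit you may need plain progress output and a cache-less build to see the RUN output again):
docker build --progress=plain --no-cache -t my_app ./app
# or rebuild through compose and watch the step output:
docker-compose build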
I hope this answer helps you.
If you're just trying to debug an image you've already built, you can docker-compose run an alternate command:
docker-compose run app \
ls -l ./apps
You don't need to modify anything in your Dockerfile to be able to do this (assuming it uses CMD correctly; see below).
If you need to do more intensive debugging, you can docker-compose run app sh (or, if your image has it, bash) to get an interactive shell. The container will include any mounted volumes and be on the same Docker network as the named container, but won't have published ports.
Note that the command here replaces the CMD in the Dockerfile. If your image uses ENTRYPOINT for its main command, or if it has a complete command split between ENTRYPOINT and CMD (especially if you have ENTRYPOINT ["python"]), these need to be combined into a single CMD for this to work. If your ENTRYPOINT is a wrapper script that does some first-time setup and then runs the CMD, this approach will work fine; the debugging ls or sh will run after the first-time setup happens.
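For example, if a Dockerfile split the command like this (a hypothetical layout, not the one from the question), it would need to be collapsed into the single-CMD form that the question's Dockerfile already uses:
# Hypothetical split that would break `docker-compose run app ls -l ./apps`:
#   ENTRYPOINT ["python"]
#   CMD ["apps/index.py"]
# Combined form, which lets the run command replace it cleanly:
CMD ["python", "apps/index.py"]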
I have to execute two commands in the Dockerfile, but both of them attach to the terminal and block execution of the next one.
Dockerfile:
FROM sinet/nginx-node:latest
RUN mkdir /usr/src/app
WORKDIR /usr/src/app
RUN git clone https://name:pass@bitbucket.org/joaocromg/front-web-alferes.git
WORKDIR /usr/src/app/front-web-alferes
RUN npm install
RUN npm install bower -g
RUN npm install gulp -g
RUN bower install --allow-root
COPY default.conf /etc/nginx/conf.d/
RUN nginx -g 'daemon off;' & # command 1 blocking
CMD ["gulp watch-dev"] # command 2 not executed
Does someone know how I can solve this?
Try creating a script like this:
#!/bin/sh
nginx -g 'daemon off;' &
gulp watch-dev
And then execute it in your CMD:
CMD /bin/my-script.sh
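A sketch of how the script could be added to the image in the Dockerfile above (the source filename and destination path are just the ones used in this answer):
COPY my-script.sh /bin/my-script.sh
RUN chmod +x /bin/my-script.sh
CMD /bin/my-script.sh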
Also, notice your last line would not have worked:
CMD ["gulp watch-dev"]
It needed to be either:
CMD gulp watch-dev
or:
CMD ["gulp", "watch-dev"]
Also, notice that RUN is for executing a command that will change your image state (like RUN apt install curl), not for executing a program that needs to be running when you run your container. From the docs:
The RUN instruction will execute any commands in a new layer on top of the current image and commit the results. The resulting committed image will be used for the next step in the Dockerfile.
I suggest you try supervisord in this case. http://supervisord.org/
Edit: Here is a dockerized example of httpd and an SSH daemon: https://riptutorial.com/docker/example/14132/dockerfile-plus-supervisord-conf
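A minimal sketch of what a supervisord configuration for these two processes might look like (program names and the working directory are illustrative, taken from the question's Dockerfile rather than the linked example):
[supervisord]
nodaemon=true

[program:nginx]
command=nginx -g "daemon off;"

[program:gulp]
command=gulp watch-dev
directory=/usr/src/app/front-web-alferes
The Dockerfile would then install supervisord, COPY this file in, and run supervisord as its CMD.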
The answer here is that RUN nginx -g 'daemon off;' intentionally starts nginx in the foreground, which blocks your second command. That form is intended to be the foreground process that keeps the container running. Running RUN nginx instead would start nginx, spawn the master and worker processes, and (hopefully) exit with a zero status code. Although, as mentioned above, this is not the intended use of RUN, so a shell script would work best in this case.
I'm trying to create a Hygieia API docker image based on https://github.com/Hygieia/Hygieia
So I already executed "mvn clean install" in hygieia-core and hygieia. I'm now trying to execute "docker build . -t hygieia-api" but I'm getting this error:
COPY failed: stat /var/lib/docker/tmp/docker-builderXXXXX/default.conf: no such file or directory
Can someone shed some light on why this is happening? I'm still trying to wrap my head around the bits and pieces of Docker, and I would appreciate any tips on this. Thank you!
Dockerfile can be found here
https://github.com/Hygieia/Hygieia/blob/master/Dockerfile
I tried some suggested troubleshooting options like restarting the Docker service or running "docker pull nginx", but I am still getting this error.
FROM docker.io/nginx:latest
COPY default.conf /etc/nginx/conf.d/default.conf.templ
COPY conf-builder.sh /usr/bin/conf-builder.sh
COPY html /usr/share/nginx/html
RUN chown nginx:nginx /usr/share/nginx/html
EXPOSE 80 443
CMD conf-builder.sh &&\
nginx -g "daemon off;"
First, run the build like this:
docker build -t someimage .
Second, your default.conf must be in the same directory as the Dockerfile.
Third, remove any trailing whitespace from the last line of your Dockerfile.
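In other words, the COPY sources must sit next to the Dockerfile inside the build context; a sketch of the layout implied by the Dockerfile above, and the corresponding build command:
# expected contents of the build directory (from the COPY lines above):
#   Dockerfile
#   default.conf
#   conf-builder.sh
#   html/
docker build -t hygieia-api .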
I'm building an SPA that I would like to host in a Docker container. The app requires some configuration (e.g. the URL of the backend). I decided to create a short bash script that reads environment variables and assembles the configuration file, but if I try to run it via CMD or ENTRYPOINT it dies immediately. I suppose I'm overriding the entrypoint from the original Dockerfile? Do I really need to start nginx manually once I'm done preparing this file?
In this case you have it easy: the official nginx image doesn't declare an ENTRYPOINT, so you can add your own without conflicting with anything in the base image. The important details here:
When the entrypoint exits, the container is finished
The entrypoint is passed the CMD or docker run command as arguments
If the entrypoint is a shell script, it therefore usually wants to end with exec "$@"
A typical entrypoint script for this sort of thing might look like:
#!/bin/sh
sed -i.bak -e "s/EXTERNAL_URL/$EXTERNAL_URL/g" /etc/nginx/nginx.conf
exec "$@"
(For this specific task I've found envsubst to be very useful, but I don't think it's present in the Alpine-based images; it's not a "standard" tool but it will be there in a full GNU-based runtime environment, like the Debian-based images have. It goes through files and replaces $VARIABLE references with the contents of the matching environment variables.)
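For reference, a hedged sketch of the same entrypoint using envsubst instead of sed (the template path is an assumption; envsubst's argument restricts substitution to the listed variable):
#!/bin/sh
# render the config from a template, replacing only $EXTERNAL_URL
envsubst '$EXTERNAL_URL' < /etc/nginx/nginx.conf.templ > /etc/nginx/nginx.conf
exec "$@"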
Yes, you are overriding the CMD.
Recommended way:
Try to use environment variables in your app whenever possible. This way you don't need to change the entrypoint/CMD of the official nginx container.
If it's not possible:
The nginx Dockerfile uses "nginx", "-g", "daemon off;" as its CMD; you can override it with:
docker run -d --name yourapp-nginx <put other required docker switches here> <your image name:tag> /bin/sh -c '/path/to/yourscript.sh && /usr/sbin/nginx -g "daemon off;"'
Or you can put it as CMD/Entrypoint in your Dockerfile, if you are building your own image.
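A sketch of that Dockerfile variant (the script name and path are illustrative):
FROM nginx:latest
COPY yourscript.sh /usr/local/bin/yourscript.sh
RUN chmod +x /usr/local/bin/yourscript.sh
CMD /usr/local/bin/yourscript.sh && nginx -g "daemon off;"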
I found that when I create my own image, some of the instructions from the parent image are not carried over. Despite the fact that the nginx image exposes port 80, I had to add an EXPOSE instruction to mine:
FROM nginx
ADD dist/* /usr/share/nginx/html/
EXPOSE 80/tcp
COPY docker-entrypoint.sh /usr/local/bin/
ENTRYPOINT ["docker-entrypoint.sh"]
Additionally I had to start nginx manually in my docker-entrypoint.sh:
#!/bin/bash
echo "{ backendUrl: '$BACKEND_URL'}" >> /usr/share/nginx/html/config
exec "$@"
nginx -g "daemon off;"