I know this is a question many people have asked, but I don't know what I'm doing wrong. I have this Dockerfile:
FROM nginx
ENV USER=${USER}
COPY proof.sh .
RUN chmod 777 proof.sh
CMD echo ${USER} >> /usr/share/nginx/html/index.html
CMD ["nginx", "-g", "daemon off;"]
EXPOSE 80
When I run the env command on the host I get USER=fran, and after that I run these commands:
sudo docker run --entrypoint "/bin/sh" 5496674f99e5 ./prueba.sh
and I also run
docker run --env USER -d -p 89:80 prueba
If I have understood correctly, with that last command the USER environment variable from the host should be passed into the container, but I don't get anything. Why? Could you help me? Thanks in advance.
Note that when a Dockerfile contains several CMD instructions, only the last one takes effect, so your echo CMD is ignored anyway. Your Dockerfile should be something like this:
FROM nginx
ARG USER=default
ENV USER=${USER}
RUN echo ${USER}
CMD ["nginx", "-g", "daemon off;"]
EXPOSE 80
Now if you build with a build argument, like
docker build --no-cache --build-arg USER=fran_new -t my_image .
you will see fran_new in the build output.
Alternatively, if you want the user from the host OS at run time, pass it as an environment variable when you run the container:
docker run --name my_container -e USER=$USER my_image
Then the USER variable inside the container will be the same as on the host.
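To verify, you can print the variable inside the running container, for example (assuming the container is still running and named my_container, as above):
docker exec my_container printenv USER
This should print the same value as on the host.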
I have a Dockerfile which, when built and run, stops immediately. I am trying to run both client and server in one Docker container. If there is a solution that uses docker-compose, that is already in place and working fine. Please advise how to keep the container up and running using docker run. Thanks!
Here are my Dockerfile contents:
FROM node:14.14.0-alpine
RUN apk update && apk add bash
SHELL ["/bin/bash", "-o", "pipefail", "-c"]
WORKDIR /app
EXPOSE 3000
EXPOSE 4565
CMD ["npm","run","prebuild"]
docker build command:
docker build -t sample .
docker run command:
docker run -d -it --name sm -v `pwd`:/app sample
I have written this Dockerfile.
FROM python:3.6
WORKDIR /usr/src/app
EXPOSE 8080
CMD [ "python3", "-m http.server" ] //even tried CMD [ "python3", "-m", "http.server" ]
I built the image with this:
docker build -t --name server .
and I ran a container from the image like this:
docker run -d -p 8080:8080 --name web server
But when I hit <host-url>:8080 it doesn't work. Can somebody please help me?
You are trying to run Python's built-in HTTP server (http.server), which serves on port 8000 by default.
Either your Dockerfile should expose 8000 instead of 8080 (and your run command should map to it, e.g. -p 8080:8000):
EXPOSE 8000
Or, change the command to run it on port 8080:
CMD ["python3", "-m", "http.server", "8080"]
This is my Dockerfile
FROM nginx
EXPOSE 8080
CMD ["nginx", "-g", "daemon off;"]
I use docker build . and then docker run -it 603030818c86 to start my nginx container. But when I go to http://localhost:8080 it doesn't give me the nginx homepage. What am I doing wrong?
EXPOSE only documents a port inside the Docker network; it does not publish anything to the host. You need to publish the port with -p, and note that the base nginx image listens on port 80 by default, so map your host port to that:
docker run -it -p 8080:80 603030818c86
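You can then confirm the mapping with docker ps (the PORTS column should show something like 0.0.0.0:8080->80/tcp) and check the page, for example:
curl http://localhost:8080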
I have got the Dockerfile below.
FROM node:boron
# Create app directory
RUN mkdir -p /usr/src/akamai
WORKDIR /usr/src/akamai
# Install app dependencies
COPY package.json /usr/src/akamai/
RUN npm install
# Bundle app source
COPY . /usr/src/akamai
#EXPOSE 8080
CMD ["node", "src/akamai-client.js", "purge", "https://www.example.com/main.css"]
Below is the command I run from the command prompt after building the Docker image; it executes the CMD as given in the Dockerfile above:
docker run -it "akamaiapi"
CMD ["node", "src/akamai-client.js", "purge", "https://www.example.com/main.css"]
I want these two arguments passed directly from the docker command instead of being hard-coded in the Dockerfile, so that my docker run commands could look like these:
docker run -it "akamaiapi" queue
docker run -it "akamaiapi" purge "https://www.example.com/main.css"
docker run -it "akamaiapi" purge-status "b9f80d960602b9f80d960602b9f80d960602"
You can do that through a combination of ENTRYPOINT and CMD.
The ENTRYPOINT specifies a command that will always be executed when the container starts.
The CMD specifies default arguments that will be fed to the ENTRYPOINT; any arguments you pass to docker run replace this default CMD.
So, with Dockerfile:
FROM node:boron
...
ENTRYPOINT ["node", "src/akamai-client.js"]
CMD ["purge", "https://www.example.com/main.css"]
The default behavior of a running container:
docker run -it akamaiapi
is equivalent to the command:
node src/akamai-client.js purge "https://www.example.com/main.css"
And if you do :
docker run -it akamaiapi queue
The underlying execution in the container would be like:
node src/akamai-client.js queue
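If you ever need to bypass the ENTRYPOINT entirely (for example to get a shell inside the image for debugging), you can override it at run time; a small sketch:
docker run -it --entrypoint /bin/sh akamaiapi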
What's wrong with my Dockerfile? The Dockerfile is in the root folder of my repo, and so is the dist folder.
FROM nginx
# copy folder
COPY dist /usr/share/nginx/html
EXPOSE 8080
CMD ["nginx"]
I build the image:
docker build -f Dockerfile.nginx -t localhost:5000/test/image:${version} .
The image is there when I run docker images.
It looks so simple but when I try to run the image as a container:
docker run -d -p 80:8080 localhost:5000/test/image:15
545445f961f4ec22becc0688146f3c73a41504d65467020a3e572d136354e179
But the container status is Exited (0) About a minute ago, and docker logs shows nothing.
Default nginx behaviour is to run as a daemon. To prevent this, run nginx with the daemon off directive:
CMD ["nginx", "-g", "daemon off;"]
By default, Nginx forks into the background; since the original foreground process has then terminated, the Docker container stops immediately. You can have a look at how the original image's Dockerfile handles this:
CMD ["nginx", "-g", "daemon off;"]
The flag -g "daemon off;" causes Nginx to not fork, but continue running in the foreground, instead. And since you're already extending the official nginx image, you can drop your CMD line altogether, as it will be inherited from the base image, anyway.