Container is automatically exiting - docker

Below is my Dockerfile:
FROM python:3
COPY . /Demo
WORKDIR /Demo
RUN pip install -r requirements.txt
EXPOSE 9005
CMD python ./app.py
I am using the following command to run the resulting image:
docker run -it -d -p host:port imagename:v1
The container is automatically exiting. When I run docker ps, no running container is shown.

Instead of this line:
CMD python ./app.py
give the absolute path, like below:
CMD python /<path-to-script>/app.py
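With the Dockerfile above, COPY . /Demo places the script in /Demo, so the concrete form (assuming app.py sits at the root of the build context) would be:
CMD python /Demo/app.py
If the container still exits immediately, docker logs <container-id> on the stopped container usually shows the Python traceback that caused the exit.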

Related

docker container stops after docker run

I have a Dockerfile which, when built and run, stops immediately. I am trying to run both the client and the server in one Docker container. A docker-compose solution is already in place and working fine; please advise how to keep the container up and running using plain docker run. Thanks!
Here are my Dockerfile, package.json, and a screenshot of the folder structure.
Dockerfile contents:
FROM node:14.14.0-alpine
RUN apk update && apk add bash
SHELL ["/bin/bash", "-o", "pipefail", "-c"]
WORKDIR /app
EXPOSE 3000
EXPOSE 4565
CMD ["npm","run","prebuild"]
docker build command:
docker build -t sample .
docker run command:
docker run -d -it --name sm -v `pwd`:/app sample
Package.json: (attached as a screenshot)
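For what it's worth, a container stays running only as long as the process started by its CMD or ENTRYPOINT is alive; npm run prebuild presumably finishes, and the container exits with it. A minimal sketch of a long-running CMD, assuming package.json defines a start script that launches the server:
CMD ["npm", "run", "start"]
For pure debugging, a placeholder such as CMD ["tail", "-f", "/dev/null"] also keeps the container up, though it runs nothing useful.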

Astro in Docker not refreshing

I am creating an Astro container with Docker on Windows.
Dockerfile
FROM node:18-alpine3.15
RUN mkdir app
WORKDIR /app
COPY . .
RUN npm install
EXPOSE 24678
CMD ["npm","run","dev","--","--host"]
I build my image with the following command:
docker build . -t astro
I run my container with this command:
docker run --name astro1 -p 24678:24678 -v D:\Workspace\Docker\Practicas\docker-astro-example:/app -v /app/node_modules/ astro
So far, no problems, but when I make a change to the index.astro file, the page does not refresh to show the changes.
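A common cause on Windows is that file-change events from the host do not cross the bind mount into the Linux container, so Vite (Astro's dev server) never sees the edit. A sketch of a workaround, assuming a standard astro.config.mjs at the project root, is to make the watcher poll the filesystem:
// astro.config.mjs
import { defineConfig } from 'astro/config';

export default defineConfig({
  vite: {
    server: {
      watch: {
        // poll for changes instead of relying on inotify events,
        // which often do not propagate through Windows bind mounts
        usePolling: true,
      },
    },
  },
});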

Copy files from container to local in Docker

I want to copy a file from a container to my local machine. The file is generated after a Python script executes, but because of the ENTRYPOINT the container exits right after it runs, so I can't use the docker cp command. Any idea how to prevent the container from exiting before I manage to copy the file? Below is my Dockerfile:
FROM python:3.9-alpine3.12
WORKDIR /app
COPY . /app/
RUN pip install --no-cache-dir -r requirements.txt && \
rm -f /var/cache/apk/*
ENTRYPOINT ["python3", "main.py"]
I use this command to run the image:
docker run -d -it --name test [image]
If the output file is stored in its own directory (say /app/output), you can mount that directory, and the file will land in the output directory of your current directory:
docker run -d -it -v $PWD/output:/app/output/ --name test [image]
If it's not, then run the container with:
docker run -d -it --name test [image]
Then copy the file to your own filesystem, using docker cp to put it in the current directory:
docker cp test:/app/example.json .
If running the container in the background is unnecessary, you can write the file to stdout instead. Because the image defines an ENTRYPOINT, override it so that cat runs instead of main.py:
docker run --rm --entrypoint cat [image] /app/example.json > out_example.json
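It is also worth noting that docker cp works on stopped containers too, so even though the ENTRYPOINT makes the container exit as soon as main.py finishes, docker cp test:/app/example.json . still retrieves the file afterwards; the container does not need to stay alive for the copy.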

Docker: Container should get access to directory of another container

I need one Docker container to get access to a directory of another Docker container.
In the first container I am running a Node.js application, and in the tests/e2e folder are my e2e tests and the configuration for WebdriverIO.
Also, I don't need a persistent volume, like I've done so far; I just need the test files available as long as both containers are running.
$ docker run \
    --name app_stage \
    --volume tests:/app/tests \
    --detach \
    app:stage
This is the Dockerfile for that application:
RUN mkdir -p /app
WORKDIR /app
COPY . /app
RUN npm install
RUN npm run build
EXPOSE 3000
ENV NODE_ENV production
CMD next start
In the second container I'm running WebdriverIO, which needs the tests and the configuration that the first container stores under /app/tests:
$ docker run \
    --rm \
    --volumes-from app_stage \
    webdriverio wdio
But this is not working as I do not see the needed directory in the second container.
First, declare a VOLUME instruction in your Dockerfile:
RUN mkdir -p /app
WORKDIR /app
COPY . /app
RUN npm install
RUN npm run build
EXPOSE 3000
ENV NODE_ENV production
VOLUME /app/tests
CMD next start
Use your first command to start the app_stage container, then start the webdriverio container with the second command. The --volumes-from flag mounts every volume declared by app_stage (including /app/tests) into the new container at the same paths.
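If you want to verify that the volume is visible, a quick check is to list the directory from a throwaway third container (alpine here is just an arbitrary small image for the check):
docker run --rm --volumes-from app_stage alpine ls /app/tests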

Passing arguments from CMD in docker

I have the Dockerfile below.
FROM node:boron
# Create app directory
RUN mkdir -p /usr/src/akamai
WORKDIR /usr/src/akamai
# Install app dependencies
COPY package.json /usr/src/akamai/
RUN npm install
# Bundle app source
COPY . /usr/src/akamai
#EXPOSE 8080
CMD ["node", "src/akamai-client.js", "purge", "https://www.example.com/main.css"]
Below is the command I run from the command prompt after the Docker image build:
docker run -it "akamaiapi"
This executes the CMD command given in the Dockerfile above. I want the two arguments ("purge" and the URL) passed in from the docker command instead of hard-coded in the Dockerfile, so that my docker run commands could look like these:
docker run -it "akamaiapi" queue
docker run -it "akamaiapi" purge "https://www.example.com/main.css"
docker run -it "akamaiapi" purge-status "b9f80d960602b9f80d960602b9f80d960602"
You can do that through a combination of ENTRYPOINT and CMD.
The ENTRYPOINT specifies a command that is always executed when the container starts.
The CMD specifies default arguments that are appended to the ENTRYPOINT; any arguments you pass to docker run after the image name replace those defaults.
So, with this Dockerfile:
FROM node:boron
...
ENTRYPOINT ["node", "src/akamai-client.js"]
CMD ["purge", "https://www.example.com/main.css"]
The default behavior when you run the container:
docker run -it akamaiapi
is equivalent to:
node src/akamai-client.js purge "https://www.example.com/main.css"
And if you run:
docker run -it akamaiapi queue
the underlying execution in the container becomes:
node src/akamai-client.js queue
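As a side note, the arguments given after the image name only replace CMD; the ENTRYPOINT itself can also be overridden at run time with docker run's --entrypoint flag, for example:
docker run -it --entrypoint /bin/sh akamaiapi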
