I have a Docker container that runs multiple Python scripts with pm2. Each script has its own .log file, and I want to tail them separately so I can debug them easily. The problem is that when I run docker exec -it mycontainer /bin/sh and then tail -f scriptFile.log in the directory where the file is located, it only shows the first lines and then freezes until I hit Ctrl+C. The image I'm using is nikolaik/python-nodejs:python3.9-nodejs16-alpine.
I also tried docker logs tail -f container /app/logfile.log.
Here is the Dockerfile:
FROM nikolaik/python-nodejs:python3.9-nodejs16-alpine
WORKDIR /app
# Installing pm2
RUN npm install pm2 -g
# Create prod env variable
ENV PROD=true
COPY . .
# Installing lxml
RUN apk add --update --no-cache g++ gcc libxslt-dev
# Installing requirements
RUN pip install -r requirements.txt
CMD ["pm2-runtime", "start", "ecosystem.config.js"]
I am working on a Dockerfile to be used with Google Cloud Run.
I can't get the command to run.
Here's the (slightly obfuscated) Dockerfile:
FROM gcr.io/google.com/cloudsdktool/google-cloud-cli:latest
RUN apt-get update
RUN pip install --upgrade pip
COPY requirements.txt /root/
RUN pip install -r /root/requirements.txt
RUN useradd -m ubuntu
ENV HOME=/home/ubuntu
USER ubuntu
COPY --chown=ubuntu:ubuntu . /home/ubuntu
WORKDIR /home/ubuntu
RUN gcloud config set project our-customer-tech-sem-prod
RUN gcloud auth activate-service-account --key-file=./service/our-customer-tech-sem-prod-a02b2c7f4536.json
RUN gcloud compute config-ssh
ENV GOOGLE_APPLICATION_CREDENTIALS=./service/our-customer-tech-sem-prod-a02b2c7f4536.json
CMD ["gcloud", "compute", "ssh", "--internal-ip", "our-persist-cluster-py3-prod", "--zone=us-central1-b", "--project", "our-customer-tech-sem-prod", "--", "'ps -ef'", "|", "./checker2.py"]
This tries to run the CMD at the end, but it says it can't find the specified host. (It runs fine from the command line outside Docker.)
There were a couple of things wrong at the end: (1) a typo in the host name (fixed with the help of a colleague), and (2) I had to turn the CMD into a shell script to get the pipe inside it to work correctly.
Here's the final (slightly obfuscated) Dockerfile:
FROM gcr.io/google.com/cloudsdktool/google-cloud-cli:latest
RUN apt-get update
RUN pip install --upgrade pip
COPY requirements.txt /root/
RUN pip install -r /root/requirements.txt
RUN useradd -m ubuntu
RUN mkdir /secrets
COPY secrets/* /secrets
ENV HOME=/home/ubuntu
USER ubuntu
COPY --chown=ubuntu:ubuntu . /home/ubuntu
WORKDIR /home/ubuntu
RUN gcloud config set project our-customer-tech-sem-prod
RUN gcloud auth activate-service-account --key-file=./service/our-customer-tech-sem-prod-a02b2c7f4536.json
RUN gcloud compute config-ssh
ENV GOOGLE_APPLICATION_CREDENTIALS=./service/our-customer-tech-sem-prod-a02b2c7f4536.json
CMD ["./rungcloud.sh"]
On my Ubuntu 20.04 machine, I'm trying to run the cluster of Docker containers from this repository:
https://github.com/Capgemini-AIE/ethereum-docker
My Dockerfile:
FROM ethereum/client-go
RUN apk update && apk add bash
RUN apk add --update git bash nodejs npm perl
RUN cd /root &&\
git clone https://github.com/cubedro/eth-net-intelligence-api &&\
cd eth-net-intelligence-api &&\
npm install &&\
npm install -g pm2
ADD start.sh /root/start.sh
ADD app.json /root/eth-net-intelligence-api/app.json
RUN chmod +x /root/start.sh
ENTRYPOINT /root/start.sh
The commands:
sudo docker-compose build
sudo docker-compose up -d
run correctly, but when I execute:
docker exec -it ethereum-docker-master_eth_1 geth attach ipc://root/.ethereum/devchain/geth.ipc
I get this error:
ERROR: Container 517e11aef83f0da580fdb91b6efd19adc8b1f489d6a917b43cc2d22881b865c6 is restarting, wait until the container is running
The reason is that executing:
docker logs ethereum-docker-master_eth_1
results in:
/root/start.sh: line 5: /usr/bin/pm2: No such file or directory
/root/start.sh: line 5: /usr/bin/pm2: No such file or directory
/root/start.sh: line 5: /usr/bin/pm2: No such file or directory
Why do I have this problem? In the Dockerfile I have the command:
RUN npm install -g pm2
How can I solve the problem?
When I build an image with this Dockerfile and check the files in it, I find that pm2 is installed at /usr/local/bin/pm2.
So you need to change the call in your script to
/usr/local/bin/pm2
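To double-check this in your own build, you can override the ENTRYPOINT and ask the image directly (<your-image> is a placeholder for whatever tag you built):

# prints the path npm -g installed pm2 to, e.g. /usr/local/bin/pm2
docker run --rm --entrypoint which <your-image> pm2

Calling plain pm2 in start.sh should also work, since /usr/local/bin is normally on the PATH.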
I work on a project that has a large number of Java SpringBoot services (and other types) running in k8s clusters. Each service has a small start script that executes a more complex script that is provided in a configmap. This all works fine in builds and at runtime.
I need to make some changes to that complex script. I've already made the changes and tested the concept in an isolated script, but I still need to do more testing. I am attempting to take some of the command lines that run in our Linux build system and run them on my VirtualBox Ubuntu VM, which runs on my Windows 10 laptop. Although I am running this on the VM, most of the files were created and written on the host Windows 10 laptop, which I access through a VirtualBox shared folder.
When I look at the "ls -l" output of "startService.sh", I just get this:
-rwxrwx--- 1 root vboxsf 634 Aug 24 15:07 startService.sh*
Note that I am running docker with my own uid, and I have that uid in the "vboxsf" group.
It seems like when the file gets copied into the image, either the owner or the perms get changed in a way that makes it inaccessible from within the container.
I tried adding a "RUN chmod 777 startService.sh" in the Dockerfile, just before the ENTRYPOINT, but that fails at build time with this:
Step 23/26 : RUN chmod 777 startService.sh
---> Running in 6dbb89c930c1
chmod: startService.sh: Operation not permitted
The command '/bin/sh -c chmod 777 startService.sh' returned a non-zero code: 1
I don't know why this is happening, or whether there is something that might mitigate it.
My "docker build" command looks like it went fine. I saw it execute all the steps that the normal build shows. The "docker run" step seemed to go fine, but it finished very quickly. When I looked at the "docker log" for the container, it just said entirely:
/bin/sh: ./startService.sh: Permission denied
Note that everything here is done the same way it is on the build server. There seems to be something funny with the fact that I'm running this on an Ubuntu VM against a VirtualBox shared folder.
You have to run chmod +x startService.sh before docker run or docker-compose up -d --build.
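That is, on the host, in the directory containing the script:

chmod +x startService.sh
docker-compose up -d --build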
And here is an example Dockerfile for Django. Look at the steps involving wait-for; you need to do the same, as shown after the Dockerfile.
###########
# BUILDER #
###########
# pull official base image
FROM python:3.8.3-slim as builder
# set work directory
WORKDIR /usr/src/app
# set environment variables
ENV PYTHONDONTWRITEBYTECODE 1
ENV PYTHONUNBUFFERED 1
# install psycopg2 dependencies
RUN apt-get update \
&& apt-get -y install libpq-dev gcc \
python3-dev musl-dev libffi-dev \
&& pip install psycopg2
# upgrade pip
RUN pip install --upgrade pip
# install dependencies
COPY ./requirements.txt .
RUN pip wheel --no-cache-dir --no-deps --wheel-dir /usr/src/app/wheels -r requirements.txt
# copy project
COPY . .
#########
# FINAL #
#########
# pull official base image
FROM python:3.8.3-slim
# create directory for the app user
RUN mkdir -p /home/app
# create the app user
RUN addgroup --system app && adduser --system --group app
# create the appropriate directories
ENV HOME=/home/app
ENV APP_HOME=/home/app/web
RUN mkdir $APP_HOME
RUN mkdir $APP_HOME/static
RUN mkdir $APP_HOME/media
RUN mkdir $APP_HOME/currencies
WORKDIR $APP_HOME
# install dependencies
RUN apt-get update && apt-get install -y libpq-dev bash netcat rabbitmq-server
COPY --from=builder /usr/src/app/wheels /wheels
COPY --from=builder /usr/src/app/requirements.txt .
COPY wait-for /bin/wait-for
COPY /log /var/log
COPY /run /var/run
RUN pip install --no-cache /wheels/*
# copy project
COPY . $APP_HOME
# chown all the files to the app user
RUN chown -R app:app $APP_HOME
RUN chown -R app:app /var/log/
RUN chown -R app:app /var/run/
EXPOSE 3000
# change to the app user
USER app
# only for Django
CMD ["gunicorn", "Config.asgi:application", "--bind", "0.0.0.0:8000", "--workers", "3", "-k","uvicorn.workers.UvicornWorker","--log-file","-"]
I'm trying to run a Node app with xvfb-run. Here is my Dockerfile:
FROM node:lts-alpine
RUN apk --no-cache upgrade && apk add --no-cache chromium coreutils xvfb xvfb-run
ENV CHROME_BIN="/usr/bin/chromium-browser" \
PUPPETEER_SKIP_CHROMIUM_DOWNLOAD="true" \
UPLOAD_ENV="test"
WORKDIR /app
COPY package.json .
COPY .npmrc .
RUN npm install
COPY . .
# EXPOSE 9999
ENTRYPOINT xvfb-run -a npm run dev
I can successfully build the image, but when I run it with docker run, it gets stuck without any log output.
But when I open an interactive shell and run the ENTRYPOINT command, it works...
How do I fix it?
You should add --init to docker run; it runs a tiny init process as PID 1 to forward signals and reap child processes, which is usually what a stuck xvfb-run is missing. For example:
docker run --init --rm -it $IMAGE$ xvfb-run $COMMAND$
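If you'd rather bake this into the image than pass the flag on every run, one option is to install an init in the Dockerfile (an untested sketch, assuming Alpine's tini package):

RUN apk add --no-cache tini
ENTRYPOINT ["tini", "--", "xvfb-run", "-a", "npm", "run", "dev"]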
I'm trying to execute a shell file that contains a Python script, but I don't know why I get the following error.
File directory structure (/home/kwstat/workplace/analysis/report_me):
home/
  kwstat/
    workplace/
      analysis/
        report_me/
          report_me.sh
          python_file/
            python_code.py
            ...
$ docker exec -it test /bin/bash -c "source /home/kwstat/workplace/analysis/report_me/report_me.sh"
# Error
/home/kwstat/workplace/analysis/report_me/report_me.sh: line 30: source: /usr/local/bin/python: cannot execute binary file
I tried several things in the Dockerfile, but the same error occurred.
# 1. CMD ["/bin/bash","-l","-c"]
CMD ["/bin/bash","-l","-c"]
# 2. CMD bin/bash
CMD bin/bash
#########My Dockerfile#############
FROM continuumio/miniconda3
# System packages
RUN apt-get update && apt-get install -y curl
RUN apt-get update && apt-get install -y subversion
WORKDIR /home/kwstat/workplace/analysis/report_me
COPY environments.yml /home/kwstat/workplace/analysis/report_me/environments.yml
RUN conda env create -f /home/kwstat/workplace/analysis/report_me/environments.yml
# Make RUN commands use the new environment:
SHELL ["conda", "run", "-n", "myenv", "/bin/bash", "-c"]
RUN echo "conda activate my_env" >> ~/.profile
# Activate the environment, and make sure it's activated:
#RUN echo "Make sure flask is installed:"
COPY requirements.txt /home/kwstat/me_report_dockerfile/requirements.txt
RUN pip install -r /home/kwstat/me_report_dockerfile/requirements.txt
WORKDIR /home/kwstat/workplace/analysis/report_me/python_file
COPY python_file/ /home/kwstat/workplace/analysis/report_me/python_file
WORKDIR /home/kwstat/workplace/analysis/report_me/
COPY report_me.sh ./report_me.sh
RUN chmod +x report_me.sh
CMD ["/bin/bash"]
Please, any help would be appreciated.
My problem was in the shell script: inside the script I set the conda env path, and that solved everything.
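In other words, the fix lives in report_me.sh rather than the Dockerfile. A minimal sketch of that change (assuming the env is named my_env, as in the Dockerfile's .profile line; in continuumio/miniconda3, envs live under /opt/conda/envs):

#!/bin/bash
# Put the conda env's bin directory first on PATH so "python" resolves
# to the env's interpreter instead of the binary the error complained about.
export PATH="/opt/conda/envs/my_env/bin:$PATH"
python /home/kwstat/workplace/analysis/report_me/python_file/python_code.py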