Docker logs command doesn't show any logs

I have a Flask app in Python which is built into an image. When I try to follow its logs using the command
docker logs -f f7e2cd41c0706b7a26d9ff5821aa1d792c685826d1c9707422a2a5dfa2e33796
it does not show any logs. It should at least show that the Flask app has started, right? Note that I am able to hit the Flask API from the host and it is working. There are many print statements in the code that must have executed for this API to work, so those print statements should have appeared in the logs. Am I missing something here?
The Dockerfile is:
FROM python:3.6.8
WORKDIR /app
COPY . /app
#RUN apt-get update -y
#RUN apt-get install python-pip -y
RUN pip install -r requirements.txt
EXPOSE 5001
WORKDIR Flask/
RUN chmod +x main.py
CMD ["python", "main.py"]

You can get the logs from your host machine. The default logging driver writes a JSON-structured file on the local disk at
/var/lib/docker/containers/[container-id]/[container-id]-json.log
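You can confirm the exact log file path for a given container with docker inspect:
sudo docker inspect --format '{{.LogPath}}' <container-id>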
Or the same way as you did will also work:
sudo docker ps -a
sudo docker logs -f container-id
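If the container is running and the API responds but the print output never appears, a likely culprit is Python's stdout buffering: when stdout is not attached to a terminal, Python buffers print statements instead of flushing them to the logging driver. A minimal fix for the Dockerfile above, as a sketch, is either of the following:
# flush stdout/stderr immediately so prints reach docker logs
ENV PYTHONUNBUFFERED=1
# or equivalently, run the interpreter unbuffered:
CMD ["python", "-u", "main.py"]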

Related

How to use env variables set during the build phase at run time (Docker)

I want to preface this by saying that I am very new to Docker and have just gotten my feet wet with it. In the Dockerfile that I use to build the container, I install a program that sets some env variables. Here is my Dockerfile for context.
FROM python:3.8-slim-buster
COPY . /app
RUN apt-get update
RUN apt-get install wget -y
RUN wget http://static.matrix-vision.com/mvIMPACT_Acquire/2.40.0/install_mvGenTL_Acquire.sh
RUN wget http://static.matrix-vision.com/mvIMPACT_Acquire/2.40.0/mvGenTL_Acquire-x86_64_ABI2-2.40.0.tgz
RUN chmod +x ./install_mvGenTL_Acquire.sh
RUN ./install_mvGenTL_Acquire.sh -u
RUN apt-get install -y python3-opencv
RUN pip3 install USSCameraTools
WORKDIR /app
CMD python3 main.py
After executing the docker build command, the program "mvGenTL_Acquire.sh" sets env variables inside the container. I need these variables to be set when executing the docker run command. But when I check the env variables after running the image, they are not set. I know I can pass them in directly, but I would like to use the ones that are set by the installer during the build.
Any help would be greatly appreciated, thanks!
To run a bash script when your container starts, create a script.sh file:
#!/bin/bash
your commands here
If you are using an Alpine image, you must use #!/bin/sh instead of #!/bin/bash on the first line of the script.
Now, in your Dockerfile, copy the script into the image and use the ENTRYPOINT instruction so it runs when the container starts:
.
.
.
COPY script.sh /
RUN chmod +x /script.sh
.
.
.
ENTRYPOINT ["/script.sh"]
Note that the ENTRYPOINT instruction must use the script's path inside the image.
Now when you create a container, the script.sh file will be executed.
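Applied to the question above: environment variables exported during a RUN step do not persist into the final image, so the installer's variables have to be re-established when the container starts. A sketch of what script.sh could look like (the path /etc/profile.d/acquire.sh is a hypothetical placeholder; check where install_mvGenTL_Acquire.sh actually writes its exports):
#!/bin/bash
# Hypothetical location: source whatever profile script the installer wrote.
source /etc/profile.d/acquire.sh
# Replace the shell with the main process so it runs as PID 1 and receives signals.
exec python3 main.py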

Docker Port Forwarding for FastAPI REST API

I have a simple FastAPI project called toyrest that runs a trivial API. The code looks like this.
from fastapi import FastAPI
__version__ = "1.0.0"
app = FastAPI()
@app.get("/")
def root():
return "hello"
I've built the usual Python package infrastructure around it. I can install the package. If I run uvicorn toyrest:app the server launches on port 8000 and everything works.
Now I'm trying to get this to run in a Docker image. I have the following Dockerfile.
# syntax=docker/dockerfile:1
FROM python:3
# Create a user.
RUN useradd --user-group --system --create-home --no-log-init user
USER user
ENV PATH=/home/user/.local/bin:$PATH
# Install the API.
WORKDIR /home/user
COPY --chown=user:user . ./toyrest
RUN python -m pip install --upgrade pip && \
pip install -r toyrest/requirements.txt
RUN pip install toyrest/ && \
rm -rf /home/user/toyrest
CMD ["uvicorn", "toyrest:app"]
I build the Docker image and run it, forwarding port 8000 to the running container.
docker run -p 8000:8000 toyrest:1.0.0
INFO: Started server process [1]
INFO: Waiting for application startup.
INFO: Application startup complete.
INFO: Uvicorn running on http://127.0.0.1:8000 (Press CTRL+C to quit)
When I try to connect to http://127.0.0.1:8000/ I get no response.
Presumably I am doing the port forwarding incorrectly. I've tried various permutations of the port forwarding argument (e.g. -p 8000, -p 127.0.0.1:8000:8000) to no avail.
This is such a basic Docker command that I can't see how I'm getting it wrong, but somehow I am. What am I doing wrong?
Try changing the CMD in your Dockerfile to bind uvicorn to all interfaces:
CMD ["uvicorn", "toyrest:app","--host", "0.0.0.0"]

CentOS7: How to start the slapd service in a docker container?

I want to run an OpenLDAP server in a Docker container using CentOS 7.
I managed to get a container running with OpenLDAP installed in it. My problem is that I am using a script entrypoint.sh to start the slapd service and add a user to my directory. I would like these two steps to be in the Dockerfile, so that the password used by ldapadd is not stored in the script.
So far I have only found examples on Debian.
https://github.com/kanboard/docker-openldap/blob/master/memberUid/Dockerfile is what I would like to do, but using CentOS 7.
I tried to start the slapd service in my Dockerfile, without success.
My Dockerfile looks like this:
FROM centos:7
RUN yum -y update && yum -y install \
openldap-servers \
openldap-clients \
libselinux-python \
openssl \
; yum clean all
RUN chown ldap:ldap -R /var/lib/ldap
COPY slapd.conf /etc/openldap/slapd.conf
COPY base.ldif /etc/openldap/schema/base.ldif
COPY entrypoint.sh /entrypoint.sh
RUN chmod 500 /entrypoint.sh
ENTRYPOINT ["/entrypoint.sh"]
My entrypoint.sh script looks like this:
#!/bin/bash
exec /usr/sbin/slapd -f /etc/openldap/slapd.conf -h "ldapi:/// ldap:///" -d stats &
sleep 10
ldapadd -x -w mypassword -D "cn=ldapadm,dc=mydomain" -f /etc/openldap/schema/base.ldif
This does work; however, I want to start the LDAP service and run the ldapadd command in the Dockerfile, so that mypassword is not stored in entrypoint.sh.
Hence I tried these commands:
RUN systemctl slapd start
RUN ldapadd -x -w password -D "cn=ldapadm,dc=mydomain" -f /etc/openldap/schema/base.ldif
Of course this does not work, as systemctl does not work in a Dockerfile. What is the best alternative? I was considering having one container start the LDAP service, but then I do not know how to call it when building the image of the other container...
EDIT :
Thanks to Guido U. Draheim, I have an alternative to systemctl to start slapd service.
My Dockerfile now looks like this:
FROM centos:7
RUN yum -y update && yum -y install \
openldap-servers \
openldap-clients \
libselinux-python \
openssl \
; yum clean all
RUN chown ldap:ldap -R /var/lib/ldap
COPY slapd.conf /etc/openldap/slapd.conf
COPY base.ldif /etc/openldap/schema/base.ldif
COPY files/docker/systemctl.py /usr/bin/systemctl
RUN systemctl enable slapd
RUN systemctl start slapd;\
ldapadd -x -w password -D "cn=ldapadm,dc=sblanche" -f /etc/openldap/schema/base.ldif
COPY entrypoint.sh /entrypoint.sh
RUN chmod 500 /entrypoint.sh
ENTRYPOINT ["/entrypoint.sh"]
But I got the following error: ldap_bind: Invalid credentials (49)
(a) You could use the docker-systemctl-replacement to run your "systemctl.py start slapd", which is the obvious first error.
(b) Each RUN in a Dockerfile runs in a new container, so a process started in an earlier RUN cannot survive into the next one anyway. That is why the referenced Dockerfile example combines the start and the ldapadd in a single RUN with "&&".
And yeah, (c) I am using an OpenLDAP CentOS container. So go ahead, try again.
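Putting (a) and (b) together, the service start and the ldapadd have to happen in the same RUN step. A sketch, assuming docker-systemctl-replacement has already been copied to /usr/bin/systemctl as in the edited Dockerfile:
RUN systemctl start slapd && \
    ldapadd -x -w password -D "cn=ldapadm,dc=sblanche" -f /etc/openldap/schema/base.ldif && \
    systemctl stop slapd
The ldap_bind: Invalid credentials (49) error is a separate issue: the -D bind DN and -w password passed to ldapadd must match the rootdn and rootpw configured in slapd.conf.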

Getting "/usr/bin/python: No module named" whenever I run the Docker container

I have a Flask API along with another class for machine-learning prediction that I need to dockerize, so I have the Dockerfile shown below. I run the container using
sudo docker run -d -p 5000:5000 test
However, every time I run it, it crashes.
I can see that its status is "Exited (1)" whenever I run
docker ps --all
When I run docker logs containerIDHere, I get the reason for the crash:
/usr/local/bin/python: No module named
Dockerfile
FROM python:3.6
RUN mkdir -p /app
WORKDIR /app
COPY requirements.txt /app
RUN pip install -r ./requirements.txt
COPY Network /app
COPY Train /app
EXPOSE 5000
CMD ["python", "-m" ,"Network.Api"]

How to pass command line arguments to my dockerized python app

I have a simple Dockerfile which I am using to containerize my Python app. The app takes file paths as command-line arguments. It is my first time using Docker, and I am wondering how I can achieve this:
FROM python:3.6-slim
COPY . /app
WORKDIR /app
RUN apt-get update && apt-get -y install gcc g++
# Install numpy, pandas, scipy and scikit
RUN pip install --upgrade pip
RUN pip --no-cache-dir install -r requirements.txt
RUN python setup.py install
ENTRYPOINT python -m myapp.testapp
Please note that the Python app is run as a module, with the -m flag.
This builds the image completely fine. I can also run it using:
docker run -ti myimg
However, I cannot pass any command line arguments to it. For example, my app prints some help options with the -h option.
However, running docker as:
docker run -ti myimg -h
does not do anything, so the command line options are not being passed.
Additionally, I was wondering what the best way to actually pass file handles from the host computer to docker. So, for example, the application takes path to a file as an input and the file would usually reside on the host computer. Is there a way for my containerized app to be able to access that?
You have to use the CMD instruction along with ENTRYPOINT (in exec form):
ENTRYPOINT ["python", "-m", "myapp.testapp"]
CMD [""]
Make sure that whatever default value you pass to CMD ("" in the snippet above) is ignored by your main command.
When you run docker run -ti myimg,
the command is executed as python -m myapp.testapp ''
When you run docker run -ti myimg -h,
the command is executed as python -m myapp.testapp -h
Note:
exec form: ENTRYPOINT ["command", "parameter1", "parameter2"]
shell form: ENTRYPOINT command parameter1 parameter2
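As for the second question, giving the containerized app access to files on the host: the usual approach is a bind mount, which maps a host directory into the container's filesystem at run time. A sketch with placeholder paths:
# Mount the host directory /path/on/host as /data inside the container,
# then pass the container-side path as the program's argument:
docker run -ti -v /path/on/host:/data myimg /data/input.txt
Note that the argument must be the path as seen inside the container (/data/input.txt), not the host path.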
