Run bash command outside docker container

I built an API using FastAPI that calls some bash commands. Now I want to make a Docker container for my app, but I encountered the following issue: if I create a Docker container, the app won't run bash commands. I guess I need to get out of the Docker container to run bash commands, but I am not sure that is possible. Any suggestions? Apologies in advance if my question is confusing.
Here is my Dockerfile:
FROM python:3.9
WORKDIR /code
COPY ./requirements.txt /code/requirements.txt
RUN pip3 install --no-cache-dir --upgrade -r /code/requirements.txt
COPY ./app /code/app
# CMD ["python", "./app/main.py"]
CMD ["uvicorn", "app.main:app", "--host", "0.0.0.0", "--port", "80"]
and here is an example of how I run a bash command (it is actually a docker command):
@app.post("/stop-camera")
async def stop_camera(info: Request):
    req_info = await info.json()
    file_name = str(req_info["id"])
    # Each command is an environment-provided template with the id appended
    result1 = subprocess.run(str(env_dictionary["STOP"]) + file_name, shell=True)
    result2 = subprocess.run(str(env_dictionary["REMOVE"]) + file_name, shell=True)
    return {
        "status": "SUCCESS",
        "stop": result1,
        "rm": result2
    }

Here's a very simple example of UDP communication between something running on a docker host and something running inside a container.
On the host, start a simple docker container passing it a way to get the host's IP address:
docker run -it --add-host host.docker.internal:host-gateway alpine:latest ash
Then, still outside the container and on the host, wait for a command on UDP port 65000 from the container. Note I am using netcat here, but you would likely use Python since you have that already:
# Listen on UDP port 65000
nc -u -l 65000
Obviously you could run this in a loop to wait for multiple commands; you could also parse the commands that arrive and react to each one differently, check the source of the commands, or encrypt them for some level of security...
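For instance, a minimal Python version of that listening loop might look like this (a sketch; 65000 is just the port used above, and the command set and dispatch logic are up to you):
# host side: listen for commands from containers on UDP port 65000
import socket

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.bind(("0.0.0.0", 65000))
while True:
    data, addr = sock.recvfrom(1024)
    command = data.decode().strip()
    print(f"received {command!r} from {addr}")
    # dispatch here, e.g. run the matching docker stop / docker rm for STOP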
Inside the container, I quickly add netcat, but again you would probably use Python:
# Install netcat
apk update && apk add netcat-openbsd
# Send command to host via UDP on port 65000
echo STOP | nc -u host.docker.internal 65000
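Again in Python, the container side of the sketch could look like this (host.docker.internal resolves because of the --add-host flag used above):
# container side: send a command to the host via the host-gateway alias
import socket

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.sendto(b"STOP", ("host.docker.internal", 65000))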

Related

Docker Port mapping resulting in ERR_EMPTY_RESPONSE [duplicate]

I've been trying to figure this out for the last few hours but I'm stuck.
I have a very simple Dockerfile which looks like this:
FROM alpine:3.6
COPY gempbotgo /
COPY configs /configs
CMD ["/gempbotgo"]
EXPOSE 8025
gempbotgo is just a Go binary which runs a webserver and some other stuff.
The webserver is running on 8025 and should answer with a hello world.
My issue is with exposing ports. I ran my container like this (after building it):
docker run --rm -it -p 8025:8025 asd
Everything seems fine, but when I try to open 127.0.0.1:8025 in the browser or try a wget, I just get an empty response.
Chrome: ERR_EMPTY_RESPONSE
The port is used and not restricted by the firewall on my Windows 10 system.
Running the go binary without container just on my "Bash on Ubuntu on Windows" terminal and then browsing to 127.0.0.1:8025 works without a hitch.
Other addresses like 127.0.0.1:8030 returned ERR_CONNECTION_REFUSED, so there definitely is something active on the port.
I then went into the container with
docker exec -it e1cc6daae4cf /bin/sh
and checked in there with a wget what happens. No issues there either: the index.html file gets downloaded with a "Hello World".
Any ideas why docker is not sending any data? I've also run my container with docker-compose, but no difference there.
I also ran the container on my VPS hosted externally. Same issue there... (Debian)
My code: (note the Makefile)
https://github.com/gempir/gempbotgo/tree/docker
Edit:
After getting some comments I changed my Dockerfile to a multi-stage build. This is my Dockerfile now:
FROM golang:latest
WORKDIR /go/src/github.com/gempir/gempbotgo
RUN go get github.com/gempir/go-twitch-irc \
&& go get github.com/stretchr/testify/assert \
&& go get github.com/labstack/echo \
&& go get github.com/op/go-logging
COPY . .
RUN CGO_ENABLED=0 GOOS=linux go build -a -installsuffix cgo -o app .
FROM alpine:latest
RUN apk --no-cache add ca-certificates
WORKDIR /root/
COPY configs ./configs
COPY --from=0 /go/src/github.com/gempir/gempbotgo/app .
CMD ["./app"]
EXPOSE 8025
Sadly this did not change anything. I kept everything as close as possible to the guide here: https://docs.docker.com/engine/userguide/eng-image/multistage-build/#use-multi-stage-builds
I have also tried the minimalist Dockerfile from golang.org which looks like this:
FROM golang:onbuild
EXPOSE 8025
But no success either with that.
Your issue is that you are binding to 127.0.0.1:8025 inside your code. This makes the code work from inside the container but not from outside.
You need to bind to 0.0.0.0:8025 to bind to all interfaces inside the container, so that traffic coming from outside the container is also accepted by your Go app.
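The asker's server happens to be Go, but the principle is language-agnostic. A quick Python sketch of the difference (8025 is just the port from the question):
import socket

srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
# srv.bind(("127.0.0.1", 8025))  # reachable only from inside the container
srv.bind(("0.0.0.0", 8025))      # also reachable through docker's -p mapping
srv.listen()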
Adding to the accepted answer: I had the same error message trying to run docker/getting-started.
The problem was that "getting-started" is using port 80 and this was "occupied" (netsh http show urlacl) on my machine.
I had to use docker run -d -p 8888:80 docker/getting-started, where 8888 was an unused port, and then open "http://localhost:8888/tutorial/".
I had the same problem with a Dockerized GatsbyJS app. As per Tarun Lalwani's comment above, I resolved the problem by binding to 0.0.0.0 as the hostname:
yarn develop -P 0.0.0.0 -p 8000
For me this was a problem with the docker swarm mode ingress network. I had to recreate it. https://docs.docker.com/network/overlay/#customize-the-default-ingress-network
Another possible reason you are getting that error is that the docker run command was run through a normal cmd prompt and not an admin command prompt. Make sure you run it as an admin!

Docker Port Forwarding for FastAPI REST API

I have a simple FastAPI project called toyrest that runs a trivial API. The code looks like this.
from fastapi import FastAPI

__version__ = "1.0.0"
app = FastAPI()

@app.get("/")
def root():
    return "hello"
I've built the usual Python package infrastructure around it. I can install the package. If I run uvicorn toyrest:app the server launches on port 8000 and everything works.
Now I'm trying to get this to run in a Docker image. I have the following Dockerfile.
# syntax=docker/dockerfile:1
FROM python:3
# Create a user.
RUN useradd --user-group --system --create-home --no-log-init user
USER user
ENV PATH=/home/user/.local/bin:$PATH
# Install the API.
WORKDIR /home/user
COPY --chown=user:user . ./toyrest
RUN python -m pip install --upgrade pip && \
pip install -r toyrest/requirements.txt
RUN pip install toyrest/ && \
rm -rf /home/user/toyrest
CMD ["uvicorn", "toyrest:app"]
I build the Docker image and run it, forwarding port 8000 to the running container.
docker run -p 8000:8000 toyrest:1.0.0
INFO: Started server process [1]
INFO: Waiting for application startup.
INFO: Application startup complete.
INFO: Uvicorn running on http://127.0.0.1:8000 (Press CTRL+C to quit)
When I try to connect to http://127.0.0.1:8000/ I get no response.
Presumably I am doing the port forwarding incorrectly. I've tried various permutations of the port forwarding argument (e.g. -p 8000, -p 127.0.0.1:8000:8000) to no avail.
This is such a basic Docker command that I can't see how I'm getting it wrong, but somehow I am. What am I doing wrong?
Try adding the --host option to the CMD in your Dockerfile:
CMD ["uvicorn", "toyrest:app", "--host", "0.0.0.0"]

Google Cloud Run fails to listen even after changing port to 8080

I am having some issues deploying to Cloud Run lately. When I try to deploy the below Dockerfile to Cloud Run, it fails with the error "Failed to start and then listen on the port defined by the PORT environment variable.":
FROM phpmyadmin/phpmyadmin:latest
EXPOSE 8080
RUN sed -i 's/80/${PORT}/g' /etc/apache2/sites-available/000-default.conf /etc/apache2/ports.conf
ENTRYPOINT [ "/docker-entrypoint.sh" ]
CMD [ "apache2-foreground" ]
The ENTRYPOINT and CMD were added separately, even though phpmyadmin/phpmyadmin:latest uses this same ENTRYPOINT and CMD, to see if that would solve it, though it is not required. The same Docker image runs properly and listens on port 8080 when deployed using docker run. Is there something I am doing wrong?
This is the command I use to deploy:
gcloud run deploy phpmyadmin --memory=1Gi --platform=managed \
--allow-unauthenticated --add-cloudsql-instances project_id:us-central1:db-name \
--region=us-central1 --image gcr.io/project_id/phpmyadmin:1.3 \
--update-env-vars PMA_HOST=localhost,PMA_SOCKET="/cloudsql/project_id:us-central1:db-name",PMA_ABSOLUTE_URI=phpmyadmin.domain.com
This is all I can find in the logs (I have redacted some data):
https://gist.github.com/shanukk27/9dd4b3076c55307bd6e853a76e7a34e0
The Cloud Run runtime environment seems to be slightly different from the plain docker run environment. You can't use ENTRYPOINT and CMD at the same time:
ENTRYPOINT [ "/docker-entrypoint.sh" ]
CMD [ "apache2-foreground" ]
It works with docker run (why? Docker issue? Docker feature?) but not on Cloud Run (missing feature? bug?).
Use only one of them, for example:
ENTRYPOINT /docker-entrypoint.sh && apache2-foreground
EDIT
A strange remark shared by Shanu is that the two-command form works with a WordPress deployment, but doesn't work here:
FROM wordpress:5.3.2-php7.3-apache
EXPOSE 8080
# Copy custom entrypoint from repo
COPY cloud-run-entrypoint.sh /usr/local/bin/
# Change apache listening port and set permission for docker entrypoint
RUN sed -i 's/80/${PORT}/g' /etc/apache2/sites-available/000-default.conf /etc/apache2/ports.conf && \
chmod +x /usr/local/bin/cloud-run-entrypoint.sh
# Wordpress conf
COPY wordpress/. /var/www/html/
# Custom entrypoint
ENTRYPOINT ["cloud-run-entrypoint.sh","docker-entrypoint.sh"]
# Start apache when docker container starts
CMD ["apache2-foreground"]
The problem is solved here, but the reason is not clear.
Note to Googler (Steren? Ahmet?): Can you share more details on this behavior?

What's the meaning of "CMD [ "-?" ]" in Docker?

"The Docker Book v17.12.0-ce", page 223
Listing 6.19: Our war file fetcher
FROM ubuntu:16.04
MAINTAINER James Turnbull
ENV REFRESHED_AT 2016-06-01
RUN apt-get -yqq update
RUN apt-get -yqq install wget
VOLUME [ "/var/lib/tomcat7/webapps/" ]
WORKDIR /var/lib/tomcat7/webapps/
ENTRYPOINT [ "wget" ]
CMD [ "-?" ]
This incredibly simple image does one thing: it wgets whatever file from a URL that is specified when a container is run from it and stores the file in the /var/lib/tomcat7/webapps/ directory. This directory is also a volume and the working directory for any containers. We're going to share this volume with our Tomcat server and run its contents.
Finally, the ENTRYPOINT and CMD instructions allow our container to run when no URL is specified; they do so by returning the wget help output when the container is run without a URL.
Can anybody tell me the meaning of CMD [ "-?" ]?
I know the concepts of ENTRYPOINT and CMD;
what I don't understand is the meaning of "-?" in "wget -?".
When you run a Docker container, it constructs a command line by simply concatenating the "entrypoint" and "command". Those come from different places in the docker run command line; but if you don't provide a --entrypoint option then the ENTRYPOINT in the Dockerfile is used, and if you don't provide any additional command-line arguments after the image name then the CMD is appended.
So, a couple of invocations:
# Does "wget -?"
docker run --rm thisimage
# Does "wget -O- http://stackoverflow.com": dumps the SO home page
docker run --rm thisimage -O- http://stackoverflow.com
# What you need to do to get an interactive shell
docker run --rm -it --entrypoint /bin/sh thisimage
I figured it out: the author made a clerical error. The argument in CMD should be "-h",
because later he says: "Finally, the ENTRYPOINT and CMD instructions allow our container to run when no URL is specified; they do so by returning the wget help output when the container is run without a URL."

How to pass command line arguments to my dockerized python app

I have a simple Dockerfile which I am using to containerize my Python app. The app actually takes file paths as command line arguments. It is my first time using Docker, and I am wondering how I can achieve this:
FROM python:3.6-slim
COPY . /app
WORKDIR /app
RUN apt-get update && apt-get -y install gcc g++
# Install numpy, pandas, scipy and scikit
RUN pip install --upgrade pip
RUN pip --no-cache-dir install -r requirements.txt
RUN python setup.py install
ENTRYPOINT python -m myapp.testapp
Please note that the Python app is run from the module with the -m flag.
This builds the image completely fine. I can also run it using:
docker run -ti myimg
However, I cannot pass any command line arguments to it. For example, my app prints some help options with the -h option.
However, running docker as:
docker run -ti myimg -h
does not do anything, so the command line options are not being passed.
Additionally, I was wondering what the best way is to pass file handles from the host computer to Docker. So, for example, the application takes the path to a file as an input, and the file would usually reside on the host computer. Is there a way for my containerized app to be able to access that?
You have to use the CMD instruction along with ENTRYPOINT (in exec form):
ENTRYPOINT ["python", "-m", "myapp.testapp"]
CMD [""]
Make sure that whatever default value you pass to CMD ("" in the above snippet) is ignored by your main command.
When you do docker run -ti myimg,
the command will be executed as python -m myapp.testapp ''
When you do docker run -ti myimg -h,
the command will be executed as python -m myapp.testapp -h
Note:
exec form: ENTRYPOINT ["command", "parameter1", "parameter2"]
shell form: ENTRYPOINT command parameter1 parameter2
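To see how the appended arguments reach the Python process, here is a minimal sketch of what myapp/testapp.py might look like (the module contents are hypothetical; only the ENTRYPOINT/CMD behaviour above comes from the answer):
# hypothetical myapp/testapp.py
import sys
import argparse

def main():
    parser = argparse.ArgumentParser(prog="myapp.testapp")
    parser.add_argument("path", nargs="?", help="input file path")
    # drop the empty-string default that CMD [""] appends, as advised above
    argv = [a for a in sys.argv[1:] if a]
    args = parser.parse_args(argv)  # argparse handles -h automatically
    print(args.path)

if __name__ == "__main__":
    main()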
