I have a FastAPI app hosted in a Docker container. The workflow of this API is to post data to other APIs that are hosted on a different server. The FastAPI service itself can be called by another program, but it gets a "No address associated with hostname" error when it calls the other APIs. I suspect something is wrong in the Dockerfile. Below are the diagram and the Dockerfile.
Dockerfile
FROM python:3.9
WORKDIR /code
COPY ./requirements.txt /code/requirements.txt
RUN pip install --no-cache-dir --upgrade -r /code/requirements.txt
COPY ./app /code/app
WORKDIR /code/app
CMD ["uvicorn", "main:app", "--host", "0.0.0.0", "--port", "8000"]
I am running my container with Docker Desktop installed on Windows, and I noticed that Windows doesn't have an /etc/resolv.conf file, so the container may not automatically inherit the DNS settings of the host. The DNS server IP is also different between Windows and Linux in our company.
So I solved this issue in two ways.
Host the container on a Linux machine without setting any DNS IP.
Change the DNS server IP.
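For reference, the same thing can be done without moving to Linux by telling Docker which DNS server to use; this is only a sketch, where 10.0.0.53 is a placeholder for the company DNS server and my-fastapi-image is a made-up image name.
# pass the company DNS server explicitly when starting the container (10.0.0.53 is a placeholder)
docker run --dns 10.0.0.53 -p 8000:8000 my-fastapi-image
# or set it for all containers in Docker Desktop (Settings -> Docker Engine, which edits daemon.json):
# {
#   "dns": ["10.0.0.53"]
# }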
Context
I'm juggling between a Dockerfile and docker-compose to figure out the best security practice for deploying my Docker image and pushing it to the Docker registry so everyone can use it.
Currently, I have a FastAPI application that uses an AWS API token for an AWS service. I'm trying to figure out a solution that works in both Docker for Windows (GUI) and Docker for Linux.
In the Docker for Windows GUI it's clear that after I pull the image from the registry I can add the API tokens as environment variables and spin up a container.
I need to know
When it comes to Docker for Linux, I'm trying to figure out a way to build an image with an AWS API token either via Dockerfile or docker-compose.yml.
Things I tried
Followed the solution from this blog
As I said earlier, if I do something like what the blog describes, it's fine for my personal use, but a user who pulls my Docker image from the registry will also get my AWS secrets. How do I handle this situation in a better way?
Current state of Dockerfile
FROM python:3.10
# Set the working directory to /src
WORKDIR /src
# Copy the current directory contents into the container at /src
COPY ./ /src
# Install any needed packages specified in requirements.txt
RUN pip install --upgrade pip
RUN pip install --no-cache-dir -r requirements.txt
# Make port 8000 available to the world outside this container
EXPOSE 8000
# Run main.py when the container launches
CMD ["python", "main.py"]
I'm trying to run Rhino Compute in a Docker container and facing a weird issue. I built an image using the Dockerfile below, and when I run it locally there are no issues.
# escape=`
# see https://discourse.mcneel.com/t/docker-support/89322 for troubleshooting
# NOTE: use 'process' isolation to build image (otherwise rhino fails to install)
### builder image
FROM mcr.microsoft.com/dotnet/sdk:5.0 as builder
# copy everything, restore nuget packages and build app
COPY src/ ./src/
RUN dotnet publish -c Release -r win10-x64 --self-contained true src/compute.sln
### main image
# tag must match windows host for build (and run, if running with process isolation)
# e.g. 1903 for Windows 10 version 1903 host
FROM mcr.microsoft.com/windows:1809
#Copy the fonts and font install script
COPY fonts/* fonts/
COPY InstallFont.ps1 .
# Run the font install script in PowerShell
RUN powershell -ExecutionPolicy Bypass -Command .\InstallFont.ps1
# install .net 4.8 if you're using the 1809 base image (see https://git.io/JUYio)
# comment this out for 1903 and newer
RUN curl -fSLo dotnet-framework-installer.exe https://download.visualstudio.microsoft.com/download/pr/7afca223-55d2-470a-8edc-6a1739ae3252/abd170b4b0ec15ad0222a809b761a036/ndp48-x86-x64-allos-enu.exe `
&& .\dotnet-framework-installer.exe /q `
&& del .\dotnet-framework-installer.exe `
&& powershell Remove-Item -Force -Recurse ${Env:TEMP}\*
# install rhino (with "-package -quiet" args)
# NOTE: edit this if you use a different version of rhino!
# the url below will always redirect to the latest rhino 7 (email required)
# https://www.rhino3d.com/download/rhino-for-windows/7/latest/direct?email=EMAIL
RUN curl -fSLo rhino_installer.exe https://www.rhino3d.com/download/rhino-for-windows/7/latest/direct?email=<myemail> `
&& .\rhino_installer.exe -package -quiet `
&& del .\rhino_installer.exe
# (optional) use the package manager to install plug-ins
# RUN ""C:\Program Files\Rhino 7\System\Yak.exe"" install jswan
# copy compute app to image
COPY --from=builder ["/src/dist", "/app"]
WORKDIR /app
# bind rhino.compute to port 5000
EXPOSE 5000
# uncomment to build core-hour billing credentials into image (not recommended)
# see https://developer.rhino3d.com/guides/compute/core-hour-billing/
#ENV RHINO_TOKEN=
CMD ["rhino.compute/rhino.compute.exe"]
Application code is here: https://github.com/mcneel/compute.rhino3d
As mentioned, everything works without any issues from inside the container when I curl against localhost:5000. But I can't get any response when I try to curl from the host (after running docker run -p nodeport:containerport imagename). I'm not sure whether it has something to do with the firewall or whether something in the Dockerfile is not configured properly.
Any help is appreciated.
By default, the EXPOSE instruction does not make the container's ports accessible from the host. It only marks the stated ports as available for inter-container communication.
For example, let’s say you have a Node.js application and a Redis server deployed on the same Docker network. To ensure the Node.js application communicates with the Redis server, the Redis container should expose a port.
If you check the Dockerfile of the official Redis image, it includes the line EXPOSE 6379, documenting that the Redis container listens on that port.
Therefore, when your Node.js application connects to port 6379 of the Redis container, that communication happens over the shared Docker network; EXPOSE never publishes anything to the host.
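As a quick illustration (the network and application image names below are placeholders, not from the answer), two containers on the same user-defined network can reach each other on the listening port:
docker network create mynet
docker run -d --name redis --network mynet redis:7
docker run -d --name app --network mynet my-node-app
# inside "app", the Redis server is now reachable as redis:6379 without any -p flag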
You can't publish a port to your host via the Dockerfile; you do that via a docker-compose.yml file or on the command line.
After your image is built with the Dockerfile, you can use the command below to publish port 5000 of your container on port 5000 of your host (note that -p must come before the image name):
docker run -it -p 5000:5000 image:tag
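The docker-compose equivalent would be roughly the following (the service name is a placeholder):
services:
  compute:
    image: image:tag
    ports:
      - "5000:5000"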
I suspected the issue might be with "localhost", since the app only responds on localhost from inside the container and doesn't respond from anywhere else. We added the .NET variable ASPNETCORE_URLS to our Dockerfile.
# bind rhino.compute to port 5000
ENV ASPNETCORE_URLS="http://*:5000"
EXPOSE 5000
This ensures that the application listens on all interfaces, not only localhost, and that resolved the issue.
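After rebuilding with that change, a quick check from the host looks like this (the image name is a placeholder; getting any HTTP response at all confirms the port is reachable):
docker run -p 5000:5000 rhino-compute-image
curl -i http://localhost:5000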
Unfortunately, at the moment, I cannot use docker-compose, and I have to get the Google Cloud SQL Proxy running in a Docker container. But it doesn't run in the container, so MySQL is unable to connect to Google Cloud SQL.
Keep in mind, I was able to connect from outside the container on my machine, so I know the connection itself works.
My Dockerfile looks like this:
FROM node:12-alpine
RUN wget https://dl.google.com/cloudsql/cloud_sql_proxy.linux.amd64 -O cloud_sql_proxy \
&& chmod +x cloud_sql_proxy
RUN ./cloud_sql_proxy -instances=project_placeholder:region_placeholder:instance_placeholder=tcp:3306 -credential_file=service_account.json &
WORKDIR /usr/src/app
COPY package*.json ./
RUN npm install
COPY . .
EXPOSE 80
CMD ["npm", "run", "serve"
How can I configure it so the Cloud SQL Proxy runs?
The RUN directive executes at build time, so your CMD only starts the node process. That is why you can't connect: the proxy process isn't running at all.
One way is to start both processes from the entrypoint, but be aware that in that case, if the proxy goes down for some reason, your container will keep running, because the container's main process is Node.js.
Change the entrypoint to:
ENTRYPOINT [ "sh", "-c", "/cloud_sql_proxy -instances=project_placeholder:region_placeholder:instance_placeholder=tcp:3306 -credential_file=service_account.json & npm start" ]
I have a Flask app. If I use "python app.py" to start the server, it runs perfectly and the browser client can get what I want.
However, if I use a Docker container with the Dockerfile below:
FROM python:3.6.5-slim
RUN mkdir /opt/convosimUI
WORKDIR /opt/convosimUI
RUN pip install Flask
RUN pip install grpcio
RUN pip install grpcio-tools
ADD . .
EXPOSE 5000
ENV FLASK_APP=app.py
CMD ["python", "-u", "app.py"]
The browser (on Windows) cannot get a response from the server. Inside the Linux container everything works perfectly and I can use wget to fetch the content of 127.0.0.1, but from outside the container, nothing inside it is accessible.
If I change the last line of the Dockerfile to:
CMD ["flask", "run", "--host", "0.0.0.0"]
instead of using python app.py, then it works again.
Why does this happen? And what if I want to use the python app.py command?
The reason is that there is some other parallel processing in app.py: I need to share the client of another service, and that client must always be on while the web server is running, so I can't just put them in separate places.
Any ideas are welcome. Thanks!
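For what it's worth, the usual cause of this behaviour is that Flask's app.run() binds to 127.0.0.1 by default, which is only reachable from inside the container. Below is a minimal sketch of app.py (the route is made up), assuming it ends with the standard app.run() call, that keeps python app.py working:
from flask import Flask

app = Flask(__name__)

@app.route("/")
def index():
    return "hello"

if __name__ == "__main__":
    # bind to all interfaces so the Windows host can reach the container,
    # mirroring what "flask run --host 0.0.0.0" does
    app.run(host="0.0.0.0", port=5000)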
I am totally new to Docker, and the client I am working for has sent me a Dockerfile and a .dockerignore file, probably to set up the work environment.
This is basically what he sent me:
FROM node:8
# Create app directory
RUN mkdir -p /usr/src/app
WORKDIR /usr/src/app
COPY package.json package-lock.json ./
RUN npm install
COPY assets ./assets
COPY server ./server
COPY docs ./docs
COPY internals ./internals
COPY track ./track
RUN npm run build:dll
COPY . .
EXPOSE 3000
CMD [ "npm", "start" ]
with the docker build and run commands (which he also provided):
docker build -t reponame:tag .
docker run -p 3000:3000 admin-web:v1
First, can someone tell me what COPY . . means?
He asked me to configure all the ports accordingly. From going through videos, I remember that we can map ports like this: -p 3000:3000. But what does configuring a port mean, and how can I do it? Any relevant article on this would also be helpful. Do I need to make a docker-compose file?
. is the current directory in Linux. So basically, COPY . . copies the current local directory (the build context on your machine) into the container's current working directory (the one set by WORKDIR).
The -p switch is used to configure port mapping. -p 2900:3000 means publish the container's port 3000 on your local port 2900, so that the container is reachable from outside (by your web browser, for instance). Without that mapping the port would not be accessible from outside the container. The port is still available to other containers on the same Docker network, though.
You don't need to make a docker-compose.yml file, but it certainly will make your life easier if you have one, because then you can just run docker-compose up every time to run the container instead of having to run
docker run -p 3000:3000 admin-web:v1
every time you want to start your application.
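A minimal docker-compose.yml for this case could look like the sketch below; the image tag comes from the run command above, and everything else is an assumption.
services:
  admin-web:
    image: admin-web:v1
    ports:
      - "3000:3000"
With that in place, docker-compose up starts the container and docker-compose down stops it.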
Btw here is one of the ultimate docker cheatsheets that I got from a friend, might help you: https://gist.github.com/ruanbekker/4e8e4ca9b82b103973eaaea4ac81aa5f