I have created a Docker image based on the following Dockerfile:
FROM tiangolo/uvicorn-gunicorn-fastapi:python3.7
COPY . /usr/app/
EXPOSE 80
WORKDIR /usr/app/
RUN pip install -r requirements.txt
CMD ["uvicorn", "app_Testing_Praveen:app", "--host", "0.0.0.0", "--port", "80"]
following the documentation available at
https://fastapi.tiangolo.com/deployment/docker/
After running the command
docker run -p 80:80 image_name
My Docker image runs, but it reports the address as 0.0.0.0:80.
However, I am not able to find the absolute URL to open the application. I know that, due to virtualization, Docker will have a different external IP address.
I found that IP on my Docker network interface (the "docker subnet mask"), but that address also does not open the application in the browser.
My Docker version is 20.10.5, build 55c4c88, and I am running it on Windows.
You reach services running inside Docker containers via the IP address of the host machine.
So you can access your service either at http://localhost:80 on the host itself or, from another machine, at http://<docker_host_ip>:80.
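For example, a quick sanity check from a terminal on the Docker host (assuming the container was started with -p 80:80 as above; the IP below is only a placeholder for your host's real address) could look like this:
# Check from the Docker host itself; port 80 is published by -p 80:80
curl http://localhost:80/
# From another machine on the same network, use the host's IP instead
# (192.168.1.10 is a placeholder; on Windows, ipconfig shows the real one)
curl http://192.168.1.10:80/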
Related
I have a Docker image that exposes port 5984 from a pouchdb-server instance running inside a Docker container.
Here's what the Dockerfile looks like:
FROM node:16-alpine
WORKDIR /pouchdb
RUN apk update
RUN npm install --location=global pouchdb-server
EXPOSE 5984
ENTRYPOINT ["pouchdb-server"]
CMD ["--port", "5984"]
Running the container using the default bridge network doesn't work:
docker run -d -v $(pwd)/pouchdb -p 5984:5984 pouchdb-server:v1
But upon running the container using the host docker network, it works like a charm.
docker run -d -v $(pwd)/pouchdb -p 5984:5984 --network host pouchdb-server:v1
I understand that it removes the network isolation between docker and host network but this has the caveat of possible port conflicts.
My question is: is there any way to make this work without using the host network?
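One thing worth checking, as a hedged guess rather than a confirmed answer: if pouchdb-server binds only to 127.0.0.1 inside the container, a published port cannot reach it, which would explain why only --network host works. Assuming pouchdb-server accepts a host option (verify with pouchdb-server --help), the arguments could be overridden at run time along these lines:
# Hypothetical: override the default CMD so the server listens on all
# interfaces inside the container; the ENTRYPOINT is still pouchdb-server,
# so the arguments after the image name replace ["--port", "5984"].
docker run -d -v $(pwd)/pouchdb -p 5984:5984 pouchdb-server:v1 --port 5984 --host 0.0.0.0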
I am facing an error when running a Docker container on port 5000 on localhost. I deployed a FastAPI machine learning model and want to run it in a Docker container.
Error:
This site can’t be reached
The web page at http://0.0.0.0:5000/ might be temporarily down or it may have moved permanently to a new web address.
ERR_ADDRESS_INVALID
Dockerfile:
FROM tiangolo/uvicorn-gunicorn-fastapi:python3.8
WORKDIR /app
COPY . /app
RUN pip install -r requirements.txt
EXPOSE 5000
CMD ["uvicorn", "main:app", "--host", "0.0.0.0", "--port", "5000"]
The command I am running to build the Docker image:
docker build -t tags:latest .
The command I am using to run the Docker container:
docker run -p 5000:5000 tags:latest
In case you are getting the error message from a client (e.g. your browser on the same machine):
When a server opens a socket on 0.0.0.0, it listens on all of the machine's interfaces.
When you connect as a client to a server, you need to give a concrete address, such as 127.0.0.1 or one of the configured IPs of that machine. Connecting to 0.0.0.0 should always result in an error message.
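In other words, keep --host 0.0.0.0 on the server side inside the container, but point the browser or curl at a concrete address on the host. With the -p 5000:5000 mapping above, that would be:
# The server listens on all interfaces inside the container,
# but the client must target a real address on the host machine.
curl http://localhost:5000/
# or equivalently
curl http://127.0.0.1:5000/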
I have two images that should communicate with each other. The first image is the web API, and the second image is the web itself.
Before I build and run the images, I create a new network called nat using the command:
docker network create nat
After this, I build my two images: image-api, which listens on port 8080, and image-web, which listens on port 8081.
Then, I run the image-api image using the command:
docker run -d -p 3000:8080 --network nat image-api
I mapped container port 8080 to host port 3000. When I access localhost:3000 in my browser, it runs without errors and returns the response it should.
The problem is with my second image, image-web. I run it using the command:
docker run -d -p 3001:8081 --network nat image-web
When I access localhost:3001 in my browser, the app is running, but it does not show the data from the image-api container. When I check the logs of the image-web container, I see an error like:
Error: getaddrinfo ENOTFOUND image-api
Just in case, I tried to access the image-api container from my image-web container using a URL like this:
http://image-api/ping
Here's my image-api Dockerfile:
FROM node:14 as builder
WORKDIR /app
COPY package.json /app
RUN npm install
FROM node:14
EXPOSE 8080
CMD ["node", "server.js"]
WORKDIR /server
COPY --from=builder /app/node_modules /server/node_modules
COPY server.js .
And here's my image-web Dockerfile:
FROM node:14 as builder
WORKDIR /app
COPY package.json .
RUN npm install
FROM node:14
ENV IMAGE_URL=http://image-api/ping
EXPOSE 8081
CMD ["node", "app.js"]
WORKDIR /web
COPY --from=builder /app/node_modules /web/node_modules
COPY /src .
Just in case: I have already tried running both of them without Docker, and they run as expected on my local computer.
EDIT:
I've also tried running the container with a name, like this:
docker run -d -p 3000:8080 --network nat --name imageapi image-api
And tried to access it using:
http://imageapi/ping
But it still gives me the same error.
SOLUTION:
As pointed out by @davidMaze in the answer below: after running the container with the --name flag, I can access it at http://imageapi:8080/ping.
You need to explicitly specify a name with docker run --name when you start the container. Since you can run multiple containers from the same image, Docker doesn't automatically assign a container name based on the image name; you need to set it yourself.
docker run -d -p 3000:8080 --network nat --name image-api image-api
docker run -d -p 3001:8081 --network nat --name image-web image-web
Once you set the container name, Docker provides an internal DNS service and the container will be reachable by that name from other containers on the same network.
"Use bridge networks" in the Docker documentation describes this further, though mostly in contrast to an obsolete networking mode.
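As a quick check (container names as in the commands above; this assumes curl is available in the image), you can verify the name resolution from inside the image-web container:
# Docker's embedded DNS on the user-defined "nat" network resolves the
# container name image-api to its IP; note that the container port 8080
# is used here, not the published host port 3000.
docker exec -it image-web sh -c "curl http://image-api:8080/ping"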
In order to work with pwa-studio I have to use Docker, because I am on Windows. So I set it up in the simplest way I could, with this Dockerfile:
FROM node:10
RUN mkdir /app
WORKDIR /app
EXPOSE 3000
Then I created the container:
winpty docker run -it -p 3000:3000 --mount type=bind,source="$(pwd)",target=/app nfr:1.0 bash
I run all commands related to installing packages or running the app while attached to the container.
Using the address localhost:3000 I can see my app running, since the port is published.
PROBLEM:
One of the first configuration steps in pwa-studio is to add a custom hostname and SSL cert, which is done with
yarn buildpack create-custom-origin ./
As a result, the app inside the container is no longer available via localhost:3000 but via a domain name; in my case it is https://pwa-aynbv.local.pwadev:3000/
How can I configure Docker to expose this domain outside of the container?
Thanks in advance for any help
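Not an answer from the thread, just an idea to try: the port is already published with -p 3000:3000, so what remains is making the custom hostname resolve on the Windows host. One common approach is to point the generated domain at 127.0.0.1 in the hosts file:
# Append to C:\Windows\System32\drivers\etc\hosts (edit as Administrator).
# pwa-aynbv.local.pwadev is the domain generated by create-custom-origin;
# mapping it to 127.0.0.1 lets https://pwa-aynbv.local.pwadev:3000/ reach
# the published container port on this machine.
127.0.0.1    pwa-aynbv.local.pwadev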
I have this Dockerfile:
# Use the official image as a parent image
FROM mysql/mysql-server:8.0
# Set the working directory
RUN mkdir -p /usr/src/app
WORKDIR /usr/src/app
# Copy the file from your host to your current location
COPY customers.sql .
COPY entrypoint.sh .
# Inform Docker that the container is listening on the specified port at runtime.
EXPOSE 1433:1433
# Run the command inside your image filesystem
RUN chmod +x entrypoint.sh
# Run the specified command within the container.
RUN /bin/bash ./entrypoint.sh
And entrypoint.sh:
mysql --host=localhost --protocol=tcp -u root -pMypassword -e "create database customersDatabase; use customersDatabase; source customers.sql;"
But I get the following error message when I run docker build:
ERROR 2003 (HY000): Can't connect to MySQL server on 'localhost' (99)
What is the correct way to write entrypoint.sh in order to run these commands?
BEFORE OP EDIT:
problem:
./entrypoint.sh: line 2: docker: command not found
You are trying to run Docker inside Docker.
Possible solutions:
1) Mount the host's Docker socket into the container, or
2) Install Docker inside the container before you run your entrypoint (apt install docker.io), and expect a significantly larger image.
Difference between 1) and 2): in 1) the Docker inside your container is the host's Docker, while in 2) the Docker installed inside the container is independent and thus isolated from the host.
AFTER OP EDIT:
problem:
ERROR 2003 (HY000): Can't connect to MySQL server on 'localhost' (99)
EDIT: Since you edited your question, which now doesn't correspond with your title, I will also address your second problem.
You cannot connect to localhost because, inside Docker, localhost is the container itself, not your host.
This can be solved by using the host network driver.
Or, preferably, put your DB in Docker too, place both containers on the same Docker network, expose the port, name your DB container mysql_database, and connect to it as mysql_database:port (a rough sketch follows below).
Or don't try to connect to a DB that lives inside your container from within that same container; I think that's an antipattern. Usually you can get into the DB's CLI and run commands there.
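A rough sketch of the "DB in its own container" option (the network and container names here are illustrative, not from the question):
# Create a shared network and run the MySQL server in its own container.
# MYSQL_ROOT_HOST=% lets root connect from other containers; assume this is
# acceptable for local development only.
docker network create app_net
docker run -d --name mysql_database --network app_net \
  -e MYSQL_ROOT_PASSWORD=Mypassword -e MYSQL_ROOT_HOST=% \
  mysql/mysql-server:8.0

# Once the server has finished initializing, reach it from any other
# container on the same network by its container name, not localhost.
docker run --rm --network app_net mysql/mysql-server:8.0 \
  mysql --host=mysql_database --port=3306 -u root -pMypassword \
  -e "CREATE DATABASE customersDatabase;"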