Access to API documentation on Docker

I have a FastAPI app, and when I run it with uvicorn I can open its API documentation at localhost/docs.
When I run the FastAPI app in Docker, I am not able to see the API documentation.
Do I need to add an extra Docker container for the API documentation?
Here is my docker compose file:
version: "3.7"
services:
web:
build: ui
ports:
- 80:80
depends_on:
- api
api:
build: app
environment:
- PORT=80
ports:
- 8020:80
and I run it with docker-compose up --build.
I tried localhost/docs and localhost/swagger/index.html, but I wasn't able to see the API documentation in Docker.
Here is my fastapi docker file:
FROM python:3.9
COPY requirements.txt /app/
RUN pip install -r /app/requirements.txt
COPY ./ /app
WORKDIR /app
CMD ["uvicorn", "api:app", "--host", "0.0.0.0", "--port", "8020"]

I think the problem is mismatched ports: your Dockerfile starts uvicorn on port 8020, but the compose file maps host port 8020 to container port 80, where nothing is listening.
Try
ports:
  - 8020:8020
then you can see the docs at
http://localhost:8020/docs
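Alternatively, keep the 8020:80 mapping and make the container listen on port 80 instead. A minimal sketch of that variant, assuming the Dockerfile CMD is overridden from the compose file (the command line below is an illustration, not from the original question):
# Sketch: host port 8020 -> container port 80, with uvicorn told to bind to 80.
# The "command:" entry overrides the Dockerfile CMD for this service only.
services:
  api:
    build: app
    command: uvicorn api:app --host 0.0.0.0 --port 80
    environment:
      - PORT=80
    ports:
      - 8020:80
With this variant the docs are still reached at http://localhost:8020/docs from the host.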

Related

Problem with img reference in docker-compose

Hello, I tried to build docker-compose in my project with this file structure:
app/
-front-end/src/Components
-back-end/images
but when I run the build I get this error about the image relative URL:
frontend_1 | Module not found: Can't resolve '../../../../../back-end/images'
And this is my docker-compose file:
version: '2'
services:
  backend:
    network_mode: host
    build: ./back-end/
    ports:
      - "6200:6200"
    volumes:
      - ./back-end:/usr/src/app
  frontend:
    build: ./front-end/
    ports:
      - "3000:3000"
    volumes:
      - ./front-end:/usr/src/app
    depends_on:
      - backend
My frontend Dockerfile:
FROM node:10.15.3
RUN mkdir -p /usr/src/app
WORKDIR /TuKanasta
EXPOSE 3000
CMD ["npm", "start"]
The backend Dockerfile:
FROM node:10.15.3
RUN mkdir -p /usr/src/app
WORKDIR /TuKanasta
RUN npm install -g nodemon
EXPOSE 4000
CMD [ "npm", "start" ]
Note: My project runs 100% fine without Docker.
volumes:
  - ./back-end:/usr/src/app
...
volumes:
  - ./front-end:/usr/src/app
If both are set in the same service, the second bind mount would overwrite the first /usr/src/app content, as illustrated in gladiusio/gladius-archive-node issue 4.
If set in two different services, /usr/src/app in the frontend container would not be able to see back-end, which is mounted into the separate /usr/src/app volume of the backend service.
Declaring the volume as external might help, as illustrated in this thread.
Or copying into an existing volume (shown here).
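As a concrete illustration of the last idea, one option is to bind-mount the back-end images folder into the front-end container at a path inside its own source tree and import from there. A minimal sketch, assuming the front-end code can be pointed at a local images directory (the target path below is hypothetical, not taken from the question):
# Sketch only: /usr/src/app/src/images is an assumed target path; adjust it to
# wherever the front-end code actually expects to find the images.
services:
  frontend:
    build: ./front-end/
    volumes:
      - ./front-end:/usr/src/app
      - ./back-end/images:/usr/src/app/src/images
The front-end can then import from a path inside its own src tree instead of reaching outside its build context with '../../../../../back-end/images'.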

Docker does not build and run first service when there are two services in docker-compose

docker-compose up -d works fine when I have only the postgres service in the docker-compose.yml below. But once I add the python service, the postgres container never runs, even though its image is built; docker container ls -a shows that it does not exist.
version: '3'
services:
  postgres:
    build:
      context: .
      dockerfile: Dockerfile.postgres
    restart: always
    container_name: test_postgres
    ports:
      - "5431:5432"
  # Once this python service is added, the postgres does not run.
  python:
    depends_on:
      - postgres
    build:
      context: .
      dockerfile: Dockerfile.python
    restart: on-failure:10
    container_name: test_python
    ports:
      - "8001:8000"
I haven't been able to find clear information on why this should be. Some solutions mention that version 3 doesn't use depends_on anymore. I thought this might be the issue, so I removed it and added restart: on-failure:10, but it made no difference.
If I run docker-compose up -d with just the postgres service in it first, then add the python service into the same docker-compose.yml file and run it again, both images are built and containers run properly.
Not sure if necessary but here are the Dockerfiles for the services:
Dockerfile.postgres:
FROM postgres
WORKDIR /docker-entrypoint-initdb.d
ENV POSTGRES_DB test_postgres
ENV POSTGRES_PASSWORD 1234
COPY init.sql /docker-entrypoint-initdb.d
EXPOSE 5432
Dockerfile.python:
FROM python:latest
RUN mkdir /code
WORKDIR /code
COPY ./backend/ /code
RUN pip install --upgrade pip
RUN pip install -r requirements.txt
RUN python manage.py migrate
RUN python manage.py loaddata customers
EXPOSE 8000
CMD python manage.py runserver 0.0.0.0:8000
What am I doing wrong?

Docker Compose port mapping: 127.0.0.1 refused to connect

I am currently working with Docker and a simple Flask website to which I want to send images. For this I'm working on port 8080, but the mapping from Docker to host is not working properly, as I am unable to connect. Could someone explain to me what I am doing wrong?
docker-compose.yml
version: "2.3"
services:
dev:
container_name: xvision-dev
build:
context: ./
dockerfile: docker/dev.dockerfile
working_dir: /app
volumes:
- .:/app
- /path/to/images:/app/images
ports:
- "127.0.0.1:8080:8080"
- "8080"
- "8080:8080"
dev.dockerfile
FROM tensorflow/tensorflow:latest
WORKDIR /app
COPY requirements.txt ./
RUN pip install -r requirements.txt
RUN apt update && apt install -y python-tk
EXPOSE 8080
CMD ["python", "-u", "app.py"]
app.py
@APP.route('/test', methods=['GET'])
def hello():
    return "Hello world!"

def main():
    """Start the script"""
    APP.json_encoder = Float32Encoder
    APP.run(host="127.0.0.1", port=os.getenv('PORT', 8080))
I start my Docker with docker-compose up; this gives the output: Running on http://127.0.0.1:8080/ (Press CTRL+C to quit).
But when I do a GET request to 127.0.0.1:8080/test I get no response.
I have also tried docker-compose run --service-port dev as some people have suggested online, but this says that there is no service dev.
Can someone tell me what I am doing wrong?
Use:
APP.run(host="0.0.0.0", port=os.getenv('PORT', 8080))
Binding to 127.0.0.1 inside the container means Flask only accepts connections coming from inside that container, so the port published to the host never reaches it.
Using only:
ports:
  - "8080:8080"
is enough.
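Putting both changes together, a sketch of the trimmed ports section for the dev service (the rest of the service definition from the question stays as it is):
# Sketch: a single host:container mapping is enough once app.py binds to 0.0.0.0.
services:
  dev:
    ports:
      - "8080:8080"
After that, http://127.0.0.1:8080/test on the host reaches the Flask app inside the container.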

Hot reloading of Gatsby doesn't work inside docker for Windows

I have set up Gatsby to work inside a Docker container and it works perfectly fine, except for hot reloading.
I tried something like gatsby develop --host 0.0.0.0 --port 8080, but it doesn't do hot reloading; I have to restart the container manually.
In your docker-compose file you must add the following environment variable:
docker-compose.yml
version: '3'
services:
  gatsby-app:
    build:
      context: ./
      dockerfile: Dockerfile
    image: gatsby-app
    container_name: gatsby-app
    working_dir: /app
    volumes:
      - /app/node_modules
      - ./app:/app
    ports:
      - 80:8000
      - 81:9000
    environment:
      - NODE_ENV=development
      - GATSBY_WEBPACK_PUBLICPATH=/
      - CHOKIDAR_USEPOLLING=1
Your Dockerfile must be:
Dockerfile
FROM node:latest
EXPOSE 8000
RUN npm install -g gatsby-cli yarn
WORKDIR /app
COPY ./app/package.json .
RUN yarn install && yarn cache clean
CMD ["yarn", "develop", "-H", "0.0.0.0", "-p", "8000"]

Docker on Windows 10- D: drive not shared

I have a Django rest project which I am dockerizing.
My Dockerfile:
FROM python:3
ENV PYTHONUNBUFFERED 1
RUN mkdir /code
WORKDIR /code
COPY . /code/
RUN pip install -r requirements.txt
And docker-compose:
version: '3'
services:
  web:
    build: .
    command: python manage.py runserver 0.0.0.0:8000
    volumes:
      - .:/code
    ports:
      - "8000:8000"
I first ran docker-compose build, which was successful. I then ran docker-compose up, which gives me the error: ERROR: for web Cannot create container for service web: D: drive is not shared. Please share it in Docker for Windows Settings.
How to fix this?
You just need to activate the drive for sharing in the settings: open Docker Desktop Settings > Resources > File Sharing (called Shared Drives in older versions of Docker for Windows), tick the D: drive, and apply.