Nginx Docker container error / cannot open http://localhost:8030/docs - docker

Trying to open a REST API built with Swagger and FastAPI, with MySQL, on Docker Desktop.
My nginx Docker container:

Related

How can I read files from docker container inside a flask app

I am running an nginx application using Docker. My nginx application creates some files in the Docker container, and I can see those files in the directory. I tried to read those files from my Flask application, but I cannot, since my Flask application runs in another Docker container.
Is there a way to read files from a Docker container inside a Flask application running on localhost/Docker?
You can explore a Docker container's file system via:
docker exec -it [containerId] bash
You can also try docker cp.
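For example, a minimal sketch of docker cp; the container name nginx-app and the file path are assumptions for illustration:
# copy a generated file out of the running container onto the host
docker cp nginx-app:/usr/share/nginx/html/generated.txt ./generated.txt
Alternatively, mounting a shared volume into both containers would let the Flask app read the files directly, without copying.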

Docker Nginx: host not found in upstream

I have my Docker app running in an AWS EC2 instance, and I am currently trying to map the app to the external IP address using Nginx. Here is a snapshot of the containers that I have running:
My test-app is a fairly simple app that displays a static HTML website. I deployed it using the following command:
docker run -d --name=test-app test-app
The nginx-proxy has the following proxy.conf
server {
    listen 80;
    location / {
        proxy_pass http://test-app;
    }
}
Here is the Dockerfile for the nginx proxy:
FROM nginx:alpine
RUN rm /etc/nginx/conf.d/*
COPY proxy.conf /etc/nginx/conf.d/
nginx-proxy is run using the following command:
docker run -d -p 80:80 --name=nginx-proxy nginx-proxy
However, the nginx container never runs, and here is the error log I get:
2020/03/27 15:55:19 [emerg] 1#1: host not found in upstream "test-app" in /etc/nginx/conf.d/proxy.conf:5
nginx: [emerg] host not found in upstream "test-app" in /etc/nginx/conf.d/proxy.conf:5
While your two containers are both running and you have properly exposed the ports required for the NGINX container, you have not exposed any ports for the test-app container, so the NGINX container has no way of talking to it. Exposing ports directly with docker run would likely defeat the point of using a reverse proxy in your situation. Instead, you should create a network for both of your Docker containers and then add them to it. They will then be able to communicate with one another over a bridge network. For example:
docker network create example
docker run -d --network=example --name=test-app test-app
docker run -d -p 80:80 --network=example --name=nginx-proxy nginx-proxy
Now that you have both of your containers on the same network, Docker enables DNS-based service discovery between them by container name, so they can resolve each other. You can test connectivity like so: docker exec -it nginx-proxy ping test-app (provided ping is installed in that Docker container).
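For reference, a sketch of the same setup as a docker-compose.yml, using the image names from the commands above; Compose puts both services on a shared default network automatically, so test-app resolves by service name:
version: '3'
services:
  test-app:
    image: test-app
  nginx-proxy:
    image: nginx-proxy
    ports:
      - "80:80"
    depends_on:
      - test-app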

I can't access my Docker container on GCP Compute Engine

I have my Docker container running on GCP Compute Engine. The CE server is running on CentOS 7. My Docker container has the application being served by Nginx with port 80 exposed. For some reason, I can't access it from the external IP address in my browser.
I ran the container with this command:
sudo docker run --name myapp -p 80:80 -d myapp:1.0.0
When I do sudo curl <internal_ip>:80 or sudo curl <localhost>:80, it shows that the application is running and returns the content, but if I try to access <external_ip>:80 in my browser, it doesn't load anything. What can I do to make this accessible through the external IP address?
It seems I had to configure the firewall to open up port 80.
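A minimal sketch of opening the port with the gcloud CLI; the rule name allow-http is an assumption, and the rule applies to the default network unless --network is given:
gcloud compute firewall-rules create allow-http --allow tcp:80 --source-ranges 0.0.0.0/0
Checking "Allow HTTP traffic" in the VM's settings in the Cloud Console achieves the same thing via the http-server network tag.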

Access container started at Docker Compose step from Hosted Linux Agent on Azure VSTS

I am using VSTS build step Docker Compose v 0.* on Hosted Linux Agent.
Here is my docker-compose:
version: '3.0'
services:
  storage:
    image: blobstorageemulator:1.1
    ports:
      - "10000:10000"
  server:
    build: .
    environment:
      - ENV=--tests
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
    ports:
      - "8080:8080"
    depends_on:
      - storage
I use the Run services command.
So basically I am running 2 Linux containers inside another Linux container (Build Agent).
I was able to connect these containers to each other (server uses storage through a connection string, which contains storage as the host: http://storage:10000/devstoreaccount1).
Question: how do I get access to the server from the build agent container? When I do curl http://localhost:8080 in the next step, it returns Failed to connect to localhost port 8080: Connection refused.
PS
Locally, I run docker-compose and can easily access my exposed port from the host OS (I have VirtualBox with Ubuntu 17.10).
UPDATE:
I tried using docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' server-container-name to get the IP address of the container running my server app and curl that IP, but now I am getting a connection timeout.
There is no way to access it from the host container; you have to run the request inside the container with an exec command, for example:
docker-compose -p project_name exec server curl http://localhost:8080
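Equivalently, a sketch using plain docker exec; the container ID comes from docker ps, and this assumes curl is installed in the server image:
docker exec <server_container_id> curl -s http://localhost:8080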

Not able to access files in xampp htdocs installed in ubuntu docker container

I have installed XAMPP and deployed my PHP code in a Docker image and started a container on Ubuntu 14.04.
I cannot access phpMyAdmin by browsing to my Docker container's IP address followed by /phpmyadmin in my host computer's Firefox browser, and I cannot reach my web interface in the browser either. When I try to access my web interface, it shows the following:
Access forbidden!
You don't have permission to access the requested object. blah blah....
Error 403
Note: I have already given the required permissions to the files in the xampp/htdocs folder.
Running a new container with sudo docker run -ti ubuntu will not bind any ports. The -p option needs to be used to map a host port to a container port.
In your case, assuming your web server is running on port 80 in the container and you want to access it from your host web browser on port 9090, start the container with the command:
docker run -it -p 9090:80 ubuntu
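Once the container is up, a quick check from the host that the port mapping works; note this only verifies the mapping, since XAMPP's default config may still deny remote access to /phpmyadmin with a 403:
curl http://localhost:9090/phpmyadmin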
