Docker: Unable to access MinIO web browser

I am having trouble accessing the MinIO embedded web-based object browser. The http://127.0.0.1:9000 and http://127.0.0.1:45423 addresses immediately show "This page isn't working. ERR_INVALID_HTTP_RESPONSE".
The http://172.22.0.8:9000 and http://172.22.0.8:45423 addresses load until they time out, then show "This page isn't working. ERR_EMPTY_RESPONSE".
Am I missing something in my Docker setup?
docker-compose.yml:
version: "3.7"
services:
  minio-image:
    container_name: minio-image
    build:
      context: ./dockerfiles/dockerfile_minio
    restart: always
    working_dir: "/minio-image/storage"
    volumes:
      - ./Storage/minio/storage:/minio-image/storage
    ports:
      - "9000:9000"
    environment:
      MINIO_ROOT_USER: minio-image
      MINIO_ROOT_PASSWORD: minio-image-pass
    command: server /minio-image/storage
Dockerfile
FROM minio/minio:latest
CMD wget https://dl.min.io/client/mc/release/linux-amd64/mc && \
chmod +x mc
From minio-image container logs:
API: http://172.22.0.8:9000 http://127.0.0.1:9000
Console: http://172.22.0.8:45423 http://127.0.0.1:45423
Documentation: https://docs.min.io
WARNING: Console endpoint is listening on a dynamic port (45423), please use --console-address ":PORT" to choose a static port.
Logging into the Docker container through the CLI and running pwd and ls returns minio-image/storage and airflow-files mlflow-models model-support-files, respectively.

I see a few problems here.
First, you're only publishing port 9000, which is the S3 API port. If I run your docker-compose.yml, access to port 9000 works just fine; on the Docker host, I can run curl http://localhost:9000 and get:
<?xml version="1.0" encoding="UTF-8"?>
<Error><Code>AccessDenied</Code><Message>Access Denied.</Message><Resource>/</Resource><RequestId>16A25441E50432A4</RequestId><HostId>b1eed50d-9218-488a-9df6-fe008e758b27</HostId></Error>
...which is expected, because I haven't provided any credentials.
If you want to access the console, you need to do two things:
As instructed by the log message, you need to set a static console port using --console-address.
You need to publish this port in the ports section of your docker-compose.yml.
That gives us:
version: "3.7"
services:
  minio-image:
    container_name: minio-image
    build:
      context: ./dockerfiles/dockerfile_minio
    restart: always
    working_dir: "/minio-image/storage"
    volumes:
      - ./Storage/minio/storage:/minio-image/storage
    ports:
      - "9000:9000"
      - "9001:9001"
    environment:
      MINIO_ROOT_USER: minio-image
      MINIO_ROOT_PASSWORD: minio-image-pass
    command: server /minio-image/storage --console-address :9001
Running the above docker-compose.yml, I can access the MinIO console at http://localhost:9001 and log in using the minio-image / minio-image-pass credentials.

Related

Docker compose for nginx, streamlit, and mariadb

I am not exactly sure how to go about this. I have an instance in AWS Lightsail with a static IP that has been granted read access to the MariaDB database. I am using Streamlit for my app and have stored my database credentials in a .env file. I then copied the code over and dockerized it, running the following command:
docker-compose up --build -d
It builds successfully, but when I use the static IP to view the web page I get the following error:
OperationalError: (2003, "Can't connect to MySQL server on 'localhost' ([Errno 99] Cannot assign requested address)")
Is there something I have to do, either in Docker or in MariaDB? Thank you in advance.
docker-compose.yml:
version: '3'
services:
  app:
    container_name: app
    restart: always
    build: ./app
    ports:
      - "8501:8501"
    command: streamlit run Main.py
  database:
    image: mariadb:latest
    volumes:
      - /data/mysql:/var/lib/mysql
    restart: always
    env_file: .env
  nginx:
    container_name: nginx
    restart: always
    build: ./nginx
    ports:
      - "80:80"
    depends_on:
      - app
      - database
I am not sure how the Streamlit app connects to MariaDB here.
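For what it's worth, the error points at the host name: inside the app container, localhost refers to that container itself, so the app has to reach MariaDB through its compose service name, database. A minimal sketch of the relevant .env entries, with hypothetical variable names (use whatever Main.py actually reads):

```ini
# .env -- variable names here are assumptions; match what Main.py reads
DB_HOST=database   # the compose service name, not localhost
DB_PORT=3306       # MariaDB's default port inside the compose network
```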

Nuxt.js 500 NuxtServerError under docker-compose

My system contains three containers:
mongodb
api backend, built with NestJS
web application, built with Nuxt.js
Mongo and the backend seem to be working, because I can access Swagger at localhost:3000/api/.
The Nuxt.js web app is failing, and I'm getting a 500 NuxtServerError.
Dockerfile (for the web app):
FROM node:12.13-alpine
ENV APP_ROOT /src
RUN mkdir ${APP_ROOT}
WORKDIR ${APP_ROOT}
ADD . ${APP_ROOT}
RUN npm install
RUN npm run build
ENV HOST 0.0.0.0
EXPOSE 4000
docker-compose.yml:
version: "3"
services:
  # backend nestjs app
  api:
    image: nestjs-api-server
    container_name: my-api
    depends_on:
      - db
    restart: unless-stopped
    environment:
      - NODE_ENV=production
    ports:
      - 3000:3001
    networks:
      - mynet
    links:
      - db
  # mongodb
  db:
    image: mongo
    container_name: db_mongo
    restart: unless-stopped
    volumes:
      - ~/data/:/data/db
    ports:
      - 27017:27017
    networks:
      - mynet
  # front web app, nuxt.js
  web:
    image: nuxtjs-web-app
    container_name: my-web
    depends_on:
      - api
    restart: always
    ports:
      - 4000:4000
    environment:
      - BASE_URL=http://localhost:3000/api
    command: "npm run start"
    networks:
      - mynet
networks:
  mynet:
    driver: bridge
It looks like the Nuxt.js app cannot connect to the API. In the log I see:
ERROR connect ECONNREFUSED 127.0.0.1:3000
But why? Swagger (served by the same API) works fine at http://localhost:3000/api/#/.
Any idea?
environment:
  - BASE_URL=http://localhost:3000/api
localhost inside a container means that particular container itself, i.e., it will try to reach port 3000 inside the my-web container.
Basically, front-end (browser) code cannot use container-to-container names. You can communicate via a public hostname or IP, or use the extra_hosts option in docker-compose to map an entry for localhost.
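If the server-side (SSR) code does need to call the API over the compose network, the service name resolves there; note that inside the network the NestJS container listens on 3001, since the 3000:3001 mapping only applies on the host. A sketch of that change (assuming the container-internal port really is 3001):

```yaml
web:
  environment:
    # the compose service name "api" resolves on mynet;
    # 3001 is the container-internal port per the "3000:3001" mapping
    - BASE_URL=http://api:3001/api
```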
Got it. The problem was in nuxtServerInit. This is a special Vuex action that runs on the server. I called $axios from it, and apparently you can't do that.
Once I commented that method out, it worked fine.

Is it possible to curl across a Docker network between two docker-compose.yaml files?

I have two applications, each running on its own network with a separate docker-compose.yaml. I am trying to make a request from app A to app B, but it doesn't work:
docker exec -it app_a_running curl http://localhost:8012/user/1
I get the error:
cURL error 7: Failed to connect to localhost port 8012
docker-compose-app-a.yaml
version: "3"
services:
  app:
    build: go/
    restart: always
    ports:
      - 8011:8011
    volumes:
      - ../src/app:/go/src/app
    working_dir: /go/src/app
    container_name: app-a
    command: sleep 72000
    networks:
      - app-a-network
networks:
  app-a-network:
docker-compose-app-b.yaml
version: "3"
services:
  app:
    build: go/
    restart: always
    ports:
      - 8012:8012
    volumes:
      - ../src/app:/go/src/app
    working_dir: /go/src/app
    container_name: app-b
    command: sleep 72000
    networks:
      - app-b-network
networks:
  app-b-network:
Questions:
Is it possible to do this?
If so, how? Please suggest :)
You can use curl against Docker containers. The reason your curl command didn't work is probably that you did not publish your container's port. For example, try:
docker run -d -p 8080:8080 tomcat
instead of
docker run -d tomcat
This forwards port 8080 of your machine to port 8080 of the container.
If you have a shell in a container, you can curl another container on your Docker network by its service name or container name, provided the target is on the same network.
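One common way to let two separate compose projects reach each other is an external network that both files join (the network name shared-net here is illustrative):

```yaml
# created once beforehand with: docker network create shared-net
# then referenced in BOTH docker-compose-app-a.yaml and docker-compose-app-b.yaml:
services:
  app:
    # ... existing settings ...
    networks:
      - shared-net
networks:
  shared-net:
    external: true
```

With both containers on shared-net, app A can reach app B by container name, e.g. docker exec -it app-a curl http://app-b:8012/user/1, provided the app inside app-b actually listens on 8012.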

How to access docker container using localhost address

I am trying to access a Docker container from another container using the localhost address.
The compose file is pretty simple. Both containers' ports are published.
There are no problems when building.
On my host machine I can successfully execute curl http://localhost:8124/ and get a response.
But inside the django_container, the same command gives a Connection refused error.
I tried putting them on the same network; the result still didn't change.
If I use the internal IP of the container instead, like curl 'http://172.27.0.2:8123/', I get a response.
Is this the default behavior? How can I reach clickhouse_container using localhost?
version: '3'
services:
  django:
    container_name: django_container
    build: ./django
    ports:
      - "8007:8000"
    links:
      - clickhouse:clickhouse
    volumes:
      - ./django:/usr/src/run
    command: bash /usr/src/run/run.sh
  clickhouse:
    container_name: clickhouse_container
    build: ./clickhouse
    ports:
      - "9001:9000"
      - "8124:8123"
      - "9010:9009"
With this line, - "8124:8123", you're mapping the clickhouse container's port 8123 to port 8124 on localhost, which is what lets you access ClickHouse from localhost at port 8124.
If you want to hit the clickhouse container from another container on the Docker network, you have to use the container's hostname. This is what I like to do:
version: '3'
services:
  django:
    hostname: django
    container_name: django
    build: ./django
    ports:
      - "8007:8000"
    links:
      - clickhouse:clickhouse
    volumes:
      - ./django:/usr/src/run
    command: bash /usr/src/run/run.sh
  clickhouse:
    hostname: clickhouse
    container_name: clickhouse
    build: ./clickhouse
    ports:
      - "9001:9000"
      - "8124:8123"
      - "9010:9009"
If you make the changes above, you should be able to access ClickHouse from within the django container like this: curl http://clickhouse:8123.
As in Billy Ferguson's answer, you can use localhost from the host machine only because you define a port mapping that routes localhost:8124 to clickhouse:8123.
From another container (django), you can't. If you insist, there is an ugly workaround: share the host's network namespace with network_mode, but then the django container shares the host's entire network stack.
services:
  django:
    hostname: django
    container_name: django
    build: ./django
    ports:
      - "8007:8000"
    links:
      - clickhouse:clickhouse
    volumes:
      - ./django:/usr/src/run
    command: bash /usr/src/run/run.sh
    network_mode: "host"
It depends on the config.xml settings. If config.xml contains <listen_host>0.0.0.0</listen_host>, you can use clickhouse-client -h your_ip --port 9001.
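For reference, the relevant fragment of config.xml (typically at /etc/clickhouse-server/config.xml inside the container) looks like this:

```xml
<!-- listen on all interfaces, not just loopback -->
<listen_host>0.0.0.0</listen_host>
```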

Why does my Go API running in Docker give no response when hit with Postman?

I have a file named docker-compose.yml in the bkapiv folder, with a users service. I run it with sudo docker-compose up -d, then sudo docker run users/image; I start the database the same way with sudo docker-compose up -d and sudo docker run mongo. Both start. But when I hit the route on port 8080, it does not respond with any valid output; Postman shows me Could not get any response. My docker-compose.yml is given below:
version: '2'
services:
  web:
    build: ./users
    ports:
      - "8080:8080"
  users:
    image: cinema/movies
    container_name: cinema-movies
    depends_on:
      - db
    links:
      - db
    environment:
      VIRTUAL_HOST: movies.local
  db:
    image: mongo
    container_name: users_db
    ports:
      - "27019:27019"
    volumes:
      - ./backup:/backup:rw
Folder structure:
bkapiv/
  users/
    Dockerfile
  docker-compose.yml
The URL I am hitting in Postman is movies.local/users with method POST, but it shows me COULD NOT GET ANY RESPONSE.
How do I resolve this and send data through my Go API to MongoDB?
In your docker-compose file you are publishing container port 8000 to host port 8000, not 8080!
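If the Go API in the users container is what Postman should reach, that service also needs a published port; a sketch assuming the API listens on 8080 inside the container (the host side must differ from web's 8080, which is already taken):

```yaml
users:
  image: cinema/movies
  container_name: cinema-movies
  ports:
    - "8081:8080"   # host:container -- pick a free host port
```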
