How to change the port number when hosting a MinIO server?

I am currently working on a project where I am attempting to use MinIO with a data-moving program developed by my company. This broker software only allows devices using port 80 to successfully complete a job; however, any avid user of MinIO knows that MinIO serves on port 9000 by default. So my question is: is there a way to change the port on which the MinIO server is hosted? I've tried looking through the config.json file for an address variable to assign a port number to, but none of the address variables I changed had any effect on the endpoint port number. For reference, I am hosting MinIO on a Windows 10 virtual machine during the test phase of the project and will be moving it onto a dedicated server (also Windows 10) upon successful completion of testing.

Add --address :80 when you start your MinIO server.
You can refer to this: https://docs.min.io/docs/multi-tenant-minio-deployment-guide.html

When you start the MinIO server, use the following command…
minio server --address :[port you want to use] [path to data directory]
for example…
minio server --address :8000 /data
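Since the question mentions hosting on Windows 10, a full invocation might look something like this sketch (the data directory C:\minio\data is just a placeholder; any empty directory works):
minio.exe server --address :80 C:\minio\data
Here :80 makes the server listen on port 80 on all network interfaces, which is what the broker software described in the question expects.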

Related

Use Docker with the same port as another program

I am currently facing the following problem:
I built a Docker container of a Node server (a simple Express server which sends tracing data to Zipkin on port 9411) and want to run it alongside Zipkin.
So as I understand it, the Node server should send tracing data to Zipkin using port 9411.
If I run the server with Node only (not as a Docker container), I can run it alongside Zipkin and everything works fine.
But if I have Zipkin running and then want to fire up my Docker container, I get the error
Error starting userland proxy: listen tcp4 0.0.0.0:9411: bind: address already in use.
My understanding is that there is a conflict on port 9411, since it is already taken by Zipkin, but obviously the server in the Docker container also needs it to communicate with Zipkin.
I would appreciate it if anybody had an idea how I could solve this problem.
Greetings,
Robert
When you start a Docker container, you can add a port binding like this:
docker run ... -p 8000:9000
where 8000 is the port you use on the host machine to reach port 9000 inside the container.
Don't bind the Express server to 9411, as Zipkin is already using that port.
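As a concrete sketch, assuming the Express image were tagged my-express-app (a placeholder name) and its server listened on port 9411 inside the container, publishing it on a different host port avoids the collision with the Zipkin instance already bound to 9411:
docker run -d -p 9412:9411 my-express-app
The container still sees port 9411 internally; only the host-side port changes.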
I found the solution: using the flag --network="host" does the job; -p is then not needed.
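A sketch of that approach, assuming a Linux host and the placeholder image tag my-express-app: with --network=host the container shares the host's network stack, so localhost:9411 inside the container reaches the Zipkin already running on the host, and no -p mapping is needed.
docker run --network=host my-express-app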

Source client having trouble connecting to serverless Icecast server on Cloud Run

Is it possible to make a serverless Icecast server?
I'm trying to make an internet radio station with Icecast on Google's serverless Cloud Run platform. I've put this Docker image in Container Registry and then created a Cloud Run service with the default Icecast port 8000. It all seems to work when visiting Cloud Run's provided URL; using it I can get to the default Icecast and admin pages.
The problem is trying to connect to the server with a source client (I tried mixxx and butt). I think the problem is with ports, since setting the port to 8000 in mixxx gives a Socket is busy error, while butt simply doesn't connect. Setting the port to 443 in mixxx gives a Socket error, while butt reports connect: server answered with 411!
I tried to do the same thing on Compute Engine, but just installing Icecast rather than using a Docker image, and everything works as intended. As I understand it, Cloud Run provides a URL for the container (https://example.app) with the port given at setup (8000 for Icecast), but the source client tries to connect to that URL with its own configured port (http://example.app:SOURCE_CLIENT_PORT). So I'm not sure if the problem is with HTTPS or whether I just need to configure the ports differently.
With Cloud Run you can expose only one port externally. By default it's port 8080, but you can override this when you deploy your revision.
This port is wrapped behind a front layer of the Google Cloud infrastructure, named Google Front End, and exposed through a DNS name (*.run.app) on port 443 (HTTPS).
Thus, you can reach your service only through that port-443 wrapping of the single exposed port; any other port will fail.
With Compute Engine you don't have this limitation, and that's why you have no issues there. Simply open the correct port with firewall rules and enjoy.
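As an illustration (the service, project, and image names are placeholders, not from the question), the single exposed container port is set at deploy time, and the source client then has to speak HTTPS to the *.run.app hostname on port 443:
gcloud run deploy icecast --image gcr.io/my-project/icecast --port 8000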

Unable to receive data from socket on Docker Windows

I have a webserver listening on some port. I dockerize this server and publish its port with the command:
docker run -p 8080:8080 image-tag
Now I have written a short Java client that opens a socket to localhost on this port (it is able to connect). However, when I read data from this socket via the readLine function, it always returns null. It shouldn't. Can someone point me in some direction on how to troubleshoot this? Things I have tried:
The webserver and client work fine without Docker.
Using my Docker installation, I'm able to pull the getting-started app and it works fine (meaning there is no problem with my Docker setup; it can still publish ports).
My Dockerfile only pulls openjdk:latest as the base image. Other than that, nothing special.
It is Linux Docker on a Windows host.
The port the webserver is running on is correct and the same as the published port.
I would be very happy if someone could help.
By making the server app inside the container listen on address 0.0.0.0 instead of localhost, I was able to solve the problem.
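A quick way to double-check from the host that the published port is reachable after that change (the container name my-server is a placeholder, and the curl line only applies because the server speaks HTTP):
docker port my-server
curl -v http://localhost:8080/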

Flask in docker, access other flask server running locally

After finding a solution for this problem, I have another question: I am running a Flask app in a Docker container (my web map), and on this map I want to show tiles served by a (Flask-based) Terracotta tile server running in another Docker container. The two containers are on the same Docker network and can talk to each other; however, only the port where my web server is running is open to the public, and I'd like to keep it that way. Is there a way I can serve my tiles somehow "from local" without opening the port of the tile server? Maybe by setting up some redirects or something?
The main reason for this is that I need someone else to open ports for me, which takes ages.
If you are running your Docker containers on a remote machine like EC2, then you need not worry about a port being open to the public, as by default ports are closed on EC2 and similar services. You just need to open the port on which you are running your app; you can use the AWS console for that.
If you are running your Docker container locally, or on some server for which you don't have console access, then you can use some kind of firewall to open or close a port. I personally prefer UFW for Ubuntu systems. You can allow a certain port with a simple command such as sudo ufw allow 9000, which permits incoming TCP packets on port 9000. Similarly, you can deny incoming packets to a port. You can also open a port only to a certain IP (like your own) using sudo ufw allow from <ip address>.
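A minimal sketch, assuming the web map is published on port 8080 and the Terracotta tile server would sit on port 5000 (both port numbers are placeholders):
sudo ufw allow 8080
sudo ufw deny 5000
Or, to open the tile server's port only to a single trusted IP instead of denying it outright:
sudo ufw allow from 203.0.113.7 to any port 5000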

How to make a Docker container's service accessible via the container's IP address?

I'm a bit confused. Trying to run both an HTTP server listening on port 8080 and an SSH server listening on port 22 inside a Docker container, I managed to accomplish the latter but, strangely, not the former.
Here is what I want to achieve and how I tried it:
I want to access services running inside a Docker container using the IP address assigned to the container:
ssh user@172.17.0.2
curl http://172.17.0.2:8080
Note: I know this is not how you would configure a real web server but I want the container to mimic an embedded device which runs both services and which I don't have available all the time. (So it's really just a local non-production thing with no security requirements).
I didn't expect integrating the SSH server to be easy, but to my surprise I just installed and started it and had to do nothing else to be able to connect to the machine via ssh (no EXPOSE 22 or --publish).
Now I wanted to access the container via HTTP on port 8080 and fiddled with --publish and EXPOSE but only managed to make the HTTP server available through localhost/127.0.0.1 on the host. So now I can access it via
curl http://127.0.0.1:8080/
but I want to access both services via the same IP address which is NOT localhost (e.g. the address the container got randomly assigned is totally OK for me).
Unfortunately
curl http://172.17.0.2:8080/
waits until it times out every time I try it.
I tried docker run with -p 8080, -p 127.0.0.1:8080:8080, -p 172.17.0.2:8080:8080 and many more combinations, with and without EXPOSE 8080 in the Dockerfile, but without success.
Why can I access the container via port 22 without having exposed anything?
And how do I make it accessible via the container's IP address?
Update: looks like I'm experiencing exactly what's described here.
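For reference, the bridge IP a container was assigned (the 172.17.0.2 used above) can be read back with docker inspect; the container name my-container is a placeholder:
docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' my-container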
