Docker-compose runs into an error due to Socket Address - docker

When I try to build and run my Docker project using docker-compose up, it returns this error output:
I have deleted all the old containers, I'm not in swarm mode, and I have no other Docker images or containers running, so I don't know why there is a problem with a socket on port 5000.
Thanks buddies.
EDIT: It doesn't matter if I change the port in docker-compose.yml; the console throws the same error.
EDIT 2: After changing the port to 9000:9000 in docker-compose.yml:

The error indicates that port 5000 is already in use. Change the port mapping to bind to some other host port.
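For example, a minimal sketch, assuming the service is called web and the application listens on port 5000 inside the container (both names are assumptions): first check what is already holding the host port, then change only the host side of the mapping.

# find out what is already bound to 5000 on the host
sudo lsof -i :5000

# docker-compose.yml
services:
  web:
    ports:
      - "9000:5000"   # host port 9000 -> container port 5000

Remapping only helps if the conflict is on the host side; if another service in the same compose file publishes the same host port, that mapping has to change as well.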

Related

Joplin Docker Compose on Ubuntu 22.04

I am attempting to deploy the Joplin Docker server on my VM, which has a public IP address.
I am following: https://github.com/laurent22/joplin/blob/dev/docker-compose.server.yml
And for some reason I am getting an error:
exec /usr/local/bin/tini: exec format error
I am assuming it has to do with some network-related issue, maybe. I have allowed port 22300 in my VM's public ingress rules, but the webpage is not even able to load.
Docker entrypoint is: /usr/local/bin/tini
So I think it is failing to even start.
Also, the database container is running fine; only the application container is having a problem.
And I am not willing to use port 443 or 80 on my VM for this application.
Please guide.
I tried this as well: https://blog.5mx.de/posts/joplin-server-docker/
With that, I am not getting any error, but I am not able to load the webpage either.
And I want to keep ports 443 and 80 on my VM free.

Use Docker with the same port as another program

I am currently facing the following problem:
I built a Docker container for a Node server (a simple Express server which sends tracing data to Zipkin on port 9411) and want to run it alongside Zipkin.
As I understand it, the Node server should send tracing data to Zipkin using port 9411.
If I run the server with Node directly (not in Docker), I can run it alongside Zipkin and everything works fine.
But if Zipkin is already running and I then fire up my Docker container, I get the error
Error starting userland proxy: listen tcp4 0.0.0.0:9411: bind: address already in use.
My understanding is that there is a conflict over port 9411: it is already taken by Zipkin, but the server in the Docker container also needs it to communicate with Zipkin.
I would appreciate it if anybody has an idea how I could solve this problem.
Greetings,
Robert
When you start a Docker container, you add a port binding like this:
docker run ... -p 8000:9000
where 8000 is the port on the host machine that you use to reach port 9000 inside the container.
Don't bind the Express server to 9411, as Zipkin is already using that port.
I found the solution: using the flag --network="host" does the job; -p is then not needed.
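For reference, a minimal sketch of that approach (the image name tracing-app is an assumption): with host networking the container shares the host's network stack, so the Express server can reach Zipkin at localhost:9411 and no -p mappings are needed.

docker run -d --network="host" tracing-app

Keep in mind that host networking behaves differently on Docker Desktop (Mac/Windows) than on a native Linux host, and it removes the port isolation that -p normally provides.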

How to connect Dockerized ASP.NET Core app -> Dockerized PostgreSQL?

I have 2 Docker containers:
my ASP.NET Core Web server -p 5001:80
postgresql -p 5451:5432
When I configure my web server to work with PostgreSQL running on my host, it works.
But when I configure my web app to work with PostgreSQL in Docker and open http://localhost:5001, it starts but then an error appears:
warn: Microsoft.AspNetCore.HttpsPolicy.HttpsRedirectionMiddleware[3]
Failed to determine the https port for redirect.
fail: Microsoft.AspNetCore.Diagnostics.ExceptionHandlerMiddleware[1]
An unhandled exception has occurred while executing the request.
System.InvalidOperationException: An exception has been raised that is likely due to a transient failure.
---> Npgsql.NpgsqlException (0x80004005): Exception while connecting
---> System.Net.Internals.SocketExceptionFactory+ExtendedSocketException (99): Cannot assign requested address [::1]:5451
If I connect the app to an external, non-Dockerized PostgreSQL, it works fine.
What is incorrect and how do I fix it?
Here is my docker-compose file:
https://pastebin.com/b8FbHSLL
So, localhost here refers to the localhost of the container which runs the webserver, not your localhost.
Therefore you can't use localhost to refer to another container without doing some networking-related setup first.
There are several ways to proceed. Since you mention in the comment you're using docker-compose, I would advise the following:
With docker-compose, networking is relatively simple. If all the services that need to communicate with each other are included in the docker-compose.yml file, you run all of them with docker-compose up. If you haven't specified any specific network in the docker-compose file, docker-compose sets up a single network for all the included services, which makes it possible for each container to reach the others by using a hostname identical to the service name.
Basically, you can then replace localhost with the service name of the service you want, i.e. if Postgres is called "db" in your docker-compose file, you replace localhost:5451 with db:5432.
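For illustration, a minimal docker-compose.yml sketch; the service names web and db, the database name, and the connection-string variable are assumptions, not the asker's actual values:

# docker-compose.yml
services:
  web:
    build: .
    ports:
      - "5001:80"
    environment:
      # point the app at the db service name and the container port 5432
      - ConnectionStrings__Default=Host=db;Port=5432;Database=mydb;Username=postgres;Password=secret
  db:
    image: postgres
    ports:
      - "5451:5432"   # only needed if you also want to reach Postgres from the host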
If you specify custom networks in your docker-compose file, then you have to make sure the web-server and postgres are using the same network.
If you need to run the webapp with docker run instead of docker-compose up, then you need to include a --network argument so that they use the same network.
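A sketch of that docker run variant, assuming a user-defined network called app-net and an image called my-webapp (both names are assumptions):

docker network create app-net
docker run -d --network app-net --name db postgres
docker run -d --network app-net -p 5001:80 my-webapp

On the shared network, the web app reaches the database at db:5432, just as in the compose setup.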
More info here
Edit: Corrected the port number. We need to use the container port, not the host port, as mentioned by @Adiii in the comment above.

Docker compose not exposing port for application container

I have exposed port 80 in my application container's dockerfile.yml and mapped "80:80" in my docker-compose.yml, but I only get "Connection refused" after I run docker-compose up and try an HTTP GET on port 80 at my docker-machine's IP address. My Docker-Hub-provided RethinkDB instance's admin panel gets mapped just fine through that same dockerfile.yml ("EXPOSE 8080") and docker-compose.yml (ports "8080:8080"), and when I start the application on my local development machine, port 80 is exposed as expected.
What could be going wrong here? I would be very grateful for a quick insight from anyone with more docker experience!
So in my case, my service containers were bound to localhost (127.0.0.1), and therefore the exposed ports were never picked up by my docker-compose port mapping. I configured my services to bind to 0.0.0.0 instead, and now they work flawlessly. Thank you @creack for pointing me in the right direction.
In my case I was using
docker-compose run app
Apparently, the docker-compose run command does not create any of the ports specified in the service configuration.
See https://docs.docker.com/compose/reference/run/
I started using
docker-compose create app
docker-compose start app
and problem solved.
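Alternatively, the docs linked above also describe a --service-ports flag for docker-compose run that publishes the ports declared in the service configuration; a minimal sketch:

docker-compose run --service-ports app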
In my case I found that the service I was trying to set up had all of its networks set to internal: true. It is strange that this didn't cause an issue when doing a docker stack deploy.
I have opened up https://github.com/docker/compose/issues/6534 to ask for a proper error message so it will be obvious for other people.
If you are using the same Dockerfile, make sure you also expose port 80 (EXPOSE 80); otherwise, your compose mapping 80:80 will not work.
Also make sure that your HTTP server listens on 0.0.0.0:80 and not on localhost or a different port.
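A minimal sketch combining both points; the service name app and the build contents are assumptions:

# Dockerfile
EXPOSE 80
# the server started by CMD must listen on 0.0.0.0:80, not 127.0.0.1

# docker-compose.yml
services:
  app:
    build: .
    ports:
      - "80:80"   # host port 80 -> container port 80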

Connecting to Docker container connection refused - but container is running

I am running 2 Spring Boot applications: a client and a REST API. The client communicates with the REST API, which communicates with a MongoDB database. All 3 tiers run inside Docker containers.
I launch the containers normally, specifying the exposed ports in the Dockerfile and mapping them to a port on the host machine, such as -p 7070:7070, where 7070 is a port exposed in the Dockerfile.
When I run the applications through the java -jar [application_name.war] command, they work fine and can all communicate.
However, when I run the applications in Docker containers I get a connection refused error, for example when the client tries to connect to the REST API at http://localhost:7070.
But the command docker ps shows that the containers are all running and listening on the exposed and mapped ports.
I have no clue why the containers aren't recognizing that the other containers are running and listening on their ports.
Does this have anything to do with iptables?
Any help is appreciated.
Thanks
EDIT 1: When run inside containers on my machine, the applications work fine and don't throw any connection refused errors. The error only happens on that particular other machine.
I used container linking to solve this problem. Make sure you add --link <name>:<alias> at run time to the container you want linked. <name> is the name of the container you want to link to, and <alias> will be the host/domain of an entry in Spring's application.properties file.
Example:
If the alias supplied at run time is 'mongodb', i.e.
--link myContainerName:mongodb
then set spring.data.mongodb.host=mongodb in application.properties.
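A fuller sketch of that setup; the image and container names (mongo, my-rest-api, my-client) are assumptions:

docker run -d --name mongodb mongo
docker run -d --name rest-api --link mongodb:mongodb -p 7070:7070 my-rest-api
docker run -d --name client --link rest-api:rest-api -p 8080:8080 my-client

With the second link in place, the client reaches the REST API at http://rest-api:7070 instead of http://localhost:7070; from inside a container, localhost refers to the container itself. (Linking is a legacy feature these days; user-defined networks are the usual replacement.)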
