I have a service, some-service, that needs to make HTTP requests to a Jenkins service; both run in separate Docker containers. My issue is that whenever I make a request, the connection is refused.
Both some-service and Jenkins are running on ports 3030 and 4040 with host names some-service and jenkins, respectively.
I can hit Jenkins successfully on my local machine outside of some-service with:
curl -v http://localhost:4040/
However, I cannot reach Jenkins from inside some-service using:
curl -v http://jenkins:4040/
I'm using this simple docker-compose.yaml file to create both some-service and Jenkins:
version: '3'
services:
  some-service:
    container_name: service
    image: service:latest
    hostname: some-service
    build:
      context: service/
      dockerfile: Dockerfile
    environment:
      GET_HOSTS_FROM: dns
    networks:
      - eg-net
    ports:
      - 3030:3030
    depends_on:
      - jenkins
    links:
      - jenkins
    labels:
      kompose.service.type: LoadBalancer
  jenkins:
    container_name: jenkins
    image: jenkinsci/blueocean
    restart: always
    hostname: jenkins
    networks:
      - eg-net
    ports:
      - 4040:8080
    volumes:
      - ./jenkins-data:/var/jenkins_home
networks:
  eg-net:
    driver: bridge
You can't access http://jenkins:4040/ from within your service because port 4040 is only published on the host machine. That's why curl -v http://localhost:4040/ works on your host.
If you want to access Jenkins from within another container, you have to use port 8080, because that is the port exposed within the Docker network. So curl -v http://jenkins:8080/ from within your service will work.
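To confirm this without changing anything, you can run the same check from inside the running service container (a quick sanity check; it assumes curl is available in the service image, which seems to be the case since you were already running it there):

docker-compose exec some-service curl -v http://jenkins:8080/

The name some-service here is the Compose service name from the file above; docker exec service curl -v http://jenkins:8080/ against the container name works just as well.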
Hope this will clarify it.
Related
I have:
NiFi running in a Docker container. I'm running it via Docker Compose with the following config:
version: '3'
services:
  nifi:
    image: apache/nifi:latest
    container_name: nifi
    ports:
      - "8443:8443"
    volumes:
      - ./database_repository:/opt/nifi/nifi-current/database_repository
      - ./flowfile_repository:/opt/nifi/nifi-current/flowfile_repository
      - ./content_repository:/opt/nifi/nifi-current/content_repository
      - ./provenance_repository:/opt/nifi/nifi-current/provenance_repository
      - ./state:/opt/nifi/nifi-current/state
      - ./logs:/opt/nifi/nifi-current/logs
      - ./conf:/opt/nifi/nifi-current/conf
    restart: always
An FTP server located on localhost (outside the container).
In the NiFi flow I'm using a GetFTP processor, which tries unsuccessfully to connect to localhost:21.
I understand that the problem is that localhost is not reachable, because the container is isolated. What should I configure in docker-compose.yml to solve this?
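For reference, one common approach (a sketch only, not tested against this setup; it assumes Docker Engine 20.10+ or Docker Desktop, where the host-gateway value is supported) is to give the container a hostname that resolves to the Docker host and point the GetFTP processor at that name instead of localhost:

services:
  nifi:
    image: apache/nifi:latest
    extra_hosts:
      # hypothetical addition: maps this hostname to the Docker host
      - "host.docker.internal:host-gateway"

The processor would then connect to host.docker.internal:21. On Docker Desktop for Mac and Windows, host.docker.internal usually resolves even without the extra_hosts entry.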
As part of a school challenge, I need to run a Jenkins environment using Docker with a 7070:9090 port mapping.
I'm trying, so far unsuccessfully, to change the default access port for Jenkins (8080) in a Docker container.
Here's my code:
version: '3'
services:
  jenkins:
    container_name: jenkins
    image: jenkins-image
    ports:
      - "7070:8080"
    volumes:
      - "jenkins_home:/var/jenkins_home"
      - "/var/run/docker.sock:/var/run/docker.sock"
I managed to change the host port to 7070, but not the container's default access port of 8080.
All the tutorials I've found online only explain how to change the host side.
Any advice on how to change port 8080 and still have Jenkins running?
The access port is handled by Docker, not Jenkins. The ports syntax is HOST:CONTAINER, so if Jenkins is running on port 7070 inside your container, the following should work for you.
version: '3'
services:
  jenkins:
    container_name: jenkins
    image: jenkins-image
    ports:
      - "8080:7070"
    volumes:
      - "jenkins_home:/var/jenkins_home"
      - "/var/run/docker.sock:/var/run/docker.sock"
I have a Java application that connects to an external database through a custom Docker network, and I want to connect it to a Redis container.
docker-redis GitHub topic
I tried the following in the application config:
1. localhost:6379
2. app_redis://app_redis:6379
3. redis://app_redis:6379
None of these work in my setup.
Docker network setup:
docker network create -d bridge --subnet 192.168.0.0/24 --gateway 192.168.0.1 mynet
Connect to a Database Running on Your Docker Host
PS: this might be off-topic, but how can I define the network in docker-compose instead of using an external one?
docker-compose:
services:
  app-kotin:
    build: ./app
    container_name: app_server
    restart: always
    working_dir: /app
    command: java -jar app-server.jar
    ports:
      - 3001:3001
    links:
      - app-redis
    networks:
      - front
  app-redis:
    image: redis:5.0.9-alpine
    container_name: app-redis
    expose:
      - 6379
networks:
  front:
    external:
      name: mynet
With the setup above, how can I connect to the Redis container?
Both containers need to be on the same Docker network to communicate with each other. The app-kotin container is on the front network, but the app-redis container doesn't have a networks: block and so goes onto an automatically-created default network.
The simplest fix, given what you have, is to also put the app-redis container onto the same network:
app-redis:
  image: redis:5.0.9-alpine
  networks:
    - front
The Compose service name app-redis will then be usable as a host name, from other containers on the same network.
You can simplify this setup considerably. You don't generally need to manually specify IP configuration for the Docker-private networks. Compose can create the network for you, and in fact it will create a network named default for you. (Networking in Compose discusses this further.) links: and expose: aren't used in modern Docker networking; Compose can provide a default container_name: for you; and you don't need to repeat the working_dir: or command: from the image. Removing all of that would leave you with:
version: '3'
services:
  app-kotin:
    build: ./app
    restart: always
    ports:
      - '3001:3001'
  app-redis:
    image: redis:5.0.9-alpine
The server container will be able to use the other container's Compose service name app-redis as a host name, even with this minimal configuration.
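In either variant, the application-side change is just the host name in the connection string, which has to match the Compose service name app-redis (with a hyphen); the app_redis spelling from the attempts above will not resolve. For example, assuming a standard redis:// URL in the Java client's configuration:

redis://app-redis:6379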
I want to create a PostgreSQL cluster composed of a master and two slaves, running in three containers. I want to do that with docker-compose. Everything works fine, but I cannot ping the containers from my Mac.
On Stack Overflow there is the thread How could I ping my docker container from my host, but it addresses standalone Docker, not docker-compose.
Here is my docker-compose.yml:
version: '3.6'
volumes:
  pgmaster_volume:
  pgslave1_volume:
  pgslave2_volume:
services:
  pgmaster:
    container_name: pgmaster
    build:
      context: ../src
      dockerfile: Dockerfile
    image: docker-postgresql:latest
    environment:
      NODE_NAME: pgmaster # Node name
    ports:
      - 5422:5432
    volumes:
      - pgmaster_volume:/home/postgres/data
    networks:
      cluster:
        ipv4_address: 10.0.2.31
        aliases:
          - pgmaster.domain.com
  pgslave1:
    container_name: pgslave1
    build:
      context: ../src
      dockerfile: Dockerfile
    image: docker-postgresql:latest
    environment:
      NODE_NAME: pgslave1 # Node name
    ports:
      - 5441:5432
    volumes:
      - pgslave1_volume:/home/postgres/data
    networks:
      cluster:
        ipv4_address: 10.0.2.32
        aliases:
          - pgslave1.domain.com
  pgslave2:
    container_name: pgslave2
    build:
      context: ../src
      dockerfile: Dockerfile
    image: docker-postgresql:latest
    environment:
      NODE_NAME: pgslave2 # Node name
    ports:
      - 5442:5432
    volumes:
      - pgslave2_volume:/home/postgres/data
    networks:
      cluster:
        ipv4_address: 10.0.2.33
        aliases:
          - pgslave2.domain.com
networks:
  cluster:
    driver: bridge
    ipam:
      config:
        - subnet: 10.0.2.1/24
On my Mac, I have a 192.168.0.0 local network. I expect that running ping 10.0.2.31 will reach my container, but it does not. I think this is because the containers live inside the Linux VM that Docker creates on the Mac, and their IPs are not reachable from outside that VM.
Can someone help me understand how to make the above three IPs reachable? The IPs are reachable from one container to another.
Here is my full code:
https://github.com/sasadangelo/docker-postgres
You should be able to ping your containers from your host.
Via public IP:
Just use the container's public IP. (You had been trying to ping the container's local IP, inside the Docker network.)
How do you find the container's public IP? You can get it by running ifconfig inside the container, or by running docker container inspect <container_id> on your host; it should be listed under NetworkSettings.<network_name>.IPAddress.
Via container name/ID:
Docker runs a DNS service on your machine, so you can also use the container name or ID: ping <container_name/id>.
Note:
The way to access your containers from outside the Docker network is via their published ports. You have bound port 5432 on the Docker network to port 5442 on your host, so the container should listen and accept traffic at 127.0.0.1:5442 (that's your localhost at the port you've bound).
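For completeness, two concrete commands matching the compose file above (a sketch; the psql line assumes the image's default postgres superuser, so adjust it to whatever your docker-postgresql image actually creates):

docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' pgmaster
psql -h 127.0.0.1 -p 5422 -U postgres

The first prints the container's IP on the cluster network (10.0.2.31 here), and the second reaches the master through its published port 5422, which works even when the network IPs themselves are not reachable from macOS.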
Context
I was planning on simplifying the development setup of multiple docker-compose.yml projects by introducing virtual hosts locally. I looked around and decided to use nginx-proxy for the reverse proxy (it allows setting a VIRTUAL_HOST for each service).
Setup
To expose these on the host machine, I went the dnsmasq route, adding an /etc/resolver/test file on the Mac containing nameserver 127.0.0.1.
I put the above into action using a dev/docker-compose.yml file:
version: '3.5'
services:
  nginx-proxy:
    image: jwilder/nginx-proxy
    restart: 'always'
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - "/var/run/docker.sock:/tmp/docker.sock:ro"
  dnsmasq:
    image: andyshinn/dnsmasq
    restart: 'always'
    ports:
      - "53:53/tcp"
      - "53:53/udp"
    cap_add:
      - NET_ADMIN
    command: --log-facility=-
    volumes:
      - ./data/dnsmasq.conf:/etc/dnsmasq.conf
      - ./data/dnsmasq.d:/etc/dnsmasq.d
networks:
  default:
    external:
      name: proxynet
The data/dnsmasq.conf file only contains address=/test/127.0.0.1.
I've also created an external network proxynet and use that as the default network for the docker-compose file(s) (docker network create proxynet). This then allows other docker-compose files and services to be linked to the proxy.
I have the following proj1/docker-compose.yml:
version: "3.5"
services:
proj1-web:
image: jwilder/whoami
environment:
- VIRTUAL_HOST=proj1-web.test
networks:
default:
external:
name: proxynet
With both of these docker-compose files running (i.e., docker-compose up), I am able to access proj1-web.test from my local machine. Everything works as expected.
Now I want to be able to reference proj1-web.test in another container and have it resolve to the running container.
I'll create proj2/docker-compose.yml (similar to the previous one, just a different name):
version: "3.5"
services:
proj2-web:
image: jwilder/whoami
environment:
- VIRTUAL_HOST=proj2-web.test
networks:
default:
external:
name: proxynet
With everything running, I can access both proj1-web.test and proj2-web.test from my local machine. I can also successfully curl between the proj1 and proj2 services: docker-compose run proj1-web sh -c "apk update -qq; apk add curl -qq; curl -v proj2-web:8000".
Problem
The problem is that I cannot curl the virtual host name proj2-web.test from proj1: docker-compose run proj1-web sh -c "apk update -qq; apk add curl -qq; curl -v proj2-web.test":
* Rebuilt URL to: proj2-web.test/
* Trying 127.0.0.1...
* TCP_NODELAY set
* connect to 127.0.0.1 port 80 failed: Connection refused
* Failed to connect to proj2-web.test port 80: Connection refused
* Closing connection 0
curl: (7) Failed to connect to proj2-web.test port 80: Connection refused
Is there something I'm missing here? It appears the individual containers don't have access to the DNS being provided by dnsmasq to my local machine, and I cannot figure out how to grant them that access. Maybe I'm going about this the wrong way -- I am open to suggestions.
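A quick way to see what is happening from inside a container (a diagnostic sketch, relying on the alpine-based jwilder/whoami image already used above) is to check which resolver the container gets and what the name resolves to:

docker-compose run proj1-web sh -c "cat /etc/resolv.conf; nslookup proj2-web.test"

Note that in the curl output above the name does in fact resolve, but to 127.0.0.1, which from inside a container is the container itself rather than the nginx-proxy, hence the connection refused.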
I ended up creating a solution that addresses my question. You can see the repository for the tool here:
https://github.com/scoremedia/dcdc
I also created a blog post detailing a bit of this: https://kevinjalbert.com/docker-compose-dns-consistency-dcdc/
Hopefully this helps others.