Hi, I am new to Docker. I have created Docker images and can start them using Docker Compose.
I can access these services from a browser using the Docker host's IP, and the containers can ping each other using the ping command.
However, when I try to access one service from another using its Compose service name, it is not accessible.
Is it a firewall issue? Both services are reachable from the browser.
I also tried creating a network; when I inspect that network, both containers are on it and can ping each other.
These are my docker files
backendservice
FROM java:8
EXPOSE 8080
ADD /target/microService.jar microService.jar
ENTRYPOINT ["java","-jar","microService.jar"]
uiservice
FROM java:8
EXPOSE 8081
ADD /target/csuiservice.war csuiservice.war
ENTRYPOINT ["java","-jar","csuiservice.war"]
Both services are developed with Spring Boot, and each is accessible independently on its exposed port.
docker-compose.yml
version: '3'
services:
  backendservice:
    build: ./BAService
    volumes:
      - ./BAService:/usr/src/app
    ports:
      - 5001:8080
  website:
    image: uiservice
    ports:
      - 5000:8081
    links:
      - "backendservice:backendservice"
    volumes:
      - ./spring-boot-web-jsp:/usr/src/app1
    depends_on:
      - backendservice
networks:
  default:
    external:
      name: mynetwork
I am trying to access the backendservice with the following URL:
"http://backendservice:8080/getUsers"
I'm struggling to configure my docker-compose file to achieve the structure below. The web container needs to be accessible from virtual PCs and physical devices (local and external), but the Keycloak container should only be accessible by the web container. How can I achieve this?
Desired Network Structure
The web container starts a Flask app exposed on port 5000.
My docker-compose file currently:
version: '2'
services:
  web:
    build: .
    ports:
      - '5000:5000'
    volumes:
      - .:/app
    depends_on:
      - keycloak
  keycloak:
    container_name: keycloak
    image: jboss/keycloak:13.0.1
    ports:
      - '8080:8080'
    environment:
      KEYCLOAK_USER: admin
      KEYCLOAK_PASSWORD: admin
If a container doesn't have ports:, it (mostly*) isn't accessible from outside of Docker. If your goal is to have the container only be accessible from other containers, you can just delete ports:.
In comments you ask about the container being reachable from other containers. So long as both containers are on the same Docker network (or the same Compose-provided default network) they can communicate using the other container's Compose service name and the port the process inside the container is listening on. ports: aren't required, and they're ignored if they're present.
So in your setup, it should be enough to remove the ports: from the keycloak container.
version: '2.4'
services:
  web:
    build: .
    ports:
      - '5000:5000'
    depends_on:
      - keycloak
    # can call keycloak:8080
  keycloak:
    image: jboss/keycloak:13.0.1
    environment: { ... }
    # no ports:, container_name: is also unnecessary
(*) On a native-Linux host, the container's Docker-internal IP address will be reachable from the same host, but not other hosts, if you have some way of finding it (including port-scanning 172.16.0.0/20). If someone can run docker commands then they can also easily attach other containers to the same network and gain access to the container, but if they can run docker commands then they can also pretty straightforwardly root the entire host.
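As a quick check of that footnote, on a native-Linux host you can look up a container's Docker-internal IP and reach it directly from that host (the container name `keycloak` comes from the example above; the IP printed is whatever Docker assigned):

```shell
# print the container's IP address on its Docker network(s)
docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' keycloak

# from the same host only (not from other machines), that address is reachable
# directly on the container's internal port, even with no ports: published:
curl http://<ip-printed-above>:8080/
```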
I have a docker-compose file that combines 3 services (mariadb, tomcat, and a backup service).
In the end, this exposes port 8080, to which any user can connect using a browser.
This docker-compose setup seems to work nicely, as I can open a browser (from the host) and browse http://localhost:8080/my service path.
I have not yet tried from a different machine (I don't have another one where I am currently), but since the default network type is bridge, it should also work.
My docker-compose.yml looks like this:
version: "3.0"
networks:
  my-network:
services:
  mariadb-service:
    image: *****
    ports:
      - "3306:3306"
    networks:
      - my-network
  tomcat-service:
    image: *****
    ports:
      - "8080:8080"
    networks:
      - my-network
    depends_on:
      - mariadb-service
  backup-service:
    image: *****
    depends_on:
      - mariadb-service
    networks:
      - my-network
(I removed all the irrelevant parts.)
Now I also have a 'client' docker image allowing to connect to such a server (very similarly to the user with its browser). I'm running this docker image this way:
docker run --name xxx -it -e SERVER_NAME=<ip address of the server> <image name/tag> bash
The strange thing is that this client container can connect to an external server (running on a production machine) but cannot connect to the server containers running locally on the same host.
My understanding is that with the default network type (bridge), all containers on the Docker host can communicate with each other and can also be accessed from outside.
What am I missing?
I have a Java application running in a Docker container and rabbitmq in another container.
How can I connect the containers to use rabbitmq in my Java application?
You have to set up a network and attach the running containers to it.
Then set your app's connection URL to the rabbitmq container's name on that Docker network.
The easiest way is to create a docker-compose file, because it will create the network and attach the containers automatically.
Create a network
Connect the container
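For the manual route, the two steps above could look like this with the docker CLI (the network and container names here are placeholders for your own):

```shell
# create a user-defined bridge network
docker network create my-network

# attach the already-running containers to it
docker network connect my-network my-java-app
docker network connect my-network rabbitmq
```

Containers on a user-defined network can then reach each other by container name.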
Or
Docker compose file
Example of docker-compose.yml
version: '3.7'
services:
  yourapp:
    image: image_from_dockerhub_or_local # or use "build: ./myapp_folder_below_this_where_is_the_Dockerfile" to build the image from a Dockerfile
    hostname: myapp
    ports:
      - 8080:8080
  rabbitmq:
    image: rabbitmq:3.8.3-management-alpine
    hostname: rabbitmq
    environment:
      RABBITMQ_DEFAULT_USER: user
      RABBITMQ_DEFAULT_PASS: pass
    ports:
      - 5672:5672
      - 15672:15672
You can run it with the docker-compose up command.
Then, in your connection URL, use host rabbitmq and port 5672.
Note that you don't need the port mappings if you don't want to reach rabbitmq from your host machine.
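To sanity-check the wiring, you can exec into the app container and resolve the rabbitmq service name (assuming the image provides getent, which most base images do):

```shell
# should print the IP Docker assigned to the rabbitmq container
docker-compose exec yourapp getent hosts rabbitmq
```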
I have a Java application that connects to an external database through a custom Docker network,
and I want to connect it to a Redis container as well.
docker-redis github topic
I tried the following in the application config:
1. localhost:6379
2. app_redis://app_redis:6379
3. redis://app_redis:6379
Nothing works on my setup.
docker network setup:
docker network create -d bridge --subnet 192.168.0.0/24 --gateway 192.168.0.1 mynet
Connect to a Database Running on Your Docker Host
PS: this might be off-topic, but how can I define the network in docker-compose instead of marking it external?
docker-compose:
services:
  app-kotin:
    build: ./app
    container_name: app_server
    restart: always
    working_dir: /app
    command: java -jar app-server.jar
    ports:
      - 3001:3001
    links:
      - app-redis
    networks:
      - front
  app-redis:
    image: redis:5.0.9-alpine
    container_name: app-redis
    expose:
      - 6379
networks:
  front:
    external:
      name: mynet
with the setup above how can I connect through a Redis container?
Both containers need to be on the same Docker network to communicate with each other. The app-kotin container is on the front network, but the app-redis container doesn't have a networks: block and so goes onto an automatically-created default network.
The simplest fix from what you have is to also put the app-redis container on to the same network:
  app-redis:
    image: redis:5.0.9-alpine
    networks:
      - front
The Compose service name app-redis will then be usable as a host name, from other containers on the same network.
You can simplify this setup considerably. You don't generally need to manually specify IP configuration for the Docker-private networks. Compose can create the network for you, and in fact it will create a network named default for you. (Networking in Compose discusses this further.) links: and expose: aren't used in modern Docker networking; Compose can provide a default container_name: for you; and you don't need to repeat the working_dir: or command: from the image. Removing all of that would leave you with:
version: '3'
services:
  app-kotin:
    build: ./app
    restart: always
    ports:
      - '3001:3001'
  app-redis:
    image: redis:5.0.9-alpine
The server container will be able to use the other container's Compose service name app-redis as a host name, even with this minimal configuration.
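If you want to confirm the name resolution from inside the server container, something like this should work (assuming nc is available in the app image, which depends on your base image):

```shell
# from inside the app container, the service name resolves and the
# container's internal port 6379 is open:
docker-compose exec app-kotin nc -vz app-redis 6379
```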
I am hosting 3 services using docker-compose.
version: '3.3'
services:
  service-a:
    container_name: service-a
    network_mode: default
    ports:
      - 8001:8001
      - 8080:8080
  service-b:
    container_name: service-b
    network_mode: default
    ports:
      - 8180:8080
    links:
      - service-a:srv_a
  service-api:
    container_name: service-api
    environment:
      - SERVER_URL=http://localhost:8180/myserver
      - 8001:8001
    links:
      - service-b: srv_b
However, the service-api, which is a Spring Boot application, can't access service-b despite the link.
I can do that when using the browser.
What can I do to investigate the reasons for the lack of connectivity?
Should the link somehow be used in the SERVER_URL variable?
Each Docker container has its own IP address. From the service-api container's perspective, localhost resolves to its own IP address.
Docker Compose gives your containers the ability to resolve other containers' IP addresses from their Compose service names.
Try:
  service-api:
    environment:
      - SERVER_URL=http://service-b:8080/myserver
Note that you need to connect to the container's internal port (8080), not the matching port published on the Docker host (8180).
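You can confirm this from inside the service-api container (assuming curl is installed in that image; the /myserver path comes from the question's SERVER_URL):

```shell
# succeeds: service name + internal container port
docker-compose exec service-api curl -s http://service-b:8080/myserver

# fails from inside the container: localhost is the container itself,
# and 8180 is only published on the Docker host
docker-compose exec service-api curl -s http://localhost:8180/myserver
```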