Hazelcast failed to connect to Hazelcast-mancenter - docker

I've written the following docker-compose.yml to run a Docker instance of Hazelcast together with the Management Center (Mancenter).
version: "3"
services:
  hazelcast:
    image: hazelcast/hazelcast:3.12.9
    container_name: hazelcast
    restart: unless-stopped
    environment:
      MAX_HEAP_SIZE: "512m"
      MIN_HEAP_SIZE: "512m"
      JAVA_OPTS: "-Dhazelcast.rest.enabled=true
        -Dhazelcast.mancenter.enabled=true
        -Dhazelcast.mancenter.url=http://localhost:8080/hazelcast-mancenter"
    ports:
      - "5701:5701"
    networks:
      - default
  hazelcast-management:
    image: hazelcast/management-center:3.12.9
    container_name: hazelcast-management
    restart: unless-stopped
    ports:
      - "8080:8080"
    networks:
      - default
networks:
  default:
    driver: bridge
The log always shows the following error, even if I use "127.0.0.1" or my machine's IP instead of localhost. I'm using the same version (3.12.9) for both Hazelcast and Mancenter.
hazelcast | Sep 23, 2020 11:38:35 AM com.hazelcast.internal.management.ManagementCenterService
hazelcast | INFO: [192.168.160.3]:5701 [dev] [3.12.9] Failed to pull tasks from Management Center
hazelcast | Sep 23, 2020 11:38:35 AM com.hazelcast.internal.management.ManagementCenterService
hazelcast | INFO: [192.168.160.3]:5701 [dev] [3.12.9] Failed to connect to: http://localhost:8080/hazelcast-mancenter/collector.do
Regards, Dom

Services in docker-compose are on the same Docker network and are reachable via their service names. When you use localhost or 127.0.0.1, the container tries to communicate with its own localhost. So instead of -Dhazelcast.mancenter.url=http://localhost:8080/hazelcast-mancenter you should connect to -Dhazelcast.mancenter.url=http://hazelcast-management:8080/hazelcast-mancenter. The container of the hazelcast service gets a DNS entry that resolves the name hazelcast-management to the correct container IP.
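A minimal sketch of the corrected service entry (only the Mancenter URL changes; everything else in the compose file above stays the same):

```yaml
services:
  hazelcast:
    image: hazelcast/hazelcast:3.12.9
    environment:
      JAVA_OPTS: "-Dhazelcast.rest.enabled=true
        -Dhazelcast.mancenter.enabled=true
        -Dhazelcast.mancenter.url=http://hazelcast-management:8080/hazelcast-mancenter"
```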

Related

Docker & Hive - port 50070 on Windows: permission denied

I want to set up a local Hive server and found this repo:
https://github.com/big-data-europe/docker-hive
This is the yaml file I use.
version: "3"
services:
  namenode:
    image: bde2020/hadoop-namenode:2.0.0-hadoop2.7.4-java8
    volumes:
      - namenode:/hadoop/dfs/name
    environment:
      - CLUSTER_NAME=test
    env_file:
      - ./hadoop-hive.env
    ports:
      - "50070:50070"
  datanode:
    image: bde2020/hadoop-datanode:2.0.0-hadoop2.7.4-java8
    volumes:
      - datanode:/hadoop/dfs/data
    env_file:
      - ./hadoop-hive.env
    environment:
      SERVICE_PRECONDITION: "namenode:50070"
    ports:
      - "50075:50075"
  hive-server:
    image: bde2020/hive:2.3.2-postgresql-metastore
    env_file:
      - ./hadoop-hive.env
    environment:
      HIVE_CORE_CONF_javax_jdo_option_ConnectionURL: "jdbc:postgresql://hive-metastore/metastore"
      SERVICE_PRECONDITION: "hive-metastore:9083"
    ports:
      - "10000:10000"
  hive-metastore:
    image: bde2020/hive:2.3.2-postgresql-metastore
    env_file:
      - ./hadoop-hive.env
    command: /opt/hive/bin/hive --service metastore
    environment:
      SERVICE_PRECONDITION: "namenode:50070 datanode:50075 hive-metastore-postgresql:5432"
    ports:
      - "9083:9083"
  hive-metastore-postgresql:
    image: bde2020/hive-metastore-postgresql:2.3.0
  presto-coordinator:
    image: shawnzhu/prestodb:0.181
    ports:
      - "8080:8080"
volumes:
  namenode:
  datanode:
Error:
Error starting userland proxy: Bind for 0.0.0.0:50075: unexpected error Permission denied
Ports above 50000 are blocked on Windows and I don't have admin rights on my company PC, so I tried to remap the ports like this:
ports:
  - "40070:50070"
environment:
  SERVICE_PRECONDITION: "namenode:40070 datanode:40075 hive-metastore-postgresql:5432"
This lets the containers start, but they do not seem to be able to communicate.
hive-metastore_1 | [1/100] check for namenode:40070...
hive-metastore_1 | [1/100] namenode:40070 is not available yet
hive-metastore_1 | [1/100] try in 5s once again ...
956a5237dbe2_docker-hive_datanode_1 | [4/100] check for namenode:40070...
956a5237dbe2_docker-hive_datanode_1 | [4/100] namenode:40070 is not available yet
I tried to change both ports:
ports:
  - "40070:40070"
This does not work either, because the port seems to be hardcoded:
ded7410db1b9_docker-hive_namenode_1 | 21/10/08 12:39:05 INFO hdfs.DFSUtil: Starting Web-server for hdfs at: http://0.0.0.0:50070
ded7410db1b9_docker-hive_namenode_1 | 21/10/08 12:39:05 INFO http.HttpServer2: Jetty bound to port 50070
Does anyone know how to get this running?
With the following:
ports:
  - "40070:50070"
all you are doing is directing traffic from host port 40070 to container port 50070.
So to access "namenode" from the host machine for example:
localhost:40070
And to access "namenode" inside the compose network:
namenode:50070
The BDE service precondition repeatedly checks a container and port to see whether the service is running, before starting its own services, to ensure dependencies are ready first. You have not changed the port the container itself listens on, so your containers should still communicate via port 50070.
You have incorrectly changed the precondition to check your host port 40070, whereas it should check the internal container port 50070 regardless of the host port.
Change it to the following:
ports:
  - "40070:50070"
environment:
  SERVICE_PRECONDITION: "namenode:50070 datanode:50075 hive-metastore-postgresql:5432"
You can change the ports Hive etc. operate on via the provided environment variable file, but you shouldn't need to. Mapping host port 40070 to container port 50070 has no impact on how the Docker services talk to each other.
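The SERVICE_PRECONDITION check is conceptually just a repeated TCP connect against service:port on the compose network. A minimal Python sketch of that loop (an illustration, not BDE's actual script):

```python
import socket
import time

def wait_for(host, port, retries=100, delay=5):
    """Retry a TCP connection until host:port accepts, like SERVICE_PRECONDITION."""
    for attempt in range(1, retries + 1):
        try:
            # Succeeds only once the service is listening on its container port.
            with socket.create_connection((host, port), timeout=2):
                return True
        except OSError:
            print(f"[{attempt}/{retries}] {host}:{port} is not available yet")
            time.sleep(delay)
    return False
```

Because this check runs inside the compose network, it must target the container port (50070), never the remapped host port (40070).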

Docker containers in the same network cannot communicate with container names (M1 Mac)

I am trying to run Express, Prisma ORM, and PostgreSQL applications in Docker.
I have two containers in the same network, but they cannot communicate with each other unless they use the actual IP address instead of the container name. Here is my docker-compose.yml file:
version: '3.8'
services:
  web:
    container_name: urefer-backend
    networks:
      - backend
    build: .
    volumes:
      - .:/usr/app/
      - /usr/app/node_modules
    ports:
      - "5000:5000"
    depends_on:
      - db
    environment:
      DATABASE_URL: postgresql://postgres:docker@urefer-db:5432/urefer?schema=public
    stdin_open: true # docker run -i
    tty: true        # docker run -t
  db:
    networks:
      - backend
    image: postgres:latest
    environment:
      POSTGRES_USER: ${POSTGRES_USER:-postgres}
      POSTGRES_PASSWORD: ${POSTGRES_PASSWORD:-docker}
      POSTGRES_DB: "urefer"
    container_name: urefer-db
    restart: always
    ports:
      - 5432:5432
    volumes:
      - database-data:/var/lib/postgresql/data/
volumes:
  database-data:
networks:
  backend:
The urefer-backend container and the urefer-db container are in the same "backend" network. However, when I run docker-compose up --build, I always have a connection issue:
urefer-backend | Error: Error in migration engine: Can't reach database server at `urefer-db`:`5432`
urefer-backend |
urefer-backend | Please make sure your database server is running at `urefer-db`:`5432`.
When I replace "urefer-db" with the actual IP address of the db server, it connects successfully. I think this means something is wrong with the DNS setup.
Could anyone please help me connect the two containers without using the actual IP address? The DB container's IP changes whenever I stop and restart it, so using it directly is a real bother.
Edit: following a suggestion in a comment, here are all the logs and errors I get on the console.
urefer-db |
urefer-db | PostgreSQL Database directory appears to contain a database; Skipping initialization
urefer-db |
urefer-db | 2021-10-01 14:44:57.165 UTC [1] LOG: starting PostgreSQL 14.0 (Debian 14.0-1.pgdg110+1) on aarch64-unknown-linux-gnu, compiled by gcc (Debian 10.2.1-6) 10.2.1 20210110, 64-bit
urefer-db | 2021-10-01 14:44:57.165 UTC [1] LOG: listening on IPv4 address "0.0.0.0", port 5432
urefer-db | 2021-10-01 14:44:57.165 UTC [1] LOG: listening on IPv6 address "::", port 5432
urefer-db | 2021-10-01 14:44:57.168 UTC [1] LOG: listening on Unix socket "/var/run/postgresql/.s.PGSQL.5432"
urefer-db | 2021-10-01 14:44:57.171 UTC [28] LOG: database system was shut down at 2021-10-01 14:44:49 UTC
urefer-db | 2021-10-01 14:44:57.177 UTC [1] LOG: database system is ready to accept connections
urefer-backend | Prisma schema loaded from prisma/schema.prisma
urefer-backend | Datasource "db": PostgreSQL database "urefer", schema "public" at "urefer-db:5432"
urefer-backend |
urefer-backend | Error: P1001: Can't reach database server at `urefer-db`:`5432`
urefer-backend |
urefer-backend | Please make sure your database server is running at `urefer-db`:`5432`.
urefer-backend | Prisma schema loaded from prisma/schema.prisma
urefer-backend |
When the backend service is up, it runs the command npx prisma migrate dev --name init, and this command creates this connection error.
You should use a hostname instead of the container name. Containers use hostnames when communicating with each other, so if you add a hostname to the service in your docker-compose file, it will work:
version: '3.8'
services:
  web:
    container_name: urefer-backend
    networks:
      - backend
    build: .
    volumes:
      - .:/usr/app/
      - /usr/app/node_modules
    ports:
      - "5000:5000"
    depends_on:
      - db
    environment:
      DATABASE_URL: postgresql://postgres:docker@urefer-db:5432/urefer?schema=public
    stdin_open: true # docker run -i
    tty: true        # docker run -t
  db:
    networks:
      - backend
    image: postgres:latest
    environment:
      POSTGRES_USER: ${POSTGRES_USER:-postgres}
      POSTGRES_PASSWORD: ${POSTGRES_PASSWORD:-docker}
      POSTGRES_DB: "urefer"
    container_name: urefer-db
    hostname: urefer-db
    restart: always
    ports:
      - 5432:5432
    volumes:
      - database-data:/var/lib/postgresql/data/
volumes:
  database-data:
networks:
  backend:
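Whichever name you choose, the host component of DATABASE_URL is what Docker's embedded DNS has to resolve. A quick standard-library sketch of pulling the URL apart to see exactly which part that is:

```python
from urllib.parse import urlparse

url = "postgresql://postgres:docker@urefer-db:5432/urefer?schema=public"
parts = urlparse(url)

# parts.hostname must match the compose service name (or an explicit
# `hostname:` / `container_name:`) for Docker's DNS to resolve it.
print(parts.hostname)  # urefer-db
print(parts.port)      # 5432
```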

How to connect to mysql created in docker from windows machine

I'm running Docker Toolbox on Windows. How do I connect to the database from a program like Workbench?
version: "3.1"
services:
  mysql:
    image: mysql:5.7
    command: --innodb_use_native_aio=0
    restart: always
    volumes:
      - //c/Users/radik/projects/laravel/storage/docker/mysql:/var/lib/mysql
    environment:
      MYSQL_ROOT_PASSWORD: 123456
    ports:
      - 33061:3306
I'm trying to connect to the database from HeidiSQL:
host: 192.168.99.100
name: root
port: 33061
result:
Cannot connect to MySQL server
You have 33061:3306 in your config, so you should connect to port 33061, not 3306, right?
If Docker runs on your local machine, you can just use localhost (127.0.0.1) as the host.
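To rule out networking before debugging credentials, a small reachability check can help (a sketch; substitute your Docker host's IP and mapped port):

```python
import socket

def port_open(host, port, timeout=2):
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# With Docker Toolbox the host is the VM, e.g.:
# port_open("192.168.99.100", 33061)
```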

How to run two or more apps on localhost

I am getting started with docker and docker-compose. I have followed the tutorials, and I use a docker-compose.yml file to run one of my sites on my local machine.
I can see my site running by going to http://localhost
My problem now is trying to run more than one site. If one of my sites is running and I try to run another site using docker-compose up -d I get the following error.
$ docker-compose up -d
Creating network "exampleCOM_default" with driver "bridge"
Creating exampleCOMphp-fpm ... done
Creating exampleCOMmariadb ... error
ERROR: for exampleCOMmariadb Cannot start service db: driver failed programming external connectivity on endpoint exampleCOMmariadb (999572f33113c9fce034b4ed72aa072708f6f477eb2af8ad614c0126ca457b64): Bind for 0.0.0.0:3306 failed: port is already allocated
Creating exampleCOMnginx ... error
ERROR: for exampleCOMnginx Cannot start service nginx: driver failed programming external connectivity on endpoint exampleCOMnginx (9dc04f8b06825d7ff535afb1101933be7435c68f4350f845c756fc93e1a0322c): Bind for 0.0.0.0:443 failed: port is already allocated
ERROR: for db Cannot start service db: driver failed programming external connectivity on endpoint exampleCOMmariadb (999572f33113c9fce034b4ed72aa072708f6f477eb2af8ad614c0126ca457b64): Bind for 0.0.0.0:3306 failed: port is already allocated
ERROR: for nginx Cannot start service nginx: driver failed programming external connectivity on endpoint exampleCOMnginx (9dc04f8b06825d7ff535afb1101933be7435c68f4350f845c756fc93e1a0322c): Bind for 0.0.0.0:443 failed: port is already allocated
Encountered errors while bringing up the project.
This is my docker-compose file. I am using a LEMP stack (PHP, NGINX, MariaDB).
version: '3'
services:
  db:
    container_name: ${SITE_NAME}_mariadb
    build:
      context: ./mariadb
    volumes:
      - ./mariadb/scripts:/docker-entrypoint-initdb.d
      - ./.data/db:/var/lib/mysql
      - ./logs/mariadb:/var/log/mysql
    environment:
      MYSQL_ROOT_PASSWORD: ${MYSQL_ROOT_PASSWORD}
      MYSQL_DATABASE: ${MYSQL_DATABASE}
      MYSQL_USER: ${MYSQL_USER}
      MYSQL_PASSWORD: ${MYSQL_PASSWORD}
    ports:
      - '${MYSQL_PORT:-3306}:3306'
    command:
      'mysqld --innodb-flush-method=fsync'
    networks:
      - default
    restart: always
  nginx:
    container_name: ${SITE_NAME}_nginx
    build:
      context: ./nginx
      args:
        - 'php-fpm'
        - '9000'
    volumes:
      - ${APP_PATH}:/var/www/app
      - ./logs/nginx/:/var/log/nginx
    ports:
      - "80:80"
      - "443:443"
    depends_on:
      - php-fpm
    networks:
      - default
    restart: always
  php-fpm:
    container_name: ${SITE_NAME}_php-fpm
    build:
      context: ./php7-fpm
      args:
        TIMEZONE: ${TIMEZONE}
    volumes:
      - ${APP_PATH}:/var/www/app
      - ./php7-fpm/config/php.ini:/usr/local/etc/php/php.ini
    environment:
      DB_HOST: db
      DB_PORT: 3306
      DB_DATABASE: ${MYSQL_DATABASE}
      DB_USERNAME: ${MYSQL_USER}
      DB_PASSWORD: ${MYSQL_PASSWORD}
    networks:
      - default
    restart: always
networks:
  default:
    driver: bridge
The host ports you have mapped are preventing you from starting another instance of the service, even though docker-compose creates a private network.
You can solve this problem by using random host ports assigned by docker-compose.
The ports entry in docker-compose is
ports:
  - "host_port:container_port"
If you specify only the container port, the host port is assigned randomly. See the docker-compose ports documentation.
You can provide the host_port values in ranges.
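In compose syntax, either form avoids pinning a single fixed host port (a sketch; the service name and image are placeholders, and the two list entries are alternatives, not meant to be combined):

```yaml
services:
  nginx:
    image: nginx
    ports:
      - "80"              # container port only: Docker assigns a free host port
      - "30000-30005:80"  # or: the first free host port from a range
```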
In the example below, I started multiple nginx containers whose container port 80 is automatically published to host ports picked from the range 30000-30005.
Command:
docker run -p 30000-30005:80 --name nginx1 -d nginx
Output:
9083d5fc97e0 nginx ... Up 2 seconds 0.0.0.0:30001->80/tcp nginx1
f2f9de1efd8c nginx ... Up 24 seconds 0.0.0.0:30000->80/tcp nginx

Bad log time in Docker

I have a *.yml file for a Keycloak example, and when I want to see the logs in the console I use:
docker logs -f keycloak
example logs:
08:41:27,304 INFO [org.jboss.msc] (main) JBoss MSC version 1.2.6.Final
When I go into the container
docker exec -it keycloak bash
and run date, I get the correct time:
[root@3741ebb1131f /]# date
Mon Nov 14 14:44:41 CET 2016
My yml file:
version: '2'
services:
  keycloak:
    image: bla_bla_bla_image
    container_name: keycloak
    volumes:
      - /etc/localtime:/etc/localtime:ro
    external_links:
      - postgres_container:postgres
    networks:
      default:
        ipv4_address: "111.111.11.11"
networks:
  default:
    external:
      name: demo
Can someone tell me what is happening?
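One common cause (an assumption, not confirmed in this thread) is that the JVM ignores the mounted /etc/localtime and logs in UTC, while shell tools like date read it fine. Setting the timezone explicitly for both the OS and the JVM is worth trying; a sketch, with Europe/Berlin standing in for your zone and assuming the image honors JAVA_OPTS:

```yaml
services:
  keycloak:
    image: bla_bla_bla_image
    environment:
      TZ: "Europe/Berlin"                         # for libc tools such as `date`
      JAVA_OPTS: "-Duser.timezone=Europe/Berlin"  # for the JVM's log timestamps
```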
