I have a Ruby on Rails app which I run via docker-compose up. I'm a complete beginner with GraphQL and Hasura, and I've tried different ways to configure my Docker setup, but I cannot make it work.
My docker-compose.yml:
version: '3.6'
services:
  postgres:
    image: postgis/postgis:latest
    restart: always
    ports:
      - "5434:5432"
    volumes:
      - ./init.sql:/docker-entrypoint-initdb.d/init.sql
    environment:
      PG_HOST_AUTH_METHOD: "trust"
  graphql-engine:
    image: hasura/graphql-engine:v1.2.2.cli-migrations-v2
    restart: always
    ports:
      - "8081:8080"
    volumes:
      - ./metadata:/hasura-metadata
    environment:
      HSR_GQL_DB_URL: "postgres://postgres@postgres/db-dev-name"
      HSR_GQL_ADMIN_SECRET: secret
    env_file:
      - .env
  server:
    build: .
    depends_on:
      - "postgres"
    command: bundle exec rails server -p 8081 -b 0.0.0.0
    ports:
      - "8080:8081"
    volumes:
      - ./:/server
      - gem_cache:/usr/local/bundle/gems
      - node_modules:/server/node_modules
    env_file:
      - .env
volumes:
  gem_cache:
  node_modules:
config.yml:
version: 2
endpoint: http://localhost:8080
metadata_directory: metadata
actions:
  kind: synchronous
  handler_webhook_baseurl: http://localhost:8080
Docker logs server shows:
Listening on tcp://0.0.0.0:8081
And I can access the app after docker-compose up at http://localhost:8080/server/
Checking database connection, it is also reachable at port :5434 in a database manager GUI.
But when I try to execute hasura console --admin-secret secret, the Hasura console does not show up in the browser. I just get the following errors in the different logs:
docker logs server:
Started POST "//v1/query" for 172.25.0.1 at 2021-04-10 17:53:08 +0000
ActionController::RoutingError (No route matches [POST] "/v1/query")
docker logs postgresql:
17:27:33.453 UTC [1] LOG: starting PostgreSQL 13.2 (Debian 13.2-1.pgdg100+1) on x86_64-pc-linux-gnu, compiled by gcc (Debian 8.3.0-6) 8.3.0, 64-bit
17:27:33.453 UTC [1] LOG: listening on IPv4 address "0.0.0.0", port 5432
17:27:33.453 UTC [1] LOG: listening on IPv6 address "::", port 5432
17:27:33.459 UTC [1] LOG: listening on Unix socket "/var/run/postgresql/.s.PGSQL.5432"
17:27:33.467 UTC [103] LOG: database system was shut down at 2021-04-10 17:27:33 UTC
17:27:33.473 UTC [1] LOG: database system is ready to accept connections
17:27:41.250 UTC [111] ERROR: duplicate key value violates unique constraint "pg_extension_name_index"
17:27:41.250 UTC [111] DETAIL: Key (extname)=(postgis) already exists.
17:27:41.250 UTC [111] STATEMENT: CREATE EXTENSION IF NOT EXISTS postgis
17:27:47.565 UTC [116] ERROR: duplicate key value violates unique constraint "pg_extension_name_index"
17:27:47.565 UTC [116] DETAIL: Key (extname)=(pgcrypto) already exists.
17:27:47.565 UTC [116] STATEMENT: CREATE EXTENSION IF NOT EXISTS "pgcrypto"
17:28:01.619 UTC [128] ERROR: duplicate key value violates unique constraint "acct_status_pkey"
17:28:01.619 UTC [128] DETAIL: Key (status)=(new) already exists.
17:28:01.619 UTC [128] STATEMENT: INSERT INTO "acct_status" ("status") VALUES ($1) RETURNING "status"
docker logs graphql-engine (last few lines):
{"kind":"event_triggers","info":"preparing data"}}
{"type":"startup","timestamp":"2021-04-10T17:28:15.576+0000","level":"info","detail":{"kind":"event_triggers","info":"starting workers"}}
{"type":"startup","timestamp":"2021-04-10T17:28:15.576+0000","level":"info","detail":{"kind":"telemetry","info":"Help us improve Hasura! The graphql-engine server collects anonymized usage stats which allows us to keep improving Hasura at warp speed. To read more or opt-out, visit https://hasura.io/docs/1.0/graphql/manual/guides/telemetry.html"}}
{"type":"startup","timestamp":"2021-04-10T17:28:15.576+0000","level":"info","detail":{"kind":"server","info":{"time_taken":2.384872458,"message":"starting API server"}}}
I tried accessing http://localhost:8081//console/api-explorer via the browser, and it seems the graphql-engine container receives the request, but it still won't display the Hasura console:
{
  "type": "http-log",
  "timestamp": "2021-04-12T03:24:24.399+0000",
  "level": "error",
  "detail": {
    "operation": {
      "error": {
        "path": "$",
        "error": "resource does not exist",
        "code": "not-found"
      },
      "request_id": "6d1e5f04-f7d4-48d6-932e-4cf81bdf9795",
      "response_size": 65,
      "raw_query": ""
    },
    "http_info": {
      "status": 404,
      "http_version": "HTTP/1.1",
      "url": "/console/api-explorer",
      "ip": "192.168.0.1",
      "method": "GET",
      "content_encoding": null
    }
  }
}
I've tried setting the host in HSR_GQL_DB_URL to postgres as well as localhost:5432, and using postgis:// instead of postgres://.
I've also tried changing the endpoint field in config.yml to http://localhost:8080/server/, but that did not work either.
Perhaps a bit too late, but here goes anyway:
According to the documentation, under GraphQL engine server config reference -> Command config, you need to set the HASURA_GRAPHQL_ENABLE_CONSOLE environment variable to enable the console served at /console. Alternatively, pass the --enable-console option to the serve command:
From the options table (other rows omitted):
Flag: --enable-console <true|false>
ENV variable: HASURA_GRAPHQL_ENABLE_CONSOLE
Description: Enable the Hasura Console (served by the server on / and /console) (default: false)
So for your docker-compose.yml:
...
  graphql-engine:
    image: hasura/graphql-engine:v1.2.2.cli-migrations-v2
    restart: always
    ports:
      - "8081:8080"
    volumes:
      - ./metadata:/hasura-metadata
    environment:
      HSR_GQL_DB_URL: "postgres://postgres@postgres/db-dev-name"
      HSR_GQL_ADMIN_SECRET: secret
      HASURA_GRAPHQL_ENABLE_CONSOLE: "true" # add this
    env_file:
      - .env
...
I also read somewhere that you should quote true and false values when setting them via environment variables in Docker-related setups. This has worked for me, so I would stick with it. Try it out.
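Independently of that, note that in the compose file above the graphql-engine service is published on host port 8081 (8080 is mapped to the Rails server), so as a quick sanity check you could point the CLI straight at that port. This is just a sketch, not part of the original setup:

hasura console --endpoint http://localhost:8081 --admin-secret secret

If the console opens that way, the remaining step is to make the endpoint in config.yml match that port as well.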
After spending hours searching for why I cannot access my web UI, I turn to you.
I set up FreeIPA on Docker using docker-compose. I opened some ports to gain remote access via host-ip:port from my own computer. FreeIPA is supposed to run on my server (let's say 192.168.1.2), and the web UI should be accessible from any other local computer on port 80/443 (192.168.1.4:80 or 192.168.1.4:443).
When I run my .yaml file, FreeIPA gets set up and finishes with the message "the ipa-server-install command was successful".
I thought it could come from my tight iptables rules, so I tried setting all policies to ACCEPT to debug. That didn't help.
I'm a bit lost as to how I could debug this or figure out how to fix it.
OS : ubuntu 20.04.3
Docker version: 20.10.12, build e91ed57
freeipa image: freeipa/freeipa:centos-8-stream
Docker-compose version: 1.29.2, build 5becea4c
My .yaml file:
version: "3.8"
services:
freeipa:
image: freeipa/freeipa-server:centos-8-stream
hostname: sanctuary
domainname: serv.sanctuary.local
container_name: freeipa-dev
ports:
- 80:80
- 443:443
- 389:389
- 636:636
- 88:88
- 464:464
- 88:88/udp
- 464:464/udp
- 123:123/udp
dns:
- 10.64.0.1
- 1.1.1.1
- 1.0.0.1
restart: unless-stopped
tty: true
stdin_open: true
environment:
IPA_SERVER_HOSTNAME: serv.sanctuary.local
IPA_SERVER_IP: 192.168.1.100
TZ: "Europe/Paris"
command:
- -U
- --domain=sanctuary.local
- --realm=sanctuary.local
- --admin-password=pass
- --http-pin=pass
- --dirsrv-pin=pass
- --ds-password=pass
- --no-dnssec-validation
- --no-host-dns
- --setup-dns
- --auto-forwarders
- --allow-zone-overlap
- --unattended
cap_add:
- SYS_TIME
- NET_ADMIN
restart: unless-stopped
volumes:
- /etc/localtime:/etc/localtime:ro
- /sys/fs/cgroup:/sys/fs/cgroup:ro
- ./data:/data
- ./logs:/var/logs
sysctls:
- net.ipv6.conf.all.disable_ipv6=0
- net.ipv6.conf.lo.disable_ipv6=0
security_opt:
- "seccomp:unconfined"
labels:
- dev
I tried to tinker with the deployment file (adding or removing configuration found on the internet, such as adding/removing IPA_SERVER_IP or an external bridge network).
Thank you very much for any help =)
Alright, for those who might have the same problem, I will explain everything I did to debug this.
I relied extensively on the answers found here: https://floblanc.wordpress.com/2017/09/11/troubleshooting-freeipa-pki-tomcatd-fails-to-start/
First, I checked the status of each service with ipactl status. Depending on the problem you might get different output, but mine looked like this:
Directory Service: RUNNING
krb5kdc Service: RUNNING
kadmin Service: RUNNING
named Service: RUNNING
httpd Service: RUNNING
ipa-custodia Service: RUNNING
pki-tomcatd Service: STOPPED
ipa-otpd Service: RUNNING
ipa-dnskeysyncd Service: RUNNING
ipa: INFO: The ipactl command was successful
I therefore checked the Tomcat logs in /var/log/pki/pki-tomcat/ca/debug-xxxx. I realised I was getting connection refused errors related to the certificates.
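A quick way to pull out the relevant lines (a sketch; the exact debug file name depends on your PKI version):

ls -t /var/log/pki/pki-tomcat/ca/debug* | head -n 1
grep -iE "connection refused|certificate" /var/log/pki/pki-tomcat/ca/debug*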
Here, I first checked that my certificate was present in /etc/pki/pki-tomcat/alias using sudo certutil -L -d /etc/pki/pki-tomcat/alias -n 'subsystemCert cert-pki-ca'.
## output :
Certificate:
Data:
Version: 3 (0x2)
Serial Number: 4 (0x4)
...
...
Then I made sure that the private key could be read using the password found in /var/lib/pki/pki-tomcat/conf/password.conf (the line with the internal=… tag):
grep internal /var/lib/pki/pki-tomcat/conf/password.conf | cut -d= -f2 > /tmp/pwdfile.txt
certutil -K -d /etc/pki/pki-tomcat/alias -f /tmp/pwdfile.txt -n 'subsystemCert cert-pki-ca'
I still saw nothing strange, so I assumed that at this point:
pki-tomcat is able to access the certificate and the private key
The issue is likely to be on the LDAP server side
I tried to read the user entry in LDAP to compare it with the certificate, using ldapsearch -LLL -D 'cn=directory manager' -W -b uid=pkidbuser,ou=people,o=ipaca userCertificate description seeAlso, but got an error after entering the password. Because my certs were OK and the LDAP service was running, I assumed something was off with the certificate dates.
Indeed, during the install FreeIPA sets up the certs using your current system date as a base. But it also installs chrony for server time synchronization. After a reboot, my chrony configuration was wrong and set my host date two years ahead.
I couldn't figure out the problem with the chrony configuration, so I stopped the service and set the date manually using timedatectl set-time "yyyy-mm-dd hh:mm:ss".
I restarted the FreeIPA services and my pki-tomcat service was working again.
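Roughly, the sequence was the following (the date below is a placeholder; on a host where NTP is still active you may also need to disable it first):

systemctl stop chronyd
timedatectl set-ntp false
timedatectl set-time "2022-01-15 10:30:00"
ipactl restart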
After that, I set the FreeIPA IP as the DNS server in my router. I restarted the services and the computers on the local network so the DNS configuration was refreshed. After that, the web UI was accessible!
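As a final sanity check, something along these lines confirms that clients resolve through FreeIPA and can reach the web UI (the IP and hostname are placeholders taken from the compose file above):

dig @192.168.1.100 serv.sanctuary.local
curl -k https://serv.sanctuary.local/ipa/ui/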
I currently have a bunch of Docker-based services that work over SSL. For local development we just use a self-signed cert, but now we are trying to configure the production deployment.
My current testing environment is Windows 10 based, and the containers run inside WSL.
For most of the steps we are following these instructions, and for plain HTTP traffic it seems to be working. But when I make a request over HTTPS, I get a "500 Internal Server Error". If I do a curl from inside the Linux instance, I can see that the site is served, but if I try to reach it from elsewhere, I get the 500 error.
The question is: can I only configure SSL once I'm on the final public hosting and have reconfigured my domain, or is there a way to test everything locally before moving to prod? And might there be any issues with the self-signed cert currently inside the Apache image?
Edit: from checking the documentation I now understand that in order to have Let's Encrypt working, I need to use the actual final public DNS and hosting, but I'm wondering how I could configure this to work locally, or just drop the SSL part. I remember some requirement in our architecture for it to run over SSL, but I'm not quite sure right now, and locally I need devs to be able to run multiple instances without issues.
My app Dockerfile is based upon this one, and the current docker-compose file is as follows:
version: '3'
services:
  web:
    build:
      context: ./modxServer
    links:
      - 'db:mysql'
    ports:
      - 443
      - 80
    networks:
      - reverse-proxy
      - back
    environment:
      XDEBUG_SESSION: wtf
      MODX_VERSION: 2.8.1
      MODX_CORE_LOCATION: /var/www/coreM0dXF1L3s
      MODX_DB_HOST: 'mysql:3306'
      MODX_DB_PASSWORD: modx
      MODX_DB_USER: modx
      MODX_DB_NAME: modx
      MODX_TABLE_PREFIX: modx_
      MODX_ADMIN_USER: admin
      MODX_ADMIN_PASSWORD: admin
      MODX_ADMIN_EMAIL: admin@admin.com
      MODX_SERVER_ROUTE: boats.trotalo.com
      VIRTUAL_HOST: boats.trotalo.com
      VIRTUAL_PROTO: https
      VIRTUAL_PORT: 443
      LETSENCRYPT_HOST: boats.trotalo.com
      LETSENCRYPT_EMAIL: camilo.casadiego@trotalo.com
    volumes:
      - '~/development/boatsSupervisionSystem/www:/var/www'
  db:
    image: 'mysql:8.0.22'
    networks:
      - back
    environment:
      MYSQL_ROOT_PASSWORD: mysql
      MYSQL_DATABASE: modx
      MYSQL_USER: modx
      MYSQL_PASSWORD: modx
    ports:
      - 3306
    command: --default-authentication-plugin=mysql_native_password
    volumes:
      - '~/development/boatsSupervisionSystem/mysql:/var/lib/mysql'
networks:
  reverse-proxy:
    external:
      name: reverse-proxy
  back:
    driver: bridge
Currently, the only meaningful log I'm getting is this, from the Let's Encrypt companion:
2021/08/31 00:09:46 [notice] 175#175: signal process started
Creating/renewal boats.trotalo.com certificates... (boats.trotalo.com)
[Tue Aug 31 00:09:46 UTC 2021] Using CA: https://acme-v02.api.letsencrypt.org/directory
[Tue Aug 31 00:09:46 UTC 2021] Creating domain key
[Tue Aug 31 00:09:47 UTC 2021] The domain key is here: /etc/acme.sh/camilo.casadiego@trotalo.com/boats.trotalo.com/boats.trotalo.com.key
[Tue Aug 31 00:09:47 UTC 2021] Single domain='boats.trotalo.com'
[Tue Aug 31 00:09:47 UTC 2021] Getting domain auth token for each domain
[Tue Aug 31 00:09:49 UTC 2021] Getting webroot for domain='boats.trotalo.com'
[Tue Aug 31 00:09:49 UTC 2021] Verifying: boats.trotalo.com
2021/08/31 00:09:25 Generated '/app/letsencrypt_service_data' from 2 containers
2021/08/31 00:09:25 Running '/app/signal_le_service'
2021/08/31 00:09:25 Watching docker events
2021/08/31 00:09:25 Contents of /app/letsencrypt_service_data did not change. Skipping notification '/app/signal_le_service'
2021/08/31 00:09:37 Received event start for container 7e0b47af1ddc
2021/08/31 00:09:37 Received event start for container 283bb4ebec51
2021/08/31 00:09:42 Debounce minTimer fired
2021/08/31 00:09:42 Generated '/app/letsencrypt_service_data' from 4 containers
2021/08/31 00:09:42 Running '/app/signal_le_service'
[Tue Aug 31 00:09:53 UTC 2021] boats.trotalo.com:Verify error:DNS problem: NXDOMAIN looking up A for boats.trotalo.com - check that a DNS record exists for this domain
[Tue Aug 31 00:09:53 UTC 2021] Please check log file for more details: /dev/null
In the end it was more of an understanding issue: for local development I don't need Nginx and can just use self-signed certificates, and for prod the official Nginx/Let's Encrypt companion image does almost all the magic.
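For the local, self-signed part, a throwaway certificate can be generated with something like this (a sketch; file names and the CN are placeholders) and then referenced from the Apache virtual host configuration inside the image:

openssl req -x509 -nodes -newkey rsa:2048 -days 365 \
  -keyout local-dev.key -out local-dev.crt \
  -subj "/CN=localhost"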
The command I used to launch the nginx containers is:
docker run -d \
--name nginx-letsencrypt \
--net reverse-proxy \
--volumes-from nginx-proxy \
-v $HOME/certs:/etc/nginx/certs:rw \
-v /var/run/docker.sock:/var/run/docker.sock:ro \
nginxproxy/acme-companion
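For completeness, the proxy container that the companion attaches to (referenced above via --volumes-from nginx-proxy) is started along these lines; this is only a sketch based on the nginxproxy/nginx-proxy image, so adjust the volumes to your environment:

docker run -d \
  --name nginx-proxy \
  --net reverse-proxy \
  -p 80:80 -p 443:443 \
  -v $HOME/certs:/etc/nginx/certs:ro \
  -v /var/run/docker.sock:/tmp/docker.sock:ro \
  nginxproxy/nginx-proxy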
And inside each docker-compose.yml file, or as parameters for docker run:
VIRTUAL_HOST: mydomain.or.subdomain.com
VIRTUAL_PROTO: https
VIRTUAL_PORT: 443
LETSENCRYPT_HOST: mydomain.or.subdomain.com
LETSENCRYPT_EMAIL: your.name@mydomain.or.subdomain.com
I tried to install Authelia as an OAuth server with Docker Compose. But every time I start the container, the logs say this:
time="2020-05-23T16:51:09+02:00" level=error msg="Provide a JWT secret using \"jwt_secret\" key"
time="2020-05-23T16:51:09+02:00" level=error msg="Please provide `ldap` or `file` object in `authentication_backend`"
time="2020-05-23T16:51:09+02:00" level=error msg="Set domain of the session object"
time="2020-05-23T16:51:09+02:00" level=error msg="A storage configuration must be provided. It could be 'local', 'mysql' or 'postgres'"
time="2020-05-23T16:51:09+02:00" level=error msg="A notifier configuration must be provided"
panic: Some errors have been reported
goroutine 1 [running]:
main.startServer()
github.com/authelia/authelia/cmd/authelia/main.go:41 +0xc80
main.main.func1(0xc00009c000, 0xc0001e6100, 0x0, 0x2)
github.com/authelia/authelia/cmd/authelia/main.go:126 +0x20
github.com/spf13/cobra.(*Command).execute(0xc00009c000, 0xc000020190, 0x2, 0x2, 0xc00009c000, 0xc000020190)
github.com/spf13/cobra@v0.0.7/command.go:842 +0x29d
github.com/spf13/cobra.(*Command).ExecuteC(0xc00009c000, 0xc0007cdf58, 0x4, 0x4)
github.com/spf13/cobra@v0.0.7/command.go:943 +0x317
github.com/spf13/cobra.(*Command).Execute(...)
github.com/spf13/cobra@v0.0.7/command.go:883
main.main()
github.com/authelia/authelia/cmd/authelia/main.go:143 +0x166
and the container is restarting.
I don't really understand why and where this behavior comes from. I've tried named volumes as well as bind-mounted volumes, but it is still the same error. Maybe someone can tell me where I'm making a (probably stupid) mistake, because I don't see it.
My compose.yml file:
version: '3.7'
services:
  authelia:
    image: "authelia/authelia:latest"
    container_name: authelia
    restart: "unless-stopped"
    # security_opt:
    #   - no-new-privileges:true
    networks:
      - "web"
      - "intern"
    volumes:
      - ./authelia:/var/lib/authelia
      - ./configuration.yml:/etc/authelia/configuration.yml:ro
      - ./users_database.yml:/etc/authelia/users_database.yml
      # Had to bind this volume; without it, Docker creates its own volume with an empty
      # configuration.yml and users_database.yml
      - ./data:/etc/authelia
    environment:
      - TZ=$TZ
    labels:
      - "traefik.enable=true"
      # HTTP Routers
      - "traefik.http.routers.authelia-rtr.entrypoints=https"
      - "traefik.http.routers.authelia-rtr.rule=Host(`secure.$DOMAINNAME`)"
      - "traefik.http.routers.authelia-rtr.tls=true"
      - "traefik.http.routers.authelia-rtr.tls.certresolver=le"
      # Middlewares
      - "traefik.http.routers.authelia-rtr.middlewares=chain-no-auth@file"
      # HTTP Service
      - "traefik.http.routers.authelia-rtr.service=authelia-svc"
      - "traefik.http.services.auhtelia-svc.loadbalancer.server.port=9091"
networks:
  web:
    external: true
  intern:
    external: true
The files and folders under the volumes section exist and configuration.yml is not empty. I use an admin (non-root) user with sudo permissions.
Can anybody tell me what I'm doing wrong and why Authelia isn't able to find or read the configuration.yml?
Verify your configuration.yml file. These errors show up when your YAML syntax is incorrect. In particular:
double-check indentation,
put your domain names in quotation marks (this was my problem when I encountered that).
See also discussion here.
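For reference, a minimal configuration.yml that provides the keys the errors complain about could look roughly like this (a sketch with placeholder values and paths, not a copy of the official example; check it against the Authelia docs for your version):

jwt_secret: a_long_random_secret

authentication_backend:
  file:
    path: /etc/authelia/users_database.yml

session:
  domain: "example.com"

storage:
  local:
    path: /var/lib/authelia/db.sqlite3

notifier:
  filesystem:
    filename: /var/lib/authelia/notification.txt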
I am trying to capture syslog messages sent over the network using rsyslog, and then have rsyslog transform and send these messages to Elasticsearch.
I found a nice article on the configuration on https://www.reddit.com/r/devops/comments/9g1nts/rsyslog_elasticsearch_logging/
The problem is that rsyslog keeps reporting an error at startup saying it cannot connect to Elasticsearch on the same machine on port 9200. The error I get is:
Failed to connect to localhost port 9200: Connection refused
2020-03-20T12:57:51.610444+00:00 53fd9e2560d9 rsyslogd: [origin software="rsyslogd" swVersion="8.36.0" x-pid="1" x-info="http://www.rsyslog.com"] start
rsyslogd: omelasticsearch: we are suspending ourselfs due to server failure 7: Failed to connect to localhost port 9200: Connection refused [v8.36.0 try http://www.rsyslog.com/e/2007 ]
Can anyone help with this?
Everything is running in Docker on a single machine. I use the docker-compose file below to start the stack.
version: "3"
services:
elasticsearch:
image: docker.elastic.co/elasticsearch/elasticsearch:7.6.1
environment:
- discovery.type=single-node
- xpack.security.enabled=false
ports:
- 9200:9200
networks:
- logging-network
kibana:
image: docker.elastic.co/kibana/kibana:7.6.1
depends_on:
- logstash
ports:
- 5601:5601
networks:
- logging-network
rsyslog:
image: rsyslog/syslog_appliance_alpine:8.36.0-3.7
environment:
- TZ=UTC
- xpack.security.enabled=false
ports:
- 514:514/tcp
- 514:514/udp
volumes:
- ./rsyslog.conf:/etc/rsyslog.conf:ro
- rsyslog-work:/work
- rsyslog-logs:/logs
volumes:
rsyslog-work:
rsyslog-logs:
networks:
logging-network:
driver: bridge
rsyslog.conf file below:
global(processInternalMessages="on")
#module(load="imtcp" StreamDriver.AuthMode="anon" StreamDriver.Mode="1")
module(load="impstats") # config.enabled=`echo $ENABLE_STATISTICS`)
module(load="imrelp")
module(load="imptcp")
module(load="imudp" TimeRequery="500")
module(load="omstdout")
module(load="omelasticsearch")
module(load="mmjsonparse")
module(load="mmutf8fix")
input(type="imptcp" port="514")
input(type="imudp" port="514")
input(type="imrelp" port="1601")
# includes done explicitly
include(file="/etc/rsyslog.conf.d/log_to_logsene.conf" config.enabled=`echo $ENABLE_LOGSENE`)
include(file="/etc/rsyslog.conf.d/log_to_files.conf" config.enabled=`echo $ENABLE_LOGFILES`)
#try to parse a structured log
action(type="mmjsonparse")
# this is for index names to be like: rsyslog-YYYY.MM.DD
template(name="rsyslog-index" type="string" string="rsyslog-%$YEAR%.%$MONTH%.%$DAY%")
# this is for formatting our syslog in JSON with #timestamp
template(name="json-syslog" type="list") {
constant(value="{")
constant(value="\"#timestamp\":\"") property(name="timereported" dateFormat="rfc3339")
constant(value="\",\"host\":\"") property(name="hostname")
constant(value="\",\"severity\":\"") property(name="syslogseverity-text")
constant(value="\",\"facility\":\"") property(name="syslogfacility-text")
constant(value="\",\"program\":\"") property(name="programname")
constant(value="\",\"tag\":\"") property(name="syslogtag" format="json")
constant(value="\",") property(name="$!all-json" position.from="2")
# closing brace is in all-json
}
# this is where we actually send the logs to Elasticsearch (localhost:9200 by default)
action(type="omelasticsearch" template="json-syslog" searchIndex="rsyslog-index" dynSearchIndex="on")
#################### default ruleset begins ####################
# we emit our own messages to docker console:
syslog.* :omstdout:
include(file="/config/droprules.conf" mode="optional") # this permits the user to easily drop unwanted messages
action(name="main_utf8fix" type="mmutf8fix" replacementChar="?")
include(text=`echo $CNF_CALL_LOG_TO_LOGFILES`)
include(text=`echo $CNF_CALL_LOG_TO_LOGSENE`)
First of all, you need to run all the containers on the same Docker network, which in this case they are not. Second, after running the containers on the same network, log in to the rsyslog container and check whether port 9200 on the Elasticsearch container is reachable.
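Concretely, that means giving the rsyslog service the same network as Elasticsearch in the compose file, roughly like this (a sketch based on the file above, other keys unchanged):

  rsyslog:
    image: rsyslog/syslog_appliance_alpine:8.36.0-3.7
    networks:
      - logging-network
    ...

Once they share a network, a check such as curl -s http://elasticsearch:9200 from inside the rsyslog container should return the Elasticsearch banner, and the omelasticsearch action can then be pointed at that hostname instead of the default localhost via its server parameter, e.g. action(type="omelasticsearch" server="elasticsearch" ...) — double-check the parameter name against your rsyslog version.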
I have set up a MySQL Cluster on my PC using the mysql/mysql-cluster image from Docker Hub, and it starts up fine. However, when I try to connect to the cluster from outside Docker (via the host machine) using ClusterJ, it doesn't connect.
Initially I was getting the following error: Could not alloc node id at 127.0.0.1 port 1186: No free node id found for mysqld(API)
So I created a custom mysql-cluster.cnf, very similar to the one distributed with the Docker image, but with an extra [api] slot:
[ndbd default]
NoOfReplicas=2
DataMemory=80M
IndexMemory=18M
[ndb_mgmd]
NodeId=1
hostname=192.168.0.2
datadir=/var/lib/mysql
[ndbd]
NodeId=2
hostname=192.168.0.3
datadir=/var/lib/mysql
[ndbd]
NodeId=3
hostname=192.168.0.4
datadir=/var/lib/mysql
[mysqld]
NodeId=4
hostname=192.168.0.10
[api]
This is the configuration used for the ClusterJ setup:
com.mysql.clusterj.connect:
  host: 127.0.0.1:1186
  database: my_db
Here is the docker-compose config:
version: '3'
services:
  # Sets up the MySQL cluster ndb_mgmd process
  database-manager:
    image: mysql/mysql-cluster
    networks:
      database_net:
        ipv4_address: 192.168.0.2
    command: ndb_mgmd
    ports:
      - "1186:1186"
    volumes:
      - /c/Users/myuser/conf/mysql-cluster.cnf:/etc/mysql-cluster.cnf
  # Sets up the first MySQL cluster data node
  database-node-1:
    image: mysql/mysql-cluster
    networks:
      database_net:
        ipv4_address: 192.168.0.3
    command: ndbd
    depends_on:
      - database-manager
  # Sets up the second MySQL cluster data node
  database-node-2:
    image: mysql/mysql-cluster
    networks:
      database_net:
        ipv4_address: 192.168.0.4
    command: ndbd
    depends_on:
      - database-manager
  # Sets up the first MySQL server process
  database-server:
    image: mysql/mysql-cluster
    networks:
      database_net:
        ipv4_address: 192.168.0.10
    environment:
      - MYSQL_ALLOW_EMPTY_PASSWORD=true
      - MYSQL_DATABASE=my_db
      - MYSQL_USER=my_user
    command: mysqld
networks:
  database_net:
    ipam:
      config:
        - subnet: 192.168.0.0/16
When I try to connect to the cluster I get the following error: '127.0.0.1:1186' nodeId 0; Return code: -1 error code: 0 message: .
I can see that the app running ClusterJ registers with the cluster, but then it disconnects. Here is an excerpt from the Docker MySQL manager logs:
database-manager_1 | 2018-05-10 11:18:43 [MgmtSrvr] INFO -- Node 3: Communication to Node 4 opened
database-manager_1 | 2018-05-10 11:22:16 [MgmtSrvr] INFO -- Alloc node id 6 succeeded
database-manager_1 | 2018-05-10 11:22:16 [MgmtSrvr] INFO -- Nodeid 6 allocated for API at 10.0.2.2
Any help solving this issue would be much appreciated.
Here is what happens when the ClusterJ application starts and connects via ndb_mgmd.
You connect to the MGM server on port 1186. In this connection you
will get the configuration. This configuration contains the IP addresses
of the data nodes. To connect to the data nodes ClusterJ will try to
connect to 192.168.0.3 and 192.168.0.4. Since ClusterJ is outside Docker,
I presume those addresses point to some different place.
The management server will also provide a dynamic port to use when
connecting to the NDB data node. It is a lot easier to manage this
by setting ServerPort for NDB data nodes. I usually use 11860 as
ServerPort, 2202 is also popular to use.
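As a sketch (not part of the original setup), fixing the port would mean adding ServerPort to the [ndbd] sections of mysql-cluster.cnf, for example:

[ndbd]
NodeId=2
hostname=192.168.0.3
datadir=/var/lib/mysql
ServerPort=11860

[ndbd]
NodeId=3
hostname=192.168.0.4
datadir=/var/lib/mysql
ServerPort=11860

and then making sure port 11860 on each data node is reachable from wherever ClusterJ runs.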
I am not sure how you mix a Docker environment with an external
environment. I assume it is possible to solve somehow by setting
up proper IP translation tables in the correct places.
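One way to see exactly which addresses the management server hands out to clients is the show command of the management client (a sketch, assuming the MySQL Cluster client tools are installed where ClusterJ runs):

ndb_mgm -c 127.0.0.1:1186 -e show

Each data node is listed with the IP address that ClusterJ will be told to connect to, which makes it easy to spot addresses that are only valid inside the Docker network.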