Two databases for Hasura in Docker Compose

I am trying to separate the databases for Hasura in Docker Compose.
The idea is to have one database for Hasura's metadata and one for my actual data.
This is my docker-compose.yml file.
version: "3.8"
services:
  meta:
    image: postgres
    hostname: meta
    container_name: meta
    environment:
      POSTGRES_DB: meta
      POSTGRES_USER: meta
      POSTGRES_PASSWORD: metapass
    restart: always
    volumes:
      - db_meta:/var/lib/postgresql/data
    networks:
      - backend
  data:
    image: postgres
    hostname: data
    container_name: data
    environment:
      POSTGRES_DB: data
      POSTGRES_USER: data
      POSTGRES_PASSWORD: datapass
    restart: always
    volumes:
      - db_data:/var/lib/postgresql/data
    networks:
      - backend
  graphql-engine:
    image: hasura/graphql-engine:v2.13.0
    ports:
      - "8080:8080"
    depends_on:
      - "meta"
      - "data"
    restart: always
    environment:
      ## postgres database to store Hasura metadata
      # Database URL postgresql://username:password#hostname:5432/database
      HASURA_GRAPHQL_METADATA_DATABASE_URL: meta://meta:metapass#meta:5432/meta
      ## this env var can be used to add the above postgres database to Hasura as a data source. this can be removed/updated based on your needs
      PG_DATABASE_URL: data://data:datapass#data:5432/data
      ## enable the console served by server
      HASURA_GRAPHQL_ENABLE_CONSOLE: "true" # set to "false" to disable console
      ## enable debugging mode. It is recommended to disable this in production
      HASURA_GRAPHQL_DEV_MODE: "true"
      HASURA_GRAPHQL_ENABLED_LOG_TYPES: startup, http-log, webhook-log, websocket-log, query-log
      ## uncomment next line to run console offline (i.e load console assets from server instead of CDN)
      # HASURA_GRAPHQL_CONSOLE_ASSETS_DIR: /srv/console-assets
      ## uncomment next line to set an admin secret
      # HASURA_GRAPHQL_ADMIN_SECRET: myadminsecretkey
    networks:
      - backend
volumes:
  db_data:
  db_meta:
networks:
  backend:
    driver: bridge
On startup I get the following error:
{"detail":{"info":{"code":"postgres-error","error":"connection error","internal":"missing \"=\" after \"meta://meta:metapass#meta:5432/meta\" in connection info string\n","path":"$"},"kind":"catalog_migrate"},"level":"error","timestamp":"2022-10-24T14:16:06.432+0000","type":"startup"}
I think the problem is related to the hostname, but I do not know how to solve it. Any ideas?

After a lot of trial and error, my solution now looks like the file below.
I also noticed that deleting the volumes between attempts makes development easier, because the Postgres containers only apply their environment settings to a fresh data directory.
As jlandercy already said, the database URL must begin with postgresql://. Note also that the separator between the credentials and the hostname has to be @, not #.
version: "3.8"
services:
  meta:
    image: postgres
    container_name: meta
    restart: always
    volumes:
      - db_meta:/var/lib/postgresql/data
    environment:
      POSTGRES_USER: meta_user
      POSTGRES_PASSWORD: meta_password
      POSTGRES_DB: meta_db
  data:
    image: postgres
    container_name: data
    restart: always
    volumes:
      - db_data:/var/lib/postgresql/data
    environment:
      POSTGRES_USER: data_user
      POSTGRES_PASSWORD: data_password
      POSTGRES_DB: data_db
    ports:
      - 5432:5432
  graphql-engine:
    image: hasura/graphql-engine
    depends_on:
      - "meta"
      - "data"
    restart: always
    environment:
      ## postgres database to store Hasura metadata
      # Database URL format: postgresql://username:password@hostname:5432/database
      HASURA_GRAPHQL_METADATA_DATABASE_URL: postgresql://meta_user:meta_password@meta:5432/meta_db
      ## this env var can be used to add the database below to Hasura as a data source; remove/update it based on your needs
      PG_DATABASE_URL: postgresql://data_user:data_password@data:5432/data_db
      ## enable the console served by the server
      HASURA_GRAPHQL_ENABLE_CONSOLE: "true" # set to "false" to disable the console
      ## enable debugging mode; it is recommended to disable this in production
      HASURA_GRAPHQL_DEV_MODE: "true"
      HASURA_GRAPHQL_ENABLED_LOG_TYPES: startup, http-log, webhook-log, websocket-log, query-log
      ## uncomment the next line to run the console offline (i.e. load console assets from the server instead of the CDN)
      # HASURA_GRAPHQL_CONSOLE_ASSETS_DIR: /srv/console-assets
      ## uncomment the next line to set an admin secret
      # HASURA_GRAPHQL_ADMIN_SECRET: myadminsecretkey
    ports:
      - "8080:8080"
volumes:
  db_data:
  db_meta:
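A side note of my own (not from the original answer): the # that appears in these connection strings where @ belongs is not just cosmetic. In a URL, # starts the fragment, so a parser throws away the host, port, and database that follow it. Python's urlsplit illustrates what a parser sees:

```python
from urllib.parse import urlsplit

# With '#', everything after it is treated as a URL fragment, so the
# parser never sees the real hostname, port, or database name.
broken = urlsplit("postgresql://meta_user:meta_password#meta:5432/meta_db")
print(broken.hostname)  # 'meta_user' -- the credentials are misread as host:port
print(broken.fragment)  # 'meta:5432/meta_db' -- host and database are lost

# With '@', the URL splits into its intended parts.
fixed = urlsplit("postgresql://meta_user:meta_password@meta:5432/meta_db")
print(fixed.username, fixed.hostname, fixed.port, fixed.path)
# meta_user meta 5432 /meta_db
```

This is why libpq falls back to trying the whole string as a keyword=value list and complains about a missing "=".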

Related

docker(-compose): access wikijs container only through nginx-proxy-manager; 502 Bad Gateway

I'm running a server that I want to set up to provide several web services. One of them is WikiJS.
I want the service to be accessible only through nginx-proxy-manager via a subdomain, not by directly accessing the IP (and port) of the server.
My attempt was:
version: "3"
services:
  nginxproxymanager:
    image: 'jc21/nginx-proxy-manager:latest'
    restart: unless-stopped
    ports:
      # These ports are in format <host-port>:<container-port>
      - '80:80' # Public HTTP Port
      - '443:443' # Public HTTPS Port
      - '8181:81' # Admin Web Port
      # Add any other Stream port you want to expose
      # - '21:21' # FTP
    # Uncomment the next line if you uncomment anything in the section
    # environment:
      # Uncomment this if you want to change the location of
      # the SQLite DB file within the container
      # DB_SQLITE_FILE: "/data/database.sqlite"
      # Uncomment this if IPv6 is not enabled on your host
      # DISABLE_IPV6: 'true'
    volumes:
      - ./data:/data
      - ./letsencrypt:/etc/letsencrypt
    networks:
      - reverseproxy-nw
  db:
    image: postgres:11-alpine
    environment:
      POSTGRES_DB: wiki
      POSTGRES_PASSWORD: ###DBPW
      POSTGRES_USER: wikijs
    logging:
      driver: "none"
    restart: unless-stopped
    volumes:
      - db-data:/var/lib/postgresql/data
    networks:
      - reverseproxy-nw
  wiki:
    image: requarks/wiki:2
    depends_on:
      - db
    environment:
      DB_TYPE: postgres
      DB_HOST: db
      DB_PORT: 5432
      DB_USER: wikijs
      DB_PASS: ###DBPW
      DB_NAME: wiki
    restart: unless-stopped
    ports:
      - "3001:3000"
    networks:
      - reverseproxy-nw
volumes:
  db-data:
networks:
  reverseproxy-nw:
    external: true
In nginx-proxy-manager I then tried to use "wikijs" as the forwarding host.
The service is accessible if I try http://publicip:3001, but not via the subdomain assigned in nginx-proxy-manager. I only get a 502, which usually means that nginx-proxy-manager cannot reach the given service.
What do I have to change to make the service available under the domain, but not via http://publicip:3001?
Thanks in advance.
OK, I finally found out what my conceptual problem was:
I needed to create a network bridge for the two containers. It was as simple as specifying the driver of the network:
networks:
  reverseproxy-nw:
    driver: bridge
With this, the wikijs container is only reachable through nginx, as I want it to be.

Cannot send POST request to Nuxeo server from another Docker container

I can send POST requests to the Nuxeo server from my local machine using the base address http://localhost:8080. After adding Docker support to my app, the app cannot send POST requests to the Nuxeo server using the base address http://nuxeo_container_name:80; it returns Bad Request. How can I solve this? The Nuxeo server and the app are on the same Docker network.
This is my docker-compose file for the Nuxeo server. In my app I use nuxeo_app_server as the Nuxeo container name.
version: "3.5"
networks:
  nuxnet:
    name: network
services:
  nginx:
    container_name: nuxeo_app_server
    build: nginx
    ports:
      # For localhost use, the exposed nginx port
      # must match the localhost:port below in NUXEO_URL
      - "8080:80"
      #- "443:443"
    cap_add:
      - NET_BIND_SERVICE
    links:
      - nuxeo1
      # - nuxeo2
    environment:
      USE_STAGING: 1
      # default is 4096, but gcloud requires 2048
      KEYSIZE: 2048
      DOMAIN_LIST: /etc/nginx/conf.d/domains.txt
    devices:
      - "/dev/urandom:/dev/random"
    sysctls:
      - net.core.somaxconn=511
    volumes:
      - ./nginx/conf.d:/etc/nginx/conf.d:ro
      - certs:/etc/ssl/acme
    networks:
      - nuxnet
    restart: always
  nuxeo1:
    image: nuxeo
    hostname: nuxeo1
    links:
      - redis
      - es
      - db
    env_file:
      - ./nuxeo/setup.env
    environment:
      # Each nuxeo container must have a unique cluster id
      NUXEO_CLUSTER_ID: 1
      # URL that a user would use to access nuxeo UI or API
      # For localhost urls, the port must match the exposed nginx port above
      NUXEO_URL: http://localhost:8080/nuxeo
      # JAVA memory tuning -Xms, -Xmx
      JVM_MS: 1024m
      JVM_MX: 2048m
    devices:
      - "/dev/urandom:/dev/random"
    volumes:
      - ./nuxeo/init:/docker-entrypoint-initnuxeo.d:ro
      - app-data:/var/lib/nuxeo
      - app-logs:/var/log/nuxeo
    networks:
      - nuxnet
    restart: always
  redis:
    # note: based on alpine:3.6
    # see https://hub.docker.com/_/redis/
    image: redis:3.2-alpine
    volumes:
      - redis-data:/data
    networks:
      - nuxnet
    restart: always
  es:
    image: elasticsearch:2.4-alpine
    volumes:
      - es-data:/usr/share/elasticsearch/data
      - es-plugins:/usr/share/elasticsearch/plugins
      - es-config:/usr/share/elasticsearch/config
      - ./elasticsearch/elasticsearch.yml:/usr/share/elasticsearch/config/elasticsearch.yml
    environment:
      # settings below add -Xms400m -Xmx1g
      ES_MIN_MEM: 500m
      EX_MAX_MEM: 1g
    security_opt:
      - seccomp:unconfined
    networks:
      - nuxnet
    restart: always
  db:
    image: postgres:9.6-alpine
    # note mem tuning suggestions in the following two links
    # https://doc.nuxeo.com/nxdoc/postgresql/
    # https://doc.nuxeo.com/nxdoc/postgresql/#adapt-your-configuration-to-your-hardware
    environment:
      POSTGRES_USER: nuxeo
      POSTGRES_PASSWORD: nuxeo
      POSTGRES_DB: nuxeo
      POSTGRES_INITDB_ARGS: "-E UTF8"
      PGDATA: /var/lib/postgresql/data
    volumes:
      - db-store:/var/lib/postgresql/data
      - ./postgresql/postgresql.conf:/etc/postgresql.conf:ro
    command: postgres -c config_file=/etc/postgresql.conf
    networks:
      - nuxnet
    restart: always
volumes:
  # to view current data, run bin/view-data.sh
  certs:
  app-logs: # all server logs, can be shared between instances
  app-data: # contains app data and packages (store cache), can be shared between instances
  db-store: # postgres database
  es-data:
  es-plugins:
  es-config:
  redis-data:
I succeeded in making REST requests between two containers using the container name and the container's own port.
Did you try the URL http://nuxeo1:8080 ?
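To expand on that (my own sketch, not from the original answer): the stack above names its network "network" via the name: key, and http://nuxeo_container_name:80 only resolves for containers attached to that same network. A client app defined in a separate compose file would have to join it explicitly (the service name myapp and its image are hypothetical):

```yaml
# client app's docker-compose.yml (sketch)
services:
  myapp:
    image: myapp:latest   # hypothetical image
    networks:
      - nuxnet
networks:
  nuxnet:
    external: true
    name: network   # must match the `name:` given in the Nuxeo stack
```

Once attached, http://nuxeo_app_server:80 (nginx) or http://nuxeo1:8080/nuxeo (the Tomcat port inside the nuxeo1 container) should be reachable by name.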

Docker/Wallaby - localhost refused to connect

I am trying to run my feature tests using Wallaby, but I keep getting a "localhost refused to connect" error.
Here is my compose.yml:
version: '2'
services:
  app:
    image: hin101/phoenix:1.5.1
    build: .
    restart: always
    ports:
      - "4000:4000"
      - "4002:4002"
    volumes:
      - ./src:/app
    depends_on:
      - db
      - selenium
    hostname: app
  db:
    image: postgres:10
    restart: always
    environment:
      - POSTGRES_USER=postgres
      - POSTGRES_PASSWORD=postgres
      - POSTGRES_DB=hinesh_blogs
    ports:
      - 5432:5432
    volumes:
      - database:/var/lib/postgresql/data
  selenium:
    build:
      context: .
      dockerfile: Dockerfile-selenium
    container_name: selenium
    image: selenium/standalone-chrome-debug:2.52.0
    restart: always
    ports:
      - "4444:4444"
      - "5900:5900"
    hostname: selenium
volumes:
  database:
test_helper.ex:
ExUnit.start()
Ecto.Adapters.SQL.Sandbox.mode(HineshBlogs.Repo, :auto)
{:ok, _} = Application.ensure_all_started(:ex_machina)
Application.put_env(:wallaby, :base_url, HineshBlogsWeb.Endpoint.url)
Application.put_env(:wallaby, :screenshot_on_failure, true)
{:ok, _} = Application.ensure_all_started(:wallaby)
config/test.exs:
use Mix.Config

# Configure your database
#
# The MIX_TEST_PARTITION environment variable can be used
# to provide built-in test partitioning in CI environment.
# Run `mix help test` for more information.
config :hinesh_blogs, HineshBlogs.Repo,
  username: "postgres",
  password: "postgres",
  database: "hinesh_blogs_test#{System.get_env("MIX_TEST_PARTITION")}",
  hostname: "db",
  pool: Ecto.Adapters.SQL.Sandbox,
  pool_size: 10

# We don't run a server during test. If one is required,
# you can enable the server option below.
config :hinesh_blogs, HineshBlogsWeb.Endpoint,
  http: [port: 4002],
  server: true

config :hinesh_blogs, :sql_sandbox, true

# Print only warnings and errors during test
config :logger, level: :warn

# Selenium
config :wallaby,
  otp_app: :hinesh_blogs_web,
  base_url: "http://localhost:4002/",
  driver: Wallaby.Selenium,
  hackney_options: [timeout: :infinity, recv_timeout: :infinity]
I run the tests with the command: docker-compose run app mix test
Do I need any additional configuration to run these tests, and if not, what is the best way to configure Wallaby to use Docker containers?
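One thing worth checking in a setup like this (my own suggestion, not part of the original question): the browser runs inside the selenium container, where localhost is the Selenium container itself, so a base_url of http://localhost:4002/ is refused. A sketch of pointing Wallaby at the Phoenix container instead, which has hostname: app in the compose file:

```elixir
# config/test.exs (sketch): name the app container, not localhost,
# because the Selenium-driven browser resolves hostnames on the
# compose network, where the Phoenix service is reachable as "app".
config :wallaby,
  otp_app: :hinesh_blogs_web,
  base_url: "http://app:4002/",
  driver: Wallaby.Selenium
```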

Sending HTTP requests from one Docker container to another

I am running multiple Docker containers. I want to invoke a Hasura GraphQL API running in one container from a Node.js application running in another container. The URL I use from the browser to access the Hasura API (http://<ip-address>/v1/graphql) does not work from the Node.js application.
I tried http://localhost/v1/graphql, but that is not working either.
The following is the docker compose file for Hasura graphql
version: '3.6'
services:
  postgres:
    image: postgis/postgis:12-master
    restart: always
    volumes:
      - db_data:/var/lib/postgresql/data
    environment:
      POSTGRES_PASSWORD: <postgrespassword>
  pgadmin:
    image: dpage/pgadmin4
    restart: always
    depends_on:
      - postgres
    ports:
      - 5050:80
    ## you can change pgAdmin default username/password with below environment variables
    environment:
      PGADMIN_DEFAULT_EMAIL: <email>
      PGADMIN_DEFAULT_PASSWORD: <pass>
  graphql-engine:
    image: hasura/graphql-engine:v1.3.0-beta.3
    depends_on:
      - "postgres"
    restart: always
    environment:
      # database url to connect
      HASURA_GRAPHQL_DATABASE_URL: postgres://postgres:postgrespassword@postgres:5432/postgres
      # enable the console served by server
      HASURA_GRAPHQL_ENABLE_CONSOLE: "true" # set "false" to disable console
      ## uncomment next line to set an admin secret key
      HASURA_GRAPHQL_ADMIN_SECRET: <secret>
      HASURA_GRAPHQL_UNAUTHORIZED_ROLE: anonymous
      HASURA_GRAPHQL_JWT_SECRET: '{ some secret }'
    command:
      - graphql-engine
      - serve
  caddy:
    image: abiosoft/caddy:0.11.0
    depends_on:
      - "graphql-engine"
    restart: always
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - ./Caddyfile:/etc/Caddyfile
      - caddy_certs:/root/.caddy
volumes:
  db_data:
  caddy_certs:
The Caddyfile has the following configuration:
# replace :80 with your domain name to get automatic https via LetsEncrypt
:80 {
  proxy / graphql-engine:8080 {
    websocket
  }
}
What API endpoint should I use from another Docker container (not defined in this docker-compose file) to access the Hasura API? From the browser I use http://<ip-address>/v1/graphql.
Also, what does the Caddy configuration actually do here?
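For reference (my own sketch, not from the original post): Caddy simply reverse-proxies host port 80 to graphql-engine:8080, including websockets. A container in another compose project can skip Caddy entirely by joining this stack's network and calling the engine by service name; the network name hasura_default below is an assumption based on Compose's default <project>_default naming, and the node-app service is hypothetical:

```yaml
# the other project's docker-compose.yml (sketch)
services:
  node-app:
    image: node:16   # hypothetical image for the Node.js app
    environment:
      # service name + internal port, not localhost or the host IP
      HASURA_ENDPOINT: http://graphql-engine:8080/v1/graphql
    networks:
      - hasura-net
networks:
  hasura-net:
    external: true
    name: hasura_default   # assumed: the <project>_default network of the Hasura stack
```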

How to change permissions for Gemfile created with Docker for Windows?

I'm working with a Dockerized Rails application. Whenever I make a change to the Gemfile, the file's ownership changes to an unknown user, and I'm unable to do anything with the file, whether I'm inside the container or not.
How can I make it so that I'm able to manipulate the file again?
Here's my docker-compose.yml:
version: '3'
networks:
  backend:
volumes:
  postgres:
services:
  postgres:
    image: postgres
    ports:
      - ${APP:-5432}:5432
    volumes:
      - ./db/dumps:/db/dumps # Mount the DB dumps folder
      - postgres:/var/lib/postgresql/data
    networks:
      - backend
    environment:
      POSTGRES_PASSWORD: 3x4mpl3
  web:
    build:
      context: .
      dockerfile: Dockerfile
    command: rails s -b 0.0.0.0 -p 3000
    entrypoint: /app/bin/entrypoint.sh
    volumes:
      - .:/app
    ports:
      - ${APP_WEB_PORT:-3000}:3000
    stdin_open: true
    tty: true
    depends_on:
      - postgres
    networks:
      - backend
    environment:
      DATABASE_URL: postgres://postgres:3x4mpl3@postgres:5432/app_development
      RAILS_ENV: development
Try changing the file ownership from outside of Docker; after that you should be able to manipulate the file again.
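One way to avoid the ownership flipping in the first place (a sketch of my own, not from the original answer) is to run the web container as your host user, so files it writes into the bind-mounted project directory keep your UID:

```yaml
# sketch: pass your host UID/GID into the container so files created
# in the bind mount stay owned by you (assumes UID and GID are
# exported in the host shell; 1000:1000 is a common default)
services:
  web:
    user: "${UID:-1000}:${GID:-1000}"
```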
