Docker/Wallaby - localhost refused to connect

I am trying to run my feature tests using Wallaby, but I keep getting a "localhost refused to connect" error.
Here is my compose.yml:
version: '2'
services:
  app:
    image: hin101/phoenix:1.5.1
    build: .
    restart: always
    ports:
      - "4000:4000"
      - "4002:4002"
    volumes:
      - ./src:/app
    depends_on:
      - db
      - selenium
    hostname: app
  db:
    image: postgres:10
    restart: always
    environment:
      - POSTGRES_USER=postgres
      - POSTGRES_PASSWORD=postgres
      - POSTGRES_DB=hinesh_blogs
    ports:
      - 5432:5432
    volumes:
      - database:/var/lib/postgresql/data
  selenium:
    build:
      context: .
      dockerfile: Dockerfile-selenium
    container_name: selenium
    image: selenium/standalone-chrome-debug:2.52.0
    restart: always
    ports:
      - "4444:4444"
      - "5900:5900"
    hostname: selenium
volumes:
  database:
test_helper.ex:
ExUnit.start()
Ecto.Adapters.SQL.Sandbox.mode(HineshBlogs.Repo, :auto)
{:ok, _} = Application.ensure_all_started(:ex_machina)
Application.put_env(:wallaby, :base_url, HineshBlogsWeb.Endpoint.url)
Application.put_env(:wallaby, :screenshot_on_failure, true)
{:ok, _} = Application.ensure_all_started(:wallaby)
config/test.exs:
use Mix.Config
# Configure your database
#
# The MIX_TEST_PARTITION environment variable can be used
# to provide built-in test partitioning in CI environment.
# Run `mix help test` for more information.
config :hinesh_blogs, HineshBlogs.Repo,
  username: "postgres",
  password: "postgres",
  database: "hinesh_blogs_test#{System.get_env("MIX_TEST_PARTITION")}",
  hostname: "db",
  pool: Ecto.Adapters.SQL.Sandbox,
  pool_size: 10

# We don't run a server during test. If one is required,
# you can enable the server option below.
config :hinesh_blogs, HineshBlogsWeb.Endpoint,
  http: [port: 4002],
  server: true

config :hinesh_blogs, :sql_sandbox, true

# Print only warnings and errors during test
config :logger, level: :warn

# Selenium
config :wallaby,
  otp_app: :hinesh_blogs_web,
  base_url: "http://localhost:4002/",
  driver: Wallaby.Selenium,
  hackney_options: [timeout: :infinity, recv_timeout: :infinity]
I run the tests with the command: docker-compose run app mix test
Do I need any additional configuration to run these tests, and if not, what is the best way to configure Wallaby to work with Docker containers?
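One likely cause, assuming the default Compose networking: inside the selenium container, localhost resolves to the Selenium container itself, so a base_url of http://localhost:4002/ never reaches Phoenix, and the same applies to the driver reaching Selenium from the app container. A hedged sketch of the wiring that usually works, with the Elixir side shown as comments (remote_url is, as far as I know, the option Wallaby's Selenium driver reads for the hub address):

# Sketch only: container-to-container traffic must use service names, not localhost.
# In config/test.exs (Elixir, shown here as comments):
#   config :hinesh_blogs, HineshBlogsWeb.Endpoint, http: [ip: {0, 0, 0, 0}, port: 4002], server: true
#   config :wallaby, base_url: "http://app:4002", selenium: [remote_url: "http://selenium:4444/wd/hub/"]
services:
  app:
    hostname: app        # Selenium's browser loads test pages from http://app:4002
  selenium:
    hostname: selenium   # Wallaby's driver talks to http://selenium:4444/wd/hub/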

Related

Redash cannot connect to MySQL server through socket

I want to connect Redash to MySQL Server. I added MYSQL_TCP_PORT so the server uses a TCP connection instead of the default Unix socket (to avoid the mysqld.sock error). If I go into the mysql container and run mysql -p, I can open a MySQL shell. But if I test the connection in Redash, it returns (2006, "Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)").
Here is my docker-compose file:
# This configuration file is for the **development** setup.
# For a production example please refer to getredash/setup repository on GitHub.
version: "2.2"
x-redash-service: &redash-service
  build:
    context: .
    # args:
    #   skip_frontend_build: "true" # set to empty string to build
  volumes:
    - .:/app
  env_file:
    - .env
x-redash-environment: &redash-environment
  REDASH_LOG_LEVEL: "INFO"
  REDASH_REDIS_URL: "redis://redis:6379/0"
  REDASH_DATABASE_URL: "postgresql://postgres@postgres/postgres"
  REDASH_RATELIMIT_ENABLED: "false"
  REDASH_MAIL_DEFAULT_SENDER: "redash@example.com"
  REDASH_MAIL_SERVER: "email"
  REDASH_ENFORCE_CSRF: "true"
  REDASH_GUNICORN_TIMEOUT: 60
  # Set secret keys in the .env file
services:
  server:
    <<: *redash-service
    command: dev_server
    depends_on:
      - postgres
      - redis
    ports:
      - "5000:5000"
      - "5678:5678"
    networks:
      - default_network
    environment:
      <<: *redash-environment
      PYTHONUNBUFFERED: 0
  scheduler:
    <<: *redash-service
    command: dev_scheduler
    depends_on:
      - server
    networks:
      - default_network
    environment:
      <<: *redash-environment
  worker:
    <<: *redash-service
    command: dev_worker
    depends_on:
      - server
    networks:
      - default_network
    environment:
      <<: *redash-environment
      PYTHONUNBUFFERED: 0
  redis:
    image: redis:3-alpine
    restart: unless-stopped
    networks:
      - default_network
  postgres:
    image: postgres:9.5-alpine
    # The following turns the DB into less durable, but gains significant performance
    # improvements for the tests run (x3 improvement on my personal machine). We should
    # consider moving this into a dedicated Docker Compose configuration for tests.
    ports:
      - "15432:5432"
    command: "postgres -c fsync=off -c full_page_writes=off -c synchronous_commit=OFF"
    restart: unless-stopped
    networks:
      - default_network
    environment:
      POSTGRES_HOST_AUTH_METHOD: "trust"
  email:
    image: djfarrelly/maildev
    ports:
      - "1080:80"
    restart: unless-stopped
    networks:
      - default_network
  mysql:
    image: mysql/mysql-server:latest
    ports:
      - "3306:3306"
    restart: unless-stopped
    container_name: mysql
    networks:
      - default_network
    environment:
      MYSQL_ROOT_PASSWORD: "${MYSQL_ROOT_PASSWORD}"
      MYSQL_TCP_PORT: 3306
networks:
  default_network:
    external: false
    name: default_network
    driver: bridge
As far as I can see, Redash is connecting via the Unix socket, not TCP (otherwise there would be no mysqld.sock error). I don't know what I should fix in docker-compose or somewhere else to make it connect properly. Any suggestions? If you need me to provide more info, please ask.
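A hedged observation, based on how MySQL clients pick their transport: MYSQL_TCP_PORT only affects the server side, while the client decides between socket and TCP from the host name it is given; an empty or "localhost" host makes it fall back to the Unix socket, which matches the mysqld.sock error. In the Redash data-source form the host should therefore be the container's Compose DNS name, mysql, with port 3306. The wiring that makes that name resolvable is already in the file above:

# Sketch: the mysql service shares default_network with server/worker/scheduler,
# so the hostname "mysql" resolves inside every Redash container.
mysql:
  container_name: mysql
  networks:
    - default_network   # same network as the Redash services
  ports:
    - "3306:3306"       # only needed for connections from the host machine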

Two databases for Hasura in Docker Compose

I tried to separate the databases for Hasura in Docker Compose.
The idea is to have one database for Hasura's metadata and one for my actual data.
This is my docker-compose.yml file.
version: "3.8"
services:
meta:
image: postgres
hostname: meta
container_name: meta
environment:
POSTGRES_DB: meta
POSTGRES_USER: meta
POSTGRES_PASSWORD: metapass
restart: always
volumes:
- db_meta:/var/lib/postgresql/data
networks:
- backend
data:
image: postgres
hostname: data
container_name: data
environment:
POSTGRES_DB: data
POSTGRES_USER: data
POSTGRES_PASSWORD: datapass
restart: always
volumes:
- db_data:/var/lib/postgresql/data
networks:
- backend
graphql-engine:
image: hasura/graphql-engine:v2.13.0
ports:
- "8080:8080"
depends_on:
- "meta"
- "data"
restart: always
environment:
## postgres database to store Hasura metadata
# Database URL postgresql://username:password#hostname:5432/database
HASURA_GRAPHQL_METADATA_DATABASE_URL: meta://meta:metapass#meta:5432/meta
## this env var can be used to add the above postgres database to Hasura as a data source. this can be removed/updated based on your needs
PG_DATABASE_URL: data://data:datapass#data:5432/data
## enable the console served by server
HASURA_GRAPHQL_ENABLE_CONSOLE: "true" # set to "false" to disable console
## enable debugging mode. It is recommended to disable this in production
HASURA_GRAPHQL_DEV_MODE: "true"
HASURA_GRAPHQL_ENABLED_LOG_TYPES: startup, http-log, webhook-log, websocket-log, query-log
## uncomment next line to run console offline (i.e load console assets from server instead of CDN)
# HASURA_GRAPHQL_CONSOLE_ASSETS_DIR: /srv/console-assets
## uncomment next line to set an admin secret
# HASURA_GRAPHQL_ADMIN_SECRET: myadminsecretkey
networks:
- backend
volumes:
db_data:
db_meta:
networks:
backend:
driver: bridge
I get {"detail":{"info":{"code":"postgres-error","error":"connection error","internal":"missing \"=\" after \"meta://meta:metapass#meta:5432/meta\" in connection info string\n","path":"$"},"kind":"catalog_migrate"},"level":"error","timestamp":"2022-10-24T14:16:06.432+0000","type":"startup"}
I think the problem is related to the hostname. But I do not know how to solve it. Any ideas?
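For context on that error, assuming Hasura follows libpq conventions here: a connection string is only parsed as a URI when it starts with postgresql:// or postgres://; anything else is treated as a space-separated list of key=value pairs, which is exactly why the parser complains about a missing "=". A minimal sketch of the corrected variables, keeping the service names as hosts:

# Sketch: only the URI scheme changes; users, passwords, and hostnames stay the same.
HASURA_GRAPHQL_METADATA_DATABASE_URL: postgresql://meta:metapass@meta:5432/meta
PG_DATABASE_URL: postgresql://data:datapass@data:5432/data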
I have tried a lot, and now my solution looks like this.
I have noticed that deleting the volumes makes development easier.
The database URL must begin with postgresql://, just as @jlandercy already said.
version: "3.8"
services:
meta:
image: postgres
container_name: meta
restart: always
volumes:
- db_meta:/var/lib/postgresql/data
environment:
POSTGRES_USER: meta_user
POSTGRES_PASSWORD: meta_password
POSTGRES_DB: meta_db
data:
image: postgres
container_name: data
restart: always
volumes:
- db_data:/var/lib/postgresql/data
environment:
POSTGRES_USER: data_user
POSTGRES_PASSWORD: data_password
POSTGRES_DB: data_db
ports:
- 5432:5432
graphql-engine:
image: hasura/graphql-engine
depends_on:
- "meta"
- "data"
restart: always
environment:
## postgres database to store Hasura metadata
# Database URL postgresql://username:password#hostname:5432/database
HASURA_GRAPHQL_METADATA_DATABASE_URL: postgresql://meta_user:meta_password#meta:5432/meta_db
## this env var can be used to add the above postgres database to Hasura as a data source. this can be removed/updated based on your needs
PG_DATABASE_URL: postgresql://data_user:data_password#data:5432/data_db
## enable the console served by server
HASURA_GRAPHQL_ENABLE_CONSOLE: "true" # set to "false" to disable console
## enable debugging mode. It is recommended to disable this in production
HASURA_GRAPHQL_DEV_MODE: "true"
HASURA_GRAPHQL_ENABLED_LOG_TYPES: startup, http-log, webhook-log, websocket-log, query-log
## uncomment next line to run console offline (i.e load console assets from server instead of CDN)
# HASURA_GRAPHQL_CONSOLE_ASSETS_DIR: /srv/console-assets
## uncomment next line to set an admin secret
# HASURA_GRAPHQL_ADMIN_SECRET: myadminsecretkey
ports:
- "8080:8080"
volumes:
db_data:
db_meta:

Multi Container Connection in docker

I have built a CRUD application using Spring Boot and MySQL. MySQL runs in Docker, I am able to connect to it from my local machine, and the application works. But when I deploy the Spring Boot application in Docker as well, it is not able to connect to the Docker MySQL.
## Spring application.properties
server.port=8001
# MySQL Props
spring.jpa.properties.hibernate.dialect = org.hibernate.dialect.MySQL5InnoDBDialect
spring.jpa.hibernate.ddl-auto = create
spring.datasource.url=jdbc:mysql://${MYSQL_HOST:localhost}:${MYSQL_PORT:9001}/${MYSQL_DATABASE:test-db}
spring.datasource.username=${MYSQL_USER:admin}
spring.datasource.password=${MYSQL_PASSWORD:nimda}
## Dockerfile
FROM openjdk:11
RUN apt-get update
ADD target/mysql-crud-*.jar mysql-crud.jar
ENTRYPOINT ["java", "-jar", "mysql-crud.jar"]
## docker-compose.yml
version: '3.9'
services:
  dockersql:
    image: mysql:latest
    restart: always
    container_name: dockersql
    ports:
      - "3306:3306"
    env_file: .env
    environment:
      - MYSQL_DATABASE=$MYSQL_DATABASE
      - MYSQL_USER=$MYSQL_USER
      - MYSQL_PASSWORD=$MYSQL_PASSWORD
      - MYSQL_ROOT_PASSWORD=$MYSQL_ROOT_PASSWORD
    networks:
      - crud-network
  mycrud:
    depends_on:
      - dockersql
    restart: always
    container_name: mycrud
    env_file: .env
    environment:
      - MYSQL_HOST=dockersql:3306
      - MYSQL_DATABASE=$MYSQL_DATABASE
      - MYSQL_USER=$MYSQL_USER
      - MYSQL_PASSWORD=$MYSQL_PASSWORD
      - MYSQL_ROOT_PASSWORD=$MYSQL_ROOT_PASSWORD
    build: .
    networks:
      - crud-network
networks:
  crud-network:
    driver: bridge
# .env file
MYSQL_DATABASE=test-db
MYSQL_USER=admin
MYSQL_PASSWORD=nimda
MYSQL_ROOT_PASSWORD=nimda
Can anyone help me?
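One detail that stands out when reading the datasource URL together with the compose file: spring.datasource.url already appends ${MYSQL_PORT} after ${MYSQL_HOST}, so MYSQL_HOST=dockersql:3306 expands to something like jdbc:mysql://dockersql:3306:9001/test-db. A sketch of the split that matches the URL template:

# Sketch: keep MYSQL_HOST a bare hostname and pass the port separately, so
# jdbc:mysql://${MYSQL_HOST}:${MYSQL_PORT}/${MYSQL_DATABASE} expands cleanly.
mycrud:
  environment:
    - MYSQL_HOST=dockersql
    - MYSQL_PORT=3306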
Even better, add a health check for MySQL and make it a condition for Spring Boot to start:
dockersql:
  healthcheck:
    test: [ "CMD-SHELL", 'mysql --user=${MYSQL_USER} --database=${MYSQL_DATABASE} --password=${MYSQL_PASSWORD} --execute="SELECT count(table_name) > 0 FROM information_schema.tables;"' ]
mycrud:
  depends_on:
    dockersql:
      condition: service_healthy
The --execute query can also be modified to perform an application-specific health check, for example checking that a specific table exists.
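A hedged sketch of that variant (users is a hypothetical table name; selecting from a missing table makes mysql exit non-zero, so the check fails until the table is created):

dockersql:
  healthcheck:
    # "users" is a placeholder - substitute a table your schema actually creates
    test: [ "CMD-SHELL", 'mysql --user=${MYSQL_USER} --password=${MYSQL_PASSWORD} --database=${MYSQL_DATABASE} --execute="SELECT 1 FROM users LIMIT 1;"' ]
    interval: 5s
    retries: 10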
I found out that my Spring Boot app tries to connect before MySQL is completely up and running, and that is what causes the error.
After adding
mycrud:
  depends_on:
    - dockersql
  container_name: mycrud
  restart: on-failure
This resolved my issue.

docker-compose mysql Connection refused

My development environment: Spring Boot 2.x, Gradle, MyBatis.
I simply build the project through docker-compose for API testing.
Why do I keep getting connection refused? If I run 'docker-compose up', everything starts without any problem.
I checked the network and it is connected; I usually use 'docker exec A ping B' to check, but I don't know why an error occurs when A is the DB container.
I'll attach the code just in case.
# application.properties
spring.datasource.driver-class-name=com.mysql.cj.jdbc.Driver
spring.datasource.url=jdbc:mysql://test-db:3306/exdocker?serverTimezone=Asia/Seoul
spring.datasource.username=dockerex
spring.datasource.password=12341234
# docker-compose.yml
version: "3"
services:
test-db:
container_name: test-db
image: mysql:8.0
environment:
MYSQL_DATABASE: "exdocker"
MYSQL_ROOT_PASSWORD: "root"
MYSQL_PASSWORD: "12341234"
MYSQL_USER: "dockerex"
restart: always
networks:
- exdocker-network
ports:
- 3307:3306
test-app:
links:
- test-db
build:
context: .
dockerfile: Dockerfile
ports:
- 8081:8080
restart: always
networks:
- exdocker-network
depends_on:
- test-db
networks:
exdocker-network:
ipam:
driver: default
config:
- subnet: 172.202.0.1/16
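Assuming the network itself is fine (the ping test suggests it is), the usual cause of this symptom is timing: a plain depends_on list only waits for the test-db container to start, not for mysqld inside it to accept connections, and MySQL 8 can take several seconds to initialize. A sketch of a readiness gate using mysqladmin ping (depends_on conditions require a Compose version that supports them):

test-db:
  healthcheck:
    # ping succeeds only once mysqld is actually accepting connections
    test: [ "CMD", "mysqladmin", "ping", "-h", "localhost", "-udockerex", "-p12341234" ]
    interval: 5s
    retries: 10
test-app:
  depends_on:
    test-db:
      condition: service_healthy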

How to change permissions for Gemfile created with Docker for Windows?

I'm working with a dockerized Rails application; however, whenever I make a change to the Gemfile, the file's ownership changes to an unknown user, and I'm unable to do anything with the file, whether I'm inside the container or not.
How can I make it so that I'm able to manipulate the file again?
Here's my docker-compose.yml:
version: '3'
networks:
  backend:
volumes:
  postgres:
services:
  postgres:
    image: postgres
    ports:
      - ${APP:-5432}:5432
    volumes:
      - ./db/dumps:/db/dumps # Mount the DB dumps folder
      - postgres:/var/lib/postgresql/data
    networks:
      - backend
    environment:
      POSTGRES_PASSWORD: 3x4mpl3
  web:
    build:
      context: .
      dockerfile: Dockerfile
    command: rails s -b 0.0.0.0 -p 3000
    entrypoint: /app/bin/entrypoint.sh
    volumes:
      - .:/app
    ports:
      - ${APP_WEB_PORT:-3000}:3000
    stdin_open: true
    tty: true
    depends_on:
      - postgres
    networks:
      - backend
    environment:
      DATABASE_URL: postgres://postgres:3x4mpl3@postgres:5432/app_development
      RAILS_ENV: development
Try changing the file ownership from outside of Docker; after that you can manipulate the file again.
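On a Linux or WSL host that typically means a one-off sudo chown $USER Gemfile in the project directory. To stop it recurring, a common approach is to run the container as the host user, so files created on the bind mount stay owned by you; a sketch (the 1000:1000 IDs are an assumption and should match the output of id -u and id -g on your machine):

web:
  user: "1000:1000"   # assumed host UID:GID
  volumes:
    - .:/app          # files created here now inherit that UID:GID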
