Docker Rails app with searchkick/elasticsearch - ruby-on-rails

I'm porting my Rails app from my local machine into a Docker container and running into an issue with Elasticsearch/Searchkick. I can get it working temporarily, but I'm wondering if there is a better way. Basically, the port for Elasticsearch isn't matching up with the default localhost:9200 that Searchkick uses. I have used "docker inspect" on the Elasticsearch container to get the actual IP, then set the ENV['ELASTICSEARCH_URL'] variable the way the Searchkick docs say, and it works. The problem is that this is a pain: if I restart or change the containers, the IP sometimes changes and I have to go through the whole process again. Here is my docker-compose.yml:
version: '2'
services:
  web:
    build: .
    command: rails server -p 3000 -b '0.0.0.0'
    volumes:
      - .:/living-recipe
    ports:
      - '3000:3000'
    env_file:
      - .env
    depends_on:
      - postgres
      - elasticsearch
  postgres:
    image: postgres
  elasticsearch:
    image: elasticsearch
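For reference, the manual lookup I keep doing looks roughly like this (the container name is a placeholder for whatever docker ps shows for the elasticsearch service):
docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' <elasticsearch-container-name>
# then paste the printed IP into ENV['ELASTICSEARCH_URL'] as http://<that-ip>:9200
and that IP is exactly what changes whenever the containers are recreated.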

Use elasticsearch:9200 instead of localhost:9200. Docker Compose exposes the container via its service name.

Here is the docker-compose.yml that is working for me.
Docker Compose exposes each container via its service name, so you can set the
ELASTICSEARCH_URL: http://elasticsearch:9200 environment variable in your Rails application container:
version: "3"
services:
db:
image: postgres:9.6
restart: always
volumes:
- /tmp/db:/var/lib/postgresql/data
environment:
POSTGRES_PASSWORD: password
elasticsearch:
image: docker.elastic.co/elasticsearch/elasticsearch:7.9.2
volumes:
- .:/app
ports:
- 9200:9200
environment:
- discovery.type=single-node
ulimits:
memlock:
soft: -1
hard: -1
api:
build: .
command: bash -c "rm -f tmp/pids/server.pid && bundle exec rails s -p 3000 -b '0.0.0.0'"
volumes:
- ".:/app"
ports:
- "3001:3000"
depends_on:
- db
environment:
DB_HOST: db
DB_PASSWORD: password
ELASTICSEARCH_URL: http://elasticsearch:9200
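To sanity-check the name-based wiring, you can hit Elasticsearch from inside the Rails container; a minimal check, assuming the service names above and that curl is installed in the api image:
# run after docker-compose up -d
docker-compose exec api curl -s http://elasticsearch:9200
# a JSON response with the cluster info means the api container resolves "elasticsearch" by service name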

You don't want to try to map the IP address for elasticsearch manually, as it will change.
Swap out depends_on for links. This will create the same dependency, but also allows the containers to be reached via service name.
Containers for the linked service will be reachable at a hostname identical to the alias, or the service name if no alias was specified.
Links also express dependency between services in the same way as depends_on, so they determine the order of service startup.
Docker Compose File Reference - Links
Then in your rails app where you're setting ENV['ELASTICSEARCH_URL'], use elasticsearch instead.
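For the compose file in the question, a simple way to do that (a sketch, assuming you keep the .env file the web service already loads via env_file, and that Searchkick reads ELASTICSEARCH_URL from the environment as the question states) is to set the URL there:
# hypothetical addition to the .env file loaded by the web service
ELASTICSEARCH_URL=http://elasticsearch:9200
That way the hostname stays stable no matter how often the containers are recreated.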

Related

How to access wacore container which is exiting due to file error present within - "/opt/whatsapp/bin/wait_on_postgres.sh"

I am launching containers via docker-compose, but 2 out of 3 containers are failing with: "exec user process caused "exec format error"".
The error is raised while executing the file at /opt/whatsapp/bin/wait_on_postgres.sh; I need to add #!/bin/bash at the top of this file.
The problem is that the container exits almost immediately, so how can I access this file to make the necessary changes?
Below is the docker-compose.yml I am using:
version: '3'
volumes:
  whatsappMedia:
    driver: local
  postgresData:
    driver: local
services:
  db:
    image: postgres:10.6
    command: "-p 3306 -N 500"
    restart: always
    environment:
      POSTGRES_PASSWORD: testpass
      POSTGRES_USER: root
    expose:
      - "33060"
    ports:
      - "33060:3306"
    volumes:
      - postgresData:/var/lib/postgresql/data
    network_mode: bridge
  wacore:
    image: docker.whatsapp.biz/coreapp:v${WA_API_VERSION:?Run docker-compose with env var WA_API_VERSION (ex. WA_API_VERSION=2.31.4 docker-compose <command> <options>)}
    command: ["/opt/whatsapp/bin/wait_on_postgres.sh", "/opt/whatsapp/bin/launch_within_docker.sh"]
    volumes:
      - whatsappMedia:/usr/local/wamedia
    env_file:
      - db.env
    environment:
      # This is the version of the docker templates being used to run WhatsApp Business API
      WA_RUNNING_ENV_VERSION: v2.2.3
      ORCHESTRATION: DOCKER-COMPOSE
    depends_on:
      - "db"
    network_mode: bridge
    links:
      - db
  waweb:
    image: docker.whatsapp.biz/web:v${WA_API_VERSION:?Run docker-compose with env var WA_API_VERSION (ex. WA_API_VERSION=2.31.4 docker-compose <command> <options>)}
    command: ["/opt/whatsapp/bin/wait_on_postgres.sh", "/opt/whatsapp/bin/launch_within_docker.sh"]
    ports:
      - "9090:443"
    volumes:
      - whatsappMedia:/usr/local/wamedia
    env_file:
      - db.env
    environment:
      WACORE_HOSTNAME: wacore
      # This is the version of the docker templates being used to run WhatsApp Business API
      WA_RUNNING_ENV_VERSION: v2.2.3
      ORCHESTRATION: DOCKER-COMPOSE
    depends_on:
      - "db"
      - "wacore"
    links:
      - db
      - wacore
    network_mode: bridge
The problem got resolved by using a 64-bit guest OS image.
I was running this container on 32-bit CentOS, which was causing the error.
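As for the original question of getting at a file inside a container that exits immediately, one general approach (a sketch, not specific to the WhatsApp images) is to override the entrypoint so you land in a shell, or to copy the file out of the stopped container, edit it, and copy it back:
# start a throwaway container from the image with a shell instead of the failing entrypoint
# (assumes the image ships /bin/sh; substitute your image name and tag)
docker run --rm -it --entrypoint /bin/sh docker.whatsapp.biz/coreapp:v2.31.4
# or copy the script out of the stopped container, edit it locally, and copy it back
docker cp <container-id>:/opt/whatsapp/bin/wait_on_postgres.sh .
docker cp wait_on_postgres.sh <container-id>:/opt/whatsapp/bin/wait_on_postgres.sh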

Docker ports not being exposed properly

version: '3'
services:
  app:
    build: .
    ports:
      - "8000:8000"
    volumes:
      - .:/srv/redditaurus
    environment:
      - REDDIT_KEY=${REDDIT_KEY}
      - REDDIT_SECRET=${REDDIT_SECRET}
    links:
      - mysql:mysql
  mysql:
    image: mysql
    environment:
      MYSQL_ROOT_PASSWORD: root
    ports:
      - "3306:3306"
    # volumes:
    #   - ./mysql:/var/lib/mysql/
  nginx:
    image: nginx
    ports:
      - "80:80"
This is my docker-compose.yml. The weirdest thing is happening. I can visit localhost:8000 and get the redditaurus app without any issue. However, if I try to do the same thing with localhost:80, or localhost:3306 from a mysql terminal, I'll get access denied or ERR_EMPTY_RESPONSE.
If I try 0.0.0.0:80, I get the default nginx page, so that's okay, but why won't localhost work?
MySQL refuses to be served on either localhost or 0.0.0.0. I've tried accessing it from Sequel Pro, from inside a linked container, and from my host machine's console, and nothing can get into it. If I exec into the SQL container, I can log in just fine, so it's not a password issue.
Why can't I get to my containers normally? :(
You are missing some configuration properties. Try this:
version: '3'
services:
  app:
    build: .
    ports:
      - "8000:8000"
    volumes:
      - .:/srv/redditaurus
    environment:
      - REDDIT_KEY=${REDDIT_KEY}
      - REDDIT_SECRET=${REDDIT_SECRET}
    links:
      - mysql:mysql
  mysql:
    image: mysql
    entrypoint: ['/entrypoint.sh', '--default-authentication-plugin=mysql_native_password']
    environment:
      MYSQL_ROOT_PASSWORD: root
      MYSQL_ALLOW_EMPTY_PASSWORD: "YES"
    ports:
      - "3306:3306"
  nginx:
    image: nginx
    ports:
      - "80:80"
If you want to connect to MySQL from a terminal, run this:
mysql -uroot -proot --protocol=tcp
Next, your nginx binding on port 80 is working correctly.
The problem here is not docker-compose; it is likely in your OS configuration.
I used the mysql:5.7 tag in docker-compose, and that allowed the container to work. I guess the latest tag has some issue with my local env.
Still not sure what's up with nginx, but it's not an issue.
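If you hit something like this again, a quick way to separate port-publishing problems from application problems (a sketch, assuming the service names above and a mysql client on the host):
# show which host ports each container actually publishes
docker-compose ps
# force the mysql client over TCP; without a host it may use the local socket and bypass the published port
mysql -h 127.0.0.1 -P 3306 -uroot -proot --protocol=tcp
# check what nginx returns on the published port
curl -i http://localhost:80/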

Can't connect to postgres container

I define a postgres server in docker-compose.yml:
db:
  image: postgres:9.5
  expose:
    - 5432
Then from another Docker container I tried to connect to this postgres container, but it gives an error with this warning:
data-service_1 | Is the server running on host "db" (172.22.0.2) and accepting
data-service_1 | TCP/IP connections on port 5432?
Why can't the container connect to the other one using the provided information (host="db" and port=5432)?
PS
Full docker-compose.yml:
version: "2"
services:
data-service:
build: .
depends_on:
- db
ports:
- "50051:50051"
db:
image: postgres:9.5
depends_on:
- data-volume
environment:
- POSTGRES_USER=cobrain
- POSTGRES_PASSWORD=a
- POSTGRES_DB=datasets
ports:
- "8000:5432"
expose:
- 5432
volumes_from:
- data-volume
# - container:postgres9.5-data
restart: always
data-volume:
image: busybox
command: echo "I'm data container"
volumes:
- /var/lib/postgresql/data
Solution #1. Same file.
To be able to access the db container, you have to define your other containers in the same docker-compose.yml. When the containers are started, each container can resolve all the other services by name.
Just do
version: '2'
services:
  web:
    image: your/image
  db:
    image: postgres:9.5
If you do not wish to put your other containers into the same docker-compose.yml, there are other solutions:
Solution #2. IP
Run docker inspect <name of your db container> and look for the IPAddress field in the output. Use that IP address as the host to connect to.
Solution #3. Networks
Make your containers join the same network. For that, declare the network at the top level and reference it under each service:
services:
  db:
    networks:
      - myNetwork
networks:
  myNetwork:
Don't forget to do the same for every container you are starting (replacing db with that service's name).
I usually go with the first solution during development. I use apache+php as one container and pgsql as another, with a separate DB for every project. I do not run more than one docker-compose.yml setup at a time, so in this case defining both containers in one .yml config is perfect.
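One more detail worth checking with the compose file from the question: other containers must connect to the container port (5432), not the published host port (the 8000 in 8000:5432, which only exists on the host). A minimal check from inside the data-service container, assuming pg_isready or psql is available in that image:
# port 8000 only exists on the host; inside the compose network the db listens on 5432
docker-compose exec data-service pg_isready -h db -p 5432
# or connect with the credentials from the compose file, if psql is installed
docker-compose exec data-service psql "postgresql://cobrain:a@db:5432/datasets" -c "SELECT 1;"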
The depends_on is not correct here. I would try to use other parameters like links and environment:
version: "2"
services:
data-service:
build: .
links:
- db
ports:
- "50051:50051"
volumes_from: ["db"]
environment:
DATABASE_HOST: db
db:
image: postgres:9.5
environment:
- POSTGRES_USER=cobrain
- POSTGRES_PASSWORD=a
- POSTGRES_DB=datasets
ports:
- "8000:5432"
expose:
- 5432
#volumes_from:
#- data-volume
# - container:postgres9.5-data
restart: always
data-volume:
image: busybox
command: echo "I'm data container"
volumes:
- /var/lib/postgresql/data
This one works for me (not with postgres, but with mysql).
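A quick way to confirm both the alias and the environment variable inside the running data-service container (a sketch; getent may not exist in very minimal images):
# should print "db" and then the IP the alias resolves to
docker-compose exec data-service sh -c 'echo "$DATABASE_HOST" && getent hosts "$DATABASE_HOST"'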

How to map a database from a couchdb container to another container's webapp in the same docker-compose file

I have a web application that runs against a CouchDB database.
The CouchDB container holds multiple databases, of which the web application needs only one.
I am using Docker Compose to run it, but the web application doesn't recognize the database inside the CouchDB container
with the docker-compose.yml file below:
version: "2"
services:
db:
image: mysite/couch:latest
ports:
- "15984:5984"
environment:
DB_USER: admin
DB_PASSWORD: password
DB_NAME: db_new
webapp:
image: mysite/webapp:latest
ports:
- "3050:3000"
links:
- db
- db:db_new
If I run Docker manually as below, it works fine:
docker run --rm -e DB_URL=http://localip:15984/db_new -p 0.0.0.0:3050:3000
Any ideas what I am missing in the docker-compose file?
After spending a lot of time on docker-compose, I got the solution below:
version: "2"
services:
db:
image: mysite/couch:latest
ports:
- "15984:5984"
webapp:
image: mysite/webapp:latest
ports:
- "3050:3000"
links:
- db
environment:
DB_URL: http://admin:password#db:5984/db_new
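You can verify the URL from both sides (a sketch, assuming curl is available in the webapp image); both requests should return the database info JSON if the credentials and database name are right:
# from the host, through the published port
curl http://admin:password@localhost:15984/db_new
# from inside the webapp container, using the service name
docker-compose exec webapp curl http://admin:password@db:5984/db_new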

connect to mysql database from docker container

I have this Dockerfile and it is working as expected. I have a PHP application that connects to MySQL on localhost.
# cat Dockerfile
FROM tutum/lamp:latest
RUN rm -fr /app
ADD crm_220 /app/
ADD crmbox.sql /
ADD mysql-setup.sh /mysql-setup.sh
EXPOSE 80 3306
CMD ["/run.sh"]
When I try to run the database as a separate container, my PHP application still points to localhost. When I connect to the "web" container, I am not able to connect to the "mysql1" container.
# cat docker-compose.yml
web:
  build: .
  restart: always
  volumes:
    - .:/app/
  ports:
    - "8000:8000"
    - "80:80"
  links:
    - mysql1:mysql
mysql1:
  image: mysql:latest
  volumes:
    - "/var/lib/mysql:/var/lib/mysql"
  ports:
    - "3306:3306"
  environment:
    MYSQL_ROOT_PASSWORD: secretpass
How does my PHP application connect to MySQL from another container?
This is similar to the question asked here:
Connect to mysql in a docker container from the host
But I do not want to connect to MySQL from the host machine; I need to connect from another container.
First, you shouldn't publish MySQL's 3306 port if you don't want to call it from the host machine. Second, links are deprecated now; you can use networks instead. I'm not sure about Compose v1, but in v2 all containers in a common docker-compose file are on one network (more about networks) and can resolve each other by name. Example of a docker-compose v2 file:
version: '2'
services:
  web:
    build: .
    restart: always
    volumes:
      - .:/app/
    ports:
      - "8000:8000"
      - "80:80"
  mysql1:
    image: mysql:latest
    volumes:
      - "/var/lib/mysql:/var/lib/mysql"
    environment:
      MYSQL_ROOT_PASSWORD: secretpass
With such a configuration you can resolve the MySQL container by the name mysql1 inside the web container.
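To confirm the resolution from inside the web container, something like this helps (a sketch; getent and the mysql client may not be present in every image):
# resolve the service name from inside the web container
docker-compose exec web getent hosts mysql1
# or try a TCP connection with the mysql client, using the root password from the compose file
docker-compose exec web sh -c 'mysql -h mysql1 -uroot -psecretpass --protocol=tcp -e "SELECT 1"'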
For me, the name resolution never happens. Here is my docker-compose file; I was hoping to connect from the app container to mysql, where the name is mysql and is passed as an env variable to the other container: DB_HOST=mysql
version: "2"
services:
app:
build:
context: ./
dockerfile: /src/main/docker/Dockerfile
image: crossblogs
environment:
- DB_HOST=mysql
- DB_PORT=3306
ports:
- 8080:8080
depends_on:
- mysql
mysql:
image: mysql:5.7.20
environment:
- MYSQL_USER=root
- MYSQL_ALLOW_EMPTY_PASSWORD=yes
- MYSQL_DATABASE=crossblogs
ports:
- 3306:3306
command: mysqld --lower_case_table_names=1 --skip-ssl --character_set_server=utf8 --explicit_defaults_for_timestamp
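One thing worth checking in a setup like this: depends_on only orders container startup, it does not wait until MySQL is actually accepting connections, so the app may simply be connecting too early. A quick diagnostic (a sketch, assuming getent exists in the app image and using the service names above):
# does the name resolve inside the app container?
docker-compose exec app getent hosts mysql
# is MySQL up and accepting connections yet? (root has an empty password per MYSQL_ALLOW_EMPTY_PASSWORD)
docker-compose exec mysql mysql -uroot -e "SELECT 1"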
