Can I deploy a containerized ElasticSearch on Heroku with a web app? - docker

I have a toy MVP application that I'd like to deploy on Heroku. It has an ElasticSearch dependency expressed in a docker-compose file. The smallest ES add-on for Heroku is $67/month, which is more than I want to spend on an MVP. I'm trying to figure out how to deploy ElasticSearch alongside the web app in a containerized fashion. All the guides I've seen for multiple processes use a Dockerfile, not a docker-compose file. Can I express this in a heroku.yml configuration?
Here is my docker-compose file:
version: '3.6'
services:
  web:
    image: denoland/deno:latest
    container_name: my_app
    build: .
    ports:
      - 3001:3001
    environment:
      - DENO_ENV=local
      - ES_HOST=elasticsearch
      - DENO_PORT=3001
      - ELASTIC_URL=http://elasticsearch:9200
    volumes:
      - .:/usr/src/app
    command: deno run --allow-net --allow-read --allow-env src/main.ts
    links:
      - elasticsearch
    depends_on:
      - elasticsearch
    networks:
      - es-net
  elasticsearch:
    container_name: es-container
    image: docker.elastic.co/elasticsearch/elasticsearch:8.5.2
    volumes:
      - esdata:/usr/share/elasticsearch/data
    environment:
      - xpack.security.enabled=false
      - bootstrap.memory_lock=true
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
      - discovery.type=single-node
    logging:
      driver: none
    ports:
      - 9300:9300
      - 9200:9200
    networks:
      - es-net
volumes:
  esdata:
networks:
  es-net:
    driver: bridge

Not unless you want to pay for Private Spaces, and even then I don't think it would work properly: Heroku's Docker support does not include volume mounts, and internal routing is only available for apps in Private Spaces.
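For context, heroku.yml only maps Dockerfiles to process types; it has no equivalent of Compose's volumes, networks, or sidecar services. A minimal sketch for just the web process (run command taken from your Compose file) would look like the following, and there is nowhere in it to declare the elasticsearch service alongside:

build:
  docker:
    web: Dockerfile
run:
  web: deno run --allow-net --allow-read --allow-env src/main.ts

Anything beyond that single long-running process per type has to be a separate app or an add-on.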

Related

Restarting docker-compose after running rake assets:precompile resets the changes

I'm currently trying to use Zammad Open Source, a helpdesk ticketing system, with docker-compose. I have used it on a non-Docker setup before, where I edited the HTML views and added some logos and extra features that my team requires. However, we need to move to a Docker-based instance soon.
I succeeded in installing it normally, and the default compose file does mount a volume when bringing the containers up. After that, I apply the changes the same way I did on my existing setup. The changes require me to run
rake assets:precompile
and restart only the Rails container. After restarting it, everything works and the changes are reflected.
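In the Docker setup that translates to something like the following (service name taken from the compose file below; depending on the image, the rake call may need a bundle exec prefix):

docker-compose exec zammad-railsserver rake assets:precompile
docker-compose restart zammad-railsserver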
However, once I run
docker-compose restart
All the containers restart (as expected), but the Rails server seems to discard every single change I made, and everything looks as if I had just brought up a fresh container.
What I've tried:
Applied the changes, restarted the Rails container, and committed the container to a custom image, then pulled from it. Didn't work.
Edited the Dockerfile and entrypoint scripts to apply the changes and also run the precompile during installation. Didn't work.
docker-compose.yml
version: '3'
services:
  zammad-backup:
    command: ["zammad-backup"]
    depends_on:
      - zammad-railsserver
      - zammad-postgresql
    entrypoint: /usr/local/bin/backup.sh
    environment:
      - BACKUP_SLEEP=86400
      - HOLD_DAYS=10
      - POSTGRESQL_USER=${POSTGRES_USER}
      - POSTGRESQL_PASSWORD=${POSTGRES_PASS}
    image: ${IMAGE_REPO}:zammad-postgresql${VERSION}
    restart: ${RESTART}
    volumes:
      - zammad-backup:/var/tmp/zammad
      - zammad-data:/opt/zammad
  zammad-elasticsearch:
    environment:
      - discovery.type=single-node
    image: ${IMAGE_REPO}:zammad-elasticsearch${VERSION}
    restart: ${RESTART}
    volumes:
      - elasticsearch-data:/usr/share/elasticsearch/data
  zammad-init:
    command: ["zammad-init"]
    depends_on:
      - zammad-postgresql
    environment:
      - MEMCACHE_SERVERS=${MEMCACHE_SERVERS}
      - POSTGRESQL_USER=${POSTGRES_USER}
      - POSTGRESQL_PASS=${POSTGRES_PASS}
      - REDIS_URL=${REDIS_URL}
    image: ${IMAGE_REPO}:zammad${VERSION}
    restart: on-failure
    volumes:
      - zammad-data:/opt/zammad
  zammad-memcached:
    command: memcached -m 256M
    image: memcached:1.6.10-alpine
    restart: ${RESTART}
  zammad-nginx:
    command: ["zammad-nginx"]
    expose:
      - "8080"
    depends_on:
      - zammad-railsserver
    image: ${IMAGE_REPO}:zammad${VERSION}
    restart: ${RESTART}
    volumes:
      - zammad-data:/opt/zammad
  zammad-postgresql:
    environment:
      - POSTGRES_USER=${POSTGRES_USER}
      - POSTGRES_PASSWORD=${POSTGRES_PASS}
    image: ${IMAGE_REPO}:zammad-postgresql${VERSION}
    restart: ${RESTART}
    volumes:
      - postgresql-data:/var/lib/postgresql/data
  zammad-railsserver:
    command: ["zammad-railsserver"]
    depends_on:
      - zammad-memcached
      - zammad-postgresql
      - zammad-redis
    environment:
      - MEMCACHE_SERVERS=${MEMCACHE_SERVERS}
      - REDIS_URL=${REDIS_URL}
    image: ${IMAGE_REPO}:zammad${VERSION}
    restart: ${RESTART}
    volumes:
      - zammad-data:/opt/zammad
  zammad-redis:
    image: redis:6.2.5-alpine
    restart: ${RESTART}
  zammad-scheduler:
    command: ["zammad-scheduler"]
    depends_on:
      - zammad-memcached
      - zammad-railsserver
      - zammad-redis
    environment:
      - MEMCACHE_SERVERS=${MEMCACHE_SERVERS}
      - REDIS_URL=${REDIS_URL}
    image: ${IMAGE_REPO}:zammad${VERSION}
    restart: ${RESTART}
    volumes:
      - zammad-data:/opt/zammad
  zammad-websocket:
    command: ["zammad-websocket"]
    depends_on:
      - zammad-memcached
      - zammad-railsserver
      - zammad-redis
    environment:
      - MEMCACHE_SERVERS=${MEMCACHE_SERVERS}
      - REDIS_URL=${REDIS_URL}
    image: ${IMAGE_REPO}:zammad${VERSION}
    restart: ${RESTART}
    volumes:
      - zammad-data:/opt/zammad
volumes:
  elasticsearch-data:
    driver: local
  postgresql-data:
    driver: local
  zammad-backup:
    driver: local
  zammad-data:
    driver: local
I found a solution after a few headaches here and there.
What I did:
I dove into the files and found out that it pulls an image from GitHub, so I downloaded that image. After extracting the tar.gz file, I applied all the changes I needed, repacked the tar.gz, and edited the Dockerfile to point to the new image.
After that, I needed to force docker-compose to rebuild the image. With that, the changes persist even after restarts.
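A sketch of that last step, assuming the edited Dockerfile is wired into the compose file as a build: entry for the Rails service:

docker-compose build --no-cache zammad-railsserver
docker-compose up -d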

Trying to connect all my docker containers to a separate MariaDB container

So, I've set up several containerized apps that use MariaDB as their db backend, using docker-compose.
Each container is set up as needed, and therefore MariaDB gets installed separately in every container that uses the db.
For example, I have some containers (phpMyAdmin, NGiNX-PM, etc.) that use MariaDB, and each has its own copy installed within its container. I also have a separate MariaDB container that I would rather share among the other containerized apps, so that I'd only have to maintain one instance of the db.
I've searched for a solution, but no luck. Needless to say, I'm a noob at docker.
The only thing I can come up with is that all the apps would need to be defined in the same docker-compose.yaml file to use the same db. That would make for a very long file if I had many containers running, and I'd prefer to have a directory per app, with all of the app's contents available in that one location.
I'm sure there is a way, I just haven't been able to figure it out.
This is the setup I've tried, but I am unable to get it to work:
(/docker/apps/mariadb/mariadb.yml)
version: '3.9'
networks:
  NET:
    external: true
services:
  #############################################################################################
  # MariaDB (docker-compose -f mariadb.yml up -d)                                             #
  #############################################################################################
  mariadb:
    image: jsurf/rpi-mariadb:latest
    restart: unless-stopped
    environment:
      - TZ=${TIMEZONE}
      - MYSQL_DATABASE=dockerApps
      - MYSQL_USER=root
      - MYSQL_ROOT_PASSWORD=${MYSQL_ROOT_PASSWORD}
      - MYSQL_PASSWORD=${MYSQL_PASSWORD}
    volumes:
      - $HOME/docker/apps/mariadb/db:/var/lib/mysql
    expose:
      - '3306'
    networks:
      - NET
(/docker/apps/nginxpm/nginxpm.yml)
version: '3.9'
networks:
  NET:
    external: true
services:
  #############################################################################################
  # NGiNX Proxy Manager (docker-compose -f nginxpm.yml up -d)                                 #
  #############################################################################################
  nginxpm:
    container_name: NGiNX_Proxy_Manager
    image: 'jc21/nginx-proxy-manager:latest'
    ports:
      - '80:80'
      - '81:81'
      - '443:443'
    volumes:
      - ./config.json:/app/config/production.json
      - ./data:/data
      - ./letsencrypt:/etc/letsencrypt
    networks:
      - NET
    depends_on:
      - mariadb
(/docker/apps/phpmyadmin/phpmyadmin.yml)
version: "3.9"
networks:
NET:
external: true
services:
#############################################################################################
# phpMyAdmin (docker-compose up -d -OR- docker-compose -f phpmyadmin.yml up -d) #
#############################################################################################
phpmyadmin:
image: phpmyadmin:latest
container_name: phpMyAdmin
restart: unless-stopped
environment:
PMA_HOST: mariadb
PMA_USER: root
PMA_PASSWORD: ${MYSQL_PASSWORD}
volumes:
# Must add ServerName directive to end of file "ServerName 127.0.0.1"
- $HOME/docker/apps/phpmyadmin/apache2.conf:/etc/apache2/apache2.conf
ports:
- '8004:80'
networks:
- NET
Any help in this matter is greatly appreciated.
OK, so after some more reading and testing, I found the answer to my issue. I was assuming that "depends_on" was supposed to connect the containers somehow. Not true!
I found that "external_links" is the correct way to connect them.
So, my final docker-compose file looks like this:
(/docker/apps/nginxpm/nginxpm.yml)
version: '3.9'
networks:
  NET:
    external: true
services:
  #############################################################################################
  # NGiNX Proxy Manager (docker-compose -f nginxpm.yml up -d)                                 #
  #############################################################################################
  nginxpm:
    container_name: NGiNX_Proxy_Manager
    image: 'jc21/nginx-proxy-manager:latest'
    ports:
      - '80:80'
      - '81:81'
      - '443:443'
    volumes:
      - ./config.json:/app/config/production.json
      - ./data:/data
      - ./letsencrypt:/etc/letsencrypt
    networks:
      - NET
    external_links:
      - mariadb
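Note that because NET is declared external in every file, the network has to be created once before any of the stacks are brought up, e.g.:

docker network create NET
docker-compose -f mariadb.yml up -d
docker-compose -f nginxpm.yml up -d
docker-compose -f phpmyadmin.yml up -d

Compose also registers each service's name as a DNS alias on the networks it joins, including external ones, which is generally why PMA_HOST: mariadb can resolve from the phpMyAdmin container on the same network.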

Visual Studio Docker Compose - stop and remove containers after debug session ends

I have an ASP.NET Core web app for which I have added logging to ElasticSearch & Kibana.
I am running it on a Windows host and the containers are Linux.
The docker-compose file is set to first start up ElasticSearch, then Kibana, and finally the web application, as below:
version: '3.4'
services:
  elasticsearch:
    container_name: elasticsearch
    image: docker.elastic.co/elasticsearch/elasticsearch:7.9.2
    ports:
      - 9200:9200
    volumes:
      - elasticsearch-data:/usr/share/elasticsearch/data
    environment:
      - xpack.monitoring.enabled=true
      - xpack.watcher.enabled=false
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
      - discovery.type=single-node
    networks:
      - elastic
  kibana:
    container_name: kibana
    image: docker.elastic.co/kibana/kibana:7.9.2
    ports:
      - 5601:5601
    depends_on:
      - elasticsearch
    environment:
      - ELASTICSEARCH_URL=http://localhost:9200
    networks:
      - elastic
  hyena.webapp:
    image: ${DOCKER_REGISTRY-}hyena_image
    build:
      context: .
      dockerfile: Hyena.WebApp/Dockerfile
    ports:
      - "5001:443"
      - "5000:80"
    container_name: "hyena_container"
    volumes:
      - type: bind
        source: /c/docker/hyena
        target: /data
    environment:
      - ELASTICSEARCH_URL=http://elasticsearch:9200
    networks:
      - elastic
networks:
  elastic:
    driver: bridge
volumes:
  elasticsearch-data:
When I hit Debug (F5), VS starts everything up; I can see in Docker Desktop that the containers are running, and the apps communicate with each other (i.e. logs from the web app appear in Kibana).
Now, when I stop Visual Studio debugging, my web app is no longer accessible through the browser, yet both ElasticSearch and Kibana keep working.
Docker Desktop and the docker container ls command both show that the containers are running.
So, my questions are:
If the containers are running, why can't I access my web app anymore?
Why is it only happening with my web app and not with Kibana or ElasticSearch?
How can I make VS stop and remove these containers after a debug session?
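For reference, cleaning up manually does work by running docker-compose down against the same compose file; I'd just like VS to do the equivalent automatically:

docker-compose down     # stops and removes the containers plus the default network
docker-compose down -v  # same, but also removes named volumes (here: elasticsearch-data)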
Kind regards,
Bartosz

ElasticSearch container won't start up in Docker

I'm attempting to run this script in Win10 to configure everything.
All containers except the elastic container initialize correctly; Elastic times out and then exits with code 124.
https://imgur.com/a/FO8ckwc (some log outputs)
I'm running the script below, where I didn't touch anything except the Windows-side ports (see the comments):
https://pastebin.com/7Z8Gnenr
version: '3.1'
# Generated on 23-04-2018
services:
  alfresco:
    image: openmbeeguest/mms-repo:3.2.4-SNAPSHOT
    environment:
      CATALINA_OPTS: "-Xmx1G -XX:+UseConcMarkSweepGC"
    depends_on:
      - postgresql
      - activemq
      - elastic
    networks:
      - internal
    ports:
      - 8080:8080
    volumes:
      - alf_logs:/usr/local/tomcat/logs
      - alf_data:/opt/alf_data
    tmpfs:
      - /tmp
      - /usr/local/tomcat/temp/
      - /usr/local/tomcat/work/
  solr:
    image: openmbeeguest/mms-solr:3.2.4-SNAPSHOT
    environment:
      CATALINA_OPTS: "-Xmx1G -XX:+UseG1GC -XX:+ParallelRefProcEnabled -XX:G1HeapRegionSize=8m -XX:MaxGCPauseMillis=200"
    depends_on:
      - alfresco
    networks:
      - internal
    volumes:
      - solr_logs:/usr/local/tomcat/logs/
      - solr_content_store:/opt/solr/ContentStore
    tmpfs:
      - /tmp
      - /usr/local/tomcat/temp/
      - /usr/local/tomcat/work/
  activemq:
    image: openmbeeguest/mms-activemq:3.2.4-SNAPSHOT
    ports:
      # I changed these Windows-side ports
      - 61615:61616
      - 61617:61614
      - 8162:8161
      # ORIGINAL
      #- 61616:61616
      #- 61614:61614
      #- 8161:8161
    volumes:
      - activemq-data-volume:/data/activemq
      - activemq-log-volume:/var/log/activemq
      - activemq-conf-volume:/opt/activemq/conf
    environment:
      - ACTIVEMQ_ADMIN_LOGIN=admin
      - ACTIVEMQ_ADMIN_PASSWORD=admin
    networks:
      - internal
  elastic:
    image: openmbeeguest/mms-elastic:3.2.4-SNAPSHOT
    environment:
      CLEAN: 'false'
    ports:
      - 9200:9200
    volumes:
      - elastic-data-volume:/usr/share/elasticsearch/data
    networks:
      - internal
  postgresql:
    image: openmbeeguest/mms-postgres:3.2.4-SNAPSHOT
    volumes:
      - pgsql_data:/var/lib/postgresql/data
    environment:
      - POSTGRES_USER=alfresco
      - POSTGRES_PASSWORD=alfresco
      - POSTGRES_DB=alfresco
    networks:
      - internal
volumes:
  alf_logs:
  alf_data:
  solr_logs:
  solr_content_store:
  pgsql_data:
  activemq-data-volume:
  activemq-log-volume:
  activemq-conf-volume:
  elastic-data-volume:
  nginx-external-volume:
networks:
  internal:
Any help would be greatly appreciated!
Do you have logs from the elasticsearch container to share? Without them it's hard to tell why it's exiting.
One thing that's tripped me up repeatedly, though, is the vm.max_map_count setting: the default on Docker hosts is too low for Elasticsearch to function, so it's a good first thing to check.
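Both checks are one-liners (service name elastic as in the compose file; the sysctl must run on the Docker host, which for Docker Desktop on Windows means inside its VM, and 262144 is the minimum Elasticsearch's docs call for):

docker-compose logs elastic
sysctl -w vm.max_map_count=262144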

Rancher running an APP that needs internet access

Rancher v1.6.10, Docker v17.06.2-ce
I'm deploying a stack via the Rancher UI in which one of the Docker containers runs an app that connects to Dropbox over the internet. But the app isn't able to access the internet.
However, if I don't use Rancher and simply run docker-compose up natively, it all works fine.
My guess is that the networking Rancher creates is the problem.
Any advice, please?
My docker compose file:
version: '2'
services:
  elasticsearch1:
    image: docker.elastic.co/elasticsearch/elasticsearch:5.6.3
    container_name: es1
    environment:
      - cluster.name=idc-cluster
      - bootstrap.memory_lock=true
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
      - xpack.security.enabled=false
    ulimits:
      memlock:
        soft: -1
        hard: -1
    mem_limit: 1g
    volumes:
      - esdata1:/usr/share/elasticsearch/data
    ports:
      - 9200:9200
    networks:
      - docker-elk
  idcdb:
    image: postgres:9.6
    ports:
      - "5432:5432"
    environment:
      - POSTGRES_PASSWORD=DriveMe
      - POSTGRES_USER=idc
      - POSTGRES_DB=idc
    volumes:
      - pgdata:/var/lib/db
  idcredis:
    image: redis:4.0
    environment:
      - ALLOW_EMPTY_PASSWORD=yes
    ports:
      - '6379:6379'
    volumes:
      - redisdata:/var/lib/redis
  booking-service:
    environment:
      - PORT=8085
      - PROFILE=integration
    ports:
      - 8085:8085
    image: idc/idc-booking-service
    depends_on:
      - idcdb
      - idcredis
  notification-service:
    environment:
      - PORT=8087
      - PROFILE=integration
    ports:
      - 8087:8087
    image: idc/idc-notification-service
    depends_on:
      - idcredis
  analytics-service:
    environment:
      - PORT=8088
      - PROFILE=integration
    ports:
      - 8088:8088
    image: idc/idc-analytics-service
    depends_on:
      - idcredis
      - elasticsearch1
  kibana:
    image: docker.elastic.co/kibana/kibana:5.6.3
    environment:
      - "ELASTICSEARCH_URL=http://elasticsearch1:9200"
    networks:
      - docker-elk
volumes:
  pgdata: {}
  redisdata: {}
  esdata1:
    driver: local
networks:
  docker-elk:
    driver: bridge
You could try specifying the host network when starting the container:
--net=host
If that does not solve your problem, edit the NetworkManager config:
sudo gedit /etc/NetworkManager/NetworkManager.conf
and comment out the following line, so that it reads #dns=dnsmasq:
dns=dnsmasq
then restart the network manager:
sudo restart network-manager
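To narrow down whether it is DNS or routing, it can also help to test from inside the affected container first (substitute the name of the container that talks to Dropbox; ping and nslookup availability depends on the image):

docker exec -it <container> ping -c 3 dropbox.com
docker exec -it <container> nslookup dropbox.com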
You could use a Rancher LB and add it to your application as follows:
In the stack where your application is, click the Add Service button and select Add a Load Balancer.
Make sure that where it says Access it is set to Public.
In the Request Host field, add the desired URL, such as: mylocal.dev
Then add port 80 so it will be accessible from the outside world on port 80.
Select the service you want the LB to apply to, along with the internal application port.
That's all :) now you should be able to connect to mylocal.dev from the outside world.
