In our situation we want to run an app container and a search container as separate services on our ECS cluster.
I need to create a search container that runs under ECS/Fargate and sits behind a load balancer.
I need to create an app container that can talk to our PostgreSQL RDS instance, which already has the FusionAuth tables set up, and that can talk to the search container through the load balancer.
I started with the docker-compose.yaml and deleted the db service. I changed the values in the fusionauth section to:
fusionauth:
  image: fusionauth/fusionauth-app:latest
  depends_on:
    - search
  environment:
    DATABASE_URL: jdbc:postgresql://mypostgre-rds-endpoint:5432/fusionauth
    DATABASE_ROOT_USER: root_user_account
    DATABASE_ROOT_PASSWORD: root_user_password
    DATABASE_USER: fusionauth
    DATABASE_PASSWORD: fusionauth_user_password
    FUSIONAUTH_MEMORY: ${FUSIONAUTH_MEMORY}
    FUSIONAUTH_SEARCH_SERVERS: http://search:9200
    FUSIONAUTH_URL: http://load-balancer-url:9011
  networks:
    - db
    - search
  restart: unless-stopped
  ports:
    - 9011:9011
  volumes:
    - fa_config:/usr/local/fusionauth/config
The first issue is that when I run this container, it goes into maintenance mode and tries to create the database, and I get a locale error. I don't need maintenance mode; I just need it to connect to the existing database. So I think I must have the database URL defined incorrectly.
The second problem is that I need to do the same thing for search: create a container that runs under ECS/Fargate and is accessed through the load balancer.
I am no Docker expert (yet), but I can't find any specific documentation to help me figure out how to configure and deploy the search and app containers.
Any pointers to existing docs, or other help getting this running, would be appreciated.
I know I have to change the search section in the docker-compose file (posted in full below), but I don't yet know what to change or how to build the search container.
Here is the entire docker-compose file as it stands right now:
version: '3'

services:
  search:
    image: docker.elastic.co/elasticsearch/elasticsearch:6.3.1
    environment:
      - cluster.name=fusionauth
      - bootstrap.memory_lock=true
      - "ES_JAVA_OPTS=${ES_JAVA_OPTS}"
    ports:
      - 9200:9200
      - 9300:9300
    networks:
      - search
    restart: unless-stopped
    ulimits:
      memlock:
        soft: -1
        hard: -1
    volumes:
      - es_data:/usr/share/elasticsearch/data

  fusionauth:
    image: fusionauth/fusionauth-app:latest
    depends_on:
      - search
    environment:
      DATABASE_URL: jdbc:postgresql://mypostgre-rds-endpoint:5432/fusionauth
      DATABASE_ROOT_USER: root_user_account
      DATABASE_ROOT_PASSWORD: root_user_password
      DATABASE_USER: fusionauth
      DATABASE_PASSWORD: fusionauth_user_password
      FUSIONAUTH_MEMORY: ${FUSIONAUTH_MEMORY}
      FUSIONAUTH_SEARCH_SERVERS: http://search:9200
      FUSIONAUTH_URL: http://load-balancer-url:9011
    networks:
      - db
      - search
    restart: unless-stopped
    ports:
      - 9011:9011
    volumes:
      - fa_config:/usr/local/fusionauth/config

networks:
  db:
    driver: bridge
  search:
    driver: bridge

volumes:
  db_data:
  es_data:
  fa_config:
AFAICT there is no reason you should be using both DATABASE_ROOT_USER and DATABASE_USER if the db is already set up.
I would suggest you start by removing the root credentials, but other than that it looks pretty similar to a docker-compose setup I've been using for a while.
The only other thing I'd add is that this problem has nothing to do with ECS or Fargate at all as it sits; it's really just a docker-compose file you are having trouble getting running, from what I can tell.
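For reference, a minimal sketch of what that trimmed-down section might look like, reusing the placeholder values from the question (only the application user is kept; root credentials are, as far as I know, only needed when FusionAuth has to create the schema itself):

fusionauth:
  image: fusionauth/fusionauth-app:latest
  environment:
    DATABASE_URL: jdbc:postgresql://mypostgre-rds-endpoint:5432/fusionauth
    # No DATABASE_ROOT_USER / DATABASE_ROOT_PASSWORD: the schema already exists,
    # so only the runtime application user should be required.
    DATABASE_USER: fusionauth
    DATABASE_PASSWORD: fusionauth_user_password
    FUSIONAUTH_SEARCH_SERVERS: http://search:9200
    FUSIONAUTH_URL: http://load-balancer-url:9011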
Related
I want to run Elasticsearch and Kibana with docker-compose.
This is my docker-compose.yml, which I run with docker-compose --env-file dev.env up
Docker Compose
version: '3.1'

services:
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:8.1.1
    container_name: elasticsearch
    environment:
      - cluster.name=elasticsearch-cluster
      - bootstrap.memory_lock=true
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
      - discovery.type=single-node
      - xpack.security.enrollment.enabled=true
      - ELASTICSEARCH_USERNAME=${ELASTICSEARCH_USERNAME}
      - ELASTICSEARCH_PASSWORD=${ELASTICSEARCH_PASSWORD}
    ulimits:
      memlock:
        soft: -1
        hard: -1
    volumes:
      - esdata:/usr/share/elasticsearch/data
    ports:
      - "9200:9200"
      - "9300:9300"
    networks:
      - esnet

  kibana:
    image: docker.elastic.co/kibana/kibana:8.1.1
    container_name: kibana
    environment:
      - ELASTICSEARCH_HOSTS=${ELASTICSEARCH_HOSTS}
      - ELASTICSEARCH_USERNAME=${ELASTICSEARCH_USERNAME}
      - ELASTICSEARCH_PASSWORD=${ELASTICSEARCH_PASSWORD}
      - xpack.security.enabled=true
    depends_on:
      - elasticsearch
    ports:
      - "5601:5601"
    networks:
      - esnet

volumes:
  esdata:
    driver: local
  postgres-data:
    driver: local

networks:
  esnet:
Stacktrace
Error: [config validation of [elasticsearch].username]: value of "elastic" is forbidden. This is a superuser account that cannot write to system indices that Kibana needs to function. Use a service account token instead
I managed to create a service account token, for example for the elastic/kibana service account, but how can I set it in docker-compose? Is there a specific env variable that I should use?
Or is there a way to make it work without using a service account?
I stumbled upon the same issue and tried using the kibana_admin and kibana_system built-in users, but that didn't work either. Maybe you can set the password for these users, but I was not able to.
The elastic user is not allowed write access to the system indices that Kibana needs. This is based on a change by Elastic (Link to Pullrequest).
You should instead use Service Accounts as described in the docs for Service Accounts.
Apparently, according to the docs on creating a Service Account Token, you have to start the Elasticsearch container and create a token before starting the Kibana container.
This is also discussed on the Elasticsearch forums.
Downgrading and using a previous ELK version is also a possibility and is what I did, since I only need the cluster for local development.
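For what it's worth, a rough sketch of that two-step flow (the token name kibana-token is arbitrary, and ELASTICSEARCH_SERVICEACCOUNTTOKEN is the environment-variable form of Kibana's elasticsearch.serviceAccountToken setting):

# 1. With Elasticsearch already up, create a token for the elastic/kibana service account:
curl -u elastic:${ELASTICSEARCH_PASSWORD} -X POST \
  "http://localhost:9200/_security/service/elastic/kibana/credential/token/kibana-token"

Then hand the returned token value to Kibana instead of a username/password (KIBANA_SERVICE_TOKEN is a hypothetical variable you would add to your env file):

kibana:
  environment:
    - ELASTICSEARCH_SERVICEACCOUNTTOKEN=${KIBANA_SERVICE_TOKEN}

The awkward part remains that step 1 requires a running Elasticsearch, so the token has to be created before the Kibana container starts.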
I am trying to deploy a stack with Docker Swarm, using the docker-compose.yaml file below, via the command:
docker stack deploy --with-registry-auth -c docker-compose.yaml project
version: "3.9"
services:
mysql:
image: mysql:8.0
deploy:
replicas: 1
volumes:
- mysql_data:/var/lib/mysql
networks:
- internal
ports:
- 3306:3306
environment:
MYSQL_ROOT_HOST: '%'
MYSQL_ROOT_PASSWORD: root
MYSQL_DATABASE: project_production
MYSQL_USER: username
MYSQL_PASSWORD: password
es01:
image: docker.elastic.co/elasticsearch/elasticsearch:7.13.4
environment:
- node.name=es01
- cluster.name=es-docker-cluster
- discovery.seed_hosts=es02,es03
- cluster.initial_master_nodes=es01,es02,es03
- bootstrap.memory_lock=true
- "ES_JAVA_OPTS=-Xms1024m -Xmx1024m"
ulimits:
memlock:
soft: -1
hard: -1
volumes:
- data01:/usr/share/elasticsearch/data
ports:
- 9200:9200
networks:
- internal
website:
image: registry.gitlab.com/project/project-website:latest
networks:
- internal
deploy:
replicas: 1
ports:
- 3000:3000
environment:
- RAILS_ENV=production
- MYSQL_HOST=mysql
- ES_HOST=http://es01
- project_DATABASE_USERNAME=root
- project_DATABASE_PASSWORD=root
depends_on:
- es01
- mysql
volumes:
data01:
driver: local
data02:
driver: local
data03:
driver: local
mysql_data:
networks:
internal:
external: true
name: project
Before I deploy the stack I also have created the network for the project via the following command:
docker network create -d overlay project
But when I check the logs for the project using the docker logs command, I see the following error stopping my project from starting:
Mysql2::Error: Host '10.0.2.202' is not allowed to connect to this MySQL server
I followed exactly what the documentation suggested, so I am not sure what is wrong with the settings I have come up with!
Question:
How can I connect from project to mysql container in docker swarm?
Based on the documentation, Docker Swarm automatically creates the overlay network for you, so I think you don't need to create an external network by default, unless you have specific needs:
When you initialize a swarm or join a Docker host to an existing swarm, two new networks are created on that Docker host:
an overlay network called ingress, which handles the control and data traffic related to swarm services. When you create a swarm service and do not connect it to a user-defined overlay network, it connects to the ingress network by default.
a bridge network called docker_gwbridge, which connects the individual Docker daemon to the other daemons participating in the swarm.
As Chris also mentioned in the comments, the DB credentials don't match.
OPTIONAL: MYSQL_ROOT_HOST is only necessary if you want to connect as the root user, which is not recommended in production environments. There's also no need to expose the port to the host machine, since the database service will only be used from inside the cluster. So if you still want to use the root user, you can set the variable to allow connections only from inside the cluster, e.g. MYSQL_ROOT_HOST=10.*.*.*.
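As a sketch of the credentials fix: the website service should log in as a user that the mysql service actually creates, rather than root/root, e.g.:

website:
  environment:
    - MYSQL_HOST=mysql
    # Match MYSQL_USER / MYSQL_PASSWORD from the mysql service:
    - project_DATABASE_USERNAME=username
    - project_DATABASE_PASSWORD=password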
I have a docker-compose LAMP stack comprised of three services; a webserver, php and mysql.
The apache2 webroot inside the container is shared with my local machine using a volume, like so:
volumes:
  - ./public_html:/usr/local/apache2/htdocs
When the stack is running, though, I can't edit files inside the shared volume, since my local user is different from the user inside the apache2 container. Additionally, the installer of my CMS (ProcessWire) is unable to acquire permissions to the required install directories.
The Apache container uses the Alpine variant of httpd 2.4.35.
I've built my docker-compose file according to this tutorial:
https://medium.com/@thivi/creating-a-lamp-stack-using-docker-compose-13ca4e3950e1
Below I have attached my docker-compose.yml.
version: '3.7'

services:
  apache:
    build: './apache'
    restart: always
    ports:
      - 80:80
      - 443:443
    networks:
      - frontend
      - backend
    volumes:
      - ./public_html:/usr/local/apache2/htdocs
      - ./cert/:/usr/local/apache2/cert/
    depends_on:
      - php
      - mysql

  php:
    build: './php'
    restart: always
    networks:
      - backend
    volumes:
      - ./public_html:/usr/local/apache2/htdocs
      - ./tmp:/usr/local/tmp

  mysql:
    build: './mysql'
    restart: always
    ports:
      - 3306:3306
    expose:
      - 3306
    networks:
      - backend
    volumes:
      - ./database:/var/lib/mysql

networks:
  backend:
  frontend:
Is there any way to fix this issue? I'd be grateful for answers. I've been dealing with this issue for the past two days without getting anywhere, and I'm also kind of surprised that such an essential feature as directory sharing is this complicated.
/edit:
I've also noticed something interesting: when I run a bash shell inside the Apache container, the ownership of Apache's document root is set to nobody:nobody, which probably isn't right either.
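(For illustration only, not from the thread: a common workaround for this kind of UID mismatch is to rebuild the image so Apache's run user has your host UID. A sketch, where HOST_UID is a hypothetical build argument:)

# ./apache/Dockerfile
FROM httpd:2.4.35-alpine
ARG HOST_UID=1000
# Re-create the 'daemon' user (which httpd runs its workers as) with the host
# user's UID, so files written into the shared volume belong to your local user.
RUN deluser daemon && adduser -D -H -u ${HOST_UID} -G daemon daemon

and in docker-compose.yml:

apache:
  build:
    context: ./apache
    args:
      HOST_UID: 1000   # your local UID, see `id -u`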
I'm having a problem with the ordering of my Docker services in terms of how they are set up. Effectively I have five services: api (a Django Rest Framework application), db (a PostgreSQL database), elasticsearch (an Elasticsearch service), Kibana, and APM for logging.
I need Elasticsearch to be spun up before the apm service can begin doing its thing; the ordering is such that Elasticsearch needs to finish starting before APM and Kibana can begin doing their thing.
Here is my docker-compose.yml file and all the relevant dependencies:
version: "3"
services:
api:
build:
context: api
command: python3 manage.py runserver 0.0.0.0:8000
env_file: api/.env
volumes:
- ./api:/usr/src/app
ports:
- 8000:8000
- 6900:6900
depends_on:
- db
db:
build: docker/db
environment:
- POSTGRES_DB=************
- POSTGRES_USER=dbaccount
- POSTGRES_PASSWORD=dbpassword
ports:
- 5432:5432
elasticsearch:
image: docker.elastic.co/elasticsearch/elasticsearch:6.5.1
ports:
- 9200:9200
- 9300:9300
environment:
- discovery.type=single-node
- cluster.routing.allocation.disk.threshold_enabled=false
kibana:
image: docker.elastic.co/kibana/kibana:6.5.1
ports:
- 5601:5601
depends_on:
- elasticsearch
apm:
image: docker.elastic.co/apm/apm-server:6.5.1
volumes:
- ./docker/apm/apm-server.yml:/usr/share/apm-server/apm-server.yml
depends_on:
- elasticsearch
ports:
- 8200:8200
Effectively the startup order causes a generic Linux chroot exit code of "1" on the APM service, and requires a manual "docker-compose restart apm" for everything to get going...
Is there a way to wait for one service to fully "up" before "upping" another?
As explained in the docker-compose documentation:
https://docs.docker.com/compose/startup-order/
There is no way with Docker or Compose to know when a container is fully "ready" before starting the containers that depend on it; that should be handled by each application.
There is a workaround, though, shown in the documentation, using the wait-for-it script: https://github.com/vishnubob/wait-for-it
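For instance, a sketch of wrapping the apm service from the question with that script (assuming wait-for-it.sh has been downloaded next to the compose file; the final command should mirror whatever the image normally runs):

apm:
  image: docker.elastic.co/apm/apm-server:6.5.1
  volumes:
    - ./docker/apm/apm-server.yml:/usr/share/apm-server/apm-server.yml
    - ./wait-for-it.sh:/wait-for-it.sh
  depends_on:
    - elasticsearch
  # Block until Elasticsearch accepts connections on 9200, then start apm-server.
  entrypoint: ["/wait-for-it.sh", "elasticsearch:9200", "--", "apm-server", "-e"]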
There's a github thread on this here:
https://github.com/moby/moby/issues/30404#issuecomment-274825244
and here
https://github.com/docker/compose/issues/4305
If you want to use a health check with docker-compose, I suggest you use the v2.1 file format, as this is the only version supporting it. However, as explained in the answers above, this functionality was removed to facilitate moving away from external docker-compose:
https://github.com/docker/compose/issues/4305#issuecomment-276527457
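For completeness, a rough sketch of what the health-check approach looks like in a v2.1 file (the curl-based test assumes curl is available in the Elasticsearch image, which it is for these versions):

version: "2.1"
services:
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:6.5.1
    healthcheck:
      test: ["CMD-SHELL", "curl -sf http://localhost:9200 || exit 1"]
      interval: 10s
      retries: 12
  apm:
    image: docker.elastic.co/apm/apm-server:6.5.1
    depends_on:
      elasticsearch:
        # Start apm only once the health check passes.
        condition: service_healthy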
I already have a docker-compose.yml file like this:
version: "3.1"
services:
memcached:
image: memcached:alpine
container_name: dl-memcached
redis:
image: redis:alpine
container_name: dl-redis
mysql:
image: mysql:5.7.21
container_name: dl-mysql
restart: unless-stopped
working_dir: /application
environment:
- MYSQL_DATABASE=dldl
- MYSQL_USER=docker
- MYSQL_PASSWORD=docker
- MYSQL_ROOT_PASSWORD=docker
volumes:
- ./../:/application
ports:
- "8007:3306"
phpmyadmin:
image: phpmyadmin/phpmyadmin
container_name: dl-phpmyadmin
environment:
- PMA_ARBITRARY=1
- PMA_HOST=dl-mysql
- PMA_PORT=3306
- MYSQL_USER=docker
- MYSQL_PASSWORD=docker
- MYSQL_ROOT_PASSWORD=docker
restart: always
ports:
- 8002:80
volumes:
- /application
links:
- mysql
elasticsearch:
build: phpdocker/elasticsearch
container_name: dl-es
volumes:
- ./phpdocker/elasticsearch/elasticsearch.yml:/usr/share/elasticsearch/config/elasticsearch.yml
ports:
- "8003:9200"
webserver:
image: nginx:alpine
container_name: dl-webserver
working_dir: /application
volumes:
- ./../:/application:delegated
- ./phpdocker/nginx/nginx.conf:/etc/nginx/conf.d/default.conf
- ./logs:/var/log/nginx:delegated
ports:
- "9003:80"
php-fpm:
build: phpdocker/php-fpm
container_name: dl-php-fpm
working_dir: /application
volumes:
- ./../:/application:delegated
- ./phpdocker/php-fpm/php-ini-overrides.ini:/etc/php/7.2/fpm/conf.d/99-overrides.ini
- ./../docker/php-fpm/certs/store_stock/:/usr/local/share/ca-certificates/
- ./logs:/var/log:delegated # nginx logs
- /application/var/cache
environment:
XDEBUG_CONFIG: remote_host=host.docker.internal
PHP_IDE_CONFIG: "serverName=dl"
node:
build:
dockerfile: dl/phpdocker/node/Dockerfile
context: ./../
container_name: dl-node
working_dir: /application
ports:
- "8008:3000"
volumes:
- ./../:/application:cached
tty: true
My goal is to have two isolated environments working at the same time on the same server with the same docker-compose file. I wonder if that's possible?
I want to be able to stop and update one environment while the other one is still running and receiving traffic.
Maybe I need another approach in my case?
There are a couple of problems with what you're trying to do. If your goal is to put things behind a load balancer, I think that rather than trying to start multiple instances of your project, a better solution would be to use the scaling features available to docker-compose. In particular, if your goal is to put some services behind a load balancer, you probably don't want multiple instances of things like your database.
If you combine this with a dynamic front-end proxy like Traefik, you can make the configuration largely automatic.
Consider a very simple example consisting of a backend container running a simple webserver and a traefik frontend:
---
version: "3"

services:
  webserver:
    build:
      context: web
    labels:
      traefik.enable: true
      traefik.port: 80
      traefik.frontend.rule: "PathPrefix:/"

  frontend:
    image: traefik
    command:
      - --api
      - --docker
    volumes:
      - "/var/run/docker.sock:/var/run/docker.sock"
    ports:
      - "80:80"
      - "127.0.0.1:8080:8080"
If I start it like this, I get a single backend and a single frontend:
docker-compose up
But I can also ask docker-compose to scale out the backend:
docker-compose up --scale webserver=3
In this case, I get a single frontend and three backend servers. Traefik will automatically discover the backends and will round-robin connections between them. You can download this example and try it out.
Caveats
There are a few aspects of your configuration that would need to change in order to make this work (and in fact, you would need to change them even if you were to create multiple instances of your project as you have proposed in your question).
Conflicting paths
Take for example the configuration of your webserver container:
volumes:
  - ./logs:/var/log/nginx:delegated
If you start two instances of this service, both containers will mount ./logs on /var/log/nginx. If they both attempt to write to /var/log/nginx/access.log, you're going to have problems.
The easiest solution here is to avoid bind mounts for things like log directories (and any other directories to which you will be writing), and instead use named docker volumes.
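For example (the volume name nginx_logs is arbitrary):

services:
  webserver:
    volumes:
      - nginx_logs:/var/log/nginx

volumes:
  nginx_logs: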
Hardcoding container names
In some places, you are hardcoding the container name, like this:
mysql:
  image: mysql:5.7.21
  container_name: dl-mysql
This will cause problems if you attempt to start multiple instances of this project or multiple instances of the mysql container. Don't statically set the container name.
Deprecated links syntax
Your configuration is using the deprecated links syntax:
links:
  - mysql
Don't do that. In modern docker, containers on the same network can simply refer to each other by name. In other words, if your compose configuration has:
mysql:
  image: mysql:5.7.21
  restart: unless-stopped
  working_dir: /application
  environment:
    - MYSQL_DATABASE=dldl
    - MYSQL_USER=docker
    - MYSQL_PASSWORD=docker
    - MYSQL_ROOT_PASSWORD=docker
  volumes:
    - ./../:/application
  ports:
    - "8007:3306"
Other containers in your compose stack can simply use the hostname mysql to refer to this service.
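For example, the phpmyadmin service from your file could drop the links block and point at the service name instead of the hardcoded container name (a sketch):

phpmyadmin:
  image: phpmyadmin/phpmyadmin
  environment:
    - PMA_ARBITRARY=1
    - PMA_HOST=mysql   # the compose service name resolves on the shared network
    - PMA_PORT=3306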
You won't be able to run the same compose file twice on a host without changing the port mappings, because that will cause a port conflict. I'd recommend creating a base compose file and using extends to override port mappings for different environments.
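A rough sketch of that layout, with hypothetical file names: put everything except the ports in docker-compose.base.yml, then create one small file per environment (note that extends support varies by compose file version and CLI):

# docker-compose.blue.yml
version: "2.1"
services:
  webserver:
    extends:
      file: docker-compose.base.yml
      service: webserver
    ports:
      - "9003:80"

# docker-compose.green.yml is identical apart from the host ports (e.g. "9103:80").
# Run each environment under its own project name so the stacks don't collide:
#   docker-compose -p blue -f docker-compose.blue.yml up -d
#   docker-compose -p green -f docker-compose.green.yml up -d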