Elasticsearch with Docker and Traefik

When I run Elasticsearch with Docker and Traefik (with SSL encryption), I can't connect to Elasticsearch via the domain.
When I remove all the Traefik settings from the elasticsearch service in the docker-compose file, it works over the IP and plain HTTP.
Here is my docker-compose.yml:
```yaml
version: "3.7"
networks:
  traefik:
    external: true
  search:
services:
  elasticsearch:
    image: elasticsearch:7.16.2
    container_name: elasticsearch
    environment:
      - xpack.security.enabled=false
      - discovery.type=single-node
      - ELASTICSEARCH_USERNAME=elastic
      - ELASTICSEARCH_PASSWORD=XXXXXX
    volumes:
      - elasticsearch-data:/usr/share/elasticsearch/data
    labels:
      - "traefik.enable=true"
      - "traefik.http.routers.elastic_search.rule=Host(se.mydomain.xyz)"
      - "traefik.http.routers.elastic_search.entrypoints.websecure"
      - "traefik.http.services.elastic_search.loadbalancer.server.port=9200"
    ports:
      - 9200:9200
      - 9300:9300
    networks:
      - search
  kibana:
    image: kibana:7.16.2
    container_name: kibana
    labels:
      - "traefik.enable=true"
      - "traefik.http.routers.kibana_search.rule=Host(kibana.mydomain.xyz)"
      - "traefik.http.routers.kibana_search.entrypoints.websecure"
      - "traefik.http.services.kibana_search.loadbalancer.server.port=5601"
    #ports:
    #  - 5601:5601
    environment:
      ELASTICSEARCH_URL: http://XXX.XXX.XXX.XXX:9200
      ELASTICSEARCH_HOSTS: '["http://XXX.XXX.XXX.XXX:9200"]'
    networks:
      - search
      - traefik
volumes:
  elasticsearch-data:
    driver: local
```
Does anyone have an idea for a solution?
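For what it's worth, two things in the labels above look off for Traefik v2, which may explain why the domain never resolves to the container: the `Host()` matcher takes a backtick-quoted name, and the entrypoint is assigned with `=` rather than a trailing `.websecure`. The elasticsearch service also never joins the external `traefik` network, so Traefik cannot reach it. A sketch of what the service might look like (assuming an entrypoint named `websecure` and TLS terminated by Traefik):

```yaml
  elasticsearch:
    # ...image, environment, volumes as above...
    labels:
      - "traefik.enable=true"
      - "traefik.http.routers.elastic_search.rule=Host(`se.mydomain.xyz`)"
      - "traefik.http.routers.elastic_search.entrypoints=websecure"
      - "traefik.http.routers.elastic_search.tls=true"
      - "traefik.http.services.elastic_search.loadbalancer.server.port=9200"
    networks:
      - search
      - traefik   # the service must also join the network Traefik watches
```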

Related

Logging to Elasticsearch is not working when pointing to the Docker URL in the API Docker image

Logging to Elasticsearch works fine when testing locally, with the app connecting to Elasticsearch at https://localhost:9200, but the Docker image of dotnet.monitoring.api fails to connect to the Elasticsearch container at http://elasticsearch:9200.
Below is the docker-compose file:
```yaml
version: '3.4'
services:
  productdb:
    container_name: productdb
    environment:
      SA_PASSWORD: "SwN12345678"
      ACCEPT_EULA: "Y"
    restart: always
    ports:
      - "1433:1433"
  elasticsearch:
    container_name: elasticsearch
    environment:
      - node.name=elasticsearch
      - cluster.name-es-docker-cluster
      - xpack.security.enabled=false
      - "discovery.type=single-node"
    networks:
      - es-net
    volumes:
      - data01:/urs/share/elasticsearch/data
    ports:
      - 9200:9200
  kibana:
    container_name: kibana
    environment:
      - ELASTICSEARCH_URL=http://elasticsearch:9200
      - ELASTICSEARCH_HOSTS=http://elasticsearch:9200
    networks:
      - es-net
    depends_on:
      - elasticsearch
    ports:
      - 5601:5601
  dotnet.monitoring.api:
    container_name: dotnet.monitoring.api
    environment:
      - ASPNETCORE_ENVIRONMENT=Development
      - "ElasticConfiguration:Url=http://elasticsearch:9200/"
      - "ConnectionStrings:Product=server=productdb;Database=ProductDb;User Id=sa;Password=SampleP#&&W0rd;TrustServerCertificate=True;"
    depends_on:
      - productdb
      - kibana
    ports:
      - "8001:80"
volumes:
  data01:
    driver: local
networks:
  es-net:
    driver: bridge
```
Your API image is not in the same network. Make es-net the default network:
```yaml
networks:
  default:
    name: es-net
    external: true
```
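Alternatively, assuming the service names above, the API service can be attached to es-net directly instead of changing the default network:

```yaml
  dotnet.monitoring.api:
    # ...existing configuration...
    networks:
      - es-net
```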

Problem deploying a docker-compose service (port issue)

I want to deploy a service that will let me use Spark and MongoDB in a Jupyter notebook.
I use docker-compose to build the service, as follows:
```yaml
version: "3.3"
volumes:
  shared-workspace:
networks:
  spark-net:
    driver: bridge
services:
  spark-master:
    image: uqteaching/cloudcomputing:spark-master-v1
    container_name: spark-master
    networks:
      - "spark-net"
    ports:
      - "8080:8080"
      - "7077:7077"
    environment:
      - INIT_DAEMON_STEP=setup_spark
      - "PYSPARK_PYTHON=/usr/bin/python3"
      - "PYSPARK_DRIVER_PYTHON=/usr/bin/python3"
  spark-worker-1:
    image: uqteaching/cloudcomputing:spark-worker-v1
    container_name: spark-worker-1
    depends_on:
      - spark-master
    networks:
      - "spark-net"
    ports:
      - "8081:8081"
    environment:
      - "SPARK_MASTER=spark://spark-master:7077"
      - "PYSPARK_PYTHON=/usr/bin/python3"
      - "PYSPARK_DRIVER_PYTHON=/usr/bin/python3"
  spark-worker-2:
    image: uqteaching/cloudcomputing:spark-worker-v1
    container_name: spark-worker-2
    depends_on:
      - spark-master
    networks:
      - "spark-net"
    ports:
      - "8082:8082"
    environment:
      - "SPARK_MASTER=spark://spark-master:7077"
      - "PYSPARK_PYTHON=/usr/bin/python3"
      - "PYSPARK_DRIVER_PYTHON=/usr/bin/python3"
  mongo:
    image: mongo
    container_name: 'mongo'
    networks:
      - "spark-net"
    ports:
      - "27017:27017"
  mongo_admin:
    image: mongo-express
    container_name: 'mongoadmin'
    networks:
      - "spark-net"
    depends_on:
      - mongo
    links:
      - mongo
    ports:
      - "8091:8091"
  jupyter-notebook:
    container_name: jupyternb
    image: jupyter/all-spark-notebook:42f4c82a07ff
    depends_on:
      - mongo
      - spark-master
    links:
      - mongo
    expose:
      - "8888"
    networks:
      - "spark-net"
    ports:
      - "8888:8888"
    volumes:
      - ./nbs:/home/jovyan/work/nbs
      - ./events:/tmp/spark-events
    environment:
      - "PYSPARK_PYTHON=/usr/bin/python3"
      - "PYSPARK_DRIVER_PYTHON=/usr/bin/python3"
    command: "start-notebook.sh \
      --ip=0.0.0.0 \
      --allow-root \
      --no-browser \
      --notebook-dir=/home/jovyan/work/nbs \
      --NotebookApp.token='' \
      --NotebookApp.password=''"
```
The result is that two of the services end up using 8081/tcp at the same time, which causes them both to crash, even though I set them to listen on different ports.
I want to solve this.
mongo-express seems to need port 8081 internally, so use another external port to be able to log in to the web UI, e.g. at http://localhost:8092. The service would then look something like this:

```yaml
  mongo_admin:
    image: mongo-express
    container_name: 'mongoadmin'
    networks:
      - "spark-net"
    depends_on:
      - mongo
    links:
      - mongo
    ports:
      - "8092:8081"
```

Making Nextcloud Docker accessible from local network

I installed a fully dockerized Nextcloud server on Ubuntu 20.04 LTS.
Right now, it is accessible via nginx from the subdomain I assigned to it, with an SSL certificate from Let's Encrypt.
I would like to be able to access it from a local IP within the network on port 8140.
I tried adding the ports to the docker-compose.yml file with:

```yaml
    ports:
      - "8140:8140"
```
But the ports get bound to 0.0.0.0 instead of the machine's IP address.
Does anyone know how to expose the container to the local network?
Here's an example of the docker-compose.yml I used:
```yaml
version: '3'
services:
  proxy:
    image: jwilder/nginx-proxy:alpine
    labels:
      - "com.github.jrcs.letsencrypt_nginx_proxy_companion.nginx_proxy=true"
    container_name: nextcloud-proxy
    networks:
      - nextcloud_network
    ports:
      - 80:80
      - 443:443
      - "8140:8140"
    volumes:
      - ./proxy/conf.d:/etc/nginx/conf.d:rw
      - ./proxy/vhost.d:/etc/nginx/vhost.d:rw
      - ./proxy/html:/usr/share/nginx/html:rw
      - ./proxy/certs:/etc/nginx/certs:ro
      - /etc/localtime:/etc/localtime:ro
      - /var/run/docker.sock:/tmp/docker.sock:ro
    restart: unless-stopped
  letsencrypt:
    image: jrcs/letsencrypt-nginx-proxy-companion
    container_name: nextcloud-letsencrypt
    depends_on:
      - proxy
    networks:
      - nextcloud_network
    volumes:
      - ./proxy/certs:/etc/nginx/certs:rw
      - ./proxy/vhost.d:/etc/nginx/vhost.d:rw
      - ./proxy/html:/usr/share/nginx/html:rw
      - /etc/localtime:/etc/localtime:ro
      - /var/run/docker.sock:/var/run/docker.sock:ro
    restart: unless-stopped
  db:
    image: mariadb
    container_name: nextcloud-mariadb
    networks:
      - nextcloud_network
    volumes:
      - db:/var/lib/mysql
      - /etc/localtime:/etc/localtime:ro
    environment:
      - MYSQL_ROOT_PASSWORD=toor
      - MYSQL_PASSWORD=mysql
      - MYSQL_DATABASE=nextcloud
      - MYSQL_USER=nextcloud
    restart: unless-stopped
  app:
    image: nextcloud:latest
    container_name: nextcloud-app
    networks:
      - nextcloud_network
    depends_on:
      - letsencrypt
      - proxy
      - db
    volumes:
      - nextcloud:/var/www/html
      - ./app/config:/var/www/html/config
      - ./app/custom_apps:/var/www/html/custom_apps
      - ./app/data:/var/www/html/data
      - ./app/themes:/var/www/html/themes
      - /etc/localtime:/etc/localtime:ro
    environment:
      - VIRTUAL_HOST=nextcloud.YOUR-DOMAIN
      - LETSENCRYPT_HOST=nextcloud.YOUR-DOMAIN
      - LETSENCRYPT_EMAIL=YOUR-EMAIL
    restart: unless-stopped
volumes:
  nextcloud:
  db:
networks:
  nextcloud_network:
```
As far as I know, you prepend the IP address you are binding to locally, as follows:

```yaml
    ports:
      - 192.168.0.254:80:80
      - 192.168.0.254:443:443
      - "192.168.0.254:8140:8140"
```

Configure Traefik as a reverse proxy with Docker

I am trying to configure Traefik to route between my 3 Docker containers.
I tried this configuration, but I get net::ERR_NAME_NOT_RESOLVED in my browser console.
```yaml
  searchservice:
    hostname: searchservice
    image: searchservice:0.0.3-SNAPSHOT
    container_name: searchservice
    networks:
      - es-network
      #ipv4_address: 172.21.0.12
    ports:
      - 8070:8080
    restart: always
    depends_on:
      - elasticsearch
      - reverseproxy
    labels:
      - "traefik.frontend.rule=PathPrefix:/searchservice,Host:localhost"
      - "traefik.port: 8070"
      - "traefik.enable=true"
  subscriber-service:
    hostname: subscriber-service
    image: subscriberservice:0.0.4-SNAPSHOT
    container_name: subscriber-service
    networks:
      - es-network
      #ipv4_address: 172.21.0.13
    ports:
      - 8090:8090
    restart: always
    depends_on:
      - mongo1
      - mongo2
      - reverseproxy
    labels:
      - "traefik.frontend.rule=PathPrefix:/api,Host:localhost"
      - "traefik.port: 8090"
      - "traefik.enable=true"
  searchappfront:
    hostname: searchappfront
    image: frontservice:latest
    container_name: searchappfront
    networks:
      - es-network
    ports:
      - 80:80
    restart: always
    depends_on:
      - subscriber-service
      - searchservice
      - reverseproxy
    labels:
      - "traefik.frontend.rule=PathPrefix:/"
      - "traefik.enable=true"
      - "traefik.port=80"
      # - "traefik.frontend.rule=Host:localhost"
  reverseproxy:
    image: traefik:v2.1
    command:
      - '--providers.docker=true'
      - '--entryPoints.web.address=:80'
      - '--providers.providersThrottleDuration=2s'
      - '--providers.docker.watch=true'
      - '--providers.docker.defaultRule=Host("local.me")'
      - '--accessLog.bufferingSize=0'
    volumes:
      - '/var/run/docker.sock:/var/run/docker.sock:ro'
    #ports:
    #  - '80:80'
    #  - '8080:8080'
```
searchappfront is an Angular application whose HTTP endpoints follow this pattern:
http://subscriber-service:8090/
http://searchservice:8070/
If I use localhost instead of the hostnames, requests work fine, but I need to deploy these containers to a cloud instance.
You are using Traefik 2, but your labels use the Traefik 1 annotation syntax. This is not going to work.
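For illustration, the Traefik 1 frontend labels above would translate to something like the following v2 router/service labels (the router name is illustrative; note that v2 expects the container port, here 8080, not the published host port 8070):

```yaml
  searchservice:
    labels:
      - "traefik.enable=true"
      - "traefik.http.routers.searchservice.rule=Host(`localhost`) && PathPrefix(`/searchservice`)"
      - "traefik.http.routers.searchservice.entrypoints=web"
      - "traefik.http.services.searchservice.loadbalancer.server.port=8080"
```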

Running Kibana in Docker image gives Non-Root error

I'm having issues attempting to set up the ELK stack (v7.6.0) in Docker using docker-compose.
Elasticsearch and Logstash start up fine, but Kibana instantly exits; the Docker logs for that container report:
Kibana should not be run as root. Use --allow-root to continue.
The docker-compose for those elements looks like this:
```yaml
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.6.0
    environment:
      - discovery.type=single-node
    ports:
      - 9200:9200
    mem_limit: 2gb
  kibana:
    image: docker.elastic.co/kibana/kibana:7.6.0
    environment:
      - discovery.type=single-node
    ports:
      - 5601:5601
    depends_on:
      - elasticsearch
  logstash:
    image: docker.elastic.co/logstash/logstash:7.6.0
    ports:
      - "5000:5000/tcp"
      - "5000:5000/udp"
      - "9600:9600"
    mem_limit: 2gb
    depends_on:
      - elasticsearch
```
How do I either disable the run-as-root error, or set the application to not run as root?
In case you don't have a separate Dockerfile for Kibana and you just want to set the startup arg in docker-compose, the syntax there is as follows:
```yaml
  kibana:
    container_name: kibana
    image: docker.elastic.co/kibana/kibana:7.9.2
    ports:
      - 5601:5601
    depends_on:
      - elasticsearch
    environment:
      - ELASTICSEARCH_URL=http://localhost:9200
    networks:
      - elastic
    entrypoint: ["./bin/kibana", "--allow-root"]
```
That works around the problem of running it in a Windows container.
That being said, I don't know why Kibana should not be executed as root.
I've just run this Docker image and everything works perfectly; here is my docker-compose file:
```yaml
version: '3.7'
services:
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.6.0
    container_name: elasticsearch
    environment:
      - node.name=node
      - cluster.name=elasticsearch-default
      - bootstrap.memory_lock=true
      - discovery.type=single-node
      - "ES_JAVA_OPTS=-Xms1g -Xmx1g"
    ports:
      - "9200:9200"
    expose:
      - "9200"
    networks:
      - "esnet"
  kibana:
    image: docker.elastic.co/kibana/kibana:7.6.0
    container_name: kibana
    ports:
      - "5601:5601"
    expose:
      - "5601"
    networks:
      - "esnet"
    depends_on:
      - elasticsearch
  logstash:
    image: docker.elastic.co/logstash/logstash:7.6.0
    ports:
      - "5000:5000/tcp"
      - "5000:5000/udp"
      - "9600:9600"
    depends_on:
      - elasticsearch
    networks:
      - "esnet"
networks:
  esnet:
```
I had the same problem. I worked around it by adding an ENTRYPOINT at the end of the Dockerfile:

```dockerfile
ARG ELK_VERSION
FROM docker.elastic.co/kibana/kibana:${ELK_VERSION}
ENTRYPOINT ["./bin/kibana", "--allow-root"]
```
The docker-compose.yml:

```yaml
version: '3.2'
# Elastic stack (ELK) on Docker https://github.com/deviantony/docker-elk
services:
  elasticsearch:
    build:
      context: elasticsearch/
      args:
        ELK_VERSION: $ELK_VERSION
    volumes:
      - type: volume
        source: elasticsearch
        target: /usr/share/elasticsearch/data
    ports:
      - "9200:9200"
      - "9300:9300"
    environment:
      ES_JAVA_OPTS: "-Xmx256m -Xms256m"
      ELASTIC_PASSWORD: changeme
      discovery.type: single-node
    networks:
      - elk
  logstash:
    build:
      context: logstash/
      args:
        ELK_VERSION: $ELK_VERSION
    volumes:
      - type: bind
        source: ./logstash/pipeline
        target: /usr/share/logstash/pipeline
        read_only: true
    ports:
      - "5002:5002/tcp"
      - "5002:5002/udp"
      - "9600:9600"
    environment:
      LS_JAVA_OPTS: "-Xmx256m -Xms256m"
    networks:
      - elk
    depends_on:
      - elasticsearch
  kibana:
    build:
      context: kibana/  # the Dockerfile above
      args:
        ELK_VERSION: $ELK_VERSION
    ports:
      - "5601:5601"
    networks:
      - elk
    depends_on:
      - elasticsearch
networks:
  elk:
    driver: nat
volumes:
  elasticsearch:
```
