Docker-compose file build context explanation - docker

I am using a docker-compose file that works, but I have a poor understanding of it.
In the following file, how should I interpret these keys?
build:
context:
ulimits:
sysctl:
version: '3.1'
services:
  zabbix-server:
    container_name: zabbix-server
    build:
      context: zabbix-server-mysql
    image: zabbix:ubuntu-5.4.4-custom
    ulimits:
      nproc: 65535
      nofile:
        soft: 20000
        hard: 40000
    sysctls:
      - net.ipv4.ip_local_port_range=1024 65000
      - net.ipv4.conf.all.accept_redirects=0
      - net.ipv4.conf.all.secure_redirects=0
      - net.ipv4.conf.all.send_redirects=0

I have already looked at the documentation for build and context, at a couple of articles each on ulimits and sysctls, and at many other articles.

Related

How to install a custom REST plugin on a dockerized OpenSearch

I am trying to install my custom REST plugin on my dockerized OpenSearch...
I am using Ubuntu 20.
This is my docker-compose.yml file:
version: '3'
services:
  opensearch-node1:
    image: opensearchproject/opensearch:1.0.1
    container_name: opensearch-node1
    environment:
      - cluster.name=opensearch-cluster
      - node.name=opensearch-node1
      - discovery.seed_hosts=opensearch-node1,opensearch-node2
      - cluster.initial_master_nodes=opensearch-node1,opensearch-node2
      - bootstrap.memory_lock=true # along with the memlock settings below, disables swapping
      - "OPENSEARCH_JAVA_OPTS=-Xms512m -Xmx512m" # minimum and maximum Java heap size, recommend setting both to 50% of system RAM
    ulimits:
      memlock:
        soft: -1
        hard: -1
      nofile:
        soft: 65536 # maximum number of open files for the OpenSearch user, set to at least 65536 on modern systems
        hard: 65536
    volumes:
      - opensearch-data1:/usr/share/opensearch/data
    ports:
      - 9200:9200
      - 9600:9600 # required for Performance Analyzer
    networks:
      - opensearch-net
  opensearch-node2:
    image: opensearchproject/opensearch:1.0.1
    container_name: opensearch-node2
    environment:
      - cluster.name=opensearch-cluster
      - node.name=opensearch-node2
      - discovery.seed_hosts=opensearch-node1,opensearch-node2
      - cluster.initial_master_nodes=opensearch-node1,opensearch-node2
      - bootstrap.memory_lock=true
      - "OPENSEARCH_JAVA_OPTS=-Xms512m -Xmx512m"
    ulimits:
      memlock:
        soft: -1
        hard: -1
      nofile:
        soft: 65536
        hard: 65536
    volumes:
      - opensearch-data2:/usr/share/opensearch/data
    networks:
      - opensearch-net
  opensearch-dashboards:
    image: opensearchproject/opensearch-dashboards:1.0.1
    container_name: opensearch-dashboards
    ports:
      - 5601:5601
    expose:
      - "5601"
    environment:
      OPENSEARCH_HOSTS: '["https://opensearch-node1:9200","https://opensearch-node2:9200"]' # must be a string with no spaces when specified as an environment variable
    networks:
      - opensearch-net
volumes:
  opensearch-data1:
  opensearch-data2:
networks:
  opensearch-net:
Running docker-compose up starts the services and they run fine, but I have no idea how to continue from here...
This is the plugin layout (which I cloned from here).
The Docker image is all ready to run, without your plugin of course :/ Therefore, you need to create a Docker image with your plugin installed.
Create a new directory outside the plugin project and put your packaged plugin in it. The package is called something like my-plugin.zip and is located in your plugin project under build/distributions/my-plugin.zip. If it is not there, assemble the plugin like so:
./gradlew assemble -Dopensearch.version=1.0.0 -Dbuild.snapshot=false
Add the following Dockerfile to the new directory:
FROM opensearchproject/opensearch:1.0.0
ADD ./my-plugin.zip /usr/
RUN /usr/share/opensearch/bin/opensearch-plugin install file:///usr/my-plugin.zip
The ADD instruction copies the local package into the container so it can be used by the next command.
The RUN command installs the plugin into OpenSearch.
Build the Docker image; adding a tag will make life easier later:
docker build --tag=opensearch-with-my-plugin .
Now you have an opensearch image built with your plugin installed on it!
Fix the YAML file you posted originally so that it uses the correct image. This means replacing opensearchproject/opensearch:1.0.1 with the tag you gave the image you built, opensearch-with-my-plugin. Also put the compose file in the directory with the Dockerfile (not, as you have it, in the plugin project).
I took the liberty of changing the dashboards version to 1.0.0, as I'm not sure the 1.0.1 dashboards will work with the 1.0.0 image.
In any case, this should be a solid start!
version: '3'
services:
  opensearch-node1:
    image: opensearch-with-my-plugin
    container_name: opensearch-node1
    environment:
      - cluster.name=opensearch-cluster
      - node.name=opensearch-node1
      - discovery.seed_hosts=opensearch-node1,opensearch-node2
      - cluster.initial_master_nodes=opensearch-node1,opensearch-node2
      - bootstrap.memory_lock=true # along with the memlock settings below, disables swapping
      - "OPENSEARCH_JAVA_OPTS=-Xms512m -Xmx512m" # minimum and maximum Java heap size, recommend setting both to 50% of system RAM
    ulimits:
      memlock:
        soft: -1
        hard: -1
      nofile:
        soft: 65536 # maximum number of open files for the OpenSearch user, set to at least 65536 on modern systems
        hard: 65536
    volumes:
      - opensearch-data1:/usr/share/opensearch/data
    ports:
      - 9200:9200
      - 9600:9600 # required for Performance Analyzer
    networks:
      - opensearch-net
  opensearch-node2:
    image: opensearch-with-my-plugin
    container_name: opensearch-node2
    environment:
      - cluster.name=opensearch-cluster
      - node.name=opensearch-node2
      - discovery.seed_hosts=opensearch-node1,opensearch-node2
      - cluster.initial_master_nodes=opensearch-node1,opensearch-node2
      - bootstrap.memory_lock=true
      - "OPENSEARCH_JAVA_OPTS=-Xms512m -Xmx512m"
    ulimits:
      memlock:
        soft: -1
        hard: -1
      nofile:
        soft: 65536
        hard: 65536
    volumes:
      - opensearch-data2:/usr/share/opensearch/data
    networks:
      - opensearch-net
  opensearch-dashboards:
    image: opensearchproject/opensearch-dashboards:1.0.0
    container_name: opensearch-dashboards
    ports:
      - 5601:5601
    expose:
      - "5601"
    environment:
      OPENSEARCH_HOSTS: '["https://opensearch-node1:9200","https://opensearch-node2:9200"]' # must be a string with no spaces when specified as an environment variable
    networks:
      - opensearch-net
volumes:
  opensearch-data1:
  opensearch-data2:
networks:
  opensearch-net:
Verify by running this in the terminal and checking that your plugin is installed :)
curl -XGET https://localhost:9200/_cat/plugins -u 'admin:admin' --insecure
Also, I created a GitHub template for plugins with more info. The repo you cloned is the one I created for my blog post on writing plugins, so it is a REST plugin, but not the most generic one.
If you are still having issues getting this running please let me know in the comments, I'm glad to help.
EDIT: file locations edited a bit.

How to connect an Open Distro Elasticsearch service to another service defined in docker compose

Hi, I want to connect to Elasticsearch inside my app, which is defined as the "cog-app" service in docker-compose.yml along with Open Distro Elasticsearch and Kibana.
I am not able to connect to Elasticsearch when I run the Docker file. Can you please tell me how I can connect the elasticsearch service to the app service?
I have defined the Elasticsearch connection in the cog-app service, and I'm getting a connection failure with Elasticsearch.
version: "3"
services:
cog-app:
image: app:2.0
build:
context: .
dockerfile: ./Dockerfile
stdin_open: true
tty: true
ports:
- "7111:7111"
environment:
- LANG=C.UTF-8
- LC_ALL=C.UTF-8
- CONTAINER_NAME=app
volumes:
- /home/developer/app:/app
odfe-node1:
image: amazon/opendistro-for-elasticsearch:1.13.2
container_name: odfe-node1
environment:
- cluster.name=odfe-cluster
- node.name=odfe-node1
- discovery.seed_hosts=odfe-node1,odfe-node2
- cluster.initial_master_nodes=odfe-node1,odfe-node2
- bootstrap.memory_lock=true # along with the memlock settings below, disables swapping
- "ES_JAVA_OPTS=-Xms2g -Xmx2g" # minimum and maximum Java heap size, recommend setting both to 50% of system RAM
ulimits:
memlock:
soft: -1
hard: -1
nofile:
soft: 65536 # maximum number of open files for the Elasticsearch user, set to at least 65536 on modern systems
hard: 65536
volumes:
- odfe-data1:/usr/share/elasticsearch/data
ports:
- 9200:9200
- 9600:9600 # required for Performance Analyzer
odfe-node2:
image: amazon/opendistro-for-elasticsearch:1.13.2
container_name: odfe-node2
environment:
- cluster.name=odfe-cluster
- node.name=odfe-node2
- discovery.seed_hosts=odfe-node1,odfe-node2
- cluster.initial_master_nodes=odfe-node1,odfe-node2
- bootstrap.memory_lock=true
- "ES_JAVA_OPTS=-Xms2g -Xmx2g"
ulimits:
memlock:
soft: -1
hard: -1
nofile:
soft: 65536
hard: 65536
volumes:
- odfe-data2:/usr/share/elasticsearch/data
networks:
- odfe-net
kibana:
image: amazon/opendistro-for-elasticsearch-kibana:1.13.2
container_name: odfe-kibana
ports:
- 5601:5601
expose:
- "5601"
environment:
ELASTICSEARCH_URL: https://odfe-node1:9200
ELASTICSEARCH_HOSTS: https://odfe-node1:9200
networks:
- odfe-net
volumes:
odfe-data1:
odfe-data2:
networks:
odfe-net:
Please tell me how two services can communicate with each other.
As the elasticsearch service is running in another container, localhost is not valid. You should use odfe-node1:9200 as the URL.
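A minimal sketch of what that could look like in the compose file, assuming you attach cog-app to the same user-defined network as the Open Distro nodes (service names only resolve between containers that share a network). The ELASTICSEARCH_URL variable here is just a hypothetical example of passing the URL into your app instead of hard-coding localhost:

services:
  cog-app:
    image: app:2.0
    environment:
      - ELASTICSEARCH_URL=https://odfe-node1:9200   # hypothetical: have the app read this instead of localhost
    networks:
      - odfe-net
  odfe-node1:
    # ...existing odfe-node1 configuration...
    networks:
      - odfe-net
networks:
  odfe-net: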

Elasticsearch application with docker compose runnable on multi-node swarm

I have a small app with a python backend where I'm streaming and classifying tweets in real-time.
I use elasticsearch to collect classified tweets and Kibana to make visualizations based on es data.
In my frontend, I just use kibana visualizations.
For the moment, I'm trying to run my application in a multi-node swarm as a services stack but I'm having problems with my compose file.
I tried to start with Elasticsearch, using the info at https://www.elastic.co/guide/en/elasticsearch/reference/current/docker.html, but it didn't help, and I didn't succeed in deploying my docker-compose file even with just the elasticsearch service.
This is my yml file:
version: '3'
services:
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.6.2
    environment:
      - cluster.name=docker-cluster
      - bootstrap.memory_lock=true
      - 'ES_JAVA_OPTS=-Xms512m -Xmx512m'
    ulimits:
      memlock:
        soft: -1
        hard: -1
    ports:
      - '9200:9200'
  kibana:
    image: docker.elastic.co/kibana/kibana:7.6.2
    ports:
      - '5601:5601'
Below is a docker-compose file that works for a single node in a development environment. It has security disabled and sets the discovery.type=single-node parameter to make sure the Elasticsearch production bootstrap checks are not kicked in.
version: '2.2'
services:
  # Elasticsearch Docker Images: https://www.docker.elastic.co/
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.6.0
    container_name: elasticsearch
    environment:
      - xpack.security.enabled=false
      - discovery.type=single-node
    ulimits:
      memlock:
        soft: -1
        hard: -1
      nofile:
        soft: 65536
        hard: 65536
    cap_add:
      - IPC_LOCK
    volumes:
      - elasticsearch-data:/usr/share/elasticsearch/data
    ports:
      - 9200:9200
      - 9300:9300
volumes:
  elasticsearch-data:
    driver: local
networks:
  elastic:
    external: true

docker-compose up -d gives "OCI runtime create failed: wrong rlimit value" when trying to set mem_limit in the docker-compose.yml file

docker-compose version 1.18.0, build 8dd22a9 on Ubuntu 16.04
Docker version 17.12.0-ce, build c97c6d6
docker-compose file version: '3'
Relevant portion of the docker-compose file
elasticsearch1:
  image: docker.elastic.co/elasticsearch/elasticsearch:5.6.0
  container_name: elasticsearch1
  restart: unless-stopped
  environment:
    - http.host=0.0.0.0
    - reindex.remote.whitelist=remote_es:*
    - xpack.security.enabled=false
    - cluster.name=docker-cluster
    - bootstrap.memory_lock=true
    - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
  ulimits:
    memlock:
      soft: -1
      hard: -1
    mem_limit: 1000000000   # note: nested under ulimits here, which is what triggers the error below
  volumes:
    - esdata1:/usr/share/elasticsearch/data
  ports:
    - 9200:9200
When I do a docker-compose up -d, I get the following error:
ERROR: for elasticsearch1 Cannot start service elasticsearch1: OCI runtime create failed: wrong rlimit value: RLIMIT_MEM_LIMIT: unknown
Any ideas what's going on?
The docker-compose reference document seems to imply that since I'm not running in swarm mode, I should be using the version 2 syntax for mem_limit, even though my docker-compose file is version 3.
ERROR: for elasticsearch1 Cannot start service elasticsearch1: OCI runtime create failed: wrong rlimit value: RLIMIT_MEM_LIMIT: unknown
You got the above error because you set mem_limit under the ulimits section. It should be at the container level, on the same level as image, environment, etc.:
elasticsearch1:
  image: docker.elastic.co/elasticsearch/elasticsearch:5.6.0
  container_name: elasticsearch1
  restart: unless-stopped
  environment:
    - http.host=0.0.0.0
    - reindex.remote.whitelist=remote_es:*
    - xpack.security.enabled=false
    - cluster.name=docker-cluster
    - bootstrap.memory_lock=true
    - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
  ulimits:
    memlock:
      soft: -1
      hard: -1
  mem_limit: 1000000000
  volumes:
    - esdata1:/usr/share/elasticsearch/data
  ports:
    - 9200:9200
And there is another issue here. According to the issue:
"The v3 format is specifically designed to run with Swarm mode and the docker stack features. It wouldn't make sense for us to re-add options to that format when they have been replaced and would be ignored in Swarm mode."
This means that you can use cpu_shares, cpu_quota, cpuset, mem_limit, memswap_limit, and mem_swappiness in version 2 only, and the new resource options in version 3 in swarm mode only.
So, if you don't want to use swarm mode, you need to use version 2.
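For reference, a rough sketch of the version 3 counterpart, assuming you did want to stay on the v3 format and deploy with docker stack deploy in swarm mode: there, memory limits go under deploy.resources instead of mem_limit (a plain docker-compose up ignores the deploy section):

version: '3'
services:
  elasticsearch1:
    image: docker.elastic.co/elasticsearch/elasticsearch:5.6.0
    deploy:
      resources:
        limits:
          memory: 1000M   # roughly equivalent to mem_limit: 1000000000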
The final docker-compose.yml is:
version: '2'
services:
  elasticsearch1:
    image: docker.elastic.co/elasticsearch/elasticsearch:5.6.0
    container_name: elasticsearch1
    restart: unless-stopped
    environment:
      - http.host=0.0.0.0
      - reindex.remote.whitelist=remote_es:*
      - xpack.security.enabled=false
      - cluster.name=docker-cluster
      - bootstrap.memory_lock=true
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
    ulimits:
      memlock:
        soft: -1
        hard: -1
    mem_limit: 1000000000
    volumes:
      - esdata1:/usr/share/elasticsearch/data
    ports:
      - 9200:9200

docker compose file is invalid, additional properties not allowed: tty

I really need help with this error; I don't understand why I get it. Thanks.
docker -v
Docker version 1.13.1, build 092cba3
docker-compose -v
docker-compose version 1.11.1, build 7c5d5e4
This is my docker-compose.yml file:
version: '2.0'
services:
  arcgis-server:
    container_name: "arcgis-server"
    image: "arcgis-server:10.4.1"
    volumes:
      - "./license:/license"
      - "./arcgisserver:/arcgis/server/usr/directories"
      - "./config-store:/arcgis/server/usr/config-store"
    build:
      context: .
      dockerfile: "Dockerfile"
    ulimits:
      nproc: 25059
      nofile:
        soft: 65535
        hard: 65535
    ports:
      - "127.0.0.1:6080:6080"
      - "127.0.0.1:6443:6443"
      - "4001:4001"
      - "4002:4002"
      - "4004:4004"
stdin_open: true   # note: these two keys sit at the top level of the file,
tty: true          # outside the service, which is what triggers the error below
Here is the error:
docker-compose build
ERROR: The Compose file './docker-compose.yml' is invalid because:
Additional properties are not allowed ('tty' was unexpected)
You might be seeing this error because you're using the wrong Compose file version. Either specify a supported version ("2.0", "2.1", "3.0") and place your service definitions under the `services` key, or omit the `version` key and place your service definitions at the root of the file to use version 1.
For more on the Compose file format versions, see https://docs.docker.com/compose/compose-file/
Actually, I tested this on the old machine and it worked fine. I would appreciate your help. Thanks again!
tty needs to be defined as a setting on your service, not at the top level. YAML files are whitespace sensitive, so removing the leading spaces puts the setting at the top level, where it's not valid. Use the following syntax to fix it:
version: '2.0'
services:
  arcgis-server:
    container_name: "arcgis-server"
    image: "arcgis-server:10.4.1"
    volumes:
      - "./license:/license"
      - "./arcgisserver:/arcgis/server/usr/directories"
      - "./config-store:/arcgis/server/usr/config-store"
    build:
      context: .
      dockerfile: "Dockerfile"
    ulimits:
      nproc: 25059
      nofile:
        soft: 65535
        hard: 65535
    ports:
      - "127.0.0.1:6080:6080"
      - "127.0.0.1:6443:6443"
      - "4001:4001"
      - "4002:4002"
      - "4004:4004"
    stdin_open: true
    tty: true
