This question already has answers here:
How to run docker containers in host network mode using docker-compose?
(2 answers)
Closed 2 years ago.
I am trying to run a container where I need to use the network driver "host" instead of "bridge". I am running it on a CentOS machine, and my docker-compose.yml is:
version: '3.4'
services:
  testContainer:
    build:
      context: .
      args:
        HADOOP_VERSION: 2.6.0
        HIVE_VERSION: 1.1.0
    image: testcontainer
    container_name: testcontainer
    hostname: testcontainer
    ports:
      - 9200:9200
      - 9300:9300
      - 5601:5601
      - 9001:9001
    ulimits:
      memlock:
        soft: -1
        hard: -1
    networks:
      - elknet
networks:
  elknet:
    driver: host
But I am getting the following error when I run "docker-compose up":
ERROR: only one instance of "host" network is allowed
Can anyone please suggest how I can use the host network with docker-compose.yml?
Also note that if I use network_mode: host as suggested by @larsks, I still get an error:
version: '3.4'
services:
  testContainer:
    build:
      context: .
      args:
        HADOOP_VERSION: 2.6.0
        HIVE_VERSION: 1.1.0
    image: testcontainer
    container_name: testcontainer
    hostname: testcontainer
    ports:
      - 9200:9200
      - 9300:9300
      - 5601:5601
      - 9001:9001
    ulimits:
      memlock:
        soft: -1
        hard: -1
    network_mode: host
I am getting the following error:
ERROR: The Compose file './docker-compose.yml' is invalid because:
Unsupported config option for services: 'testContainer'
Get rid of the networks section in your docker-compose.yml, and add a network_mode directive to your service definition:
services:
  testContainer:
    build:
      context: .
      args:
        HADOOP_VERSION: 2.6.0
        HIVE_VERSION: 1.1.0
    image: testcontainer
    container_name: testcontainer
    hostname: testcontainer
    ports:
      - 9200:9200
      - 9300:9300
      - 5601:5601
      - 9001:9001
    ulimits:
      memlock:
        soft: -1
        hard: -1
    network_mode: host
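One caveat worth adding here (not from the original answer): with network_mode: host the container shares the host's network stack directly, so Compose ignores any ports: mappings. A minimal sketch without the now-redundant mappings:

```yaml
services:
  testContainer:
    build: .
    image: testcontainer
    network_mode: host  # container uses the host's network; port mappings are ignored
    ulimits:
      memlock:
        soft: -1
        hard: -1
```

The services listening inside the container are then reachable on the host's own ports (9200, 5601, etc.) without any mapping.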
Related
How do I enable basic authentication for Kibana and Elasticsearch on a Docker container?
I want to have authentication enabled in Kibana. With a normal installation we can simply set the flag
xpack.security.enabled=true and generate the passwords, but since I am running Elasticsearch and Kibana in Docker, how do I do it?
This is my current docker-compose file:
version: '3.7'
services:
  elasticsearch:
    image: elasticsearch:7.9.2
    ports:
      - '9200:9200'
    environment:
      - discovery.type=single-node
    ulimits:
      memlock:
        soft: -1
        hard: -1
  kibana:
    image: kibana:7.9.2
    ports:
      - '5601:5601'
You can pass it as an environment variable when running the docker run command for Elasticsearch.
Something like this:
docker run -p 9200:9200 -p 9300:9300 -e "xpack.security.enabled=true" docker.elastic.co/elasticsearch/elasticsearch:7.14.0
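Since the question uses docker-compose rather than docker run, the same variable can be set in the compose file's environment section. A sketch based on the question's services; the password values are placeholders, and wiring Kibana to Elasticsearch via the built-in kibana_system user is an assumption about your setup:

```yaml
services:
  elasticsearch:
    image: elasticsearch:7.9.2
    environment:
      - discovery.type=single-node
      - xpack.security.enabled=true
      - ELASTIC_PASSWORD=changeme  # placeholder; choose your own
  kibana:
    image: kibana:7.9.2
    environment:
      - ELASTICSEARCH_HOSTS=http://elasticsearch:9200
      - ELASTICSEARCH_USERNAME=kibana_system
      - ELASTICSEARCH_PASSWORD=changeme  # placeholder
```

Kibana's Docker image converts such environment variables into kibana.yml settings, so no config file edits are needed.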
I am running an ElasticSearch instance on my Ubuntu server using a Docker container.
During a large insert-or-update request I get the following exception.
OriginalException: Elasticsearch.Net.ElasticsearchClientException: Request failed to execute. Call: Status code 413 from: POST /_bulk?pretty=true&error_trace=true
It sounds like I need to increase http.max_content_length from the default of 100mb.
I bring up my Docker instance using the following docker-compose file:
version: '3.4'
services:
  # Nginx proxy
  nginx-proxy:
    image: jwilder/nginx-proxy
    container_name: nginx-proxy
    ports:
      - 80:80
      - 443:443
    restart: always
    networks:
      - nginx-proxy
    volumes:
      - /var/run/docker.sock:/tmp/docker.sock:ro
      - /etc/nginx/vhost.d:/etc/nginx/vhost.d:ro
      - /etc/certificates:/etc/nginx/certs
  # ElasticSearch instance
  es01:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.12.0
    container_name: es01
    environment:
      - VIRTUAL_HOST=my.elastic-domain.com
      - VIRTUAL_PORT=9200
      - ELASTIC_PASSWORD=mypassword
      - xpack.security.enabled=true
      - discovery.type=single-node
      - http.max_content_length=3000mb
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
    ulimits:
      memlock:
        soft: -1
        hard: -1
      nofile:
        soft: 65536
        hard: 65536
    cap_add:
      - IPC_LOCK
    volumes:
      - data01:/usr/share/elasticsearch/data
    ports:
      - 9200:9200
    expose:
      - 9200
    networks:
      - nginx-proxy
    depends_on:
      - nginx-proxy
volumes:
  data01:
    driver: local
networks:
  nginx-proxy:
  default:
    external:
      name: nginx-proxy
As you can see, I tried to increase the value by setting the environment variable http.max_content_length=3000mb.
Also, in the Nginx proxy, I set client_max_body_size 0; to ensure the proxy allows an unlimited body size.
How do I set the http.max_content_length value for the ElasticSearch container?
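For reference, with jwilder/nginx-proxy the client_max_body_size override normally lives in a per-virtual-host file inside the mounted /etc/nginx/vhost.d directory. A sketch; the filename must match the VIRTUAL_HOST value, which is my.elastic-domain.com in the compose file above:

```nginx
# /etc/nginx/vhost.d/my.elastic-domain.com
client_max_body_size 0;
```

The proxy picks this file up per host, which matches the :ro mount of /etc/nginx/vhost.d already present in the compose file.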
You can modify the elasticsearch.yml file and add:
http.max_content_length: 3000mb
This file is inside your Elastic Docker container at /usr/share/elasticsearch/config/elasticsearch.yml. (Note that inside elasticsearch.yml the setting uses key: value syntax; the = form is for environment variables.)
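Editing the file inside a running container is lost when the container is recreated, so a common alternative (an editorial suggestion, not part of the original answer) is to bind-mount a custom elasticsearch.yml from the compose file, assuming a local ./elasticsearch.yml next to docker-compose.yml:

```yaml
services:
  es01:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.12.0
    volumes:
      - ./elasticsearch.yml:/usr/share/elasticsearch/config/elasticsearch.yml:ro
```

The mounted file must then contain the full settings you need, including http.max_content_length: 3000mb.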
Unable to pull the Elasticsearch and Kibana images using docker-compose.
When I retry multiple times with the docker-compose up command, each time some of the services are unavailable, and which ones is unpredictable.
Can somebody please guide me on what is causing the issue? The proxy has been set in docker.service.
Please find the attached screenshot; I have also included the docker-compose.yaml file for reference.
Kindly let me know if any further information is needed.
Docker-compose.yml File
version: '2.2'
services:
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.8.0
    container_name: elasticsearch
    environment:
      - node.name=elasticsearch
      - discovery.seed_hosts=elasticsearch
      - cluster.initial_master_nodes=elasticsearch
      - cluster.name=docker-cluster
      - bootstrap.memory_lock=true
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
    ulimits:
      memlock:
        soft: -1
        hard: -1
    volumes:
      - esdata1:/usr/share/elasticsearch/data
    ports:
      - 9200:9200
  kibana:
    image: docker.elastic.co/kibana/kibana:7.8.0
    container_name: kibana
    environment:
      ELASTICSEARCH_URL: "http://elasticsearch:9200"
    ports:
      - 5601:5601
    depends_on:
      - elasticsearch
volumes:
  esdata1:
    driver: local
It was an issue with the RHEL server; after retrying multiple times, the issue got resolved.
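Since the question mentions the proxy being set in docker.service, it is worth double-checking the daemon's systemd proxy drop-in when pulls fail intermittently; a sketch of the usual configuration, with a placeholder proxy URL:

```ini
# /etc/systemd/system/docker.service.d/http-proxy.conf
[Service]
Environment="HTTP_PROXY=http://proxy.example.com:3128"
Environment="HTTPS_PROXY=http://proxy.example.com:3128"
Environment="NO_PROXY=localhost,127.0.0.1"
```

After editing, run systemctl daemon-reload and systemctl restart docker, then try docker-compose pull to fetch the images separately before docker-compose up; that isolates pull failures from container startup problems.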
I have created a docker-compose file that will start up Kibana and ElasticSearch containers. I have already created a network and a volume for these in my VM. I am using docker-compose file version 3.4.
Command: docker volume ls
DRIVER    VOLUME NAME
local     elasticsearch-data
local     portainer_data
Command: docker network ls
NETWORK ID     NAME            DRIVER    SCOPE
75464cd8c8ab   bridge          bridge    local
587a311f6f4f   host            host      local
649ac00b7f93   none            null      local
4b5923b1d144   stars.api.web   bridge    local
Command: docker-compose up -d
ERROR: yaml.scanner.ScannerError: mapping values are not allowed here
in "./docker-compose.yml", line 33, column 27
docker-compose.yml
version: '3.4'
services:
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.6.0
    container_name: elasticsearch
    ports:
      - "9200:9200"
    volumes:
      - elasticsearch-data:/usr/share/elasticsearch/data
    networks:
      - stars.api.web
    environment:
      ES_JAVA_OPTS: "-Xmx256m -Xms256m"
      ELASTIC_PASSWORD: changeme
      discovery.type: single-node
    ulimits:
      memlock:
        soft: -1
        hard: -1
  kibana:
    image: docker.elastic.co/kibana/kibana:7.6.0
    container_name: kibana
    ports:
      - "5601:5601"
    depends_on:
      - elasticsearch
    networks:
      - stars.api.web
volumes:
  name: elasticsearch-data:
networks:
  name: stars.api.web:
EDIT:
Removing the trailing : from the name, e.g. name: elasticsearch-data, throws the following error:
ERROR: In file './docker-compose.yml', volume 'name' must be a mapping not a string.
Your yaml is invalid according to the docs.
Please use the following compose file:
version: '3.4'
services:
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.6.0
    container_name: elasticsearch
    ports:
      - "9200:9200"
    volumes:
      - elasticsearch-data:/usr/share/elasticsearch/data
    networks:
      - stars.api.web
    environment:
      ES_JAVA_OPTS: "-Xmx256m -Xms256m"
      ELASTIC_PASSWORD: changeme
      discovery.type: single-node
    ulimits:
      memlock:
        soft: -1
        hard: -1
  kibana:
    image: docker.elastic.co/kibana/kibana:7.6.0
    container_name: kibana
    ports:
      - "5601:5601"
    depends_on:
      - elasticsearch
    networks:
      - stars.api.web
volumes:
  elasticsearch-data:
    external: true
networks:
  stars.api.web:
I assume that you have already created the volume and network. Note that external: true is required when the volume has been created outside of the docker-compose context.
In addition, a nice trick to check whether your Compose file is valid:
docker-compose -f file config
If the -f option is omitted, docker-compose.yml is used by default.
from the help page:
config: Validate and view the Compose file
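The ScannerError from the question is raised by the YAML parser itself, before any Compose-level validation runs. A small sketch reproducing it with PyYAML (assumed to be installed; docker-compose itself depends on it):

```python
import yaml  # PyYAML

# The trailing colon after the value makes the scanner see a second
# mapping key in a position where YAML forbids one -- exactly the
# "mapping values are not allowed here" error from the question.
broken = """\
volumes:
  name: elasticsearch-data:
"""

try:
    yaml.safe_load(broken)
except yaml.YAMLError as err:
    print(type(err).__name__)  # prints: ScannerError
```

This is a quick way to tell pure YAML syntax errors apart from Compose schema errors like "volume 'name' must be a mapping not a string", which only appear once the YAML itself parses.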
After applying the suggested edits by @leopal, if you want quiet output (validation only):
$ docker-compose -f docker-compose.yaml config -q
$ docker-compose -f your.yaml config
networks:
  stars.api.web: {}
services:
  elasticsearch:
    container_name: elasticsearch
    environment:
      ELASTIC_PASSWORD: changeme
      ES_JAVA_OPTS: -Xmx256m -Xms256m
      discovery.type: single-node
    image: docker.elastic.co/elasticsearch/elasticsearch:7.6.0
    networks:
      stars.api.web: null
    ports:
    - published: 9200
      target: 9200
    ulimits:
      memlock:
        hard: -1
        soft: -1
    volumes:
    - elasticsearch-data:/usr/share/elasticsearch/data:rw
  kibana:
    container_name: kibana
    depends_on:
    - elasticsearch
    image: docker.elastic.co/kibana/kibana:7.6.0
    networks:
      stars.api.web: null
    ports:
    - published: 5601
      target: 5601
version: '3.4'
volumes:
  elasticsearch-data: {}
docker-compose version 1.18.0, build 8dd22a9 on Ubuntu 16.04
Docker version 17.12.0-ce, build c97c6d6
docker-compose file version: '3'
Relevant portion of the docker-compose file
elasticsearch1:
  image: docker.elastic.co/elasticsearch/elasticsearch:5.6.0
  container_name: elasticsearch1
  restart: unless-stopped
  environment:
    - http.host=0.0.0.0
    - reindex.remote.whitelist=remote_es:*
    - xpack.security.enabled=false
    - cluster.name=docker-cluster
    - bootstrap.memory_lock=true
    - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
  ulimits:
    memlock:
      soft: -1
      hard: -1
    mem_limit: 1000000000
  volumes:
    - esdata1:/usr/share/elasticsearch/data
  ports:
    - 9200:9200
When I do a docker-compose up -d, I get the following error:
ERROR: for elasticsearch1 Cannot start service elasticsearch1: OCI runtime create failed: wrong rlimit value: RLIMIT_MEM_LIMIT: unknown
Any ideas what's going on?
The docker-compose reference document seems to imply that, since I'm not running in swarm mode, I should be using the version 2 syntax for mem_limit, even though my docker-compose file is version 3.
ERROR: for elasticsearch1 Cannot start service elasticsearch1: OCI runtime create failed: wrong rlimit value: RLIMIT_MEM_LIMIT: unknown
You got the above error because you set mem_limit under the ulimits section. It should be at the container level, on the same level as image, environment, etc.:
elasticsearch1:
  image: docker.elastic.co/elasticsearch/elasticsearch:5.6.0
  container_name: elasticsearch1
  restart: unless-stopped
  environment:
    - http.host=0.0.0.0
    - reindex.remote.whitelist=remote_es:*
    - xpack.security.enabled=false
    - cluster.name=docker-cluster
    - bootstrap.memory_lock=true
    - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
  ulimits:
    memlock:
      soft: -1
      hard: -1
  mem_limit: 1000000000
  volumes:
    - esdata1:/usr/share/elasticsearch/data
  ports:
    - 9200:9200
There is another issue here, too. According to the relevant GitHub issue:
The v3 format is specifically designed to run with Swarm mode and the
docker stack features. It wouldn't make sense for us to re-add options
to that format when they have been replaced and would be ignored in
Swarm mode.
This means you can use cpu_shares, cpu_quota, cpuset, mem_limit, memswap_limit, and mem_swappiness in version 2 only, and the new resource options in version 3 apply in swarm mode only.
So, if you don't want to use swarm mode, you need to use version 2.
The final docker-compose.yml is:
version: '2'
services:
  elasticsearch1:
    image: docker.elastic.co/elasticsearch/elasticsearch:5.6.0
    container_name: elasticsearch1
    restart: unless-stopped
    environment:
      - http.host=0.0.0.0
      - reindex.remote.whitelist=remote_es:*
      - xpack.security.enabled=false
      - cluster.name=docker-cluster
      - bootstrap.memory_lock=true
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
    ulimits:
      memlock:
        soft: -1
        hard: -1
    mem_limit: 1000000000
    volumes:
      - esdata1:/usr/share/elasticsearch/data
    ports:
      - 9200:9200
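For completeness, if you did want to stay on a version 3 file and run under swarm mode (docker stack deploy), the rough equivalent of mem_limit is the deploy.resources section; a sketch, not taken from the answer above:

```yaml
version: '3'
services:
  elasticsearch1:
    image: docker.elastic.co/elasticsearch/elasticsearch:5.6.0
    deploy:
      resources:
        limits:
          memory: 1000M  # roughly equivalent to mem_limit: 1000000000
```

Note that plain docker-compose up ignores the deploy section on older Compose versions such as the 1.18.0 mentioned in the question, which is exactly why the answer recommends version 2 here.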