docker-compose stop not working after docker-compose -p <name> up - docker

I am using docker-compose file version 2. I start containers with docker-compose -p some_name up -d and try to kill them with docker-compose stop. The command exits with code 0, but the containers are still up and running.
Is this the expected behaviour for this version? If yes, any idea how I can work around it?
My docker-compose.yml file looks like this:
version: '2'
services:
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:5.3.0
    ports:
      - "9200:9200"
    environment:
      ES_JAVA_OPTS: "-Xmx512m -Xms512m"
      xpack.security.enabled: "false"
      xpack.monitoring.enabled: "false"
      xpack.graph.enabled: "false"
      xpack.watcher.enabled: "false"
    ulimits:
      memlock:
        soft: -1
        hard: -1
      nofile:
        soft: 262144
        hard: 262144
  kafka-server:
    image: spotify/kafka
    environment:
      - TOPICS=my-topic
    ports:
      - "9092:9092"
  test:
    build: .
    depends_on:
      - elasticsearch
      - kafka-server
Update: I found that the problem is caused by using the -p parameter to give the containers an explicit project-name prefix. I'm still looking for the best way to solve it.

I had the same problem; docker-compose -p [project_name] stop worked in my case.

Try forcing the running containers to stop by sending a SIGKILL:
docker-compose -p some_name kill
I just read about and experimented with how the Compose CLI resolves the project name: you have to pass -p some_name for kill (or stop) to find the containers, because otherwise Compose assumes the current directory name as the project name.
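For example (a minimal sketch reusing the project name from the question):
# containers were started under an explicit project name
docker-compose -p some_name up -d
# a bare `docker-compose stop` looks for the project named after the
# current directory and finds nothing to stop, so pass the same name:
docker-compose -p some_name stop
# or, to force-kill them:
docker-compose -p some_name kill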
Kindly let me know if this helped.

Related

Configuring security in Elasticsearch on docker container

How do I enable basic authentication for Kibana and Elasticsearch on a Docker container?
I want to have authentication enabled in Kibana. With ordinary config files we can simply set the flag
xpack.security.enabled=true and generate the password, but since I am running Elasticsearch and Kibana on Docker, how do I do it?
This is my current Compose file:
version: '3.7'
services:
  elasticsearch:
    image: elasticsearch:7.9.2
    ports:
      - '9200:9200'
    environment:
      - discovery.type=single-node
    ulimits:
      memlock:
        soft: -1
        hard: -1
  kibana:
    image: kibana:7.9.2
    ports:
      - '5601:5601'
You can pass it as an environment variable when running the docker run command for Elasticsearch.
Something like this:
docker run -p 9200:9200 -p 9300:9300 -e "xpack.security.enabled=true" docker.elastic.co/elasticsearch/elasticsearch:7.14.0
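If you would rather keep everything in Compose instead of a raw docker run, the same setting should work in the environment section; a minimal, untested sketch based on the compose file from the question (ELASTIC_PASSWORD sets the bootstrap password for the built-in elastic user, and the value here is a placeholder):
version: '3.7'
services:
  elasticsearch:
    image: elasticsearch:7.9.2
    ports:
      - '9200:9200'
    environment:
      - discovery.type=single-node
      # turn on basic authentication
      - xpack.security.enabled=true
      # bootstrap password for the elastic user (placeholder value)
      - ELASTIC_PASSWORD=changeme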

Using netcat to wait for neo4j to be ready within the same container

So consider the following: we have a Docker container called app, and this app contains an instance of the neo4j database. In the Dockerfile, the CMD references entrypoint.sh, and from this script we run the following:
# poll for up to 60 seconds until something is listening on the bolt port
end="$((SECONDS+60))"
while true; do
  # -z: just scan, don't send data; break out once the port is open
  nc -z localhost 7687 && break
  # give up with a non-zero exit code once the deadline passes
  [[ "${SECONDS}" -ge "${end}" ]] && exit 1
  sleep 1
done
The question is why netcat does not see neo4j, even though it is booting. I have confirmed that neo4j does come up by commenting out the CMD line in the Dockerfile and checking that we can get to it through a browser.
If there is only one container enclosing both neo4j and the netcat loop running from entrypoint.sh, will the loop even have access to neo4j when it comes up, or would netcat need to be in a separate container altogether?
My docker-compose.yaml is below...
version: "2.1"
services:
app:
build:
context: .
container_name: neo4j-ingestion
expose:
- "7474"
- "7687"
ports:
- "7687:7687"
- "7474:7474"
environment:
MEMORY_LOCK: "true"
DB_START_DELAY: "10"
PROCESSORS: "2"
DATA_INGEST_FOLDER: /ingest/pending
NEO4J_AUTH: "neo4j/password"
NEO4J_ACCEPT_LICENSE_AGREEMENT: "yes"
NEO4J_dbms_memory_heap_maxSize: "4G"
NEO4J_HOSTNAME: "0.0.0.0"
NEO4J_USERNAME: "neo4j"
NEO4J_PASSWORD: "password"
PROCESS_NAME: "neo4j-ingest"
INGESTION_TYPE: "incremental"
SLEEP_PERIOD: "20"
CREATE_SCHEMA: "false"
NEO4J_dbms_security_procedures_unrestricted: "apoc.*"
IS_DELTA: "1"
volumes:
- neodata:/data
- "./test-integration/incremental:/ingest"
ulimits:
memlock:
soft: -1
hard: -1
volumes:
neodata:
driver: local

How to connect metricbeat to elasticsearch and kibana with docker

I've set up Elasticsearch and Kibana with Docker Compose. Elasticsearch is deployed on localhost:9200, while Kibana is deployed on localhost:5601.
When trying to deploy Metricbeat with docker run, I got the following errors:
$ docker run docker.elastic.co/beats/metricbeat:6.3.2 setup -E setup.kibana.host=kibana:5601 -E output.elasticsearch.hosts=["localhost:9200"]
Exiting: Couldn't connect to any of the configured Elasticsearch hosts. Errors: [Error connection to Elasticsearch http://localhost:9200: Get http://localhost:9200: dial tcp [::1]:9200: connect: cannot assign requested address]
Exiting: Couldn't connect to any of the configured Elasticsearch hosts. Errors: [Error connection to Elasticsearch http://elasticsearch:9200: Get http://elasticsearch:9200: lookup elasticsearch on 192.168.65.1:53: no such host]
My docker-compose.yml:
# ./docker-compose.yml
version: "3.7"
services:
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:6.3.2
    environment:
      # - cluster.name=docker-cluster
      - bootstrap.memory_lock=true
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
    ulimits:
      memlock:
        soft: -1
        hard: -1
    volumes:
      - elkdata:/usr/share/elasticsearch/data
    ports:
      - "9200:9200"
    restart: always
  kibana:
    image: docker.elastic.co/kibana/kibana:6.3.2
    volumes:
      - kibana:/usr/share/kibana/config
    ports:
      - "5601:5601"
    depends_on:
      - elasticsearch
    restart: always
volumes:
  elkdata:
  kibana:
First, edit your docker-compose file by adding a name for the default docker network:
version: "3.7"
services:
elasticsearch:
image: docker.elastic.co/elasticsearch/elasticsearch:6.3.2
environment:
# - cluster.name=docker-cluster
- bootstrap.memory_lock=true
- "ES_JAVA_OPTS=-Xms512m -Xmx512m"
ulimits:
memlock:
soft: -1
hard: -1
volumes:
- elkdata:/usr/share/elasticsearch/data
ports:
- "9200:9200"
networks:
- my-network
restart: always
kibana:
image: docker.elastic.co/kibana/kibana:6.3.2
volumes:
- kibana:/usr/share/kibana/config
ports:
- "5601:5601"
networks:
- my-network
depends_on:
- elasticsearch
restart: always
volumes:
elkdata:
kibana:
networks:
my-network:
name: awesome-name
Execute docker-compose up, then start Metricbeat with the command below (note that --network, like all docker run options, must come before the image name):
$ docker run --network=awesome-name docker.elastic.co/beats/metricbeat:6.3.2 setup -E setup.kibana.host=kibana:5601 -E 'output.elasticsearch.hosts=["elasticsearch:9200"]'
Explanation:
When you try to deploy Metricbeat, you provide these settings:
setup.kibana.host=kibana:5601
output.elasticsearch.hosts=["localhost:9200"]
I will start with the second one. With the docker run command, when you start Metricbeat, you are telling the container that it can access Elasticsearch on localhost:9200. So when the container starts, it will access localhost on port 9200 expecting to find Elasticsearch running. But as the container is an isolated process with its own network layer, localhost resolves to the container itself, not to your docker host machine as you are expecting.
Regarding the Kibana host setting, you should first understand how docker-compose works. By default, when you execute docker-compose up, a docker network is created and all services defined in the yml file are added to it. Inside this network, and only inside it, services are reachable through their service names; in your case, as defined in the yml file, those names are elasticsearch and kibana.
So in order for the Metricbeat container to be able to communicate with the elasticsearch and kibana containers, it must be added to the same docker network. This can be achieved by setting the --network flag on the docker run command.
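You can verify that the containers actually joined that network by inspecting it (a quick check, assuming the network name awesome-name from the compose file above):
docker network ls
# list the containers attached to the named network
docker network inspect --format '{{range .Containers}}{{.Name}} {{end}}' awesome-name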
Another approach would be to share the docker host's network with your containers by using network mode host, but I would not recommend that.
References:
Docker compose
docker run

How to auto-restart a docker-compose cluster when EC2 instance reboots

I had this docker-compose.yml file:
version: '2.2'
services:
  kibana:
    restart: always
    depends_on:
      - es01
      - es02
    image: docker.elastic.co/kibana/kibana:7.3.1
    container_name: kibana
    ports:
      - 5601:5601
    environment:
      ELASTICSEARCH_HOSTS: http://es01:9200
      ELASTICSEARCH_URL: http://es01:9200
  es01:
    restart: always
    image: docker.elastic.co/elasticsearch/elasticsearch:7.3.1
    container_name: es01
    environment:
      - node.name=es01
      - discovery.seed_hosts=es02
      - cluster.initial_master_nodes=es01,es02
      - cluster.name=docker-cluster
      - bootstrap.memory_lock=true
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
    ulimits:
      memlock:
        soft: -1
        hard: -1
    volumes:
      - esdata01:/usr/share/elasticsearch/data
    ports:
      - 9200:9200
      - 9300:9300
  es02:
    restart: always
    image: docker.elastic.co/elasticsearch/elasticsearch:7.3.1
    container_name: es02
    environment:
      - node.name=es02
      - discovery.seed_hosts=es01
      - cluster.initial_master_nodes=es01,es02
      - cluster.name=docker-cluster
      - bootstrap.memory_lock=true
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
    ulimits:
      memlock:
        soft: -1
        hard: -1
    volumes:
      - esdata02:/usr/share/elasticsearch/data
volumes:
  esdata01:
    driver: local
  esdata02:
    driver: local
but these containers did not restart when the EC2 instance rebooted. Maybe I should use something like this instead:
docker-compose up -d --restart # the --restart flag, maybe?
Notice the restart properties in the yml file; I guess they didn't do anything in this case?
But there is no --restart flag:
(account-api) ubuntu@account_management5-interos:~/interos/repos/elastic-search-app$ docker-compose up -d --restart
Builds, (re)creates, starts, and attaches to containers for a service.
Unless they are already running, this command also starts any linked services.
The `docker-compose up` command aggregates the output of each container. When
the command exits, all containers are stopped. Running `docker-compose up -d`
starts the containers in the background and leaves them running.
If there are existing containers for a service, and the service's configuration
or image was changed after the container's creation, `docker-compose up` picks
up the changes by stopping and recreating the containers (preserving mounted
volumes). To prevent Compose from picking up changes, use the `--no-recreate`
flag.
If you want to force Compose to stop and recreate all containers, use the
`--force-recreate` flag.
Usage: up [options] [--scale SERVICE=NUM...] [SERVICE...]
Options:
    -d, --detach               Detached mode: Run containers in the background,
                               print new container names. Incompatible with
                               --abort-on-container-exit.
    --no-color                 Produce monochrome output.
    --quiet-pull               Pull without printing progress information
    --no-deps                  Don't start linked services.
    --force-recreate           Recreate containers even if their configuration
                               and image haven't changed.
    --always-recreate-deps     Recreate dependent containers.
                               Incompatible with --no-recreate.
    --no-recreate              If containers already exist, don't recreate
                               them. Incompatible with --force-recreate and -V.
    --no-build                 Don't build an image, even if it's missing.
    --no-start                 Don't start the services after creating them.
    --build                    Build images before starting containers.
    --abort-on-container-exit  Stops all containers if any container was
                               stopped. Incompatible with -d.
    -t, --timeout TIMEOUT      Use this timeout in seconds for container
                               shutdown when attached or when containers are
                               already running. (default: 10)
    -V, --renew-anon-volumes   Recreate anonymous volumes instead of retrieving
                               data from the previous containers.
    --remove-orphans           Remove containers for services not defined
                               in the Compose file.
    --exit-code-from SERVICE   Return the exit code of the selected service
                               container. Implies --abort-on-container-exit.
    --scale SERVICE=NUM        Scale SERVICE to NUM instances. Overrides the
                               `scale` setting in the Compose file if present.
I am looking for the equivalent of:
# note the --restart flag
docker run -d -p 27017:27017 \
  --restart unless-stopped \
  --name 'interos-mongo' \
  'mongo:4.0'
In fact, docker-compose doesn't handle restarts itself; those are done by dockerd.
The restart policy written in the compose file is applied to each container's restart policy, which you can check with the following command:
docker inspect --format '{{.HostConfig.RestartPolicy}}' your-container-ID-or-name
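For reference, the compose-file equivalent of the docker run --restart flag you mention is simply the restart key already present in your yml, which also accepts unless-stopped:
services:
  kibana:
    restart: unless-stopped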
Going back to your question: have you set up dockerd itself to start automatically on boot? i.e. systemctl enable docker
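On a systemd-based host such as an Ubuntu EC2 instance, a minimal check might look like this:
# is the Docker daemon configured to start on boot?
systemctl is-enabled docker
# if not, enable it
sudo systemctl enable docker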
xref: https://docs.docker.com/compose/production/

Docker Rails app with searchkick/elasticsearch

I'm porting my Rails app from my local machine into a Docker container and running into an issue with elasticsearch/searchkick. I can get it working temporarily, but I'm wondering if there is a better way. Basically, the port for Elasticsearch isn't matching up with the default localhost:9200 that Searchkick uses. I have used docker inspect on the elasticsearch container to get its actual IP and then set the ENV['ELASTICSEARCH_URL'] variable as the Searchkick docs say, and it works. The problem is that this is a pain: if I restart or change the containers, the IP sometimes changes and I have to go through the whole process again. Here is my docker-compose.yml:
version: '2'
services:
  web:
    build: .
    command: rails server -p 3000 -b '0.0.0.0'
    volumes:
      - .:/living-recipe
    ports:
      - '3000:3000'
    env_file:
      - .env
    depends_on:
      - postgres
      - elasticsearch
  postgres:
    image: postgres
  elasticsearch:
    image: elasticsearch
Use elasticsearch:9200 instead of localhost:9200. Docker Compose exposes the container via its service name.
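For example, with the compose file from the question you could set the variable in the web service (a sketch; ELASTICSEARCH_URL is the variable Searchkick reads), or put the same line in the .env file it already loads:
web:
  environment:
    ELASTICSEARCH_URL: http://elasticsearch:9200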
Here is the docker-compose.yml that is working for me. Docker Compose will expose the container via its name, so you can set the
ELASTICSEARCH_URL: http://elasticsearch:9200 ENV variable in your Rails application container:
version: "3"
services:
db:
image: postgres:9.6
restart: always
volumes:
- /tmp/db:/var/lib/postgresql/data
environment:
POSTGRES_PASSWORD: password
elasticsearch:
image: docker.elastic.co/elasticsearch/elasticsearch:7.9.2
volumes:
- .:/app
ports:
- 9200:9200
environment:
- discovery.type=single-node
ulimits:
memlock:
soft: -1
hard: -1
api:
build: .
command: bash -c "rm -f tmp/pids/server.pid && bundle exec rails s -p 3000 -b '0.0.0.0'"
volumes:
- ".:/app"
ports:
- "3001:3000"
depends_on:
- db
environment:
DB_HOST: db
DB_PASSWORD: password
ELASTICSEARCH_URL: http://elasticsearch:9200
You don't want to try to map the IP address for elasticsearch manually, as it will change.
Swap out depends_on for links. This creates the same dependency, but also allows the containers to be reached via their service names.
Containers for the linked service will be reachable at a hostname identical to the alias, or the service name if no alias was specified.
Links also express dependency between services in the same way as depends_on, so they determine the order of service startup.
Docker Compose File Reference - Links
Then, in your Rails app where you're setting ENV['ELASTICSEARCH_URL'], use the hostname elasticsearch instead.
