Running script after Cassandra starts (Docker)

I'm trying to run a script after Cassandra starts that will create the keyspace.
Here's my docker-compose file:
version: '3.6'
services:
  cassandra:
    container_name: cassandra
    image: bitnami/cassandra:3.11.2
    volumes:
      - ./cassandra_data:/bitnami
      - ./scripts/cassandra_init.sh:/cassandra_init.sh
    environment:
      - CASSANDRA_USER=${CASSANDRA_USERNAME}
      - CASSANDRA_PASSWORD=${CASSANDRA_PASSWORD}
      - CASSANDRA_CLUSTER_NAME=Testing
      - CASSANDRA_PASSWORD_SEEDER=yes
    entrypoint: ["/app-entrypoint.sh"]
    command: ["nami","start","--foreground","cassandra","/cassandra_init.sh"]
volumes:
  cassandra_data:
["nami","start","--foreground","cassandra"] starts Cassandra. If I start the container without adding my script, it works just fine.
However, if I start the container including my script, I get this error after the container starts:
nami ERROR Unknown command '/cassandra_init.sh'
How can I achieve this?

I figured it out.
In docker-compose I had to name the script init.sh and call it like this:
version: '3.6'
services:
  cassandra:
    container_name: cassandra
    image: bitnami/cassandra:3.11.2
    volumes:
      - ./cassandra_data:/bitnami
      - ./scripts/cassandra_init.sh:/init.sh
    environment:
      - CASSANDRA_USER=${CASSANDRA_USERNAME}
      - CASSANDRA_PASSWORD=${CASSANDRA_PASSWORD}
      - CASSANDRA_CLUSTER_NAME=Testing
      - CASSANDRA_PASSWORD_SEEDER=yes
    entrypoint: ["/app-entrypoint.sh"]
    command: ["/init.sh"]
volumes:
  cassandra_data:
and the script should look like this:
#!/bin/bash
nami start cassandra
echo "script stuff here to run after cassandra starts"

Can I run a cmd command in docker compose outside of the container?

I have 2 docker-compose files, each of which builds a Dockerfile, and I want to join those docker-compose files.
So I created another docker-compose file that brings up these 2 images:
version: "3.4"
services:
frontend:
image: frontend-image
depends_on:
- backend
ports:
- "3000:80"
networks:
- teste-network
backend:
image: backend-image
ports:
- "5001:80"
networks:
- test-network
networks:
test-network:
driver: bridge
But this docker-compose file does not build the images, so I created a bash command that builds them:
bash -c "docker-compose -f ./frontend/docker/docker-compose.yml build
  && docker-compose -f ./backend/docker/docker-compose.yml build"
I want to run this script before bringing up the containers, just by typing docker-compose up.
I assume that you have 2 Dockerfiles, one for the frontend and the other for the backend, each of which resides in the corresponding folder from your post, that is:
frontend/docker/Dockerfile
backend/docker/Dockerfile
Then you can leverage docker-compose to build and run your images. All you have to do is tell docker-compose where the Dockerfiles are, which you can do by using the build configuration.
version: "3.4"
services:
frontend:
image: frontend-image
build: ./frontend/docker
depends_on:
- backend
ports:
- "3000:80"
networks:
- test-network
backend:
image: backend-image
build: ./backend/docker
ports:
- "5001:80"
networks:
- test-network
networks:
test-network:
driver: bridge
Then running docker-compose up frontend will build the docker images (if they do not exist) and then start them.
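If you want to build everything up front instead of building a single service, the usual flow with this layout would be roughly the following (a sketch, not tied to any particular project name):
# Build both images as declared in the build: sections, then start the stack.
docker-compose build
docker-compose up -d
# Or build and start in one step:
docker-compose up -d --build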

Docker image working on pull but not via the image directive in yml file?

I have a docker image on a gitlab registry.
When I run (after logging in on the target machine)
docker run -d -p 8081:8080/tcp gitlab.somedomain.com:5050/root/app
the Laravel app is available, running and reachable. Things like php artisan config:clear are working. When I enter the container everything looks fine.
But I don't have any other services running. So I had the idea to create a yml file for docker-compose run to set things up, in docker-compose-gitlab.yml:
version: '3'
services:
  mysql:
    image: mysql:5.7
    container_name: my-mysql
    environment:
      - MYSQL_ROOT_PASSWORD=***
      - MYSQL_DATABASE=dbname
      - MYSQL_USER=username
      - MYSQL_PASSWORD=***
    volumes:
      - ./data/mysql:/var/lib/mysql
    ports:
      - "3307:3306"
  application:
    image: gitlab.somedomain.com:5050/root/app:latest
    build:
      context: .
      dockerfile: ./Dockerfile
    container_name: my-app
    ports:
      - "8081:8080"
    volumes:
      - .:/application
    env_file: .env.docker
    working_dir: /application
    depends_on:
      - mysql
    links:
      - mysql
Calling docker-compose --verbose -f docker-compose-gitlab.yml up shows me that the mysql service is created and working; the app also seems to be created but then fails ... exiting with code 0, with no further message.
If I add commands in my yml like php artisan config:clear, the error gets even less clear to me: it says it cannot find artisan, and it seems as if the command is executed outside the container ... exiting with code 1. (artisan is a helper and is executed via php.)
When I call docker-compose with -d and then run docker ps, I can only see mysql running, but not the app.
When I use both strategies, the problem is that the two containers do not share a common network and so cannot work together.
What did I miss? Is this the wrong strategy?
The problem was that I left a volume directive in place which overwrites my entire application with an empty directory.
You can just leave that out.
version: '3'
services:
  mysql:
    image: mysql:5.7
    container_name: my-mysql
    environment:
      - MYSQL_ROOT_PASSWORD=***
      - MYSQL_DATABASE=dbname
      - MYSQL_USER=username
      - MYSQL_PASSWORD=***
    volumes:
      - ./data/mysql:/var/lib/mysql
    ports:
      - "3307:3306"
  application:
    image: gitlab.somedomain.com:5050/root/app:latest
    build:
      context: .
      dockerfile: ./Dockerfile
    container_name: my-app
    ports:
      - "8081:8080"
    ## volumes:
    ##   - .:/application ## this would overwrite the app
    env_file: .env.docker
    working_dir: /application
    depends_on:
      - mysql
    links:
      - mysql
You can debug the network of the containers by listing the networks with docker network ls,
then, when the list is shown, inspecting the compose network with docker inspect <ComposeNetworkID>.
Once you are sure that your services are not in the same network, remove your containers and recreate them with docker-compose -f docker-compose-gitlab.yml up.
If you notice they are in the same network, try using the container name instead of localhost to reach each other.
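As a rough sketch of that debugging flow (the network ID is a placeholder; compose usually names the network after your project directory):
# List the networks and inspect the one created by compose;
# the "Containers" section shows which services are attached.
docker network ls
docker network inspect <ComposeNetworkID>
# If the services are not on the same network, recreate the stack.
docker-compose -f docker-compose-gitlab.yml down
docker-compose -f docker-compose-gitlab.yml up -d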

Restart docker compose with a different command

I have an application running on a server. The server rebooted, and some docker services managed to restart while others did not.
docker-compose ps:
Name            Command                          State        Ports
--------------------------------------------------------------------------------------------------------------
elasticsearch   /usr/local/bin/docker-entr ...   Up           0.0.0.0:9200->9200/tcp, 0.0.0.0:9300->9300/tcp
kibana          sh -c ./bin/kibana-plugin ...    Restarting
logstash        /usr/local/bin/docker-entr ...   Up           5044/tcp, 9600/tcp
If I look at the logs of kibana with docker logs kibana:
Plugin kbn_radar already exists, please remove before installing a new version
Found previous install attempt. Deleting...
Attempting to transfer from file:///usr/share/kibana/config/kbn_radar.zip
Transferring 3686700 bytes....................
Transfer complete
Retrieving metadata from plugin archive
Extracting plugin archive
The problem is that kbn_radar takes a long time to restart, so I want to restart the kibana service without needing to restart the other applications. I've tried to change my .yml file, where I run the commands that install the plugins:
kibana:
  image: docker.elastic.co/kibana/kibana:6.8.0
  command:
    - sh
    - -c
    - './bin/kibana-plugin install file:///usr/share/kibana/config/kbn_radar.zip && ./bin/kibana-plugin install file:///usr/share/kibana/config/ob-kb-funnel-6.8.zip && exec /usr/local/bin/kibana-docker'
So in the end, my docker-compose was:
docker-compose.yml:
version: "3"
networks:
elasticsearch-net-624:
services:
elasticsearch-products-624-service:
image: docker.elastic.co/elasticsearch/elasticsearch:6.8.0
container_name: elasticsearch
restart: always
networks:
- elasticsearch-net-624
ports:
- "9200:9200"
- "9300:9300"
expose:
- "9200"
volumes:
- /home/docker/elastic.yml:/usr/share/elasticsearch/config/elasticsearch.yml
- /home/docker/elastic-certificates.p12:/usr/share/elasticsearch/config/elastic-certificates.p12
- /docker/elastic/data:/usr/share/elasticsearch/data
- /docker/elastic/data/snapshots:/usr/share/elasticsearch/data/snapshots
kibana:
image: docker.elastic.co/kibana/kibana:6.8.0
command:
- sh
- -c
- 'exec /usr/local/bin/kibana-docker'
container_name: kibana
restart: always
hostname: kibana
networks:
- elasticsearch-net-624
environment:
- SERVER_NAME=kibana.localhost
- ELASTICSEARCH_URL=http://elasticsearch:9200
- ELASTICSEARCH_HOST=elasticsearch
- ELASTICSEARCH_PORT=9200
- XPACK_GRAPH_ENABLED=true
- XPACK_WATCHER_ENABLED=true
- XPACK_ML_ENABLED=true
- XPACK_MONITORING_ENABLED=true
- XPACK_MONITORING_UI_CONTAINER_ELASTICSEARCH_ENABLED
ports:
- "5601:5601"
expose:
- "5601"
links:
- elasticsearch-products-624-service
depends_on:
- elasticsearch-products-624-service
volumes:
- /home/docker/kibana.yml:/usr/share/kibana/config/kibana.yml
- /home/docker/ob-kb-funnel-6.8.zip:/usr/share/kibana/config/ob-kb-funnel-6.8.zip
- /home/docker/kbn_radar.zip:/usr/share/kibana/config/kbn_radar.zip
- /home/morpheus/docker/dashboard_app.js:/usr/share/kibana/src/legacy/core_plugins/kibana/public/dashboard/dashboard_app.js
logstash:
image: docker.elastic.co/logstash/logstash:6.8.0
container_name: logstash
restart: always
volumes:
- /home/docker/logstash.yml:/usr/share/logstash/config/logstash.yml
Finally, I tried to restart the service:
docker-compose -f docker-kibana.yml restart kibana
But the service keeps trying to reinstall the plugins, and if I run docker-compose ps, the command is still "sh -c ./bin/kibana-plugin ...".
How can I restart a docker service with another command? Or restart my service without reinstalling the plugins that already exist?
I recommend that you build an image with your plugins preinstalled rather than doing everything at container start.
A simple Dockerfile to fix your issue would look somewhat like this:
FROM docker.elastic.co/kibana/kibana:6.8.0
COPY ob-kb-funnel-6.8.zip kbn_radar.zip /usr/share/kibana/config/
RUN ./bin/kibana-plugin install file:///usr/share/kibana/config/kbn_radar.zip && \
    ./bin/kibana-plugin install file:///usr/share/kibana/config/ob-kb-funnel-6.8.zip
ENTRYPOINT /usr/local/bin/kibana-docker
Next you would need to use docker-compose to build your image. We can do that by updating your service definition:
kibana:
  build:
    context: ./kibana
  container_name: kibana
  restart: always
  hostname: kibana
  networks:
    - elasticsearch-net-624
  environment:
    - SERVER_NAME=kibana.localhost
    - ELASTICSEARCH_URL=http://elasticsearch:9200
    - ELASTICSEARCH_HOST=elasticsearch
    - ELASTICSEARCH_PORT=9200
    - XPACK_GRAPH_ENABLED=true
    - XPACK_WATCHER_ENABLED=true
    - XPACK_ML_ENABLED=true
    - XPACK_MONITORING_ENABLED=true
    - XPACK_MONITORING_UI_CONTAINER_ELASTICSEARCH_ENABLED
  ports:
    - "5601:5601"
  expose:
    - "5601"
  links:
    - elasticsearch-products-624-service
  depends_on:
    - elasticsearch-products-624-service
  volumes:
    - /home/docker/kibana.yml:/usr/share/kibana/config/kibana.yml
    - /home/morpheus/docker/dashboard_app.js:/usr/share/kibana/src/legacy/core_plugins/kibana/public/dashboard/dashboard_app.js
As you can see, in the service definition we replaced image with build. We assume that your Dockerfile for kibana resides in a folder called kibana and that this folder also contains your plugin zip files.
Next you can run docker-compose build and it will build the required images for your compose stack.
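Assuming the Dockerfile and the two plugin zip files live in a kibana/ folder next to docker-kibana.yml, the rebuild-and-restart sequence would look something like this (a sketch, adjust paths to your setup):
# Build the custom kibana image with the plugins baked in,
# then recreate only the kibana service.
docker-compose -f docker-kibana.yml build kibana
docker-compose -f docker-kibana.yml up -d kibana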
The problem is that when you run a docker-compose or a docker stack, a context is created with all the initial data. If you later change this data, for example the command in your case, it will not take effect unless you recreate the whole context, that is, unless you bring the docker-compose or stack down and up again.
However, you might try your luck with the following:
1. Edit the compose file with the command you want to run now.
2. Remove the kibana container entirely. That is, don't try to restart kibana with docker-compose, but remove the container: docker rm -f dir_kibana
3. Run docker-compose up again. It should detect that kibana is missing and run it again.
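Sketched as commands (since the compose file above sets container_name: kibana, the container name is probably kibana rather than dir_kibana; adjust to whatever docker ps -a shows):
# 1. Edit docker-kibana.yml so the kibana command no longer reinstalls the plugins.
# 2. Remove the kibana container entirely.
docker rm -f kibana
# 3. Recreate it from the edited compose file.
docker-compose -f docker-kibana.yml up -d kibana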

Execute a command in the solr docker image to launch the solr exporter

I would like to execute a command inside the solr docker image to export metrics.
https://lucene.apache.org/solr/guide/7_3/monitoring-solr-with-prometheus-and-grafana.html
I tried this:
command:
  - solr-demo
  - sh ./bin/solr-exporter -p 9854 -b http://localhost:8983/solr
Here is the complete docker-compose:
version: '3.7'
volumes:
  solr_data: {}
services:
  solr:
    image: solr:8
    ports:
      - "8983:8983"
    volumes:
      - solr_data:/var/solr
    command:
      - solr-demo
I don't get any errors, but the command to launch the exporter is not executed.
The Prometheus way to address this issue is to run the solr-exporter as a separate docker container or side-car and have it scrape the solr server.
version: '3.7'
volumes:
  solr_data: {}
services:
  solr:
    image: solr:8
    ports:
      - "8983:8983"
    volumes:
      - solr_data:/var/solr
    command:
      - solr-demo
  solr-exporter:
    image: solr:8
    ports:
      - "9854:9854"
    entrypoint:
      - "/opt/solr-8.2.0/contrib/prometheus-exporter/bin/solr-exporter"
      - "-p"
      - "9854"
      - "-b"
      - "http://solr:8983/solr"
      - "-f"
      - "/opt/solr-8.2.0/contrib/prometheus-exporter/conf/solr-exporter-config.xml"
      - "-n"
      - "8"
Using "http://solr:8983/solr" as the target for the exporter makes it scrape the container named solr.
The above exporter commandline was taken verbatim from the docs here, you might want to adjust it depending on your needs.
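A quick way to check that the side-car is working (a sketch; the port mapping comes from the compose file above):
docker-compose up -d solr solr-exporter
# The exporter serves Prometheus metrics on port 9854;
# seeing metric lines here confirms it can reach the solr container.
curl -s http://localhost:9854/metrics | head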

docker-compose: run a command on a pgsql container

I am trying to run the following docker-compose file:
version: "3"
services:
db:
image: postgres
container_name: pgsql
environment:
- foo=foo
- bar=bar
volumes:
- ./sql/:/opt/sql
command: bash /opt/sql/create-db.sql
# command: ps -aux
web:
image: benit/debian-web
container_name: web
depends_on:
- db
ports:
- 80:80
volumes:
- ./html:/var/www/html
I am encountering an error with the line:
command: bash /opt/sql/create-db.sql
This is because the pgsql service is not started yet, which can be seen by swapping in command: ps -aux.
How can I run my script once the pgsql service has started?
You can use a volume to provide an initialization sql script:
version: "3"
services:
db:
image: postgres
container_name: pgsql
environment:
- foo=foo
- bar=bar
volumes:
- ./sql/:/opt/sql
- ./init.sql:/docker-entrypoint-initdb.d/init.sql
web:
image: benit/debian-web
container_name: web
depends_on:
- db
ports:
- 80:80
volumes:
- ./html:/var/www/html
This works because the original PostgreSQL Dockerfile contains a script (which runs after Postgres has started) that executes any *.sql files from the /docker-entrypoint-initdb.d/ folder.
By mounting your local file in that place, your sql files will be run at the right time.
It's actually mentioned in the documentation for that image, https://hub.docker.com/_/postgres, under the How to extend this image section.
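As a minimal sketch of what such an init script could contain (the table name is a placeholder, and scripts in /docker-entrypoint-initdb.d/ only run the first time the data directory is initialized):
# Create a sample init.sql next to the compose file, then start the db service.
cat > init.sql <<'SQL'
-- Runs once, on first initialization of the database.
CREATE TABLE IF NOT EXISTS example (
    id   serial PRIMARY KEY,
    name text NOT NULL
);
SQL
docker-compose up -d db
docker-compose logs db | grep init.sql   # the entrypoint logs the scripts it runs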
