Running a custom script using entrypoint in docker-compose - docker

I modified the docker-compose.yml file given on https://hub.docker.com/_/solr/ by adding a volumes configuration and changing the entrypoint. The modified file is:
version: '3'
services:
  solr:
    image: solr
    ports:
      - "8983:8983"
    volumes:
      - ./solr/init.sh:/init.sh
      - ./solr/data:/opt/solr/server/solr/mycores
    entrypoint:
      - init.sh
      - docker-entrypoint.sh
      - solr-precreate
      - mycore
I need to run this 'init.sh' before the entrypoint starts, to prepare my files inside the container.
But I get the following error:
ERROR: for solr_solr_1 Cannot start service solr: oci runtime error:
container_linux.go:247: starting container process caused "exec:
\"init.sh\": executable file not found in $PATH"
Earlier I found out about official image hooks in neo4j from here. Is there a similar thing I can use here also?
Update 1: From the comments below, I realized that the Dockerfile sets WORKDIR /opt/solr, which is why the executable file is not found in $PATH. So I tested providing the absolute path /init.sh in the entrypoint. But this also gives an error, a different one:
standard_init_linux.go:178: exec user process caused "exec format
error"

It looks like you need to map your volume to /docker-entrypoint-initdb.d/
version: '3'
services:
  solr:
    image: solr
    ports:
      - "8983:8983"
    volumes:
      - ./solr/init.sh:/docker-entrypoint-initdb.d/init.sh
      - ./solr/data:/opt/solr/server/solr/mycores
    entrypoint:
      - docker-entrypoint.sh
      - init
From https://hub.docker.com/_/solr/:
Extending the image: The docker-solr image has an extension mechanism. At run time, before starting Solr, the container will execute scripts in the /docker-entrypoint-initdb.d/ directory. You can add your own scripts there either by using mounted volumes or by using a custom Dockerfile. These scripts can for example copy a core directory with pre-loaded data for continuous integration testing, or modify the Solr configuration.
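For example, a minimal custom Dockerfile using that extension mechanism (a sketch only; the image tag and script name simply mirror this question) could be:
FROM solr
# scripts placed in this directory are executed by the image before Solr starts
COPY init.sh /docker-entrypoint-initdb.d/init.sh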
The docker-entrypoint.sh seems to be responsible for running the shell scripts based on the arguments passed to it. So init is the first argument, which in turn tries to run init.sh. You can check the output with:
docker-compose logs solr | head
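For reference, a plain docker run equivalent of this setup, following the official image's documented solr-precreate usage (the host paths are taken from the compose file above, so treat this as a sketch), would look roughly like:
docker run -d -p 8983:8983 \
  -v "$PWD/solr/init.sh:/docker-entrypoint-initdb.d/init.sh" \
  -v "$PWD/solr/data:/opt/solr/server/solr/mycores" \
  solr solr-precreate mycore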
Update 1:
I had struggled to get this to work and finally figured out why my docker-compose setup was not working while docker run -v pointing to /docker-entrypoint-initdb.d/init.sh was working.
It turns out that removing the entrypoint block entirely was the solution. Here's my final docker-compose:
version: '3'
services:
  solr:
    image: solr:6.6-alpine
    ports:
      - "8983:8983"
    volumes:
      - ./solr/data/:/opt/solr/server/solr/
      - ./solr/config/init.sh:/docker-entrypoint-initdb.d/init.sh
my ./solr/config/init.sh
#!/bin/bash
echo "running"
touch /opt/solr/server/solr/test.txt;
echo "test" > /opt/solr/server/solr/test.txt;

An alternative solution that worked for me was modifying the entrypoint to invoke /bin/sh. It looked a bit like this afterwards:
version: '3'
services:
  web:
    build: .
    volumes:
      - .:/code
    entrypoint:
      - /bin/sh
      - ./test.sh
    ports:
      - "5000:5000"
where test.sh is the required bash script to be run inside the container.
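For illustration only (the original post does not show it), test.sh could be as simple as a script that prepares files and then hands off to the app; the python command below is purely an assumption based on the exposed port 5000:
#!/bin/sh
# hypothetical test.sh: prepare files inside the container, then start the app
echo "preparing container..."
exec python app.py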

Related

Docker-compose how to update celery without rebuild?

I am working on my django + celery + docker-compose project.
Problem: I changed Django code, but the update only takes effect after docker-compose up --build.
How can I enable code updates without a rebuild?
I found this answer, Developing with celery and docker, but didn't understand how to apply it.
docker-compose.yml
version: '3.9'
services:
  django:
    build: ./project # path to Dockerfile
    command: sh -c "gunicorn --bind 0.0.0.0:8000 core_app.wsgi"
    volumes:
      - ./project:/project
      - ./project/static:/project/static
      - media-volume:/project/media
    expose:
      - 8000
  celery:
    build: ./project
    command: celery -A documents_app worker --loglevel=info
    volumes:
      - ./project:/usr/src/app
      - media-volume:/project/media
    depends_on:
      - django
      - redis
.........
volumes:
  pg_data:
  static:
  media-volume:
Code updates without rebuilding are achievable and are best practice when working with containers; otherwise it takes too much time and effort to create a new image every time you change the code.
The most popular way of doing this is to mount your code directory into the container, using one of the two methods below.
In your docker-compose.yml
services:
  web:
    volumes:
      - ./codedir:/app/codedir # where 'codedir' is your code directory
In the CLI, when starting a new container:
$ docker run -it --mount "type=bind,source=$(pwd)/codedir,target=/app/codedir" celery bash
So you're effectively mounting the directory that your code lives in on your computer inside the container (at /app/codedir in the examples above). Now you can change your code and...
the local directory overwrites the one from the image when the container is started. You only need to build the image once and use it until the installed dependencies or OS-level package versions need to be changed. Not every time your code is modified. - Quoted from this awesome article
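With bind mounts like the ones already present in the compose file above, picking up a code change then only needs a service restart rather than a rebuild, for example:
# reload the mounted code without rebuilding the images
docker-compose restart django celery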

Docker compose throwing find: ‘’: No such file or directory

I am trying to execute docker-compose on top of official Cassandra docker image.
Basically, I am trying to set a few properties inside the Cassandra image at /etc/cassandra/cassandra.yaml.
My docker-compose looks like
version: '3.0'
services:
  cassandra:
    image: cassandra:3.11.6
    ports:
      - "9042:9042"
    environment:
      - CASSANDRA_ENABLE_USER_DEFINED_FUNCTIONS=true
    # restart: always
    volumes:
      - ./cassandra-dc-1:/usr/local/bin/
    container_name: cassandra-dc-1
    entrypoint: /usr/local/bin/docker-entrypoint.sh
    # command: /usr/local/bin/docker-entrypoint.sh
When I run docker-compose up --build from the directory containing docker-compose.yaml, I get the error below:
cassandra-dc-1 | find: ‘’: No such file or directory
cassandra-dc-1 exited with code 1
I tried giving:
- an absolute path in the volume
- ./cassandra-dc-1:/usr/local/bin/ -- using this with the file 'docker-entrypoint.sh' that I want to copy into the container
I am unable to figure out what is wrong.
Basically, I am trying to set a few properties inside the Cassandra image at /etc/cassandra/cassandra.yaml.
If you just want to set properties in the yaml file, you do not need to override the entrypoint. Doing the below hides a lot of executables, not only the entrypoint:
volumes:
  - ./cassandra-dc-1:/usr/local/bin/
Do not mount the whole directory /usr/local/bin/.
If you just want to override the entrypoint, then mount only that file:
volumes:
  - ./cassandra-dc-1/docker-entrypoint.sh:/usr/local/bin/docker-entrypoint.sh
Custom config file
FROM cassandra:3.11.6
COPY my_customconfig.yml /etc/cassandra/cassandra.yaml
That's all you need to run with a custom config: copy the config in at build time.
Or with docker-compose:
volumes:
  - ./cassandra.yaml:/etc/cassandra/cassandra.yaml
Configuring Cassandra
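To illustrate the Dockerfile approach above, a build-and-run sketch (the image tag my-cassandra is just a placeholder):
docker build -t my-cassandra:3.11.6 .
docker run -d -p 9042:9042 --name cassandra-dc-1 my-cassandra:3.11.6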

Is there a way to RUN a command after building one of two containers in docker-compose

Following case:
I want to build two containers with docker-compose. One is MySQL, the other is a .war file run with Spring Boot that depends on MySQL and needs a working db. After the mysql container is built, I want to fill the db with my mysqldump file, before the other container is built.
My first idea was to have it in my mysql Dockerfile as
#RUN mysql -u root -p"$MYSQL_ROOT_PASSWORD" < /appsb.sql
but of course it wants to execute it while building.
I have no idea how to do it in the docker-compose file as a command; maybe that would work. Or do I need to write a script?
docker-compose.yml
version: "3"
services:
mysqldb:
networks:
- appsb-mysql
environment:
- MYSQL_ROOT_PASSWORD=rootpw
- MYSQL_DATABASE=appsb
build: ./mysql
app-sb:
image: openjdk:8-jdk-alpine
build: ./app-sb/
ports:
- "8080:8080"
networks:
- appsb-mysql
depends_on:
- mysqldb
networks:
- appsb-mysql:
Dockerfile for mysqldb:
FROM mysql:5.7
COPY target/appsb.sql /
#RUN mysql -u root -p"$MYSQL_ROOT_PASSWORD" < /appsb.sql
Dockerfile for the Spring Boot app appsb:
FROM openjdk:8-jdk-alpine
VOLUME /tmp
COPY target/appsb.war /
RUN java -jar /appsb.war
Here is a similar issue (loading a dump.sql at start up) for a MySQL container: Setting up MySQL and importing dump within Dockerfile.
Option 1: import via a command in the Dockerfile.
Option 2: execute a bash script from docker-compose.yml.
Option 3: execute an import command from docker-compose.yml.
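Beyond those options, the official mysql image automatically imports any *.sql files placed in /docker-entrypoint-initdb.d/ on the first start with an empty data directory (they run against MYSQL_DATABASE if it is set), so a minimal sketch of the mysqldb Dockerfile could be:
FROM mysql:5.7
# appsb.sql is imported automatically into the appsb database on first startup
COPY target/appsb.sql /docker-entrypoint-initdb.d/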

Go applications fails and exits when running using docker-compose, but works fine with docker run command

I am running all of these operations on a remote server that is a VM running Ubuntu 16.04.5 x64.
My Go project's Dockerfile looks like:
FROM golang:latest
ADD . $GOPATH/src/example.com/myapp
WORKDIR $GOPATH/src/example.com/myapp
RUN go build
#EXPOSE 80
#ENTRYPOINT $GOPATH/src/example.com/myapp/myapp
ENTRYPOINT ./myapp
#CMD ["./myapp"]
When I run the docker container using docker-compose up -d, the Go application exits and I see this in the docker logs:
myapp_1 | /bin/sh: 1: ./myapp: Exec format error
docker_myapp_1 exited with code 2
If I locate the image using docker images and run the image like:
docker run -it 75d4a95ef5ec
I can see that my golang application runs just fine:
viper environment is: development
HTTP server listening on address: ":3005"
When I googled this error, some people suggested compiling with special flags, but I am building and running this container on the same Ubuntu host, so I am really confused about why this isn't working under docker.
My docker-compose.yml looks like:
version: "3"
services:
openresty:
build: ./openresty
ports:
- "80:80"
- "443:443"
depends_on:
- myapp
env_file:
- '.env'
restart: always
myapp:
build: ../myapp
volumes:
- /home/deploy/apps/myapp:/go/src/example.com/myapp
ports:
- "3005:3005"
depends_on:
- db
- redis
- memcached
env_file:
- '.env'
redis:
image: redis:alpine
ports:
- "6379:6379"
volumes:
- "/home/deploy/v/redis:/data"
restart: always
memcached:
image: memcached
ports:
- "11211:11211"
restart: always
db:
image: postgres:9.4
volumes:
- "/home/deploy/v/pgdata:/var/lib/postgresql/data"
restart: always
Your docker-compose.yml file says:
volumes:
  - /home/deploy/apps/myapp:/go/src/example.com/myapp
which means your host system's source directory is mounted over, and hides, everything that the Dockerfile builds. ./myapp is then the host's copy of the myapp executable, and if it differs in any way (maybe you built it on a MacOS or Windows host), that will cause this error.
This is a popular setup for interpreted languages, where developers want to run their application without going through a normal test-build-deploy sequence, but it doesn't really make sense for a compiled language like Go, where you have no choice but to rebuild. I'd delete this block entirely.
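A quick way to confirm this (the path mirrors the bind mount in the compose file) is to check what binary the mount actually exposes on the host:
# if this reports a Mach-O or Windows PE executable, that explains the exec format error
file /home/deploy/apps/myapp/myapp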
The Go container stops running because of this:
WORKDIR $GOPATH/src/example.com/myapp
RUN go build
#EXPOSE 80
#ENTRYPOINT $GOPATH/src/example.com/myapp/myapp
ENTRYPOINT ./myapp
You are switching directories to $GOPATH/src/example.com/myapp, where you build your app; however, your entrypoint is pointing to the wrong location.
To solve this, you either copy the app into the root directory and keep the same ENTRYPOINT command or you copy the application to a different location and pass the full path such as:
ENTRYPOINT /my/go/app/location
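A minimal sketch of a Dockerfile that builds the binary to a fixed absolute path and points the entrypoint at it (paths are assumptions, keeping the question's pre-modules GOPATH-style layout):
FROM golang:latest
WORKDIR /go/src/example.com/myapp
COPY . .
# write the compiled binary outside the source tree so a bind mount cannot hide it
# (with a modern Go toolchain you would also need a go.mod or GO111MODULE setting)
RUN go build -o /usr/local/bin/myapp .
ENTRYPOINT ["/usr/local/bin/myapp"]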

How to create a solr core in a Dockerfile and map the local directory to the container's /opt/solr directory

I am using the following docker-compose.yml and Dockerfile to build the solr container
docker-compose.yml
version: "2"
services:
solr:
container_name: test.solr
#image: solr:5.5
build: .build/solr
ports:
- "8983:8983"
volumes:
- ./data/solr:/opt/solr/server/solr
Dockerfile
FROM solr:5.5
WORKDIR /opt/solr
RUN solr create -c drupal-solr
But the container can't be built; it fails with the following error:
ERROR: Service 'solr' failed to build: The command '/bin/sh -c solr
create -c drupal-search -p 8983' returned a non-zero code: 1
I have to remove the solr create command from the Dockerfile to allow the container to be built properly.
However, when I start the container, the solr container exits with the following error:
Solr home directory /opt/solr/server/solr must contain a solr.xml
file!
How should I update my docker-compose.yml and Dockerfile to pre-create the core, and map the core directory to the local directory?
This might be because of the build and start order.
To avoid this issue you could use:
FROM solr:6.6-alpine
RUN precreate-core drupal-solr
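A matching compose service for that Dockerfile might look like this (the build path is an assumption mirroring the question's layout). Note that the bind mount from the question, ./data/solr:/opt/solr/server/solr, would hide whatever the image pre-created, including solr.xml, which is likely why Solr complains that it is missing:
version: "2"
services:
  solr:
    container_name: test.solr
    build: .build/solr   # directory containing the Dockerfile above
    ports:
      - "8983:8983"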
