I'm new to Docker, so please bear with me. I'm trying to use docker-compose with an Alpine-based Dockerfile. Ideally I'd spin the Alpine image up and have it continue to run:
FROM alpine:edge
My docker-compose.yml mounts a volume that contains a shell script:
version: '3'
services:
  solr-service:
    build: ./solr-service
    volumes:
      - /Users/asdf/customsolr/trunk:/asdf/customsolr/trunk
    ports:
      - 8801:8801
The script mounted at /Users/asdf/customsolr/trunk/startsolr.sh is one I've tried all kinds of approaches to run and keep running. If I run it locally on my own machine, outside of Docker, it spins up the files in its directory in a mini custom Jetty instance. When I try to invoke the script through RUN or CMD, the container has either already finished or it cannot find the needed start.jar.
#!/bin/sh
export DEBUG_ARGS=''
export DEBUG_ARGS='-Xdebug -Xrunjdwp:transport=dt_socket,server=y,suspend=n,address=8909'
java -Xms512M -Xmx1024M $DEBUG_ARGS -Dfile.encoding=UTF8 -server -Dsolr.solr.home=cores -Dsolr.useFilterForSort=false -Djetty.home=_container -Djetty.logs=_container/logs -jar _container/start.jar
Any ideas what I'm doing wrong?
The Alpine image doesn't come with Java installed, so unless you're adding it to your image inside the Dockerfile, I'd say the script just finishes and your container exits immediately.
It would be helpful if you provided your Dockerfile, the full docker-compose.yml, and the container logs.
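For reference, a minimal Dockerfile sketch along those lines; the openjdk8-jre package name and the reuse of the compose mount path are assumptions, not tested against your setup:

FROM alpine:edge
# Install a JRE so the java call in startsolr.sh works
# (assumption: openjdk8-jre; the exact package name varies by Alpine release)
RUN apk add --no-cache openjdk8-jre
# Work from the directory the compose file mounts the script into
WORKDIR /asdf/customsolr/trunk
# Use CMD rather than RUN so the script runs at container start;
# Jetty stays in the foreground, so the container keeps running
CMD ["/bin/sh", "startsolr.sh"]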
Related
I would like to run an Oracle Docker container using docker-compose. In my docker-compose.yml file I mount the Docker volume as:
volumes:
  - /host/folder:/opt/oracle/scripts/setup
The /host/folder directory actually has multiple subdirectories containing setup scripts, which I want executed when I do docker-compose up. Would runScripts.sh in the container consider the subdirectories too?
No, docker-compose does not consider your subdirectories for that.
You can instead run a single bash script tailored to your requirements, which executes the specific scripts you need.
Your docker-compose.yml will look like the following:
version: "3"
services:
setup:
image: ubuntu:latest
volumes:
- ./startup-script.sh:/root/startup-script.sh
- /host/folder:/opt/oracle/scripts/setup
entrypoint: "/root/startup-script.sh"
stdin_open: true
tty: true
And startup-script.sh will look like the following:
#!/bin/bash
# Run each setup script explicitly; adjust the paths to wherever
# the subdirectories are mounted inside the container
bash /directory1/script.sh
bash /directory2/script.sh
bash /directory3/script.sh
bash /directory4/script.sh
# Keep an interactive shell open so the container stays running
/bin/bash
exec "$@"
So, when the container comes up, startup-script.sh is executed, and it in turn runs all of your other required scripts.
Note: If your container is not based on an Ubuntu image and supports sh instead of bash, you can replace /bin/bash with /bin/sh in your docker-compose.yml and startup-script.sh.
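If you'd rather not hard-code each subdirectory, a hedged alternative is to discover the scripts with find; this is just a sketch, assuming the setup scripts are .sh files under the mount point:

#!/bin/bash
# Run every .sh file under the mounted setup directory,
# including those in subdirectories, in sorted order
find /opt/oracle/scripts/setup -type f -name '*.sh' | sort | while read -r script; do
  bash "$script"
done
# Keep the container alive afterwards, as in the script above
/bin/bash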
I am very (read very) new to Docker, so I'm experimenting. I have created a very basic Dockerfile to pull in Laravel:
FROM composer:latest
RUN composer_version="$(composer --version)" && echo $composer_version
RUN composer global require laravel/installer
WORKDIR /var/www
RUN composer create-project --prefer-dist laravel/laravel site
My docker-compose.yml file looks like:
version: '3.7'
services:
  laravel:
    build:
      context: .
      dockerfile: laravel.dockerfile
    container_name: my_laravel
    network_mode: host
    restart: on-failure
    volumes:
      - ./site:/var/www/site
When I run docker-compose up, the ./site directory is created but its contents are empty. I've put this in docker-compose because I plan on including other things like nginx, mysql, php, etc.
The command:
docker run -v "/where/i/want/data/site:/var/www/site" my_laravel
Results in the same behaviour.
I know the install is successful because I modified my Dockerfile with the following two lines appended to it:
WORKDIR /var/www/site
RUN ls -la
Which gives me the correct listing.
Clearly misunderstanding something here. Any help appreciated.
EDIT: So, I was able to get this to work, although it's slightly more difficult than just specifying a path.
You can accomplish this by specifying a volume in docker-compose.yml. The path to the directory on the host is labeled as device in the compose file. It appears that the root of the path has to be an actual volume (possibly a share would work), but the 'destination' of the path can be a directory on the specified volume.
I created a new volume called docker on my machine, but I suppose you could do this with your existing disk/volume.
I am on a Mac and this docker-compose.yml file worked for me:
version: '3.7'
services:
  nodemon-test:
    container_name: my-nodemon-test
    image: oze4/nodemon-docker-test
    ports:
      - "1337:1337"
    volumes:
      - docker_test_app:/app # see comment below on which name to use here
volumes:
  docker_test_app: # use this name under `volumes:` for the service
    name: docker_test_app
    driver: local
    driver_opts:
      o: bind
      type: none
      device: /Volumes/docker/docker_test_app
The container specified exists on my Docker Hub; this is the source code for it, just in case you are worried about anything malicious. I created it like two weeks ago to help someone else on Stack Overflow.
(Screenshot: files from the container shown on my machine, the host.)
You can read more about Docker Volume configs here if you would like.
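To double-check that the named volume was created with the bind options, you can inspect it; a quick sanity check, and the output shape varies by Docker version:

docker volume ls
docker volume inspect docker_test_app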
ORIGINAL ANSWER:
It looks like you are trying to share the build directory with your host machine. After some testing, it appears Docker will overwrite the specified path in the container with the contents of the path on the host.
If you run docker logs my_laravel you should see an error about missing files at /var/www/site. So, even though the build is successful, once Docker mounts the directory from your machine (./site) onto the container (/var/www/site), it overwrites the path within the container with the contents of the path on your host, which is empty.
To test and make sure the contents of /var/www/site are in fact being overwritten, you can run docker exec -it my_laravel /bin/bash (you may need to replace /bin/bash with /bin/sh). This gives you command-line access inside the container; from there you can do ls -a /var/www/site.
Furthermore, you can pre-stage ./site with a random test file (test.txt or whatever), run docker-compose up -d, then run the same docker exec command from the step above and see whether the staged test.txt file is now the only thing inside the container's /var/www/site. That gives you definitive evidence that when you mount a volume, the data on your host overwrites the data in the container.
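Put together, that check might look like this (container name taken from the compose file above):

touch ./site/test.txt
docker-compose up -d
# If test.txt is the only file listed, the host directory overwrote the build output
docker exec -it my_laravel ls -a /var/www/site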
That being said, doing something like this to share a log directory will work. The volume path specified on the container is still overwritten; the difference is that the container writes to that path rather than relying on it for config/app files.
Hope this helps.
Here is a simplified version of my docker-compose.yml (it's the volume in buggy-service that does not behave as I expect):
version: '3.4'
services:
  local-db:
    image: postgres:9.6
    environment:
      - DB_NAME=${DB_NAME}
      # other env vars (not important)
    ports:
      - 5432:5432
    volumes:
      - ~/.docker-volumes/${DB_NAME}/postgresql/data:/var/lib/postgresql/data
      - postgresql:/docker-entrypoint-initdb.d
  buggy-service:
    build:
      context: .
      dockerfile: Dockerfile.test
      target: buggy-image
      args:
        # bunch of args (not important)
    volumes:
      - /Users/me/temp:/temp
volumes:
  postgresql:
    driver_opts:
      type: none
      device: /Users/me/postgresql
      o: bind
If I do docker-compose -f docker-compose.yml up -d local-db, a container for it starts up automatically and I find that /Users/me/postgresql on the host machine (Mac OSX) binds correctly to /docker-entrypoint-initdb.d with content synced.
However, if I do docker-compose -f docker-compose.yml up --build -d buggy-service, a container does not start up automatically.
Question: How do I get buggy-service to behave like local-db, i.e., start up automatically with the required volume mounted?
Here's the stripped-down version of Dockerfile.test referenced by buggy-service:
FROM microsoft/dotnet:2.1-sdk-alpine AS buggy-image
# Bunch of ARG definitions (not important)
VOLUME /temp
# other stuff (not important)
ENTRYPOINT ["/bin/bash"]
# Other FROMs
Edit 1
A bit more info about what I’m trying to achieve...
The buggy-container I'm trying to get working runs .NET Core as the base image. Its purpose is to run dotnet test and generate coverage reports, which can then be consumed on the host: either a local dev machine or a build server (in this case, Bitbucket Pipelines).
... followed by docker run -dit --name buggy-container buggy-image
This command creates a new container, not based on anything in the compose yml file. Without a volume specification, it will only get an anonymous volume since you've defined the volume in the Dockerfile (I tend to recommend against defining a volume there). You can see the anonymous volumes with a docker volume ls command, they'll be the ones with a long unique id and no reference to what they belong to.
To define a host volume from docker run, you need the -v flag:
docker run -dit -v /Users/me/temp:/temp --name buggy-container buggy-image
From your now changed question, you have a new issue. Your container specifies a single command to run in the entrypoint:
ENTRYPOINT ["/bin/bash"]
When bash runs, it reads input from stdin. When that input ends, as it does when you run a container with no input attached, bash exits; and when the process your container runs exits, the container exits. From the details available, I can't tell you what that command should be, but a good starting point is to look at other images on Docker Hub that perform a task similar to yours and check the Dockerfile they use (many hub images point back to a GitHub repo with the full source).
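If the goal is simply to run dotnet test and leave results in /temp for the host, one hedged sketch is to make that the container's main process instead of bash; the test arguments here are illustrative, not taken from your project:

FROM microsoft/dotnet:2.1-sdk-alpine AS buggy-image
WORKDIR /src
COPY . .
# Run the tests as the container's main process; the container exits when they
# finish, leaving the results in /temp for the host via the bind mount
ENTRYPOINT ["dotnet", "test", "--results-directory", "/temp"]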
Using docker-compose v3, I'm trying to run some app-specific commands like composer update whenever I run docker-compose up. My docker-compose.yml file looks something along the lines of this:
version: '3'
services:
  app1:
    image: laraedit/laraedit
    ports:
      - 3000:80
    volumes:
      - ./appfolder:/var/www/appfolder
If I run my first-run commands in the entrypoint, they override the commands the default laraedit/laraedit image runs. (At least I think so, because the container always stops when my entrypoint commands finish.)
I don't want to disturb the laraedit/laraedit startup process; I just want to execute a couple of commands on the side.
If I weren't using docker-compose, I would have the laraedit/laraedit Dockerfile locally, and I could edit it and add a RUN statement somewhere in there.
But since I don't have the Dockerfile, and I can't add an entrypoint without throwing off the container's normal startup, I don't know how to automate running these commands every single time I run docker-compose up.
Things I've tried:
adding my own Dockerfile (that replaces laraedit's)
running an entrypoint script (that blocks laraedit's startup)
running them as a command (the commands did not execute)
You need to extend the laraedit/laraedit image with a custom one.
You can use a Dockerfile as simple as this:
FROM laraedit/laraedit
COPY my_entrypoint.sh /my_entrypoint.sh
RUN chmod +x /my_entrypoint.sh
ENTRYPOINT /my_entrypoint.sh
my_entrypoint.sh is a script that contains your initialization commands and calls the original entrypoint at its end, for example:
#!/bin/sh
# your one-time initialization commands
my_init_cmd1
my_init_cmd2
...
# hand off to the image's original entrypoint so normal startup proceeds
exec /original/entrypoint/script/path
You can get the /original/entrypoint/script/path value by reading the original laraedit Dockerfile.
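If you'd rather not dig through the Dockerfile, you can also read the entrypoint straight off the image:

docker pull laraedit/laraedit
# Prints the image's configured ENTRYPOINT and CMD
docker inspect --format '{{json .Config.Entrypoint}} {{json .Config.Cmd}}' laraedit/laraedit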
Let's say you put the two files above (the Dockerfile and my_entrypoint.sh) in a directory called docker alongside your docker-compose.yml; then you need to adjust your docker-compose.yml like this:
version: '3'
services:
  app1:
    build: ./docker/
    ports:
      - 3000:80
    volumes:
      - ./appfolder:/var/www/appfolder
I followed this guide https://hub.docker.com/r/iliyan/jenkins-ci-php/ to download a Docker image with Jenkins.
When I start my container using the docker start CONTAINERNAME command, I can access Jenkins at localhost:8080.
The problem comes up when I change the Jenkins configuration and restart Jenkins using docker stop CONTAINERNAME and docker start CONTAINERNAME: my Jenkins doesn't contain any of my previous configuration changes.
How can I persist the Jenkins configuration?
You need to mount the Jenkins configuration as a volume; the -v flag will do just that for you. (You can ignore the --privileged flag in my example unless you plan on building Docker images inside your Jenkins container.)
docker run --privileged --name='jenkins' -d -p 6999:8080 -p 50000:50000 -v /home/jan/jenkins:/var/jenkins_home jenkins:latest
The -v flag mounts /var/jenkins_home from the container out to /home/jan/jenkins on the host, preserving it between rebuilds.
--name gives the container a fixed name to start/stop it by.
Then next time you want to run it, simply call
docker start jenkins
My understanding is that the init script
/sbin/tini -- /usr/local/bin/jenkins.sh
resets the Jenkins configuration on startup within the folder provided through the JENKINS_HOME env var, whether it is mounted outside the Docker VM or not.
It is, however, possible to store the configuration on GitHub using the configure / "Configure System" / "SCM Sync configuration" / Git section.
See possible detailed configuration here
You can use this docker-compose file:
version: '3.1'
services:
  jenkins:
    image: jenkins:latest
    container_name: jenkins
    restart: always
    environment:
      TZ: GMT
    volumes:
      - ./jenkins_host:/var/jenkins_home
    ports:
      - 8080:8080
    tty: true
You only need to share the Jenkins home volume ./jenkins_host:/var/jenkins_home with the host folder.
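With this in place, configuration survives restarts because it lives on the host (a usage sketch):

docker-compose up -d
# make configuration changes in the Jenkins UI, then:
docker-compose restart jenkins
# the changes persist because they are stored in ./jenkins_host on the host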
Besides the obvious, like disabling any run parameters that wipe the image's data, you can do a few things:
use docker commit and reuse the committed container
mount the directories you write to onto the local file system with Docker volumes
my favorite: use the command
docker container restart containername
Depending on your needs you can pick one.
I use the latter when testing Jenkins plugins, for example, and it retains the data inside the container.
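For the docker commit route, a minimal sketch; the names here are made up, and note that data in directories declared as VOLUME in the image is not captured by a commit:

# Snapshot the running container's filesystem as a new image
docker commit jenkins jenkins-with-my-config
# Later, start a fresh container from that snapshot
docker run -d -p 8080:8080 --name jenkins2 jenkins-with-my-config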
Source for the latter, which is also useful for updates:
https://jimkang.medium.com/how-to-start-a-new-jenkins-container-and-update-jenkins-with-docker-cf628aa495e9