I am trying to configure a Bitbucket CI pipeline to run tests. Stripping out the details, I have a Makefile which looks as follows to run some form of integration tests.
test-e2e:
docker-compose -f ${DOCKER_COMPOSE_FILE} up -d ${APP_NAME}
godog
docker-compose -f ${DOCKER_COMPOSE_FILE} down
The Docker Compose file defines a single web server with its ports exposed.
Pipeline looks as follows:
- step: &integration-testing
    name: Run integration tests
    script:
      # do this to make Go modules work with a private repo
      - apk add libc-dev py-pip python-dev libffi-dev openssl-dev gcc make bash
      - pip install docker-compose
      - git config --global url."git@bitbucket.org:".insteadOf "https://bitbucket.org/"
      - go get github.com/onsi/ginkgo/ginkgo
      - go get github.com/onsi/gomega/...
      - go get github.com/DATA-DOG/godog/cmd/godog
      - make build-only && make test-e2e
I am facing two separate issues, and for both I have not been able to find a solution.
I keep getting connection refused when the tests are run.
To elaborate on the above: Docker Compose brings up a server with a proper host:port mapping ("127.0.0.1:10077:10077"). The godog command is intended to run the tests by querying the server. However, this always ends in connection refused. This link has a possible solution, so I am exploring that.
The pipeline almost always runs commands before the container is up. I've tried fixing this by changing the invocation to:
test-e2e:
docker-compose -f ${DOCKER_COMPOSE_FILE} up -d ${APP_NAME} && sleep 10 && docker exec -i oracle-go godog && docker-compose -f ${DOCKER_COMPOSE_FILE} down
However, the container is always brought up only after the sleep has finished (almost instantaneously afterwards).
Example:
Creating oracle-go ...
Sleep 10
docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
docker exec -i oracle-go godog
Creating oracle-go ... done
Error response from daemon: Container 7bab5322203756b972e7f0a3c6e5827413279914a68c705221b8af7daadc1149 is not running
Please let me know if there is a way around it.
If I understood your question correctly, you want to wait for the server to start before running tests.
Instead of manually sleeping, you should use wait-for-it.sh (or an alternative). See the relevant Docker docs for more information.
For example:
test-e2e:
docker-compose -f ${DOCKER_COMPOSE_FILE} up -d ${APP_NAME} && bash wait-for-it.sh <HOST>:<PORT> -- docker exec -i oracle-go godog && docker-compose -f ${DOCKER_COMPOSE_FILE} down
Change <HOST> and <PORT> to your service's host name and port respectively. Alternatively, you could use wait-for-it.sh in your Docker Compose command or the like.
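If you control the Compose file, a health check lets Compose itself gate on readiness instead of an external script. A minimal sketch, assuming the service is called oracle-go, serves plain HTTP on port 10077, and has curl available inside its image (all assumptions to adjust to your setup):

services:
  oracle-go:
    # ... your existing image/build settings ...
    ports:
      - "127.0.0.1:10077:10077"
    healthcheck:
      # assumes curl exists in the image and the endpoint answers once the server is up
      test: ["CMD", "curl", "-f", "http://localhost:10077/"]
      interval: 2s
      timeout: 2s
      retries: 15

With that in place, newer Compose versions can block until the check passes via docker compose up -d --wait, and a dedicated test-runner service can declare depends_on with condition: service_healthy (supported in the 2.x file format and again in the Compose Specification).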
Related
I am wondering if it's possible to install dependencies in the containers without creating additional Dockerfiles.
A basic example is PHP Apache + mysqli.
For mysqli I need to run:
docker-php-ext-install mysqli && apachectl restart
But I don't want to create a separate file just to run this command.
I have tried adding it as a command to the docker-compose.yml:
services:
apache-php:
image: php:8.1-apache
volumes:
- ./:/var/www/html
command: docker-php-ext-install mysqli && apachectl restart
ports:
- "5520:80"
The commands successfully execute, but then the container shuts down.
As soon as I add any command, even if it's just echo "hello", the container will shut down.
I've read so much of the documentation and I can't see a clear reason for this, or hunt down any way to make the container stay open.
I've read a lot of Stack Overflow answers on using sleep, tty/interactive and other tricks to keep the container from shutting down, but none of them work.
I can achieve a single-file setup with a simple bash script using the Docker CLI:
#!/usr/bin/env bash
# Use the project folder as the prefix for our containers and networks.
PROJECT=`basename $(pwd)`
# HTTP: APACHE + PHP + MYSQLI
NAME="${PROJECT}_http"
PORT="5520"
docker run -dit --name "$NAME" -p $PORT:80 \
-v `pwd`:/var/www/html \
php:8.1-apache
docker exec "$NAME" docker-php-ext-install mysqli
docker exec "$NAME" apachectl restart
This script works fine, and I can also do anything I want (create networks, aliases, etc.) with the Docker CLI, but docker-compose would be nicer.
I just don't want to create a bunch of extra files for some simple commands to run. Surely there is a way?
If it's not possible, I guess I'll just keep expanding the bash script and build my network that way.
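For reference, a minimal sketch of one common workaround: command: replaces the image's default command, so the container exits as soon as the listed commands finish. Chaining the image's normal foreground process back on (apache2-foreground is the default command of php:8.1-apache) keeps the container alive:

services:
  apache-php:
    image: php:8.1-apache
    volumes:
      - ./:/var/www/html
    ports:
      - "5520:80"
    # run the setup step, then hand off to the image's usual foreground process
    command: bash -c "docker-php-ext-install mysqli && apache2-foreground"

The trade-off is that the extension is recompiled on every container start, which is slower than baking it into an image but avoids the extra Dockerfile.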
I'm trying to run sendmailconfig after my PHP-FPM (7.1-fpm) container has started, but I'm having a hard time doing so without getting in the way of the FPM part of the container.
FROM php:7.1-fpm
RUN apt-get update && apt-get install
CMD "/usr/local/bin/config.sh" && /bin/bash
I've tried making a script that purely executes yes | sendmailconfig, but that seems to stop the image's default startup script from running, which causes PHP-FPM to never actually start.
The reason I want this done in the image is because I have to run the sendmailconfig command every time I restart the container, which is impractical when managing multiple docker stacks.
Set your ENTRYPOINT to run a file you've copied in; that file should have something like the following in it:
#!/bin/bash
/usr/local/bin/config.sh
# If this isn't the correct command for you to start php-fpm, look up the correct one for your image
sudo service php7.1-fpm start
# Execute the CMD passed in from the Dockerfile
sudo -H bash -c "$@;"
# You'll probably be OK with just `bash -c "$@;"` if you don't have sudo installed
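To make the hand-off concrete, here is a sketch of how the Dockerfile could wire that script up (the entrypoint.sh name is an arbitrary choice; php-fpm is the stock CMD of the php:7.1-fpm image):

FROM php:7.1-fpm
# copy the wrapper script in and make it executable
COPY entrypoint.sh /usr/local/bin/entrypoint.sh
RUN chmod +x /usr/local/bin/entrypoint.sh
ENTRYPOINT ["/usr/local/bin/entrypoint.sh"]
# the CMD is passed to the entrypoint script as its arguments ("$@")
CMD ["php-fpm"]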
I'm trying to extend the CouchDB Docker image to pre-populate CouchDB (with initial databases, design documents, etc.).
In order to create a database named db, I first tried this initial Dockerfile:
FROM couchdb
RUN curl -X PUT localhost:5984/db
but the build failed, since the couchdb service is not yet started at build time. So I changed it to this:
FROM couchdb
RUN service couchdb start && \
sleep 3 && \
curl -s -S -X PUT localhost:5984/db && \
curl -s -S localhost:5984/_all_dbs
Note:
- the sleep was the only way I found to make it work, since it did not work with the curl option --connect-timeout;
- the second curl is only there to check that the database was created.
The build seems to work fine:
$ docker build . -t test3 --no-cache
Sending build context to Docker daemon 6.656kB
Step 1/2 : FROM couchdb
---> 7f64c92d91fb
Step 2/2 : RUN service couchdb start && sleep 3 && curl -s -S -X PUT localhost:5984/db && curl -s -S localhost:5984/_all_dbs
---> Running in 1f3b10080595
Starting Apache CouchDB: couchdb.
{"ok":true}
["db"]
Removing intermediate container 1f3b10080595
---> 7d733188a423
Successfully built 7d733188a423
Successfully tagged test3:latest
What is weird is that when I now start it as a container, the database db does not seem to have been saved into the test3 image:
$ docker run -p 5984:5984 -d test3
b34ad93f716e5f6ee68d5b921cc07f6e1c736d8a00e354a5c25f5c051ec01e34
$ curl localhost:5984/_all_dbs
[]
Most of the standard Docker database images include a VOLUME line that prevents creating a derived image with prepopulated data. For the official couchdb image you can see the relevant line in its Dockerfile. Unlike the relational-database images, this image doesn’t have any support for scripts that run at first startup.
That means you need to do the initialization from the host or from another container. If you can directly interact with it using its HTTP API, then this could look like:
# Start the container
docker run -d -p 5984:5984 -v ... couchdb
# Wait for it to be up
for i in $(seq 20); do
if curl -s http://localhost:5984 >/dev/null 2>&1; then
break
fi
sleep 1
done
# Create the database
curl -XPUT http://localhost:5984/db
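If you would rather not script this from the host, the same initialization can run from another container, for example as a one-shot service in a Compose file. A rough sketch (the curlimages/curl image and the /opt/couchdb/data volume path are assumptions to verify against the official image docs):

services:
  couchdb:
    image: couchdb
    ports:
      - "5984:5984"
    volumes:
      - couch-data:/opt/couchdb/data
  couchdb-init:
    image: curlimages/curl
    depends_on:
      - couchdb
    # poll until CouchDB answers, then create the database; the container exits when done
    entrypoint: >
      sh -c 'until curl -s http://couchdb:5984/ >/dev/null; do sleep 1; done;
      curl -s -X PUT http://couchdb:5984/db'
volumes:
  couch-data: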
I am trying to get the Jenkins Docker image deployed to ECS and have docker-compose work inside of my pipeline.
I have hit wall after wall trying to get this Jenkins container launched and functioning. Most of the issues have just been getting the docker command to work inside of a pipeline (including getting the permissions/group right).
I've gotten to the point that the command works and uses the host Docker socket (docker ps outputs the Jenkins container and the ECS agent), and docker-compose is working (docker-compose --version works). But when I try to run anything that involves files inside the pipeline, I get a "no such file or directory" error. This happens when I run docker-compose -f docker-compose.testing.yml up -d --build (it can't find the yml file), and also when I try to run a basic docker build: it can't find local files used in the COPY command (e.g. COPY . /app). I've tried changing the command to use ./file.yml and $PWD/file.yml and still get the same error.
Here is my Jenkins Dockerfile:
FROM jenkins/jenkins:lts
USER root
RUN apt-get update && \
apt-get install -y --no-install-recommends curl
RUN apt-get remove docker
RUN curl -sSL https://get.docker.com/ | sh
RUN curl -L --fail https://github.com/docker/compose/releases/download/1.21.2/run.sh -o /usr/local/bin/docker-compose
RUN chmod +x /usr/local/bin/docker-compose
RUN groupadd -g 497 dockerami \
&& usermod -aG dockerami jenkins
USER jenkins
COPY plugins.txt /usr/share/jenkins/ref/plugins.txt
COPY jobs /app/jenkins/jobs
COPY jenkins.yml /var/jenkins_home/jenkins.yml
RUN xargs /usr/local/bin/install-plugins.sh < /usr/share/jenkins/ref/plugins.txt
RUN echo 2.0 > /usr/share/jenkins/ref/jenkins.install.UpgradeWizard.state
ENV CASC_JENKINS_CONFIG /var/jenkins_home/jenkins.yml
I also have Terraform building the task definition and binding the /var/run/docker.sock from the host to the jenkins container.
I'm hoping to get this working since I have liked Jenkins since we started using it about 2 years ago, and I've had these pipelines working with docker-compose in our non-containerized Jenkins install, but getting Jenkins containerized has so far been like pulling teeth. I would much prefer to get this working than to have to change my workflows right now to something like Concourse or Drone.
One issue in your Dockerfile is that you are copying a file into /var/jenkins_home, which will disappear: /var/jenkins_home is defined as a volume in the parent Jenkins image, and any files you copy into a volume after the volume has been declared are discarded. See https://docs.docker.com/engine/reference/builder/#notes-about-specifying-volumes
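A minimal sketch of one way around that particular problem: copy the configuration to a path that is not inside a declared volume and point CASC_JENKINS_CONFIG at it (the /usr/local/etc location here is an arbitrary choice, not something the Jenkins image requires):

# copy the config somewhere that is not shadowed by the jenkins_home volume
COPY jenkins.yml /usr/local/etc/jenkins.yml
ENV CASC_JENKINS_CONFIG /usr/local/etc/jenkins.yml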
I am containerizing the latest version of Grafana and want to start the grafana process when the container starts, and then use it in my Kubernetes (K8s) cluster.
My Dockerfile looks like :
FROM armdocker/baseimages/rhel:7-20161207
MAINTAINER xxxxxxxx
ENV GRAFANA_VERSION_MAJOR=4 GRAFANA_VERSION_MINOR=4 GRAFANA_VERSION_PATCH=3-1
ENV GRAFANA_VERSION=${GRAFANA_VERSION_MAJOR}.${GRAFANA_VERSION_MINOR}.${GRAFANA_VERSION_PATCH}
RUN yum clean all && yum install -y unzip tar
RUN curl -f -L -o grafana-${GRAFANA_VERSION}.x86_64.rpm https://s3-us-west-2.amazonaws.com/grafana-releases/release/grafana-${GRAFANA_VERSION}.x86_64.rpm && \
yum localinstall grafana-${GRAFANA_VERSION}.x86_64.rpm -y
EXPOSE 3000
ENTRYPOINT ["/etc/init.d/grafana-server start"]
Building the Dockerfile is successful and returns no errors.
When I try to run this image, I get the following error:
docker run -dit -p 3000:3000 armdocker/proj/grafana:1.0.5
471b2acb964caad69bbb78831a59ee9d2b27997911b5b104b0057ddc957d1101
Error response from daemon: Cannot start container 471b2acb964caad69bbb78831a59ee9d2b27997911b5b104b0057ddc957d1101: [8] System error: exec: "/etc/init.d/grafana-server start": stat /etc/init.d/grafana-server start: no such file or directory
This seems very weird, since I am installing the RPM first (which creates the file /etc/init.d/grafana-server) and then trying to start the process as my ENTRYPOINT.
I then tried
CMD ["/etc/init.d/grafana-server start"]
This also results in the same error: /etc/init.d/grafana-server start: no such file or directory
I then tried using the systemctl command:
docker run -dit -p 3000:3000 armdocker/proj/grafana:1.0.6
bfd492c75a0f4c284fc0fdbd5a590f0155f6f67bcb4834e144f344bb789546f3
Error response from daemon: Cannot start container bfd492c75a0f4c284fc0fdbd5a590f0155f6f67bcb4834e144f344bb789546f3: [8] System error: exec: "/bin/systemctl start grafana-server.service": stat /bin/systemctl start grafana-server.service: no such file or directory
I am out of ideas as to what I am doing wrong in trying to get a container with a running grafana process.
Unless you're running your own systemd daemon inside of the container (I don't recommend this, it creates lots of issues), you shouldn't be trying to start the process with a systemctl or /etc/init.d command. Containers are not a VM; they are a method to run an application within its own namespace, and when that application exits, so does your container. When your application is something like a systemctl start command, your container will exit the moment that systemctl command returns, which isn't useful if you were hoping it would stay up for the duration of the grafana process. (The "no such file or directory" error itself comes from using the JSON exec form with a single string: Docker does not split "/etc/init.d/grafana-server start" on spaces, so it looks for one binary with that literal name.)
Rather than trying to reinvent the wheel, I'd recommend you look at how Grafana themselves package their Docker container. Specifically, their run.sh ends with:
exec gosu grafana /usr/sbin/grafana-server \
--homepath=/usr/share/grafana \
--config=/etc/grafana/grafana.ini \
cfg:default.log.mode="console" \
cfg:default.paths.data="$GF_PATHS_DATA" \
cfg:default.paths.logs="$GF_PATHS_LOGS" \
cfg:default.paths.plugins="$GF_PATHS_PLUGINS" \
"$@"
Their repo is over at https://github.com/grafana/grafana-docker
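Applied to the Dockerfile in the question, that pattern means making the server binary itself the entrypoint, running in the foreground. A hedged sketch, assuming the RPM installs the usual paths (/usr/sbin/grafana-server, /usr/share/grafana, /etc/grafana/grafana.ini; verify against your package):

# run the server binary directly in the foreground instead of the init script
ENTRYPOINT ["/usr/sbin/grafana-server", "--homepath=/usr/share/grafana", "--config=/etc/grafana/grafana.ini"]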
As an alternative you could use the docker-systemctl-replacement script and register it as the main CMD of the image. It will check out the *.service scripts to know how to start and stop a service (without the help of a systemd daemon). So if the Grafana guys change their startup scenario then your builds will continue to work. ;)
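As a rough sketch of that approach, based on the project's documented usage (the path to systemctl.py inside the repo is an assumption; check its README for the current layout, and note the script needs a Python interpreter in the image):

# replace the real systemctl with the replacement script (requires Python in the image)
COPY docker-systemctl-replacement/files/docker/systemctl.py /usr/bin/systemctl
RUN systemctl enable grafana-server
EXPOSE 3000
# the script runs in the foreground and starts the enabled services
CMD ["/usr/bin/systemctl"]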