Convert Makefile docker run to Cloud Build

I have a repo that contains a Dockerfile and a Makefile. Usually I run make && make mkdocs-serve in the local repo folder, and the Makefile takes care of 1) building the image, 2) building the MkDocs site (a static site generator) and 3) running the MkDocs server at 127.0.0.1:8789.
Here is what the Makefile looks like:
#vars
IMAGENAME=my_mkdocs
USER=$(shell id -u)

.DEFAULT_GOAL := all

docker-clean:
	-docker rm ${IMAGENAME}

docker-build:
	docker build -t ${IMAGENAME}:v1 --build-arg=USER=${USER} .

mkdocs-build:
	docker run -it -v $(CURDIR):/home/mkdocs ${IMAGENAME}:v1 new /build
	# docker run -it --name ${IMAGENAME} -p 8789:80 ${IMAGENAME}:v1

mkdocs-serve:
	docker run --rm -it -v $(CURDIR):/home/mkdocs -p 8789:8000 ${IMAGENAME}:v1 mkdocs serve --dev-addr 0.0.0.0:8000

all: docker-clean docker-build
Now I want to build and run the Docker image using Cloud Build and Cloud Run, and I'm having difficulty converting the Makefile targets to Cloud Build syntax:
mkdocs-build:
I think the equivalent bash command is:
docker run -it -v /home/mkdocs my_mkdocs:v1 new /build
Here is the Cloud Build syntax I think might work:
# Build MkDocs
- name: 'bash'
  args: ['docker', 'run', '-it', '-v', '/home/mkdocs', 'my_mkdocs:v1', 'new', '/build']
mkdocs-serve:
I think the equivalent bash command is:
docker run --rm -it -v /home/mkdocs -p 8789:8000 my_mkdocs:v1 mkdocs serve --dev-addr 0.0.0.0:8000
Here is the Cloud Build syntax I think might work:
# Serve MkDocs
- name: 'bash'
  args: ['docker', 'run', '--rm', '-it', '-v', '/home/mkdocs', '-p', '8789:8000', 'my_mkdocs:v1', 'mkdocs', 'serve', '--dev-addr', '0.0.0.0:8000']
Does this make sense to you? I'm sure the docker commands work on my local machine, but these steps feel wrong, as there is no deployment to Cloud Run anywhere. There is some documentation on how to call Cloud Run from Cloud Build:
- name: 'gcr.io/google.com/cloudsdktool/cloud-sdk'
  entrypoint: gcloud
  args: ['run', 'deploy', 'SERVICE-NAME', '--image', 'gcr.io/PROJECT_ID/IMAGE', '--region', 'REGION']
images:
- gcr.io/PROJECT_ID/IMAGE
But this seems quite far from what the original bash commands do.
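For reference, here is a rough, untested sketch of how I imagine the three targets might collapse into a single cloudbuild.yaml. It assumes the image's entrypoint is mkdocs, that its default command listens on port 8000 (e.g. mkdocs serve --dev-addr 0.0.0.0:8000, even though the dev server isn't really meant for production), and that SERVICE-NAME and REGION are placeholders to fill in:
steps:
# docker-build: same as the Makefile target, but tagged for Container Registry
- name: 'gcr.io/cloud-builders/docker'
  args: ['build', '-t', 'gcr.io/$PROJECT_ID/my_mkdocs:v1', '.']
# mkdocs-build: run the site build against the checked-out sources in /workspace
# (drop -it: Cloud Build steps have no TTY)
- name: 'gcr.io/cloud-builders/docker'
  args: ['run', '-v', '/workspace:/home/mkdocs', 'gcr.io/$PROJECT_ID/my_mkdocs:v1', 'build']
# push the image so Cloud Run can pull it
- name: 'gcr.io/cloud-builders/docker'
  args: ['push', 'gcr.io/$PROJECT_ID/my_mkdocs:v1']
# mkdocs-serve: instead of `docker run -p 8789:8000`, hand the image to Cloud Run;
# --port must match whatever port the container command listens on
- name: 'gcr.io/google.com/cloudsdktool/cloud-sdk'
  entrypoint: gcloud
  args: ['run', 'deploy', 'SERVICE-NAME', '--image', 'gcr.io/$PROJECT_ID/my_mkdocs:v1', '--region', 'REGION', '--port', '8000']
images:
- 'gcr.io/$PROJECT_ID/my_mkdocs:v1'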

Related

Makefile: wget: operation not permitted only on GitLab CI job

I have a Makefile containing the following:
docker-compose.yml:
	wget https://gitlab.com/dependabot-gitlab/dependabot/-/raw/v0.34.0/docker-compose.yml
	docker run --rm -v ${PWD}:${PWD} -w ${PWD} mikefarah/yq:3 yq delete -i docker-compose.yml 'services[*].ports'
Running make docker-compose.yml locally works as expected, downloading the remote docker-compose.yml file and patching it.
However, if I configure a GitLab CI job to run this command:
deploy:
  image: docker:20.10
  services:
    - docker:20.10-dind
  before_script:
    - apk add make wget
    - make docker-compose.yml
  script: docker-compose up -d
I get the following error:
$ make docker-compose.yml
wget https://gitlab.com/dependabot-gitlab/dependabot/-/raw/v0.34.0/docker-compose.yml
make: wget: Operation not permitted
make: *** [Makefile:5: docker-compose.yml] Error 127
But copy-pasting the contents of the make docker-compose.yml target directly into the CI job script, like so:
deploy:
  # ...
  before_script:
    - apk add wget
    # Copy of make docker-compose.yml.
    # For "reasons", using the make command ends in a "wget: Operation not permitted" error.
    - wget https://gitlab.com/dependabot-gitlab/dependabot/-/raw/v0.34.0/docker-compose.yml
    - docker run --rm -v ${PWD}:${PWD} -w ${PWD} mikefarah/yq:3 yq delete -i docker-compose.yml 'services[*].ports'
  # ...
works fine.
Why don't I get the same behavior when running make in a CI job, and how can I solve this without duplicating the logic?
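For debugging, here is a sketch of probes I could add (a guess at the cause, not a confirmed fix): docker:20.10 is Alpine-based, and BusyBox ships its own wget applet, so the wget that make's shell resolves may not be the GNU wget that apk installed:
deploy:
  image: docker:20.10
  services:
    - docker:20.10-dind
  before_script:
    - apk add make wget
    # Which wget binary is on PATH? A BusyBox symlink or GNU wget from apk?
    - ls -l "$(which wget)"
    # Run the same command make would run, but via an explicit shell
    - sh -c "wget https://gitlab.com/dependabot-gitlab/dependabot/-/raw/v0.34.0/docker-compose.yml"
    - make docker-compose.yml
  script: docker-compose up -d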

Run a container only if the previous container's cmd/entrypoint script ran successfully

I'm using Docker to run container app A. When I upgrade app A to a new version, I upgrade the remote database using psql from the postgres image.
In k8s, I use an init container running the postgres image to execute an update.sh script; if that succeeds, the app A container starts.
How can I do the same thing in a plain Docker environment as I do in k8s?
# This problem has been solved; I used a command block in the k8s resource and it works:
- name: main-container
  ...
  command:
    - bash
    - -c
  args:
    - |
      if [ -f /tmp/update_success ]; then
        # Update succeeded
        do_something
      else
        # Update failed
        do_something_else
      fi
You would probably get a better answer if you posted your initContainer, but I would do something like this:
initContainers:
  - name: init
    ...
    command:
      - bash
      - -c
    args:
      - |
        update.sh && touch /tmp/update_success
    volumeMounts:
      - name: tmp
        mountPath: /tmp
containers:
  - name: main-container
    ...
    command:
      - bash
      - -c
    args:
      - |
        if [ -f /tmp/update_success ]; then
          # Update succeeded
          do_whatever
        else
          # Update failed
          do_something_else
        fi
    volumeMounts:
      - name: tmp
        mountPath: /tmp
volumes:
  - name: tmp
    emptyDir: {}
Also, if your init container exits non-zero, the main container will not run at all. If that's the behavior you want, just make sure update.sh exits with an error code when the update fails, and you don't need any of the above.
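A minimal sketch of that simpler variant (the image name is a placeholder), relying only on the fact that a failing init container leaves the pod in Init:Error so the main container never starts:
initContainers:
  - name: migrate
    image: myimage              # placeholder: any image that contains update.sh
    command: ["update.sh"]      # a non-zero exit here blocks main-container from starting
containers:
  - name: main-container
    image: myimage              # starts only once the init container has exited 0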
In plain Docker, if you run docker run without the -d option, it will block until the container completes. So you could run this literal sequence as something like
docker network create appnet
docker run -d --name db --net appnet postgres
# Run the migrations
docker run --net appnet -e DB_HOST=db myimage update.sh
if [ $? != 0 ]; then
  echo migrations failed >&2
  exit 1
fi
# Run the main application
docker run -d --name app --net appnet -p 8000:8000 myimage
Docker Compose has no support for workflows like this; it is only able to start a batch of long-running containers in parallel, but not more complex "start A only after B is finished" sequences.
If you're sure you want to run migrations every time every instance of your application starts up (including every replica of a Kubernetes Deployment) then you can also write this sequence into an entrypoint wrapper script in your image. This script can be as little as
#!/bin/sh
# Run migrations
update.sh
# Run the main container command
exec "$@"
and in your Dockerfile, make this script be the ENTRYPOINT
# entrypoint.sh should be checked in to source control as executable
COPY entrypoint.sh .
# ENTRYPOINT must use JSON-array syntax
ENTRYPOINT ["./entrypoint.sh"]
# keep the same CMD as before, unmodified from the existing Dockerfile
CMD same CMD as before
Note that there are several reasons to not want this (if you need to roll back the application, what happens to the database? if you need 16 replicas, does every one try to run migrations on its own?) and I might look into other mechanisms like Helm hooks (specifically in a Kubernetes context) to run the upgrades instead.
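If you go the Helm route, here is a rough sketch of a pre-upgrade hook Job (all names are placeholders); the Job then runs once per helm install/upgrade instead of once per replica:
apiVersion: batch/v1
kind: Job
metadata:
  name: db-migrate
  annotations:
    "helm.sh/hook": pre-install,pre-upgrade
    "helm.sh/hook-delete-policy": before-hook-creation
spec:
  template:
    spec:
      restartPolicy: Never
      containers:
        - name: migrate
          image: myimage           # placeholder: image containing update.sh
          command: ["update.sh"]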

JUnit report is not updated after running tests on GitLab

I run automated tests on GitLab CI with GitLab Runner. Everything works except the reports: after the tests, the JUnit reports are not updated and always show the same passed and failed tests, even though the console output shows a different number of passed tests.
GitLab CI config:
stages:
  - build
  - test

docker-build-master:
  image: docker:latest
  stage: build
  services:
    - docker:dind
  before_script:
    - docker login -u "xxx" -p "yyy" docker.io
  script:
    - docker build ./AutomaticTests --pull -t "dockerImage"
    - docker image tag dockerImage xxx/dockerImage:0.0.1
    - docker push "xxx/dockerImage:0.0.1"

test:
  image: docker:latest
  services:
    - docker:dind
  stage: test
  before_script:
    - docker login -u "xxx" -p "yyy" docker.io
  script:
    - docker run "xxx/dockerImage:0.0.1"
  artifacts:
    when: always
    paths:
      - AutomaticTests/bin/Release/artifacts/test-result.xml
    reports:
      junit:
        - AutomaticTests/bin/Release/artifacts/test-result.xml
Dockerfile:
FROM mcr.microsoft.com/dotnet/core/sdk:2.1
COPY /publish /AutomaticTests
WORKDIR /AutomaticTests
RUN apt-get update -y
RUN apt-get install -y unzip
RUN wget https://dl.google.com/linux/direct/google-chrome-stable_current_amd64.deb
RUN dpkg -i google-chrome-stable_current_amd64.deb; apt-get -fy install
RUN curl https://chromedriver.storage.googleapis.com/84.0.4147.30/chromedriver_linux64.zip -o /usr/local/bin/chromedriver
RUN unzip -o /usr/local/bin/chromedriver -d /AutomaticTests
RUN chmod 777 /AutomaticTests
CMD dotnet vstest /Parallel AutomaticTests.dll --TestAdapterPath:. --logger:"nunit;LogFilePath=..\artifacts\test-result.xml;MethodFormat=Class;FailureBodyFormat=Verbose"
I had a similar issue when using docker-in-docker for my GitLab pipeline. You run your tests inside your container, so the test results are stored inside your "container-under-test". However, the gitlab-ci paths reference not the "container-under-test" but the outer container of your docker-in-docker environment.
You could try to copy the test results from the image directly to your outside container via something like this:
mkdir reports
docker cp $(docker create --rm DOCKER_IMAGE):/ABSOLUTE/FILEPATH/IN/DOCKER/CONTAINER reports/.
So, this would be something like this in your case (untested...!):
...
test:
  image: docker:latest
  services:
    - docker:dind
  stage: test
  before_script:
    - docker login -u "xxx" -p "yyy" docker.io
  script:
    - mkdir reports
    - docker cp $(docker create --rm xxx/dockerImage:0.0.1):/AutomaticTests/bin/Release/artifacts/test-result.xml reports/.
  artifacts:
    when: always
    reports:
      junit:
        - reports/test-result.xml
...
Also, see this post for further explanation of the docker cp command: https://stackoverflow.com/a/59055906/6603778
Keep in mind that docker cp requires an absolute path to the file you want to copy from the container.

gitlab ci e2e test against nginx docker image

I'm trying to run e2e tests against a Docker image that is based on the official nginx image and built in a previous step.
My idea was to make it available via a service, like this:
e2e:
  stage: e2e
  image: weboaks/node-karma-protractor-chrome:alpine
  services:
    - name: $CI_REGISTRY_IMAGE:$CI_COMMIT_SHA
      alias: app
  before_script:
    - yarn
    - yarn run webdriver:update --standalone
  script:
    - yarn run e2e:ci
The Dockerfile of the image linked as a service looks like this:
FROM nginx:1.15-alpine
RUN rm -rf /usr/share/nginx/html/* && apk add --no-cache -vvv bash
ADD deploy/nginx/conf.d /etc/nginx/conf.d
ADD dist /usr/share/nginx/html
But it seems that the app isn't available at http://app.
Am I missing something, or is there another approach to test against an already built image?
When I run the image locally with docker run -p 80:80 local-test, or deploy it to a server, everything works as expected.
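For what it's worth, one thing I could try is probing the service before the suite runs (this assumes BusyBox wget is available in the Alpine-based job image):
before_script:
  - yarn
  - yarn run webdriver:update --standalone
  # Probe the service alias first: a connection failure here means the nginx
  # service container never became reachable, not that the tests are wrong.
  - wget -qO- http://app/ || echo "app not reachable from the job container"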

Docker images using CircleCI

I am working on integrating a CI/CD pipeline using Docker. How can I use a docker-compose file to build and create the container?
I have tried putting in a Dockerfile and a docker-compose.yml, but neither of them works.
Below is the Dockerfile:
FROM ruby:2.2
EXPOSE 8000
RUN mkdir -p /usr/src/app
COPY . /usr/src/app
WORKDIR /usr/src/app
RUN bundle install
CMD ["ruby","app.rb"]
RUN docker build -t itsruby:itsruby .
RUN docker run -d itsruby:itsruby
Below is the docker-compose.yml, which actually contains a CircleCI config:
version: 2
jobs:
  build:
    docker:
      - image: circleci/ruby:2.2
    steps:
      - checkout
      - run: CMD ["ruby","app.rb"]
      - run: |
          docker build -t itsruby:itsruby .
          docker run -d itsruby:itsruby
  test:
    docker:
      - image: circleci/ruby:2.2
    steps:
      - checkout
      - run: CMD ["ruby","app.rb"]
The build keeps failing in CircleCI.
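A minimal CircleCI 2 config that at least parses and can run Docker commands might look like the sketch below (untested; it assumes the Dockerfile above without its trailing RUN docker ... lines, and it drops the - run: CMD [...] lines, which are not valid shell commands). setup_remote_docker is what makes the Docker CLI available inside a docker executor, and goes in .circleci/config.yml, not docker-compose.yml:
version: 2
jobs:
  build:
    docker:
      - image: circleci/ruby:2.2
    steps:
      - checkout
      # required before any docker build/run inside a docker executor
      - setup_remote_docker
      - run: |
          docker build -t itsruby:itsruby .
          docker run -d itsruby:itsruby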
