Makefile: wget: operation not permitted only on GitLab CI job - docker

I have a Makefile containing the following:
docker-compose.yml:
	wget https://gitlab.com/dependabot-gitlab/dependabot/-/raw/v0.34.0/docker-compose.yml
	docker run --rm -v ${PWD}:${PWD} -w ${PWD} mikefarah/yq:3 yq delete -i docker-compose.yml 'services[*].ports'
Running make docker-compose.yml works as expected, downloading and modifying the targeted remote docker-compose.yml file.
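For context, the yq v3 step just strips every services.*.ports block from the downloaded file in place. Roughly, on a throwaway file (example-compose.yml here is only an illustration, not the real download):

cat > example-compose.yml <<'EOF'
services:
  web:
    image: nginx
    ports:
      - "8080:80"
EOF
# same yq v3 invocation as in the Makefile, pointed at the throwaway file
docker run --rm -v ${PWD}:${PWD} -w ${PWD} mikefarah/yq:3 yq delete -i example-compose.yml 'services[*].ports'
cat example-compose.yml   # the ports mapping is gone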
However, if I configure a GitLab CI job to run this command:
deploy:
  image: docker:20.10
  services:
    - docker:20.10-dind
  before_script:
    - apk add make wget
    - make docker-compose.yml
  script: docker-compose up -d
I have the following error:
$ make docker-compose.yml
wget https://gitlab.com/dependabot-gitlab/dependabot/-/raw/v0.34.0/docker-compose.yml
make: wget: Operation not permitted
make: *** [Makefile:5: docker-compose.yml] Error 127
But copy-pasting the contents of the make docker-compose.yml target directly into the CI job script like so:
deploy:
  # ...
  before_script:
    - apk add wget
    # Copy of make docker-compose.yml
    # For "reasons", using the make command ends in a "wget: Operation not permitted" error.
    - wget https://gitlab.com/dependabot-gitlab/dependabot/-/raw/v0.34.0/docker-compose.yml
    - docker run --rm -v ${PWD}:${PWD} -w ${PWD} mikefarah/yq:3 yq delete -i docker-compose.yml 'services[*].ports'
  # ...
Why do I not get the same behavior when using make in the CI job, and how can I solve this issue to avoid duplicating the logic?
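For what it's worth, here is the kind of debugging I can add to the job to narrow it down (just a sketch, assuming the docker:20.10 image is Alpine-based; none of this is a confirmed fix):

# hedged debugging sketch, not a known solution
which wget                   # path of the binary apk installed (vs. any busybox applet)
wget --help 2>&1 | head -n 1 # confirm it is the same wget that works when called directly
make SHELL=/bin/sh docker-compose.yml   # force make to use the same shell the CI script itself uses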

Related

error during connect: lookup thedockerhost on *: no such host

I'm new to building Docker images in GitLab CI and keep getting an "error during connect" error.
I set up my Docker image in GitLab to be built and pushed to AWS.
Dockerfile
FROM python:3-alpine
RUN apk add --update git bash curl unzip zip openssl make
ENV TERRAFORM_VERSION="0.12.28"
RUN curl https://releases.hashicorp.com/terraform/${TERRAFORM_VERSION}/terraform_${TERRAFORM_VERSION}_linux_amd64.zip > terraform_${TERRAFORM_VERSION}_linux_amd64.zip && \
    unzip terraform_${TERRAFORM_VERSION}_linux_amd64.zip -d /bin && \
    rm -f terraform_${TERRAFORM_VERSION}_linux_amd64.zip
RUN pip install awscli boto3
ENTRYPOINT ["terraform"]
.gitlab-ci.yml
variables:
  DOCKER_REGISTRY: *.dkr.ecr.eu-west-2.amazonaws.com
  AWS_DEFAULT_REGION: eu-west-2
  APP_NAME: mytestbuild
  DOCKER_HOST: tcp://thedockerhost:2375/
#publish script
publish:
  image:
    name: amazon/aws-cli:latest
    entrypoint: [""]
  services:
    - docker:dind
  before_script:
    - amazon-linux-extras install docker
    - aws --version
    - docker --version
  script:
    - docker build -t $DOCKER_REGISTRY/$APP_NAME:$CI_PIPELINE_IID .
    - aws ecr get-login-password | docker login --username AWS --password-stdin $DOCKER_REGISTRY
    - docker push $DOCKER_REGISTRY/$APP_NAME:$CI_PIPELINE_IID
When I push the file up to GitLab and the script begins to run it fails and presents this error code
error during connect: Post
"http://thedockerhost:2375/v1.24/build?buildargs=%7B%7D&cachefrom=%5B%5D&cgroupparent=&cpuperiod=0&cpuquota=0&cpusetcpus=&cpusetmems=&cpushares=0&dockerfile=Dockerfile&labels=%7B%7D&memory=0&memswap=0&networkmode=default&rm=1&shmsize=0&t=854124157125.dkr.ecr.eu-west-2.amazonaws.com%2Fmytestbuild%3A20&target=&ulimits=null&version=1":
dial tcp: lookup thedockerhost on 172.20.0.10:53: no such host
I've tried a few things to sort it out; most of the suggestions relate to using the docker:latest image, but I also found that using amazon/aws-cli should work. None of what I have seen has worked, and I'd appreciate the help.
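One thing I still want to rule out (an assumption on my side, not a verified fix): with the docker:dind service, the daemon is normally reachable at the hostname docker, while my DOCKER_HOST points at thedockerhost, which nothing in the pipeline creates. Something like this in before_script should show whether that is the problem:

# assumption to verify: the dind service answers at "docker", not "thedockerhost";
# the cleaner change would be DOCKER_HOST: tcp://docker:2375/ in the variables: block
export DOCKER_HOST=tcp://docker:2375/
docker info   # should reach the daemon instead of failing DNS on "thedockerhost"
# if the daemon then refuses plain TCP, DOCKER_TLS_CERTDIR: "" may also be needed in variables: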

docker-compose producing "No Such File or Directory" when files exist in container

I have a simple Dockerfile
FROM python:3.8-slim-buster
RUN apt-get update && apt-get install
RUN apt-get install -y \
    curl \
    gcc \
    make \
    python3-psycopg2 \
    postgresql-client \
    libpq-dev
RUN mkdir -p /var/www/myapp
WORKDIR /var/www/myapp
COPY . /var/www/myapp
RUN chmod 700 ./scripts/*.sh
And an associated docker-compose file
version: "3"
volumes:
postgresdata:
services:
myapp:
image: ralston3/myapp_api:prod-latest
tty: true
command: /bin/bash -c "/var/www/myapp/scripts/myscript.sh && echo 'hello world'"
ports:
- 8000:8000
volumes:
- .:/var/www/myapp
environment:
SOME_ENV_VARS=SOME_VARIABLE
# ... more here
depends_on:
- redis
- postgresql
# ... other docker services defined below
When I run docker-compose up via:
docker-compose up -f /path/to/docker-compose.yml up
My myapp container/service fails with myapp_myapp_1 exited with code 127, along with another error mentioning myapp_1 | /bin/sh: 1: /var/www/myapp/scripts/myscript.sh: not found
Further, if I exec into the myapp container via docker exec -it {CONTAINER_ID} /bin/bash I can clearly see that all of my files are there. I can literally run the /var/www/myapp/scripts/myscript.sh and it works fine.
However, there seems to be some issue with docker-compose (which could totally be my mistake). I'm just confused as to how I can exec into the container and clearly see the files there, yet docker-compose exits with 127 saying "No such file or directory".
You are bind mounting the current directory into /var/www/myapp, so it may be that your local directory is hiding/overwriting the container directory. Try removing the volumes declaration for your myapp service; if that works, then you know the bind mount is causing the issue.
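A rough way to check that (assuming the service name myapp and the image tag from your compose file): compare what the service sees with the bind mount against what the image alone contains.

# with the bind mount from docker-compose.yml: shows whatever is in the host directory
docker-compose run --rm myapp ls -l /var/www/myapp/scripts/
# image contents only, no bind mount: shows what COPY put into the image
docker run --rm ralston3/myapp_api:prod-latest ls -l /var/www/myapp/scripts/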
Unrelated to your question, but a problem you will also encounter: you're installing Python a second time, above and beyond the version pre-installed in the python Docker image.
Either switch to debian:buster as the base image, or don't bother installing anything with apt-get and instead just pip install your dependencies like psycopg.
See https://pythonspeed.com/articles/official-python-docker-image/ for an explanation of why you don't need to do this.
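A rough sketch of the slimmer route (assuming psycopg2 is the only compiled dependency; adjust to your real requirements): the pre-built wheel needs no gcc or libpq-dev at build time.

# on python:3.8-slim-buster the binary wheel bundles libpq, so no compiler is needed
pip install psycopg2-binary
python -c "import psycopg2; print(psycopg2.__version__)"   # quick sanity check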
In my case there were two stages: builder and runner.
I was building an executable in the builder stage and running that executable with an Alpine image in the runner stage.
My mistake was that I didn't use the Alpine variant for the builder. For example, I used golang:1.20, but when I switched to golang:1.20-alpine the problem went away.
Make sure you use the correct version and tag!
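A rough way to see the mismatch (assuming a Go module in the current directory): a binary built on the glibc-based image that comes out dynamically linked cannot find its loader on Alpine (musl), which shows up as exactly this kind of "not found" / exit 127.

# build with the glibc-based image, then inspect the result
docker run --rm -v "$PWD":/src -w /src golang:1.20 go build -o app .
# "not a dynamic executable" would run anywhere; a list of glibc .so files means an
# alpine runtime stage (without compatibility libs) will report "not found"
docker run --rm -v "$PWD":/src -w /src golang:1.20 ldd /src/app || true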

$GOPATH/go.mod exists but should not when building docker container, but works if I manually run commands

I'm building a golang:1.14.2 docker container with go-redis from a Dockerfile.
FROM golang:1.14.2
# project setup and install go-redis
RUN mkdir -p /go/delivery && cd /go/delivery && \
    go mod init example.com/delivery && \
    go get github.com/go-redis/redis/v7
# important to copy to /go/delivery
COPY ./src /go/delivery
RUN ls -la /go/delivery
RUN go install example.com/delivery
ENTRYPOINT ["delivery"]
However, when I try to build the container using docker-compose up --build -d, I get this error: $GOPATH/go.mod exists but should not
ERROR: Service 'delivery' failed to build: The command '/bin/sh -c go get github.com/go-redis/redis/v7' returned a non-zero code: 1.
However, I can create a container using the image from the Dockerfile with docker container run -it --rm golang:1.14.2, run the exact same commands as in the Dockerfile, and delivery does what I expect it to.
Here is deliver.go:
package main

import (
	"fmt"
	"github.com/go-redis/redis/v7"
)

func main() {
	// redis client created here...
	fmt.Println("inside main...")
}
What am I doing wrong? I looked up this error message and none of the solutions I've seen worked for me.
EDIT: Here is the compose file:
version: '3.4'
services:
  ...
  delivery:
    build: ./delivery
    environment:
      - REDIS_PORT=${REDIS_PORT}
      - REDIS_PASS=${REDIS_PASS}
      - QUEUE_NAME-${QUEUE_NAME}
    volumes:
      - ./logs:/logs
I had the same problem. You need to set WORKDIR /go/delivery.
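A rough way to confirm it helped (assuming the build context ./delivery from the compose file and a local tag delivery): with WORKDIR /go/delivery in place, every later RUN, and go install, starts inside the module directory instead of the GOPATH root /go, which is exactly where a stray go.mod produces "$GOPATH/go.mod exists but should not".

docker build -t delivery ./delivery
# override the ENTRYPOINT just to poke around: working dir, module file, GOPATH
docker run --rm --entrypoint sh delivery -c 'pwd && ls go.mod && go env GOPATH'
# expected: /go/delivery, go.mod listed, GOPATH=/go (so go.mod is not at $GOPATH itself)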

Installing NPM during build fails Docker build

I'm trying to get the GitLab CI runner to build my project off the Docker image and install an NPM package during the build. My .gitlab-ci.yml file was inspired by this topic, Gitlab CI with Docker and NPM, where the OP was dealing with an identical problem:
image: docker:stable
services:
  - docker:dind
stages:
  - build
cache:
  paths:
    - node_modules/
before_script:
  - export REACT_APP_USERS_SERVICE_URL=http://127.0.0.1
compile:
  image: node:8
  stage: build
  script:
    - apk add --no-cache py-pip python-dev libffi-dev openssl-dev gcc libc-dev make
    - pip install docker-compose
    - docker-compose up -d
    - docker-compose exec -T users python manage.py recreate_db
    - docker-compose exec -T users python manage.py seed_db
    - npm install
    - bash test.sh
  after_script:
    - docker-compose down
Sadly, that solution didn't work, but I feel like I'm a little bit closer to the actual solution now. I'm getting two errors during the build:
/bin/bash: line 89: apk: command not found
Running after script...
$ docker-compose down
/bin/bash: line 88: docker-compose: command not found
How can I troubleshoot this?
Edit:
image: docker:stable
services:
  - docker:dind
stages:
  - build
  - test
before_script:
  - export REACT_APP_USERS_SERVICE_URL=http://127.0.0.1
compile:
  stage: build
  script:
    - apk add --no-cache py-pip python-dev libffi-dev openssl-dev gcc libc-dev make
    - pip install docker-compose
    - docker-compose up -d
    - docker-compose exec -T users python manage.py recreate_db
    - docker-compose exec -T users python manage.py seed_db
testing:
  image: node:alpine
  stage: test
  script:
    - npm install
    - bash test.sh
  after_script:
    - docker-compose down
I moved the tests into a separate testing stage, which I should've done anyway, and defined the image there to separate it from the build stage. No change: docker-compose can't be found and bash also can't be run:
$ bash test.sh
/bin/sh: eval: line 87: bash: not found
Running after script...
$ docker-compose down
/bin/sh: eval: line 84: docker-compose: not found
image: node:8 is not based on Alpine, so as a result you got the error
apk: command not found
From the node image documentation on the node:<version> variants:
These are the suite code names for releases of Debian and indicate which release the image is based on. If your image needs to install any additional packages beyond what comes with the image, you'll likely want to specify one of these explicitly to minimize breakage when there are new releases of Debian.
Just replace the image with
node:alpine
and it should work.
The second error is because docker-compose is not installed.
You can check this answer for more details about docker-compose.
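If you would rather stay on the Debian-based node:8 image instead of switching, the rough equivalent of that apk line would be apt-get (package names below are the usual Debian ones, not verified against this exact project):

# Debian images use apt-get, not apk
apt-get update && apt-get install -y python-pip python-dev libffi-dev libssl-dev gcc make
pip install docker-compose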

How to compose docker-compose.yml so I can access a daemon's container from PHP?

I need help with Docker.
Let's say I have a docker-compose.yml, version 3, with Nginx+PHP. How do I add the image vitr/casperjs so I can call it from PHP like
exec('casperjs --version', $output);
?
Any help is appreciated.
UPDATED:
It looks like the correct answer would be: it is impossible.
You need to put PHP and CasperJS (and PhantomJS as well) into the same container to get them to work together. It would be nice if someone could prove me wrong and show a better way to do it. Here is something like a working example:
FROM nanoninja/php-fpm
ENV PHANTOMJS_VERSION=phantomjs-2.1.1-linux-x86_64
ENV PHANTOMJS_DIR=/app/phantomjs
RUN apt-get update -y
RUN apt-get install -y apt-utils libfreetype6-dev libfontconfig1-dev wget bzip2
RUN wget --no-check-certificate https://bitbucket.org/ariya/phantomjs/downloads/${PHANTOMJS_VERSION}.tar.bz2
RUN tar xvf ${PHANTOMJS_VERSION}.tar.bz2
RUN mv ${PHANTOMJS_VERSION}/bin/phantomjs /usr/local/bin/
RUN rm -rf phantom*
RUN mkdir -p ${PHANTOMJS_DIR}
RUN echo '"use strict"; \n\
console.log("Hello, world!"); + \n\
console.log("using PhantomJS version " + \n\
phantom.version.major + "." + \n\
phantom.version.minor + "." + \n\
phantom.version.patch); \n\
phantom.exit();' \
> ${PHANTOMJS_DIR}/script.js
RUN apt-get update -y && apt-get install -y \
    git \
    python \
    && rm -rf /var/lib/apt/lists/*
RUN git clone https://github.com/n1k0/casperjs.git
RUN mv casperjs /opt/
RUN ln -sf /opt/casperjs/bin/casperjs /usr/local/bin/casperjs
Q: How to compose docker-compose.yml so I can access a daemon's container from PHP?
A: You could share Docker's Unix domain socket to access the daemon's containers.
Something like the following:
docker-compose.yml:
version: '3'
services:
  app:
    image: ubuntu:16.04
    privileged: true
    volumes:
      - /usr/bin/docker:/usr/bin/docker
      - /var/run/docker.sock:/var/run/docker.sock
      - /usr/lib/x86_64-linux-gnu/libltdl.so.7:/usr/lib/x86_64-linux-gnu/libltdl.so.7
    command: docker run --rm vitr/casperjs casperjs --version
test:
# docker-compose up
WARNING: Found orphan containers (abc_plop_1) for this project. If you removed or renamed this service in your compose file, you can run this command with the --remove-orphans flag to clean it up.
Recreating abc_app_1 ... done
Attaching to abc_app_1
app_1 | 1.1.4
abc_app_1 exited with code 0
You can see 1.1.4 was printed by executing the command docker run --rm vitr/casperjs casperjs --version in the app container.
This is just an example; you can call docker run --rm vitr/casperjs casperjs --version in your own PHP container instead of ubuntu:16.04, still use exec in your PHP code, and get the output.
Updated: (2018/11/05)
First, I think some concepts need to be aligned:
-d: this means starting a container in detached mode, not as a daemon. In Docker, when we talk about the daemon, it means the Docker daemon, which accepts connections from the Docker CLI; see here.
--rm: this just deletes the temporary container after it is used; you can also leave it out.
Difference between using -d and not using -d:
With -d: the container runs in detached mode, which means that even though the container keeps running, the CLI command docker run exits at once and shows you a container id; you will not see any log output, like this:
# docker run -d vitr/casperjs casperjs --version
d8dc585bc9e3cc577cab15ff665b98d798d95bc369c876d6da31210f625b81e0
Without -d: the CLI command will not exit until the command in the container finishes, so you can see the output of the command, like this:
# docker run vitr/casperjs casperjs --version
1.1.4
So, since your requirement is to get the output of casperjs, you surely have to run without -d, I think.
If you accept the above concepts, then you can go on to see a workable example:
folder structure:
abc
├── docker-compose.yml
└── index.php
docker-compose.yml:
version: '3'
services:
  phpfpm:
    container_name: phpfpm
    image: nanoninja/php-fpm
    entrypoint: php index.php
    privileged: true
    volumes:
      - .:/var/www/html
      - /usr/bin/docker:/usr/bin/docker
      - /var/run/docker.sock:/var/run/docker.sock
      - /usr/lib/x86_64-linux-gnu/libltdl.so.7:/usr/lib/x86_64-linux-gnu/libltdl.so.7
index.php:
<?php
exec('docker run vitr/casperjs casperjs --version', $output);
print_r($output);
test:
~/abc# docker-compose up
Starting phpfpm ... done
Attaching to phpfpm
phpfpm | Array
phpfpm | (
phpfpm | [0] => 1.1.4
phpfpm | )
phpfpm exited with code 0
You can see 1.1.4 was printed through PHP; note that privileged and the volumes entries are things that had to be set.
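One last rough sanity check (same compose file assumed): from inside the phpfpm service, the mounted socket lets the container drive the host's Docker daemon directly, which is also why privileged plus the socket mount effectively gives this container root-level control of the host.

# override the entrypoint just for the check; this talks to the host daemon via the mounted socket
docker-compose run --rm --entrypoint sh phpfpm -c 'ls -l /var/run/docker.sock && docker version --format {{.Server.Version}}'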
