GitLab CI script needs entry in /etc/hosts - docker

I have a GitLab CI docker runner to execute my automated tests when I push. One of my tests requires a custom entry in /etc/hosts. I can't figure out how to get the entry into that file.
Here's basically what my .gitlab-ci.yml file looks like:
before_script:
  - cat /etc/hosts # for debugging
  - ... # install app dependencies

specs:
  script:
    - rspec # <- a test in here fails without the /etc/hosts entry
All my tests pass, except for the one that requires that /etc/hosts entry.
Let's say I'm trying to have the hostname myhost.local resolve to the IPv4 address XX.XX.XX.XX...
I tried using extra_hosts in the runner config, but it didn't seem to have any effect (I got the idea from here):
/etc/gitlab-runner/config.toml:
concurrent = 1
check_interval = 0

[[runners]]
  name = "shell"
  url = "https://mygitlabinstance.com/"
  token = "THETOKEN"
  executor = "shell"
  [runners.cache]

[[runners]]
  name = "docker-ruby-2.5"
  url = "https://mygitlabinstance.com/"
  token = "THETOKEN"
  executor = "docker"
  [runners.docker]
    tls_verify = false
    image = "ruby:2.5"
    privileged = false
    disable_cache = false
    volumes = ["/cache"]
    shm_size = 0
    extra_hosts = ["myhost.local:XX.XX.XX.XX"]
  [runners.cache]
But the test still failed, and the cat /etc/hosts output shows that the file is unchanged:
# Your system has configured 'manage_etc_hosts' as True.
# As a result, if you wish for changes to this file to persist
# then you will need to either
# a.) make changes to the master file in /etc/cloud/templates/hosts.tmpl
# b.) change or remove the value of 'manage_etc_hosts' in
# /etc/cloud/cloud.cfg or cloud-config from user-data
#
127.0.1.1 ip-172-31-2-54.ec2.internal ip-172-31-2-54
127.0.0.1 localhost
# The following lines are desirable for IPv6 capable hosts
::1 ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
ff02::3 ip6-allhosts
I figured I could just add the entry myself in a before_script line, but I don't seem to be able to execute anything with root privileges in the container:
before_script:
  - echo 'XX.XX.XX.XX myhost.local' >> /etc/hosts
  ...
But that just fails because the gitlab-runner user doesn't have permission to write to that file. I tried to use sudo, but gitlab-runner can't do that either (echo 'XX.XX.XX.XX myhost.local' | sudo tee --non-interactive --append /etc/hosts fails with sudo: a password is required).
So in summary, how can I get my container to have the host entry I need (or how can I execute a before_script command as root)?

The following statement is incorrect:
"But that just fails because the gitlab-runner user doesn't have permissions to write to that file."
gitlab-runner is not the user executing your before_script; it is the user that runs the container in which your job is executed.
You are using the ruby:2.5 Docker image as far as I can tell, and that image does not contain any USER reference in its own Dockerfile or those of its parent images.
Try adding a whoami command right before your echo 'XX.XX.XX.XX myhost.local' >> /etc/hosts command to verify you are root.
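For example, a minimal sketch of that check (the echo line is the one from the question):
before_script:
  - whoami   # under the docker executor with ruby:2.5 this should print "root"
  - echo 'XX.XX.XX.XX myhost.local' >> /etc/hosts
  - cat /etc/hosts   # confirm the entry landed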
Update
If gitlab-runner is shown as the result of whoami, the docker executor is not being used; instead, a shell executor has picked up the job.

On your GitLab CI runner, you can add a setting to the config.toml so you achieve this without touching /etc/hosts:
[runners.docker]
  # ... other settings ...
  extra_hosts = ["myhost.local:xx.xx.xx.xx"]
You can read more about the extra_hosts configuration here: https://docs.gitlab.com/runner/configuration/advanced-configuration.html#the-runnersdocker-section
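If you want to verify the entry from inside a job, a quick check before the tests could look like this (a sketch; getent ships with the Debian-based ruby:2.5 image):
specs:
  script:
    - getent hosts myhost.local   # should print XX.XX.XX.XX once extra_hosts takes effect
    - rspec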

Related

GitLab CI for composer package

I set up a dev server in my home office and installed GitLab via docker-compose. So far everything works fine: I can log in, push commits, and so on.
Now I wanted to set up a CI pipeline to build composer packages when new tags are pushed. So I clicked the CI/CD button and added the .gitlab-ci.yml file from the composer template. But the pipeline was only pending, so I figured I might need to register a runner first.
I installed gitlab-runner (via apt) on the same machine that runs GitLab via docker and registered the runner with the domain and key given by GitLab (on the "add runners" page). I selected docker as the executor, gave it a name, and left everything else at its default value.
The runner is registered properly in GitLab and the CI pipeline is now running, but it always fails.
The only output I have is:
Running with gitlab-runner 11.2.0 (11.2.0)
on **************
Using Docker executor with image curlimages/curl:latest ...
Pulling docker image gitlab-runner-helper:11.2.0 ...
The contents of the gitlab-ci file are:
# This file is a template, and might need editing before it works on your project.
# Publishes a tag/branch to Composer Packages of the current project
publish:
  image: curlimages/curl:latest
  stage: build
  variables:
    URL: "$CI_SERVER_PROTOCOL://$CI_SERVER_HOST:$CI_SERVER_PORT/api/v4/projects/$CI_PROJECT_ID/packages/composer?job_token=$CI_JOB_TOKEN"
  script:
    - version=$([[ -z "$CI_COMMIT_TAG" ]] && echo "branch=$CI_COMMIT_REF_NAME" || echo "tag=$CI_COMMIT_TAG")
    - insecure=$([ "$CI_SERVER_PROTOCOL" = "http" ] && echo "--insecure" || echo "")
    - response=$(curl -s -w "\n%{http_code}" $insecure --data $version $URL)
    - code=$(echo "$response" | tail -n 1)
    - body=$(echo "$response" | head -n 1)
    # Output state information
    - if [ $code -eq 201 ]; then
        echo "Package created - Code $code - $body";
      else
        echo "Could not create package - Code $code - $body";
        exit 1;
      fi
Because I did not make any changes to the template file, I suspect the gitlab-runner setup needs some configuration in order to work, maybe a group assignment or something like that.
When running systemctl status gitlab-runner I can see:
Failed to create container volume for /builds/{group} Error response from daemon: pull access denied for gitlab-runner-helper, repository does not exist or may require 'docker login': denied: requested access to the resource is denied (executor_docker.go:166:3s)" job=15 project=34 runner=******
So I went to the runners section in GitLab and enabled the runner for the specific project. That avoided the error above, but the pipeline still breaks.
The output in GitLab is still the same, but the gitlab-runner log is different:
Not continuing with pull after error: errors:\ndenied: requested access to the resource is denied\nunauthorized: authentication required\n
Sadly, I am not getting any further from here.
Every time I press the retry button for the pipeline I get the following syslog entries:
Checking for jobs... received" job=19 repo_url="correct-url-for-repo" runner=******
This message appears twice
Not continuing with pull after error: errors:\ndenied: requested access to the resource is denied\nunauthorized: authentication required\n
Ignoring extra error returned from registry: unauthorized: authentication required
Failed to create container volume for /builds/{group} Error response from daemon: pull access denied for gitlab-runner-helper, repository does not exist or may require 'docker login': denied: requested access to the resource is denied (executor_docker.go:166:3s)" job=19 project=34 runner=******
Job failed: Error response from daemon: pull access denied for gitlab-runner-helper, repository does not exist or may require 'docker login': denied: requested access to the resource is denied (executor_docker.go:166:3s)" job=19 project=34 runner=******
Both messages appear twice
So either the gitlab-runner is not allowed to pull docker images, or it is not allowed to access my GitLab project, but I can't figure out the problem.
When running gitlab-runner restart as root I see the following "error":
ERRO[0000] Docker executor: prebuilt image helpers will be loaded from /var/lib/gitlab-runner.
Can someone please help me :) ?
Select the correct Docker image for the runner. Which one you need depends on where you are executing it, and probably also on your GitLab version. Also, try pulling it manually before executing the pipeline:
docker pull gitlab/gitlab-runner-helper:x86_64-latest
To use the selected image, modify the runner's config file:
[[runners]]
  (...)
  executor = "docker"
  [runners.docker]
    (...)
    helper_image = "gitlab/gitlab-runner-helper:tag"
The images gitlab-runner-helper and gitlab/gitlab-runner-helper:11.2.0 do not exist. It seems the Debian package installable on Ubuntu is broken somehow... So I figured I might need to install the latest gitlab-runner version.
Here is what I did (I am on Ubuntu 20.04):
curl -L "https://packages.gitlab.com/install/repositories/runner/gitlab-runner/script.deb.sh" | sudo bash
cat <<EOF | sudo tee /etc/apt/preferences.d/pin-gitlab-runner.pref
Explanation: Prefer GitLab provided packages over the Debian native ones
Package: gitlab-runner
Pin: origin packages.gitlab.com
Pin-Priority: 1001
EOF
Source
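The upgrade itself is then the usual apt flow (a sketch of the standard commands):
sudo apt-get update
sudo apt-get install gitlab-runner
gitlab-runner --version   # confirm the new version is installed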
So I was able to update gitlab-runner to the latest version.
But still no success: now the service won't start, with no error message; systemctl only tells me that the process exited.
The syslog told me:
chdir /var/lib/gitlab-runner: no such file or directory
Opening /etc/init.d/gitlab-runner showed me that path as the --working-directory parameter for the service.
So I created that directory and changed its ownership to gitlab-runner.
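Roughly these commands, assuming the default service user gitlab-runner:
sudo mkdir -p /var/lib/gitlab-runner
sudo chown gitlab-runner:gitlab-runner /var/lib/gitlab-runner
sudo systemctl restart gitlab-runner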
This time I could run the CI pipeline!
Still, I got an error:
fatal: unable to access 'http://{mylocaldomain}/isat/typo3-gdpr.git/': Could not resolve host: {mylocaldomain}
Okay, DNS resolution is not possible because I use a local domain.
As stated here, you can pass extra_hosts to the docker executor.
To do so, simply adjust the /etc/gitlab-runner/config.toml file and add the extra_hosts option:
concurrent = 1
check_interval = 0

[[runners]]
  name = "lab"
  url = "http://{localDomain}/"
  token = "******"
  executor = "docker"
  [runners.docker]
    tls_verify = false
    image = "ruby:2.1"
    privileged = false
    disable_cache = false
    volumes = ["/cache"]
    shm_size = 0
    extra_hosts = ["{localDomain}:{ip}"]
  [runners.cache]
Now I could successfully run the CI pipeline, and my package is listed in the composer registry!

GitLab with Docker runner on localhost: how to expose host to container?

I'm learning to use GitLab CI.
Just now I'm using GitLab on localhost (external_url "http://localhost"), and I've registered a Docker runner with the vanilla ubuntu:20.04 image and tried to run a test job on it.
Alas, it tries to clone my repo from the localhost repository inside the container, but cannot do it, because my localhost's port 80 is not visible from the container.
Running with gitlab-runner 13.5.0 (ece86343)
on docker0 x8pHJPn7
Preparing the "docker" executor
Using Docker executor with image ubuntu:20.04 ...
Pulling docker image ubuntu:20.04 ...
Using docker image sha256:d70eaf7277eada08fca944de400e7e4dd97b1262c06ed2b1011500caa4decaf1 for ubuntu:20.04 with digest ubuntu@sha256:fff16eea1a8ae92867721d90c59a75652ea66d29c05294e6e2f898704bdb8cf1 ...
Preparing environment
Running on runner-x8phjpn7-project-6-concurrent-0 via gigant...
Getting source from Git repository
Fetching changes with git depth set to 50...
Reinitialized existing Git repository in /builds/root/ci_fuss/.git/
fatal: unable to access 'http://localhost:80/root/ci_fuss.git/': Failed to connect to localhost port 80: Connection refused
Uploading artifacts for failed job
Uploading artifacts...
WARNING: report.xml: no matching files
ERROR: No files to upload
Cleaning up file based variables
ERROR: Job failed: exit code 1
How can I get my Docker runner to expose the host's localhost:80 as the container's localhost:80?
Well, I have solved this.
I added network_mode = "host" to my runner configuration in /etc/gitlab-runner/config.toml to make the container use the host's network connections.
I also added pull_policy = "if-not-present" so the runner first searches for the container image locally, then in the remote repo.
[[runners]]
  name = "docker0"
  url = "http://localhost/"
  token = "TTBRFis_W_yJJpN1LLzV"
  executor = "docker"
  [runners.custom_build_dir]
  [runners.cache]
    [runners.cache.s3]
    [runners.cache.gcs]
    [runners.cache.azure]
  [runners.docker]
    tls_verify = false
    image = "exposed_ctr:latest"
    privileged = false
    disable_entrypoint_overwrite = false
    oom_kill_disable = false
    disable_cache = false
    volumes = ["/cache"]
    shm_size = 0
    network_mode = "host"
    pull_policy = "if-not-present"
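Since network_mode = "host" makes the job container share the host's network namespace, a throwaway job can confirm that the GitLab instance on the host is reachable (a sketch; the job name and the curl install step are illustrative, as ubuntu:20.04 ships without curl):
connectivity_check:
  image: ubuntu:20.04
  script:
    - apt-get update && apt-get install -y curl
    - curl -sSf http://localhost:80/ >/dev/null && echo "host port 80 is reachable"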

Installing gerrit plugin in docker container

When running the gerritcodereview/gerrit docker container, Gerrit is installed within the /var/gerrit directory in the container. But when I try to install plugins by docker cp'ing a plugin .jar file, downloaded from https://gerrit-ci.gerritforge.com/job/plugin-its-jira-bazel-stable-2.16/, into the /var/gerrit/plugins directory, the plugins do not show up in the list of installed plugins, even though I restarted the container.
I ran gerrit with:
docker run -ti -p 8080:8080 -p 29418:29418 gerritcodereview/gerrit
And Gerrit is accessible via:
http://localhost:8080/admin/plugins
I also have a list of plugins in the plugin manager (http://localhost:8080/plugins/plugin-manager/static/index.html), but I don't know how to add more plugins to the list; I have tried to use the gerrit-ci.gerritforge.com URL in [httpd].
My gerrit.config file looks like this:
[gerrit]
    basePath = git
    serverId = 62b710a2-3947-4e96-a196-6b3fb1f6bc2c
    canonicalWebUrl = http://10033a3fe5b7
[database]
    type = h2
    database = db/ReviewDB
[index]
    type = LUCENE
[auth]
    type = DEVELOPMENT_BECOME_ANY_ACCOUNT
[sendemail]
    smtpServer = localhost
[sshd]
    listenAddress = *:29418
[httpd]
    listenUrl = http://*:8080/
    filterClass = com.googlesource.gerrit.plugins.ootb.FirstTimeRedirect
    firstTimeRedirectUrl = /login/%23%2F?account_id=1000000
[cache]
    directory = cache
[plugins]
    allowRemoteAdmin = true
[container]
    javaOptions = "-Dflogger.backend_factory=com.google.common.flogger.backend.log4j.Log4jBackendFactory#getInstance"
    javaOptions = "-Dflogger.logging_context=com.google.gerrit.server.logging.LoggingContext#getInstance"
    user = gerrit
    javaHome = /usr/lib/jvm/java-1.8.0-openjdk-1.8.0.212.b04-0.el7_6.x86_64/jre
    javaOptions = -Djava.security.egd=file:/dev/./urandom
[receive]
    enableSignedPush = false
[noteDb "changes"]
    autoMigrate = true
I am pretty sure that Gerrit runs from /var/gerrit, even for your version, as that is the version I used before.
Why don't you use docker-compose together with a custom Dockerfile? This way you can easily recreate your image and don't need to worry about adding plugins again after you upgrade your version.
I would suggest that you play around with these scripts and use them for your testing.
This is what my Dockerfile looks like for my previous 2.16 installation:
FROM gerritcodereview/gerrit:2.16.8
# Add custom plugins that are not downloaded from the web
COPY ./plugins/* /var/gerrit/plugins/
# Add logo
COPY ./static/* /var/gerrit/static/
ADD https://gerrit-ci.gerritforge.com/view/Plugins-stable-2.16/job/plugin-avatars-gravatar-bazel-master-stable-2.16/lastSuccessfulBuild/artifact/bazel-genfiles/plugins/avatars-gravatar/avatars-gravatar.jar /var/gerrit/plugins/
USER root
# Fix any permissions
RUN chown -R gerrit:gerrit /var/gerrit
USER gerrit
ENV CANONICAL_WEB_URL=https://gerrit.mycompoany.net/r/
And below is the docker-compose.yml:
version: '3.4'
services:
  gerrit:
    build: .
    ports:
      - "29418:29418"
      - "8080:8080"
    restart: unless-stopped
    volumes:
      - /external/gerrit2.16/etc:/var/gerrit/etc
      - /external/gerrit2.16/git:/var/gerrit/git
      - /external/gerrit2.16/index:/var/gerrit/index
      - /external/gerrit2.16/cache:/var/gerrit/cache
      - /external/gerrit2.16/logs:/var/gerrit/logs
      - /external/gerrit2.16/.ssh:/var/gerrit/.ssh
    # entrypoint: java -jar /var/gerrit/bin/gerrit.war init --install-all-plugins -d /var/gerrit
    # entrypoint: java -jar /var/gerrit/bin/gerrit.war reindex -d /var/gerrit
Finally I found a way that works for me in my use case.
Copy the content of your public key and insert it into the SSH settings of the web profile my_gerrit_admin_username.
Add the key to the ssh-agent:
eval `ssh-agent`
ssh-add .ssh/id_rsa
Then, from a terminal outside the container, run:
ssh -p 29418 my_gerrit_admin_username@localhost gerrit plugin install -n its-base.jar https://gerrit-ci.gerritforge.com/job/plugin-its-base-bazel-stable-2.16/lastSuccessfulBuild/artifact/bazel-bin/plugins/its-base/its-base.jar
Check in the web browser that the plugin is installed among the plugins.

Gitlab runner docker host setting

How should I enter the "host" value for the host param?
https://docs.gitlab.com/runner/configuration/advanced-configuration.html#the-runners-section
Thanks in advance!
I tried tcp://0.0.0.0:2375, 0.0.0.0:2375, 0.0.0.0, etc., and all result in errors.
[runners.docker]
  host = tcp://0.0.0.0:2375
  tls_verify = false
  image = "docker:latest"
The runner TOML config file should be accepted without any error.
You should leave it blank if you run the command on your PC.
The format is 'ip-address:port-number'.
You can set the value of host to 127.0.0.1:2375 (note that TOML requires the value to be a quoted string, which is also why the unquoted attempts above are rejected):
[runners.docker]
  host = "127.0.0.1:2375"
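Before pointing the runner at that address, it may help to confirm that the Docker daemon is actually listening there (a sketch; this only works if the daemon was started with a TCP listener on 2375):
docker -H tcp://127.0.0.1:2375 info   # should print daemon info, not a connection error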

Docker: Change word in file at container startup

I'm creating a docker image for our fluentd.
The image contains a file called http_forward.conf
It contains:
<store>
  type http
  endpoint_url ENDPOINTPLACEHOLDER
  http_method post # default: post
  serializer json # default: form
  rate_limit_msec 100 # default: 0 = no rate limiting
  raise_on_error true # default: true
  authentication none # default: none
  username xxx # default: ''
  password xxx # default: '', secret: true
</store>
So this is in our image. But we want to use the image for all our environments, specified with environment variables.
So we create an environment variable for our environment:
ISSUE_SERVICE_URL = http://xxx.dev.xxx.xx/api/fluentdIssue
This env variable contains dev on our dev environment, uat on uat, etc.
Then we want to replace ENDPOINTPLACEHOLDER with the value of our env variable. In bash we can use (with '|' as the sed delimiter, since the URL contains slashes):
sed -i 's|ENDPOINTPLACEHOLDER|'"$ISSUE_SERVICE_URL"'|g' .../http_forward.conf
But how/when do we have to execute this command if we want to use this in our docker container? (we don't want to mount this file)
We did that via Ansible.
Put the file http_forward.conf in place as a template, deploy the change depending on the environment, then mount the folder (including the conf file) into the docker container.
ISSUE_SERVICE_URL = http://xxx.{{ environment }}.xxx.xx/api/fluentdIssue
The playbook will be something like this (I haven't tested it):
- template: src=http_forward.conf.j2 dest=/config/http_forward.conf mode=0644
- docker:
    name: "fluentd"
    image: "xxx/fluentd"
    restart_policy: always
    volumes:
      - /config:/etc/fluent
In your Dockerfile you should have a line starting with CMD somewhere; you should add it there.
Or you can do it more cleanly: set the CMD line to call a script instead, for example CMD ./startup.sh. The file startup.sh will then contain your sed command followed by the command to start your fluentd (I assume that is currently the CMD).
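A minimal sketch of such a startup.sh, assuming the config lives at /fluentd/etc/http_forward.conf and fluentd is started with a main config at /fluentd/etc/fluent.conf (both paths are assumptions; adjust them to your image):
#!/bin/sh
# Substitute the placeholder with the environment-specific URL.
# '|' is used as the sed delimiter because $ISSUE_SERVICE_URL contains slashes.
sed -i "s|ENDPOINTPLACEHOLDER|$ISSUE_SERVICE_URL|g" /fluentd/etc/http_forward.conf
# Hand over PID 1 to fluentd (assumed start command).
exec fluentd -c /fluentd/etc/fluent.conf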
