Airflow: pull docker image from private Google Container Registry

I am using the https://github.com/puckel/docker-airflow image to run Airflow. I had to add pip install docker in order for it to support DockerOperator.
Everything seems ok, but I can't figure out how to pull an image from a private google docker container repository.
I tried adding a connection of type Google Cloud in the Admin section and running the DockerOperator as:
t2 = DockerOperator(
    task_id='docker_command',
    image='eu.gcr.io/project/image',
    api_version='2.3',
    auto_remove=True,
    command="/bin/sleep 30",
    docker_url="unix://var/run/docker.sock",
    network_mode="bridge",
    docker_conn_id="google_con"
)
But I always get an error...
[2019-11-05 14:12:51,162] {{taskinstance.py:1047}} ERROR - No Docker
registry URL provided
I also tried the dockercfg_path option:
t2 = DockerOperator(
    task_id='docker_command',
    image='eu.gcr.io/project/image',
    api_version='2.3',
    auto_remove=True,
    command="/bin/sleep 30",
    docker_url="unix://var/run/docker.sock",
    network_mode="bridge",
    dockercfg_path="/usr/local/airflow/config.json",
)
I get the following error:
[2019-11-06 13:59:40,522] {{docker_operator.py:194}} INFO - Starting
docker container from image
eu.gcr.io/project/image
[2019-11-06 13:59:40,524] {{taskinstance.py:1047}} ERROR -
('Connection aborted.', FileNotFoundError(2, 'No such file or
directory'))
I also tried using only dockercfg_path="config.json" and got the same error.
I can't really use the BashOperator to do a docker login either, as the container does not recognize the docker command:
t3 = BashOperator(
    task_id='print_hello',
    bash_command='docker login -u _json_key -p /usr/local/airflow/config.json eu.gcr.io'
)
line 1: docker: command not found
What am I missing?

airflow.hooks.docker_hook.DockerHook uses the docker_default connection when one isn't configured.
In your first attempt, you set google_con as docker_conn_id, and the error thrown shows that the host (i.e. the registry name) isn't configured.
Here are a couple of changes to make:
The image argument passed to DockerOperator should be set to the image name/tag without the registry name prefixing it:
DockerOperator(
    api_version='1.21',
    # docker_url='tcp://localhost:2375',  # Set your docker URL
    command='/bin/ls',
    image='image',
    network_mode='bridge',
    task_id='docker_op_tester',
    docker_conn_id='google_con',
    dag=dag,
    # added this to map to host path in MacOS
    host_tmp_dir='/tmp',
    tmp_dir='/tmp',
)
Provide the registry name, username, and password in your google_con connection for the underlying DockerHook to authenticate to the registry.
You can obtain long-lived credentials for authentication from a service account key. For the username, use _json_key, and in the password field paste the contents of the JSON key file.
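If you prefer to script this rather than use the Admin UI, the connection can also be created from the CLI. This is only a sketch, assuming an Airflow 1.10-style CLI; the key file path is a placeholder:
airflow connections --add \
    --conn_id google_con \
    --conn_type docker \
    --conn_host eu.gcr.io/project \
    --conn_login _json_key \
    --conn_password "$(cat /path/to/service-account-key.json)"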
Here are logs from running my task:
[2019-11-16 20:20:46,874] {base_task_runner.py:110} INFO - Job 443: Subtask docker_op_tester [2019-11-16 20:20:46,874] {dagbag.py:88} INFO - Filling up the DagBag from /Users/r7/OSS/airflow/airflow/example_dags/example_docker_operator.py
[2019-11-16 20:20:47,054] {base_task_runner.py:110} INFO - Job 443: Subtask docker_op_tester [2019-11-16 20:20:47,054] {cli.py:592} INFO - Running <TaskInstance: docker_sample.docker_op_tester 2019-11-14T00:00:00+00:00 [running]> on host 1.0.0.127.in-addr.arpa
[2019-11-16 20:20:47,074] {logging_mixin.py:89} INFO - [2019-11-16 20:20:47,074] {local_task_job.py:120} WARNING - Time since last heartbeat(0.01 s) < heartrate(5.0 s), sleeping for 4.989537 s
[2019-11-16 20:20:47,088] {logging_mixin.py:89} INFO - [2019-11-16 20:20:47,088] {base_hook.py:89} INFO - Using connection to: id: google_con. Host: gcr.io/<redacted-project-id>, Port: None, Schema: , Login: _json_key, Password: XXXXXXXX, extra: {}
[2019-11-16 20:20:48,404] {docker_operator.py:209} INFO - Starting docker container from image alpine
[2019-11-16 20:20:52,066] {logging_mixin.py:89} INFO - [2019-11-16 20:20:52,066] {local_task_job.py:99} INFO - Task exited with return code 0

I know the question is about GCR, but it's worth noting that other container registries may expect the config in a different format.
For example, GitLab expects you to pass the fully qualified image name to the DAG and only put the GitLab container registry host name in the connection:
DockerOperator(
    task_id='docker_command',
    image='registry.gitlab.com/group/project/image:tag',
    api_version='auto',
    docker_conn_id='gitlab_registry',
)
Then set up your gitlab_registry connection like:
docker://gitlab+deploy-token-1234:ABDCtoken1234@registry.gitlab.com
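If you manage connections from the CLI, the same URI can be used directly; a sketch assuming an Airflow 1.10-style CLI, with the deploy token name and value as placeholders:
airflow connections --add \
    --conn_id gitlab_registry \
    --conn_uri 'docker://gitlab+deploy-token-1234:ABDCtoken1234@registry.gitlab.com'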

Based on recent Cloud Composer documentation, it's recommended to use KubernetesPodOperator instead, like this:
from airflow.contrib.operators.kubernetes_pod_operator import KubernetesPodOperator
KubernetesPodOperator(
    task_id='docker_op_tester',
    name='docker_op_tester',
    dag=dag,
    namespace="default",
    image="eu.gcr.io/project/image",
    cmds=["ls"]
)
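Note that here the image is pulled by the Kubernetes node, not by Airflow, so the cluster needs pull access to the registry. On GKE, nodes in the same project can usually pull from that project's GCR out of the box. Otherwise, one option (an assumption on my part, not something covered above) is to create an image pull secret from a service account key and reference it via the operator's image_pull_secrets argument:
kubectl create secret docker-registry gcr-pull-secret \
    --docker-server=eu.gcr.io \
    --docker-username=_json_key \
    --docker-password="$(cat /path/to/service-account-key.json)" \
    --namespace=default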

Further to @Tamlyn's answer, we can also skip the creation of the connection (docker_conn_id) in Airflow and use it with GitLab as follows:
On your development machine:
https://gitlab.com/yourgroup/yourproject/-/settings/repository (create a token here and get the details for logging in)
docker login registry.gitlab.com (log in to the registry from the machine that will build and push the image; enter your GitLab credentials when prompted)
docker build -t registry.gitlab.com/yourgroup/yourproject . && docker push registry.gitlab.com/yourgroup/yourproject (builds and pushes the image to your project repo's container registry)
On your Airflow machine:
https://gitlab.com/yourgroup/yourproject/-/settings/repository (you can use the token created above for logging in)
docker login registry.gitlab.com (log in to the registry from the machine that will pull the image; this skips the need for creating a Docker registry connection in Airflow. Enter your GitLab credentials when prompted; this generates the ~/.docker/config.json which is required, as per the Docker docs. See the example below.)
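For reference, the generated file looks roughly like this; the auth value is a placeholder for your base64-encoded username:token pair:
cat ~/.docker/config.json
{
    "auths": {
        "registry.gitlab.com": {
            "auth": "<base64 of username:token>"
        }
    }
}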
In your DAG:
dag = DAG(
    "dag_id",
    default_args=default_args,
    schedule_interval="15 1 * * *"
)

docker_trigger = DockerOperator(
    task_id="task_id",
    api_version="auto",
    network_mode="bridge",
    image="registry.gitlab.com/yourgroup/yourproject",
    auto_remove=True,  # use if required
    force_pull=True,  # use if required
    xcom_all=True,  # use if required
    # tty=True,  # turning this on screws up the log rendering
    # command="",  # use if required
    environment={  # use if required
        "envvar1": "envvar1value",
        "envvar2": "envvar2value",
    },
    dag=dag,
)
This works on Ubuntu 20.04.2 LTS (tried and tested) with Airflow installed on the instance.

You will need to install the Cloud SDK on your workstation, which includes the gcloud command-line tool.
After installing the Cloud SDK and Docker version 18.03 or newer, according to their documentation, you pull from Container Registry with the command:
docker pull [HOSTNAME]/[PROJECT-ID]/[IMAGE]:[TAG]
or
docker pull [HOSTNAME]/[PROJECT-ID]/[IMAGE]@[IMAGE_DIGEST]
where:
[HOSTNAME] is listed under Location in the console. It's one of four
options: gcr.io, us.gcr.io, eu.gcr.io, or asia.gcr.io.
[PROJECT-ID] is your Google Cloud Platform Console project ID.
[IMAGE] is the image's name in Container Registry.
[TAG] is the tag applied to the image. In a registry, tags are unique
to an image.
[IMAGE_DIGEST] is the sha256 hash value of the image contents. In the
console, click on the specific image to see its metadata. The digest
is listed as the Image digest.
To get the pull command for a specific image:
Click on the name of an image to go to the specific registry.
In the registry, check the box next to the version of the image that
you want to pull.
Click SHOW PULL COMMAND on the top of the page.
Copy the pull command, which identifies the image using either the tag or the digest.
Also check that you have push and pull permissions for the registry.
Make sure you have configured Docker to use gcloud as a credential helper, or are using another authentication method. To use gcloud as the credential helper, run the command:
gcloud auth configure-docker
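After configuring the credential helper, a concrete pull looks like this; the project and image names here are hypothetical:
docker pull eu.gcr.io/my-project/my-image:latest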

Related

How to make Jenkins use a private key and passphrase to run an Ansible playbook

I am using Jenkins to run some Ansible playbooks. One of the simple tests I did was to have the playbook cat the fstab file on a remote server.
The playbook looks like this:
---
- hosts: "test-1-server"
  tasks:
    - name: display /etc/fstab
      shell: cat /etc/fstab
      register: fstab_reg
    - debug: msg="{{ fstab_reg.stdout }}"
In Jenkins, I have a freestyle project; it uses Invoke Ansible Playbook to call the above playbook, and the project credentials were set up with a different user: ansible-user. This is different from the default user-jenkins that runs Jenkins. User ansible-user can ssh to all my servers. I have ansible-user set up in Jenkins Credentials with its private key and passphrase. But when I run the project, I get an error:
[update_fstab] $ /usr/bin/ansible-playbook google/ansible/test-scripts/test/sub_book.yml -i /etc/ansible/hosts -f 5 --private-key /tmp/ssh14117407503194058572.key -u ansible-user
[WARNING]: Invalid characters were found in group names but not replaced, use
-vvvv to see details
fatal: [test-1-server]: UNREACHABLE! => {"changed": false, "msg": "Failed to connect to the host via ssh: ansible-user@test-1-server: Permission denied (publickey).", "unreachable": true}
I am not quite sure what exactly the error is saying, as I have set up the private key and passphrase in ansible-user's credentials. What do the group names in the message mean? Because this is done through Jenkins, I am not sure how to pass -vvvv as it suggested.
How can I make Jenkins pass the private key and passphrase to the Ansible playbook?
Thanks!
I think I have found the "issue". After I switched to a user other than ansible-user, the playbook worked. The interesting thing is that when I created the key pair for ansible-user, I used "-m PEM", so it should be good for Jenkins.
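For reference, this is how a PEM-format key pair can be generated; a sketch, with the output path as a placeholder:
ssh-keygen -t rsa -b 4096 -m PEM -f ~/.ssh/ansible-user_key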

Could not resolve host when trying to access service in gitlab

I'm trying to access a gitlab-service from a container started with docker run, but it doesn't seem to work.
They actually have a nice section on gitlab about this: https://docs.gitlab.com/ee/ci/services/#using-services-with-docker-run-docker-in-docker-side-by-side
However, even after a 1:1 copy of their code:
access-service:
  stage: build
  image: docker:19.03.1
  before_script:
    - echo "Overriding default before_script"
  services:
    - docker:dind  # necessary for docker run
    - tutum/wordpress:latest
  variables:
    FF_NETWORK_PER_BUILD: "true"  # activate container-to-container networking
  script: |
    docker run --rm --name curl \
      --volume "$(pwd)":"$(pwd)" \
      --workdir "$(pwd)" \
      --network=host \
      curlimages/curl:7.74.0 curl "http://tutum-wordpress"
I get an error:
Running with gitlab-runner 14.3.4 (77516d85)
on gitlab-aws-autoscaler 7ee750d2
feature flags: FF_NETWORK_PER_BUILD:true, FF_DISABLE_UMASK_FOR_DOCKER_EXECUTOR:true
Preparing the "docker+machine" executor 02:34
Using Docker executor with image docker:19.03.1 ...
WARNING: Container based cache volumes creation is disabled. Will not create volume for "/cache"
Starting service docker:dind ...
Authenticating with credentials from $DOCKER_AUTH_CONFIG
Pulling docker image docker:dind ...
Using docker image sha256:1a42336ff683d7dadd320ea6fe9d93a5b101474346302d23f96c9b4546cb414d for docker:dind with digest docker@sha256:6f2ae4a5fd85ccf85cdd829057a34ace894d25d544e5e4d9f2e7109297fedf8d ...
Starting service tutum/wordpress:latest ...
Authenticating with credentials from $DOCKER_AUTH_CONFIG
Pulling docker image tutum/wordpress:latest ...
Using docker image sha256:7e7f97a602ff0c3a30afaaac1e681c72003b4c8a76f8a90696f03e785bf36b90 for tutum/wordpress:latest with digest tutum/wordpress@sha256:2aa05fd3e8543b615fc07a628da066b48e6bf41cceeeb8f4b81e189de6eeda77 ...
Waiting for services to be up and running...
*** WARNING: Service runner-7ee750d2-project-2-concurrent-0-483783518ce3e922-docker-0 probably didn't start properly.
Health check error:
service "runner-7ee750d2-project-2-concurrent-0-483783518ce3e922-docker-0-wait-for-service" timeout
Health check container logs:
Service container logs:
2022-02-22T20:44:10.523612305Z Generating RSA private key, 4096 bit long modulus (2 primes)
2022-02-22T20:44:11.037778878Z ...................................................................................++++
2022-02-22T20:44:11.319540033Z ..................................++++
2022-02-22T20:44:11.320611978Z e is 65537 (0x010001)
2022-02-22T20:44:11.341349948Z Generating RSA private key, 4096 bit long modulus (2 primes)
2022-02-22T20:44:11.360835661Z .++++
2022-02-22T20:44:11.678902603Z ...................................................++++
2022-02-22T20:44:11.679451336Z e is 65537 (0x010001)
2022-02-22T20:44:11.719133216Z Signature ok
2022-02-22T20:44:11.719148571Z subject=CN = docker:dind server
2022-02-22T20:44:11.719151811Z Getting CA Private Key
2022-02-22T20:44:11.734914635Z /certs/server/cert.pem: OK
2022-02-22T20:44:11.738748856Z Generating RSA private key, 4096 bit long modulus (2 primes)
2022-02-22T20:44:11.993700065Z .........................................++++
2022-02-22T20:44:12.036121070Z .....++++
2022-02-22T20:44:12.036364885Z e is 65537 (0x010001)
2022-02-22T20:44:12.067743203Z Signature ok
2022-02-22T20:44:12.067755273Z subject=CN = docker:dind client
2022-02-22T20:44:12.067758449Z Getting CA Private Key
2022-02-22T20:44:12.081823033Z /certs/client/cert.pem: OK
2022-02-22T20:44:12.174949567Z time="2022-02-22T20:44:12.174783104Z" level=info msg="Starting up"
2022-02-22T20:44:12.177055953Z time="2022-02-22T20:44:12.176931675Z" level=warning msg="could not change group /var/run/docker.sock to docker: group docker not found"
2022-02-22T20:44:12.177086275Z failed to load listeners: can't create unix socket /var/run/docker.sock: device or resource busy
*********
Authenticating with credentials from $DOCKER_AUTH_CONFIG
Pulling docker image docker:19.03.1 ...
Using docker image sha256:0cecfefe921f22fc898f7a0055358380c8870ab6f05b01999367911714fe9d00 for docker:19.03.1 with digest docker@sha256:2dcf87c9893b05ab815880e3d223cd6976c388a6f6697de10e90523255259ca4 ...
Not using umask - FF_DISABLE_UMASK_FOR_DOCKER_EXECUTOR is set!
...
$ docker run --rm --name curl \ # collapsed multi-line command
Unable to find image 'curlimages/curl:7.74.0' locally
7.74.0: Pulling from curlimages/curl
aad63a933944: Pulling fs layer
...
3d4876cbff99: Pull complete
110e7f874674: Pull complete
Digest: sha256:a3e534fced74aeea171c4b59082f265d66914d09a71062739e5c871ed108a46e
Status: Downloaded newer image for curlimages/curl:7.74.0
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0curl: (6) Could not resolve host: tutum-wordpress
Can anyone give me a pointer as to why this is not working? Does this have to do with the fact that the executor is docker+machine and not docker?
Here's our config.toml:
[[runners]]
  name = "gitlab-aws-autoscaler"
  url = "https://code.example.com"
  token = "${TOKEN}"
  executor = "docker+machine"
  limit = ${LIMIT_MEDIUM_RUNNERS}
  [runners.docker]
    image = "example/gitlabrunner:2.10"
    privileged = true
    disable_cache = true
    volumes = ["/var/run/docker.sock:/var/run/docker.sock", "/cache", "/builds:/builds"]
    wait_for_services_timeout = 120
  [runners.cache]
    Type = "s3"
    ServerAddress = "s3.amazonaws.com"
    AccessKey = "${KEY}"
    SecretKey = "${SECRET}"
    BucketName = "example-gitlab-runner-cache-virginia"
    BucketLocation = "us-east-1"
    Shared = true
  [runners.machine]
    IdleCount = 0
    IdleTime = 1800
    MaxBuilds = 100
    MachineDriver = "amazonec2"
    MachineName = "gitlab-docker-machine-%s"
    MachineOptions = [
      "amazonec2-instance-type=t2.medium",
      "amazonec2-access-key=${KEY}",
      "amazonec2-secret-key=${SECRET}",
      "amazonec2-root-size=100", # GB
      "amazonec2-region=us-east-1",
      "amazonec2-tags=runner-manager-name,gitlab-aws-autoscaler,gitlab,true,gitlab-runner-autoscale,true",
      "amazonec2-security-group=EC2-X-ci-runner",
      "amazonec2-vpc-id=vpc-XXX",
      "amazonec2-subnet-id=subnet-XXX",
      "amazonec2-zone=b",
      "amazonec2-use-private-address=true",
      "amazonec2-private-address-only=true"
    ]
Edit:
When trying to set the DOCKER_HOST variable as suggested in one answer, I get the following errors:
Running with gitlab-runner 14.3.4 (77516d85)
on gitlab-aws-autoscaler 7ee750d2
feature flags: FF_NETWORK_PER_BUILD:true, FF_DISABLE_UMASK_FOR_DOCKER_EXECUTOR:true
Preparing the "docker+machine" executor 02:42
Using Docker executor with image docker:19.03.1 ...
WARNING: Container based cache volumes creation is disabled. Will not create volume for "/cache"
Starting service docker:dind ...
Authenticating with credentials from $DOCKER_AUTH_CONFIG
Pulling docker image docker:dind ...
Using docker image sha256:1a42336ff683d7dadd320ea6fe9d93a5b101474346302d23f96c9b4546cb414d for docker:dind with digest docker@sha256:6f2ae4a5fd85ccf85cdd829057a34ace894d25d544e5e4d9f2e7109297fedf8d ...
Starting service tutum/wordpress:latest ...
Authenticating with credentials from $DOCKER_AUTH_CONFIG
Pulling docker image tutum/wordpress:latest ...
Using docker image sha256:7e7f97a602ff0c3a30afaaac1e681c72003b4c8a76f8a90696f03e785bf36b90 for tutum/wordpress:latest with digest tutum/wordpress@sha256:2aa05fd3e8543b615fc07a628da066b48e6bf41cceeeb8f4b81e189de6eeda77 ...
Waiting for services to be up and running...
*** WARNING: Service runner-7ee750d2-project-2-concurrent-0-a0ec4dc562ad3891-docker-0 probably didn't start properly.
Health check error:
service "runner-7ee750d2-project-2-concurrent-0-a0ec4dc562ad3891-docker-0-wait-for-service" timeout
Health check container logs:
Service container logs:
2022-02-24T16:21:42.803216350Z time="2022-02-24T16:21:42.803077740Z" level=info msg="Starting up"
2022-02-24T16:21:42.804161387Z time="2022-02-24T16:21:42.804107933Z" level=warning msg="could not change group /var/run/docker.sock to docker: group docker not found"
2022-02-24T16:21:42.804233443Z failed to load listeners: can't create unix socket /var/run/docker.sock: device or resource busy
*********
Authenticating with credentials from $DOCKER_AUTH_CONFIG
Pulling docker image docker:19.03.1 ...
Using docker image sha256:0cecfefe921f22fc898f7a0055358380c8870ab6f05b01999367911714fe9d00 for docker:19.03.1 with digest docker#sha256:2dcf87c9893b05ab815880e3d223cd6976c388a6f6697de10e90523255259ca4 ...
The issue here is that your job is not utilizing the docker:dind service. While you have your job configured mostly correctly, your docker GitLab runner defines the following volumes configuration:
volumes = ["/var/run/docker.sock:/var/run/docker.sock", "/cache", "/builds:/builds"]
When bind-mounting /var/run/docker.sock, and not providing the DOCKER_HOST environment variable, your jobs will default to using the bind-mounted docker socket and connect to the daemon on the "metal" host directly, instead of connecting to the docker:dind container, which is required for this services: setup to work correctly.
You can run docker info in your job to confirm this.
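For example, something like this in the job's script makes the difference visible; a sketch, and the exact output will vary:
docker info --format '{{.Name}}'  # prints the hostname of the daemon the client is actually talking to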
You should be able to fix this by setting the DOCKER_HOST environment variable in your job (normally, this is set for you when using gitlab.com runners, which is why it is omitted in their documentation).
access-service:
  variables:
    DOCKER_HOST: "tcp://docker:2375"
    DOCKER_TLS_CERTDIR: ""
  # ...
Note: DOCKER_TLS_CERTDIR is also unset here to disable TLS to ensure port 2375 is used. Using TLS is an available option and should be considered more secure.

GitLab CI for composer package

I set up a dev server in my home office and installed GitLab via docker-compose. So far everything works fine: I can log in, push commits, and so on.
Now I wanted to set up a CI pipeline to build composer packages when new tags are pushed. So I clicked the CI/CD button and added the .gitlab-ci.yml file from the composer template. But the pipeline was only pending, so I figured I might need to register a runner first.
I installed gitlab-runner (via apt) on the same machine that runs GitLab via Docker and registered the runner with the domain and key given by GitLab (on the add runners page). I selected docker as the executor, gave it a name, and left everything else at its default value.
The runner is registered properly in GitLab and the CI pipeline now runs, but it always fails.
The only output I have is:
Running with gitlab-runner 11.2.0 (11.2.0)
on **************
Using Docker executor with image curlimages/curl:latest ...
Pulling docker image gitlab-runner-helper:11.2.0 ...
The contents of the gitlab-ci file are:
# This file is a template, and might need editing before it works on your project.
# Publishes a tag/branch to Composer Packages of the current project
publish:
  image: curlimages/curl:latest
  stage: build
  variables:
    URL: "$CI_SERVER_PROTOCOL://$CI_SERVER_HOST:$CI_SERVER_PORT/api/v4/projects/$CI_PROJECT_ID/packages/composer?job_token=$CI_JOB_TOKEN"
  script:
    - version=$([[ -z "$CI_COMMIT_TAG" ]] && echo "branch=$CI_COMMIT_REF_NAME" || echo "tag=$CI_COMMIT_TAG")
    - insecure=$([ "$CI_SERVER_PROTOCOL" = "http" ] && echo "--insecure" || echo "")
    - response=$(curl -s -w "\n%{http_code}" $insecure --data $version $URL)
    - code=$(echo "$response" | tail -n 1)
    - body=$(echo "$response" | head -n 1)
    # Output state information
    - if [ $code -eq 201 ]; then
        echo "Package created - Code $code - $body";
      else
        echo "Could not create package - Code $code - $body";
        exit 1;
      fi
Because I did not make any changes to the template file, I suspect the gitlab-runner setup needs some configuration in order to work, maybe a group assignment or something like that.
When running systemctl status gitlab-runner I can see:
Failed to create container volume for /builds/{group} Error response from daemon: pull access denied for gitlab-runner-helper, repository does not exist or may require 'docker login': denied: requested access to the resource is denied (executor_docker.go:166:3s)" job=15 project=34 runner=******
So I went to the runners section in GitLab and enabled the runner for the specific project. That avoided the error above, but the pipeline still breaks.
The output in GitLab is still the same, but the gitlab-runner log is different:
Not continuing with pull after error: errors:\ndenied: requested access to the resource is denied\nunauthorized: authentication required\n
Sadly, I am not getting any further from here.
Every time I press the retry button for the pipeline, I get the following syslog entries:
Checking for jobs... received" job=19 repo_url="correct-url-for-repo" runner=******
This message appears twice
Not continuing with pull after error: errors:\ndenied: requested access to the resource is denied\nunauthorized: authentication required\n
Ignoring extra error returned from registry: unauthorized: authentication required
Failed to create container volume for /builds/{group} Error response from daemon: pull access denied for gitlab-runner-helper, repository does not exist or may require 'docker login': denied: requested access to the resource is denied (executor_docker.go:166:3s)" job=19 project=34 runner=******
Job failed: Error response from daemon: pull access denied for gitlab-runner-helper, repository does not exist or may require 'docker login': denied: requested access to the resource is denied (executor_docker.go:166:3s)" job=19 project=34 runner=******
Both messages appear twice
So either the gitlab-runner is not allowed to pull Docker images, or it is not allowed to access my GitLab project, but I can't figure out the problem.
When running gitlab-runner restart as root, I see the following "error":
ERRO[0000] Docker executor: prebuilt image helpers will be loaded from /var/lib/gitlab-runner.
Can someone please help me? :)
Select the correct helper image for the runner, depending on where you are executing it and probably also on your GitLab version. Also, try pulling it manually before executing the pipeline:
docker pull gitlab/gitlab-runner-helper:x86_64-latest
To use the selected image, modify the runner's config file:
[[runners]]
  (...)
  executor = "docker"
  [runners.docker]
    (...)
    helper_image = "gitlab/gitlab-runner-helper:tag"
The images gitlab-runner-helper and gitlab/gitlab-runner-helper:11.2.0 do not exist. It seems the Debian package installable in Ubuntu is broken somehow, so I figured I might need to install the latest gitlab-runner version.
Here is what I did (I am on Ubuntu 20.04):
curl -L "https://packages.gitlab.com/install/repositories/runner/gitlab-runner/script.deb.sh" | sudo bash
cat <<EOF | sudo tee /etc/apt/preferences.d/pin-gitlab-runner.pref
Explanation: Prefer GitLab provided packages over the Debian native ones
Package: gitlab-runner
Pin: origin packages.gitlab.com
Pin-Priority: 1001
EOF
Source
So I was able to update gitlab-runner to the latest version.
But still no success: now the service won't start without any error message; systemctl only told me that the process exited.
The syslog told me:
chdir /var/lib/gitlab-runner: no such file or directory
Opening /etc/init.d/gitlab-runner showed me that path as the --working-directory parameter for the service.
So I created that directory and changed its ownership to gitlab-runner (see the commands below).
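A sketch of those commands, assuming the service runs as the gitlab-runner user:
sudo mkdir -p /var/lib/gitlab-runner
sudo chown gitlab-runner:gitlab-runner /var/lib/gitlab-runner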
This time I could run the CI pipeline!
But I still got an error:
fatal: unable to access 'http://{mylocaldomain}/isat/typo3-gdpr.git/': Could not resolve host: {mylocaldomain}
Okay, DNS resolution is not possible because I use a local domain.
As stated here, you can pass extra hosts to the Docker executor.
To do so, simply adjust the /etc/gitlab-runner/config.toml file and add the extra_hosts option:
concurrent = 1
check_interval = 0

[[runners]]
  name = "lab"
  url = "http://{localDomain}/"
  token = "******"
  executor = "docker"
  [runners.docker]
    tls_verify = false
    image = "ruby:2.1"
    privileged = false
    disable_cache = false
    volumes = ["/cache"]
    shm_size = 0
    extra_hosts = ["{localDomain}:{ip}"]
  [runners.cache]
Now I could successfully run the CI pipeline and my package is listed in the composer registry!

Error checking seal status while connecting to Vault Docker

I'm trying to run the Vault Docker image in server mode as described here. This is the command I'm using to run Vault:
docker run --cap-add=IPC_LOCK -e 'VAULT_LOCAL_CONFIG={"backend": {"file": {"path": "/home/jwahba/PycharmProjects/work/vault/vault.json"}}, "default_lease_ttl": "168h", "max_lease_ttl": "720h"}' vault server
And this is the vault.json configuration file:
storage "inmem" {}
listener "tcp" {
address = "127.0.0.1:8200"
tls_disable = 1
}
disable_mlock = true
The container comes up successfully.
docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
55100205d2ab vault "docker-entrypoint..." 6 minutes ago Up 6 minutes 8200/tcp stoic_blackwell
However, when I try to execute:
docker exec stoic_blackwell vault status
I get the below error:
Error checking seal status: Get https://127.0.0.1:8200/v1/sys/seal-status: dial tcp 127.0.0.1:8200: connect: connection refused
There is a similar question here but unfortunately I couldn't figure out what I misconfigured.
Any suggestions please?
The VAULT_LOCAL_CONFIG parameter specifies the configuration of your Vault; with the {"backend": {"file": ... stanza you set a file backend as the storage backend.
So, in VAULT_LOCAL_CONFIG you should directly include what you wrote in your configuration file (vault.json).
Side note: the configuration file that you wrote is in HCL, not JSON.
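In other words, something along these lines; a sketch, where the JSON simply mirrors the HCL file above:
docker run --cap-add=IPC_LOCK \
    -e 'VAULT_LOCAL_CONFIG={"storage": {"inmem": {}}, "listener": {"tcp": {"address": "127.0.0.1:8200", "tls_disable": 1}}, "disable_mlock": true}' \
    vault server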
Please try it with the below command:
vault status -tls-skip-verify
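Alternatively, since the listener has tls_disable = 1, pointing the CLI at HTTP instead of the default HTTPS address should also work; a sketch, reusing the container name from above:
docker exec -e VAULT_ADDR=http://127.0.0.1:8200 stoic_blackwell vault status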

Installing Zookeeper on offline OpenShift

I have an OpenShift Origin cluster running offline on 3 CentOS 7 VMs. It's working fine; I have a registry where I push my images like this:
docker login -u <username> -e <any_email_address> -p <token_value> <registry_ip>:<port>
Login is successful, then:
oc tag <image-id> <docker-registry-IP>:<port>/<project-name>/<image>
So, for nginx for example:
oc tag 49011ce3b713 172.30.222.111:5000/test/nginx
Then I push it to the internal registry:
docker push 172.30.222.111:5000/test/nginx
And finally:
oc new-app nginx --name="nginx"
With nginx, everything works fine. Now my problem:
I want to put Zookeeper on it, so I follow the same steps as above. I also install "jboss/base-jdk:7", which is a dependency of Zookeeper. The problem is:
docker push 172.30.222.111:5000/test/jboss/base-jdk:7
Giving:
[root@master 994089]# docker push 172.30.222.111:5000/test/jboss/base-jdk:7
The push refers to a repository [172.30.222.111:5000/test/jboss/base-jdk]
c4c6a9114a05: Layer already exists
3bf2c105669b: Layer already exists
85c6e373d858: Layer already exists
dc1e2dcdc7b6: Layer already exists
Received unexpected HTTP status: 500 Internal Server Error
The problem seems to be the "/" in jboss/base-jdk:7.
I also tried to push it like this:
docker push 172.30.222.111:5000/test/base-jdk:7
This works, but Zookeeper is looking for exactly "jboss/base-jdk:7", not just "base-jdk:7".
Finally, I'm blocked here when trying this command: oc new-app zookeeper --name="zookeeper" --loglevel=8 --insecure-registry --allow-missing-images
I0628 14:31:54.009713 53407 dockerimagelookup.go:92] checking local Docker daemon for "jboss/base-jdk:7"
I0628 14:31:54.030546 53407 dockerimagelookup.go:380] partial match on "172.30.222.111:5000/test/base-jdk:7" with 0.375000
I0628 14:31:54.030571 53407 dockerimagelookup.go:346] exact match on "jboss/base-jdk:7"
I0628 14:31:54.030578 53407 dockerimagelookup.go:107] Found local docker image match "172.30.222.111:5000/test/base-jdk:7" with score 0.375000
I0628 14:31:54.030589 53407 dockerimagelookup.go:107] Found local docker image match "jboss/base-jdk:7" with score 0.000000
I0628 14:31:54.032799 53407 componentresolvers.go:59] Error from resolver: [can't look up Docker image "jboss/base-jdk:7": Internal error occurred: Get http://registry-1.docker.io/v2/: dial tcp: lookup registry-1.docker.io on 10.253.158.90:53: no such host]
I0628 14:31:54.032831 53407 dockerimagelookup.go:169] Added missing image match for jboss/base-jdk:7
F0628 14:31:54.032882 53407 helpers.go:110] error: can't look up Docker image "jboss/base-jdk:7": Internal error occurred: Get http://registry-1.docker.io/v2/: dial tcp: lookup registry-1.docker.io on 10.253.158.90:53: no such host
We can see that 172.30.222.111:5000/test/base-jdk:7 is found, but it's not exactly what the command is looking for, so it doesn't use it...
So, if you have any idea how to solve this! :)
Resolved by upgrading to OpenShift 1.5.1; the previous version was 1.3.1.
