GitLab CI/CD via Docker: can't access Nexus in another container

I'm using GitLab CI/CD to build some projects with a single runner (for now) on Docker (the runner itself is a Docker container, so I guess this is Docker-in-Docker).
My problem is that I can't use my own Nexus/npm registry while building:
npm install --registry=http://153.89.23.53:8082/repository/npm-all
npm ERR! code EHOSTUNREACH
npm ERR! errno EHOSTUNREACH
npm ERR! request to http://153.89.23.53:8082/repository/npm-all/typescript/-/typescript-3.6.5.tgz failed, reason: connect EHOSTUNREACH 153.89.23.53:8082
The same runner works perfectly on another server, but it doesn't work when running on the same server that hosts the Nexus (everything is container-based).
The GitLab runner is using the host network.
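For reference, a runner container attached to the host network is typically started roughly like this (a sketch, not taken from the question; the image tag, config path and Docker socket mount are assumptions):
# run the GitLab runner container on the host's network stack
docker run -d --name gitlab-runner --restart always \
  --network host \
  -v /srv/gitlab-runner/config:/etc/gitlab-runner \
  -v /var/run/docker.sock:/var/run/docker.sock \
  gitlab/gitlab-runner:latest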
If I connect to the runner and try to reach 153.89.23.53:8082 (Nexus), it works:
root@62591008a000:/# wget http://153.89.23.53:8082
--2020-07-13 09:56:16-- http://153.89.23.53:8082/
Connecting to 153.89.23.53:8082... connected.
HTTP request sent, awaiting response... 200 OK
Length: 7952 (7.8K) [text/html]
Saving to: 'index.html'
index.html 100%[===========================================================================================>] 7.77K --.-KB/s in 0s
2020-07-13 09:56:16 (742 MB/s) - 'index.html' saved [7952/7952]
So I guess the problem occurs in the "second Docker container", the one used inside the runner... but I have no idea what I should change.
Note: I could probably make the GitLab runner join the Nexus network and use internal IPs, but this would break the scripts if the runner is started on other servers...

OK, I found the solution.
There is a network_mode setting that can be set in the runner configuration. The default value is bridge, not host.
**config.toml**
[runners.docker]
  ...
  volumes = ["/cache"]
  network_mode = "host"
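With network_mode = "host", the job containers spawned by the runner share the host's network stack, so the Nexus address is reachable exactly as it is from the host. A rough way to check what such a job container will see (a sketch; busybox is only an example image):
# simulate a job container on the host network and probe Nexus
docker run --rm --network host busybox wget -qO- http://153.89.23.53:8082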

Related

Karate-Chrome Docker container in Azure DevOps failing to connect

I have seen many similar issues to this but none seem to resolve or describe my exact issue.
I have configured an Azure DevOps pipeline to use a container like below:
container:
  image: ptrthomas/karate-chrome
  options: --cap-add=SYS_ADMIN
I have uploaded the contents of the example from the jobserver demo to a repository and then run the following:
steps:
  - script: mvn clean test -DargLine='-Dkarate.env=docker' -Dtest=WebRunner
It is my understanding (and I can see from the logs) that the files are loaded into the container and the script command is being executed inside the container. So that script command is the equivalent of docker exec -it -w /src karate mvn clean test -DargLine='-Dkarate.env=docker' -Dtest=WebRunner just without having to exec into the container.
When I run the example locally it executes the tests with no issues, but in Azure DevOps it fails at the point where the tests actually start running, throwing this error:
14:16:37.388 [main] ERROR com.intuit.karate - karate.org.apache.http.conn.HttpHostConnectException: Connect to localhost:9222 [localhost/127.0.0.1] failed: Connection refused (Connection refused), http call failed after 2 milliseconds for url: http://localhost:9222/json
14:16:39.388 [main] DEBUG com.intuit.karate.shell.Command - attempt #4 waiting for http to be ready at: http://localhost:9222/json
14:16:39.391 [main] DEBUG com.intuit.karate - request: 5 > GET http://localhost:9222/json
5 > Host: localhost:9222
5 > Connection: Keep-Alive
5 > User-Agent: Apache-HttpClient/4.5.13 (Java/1.8.0_275)
5 > Accept-Encoding: gzip,deflate
Looking at other issues there have been suggestions to specify the driver in the feature files with this line:
* configure driver = { type: 'chrome', executable: 'chrome' }
but a) that hasn't worked for me and b) shouldn't the karate-chrome docker image render this configuration unnecessary as it should be no different than the container I run locally?
Any help appreciated!
Thanks
The only thing I can think of is that the Azure config does not call the ENTRYPOINT of the image.
Maybe you should try to create a container from scratch (that does extensive logging) and see what happens. Use the Karate one as a reference.
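One way to investigate is to check what the image normally runs at start-up and then reproduce that as an explicit pipeline step, since the container job keeps the container alive with its own command. This is a generic Docker command, not something specific to Karate:
# print the ENTRYPOINT and CMD baked into the karate-chrome image
docker inspect --format '{{json .Config.Entrypoint}} {{json .Config.Cmd}}' ptrthomas/karate-chrome
Whatever that prints (typically a supervisor/start script) can then be launched manually in a script step before the mvn test step, so that Chrome is actually listening on port 9222.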

How do I run containerized Cypress runner against containerized server?

I'm trying to run Cypress tests against containerized Nginx:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
7c3efd24e6e6 tdd_nginx "/docker-entrypoint.…" 19 minutes ago Up 19 minutes 0.0.0.0:80->80/tcp, :::80->80/tcp tdd_nginx_1
From the official docs I learned I can use docker run -it -v $PWD:/e2e -w /e2e -e CYPRESS_baseUrl=host.docker.internal cypress/included:7.7.0
There I learned about host.docker.internal, which is supposedly how Cypress, running in its own container, can reach the host's localhost.
The Nginx container exposes port 80, so I've tried -e CYPRESS_baseUrl=host.docker.internal:80 as well as without specifying the port, since port 80 is the default fallback in most cases.
error output:
Cypress could not verify that this server is running:
> http://host.docker.internal:80
We are verifying this server because it has been configured as your `baseUrl`.
Cypress automatically waits until your server is accessible before running tests.
We will try connecting to it 3 more times...
We will try connecting to it 2 more times...
We will try connecting to it 1 more time...
Cypress failed to verify that your server is running.
Please start this server and then run Cypress again.
Moving the env variable into cypress.json made no difference:
{
  "baseUrl": "host.docker.internal",
  "video": false
}
Changed the cypress.json to:
{
  "CYPRESS_BASE_URL": "host.docker.internal",
  "video": false
}
Passing CYPRESS_BASE_URL as an environment variable didn't help, but putting it into the file did the trick. Strangely, it makes a difference.
Thanks go to @jonrsharpe.
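An alternative that avoids host.docker.internal entirely (it is not available by default on Linux hosts) is to put both containers on the same user-defined network and address Nginx by its container name. A sketch, assuming the Nginx container is the tdd_nginx_1 shown above:
# create a shared network and attach the already-running nginx container
docker network create tdd_net
docker network connect tdd_net tdd_nginx_1
# run Cypress on the same network and point baseUrl at the nginx container by name
docker run -it --rm --network tdd_net \
  -v "$PWD":/e2e -w /e2e \
  -e CYPRESS_baseUrl=http://tdd_nginx_1 \
  cypress/included:7.7.0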

GitLab CI for a Composer package

I set up a dev server in my home office and installed GitLab via docker-compose. So far everything works fine: I can log in, push commits and so on.
Now I wanted to set up a CI pipeline to build Composer packages when new tags are pushed. So I clicked the CI/CD button and added the .gitlab-ci.yml file from the Composer template. But the pipeline was only pending, so I figured I might need to register a runner first.
I installed gitlab-runner (via apt) on the same machine that runs GitLab via Docker and registered the runner with the domain and key given by GitLab (on the "add runners" page). I selected docker as the executor, gave it a name and left everything else at its default value.
The runner is registered properly in GitLab and the CI pipeline is now running, but it always fails.
The only output I have is:
Running with gitlab-runner 11.2.0 (11.2.0)
on **************
Using Docker executor with image curlimages/curl:latest ...
Pulling docker image gitlab-runner-helper:11.2.0 ...
The contents of the .gitlab-ci.yml file are:
# This file is a template, and might need editing before it works on your project.
# Publishes a tag/branch to Composer Packages of the current project
publish:
  image: curlimages/curl:latest
  stage: build
  variables:
    URL: "$CI_SERVER_PROTOCOL://$CI_SERVER_HOST:$CI_SERVER_PORT/api/v4/projects/$CI_PROJECT_ID/packages/composer?job_token=$CI_JOB_TOKEN"
  script:
    - version=$([[ -z "$CI_COMMIT_TAG" ]] && echo "branch=$CI_COMMIT_REF_NAME" || echo "tag=$CI_COMMIT_TAG")
    - insecure=$([ "$CI_SERVER_PROTOCOL" = "http" ] && echo "--insecure" || echo "")
    - response=$(curl -s -w "\n%{http_code}" $insecure --data $version $URL)
    - code=$(echo "$response" | tail -n 1)
    - body=$(echo "$response" | head -n 1)
    # Output state information
    - if [ $code -eq 201 ]; then
        echo "Package created - Code $code - $body";
      else
        echo "Could not create package - Code $code - $body";
        exit 1;
      fi
Because I did not make any changes to the template file, I suspect the gitlab-runner setup needs some configuration in order to work, maybe a group assignment or something like that.
When running systemctl status gitlab-runner I can see:
Failed to create container volume for /builds/{group} Error response from daemon: pull access denied for gitlab-runner-helper, repository does not exist or may require 'docker login': denied: requested access to the resource is denied (executor_docker.go:166:3s)" job=15 project=34 runner=******
So I went to the runners section in GitLab and enabled the runner for the specific project. That way I could avoid the error above, but the pipeline still breaks.
The output in GitLab is still the same, but the gitlab-runner log is different:
Not continuing with pull after error: errors:\ndenied: requested access to the resource is denied\nunauthorized: authentication required\n
Sadly, I am not getting any further from here.
Every time I press the retry button for the pipeline I get the following syslog entries:
Checking for jobs... received" job=19 repo_url="correct-url-for-repo" runner=******
This message appears twice
Not continuing with pull after error: errors:\ndenied: requested access to the resource is denied\nunauthorized: authentication required\n
Ignoring extra error returned from registry: unauthorized: authentication required
Failed to create container volume for /builds/{group} Error response from daemon: pull access denied for gitlab-runner-helper, repository does not exist or may require 'docker login': denied: requested access to the resource is denied (executor_docker.go:166:3s)" job=19 project=34 runner=******
Job failed: Error response from daemon: pull access denied for gitlab-runner-helper, repository does not exist or may require 'docker login': denied: requested access to the resource is denied (executor_docker.go:166:3s)" job=19 project=34 runner=******
Both messages appear twice
So either the gitlab-runner is not allowed to pull Docker images or it is not allowed to access my GitLab project, but I can't figure out the problem.
When running gitlab-runner restart as root I see the following "error":
ERRO[0000] Docker executor: prebuilt image helpers will be loaded from /var/lib/gitlab-runner.
Can someone please help me? :)
Select the correct helper Docker image for the runner; it depends on where you are executing it, and probably also on your GitLab version. Also, try pulling it manually before executing the pipeline:
docker pull gitlab/gitlab-runner-helper:x86_64-latest
To use the selected image, modify the runner's config file:
[[runners]]
  (...)
  executor = "docker"
  [runners.docker]
    (...)
    helper_image = "gitlab/gitlab-runner-helper:tag"
The images gitlab-runner-helper and gitlab/gitlab-runner-helper:11.2.0 do not exist. It seems the Debian package installable on Ubuntu is broken somehow... so I figured I might need to install the latest gitlab-runner version.
Here is what I did (I am on Ubuntu 20.04):
curl -L "https://packages.gitlab.com/install/repositories/runner/gitlab-runner/script.deb.sh" | sudo bash
cat <<EOF | sudo tee /etc/apt/preferences.d/pin-gitlab-runner.pref
Explanation: Prefer GitLab provided packages over the Debian native ones
Package: gitlab-runner
Pin: origin packages.gitlab.com
Pin-Priority: 1001
EOF
So I was able to update gitlab-runner to the latest version.
But still no success: now the service won't start, without any error message; systemctl only tells me that the process exited.
The syslog told me:
chdir /var/lib/gitlab-runner: no such file or directory
Opening /etc/init.d/gitlab-runner showed me that path as the --working-directory parameter for the service.
So I created that directory and changed its ownership to gitlab-runner.
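Roughly, those two steps look like this (a sketch; gitlab-runner is the service user created by the package):
# create the working directory the service expects and hand it to the runner user
sudo mkdir -p /var/lib/gitlab-runner
sudo chown gitlab-runner:gitlab-runner /var/lib/gitlab-runner
sudo systemctl restart gitlab-runner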
This time I could run the CI pipeline!
Still, I got an error:
fatal: unable to access 'http://{mylocaldomain}/isat/typo3-gdpr.git/': Could not resolve host: {mylocaldomain}
Okay, DNS resolution is not possible because I use a local domain.
As stated here, you can pass an extra_host to the Docker executor.
To do so, simply adjust the /etc/gitlab-runner/config.toml file and add the extra_hosts option:
concurrent = 1
check_interval = 0

[[runners]]
  name = "lab"
  url = "http://{localDomain}/"
  token = "******"
  executor = "docker"
  [runners.docker]
    tls_verify = false
    image = "ruby:2.1"
    privileged = false
    disable_cache = false
    volumes = ["/cache"]
    shm_size = 0
    extra_hosts = ["{localDomain}:{ip}"]
  [runners.cache]
Now I could successfully run the CI pipeline, and my package is listed in the Composer registry!
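As a possible follow-up: to consume the published package from another project, that project's Composer configuration needs the GitLab Composer repository added. A sketch using the URL format from GitLab's documentation; {group_id} is a placeholder for the group's numeric ID, and the exact path may differ between GitLab versions:
# register the GitLab Composer repository for this group
composer config repositories.gitlab composer http://{mylocaldomain}/api/v4/group/{group_id}/-/packages/composer/packages.json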

Cannot connect nlu in Docker container

I am trying to run Botpress with Docker. I set up my Dockerfile as follows:
FROM botpress/server:v11_9_5
ADD . /botpress
WORKDIR /botpress
CMD ["./bp"]
After building the image, I run docker run my_image:latest to start Botpress. However, it cannot connect to the Duckling server.
According to the log,
03:20:32.917 Mod[nlu] Couldn't reach the Duckling server , so it will be disabled.
For more informations (or if you want to self-host it), please check the docs at
https://botpress.io/docs/build/nlu/#system-entities
[Error, connect ECONNREFUSED 127.0.0.1:8000]
STACK TRACE
Error: connect ECONNREFUSED 127.0.0.1:8000
at TCPConnectWrap.afterConnect [as oncomplete] (net.js:1158:14)
My nlu.json settings are as follows:
{
  "$schema": "../../assets/modules/nlu/config.schema.json",
  "confidenceTreshold": 0.7,
  "ducklingURL": "https://duckling.botpress.io",
  "ducklingEnabled": true,
  "autoTrainInterval": "30s",
  "preloadModels": false,
  "languageModel": "en",
  "fastTextOverrides": {}
}
Duckling is bundled with Botpress when using the Docker image (and is expected to be started when you start Botpress). There is an environment variable which tells it to use the local version of duckling.
If you run the image directly, both processes are started at the same time.
There are a couple of examples on how to run both of them here: https://github.com/botpress/botpress/tree/master/examples/docker-compose
Basically:
command: bash -c "./duckling -p 8000 & ./bp"
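The same idea can be tested without Compose by overriding the CMD from the Dockerfile above (a sketch; it assumes the duckling binary sits next to ./bp in the image's working directory, as in the linked examples, and that Botpress listens on its default port 3000):
# start Duckling in the background, then Botpress, in one container
docker run -p 3000:3000 my_image:latest bash -c "./duckling -p 8000 & ./bp"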

Composer fails within Docker 'Failed to enable crypto'

I've been battling an issue with a corporate proxy when trying to run docker-compose up -d nginx mysql
I'm attempting to run the Laradock container on OSX but keep running into errors when Composer attempts to install dependencies. I've updated my Docker settings to tell it about my corporate proxy.
Before adding the proxy information, I was receiving this error:
[Composer\Downloader\TransportException]
The "https://packagist.org/packages.json" file could not be downloaded: SSL operation failed with code 1. OpenSSL Error messages:
error:1416F086:SSL routines:tls_process_server_certificate:certificate verify failed
Since updating the proxy details, I am now receiving this error:
Step 27/183 : RUN if [ ${COMPOSER_GLOBAL_INSTALL} = true ]; then composer global install ;fi
---> Running in a7699d4ecebd
Changed current directory to /home/laradock/.composer
Loading composer repositories with package information
[Composer\Downloader\TransportException]
The "https://packagist.org/packages.json" file could not be downloaded: SSL: Success
Failed to enable crypto
failed to open stream: operation failed
I'm an experienced dev but new to Docker. I think the error is being caused because PHP is running inside the Docker container but for some reason does not have access to my local certificates?
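With a TLS-intercepting corporate proxy, a common approach is to make the proxy's CA certificate available inside the image so that PHP/Composer can verify certificates. A hedged Dockerfile sketch (corporate-ca.crt is a placeholder for a certificate exported from your machine; update-ca-certificates assumes a Debian/Ubuntu-based image, which the Laradock workspace is):
# trust the corporate proxy's CA certificate inside the container
COPY corporate-ca.crt /usr/local/share/ca-certificates/corporate-ca.crt
RUN update-ca-certificates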
