Local CI pipeline with GitLab using Docker

I successfully installed GitLab locally via Docker on port 9800, in its own network "cinet". Now I would like to set up a CI pipeline. For that I first have to install a gitlab-runner and then register it. The registration fails, however.
I start GitLab like so:
$ docker run --name=gitlab --volume="/srv/gitlab/config:/etc/gitlab" --volume="/srv/gitlab/logs:/var/log/gitlab" --volume="/srv/gitlab/data:/var/opt/gitlab" -p 9800:80 -p 22:22 -p 443:443 --network=cinet --restart=no --detach=true gitlab/gitlab-ce:latest
The GUI is reachable at http://localhost:9800. I created a Java Maven project with a .gitlab-ci.yml and pushed it to my local GitLab instance successfully. The CI pipeline is stuck, as expected, since there is no runner installed/registered yet.
Installing the runner
$ docker run -d --name gitlab-runner --restart always -v /srv/gitlab-runner/config:/etc/gitlab-runner -v /var/run/docker.sock:/var/run/docker.sock --network=cinet gitlab/gitlab-runner
Registering the runner. I first tried a shared runner and obtained the token from the CI GUI.
$ docker run --rm -t -i -v /srv/gitlab-runner/config:/etc/gitlab-runner --network=cinet gitlab/gitlab-runner register
Then I am prompted for host, token, description and tags. I left the tags empty. For the host I tried:
http://localhost:9800/ - That results in the following error:
ERROR: Registering runner... failed runner=Lt7NVbJ_ status=couldn't execute POST against http://localhost:9800/api/v4/runners: Post http://localhost:9800/api/v4/runners: dial tcp 127.0.0.1:9800: connect: connection refused
PANIC: Failed to register this runner. Perhaps you are having network problems
http://172.19.0.2:9800/ - That results in the following error:
ERROR: Registering runner... failed runner=Lt7NVbJ_ status=couldn't execute POST against http://172.19.0.2:9800/api/v4/runners: Post http://172.19.0.2:9800/api/v4/runners: dial tcp 172.19.0.2:9800: connect: connection refused
PANIC: Failed to register this runner. Perhaps you are having network problems
Why is that failing? What do I need to do to make that run?
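One likely cause (a sketch, not a confirmed diagnosis): inside the runner container, localhost refers to the runner container itself, and the 9800:80 port mapping only exists on the Docker host. On a user-defined network like "cinet", containers resolve each other by container name, and GitLab listens on its internal port 80 there. So the registration could target the GitLab container by name instead:

```shell
# Register against the GitLab container by name on its internal port 80,
# since both containers share the user-defined network "cinet".
# <your-token> is the registration token from the CI GUI.
docker run --rm -t -i -v /srv/gitlab-runner/config:/etc/gitlab-runner \
  --network=cinet gitlab/gitlab-runner register \
  --url "http://gitlab/" \
  --registration-token "<your-token>"
```

This assumes the GitLab container's name is `gitlab` (as in the `--name=gitlab` run command above) so that Docker's embedded DNS resolves it on the cinet network.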

Related

Docker installation on Mac M1

I am trying to install Docker Desktop on a Mac M1, but after installation Docker asks to execute the following command.
docker run -d -p 80:80 docker/getting-started
But it gives the following error:
Unable to find image 'docker/getting-started:latest' locally
docker: Error response from daemon: Get "https://registry-1.docker.io/v2/": read tcp 192.168.65.4:58764->192.168.65.5:3128: read: connection reset by peer.
See 'docker run --help'.
Why is it not pulling docker data?
Try the docker exec command before your command, like this:
docker exec docker run -d -p 80:80 docker/getting-started
"Tried using the docker exec command and it appears to have worked OK with two different ubuntu instances. Did not try Docker Desktop.
It kind of looks like there is a problem with Docker Desktop manipulating Terminal.app.
I’m using the macOS default zshell."
https://forums.docker.com/t/problems-getting-started/116487/9

Docker compose with specified host pulls from wrong registry

I am playing with GitLab CI/CD and Docker. So far I have the following setup:
- A server with gitlab-runner (docker executor)
- A staging server with Docker installed
- A self-hosted GitLab instance
After building and pushing images to the container registry, I am trying to deploy the app on the staging server with the following steps:
- eval $(ssh-agent -s)
- echo "$DEPLOY_SSH_PRIVATE_KEY" | tr -d '\r' | ssh-add - > /dev/null
- docker login -u gitlab-ci-token -p $CI_BUILD_TOKEN $CI_REGISTRY_IMAGE
- docker-compose -H "ssh://$DEPLOY_USER@$DEPLOY_SERVER" down --remove-orphans || true
- docker-compose -H "ssh://$DEPLOY_USER@$DEPLOY_SERVER" pull
- docker-compose -H "ssh://$DEPLOY_USER@$DEPLOY_SERVER" up -d
It fails on the 4th step, which, as far as I understand, points to the wrong container registry:
error during connect: Get "http://docker.example.com/v1.24/containers/json?all=1&filters=%7B%22label%22%3A%7B%22com.docker.compose.project%3Drepo_name%22%3Atrue%7D%7D&limit=0": command [ssh -l deployer -- staging-server-ip docker system dial-stdio] has exited with exit status 255, please make sure the URL is valid, and Docker 18.09 or later is installed on the remote host: stderr=Host key verification failed.
Do I have to run docker login on a staging server as well, or what am I missing?
It turns out that there were several issues:
- Make sure to use the correct SSH key.
- I had to run Docker on the deployment server in rootless mode (not sure if it's required).
- On the client machine from which we connect to the deployment server, I had to disable strict host key checking in /etc/ssh/ssh_config.
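Instead of disabling strict host key checking globally, a narrower option (a sketch, reusing the $DEPLOY_SERVER variable from the job above) is to pin the staging server's host key in the CI job before the docker-compose steps run:

```shell
# Record the staging server's host key so SSH can verify it,
# rather than turning verification off in /etc/ssh/ssh_config.
# Run this before the docker-compose -H "ssh://..." commands.
mkdir -p ~/.ssh
chmod 700 ~/.ssh
ssh-keyscan -H "$DEPLOY_SERVER" >> ~/.ssh/known_hosts
```

This keeps host key verification on while still letting the non-interactive CI job connect, and it matches the "Host key verification failed" message in the error output.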

Installing Filebeat on Windows

I'm new to the Elastic Stack. I've been able to install Elasticsearch and Kibana via Docker using the instructions on elastic.co. However, I'm having some difficulty installing Filebeat using the directions on elastic.co. After starting Elasticsearch and Kibana, when I run:
docker run docker.elastic.co/beats/filebeat:7.13.0 setup -E setup.kibana.host=kibana:5601 -E output.elasticsearch.hosts=["localhost:9200"]
I get the following output:
Exiting: couldn't connect to any of the configured Elasticsearch hosts. Errors: [error connecting to Elasticsearch at http://localhost:9200: Get "http://localhost:9200": dial tcp [::1]:9200: connect: cannot assign requested address]
This is with a Docker setup. Any guidance on fixing this would be great. Thanks.
If you were following the instructions from the tutorial, you can see that Filebeat should use the same network.
So instead of
docker run docker.elastic.co/beats/filebeat:7.13.0 setup -E setup.kibana.host=kibana:5601 -E output.elasticsearch.hosts=["localhost:9200"]
should be
docker run --net {network_name} docker.elastic.co/beats/filebeat:7.13.0 setup -E setup.kibana.host=kibana:5601 -E output.elasticsearch.hosts=["localhost:9200"]
Check your Elasticsearch container's network with the following command:
docker inspect -f '{{.NetworkSettings.Networks}}' {es-container-name}
If you try to run Kibana + Elasticsearch + Filebeat on Windows, I would suggest writing a Dockerfile (or docker-compose file) with your own filebeat.yml.
Of course, if you run your Elasticsearch outside a container, you should use the host network, but that's another story.
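A sketch of that docker-compose approach (the network name `elastic` and the filebeat.yml path are assumptions; match them to what `docker inspect` reports for your Elasticsearch container):

```yaml
# docker-compose.yml (fragment): attach Filebeat to the same network
# as Elasticsearch and Kibana, so they are reachable by container name
# (e.g. elasticsearch:9200, kibana:5601) instead of localhost.
services:
  filebeat:
    image: docker.elastic.co/beats/filebeat:7.13.0
    volumes:
      - ./filebeat.yml:/usr/share/filebeat/filebeat.yml:ro
    networks:
      - elastic

networks:
  elastic:
    external: true
```

With this, filebeat.yml would point `output.elasticsearch.hosts` at the Elasticsearch container name rather than localhost.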

How to access docker container on the web?

I have a Google Cloud VM instance running Ubuntu 18 and I have installed nginx and pulled this Docker image: https://github.com/lensesio/fast-data-dev. But the problem is I cannot access the container when I run the following command:
docker run -d -p 2181:2181 -p 3030:3030 -p 8081-8083:8081-8083 \
-p 9581-9585:9581-9585 -p 9092:9092 -e ADV_HOST=[VM_EXTERNAL_IP] \
-e RUNNING_SAMPLEDATA=1 lensesio/fast-data-dev
It is supposed to be reachable at myexternalip:3030, but it doesn't open. I assume that's because I have to expose the Docker ports to the external web, since
curl 0.0.0.0:3030
returns a response. I opened the ports mentioned in the command above in the firewall.
You need to make sure the ports you expose in Docker are opened both on the instance and in the VPC firewall.
In Google Cloud you can deploy a container inside a Compute Engine instance when you create it (see Deploying a container on a new VM instance).
It is easier (and faster) than doing it on your own, and you get all your container port mappings defined in the same place where you open the firewall ports.
BTW - Don't use the IP address 0.0.0.0: it's the unspecified network address.
It might be filtered silently by firewalls or routers. Use the loopback address 127.0.0.1 instead.
In order to expose port 3030 on the public IP (VM_EXTERNAL_IP) of the instance, there must be a firewall rule at the VPC level allowing that specific protocol/port.[1]
A generic, broad-scope allow firewall rule definition would be [2]:
gcloud compute --project=[PROJECT] firewall-rules create allow-lenses-io-3030 --direction=INGRESS --priority=1000 --network=default --action=ALLOW --rules=tcp:3030 --source-ranges=0.0.0.0/0
You can limit the scope of the rule as needed/wanted/required.[3]
[1] https://cloud.google.com/vpc/docs/firewalls
[2] https://cloud.google.com/sdk/gcloud/reference/compute/firewall-rules/create
[3] https://cloud.google.com/vpc/docs/using-firewalls
I ran this on my VM and got this error:
docker: Got permission denied while trying to connect to the Docker daemon socket at unix:///var/run/docker.sock: Post http://%2Fvar%2Frun%2Fdocker.sock/v1.24/containers/create: dial unix /var/run/docker.sock: connect: permission denied.
See 'docker run --help'.
If you are getting the same error, check the file permissions on the Docker socket with this command:
~$ ls -la /var/run/docker.sock
I changed the file permissions to 666 by running:
sudo chmod 666 /var/run/docker.sock
Afterwards the file permissions should look like this:
srw-rw-rw- 1 root docker 0 Dec 17 14:40 /var/run/docker.sock
Then I was able to run your command successfully:
docker run -d -p 2181:2181 -p 3030:3030 -p 8081-8083:8081-8083 -p 9581-9585:9581-9585 -p 9092:9092 -e ADV_HOST=[VM_EXTERNAL_IP] -e RUNNING_SAMPLEDATA=1 lensesio/fast-data-dev
Unable to find image 'lensesio/fast-data-dev:latest' locally
latest: Pulling from lensesio/fast-data-dev
05e7bc50f07f: Pull complete
476521bd3084: Pull complete
c4c2aa517a1c: Pull complete
7f1b06a24ab4: Pull complete
bae2eaa88cbb: Pull complete
2d9ee69ece21: Pull complete
4da70d410da1: Pull complete
59abe7119ed3: Pull complete
ed6eaf2a0a19: Pull complete
25aa81bc4e49: Pull complete
8ccac59252e2: Pull complete
225a5ca8c99d: Pull complete
6d7f2dab62f4: Pull complete
Digest: sha256:a40302e35e1e11839bcfe12f6e63e0d665f098249e0ce9c111a2e212215f8841
Status: Downloaded newer image for lensesio/fast-data-dev:latest
051711522df0198d2f94825dee8d2556e137c7b31501b68cd73541ae8d4286d7
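As an aside, chmod 666 makes the Docker socket writable by every user on the VM. A commonly recommended alternative (a sketch; the group name `docker` is the default created by the Docker package) is to add your user to the docker group instead:

```shell
# Grant the current user access to the Docker daemon without opening
# the socket to everyone. Log out and back in (or run `newgrp docker`)
# for the group change to take effect.
sudo usermod -aG docker "$USER"
```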

Error while pushing a Docker image into Cloud Foundry

I am trying to push a (public) Docker image into Cloud Foundry, but got the following error message.
FAILED
Error restarting application: Server error, status code: 500, error code: 170011, message: Stager error: Failed to open TCP connection to stager.service.cf.internal:8888 (getaddrinfo: Name or service not known)
I got the same issue with CF installed via bosh-lite according to the Pivotal docs.
Commands:
cf enable-feature-flag diego_docker
cf push app1 -o localhost:5000/app1:1
Docker registry setup:
docker pull registry:2
docker run --name reg2 -d --restart=always -p 5000:5000 registry:2
The Docker image app1 was tagged and pushed without any issues.
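For reference, the tag-and-push steps mentioned above would look something like this (image name, registry port and tag as used in the cf push command):

```shell
# Tag the locally built image for the local registry on port 5000,
# then push it so cf push can pull it from localhost:5000/app1:1.
docker tag app1 localhost:5000/app1:1
docker push localhost:5000/app1:1
```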
