I have the following configuration:
a Dockerized GitLab (container named gitlab)
a Dockerized gitlab-ci-multi-runner (linked to gitlab and named gitlab-runners).
  ┌────────────────────────┐  ┌───────────┐
  │       172.12.x.x       │  │ 172.13.x. │
┌─┴──────────┐┌────────────┴┐┌┴───────────┴┐
│   GitLab   ││    GitLab   ││    GitLab   │
│            ││    Runners  ││    Runners  │
└────────────┘└─────────────┘└─────────────┘
      │             │              │
──────┴─────────────┴──────────────┴───────
I successfully registered a runner with GitLab, but when I try to run a build, the project container spawned by gitlab-runners cannot connect to my GitLab container; when it tries to clone the project, it cannot resolve the name http://gitlab/. I tried the parameter links = ["network-name:gitlab"] in my runner's .toml file, but this leads to:
API error (500) Could not get container for <network name>.
Any clues?
Here is my .toml:
concurrent = 1
check_interval = 0
[[runners]]
  name = "d4cf95ba5a90"
  url = "http://gitlab/ci"
  token = "9e6c2edb5832f92512a69df1ec4464"
  executor = "docker"
  [runners.docker]
    tls_verify = false
    image = "node:4.2.2"
    privileged = false
    disable_cache = false
    volumes = ["/cache"]
    links = ["evci_default:gitlab"]
  [runners.cache]
The only solution I found is to add the IP of the Docker host to extra_hosts in the runner's config.toml:
extra_hosts = ["host:192.168.137.1"]
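An alternative, if you would rather not hard-code the host IP, is to put the runner's build containers on the same user-defined Docker network as the GitLab container so that the name gitlab resolves directly. A rough sketch, assuming the GitLab container is named gitlab, that the network name ci-net is a placeholder, and that your runner version supports the network_mode option in [runners.docker]:

# Create a shared bridge network (or reuse the existing Compose network, e.g. evci_default)
docker network create ci-net

# Attach the running GitLab container so its name resolves on that network
docker network connect ci-net gitlab

# In config.toml, point the build containers at the same network instead of using links:
#   [runners.docker]
#     network_mode = "ci-net"

The build containers should then be able to reach GitLab at http://gitlab/ without extra_hosts or links.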
I was trying to open the dashboard, which previously worked fine.
Now, when I run minikube dashboard, I get:
λ minikube dashboard
X Exiting due to GUEST_STATUS: state: unknown state "minikube": docker container inspect minikube --format=: exit status 1
stdout:
stderr:
Error: No such container: minikube
*
╭──────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│ │
│ * If the above advice does not help, please let us know: │
│ https://github.com/kubernetes/minikube/issues/new/choose │
│ │
│ * Please attach the following file to the GitHub issue: │
│ * - C:\Users\JOSELU~1\AppData\Local\Temp\minikube_dashboard_dc37e18dac9641f7847258501d0e823fdfb0604c_0.log │
│ │
╰──────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
With minikube status
λ minikube status
E0604 13:13:20.260421 27600 status.go:258] status error: host: state: unknown state "minikube": docker container inspect minikube --format={{.State.Status}}: exit status 1
stdout:
stderr:
Error: No such container: minikube
E0604 13:13:20.261425 27600 status.go:261] The "minikube" host does not exist!
minikube
type: Control Plane
host: Nonexistent
kubelet: Nonexistent
apiserver: Nonexistent
kubeconfig: Nonexistent
With the command minikube profile list
λ minikube profile list
|----------|-----------|---------|--------------|------|---------|---------|-------|
| Profile | VM Driver | Runtime | IP | Port | Version | Status | Nodes |
|----------|-----------|---------|--------------|------|---------|---------|-------|
| minikube | docker | docker | 192.168.49.2 | 8443 | v1.20.2 | Unknown | 1 |
|----------|-----------|---------|--------------|------|---------|---------|-------|
Now,...
What is happening?
What would be the best solution?
Thanks...
Remove unused data:
docker system prune
Clear minikube's local state:
minikube delete
Start the cluster:
minikube start --driver=<driver_name>
(In your case the driver name is docker, as shown in the minikube profile list output you shared.)
Check the cluster status:
minikube status
Use the following documentation for more information:
https://docs.docker.com/engine/reference/commandline/system_prune/#examples
https://v1-18.docs.kubernetes.io/docs/tasks/tools/install-minikube/
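Putting those steps together for the docker driver shown in your profile list, a minimal sketch of the full recovery sequence (the prune step is optional and only reclaims unused Docker data):

docker system prune              # optional: remove stopped containers, dangling images and unused networks
minikube delete                  # discard the stale "minikube" profile and its local state
minikube start --driver=docker   # recreate the cluster container with the docker driver
minikube status                  # host, kubelet and apiserver should now report Running
minikube dashboard               # reopen the dashboard once the cluster is healthy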
I have an app running on an EKS (Kubernetes) cluster. The cluster was created with the eksctl tool, and I'm running Fargate only. The app needs to connect to an ElastiCache Redis cluster, which I spun up in the same subnet as the Fargate worker. The connection errors out with:
{ Error: Redis connection to my-redis.kptb5s.ng.0001.use1.cache.amazonaws.com:6379 failed - connect ETIMEDOUT 192.168.116.58:6379
    at TCPConnectWrap.afterConnect [as oncomplete] (net.js:1107:14)
  errno: 'ETIMEDOUT',
  code: 'ETIMEDOUT',
  syscall: 'connect',
  address: '192.168.116.58',
  port: 6379 }
How can I troubleshoot this? I need to get this connection to redis working. What are the most likely issues?
The most likely reason (based on the above) is that the security group setup does not allow traffic to flow from the pod to the Redis instance on port 6379. Assuming the EKS cluster SG associated with the pod allows all outbound traffic, I would focus on the SG assigned to the Redis cluster endpoint (which needs to allow traffic from the EKS cluster security group).
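For example, assuming the Redis cluster uses a security group with the placeholder ID sg-0123redis and the pods use the EKS cluster security group sg-0123eks, you could inspect and open the required ingress with the AWS CLI along these lines:

# Show the current inbound rules on the Redis security group
aws ec2 describe-security-groups --group-ids sg-0123redis \
  --query 'SecurityGroups[0].IpPermissions'

# Allow TCP 6379 from the EKS cluster security group
aws ec2 authorize-security-group-ingress \
  --group-id sg-0123redis \
  --protocol tcp --port 6379 \
  --source-group sg-0123eks

After adding the rule, retry the connection from the pod (for example with redis-cli or a simple TCP check) to confirm the timeout is gone.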
I am searching for the ./data folder described in the Storage section of the Prometheus documentation.
I run a basic prom/prometheus Docker container on Kubernetes. If I execute a shell inside the container, the working directory is /prometheus and it contains the wal directory, but it does not have the structure mentioned in the documentation and I cannot find any metric data.
Where are the metrics stored which I can query over Prometheus GUI?
As you can see in the Prometheus Dockerfile on GitHub (https://github.com/prometheus/prometheus/blob/master/Dockerfile#L24), the working directory is /prometheus, and that is where you will find all the metrics and data.
Below is the data present in the /prometheus directory:
/prometheus $ ls -l
total 8
-rw-r--r-- 1 nobody nogroup 0 May 28 12:39 lock
-rw-r--r-- 1 nobody nogroup 20001 May 28 12:45 queries.active
drwxr-xr-x 2 nobody nogroup 4096 May 28 12:39 wal
This is the time series database (TSDB), which you can't decode by hand. If you run more exporters, you will probably see more folders appear, such as chunks.
Ref: https://prometheus.io/docs/prometheus/1.8/storage/
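If you just want to confirm that data is actually being written, newer Prometheus 2.x images bundle promtool, which can inspect the TSDB directory without decoding it by hand; a rough sketch (availability of the tsdb subcommands depends on your Prometheus version):

/prometheus $ promtool tsdb list /prometheus      # lists the persisted blocks, if any have been cut yet
/prometheus $ promtool tsdb analyze /prometheus   # summarizes series, chunks and label cardinality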
I have figured out the problem now. The structure mentioned in the documentation needs several hours to build up.
./data
├── 01BKGTZQ1HHWHV8FBJXW1Y3W0K
│ └── meta.json
├── 01BKGV7JC0RY8A6MACW02A2PJD
│ ├── chunks
│ │ └── 000001
│ ├── tombstones
│ ├── index
│ └── meta.json
└── wal
├── 00000002
└── checkpoint.000001
The first of those folders starting with 01... appeared after 4 hours. The next one took another 2 hours. It seems to vary with the amount of metrics you are pulling.
Sidenote: As nischay goyal wrote in his answer, the Prometheus TSDB cannot be decoded. You can query the Prometheus API with a script to export metrics.
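For instance, a minimal sketch of such an export, assuming Prometheus runs in the cluster behind a service called prometheus (a placeholder) and is port-forwarded to localhost:9090:

# Forward the Prometheus service to localhost
kubectl port-forward svc/prometheus 9090:9090 &

# Instant query: the current value of the "up" metric for every target
curl -s 'http://localhost:9090/api/v1/query?query=up'

# Range query: the same metric over one hour, one sample per minute
curl -s 'http://localhost:9090/api/v1/query_range?query=up&start=2021-05-28T12:00:00Z&end=2021-05-28T13:00:00Z&step=60s'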
It has been stated that:
The shim allows for daemonless containers. It basically sits as the parent of the container's process to facilitate a few things.
It keeps the STDIO and other fds open for the container in case containerd and/or Docker die. If the shim were not running, the parent side of the pipes or the TTY master would be closed and the container would exit.
However from a process level, it appears that containerd spawns containerd-shim, so if containerd is down I would expect containerd-shim to go down too.
Can someone explain how containerd-shim can remain up if containerd/docker are down?
$ ps fxa | grep dockerd -A 3
PID TTY STAT TIME COMMAND
43449 pts/2 S+ 0:00 \_ grep dockerd -A 3
117536 ? Ssl 163:36 /usr/bin/containerd
93633 ? Sl 1:01 \_ containerd-shim -namespace moby -workdir /var/lib/containerd/io.containerd.runtime.v1.linux/moby/8f75a1b32bb09611430ea55958b11a482b6c83ba2a75f7ca727301eb49a2770f -address /run/containerd/containerd.sock -containerd-binary /usr/bin/containerd -runtime-root /var/run/docker/runtime-runc
$ pstree -lpTs
systemd(1)─┬─VGAuthService(45146)
├─accounts-daemon(1053)
├─agetty(104696)
├─agetty(104707)
├─agetty(104716)
├─atd(993)
├─containerd(117536)─┬─containerd-shim(8394)─┬─bash(8969)
│ │ └─sh(8420)─┬─sshd(8512)
│ │ └─tail(8514)
│ ├─containerd-shim(13170)───bash(13198)
│ ├─containerd-shim(13545)───portainer(13577)
│ ├─containerd-shim(14156)───mysqld(14184)
...
├─dockerd(42320)─┬─docker-proxy(42700)
│ ├─docker-proxy(42713)
│ ├─docker-proxy(42725)
│ ├─docker-proxy(42736)
│ └─docker-proxy(42749)
UPDATE: Based on the explanation provided in the accepted answer:
$ pstree -lpTs
systemd(1)─┬─VGAuthService(45146)
├─accounts-daemon(1053)
├─agetty(104696)
├─agetty(104707)
├─agetty(104716)
├─atd(993)
├─containerd(117536)─┬─containerd-shim(8394)─┬─bash(8969)
│ │ └─sh(8420)─┬─sshd(8512)
│ │ └─tail(8514)
│ ├─containerd-shim(13170)───bash(13198)
│ ├─containerd-shim(13545)───portainer(13577)
│ ├─containerd-shim(14156)───mysqld(14184)
$ sudo kill -9 117536
$ pstree -lpTs
systemd(1)─┬─VGAuthService(45146)
├─accounts-daemon(1053)
├─agetty(104696)
├─agetty(104707)
├─agetty(104716)
├─atd(993)
├─containerd-shim(8394)─┬─bash(8969)
│ └─sh(8420)─┬─sshd(8512)
│ └─tail(8514)
├─containerd-shim(13170)───bash(13198)
├─containerd-shim(13545)───portainer(13577)
├─containerd-shim(14156)───mysqld(14184)
However from a process level, it appears that containerd spawns containerd-shim, so if containerd is down I would expect containerd-shim to go down too.
Child processes don't automatically die when their parent dies; they are simply re-parented to PID 1. systemd takes over as the parent and containerd-shim continues running.
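You can reproduce the re-parenting yourself; a rough sketch using a throwaway container (the image and name are only examples):

# Start a container, then look at the shim and its parent
docker run -d --name shim-demo nginx
ps -o pid,ppid,args -C containerd-shim   # PPID points at containerd

# Kill containerd, as in the update above, and look again
sudo kill -9 "$(pidof containerd)"
ps -o pid,ppid,args -C containerd-shim   # PPID is now 1: the orphaned shim has been adopted by systemd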
Started by user sabari k
Building in workspace /var/lib/jenkins/workspace/actualdairy
[actualdairy] $ /bin/sh -xe /tmp/jenkins4465259595371700187.sh
+ echo hello
+ cd
+ ./actualDairy/deploy.sh
Pseudo-terminal will not be allocated because stdin is not a terminal.
Welcome to Ubuntu 18.04.1 LTS (GNU/Linux 4.15.0-36-generic x86_64)
Documentation: https://help.ubuntu.com
Management: https://landscape.canonical.com
Support: https://ubuntu.com/advantage
System information as of Thu Oct 25 20:05:25 UTC 2018
System load: 0.09 Processes: 90
Usage of /: 8.7% of 24.06GB Users logged in: 1
Memory usage: 38% IP address for eth0:
Swap usage: 0% IP address for eth1:
Get cloud support with Ubuntu Advantage Cloud Guest:
http://www.ubuntu.com/business/services/cloud
43 packages can be updated.
6 updates are security updates.
Welcome to DigitalOcean's One-Click Node.js Droplet.
To keep this Droplet secure, the UFW firewall is enabled.
All ports are BLOCKED except 22 (SSH), 80 (HTTP), and 443 (HTTPS).
To get started, visit http://do.co/node1804
To delete this message of the day: rm -rf /etc/update-motd.d/99-one-click
mesg: ttyname failed: Inappropriate ioctl for device
git@bitbucket.org: Permission denied (publickey).
fatal: Could not read from remote repository.
Please make sure you have the correct access rights
and the repository exists.
npm WARN optional Skipping failed optional dependency /chokidar/fsevents:
npm WARN notsup Not compatible with your operating system or architecture: fsevents#1.2.4
Use --update-env to update environment variables
[PM2] Applying action restartProcessId on app [all](ids: 0)
[PM2] www ✓
┌──────────┬────┬─────────┬──────┬───────┬────────┬─────────┬────────┬─────┬──────────┬──────┬──────────┐
│ App name │ id │ version │ mode │ pid │ status │ restart │ uptime │ cpu │ mem │ user │ watching │
├──────────┼────┼─────────┼──────┼───────┼────────┼─────────┼────────┼─────┼──────────┼──────┼──────────┤
│ www │ 0 │ 0.0.2 │ fork │ 23688 │ online │ 65 │ 0s │ 0% │ 5.4 MB │ root │ disabled │
└──────────┴────┴─────────┴──────┴───────┴────────┴─────────┴────────┴─────┴──────────┴──────┴──────────┘
Use pm2 show <id|name> to get more details about an app
Finished: SUCCESS
I don't know why the git pull is not working. My deploy script is:
#!/bin/bash
ssh root@ipaddress <<EOF
cd actualdairy
git pull
npm install
pm2 restart all
exit
EOF
I added the remote server's public key to Bitbucket, but it's not pulling from the repo; it says Permission denied (publickey).
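Because the deploy script logs in to the droplet as root and runs git pull there, the key Bitbucket sees is root's key on the droplet, not Jenkins's or your workstation's. A rough sketch of how to check that on the droplet (default key path assumed; adjust for your setup):

# On the droplet, as the same user the script uses (root here)
ssh root@ipaddress

# Does root have a key pair yet? If not, create one.
ls ~/.ssh/id_rsa.pub || ssh-keygen -t rsa -b 4096 -C "deploy@droplet"

# Add the contents of ~/.ssh/id_rsa.pub to the Bitbucket repository (Settings -> Access keys, or your account's SSH keys), then test:
ssh -T git@bitbucket.org   # should greet you instead of printing "Permission denied (publickey)"

# Also make sure the remote uses SSH rather than HTTPS
cd ~/actualdairy && git remote -v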