How to pass docker run parameter via kubernetes pod

Hi, I am running a Kubernetes cluster where I run a Logstash container.
But I need to run it with my own docker run parameters. If I ran it in Docker directly, I would use the command:
docker run --log-driver=gelf logstash -f /config-dir/logstash.conf
But I need to run it via a Kubernetes pod. My pod looks like:
spec:
  containers:
    - name: logstash-logging
      image: "logstash:latest"
      command: ["logstash", "-f", "/config-dir/logstash.conf"]
      volumeMounts:
        - name: configs
          mountPath: /config-dir/logstash.conf
How can I run the Docker container with the parameter --log-driver=gelf via Kubernetes? Thanks.

Kubernetes does not expose docker-specific options such as --log-driver. A higher abstraction of logging behavior might be added in the future, but it is not in the current API yet. This issue was discussed in https://github.com/kubernetes/kubernetes/issues/15478, and the suggestion was to change the default logging driver for docker daemon in the per-node configuration/salt template.
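For example, on nodes you control, a minimal sketch of making gelf the Docker daemon's default log driver via /etc/docker/daemon.json might look like this (the gelf-address is an assumption; point it at your own Logstash/Graylog GELF input):
{
  "log-driver": "gelf",
  "log-opts": {
    "gelf-address": "udp://logstash.example.com:12201"
  }
}
After restarting the Docker daemon on each node, every container started there, including the ones Kubernetes creates, will use the gelf driver.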

Related

How can I retrieve the container images from a K8s cluster (not just list the container image names)?

I would like to run a K8s CronJob (or controller) to mirror copies of the actual container images used to an external (ECR) Docker repo. How can I do the equivalent of:
docker pull localimage
docker tag localimage newlocation
docker push newlocation
Kubernetes doesn't have any way to push or rename images, or to manually pull images beyond declaring them in a Pod spec.
The Docker registry system has its own HTTP API. One possibility is, when you discover a new image, manually make the API calls to pull and push it. In this context you wouldn't specifically need to "tag" the image since the image repository, name, and tag only appear as URL components. I'm not specifically aware of a prebuilt tool that can do this, though I'd be very surprised if nobody's built it.
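As a rough sketch of what those API calls might look like (registry names here are placeholders, authentication and multi-architecture manifest lists are left out, jq is assumed to be available, and the upload Location header is assumed to come back as an absolute URL):
# copy one image between registries with the Registry HTTP API v2
SRC=https://registry.example.com/v2/some/image     # hypothetical source repo
DST=https://mirror.example.com/v2/some/image       # hypothetical destination repo
TAG=latest

# 1. fetch the image manifest from the source
curl -s -H 'Accept: application/vnd.docker.distribution.manifest.v2+json' \
  "$SRC/manifests/$TAG" -o manifest.json

# 2. copy the config blob and every layer blob across
for digest in $(jq -r '.config.digest, .layers[].digest' manifest.json); do
  curl -sL "$SRC/blobs/$digest" -o blob
  # start an upload, then PUT the blob to the returned Location
  location=$(curl -si -X POST "$DST/blobs/uploads/" \
    | tr -d '\r' | awk 'tolower($1)=="location:" {print $2}')
  case "$location" in *\?*) sep='&';; *) sep='?';; esac
  curl -s -X PUT "${location}${sep}digest=${digest}" \
    -H 'Content-Type: application/octet-stream' --data-binary @blob
done

# 3. push the manifest under the destination name/tag
curl -s -X PUT "$DST/manifests/$TAG" \
  -H 'Content-Type: application/vnd.docker.distribution.manifest.v2+json' \
  --data-binary @manifest.json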
If you can't do this, then the only reliable way to get access to some Docker daemon in Kubernetes is to run one yourself. In this scenario you don't need access to "the real" container system, just somewhere you can target the specific docker commands you list, so you're not limited by the container runtime Kubernetes uses. The one big "but" here is that the Docker daemon must run in a privileged container, which your local environment may not allow.
It's a little unusual to run two containers in one Pod, but this is a case where it makes sense. The Docker daemon can run as a prepackaged separate container, tightly bound to its client, as a single unit. Here you don't need persistent storage or anything else that might want the Docker daemon to have a different lifecycle than the thing that's using it; it's just an implementation detail of the copy process. Carefully Googling "docker in docker" kubernetes finds write-ups that describe this same pattern.
By way of illustration, here's a way you might do this in a Kubernetes Job:
apiVersion: batch/v1
kind: Job
metadata: { ... }
spec:
  template:
    spec:
      restartPolicy: Never           # Jobs require an explicit restart policy
      containers:
        - name: cloner
          image: docker:latest       # just the client and not a daemon
          env:
            - name: IMAGE
              value: some/image:tag
            - name: REGISTRY
              value: registry.example.com
            - name: DOCKER_HOST
              value: tcp://localhost:2375   # pointing at the other container
          command:
            - /bin/sh
            - -c
            - |-
              docker pull "$IMAGE"
              docker tag "$IMAGE" "$REGISTRY/$IMAGE"
              docker push "$REGISTRY/$IMAGE"
              docker rmi "$IMAGE" "$REGISTRY/$IMAGE"
        - name: docker
          image: docker:dind
          securityContext:
            privileged: true         # <-- could be a problem with your security team
          volumeMounts:
            - name: dind-storage
              mountPath: /var/lib/docker
      volumes:
        - name: dind-storage
          emptyDir: {}               # won't outlive this Pod and that's okay
In practice I suspect you'd want a single long-running process to manage this, maybe running the DinD daemon as a second container in your controller's Deployment.
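A rough skeleton of that variant might look like this (the controller image name is hypothetical, and the docker-in-docker side is the same as in the Job above):
apiVersion: apps/v1
kind: Deployment
metadata:
  name: image-mirror
spec:
  replicas: 1
  selector:
    matchLabels:
      app: image-mirror
  template:
    metadata:
      labels:
        app: image-mirror
    spec:
      containers:
        - name: controller
          image: registry.example.com/image-mirror-controller   # hypothetical long-running watcher
          env:
            - name: DOCKER_HOST
              value: tcp://localhost:2375
        - name: docker
          image: docker:dind
          securityContext:
            privileged: true
          volumeMounts:
            - name: dind-storage
              mountPath: /var/lib/docker
      volumes:
        - name: dind-storage
          emptyDir: {}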

Gitlab CI - exposing port/service of spawned docker container

I have set up a GitLab CI test plant:
Gitlab-CE on ubuntu 18.04 VM
Docker gitlab runner
Microk8s cluster
I am able to install the GitLab managed Ingress controller.
As I am running dind, how should I expose port 4000 to my host machine (VM), and what is the best way to do it?
I tried to play around with the GitLab-installed Ingress controller, but I am not sure where the config files/YAML for the GitLab managed apps live.
I tried a simple NodePort expose and it did not help:
kubectl -n gitlab-managed-apps expose deployment <Gitlab Runner> --type=NodePort --port=4000
Below is my gitlab-ci.yaml file:
image: docker:19.03.13
services:
  - name: docker:18.09.7-dind
    command: ['--insecure-registry=gitlab.local:32000']
stages:
  - testing
variables:
  DOCKER_DRIVER: overlay2
  DOCKER_HOST: tcp://localhost:2375
  DOCKER_TLS_CERTDIR: ""
  CI_REGISTRY_IMAGE: img1
before_script:
  - echo "$REG_PASSWORD" | docker -D login "$CI_REGISTRY" -u "$REG_USER" --password-stdin
testing:
  stage: testing
  tags: [docker]
  script:
    - docker pull "gitlab.local:32000/$CI_REGISTRY_IMAGE:latest"
    - docker images
    - hostname
    - docker run --rm -d -p 4000:4000 "gitlab.local:32000/$CI_REGISTRY_IMAGE:latest"
    - netstat -na | grep -w 4000
    - sleep 3600
  only:
    - master
I managed to figure out what the issue was with exposing this using K8s Services: the selector was not clearly defined. Some key points to note:
I could see that the port was listening on the IPv6 interface (::4000) within the pod; however, this was not the problem.
I added podLabels in the config.toml of the GitLab Runner config (e.g. app: myapp). This way, each pod spawned by the runner had a predefined label.
I used that label in the selector section of the LB Service, as sketched below.
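By way of example (the label, service name, and namespace are assumptions; adjust them to wherever your runner spawns its pods), the two pieces might look like this:
# runner config.toml (excerpt): every pod the runner creates gets app: myapp
[[runners]]
  executor = "kubernetes"
  [runners.kubernetes]
    namespace = "gitlab-managed-apps"
    [runners.kubernetes.pod_labels]
      "app" = "myapp"

# Service whose selector matches that label
apiVersion: v1
kind: Service
metadata:
  name: ci-test-app
  namespace: gitlab-managed-apps
spec:
  type: LoadBalancer   # or NodePort
  selector:
    app: myapp
  ports:
    - port: 4000
      targetPort: 4000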
Hope it's useful to anyone.

How can I add the `--privileged` flag of Docker to a Kubernetes pod spec container YAML definition

In this question: Teamcity Build won't run until Build Agents is configured with Docker?
I had a problem with the teamcity-agent deployment (TeamCity is a build server). These agents are the build runners and they come as their own pods. Back in the days when I was just using Docker without K8s, I used this command to run the container:
docker run -it -e SERVER_URL="<url to TeamCity server>" \
--privileged -e DOCKER_IN_DOCKER=start \
jetbrains/teamcity-agent
So adding those environment vars to the K8s container definition wasn't that hard. I just had to define this spec part:
spec:
  containers:
    - name: teamcity-agent
      image: jetbrains/teamcity-agent:latest
      ports:
        - containerPort: 8111
      env:
        - name: SERVER_URL
          value: "10.0.2.205:8111"
        - name: DOCKER_IN_DOCKER
          value: start
So now I want to have the --privileged flag as well. I found an article (link to guide) which I did not really understand. I added:
securityContext:
  allowPrivilegeEscalation: false  # also tried 'true'
but it did not work with that.
Can someone point out where to look?
I think you may use it like this:
securityContext:
  privileged: true
See the Kubernetes documentation on configuring a security context for the details.
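Folded into the pod spec from the question, the container definition might look roughly like this (the securityContext sits at the container level, alongside ports and env):
spec:
  containers:
    - name: teamcity-agent
      image: jetbrains/teamcity-agent:latest
      securityContext:
        privileged: true   # counterpart of docker run --privileged
      ports:
        - containerPort: 8111
      env:
        - name: SERVER_URL
          value: "10.0.2.205:8111"
        - name: DOCKER_IN_DOCKER
          value: start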

Injecting host network into container in CircleCI

I have this CircleCI configuration.
version: 2
jobs:
  build:
    docker:
      - image: docker:18.09.2-git
      - image: docker.elastic.co/elasticsearch/elasticsearch:6.6.0
        name: elasticsearch
    working_directory: ~/project
    steps:
      - checkout
      - setup_remote_docker:
          docker_layer_caching: true
      - run:
          name: test
          command: |
            docker run --rm \
              --network host \
              byrnedo/alpine-curl \
              elasticsearch:9200
I'm looking for a way to allow my new container to access the Elasticsearch port 9200. With this configuration, elasticsearch is not even a known host name.
Creating an extra network is not possible; I get the error message "container sharing network namespace with another container or host cannot be connected to any other network".
Host networking seems to work only in the primary image.
How could I do this?
That will not work. Containers started during a build via the docker run command run on a remote Docker engine. They cannot talk to the containers running as part of the executor via TCP, since they are isolated; you can only reach them with docker exec.
The solution will ultimately depend on your end goal, but one option might be to remove the Elasticsearch image/container from the executor, and use Docker Compose to get both images to talk to each other within the build.
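As a sketch of that approach (the service names are my own, and depends_on only orders startup, so the test side may still need to wait or retry until Elasticsearch is up):
# docker-compose.yml, run via the remote Docker engine with `docker-compose up`
version: "3"
services:
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:6.6.0
    environment:
      - discovery.type=single-node
  test:
    image: byrnedo/alpine-curl
    depends_on:
      - elasticsearch
    # both services share the compose network, so the hostname "elasticsearch" resolves
    command: ["http://elasticsearch:9200"]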

How to pass the Docker CLI Arguments when starting a container using Kubernetes

I am exploring Kubernetes Cluster orchestration and I am familiar with docker based containerization techniques.
Normally when starting Docker containers, we pass different CLI arguments (port options + env variables), something like below:
docker run --name myService -p 8080:8080 -v /var/lib/otds:/usr/local/otds -e VIRTUAL_PORT=8080 myImage
When I try to bring up the same on a Kubernetes cluster (using its CLI, kubectl), I see errors saying that these arguments are not recognized.
I am trying something like below:
kubectl run myService -p 8080:8080 -v /var/lib/otds:/usr/local/otds -e VIRTUAL_PORT=8080 --image=myImage
I am looking for help on how to pass Docker's CLI arguments to kubectl.
kubectl run is just a shorthand convenience method. Normally you should be writing pod specs in YAML/JSON.
Based on your unfamiliarity with basics, I would highly recommend sitting down and following through some of the training material at https://kubernetes.io/docs/tutorials/
As for your question, in a pod spec, the command/args field is what you're looking for and it is documented here: https://kubernetes.io/docs/tasks/configure-pod-container/define-command-argument-container/
Here's a sample:
apiVersion: v1
kind: Pod
metadata:
  name: demo
spec:
  containers:
    - name: foo
      image: alpine
      command: ["date"]
