Running Kubernetes on Windows - Docker

I am using Docker Toolbox (Windows 7) to create my Docker image, and now I would like to use Kubernetes for container orchestration.
I want to run Kubernetes locally. I installed it using Minikube and kubectl. Is this the best way? Can I use k3s on Windows 7?
And is it possible to create a private registry, like Docker Hub, on Windows 7?
Thank you.

The easiest way to experiment with Kubernetes locally is with Minikube.
As for a Docker registry, I would suggest running the official registry image from Docker Hub. When you want to step up, Nexus is a really nice choice.
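For example, a minimal sketch of running the registry image (the host path and image name are placeholders):
# /srv/registry is a hypothetical host path for storing image data
docker run -d -p 5000:5000 --restart=always -v /srv/registry:/var/lib/registry --name registry registry:2
# tag and push images as localhost:5000/<name>
docker tag myimage localhost:5000/myimage:latest
docker push localhost:5000/myimage:latest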

If you want to play with Kubernetes, the latest version of Docker Desktop allows you to set up a fully functional Kubernetes environment on your desktop and enable it with a click; see the Docker docs.
A private registry allows you to store your own images and pull official images provided by vendors. Docker Hub is a cloud service and just one of many registries available.

Docker Desktop includes a standalone Kubernetes server and client, as well as Docker CLI integration. The Kubernetes server runs locally within your Docker instance, is not configurable, and is a single-node cluster.
Refer to:
https://docs.docker.com/docker-for-windows/kubernetes/
The Kubernetes server runs within a Docker container on your local system, and is only for local testing. When Kubernetes support is enabled, you can deploy your workloads, in parallel, on Kubernetes, Swarm, and as standalone containers. Enabling or disabling the Kubernetes server does not affect your other workloads.
You can deploy a stack on Kubernetes with docker stack deploy, the docker-compose.yml file, and the name of the stack.
docker stack deploy --compose-file /path/to/docker-compose.yml mystack
docker stack services mystack
To run on Kubernetes, specify the orchestrator in your stack deployment:
docker stack deploy --orchestrator kubernetes --compose-file /path/to/docker-compose.yml mystack
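As a quick sanity check (a sketch; it assumes your Docker Desktop version supports the --orchestrator flag and that kubectl points at the Docker Desktop cluster):
docker stack ls --orchestrator kubernetes
kubectl get pods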
Create a volume directory for nexus-data. I used the /nexus-data directory, which is the mount point of the second disk:
mkdir /nexus-data
chown -R 200 /nexus-data
Example app:
version: '3.3'
services:
  traefik:
    image: traefik:v2.2
    container_name: traefik
    restart: always
    command:
      - "--log.level=DEBUG"
      - "--api.insecure=true"
      - "--providers.docker=true"
      - "--providers.docker.exposedbydefault=true"
      - "--entrypoints.web.address=:80"
      - "--entrypoints.websecure.address=:443"
    ports:
      - 80:80
      - 443:443
    networks:
      - nexus
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
  nexus:
    container_name: nexus
    image: sonatype/nexus3
    restart: always
    networks:
      - nexus
    volumes:
      - /nexus-data:/nexus-data
    labels:
      - traefik.port=8081
      - traefik.http.routers.nexus.rule=Host(`NEXUS.mydomain.com`)
      - traefik.enable=true
      - traefik.http.routers.nexus.entrypoints=websecure
      - traefik.http.routers.nexus.tls=true
      - traefik.http.routers.nexus.tls.certresolver=myresolver
networks:
  nexus:
    external: true
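To bring this up (a sketch, assuming the file above is saved as docker-compose.yml), the external nexus network has to exist before the services start:
# create the external network referenced by the compose file, then start the services
docker network create nexus
docker-compose up -d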

Related

Why can't I connect to selenium docker-compose service from my GitLab job?

I am running Selenium tests in GitLab CI, but I have a problem setting the remote URL correctly when using the GitLab runner instead of my computer.
The IP address of the runner is 192.168.xxx.xxx, and when I run the pipeline the IP address of the Selenium hub is 172.19.0.2/16. I tried both, and both failed. I also tried using the name of the Selenium hub container, http://selenium__hub, but that failed as well.
The docker-compose.yml is:
version: "3"
services:
chrome:
image: selenium/node-chrome:4.0.0-20211013
container_name: chrome
shm_size: 2gb
depends_on:
- selenium-hub
volumes:
- ./target:/home/seluser/Downloads
environment:
- SE_EVENT_BUS_HOST=selenium-hub
- SE_EVENT_BUS_PUBLISH_PORT=4442
- SE_EVENT_BUS_SUBSCRIBE_PORT=4443
- SE_NODE_GRID_URL=http://localhost:4444
ports:
- "6900:5900"
edge:
image: selenium/node-edge:4.0.0-20211013
container_name: edge
shm_size: 2gb
depends_on:
- selenium-hub
volumes:
- ./target:/home/seluser/Downloads
environment:
- SE_EVENT_BUS_HOST=selenium-hub
- SE_EVENT_BUS_PUBLISH_PORT=4442
- SE_EVENT_BUS_SUBSCRIBE_PORT=4443
- SE_NODE_GRID_URL=http://localhost:4444
ports:
- "6901:5900"
firefox:
image: selenium/node-firefox:4.0.0-20211013
container_name: firefox
shm_size: 2gb
depends_on:
- selenium-hub
volumes:
- ./target:/home/seluser/Downloads
environment:
- SE_EVENT_BUS_HOST=selenium-hub
- SE_EVENT_BUS_PUBLISH_PORT=4442
- SE_EVENT_BUS_SUBSCRIBE_PORT=4443
- SE_NODE_GRID_URL=http://localhost:4444
ports:
- "6902:5900"
selenium-hub:
image: selenium/hub:4.0.0-20211013
container_name: selenium-hub
ports:
- "4444:4444"
The GitLab runner's config file looks like this:
[[runners]]
  name = "selenium"
  url = "https://gitlab.myhost.at"
  token = "xxxxxxxx"
  executor = "docker"
  privileged = true
  links = ["selenium__hub:hub"]
  [runners.docker]
    image = "docker:stable"
    privileged = true
The remote URLs I have tried are:
WebDriver driver = new RemoteWebDriver(new URL("http://192.168.xxx.xxx:4444/wd/hub"), cap);
WebDriver driver = new RemoteWebDriver(new URL("http://172.19.0.2:4444/wd/hub"), cap);
WebDriver driver = new RemoteWebDriver(new URL("http://selenium__hub:4444/wd/hub"), cap);
How can I get this to work with GitLab runner?
The issue here is that when your job launches containers using docker-compose, the hostnames in the docker network are not known to your job container.
Assuming you are using the docker:dind service in your job in order to use docker-compose, and you are trying to connect to the services started with docker-compose from your job, you need to use the hostname docker to reach your services through their mapped ports.
So your corrected code would be as follows:
WebDriver driver = new RemoteWebDriver(new URL("http://docker:4444/wd/hub"), cap);
Why "docker"?
The reason this is needed is that your containers are running 'on' a remote docker daemon service -- the docker:dind container. When you invoke docker-compose, your job container talks to the docker:dind container, which in turn spins up a new docker network and creates the containers from your compose file on that network.
Your job container has no knowledge of (or route to) that network, nor does it know the hostnames of the services. The service daemon itself also runs on a separate network from your runner -- because it is another docker container created by the docker executor -- so your runner IP won't work either.
However, the docker executor does create a link to your services, i.e. the docker:dind service, so you can reach that container by the docker hostname. Additionally, your compose file indicates that the hub service should make a port mapping of 4444:4444 from the host -> container. The host, in this case, means the docker:dind service. So calling http://docker:4444 from your job reaches the hub service.
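For reference, a job set up this way might look roughly like the following; the image, variables, and script lines here are assumptions rather than your actual pipeline:
test:
  image: docker:stable        # assumes docker-compose is available in the job image
  services:
    - docker:dind
  variables:
    DOCKER_HOST: tcp://docker:2375   # talk to the dind daemon (non-TLS setup assumed)
  script:
    - docker-compose up -d
    # the hub's mapped port is reachable on the dind host, i.e. http://docker:4444
    - ./run-tests.sh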
Why doesn't "links" work?
Finally, to cover one last detail, it looks like in your runner configuration you expected your links to allow you to communicate with the hub container by hostname:
links = ["selenium__hub:hub"]
In the runner configuration, it's true that the links configuration would, in general, allow your job to communicate with containers by hostname. However, this configuration is wrong here for two reasons:
This configuration can only apply to containers running alongside your runner container -- that is, other containers registered on the host daemon -- not containers created by other docker daemons, like the ones created with docker-compose in your job by talking to the docker:dind service daemon.
Even if you could reach containers created by other daemons, or your hub container were created by the host daemon, the parameters are wrong according to the URLs you tried. This configuration basically says "expose the selenium__hub container as the FQDN hub" -- but you never tried the hostname hub.
There is nothing to fix here because (1) is not a fixable error when using docker-in-docker.
Alternatives
Alternatively, you can utilize GitLab's services: capability to run the hub and/or browser containers.
my_job:
  services:
    - docker:dind
    - name: selenium/hub:4.0.0-20211013
      alias: hub # this is the hostname
You could even do this as a runner configuration, give it a special tag, and have jobs needing remote browsers add the necessary tags: key(s) to reduce the amount of job configuration needed.
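A rough sketch of that runner-level configuration (the runner name is hypothetical and the exact schema depends on your gitlab-runner version, so treat this as an outline):
[[runners]]
  name = "remote-browser-runner"   # hypothetical name; tags are set at registration time
  executor = "docker"
  [runners.docker]
    image = "docker:stable"
    [[runners.docker.services]]
      name = "selenium/hub:4.0.0-20211013"
      alias = "hub"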
You may also be interested to see one of my other answers on how FF_NETWORK_PER_BUILD feature flag can affect how networking works between docker containers and jobs/services.

Creating and copying files to a volume within GitLab CI/CD

I created a GitLab CI/CD pipeline with the GitLab runner and GitLab itself.
Right now everything runs except one simple script.
It does not copy any files to the volume.
I'm using docker-compose 2.7.
I also have to say that I'm not 100% sure about the volumes.
Here is an excerpt from my .gitlab-ci.yml:
stages:
  - build_single
  - test_single
  - clean_single
  - build_lb
  - test_lb
  - clean_lb

Build_single:
  stage: build_single
  script:
    - docker --version
    - docker-compose --version
    - docker-compose -f ./NodeApp/docker-compose.yml up --scale slave=1 -d
    - docker-compose -f ./traefik/docker-compose_single.yml up -d
    - docker-compose -f ./DockerJMeter/docker-compose.yml up --scale slave=10 -d
When I'm using ls, all the files are in the correct folder.
Docker-compose:
version: '3.7'
services:
  reverse-proxy:
    # The official v2.0 Traefik docker image
    image: traefik:v2.0
    # Enables the web UI and tells Traefik to listen to docker
    command: --api.insecure=true --providers.docker
    ports:
      # The HTTP port
      - "7000:80"
      # The Web UI (enabled by --api.insecure=true)
      - "7080:8080"
    volumes:
      # So that Traefik can listen to the Docker events
      - /var/run/docker.sock:/var/run/docker.sock
      - ./traefik/config_lb:/etc/traefik
    networks:
      - default
networks:
  default:
    driver: bridge
    name: traefik
For JMeter I'm using a copy statement to get the configuration files in after it has started, but Traefik needs the files to be present when it boots.
I thought ./traefik/config_lb:/etc/traefik, with a '.' in front of traefik, creates a path relative to the docker-compose file.
Is this wrong?
I also have to say that GitLab and the runner are both dockerized on the host system. So the Docker instance is running on the host system, and gitlab-runner is also using the docker.sock.
Best Regards!
When you use the gitlab-runner in a docker container, it starts another container, the gitlab-executor, based on an image that you specify in .gitlab-ci.yml. The gitlab-runner uses the docker sock of the docker host (see /var/run/docker.sock:/var/run/docker.sock in /etc/gitlab-runner/config.toml) to start the executor.
When you then start another container using docker-compose, the docker sock is used again. Any source paths that you specify in docker-compose.yml have to point to paths on the docker host; otherwise the destination in the created service will be empty (given that the source path does not exist on the host).
So what you need to do is find the path to traefik/config_lb on the docker host and provide that as the source.
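For example, assuming the checkout ends up under a hypothetical /srv/ci path on the docker host, the Traefik service's volumes would become:
volumes:
  - /var/run/docker.sock:/var/run/docker.sock
  # absolute path on the docker host instead of a path relative to the job's checkout
  - /srv/ci/traefik/config_lb:/etc/traefik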

How can I populate volumes with local content using ECS and docker-compose

The Situation
I am trying to set up a Prometheus / Grafana cluster using AWS ECS. Both Prometheus and Grafana need configuration files. Normally I would use a volume to pass that kind of information to a docker image.
Since these are two services, I would like to use docker-compose to set them both up and tie them together at once.
The Attempt
Here's the compose file that I would use for a normal docker setup:
version: '3.0'
volumes:
  prometheus_data: {}
  grafana_data: {}
services:
  prometheus:
    image: prom/prometheus
    volumes:
      - ./prometheus/prometheus.yml:/etc/prometheus/prometheus.yml
      - prometheus_data:/prometheus
    command:
      - '--storage.tsdb.path=/prometheus'
    ports:
      - 9090:9090
  grafana:
    image: grafana/grafana
    volumes:
      - grafana_data:/var/lib/grafana
      - ./grafana/provisioning/:/etc/grafana/provisioning/
    env_file:
      - ./grafana/config.monitoring
    ports:
      - 3000:3000
This does not appear to actually work when I run ecs-cli compose service up. Specifically, the tasks start but then crash, and I'm not seeing any evidence that the configuration files were actually injected.
This guide explains how to set up a Prometheus image on ECS, but it is actually creating a configured docker image and publishing that image -- it's not using docker compose.
The Question
Is there a way to inject files (e.g. config files) from my local computer into my ECS images / tasks using docker-compose?
Docker containers have to be treated differently when it comes to ECS. The above docker-compose file is fine for a local setup, but with ECS I would not recommend mounting.
Instead, I would recommend building the config file into the Docker image, for example:
FROM prom/prometheus
COPY myconfig.yml /etc/prometheus/prometheus.yml
Also, I would prefer ECR as the Docker registry in AWS.
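A rough sketch of building and pushing such an image to ECR (the account ID, region, and repository name are placeholders):
# hypothetical account ID, region, and repository name
aws ecr get-login-password --region us-east-1 | docker login --username AWS --password-stdin 123456789012.dkr.ecr.us-east-1.amazonaws.com
docker build -t 123456789012.dkr.ecr.us-east-1.amazonaws.com/prometheus-configured:latest .
docker push 123456789012.dkr.ecr.us-east-1.amazonaws.com/prometheus-configured:latest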
The disadvantages of mounting in the case of ECS:
You will need to keep the config on the EC2 instance.
You will not be able to use it with Fargate, as there is no server to manage in Fargate.
You will be dependent on the AMI in the case of auto-scaling, as your container depends on the config.

Can I deploy a local build to Docker Swarm in a virtual machine?

I am learning Docker and trying to follow the Docker tutorial, and I am at step 4 here.
Basically, in this step we create two VMs for Docker Swarm: one as swarm manager and one as swarm worker.
I think it pulls the image pushed to Docker Hub to the virtual machines to get the service working in the swarm. The problem is I am not pushing my built image to Docker Hub.
My question is, can I use a local build to deploy to the swarm VMs?
I tried to change the image line in the example docker-compose.yml to build, like so:
version: "3"
services:
web:
# replace username/repo:tag with your name and image details
# image: friendlyhello
build: .
deploy:
replicas: 5
resources:
limits:
cpus: "0.1"
memory: 50M
restart_policy:
condition: on-failure
ports:
- "4000:80"
networks:
- webnet
networks:
webnet:
It of course does not work, which is why I am asking if there is a way to do this.
You can create a local registry on the VM or your local machine and push/pull images from the local repo:
docker run -d -p 5000:5000 --restart=always --name registry registry:2
Then name/tag your images using
localhost:5000/Image_Name:Tag
Then push images using
docker push localhost:5000/Image_Name:Tag
This will let you save your images in a local registry that your swarm can use without pushing to Docker Hub.
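Tied to this tutorial's friendlyhello image, the flow might look like this (a sketch; the tag and stack name are placeholders):
docker build -t friendlyhello .
docker tag friendlyhello localhost:5000/friendlyhello:v1
docker push localhost:5000/friendlyhello:v1
# then reference the registry image in docker-compose.yml instead of build:
#   image: localhost:5000/friendlyhello:v1
docker stack deploy -c docker-compose.yml mystack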

Running Traefik on Docker Windows Container

I'm trying to run Traefik on Docker in a native Windows container, but I can't find any example. I just want to run the Getting Started example with whoami.
I have tried many parameters without success. I have two questions:
How do I pass a configuration file to Traefik with a Windows container? (File binding doesn't work on Windows.)
How do I connect to the Docker host with a named pipe?
Example of a docker-compose file I've tried:
version: '3.7'
services:
  reverse-proxy:
    image: traefik:v1.7.2-nanoserver # The official Traefik docker image
    command: --api --docker --docker.endpoint=npipe:////./pipe/docker_engine # Enables the web UI and tells Træfik to listen to docker
    ports:
      - "80:80" # The HTTP port
      - "8080:8080" # The Web UI (enabled by --api)
    volumes:
      - source: '\\.\pipe\docker_engine'
        target: '\\.\pipe\docker_engine'
        type: npipe
  whoami:
    image: emilevauge/whoami # A container that exposes an API to show its IP address
    labels:
      - "traefik.frontend.rule=Host:whoami.docker.localhost"
The Traefik dashboard works fine on 8080, but no provider is found and the whoami container is not found.
I'm on Windows 10 1803, Docker version 18.06.1-ce, build e68fc7a, docker-compose version 1.22.0, build f46880fe.
Note that Traefik works fine if I launch it on my Windows machine (not in a container).
Thank you for help.
There is a working example of it here. If you would like to run it in swarm, try using docker network create and docker service create instead of docker stack deploy. See my question here for more details.
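A very rough sketch of that approach (the network name and flags are assumptions, and the npipe mount and the whoami image in particular may need adjusting for your Windows setup):
docker network create --driver overlay traefik-net
docker service create --name whoami --network traefik-net --label "traefik.frontend.rule=Host:whoami.docker.localhost" --label "traefik.port=80" emilevauge/whoami
docker service create --name reverse-proxy --network traefik-net -p 80:80 -p 8080:8080 --mount type=npipe,source=\\.\pipe\docker_engine,target=\\.\pipe\docker_engine traefik:v1.7.2-nanoserver --api --docker --docker.swarmMode --docker.endpoint=npipe:////./pipe/docker_engine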
