I created a GitLab CI/CD pipeline with the GitLab runner and GitLab itself.
Right now everything runs except one simple script.
It does not copy any files to the volume.
I'm using docker-compose 2.7.
I also have to say that I'm not 100% sure about the volumes.
Here is an excerpt from my .gitlab-ci.yml:
stages:
  - build_single
  - test_single
  - clean_single
  - build_lb
  - test_lb
  - clean_lb

Build_single:
  stage: build_single
  script:
    - docker --version
    - docker-compose --version
    - docker-compose -f ./NodeApp/docker-compose.yml up --scale slave=1 -d
    - docker-compose -f ./traefik/docker-compose_single.yml up -d
    - docker-compose -f ./DockerJMeter/docker-compose.yml up --scale slave=10 -d
When I run ls, all the files are in the correct folder.
Docker-compose:
version: '3.7'
services:
  reverse-proxy:
    # The official v2.0 Traefik docker image
    image: traefik:v2.0
    # Enables the web UI and tells Traefik to listen to docker
    command: --api.insecure=true --providers.docker
    ports:
      # The HTTP port
      - "7000:80"
      # The Web UI (enabled by --api.insecure=true)
      - "7080:8080"
    volumes:
      # So that Traefik can listen to the Docker events
      - /var/run/docker.sock:/var/run/docker.sock
      - ./traefik/config_lb:/etc/traefik
    networks:
      - default

networks:
  default:
    driver: bridge
    name: traefik
For JMeter I use a copy command to bring the configuration files in after it has started, but Traefik needs its files to be present already while it boots.
I thought ./traefik/config_lb:/etc/traefik, with a '.' in front of traefik, creates a path relative to the docker-compose file.
Is this wrong?
I should also mention that GitLab and the runner are both dockerized on the host system. So the Docker instance is running on the host system, and gitlab-runner also uses its docker.sock.
Best Regards!
When you run gitlab-runner in a Docker container, it starts another container, the executor, based on an image that you specify in .gitlab-ci.yml. The gitlab-runner uses the docker sock of the docker host (see /var/run/docker.sock:/var/run/docker.sock in /etc/gitlab-runner/config.toml) to start the executor.
When you then start another container using docker-compose, again the docker sock is used. Any source paths that you specify in docker-compose.yml have to point to paths on the docker host, otherwise the destination in the created service will be empty (given the source path does not exist).
So what you need to do is find the path to traefik/config_lb on the docker host and provide that as the source.
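For example, one way to make the mount work is to point the volume at an absolute path on the docker host instead of a path relative to the compose file. This is only a sketch; the host-side directory /srv/ci/traefik/config_lb is an assumption, and you would substitute whatever directory actually holds the config on your docker host:

version: '3.7'
services:
  reverse-proxy:
    image: traefik:v2.0
    command: --api.insecure=true --providers.docker
    ports:
      - "7000:80"
      - "7080:8080"
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
      # absolute path on the docker host, not relative to the compose file inside the executor
      - /srv/ci/traefik/config_lb:/etc/traefik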
I have existing docker-compose.yml file that runs on my Docker CE standalone server.
I would like to deploy this same configuration using the AWS ECS service. The documentation of the ecs-cli tool states that Docker Compose files can be used. Other (simpler) container configs have worked with my existing files.
With my configuration, this errors with:
ERRO[0000] Unable to open ECS Compose Project error="External option is not supported"
FATA[0000] Unable to create and read ECS Compose Project error="External option is not supported"
I am using "external" Docker volumes, so that they are auto-generated as required and not deleted when a container is stopped or removed.
This is a simplification of the docker-compose.yml file I am testing with and would allow me to mount the volume to a running container:
version: '3'
services:
  busybox:
    image: busybox:1.31.1
    volumes:
      - ext_volume:/path/in/container

volumes:
  ext_volume:
    external: true
Alternatively, I have read in other documentation that an ecs-params.yml file in the same directory can be used to pass in variables. Is this a replacement for my docker-compose.yml file? I had expected to leave its syntax unchanged.
Working config (this keeps the container running, so I could ssh in and view the mounted volume):
version: '3'
services:
  alpine:
    image: alpine:3.12
    volumes:
      - test_docker_volume:/path/in/container
    command:
      - tail
      - -f
      - /dev/null

volumes:
  test_docker_volume:
And in ecs-params.yml:
version: 1
task_definition:
  services:
    alpine:
      cpu_shares: 100
      mem_limit: 28000000
  docker_volumes:
    - name: test_docker_volume
      scope: "shared"
      autoprovision: true
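For reference, with both files in the same directory, bringing the task up might look roughly like this (the project name is an assumption; ecs-cli compose picks up ecs-params.yml automatically, or you can point at it explicitly with --ecs-params):

ecs-cli compose --project-name alpine-test --file docker-compose.yml --ecs-params ecs-params.yml up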
I am using Docker Toolbox (Windows 7) to create my Docker image, and now I would like to use Kubernetes as a container orchestrator.
I want to run Kubernetes locally; I installed it using minikube and kubectl. Is this the best way? Can I use k3s on Windows 7?
And is it possible to create a private registry like Docker Hub on Windows 7?
Thank you.
The easiest way to experiment with Kubernetes locally is with Minikube.
As for a docker registry, I would suggest running the official registry image from Docker Hub. When you want to step up, Nexus is a really nice choice.
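For example, the registry image from the Docker docs can be started with a single command (port 5000 and the container name are the documented defaults); images are then pushed to localhost:5000/<image>:

docker run -d -p 5000:5000 --restart=always --name registry registry:2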
If you want to play with Kubernetes, the latest version of Docker Desktop allows you to set up a fully functional Kubernetes environment on your desktop and enable it with a click; see the Docker docs.
A private registry allows you to store your own images and pull official images provided by vendors. Docker Hub is a cloud service and just one of many registries available.
Docker Desktop includes a standalone Kubernetes server and client, as well as Docker CLI integration. The Kubernetes server runs locally within your Docker instance, is not configurable, and is a single-node cluster.
Refer to: https://docs.docker.com/docker-for-windows/kubernetes/
The Kubernetes server runs within a Docker container on your local system, and is only for local testing. When Kubernetes support is enabled, you can deploy your workloads, in parallel, on Kubernetes, Swarm, and as standalone containers. Enabling or disabling the Kubernetes server does not affect your other workloads.
You can deploy a stack on Kubernetes with docker stack deploy, the docker-compose.yml file, and the name of the stack.
docker stack deploy --compose-file /path/to/docker-compose.yml mystack
docker stack services mystack
To run on Kubernetes, specify the orchestrator in your stack deployment:
docker stack deploy --orchestrator kubernetes --compose-file /path/to/docker-compose.yml mystack
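Once deployed with the Kubernetes orchestrator, the stack's services show up as regular Kubernetes objects, which you can inspect with kubectl (assuming Docker Desktop's Kubernetes context is the active one):

kubectl get pods
kubectl get services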
Create a volume directory for nexus-data. I used the /nexus-data directory, which is the mount point of the second disk:
mkdir /nexus-data
chown -R 200 /nexus-data
Example apps:
version: '3.3'
services:
  traefik:
    image: traefik:v2.2
    container_name: traefik
    restart: always
    command:
      - "--log.level=DEBUG"
      - "--api.insecure=true"
      - "--providers.docker=true"
      - "--providers.docker.exposedbydefault=true"
      - "--entrypoints.web.address=:80"
      - "--entrypoints.websecure.address=:443"
    ports:
      - 80:80
      - 443:443
    networks:
      - nexus
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
  nexus:
    container_name: nexus
    image: sonatype/nexus3
    restart: always
    networks:
      - nexus
    volumes:
      - /nexus-data:/nexus-data
    labels:
      # Traefik v2 label for the backend port (the v1-style traefik.port label is not read by v2)
      - traefik.http.services.nexus.loadbalancer.server.port=8081
      - traefik.http.routers.nexus.rule=Host(`NEXUS.mydomain.com`)
      - traefik.enable=true
      - traefik.http.routers.nexus.entrypoints=websecure
      - traefik.http.routers.nexus.tls=true
      # the 'myresolver' certificate resolver must also be defined via --certificatesresolvers.* flags
      - traefik.http.routers.nexus.tls.certresolver=myresolver

networks:
  nexus:
    external: true
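Once both containers are up, you can check the routing roughly like this (the hostname comes from the router rule above and must resolve to the Traefik host; the flags just skip certificate verification and fetch the headers):

# check that Traefik routes the hostname to the Nexus UI
curl -kI https://NEXUS.mydomain.com
# follow the Nexus container logs while it initializes
docker logs -f nexus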
I have Docker Desktop installed on my Windows PC. In it, I have self-hosted GitLab in one Docker container. Today I tried to back up my GitLab by typing the following command:
docker exec -t <my-container-name> gitlab-backup create
After running this command the backup was successful and I saw a message that the backup was done. I then restarted Docker Desktop and waited for the container to start. When it was up again, I accessed the GitLab interface, but I saw a fresh GitLab instance.
I then typed the following command to restore my backup:
docker exec -it <my-container-name> gitlab-backup restore
But I saw this message:
No backups found in /var/opt/gitlab/backups
Please make sure that file name ends with _gitlab_backup.tar
What can be the reason? Am I doing it the wrong way? I took these commands from the official GitLab website.
I have this in the docker-compose.yml file:
version: "3.6"
services:
web:
image: 'gitlab/gitlab-ce'
container_name: 'gitlab'
restart: always
hostname: 'localhost'
environment:
GITLAB_OMNIBUS_CONFIG: |
external_url 'http://localhost:9090'
gitlab_rails['gitlab_shell_ssh_port'] = 2224
networks:
- gitlab-network
ports:
- '80:80'
- '443:443'
- '9090:9090'
- '2224:22'
volumes:
- '/srv/gitlab/config:/etc/gitlab'
- '/srv/gitlab/logs:/var/log/gitlab'
- '/srv/gitlab/data:/var/opt/gitlab'
networks:
gitlab-network:
name: gitlab-network
I used this command to run the container:
docker-compose up --build --abort-on-container-exit
If you started your container using Volumes, try looking at C:\ProgramData\docker\volume for your backup.
The backup is normally located at: /var/opt/gitlab/backups within the container. So hopefully you mapped /var/opt/gitlab to either a volume or a bind mount.
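If the backup currently exists only inside the container, you can also copy it out to the host before restarting anything; a sketch, assuming the container name gitlab from the compose file above:

docker cp gitlab:/var/opt/gitlab/backups ./gitlab-backups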
Did you try supplying the name of the backup file, as for the omnibus install? When I've restored a backup in Docker, I basically use the omnibus instructions, but use docker exec to do it. Here are the commands I've used from my notes.
docker exec -it gitlab gitlab-ctl stop unicorn
docker exec -it gitlab gitlab-ctl stop sidekiq
docker exec -it gitlab gitlab-rake gitlab:backup:restore BACKUP=1541603057_2018_11_07_10.3.4
docker exec -it gitlab gitlab-ctl start
docker exec -it gitlab gitlab-rake gitlab:check SANITIZE=true
It looks like they added a gitlab-backup command at some point, so you can probably use that instead of gitlab-rake.
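With the newer gitlab-backup wrapper, the restore step would then look roughly like this (the timestamp is simply the one from the example above; use the prefix of your own ..._gitlab_backup.tar file in /var/opt/gitlab/backups):

docker exec -it gitlab gitlab-backup restore BACKUP=1541603057_2018_11_07_10.3.4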
How do I dynamically pass a container IP into another Dockerfile? I am running two containers: a) Redis, b) a Java application.
I need to pass the Redis URL at run time to my Java arguments.
Currently I manually look up the Redis IP, copy it into the Dockerfile, and then build a new image of the Java application with that IP.
docker run --name my-redis -d redis
docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' my-redis
In the Dockerfile (Java application):
CMD ["-Dspring.redis.host=172.17.0.2", "-jar", "/apps/some-0.0.1-SNAPSHOT.jar"]
Can I use a script to update the Dockerfile, or can I use an environment variable?
You can assign a static IP address to your Docker container when you run it, by following these steps:
1 - create a custom network:
docker network create --subnet=172.17.0.0/16 redis-net
2 - run the Redis container on that network and assign it the IP address:
docker run --net redis-net --ip 172.17.0.2 --name my-redis -d redis
Now the my-redis container has the static IP address 172.17.0.2, so you no longer need to inspect it.
3 - now run the Java application container; it must use the same network:
docker run --net redis-net my-java-app
Of course you can refine this, for example with environment variables or whatever you find convenient for your setup.
More info can be found in the official docs (search for --ip):
docker run
docker network
Edit (add docker-compose):
I just found out that it is also possible to assign static IPs using docker-compose, and this answer gives an example of how.
This is a similar example just in case:
version: '3'
services:
  redis:
    container_name: redis
    image: redis:latest
    restart: always
    networks:
      vpcbr:
        ipv4_address: 172.17.0.2
  java-app:
    container_name: java-app
    build: <path to Dockerfile>
    networks:
      vpcbr:
        ipv4_address: 172.17.0.3
    depends_on:
      - redis

networks:
  vpcbr:
    driver: bridge
    ipam:
      config:
        - subnet: 172.17.0.0/16
          gateway: 172.17.0.1
official docs: https://docs.docker.com/compose/networking/
hope this helps you find your way.
You should add your containers to the same network. Then at runtime you can refer to the Redis container by its name: a container's name is its hostname on the network, so it is resolved to the container's IP address.
Follow these steps:
First, create a network for the containers:
docker network create my-network
Start redis: docker run -d --network=my-network --name=redis redis
Edit the Java application's Dockerfile: replace -Dspring.redis.host=172.17.0.2 with -Dspring.redis.host=redis and build again.
Finally start the Java application container: docker run -it --network=my-network your_image. Optionally you can give the container a name, but it is not required because you do not access the Java application's container from the Redis container.
Alternatively you can use a docker-compose file. By default docker-compose creates a network for running services. I am not aware of your full setup, so I will provide a sample docker-compose.yml that illustrates the main concept.
version: "3.7"
services:
redis:
image: redis
java_app_image:
image: your_image_name
In both ways, you can reach the Redis container from the Java application dynamically by using the container's hostname instead of providing a static IP.
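If you prefer not to bake the host into the image at all, Spring Boot's relaxed binding also lets you pass it as an environment variable; a minimal sketch based on the sample above (SPRING_REDIS_HOST maps to spring.redis.host, and the image name is still a placeholder):

version: "3.7"
services:
  redis:
    image: redis
  java_app_image:
    image: your_image_name
    environment:
      # Spring Boot reads SPRING_REDIS_HOST as spring.redis.host
      - SPRING_REDIS_HOST=redis
    depends_on:
      - redis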
I am trying to run Traefik on Docker with Windows native containers, but I can't find any example. I just want to run the Getting Started example with whoami.
I have tried many parameters without success. I have two questions:
how to pass a configuration file to Traefik with a Windows container? (binding a file doesn't work on Windows)
how to connect to the Docker host with a named pipe?
Example of a docker-compose file I've tried:
version: '3.7'
services:
  reverse-proxy:
    image: traefik:v1.7.2-nanoserver # The official Traefik docker image
    command: --api --docker --docker.endpoint=npipe:////./pipe/docker_engine # Enables the web UI and tells Traefik to listen to docker
    ports:
      - "80:80"     # The HTTP port
      - "8080:8080" # The Web UI (enabled by --api)
    volumes:
      - source: '\\.\pipe\docker_engine'
        target: '\\.\pipe\docker_engine'
        type: npipe
  whoami:
    image: emilevauge/whoami # A container that exposes an API to show its IP address
    labels:
      - "traefik.frontend.rule=Host:whoami.docker.localhost"
The Traefik dashboard works fine on 8080, but no provider is found and the whoami container is not found.
I'm on Windows 10 1803, Docker version 18.06.1-ce, build e68fc7a, docker-compose version 1.22.0, build f46880fe.
Note that Traefik works fine if I launch it directly on Windows (not in a container).
Thank you for help.
There is a working example of it here. If you would like to run it in swarm, try using docker network create and docker service create instead of docker stack deploy. See my question here for more details.
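For the swarm variant mentioned above, the commands would look roughly like the following sketch; the network and service names, the npipe mount, and the swarm-mode flag mirror the compose file from the question and are assumptions rather than a verified Windows setup:

docker network create --driver overlay traefik-net
docker service create --name reverse-proxy --network traefik-net --publish 80:80 --publish 8080:8080 --mount type=npipe,source=\\.\pipe\docker_engine,target=\\.\pipe\docker_engine traefik:v1.7.2-nanoserver --api --docker --docker.endpoint=npipe:////./pipe/docker_engine --docker.swarmMode
docker service create --name whoami --network traefik-net --label traefik.frontend.rule=Host:whoami.docker.localhost emilevauge/whoami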