How to docker compose up a Jenkins agent node with Git installed

I docker compose up my Jenkins agent node with:

jenkins_barcelona_node:
  image: jenkins/ssh-agent:jdk11
  privileged: true
  user: root
  group_add:
    - "998"
  depends_on:
    - jenkins          # <-- master node
    - reverse-proxy
  container_name: barcelona
  restart: "always"
  hostname: barcelona
  networks:
    - akogare-net
  expose:
    - 22
  environment:
    - JENKINS_AGENT_SSH_PUBKEY=ssh-rsa......
  volumes:
    - barcelona-data:/var/barcelona_home
    - /usr/bin/docker:/usr/bin/docker
    - /var/run/docker.sock:/var/run/docker.sock
This results in a Debian container running a Jenkins agent node and works as expected. But there is one thing I want to improve: my pipeline needs Git on this node. So currently I connect to the container via SSH and install Git manually with apt-get install git-all.
Is there a way to deploy a Jenkins agent node with Git already installed, so I don't need to install it manually?
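One common way to do this (a sketch, not an official image): build a small derived image on top of jenkins/ssh-agent with Git baked in, and point the compose service at it with build: instead of image:.

```dockerfile
# Sketch: a derived SSH agent image with Git preinstalled.
FROM jenkins/ssh-agent:jdk11
USER root
# Plain "git" is usually enough for pipelines; "git-all" pulls in far more.
RUN apt-get update && \
    apt-get install -y --no-install-recommends git && \
    rm -rf /var/lib/apt/lists/*
```

In the compose file, replace image: jenkins/ssh-agent:jdk11 with build: ./barcelona (assuming, hypothetically, the Dockerfile lives in a barcelona/ directory next to docker-compose.yml).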


docker container communication in jenkins build

I have the docker-compose.yml file below:

freeradius:
  image: freeradius/freeradius-server:latest-alpine
  restart: always
  volumes:
    - ./radius/raddb/users:/etc/raddb/users:ro
    - ./radius/raddb/clients.conf:/etc/raddb/clients.conf:ro
  ports:
    - "1812-1813:1812-1813/udp"
  command: [ "radiusd", "-X", "-t" ] # Debug mode with colour
my-service:
  image: 'my-service:latest'
  container_name: my-service
  ports:
    - '443:443'
  environment:
    # some env variable
  extra_hosts:
    - "host.docker.internal:host-gateway"
The compose file above runs while the integration tests run, and the containers are stopped after the ITs finish. One of the test cases calls a class containing the code below to test communication with the AAA server:

radius.authenticate('secret', username, password, host='freeradius', port=1812)

The authentication above works fine on my local machine (macOS), but when I start the build from Jenkins on our Jenkins server, I get an error connecting to the RADIUS server. I guess the hostname is not resolving in the Jenkins build.
Any idea why this happens in the Jenkins build, and what changes I need to make so the ITs pass there?
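A likely cause (an assumption, since the Jenkins setup isn't shown): the service name freeradius only resolves for containers attached to the compose network, and the Jenkins build may be running in a container that isn't attached to it. One way to check and fix this from the Docker host; the network and container names below are hypothetical placeholders:

```shell
# List networks created by compose; the name is usually <project>_default.
docker network ls
# Attach the Jenkins build container to that network (names are placeholders).
docker network connect myproject_default jenkins-agent
# Verify that the service name now resolves inside the build container.
docker exec jenkins-agent getent hosts freeradius
```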

Can't run jenkins image in docker

I have just started to learn Docker.
I have tried to run Jenkins in Docker with the commands:

docker run jenkins
docker run jenkins:latest

But they show this error in the Docker interactive shell:

C:\Program Files\Docker Toolbox\docker.exe: Error response from daemon: manifest for jenkins:latest not found: manifest unknown: manifest unknown.
You can run the container by using the command
docker run -p 8080:8080 -p 50000:50000 jenkins/jenkins:lts
The documentation page is pretty good.
I would use a docker-compose file to:
- mount a volume for the home directory to make it persistent (to look into the build workspace you need to attach another container to it)
- control the version programmatically
- add the Docker client or other utilities later
- add 'fixed' agents
docker-compose file:

version: '3.5'
services:
  jenkins-server:
    build: ./JenkinsServer
    container_name: jenkins
    restart: always
    environment:
      JAVA_OPTS: "-Xmx1024m"
    ports:
      - "50000:50000"
      - "8080:8080"
    networks:
      jenkins:
        aliases:
          - jenkins
    volumes:
      - jenkins-data:/var/jenkins_home
networks:
  jenkins:
    external: true
volumes:
  jenkins-data:
    external: true
Dockerfile for the server:

FROM jenkins/jenkins:2.263.2-lts
USER root
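To cover the "Docker client or other utilities" point from the list above, the Dockerfile can be extended, for example (a sketch; docker.io is the usual Debian package name, and the host's /var/run/docker.sock still has to be mounted into the container for the client to be useful):

```dockerfile
FROM jenkins/jenkins:2.263.2-lts
USER root
# Install the Docker CLI so pipelines can talk to the host daemon.
RUN apt-get update && \
    apt-get install -y --no-install-recommends docker.io && \
    rm -rf /var/lib/apt/lists/*
USER jenkins
```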

Pull Image from Gitlab Registry in Docker Compose File

I want to deploy a docker stack on my own server. I've written a .gitlab-ci.yml file that currently builds the images in my stack and pushes them to my gitlab registry:
build:
  stage: build
  image: docker:stable
  services:
    - docker:dind
  before_script:
    - docker login -u gitlab-ci-token -p $CI_JOB_TOKEN $CI_REGISTRY
    - docker info
  script:
    - docker build -t $DOCKER_IMAGE1_TAG -f dir1/Dockerfile ./dir1
    - docker push $DOCKER_IMAGE1_TAG
    - docker build -t $DOCKER_IMAGE2_TAG -f dir2/Dockerfile ./dir2
    - docker push $DOCKER_IMAGE2_TAG
I'm struggling to find a way to run docker stack deploy on my own server with the docker-compose.yml file I've written, so that it successfully pulls the images from my GitLab registry. I figure I could use sshpass to SSH into my server, copy the docker-compose.yml file across, and run the deploy from there, but I'm not sure of the best way to let my server access the images now located in my GitLab registry:
# Need to SSH into the server, transfer over the docker-stack file and run docker stack deploy
deploy:
  stage: deploy
  environment:
    name: production
  image: trion/ng-cli-karma
  before_script:
    - apt-get update -qq && apt-get install -y -qq sshpass
    - eval $(ssh-agent -s)
This is a section of my docker-compose file:
version: "3.2"
services:
  octeditor:
    image: image # how to set this to the image in my container registry?
    ports:
      - "3000:3000"
    networks:
      - front-tier
    deploy:
      replicas: 1
      update_config:
        parallelism: 1
        failure_action: rollback
      placement:
        constraints:
          - 'node.role == manager'
      restart_policy:
        condition: on-failure
        delay: 5s
        max_attempts: 3
        window: 120s
How can I pull the images from my GitLab registry? Is this the preferred way of creating a Docker deployment on a remote server, via GitLab CI?
I had this difficulty recently as well; I finally found out that the solution is just to reference the image by its full path in the private registry, as in my case with GitLab:
version: "3.2"
services:
  octeditor:
    image: registry.gitlab.com/project-or-group/project-name/image-name:tag
    ports:
      - "3000:3000"
    networks:
      - front-tier
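Note that the server pulling the image also needs credentials for the private registry. A sketch of the deploy side (the deploy-token values and stack name are placeholders): docker stack deploy accepts --with-registry-auth, which forwards the manager's registry credentials to the agent nodes so they can pull the image too.

```shell
# Log in to the GitLab registry on the swarm manager (token values are placeholders).
docker login registry.gitlab.com -u <deploy-token-username> -p <deploy-token>
# Deploy the stack; --with-registry-auth sends registry credentials to agent nodes.
docker stack deploy --with-registry-auth -c docker-compose.yml mystack
```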

where should I put my project git repository when using docker swarm

I have a Docker setup with the following docker-compose.yml configuration:

version: '2'
services:
  payments:
    build: .
    container_name: 'my-container'
    expose:
      - "80"
      - "27017"
    ports:
      - "80:80"
      - "27017:27017"
    volumes:
      - .:/var/www/html
    read_only: false
    privileged: true
    tty: true
and this Dockerfile:

FROM my-private/repo
RUN apt-get update && apt-get install -y supervisor
EXPOSE 22 80
On my localhost it works as expected. However, I want to deploy my containers to a container service configured with Swarm (1 master node and 2 agents).
Where should I put the code? On my localhost the code is mounted from "." to /var/www/html, but after deploying to Swarm the folder /var/www/html is empty.
Any ideas? Thank you for your help.
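One common approach (a sketch, not necessarily what the asker's setup requires): a bind mount like .:/var/www/html resolves against the filesystem of whichever Swarm node runs the task, so for Swarm the code is usually copied into the image at build time instead:

```dockerfile
FROM my-private/repo
RUN apt-get update && apt-get install -y supervisor
# Bake the project into the image so every Swarm node has the code,
# instead of relying on a host bind mount that only exists locally.
COPY . /var/www/html
EXPOSE 22 80
```

The image is then pushed to a registry reachable by all nodes, and the volumes: bind mount is dropped from the compose file.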

Configure 3 Mesos instance with 1 master using docker and docker-compose

By reading this article: how-to-configure-a-production-ready-mesosphere-cluster-on-ubuntu-14-04,
I wanted to start my own dockerized Mesosphere setup using 3 servers.
The setup is similar to the article's, except I use 4 dockerized services:
Docker Zookeeper
Docker Mesos Master
Docker Mesos Slave
Docker Marathon
I got really confused by the configuration file locations, because the article installs all 4 components on the same machine, while my Docker setup spreads them across different servers. How do I apply the steps correctly using Docker?
I have
Server 1 - prod02 - prod02.domain.com
Server 2 - preprod02 - preprod02.domain.com
Server 3 - prod01 - prod01.domain.com
Here is the docker-compose.yml I started writing for running the master Mesosphere server (server 1):
zookeeper:
  build: zookeeper
  restart: always
  command: /usr/share/zookeeper/bin/zkServer.sh start-foreground
  ports:
    - "2181:2181"
    - "2888:2888"
    - "3888:3888"
master:
  build: master
  restart: always
  environment:
    - MESOS_HOSTNAME=master.prod-02.example.com
    - MESOS_ZK=zk://prod-02.example.com:2181,prod-01.example.com:2181,preprod-02.example.com:2181/mesos
    - MESOS_QUORUM=1
    - MESOS_LOG_DIR=/var/log/mesos
    - MESOS_WORK_DIR=/var/lib/mesos
  volumes:
    - /srv/docker/mesos-master:/var/log/mesos
  ports:
    - "5050:5050"
slave:
  build: slave
  restart: always
  privileged: true
  environment:
    - MESOS_HOSTNAME=slave.prod-02.example.com
    - MESOS_MASTER=zk://prod-02.example.com:2181,prod-01.example.com:2181,preprod-02.example.com:2181/mesos
    - MESOS_EXECUTOR_REGISTRATION_TIMEOUT=5mins #also in Dockerfile
    - MESOS_CONTAINERIZERS=docker,mesos
    - MESOS_LOG_DIR=/var/log/mesos
    - MESOS_LOGGING_LEVEL=INFO
  volumes:
    - /var/run/docker.sock:/var/run/docker.sock
    - /usr/bin/docker:/usr/bin/docker
    - /sys:/sys:ro
    - /srv/docker/mesos-slave:/var/log/mesos
    - /srv/docker/mesos-data/docker.tar.gz:/etc/docker.tar.gz
  ports:
    - "5051:5051"
marathon:
  build: marathon
  restart: always
  environment:
    - MARATHON_HOSTNAME=marathon.prod-02.example.com
    - MARATHON_MASTER=zk://prod-02.example.com:2181,prod-01.example.com:2181,preprod-02.example.com:2181/mesos
    - MARATHON_ZK=zk://prod-02.example.com:2181,prod-01.example.com:2181,preprod-02.example.com:2181/marathon
  ports:
    - "8081:8080"
My project directory looks like this:

/prod-02
  /marathon
    Dockerfile
  /master
    Dockerfile
  /slave
    Dockerfile
  /zookeeper
    /assets
      /conf
        myid
        zoo.cfg
  docker-compose.yml
With this config, the master and slave servers can't start; the log is:
WARNING: Logging before InitGoogleLogging() is written to STDERR
F1016 12:12:49.976361 1 process.cpp:895] Failed to initialize: Failed to bind on XXX.XXX.XXX.XXX:5051: Cannot assign requested address: Cannot assign requested address [99]
*** Check failure stack trace: ***
I feel a bit lost due to the lack of documentation; any help with the configuration is much appreciated.
I finally sorted this out: what was missing was the external IP address (MESOS_IP) set for the master and slave, and also the net: host mode.
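Applied to the compose file above, that fix looks roughly like this fragment (a sketch; the IP is a placeholder for each server's external address, and net: host is compose v1/v2 syntax for host networking):

```yaml
master:
  build: master
  net: host                       # host networking so Mesos binds the real interface
  environment:
    - MESOS_IP=XXX.XXX.XXX.XXX    # this server's external IP (placeholder)
    - MESOS_HOSTNAME=master.prod-02.example.com
```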
