Old Question
Is it possible to automate a Jenkins installation (Jenkins binaries, plugins, credentials) using a configuration management/automation tool such as Ansible?
Edited
After asking this question I learned about and found many ways to automate a Jenkins installation. I found docker-compose an interesting way to do it. So my question now is: is there a better way to automate a Jenkins installation than the one below, and is there any risk in the way I am handling this automation?
I have taken advantage of the Jenkins Docker image and did the automation with docker-compose.
Dockerfile
FROM jenkinsci/blueocean
RUN jenkins-plugin-cli --plugins kubernetes workflow-aggregator git configuration-as-code blueocean matrix-auth
docker-compose.yaml
version: '3.7'
services:
dind:
image: docker:dind
privileged: true
networks:
jenkins:
aliases:
- docker
expose:
- "2376"
environment:
- DOCKER_TLS_CERTDIR=/certs
volumes:
- type: volume
source: jenkins-home
target: /var/jenkins_home
- type: volume
source: jenkins-docker-certs
target: /certs/client
jcac:
image: nginx:latest
volumes:
- type: bind
source: ./jcac.yml
target: /usr/share/nginx/html/jcac.yml
networks:
- jenkins
jenkins:
build: .
ports:
- "8080:8080"
- "50000:50000"
environment:
- DOCKER_HOST=tcp://docker:2376
- DOCKER_CERT_PATH=/certs/client
- DOCKER_TLS_VERIFY=1
- JAVA_OPTS="-Djenkins.install.runSetupWizard=false"
- CASC_JENKINS_CONFIG=http://jcac/jcac.yml
- GITHUB_ACCESS_TOKEN=${GITHUB_ACCESS_TOKEN:-fake}
- GITHUB_USERNAME=${GITHUB_USERNAME:-fake}
volumes:
- type: volume
source: jenkins-home
target: /var/jenkins_home
- type: volume
source: jenkins-docker-certs
target: /certs/client
read_only: true
networks:
- jenkins
volumes:
jenkins-home:
jenkins-docker-certs:
networks:
jenkins:
jcac.yml
credentials:
system:
domainCredentials:
- credentials:
- usernamePassword:
id: "github"
password: ${GITHUB_PASSWORD:-fake}
scope: GLOBAL
username: ${GITHUB_USERNAME:-fake}
- usernamePassword:
id: "slave"
password: ${SSH_PASSWORD:-fake}
username: ${SSH_USERNAME:-fake}
jenkins:
globalNodeProperties:
- envVars:
env:
- key: "BRANCH"
value: "hello"
systemMessage: "Welcome to (one click) Jenkins Automation!"
agentProtocols:
- "JNLP4-connect"
- "Ping"
crumbIssuer:
standard:
excludeClientIPFromCrumb: true
disableRememberMe: false
markupFormatter: "plainText"
mode: NORMAL
myViewsTabBar: "standard"
numExecutors: 4
# nodes:
# - permanent:
# labelString: "slave01"
# launcher:
# ssh:
# credentialsId: "slave"
# host: "worker"
# port: 22
# sshHostKeyVerificationStrategy: "nonVerifyingKeyVerificationStrategy"
# name: "slave01"
# nodeDescription: "SSH Slave 01"
# numExecutors: 3
# remoteFS: "/home/jenkins/workspace"
# retentionStrategy: "always"
securityRealm:
local:
allowsSignup: false
enableCaptcha: false
users:
- id: "admin"
password: "${ADMIN_PASSWORD:-admin123}" #
- id: "user"
password: "${DEFAULTUSER_PASSWORD:-user123}"
authorizationStrategy:
globalMatrix:
permissions:
- "Agent/Build:user"
- "Job/Build:user"
- "Job/Cancel:user"
- "Job/Read:user"
- "Overall/Read:user"
- "View/Read:user"
- "Overall/Read:anonymous"
- "Overall/Administer:admin"
- "Overall/Administer:root"
unclassified:
globalLibraries:
libraries:
- defaultVersion: "master"
implicit: false
name: "jenkins-shared-library"
retriever:
modernSCM:
scm:
git:
remote: "https://github.com/samitkumarpatel/jenkins-shared-libs.git"
traits:
- "gitBranchDiscovery"
The commands to start and stop Jenkins are:
# start Jenkins
docker-compose up -d
# stop Jenkins
docker-compose down
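Since docker-compose.yaml and jcac.yml both read their secrets from environment variables, one way to supply them is a .env file next to docker-compose.yaml, which docker-compose loads automatically. A sketch with placeholder values (the variable names are the ones referenced above):
# .env -- placeholder values, keep this file out of version control
GITHUB_USERNAME=my-github-user
GITHUB_ACCESS_TOKEN=replace-me
GITHUB_PASSWORD=replace-me
SSH_USERNAME=jenkins
SSH_PASSWORD=replace-me
ADMIN_PASSWORD=replace-me
DEFAULTUSER_PASSWORD=replace-me
Note that variables jcac.yml reads (ADMIN_PASSWORD, SSH_PASSWORD, etc.) only reach JCasC if they are also forwarded in the jenkins service's environment: list, the same way GITHUB_USERNAME already is.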
Sure it is :) For Ansible you can always check Ansible Galaxy whenever you want to automate the installation of something. Here is the most popular role for installing Jenkins, and here is its GitHub repo.
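As a rough sketch, assuming the role meant here is geerlingguy.jenkins (variable names below come from that role's README and may differ between role versions):
# ansible-galaxy install geerlingguy.java geerlingguy.jenkins
- hosts: jenkins_servers
  become: true
  vars:
    jenkins_plugins:            # mirror the plugin list from the Dockerfile above
      - configuration-as-code
      - blueocean
      - git
  roles:
    - geerlingguy.java          # Jenkins needs a JDK first
    - geerlingguy.jenkins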
Related
I installed the GitLab runner via a Helm chart on my Kubernetes cluster.
While installing via Helm I used the config values.yaml below.
But my runner gets stuck every time at the docker login command;
without docker login everything works fine.
I have no idea what is wrong :(
Any help appreciated!
Error: write tcp 10.244.0.44:50882->188.72.88.34:443: use of closed network connection
.gitlab-ci.yaml file
build docker image:
stage: build
image: docker:latest
services:
- name: docker:dind
entrypoint: ["env", "-u", "DOCKER_HOST"]
command: ["dockerd-entrypoint.sh"]
variables:
DOCKER_HOST: tcp://localhost:2375/
DOCKER_DRIVER: overlay2
DOCKER_TLS_CERTDIR: ""
before_script:
- mkdir -p $HOME/.docker
- echo passwd| docker login -u user https://registry.labs.com --password-stdin
script:
- docker images
- docker ps
- docker pull registry.labs.com/jappweek:a_zh
- docker build -t "$CI_REGISTRY"/"$CI_REGISTRY_IMAGE":1.8 .
- docker push "$CI_REGISTRY"/"$CI_REGISTRY_IMAGE":1.8
tags:
- k8s
values.yaml file
image:
registry: registry.gitlab.com
#image: gitlab/gitlab-runner:v13.0.0
image: gitlab-org/gitlab-runner
# tag: alpine-v11.6.0
imagePullPolicy: IfNotPresent
gitlabUrl: https://gitlab.somebars.com
runnerRegistrationToken: "GR1348941a7jJ4WF7999yxsya9Arsd929g"
terminationGracePeriodSeconds: 3600
#
concurrent: 10
checkInterval: 30
sessionServer:
enabled: false
## For RBAC support:
rbac:
create: true
rules:
- resources: ["configmaps", "pods", "pods/attach", "secrets", "services"]
verbs: ["get", "list", "watch", "create", "patch", "update", "delete"]
- apiGroups: [""]
resources: ["pods/exec"]
verbs: ["create", "patch", "delete"]
clusterWideAccess: false
podSecurityPolicy:
enabled: false
resourceNames:
- gitlab-runner
metrics:
enabled: false
portName: metrics
port: 9252
serviceMonitor:
enabled: false
service:
enabled: false
type: ClusterIP
runners:
config: |
[[runners]]
[runners.kubernetes]
namespace = "{{.Release.Namespace}}"
image = "ubuntu:16.04"
privileged: true
cache: {}
builds: {}
services: {}
helpers: {}
securityContext:
allowPrivilegeEscalation: false
readOnlyRootFilesystem: false
runAsNonRoot: true
privileged: false
capabilities:
drop: ["ALL"]
podSecurityContext:
runAsUser: 100
# runAsGroup: 65533
fsGroup: 65533
resources: {}
affinity: {}
nodeSelector: {}
tolerations: []
hostAliases: []
podAnnotations: {}
podLabels: {}
priorityClassName: ""
secrets: []
configMaps: {}
volumeMounts: []
volumes: []
I bypassed docker login by importing the $HOME/.docker/config.json file, which stores the auth token, from my host machine into GitLab CI:
before_script:
- mkdir -p $HOME/.docker
- echo $DOCKER_AUTH_CONFIG > $HOME/.docker/config.json
$DOCKER_AUTH_CONFIG holds the contents of $HOME/.docker/config.json.
That's all; no docker login is required.
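For reference, this is roughly what that config.json (and therefore the $DOCKER_AUTH_CONFIG CI/CD variable) contains; the registry host comes from the job above and the token is a placeholder:
{
  "auths": {
    "registry.labs.com": {
      "auth": "base64-of-username:password"
    }
  }
}
Storing it as a masked CI/CD variable keeps the token out of the repository.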
I'm new to Drone pipelines and am interested in using them in my current project for CI/CD.
My project tech stack is as follows:
Java
Spring Boot
Maven
I have created a sample Drone pipeline, but I am not able to cache the Maven dependencies that get downloaded and stored in the .m2 folder.
It always says the mount path is not available or not found. Please find the screenshot for the same:
Drone mount path issue
I'm not sure of the path to provide here. Can someone help me understand which mount path to provide so that all the dependencies in the .m2 path get cached?
Adding the pipeline information below:
kind: pipeline
type: docker
name: config-server
steps:
  - name: restore-cache
    image: meltwater/drone-cache
    pull: if-not-exists
    settings:
      backend: "filesystem"
      restore: true
      cache_key: "volume"
      archive_format: "gzip"
      mount:
        - ./target
        - /root/.m2/repository
    volumes:
      - name: cache
        path: /tmp/cache
  - name: build
    image: maven:3.8.3-openjdk-17
    pull: if-not-exists
    environment:
      M2_HOME: /usr/share/maven
      MAVEN_CONFIG: /root/.m2
    commands:
      - mvn clean install -DskipTests=true -B -V
    volumes:
      - name: cache
        path: /tmp/cache
  - name: rebuild-cache
    image: meltwater/drone-cache
    pull: if-not-exists
    settings:
      backend: "filesystem"
      rebuild: true
      cache_key: "volume"
      archive_format: "gzip"
      mount:
        - ./target
        - /root/.m2/repository
    volumes:
      - name: cache
        path: /tmp/cache
trigger:
  branch:
    - main
  event:
    - push
volumes:
  - name: cache
    host:
      path: /var/lib/cache
Thanks in advance..
Resolved the issue. Please find the solution below and working drone pipeline.
kind: pipeline
type: docker
name: data-importer
steps:
- name: restore-cache
image: meltwater/drone-cache
pull: if-not-exists
settings:
backend: "filesystem"
restore: true
ttl: 1
cache_key: "volume"
archive_format: "gzip"
mount:
- ./.m2/repository
volumes:
- name: cache
path: /tmp/cache
- name: maven-build
image: maven:3.8.6-amazoncorretto-11
pull: if-not-exists
commands:
- mvn clean install -DskipTests=true -Dmaven.repo.local=.m2/repository -B -V
volumes:
- name: cache
path: /tmp/cache
- name: rebuild-cache
image: meltwater/drone-cache
pull: if-not-exists
settings:
backend: "filesystem"
rebuild: true
cache_key: "volume"
archive_format: "gzip"
ttl: 1
mount:
- ./.m2/repository
volumes:
- name: cache
path: /tmp/cache
trigger:
branch:
- main
- feature/*
event:
- push
volumes:
- name: cache
host:
path: /var/lib/cache
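If I read the two pipelines correctly, the decisive change is caching a path inside the build workspace (./.m2/repository) and pointing Maven at it with -Dmaven.repo.local, since each Drone step runs in its own container and only the workspace is shared between steps.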
I built a Spring Boot project and I want to deploy it to minikube using GitLab CI/CD. I'm able to deploy the application by applying the deployment.yml directly from my local machine.
But I'm getting the following error when I try to deploy it from GitLab.
Error
$ kubectl apply -f deployment.yml
The connection to the server localhost:8080 was refused - did you specify the right host or port?
Cleaning up project directory and file based variables
ERROR: Job failed: exit code 1
deployment.yml
apiVersion: apps/v1
kind: Deployment
metadata:
name: user-management
spec:
# the target number of Pods
replicas: 2
selector:
matchLabels:
app: user-management
template:
metadata:
labels:
app: user-management
spec:
containers:
- name: user-management7
image: registry.gitlab.com/PROFILE_NAME/user-management
imagePullPolicy: Always
ports:
- containerPort: 8082
imagePullSecrets:
- name: registry.gitlab.com
.gitlab-ci.yml
image: docker:latest
services:
- docker:dind
- mysql:8
variables:
DOCKER_DRIVER: overlay
SPRING_PROFILES_ACTIVE: gitlab-ci
stages:
- build
- package
- test
- deploy-tb
- deploy-prod
maven-build:
image: maven:3-jdk-8
stage: build
script: "mvn package -B"
artifacts:
paths:
- target/*.jar
docker-build:
stage: package
script:
- docker build -t registry.gitlab.com/PROFILE_NAME/user-management .
- docker login -u gitlab-ci-token -p $CI_BUILD_TOKEN registry.gitlab.com
- docker push registry.gitlab.com/PROFILE_NAME/user-management
test:
image: maven:3-jdk-8
services:
- mysql:8
script:
- "mvn clean test"
artifacts:
when: always
reports:
junit:
- target/surefire-reports/TEST-*.xml
deploy-tb:
image:
name: bitnami/kubectl:latest
entrypoint: [ "" ]
stage: deploy-tb
script:
- kubectl apply -f deployment.yml
environment:
name: prod
url: registry.gitlab.com/PROFILE_NAME/user-management
I don't know what I'm missing here.
According to the GitLab documentation, you first need to install the GitLab Agent for Kubernetes.
These are the steps for the installation process:
To install the Agent in your cluster:
Define a configuration repository.
Register an agent with GitLab.
Install the agent into the cluster.
Note: On self-managed GitLab instances, a GitLab administrator needs to set up the GitLab Agent Server (KAS).
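Once the agent is connected, the deploy job selects the agent's kubectl context instead of falling back to localhost:8080. A sketch of how deploy-tb could change (the project path and agent name are placeholders for your own):
deploy-tb:
  image:
    name: bitnami/kubectl:latest
    entrypoint: [ "" ]
  stage: deploy-tb
  script:
    # context is <path/to/agent-config-project>:<agent-name> -- placeholders
    - kubectl config use-context path/to/agent-config-project:my-agent
    - kubectl apply -f deployment.yml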
I am running WSO2 IS version 5.8.0 in Docker Swarm, and I scripted a compose file for it that maps the files
deployment.toml, wso2carbon.jks and a directory on the servers.
After changing the keystore I receive this error on admin login:
System error while Authenticating/Authorizing User : Error when handling event : PRE_AUTHENTICATION
If I remove the mapping, the SSL cert is not valid, but I can log in.
PS: I use traefik to redirect to the container.
The stack deploy file:
#IS#
is-hml:
image: wso2/wso2is:5.8.0
ports:
- 4763:4763
- 4443:9443
volumes:
#- /docker/release-hml/wso2/full-identity-server-volume:/home/wso2carbon/wso2is-5.8.0
- /docker/release-hml/wso2/identity-server:/home/wso2carbon/wso2-config-volume
extra_hosts:
- "wso2-hml.valecard.com.br:127.0.0.1"
networks:
traefik_traefik:
aliases:
- is-hml
configs:
#- source: deployment.toml
# target: /home/wso2carbon/wso2is-5.8.0/repository/conf/deployment.toml
#
- source: wso2carbon.jks
target: /home/wso2carbon/wso2is-5.8.0/repository/resources/security/wso2carbon.jks
#- source: catalina-server.xml
# target: /home/wso2carbon/wso2is-5.8.0/repository/conf/tomcat/catalina-server.xml
- source: carbon.xml
target: /home/wso2carbon/wso2is-5.8.0/repository/conf/carbon.xml
#environment:
# - "CATALINA_OPTS=-Xmx2g -Xms2g -XX:MaxPermSize=1024m"
# - "JVM_OPTS=-Xmx2g -Xms2g -XX:MaxPermSize=1024m"
# - "JAVA_OPTS=-Xmx2g -Xms2g"
deploy:
#endpoint_mode: dnsrr
resources:
limits:
cpus: '2'
memory: '4096M'
replicas: 1
labels:
- "traefik.docker.network=traefik_traefik"
- "traefik.backend=is-hml"
- "traefik.port=4443"
- "traefik.frontend.entryPoints=http,https"
- "traefik.frontend.rule=Host:wso2-hml.valecard.com.br"
configs:
deployment.toml:
file: ./wso2-config/deployment.toml
catalina-server.xml:
file: ./wso2-config/catalina-server.xml
wso2carbon.jks:
file: ../../certs/wso2carbon-valecard.jks
carbon.xml:
file: ./wso2-config/carbon.xml
networks:
traefik_traefik:
external: true
The password comes from the deployment.toml.
Thanks.
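For anyone else hitting this: the keystore settings in deployment.toml look roughly like the sketch below (key names assumed from the WSO2 configuration model; the passwords must match the keystore you actually mount in):
[keystore.primary]
file_name = "wso2carbon.jks"
password = "wso2carbon"       # must match the password of the mounted JKS
alias = "wso2carbon"
key_password = "wso2carbon"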
I work with a compose file which looks like this:
version: '3.7'
services:
shinyproxy:
build: /home/shinyproxy
deploy:
#replicas: 3
user: root:root
hostname: shinyproxy
image: shinyproxy-example
networks:
- sp-example-net
volumes:
- type: bind
source: /var/run/docker.sock
target: /var/run/docker.sock
- type: bind
source: /home/shinyproxy/application.yml
target: /opt/shinyproxy/application.yml
....
networks:
sp-example-net:
driver: overlay
attachable: true
This ShinyProxy application uses the following .yml file:
proxy:
port: 5000
template-path: /opt/shinyproxy/templates/2col
authentication: keycloak
admin-groups: admins
users:
- name: jack
password: password
groups: admins
- name: jeff
password: password
container-backend: docker-swarm
docker:
internal-networking: true
container-network: sp-example-net
specs:
- id: 01_hello
display-name: Hello Application
description: Application which demonstrates the basics of a Shiny app
container-cmd: ["R", "-e", "shinyproxy::run_01_hello()"]
container-image: openanalytics/shinyproxy-demo
container-network: "${proxy.docker.container-network}"
access-groups: test
- id: euler
display-name: Euler's number
container-cmd: ["R", "-e", "shiny::runApp('/root/euler')"]
container-image: euler-docker
container-network: "${proxy.docker.container-network}"
access-groups: test
To deploy the stack I run the following command:
docker stack deploy -c docker-compose.yml test
This results in the following: Creating network test_sp-example-net
So instead of sp-example-net, my network's name is test_sp-example-net.
Is there a way to prevent this prefixing of my network name?
Thank you!
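One way to avoid the prefix (a sketch, not tested against this exact stack): create the overlay network once outside the stack and mark it external in the compose file, so docker stack deploy attaches to it instead of creating test_sp-example-net:
# create the network once, outside of any stack
docker network create --driver overlay --attachable sp-example-net

# docker-compose.yml -- only the networks section changes
networks:
  sp-example-net:
    external: true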