I work with a compose file which looks like this:
version: '3.7'
services:
  shinyproxy:
    build: /home/shinyproxy
    deploy:
      #replicas: 3
    user: root:root
    hostname: shinyproxy
    image: shinyproxy-example
    networks:
      - sp-example-net
    volumes:
      - type: bind
        source: /var/run/docker.sock
        target: /var/run/docker.sock
      - type: bind
        source: /home/shinyproxy/application.yml
        target: /opt/shinyproxy/application.yml
....
networks:
  sp-example-net:
    driver: overlay
    attachable: true
This ShinyProxy application uses the following .yml file:
proxy:
  port: 5000
  template-path: /opt/shinyproxy/templates/2col
  authentication: keycloak
  admin-groups: admins
  users:
    - name: jack
      password: password
      groups: admins
    - name: jeff
      password: password
  container-backend: docker-swarm
  docker:
    internal-networking: true
    container-network: sp-example-net
  specs:
    - id: 01_hello
      display-name: Hello Application
      description: Application which demonstrates the basics of a Shiny app
      container-cmd: ["R", "-e", "shinyproxy::run_01_hello()"]
      container-image: openanalytics/shinyproxy-demo
      container-network: "${proxy.docker.container-network}"
      access-groups: test
    - id: euler
      display-name: Euler's number
      container-cmd: ["R", "-e", "shiny::runApp('/root/euler')"]
      container-image: euler-docker
      container-network: "${proxy.docker.container-network}"
      access-groups: test
To deploy the stack I run the following command:
docker stack deploy -c docker-compose.yml test
This results in the following: Creating network test_sp-example-net
So instead of sp-example-net, my network's name is test_sp-example-net.
Is there a way to prevent this kind of prefixing of my network name?
Thank you!
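For reference: docker stack deploy namespaces every resource it creates with the stack name, which is where the test_ prefix comes from. One way around it, assuming the network may be managed outside the stack, is to create the network up front and declare it external in the compose file - a sketch:

  # created once, outside the stack (overlay + attachable, matching the compose settings)
  docker network create --driver overlay --attachable sp-example-net

  # docker-compose.yml
  networks:
    sp-example-net:
      external: true

With compose file format 3.5 or later, adding name: sp-example-net under the network definition should also pin the name even when the stack itself creates the network.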
Related
I'm running Loki for test purposes in Docker and have recently been getting the following error from the Promtail and Loki containers:
level=warn ts=2022-02-18T09:41:39.186511145Z caller=client.go:349 component=client host=loki:3100 msg="error sending batch, will retry" status=429 error="server returned HTTP status 429 Too Many Requests (429): Maximum active stream limit exceeded, reduce the number of active streams (reduce labels or reduce label values), or contact your Loki administrator to see if the limit can be increased"
I have tried increasing limit settings (ingestion_rate_mb and ingestion_burst_size_mb) in my Loki config.
I set up two Promtail jobs - one job ingesting MS Exchange logs from a local directory (currently 8TB and increasing), the other job gets logs spooled from syslog-ng.
I've read that reducing labels helps, but I'm only using two labels.
Configuration
Below are my config files (docker-compose, loki, promtail):
docker-compose.yaml
version: "3"
networks:
loki:
services:
loki:
image: grafana/loki:2.4.2
container_name: loki
restart: always
user: "10001:10001"
ports:
- "3100:3100"
command: -config.file=/etc/loki/local-config.yaml
volumes:
- ${DATADIR}/loki/etc:/etc/loki:rw
- ${DATADIR}/loki/chunks:/loki/chunks
networks:
- loki
promtail:
image: grafana/promtail:2.4.2
container_name: promtail
restart: always
volumes:
- /var/log/loki:/var/log/loki
- ${DATADIR}/promtail/etc:/etc/promtail
ports:
- "1514:1514" # for syslog-ng
- "9080:9080" # for http web interface
command: -config.file=/etc/promtail/config.yml
networks:
- loki
grafana:
image: grafana/grafana:8.3.4
container_name: grafana
restart: always
user: "476:0"
volumes:
- ${DATADIR}/grafana/var:/var/lib/grafana
ports:
- "3000:3000"
networks:
- loki
Loki Config
auth_enabled: false
server:
  http_listen_port: 3100
common:
  path_prefix: /loki
  storage:
    filesystem:
      chunks_directory: /loki/chunks
      rules_directory: /loki/rules
  replication_factor: 1
  ring:
    instance_addr: 127.0.0.1
    kvstore:
      store: inmemory
schema_config:
  configs:
    - from: 2020-10-24
      store: boltdb-shipper
      object_store: filesystem
      schema: v11
      index:
        prefix: index_
        period: 24h
ruler:
  alertmanager_url: http://localhost:9093
# https://grafana.com/docs/loki/latest/configuration/#limits_config
limits_config:
  reject_old_samples: true
  reject_old_samples_max_age: 168h
  ingestion_rate_mb: 12
  ingestion_burst_size_mb: 24
  per_stream_rate_limit: 24MB
chunk_store_config:
  max_look_back_period: 336h
table_manager:
  retention_deletes_enabled: true
  retention_period: 2190h
ingester:
  lifecycler:
    address: 127.0.0.1
    ring:
      kvstore:
        store: inmemory
      replication_factor: 1
    final_sleep: 0s
  chunk_encoding: snappy
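For what it's worth, the 429 above ("Maximum active stream limit exceeded") is governed by the active-stream limits rather than by ingestion_rate_mb/ingestion_burst_size_mb. A hedged sketch of the relevant knobs (option names from the limits_config reference; defaults differ between Loki versions, so check yours):

  limits_config:
    # assumption: these two settings are what the stream-limit 429 refers to
    max_streams_per_user: 0             # 0 disables the per-tenant local limit
    max_global_streams_per_user: 10000  # raise the global per-tenant limit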
Promtail Config
server:
  http_listen_port: 9080
  grpc_listen_port: 0
positions:
  filename: /tmp/positions.yaml
clients:
  - url: http://loki:3100/loki/api/v1/push
scrape_configs:
  - job_name: exchange
    static_configs:
      - targets:
          - localhost
        labels:
          job: exchange
          __path__: /var/log/loki/exchange/*/*/*log
  - job_name: syslog-ng
    syslog:
      listen_address: 0.0.0.0:1514
      idle_timeout: 60s
      label_structured_data: yes
      labels:
        job: "syslog-ng"
    relabel_configs:
      - source_labels: ['__syslog_message_hostname']
        target_label: 'host'
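One thing worth checking: Loki counts one stream per unique label set, and Promtail attaches an automatic filename label to file targets, so the exchange glob above produces a stream per log file, not just per job. A sketch of collapsing that, assuming a Promtail version where the labeldrop pipeline stage is available:

  scrape_configs:
    - job_name: exchange
      pipeline_stages:
        # drop the auto-generated per-file label so all files share one stream
        - labeldrop:
            - filename
      static_configs:
        - targets:
            - localhost
          labels:
            job: exchange
            __path__: /var/log/loki/exchange/*/*/*log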
I'm new to Kubernetes (and Docker, for that matter). I need to understand the process of migrating my existing Vue.js app using DevSpace. I've got the app running, sorta, but I am not connecting to
ws://localhost:4000/graphql
or able to establish a Mongo connection:
MongoServerSelectionError: connect ECONNREFUSED 127.0.0.1:27017
Relevant pre-existing package.json entry points:
"serve": "vue-cli-service serve -mode development",
"build": "vue-cli-service build",
"apollo": "vue-cli-service apollo:dev --generate-schema",
"apollo:schema:generate": "vue-cli-service apollo:schema:generate",
"apollo:schema:publish": "vue-cli-service apollo:schema:publish",
"apollo:start": "vue-cli-service apollo:start",
app structure
/apollo-server
  context.js      ## Mongo connection made here.
/src
  vue-apollo.js   ## Apollo setup (GraphQL is set up here.)
Dockerfile
devspace.yaml
package.json
Now, the relevant files:
Dockerfile
FROM node:13.12.0-alpine
# Set working directory
WORKDIR /app
# Add package.json to WORKDIR and install dependencies
COPY package*.json ./
RUN npm install
# Add source code files to WORKDIR
COPY . .
# Application port (optional)
# express server runs on port 3000
EXPOSE 3000
# Debugging port (optional)
# For remote debugging, add this port to devspace.yaml: dev.ports[*].forward[*].port: 9229
EXPOSE 9229
CMD ["npm", "start"]
devspace.yaml
version: v1beta9
images:
app:
image: sandbox/practiceapp
preferSyncOverRebuild: true
injectRestartHelper: false
cmd: ["yarn", "serve"]
appendDockerfileInstructions:
- USER root
backend:
image: sandbox/backend
preferSyncOverRebuild: true
injectRestartHelper: false
entrypoint: ["yarn", "apollo"]
appendDockerfileInstructions:
- USER root
deployments:
- name: frontend
helm:
componentChart: true
values:
containers:
- image: sandbox/practiceapp
service:
ports:
- port: 8080
- name: backend
helm:
componentChart: true
values:
containers:
- image: sandbox/backend
service:
ports:
- port: 4000
- port: 3000
- port: 27017
# - name: mongo
# helm:
# componentChart: true
# values:
# containers:
# - image: sandbox/mongo
# service:
# ports:
# - port: 27017
dev:
ports:
- imageName: app
forward:
- port: 8080
# - imageName: apollo
# forward:
# port: 3000
# - imageName: graphql
# forward:
# port: 4000
# - imageName: mongo
# forward:
# port: 27017
open:
- url: http://localhost:8080
- url: http://localhost:4000/graphql
sync:
- imageName: app
excludePaths:
- .git/
uploadExcludePaths:
- Dockerfile
- node_modules/*
- '!node_modules/.temp/'
- devspace.yaml
onUpload:
restartContainer: true
profiles:
- name: production
patches:
- op: remove
path: images.app.injectRestartHelper
- op: remove
path: images.app.appendDockerfileInstructions
- name: interactive
patches:
- op: add
path: dev.interactive
value:
defaultEnabled: true
- op: add
path: images.app.entrypoint
value:
- sleep
- "9999999999"
I've looked for information on how to include services from pre-existing apps, but I've had difficulty understanding it. I need some guidance on how to set this up, or where to look.
Thanks for your help and time.
From the information you provided, I think this is probably a networking issue. Please check whether your applications are listening on all interfaces; listening on localhost only would lead to the connection being refused, as described in this troubleshooting guide: https://devspace.sh/cli/docs/guides/networking-domains#troubleshooting
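As a concrete illustration of "listening on all interfaces" - the flag below is real for vue-cli; the Mongo URL is a hypothetical example assuming a cluster service named mongo, since 127.0.0.1 inside a pod refers to the pod itself rather than to the database:

  # bind the dev server to all interfaces instead of localhost
  yarn serve --host 0.0.0.0

  # point the backend at the in-cluster service instead of 127.0.0.1
  MONGO_URL=mongodb://mongo:27017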
The answer to this was refactoring the structure of my app and including the service port in deployments as well as the forwarding port in dev.ports.
deployments:
  - name: app
    helm:
      componentChart: true
      values:
        containers:
          - image: namespace/frontend
        service:
          name: app-service
          ports:
            - port: 8080
            - port: 4000
dev:
  ports:
    - imageName: app
      forward:
        - port: 8080
        - port: 4000
The final structure of my app:
./backend
  .dockerignore
  Dockerfile
  package.json
./frontend
  .dockerignore
  Dockerfile
  package.json
devspace.yaml
As far as connecting MongoDB goes: I initially started with minikube and then moved to docker-desktop, but was not able to set up headless ports with external load-balancing access, because I was using a replica set on docker-desktop (localhost cannot be assigned twice as the external IP). I used Bitnami's MongoDB Helm chart with DevSpace to do so.
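A minimal sketch of that last step, assuming the standard Bitnami repo and default chart values (the release name and flag are illustrative):

  helm repo add bitnami https://charts.bitnami.com/bitnami
  helm install mongo bitnami/mongodb --set architecture=replicaset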
The ShinyProxy page is displayed, and after authentication I can see the nav bar with 2 links to the 2 applications. Then, when I click on one of them, I get an error 500 / "Failed to start container".
In the stack trace, I can see:
Caused by: java.io.IOException: Permission denied
Here is my configuration
application.yml:
proxy:
  title: Open Analytics Shiny Proxy
  # landing-page: /
  port: 8080
  authentication: simple
  admin-groups: scientists
  # Example: 'simple' authentication configuration
  users:
    - name: jack
      password: password
      groups: scientists
    - name: jeff
      password: password
      groups: mathematicians
  # Example: 'ldap' authentication configuration
  # Docker configuration
  #docker:
  #  cert-path: /home/none
  #  url: http://localhost:2375
  #  port-range-start: 20000
  specs:
    - id: 01_hello
      display-name: Hello Application
      description: Application which demonstrates the basics of a Shiny app
      container-cmd: ["R", "-e", "shinyproxy::run_01_hello()"]
      container-image: openanalytics/shinyproxy-demo
      access-groups: [scientists, mathematicians]
    - id: 06_tabsets
      container-cmd: ["R", "-e", "shinyproxy::run_06_tabsets()"]
      container-image: openanalytics/shinyproxy-demo
      access-groups: scientists
logging:
  file:
    shinyproxy.log
shinyproxy-docker-compose.yml:
version: '2.4'
services:
  shinyproxy:
    container_name: shinyproxy
    image: openanalytics/shinyproxy:2.3.1
    restart: always
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
      - ./application.yml:/opt/shinyproxy/application.yml
    privileged: true
    ports:
      - 35624:8080
I have the same problem; workaround:
sudo chown $USER:docker /run/docker.sock
However, I do not understand why this is needed, because /run/docker.sock was already root:docker.
This is under WSL2.
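A more durable alternative to re-chowning the socket after every restart, assuming your user can be added to the docker group:

  sudo usermod -aG docker $USER
  # then start a new login shell (or restart the WSL distro) for the group change to apply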
I am running WSO2 IS version 5.8.0 in Docker Swarm. I scripted a compose file for this, mapping the files deployment.toml and wso2carbon.jks plus a directory in servers.
After changing the keystore, I receive this error on admin login:
System error while Authenticating/Authorizing User : Error when handling event : PRE_AUTHENTICATION
If I remove the mapping, the SSL cert is not valid, but I can log in.
PS: I use Traefik to redirect to the container.
The stack deploy file:
#IS#
is-hml:
image: wso2/wso2is:5.8.0
ports:
- 4763:4763
- 4443:9443
volumes:
#- /docker/release-hml/wso2/full-identity-server-volume:/home/wso2carbon/wso2is-5.8.0
- /docker/release-hml/wso2/identity-server:/home/wso2carbon/wso2-config-volume
extra_hosts:
- "wso2-hml.valecard.com.br:127.0.0.1"
networks:
traefik_traefik:
aliases:
- is-hml
configs:
#- source: deployment.toml
# target: /home/wso2carbon/wso2is-5.8.0/repository/conf/deployment.toml
#
- source: wso2carbon.jks
target: /home/wso2carbon/wso2is-5.8.0/repository/resources/security/wso2carbon.jks
#- source: catalina-server.xml
# target: /home/wso2carbon/wso2is-5.8.0/repository/conf/tomcat/catalina-server.xml
- source: carbon.xml
target: /home/wso2carbon/wso2is-5.8.0/repository/conf/carbon.xml
#environment:
# - "CATALINA_OPTS=-Xmx2g -Xms2g -XX:MaxPermSize=1024m"
# - "JVM_OPTS=-Xmx2g -Xms2g -XX:MaxPermSize=1024m"
# - "JAVA_OPTS=-Xmx2g -Xms2g"
deploy:
#endpoint_mode: dnsrr
resources:
limits:
cpus: '2'
memory: '4096M'
replicas: 1
labels:
- "traefik.docker.network=traefik_traefik"
- "traefik.backend=is-hml"
- "traefik.port=4443"
- "traefik.frontend.entryPoints=http,https"
- "traefik.frontend.rule=Host:wso2-hml.valecard.com.br"
configs:
deployment.toml:
file: ./wso2-config/deployment.toml
catalina-server.xml:
file: ./wso2-config/catalina-server.xml
wso2carbon.jks:
file: ../../certs/wso2carbon-valecard.jks
carbon.xml:
file: ./wso2-config/carbon.xml
networks:
traefik_traefik:
external: true
The password is the one from the deployment.toml.
Thanks.
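One common cause of PRE_AUTHENTICATION failures after swapping wso2carbon.jks is that the new certificate was never imported into client-truststore.jks, which lives in the same security directory. A sketch, assuming the default alias and store password (wso2carbon):

  keytool -export -alias wso2carbon -keystore wso2carbon.jks -file wso2carbon.crt
  keytool -import -alias wso2carbon -file wso2carbon.crt -keystore client-truststore.jks -storepass wso2carbon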
Old Question
Is it possible to automate a Jenkins installation (Jenkins binaries, plugins, credentials) with a configuration management / automation tool like Ansible, etc.?
Edited
Since asking this question, I have learned about and found many ways to achieve a Jenkins installation. I found docker-compose an interesting way to achieve one form of Jenkins installation automation. So my question is: Is there a better way to automate a Jenkins installation than what I am doing, and is there any risk in the way I am handling this automation?
I have taken advantage of the Jenkins Docker image and did the automation with docker-compose.
Dockerfile
FROM jenkinsci/blueocean
RUN jenkins-plugin-cli --plugins kubernetes workflow-aggregator git configuration-as-code blueocean matrix-auth
docker-compose.yaml
version: '3.7'
services:
  dind:
    image: docker:dind
    privileged: true
    networks:
      jenkins:
        aliases:
          - docker
    expose:
      - "2376"
    environment:
      - DOCKER_TLS_CERTDIR=/certs
    volumes:
      - type: volume
        source: jenkins-home
        target: /var/jenkins_home
      - type: volume
        source: jenkins-docker-certs
        target: /certs/client
  jcac:
    image: nginx:latest
    volumes:
      - type: bind
        source: ./jcac.yml
        target: /usr/share/nginx/html/jcac.yml
    networks:
      - jenkins
  jenkins:
    build: .
    ports:
      - "8080:8080"
      - "50000:50000"
    environment:
      - DOCKER_HOST=tcp://docker:2376
      - DOCKER_CERT_PATH=/certs/client
      - DOCKER_TLS_VERIFY=1
      - JAVA_OPTS="-Djenkins.install.runSetupWizard=false"
      - CASC_JENKINS_CONFIG=http://jcac/jcac.yml
      - GITHUB_ACCESS_TOKEN=${GITHUB_ACCESS_TOKEN:-fake}
      - GITHUB_USERNAME=${GITHUB_USERNAME:-fake}
    volumes:
      - type: volume
        source: jenkins-home
        target: /var/jenkins_home
      - type: volume
        source: jenkins-docker-certs
        target: /certs/client
        read_only: true
    networks:
      - jenkins
volumes:
  jenkins-home:
  jenkins-docker-certs:
networks:
  jenkins:
jcac.yml
credentials:
  system:
    domainCredentials:
      - credentials:
          - usernamePassword:
              id: "github"
              password: ${GITHUB_PASSWORD:-fake}
              scope: GLOBAL
              username: ${GITHUB_USERNAME:-fake}
          - usernamePassword:
              id: "slave"
              password: ${SSH_PASSWORD:-fake}
              username: ${SSH_USERNAME:-fake}
jenkins:
  globalNodeProperties:
    - envVars:
        env:
          - key: "BRANCH"
            value: "hello"
  systemMessage: "Welcome to (one click) Jenkins Automation!"
  agentProtocols:
    - "JNLP4-connect"
    - "Ping"
  crumbIssuer:
    standard:
      excludeClientIPFromCrumb: true
  disableRememberMe: false
  markupFormatter: "plainText"
  mode: NORMAL
  myViewsTabBar: "standard"
  numExecutors: 4
  # nodes:
  #   - permanent:
  #       labelString: "slave01"
  #       launcher:
  #         ssh:
  #           credentialsId: "slave"
  #           host: "worker"
  #           port: 22
  #           sshHostKeyVerificationStrategy: "nonVerifyingKeyVerificationStrategy"
  #       name: "slave01"
  #       nodeDescription: "SSH Slave 01"
  #       numExecutors: 3
  #       remoteFS: "/home/jenkins/workspace"
  #       retentionStrategy: "always"
  securityRealm:
    local:
      allowsSignup: false
      enableCaptcha: false
      users:
        - id: "admin"
          password: "${ADMIN_PASSWORD:-admin123}"
        - id: "user"
          password: "${DEFAULTUSER_PASSWORD:-user123}"
  authorizationStrategy:
    globalMatrix:
      permissions:
        - "Agent/Build:user"
        - "Job/Build:user"
        - "Job/Cancel:user"
        - "Job/Read:user"
        - "Overall/Read:user"
        - "View/Read:user"
        - "Overall/Read:anonymous"
        - "Overall/Administer:admin"
        - "Overall/Administer:root"
unclassified:
  globalLibraries:
    libraries:
      - defaultVersion: "master"
        implicit: false
        name: "jenkins-shared-library"
        retriever:
          modernSCM:
            scm:
              git:
                remote: "https://github.com/samitkumarpatel/jenkins-shared-libs.git"
                traits:
                  - "gitBranchDiscovery"
The commands to start and stop Jenkins are:
# start Jenkins
docker-compose up -d
# stop Jenkins
docker-compose down
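Because the jenkins service is built from the local Dockerfile, changes to the plugin list need a rebuild, e.g.:

  # rebuild the image and restart the stack
  docker-compose up -d --build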
Sure it is :) For Ansible, you can always check Ansible Galaxy whenever you want to automate the installation of something. Here is the most popular role for installing Jenkins, and here is its GitHub repo.
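A minimal sketch of that route (assuming the role referred to is geerlingguy.jenkins, the most downloaded Jenkins role on Galaxy; the variable name is taken from that role's documentation):

  ansible-galaxy install geerlingguy.jenkins

  # playbook.yml
  - hosts: jenkins
    become: true
    roles:
      - role: geerlingguy.jenkins
        vars:
          jenkins_plugins:
            - git
            - workflow-aggregator
            - configuration-as-code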