I have a module helloworld.erl in ejabberd/src
-module(helloworld).
%% API
-export([start/0]).
start() ->
% Chat
ejabberd_auth:try_register(<<"username">>, <<"example.com">>, <<"secret_password">>).
How do I go about registering it into /etc/ejabberd/ejabberd.yml? What's the syntax?
The following attempt:
erl
c(helloworld).
helloworld:start().
throws the error:
** exception error: undefined function jid:nodeprep/1
in function ejabberd_auth:validate_credentials/3 (src/ejabberd_auth.erl, line 796)
in call from ejabberd_auth:try_register/3 (src/ejabberd_auth.erl, line 251)
How can I fix it? What is the right way to do it so that I can call ejabberd_auth:try_register successfully?
Thank you all in advance.
ejabberd.yml
###
### ejabberd configuration file
###
###
### The parameters used in this configuration file are explained in more detail
### in the ejabberd Installation and Operation Guide.
### Please consult the Guide in case of doubts, it is included with
### your copy of ejabberd, and is also available online at
### http://www.process-one.net/en/ejabberd/docs/
### The configuration file is written in YAML.
### Refer to http://en.wikipedia.org/wiki/YAML for the brief description.
### However, ejabberd treats different literals as different types:
###
### - unquoted or single-quoted strings. They are called "atoms".
### Example: dog, 'Jupiter', '3.14159', YELLOW
###
### - numeric literals. Example: 3, -45.0, .0
###
### - quoted or folded strings.
### Examples of quoted string: "Lizzard", "orange".
### Example of folded string:
### > Art thou not Romeo,
### and a Montague?
### =======
### LOGGING
##
## loglevel: Verbosity of log files generated by ejabberd.
## 0: No ejabberd log at all (not recommended)
## 1: Critical
## 2: Error
## 3: Warning
## 4: Info
## 5: Debug
##
loglevel: 4
##
## rotation: Describe how to rotate logs. Either size and/or date can trigger
## log rotation. Setting count to N keeps N rotated logs. Setting count to 0
## does not disable rotation, it instead rotates the file and keeps no previous
## versions around. Setting size to X rotates the log when it reaches X bytes.
## To disable rotation set the size to 0 and the date to ""
## Date syntax is taken from the syntax newsyslog uses in newsyslog.conf.
## Some examples:
## $D0 rotate every night at midnight
## $D23 rotate every day at 23:00 hr
## $W0D23 rotate every week on Sunday at 23:00 hr
## $W5D16 rotate every week on Friday at 16:00 hr
## $M1D0 rotate on the first day of every month at midnight
## $M5D6 rotate on every 5th day of the month at 6:00 hr
##
log_rotate_size: 10485760
log_rotate_date: ""
log_rotate_count: 1
##
## overload protection: If you want to limit the number of messages per second
## allowed from error_logger, which is a good idea if you want to avoid a flood
## of messages when system is overloaded, you can set a limit.
## 100 is ejabberd's default.
log_rate_limit: 100
##
## watchdog_admins: Only useful for developers: if an ejabberd process
## consumes a lot of memory, send live notifications to these XMPP
## accounts.
##
## watchdog_admins:
## - "bob#example.com"
### ================
### SERVED HOSTNAMES
##
## hosts: Domains served by ejabberd.
## You can define one or several, for example:
## hosts:
## - "example.net"
## - "example.com"
## - "example.org"
##
hosts:
- "localhost"
##
## route_subdomains: Delegate subdomains to other XMPP servers.
## For example, if this ejabberd serves example.org and you want
## to allow communication with an XMPP server called im.example.org.
##
## route_subdomains: s2s
### ===============
### LISTENING PORTS
##
## listen: The ports ejabberd will listen on, which service each is handled
## by and what options to start it with.
##
listen:
-
port: 5222
module: ejabberd_c2s
##
## If TLS is compiled in and you installed an SSL
## certificate, specify the full path to the
## file and uncomment these lines:
##
## certfile: "/path/to/ssl.pem"
## starttls: true
##
## To enforce TLS encryption for client connections,
## use this instead of the "starttls" option:
##
## starttls_required: true
##
## Custom OpenSSL options
##
## protocol_options:
## - "no_sslv3"
## - "no_tlsv1"
max_stanza_size: 65536
shaper: c2s_shaper
access: c2s
-
port: 5269
module: ejabberd_s2s_in
##
## ejabberd_service: Interact with external components (transports, ...)
##
## -
## port: 8888
## module: ejabberd_service
## access: all
## shaper_rule: fast
## ip: "127.0.0.1"
## hosts:
## "icq.example.org":
## password: "secret"
## "sms.example.org":
## password: "secret"
##
## ejabberd_stun: Handles STUN Binding requests
##
## -
## port: 3478
## transport: udp
## module: ejabberd_stun
##
## To handle XML-RPC requests that provide admin credentials:
##
## -
## port: 4560
## module: ejabberd_xmlrpc
-
port: 5280
module: ejabberd_http
## request_handlers:
## "/pub/archive": mod_http_fileserver
web_admin: true
http_poll: true
http_bind: true
## register: true
captcha: true
##
## s2s_use_starttls: Enable STARTTLS + Dialback for S2S connections.
## Allowed values are: false optional required required_trusted
## You must specify a certificate file.
##
## s2s_use_starttls: optional
##
## s2s_certfile: Specify a certificate file.
##
## s2s_certfile: "/path/to/ssl.pem"
## Custom OpenSSL options
##
## s2s_protocol_options:
## - "no_sslv3"
## - "no_tlsv1"
##
## domain_certfile: Specify a different certificate for each served hostname.
##
## host_config:
## "example.org":
## domain_certfile: "/path/to/example_org.pem"
## "example.com":
## domain_certfile: "/path/to/example_com.pem"
##
## S2S whitelist or blacklist
##
## Default s2s policy for undefined hosts.
##
## s2s_access: s2s
##
## Outgoing S2S options
##
## Preferred address families (which to try first) and connect timeout
## in milliseconds.
##
## outgoing_s2s_families:
## - ipv4
## - ipv6
## outgoing_s2s_timeout: 10000
### ==============
### AUTHENTICATION
##
## auth_method: Method used to authenticate the users.
## The default method is the internal.
## If you want to use a different method,
## comment this line and enable the correct ones.
##
auth_method: internal
##
## Store the plain passwords or hashed for SCRAM:
## auth_password_format: plain
## auth_password_format: scram
##
## Define the FQDN if ejabberd doesn't detect it:
## fqdn: "server3.example.com"
##
## Authentication using external script
## Make sure the script is executable by ejabberd.
##
## auth_method: external
## extauth_program: "/path/to/authentication/script"
##
## Authentication using ODBC
## Remember to setup a database in the next section.
##
## auth_method: odbc
##
## Authentication using PAM
##
## auth_method: pam
## pam_service: "pamservicename"
##
## Authentication using LDAP
##
## auth_method: ldap
##
## List of LDAP servers:
## ldap_servers:
## - "localhost"
##
## Encryption of connection to LDAP servers:
## ldap_encrypt: none
## ldap_encrypt: tls
##
## Port to connect to on LDAP servers:
## ldap_port: 389
## ldap_port: 636
##
## LDAP manager:
## ldap_rootdn: "dc=example,dc=com"
##
## Password of LDAP manager:
## ldap_password: "******"
##
## Search base of LDAP directory:
## ldap_base: "dc=example,dc=com"
##
## LDAP attribute that holds user ID:
## ldap_uids:
## - "mail": "%u#mail.example.org"
##
## LDAP filter:
## ldap_filter: "(objectClass=shadowAccount)"
##
## Anonymous login support:
## auth_method: anonymous
## anonymous_protocol: sasl_anon | login_anon | both
## allow_multiple_connections: true | false
##
## host_config:
## "public.example.org":
## auth_method: anonymous
## allow_multiple_connections: false
## anonymous_protocol: sasl_anon
##
## To use both anonymous and internal authentication:
##
## host_config:
## "public.example.org":
## auth_method:
## - internal
## - anonymous
### ==============
### DATABASE SETUP
## ejabberd by default uses the internal Mnesia database,
## so you do not necessarily need this section.
## This section provides configuration examples in case
## you want to use other database backends.
## Please consult the ejabberd Guide for details on database creation.
##
## MySQL server:
##
## odbc_type: mysql
## odbc_server: "server"
## odbc_database: "database"
## odbc_username: "username"
## odbc_password: "password"
##
## If you want to specify the port:
## odbc_port: 1234
##
## PostgreSQL server:
##
## odbc_type: pgsql
## odbc_server: "server"
## odbc_database: "database"
## odbc_username: "username"
## odbc_password: "password"
##
## If you want to specify the port:
## odbc_port: 1234
##
## If you use PostgreSQL, have a large database, and need a
## faster but inexact replacement for "select count(*) from users"
##
## pgsql_users_number_estimate: true
##
## ODBC compatible or MSSQL server:
##
## odbc_type: odbc
## odbc_server: "DSN=ejabberd;UID=ejabberd;PWD=ejabberd"
##
## Number of connections to open to the database for each virtual host
##
## odbc_pool_size: 10
##
## Interval to make a dummy SQL request to keep the connections to the
## database alive. Specify in seconds: for example 28800 means 8 hours
##
## odbc_keepalive_interval: undefined
### ===============
### TRAFFIC SHAPERS
shaper:
##
## The "normal" shaper limits traffic speed to 1000 B/s
##
normal: 1000
##
## The "fast" shaper limits traffic speed to 50000 B/s
##
fast: 50000
##
## This option specifies the maximum number of elements in the queue
## of the FSM. Refer to the documentation for details.
##
max_fsm_queue: 1000
### ====================
### ACCESS CONTROL LISTS
acl:
##
## The 'admin' ACL grants administrative privileges to XMPP accounts.
## You can put here as many accounts as you want.
##
## admin:
## user:
## - "aleksey": "localhost"
## - "ermine": "example.org"
##
## Blocked users
##
## blocked:
## user:
## - "baduser": "example.org"
## - "test"
## Local users: don't modify this.
##
local:
user_regexp: ""
##
## More examples of ACLs
##
## jabberorg:
## server:
## - "jabber.org"
## aleksey:
## user:
## - "aleksey": "jabber.ru"
## test:
## user_regexp: "^test"
## user_glob: "test*"
##
## Loopback network
##
loopback:
ip:
- "127.0.0.0/8"
##
## Bad XMPP servers
##
## bad_servers:
## server:
## - "xmpp.zombie.org"
## - "xmpp.spam.com"
##
## Define specific ACLs in a virtual host.
##
## host_config:
## "localhost":
## acl:
## admin:
## user:
## - "bob-local": "localhost"
### ============
### ACCESS RULES
access:
## Maximum number of simultaneous sessions allowed for a single user:
max_user_sessions:
all: 10
## Maximum number of offline messages that users can have:
max_user_offline_messages:
admin: 5000
all: 100
## This rule allows access only for local users:
local:
local: allow
## Only non-blocked users can use c2s connections:
c2s:
blocked: deny
all: allow
## For C2S connections, all users except admins use the "normal" shaper
c2s_shaper:
admin: none
all: normal
## All S2S connections use the "fast" shaper
s2s_shaper:
all: fast
## Only admins can send announcement messages:
announce:
admin: allow
## Only admins can use the configuration interface:
configure:
admin: allow
## Admins of this server are also admins of the MUC service:
muc_admin:
admin: allow
## Only accounts of the local ejabberd server can create rooms:
muc_create:
local: allow
## All users are allowed to use the MUC service:
muc:
all: allow
## Only accounts on the local ejabberd server can create Pubsub nodes:
pubsub_createnode:
local: allow
## In-band registration allows registration of any possible username.
## To disable in-band registration, replace 'allow' with 'deny'.
register:
all: allow
## Only allow to register from localhost
trusted_network:
loopback: allow
## Do not establish S2S connections with bad servers
## s2s:
## bad_servers: deny
## all: allow
## By default the frequency of account registrations from the same IP
## is limited to 1 account every 10 minutes. To disable, specify: infinity
## registration_timeout: 600
##
## Define specific Access Rules in a virtual host.
##
## host_config:
## "localhost":
## access:
## c2s:
## admin: allow
## all: deny
## register:
## all: deny
### ================
### DEFAULT LANGUAGE
##
## language: Default language used for server messages.
##
language: "en"
##
## Set a different default language in a virtual host.
##
## host_config:
## "localhost":
## language: "ru"
### =======
### CAPTCHA
##
## Full path to a script that generates the image.
##
## captcha_cmd: "/lib/ejabberd/priv/bin/captcha.sh"
##
## Host for the URL and port where ejabberd listens for CAPTCHA requests.
##
## captcha_host: "example.org:5280"
##
## Limit CAPTCHA calls per minute for JID/IP to avoid DoS.
##
## captcha_limit: 5
### =======
### MODULES
##
## Modules enabled in all ejabberd virtual hosts.
##
modules:
mod_adhoc: {}
## mod_admin_extra: {}
mod_announce: # recommends mod_adhoc
access: announce
mod_blocking: {} # requires mod_privacy
mod_caps: {}
mod_carboncopy: {}
mod_client_state:
queue_chat_states: true
queue_presence: false
mod_configure: {} # requires mod_adhoc
mod_disco: {}
## mod_echo: {}
mod_irc: {}
mod_http_bind: {}
## mod_http_fileserver:
## docroot: "/var/www"
## accesslog: "/var/log/ejabberd/access.log"
mod_last: {}
mod_muc:
## host: "conference.#HOST#"
access: muc
access_create: muc_create
access_persistent: muc_create
access_admin: muc_admin
## mod_muc_log: {}
mod_offline:
access_max_user_messages: max_user_offline_messages
mod_ping: {}
## mod_pres_counter:
## count: 5
## interval: 60
mod_privacy: {}
mod_private: {}
## mod_proxy65: {}
mod_pubsub:
access_createnode: pubsub_createnode
## reduces resource consumption, but is not XEP-compliant
ignore_pep_from_offline: true
## XEP-compliant, but increases resource consumption
## ignore_pep_from_offline: false
last_item_cache: false
plugins:
- "flat"
- "hometree"
- "pep" # pep requires mod_caps
mod_register:
##
## Protect In-Band account registrations with CAPTCHA.
##
## captcha_protected: true
##
## Set the minimum informational entropy for passwords.
##
## password_strength: 32
##
## After successful registration, the user receives
## a message with this subject and body.
##
welcome_message:
subject: "Welcome!"
body: |-
Hi.
Welcome to this XMPP server.
##
## When a user registers, send a notification to
## these XMPP accounts.
##
## registration_watchers:
## - "admin1#example.org"
##
## Only clients in the server machine can register accounts
##
ip_access: trusted_network
##
## Local c2s or remote s2s users cannot register accounts
##
## access_from: deny
access: register
mod_roster: {}
mod_shared_roster: {}
mod_stats: {}
mod_time: {}
mod_vcard: {}
mod_version: {}
##
## Enable modules with custom options in a specific virtual host
##
## host_config:
## "localhost":
## modules:
## mod_echo:
## host: "mirror.localhost"
##
## Enable modules management via ejabberdctl for installation and
## uninstallation of public/private contributed modules
## (enabled by default)
##
allow_contrib_modules: true
### Local Variables:
### mode: yaml
### End:
### vim: set filetype=yaml tabstop=8
Old question, but let me try to explain your problem. As your shell error reported:
exception error: undefined function jid:nodeprep/1
in function ejabberd_auth:validate_credentials/3 (src/ejabberd_auth.erl, line 796)
in call from ejabberd_auth:try_register/3 (src/ejabberd_auth.erl, line 251)
This happened because Erlang could not find the jid module in any code path. By code path I mean the directories Erlang searches when loading modules (beam files). When you ordinarily execute erl, Erlang adds the current directory (from where the command was called) to its code path. Since jid.beam, which ejabberd_auth calls into, is neither there nor in any other directory on the path, that exception is thrown.
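You can check this from the same shell: code:which/1 returns the beam file path for a resolvable module, or the atom non_existing otherwise:
1> code:which(jid).
non_existing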
Now to your questions:
How do I go about registering it into /etc/ejabberd/ejabberd.yml? What's the syntax?
Add your module to the modules list field, alongside the existing entries:
modules:
  mod_adhoc: {}
  helloworld: {}
Make sure to follow the same indentation. The YAML parser will throw an error if that's not obeyed.
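Note that ejabberd only starts modules that implement the gen_mod behaviour, so a bare start/0 will not be called. A minimal sketch of a gen_mod-compatible helloworld (start/2 and stop/1 are the classic callbacks; recent ejabberd versions also expect depends/2 and mod_options/1; the registration call is just the question's example moved into start/2):
-module(helloworld).
-behaviour(gen_mod).

-export([start/2, stop/1, depends/2, mod_options/1]).

%% Called by ejabberd for every virtual host the module is enabled on.
start(Host, _Opts) ->
    ejabberd_auth:try_register(<<"username">>, Host, <<"secret_password">>),
    ok.

stop(_Host) ->
    ok.

depends(_Host, _Opts) ->
    [].

mod_options(_Host) ->
    [].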
How can I fix it? What is the right way to do it so that I can call ejabberd_auth:try_register successfully?
To fix that, the ejabberd_auth module and all the modules it directly or transitively accesses via the call to try_register/3 must be on the code path. You can do that with the -pa flag of the erl command:
erl -pa ./path-to-ejabberd-ebin-folder
See the erl man page for more.
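For example, assuming a source build where the compiled beam files land in ebin/ and the bundled dependencies in deps/*/ebin (paths are assumptions, adjust to your checkout):
erl -pa /path/to/ejabberd/ebin /path/to/ejabberd/deps/*/ebin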
However, ejabberd is an application, not just a library. It has lots of other dependencies it requires and state to set up (Mnesia tables, ...).
Even if the jid module is resolved, other problems could happen. I advise starting ejabberd as an application. If you do it this way, ejabberd will auto-start your module, since you registered it in the ejabberd.yml config file.
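For instance, with a standard install you can attach a shell to the running node with ejabberdctl and call the function inside the live system, where the code path and application state are already set up:
ejabberdctl debug
1> ejabberd_auth:try_register(<<"username">>, <<"example.com">>, <<"secret_password">>).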
Related
I have a fresh install of Ejabberd 21.12 on Ubuntu 20.04. So far I have been unable to mosquitto_sub or mosquitto_pub a test message.
mosquitto_sub -h mydomain.xyz -t "test/topic" -d -p 1883
Client (null) sending CONNECT
Client (null) received CONNACK (5)
Connection error: Connection Refused: not authorised.
Client (null) sending DISCONNECT
the relevant (I think) parts of ejabberd.yml look like this:
-
  port: 5280
  ip: "::"
  module: ejabberd_http
  request_handlers:
    "/admin": ejabberd_web_admin
    "/mqtt": mod_mqtt
-
  port: 1883
  module: mod_mqtt
  backlog: 1000
-
  port: 8883
  module: mod_mqtt
  backlog: 1000
  tls: true

mod_mqtt: {}
I tried defining publisher and subscriber in acl but that didn't seem to change anything.
I did:
ufw allow 1883
ufw allow 8883
Thanks for any help or insight.
------EDIT----------
Still having the same issue, but I have tried adding access rules just to fill out the mqtt configuration.
I have tried changing the syntax of the ACL section many times, but every attempt results in the same auth failure. Here is my yml:
###
###' ejabberd configuration file
###
### The parameters used in this configuration file are explained at
###
### https://docs.ejabberd.im/admin/configuration
###
### The configuration file is written in YAML.
### *******************************************************
### ******* !!! WARNING !!! *******
### ******* YAML IS INDENTATION SENSITIVE *******
### ******* MAKE SURE YOU INDENT SECTIONS CORRECTLY *******
### *******************************************************
### Refer to http://en.wikipedia.org/wiki/YAML for the brief description.
###
hosts:
- DOMAIN.XYZ
loglevel: debug
certfiles:
- "/opt/ejabberd/conf/server.pem"
# - "/etc/letsencrypt/live/DOMAIN.XYZ/fullchain.pem"
# - "/etc/letsencrypt/live/DOMAIN.XYZ/privkey.pem"
- "/opt/ejabberd/conf/fullchain.pem"
- "/opt/ejabberd/conf/privkey.pem"
ca_file: "/opt/ejabberd/conf/cacert.pem"
listen:
-
port: 5222
ip: "0.0.0.0"
module: ejabberd_c2s
max_stanza_size: 262144
shaper: c2s_shaper
access: c2s
starttls_required: true
-
port: 5269
ip: "0.0.0.0"
module: ejabberd_s2s_in
max_stanza_size: 524288
-
port: 5443
ip: "0.0.0.0"
module: ejabberd_http
tls: true
request_handlers:
"/admin": ejabberd_web_admin
"/api": mod_http_api
"/bosh": mod_bosh
"/captcha": ejabberd_captcha
"/upload": mod_http_upload
"/ws": ejabberd_http_ws
"/oauth": ejabberd_oauth
-
port: 5280
ip: "0.0.0.0"
module: ejabberd_http
request_handlers:
"/admin": ejabberd_web_admin
"/mqtt": mod_mqtt
-
port: 1883
module: mod_mqtt
backlog: 1000
-
port: 8883
module: mod_mqtt
backlog: 1000
tls: true
s2s_use_starttls: optional
acl:
local:
user_regexp: ""
loopback:
ip:
- 127.0.0.0/8
- ::1/128
- ::FFFF:127.0.0.1/128
admin:
user:
- admin@DOMAIN.XYZ
publisher:
user:
"broker" : "DOMAIN.XYZ"
subscriber:
user:
"broker" : "DOMAIN.XYZ"
access_rules:
local:
allow: local
c2s:
deny: blocked
allow: all
announce:
allow: admin
configure:
allow: admin
muc_create:
allow: local
pubsub_createnode:
allow: local
trusted_network:
allow: loopback
api_permissions:
"console commands":
from:
- ejabberd_ctl
who: all
what: "*"
"admin access":
who:
access:
allow:
acl: loopback
acl: admin
oauth:
scope: "ejabberd:admin"
access:
allow:
acl: loopback
acl: admin
what:
- "*"
- "!stop"
- "!start"
"public commands":
who:
ip: 127.0.0.1/8
what:
- status
- connected_users_number
shaper:
normal: 1000
fast: 50000
shaper_rules:
max_user_sessions: 10
max_user_offline_messages:
5000: admin
100: all
c2s_shaper:
none: admin
normal: all
s2s_shaper: fast
max_fsm_queue: 10000
acme:
contact: "mailto:cedar#disroot.org"
ca_url: "https://acme-v02.api.letsencrypt.org"
modules:
mod_adhoc: {}
mod_admin_extra: {}
mod_announce:
access: announce
mod_avatar: {}
mod_blocking: {}
mod_bosh: {}
mod_caps: {}
mod_carboncopy: {}
mod_client_state: {}
mod_configure: {}
mod_disco: {}
mod_fail2ban: {}
mod_http_api: {}
mod_http_upload:
put_url: https://@HOST@:5443/upload
mod_last: {}
mod_mam:
## Mnesia is limited to 2GB, better to use an SQL backend
## For small servers SQLite is a good fit and is very easy
## to configure. Uncomment this when you have SQL configured:
## db_type: sql
assume_mam_usage: true
default: never
mod_mqtt:
access_publish:
"#":
- allow: publisher
- deny
access_subscribe:
"#":
- allow: subscriber
- deny
mod_muc:
access:
- allow
access_admin:
- allow: admin
access_create: muc_create
access_persistent: muc_create
access_mam:
- allow
default_room_options:
allow_subscription: true # enable MucSub
mam: false
mod_muc_admin: {}
mod_offline:
access_max_user_messages: max_user_offline_messages
mod_ping: {}
mod_privacy: {}
mod_private: {}
mod_proxy65:
access: local
max_connections: 5
mod_pubsub:
access_createnode: pubsub_createnode
plugins:
- flat
- pep
force_node_config:
## Prevent buggy clients from making their bookmarks public
storage:bookmarks:
access_model: whitelist
mod_push: {}
mod_push_keepalive: {}
mod_register:
## Only accept registration requests from the "trusted"
## network (see access_rules section above).
## Think twice before enabling registration from any
## address. See the Jabber SPAM Manifesto for details:
## https://github.com/ge0rg/jabber-spam-fighting-manifesto
ip_access: trusted_network
mod_roster:
versioning: true
mod_s2s_dialback: {}
mod_shared_roster: {}
mod_stream_mgmt:
resend_on_timeout: if_offline
mod_vcard: {}
mod_vcard_xupdate: {}
mod_version:
show_os: false
### Local Variables:
### mode: yaml
### End:
### vim: set filetype=yaml tabstop=8
Even if Ejabberd is set up to allow "anonymous" access, it seems to require a username AND PASSWORD (that it doesn't check) when connecting over MQTT. This command works for me:
mosquitto_sub -h mydomain.xyz -t "test/topic" -d -p 1883 -u foo -P bar
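If you want authenticated access instead, register a real account and pass its full JID as the MQTT username, which is what ejabberd expects there. Using the broker account from the question's ACLs (the password is a placeholder):
ejabberdctl register broker DOMAIN.XYZ some_password
mosquitto_sub -h mydomain.xyz -p 1883 -t "test/topic" -d -u "broker@DOMAIN.XYZ" -P some_password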
I used the following helm chart to install Jenkins
https://artifacthub.io/packages/helm/jenkinsci/jenkins
The problem is it doesn't build docker images, saying there's no docker. Docker was installed on the host with sudo apt install docker-ce docker-ce-cli containerd.io
/var/jenkins_home/workspace/test@tmp/durable-23187d38/script.sh: 2: /var/jenkins_home/workspace/test@tmp/durable-23187d38/script.sh: docker: not found
There was also another error
Caused: java.io.IOException: Cannot run program "docker": error=2, No such file or directory
I tried experimenting with multiple values.yaml variations with no luck.
Here's the latest one (I removed the ingress part). There might be redundant parts, like the podTemplates: with the docker config (copied from some GitHub repo) and agent: volumes:, where I mount the same /var/run/docker.sock; I don't understand where it should go.
I'm using k3s if it matters. And this is just a plain VPS.
# Default values for jenkins.
# This is a YAML-formatted file.
# Declare name/value pairs to be passed into your templates.
# name: value
## Overrides for generated resource names
# See templates/_helpers.tpl
# nameOverride:
# fullnameOverride:
# namespaceOverride:
# For FQDN resolving of the controller service. Change this value to match your existing configuration.
# ref: https://github.com/kubernetes/dns/blob/master/docs/specification.md
clusterZone: "cluster.local"
renderHelmLabels: true
controller:
# Used for label app.kubernetes.io/component
componentName: "jenkins-controller"
image: "jenkins/jenkins"
tag: "2.277.2-jdk11"
imagePullPolicy: "Always"
imagePullSecretName:
# Optionally configure lifetime for controller-container
lifecycle:
# postStart:
# exec:
# command:
# - "uname"
# - "-a"
disableRememberMe: false
numExecutors: 1
# configures the executor mode of the Jenkins node. Possible values are: NORMAL or EXCLUSIVE
executorMode: "NORMAL"
# This is ignored if enableRawHtmlMarkupFormatter is true
markupFormatter: plainText
customJenkinsLabels: []
# The default configuration uses this secret to configure an admin user
# If you don't need that user or use a different security realm then you can disable it
adminSecret: true
hostNetworking: false
# When enabling LDAP or another non-Jenkins identity source, the built-in admin account will no longer exist.
# If you disable the non-Jenkins identity store and instead use the Jenkins internal one,
# you should revert controller.adminUser to your preferred admin user:
adminUser: "admin"
# adminPassword: <defaults to random>
admin:
existingSecret: ""
userKey: jenkins-admin-user
passwordKey: jenkins-admin-password
# These values should not be changed unless you use a custom jenkins image or one derived from it. If you want to use
# Cloudbees Jenkins Distribution docker, you should set jenkinsHome: "/var/cloudbees-jenkins-distribution"
jenkinsHome: "/var/jenkins_home"
# These values should not be changed unless you use a custom jenkins image or one derived from it. If you want to use
# Cloudbees Jenkins Distribution docker, you should set jenkinsRef: "/usr/share/cloudbees-jenkins-distribution/ref"
jenkinsRef: "/usr/share/jenkins/ref"
# Path to the jenkins war file which is used by jenkins-plugin-cli.
jenkinsWar: "/usr/share/jenkins/jenkins.war"
resources:
# requests:
# cpu: "50m"
# memory: "256Mi"
limits:
cpu: "2000m"
memory: "2048Mi"
# Environment variables that get added to the init container (useful for e.g. http_proxy)
# initContainerEnv:
# - name: http_proxy
# value: "http://192.168.64.1:3128"
# containerEnv:
# - name: http_proxy
# value: "http://192.168.64.1:3128"
# Set min/max heap here if needed with:
# javaOpts: "-Xms512m -Xmx512m"
# jenkinsOpts: ""
# If you are using the ingress definitions provided by this chart via the `controller.ingress` block the configured hostname will be the ingress hostname starting with `https://` or `http://` depending on the `tls` configuration.
# The Protocol can be overwritten by specifying `controller.jenkinsUrlProtocol`.
# jenkinsUrlProtocol: "https"
# If you are not using the provided ingress you can specify `controller.jenkinsUrl` to change the url definition.
# jenkinsUrl: ""
# If you set this prefix and use ingress controller then you might want to set the ingress path below
# jenkinsUriPrefix: "/jenkins"
# Enable pod security context (must be `true` if podSecurityContextOverride, runAsUser or fsGroup are set)
usePodSecurityContext: true
# Note that `runAsUser`, `fsGroup`, and `securityContextCapabilities` are
# being deprecated and replaced by `podSecurityContextOverride`.
# Set runAsUser to 1000 to let Jenkins run as non-root user 'jenkins' which exists in 'jenkins/jenkins' docker image.
# When setting runAsUser to a different value than 0 also set fsGroup to the same value:
runAsUser: 1000
fsGroup: 1000
# If you have PodSecurityPolicies that require dropping of capabilities as suggested by CIS K8s benchmark, put them here
securityContextCapabilities: {}
# drop:
# - NET_RAW
# Completely overwrites the contents of the `securityContext`, ignoring the
# values provided for the deprecated fields: `runAsUser`, `fsGroup`, and
# `securityContextCapabilities`. In the case of mounting an ext4 filesystem,
# it might be desirable to use `supplementalGroups` instead of `fsGroup` in
# the `securityContext` block: https://github.com/kubernetes/kubernetes/issues/67014#issuecomment-589915496
# podSecurityContextOverride:
# runAsUser: 1000
# runAsNonRoot: true
# supplementalGroups: [1000]
# # capabilities: {}
servicePort: 8080
targetPort: 8080
# For minikube, set this to NodePort, elsewhere use LoadBalancer
# Use ClusterIP if your setup includes ingress controller
serviceType: ClusterIP
# Jenkins controller service annotations
serviceAnnotations: {}
# Jenkins controller custom labels
statefulSetLabels: {}
# foo: bar
# bar: foo
# Jenkins controller service labels
serviceLabels: {}
# service.beta.kubernetes.io/aws-load-balancer-backend-protocol: https
# Put labels on Jenkins controller pod
podLabels: {}
# Used to create Ingress record (should used with ServiceType: ClusterIP)
# nodePort: <to set explicitly, choose port between 30000-32767>
# Enable Kubernetes Liveness and Readiness Probes
# if Startup Probe is supported, enable it too
# ~ 2 minutes to allow Jenkins to restart when upgrading plugins. Set ReadinessTimeout to be shorter than LivenessTimeout.
healthProbes: true
probes:
startupProbe:
httpGet:
path: '{{ default "" .Values.controller.jenkinsUriPrefix }}/login'
port: http
periodSeconds: 10
timeoutSeconds: 5
failureThreshold: 12
livenessProbe:
failureThreshold: 5
httpGet:
path: '{{ default "" .Values.controller.jenkinsUriPrefix }}/login'
port: http
periodSeconds: 10
timeoutSeconds: 5
# If Startup Probe is not supported on your Kubernetes cluster, you might want to use "initialDelaySeconds" instead.
# It delays the initial liveness probe while Jenkins is starting
# initialDelaySeconds: 60
readinessProbe:
failureThreshold: 3
httpGet:
path: '{{ default "" .Values.controller.jenkinsUriPrefix }}/login'
port: http
periodSeconds: 10
timeoutSeconds: 5
# If Startup Probe is not supported on your Kubernetes cluster, you might want to use "initialDelaySeconds" instead.
# It delays the initial readyness probe while Jenkins is starting
# initialDelaySeconds: 60
agentListenerPort: 50000
agentListenerHostPort:
agentListenerNodePort:
disabledAgentProtocols:
- JNLP-connect
- JNLP2-connect
csrf:
defaultCrumbIssuer:
enabled: true
proxyCompatability: true
# Kubernetes service type for the JNLP agent service
# agentListenerServiceType is the Kubernetes Service type for the JNLP agent service,
# either 'LoadBalancer', 'NodePort', or 'ClusterIP'
# Note if you set this to 'LoadBalancer', you *must* define annotations to secure it. By default
# this will be an external load balancer and allowing inbound 0.0.0.0/0, a HUGE
# security risk: https://github.com/kubernetes/charts/issues/1341
agentListenerServiceType: "ClusterIP"
# Optionally assign an IP to the LoadBalancer agentListenerService LoadBalancer
# GKE users: only regional static IPs will work for Service Load balancer.
agentListenerLoadBalancerIP:
agentListenerServiceAnnotations: {}
# Example of 'LoadBalancer' type of agent listener with annotations securing it
# agentListenerServiceType: LoadBalancer
# agentListenerServiceAnnotations:
# service.beta.kubernetes.io/aws-load-balancer-internal: "True"
# service.beta.kubernetes.io/load-balancer-source-ranges: "172.0.0.0/8, 10.0.0.0/8"
# LoadBalancerSourcesRange is a list of allowed CIDR values, which are combined with ServicePort to
# set allowed inbound rules on the security group assigned to the controller load balancer
loadBalancerSourceRanges:
- 0.0.0.0/0
# Optionally assign a known public LB IP
# loadBalancerIP: 1.2.3.4
# Optionally configure a JMX port
# requires additional javaOpts, ie
# javaOpts: >
# -Dcom.sun.management.jmxremote.port=4000
# -Dcom.sun.management.jmxremote.authenticate=false
# -Dcom.sun.management.jmxremote.ssl=false
# jmxPort: 4000
# Optionally configure other ports to expose in the controller container
extraPorts: []
# - name: BuildInfoProxy
# port: 9000
# List of plugins to be install during Jenkins controller start
# installPlugins:
# - kubernetes:latest
# - kubernetes-credentials-provider:latest
# - workflow-aggregator:2.6
# - git:latest
# - configuration-as-code:1.47
# Set to false to download the minimum required version of all dependencies.
installLatestPlugins: false
# List of plugins to install in addition to those listed in controller.installPlugins
additionalPlugins: []
# Enable to initialize the Jenkins controller only once on initial installation.
# Without this, whenever the controller gets restarted (Evicted, etc.) it will fetch plugin updates which has the potential to cause breakage.
# Note that for this to work, `persistence.enabled` needs to be set to `true`
initializeOnce: false
# Enable to always override the installed plugins with the values of 'controller.installPlugins' on upgrade or redeployment.
# overwritePlugins: true
# Configures if plugins bundled with `controller.image` should be overwritten with the values of 'controller.installPlugins' on upgrade or redeployment.
overwritePluginsFromImage: true
# Enable HTML parsing using OWASP Markup Formatter Plugin (antisamy-markup-formatter), useful with ghprb plugin.
# The plugin is not installed by default, please update controller.installPlugins.
enableRawHtmlMarkupFormatter: false
# Used to approve a list of groovy functions in pipelines used the script-security plugin. Can be viewed under /scriptApproval
scriptApproval: []
# - "method groovy.json.JsonSlurperClassic parseText java.lang.String"
# - "new groovy.json.JsonSlurperClassic"
# List of groovy init scripts to be executed during Jenkins controller start
initScripts: []
# - |
# print 'adding global pipeline libraries, register properties, bootstrap jobs...'
additionalSecrets: []
# - name: nameOfSecret
# value: secretText
# Below is the implementation of Jenkins Configuration as Code. Add a key under configScripts for each configuration area,
# where each corresponds to a plugin or section of the UI. Each key (prior to | character) is just a label, and can be any value.
# Keys are only used to give the section a meaningful name. The only restriction is they may only contain RFC 1123 \ DNS label
# characters: lowercase letters, numbers, and hyphens. The keys become the name of a configuration yaml file on the controller in
# /var/jenkins_home/casc_configs (by default) and will be processed by the Configuration as Code Plugin. The lines after each |
# become the content of the configuration yaml file. The first line after this is a JCasC root element, eg jenkins, credentials,
# etc. Best reference is https://<jenkins_url>/configuration-as-code/reference. The example below creates a welcome message:
JCasC:
defaultConfig: true
configScripts: {}
# welcome-message: |
# jenkins:
# systemMessage: Welcome to our CI\CD server. This Jenkins is configured and managed 'as code'.
# Ignored if securityRealm is defined in controller.JCasC.configScripts and
# ignored if controller.enableXmlConfig=true as controller.securityRealm takes precedence
securityRealm: |-
local:
allowsSignup: false
enableCaptcha: false
users:
- id: "${chart-admin-username}"
name: "Jenkins Admin"
password: "${chart-admin-password}"
# Ignored if authorizationStrategy is defined in controller.JCasC.configScripts
authorizationStrategy: |-
loggedInUsersCanDoAnything:
allowAnonymousRead: false
# Optionally specify additional init-containers
customInitContainers: []
# - name: custom-init
# image: "alpine:3.7"
# imagePullPolicy: Always
# command: [ "uname", "-a" ]
sidecars:
configAutoReload:
# If enabled: true, Jenkins Configuration as Code will be reloaded on-the-fly without a reboot. If false or not-specified,
# jcasc changes will cause a reboot and will only be applied at the subsequent start-up. Auto-reload uses the
# http://<jenkins_url>/reload-configuration-as-code endpoint to reapply config when changes to the configScripts are detected.
enabled: true
image: kiwigrid/k8s-sidecar:0.1.275
imagePullPolicy: IfNotPresent
resources: {}
# limits:
# cpu: 100m
# memory: 100Mi
# requests:
# cpu: 50m
# memory: 50Mi
# How many connection-related errors to retry on
reqRetryConnect: 10
# env:
# - name: REQ_TIMEOUT
# value: "30"
# SSH port value can be set to any unused TCP port. The default, 1044, is a non-standard SSH port that has been chosen at random.
# Is only used to reload jcasc config from the sidecar container running in the Jenkins controller pod.
# This TCP port will not be open in the pod (unless you specifically configure this), so Jenkins will not be
# accessible via SSH from outside of the pod. Note if you use non-root pod privileges (runAsUser & fsGroup),
# this must be > 1024:
sshTcpPort: 1044
# folder in the pod that should hold the collected dashboards:
folder: "/var/jenkins_home/casc_configs"
# If specified, the sidecar will search for JCasC config-maps inside this namespace.
# Otherwise the namespace in which the sidecar is running will be used.
# It's also possible to specify ALL to search in all namespaces:
# searchNamespace:
# Allows you to inject additional/other sidecars
other: []
## The example below runs the client for https://smee.io as sidecar container next to Jenkins,
## that allows to trigger build behind a secure firewall.
## https://jenkins.io/blog/2019/01/07/webhook-firewalls/#triggering-builds-with-webhooks-behind-a-secure-firewall
##
## Note: To use it you should go to https://smee.io/new and update the url to the generated one.
# - name: smee
# image: docker.io/twalter/smee-client:1.0.2
# args: ["--port", "{{ .Values.controller.servicePort }}", "--path", "/github-webhook/", "--url", "https://smee.io/new"]
# resources:
# limits:
# cpu: 50m
# memory: 128Mi
# requests:
# cpu: 10m
# memory: 32Mi
# Name of the Kubernetes scheduler to use
schedulerName: ""
# Node labels and tolerations for pod assignment
# ref: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#nodeselector
# ref: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#taints-and-tolerations-beta-feature
nodeSelector: {}
terminationGracePeriodSeconds:
tolerations: []
affinity: {}
# Leverage a priorityClass to ensure your pods survive resource shortages
# ref: https://kubernetes.io/docs/concepts/configuration/pod-priority-preemption/
priorityClassName:
podAnnotations: {}
# Add StatefulSet annotations
statefulSetAnnotations: {}
# StatefulSet updateStrategy
# ref: https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/#update-strategies
updateStrategy: {}
ingress:
# configures the hostname e.g. jenkins.example.com
tls:
- hosts:
- "jen.kirqe.be"
secretName: jen-kirqe-be-tls
# Can be used to disable rendering controller test resources when using helm template
testEnabled: true
agent:
enabled: true
defaultsProviderTemplate: ""
# URL for connecting to the Jenkins controller
jenkinsUrl:
# connect to the specified host and port, instead of connecting directly to the Jenkins controller
jenkinsTunnel:
kubernetesConnectTimeout: 5
kubernetesReadTimeout: 15
maxRequestsPerHostStr: "32"
namespace: jenkins
image: "jenkins/inbound-agent"
tag: "4.6-1"
workingDir: "/home/jenkins"
customJenkinsLabels: []
# name of the secret to be used for image pulling
imagePullSecretName:
componentName: "jenkins-agent"
websocket: false
privileged: false
runAsUser:
runAsGroup:
resources:
requests:
cpu: "512m"
memory: "512Mi"
limits:
cpu: "512m"
memory: "512Mi"
# You may want to change this to true while testing a new image
alwaysPullImage: false
# Controls how agent pods are retained after the Jenkins build completes
# Possible values: Always, Never, OnFailure
podRetention: "Never"
# You can define the volumes that you want to mount for this container
# Allowed types are: ConfigMap, EmptyDir, HostPath, Nfs, PVC, Secret
# Configure the attributes as they appear in the corresponding Java class for that type
# https://github.com/jenkinsci/kubernetes-plugin/tree/master/src/main/java/org/csanchez/jenkins/plugins/kubernetes/volumes
volumes:
# - type: ConfigMap
# configMapName: myconfigmap
# mountPath: /var/myapp/myconfigmap
# - type: EmptyDir
# mountPath: /var/myapp/myemptydir
# memory: false
- type: HostPath
hostPath: /var/run/docker.sock
mountPath: /var/run/docker.sock
# - type: Nfs
# mountPath: /var/myapp/mynfs
# readOnly: false
# serverAddress: "192.0.2.0"
# serverPath: /var/lib/containers
# - type: PVC
# claimName: mypvc
# mountPath: /var/myapp/mypvc
# readOnly: false
# - type: Secret
# defaultMode: "600"
# mountPath: /var/myapp/mysecret
# secretName: mysecret
# Pod-wide environment, these vars are visible to any container in the agent pod
# You can define the workspaceVolume that you want to mount for this container
# Allowed types are: DynamicPVC, EmptyDir, HostPath, Nfs, PVC
# Configure the attributes as they appear in the corresponding Java class for that type
# https://github.com/jenkinsci/kubernetes-plugin/tree/master/src/main/java/org/csanchez/jenkins/plugins/kubernetes/volumes/workspace
workspaceVolume: {}
# - type: DynamicPVC
# configMapName: myconfigmap
# - type: EmptyDir
# memory: false
# - type: HostPath
# hostPath: /var/lib/containers
# - type: Nfs
# readOnly: false
# serverAddress: "192.0.2.0"
# serverPath: /var/lib/containers
# - type: PVC
# claimName: mypvc
# readOnly: false
# Pod-wide environment, these vars are visible to any container in the agent pod
envVars: []
# - name: PATH
# value: /usr/local/bin
nodeSelector: {}
# Key Value selectors. Ex:
# jenkins-agent: v1
# Executed command when side container gets started
command:
args: "${computer.jnlpmac} ${computer.name}"
# Side container name
sideContainerName: "jnlp"
# Doesn't allocate pseudo TTY by default
TTYEnabled: false
# Max number of spawned agents
containerCap: 10
# Pod name
podName: "default"
# Allows the Pod to remain active for reuse until the configured number of
# minutes has passed since the last step was executed on it.
idleMinutes: 0
# Raw yaml template for the Pod. For example this allows usage of toleration for agent pods.
# https://github.com/jenkinsci/kubernetes-plugin#using-yaml-to-define-pod-templates
# https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/
yamlTemplate: ""
# yamlTemplate: |-
# apiVersion: v1
# kind: Pod
# spec:
# tolerations:
# - key: "key"
# operator: "Equal"
# value: "value"
# Defines how the raw yaml field gets merged with yaml definitions from inherited pod templates: merge or override
yamlMergeStrategy: "override"
# Timeout in seconds for an agent to be online
connectTimeout: 100
# Annotations to apply to the pod.
annotations: {}
# Below is the implementation of custom pod templates for the default configured kubernetes cloud.
# Add a key under podTemplates for each pod template. Each key (prior to | character) is just a label, and can be any value.
# Keys are only used to give the pod template a meaningful name. The only restriction is they may only contain RFC 1123 \ DNS label
# characters: lowercase letters, numbers, and hyphens. Each pod template can contain multiple containers.
# For this pod templates configuration to be loaded the following values must be set:
# controller.JCasC.defaultConfig: true
# Best reference is https://<jenkins_url>/configuration-as-code/reference#Cloud-kubernetes. The example below creates a python pod template.
podTemplates:
docker: |
- name: docker
label: docker
containers:
- name: docker
image: docker:latest
command: "/bin/sh -c"
args: "cat"
ttyEnabled: true
privileged: true
resourceRequestCpu: "400m"
resourceRequestMemory: "512Mi"
resourceLimitCpu: "1"
resourceLimitMemory: "1024Mi"
volumes:
- hostPathVolume:
hostPath: /var/run/docker.sock
mountPath: /var/run/docker.sock
- hostPathVolume:
hostPath: /data
mountPath: /data
# Here you can add additional agents
# They inherit all values from `agent` so you only need to specify values which differ
additionalAgents: {}
# maven:
# podName: maven
# customJenkinsLabels: maven
# # An example of overriding the jnlp container
# # sideContainerName: jnlp
# image: jenkins/jnlp-agent-maven
# tag: latest
# python:
# podName: python
# customJenkinsLabels: python
# sideContainerName: python
# image: python
# tag: "3"
# command: "/bin/sh -c"
# args: "cat"
# TTYEnabled: true
persistence:
enabled: true
## A manually managed Persistent Volume and Claim
## Requires persistence.enabled: true
## If defined, PVC must be created manually before volume will be bound
existingClaim:
## jenkins data Persistent Volume Storage Class
## If defined, storageClassName: <storageClass>
## If set to "-", storageClassName: "", which disables dynamic provisioning
## If undefined (the default) or set to null, no storageClassName spec is
## set, choosing the default provisioner. (gp2 on AWS, standard on
## GKE, AWS & OpenStack)
##
storageClass:
annotations: {}
accessMode: "ReadWriteOnce"
size: "5Gi"
volumes:
# - name: jenkins
# persistentVolumeClaim:
# claimName: jenkins-claim
mounts:
# - name: jenkins
# mountPath: /var/jenkins_home
# volumes:
# # - name: nothing
# # emptyDir: {}
# mounts:
# # - mountPath: /var/nothing
# # name: nothing
# # readOnly: true
networkPolicy:
# Enable creation of NetworkPolicy resources.
enabled: false
# For Kubernetes v1.4, v1.5 and v1.6, use 'extensions/v1beta1'
# For Kubernetes v1.7, use 'networking.k8s.io/v1'
apiVersion: networking.k8s.io/v1
# You can allow agents to connect from both within the cluster (from within specific/all namespaces) AND/OR from a given external IP range
internalAgents:
allowed: true
podLabels: {}
namespaceLabels: {}
# project: myproject
externalAgents: {}
# ipCIDR: 172.17.0.0/16
# except:
# - 172.17.1.0/24
## Install Default RBAC roles and bindings
rbac:
create: true
readSecrets: false
serviceAccount:
create: true
# The name of the service account is autogenerated by default
name:
annotations: {}
imagePullSecretName:
serviceAccountAgent:
# Specifies whether a ServiceAccount should be created
create: false
# The name of the ServiceAccount to use.
# If not set and create is true, a name is generated using the fullname template
name:
annotations: {}
imagePullSecretName:
checkDeprecation: true
You are running Jenkins itself as a container, therefore the docker command-line client must be present inside that container, not on the host.
Easiest solution: Use a Jenkins docker image that contains the docker cli already, for example https://hub.docker.com/r/trion/jenkins-docker-client
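Note that with this chart the build steps run in the agent pod rather than the controller, so it's the agent image that needs the docker CLI. A hedged values.yaml sketch (the image name is a placeholder for any inbound-agent-compatible image that bundles docker):
agent:
  image: "your-registry/inbound-agent-with-docker"  # placeholder image
  tag: "latest"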
I would suggest using Kaniko in your pipeline as it's more secure and follows best practices.
An example pipeline:
pipeline {
    agent {
        kubernetes {
            yaml """
apiVersion: v1
kind: Pod
metadata:
  labels:
    jenkins: worker
spec:
  containers:
  - name: kaniko
    image: gcr.io/kaniko-project/executor:debug
    command: ["/busybox/cat"]
    tty: true
    volumeMounts:
    - name: dockercred
      mountPath: /root/.docker/
  volumes:
  - name: dockercred
    secret:
      secretName: dockercred
"""
        }
    }
    stages {
        stage('Stage 1: Build with Kaniko') {
            steps {
                container('kaniko') {
                    sh '''/kaniko/executor --context=git://github.com/repository/project.git \
                           --destination=repository/image:tag \
                           --insecure \
                           --skip-tls-verify \
                           -v=debug'''
                }
            }
        }
    }
}
You will need to create a docker credentials file and mount it as a configmap/secret in order to push the image.
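For example, an existing docker config.json with registry credentials can be turned into the dockercred secret that the pipeline above mounts (namespace and file path are assumptions; adjust to your setup):
kubectl create secret generic dockercred \
  --from-file=config.json=$HOME/.docker/config.json \
  --namespace jenkins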
I installed and configured an Ejabberd XMPP server. I tested connecting to the server from a mobile app and exchanging messages. Now I want to enable OAuth: I need to integrate OAuth token generation into my own Node.js REST API, so that a login on my REST API = a login on my DB + Ejabberd OAuth token generation. I want to avoid making two login calls (my REST API + Ejabberd) by generating the token within my REST API login and using this token in my Android/iOS apps.
When I open this url:
http://my-server:5280/oauth/authorization_token?response_type=token&client_id=Client1&redirect_uri=http://my-server:3000/ejabberd&scope=get_roster+sasl_auth
I get a login form; after filling it in with valid credentials I get redirected to this url:
http://my-server:5280/oauth/authorization_token
with an empty response: ERR_EMPTY_RESPONSE
For reference, here is my configuration file:
hosts:
- "test"
- "157.245.128.100"
loglevel: 5
log_rotate_size: 10485760
log_rotate_date: ""
log_rotate_count: 1
log_rate_limit: 100
certfiles:
- "/opt/ejabberd/conf/server.pem"
#- "/opt/ejabberd/proxym.dev.pem"
## - "/etc/letsencrypt/live/localhost/fullchain.pem"
## - "/etc/letsencrypt/live/localhost/privkey.pem"
ca_file: "/opt/ejabberd/conf/cacert.pem"
listen:
-
port: 5222
ip: "::"
module: ejabberd_c2s
max_stanza_size: 262144
shaper: c2s_shaper
access: c2s
starttls_required: false
-
port: 5269
ip: "::"
module: ejabberd_s2s_in
max_stanza_size: 524288
-
port: 5443
ip: "::"
module: ejabberd_http
tls: false
request_handlers:
"/admin": ejabberd_web_admin
"/api": mod_http_api
"/bosh": mod_bosh
"/captcha": ejabberd_captcha
"/upload": mod_http_upload
"/ws": ejabberd_http_ws
#"/oauth": ejabberd_oauth
-
port: 5280
ip: "::"
module: ejabberd_http
request_handlers:
"/admin": ejabberd_web_admin
"/oauth": ejabberd_oauth
"/api": mod_http_api
-
port: 1883
ip: "::"
module: mod_mqtt
backlog: 1000
s2s_use_starttls: optional
#disable_sasl_mechanisms: ["X-OAUTH2"]
acl:
local:
user_regexp: ""
loopback:
ip:
- 127.0.0.0/8
- ::1/128
- ::FFFF:127.0.0.1/128
admin:
user:
- "admin#test"
access_rules:
local:
allow: local
c2s:
deny: blocked
allow: all
announce:
allow: admin
configure:
allow: admin
muc_create:
allow: local
pubsub_createnode:
allow: local
trusted_network:
allow: loopback
api_permissions:
"console commands":
from:
- ejabberd_ctl
who: all
what: "*"
"admin access":
who:
access:
allow:
acl: loopback
acl: admin
oauth:
scope: "ejabberd:admin"
access:
allow:
acl: loopback
acl: admin
what:
- "*"
- "!stop"
- "!start"
"public commands":
who:
ip: 127.0.0.1/8
what:
- status
- connected_users_number
commands_admin_access:
who: all
commands:
what:
- "user"
- "admin"
- "open"
shaper:
normal: 1000
fast: 50000
shaper_rules:
max_user_sessions: 10
max_user_offline_messages:
5000: admin
100: all
c2s_shaper:
none: admin
normal: all
s2s_shaper: fast
max_fsm_queue: 10000
acme:
contact: "mailto:admin#test"
ca_url: "https://acme-v01.api.letsencrypt.org"
modules:
mod_adhoc: {}
mod_admin_extra: {}
mod_announce:
access: announce
mod_avatar: {}
mod_blocking: {}
mod_bosh: {}
mod_caps: {}
mod_carboncopy: {}
mod_client_state: {}
mod_configure: {}
mod_disco: {}
mod_fail2ban: {}
mod_http_api: {}
mod_http_upload:
put_url: https://@HOST@:5443/upload
docroot: /home/upload
mod_last: {}
mod_mam:
## Mnesia is limited to 2GB, better to use an SQL backend
## For small servers SQLite is a good fit and is very easy
## to configure. Uncomment this when you have SQL configured:
## db_type: sql
assume_mam_usage: true
default: never
mod_mqtt: {}
mod_muc:
access:
- allow
access_admin:
- allow: admin
access_create: muc_create
access_persistent: muc_create
access_mam:
- allow
default_room_options:
allow_subscription: true # enable MucSub
mam: false
mod_muc_admin: {}
mod_offline:
access_max_user_messages: max_user_offline_messages
mod_ping: {}
mod_privacy: {}
mod_private: {}
mod_proxy65:
access: local
max_connections: 5
mod_pubsub:
access_createnode: pubsub_createnode
plugins:
- flat
- pep
force_node_config:
## Prevent buggy clients from making their bookmarks public
storage:bookmarks:
access_model: whitelist
mod_push: {}
mod_push_keepalive: {}
mod_register:
## Only accept registration requests from the "trusted"
## network (see access_rules section above).
## Think twice before enabling registration from any
## address. See the Jabber SPAM Manifesto for details:
## https://github.com/ge0rg/jabber-spam-fighting-manifesto
ip_access: trusted_network
mod_roster:
versioning: true
mod_s2s_dialback: {}
mod_shared_roster: {}
mod_stream_mgmt:
resend_on_timeout: if_offline
mod_vcard: {}
mod_vcard_xupdate: {}
mod_version:
show_os: false
# mod_admin_extra: {}
#commands_admin_access: configure
#commands:
# - add_commands:
# - user
#oauth_expire: 3600
#oauth_access: all
commands_admin_access:
- allow:
- user: "admin#test"
commands:
- add_commands: [user, admin, open]
oauth_expire: 31536000
oauth_access: all
### Local Variables:
### mode: yaml
### End:
### vim: set filetype=yaml tabstop=8
### host_config:
sql_type: mysql
sql_server: "157.245.128.100"
sql_database: "ejabberd"
sql_username: "ejabberd"
sql_password: "password"
sql_port: 3306
auth_method: sql
default_db: sql
Please check that you have the following configuration in your configuration yml.
commands_admin_access:
- allow:
- user: "admin#localhost" # your user name.
commands:
- add_commands: [user, admin, open]
oauth_access:
- allow:
- user:
- "admin#localhost" # add your user name
oauth_expire: 86400
Please follow the link here:
https://docs.ejabberd.im/developer/ejabberd-api/simple-configuration/
You might have missed some of the configuration above; I ran into a similar problem and found that I was missing part of it.
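Once that is in place, your REST API can obtain a token in a single backend call using the password grant against the /oauth/token endpoint, which avoids the browser authorization form entirely. A sketch along the lines of the ejabberd API docs (host, credentials and scopes are placeholders):
curl -X POST http://my-server:5280/oauth/token \
  -d grant_type=password \
  -d username=admin@test \
  -d password=secret \
  -d "scope=get_roster+sasl_auth"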
I want to run Redis with the ReJSON module on production Kubernetes.
Right now in staging, I am running a single pod of the Redis database as a stateful set.
Is there a helm chart available? Can anyone please share it?
I have tried editing stable/redis and stable/redis-ha with the redislabs/rejson image, but it's not working.
What I have done:
## Configure resource requests and limits
## ref: http://kubernetes.io/docs/user-guide/compute-resources/
##
image:
repository: redislabs/rejson
tag: latest
pullPolicy: IfNotPresent
## replicas number for each component
replicas: 3
## Kubernetes priorityClass name for the redis-ha-server pod
# priorityClassName: ""
## Custom labels for the redis pod
labels: {}
## Pods Service Account
## ref: https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/
serviceAccount:
## Specifies whether a ServiceAccount should be created
##
create: true
## The name of the ServiceAccount to use.
## If not set and create is true, a name is generated using the redis-ha.fullname template
# name:
## Enables a HA Proxy for better LoadBalancing / Sentinel Master support. Automatically proxies to Redis master.
## Recommend for externally exposed Redis clusters.
## ref: https://cbonte.github.io/haproxy-dconv/1.9/intro.html
haproxy:
enabled: false
# Enable if you want a dedicated port in haproxy for redis-slaves
readOnly:
enabled: false
port: 6380
replicas: 1
image:
repository: haproxy
tag: 2.0.4
pullPolicy: IfNotPresent
annotations: {}
resources: {}
## Kubernetes priorityClass name for the haproxy pod
# priorityClassName: ""
## Service type for HAProxy
##
service:
type: ClusterIP
loadBalancerIP:
annotations: {}
serviceAccount:
create: true
## Prometheus metric exporter for HAProxy.
##
exporter:
image:
repository: quay.io/prometheus/haproxy-exporter
tag: v0.9.0
enabled: false
port: 9101
init:
resources: {}
timeout:
connect: 4s
server: 30s
client: 30s
## Role Based Access
## Ref: https://kubernetes.io/docs/admin/authorization/rbac/
##
rbac:
create: true
sysctlImage:
enabled: false
command: []
registry: docker.io
repository: bitnami/minideb
tag: latest
pullPolicy: Always
mountHostSys: false
## Use an alternate scheduler, e.g. "stork".
## ref: https://kubernetes.io/docs/tasks/administer-cluster/configure-multiple-schedulers/
##
# schedulerName:
## Redis specific configuration options
redis:
port: 6379
masterGroupName: mymaster
config:
## Additional redis conf options can be added below
## For all available options see http://download.redis.io/redis-stable/redis.conf
min-replicas-to-write: 1
min-replicas-max-lag: 5 # Value in seconds
maxmemory: "0" # Max memory to use for each redis instance. Default is unlimited.
maxmemory-policy: "volatile-lru" # Max memory policy to use for each redis instance. Default is volatile-lru.
# Determines if scheduled RDB backups are created. Default is false.
# Please note that local (on-disk) RDBs will still be created when re-syncing with a new slave. The only way to prevent this is to enable diskless replication.
save: "900 1"
# When enabled, directly sends the RDB over the wire to slaves, without using the disk as intermediate storage. Default is false.
repl-diskless-sync: "yes"
rdbcompression: "yes"
rdbchecksum: "yes"
## Custom redis.conf files used to override default settings. If this file is
## specified then the redis.config above will be ignored.
# customConfig: |-
# Define configuration here
resources: {}
# requests:
# memory: 200Mi
# cpu: 100m
# limits:
# memory: 700Mi
## Sentinel specific configuration options
sentinel:
port: 26379
quorum: 2
config:
## Additional sentinel conf options can be added below. Only options that
## are expressed in a format similar to 'sentinel xxx mymaster xxx' will
## be properly templated.
## For available options see http://download.redis.io/redis-stable/sentinel.conf
down-after-milliseconds: 10000
## Failover timeout value in milliseconds
failover-timeout: 180000
parallel-syncs: 5
## Custom sentinel.conf files used to override default settings. If this file is
## specified then the sentinel.config above will be ignored.
# customConfig: |-
# Define configuration here
resources: {}
# requests:
# memory: 200Mi
# cpu: 100m
# limits:
# memory: 200Mi
securityContext:
runAsUser: 1000
fsGroup: 1000
runAsNonRoot: true
## Node labels, affinity, and tolerations for pod assignment
## ref: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#nodeselector
## ref: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#taints-and-tolerations-beta-feature
## ref: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#affinity-and-anti-affinity
nodeSelector: {}
## Whether the Redis server pods should be forced to run on separate nodes.
## This is accomplished by setting their AntiAffinity with requiredDuringSchedulingIgnoredDuringExecution as opposed to preferred.
## Ref: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#inter-pod-affinity-and-anti-affinity-beta-feature
##
hardAntiAffinity: true
## Additional affinities to add to the Redis server pods.
##
## Example:
## nodeAffinity:
## preferredDuringSchedulingIgnoredDuringExecution:
## - weight: 50
## preference:
## matchExpressions:
## - key: spot
## operator: NotIn
## values:
## - "true"
##
## Ref: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#affinity-and-anti-affinity
##
additionalAffinities: {}
## Override all other affinity settings for the Redis server pods with a string.
##
## Example:
## affinity: |
## podAntiAffinity:
## requiredDuringSchedulingIgnoredDuringExecution:
## - labelSelector:
## matchLabels:
## app: {{ template "redis-ha.name" . }}
## release: {{ .Release.Name }}
## topologyKey: kubernetes.io/hostname
## preferredDuringSchedulingIgnoredDuringExecution:
## - weight: 100
## podAffinityTerm:
## labelSelector:
## matchLabels:
## app: {{ template "redis-ha.name" . }}
## release: {{ .Release.Name }}
## topologyKey: failure-domain.beta.kubernetes.io/zone
##
affinity: |
# Prometheus exporter specific configuration options
exporter:
enabled: false
image: oliver006/redis_exporter
tag: v0.31.0
pullPolicy: IfNotPresent
# prometheus port & scrape path
port: 9121
scrapePath: /metrics
# cpu/memory resource limits/requests
resources: {}
# Additional args for redis exporter
extraArgs: {}
podDisruptionBudget: {}
# maxUnavailable: 1
# minAvailable: 1
## Configures redis with AUTH (requirepass & masterauth conf params)
auth: false
# redisPassword:
## Use existing secret containing key `authKey` (ignores redisPassword)
# existingSecret:
## Defines the key holding the redis password in existing secret.
authKey: auth
persistentVolume:
enabled: true
## redis-ha data Persistent Volume Storage Class
## If defined, storageClassName: <storageClass>
## If set to "-", storageClassName: "", which disables dynamic provisioning
## If undefined (the default) or set to null, no storageClassName spec is
## set, choosing the default provisioner. (gp2 on AWS, standard on
## GKE, AWS & OpenStack)
##
# storageClass: "-"
accessModes:
- ReadWriteOnce
size: 10Gi
annotations: {}
init:
resources: {}
# To use a hostPath for data, set persistentVolume.enabled to false
# and define hostPath.path.
# Warning: this might overwrite existing folders on the host system!
hostPath:
## path is evaluated as template so placeholders are replaced
# path: "/data/{{ .Release.Name }}"
# if chown is true, an init-container with root permissions is launched to
# change the owner of the hostPath folder to the user defined in the
# security context
chown: true
In the redis-ha chart I have updated two lines to change the image and tag used by the Helm chart.
image:
repository: redislabs/rejson
tag: latest
The pod starts, but when I log in with redis-cli it does not accept JSON as input.
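Before changing anything else, it may help to confirm from redis-cli that the module actually loaded; a minimal check, with an example key and an illustrative version number:
127.0.0.1:6379> MODULE LIST
1) 1) "name"
   2) "ReJSON"
   3) "ver"
   4) (integer) 10002
127.0.0.1:6379> JSON.SET example_key . '"hello"'
OK
127.0.0.1:6379> JSON.GET example_key
"\"hello\""
If MODULE LIST comes back empty, the container is not actually running a module-enabled image (or loads a config without it), which would explain JSON.* commands being rejected.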
It looks like Helm stable/redis supports ReJSON, as stated in the following PR (#7745):
This allows stable/redis to offer a higher degree of flexibility for
those who may need to run images containing redis modules or based on
a different linux distribution than what is currently offered by
bitnami.
Several interesting test cases:
[...]
helm upgrade --install redis-test ./stable/redis --set image.repository=redislabs/rejson --set image.tag=latest
The stable/redis-ha chart also has a PR (#7323) that may make it compatible with ReJSON:
This also removes dependencies on very specific redis images thus
allowing for use of any redis images.
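Since the redis-ha values above already expose image.repository and image.tag, the same style of override should work without editing the chart itself; a sketch, untested:
helm upgrade --install redis-test stable/redis-ha --set image.repository=redislabs/rejson --set image.tag=latest
Note that redislabs/rejson may still lack pieces the HA setup expects (for example the sentinel tooling invoked by the chart's scripts), so the PR above is worth reading before relying on this.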
I am new to Kubernetes and ingress.
I am trying to set up a Jenkins master and slave using Helm on Kubernetes on Docker for Mac.
More info:
I also installed ingress from the following link
Here is my values.yaml
clusterZone: "cluster.local"
master:
# Used for label app.kubernetes.io/component
componentName: "jenkins-master"
image: "jenkins/jenkins"
imageTag: "lts"
imagePullPolicy: "Always"
imagePullSecretName:
# Optionally configure lifetime for master-container
lifecycle:
# postStart:
# exec:
# command:
# - "uname"
# - "-a"
numExecutors: 0
# configAutoReload requires useSecurity to be set to true:
useSecurity: true
# Allows to configure different SecurityRealm using Jenkins XML
securityRealm: |-
<securityRealm class="hudson.security.LegacySecurityRealm"/>
# Allows to configure different AuthorizationStrategy using Jenkins XML
authorizationStrategy: |-
<authorizationStrategy class="hudson.security.FullControlOnceLoggedInAuthorizationStrategy">
<denyAnonymousReadAccess>true</denyAnonymousReadAccess>
</authorizationStrategy>
hostNetworking: false
# When enabling LDAP or another non-Jenkins identity source, the built-in admin account will no longer exist.
# Since the AdminUser is used by configAutoReload, in order to use configAutoReload you must change the
# .master.adminUser to a valid username on your LDAP (or other) server. This user does not need
# to have administrator rights in Jenkins (the default Overall:Read is sufficient) nor will it be granted any
# additional rights. Failure to do this will cause the sidecar container to fail to authenticate via SSH and enter
# a restart loop. Likewise if you disable the non-Jenkins identity store and instead use the Jenkins internal one,
# you should revert master.adminUser to your preferred admin user:
adminUser: "admin"
# adminPassword: <defaults to random>
# adminSshKey: <defaults to auto-generated>
# If CasC auto-reload is enabled, an SSH (RSA) keypair is needed. Can either provide your own, or leave unconfigured to allow a random key to be auto-generated.
# If you supply your own, it is recommended that the values file that contains your key not be committed to source control in an unencrypted format
rollingUpdate: {}
# Ignored if Persistence is enabled
# maxSurge: 1
# maxUnavailable: 25%
resources:
requests:
cpu: "50m"
memory: "256Mi"
limits:
cpu: "2000m"
memory: "4096Mi"
# Environment variables that get added to the init container (useful for e.g. http_proxy)
# initContainerEnv:
# - name: http_proxy
# value: "http://192.168.64.1:3128"
# containerEnv:
# - name: http_proxy
# value: "http://192.168.64.1:3128"
# Set min/max heap here if needed with:
# javaOpts: "-Xms512m -Xmx512m"
# jenkinsOpts: ""
# jenkinsUrl: ""
# If you set this prefix and use ingress controller then you might want to set the ingress path below
# jenkinsUriPrefix: "/jenkins"
# Enable pod security context (must be `true` if runAsUser or fsGroup are set)
usePodSecurityContext: true
# Set runAsUser to 1000 to let Jenkins run as non-root user 'jenkins' which exists in 'jenkins/jenkins' docker image.
# When setting runAsUser to a different value than 0 also set fsGroup to the same value:
# runAsUser: <defaults to 0>
# fsGroup: <will be omitted in deployment if runAsUser is 0>
servicePort: 8080
targetPort: 8080
# For minikube, set this to NodePort, elsewhere use LoadBalancer
# Use ClusterIP if your setup includes ingress controller
serviceType: LoadBalancer
# Jenkins master service annotations
serviceAnnotations: {}
# Jenkins master custom labels
deploymentLabels: {}
# foo: bar
# bar: foo
# Jenkins master service labels
serviceLabels: {}
# service.beta.kubernetes.io/aws-load-balancer-backend-protocol: https
# Put labels on Jenkins master pod
podLabels: {}
# Used to create Ingress record (should be used with ServiceType: ClusterIP)
# nodePort: <to set explicitly, choose port between 30000-32767>
# Enable Kubernetes Liveness and Readiness Probes
# ~ 2 minutes to allow Jenkins to restart when upgrading plugins. Set ReadinessTimeout to be shorter than LivenessTimeout.
healthProbes: true
healthProbesLivenessTimeout: 5
healthProbesReadinessTimeout: 5
healthProbeLivenessPeriodSeconds: 10
healthProbeReadinessPeriodSeconds: 10
healthProbeLivenessFailureThreshold: 5
healthProbeReadinessFailureThreshold: 3
healthProbeLivenessInitialDelay: 90
healthProbeReadinessInitialDelay: 60
slaveListenerPort: 50000
slaveHostPort:
disabledAgentProtocols:
- JNLP-connect
- JNLP2-connect
csrf:
defaultCrumbIssuer:
enabled: true
proxyCompatability: true
cli: false
# Kubernetes service type for the JNLP slave service
# slaveListenerServiceType is the Kubernetes Service type for the JNLP slave service,
# either 'LoadBalancer', 'NodePort', or 'ClusterIP'
# Note if you set this to 'LoadBalancer', you *must* define annotations to secure it. By default
# this will be an external load balancer allowing inbound traffic from 0.0.0.0/0, a HUGE
# security risk: https://github.com/kubernetes/charts/issues/1341
slaveListenerServiceType: "ClusterIP"
slaveListenerServiceAnnotations: {}
slaveKubernetesNamespace:
# Example of 'LoadBalancer' type of slave listener with annotations securing it
# slaveListenerServiceType: LoadBalancer
# slaveListenerServiceAnnotations:
# service.beta.kubernetes.io/aws-load-balancer-internal: "True"
# service.beta.kubernetes.io/load-balancer-source-ranges: "172.0.0.0/8, 10.0.0.0/8"
# LoadBalancerSourcesRange is a list of allowed CIDR values, which are combined with ServicePort to
# set allowed inbound rules on the security group assigned to the master load balancer
loadBalancerSourceRanges:
- 0.0.0.0/0
# Optionally assign a known public LB IP
# loadBalancerIP: 1.2.3.4
# Optionally configure a JMX port
# requires additional javaOpts, ie
# javaOpts: >
# -Dcom.sun.management.jmxremote.port=4000
# -Dcom.sun.management.jmxremote.authenticate=false
# -Dcom.sun.management.jmxremote.ssl=false
# jmxPort: 4000
# Optionally configure other ports to expose in the master container
extraPorts:
# - name: BuildInfoProxy
# port: 9000
# List of plugins to be installed during Jenkins master start
installPlugins:
- kubernetes:1.16.0
- workflow-job:2.32
- workflow-aggregator:2.6
- credentials-binding:1.19
- git:3.10.0
# Enable to always override the installed plugins with the values of 'master.installPlugins' on upgrade or redeployment.
# overwritePlugins: true
# Enable HTML parsing using OWASP Markup Formatter Plugin (antisamy-markup-formatter), useful with ghprb plugin.
# The plugin is not installed by default, please update master.installPlugins.
enableRawHtmlMarkupFormatter: false
# Used to approve a list of groovy functions in pipelines using the script-security plugin. Can be viewed under /scriptApproval
scriptApproval:
# - "method groovy.json.JsonSlurperClassic parseText java.lang.String"
# - "new groovy.json.JsonSlurperClassic"
# List of groovy init scripts to be executed during Jenkins master start
initScripts:
# - |
# print 'adding global pipeline libraries, register properties, bootstrap jobs...'
# Kubernetes secret that contains a 'credentials.xml' for Jenkins
# credentialsXmlSecret: jenkins-credentials
# Kubernetes secret that contains files to be put in the Jenkins 'secrets' directory,
# useful to manage encryption keys used for credentials.xml for instance (such as
# master.key and hudson.util.Secret)
# secretsFilesSecret: jenkins-secrets
# Jenkins XML job configs to provision
jobs:
# test: |-
# <<xml here>>
# Below is the implementation of Jenkins Configuration as Code. Add a key under configScripts for each configuration area,
# where each corresponds to a plugin or section of the UI. Each key (prior to | character) is just a label, and can be any value.
# Keys are only used to give the section a meaningful name. The only restriction is they may only contain RFC 1123 \ DNS label
# characters: lowercase letters, numbers, and hyphens. The keys become the name of a configuration yaml file on the master in
# /var/jenkins_home/casc_configs (by default) and will be processed by the Configuration as Code Plugin. The lines after each |
# become the content of the configuration yaml file. The first line after this is a JCasC root element, eg jenkins, credentials,
# etc. Best reference is https://<jenkins_url>/configuration-as-code/reference. The example below creates a welcome message:
JCasC:
enabled: false
pluginVersion: "1.21"
# it is only used when the plugin version is <=1.18; for later versions the
# Configuration as Code support plugin is no longer needed
supportPluginVersion: "1.18"
configScripts:
welcome-message: |
jenkins:
systemMessage: Welcome to our CI\CD server. This Jenkins is configured and managed 'as code'.
# Optionally specify additional init-containers
customInitContainers: []
# - name: custom-init
# image: "alpine:3.7"
# imagePullPolicy: Always
# command: [ "uname", "-a" ]
sidecars:
configAutoReload:
# If enabled: true, Jenkins Configuration as Code will be reloaded on-the-fly without a reboot. If false or not-specified,
# jcasc changes will cause a reboot and will only be applied at the subsequent start-up. Auto-reload uses the Jenkins CLI
# over SSH to reapply config when changes to the configScripts are detected. The admin user (or account you specify in
# master.adminUser) will have a random SSH private key (RSA 4096) assigned unless you specify adminSshKey. This will be saved to a k8s secret.
enabled: false
image: shadwell/k8s-sidecar:0.0.2
imagePullPolicy: IfNotPresent
resources:
# limits:
# cpu: 100m
# memory: 100Mi
# requests:
# cpu: 50m
# memory: 50Mi
# SSH port value can be set to any unused TCP port. The default, 1044, is a non-standard SSH port that has been chosen at random.
# Is only used to reload jcasc config from the sidecar container running in the Jenkins master pod.
# This TCP port will not be open in the pod (unless you specifically configure this), so Jenkins will not be
# accessible via SSH from outside of the pod. Note if you use non-root pod privileges (runAsUser & fsGroup),
# this must be > 1024:
sshTcpPort: 1044
# folder in the pod that should hold the collected dashboards:
folder: "/var/jenkins_home/casc_configs"
# If specified, the sidecar will search for JCasC config-maps inside this namespace.
# Otherwise the namespace in which the sidecar is running will be used.
# It's also possible to specify ALL to search in all namespaces:
# searchNamespace:
# Allows you to inject additional/other sidecars
other:
## The example below runs the client for https://smee.io as sidecar container next to Jenkins,
## that allows triggering builds behind a secure firewall.
## https://jenkins.io/blog/2019/01/07/webhook-firewalls/#triggering-builds-with-webhooks-behind-a-secure-firewall
##
## Note: To use it you should go to https://smee.io/new and update the url to the generated one.
# - name: smee
# image: docker.io/twalter/smee-client:1.0.2
# args: ["--port", "{{ .Values.master.servicePort }}", "--path", "/github-webhook/", "--url", "https://smee.io/new"]
# resources:
# limits:
# cpu: 50m
# memory: 128Mi
# requests:
# cpu: 10m
# memory: 32Mi
# Node labels and tolerations for pod assignment
# ref: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#nodeselector
# ref: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#taints-and-tolerations-beta-feature
nodeSelector: {}
tolerations: []
# Leverage a priorityClass to ensure your pods survive resource shortages
# ref: https://kubernetes.io/docs/concepts/configuration/pod-priority-preemption/
# priorityClass: system-cluster-critical
podAnnotations: {}
# The below two configuration-related values are deprecated and replaced by Jenkins Configuration as Code (see above
# JCasC key). They will be deleted in an upcoming version.
customConfigMap: false
# By default, the configMap is only used to set the initial config the first time
# that the chart is installed. Setting `overwriteConfig` to `true` will overwrite
# the jenkins config with the contents of the configMap every time the pod starts.
# This will also overwrite all init scripts
overwriteConfig: false
# By default, the Jobs Map is only used to set the initial jobs the first time
# that the chart is installed. Setting `overwriteJobs` to `true` will overwrite
# the jenkins jobs configuration with the contents of Jobs every time the pod starts.
overwriteJobs: false
ingress:
enabled: true
# For Kubernetes v1.14+, use 'networking.k8s.io/v1beta1'
apiVersion: "extensions/v1beta1"
labels: {}
annotations:
  kubernetes.io/ingress.class: nginx
  kubernetes.io/tls-acme: "true"
# Set this path to jenkinsUriPrefix above or use annotations to rewrite path
path: "/jenkins"
# configures the hostname e.g. jenkins.example.com
hostName: localhost
# tls:
# - secretName: jenkins.cluster.local
# hosts:
# - jenkins.cluster.local
# Openshift route
route:
enabled: false
labels: {}
annotations: {}
# path: "/jenkins"
additionalConfig: {}
# master.hostAliases allows for adding entries to Pod /etc/hosts:
# https://kubernetes.io/docs/concepts/services-networking/add-entries-to-pod-etc-hosts-with-host-aliases/
hostAliases: []
# - ip: 192.168.50.50
# hostnames:
# - something.local
# - ip: 10.0.50.50
# hostnames:
# - other.local
# Expose Prometheus metrics
prometheus:
# If enabled, add the prometheus plugin to the list of plugins to install
# https://plugins.jenkins.io/prometheus
enabled: false
# Additional labels to add to the ServiceMonitor object
serviceMonitorAdditionalLabels: {}
scrapeInterval: 60s
# This is the default endpoint used by the prometheus plugin
scrapeEndpoint: /prometheus
# Additional labels to add to the PrometheusRule object
alertingRulesAdditionalLabels: {}
# An array of prometheus alerting rules
# See here: https://prometheus.io/docs/prometheus/latest/configuration/alerting_rules/
# The `groups` root object is added by default, simply add the rule entries
alertingrules: []
agent:
enabled: true
image: "jenkins/jnlp-slave"
imageTag: "3.27-1"
customJenkinsLabels: []
# name of the secret to be used for image pulling
imagePullSecretName:
componentName: "jenkins-slave"
privileged: false
resources:
requests:
cpu: "200m"
memory: "256Mi"
limits:
cpu: "200m"
memory: "256Mi"
# You may want to change this to true while testing a new image
alwaysPullImage: false
# Controls how slave pods are retained after the Jenkins build completes
# Possible values: Always, Never, OnFailure
podRetention: "Never"
# You can define the volumes that you want to mount for this container
# Allowed types are: ConfigMap, EmptyDir, HostPath, Nfs, Pod, Secret
# Configure the attributes as they appear in the corresponding Java class for that type
# https://github.com/jenkinsci/kubernetes-plugin/tree/master/src/main/java/org/csanchez/jenkins/plugins/kubernetes/volumes
# Pod-wide environment, these vars are visible to any container in the slave pod
envVars:
# - name: PATH
# value: /usr/local/bin
volumes:
# - type: Secret
# secretName: mysecret
# mountPath: /var/myapp/mysecret
# - type: EmptyDir
# mountPath: "/var/lib/containers"
# memory: false
nodeSelector: {}
# Key Value selectors. Ex:
# jenkins-agent: v1
# Executed command when side container gets started
command:
args:
# Side container name
sideContainerName: "jnlp"
# Doesn't allocate pseudo TTY by default
TTYEnabled: false
# Max number of spawned agents
containerCap: 10
# Pod name
podName: "default"
# Allows the Pod to remain active for reuse until the configured number of
# minutes has passed since the last step was executed on it.
idleMinutes: 0
# Raw yaml template for the Pod. For example this allows usage of toleration for agent pods.
# https://github.com/jenkinsci/kubernetes-plugin#using-yaml-to-define-pod-templates
# https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/
yamlTemplate:
# yamlTemplate: |-
# apiVersion: v1
# kind: Pod
# spec:
# tolerations:
# - key: "key"
# operator: "Equal"
# value: "value"
persistence:
enabled: true
## A manually managed Persistent Volume and Claim
## Requires persistence.enabled: true
## If defined, PVC must be created manually before volume will be bound
existingClaim:
## jenkins data Persistent Volume Storage Class
## If defined, storageClassName: <storageClass>
## If set to "-", storageClassName: "", which disables dynamic provisioning
## If undefined (the default) or set to null, no storageClassName spec is
## set, choosing the default provisioner. (gp2 on AWS, standard on
## GKE, AWS & OpenStack)
##
storageClass:
annotations: {}
accessMode: "ReadWriteOnce"
size: "8Gi"
volumes:
# - name: nothing
# emptyDir: {}
mounts:
# - mountPath: /var/nothing
# name: nothing
# readOnly: true
networkPolicy:
# Enable creation of NetworkPolicy resources.
enabled: false
# For Kubernetes v1.4, v1.5 and v1.6, use 'extensions/v1beta1'
# For Kubernetes v1.7, use 'networking.k8s.io/v1'
apiVersion: networking.k8s.io/v1
## Install Default RBAC roles and bindings
rbac:
create: true
serviceAccount:
create: true
# The name of the service account is autogenerated by default
name:
annotations: {}
serviceAccountAgent:
# Specifies whether a ServiceAccount should be created
create: false
# The name of the ServiceAccount to use.
# If not set and create is true, a name is generated using the fullname template
name:
annotations: {}
## Backup cronjob configuration
## Ref: https://github.com/nuvo/kube-tasks
backup:
# Backup must use RBAC
# So by enabling backup you are enabling RBAC specific for backup
enabled: false
# Used for label app.kubernetes.io/component
componentName: "backup"
# Schedule to run jobs. Must be in cron time format
# Ref: https://crontab.guru/
schedule: "0 2 * * *"
annotations:
# Example for authorization to AWS S3 using kube2iam
# Can also be done using environment variables
iam.amazonaws.com/role: "jenkins"
image:
repository: "nuvo/kube-tasks"
tag: "0.1.2"
# Additional arguments for kube-tasks
# Ref: https://github.com/nuvo/kube-tasks#simple-backup
extraArgs: []
# Add existingSecret for AWS credentials
existingSecret: {}
## Example for using an existing secret
# jenkinsaws:
## Use this key for AWS access key ID
# awsaccesskey: jenkins_aws_access_key
## Use this key for AWS secret access key
# awssecretkey: jenkins_aws_secret_key
# Add additional environment variables
env:
# Example environment variable required for AWS credentials chain
- name: "AWS_REGION"
value: "us-east-1"
resources:
requests:
memory: 1Gi
cpu: 1
limits:
memory: 1Gi
cpu: 1
destination: "s3://nuvo-jenkins-data/backup"
checkDeprecation: true
But I am not able to access Jenkins at localhost/jenkins. I also tried jenkins.cluster.local.
Can someone please explain how this works?
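For reference, the chart's own comments (the serviceType note "Use ClusterIP if your setup includes ingress controller" and the ingress path note) point to a combination roughly like the following; this is only a sketch assembled from those comments, not a verified configuration:
master:
  # ClusterIP, because traffic should arrive through the ingress controller
  serviceType: ClusterIP
  # Jenkins must know it is served under the /jenkins prefix
  jenkinsUriPrefix: "/jenkins"
  ingress:
    enabled: true
    annotations:
      kubernetes.io/ingress.class: nginx
    # should match jenkinsUriPrefix unless the path is rewritten via annotations
    path: "/jenkins"
    hostName: localhost
With serviceType left at LoadBalancer as in the values above, Jenkins may in fact be reachable directly on localhost:8080 rather than through the ingress.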
Ingress :
#kubectl describe ing jenkins
Name: jenkins
Namespace: jenkins
Address:
Default backend: default-http-backend:80 (<none>)
Rules:
  Host  Path  Backends
  ----  ----  --------
  *
        /     jenkins:8080 (<none>)
Annotations:
Events:  <none>
I also tried this example app, but it is not working for me either.
For this example:
curl -kL http://localhost/banana
curl: (7) Failed to connect to localhost port 80: Connection refused
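Connection refused on port 80 means nothing is listening there at all, i.e. the ingress controller's service is not exposed on localhost. A few checks, assuming the controller was installed into the ingress-nginx namespace (adjust the namespace to match your install):
# Is the controller pod actually running?
kubectl get pods -n ingress-nginx
# How is its service exposed? On Docker for Mac a LoadBalancer service should bind localhost:80/443;
# a NodePort service means curling the reported high port instead of 80.
kubectl get svc -n ingress-nginx
# Then retry through whatever port the service shows, e.g.:
curl -kL http://localhost:<port>/jenkins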