I've configured Jenkins to connect to an OCP cluster, but every job that tries to use the pod template gets stuck in the state below:
Still waiting to schedule task
All nodes of label ‘agent11-optimised’ are offline
Checking on the OCP cluster side, I see no events in the namespace I have configured the plugin to use.
I tested the connection from Jenkins and it connects fine.
The credential is of type "secret text" and contains the kubeconfig of a service account with the roles below bound on the cluster:
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: admin
uid: c485b8c8-3396-465c-9b0c-957754416b85
resourceVersion: '1072863'
creationTimestamp: '2022-04-29T13:52:50Z'
labels:
kubernetes.io/bootstrapping: rbac-defaults
annotations:
rules:
- verbs:
- create
- update
- patch
- delete
apiGroups:
- operators.coreos.com
resources:
- subscriptions
- verbs:
- delete
apiGroups:
- operators.coreos.com
resources:
- clusterserviceversions
- catalogsources
- installplans
- subscriptions
- verbs:
- get
- list
- watch
apiGroups:
- operators.coreos.com
resources:
- clusterserviceversions
- catalogsources
- installplans
- subscriptions
- operatorgroups
- verbs:
- get
- list
- watch
apiGroups:
- packages.operators.coreos.com
resources:
- packagemanifests
- packagemanifests/icon
- verbs:
- create
- update
- patch
- delete
apiGroups:
- packages.operators.coreos.com
resources:
- packagemanifests
- verbs:
- create
- delete
- deletecollection
- get
- list
- patch
- update
- watch
apiGroups:
- ''
resources:
- secrets
- serviceaccounts
- verbs:
- create
- delete
- deletecollection
- get
- list
- patch
- update
- watch
apiGroups:
- ''
- image.openshift.io
resources:
- imagestreamimages
- imagestreammappings
- imagestreams
- imagestreams/secrets
- imagestreamtags
- imagetags
- verbs:
- create
apiGroups:
- ''
- image.openshift.io
resources:
- imagestreamimports
- verbs:
- get
- update
apiGroups:
- ''
- image.openshift.io
resources:
- imagestreams/layers
- verbs:
- get
apiGroups:
- ''
resources:
- namespaces
- verbs:
- get
apiGroups:
- ''
- project.openshift.io
resources:
- projects
- verbs:
- create
- update
- patch
- delete
apiGroups:
- core.libopenstorage.org
resources:
- storageclusters
- verbs:
- create
- update
- patch
- delete
apiGroups:
- core.libopenstorage.org
resources:
- storagenodes
- verbs:
- get
- list
- watch
apiGroups:
- ''
resources:
- pods/attach
- pods/exec
- pods/portforward
- pods/proxy
- secrets
- services/proxy
- verbs:
- impersonate
apiGroups:
- ''
resources:
- serviceaccounts
- verbs:
- create
- delete
- deletecollection
- patch
- update
apiGroups:
- ''
resources:
- pods
- pods/attach
- pods/exec
- pods/portforward
- pods/proxy
- verbs:
- create
- delete
- deletecollection
- patch
- update
apiGroups:
- ''
resources:
- configmaps
- endpoints
- events
- persistentvolumeclaims
- replicationcontrollers
- replicationcontrollers/scale
- secrets
- serviceaccounts
- services
- services/proxy
- verbs:
- create
- delete
- deletecollection
- patch
- update
apiGroups:
- apps
resources:
- daemonsets
- deployments
- deployments/rollback
- deployments/scale
- replicasets
- replicasets/scale
- statefulsets
- statefulsets/scale
- verbs:
- create
- delete
- deletecollection
- patch
- update
apiGroups:
- autoscaling
resources:
- horizontalpodautoscalers
- verbs:
- create
- delete
- deletecollection
- patch
- update
apiGroups:
- batch
resources:
- cronjobs
- jobs
- verbs:
- create
- delete
- deletecollection
- patch
- update
apiGroups:
- extensions
resources:
- daemonsets
- deployments
- deployments/rollback
- deployments/scale
- ingresses
- networkpolicies
- replicasets
- replicasets/scale
- replicationcontrollers/scale
- verbs:
- create
- delete
- deletecollection
- patch
- update
apiGroups:
- policy
resources:
- poddisruptionbudgets
- verbs:
- create
- delete
- deletecollection
- patch
- update
apiGroups:
- networking.k8s.io
resources:
- ingresses
- networkpolicies
- verbs:
- get
- list
- watch
apiGroups:
- metrics.k8s.io
resources:
- pods
- nodes
- verbs:
- create
apiGroups:
- ''
- image.openshift.io
resources:
- imagestreams
- verbs:
- update
apiGroups:
- ''
- build.openshift.io
resources:
- builds/details
- verbs:
- get
apiGroups:
- ''
- build.openshift.io
resources:
- builds
- verbs:
- get
- list
- watch
- create
- update
- patch
- delete
- deletecollection
apiGroups:
- snapshot.storage.k8s.io
resources:
- volumesnapshots
- verbs:
- create
- delete
- deletecollection
- get
- list
- patch
- update
- watch
apiGroups:
- ''
- build.openshift.io
resources:
- buildconfigs
- buildconfigs/webhooks
- builds
- verbs:
- get
- list
- watch
apiGroups:
- ''
- build.openshift.io
resources:
- builds/log
- verbs:
- create
apiGroups:
- ''
- build.openshift.io
resources:
- buildconfigs/instantiate
- buildconfigs/instantiatebinary
- builds/clone
- verbs:
- edit
- view
apiGroups:
- build.openshift.io
resources:
- jenkins
- verbs:
- create
- delete
- deletecollection
- get
- list
- patch
- update
- watch
apiGroups:
- ''
- apps.openshift.io
resources:
- deploymentconfigs
- deploymentconfigs/scale
- verbs:
- create
apiGroups:
- ''
- apps.openshift.io
resources:
- deploymentconfigrollbacks
- deploymentconfigs/instantiate
- deploymentconfigs/rollback
- verbs:
- get
- list
- watch
apiGroups:
- ''
- apps.openshift.io
resources:
- deploymentconfigs/log
- deploymentconfigs/status
- verbs:
- get
- list
- watch
apiGroups:
- ''
- image.openshift.io
resources:
- imagestreams/status
- verbs:
- get
- list
- watch
apiGroups:
- ''
- quota.openshift.io
resources:
- appliedclusterresourcequotas
- verbs:
- create
- delete
- deletecollection
- get
- list
- patch
- update
- watch
apiGroups:
- ''
- route.openshift.io
resources:
- routes
- verbs:
- create
apiGroups:
- ''
- route.openshift.io
resources:
- routes/custom-host
- verbs:
- get
- list
- watch
apiGroups:
- ''
- route.openshift.io
resources:
- routes/status
- verbs:
- create
- delete
- deletecollection
- get
- list
- patch
- update
- watch
apiGroups:
- ''
- template.openshift.io
resources:
- processedtemplates
- templateconfigs
- templateinstances
- templates
- verbs:
- create
- delete
- deletecollection
- get
- list
- patch
- update
- watch
apiGroups:
- networking.k8s.io
resources:
- networkpolicies
- verbs:
- create
- delete
- deletecollection
- get
- list
- patch
- update
- watch
apiGroups:
- ''
- build.openshift.io
resources:
- buildlogs
- verbs:
- get
- list
- watch
apiGroups:
- ''
resources:
- resourcequotausages
- verbs:
- get
- list
- watch
apiGroups:
- packages.operators.coreos.com
resources:
- packagemanifests
- verbs:
- get
- list
- watch
apiGroups:
- ''
- image.openshift.io
resources:
- imagestreamimages
- imagestreammappings
- imagestreams
- imagestreamtags
- imagetags
- verbs:
- get
apiGroups:
- ''
- image.openshift.io
resources:
- imagestreams/layers
- verbs:
- get
apiGroups:
- apiextensions.k8s.io
resources:
- customresourcedefinitions
resourceNames:
- storageclusters.core.libopenstorage.org
- verbs:
- get
- list
- watch
apiGroups:
- core.libopenstorage.org
resources:
- storageclusters
- verbs:
- get
apiGroups:
- apiextensions.k8s.io
resources:
- customresourcedefinitions
resourceNames:
- storagenodes.core.libopenstorage.org
- verbs:
- get
- list
- watch
apiGroups:
- core.libopenstorage.org
resources:
- storagenodes
- verbs:
- get
- list
- watch
apiGroups:
- ''
resources:
- configmaps
- endpoints
- persistentvolumeclaims
- persistentvolumeclaims/status
- pods
- replicationcontrollers
- replicationcontrollers/scale
- serviceaccounts
- services
- services/status
- verbs:
- get
- list
- watch
apiGroups:
- ''
resources:
- bindings
- events
- limitranges
- namespaces/status
- pods/log
- pods/status
- replicationcontrollers/status
- resourcequotas
- resourcequotas/status
- verbs:
- get
- list
- watch
apiGroups:
- ''
resources:
- namespaces
- verbs:
- get
- list
- watch
apiGroups:
- discovery.k8s.io
resources:
- endpointslices
- verbs:
- get
- list
- watch
apiGroups:
- apps
resources:
- controllerrevisions
- daemonsets
- daemonsets/status
- deployments
- deployments/scale
- deployments/status
- replicasets
- replicasets/scale
- replicasets/status
- statefulsets
- statefulsets/scale
- statefulsets/status
- verbs:
- get
- list
- watch
apiGroups:
- autoscaling
resources:
- horizontalpodautoscalers
- horizontalpodautoscalers/status
- verbs:
- get
- list
- watch
apiGroups:
- batch
resources:
- cronjobs
- cronjobs/status
- jobs
- jobs/status
- verbs:
- get
- list
- watch
apiGroups:
- extensions
resources:
- daemonsets
- daemonsets/status
- deployments
- deployments/scale
- deployments/status
- ingresses
- ingresses/status
- networkpolicies
- replicasets
- replicasets/scale
- replicasets/status
- replicationcontrollers/scale
- verbs:
- get
- list
- watch
apiGroups:
- policy
resources:
- poddisruptionbudgets
- poddisruptionbudgets/status
- verbs:
- get
- list
- watch
apiGroups:
- networking.k8s.io
resources:
- ingresses
- ingresses/status
- networkpolicies
- verbs:
- get
- list
- watch
apiGroups:
- snapshot.storage.k8s.io
resources:
- volumesnapshots
- verbs:
- get
- list
- watch
apiGroups:
- ''
- build.openshift.io
resources:
- buildconfigs
- buildconfigs/webhooks
- builds
- verbs:
- view
apiGroups:
- build.openshift.io
resources:
- jenkins
- verbs:
- get
- list
- watch
apiGroups:
- ''
- apps.openshift.io
resources:
- deploymentconfigs
- deploymentconfigs/scale
- verbs:
- get
- list
- watch
apiGroups:
- ''
- route.openshift.io
resources:
- routes
- verbs:
- get
- list
- watch
apiGroups:
- ''
- template.openshift.io
resources:
- processedtemplates
- templateconfigs
- templateinstances
- templates
- verbs:
- get
- list
- watch
apiGroups:
- ''
- build.openshift.io
resources:
- buildlogs
- verbs:
- '*'
apiGroups:
- packages.operators.coreos.com
resources:
- packagemanifests
- verbs:
- create
- delete
- deletecollection
- get
- list
- patch
- update
- watch
apiGroups:
- ''
- authorization.openshift.io
resources:
- rolebindings
- roles
- verbs:
- create
- delete
- deletecollection
- get
- list
- patch
- update
- watch
apiGroups:
- rbac.authorization.k8s.io
resources:
- rolebindings
- roles
- verbs:
- create
apiGroups:
- ''
- authorization.openshift.io
resources:
- localresourceaccessreviews
- localsubjectaccessreviews
- subjectrulesreviews
- verbs:
- create
apiGroups:
- authorization.k8s.io
resources:
- localsubjectaccessreviews
- verbs:
- delete
- get
apiGroups:
- ''
- project.openshift.io
resources:
- projects
- verbs:
- create
apiGroups:
- ''
- authorization.openshift.io
resources:
- resourceaccessreviews
- subjectaccessreviews
- verbs:
- '*'
apiGroups:
- core.libopenstorage.org
resources:
- storageclusters
- verbs:
- '*'
apiGroups:
- core.libopenstorage.org
resources:
- storagenodes
- verbs:
- create
apiGroups:
- ''
- security.openshift.io
resources:
- podsecuritypolicyreviews
- podsecuritypolicyselfsubjectreviews
- podsecuritypolicysubjectreviews
- verbs:
- get
- list
- watch
apiGroups:
- ''
- authorization.openshift.io
resources:
- rolebindingrestrictions
- verbs:
- admin
- edit
- view
apiGroups:
- build.openshift.io
resources:
- jenkins
- verbs:
- delete
- get
- patch
- update
apiGroups:
- ''
- project.openshift.io
resources:
- projects
- verbs:
- update
apiGroups:
- ''
- route.openshift.io
resources:
- routes/status
aggregationRule:
clusterRoleSelectors:
- matchLabels:
rbac.authorization.k8s.io/aggregate-to-admin: 'true'
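For reference, the cloud and pod template on the Jenkins side look roughly like the configuration-as-code style sketch below. This is only an illustration: the server URL, namespace, credential ID and agent image are placeholders rather than my real values; the point is that the pod template label has to match the label the job requests ('agent11-optimised').
jenkins:
  clouds:
    - kubernetes:
        name: "ocp"
        serverUrl: "https://api.example-ocp:6443"      # placeholder
        namespace: "jenkins-agents"                    # placeholder: the namespace the plugin should provision agents in
        credentialsId: "ocp-sa-kubeconfig"             # placeholder: the secret-text kubeconfig credential
        templates:
          - name: "agent11-optimised"
            label: "agent11-optimised"                 # must match the label used by the jobs
            containers:
              - name: "jnlp"
                image: "jenkins/inbound-agent:latest"  # placeholder agent image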
Related
I have created a docker-compose.yml for Elasticsearch and Kibana as shown below. The docker-compose.yml works fine, but with no index and no data. I remember that in one of my database-specific docker-compose.yml files I add seed data as shown below, placing my SQL scripts in the docker/db folder:
services:
  postgres:
    image: postgres:9.6
    volumes:
      - ./docker/db:/docker-entrypoint-initdb.d
    environment:
      POSTGRES_DB: some_db
    ports:
      - 5432:5432
Now the question is: similarly to the approach above, how do I specify the ES index and data volume, and what file extension should the ES seed files use?
To be more specific, I want the ES index and data below to exist when Elasticsearch starts:
PUT test
POST _bulk
{ "index" : { "_index" : "test"} }
{ "name" : "A" }
{ "index" : { "_index" : "test"} }
{ "name" : "B" }
{ "index" : { "_index" : "test"} }
{ "name" : "C" }
{ "index" : { "_index" : "test"} }
{ "name" : "D" }
docker-compose.yml
version: '3.7'
services:
  # Elasticsearch Docker Images: https://www.docker.elastic.co/
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.4.0
    container_name: elasticsearch
    environment:
      - xpack.security.enabled=false
      - discovery.type=single-node
    ulimits:
      memlock:
        soft: -1
        hard: -1
      nofile:
        soft: 65536
        hard: 65536
    cap_add:
      - IPC_LOCK
    volumes:
      - elasticsearch-data:/usr/share/elasticsearch/data
    ports:
      - 9200:9200
      - 9300:9300
  kibana:
    container_name: kibana
    image: docker.elastic.co/kibana/kibana:7.4.0
    environment:
      - ELASTICSEARCH_HOSTS=http://elasticsearch:9200
    ports:
      - 5601:5601
    depends_on:
      - elasticsearch
volumes:
  elasticsearch-data:
    driver: local
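As far as I know the official Elasticsearch image has no equivalent of Postgres' docker-entrypoint-initdb.d, so one direction I'm considering is an untested sketch: a one-shot seeding service (added under services:) that waits for Elasticsearch and then replays the PUT and _bulk requests above. The image tag and the docker/es/seed.ndjson file name are assumptions:
  elasticsearch-seed:
    image: curlimages/curl:7.72.0        # any small image with curl and /bin/sh would do
    depends_on:
      - elasticsearch
    volumes:
      - ./docker/es:/seed:ro             # seed.ndjson holds the _bulk body shown above (newline-terminated)
    entrypoint:
      - /bin/sh
      - -c
      - |
        until curl -s http://elasticsearch:9200 >/dev/null; do sleep 2; done
        curl -s -XPUT http://elasticsearch:9200/test
        curl -s -XPOST http://elasticsearch:9200/_bulk -H 'Content-Type: application/x-ndjson' --data-binary @/seed/seed.ndjson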
I'm trying to build a monitoring stack with Traefik, Grafana, Zabbix, Gotify, etc.
I have a domain name, domain.tld.
In my docker-compose file some services use different ports (Grafana, for example), but some services listen on the same port (Gotify, Zabbix).
I want to route zabbix.domain.tld, grafana.domain.tld, and so on to their respective containers, with SSL.
It works, but not entirely.
If I put grafana.domain.tld in my address bar -> 404 error (with the SSL redirection applied).
If I put grafana.domain.tld:3000 in my address bar -> it works.
I think I'm a little (or completely?) lost in my many modifications; the documentation alone hasn't been enough.
So, my docker-compose:
version: '3.5'
networks:
traefik_front:
external: true
services:
traefik:
image: traefik
command: --api --docker
restart: unless-stopped
ports:
- "80:80"
- "443:443"
- "8080:8080"
volumes:
- "${TRAEFIK_PATH}/traefik.toml:/etc/traefik/traefik.toml"
- "${TRAEFIK_PATH}/acme.json:/acme.json"
- /var/run/docker.sock:/var/run/docker.sock
labels:
- "traefik.frontend.rule=Host:traefik.${DOMAIN}"
- "treafik.port=8080"
- "traefik.enable=true"
- "traefik.backend=traefik"
- "traefik.docker.network=traefik_front"
#- "traefik.frontend.entryPoints=http,https"
networks:
- traefik_front
gotify:
image: gotify/server
container_name: gotify
volumes:
- "${GOTIFY_PATH}:/app/data"
env_file:
- env/.env_gotify
labels:
- "traefik.frontend.rule=Host:push.${DOMAIN}"
- "traefik.port=80"
- "traefik.enable=true"
- "traefik.backend=gotify"
- "traefik.docker.network=traefik_front"
networks:
- traefik_front
- default
grafana:
image: grafana/grafana
container_name: grafana
volumes:
- "${GF_PATH}:/var/lib/grafana"
env_file:
- env/.env_grafana
labels:
- "traefik.frontend.rule=Host:grafana.${DOMAIN}"
- "traefik.port=3000"
- "traefik.enable=true"
- "traefik.backend=grafana"
- "traefik.docker.network=traefik_front"
networks:
- traefik_front
- default
zabbix-server:
image: zabbix/zabbix-server-mysql:ubuntu-4.0-latest
volumes:
- "${ZABBIX_PATH}/alertscripts:/usr/lib/zabbix/alertscripts:ro"
- "${ZABBIX_PATH}/externalscripts:/usr/lib/zabbix/externalscripts:ro"
- "${ZABBIX_PATH}/modules:/var/lib/zabbix/modules:ro"
- "${ZABBIX_PATH}/enc:/var/lib/zabbix/enc:ro"
- "${ZABBIX_PATH}/ssh_keys:/var/lib/zabbix/ssh_keys:ro"
- "${ZABBIX_PATH}/mibs:/var/lib/zabbix/mibs:ro"
- "${ZABBIX_PATH}/snmptraps:/var/lib/zabbix/snmptraps:ro"
links:
- mysql-server:mysql-server
env_file:
- env/.env_zabbix_db_mysql
- env/.env_zabbix_srv
user: root
depends_on:
- mysql-server
- zabbix-snmptraps
labels:
- "traefik.backend=zabbix-server"
- "traefik.port=10051"
zabbix-web-apache-mysql:
image: zabbix/zabbix-web-apache-mysql:ubuntu-4.0-latest
links:
- mysql-server:mysql-server
- zabbix-server:zabbix-server
volumes:
- "${ZABBIX_PATH}/ssl/apache2:/etc/ssl/apache2:ro"
env_file:
- env/.env_zabbix_db_mysql
- env/.env_zabbix_web
user: root
depends_on:
- mysql-server
- zabbix-server
labels:
- "traefik.frontend.rule=Host:zabbix.${DOMAIN}"
- "traefik.port=80"
- "traefik.enable=true"
- "traefik.backend=zabbix-web"
- "traefik.docker.network=traefik_front"
networks:
- traefik_front
- default
zabbix-agent:
image: zabbix/zabbix-agent:ubuntu-4.0-latest
ports:
- "10050:10050"
volumes:
- "${ZABBIX_PATH}/zabbix_agentd.d:/etc/zabbix/zabbix_agentd.d:ro"
- "${ZABBIX_PATH}/modules:/var/lib/zabbix/modules:ro"
- "${ZABBIX_PATH}/enc:/var/lib/zabbix/enc:ro"
- "${ZABBIX_PATH}/ssh_keys:/var/lib/zabbix/ssh_keys:ro"
links:
- zabbix-server:zabbix-server
env_file:
- env/.env_zabbix_agent
user: root
networks:
- default
zabbix-snmptraps:
image: zabbix/zabbix-snmptraps:ubuntu-4.0-latest
ports:
- "162:162/udp"
volumes:
- "${ZABBIX_PATH}/snmptraps:/var/lib/zabbix/snmptraps:rw"
user: root
networks:
- default
mysql-server:
image: mysql:5.7
command: [mysqld, --character-set-server=utf8, --collation-server=utf8_bin]
volumes:
- /var/lib/mysql:/var/lib/mysql:rw
env_file:
- env/.env_zabbix_db_mysql
labels:
- "traefik.enable=false"
user: root
networks:
- default
And my traefik.toml:
# WEBUI
[web]
entryPoint = "dashboard"
dashboard = true
address = ":8080"
usersFile = "/etc/docker/traefik/.htpasswd"
logLevel = "ERROR"
# Force HTTPS
defaultEntryPoints = ["http", "https"]
[entryPoints]
[entryPoints.dashboard]
address = ":8080"
[entryPoints.http]
address = ":80"
[entryPoints.http.redirect]
entryPoint = "https"
[entryPoints.https]
address = ":443"
[entryPoints.https.tls]
[docker]
endpoint = "unix:///var/run/docker.sock"
watch = true
exposedbydefault = false
domain = "domain.tld"
network = "traefik_front"
# Let's Encrypt
[acme]
email = "mail#mail.fr"
storage = "acme.json"
entryPoint = "https"
onHostRule = true
onDemand = false
[acme.httpChallenge]
entryPoint = "http"
OnHostRule = true
[[acme.domains]]
main = "domain.tld"
[[acme.domains]]
main = "domain.tld"
[[acme.domains]]
main = "domain.tld"
[[acme.domains]]
main = "domain.tld"
I've done something similar, and it would look like this on your setup:
docker-compose.yml
services:
  traefik:
    labels:
      - "traefik.port=8080"
      - "traefik.enable=true"
      - "traefik.backend=traefik"
      - "traefik.docker.network=traefik_front"
      - "traefik.frontend.rule=Host:traefik.${DOMAIN}"
      - "traefik.webservice.frontend.entryPoints=https"
  zabbix-web-apache-mysql:
    labels:
      - "traefik.port=80"
      - "traefik.enable=true"
      - "traefik.backend=zabbix-web"
      - "traefik.passHostHeader=true"
      - "traefik.docker.network=traefik_front"
      - "traefik.frontend.rule=Host:zabbix.${DOMAIN}"
  grafana:
    labels:
      - "traefik.port=3000"
      - "traefik.enable=true"
      - "traefik.backend=grafana"
      - "traefik.passHostHeader=true"
      - "traefik.docker.network=traefik_front"
      - "traefik.frontend.rule=Host:grafana.${DOMAIN}"
And here is how my traefik.toml is configured:
InsecureSkipVerify = true ## This is optional
## Force HTTPS
[entryPoints]
[entryPoints.http]
passHostHeader = true
address = ":80"
[entryPoints.http.forwardedHeaders]
insecure = true
[entryPoints.http.proxyProtocol]
insecure = true
## This seems to be an absolute requirement for redirect
## ...but it redirects every request to https
[entryPoints.http.redirect]
entryPoint = "https"
[entryPoints.traefik]
address = ":8080"
[entryPoints.traefik.auth.basic]
# the "user" password is the MD5 encrpytion of the word "pass"
users = ["user:$apr1$.LWU4fEi$4YipxeuXs5T0xulH3S7Kb."]
[entryPoints.https]
passHostHeader = true
address = ":443"
[entryPoints.https.tls] ## This seems to be an absolute requirement
[entryPoints.https.forwardedHeaders]
insecure = true
[entryPoints.https.proxyProtocol]
insecure = true
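One more thing that may be worth trying (not a guaranteed fix): pinning the frontend explicitly to the entry points on each exposed service, the same idea as the commented-out label in your compose file, for example on grafana:
  grafana:
    labels:
      - "traefik.frontend.entryPoints=http,https"
With the http entry point redirecting to https, a request to grafana.domain.tld should then terminate TLS on Traefik and be proxied to port 3000 inside the container, so you shouldn't need to hit :3000 directly.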
I'm running into a problem running Browsersync via Gulp inside a Docker container. I'm using Laradock.
I tried to extrapolate from this answer: Browsersync within a Docker container, but I'm only able to get the UI to show on port 3001.
It's unclear to me what I should use for proxy. I've tried many combinations, such as:
function browserSync(done) {
browsersync.init({
proxy: 'workspace:22', notify:false, open:false
});
done();
}
// -----------------
function browserSync(done) {
browsersync.init({
proxy: 'workspace:3000', notify:false, open:false
});
done();
}
// ---------------
function browserSync(done) {
browsersync.init({
proxy: 'localhost', notify:false, open:false
});
done();
}
I've added ports 3000 and 3001 to docker-compose.yml under my workspace service:
### Workspace Utilities ##################################
workspace:
build:
context: ./workspace
args:
- LARADOCK_PHP_VERSION=${PHP_VERSION}
- LARADOCK_PHALCON_VERSION=${PHALCON_VERSION}
- INSTALL_SUBVERSION=${WORKSPACE_INSTALL_SUBVERSION}
- INSTALL_XDEBUG=${WORKSPACE_INSTALL_XDEBUG}
- INSTALL_PHPDBG=${WORKSPACE_INSTALL_PHPDBG}
- INSTALL_BLACKFIRE=${INSTALL_BLACKFIRE}
- INSTALL_SSH2=${WORKSPACE_INSTALL_SSH2}
- INSTALL_GMP=${WORKSPACE_INSTALL_GMP}
- INSTALL_SOAP=${WORKSPACE_INSTALL_SOAP}
- INSTALL_XSL=${WORKSPACE_INSTALL_XSL}
- INSTALL_LDAP=${WORKSPACE_INSTALL_LDAP}
- INSTALL_IMAP=${WORKSPACE_INSTALL_IMAP}
- INSTALL_MONGO=${WORKSPACE_INSTALL_MONGO}
- INSTALL_AMQP=${WORKSPACE_INSTALL_AMQP}
- INSTALL_PHPREDIS=${WORKSPACE_INSTALL_PHPREDIS}
- INSTALL_MSSQL=${WORKSPACE_INSTALL_MSSQL}
- INSTALL_NODE=${WORKSPACE_INSTALL_NODE}
- NPM_REGISTRY=${WORKSPACE_NPM_REGISTRY}
- INSTALL_YARN=${WORKSPACE_INSTALL_YARN}
- INSTALL_NPM_GULP=${WORKSPACE_INSTALL_NPM_GULP}
- INSTALL_NPM_BOWER=${WORKSPACE_INSTALL_NPM_BOWER}
- INSTALL_NPM_VUE_CLI=${WORKSPACE_INSTALL_NPM_VUE_CLI}
- INSTALL_NPM_ANGULAR_CLI=${WORKSPACE_INSTALL_NPM_ANGULAR_CLI}
- INSTALL_DRUSH=${WORKSPACE_INSTALL_DRUSH}
- INSTALL_WP_CLI=${WORKSPACE_INSTALL_WP_CLI}
- INSTALL_DRUPAL_CONSOLE=${WORKSPACE_INSTALL_DRUPAL_CONSOLE}
- INSTALL_AEROSPIKE=${WORKSPACE_INSTALL_AEROSPIKE}
- INSTALL_V8JS=${WORKSPACE_INSTALL_V8JS}
- COMPOSER_GLOBAL_INSTALL=${WORKSPACE_COMPOSER_GLOBAL_INSTALL}
- COMPOSER_AUTH=${WORKSPACE_COMPOSER_AUTH}
- COMPOSER_REPO_PACKAGIST=${WORKSPACE_COMPOSER_REPO_PACKAGIST}
- INSTALL_WORKSPACE_SSH=${WORKSPACE_INSTALL_WORKSPACE_SSH}
- INSTALL_LARAVEL_ENVOY=${WORKSPACE_INSTALL_LARAVEL_ENVOY}
- INSTALL_LARAVEL_INSTALLER=${WORKSPACE_INSTALL_LARAVEL_INSTALLER}
- INSTALL_DEPLOYER=${WORKSPACE_INSTALL_DEPLOYER}
- INSTALL_PRESTISSIMO=${WORKSPACE_INSTALL_PRESTISSIMO}
- INSTALL_LINUXBREW=${WORKSPACE_INSTALL_LINUXBREW}
- INSTALL_MC=${WORKSPACE_INSTALL_MC}
- INSTALL_SYMFONY=${WORKSPACE_INSTALL_SYMFONY}
- INSTALL_PYTHON=${WORKSPACE_INSTALL_PYTHON}
- INSTALL_IMAGE_OPTIMIZERS=${WORKSPACE_INSTALL_IMAGE_OPTIMIZERS}
- INSTALL_IMAGEMAGICK=${WORKSPACE_INSTALL_IMAGEMAGICK}
- INSTALL_TERRAFORM=${WORKSPACE_INSTALL_TERRAFORM}
- INSTALL_DUSK_DEPS=${WORKSPACE_INSTALL_DUSK_DEPS}
- INSTALL_PG_CLIENT=${WORKSPACE_INSTALL_PG_CLIENT}
- INSTALL_PHALCON=${WORKSPACE_INSTALL_PHALCON}
- INSTALL_SWOOLE=${WORKSPACE_INSTALL_SWOOLE}
- INSTALL_LIBPNG=${WORKSPACE_INSTALL_LIBPNG}
- INSTALL_IONCUBE=${WORKSPACE_INSTALL_IONCUBE}
- INSTALL_MYSQL_CLIENT=${WORKSPACE_INSTALL_MYSQL_CLIENT}
- PUID=${WORKSPACE_PUID}
- PGID=${WORKSPACE_PGID}
- CHROME_DRIVER_VERSION=${WORKSPACE_CHROME_DRIVER_VERSION}
- NODE_VERSION=${WORKSPACE_NODE_VERSION}
- YARN_VERSION=${WORKSPACE_YARN_VERSION}
- DRUSH_VERSION=${WORKSPACE_DRUSH_VERSION}
- TZ=${WORKSPACE_TIMEZONE}
- BLACKFIRE_CLIENT_ID=${BLACKFIRE_CLIENT_ID}
- BLACKFIRE_CLIENT_TOKEN=${BLACKFIRE_CLIENT_TOKEN}
- INSTALL_POWERLINE=${WORKSPACE_INSTALL_POWERLINE}
- INSTALL_FFMPEG=${WORKSPACE_INSTALL_FFMPEG}
volumes:
- ${APP_CODE_PATH_HOST}:${APP_CODE_PATH_CONTAINER}${APP_CODE_CONTAINER_FLAG}
extra_hosts:
- "dockerhost:${DOCKER_HOST_IP}"
ports:
- "${WORKSPACE_SSH_PORT}:22"
- "3000:3000"
- "3001:3001"
tty: true
environment:
- PHP_IDE_CONFIG=${PHP_IDE_CONFIG}
- DOCKER_HOST=tcp://docker-in-docker:2375
networks:
- frontend
- backend
links:
- docker-in-docker
I'm running Apache on port 80, so my app can be seen at http://localhost.
I'm able to access the Browsersync UI at localhost:3001, but I can't access localhost:3000.
I am using Docker Swarm mode for a Hyperledger Composer setup, and I am new to Docker. My Fabric network is running okay. When I use service names in the connection.json file, installing the network fails with "REQUEST_TIMEOUT"; but when I use the host's IP address instead of the service names, everything works fine. So how can I resolve the service/container names?
Here is my peer configuration:
peer1:
  deploy:
    replicas: 1
    restart_policy:
      condition: on-failure
      delay: 5s
      max_attempts: 3
  hostname: peer1.eprocure.org.com
  image: hyperledger/fabric-peer:$ARCH-1.1.0
  networks:
    hyperledger-ov:
      aliases:
        - peer1.eprocure.org.com
  environment:
    - CORE_LOGGING_LEVEL=debug
    - CORE_CHAINCODE_LOGGING_LEVEL=DEBUG
    - CORE_VM_ENDPOINT=unix:///host/var/run/docker.sock
    - CORE_PEER_ID=peer1.eprocure.org.com
    - CORE_PEER_ADDRESS=peer1.eprocure.org.com:7051
    - CORE_PEER_LOCALMSPID=eProcureMSP
    - CORE_PEER_MSPCONFIGPATH=/etc/hyperledger/peer/msp
    - CORE_LEDGER_STATE_STATEDATABASE=CouchDB
    - CORE_LEDGER_STATE_COUCHDBCONFIG_COUCHDBADDRESS=couchdb1:5984
    - CORE_VM_DOCKER_HOSTCONFIG_NETWORKMODE=hyperledger-ov
    - CORE_PEER_GOSSIP_EXTERNALENDPOINT=peer1.eprocure.org.com:7051
    - CORE_PEER_ENDORSER_ENABLED=true
    - CORE_PEER_GOSSIP_USELEADERELECTION=true
    - CORE_PEER_GOSSIP_ORGLEADER=false
    - CORE_PEER_GOSSIP_SKIPHANDSHAKE=true
    - CORE_PEER_PROFILE_ENABLED=true
  working_dir: /opt/gopath/src/github.com/hyperledger/fabric
  command: peer node start
  volumes:
    - /var/run/:/host/var/run/
    - /export/composer/genesis-folder:/etc/hyperledger/configtx
    - /export/composer/crypto-config/peerOrganizations/eprocure.org.com/peers/peer1.eprocure.org.com/msp:/etc/hyperledger/peer/msp
    - /export/composer/crypto-config/peerOrganizations/eprocure.org.com/users:/etc/hyperledger/msp/users
  ports:
    - 8051:7051
    - 8053:7053
And here is my current connection.json with IP addresses:
"peers": {
"peer0.eprocure.org.com": {
"url": "grpc://192.168.0.147:7051",
"eventUrl": "grpc://192.168.0.147:7053"
},
"peer1.eprocure.org.com": {
"url": "grpc://192.168.0.147:8051",
"eventUrl": "grpc://192.168.0.147:8053"
},
"peer2.eprocure.org.com": {
"url": "grpc://192.168.0.147:9051",
"eventUrl": "grpc://192.168.0.147:9053"
}
},
I have tried the following before:
"peers": {
"peer0.eprocure.org.com": {
"url": "grpc://peers_peer0:7051",
"eventUrl": "grpc://peers_peer0:7053"
},
"peer1.eprocure.org.com": {
"url": "grpc://peers_peer1:8051",
"eventUrl": "grpc://peers_peer2:8053"
},
"peer2.eprocure.org.com": {
"url": "grpc://peers_peer2:9051",
"eventUrl": "grpc://peers_peer2:9053"
}
}
But this doesn't work.
Can anyone please let me know how I can solve this problem?
I have a simple Grails application. I did not author the front end, only the business logic layer.
I checked out all the source from SVN and the app starts, but I cannot load the main URL. It errors out with the messages below. I have tried refreshing dependencies, but to no avail.
I have spelunked through every file I could think of to try to fix this. What gets my attention is the FORWARD slash in front of the CSS path, while the other delimiters in the path are backslashes.
Does anyone have any idea where this is going wrong and how to fix it? Maybe the front-end developer needs to check something in?
Error 2013-07-31 13:50:24,036 [http-bio-8080-exec-4] ERROR [/MyClientAppName].[grails-errorhandler] - Servlet.service() for servlet grails-errorhandler threw exception
Message: Error applying layout : main
Line | Method
->> 1110 | runWorker in \grails-app\views\layouts\main.gsp
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
| 603 | run in ''
^ 722 | run . . . in ''
Caused by GroovyPagesException: Error processing GroovyPageView: Error executing tag <r:layoutResources>: Module [bootstrap] depends on resource [/css\bootstrap\bootstrap-responsive.css] but the file cannot be found
->> 464 | runWorker in \grails-app\views\layouts\main.gsp
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
Caused by GrailsTagException: Error executing tag <r:layoutResources>: Module [bootstrap] depends on resource [/css\bootstrap\bootstrap-responsive.css] but the file cannot be found
->> 8 | doCall in C:/workspaces/GGTS1/MyClientAppName/grails-app/views/layouts/main.gsp
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
Caused by IllegalArgumentException: Module [bootstrap] depends on resource [/css\bootstrap\bootstrap-responsive.css] but the file cannot be found
->