I've added this to application.yml, but it doesn't work:

server:
  session:
    timeout: 3600 # seconds
You have to add:

server:
  servlet:
    session:
      timeout: 3600
Instead of:

server:
  session:
    timeout: 3600 # seconds
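Since Spring Boot 2, this property also accepts a duration suffix; a bare number is interpreted as seconds. A sketch, assuming a Spring Boot 2.x application:

```yaml
server:
  servlet:
    session:
      timeout: 60m   # duration suffix; a plain "3600" is read as 3600 seconds
```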
I set up the Mosquitto password using a password file:

volumes:
  - /password:/mosquitto/config
How can I add a healthcheck in docker-compose? I tried the solution provided here:

# Script to check mosquitto is healthy
healthcheck:
  test: ["CMD-SHELL", "timeout -t 5 mosquitto_sub -t '$$SYS/#' -C 1 | grep -v Error || exit 1"]
  interval: 10s
  timeout: 10s
  retries: 6
Also, I tried a couple of other options, but they require passing a username and password. Can't I use this password file?

Update: my mosquitto.conf:
allow_anonymous false
password_file /mosquitto/config/pwfile
port 1883
listener 9001
persistence true
persistence_location /mosquitto/data/
log_dest file /mosquitto/log/mosquitto.log
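Since allow_anonymous is false, a subscribe-based check has to authenticate. A hedged sketch that passes credentials explicitly (healthuser/healthpass are placeholders for an account you would add to the password file; -W bounds how long the subscribe waits):

```yaml
healthcheck:
  test: ["CMD-SHELL", "mosquitto_sub -u healthuser -P healthpass -t '$$SYS/#' -C 1 -W 5 | grep -v Error || exit 1"]
  interval: 10s
  timeout: 10s
  retries: 6
```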
At a push, you could enable a listener with MQTT over WebSockets as the protocol and then use a basic curl GET request to check that the broker is up.
E.g. add this to the mosquitto.conf:
listener 8080 127.0.0.1
protocol websockets
and a health check something like:

healthcheck:
  test: ["CMD", "curl", "-f", "http://localhost:8080"]
  interval: 10s
  timeout: 10s
  retries: 6
The raw HTTP GET request should complete without needing to authenticate.
The other option is to re-enable anonymous users and grant the anonymous user read-only access to the $SYS/# topic pattern using an ACL file (acl_file).
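A minimal sketch of that ACL approach (file locations are assumptions):

```
# mosquitto.conf
allow_anonymous true
acl_file /mosquitto/config/aclfile

# /mosquitto/config/aclfile: rules before any "user" line also apply to
# anonymous clients; grant read-only access to the broker status topics
topic read $SYS/#
```

With that in place, an unauthenticated mosquitto_sub probe against $SYS/# should succeed while everything else still requires credentials.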
What's the best way to test the health of Keycloak configured as a cluster and deployed as a Docker Swarm service?
I tried the below healthcheck for testing availability in Keycloak service descriptor:
healthcheck:
  test: ["CMD-SHELL", "curl http://localhost:8080/auth/realms/[realm_name]"]
  interval: 30s
  timeout: 10s
  retries: 10
  start_period: 1m
Are there more things to check for?
Couldn't find the documentation for this.
I prefer to check the 'master' realm directly.
Moreover, the most recent Keycloak versions use a different path (omitting 'auth'):
healthcheck:
  test: ["CMD", "curl", "-f", "http://0.0.0.0:8080/realms/master"]
  start_period: 10s
  interval: 30s
  retries: 3
  timeout: 5s
One can also use the /health endpoint on the Keycloak container as follows:
"healthCheck": {
  "retries": 3,
  "command": [
    "CMD-SHELL",
    "curl -f http://localhost:8080/health || exit 1"
  ],
  "timeout": 5,
  "interval": 60,
  "startPeriod": 300
}
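Note that on the Quarkus-based Keycloak releases the health endpoints are disabled by default, so a check like the one above only works after enabling them. A hedged docker-compose sketch (on the newest releases the health endpoints are served on the separate management port, 9000 by default; on older Quarkus versions they are on 8080, and some recent images no longer ship curl at all, in which case a different probe command is needed):

```yaml
environment:
  KC_HEALTH_ENABLED: "true"
healthcheck:
  test: ["CMD-SHELL", "curl -f http://localhost:9000/health/ready || exit 1"]
  interval: 30s
  timeout: 5s
  retries: 3
```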
I'm trying to set the serverURL per environment in application.yml as follows:
environments:
  development:
    grails:
      serverURL: http://localhost:8089
    dataSource:
      dbCreate: create
      url: jdbc:postgresql://localhost:5432/tests
      username: postgress
      password: rootass
But it doesn't work: when I do run-app it still runs on 8080. Also, how do I set the app name or context name so that when I do run-app it's at http://localhost:8089/vis?
Try the following:

server:
  port: 8089
You can add contextPath at the same level as port if need be, e.g.

server:
  port: 8089
  contextPath: '/myApp'
Should be accessible at http://localhost:8089/myApp
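If this is a newer Grails version (4+, which sits on Spring Boot 2), the context path key has moved under servlet; a hedged sketch:

```yaml
server:
  port: 8089
  servlet:
    context-path: '/myApp'
```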
I have two containers. The first container runs Unicorn and the second runs mongod.
Name Command State Ports
-------------------------------------------------------------------------------------
app_mongodb_1 /entrypoint.sh mongod Up 0.0.0.0:27017->27017/tcp
app_web_1 foreman start Up 0.0.0.0:3000->3000/tcp
When I try to access my Rails application, it returns the error:

Could not connect to a primary node for replica set #<Moped::Cluster:69840575665060 @seeds=[<Moped::Node resolved_address="127.0.0.1:27017">]>

But when I enter the web container with docker-compose run web rails c, I can save a document:

f = Feature.new(name: "test", value: 10)
=> #<Feature _id: 565deac0616e642856000000, name: "test", value: 10, created_at: nil, updated_at: nil>
f.save
=> true
config/mongodb.yml
development:
  sessions:
    default:
      database: app_development
      hosts:
        - mongodb
      options:
  options:
test:
  sessions:
    default:
      database: app_test
      hosts:
        - mongodb
      options:
        read: primary
        max_retries: 1
        retry_interval: 0
production:
  sessions:
    default:
      database: app_production
      hosts:
        - mongodb
      options:
        read: primary
        max_retries: 1
        retry_interval: 0
I do not understand why the error mentions 127.0.0.1, since I haven't defined that address anywhere in the configuration file.
/etc/hosts
172.17.0.3 mongodb 49ea2c077967 app_mongodb
I have been trying to configure Sunspot Solr for my environments. I am getting confused between path and data_path. Can anyone explain the difference and how to use them?
I have been referring to this:
https://github.com/sunspot/sunspot/blob/master/sunspot_rails/lib/sunspot/rails/configuration.rb
production:
  solr:
    hostname: localhost
    port: 8983
    log_level: WARNING
    path: /solr/production
    data_path: /some_path
    # read_timeout: 2
    # open_timeout: 0.5

development:
  solr:
    hostname: localhost
    port: 8982
    log_level: INFO
    # path: /solr/development

test:
  solr:
    hostname: localhost
    port: 8981
    log_level: WARNING
    path: /solr/test
path: the URL path to the Solr servlet (useful if you are running multicore). Default: '/solr/default'.

data_path: the path where the Lucene index data files are stored. Default: '#{Rails.root}/solr/data'.
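Put concretely, a hedged sketch (the data_path value here is an illustrative assumption, not taken from the question):

```yaml
production:
  solr:
    hostname: localhost
    port: 8983
    path: /solr/production      # requests go to http://localhost:8983/solr/production
    data_path: /var/solr/data   # Lucene index files are written here
```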