I am trying to set up HA on a single machine, as in this article:
http://aws-labs.com/neo4j-high-availability-setup-tutorial/
("Alternative setup: Creating a local cluster for testing")
When I start all three instances and open their URLs, only :7474 works; :7475 and :7476 show "This site can't be reached".
All config files look like this:
# Name of the service
dbms.windows_service_name=neo4j
# With default configuration Neo4j only accepts local connections.
# To accept non-local connections, uncomment this line:
dbms.connectors.default_listen_address=0.0.0.0
# Reduce the default page cache memory allocation
dbms.pagecache.memory=500m
# Port to listen to for incoming backup requests.
online_backup_server = 10.234.10.94:6366
# Unique server id for this Neo4j instance
# cannot be a negative id and must be unique
ha.server_id = 1
# List of other known instances in this cluster
ha.initial_hosts = 10.234.10.94:5001,10.234.10.94:5002,10.234.10.94:5003
# IP and port for this instance to bind to for communicating cluster information
# with the other neo4j instances in the cluster.
ha.cluster_server = 10.234.10.94:5001
# IP and port for this instance to bind to for communicating data with the
# other neo4j instances in the cluster.
ha.server = 10.234.10.94:6363
# HA - High Availability
# SINGLE - Single mode, default.
org.neo4j.server.database.mode=HA
# http port (for all data, administrative, and UI access)
dbms.connector.http.enabled=true
org.neo4j.server.webserver.port=7474
# https port (for all data, administrative, and UI access)
dbms.connector.https.enabled=true
org.neo4j.server.webserver.https.port=7484
And how should I configure the master?
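For reference, my understanding is that in a local cluster each instance needs its own unique ports and server id; a sketch of how the second instance's file would presumably differ, keeping the property names used above (the exact port numbers are assumptions, not taken from the article):
# Unique server id for this instance
ha.server_id = 2
# Unique cluster, data, and backup ports for this instance
ha.cluster_server = 10.234.10.94:5002
ha.server = 10.234.10.94:6364
online_backup_server = 10.234.10.94:6367
# Unique web ports so :7475 can actually be served
org.neo4j.server.webserver.port=7475
org.neo4j.server.webserver.https.port=7485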
So right now we are trying to get a Bitnami Redis Sentinel cluster working together with our Rails app + Sidekiq.
We tried different things, but it's not really clear to us how we should specify the sentinels for Sidekiq (the crucial part is that the sentinel nodes are READ ONLY, so we cannot use them for Sidekiq, since job states get written).
Since on Kubernetes there are only two services available, "redis" and "redis-headless" (not sure how they differ?), how can I specify the sentinels like this:
Sidekiq.configure_server do |config|
  config.redis = {
    url: "redis",
    sentinels: [
      { host: "?", port: 26379 }, # why do we have to specify these separately, since we should be able to get a unified answer via a service, or?
      { host: "?", port: 26379 },
      { host: "?", port: 26379 }
    ]
  }
end
Would be nice if someone could shed some light on this. As far as I understand, the Bitnami Redis Sentinel only returns the IP of the master, and the application then has to direct the corresponding writes to this master (https://github.com/bitnami/charts/tree/master/bitnami/redis#master-replicas-with-sentinel) - but I really don't understand how to do this with Sidekiq.
Difference between a Kubernetes Service and a Headless Service
Let's get started by clarifying the difference between a Headless Service and a Service.
A Service allows one to connect to one Pod, while a headless Service returns the list of IP addresses of all the available pods, allowing auto-discovery.
A more detailed explanation by Marko Lukša has been published on SO here:
Each connection to the service is forwarded to one randomly selected backing pod. But what if the client needs to connect to all of those pods? What if the backing pods themselves need to each connect to all the other backing pods. Connecting through the service clearly isn’t the way to do this. What is?
For a client to connect to all pods, it needs to figure out the IP of each individual pod. One option is to have the client call the Kubernetes API server and get the list of pods and their IP addresses through an API call, but because you should always strive to keep your apps Kubernetes-agnostic, using the API server isn't ideal.
Luckily, Kubernetes allows clients to discover pod IPs through DNS lookups. Usually, when you perform a DNS lookup for a service, the DNS server returns a single IP — the service’s cluster IP. But if you tell Kubernetes you don’t need a cluster IP for your service (you do this by setting the clusterIP field to None in the service specification ), the DNS server will return the pod IPs instead of the single service IP. Instead of returning a single DNS A record, the DNS server will return multiple A records for the service, each pointing to the IP of an individual pod backing the service at that moment. Clients can therefore do a simple DNS A record lookup and get the IPs of all the pods that are part of the service. The client can then use that information to connect to one, many, or all of them.
Setting the clusterIP field in a service spec to None makes the service headless, as Kubernetes won’t assign it a cluster IP through which clients could connect to the pods backing it.
"Kubernetes in Action" by Marco Luksa
How to specify the sentinels
As the Redis documentation says:
When using the Sentinel support you need to specify a list of sentinels to connect to. The list does not need to enumerate all your Sentinel instances, but a few so that if one is down the client will try the next one. The client is able to remember the last Sentinel that was able to reply correctly and will use it for the next requests.
So the idea is to give what you have, and if you scale up the redis pods, then you don't need to re-configure Sidekiq (or Rails if you're using Redis for caching).
Combining all together
Now you just need a way to fetch the IP addresses from the headless service in Ruby, and configure Redis client sentinels.
Fortunately, Ruby's standard library ships the Resolv class, which can do the lookup for you (require 'resolv'):
irb(main):001:0> require 'resolv'
=> true
irb(main):002:0> Resolv.getaddresses "redis-headless"
=> ["172.16.105.95", "172.16.105.194", "172.16.9.197"]
So that you could do:
require 'resolv'

Sidekiq.configure_server do |config|
  config.redis = {
    # This `host` parameter is used by the Redis gem with the Redis command
    # `get-master-addr-by-name` (see https://redis.io/topics/sentinel#obtaining-the-address-of-the-current-master)
    # in order to retrieve the current Redis master IP address.
    host: "mymaster",
    sentinels: Resolv.getaddresses('redis-headless').map do |address|
      { host: address, port: 26379 }
    end
  }
end
That will create an Array of Hashes with the IP address as host: and 26379 as the port:.
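One caveat worth adding (my assumption based on Sidekiq's standard setup, not something specific to Bitnami): the Rails processes that enqueue jobs read their Redis settings from Sidekiq.configure_client, so you would repeat the same block there:
require 'resolv'

Sidekiq.configure_client do |config|
  config.redis = {
    host: "mymaster",
    sentinels: Resolv.getaddresses('redis-headless').map do |address|
      { host: address, port: 26379 }
    end
  }
end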
Within a service "foo" started as part of a Docker Compose stack of services I would like to be able to find out/know both the IPv4 and IPv6 address of the container the service is running in.
One way to find out is via the shell command hostname -i, but this only gives the IPv4 address. I'd also prefer one way to get both, if possible. Is there a way that Compose can pass the service its IPv4/6 addresses during startup? If not, can the service determine these from the Docker runtime after startup?
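One workaround I can think of (a sketch, assuming the Compose file format 2.x and a user-defined network; the network name, subnets and variable names are made up) is to pin the addresses statically and pass the same values in as environment variables, since Compose has no built-in variable that expands to a container's runtime IP:
version: "2.4"
networks:
  app_net:
    enable_ipv6: true
    ipam:
      config:
        - subnet: 172.28.0.0/16
        - subnet: 2001:db8:1::/64
services:
  foo:
    image: foo:latest
    networks:
      app_net:
        ipv4_address: 172.28.0.10
        ipv6_address: 2001:db8:1::10
    environment:
      # Duplicated by hand from the addresses above.
      FOO_IPV4: 172.28.0.10
      FOO_IPV6: 2001:db8:1::10
This trades auto-assignment for determinism, though, and scaling the service would need one address pair per replica.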
This is not strictly part of the question, but I'll describe what I'm doing so people can understand the context.
I've got Nginx also running in the stack. It has a rule like the following:
location ~ "^/foo/bar.*" {
if ($http_x_hsn = "") {
return 401 '{"error":"Invalid hsn"}';
}
# The resolver DNS name is resolvable by Docker.
# Any instance of "foo" has a trivial DNS server built in
# and can be used to resolve the IP address of a particular
# "foo" instance which is associated with a particular value
# of "hsn" passed in a the header "x-hsn" by looking up that
# association which will have been centrally registered prior
# to the handling of this type of request. Of course, if
# no association between foo instance IP and hsn is found the
# DNS query will return no record and this request should
# then fail.
# Note that a particular foo instance is 1-to-1 with a
# particular value of hsn.
resolver <DNS name resolving to any service foo instance>;
# Given an "hsn" with value "bar", service foo
# will be asked to resolve "foo-service.bar".
# The IP address returned should be one visible to Nginx
set $upstream_service foo-service.$http_x_hsn;
# Now, proxy to the correct instance of foo, based on
# the value of "hsn"
proxy_pass http://$upstream_service;
}
I've got this working using os.networkInterfaces() in foo (this is a Node.js service), but the structure it returns can list multiple interfaces, and I'm not sure that the one being used by the service will always be eth0, so I thought I'd ask here if there's a better way.
I should also mention that the associations between hsn values and service instances will have been created by Nginx routing to an instance (via another location rule) in a round-robin way, with that instance centrally registering its IP address against that particular hsn value.
Previously I had Neo4j 3.1.4 installed and everything was working fine. To upgrade, I uninstalled 3.1.4 and did a fresh install of Neo4j 3.4.0.
When I check the status of Neo4j after starting it, it shows as running.
But I cannot access the browser using http://localhost:7474/browser
or http://<ip address>:7474/browser
I have changed the necessary details in the neo4j.conf file, but still no luck.
Here are my neo4j.conf changes:
# Bolt connector
dbms.connector.bolt.enabled=true
#dbms.connector.bolt.tls_level=OPTIONAL
dbms.connector.bolt.listen_address=0.0.0.0:7687
# HTTP Connector. There must be exactly one HTTP connector.
dbms.connector.http.enabled=true
#dbms.connector.http.listen_address=0.0.0.0:7474
dbms.connector.http.listen_address=<ip address>:7474
# HTTPS Connector. There can be zero or one HTTPS connectors.
dbms.connector.https.enabled=true
#dbms.connector.https.listen_address=:7473
Please help
If you are using Windows, run ipconfig in cmd to get your IP address, then go directly to step 3.
If you are on Linux, first check your network:
1) Run:
ifconfig -a
You should see an inet entry with an IP address matching yours.
2) If the inet address matches your IP address, open the config file:
cd /etc/neo4j/
sudo nano neo4j.conf
and check that the configuration in it is correct.
3) Try with this configuration:
#*****************************************************************
# Network connector configuration
#*****************************************************************
# With default configuration Neo4j only accepts local connections.
# To accept non-local connections, uncomment this line:
dbms.connectors.default_listen_address=0.0.0.0
# You can also choose a specific network interface, and configure a non-default
# port for each connector, by setting their individual listen_address.
# The address at which this server can be reached by its clients. This may be the server's IP address or DNS name, or
# it may be the address of a reverse proxy which sits in front of the server. This setting may be overridden for
# individual connectors below.
#dbms.connectors.default_advertised_address=localhost
# You can also choose a specific advertised hostname or IP address, and
# configure an advertised port for each connector, by setting their
# individual advertised_address.
# Bolt connector
dbms.connector.bolt.enabled=true
#dbms.connector.bolt.tls_level=OPTIONAL
#dbms.connector.bolt.listen_address=0.0.0.0:7687
# HTTP Connector. There must be exactly one HTTP connector.
dbms.connector.http.enabled=true
#dbms.connector.http.listen_address=:7474
# HTTPS Connector. There can be zero or one HTTPS connectors.
dbms.connector.https.enabled=true
Save the config file and restart Neo4j.
Then from a browser, try http://<your-ip-address>:7474/browser/
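If the browser still can't connect, it may be worth confirming that Neo4j is actually listening on the expected address (the address below is a placeholder):
curl -i http://<your-ip-address>:7474
# or, on the server itself, check the listening socket:
netstat -tlnp | grep 7474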
I have a Neo4j server that works fine but, when I try to set up a cluster, I just can't figure out why it does not work.
In order to make the cluster work, it seems I just need to uncomment the following lines:
ha.server_id = 3
ha.initial_hosts =192.168.1.93:5001,192.168.1.91:5001
dbms.mode=HA
but when I do so, I get an error in the log file about loading the database.
This is my neo4j.conf file
#*****************************************************************
# Neo4j configuration
#*****************************************************************
# The name of the database to mount
dbms.active_database=graph.db
# Paths of directories in the installation.
dbms.directories.data=/var/lib/neo4j/data
dbms.directories.plugins=/var/lib/neo4j/plugins
#dbms.directories.certificates=certificates
# This setting constrains all `LOAD CSV` import files to be under the `import` directory. Remove or comment it out to
# allow files to be loaded from anywhere in the filesystem; this introduces possible security problems. See the `LOAD CSV`
# section of the manual for details.
dbms.directories.import=import
# Whether requests to Neo4j are authenticated.
# To disable authentication, uncomment this line
#dbms.security.auth_enabled=false
# Enable this to be able to upgrade a store from an older version.
#dbms.allow_format_migration=true
# The amount of memory to use for mapping the store files, in bytes (or
# kilobytes with the 'k' suffix, megabytes with 'm' and gigabytes with 'g').
# If Neo4j is running on a dedicated server, then it is generally recommended
# to leave about 2-4 gigabytes for the operating system, give the JVM enough
# heap to hold all your transaction state and query context, and then leave the
# rest for the page cache.
# The default page cache memory assumes the machine is dedicated to running
# Neo4j, and is heuristically set to 50% of RAM minus the max Java heap size.
#dbms.memory.pagecache.size=10g
# Enable online backups to be taken from this database.
#dbms.backup.enabled=true
# To allow remote backups, uncomment this line:
#dbms.backup.address=0.0.0.0:6362
#*****************************************************************
# Network connector configuration
#*****************************************************************
# Bolt connector
dbms.connector.bolt.type=BOLT
dbms.connector.bolt.enabled=true
dbms.connector.bolt.tls_level=OPTIONAL
# To have Bolt accept non-local connections, uncomment this line
# dbms.connector.bolt.address=0.0.0.0:7687
# HTTP Connector
dbms.connector.http.type=HTTP
dbms.connector.http.enabled=true
#dbms.connector.http.encryption=NONE
# To have HTTP accept non-local connections, uncomment this line
dbms.connector.http.address=0.0.0.0:7474
# HTTPS Connector
dbms.connector.https.type=HTTP
dbms.connector.https.enabled=true
dbms.connector.https.encryption=TLS
dbms.connector.https.address=localhost:7473
# Number of Neo4j worker threads.
#dbms.threads.worker_count=
#*****************************************************************
# HA configuration
#*****************************************************************
# Uncomment and specify these lines for running Neo4j in High Availability mode.
# See the High availability setup tutorial for more details on these settings
# http://neo4j.com/docs/operations-manual/current/#tutorials
# Database mode
# Allowed values:
# HA - High Availability
# SINGLE - Single mode, default.
# To run in High Availability mode uncomment this line:
#dbms.mode=HA
# ha.server_id is the number of each instance in the HA cluster. It should be
# an integer (e.g. 1), and should be unique for each cluster instance.
ha.server_id=5
# ha.initial_hosts is a comma-separated list (without spaces) of the host:port
# where the ha.host.coordination of all instances will be listening. Typically
# this will be the same for all cluster instances.
ha.initial_hosts=192.168.1.93:5001,192.168.1.91:5001
# IP and port for this instance to listen on, for communicating cluster status
# information with other instances (also see ha.initial_hosts). The IP
# must be the configured IP address for one of the local interfaces.
ha.host.coordination=127.0.0.1:5001
# IP and port for this instance to listen on, for communicating transaction
# data with other instances (also see ha.initial_hosts). The IP
# must be the configured IP address for one of the local interfaces.
ha.host.data=127.0.0.1:6001
ha.pull_interval=10
# Amount of slaves the master will try to push a transaction to upon commit
# (default is 1). The master will optimistically continue and not fail the
# transaction even if it fails to reach the push factor. Setting this to 0 will
# increase write performance when writing through master but could potentially
# lead to branched data (or loss of transaction) if the master goes down.
#ha.tx_push_factor=1
# Strategy the master will use when pushing data to slaves (if the push factor
# is greater than 0). There are three options available "fixed_ascending" (default),
# "fixed_descending" or "round_robin". Fixed strategies will start by pushing to
# slaves ordered by server id (accordingly with qualifier) and are useful when
# planning for a stable fail-over based on ids.
#ha.tx_push_strategy=fixed_ascending
# Policy for how to handle branched data.
#ha.branched_data_policy=keep_all
# How often heartbeat messages should be sent. Defaults to ha.default_timeout.
#ha.heartbeat_interval=5s
# Timeout for heartbeats between cluster members. Should be at least twice that of ha.heartbeat_interval.
#ha.heartbeat_timeout=11s
# If you are using a load-balancer that doesn't support HTTP Auth, you may need to turn off authentication for the
# HA HTTP status endpoint by uncommenting the following line.
#dbms.security.ha_status_auth_enabled=false
# Whether this instance should only participate as slave in cluster. If set to
# true, it will never be elected as master.
#ha.slave_only=false
Thank you
First of all, if you get an error on startup, the least you could do is quote it here; otherwise there's not much anyone can do about it.
Regarding your configuration, there's at least one mistake: to participate in a cluster with other hosts using IP addresses such as 192.168.1.93 and 192.168.1.91, you need to set up this host to communicate on the same network, not on loopback (i.e. 127.0.0.1), which other hosts cannot connect to.
If this host has e.g. 192.168.1.93, that's what you need to use:
ha.initial_hosts=192.168.1.93:5001,192.168.1.91:5001
ha.host.coordination=192.168.1.93:5001
ha.host.data=192.168.1.93:6001
However, if your host has 192.168.1.92, you need to add it to ha.initial_hosts (which should be the same on all hosts, as noted in the comment, and is not just the list of the other hosts):
ha.initial_hosts=192.168.1.93:5001,192.168.1.92:5001,192.168.1.91:5001
ha.host.coordination=192.168.1.92:5001
ha.host.data=192.168.1.92:6001
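Once the hosts can reach each other, you can sanity-check each instance's cluster role via the HA status endpoint (part of Neo4j's HA HTTP API; shown here against the default HTTP port):
curl http://192.168.1.93:7474/db/manage/server/ha/available
# responds with "master" or "slave" once the instance has joined the cluster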
Installation:
SonarQube 5.0
MySql-5.6.23
My question is about SonarQube web server:
When the SonarQube web server isn't used for more than three days and I then try to reach it, I get the error message:
"We're sorry, but something went wrong.
Please try back in a few minutes and contact support if the problem persists.
Go back to the homepage"
On the other hand, when the SonarQube web server is in use every day, no problem occurs.
There is no error message in the SonarQube log file.
The DB is still working.
Can anybody give me a hint on how I can keep the SonarQube web server working even after it has been idle for three days?
Thanks in advance.
Database connection settings:
#--------------------------------------------------------------------------------------------------
# DATABASE
#
# IMPORTANT: the embedded H2 database is used by default. It is recommended for tests but not for
# production use. Supported databases are MySQL, Oracle, PostgreSQL and Microsoft SQLServer.
# User credentials.
# Permissions to create tables, indices and triggers must be granted to JDBC user.
# The schema must be created first.
#sonar.jdbc.username=sonar
#sonar.jdbc.password=sonar
#----- Embedded Database (default)
# It does not accept connections from remote hosts, so the
# server and the analyzers must be executed on the same host.
#sonar.jdbc.url=jdbc:h2:tcp://localhost:9092/sonar
# H2 embedded database server listening port, defaults to 9092
#sonar.embeddedDatabase.port=9092
#----- MySQL 5.x
sonar.jdbc.url=jdbc:mysql://localhost:3306/sonar?useUnicode=true&characterEncoding=utf8&rewriteBatchedStatements=true&useConfigs=maxPerformance
#----- Oracle 10g/11g
# - Only thin client is supported
# - Only versions 11.2.* of Oracle JDBC driver are supported, even if connecting to lower Oracle versions.
# - The JDBC driver must be copied into the directory extensions/jdbc-driver/oracle/
# - If you need to set the schema, please refer to http://jira.codehaus.org/browse/SONAR-5000
#sonar.jdbc.url=jdbc:oracle:thin:#localhost/XE
#----- PostgreSQL 8.x/9.x
# If you don't use the schema named "public", please refer to http://jira.codehaus.org/browse/SONAR-5000
#sonar.jdbc.url=jdbc:postgresql://localhost/sonar
#----- Microsoft SQLServer 2005/2008
# Only the distributed jTDS driver is supported.
#sonar.jdbc.url=jdbc:jtds:sqlserver://localhost/sonar;SelectMethod=Cursor
#----- Connection pool settings
# The maximum number of active connections that can be allocated
# at the same time, or negative for no limit.
#sonar.jdbc.maxActive=50
# The maximum number of connections that can remain idle in the
# pool, without extra ones being released, or negative for no limit.
#sonar.jdbc.maxIdle=5
# The minimum number of connections that can remain idle in the pool,
# without extra ones being created, or zero to create none.
#sonar.jdbc.minIdle=2
# The maximum number of milliseconds that the pool will wait (when there
# are no available connections) for a connection to be returned before
# throwing an exception, or <= 0 to wait indefinitely.
#sonar.jdbc.maxWait=5000
#sonar.jdbc.minEvictableIdleTimeMillis=600000
#sonar.jdbc.timeBetweenEvictionRunsMillis=30000
Web Server settings:
#--------------------------------------------------------------------------------------------------
# WEB SERVER
# Web server is executed in a dedicated Java process. By default heap size is 768Mb.
# Use the following property to customize JVM options.
# Recommendations:
#
# The HotSpot Server VM is recommended. The property -server should be added if server mode
# is not enabled by default on your environment: http://docs.oracle.com/javase/7/docs/technotes/guides/vm/server-class.html
#
# Set min and max memory (respectively -Xms and -Xmx) to the same value to prevent heap
# from resizing at runtime.
#
sonar.web.javaOpts=-Xmx768m -XX:MaxPermSize=160m -XX:+HeapDumpOnOutOfMemoryError
# Same as previous property, but allows to not repeat all other settings like -Xmx
#sonar.web.javaAdditionalOpts=
# Binding IP address. For servers with more than one IP address, this property specifies which
# address will be used for listening on the specified ports.
# By default, ports will be used on all IP addresses associated with the server.
sonar.web.host=xxx.xxx.x.xxx
# Web context. When set, it must start with forward slash (for example /sonarqube).
# The default value is root context (empty value).
#sonar.web.context=
# TCP port for incoming HTTP connections. Disabled when value is -1.
sonar.web.port=9000
# TCP port for incoming HTTPS connections. Disabled when value is -1 (default).
#sonar.web.https.port=-1
# HTTPS - the alias used to for the server certificate in the keystore.
# If not specified the first key read in the keystore is used.
#sonar.web.https.keyAlias=
# HTTPS - the password used to access the server certificate from the
# specified keystore file. The default value is "changeit".
#sonar.web.https.keyPass=changeit
# HTTPS - the pathname of the keystore file where is stored the server certificate.
# By default, the pathname is the file ".keystore" in the user home.
# If keystoreType doesn't need a file use empty value.
#sonar.web.https.keystoreFile=
# HTTPS - the password used to access the specified keystore file. The default
# value is the value of sonar.web.https.keyPass.
#sonar.web.https.keystorePass=
# HTTPS - the type of keystore file to be used for the server certificate.
# The default value is JKS (Java KeyStore).
#sonar.web.https.keystoreType=JKS
# HTTPS - the name of the keystore provider to be used for the server certificate.
# If not specified, the list of registered providers is traversed in preference order
# and the first provider that supports the keystore type is used (see sonar.web.https.keystoreType).
#sonar.web.https.keystoreProvider=
# HTTPS - the pathname of the truststore file which contains trusted certificate authorities.
# By default, this would be the cacerts file in your JRE.
# If truststoreFile doesn't need a file use empty value.
#sonar.web.https.truststoreFile=
# HTTPS - the password used to access the specified truststore file.
#sonar.web.https.truststorePass=
# HTTPS - the type of truststore file to be used.
# The default value is JKS (Java KeyStore).
#sonar.web.https.truststoreType=JKS
# HTTPS - the name of the truststore provider to be used for the server certificate.
# If not specified, the list of registered providers is traversed in preference order
# and the first provider that supports the truststore type is used (see sonar.web.https.truststoreType).
#sonar.web.https.truststoreProvider=
# HTTPS - whether to enable client certificate authentication.
# The default is false (client certificates disabled).
# Other possible values are 'want' (certificates will be requested, but not required),
# and 'true' (certificates are required).
#sonar.web.https.clientAuth=false
# The maximum number of connections that the server will accept and process at any given time.
# When this number has been reached, the server will not accept any more connections until
# the number of connections falls below this value. The operating system may still accept connections
# based on the sonar.web.connections.acceptCount property. The default value is 50 for each
# enabled connector.
#sonar.web.http.maxThreads=50
#sonar.web.https.maxThreads=50
# The minimum number of threads always kept running. The default value is 5 for each
# enabled connector.
#sonar.web.http.minThreads=5
#sonar.web.https.minThreads=5
# The maximum queue length for incoming connection requests when all possible request processing
# threads are in use. Any requests received when the queue is full will be refused.
# The default value is 25 for each enabled connector.
#sonar.web.http.acceptCount=25
#sonar.web.https.acceptCount=25
# Access logs are generated in the file logs/access.log. This file is rolled over when it's 5Mb.
# An archive of 3 files is kept in the same directory.
# Access logs are enabled by default.
#sonar.web.accessLogs.enable=true
# TCP port for incoming AJP connections. Disabled if value is -1. Disabled by default.
#sonar.ajp.port=-1
I found my mistake:
I start MySQL and SonarQube as services with the Task Scheduler every time I restart my computer. Under "Settings" there is a default option called "Stop the task if it runs longer than: 3 days". I disabled this option, so the services should now run all the time.
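For anyone who prefers to change this outside the GUI (an assumption based on the Task Scheduler's exported XML format, not part of my original fix): the same option lives in the task definition, where a limit of PT0S disables it:
<Settings>
  <!-- The default "Stop the task if it runs longer than: 3 days" is P3D;
       PT0S means no execution time limit. -->
  <ExecutionTimeLimit>PT0S</ExecutionTimeLimit>
</Settings>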