How to split Apache Druid Historical and MiddleManager nodes - time-series

In a clustered deployment of Druid, the documentation doesn't mention how to split the Historical and MiddleManager processes onto two (or more) separate nodes. It only mentions the port ranges they can accept. To run a data server, the documentation says we must execute this script:
./bin/start-cluster-data-server
If we run this script on two different machines, how can one of them act as a MiddleManager and the other as a Historical node?

If you cat the quickstart scripts, you'll see that each one calls a Perl script called supervise. Here is what you'll see inside start-cluster-data-server, for example:
exec "$WHEREAMI/supervise" -c "$WHEREAMI/../conf/supervise/cluster/data.conf"
The configuration file following -c contains information about the processes to start. For example, here's the data.conf file:
:verify bin/verify-java
historical bin/run-druid historical conf/druid/cluster/data
middleManager bin/run-druid middleManager conf/druid/cluster/data
You can see, then, how you can configure your own start script with a .conf file to go with it that spawns only the processes you want to run on a particular node.
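For example, a node that should run only the Historical process could get its own supervise config and start script (the file names below are hypothetical, modelled on the stock data.conf and start-cluster-data-server). A data-historical.conf containing only:
:verify bin/verify-java
historical bin/run-druid historical conf/druid/cluster/data
and a start script pointing supervise at it:
#!/bin/bash
# bin/start-cluster-data-historical-server (hypothetical)
WHEREAMI="$(cd "$(dirname "$0")" && pwd)"
exec "$WHEREAMI/supervise" -c "$WHEREAMI/../conf/supervise/cluster/data-historical.conf"
The second machine would use an analogous pair that keeps only the middleManager line.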

Related

Automatically Configure Config inside Docker Container

While setting up and configuring some Docker containers, I asked myself how I could automatically edit some config files inside the container after the containerized service has finished installing (since the config files are created during installation).
I have tried doing that with a shell script added as the entrypoint in the Dockerfile. However, as I said, the config file does not exist right at the beginning, so the sed commands in the script fail.
Mounting a config file with - ./myConfig.conf:/xy/myConfig.conf is also not an option, because the config contains some installation-dependent options.
The most reasonable solution I have found so far is to manually run a script that edits the config after the installation has finished, with docker exec -i mycontainer sh < editconfig.sh
EDIT
My question is formulated in general terms. However, it arose while working with Nextcloud in a docker-compose setup similar to the official example. That container contains a config.php file, which is the general config file of Nextcloud and is generated during the installation. Certain properties of that file have to be changed (there is only a very limited set of environment variables to specify). Since I am conducting some tests with this container, I have to repeatedly reinstall it and thus re-edit the config file.
Maybe you can try another approach and have your config file/application pick up its settings from environment variables. That would be consistent with the 12-factor app methodology; see here.
As I understand your case, you need your container to start by creating its config from some template.
I see a number of options for doing it:
Use a script that generates the config from a template plus arguments from the command line or environment variables (Jinja2 and Python, for example, or Mustache and Node.js). In this case, your entrypoint renders the template and then starts the application. To change the config, you will have to restart the service (container).
Run a service that stores the configuration and renders your config at run time. Personally, I like consul-template; we actively use this engine in our environment and have had no problems so far. In this case, the config is more dynamic and can be changed on the fly. In your container you will have two processes: the application and the consul-template daemon. Obviously, you will also need to run and maintain Consul. To reload the config, restarting the application process is enough.
Run a custom script to create or patch the config (a minimal sketch follows below). :)
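For that last variant, a minimal entrypoint sketch could wait for the generated config to appear and then patch it in place. Everything here is a placeholder taken from the question (the service command, the /xy/myConfig.conf path, and the sed expression), not a real Nextcloud setup:
#!/bin/sh
# entrypoint.sh (hypothetical): start the service, wait for the generated
# config, patch it, then keep the service in the foreground.

/usr/local/bin/start-my-service &   # placeholder for the image's original command
SERVICE_PID=$!

# wait until the installer has written the config file
while [ ! -f /xy/myConfig.conf ]; do
    sleep 2
done

# apply the installation-dependent edits (placeholder sed expression)
sed -i 's/^someOption=.*/someOption=myValue/' /xy/myConfig.conf

# hand control back to the service process
wait "$SERVICE_PID"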

Make one instance of multiple uWSGI workers perform an extra function

I have a Python Flask app running on uWSGI with a config file that specifies it to spawn multiple workers (which I am assuming are identical processes).
Everything works well except for one part: the Python app runs a bash command every day, using a scheduler, to download and update a database. The command needs to run only once, but multiple processes mean that it runs multiple times at the same time, thus corrupting the downloaded file.
Is there a way to run this bash command on only one of the uWSGI workers? I can't run the bash command as a separate cron job (the database update has to integrate seamlessly with the app).
Check the uWSGI cron-like interface:
uWSGI's master has an internal cron-like facility that can generate events at predefined times. You can use it.
For example, you could set the option to:
[uwsgi]
; every two hours
cron = 0 -2 -1 -1 -1 /usr/bin/backup_my_home --recursive
Is that sufficient?
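Applied to the question's daily database update, a hedged sketch might look like this (the script path is a placeholder; the fields are minute, hour, day, month, weekday, with -1 meaning "any"). Because these cron events are generated by the uWSGI master rather than by each worker, the command should run only once per scheduled time, no matter how many workers are spawned:
[uwsgi]
; run the database update once a day at 03:00
cron = 0 3 -1 -1 -1 /path/to/update_database.sh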

Query on custom metrics exposed via prometheus node exporter textfile collector fails

I am new to prometheus/alertmanager.
I have created a cron job which executes a shell script every minute. This shell script generates a "test.prom" file (with a gauge metric in it) in the directory assigned to the --collector.textfile.directory argument of node-exporter. I verified (using curl http://localhost:9100/metrics) that node-exporter exposes that custom metric correctly.
When I try to run a query against that custom metric in the Prometheus dashboard, it does not show any results (it says no data found).
I could not figure out why the query against the metric exposed via the node-exporter textfile collector fails. Any clues as to what I missed? Also, please let me know how to check and ensure that Prometheus scraped my custom metric `test_metric`.
My query in the Prometheus dashboard is test_metric != 0, which did not give any results, even though I exposed test_metric via the node-exporter textfile collector.
Any help is appreciated!
BTW, the node-exporter is running as a Docker container in a Kubernetes environment.
I had a similar situation, but it was not a configuration problem.
Instead, my data included timestamps:
# HELP network_connectivity_rtt Round Trip Time to each node
# TYPE network_connectivity_rtt gauge
network_connectivity_rtt{host="home"} 53.87 1541426242
network_connectivity_rtt{host="hop_1"} 58.8 1541426242
network_connectivity_rtt{host="hop_2"} 21.93 1541426242
network_connectivity_rtt{host="hop_3"} 71.69 1541426242
The node exporter was picking them up without any problem once I reloaded it. As Prometheus is running under systemd, I had to check its logs like this:
journalctl --system -u prometheus.service --follow
There I read this line:
msg="Error on ingesting samples that are too old or are too far into the future"
Once I removed the timestamps, values started appearing. This led me to read in more detail about the timestamps, and I found out they have to be in milliseconds. So this format is OK:
# HELP network_connectivity_rtt Round Trip Time to each node
# TYPE network_connectivity_rtt gauge
network_connectivity_rtt{host="home"} 50.47 1541429581376
network_connectivity_rtt{host="hop_1"} 3.38 1541429581376
network_connectivity_rtt{host="hop_2"} 11.2 1541429581376
network_connectivity_rtt{host="hop_3"} 20.72 1541429581376
I hope it helps someone else.
It's my bad: I had not included scrape instructions for node-exporter in the prometheus.yaml file. It worked after including them.
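For reference, a minimal scrape_configs entry for a node-exporter looks roughly like this (the job name and target are placeholders; in a Kubernetes environment you would more likely use service discovery instead of static targets):
scrape_configs:
  - job_name: 'node-exporter'
    static_configs:
      - targets: ['localhost:9100']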
This issue can also happen because of stale metrics.
Let's say you wrote your metric to the file at 13:00. By default, after 5 minutes Prometheus will consider your metric stale, and it may have disappeared by the time you are making the query.
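As for checking whether Prometheus actually scraped the metric: besides looking at the Targets page in the Prometheus web UI, you can query the HTTP API directly; a quick check, assuming Prometheus listens on localhost:9090, is:
curl 'http://localhost:9090/api/v1/query?query=test_metric'
If the scrape worked, the JSON response contains a result vector with your test_metric samples.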

How can I run Neo4j with a larger heap size, specify -server and the correct GC strategy

As someone who has never really messed with the JVM much, how can I ensure my Neo4j instances are running with all of the recommended JVM settings, e.g. heap size, server mode, and -XX:+UseConcMarkSweepGC?
Should these be set inside a config file? Can I set them dynamically at runtime? Are they set at a system level? Can I have different settings when running two instances of Neo4j on the same machine?
It is a bit fuzzy to me at what point all of these things get set.
I am running Neo4j inside a Docker container, so that is something to consider as well.
The Dockerfile is as follows; I am starting Neo4j with the console command:
FROM dockerfile/java:oracle-java8
# INSTALL OS DEPENDENCIES AND NEO4J
ADD /files/neo4j-enterprise-2.1.3-unix.tar.gz /opt/neo
RUN rm /opt/neo/neo4j-enterprise-2.1.3/conf/neo4j-server.properties
ADD /files/neo4j-server.properties /opt/neo/neo4j-enterprise-2.1.3/conf/neo4j-server.properties
#RUN mv -f /files/neo4j-server.properties /opt/neo/neo4j-enterprise-2.1.3/conf/neo4j-server.properties
EXPOSE 7474
CMD ["console"]
ENTRYPOINT ["/opt/neo/neo4j-enterprise-2.1.3/bin/neo4j"]
OK, so you are using the Neo4j server script. In this case you should configure the low-level JVM properties in neo4j.properties, which should also live in the conf directory. Basically, do the same thing for neo4j.properties as you already do for neo4j-server.properties: create the properties file in your Docker context and configure the properties you want to add. Then in the Dockerfile use:
ADD /files/neo4j.properties /opt/neo/neo4j-enterprise-2.1.3/conf/neo4j.properties
The syntax in the properties file is the following (from the documentation):
# initial heap size (in MB)
wrapper.java.initmemory=<value>
# maximum heap size (in MB)
wrapper.java.maxmemory=<value>
# additional literal JVM parameter, where N is a number for each
wrapper.java.additional.N=<value>
See also http://docs.neo4j.org/chunked/stable/server-performance.html.
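Putting that together for the settings asked about in the question, a hypothetical neo4j.properties fragment could look like this (the heap values are examples only, not recommendations):
# initial heap size (in MB)
wrapper.java.initmemory=4096
# maximum heap size (in MB)
wrapper.java.maxmemory=4096
# extra literal JVM flags
wrapper.java.additional.1=-server
wrapper.java.additional.2=-XX:+UseConcMarkSweepGC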
One way to test whether the settings are applied is to run jinfo <pid> in the Docker container, where <pid> is the process id of the Neo4j JVM. To enter the container, you can either change the entrypoint to /bin/bash at the command line when you run the container, or use nsenter. The latter would be my choice.
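For example, once you have a shell inside the container, something along these lines should show the effective heap and GC flags (the grep pattern is only an assumption about how the process appears in the process list):
# find the Neo4j JVM's pid, then dump its flags and system properties
ps aux | grep [j]ava
jinfo <pid>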

How to swap Solr cores from the shell

I have a Solr setup with two cores. I want to schedule one core (core1, the backend) for a full import frequently (e.g. every 5 minutes), then swap it with the live core (core0, serving) from a shell command run through a scheduler.
For the full-import command, I am using the following shell command:
wget -o - -q -t 1 http://localhost:8080/solr/core1/dataimport?command=full-import
This works fine. If I do a core swap from the browser by hitting http://localhost:8080/solr/admin/cores?action=SWAP&core=core1&other=core0, I get the latest update instantly in search. But if I schedule this URL as a shell command similar to the dataimport one, it doesn't do the swap.
Did you try with
curl "http://localhost:8080/solr/admin/cores?action=SWAP&core=core1&other=core0"
from the shell? Note that the URL needs to be quoted, otherwise the shell treats the & characters as "run in background" and drops the rest of the URL.
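A minimal, hedged sketch of a script you could run from cron, combining both steps (the sleep is a placeholder; a full-import runs asynchronously, so in practice you would poll the dataimport status before swapping):
#!/bin/sh
# kick off the full import on the backend core
curl -s "http://localhost:8080/solr/core1/dataimport?command=full-import"

# crude wait for the import to finish (placeholder; better: poll ?command=status)
sleep 300

# swap the freshly imported core with the serving core
curl -s "http://localhost:8080/solr/admin/cores?action=SWAP&core=core1&other=core0"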
There is a catch with the SWAPs:
Apache Solr allows you to swap two cores around in non-Cloud configurations. They take each other's name, so it is a good way to push an updated core into production without downtime.
But an interesting question is how this is achieved. Normally, a core's name is its directory name too. So, does Solr rename the directory on the filesystem as well?
Not really! Instead, the name property in the core.properties file is updated to use the name of the other core. Usually that property is used to give an alternative name to the core for when the directory naming conventions are not suitable.
The gotcha is - of course - that you still have two directories with right-looking names for the cores you see in the Admin UI. So it is very easy to forget that extra redirection/rename step when troubleshooting somebody else's - or even your own old - setup.
