I have managed to build InfluxDB from source for Windows without much issue.
I am now trying to get their clustering to work as per:
https://influxdb.com/docs/v0.9/guides/clustering.html
That guide assumes a Linux OS.
In step 2 I updated influxdb.conf from localhost to RealHostName (roughly the edit sketched below) and started the first node.
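(A minimal sketch of that edit, assuming the top-level hostname key from the 0.9 sample config; RealHostName stands in for the actual machine name:)

# influxdb.conf
# hostname = "localhost"
hostname = "RealHostName"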
The logs return:
2016/01/06 15:01:44 Go version go1.5.2, GOMAXPROCS set to 8
2016/01/06 15:01:44 Using configuration at: influxdb.conf
[metastore] 2016/01/06 15:01:44 Using data dir: D:\XXXXXX\.influxdb\meta
[metastore] 2016/01/06 15:01:44 Skipping cluster join: already member of cluster: nodeId=1 raftEnabled=true peers=[localhost:8088 RealHostName:8088]
[metastore] 2016/01/06 15:01:44 Node at RealHostName:8088 [Follower]
[metastore] 2016/01/06 15:01:45 Node at RealHostName:8088 [Leader]. peers=[localhost:8088 RealHostName:8088]
[metastore] 2016/01/06 15:01:45 Node at RealHostName:8088 [Follower]. peers=[localhost:8088 RealHostName:8088]
Note that the peers list still contains localhost:8088, so the node seems to have kept the metadata from its earlier localhost run. Is there something I am missing, or is this part of their disclaimer:
Clustering is in an alpha state right now. There are still a good
number of rough edges. If you notice any issues please report them.
Clustering in InfluxDB 0.9 should be considered alpha functionality, and Windows is not yet a supported OS for InfluxDB. Since you are using alpha functionality on an unsupported OS, it may be impossible to fix whatever issues are happening.
I recommend waiting for the 0.10 release later this month, which will have clustering in a beta/RC state. Full Windows support is coming soon, but I do not have an estimate yet.
You might also consider running your cluster on a Linux server. Are you completely locked into deploying InfluxDB on Windows?
I have a problem with my clustered Camunda environment. What I am trying to do is run multiple Camunda instances on my OpenShift cluster. All of them are connected to a single Oracle DB instance.
My problem is that the deployment of the first instance works as expected. However, as soon as I try to scale the pods to e.g. 3 instances, at least one of them fails and remains stuck on the following output:
{"timestamp":"2020-07-15 14:04:39.503","level":"DEBUG","thread":"main","logger":"org.camunda.bpm.engine.cmd","message":"ENGINE-13009 opening new command context","context":"default"}
14:01:00.741","level":"DEBUG","thread":"main","logger":"org.camunda.bpm.engine.impl.persistence.entity.PropertyEntity.lockDeploymentLockProperty","message":"==> Preparing: SELECT VALUE_ FROM ACT_GE_PROPERTY WHERE NAME_ = 'deployment.lock' for update ","context":"default"}
{"timestamp":"2020-07-15 14:01:00.748","level":"DEBUG","thread":"main","logger":"org.camunda.bpm.engine.impl.persistence.entity.PropertyEntity.lockDeploymentLockProperty","message":"==> Parameters: ","context":"default"}
As the logs show, it has something to do with the locking of process deployments. After further investigation I came across this article on the official Camunda page:
https://docs.camunda.org/manual/7.13/user-guide/process-engine/deployments/
I have also seen the corresponding lock entries in the database.
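(For reference, the lock row lives in ACT_GE_PROPERTY and can be inspected with a query matching the statement from the logs above:)

SELECT NAME_, VALUE_ FROM ACT_GE_PROPERTY WHERE NAME_ = 'deployment.lock';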
Problem: I understand why the deployments are locked, but the main problem is that the lock remains there forever and never gets released. I would appreciate any help!
Are you using autodeployment? The mentioned article describes a weird situation where multiple nodes try to deploy the same resources. In my opinion this should only happen when each node tries to autodeploy the resources.
An explicit deployment (after the nodes are started) should be executed from a single node; see the sketch below.
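(As a sketch, an explicit deployment could go through the Camunda REST API from one place; the host, deployment name, and file name here are placeholders, not from the original setup:)

curl -X POST http://camunda-host:8080/engine-rest/deployment/create \
  -F "deployment-name=my-deployment" \
  -F "deploy-changed-only=true" \
  -F "process.bpmn=@process.bpmn"

(If the Camunda Spring Boot starter is in use, autodeployment can be switched off with the camunda.bpm.auto-deployment-enabled=false property.)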
KR, Joachim
I installed mosquitto on my Raspberry Pi.
I installed the MQTT Binding (1.x) using Paper UI.
I created an item:
Number mqtt_kitchen_gas "Gas Level [%.1f]" {mqtt="<[mosquitto:Home/Floor1/Kitchen/Gas_Sensor:state:default]"}
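(For reference, the 1.x binding takes its broker settings from a flat config file rather than Paper UI; mine looks roughly like this in services/mqtt.cfg, with the broker name matching the item above and placeholder credentials:)

mosquitto.url=tcp://localhost:1883
mosquitto.user=openhabian
mosquitto.pwd=xxxx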
I opened a terminal window and sent:
mosquitto_pub -u openhabian --pw xxxx -t "Home/Floor1/Kitchen/Gas_Sensor" -m 10
The value "10" appeared in the Gas Level field.
I could change "10" to any number and that would appear in the field.
All was good with the world.
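(As a sanity check outside openHAB, the same topic can be watched with a subscriber in a second terminal, using the same credentials:)

mosquitto_sub -u openhabian --pw xxxx -t "Home/Floor1/Kitchen/Gas_Sensor" -v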
Then I rebooted and looked for the MQTT binding. It is not listed under Configuration, Bindings. (GPIO, another binding I installed, is listed.) Also, if I search the Add-ons for MQTT, it shows MQTT Binding (1.x) as installed (it can be uninstalled), and I can still change the Gas Level field using the above mosquitto_pub command.
Maybe I shouldn't worry about it since it works, but maybe I have something wrong with my installation and it will come back to bite me.
Any opinions?
I am not completely sure (about 75% :D), but I think that the MQTT 1.x binding simply doesn't provide any configuration that could be done in Paper UI.
That could be the reason why it isn't displayed in the config section.
Since it appears in the addons section and works, everything should be fine.
I am using this binding too and it has worked properly for ages.
Of course I checked the configuration area in my Paper UI, and it isn't shown there for me either.
Anyway, it's always good to check the logs regularly for problems.
If there are none, the installation is fine.
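(On an openHABian install that usually means something like the following; the paths may differ on other setups:)

tail -f /var/log/openhab2/openhab.log /var/log/openhab2/events.log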
I am trying to deploy an HTTP trigger to KEDA. I have installed the Osiris components for this. They helped me scale to zero when no requests are coming in, but the deployment is not scaling up from 1 instance. I have removed all replica constraints from the deploy.yaml file, still with no effect.
Using an AKS cluster with 3 nodes
Installed the KEDA and Osiris components using the Core Tools install
I expect the deployment to scale up when running a load test with 100 users, but it always shows 1 instance. Please help if anybody has tried this.
I'm trying to add Neo4j 3.0 to my tests for the neo4j gem and I'm having trouble with the server getting killed in a Travis CI container. Pre-3.0 works just fine, but when I use 3.0 it seems to get killed. There seems to be plenty of memory (when I run Neo4j locally it uses 300-400 MB). I get a warning from Neo4j saying:
WARNING: Max 30000 open files allowed, minimum of 40000 recommended. See the Neo4j manual.
That makes me think that it's getting killed because of too many open files. I'm not sure if there's a way to increase the open-file limit in the container, and I have a number of jobs, so I don't want to slow things down by running sudo: true. Did Neo4j 3.0 change to require more open files? (The documentation doesn't seem to imply that it did.)
EDIT:
My .travis.yml file:
This is how I do it, and it works fine for me for 2.3 and 3.0, including a push to Docker Hub.
https://github.com/maxdemarzi/neo_travis
https://travis-ci.org/maxdemarzi/neo_travis
I think our memory allocation is messing things up. One thing that is unusual in your (Travis's) setup is that there is twice as much swap as RAM, and that the amount of memory reported as available is very large.
Try specifying the amount of memory in your config files. See http://neo4j.com/docs/operations-manual/current/#performance-tuning for more details, but essentially add these settings to your config.
In neo4j.conf:
dbms.memory.pagecache.size=1G
and in neo4j-wrapper.conf:
dbms.memory.heap.max_size=1000
dbms.memory.heap.initial_size=1000
The memory limits are set quite low to guarantee that Travis doesn't kill the process, and I suspect that the tests don't need much in terms of memory.
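(For a CI run, one way to apply these before starting the server; the ./neo4j install path here is an assumption:)

echo "dbms.memory.pagecache.size=1G" >> neo4j/conf/neo4j.conf
echo "dbms.memory.heap.max_size=1000" >> neo4j/conf/neo4j-wrapper.conf
echo "dbms.memory.heap.initial_size=1000" >> neo4j/conf/neo4j-wrapper.conf
neo4j/bin/neo4j start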
Some background:
I'm trying to set up Azure Pack in a test environment, and I am currently working on setting up the servers that are going to host it all.
To do this I have two virtual Windows Server 2016 TP4 servers hosted on an ESXi host, and so I need to set up Storage Spaces Direct.
(iSCSI Target and Storage Spaces (WS 2012) have been ruled out, since the first is a nightmare to set up and the internet told me the second comes with low R/W speeds.)
I've been following this guide: https://technet.microsoft.com/en-us/library/mt126109.aspx
Problem:
When I run this cmdlet: Enable-ClusterStorageSpacesDirect
I get this warning: No elegible DAS disk found.
Both servers have 3 disks each. They are initialized and 100% unallocated, and I have tried with them being both offline and online.
If I try running this cmdlet: (Get-Cluster).DasModeEnabled=1
I get the following error: The property 'DasModeEnabled' cannot be found on this object. Verify that the property exists and can be set.
Any and all help is greatly appreciated!
Storage Spaces Direct doesn't support FC & RAID-controlled LUNs.
The key is to force S2D to accept RAID BusType:
(Get-Cluster).S2DBusTypes=256
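(To check which bus type the cluster actually sees for the disks, the standard Storage cmdlets help:)

Get-PhysicalDisk | Select-Object FriendlyName, BusType, CanPool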
Here's a good article about it: https://www.starwindsoftware.com/blog/resolving-enable-clusters2d-bus-type-support-issue-on-some-storage-controllers.
Another option is to reflash the controller's firmware to IT mode.
There are also other solutions, like StarWind, which I suggest you test.