What are the steps to change the DRBD node IP?

How do I change the node IP of DRBD?
This is my config:
resource data {
    protocol C;
    on server1 {
        device /dev/drbd0;
        disk /dev/sdb1;
        address 192.168.56.101:7788;
        meta-disk internal;
    }
    on server2 {
        device /dev/drbd0;
        disk /dev/sdb1;
        address 192.168.56.103:7788;
        meta-disk internal;
    }
}
These are the steps I took:
Stop the DRBD service on server1 and server2.
Change the IP of server2.
Update the hosts file.
Update the DRBD config.
Start the DRBD service on server1 and server2.
After that I got errors such as the resource coming up diskless. So what are the correct steps to change the IP and avoid data loss?

# drbdadm disconnect <resource_name> # on both nodes
Change the IP address within the /etc/drbd.d/<resource_name>.res file on both nodes
# drbdadm adjust <resource_name> # on both nodes
When DRBD starts, it runs through a series of steps; if any one of them fails, it skips the later steps. One of those steps is creating a TCP socket. If that fails, DRBD skips the remaining steps, one of which is attaching to the disk.
I suspect that in your case DRBD cannot find the configured IP address present on the system, so it skips attaching to the disk and starts up connectionless and diskless. Make sure the IP address you're changing DRBD to use is already present on the systems.
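Putting those steps together for the resource data from your config, the sequence might look like this. It is a rough sketch rather than a verified procedure: 192.168.56.104 and eth0 are placeholders for whatever new address and interface server2 actually gets, and the verification command differs between DRBD versions.
# ip addr add 192.168.56.104/24 dev eth0    # on server2: the new address must exist before DRBD can bind to it
# drbdadm disconnect data                   # on both nodes: drops the replication link, leaves the disk attached
Edit /etc/drbd.d/data.res on both nodes so the address line for server2 reads 192.168.56.104:7788
# drbdadm adjust data                       # on both nodes: applies the new config and reconnects
# cat /proc/drbd                            # DRBD 8.x; on DRBD 9 use: drbdadm status data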

Related

How can a process in a container started as a Docker Compose service find out both its IPv4 and its IPv6 address?

Within a service "foo" started as part of a Docker Compose stack of services, I would like to be able to find out both the IPv4 and the IPv6 address of the container the service is running in.
One way to find out is via the shell command hostname -i, but this only gives the IPv4 address. I'd also prefer a single way to get both, if possible. Is there a way that Compose can pass the service its IPv4/IPv6 addresses during startup? If not, can the service determine them from the Docker runtime after startup?
This is not strictly relevant to the question, but I'll describe what I'm doing so the setup makes sense.
I've got Nginx also running in the stack. It has a rule like the following:
location ~ "^/foo/bar.*" {
if ($http_x_hsn = "") {
return 401 '{"error":"Invalid hsn"}';
}
# The resolver DNS name is resolvable by Docker.
# Any instance of "foo" has a trivial DNS server built in
# and can be used to resolve the IP address of a particular
# "foo" instance which is associated with a particular value
# of "hsn" passed in a the header "x-hsn" by looking up that
# association which will have been centrally registered prior
# to the handling of this type of request. Of course, if
# no association between foo instance IP and hsn is found the
# DNS query will return no record and this request should
# then fail.
# Note that a particular foo instance is 1-to-1 with a
# particular value of hsn.
resolver <DNS name resolving to any service foo instance>;
# Given an "hsn" with value "bar", service foo
# will be asked to resolve "foo-service.bar".
# The IP address returned should be one visible to Nginx
set $upstream_service foo-service.$http_x_hsn;
# Now, proxy to the correct instance of foo, based on
# the value of "hsn"
proxy_pass http://$upstream_service;
}
I've got this working using os.networkInterfaces() in foo (a Node.js service), but the structure it returns can list multiple interfaces, and I'm not sure the one being used by the service will always be eth0, so I thought I'd ask here if there's a better way.
I should also mention that the associations between hsn values and service instances will have been created by Nginx routing to an instance (via another location rule) in a round-robin way, with that instance centrally registering its IP address against that particular hsn value.
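If the shell route is acceptable, here is a minimal sketch of reading both addresses from inside the container, assuming the image has iproute2 installed and that eth0 is the interface Compose attached to the network (both are assumptions, not guarantees):
# IPv4 address(es) on eth0
ip -o -4 addr show dev eth0 | awk '{print $4}' | cut -d/ -f1
# Globally scoped IPv6 address(es) on eth0 (excludes the link-local fe80:: entry)
ip -o -6 addr show dev eth0 scope global | awk '{print $4}' | cut -d/ -f1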

Akka Cluster with bind-port and bind-hostname

After configuring bind-hostname and bind-port in application.conf, as specified by the Akka FAQ, and bringing up the cluster, I'm receiving an error:
[ERROR] [07/09/2015 19:54:24.132] [default-akka.remote.default-remote-dispatcher-20]
[akka.tcp://default@54.175.105.30:2552/system/endpointManager/reliableEndpointWriter-akka.tcp%3A%2F%2Fdefault%4054.175.105.30%3A2552-757/endpointWriter]
dropping message [class akka.actor.ActorSelectionMessage]
for non-local recipient[Actor[akka.tcp://default@54.175.105.30:32810/]]
arriving at [akka.tcp://default@54.175.105.30:32810]
inbound addresses are [akka.tcp://default@54.175.105.30:2552]
What this seems to say is that the actor has received a message destined for port 32810 (the external port) but it's dropping it because the internal port (2552) doesn't match.
The relevant portions of the file are:
hostname = 54.175.105.30
port = 32810
bind-hostname = 172.17.0.44
bind-port = 2552
I've tried this on 2.4-M1, 2.4-M2, and 2.4-SNAPSHOT, all with the same effect.
Has anyone else encountered this before? Any suggestions?
edit:
This actor system is running in ECS in Docker containers. The Docker container configuration is set to forward from the ephemeral range to 2552 on the container's private IP. ECS is successfully mapping hostname:port to bind-hostname:bind-port. The actor system is successfully running and binding to the local bind-hostname and bind-port, but it is dropping messages and emitting the error described above.
bind-* configuration settings are meant to be used in situations where Akka nodes are started behind NAT (or in Docker containers). Have you configured address translation from hostname:port to bind-hostname:bind-port?
In your particular configuration, when you do
ctx.actorSelection("akka.tcp://default@54.175.105.30:32810/user/actor") ! "Hi"
then someone at 54.175.105.30 should be listening on TCP port 32810 and forwarding it to 172.17.0.44:2552. The actor system should be running with your provided configuration at 172.17.0.44:2552. Is this the case?
Also, you have to configure this for every node that is behind NAT, because connections between actor systems are peer-to-peer.
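For reference, assuming classic remoting over Netty TCP (the default transport in Akka 2.4), these four settings live under akka.remote.netty.tcp and can also be supplied as system property overrides at startup. The sketch below just restates the values from the question; my-actor-system.jar is a placeholder, and this is an illustration rather than a tested launch command:
java -Dakka.remote.netty.tcp.hostname=54.175.105.30 \
     -Dakka.remote.netty.tcp.port=32810 \
     -Dakka.remote.netty.tcp.bind-hostname=172.17.0.44 \
     -Dakka.remote.netty.tcp.bind-port=2552 \
     -jar my-actor-system.jar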
This was due to a misconfiguration on my end. Some leftover boilerplate code was overriding the bind-port.

Remote connection to Neo4j server

I believe the way to allow a remote connection is to change this line in conf/neo4j-server.properties, specifically by removing the comment and restarting the server.
org.neo4j.server.webserver.address=0.0.0.0
My URL is https://0.0.0.0:7473/browser/ and it works on the local machine, but when I test the URL in Safari on an iPhone over 3G, it cannot connect.
What do I set the address to in the properties file?
I thought it was the IP address of my computer, but the external address I got from Googling "ip address mac" didn't work, nor (obviously) did the local IP address of my machine, 192.168.0.14.
I should point out that setting it to the IP address from Google throws an error and the log reads:
2015-01-29 17:10:08.888+0000 INFO [API] Failed to start Neo Server on port [7474], reason [MultiException[java.net.BindException: Can't assign requested address, java.net.BindException: Can't assign requested address]]
With the default configuration, Neo4j only accepts local connections.
In neo4j-community-3.1.0, edit the conf/neo4j.conf file and uncomment the following to accept non-local connections:
dbms.connectors.default_listen_address=0.0.0.0
Setting
org.neo4j.server.webserver.address=0.0.0.0
makes Neo4j listen on all network interfaces.
The remainder of that reply is not Neo4j-related at all - it's regular networking. Double-check that ports 7473 and/or 7474 are not blocked, either by a locally running firewall or by your router. Your local IP 192.168.0.14 indicates you're behind a router doing NAT, so you have to set up port forwarding in your router for the ports mentioned above.
Please be aware that this is potentially dangerous, since everyone who knows your external IP can access your Neo4j instance. Consider using either https://github.com/neo4j-contrib/authentication-extension or a VPN instead of port forwarding.
in 3.0:
##### To have HTTP accept non-local connections, uncomment this line
dbms.connector.http.address=0.0.0.0:7474
I confused myself with the setting. For anyone who has the same problem: 0.0.0.0 just means the server is no longer limited to local connections, so to access it you use the public IP address of the computer that's hosting the Neo4j server.
Just make sure that the ports you set in the server properties (default are 7474 and 7473) are open for incoming connections on your router/firewall etc.
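A quick way to verify that from outside your network, assuming nc and curl are available and <public-ip> stands in for your router's external address:
# Is the HTTP port reachable at all?
nc -vz <public-ip> 7474
# Does the HTTPS endpoint answer? (-k skips certificate checks in case the cert is self-signed)
curl -skI https://<public-ip>:7473/ | head -n 1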
I think there's some confusion here. That configuration property org.neo4j.server.webserver.address is about which IP address the server you're starting listens on for external connections. Relevant documentation is here.
It seems you're asking how to configure your database to talk to a remote database. I don't think you can do that. Rather, by editing that file you're planning on running a database on the host where that file is. Your local database on that host will write files to wherever the org.neo4j.server.database.location configuration parameter points.
A remote connection is something that the neo4j shell might establish, or that you browser might make to a foreign server running neo4j; but you don't establish that sort of remote connection by editing that file. Hopefully this helps.
Also, if you have SSH access to the remote server running Neo4j, you can set up an SSH tunnel to access it via localhost:
ssh -NfL localhost:7474:localhost:7474 -L localhost:7687:localhost:7687 yourname@yourhost
Then type in the browser:
localhost:7474
It depends on the version.
Look for the phrase 'non-local connections' in the conf file (in my case, $NEO4J_HOME/conf/neo4j.conf).
Then follow the instructions in the comments.
In my case:
# With default configuration Neo4j only accepts local connections.
# To accept non-local connections, uncomment this line:
server.default_listen_address=0.0.0.0

Jenkins Slave port number for firewall

We use Jenkins 1.504 on Windows.
We need to have Master and Slave in different sub-networks with firewall in between.
We can't have ANY to ANY port firewall rules, we must specify exact port numbers.
I know the port Master is listening on.
I also see that Slave opens connection to the Master from the arbitrary port dynamically assigned every run, and port on the Master side is also arbitrary.
I can fix the Master's port by specifying it in Manage Jenkins > Configure Global Security > TCP port for JNLP slave agents.
How do I fix the Slave's port?
UPDATE: Found the connection mechanism described here: https://wiki.jenkins-ci.org/display/JENKINS/Jenkins+CLI#JenkinsCLI-Connectionmechanism
I think it might work for us, but it would still be better to have a fixed-to-fixed port connection.
We had a similar situation, but in our case Infosec agreed to allow ANY to one fixed port, so we didn't have to fix the slave port; rather, fixing the master to the high JNLP port 49187 worked ("Configure Global Security" -> "TCP port for JNLP slave agents").
TCP: 49187 (fixed JNLP port), 8080 (Jenkins HTTP port)
Other ports needed to launch the slave as a Windows service:
TCP: 135, 139, 445
UDP: 137, 138
A slave isn't a server; it's a client-type application. Network clients (almost) never use a specific port. Instead, they ask the OS for a random free port. This works much better since you usually run clients on many machines where the current configuration isn't known in advance, and it prevents thousands of "client wouldn't start because port is already in use" bug reports every day.
You need to tell the security department that the slave isn't a server but a client which connects to the server, and that you absolutely need a rule which says client:ANY -> server:FIXED. The client port number should be >= 1024 (ports 1 to 1023 need special permissions), but I'm not sure you actually gain anything by adding a rule for this - if an attacker can open privileged ports, they basically already own the machine.
If they argue, ask them why they don't require the same rule for all the web browsers people use in your company.
I have a similar scenario and had no problem connecting after setting the JNLP port as you describe and adding a single firewall rule allowing connections to the server on that port. Granted, it is a randomly selected client port going to a known server port (a host:ANY -> server:FIXED rule is needed).
From my reading of the source code, I don't see a way to set the local port to use when making the request from the slave. It's unfortunate; it would be a nice feature to have.
Alternatives:
Use a simple proxy on your client that listens on port N and forwards all data to the actual Jenkins server on the remote host using a constant local source port (see the sketch after this list). Connect your slave to this local proxy instead of the real Jenkins server.
Create a custom Jenkins slave build that allows an option to specify the local port to use.
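A rough sketch of the first alternative using socat, assuming it is installed on the slave host; the port numbers and master hostname are placeholders, and a fixed source port only supports one connection at a time, which is enough for a single JNLP slave:
# Listen on localhost:9000 and forward to the master's JNLP port,
# always using 50000 as the source port of the outgoing connection
socat TCP-LISTEN:9000,bind=127.0.0.1,reuseaddr TCP:jenkins-master.example.com:49187,sourceport=50000
# Point the slave at localhost:9000; the firewall rule can then be
# slave:50000 -> master:49187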
Also remember that if you are using HTTPS via a self-signed certificate, you must alter the jenkins-slave.xml configuration file on the slave to specify the -noCertificateCheck option on the command line.

Cassandra Cluster Setup getting JMX error

I'm trying to set up a Cassandra cluster as a test bed but got a JMX remote connection error. I seem to have found the answer for my error on the Cassandra FAQ page:
Nodetool says "Connection refused to host: 127.0.1.1" for any remote host. What gives?
Nodetool relies on JMX, which in turn relies on RMI, which in turn sets up its own listeners and connectors as needed on each end of the exchange. Normally all of this happens behind the scenes transparently, but incorrect name resolution for either the host connecting, or the one being connected to, can result in crossed wires and confusing exceptions.
If you are not using DNS, then make sure that your /etc/hosts files are accurate on both ends. If that fails try passing the -Djava.rmi.server.hostname=$IP option to the JVM at startup (where $IP is the address of the interface you can reach from the remote machine).
But can somebody help me with how to set -Djava.rmi.server.hostname=$IP?
Or what should I add to the hosts file? I know that in hosts we normally add "IP alias", but whose IP and alias?
I don't know much Java or Linux.
I'm currently working on Ubuntu v10.04 and Cassandra v0.74.
Sudesh
For JMX you need to enable JMX-remoting:
java -Dcom.sun.management.jmxremote
Depending on where you want to access the JMX server from, you also need to specify a port:
-Dcom.sun.management.jmxremote.port=12345
and set or disable password authentication.
Have a look at http://download.oracle.com/javase/1.5.0/docs/guide/management/agent.html for more details.
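To tie this back to Cassandra: these flags normally go on the JVM_OPTS lines in conf/cassandra-env.sh (a sketch only; 192.168.1.10 and 7199 are placeholders, so keep whatever JMX port your install already uses):
# conf/cassandra-env.sh - make the RMI stubs advertise an address remote nodetool can reach
JVM_OPTS="$JVM_OPTS -Djava.rmi.server.hostname=192.168.1.10"
JVM_OPTS="$JVM_OPTS -Dcom.sun.management.jmxremote.port=7199"
JVM_OPTS="$JVM_OPTS -Dcom.sun.management.jmxremote.authenticate=false"
JVM_OPTS="$JVM_OPTS -Dcom.sun.management.jmxremote.ssl=false"
# Restart Cassandra, then from the remote machine:
# nodetool -h 192.168.1.10 -p 7199 ring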
