Resolv.getaddress not resolving an FQDN that has an entry in /etc/hosts - ruby-on-rails

I'm using Ruby version 2.0. The function Resolv.getaddress(fqdn) returns the following error when used in a Rails application:
Parameters: {"fqdn"=>"1kzdm.scalsoln.in"}
Completed 500 Internal Server Error in 64ms
Resolv::ResolvError (no address for 1kzdm.scalsoln.in):
I have an entry for this host in /etc/hosts, and it is resolvable using ping.
How can I make Resolv.getaddress() read entries from the /etc/hosts file?

It should just work like you expect - and it does on my machine.
Resolv::Hosts doesn't read /etc/hosts on every resolution, though. It reads the file on the first resolution and caches it for subsequent calls. Perhaps you simply need to restart your server process to force a reload of the /etc/hosts file?
To work around the caching (i.e. always resolve using non-cached data), you can create a new Resolv instance every time you want to look something up:
Resolv.new.getaddress("1kzdm.scalsoln.in")
(note the new in there).
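If you want to consult only /etc/hosts and skip DNS entirely, you can also use the Resolv::Hosts resolver directly. A minimal sketch using the standard library (the hostname is from the question; the file path is the Unix default):
require 'resolv'
# A fresh Resolv::Hosts instance re-reads /etc/hosts, bypassing the cached
# default resolver as well as DNS.
hosts = Resolv::Hosts.new('/etc/hosts')
puts hosts.getaddress('1kzdm.scalsoln.in')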

How can I tell what options are in use on a running Mosquitto Service

How can I tell if the settings files associated with a Mosquitto instance have been properly applied?
I want to add a configuration file to the conf.d folder to override some settings in the default file, but I do not know how to check they have been applied correctly once the Broker is running.
i.e. change persistence to false (without editing the standard file).
Test it.
You can run mosquitto with verbose output enabled, which will generally give you feedback on what options were set, but don't just believe that.
To do that, stop running Mosquitto as a service (how you do this depends on your setup) and manually run it from the shell with the -v option. Be sure to point it at the correct configuration file with the -c option.
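For example, on a typical systemd-based install (service name and config path assumed; adjust to your setup):
sudo systemctl stop mosquitto
mosquitto -v -c /etc/mosquitto/mosquitto.conf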
That's not enough to be sure that it's actually working properly. To do that you need to test it.
Options have consequences or we wouldn't use them.
If you configure Mosquitto to listen on a specific port, test it by trying to connect to that port.
If you configure Mosquitto to require secure connections on a port, test it by trying to connect to the port unsecured (this shouldn't work) and secured (this should work).
You should be able to devise relatively simple tests for any options you can set in the configuration file. If you care if it's actually working, don't just take it on faith; test it.
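As a concrete sketch of such a test, using the mosquitto_sub and mosquitto_pub clients that ship alongside the broker (host, port, and topic here are placeholders):
# in one shell: subscribe on the port under test
mosquitto_sub -h localhost -p 1883 -t 'test/#' -v
# in another shell: publish and confirm the message arrives
mosquitto_pub -h localhost -p 1883 -t 'test/ping' -m 'hello'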
For extra credit you can bundle the tests up into a script so that you can run an entire test suite easily in the future and test your Mosquitto installation anytime you make changes to it.
Having duplicate configuration options with different values is a REALLY bad idea.
The behaviour of mosquitto is not defined in this case: which value should be honoured, the first found or the last? And when using the conf.d directory, what order will the files be loaded in?
Also, will you remember in the future, when you go looking, that you changed the value in a conf.d file?
If you want to change one of the defaults in the /etc/mosquitto/mosquitto.conf file then edit that file. (Any sensible package management system will notice the file has been changed and ask what to do at the point of upgrade)
The conf.d/ directory is intended for adding extra listeners.
Also be aware that there really isn't a default configuration file; you must always specify a configuration file with the -c command line option. The file at /etc/mosquitto/mosquitto.conf just happens to be the config file that is passed when mosquitto is started as a service installed via most Linux package managers. (The default Fedora install doesn't even set up the /etc/mosquitto/conf.d directory.)

How to force .env variables update in a nuxt project?

hi!
I wonder if anyone knows if there is any way to force the update of the .env file.
First of all, every time I modified my .env variables the changes took effect right away, but then I started using the following build configuration:
build: {
  hardSource: true,
  cache: true,
  parallel: true,
}
And ever since I started using those experimental features, the .env variables do not seem to get updated after I update one value in my .env file.
In my project I develop the API on one machine and the front end on another machine (just for convenience), so sometimes the API machine has the IP address 192.168.100.100 and sometimes 192.168.100.101, etc.
My project uses these environment variables (in the .env file):
API_URL=http://192.168.100.100:4100
BASE_URL=http://localhost:4200
So, when the local IP address of the first machine changes, I have to update the .env file.
The problem now is that even after killing the app, deleting the .nuxt folder and running npm run dev, I still see the API requests going to the previous IP address.
Solutions?
I have thought of disabling the cache and hardSource options, but they are really helpful to me and the IP changes are not that frequent; still, once in a while I have to update some other variable, so that's not a solution for me.
I have also thought of disabling DHCP on the other machine and assigning it a fixed local IP address. That is not ideal, although I think I will do it for now, hoping that in the future I learn a better way of updating the environment variables (because sooner or later I will need to update another variable that has nothing to do with the IP address).
I'd like to know if there is a way to force the .env variables to be updated in a nuxt project with hardSource, cache and parallel set to true.
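For what it's worth, a likely culprit is that hard-source-webpack-plugin (which backs the hardSource option) keeps its cache under node_modules/.cache rather than in .nuxt, so deleting .nuxt alone doesn't clear it. A minimal sketch of a fuller cleanup, assuming the plugin's default cache location:
# remove the hard-source cache along with the build output, then rebuild
rm -rf node_modules/.cache/hard-source .nuxt
npm run dev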

SymmetricDS Sample - client node refused to connect to server node

I am working on the example from the SymmetricDS tutorial. I am using the configuration files corp-000.properties and store-001.properties found in the samples directory of the download zip. I have placed them in the engine directory and edited them so that corp-000 uses a PostgreSQL DB as master-000 and store-001 uses a MySQL DB as slave-001, each on a separate machine.
Here is the config from corp-000.properties:
engine.name=master-000
db.driver=org.postgresql.Driver
db.url=jdbc:postgresql://127.0.0.1/master?stringtype=unspecified
I've also enabled the firewall rules (8080/tcp and 5432/tcp) and changed the port from 31415 to 8080. However, the same error still came out, and the URL returns this result:
This site can’t be reached
<Master-node-IP> refused to connect.
Try:
Checking the connection
Checking the proxy and the firewall
ERR_CONNECTION_REFUSED
What should I do to solve this problem?
Add this to the corp configuration:
auto.registration=true
Can’t hurt to add
auto.reload=true
The solution by @swm is to set the bind IP in SymmetricDS. Below are some example configs.
What is happening here is that the nodes cannot see each other's sync/registration URLs and ports; the database is not the problem.
Make sure the following are set up correctly.
MAIN
registration.url=
sync.url=http://<IP>:<PORT>/sync/<SDS_MAIN>
CHILD
registration.url=http://<IP>:<PORT>/sync/<SDS_MAIN>
sync.url=http://<IP>:<PORT>/sync/<SDS_CHILD>
FULL EXAMPLE CONFIGS BELOW
MAIN
engine.name=<SDS_MAIN>
db.driver=net.sourceforge.jtds.jdbc.Driver
db.url=jdbc:jtds:sqlserver://<IP>:1433/<DB>;useCursors=true;bufferMaxMemory=10240;lobBuffer=5242880
db.user=***********
db.password=***********
registration.url=
sync.url=http://<IP>:<PORT>/sync/<SDS_MAIN>
group.id=<GID>
external.id=000
auto.registration=true
initial.load.create.first=true
sync.table.prefix=sym
#start.initial.load.extract.job=false
compression.level=-1
compression.strategy=0
CHILD
engine.name=<SDS_CHILD>
db.driver=net.sourceforge.jtds.jdbc.Driver
db.url=jdbc:jtds:sqlserver://<IP>:1433/<DB>;useCursors=true;bufferMaxMemory=10240;lobBuffer=5242880
db.user=***********
db.password=***********
registration.url=http://<IP>:<PORT>/sync/<SDS_MAIN>
sync.url=http://<IP>:<PORT>/sync/<SDS_CHILD>
group.id=<GID>
external.id=100
auto.registration=true
initial.load.create.first=true
sync.table.prefix=sym
start.initial.load.extract.job=false
compression.level=-1
compression.strategy=0
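A quick way to confirm the child machine can actually reach the main node's sync endpoint is a plain HTTP request from the child (placeholders as in the configs above):
# should get an HTTP response rather than ERR_CONNECTION_REFUSED
curl -v http://<IP>:<PORT>/sync/<SDS_MAIN>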

Debug authentication of Bazel's http_file

I want to fetch some data in Bazel over HTTP. There's an http_file rule that looks like what I want. The remote server I'm fetching from uses authentication, so I've written it as
load("#bazel_tools//tools/build_defs/repo:http.bzl", "http_file")
http_file(
name = "data_file",
urls = ["https://example.com/data.0.1.2"],
sha256 = "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
downloaded_file_path = "data_file",
)
When I try the build, I get
WARNING: Download from https://example.com/data.0.1.2 failed: class com.google.devtools.build.lib.bazel.repository.downloader.UnrecoverableHttpException GET returned 401 Unauthorized
Followed by fatal errors because the file doesn't exist.
The error makes me think that I'm not authenticating correctly. I have a .netrc file and curl is able to use it to fetch the file.
Is there a way for me to debug? If it was curl, I would pass -v and see the auth header being sent with the request. I'm not sure if bazel is failing to send any authentication or if it's incorrect.
Running Bazel 3.2.0 on Mac OS (Darwin 19.6.0) and on Linux (Ubuntu 18.04).
HTTP 401 indeed sounds like incorrect or missing authentication. .netrc should be supported and recognized. If not explicitly specified with the netrc attribute, ${HOME}/.netrc is tried if HOME is in the environment and bazel runs on a non-Windows host (this has been the case since bazel 1.1, and briefly in 0.29), or %USERPROFILE%/.netrc if that variable is in the environment and bazel runs on Windows (the case since 3.1). At the risk of stating the obvious, the .netrc should be owned by the same UID under which the process using it runs, and its permission bits should be 0600. If authentication methods other than HTTP basic are needed, the auth_patterns attribute needs to be used to configure that.
I am not aware of any ready-made repository rule debugging facility such as a CLI flag, but in this case it should be viable to copy the implementation of the rule and the functions it uses from tools/build_defs/repo, instrument it to get debugging info, and use that for the purpose. For starters, perhaps just print(auth) of what auth = _get_auth(ctx, all_urls) yielded, to see if the rule got the right idea about how to talk to the host in question. It should be a dict with type, login, and password information for each individual urls entry. The magic itself happens in use_netrc.
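A minimal sketch of that instrumentation, written as a standalone repository rule rather than a patched copy of http.bzl (the rule itself is illustrative; read_netrc and use_netrc are the real helpers from tools/build_defs/repo:utils.bzl):
load("@bazel_tools//tools/build_defs/repo:utils.bzl", "read_netrc", "use_netrc")

def _debug_fetch_impl(ctx):
    # Mirror what http.bzl's _get_auth does: read ~/.netrc and map its
    # credentials onto the URLs being fetched.
    netrc = read_netrc(ctx, ctx.os.environ["HOME"] + "/.netrc")
    auth = use_netrc(netrc, ctx.attr.urls, {})
    print(auth)  # e.g. {url: {"type": "basic", "login": ..., "password": ...}}
    ctx.download(url = ctx.attr.urls, output = "data_file", auth = auth)

debug_fetch = repository_rule(
    implementation = _debug_fetch_impl,
    attrs = {"urls": attr.string_list()},
)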

Cassandra fails to initialize with error "Cannot add table 'role_members' to non existing keyspace 'system_auth'"

I am running a Cassandra cluster in Docker containers using fleet for management. I am able to get the cluster up and running, but if I bring the units down with fleet and then back up again, the containers fail. The Cassandra log has this entry on the second start:
Cannot add table 'role_members' to non existing keyspace 'system_auth'.
Fatal configuration error; unable to start server. See log for stacktrace.
INFO 20:59:34 InetAddress /172.17.8.102 is now DOWN
ERROR 20:59:34 Fatal configuration error
org.apache.cassandra.exceptions.ConfigurationException: Cannot add table 'role_members' to non existing keyspace 'system_auth'.
at org.apache.cassandra.service.MigrationManager.announceNewColumnFamily(MigrationManager.java:284) ~[apache-cassandra-2.2.0.jar:2.2.0]
at org.apache.cassandra.service.MigrationManager.announceNewColumnFamily(MigrationManager.java:275) ~[apache-cassandra-2.2.0.jar:2.2.0]
at org.apache.cassandra.service.StorageService.maybeAddTable(StorageService.java:1046) ~[apache-cassandra-2.2.0.jar:2.2.0]
at org.apache.cassandra.service.StorageService.doAuthSetup(StorageService.java:1034) ~[apache-cassandra-2.2.0.jar:2.2.0]
at org.apache.cassandra.service.StorageService.joinTokenRing(StorageService.java:967) ~[apache-cassandra-2.2.0.jar:2.2.0]
at org.apache.cassandra.service.StorageService.initServer(StorageService.java:698) ~[apache-cassandra-2.2.0.jar:2.2.0]
at org.apache.cassandra.service.StorageService.initServer(StorageService.java:581) ~[apache-cassandra-2.2.0.jar:2.2.0]
at org.apache.cassandra.service.CassandraDaemon.setup(CassandraDaemon.java:291) [apache-cassandra-2.2.0.jar:2.2.0]
at org.apache.cassandra.service.CassandraDaemon.activate(CassandraDaemon.java:481) [apache-cassandra-2.2.0.jar:2.2.0]
at org.apache.cassandra.service.CassandraDaemon.main(CassandraDaemon.java:588) [apache-cassandra-2.2.0.jar:2.2.0]
I can't find any information on this particular error, and I really have no idea why it's happening. The closest information I can find is that the system_auth keyspace needs to be configured specially if you are not using the default AllowAllAuthenticator, but I am using the default. I haven't changed it in the cassandra.yaml file.
Does anyone know why this might be happening?
Is it possible that you are using CassandraAuthorizer without using PasswordAuthenticator? I think that might not work and cause this particular error.
system_auth is not applicable to AllowAllAuthenticator; you need to use PasswordAuthenticator instead. If you configure cassandra.yaml in the following way:
authenticator: PasswordAuthenticator
authorizer: CassandraAuthorizer
And then restart Cassandra; it should create the system_auth keyspace for you. If you don't want to set up authorization, you can always use AllowAllAuthorizer instead. More information can be found here.
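To verify after the restart, you can ask for the keyspace directly; cassandra/cassandra is the default superuser that PasswordAuthenticator ships with:
cqlsh -u cassandra -p cassandra -e "DESCRIBE KEYSPACE system_auth;"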
This turned out to be a rather unique configuration issue I had. I was mapping /var/lib/cassandra on the host to /var/lib/cassandra inside my Docker container, but I was also inadvertently mapping /var/lib/cassandra/data to an auto-generated Docker directory on the host. As such, when I stopped and restarted the containers, the data directory would disappear and Cassandra would fail as it tried to recreate data from the commitlog directory.
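For reference, a sketch of a single explicit mapping that avoids the duplicate mount, plus a way to check what actually got mounted (image tag, container name, and host path here are illustrative):
# one bind mount covering the whole Cassandra data tree
docker run -d --name cassandra-node -v /var/lib/cassandra:/var/lib/cassandra cassandra:2.2
# verify there is no second, auto-generated volume over .../data
docker inspect --format '{{json .Mounts}}' cassandra-node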
I got the problem just following the Datastax "Initializing a multiple node cluster (single data center)" tutorial.
I solved the same problem by deleting the whole content of /var/lib/cassandra, not only the content of /var/lib/cassandra/system/.
Why?
I think Kris found the real source of the problem: when restarting, the C* service found the commit log full and tried to recover by replaying the commits found there, failing due to a different configuration and a different table structure...
