Managing configuration files across multiple servers - ruby-on-rails

Running a Rails application on multiple servers (~20), I want to be able to manage the configuration files (mainly *.yml, but also SSL pems/certs and other text-based files) from a single location, such that any change to a file, or any new file, is propagated to all servers.
I also want this content source-controlled via Git.
Updates are not frequent, and I want to keep the app untouched, so that data is read from files exactly as it is right now.
What are the available solutions for this? Is ZooKeeper a good fit?

I have not used ZooKeeper, but I believe you should be able to do something like what you need with a tool such as Puppet or Chef.
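For example, a minimal Chef recipe along these lines would push a version-controlled YAML file and a certificate to every node it runs on; the paths, file names, and users are illustrative, not from the question:

    # recipes/default.rb -- the cookbook lives in Git, so edits to these
    # files are versioned and rolled out on the next chef-client run.
    template '/var/www/myapp/shared/config/settings.yml' do
      source 'settings.yml.erb'
      owner  'deploy'
      group  'deploy'
      mode   '0640'
    end

    cookbook_file '/etc/ssl/certs/myapp.pem' do
      source 'myapp.pem'
      owner  'root'
      mode   '0644'
    end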

We're using ZooKeeper for live settings.
One idea is to use a registry.
Say you have a component called Arst.
You can have some config, let's say for Redis, under these folders, each representing a different instance:
/dbs/redis/0 (host, port, db, password as children)
/dbs/redis/1 (host, port, db, password as children)
/dbs/redis/prod (host, port, db, password as children)
And if your component Arst needs to use instance 0, you can have a registry like this:
/arst/redis/0
If you want to add instance 1, just add the node, and a child watch in the application will update things for you without a restart.
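As a rough sketch of the watch side, assuming Ruby and the zk gem (the ensemble address is a placeholder; the paths are the ones above):

    require 'zk'

    zk = ZK.new('zk1:2181,zk2:2181')  # placeholder ensemble address

    # Fires when a child of /arst/redis is added or removed, e.g. when
    # /arst/redis/1 appears: re-read the list and re-arm the watch.
    zk.register('/arst/redis') do |event|
      instances = zk.children('/arst/redis', watch: true)
      # ... rebuild Redis connections from the new instance list ...
    end

    # Initial read, which also arms the first watch.
    instances = zk.children('/arst/redis', watch: true)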
It's not very simple to do, though, and managing the settings can be a pain for teams like QA.
So I'll be working on a console to help with this as well. We'll be open-sourcing some pieces.

Related

Can I transfer files from a local server to my Nextcloud server without using the internet and allow users to access them? (same computer)

I used Docker to set up a Nextcloud server for myself and my family.
Can I transfer files from a local server to my Nextcloud server without using the internet and still allow users to access them?
I ask because I have discovered two strange things:
1. Placing files directly under a specific user's file path on the server does not allow the user to access those files.
2. As long as I don't delete the files added by the user, the user can still read the original content, even if I directly change the content of the files on the server.
Or is the user file path that I have in mind incorrect? I think it's /var/www/html/data/"USERID"/files
I would like to know how to solve this, and also what causes the two problems above.
Thank you so much.
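A likely explanation, though it is an assumption and not from this thread: Nextcloud tracks file metadata in its own database (the file cache) and serves what the cache says rather than what is on disk. Files copied straight into the data directory stay invisible until a rescan, and content changed on disk keeps being reported with its old metadata. A rescan with the occ tool usually reconciles both; the container name below is a placeholder:

    # run inside the Nextcloud container as the web server user
    docker exec -u www-data nextcloud php occ files:scan --all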

Getting "ECONNREFUSED" error when trying to upload to Wolkenkit Blob Server

I'm currently developing a Wolkenkit application which is run on my local machine.
I want to upload a file from the Wolkenkit app to the blob server (as documented here).
When sending a POST request from the server to https://local.wolkenkit.io:3001/, Node.js gives me the error ECONNREFUSED.
I've tested the POST request with another program, and it works there. Any idea why it doesn't work from the wolkenkit application itself?
Thanks!
The Storing files sample you linked to shows code that is to be run in the browser, not in the backend itself. Of course, both should work, but there are a few minor differences you need to watch out for.
Fixing the host name
First, I suppose that local.wolkenkit.io in your case maps to 127.0.0.1, which is the default for wolkenkit. That means that when you try to connect to this domain from within a Docker container, the container does not call out to the blob storage container, but stays within itself. So, the first thing that needs to be fixed is the host name.
Basically, there are two options for this: you can either set up local.wolkenkit.io so that it resolves to the external IP address of your machine, which would work but is pretty cumbersome, or you can directly address the container that is responsible for blob storage by its internal name. The internal name is <name-of-your-app>-depot-file, so you need to replace https://local.wolkenkit.io:3001/ with https://<...>-depot-file.wolkenkit.io:3001/.
Fixing the port
Second, the port is wrong. The blob storage service runs internally on port 3000 and is exposed externally on 3001. So instead of https://<...>-depot-file.wolkenkit.io:3001/ you need to use https://<...>-depot-file.wolkenkit.io:3000/.
Once you have done this, you should no longer get errors such as ECONNREFUSED, since the service can now be found.
Fixing SSL issues
Third, since you are now connecting to the blob storage service using a different domain name, the SSL certificate no longer matches, as it was issued for local.wolkenkit.io. As a result, you will get SSL errors when trying to connect.
The simplest way to get around this is to disable all SSL checks (albeit this is also the most insecure way to handle it!). How to do this depends on the HTTP client module you are using; e.g., in request there is an option called strictSSL that you can set to false.
Of course, what you actually should do is either use a custom certificate that also includes this domain name, or write a function that handles the certificate check and accepts the presented certificate, specifically for this case.
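Putting the three fixes together, here is a sketch. The thread concerns a Node.js client, but the changes (internal host name, internal port 3000, relaxed certificate check) are client-agnostic, so the same request is shown in Ruby for consistency with this collection, with myapp as a hypothetical stand-in for your app name:

    require 'net/http'
    require 'openssl'

    # myapp is a placeholder for <name-of-your-app>
    uri = URI('https://myapp-depot-file.wolkenkit.io:3000/')

    http = Net::HTTP.new(uri.host, uri.port)
    http.use_ssl = true
    # Same effect as request's strictSSL: false -- insecure, dev only!
    http.verify_mode = OpenSSL::SSL::VERIFY_NONE

    response = http.request(Net::HTTP::Post.new(uri.path))
    puts response.code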
If you do all of this, things should work :-)
PS: I am one of the authors of wolkenkit. Thanks a lot for bringing up this issue, and we will take care of this in the future, to make storing blobs easier.

Edit system files from within Ruby on Rails app

I am putting together a web interface for an embedded hardware product (think of something like your router) that needs the ability to change system files owned by root. In particular, I need to change the network address and then restart the networking service.
What is the best way to handle this, both for editing the file and for securely handling the escalation (preferably outside the webapp somehow)? I had the idea of a user that scripts can use, able to sudo with no password but banned from SSH or terminal login, but I am unsure whether this is the best thing to do security-wise, since it leaves that user open to attacks that can then escalate privileges.
I effectively want to read ifcfg-eth0, write changes to a temporary file, double-check that those changes are valid, then write them back to the original ifcfg-eth0, and finally restart the network interface.
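A sketch of that flow in Ruby, assuming a dedicated account (call it netconfig) that is allowed to run exactly one root-owned helper script via passwordless sudo; all names and paths below are illustrative:

    require 'tempfile'

    IFCFG = '/etc/sysconfig/network-scripts/ifcfg-eth0'

    # sudoers entry (via visudo), limited to the single helper script:
    #   netconfig ALL=(root) NOPASSWD: /usr/local/sbin/apply-ifcfg.sh

    def update_ip(new_ip)
      updated = File.read(IFCFG).gsub(/^IPADDR=.*/, "IPADDR=#{new_ip}")

      Tempfile.create('ifcfg-eth0') do |tmp|
        tmp.write(updated)
        tmp.flush
        # The helper validates the file, copies it into place, and
        # restarts the interface, keeping privileged steps out of the app.
        system('sudo', '/usr/local/sbin/apply-ifcfg.sh', tmp.path) or
          raise 'network config update failed'
      end
    end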

YAWS: Update docroot while YAWS is running

I'm working with an implementation of YAWS and would like an easy way for developers to switch between using the docroot specified in yaws.conf and a custom location that contains a development build.
E.g., say docroot is set to serve /TEST/html.
I want a developer to be able to switch docroot to /TEST/dev/html while YAWS is still running (and have that change affect only that one user).
Any suggestions on how I could accomplish this would be appreciated.
There are a few ways to accomplish this:
1. Set up a separate server instance in your yaws.conf file with a different docroot.
2. Run an entirely separate Yaws instance for testing purposes.
3. Use an appmod registered on "/" to examine all incoming requests and redirect those specific to your developers to a different directory area.
4. Use arg rewriting to redirect developer requests to a different server instance (see section 7 of the Yaws PDF documentation).
Of these, I'd recommend 1 or 2, since 3 and 4 rely on "special" URLs that might cause problems if used by a non-developer (in general, mixing testing and production on the same server endpoint can be problematic).
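For option 1, the yaws.conf fragment might look roughly like this (host names and the development port are placeholders; the docroots are the ones from the question):

    # production vhost, unchanged
    <server www.example.com>
            port = 80
            listen = 0.0.0.0
            docroot = /TEST/html
    </server>

    # developer vhost serving the development build
    <server dev.example.com>
            port = 8080
            listen = 0.0.0.0
            docroot = /TEST/dev/html
    </server>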

Elmah XML Logging on Load Balanced Environment

We're implementing Elmah for an internal application. For development and testing we use a single server instance, but in production the app is delivered in a load-balanced environment.
Everything works like a charm with Elmah, except that logging happens independently on each server. What I mean is that if an error happens on Server1, the XML file is stored physically on that server, and the same goes for Server2, since I'm storing those files in App_Data.
When I access the .axd location to see the error list, I only see the errors of the server that happened to serve my request.
Is there any way to consolidate the XML files other than putting them in a shared folder? Having a shared folder would mean granting the user that runs the application on each server access to that separate folder, and the folder would live on only one of the servers instead of both.
I cannot use in-memory or database logging, since FileLog is the only option allowed.
You might consider using ElmahR in this case, since you are not able to use in-memory or database logging. ElmahR gives you a central location that the two load-balanced servers can send errors to (in addition to logging them locally) via an HTTP POST. You can then access the ElmahR site to view an aggregated list. Also, ElmahR stores the error messages it receives in a SQL Server CE database, so it can persist them.
Keep in mind that if the ElmahR dashboard app's design does not meet your initial needs/desires, it can be modified as needed, given that it is an open source project.
Hope this might be a viable option.
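As a rough idea of the wiring in web.config, written from memory rather than against current ElmahR docs, so treat the module type name and setting keys as assumptions to verify; the dashboard URL and source id are placeholders:

    <!-- Keep the local XML file log on each server... -->
    <elmah>
      <errorLog type="Elmah.XmlFileErrorLog, Elmah" logPath="~/App_Data" />
    </elmah>

    <!-- ...and additionally post every error to the central dashboard. -->
    <appSettings>
      <!-- assumed keys; check the ElmahR documentation -->
      <add key="targetUrl" value="http://elmahr-dashboard.local/" />
      <add key="sourceId" value="server1" />
    </appSettings>

    <system.webServer>
      <modules>
        <!-- assumed type name for ElmahR's posting module -->
        <add name="ErrorPost" type="ElmahR.Elmah.ErrorPostModule, ElmahR.Elmah" />
      </modules>
    </system.webServer>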
