For the reasons outlined here: https://community.exasol.com/t5/discussion-forum/performance-on-premise-dropping/td-p/9029 we need to restart a database regularly (at least until all issues are resolved, and this can take some time). So the question arises: can this be done on a regular basis without human interaction?
Lua is not a solution; a cron job might be possible, but that requires OS access, which we do not have.
Try the XML-RPC API: https://github.com/exasol/exaoperation-xmlrpc/blob/master/EXAoperation_XMLRPC.md#method-restartdatabase
Here is a nice example with explanations: https://community.exasol.com/t5/environment-management/starting-and-stopping-clusters-using-xml-rpc/ta-p/1579
Yes, this should be possible using the shutdownDatabase() and startDatabase() methods from this GitHub repository. You might need to use stateDatabase() in between to determine when the database has actually stopped before you try to start it again.
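A minimal sketch of that loop using Python's xmlrpc.client, assuming the method names mentioned above; the host, cluster path, credentials, and state string below are placeholders, so check the linked documentation for the actual values:

    import time
    from xmlrpc.client import ServerProxy

    # EXAoperation exposes each database as an XML-RPC object; the URL
    # below (user, password, host, cluster and database name) is made up.
    db = ServerProxy("https://admin:secret@license-server/cluster1/db_exa_db")

    db.shutdownDatabase()

    # Poll until the database has really stopped before starting it again.
    # The exact return value of stateDatabase() may differ; see the docs.
    while db.stateDatabase() == "running":
        time.sleep(10)

    db.startDatabase()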
I want to use Dask on Databricks. It should be possible (I cannot see why not). If I import it, one of two things happens: either I get an ImportError, or, when I install distributed to solve that, Databricks just says Cancelled without throwing any errors.
For anyone looking for an answer, check this Medium blog post. To keep people from missing it in the comments, I'm posting it as an answer.
I don't think we have heard of anyone using Dask under Databricks, but so long as it's just Python, it may well be possible.
The default scheduler for Dask is threads, and this is the most likely thing to work. In this case you don't even need to install distributed.
For the Cancelled error, it sounds like you are using distributed and, at a guess, the system is not allowing you to start extra processes (you could test this with the subprocess module). To work around it, you could do
client = dask.distributed.Client(processes=False)
Of course, if it is indeed the processes that you need, this would not be great. Also, I have no idea how you might expose the dashboard's port.
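For completeness, a small sketch of both variants; the file path and column name are made-up placeholders:

    import dask.dataframe as dd

    # Hypothetical CSV files on DBFS; adjust the path to your data.
    df = dd.read_csv("/dbfs/data/part-*.csv")

    # Variant 1: the default threaded scheduler -- no distributed needed.
    mean_threads = df.x.mean().compute(scheduler="threads")

    # Variant 2: a distributed Client confined to a single process, which
    # sidesteps the restriction on spawning extra worker processes.
    from dask.distributed import Client
    client = Client(processes=False)
    mean_dist = df.x.mean().compute()

    print(mean_threads, mean_dist)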
Instead of having to go through a convoluted push process that takes several minutes to complete every time I make a minor change, I'd like direct FTP access to my files on the Heroku server.
Is this possible?
No, it is not possible. While pushing directly with FTP is simpler, it is a fragile means of deploying code and managing applications (hard to track what was done, difficult to reproduce or manage rollbacks, etc.).
No, you cannot. With Git you can manage different versions of your code.
As a workaround, you can try Dropbox Sync and deploy your app from Dropbox.
Update: this feature was deprecated in July 2018.
I'm studying the best way to run multiple Redmine instances on the same server (basically I need a database for each Redmine group).
So far I have two options:
Deploy a Redmine instance for each group
Deploy one Redmine instance with multiple databases
I really don't know what the best practice is in this situation; I've seen people do it both ways.
I've tested deploying multiple Redmines (3 instances) with nginx and Passenger. It worked well, but I think it may not be feasible with a lot of instances. Each app needs around 100 MB of RAM, and as requests increase it tends to allocate more processes to the app. This scenario seems bad if we have a lot of instances.
Option 2 seems reasonable; I think I can implement it with Rails environments. But I think there are some security problems related to sessions (I suspect a user of site A would be allowed to perform actions on site B after authenticating on A).
Is there any good practice for this situation? What's the best approach to take?
Another requirement related to this: we must be able to create or shut down a Redmine instance without interrupting the others (e.g. we should avoid server restarts).
Thanks for any advice, and sorry for my English!
Edit:
My solution:
I used a Redmine instance for each group, with nginx + Unicorn to manage each instance independently (Passenger didn't let me do that).
The two options are not so different after all. The only difference is that in option 2, you only have one copy of the code on your disk.
In any case, you still need to run separate worker processes for each instance, as Redmine (and most Rails apps in general) doesn't support switching databases per request, and some environment-specific data is cached in the process.
Given that, there is not really much incentive to share even the codebase, as it would require certain monkey patches and symlink magic to allow proper initialization with the intentional configuration differences (database and email configuration, paths to uploaded files, ...). The Debian package does that, but it's (in my eyes) rather brittle and leads to a rather non-standard system.
But to stress again: even if you share the same code on the disk between instances, you can't share the running worker processes.
Running multiple instances from the same codebase is not officially supported by Redmine. However, the Debian/Ubuntu packages seem to support such an approach... See:
Multiple instances of redmine on Debian squeeze
So, generally:
If you use Debian/Ubuntu go with option #2
Otherwise go with #1
Rolling forward a couple of years, you might now want to consider a third option: using a Docker container for each of your Redmine instances.
I've been using https://github.com/sameersbn/docker-redmine.git, and have been quite happy with it, except that it doesn't yet support handling of incoming mail for creating and commenting on tickets.
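If you go that way, starting or stopping one group's instance without touching the others becomes trivial. A hedged sketch with the Docker SDK for Python; the image tag, container name, port mapping, and the omitted database settings are assumptions, so check the repository's README for what the image actually requires:

    import docker

    client = docker.from_env()

    # Start a Redmine container for one group on its own host port.
    # A real setup also needs the database container/links described
    # in the repository's README.
    client.containers.run(
        "sameersbn/redmine:latest",
        name="redmine_group_a",
        ports={"80/tcp": 8081},
        detach=True,
    )

    # Stopping this group's instance leaves all other groups untouched.
    client.containers.get("redmine_group_a").stop()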
I'm a bit overwhelmed by the sheer number of possible solutions the Rails community has created for my problem. So perhaps someone can help me figure out how to solve it best.
What I want to do is write a Rails app that behaves kind of like Dropbox. On the one hand, it should be a web interface where I can upload and download files to my web server; this interacts with my database and all that. On the other hand, I have SSH access to that server and can put files there manually. Now I want these file system actions to trigger my Rails app to do the things it would do if I had created the file via the web interface.
So I somehow write a daemon, right? There are a lot of solutions, like
daemons.rubyforge.org/
github.com/mirasrael/daemons-rails
github.com/costan/daemonz
github.com/kennethkalmer/daemon-kit
Another feature I would like is for my Rails app to automatically spawn and stop my daemon when I start or quit the Rails app, respectively. So "daemonz" seems like the best solution. But as I googled further I found
github.com/FooBarWidget/daemon_controller/
which seems a lot more "high tech" and is already in use, since I deploy with Passenger. But I don't understand whether it kills my daemons when I quit Rails. I suppose that is not the case, so I wonder how to implement this in my app.
The way to implement a "thing" that reacts to file system changes seems straightforward to me. I'd use
github.com/guard/listen/
(an alternative would be github.com/ttilley/fssm)
But since this is the first time I'm really faced with this kind of thing, what I don't understand is whether this spawns a server I can communicate with, or what kind of object I have to deal with.
The last thing I would like to implement is a kind of worker queue, so that listening for file system changes is separated from the actions of my Rails app. But there are so many solutions that I'm totally overwhelmed trying to pick one:
github.com/tobi/delayed_job/
github.com/defunkt/resque
http://backgroundrb.rubyforge.org/
And what is
http://godrb.com/
all about? How could that help me?
Does anyone have hints on how to solve this? Thanks a lot!
Jan
P.S. I'd like to post links to all the GitHub projects, but unfortunately I don't have enough 'reputation'.
I'd definitely look into creating a process (daemon) that monitors the relevant directory. Then your Rails app can just put files into it without having to know anything about the back end, and it'll work with SSH too.
Your daemon can load the Rails environment & communicate with your database. I'd leave all the communication between them at that level.
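To make the idea concrete, here is a language-agnostic sketch of such a directory monitor, written in Python (in your stack the listen gem would play this role); the watched path and the handler are made up:

    import os
    import time

    WATCH_DIR = "/srv/app/incoming"  # hypothetical drop directory

    def handle_new_file(path):
        # The real daemon would register the file in the database here.
        print("new file:", path)

    seen = set(os.listdir(WATCH_DIR))
    while True:
        current = set(os.listdir(WATCH_DIR))
        for name in sorted(current - seen):
            handle_new_file(os.path.join(WATCH_DIR, name))
        seen = current
        time.sleep(2)  # naive polling; inotify-based tools avoid this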
As for making it start/stop with your Rails app... are you sure? I use god (the Ruby gem) to start/monitor processes. It will "daemonize" your Ruby app for you, too. If you want, you can actually tell god to stop your directory-monitor process and then exit when Rails stops. And you can fire off god from a Rails initializer.
However, if you might find yourself using SSH or some other means to put files into that directory when rails is not running, you might look into putting a script into /etc/init.d to automatically start god when the server boots up.
HTH
I think you want something like Guard for monitoring the changes on the filesystem and performing actions when changes occur.
As for god, you should definitely look into it. It will make starting/stopping processes you depend on considerably easier. We used Bluepill for a while, but there were so many bugs that we ditched it and moved to god, which IMHO is a lot more pleasant to work with, for the most part.
Have you tried creating a script file, e.g.:
startDaemon.rb
and then placing it in:
config/initializers/
?
I was using the spawn plugin (http://rubyforge.org/projects/spawn/), which worked excellently. However, I then moved to Mongo (using mongo_mapper) and Spawn no longer worked.
Modifying the plugin is beyond the scope of my abilities. Is there a simple way to do spawning in Rails that would work with Mongo? It's not an often-run process so it doesn't have to be the most elegant solution in the world.
Thanks!
It looks like the reason it's not working is that:
The plugin also patches ActiveRecord::Base to handle some known bugs when using
threads (see lib/patches.rb).
Is there any way you could use a cron job with script/runner? If so, the following link should help you:
http://www.ameravant.com/posts/recurring-tasks-in-ruby-on-rails-using-runner-and-cron-jobs
I am a big fan of putting the logic into a controller and using cron to call the page with curl or wget.
Easy, cheap, works within the Rails stack so you can re-use your code.
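For example, a tiny script cron could run in place of curl/wget; the URL and token are hypothetical, with the token keeping the endpoint from being triggered by anyone who stumbles across it:

    import urllib.request

    # Hit the Rails controller action that runs the task; cron schedules this.
    urllib.request.urlopen("https://example.com/tasks/spawn_job?token=SECRET")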