I'm having some issues ensuring that a Windows service resource is configured correctly after it is created. I'm in the situation where the creation of the service is handled by a separate installer (.exe).
I need to configure this service afterwards to use a different user.
Here's my resource definition:
windows_service 'Service' do
  action [:configure_startup, :start]
  service_name 'service'
  startup_type :automatic
  run_as_user agent_credentials['user']
  run_as_password agent_credentials['password']
  only_if { ::Win32::Service.exists?('myservice') }
end
I'm pulling the credentials out of an encrypted data bag.
The issue I'm facing is that the account the service runs as is never being updated. In my client run, it sees no need to apply the resource actions after the .exe is installed:
* windows_service[Service] action configure_startup (up to date)
* windows_service[Service] action start (up to date)
I can only get my resource to apply if the service is stopped first, which immediately after install, it is not. Do I have to use Chef to stop it first and then start it again? I thought it would be able to detect that the configuration of the service did not match that of the defined resource and then correct it...
Thanks
Just add :stop to the action array, like so:
windows_service 'Service' do
  action [:configure_startup, :stop, :start]
  service_name 'service'
  startup_type :automatic
  run_as_user agent_credentials['user']
  run_as_password agent_credentials['password']
  only_if { ::Win32::Service.exists?('myservice') }
end
If stopping the service does not fail when it is already stopped, this should ease your pain. Note that you might need to place :stop at a different position in the array, depending on the service's behavior.
I had the same situation as described by DL3001, and found the presented solution insufficient.
Applying the stop/start actions a second time did the trick, however, though this seems awkward and less elegant than it could be.
windows_service 'Configure_service' do
  service_name 'service'
  run_as_user app_vault['username']
  run_as_password app_vault['password']
  startup_type :automatic
  action [:configure_startup, :stop, :start]
end
windows_service 'Restart_service' do
  service_name 'service'
  action [:stop, :start]
end
I have the following actions in a Test controller:
def run
  while true do
    # code
  end
end
def stop
  # stop the run action
end
How can the stop action be implemented to halt the run action?
Because a client waits for a response from the server, you can't have a loop in one endpoint that waits for another endpoint to be called.
In this case, a client will visit /test/run, but since the server won't return anything until the loop finishes, the client will just keep waiting.
This means that (unless you have specifically configured your webserver to allow it) another connection can't be made to the server to reach the /test/stop endpoint.
If you must have a "job" that runs and is cancellable from an endpoint, turn it into an actual background task.
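A minimal sketch of that idea, assuming a shared store both endpoints can reach; the JobControl name, the Hash standing in for Redis/Rails.cache, and the thread-based worker are all illustrative, not from the question:

```ruby
require 'securerandom'

# JobControl is a hypothetical helper: `run` becomes a background thread that
# keeps checking a shared flag, and `stop` just flips that flag.
class JobControl
  STORE = {} # stand-in for Redis or Rails.cache, shared between endpoints

  def self.start
    id = SecureRandom.hex(4)
    STORE[id] = :running
    Thread.new do
      while STORE[id] == :running
        # the work that used to live inside the controller's loop goes here
        sleep 0.01
      end
      STORE[id] = :stopped
    end
    id
  end

  def self.stop(id)
    STORE[id] = :stopping if STORE[id] == :running
  end
end

# The controller actions then shrink to something like:
#   def run;  render plain: JobControl.start;         end
#   def stop; JobControl.stop(params[:id]); head :ok; end
```

In a real app you would use Sidekiq/Resque plus Redis rather than an in-process thread and Hash, but the shape is the same: the "stop" endpoint never touches the job directly, it only writes a flag the job polls.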
I've heard that it's considered bad practice in Ruby to use global variables beginning with a dollar sign. Is this also true for Rails controllers?
For example, I have a web app that uses a series of partial views that render in successive stages. The user input from the first stage gets taken from the param and put into a global variable so that it is accessible to each subsequent method. Those later stages need to easily access the selections the user made in the earlier stages.
routes.rb
post 'stage_one_form' => 'myexample#stage_two_form'
post 'stage_two_form' => 'myexample#stage_three_form'
post 'stage_three_form' => 'myexample#stage_four_form'
myexample_controller.rb
def stage_two_form
  $stage_one_form_input = params[:stage_one_form_input]
end
...
def stage_four_form
  @stage_four_displayed_info = $stage_one_form_input + "some other stuff"
end
This is just a dummy example but it seems a lot more graceful to use global variables here than my original approach, which was to pass the information back and forth from the client to the server in each stage, by using hidden fields.
Are global variables appropriate, or is there a better way?
If you want to store the input from the first stage and use it in stage two, you are building a kind of wizard. Consider using the session, or something more robust, to store the information, rather than configuration or models as stated by #NickM.
For more info:
Rails Multi-Step Form without Wizard Gem
http://railscasts.com/episodes/217-multistep-forms
Additional info...
What you have done here with these global variables will not work in a production deployment where you're using an application server. In those environments you need multiple processes (or threads) so that more than one visitor to your site can be served at the same time.
With both of these you will have two problems:
Setting of the variable for one visitor will affect the experience of the next visitor (even if it's a different person) to be served by that process/thread.
A related problem is that a single visitor is not at all guaranteed to be served by the same process on their next request, so the process/thread that serves their second request is probably not going to have the global variable set from their first request.
In summary: chaos. Use the session; that's precisely what it's for.
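For the wizard example above, the session-based version is a one-line change per stage. Here is a minimal sketch with plain Hashes standing in for the `session` and `params` objects Rails provides per request (the value "red" is made up):

```ruby
# In Rails, `session` and `params` are supplied by the framework on each
# request; plain Hashes stand in for them here so the flow runs in isolation.
session = {}
params  = { stage_one_form_input: "red" }

# stage_two_form: store the stage-one input in this visitor's own session
session[:stage_one_form_input] = params[:stage_one_form_input]

# stage_four_form: read it back later; no global, no hidden fields
stage_four_displayed_info = session[:stage_one_form_input] + " plus some other stuff"
```

Because the session is scoped to one visitor and survives across requests (and across worker processes, with the default cookie store), it avoids both failure modes described above.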
You can throw them in config/application.rb, or your config.yml file.
config.your_variable = 'something'
Then call it from inside your app
<%= Rails.configuration.your_variable %>
Or you could throw it in a controller:
class FooController < ApplicationController
  MY_VARIABLE = 'something'
end
And then call it in your view:
<%= FooController::MY_VARIABLE %>
Or you could throw it in a method in your controller and define a helper:
class FooController < ApplicationController
  def my_variable
    'something'
  end
  helper_method :my_variable
end
More info about config.yml here: Best way to create custom config options for my Rails app?
But if you need to access one variable across various stages, you might want to put it in a session variable in your controller:
session[:stage] = 'something'
and access it later using session[:stage]
Then you can clear it when the process starts again:
session[:stage] = nil
I have a long-running task in the background. How exactly would I poll status from my background task, or would it be better to somehow communicate the task's completion to my front end?
Background :
Basically my app uses a third-party service for processing data. I don't want this external web service workload to block incoming requests to my website, so I put the call inside a background job (I use Sidekiq). When this task is done, I was thinking of sending a webhook to a certain controller, which will notify the front end that the task is complete.
How can I do this? Is there a better solution for this?
Update:
My app is hosted on Heroku.
Update II:
I've done some research on the topic and found out that I can create a separate app on Heroku which will handle this. I found this example:
https://github.com/heroku-examples/ruby-websockets-chat-demo
This long running task will be run per user, on a website with a lot of traffic, is this a good idea?
I would implement this using a pub/sub system such as Faye or Pusher. The idea behind this is that you would publish the status of your long running job to a channel, which would then cause all subscribers of that channel to be notified of the status change.
For example, within your job runner you could notify Faye of a status change with something like:
client = Faye::Client.new('http://localhost:9292/')
client.publish('/jobstatus', {id: jobid, status: 'in_progress'})
And then in your front end you can subscribe to that channel using javascript:
var client = new Faye.Client('http://localhost:9292/');
client.subscribe('/jobstatus', function(message) {
  alert('the status of job #' + message.jobid + ' changed to ' + message.status);
});
Using a pub/sub system in this way allows you to scale your realtime page events separately from your main app - you could run Faye on another server. You could also go for a hosted (and paid) solution like Pusher, and let them take care of scaling your infrastructure.
It's also worth mentioning that Faye uses the Bayeux protocol, which means it will utilise WebSockets where available and long-polling where they are not.
We have this pattern and use two different approaches. In both cases background jobs are run with Resque, but you could likely do something similar with DelayedJob or Sidekiq.
Polling
In the polling approach, we have a javascript object on the page that sets a timeout for polling with a URL passed to it from the rails HTML view.
This causes an Ajax ("script") call to the provided URL, which means Rails looks for the JS template. So we use that to respond with state and fire an event for the object to respond to, whether the result is available or not.
This is somewhat complicated and I wouldn't recommend it at this point.
Sockets
The better solution we found was to use WebSockets (with shims). In our case we use PubNub, but there are numerous services to handle this. That keeps the polling/open connections off your web server and is much more cost effective than running the servers needed to handle these connections.
You've stated you are looking for front-end solutions and you can handle all the front-end with PubNub's client JavaScript library.
Here's a rough idea of how we notify PubNub from the backend.
class BackgroundJob
  @queue = :some_queue

  def perform
    # Do some action
  end

  def after_perform
    publish some_state, client_channel
  end

  private

  def publish(some_state, client_channel)
    Pubnub.new(
      publish_key: Settings.pubnub.publish_key,
      subscribe_key: Settings.pubnub.subscribe_key,
      secret_key: Settings.pubnub.secret_key
    ).publish(
      channel: client_channel,
      message: some_state.to_json,
      http_sync: true
    )
  end
end
The simplest approach that I can think of is that you set a flag in your DB when the task is complete, and your front end (view) periodically sends an Ajax request to check the flag's state in the DB. If the flag is set, you take the appropriate action in the view. Code samples below.
Since you suggested that this long-running task needs to run per user, let's add a boolean to the users table: task_complete. When you add the job to the Sidekiq queue, you can unset the flag:
# Sidekiq worker: app/workers/task.rb
class Task
  include Sidekiq::Worker

  def perform(user_id)
    user = User.find(user_id)
    # Long running task code here, which executes per user
    user.task_complete = true
    user.save!
  end
end
# When adding the task to the sidekiq queue
user = User.find(params[:id])
# The flag would have been set to true by the previous execution.
# In case it is false, sidekiq already has a job entry; we don't need to add it again.
if user.task_complete?
  Task.perform_async(user.id)
  user.task_complete = false
  user.save!
end
In the view you can periodically check whether the flag was set using ajax requests:
<script type="text/javascript">
  var complete = false;
  (function worker() {
    $.ajax({
      url: 'task/status/<%= @user.id %>',
      success: function(data) {
        // update the view based on the ajax response, in case you need to
      },
      complete: function() {
        // Schedule the next request when the current one completes. Once the
        // global 'complete' flag is true, the task is done and we stop polling.
        if(!complete) {
          setTimeout(worker, 5000); // in milliseconds
        }
      }
    });
  })();
</script>
# status action which returns the status of the task
# GET /task/status/:id
def status
  @user = User.find(params[:id])
end
# status.js.erb - add view logic based on what you want to achieve, given whether the task is complete or not
<% if @user.task_complete? %>
  $('#success').show();
  complete = true;
<% else %>
  $('#processing').show();
<% end %>
You can set the timeout based on the average execution time of your task. Say your task takes 10 minutes on average; then there's no point in checking every 5 seconds.
Also in case your task execution frequency is something complex (and not 1 per day), you may want to add a timestamp task_completed_at and base your logic on a combination of the flag and timestamp.
As for this part:
"This long running task will be run per user, on a website with a lot of traffic, is this a good idea?"
I don't see a problem with this approach, though architectural changes like executing jobs (Sidekiq workers) on separate hardware will help. These are lightweight Ajax calls, and some intelligence built into your JavaScript (like the global complete flag) will avoid unnecessary requests. In case you have huge traffic and DB reads/writes are a concern, you may want to store that flag directly in Redis instead (since you already have it for Sidekiq). I believe that will resolve your read/write concerns. This is the simplest and cleanest approach I can think of, though you could achieve the same via WebSockets, which are supported by most modern browsers (though they can cause problems in older versions).
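For the Redis variant mentioned above, only where the flag lives changes. A sketch, with a plain Hash standing in for the Redis connection (with the redis gem you would call redis.set / redis.get on the same keys; the key name and user id are illustrative):

```ruby
# A Hash stands in for Redis here; swap in Redis.new and use
# redis.set(key, "1") / redis.get(key) for the real thing.
redis = {}

def task_key(user_id)
  "task_complete:#{user_id}"
end

# Worker, when the long-running task finishes (instead of user.save!):
redis[task_key(42)] = "1"

# Status action, instead of reading users.task_complete from the DB:
task_complete = (redis[task_key(42)] == "1")
```

This keeps the high-frequency polling reads off your primary database entirely.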
I have the following Port Forwarding created with Net::SSH:
def connectToCustomerSystem
  Net::SSH.start("localhost", 'myuser', :password => "password") do |ssh|
    logger.debug ssh.exec!("hostname")
    ssh.forward.local(1234, "www.capify.org", 80)
    #ssh.loop { true } #dont use loop to directly render director/index
  end
  render :controller => "director", :action => "index"
end
Now I want to cancel this connection via ssh.forward.cancel_local(1234) in a separate method.
def disconnectForwarding
  ssh.forward.cancel_local(1234)
end
But, of course, ssh isn't valid in this context. How could I search all available objects of type Net::SSH? Or is there another way I could quit a specific forwarding? (In the end there will be many forwards for different users, and I don't want to kill them all, just a specific one.)
Thanks in advance.
I see you're trying to keep stateful Ruby objects around while interacting with Rails. This doesn't play well with the Rails "shared nothing" approach to servers. Not only will ssh not be in scope because of the block being out of scope for your method, you will be talking to a different worker process next time.
What you need is a separate, longer-running Ruby process in the background to manage the ssh connection. A simple way to do this is to launch a Ruby script and connect to the object managing ssh using DRb. You can then reconnect to the Ruby process managing the connection each time a request is handled and give it commands over DRb. The ssh variable might work as an instance variable inside the object you speak with over DRb in this case (it looks like you can set it to loop so it sticks around).
There are some terrible approaches to getting out of the scope box, like setting the variable as a thread-local variable, or searching ObjectSpace.each_object, but those approaches are pretty bad in principle.
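A sketch of the DRb approach; ForwardManager, the port, and the URI are all made-up names, and the actual Net::SSH calls are stubbed out as comments. The point is that the forwards live in one long-running process, and each Rails request just obtains a fresh proxy to it:

```ruby
require 'drb/drb'

# ForwardManager lives in a separate long-running Ruby process and would own
# the Net::SSH sessions; the SSH calls are shown as comments only.
class ForwardManager
  def initialize
    @forwards = {} # local_port => forward details (and, really, the ssh session)
  end

  def open_forward(user, local_port, host, remote_port)
    # Real version:
    #   ssh = Net::SSH.start("localhost", user, :password => "...")
    #   ssh.forward.local(local_port, host, remote_port)
    @forwards[local_port] = { user: user, host: host, remote_port: remote_port }
    true
  end

  def cancel_forward(local_port)
    # Real version: @forwards[local_port][:ssh].forward.cancel_local(local_port)
    !@forwards.delete(local_port).nil?
  end

  def active_ports
    @forwards.keys
  end
end

# Background-process side: expose the manager over DRb.
DRb.start_service('druby://localhost:8787', ForwardManager.new)

# Rails side (any request, any worker process): connect and issue commands.
manager = DRbObject.new_with_uri('druby://localhost:8787')
```

Per-user forwards then become `manager.open_forward('myuser', 1234, 'www.capify.org', 80)` and `manager.cancel_forward(1234)`, cancelling exactly one forward without touching the others.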
Ok, I have a Rails 3 application and I am using CouchDB as my primary database to take advantage of its replication capabilities.
Anyway, what I want to do is store some configuration-type stuff in one document in the database, load the values of this configuration document once when the app starts up in production, and reload ONLY if the user goes to the admin panel and explicitly requests it. I was thinking of touching a URL to clear the loaded config, or something.
My thought was that I would just create a before_filter in application_controller, but since I am new to Rails, I didn't know if this was the proper way to do this.
before_filter :get_config

private

def get_config
  @config = Config.get('_id')
end
Clearly this would run on every request, which I don't want or need. Is there a way to save the config output so I don't have to fetch it on every single request, or is there a better way to do this?
Thanks in advance.
Actually, I am writing an article about the proper use of global variables in Rails. This seems to be a case for introducing a global variable, as its value is shared across different users.
In your before_filter, try this:
def get_config
  $config ||= Config.get('_id')
end
This would call Config.get('_id') only if $config is false or nil. Otherwise, $config will remain unchanged.
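The ||= behaviour is easy to see in isolation. Here is a tiny demonstration with a stub Config class standing in for the CouchDB-backed model (the call counter exists purely to show how many times the fetch actually runs):

```ruby
# Stub Config standing in for the CouchDB-backed model.
class Config
  def self.calls
    @calls ||= 0
  end

  def self.get(_id)
    @calls = calls + 1   # count each real fetch
    { "app_name" => "demo" }
  end
end

def get_config
  $config ||= Config.get('_id')
end

get_config
get_config
Config.calls # => 1: the second call reused the memoized $config

# To force a reload (e.g. from the admin panel's "touch a URL" idea),
# clear the global first:
$config = nil
get_config
Config.calls # => 2
```

Note the value is memoized per worker process, so after a reload each process re-fetches it the next time it serves a request.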
The tricky part is that global variables (starting with a $ sign) stay alive for the whole application, so $config is available everywhere (and that could become a problem with careless design!).
Another point: as you said you are new to Rails, I suggest you read more about global variables before using them, and do not get addicted to them.