Oracle job not executing automatically after 30 seconds - Oracle 9i

I have created this job to run a procedure. I expect it to run automatically 30 seconds after the first execution, but it does not. What could be the issue? The job runs if I execute it manually. I am using Oracle 9i.
DECLARE
  X NUMBER;
BEGIN
  SYS.DBMS_JOB.SUBMIT
  ( job       => X
   ,what      => 'ALERT_GNARATE;'
   ,next_date => to_date('25/11/2011 11:43:35','dd/mm/yyyy hh24:mi:ss')
   ,interval  => 'SYSDATE+1/2880'
   ,no_parse  => TRUE
  );
  SYS.DBMS_OUTPUT.PUT_LINE('Job Number is: ' || to_char(X));
END;
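A common cause worth checking (not confirmed in this question) is that DBMS_JOB.SUBMIT is transactional: the submitted job is not visible to the job queue until the session commits, so jobs submitted without a COMMIT only ever run manually. The job queue also requires the JOB_QUEUE_PROCESSES initialization parameter to be greater than zero. A sketch under that assumption:

```sql
-- Sketch, assuming the cause is an uncommitted SUBMIT: DBMS_JOB changes
-- only take effect once the submitting session commits.
DECLARE
  X NUMBER;
BEGIN
  SYS.DBMS_JOB.SUBMIT
  ( job       => X
   ,what      => 'ALERT_GNARATE;'
   ,next_date => SYSDATE + 30/86400   -- first run 30 seconds from now
   ,interval  => 'SYSDATE+1/2880'     -- then every 30 seconds
  );
  COMMIT;  -- without this, the job never becomes visible to the job queue
END;
/
```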

Using limit and offset in rails together with updated_at and find_each - will that cause a problem?

I have a Ruby on Rails project with millions of products, each with a different URL. I have a function "test_response" that checks the URL and returns either true or false for the Product attribute marked_as_broken; either way the Product is saved and has its "updated_at" attribute set to the current timestamp.
Since this is a very tedious process, I have created a task which in turn starts 15 tasks, each with N/15 products to check. The first one should check, for example, the 1st to the 10,000th, the second one the 10,000th to the 20,000th, and so on, using limit and offset.
This script runs: it starts 15 processes, but one after another they complete far too early. They do not abort with an error; each finishes with "Process exited with status 0".
My guess is that using find_each together with a query on updated_at, while the script itself updates "updated_at" as it runs, changes the result set and keeps the script from going through the 10,000 items as intended, but I can't verify this.
Is there something inherently wrong with what I am doing here? For example, does find_each re-run the SQL query once in a while, producing completely different results each time than anticipated? I expect it to cover the same 10,000 to 20,000 range, just split into pieces.
task :big_response_launcher => :environment do
  nbr_of_fps = Product.where(:marked_as_broken => false).where("updated_at < '" + 1.year.ago.to_date.to_s + "'").size.to_i
  nbr_of_processes = 15
  batch_size = (nbr_of_fps / nbr_of_processes) - 2
  heroku = PlatformAPI.connect_oauth(auth_code_provided_elsewhere)
  (0..nbr_of_processes - 1).each do |i|
    puts "Launching #{i.to_s}"
    current_offset = batch_size * i
    puts "rake big_response_tester[#{current_offset},#{batch_size}]"
    heroku.dyno.create('kopa', {
      :command => "rake big_response_tester[#{current_offset},#{batch_size}]",
      :attach => false
    })
  end
end

task :big_response_tester, [:current_offset, :batch_size] => :environment do |task, args|
  current_limit = args[:batch_size].to_i
  current_offset = args[:current_offset].to_i
  puts "Launching with offset #{current_offset.to_s} and limit #{current_limit.to_s}"
  Product.where(:marked_as_broken => false).where("updated_at < '" + 1.year.ago.to_date.to_s + "'").limit(current_limit).offset(current_offset).find_each do |fp|
    fp.test_response
  end
end
As many have noted in the comments, find_each ignores order and limit. I found this answer (ActiveRecord find_each combined with limit and order) that seems to work for me. It's not working 100%, but it is a definite improvement. The rest seems to be a memory issue: I cannot have too many processes running at the same time on Heroku.
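Separately from the find_each issue, the launcher's offset arithmetic leaves a tail of records that no worker ever checks, because batch_size is shrunk by 2 while the offsets still step by batch_size. A standalone sketch of the arithmetic (the numbers are illustrative, not from the question):

```ruby
# Standalone arithmetic check (no Rails needed): with the launcher's
# batch_size = (total / processes) - 2, the workers' contiguous
# [offset, offset + batch_size) windows do not cover every record.
total        = 100_000
processes    = 15
batch_size   = (total / processes) - 2        # integer division: 6664
offsets      = (0...processes).map { |i| batch_size * i }
covered      = offsets.map { |o| (o...o + batch_size) }
covered_rows = covered.sum(&:size)
[batch_size, covered_rows, total - covered_rows]
# => [6664, 99960, 40]  (40 records are never checked)
```

Dropping the `- 2` (and giving the last worker any remainder) would make the windows cover the whole range.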

Working stored procedure fails when called from SQL Agent Job

I have a stored procedure that runs fine with no errors inside SQL Server Management Studio. However, when the same stored procedure is executed as a step of a SQL Agent Job, it terminates with:
Error 3930: The current transaction cannot be committed and cannot support operations that write to the log file. Roll back the transaction.
The stored procedure lives in a schema named [Billing]. Most of the tables it uses are also in the [Billing] schema.
The main stored procedure begins a database transaction. The stored procedures called by the main stored procedure do all of their work on that inherited transaction. The main stored procedure is responsible for committing or rolling back the transaction.
The database user running the SQL Agent job step is not in the sysadmin role, nor is it dbo. It belongs to the db_datareader and db_datawriter database roles, and has been given Delete, Execute, Insert, References, Select, and Update permissions in the [Billing] schema.
Here is the main stored procedure:
CREATE PROCEDURE [Billing].[GenerateBillXml]
    @pUseProductionPsSystem bit
    ,@pPeriodYear int
    ,@pPeriodMonthNumber int
    ,@pDocumentTypeName varchar(20)
    ,@pUser varchar(100)
    ,@pFilePath varchar(500)
    ,@pExportDateTime datetime2(7)
    ,@pResultCode int = 0 OUTPUT
AS
BEGIN
    SET NOCOUNT ON;
    SET @pResultCode = 0;
    DECLARE @transactionBegun bit = 0
        ,@RC int = 0
        ,@processDateTimeUtc datetime2(7) = GETUTCDATE()
        ,@billGenerationId int
        ,@PsJobSuffix char(2)
        ,@periodDate date
        ,@periodDateString char(6)
        ,@PsBaseJobNumber char(6)
        ,@okayToRun int = 0
        ,@msg varchar(500);
    BEGIN TRY
        /* calculate the period date */
        SET @periodDate = CONVERT(date, CAST(@pPeriodMonthNumber as varchar) + '/01/' + CAST(@pPeriodYear as varchar));
        SET @periodDateString = CONVERT(varchar, @periodDate, 12); -- yyMMdd
        /* retrieve the job suffix */
        SELECT @PsJobSuffix = CAST([PsJobSuffix] as char(2))
        FROM [dbo].[DocumentType] dt
        WHERE [Name] like @pDocumentTypeName;
        /* build the base job number */
        SET @PsBaseJobNumber = LEFT(@periodDateString, 4) + @PsJobSuffix;
        /*
         * We've made it past the input check - record the fact that we're generating a bill
         */
        INSERT [Billing].[BillGeneration] (
            [PsBaseJobNumber], [Status], [RunBy], [ProcessDateTimeUtc]
        ) VALUES (
            @PsBaseJobNumber, 'Running', @pUser, @processDateTimeUtc
        );
        IF @@ROWCOUNT = 1
            SET @billGenerationId = SCOPE_IDENTITY();
        EXECUTE @RC = [Billing].[_0_OkayToGenerateBill]
            @PsBaseJobNumber
            ,@okayToRun OUTPUT
            ,@pResultCode OUTPUT;
        IF @pResultCode = 0
        BEGIN
            -- called stored procedure completed without error
            IF @okayToRun = -1 -- this bill has already been generated
            BEGIN
                SET @msg = 'The billing for job ' + CAST(@PsBaseJobNumber as varchar) + ' has already been produced.';
                RAISERROR(@msg, 16, 1);
            END
            IF @okayToRun = -2 -- too early to run billing for this period
            BEGIN
                SET @msg = 'It is too early to generate billing for job ' + CAST(@PsBaseJobNumber as varchar) + '.';
                RAISERROR(@msg, 16, 1);
            END
            IF @okayToRun <> 1 -- unknown error...
            BEGIN
                SET @msg = 'Unknown error occurred while determining whether okay to generate bill for job ' + CAST(@PsBaseJobNumber as varchar) + '.';
                RAISERROR(@msg, 16, 1);
            END
        END
        ELSE
        BEGIN
            SET @msg = 'Unknown failure in sub-stored procedure [Billing].[_0_OkayToRun]() for job ' + CAST(@PsBaseJobNumber as varchar) + '.';
            RAISERROR(@msg, 16, 1); -- will cause branch to CATCH
        END
        /* Okay to generate bill */
        /* If not in a transaction, begin one */
        IF @@TRANCOUNT = 0
        BEGIN
            BEGIN TRANSACTION;
            SET @transactionBegun = 1;
        END
        EXECUTE @RC = [Billing].[_1_GeneratePsPreBillData]
            @PsBaseJobNumber
            ,@pUser
            ,@pResultCode OUTPUT;
        IF @pResultCode = 0
        BEGIN
            -- stored procedure ran to successful completion
            EXECUTE @RC = [Billing].[_2_GetBillingDataForXmlGeneration]
                @pUseProductionPsSystem
                ,@PsBaseJobNumber
                ,@pResultCode OUTPUT;
            IF @pResultCode = 0
            BEGIN
                -- stored procedure ran to successful completion
                IF @transactionBegun = 1
                    -- all table data has been created/updated
                    COMMIT TRANSACTION;
                -- Output XML bill to file
                EXECUTE @RC = [Billing].[_3_GenerateBillingXmlFilesForPsJob]
                    @PsBaseJobNumber
                    ,@pFilePath
                    ,@pExportDateTime
                    ,@pResultCode OUTPUT;
                IF @pResultCode <> 0
                BEGIN
                    -- called stored procedure failed
                    SET @msg = '[Billing].[_3_GenerateBillingXmlFilesForPsJob]() failed for job ' + CAST(@PsBaseJobNumber as varchar);
                    RAISERROR(@msg, 16, 1); -- will cause branch to CATCH
                END
            END
            ELSE
            BEGIN
                -- called stored procedure failed
                SET @msg = '[Billing].[_2_GetBillingDataForXmlGeneration]() failed for job ' + CAST(@PsBaseJobNumber as varchar);
                RAISERROR(@msg, 16, 1); -- will cause branch to CATCH
            END
        END
        ELSE
        BEGIN
            -- called stored procedure failed
            SET @msg = '[Billing].[_1_GeneratePsPreBillData]() failed for job ' + CAST(@PsBaseJobNumber as varchar);
            RAISERROR(@msg, 16, 1); -- will cause branch to CATCH
        END
        -- bill generation was successful
        IF @billGenerationId IS NOT NULL
            UPDATE [Billing].[BillGeneration]
            SET [Status] = 'Successful', [ProcessEndDateTimeUtc] = GETUTCDATE()
            WHERE [Id] = @billGenerationId;
    END TRY
    BEGIN CATCH
        -- rollback transaction if we started one
        IF @transactionBegun = 1
            ROLLBACK TRANSACTION;
        -- record the error
        INSERT [Billing].[BillGenerationError] (
            [DateTime], [Object], [ErrorNumber], [ErrorMessage]
        ) VALUES (
            GETDATE(), OBJECT_NAME(@@PROCID), ERROR_NUMBER(), ERROR_MESSAGE()
        );
        -- bill generation failed
        IF @billGenerationId IS NOT NULL
            UPDATE [Billing].[BillGeneration]
            SET [Status] = 'Failed'
                ,[Note] = ERROR_MESSAGE()
                ,[ProcessEndDateTimeUtc] = GETUTCDATE()
            WHERE [Id] = @billGenerationId;
        SELECT ERROR_NUMBER() as ErrorNumber;
        SELECT ERROR_MESSAGE() as ErrorMessage;
        SET @pResultCode = 1;
    END CATCH
END
As @Lukasz Szozda hinted in one of his comments on my question, the issue was that when the SQL Agent job executed the BCP.EXE command, it was running under the service account used by SQL Agent, which in my case is the fairly restrictive Local System account. At that point it became obvious that a proxy account had to be used, so I created a proxy under Operating System (CmdExec), which was the only choice that made sense.
I went back to the job step to change it to use the proxy, but then noticed that with its current type, Transact-SQL script (T-SQL), there is no way to assign a proxy account.
After trying a few things, I finally decided to put the T-SQL statements from the job step into a new stored procedure and call that stored procedure via the SQL command-line utility SQLCMD.EXE. I then changed the job step type from Transact-SQL script (T-SQL) to Operating System (CmdExec), which let me set the Run As field to the proxy I created earlier. I specified the command to run as CMD.EXE /c SQLCMD.EXE -S [ServerName] -Q "EXEC [NewProcedureName] [parameters]".
If you're curious why I'm running SQLCMD.EXE under CMD.EXE: one of the parameters to the new stored procedure was the current date in a particular format ('%date:~4,10%'), which the SQL Server job execution environment doesn't expand, but which CMD.EXE certainly does.
Overall, this took a bit more effort than I expected.
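For concreteness, the final CmdExec job step looks roughly like this ([ServerName] and [NewProcedureName] are the same placeholders as above, not real names):

```
REM CmdExec job step (sketch): CMD.EXE expands %date:~4,10% before
REM SQLCMD.EXE passes the result to the stored procedure as a parameter.
CMD.EXE /c SQLCMD.EXE -S [ServerName] -Q "EXEC [NewProcedureName] '%date:~4,10%'"
```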

How to run an action every few seconds for 10 minutes in Rails?

I am trying to build a QuizUp-like app and want to send a broadcast every 10 seconds with a random question, for 2 minutes. How do I do that using Rails? I am using Action Cable for sending the broadcast. I could use rufus-scheduler for running an action every few seconds, but I am not sure it makes sense for my use case.
The simplest solution would be to spawn a new thread:
Thread.new do
  duration = 2.minutes
  interval = 10.seconds
  number_of_questions_left = duration.seconds / interval.seconds
  while number_of_questions_left > 0 do
    ActionCable.server.broadcast(
      "some_broadcast_id", { random_question: 'How are you doing?' }
    )
    number_of_questions_left -= 1
    sleep(interval)
  end
end
Notes:
This is only a simple solution, and the total run time will actually exceed 2 minutes, because each loop iteration ends up sleeping slightly more than 10 seconds (the broadcast itself takes time). If this discrepancy is not important, the solution above is already sufficient.
Also, this kind of scheduler only persists in memory, as opposed to a dedicated background worker like Sidekiq. If the Rails process gets terminated, all currently running "looping" code is terminated with it, which may or may not be what you want.
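To avoid the drift mentioned above, one option (a plain-Ruby sketch, independent of Rails and Action Cable) is to compute each tick's deadline up front from a monotonic clock and sleep only the remaining time, so late wakeups don't accumulate:

```ruby
# Drift-free variant: each tick's deadline is start + (i + 1) * interval,
# so a slow iteration shortens the next sleep instead of pushing
# every later tick back.
def run_every(interval:, count:)
  start = Process.clock_gettime(Process::CLOCK_MONOTONIC)
  count.times do |i|
    yield i                                    # e.g. broadcast a question here
    deadline  = start + (i + 1) * interval
    remaining = deadline - Process.clock_gettime(Process::CLOCK_MONOTONIC)
    sleep(remaining) if remaining > 0
  end
end

ticks = []
run_every(interval: 0.05, count: 3) { |i| ticks << i }
ticks  # => [0, 1, 2]
```

In the app this would be wrapped in `Thread.new` just like the snippet above, with the block body doing the `ActionCable.server.broadcast` call.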
If using rufus-scheduler:
require 'rufus-scheduler'

number_of_questions_left = 12
scheduler = Rufus::Scheduler.new
# `first_in` is set so that the first job runs immediately
scheduler.every '10s', first_in: 0.1 do |job|
  ActionCable.server.broadcast(
    "some_broadcast_id", { random_question: 'How are you doing?' }
  )
  number_of_questions_left -= 1
  job.unschedule if number_of_questions_left == 0
end

How to make the second procedure wait until the first procedure completes, using DBMS_SCHEDULER?

I need to run multiple procedures one by one: the second procedure depends on the first, the third depends on the second, and so on for several procedures.
This is my code. The first one (A_BLOCK) executes at the given time.
BEGIN
  DBMS_SCHEDULER.CREATE_JOB (
    job_name   => 'mxpdm.my_job1',
    job_type   => 'PLSQL_BLOCK',
    job_action => 'BEGIN A_BLOCK; END;',
    start_date => '04-FEB-15 2.25.00PM ASIA/CALCUTTA',
    enabled    => TRUE,
    comments   => 'Check');
END;
/
I don't know how to schedule the second procedure, B_BLOCK, so that it executes after A_BLOCK completes. I have searched the docs and I'm not clear on it, since I have only just started using DBMS_SCHEDULER. Can somebody please help me?
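One simple option, if the blocks do not need to be separate jobs, is to run them sequentially inside a single job, since statements in one PL/SQL block execute in order. A sketch under that assumption (the job name and the third procedure's name are illustrative):

```sql
-- Minimal sketch: one job whose action runs the blocks in sequence,
-- so B_BLOCK only starts after A_BLOCK completes, and so on.
BEGIN
  DBMS_SCHEDULER.CREATE_JOB (
    job_name   => 'mxpdm.my_seq_job',
    job_type   => 'PLSQL_BLOCK',
    job_action => 'BEGIN A_BLOCK; B_BLOCK; C_BLOCK; END;',
    start_date => '04-FEB-15 2.25.00PM ASIA/CALCUTTA',
    enabled    => TRUE,
    comments   => 'Run dependent blocks in order');
END;
/
```

If the procedures must remain separate scheduler jobs, DBMS_SCHEDULER chains (CREATE_CHAIN, DEFINE_CHAIN_STEP, DEFINE_CHAIN_RULE) are the dedicated mechanism for expressing such dependencies.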

Programmatically get the number of jobs in a Resque queue

I am interested in setting up a monitoring service that will page me whenever there are too many jobs in a Resque queue (I have about 6 queues, with a different threshold for each). I also want to set up a very similar monitoring service that will alert me when the number of failed jobs in my queue exceeds a certain amount.
My question is: there are a lot of keys associated with Resque on my Redis server, and it's confusing. I don't see a straightforward way to get a count of jobs per queue or the number of failed jobs. Is there currently a trivial way to grab this data from Redis?
Yes, it's quite easy, given you're using the Resque gem:
require 'resque'
Resque.info
will return a hash, e.g.:
{
  :pending => 54338,
  :processed => 12772,
  :queues => 2,
  :workers => 0,
  :working => 0,
  :failed => 8761,
  :servers => ["redis://192.168.1.10:6379/0"],
  :environment => "development"
}
So to get the failed job count, simply use:
Resque.info[:failed]
which would give:
=> 8761  # in my example
To get the queues use:
Resque.queues
this returns an array, e.g.:
["superQ", "anotherQ"]
You may then find the number of jobs per queue:
Resque.size(queue_name)
e.g. Resque.size("superQ") or Resque.size(Resque.queues[0])
Here is a bash script which will monitor the total number of jobs queued and the number of failed jobs.
while :
do
  let sum=0
  let errors=$(redis-cli llen resque:failed)
  for s in $(redis-cli keys 'resque:queue:*')   # quoted so the shell doesn't glob
  do
    let sum=$sum+$(redis-cli llen $s)
  done
  echo $sum jobs queued, with $errors errors
  sleep 1 # sleep 1 second; you probably want to increase this
done
This is for Resque 1.x; 2.0 might use different key names.
There is also a method, Resque.queue_sizes, that returns a hash of queue names and sizes:
Resque.queue_sizes
=> {"default"=>0, "slow"=>0}
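Putting the pieces together, the paging check itself is just a comparison of those per-queue sizes against your thresholds. A sketch (the helper name and limits are illustrative; the input hash mirrors the shape Resque.queue_sizes returns):

```ruby
# Hypothetical alert helper: returns the queues whose size exceeds
# the configured limit. Queues without a configured limit never alert.
def queues_over_limit(queue_sizes, limits)
  queue_sizes.select { |name, size| size > limits.fetch(name, Float::INFINITY) }
end

queues_over_limit({ "default" => 120, "slow" => 3 }, { "default" => 100 })
# => {"default"=>120}
```

In a real monitor you would feed it `Resque.queue_sizes` (plus `Resque.info[:failed]` against its own threshold) on a timer and page whenever the result is non-empty.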
