Stored procedure execution gives different results in SSIS vs. SSMS

I execute a stored procedure that takes in 7 parameters and returns an integer code. When I run the EXEC statement in SSMS, it works perfectly and gives a valid result. But when I run the same thing through SSIS, I get the exact opposite of the result I am expecting.
I ran a trace in Profiler and checked what was being passed: the parameters arrive in the correct order with the correct values. I am not sure what is going on with the SSIS execution.
I verified the data types too, and they look correct to me. One thing I noticed in the Profiler trace is that even the integer-valued columns were being passed as varchars, with single quotes around them. Does that make any difference? Below is what Profiler captured:
exec sp_executesql N'EXEC [dbo].[ProcName] #P1,#P2,#P3,#P4,#P5,#P6,#P7',N'#P1 varchar(6),#P2 nvarchar(9),#P3 datetime2(0),#P4 varchar(1),#P5 varchar(1),#P6 varchar(4),#P7 nvarchar(5)','743290',N'000000034','2018-07-25 00:00:00','2','2','1002',N'Swift'
Thanks,
RV

OK, so this was a mistake on my part. The query I was running in SSMS ran against the primary instance of the cluster, whereas the SSIS package points to the read-only instance of the cluster.
I was under the impression that the data was the same on these two servers, but that was my mistake. Once I pointed the SSMS query at the read-only instance, it gave exactly the same result as the SSIS package. Hence closing this out.
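For reference, a hypothetical connection string that explicitly targets the read-only replica behind an availability group listener (the server and database names here are made up) would look like this:

Server=tcp:MyAgListener,1433;Database=MyDb;ApplicationIntent=ReadOnly;Integrated Security=SSPI;

In SSMS, the same ApplicationIntent=ReadOnly parameter can be supplied on the Additional Connection Parameters tab of the connection dialog, which makes it easy to compare exactly what the package sees.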

Related

Getting NullPointerException while trying to get a value from options in Apache Beam

I am using Java 8 and Apache Beam 2.19.0 to run some Dataflow jobs. As per my requirements, I set an option value dynamically in code, as follows:
option.setDay(ValueProvider.StaticValueProvider.of(sDay));
I then try to read this value in another transform in the same pipeline. For small inputs it works fine and I can read options.getDay().get(), but for large inputs, such as 5 million lines spread across different files, it throws a NullPointerException at options.getDay().get().
Adding some more data points to this question, for better understanding:
- If I read 1 million lines, it executes fine.
- If I read 2 million lines, it executes fine but logs: Throttling logger worker. It used up its 30s quota for logs in only 25.107s
- If I read more than 2 million lines, it logs the same throttling warning and then throws the NullPointerException at options.getDay().get().
If I understood correctly, it looks like you're trying to call setDay on every element in the stream. My guess is that one element calls set while another element tries to get or set in parallel, which causes the NullPointerException.
To fix this, carry sDay on the element itself, as another property, instead of mutating the options.
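A minimal sketch of that approach, with a hypothetical input path and sDay value, carrying the day as the key of a KV pair so downstream transforms never read it back from the options:

import org.apache.beam.sdk.Pipeline;
import org.apache.beam.sdk.io.TextIO;
import org.apache.beam.sdk.transforms.DoFn;
import org.apache.beam.sdk.transforms.MapElements;
import org.apache.beam.sdk.transforms.ParDo;
import org.apache.beam.sdk.values.KV;
import org.apache.beam.sdk.values.PCollection;
import org.apache.beam.sdk.values.TypeDescriptors;

public class DayOnElement {
  public static void main(String[] args) {
    Pipeline pipeline = Pipeline.create();
    final String sDay = "20200301"; // hypothetical; resolved once, at pipeline-construction time

    PCollection<String> lines =
        pipeline.apply(TextIO.read().from("gs://my-bucket/input-*")); // hypothetical path

    // Attach the day to every element instead of mutating the shared options
    PCollection<KV<String, String>> linesWithDay =
        lines.apply(
            MapElements.into(
                    TypeDescriptors.kvs(TypeDescriptors.strings(), TypeDescriptors.strings()))
                .via(line -> KV.of(sDay, line)));

    // Downstream transforms read the day from the element, not from options.getDay()
    linesWithDay.apply(
        ParDo.of(
            new DoFn<KV<String, String>, String>() {
              @ProcessElement
              public void processElement(ProcessContext c) {
                String day = c.element().getKey(); // per-element value, no shared mutable state
                c.output(day + ":" + c.element().getValue());
              }
            }));

    pipeline.run();
  }
}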

Redis Lua Script Unpack Returning Different Results

Setup: run sadd a b c.
When I execute this code against the set a:
local keystoclear = unpack(redis.call('smembers', KEYS[1]))
redis.call('sadd', 'keystoclear1', keystoclear)
redis.call('sadd', 'keystoclear2', unpack(redis.call('smembers', KEYS[1])))
keystoclear1 ends up with the single value "b" in it, while keystoclear2 has both values.
I am by no means a Lua expert, so I may just be misunderstanding some behavior here, but I would like to know what is causing it.
I tested this on both the Windows and Linux versions of Redis, with redis-cli and with the StackExchange.Redis client; the behavior is the same in all cases. This is a trivial example; I would actually like to store the result of the unpack because I need to perform several operations with it.
UPDATE: I understand the issue now.
unpack() only returns the first element here, because:
Lua always adjusts the number of results from a function to the circumstances of the call. When we call a function as a statement, Lua discards all of its results. When we use a call as an expression, Lua keeps only the first result. We get all results only when the call is the last (or the only) expression in a list of expressions.
This case is slightly different from the one you referenced in your update. Here, unpack may return several elements, but you store only one and discard the rest. You could capture more of them with local keystoclear1, keystoclear2 = ..., but it is much easier to store the table itself and unpack it as needed:
local keystoclear = redis.call('smembers', KEYS[1])
redis.call('sadd', 'keystoclear1', unpack(keystoclear))
As long as unpack is the last parameter, you'll get all the elements that are present in the table being unpacked.
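The adjustment rule quoted above is easy to see in plain Lua, outside Redis (a minimal sketch):

local function two() return 1, 2 end
local a = two()          -- a = 1; the second result is discarded
local t = { two(), 10 }  -- t = {1, 10}; the call is not last, so it is adjusted to one result
local u = { 10, two() }  -- u = {10, 1, 2}; the call is last, so all results are kept

The first assignment is exactly the original local keystoclear = unpack(...) case: a single variable on the left, so only the first result is kept.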

How to find a particular Redis key's memory size in a Lua script

redis.call('select','14')
local allKeys = redis.call('keys','orgId#1:logs:email:uid#*')
for i = 1 , #allKeys ,1
do
local object11 = redis.call('DEBUG OBJECT',allKeys[i])
print("kk",object11[1])
end
Here "DEBUG OBJECT" is run successfully on redis-cli, but if we want to run through lua script on multiple key. That send error like this.
(error) ERR Error running script (call to f_b003d960240545d9540ebc2319d863221045
3815): Wrong number of args calling Redis command From Lua script
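The "Wrong number of args" error itself comes from passing the two-word command as one string: in redis.call, the command name and its subcommand must be separate arguments. A minimal fix for the failing line, assuming the rest of the script stays unchanged:

local object11 = redis.call('DEBUG', 'OBJECT', allKeys[i])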
That said, DEBUG OBJECT is not a good bet. It shows the serialized length of the value, so it is just the size the object would take once stored in an RDB file.
To get some hint about the size of an object in Redis, you need to resort to more complex techniques, and you can only get an approximation. You need to run:
- TYPE
- OBJECT ENCODING
- the object-type-specific command to get its length (for example, SCARD for a set)
- a sampling of a few elements, to estimate the average element size
Based on these four pieces of information, you then need to check the Redis source code for the memory footprints of the internal structures used, and do the math. Not easy...
A much more viable approximation is to just use:
APPROX_USED_MEM = num_elements * avg_size * overhead_factor
You may want to pick an overhead factor that makes sense for a variety of data types. The error is big, but it is an approximation good enough for some use cases. Maybe overhead_factor could be something like 2; a sketch of this follows.
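A minimal sketch of that formula for a set, as a Lua script (the sample size of 10 and the overhead factor of 2 are assumptions):

-- KEYS[1] is the set to approximate
local num_elements = redis.call('scard', KEYS[1])
local sample = redis.call('srandmember', KEYS[1], 10)
local total = 0
for _, member in ipairs(sample) do
  total = total + #member              -- string length as a proxy for element size
end
local avg_size = #sample > 0 and total / #sample or 0
return num_elements * avg_size * 2    -- overhead_factor = 2

Note that the result is truncated to an integer on the way back to the client, which is fine for a rough estimate.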
TL;DR: What you are trying to do is complex and error-prone. The idea is to eventually provide a MEMORY command able to do this (this has since shipped as MEMORY USAGE, in Redis 4.0).

F# Array.Parallel hanging

I have been struggling with parallel and async constructs in F# for the last couple of days and am not sure where to go at this point. I have been programming with F# for about 4 months (certainly no expert), and I currently have a series of calculations implemented in F# (ASP.NET 4.5) that work correctly when executed sequentially. I am running the calculations on a multi-core server, and since there are millions of inputs to perform the same calculation on, I am hoping to take advantage of parallelism to speed it up.
The calculations are extremely data-parallel: basically the exact same calculation on different input data. I have tried a number of different avenues, and I continually run into the same issue: it seems as if the parallel looping never gets to the end of the input data set. I have tried the TPL, ConcurrentQueues, and Parallel.Array.map/iter, all with the same result: the program starts out fine, and then somewhere in the middle (indeterminate) it just hangs and never completes. For simplicity I removed the calculation from the program and am just calling a print method. Here is where the code currently stands:
let runParallel =
    let ids = query { for c in db.CustTable do select c.id } |> Seq.take 5
    let customerInputArray = getAllObservations ids
    Array.Parallel.iter (fun c -> testParallel c) customerInputArray
    let key = System.Console.ReadKey()
    0
A few points...
- I limited the results above to only 5, just for debugging. The actual program does not apply the Seq.take 5.
- The testParallel method is just a printfn "test".
- The customerInputArray is a complex data type: a tuple of lists that contain records. So I am pretty sure my problem must be there... but I added exception handling and no exception is getting raised, so I have no idea how to go about finding the problem.
Any help is appreciated. Thanks in advance.
EDIT: Thanks for the advice... I think it is definitely a deadlock. When I remove all of the printfn, sprintfn, and string-concatenation operations, it completes (though, of course, I need those things in there).
Are printfn, sprintfn, and string operations not thread-safe?
Another EDIT: Iteration always stops on the last item. So if my input array has 15 items, the processing stops on item 14, or seems to never get to item 15, and then everything just hangs. It does not matter what the size of the input array is. Any ideas what could be causing this? I even switched over to Parallel.ForEach (instead of Array.Parallel) and saw the same behavior.
Update on the situation and how I resolved this issue.
I was unable to upload code from my example due to my company's firewall policy, so in the end my question did not have enough detail. I failed to mention that I was using a type provider, which was important information in this situation. But here is what I figured out.
I was using the F# type provider for SQL Server and passing around its service types, which I suspect are not thread-safe. When I replaced the service types with plain old F# records, the code worked fine: no more deadlocks, and everything completed without error. A sketch of that change is below.
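A minimal sketch of that fix (the record fields and the provided row type are hypothetical):

// A plain, immutable F# record to replace the provider's service type
type CustomerObs = { Id : int; Name : string; Amount : decimal }

// Copy each provided row into a plain record before going parallel
let toRecord (row : ServiceTypes.CustomerRow) =   // ServiceTypes.CustomerRow is hypothetical
    { Id = row.Id; Name = row.Name; Amount = row.Amount }

let customerInputArray =
    getAllObservations ids
    |> Array.map toRecord                         // materialize on a single thread

customerInputArray
|> Array.Parallel.iter (fun c -> testParallel c)  // workers now touch only plain records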

Is there a limit to the number of parameters in a TStoredProc?

Is there a limit to either the number of params or to the overall size of the params in a TStoredProc ExecProc call?
We are currently running a system that still uses the BDE to connect to Oracle, and a recent change to the number of parameters of a package procedure has started producing access violations. The param count is now up to 291, and the AV is raised in the ExecProc call of TStoredProc.
If we remove a single param from the list (any param; it does not have to be a specific one), the ExecProc call works fine.
I have debugged through the code, and the access violation is thrown within the TStoredProc.BindParams procedure in DBTables.pas. I have several watches set up, one of which is SizeOf(FRecordBuffer); as I step through this procedure, its value is 65535, which is MaxWord (Windows.pas). I don't see any specified limits within the DBTables code.
The call stack is TStoredProc.ExecProc -> TStoredProc.CreateCursor -> TStoredProc.GetCursor -> TStoredProc.BindParams, and the access violation is thrown in the for loop that iterates through the FParams.
Thanks in advance; we need to find something we can pinpoint so we can steer clear of it.
I'm not at all versed in Oracle SQL, but since you're maintaining the thing, I would see if I could replace the call with all those parameters by a single insert into a new dedicated table (with that many columns plus an autonumber primary key), and change the stored procedure to take this key as its only input and read the values from that new record to do its job. That may well be quicker than finding out what the exact maximum number of parameters is and trying to find a fix there. (Though it's a bit of a strange number for a limit, as in not a power of 2, it may well be 291...) A sketch of this refactor is below.
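A minimal sketch of that refactor in Oracle SQL. All names are hypothetical, and the identity column assumes Oracle 12c or later (on older versions, use a sequence and a trigger for the autonumber key):

-- One column per former parameter, plus the autonumber key
CREATE TABLE proc_inputs (
  input_id NUMBER GENERATED BY DEFAULT AS IDENTITY PRIMARY KEY,
  param_1  VARCHAR2(100),
  param_2  NUMBER
  -- ... and so on, one column for each of the 291 former parameters
);

-- The procedure now takes only the key and reads everything else itself
CREATE OR REPLACE PROCEDURE do_work(p_input_id IN NUMBER) AS
  v_in proc_inputs%ROWTYPE;
BEGIN
  SELECT * INTO v_in FROM proc_inputs WHERE input_id = p_input_id;
  -- ... original procedure body, using v_in.param_1, v_in.param_2, ...
END do_work;
/

On the Delphi side, TStoredProc then binds a single numeric parameter, comfortably below whatever limit BindParams is hitting.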
