Rails app taking longer than 30 seconds for mongo query - ruby-on-rails

I have a MongoDB on a remote server with a collection named con_v; here is a preview of one document:
{
  "_id" : ObjectId("xxxxxxx"),
  "complete" : false,
  "far" : "something",
  "input" : {
    "number_of_apples" : 2,
    "type_of_apple" : "red"
  },
  "created_at" : ISODate("xxxxx"),
  "error" : false,
  "version_id" : ObjectId("someID"),
  "transcript" : "\nUser: \nScript: hello this is a placeholder script.\nScript: how many apples do you want?\n",
  "account_id" : null,
  "channel_id" : ObjectId("some channel ID"),
  "channel_mode_id" : ObjectId("some channel mode ID"),
  "units" : 45,
  "updated_at" : ISODate("xxxx")
}
I am using a Rails app to query the remote MongoDB. Fetching 20-50 records responds fine, but I need more than 1000 records at a time. I have a function that creates a CSV file from those records; it used to work fine, but now it takes much longer than usual and freezes the server (there are around 30K records in total). Directly through the mongo shell, or from the Rails console, the query takes no time at all. Likewise, if I run the app against a cloned copy of the DB on my local machine, it works fine.
Here is the code in the model file that queries the records and generates the CSV:
def self.to_csv(con_v)
  version = con_v.first.version
  CSV.generate do |csv|
    fields = version.input_set_field_names
    extra_fields = ['Collected At', 'Far', 'Name', 'Version', 'Completed']
    csv << fields + extra_fields
    con_v.each do |con|
      values = con.input.values_at(*fields)
      extra_values = [
        con.created_at,
        con.units,
        con.far,
        version.name,
        con.complete
      ]
      csv << values + extra_values
    end
  end
end
In a nutshell: the app is slow against the remote DB (very slow mongo query) but works fine with the local DB. I debugged with pry; the controller code is fine and it is getting the records, the response time is just slow on the remote.
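One thing worth trying (a sketch, not a confirmed fix): iterate the criteria with a no-timeout cursor in smaller batches, and select only the fields the CSV needs, so all 30K documents are not pulled across the remote connection at once. batch_size, no_timeout, and only are standard Mongoid criteria methods; the field list below is assumed from the sample document above.

# Sketch: stream the remote query in batches instead of loading everything eagerly.
# Assumes con_v is a Mongoid::Criteria; adjust the selected fields to your model.
scoped = con_v.only(:input, :created_at, :units, :far, :complete, :version_id)
              .batch_size(500)  # fetch 500 documents per round trip
              .no_timeout       # keep the server-side cursor alive during the long export

CSV.generate do |csv|
  scoped.each do |con|
    # ... build each row exactly as in to_csv above ...
  end
end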

Related

Get Metadata Multiple Source File System Into Azure SQL Table

I have multiple folders and files in a FileSystem (linked service) on Azure Data Factory, and my activity follows this article: https://www.sqlservercentral.com/articles/working-with-get-metadata-activity-in-azure-data-factory
For now I'm processing the FileName and LastModified metadata per file, and then I'm calling a stored procedure from ADF like this:
ALTER PROCEDURE [dbo].[SP_FileSystemMonitoring]
(
    -- Add the parameters for the stored procedure here
    @FLAG int,
    @FILE_NAME nvarchar(100),
    @LAST_MODIFIED datetime
)
AS
BEGIN
    -- SET NOCOUNT ON added to prevent extra result sets from
    -- interfering with SELECT statements.
    SET NOCOUNT ON

    -- Insert statements for procedure here
    IF ( @FILE_NAME IS NOT NULL )
    BEGIN
        UPDATE [dwh].[FileSystemMonitoring]
        SET    STATUS        = @FLAG,
               PROCESS_DATE  = DATEADD(HH, 7, GETDATE()),
               REPORT_DATE   = DATEADD(hh, 7, (DATEADD(dd, -1, GETDATE()))),
               LAST_MODIFIED = @LAST_MODIFIED
        WHERE  FILE_NAME     = @FILE_NAME
    END
END
But I want one activity that can get the metadata for a whole folder and then insert every file in that folder into the Azure SQL Database, for example:
folderA/file1.txt
folderA/file2.txt
so that the Azure SQL table looks like this:
--------------------------
File_Name | Last_Modified
--------------------------
file1.txt | 2021-12-19 13:45:56
file2.txt | 2021-12-18 10:23:32
I have no idea how to do this, because I'm confused about how to map the sink to the Azure SQL table. Thanks in advance.
I'm a little confused by your question: do you want to get the details of the file or folder from the Get Metadata activity, or do you want to enumerate/store the child items of a root folder?
If you simply want to reference the items from Get Metadata, add a dynamic expression that navigates the output value to the JSON property you seek. For example:
@activity('Get Metadata Activity Name').output.lastModified
@activity('Get Metadata Activity Name').output.itemName
You can pass each of the above expressions as values to your stored procedure parameters. NOTE: 'Get Metadata Activity Name' should be replaced with the name of your activity.
The output JSON of this activity is like so and will grow depending on what you select to return in the Get Metadata activity. In my example I'm also including childItems.
{
    "exists": true,
    "lastModified": "2021-03-04T14:00:01Z",
    "itemName": "some-container-name",
    "itemType": "Folder",
    "childItems": [{
            "name": "someFilePrefix_1640264640062_24_12_2021_1640264640.csv",
            "type": "File"
        }, {
            "name": "someFilePrefix_1640286000083_24_12_2021_1640286000.csv",
            "type": "File"
        }
    ],
    "effectiveIntegrationRuntime": "DefaultIntegrationRuntime (Australia Southeast)",
    "executionDuration": 0,
    "durationInQueue": {
        "integrationRuntimeQueue": 0
    },
    "billingReference": {
        "activityType": "PipelineActivity",
        "billableDuration": [{
            "meterType": "AzureIR",
            "duration": 0.016666666666666666,
            "unit": "Hours"
        }]
    }
}
If you want to store the child files, you can pass childItems into your stored procedure as an nvarchar JSON value and then enumerate the JSON array in SQL.
Alternatively, you can stay in ADF and enumerate the same childItems property with a ForEach activity. You simply iterate over:
@activity('Get Metadata Activity Name').output.childItems
You can then call the SP for each file referencing the nested item as:
@item().name
You'll also still be able to reference any of the root parameters from the original get metadata activity within the ForEach activity.
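For the SQL route, a minimal sketch (my illustration, not from the original answer; it assumes SQL Server 2016+ for OPENJSON, and the procedure, table, and column names are placeholders):

-- Hypothetical procedure: receives the childItems array as raw JSON text
-- and inserts one row per file. Table and column names are assumptions.
CREATE PROCEDURE [dbo].[SP_InsertChildItems]
(
    -- pass @string(activity('Get Metadata Activity Name').output.childItems) from ADF
    @CHILD_ITEMS nvarchar(max)
)
AS
BEGIN
    SET NOCOUNT ON;

    INSERT INTO [dwh].[FileSystemMonitoring] (FILE_NAME)
    SELECT [name]
    FROM OPENJSON(@CHILD_ITEMS)
         WITH ([name] nvarchar(400) '$.name',
               [type] nvarchar(50)  '$.type')
    WHERE [type] = 'File';
END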

Can I generate migration seeds from an SQL script?

I'm using SQL Server and .NET Core to create the backend for my project, and I have many tables with a lot of data.
I was wondering: is there a way to generate seeds to use in my migration from the existing DB tables?
For example, I want to generate this from the table FamilyMemberPrivileges:
modelBuilder.Entity<FamilyMemberPrivileges>().HasData(
    new FamilyMemberPrivileges
    {
        Id = 1,
        Name = "full control"
    },
    new FamilyMemberPrivileges
    {
        Id = 2,
        Name = "control over self"
    },
    new FamilyMemberPrivileges
    {
        Id = 3,
        Name = "read-only"
    }
);
I have searched everywhere for this; maybe it doesn't work like that, but there's no harm in asking!
Also, if this is not possible, is there an easier way to do this than writing the seeds myself?
Thanks!
You can write a SQL statement that returns C# code and run it in SSMS. An example would be:
select 'new FamilyMemberPrivileges{ Id ='+ convert(varchar(10), [Id] )+ ', Name="'+ [Name] + '"},'
from dbo.FamilyMemberPrivileges
The result will look like this
-------------------------------------------------------------------------------
new FamilyMemberPrivileges{ Id =1, Name="Full Control"},
new FamilyMemberPrivileges{ Id =2, Name="Control Over Self"},
(2 rows affected)
Then copy and paste the result into your code.
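One caveat (my note, not from the original answer): if Name values can contain double quotes, the generated C# string literals will break. A slightly more defensive variant escapes them:

-- Escape embedded double quotes so the generated C# literals stay valid
select 'new FamilyMemberPrivileges{ Id = ' + convert(varchar(10), [Id])
     + ', Name = "' + replace([Name], '"', '\"') + '"},'
from dbo.FamilyMemberPrivileges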

bitcoin transaction block height

Hi, I've noticed on blockchain.info, blockr.io, and other block explorers that when checking a transaction (not my own wallet's transaction) I can see a "block_height" value in the response, which can be used to count the transaction's confirmations as block_count - block_height.
I have my own Bitcoin node running with -txindex enabled, and I added txindex=1 to the conf as well.
But when using "bitcoin-cli decoderawtransaction", that parameter is never there.
How do I turn it on? Or is it custom-made code?
bitcoind runs under Ubuntu 14.04 x64, version 0.11.0.
I disabled the wallet function and installed using https://github.com/spesmilo/electrum-server/blob/master/HOWTO.md
The decoderawtransaction command just decodes the transaction, that is, it makes the transaction human readable.
Any other (though useful) information which is not related to the raw structure of a transaction is not shown.
If you need further info, you might use getrawtransaction <tx hash> 1, which returns both the result of decoderawtransaction and some additional info, such as:
bitcoin-cli getrawtransaction 6263f1db6112a21771bb44f242a282d04313bbda5ed8b517489bd51aa48f7367 1
-> {
       "hex" : "010000000...0b00",
       "txid" : "6263f1db6112a21771bb44f242a282d04313bbda5ed8b517489bd51aa48f7367",
       "version" : 1,
       "locktime" : 723863,
       "vin" : [
           {...}
       ],
       "vout" : [
           {...}
       ],
       "blockhash" : "0000000084b830792477a0955eee8081e8074071930f7340ff600cc48c4f724f",
       "confirmations" : 4,
       "time" : 1457383001,
       "blocktime" : 1457383001
   }
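If you specifically want the block height (to compute confirmations yourself the way the explorers do), a sketch using the blockhash from the verbose output above: getblock returns the block's "height" field, and getblockcount returns the current tip.

# height of the block containing the tx (see the "height" field in the output)
bitcoin-cli getblock 0000000084b830792477a0955eee8081e8074071930f7340ff600cc48c4f724f

# current chain height
bitcoin-cli getblockcount

# confirmations = getblockcount - block height + 1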

mongodb get count of orders in each zip code

Hi, we are using Rails with Mongo, and we have a collection called orders which contains a specific zip code for each customer. We want to retrieve a count of how many orders belong to each zip code, and in what time frame they appear most.
The docs can have basic information such as :
{"zip" : 60010,
"order_time" : <whatever the time for the order is>
}
Right now I have an aggregation using Ruby:
coll_orders.aggregate([{"$group" => {"_id" => "$zip", "numOrders" => {"$sum" => 1}}}, {"$sort" => {"numOrders" => 1}}])
and it results in:
{
  "_id" : 60010,
  "numOrders" : 55
}
My question is: how do I add functionality to the aggregation so that I can get additional fields showing a breakdown of when the orders usually happen? Essentially a result document like:
{"_id" : 60010,
"numOrders" : 55,
"morning" : 25,
"afternoon" : 10,
"evening" : 20
}
The best way to approach this is to calculate and store the minutes separately using a $project phase, and then use that to group by. I have provided a sample query below that will run in the shell; it calculates just the orders placed in the morning. You can extend it for the results you are expecting.
db.foo.aggregate([
    { $project: {
        zip: 1,
        order_time: 1,
        minutes: { $add: [
            { $multiply: [ { $hour: '$order_time' }, 60 ] },
            { $minute: '$order_time' }
        ] }
    } },
    { $group: {
        _id: "$zip",
        numOrders: { $sum: 1 },
        morning: { $sum: { $cond: [ { $lt: [ "$minutes", 12 * 60 ] }, 1, 0 ] } }
    } }
]);
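Extending the same pattern to the afternoon and evening buckets from the question (the noon and 5pm boundaries are assumptions, adjust as needed):

db.foo.aggregate([
    { $project: {
        zip: 1,
        minutes: { $add: [
            { $multiply: [ { $hour: '$order_time' }, 60 ] },
            { $minute: '$order_time' }
        ] }
    } },
    { $group: {
        _id: "$zip",
        numOrders: { $sum: 1 },
        // morning: before 12:00
        morning:   { $sum: { $cond: [ { $lt: [ "$minutes", 12 * 60 ] }, 1, 0 ] } },
        // afternoon: 12:00 to 16:59
        afternoon: { $sum: { $cond: [ { $and: [ { $gte: [ "$minutes", 12 * 60 ] },
                                                { $lt:  [ "$minutes", 17 * 60 ] } ] }, 1, 0 ] } },
        // evening: 17:00 onwards
        evening:   { $sum: { $cond: [ { $gte: [ "$minutes", 17 * 60 ] }, 1, 0 ] } }
    } },
    { $sort: { numOrders: 1 } }
]);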
You can use the following mongoid queries to fetch the data you need:
Count in each zip code:
Order.where(:zip => zip).count
And for count in a given time slot (you will need to create separate queries for each time slot - morning, afternoon, etc.):
Order.where(:zip => zip, :order_time => {'$gt' => start_time, '$lt' => end_time}).count
You can either keep inserting each unique zip code into a separate collection to get a list of all zip codes, or, if you don't run this task often (and your collection is small), you can get a list of all the unique zip codes in your collection using:
zips = Order.all.map { |o| o.zip }.compact.uniq
and then iterate over this array, running the above queries.
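Putting those queries together (a sketch; the time-slot boundaries are assumptions, and orders_breakdown_by_zip is a hypothetical helper name):

# Hypothetical helper combining the mongoid queries above into one result hash per zip.
# Time-slot boundaries (noon / 5pm) are assumptions.
def orders_breakdown_by_zip(date)
  noon    = date.to_time.change(hour: 12)
  evening = date.to_time.change(hour: 17)
  zips    = Order.all.map { |o| o.zip }.compact.uniq

  zips.map do |zip|
    {
      zip:       zip,
      numOrders: Order.where(:zip => zip).count,
      morning:   Order.where(:zip => zip, :order_time => { '$lt'  => noon }).count,
      afternoon: Order.where(:zip => zip, :order_time => { '$gte' => noon, '$lt' => evening }).count,
      evening:   Order.where(:zip => zip, :order_time => { '$gte' => evening }).count
    }
  end
end

Note this issues four queries per zip code, so for anything beyond a small collection the aggregation approach above will be much faster.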

Rails 3: object.save is writing the old values to the database

I have code which updates a model's property and then calls save!. A Rails.logger.info call shows that the model thinks it has the new values, but the SQL write performed by the save! call writes the old value to the database.
At first it wasn't writing anything to the database at all when I called save!. I thought the object didn't realize its value had changed: changed? returned false, so I used a _will_change! notification to force a write. Now it does perform a write, but with the old values.
This doesn't happen from the "rails console" command line: there I'm able to update the property, changed? returns true, and it lets me save successfully.
Excerpt from the server log follows. Note that the object thinks it has log_ids of '1234,5678,1137', but writes to the database '1234,5678'.
current log ids are [1234, 5678]
new log ids are [1234, 5678, 1137]; writing log_ids of '1234,5678,1137' to NewsList 13 with dirty true
SQL (2.0ms) UPDATE "news_lists" SET "log_ids" = '1234,5678', "updated_at" = '2012-01-02 02:12:17.612283' WHERE ("news_lists"."id" = 13)
The object property in question is log_ids, which is a string containing several IDs of another kind of object.
The source code that produced the output above:
def add_log(new_log)
  new_ids = get_log_ids
  Rails.logger.info("current log ids are #{new_ids}")
  if new_ids.length >= NewsList.MAX_LENGTH
    new_ids.shift
  end
  log_ids_will_change!
  new_ids.push new_log.id
  log_ids = new_ids.join ","
  Rails.logger.info("new log ids are #{new_ids}; writing log_ids of '#{log_ids}' to NewsList #{id} with dirty #{changed?}")
  save!
end

def get_log_ids
  if log_ids
    log_ids.split(",").map &:to_i
  else
    []
  end
end
Can anyone suggest what might be going on here?
Add self to the assignment, i.e. self.log_ids = new_ids.join(","); otherwise you are just assigning to a local variable (a namesake) instead of the db-persisted attribute (column).
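In context, the fixed line looks like this (the rest of add_log is unchanged):

def add_log(new_log)
  # ...
  log_ids_will_change!
  new_ids.push new_log.id
  self.log_ids = new_ids.join(",")  # assigns the ActiveRecord attribute, not a local variable
  save!
end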
