Yesterday I had a stub problem that turned out to be an ActiveStorage problem. I work on a system that stores cached data with ActiveStorage. We are using Rails 5.2.3 with the multiverse gem.
I want to load a specific file in the setup of a request spec.
So I create a user object:
user = create :user, name: 'bob'
using this factory:
FactoryBot.define do
  factory :user do
    name { 'Robert' }
    email { 'robert@email.com' }
  end
end
and then try to attach a file to it like so:
user.cached_data.attach(
  io: File.open(Rails.root.join('spec', 'support', 'sample_data.json')),
  filename: 'sample_data.json',
  content_type: 'application/json'
)
My spec hangs there; it doesn't even start, and I get this error:
ActiveRecord::LockWaitTimeout:
Mysql2::Error::TimeoutError: Lock wait timeout exceeded; try restarting transaction: INSERT INTO `active_storage_attachments` (`name`, `record_type`, `record_id`, `blob_id`, `created_at`) VALUES ('cached_data', 'User', 102, 60, '2019-08-22 16:24:07')
One of my coworkers told me that removing config.use_transactional_fixtures = true from my rails_helper would make the transaction problem go away, but I want to keep that line, which seems like a healthy standard to me.
EDIT 26/08/19
I used the MySQL console to run the SHOW ENGINE INNODB STATUS; command while the lock was being held. From the output I got this list of transactions:
------------
TRANSACTIONS
------------
Trx id counter 295543
Purge done for trx's n:o < 295541 undo n:o < 0 state: running but idle
History list length 66
LIST OF TRANSACTIONS FOR EACH SESSION:
---TRANSACTION 295531, not started
MySQL thread id 26, OS thread handle 0x700008449000, query id 214 localhost db_user
---TRANSACTION 0, not started
MySQL thread id 20, OS thread handle 0x7000083fc000, query id 251 localhost root init
SHOW ENGINE INNODB STATUS
---TRANSACTION 295427, not started
MySQL thread id 1, OS thread handle 0x7000083af000, query id 0 Waiting for requests
---TRANSACTION 295542, ACTIVE 40 sec
2 lock struct(s), heap size 360, 1 row lock(s), undo log entries 1
MySQL thread id 29, OS thread handle 0x700008530000, query id 242 localhost db_user
Trx #rec lock waits 0 #table lock waits 0
Trx total rec lock wait time 0 SEC
Trx total table lock wait time 0 SEC
---TRANSACTION 295541, ACTIVE 40 sec inserting
mysql tables in use 1, locked 1
LOCK WAIT 3 lock struct(s), heap size 1184, 1 row lock(s), undo log entries 1
MySQL thread id 28, OS thread handle 0x7000084e3000, query id 247 localhost db_user update
INSERT INTO `active_storage_attachments` (`name`, `record_type`, `record_id`, `blob_id`, `created_at`) VALUES ('cached_data', 'User', 99, 55, '2019-08-26 10:08:49')
Trx read view will not see trx with id >= 295542, sees < 295532
Trx #rec lock waits 1 #table lock waits 0
Trx total rec lock wait time 0 SEC
Trx total table lock wait time 0 SEC
------- TRX HAS BEEN WAITING 40 SEC FOR THIS LOCK TO BE GRANTED:
RECORD LOCKS space id 5557 page no 3 n bits 96 index `PRIMARY` of table `test`.`active_storage_blobs` trx table locks 2 total table locks 2 trx id 295541 lock mode S locks rec but not gap waiting lock hold time 40 wait time before grant 0
------------------
---TRANSACTION 295532, ACTIVE 40 sec
2 lock struct(s), heap size 360, 0 row lock(s), undo log entries 3
MySQL thread id 27, OS thread handle 0x700008496000, query id 243 localhost db_user
Trx #rec lock waits 0 #table lock waits 0
Trx total rec lock wait time 0 SEC
Trx total table lock wait time 0 SEC
I am trying to parse CLI output that has cascading elements using TextFSM and Python (reference: https://github.com/google/textfsm/wiki/TextFSM).
Using the example below, how can I get the 'CPU utilization' values for each Slot?
Routing Engine status:
Slot 0:
Current state Master
Election priority Master (default)
Temperature 39 degrees C / 102 degrees F
CPU temperature 55 degrees C / 131 degrees F
DRAM 2048 MB
Memory utilization 76 percent
CPU utilization:
User 95 percent
Background 0 percent
Kernel 4 percent
Interrupt 1 percent
Idle 0 percent
Model RE-4.0
Serial ID xxxxxxxxxxxx
Start time 2008-04-10 20:32:25 PDT
Uptime 180 days, 22 hours, 45 minutes, 20 seconds
Load averages: 1 minute 5 minute 15 minute
0.96 1.03 1.03
Routing Engine status:
Slot 1:
Current state Backup
Election priority Backup
Temperature 30 degrees C / 86 degrees F
CPU temperature 31 degrees C / 87 degrees F
DRAM 2048 MB
Memory utilization 14 percent
CPU utilization:
User 0 percent
Background 0 percent
Kernel 0 percent
Interrupt 1 percent
Idle 99 percent
Model RE-4.0
Serial ID xxxxxxxxxxxx
Start time 2008-01-22 07:32:10 PST
Uptime 260 days, 10 hours, 45 minutes, 39 seconds
Template
Value Required Slot (\d+)
Value State (\w+)
Value Temp (\d+)
Value CPUTemp (\d+)
Value DRAM (\d+)
Value User (\d+)
Value Background (\d+)
Value Kernel (\d+)
Value Interrupt (\d+)
Value Idle (\d+)
Value Model (\S+)
Start
^Routing Engine status: -> Record RESlot
^\s+CPU utilization: -> Record SUBRESlot
RESlot
^\s+Slot\s+${Slot}
^\s+Current state\s+${State}
^\s+Temperature\s+${Temp} degrees
^\s+CPU temperature\s+${CPUTemp} degrees
^\s+DRAM\s+${DRAM} MB
^\s+Model\s+${Model} -> Start
SUBRESlot
^\s+User\s+${User}\s+percent
^\s+backgroud\s+${Background}\s+percent
^\s+Kernel\s+${Kernel}\s+percent
^\s+Interrupt\s+${Interrupt}\s+percent
^\s+Idle\s+${Idle}\s+percent -> Start
Output:
Slot, State, Temp, CPUTemp, DRAM, User, Background, Kernel, Interrupt, Idle, Model
0, Master, 39, 55, 2048, , , , , , RE-4.0
1, Backup, 30, 31, 2048, , , , , , RE-4.0
As you can see, the CPU utilization elements are not getting populated.
I would really appreciate any pointers.
I believe there are two mistakes in your template:
You should Record only once you have all the data.
The Model value you are trying to match in the last line of the RESlot state can only be matched after the CPU utilization section of the input has gone by. Please note that TextFSM parses the input line by line.
(There is also a typo in your SUBRESlot state: backgroud instead of Background, so that value can never match.)
You can use the template below to get your data:
Value Required Slot (\d+)
Value State (\w+)
Value Temp (\d+)
Value CPUTemp (\d+)
Value DRAM (\d+)
Value User (\d+)
Value Background (\d+)
Value Kernel (\d+)
Value Interrupt (\d+)
Value Idle (\d+)
Value Model (\S+)
Start
^Routing Engine status: -> RESlot
^\s+CPU utilization: -> SUBRESlot
RESlot
^\s+Slot\s+${Slot}
^\s+Current state\s+${State}
^\s+Temperature\s+${Temp} degrees
^\s+CPU temperature\s+${CPUTemp} degrees
^\s+DRAM\s+${DRAM} MB -> Start
SUBRESlot
^\s+User\s+${User}\s+percent
^\s+Background\s+${Background}\s+percent
^\s+Kernel\s+${Kernel}\s+percent
^\s+Interrupt\s+${Interrupt}\s+percent
^\s+Idle\s+${Idle}\s+percent -> SUBModel
SUBModel
^\s+Model\s+${Model} -> Record Start
Result:
Slot, State, Temp, CPUTemp, DRAM, User, Background, Kernel, Interrupt, Idle, Model
0, Master, 39, 55, 2048, 95, 0, 4, 1, 0, RE-4.0
1, Backup, 30, 31, 2048, 0, 0, 0, 1, 99, RE-4.0
I ran the command siege -c50 -d10 -t3M http://www.example.com with both version 3.0.8 and version 4.0.4 and got totally different results. Can anyone explain why the values differ between these versions?
In version 4.0.4
Transactions: 1033 hits
Availability: 100.00 %
Elapsed time: 179.47 secs
Data transferred: 26.31 MB
Response time: 8.45 secs
Transaction rate: 5.76 trans/sec
Throughput: 0.15 MB/sec
Concurrency: 48.63
Successful transactions: 1033
Failed transactions: 0
Longest transaction: 72.85
Shortest transaction: 3.65
In version 3.0.8
Transactions: 133 hits
Availability: 100.00 %
Elapsed time: 179.08 secs
Data transferred: 27.59 MB
Response time: 50.95 secs
Transaction rate: 0.74 trans/sec
Throughput: 0.15 MB/sec
Concurrency: 37.84
Successful transactions: 133
Failed transactions: 0
Longest transaction: 141.14
Shortest transaction: 8.34
Thank You.
HTML parsing was added in version 4.0.0 and is enabled by default. It makes additional requests for page resources such as style sheets, images, and JavaScript, which is why version 4.x reports many more transactions for the same run.
You can enable or disable this feature in the siege.conf file by setting the value of parser to true or false.
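As a sketch, the relevant siege.conf line looks like this (the file's location varies by install, e.g. ~/.siege/siege.conf or a system-wide siegerc):

```
# Disable HTML parsing so siege 4.x issues one request per URL,
# like siege 3.x did (no extra requests for page assets)
parser = false
```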
I've got some data structured like this:
select * from rules where time > now() - 1m limit 5
name: rules
time ackrate consumers deliverrate hostname publishrate ready redeliverrate shard unacked version
---- ------- --------- ----------- -------- ----------- ----- ------------- ----- ------- -------
1513012628943000000 864 350 861.6 se-rabbit14 975.8 0 0 14 66 5
1513012628943000000 864.8 350 863 se-rabbit9 920.8 0 0 09 64 5
1513012628943000000 859.8 350 860.2 se-rabbit8 964.2 0 0 08 58 5
1513012628943000000 864.8 350 863.6 se-rabbit16 965.4 0 0 16 64 5
1513012631388000000 859.8 350 860.2 se-rabbit8 964.2 0 0 08 58 5
I want to calculate the percentage of 'up-time' defined as the amount of time when the queue has no ready messages.
I can get the maximum number of ready messages in each minute:
select max(ready) from rules where time > now() - 1h group by time(1m) limit 5
name: rules
time max
---- ---
1513009560000000000 0
1513009620000000000 0
1513009680000000000 0
1513009740000000000 0
1513009800000000000 0
Using a sub-query, I can select only the minutes that have ready values:
select ready from (select max(ready) as ready from rules where time > now() - 1h group by time(1m)) where ready > 0
name: rules
time ready
---- -----
1513010520000000000 49
1513013280000000000 57
I wanted to get a count of these values and then, with a bit of math, calculate a percentage. In this case, with 2 results in the last hour:
(60 minutes - 2 minutes) / 60 minutes ≈ 96.7%
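The arithmetic can be sketched in a few lines (the values are hardcoded from this example; in practice the backlog count would come from the subquery above):

```python
# Uptime: fraction of 1-minute buckets in the window with zero ready messages.
window_minutes = 60          # one hour, grouped into 1-minute buckets
minutes_with_backlog = 2     # count returned by the subquery

uptime_pct = (window_minutes - minutes_with_backlog) / window_minutes * 100
print(f"{uptime_pct:.1f}%")  # -> 96.7%
```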
When I try to count this though, I get no response.
select count(ready) from (select max(ready) as ready from rules where time > now() - 1h group by time(1m)) where ready > 0
This is v1.2.2.
How can I return a count of the number of results?
The solution was simply to upgrade from v1.2.2 to v1.3.8. With the later version:
select count(ready) from (select max(ready) as ready from rules where time > now() - 1h group by time(1m)) where ready > 0
name: rules
time count
---- -----
0 6
I am running a simple insertion query in Enterprise version neo4j-3.1.3 in the Neo4j Browser. The very first insertion takes 410 ms; subsequent insertions drop to about 4 ms.
CQL:
create (n:City {name:"Trichy", lat:50.25, lng:12.21});
//Execution Time : Completed after 410 ms.
Even fetching a single node takes a long time.
My CQL Query:
MATCH (n:City) RETURN n LIMIT 25
//Execution time : Started streaming 1 record after 105 ms and completed after 111 ms
I have allotted: dbms.memory.pagecache.size=15g
System:
total used free shared buffers cached
Mem: 11121 8171 2950 611 956 2718
-/+ buffers/cache: 4496 6625
Swap: 3891 0 3891
Why does a single insertion take so much time? Even 4 ms seems too costly for a single insertion with minimal properties. Fetching also takes much time.
INSERT OVERWRITE TABLE result
SELECT /*+ STREAMTABLE(product) */
i.IMAGE_ID,
p.PRODUCT_NO,
p.STORE_NO,
p.PRODUCT_CAT_NO,
p.CAPTION,
p.PRODUCT_DESC,
p.IMAGE1_ID,
p.IMAGE2_ID,
s.STORE_ID,
s.STORE_NAME,
p.CREATE_DATE,
CASE WHEN custImg.IMAGE_ID is NULL THEN 0 ELSE 1 END,
CASE WHEN custImg1.IMAGE_ID is NULL THEN 0 ELSE 1 END,
CASE WHEN custImg2.IMAGE_ID is NULL THEN 0 ELSE 1 END
FROM image i
JOIN PRODUCT p ON i.IMAGE_ID = p.IMAGE1_ID
JOIN PRODUCT_CAT pcat ON p.PRODUCT_CAT_NO = pcat.PRODUCT_CAT_NO
JOIN STORE s ON p.STORE_NO = s.STORE_NO
JOIN STOCK_INFO si ON si.STOCK_INFO_ID = pcat.STOCK_INFO_ID
LEFT OUTER JOIN CUSTOMIZABLE_IMAGE custImg ON i.IMAGE_ID = custImg.IMAGE_ID
LEFT OUTER JOIN CUSTOMIZABLE_IMAGE custImg1 ON p.IMAGE1_ID = custImg1.IMAGE_ID
LEFT OUTER JOIN CUSTOMIZABLE_IMAGE custImg2 ON p.IMAGE2_ID = custImg2.IMAGE_ID;
I have a query that joins huge tables, and I am trying to optimize it. Here are some facts about the tables:
image table has 60m rows,
product table has 1b rows,
product_cat has 1000 rows,
store has 1m rows,
stock_info has 100 rows,
customizable_image has 200k rows.
A product can have one or two images (image1 and image2), and product-level information is stored only in the product table. I tried moving the join with product to the bottom, but I couldn't, as all the following joins require data from the product table.
Here is what I have tried so far:
1. I gave Hive the hint to stream the product table, since it is the biggest one.
2. I bucketed the table into 256 buckets on image_id (during create table) and then did the join; it didn't give me any significant performance gain.
3. I changed the input format from text file (gzip files) to sequence file, so that it is splittable and Hive can run more mappers if it wants to.
Here are some key logs from the Hive console. I ran this query on AWS. Can anyone help me understand the primary bottleneck here? This job is only processing a subset of the actual data.
Stage-14 is selected by condition resolver.
Launching Job 1 out of 11
Number of reduce tasks not specified. Estimated from input data size: 22
In order to change the average load for a reducer (in bytes):
set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
set mapred.reduce.tasks=<number>
Kill Command = /home/hadoop/bin/hadoop job -kill job_201403242034_0001
Hadoop job information for Stage-14: number of mappers: 341; number of reducers: 22
2014-03-24 20:55:05,709 Stage-14 map = 0%, reduce = 0%
.
2014-03-24 23:26:32,064 Stage-14 map = 100%, reduce = 100%, Cumulative CPU 34198.12 sec
MapReduce Total cumulative CPU time: 0 days 9 hours 29 minutes 58 seconds 120 msec
.
2014-03-25 00:33:39,702 Stage-30 map = 100%, reduce = 100%, Cumulative CPU 20879.69 sec
MapReduce Total cumulative CPU time: 0 days 5 hours 47 minutes 59 seconds 690 msec
.
2014-03-26 04:15:25,809 Stage-14 map = 100%, reduce = 100%, Cumulative CPU 3903.4 sec
MapReduce Total cumulative CPU time: 0 days 1 hours 5 minutes 3 seconds 400 msec
.
2014-03-26 04:25:05,892 Stage-30 map = 100%, reduce = 100%, Cumulative CPU 2707.34 sec
MapReduce Total cumulative CPU time: 45 minutes 7 seconds 340 msec
.
2014-03-26 04:45:56,465 Stage-2 map = 100%, reduce = 100%, Cumulative CPU 3901.99 sec
MapReduce Total cumulative CPU time: 0 days 1 hours 5 minutes 1 seconds 990 msec
.
2014-03-26 04:54:56,061 Stage-26 map = 100%, reduce = 100%, Cumulative CPU 2388.71 sec
MapReduce Total cumulative CPU time: 39 minutes 48 seconds 710 msec
.
2014-03-26 05:12:35,541 Stage-4 map = 100%, reduce = 100%, Cumulative CPU 3792.5 sec
MapReduce Total cumulative CPU time: 0 days 1 hours 3 minutes 12 seconds 500 msec
.
2014-03-26 05:34:21,967 Stage-5 map = 100%, reduce = 100%, Cumulative CPU 4432.22 sec
MapReduce Total cumulative CPU time: 0 days 1 hours 13 minutes 52 seconds 220 msec
.
2014-03-26 05:54:43,928 Stage-21 map = 100%, reduce = 100%, Cumulative CPU 6052.96 sec
MapReduce Total cumulative CPU time: 0 days 1 hours 40 minutes 52 seconds 960 msec
MapReduce Jobs Launched:
Job 0: Map: 59 Reduce: 18 Cumulative CPU: 3903.4 sec HDFS Read: 37387 HDFS Write: 12658668325 SUCCESS
Job 1: Map: 48 Cumulative CPU: 2707.34 sec HDFS Read: 12658908810 HDFS Write: 9321506973 SUCCESS
Job 2: Map: 29 Reduce: 10 Cumulative CPU: 3901.99 sec HDFS Read: 9321641955 HDFS Write: 11079251576 SUCCESS
Job 3: Map: 42 Cumulative CPU: 2388.71 sec HDFS Read: 11079470178 HDFS Write: 10932264824 SUCCESS
Job 4: Map: 42 Reduce: 12 Cumulative CPU: 3792.5 sec HDFS Read: 10932405443 HDFS Write: 11812454443 SUCCESS
Job 5: Map: 45 Reduce: 13 Cumulative CPU: 4432.22 sec HDFS Read: 11812679475 HDFS Write: 11815458945 SUCCESS
Job 6: Map: 42 Cumulative CPU: 6052.96 sec HDFS Read: 11815691155 HDFS Write: 0 SUCCESS
Total MapReduce CPU Time Spent: 0 days 7 hours 32 minutes 59 seconds 120 msec
OK
The query still takes longer than 5 hours in Hive, whereas in an RDBMS it takes only 5 hours. I need some help optimizing this query so that it executes much faster. Interestingly, when I ran the task with 4 large core instances, the time taken improved by only 10 minutes compared to the run with 3 large core instances, but when I ran it with 3 medium cores, it took 1 hour 10 minutes more.
This brings me to the question: is Hive even the right choice for such complex joins?
I suspect the bottleneck is just the sort of your product table, since it seems much larger than the others. I think joins in Hive become untenable for tables over a certain size, simply because they require a sort.
There are parameters to optimize sorting, like io.sort.mb, which you can try raising so that more of the sort occurs in memory rather than spilling to disk, being re-read, and re-sorted. Look at the number of spilled records and see if it is much larger than your inputs. There are a variety of ways to optimize sorting. It might also help to break your query into multiple subqueries so it doesn't have to sort as much at one time.
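For example (the buffer size here is illustrative and must fit within your task heap), the sort buffer can be raised per session from the Hive CLI:

```
-- Give each map task a larger in-memory sort buffer (in MB),
-- so less data spills to disk during the sort
SET io.sort.mb=512;
-- on newer Hadoop releases the equivalent property is mapreduce.task.io.sort.mb
```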
For the stock_info and product_cat tables, you could probably keep them in memory since they are so small; check out the distributed_map UDF in Brickhouse (https://github.com/klout/brickhouse/blob/master/src/main/java/brickhouse/udf/dcache/DistributedMapUDF.java). For the customizable-image tables, you might be able to use a bloom filter, if having a few false positives is not a big problem.
To remove the join completely, perhaps you could store the image info in a key-value store like HBase and do lookups instead. Brickhouse also has UDFs for HBase, like hbase_get and hbase_cached_get.