Make Sphinx quiet (non-verbose) - ruby-on-rails

I'm using Sphinx through Thinking Sphinx in a Ruby on Rails project. When I create seed data, and at various other times, it's quite verbose, printing this:
using config file '/Users/pupeno/projectx/config/development.sphinx.conf'...
indexing index 'user_delta'...
collected 7 docs, 0.0 MB
collected 0 attr values
sorted 0.0 Mvalues, 100.0% done
sorted 0.0 Mhits, 99.6% done
total 7 docs, 159 bytes
total 0.042 sec, 3749.29 bytes/sec, 165.06 docs/sec
Sphinx 0.9.8.1-release (r1533)
Copyright (c) 2001-2008, Andrew Aksyonoff
for every record that is created, or thereabouts. Is there a way to suppress that output?

There is actually a setting to stop this - you'll want to set it at the end of your environment.rb file:
ThinkingSphinx.suppress_delta_output = true
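If you only want it quiet in some environments, you can guard the same setter; a minimal sketch (pre-v3 API, placed at the end of environment.rb):
# Silence delta indexing output everywhere except production (sketch).
ThinkingSphinx.suppress_delta_output = true unless Rails.env.production?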
In Thinking Sphinx v3 and newer, this has changed, and this setting is now managed through config/thinking_sphinx.yml. Repeat for each appropriate environment:
development:
quiet_deltas: true

Run Sphinx with the --quiet flag. I'm not using TS, though, so I don't know how to make TS pass this flag. HTH.

Related

Can the additional time it takes to execute the first instance of a query be eliminated in neo4j?

Whatever steps I take, the first execution of any query I make in Neo4j always takes longer than any subsequent execution of the same query. So I guess something other than the store is being cached.
I'm using the latest community container image for 3.5 (3.5.20 at the time of writing)
I have plenty of memory to cache absolutely everything if I want to
I'm using well documented warm-up strategies in order to (allegedly) prime the page cache
The database details...
I run CALL apoc.monitor.store(); and it tells me the size of each store: -
+------------------------------------------------------------------------------------------------------------+
| logSize | stringStoreSize | arrayStoreSize | relStoreSize | propStoreSize | totalStoreSize | nodeStoreSize |
+------------------------------------------------------------------------------------------------------------+
| 1224 | 148165607424 | 3016515584 | 26391839040 | 42701007672 | 241318778285 | 2430128610 |
+------------------------------------------------------------------------------------------------------------+
I run CALL apoc.warmup.run(true, true, true); (before running any queries). It takes about 15 minutes and displays a summary of what it's done. The text it outputs is not easily parsed in its raw form so I've summarised salient parts of it below. Basically it tells me the number of pages loaded for each store, and these are: -
nodePages 296,719
relPages 3,234,294
relGroupPages 4,580
propPages 5,233,608
stringPropPages 18,086,620
arrayPropPages 368,225
indexPages 2,235,227
---
Total 29,459,273
With a page size of 8,192 bytes, that is 29,459,273 × 8,192 ≈ 241 GB, i.e. approximately 225 GiB of pages for the displayed stores
I have enough physical memory and I have already set NEO4J_dbms_memory_pagecache_size to 250G
I set NEO4J_dbms_memory_heap_initial__size and NEO4J_dbms_memory_heap_max__size to 8G
So (allegedly) the page cache is "warm" and I have enough physical memory.
Query timings...
I run my query, which returns 1,813 records, and I execute the same query several times in order to illustrate the issue. I see the following (typical) timings: -
1. 1,821 ms
2. 75 ms
3. 60 ms
4. 51 ms
5. 48 ms
6. 42 ms
7. 38 ms
8. 36 ms
9. 36 ms
The actual values are dependent on the query but the first execution of every query is always significantly longer than the second.
ADDENDUM (16/Jul).
Just to be clear, using apoc.warmup.run does help.
If I don't use it, the first query is much longer still.
Having just restarted the DB (without a warm-up), the first query took 7,829 ms. The 2nd was 116 ms, the third 66 ms.
So, warm-up or not, the first query is always longer.
Question...
What's going on?
Can I do anything more to reduce the initial query time?
Oh, and using the query as the warm up is not the answer - I don't know what queries will be used
Not sure why apoc.warmup.run does not speed up your initial query, but you could just try using an initial query invocation as your "warmup" instead.

structure.sql file is modified on running migrations, things are added

After running 'rails db:migrate' my structure file is modified in a way that differs from my coworkers'. For example:
CREATE SEQUENCE public.foo_bar_seq
+ AS integer --<< this is added.
START WITH 1
INCREMENT BY 1
NO MINVALUE
- WHERE sometime > ((('now'::text)::date - '3 mons'::interval) - '8 days'::interval))
+ WHERE sometime > ((CURRENT_DATE - '3 mons'::interval) - '8 days'::interval)) -- becomes this
I can't figure it out, and it's highly annoying, as I have to commit line-by-line changes to my structure file whenever I add a migration. We are all using the same version of psql, 11.2.
Sometimes different versions of PostgreSQL running on your local machines change the structure dump as well: structure.sql is generated by pg_dump, so a different pg_dump or server version can serialize things differently (for example, emitting the sequence's AS integer clause, or printing CURRENT_DATE instead of ('now'::text)::date), even when everyone's psql client reports the same version.
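If you want to pin down which machine differs, it can help to print the pg_dump binary version and the server version on each machine. A minimal sketch (the task name and file are made up; assumes the standard PostgreSQL adapter and a loaded Rails environment):
# lib/tasks/pg_versions.rake (hypothetical)
desc "Show the pg_dump and PostgreSQL server versions behind structure.sql"
task :pg_versions => :environment do
  puts `pg_dump --version`  # the client binary Rails shells out to for structure.sql
  puts ActiveRecord::Base.connection.select_value("SHOW server_version")  # the server being dumped
end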

Ruby/Rails: Traverse folders and parse metadata to seed DB

I have a bunch of documents that I'd like to index in a Rails application. I'd like to use a rake task of sorts to comb a directory hierarchy looking for files and capturing the metadata from those files to index in Rails.
I'm not really sure how to do this in Ruby. I have found a utility called pdftk which can extract the metadata from the PDF files (much of what I'm indexing is PDFs), but I'm not sure how to capture the individual pieces of that data.
For example, to grab the ModDate or each BookmarkTitle and BookmarkPageNumber below.
Specifically, I want to traverse a file hierarchy, execute the pdftk $filename dump_data command for each .pdf I find, and then capture the important parts of that output into one or more Rails models.
Output from pdftk:
$ pdftk BoringDocument883c2.pdf dump_data
InfoKey: Creator
InfoValue: Adobe Acrobat 9.3.4
InfoKey: Producer
InfoValue: Adobe Acrobat 9.34 Paper Capture Plug-in
InfoKey: ModDate
InfoValue: D:20110312194536-04'00'
InfoKey: CreationDate
InfoValue: D:20110214174733-05'00'
PdfID0: 2f28dcb8474c6849ae8628bc4157df43
PdfID1: 3e13c82c73a9f44bad90eeed137e7a1a
NumberOfPages: 126
BookmarkTitle: Alternative Maintenance Techniques
BookmarkLevel: 1
BookmarkPageNumber: 3
BookmarkTitle: CONTENTS
BookmarkLevel: 1
BookmarkPageNumber: 4
BookmarkTitle: EXHIBITS
BookmarkLevel: 1
BookmarkPageNumber: 6
BookmarkTitle: I - INTRODUCTION
BookmarkLevel: 1
BookmarkPageNumber: 8
BookmarkTitle: II - EXECUTIVE SUMMARY
BookmarkLevel: 1
BookmarkPageNumber: 13
BookmarkTitle: III - REMOTE DIAGNOSTICS - A STATUS REPORT
BookmarkLevel: 1
BookmarkPageNumber: 30
BookmarkTitle: IV - ALTERNATIVE TECHNIQUES
BookmarkLevel: 1
BookmarkPageNumber: 55
BookmarkTitle: V - COMPANYA - A SERVICE PHILOSOPHY
BookmarkLevel: 1
BookmarkPageNumber: 66
BookmarkTitle: VI - COMPANYB - REDUNDANT HARDWARE ARCHITECTURE
BookmarkLevel: 1
BookmarkPageNumber: 77
...shortened for brevity...
PageLabelNewIndex: 1
PageLabelStart: 1
PageLabelPrefix: F-E12_0001.jpg
PageLabelNumStyle: NoNumber
PageLabelNewIndex: 2
PageLabelStart: 1
PageLabelPrefix: F-E12_0002.jpg
PageLabelNumStyle: NoNumber
PageLabelNewIndex: 3
PageLabelStart: 1
PageLabelPrefix: F-E12_0003.jpg
PageLabelNumStyle: NoNumber
...
Edit: I've recently found the pdf-reader gem, which looks promising and may obviate the need for triggering pdftk in the shell.
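A quick sketch of what it seems to expose (the return values shown here are only guesses based on the pdftk dump above):
require "pdf-reader"
reader = PDF::Reader.new("BoringDocument883c2.pdf")
reader.page_count  # => 126
reader.info        # => { :Creator => "Adobe Acrobat 9.3.4", :ModDate => "D:20110312194536-04'00'", ... }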
First off, let me say that my knowledge of Rake isn't that good, so there might be some mistakes. Let me know if something doesn't work and I would be happy to try and fix the problem.
To solve this, I am going to use 2 rake tasks. One of the rake tasks will be a recursive directory traversal task, and the other will be a task which kicks off the recursion.
desc "Populate the database with PDF metadata from the default PDF path"
task :populate_all_pdf_metadata do
pdf_path = "/path/to/pdfs"
Rake::Task[:populate_pdf_metadata].invoke(pdf_path)
end
desc "Recursively traverse a path looking for PDF metadata"
task :populate_pdf_metadata, :pdf_path do |t, args|
excluded_dir_names = [".", ".."] # Do not look in dirs with these names.
pdf_path = args[:pdf_path]
Dir.entries(pdf_path).each do |file|
if Dir.directory?(file) && !excluded_dir_names.include?(file)
Rake::Task[:populate_pdf_metadata].invoke(pdf_path + "/" + file)
elsif File.extname(file) == ".pdf"
reader = PDF::Reader.new(file)
# Populate the database here
end
end
end
I believe the code above is similar to what you want to do. Note the :environment dependency on both tasks; it loads the Rails app so your ActiveRecord models are available inside them. I hope this helps.
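If you end up shelling out to pdftk instead (assuming it is on your PATH), its dump_data output is just Key: Value lines, so capturing the parts mentioned in the question (ModDate, the Bookmark entries) is mostly string splitting. A rough sketch of a helper you could call from the task above in place of PDF::Reader; the persistence step is only hinted at in a comment, since the models are yours to define:
require "shellwords"

# Sketch: run `pdftk <file> dump_data` and pick out the fields shown in the
# question (InfoKey/InfoValue pairs, NumberOfPages, Bookmark* triples).
def extract_pdf_metadata(path)
  output = `pdftk #{Shellwords.escape(path)} dump_data`
  info = {}
  bookmarks = []
  current_key = nil
  output.each_line do |line|
    key, value = line.chomp.split(": ", 2)
    case key
    when "InfoKey"            then current_key = value
    when "InfoValue"          then info[current_key] = value if current_key
    when "NumberOfPages"      then info["NumberOfPages"] = value.to_i
    when "BookmarkTitle"      then bookmarks << { title: value }
    when "BookmarkLevel"      then bookmarks.last[:level] = value.to_i unless bookmarks.empty?
    when "BookmarkPageNumber" then bookmarks.last[:page] = value.to_i unless bookmarks.empty?
    end
  end
  # e.g. Document.create!(mod_date: info["ModDate"], ...) with nested bookmark records
  [info, bookmarks]
end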

Erlang/OTP - Timing Applications

I am interested in benchmarking different parts of my program for speed. I have tried using info(statistics) and erlang:now().
I need to know, down to the microsecond, what the average speed is. I don't know why I am having trouble with a script I wrote.
It should be able to start anywhere and end anywhere. I ran into a problem when I tried starting it on a process that may be running up to four times in parallel.
Is there anyone who already has a solution to this issue?
EDIT:
Willing to give a bounty if someone can provide a script to do it. It needs to work across multiple spawned processes. I cannot accept a function like the timer module's, at least in the implementations I have seen: it only traverses one process, and even then some major editing is necessary for a full test of a full program. Hope I made it clear enough.
Here's how to use eprof, likely the easiest solution for you:
First you need to start it, like most applications out there:
23> eprof:start().
{ok,<0.95.0>}
Eprof supports two profiling modes. You can call it and ask it to profile a certain function, but we can't use that because other processes will mess everything up. We need to manually start the profiling and tell it when to stop (this is why you won't have an easy script, by the way).
24> eprof:start_profiling([self()]).
profiling
This tells eprof to profile everything that will be run and spawned from the shell. New processes will be included here. I will run some arbitrary multiprocessing function I have, which spawns about 4 processes communicating with each other for a few seconds:
25> trade_calls:main_ab().
Spawned Carl: <0.99.0>
Spawned Jim: <0.101.0>
<0.100.0>
Jim: asking user <0.99.0> for a trade
Carl: <0.101.0> asked for a trade negotiation
Carl: accepting negotiation
Jim: starting negotiation
... <snip> ...
We can now tell eprof to stop profiling once the function is done running.
26> eprof:stop_profiling().
profiling_stopped
And we want the logs. Eprof will print them to screen by default. You can ask it to also log to a file with eprof:log(File). Then you can tell it to analyze the results. We tell it to collapse the run time from all processes into a single table with the option total (see the manual for more options):
27> eprof:analyze(total).
FUNCTION CALLS % TIME [uS / CALLS]
-------- ----- --- ---- [----------]
io:o_request/3 46 0.00 0 [ 0.00]
io:columns/0 2 0.00 0 [ 0.00]
io:columns/1 2 0.00 0 [ 0.00]
io:format/1 4 0.00 0 [ 0.00]
io:format/2 46 0.00 0 [ 0.00]
io:request/2 48 0.00 0 [ 0.00]
...
erlang:atom_to_list/1 5 0.00 0 [ 0.00]
io:format/3 46 16.67 1000 [ 21.74]
erl_eval:bindings/1 4 16.67 1000 [ 250.00]
dict:store_bkt_val/3 400 16.67 1000 [ 2.50]
dict:store/3 114 50.00 3000 [ 26.32]
And you can see that most of the time (50%) is spent in dict:store/3. 16.67% is taken in outputting the result, and another 16.67% is taken by erl_eval (which is the overhead you get when running short functions in the shell: parsing them takes longer than running them).
You can then take it from there. That's the basics of profiling run times with Erlang. Handle with care: eprof can be quite a load on a production system or for functions that run for too long.
You can use eprof or fprof.
The normal way to do this is with timer:tc. Here is a good explanation.
I can recommend you this tool: https://github.com/virtan/eep
You will get something like this https://raw.github.com/virtan/eep/master/doc/sshot1.png as a result.
Step-by-step instructions for profiling all processes on a running system:
On target system:
1> eep:start_file_tracing("file_name"), timer:sleep(20000), eep:stop_tracing().
$ scp -C $PWD/file_name.trace desktop:
On desktop:
1> eep:convert_tracing("file_name").
$ kcachegrind callgrind.out.file_name

SE 4.10 bcheck <filename>, SE 2.10 bcheck <filename.ext> and other bcheck anomalies

ISQL-SE 4.10.DD6 (DOS 6.22):
BCHECK C-ISAM B-tree Checker version 4.10.DD6
C-ISAM File: c:\dbfiles.dbs\*.*
ERROR: cannot open C-ISAM file
In SE 2.10 it worked with the wildcard *.* for all files, but in SE 4.10 it doesn't. I have an SQL script which my users periodically run to reorg the customer and transactions tables. Then I have a FIX.BAT DOS script [bcheck -y *.*] as a utility option for my users in case any tables get screwed up. Since users running the reorg will now increment the table version number (example: CUSTO102, 110, ...), I'm going to have to devise a way to strip the .DAT extensions from the .DBS dir and feed them to BCHECK. Before, my reorg would always re-create a static CUSTOMER.DAT with CREATE TABLE customer IN "C:\DBFILES.DBS\CUSTOMER"; but that created the write-permission problem and I had to revert back to SE's default datafile journaling...
Before running BCHECK on CUSTO102, its .IDX file size was 22,089 bytes and its .DAT size was 882,832 bytes.
After running BCHECK on CUSTO102, its .IDX size increased to 122,561 bytes, and a new .IDY file was created with 88,430 bytes.
What's a .IDY file ???
C:\DBFILES.DBS> bcheck -y CUSTO102
BCHECK C-ISAM B-tree Checker version 4.10.DD6
C-ISAM File: c:\dbfiles.dbs\CUSTO102
Checking dictionary and file sizes.
Index file node size = 512
Current C-ISAM index file node size = 512
Checking data file records.
Checking indexes and key descriptions.
Index 1 = unique key
0 index node(s) used -- 1 index b-tree level(s) used
Index 2 = duplicates (2,30,0)
42 index node(s) used -- 3 index b-tree level(s) used
Index 3 = unique key (32,5,0)
29 index node(s) used -- 2 index b-tree level(s) used
Index 4 = duplicates (242,4,2)
37 index node(s) used -- 2 index b-tree level(s) used
Index 5 = duplicates (241,1,0)
36 index node(s) used -- 2 index b-tree level(s) used
Index 6 = duplicates (46,4,2)
38 index node(s) used -- 2 index b-tree level(s) used
Checking data record and index node free lists.
ERROR: 177 missing index node pointer(s)
Fix index node free list ? yes
Recreating index node free list.
Recreating index 6.
Recreating index 5.
Recreating index 4.
Recreating index 3.
Recreating index 2.
Recreating index 1.
184 index node(s) used, 177 free -- 1083 data record(s) used, 0 free
The problem with the wild cards is more likely an issue with the command interpreter that was used to run bcheck than with bcheck itself. If you give bcheck a list of file names (such as 'abc def.dat def.idx'), then it will process the C-ISAM file pairs (abc.dat, abc.idx), (def.dat, def.idx) and (def.dat, def.idx - again). Since it complained about being unable to open 'c:\dbfiles.dbs\*.*', it means that the command interpreter did not expand the '*.*' bit, or there was nothing for it to expand into.
I expect that the '.IDY' file is an intermediate used while rebuilding the indexes for the table. I do not know why it was not cleaned up - maybe the process did not complete.
About sizes, I think your table has about 55,000 rows of size 368 bytes each (SE might say 367; the difference is the record status byte at the end, which is either '\0' for deleted or '\n' for current). The unique index on the CHAR(5) column (index 3) requires 9 bytes per entry, or about 56 keys per index node, for about 1000 index nodes. The duplicate indexes are harder to size; you need space for the key value plus a list of 4-byte numbers for the duplicates, all packed into 512-byte pages. The 22 KB index file was missing a lot of information. The revised index file is about the right size. Note that index 1 is the 'ROWID' index; it does not occupy any space. (Index 1 is also why although every table created by SE is stored in a C-ISAM file, not all C-ISAM files are necessarily compatible with SE.)
