I've created an action like this:
use Illuminate\Bus\Queueable;
use Laravel\Nova\Actions\Action;
use Illuminate\Support\Collection;
use Laravel\Nova\Fields\ActionFields;
use Illuminate\Queue\InteractsWithQueue;
use Illuminate\Contracts\Queue\ShouldQueue;
use Laravel\Nova\Fields\BelongsTo;
use Illuminate\Support\Facades\Queue;
class ExportToCsv extends Action implements ShouldQueue
{
    use InteractsWithQueue, Queueable;

    public function handle(ActionFields $fields, Collection $models)
    {
        var_dump(time(), collect($models)->map(function ($item) {
            return $item->id;
        })->first());

        sleep(3);
    }
}
I'm then triggering it for 1128 items (twice) and observing the following result:
[2020-05-02 20:45:05][181] Processing: App\Nova\Actions\ExportToCsv
int(1588452305)
int(621525)
[2020-05-02 20:45:05][180] Processing: App\Nova\Actions\ExportToCsv
int(1588452305)
int(412186)
[2020-05-02 20:45:05][179] Processing: App\Nova\Actions\ExportToCsv
int(1588452305)
int(621282)
[2020-05-02 20:45:08][181] Processed: App\Nova\Actions\ExportToCsv
[2020-05-02 20:45:08][184] Processing: App\Nova\Actions\ExportToCsv
int(1588452308)
int(623886)
[2020-05-02 20:45:08][180] Processed: App\Nova\Actions\ExportToCsv
[2020-05-02 20:45:08][182] Processing: App\Nova\Actions\ExportToCsv
int(1588452308)
int(622950)
[2020-05-02 20:45:08][179] Processed: App\Nova\Actions\ExportToCsv
[2020-05-02 20:45:08][183] Processing: App\Nova\Actions\ExportToCsv
int(1588452308)
int(623252)
[2020-05-02 20:45:11][184] Processed: App\Nova\Actions\ExportToCsv
[2020-05-02 20:45:11][182] Processed: App\Nova\Actions\ExportToCsv
[2020-05-02 20:45:11][183] Processed: App\Nova\Actions\ExportToCsv
[2020-05-02 20:45:35][185] Processing: App\Nova\Actions\ExportToCsv
int(1588452335)
int(621282)
[2020-05-02 20:45:35][187] Processing: App\Nova\Actions\ExportToCsv
int(1588452335)
int(621525)
[2020-05-02 20:45:35][186] Processing: App\Nova\Actions\ExportToCsv
int(1588452335)
int(412186)
[2020-05-02 20:45:38][185] Processed: App\Nova\Actions\ExportToCsv
[2020-05-02 20:45:38][188] Processing: App\Nova\Actions\ExportToCsv
int(1588452338)
int(622950)
[2020-05-02 20:45:38][187] Processed: App\Nova\Actions\ExportToCsv
[2020-05-02 20:45:38][190] Processing: App\Nova\Actions\ExportToCsv
int(1588452338)
int(623886)
[2020-05-02 20:45:38][186] Processed: App\Nova\Actions\ExportToCsv
[2020-05-02 20:45:38][189] Processing: App\Nova\Actions\ExportToCsv
int(1588452338)
int(623252)
[2020-05-02 20:45:41][188] Processed: App\Nova\Actions\ExportToCsv
[2020-05-02 20:45:41][190] Processed: App\Nova\Actions\ExportToCsv
[2020-05-02 20:45:41][189] Processed: App\Nova\Actions\ExportToCsv
By default the models are chunked into batches of 200, and from what I'm seeing, three jobs are spawned at the same time. Unfortunately there's no way of knowing which job will be processed first, so the models logged for each batch change between runs, as seen in the output: 621525, 412186, 621282 the first time the action is called, and 621282, 621525, 412186 the second time (the models are split between jobs the same way, but the order in which the jobs are processed changes). The problem is that if I'm writing the results to a CSV file, the order matters.
I'd like only one job to run at a given time for a given action, which would solve my problem. Is this possible?
I'm using Horizon as the queue worker.
The way to accomplish this is to change the processes value in the Horizon configuration (config/horizon.php):
'environments' => [
    'local' => [
        'supervisor-1' => [
            'connection' => 'redis',
            'queue' => ['default'],
            'balance' => 'simple',
            'processes' => 1,
            'tries' => 3,
        ],
    ],
],
Related
Well, this is my first attempt at ML modelling. The data looks like this:
// Normalized stock price data with labels (the values below are just an example)
// $labels  = ['10%', '30%', '-30%', ..];
// $samples = [[1,0,45,29,100], [100,13,0,14,5], [3,2,0,10,100], ..];
Each sample has 31 values and there are 7 label types. I successfully trained the model itself, but the score doesn't add up: every attempt scores under 0.2.
array:4 [▼
"environment" => array:4 [▶]
"filename" => "MP-test-220425-010403.rbx"
"scores" => array:9 [▼
1 => 0.077863577863578
2 => 0.077863577863578
3 => 0.071805006587615
4 => 0.14278357892554
5 => 0.077863577863578
6 => 0.077863577863578
7 => 0.13096842384232
8 => 0.078355812459859
9 => 0.077863577863578
]
"losses" => array:9 [▼
1 => 0.2336202611478
2 => 0.22053903758692
3 => 0.22142868877431
4 => 0.21962296766134
5 => 0.21888143998952
6 => 0.21872846102315
7 => 0.21900067894143
8 => 0.21882642822037
9 => 0.21780065553406
]
]
This is what I got. The scores keep coming back with the same value. What does it mean? I can find examples about underfitting and overfitting, but they all deal with somewhat high accuracy, at least above 50%.
Is it underfitting or overfitting? Do I need more training data? Or is the data simply not classifiable?
What can I do to improve the result?
I'm using Ruby 2.2.1 and Rails 4.2.0
I have a method on my Rails app to warehouse data from a web service. After retrieving and formatting the data, I'm using assign_attributes to update the model before doing some other logic and saving. The problem is that assigning the variables is crazy slow! Assigning 3 properties (a string and two booleans) is taking between 1 and 3 seconds. My full application needs to assign 30, and it's taking upwards of a minute for each object to be updated.
Sample code:
...
# @trips_hash is a hash of { trip_numbers => trip_details<Hash> }
# @bi_trips is an array of <Trip> (ActiveRecord::Base) objects from the database
@trips_hash.each do |trip_number, trip_details|
  trip = @bi_trips.select { |t| t.number == trip_number }.first
  ...
  time_started = Time.now # For performance profiling
  trip.assign_attributes(stage: 'foo', all_intl: true, active: false)
  p "Done in #{(Time.now - time_started).round(2)} s."
end
...
Here are results for the above code:
"Done in 0.0 s."
"Done in 1.71 s."
"Done in 2.09 s."
"Done in 3.36 s."
"Done in 1.45 s."
"Done in 1.99 s."
"Done in 1.63 s."
"Done in 0.59 s."
"Done in 1.61 s."
"Done in 1.56 s."
"Done in 2.25 s."
"Done in 1.42 s."
"Done in 1.53 s."
"Done in 1.61 s."
Am I going crazy? It seems like it shouldn't take 1-3 seconds to assign 3 properties of an object. I get similar results when I break it into:
trip.stage = 'foo'
trip.all_intl = false
trip.active = true
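For a cleaner measurement (and to rule out noise from p and string building), Ruby's built-in Benchmark module can time just the assignment; this is a minimal sketch using the illustrative attribute values from the snippet above:
require 'benchmark'

# Time only the assign_attributes call; trip and the attribute values are the
# illustrative ones from the snippet above.
elapsed = Benchmark.realtime do
  trip.assign_attributes(stage: 'foo', all_intl: true, active: false)
end
p "assign_attributes took #{elapsed.round(2)} s."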
This is my pry session output:
[1] pry(SomeTask)> epub
=> #<File:/somepath/tmp/x.epub>
[2] pry(SomeTask)> epub.size
=> 134
[3] pry(SomeTask)> File.size("/somepath/tmp/x.epub")
=> 44299
[4] pry(SomeTask)> epub.class
=> Tempfile
I see that File.size yields a different result than the size method of the Tempfile instance.
How is this possible?
The devil is in the details. From the docs for Tempfile#size:
size()
Returns the size of the temporary file. As a side effect, the IO buffer is flushed before determining the size.
What's happening is that you're using File.size to read the size of the file before the buffer has been flushed—i.e. before all of the bytes have been written to the file—and then you're using Tempfile#size, which flushes that buffer before it calculates the size:
tmp = Tempfile.new('foo')
tmp.write('a' * 1000)
File.size(tmp)
# => 0
tmp.size
# => 1000
But see what happens when you call tmp.size before File.size(tmp):
tmp = Tempfile.new('bar')
tmp.write('a' * 1000)
tmp.size
# => 1000
File.size(tmp)
# => 1000
You can get the behavior you want out of File.size by manually flushing the buffer:
tmp = Tempfile.new('baz')
tmp.write('a' * 1000)
tmp.flush
File.size(tmp)
# => 1000
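Closing the handle also flushes the buffer, so once you're finished writing you can simply close the Tempfile before measuring it with File.size; a small sketch along the same lines as the examples above:
tmp = Tempfile.new('qux')
tmp.write('a' * 1000)
tmp.close              # closing flushes any buffered bytes to disk

File.size(tmp.path)
# => 1000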
I'm using Pry version 0.10.1 on Ruby 2.2.2 and can't duplicate that situation:
[1] (pry) main: 0> foo = Tempfile.new('foo')
#<File:/var/folders/yb/whn8dwns6rl92jswry5cz87dsgk2n1/T/foo20150819-83612-1tpkqm4>
[2] (pry) main: 0> File.size(foo.path)
=> 0
[3] (pry) main: 0> foo.size
=> 0
After initialization, the file size is 0 bytes.
[4] (pry) main: 0> foo.write('a')
=> 1
[5] (pry) main: 0> File.size(foo.path)
=> 0
After writing one character to foo, the data has been buffered and not flushed to disk as I'd expect.
[6] (pry) main: 0> foo.size
=> 1
[7] (pry) main: 0> File.size(foo.path)
=> 1
foo.size flushes the buffer then returns the size of the file, which matches what File.size says it is.
When dealing with temporary files created by Tempfile, we usually don't care about, or need to know, their size. They're temporary, will disappear (eventually), and are treated like buffers. If you need a file that is more permanent, create and write to a normal file.
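If you do need to keep the contents around, one option is to flush the Tempfile and copy it to a permanent path; a minimal sketch (the destination path is only an example):
require 'tempfile'
require 'fileutils'

tmp = Tempfile.new('report')
tmp.write('generated content')
tmp.flush                                  # make sure buffered bytes are on disk

# Copy to a permanent location (the path is illustrative).
FileUtils.cp(tmp.path, '/somepath/report.txt')

tmp.close!                                 # close and delete the temporary file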
I've set up the Thinking Sphinx gem (https://github.com/pat/thinking-sphinx) and I'm trying to get it to work with the globalize gem (https://github.com/globalize/globalize).
I have a model named Content that has :name, :body, :summary attributes and also has
translates :name, :body, :summary, :fallbacks_for_empty_translations => true
for translations.
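Put together, the model presumably looks something like this (assembled from the fragments above; nothing here beyond what the question states):
# app/models/content.rb
class Content < ActiveRecord::Base
  # name, body and summary are stored in content_translations by globalize
  translates :name, :body, :summary, :fallbacks_for_empty_translations => true
end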
I've created a content_index that has
ThinkingSphinx::Index.define :content, :with => :active_record do
  indexes translations.summary, :sortable => true
  indexes translations.body, :sortable => true

  where "content_translations.locale = 'my_locale'"
end
When I do rake ts:index or rake ts:rebuild I get
Generating configuration to rails_app_path/config/development.sphinx.conf
Sphinx 2.1.8-id64-release (rel21-r4675)
Copyright (c) 2001-2014, Andrew Aksyonoff
Copyright (c) 2008-2014, Sphinx Technologies Inc (http://sphinxsearch.com)
using config file 'rails_app_path/config/development.sphinx.conf'...
indexing index 'content_core'...
collected 43 docs, 0.0 MB
sorted 0.0 Mhits, 100.0% done
total 43 docs, 3266 bytes
total 0.005 sec, 616808 bytes/sec, 8120.86 docs/sec
skipping non-plain index 'content'...
total 3 reads, 0.000 sec, 1.3 kb/call avg, 0.0 msec/call avg
total 10 writes, 0.000 sec, 1.5 kb/call avg, 0.0 msec/call avg
rotating indices: successfully sent SIGHUP to searchd (pid=8282).
So when I go into the Rails console (rails c) and try something like
Content.search "something"
I get empty results.
2.1.2 :050 > Content.search("something")
Sphinx Query (0.6ms) SELECT * FROM content_core WHERE
MATCH('something') AND sphinx_deleted = 0 LIMIT 0, 20
Sphinx Found 0 results
=> []
Does the skipping non-plain index 'content'... line in the ts:rebuild output have anything to do with it?
I've been troubleshooting for a couple of days to get Passenger to load my Rails application. Previously, when I navigated to my domain, I was getting some nice little Passenger error pages about gemset errors and database roles not being able to log in. I believe I have fixed those setup errors, but now when I navigate to my domain I get nothing: no error page, nothing at all. I can't find anything in the error log that indicates where my problem is. I suspect it might be something to do with permissions, but I don't know how to diagnose whether that's correct or how to fix it. Here's what my error log looks like:
[Fri Nov 01 10:10:14 2013] [notice] caught SIGTERM, shutting down
[Fri Nov 01 10:10:15 2013] [notice] SELinux policy enabled; httpd running as context unconfined_u:system_r:httpd_t:s0
[Fri Nov 01 10:10:15 2013] [notice] suEXEC mechanism enabled (wrapper: /usr/sbin/suexec)
[ 2013-11-01 10:10:15.0461 3060/7f2d05770720 agents/Watchdog/Main.cpp:574 ]: Options: { 'analytics_log_user' => 'nobody', 'default_group' => 'nobody', 'default_python' => 'python', 'default_ruby' => '/home/krstck/.rvm/wrappers/ruby-1.9.3-p448/ruby', 'default_user' => 'nobody', 'log_level' => '0', 'max_instances_per_app' => '0', 'max_pool_size' => '6', 'passenger_root' => '/home/krstck/.rvm/gems/ruby-1.9.3-p448/gems/passenger-4.0.23', 'pool_idle_time' => '300', 'temp_dir' => '/tmp', 'union_station_gateway_address' => 'gateway.unionstationapp.com', 'union_station_gateway_port' => '443', 'user_switching' => 'true', 'web_server_pid' => '3059', 'web_server_type' => 'apache', 'web_server_worker_gid' => '48', 'web_server_worker_uid' => '48' }
[ 2013-11-01 10:10:15.1385 3063/7f0d05adc720 agents/HelperAgent/Main.cpp:619 ]: PassengerHelperAgent online, listening at unix:/tmp/passenger.1.0.3059/generation-0/request
[ 2013-11-01 10:10:15.1523 3068/7f5fe481f7e0 agents/LoggingAgent/Main.cpp:318 ]: PassengerLoggingAgent online, listening at unix:/tmp/passenger.1.0.3059/generation-0/logging
[ 2013-11-01 10:10:15.1528 3060/7f2d05770720 agents/Watchdog/Main.cpp:761 ]: All Phusion Passenger agents started!
[Fri Nov 01 10:10:15 2013] [notice] Digest: generating secret for digest authentication ...
[Fri Nov 01 10:10:15 2013] [notice] Digest: done
[ 2013-11-01 10:10:15.3630 3082/7fad84082720 agents/Watchdog/Main.cpp:574 ]: Options: { 'analytics_log_user' => 'nobody', 'default_group' => 'nobody', 'default_python' => 'python', 'default_ruby' => '/home/krstck/.rvm/wrappers/ruby-1.9.3-p448/ruby', 'default_user' => 'nobody', 'log_level' => '0', 'max_instances_per_app' => '0', 'max_pool_size' => '6', 'passenger_root' => '/home/krstck/.rvm/gems/ruby-1.9.3-p448/gems/passenger-4.0.23', 'pool_idle_time' => '300', 'temp_dir' => '/tmp', 'union_station_gateway_address' => 'gateway.unionstationapp.com', 'union_station_gateway_port' => '443', 'user_switching' => 'true', 'web_server_pid' => '3080', 'web_server_type' => 'apache', 'web_server_worker_gid' => '48', 'web_server_worker_uid' => '48' }
[ 2013-11-01 10:10:15.4069 3085/7fa13a7d6720 agents/HelperAgent/Main.cpp:619 ]: PassengerHelperAgent online, listening at unix:/tmp/passenger.1.0.3080/generation-0/request
[ 2013-11-01 10:10:15.4172 3090/7f6893bad7e0 agents/LoggingAgent/Main.cpp:318 ]: PassengerLoggingAgent online, listening at unix:/tmp/passenger.1.0.3080/generation-0/logging
[ 2013-11-01 10:10:15.4177 3082/7fad84082720 agents/Watchdog/Main.cpp:761 ]: All Phusion Passenger agents started!
[Fri Nov 01 10:10:15 2013] [notice] Apache/2.2.15 (Unix) DAV/2 PHP/5.3.3 Phusion_Passenger/4.0.23 configured -- resuming normal operations
Here's the output of passenger-memory-stats:
--------- Apache processes ---------
PID PPID VMSize Private Name
------------------------------------
3080 1 471.2 MB 0.2 MB /usr/sbin/httpd
3100 3080 471.2 MB 0.1 MB /usr/sbin/httpd
3101 3080 471.2 MB 0.1 MB /usr/sbin/httpd
3102 3080 471.2 MB 0.1 MB /usr/sbin/httpd
3103 3080 471.2 MB 0.1 MB /usr/sbin/httpd
3104 3080 471.2 MB 0.1 MB /usr/sbin/httpd
3105 3080 471.2 MB 0.1 MB /usr/sbin/httpd
3106 3080 471.2 MB 0.1 MB /usr/sbin/httpd
3107 3080 471.2 MB 0.1 MB /usr/sbin/httpd
### Processes: 9
### Total private dirty RSS: 1.36 MB
-------- Nginx processes --------
### Processes: 0
### Total private dirty RSS: 0.00 MB
---- Passenger processes -----
PID VMSize Private Name
------------------------------
3082 209.8 MB 0.2 MB PassengerWatchdog
3085 497.1 MB 0.3 MB PassengerHelperAgent
3090 207.7 MB 0.5 MB PassengerLoggingAgent
### Processes: 3
### Total private dirty RSS: 1.08 MB
Here's passenger-status:
Version : 4.0.21
Date : 2013-11-01 10:35:58 -0500
Instance: 3080
----------- General information -----------
Max pool size : 6
Processes : 0
Requests in top-level queue : 0
----------- Application groups -----------
Anything else I need to include? What could be getting in the way of loading my application?