One-time vs. iterative model training in Vowpal Wabbit with the --lrq option

I am using Vowpal Wabbit logistic regression with the low-rank quadratic option (--lrq) for CTR prediction. I have trained the model in two scenarios.
Scenario 1: building the model in one shot with the command
vw -d traning_data_all.vw --lrq ic2 --link logistic --loss_function logistic --l2 0.0000000360911 --l1 0.00000000103629 --learning_rate 0.3 --holdout_off -b 28 --noconstant -f final_model
Scenario 2: I break the training data into 20 chunks (day-wise) and build the model iteratively (with the -i and --save_resume options).
First step:
vw -d traning_data_day_1.vw --lrq ic2 --link logistic --loss_function logistic --l2 0.0000000360911 --l1 0.00000000103629 --learning_rate 0.3 --holdout_off -b 28 --noconstant -f model_1
And then
vw -d traning_data_day_2.vw --lrq ic2 --link logistic --loss_function logistic --l2 0.0000000360911 --l1 0.00000000103629 --learning_rate 0.3 --holdout_off -b 28 --noconstant --save_resume -i model_1 -f model_2
And so on, up to 20 iterations.
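Written out, the whole update schedule is roughly the following sketch (the file and model names simply follow the pattern above):
# day 1: train a fresh model
vw -d traning_data_day_1.vw --lrq ic2 --link logistic --loss_function logistic --l2 0.0000000360911 --l1 0.00000000103629 --learning_rate 0.3 --holdout_off -b 28 --noconstant -f model_1
# days 2..20: warm-start each day from the previous day's model
for day in $(seq 2 20); do
  prev=$((day - 1))
  vw -d traning_data_day_${day}.vw --lrq ic2 --link logistic --loss_function logistic --l2 0.0000000360911 --l1 0.00000000103629 --learning_rate 0.3 --holdout_off -b 28 --noconstant --save_resume -i model_${prev} -f model_${day}
done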
The first scenario works fine, but in the second scenario the predictions tend towards only 1 or 0 after 7-8 iterations. I need the second scenario to work because I want to update the model frequently. l1, l2 and learning_rate were optimised with the vw-hypersearch script.
Please help me figure out how to solve this issue. Am I missing something? I have already tried the --lrqdropout option.

Related

How to use libvips to shrink giant images with limited memory

I have a Ruby on Rails web application that allows users to upload images, which then automatically get resized into small thumbnails using libvips and the ImageProcessing ruby gem. Sometimes users legitimately need to upload 100MP+ images. These large images break our server, which only has 1GB of RAM. If it's relevant, these images are almost always JPEGs.
What I'm hoping to do is use libvips to first scale down these images to a size that my server can handle (maybe under 8,000x8,000 pixels) without using lots of RAM. Then I would use that image to do the other things we already do, like change the colorspace to sRGB, resize, strip metadata, etc.
Is this possible? If so can you give an example of a vips or vipsthumbnail linux CLI command?
I found a feature in ImageMagick that should theoretically solve this issue, mentioned in the two links below. But I don't want to switch the whole system to ImageMagick just for this.
https://legacy.imagemagick.org/Usage/formats/#jpg_read
https://github.com/janko/image_processing/wiki/Improving-ImageMagick-performance
P.S.: I'm using Heroku so if the RAM usage peaks at up to 2GB the action should still work.
(I've always been confused about why image processing seems to always require loading the entire image in RAM at once...)
UPDATE:
I'm providing more context because jcupitt's command is still failing for me.
This is the main software that is installed on the Docker container that is running libvips, as defined in the Dockerfile:
FROM ruby:3.1.2
RUN apt-get update -qq && apt-get install -y postgresql-client
# uglifier requires nodejs -- `apt-get install nodejs` only installs older version by default
RUN apt-get install -y curl
RUN curl -sL https://deb.nodesource.com/setup_14.x | bash -
RUN apt-get install -y nodejs
RUN apt-get install -y libvips libvips-dev libvips-tools
# install pdftotext
RUN apt-get install -y xpdf
I am limiting the memory usage of the sidekiq container to 500MB to be more similar to the production server. (I also tried limiting both memory and reserved memory to 1GB and the same thing happened.) This is the config as specified in docker-compose.yml:
sidekiq:
  depends_on:
    - db
    - redis
  build: .
  command: sidekiq -c 1 -v -q mailers -q default -q low -q searchkick
  volumes:
    - '.:/myapp'
  env_file:
    - '.env'
  deploy:
    resources:
      limits:
        memory: 500M
      reservations:
        memory: 500M
This is the exact command I'm trying, based on the command that jcupitt suggested:
First I run docker stats --all to see the sidekiq container's memory usage after booting up, before running libvips:
CONTAINER ID NAME CPU % MEM USAGE / LIMIT MEM % NET I/O BLOCK I/O PIDS
4d7e9ff9c7c7 sidekiq_1 0.48% 210.2MiB / 500MiB 42.03% 282kB / 635kB 133MB / 0B 7
I also check docker-compose exec sidekiq top and get a higher RAM limit, which I think is normal for Docker
top - 18:39:48 up 1 day, 3:21, 0 users, load average: 0.01, 0.08, 0.21
Tasks: 3 total, 1 running, 2 sleeping, 0 stopped, 0 zombie
%Cpu(s): 1.2 us, 1.5 sy, 0.0 ni, 97.1 id, 0.2 wa, 0.0 hi, 0.0 si, 0.0 st
MiB Mem : 3929.7 total, 267.4 free, 1844.1 used, 1818.1 buff/cache
MiB Swap: 980.0 total, 61.7 free, 918.3 used. 1756.6 avail Mem
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
1 root 20 0 607688 190620 12848 S 0.3 4.7 0:10.31 ruby
54 root 20 0 6984 3260 2772 R 0.3 0.1 0:00.05 top
39 root 20 0 4092 3256 2732 S 0.0 0.1 0:00.03 bash
Then I run the command:
docker-compose exec sidekiq bash
root@4d7e9ff9c7c7:/myapp# vipsheader /tmp/shrine20220728-1-8yqju5.jpeg
/tmp/shrine20220728-1-8yqju5.jpeg: 23400x15600 uchar, 3 bands, srgb, jpegload
VIPS_CONCURRENCY=1 vipsthumbnail /tmp/shrine20220728-1-8yqju5.jpeg --size 500x500
Then in another Terminal window I check docker stats --all again
In maybe 0.5s the memory usage quickly shoots to 500MB and the vipsthumbnail process dies and just returns "Killed".
libvips will almost always stream images rather than loading them in memory, so you should not see high memory use.
For example:
$ vipsheader st-francis.jpg
st-francis.jpg: 30000x26319 uchar, 3 bands, srgb, jpegload
$ ls -l st-francis.jpg
-rw-rw-r-- 1 john john 227612475 Sep 17 2020 st-francis.jpg
$ /usr/bin/time -f %M:%e vipsthumbnail st-francis.jpg --size 500x500
87412:2.57
So 87MB of memory and 2.5s. The image is around 2.4GB uncompressed. You should get the same performance with ruby-vips.
In fact there's not much useful concurrency for this sort of operation, so you can run libvips with a small threadpool.
$ VIPS_CONCURRENCY=1 /usr/bin/time -f %M:%e vipsthumbnail st-francis.jpg --size 500x500
52624:2.49
So with one thread in the threadpool it's about the same speed, but memory use is down to 50MB.
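If that helps, the same cap can also be applied to the whole Sidekiq worker by putting the variable in its environment, e.g. (a sketch that just mirrors the docker-compose command shown above):
# cap the libvips threadpool for everything this worker process does
VIPS_CONCURRENCY=1 sidekiq -c 1 -v -q mailers -q default -q low -q searchkick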
There are a few cases when this will fail. One is with interlaced (also called progressive) images.
These represent the image as a series of passes of increasingly higher detail. This can help when displaying an image to the user (the image appears in slowly increasing detail, rather than as a line moving down the screen), but unfortunately it also means that you don't get the final value of the first pixel until the entire image has been decompressed. So the whole image has to be decompressed into memory, which makes this type of file extremely unsuitable for the large images you are handling.
You can detect an interlaced image in ruby-vips with:
if image.get_typeof("interlaced") != 0
  raise "argh! can't handle this"
end
I would do that test early on in your application and block upload of this type of file.
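If it's easier to check outside Ruby, vipsheader can print individual metadata fields, so a shell-level test would be roughly the following (assuming your libvips build sets the interlaced field for progressive JPEGs, which is the same field the Ruby check above uses):
# prints 1 for a progressive (interlaced) JPEG; reports an error if the field is absent
vipsheader -f interlaced /tmp/shrine20220728-1-8yqju5.jpeg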

Set up a sharded solr collection using solrcloud

I would like to set up a 6-shard Solr collection on 3 Windows machines.
I tried bin\solr -e cloud and set up 2 machines, 6 shards and 1 replica. When I stop and start 2 cores on one machine (each using a different hard disk) I get 6 shards, 3 for each core.
When I start another core on another machine, nothing happens; the third core doesn't do anything.
When I start another core on the same machine, using the same config in another directory, nothing happens either: the core starts but has no collections, and the 2 cores started first still have 3 shards each.
For example: I start the 3rd one with:
bin\solr start -c -p 7576 -z localhost:9983 -s server/solr/collection/node3/solr
Or start on another machine:
bin\solr start -c -p 7576 -z zookeeper:9983 -s server/solr/collection/node3/solr
Is there documentation out there that doesn't use the "convenient" bin\solr script (which I've been trying to reverse engineer all day) and explains how to set up ZooKeeper/Solr so that each additional Solr core is added as a shard, until 6 shards are reached?
I think I found the answer: bin\solr -e cloud starts up the cores and assigns data to them.
After running the standard bin\solr -e cloud with 2 cores, a collection with 6 shards and 1 replica, I stop everything with bin\solr stop -all.
Then I copy solr-5.2.1\example\cloud\node1 to solr-5.2.1\example\cloud\node3, delete the files in solr-5.2.1\example\cloud\node3\logs, and let solr-5.2.1\example\cloud\node3 have gettingstarted_shard6_replica1 (leave that core directory in solr-5.2.1\example\cloud\node3\solr and remove it from solr-5.2.1\example\cloud\node1\solr).
Start up 3 cores:
bin\solr start -c -p 8983 -s example\cloud\node1\solr
bin\solr start -cloud -p 7574 -z localhost:9983 -s example\cloud\node2\solr
bin\solr start -cloud -p 7575 -z localhost:9983 -s example\cloud\node3\solr
And now I can see that the third Solr instance has gettingstarted_shard6_replica1.
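For reference, rather than moving core directories around, the same 6-shard layout can also be created explicitly through the Collections API once the bare nodes are running against the same ZooKeeper. A sketch (the collection and config names here are just the getting-started defaults and may differ in your setup; with 3 nodes, 6 shards and 1 replica, maxShardsPerNode needs to be at least 2):
curl "http://localhost:8983/solr/admin/collections?action=CREATE&name=gettingstarted&numShards=6&replicationFactor=1&maxShardsPerNode=2&collection.configName=gettingstarted"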

How to stop parallel from reporting "No more processes" with "-X" option?

Working off this example: http://www.gnu.org/software/parallel/man.html#EXAMPLE:-Speeding-up-fast-jobs
When I run:
seq -w 0 9999 | parallel touch pict{}.jpg
seq -w 0 9999 | parallel -X touch pict{}.jpg
Success! However, add another 9 and BOOM:
$ seq -w 0 99999 | parallel --eta -X touch pict{}.jpg
parallel: Warning: No more processes: Decreasing number of running jobs to 3. Raising ulimit -u or /etc/security/limits.conf may help.
Computers / CPU cores / Max jobs to run
1:local / 4 / 3
parallel: Warning: No more processes: Decreasing number of running jobs to 2. Raising ulimit -u or /etc/security/limits.conf may help.
parallel: Warning: No more processes: Decreasing number of running jobs to 1. Raising ulimit -u or /etc/security/limits.conf may help.
parallel: Error: No more processes: cannot run a single job. Something is wrong.
I would expect parallel -X to run no more jobs than I have cpu cores, and to cram as many parameters onto each job as the max command line length permits. How am I running out of processes?
My environment:
OSX Yosemite
ulimit -u == 709
GNU parallel 20141122
GNU bash, version 3.2.53(1)-release (x86_64-apple-darwin14)
Your expectation is 100% correct. What you are seeing is clearly a bug - probably due to GNU Parallel not being well tested on OSX. Please follow http://www.gnu.org/software/parallel/man.html#REPORTING-BUGS and file a bug report.
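Until it's fixed, the warning's own hint is the most likely workaround, provided your account is allowed to raise the per-user process limit (a sketch; the value 2048 is arbitrary and may be capped by the OS X hard limit):
# raise the max number of user processes for this shell session, then retry
ulimit -u 2048
seq -w 0 99999 | parallel --eta -X touch pict{}.jpg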

Freeing unused allocated nodes on a SLURM cluster

I'm running some batches of serial programs on a (very) inhomogeneous SLURM cluster (version 2.6.6-2), using GNU 'parallel' to do the distribution. The problem that I'm having is that some of the nodes finish their tasks a lot faster than the others, and I end up with situations like, for example, a job that's allocating 4 nodes but is only using 1 during half of the simulation.
Is there any way, without administrator privileges, to free one of these unused nodes? I can mitigate the problem by running 4 jobs on individual nodes, or with files containing lists of homogeneous nodes, but it's still far from ideal.
For reference, here are the script files that I'm using (adapted from here)
job.sh
#!/bin/sh
#SBATCH --job-name=test
#SBATCH --time=96:00:00
#SBATCH --ntasks=16
#SBATCH --mem-per-cpu=1024
#SBATCH --ntasks-per-node=4
#SBATCH --partition=normal
# --delay .2 prevents overloading the controlling node
# -j is the number of tasks parallel runs so we set it to $SLURM_NTASKS
# --joblog makes parallel create a log of tasks that it has already run
# --resume makes parallel use the joblog to resume from where it has left off
# the combination of --joblog and --resume allow jobs to be resubmitted if
# necessary and continue from where they left off
parallel="parallel --delay .2 -j $SLURM_NTASKS"
$parallel < command_list.sh
command_list.sh
srun --exclusive -N1 -n1 nice -19 ./a.out config0.dat
srun --exclusive -N1 -n1 nice -19 ./a.out config1.dat
srun --exclusive -N1 -n1 nice -19 ./a.out config2.dat
...
srun --exclusive -N1 -n1 nice -19 ./a.out config31.dat
You can use the scontrol command to downsize your job:
scontrol update JobId=# NumNodes=#
I am not sure, however, how Slurm chooses which nodes to dismiss. You might need to choose them by hand and write
scontrol update JobId=# NodeList=<names>
See Question 24 in the Slurm FAQ.
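Put together, a session might look like the following (a sketch with a made-up job id and node names; check which nodes have actually gone idle before releasing them):
# see which nodes the job currently holds
squeue -j 1234567 -o "%N"
# shrink the allocation to 3 nodes and let Slurm pick which ones to release...
scontrol update JobId=1234567 NumNodes=3
# ...or name the nodes to keep explicitly
scontrol update JobId=1234567 NodeList=node[01-03]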

Get raw predictions from Vowpal Wabbit in daemon mode

I'm starting Vowpal Wabbit in daemon mode with the following command:
vw --loss_function hinge --ect 250 --quiet --passes 5 -b 25 --daemon --port 10001 --pid_file pidfile
This works well, and I'm able to get predictions by connecting to the socket, sending my data, and reading back an answer.
My question is, is it possible to also get the raw predictions passed over the socket when in daemon mode?
Instead of only 1.000000 as an answer, I'd like to get something like 1:-2.31425 2:-3.98557 3:-3.97967.
There isn't a way to do this with VW in daemon mode currently. The best option is to have the daemon write the raw predictions to a file (with -r) and read them from that file:
vw --loss_function hinge -r raw_pred --ect 250 --quiet --passes 5 -b 25 --daemon --port 10001 --pid_file pidfile
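A rough way to wire that up from the shell (the host, port and example line below are just placeholders):
# in one terminal, follow the raw scores as they are appended to the file
tail -f raw_pred
# in another, send an example to the daemon and read the normal prediction back
echo "1 | feature_a feature_b" | nc localhost 10001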
