Heroku slug size exploded after rails asset precompile - ruby-on-rails

My app was running just fine. I had to add an additional JS file, and after recompiling assets and redeploying, the slug grew to 930 MB, so it won't deploy on Heroku. I've tried clearing assets, clearing the build cache, and everything else I've found, but it's only down to 821 MB. I'm out of ideas and I'm stuck: I needed this deploy to fix a bug, but the size just won't budge.
The vendor folder is currently huge at 711 MB. How can I reduce its size?
~ $ du -ha --max-depth 1 /app | sort -hr
821M /app
711M /app/vendor
79M /app/bin
27M /app/public
2.4M /app/app
964K /app/latest.dump
520K /app/server
520K /app/generate
268K /app/config
188K /app/db
164K /app/spec
104K /app/lib
48K /app/jquery.fileupload.js
48K /app/Gemfile.lock
20K /app/widget.js
20K /app/esc
16K /app/.heroku
12K /app/.profile.d
8.0K /app/tmp
8.0K /app/spring
8.0K /app/exit
8.0K /app/.bundle
4.0K /app/.ruby-version
4.0K /app/.rspec
4.0K /app/README.MD
4.0K /app/Rakefile~
4.0K /app/Rakefile
4.0K /app/Procfile
4.0K /app/log
4.0K /app/jdd
4.0K /app/init.rb~
4.0K /app/init.rb
4.0K /app/.gitignore~
4.0K /app/.gitignore
4.0K /app/Gemfile~
4.0K /app/Gemfile
4.0K /app/config.ru

Except for vendor, the folder sizes look fine. Check whether vendor contains installed gems that were packaged with `bundle package` and are being shipped along with the project folder; if that's the case, they can be removed.
You can also compress the JS, CSS, and other asset files further using various libraries and compression techniques, and after compression the files can be served from a content-storage platform such as S3.
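As a sketch of that cleanup (run against a throwaway directory here; in practice you'd run it in your app checkout, and the gem name and app name are placeholders):

```shell
# Demo in a scratch directory that mimics the app layout.
demo="$(mktemp -d)"
cd "$demo"
mkdir -p vendor/cache spec
head -c 2048 /dev/zero > vendor/cache/somegem-1.0.0.gem  # stand-in for a packaged gem

# See what is actually filling vendor/:
du -h -d2 vendor | sort -hr | head

# Gems vendored with `bundle package` live in vendor/cache; Heroku
# reinstalls gems from Gemfile.lock at build time, so the cache
# does not need to ship with the slug:
rm -rf vendor/cache

# A .slugignore file excludes paths from the slug at build time:
printf 'spec/\n*.dump\n*~\n' > .slugignore

# Then purge Heroku's build cache before redeploying:
#   heroku plugins:install heroku-repo
#   heroku repo:purge_cache -a <your-app>
```

The `du | sort -hr` pass is worth running first: the listing in the question shows several leftover editor backups (`Gemfile~`, `init.rb~`) and a `latest.dump`, which `.slugignore` patterns like the ones above would also keep out of the slug.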

Related

How to continue importing OpenStreetMap data into a Docker container after an interruption?

I was using Ubuntu 20.04 to import OpenStreetMap data like this:
docker volume create osm-data
sudo time docker run -v ./planet-200210.osm.pbf:/data/region.osm.pbf -v osm-data:/data/database overv/openstreetmap-tile-server:2.2.0 import
The process was suddenly wiped from RAM, and the volume and container disappeared. But I can still find something:
4.0K ./volumes/osm-data/_data/postgres/pg_snapshots
4.0K ./volumes/osm-data/_data/postgres/pg_stat_tmp
4.0K ./volumes/osm-data/_data/postgres/pg_replslot
4.0K ./volumes/osm-data/_data/postgres/pg_stat
12K ./volumes/osm-data/_data/postgres/pg_multixact/offsets
12K ./volumes/osm-data/_data/postgres/pg_multixact/members
28K ./volumes/osm-data/_data/postgres/pg_multixact
4.0K ./volumes/osm-data/_data/postgres/pg_twophase
108K ./volumes/osm-data/_data/postgres/pg_subtrans
12K ./volumes/osm-data/_data/postgres/pg_xact
4.0K ./volumes/osm-data/_data/postgres/pg_dynshmem
568K ./volumes/osm-data/_data/postgres/global
4.0K ./volumes/osm-data/_data/postgres/pg_tblspc
4.0K ./volumes/osm-data/_data/postgres/pg_serial
4.0K ./volumes/osm-data/_data/postgres/pg_commit_ts
4.0K ./volumes/osm-data/_data/postgres/pg_wal/archive_status
2.9G ./volumes/osm-data/_data/postgres/pg_wal
4.0K ./volumes/osm-data/_data/postgres/pg_logical/mappings
4.0K ./volumes/osm-data/_data/postgres/pg_logical/snapshots
16K ./volumes/osm-data/_data/postgres/pg_logical
4.0K ./volumes/osm-data/_data/postgres/pg_notify
8.3M ./volumes/osm-data/_data/postgres/base/13758
8.4M ./volumes/osm-data/_data/postgres/base/1
8.4M ./volumes/osm-data/_data/postgres/base/13759
12K ./volumes/osm-data/_data/postgres/base/pgsql_tmp
1.1T ./volumes/osm-data/_data/postgres/base/16385
1.1T ./volumes/osm-data/_data/postgres/base
1.1T ./volumes/osm-data/_data/postgres
1.1T ./volumes/osm-data/_data
1.1T ./volumes/osm-data
How can I continue importing the remaining data?

How to find out the default InfluxDB data storage location on Ubuntu

I am running InfluxDB version 1.7.8, and since my Ubuntu machine (18.04.3 LTS) is running low on storage (I have 80 GB), I want to:
Locate where InfluxDB physically stores the data (the big files)
Find out how to change that location to another place.
From this question here I understand that there are two locations:
/var/lib/influxdb/wal
/var/lib/influxdb/data
When I check the first, I see 4.0K sizes, which tells me it's not the right place.
my_server:~$ sudo ls -l /var/lib/influxdb/wal/ -sh
total 20K
4.0K drwx------ 3 influxdb influxdb 4.0K Jul 9 2019 _internal
4.0K drwx------ 3 influxdb influxdb 4.0K Jul 10 2019 db1
4.0K drwx------ 3 influxdb influxdb 4.0K Nov 30 12:32 db2
4.0K drwx------ 3 influxdb influxdb 4.0K Nov 30 21:50 db3
4.0K drwx------ 3 influxdb influxdb 4.0K Dec 12 00:18 db4
When I check the second, I see the same
my_server:~$ sudo ls -l /var/lib/influxdb/data/ -sh
total 20K
4.0K drwx------ 4 influxdb influxdb 4.0K Jul 9 2019 _internal
4.0K drwx------ 4 influxdb influxdb 4.0K Jul 10 2019 db1
4.0K drwx------ 4 influxdb influxdb 4.0K Nov 30 12:32 db2
4.0K drwx------ 4 influxdb influxdb 4.0K Nov 30 21:50 db3
4.0K drwx------ 4 influxdb influxdb 4.0K Dec 12 00:18 db4
At the same time I see that this file: /var/log/syslog.1 takes a crazy amount of storage (13.7GB) with DB related information.
I could not find any information about this in the InfluxDB documentation, which I find odd.
Can anyone provide either a link to where I can read up on this and figure it out,
or a solution for my primary issue: finding out where the physical data is stored?
Thanks!
Those are directories, and directories on Linux always show as 4.0K in `ls` output.
Those locations are correct, though.
Try du -h -d1 /var/lib/influxdb for an accurate count.
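To see why `ls` shows 4.0K while `du` reports the real usage, here is a quick demonstration against a throwaway directory (not your InfluxDB data):

```shell
# ls -l reports the size of the directory *entry* itself (one 4 KiB
# filesystem block), while du recursively sums the contents beneath it.
demo="$(mktemp -d)"
mkdir -p "$demo/db1"
head -c 1048576 /dev/zero > "$demo/db1/blob"  # a 1 MiB data file

ls -ldh "$demo/db1"   # the directory entry still shows 4.0K
du -h -d1 "$demo"     # du reports the real ~1.0M of usage
```

For actually relocating the data, InfluxDB 1.x reads the `dir` and `wal-dir` settings under the `[data]` section of /etc/influxdb/influxdb.conf: stop the service, move the directories to the new disk, update those paths, and restart.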

Some files not visible and others visible as folders after adding a Docker volume

I couldn't start a container because of some issues with volumes, so I tried this to make sure I understand how volumes work. And something strange is happening here: two files should be present in the /data directory, but instead I see one folder named after one of the files on the source machine. I'm doing this on Windows 10.
PS C:\Users\Piotrek\source\repos\fluentd> dir
Directory: C:\Users\Piotrek\source\repos\fluentd
Mode LastWriteTime Length Name
---- ------------- ------ ----
-a---- 06.01.2019 18:50 7 abc.txt
-a---- 06.01.2019 18:50 80 test.conf
PS C:\Users\Piotrek\source\repos\fluentd> docker run -ti --rm -v ${PWD}:/data ubuntu ls -alR /data
/data:
total 4
drwxr-xr-x 3 1000 root 60 Jan 6 16:48 .
drwxr-xr-x 1 root root 4096 Jan 6 17:53 ..
drwxr-xr-x 2 1000 root 40 Jan 6 16:48 test.conf
/data/test.conf:
total 0
drwxr-xr-x 2 1000 root 40 Jan 6 16:48 .
drwxr-xr-x 3 1000 root 60 Jan 6 16:48 ..
Problem solved.
I went to Docker settings and, under "Shared Drives", clicked Reset Credentials.
I had enabled drive sharing some time ago, but afterwards I changed my password to an empty one. It looks like Docker doesn't ask you to re-enable drive sharing when the new password is empty; it does when you change to a non-empty password, but not to an empty one.

Docker Containers all gone after reboot

I had changed the directory where containers and images are stored to /data/docker, and it worked fine for about a year.
However, after I rebooted the system (Ubuntu 16.04), the disk partition /dev/sdb4, which was mounted at /data/docker, was broken, and the broken partition put the system into emergency mode. I ran fsck -y /dev/sdb4 to repair it and restarted again.
After restarting, everything worked normally as before except Docker. Amazingly, all my containers and almost all my images were missing. When I run docker container ls -a, I get nothing. When I run docker images, I get only one entry: mysql 5.6 7edb93321b06 5 months ago 256MB. But I had 4 containers and several images before. In other words, they were gone after the reboot.
In addition, when I run sudo du -h /data/docker/volumes, I get:
6.8M /data/docker/volumes/b35a0a2be6a1d10693e891f0b5c7dfa6e27a34b6a61384e6db79ac06df4bde36/_data/mysql
636K /data/docker/volumes/b35a0a2be6a1d10693e891f0b5c7dfa6e27a34b6a61384e6db79ac06df4bde36/_data/performance_schema
116M /data/docker/volumes/b35a0a2be6a1d10693e891f0b5c7dfa6e27a34b6a61384e6db79ac06df4bde36/_data
116M /data/docker/volumes/b35a0a2be6a1d10693e891f0b5c7dfa6e27a34b6a61384e6db79ac06df4bde36
4.0K /data/docker/volumes/4bfb7ca010e869edd409ee14aa3a8b9ec70ec144a1b1586e792a8f87f9d5a9b2/_data
8.0K /data/docker/volumes/4bfb7ca010e869edd409ee14aa3a8b9ec70ec144a1b1586e792a8f87f9d5a9b2
4.0K /data/docker/volumes/d24ddf0126cef08bc3ff8bf75e79e76375b1e6cfa70c646d3173554d78555aeb/_data
8.0K /data/docker/volumes/d24ddf0126cef08bc3ff8bf75e79e76375b1e6cfa70c646d3173554d78555aeb
4.0K /data/docker/volumes/c22f29c1ab50c2973e6e1cc3e1ea64ad7a253741d44a290df3789fbf1e4c3a05/_data
8.0K /data/docker/volumes/c22f29c1ab50c2973e6e1cc3e1ea64ad7a253741d44a290df3789fbf1e4c3a05
4.0K /data/docker/volumes/985f51b406657eacbd70dfe7d0bd0502287c783a389ae34fa36d70b5c98e94a3/_data/#innodb_temp
84K /data/docker/volumes/985f51b406657eacbd70dfe7d0bd0502287c783a389ae34fa36d70b5c98e94a3/_data/sys
32K /data/docker/volumes/985f51b406657eacbd70dfe7d0bd0502287c783a389ae34fa36d70b5c98e94a3/_data/mysql
84K /data/docker/volumes/985f51b406657eacbd70dfe7d0bd0502287c783a389ae34fa36d70b5c98e94a3/_data/test
1.4M /data/docker/volumes/985f51b406657eacbd70dfe7d0bd0502287c783a389ae34fa36d70b5c98e94a3/_data/performance_schema
165M /data/docker/volumes/985f51b406657eacbd70dfe7d0bd0502287c783a389ae34fa36d70b5c98e94a3/_data
165M /data/docker/volumes/985f51b406657eacbd70dfe7d0bd0502287c783a389ae34fa36d70b5c98e94a3
3.1G /data/docker/volumes/b445468b292cff4531429b7a9883a825fc794327a46672ec518e7765050437e5/_data/newProject
6.8M /data/docker/volumes/b445468b292cff4531429b7a9883a825fc794327a46672ec518e7765050437e5/_data/mysql
387M /data/docker/volumes/b445468b292cff4531429b7a9883a825fc794327a46672ec518e7765050437e5/_data/project
636K /data/docker/volumes/b445468b292cff4531429b7a9883a825fc794327a46672ec518e7765050437e5/_data/performance_schema
4.0G /data/docker/volumes/b445468b292cff4531429b7a9883a825fc794327a46672ec518e7765050437e5/_data
4.0G /data/docker/volumes/b445468b292cff4531429b7a9883a825fc794327a46672ec518e7765050437e5
4.2G /data/docker/volumes
So at least some of the data is not lost.
How can I repair this and get them back?

Is my caching solution secure?

I'm running Rails 3.1 on Ubuntu 10.04 on Nginx and Passenger.
In my logs I was seeing a lot of the following:
cache error: Permission denied - /var/www/redmeetsblue/releases/20120221032538/tmp/cache/B27
I solved the problem by changing the owning user of the cache directory (based on advice found via Google), but I'm unsure of the security implications. Who is nobody, and is this secure?
/var/www/redmeetsblue/current/tmp/cache
total 16K
drwxr-xr-x 4 www-data root 4.0K 2012-02-20 22:27 .
drwxr-xr-x 3 root root 4.0K 2012-02-20 22:26 ..
drwxr-xr-x 54 www-data root 4.0K 2012-02-20 22:27 assets
drwxr-xr-x 3 www-data root 4.0K 2012-02-20 22:27 sass
root@y:/var/www/redmeetsblue/current/tmp# cd b27
-bash: cd: b27: No such file or directory
root@y:/var/www/redmeetsblue/current/tmp# cd B27
-bash: cd: B27: No such file or directory
root@y:/var/www/redmeetsblue/current/tmp# chown -R nobody cache
root@y:/var/www/redmeetsblue/current/tmp# ls -alh /var/www/redmeetsblue/current/tmp/cache
total 16K
drwxr-xr-x 4 nobody root 4.0K 2012-02-20 22:27 .
drwxr-xr-x 3 root root 4.0K 2012-02-20 22:26 ..
drwxr-xr-x 54 nobody root 4.0K 2012-02-20 22:27 assets
drwxr-xr-x 3 nobody root 4.0K 2012-02-20 22:27 sass
After changing the user, my cache is working, but I'm not sure if it's safe. See the working cache:
cache: [GET /assets/grid.png] stale, valid, store
cache: [GET /dashboards] miss
cache: [GET /assets/grid.png] stale, valid, store
The nobody user is commonly used as the owner of Unix daemons so that they have enough permissions to do their job, but not so many that they can do potentially destructive damage. Running a daemon under an unprivileged account means it cannot, for example, write to the syslogs. Running it under a privileged account such as root gives the process permission to do that, but also to do everything else; so if your daemon's process is compromised, an attacker has far more freedom to own your server. A server may also start as root (necessary, for example, to bind to TCP port 80) and then give up its rights to the user nobody.
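Purely as an illustration, you can inspect the nobody account on your own box, and check which user a daemon actually runs as (the nginx process name below is just an example):

```shell
# The nobody account ships with most Unix systems: no password,
# a non-login shell, and ownership of essentially nothing.
id nobody
getent passwd nobody

# To see which user a running daemon has dropped privileges to, e.g.:
#   ps -o user=,comm= -C nginx
```

For the Rails cache above, a dedicated deploy user (or www-data, which already owned the files) is generally a tighter choice than reusing nobody, since any other daemon running as nobody could then write to your cache.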
