How to continue importing OpenStreetMap into a Docker container after an interruption? - docker

I was using Ubuntu 20.04 to import OpenStreetMap like this:
docker volume create osm-data
sudo time docker run -v ./planet-200210.osm.pbf:/data/region.osm.pbf -v osm-data:/data/database overv/openstreetmap-tile-server:2.2.0 import
The process was suddenly wiped out of RAM and the volume and container disappeared, but I can still find this on disk:
4.0K ./volumes/osm-data/_data/postgres/pg_snapshots
4.0K ./volumes/osm-data/_data/postgres/pg_stat_tmp
4.0K ./volumes/osm-data/_data/postgres/pg_replslot
4.0K ./volumes/osm-data/_data/postgres/pg_stat
12K ./volumes/osm-data/_data/postgres/pg_multixact/offsets
12K ./volumes/osm-data/_data/postgres/pg_multixact/members
28K ./volumes/osm-data/_data/postgres/pg_multixact
4.0K ./volumes/osm-data/_data/postgres/pg_twophase
108K ./volumes/osm-data/_data/postgres/pg_subtrans
12K ./volumes/osm-data/_data/postgres/pg_xact
4.0K ./volumes/osm-data/_data/postgres/pg_dynshmem
568K ./volumes/osm-data/_data/postgres/global
4.0K ./volumes/osm-data/_data/postgres/pg_tblspc
4.0K ./volumes/osm-data/_data/postgres/pg_serial
4.0K ./volumes/osm-data/_data/postgres/pg_commit_ts
4.0K ./volumes/osm-data/_data/postgres/pg_wal/archive_status
2.9G ./volumes/osm-data/_data/postgres/pg_wal
4.0K ./volumes/osm-data/_data/postgres/pg_logical/mappings
4.0K ./volumes/osm-data/_data/postgres/pg_logical/snapshots
16K ./volumes/osm-data/_data/postgres/pg_logical
4.0K ./volumes/osm-data/_data/postgres/pg_notify
8.3M ./volumes/osm-data/_data/postgres/base/13758
8.4M ./volumes/osm-data/_data/postgres/base/1
8.4M ./volumes/osm-data/_data/postgres/base/13759
12K ./volumes/osm-data/_data/postgres/base/pgsql_tmp
1.1T ./volumes/osm-data/_data/postgres/base/16385
1.1T ./volumes/osm-data/_data/postgres/base
1.1T ./volumes/osm-data/_data/postgres
1.1T ./volumes/osm-data/_data
1.1T ./volumes/osm-data
How can I continue importing the remaining data?
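As far as I know, the osm2pgsql import this image runs is not resumable, and the 2.9G of pg_wal plus the 1.1T under postgres/base suggest the database was left mid-import, so the partial data is usually easier to discard than to repair. A minimal sketch of starting over, this time detached so the import survives the terminal session (the container name osm-import is just an example):

docker volume rm osm-data          # discard the partial import
docker volume create osm-data
docker run -d --name osm-import \
  -v "$(pwd)/planet-200210.osm.pbf":/data/region.osm.pbf \
  -v osm-data:/data/database \
  overv/openstreetmap-tile-server:2.2.0 import
docker logs -f osm-import          # follow progress; Ctrl-C only detaches from the logs

If the original run was killed by the OOM killer (check dmesg), simply re-running will likely fail the same way, so it may be worth adding RAM or swap first.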


Heroku slug size exploded after rails asset precompile

My app was running just fine. I had to add an additional JS file, and after recompiling assets and redeploying, the slug went to 930 MB, so it won't deploy on Heroku. I've tried clearing assets, clearing the build cache, and everything else I've found, but it's only down to 821 MB. I'm out of ideas and I'm stuck; I needed this deploy to fix a bug, but the size just won't budge.
The vendor folder is currently huge at 711 MB. How can I reduce its size?
~ $ du -ha --max-depth 1 /app | sort -hr
821M /app
711M /app/vendor
79M /app/bin
27M /app/public
2.4M /app/app
964K /app/latest.dump
520K /app/server
520K /app/generate
268K /app/config
188K /app/db
164K /app/spec
104K /app/lib
48K /app/jquery.fileupload.js
48K /app/Gemfile.lock
20K /app/widget.js
20K /app/esc
16K /app/.heroku
12K /app/.profile.d
8.0K /app/tmp
8.0K /app/spring
8.0K /app/exit
8.0K /app/.bundle
4.0K /app/.ruby-version
4.0K /app/.rspec
4.0K /app/README.MD
4.0K /app/Rakefile~
4.0K /app/Rakefile
4.0K /app/Procfile
4.0K /app/log
4.0K /app/jdd
4.0K /app/init.rb~
4.0K /app/init.rb
4.0K /app/.gitignore~
4.0K /app/.gitignore
4.0K /app/Gemfile~
4.0K /app/Gemfile
4.0K /app/config.ru
Except for vendor, the other folders look fine. Check whether the content of vendor is gems that were vendored alongside the project (e.g. via bundle package); if that's the case, it can be removed.
You can also compress the JS, CSS and other asset files further using various libraries and compression techniques, and after compression the files can be served from content storage platforms like S3.
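A sketch of how those checks might look, assuming the bulk of vendor turns out to be gems vendored into vendor/cache via bundle package (the paths here are examples, not taken from this app):

heroku run bash                      # open a one-off dyno on the current slug
du -sh /app/vendor/* | sort -hr      # see what actually fills vendor/

# back in the local checkout, stop shipping the vendored gems:
git rm -r --cached vendor/cache
echo "vendor/cache" >> .slugignore   # Heroku strips .slugignore entries before building the slug
git commit -am "stop shipping vendored gems in the slug"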

How to find out where the default InfluxDB data storage location is on Ubuntu?

I am running InfluxDB 1.7.8, and since my Ubuntu machine (18.04.3 LTS) is running low on storage (I have 80 GB), I want to:
Find out where InfluxDB physically stores the data (the big files)
Change that location to another place.
From this question here I understand that there are two locations:
/var/lib/influxdb/wal
/var/lib/influxdb/data
When I check the first, I see 4.0K file sizes, which tells me it's not the right place.
my_server:~$ sudo ls -l /var/lib/influxdb/wal/ -sh
total 20K
4.0K drwx------ 3 influxdb influxdb 4.0K Jul 9 2019 _internal
4.0K drwx------ 3 influxdb influxdb 4.0K Jul 10 2019 db1
4.0K drwx------ 3 influxdb influxdb 4.0K Nov 30 12:32 db2
4.0K drwx------ 3 influxdb influxdb 4.0K Nov 30 21:50 db3
4.0K drwx------ 3 influxdb influxdb 4.0K Dec 12 00:18 db4
When I check the second, I see the same
my_server:~$ sudo ls -l /var/lib/influxdb/data/ -sh
total 20K
4.0K drwx------ 4 influxdb influxdb 4.0K Jul 9 2019 _internal
4.0K drwx------ 4 influxdb influxdb 4.0K Jul 10 2019 db1
4.0K drwx------ 4 influxdb influxdb 4.0K Nov 30 12:32 db2
4.0K drwx------ 4 influxdb influxdb 4.0K Nov 30 21:50 db3
4.0K drwx------ 4 influxdb influxdb 4.0K Dec 12 00:18 db4
At the same time, I see that the file /var/log/syslog.1 takes a crazy amount of storage (13.7 GB), full of DB-related information.
I could not find any information about this in the InfluxDB documentation, which I think is weird.
Can anyone provide either a link where I can read up on this and figure it out,
or a solution for my primary issue: finding out where the data is physically stored?
Thanks!
Those are directories; directories on Linux always show as 4.0K in an ls listing.
Those locations are correct, though.
Try du -h -d1 /var/lib/influxdb for an accurate count.
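To then actually move the data to a bigger disk, here is a minimal sketch for InfluxDB 1.x, assuming the default config file at /etc/influxdb/influxdb.conf and an example target path of /mnt/bigdisk/influxdb:

sudo systemctl stop influxdb
sudo mkdir -p /mnt/bigdisk/influxdb
sudo rsync -a /var/lib/influxdb/ /mnt/bigdisk/influxdb/
sudo chown -R influxdb:influxdb /mnt/bigdisk/influxdb
# then point the meta, data and wal directories at the new location in influxdb.conf:
#   [meta]  dir = "/mnt/bigdisk/influxdb/meta"
#   [data]  dir = "/mnt/bigdisk/influxdb/data"
#           wal-dir = "/mnt/bigdisk/influxdb/wal"
sudo systemctl start influxdb

The 13.7 GB /var/log/syslog.1 is likely a separate issue: under systemd, influxd's log output typically ends up in syslog, so it is also worth checking what it is logging and making sure logrotate keeps that file in check.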

Docker mounting an empty directory

Working in a Jenkins X build container...
I'm trying to mount a volume from within a Docker container. The directory gets mounted; however, the files that exist in the source (host) directory are not present in the container.
In this case, the host is a Docker container as well, so basically I'm running docker-compose from inside a Docker container.
Has anyone experienced this issue and found a solution?
Here are the results :
bash-4.2# pwd
/home/jenkins
bash-4.2# ls -l datadir/
total 4
-rw-r--r-- 1 root root 4 May 15 20:06 foo.txt
bash-4.2# cat docker-compose.yml
version: '2.3'
services:
  testing-wiremock:
    image: rodolpheche/wiremock
    volumes:
      - ./datadir:/home/wiremock
bash-4.2# df -h
Filesystem Size Used Avail Use% Mounted on
overlay 95G 24G 71G 25% /
tmpfs 7.4G 0 7.4G 0% /dev
tmpfs 7.4G 0 7.4G 0% /sys/fs/cgroup
/dev/sda1 95G 24G 71G 25% /etc/hosts
tmpfs 7.4G 4.0K 7.4G 1% /root/.m2
shm 64M 0 64M 0% /dev/shm
tmpfs 7.4G 4.0K 7.4G 1% /home/jenkins/.docker
tmpfs 7.4G 1.9M 7.4G 1% /run/docker.sock
tmpfs 7.4G 0 7.4G 0% /home/jenkins/.gnupg
tmpfs 7.4G 12K 7.4G 1% /run/secrets/kubernetes.io/serviceaccount
bash-4.2# docker-compose up -d
Creating network "jenkins_default" with the default driver
Creating jenkins_testing-wiremock_1 ... done
bash-4.2# docker ps |grep wiremock
6293dee408aa rodolpheche/wiremock "/docker-entrypoint.…" 26 seconds ago Up 25 seconds 8080/tcp, 8443/tcp jenkins_testing-wiremock_1
8db3b729c5d2 rodolpheche/wiremock "/docker-entrypoint.…" 21 minutes ago Up 21 minutes (unhealthy) 8080/tcp, 8443/tcp zendeskintegration_rest_1
bd52fb96036d rodolpheche/wiremock "/docker-entrypoint.…" 21 minutes ago Up 21 minutes (unhealthy) 8080/tcp, 8443/tcp zendeskintegration_zendesk_1
bash-4.2# docker exec -it 6293dee408aa bash
root@6293dee408aa:/home/wiremock# ls -ltr
total 8
drwxr-xr-x 2 root root 4096 May 15 20:06 mappings
drwxr-xr-x 2 root root 4096 May 15 20:06 __files
I could reproduce the issue by running this on a MacOS system:
First open a shell in a container that already has docker-compose installed:
docker run --rm -v $(pwd):/work -v /var/run/docker.sock:/var/run/docker.sock --workdir /work -ti tmaier/docker-compose sh
I map the current folder so that I can work with my current project as if it were on my host.
And then inside the container:
docker-compose run testing-wiremock ls -lart
Now change the docker-compose.yml to the following:
version: '2.3'
services:
  testing-wiremock:
    image: rodolpheche/wiremock
    volumes:
      - /tmp:/home/wiremock/
and run again:
docker-compose run testing-wiremock ls -lart
This will show you the contents of the /tmp directory on the host (where the Docker daemon actually runs). To test this, you can even create a folder and a file in /tmp and run the docker-compose run command again; you will see the new files.
Moral of the story:
If the mounted folder corresponds to an existing folder on the host where the docker daemon is actually running, then the mapping will actually work.
host -> container -> container (mounts here refer to paths on the host)
In your specific case the folder is mounted empty because the mounted path (check it by running docker-compose config) is not present on the host (host = the host running your Jenkins container, not the Jenkins container itself).
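One way around this, when the files genuinely live inside the Jenkins container rather than on the node, is to use a named volume instead of a bind mount: named volumes are created and stored by the daemon, so they are reachable no matter which container asked for them. A sketch, assuming tar is available in the build container and wiremock-data is just an example name:

docker volume create wiremock-data
# stream the files from this container into the volume over stdin, so no host path is involved:
tar -C datadir -cf - . | docker run --rm -i -v wiremock-data:/home/wiremock alpine tar -C /home/wiremock -xf -
# docker-compose.yml then mounts the volume instead of the bind path:
#   services:
#     testing-wiremock:
#       image: rodolpheche/wiremock
#       volumes:
#         - wiremock-data:/home/wiremock
#   volumes:
#     wiremock-data:
#       external: true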

COS is running out of inodes for /var/lib/docker volume

I'm trying to use COS to run some services on GCP.
One of the issues I'm currently seeing is that the VMs I've started very quickly seem to run out of inodes on the /var/lib/docker filesystem. I'd have expected this to be one of the things tuned in a container-optimized OS?
wouter@nbwm-cron ~ $ df -hi
Filesystem Inodes IUsed IFree IUse% Mounted on
/dev/root 78K 13K 65K 17% /
devtmpfs 463K 204 463K 1% /dev
tmpfs 464K 1 464K 1% /dev/shm
tmpfs 464K 500 463K 1% /run
tmpfs 464K 13 464K 1% /sys/fs/cgroup
tmpfs 464K 9 464K 1% /mnt/disks
tmpfs 464K 16K 448K 4% /tmp
/dev/sda8 4.0K 11 4.0K 1% /usr/share/oem
/dev/sda1 1013K 998K 15K 99% /var
tmpfs 464K 45 464K 1% /var/lib/cloud
overlayfs 464K 39 464K 1% /etc
wouter@nbwm-cron ~ $ docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
<name>/stackdriver-agent latest 0c4b075e7550 3 days ago 1.423 GB
<none> <none> 96d027d3feea 4 days ago 905.2 MB
gcr.io/<project>/nbwm-ops/docker-php5 latest 5d2c59c7dd7a 2 weeks ago 1.788 GB
nbwm-cron wouter # tune2fs -l /dev/sda1
tune2fs 1.43.3 (04-Sep-2016)
Filesystem volume name: STATE
Last mounted on: /var
Filesystem UUID: ca44779b-ffd5-405a-bd3e-528071b45f73
Filesystem magic number: 0xEF53
Filesystem revision #: 1 (dynamic)
Filesystem features: has_journal ext_attr resize_inode dir_index filetype needs_recovery extent 64bit flex_bg sparse_super large_file huge_file uninit_bg dir_nlink extra_isize
Filesystem flags: signed_directory_hash
Default mount options: user_xattr acl
Filesystem state: clean
Errors behavior: Remount read-only
Filesystem OS type: Linux
Inode count: 1036320
Block count: 4158971
Reserved block count: 0
Free blocks: 4062454
Free inodes: 1030756
First block: 0
Block size: 4096
Fragment size: 4096
Group descriptor size: 64
Reserved GDT blocks: 747
Blocks per group: 32768
Fragments per group: 32768
Inodes per group: 8160
Inode blocks per group: 510
Flex block group size: 16
Filesystem created: Thu Jun 15 22:39:33 2017
Last mount time: Wed Jun 28 13:51:31 2017
Last write time: Wed Jun 28 13:51:31 2017
Mount count: 5
Maximum mount count: -1
Last checked: Thu Nov 19 19:00:00 2009
Check interval: 0 (<none>)
Lifetime writes: 67 MB
Reserved blocks uid: 0 (user root)
Reserved blocks gid: 0 (group root)
First inode: 11
Inode size: 256
Required extra isize: 32
Desired extra isize: 32
Journal inode: 8
Default directory hash: half_md4
Directory Hash Seed: 66aa0e7f-57da-41d0-86f7-d93270e53030
Journal backup: inode blocks
How do I tune the filesystem to have more inodes available?
This is a known issue with the overlay storage driver in Docker and is addressed by the overlay2 driver.
The new cos-61 releases use Docker 17.03 with the overlay2 storage driver. Could you please give it a try and see if the issue happens again?
Thanks!
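For nodes you fully control, a sketch of forcing the overlay2 driver through the daemon configuration; existing images are not migrated and need re-pulling, and on COS /etc does not persist across reboots and the Docker setup is managed by the image, so treat this as illustrative rather than a COS-supported procedure:

cat /etc/docker/daemon.json                 # check what is already configured first
sudo tee /etc/docker/daemon.json <<'EOF'
{
  "storage-driver": "overlay2"
}
EOF
sudo systemctl restart docker
docker info | grep "Storage Driver"         # should now report overlay2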
I have witnessed the same issue with all COS versions from 57.9202.64.0 (docker 1.11.2) on GKE 1.5 to 65.10323.85.0 (docker 17.03.2) on GKE 1.8.12-gke.1. Older versions were certainly affected too.
Those all use the overlay driver:
pdecat@gke-cluster-test-pdecat-default-pool-e8945081-xhj6 ~ $ docker info 2>&1 | grep "Storage Driver"
Storage Driver: overlay
pdecat@gke-cluster-test-pdecat-default-pool-e8945081-xhj6 ~ $ grep "\(CHROMEOS_RELEASE_VERSION\|CHROMEOS_RELEASE_CHROME_MILESTONE\)" /etc/lsb-release
CHROMEOS_RELEASE_CHROME_MILESTONE=65
CHROMEOS_RELEASE_VERSION=10323.85.0
The overlay2 driver is only used for GKE 1.9+ clusters (fresh or upgraded) with the same COS version:
pdecat@gke-cluster-test-pdecat-default-pool-e8945081-xhj6 ~ $ docker info 2>&1 | grep "Storage Driver"
Storage Driver: overlay2
pdecat@gke-cluster-test-pdecat-default-pool-e8945081-xhj6 ~ $ grep "\(CHROMEOS_RELEASE_VERSION\|CHROMEOS_RELEASE_CHROME_MILESTONE\)" /etc/lsb-release
CHROMEOS_RELEASE_CHROME_MILESTONE=65
CHROMEOS_RELEASE_VERSION=10323.85.0
When the free space/inodes issue occurs with the overlay driver, I resolve it using spotify's docker-gc:
# docker run --rm --userns host -v /var/run/docker.sock:/var/run/docker.sock -v /etc:/etc spotify/docker-gc
Before:
# df -hi /var/lib/docker/
Filesystem Inodes IUsed IFree IUse% Mounted on
/dev/sda1 6.0M 5.0M 1.1M 83% /var
# df -h /var/lib/docker/
Filesystem Size Used Avail Use% Mounted on
/dev/sda1 95G 84G 11G 89% /var
# du --inodes -s /var/lib/docker/*
180 /var/lib/docker/containers
4093 /var/lib/docker/image
4 /var/lib/docker/network
4906733 /var/lib/docker/overlay
1 /var/lib/docker/tmp
1 /var/lib/docker/trust
25 /var/lib/docker/volumes
After:
# df -hi /var/lib/docker/
Filesystem Inodes IUsed IFree IUse% Mounted on
/dev/sda1 6.0M 327K 5.7M 6% /var/lib/docker
# df -h /var/lib/docker/
Filesystem Size Used Avail Use% Mounted on
/dev/sda1 95G 6.6G 88G 7% /var/lib/docker
# du --inodes -s /var/lib/docker/*
218 /var/lib/docker/containers
1792 /var/lib/docker/image
4 /var/lib/docker/network
279002 /var/lib/docker/overlay
1 /var/lib/docker/tmp
1 /var/lib/docker/trust
25 /var/lib/docker/volumes
Note: using the usual docker rmi $(docker images --filter "dangling=true" -q --no-trunc) and docker rm $(docker ps -qa --no-trunc --filter "status=exited") did not help to recover resources in /var/lib/docker/overlay.
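On newer Docker versions, roughly the same cleanup (stopped containers, unused images and networks, build cache) is available without an external tool:

docker system prune -af             # add --volumes to also remove unused named volumes (destructive)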

No space left on device using vps rails passenger+nginx

I have a disk space problem on a VPS running a Rails application in production with Passenger and Nginx.
I have read about a lot of similar problems but haven't found a solution for mine.
The errors look like this:
bash: cannot create temp file for here-document: No space left on device
My storage
admin@vps202702:~$ sudo du -h --max-depth=1 /
[sudo] password for admin:
206M /lib
12K /srv
12M /opt
4.0K /mnt
12M /bin
14M /root
20M /boot
2.0G /usr
6.4M /sbin
6.7G /var
4.0K /lib64
16K /lost+found
0 /sys
933M /home
5.3M /run
24K /tmp
du: cannot access ‘/proc/23910/task/23910/fd/4’: No such file or directory
du: cannot access ‘/proc/23910/task/23910/fdinfo/4’: No such file or directory
du: cannot access ‘/proc/23910/fd/4’: No such file or directory
du: cannot access ‘/proc/23910/fdinfo/4’: No such file or directory
0 /proc
0 /dev
4.0K /media
7.8M /etc
9.9G /
Detailed storage
admin@vps202702:~$ sudo du -x / | sort -n | tail -40
120736 /var/www/plannings_ecranvillage/code/.git/objects/pack
121908 /home/admin/.rvm/gems/ruby-2.2.1/cache
128912 /usr/lib/locale
129972 /var/www/plannings_ecranvillage/code/.git/objects
132152 /var/www/plannings_ecranvillage/code/.git
165128 /lib/modules/3.16.0-4-amd64/kernel
168844 /lib/modules/3.16.0-4-amd64
168848 /lib/modules
184432 /usr/share/doc
185980 /usr/share/locale
204432 /usr/bin
208020 /var/www/plannings_ecranvillage/code/log
209936 /lib
225812 /var/www/plannings_ecranvillage/code/bundle/ruby/2.2.0/gems
234060 /var/www/plannings_ecranvillage/code/vendor/bundle/ruby/2.2.0/gems
293548 /var/lib
297180 /var/www/plannings_ecranvillage/code/bundle/ruby/2.2.0
297184 /var/www/plannings_ecranvillage/code/bundle/ruby
297188 /var/www/plannings_ecranvillage/code/bundle
306828 /var/www/plannings_ecranvillage/code/vendor/bundle/ruby/2.2.0
306832 /var/www/plannings_ecranvillage/code/vendor/bundle/ruby
306836 /var/www/plannings_ecranvillage/code/vendor/bundle
306852 /var/www/plannings_ecranvillage/code/vendor
444924 /usr/lib/x86_64-linux-gnu
562076 /home/admin/.rvm/gems/ruby-2.2.1/gems
771244 /home/admin/.rvm/gems/ruby-2.2.1
771252 /home/admin/.rvm/gems
842120 /usr/share
907040 /home/admin/.rvm
941852 /home/admin
946400 /usr/lib
954900 /home
956848 /var/www/plannings_ecranvillage/code
956876 /var/www/plannings_ecranvillage
956892 /var/www
2037752 /usr
5734208 /var/log/nginx
5735444 /var/log
7007748 /var
10282476 /
Inodes in use
admin@vps202702:~$ df -i
Filesystem Inodes IUsed IFree IUse% Mounted on
/dev/vda1 655360 151619 503741 24% /
udev 249112 295 248817 1% /dev
tmpfs 251192 355 250837 1% /run
tmpfs 251192 1 251191 1% /dev/shm
tmpfs 251192 3 251189 1% /run/lock
tmpfs 251192 13 251179 1% /sys/fs/cgroup
tmpfs 251192 4 251188 1% /run/user/1001
admin@vps202702:~$ df -h
Filesystem Size Used Avail Use% Mounted on
/dev/vda1 9.9G 9.9G 0 100% /
udev 10M 0 10M 0% /dev
tmpfs 393M 5.3M 388M 2% /run
tmpfs 982M 0 982M 0% /dev/shm
tmpfs 5.0M 0 5.0M 0% /run/lock
tmpfs 982M 0 982M 0% /sys/fs/cgroup
tmpfs 197M 0 197M 0% /run/user/1001
I have created a directory mytmp in /home/admin and added this line to /home/admin/.bashrc:
export TMPDIR=/home/admin/mytmp
My disk is not that big:
admin@vps202702:~$ sudo fdisk -l
Disk /dev/vda: 10 GiB, 10737418240 bytes, 20971520 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0x00000000
Device Boot Start End Sectors Size Id Type
/dev/vda1 * 2048 20971519 20969472 10G 83 Linux
Looking for inodes
root@vps202702:~# for i in /*; do echo $i; find $i |wc -l; done
/bin
151
/boot
7
/dev
297
/etc
2162
/extlinux.conf
1
/home
33553
/initrd.img
1
/ldlinux.c32
1
/ldlinux.sys
1
/lib
4597
/lib64
2
/lost+found
1
/media
1
/mnt
1
/opt
34
/proc
14853
/root
9
/run
360
/sbin
146
/srv
3
/sys
15871
/tmp
6
/usr
85647
/var
25250
/vmlinuz
1
Passenger version
Phusion Passenger 5.0.28
RVM version
admin@vps202702:~$ rvm -v
rvm 1.26.11 (latest) by Wayne E. Seguin <wayneeseguin@gmail.com>, Michal Papis <mpapis@gmail.com> [https://rvm.io/]
Trying to solve it with this trick:
admin@vps202702:~$ sudo -i
root@vps202702:~# pushd /proc ; for i in [1-9]* ; do ls -l $i/fd | grep "(deleted)" && (echo -n "used by: " ; ps -p $i | grep -v PID ; echo ) ; done ; popd
/proc ~
~
I'm not sure, but I think something is wrong with /proc?
What's wrong?
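Nothing here suggests a problem with /proc: the "cannot access" messages are just processes that exited while du was scanning, and df -i shows inodes are fine; the disk is simply full. The du output points at /var/log/nginx, which holds roughly 5.5 GiB of the 9.9 GiB root filesystem. A minimal cleanup sketch, assuming the default Debian nginx log names and logrotate setup:

sudo du -sh /var/log/nginx/*                  # confirm which logs are huge
sudo truncate -s 0 /var/log/nginx/access.log  # truncate in place rather than rm,
sudo truncate -s 0 /var/log/nginx/error.log   # since nginx keeps the files open
sudo rm -f /var/log/nginx/*.gz                # old rotated logs are safe to delete
sudo logrotate -f /etc/logrotate.conf         # force a rotation now; also check /etc/logrotate.d/nginx
                                              # so the logs keep getting rotated and compressed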
