How to control the number of processes spawned by ng build --prod=true so the Bitbucket pipeline does not fail with Container 'Build' exceeded memory limit

I am capturing memory and process dumps on the instance where ng build is triggered. Below I can show the point at which a number of processes are spawned by ng build. Is there a way to control this number?
total used free shared buff/cache available
Mem: 30G 8.5G 2.7G 215M 19G 21G
Swap: 0B 0B 0B
Fri Jan 27 15:07:19 UTC 2023
USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND
root 1 0.0 0.0 4288 708 ? Ss 15:02 0:00 /bin/sh -c exit $( (/usr/bin/mkfifo /opt/atlassian/pipelines/agent/tmp/build_result && /bin/cat /opt/atlassian/pipelines/agent/tmp/build_result) || /bin/echo 1)
root 8 0.0 0.0 4288 96 ? S 15:02 0:00 /bin/sh -c exit $( (/usr/bin/mkfifo /opt/atlassian/pipelines/agent/tmp/build_result && /bin/cat /opt/atlassian/pipelines/agent/tmp/build_result) || /bin/echo 1)
root 9 0.0 0.0 4200 716 ? S 15:02 0:00 /bin/cat /opt/atlassian/pipelines/agent/tmp/build_result
root 11 0.0 0.0 4288 1460 ? Ss 15:02 0:00 /bin/sh /opt/atlassian/pipelines/agent/tmp/wrapperScript14257846929627798257.sh
root 35 0.0 0.0 4288 764 ? S 15:02 0:00 /bin/sh /opt/atlassian/pipelines/agent/tmp/buildScript1578831321044327918.sh
root 36 0.0 0.0 18004 2936 ? S 15:02 0:00 /bin/bash -i /opt/atlassian/pipelines/agent/tmp/bashScript16516797543053797206.sh
root 37 0.0 0.0 18008 2416 ? S 15:02 0:00 /bin/bash -i /opt/atlassian/pipelines/agent/tmp/bashScript16516797543053797206.sh
root 38 0.0 0.0 18008 2416 ? S 15:02 0:00 /bin/bash -i /opt/atlassian/pipelines/agent/tmp/bashScript16516797543053797206.sh
root 40 0.0 0.0 18024 2924 ? S 15:02 0:00 bash docker-build.sh
root 60 0.0 0.1 666880 41336 ? Sl 15:02 0:00 npm
root 71 0.0 0.0 4296 808 ? S 15:02 0:00 sh -c npm i --unsafe-perm -g @angular/cli && npm i && npm run copyi18n && npm run bump_version && node --max_old_space_size=6656 node_modules/@angular/cli/bin/ng build --prod=true --base-href=/
root 327 125 10.3 4375844 3362764 ? Rl 15:03 5:00 ng build --prod=true --base-href=/
root 574 0.0 0.0 4196 680 ? S 15:07 0:00 sleep 5
root 576 0.0 0.0 36644 2844 ? R 15:07 0:00 ps aux
In the above there is PID 327 and only one ng build instance.
After 10 seconds I see the following:
Fri Jan 27 15:07:29 UTC 2023
USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND
root 1 0.0 0.0 4288 708 ? Ss 15:02 0:00 /bin/sh -c exit $( (/usr/bin/mkfifo /opt/atlassian/pipelines/agent/tmp/build_result && /bin/cat /opt/atlassian/pipelines/agent/tmp/build_result) || /bin/echo 1)
root 8 0.0 0.0 4288 96 ? S 15:02 0:00 /bin/sh -c exit $( (/usr/bin/mkfifo /opt/atlassian/pipelines/agent/tmp/build_result && /bin/cat /opt/atlassian/pipelines/agent/tmp/build_result) || /bin/echo 1)
root 9 0.0 0.0 4200 716 ? S 15:02 0:00 /bin/cat /opt/atlassian/pipelines/agent/tmp/build_result
root 11 0.0 0.0 4288 1460 ? Ss 15:02 0:00 /bin/sh /opt/atlassian/pipelines/agent/tmp/wrapperScript14257846929627798257.sh
root 35 0.0 0.0 4288 764 ? S 15:02 0:00 /bin/sh /opt/atlassian/pipelines/agent/tmp/buildScript1578831321044327918.sh
root 36 0.0 0.0 18004 2936 ? S 15:02 0:00 /bin/bash -i /opt/atlassian/pipelines/agent/tmp/bashScript16516797543053797206.sh
root 37 0.0 0.0 18008 2416 ? S 15:02 0:00 /bin/bash -i /opt/atlassian/pipelines/agent/tmp/bashScript16516797543053797206.sh
root 38 0.0 0.0 18008 2416 ? S 15:02 0:00 /bin/bash -i /opt/atlassian/pipelines/agent/tmp/bashScript16516797543053797206.sh
root 40 0.0 0.0 18024 2924 ? S 15:02 0:00 bash docker-build.sh
root 60 0.0 0.1 666880 41336 ? Sl 15:02 0:00 npm
root 71 0.0 0.0 4296 808 ? S 15:02 0:00 sh -c npm i --unsafe-perm -g @angular/cli && npm i && npm run copyi18n && npm run bump_version && node --max_old_space_size=6656 node_modules/@angular/cli/bin/ng build --prod=true --base-href=/
root 327 123 12.5 5146756 4055172 ? Sl 15:03 5:07 ng build --prod=true --base-href=/
root 578 40.1 0.2 611084 90388 ? Rl 15:07 0:02 /usr/local/bin/node --max_old_space_size=6656 /opt/atlassian/pipelines/agent/build/node_modules/worker-farm/lib/child/index.js /usr/local/bin/node /opt/atlassian/pipelines/agent/build/node_modules/@angular/cli/bin/ng build --prod=true --base-href=/
root 585 86.2 0.3 651124 129216 ? Rl 15:07 0:04 /usr/local/bin/node --max_old_space_size=6656 /opt/atlassian/pipelines/agent/build/node_modules/worker-farm/lib/child/index.js /usr/local/bin/node /opt/atlassian/pipelines/agent/build/node_modules/@angular/cli/bin/ng build --prod=true --base-href=/
root 596 25.6 0.3 638388 117496 ? Rl 15:07 0:01 /usr/local/bin/node --max_old_space_size=6656 /opt/atlassian/pipelines/agent/build/node_modules/worker-farm/lib/child/index.js /usr/local/bin/node /opt/atlassian/pipelines/agent/build/node_modules/@angular/cli/bin/ng build --prod=true --base-href=/
root 604 29.0 0.2 609828 88928 ? Rl 15:07 0:01 /usr/local/bin/node --max_old_space_size=6656 /opt/atlassian/pipelines/agent/build/node_modules/worker-farm/lib/child/index.js /usr/local/bin/node /opt/atlassian/pipelines/agent/build/node_modules/@angular/cli/bin/ng build --prod=true --base-href=/
root 611 102 0.2 617016 95864 ? Rl 15:07 0:04 /usr/local/bin/node --max_old_space_size=6656 /opt/atlassian/pipelines/agent/build/node_modules/worker-farm/lib/child/index.js /usr/local/bin/node /opt/atlassian/pipelines/agent/build/node_modules/@angular/cli/bin/ng build --prod=true --base-href=/
root 618 32.7 0.3 620824 99604 ? Rl 15:07 0:01 /usr/local/bin/node --max_old_space_size=6656 /opt/atlassian/pipelines/agent/build/node_modules/worker-farm/lib/child/index.js /usr/local/bin/node /opt/atlassian/pipelines/agent/build/node_modules/@angular/cli/bin/ng build --prod=true --base-href=/
root 625 66.2 0.4 657444 136912 ? Rl 15:07 0:02 /usr/local/bin/node --max_old_space_size=6656 /opt/atlassian/pipelines/agent/build/node_modules/worker-farm/lib/child/index.js /usr/local/bin/node /opt/atlassian/pipelines/agent/build/node_modules/@angular/cli/bin/ng build --prod=true --base-href=/
root 633 0.0 0.0 4196 652 ? S 15:07 0:00 sleep 5
root 635 0.0 0.0 36644 2804 ? R 15:07 0:00 ps aux
After this point the available RAM drops and the build eventually fails:
total used free shared buff/cache available
Mem: 30G 11G 762M 169M 18G 19G
Swap: 0B 0B 0B
Bitbucket Pipeline File
branches:
  master:
    - step:
        size: 2x
        script:
          - bash docker-build.sh
  '{dev/*,release/*,hotfix/*}':
    - step:
        size: 2x
        script:
          - while true; do date && ps aux && echo "" && sleep 5; done &
          - while true; do free -h && echo "" && sleep 5; done &
          - bash docker-build.sh
definitions:
  services:
    node:
      image: node:10.15.3
      memory: 7680
    docker:
      memory: 512
# Docker true for running docker daemon commands. By default it will be there in step
options:
  docker: true
  size: 2x
package.json snippet
{
  "name": "de-ui",
  "version": "4.7.1",
  "scripts": {
    "ng": "ng",
    "start": "node --max_old_space_size=8192 node_modules/@angular/cli/bin/ng serve",
    "build": "ng build",
    "test": "ng test",
    "lint": "ng lint",
    "e2e": "ng e2e",
    "prod_build": "npm i --unsafe-perm -g @angular/cli && npm i && npm run copyi18n && npm run bump_version && node --max_old_space_size=6656 node_modules/@angular/cli/bin/ng build --prod=true --base-href=/",
    "copyi18n": "node ./load.po.files.js ./src/assets/i18n/po/ ./src/assets/i18n/",
    "createi18npo": "node ./load.po.files.js ./src/assets/i18n/ ./src/assets/i18n/po/",
    "update_de": "npm update @de/de-ui-core @de/de-jsf-form @de/de-ui-app @de/de-ui-api",
    "bump_version": "node ./bump_version.js",
    "serve_prod": "node --max_old_space_size=8192 node_modules/@angular/cli/bin/ng serve --prod=true"
  }
}
Note:
Size: 2x -- 8 GB Available
docker - Set to true
Sizing:
  node:
    image: node:10.15.3
    memory: 7680
  docker:
    memory: 512
node --max_old_space_size=6656 -- provided in ng build
1. How can I avoid so many processes from getting triggered?
2. Is there a way I can rearrange the memory allocation to avoid getting --> Container 'Build' exceeded memory limit?
I have tried changing the memory sizing, but have not been able to make it work.
I am thinking that if the number of processes getting spawned can be controlled, then the memory issue can be handled.
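One observation worth adding (a back-of-the-envelope sketch, not a confirmed diagnosis): every worker-farm child in the second listing inherits the parent's --max_old_space_size=6656, so the parent plus seven workers have a combined heap ceiling far above the 7680 MB container limit:

```shell
# Worst-case V8 old-space budget for the process listing above.
# Assumption: the parent ng build plus 7 worker-farm children can each grow
# their heap up to the inherited 6656 MB cap before the OS pushes back.
HEAP_MB=6656
WORKERS=7
TOTAL=$(( HEAP_MB * (WORKERS + 1) ))
echo "worst-case heap ceiling: ${TOTAL} MB (container limit: 7680 MB)"
```

In practice each worker stays much smaller, but nothing in this configuration prevents the children from collectively crossing the step's memory limit, since the flag caps each process individually rather than the process tree.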

Related

Dockerfile-dev vs Dockerfile-prod

This is my Docker project structure:
├── docker-compose-dev.yml
├── docker-compose-prod.yml
└── services
├── client
│ ├── Dockerfile-dev
│ ├── Dockerfile-prod
├── nginx
│ ├── Dockerfile-dev
│ ├── Dockerfile-prod
│ ├── dev.conf
│ └── prod.conf
└── web
├── Dockerfile-dev <----- THIS
├── Dockerfile-prod <----- THIS
├── entrypoint-prod.sh
├── entrypoint.sh
├── htmlcov
├── manage.py
├── project
│ ├── __init__.py
│ ├── api
│ │ ├── __init__.py
│ │ ├── models.py
│ │ ├── templates
│ │ │ └── index.html
│ │ └── users.py
│ ├── config.py
│ ├── db
│ │ ├── Dockerfile
│ │ └── create.sql
└── requirements.txt
At development stage, docker images for my project have been successfully created with:
$ docker-compose -f docker-compose-dev.yml up --build
Dockerfile-dev on "web" service
# base image
FROM python:3.6-alpine
# install dependencies
RUN apk update && \
apk add --virtual build-deps gcc python-dev musl-dev && \
apk add libffi-dev && \
apk add postgresql-dev && \
apk add netcat-openbsd && \
apk add bind-tools && \
apk add --update --no-cache g++ libxslt-dev && \
apk add jpeg-dev zlib-dev
ENV PACKAGES="\
dumb-init \
musl \
libc6-compat \
linux-headers \
build-base \
bash \
git \
ca-certificates \
freetype \
libgfortran \
libgcc \
libstdc++ \
openblas \
tcl \
tk \
libssl1.0 \
"
ENV PYTHON_PACKAGES="\
numpy \
matplotlib \
scipy \
scikit-learn \
nltk \
"
RUN apk add --no-cache --virtual build-dependencies python3 \
&& apk add --virtual build-runtime \
build-base python3-dev openblas-dev freetype-dev pkgconfig gfortran \
&& ln -s /usr/include/locale.h /usr/include/xlocale.h \
&& python3 -m ensurepip \
&& rm -r /usr/lib/python*/ensurepip \
&& pip3 install --upgrade pip setuptools \
&& ln -sf /usr/bin/python3 /usr/bin/python \
&& ln -sf pip3 /usr/bin/pip \
&& rm -r /root/.cache \
&& pip install --no-cache-dir $PYTHON_PACKAGES \
&& pip3 install 'pandas<0.21.0' \
&& apk del build-runtime \
&& apk add --no-cache --virtual build-dependencies $PACKAGES \
&& rm -rf /var/cache/apk/*
# set working directory
WORKDIR /usr/src/app
# add and install requirements
COPY ./requirements.txt /usr/src/app/requirements.txt
RUN pip install -r requirements.txt
# add entrypoint.sh
COPY ./entrypoint.sh /usr/src/app/entrypoint.sh
RUN chmod +x /usr/src/app/entrypoint.sh
# add app
COPY . /usr/src/app
# run server
CMD ["/usr/src/app/entrypoint.sh"]
docker ps -as shows me the project up and running at localhost:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES SIZE
f88c27e5334f dev3_nginx "nginx -g 'daemon of…" 14 hours ago Up 14 hours 0.0.0.0:80->80/tcp dev3_nginx_1 2B (virtual 16.1MB)
f77eb7949fef dev3_client "npm start" 14 hours ago Up 14 hours 0.0.0.0:3007->3000/tcp dev3_client_1 55B (virtual 553MB)
33b1b50931a6 dev3_web "/usr/src/app/entryp…" 14 hours ago Up 14 hours 0.0.0.0:5001->5000/tcp dev3_web_1 35.3kB (virtual 3.32GB)
0e28363ab85a dev3_web-db "docker-entrypoint.s…" 3 days ago Up 14 hours 0.0.0.0:5435->5432/tcp dev3_web-db_1 63B (virtual 71.7MB)
But I can't build my production images with:
$ docker-compose -f docker-compose-prod.yml up --build
Dockerfile-prod on "web" service
(...the same as Dockerfile-dev from top to here)
# set working directory
WORKDIR /usr/src/app
# add and install requirements
COPY ./requirements.txt /usr/src/app/requirements.txt
RUN pip install -r requirements.txt
# new
# add entrypoint.sh
COPY ./entrypoint.sh /usr/src/app/entrypoint-prod.sh
RUN chmod +x /usr/src/app/entrypoint-prod.sh
# add app
COPY . /usr/src/app
# new
# run server
CMD ["/usr/src/app/entrypoint-prod.sh"]
But the production build hangs after numpy is collected, and it never resolves.
(...)
Collecting pip
Downloading https://files.pythonhosted.org/packages/d8/f3/413bab4ff08e1fc4828dfc59996d721917df8e8583ea85385d51125dceff/pip-19.0.3-py2.py3-none-any.whl (1.4MB)
Requirement already up-to-date: setuptools in /usr/local/lib/python3.6/site-packages (40.8.0)
Installing collected packages: pip
Found existing installation: pip 19.0.2
Uninstalling pip-19.0.2:
Successfully uninstalled pip-19.0.2
Successfully installed pip-19.0.3
Collecting numpy
Downloading https://files.pythonhosted.org/packages/2b/26/07472b0de91851b6656cbc86e2f0d5d3a3128e7580f23295ef58b6862d6c/numpy-1.16.1.zip (5.1MB)
Collecting matplotlib
Downloading https://files.pythonhosted.org/packages/89/0c/653aec68e9cfb775c4fbae8f71011206e5e7fe4d60fcf01ea1a9d3bc957f/matplotlib-3.0.2.tar.gz (36.5MB)
# HANGS HERE ˆˆˆˆˆ
The problem does not seem to be matplotlib because if I remove this package it hangs at scipy, the next after numpy, and so on...
NOTE: I am trying to build production not at localhost but rather in a docker-machine.
$ docker-machine ls
NAME ACTIVE DRIVER STATE URL SWARM DOCKER ERRORS
testdriven-dev - virtualbox Running tcp://192.168.99.100:2376 v18.09.1
testdriven-prod * amazonec2 Running tcp://18.234.200.115:2376 v18.09.1 <------ THIS ONE
with:
$ docker-machine env testdriven-dev
$ eval $(docker-machine env testdriven-prod)
$ export REACT_APP_WEB_SERVICE_URL=http://18.234.200.115
$ docker-compose -f docker-compose-prod.yml up -d --build
The environment was pruned of any dangling images.
Why is this happening?
Edit
Following advice down on comments, I SSHed into docker-machine to check CPU use during build and, at the point of hanging, this is what I get:
$ docker-machine ssh testdriven-prod free
total used free shared buff/cache available
Mem: 1014540 136296 632136 10712 246108 692056
Swap: 0 0 0
and:
$ docker-machine ssh testdriven-prod df -h
Filesystem Size Used Avail Use% Mounted on
udev 488M 0 488M 0% /dev
tmpfs 100M 11M 89M 11% /run
/dev/xvda1 16G 2.2G 14G 14% /
tmpfs 496M 0 496M 0% /dev/shm
tmpfs 5.0M 0 5.0M 0% /run/lock
tmpfs 496M 0 496M 0% /sys/fs/cgroup
tmpfs 100M 0 100M 0% /run/user/1000
SSH:
top - 01:34:52 up 18 days, 23:47, 1 user, load average: 0.00, 0.00, 0.00
Tasks: 109 total, 1 running, 108 sleeping, 0 stopped, 0 zombie
%Cpu(s): 0.3 us, 0.0 sy, 0.0 ni, 99.7 id, 0.0 wa, 0.0 hi, 0.0 si, 0.0 st
KiB Mem : 1014540 total, 594752 free, 124092 used, 295696 buff/cache
KiB Swap: 0 total, 0 free, 0 used. 698272 avail Mem
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
8363 root 20 0 435240 21768 3124 S 0.3 2.1 30:07.40 containerd
1 root 20 0 185312 4916 2996 S 0.0 0.5 0:12.98 systemd
more:
ubuntu@testdriven-prod:~$ ps aux -Hww
USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND
root 2 0.0 0.0 0 0 ? S Feb03 0:00 [kthreadd]
root 3 0.0 0.0 0 0 ? S Feb03 0:37 [ksoftirqd/0]
root 5 0.0 0.0 0 0 ? S< Feb03 0:00 [kworker/0:0H]
root 7 0.0 0.0 0 0 ? S Feb03 0:17 [rcu_sched]
root 8 0.0 0.0 0 0 ? S Feb03 0:00 [rcu_bh]
root 9 0.0 0.0 0 0 ? S Feb03 0:00 [migration/0]
root 10 0.0 0.0 0 0 ? S Feb03 0:07 [watchdog/0]
root 11 0.0 0.0 0 0 ? S Feb03 0:00 [kdevtmpfs]
root 12 0.0 0.0 0 0 ? S< Feb03 0:00 [netns]
root 13 0.0 0.0 0 0 ? S< Feb03 0:00 [perf]
root 14 0.0 0.0 0 0 ? S Feb03 0:00 [xenwatch]
root 15 0.0 0.0 0 0 ? S Feb03 0:00 [xenbus]
root 17 0.0 0.0 0 0 ? S Feb03 0:00 [khungtaskd]
root 18 0.0 0.0 0 0 ? S< Feb03 0:00 [writeback]
root 19 0.0 0.0 0 0 ? SN Feb03 0:00 [ksmd]
root 20 0.0 0.0 0 0 ? SN Feb03 0:03 [khugepaged]
root 21 0.0 0.0 0 0 ? S< Feb03 0:00 [crypto]
root 22 0.0 0.0 0 0 ? S< Feb03 0:00 [kintegrityd]
root 23 0.0 0.0 0 0 ? S< Feb03 0:00 [bioset]
root 24 0.0 0.0 0 0 ? S< Feb03 0:00 [kblockd]
root 25 0.0 0.0 0 0 ? S< Feb03 0:00 [ata_sff]
root 26 0.0 0.0 0 0 ? S< Feb03 0:00 [md]
root 27 0.0 0.0 0 0 ? S< Feb03 0:00 [devfreq_wq]
root 30 0.0 0.0 0 0 ? S Feb03 0:07 [kswapd0]
root 31 0.0 0.0 0 0 ? S< Feb03 0:00 [vmstat]
root 32 0.0 0.0 0 0 ? S Feb03 0:00 [fsnotify_mark]
root 33 0.0 0.0 0 0 ? S Feb03 0:00 [ecryptfs-kthrea]
root 49 0.0 0.0 0 0 ? S< Feb03 0:00 [kthrotld]
root 50 0.0 0.0 0 0 ? S< Feb03 0:00 [bioset]
root 51 0.0 0.0 0 0 ? S< Feb03 0:00 [bioset]
root 52 0.0 0.0 0 0 ? S< Feb03 0:00 [bioset]
root 53 0.0 0.0 0 0 ? S< Feb03 0:00 [bioset]
root 54 0.0 0.0 0 0 ? S< Feb03 0:00 [bioset]
root 55 0.0 0.0 0 0 ? S< Feb03 0:00 [bioset]
root 56 0.0 0.0 0 0 ? S< Feb03 0:00 [bioset]
root 57 0.0 0.0 0 0 ? S< Feb03 0:00 [bioset]
root 58 0.0 0.0 0 0 ? S< Feb03 0:00 [bioset]
root 59 0.0 0.0 0 0 ? S< Feb03 0:00 [bioset]
root 60 0.0 0.0 0 0 ? S< Feb03 0:00 [bioset]
root 61 0.0 0.0 0 0 ? S< Feb03 0:00 [bioset]
root 62 0.0 0.0 0 0 ? S< Feb03 0:00 [bioset]
root 63 0.0 0.0 0 0 ? S< Feb03 0:00 [bioset]
root 64 0.0 0.0 0 0 ? S< Feb03 0:00 [bioset]
root 65 0.0 0.0 0 0 ? S< Feb03 0:00 [bioset]
root 66 0.0 0.0 0 0 ? S< Feb03 0:00 [bioset]
root 67 0.0 0.0 0 0 ? S< Feb03 0:00 [bioset]
root 68 0.0 0.0 0 0 ? S< Feb03 0:00 [bioset]
root 69 0.0 0.0 0 0 ? S< Feb03 0:00 [bioset]
root 70 0.0 0.0 0 0 ? S< Feb03 0:00 [bioset]
root 71 0.0 0.0 0 0 ? S< Feb03 0:00 [bioset]
root 72 0.0 0.0 0 0 ? S< Feb03 0:00 [bioset]
root 73 0.0 0.0 0 0 ? S< Feb03 0:00 [bioset]
root 74 0.0 0.0 0 0 ? S Feb03 0:00 [scsi_eh_0]
root 75 0.0 0.0 0 0 ? S< Feb03 0:00 [scsi_tmf_0]
root 76 0.0 0.0 0 0 ? S Feb03 0:00 [scsi_eh_1]
root 77 0.0 0.0 0 0 ? S< Feb03 0:00 [scsi_tmf_1]
root 79 0.0 0.0 0 0 ? S< Feb03 0:00 [bioset]
root 83 0.0 0.0 0 0 ? S< Feb03 0:00 [ipv6_addrconf]
root 96 0.0 0.0 0 0 ? S< Feb03 0:00 [deferwq]
root 258 0.0 0.0 0 0 ? S< Feb03 0:00 [raid5wq]
root 288 0.0 0.0 0 0 ? S< Feb03 0:00 [bioset]
root 310 0.0 0.0 0 0 ? S Feb03 0:06 [jbd2/xvda1-8]
root 311 0.0 0.0 0 0 ? S< Feb03 0:00 [ext4-rsv-conver]
root 386 0.0 0.0 0 0 ? S< Feb03 0:00 [iscsi_eh]
root 389 0.0 0.0 0 0 ? S< Feb03 0:00 [ib_addr]
root 392 0.0 0.0 0 0 ? S< Feb03 0:00 [ib_mcast]
root 394 0.0 0.0 0 0 ? S< Feb03 0:00 [ib_nl_sa_wq]
root 397 0.0 0.0 0 0 ? S< Feb03 0:00 [ib_cm]
root 398 0.0 0.0 0 0 ? S< Feb03 0:00 [iw_cm_wq]
root 399 0.0 0.0 0 0 ? S< Feb03 0:00 [rdma_cm]
root 411 0.0 0.0 0 0 ? S Feb03 0:00 [kauditd]
root 541 0.0 0.0 0 0 ? S< Feb03 0:02 [kworker/0:1H]
root 23959 0.0 0.0 0 0 ? S< Feb03 0:00 [xfsalloc]
root 23960 0.0 0.0 0 0 ? S< Feb03 0:00 [xfs_mru_cache]
root 12186 0.0 0.0 0 0 ? S Feb21 0:00 [kworker/u30:2]
root 12198 0.0 0.0 0 0 ? S 00:21 0:00 [kworker/u30:1]
root 12219 0.0 0.0 0 0 ? S 00:21 0:00 [kworker/0:0]
root 13607 0.0 0.0 0 0 ? S 02:16 0:00 [kworker/0:2]
root 1 0.0 0.4 185312 5020 ? Ss Feb03 0:13 /lib/systemd/systemd --system --deserialize 27
root 366 0.0 0.2 28352 2212 ? Ss Feb03 0:04 /lib/systemd/systemd-journald
root 437 0.0 0.0 102968 372 ? Ss Feb03 0:00 /sbin/lvmetad -f
root 942 0.0 0.2 16116 2780 ? Ss Feb03 0:00 /sbin/dhclient -1 -v -pf /run/dhclient.eth0.pid -lf /var/lib/dhcp/dhclient.eth0.leases -I -df /var/lib/dhcp/dhclient6.eth0.leases eth0
root 1091 0.0 0.2 26068 2048 ? Ss Feb03 0:01 /usr/sbin/cron -f
daemon 1097 0.0 0.1 26044 1664 ? Ss Feb03 0:00 /usr/sbin/atd -f
message+ 1101 0.0 0.1 42992 1704 ? Ss Feb03 0:01 /usr/bin/dbus-daemon --system --address=systemd: --nofork --nopidfile --systemd-activation
root 1110 0.0 0.3 272944 3360 ? Ssl Feb03 0:20 /usr/lib/accountsservice/accounts-daemon
root 1113 0.0 0.6 636464 7008 ? Ssl Feb03 0:07 /usr/bin/lxcfs /var/lib/lxcfs/
root 1139 0.0 0.2 28616 2452 ? Ss Feb03 0:01 /lib/systemd/systemd-logind
syslog 1140 0.0 0.2 260628 2228 ? Ssl Feb03 0:01 /usr/sbin/rsyslogd -n
root 1151 0.0 0.1 4396 1312 ? Ss Feb03 0:00 /usr/sbin/acpid
root 1157 0.0 0.0 5220 116 ? Ss Feb03 0:38 /sbin/iscsid
root 1158 0.0 0.3 5720 3508 ? S<Ls Feb03 3:04 /sbin/iscsid
root 1172 0.0 0.0 13372 144 ? Ss Feb03 0:00 /sbin/mdadm --monitor --pid-file /run/mdadm/monitor.pid --daemonise --scan --syslog
root 1263 0.0 0.1 12840 1588 ttyS0 Ss+ Feb03 0:00 /sbin/agetty --keep-baud 115200 38400 9600 ttyS0 vt220
root 1266 0.0 0.1 14656 1472 tty1 Ss+ Feb03 0:00 /sbin/agetty --noclear tty1 linux
root 8363 0.1 2.1 435240 21768 ? Ssl Feb03 30:09 /usr/bin/containerd
root 9248 0.0 5.6 583988 57744 ? Ssl Feb03 18:11 /usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --storage-driver overlay2 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=amazonec2
root 24091 0.0 0.1 277088 1652 ? Ssl Feb03 0:00 /usr/lib/policykit-1/polkitd --no-debug
root 1068 0.0 0.2 65512 2616 ? Ss Feb09 0:00 /usr/sbin/sshd -D
root 13498 0.0 0.6 92800 6580 ? Ss 01:32 0:00 sshd: ubuntu [priv]
ubuntu 13560 0.0 0.3 92800 3352 ? S 01:32 0:01 sshd: ubuntu@pts/0
ubuntu 13561 0.0 0.5 21388 5104 pts/0 Ss 01:32 0:00 -bash
ubuntu 13617 0.0 0.3 36228 3332 pts/0 R+ 02:23 0:00 ps aux -Hww
root 31239 0.0 1.5 292584 15836 ? Ssl Feb13 0:20 /usr/lib/snapd/snapd
systemd+ 22714 0.0 0.1 100324 1816 ? Ssl Feb20 0:00 /lib/systemd/systemd-timesyncd
root 23340 0.0 0.2 42124 2484 ? Ss Feb20 0:00 /lib/systemd/systemd-udevd
ubuntu 13500 0.0 0.4 45148 4608 ? Ss 01:32 0:00 /lib/systemd/systemd --user
ubuntu 13505 0.0 0.2 208764 2032 ? S 01:32 0:00 (sd-pam)
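A possible explanation (hedged; inferred from the free output above rather than anything confirmed in the build logs): the amazonec2 machine reports roughly 1 GB of RAM, and on an Alpine (musl) base image pip cannot use the prebuilt manylinux wheels, so numpy, scipy, and matplotlib are compiled from source. Those compilations typically need far more memory than the instance has, so the build is likely being starved rather than truly hanging:

```shell
# Compare the instance's reported memory against a rough source-build working set.
# TOTAL_KB is taken from the `docker-machine ssh testdriven-prod free` output above;
# NEEDED_KB (2 GB) is an assumed ballpark for compiling scientific packages.
TOTAL_KB=1014540
NEEDED_KB=$(( 2 * 1024 * 1024 ))
if [ "$TOTAL_KB" -lt "$NEEDED_KB" ]; then
  echo "instance memory likely insufficient for source builds"
fi
```

A larger instance type or temporary swap on the docker-machine host would be ways to test this theory.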

Erlang: rebar3 release, start beam first?

I am trying to utilize a new feature in 19.3 per this question: Erlang: does the application behavior trap SIGTERM?
My understanding is that sending SIGTERM to BEAM now triggers a graceful shutdown in Erlang 19.3+
I start my application in Docker using ENTRYPOINT ./_build/default/rel/myapp/bin/myapp, where ./_build/default/rel/myapp/bin/myapp is generated by rebar3 release.
When I do this in Docker, myapp gets PID 1 and BEAM seems to get another PID.
Is there a different set of commands I can run such that BEAM gets PID1 and myapp gets loaded from there? Something like
./start_beam; ./start_my_app_via_beam?
I need this because docker stop sends SIGTERM to PID 1. I need that to be BEAM. Using the above entrypoint, here is what happens in the container:
top
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
1 root 20 0 4340 644 556 S 0.0 0.0 0:00.01 myapp
14 root 20 0 3751188 50812 6660 S 0.0 0.6 0:00.48 beam.smp
18 root 20 0 11492 116 0 S 0.0 0.0 0:00.00 epmd
31 root 20 0 4220 680 604 S 0.0 0.0 0:00.10 erl_child_setup
53 root 20 0 11456 944 840 S 0.0 0.0 0:00.00 inet_gethost
54 root 20 0 17764 1660 1504 S 0.0 0.0 0:00.00 inet_gethost
55 root 20 0 20252 3208 2720 S 0.0 0.0 0:00.02 bash
61 root 20 0 21956 2468 2052 R 0.0 0.0 0:00.00 top
Currently, to get around this, I have this horrendous beast:
#!/usr/bin/env bash
echo "if testing locally send SIGTERM to $$"
term_handler() {
echo "Stopping the Erlang VM gracefully"
#/usr/local/Cellar/erlang/19.1/lib/erlang/lib/erl_interface-3.9.1/bin/erl_call -c myapp -s -a 'init stop' -n 'myapp@localhost'
/usr/local/lib/erlang/lib/erl_interface-3.9.2/bin/erl_call -c myapp -s -a 'init stop' -n 'myapp@localhost'
echo "Erlang VM Stopped"
}
trap term_handler SIGQUIT SIGINT SIGTERM
./_build/default/rel/myapp/bin/myapp &
PID=$!
echo "Erlang VM Started"
#wait $PID
while kill -0 $PID ; do wait $PID ; EXIT_STATUS=$? ; done
echo "Exiting Wrapper."
exit $EXIT_STATUS
And then I do `ENTRYPOINT ["./thisscript"]`
This beast becomes PID 1, and it finds the correct thing to kill after that.
I'm trying to get rid of this script.
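One possible simplification (a sketch, assuming the release was built with a relx-style extended start script, which rebar3 release generates): those scripts accept a foreground command that exec's straight into the VM instead of daemonizing, so beam.smp ends up as PID 1 and receives the SIGTERM that docker stop sends, with no wrapper script needed.

```dockerfile
# Hedged sketch: relx "foreground" mode exec's the VM directly, so BEAM
# replaces the start script as PID 1 and handles SIGTERM for the graceful
# shutdown described above (exec-form ENTRYPOINT, no intermediate /bin/sh).
ENTRYPOINT ["./_build/default/rel/myapp/bin/myapp", "foreground"]
```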

cron task in docker container not being executed

I have this Dockerfile (where I am using miniconda just because I would like to schedule some python scripts, but it's a debian:jessie docker image):
FROM continuumio/miniconda:4.2.12
RUN mkdir -p /workspace
WORKDIR /workspace
ADD volume .
RUN apt-get update
RUN apt-get install -y cron
ENTRYPOINT ["/bin/sh", "/workspace/conf/entrypoint.sh"]
The script entrypoint.sh that keeps the container alive is this one:
#!/usr/bin/env bash
echo ">>> Configuring cron"
service cron start
touch /var/log/cron.log
mv /workspace/conf/root /var/spool/cron/crontabs/root
chmod +x /var/spool/cron/crontabs/root
crontab /var/spool/cron/crontabs/root
echo ">>> Done!"
tail -f /var/log/cron.log
From the docker documentation about supervisor (https://docs.docker.com/engine/admin/using_supervisord/) it looks like supervisor could be an option, as could the bash script approach (as in my example); that is why I decided to go with the bash script and ignore supervisor.
And the content of the cron details /workspace/conf/root is this:
* * * * * root echo "Hello world: $(date +%H:%M:%S)" >> /var/log/cron.log 2>&1
(with an empty line \n at the bottom)
I cannot see Hello world: $(date +%H:%M:%S) being appended to /var/log/cron.log each minute, yet to me all the cron/crontab settings look correct.
When I check the logs of the container I can see:
>>> Configuring cron
[ ok ] Starting periodic command scheduler: cron.
>>> Done!
Also, when logging into the running container I can see the cron daemon running:
root@2330ced4daa9:/workspace# ps aux
USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND
root 1 0.0 0.0 4336 1580 ? Ss+ 13:06 0:00 /bin/sh /workspace/conf/entrypoint.sh
root 14 0.0 0.0 27592 2096 ? Ss 13:06 0:00 /usr/sbin/cron
root 36 0.0 0.0 5956 740 ? S+ 13:06 0:00 tail -f /var/log/cron.log
root 108 0.5 0.1 21948 3692 ? Ss 13:14 0:00 bash
root 114 0.0 0.1 19188 2416 ? R+ 13:14 0:00 ps aux
What am I doing wrong?
Are you sure the cron job file has the right permissions?
chmod 0644 /var/spool/cron/crontabs/root
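Another thing worth checking (an observation about the crontab format, offered as a guess rather than a confirmed fix): files installed with crontab(1), which is what entrypoint.sh does here, use five time fields followed directly by the command. The root user column in the question's line belongs to the /etc/cron.d and /etc/crontab format, so a user crontab would parse root as the start of the command:

```shell
# Inspect the sixth whitespace-separated field of the crontab line from the
# question: in a user crontab it is parsed as part of the command, not a user.
line='* * * * * root echo "Hello world: $(date +%H:%M:%S)" >> /var/log/cron.log 2>&1'
field6=$(printf '%s\n' "$line" | awk '{print $6}')
echo "sixth field: $field6"
```

If that is the cause, cron would be trying to run a command named root every minute, which fails silently from the log's point of view.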

Xvfb command in docker supervisor conf not working

I have a Docker image based on Ubuntu that runs a supervisor script as the CMD at the end of the Dockerfile. This successfully runs uwsgi and nginx in the container on start up. However, the following appended at the end of the supervisor-app.conf does not work:
[program:Xvfb]
command=/usr/bin/Xvfb :1 -screen 0 1024x768x16 &> xvfb.log &
When I open a shell into a running docker instance there is no X instance running:
root@9221694363ea:/# ps aux | grep X
root 39 0.0 0.0 8868 784 ? S+ 15:32 0:00 grep --color=auto X
However, running exactly the same command as in the supervisor-app.conf works
root@9221694363ea:/# /usr/bin/Xvfb :1 -screen 0 1024x768x16 &> xvfb.log &
[1] 40
root@9221694363ea:/# ps aux | grep X
root 40 1.2 0.1 170128 21604 ? Sl 15:33 0:00 /usr/bin/Xvfb :1 -screen 0 1024x768x16
root 48 0.0 0.0 8868 792 ? S+ 15:33 0:00 grep --color=auto X
so what's wrong with the line in the supervisor-app.conf?
Supervisor does not handle bash-specific operators such as the run-in-the-background '&' or redirections like '>', as in my original failing config line.
I solved it by using bash -c thus:
[program:Xvfb]
command=bash -c "/usr/bin/Xvfb :1 -screen 0 1024x768x16 &> xvfb.log"
Now when I get into the docker bash shell the Xvfb window is created waiting for me to use it elsewhere in the code.
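An equivalent approach (a sketch using supervisord's own logging options rather than a shell wrapper; stdout_logfile and redirect_stderr are standard supervisord program settings) avoids spawning bash at all:

```ini
; Hedged alternative: let supervisord capture the output itself instead of
; relying on shell redirection, and let it manage the process lifetime
; instead of backgrounding with '&'.
[program:Xvfb]
command=/usr/bin/Xvfb :1 -screen 0 1024x768x16
stdout_logfile=/var/log/xvfb.log
redirect_stderr=true
```

This also keeps Xvfb directly supervised, so supervisord can restart it if it exits, which the bash -c wrapper partially obscures.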

How can I get my Fortran program to use a certain amount of RAM?

I am trying to write a Fortran program which will eat up a lot of memory (for the reasoning behind this, please see the note at the end of this question). I am doing this by allocating a 3 dimensional array of size (n,n,n) and then deallocating it - continually increasing n until I run out of memory (this should happen when ~16 GB of memory is used). Unfortunately, it seems as if my program is running out of memory long before I see the system resources get up to 16 GB.
Here is my sample code:
program fill_mem
  implicit none
  integer, parameter :: ikind = selected_int_kind(8)
  integer, parameter :: rkind = 8

  integer(kind = ikind) :: nfiles = 100
  integer(kind = ikind) :: n = 1200
  integer(kind = ikind) :: i, nn

  real(kind = rkind), allocatable :: real_arr(:,:,:)

  character(500) :: sysline

  call system('echo ''***no_allocation***'' > outfile')
  call system('ps aux | grep fill_mem.exe >> outfile')
  !call system('smem | grep fill_mem.exe >> sm.out')
  allocate(real_arr(n, n, n))

  nn = 100000
  do i = 1, nn
    deallocate(real_arr)
    n = n + 10
    print*, 'n = ', n
    allocate(real_arr(n, n, n))
    call system('echo ''*************'' >> outfile')
    write(sysline, *) 'allocation', i, '... n = ', n

    write(*, '(f10.5, a)') 100.0*real(i)/real(nn), '%'

    call system(trim(adjustl('echo '//sysline//'>> outfile')))
    call system('ps aux | grep fill_mem.exe >> outfile')
  enddo

end program fill_mem
and here is the sample output:
***no_allocation***
1000 12350 0.0 0.0 12780 760 pts/1 S+ 13:32 0:00 ./fill_mem.exe
1000 12352 0.0 0.0 4400 616 pts/1 S+ 13:32 0:00 sh -c ps aux | grep fill_mem.exe >> outfile
1000 12354 0.0 0.0 9384 920 pts/1 S+ 13:32 0:00 grep fill_mem.exe
*************
allocation 1 ... n = 1210
1000 12350 0.0 0.0 13853104 796 pts/1 S+ 13:32 0:00 ./fill_mem.exe
1000 12357 0.0 0.0 4400 616 pts/1 S+ 13:32 0:00 sh -c ps aux | grep fill_mem.exe >> outfile
1000 12359 0.0 0.0 9384 920 pts/1 S+ 13:32 0:00 grep fill_mem.exe
*************
allocation 2 ... n = 1220
1000 12350 0.0 0.0 14199096 952 pts/1 S+ 13:32 0:00 ./fill_mem.exe
1000 12362 0.0 0.0 4400 612 pts/1 S+ 13:32 0:00 sh -c ps aux | grep fill_mem.exe >> outfile
1000 12364 0.0 0.0 9384 920 pts/1 S+ 13:32 0:00 grep fill_mem.exe
*************
allocation 3 ... n = 1230
1000 12350 0.0 0.0 14550804 956 pts/1 S+ 13:32 0:00 ./fill_mem.exe
1000 12367 0.0 0.0 4400 612 pts/1 S+ 13:32 0:00 sh -c ps aux | grep fill_mem.exe >> outfile
1000 12369 0.0 0.0 9384 920 pts/1 S+ 13:32 0:00 grep fill_mem.exe
*************
allocation 4 ... n = 1240
1000 12350 0.0 0.0 14908284 956 pts/1 S+ 13:32 0:00 ./fill_mem.exe
1000 12372 0.0 0.0 4400 612 pts/1 S+ 13:32 0:00 sh -c ps aux | grep fill_mem.exe >> outfile
1000 12374 0.0 0.0 9384 920 pts/1 S+ 13:32 0:00 grep fill_mem.exe
*************
allocation 5 ... n = 1250
1000 12350 0.0 0.0 15271572 956 pts/1 S+ 13:32 0:00 ./fill_mem.exe
1000 12377 0.0 0.0 4400 612 pts/1 S+ 13:32 0:00 sh -c ps aux | grep fill_mem.exe >> outfile
1000 12379 0.0 0.0 9384 916 pts/1 S+ 13:32 0:00 grep fill_mem.exe
*************
allocation 6 ... n = 1260
1000 12350 0.0 0.0 15640720 956 pts/1 S+ 13:32 0:00 ./fill_mem.exe
1000 12382 0.0 0.0 4400 616 pts/1 S+ 13:32 0:00 sh -c ps aux | grep fill_mem.exe >> outfile
1000 12384 0.0 0.0 9384 920 pts/1 S+ 13:32 0:00 grep fill_mem.exe
*************
allocation 7 ... n = 1270
1000 12350 0.0 0.0 16015776 956 pts/1 S+ 13:32 0:00 ./fill_mem.exe
1000 12387 0.0 0.0 4400 616 pts/1 S+ 13:32 0:00 sh -c ps aux | grep fill_mem.exe >> outfile
1000 12389 0.0 0.0 9384 920 pts/1 S+ 13:32 0:00 grep fill_mem.exe
Now, I see that the VSZ portion gets up to ~15 GB so I am assuming when I try to address more, it fails with
Operating system error: Cannot allocate memory
Allocation would exceed memory limit
because there is not that much RAM. Why is it that the RSS is so far below that, though? When I actually look at my system resources I see about 140 MB being used (I am running this in a Linux VM and monitoring the system resources through Windows; I have given the VM 16 GB of RAM to use, so I should see the VM memory increasing until it reaches the 16 GB mark. For what it's worth, the VM has VT-x/Nested Paging/PAE/NX, so it should use the physical architecture just like the native OS).
Can anyone explain why I do not see my program actually using up the full 16 GB of RAM and how I can write my code to keep these arrays I am creating in RAM - fully utilizing my available hardware?
NOTE: The reason I am trying to write a sample program which uses a lot of memory is that I am working with data which takes up around 14 GB of space as ASCII text. I will need to be working with this data a lot throughout the course of the program, so I want to read it all in at once and then reference it from RAM for the duration of the program. To make sure I am doing this correctly, I am trying to write a simple program which will store a very large array (~15 GB) in memory all at once.
(Caveat: The Fortran standard doesn't say how such things ought to be implemented; the description below refers to how Fortran compilers are typically implemented on current operating systems.)
When you execute an ALLOCATE statement (or equivalently, call malloc() in C, FWIW), you're not actually reserving physical memory, but only mapping address space for your process. That's why the VSZ goes up, but not the RSS. Actually reserving physical memory for your process happens only when you first access the memory (typically at page-size granularity, that is, 4 KB on most current hardware). So only once you start putting some data into your array does the RSS begin to climb. E.g. a statement like
real_arr = 42.
ought to bump up your RSS to the vicinity of the VSZ.
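The VSZ-versus-RSS distinction described above can be observed for any live process: the mapped address space is always at least as large as the resident set, and untouched allocations only widen the gap. A quick illustration using the current shell:

```shell
# Print the current shell's mapped address space (VSZ) and resident set (RSS)
# in kB; VSZ >= RSS always holds, because resident pages are a subset of the
# process's mapped pages.
read VSZ RSS <<EOF
$(ps -o vsz=,rss= -p $$)
EOF
echo "VSZ=${VSZ} kB  RSS=${RSS} kB"
```

Running the same two-column ps on fill_mem.exe while it loops would show VSZ jumping at each ALLOCATE while RSS barely moves, exactly as in the outfile listing above.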
You probably need to increase the memory allocated to the stack. For example, see http://software.intel.com/en-us/articles/intel-fortran-compiler-increased-stack-usage-of-80-or-higher-compilers-causes-segmentation-fault
