Why does \b exclude a process in grep?

I can grep my processes:
ladislav@cool:~$ ps aux | grep "node example.js"
ladislav 18231 0.1 0.3 11116444 50812 ? Ssl 22:46 0:00 node example.js server m 0 found
ladislav 18257 0.0 0.0 11600 712 pts/0 S+ 22:49 0:00 grep --color=auto node example.js
But I want to get rid of the second line. I tried the following command, which works:
ladislav@cool:~$ ps aux | grep "\bnode example.js"
ladislav 18231 0.0 0.3 11116444 50812 ? Ssl 22:46 0:00 node example.js server m 0 found
Could someone explain why or how this works? I don't understand it; I would have expected it not to work.

Better do this:
pgrep -fl "node example.js"
It is designed specifically for this task.
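As for why \b makes the difference: when ps happens to catch the grep process, that process's command line contains the pattern literally, i.e. the text \bnode example.js (the shell passes the backslash through unchanged inside double quotes). In a regex, \b matches only at a word boundary, and in that literal text the n of node is preceded by the word character b, so there is no boundary and the grep line does not match its own pattern. In the real node process line, node is preceded by a space, so it does match. You can reproduce this without ps; the echo strings below simply stand in for the two process lines:
echo 'grep --color=auto \bnode example.js' | grep '\bnode example.js'     # no match
echo 'ladislav 18231 ... 0:00 node example.js' | grep '\bnode example.js' # matches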

Related

Does the pause container have PID 1 in the pod?

[root@k8s001 ~]# docker exec -it f72edf025141 /bin/bash
root@b33f3b7c705d:/var/lib/ghost# ps aux
USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND
root 1 0.0 0.0 1012 4 ? Ss 02:45 0:00 /pause
root 8 0.0 0.0 10648 3400 ? Ss 02:57 0:00 nginx: master process nginx -g daemon off;
101 37 0.0 0.0 11088 1964 ? S 02:57 0:00 nginx: worker process
node 38 0.9 0.0 2006968 116572 ? Ssl 02:58 0:06 node current/index.js
root 108 0.0 0.0 3960 2076 pts/0 Ss 03:09 0:00 /bin/bash
root 439 0.0 0.0 7628 1400 pts/0 R+ 03:10 0:00 ps aux
This output comes from the internet; it says the pause container is the parent process of the other containers in the pod, and that if you attach to the pod or to the other containers and run ps aux, you will see that.
Is that correct? In my k8s it is different: PID 1 is not /pause.
...Is that correct? In my k8s it is different: PID 1 is not /pause.
This has changed: pause no longer holds PID 1, despite being the first container created by the container runtime to set up the pod (e.g. cgroups, namespaces, etc.). Pause is isolated (hidden) from the rest of the containers in the pod regardless of your ENTRYPOINT/CMD. See here for more background information.
By default, Docker will run your entrypoint (or the command, if there is no entrypoint) as PID 1. However, that is not necessarily always the case, since, depending on how you start the container, Docker (or your orchestrator) can also run its custom init process as PID 1:
$ docker run -d --init --name test alpine sleep infinity
849efe38ecec439550738e981065ec4aff55ef5607f03b9fed975e2d3146b9b0
$ docker exec -ti test ps
PID USER TIME COMMAND
1 root 0:00 /sbin/docker-init -- sleep infinity
7 root 0:00 sleep infinity
8 root 0:00 ps
For more information on why you would want your entrypoint not to be PID 1, you can check this explanation from a tini developer:
Now, unlike other processes, PID 1 has a unique responsibility, which is to reap zombie processes.
Zombie processes are processes that:
Have exited.
Were not waited on by their parent process (wait is the syscall parent processes use to retrieve the exit code of their children).
Have lost their parent (i.e. their parent exited as well), which means they'll never be waited on by their parent.
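If you want to see what actually runs as PID 1 inside one of your own pod's containers, a quick check that works even in images without ps (the pod and container names below are placeholders):
kubectl exec my-pod -c my-container -- cat /proc/1/cmdline | tr '\0' ' '; echo
/proc/1/cmdline holds the command line of PID 1 with NUL separators, which tr turns into spaces.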

Why does Jenkins pop up in my browser instead of Ambari?

I am running the HDP sandbox in VirtualBox (the host is Ubuntu). From localhost:1080 I choose Ambari and the browser redirects to port 8080. But to my great surprise, Jenkins shows up instead of Ambari.
How do I find out where the Jenkins process is and disable it?
ps
ps aux | grep jenkins
jenkins 3132 0.0 0.0 77020 6808 ? Ss 08:37 0:00 /lib/systemd/systemd --user
jenkins 3135 0.0 0.0 114676 2688 ? S 08:37 0:00 (sd-pam)
jenkins 3151 0.0 0.0 29040 180 ? S 08:37 0:00 /usr/bin/daemon --name=jenkins --inherit --env=JENKINS_HOME=/var/lib/jenkins --output=/var/log/jenkins/jenkins.log --pidfile=/var/run/jenkins/jenkins.pid -- /usr/bin/java -Djava.awt.headless=true -jar /usr/share/jenkins/jenkins.war --webroot=/var/cache/jenkins/war --httpPort=8080
jenkins 3154 0.3 2.1 7973532 357496 ? Sl 08:37 0:21 /usr/bin/java -Djava.awt.headless=true -jar /usr/share/jenkins/jenkins.war --webroot=/var/cache/jenkins/war --httpPort=8080
miki 13760 0.0 0.0 23080 1012 pts/0 R+ 10:11 0:00 grep --color=auto jenkins
Netstat host output
tcp6 0 0 :::8080 :::* LISTEN 3154/java
HDP netstat
[root@sandbox-hdp ~]# netstat -plnt | grep ':8080'
tcp 0 0 0.0.0.0:8080 0.0.0.0:* LISTEN 367/java
I checked the port forwarding on my VM.
The virtual adapter is attached to the real network with NAT.
Maybe you can view the password and use it on the GUI prompt?
sudo cat /var/lib/jenkins/secrets/initialAdminPassword
You can disable Jenkins with systemctl disable jenkins.
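A minimal cleanup sketch for the Ubuntu host, assuming Jenkins runs as a regular system service (which the ps output suggests):
sudo systemctl stop jenkins        # stop the instance currently holding port 8080
sudo systemctl disable jenkins     # keep it from starting again at boot
sudo netstat -plnt | grep ':8080'  # should no longer show the Jenkins java listener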

Docker container increases ram

I have launched several Docker containers, and using docker stats I have verified that one of them keeps increasing its RAM consumption from the moment it starts until it is restarted.
My question is whether there is any way to find out where that consumption comes from inside the Docker container. Is there a way to check consumption inside the container, something in the style of docker stats but for the processes inside it?
Thanks for your help.
Not sure if it's what you are asking for, but here's an example:
(Before you start):
Run a test container docker run --rm -it ubuntu
Install stress by typing apt-get update and apt-get install stress
Run stress --vm-bytes $(awk '/MemAvailable/{printf "%d\n", $2 * 0.9;}' < /proc/meminfo)k --vm-keep -m 1 (it will start consuming memory)
1. with top
If you go to a new terminal you can type docker container exec -it <your container name> top and you will get something like the following:
(notice that the %MEM usage of PID 285 is 68.8%)
docker container exec -it dreamy_jang top
top - 12:46:04 up 22 min, 0 users, load average: 1.48, 1.55, 1.12
Tasks: 4 total, 2 running, 2 sleeping, 0 stopped, 0 zombie
%Cpu(s): 20.8 us, 0.8 sy, 0.0 ni, 78.4 id, 0.0 wa, 0.0 hi, 0.0 si, 0.0 st
KiB Mem : 6102828 total, 150212 free, 5396604 used, 556012 buff/cache
KiB Swap: 1942896 total, 1937508 free, 5388 used. 455368 avail Mem
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
285 root 20 0 4209376 4.007g 212 R 100.0 68.8 6:56.90 stress
1 root 20 0 18500 3148 2916 S 0.0 0.1 0:00.09 bash
274 root 20 0 36596 3072 2640 R 0.0 0.1 0:00.21 top
284 root 20 0 8240 1192 1116 S 0.0 0.0 0:00.00 stress
2. with ps aux
Again, from a new terminal you type docker container exec -it <your container name> ps aux
(notice that the %MEM usage of PID 285 is 68.8%)
docker container exec -it dreamy_jang ps aux
USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND
root 1 0.0 0.0 18500 3148 pts/0 Ss 12:25 0:00 /bin/bash
root 284 0.0 0.0 8240 1192 pts/0 S+ 12:39 0:00 stress --vm-byt
root 285 99.8 68.8 4209376 4201300 pts/0 R+ 12:39 8:53 stress --vm-byt
root 286 0.0 0.0 34400 2904 pts/1 Rs+ 12:48 0:00 ps aux
My source for this stress thing is from this question: How to fill 90% of the free memory?
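As a related convenience, ps can also sort by resident memory so the biggest consumer inside the container is listed first (same placeholder container name as above):
docker container exec -it <your container name> ps aux --sort=-rss | head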

Haproxy reload with different backend server ip

Is it possible to reload HAProxy when the backend server IP has changed? If so, how?
This is essential for a Docker stack: on every deploy, new containers with different IPs replace the old containers.
In our implementation, services occasionally return 503 because the old HAProxy process is not terminated and is still accepting requests while the backend server is already gone. The httplog shows that some requests are forwarded to a backend that no longer exists.
# ps aux
USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND
root 893 0.0 0.0 0 0 ? Zs 19:39 0:01 [haproxy] <defunct>
root 898 0.3 0.0 49416 9640 ? Ss 19:49 0:13 /usr/local/sbin/haproxy -D -f /app/haproxy.cfg -p /var/run/haproxy.pid
root 915 0.2 0.0 0 0 ? Zs 19:49 0:12 [haproxy] <defunct>
root 920 0.2 0.0 49308 10196 ? Ss 20:57 0:01 /usr/local/sbin/haproxy -D -f /app/haproxy.cfg -p /var/run/haproxy.pid
root 937 0.0 0.0 0 0 ? Zs 20:57 0:00 [haproxy] <defunct>
root 942 0.3 0.0 49296 9880 ? Ss 20:58 0:01 /usr/local/sbin/haproxy -D -f /app/haproxy.cfg -p /var/run/haproxy.pid
root 959 0.2 0.0 49296 9852 ? Ss 20:58 0:01 /usr/local/sbin/haproxy -D -f /app/haproxy.cfg -p /var/run/haproxy.pid
[Edit]
I am using Docker swarm mode. I did try publishing the service's port to the host; however, the performance of the swarm's internal load balancer is poor, and I am trying to avoid it.
While it should be possible to change the HAProxy configuration to point to a different backend server, it seems like it would be easier to bind the Docker containers' ports to predictable ports on the Docker host, so the HAProxy config does not need to change.
For example, publishing the container's internal port (80 here, as an example) on a predictable host port:
docker run -d -p 9999:80 hello_world
And your HAProxy config could look like
backend something
# Assuming the Docker host's IP address is 192.0.2.123
server some-server 192.0.2.123:9999
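If you do need to reload HAProxy itself after regenerating the config with the new backend IPs, the usual pattern is a soft reload; this is only a sketch, reusing the config and pidfile paths visible in your ps output:
haproxy -D -f /app/haproxy.cfg -p /var/run/haproxy.pid -sf $(cat /var/run/haproxy.pid)
The -sf flag starts a new process and tells the old PIDs to stop accepting new connections and exit once their current connections finish, so new requests only reach the freshly configured backends.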

Git push Heroku master takes forever

I'm using a VM based on Ubuntu 12.04 (ruby 1.9.2p290 and rails 3.1.0) and my app works perfectly locally. I'm using Git, and when I try to git push heroku master it doesn't work. I get:
Counting objects: 435, done. Compressing objects: 100% (215/215), done. Writing objects: 100% (435/435), 73.35 KiB, done. Total 435 (delta 171), reused 435 (delta 171)
And it never finishes, so nothing gets pushed to Heroku. The terminal just hangs forever.
Operating system information:
jobs
[1]+ Running git push heroku master &
ps -x
PID TTY STAT TIME COMMAND
1078 ? Ssl 0:00 gnome-session --session=ubuntu
1135 ? Sl 0:00 /usr/bin/VBoxClient --clipboard
1147 ? Sl 0:00 /usr/bin/VBoxClient --display
1154 ? Sl 0:00 /usr/bin/VBoxClient --seamless
1162 ? Sl 0:19 /usr/bin/VBoxClient --draganddrop
1167 ? Ss 0:00 /usr/bin/ssh-agent /usr/bin/dbus-launch --exit-with-s
1171 ? S 0:00 /usr/bin/dbus-launch --exit-with-session gnome-sessio
1172 ? Ss 0:01 //bin/dbus-daemon --fork --print-pid 5 --print-addres
1246 ? Sl 0:00 /usr/bin/gnome-keyring-daemon --start --components=se
1250 ? Sl 0:02 /usr/lib/gnome-settings-daemon/gnome-settings-daemon
1329 ? S 0:00 /usr/lib/gvfs/gvfsd
1334 ? Sl 0:00 /usr/lib/gvfs//gvfs-fuse-daemon -f /home/ubuntu/.gvfs
1401 ? Sl 0:03 metacity
1417 ? S 0:00 /usr/lib/i386-linux-gnu/gconf/gconfd-2
1421 ? S<l 0:01 /usr/bin/pulseaudio --start --log-target=syslog
1426 ? Sl 0:01 unity-2d-panel
1427 ? Sl 0:07 unity-2d-shell
1430 ? S 0:00 /usr/lib/pulseaudio/pulse/gconf-helper
1447 ? Sl 0:01 /usr/lib/bamf/bamfdaemon
1450 ? Sl 0:00 /usr/lib/gnome-settings-daemon/gnome-fallback-mount-h
1453 ? Sl 0:02 nautilus -n
1455 ? Sl 0:00 /usr/lib/policykit-1-gnome/polkit-gnome-authenticatio
1457 ? Sl 0:00 bluetooth-applet
1468 ? Sl 0:00 nm-applet
1482 ? S 0:00 /usr/lib/gvfs/gvfs-gdu-volume-monitor
1500 ? Sl 0:00 /usr/lib/gvfs/gvfs-afc-volume-monitor
1504 ? S 0:00 /usr/lib/gvfs/gvfs-gphoto2-volume-monitor
1518 ? S 0:00 /usr/lib/gvfs/gvfsd-trash --spawner :1.9 /org/gtk/gvf
1521 ? Sl 0:01 /usr/lib/unity/unity-panel-service
1523 ? Sl 0:00 /usr/lib/dconf/dconf-service
1539 ? Sl 0:00 /usr/lib/indicator-datetime/indicator-datetime-servic
1541 ? Sl 0:00 /usr/lib/indicator-printers/indicator-printers-servic
1543 ? Sl 0:00 /usr/lib/indicator-messages/indicator-messages-servic
1545 ? Sl 0:00 /usr/lib/indicator-session/indicator-session-service
1547 ? Sl 0:00 /usr/lib/indicator-application/indicator-application-
1549 ? Sl 0:00 /usr/lib/indicator-sound/indicator-sound-service
1574 ? S 0:00 /usr/lib/geoclue/geoclue-master
1591 ? S 0:00 /usr/lib/ubuntu-geoip/ubuntu-geoip-provider
1597 ? Sl 0:00 /usr/lib/gnome-disk-utility/gdu-notification-daemon
1603 ? S 0:00 /usr/lib/gvfs/gvfsd-metadata
1609 ? Sl 0:00 /usr/lib/indicator-appmenu/hud-service
1620 ? Sl 0:00 /usr/lib/unity-lens-applications/unity-applications-d
1622 ? Sl 0:00 /usr/lib/unity-lens-files/unity-files-daemon
1624 ? Sl 0:00 /usr/lib/unity-lens-music/unity-music-daemon
1626 ? Sl 0:00 /usr/bin/python /usr/lib/unity-lens-video/unity-lens-
1653 ? Sl 0:00 /usr/bin/zeitgeist-daemon
1661 ? Sl 0:00 telepathy-indicator
1668 ? Sl 0:00 /usr/lib/zeitgeist/zeitgeist-fts
1672 ? Sl 0:00 zeitgeist-datahub
1676 ? S 0:00 /bin/cat
1682 ? Sl 0:00 /usr/lib/telepathy/mission-control-5
1701 ? Sl 0:00 gnome-screensaver
1703 ? Sl 0:00 /usr/bin/python /usr/lib/unity-scope-video-remote/uni
1728 ? Sl 0:05 gnome-terminal
1734 ? S 0:00 gnome-pty-helper
1738 pts/2 Ss 0:00 bash
1796 ? Sl 0:00 update-notifier
1954 pts/2 S 0:00 git push heroku master
1955 pts/2 S 0:00 ssh git@heroku.com git-receive-pack 'polar-island-471
1959 pts/2 R+ 0:00 ps -x
There are many reasons why your push to Heroku can time out. In my experience, the most common reason is site size, and by site size I mean three things: the size of your Git repo, the size of the site that actually gets pushed to Heroku (the Git repo minus ignored files), and the size of the gems you use. Heroku is not particularly robust when it comes to accommodating big sites (or long-running processes, for that matter), and if you get too big you can cause your push to hang or time out, sometimes only intermittently, which can be perplexing.
.git folder
I have had the site get inexplicably large and saw that over time the .git folder in the root of the project had grown to 600 MB. Thank goodness I noticed a warning in the Heroku deploy chatter telling me that my Git repo was too large. Anyway, since that folder is managed by Git behind the scenes, I ended up starting a fresh Git repo and moving my code over to it, which shrank my site by 90%.
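A quick way to check whether the repository itself has ballooned (run these at the project root; the exact numbers will differ):
du -sh .git                       # total size of the Git data
git count-objects -vH             # breakdown of loose and packed objects
git gc --aggressive --prune=now   # repack and drop unreachable objects before resorting to a fresh repo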
.slugignore
Another pitfall that caused my site to become large enough to time out was allowing things like logs, my temp directory, and my Solr indexing directory to be included in the project. After I excluded all of those folders in my .slugignore file, pushing became very fast. And yes, you could get the same basic effect using .gitignore, but there are some things I like to manage through Git yet ignore when pushing to Heroku. That's where .slugignore comes in handy.
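For reference, .slugignore is just a plain-text file at the root of the repo with one pattern per line; the entries below are examples matching the kinds of folders mentioned above:
log/
tmp/
solr/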
gems
By completing the steps above, I reduced my site to a reasonable size, but I still had occasional timeouts. Then I realized that gems contribute to your site size too. So I removed a handful of unused gems from my Gemfile, and I was able to reduce my slug compile time from 900+ seconds to 250 seconds and my overall deployment time from 15+ minutes with frequent timeouts to under 10 minutes. What a relief not to have to wait so long for each deploy and risk timing out half the time.
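A minimal sketch of that pruning step (the gem names are only examples of the kind of thing that tends to linger unused):
# Gemfile: delete or comment out gems you no longer use
# gem 'paperclip'
# gem 'rmagick'
Then run bundle install, commit the updated Gemfile and Gemfile.lock, and push again.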
Depending on your particular setup, any or all of these factors can really hurt you. In my case I was doing everything wrong. However, even if you are not timing out, you may still want to prune your site as much as possible to cut down your deployment time.
how to make Heroku not suck
http://www.stormconsultancy.co.uk/blog/development/6-ways-to-get-more-bang-for-your-heroku-buck-while-making-your-rails-site-super-snappy/
It turned out the reason was that my connection was very slow: I was connected through my smartphone's access point.
