Missing PID for process inside docker container - docker

I'm running a simple web application inside a docker container. When I look at the output of netstat, the PID/Program name is blank.
root@fasf343344423:~# sudo netstat -tulnp
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name
tcp 0 0 0.0.0.0:5697 0.0.0.0:* LISTEN -
tcp 0 0 0.0.0.0:9090 0.0.0.0:* LISTEN -
I've seen the PID shown before on a different setup, so I want to understand whether this is caused by a setup issue. I'd appreciate your help.

I was able to resolve this with the following change:
Edit the /etc/apparmor.d/docker file and add the following line:
ptrace peer=docker-default,
Then restart AppArmor:
sudo service apparmor restart
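To confirm the profile change took effect, you can list the loaded AppArmor profiles afterwards (a hedged check; aa-status ships with the apparmor-utils package on Ubuntu):
# the docker-default profile should show up as loaded
sudo aa-status | grep docker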

As in my related question,
Which PID is using a PORT inside a k8s pod without net tools,
the lack of the POSIX capability CAP_SYS_PTRACE prevents netstat from tracing the socket inode back to a PID.
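If editing the AppArmor profile isn't an option, another way to get the same result is to grant the capability when the container starts. A minimal sketch, where my-web-app is a placeholder for your image name (depending on the AppArmor policy in effect, this alone may or may not be enough):
# grant CAP_SYS_PTRACE so netstat can map socket inodes back to PIDs
docker run --cap-add=SYS_PTRACE -p 9090:9090 my-web-app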

Related

Xdebug 3.0 WSL2 and VSCode - address is already in use by docker-proxy

My VSCode in WSL:Ubuntu is unable to listen to the xdebug port, because it is blocked by some docker-proxy.
I was following this solution, but when VSCode tries to listen on the xdebug port, it fails with the following error:
Error: listen EADDRINUSE: address already in use :::9003
Can anyone help with connecting VSCode to xdebug?
Windows 11 says the port is already allocated by wslhost:
PS C:\WINDOWS\system32> Get-Process -Id (Get-NetTCPConnection -LocalPort 9003).OwningProcess
Handles NPM(K) PM(K) WS(K) CPU(s) Id SI ProcessName
------- ------ ----- ----- ------ -- -- -----------
285 47 2288 4748 0,05 19480 1 wslhost
Ubuntu says it's allocated by some docker-proxy:
Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name
tcp 0 0 0.0.0.0:9003 0.0.0.0:* LISTEN 17210/docker-proxy
tcp6 0 0 :::9003 :::* LISTEN 17217/docker-proxy
docker-compose version: 1.25.0
The xdebug.log says:
[Step Debug] INFO: Connecting to configured address/port: host.docker.internal:9003.
[Step Debug] ERR: Time-out connecting to debugging client, waited: 200 ms. Tried: host.docker.internal:9003 (through xdebug.client_host/xdebug.client_port) :-(
That is expected as long as nothing is listening yet.
For xdebug.client_host I've tried:
host.docker.internal
xdebug://gateway and xdebug://nameserver, referring to this: https://docs.google.com/document/d/1W-NzNtExf5C4eOu3rRQm1WlWnbW44u3ANDDA49d3FD4/edit?pli=1
setting the env-variable with docker-compose.yml: XDEBUG_CONFIG="client_host=..."
Removing the Expose directive from the Dockerfile/docker-compose, as in this comment, doesn't remove the error either.
Solved it. For others with this challenge:
Inside of the WSL Ubuntu -> Docker container setup, host.docker.internal resolves to the wrong IP.
In the WSL distribution, the file /etc/resolv.conf contains the IP of the Windows host.
To get the correct ip use this answer: How to get the primary IP address of the local machine on Linux and OS X?
My solution is to define an env-variable with this ip:
alias docker_compose_local_ip="ifconfig eth0 | sed -En 's/127.0.0.1//;s/.*inet (addr:)?(([0-9]*\.){3}[0-9]*).*/\2/p'"
export DOCKER_COMPOSE_LOCAL_IP=$(docker_compose_local_ip)
and configure the container with it:
services:
  service-name:
    environment:
      - XDEBUG_CONFIG=client_host=${DOCKER_COMPOSE_LOCAL_IP} ...
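As an aside, on Docker Engine 20.10+ there is a built-in alternative that avoids computing the IP by hand: mapping host.docker.internal to the special host-gateway value. A hedged sketch of the same service with that mapping:
services:
  service-name:
    extra_hosts:
      - "host.docker.internal:host-gateway"
With this in place, xdebug.client_host=host.docker.internal should resolve to the host correctly.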

Port mapping problems with VScode OSS running inside a docker container

I would like to run the VSCode OSS Web Server within a Docker Container, as described here: https://github.com/microsoft/vscode/wiki/How-to-Contribute#vs-code-for-the-web
The Container is running, but the port mapping doesn't work. I run my image with
docker run -it -p 9888:9888 -p 5877:5877 vscode-server
but I get nothing with curl -I http://localhost:9888 on my machine. The VSCode server is running, but the mapping to the host doesn't work. I think the problem is the binding: it looks like the VSCode server binds to 127.0.0.1 but should bind to 0.0.0.0.
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name
tcp 0 0 127.0.0.1:9888 0.0.0.0:* LISTEN 870/node
tcp 0 0 127.0.0.1:5877 0.0.0.0:* LISTEN 881/node
Can anybody help here?
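No answer was recorded for this one, but the usual options are either to tell the server to bind to 0.0.0.0, if it exposes such a setting, or to relay traffic from all interfaces to the loopback listener. A minimal sketch of the relay approach using socat, assuming socat is installed in the image and using 19888 as an arbitrary helper port:
# publish the helper port instead of the loopback-only one
docker run -it -p 9888:19888 -p 5877:5877 vscode-server
# inside the container, relay the helper port to the loopback listener
socat TCP-LISTEN:19888,bind=0.0.0.0,fork,reuseaddr TCP:127.0.0.1:9888 &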

docker container not able to reach some of host's ports

I have a stack with docker-compose running on a VM.
Here is a sample output of my netstat -tulpn on the VM
Proto Recv-Q Send-Q Local Address Foreign Address State
tcp 0 0 0.0.0.0:9839 0.0.0.0:* LISTEN
tcp 0 0 127.0.0.1:8484 0.0.0.0:* LISTEN
The Docker container is able to communicate with port 9839 (using 172.17.0.1) but not with port 8484.
Why is that?
That's because the program listening on port 8484 is bound to 127.0.0.1 meaning that it'll only accept connections from localhost.
The one listening on 9839 has bound to 0.0.0.0 meaning it'll accept connections from anywhere.
To make the one listening on 8484 accept connections from anywhere, you need to change what it binds to. If it's something you've written yourself, you can change it in code; if not, there's probably a configuration setting you can set.
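A quick way to see the difference, sketched with Python's built-in HTTP server (assuming python3 is available on the VM; the ports are the ones from the question):
# reachable only from the VM itself
python3 -m http.server 8484 --bind 127.0.0.1
# reachable from containers via 172.17.0.1 as well
python3 -m http.server 9839 --bind 0.0.0.0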

Jenkins server --httpListenAddress=127.0.0.1 not working

Recently I installed a Jenkins server and wanted to hide it behind an Nginx proxy.
My Nginx proxy works fine, and I read that I should restrict Jenkins to 127.0.0.1:8080. Therefore, I edited the config file /etc/default/jenkins and added the following line:
JENKINS_ARGS="--webroot=/var/cache/$NAME/war --httpPort=8080 --httpListenAddress=127.0.0.1"
After restarting Jenkins, I still have access to it on port 8080.
Environment:
Ubuntu 20.04
OpenJDK 11
Jenkins 2.332.1
Netstat output:
sudo netstat -plnt
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name
tcp 0 0 0.0.0.0:8080 0.0.0.0:* LISTEN 2313/java
tcp 0 0 0.0.0.0:80 0.0.0.0:* LISTEN 970/nginx: master p
tcp 0 0 127.0.0.53:53 0.0.0.0:* LISTEN 708/systemd-resolve
tcp 0 0 0.0.0.0:22 0.0.0.0:* LISTEN 946/sshd: /usr/sbin
tcp 0 0 127.0.0.1:631 0.0.0.0:* LISTEN 757/cupsd
tcp6 0 0 :::80 :::* LISTEN 970/nginx: master p
tcp6 0 0 :::22 :::* LISTEN 946/sshd: /usr/sbin
tcp6 0 0 ::1:631 :::* LISTEN 757/cupsd
P.S. I tried on EC2/Amazon Linux 2; same issue.
As of Jenkins version 2.332.1, which you indicated you are running, Jenkins made the switch from running as a service using classic SysV init scripts over to fully integrating with systemd on Linux distributions that support it, which includes Ubuntu 20.04. I don't see any signs that the systemd unit file for Jenkins ever parses /etc/default/jenkins, meaning those settings are only parsed by the SysV init script, which would explain why your configuration had no effect there.
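(You can verify this yourself by printing the unit file systemd actually uses; if /etc/default/jenkins were referenced through an EnvironmentFile= directive, it would show up here:)
systemctl cat jenkins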
As you found, setting the environment variable in /lib/systemd/system/jenkins.service indeed works, but your instinct is absolutely correct that it is not best practice to directly edit the unit file managed by the packaging system. As with most things in Linux, the /etc directory is where administrators are meant to put their configuration files, and /lib and /usr/lib are reserved for the package manager, so luckily systemd is no exception to this and provides a mechanism for such changes.
Systemd has the concept of "drop-in" directories where you can place ".conf" files with partial systemd unit configurations whose directives will override those in the main unit file. From the systemd.unit man page:
Along with a unit file foo.service, a "drop-in" directory foo.service.d/ may exist. All files with the suffix ".conf" from this directory will be merged in the alphanumeric order and parsed after the main unit file itself has been parsed. This is useful to alter or add configuration settings for a unit, without having to modify unit files. Each drop-in file must contain appropriate section headers.
Here's how I set up Jenkins 2.332.1 on Ubuntu 20.04 using a systemd drop-in override to bind the listener to 127.0.0.1:
Verify Jenkins is running and listening on all addresses/interfaces:
$ sudo ss -tlnp | grep 8080
LISTEN 0 50 *:8080 *:* users:(("java",pid=2688,fd=116))
Create a systemd drop-in directory for Jenkins:
$ sudo mkdir /etc/systemd/system/jenkins.service.d
Create an override file using your favorite editor. You can name it whatever you want as long as it has a .conf extension. Personally, I prefer something descriptive and to begin with a number so that I can control the lexicographic order in which the files are parsed, should I ever end up with multiple override files. Given that, I created a file /etc/systemd/system/jenkins.service.d/50-listen-address-override.conf with the following content:
[Service]
Environment="JENKINS_LISTEN_ADDRESS=127.0.0.1"
Now, all we have to do is tell systemd that we made some changes we want it to reparse:
$ sudo systemctl daemon-reload
And we can restart Jenkins to give it its new config:
$ sudo systemctl restart jenkins
If we verify our work, we can now see that Jenkins is only bound to 127.0.0.1:
$ sudo ss -tlnp | grep 8080
LISTEN 0 50 [::ffff:127.0.0.1]:8080 *:* users:(("java",pid=31636,fd=116))
For what it's worth, you can also use the command systemctl edit jenkins to create the override, and systemd will create the drop-in directory and override file automatically for you and drop you into your default editor to write the file contents. However, it does not give you the freedom to choose your own name for the override file, giving it instead the generic name override.conf.
While it won't hurt to restrict port 8080, in an AWS environment there really isn't a reason to worry about it. You'll want to set up a security group for your server so everything is blocked except for maybe port 22 (ssh), port 80 (http), and port 443 (https). You can do this through the AWS console.
To do this, go to the AWS console and select EC2 and then your instance. In the middle of the page is the "Security" tab. From there you can create a security group to determine what traffic you allow in and out.
That way, no one can connect to any ports you haven't allowed. It looks like you're not currently using HTTPS, so you may want to leave out port 443 until you're ready.
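If you prefer the command line over the console, the same rules can be added with the AWS CLI. A hedged sketch, where sg-0123456789abcdef0 stands in for your instance's actual security group ID:
# allow ssh and http from anywhere; add 443 once you enable https
aws ec2 authorize-security-group-ingress --group-id sg-0123456789abcdef0 --protocol tcp --port 22 --cidr 0.0.0.0/0
aws ec2 authorize-security-group-ingress --group-id sg-0123456789abcdef0 --protocol tcp --port 80 --cidr 0.0.0.0/0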

How to fix "Connection refused" error on ACME certificate challenge with cookiecutter-django

I have created a simple website using cookiecutter-django (using the latest master cloned today). Running the docker-compose setup locally works. Now I would like to deploy the site on digital ocean. To do this, I run the following commands:
$ docker-machine create -d digitalocean --digitalocean-access-token=secret instancename
$ eval "$(docker-machine env instancename)"
$ sudo docker-compose -f production.yml build
$ sudo docker-compose -f production.yml up
In the cookiecutter-django documentation I read
If you are not using a subdomain of the domain name set in the project, then remember to put your staging/production IP address in the DJANGO_ALLOWED_HOSTS environment variable (see Settings) before you deploy your website. Failure to do this will mean you will not have access to your website through the HTTP protocol.
Therefore, in the file .envs/.production/.django I changed the line with DJANGO_ALLOWED_HOSTS from
DJANGO_ALLOWED_HOSTS=.example.com (instead of example.com I use my actual domain)
to
DJANGO_ALLOWED_HOSTS=XXX.XXX.XXX.XX
(with XXX.XXX.XXX.XX being the IP of my digital ocean droplet; I also tried DJANGO_ALLOWED_HOSTS=.example.com and DJANGO_ALLOWED_HOSTS=.example.com,XXX.XXX.XXX.XX with the same outcome)
In addition, I logged in to where I registered the domain and made sure to point the A-Record to the IP of my digital ocean droplet.
With this setup the deployment does not work. I get the following error message:
traefik_1 | time="2019-03-29T21:32:20Z" level=error msg="Unable to obtain ACME certificate for domains \"example.com\" detected thanks to rule \"Host:example.com\" : unable to generate a certificate for the domains [example.com]: acme: Error -> One or more domains had a problem:\n[example.com] acme: error: 400 :: urn:ietf:params:acme:error:connection :: Fetching http://example.com/.well-known/acme-challenge/example-key-here: Connection refused, url: \n"
Unfortunately, I was not able to find a solution for this problem. Any help is greatly appreciated!
Update
When I run netstat -antp on the server as suggested in the comments I get the following output (IPs replaced with placeholders):
Active Internet connections (servers and established)
Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name
tcp 0 0 0.0.0.0:22 0.0.0.0:* LISTEN 1590/sshd
tcp 0 0 XXX.XXX.XXX.XX:22 YYY.YY.Y.YYY:48923 SYN_RECV -
tcp 0 332 XXX.XXX.XXX.XX:22 ZZ.ZZZ.ZZ.ZZZ:49726 ESTABLISHED 16959/0
tcp 0 1 XXX.XXX.XXX.XX:22 YYY.YY.Y.YYY:17195 FIN_WAIT1 -
tcp 0 0 XXX.XXX.XXX.XX:22 YYY.YY.Y.YYY:57909 ESTABLISHED 16958/sshd: [accept
tcp6 0 0 :::2376 :::* LISTEN 5120/dockerd
tcp6 0 0 :::22 :::* LISTEN 1590/sshd
When I first run $ sudo docker-compose -f production.yml up, netstat -antp returns this:
Active Internet connections (servers and established)
Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name
tcp 0 0 0.0.0.0:22 0.0.0.0:* LISTEN 1590/sshd
tcp 0 332 XXX.XXX.XXX.XX:22 ZZ.ZZZ.ZZ.ZZZ:49726 ESTABLISHED 16959/0
tcp 0 0 XXX.XXX.XXX.XX:22 AA.AAA.AAA.A:50098 ESTABLISHED 17046/sshd: [accept
tcp 0 0 XXX.XXX.XXX.XX:22 YYY.YY.Y.YYY:55652 SYN_RECV -
tcp 0 0 XXX.XXX.XXX.XX:22 YYY.YY.Y.YYY:16750 SYN_RECV -
tcp 0 0 XXX.XXX.XXX.XX:22 YYY.YY.Y.YYY:31541 SYN_RECV -
tcp 0 1 XXX.XXX.XXX.XX:22 YYY.YY.Y.YYY:57909 FIN_WAIT1 -
tcp6 0 0 :::2376 :::* LISTEN 5120/dockerd
tcp6 0 0 :::22 :::* LISTEN 1590/sshd
In my experience, the Droplets are configured as needed by cookiecutter-django and the ports are open properly, so unless you closed them, you shouldn't have to do anything.
Usually, when this error happens, it's due to a DNS configuration issue. Basically, Let's Encrypt was not able to reach your server using the domain example.com. Unfortunately, you're not giving us the actual domain you've used, so I'll try to guess.
You said you've configured an A record to point to your droplet, which is what you should do. However, this config needs to propagate to most of the name servers, which may take time. It might be propagated for you, but if the name server used by Let's Encrypt hasn't caught up yet, issuing your TLS certificate will fail.
You can check how well it's propagated using an online tool which checks multiple name servers at once, like https://dnschecker.org/.
From your machine, you can do so using dig (for people interested, I recommend this video):
# Using your default name server
dig example.com
# Using 1.1.1.1 as name server
dig @1.1.1.1 example.com
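To separate a DNS problem from a reachability problem, you can also probe the challenge path directly from another machine (a hedged check; the path component is just a placeholder):
curl -v http://example.com/.well-known/acme-challenge/test
A 404 from Traefik would mean the server is reachable and only the challenge content is missing; "Connection refused" would mean nothing is answering on port 80 at that address.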
Hope that helps.
