Error: Specified qdisc not found with tc inside docker

I am trying to use tc to emulate network conditions inside docker containers. My Linux is Ubuntu 20.04 LTS inside WSL2.
For this, I am running the container with
docker run -it --cap-add=NET_ADMIN ubuntu:latest
Then inside the container I run
apt-get update
apt-get install linux-generic # for kernel-modules-extra
apt-get install iproute2
tc qdisc add dev eth0 root handle 1: prio
But the last command prints Error: Specified qdisc not found even though tc qdisc show prints
qdisc noqueue 0: dev lo root refcnt 2
qdisc noqueue 0: dev eth0 root refcnt 2
There have been many issues opened about this question; most of them either refer to compiling a custom kernel (not a viable option for me), to installing kernel-modules-extra (which I have done), or simply go unresolved/resolve themselves.
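One thing worth checking before anything container-side: the qdisc implementation comes from the (shared) host kernel, and stock WSL2 kernels frequently omit the extra scheduler modules. A read-only sketch of that check (config paths are distro-dependent):

```shell
#!/bin/sh
# Containers share the host kernel, so "Specified qdisc not found" usually
# means the running kernel lacks the sch_prio scheduler (common on the
# stock WSL2 kernel), not that anything is missing inside the container.
check_qdisc_support() {
    { zcat /proc/config.gz 2>/dev/null           # some kernels expose config here
      cat "/boot/config-$(uname -r)" 2>/dev/null # others install it under /boot
    } | grep -m1 'CONFIG_NET_SCH_PRIO' \
      || echo "CONFIG_NET_SCH_PRIO not found in kernel config"
}
check_qdisc_support
```

If the option is missing or commented out, no amount of package installation inside the container will make `tc qdisc add ... prio` work; the kernel itself has to provide the scheduler.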

Related

Can't Connect to Docker container within local network

Trying to run QuakeJS within a docker container. I'm new to docker (and networking). Couldn't connect. Decided to start simpler and ran nginxdemos/hello. Still can't connect to the server (running Ubuntu Server).
Tried:
docker run -d -p 8080:80 nginxdemos/hello
Probably relevant ip addr:
2: eno1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
link/ether 18:03:73:be:70:eb brd ff:ff:ff:ff:ff:ff
altname enp0s25
inet 10.89.233.61/20 metric 100 brd 10.89.239.255 scope global dynamic eno1
valid_lft 27400sec preferred_lft 27400sec
inet6 fe80::1a03:73ff:febe:70eb/64 scope link
valid_lft forever preferred_lft forever
3: docker0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
link/ether 02:42:ac:7c:bb:47 brd ff:ff:ff:ff:ff:ff
inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
valid_lft forever preferred_lft forever
inet6 fe80::42:acff:fe7c:bb47/64 scope link
valid_lft forever preferred_lft forever
Here's docker network ls:
NETWORK ID NAME DRIVER SCOPE
5671ad4b57fe bridge bridge local
a9348e40fb3c host host local
fdb16382afbd none null local
ufw status:
To Action From
-- ------ ----
8080 ALLOW Anywhere
8080 (v6) ALLOW Anywhere (v6)
Anywhere ALLOW OUT 172.17.0.0/16 on docker0
But when I try to access it in a web browser (Chrome and Firefox) at 172.17.0.0:8080 (or many other permutations) I just end up with a timeout. I'm sure this is a stupid thing but I'm very stuck.
UPDATE
I installed a basic apache server and it worked fine. So it's something with Docker. I think.
UPDATE AGAIN
docker ps
CONTAINER ID   IMAGE              COMMAND                  CREATED             STATUS             PORTS                                   NAMES
a7bbfee83954   nginxdemos/hello   "/docker-entrypoint.…"   About an hour ago   Up About an hour   0.0.0.0:8080->80/tcp, :::8080->80/tcp   relaxed_morse
I can use curl localhost:8080 and see the nginx page
I was playing with ufw but disabled it, not worried about network security. Tried ufw-docker too
FINAL UPDATE
Restarting Docker worked :|
When you publish a port with the -p option to docker run, the syntax is -p <host port>:<container port>, and you are saying: "expose the service running in the container on port <container port> on the Docker host as port <host port>".
So when you run:
docker run -d -p 8080:80 nginxdemos/hello
You could open a browser on your host and connect to http://localhost:8080 (because port 8080 is the <host_port>). If you have the address of the container, you could also connect to http://<container_ip>:80, but you almost never want to do that, because every time you start a new container it receives a new ip address.
We publish ports to the host so that we don't need to muck about finding container IP addresses.
running 172.17.0.0:8080 (0.1, 0.2) or 10.89.233.61:8080 results in a timeout
172.17.0.0:8080 doesn't make any sense.
Both 172.17.0.1:8080 and 10.89.233.61:8080 ought to work (as should any other address assigned to a host interface). Some diagnostics to try:
Is the container actually running (docker ps)?
On the docker host are you able to connect to localhost:8080?
It looks like you're using UFW. Have you made any recent changes to the firewall?
If you restart docker (systemctl restart docker), do you see any changes in behavior?
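The checks above, collected into one script (port 8080 is the one from the question; the DRY_RUN wrapper prints the commands instead of executing them, since they need a live Docker host and root):

```shell
#!/bin/sh
# DRY_RUN=1 (the default here) prints each command; set DRY_RUN=0 on a
# real host to actually run the diagnostics.
DRY_RUN=${DRY_RUN:-1}
run() { [ "$DRY_RUN" = 1 ] && echo "+ $*" || "$@"; }

run docker ps                          # 1. is the container actually running?
run curl -sf http://localhost:8080/    # 2. reachable from the host itself?
run ufw status verbose                 # 3. any firewall rules in the way?
run systemctl restart docker           # 4. does restarting docker change things?
```

Given the asker's final update, step 4 alone fixed it; a docker restart rebuilds the iptables NAT rules that publish ports.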

tc qdisc with GRE in openwrt

I'm trying to apply traffic control to a GRE interface on an OpenWrt board. For this I followed the steps below:
Create a GRE interface named gre1 on both tunnel end devices.
Tested reachability with ping: success.
Create the qdisc using the following command:
tc qdisc add dev gre1 root handle 1: htb default 2
Before creating tc classes I tried to ping the tunnel interface, but this failed.
I tried to capture packets on gre1 but found 0 packets.
Monitored the qdisc statistics using the command
tc -p -s -d qdisc show dev gre1
and found that the packet drop count is increasing.
I have tested the same setup on an Ubuntu PC and found it working. Also, if I change the tunnel to a VPN tunnel instead of GRE, it works fine.
Is there anything additional I need to handle to implement tc on GRE?
Any help will be appreciated.
Fixed!
Add a class:
tc class add dev eth0 parent 1:1 classid 1:2 htb rate 60kbps ceil 100kbps
then add sfq for the class:
tc qdisc add dev eth0 parent 1:2 handle 20: sfq
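Putting the fix together for the asker's gre1 interface (substituting gre1 for the answer's eth0 is my assumption; the rates are the ones from the answer, and the DRY_RUN wrapper prints rather than executes, since tc needs root and a real tunnel):

```shell
#!/bin/sh
# DRY_RUN=1 (default) prints the tc commands; set DRY_RUN=0 on a real box.
DRY_RUN=${DRY_RUN:-1}
run() { [ "$DRY_RUN" = 1 ] && echo "+ $*" || "$@"; }

# Root htb qdisc; "default 2" sends unclassified traffic to class 1:2.
run tc qdisc add dev gre1 root handle 1: htb default 2
# The class that "default 2" points at must exist; until it does, htb can
# drop unclassified packets, which matches the rising drop counter observed.
run tc class add dev gre1 parent 1: classid 1:2 htb rate 60kbps ceil 100kbps
# Fair queuing within that class.
run tc qdisc add dev gre1 parent 1:2 handle 20: sfq
```

Note that in tc, "kbps" means kilobytes per second; use "kbit" for kilobits per second.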

Docker exposed port unavailable on browser, though the former run works

I use Docker on VirtualBox and I am trying to add one more running image.
The first 'run' that I had is
$ docker run -Pit --name rpython -p 8888:8888 -p 8787:8787 -p 6006:6006 \
    -p 8022:22 -v /c/Users/lenovo:/home/dockeruser/hosthome \
    datascienceschool/rpython
$ docker port rpython
8888/tcp -> 0.0.0.0:8888
22/tcp -> 0.0.0.0:8022
27017/tcp -> 0.0.0.0:32781
28017/tcp -> 0.0.0.0:32780
5432/tcp -> 0.0.0.0:32783
6006/tcp -> 0.0.0.0:6006
6379/tcp -> 0.0.0.0:32782
8787/tcp -> 0.0.0.0:8787
It works fine, on local browser through those tcp.
But with the second 'run':
docker run -Pit -i -t -p 8888:8888 -p 8787:8787 -p 8022:22 -p 3306:3306 \
    --name ubuntu jhleeroot/dave_ubuntu:1.0.1
$ docker port ubuntu
22/tcp -> 0.0.0.0:8022
3306/tcp -> 0.0.0.0:3306
8787/tcp -> 0.0.0.0:8787
8888/tcp -> 0.0.0.0:8888
It doesn't work.
root@90dd963fe685:/# ip addr show
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
23: eth0@if24: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
link/ether 02:42:ac:11:00:02 brd ff:ff:ff:ff:ff:ff link-netnsid 0
inet 172.17.0.2/16 scope global eth0
valid_lft forever preferred_lft forever
Any idea about this?
Your first command ran an image (datascienceschool/rpython) that presumably kicked off a Python app listening on the ports you were testing.
Your second command ran a different image (jhleeroot/dave_ubuntu:1.0.1), and from the pasted output you are only running a bash shell in that image. Bash isn't listening on those ports inside the container, so docker will forward to a closed port and the browser will see a refused connection.
Docker doesn't run the server for your listening ports, it relies on you to run that inside the container and it just forwards the requests.
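A sketch of the remedy: give the container a process that actually listens on the published port. Whether this particular image ships python3 is an assumption on my part; any long-running listener works. The DRY_RUN wrapper prints the command, since it needs the Docker daemon:

```shell
#!/bin/sh
# DRY_RUN=1 (default) prints the command; set DRY_RUN=0 to really run it.
DRY_RUN=${DRY_RUN:-1}
run() { [ "$DRY_RUN" = 1 ] && echo "+ $*" || "$@"; }

# Start a server bound inside the container on 8888, so that
# -p 8888:8888 has something to forward requests to.
run docker run -d -p 8888:8888 jhleeroot/dave_ubuntu:1.0.1 \
    python3 -m http.server 8888 --bind 0.0.0.0
```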

docker swarm container connect to host port

I have a swarm cluster in which I created a global service to run on all docker hosts in the cluster.
The goal is to have each container instance for this service connect to a port listening on the docker host.
For further information, I am following this Docker Daemon Metrics guide for exposing the new docker metrics API on all hosts and then proxying that host port into the overlay network so that Prometheus can scrape metrics from all swarm hosts.
I have read several docker github issues #8395 #32101 #32277 #1143 - from this my understanding is the same as outlined in the Docker Daemon Metrics. In order to connect to the host from within a swarm container, I should use the docker-gwbridge network which by default is 172.18.0.1.
Every container in my swarm has a network interface for the docker-gwbridge network:
326: eth0@if327: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1450 qdisc noqueue
link/ether 02:42:0a:ff:00:06 brd ff:ff:ff:ff:ff:ff
inet 10.255.0.6/16 scope global eth0
valid_lft forever preferred_lft forever
inet 10.255.0.5/32 scope global eth0
valid_lft forever preferred_lft forever
333: eth1@if334: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue
link/ether 02:42:ac:12:00:04 brd ff:ff:ff:ff:ff:ff
inet 172.18.0.4/16 scope global eth1
valid_lft forever preferred_lft forever
Also, every container in the swarm has a default route that is via 172.18.0.1:
/prometheus # ip route show 0.0.0.0/0 | grep -Eo 'via \S+' | awk '{ print $2 }'
172.18.0.1
/prometheus # netstat -nr | grep '^0\.0\.0\.0' | awk '{print $2}'
172.18.0.1
/prometheus # ip route
default via 172.18.0.1 dev eth1
10.0.1.0/24 dev eth2 src 10.0.1.9
10.255.0.0/16 dev eth0 src 10.255.0.6
172.18.0.0/16 dev eth1 src 172.18.0.4
Despite this, I cannot communicate with 172.18.0.1 from within the container:
/ # wget -O- 172.18.0.1:4999
Connecting to 172.18.0.1:4999 (172.18.0.1:4999)
wget: can't connect to remote host (172.18.0.1): No route to host
On the host, I can access the docker metrics API on 172.18.0.1. I can ping and I can make a successful HTTP request.
Can anyone shed some light as to why this does not work from within the container as outlined in the Docker Daemon Metrics guide?
If the container has a network interface on the 172.18.0.1 network and has routes configured for 172.18.0.1 why do pings fail to 172.18.0.1 from within the container?
If this is not a valid approach for accessing a host port from within a swarm container, then how would one go about achieving this?
EDIT:
Just realized that I did not give all the information in the original post.
I am running docker swarm on a CentOS 7.2 host with docker version 17.04.0-ce, build 4845c56. My kernel is a build of 4.9.11 with vxlan and ipvs modules enabled.
After some further digging I have noted that this appears to be a firewall issue. I discovered that not only was I unable to ping 172.18.0.1 from within the containers - but I was not able to ping my host machine at all! I tried my domain name, the FQDN for the server and even its public IP address but the container could not ping the host (there is network access as I can ping google/etc).
I disabled firewalld on my host and then restarted the docker daemon. After this I was able to ping my host from within the containers (both domain name and 172.18.0.1). Unfortunately this is not a solution for me. I need to identify what firewall rules I need to put in place to allow container->host communication without requiring firewalld being disabled.
Firstly, I owe you a huge THANK YOU. Before I read your EDIT part, I'd spent literally day and night to solve a similar issue, and never realized that the devil is the firewall.
Without disabling the firewall, I have solved my problem on Ubuntu 16.04, using
sudo ufw allow in on docker_gwbridge
sudo ufw allow out on docker_gwbridge
sudo ufw enable
I'm not very familiar with CentOS, but I believe the following should help you, or at least serve as a hint:
sudo firewall-cmd --permanent --zone=trusted --change-interface=docker_gwbridge
sudo systemctl restart firewalld
You might have to restart docker as well.

Java application cannot get IP address of the host in docker container with static IP

I have used OpenStack for a while to manage my applications. Now I want to move them to Docker as one container per app, because Docker is more lightweight and efficient.
The problem is that almost everything related to networking goes wrong at runtime.
In my design, every application container should have a static IP address, and I can use the hosts file to locate the container network.
Here is my implementation (the bash file is docker_addnet.sh):
# Usage:
#   docker_addnet.sh container_name IP
# interface name: veth_(containername)
# gateway: 172.17.42.1
if [ $# != 2 ]; then
    echo "ERROR! Wrong args" >&2
    exit 1
fi
container_netmask=16
container_gw=172.17.42.1
container_name=$1
bridge_if=veth_$(echo ${container_name} | cut -c 1-10)
container_ip=$2/${container_netmask}
pid=$(docker inspect -f '{{.State.Pid}}' ${container_name})
echo "Container: ${container_name} pid: ${pid}"
# Make the container's network namespace visible to `ip netns`
mkdir -p /var/run/netns
ln -sf /proc/$pid/ns/net /var/run/netns/$pid
# Remove a stale bridge interface from a previous run, if any
brctl delif docker0 $bridge_if 2>/dev/null
# Create a veth pair: one end stays on docker0, the other goes into the container
ip link add A type veth peer name B
ip link set A name $bridge_if
brctl addif docker0 $bridge_if
ip link set $bridge_if up
# Move the container end into the namespace and configure address + route
ip link set B netns $pid
ip netns exec $pid ip link set dev B name eth0
ip netns exec $pid ip link set eth0 up
ip netns exec $pid ip addr add $container_ip dev eth0
ip netns exec $pid ip route add default via $container_gw
The script is used to set the static IP address of the container; when you run the container, you must append --net=none so that you can set up the network manually.
You can now start a container by
sudo docker run --rm -it --name repl --dns=8.8.8.8 --net=none clojure bash
and set the network by
sudo zsh docker_addnet.sh repl 172.17.15.1
In the container's bash, you can see the IP address with ip addr; the output is something like
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
67: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
link/ether 2e:7b:7e:5a:b5:d6 brd ff:ff:ff:ff:ff:ff
inet 172.17.15.1/16 scope global eth0
valid_lft forever preferred_lft forever
inet6 fe80::2c7b:7eff:fe5a:b5d6/64 scope link
valid_lft forever preferred_lft forever
So far so good.
Let's try to get the container's host IP address using the Clojure REPL. First start the REPL:
lein repl
next eval the code below
(. java.net.InetAddress getLocalHost)
The Clojure code is equivalent to
System.out.println(Inet4Address.getLocalHost());
What you get is an exception
UnknownHostException 5a8efbf89c79: Name or service not known
java.net.Inet6AddressImpl.lookupAllHostAddr (Inet6AddressImpl.java:-2)
Another weird thing is that the RMI server cannot get the client IP address via RemoteServer.getClientHost().
So what may cause this issue? I remember that Java sometimes picks up the wrong network configuration, but I don't know the reason.
The documentation for InetAddress.getLocalHost() says:
Returns the address of the local host. This is achieved by retrieving the name of the host from the system, then resolving that name into an InetAddress.
Since you didn't take any steps to make your static IP address resolvable inside the container, it doesn't work.
To find the address in Java without going via hostname you could enumerate all network interfaces via NetworkInterface.getNetworkInterfaces() then iterate over each interface inspecting each address to find the one you want. Example code at
Getting the IP address of the current machine using Java
Another option would be to use Docker's --add-host and --hostname options on the docker run command to put in a mapping for the address you want, then getLocalHost() should work as you expect.
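A minimal sketch of that option, reusing the container name and static address from the question (the DRY_RUN wrapper prints the command instead of executing it, since it needs the Docker daemon):

```shell
#!/bin/sh
# DRY_RUN=1 (default) prints the command; set DRY_RUN=0 to really run it.
DRY_RUN=${DRY_RUN:-1}
run() { [ "$DRY_RUN" = 1 ] && echo "+ $*" || "$@"; }

# --hostname sets the container's hostname; --add-host writes that name
# into /etc/hosts, so InetAddress.getLocalHost() can resolve it to the
# static address assigned by the script.
run docker run --rm -it --net=none --name repl --dns=8.8.8.8 \
    --hostname repl --add-host repl:172.17.15.1 \
    clojure bash
```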
In my design, every application container should have a static IP
address and I can use hosts file to locate the container network.
Why not rethink your original design? Let's start with two obvious options:
Container linking
Add a service discovery component
Container linking
This approach is described in the Docker documentation.
https://docs.docker.com/userguide/dockerlinks/
When launching a container you specify that it is linked to another. This results in environment variables being injected into the linked container containing the IP address and port number details of the collaborating container.
Your application's configuration stops using hard coded IP addresses and instead uses soft code references that are set at run-time.
Currently docker container linking is limited to a single host, but I expect this concept will continue to evolve into multi-host implementations. Worst case you could inject environment variables into your container at run-time.
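A hedged sketch of linking (the redis/busybox images are only illustrative; the DRY_RUN wrapper prints the commands, since they need the Docker daemon):

```shell
#!/bin/sh
# DRY_RUN=1 (default) prints each command; set DRY_RUN=0 on a real host.
DRY_RUN=${DRY_RUN:-1}
run() { [ "$DRY_RUN" = 1 ] && echo "+ $*" || "$@"; }

# Start a backend, then link it into a client under the alias "db".
# Docker injects variables such as DB_PORT_6379_TCP_ADDR into the linked
# container, so application config can read those instead of a hard-coded IP.
run docker run -d --name db redis
run docker run --rm --link db:db busybox env
```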
Service discovery
This is a common approach taken by large distributed applications. Examples implementations of such systems would be:
zookeeper
etcd
consul
..
With such a system in place, your back-end service components (e.g. a database) would register themselves on startup and client processes would dynamically discover their location at run-time. This form of decoupled operation is very Docker friendly and scales very well.

Resources