Wifi interface sometimes doesn't work in Arch Linux - wifi

Sometimes when I boot into Arch Linux, NetworkManager doesn't use the wifi adapter, even though lspci shows the driver is in use.
linux-firmware is installed
lspci -k
04:00.0 Network controller: Qualcomm Atheros QCA9377 802.11ac Wireless Network Adapter (rev 31)
Subsystem: Dell Device 1810
Kernel driver in use: ath10k_pci
Kernel modules: ath10k_pci
ip link
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000
2: enp3s0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc fq_codel state DOWN mode DEFAULT group default qlen 1000
link/ether 2c:ee:7f:2c:ee:16 brd ff:ff:ff:ff:ff:ff

I had a similar problem before, where the wifi adapter didn't show up in ip link either. My solution was to install linux-firmware during the initial system installation rather than afterwards, since udev somehow did not auto-detect the wifi adapter when the firmware was installed after the first boot.
Not sure if this is unique to my computer but hope it helps!
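If the adapter goes missing again after a reboot, one workaround sketch (assuming linux-firmware is already installed) is to reload the ath10k driver so the device gets re-probed:
sudo modprobe -r ath10k_pci
sudo modprobe ath10k_pci
ip link    # the wlan interface should now show up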

Related

Rootless mode: My docker container has internet but cannot ping

I just noticed that my Docker container has an internet connection, because I can update and download packages, but I cannot ping Google. I don't think it is a DNS issue, because I cannot ping 8.8.8.8 either.
% docker run -it --name alpine5 alpine ash
/ # apk update
fetch https://dl-cdn.alpinelinux.org/alpine/v3.15/main/x86_64/APKINDEX.tar.gz
fetch https://dl-cdn.alpinelinux.org/alpine/v3.15/community/x86_64/APKINDEX.tar.gz
v3.15.0-172-g86e4642bdd [https://dl-cdn.alpinelinux.org/alpine/v3.15/main]
v3.15.0-173-g0bd3b989ee [https://dl-cdn.alpinelinux.org/alpine/v3.15/community]
OK: 15837 distinct packages available
/ # apk add openssh
(1/10) Installing openssh-keygen (8.8_p1-r1)
(2/10) Installing ncurses-terminfo-base (6.3_p20211120-r0)
(3/10) Installing ncurses-libs (6.3_p20211120-r0)
(4/10) Installing libedit (20210910.3.1-r0)
(5/10) Installing openssh-client-common (8.8_p1-r1)
(6/10) Installing openssh-client-default (8.8_p1-r1)
(7/10) Installing openssh-sftp-server (8.8_p1-r1)
(8/10) Installing openssh-server-common (8.8_p1-r1)
(9/10) Installing openssh-server (8.8_p1-r1)
(10/10) Installing openssh (8.8_p1-r1)
Executing busybox-1.34.1-r3.trigger
OK: 12 MiB in 24 packages
/ # ping -c 2 google.com
PING google.com (172.217.160.142): 56 data bytes
--- google.com ping statistics ---
2 packets transmitted, 0 packets received, 100% packet loss
/ # ping -c 2 8.8.8.8
PING 8.8.8.8 (8.8.8.8): 56 data bytes
--- 8.8.8.8 ping statistics ---
2 packets transmitted, 0 packets received, 100% packet loss
What might be the issue here?
Update 1
I wanted to add that on my host machine (Linux Mint 20.2) the interfaces are:
% ip link show
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
2: enp5s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP mode DEFAULT group default qlen 1000
link/ether 18:31:bf:b5:e6:5b brd ff:ff:ff:ff:ff:ff
In the container, the interfaces are:
$ docker container attach alpine5
/ # ip addr show
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
61: eth0@if62: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue state UP
link/ether 02:42:ac:11:00:07 brd ff:ff:ff:ff:ff:ff
inet 172.17.0.7/16 brd 172.17.255.255 scope global eth0
valid_lft forever preferred_lft forever
/ # ip link show
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
61: eth0@if62: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue state UP
link/ether 02:42:ac:11:00:07 brd ff:ff:ff:ff:ff:ff
There is no docker0 interface. One more thing I need to mention: I am using Docker in rootless mode.
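For what it's worth, the missing docker0 is expected: in rootless mode the bridge lives inside the daemon's user namespace, not on the host. The Docker rootless docs also note that on some distributions ping does not work out of the box, because ICMP echo from an unprivileged user namespace requires the kernel's ping_group_range to cover your group. Their suggested fix, roughly (drop-in file name is my choice):
sudo sh -c 'echo "net.ipv4.ping_group_range = 0 2147483647" > /etc/sysctl.d/99-ping.conf'
sudo sysctl --system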

Not able to connect from inside container, but able to connect from host

I am facing a peculiar situation and could not find much help on the web.
I have a container (based on the Alpine image) running on a CentOS 7 host in host network mode, which essentially means it shares the network stack, /etc/hosts, and /etc/resolv.conf with the host.
I am trying to connect to a remote machine (UB1804-MN1-131) within our organization's network (so no proxy is needed). The connect call is a grpc.dial(hostname:port, ..) call.
I keep getting the below error:
code = Unavailable desc = connection error: desc = "transport: Error while dialing dial tcp: i/o timeout"
This behavior is not consistent: for example, sometimes it connects successfully after a few retries; other times it simply refuses to connect.
The same remote machine can be reached without any issues from the host itself.
Any help in finding the root cause is much appreciated. For reference, I am sharing the interface, /etc/hosts, and /etc/resolv.conf details (some values edited for security reasons):
[user@HOST-21343-135 ~]# ip add
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
2: ens192: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
link/ether 00:50:56:9e:7c:15 brd ff:ff:ff:ff:ff:ff
inet 172.17.65.135/16 brd 172.17.255.255 scope global ens192
valid_lft forever preferred_lft forever
3: ens224: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
link/ether 00:50:56:9e:12:a3 brd ff:ff:ff:ff:ff:ff
4: ens256: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
link/ether 00:50:56:9e:23:0c brd ff:ff:ff:ff:ff:ff
5: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default
link/ether 02:42:b0:fa:05:37 brd ff:ff:ff:ff:ff:ff
inet 10.190.64.1/25 scope global docker0
valid_lft forever preferred_lft forever
[user@HOST-21343-135 ~]# cat /etc/hosts
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4 HOST-21343-135
172.17.65.131 UB1804-MN1-131
[root@ATLAS-21343-135 ~]# cat /etc/resolv.conf
; generated by /usr/sbin/dhclient-script
nameserver 14.110.135.81
nameserver 14.110.135.82
nameserver 14.110.135.83
I have verified all the above data is shared by container also.
After much effort, I found the root cause: the DNS IPs present in /etc/resolv.conf. When I removed all the DNS entries, it worked like a charm. So it looks like name resolution was going through the DNS servers and failing there.
What I don't understand, though, is that the same lookup works fine on the host machine with the DNS entries present, which suggests the host ignores DNS and reads the name from /etc/hosts, while the same lookup from within the container does not ignore the DNS entries. I am not sure why the behaviour differs.
For now, the workaround mentioned above works for me, so I am good to go.
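One difference worth keeping in mind: the CentOS host resolves names through glibc, whose lookup order is controlled by /etc/nsswitch.conf, while the Alpine container uses musl's resolver, which behaves differently in several respects. A quick way to compare the two lookup paths inside the container (assuming getent and busybox's nslookup are available):
getent hosts UB1804-MN1-131    # goes through the libc resolver, which should consult /etc/hosts
nslookup UB1804-MN1-131        # queries the nameservers in /etc/resolv.conf directly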

Docker containers with bridge network cannot ping anything (even default gateway)

I cannot ping anything from containers using bridge networking (example: docker run --network bridge --rm -it bash ping 8.8.8.8), not even the container's default gateway.
ip route from inside container:
bash-5.1# ip route
default via 172.17.0.1 dev eth0
172.17.0.0/16 dev eth0 scope link src 172.17.0.2
ip link from my machine:
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
2: eno1: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc fq_codel state DOWN mode DEFAULT group default qlen 1000
link/ether 6c:02:e0:77:5a:c1 brd ff:ff:ff:ff:ff:ff
altname enp16s0
3: wlan0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP mode DORMANT group default qlen 1000
link/ether a4:97:b1:86:f9:6b brd ff:ff:ff:ff:ff:ff
4: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN mode DEFAULT group default
link/ether 02:42:bd:d0:fb:cc brd ff:ff:ff:ff:ff:ff
6: veth40b832a@if5: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP mode DEFAULT group default
link/ether ba:e5:3a:88:e4:67 brd ff:ff:ff:ff:ff:ff link-netnsid 0
The docker0 interface stays down even if containers are running.
brctl shows that the container interfaces don't get bridged to docker0:
bridge name bridge id STP enabled interfaces
docker0 8000.0242bdd0fbcc no
Here's the output of iptables -S -t nat:
-P PREROUTING ACCEPT
-P INPUT ACCEPT
-P OUTPUT ACCEPT
-P POSTROUTING ACCEPT
-N DOCKER
-A PREROUTING -m addrtype --dst-type LOCAL -j DOCKER
-A OUTPUT ! -d 127.0.0.0/8 -m addrtype --dst-type LOCAL -j DOCKER
-A POSTROUTING -s 172.17.0.0/16 ! -o docker0 -j MASQUERADE
-A DOCKER -i docker0 -j RETURN
So far I've tried reinstalling Docker and switching between iptables and nftables with iptables-nft. The whole issue started when I tried running a k3d example cluster. I'm running everything on Arch using the official packages.
I finally figured it out
Both NetworkManager and systemd-networkd were running on my system, which messed with the interface IPs via multiple competing DHCP clients. That also broke bridge networking, which in turn disrupted the containers' traffic.
Pro tip: don't run multiple daemons trying to do the same thing.
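A sketch of the fix, assuming you keep NetworkManager as the single network daemon:
sudo systemctl disable --now systemd-networkd          # stop the competing daemon
systemctl is-active NetworkManager systemd-networkd    # verify only one is active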
Check
cat /etc/sysctl.conf
and make sure you have
net.ipv4.ip_forward=1
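For reference, a quick way to check the live value and enable forwarding persistently if it is off:
sysctl net.ipv4.ip_forward    # 1 means forwarding is enabled
echo 'net.ipv4.ip_forward=1' | sudo tee -a /etc/sysctl.conf
sudo sysctl -p                # reload /etc/sysctl.conf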

Simulating network failures in Docker

I am trying to simulate partial or total network/container failure in Docker in order to see how my application behaves under failure conditions. I started by using pumba, but it isn't working right. More specifically, this command fails, both via pumba and when run directly in the container with docker exec:
tc qdisc add dev eth0 root netem delay 2000ms 10ms 20.00
with the following output:
RTNETLINK answers: Operation not permitted
Now here is where it gets stranger. It works inside my service containers (rabbitmq:3.6.10, redis:4.0.1, mongo:3.5.11) after installing the iproute2 package; actually, it only works there when run via pumba, not when run directly. It does not work inside my application containers, all of which use node:8.2.1 as the base image, which already has iproute2 installed. None of the containers have any cap_add entries applied.
Output of ip addr on one of the application containers:
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
2: tunl0@NONE: <NOARP> mtu 1480 qdisc noop state DOWN group default qlen 1
link/ipip 0.0.0.0 brd 0.0.0.0
3: gre0@NONE: <NOARP> mtu 1476 qdisc noop state DOWN group default qlen 1
link/gre 0.0.0.0 brd 0.0.0.0
4: gretap0@NONE: <BROADCAST,MULTICAST> mtu 1462 qdisc noop state DOWN group default qlen 1000
link/ether 00:00:00:00:00:00 brd ff:ff:ff:ff:ff:ff
5: ip_vti0@NONE: <NOARP> mtu 1332 qdisc noop state DOWN group default qlen 1
link/ipip 0.0.0.0 brd 0.0.0.0
6: ip6_vti0@NONE: <NOARP> mtu 1500 qdisc noop state DOWN group default qlen 1
link/tunnel6 :: brd ::
7: sit0@NONE: <NOARP> mtu 1480 qdisc noop state DOWN group default qlen 1
link/sit 0.0.0.0 brd 0.0.0.0
8: ip6tnl0@NONE: <NOARP> mtu 1452 qdisc noop state DOWN group default qlen 1
link/tunnel6 :: brd ::
9: ip6gre0@NONE: <NOARP> mtu 1448 qdisc noop state DOWN group default qlen 1
link/gre6 00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00 brd 00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00
113: eth0@if114: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
link/ether 02:42:ac:12:00:06 brd ff:ff:ff:ff:ff:ff
inet 172.18.0.6/16 scope global eth0
valid_lft forever preferred_lft forever
OK, I found part of the answer, and I'm sorry for the bit of incorrect information in the original question: pumba works on the service containers but not on the application containers, while the tc command, run directly, does not work in any of the containers.
It turns out that it was a problem with running as an unprivileged user. I opened an issue with pumba to address the problem.
The tc command still isn't working when run as root, and I still don't know why. However, I was only using that command for debugging, so while I am curious why it doesn't work, my main issue has been resolved.
You should call exec on the container using the root user: -u=0
like:
sudo docker exec -u=0 myContainer tc qdisc add dev eth0 root netem delay 2000ms 10ms 20.00
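Even as root, though, tc modifies qdiscs through netlink, which needs the NET_ADMIN capability, and Docker drops that capability by default. A sketch of starting a container that is allowed to run tc (image name taken from the question):
docker run --cap-add NET_ADMIN -it node:8.2.1 tc qdisc add dev eth0 root netem delay 2000ms 10ms 20.00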
I had a similar issue on Windows and was finally able to resolve it by turning off the WSL 2 based engine in Docker settings. Now all my tc qdisc... commands are working.

How to let docker container access host network port

I want to connect to a service on my local host from a Docker container. I am using Docker for Mac. I checked this link: How to access host port from docker container, but when I run ip addr show docker0 in the Docker container, I get an error response, since there is no docker0 device in the container. Below are all the network devices in my Docker container.
# ip addr show
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: tunl0@NONE: <NOARP> mtu 1480 qdisc noop state DOWN group default qlen 1
link/ipip 0.0.0.0 brd 0.0.0.0
3: gre0@NONE: <NOARP> mtu 1476 qdisc noop state DOWN group default qlen 1
link/gre 0.0.0.0 brd 0.0.0.0
4: gretap0@NONE: <BROADCAST,MULTICAST> mtu 1462 qdisc noop state DOWN group default qlen 1000
link/ether 00:00:00:00:00:00 brd ff:ff:ff:ff:ff:ff
5: ip_vti0@NONE: <NOARP> mtu 1428 qdisc noop state DOWN group default qlen 1
link/ipip 0.0.0.0 brd 0.0.0.0
6: ip6_vti0@NONE: <NOARP> mtu 1500 qdisc noop state DOWN group default qlen 1
link/tunnel6 :: brd ::
7: sit0@NONE: <NOARP> mtu 1480 qdisc noop state DOWN group default qlen 1
link/sit 0.0.0.0 brd 0.0.0.0
8: ip6tnl0@NONE: <NOARP> mtu 1452 qdisc noop state DOWN group default qlen 1
link/tunnel6 :: brd ::
9: ip6gre0@NONE: <NOARP> mtu 1448 qdisc noop state DOWN group default qlen 1
link/gre6 00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00 brd 00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00
238: eth0@if239: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
link/ether 02:42:ac:14:00:02 brd ff:ff:ff:ff:ff:ff
inet 172.20.0.2/16 scope global eth0
valid_lft forever preferred_lft forever
inet6 fe80::42:acff:fe14:2/64 scope link
valid_lft forever preferred_lft forever
Which one is my local host address?
You can use the special Mac-only DNS name docker.for.mac.localhost, which will resolve to the host's IP.
Source: https://docs.docker.com/docker-for-mac/networking/#there-is-no-docker0-bridge-on-macos
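For example, if a service is listening on port 8080 on the Mac (hypothetical port), from inside the container:
curl http://docker.for.mac.localhost:8080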
You can use docker run --network host ... to let the container access host ports, or set the network_mode option in docker-compose.yaml. But this mode can be a security issue if you use an untrusted Docker image.
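A minimal docker-compose.yaml sketch of that option (note that on Docker for Mac containers run in a Linux VM, so host mode refers to the VM's network, not the Mac's):
services:
  app:
    image: alpine    # hypothetical service image
    network_mode: host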
