Unmount NFS folders on wlan0 with pre-down - shutdown

I'm working on this machine (Ubuntu 12.10) that needs to connect to a NAS via wlan.
/etc/fstab is configured with nfs auto and the directory is mounted correctly when the wlan is connected.
BUT
the computer won't shut down and hangs waiting for the directory to be unmounted.
I did some research and the most common solution is mounting the directories with the soft,sync options, but I'm not too sure about using soft.
I think this problem can be fixed by simply unmounting the directory before the interface is brought down.
So, I was thinking about using the pre-down hook in /etc/network/interfaces, but I don't know how to write it correctly.
iface wlan0 inet manual
    pre-down /path/to/script_to_unmount_stuff.sh
Where script_to_unmount_stuff.sh (made executable with chmod +x) would look like
#!/bin/sh
# lazy-unmount the NFS shares before the interface goes down
umount -l /path/to/folder1
umount -l /path/to/folder2
Can something like this work?
Any other suggestions?
Thanks!

Related

Docker-compose can't connect to jupyter notebook on WSL

I run docker-compose on my WSL with a Jupyter notebook, and it gives me the following information:
[I 00:28:20.921 NotebookApp] Jupyter Notebook 6.1.3 is running at:
[I 00:28:20.921 NotebookApp] http://docker-desktop:3000/?token=...
[I 00:28:20.921 NotebookApp] or http://127.0.0.1:3000/?token=...
[I 00:28:20.921 NotebookApp] Use Control-C to stop this server and shut down all kernels (twice to skip confirmation).
As Docker is running on WSL, I can't access it via localhost on my Windows machine. I looked up the IP of the network adapter, which is 172.23.16.1, and tried to access the notebook via 172.23.16.1:3000, but I get a connection refused error.
I also opened incoming and outgoing port 3000 on my Windows machine.
What have I missed?
Have you mapped your container port so the host machine can reach it?
Another common problem: by default, Jupyter Notebook only allows traffic coming from localhost (note that this localhost is the container itself), so you can't access it from anywhere outside the container. To resolve this, make sure you start Jupyter Notebook allowing traffic from all IPs:
jupyter notebook --ip 0.0.0.0
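For example, a docker-compose.yml along these lines (the service name and image here are placeholders for illustration) both publishes the port to the host and binds Jupyter to all interfaces:
version: "3"
services:
  notebook:
    image: jupyter/base-notebook
    command: jupyter notebook --ip 0.0.0.0 --port 3000 --no-browser
    ports:
      - "3000:3000"   # host:container, so the host side can reach it
With that in place, http://localhost:3000 should work from the host, provided the request actually reaches the container (see the WSL2 caveat below).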
Long story short, you are almost certainly running into the same problem documented in this, this, and this question, among others. The last one is most similar, since it is about accessing a WSL2 instance from a Docker container, but they are all the same root cause. To quote my answer (slightly modified) from one of those:
The core issue here is that WSL2 operates in a Hyper-V VM with its own virtual NIC, running NAT'd behind the Windows host. WSL1, on the other hand, ran bridged with the Windows NIC.
On localhost, Windows does seem to do an automatic mapping, but for the host IP address (and thus, on the local network, including Docker containers, since they are on their own network), it does not. Even with the Docker network in bridged mode, it still does not see the WSL2 IP without additional effort.
You'll find a lot of information on this particular topic in this GitHub thread, along with several workarounds that I documented in answers to the other questions.
In your case, I would propose running the Jupyter notebook in a WSL1 instance, rather than WSL2. To my knowledge, there's nothing special in Jupyter which would require WSL2 capabilities, right?
Again, copying/pasting from there: you can convert the WSL2 instance to WSL1 by either doing (from PowerShell) a wsl --set-version <distroname> 1, or by cloning the existing instance with a wsl --export <distroname> <archivename>.tar and then wsl --import <distroname> <installlocation> <archivename>.tar. I prefer cloning since it gives you a backup.
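In PowerShell, the cloning route looks roughly like this (the -wsl1 name and the paths are placeholders):
wsl --export <distroname> <archivename>.tar
wsl --import <distroname>-wsl1 <installlocation> <archivename>.tar
wsl --set-version <distroname>-wsl1 1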

Mounted remote directory in MacOS is getting unmounted frequently

I've installed OSXFUSE on my Mac and used sshfs to mount a remote directory hosted on an Ubuntu server (I usually ssh into this server using ssh username@ip). This works fine, but the mount frequently gets disconnected and I have to mount it again with sshfs.
Can somebody help me understand why it is happening and what the way out is? My host machine is running macOS Catalina and the remote machine is Ubuntu 18.
Any pointer is highly appreciated.
As your ssh connection stays connected, I guess your sshd configuration is okay. Use the -o reconnect option for sshfs, like:
sshfs USER@HOST:REMOTE_DIR MOUNT_DIR -o auto_cache,reconnect,defer_permissions
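If the drops come from idle connections timing out, adding SSH keepalives on top of reconnect may also help; sshfs passes unrecognized -o options through to ssh, so something like this (the interval values are just a starting point):
sshfs USER@HOST:REMOTE_DIR MOUNT_DIR -o auto_cache,reconnect,defer_permissions,ServerAliveInterval=15,ServerAliveCountMax=3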

Docker image ls does not look at the proper IP address

I can see my machine … Windows 10 Home
usuario@DESKTOP-GTCQCAR MINGW64 /c/Program Files/Docker Toolbox
$ docker-machine ls
NAME ACTIVE DRIVER STATE URL SWARM DOCKER ERRORS
default - virtualbox Running tcp://192.168.99.101:2376 v18.05.0-ce
But when I try to list the images it tries to connect to a different IP ending in 100, instead of 101 where the docker machine is:
usuario@DESKTOP-GTCQCAR ~
$ docker image ls
error during connect: Get https://192.168.99.100:2376/v1.37/images/json: dial tcp 192.168.99.100:2376: connectex: A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond.
It cannot connect. How can I fix it?
I also faced a similar problem after updating from Docker Toolbox to Docker for Windows.
I solved it by deleting all the environment variables starting with DOCKER_.
I am not sure whether it will solve your problem as well, but maybe it will help someone.
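From the same Git Bash prompt shown in the question, something like this shows and clears them for the current session (unset only affects that shell; to remove them permanently, delete them in the Windows environment-variables dialog):
env | grep '^DOCKER_'
unset DOCKER_TLS_VERIFY DOCKER_CERT_PATH DOCKER_HOST DOCKER_TOOLBOX_INSTALL_PATH
Alternatively, eval $(docker-machine env default) re-points those variables at the machine that is actually running.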
This can also be helpful
The issue can come from having Docker Toolbox installed before switching to Docker for Windows.
Uninstall Docker for Windows (make sure Docker Toolbox and VirtualBox are uninstalled as well).
Go to the C:\Users\[USER] directory and remove the .docker directory if it is there.
Remove Environmental Variables:
DOCKER_TLS_VERIFY
DOCKER_CERT_PATH
DOCKER_HOST
DOCKER_TOOLBOX_INSTALL_PATH
You might want to restart your computer just to be safe.
Reference: https://forums.docker.com/t/docker-starts-but-trying-to-do-anything-results-in-error-during-connect/49007/5
Check out this great guide: https://docs.docker.com/toolbox/faqs/troubleshoot/
Good luck

How can I trigger a reload of resolv.conf in my containers?

When running containers on startup I noticed some were using resolv.conf before systemd-resolved had updated it from the default using DHCP. This meant that containers that started too early after boot could not resolve anything and needed to be restarted to pick up the proper DNS settings. This happens for different reasons in rkt and Docker: Docker's method for updating resolv.conf inside containers is not compatible with the overlay filesystem driver, and since systemd-resolved does not update the file in place (it creates a temporary file and renames it), rkt's bind mount does not update what the container sees.
Currently I am using a hacky systemd unit to delay network-online.target, which docker.service and my rkt pods depend on.
[Unit]
Description=Wait for DNS
[Service]
Type=oneshot
RemainAfterExit=true
ExecStart=/bin/sh -c 'while ! getent ahosts google.com >/dev/null; do sleep 1; done'
[Install]
WantedBy=network-online.target
But this significantly delays my start-up time
# systemd-analyze blame
18.068s wait-for-dns.service
...
and if resolv.conf changes again it won't help. So I was wondering if there's a more elegant solution to my problem. Ideally, I'd like to be able to trigger a resolv.conf update in both rkt and Docker containers every time it changes.
Run containers on a user-defined network so they will use the embedded DNS server, which forwards lookups to the system's DNS.
The default docker0 bridge has some special rules that were left in place for legacy support. Using a mounted /etc/resolv.conf is one of those legacy things.
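A minimal sketch of that approach (network and image names are placeholders):
docker network create mynet
docker run -d --network mynet --name app myimage
Containers attached to mynet resolve through Docker's embedded DNS (127.0.0.11 inside the container), which forwards lookups upstream on their behalf instead of relying on a resolv.conf copy baked into the container.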
If rkt doesn't support the same type of DNS, then the general solution could be to set up a DNS server like Unbound as a local forwarding resolver. Containers then have a static DNS server to reference.
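If you go that route, a minimal Unbound forwarding config might look roughly like this (the listen address, allowed subnet, and upstream resolver are placeholders to adjust):
# /etc/unbound/unbound.conf (sketch)
server:
    interface: 0.0.0.0
    access-control: 172.16.0.0/12 allow
forward-zone:
    name: "."
    forward-addr: 8.8.8.8
Containers would then be pointed at the host's bridge address with --dns, so their DNS target never changes even when the host's resolv.conf does.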

Cannot get docker-machine to work with virtualbox when using Cisco VPN AnyConnect

When I use Cisco AnyConnect VPN to join my corporate network, I cannot get docker-machine to connect to my VirtualBox VM. It has something to do with Cisco AnyConnect taking over all 192.168.*.* routes. I also tried using a totally different CIDR range (25.0.1.100/24) but still cannot get docker-machine to talk to the VM. When I check the routing table, the route gets added to utun0 instead of vboxnet0. I'm assuming utun0 is the VPN's host network interface. Here is the docker-machine output:
docker-machine create -d virtualbox dev
Running pre-create checks...
Creating machine...
Waiting for machine to be running, this may take a few minutes...
Machine is running, waiting for SSH to be available...
Detecting operating system of created instance...
Provisioning created instance...
Copying certs to the local machine directory...
Copying certs to the remote machine...
Setting Docker configuration on the remote daemon...
WARNING >>>
This machine has been allocated an IP address, but Docker Machine could not
reach it successfully.
SSH for the machine should still work, but connecting to exposed ports, such as
the Docker daemon port (usually <ip>:2376), may not work properly.
You may need to add the route manually, or use another related workaround.
This could be due to a VPN, proxy, or host file configuration issue.
You also might want to clear any VirtualBox host only interfaces you are not using.
To see how to connect Docker to this machine, run: docker-machine env dev
I had a similar problem with IP conflicts on 192.168.x.x. I solved it by changing the subnet of the VirtualBox host-only network.
1) run docker-machine rm dev
2) Go into the VirtualBox preferences and remove the host-only network
3) run docker-machine create --driver virtualbox --virtualbox-hostonly-cidr "25.0.1.100/24" dev
There is also a discussion on Github here: https://github.com/docker/kitematic/issues/1029#issuecomment-156219462
I had the same issue and this post on the Docker GitHub solves it.
sudo ifconfig vboxnet0 down && sudo ifconfig vboxnet0 up
You may also want to use port 2377, as discussed here.
If you have the option to run Cisco VPN in Split Tunnel (instead of Full Tunnel) mode, that seems to work well, while still allowing you to access your corporate network.
