Operation not permitted for TUNSETIFF - docker

I am trying to open a TUN device and configure it with an ioctl using request code TUNSETIFF, but I am getting an "Operation not permitted" error.
Environment:
PRETTY_NAME="Ubuntu 22.04.1 LTS"
$ docker --version
Docker version 20.10.17, build 100c701
Python 3.10.6
I am using the following command to run the container:
docker run --rm -it --network host --cap-add=NET_ADMIN --device=/dev/net/tun ubuntutest bash -c "tuntaptest.py"
I have tried the following options as well:
docker run --rm -it --network host --privileged
docker run --rm -it --network host --cap-add=SYS_ADMIN
Nothing has worked so far.
Code snippet:
import fcntl
import struct

# TUN ioctl request code and flag values from <linux/if_tun.h>
TUNSETIFF: int = 0x400454ca
IFF_TUN: int = 0x0001    # layer-3 TUN device (no Ethernet headers)
IFF_NO_PI: int = 0x1000  # do not prepend packet information
# Open the clone device and request an interface named tun0.
# The ifreq struct is packed in native byte order ('16sH'), not network order.
tun = open('/dev/net/tun', 'r+b', buffering=0)
ifr: bytes = struct.pack('16sH', bytes('tun0', 'utf-8'), IFF_TUN | IFF_NO_PI)
fcntl.ioctl(tun, TUNSETIFF, ifr)
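As a sanity check (a sketch, assuming the same ubuntutest image), it can help to confirm that the device node and the NET_ADMIN capability are actually present inside the container before debugging the Python side:
docker run --rm -it --cap-add=NET_ADMIN --device=/dev/net/tun ubuntutest bash -c "ls -l /dev/net/tun && grep CapEff /proc/self/status"
TUNSETIFF fails with EPERM when CAP_NET_ADMIN is missing, so if /dev/net/tun is absent or the effective capability set does not include NET_ADMIN (bit 12 of CapEff), the error comes from the container setup rather than from the Python code.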

Related

Is it possible to access mptcp sysctl inside docker?

I would like to read sysctl -n net.mptcp.mptcp_enabled from a Docker container, but so far I have not been able to. I have already tried the following:
1.
docker run -d --sysctl net.mptcp.mptcp_enabled=1 --name=test -p 3100:3100 my_container
75dcbdc65a1539ce734a413cb6e23bf216aea76f6533c52280d3e866270424b9
docker: Error response from daemon: failed to create shim: OCI runtime create failed: container_linux.go:380: starting container process caused: process_linux.go:545: container init caused: write sysctl key net.mptcp.mptcp_enabled: open /proc/sys/net/mptcp/mptcp_enabled: no such file or directory: unknown.
2.
docker run -d --cap-add=SYS_ADMIN --privileged --name=test -p 3100:3100 my_container
This time the container starts, but there is no /proc/sys/net/mptcp/mptcp_enabled file inside it.
3.
docker run -d -v /proc:/proc --cap-add=SYS_ADMIN --privileged --name=test -p 3100:3100 my_container
This is also the same as 2.
I have read that sysctls starting with net.* are namespaced, so I wonder why this is not working.
Note: my host machine has an MPTCP-capable kernel, and I can see all the MPTCP-related files under /proc/sys/net/mptcp/* on the host.
I faced the same issue. Using --net=host should solve it.
Try this:
docker run -d --net=host --name=test -p 3100:3100 my_container
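A quick way to verify from the running container (a sketch; the container name test comes from the commands above):
docker exec test cat /proc/sys/net/mptcp/mptcp_enabled
With --net=host the container shares the host's network namespace, so the entries under /proc/sys/net are the host's own; note that -p 3100:3100 has no effect in host networking mode, since published ports are ignored there.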

Docker can't find file location in Windows 10

I am trying to run software for predicting hemorrhage volume on brain CT in Docker: https://github.com/msharrock/deepbleed
I created a "deepbleed" folder on my D:\ drive on Windows, and ran the docker pull msharrock/deepbleed command after I cd'd into that directory. The pull was successful and I can see the image in my Docker Desktop app.
Then I created an indir and an outdir folder as instructed in the documentation, and placed my CT file for prediction in the indir folder.
The readme tells me to run this command next:
docker run -it msharrock/deepbleed bash -v /path/to/data:/data/
So I have run the following commands, but I get "no such file or directory" for all of them:
docker run --rm -it msharrock/deepbleed bash -v pwd/deepbleed/indir:outdir
docker run --rm -it msharrock/deepbleed bash -v ~/deepbleed/indir:/outdir/
docker run --rm -it msharrock/deepbleed bash -v /mnt/d/deepbleed/indir:/outdir/
docker run --rm -it msharrock/deepbleed bash -v /d/deepbleed/indir:/outdir
docker run --rm -it msharrock/deepbleed bash -v "$(& "D:\deepbleed\indir" "$(pwd)")":/outdir
docker run --rm -it msharrock/deepbleed bash -v /indir/:/outdir/
docker run --rm -it msharrock/deepbleed bash -v //d:/deepbleed/indir://d:/deepbleed/outdir/
docker run --rm -it msharrock/deepbleed bash -v //d/deepbleed/indir://d/deepbleed/outdir/
docker run --rm -it msharrock/deepbleed bash -v //d/deepbleed/indir:/outdir/
My Docker is running on the WSL2-based engine in Windows 10; the Hyper-V folders for disks and virtual machines are located on my D: drive.
What do I need to do to get this running?
Try doing it like this (using just one of the items from your list as an example, to give you the idea):
docker run --rm -it -v /mnt/d/deepbleed/indir:/outdir msharrock/deepbleed bash
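If you are running the command from PowerShell or cmd rather than a WSL shell, a Windows-style path in -v should also work with Docker Desktop (a sketch using the same folders and the /outdir target from the attempts above):
docker run --rm -it -v D:\deepbleed\indir:/outdir msharrock/deepbleed bash
The key points are that -v comes before the image name, and that the left-hand side is a path Docker Desktop can actually see (under /mnt/d from WSL, or D:\... from a Windows shell).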

Docker 18.09 equivalent of --gpus all

I'm trying to run a gpu-enabled container on a server with docker 18.09.5 installed. It's a shared server so I can't just upgrade the docker version.
I have a private server with docker 19.03.12 and the following works fine:
docker pull vistart/cuda
docker run --name somename --gpus all -it --shm-size=10g -v /dataloc:/mountedData vistart/cuda /bin/sh
nvidia-smi
yields: expected gpu stats
When I try this on the server with docker 18.09:
docker pull vistart/cuda
docker run --name somename --gpus all -it --shm-size=10g -v /dataloc:/mountedData
yields:
unknown flag: --gpus-all
See 'docker run --help'.
docker run --name somename -it --shm-size=10g -v /dataloc:/mountedData
works but..
nvidia-smi yields:
/bin/sh: 1: nvidia-smi: not found
Is there some v18.09 version of --gpus all that would work?
I've tried with nvidia-docker:
nvidia-docker run --name somename -it --shm-size=10g -v /dataloc:/mountedData
and this yields:
docker: Error response from daemon: OCI runtime create failed: container_linux.go:345: starting container process caused "process_linux.go:424: container init caused \"process_linux.go:407: running prestart hook 1 caused \\\"error running hook: exit status 1, stdout: , stderr: exec command: [/usr/bin/nvidia-container-cli --load-kmods configure --ldconfig=#/sbin/ldconfig --device=all --compute --utility --require=cuda>=11.0 brand=tesla,driver>=384,driver<385 brand=tesla,driver>=396,driver<397 brand=tesla,driver>=410,driver<411 brand=tesla,driver>=440,driver<441 brand=tesla,driver>=450,driver<451 --pid=3030 /local/var_local/nobackup/docker/overlay2/d096e63d0a34537f04cbafeb1b6c3315b4e6f0ff15e3e2cb30057f549dc75cb5/merged]\\\\nnvidia-container-cli: requirement error: unsatisfied condition: brand = tesla\\\\n\\\"\"": unknown.
Looks like the shared server is running CUDA 10.1, so it's not meeting the cuda>=11.0 requirement...
From docker 19.03 onwards, you can use:
docker run --gpus all myimage
For previous versions, you would use nvidia-docker like this:
nvidia-docker run myimage
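On 18.09 with the nvidia-docker2 package installed, the rough equivalent of --gpus all is the nvidia runtime plus the NVIDIA_VISIBLE_DEVICES environment variable (a sketch reusing the image and flags from the question):
docker run --runtime=nvidia -e NVIDIA_VISIBLE_DEVICES=all --name somename -it --shm-size=10g -v /dataloc:/mountedData vistart/cuda /bin/sh
Note that the requirement error in the log comes from the image's --require=cuda>=11.0 constraint, which the driver on the shared server apparently cannot satisfy, so an image built on an older CUDA base may be needed there no matter which wrapper is used.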

How to use a host's IP on a Docker container?

I am running a metasploitable2 docker container on a server. Here is the docker command to create this docker container:
docker run --name victumb-it tleemcjr/metasploitable2:latest sh -c "/bin/services.sh && bash" --security-opt apparmor=unconfined -privileged true --network host
I then ran an exploit from a Kali Linux container on a different server, targeting the Docker container, but it failed.
use exploit/unix/ftp/vsftpd_234_backdoor
msf5 exploit(unix/ftp/vsftpd_234_backdoor) > set RHOST 134.122.105.88
RHOST => 134.122.105.88
msf5 exploit(unix/ftp/vsftpd_234_backdoor) > run
[-] 134.122.105.88:21 - Exploit failed [unreachable]: Rex::ConnectionTimeout The connection timed out (134.122.105.88:21).
I am confused as to why this exploit failed. Because of --network host, I thought the traffic would be mirrored into the container. Is there any way to fix this networking error so that the hack succeeds?
Here is the tutorial I was loosely following: https://medium.com/cyberdefendersprogram/kali-linux-metasploit-getting-started-with-pen-testing-89d28944097b
Because the --network host option should be placed before the image name.
Usage: docker run [OPTIONS] IMAGE [COMMAND] [ARG...]
This should work:
docker run --name victumb-it --network host --security-opt apparmor=unconfined --privileged tleemcjr/metasploitable2:latest sh -c "/bin/services.sh && bash"
Here sh is the command, and everything after it is passed as arguments to sh.
The docker run options like --network, --security-opt and --privileged are placed before the image.
If you run docker inspect container_id, you will see the arguments passed to the command under the Args key; they are not options to docker run.
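For example, a quick way to see how the original command was parsed (a sketch; use whatever ID docker ps shows):
docker inspect --format '{{.Path}} {{.Args}}' container_id
With the command from the question this prints sh as the path, with --security-opt, -privileged and --network all listed in Args, i.e. handed to sh instead of to docker run.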

Docker: tendermint container does not work

My OS is Windows 10 and docker version 17.12.0-ce, build c97c6d6.
Here is my plan:
0. Get containers
docker pull tendermint/tendermint
docker pull tendermint/monitor
1. Init container
docker run --rm -p 46657:46657 --name tendermint_bc -v "C:/Users/user/sandbox/tendermind/tmdata:/tendermint" tendermint/tendermint init
2. Start container
docker run --rm -d -v "C:/Users/user/sandbox/tendermind/tmdata:/tendermint" tendermint/tendermint node --proxy_app=dummy
3. Start tendermint monitor
docker run -it --rm --link=tm tendermint/monitor tendermint_bc:46657
When I start the tendermint container I see only a hash, but the container is not listed by docker ps -a.
If I run docker logs tendermint_bc, result is:
Error response from daemon: No such container: tendermint_bc
The same workflow works fine on Unix.
Thanks for the help.
In step 1, you are initializing Tendermint, but not running it. To run it, execute:
docker run --rm -p 46657:46657 --name tendermint_bc -v "C:/Users/user/sandbox/tendermind/tmdata:/tendermint" tendermint/tendermint node --proxy_app=dummy
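Once the node container is up with that command, it should appear in docker ps and the logs command from the question should work:
docker ps
docker logs tendermint_bc
If it still exits immediately, temporarily dropping --rm and -d keeps the container (and its output) around, so the startup error becomes visible instead of the container vanishing before docker logs can reach it.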
