I am trying to investigate what is causing "Session invalid. Please log in again" on the OTOBO platform.
I therefore tried capturing traffic with tcpdump based on this provided solution, but what I fail to see is where my file "dump.pcap" is stored. I have checked the /tmp directories both on the Linux host and in the container, but I cannot find it.
The tcpdump command I used is:
docker run --rm -v $(pwd):/dump --tty --net=container:otobo_web_1 tcpdump tcpdump -i any -w /tmp/dump.pcap
Please help me understand where the file is stored.
I have managed to get the dump, but using a different command.
With this, the file is saved in my Ubuntu directory:
docker run --tty --net=container:otobo_web_1 tcpdump | tee /tmp/dump.pcap
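For what it's worth, the original command most likely did create the file, just not anywhere visible from the host: -w /tmp/dump.pcap writes to /tmp inside the capture container, and --rm removes that container (and the file) as soon as it exits. Since $(pwd) was already mounted at /dump, pointing tcpdump there keeps the capture on the host. A sketch based on the original command:

docker run --rm -v $(pwd):/dump --tty --net=container:otobo_web_1 tcpdump tcpdump -i any -w /dump/dump.pcap

The capture then appears as dump.pcap in the current host directory. Note that the tee workaround saves whatever the container prints to stdout, which is typically tcpdump's text decode rather than a binary pcap, so Wireshark may not open that file.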
I'm trying to run racadm both in Windows PowerShell using the official utility and on my Mac using this Docker container. In both instances, I can pull the RAC details, so I know my login and password are valid, but when I try to perform an sslkeyupload, I get the following error:
ERROR: Specified file file.pem does not exist.
The permissions on the file, at least on my Mac, are wide open (chmod 777), and it is in the same directory I'm running the command from:
docker run stackbot/racadm -r 10.10.1.4 -u root -p calvin sslkeyupload -t 1 -f ./file.pem
Anyone see anything obvious I may be doing wrong?
You're running the command inside a Docker container. It has no visibility into your local filesystem unless you explicitly expose your directory inside the container using the -v command-line option:
docker run -v $PWD:$PWD -w $PWD ...
The -v option creates a bind mount, and the -w option sets the working directory.
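Applied to the command from the question, that would look something like this (a sketch, untested; it assumes the image's entrypoint is the racadm binary, as the original invocation suggests):

docker run -v $PWD:$PWD -w $PWD stackbot/racadm -r 10.10.1.4 -u root -p calvin sslkeyupload -t 1 -f ./file.pem

With the current directory mounted at the same path and set as the working directory, the relative path ./file.pem resolves to the same file inside the container.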
The .sh file I am working with is:
docker run -d --rm -it --gpus '"device=0,1,2,3"' --ipc=host -v $HOME/Folder:/Folder tr_xl_container nohup python /Path/file.py -p/Path/ |& tee $HOME/Path/log.txt
I am confused about the -v and everything after that. Specifically, the -v $HOME/Folder:/Folder tr_xl_container section and -p/Path/. If someone could help break down what those commands mean, or point me to a reference that does, that would be very much appreciated. I checked the Docker documentation and Linux command-line documentation and did not come up with anything too helpful.
A docker run command is split into three parts:
docker options
the image to run
a command for the container
In your case -d --rm -it --gpus '"device=0,1,2,3"' --ipc=host -v $HOME/Folder:/Folder are docker options.
tr_xl_container is the image name.
nohup python /Path/file.py -p/Path/ is the command sent to the container.
The last part, |& tee $HOME/Path/log.txt, isn't run in the container; it takes the output of the docker run command itself and saves it in $HOME/Path/log.txt. (Since -d detaches and makes docker run print just the container ID, that ID, not the Python program's output, is what ends up in the log file.)
As for -v $HOME/Folder:/Folder, it's a volume mapping or, more precisely, a bind mount. It creates a directory in the container with the path /Folder that is linked to the directory $HOME/Folder on the host machine. That makes files in the host directory visible inside the container, and if the container does anything with files in the /Folder directory, those changes will be visible in the host directory.
The command after the image name is passed to the container, and it's up to the container what to do with it. From the looks of it, it runs a Python program stored at /Path/file.py in the image, but to be sure, you'd need to know what the image does.
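To make the bind-mount behavior concrete, here is a small sketch (illustrative paths; ubuntu stands in for any image):

mkdir -p $HOME/Folder
echo hello > $HOME/Folder/note.txt
docker run --rm -v $HOME/Folder:/Folder ubuntu cat /Folder/note.txt   # prints "hello"
docker run --rm -v $HOME/Folder:/Folder ubuntu touch /Folder/from-container
ls $HOME/Folder   # now also lists "from-container"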
I am trying to set up an image of the osrm-backend on Docker. I am unable to run Docker using the commands below (as mentioned in the wiki):
docker run -t -v ${pwd}/data osrm/osrm-backend:v5.18.0 osrm-extract -p /opt/car.lua /data/denmark-latest.osm.pbf
docker run -t -v ${pwd}:/data osrm/osrm-backend:v5.18.0 osrm-contract /data/denmark-latest.osrm
docker run -t -i -p 5000:5000 -v ${pwd}/data osrm/osrm-backend:v5.18.0 osrm-routed /data/denmark-latest.osrm
I have already fetched the corresponding map using both wget and Invoke-WebRequest. Every time I run the first of the commands above, it gives the error...
[error] Input file /data/denmark-latest.osm.pbf not found!
I have tried placing the downloaded maps in the corresponding location as well. Can anyone tell me what I am doing wrong here?
I am using PowerShell on Windows 10
For me the problem was that Docker was not able to access the C drive, even though sharing was turned on in the Docker settings. After wasting a lot of time, I turned off sharing of the C drive and then turned it back on. After that, when I mounted a folder into Docker, it was able to see the files.
[Screenshot: Docker share drive settings]
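Separately, it is worth double-checking the -v specs in the question: the first and third commands use ${pwd}/data with no colon, which Docker treats as a single container-side path (an anonymous volume) rather than a host-to-container mapping, so nothing from the host gets mounted. The second command already has the colon; following that form, the first command would be:

docker run -t -v ${pwd}:/data osrm/osrm-backend:v5.18.0 osrm-extract -p /opt/car.lua /data/denmark-latest.osm.pbf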
We have recently had a lot of problems deploying the Linux version of our app for clients (updated libraries, missing libraries, install paths), and we are looking at using Docker for deployment.
Our app has a UI, so we naturally map that using
-e DISPLAY=$DISPLAY -v /tmp/.X11-unix:/tmp/.X11-unix
and we can actually see the UI popping up.
But when it's time to open a file, the problems start. We want to browse only the host filesystem and save any output file on the host (the output directory is determined by the location of the opened file).
Which strategy would you suggest for this?
We want the client not to see any difference between the app running locally and inside Docker. We are working on a launch script so it looks like the client is still double-clicking on it to start the app. We can add all the configuration we need in there for the docker run command.
After recommendations by @CFrei and @Robert, here's a solution that seems to work well:
docker run \
-ti \
-u $(id -u):$(id -g) \
-v /home/$(id -un):/home/$(id -un) \
-v /etc/passwd:/etc/passwd \
-w $(pwd) \
MyDockerImage
And now, every file created inside that container lands in the right directory on the host, owned by the right user. (The -u flag runs the process with the host user's UID and GID, and mounting /etc/passwd lets the container resolve that UID back to a username.)
And from inside the container, it really looks like the host, which will be very useful for the client.
Thanks again for your help guys!
And I hope this can help someone else.
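For the launch script mentioned in the question, a minimal sketch might combine the X11 flags from the question with this answer's mounts (MyDockerImage is the placeholder image name used above; untested):

#!/bin/bash
docker run -ti \
-e DISPLAY=$DISPLAY \
-v /tmp/.X11-unix:/tmp/.X11-unix \
-u $(id -u):$(id -g) \
-v /home/$(id -un):/home/$(id -un) \
-v /etc/passwd:/etc/passwd \
-w $(pwd) \
MyDockerImage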
As you may know, the container has its own filesystem, provided by the image it runs on top of.
You can map a host directory or file to a path inside the container, where your program expects it to be. This is known as a Docker volume, or more precisely here, a bind mount. You're already doing that for the X11 socket communication (the -v flag).
For instance, for a file:
docker run -v /absolute/path/in/the/host/some.file:/path/inside/container/some.file
For a directory:
docker run -v /absolute/path/in/the/host/some/dir:/path/inside/container/some/dir
You can provide as many -v flags as you might need.
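For example, a sketch mounting a read-only config file alongside a writable output directory (paths and image name are illustrative; the :ro suffix makes a mount read-only, handy for files the app should not modify):

docker run \
-v /home/client/app.conf:/etc/myapp/app.conf:ro \
-v /home/client/output:/var/myapp/output \
myapp-image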
Here you can find more useful information.
I need to pipe (inject) a file or some data into a Docker container as part of the run command and have it written to a file within the container as part of startup. Is there a best-practice way to do this?
I've tried this:
cat data.txt | docker run -a stdin -a stdout -i -t ubuntu /bin/bash -c 'cat >/data.txt'
But I can't seem to get it to work.
cat setup.json | docker run -i ubuntu /bin/bash -c 'cat'
This worked for me. Remove the -t; you don't need the -a flags either.
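The -t flag allocates a pseudo-TTY, which doesn't mix well with piping data in on stdin; with just -i, stdin passes straight through to the container. To actually write the piped data to a file inside the container, as in the original attempt, something like this should work (a sketch):

cat data.txt | docker run -i ubuntu /bin/bash -c 'cat > /data.txt && ls -l /data.txt'

Keep in mind the file only lives as long as that container does, which is one reason the bind-mount approach below is usually preferable.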
The better solution is to make (mount) your host folder accessible to the Docker container, e.g. like this:
docker run -v /Users/<path>:/<container path> ...
Here /Users/<path> is a folder on your host computer and <container path> is the mounted path inside the container.
Also see the Manage data in containers manual page.
UPDATE: another example, Accessing External Files from Docker Containers.