I'm trying to access the file system of a container made with docker-machine. I've used the ssh command but it doesn't seem to have anything that will allow you to list files / folders (like ls).
How would one explore the files currently on a container with docker-machine?
You can create a bash session (assuming your image has bash installed) in a RUNNING container with the following command.
docker exec -ti <container_id> bash
Then you can explore the filesystem of the container.
I have installed the Linux subsystem and Windows Terminal. I ran an image using Docker (a command of the form docker run -it ......, where "......" refers to the rest of the syntax).
After the command finished running, my prompt (which was PS C:\Users\krs>) changed to root@ad02e79cfb5b:/# and I saw my project directory (say ProjectX) there (highlighted in green) along with other directories like lib, tmp, bin (similar to the Linux directories in a root folder).
However, I don't know where root@ad02e79cfb5b:/# is. I thought it might be the root directory, but when I open my root directory there are folders like lib, tmp, bin but not ProjectX. I am also not able to open it using the command cd root@ad02e79cfb5b:/#.
Where is root@ad02e79cfb5b:/# located? How do I access it again once I have closed it?
When you run docker run with the -it flag, it runs the container and gives you a shell inside it.
So the root@ad02e79cfb5b:/# you were seeing was the prompt inside the Docker container (root is the user, ad02e79cfb5b is the hostname, and / means you are in the root folder).
To get back into it, you first need to know whether the container is still running. To check, run docker ps -a (note that the -a flag is important: without it you will only see running containers, not stopped ones).
If the container you ran is still running, note its id and run docker exec -it <container-id> /bin/bash to get a shell back into it.
If the container is stopped, I would suggest removing it (docker rm <container-id>) and re-running it (docker run -it ...).
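If several containers are listed, the id lookup from docker ps -a can be scripted. A minimal sketch, run here against illustrative sample output (real ids, images, and names will differ; in practice you would capture the listing with docker ps -a itself):

```shell
# Sample 'docker ps -a' output (illustrative only; in practice you would
# capture it with: ps_output=$(docker ps -a)).
ps_output='CONTAINER ID   IMAGE    COMMAND   CREATED        STATUS                      NAMES
ad02e79cfb5b   ubuntu   "bash"    1 hour ago     Exited (0) 10 minutes ago   my-app
1f2e3d4c5b6a   alpine   "sh"      2 hours ago    Up 2 hours                  side-car'

# Stopped containers show a STATUS beginning with "Exited"; print their ids.
stopped_ids=$(printf '%s\n' "$ps_output" | awk '/Exited/ {print $1}')
printf '%s\n' "$stopped_ids"
```

A running id from the same listing (here 1f2e3d4c5b6a) is the one you would pass to docker exec; a stopped id is the one you would docker rm before re-running.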
I am trying to map an SMB network storage to Docker, in a development environment, to make it available to containers, in the same way as a shared local drive. This means, for the entire Docker VM, not individual containers. Another application needs the network storage through SMB access, but is in another domain, so I can't share anything from my local drives to it. Windows network drives also don't work with Docker.
The current workaround is to open nested shells on Docker to access the VM and then mount the network storage. I tried this as a Windows batch file, but it stops at the first shell prompt and does not send any further input via "echo".
docker run --rm -it --privileged --pid=host justincormack/nsenter1
echo ctr -n services.linuxkit task exec -t --exec-id foo docker-ce /bin/sh
echo mkdir host_mnt/mystorage
echo mkdir host_mnt/mystorage/Videos
echo mkdir host_mnt/mystorage/Videos/my-private-storage
echo mount -v -t cifs -o username=myname,password=p#s$w0rd,file_mode=0777,dir_mode=0777,vers=2.0,uid=1234,gid=1234 //mystorage.mycompany.com/Videos/my-private-storage /host_mnt/mystorage/Videos/my-private-storage
echo exit
echo exit
Typing this into the console (without the "echo"s) requires deletion/restart of Docker containers afterwards.
Is there any way to map a network drive to Docker easily and upon Docker startup? Or any other way to easily use an SMB resource?
I think the biggest problem you're going to face is that the entire Moby VM used for Docker for Windows has a read-only filesystem. If you were to attempt the mount directly from Moby itself, you would get an error saying it's missing the helper applications for CIFS / NFS.
mount: /mnt: bad option; for several filesystems (e.g. nfs, cifs) you might need a /sbin/mount.<type> helper program.
In most environments we would just install cifs-utils or nfs-common, but because it's a read-only filesystem, I can't think of a way to get that working.
I have a stopped Windows container named "mycake".
Now I would like to start it again and access its PowerShell inside the container. How would I do that? Thanks.
At the moment I can do something like this
Remove the current container
docker container rm mycake
and then start a new container
docker container run -it --name mycake d_layerfiles powershell
I'm not sure that is the best way.
Of course, you can only start a bash shell in a RUNNING container. Start a container by clicking the triangle start icon in Docker Desktop.
Then, as Hitesh Ghuge explained, this works very well; type this in a PowerShell or Command Prompt terminal to get a Linux command prompt from a Linux container:
C:\> docker exec -it CONTAINER_ID /bin/bash
root@dfd92a569d54:/mnt#
Note that in the Docker Desktop app, the Container Name has an icon to copy the hex numeric container id to your clipboard. You may also use the name as id, e.g. laughing_hertz, or the visible part of the hex id, e.g. dfd92a569d54.
(The current directory in the Linux shell, e.g. /mnt, probably depends on how the container was originally created, or on the state at the previous exit.)
I'm trying to use the docker client from inside WSL, connecting to the Docker engine on Windows. I've exposed the Docker engine on Windows on port 2375, and after setting the DOCKER_HOST environment variable in WSL, I can verify this works by running docker ps.
The problem comes when I attempt to mount directories into Docker containers from WSL. For example:
I create a directory and file inside my home folder on WSL (mkdir ~/dockertest && touch ~/dockertest/example.txt)
ls ~/dockertest shows my file has been created
I now start a docker container, mounting my docker test folder (docker run -it --rm -v ~/dockertest:/data alpine ls /data)
I would expect to see 'example.txt' in the docker container, but this does not seem to be happening.
Any ideas what I might be missing?
There are great instructions for Docker setup in WSL at https://nickjanetakis.com/blog/setting-up-docker-for-windows-and-wsl-to-work-flawlessly#ensure-volume-mounts-work - solved most of my problems. The biggest trick for me was with bind-mounted directories; you have to use a path in WSL that the Docker daemon will be able to translate, for example /c/Users/rfay/myproject.
I don't think you need to change the mount point as the link suggests. Rather, if you use pwd and sed in combination, you should get the effect you need.
docker run -it -v $(pwd | sed 's/^\/mnt//'):/var/folder -w "/var/folder" alpine
pwd returns the working folder in the format '/mnt/c/code/folder'. Piping this to sed and replacing '/mnt' with an empty string leaves you with a path such as '/c/code/folder', which is correct for Docker for Windows.
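To see the substitution on its own, without starting a container, you can run the same sed expression over a fixed example path (the path below stands in for pwd output):

```shell
# A WSL working directory as pwd would print it (example value).
wsl_path='/mnt/c/code/folder'

# Strip the leading '/mnt' to get the form Docker for Windows expects.
docker_path=$(printf '%s\n' "$wsl_path" | sed 's/^\/mnt//')
printf '%s\n' "$docker_path"   # /c/code/folder
```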
Anyone stumbling here over this issue, follow this: Docker Desktop WSL 2 Backend, and make sure you are running version 2 of WSL in PowerShell:
> wsl -l -v
NAME STATE VERSION
* docker-desktop Running 2
Ubuntu Running 2
docker-desktop-data Running 2
If your Ubuntu VERSION doesn't say 2, you need to update it according to the guide above.
After that, you will be able to mount your Linux directory to the Docker Container directly like:
docker run -v ~/my-project:/sources <my-image>
Specific to WSL 1
Ran into the same issue. One thing to remember is that the docker run command does not execute a container then and there in a command shell. It sends the run arguments to a Docker daemon that does not interpret WSL paths correctly. Hence, you need to pass a Windows-formatted path, in quotes and with backslashes escaped.
Your Windows path is
\\wsl$\Ubuntu\home\username\dockertest
Docker command after escaping will probably be like
docker run -it --rm -v "\\\\wsl\$\\Ubuntu\\home\\username\\dockertest":/data alpine ls /data
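To confirm the escaping is right before invoking docker, you can let the shell expand the quoted string first (Ubuntu, username, and dockertest are the placeholders from the question):

```shell
# The escaped argument exactly as it appears in the docker command above.
# In double quotes, \\ expands to \ and \$ expands to a literal $.
win_path="\\\\wsl\$\\Ubuntu\\home\\username\\dockertest"

# printf shows the literal value the docker daemon will receive.
printf '%s\n' "$win_path"   # \\wsl$\Ubuntu\home\username\dockertest
```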
Try
docker run -v /:{path} exe
Hope this helps you.
I installed the Docker Toolbox on my Windows 10 home machine. When I hit the quickstart icon, I get a bash shell, and I can run a command like
> docker run -it ruby /bin/bash
That puts me into the bash shell of the docker Ruby container. That container is running on a VirtualBox VM created by the Docker Toolbox. The VM had a shared folder setting with:
Folder Path: \\?\C:\Users
Folder Name: c/Users
read-only: unchecked
auto mount: checked
make permanent: checked
I would like to be able to access the C:\Users\ folder on my Windows 10 host from my docker container via a directory called /code within the container (which is running Debian Jessie).
How can I configure my VM, or my Docker container to be able to access that folder from my docker container?
The key was figuring out how to express the shared volume which traversed the Windows-VirtualBox boundary, and the VirtualBox-Docker boundary.
Since the shared folder between the VirtualBox VM and Windows 10 home is C:\Users, the mount must be somewhere under that folder tree.
I created a folder in windows called C:\Users\Jay\MyApp. This will be visible inside the VirtualBox VM.
I then decided to call the folder c/MyApp in the Docker container.
The other key point is that the volume mount must start with "//". So the full docker command is:
docker run -it -v //c/Users/Jay/MyApp:/c/MyApp ruby /bin/bash
I can edit the file called C:\Users\Jay\MyApp\test.rb in Windows, using a nice text editor, and then run it in my Ruby Linux container as
root@ad1e3223e3c7:/# cd c/MyApp
root@ad1e3223e3c7:/c/MyApp# ruby test.rb
The output of test.rb appears on the console of the Docker container.
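The //c/... mount path can also be derived mechanically from a Windows path. A sketch of that translation in shell, using the example path from above (lowercase the drive letter, flip the backslashes, add the // prefix):

```shell
# Windows path from the example above (backslashes are literal in single quotes).
win='C:\Users\Jay\MyApp'

drive=$(printf '%s' "$win" | cut -c1 | tr 'A-Z' 'a-z')   # drive letter, lowercased
rest=$(printf '%s' "$win" | cut -c3- | tr '\\' '/')      # path after 'C:', slashes flipped
docker_path="//$drive$rest"
printf '%s\n' "$docker_path"   # //c/Users/Jay/MyApp
```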