Just got started with Docker and installed Docker Desktop for Mac. It's all a bit confusing what this does, but it seems the GUI does not support creating/viewing images/containers?
So I started creating a Dockerfile manually, but where do I place it? People say /var/lib/docker/, but that folder does not exist, although the CLI says I already have two containers (created as a test after installing the Desktop app).
Update: Installed Kitematic alongside the Desktop app through which I can view/create containers.
I like to put it in the root of the project, but you can put it anywhere and use the -f/--file argument for docker build to specify where it's located.
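For example (the image tag and paths here are made up, just to show the two variants):

    # Dockerfile in the project root: run from that directory, "." is the build context.
    docker build -t myapp .

    # Dockerfile kept somewhere else: point at it with -f/--file,
    # while still using the project root as the context.
    docker build -t myapp -f docker/Dockerfile .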
You place the Dockerfile in the root of the project. A Dockerfile has a build context, which means it can COPY/ADD files and directories that are siblings or children of its directory.
I had Docker installed on my system in its default directories. There were many images and containers I was working with. But to install Docker Compose yesterday I installed Docker again with snap, and it got installed in a different location than the older one. Its daemon doesn't detect the old images and containers, probably because they live in a different location than the new Docker's data directory. I checked, and all images and containers are safe in the older directories.
So is there any way to get the contents of the older directories into the root directory of the new Docker, so that I don't have to re-download everything? I'm reluctant to use rsync as I'm not sure how the under-the-hood configuration works.
Or can I have multiple root directories for the daemon?
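(For reference, the daemon's storage location is controlled by the data-root key in daemon.json, so in principle the new daemon can be pointed at the old directory instead of copying anything. A rough sketch only; the snap config path below is an assumption, the snap's confinement may not allow arbitrary paths, and the daemon should be stopped and both directories backed up first:)

    # Assumed config location for the snap-installed daemon; adjust to your install.
    sudo tee /var/snap/docker/current/config/daemon.json <<'EOF'
    {
      "data-root": "/var/lib/docker"
    }
    EOF
    sudo snap restart docker
    docker info | grep "Docker Root Dir"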
I'm using a container for a TensorFlow GPU environment to avoid the hassle of setting one up manually, and I was following this guide: https://code.visualstudio.com/docs/remote/containers
I've set up the container and installed the necessary extensions, and then I try to run the "Open Folder in a Container" command. It works fine, but none of my files get linked to the new working area inside the container.
I felt like the guide was saying that I should get access to all my existing files and folders for the project inside the container.
Is this not how it works? What is the normal way of linking a project from the host system into the container?
EDIT: This is what I get when I open my container with none of my files present
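From what I've read so far, the usual way outside VS Code seems to be a plain bind mount, e.g. (image and paths here are just examples):

    # Bind-mount the current project folder into the container and work from there.
    docker run -it --rm --gpus all -v "$(pwd)":/workspace -w /workspace tensorflow/tensorflow:latest-gpu bash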
I'm using Visual Studio Code on a Windows machine. I'm trying to set up a Python Dev Container using a directory that contains a large set of CSV files (about 200GB). When I click to launch the remote container in Visual Studio Code, the application hangs saying "Starting Dev Container (show log): Building image".
I've been looking through the docs, and having read the Advanced Container Configuration I've tried modifying the devcontainer.json file by adding workspaceMount and workspaceFolder entries:
"workspaceMount" : "source=//c/path/to/folder,target=/workspace,type=bind,consistency=delegated"
"workspaceFolder" : "/workspace"
But to no avail. Is there a solution to launching Dev Containers on Windows using folders which contain large files?
I had a slightly different problem, but the solution might help you or someone else. I was trying to run docker-compose inside a docker-in-docker image (provided by vscode). In my case, my container was able to start, but nothing inside the container was able to run.
To solve my issue, I updated vscode and now there is a new option, Remote-Containers: Clone Repository in Container Volume.... If your code is a git repo, you can do this:
Follow the steps provided by vscode and you should have your repository in the container as a volume. It reduced my build times from about 30 minutes to 3 minutes (within the running container), because I brought stuff into the container after it was up and running.
Assuming the 200GB is ignored by your .gitignore, what you could try is, once the container has started, copying the 200GB worth of CSV files into the container. I thought this would help because I did a similar thing by bringing in all my node_modules after running the container.
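For example, once the dev container is running you could push the data in from the host with docker cp (the container name and paths below are placeholders):

    # Find the dev container's name or ID.
    docker ps
    # Copy the CSV folder from the Windows host into the running container.
    docker cp "C:\data\csv_folder" <container_name>:/workspace/data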
Inside a container I build a (C++) app. The source code directory is shared with --volume.
If Docker runs on Linux, the shared directory runs at full speed, but if Docker runs on a Mac, Docker has to bridge the share, which results in a speed drop. Therefore I have to copy the whole source directory into the container before starting to compile. But this copy step is necessary on non-Linux hosts only.
How can I detect if the share is "natively" shared?
Can I get information about the host os from inside the container?
Update
The idea behind this workflow is to set up an image for a defined environment to cross-build the product for multiple platforms (win, mac, linux). Otherwise each developer has a different Linux OS/compilers/components etc. installed.
As a docker newbie I thought that this image (with all required 3rd-party components/compilers) can be used to build the app within a container when it is launched.
One workaround I can think of is to use a special networking feature which is available on both Mac and Windows hosts, but not on Linux.
It is a special DNS entry you can use to get the IP of the host from inside the container: host.docker.internal. Read more here and here.
Now you just need a command that tells you whether or not it resolves. Since I don't know which shell you are using, I can't say for sure, but something like this should help you.
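A minimal sketch, assuming a POSIX shell with getent available inside the container:

    # host.docker.internal resolving usually means Docker Desktop (Mac/Windows)
    # rather than a native Linux host.
    if getent hosts host.docker.internal >/dev/null 2>&1; then
        echo "bridged share (non-Linux host)"
    else
        echo "native share (Linux host)"
    fi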
In my opinion you are looking at the issue from the wrong perspective.
First of all, the compilation should be done at build time, not at run time. If you do it in the container, it means that you are shipping an image with build tools, not to mention that the user of the image would need the source code to run it. For this reason it is good practice to compile at build time and ship only an image containing the binary to run.
Secondly, compiling at build time is fast because the source code is sent to the Docker daemon and accessed directly from there; there is no need for volumes.
Lastly, to answer your last question: it is you who runs the container, so you can tell it everything about the host it is running on, for example by adding an environment variable. It is overcomplicated to run the container and let it guess where it is running when you already have that information at the moment you start the container.
I used --env DO_COPY=1 when creating the container.
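For anyone landing here later, a minimal sketch of that approach (the image name, mount paths and build.sh are placeholders, not from the original setup; only DO_COPY is from the comment above):

    # On a non-Linux host, ask the container to copy the sources off the bind mount first.
    docker run --rm --env DO_COPY=1 -v "$(pwd)":/src my-build-image /src/build.sh

    # ...and inside build.sh (sketch):
    if [ "${DO_COPY:-0}" = "1" ]; then
        cp -a /src /tmp/src && cd /tmp/src    # work from a fast, container-local copy
    else
        cd /src                               # a native Linux bind mount is fast enough
    fi
    make -j"$(nproc)"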
I am trying to run Hyperledger's BYFN Tutorial on a Win10 Home using Docker Toolbox, with VirtualBox 5.2.4. I am using the default image for the VirtualBox VM.
I have set up a shared folder (not in C:/Users, but on my other drive) and it seems to be functioning correctly - changes I make from either Windows, or the docker-machine are reflected in both places as intended. I successfully generate the network artifacts using "./byfn -m generate", but I get an error when trying to "./byfn up" it.
What happens is that, as far as I can see from the logs, all the containers get brought up correctly, but for some reason the volumes of the cli container are not attached correctly (I think). When byfn.sh finishes I get the following error:
When I ssh into the cli container, I can see the channel-artifacts, crypto and scripts folders, but their contents don't seem to correlate with the volumes: section of the docker-compose file. First, the scripts folder is empty (whereas the docker-compose file specifies that it should mount a bunch of files), so I get the above error. Second, channel-artifacts contains only one directory named genesis.block, which should actually be a file. And in the crypto folder there are just a bunch of directories.
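If it helps, I can also share what Docker reports for the cli container's mounts, e.g. via:

    # List the mounts Docker thinks the cli container has.
    docker inspect cli --format '{{json .Mounts}}'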
As you might have guessed, I'm pretty new at docker, so this might be intended behavior, but I'm still getting an error.
Please let me know if I can provide additional information. Thanks in advance.