I've been attempting to move an application to the cloud for a while now and have most of the services set up in pods running in a k8s cluster. The last piece has been giving me trouble: I need to set up an image with an older piece of software that cannot be installed silently. I then attempted, in my Dockerfile, to install its .NET dependencies (2005.x86, 2010.x86, 2012.x86, 2015.x86, 2015.x64) and manually transfer a local install of the program, but that also did not work.
Is there any way to run through a guided install in a remote Windows image, or to determine all of the file changes made by an installer so they can be applied manually?
You can track the changes made by the installer by following these steps:
Start a new container based on your base image:
docker run --name test -d <base_image>
Open a shell in the new container (I am not familiar with Windows, so you might have to adapt the command below):
docker exec -ti test cmd
Run whatever commands you need to run inside the container. When you are done, exit the container.
Examine the changes to the container's filesystem:
docker container diff test
You can also use docker container export to export the container's filesystem as a tar archive, and then docker image import to create an image from that archive.
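For example (a rough sketch; the archive name and image tag here are just placeholders):
docker container export -o test.tar test
docker image import test.tar my-configured-app:latest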
Related
I'm running containerized build agents using the linux-sudo image tag and, using a Dockerfile (here), I have successfully customized it to suit our needs. This is running very well and successfully running the builds I need it to.
I am running it in a Swarm cluster with a docker-compose file (here), and I have recently enabled the DOCKER_IN_DOCKER variable. I can successfully run docker run hello-world in this container and I'm happy with the results. However, I am having an issue running a small utility container inside the agent with a bind-mount volume.
I want to use this Dockerfile inside the build agent to run npm CLI commands against the files in a mounted directory. I'm using the following command to run the container with a custom command and a volume as a bind mount.
docker run -it -v $(pwd):/app {IMAGE_TAG} install
So, in theory, this runs npm install against the local directory that is mounted in the container (npm is the command in the ENTRYPOINT, so I can just pass install to the container for simplicity). I can run this in other environments (Ubuntu and WSL) and it works very well. However, when I run it in the linux-sudo build agent image, it doesn't seem to mount the directory properly. If I inspect the directory in the running utility container (the npm one), the /app folder is empty. Shouldn't I expect to see the contents of the bind mount here, as I do in other environments?
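Roughly, the utility Dockerfile amounts to something like this (simplified; the base image and paths may differ):
FROM node:lts-alpine
WORKDIR /app
ENTRYPOINT ["npm"]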
I have inspected the container, and it confirms there is a volume created of type bind and I can also see it when I list out the docker volumes.
Is there something fundamental I am doing wrong here? I would really appreciate some help.
I start a Docker container with a special bash script that runs the container and then creates a user X with a dynamic name, UID and GID in the container. I can then bash into the container and perform actions as this user X. The script also creates an 'alias' user named vscode with the same UID as the dynamically created user X.
In VSCode I can attach to this container. Two questions:
How can I set up VSCode to perform all actions as the 'vscode' user or as user X? (When using devcontainer.json to create the container this is trivial, but now I attach to an existing container and devcontainer.json is not used.)
In devcontainer.json you have the option to automatically install extensions. Which settings file do I need to create to automatically install extensions when attaching to a container?
The solution should be automated. E.g. manual intervention and committing the image, as suggested below, is possible but would make it much harder for users to just use my Docker image.
I updated to vscode 1.39 and tried to add:
ADD server-env-setup /root/.vscode-server/server-env-setup
But "server-env-setup" seems to be only used for WSL.
I'll answer your questions in reverse order:
VSCode installs extensions after creating the container by using the docker exec command.
And now the recipe: the easiest way is to take a container already created by VSCode:
Run "Open folder on container" for creating dev container.
After container has done and you can work with VSCode. Stop your environment by clicking "Close remote connection".
Run docker ps -a. You should see the recently exited containers, something like:
As you can see, the latest container is: a7aa5af7ec08 vsc-typescript-2ea9f347739c5397afc431028000c02b. This is your container with all extensions installed. And it doesn't matter whether you installed the extensions manually or configured them via devcontainer.json.
Run docker commit a7aa5af7ec08 all-installed-vscode-image:latest. Now you have a docker image with all your favorite software installed. You can upload this image to your favorite docker registry and use it on other machines as well.
Now you can run docker run -i -u vscode all-installed-vscode-image:latest and attach VSCode to this container. This answers your first question.
Also, you can review the vscode documentation and use devcontainer.json configurations when you attach to already running containers, and even to containers running on remote machines.
VSCode now implements a "remoteUser" property which you can set in the image configuration. This ensures that VSCode logs into the container as the correct user.
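For example, the attached-container configuration file that VSCode keeps per container (or per image) can carry something along these lines (the extension id is just an illustration):
{
  "remoteUser": "vscode",
  "extensions": [
    "ms-python.python"
  ]
}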
My goal is to use Docker to create a mail setup running postfix + dovecot, fully configured and ready to go (on Ubuntu 14.04), so I could easily deploy on several servers. As far as I understand Docker, the process to do this is:
Spin up a new container (docker run -it ubuntu bash).
Install and configure postfix and dovecot.
If I need to shut down and take a break, I can exit the shell and return to the container via docker start <id> followed by docker attach <id>.
(here's where things get fuzzy for me)
At this point, is it better to export the image to a file, import on another server, and run it? How do I make sure the container will automatically start postfix, dovecot, and other services upon running it? I also don't quite understand the difference between using a Dockerfile to automate installations vs just installing it manually and exporting the image.
Configure multiple docker images using Dockerfiles
Each docker container should run only one service. So one container for postfix, one for another service, etc. You can have your running containers communicate with each other.
Build those images
Push those images to a registry so that you can easily pull them on different servers and have the same setup.
Pull those images on your different servers.
You can pass ENV variables when you start a container to configure it.
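For example (the variable names are placeholders for whatever your image's startup script expects):
docker run -d -e MAIL_DOMAIN=example.com -e MAIL_HOSTNAME=mail.example.com my-postfix-image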
You should not install something directly inside a running container.
This defeats the purpose of having a reproducible setup with Docker.
Your step #2 should be a RUN entry inside a Dockerfile, which is then used with docker build to create an image.
This image can then be used to start and stop containers as needed.
See the Dockerfile RUN entry documentation. This is usually used with apt-get install to install needed components.
The ENTRYPOINT in the Dockerfile should be set to start your services.
In general it is recommended to have just one process in each image.
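A rough sketch of what such a Dockerfile could look like for the postfix image (the config file and the foreground command are assumptions and depend on your postfix version):
FROM ubuntu:14.04
RUN apt-get update && DEBIAN_FRONTEND=noninteractive apt-get install -y postfix
COPY main.cf /etc/postfix/main.cf
ENTRYPOINT ["postfix", "start-fg"]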
I'm using docker on Ubuntu. During the development phase I cloned all the source code from Git on the host, edited it in WebStorm, and then ran it with Node.js inside a docker container with -v /host_dev_src:/container_src so that I could test it.
Then, when I wanted to send it for testing, I committed the container and pushed a new version. But when I pulled and ran the image on the test machine, the source code was missing. That makes sense, as on the test machine there's no /host_src available.
My current workaround is to clone the source code on the test machine and run docker with -v /host_test_src:/container_src. But I'd like to know if it's possible to copy the source code directly into the container and avoid that manipulation. I'd prefer to just copy, paste and run the image file with the source code, especially since there's no Internet connection on our testing machines.
PS: It seems docker cp only supports copying files from the container to the host.
One solution is to have a git clone step in the Dockerfile which adds the source code into the image. During development, you can override this code with your -v argument to docker run so that you can make changes without rebuilding. When it comes to testing, you just check your changes in and build a new image. Now you have a fully standalone image for testing.
Note that if you have a VOLUME instruction in your Dockerfile, you will need to make sure it occurs after the git clone step.
The problem with this approach is that if you are using a compiled language, you only want your binaries to live in the final image. In this case, the git clone needs to be replaced with some code that either fetches or compiles the binaries.
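A rough sketch of that approach for a Node.js app (the repository URL, base image, and start command are placeholders):
FROM node
RUN git clone https://example.com/your/repo.git /container_src
WORKDIR /container_src
RUN npm install
VOLUME /container_src
CMD ["node", "index.js"]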
Treat your source code as data, then package it as a data container; see https://docs.docker.com/userguide/dockervolumes/
Step 1: Create the app_src docker image
Put a Dockerfile inside your git repo, like this:
FROM busybox
ADD . /container_src
VOLUME /container_src
Then you can build the source image like this:
docker build -t app_src .
During development period, you can always use your old solution -v /host_dev_src:/container_src.
Step 2: Transfer this docker image like the app image
You can transfer this app_src image to the test system in the same way as your application image, probably via a docker registry.
Step 3: Run as a data container
On the test system, run the app container on top of it. (I use ubuntu for the demo.)
docker run -d -v /container_src --name code app_src
docker run -it --volumes-from code ubuntu bash
root@dfb2bb8456fe:/# ls /container_src
Dockerfile hello.c
root@dfb2bb8456fe:/#
Hope this helps.
(Credits to https://github.com/toffer/docker-data-only-container-demo , which is where I got the detailed ideas.)
Adding to Adrian's answer, I do git clone, and then do
CMD git pull && start-my-service
so the latest code on the checked-out branch gets run. This is obviously not for everyone, but it works in some software release models.
You could try having two Dockerfiles. The base one would know how to run your app from a predefined folder, but would not declare it as a volume. When developing, you would run this container with your host folder mounted as a volume. The other one, the package one, would inherit from the base one and copy/add the files from your host directory, again without volumes, so that you carry all the files to the tester's host.
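A rough sketch of those two Dockerfiles (names and paths are placeholders):
# base Dockerfile, built as e.g. my-app-base
FROM node
WORKDIR /app
CMD ["node", "index.js"]
# package Dockerfile, inherits the base and bakes the files in
FROM my-app-base
COPY . /app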
I just now started using docker.
I made a python application using Python 2.7. The Python code files are on my system and in a Bitbucket repository. I am able to run the files on my local system through Eclipse.
Now, how can I run those files in Docker and distribute the python application to other users (I don't want to show the code)?
Can you help me by explaining the steps in simple language?
Docker is in no way a means to hide your code.
If you want to run your code in the container, you will have to copy your code into the container. If you don't want to expose the source code, compile the Python code and distribute binaries: use Cython to compile the Python to C code, then distribute your app as Python binary libraries (pyd) instead.
Here is an example:
http://blog.biicode.com/bii-internals-compiling-your-python-application-with-cython/
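A minimal sketch of that approach, assuming your module is called app.py:
# setup.py
from setuptools import setup
from Cython.Build import cythonize

setup(ext_modules=cythonize("app.py"))
Running python setup.py build_ext --inplace then produces a compiled extension (a .so on Linux, a .pyd on Windows) that you can ship instead of the .py source.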
Do the following 3 steps on your host to copy the code to docker container:
1. Get the short container id:
docker ps
2. Get the full container id:
docker inspect -f '{{.Id}}' SHORT_CONTAINER_ID
3. Copy the file:
sudo cp path-to-file-on-host /var/lib/docker/aufs/mnt/FULL_CONTAINER_ID/PATH-TO-NEW-FILE-IN-CONTAINER
The way to run the code in the container should be the same as the way you run it on your host. Maybe some configuration is required for the ports and IP.