I'm trying to deploy a Docker image containing an ASP.NET Core (.NET 6) WebAPI to a server over SSH.
I know the command for transferring the image file is:
docker save <my_image_name> | ssh -C user@address docker load
Is it possible to execute this command within the Dockerfile right after building the image?
A Dockerfile can never run commands on the host, push its own image, save its output to a file, or anything else. The only thing it's possible to do in a Dockerfile is specify the commands needed to build the image within its isolated environment.
So, for example, in your Dockerfile you can't specify the image name (or tag) that will be used, forcibly push something to a registry, or run the complex docker save | ssh sequence you show. There's no option in the Dockerfile to do any of that.
You must run this as two separate commands, using pretty much the syntax you show for the transfer step. If the two systems are on a shared network, a better approach would be to set up a registry server of some sort and docker push the image there; the docker save ... docker load sequence isn't usually preferred unless the two systems are on physically isolated networks. Whatever you need to do after you build the image, you could also consider asking your continuous-integration system to do it for you, to avoid the manual step.
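For reference, the end-to-end flow on the build host is just a couple of shell commands run one after the other; the image name, user, address, and registry below are placeholders:

docker build -t my_image .                                # build the image from your Dockerfile
docker save my_image | ssh -C user@address docker load    # stream it to the remote host over SSH

# Registry alternative, if both machines can reach a shared registry:
docker tag my_image registry.example.com/my_image:latest
docker push registry.example.com/my_image:latest          # then docker pull on the remote host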
Related
I'm having difficulties understanding Docker. No matter how many tutorials I watch or guides I read, to me docker-compose seems like a way of defining multiple Dockerfiles, i.e. multiple containers. I can define environment variables, ports, commands, and base images in both.
I read in other questions/discussions that Dockerfile defines how to build an image, and docker-compose is how to run an image, but I don't understand that. I can build docker containers without having to have a Dockerfile.
It's mainly for local development, though. Does the Dockerfile have an important role when deploying to AWS, for example (where Docker presumably comes out of the box, e.g. on EC2)?
So is the reason I can work locally with docker-compose alone that the base image is my computer (taking care of the task the Dockerfile is supposed to do)?
Think about how you'd run some program, without Docker involved. Usually it's two steps:
Install it using a package manager like apt-get or brew, or build it from source
Run it, without needing any of its source code locally
In plain Docker without Compose, similarly, you have the same two steps:
docker pull a prebuilt image with the software, or docker build it from source
docker run it, without needing any of its source code locally
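A concrete sketch of those two steps, with a placeholder image name:

docker build -t myapp .        # build the image from the Dockerfile in the current directory
docker run -p 8080:8080 myapp  # run it; the source code is not needed on the machine running this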
I'd aim to have a Dockerfile that creates an immutable image of your application, with all of its source code and library dependencies built in. The ideal is that you can docker run your image without needing -v options to inject source code and without providing the command on the docker run command line.
The reality is that there are a lot of moving parts: you probably need to docker network create a network to get containers to communicate with each other, and use docker run -e environment variables to specify host names and database credentials, and launch multiple containers together, and so on. And that's where Compose comes in: instead of running a series of very long docker commands, you can put all of the details you need in a docker-compose.yml file, check it in, and run docker-compose up to get all of those parts put together.
So, do:
Use Compose to start multiple containers together
Use Compose to write down complex runtime options like port mappings or environment variables with host names and credentials
Use Compose to build your image and start a container from it with a single command
Build your application code, and a standard CMD to run it, into your Dockerfile.
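As a rough illustration (the service names, port, and credentials are made up, not a prescription), a minimal docker-compose.yml that collects those runtime details might look like:

version: "3.8"
services:
  app:
    build: .                 # build the image from the Dockerfile in this directory
    ports:
      - "8080:8080"          # complex runtime options live here, not on the command line
    environment:
      DATABASE_HOST: db      # host names and credentials as environment variables
  db:
    image: postgres:15
    environment:
      POSTGRES_PASSWORD: example

With that checked in, docker-compose up --build builds the image and starts both containers on a shared network with a single command.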
I am new to docker, and have downloaded the tfx image using
docker pull tensorflow/tfx
However, I am unable to find anywhere how to successfully launch a container from it.
You can use docker image ls to get a list of the Docker images available on your machine. Note that an "image" is a read-only template for a container (not a full virtual machine).
To start a container from it and get a shell inside, you would use a command like docker run -it --entrypoint bash tensorflow/tfx. This spins up a temporary container based on the tensorflow/tfx image.
By default, Docker assumes you want the latest version of that image stored on your local machine, i.e. tensorflow/tfx:latest in the list. If you want something else, you can reference a specific image version by tag or hash, e.g. docker run -it --entrypoint bash tensorflow/tfx:1.0.0 or docker run -it --entrypoint bash fe507176d0e6. I typically run docker image ls first and cut & paste the hash, so my notes are specific about which build I'm referencing even if I later edit the relevant Dockerfile.
Also note that changes you make inside that container will not be saved back to the image. Once you exit the bash shell, they are gone. The shell is useful for checking the state and file structure of a constructed image. If you want to change the image itself, use a Dockerfile. Each instruction in a Dockerfile creates a new intermediate image (layer) when the Dockerfile is built. If you know that something went wrong between lines 5 and 10 of the Dockerfile, you can potentially shell into each of those intermediate images in turn (with the docker run command above) to see what went wrong. Kinda tedious, but it works.
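For example, a minimal Dockerfile that layers your own changes on top of the TFX image might look like this (the package name and paths are placeholders, and this assumes pip is available in the base image):

FROM tensorflow/tfx
RUN pip install some-extra-package     # hypothetical dependency; each instruction adds a layer
COPY my_pipeline/ /my_pipeline/        # bake your own files into the image

Build it with docker build -t my-tfx . and then shell in with the same docker run command as above, substituting my-tfx for tensorflow/tfx.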
Also note that docker run is not equivalent to running a TFX pipeline. For the latter, you want to look into the TFX CLI commands or otherwise compile the pipeline - and probably upload it to an external Kubeflow server.
Also note that the Docker image is just a starting point for one piece of your TFX pipeline. A full pipeline will require you to specify the components you want, a more-complete Dockerfile, and more. That's a huge topic, and IMO, the existing documentation leaves a lot to be desired. The Dockerfile you create describes the image which will be distributed to each of the workers which process the full pipeline. It's the place to specify dependencies, necessary files, and other custom setup for the machine. Most ML-relevant concerns are handled in other files.
I have a very simple system consisting of two containers, and I can successfully orchestrate them on my local machine with docker compose. I would like to put this system in a single VM in the cloud and allow others to easily do the same.
Because my preferred cloud provider offers easy access to a container OS, I would like to fit this system into a single container for easy distribution and deployment. I don't believe I'm running into the well-known Docker-in-Docker pitfalls here, so I was hoping to use a Docker-in-Docker setup and make a single composite image that runs docker compose to bring up my two containers, just like on my local machine.
But, when I try to add
RUN docker pull my/image1
RUN docker pull my/image2
to the composite Dockerfile that extends the Docker image, those commands fail upon build because the Docker daemon is not running.
What I'm trying to accomplish here is to pull the two sub-images into my composite image at build time to minimize startup time of the composite image. Is there a way to do that?
There is a way to do this, but it is probably a bad idea.
Use docker-machine to create a docker-machine instance.
Use docker-machine env to get the credentials for your newly created docker-machine instance. These will be a couple of environment variables.
Add something like ARG DOCKER_HOST="tcp://172.16.62.130:2376" for each of the credentials created in the previous step. Put it in your Dockerfile before the first RUN docker ....
After the last ARG but before the first RUN docker ..., add ENV DOCKER_HOST=${DOCKER_HOST}, and the same for the other credential variables.
This should enable the docker pull to work, but it does not really solve your problem because the pull happens on the docker-machine and does not get captured in the docker image.
To get your desired effect you would need to additionally have
RUN docker save ... to export each pulled image to a tar archive file inside the image you are building.
Then you would have to add corresponding logic to docker load ... the tar archive files when the composite container starts.
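Purely for illustration, and assuming the Docker CLI is available in the image you are extending, that fragment might look roughly like this (the address and image names are placeholders, and the TLS credential variables are left out):

FROM docker
ARG DOCKER_HOST="tcp://172.16.62.130:2376"     # value taken from docker-machine env
ENV DOCKER_HOST=${DOCKER_HOST}
RUN docker pull my/image1 && docker save -o /image1.tar my/image1   # pull runs on the docker-machine; the tar is captured in this image
RUN docker pull my/image2 && docker save -o /image2.tar my/image2
# at container startup an entrypoint script would run:
#   docker load -i /image1.tar && docker load -i /image2.tar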
The bottom line is that you can do this, but you probably should not. I don't think it will save you any time. It will probably cost you time.
Can Docker images be built from source code? What I mean is: I want to build my environment with several components and their dependencies, and I want to build the components from their source code. Does Docker allow me to do something like that?
Sounds like you want a dynamic Docker build process. For this you need Docker 1.9 or later; use --build-arg to pass argument variables. You can build multiple images from a single Dockerfile, passing in different argument values each time.
Obviously this suffers from the reproducibility issue already discussed.
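A small sketch of that pattern, with made-up names:

# Dockerfile
FROM ubuntu:22.04
ARG APP_VERSION=1.0                          # placeholder build argument
RUN echo "building version ${APP_VERSION}"   # the real build steps would use this value

# shell: build two different images from the same Dockerfile
docker build --build-arg APP_VERSION=1.0 -t myapp:1.0 .
docker build --build-arg APP_VERSION=2.0 -t myapp:2.0 .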
Yes, it allows you to do that. You need to start with a base image. For example Ubuntu:
docker pull ubuntu
docker run -t -i ubuntu /bin/bash
After that you will have a bash shell running inside your container. Then you can apt-get packages, run code, change configuration, clone repos, and whatever else you want. After that, to convert your container into an image, you need to commit the container.
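The commit step itself looks something like this (the image name is a placeholder):

docker ps -a                              # find the ID of the container you were working in
docker commit <container_id> myenv:v1     # snapshot its filesystem as a new image
docker run -t -i myenv:v1 /bin/bash       # start a fresh container from the committed image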
Be aware that this is not the Docker way of building infrastructure. The correct way is to create a recipe (a Dockerfile) for building your images, using other base images and standard Docker instructions. This will make your infrastructure stateless, faster to build, and more reproducible.
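For comparison, a sketch of that recipe approach, with a placeholder repository URL and build commands:

FROM ubuntu
RUN apt-get update && apt-get install -y git build-essential   # build tools and dependencies
RUN git clone https://github.com/example/component.git /src    # placeholder repository
WORKDIR /src
RUN make && make install                                       # placeholder build commands
CMD ["component"]                                              # placeholder start command

Running docker build -t mycomponent . then produces the image with no manual commit step.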
When I run docker build ., the ID that is spit out is of the image, which is what I thought was being committed to the Docker repo. But when I run docker commit <id>, it says that it is not a valid container ID. I usually get around this by starting the image in a container and then committing that container's ID. But what should I do if the container requires linked containers to run? Running the container can take a long time, especially when the build process is in the run script. If that fails, or requires a linked container to succeed, the process will exit and my container will shut down, which does not let me create my new image. Is there a way to build the Dockerfile and commit to the repo at the same time? Alternatives?
A Dockerfile is designed to provide a completely host-independent way to repeatably build images, without depending on any aspect of the host's configuration. This is why linking is not available in individual build steps: it would make the build depend on the other containers present on the host at build time. Because of this, Dockerfiles are not the only way to build images.
When you must have a host-dependent build environment, use a Dockerfile for the base part (installing dependencies, etc.), then use docker run from a script or configuration management system of your choice to set up the other containers and do the actual build. Once the build is complete, you can commit the resulting container, tag it with a name, and push it to the repo.
To address the question at the top of the post: if you want to give a name to an image produced by a Dockerfile, use docker tag <image-id> <name>.
Committing takes a container and produces an image.
Tagging takes an image and gives it a name.
Pushing takes an image and a name and makes it available to pull later.
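Put together, the flow described in this answer looks roughly like this (all names are placeholders):

docker build -t mybase .                                             # Dockerfile: the host-independent part
docker run --name build-env --link somedb:db mybase ./do-build.sh    # host-dependent build, driven by your script
docker commit build-env myrepo/myapp:latest                          # committing: container -> image (named directly here)
docker tag myrepo/myapp:latest myrepo/myapp:v1                       # tagging: give the image another name
docker push myrepo/myapp:latest                                      # pushing: image + name -> available to pull later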