I have been following a tutorial on how to deploy docker containers to AWS, which I have managed to do successfully. However, now that I have modified the flask web app with my own code/logic, it never completes building the first service.
My last command:
docker-compose -f docker-compose-prod.yml up -d --build
It starts by printing:
Building feapi
Then nothing happens, sometimes I get:
ERROR: SSL error: ('The write operation timed out',)
How can I debug this, or at least see what is happening behind the scenes? I am not sure what is causing the error. I know docker-compose offers logs, but I am not sure how to use them or whether they are even relevant here.
Sometimes your context contains too many files.
When the docker build starts, it collects all the files in your project folder/context dir, except those you list in .dockerignore.
If you have GBs of files lying around, it can spend minutes collecting them, and the resulting image size will be huge.
In the official docs (https://docs.docker.com/develop/develop-images/dockerfile_best-practices/) see this paragraph:
"Omitting the build context can be useful in situations where your Dockerfile does not require files to be copied into the image, and improves the build-speed, as no files are sent to the daemon."
If you are uncertain about the size of your build context, you can run a plain docker build, which prints the context size it sends, e.g.:
> docker build -t check-context:latest .
Sending build context to Docker daemon 1.086MB
...
docker-compose up --build runs docker build for you.
If you run the docker build command manually, you will see the full output of the build.
Beyond that, docker build just runs the commands in your Dockerfile; it is up to you to put your Dockerfile commands in verbose/debug mode.
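For the question above, a practical next step is to run the same build by hand so you can watch its full output. A rough sketch — the ./feapi context path is an assumption, use whatever build: path your docker-compose-prod.yml actually declares for that service:

# Build only the problematic service through Compose and watch its output
docker-compose -f docker-compose-prod.yml build feapi

# Or run docker build yourself against the same context directory;
# with BuildKit enabled, --progress=plain prints every step uncollapsed
DOCKER_BUILDKIT=1 docker build --progress=plain -t feapi-debug ./feapi

# Check how much data would be sent as the build context; a large number
# here usually means a missing .dockerignore entry (.git, node_modules,
# virtualenvs, data dumps, ...)
du -sh ./feapi

Note that docker-compose logs only shows output from running containers, so it won't help with a build that never finishes.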
Related
I'm trying to learn how to write a Dockerfile. Currently my strategy is as follows:
Guess which commands are correct to write, based on the documentation.
Run sudo docker-compose up --build -d to build a docker container
Wait ~5 minutes for my anaconda packages to install
Find that I made a mistake on step 15, and go back to step 1.
Is there a way to interactively enter the commands for a Dockerfile, or to cache the first 14 successful steps so I don't need to rebuild the entire file? I saw something about docker exec but it seems that's only for running containers. I also want to try and use the same syntax as I use in the dockerfile (i.e. ENTRYPOINT and ENV) since I'm not sure what the bash equivalent is/if it exists.
You can run docker-compose without the --build flag so that you don't rebuild the image every time, although since you are testing the Dockerfile itself you may not have many options here. Docker does cache builds automatically, but only the steps whose inputs haven't changed since the last build; once a step changes, everything after it is rebuilt. There is no way to build an image interactively; Docker doesn't work like that. Lastly, docker exec only runs commands inside a container that was created from an already-built image.
Some references for you: docker cache, Dockerfile best practices.
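To get the most out of that cache with a slow Anaconda install, one common pattern is to copy only the environment file before the install step, so edits to your source code don't invalidate it. A minimal sketch, assuming a conda-based base image, an environment.yml, and an app.py entry point (all assumptions, not taken from the question):

FROM continuumio/miniconda3

WORKDIR /app

# Copy only the environment spec first: this layer, and the slow install
# below it, stay cached as long as environment.yml itself is unchanged
COPY environment.yml .
RUN conda env update -n base -f environment.yml

# Copy the rest of the project afterwards; editing source files only
# invalidates the layers from here down
COPY . .

ENV PYTHONUNBUFFERED=1
ENTRYPOINT ["python", "app.py"]

With this ordering, a typo fixed on "step 15" only reruns the cheap COPY . . and later steps, not the package install.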
This is basically a follow-up question to How to include files outside of Docker's build context?: I'm using large files in all of my projects (several GBs) which I keep on an external drive, only used for development.
I want to COPY or ADD these files to my docker container when building it. The answer linked above allows one to specify a different path to a Dockerfile, potentially extending the build context. I find this impractical, since it would require setting the build context to the system root (?) just to include a single file.
Long story short: Is there any way or workaround to include a file that is far removed from the docker build context?
Three suggestions on things you could try:
include a file that is far removed from the docker build context?
You could construct your own build context by cp (or tar) files on the host into a dedicated directory tree. You don't have to use the actual source tree or your build tree.
rm -rf docker-build
mkdir docker-build
cp -a Dockerfile build/the-binary docker-build
cp -a /mnt/external/support docker-build
docker build ./docker-build
# reads docker-build/Dockerfile, and the files in the
# docker-build directory, but nothing else; only sends
# the docker-build directory to Docker as the build context
large files [...] (several GBs)
Docker doesn't deal well with build contexts this large. In the past I've at least seen docker build take a long time just on the step of sending the build context to itself, and docker push and docker pull have network issues when trying to send the gigabyte+ layer around.
It's a little hacky and breaks the "self-contained image" model a little bit, but you can provide these files as a Docker bind-mount instead of including them in the image. Your application needs to know what to do if the data isn't there. When you go to deploy the application, you also need to separately distribute the files alongside the Docker image and other deployment artifacts.
docker run \
-v /mnt/external/support:/app/support \
... \
the-image-without-the-support-files
only used for development
Potentially you can get away with not using Docker at all during this phase of development. Use a local source tree and local development tools; run your unit tests against these large test fixtures as needed. Build a Docker image only when you're about to run pre-commit integration tests; that may be late enough in the development cycle that you don't need these files.
I think the main thing you are worried about is that you do not want to send all the files in a directory to the docker daemon while it builds the image.
When the directory is that big (several GBs), it takes a long time to build an image.
If the requirement is just to use those files while building something inside Docker, you can mount them into the container instead.
A tricky way
Run a container from a base image and mount the directories inside it: docker run -d -v local-path:container-path BASE_IMAGE
Get inside the container docker exec -it CONTAINER_ID bash
Run build step ./build-something.sh
Create image from the running container docker commit CONTAINER_ID
Tag the image: docker tag IMAGE_ID tag:v1 (you can get the image ID from the previous command).
From a long-term perspective this method may seem tedious, but if you only need to build the image once or twice you can try it; the whole flow is sketched below.
I tried this for one of my docker images, because I wanted to avoid sending a large number of files to the docker daemon during the image build.
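Put together, the flow looks roughly like this (the image names, the mounted path, and build-something.sh are placeholders):

# Start a long-running container from a base image with the big directory mounted in
docker run -d --name builder -v /mnt/external/support:/support ubuntu:20.04 sleep infinity

# Run the build steps inside it; write the output somewhere on the container
# filesystem (e.g. /opt/app), because mounted paths are NOT included in the commit
docker exec -it builder bash -c "/support/build-something.sh /opt/app"

# Snapshot the container filesystem as an image and tag it
IMAGE_ID=$(docker commit builder)
docker tag "$IMAGE_ID" myapp:v1

# Clean up the helper container
docker rm -f builder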
The COPY command takes source and destination values;
just specify the full absolute path to your hard drive's mount point as the src directory:
COPY /absolute_path/to/harddrive /container/path
I have a really simple web application consisting of these containers:
Frontend website (Nuxt.js - node app)
Backend API (PHP, Symfony)
MySQL
Every container has its own Dockerfile, and I can run them all together with Docker Compose. It's really nice and I like the simplicity.
There is a deploy script on my server. It clones the Git monorepo and runs docker-compose:
DIR=$(dirname $(readlink -f $0))
rm -rf $DIR/app
git clone git@bitbucket.org:adam/myproject.git $DIR/app
cd $DIR/app && \
docker-compose down --remove-orphans && \
docker-compose up --build -d
But this solution is really slow and causes ~3 minutes of downtime. For this project I can accept a few seconds of downtime; it's not fatal, and I don't need true zero downtime. But 3 minutes is not acceptable.
The most time-consuming part is "npm build" inside the containers, which must be run after every change.
What can I do better? Are Swarm or Kubernetes really the only solution? Can I build the containers while the old app is still running, and after the build just stop the old ones and run the new ones?
Thanks!
If you can structure things so that your images are self-contained, then you can get a fairly short downtime.
I would recommend using a unique tag for your images. A date stamp works well; you mention you have a monorepo, so you can use the commit ID in that repo for your image tag too. In your docker-compose.yml file, use an environment variable for your image names:
version: '3'
services:
frontend:
image: myname/frontend:${TAG:-latest}
ports: [...]
et: cetera
Do not use volumes: to overwrite the code in your images. Do have your CI system test your images as built, running the exact image you're getting ready to deploy; no bind mounts or extra artificial test code. The question mentions "npm build inside containers"; run all of these build steps during the docker build phase and specify them in your Dockerfile, so you don't need to run these at deploy time.
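As an illustration of running the build steps during docker build, a multi-stage Dockerfile for the Nuxt frontend might look roughly like this; the node version, exposed port, and npm scripts are assumptions about your project, not requirements:

# Build stage: npm install and npm run build happen at image build time
FROM node:12 AS build
WORKDIR /app
COPY package*.json ./
RUN npm ci
COPY . .
RUN npm run build

# Runtime stage: only the built app is carried forward
FROM node:12
WORKDIR /app
COPY --from=build /app ./
EXPOSE 3000
CMD ["npm", "run", "start"]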
When you have a new commit in your repo, build new images. This can happen on a separate system; it can happen in parallel with your running system. If you use a unique tag per image then it's more obvious that you're building a new image that's different from the running image. (In principle you can use a single ...:latest tag but I wouldn't recommend it.)
# Choose a tag; let's pick something based on a timestamp
export TAG=20200117.01
# Build the images
docker-compose build
# Push the images to a repository
# (Recommended; required if you're building somewhere
# other than the deployment system)
docker-compose push
Now you're at a point where you've built new images, but you're still running containers based on old images. You can tell Docker Compose to update things now. If you docker-compose pull images up front (or if you built them on the same system) then this just consists of stopping the existing containers and starting new ones. This is the only downtime point.
# Name the tag you want to deploy (same as above)
export TAG=20200117.01
# Pre-pull the images
docker-compose pull
# ==> During every step up to this point the existing system
# ==> is running undisturbed
# Ask Compose to replace the existing containers
# ==> This command is the only one that has any downtime
docker-compose up -d
(Why is the unique tag important? Say a mistake happens, and build 20200117.02 has a critical bug. It's very easy to set the tag back to the earlier 20200117.01 and re-run the deploy, so roll back the deployed system without doing a git revert and rebuilding the code. If you're looking at cluster managers like Kubernetes, the changed tag value is a signal to a Kubernetes Deployment object that something has updated, so this triggers an automatic redeployment.)
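Rolling back then follows the same pattern as the deploy above: point TAG back at the previous build and redeploy.

# Point back at the last known-good build and redeploy it
export TAG=20200117.01
docker-compose pull
docker-compose up -d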
The only problem really was running docker-compose down before docker-compose build. I deleted the down command and the downtime is a few seconds now. I had thought that build automatically shut down the running containers before building; I don't know why I assumed that. Thanks Noé for the idea! I'm an idiot.
While I do think that switching to Kubernetes (or maybe Docker Swarm, which I don't have experience with) would be the best option, yes, you can build your docker images first and then restart.
You just need to run the docker-compose build command. See below:
DIR=$(dirname $(readlink -f $0))
rm -rf $DIR/app
git clone git@bitbucket.org:adam/myproject.git $DIR/app
cd $DIR/app && \
docker-compose build && \
docker-compose down --remove-orphans && \
docker-compose up -d
This long downtime can come from multiple things:
Your application ignores the stop signal; docker-compose waits for containers to terminate before killing them. Check that your containers exit cleanly without waiting for the kill signal (a quick check is sketched after this list).
Your Dockerfile is ordered badly. Docker has a built-in cache for every step, but if an earlier step changes it has to redo every step after it. Look carefully at where you copy files; it is often this that breaks the cache.
Run docker-compose build before taking the containers down. Be careful about mounted volumes: if Docker can't get the context it will fail.
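For the first point, a quick way to check whether slow container shutdown is part of the problem, using standard Compose commands:

# Time how long Compose spends stopping the stack; the default grace period
# before SIGKILL is 10 seconds per container
time docker-compose stop

# Shorten the grace period to confirm the containers themselves are the slow part
docker-compose stop -t 2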
I would like to run and test parse-dashboard via Docker, as documented in the readme.
I am getting the error message "Parse Dashboard can only be remotely accessed via HTTPS." Normally you can bypass this by adding the line "allowInsecureHTTP": true to your parse-dashboard-config.json file, but even though I have added this option to my config file, the same message is displayed.
I tried to edit the config file in the Docker container, whereupon I discovered that none of my local file changes were present in the container. It appeared as though my project was an unmodified version of the code from the GitHub repository.
Why do the changes that I make to the files in my working directory on the host machine not show up in the Docker container?
But what gets uploaded to my Docker image is, in fact, the config file from my master branch.
It depends:
what that "docker" is: the official DockerHub or a private docker registry?
how it is uploaded: do you build an image and then use docker push, or do you simply do a git push back to your GitHub repo?
Basically, if you want to see the right files in the Docker container that you run, you must be sure to run an image you have built (docker build) from a Dockerfile that COPYs the files from your current workspace.
If you do a docker build from a folder where your Git repo is checked out at the right branch, you will get an image with the right files.
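As a rough illustration (the image name, tag, and branch are placeholders), the sequence would be:

# From the checkout that contains your modified config file
git checkout my-branch
docker build -t my-parse-dashboard:dev .

# Run the image you just built, not one pulled from a registry
docker run -d -p 8080:4040 my-parse-dashboard:dev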
The Dockerfile from the parse-dashboard repository you linked uses ADD . /src. This is a bad practice (because of the problems you're running into). Here are two different approaches you could take to work around it:
Rebuild the Image Each Time
Any time you change anything in the working directory (which the Dockerfile ADDs to /src), you need to rebuild for the change to take effect. The exception to this is src/Parse-Dashboard/parse-dashboard-config.json, which we'll mount in with a volume. The workflow would be nearly identical to the one in the readme:
$ docker build -t parse-dashboard .
$ docker run -d -p 8080:4040 -v "$(pwd)"/src/Parse-Dashboard/parse-dashboard-config.json:/src/Parse-Dashboard/parse-dashboard-config.json parse-dashboard
Use a Volume
If we're going to use a volume to do this, we don't even need the custom Dockerfile shipped with the project. We'll just use the official Node image, upon which the Dockerfile is based.
In this case, Docker will not run the build process for you, so you should do it yourself on the host machine before starting Docker:
$ npm install
$ npm run build
Now we can start the generic Node Docker image and ask it to serve our project directory:
$ docker run -d -p 8080:4040 -v "$(pwd)":/src node:4.7.2 bash -c "cd /src && npm run dashboard"
Changes will take effect immediately because you mount ./ into the image as a volume. Because it's not done with ADD, you don't need to rebuild the image each time. We can use the generic node image because if we're not ADDing a directory and running the build commands, there's nothing our image will do differently than the official one.
How do I copy files from a docker container to the host machine during docker build command?
As a part of building my docker image for my app, I'm running some tests inside it, and I would like to copy the output of the test run into the host (which is a continuous integration server) to do some reporting.
I wouldn't run the tests during the build; that will only increase the size of your image. I would recommend building the image and then running it, mounting a host volume into the container and changing the working directory to the mount point.
docker run -v `pwd`/results:/results -w /results -t IMAGE test_script
There is no easy way to do this. Volumes are only created at run time. You can grab files out of the Docker filesystem (e.g. mine is under /var/lib/docker/devicemapper/mnt/CONTAINER_ID/rootfs/PATH_TO_FILE), though there is no good way to figure out when your test process is complete. You could create a marker file when it's finished and watch for it with inotify, but this is ugly.
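A somewhat less fragile variant of the same idea (image name is a placeholder; test_script and /results mirror the answer above): run the tests in a finished container rather than during the build, then copy the results out of the stopped container with docker cp:

# Build the image without running the tests
docker build -t myapp:test .

# Run the test script in a named container; this blocks until it exits
docker run --name test-run myapp:test test_script

# Copy the results out of the (now stopped) container onto the CI host
docker cp test-run:/results ./results
docker rm test-run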