Docker SCRATCH container can't find files - docker

I have a very simple dockerfile:
FROM scratch
MAINTAINER "aosmith" <a..h#...com>
EXPOSE 6379
ADD redis-server /redis-server
ENTRYPOINT ["/redis-server"]
The docker file is in a folder with a statically compiled copy of redis-server.
The build runs fine, but the container refuses to start:
➜ redis git:(master) ✗ docker run f16
no such file or directory
Error response from daemon: Cannot start container 46be4ed97560cd63fa4f639bed0e25358e807a8229bb3b5a613aa1274e037040: [8] System error: no such file or directory
I've tried various combinations of CMD, EXEC, ADD, and COPY with no luck.
I'm building redis from source like this:
make CFLAGS="-static" EXEEXT="-static" \
MALLOC=libc LDFLAGS="-I/usr/local/include/"
Worth noting that I use basically the exact same Dockerfile for Go projects without any problems.
Any ideas?

The "scratch" image is literally empty and can only be used by technologies like Go, which have near-zero dependencies on their runtime environment.
Try a base image that supplies a set of OS utilities, like bash, etc. For example
FROM ubuntu
MAINTAINER "aosmith" <a..h#...com>
EXPOSE 6379
ADD redis-server /redis-server
ENTRYPOINT ["/redis-server"]
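Before swapping in a larger base image, it may be worth checking whether the redis-server binary is really statically linked; a dynamically linked binary references an interpreter such as /lib64/ld-linux-x86-64.so.2, which does not exist in scratch, and the kernel then reports the misleading "no such file or directory". A quick check on the build host (a sketch, assuming the binary sits next to the Dockerfile):
# Inspect the freshly built binary before putting it into a scratch image.
file redis-server     # should report "statically linked"
ldd redis-server      # "not a dynamic executable" means scratch can run it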

Related

Docker volumes not mounting/linking

I'm on Docker Desktop for Windows. I am trying to use docker-compose as a build container: it builds my code, and the built code should then end up in my local build folder. The build processes are definitely succeeding; when I exec into my container, the files are there. However, nothing happens with my local folder -- no build folder is created.
docker-compose.yml
version: '3'
services:
  front_end_build:
    image: webapp-build
    build:
      context: .
      dockerfile: Dockerfile
    ports:
      - 5000:5000
    volumes:
      - "./build:/srv/build"
Dockerfile
FROM node:8.10.0-alpine
EXPOSE 5000
# add files from local to container
ADD . /srv
# navigate to the directory
WORKDIR /srv
# install dependencies
RUN npm install --pure-lockfile --silent
# build code (to-do: get this code somewhere where we can use it)
RUN npm run build
# install 'serve' and launch server.
# note: this is just to keep container running
# (so we can exec into it and check for the files).
# once we know that everything is working, we should delete this.
RUN npx serve -s -l tcp://0.0.0.0:5000 build
I also tried removing the final line that serves the folder. Then I actually did get a build folder, but that folder was empty.
UPDATE:
I've also tried a multi-stage build:
FROM node:12.13.0-alpine AS builder
WORKDIR /app
COPY . .
RUN yarn
RUN yarn run build
FROM node:12.13.0-alpine
RUN yarn global add serve
WORKDIR /app
COPY --from=builder /app/build .
CMD ["serve", "-p", "80", "-s", "."]
When my volumes aren't set (or are set to, say, some nonexistent source directory like ./build:/nonexistent), the app is served correctly, and I get an empty build folder on my local machine (empty because the source folder doesn't exist).
However when I set my volumes to - "./build:/app" (the correct source for the built files), I not only wind up with an empty build folder on my local machine, the app folder in the container is also empty!
It appears that what's happening is something like
1. Container is built, which builds the files in the builder.
2. Files are copied from builder to second container.
3. Volumes are linked, and then because my local build folder is empty, its linked folder on the container also becomes empty!
I've tried resetting my shared drives credentials, to no avail.
How do I do this?!?!
I believe you are misunderstanding how host volumes work. The volume definition:
./build:/srv/build
in the compose file will mount ./build from the host at /srv/build inside the container. This happens at run time, not during your image build, i.e. after the Dockerfile instructions have been performed. Nothing from the image is copied out to the host, and no files in the directory being mounted on top of will be visible (this is standard behavior of the Linux mount command).
If you need files copied back out of the container to the host, there are various options.
You can perform your steps to populate the build folder as part of the container running. This is common for development. To do this, your CMD likely becomes a script of several commands to run, with the last step being an exec to run your app.
You can switch to a named volume. Docker will initialize these with the contents of the image. It's even possible to create a named bind mount to a folder on your host, which is almost the same as a host mount. There's an example of a named bind mount in my presentation here.
Your container entrypoint can copy the files to the host mount on startup. This is commonly seen on images that will run in unknown situations, e.g. the Jenkins image does this. I also do this in my save/load volume scripts in my example base image.
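A minimal sketch of that entrypoint approach, assuming the image keeps its build output in a non-mounted path (here /srv/build-dist, a hypothetical location, e.g. via RUN mv build build-dist in the Dockerfile) so the host mount at /srv/build cannot hide it:
#!/bin/sh
# docker-entrypoint.sh (hypothetical name): copy the build output baked into
# the image out to the host-mounted directory, then hand off to the main command.
set -e
cp -R /srv/build-dist/. /srv/build/
exec "$@"
The Dockerfile would then declare this script as its ENTRYPOINT and keep the serve command as CMD.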
tl;dr: Volumes aren't mounted during the build stage, only while running a container. You can run something like docker run -v "$(pwd)/build":/srv/build <image id> cp -R /app /srv/build (options before the image name, the command after it; the host path must be absolute) to copy the data to your local disk.
While Docker is building the image, it performs all of its actions in ephemeral containers: each command in your Dockerfile runs in a separate container, each one producing a layer that eventually becomes part of the final image.
The result of this is that the data flow during the build is unidirectional; you are unable to mount a volume from the host into the container. When you run a build you will see Sending build context to Docker daemon, because your local Docker CLI is sending the context (the path you specified after docker build, usually . which represents the current directory) to the Docker daemon (the process that actually does the work). One key point to remember is that the Docker CLI (docker) doesn't actually do any work; it just sends commands to the Docker daemon, dockerd. The build stages shouldn't change anything on your local system; the container is designed to encapsulate the changes into the container image only, giving you a snapshot of the build that you can reuse consistently, knowing that the contents are the same.
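If the goal is simply to get the compiled assets onto the host during development, a common alternative (a sketch, not from the original answer; image tag and paths borrowed from the question's multi-stage example) is to run the build in a throwaway container with the project directory bind-mounted, so the artifacts are written straight to the host:
# Run the build inside a disposable container; node_modules and build/
# end up directly in the current directory on the host.
docker run --rm -v "$(pwd)":/app -w /app node:12.13.0-alpine \
  sh -c "yarn && yarn run build"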

Composer install doesn't install packages when running in Dockerfile

I cannot seem to run composer install in Dockerfile but I can in the container after building an image and running the container.
Here's the command from Dockerfile:
RUN composer require drupal/video_embed_field:1.5
RUN composer install --no-autoloader --no-scripts --no-progress
The output is:
Loading composer repositories with package information
Installing dependencies (including require-dev) from lock file
Nothing to install or update
But after running the container with docker-compose:
...
drupal:
  image: docker_image
  container_name: container
  ports:
    - 8081:80
  volumes:
    - ./container/modules:/var/www/html/web/modules
  links:
    # Link the DB container:
    - db
running docker exec composer install will install the packages correctly:
Loading composer repositories with package information
Installing dependencies (including require-dev) from lock file
Package operations: 1 installs, 0 updates, 0 removals
...
Generating autoload files
I am assuming that the composer.json and composer.lock files are correct because I can run the composer install command in the container without any further effort, but only after the container is running.
Update
Tried combining the composer commands with:
RUN composer require drupal/video_embed_field:1.5 && composer install
Same issue, "Nothing to install or update". Ultimately I would like to continue using separate RUN statements in the Dockerfile to take advantage of Docker layer caching.
Your issue comes from the fact that docker-compose is meant to orchestrate the build and run of multiple Docker containers at the same time, and it does not make very obvious to people starting with Docker what it does behind the scenes.
Behind a docker-compose up there are four steps:
docker-compose build: if needed (i.e. no image exists yet), build the image(s)
docker-compose create: if needed (i.e. no container exists yet), create the container(s)
docker-compose start: start the existing container(s)
docker-compose logs: log the stderr and stdout of the containers
So the thing to spot here is that the actions contained in your Dockerfile are executed at the image-creation step,
while the mounting of folders is executed at the container-start step.
So when you try to use a RUN command (part of the image-creation step) on files like composer.lock and composer.json that are only mounted at the start step, you end up having nothing for composer to install, because your files are not mounted anywhere yet.
If you COPY those files into the image, that may actually get you somewhere, because you will then have the composer files as part of your image.
That said, be careful: the mounted source folder will completely override the mount point, so you could end up expecting a vendor folder and not having it.
What you should ideally do is run composer install from the ENTRYPOINT, which happens as the last step of container startup.
Here is a small developer's analogy: a Docker image is to a Docker container what a class is to an instance of that class (an object).
Your containers are all created from images that were possibly built a long time before.
Most of the steps in your Dockerfile happen at image creation, not at container boot time,
while most of the docker-compose instructions are aimed at automating the container's creation and startup, which includes the mounting of folders.
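A minimal sketch of that ENTRYPOINT approach (the script name docker-entrypoint.sh is hypothetical, and it assumes the mounted project provides composer.json and composer.lock in the working directory):
#!/usr/bin/env bash
# docker-entrypoint.sh (hypothetical): install dependencies against the
# mounted project at container start, then hand off to the main process.
set -e
composer install --no-progress
exec "$@"
The Dockerfile would then wire this up with ENTRYPOINT ["/docker-entrypoint.sh"] and keep the original command as CMD.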
Just noting a docker-compose.yml approach to the issue when the volume mount overwrites the composer files inside the container:
docker-compose.yml
environment:
  PROJECT_DIR: /var/www/html
volumes:
  - ./php/docker/entrypoint/90-composer-install.sh:/docker-entrypoint-init.d/90-composer-install.sh
composer-install.sh
#!/usr/bin/env bash
cd ${PROJECT_DIR}
composer install
This runs composer install after the image build, at container start, via the docker-entrypoint-init.d shell script mechanism.

Confusion while deploying docker-compose image

I've been working on a sample Ruby on Rails application and deploying its Docker image to a Linux server (Ubuntu 14.04).
Here is my Dockerfile:
FROM ruby:2.1.5
RUN apt-get update -qq && apt-get install -y build-essential libpq-dev nodejs
RUN mkdir /rails_docker_demo
WORKDIR /rails_docker_demo
ADD Gemfile /rails_docker_demo/Gemfile
ADD Gemfile.lock /rails_docker_demo/Gemfile.lock
RUN bundle install
ADD . /rails_docker_demo
# CMD bundle exec rails s -p 3000 -b 0.0.0.0
# EXPOSE 3000
docker-compose.yml:
version: '2'
services:
  db:
    image: postgres
  web:
    build: .
    command: bundle exec rails s -p 3000 -b '0.0.0.0'
    image: atulkhanduri/rails_docker_demos
    volumes:
      - .:/rails_docker_demo
    ports:
      - "3000:3000"
    depends_on:
      - db
deploy.sh:
#!/bin/bash
docker build -t atulkhanduri/rails_docker_demo .
docker push atulkhanduri/rails_docker_demo
ssh username@ip-address << EOF
docker pull atulkhanduri/rails_docker_demo:latest
docker stop web || true
docker rm web || true
docker rmi atulkhanduri/rails_docker_demo:current || true
docker tag atulkhanduri/rails_docker_demo:latest atulkhanduri/rails_docker_demo:current
docker run -d --restart always --name web -p 3000:3000 atulkhanduri/rails_docker_demo:current
EOF
Now my problem is that I'm not able to use docker-compose commands like docker-compose up, to run the application server.
When I uncomment the last two lines from the Dockerfile, i.e.,
CMD bundle exec rails s -p 3000 -b 0.0.0.0
EXPOSE 3000
then I'm able to run the server on port 3000, but I get the error could not translate host name "db" to address: Name or service not known (my database.yml has "db" as host). This is because the postgres image is not used, since the docker-compose file is not being used.
EDIT:
Output of docker network ls:
NETWORK ID          NAME                DRIVER              SCOPE
b466c9f566a4        bridge              bridge              local
7cce2e53ee5b        host                host                local
bfa28a6fe173        none                null                local
P.S.: I've searched a lot on the internet but am still not able to use the docker-compose file.
Assumptions
If I am reading what you've done here correctly, my answer assumes the following two things.
You are using docker-compose to run the database container.
You are using plain docker commands (not docker-compose) to start the application server ("web").
First, I would suggest not doing that; it is a lot simpler to use docker-compose for both. However, I'll answer based on the above, assuming that there is some valid reason you cannot use docker-compose to run the "web" container.
About container and network names
When you run the docker-compose command to start the db container, among other things, two things happen.
The container is given a new name, composed of the directory you run the compose setup from, the static name in compose (db), and a number. So let's say you have this all in a directory named myapp; you would have a new container named myapp_db_1. You can see what it is named using docker ps.
A network bridge is created if it didn't already exist, named something like myapp_default - again, named after the directory that the compose setup is inside of.
Connecting to the right network
The problem is that your non-compose container is attached to the default network (probably docker_default), but your db container is attached to myapp_default. The two networks do not know about each other. You need to connect them. It probably makes more sense to tell the web app container to attach to the compose network.
First, get the correct network name. You can see all networks using docker network ls. It might look like this:
$ docker network ls
NETWORK ID          NAME                DRIVER              SCOPE
c1f5764a112b        bridge              bridge              local
175efb89adef        docker_default      bridge              local
5185ff0e1054        myapp_default       bridge              local
Once you have the correct name, update your run command to know about the network using the --network option.
docker run -d --restart always --name web \
-p 3000:3000 --network myapp_default \
atulkhanduri/rails_docker_demo:current
Once it is attached to the proper network, the name "db" should resolve correctly.
If you used docker-compose to start both of them, this would not be necessary (this is one of the things docker-compose just takes care of for you silently).
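If the web container is already running, an alternative (a sketch, not from the original answer) is to attach it to the compose network after the fact instead of recreating it with --network:
# Attach the running "web" container to the compose network
# (network and container names assumed from the example above).
docker network connect myapp_default web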
Getting this to run on your server
In the comments, you mention that you are having some issues with compose on the server. Specifically you said:
Do I need to copy my complete project on the server? Can't I run the application from docker image only? Actually, I've copied docker-compose in server and it throws errors for Gemfile, then I copied Gemfile, then it says it should be a rails app. So I guess I need to copy my complete folder in server. Can you please confirm?
Let's look at some parts of your Dockerfile. I'll add some comments inline.
## Make a new directory, and then make it the current directory
RUN mkdir /rails_docker_demo
WORKDIR /rails_docker_demo
## Copy Gemfile and Gemfile.lock into this directory from outside
ADD Gemfile /rails_docker_demo/Gemfile
ADD Gemfile.lock /rails_docker_demo/Gemfile.lock
## Run the bundle installer, which will install to this directory
RUN bundle install
## Finally, copy everything from the outside local dir to here
ADD . /rails_docker_demo
So, clearly, /rails_docker_demo is your application directory within the container. You've installed a bunch of stuff here, and this will become a part of your image. When you push your image to the registry, then pull it down on the server (as you do in the deploy script), this will all come with it.
Now let's look at (some of) docker-compose.yml.
services:
web:
volumes:
- .:/rails_docker_demo
Here you have defined a volume mount, mounting the current directory (wherever docker-compose.yml lives) as /rails_docker_demo. When you do that, whatever happens to exist on the server is now available in /rails_docker_demo, but this mount undoes all the work from the Dockerfile that I just mentioned above. Instead of having the resources you installed when you built the image, you have only whatever is in the . directory on the server. The mount sits on top of the image's existing /rails_docker_demo directory, hiding its contents and replacing them with whatever is on the server at the moment.
Unless there is a reason you put this mount here, you probably just need to remove that volume mount from docker-compose.yml. You will still need docker-compose.yml on the server, but you should not need the rest of it (aside from the image, of course).
This mount you have done is a useful thing - for development purposes. It would let you use the container to run the application and quickly have code changes show up (without rebuilding the image). But in the case of your deployment, it is just causing trouble.
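As a rough sketch (not part of the original answer), the server-side block of deploy.sh could then drive everything through compose, assuming docker-compose.yml, with the development volume mount removed, has been copied to the server:
# On the server, from the directory containing docker-compose.yml:
docker-compose pull      # fetch the image pushed by the deploy script
docker-compose up -d     # start db and web together on one compose network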
Try moving the EXPOSE above the CMD, e.g.:
FROM ruby:2.1.5
RUN apt-get update -qq && apt-get install -y build-essential libpq-dev nodejs
RUN mkdir /rails_docker_demo
WORKDIR /rails_docker_demo
ADD Gemfile /rails_docker_demo/Gemfile
ADD Gemfile.lock /rails_docker_demo/Gemfile.lock
RUN bundle install
ADD . /rails_docker_demo
EXPOSE 3000
CMD bundle exec rails s -p 3000 -b 0.0.0.0

Host Volumes Not getting mounted on 'Docker-compose up'

I'm using docker-machine and docker-compose to develop a Django app with a React frontend. The volumes don't get mounted in a Debian environment but work properly on OSX and Windows. I've been struggling with this issue for days, so I created a light version of my project that still replicates the issue; you can find it at https://github.com/firetix/docker_bug.
my docker-compose.yml:
django:
  build: django
  volumes:
    - ./django/:/home/docker/django/
My Dockerfile is as follows:
FROM python:2.7
RUN mkdir -p /home/docker/django/
ADD . /home/docker/django/
WORKDIR /home/docker/django/
CMD ["./command.sh"]
When I run docker-compose build everything works properly. But when I run docker-compose up I get
[8] System error: exec: "./command.sh": stat ./command.sh: no such file or directory
I found this question on Stack Overflow, How to mount local volumes in docker machine, and followed the proposed workarounds with no success.
Am I doing something wrong? Why does this work on OSX and Windows but not in a Debian environment? Is there any workaround that works on Debian? Both OSX and Debian have the /Users/ folder as a shared folder when I check the VirtualBox GUI.
This shouldn't work for you on OSX, let alone Debian. Here's why:
When you ADD ./command.sh into /home/docker/django/, the image builds fine, with the file in the correct directory. But when you up the container, you are mounting your local directory "on top of" the one you created in the image, so there is no longer anything there...
I recommend adding command.sh to a different location, e.g. /opt/django/, and changing your Docker command to /opt/django/command.sh.
Or, more simply, something like this; here's the full code:
# Dockerfile
FROM python:2.7
RUN mkdir -p /home/docker/django/
WORKDIR /home/docker/django/
# docker-compose.yml
django:
  build: django
  command: ./command.sh
  volumes:
    - ./django/:/home/docker/django/
I believe this should work. There were some problems with docker-compose versions using relative paths.
django:
  build: django
  volumes:
    - ${PWD}/django:/home/docker/django

Linode/lamp + docker-compose

I want to install the linode/lamp container to work on a WordPress project locally without messing up my machine with all the LAMP dependencies.
I followed this tutorial which worked great (it's actually super simple).
Now I'd like to use docker-compose because I find it more convenient to simply having to type docker-compose up and being good to go.
Here is what I have done:
Dockerfile:
FROM linode/lamp
RUN service apache2 start
RUN service mysql start
docker-compose.yml:
web:
  build: .
  ports:
    - "80:80"
  volumes:
    - .:/var/www/example.com/public_html/
When I do docker-compose up, I get:
▶ docker-compose up
Recreating gitewordpress_web_1...
Attaching to gitewordpress_web_1
gitewordpress_web_1 exited with code 0
Gracefully stopping... (press Ctrl+C again to force)
I'm guessing I need a command argument in my docker-compose.yml but I have no idea what I should set.
Any idea what I am doing wrong?
You cannot start those two processes in the Dockerfile.
The Dockerfile determines what commands are to be run when building the image.
In fact many base images like the Debian ones are specifically designed to not allow starting any services during build.
What you can do is create a file called run.sh in the same folder that contains your Dockerfile.
Put this inside:
#!/usr/bin/env bash
service apache2 start
service mysql start
tail -f /dev/null
This script just starts both services and forces the console to stay open.
You need to put it inside your container, though; this you do via two lines in the Dockerfile. Overall, I'd use this Dockerfile:
FROM linode/lamp
COPY run.sh /run.sh
RUN chmod +x /run.sh
CMD ["/bin/bash", "-lc", "/run.sh"]
This ensures that the file is properly run when firing up the container, so that it stays running and those services actually get started.
What you should also look out for is that port 80 is actually available on your host machine. If you already have anything bound to it, this compose file will not work.
Should this be the case for you (or you're not sure), try changing the port line to something like 81:80 and try again.
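A quick way to check for an existing listener on port 80 (a sketch, assuming a Linux host with iproute2 installed):
# Show listening TCP sockets and the processes that own them.
sudo ss -ltnp | grep ':80 '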
I would like to point you to another resource where a LAMP server is already configured for you; you might find it handy for your local development environment.
You can find it below:
https://github.com/sprintcube/docker-compose-lamp
