I'm not sure how to ask this question because I don't fully understand the problem. Also, I'm not a Docker expert, and this may be a stupid issue.
I have a Rails project with docker-compose, and there are two situations. First, I'm able to build and run the app with docker-compose up and everything looks fine, but the code does not reload when I change it. Second, when I add a volume in docker-compose.yml, docker-compose up exits because the Gemfile can't be found: the mounted folder is empty.
Dockerfile and docker-compose.yml extract, I renamed some stuff:
# File: Dockerfile.app
FROM ruby:2.5-slim-stretch
RUN apt-get update -qq && apt-get install -y redis-tools
RUN apt-get install -y autoconf bison build-essential #(..etc...)
RUN echo "gem: --no-document" > ~/.gemrc
RUN gem install bundler
ADD . /docker-projects
WORKDIR /docker-projects/project1/core
ENV BUNDLE_APP_CONFIG /docker-projects/project1/core/.bundle
# RUN already wraps commands in /bin/sh -c; the extra shell swallowed the arguments.
# --jobs needs a count (4 here is just an example).
RUN bundle install --local --jobs 4
# File: docker-compose.yml
app:
  build: .
  dockerfile: Dockerfile.app
  command: /bin/sh -c "bundle exec rails s -p 8080 -b 0.0.0.0"
  ports:
    - "8080:8080"
  expose:
    - "8080"
  volumes:
    - .:/docker-projects
  links:
    - redis
    - mysql
    - memcached
My 'docker-projects' is a big project made up of different Rails engines and gem libraries. We manage this with the 'repo' tool.
Running docker-compose build app works fine, and I can see the bundle install logs. Then docker-compose up app exits with the error 'Gemfile not found'.
It was working with no problem until I decided to recover 50 GB of space from Docker containers and rebuild everything. I'm not sure what changed.
If I add the volume (docker-compose), the mounted volume is empty. If I remove the volume (docker-compose), the code does not reload as it did before.
Versions I'm using:
Docker version 18.09.7, build 2d0083d
OSX 10.14.5
docker (through brew) with xhyve driver
I tried with a new basic docker-compose project and I didn't have this issue. Any ideas? I'll keep looking.
Thanks.
Ok, I found the problem. This is the command I was using to generate my docker-machine:
docker-machine create default \
--driver xhyve \
--xhyve-cpu-count 4 \
--xhyve-memory-size 12288 \
--xhyve-disk-size 256000 \
--xhyve-experimental-nfs-share \
--xhyve-boot2docker-url https://github.com/boot2docker/boot2docker/releases/download/v18.06.1-ce/boot2docker.iso
I probably did an upgrade in the middle, because it stopped working. docker-machine showed some warnings about NFS conflicts with my existing /etc/exports definitions, but the machine was created.
After searching around, I realized I had to rewrite the command above like this:
docker-machine create default \
--driver=xhyve \
--xhyve-cpu-count=4 \
--xhyve-memory-size=12288 \
--xhyve-disk-size=256000 \
--xhyve-boot2docker-url="https://github.com/boot2docker/boot2docker/releases/download/v18.06.1-ce/boot2docker.iso" \
--xhyve-experimental-nfs-share=/Users \
--xhyve-experimental-nfs-share-root "/"
Besides the '=', the difference is in the *-nfs-share options. I commented out my /etc/exports entries to avoid the conflict warning and recreated the machine. Now it works as it did before.
The option --xhyve-experimental-nfs-share-root defaults to "/xhyve-nfsshares", so I changed it to "/", which is where my files live.
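If you hit something similar, a quick way to confirm the share actually mounted (a sketch, assuming the machine is named default as above) is to look for the NFS mount inside the VM:
# list NFS mounts inside the docker-machine VM
docker-machine ssh default "mount | grep nfs"
# the shared host path (e.g. /Users) should appear here; if it doesn't,
# bind mounts into containers will show up empty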
Related
I've got a docker-compose.yml file that mounts the current directory as a volume to /app in a container. The .yml looks something like this:
version: "3"
services:
app:
build:
context: .
dockerfile: docker/Dockerfile-commandbox
volumes:
- .:/app
ports:
- "8080:8080"
environment:
- TZ=${TIMEZONE-America/Los_Angeles}
Pretty basic.
Now, when I ssh into that container and navigate to the /app directory, I can see the ./wwwroot folder, but its contents are empty, despite not being empty on my host machine. A directory listing of the wwwroot folder in the container shows no results, whereas on the host the folder has lots of content.
What would be causing the container to not be able to see the contents of the wwwroot folder?
The Dockerfile-commandbox file is pretty straightforward as well.
FROM ortussolutions/commandbox:4.8.0
RUN DEBIAN_FRONTEND=noninteractive apt-get update && apt-get install -y \
net-tools \
tzdata \
vim \
&& rm -rf /var/lib/apt/lists/*
COPY scheduled/scheduled-tasks.cfm /app/
WORKDIR /app
RUN box server start cfengine=lucee#5.3.7.47 port=8080 serverHomeDirectory=/root/serverHome host=0.0.0.0 openbrowser=false saveSettings=false heapSize=4096 minHeapSize=4096 \
&& box config set server.defaults.app.cfengine=lucee#5.3.7.47 server.defaults.web.AJP.enable=true \
&& curl -sS http://localhost:8080/scheduled-tasks.cfm \
&& box server stop
UPDATE
To be clear, there are other files in the directory structure and each one is visible, with its contents, from the container.
This must have been a glitch in Docker. On 12/18, release 3.0.2 became available. Installing the update and bringing up the containers made the issue go away. To confirm, I rolled back to 3.0.1 and witnessed the issue again. I also did a full purge of my containers by running docker system prune -a and rebuilding everything from scratch.
As of Docker Desktop Community edition for Mac 3.0.2, this is no longer an issue.
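For anyone debugging something similar, it helps to compare what Docker thinks it mounted with what the container actually sees (a sketch; the service name app matches the compose file above):
# show the mounts Docker recorded for the running container
docker inspect --format '{{ json .Mounts }}' $(docker-compose ps -q app)
# list the folder inside the container, then on the host
docker-compose exec app ls -la /app/wwwroot
ls -la ./wwwroot
# if the container listing is empty while the host listing is not, the bind
# mount itself (the file-sharing/VM layer) is broken, not the compose file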
I have a simple Dockerfile
FROM python:3.8-slim-buster
RUN apt-get update && apt-get install -y \
curl \
gcc \
make \
python3-psycopg2 \
postgresql-client \
libpq-dev
RUN mkdir -p /var/www/myapp
WORKDIR /var/www/myapp
COPY . /var/www/myapp
RUN chmod 700 ./scripts/*.sh
And an associated docker-compose file
version: "3"
volumes:
postgresdata:
services:
myapp:
image: ralston3/myapp_api:prod-latest
tty: true
command: /bin/bash -c "/var/www/myapp/scripts/myscript.sh && echo 'hello world'"
ports:
- 8000:8000
volumes:
- .:/var/www/myapp
environment:
SOME_ENV_VARS=SOME_VARIABLE
# ... more here
depends_on:
- redis
- postgresql
# ... other docker services defined below
When I run docker-compose up via:
docker-compose -f /path/to/docker-compose.yml up
My myapp container/service fails with myapp_myapp_1 exited with code 127, along with another error: myapp_1 | /bin/sh: 1: /var/www/myapp/scripts/myscript.sh: not found
Further, if I exec into the myapp container via docker exec -it {CONTAINER_ID} /bin/bash, I can clearly see that all of my files are there. I can literally run /var/www/myapp/scripts/myscript.sh and it works fine.
However, there seems to be some issue with docker-compose (which could totally be my mistake). I'm just confused as to how I can exec into the container and clearly see the files there, while docker-compose exits with 127 saying "No such file or directory".
You are bind mounting the current directory into /var/www/myapp, so your local directory may be "hiding/overwriting" the container directory. Try removing the volumes declaration for your myapp service; if that works, you know the bind mount is causing the issue.
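One way to test that theory without editing the compose file (a sketch; the image tag comes from the compose file above):
# what the image itself contains, without any bind mount
docker run --rm ralston3/myapp_api:prod-latest ls -la /var/www/myapp/scripts
# what the service sees with the bind mount applied
docker-compose run --rm myapp ls -la /var/www/myapp/scripts
# if the listings differ, the bind mount of . is masking the files
# baked into the image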
Unrelated to your question, but a problem you will also encounter: you're installing Python a second time, above and beyond the version pre-installed in the python Docker image.
Either switch to debian:buster as the base image, or don't bother installing anything with apt-get and instead just pip install your dependencies like psycopg2.
See https://pythonspeed.com/articles/official-python-docker-image/ for explanation why you don't need to do this.
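In that spirit, a slimmer variant of the Dockerfile might look like this (a sketch, assuming psycopg2-binary works for your use case; it bundles its own libpq and needs no compiler):
FROM python:3.8-slim-buster
WORKDIR /var/www/myapp
# no apt-get toolchain needed; the wheel ships prebuilt binaries
RUN pip install --no-cache-dir psycopg2-binary
COPY . /var/www/myapp
RUN chmod 700 ./scripts/*.sh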
In my case there were two stages: builder and runner.
I was producing an executable in the builder stage and running that executable on the alpine image in the runner stage.
My mistake was that I didn't use the Alpine variant for the builder. For example, I used golang:1.20, but when I used golang:1.20-alpine the problem went away. (A binary built against glibc reports "not found" on musl-based Alpine, because the dynamic loader it references doesn't exist there.)
Make sure you use the correct version and tag!
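A minimal sketch of the matching-stages pattern (image names and paths are illustrative):
# builder and runner are both musl-based, so the binary's loader exists at runtime
FROM golang:1.20-alpine AS builder
WORKDIR /src
COPY . .
RUN go build -o /bin/app .
FROM alpine:3.18
COPY --from=builder /bin/app /usr/local/bin/app
ENTRYPOINT ["app"]
Alternatively, keep golang:1.20 as the builder and build with CGO_ENABLED=0 to get a static binary that also runs on Alpine.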
I need help with Docker.
Let's say I have a docker-compose.yml, version 3, with Nginx+PHP. How do I add the image vitr/casperjs so I can call it from PHP like
exec('casperjs --version', $output);
?
Any help is appreciated.
UPDATED:
It looks like the correct answer would be: it is impossible.
You need to put PHP and CasperJS (and PhantomJS as well) in the same container to get them to work together. It would be nice if someone could prove me wrong and show a better way to do it. Here is something like a working example:
FROM nanoninja/php-fpm
ENV PHANTOMJS_VERSION=phantomjs-2.1.1-linux-x86_64
ENV PHANTOMJS_DIR=/app/phantomjs
RUN apt-get update -y
RUN apt-get install -y apt-utils libfreetype6-dev libfontconfig1-dev wget bzip2
RUN wget --no-check-certificate https://bitbucket.org/ariya/phantomjs/downloads/${PHANTOMJS_VERSION}.tar.bz2
RUN tar xvf ${PHANTOMJS_VERSION}.tar.bz2
RUN mv ${PHANTOMJS_VERSION}/bin/phantomjs /usr/local/bin/
RUN rm -rf phantom*
RUN mkdir -p ${PHANTOMJS_DIR}
RUN echo '"use strict"; \n\
console.log("Hello, world!"); + \n\
console.log("using PhantomJS version " + \n\
phantom.version.major + "." + \n\
phantom.version.minor + "." + \n\
phantom.version.patch); \n\
phantom.exit();' \
> ${PHANTOMJS_DIR}/script.js
RUN apt-get update -y && apt-get install -y \
git \
python \
&& rm -rf /var/lib/apt/lists/*
RUN git clone https://github.com/n1k0/casperjs.git
RUN mv casperjs /opt/
RUN ln -sf /opt/casperjs/bin/casperjs /usr/local/bin/casperjs
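To sanity-check the image after building (the tag php-casper is just an example; the base image defines its own entrypoint, so override it explicitly):
docker build -t php-casper .
docker run --rm --entrypoint casperjs php-casper --version
docker run --rm --entrypoint phantomjs php-casper /app/phantomjs/script.js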
Q: How do I write docker-compose.yml so I can access the daemon's containers from PHP?
A: You could share Docker's Unix domain socket to access the daemon's containers.
Something like follows:
docker-compose.yml:
version: '3'
services:
  app:
    image: ubuntu:16.04
    privileged: true
    volumes:
      - /usr/bin/docker:/usr/bin/docker
      - /var/run/docker.sock:/var/run/docker.sock
      - /usr/lib/x86_64-linux-gnu/libltdl.so.7:/usr/lib/x86_64-linux-gnu/libltdl.so.7
    command: docker run --rm vitr/casperjs casperjs --version
test:
# docker-compose up
WARNING: Found orphan containers (abc_plop_1) for this project. If you removed or renamed this service in your compose file, you can run this command with the --remove-orphans flag to clean it up.
Recreating abc_app_1 ... done
Attaching to abc_app_1
app_1 | 1.1.4
abc_app_1 exited with code 0
You can see 1.1.4 was printed by executing the command docker run --rm vitr/casperjs casperjs --version in the app container.
This is just an example; you can call docker run --rm vitr/casperjs casperjs --version from your own PHP container instead of ubuntu:16.04, still using exec in the PHP code and getting the output.
Updated: (2018/11/05)
First, I think some concepts need to be aligned:
-d: this means starting a container in detached mode, not daemon mode. In Docker, when we talk about a daemon, we mean the Docker daemon, which accepts connections from the Docker CLI, see here.
--rm: this just deletes the temporary container after use; you can also leave it out.
The difference between using -d and not using it:
With -d: the container runs in detached mode. This means that even while the container is running, the docker run CLI command exits at once and shows you a container id; you will see no logs, like this:
# docker run -d vitr/casperjs casperjs --version
d8dc585bc9e3cc577cab15ff665b98d798d95bc369c876d6da31210f625b81e0
Without -d: the CLI command does not exit until the container's command finishes, so you can see the output of the command, like this:
# docker run vitr/casperjs casperjs --version
1.1.4
So, since your requirement is to get the output of casperjs, you have to run without -d.
If you accept the above concepts, then you can go on to a workable example:
folder structure:
abc
├── docker-compose.yml
└── index.php
docker-compose.yml:
version: '3'
services:
  phpfpm:
    container_name: phpfpm
    image: nanoninja/php-fpm
    entrypoint: php index.php
    privileged: true
    volumes:
      - .:/var/www/html
      - /usr/bin/docker:/usr/bin/docker
      - /var/run/docker.sock:/var/run/docker.sock
      - /usr/lib/x86_64-linux-gnu/libltdl.so.7:/usr/lib/x86_64-linux-gnu/libltdl.so.7
index.php:
<?php
exec('docker run vitr/casperjs casperjs --version', $output);
print_r($output);
test:
~/abc# docker-compose up
Starting phpfpm ... done
Attaching to phpfpm
phpfpm | Array
phpfpm | (
phpfpm | [0] => 1.1.4
phpfpm | )
phpfpm exited with code 0
You can see 1.1.4 printed through PHP. Note that privileged and the volumes are settings that have to be in place.
I'm trying to create a custom Docker image in order to use it as a build image with AWS CodeBuild. It works fine if I just run docker build against the Dockerfile with the set-up environment. But now I need to add a Postgres instance to run the tests against, so I thought using docker-compose would do the trick. However, I'm failing to figure out how to make it work. It seems like the static part of the composition (the image from the Dockerfile) just stops right away when I try docker-compose up, since there is no entry point. At this point I can connect to the db instance by running docker-compose run db psql -h db -U testdb -d testdb. But when I build it and feed it to the script provided by AWS, it runs fine until my tests try to reach the DB server. That's where it fails with a timeout, as if there were no db instance.
Configs look like this:
version: '3.7'
services:
  api-build:
    tty: true
    build: ./api_build
    image: api-build
    depends_on:
      - db
  db:
    image: postgres:10-alpine
    restart: always
    environment:
      POSTGRES_USER: testdb
      POSTGRES_PASSWORD: testdb
And Dockerfile under ./api_build:
FROM alpine:3.8
FROM ruby:2.3-alpine as rb
RUN apk update && apk upgrade && \
echo #edge http://nl.alpinelinux.org/alpine/edge/community >> /etc/apk/repositories && \
echo #edge http://nl.alpinelinux.org/alpine/edge/main >> /etc/apk/repositories
RUN apk add --no-cache \
alpine-sdk \
tzdata \
libxml2-dev \
libxslt-dev \
libpq \
postgresql-dev \
elixir \
erlang
UPDATE: I just realized that docker-compose build only builds parts of the composition when needed (e.g. a Dockerfile was updated). So does that mean there's no way to create a single image using docker-compose? Or am I doing something very wrong?
Since there are no answers, I'll try to answer it myself. I'm not sure if it's going to be useful, but I found out that I had some misconceptions about Docker which prevented me from seeing a solution, or the lack of one.
1) What I didn't realize is that docker-compose is used for the orchestration of container compositions; it cannot be built into a single image that contains all the services you need.
2) Multi-stage builds sounded exciting and a bit magical until I figured out that every subsequent stage starts its image from scratch. The only thing you can do is copy some files from previous stages (if they're aliased with AS). It's still cool, but manually copying an installation with hundreds of files might (and will) become a nightmare.
3) Docker is designed to run only one process inside a container, but that doesn't mean it can't run multiple processes. So the solution for my problem was using a supervisor, S6 in particular, which is said to be lightweight, exactly what I needed with tiny Alpine images.
I ended up deploying s6-overlay from just-containers:
RUN curl -L -s https://github.com/just-containers/s6-overlay/releases/download/v1.21.4.0/s6-overlay-amd64.tar.gz \
| tar xvzf - -C /
ENTRYPOINT [ "/init" ]
It provides /etc/services.d directory where service scripts go. For example for postgresql, the minimal example would be (in /etc/services.d/postgres/run):
#!/usr/bin/execlineb -P
s6-setuidgid postgres
postgres -D /usr/local/pgsql/data
Pretty much that's it.
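For completeness, the application itself can be supervised the same way (a sketch; the path and binary name are hypothetical, in /etc/services.d/api/run):
#!/usr/bin/execlineb -P
# change into the app directory and exec the server in the foreground
cd /app
./api-server
s6 restarts any service whose run script exits, so both postgres and the app stay up inside a single container.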
I'm trying to setup remote debugging to a Docker Debian container, to debug a Ruby application on my Windows laptop.
This is the post that lead me in this direction:
https://intellij-support.jetbrains.com/hc/en-us/community/posts/207649545-Use-RubyMine-and-Docker-for-development-run-and-debug-before-deployment-for-testing-
I have the Ruby app and SSHD running in the container, though the recipe I found for configuring SSHD isn't fully compatible with the Linux distro that the Ruby image is based on.
I based my SSHD configuration on this Docker documentation page: https://docs.docker.com/engine/examples/running_ssh_service/
My image is based on the ruby:2.2 image from Docker Hub, which uses Debian 8 as opposed to Ubuntu 16.04 used in the SSHD example above.
I can get to an SSH prompt, but I can't log in with the screencast password for root that's set in the Dockerfile.
I'm open to whatever solution works, either properly enabling root login or adding a new user with the correct permissions to allow remote debugging. I'm just curious which path would be most straightforward in the Debian context. And if it's creating a new user, what permissions do they need?
Also, to be clear I'm treating this as a trial run and will obviously make sure to strip out the SSHD functionality in some way when I go to deploy the app for any context outside of development.
Thanks in advance for your help.
This is my current dockerfile
FROM ruby:2.2
RUN apt-get update && apt-get install -y \
build-essential \
libpq-dev \
nodejs \
openssh-server
RUN mkdir /var/run/sshd
RUN echo 'root:screencast' | chpasswd
RUN sed -i 's/PermitRootLogin prohibit-password/PermitRootLogin yes/' /etc/ssh/sshd_config
# SSH login fix. Otherwise user is kicked off after login
RUN sed 's#session\s*required\s*pam_loginuid.so#session optional pam_loginuid.so#g' -i /etc/pam.d/sshd
ENV NOTVISIBLE "in users profile"
RUN echo "export VISIBLE=now" >> /etc/profile
RUN mkdir /MyApp
WORKDIR /MyApp
ADD Gemfile /MyApp/Gemfile
ADD Gemfile.lock /MyApp/Gemfile.lock
RUN bundle install
ADD . /MyApp
and this is my docker-compose.yml
version: '2'
services:
  web:
    build: .
    command: /CivilService/docker-dev-start.sh
    volumes:
      - .:/CivilService
    ports:
      - "3000:3000"
      - "3022:22"
The docker-dev-start.sh looks like this
#!/bin/bash
# start the SSH server for connecting the debugger in development
/usr/sbin/sshd -D &
bundle exec rails s -p 3000 -b '0.0.0.0'
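With the container up, the SSHD side can be smoke-tested from the host before wiring up the debugger (port 3022 maps to the container's 22 in the compose file above):
ssh root@localhost -p 3022
# password: screencast, as set by the chpasswd line in the Dockerfile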
Different distros can have slightly different SSH configs, so replacing specific strings might not work.
To cover any variant of the PermitRootLogin line, use:
RUN sed -i 's/PermitRootLogin .*/PermitRootLogin yes/' /etc/ssh/sshd_config
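After rebuilding, you can verify the effective setting from inside the container (a quick check; sshd -T prints the resolved server configuration):
docker-compose exec web /usr/sbin/sshd -T | grep -i permitrootlogin
# expected output: permitrootlogin yes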