Docker container to execute a script on the host

I run a node server (in a Docker container) to listen for github webhooks, so I can redeploy when my master gets updated. My home directory contains:
production/
app/webhooks/
docker-compose-webhooks.yml
deploy.sh
~/docker-compose-webhooks.yml
version: '3'
services:
  webhooks:
    image: node:10.11.0-alpine
    container_name: abis-webhooks
    working_dir: /webhooks
    environment:
      NODE_ENV: production
      PORT: 5050
      GITHUB_SECRET: ${GITHUB_SECRET}
    expose:
      - '5050'
    volumes:
      - ./app/webhooks:/webhooks
    command: /bin/sh -c 'npm install --production; npm start'
~/deploy.sh
#!/bin/bash
cd ~/production && git pull origin master
...
...
What's the easiest way to call deploy.sh, which is obviously located outside the container where node sits?
I took this from another post and added it in node:
exec(`docker run --rm -v /usr/bin:/usr/bin --privileged -v $(pwd)/deploy.sh:/deploy.sh ubuntu bash /deploy.sh`)
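For a command like that to work from inside the webhooks container, the container needs access to the host's Docker daemon and a docker CLI of its own. A rough sketch of the relevant compose changes, assuming the default socket path and that the Alpine repositories provide a docker CLI package (the package name is an assumption):
version: '3'
services:
  webhooks:
    image: node:10.11.0-alpine
    working_dir: /webhooks
    volumes:
      - ./app/webhooks:/webhooks
      # assumed: mounting the socket lets docker commands in this container reach the host daemon
      - /var/run/docker.sock:/var/run/docker.sock
    command: /bin/sh -c 'apk add --no-cache docker-cli; npm install --production; npm start'
With the host's socket mounted, the exec() above starts a sibling container on the host, so the -v source path for deploy.sh must be a path that exists on the host rather than $(pwd) inside the node container.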

Related

how to run git inside a container

I have the following compose file and I need to run multiple commands on the container. I need the container to do a git pull to grab the container's config. This is a Debian build, so I have tried to install git and then run the git command. When I do this, the container constantly restarts.
---
version: "3"
services:
  kamailio:
    image: kamailio/kamailio:5.2.8-stretch
    restart: unless-stopped
    container_name: kamailio
    #environment:
    command:
      - bash
      - -c
      - >
        apt-get install git -y;
        cd /tmp;
        git clone https://github.com/dOpensource/dsiprouter.git;
    volumes:
      - kamailio_Data:/etc/kamailio
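With restart: unless-stopped, the container is restarted every time bash exits, and bash exits as soon as those commands finish (apt-get install without a preceding apt-get update will also often fail on a slim image). A sketch of a command block that keeps a foreground process alive afterwards; the final tail is only a placeholder where the image's real kamailio start command would normally go:
    command:
      - bash
      - -c
      - >
        apt-get update &&
        apt-get install -y git &&
        cd /tmp &&
        git clone https://github.com/dOpensource/dsiprouter.git &&
        exec tail -f /dev/null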

How do I make my VS Code dev/remote container port accessible to localhost?

I have a GraphQL application that runs inside a container. If I run docker compose build followed by docker compose up I can connect to it via localhost:9999/graphql. Inside the dockerfile the port forwarding is 9999:80. When I run the docker container ls command I can see the ports are forwarded as expected.
I'd like to run this in a VS Code remote container. Selecting Open folder in remote container gives me the option of selecting either the dockerfile or the docker-compose file to build the container. I've tried both options and neither allows me to access the GraphQL playground from localhost. Running from docker-compose I can see that the ports appear to be forwarded in the same manner as if I ran docker compose up, but I can't access the site.
Where am I going wrong?
Update: If I run docker compose up on the container that is built by VS Code, I can connect to localhost and the GraphQL playground.
FROM docker.removed.local/node
MAINTAINER removed
WORKDIR /opt/app
COPY package.json /opt/app/package.json
COPY package-lock.json /opt/app/package-lock.json
COPY .npmrc /opt/app/.npmrc
RUN echo "nameserver 192.168.11.1" > /etc/resolv.conf && npm ci
RUN mkdir -p /opt/app/logs
# Setup a path for using local npm packages
RUN mkdir -p /opt/node_modules
ENV PATH /opt/node_modules/.bin:$PATH
COPY ./ /opt/app
EXPOSE 80
ENV NODE_PATH /opt:/opt/app:$NODE_PATH
ARG NODE_ENV
VOLUME ["/opt/app"]
CMD ["forever", "-o", "/opt/app/logs/logs.log", "-e", "/opt/app/logs/error.log", "-a", "server.js"]
version: '3.5'
services:
  server:
    build: .
    container_name: removed-data-graph
    command: nodemon --ignore 'public/*' --legacy-watch src/server.js
    image: docker.removed.local/removed-data-graph:local
    ports:
      - "9999:80"
    volumes:
      - .:/opt/app
      - /opt/app/node_modules/
      #- ${LOCAL_PACKAGE_DIR}:/opt/node_modules
    depends_on:
      - redis
    networks:
      - company-network
    environment:
      - NODE_ENV=dev
  redis:
    container_name: redis
    image: redis
    networks:
      - company-network
    ports:
      - "6379:6379"
networks:
  company-network:
    name: company-network
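One thing worth checking is the dev container configuration itself: when VS Code builds the container from the Dockerfile alone, the ports mapping in docker-compose.yml is never applied, so nothing publishes container port 80. A minimal .devcontainer/devcontainer.json sketch (the service and workspaceFolder values are assumptions based on the files above); forwardPorts asks VS Code to forward container port 80 to a local port shown in its Ports view:
{
  "dockerComposeFile": "docker-compose.yml",
  "service": "server",
  "workspaceFolder": "/opt/app",
  "forwardPorts": [80]
}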

GitLab CI Docker WORKDIR not being created

I am trying to deploy my NodeJS repo to a DO droplet via GitLab CI. I have been following this guide to do so. What is odd is that the deployment pipeline seems to succeed, but if I SSH into the box I can see that the app is not running, as it has failed to find a package.json in /usr/src/app, which is the WORKDIR my Dockerfile points to.
gitlab-ci.yml
cache:
  key: "${CI_COMMIT_REF_NAME} node:latest"
  paths:
    - node_modules/
    - .yarn
stages:
  - build
  - release
  - deploy
build:
  stage: build
  image: node:latest
  script:
    - yarn
  artifacts:
    paths:
      - node_modules/
release:
  stage: release
  image: docker:latest
  only:
    - master
  services:
    - docker:dind
  variables:
    DOCKER_DRIVER: "overlay"
  before_script:
    - docker version
    - docker info
    - docker login -u ${CI_REGISTRY_USER} -p ${CI_BUILD_TOKEN} ${CI_REGISTRY}
  script:
    - docker build -t ${CI_REGISTRY}/${CI_PROJECT_PATH}:latest --pull .
    - docker push ${CI_REGISTRY}/${CI_PROJECT_PATH}:latest
  after_script:
    - docker logout ${CI_REGISTRY}
deploy:
  stage: deploy
  image: gitlab/dind:latest
  only:
    - master
  environment: production
  when: manual
  before_script:
    - mkdir -p ~/.ssh
    - echo "${DEPLOY_SERVER_PRIVATE_KEY}" | tr -d '\r' > ~/.ssh/id_rsa
    - chmod 600 ~/.ssh/id_rsa
    - eval "$(ssh-agent -s)"
    - ssh-add ~/.ssh/id_rsa
    - ssh-keyscan -H ${DEPLOYMENT_SERVER_IP} >> ~/.ssh/known_hosts
  script:
    - printf "DB_URL=${DB_URL}\nDB_NAME=${DB_NAME}\nPORT=3000" > .env
    - scp -r ./.env ./docker-compose.yml root@${DEPLOYMENT_SERVER_IP}:~/
    - ssh root@${DEPLOYMENT_SERVER_IP} "docker login -u ${CI_REGISTRY_USER} -p ${CI_REGISTRY_PASSWORD} ${CI_REGISTRY}; docker-compose rm -sf scraper; docker pull ${CI_REGISTRY}/${CI_PROJECT_PATH}:latest; docker-compose up -d"
Dockerfile
FROM node:10
WORKDIR /usr/src/app
COPY package.json ./
RUN yarn
COPY . .
EXPOSE 3000
CMD [ "yarn", "start" ]
docker-compose.yml
version: "3"
services:
scraper:
build: .
image: registry.gitlab.com/arby-better/scraper:latest
volumes:
- .:/usr/src/app
- /usr/src/app/node_modules
ports:
- 3000:3000
environment:
- NODE_ENV=production
env_file:
- .env
I'm using GitLab Shared Runners for my pipeline. The pipeline looks like it executes completely fine apart from a symlink failure at the end, which I don't think is anything to worry about. If I SSH into my box, go to where the docker-compose file was copied and inspect, Docker has not created /usr/src/app.
Versions:
Docker: 19.03.1
Docker-compose: 1.22.0
My DO box is Docker 1-click btw. Any help appreciated!
EDIT
I have altered my Dockerfile to attempt to force the dir creation, adding RUN mkdir -p /usr/src/app before the line declaring it as the working dir. This still does not create the directory...
When I look at the container statuses (docker-compose ps), I can see that the containers are in an exit state and have exited with code 1 or 254... any idea as to why?
Your compose file is designed for a development environment, where the code directory is replaced by a volume mount to the code on the developer's machine. You don't have this persistent directory in production, nor should you be depending on code outside of the image in production, since that defeats the purpose of copying it into your image.
version: "3"
services:
scraper:
build: .
image: registry.gitlab.com/arby-better/scraper:latest
# Comment out or delete these lines, they do not belong in production
#volumes:
# - .:/usr/src/app
# - /usr/src/app/node_modules
ports:
- 3000:3000
environment:
- NODE_ENV=production
env_file:
- .env
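A quick way to confirm that the code really is baked into the image, independent of any bind mounts, is to list the working directory straight from the image (a sketch using the image name from the compose file above):
docker run --rm registry.gitlab.com/arby-better/scraper:latest ls /usr/src/app
If the files show up here but not when the stack is started with the volumes in place, the bind mount is what is hiding them.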

Test my go app in a container launched by docker compose with Gitlab CI

I have a Golang app that depends on an FTP server.
So, in docker compose, I build an FTP service and refer to it in my tests.
So, in my docker-compose.yml I have:
version: '3'
services:
  mygoapp:
    build:
      dockerfile: ./Dockerfile.local
      context: ./
    volumes:
      - ./volume:/go
      - ./test_files:/var/test_files
    networks:
      mygoapp_network:
    env_file:
      - test.env
    tty: true
  ftpd-server:
    container_name: ftpd-server
    image: stilliard/pure-ftpd:hardened
    environment:
      PUBLICHOST: "0.0.0.0"
      FTP_USER_NAME: "julien"
      FTP_USER_PASS: "test"
      FTP_USER_HOME: "/home/www/julien"
    restart: on-failure
    networks:
      mygoapp_network:
networks:
  mygoapp_network:
    external: true
In my gitlab-ci.yml I have
variables:
  PACKAGE_PATH: /go/src/gitlab.com/xxx
  VOLUME_PATH: /var/test_files
stages:
  - test
# A hack to make Golang-in-Gitlab happy
.anchors:
  - &inject-gopath
    mkdir -p $(dirname ${PACKAGE_PATH})
    && ln -s ${CI_PROJECT_DIR} ${PACKAGE_PATH}
    && cd ${PACKAGE_PATH}
test:
  image: docker:18
  services:
    - docker:dind
  stage: test
  # only:
  #   - production
  before_script:
    - touch test.env
    - apk update
    - apk upgrade
    - apk add --no-cache py-pip
    - pip install docker-compose
    - docker network create mygoapp_network
    - mkdir -p volume/log
  script:
    - docker-compose -f docker-local.yaml up --build -d
    - docker exec project-0_mygoapp_1 ls /var/test_files
    - docker exec project-0_mygoapp_1 echo $VOLUME_PATH
    - docker exec project-0_mygoapp_1 go test ./... -v
All my services are up
But when I run
- docker exec project-0_myapp_1 echo $VOLUME_PATH
I can see $VOLUME_PATH is equal to /var/test_files
but inside code, when I do:
os.Getenv("VOLUME_PATH")
the variable is empty.
Also, locally, with a docker exec, the variable is OK.
I also tried to put the variables into the test definition, but it still doesn't work.
EDIT: The only way I could make it work is by setting environment vars in docker compose, but it is not so great.
Any idea how to fix it?
The behaviour of your script is predictable: all environment variables are expanded as soon as they are encountered (unless they are in single quotes). So, your line
docker exec project-0_myapp_1 echo $VOLUME_PATH
is expanded before being executed, and $VOLUME_PATH is taken from the GitLab runner, not from the container.
The only way I see to get this script to print the environment variable from inside the container is to put the command in a shell script (so the expansion happens inside the container) and call that file:
doit.sh
#!/bin/sh
echo $VOLUME_PATH
gitlab-ci.yml
docker exec project-0_myapp_1 doit.sh
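A shorter variant of the same idea, assuming sh is available in the image, is to single-quote the command so expansion is deferred to the shell inside the container rather than the runner:
script:
  - docker exec project-0_myapp_1 sh -c 'echo $VOLUME_PATH'
Either way, the value printed is the one the container actually has in its environment, which is what the Go code sees.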

Run Gatsby with docker compose

I am trying to run Gatsby with Docker Compose.
From what I understand the Gatsby site is running in my docker container.
I map port 8000 of the container to port 8000 on my localhost.
But when looking at localhost:8000 I am not getting my Gatsby site.
I use the following Dockerfile to build the image with docker build -t nxtra/gatsby .:
FROM node:8.12.0-alpine
WORKDIR /project
COPY ./package.json /project/package.json
COPY ./.entrypoint/entrypoint.sh /entrypoint.sh
RUN apk update \
&& apk add bash \
&& chmod +x /entrypoint.sh \
&& npm set progress=false \
&& npm install -g yarn gatsby-cli
EXPOSE 8000
ENTRYPOINT [ "/entrypoint.sh" ]
entrypoint.sh contains:
#!/bin/bash
yarn install
gatsby develop
docker-compose.yml, run with docker-compose up:
version: '3.7'
services:
  gatsby:
    image: nxtra/gatsby
    ports:
      - "8000:8000"
    volumes:
      - ./:/project
    tty: true
docker ps shows that port 8000 is forwarded 0.0.0.0:8000->8000/tcp.
Inspecting my container with docker inspect --format='{{.Config.ExposedPorts}}' id confirms the exposure of the port -> map[8000/tcp:{}]
docker top on the container shows the following processes running in the container:
18465 root 0:00 {entrypoint.sh} /bin/bash /entrypoint.sh
18586 root 0:11 node /usr/local/bin/gatsby develop
18605 root 0:00 /usr/local/bin/node /project/node_modules/jest-worker/build/child.js
18637 root 0:00 /bin/bash
Dockerfile and docker-compose.yml are situated in the root of my Gatsby project.
My project runs correctly when I run it without Docker using gatsby develop.
What am I doing wrong to get the Gatsby site that runs in my container to be visible on localhost:8000?
My issue was that Gatsby was only listening to requests within the container, like this answer suggests. Make sure you've configured Gatsby for the host 0.0.0.0. Take this (somewhat hacky) setup as an example:
Dockerfile
FROM node:alpine
RUN npm install --global gatsby-cli
docker-compose.yml
version: "3.7"
services:
gatsby:
build:
context: .
dockerfile: Dockerfile
entrypoint: gatsby
volumes:
- .:/app
develop:
build:
context: .
dockerfile: Dockerfile
command: gatsby develop -H 0.0.0.0
ports:
- "8000:8000"
volumes:
- .:/app
You can run Gatsby commands from a container:
docker-compose run gatsby info
Or run the development server:
docker-compose up develop
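Applied to the original setup, the same fix can go straight into entrypoint.sh (a sketch; gatsby develop accepts -H/--host to set the bind address):
#!/bin/bash
yarn install
gatsby develop --host 0.0.0.0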
