Wercker Services Not Being Linked to Main Container - docker

The issue I am experiencing with Wercker is that the specific linked services in my wercker.yml are not being linked to my main docker container.
I noticed this issue when my node app was not running on port 3001 after a successful Wercker deploy, whose output can be seen in the image below.
Therefore I SSH'd into my server and into my docker container that was running after the Wercker deploy using:
docker exec -i -t <my-container-name> ./bin/bash
and found the following MongoDB error in my PM2 logs:
[MongoError: connect EHOSTUNREACH 172.17.0.7:27017
The strange thing is that, as the images below show, both of the environment variables that I need from each respective service have been set:
Does anyone know why the service containers cannot be accessed from my main container even though their environment variables have been set?
The following is the wercker.yml file that I am using.
box: node
services:
  - id: mongo
  - id: redis
build:
  steps:
    - npm-install
deploy:
  steps:
    - npm-install
    - script:
        name: install pm2
        code: npm install pm2 -g
    - internal/docker-push:
        username: $DOCKER_USERNAME
        password: $DOCKER_PASSWORD
        repository: /
        ports: "3001"
        cmd: /bin/bash -c "cd /pipeline/source && pm2 start processes_prod.json --no-daemon"
        env: "MONGO_PORT_27017_TCP_ADDR"=$MONGO_PORT_27017_TCP_ADDR,"REDIS_PORT_6379_TCP_ADDR"=$REDIS_PORT_6379_TCP_ADDR
    - add-ssh-key:
        keyname: DIGITAL_OCEAN_KEY
    - add-to-known_hosts:
        hostname:
    - script:
        name: pull latest image
        code: ssh root@ docker pull /:latest
    - script:
        name: stop running container
        code: ssh root@ docker stop || echo 'failed to stop running container'
    - script:
        name: remove stopped container
        code: ssh root@ docker rm || echo 'failed to remove stopped container'
    - script:
        name: remove image behind stopped container
        code: ssh root@ docker rmi /:current || echo 'failed to remove image behind stopped container'
    - script:
        name: tag newly pulled image
        code: ssh root@ docker tag /:latest /:current
    - script:
        name: run new container
        code: ssh root@ docker run -d -p 8080:3001 --name /:current
    - script:
        name: env
        code: env

AFAIK the Wercker services are available only in the build process, not the deploy one. Mongo and Redis are persistent data stores - meaning they are not supposed to be reinstalled every time you deploy.
So make sure you manually set up Redis and Mongo in your deploy environment.
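For illustration only, a rough sketch of what that manual setup could look like on the droplet, assuming the legacy --link mechanism that matches the MONGO_PORT_27017_TCP_ADDR / REDIS_PORT_6379_TCP_ADDR variables referenced in the wercker.yml; the container and image names here are hypothetical:
# run the data stores once on the host (container names are hypothetical)
docker run -d --name mongo -v /data/mongo:/data/db mongo
docker run -d --name redis redis
# start the app linked to them; --link makes the names "mongo" and "redis"
# resolvable inside the container, and -e overrides any stale addresses
# baked into the image by the env: option of internal/docker-push
docker run -d -p 8080:3001 \
  --link mongo:mongo \
  --link redis:redis \
  -e MONGO_PORT_27017_TCP_ADDR=mongo \
  -e REDIS_PORT_6379_TCP_ADDR=redis \
  --name my-app <registry-user>/<repo>:current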

Related

start docker container from within self hosted bitbucket pipeline (dind)

I work on a Spring Boot based project and use a local machine as a test environment to deploy it as a Docker container.
I am in the middle of creating a Bitbucket pipeline that automates everything between building and deploying. For this pipeline I use a self-hosted runner (Docker) that runs on the same machine and Docker instance where I plan to deploy my project.
I managed to successfully build the project (mvn and docker) and push the Docker image to my GCP container registry.
My final deployment step (docker run xxx, see the yml script below) was also successful, but since the step itself runs in a container, the command was not executed against the top-level (host) Docker.
As far as I understand, the runner itself has access to the host Docker because docker.sock is mounted, but for each step another container is created which does not have access to docker.sock, right? So basically I need to know how to give access to this file, unless there's a better solution.
Here is the shortened pipeline definition:
image: maven:3.8.7-openjdk-18
definitions:
  services:
    docker:
      image: docker:dind
pipelines:
  default:
    # build only for feature branches or so
  branches:
    test:
      # build, docker and upload steps
      - step:
          name: Deploy
          deployment: test
          image: google/cloud-sdk:alpine
          runs-on:
            - 'self.hosted'
            - 'linux'
          caches:
            - docker
          script:
            - IMAGE_NAME=$BITBUCKET_REPO_SLUG
            - VERSION="${BITBUCKET_BUILD_NUMBER}"
            - DOCKER_IMAGE="${DOCKER_REGISTRY}/${IMAGE_NAME}:${VERSION}"
            # Authenticating with the service account key file
            - echo $GCLOUD_API_KEYFILE > ./gcloud-api-key.json
            - gcloud auth activate-service-account --key-file gcloud-api-key.json
            - gcloud config set project $GCLOUD_PROJECT
            # Login with docker and stop old container (if exists) and run new one
            - cat ./gcloud-api-key.json | docker login -u _json_key --password-stdin https://eu.gcr.io
            - docker ps -q --filter "name=${IMAGE_NAME}" | xargs -r docker stop
            - docker run -d -p 82:8080 -p 5005:5005 --name ${IMAGE_NAME} --rm ${DOCKER_IMAGE}
          services:
            - docker
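No answer is recorded here, but one way to sidestep the step-container/docker.sock question entirely is the same pattern used in the DigitalOcean answer further down this page: run the final docker commands on the host over SSH instead of from inside the step container. A minimal sketch, assuming the runner has an SSH key and known_hosts entry configured and that deploy@my-docker-host is the host running the target Docker daemon (both invented for illustration):
          script:
            # ... build/auth steps as above ...
            # run against the host daemon over SSH rather than the step's Docker socket;
            # assumes the host is already authenticated to the registry (or the image is public)
            - ssh deploy@my-docker-host "docker ps -q --filter name=${IMAGE_NAME} | xargs -r docker stop"
            - ssh deploy@my-docker-host "docker run -d -p 82:8080 -p 5005:5005 --name ${IMAGE_NAME} --rm ${DOCKER_IMAGE}"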

Why does skaffold build work but not skaffold run or skaffold dev?

I have a local NPM/Yarn repository "verdaccio" running in a docker container, bound to my host machine at http://0.0.0.0:4873/.
I am trialling skaffold with minikube.
My Dockerfile config requires two build args:
ARG NPM_TOKEN
ARG PACKAGE_REPO_DOMAIN
These are used in my .yarnrc.yml file:
yarnPath: .yarn/releases/yarn-3.2.0.cjs
nodeLinker: "node-modules"
npmRegistryServer: "http://${PACKAGE_REPO_DOMAIN}:4873/"
httpRetry: 10
httpTimeout: 100000
# networkConcurrency: 2
unsafeHttpWhitelist:
  - "0.0.0.0"
  - localhost
  - verdaccio
  - host.minikube.internal
  - host.docker.internal
npmRegistries:
  "http://${PACKAGE_REPO_DOMAIN}:4873":
    npmAlwaysAuth: true
    npmAuthToken: ${NPM_TOKEN}
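For context, the question does not show the Dockerfile, but the two build args are presumably wired in roughly like this (a sketch only, relying on Yarn Berry's ${...} environment interpolation in .yarnrc.yml; the exact Dockerfile lines are an assumption):
ARG NPM_TOKEN
ARG PACKAGE_REPO_DOMAIN
# expose the build args as env vars so Yarn can interpolate them in .yarnrc.yml
ENV NPM_TOKEN=${NPM_TOKEN} \
    PACKAGE_REPO_DOMAIN=${PACKAGE_REPO_DOMAIN}
RUN yarn install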
The configured domain is host.minikube.internal. Below is my skaffold yaml; notice that I bound the build network to "host":
apiVersion: skaffold/v2beta28
kind: Config
build:
  local:
    push: false
  artifacts:
    - image: my-app
      docker:
        dockerfile: ./my-app/Dockerfile
        target: dev
        network: "host"
        buildArgs:
          NPM_TOKEN: "***REDACTED***"
          PACKAGE_REPO_DOMAIN: "host.minikube.internal"
      context: ../
      sync:
        manual:
          - src: 'my-app/**/*.*'
            dest: ./my-app
          - src: './shared'
            dest: './shared'
          - src: '.yarn'
            dest: '.yarn'
deploy:
  helm:
    releases:
      - name: my-app
        chartPath: ../../infrastructure/helm/charts/my-app
        artifactOverrides:
          image: my-app
        imageStrategy:
          fqn: {}
When running skaffold build, it works and builds the image fine. However, when running either skaffold dev or skaffold run, yarn install hangs during the build. This means yarn is failing to reach the local verdaccio npm repository. I don't understand why though - surely the image is still being built within the minikube environment and should resolve host.minikube.internal -> localhost?
NB: I have remembered to also run this before skaffold (still fails):
skaffold config set --global local-cluster true
eval $(minikube -p minikube docker-env)
Edit
I have since made a minimum reproduction here:
https://github.com/gitn00b1337/skaffold-verdaccio
Requires yarn, minikube + helm.
CD into the project, then:
$ sudo chmod -R a+rw ./verdaccio/storage
$ yarn install
$ minikube start
$ docker-compose up (separate terminal)
$ skaffold config set --kube-context minikube local-cluster true
$ eval $(minikube -p minikube docker-env)
$ skaffold build # works
$ skaffold run # fails
On our project we had to do the following to make verdaccio work:
Add a new user:
npm adduser --registry http://localhost:4873/
Create an .npmrc file in the shared module and in the service that imports the module with the following:
@my-app:registry=http://localhost:4873
strict-ssl=false
Publish the shared module to verdaccio using yarn build && yarn publish, and then you should be able to see it in your browser if you navigate to
http://localhost:4873
Then install the shared module in the service using yarn add <shared-module>.
I think the reason your setup is hanging is either that it's missing the .npmrc file or that it needs strict-ssl=false.
Once you add that, then hopefully when you do skaffold run it will deploy to minikube.
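Independently of the .npmrc fix, a quick way to test the question's assumption that host.minikube.internal is reachable from inside the cluster (a debugging sketch, not part of either post):
$ minikube ssh
# now inside the minikube node; use wget if curl is not present in the node image
$ curl -v http://host.minikube.internal:4873/
If that request fails, the hang during skaffold run/dev is a plain networking problem between the minikube node and the verdaccio container rather than anything skaffold-specific.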

CircleCI - Running built docker image tests with additional database image - Connection refused

I wrote a CircleCI config for creating a Docker image of my app, running it, running tests, and pushing it to Docker Hub.
But I can't figure out how to run tests that require a database.
Here is part of my config for running tests.
executors:
  docker-executor:
    environment:
      DOCKER_BUILDKIT: "1"
    docker:
      - image: cimg/base:2021.12
jobs:
  run-tests:
    executor: docker-executor
    steps:
      - setup_remote_docker:
          version: 20.10.7
          docker_layer_caching: true
      - run:
          name: Load archived test image
          command: docker load -i /tmp/workspace/testimage.tar
      - run:
          name: Start Container
          command: |
            docker create --name app_container << pipeline.parameters.app_image >>:testing
            docker start app_container
      - run:
          name: Run Tests
          command: |
            docker exec -it app_container ./vendor/bin/phpunit --log-junit testresults.xml --colors=never
How do I add a MySQL service here, and how do I connect it to my app's Docker container so I can run tests that require a database?
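No answer is recorded for this one. Because the app container is started through the remote Docker daemon (setup_remote_docker), secondary service images declared on the executor are generally not reachable from it; one possible approach (a sketch only; the network name, credentials and DB_* variables are invented for illustration) is to replace the Start Container step with something like:
      - run:
          name: Start MySQL and app container on a shared network
          command: |
            docker network create test-net
            docker run -d --name mysql --network test-net \
              -e MYSQL_ROOT_PASSWORD=secret -e MYSQL_DATABASE=app_test \
              mysql:8.0
            docker create --name app_container --network test-net \
              -e DB_HOST=mysql -e DB_PORT=3306 \
              << pipeline.parameters.app_image >>:testing
            docker start app_container
The app then reaches the database at host mysql on port 3306; a short wait or retry loop before the phpunit step may be needed while MySQL finishes initialising.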

How can I deploy a dockerized Node app to a DigitalOcean server using Bitbucket Pipelines?

I've got a NodeJS project in a Bitbucket repo, and I am struggling to understand how to use Bitbucket Pipelines to get it from there onto my DigitalOcean server, where it can be served on the web.
So far I've got this:
image: node:10.15.3
pipelines:
  default:
    - parallel:
        - step:
            name: Build
            caches:
              - node
            script:
              - npm run build
So now the app is built and should be saved as a single file, server.js, in a theoretical /dist directory.
How do I now dockerize this file and then deploy it to my DigitalOcean server?
I can't find any examples for something like this.
I did find a Docker template in the Bitbucket Pipelines editor, but it only somewhat describes creating a Docker image, and not at all how to actually deploy it to a DigitalOcean server (or anywhere):
- step:
    name: Build and Test
    script:
      - IMAGE_NAME=$BITBUCKET_REPO_SLUG
      - docker build . --file Dockerfile --tag ${IMAGE_NAME}
      - docker save ${IMAGE_NAME} --output "${IMAGE_NAME}.tar"
    services:
      - docker
    caches:
      - docker
    artifacts:
      - "*.tar"
- step:
    name: Deploy to Production
    deployment: Production
    script:
      - echo ${DOCKERHUB_PASSWORD} | docker login --username "$DOCKERHUB_USERNAME" --password-stdin
      - IMAGE_NAME=$BITBUCKET_REPO_SLUG
      - docker load --input "${IMAGE_NAME}.tar"
      - VERSION="prod-0.1.${BITBUCKET_BUILD_NUMBER}"
      - IMAGE=${DOCKERHUB_NAMESPACE}/${IMAGE_NAME}
      - docker tag "${IMAGE_NAME}" "${IMAGE}:${VERSION}"
      - docker push "${IMAGE}:${VERSION}"
    services:
      - docker
You would have to SSH into your DigitalOcean VPS and then do some steps there:
Pull the current code
Build the Docker image
Run the new container
An example could look like this:
Create some script like "deployment.sh" in your repository root:
cd <path_to_local_repo>
git pull origin master
docker container stop <container_name>
docker container rm <container_name>
docker image build -t <image_name> .
docker container run -itd --name <container_name> <image_name>
and then add the following into your pipeline:
# ...
- step:
    deployment: staging
    script:
      - cat ./deployment.sh | ssh <ssh_user>@<ssh_host>
You have to add the SSH key for your repository on your server, though. Check out the following link on how to do this: https://confluence.atlassian.com/display/BITTEMP/Use+SSH+keys+in+Bitbucket+Pipelines
Here is a similar question, but using PHP: Using BitBucket Pipelines to Deploy onto VPS via SSH Access
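The deployment.sh above assumes a Dockerfile in the repository root. For the single-file server.js build described in the question, a minimal sketch might look like this (the /dist path and the port are assumptions):
FROM node:10.15.3
WORKDIR /app
# copy the bundled server produced by `npm run build` (path is an assumption)
COPY dist/server.js ./server.js
EXPOSE 3000
CMD ["node", "server.js"]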

Docker container does not run on a EC2 instance part of EC2 Cluster

I am trying to automate the deployment process of my project. The environment looks like this:
we use GitLab to store our code
we execute a CI/CD pipeline within GitLab to build a Docker image and to store it in an Amazon repository
once the build stage is completed, the deploy stage has to run the latest image on the first of the two instances and, after successful execution, scale the containers to the second instance.
This is how the .gitlab-ci.yml file looks:
image: docker:latest
services:
  - docker:dind
stages:
  - build
  - deploy
variables:
  DOCKER_DRIVER: overlay2
testBuild:
  stage: build
  script:
    - docker login -u AWS -p <password> <link to Amazons' repo>
    - docker build -t <repo/image:latest> app/
    - docker push <repo/image:latest>
testDeploy:
  stage: deploy
  variables:
    AWS_DEFAULT_REGION: "us-east-2"
    AWS_ACCESS_KEY_ID: "access key"
    AWS_SECRET_ACCESS_KEY: "ssecretAK"
    AWS_CLUSTER: "testCluster"
    AWS_SIZE: "2"
  before_script:
    - apk add --update curl
    - curl -o /usr/local/bin/ecs-cli https://s3.amazonaws.com/amazon-ecs-cli/ecs-cli-linux-amd64-latest
    - chmod +x /usr/local/bin/ecs-cli
  script:
    - docker login -u AWS -p <password> <repo_link>
    - docker run --rm --name <name-ofcontainer> -p 80:8000 -i <repo/image:latest>
    - ecs-cli configure --region $AWS_DEFAULT_REGION --access-key $AWS_ACCESS_KEY_ID --secret-key $AWS_SECRET_ACCESS_KEY --cluster $AWS_CLUSTER
    - ecs-cli scale --capability-iam --size $AWS_SIZE
  only:
    - development
Now, when the script executes successfully and I SSH into the instances and run docker ps -a, it does not list a running container; docker images also does not show the image.
If I enter the commands manually on one of the instances, the website is available.
My question is: how do I make the container available?
EDIT 1:
We use a shared runner, if that is what you are asking. The reason we use docker:dind is that when we do not use it, the following error occurs and we cannot go further:
Warning: failed to get default registry endpoint from daemon (Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?). Using system default: https://index.docker.io/v1/
Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
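No answer is recorded for this question, but note that the docker run in testDeploy executes against the job's docker:dind daemon, not on the EC2 instances in the cluster, which would explain why docker ps -a on the instances shows nothing. One possible direction (a sketch only; the compose file and service name are invented for illustration) is to let ECS schedule the container via the ECS CLI that the job already installs and configures:
# docker-compose.yml committed alongside .gitlab-ci.yml (invented for illustration)
version: '2'
services:
  web:
    image: <repo/image:latest>
    ports:
      - "80:8000"
# in the testDeploy script, instead of the plain docker run:
#   - ecs-cli compose --file docker-compose.yml service up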
