Travis-CI does not add deploy section

I followed the Travis-CI documentation for creating multiple deployments and for notifications.
This is my config (the deploy and notifications sections are at the end):
sudo: required # is required to use docker service in travis
language: node_js
node_js:
  - 'node'
services:
  - docker
before_install:
  - npm install -g yarn --cache-min 999999999
  - "/sbin/start-stop-daemon --start --quiet --pidfile /tmp/custom_xvfb_99.pid --make-pidfile --background --exec /usr/bin/Xvfb -- :99 -ac -screen 0 1280x1024x16"
# Use yarn for faster installs
install:
  - yarn
# Init GUI
before_script:
  - "export DISPLAY=:99.0"
  - "sh -e /etc/init.d/xvfb start"
  - sleep 3 # give xvfb some time to start
script:
  - npm run test:single-run
cache:
  yarn: true
  directories:
    - ./node_modules
before_deploy:
  - npm run build:backwards
  - docker --version
  - pip install --user awscli # install aws cli w/o sudo
  - export PATH=$PATH:$HOME/.local/bin # put aws in the path
deploy:
  - provider: script
    script: scripts/deploy.sh ansyn/client-chrome.v.44 $TRAVIS_COMMIT
    on:
      branch: travis
  - provider: script
    script: scripts/deploy.sh ansyn/client $TRAVIS_TAG
    on:
      tags: true
notifications:
  email: false
But in Travis (under "View config") this translates to the following, with no deploy and no notifications:
{
  "sudo": "required",
  "language": "node_js",
  "node_js": "node",
  "services": [
    "docker"
  ],
  "before_install": [
    "npm install -g yarn --cache-min 999999999",
    "/sbin/start-stop-daemon --start --quiet --pidfile /tmp/custom_xvfb_99.pid --make-pidfile --background --exec /usr/bin/Xvfb -- :99 -ac -screen 0 1280x1024x16"
  ],
  "install": [
    "yarn"
  ],
  "before_script": [
    "export DISPLAY=:99.0",
    "sh -e /etc/init.d/xvfb start",
    "sleep 3"
  ],
  "script": [
    "npm run test:single-run"
  ],
  "cache": {
    "yarn": true,
    "directories": [
      "./node_modules"
    ]
  },
  "before_deploy": [
    "npm run build:backwards",
    "docker --version",
    "pip install --user awscli",
    "export PATH=$PATH:$HOME/.local/bin"
  ],
  "group": "stable",
  "dist": "trusty",
  "os": "linux"
}

Try changing
script: scripts/deploy.sh ansyn/client $TRAVIS_TAG
to
script: sh -x scripts/deploy.sh ansyn/client $TRAVIS_TAG
With sh -x the shell prints each command as it executes, so you can see in detail whether the script is actually being run. I also looked into the build after those changes; it fails on the step below:
Step 4/9 : COPY ./dist /opt/ansyn/app
You need to change your deploy section to:
deploy:
  - provider: script
    script: sh -x scripts/deploy.sh ansyn/client-chrome.v.44 $TRAVIS_COMMIT
    skip_cleanup: true
    on:
      branch: travis
  - provider: script
    script: sh -x scripts/deploy.sh ansyn/client $TRAVIS_TAG
    skip_cleanup: true
    on:
      tags: true
so that the dist folder is still there during deploy and is not cleaned up.
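For reference, scripts/deploy.sh itself is not shown in the question, so the following is only a rough sketch of what a script taking an image name and a tag as arguments typically does here; the registry, credentials and push step are assumptions:
#!/bin/sh
# Hypothetical sketch of scripts/deploy.sh -- the real script is not shown in the question.
# Usage: scripts/deploy.sh <image-name> <tag>
set -e
IMAGE_NAME="$1"   # e.g. ansyn/client or ansyn/client-chrome.v.44
IMAGE_TAG="$2"    # e.g. $TRAVIS_TAG or $TRAVIS_COMMIT

# This build is what needs ./dist: the Dockerfile's "COPY ./dist /opt/ansyn/app" step
# fails if Travis has already cleaned the working directory, hence skip_cleanup: true.
docker build -t "$IMAGE_NAME:$IMAGE_TAG" .

# Push to the registry (credentials assumed to come from encrypted environment variables).
docker login -u "$DOCKER_USER" -p "$DOCKER_PASS"
docker push "$IMAGE_NAME:$IMAGE_TAG"
The point is that the Docker build runs at deploy time, so the dist folder produced in before_deploy has to survive until then.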

Related

Mocha --watch with Docker

I'm trying to use --watch with mocha, but when I save a source or test file it doesn't re-run the tests. I have an environment using docker-compose with the node:16-slim image, and my tests run inside it. The same config works in a bare-metal environment.
The dev docker image runs the app with:
USER node
CMD ["npm", "run", "dev"]
And this npm script is:
"dev": "npx nodemon --inspect=0.0.0.0:1080 src/index.js",
test npm script:
"test:tdd": "cross-env NODE_ENV=test mocha --config .mocharc.tdd.js",
.mocharc.tdd.js:
module.exports = {
  "reporter": "dot",
  "watch": true,
  "watch-ignore": [],
  "file": 'test/common.js',
  "recursive": true
};
output:
> test-app@1.0.0 test:tdd
> cross-env NODE_ENV=test mocha --config .mocharc.tdd.js
!
0 passing (6ms)
1 failing
1) Events
abc:
MissingParamError: Missing param: Data
at updated (src/app/events.js:8:22)
at Context.<anonymous> (test/app/events.test.js:13:28)
ℹ [mocha] waiting for changes...
Versions:
➜ test-app git:(master) ✗ npx mocha --version
10.0.0
➜ test-app git:(master) ✗ node --version
v16.15.0
➜ test-app git:(master) ✗ npx nodemon --version
2.0.15
What can I do to fix this? Thanks in advance =)
I solved it. I added the watch-files attribute to the config file.
.mocharc.tdd.js:
module.exports = {
  "reporter": "dot",
  "watch": true,
  "watch-files": ['test/**/*.js', 'src/**/*.js'],
  "watch-ignore": ['node_modules'],
  "file": 'test/common.js',
  "recursive": true
};
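One thing worth noting for the Docker setup: mocha can only see changes if the source being edited on the host is bind-mounted into the container, as a typical dev docker-compose setup does. The watch run then happens inside the running service; for example (the service name app is an assumption, the compose file is not shown in the question):
# Run the watching test script inside the already-running compose service ("app" is a hypothetical service name).
docker-compose exec app npm run test:tdd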

Why is the Cypress service failing?

Below is my pipeline, in which I'm trying to get the Cypress job to run tests against the Nginx service (which points to the main app) built at the build stage.
It is based on the official template from https://gitlab.com/cypress-io/cypress-example-docker-gitlab/-/blob/master/.gitlab-ci.yml:
image: docker:stable
services:
  - docker:dind
stages:
  - build
  - test
cache:
  key: ${CI_COMMIT_REF_SLUG}
  paths:
    - .npm
    - cache/Cypress
    - node_modules
job:
  stage: build
  script:
    - export REACT_APP_USERS_SERVICE_URL=http://127.0.0.1
    - apk add --update --no-cache gcc g++ make python2 python2-dev py-pip python3-dev docker-compose npm
    - docker-compose up -d --build
e2e:
  image: cypress/included:9.1.1
  stage: test
  script:
    - export CYPRESS_VIDEO=false
    - export CYPRESS_baseUrl=http://nginx:80
    - npm i randomstring
    - $(npm bin)/cypress run -t -v $PWD/e2e -w /e2e -e CYPRESS_VIDEO -e CYPRESS_baseUrl --network testdriven_default
    - docker-compose down
Error output:
Cypress encountered an error while parsing the argument config
You passed: if [ -x /usr/local/bin/bash ]; then
exec /usr/local/bin/bash
elif [ -x /usr/bin/bash ]; then
exec /usr/bin/bash
elif [ -x /bin/bash ]; then
exec /bin/bash
elif [ -x /usr/local/bin/sh ]; then
exec /usr/local/bin/sh
elif [ -x /usr/bin/sh ]; then
exec /usr/bin/sh
elif [ -x /bin/sh ]; then
exec /bin/sh
elif [ -x /busybox/sh ]; then
exec /busybox/sh
else
echo shell not found
exit 1
fi
The error was: Cannot read properties of undefined (reading 'split')
What is wrong with this setup?
From @jparkrr on GitHub: https://github.com/cypress-io/cypress-docker-images/issues/300#issuecomment-626324350
I had the same problem. You can specify entrypoint: [""] for the image in .gitlab-ci.yml.
Read more about it here: https://docs.gitlab.com/ee/ci/docker/using_docker_images.html#overriding-the-entrypoint-of-an-image
In your case:
e2e:
  image:
    name: cypress/included:9.1.1
    entrypoint: [""]
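The shell snippet quoted in the error message is the wrapper script GitLab injects when it starts the job container; because the default entrypoint of cypress/included is the Cypress binary itself, that script ends up being handed to Cypress as arguments, which is what produces the "parsing the argument config" error. If you want to confirm the image's default entrypoint locally (assuming Docker is available), one way is:
# Print the default entrypoint baked into the Cypress image.
docker inspect --format '{{json .Config.Entrypoint}}' cypress/included:9.1.1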

Why does my docker image fail when running as a task in AWS ECS (Fargate)?

I have a docker image in ECR, which is used for my ECS task. The task spins up and runs for a couple of minutes. Then it shuts down, after reporting the following error:
2021-11-07 00:00:58 npm ERR! A complete log of this run can be found in:
2021-11-07 00:00:58 npm ERR! /home/node/.npm/_logs/2021-11-07T00_00_58_665Z-debug.log
2021-11-07 00:00:58 npm ERR! signal SIGTERM
2021-11-07 00:00:58 npm ERR! command sh -c node bin/www
2021-11-07 00:00:58 npm ERR! command failed
2021-11-07 00:00:58 npm ERR! path /usr/src/app
2021-11-06 23:59:25 > my-app@0.0.0 start
2021-11-06 23:59:25 > node bin/www
My Dockerfile looks like:
LABEL maintainer="my-team"
LABEL description="App for AWS ECS"
EXPOSE 8080
WORKDIR /usr/src/app
RUN chown -R node:node /usr/src/app
RUN apk add bash
RUN apk add openssl
COPY --chown=node src/package*.json ./
USER node
ARG NODE_ENV=dev
ENV NODE_ENV ${NODE_ENV}
RUN npm ci
COPY --chown=node ./src/generate-cert.sh ./
RUN ./generate-cert.sh
COPY --chown=node src/ ./
ENTRYPOINT ["npm","start"]
My package.json contains:
{
  "name": "my-app",
  "version": "0.0.0",
  "private": true,
  "scripts": {
    "start": "node ./bin/www",
    "test": "jest --coverage"
  },
The app is provisioned using terraform, with the following task definition:
resource "aws_ecs_task_definition" "task_definition" {
  family                   = "dataviz_task"
  network_mode             = "awsvpc"
  requires_compatibilities = ["FARGATE"]
  cpu                      = "256"
  memory                   = "512"
  task_role_arn            = aws_iam_role.dataviz_ecs_role.arn
  execution_role_arn       = aws_iam_role.dataviz_ecs_task_execution_role.arn
  container_definitions = jsonencode([{
    entryPoint : [
      "npm",
      "start"
    ],
    environment : [
      { "name" : "ENV", "value" : local.container_environment }
    ]
    essential : true,
    image : "${var.account_id}${var.ecr_image_address}:latest",
    lifecycle : {
      ignore_changes : "image"
    }
    logConfiguration : {
      "logDriver" : "awslogs",
      "options" : {
        "awslogs-group" : var.log_stream_name,
        "awslogs-region" : var.region,
        "awslogs-stream-prefix" : "ecs"
      }
    },
    name : local.container_name,
    portMappings : [
      {
        "containerPort" : local.container_port,
        "hostPort" : local.host_port,
        "protocol" : "tcp"
      }
    ]
  }])
}
My application runs locally in docker, but not when using the same image in AWS ECS.
To run locally, I use the Make command make restart, which runs this from my Makefile:
build:
	@docker build \
		--build-arg NODE_ENV=local \
		--tag $(DEV_IMAGE_TAG) \
		. > /dev/null
.PHONY: package
package:
	@docker build \
		--tag $(PROD_IMAGE_TAG) \
		--build-arg NODE_ENV=production \
		. > /dev/null
.PHONY: start
start: build
	@docker run \
		--rm \
		--publish 8080:8080 \
		--name $(IMAGE_NAME) \
		--detach \
		--env ENV=local \
		$(DEV_IMAGE_TAG) > /dev/null
.PHONY: stop
stop:
	@docker stop $(IMAGE_NAME) > /dev/null
.PHONY: restart
restart:
ifeq ($(shell (docker ps | grep $(IMAGE_NAME))),)
	@make start > /dev/null
else
	@make stop > /dev/null
	@make start > /dev/null
endif
Why does my docker image fail when running as a task in AWS ECS (Fargate)?
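One detail worth noting: the signal SIGTERM line means the node process was stopped from outside rather than crashing on its own, which on Fargate usually means ECS itself stopped the task (for example after failed load balancer health checks on the mapped port). A first debugging step is to ask ECS for its own stop reason; the cluster name and task ARN below are placeholders:
# Ask ECS why it stopped the task (cluster name and task ARN are placeholders).
aws ecs describe-tasks \
  --cluster my-cluster \
  --tasks <stopped-task-arn> \
  --query 'tasks[0].{stoppedReason: stoppedReason, containerReasons: containers[].reason}'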

How can I make EKS work with CircleCI?

I can push images to ECR, but I am not at all sure what I should do next (what the flow should be) to make my images run on Kubernetes on EKS.
jobs:
  create-deployment:
    executor: aws-eks/python3
    parameters:
      cluster-name:
        description: |
          Name of the EKS cluster
        type: string
    steps:
      - checkout
      - aws-eks/update-kubeconfig-with-authenticator:
          cluster-name: << parameters.cluster-name >>
          install-kubectl: true
      - kubernetes/create-or-update-resource:
          get-rollout-status: true
          resource-file-path: tests/nginx-deployment/deployment.yaml
          # resource-file-path: configs/k8s/prod-deployment.yaml
          resource-name: deployment/prod-deployment
orbs:
  aws-ecr: circleci/aws-ecr@6.15.0
  aws-eks: circleci/aws-eks@1.1.0
  kubernetes: circleci/kubernetes@0.4.0
version: 2.1
workflows:
  deployment:
    jobs:
      - aws-ecr/build-and-push-image:
          repo: bwtc-backend
          tag: "${CIRCLE_BRANCH}-v0.1.${CIRCLE_BUILD_NUM}"
          dockerfile: configs/Docker/Dockerfile.prod
          path: .
          filters:
            branches:
              ignore:
                - master
      - aws-eks/create-cluster:
          cluster-name: eks-demo-deployment
          requires:
            - aws-ecr/build-and-push-image
      - create-deployment:
          cluster-name: eks-demo-deployment
          requires:
            - aws-eks/create-cluster
      - aws-eks/update-container-image:
          cluster-name: eks-demo-deployment
          container-image-updates: 'nginx=nginx:1.9.1'
          post-steps:
            - kubernetes/delete-resource:
                resource-names: nginx-deployment
                resource-types: deployment
                wait: true
          record: true
          requires:
            - create-deployment
          resource-name: deployment/nginx-deployment
      - aws-eks/delete-cluster:
          cluster-name: eks-demo-deployment
          requires:
            - aws-eks/update-container-image
That's what I've got in my config for now.
The problem I am facing at the moment is:
gzip: stdin: not in gzip format
tar: Child returned status 1
tar: Error is not recoverable: exiting now
Exited with code exit status 2
CircleCI received exit code 2
I am using a snippet from the CircleCI documentation, so I guess it should work.
I passed in all the params as far as I can see, but I can't figure out what I've missed here.
I need your help, guys!
The last update of the orb has a bug: the eksctl download fails due to an incorrect URL. The issue is still open, see here. The correct URL should be
https://github.com/weaveworks/eksctl/releases/latest/download/eksctl_$(uname -s)_amd64.tar.gz
unlike the current
"https://github.com/weaveworks/eksctl/releases/download/latest_release/eksctl_$(uname -s)_amd64.tar.gz"
While you wait for the fix to be approved, use the following step to install eksctl first.
- run:
    name: Install the eksctl tool
    command: |
      if which eksctl > /dev/null; then
        echo "eksctl is already installed"
        exit 0
      fi
      mkdir -p eksctl_download
      curl --silent --location --retry 5 "https://github.com/weaveworks/eksctl/releases/latest/download/eksctl_$(uname -s)_amd64.tar.gz" \
        | tar xz -C eksctl_download
      chmod +x eksctl_download/eksctl
      SUDO=""
      if [ $(id -u) -ne 0 ] && which sudo > /dev/null ; then
        SUDO="sudo"
      fi
      $SUDO mv eksctl_download/eksctl /usr/local/bin/
      rmdir eksctl_download
and then run the job
- aws-eks/create-cluster:
    cluster-name: eks-demo-deployment
That should solve the issue.
An example:
# Creation of Cluster
create-cluster:
  executor: aws-eks/python3
  parameters:
    cluster-name:
      description: |
        Name of the EKS cluster
      type: string
  steps:
    - run:
        name: Install the eksctl tool
        command: |
          if which eksctl > /dev/null; then
            echo "eksctl is already installed"
            exit 0
          fi
          mkdir -p eksctl_download
          curl --silent --location --retry 5 "https://github.com/weaveworks/eksctl/releases/latest/download/eksctl_$(uname -s)_amd64.tar.gz" \
            | tar xz -C eksctl_download
          chmod +x eksctl_download/eksctl
          SUDO=""
          if [ $(id -u) -ne 0 ] && which sudo > /dev/null ; then
            SUDO="sudo"
          fi
          $SUDO mv eksctl_download/eksctl /usr/local/bin/
          rmdir eksctl_download
    - aws-eks/create-cluster:
        cluster-name: eks-demo-deployment
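If you want to confirm that the workaround worked before aws-eks/create-cluster runs, a trivial extra command in the same run step (or a separate one) is:
# Print the version of the freshly installed binary to confirm it is on the PATH.
eksctl version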

VSCode: Display forwarding from docker container in Remote Development Extension

How do I set up remote display forwarding from a Docker container using the new Remote Development extension?
Currently, my .devcontainer contains:
devcontainer.json
{
  "name": "kinetic_v5",
  "context": "..",
  "dockerFile": "Dockerfile",
  "workspaceFolder": "/workspace",
  "runArgs": [
    "--net", "host",
    "-e", "DISPLAY=${env:DISPLAY}",
    "-e", "QT_GRAPHICSSYSTEM=native",
    "-e", "CONTAINER_NAME=kinetic_v5",
    "-v", "/tmp/.X11-unix:/tmp/.X11-unix",
    "--device=/dev/dri:/dev/dri",
    "--name=kinetic_v5",
  ],
  "extensions": [
    "ms-python.python"
  ]
}
Dockerfile
FROM docker.is.localnet:5000/amd/official:16.04
RUN apt-get update && \
apt-get install -y zsh \
fonts-powerline \
locales \
# set up locale
&& locale-gen en_US.UTF-8
RUN pip install Cython
# run the installation script
RUN wget https://github.com/robbyrussell/oh-my-zsh/raw/master/tools/install.sh -O - | zsh || true
CMD ["zsh"]
This doesn't seem to do the job.
Setup details:
OS: linux
Product: Visual Studio Code - Insiders
Product Version: 1.35.0-insider
Language: en
UPDATE: You can find a thread about this issue on the official git repo here.
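For completeness, the runArgs above cover DISPLAY and the X11 socket but not the X server's access control on the host; a commonly used (if permissive) companion step, run on the host before opening the dev container, is:
# On the host: allow local (non-network) clients, including containers sharing the X socket, to connect to the X server.
xhost +local: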
