While working with a simple declarative pipeline in Jenkins, I'm running into an inconsistency: I can run the docker run commands manually to publish my Expo project, but when Jenkins creates the Docker container and attempts to run the expo publish command, I get a connection refused error. My first guesses were to add privileged to the Docker container and then to ensure the user can run as root ... neither of which actually helped. I'm curious whether anyone has figured out how to run Expo CI/CD inside a Docker container, using Jenkins as the main way of facilitating that.
+ EXPO_DEBUG=true npx expo publish --non-interactive --release-channel develop
[07:50:24] Publishing to channel 'develop'...
[07:50:26] We noticed you did not build a standalone app with this SDK version and release channel before. Remember that OTA updates will not work with the app built with different SDK version and/or release channel. Read more: https://docs.expo.io/versions/latest/guides/publishing.html#limitations
[07:50:27] Building iOS bundle
[07:50:27] connect ECONNREFUSED 127.0.0.1:19001
[07:50:27] Error: connect ECONNREFUSED 127.0.0.1:19001
at TCPConnectWrap.afterConnect [as oncomplete] (net.js:1113:14)
My Jenkinsfile is pretty simple:
pipeline {
    agent {
        dockerfile {
            filename 'Dockerfile'
        }
    }
    stages {
        stage('slack notification') {
            agent none
            steps {
                slackSend color: "good", message: "Build Started - ${env.JOB_NAME} ${env.BUILD_NUMBER} (<${env.BUILD_URL}|Open>)"
            }
        }
        stage('run tests') {
            steps {
                sh 'cd /project && yarn test'
            }
        }
        stage('publish to expo') {
            environment {
                expo_creds = credentials('expo_credentials')
            }
            steps {
                sh "npx expo login -u $expo_creds_USR -p $expo_creds_PSW && mv env.beta.ts env.ts && EXPO_DEBUG=true npx expo publish --non-interactive --release-channel ${env.BRANCH_NAME}"
            }
        }
    }
    post {
        success {
            slackSend color: "good", message: "Build Finished - ${env.JOB_NAME} ${env.BUILD_NUMBER} (<${env.BUILD_URL}|Open>) duration: ${currentBuild.durationString}"
        }
        unstable {
            slackSend color: "warning", message: "Build Unstable - ${env.JOB_NAME} ${env.BUILD_NUMBER} (<${env.BUILD_URL}|Open>) duration: ${currentBuild.durationString}"
        }
        failure {
            slackSend color: "danger", message: "Build Failed - ${env.JOB_NAME} ${env.BUILD_NUMBER} (<${env.BUILD_URL}|Open>) duration: ${currentBuild.durationString}"
        }
    }
}
And my Dockerfile looks as follows:
FROM node:10.13-alpine as npm-dependencies
WORKDIR /project
RUN apk add --no-cache \
    autoconf \
    libtool \
    automake \
    g++ \
    make \
    libjpeg-turbo-dev \
    libpng-dev \
    libwebp-dev \
    nasm
COPY yarn.lock .
COPY package.json .
COPY .npmrc .
RUN yarn install
FROM node:10.13-jessie
WORKDIR /project
COPY custom_types ./custom_types
COPY img ./img
COPY assets ./assets
COPY src ./src
COPY tests ./tests
COPY babel.config.js ./
COPY .buckconfig ./
COPY .flowconfig ./
COPY .watchmanconfig ./
COPY app.json .
COPY App.js .
COPY env.docker.ts ./env.ts
COPY tsconfig.json .
COPY package.json .
COPY jest.config.js .
COPY --from=npm-dependencies /project/node_modules /project/node_modules
RUN npm install -g expo-cli
RUN mkdir /.npm && chmod 0777 /.npm
RUN mkdir /.cache && chmod 0777 /.cache
RUN mkdir /.yarn && chmod 0777 /.yarn
RUN mkdir /.expo && chmod 0777 /.expo
RUN mkdir /project/.expo && chmod 0777 /project/.expo
Okay, so this turned out to be Expo-specific and is probably a bug in how it's made. After the login step I manually cd'd into the /project directory and then ran rm -rf .expo.
Setting the CWD to /project and then deleting .expo fixes the issue. Why it worked outside of Jenkins but not inside is still a bit befuddling; however, the combination of those two actions resolved it for me.
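For reference, here is a minimal sketch of the shell sequence that ended up working (paths, credentials variables, and release channel taken from my setup above):

npx expo login -u "$expo_creds_USR" -p "$expo_creds_PSW"
cd /project
rm -rf .expo   # stale packager state here was behind the ECONNREFUSED on 127.0.0.1:19001
EXPO_DEBUG=true npx expo publish --non-interactive --release-channel develop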
Related
It might be a simple question, but I could not find a proper solution.
I have a Docker image as below. All I would like to do is run a curl command inside the Kubernetes pod, but I get the error below. I was not able to exec via bash either.
$ kubectl exec -ti hub-cronjob-dev-597cc575f-6lfdc -n hub-dev sh
Defaulting container name to hub-cronjob.
Use 'kubectl describe pod/hub-cronjob-dev-597cc575f-6lfdc -n hub-dev' to see all of the containers in this pod.
/usr/src/app $ curl
sh: curl: not found
Tried with bash
$ kubectl exec -ti cronjob-dev-597cc575f-6lfdc -n hub-dev bash
error executing command in container: failed to exec in container: failed to start exec "8019bd0d92aef2b09923de78753eeb0c8b60a78619543e4cd27069128a30da92": OCI runtime exec failed: exec failed: container_linux.go:349: starting container process caused "exec: \"bash\": executable file not found in $PATH": unknown
Dockerfile
FROM node:12-alpine AS builder
# Variables from outside
ARG NODE_ENVIRONMENT=development
ENV NODE_ENV=$NODE_ENVIRONMENT
# Create app directory
WORKDIR /usr/src/app
#Install curl
RUN apk --no-cache add curl                    # -> did not work
RUN apk update && apk add curl curl-dev bash   # -> did not work
# Install app dependencies
COPY package*.json ./
RUN npm install
# Bundle app source
COPY . .
# Build Stage 2
# Take the build from the previous stage
FROM node:12-alpine
WORKDIR /usr/src/app
COPY --from=builder /usr/src/app /usr/src/app
# run the application
EXPOSE 50005 9183
CMD [ "npm", "run", "start:docker" ]
Your Dockerfile consists of multiple stages, which is also called a multi-stage build.
Each FROM statement starts a new stage and a new image. In your case you have 2 stages:
builder, where you build your app and install curl
the second stage, which copies /usr/src/app from the builder stage
In this case the second FROM node:12-alpine statement will contain only the basic alpine packages, the node tools, and the /usr/src/app directory you copied from the first stage.
If you want to have curl in your final image you need to install curl in second stage (after second FROM node:12-alpine):
FROM node:12-alpine AS builder
# Variables from outside
ARG NODE_ENVIRONMENT=development
ENV NODE_ENV=$NODE_ENVIRONMENT
# Create app directory
WORKDIR /usr/src/app
# Do not install curl in this stage
# Install app dependencies
COPY package*.json ./
RUN npm install
# Bundle app source
COPY . .
# Build Stage 2
# Take the build from the previous stage
FROM node:12-alpine
#Install curl
RUN apk update && apk add curl
WORKDIR /usr/src/app
COPY --from=builder /usr/src/app /usr/src/app
# run the application
EXPOSE 50005 9183
CMD [ "npm", "run", "start:docker" ]
As was mentioned in the comments, you can test this by running the Docker container directly - there is no need to run a pod in a k8s cluster:
docker build -t image . && docker run -it image sh -c 'which curl'
It is common to use a multi-stage build for applications implemented in compiled programming languages.
In the first stage you install all the necessary dev tools and compilers and then compile the sources into a binary file. Since you don't need (and probably don't want) sources and developer tools in a production image, you should create a new stage.
In the second stage you copy the compiled binary file and run it as the CMD or ENTRYPOINT. This way the image contains only executable code, which makes it smaller.
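As a rough illustration (the image names here are made up), you can build just the first stage and then the full image and compare their sizes:

# Build only the builder stage, then the full multi-stage image.
docker build --target builder -t myapp:builder .
docker build -t myapp:final .
# The final image should be noticeably smaller than the builder.
docker images | grep myapp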
Alternatively, we can add curl with apk inside the running k8s pod:
apk add curl
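For example, with the pod from the question, something like this should work, assuming the container runs as a user allowed to install packages (usually root):

kubectl exec -it hub-cronjob-dev-597cc575f-6lfdc -n hub-dev -- apk add --no-cache curl

Note that this change is ephemeral: it lives in the container's writable layer and is lost when the pod restarts.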
I want to test my Go code in a CI environment which requires using Docker. How do I create a Docker image that has all the dependencies listed in go.mod already downloaded and compiled so that docker run $IMG go test uses the cached artifacts?
The desired properties of this image:
The image only uses go.mod to compile dependencies. I don't want to use the full source code, because then any change to the source code would invalidate the Docker layer that holds the cached dependencies.
docker run $IMG go test ./... doesn't redownload or recompile dependencies listed in go.mod.
Avoid experimental Docker features.
Existing approaches
Parsing go.mod and using go get
From https://github.com/golang/go/issues/27719#issuecomment-578246826
This approach is close but it doesn't appear to use GOCACHE when I run go test. This also appears to choke on certain module paths, like gopkg.in/DataDog/dd-trace-go.v1:
FROM golang:1.13
WORKDIR /src
COPY go.mod ./
RUN set -eu \
    && go mod graph \
    | cut -d '#' -f 1 \
    | cut -d ' ' -f 2 \
    | sort -u \
    | sed -e 's#dd-trace-go.v1#&/ddtrace#' \
    | xargs go get -v
docker run --mount type=bind,source="$(pwd)",target=/src $IMG go test ./...
Using DOCKER_BUILDKIT with a mount cache
Originally described in https://github.com/golang/go/issues/27719#issuecomment-514747274. This only works for go build. I can't use it for go test because the cache mount is unmounted after the RUN command so it doesn't exist in the created Docker image.
This also depends on experimental docker features.
# syntax = docker/dockerfile:experimental
FROM golang:1.13 as go-builder
ARG VERSION
WORKDIR /src
COPY . /src/
# With a mount cache, Docker will cache the target directories for future
# invocations of this RUN layer. Meaning, once this command is run once, all
# successive calls will use the already downloaded and already compiled assets.
RUN --mount=type=cache,target=/go/pkg/mod \
    --mount=type=cache,target=/root/.cache/go-build \
    go build ./server
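For completeness, a Dockerfile using cache mounts only takes effect when BuildKit is enabled at build time, e.g.:

DOCKER_BUILDKIT=1 docker build -t go-builder .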
I often put COPY go.mod at the very beginning of a Dockerfile, as it does not change that often.
FROM golang:1.14.3 as builder
WORKDIR /app
COPY go.mod .
COPY go.sum .
RUN go mod download
COPY . .
RUN go build -tags netgo -ldflags '-extldflags "-static"' -o app .
FROM centos:7
WORKDIR /root
COPY --from=builder /app/app .
So, if your go.mod does not change, the RUN go mod download line only runs on the first build; after that the cached layer is reused.
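A quick way to confirm the caching (the image tag is just an example):

docker build -t myapp .   # first build downloads all modules
docker build -t myapp .   # unchanged go.mod/go.sum: the "go mod download" step reports "Using cache"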
If you run go get ./... in your Dockerfile, it will download all the dependencies into your Docker image based on go.mod and go.sum. You can also add the --insecure flag to the go get command if you are dealing with internal self-signed repositories. It is very similar to go mod download. Aside from that, you can have a shell script that actually runs go test ./... and reports the results to your CI environment if it supports that; I know GitLab does.
FROM golang:1.15-alpine AS builder
RUN apk add --update git gcc musl-dev
RUN apk update && apk add bash
COPY . /app
RUN go version
WORKDIR /app
ENV CGO_ENABLED=0
RUN git config --global http.sslVerify false
RUN go get ./...
WORKDIR /app
RUN chmod +x ./unitTest.sh
RUN ./unitTest.sh
WORKDIR /app/cmd/svr
RUN go build -o app
RUN chmod 700 app
FROM alpine:latest
WORKDIR /root/
ARG build_stamp
ARG git_commit
ARG build_number
ENV BUILD_STAMP=$build_stamp
ENV GIT_COMMIT=$git_commit
ENV BUILD_NUMBER=$build_number
COPY --from=builder /app/cmd/svr .
EXPOSE 8000
CMD ["./app"]
and the script
#!/usr/bin/env bash
TESTS=$(go test -v -covermode=count -coverprofile=count.txt ./...)
echo "$TESTS"
if echo "$TESTS" | grep -q "FAIL"; then
    echo ""
    echo "One or more Unit Tests for app have Failed. Build will now fail. Pipeline will also fail..."
    echo ""
    exit 1
else
    echo ""
    echo "All Unit Tests for application have passed!"
    echo "Running Code Coverage..."
    echo ""
    COVERAGE=$(go tool cover -func=./count.txt)
    echo "$COVERAGE"
    exit 0
fi
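If it helps, the ARG values in the final stage are supplied at build time along these lines (the values shown are just examples):

docker build \
    --build-arg build_stamp="$(date -u +%Y-%m-%dT%H:%M:%SZ)" \
    --build-arg git_commit="$(git rev-parse --short HEAD)" \
    --build-arg build_number="$BUILD_NUMBER" \
    -t mysvc .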
Docker COPY is not copying over the bash script
FROM alpine:latest
#Install Go and Tini - These remain.
RUN apk add --no-cache go build-base gcc
RUN apk add --no-cache --update ca-certificates redis git && update-ca-certificates
# Set Env Variables for Go and add Go to Path.
ENV GOPATH /go
ENV PATH $GOPATH/bin:/usr/local/go/bin:$PATH
RUN go get github.com/rakyll/hey
RUN echo GOLANG VERSION `go version`
COPY ./bench.sh /root/bench.sh
RUN chmod +x /root/bench.sh
ENTRYPOINT /root/bench.sh
Here is the script -
#!/bin/bash
set -e;
echo "entered";
hey;
I try running the above Dockerfile with
$ docker build -t test-bench .
$ docker run -it test-bench
But I get the error
/bin/sh: /root/bench.sh: not found
The file does exist -
$ docker run --rm -it test-bench sh
/ # ls
bin dev etc go home lib media mnt opt proc root run sbin srv sys tmp usr var
/ # cd root
~ # ls
bench.sh
~ #
Was your docker build successful? When I tried to simulate this, I found the following error:
---> Running in 96468658cebd
go: missing Git command. See https://golang.org/s/gogetcmd
package github.com/rakyll/hey: exec: "git": executable file not found in $PATH
The command '/bin/sh -c go get github.com/rakyll/hey' returned a non-zero code: 1
Try installing git in the Dockerfile - RUN apk add --no-cache go build-base gcc git - and run again.
The COPY operation itself looks correct. Make sure bench.sh is present in the directory from which docker build is executed.
The script uses /bin/bash, but the bash binary is not available in the Alpine image. Either it has to be installed, or the /bin/sh shell should be used instead.
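A minimal sketch of the two options (either one on its own should be enough):

# Option 1: install bash into the image, e.g. as a RUN step in the Dockerfile
apk add --no-cache bash
# Option 2: point the script at the shell Alpine already ships
sed -i '1s|#!/bin/bash|#!/bin/sh|' bench.sh

The misleading "not found" message comes from the missing shebang interpreter, not from the script file itself being absent.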
I'm new to kaniko, and I'm trying to build Docker images on an Ubuntu Docker host.
I have a local Dockerfile and a main.go app.
# Dockerfile
FROM golang:1.10.3-alpine AS build
ADD . /src
RUN cd /src && go build -o app
FROM alpine
WORKDIR /app
COPY --from=build /src/app /app/
CMD [ "./app" ]
// main.go
package main

import "fmt"

func main() {
    fmt.Println("Hello, World!")
}
And on the command line, I run:
docker run -it -v $(pwd):/usr \
    gcr.io/kaniko-project/executor:latest \
    --dockerfile=Dockerfile --context=/usr --no-push
Unfortunately, I get an error like the one below:
...
INFO[0006] Skipping paths under /proc, as it is a whitelisted directory
INFO[0006] Using files from context: [/usr]
INFO[0006] ADD . /src
INFO[0006] Taking snapshot of files...
INFO[0006] RUN cd /src && go build -o app
INFO[0006] cmd: /bin/sh
INFO[0006] args: [-c cd /src && go build -o app]
/bin/sh: go: not found
error building image: error building stage: waiting for process to exit: exit status 127
What's wrong? (docker version 18.09.0)
You need to use a different path for the kaniko context. The command to run this build should look like this:
docker run -it -v $(pwd):/context \
    gcr.io/kaniko-project/executor:latest \
    --dockerfile=Dockerfile --context=/context --no-push
In your command, with /usr as the context, kaniko was overwriting that path during the build. In the golang image go lives under /usr, which is why it could not be found afterwards:
# which go
/usr/local/go/bin/go
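Any directory that the build itself does not need to touch will do; kaniko's conventional default is /workspace, so an equivalent invocation would be:

docker run -it -v $(pwd):/workspace \
    gcr.io/kaniko-project/executor:latest \
    --dockerfile=Dockerfile --context=/workspace --no-push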
My Dockerfile looks something like this.
FROM mhart/alpine-node:8.11.3
RUN mkdir -p /app
COPY ./ /app
WORKDIR /app/build
RUN yarn global add serve
CMD ["serve", "-l", "3000"]
EXPOSE 3000
And then my Jenkinsfile looks something like this.
node {
    try {
        stage('Checkout source code') {
            checkout scm
        }
        stage('Install packages') {
            sh("docker run --rm -v `pwd`:/app -w /app node yarn install")
            //sh("sudo chown -R jenkins: ./node_modules")
        }
        stage('Set the environment variables') {
            sh("echo set-env-variables")
        }
        stage('Build static assets') {
            sh("docker run --rm -v `pwd`:/app -w /app node yarn build")
        }
    } catch (err) {
        // rethrow so the build is still marked as failed
        throw err
    }
}
When I run it on Jenkins, the console output says error Couldn't find a package.json file in "/app", and it also reports sudo: not found, even though I have added jenkins ALL=(ALL) NOPASSWD: ALL to the /etc/sudoers file.
When I run the commands listed in my Jenkinsfile in my terminal they all work fine, but when I run them on Jenkins they don't.
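A diagnostic sketch that may help narrow this down (same node image as in the Jenkinsfile): list what the container actually sees in /app. If package.json is missing there, the pwd volume path is being resolved on the Docker host rather than in the Jenkins workspace, which is a common pitfall when Jenkins itself runs in a container:

docker run --rm -v "$(pwd)":/app -w /app node ls -la /app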