My Dockerfile looks something like this.
FROM mhart/alpine-node:8.11.3
RUN mkdir -p /app
COPY ./ /app
WORKDIR /app/build
RUN yarn global add serve
CMD ["serve", "-l", "3000"]
EXPOSE 3000
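(For reference, the image is built and run along these lines; the tag is illustrative.)
docker build -t my-app .
docker run --rm -p 3000:3000 my-app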
And then my Jenkinsfile looks something like this.
node {
    try {
        stage('Checkout source code') {
            checkout scm
        }
        stage('Install packages') {
            sh("docker run --rm -v `pwd`:/app -w /app node yarn install")
            //sh("sudo chown -R jenkins: ./node_modules")
        }
        stage('Set the environment variables') {
            sh("echo set-env-variables")
        }
        stage('Build static assets') {
            sh("docker run --rm -v `pwd`:/app -w /app node yarn build")
        }
    } catch (err) {
        // a try block needs a catch or finally to compile; rethrow so the build still fails
        throw err
    }
}
When I run it on Jenkins, the console output says error Couldn't find a package.json file in "/app", and it also gives a sudo: not found error, even though I have added jenkins ALL=(ALL) NOPASSWD: ALL to the /etc/sudoers file.
I run the commands listed in my Jenkinsfile in my terminal and they all work fine, but when I run them on Jenkins, they don't work.
It might be a simple question, but I could not find a proper solution.
I have a Docker image as below. All I would like to do is run a curl command inside the Kubernetes pod, but I receive the error below. I was not able to exec via bash either.
$ kubectl exec -ti hub-cronjob-dev-597cc575f-6lfdc -n hub-dev sh
Defaulting container name to hub-cronjob.
Use 'kubectl describe pod/hub-cronjob-dev-597cc575f-6lfdc -n hub-dev' to see all of the containers in this pod.
/usr/src/app $ curl
sh: curl: not found
Tried with bash
$ kubectl exec -ti cronjob-dev-597cc575f-6lfdc -n hub-dev bash
error: Internal error occurred: error executing command in container: failed to exec in container: failed to start exec "8019bd0d92aef2b09923de78753eeb0c8b60a78619543e4cd27069128a30da92": OCI runtime exec failed: exec failed: container_linux.go:349: starting container process caused "exec: \"bash\": executable file not found in $PATH": unknown
Dockerfile
FROM node:12-alpine AS builder
# Variables from outside
ARG NODE_ENVIRONMENT=development
ENV NODE_ENV=$NODE_ENVIRONMENT
# Create app directory
WORKDIR /usr/src/app
# Install curl (neither attempt worked)
RUN apk --no-cache add curl                      # did not work
RUN apk update && apk add curl curl-dev bash     # did not work
# Install app dependencies
COPY package*.json ./
RUN npm install
# Bundle app source
COPY . .
# Build Stage 2
# Take the build from the previous stage
FROM node:12-alpine
WORKDIR /usr/src/app
COPY --from=builder /usr/src/app /usr/src/app
# run the application
EXPOSE 50005 9183
CMD [ "npm", "run", "start:docker" ]
Your Dockerfile consists of multiple stages, which is also called a multi-stage build.
Each FROM statement starts a new stage and a new image. In your case you have 2 stages:
builder, where you build your app and install curl
the second stage, which copies /usr/src/app from the builder stage
The second FROM node:12-alpine statement produces an image that contains only the basic Alpine packages, the Node tools, and the /usr/src/app directory copied from the first stage.
If you want to have curl in your final image, you need to install curl in the second stage (after the second FROM node:12-alpine):
FROM node:12-alpine AS builder
# Variables from outside
ARG NODE_ENVIRONMENT=development
ENV NODE_ENV=$NODE_ENVIRONMENT
# Create app directory
WORKDIR /usr/src/app
# Do not install curl here; it would not survive into the final image
# Install app dependencies
COPY package*.json ./
RUN npm install
# Bundle app source
COPY . .
# Build Stage 2
# Take the build from the previous stage
FROM node:12-alpine
# Install curl
RUN apk update && apk add curl
WORKDIR /usr/src/app
COPY --from=builder /usr/src/app /usr/src/app
# run the application
EXPOSE 50005 9183
CMD [ "npm", "run", "start:docker" ]
As mentioned in the comments, you can test this by running the Docker container directly; there is no need to run a pod in a k8s cluster:
docker build -t image . && docker run -it image sh -c 'which curl'
It is common to use multi-stage builds for applications written in compiled programming languages.
In the first stage you install all the necessary dev tools and compilers, then compile the sources into a binary. Since you don't need, and probably don't want, sources and developer tools in a production image, you should create a new stage.
In the second stage you copy the compiled binary and run it as the CMD or ENTRYPOINT. This way the image contains only executable code, which makes it smaller. A minimal sketch is shown below.
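For illustration, a minimal sketch of such a two-stage build for a Go program (the paths and binary name are assumptions, and the build context is assumed to contain a Go module):
# Stage 1: build with the full Go toolchain
FROM golang:1.21-alpine AS build
WORKDIR /src
COPY . .
RUN go build -o /out/app .
# Stage 2: ship only the compiled binary on a minimal base image
FROM alpine
COPY --from=build /out/app /usr/local/bin/app
ENTRYPOINT ["app"]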
We can also add curl with apk inside the running k8s pod:
apk add curl
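For example, from outside the pod (pod and namespace names taken from the question; this assumes the container user is allowed to install packages, and the change is lost when the pod restarts):
kubectl exec -ti hub-cronjob-dev-597cc575f-6lfdc -n hub-dev -- apk add curl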
Using the docker build command line, I can pass in a build secret as follows:
docker build \
--secret=id=gradle.properties,src=$HOME/.gradle/gradle.properties \
--build-arg project=template-ms \
.
Then use it in a Dockerfile
# syntax = docker/dockerfile:1.0-experimental
FROM gradle:jdk12 AS build
COPY *.gradle .
RUN --mount=type=secret,target=/home/gradle/gradle.properties,id=gradle.properties gradle dependencies
COPY src/ src/
RUN --mount=type=secret,target=/home/gradle/gradle.properties,id=gradle.properties gradle build
RUN ls -lR build
FROM alpine AS unpacker
ARG project
COPY --from=build /home/gradle/build/libs/${project}.jar /tmp
RUN mkdir -p /opt/ms && unzip -q /tmp/${project}.jar -d /opt/ms && \
mv /opt/ms/BOOT-INF/lib /opt/lib
FROM openjdk:12
EXPOSE 8080
WORKDIR /opt/ms
USER nobody
CMD ["java", "-Xdebug", "-Xrunjdwp:transport=dt_socket,server=y,suspend=n,address=0.0.0.0:8000", "-Dnetworkaddress.cache.ttl=5", "org.springframework.boot.loader.JarLauncher"]
HEALTHCHECK --start-period=600s CMD curl --silent --output /dev/null http://localhost:8080/actuator/health
COPY --from=unpacker /opt/lib /opt/ms/BOOT-INF/lib
COPY --from=unpacker /opt/ms/ /opt/ms/
I want to do the build using docker-compose, but I can't find anything in the docker-compose.yml reference about how to pass the secret.
That way the developer would just need to type docker-compose up.
You can use environment (at runtime) or build args (at build time) to pass variables to the container in docker-compose:
services:
  app:  # service name is illustrative
    build:
      context: .
      args:
        - secret=id=gradle.properties,src=$HOME/.gradle/gradle.properties
    environment:
      - secret=id=gradle.properties,src=$HOME/.gradle/gradle.properties
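Note that build args, unlike BuildKit secrets, are recorded in the image history, so this is not equivalent to docker build --secret. Newer Compose versions that implement the Compose specification can forward BuildKit secrets directly; a hedged sketch, with the service name as an assumption:
services:
  app:
    build:
      context: .
      secrets:
        - gradle.properties   # the name doubles as the id used in RUN --mount=type=secret
secrets:
  gradle.properties:
    file: ${HOME}/.gradle/gradle.properties
With that in place, docker-compose up --build can consume the same secret as the manual docker build command.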
Docker COPY is not copying over the bash script
FROM alpine:latest
# Install Go and Tini - these remain.
RUN apk add --no-cache go build-base gcc go
RUN apk add --no-cache --update ca-certificates redis git && update-ca-certificates
# Set Env Variables for Go and add Go to Path.
ENV GOPATH /go
ENV PATH $GOPATH/bin:/usr/local/go/bin:$PATH
RUN go get github.com/rakyll/hey
RUN echo GOLANG VERSION `go version`
COPY ./bench.sh /root/bench.sh
RUN chmod +x /root/bench.sh
ENTRYPOINT /root/bench.sh
Here is the script -
#!/bin/bash
set -e;
echo "entered";
hey;
I try running the above Dockerfile with
$ docker build -t test-bench .
$ docker run -it test-bench
But I get the error
/bin/sh: /root/bench.sh: not found
The file does exist -
$ docker run --rm -it test-bench sh
/ # ls
bin dev etc go home lib media mnt opt proc root run sbin srv sys tmp usr var
/ # cd root
~ # ls
bench.sh
~ #
Is your docker build successful? When I tried to simulate this, I found the following error:
---> Running in 96468658cebd
go: missing Git command. See https://golang.org/s/gogetcmd
package github.com/rakyll/hey: exec: "git": executable file not found in $PATH
The command '/bin/sh -c go get github.com/rakyll/hey' returned a non-zero code: 1
Try installing git in the Dockerfile, RUN apk add --no-cache go build-base gcc go git, and run again.
The COPY operation here looks correct. Make sure bench.sh is present in the directory from which docker build is executed.
Okay, the script uses /bin/bash, but the bash binary is not available in the Alpine image. Either bash has to be installed or a /bin/sh shebang should be used; see the sketch below.
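Either fix is a one-line change; a sketch (the apk line belongs in the Dockerfile, the shebang at the top of bench.sh):
# Option 1, in the Dockerfile: install bash
RUN apk add --no-cache bash
# Option 2, in bench.sh: use the POSIX shell that Alpine already ships
#!/bin/sh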
I'm a newbie to kaniko, trying to build Docker images on an Ubuntu Docker host.
I have a local Dockerfile and a main.go app.
# Dockerfile
FROM golang:1.10.3-alpine AS build
ADD . /src
RUN cd /src && go build -o app
FROM alpine
WORKDIR /app
COPY --from=build /src/app /app/
CMD [ "./app" ]
#main.go
package main
import "fmt"
func main() {
fmt.Println("Hello, World!")
}
And on the command line, I run
docker run -it -v $(pwd):/usr \
gcr.io/kaniko-project/executor:latest \
--dockerfile=Dockerfile --context=/usr --no-push
Unfortunately, I got an error like the one below
...
INFO[0006] Skipping paths under /proc, as it is a whitelisted directory
INFO[0006] Using files from context: [/usr]
INFO[0006] ADD . /src
INFO[0006] Taking snapshot of files...
INFO[0006] RUN cd /src && go build -o app
INFO[0006] cmd: /bin/sh
INFO[0006] args: [-c cd /src && go build -o app]
/bin/sh: go: not found
error building image: error building stage: waiting for process to exit: exit status 127
What's wrong? (docker version 18.09.0)
You need to use a different path for the context in kaniko. Your command to run this build should look like this:
docker run -it -v $(pwd):/context \
gcr.io/kaniko-project/executor:latest \
--dockerfile=Dockerfile --context=/context --no-push
In your original command, with /usr as the context, the mounted volume shadowed the /usr path of every image used in the Dockerfile; in the golang image, go is installed under /usr, which is why it could not be found:
# which go
/usr/local/go/bin/go
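If you also want to keep the built image rather than just verifying the build, the executor can write it to a tarball (this uses the kaniko executor's documented --tarPath and --destination flags; the tar path and image name here are illustrative):
docker run -it -v $(pwd):/context \
  gcr.io/kaniko-project/executor:latest \
  --dockerfile=Dockerfile --context=/context --no-push \
  --tarPath=/context/image.tar --destination=hello-app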
While working with a simple declarative pipeline in Jenkins, I'm running into an inconsistency: I can run the docker run commands manually to publish my Expo project; however, when Jenkins creates the Docker container and attempts to run the expo publish command, I get a connection refused error. My initial guess was to add privileged to the Docker container, then to ensure the user can run as root ... none of which actually helped. I'm curious if anyone has figured out how to run Expo CI/CD inside of a Docker container using Jenkins as the main way of facilitating that.
+ EXPO_DEBUG=true npx expo publish --non-interactive --release-channel develop
[07:50:24] Publishing to channel 'develop'...
[07:50:26] We noticed you did not build a standalone app with this SDK version and release channel before. Remember that OTA updates will not work with the app built with different SDK version and/or release channel. Read more: https://docs.expo.io/versions/latest/guides/publishing.html#limitations
[07:50:27] Building iOS bundle
[07:50:27] connect ECONNREFUSED 127.0.0.1:19001
[07:50:27] Error: connect ECONNREFUSED 127.0.0.1:19001
at TCPConnectWrap.afterConnect [as oncomplete] (net.js:1113:14)
My Jenkinsfile is pretty simple:
pipeline {
    agent {
        dockerfile {
            filename 'Dockerfile'
        }
    }
    stages {
        stage('slack notification') {
            agent none
            steps {
                slackSend color: "good", message: "Build Started - ${env.JOB_NAME} ${env.BUILD_NUMBER} (<${env.BUILD_URL}|Open>)"
            }
        }
        stage('run tests') {
            steps {
                sh 'cd /project && yarn test'
            }
        }
        stage('publish to expo') {
            environment {
                expo_creds = credentials('expo_credentials')
            }
            steps {
                sh "npx expo login -u $expo_creds_USR -p $expo_creds_PSW && mv env.beta.ts env.ts && EXPO_DEBUG=true npx expo publish --non-interactive --release-channel ${env.BRANCH_NAME}"
            }
        }
    }
    post {
        success {
            slackSend color: "good", message: "Build Finished - ${env.JOB_NAME} ${env.BUILD_NUMBER} (<${env.BUILD_URL}|Open>) duration: ${currentBuild.durationString}"
        }
        unstable {
            slackSend color: "warning", message: "Build Unstable - ${env.JOB_NAME} ${env.BUILD_NUMBER} (<${env.BUILD_URL}|Open>) duration: ${currentBuild.durationString}"
        }
        failure {
            slackSend color: "danger", message: "Build Failed - ${env.JOB_NAME} ${env.BUILD_NUMBER} (<${env.BUILD_URL}|Open>) duration: ${currentBuild.durationString}"
        }
    }
}
And my Dockerfile looks as follows:
FROM node:10.13-alpine as npm-dependencies
WORKDIR /project
RUN apk add --no-cache \
autoconf \
libtool \
automake \
g++ \
make \
libjpeg-turbo-dev \
libpng-dev \
libwebp-dev \
nasm
COPY yarn.lock .
COPY package.json .
COPY .npmrc .
RUN yarn install
FROM node:10.13-jessie
WORKDIR /project
COPY custom_types ./custom_types
COPY img ./img
COPY assets ./assets
COPY src ./src
COPY tests ./tests
COPY babel.config.js ./
COPY .buckconfig ./
COPY .flowconfig ./
COPY .watchmanconfig ./
COPY app.json .
COPY App.js .
COPY env.docker.ts ./env.ts
COPY tsconfig.json .
COPY package.json .
COPY jest.config.js .
COPY --from=npm-dependencies /project/node_modules /project/node_modules
RUN npm install -g expo-cli
RUN mkdir /.npm && chmod 0777 /.npm
RUN mkdir /.cache && chmod 0777 /.cache
RUN mkdir /.yarn && chmod 0777 /.yarn
RUN mkdir /.expo && chmod 0777 /.expo
RUN mkdir /project/.expo && chmod 0777 /project/.expo
Okay, so this is really just Expo-specific and is probably a bug in how it's made. After doing the login step, I manually cd'd into the /project directory and then ran rm -rf .expo.
Setting the CWD to /project and then deleting .expo fixes the issue. Why it worked outside of Jenkins but not inside is still a bit befuddling; however, the combination of those two actions resolved it for me.
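A minimal sketch of folding that workaround into the 'publish to expo' stage of the Jenkinsfile above (the exact placement of the cleanup is an assumption based on the steps described):
steps {
    // clear expo's cached packager state, per the workaround described above
    sh "rm -rf /project/.expo"
    // then publish from /project instead of the workspace
    sh "npx expo login -u $expo_creds_USR -p $expo_creds_PSW && mv env.beta.ts env.ts && cd /project && EXPO_DEBUG=true npx expo publish --non-interactive --release-channel ${env.BRANCH_NAME}"
}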