I have been trying to deploy Rocket.Chat from Jenkins. Initially I did my own installation of the dependencies, but then I found the cleanup code given in the tutorial, shown below. This code seems inconsistent. I am copying it as-is to avoid any confusion. Can anyone help with corrections, as this is failing with syntax errors?
#Clean up if any left overs from last build
SCREEN_RUNNING=’/usr/bin/pgrep SCREEN’
SCREEN_RUNNING fi NODE_RUNNING=’/usr/bin/pgrep node’ ifNODE_RUNNING fi
if [ -f master.zip ]; then rm -f master.zip fi
INSTDIR=./Rocket.Chat-master
if [ -dINSTDIR fi MONDIR=/home/ubuntu/db
if [ -d $MONDIR ]; then rm -Rf /home/ubuntu/db fi pwd
#Install packages we need for the build
sudo apt-get install unzip
curl https://install.meteor.com/ | sh
sudo npm install -g n
sudo n 0.10.40
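In case it helps, here is a minimal sketch of what the mangled block was probably meant to say. The curly quotes look like backtick command substitutions that a word processor rewrote, and the if/fi blocks lost their bodies in the copy; the kill calls are my guess at what the original did with the pgrep output:

```shell
#!/bin/sh
# Hypothetical reconstruction of the tutorial's cleanup step; the kill
# commands are an assumption about what the original did with pgrep output.

stop_leftovers() {
    # kill any process (by name) left over from the last build
    PIDS=$(pgrep "$1" || true)
    if [ -n "$PIDS" ]; then
        kill $PIDS
    fi
}

clean_leftover_files() {
    # remove build artifacts from the last run
    if [ -f master.zip ]; then
        rm -f master.zip
    fi
    INSTDIR=./Rocket.Chat-master
    if [ -d "$INSTDIR" ]; then
        rm -rf "$INSTDIR"
    fi
    MONDIR="${MONDIR:-/home/ubuntu/db}"
    if [ -d "$MONDIR" ]; then
        rm -rf "$MONDIR"
    fi
}

# A Jenkins build step would then run, in order:
#   stop_leftovers SCREEN
#   stop_leftovers node
#   clean_leftover_files
```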
I run the following Dockerfile in order to build the image for my TeamCity Agent:
FROM jetbrains/teamcity-agent:2022.10.1-linux-sudo
RUN curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
RUN sudo sh -c 'echo deb https://apt.kubernetes.io/ kubernetes-xenial main > /etc/apt/sources.list.d/kubernetes.list'
RUN curl -sL https://deb.nodesource.com/setup_16.x | sudo -E bash -
# https://github.com/AdoptOpenJDK/openjdk-docker/blob/master/12/jdk/ubuntu/Dockerfile.hotspot.releases.full
RUN sudo apt-get update && \
sudo apt-get install -y ffmpeg gnupg2 git sudo kubectl \
binfmt-support qemu-user-static mc jq
#RUN wget -O - https://apt.kitware.com/keys/kitware-archive-latest.asc 2>/dev/null | gpg --dearmor - | sudo tee /etc/apt/trusted.gpg.d/kitware.gpg >/dev/null
#RUN sudo apt-add-repository 'deb https://apt.kitware.com/ubuntu/ focal main' && \
# sudo apt-get update && \
RUN sudo apt install -y cmake build-essential wget
RUN sudo curl -L https://nodejs.org/dist/v14.17.3/node-v14.17.3-linux-x64.tar.gz --output node-v14.17.3-linux-x64.tar.gz
RUN sudo tar -xvf node-v14.17.3-linux-x64.tar.gz
RUN echo 'export PATH="$HOME/node-v14.17.3-linux-x64/bin:$PATH"' >> ~/.bashrc
RUN echo "The version of Node.js is $(node -v)"
All the code was right, but then I decided to add a Node.js installation to the Dockerfile. It begins at this line:
RUN sudo curl -L https://nodejs.org/dist/v14.17.3/node-v14.17.3-linux-x64.tar.gz --output node-v14.17.3-linux-x64.tar.gz
However, the problem right now is that I have the following error, during execution of the last line of the Dockerfile:
RUN echo "The version of Node.js is $(node -v)"
Output for this line is:
Step 10/22 : RUN echo "The version of Node.js is $(node -v)"
21:07:41 ---> Running in 863b0e75e45a
21:07:42 /bin/sh: 1: node: not found
You need to make the 2 following changes in your Dockerfile for your node installation to be included in your $PATH env var:
Remove the $HOME variable from the path you're concatenating, as you are currently downloading node to your root folder and not the $HOME folder:
RUN echo 'export PATH="/node-v14.17.3-linux-x64/bin:$PATH"' >> ~/.bashrc
Either source ~/.bashrc explicitly for the $PATH changes to take place, or run the export command as part of the Dockerfile.
Once you apply these 2 changes, the error should go away.
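A third option (my own sketch, not part of the answer above): because each RUN step executes in a fresh non-interactive shell, nothing ever sources ~/.bashrc during the build, so an exported PATH there is lost. The Dockerfile ENV instruction persists the change for all later RUN steps and for the running container. Assuming the tarball is unpacked at / as in the Dockerfile above:

```dockerfile
# ENV persists for every later RUN step and for the running container,
# unlike an export appended to ~/.bashrc, which non-interactive shells
# never source.
ENV PATH="/node-v14.17.3-linux-x64/bin:${PATH}"
RUN echo "The version of Node.js is $(node -v)"
```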
The idea here is simple - dbt provides a way to generate static files and serve them using the commands dbt docs generate and dbt docs serve, and I want to share them so that everyone in my organization can see them (bypassing security concerns for now). For this task I thought Cloud Run would be the ideal solution, as I already have a Dockerfile and bash scripts which do some background work (a cron job to clone the git repo every x hours, etc.). Running this container locally works fine.
But deploying this image to Cloud Run wasn't successful - it fails on the last step (which is dbt docs serve --port 8080) with the default error message Cloud Run error: The user-provided container failed to start and listen on the port defined provided by the PORT=8080 environment variable. Logs for this revision might contain more information. No additional information was printed in the logs before that.
Dockerfile:
FROM --platform=$build_for python:3.9.9-slim-bullseye
WORKDIR /usr/src/dbtdocs
RUN apt-get update && apt-get install -y --no-install-recommends git apt-transport-https ca-certificates gnupg curl cron \
&& apt-get clean
RUN DEBIAN_FRONTEND=noninteractive apt-get -y install tzdata
RUN echo "deb [signed-by=/usr/share/keyrings/cloud.google.gpg] http://packages.cloud.google.com/apt cloud-sdk main" | tee -a /etc/apt/sources.list.d/google-cloud-sdk.list && curl https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key --keyring /usr/share/keyrings/cloud.google.gpg add - && apt-get update -y && apt-get install google-cloud-sdk -y
RUN python -m pip install --upgrade pip setuptools wheel --no-cache-dir
RUN pip install dbt-bigquery
RUN ln -s /usr/local/bin/dbt /usr/bin/
RUN rm -rf /var/lib/apt/lists/*
COPY ./api-entrypoint.sh /usr/src/dbtdocs/
COPY ./cron_dbt_docs.sh /usr/src/dbtdocs/
COPY ./cron_script.sh /usr/src/dbtdocs/
ENV PORT=8080
RUN chmod 755 api-entrypoint.sh
RUN chmod 755 cron_dbt_docs.sh
RUN chmod 755 cron_script.sh
ENTRYPOINT ["/bin/bash", "-c", "/usr/src/dbtdocs/api-entrypoint.sh" ]
api-entrypoint.sh
#!/bin/bash
#set -e
#catch() {
# echo 'catching!'
# if [ "$1" != "0" ]; then
# echo "Error $1 occurred on $2"
# fi
#}
#trap 'catch $? $LINENO' EXIT
exec 2>&1
echo 'Starting DBT Workload'
echo 'Checking dependencies'
dbt --version
git --version
mkdir -p /data/dbt/ && cd /data/dbt/
echo 'Cloning dbt Repo'
git clone ${GITLINK} /data/dbt/
echo 'Working on dbt directory'
export DBT_PROFILES_DIR=/data/dbt/profile/
echo "Authentificate at GCP"
echo "Decrypting and saving sa.json file"
mkdir -p /usr/src/secret/
echo "${SA_SECRET}" | base64 --decode > /usr/src/secret/sa.json
gcloud auth activate-service-account ${SA_EMAIL} --key-file /usr/src/secret/sa.json
echo 'The Project set'
if test "${PROJECT_ID}"; then
gcloud config set project ${PROJECT_ID}
gcloud config set disable_prompts true
else
echo "Project Name not in environment variables ${PROJECT_ID}"
fi
echo 'Use Google Cloud Secret Manager Secret'
if test "${PROFILE_SECRET_NAME}"; then
#mkdir -p /root/.dbt/
mkdir -p /root/secret/
gcloud secrets versions access latest --secret="${PROFILE_SECRET_NAME}" > /root/secret/creds.json
export GOOGLE_APPLICATION_CREDENTIALS=/root/secret/creds.json
else
echo 'No Secret Name described - GCP Secret Manager'
fi
echo 'Apply cron Scheduler'
sh -c "/usr/src/dbtdocs/cron_script.sh install"
/etc/init.d/cron restart
touch /data/dbt_docs_job.log
sh -c "/usr/src/dbtdocs/cron_dbt_docs.sh"
touch /data/cron_up.log
tail -f /data/dbt_docs_job.log &
tail -f /data/cron_up.log &
dbt docs serve --port 8080
The container port is set to 8080 when creating the Cloud Run service, so I don't think that's the problem here.
Has anyone actually encountered similar problems using Cloud Run?
Logs in Cloud Logging
Your container is not listening/responding on port 8080 and has been terminated before the server process starts listening.
Review the last line in the logs. The previous line is building catalog.
Your container is taking too long to startup. Containers should start within 10 seconds because Cloud Run will only keep pending requests for 10 seconds.
All of the work I see in the logs should be performed before the container is deployed and not during container start.
The solution is to redesign how you are building and deploying this container so that the application begins responding to requests as soon as the container starts.
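Following that advice, one possible shape (a sketch, under the assumption that dbt docs generate is run in CI before docker build and that a hypothetical target/ directory holds its output) is to bake the static files into the image and have the container do nothing at startup except listen on $PORT:

```dockerfile
# Hypothetical restructuring: do the slow work (clone, dbt docs generate)
# in CI before "docker build", copy only the static output into the image,
# and let the container start a web server immediately.
FROM python:3.9.9-slim-bullseye
WORKDIR /usr/src/dbtdocs
# target/ holds index.html, catalog.json, manifest.json produced by
# "dbt docs generate" in the CI step (assumed path)
COPY ./target/ ./target/
ENV PORT=8080
# starts listening within a second or two, satisfying Cloud Run's check
CMD ["sh", "-c", "python -m http.server ${PORT} --directory target"]
```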
I am getting this error for one of the GitLab CI jobs when the gitlab-runner is using the docker executor and one of the images I built.
This is the job that fails in gitlab-ci.yml:
image:
name: 19950818/banu-terraform-ansible-cicd
.
.
.
create-ssh-key-pair:
stage: create-ssh-key-pair
script:
- pwd
- mkdir -p ~/.ssh
# below lines gives the error
- |
# !/bin/bash
FILE=~/.ssh/id_rsa
if [ -f "$FILE" ]; then
echo "$FILE exists."
else
ssh-keygen -q -t rsa -N '' -f ~/.ssh/id_rsa 2>/dev/null <<< y >/dev/null
fi
But these lines DON'T produce the error when the executor is shell.
This is the Dockerfile for the image 19950818/banu-terraform-ansible-cicd
FROM centos:7
ENV VER "0.12.9"
RUN yum update -y && yum install wget -y && yum install unzip -y
RUN yum install epel-release -y && yum install ansible -y
RUN wget https://releases.hashicorp.com/terraform/${VER}/terraform_${VER}_linux_amd64.zip
RUN unzip terraform_${VER}_linux_amd64.zip
RUN mv terraform /usr/local/bin/
RUN rm -rf terraform_${VER}_linux_amd64.zip
Can someone please tell me what is happening and how to overcome this problem?
My suspicion is that the ssh-keygen -q -t rsa -N '' -f ~/.ssh/id_rsa 2>/dev/null <<< y >/dev/null line causes the error.
Change - | to - >.
See also GitLab Runner Issue #166.
# below lines gives the error
- |
# !/bin/bash
FILE=~/.ssh/id_rsa
if [ -f "$FILE" ]; then
echo "$FILE exists."
else
ssh-keygen -q -t rsa -N '' -f ~/.ssh/id_rsa 2>/dev/null <<< y >/dev/null
fi
Despite the # !/bin/bash, that part of the command is most likely being parsed by /bin/sh. The various parts of the script are passed to the entrypoint of the container, which will be /bin/sh -c, and that will read the first line as a comment. If it were passed as a script to run, you'd at least need to remove the space, so #!/bin/bash, but I suspect it would still be read as a comment depending on how gitlab calls the script and merges it with the other scripts to run.
Why would that break with /bin/sh? <<< y is bash-specific syntax. That could be changed to:
echo y | ssh-keygen -q -t rsa -N '' -f ~/.ssh/id_rsa 2>/dev/null >/dev/null
If you want to see error messages from the command to make debugging easier, then eliminate the output redirections
echo y | ssh-keygen -q -t rsa -N '' -f ~/.ssh/id_rsa
Or if you really want to use bash for other reasons, then change the entrypoint of the image:
image:
name: 19950818/banu-terraform-ansible-cicd
entrypoint: ["/bin/bash", "-c"]
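For completeness, the first option applied to the job could look like this (a sketch; stage and image names as in the question):

```yaml
create-ssh-key-pair:
  stage: create-ssh-key-pair
  script:
    - pwd
    - mkdir -p ~/.ssh
    - |
      FILE=~/.ssh/id_rsa
      if [ -f "$FILE" ]; then
        echo "$FILE exists."
      else
        # "echo y |" replaces the bash-only "<<< y" herestring, so this
        # also works when the image's entrypoint shell is /bin/sh
        echo y | ssh-keygen -q -t rsa -N '' -f ~/.ssh/id_rsa 2>/dev/null >/dev/null
      fi
```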
I have been using docker for some time now. I have encountered a situation wherein I need to execute instructions in the Dockerfile based on some condition. For example, here is a snippet of the Dockerfile:
FROM centos:centos7
MAINTAINER Akshay <akshay#dm.com>
# Update and install required binaries
RUN yum update -y \
&& yum install -y which wget openssh-server sudo java-1.8.0-openjdk \
&& yum clean all
#install Maven
RUN curl -Lf http://archive.apache.org/dist/maven/maven-3/3.3.9/binaries/apache-maven-3.3.9-bin.tar.gz -o /tmp/apache-maven-3.3.9.tar.gz
RUN tar -xzf /tmp/apache-maven-3.3.9.tar.gz -C /opt \
&& rm /tmp/apache-maven-3.3.9.tar.gz
ENV M2_HOME "/opt/apache-maven-3.3.9"
ENV PATH ${PATH}:${M2_HOME}/bin:
# Install Ant
ENV ANT_VERSION 1.9.4
RUN cd && \
wget -q http://archive.apache.org/dist/ant/binaries/apache-ant-${ANT_VERSION}-bin.tar.gz && \
tar -xzf apache-ant-${ANT_VERSION}-bin.tar.gz && \
mv apache-ant-${ANT_VERSION} /opt/ant && \
rm apache-ant-${ANT_VERSION}-bin.tar.gz
ENV ANT_HOME /opt/ant
ENV PATH ${PATH}:/opt/ant/bin
......
So as you can see in my Dockerfile, I have installation instructions for both Maven and Ant. But now I have to install only one of them based on a condition. I know that I can use the ARG instruction in the Dockerfile to fetch an argument at build time, but the problem is I couldn't find any docs on how to enclose the instructions in an if/else block.
I have read a few other stackoverflow posts regarding this too, but in those they suggest using conditional statements inside a single instruction, e.g. RUN if $BUILDVAR -eq "SO"; then export SOMEVAR=hello; else export SOMEVAR=world; fi, or writing a separate script file like this.
But as you can see, my case is different. I can't make use of this, since I have a bunch of other instructions which also depend on that argument. I have to do something like this:
ARG BUILD_TOOL
if [ "${BUILD_TOOL}" = "MAVEN" ]; then
--install maven
elif [ "${BUILD_TOOL}" = "ANT" ]; then
--install ant
fi
.... other instructions ....
if [ "${BUILD_TOOL}" = "MAVEN" ]; then
--some other dependent commands
elif [ "${BUILD_TOOL}" = "ANT" ]; then
--some other dependent commands
fi
If you don't want to use all those RUN if statements, you can instead create a bash script with the setup procedure and call it from the Dockerfile. For example:
FROM centos:centos7
MAINTAINER Someone <someone#email.com>
ARG BUILD_TOOL
COPY setup.sh /setup.sh
RUN ./setup.sh
RUN rm /setup.sh
And the setup.sh file (don't forget to make it executable):
#!/bin/sh
if [ "${BUILD_TOOL}" = "MAVEN" ]; then
echo "Step 1 of MAVEN setup";
echo "(...)";
echo "Done MAVEN setup";
elif [ "${BUILD_TOOL}" = "ANT" ]; then
echo "Step 1 of ANT setup";
echo "(...)";
echo "Done ANT setup";
fi
You can then build it using docker build --build-arg BUILD_TOOL=MAVEN . (or ANT).
Note that I used a shell script here, but if you have other interpreters available (ex: python or ruby), you can also use them to write the setup script.
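As a side note (my own sketch, not part of the answer): when setup.sh dispatches on a single variable, a case statement often reads better than an if/elif chain. A hypothetical skeleton, with the echo lines standing in for the real install commands:

```shell
#!/bin/sh
# Hypothetical skeleton for setup.sh: dispatch on the BUILD_TOOL build-arg
# with a case statement instead of an if/elif chain.
install_build_tool() {
    case "$1" in
        MAVEN)
            echo "Step 1 of MAVEN setup"   # real maven install commands go here
            ;;
        ANT)
            echo "Step 1 of ANT setup"     # real ant install commands go here
            ;;
        *)
            echo "Unsupported BUILD_TOOL: $1" >&2
            return 1                       # fail the docker build on a bad arg
            ;;
    esac
}

install_build_tool "${BUILD_TOOL:-MAVEN}"
```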
I don't know why this line returns 1 when I run it in a Dockerfile:
RUN sh -c "$(wget https://raw.githubusercontent.com/robbyrussell/oh-my-zsh/master/tools/install.sh -O -)"
I have wget installed and I don't know why it returns 1 (no error message).
No idea, but you don't have to use the one-step install shorthand, which might give you a better idea of where the command is failing.
RUN set -uex; \
wget https://raw.githubusercontent.com/robbyrussell/oh-my-zsh/master/tools/install.sh; \
sh ./install.sh; \
rm ./install.sh
I think it's due to the interactive part of the installation script.
You should generate the .zshrc beforehand:
RUN apt-get install -y zsh
RUN git clone https://github.com/robbyrussell/oh-my-zsh \
<installation_path>/.oh-my-zsh
COPY conf/.zshrc <installation_path>/.zshrc
When you run the following command to install oh-my-zsh, the command installs oh-my-zsh successfully but exits with code 1 (you can run echo $? to check):
sh -c "$(wget https://raw.githubusercontent.com/robbyrussell/oh-my-zsh/master/tools/install.sh -O -)"
echo $? #output 1
But docker build treats any command that exits with a non-zero code as an error. To solve it, we can append a zero-returning command after the install command:
RUN sh -c "$(wget https://raw.githubusercontent.com/robbyrussell/oh-my-zsh/master/tools/install.sh -O -)"; exit 0;
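The mechanism is easy to demonstrate outside docker (a minimal sketch, with sh -c 'exit 1' standing in for the installer): the exit status docker build checks is only that of the last command, so appending exit 0 (or the more conventional || true) hides the installer's non-zero status:

```shell
# Minimal demonstration of the mechanism, with sh -c 'exit 1' standing in
# for the oh-my-zsh installer.

status=0
sh -c 'exit 1' || status=$?
echo "status without a trailing exit 0: $status"   # prints 1

# wrapping the failing command and appending "exit 0" makes the compound
# command succeed as a whole, which is what docker build checks
status=0
( sh -c 'exit 1'; exit 0 ) || status=$?
echo "status with a trailing exit 0: $status"      # prints 0
```

Note that `; exit 0` (or `|| true`) also hides genuine installer failures, so the image can build even when oh-my-zsh did not actually install.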