I'm trying to deploy Atlantis on a Cloud Run Gen2 service with a GCS bucket mounted to it via gcsfuse.
Most of it works fine: the Atlantis server starts and handles requests properly, and files are written to the GCS bucket through gcsfuse.
But when Atlantis tries to clone a git repository (as part of the atlantis plan command), it returns the following error:
running git clone --branch f/gcsfuse-cloudrun --depth=1 --single-branch https://xxxxxxxx:<redacted>#github.com/xxxxxxxx/xxxxxxxx.git /app/atlantis/repos/xxxxxxxx/xxxxxxxx/29/default: Cloning into '/app/atlantis/repos/xxxxxxxx/xxxxxxxx/29/default'...
error: chmod on /app/atlantis/repos/xxxxxxxx/xxxxxxxx/29/default/.git/config.lock failed: Operation not permitted
fatal: could not set 'core.filemode' to 'false'
: exit status 128
I believe I'm very close, but I'm not too knowledgeable about Linux file system permissions.
My Dockerfile is as follows:
FROM ghcr.io/runatlantis/atlantis:v0.21.1-pre.20221213-debian
USER root
# Install Python
ENV PYTHONUNBUFFERED=1
RUN apt-get update -y
RUN apt-get install -y python3 python3-pip
# Install system dependencies
RUN set -e; \
apt-get update -y && apt-get install -y \
tini \
lsb-release; \
gcsFuseRepo=gcsfuse-`lsb_release -c -s`; \
echo "deb http://packages.cloud.google.com/apt $gcsFuseRepo main" | \
tee /etc/apt/sources.list.d/gcsfuse.list; \
curl https://packages.cloud.google.com/apt/doc/apt-key.gpg | \
apt-key add -; \
apt-get update; \
apt-get install -y gcsfuse \
&& apt-get clean
# Set fallback mount directory
ENV MNT_DIR /app/atlantis
# Create mount directory for service
RUN mkdir -p ${MNT_DIR}
RUN chown -R atlantis /app/atlantis/
RUN chmod -R 777 /app/atlantis/
WORKDIR $MNT_DIR
# Copy local code to the container image.
ENV APP_HOME /app
WORKDIR $APP_HOME
COPY gcsfuse_run.sh ./
# Make the script an executable
RUN chmod +x /app/gcsfuse_run.sh
ENTRYPOINT ["/app/gcsfuse_run.sh"]
The entrypoint script referenced above is as follows:
#!/usr/bin/env bash
set -eo pipefail
echo "Mounting GCS Fuse to $MNT_DIR"
gcsfuse -o allow_other -file-mode=777 -dir-mode=777 --implicit-dirs --debug_gcs --debug_fuse $BUCKET $MNT_DIR
echo "Mounting completed."
# This is a atlantis provided docker script that comes from the base image
/usr/local/bin/docker-entrypoint.sh server
Help is highly appreciated!
We simulated the exact steps but didn't face the issue.
We also found the same type of issue reported in many places, and the solutions below worked there:
Run the server with sudo permission.
Restart the system.
git config --global --replace-all core.fileMode false
The chmod operation is not supported by gcsfuse. As such, the suggestion by @tulsi-shah (git config --global --replace-all core.fileMode false) provides a workaround.
https://github.com/googlecloudplatform/gcsfuse/blob/master/docs/semantics.md#inodes
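One way to apply that workaround in this setup would be to set it in gcsfuse_run.sh before Atlantis starts. A minimal sketch based on the entrypoint script from the question (note the setting must land in the home directory of whichever user Atlantis runs git as):
# after the gcsfuse mount: disable git's file-mode tracking, since chmod is not supported on the gcsfuse mount
git config --global --replace-all core.fileMode false
# then start Atlantis as before
/usr/local/bin/docker-entrypoint.sh server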
Related
I am using Poetry to install a Python project in a Docker container. Below you can find my Dockerfile, which worked fine until recently, when I switched to a new version of Poetry (1.2.1) and the new recommended Poetry installer:
# pull official base image
FROM ubuntu:20.04
ENV PATH = "${PATH}:/home/poetry/bin"
ENV APP_HOME=/home/app/web
RUN apt-get -y update && \
apt upgrade -y && \
apt-get install -y \
python3-pip \
curl \
netcat \
gunicorn && \
rm -fr /var/lib/apt/lists
# alias python2 to python3
RUN ln -s /usr/bin/python3 /usr/bin/python
# Install Poetry
RUN mkdir -p /home/poetry && \
curl -sSL https://install.python-poetry.org | POETRY_HOME=/home/poetry python -
# Cleanup
RUN apt-get remove -y curl && \
apt-get clean
RUN pip install --upgrade pip && \
pip install cryptography && \
pip install psycopg2-binary
# create directory for the app user
# create the app user
# create the appropriate directories
RUN adduser --system --group app && \
mkdir -p $APP_HOME/static-incdtim && \
mkdir -p $APP_HOME/mediafiles
# copy project
COPY . $APP_HOME
WORKDIR $APP_HOME
# Install Python packages
RUN poetry config virtualenvs.create false
RUN poetry install --only main
# copy entrypoint-prod.sh
COPY ./entrypoint.incdtim.prod.sh $APP_HOME/entrypoint.sh
RUN chmod a+x $APP_HOME/entrypoint.sh
# chown all the files to the app user
RUN chown -R app:app $APP_HOME
# change to the app user
USER app
# run entrypoint.prod.sh
ENTRYPOINT ["/home/app/web/entrypoint.sh"]
The poetry install works fine, I attached to a running container and run it myself and found that it works without problems. However, when I open a Python console and try to import a module (django) which is installed by the Poetry project, the module is not found. Please note that I am installing my project in the system environment (poetry config virtualenvs.create false). I verified, and there is only one version of python installed in the docker container. The specific error I get when trying to import a python module installed by Poetry is: ModuleNotFoundError: No module named xxxx
Although this is not an answer, it is too long to fit within the comment section. It is rather a piece of advice:
Declare your ENV variables at the top of the Dockerfile to make it easier to read.
Merge the multiple RUN commands together to avoid creating useless intermediate layers. In the particular case of apt-get install, this also prevents you from installing packages from a stale package list dating back to the first "apt-get update": since that command line has not changed, Docker will reuse the cached layer and thus not refresh the list.
Avoid copying all the files in "." when you have already copied specific files to specific places.
Here, your Dockerfile could rather look like this:
# pull official base image
FROM ubuntu:20.04
ENV PATH = "${PATH}:/home/poetry/bin"
ENV HOME=/home/app
ENV APP_HOME=/home/app/web
RUN apt-get -y update && \
apt upgrade -y && \
apt-get install -y \
python3-pip \
curl \
netcat \
gunicorn && \
rm -fr /var/lib/apt/lists
# alias python2 to python3
RUN ln -s /usr/bin/python3 /usr/bin/python
# Install Poetry
RUN mkdir -p /home/poetry && \
curl -sSL https://install.python-poetry.org | POETRY_HOME=/home/poetry python -
# Cleanup
RUN apt-get remove -y \
curl && \
apt-get clean
RUN pip install --upgrade pip && \
pip install cryptography && \
pip install psycopg2-binary
# create directory for the app user
# create the app user
# create the appropriate directories
RUN mkdir -p /home/app && \
adduser --system --group app && \
mkdir -p $APP_HOME/static-incdtim && \
mkdir -p $APP_HOME/mediafiles
WORKDIR $APP_HOME
# copy project
COPY . $APP_HOME
# Install Python packages
RUN poetry config virtualenvs.create false && \
poetry install --only main
# copy entrypoint-prod.sh
RUN cp $APP_HOME/entrypoint.incdtim.prod.sh $APP_HOME/entrypoint.sh && \
chmod a+x $APP_HOME/entrypoint.sh && \
chown -R app:app $APP_HOME
# change to the app user
USER app
# run entrypoint.prod.sh
ENTRYPOINT ["/home/app/web/entrypoint.sh"]
UPDATE:
Let's get back to your question. Having your program running okay when you "run it yourself" does not mean all the dependencies are met. Indeed, this can mean that your module has not been imported yet (and thus has not triggered the ModuleNotFoundError exception yet).
In order to validate this theory, you can either:
create a simple application which imports the failing module and then quits (a one-liner sketch follows this list). If the import succeeds, then there is something weird indeed.
list the installed modules with poetry show --latest. If the package is listed, then there is something weird indeed.
If none of the above indicates the module is installed, that just means the module is not installed and you should update your Dockerfile to install it.
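For the first check, a minimal one-liner, assuming the missing module is django as in the question, could be run inside the container:
python3 -c "import django; print(django.get_version())"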
NOTE: I do not know much about Poetry, but you may want to have a list of external dependencies to be met for your application. In the case of pip3, the list is usually expressed as a file named requirements.txt and can be installed with pip3 install -r requirements.txt.
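If you want such a list with Poetry, poetry export can generate a pip-compatible one (assuming the export plugin bundled with Poetry 1.2+ is available):
poetry export -f requirements.txt --output requirements.txt
pip3 install -r requirements.txt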
It turns out this is a known bug in Poetry: https://github.com/python-poetry/poetry/issues/6459
Just doing a docker container start on this official Logstash Docker container makes Logstash run properly, given the right config.
Its Dockerfile does not have an ENTRYPOINT or CMD, or anything of the sort, though, and I am not issuing one on the start command either. So how is Logstash actually getting executed in this case?
I need to know because I need to edit the command for other reasons. We're working on running it in Kubernetes but are just testing with local Docker for now.
https://github.com/elastic/logstash/blob/7.15/Dockerfile
Copied for easy reference:
FROM ubuntu:bionic
RUN apt-get update && \
apt-get install -y zlib1g-dev build-essential vim rake git curl libssl-dev libreadline-dev libyaml-dev \
libxml2-dev libxslt-dev openjdk-11-jdk-headless curl iputils-ping netcat && \
apt-get clean
WORKDIR /root
RUN adduser --disabled-password --gecos "" --home /home/logstash logstash && \
mkdir -p /usr/local/share/ruby-build && \
mkdir -p /opt/logstash && \
mkdir -p /opt/logstash/data && \
mkdir -p /mnt/host && \
chown logstash:logstash /opt/logstash
USER logstash
WORKDIR /home/logstash
# used by the purge policy
LABEL retention="keep"
# Setup gradle wrapper. When running any `gradle` command, a `settings.gradle` is expected (and will soon be required).
# This section adds the gradle wrapper, `settings.gradle` and sets the permissions (setting the user to root for `chown`
# and working directory to allow this and then reverts back to the previous working directory and user.
COPY --chown=logstash:logstash gradlew /opt/logstash/gradlew
COPY --chown=logstash:logstash gradle/wrapper /opt/logstash/gradle/wrapper
COPY --chown=logstash:logstash settings.gradle /opt/logstash/settings.gradle
WORKDIR /opt/logstash
RUN for iter in `seq 1 10`; do ./gradlew wrapper --warning-mode all && exit_code=0 && break || exit_code=$? && echo "gradlew error: retry $iter in 10s" && sleep 10; done; exit $exit_code
WORKDIR /home/logstash
ADD versions.yml /opt/logstash/versions.yml
ADD LICENSE.txt /opt/logstash/LICENSE.txt
ADD NOTICE.TXT /opt/logstash/NOTICE.TXT
ADD licenses /opt/logstash/licenses
ADD CONTRIBUTORS /opt/logstash/CONTRIBUTORS
ADD Gemfile.template Gemfile.jruby-2.5.lock.* /opt/logstash/
ADD Rakefile /opt/logstash/Rakefile
ADD build.gradle /opt/logstash/build.gradle
ADD rubyUtils.gradle /opt/logstash/rubyUtils.gradle
ADD rakelib /opt/logstash/rakelib
ADD config /opt/logstash/config
ADD spec /opt/logstash/spec
ADD qa /opt/logstash/qa
ADD lib /opt/logstash/lib
ADD pkg /opt/logstash/pkg
ADD tools /opt/logstash/tools
ADD logstash-core /opt/logstash/logstash-core
ADD logstash-core-plugin-api /opt/logstash/logstash-core-plugin-api
ADD bin /opt/logstash/bin
ADD modules /opt/logstash/modules
ADD x-pack /opt/logstash/x-pack
ADD ci /opt/logstash/ci
USER root
RUN rm -rf build && \
mkdir -p build && \
chown -R logstash:logstash /opt/logstash
USER logstash
WORKDIR /opt/logstash
LABEL retention="prune"
If you look at the final layer on the image here, it looks like there is an ENTRYPOINT ["/usr/local/bin/docker-entrypoint"]. The Dockerfile you've linked might not be the one used to build the image.
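If you want to verify this yourself, docker inspect shows the ENTRYPOINT and CMD baked into whatever image you actually pulled (the tag below is just an example):
docker inspect --format '{{json .Config.Entrypoint}} {{json .Config.Cmd}}' docker.elastic.co/logstash/logstash:7.15.2
When you move to Kubernetes, command: in the container spec overrides the image's ENTRYPOINT and args: overrides its CMD, which is usually how you would edit the command there.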
My Dockerfile:
FROM golang:1.11.4
RUN apt-get update && apt-get install git bash curl -yqq
ENV ENV test
ENV GIT_TERMINAL_PROMPT=1
ENV GITHUB_TOKEN XXXXXXXXXXXXXXXXXX
RUN curl -Ls https://github.com/Masterminds/glide/releases/download/v0.12.3/glide-v0.12.3-linux-amd64.tar.gz | tar xz -C /tmp \
&& mv /tmp/linux-amd64/glide /usr/bin/
RUN curl https://raw.githubusercontent.com/golang/dep/master/install.sh | sh
RUN mkdir -p $GOPATH/src/github.com/<Myrepo>/
COPY . $GOPATH/src/github.com/<Myrepo>/
WORKDIR $GOPATH/src/github.com/<Myrepo>/
RUN dep ensure -vendor-only
When I build this Dockerfile, it hangs at RUN dep ensure -vendor-only.
It fails to pull the dependencies that live in private repos.
Is there any possibility to store git credentials inside Docker, or any other way to build the Docker image with one or more private Go repos?
Use something like this:
# ensure that the private GitHub repo is
# accessed over SSH instead of HTTPS
RUN mkdir -p /root/.ssh
RUN ssh-keyscan github.com > /root/.ssh/known_hosts
RUN echo "$SSH_KEY" > /root/.ssh/id_rsa && chmod 0600 /root/.ssh/id_rsa
RUN echo '[url "ssh://git@github.com/*your_repo*/"]' >> /root/.gitconfig && echo 'insteadOf = https://github.com/*your_repo*/' >> /root/.gitconfig
Refer to this to add an SSH key to your Git repo.
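Note that the snippet above assumes an SSH_KEY value is available at build time; one possible way to provide it (kept hypothetical here) is a build argument, keeping in mind the key then ends up in the image history:
# declare the build argument before the RUN lines that use $SSH_KEY
ARG SSH_KEY
# build with:
#   docker build --build-arg SSH_KEY="$(cat ~/.ssh/id_rsa)" -t myimage .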
Adding a .netrc file passes credentials into the Docker container and helps pull more than one private repository when building dependencies.
# vim .netrc
machine github.com
login <your github token>
Add those two lines and pass in your GitHub token:
FROM golang:1.11.4
RUN apt-get update && apt-get install git bash curl -yqq
ENV ENV test
ENV GIT_TERMINAL_PROMPT=1
ENV GITHUB_TOKEN XXXXXXXXXXXXXXXXXX
RUN curl -Ls https://github.com/Masterminds/glide/releases/download/v0.12.3/glide-v0.12.3-linux-amd64.tar.gz | tar xz -C /tmp \
&& mv /tmp/linux-amd64/glide /usr/bin/
RUN curl https://raw.githubusercontent.com/golang/dep/master/install.sh | sh
RUN mkdir -p $GOPATH/src/github.com/<Myrepo>/
COPY . $GOPATH/src/github.com/<Myrepo>/
COPY .netrc /root/
WORKDIR $GOPATH/src/github.com/<Myrepo>/
RUN dep ensure -vendor-only
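One caveat worth noting: COPY .netrc /root/ only works if the file is present in the build context, and the token then lives in an image layer, so keep the image private. A minimal usage sketch (the image tag is just an example):
cp ~/.netrc .
docker build -t my-go-service .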
While I'm trying to build an image, I'm getting the following error.
In this image I need to download Jenkins, run Jenkins in the background, then download jenkins-cli, and then give my input to the CLI.
FROM ubuntu:14.04
RUN apt-get update && \
apt-get upgrade -y && \
apt-get install -y software-properties-common && \
add-apt-repository ppa:webupd8team/java -y && \
apt-get update && \
echo oracle-java7-installer shared/accepted-oracle-license-v1-1 select true | /usr/bin/debconf-set-selections && \
apt-get install -f -y oracle-java9-installer && \
apt install -y default-jre curl wget git nano; \
apt-get clean
# Install dependencies
RUN apt-get -y update && \
apt-get -yqq --no-install-recommends install git bzip2 curl unzip && \
apt-get update
ENV JAVA_HOME /usr
ENV PATH $JAVA_HOME/bin:$PATH
# copy jenkins war file to the container
ADD http://mirrors.jenkins.io/war-stable/2.107.1/jenkins.war /opt/jenkins.war
RUN chmod 644 /opt/jenkins.war
ENV JENKINS_HOME /jenkins
# configure the container to run jenkins, mapping container port 8080 to that host port
ENTRYPOINT ["nohup","java", "-jar", "/opt/jenkins.war"]
RUN mkdir /jenkins/
RUN echo 2.107.1 > /jenkins/jenkins.install.UpgradeWizard.state
RUN echo 2.107.1 > /jenkins/jenkins.install.InstallUtil.lastExecVersion
EXPOSE 8080
VOLUME /jenkins
#jenkins-cli installation
RUN mkdir -p /jcli
RUN chmod 644 /jcli
RUN curl --insecure -OL http://192.168.99.100:8080/jnlpJars/jenkins-cli.jar \
--output /jcli/jenkins-cli.jar
VOLUME /ssh
ENV JENKINS_URL "http://192.168.99.100:8080"
ENV PRIVATE_KEY "C:\Users\himn\.ssh/id_rsa"
ENTRYPOINT ["java","-jar","/jcli/jenkins-cli.jar","-noCertificateCheck","-noKeyAuth"]
CMD ["--help"]
QUESTIONS
Is jenkins-cli downloading?
Which directory is it unable to locate?
Do I need to run Jenkins in the background to download jenkins-cli and make it work, and how do I do that?
And what are the solutions for it?
Thank you in advance!
You can try after updating your Dockerfile ENTRYPOINT to:
ENTRYPOINT ["java","-jar","/opt/tmp/jenkin-cli.jar","-noCertificateCheck","-noKeyAuth"]
and also remove the following lines:
RUN chmod 644 jenkins-cli.jar
WORKDIR /opt/tmp/jenkins-cli
COPY opt/tmp/jenkins-cli.jar ./jenkins-cli
This looks like a typo. In your curl command (i.e. step 19/27), --output is /opt/tmp/jenkin-cli.jar; it needs to be /opt/tmp/jenkins-cli.jar.
The error states that it is unable to locate the file /opt/tmp/jenkins-cli.jar because you created the file with the name jenkin-cli.jar and not jenkins-cli.jar.
The second mistake is that you are missing a / before opt. Moreover,
COPY works from host to container, not within the container. In that case, you don't need to download the CLI to create a container and image.
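If you do keep the download step in the Dockerfile (which only works while the Jenkins server at that URL is reachable during the build), a corrected sketch could look like this, using -o to name the output file explicitly:
RUN mkdir -p /opt/tmp && \
    curl --insecure -fSL http://192.168.99.100:8080/jnlpJars/jenkins-cli.jar -o /opt/tmp/jenkins-cli.jar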
I'm trying to learn SyntaxNet. I have it running through Docker, but I really don't know much about either SyntaxNet or Docker. On the GitHub SyntaxNet page it says:
The SyntaxNet models are configured via a combination of run-time
flags (which are easy to change) and a text format TaskSpec protocol
buffer. The spec file used in the demo is in
syntaxnet/models/parsey_mcparseface/context.pbtxt.
How exactly do I find the spec file to edit it?
I compiled SyntaxNet in a Docker container using these Instructions.
FROM java:8
ENV SYNTAXNETDIR=/opt/tensorflow PATH=$PATH:/root/bin
RUN mkdir -p $SYNTAXNETDIR \
&& cd $SYNTAXNETDIR \
&& apt-get update \
&& apt-get install git zlib1g-dev file swig python2.7 python-dev python-pip -y \
&& pip install --upgrade pip \
&& pip install -U protobuf==3.0.0b2 \
&& pip install asciitree \
&& pip install numpy \
&& wget https://github.com/bazelbuild/bazel/releases/download/0.2.2b/bazel-0.2.2b-installer-linux-x86_64.sh \
&& chmod +x bazel-0.2.2b-installer-linux-x86_64.sh \
&& ./bazel-0.2.2b-installer-linux-x86_64.sh --user \
&& git clone --recursive https://github.com/tensorflow/models.git \
&& cd $SYNTAXNETDIR/models/syntaxnet/tensorflow \
&& echo "\n\n\n" | ./configure \
&& apt-get autoremove -y \
&& apt-get clean
RUN cd $SYNTAXNETDIR/models/syntaxnet \
&& bazel test --genrule_strategy=standalone syntaxnet/... util/utf8/...
WORKDIR $SYNTAXNETDIR/models/syntaxnet
CMD [ "sh", "-c", "echo 'Bob brought the pizza to Alice.' | syntaxnet/demo.sh" ]
# COMMANDS to build and run
# ===============================
# mkdir build && cp Dockerfile build/ && cd build
# docker build -t syntaxnet .
# docker run syntaxnet
First, comment out the CMD line in the Dockerfile, then create and cd into an empty directory on your host machine. You can then create a container from the image, mounting a directory in the container to your hard drive:
docker run -it --rm -v "$(pwd)":/tmp syntaxnet bash
You'll now have a bash session in the container. Copy the spec file into /tmp from /opt/tensorflow/syntaxnet/models/parsey_mcparseface/context.pbtxt (I'm guessing that's where it is given the info you've provided above -- I can't get your dockerfile to build an image so I can't confirm it; you can always run find . -name context.pbtxt from root to find it), and exit the container (ctrl-d or exit).
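A minimal sketch of those in-container steps (the source path is the guess from above; adjust it to whatever find reports):
find / -name context.pbtxt 2>/dev/null
cp /opt/tensorflow/syntaxnet/models/parsey_mcparseface/context.pbtxt /tmp/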
You now have the file on your host's hard drive, ready to edit, but you really want it back in a running container. If the directory it comes from contains only that file, then you can simply mount your host directory at that path in the container. If it contains other things, then you can use a so-called bootstrap script to move the file from your mounted directory (in the example above, that's /tmp) to its home location. Alternatively, you may be able to tell the software where to find the spec file with a flag, but that will take more research.
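For the single-file case, a hedged sketch of bind-mounting the edited copy back over that guessed path when running the demo image:
docker run --rm \
  -v "$(pwd)/context.pbtxt":/opt/tensorflow/syntaxnet/models/parsey_mcparseface/context.pbtxt \
  syntaxnet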