I have created an SSH key using ssh-keygen and added the contents of id_rsa.pub to GitHub > Settings > SSH and GPG keys.
I am able to clone the repo from my terminal with git clone git@github.com:myname/myrepo.git
But the same clone gives the following error while building the Docker image.
Cloning into 'Project-Jenkins'...
Host key verification failed.
fatal: Could not read from remote repository.
Please make sure you have the correct access rights
and the repository exists.
This is how I added the command in the Dockerfile:
RUN git clone git@github.com:myname/myrepo.git
What went wrong here?
Here is my Dockerfile:
FROM ubuntu
COPY script.sh /script.sh
CMD ["/script.sh"]
FROM python:2.7
RUN apt-get update
RUN apt-get install libmysqlclient-dev
RUN apt-get install -y cssmin
RUN apt-get install -y python-psycopg2
RUN pip install --upgrade setuptools
RUN pip install ez_setup
RUN apt install -y libpq-dev python-dev
RUN apt install -y postgresql-server-dev-all
COPY requirements.txt ./
CMD ["apt-get","install","pip"]
RUN apt-get install -y git
RUN git clone git@github.com:myname/myrepo.git
WORKDIR ./myrepo/LIMA
RUN pip install -r requirements.txt
CMD ["python","manage.py","migrate"]
CMD ["python","manage.py","collectstatic","--noinput"]
CMD ["python","manage.py","runserver"]
EXPOSE 8000
The syntax you have used ends up cloning over SSH, and inside the Docker build your GitHub private key is not available, which leads to the error you are getting. So instead try using:
RUN git clone https://{myusername}:{mypassword}@github.com/{myusername}/myrepo.git
Also remember, if your password has an '@' symbol, use '%40' instead.
If you still want to go with the private-key approach, refer to this question: How to access GIT repo with my private key from Dockerfile
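If SSH is what you want, a less leaky option than copying the key into the image is BuildKit's SSH forwarding. A minimal sketch, assuming Docker 18.09+ with BuildKit enabled and your key loaded in ssh-agent on the host (image and repo names taken from the question):
# syntax=docker/dockerfile:1
FROM python:2.7
RUN apt-get update && apt-get install -y git openssh-client
# pre-trust github.com so "Host key verification failed" does not come back
RUN mkdir -p -m 0700 /root/.ssh && ssh-keyscan github.com >> /root/.ssh/known_hosts
# the key is forwarded from the host only for this step and never stored in a layer
RUN --mount=type=ssh git clone git@github.com:myname/myrepo.git
Build it with DOCKER_BUILDKIT=1 docker build --ssh default .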
Related
I am trying to install tesseract-ocr in Docker from a Dockerfile. When I build the Dockerfile everything looks normal and I get no errors, but when I run the container tesseract is not installed.
If I access the container using sudo docker exec -t -i <container_id> /bin/bash and manually install tesseract using apt-get install -y tesseract-ocr-all it installs and works perfectly. Why doesn't it work when I try to install it during the build process?
My Dockerfile looks like this:
FROM ubuntu:20.04
ARG DEBIAN_FRONTEND=noninteractive
RUN apt-get update \
&& apt-get install -y tesseract-ocr-all
RUN tesseract --version
FROM python:3.7
WORKDIR ocr
COPY requirements.txt .
RUN pip3 install --no-cache-dir -r requirements.txt
COPY . .
Thanks!
It looks like you are leveraging Docker multi-stage builds without realizing it.
When you put FROM python:3.7, you essentially throw away everything you have done above that, since you start a new stage.
The easiest solution I can see is to move
RUN apt-get update \
&& apt-get install -y tesseract-ocr-all
RUN tesseract --version
into the FROM python:3.7 stage, and remove the FROM ubuntu:20.04 stage.
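Put together, the single-stage version would look roughly like this (just the question's own instructions collapsed into one stage):
FROM python:3.7
ARG DEBIAN_FRONTEND=noninteractive
RUN apt-get update \
 && apt-get install -y tesseract-ocr-all
RUN tesseract --version
WORKDIR ocr
COPY requirements.txt .
RUN pip3 install --no-cache-dir -r requirements.txt
COPY . .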
You need to switch your user, since you likely don't have permission to run those commands. Something like this should work:
USER root
RUN apt-get update \
&& apt-get install -y tesseract-ocr-all
USER <switch back to previous user>
You'll need to figure out what the default user is to switch back, which you can probably find in the Ubuntu docs or using whoami.
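For instance, a quick way to check which user a base image runs as by default is something like:
docker run --rm ubuntu:20.04 whoami
which simply prints the default user of that image.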
I'm building an image that runs Jenkins, and I try to use a plugin against Jenkins while it is running, so I need Jenkins to be up before my plugin executes.
I run it with docker build -t dockerfile and the error which I am getting is:
jenkins.JenkinsException: Error in request: [Errno 99]
Cannot assign requested address
I think the problem is that when the plugin is executed it assumes Jenkins is running, but it is not.
FROM foxylion/jenkins
MAINTAINER Mishel Uchuari <dmuchuari@hotmail.com>
RUN /usr/local/bin/install-plugins.sh workflow-remote-loader workflow-aggregator build-pipeline-plugin
ENV JENKINS_USER replicate
ENV JENKINS_PASS replicate
USER root
RUN apt-get -y update && apt-get -y upgrade
RUN apt-get install -y apt-utils
RUN apt-get install -y python-pip
RUN apt install -y linuxbrew-wrapper
RUN useradd someuser -m -s /bin/bash
USER someuser
RUN chmod -R 777 /home/someuser
RUN brew install libyaml
USER root
RUN apt-get install build-essential
RUN apt-get -y update && apt-get -y upgrade
RUN pip install jenkins-job-builder==2.0.0.0b2
RUN pip install PyYAML python-jenkins
RUN mkdir /etc/jenkins_jobs/
COPY jenkins_jobs.ini /etc/jenkins_jobs/
COPY scm_pipeline.yaml /etc/jenkins_jobs/
RUN jenkins-jobs --conf /etc/jenkins_jobs/jenkins_jobs.ini update /etc/jenkins_jobs/scm_pipeline.yaml
I had the same issue myself when using it under Docker:
File "/src/.tox/py27/local/lib/python2.7/site-packages/jenkins_jobs/builder.py", line 124, in get_plugins_info
raise e
JenkinsException: Error in request: [Errno 99] Cannot assign requested address
That error was raised when it tried to retrieve the list of plugins; I ended up overriding plugins_info to short-circuit that code path:
jjb = JenkinsJobs(args=['test', config_dir, '-o', output_dir])
jjb.builder['plugins_info'] = [] # prevents 99 cannot assign requested address
jjb.execute()
I had the issue with Python 2.7.9 on Debian Jessie. If I remember correctly, it is no longer an issue with a later Python version, e.g. 2.7.13 from Debian Stretch.
(the patch on which I encountered the issue):
https://gerrit.wikimedia.org/r/#/c/380929/8/tests/test_integration.py
RUN brew install libyaml
brew is a package manager for macOS. Also, PyYAML gracefully skips compilation when the lib is not available, so you probably do not need that one. And I guess it would also work without installing build-essential.
RUN pip install jenkins-job-builder==2.0.0.0b2
RUN pip install PyYAML python-jenkins
I am surprised you install PyYAML and python-jenkins explicitly. Installing jenkins-job-builder should pull in all of its dependencies (e.g. PyYAML and python-jenkins) by itself.
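Following those suggestions, the Dockerfile from the question could be trimmed to roughly the sketch below. It keeps the question's own version pin; the final jenkins-jobs update call is deliberately left out, since at image-build time Jenkins is not running yet, and that step belongs in a startup script or a later stage:
FROM foxylion/jenkins
RUN /usr/local/bin/install-plugins.sh workflow-remote-loader workflow-aggregator build-pipeline-plugin
ENV JENKINS_USER replicate
ENV JENKINS_PASS replicate
USER root
RUN apt-get -y update && apt-get install -y python-pip
RUN pip install jenkins-job-builder==2.0.0.0b2
RUN mkdir /etc/jenkins_jobs/
COPY jenkins_jobs.ini /etc/jenkins_jobs/
COPY scm_pipeline.yaml /etc/jenkins_jobs/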
I have a private git repo on Bitbucket that I'm using to pip install a library. During my Docker build, I copy the directory with the keys and config file into root. Then it pulls down the requirements and pip installs them. (It pip installs fine from my local terminal, so I know the problem is not the pip install itself.) However, I keep getting:
fatal: Could not read from remote repository.
Please make sure you have the correct access rights
If I remove the pip install line that causes the image build to fail, then shell into the container and run pip install manually, it works fine.
My directory looks like this:
app/
    requirements.txt
    docker_keys/
        .ssh/
            id_rsa
            id_rsa.pub
            config
My Dockerfile looks like this:
FROM python:3.5
RUN apt-get update && apt-get dist-upgrade -qqy && apt-get clean && rm -rf /var/lib/apt/lists/*
ENV PYTHONUNBUFFERED 1
RUN pip install --upgrade pip uwsgi
RUN mkdir /app
ADD . /app
WORKDIR /app
COPY docker_keys/.ssh /root/.ssh
RUN pip install -r requirements.txt
I'm assuming it has something to do with how I'm copying the key dir into root. Any help would be greatly appreciated.
Docker won't necessarily read from your identity file. Check /etc/ssh/ssh_config to ensure something like this exists:
IdentityFile ~/.ssh/id_rsa
It's worth noting however that this is really insecure, and you shouldn't leave SSH private keys inside docker files.
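If you stay with the copied-key approach anyway, one possible sketch is to ship an SSH config next to the key so the clone definitely uses it and host-key checking does not block the build (the key still ends up in an image layer, so the security caveat above stands):
COPY docker_keys/.ssh /root/.ssh
RUN chmod 700 /root/.ssh && chmod 600 /root/.ssh/id_rsa \
 && printf "Host bitbucket.org\n  IdentityFile /root/.ssh/id_rsa\n  StrictHostKeyChecking no\n" > /root/.ssh/config
RUN pip install -r requirements.txt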
I am building a Docker image, but the build does not succeed. /Users/username/.vim exists on my host OS, yet the build still fails.
How can I make docker build succeed?
Error Message:
Step 16 : ADD /Users/username/.vim/ /root/.vim
lstat Users/username/.vim/: no such file or directory
The following is my Dockerfile.
Dockerfile:
FROM ubuntu:latest
MAINTAINER MyName
RUN /bin/bash
RUN mkdir ~/cworks
RUN mkdir ~/pyworks
RUN apt-get -y update
RUN apt-get -y install curl
RUN apt-get -y install clang
RUN apt-get -y install man
RUN apt-get -y install vim
RUN apt-get -y install python3
RUN apt-get -y install git
RUN apt-get -y install make
RUN apt-get -y upgrade
RUN curl -kL https://bootstrap.pypa.io/get-pip.py | python3
ADD /Users/username/.vim/ /root/.vim
ADD /Users/username/.vimrc /root/.vimrc
Platform: OS X 10.11.4
You cannot execute ADD (or COPY) on a file that is not inside the directory that contains your Dockerfile.
The reason for that is that building Docker images is meant to be deterministic. If I built the image on my computer with your Dockerfile, I would end up with a different .vim.
The Docker team imposes this limitation to encourage people to use a self-contained directory holding the Dockerfile and any files it adds.
In your case, you will need to copy the files into the same directory as the Dockerfile first, and then run:
ADD .vim /root/.vim
Or arguably better:
COPY .vim /root/.vim
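For example, on the host (paths taken from the question; the image tag myimage is just a placeholder):
cp -r /Users/username/.vim .
cp /Users/username/.vimrc .
docker build -t myimage .
with the Dockerfile referencing the files relative to the build context:
COPY .vim /root/.vim
COPY .vimrc /root/.vimrc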
Use this form instead. It works with hidden files (dotfiles) and with filenames that contain whitespace.
ADD ["/Users/username/.vim/", "/root/.vim"]
https://docs.docker.com/engine/reference/builder/#/add
Please remove the trailing slash after .vim.
So it will be:
ADD /Users/username/.vim /root/.vim
I have a RoR app that uses imagemagick specified in the Gemfile. I am using Docker's official rails image to build my image with the following Dockerfile:
FROM rails:onbuild
RUN apt-get install imagemagick
and get the following error:
Can't install RMagick 2.13.2. Can't find Magick-config in /usr/local/bundle/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
Now, that's probably because the imagemagick package is missing from the OS, even though I specified it in my Dockerfile. So I guess the bundle install command is issued before my RUN apt-get command.
My question: using this base image, is there a way to ensure imagemagick is installed prior to bundling?
Do I need to fork and change the base image Dockerfile to achieve that?
You are right: the ONBUILD instructions from the rails:onbuild image are executed just after the FROM instruction of your Dockerfile.
What I suggest is to change your Dockerfile as follow:
FROM ruby:2.2.0
RUN apt-get install imagemagick
# throw errors if Gemfile has been modified since Gemfile.lock
RUN bundle config --global frozen 1
RUN mkdir -p /usr/src/app
WORKDIR /usr/src/app
RUN apt-get update && apt-get install -y nodejs --no-install-recommends && rm -rf /var/lib/apt/lists/*
RUN apt-get update && apt-get install -y mysql-client postgresql-client sqlite3 --no-install-recommends && rm -rf /var/lib/apt/lists/*
COPY Gemfile /usr/src/app/
COPY Gemfile.lock /usr/src/app/
RUN bundle install
COPY . /usr/src/app
EXPOSE 3000
CMD ["rails", "server"]
which I made based on the rails:onbuild Dockerfile, moving the ONBUILD instructions down and removing the ONBUILD prefix.
Most base images clean out the apt cache to save on size, so you need to refresh it before installing. Try this:
apt-get update && apt-get install imagemagick
Or spool up a copy of the container and look for yourself:
docker run -it --rm <mycontainernameorid> /bin/bash
The --rm flag ensures that the container is removed after you exit the shell. Once in the shell, look for the package binary (or run dpkg --list).
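In Dockerfile form, the usual pattern is to chain the update and the install in one RUN and add -y so the build stays non-interactive (note that with rails:onbuild the ONBUILD bundle install from the parent image still runs before any of your own RUN lines, as explained in the previous answer):
RUN apt-get update && apt-get install -y imagemagick && rm -rf /var/lib/apt/lists/*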