How can I edit Tomcat configuration files inside a Docker container?

I am trying to use nano/vim inside a Docker container to edit the Tomcat config files, but I am getting an error that nano/vim is an unknown command. I tried yum install, but yum is also an unknown command. How do I go about it?

The most common editor is vi. To install packages into your container you have to know its base image. Most distros create a special file in /etc/ with all the necessary information, named something-release; you can find it with this command:
cat /etc/*release
Then use the package manager of that distro:
for Alpine it will be apk update && apk add vim;
for Ubuntu/Debian: apt update && apt install vim;
for CentOS/RedHat/Fedora: yum install vim;
etc.
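For example, a minimal sketch of the whole workflow, assuming the official Tomcat image (typically Debian-based, so apt applies; the container name my-tomcat and the /usr/local/tomcat/conf path are illustrative defaults that may differ in your setup):
docker exec -it my-tomcat sh -c 'cat /etc/*release'
docker exec -it my-tomcat sh -c 'apt update && apt install -y vim'
docker exec -it my-tomcat vim /usr/local/tomcat/conf/server.xml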

Related

Docker build dependent on host Ubuntu version not on the actual Docker File

I'm facing an issue with my docker build.
I have a Dockerfile as follows:
FROM python:3.6
RUN apt-get update && apt-get install -y libav-tools
....
The issue I'm facing is that I'm getting this error when building on ubuntu:20.04 LTS
E: Package 'libav-tools' has no installation candidate
I did some research and found out that ffmpeg should be a replacement for libav-tools:
FROM python:3.6
RUN apt-get update && apt-get install -y ffmpeg
....
I tried again and it built without any issue.
But when I tried to build the same image with ffmpeg on ubuntu:16.04 (xenial), I got the message:
E: Package 'ffmpeg' has no installation candidate
After that, I replaced ffmpeg with libav-tools and it worked on ubuntu:16.04.
I'm confused now why docker build is dependent on the host Ubuntu version I'm using and not on the actual Dockerfile.
Shouldn't docker build be consistent regardless of the Ubuntu version I'm using?
Delete the existing image and pull it again. It seems you have an old image which may have a different base OS, and that is why you are seeing the issue.
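For example (python:3.6 is the base image from the question; the tag myimage is illustrative, and --pull on its own also forces the base image to be refreshed during the build):
docker rmi python:3.6
docker pull python:3.6
docker build --pull -t myimage .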

Why can't Docker find an existing package?

I am new at using Docker so this may be obvious to some. I am running Ubuntu 18.04 LTS.
I want to install the package "python3-protobuf" inside an image. I try to do this with the following line in the Dockerfile:
...
RUN apt-get update && \
apt-get install -y --no-install-recommends \
python3-protobuf \
<some other packages to be installed>
...
When I run 'docker build -t myImageName', I get the message:
E: Unable to locate package python3-protobuf
There are many packages that I am installing but this is the only one that is creating a problem for me.
I know that the package name is correct because in the terminal, when I 'apt search' for it, it is found. Additionally, in the dockerfile I do the recommended 'update' and 'install' steps. So it should be finding it. Any ideas why it does not?
@banuj answered this question.
The package python3-protobuf became available from Ubuntu 18.04 onward. The base image I took uses Ubuntu 16.04.
There are two ways to solve this:
Use a base image that is with ubuntu 18.04 (or later)
Use pip to install the package.
I ended up using option two.
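A minimal sketch of option two, assuming the base image ships Python 3 but not pip (note the PyPI package is named protobuf, unlike the apt package python3-protobuf):
RUN apt-get update && \
    apt-get install -y --no-install-recommends python3-pip && \
    pip3 install protobuf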

Why doesn't dockerized CentOS recognize pip?

I want to create a container with Python and a few packages on top of CentOS. I've tried running several commands inside a raw centos container; everything worked fine and I installed everything I wanted. Then I created a Dockerfile with the same commands executed via RUN, and I'm getting /bin/sh: pip: command not found. What could be wrong? I mean the situation in general: why can everything be executed on the command line but not with RUN? I've tried both variants:
RUN command
RUN command
RUN pip install ...
and
RUN command\
&& command\
&& pip install ...
Commands that I execute:
from centos
run yum install -y centos-release-scl\
&& yum install -y rh-python36\
&& scl enable rh-python36 bash\
&& pip install django
UPD: Using the full path to pip helped. But what's wrong?
You need to install pip first using
yum install python-pip
or if you need python3 (from epel)
yum install python36-pip
When not sure, ask yum:
yum whatprovides /usr/bin/pip
python2-pip-18.1-1.fc29.noarch : A tool for installing and managing Python 2 packages
Repo : #System
Matched from:
Filename : /usr/bin/pip
python2-pip-18.1-1.fc29.noarch : A tool for installing and managing Python 2 packages
Repo : updates
Matched from:
Filename : /usr/bin/pip
python2-pip-18.0-4.fc29.noarch : A tool for installing and managing Python 2 packages
Repo : fedora
Matched from:
Filename : /usr/bin/pip
This output is from Fedora 29, but you should get a similar result on CentOS/RHEL.
UPDATE
From comment
But when I execute same commands from docker run -ti centos everything
is fine. What's the problem?
Maybe your PATH is broken somehow? Can you try full path to pip?
As has already been mentioned by @rkosegi, it must be a PATH issue. The following seems to work:
FROM centos
ENV PATH /opt/rh/rh-python36/root/usr/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
RUN yum install -y centos-release-scl
RUN yum install -y rh-python36
RUN scl enable rh-python36 bash
RUN pip install django
I "found" the above PATH by starting a centos container and typing the commands one-by-one (since you've mentioned that it is working).
There is a nice explanation on this, in the slides of BMitch which can be found here: sudo-bmitch.github.io/presentations/dc2018/faq-stackoverflow.html#24
Q: Why doesn't RUN work?
Why am I getting ./build.sh is not found?
RUN cd /app/src
RUN ./build.sh
The only part saved from a RUN is the filesystem (as a new layer).
Environment variables, launched daemons, and the shell state are all discarded with the temporary container when pid 1 exits.
Solution: merge multiple lines with &&:
RUN cd /app/src && ./build.sh
I know this was asked a while ago, but I just had this issue when building a Docker image, and wasn't able to find a good answer quickly, so I'll leave it here for posterity.
Adding the scl enable command wouldn't work for me in my Dockerfile, so I found that you can enable scl packages without the scl command by running:
source /opt/rh/<package-name>/enable.
If I remember correctly, you won't be able to do:
RUN source /opt/rh/<package-name>/enable
RUN pip install <package>
Because each RUN command creates a different layer, and shell sessions aren't preserved, so I just ran the commands together like this:
RUN source /opt/rh/rh-python36/enable && pip install <package>
I think the scl command has issues running in Dockerfiles because scl enable <package> bash will open a new shell inside your current one, rather than adding the package to the path in your current shell.
Edit:
Found that you can add packages to your current shell by running:
source scl_source enable <package>
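Putting the pieces together, a minimal Dockerfile sketch based on the snippets above (rh-python36 and django are taken from the question; source works here because /bin/sh on CentOS is bash):
FROM centos
RUN yum install -y centos-release-scl && yum install -y rh-python36
RUN source scl_source enable rh-python36 && pip install django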

Docker usage with Odoo 10.0

I need to know how to set up Docker to create a container that can run an Odoo 10.0 ERP environment.
I'm looking for references or some setup guides; I don't even mind if you paste the CLI below. I'm currently developing on Ubuntu.
Thanks in advance!
@NaNDuzIRa This is quite simple. I suggest that when you want to learn how to do something, even if you need it very fast, you look into the "man page" of the tool you are trying to use to package your application. In this case, it is Docker.
Create a file named Dockerfile (or dockerfile).
Now that you know the OS flavor you want to use, include that at the beginning of the Dockerfile.
Then you can add how you want to install your application in the OS.
Finally, include the installation steps for Odoo, for which I have added a link at the bottom of this post.
# OS of the image: latest Ubuntu
FROM ubuntu:latest
# Raise privileges to install the application/packages as root
USER root
# Packages that will be used for the installation
RUN apt update && apt -y install wget
# Install Odoo
RUN wget -O - https://nightly.odoo.com/odoo.key | apt-key add -
RUN echo "deb http://nightly.odoo.com/10.0/nightly/deb/ ./" >> /etc/apt/sources.list.d/odoo.list
RUN apt-get -y update && apt-get -y install odoo
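To build and run this image, something like the following should work (the tag odoo10 is illustrative; 8069 is Odoo's default HTTP port):
docker build -t odoo10 .
docker run -d --name odoo10 -p 8069:8069 odoo10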
References
Docker
Dockerfile
Odoo

Invoke Ansible playbook in Jenkins

I have a Jenkins build and I am trying to invoke an Ansible playbook file for an S3 upload. When I execute a post-build script to invoke the Ansible playbook file, I end up with the error below.
Cannot run program "ansible-playbook" (in directory "/var/jenkins_home/workspace/mybuild"): error=2, No such file or directory
The screenshot below shows the Ansible post-build script configuration.
FYI: there is a file (ansibledemo.yml) in my build folder. I tried giving the absolute path (/var/jenkins_home/workspace/mybuild/ansibledemo.yml). Still no go.
When I try running ansible-playbook myplaybook.yml directly in the Jenkins image (terminal), I end up with bash: ansible-playbook: command not found.
When I tried installing Ansible on my Jenkins server, I couldn't execute any installation commands. Please see the screenshot below.
Ansible is not installed on your Jenkins machine; first you need to install Ansible on the Jenkins machine:
On Ubuntu/Debian:
sudo apt-add-repository ppa:ansible/ansible
sudo apt-get update
sudo apt-get install ansible
On CentOS/RedHat:
sudo yum install epel-release
sudo yum install ansible
After that you will be able to run the ansible-playbook.
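After installing, you can verify Ansible and run the playbook mentioned in the question from the Jenkins workspace:
ansible-playbook --version
ansible-playbook /var/jenkins_home/workspace/mybuild/ansibledemo.yml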
As an alternative, you can try installing via pip. Please see the steps below:
$ virtualenv venv
$ source venv/bin/activate
$ pip install ansible-container[docker,openshift]
You can see more options to install in docs: https://docs.ansible.com/ansible-container/installation.html
But it is always a good option to keep a separate VM / Docker container such as "ansible-controller" and use that as a slave to Jenkins, so that you don't need Ansible plugins in Jenkins and Jenkins stays stable without much load.
Download package information from the configured sources.
# apt update
Install ansible
# apt install ansible
That's it.
If you run the official Jenkins container (based on Debian), then the repo with Ansible is built in already and you don't need apt-add-repository. But you could install apt-add-repository by installing software-properties-common for further use.
dpkg -S apt-add-repository tells us that this package belongs to software-properties-common.
The error appears because container authors always try to make images as light as possible and remove the package information.
You don't need sudo, because you are root in the container by default. You become another user only if you specify it intentionally.
Please add the information that you are working in a container to your question.
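If you prefer to bake Ansible into the image rather than install it inside a running container, a minimal Dockerfile sketch (jenkins/jenkins:lts is assumed as the base image tag):
FROM jenkins/jenkins:lts
USER root
RUN apt-get update && apt-get install -y ansible && rm -rf /var/lib/apt/lists/*
USER jenkins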
