How to install jdk, jdeveloper, maven and svn in one docker container? - docker

I want to create a development environment for all our programmers, so I want to install the JDK, JDeveloper, Maven and SVN in one Docker container.
How can I do that?

First of all, you need to go to the Docker site and learn how to create a Dockerfile. Docker builds that file into an image, from which you can start containers with whatever you want installed. For example:
FROM debian
RUN apt-get update -qq && DEBIAN_FRONTEND=noninteractive apt-get install -yqq \
    openjdk-8-jdk \
    maven \
    subversion \
    ....
RUN ... (commands that run inside the image while it is being built)
EXPOSE 80 8080... (whatever ports you want to expose)
Note that for a development environment you want the JDK package (openjdk-8-jdk), not just the JRE, and that on Debian the svn client package is called subversion.
This is a very simple example; you should read the docs and see what is available. I would recommend looking at the Dockerfiles in the docker-library GitHub repos (which hold all the official images) to get an idea of what's happening, so you know how to create your container.
On a side note, I would not install svn in the same container; it would be better to have it in a separate container, so that you never have more than one service per container, since each container runs as a separate process. You can link containers, but that requires reading the docs to see how it is done.
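One way to sketch that separation is with a docker-compose file. This is only an illustration: the service names, the workspace mount, and the SVN image name are all assumptions, so substitute whatever SVN server image (or your own Dockerfile) you actually use.

```yaml
# Hypothetical docker-compose.yml: dev toolchain and SVN server
# as two separate services on one network.
version: '2'
services:
  dev:
    build: .                 # the JDK/Maven Dockerfile above
    volumes:
      - ./workspace:/workspace
  svn:
    image: my-svn-server     # illustrative name: any svnserve image or your own build
    ports:
      - "3690:3690"          # default svnserve port
```

Services on the same compose network can reach each other by service name, so the dev container could check out code from svn://svn/myrepo.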

Why use JDeveloper? For Eclipse you can find several examples on Docker Hub.
I agree with @Hatem-Jaber: separate svn from the dev environment.

Related

How to properly set up jenkins with docker?

I'm new to Docker and am learning how to implement Docker with Jenkins. I was able to successfully bind a Docker volume to a directory on my host machine with the following command:
docker run --name jenkinsci -p 8080:8080 -p 50000:50000 -v ~/Jenkins:/var/jenkins_home/ jenkins/jenkins:lts
Now that the basic Jenkins is set up and bound to my host, there are a few things I wasn't sure how to handle.
(1) This is only accessible through localhost:8080. How do I make this accessible to other computers? I've read that I can change the URL to my company's public IP address? Is this the right approach?
(2) I want to automate the installation of select plugins and setting the paths in the Global Tools Configuration. There were some tips on github https://github.com/jenkinsci/docker/blob/master/README.md but I wasn't clear on where this Dockerfile is placed. For example, if I wanted the plugins MSBuild and Green Balls to be installed, what would that look like?
FROM jenkins/jenkins:lts
COPY plugins.txt /usr/share/jenkins/ref/plugins.txt
RUN /usr/local/bin/install-plugins.sh < /usr/share/jenkins/ref/plugins.txt
Would I have to create a text file called plugins.txt where it contains a list of plugins I want downloaded? Where will this Dockerfile be stored?
(3) I also want a Dockerfile that installs all the dependencies to run my .NET Windows project (nuget, msbuild, wix, nunit, etc). I believe this Dockerfile will be placed in my git repository.
Basically, I'm getting overwhelmed with all this Docker information and am trying to piece together how Docker interacts with Jenkins. I would appreciate any advice and guidance on these problems.
It's OK to get overwhelmed by docker+kubernetes. It's a lot of information, and an overall shift in how we have been handling applications/services.
To make Jenkins available on all interfaces, use the following command:
docker run --name jenkinsci -p "0.0.0.0:8080:8080" -p "0.0.0.0:50000:50000" -v ~/Jenkins:/var/jenkins_home/ jenkins/jenkins:lts
Yes, you have to provide the plugins.txt file, and create a new jenkins image containing all the required plugins. After that you can use this new image instead of jenkins/jenkins:lts.
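For example, a minimal plugins.txt for the two plugins mentioned in the question could look like this. The short names "msbuild" and "greenballs" are the plugin IDs as published on the Jenkins update site, and "myorg/jenkins-custom" is just an illustrative image tag; double-check the plugin IDs before building.

```shell
# Create plugins.txt next to the Dockerfile shown in the question;
# install-plugins.sh expects one plugin short name per line.
printf '%s\n' msbuild greenballs > plugins.txt
cat plugins.txt

# Then build and run the custom image (requires a Docker daemon):
#   docker build -t myorg/jenkins-custom .
#   docker run --name jenkinsci -p 8080:8080 -p 50000:50000 \
#     -v ~/Jenkins:/var/jenkins_home myorg/jenkins-custom
```

The Dockerfile and plugins.txt live together in their own directory (not inside the Jenkins home volume); the build copies plugins.txt into the image.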
The new image, suited to your workload, should contain all the dependencies required for your environment.

How to prepare a blank website to be dockerized?

I have a totally empty debian9 on which I installed docker-ce and nothing else.
My client wants me to run a website (already done locally on my PC) that he can migrate/move rapidly from one server to another moving docker images.
My idea is to start from some empty Docker image, and then install all the dependencies on it manually (nginx-rtmp, apache2, nodejs, mysql, phpmyadmin, php, etc...)
I need to install all these dependencies MANUALLY (to keep control) - not using a ready to go docker images from dockerhub, and then to create an IMAGE of ALL things I have done (including these dependencies, but also files I will upload).
Problem is : I have no idea how to start a blank image, connect to it and then save a modified image with components and dependencies I will run.
I am aware that the SIZE may be bigger than with a simple Dockerfile, but I need to customize lots of things, such as using php5.6 and apache2.2, editing some php.ini, etc.
regards
If you don't want to define your dependencies in the Dockerfile, then you can take this approach: spin up a Linux container from a base image, keep it running, and go inside it (docker exec only works on a running container)
sudo docker run -itd debian
sudo docker exec -it <Container ID> /bin/bash
then install your dependencies as you would install them on any other Linux server. Package names vary by distro; on Debian, for example, it would be something like
apt-get update && apt-get install -y nginx apache2 nodejs mysql-server phpmyadmin php
then detach from the container with Ctrl+P followed by Ctrl+Q, and commit the changes you made
sudo docker commit CONTAINER_ID new-image-name
Run the docker images command and you will see the new image you have created; you can then use/move that image.
You can try a Dockerfile with the following content:
FROM scratch
But then you will need to build and add the operating system yourself.
For instance, alpine linux does this in the following way:
FROM scratch
ADD rootfs.tar.xz /
CMD ["/bin/sh"]
Where rootfs.tar.xz is a file of less than 2 MB, available in Alpine's GitHub repository (version 3.7 for the x86_64 arch):
https://github.com/gliderlabs/docker-alpine/tree/61c3181ad3127c5bedd098271ac05f49119c9915/versions/library-3.7/x86_64
Or you can begin with alpine itself, but you said that you don't want to depend on ready-to-go Docker images.
A good starting point for you (if you decide to use Alpine Linux) could be the one available at https://github.com/docker-library/httpd/blob/eaf4c70fb21f167f77e0c9d4b6f8b8635b1cb4b6/2.4/alpine/Dockerfile
As you can see, a Dockerfile can become very big and complex, because within it you provision all the software you need for running your image.
Once you have your Dockerfile, you can build the image with:
docker build .
You can give it a name (note the trailing dot, which is the build context):
docker build -t mycompany/myimage:1.0 .
Then you can run your image with:
docker run mycompany/myimage:1.0
Hope this helps.

How to convert VM image to dockerfile?

For work purposes, I have an OVA file which I need to convert to a Dockerfile.
Does someone know how to do it?
Thanks in advance
There are a few different ways to do this. They all involve getting at the disk image of the VM. One is to mount the VDI, then create a Docker image from that (see other Stack Overflow answers). Another is to boot the VM and copy the complete disk contents, starting at root, to a shared folder. And so on. We have succeeded with multiple approaches. As long as the disk in the VM is compatible with the kernel underlying the running container, creating a Docker image from the complete VM disk has worked.
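A sketch of the "get at the disk contents" approach, assuming the VM's root filesystem is already mounted at /mnt/vmdisk (the path and the myvm:imported tag are illustrative; you would first convert the OVA's disk, e.g. with qemu-img, and loop-mount it):

```shell
# Tar up the mounted VM filesystem and feed it to docker import,
# which creates a filesystem image from the tarball.
tar -C /mnt/vmdisk -c . | docker import - myvm:imported

# Then start a shell in the imported image to verify the contents.
docker run -it myvm:imported /bin/bash
```

Note that docker import gives you an image but no Dockerfile; if you want a reproducible Dockerfile you still have to reconstruct the install steps yourself.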
Yes, it is possible to use a VM image and run it in a container. Many of our customers have been using this project successfully: https://github.com/rancher/vm.git.
RancherVM allows you to create VMs that run inside of Kubernetes pods, called VM Pods. A VM pod looks and feels like a regular pod. Inside of each VM pod, however, is a container running a virtual machine instance. You can package any QEMU/KVM image as a Docker image, distribute it using any Docker registry such as DockerHub, and run it on RancherVM.
Recently this project has been made compatible for kubernetes as well. For more information: https://rancher.com/blog/2018/2018-04-27-ranchervm-now-available-on-kubernetes
Step 1
Install ShutIt as root:
sudo su -
(apt-get update && apt-get install -y python-pip git docker) || (yum update && yum install -y python-pip git docker which)
pip install shutit
The pre-requisites are python-pip, git and docker. The exact names of these in your package manager may vary slightly (eg docker-io or docker.io) depending on your distro.
You may need to make sure the docker server is running too, eg with ‘systemctl start docker’ or ‘service docker start’.
Step 2
Check out the copyserver script:
git clone https://github.com/ianmiell/shutit_copyserver.git
Step 3
Run the copy_server script:
cd shutit_copyserver/bin
./copy_server.sh
There are a couple of prompts: one to correct perms on a config file, and another to ask what Docker base image you want to use. Make sure you use one as close to the original server as possible.
Note that this requires a version of docker that has the ‘docker exec’ option.
Step 4
Run the build server:
docker run -ti copyserver /bin/bash
You are now in a practical facsimile of your server within a docker container!
Source
https://zwischenzugs.com/2015/05/24/convert-any-server-to-a-docker-container/
In my opinion a direct conversion is impossible. But you can create a Dockerfile based on the same OS and mount your data.

Why does official nginx alpine docker not use the standard repository?

Looking at https://github.com/nginxinc/docker-nginx/blob/f8fad321cf58d5cbcafa3d9fa15314b8a77b5e65/mainline/alpine/Dockerfile it's clear that the official nginx repo doesn't use the nginx that's available in Alpine's repositories, but instead compiles it from source. You can install nginx in any container using apk add --update --no-cache nginx. Why wouldn't nginx just use this and maintain the official Alpine package?
Typically the maintainer wants more control over the install of the main component of the image. That control lets them:
Install a specific version
Install with container specific options
Avoid including packaging values that do not apply to containers, like startup scripts
One of the bigger reasons I can think of is to install a version immediately after it's release and before the package maintainers have had a chance to create a package for that release.
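For contrast, here is a minimal sketch of the distro-package route the question asks about. This is not the official image's Dockerfile; the EXPOSE/CMD lines are assumptions about how you would typically run it:

```dockerfile
# Install nginx from Alpine's own repository instead of compiling from source.
# You get the distro's build options and whatever version the repo ships.
FROM alpine:3.7
RUN apk add --update --no-cache nginx
EXPOSE 80
CMD ["nginx", "-g", "daemon off;"]
```

The trade-off is exactly the one listed above: this is shorter, but you are pinned to the repo's version, build flags, and packaging decisions.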
There are a few likely answers, like:
Probably the nginx package in apk is not maintained by nginx, and they want to configure and compile nginx themselves.
Or they want to add additional configure options to the nginx image.

Dockerfile with multiple images

What I want to build, without Docker, would look like this:
AWS EC2 Debian OS where I can install:
Python 3
Anaconda
Run python scripts that will create new files as output
SSH into it so that I can explore the new files created
I am starting to use Docker, and my first approach was to build everything into a Dockerfile, but I don't know how to add multiple FROM instructions so that I can install the official Docker images of Debian, Python 3 and Anaconda.
In addition, with this approach is it possible to get into the container and explore the files that have been created?
