How to run ServiceMix commands inside Docker in the Karaf console using a deploy script?

I am currently working with ServiceMix 7.0 in an old project. There are several commands that I run manually.
Build the ServiceMix image:
docker build --no-cache -t servicemix-http:latest .
Start the container, mounting the local source folder and the local .m2 folder:
docker run --rm -it -v %cd%:/source -v %USERPROFILE%/.m2:/root/.m2 servicemix-http
Open the ServiceMix console and run:
feature:repo-add file:/../source/source/transport/feature/target/http-feature.xml
Then run:
feature:install http-feature
Copy the deployment files from the local folder into the ServiceMix deploy folder:
docker cp /configs/. :/deploy
Commit the container to update the ServiceMix image:
docker commit servicemix-http
Now I am writing the gitlab-ci.yml for deployment, and my question concerns the ServiceMix commands that were launched from the Karaf console:
feature:repo-add
feature:install
Is there any way to script them?

If all you need is to install features from specific feature repositories on startup, you can add the features and feature repositories to etc/org.apache.karaf.features.cfg under the Karaf home directory.
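For reference, the two relevant keys in that file are featuresRepositories and featuresBoot. A minimal sketch, reusing the repository URL and feature name from the question (normally you would append to the existing comma-separated values rather than replace them):

```
# etc/org.apache.karaf.features.cfg
# Feature repositories registered at startup (comma-separated list)
featuresRepositories = file:/../source/source/transport/feature/target/http-feature.xml
# Features installed automatically at startup
featuresBoot = http-feature
```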
You can use ssh with private-key to pass commands to Apache karaf.
ssh karaf@localhost -p 8101 -i ~/.ssh/karaf_private_key -- bundle:list
You can also try running commands through KARAF_HOME/bin/client, although I had no luck using it with ServiceMix 7.0.1 on Windows 10 when I tried. It kept throwing a NullPointerException, so I'm guessing my Java installation is missing some security-related configuration.
It works well when running newer Karaf installations on Docker, though, and can be used through docker exec as well.
# Using password - (doesn't seem to be an option for servicemix)
docker exec -it karaf /opt/apache-karaf/bin/client -u karaf -p karaf -- bundle:list
# Using private-key
docker exec -it karaf /opt/apache-karaf/bin/client -u karaf -k /keys/karaf_key -- bundle:list
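For the gitlab-ci.yml, a hedged sketch of a job that scripts those two Karaf commands through bin/client via docker exec (the container name, client path, and sleep duration are assumptions, not taken from the question):

```yaml
deploy:
  stage: deploy
  script:
    - docker run -d --name servicemix servicemix-http
    # Give the Karaf shell time to start before sending commands
    - sleep 30
    - docker exec servicemix /opt/servicemix/bin/client "feature:repo-add file:/../source/source/transport/feature/target/http-feature.xml"
    - docker exec servicemix /opt/servicemix/bin/client "feature:install http-feature"
```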

Related

Dev environment for gcc with Docker?

I would like to create a minimalist dev environment for occasional developers who only need Docker.
The ecosystem would have:
code-server image to run Visual Studio Code
gcc image to build the code
git to push/commit the code
ubuntu with some modifications to run the code
I looked at docker-in-docker, which could be a solution:
Docker
code-server
docker run -it -v ... gcc make
docker run -it -v ... git git commit ...
docker run -it -v ... ubuntu ./program
But it seems perhaps a bit overkill. What would be the proper way to have a full, well-separated dev environment that only requires Docker to be installed on the host machine (Linux, Windows, macOS, Chromium)?
I suggest using a Dockerfile.
This file specifies a few steps used to build an image.
The first line of the file specifies a base image (in your case, I would use Ubuntu):
FROM ubuntu:latest
Then, you can e.g. copy files to the image or select commands to run:
RUN apt-get update && apt-get install -y gcc make
RUN apt-get install -y git
and so on.
At the end, you may want to specify the program that is run when you start the container:
CMD /bin/bash
Then you can build it with the command docker build -f Dockerfile -t devenv:latest . (note the trailing dot, which is the build context). This builds a new image named devenv:latest (latest is the tag) from the file Dockerfile.
Then, you can create and run a container from the image using docker run devenv:latest.
If you want to interact with the container, create it with docker run -it devenv:latest instead.
If you want to, you can also use the code-server base image instead of ubuntu:latest.
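Putting the pieces above together, a minimal Dockerfile sketch (the package names follow this answer; apt-get update is needed before the first install on a fresh ubuntu image):

```dockerfile
# Base image
FROM ubuntu:latest

# Install the build toolchain and git; -y answers prompts non-interactively
RUN apt-get update && apt-get install -y gcc make git

# Program run when the container starts
CMD ["/bin/bash"]
```

Build it with docker build -t devenv:latest . and start an interactive shell with docker run -it devenv:latest.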

Not able to mount -v in the run command

I am new to the concept of Docker. On our office server we have installed Docker and are running the Jenkins image as a container.
We want to run the command below in the Jenkinsfile as part of creating the Jenkins pipeline.
docker run --rm -p 8080:8080 -v $PWD/app:/opt/app --name=app-server App
The problem I face is that the volumes are not mounted into /opt/app from $PWD/app (which is a volume of the Jenkins container).
I have an app.txt file in $PWD/app. After running the above command it should be present in /opt/app, but the folder is empty.
Because of this, the configuration files are missing from the app-server container volume, and an error saying the config files are missing occurs.
What is the reason for this problem? Why are the files in $PWD/app not mounted into the /opt/app folder?
Points to consider:
The docker run command is running in the Jenkins container.
The command above runs perfectly when Jenkins runs locally, not as a Docker container.
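One common cause worth checking (an assumption, not a confirmed diagnosis): when the docker CLI inside the Jenkins container talks to the host's Docker daemon, the host side of -v is resolved on the host's filesystem, not inside the Jenkins container. $PWD/app exists only inside the Jenkins container, so the daemon mounts a non-existent (and therefore empty) host directory. A sketch using the underlying host path instead (the path here is hypothetical):

```
# $PWD/app is a path inside the Jenkins container; the daemon resolves
# the -v host side on the host, so pass the real host path instead
docker run --rm -p 8080:8080 -v /var/jenkins_home/workspace/app:/opt/app --name=app-server App
```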

Awestruct in Docker Container via Jenkins

I want to use awestruct inside a docker container to build my website.
Everything seems to work but at the end I receive a message in Jenkins job:
Jenkins Job Error
When I run the Docker container directly on the server (not via a Jenkins job) with docker run -i -t --rm --volume /home/jenkins/workspace/myWebsite:/home/jenkins/myWebsite:Z --name myWebsite_Container mbiarnes/myWebsite:latest /bin/bash, I can access the container directly.
I can do a rake setup (rake setup output), but when doing a rake clean build I get an error.
I have tried nearly everything, but I cannot make progress.
This is my Dockerfile.
When doing awestruct --version I get bash: awestruct: command not found, as if awestruct were not installed, but in the rake setup output you can see that awestruct 0.5.7 is used. Also, during the build of the Docker image I didn't receive any errors, and I can see in the logs that awestruct was installed successfully.

jenkins plugins installed via CLI inside docker container is not showing up in jenkins web console

According to the README.md file in the official Jenkins Docker repository, I started a Jenkins master in a Docker container with a named volume, like this:
$ docker run -d \
--publish 8080:8080 \
--volume jenkins_home:/var/jenkins_home \
--name jenkins_master \
jenkins
After that, I used the browser to:
visit localhost:8080,
install a few plugins from the Jenkins web console,
run a few pipelines,
etc.
All worked fine.
Later I tried to install some Jenkins plugins via the CLI (instead of the web console), like the following:
$ docker exec -it jenkins_master /bin/bash
$ install-plugins.sh hockeyapp
It shows that everything installed correctly. However, when I visit localhost:8080 in the browser, I see that the hockeyapp plugin was not installed.
How can I make sure that plugins I install from the docker exec CLI are available in the web console?
Notably, I found that there are two different plugins folders: one where hockeyapp is available, and one where it is not.
$ ls /usr/share/jenkins/ref/plugins/ # shows hockeyapp
$ ls /var/jenkins_home/plugins/ # does not show hockeyapp
The install-plugins.sh script is designed for pre-installing plugins.
Plugins that you install in that way will be picked up when the container starts, from the /usr/share/jenkins/ref/plugins/ directory that you mentioned.
Try restarting (or stopping and then starting) the container again. After that you should see the newly installed plugins appear correctly in the web console.
It looks like install-plugins.sh is deprecated now.
Use:
jenkins-plugin-cli -f PATH_TO_FILE
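For example, with a plugins file (the plugin names here are illustrative):

```
# plugins.txt - one plugin per line, optionally pinned to a version
hockeyapp
git:latest
```

Then run jenkins-plugin-cli -f plugins.txt and restart Jenkins so the plugins are picked up.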

How to get Container Id of Docker in Jenkins

I am using the Docker Custom Build Environment Plugin to build my project inside the "jpetazzo/dind" Docker image. After building, the console output shows:
Docker container 212ad049dfdf8b7180b8c9b185ddfc586b301174414c40969994f7b3e64f68bc started to host the build
$ docker exec --tty 212ad049dfdf8b7180b8c9b185ddfc586b301174414c40969994f7b3e64f68bc env
[workspace] $ docker exec --tty --user 122:docker 4aea29fff86ba4e50dbcc7387f4f23c55ff3661322fb430a099435e905d6eeef env BUILD_DISPLAY_NAME=#73
Here the Docker container that was started has container ID 212ad049dfdf8b7180b8c9b185ddfc586b301174414c40969994f7b3e64f68bc.
Now I want to execute some commands in the "Execute shell" part of the "Build" section in Jenkins, where I want to use this container ID. I tried using ${BUILD_CONTAINER_ID} as mentioned on the plugin page, but that doesn't work.
The documentation tells you to use docker run, but you're trying to do docker exec. The exec subcommand only works on a currently running container.
I suppose you could do a docker run -d to start the container in the background, and then make sure to docker stop when you're done. I suspect this will leave you with some orphaned running containers when things go wrong, though.
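That pattern can be sketched in an "Execute shell" build step like this (the image and the commands run against it are placeholders):

```
# Start the container detached; docker run -d prints the new container ID
CID=$(docker run -d jpetazzo/dind)
# Reuse the captured ID for further commands
docker exec --tty "$CID" env
# Stop the container when done to avoid orphaned running containers
docker stop "$CID"
```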
