I'll say up front: I'm not a Java/Maven/Linux developer or user.
I have a GitHub project I need to build and package, https://github.com/simonellistonball/nifi-webdav-bundle, following these instructions: https://www.tutorialspoint.com/apache_nifi/apache_nifi_custom_processor.htm. But I don't understand how to do this with the Maven Docker image, https://hub.docker.com/_/maven. In my environment I have a CentOS VM with git and docker installed.
My plan is: git clone the repo onto the host VM, create a Docker volume, copy the repo there, run the container with some parameters (I don't know which exactly), and remove the container.
Am I thinking the right way? Is it even possible with the Maven container, without creating my own image?
You have two choices:
Map /opt/nifi/nifi-current/extensions in the container to the host and drop the nar there.
Add a Dockerfile that looks something like this:
FROM apache/nifi:1.12.1
COPY --chown=nifi ./something.nar /opt/nifi/nifi-current/lib/something.nar
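As for the build itself: yes, it's possible with the stock maven image, without building an image of your own. A minimal sketch, assuming the repo was cloned to /opt/src/nifi-webdav-bundle on the host (the path and image tag are just examples); mounting ~/.m2 caches downloaded dependencies between runs:
docker run --rm \
  -v /opt/src/nifi-webdav-bundle:/usr/src/app \
  -v "$HOME/.m2":/root/.m2 \
  -w /usr/src/app \
  maven:3-jdk-8 mvn clean package
The built NAR should then land under the NAR module's target/ directory on the host, ready to drop into the mapped extensions folder or to COPY in a Dockerfile like the one above.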
I need to copy my customized Keycloak themes into the Keycloak container to use them, as mentioned here:
https://medium.com/@auscunningham/change-login-theme-in-keycloak-docker-image-55b5fa5ceec4
After identifying my container id with docker container ls, I list the files like this: docker exec 7e3a420017a8 ls ./keycloak/themes
It returns the list of themes correctly, but when I use this to copy my files from local to the container:
docker cp ./mycustomthem 7e3a420017a8:/keycloak/themes/
or
docker cp ./mycustomthem 7e3a420017a8:./keycloak/themes/
I get the following error:
Error: No such container:path: 7e3a420017a8:/keycloak
I can't see where the error is, since I can list the files in that folder inside the container. Could you help me?
Thank you in advance.
Works on my computer.
docker cp mycustomthem e67f76e8740b:/opt/jboss/keycloak/themes/raincatcher-theme
You have used the wrong path in your command; use the full path /opt/jboss/keycloak/themes/raincatcher-theme.
This seems like a weird way to approach this problem. Why not just have a Dockerfile that uses the Keycloak container as the base image and copies the theme in at build time, then run the image you build? This will also be a more stable pattern in the long term if you ever decide to add plugins or customizations, and it provides an easy upgrade path to new versions: just change the base image in your Dockerfile.
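A minimal sketch of such a Dockerfile, assuming the jboss/keycloak base image and a theme folder named mycustomthem sitting next to the Dockerfile (both names are examples):
FROM jboss/keycloak:latest
COPY ./mycustomthem /opt/jboss/keycloak/themes/mycustomthem
Then build it with docker build -t my-keycloak . and run that image instead of the stock one.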
Update, following your question edit:
Try the following:
docker cp ./mycustomthem 7e3a420017a8:/opt/jboss/keycloak/themes/
The correct themes path in the Keycloak image is actually /opt/jboss/keycloak/themes/.
I am trying to start a container using Jenkins and a Dockerfile in my SCM.
Jenkins uses the Dockerfile from my SCM repository and builds the image on a remote server. This is done using the CloudBees Docker Build and Publish plugin.
When I ssh to the server, I see that the image has been built with the tags I had defined in Jenkins.
# docker image ls
What I am not able to do is run a container from the image that has been built. How do I get the image ID and start the container? Shouldn't this be very simple, given how many plugins are provided?
Could your problem be related to how to refer to the recently created image in order to run it? Can you provide an extract of your pipeline and how you are trying to achieve this?
If that is the case, there are different solutions, one being to specify a tag during image creation, so you can then refer to it when running it.
As for how to work with image IDs: the docker build process returns the ID of the image it creates. You can capture that ID and then use it to run a container.
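A sketch of both variants (the tag name and port mapping are just examples):
# capture the image ID directly; -q makes docker build print only the ID
IMAGE_ID=$(docker build -q .)
docker run -d -p 8080:8080 "$IMAGE_ID"
# or tag the image at build time and refer to it by name
docker build -t myapp:latest .
docker run -d -p 8080:8080 myapp:latest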
Start the container yourself on the VM using the standard docker run command.
Or use a tool like Watchtower to restart the container with an updated version when one becomes available, as sketched below.
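Watchtower itself runs as a container that watches the others over the Docker socket; a minimal invocation looks something like this (containrrr/watchtower is the current upstream image name):
docker run -d --name watchtower -v /var/run/docker.sock:/var/run/docker.sock containrrr/watchtower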
I am currently running Jenkins with Docker. When trying to build Docker apps, I am unsure whether I should use Docker-in-Docker (DinD) by binding the /var/run/docker.sock file, or by installing another instance of Docker inside my Jenkins container. I actually read that, previously, using anything other than docker.sock was discouraged.
I don't really understand why we should use anything other than the Docker daemon from the host, apart from not polluting it.
Sources: https://itnext.io/docker-in-docker-521958d34efd
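For reference, the socket-binding variant being discussed looks something like this (the image tag is an example, and note the container still needs a Docker CLI installed for the socket to be of any use):
docker run -d -p 8080:8080 -v /var/run/docker.sock:/var/run/docker.sock jenkins/jenkins:lts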
The best solution for the "Jenkins in a Docker container needs Docker" case is to add your host as a node (slave) in Jenkins. This makes every build step (literally everything) run on your host machine. It took me a month to find the perfect setup.
Mount the Docker socket in the Jenkins container: you will lose context. The files you want to COPY into the image live inside the workspace in the Jenkins container, while your Docker daemon runs on the host. COPY fails for sure.
Install the Docker client in the Jenkins container: you have to alter the official Jenkins image, which adds complexity. And you will lose context too.
Add your host as a Jenkins node: perfect. You have the context, and no altering of the official image.
Without completely understanding why you would need to use Docker-in-Docker (I suspect you have some special requirements for the environment in which you build the actual image), may I suggest multistage building of Docker images? You might find it useful, as it lets you first build the build environment and then build the actual image (hence the name multistage build). Check it out here: https://docs.docker.com/develop/develop-images/multistage-build/
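A minimal multistage sketch for a hypothetical Node app (all names are examples): the builder stage compiles, and the final image carries only the build output.
# build stage: full toolchain
FROM node:12 AS builder
WORKDIR /src
COPY . .
RUN npm ci && npm run build
# final stage: only the compiled artifacts
FROM nginx:alpine
COPY --from=builder /src/dist /usr/share/nginx/html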
I would like to run and test parse-dashboard via Docker, as documented in the readme.
I am getting the error message, "Parse Dashboard can only be remotely accessed via HTTPS." Normally, you can bypass this by adding the line "allowInsecureHTTP": true to your parse-dashboard-config.json file. But even though I have added this option to my config file, the same message is displayed.
I tried to edit the config file in the Docker container, whereupon I discovered that none of my local file changes were present in the container. It appeared as though my project was an unmodified version of the code from the GitHub repository.
Why do the changes that I make to the files in my working directory on the host machine not show up in the Docker container?
But what is actually uploaded to my Docker container is in fact the config file from my master branch.
It depends:
what that "docker" is: the official DockerHub or a private docker registry?
how it is uploaded: do you build an image and then use docker push, or do you simply do a git push back to your GitHub repo?
Basically, if you want to see the right files in the Docker container that you run, you must be sure to run an image you have built (docker build) from a Dockerfile which COPYs the files from your current workspace.
If you do a docker build from a folder where your Git repo is checked out at the right branch, you will get an image with the right files.
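A sketch of that workflow (the branch and tag names are placeholders):
git checkout my-branch        # the branch with your changes
docker build -t parse-dashboard:custom .
docker run parse-dashboard:custom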
The Dockerfile from the parse-dashboard repository you linked uses ADD . /src. This is a bad practice (because of the problems you're running into). Here are two different approaches you could take to work around it:
Rebuild the Image Each Time
Any time you change anything in the working directory (which the Dockerfile ADDs to /src), you need to rebuild for the change to take effect. The exception is src/Parse-Dashboard/parse-dashboard-config.json, which we'll mount in with a volume. The workflow would be nearly identical to the one in the readme:
$ docker build -t parse-dashboard .
$ docker run -d -p 8080:4040 -v "$(pwd)/src/Parse-Dashboard/parse-dashboard-config.json":/src/Parse-Dashboard/parse-dashboard-config.json parse-dashboard
Use a Volume
If we're going to use a volume to do this, we don't even need the custom Dockerfile shipped with the project. We'll just use the official Node image, upon which the Dockerfile is based.
In this case, Docker will not run the build process for you, so you should do it yourself on the host machine before starting Docker:
$ npm install
$ npm run build
Now we can start the generic Node Docker image and ask it to serve our project directory:
$ docker run -d -p 8080:4040 -v "$(pwd)":/src node:4.7.2 sh -c "cd /src && npm run dashboard"
Changes will take effect immediately because you mount ./ into the container as a volume. Because it's not done with ADD, you don't need to rebuild the image each time. We can use the generic Node image because, if we're not ADDing a directory and running the build commands, our image would do nothing differently from the official one.
I'm using Docker on Ubuntu. During the development phase I cloned all the source code from Git on the host, edited it in WebStorm, and then ran it with Node.js inside a Docker container with -v /host_dev_src:/container_src so that I could test it.
Then, when I wanted to send it off for testing, I committed the container and pushed a new version. But when I pulled and ran the image on the test machine, the source code was missing. That makes sense, as there's no /host_dev_src available on the test machine.
My current workaround is to clone the source code onto the test machine and run Docker with -v /host_test_src:/container_src. But I'd like to know whether it's possible to copy the source code directly into the container and avoid that manipulation. I'd prefer to just copy and run the image with the source code inside, especially since there's no Internet connection on our test machines.
PS: It seems docker cp only supports copying files from the container to the host.
One solution is to have a git clone step in the Dockerfile which adds the source code to the image. During development, you can override this code with your -v argument to docker run, so that you can make changes without rebuilding. When it comes to testing, you just check your changes in and build a new image. Now you have a fully standalone image for testing.
Note that if you have a VOLUME instruction in your Dockerfile, you will need to make sure it occurs after the git clone step.
The problem with this approach is that if you are using a compiled language, you only want your binaries to live in the final image. In this case, the git clone needs to be replaced with some code that either fetches or compiles the binaries.
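A minimal sketch of that kind of Dockerfile (the repo URL, image tag, and start command are all placeholders):
FROM node:4.7.2
RUN git clone https://github.com/example/your-app.git /container_src
WORKDIR /container_src
RUN npm install
VOLUME /container_src
CMD ["npm", "start"]
Note that the VOLUME instruction comes after the clone, per the caveat above.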
Please treat your source code as data, and package it as a data container; see https://docs.docker.com/userguide/dockervolumes/
Step 1: Create the app_src Docker image
Put a Dockerfile inside your Git repo, like this:
FROM busybox
ADD . /container_src
VOLUME /container_src
Then you can build the source image:
docker build -t app_src .
During development, you can always use your old solution, -v /host_dev_src:/container_src.
Step 2: Transfer this image like an app image
You can transfer the app_src image to the test system the same way as your application image, probably via a Docker registry.
Step 3: Run it as a data container
On the test system, run the app container on top of it (I use ubuntu for the demo):
docker run -d -v /container_src --name code app_src
docker run -it --volumes-from code ubuntu bash
root@dfb2bb8456fe:/# ls /container_src
Dockerfile hello.c
root@dfb2bb8456fe:/#
Hope this helps.
(Credits to https://github.com/toffer/docker-data-only-container-demo, from which I got the detailed ideas.)
Adding to Adrian's answer: I do the git clone in the Dockerfile, and then do
CMD git pull && start-my-service
so the latest code on the checked-out branch gets run. This is obviously not for everyone, but it works in some software release models.
You could try having two Dockerfiles, as sketched below. The base one would know how to run your app from a predefined folder, but would not declare it a volume; when developing, you run this container with your host folder mounted as a volume. The other one, the package one, would inherit from the base one and copy/add the files from your host directory, again without volumes, so that you carry all the files over to the tester's host.
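A rough sketch of the pair, with hypothetical names (myapp-base, /app, npm start):
# Dockerfile.base: knows how to run the app from /app, no VOLUME declared
FROM node:4.7.2
WORKDIR /app
CMD ["npm", "start"]
# Dockerfile.package: inherits the base and bakes the files in
FROM myapp-base
COPY . /app
During development you build only the base (docker build -t myapp-base -f Dockerfile.base .) and run it with -v "$(pwd)":/app; for the tester you build the package image, which carries all the files inside it.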