Run Jira in docker with initial setup snapshot - docker

In my company, we're using Jira for issue tracking. I need to write an application that integrates with it and synchronizes some data with other services. For testing, I want to have a Docker image of Jira with some initial data.
I'm using the official atlassian/jira-core image. After the initial setup, I saved the state by running docker commit, but unfortunately the new image seems to be empty, and I need to set it up again from scratch.
What should I do to save the initial setup? I want to run tests that will change something within Jira, so reverting it back will be necessary to have a reliable test suite. After I spin up a new container, it should already have a few users and a project with some issues. I don't want to create them manually for each new instance. Also, the setup takes a lot of time, which is not acceptable for testing.

To get persistent storage you need to mount /var/atlassian/jira from your host system. That directory is where the configuration and data are stored, so you do not need to commit anything: whenever you spin up a new container with /var/atlassian/jira mounted from the same host path, it will have all the configuration that you set previously.
docker run --detach -v /your_host_path/jira:/var/atlassian/jira --publish 8080:8080 cptactionhank/atlassian-jira:latest
For logs you can also mount /opt/atlassian/jira/logs.
The above is valid if you are running the latest tag, or you can explore the relevant Dockerfile:
Set volume mount points for installation and home directory. Changes to the home directory needs to be persisted as well as parts of the installation directory due to eg. logs.
VOLUME ["/var/atlassian/jira", "/opt/atlassian/jira/logs"]
atlassian-jira-dockerfile

Look at the entrypoint.sh; the comments from there are:
check if the server.xml file has been changed since the creation of
this Docker image. If the file has been changed the entrypoint script
will not perform modifications to the configuration file.
So I think you need to provide your own server.xml to stop the init process...
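Building on the volume-mount approach above, one way to get a reusable initial-setup snapshot for tests is to run the setup once against a bind-mounted home directory, archive that directory, and restore the archive before each test run. This is only a sketch: it assumes the embedded database (so all state lives in the Jira home), the /var/atlassian/jira path used above, and hypothetical host paths.
# One-time: run Jira with a bind-mounted home, finish the setup wizard,
# create the users, project and issues you need, then stop the container
docker run --detach --name jira-setup \
  -v /srv/jira-home:/var/atlassian/jira \
  --publish 8080:8080 cptactionhank/atlassian-jira:latest
docker stop jira-setup && docker rm jira-setup

# Snapshot the fully configured home directory
tar -czf /srv/jira-home-snapshot.tgz -C /srv jira-home

# Before each test run: restore the snapshot and start a throwaway container
rm -rf /srv/jira-test-home && mkdir -p /srv/jira-test-home
tar -xzf /srv/jira-home-snapshot.tgz -C /srv/jira-test-home --strip-components=1
docker run --detach --name jira-test \
  -v /srv/jira-test-home:/var/atlassian/jira \
  --publish 8080:8080 cptactionhank/atlassian-jira:latest
Depending on the image, you may need to fix ownership of the restored directory, since the container runs Jira as a non-root user.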

Related

How to modify the configuration of the database in docker of opengauss

Recently, I was trying to deploy the openGauss database using Docker, and I saw that this image was released by your company.
I have currently encountered the following two problems:
I could not find the corresponding database configuration files ("hba.conf" or "postgresql.conf"). Where are these files located in the Docker image? If they are not there, can the parameters be modified with the gs_* tools?
When the database in Docker is stopped and then restarted, the image is launched with no parameters pointing to a configuration file, so there is no way to modify the database configuration. At present, the only solution I can think of is to directly commit & save the modified running container into a new image. Is this the only solution?
hba.conf and postgresql.conf are under /var/lib/opengauss/data; using gs_guc to modify parameters is supported.
After changing parameters that require a database restart to take effect, just restart the container directly.
You can also persist the data if you want, by specifying it through the -v parameter when running:
-v /enmotech/opengauss:/var/lib/opengauss
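As a rough sketch of the above (the container name, password value, and parameter below are placeholders; GS_PASSWORD is assumed to be the variable the enmotech/opengauss image uses for the initial password, and the gs_guc invocation follows its usual set form, so check the image documentation and the gs_guc reference for the exact syntax):
# Run with the data directory persisted on the host
docker run -d --name opengauss \
  -e GS_PASSWORD=Example@123 \
  -v /enmotech/opengauss:/var/lib/opengauss \
  enmotech/opengauss:latest

# Change a parameter with gs_guc inside the container (writes it to the config file)
docker exec -it opengauss gs_guc set -D /var/lib/opengauss/data -c "max_connections=500"

# If the parameter only takes effect after a restart, restart the container
docker restart opengauss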

Docker - Safest way to upload new content to production

I am new to Docker.
Every time I need to upload new content to production I get anxious that something will go wrong, so I am trying to understand how backups work and how to back up my volumes, which seems pretty complicated to me at the moment.
So I have this idea of creating a new image every time I want to upload new content.
Then I pull this image on the machine, docker stack rm/deploy the stack, and see if it works; if not, I pull the old image.
If the code works I can then delete my old image.
Is this a proper/safe way to update production machines, or do I need to get going with backups and restores?
I mean, I read this guide https://www.thegeekdiary.com/how-to-backup-and-restore-docker-containers/ but I don't quite understand how to restore my volumes.
Any suggestion would be nice.
Thank you.
That's a pretty normal way to use Docker. Make sure you give each build a distinct tag, like a date stamp or source-control ID. You can do an upgrade like
# CONTAINER=...
# IMAGE=...
# OLD_TAG=...
# NEW_TAG=...

# Shell function to run `docker run`
start_the_container() {
  docker run ... --name "$CONTAINER" "$IMAGE:$1"
}

# Shut down the old container
docker stop "$CONTAINER"
docker rm "$CONTAINER"

# Launch the new container
start_the_container "$NEW_TAG"

# Did it work?
if check_if_container_started_successfully; then
  # Delete the old image
  docker rmi "$IMAGE:$OLD_TAG"
else
  # Roll back
  docker stop "$CONTAINER"
  docker rm "$CONTAINER"
  start_the_container "$OLD_TAG"
  docker rmi "$IMAGE:$NEW_TAG"
fi
The only docker run command here is in the start_the_container shell function; if you have environment-variable or volume-mount settings you can put them there, and the old volumes will get reattached to the new container. You do need to back up volume content, but that can be separate from this upgrade process. You should not need to back up or restore the contents of the container filesystems beyond this.
If you're using Kubernetes, changing the image: in a Deployment spec does this for you automatically. It will actually start the new container(s) before stopping the old one(s) so you get a zero-downtime upgrade; the key parts to doing this are being able to identify the running containers, and connecting them to a load balancer of some sort that can route inbound requests.
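For instance (the deployment and image names here are made up), bumping the tag can be a single command:
kubectl set image deployment/my-app my-app=registry.example.com/my-app:20240501
Kubernetes then performs a rolling update, replacing pods running the old tag with pods running the new one.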
The important caveat here is that you must not use Docker volumes or bind mounts for key parts of your application. Do not use volumes for your application code, or static asset files, or library files. Otherwise the lifecycle of the volume will take precedence over the lifecycle of the image/container, and you'll wind up running old code and can't update things this way. (They make sense for pushing config files in, reading log files out, and storing things like the underlying data for your database.)

How can I save any changes of containers?

If I have an Ubuntu container and I ssh into it and create a file, then after the container is destroyed or rebooted the new file is gone, because Kubernetes loads the Ubuntu image, which does not contain my changes.
My question is: what should I do to save my changes?
I know it can be done because some cloud providers do that.
For example:
ssh ubuntu@POD_IP
mkdir new_file
ls
new_file
reboot
after reboot I have
ssh ubuntu@POD_IP
ls
ls shows nothing
But I want it to save my current state.
And I want to do it automatically.
If I use docker commit I cannot manage my images, because it creates hundreds of images; I would have to make an image for every change.
If I want to use storage I would have to mount /, but Kubernetes does not allow me to mount / and gives me this error:
Error: Error response from daemon: invalid volume specification: '/var/lib/kubelet/pods/26c39eeb-85d7-11e9-933c-7c8bca006fec/volumes/kubernetes.io~rbd/pvc-d66d9039-853d-11e9-8aa3-7c8bca006fec:/': invalid mount config for type "bind": invalid specification: destination can't be '/'
You can try to use docker commit but you will need to ensure that your Kubernetes cluster is picking up the latest image that you committed -
docker commit [OPTIONS] CONTAINER [REPOSITORY[:TAG]]
This is going to create a new image out of your container which you can feed to Kubernetes.
Ref - https://docs.docker.com/engine/reference/commandline/commit/
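For example (the container name, registry, and tag below are placeholders), you would commit the running container directly to a repository your cluster can pull from, then push it:
docker commit my-running-container registry.example.com/ubuntu-custom:v2
docker push registry.example.com/ubuntu-custom:v2
Your pod spec would then reference registry.example.com/ubuntu-custom:v2.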
Update 1 -
In case you want to do it automatically, you might need to store the changed state or files on a centralized file system like NFS, and then mount it into all running containers whenever required, with the relevant permissions.
K8s ref - https://kubernetes.io/docs/concepts/storage/persistent-volumes/
Docker and Kubernetes don't work this way. Never run docker commit. Usually you have very little need for an ssh daemon in a container/pod, and you need to do special work to make the sshd and the main process both run (and extra work to make the sshd actually be secure); your containers will be simpler and safer if you just remove it.
The usual process involves a technique known as immutable infrastructure. You never change code in an existing container; instead, you change a recipe to build a container, and tell the cluster manager that you want an update, and it will tear down and rebuild everything from scratch. To make changes in an application running in a Kubernetes pod, you typically:
Make and test your code change, locally, with no Docker or Kubernetes involved at all.
docker build a new image incorporating your code change. It should have a unique tag, often a date stamp or a source control commit ID.
(optional but recommended) docker run that image locally and run integration tests.
docker push the image to a registry.
Change the image tag in your Kubernetes deployment spec and kubectl apply (or helm upgrade) it.
Often you'll have an automated continuous integration system do steps 2-4, and a continuous deployment system do the last step; you just need to commit and push your tested change.
Note that when you docker run the image locally in step 3, you are running the exact same image your production Kubernetes system will run. Resist the temptation to mount your local source tree into it and try to do development there! If a test fails at this point, reduce it to the simplest failing case, write a unit test for it, and fix it in your local tree. Rebuilding an image shouldn't be especially expensive.
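Put together, steps 2-5 might look roughly like this; the image name, registry, and file names are placeholders, and your CI/CD system will usually run most of it for you:
# Step 2: build an image with a unique tag
docker build -t registry.example.com/my-app:20240501-abc1234 .
# Step 3 (optional): run that exact image locally and test against it
docker run --rm -p 8080:8080 registry.example.com/my-app:20240501-abc1234
# Step 4: push the image to the registry
docker push registry.example.com/my-app:20240501-abc1234
# Step 5: change image: in the deployment spec to the new tag, then apply it
kubectl apply -f deployment.yaml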
Your question hints at the unmodified ubuntu image. Beyond some very early "hello world" type experimentation, there's pretty much no reason to use this anywhere other than the FROM line of a Dockerfile. If you haven't yet, you should work through the official Docker tutorial on building and running custom images, which will be applicable to any clustering system. (Skip all of the later tutorials that cover Docker Swarm, if you've already settled on Kubernetes as an orchestrator.)

Docker container crash - how to retain settings?

When a docker container crashes and I make a new one, how can I ensure I have the settings from the last one in place? The container I am running contains Jenkins only.
You need to use a volume if you want your configuration changes to be persisted.
In the image I am using as a base (usually jenkins/jenkins:2.75-alpine), the configuration lives under: /var/jenkins_home/workspace
so all I need to do is something like:
docker run -v /path/on/host:/var/jenkins_home jenkins/jenkins:2.75-alpine
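A variant on the same idea, if you would rather let Docker manage the storage than pick a host path yourself, is a named volume (the volume name here is arbitrary):
docker volume create jenkins_home
docker run -d -p 8080:8080 -v jenkins_home:/var/jenkins_home jenkins/jenkins:2.75-alpine
Either way, the Jenkins home survives the container being removed and recreated.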

Rebuild container after each change?

The Docker documentation suggests to use the ONBUILD instruction if you have the following scenario:
For example, if your image is a reusable python application builder, it will require application source code to be added in a particular directory, and it might require a build script to be called after that. You can't just call ADD and RUN now, because you don't yet have access to the application source code, and it will be different for each application build. You could simply provide application developers with a boilerplate Dockerfile to copy-paste into their application, but that is inefficient, error-prone and difficult to update because it mixes with application-specific code.
Basically, this all sounds nice and good, but that does mean that I have to re-create the app container every single time I change something, even if it's only a typo.
This doesn't seem to be very efficient, e.g. when creating web applications where you are used to change something, save, and hit refresh in the browser.
How do you deal with this?
does mean that I have to re-create the app container every single time I change something, even if it's only a typo
Not necessarily: you could use the -v option of the docker run command to mount your project files into the container, so you would not have to rebuild the Docker image.
Note that the ONBUILD instruction is meant for cases where a Dockerfile inherits FROM a parent Dockerfile. The ONBUILD instructions found in the parent Dockerfile would be run when Docker builds an image of the child Dockerfile.
This doesn't seem to be very efficient, e.g. when creating web applications where you are used to change something, save, and hit refresh in the browser.
If you are using a Docker container to serve a web application while you are iterating on that application code, then I suggest you make a special Docker image which contains everything needed to run your app except the app code itself.
Then share the directory on your host machine that contains your app code with the directory from which the application files are served within the Docker container.
For instance, if I'm developing a static web site and my workspace is at /home/thomas/workspace/project1/, then I would start a container running nginx with:
docker run -d -p 80:80 -v /home/thomas/workspace/project1/:/usr/local/nginx/html:ro nginx
That way I can change files in /home/thomas/workspace/project1/ and the changes are reflected live without having to rebuild the docker image or even restart the docker container.
