Redeploying a Liberty .war app running in Liberty Docker - docker

I am using the Liberty Docker image to test an alternative to the Liberty boilerplate in Bluemix: https://hub.docker.com/_/websphere-liberty/
Creating the image is a slow process, so I wonder if there is a recommended approach to redeploying an application .war. Currently I have to rebuild the image every time I make a change to my app.
The app is deployed in the /config/apps/ directory along with some shared libraries, a bootstrap.properties file, ...

Have you actually looked at the Usage section on that page?
Point 1 describes it:
A .WAR file can therefore be mounted in the dropins directory of this
server and run. The following example starts a container in the
background running a .WAR file from the host file system
$ docker run -d -p 80:9080 -p 443:9443 \
-v /tmp/DefaultServletEngine/dropins/Sample1.war:/config/dropins/Sample1.war \
websphere-liberty:webProfile7
This should allow you to dynamically update the application without needing to rebuild the image.
That page also has other examples showing how to mount the whole config folder.
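For reference, a sketch of mounting a whole configuration directory instead of a single .war; the host path /tmp/DefaultServletEngine is just an example location that would hold server.xml, the apps/ directory, shared libraries, and bootstrap.properties:
$ docker run -d -p 80:9080 -p 443:9443 \
-v /tmp/DefaultServletEngine:/config \
websphere-liberty:webProfile7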

Related

Run Jira in docker with initial setup snapshot

In my company, we're using Jira for issue tracking. I need to write an application that integrates with it and synchronizes some data with other services. For testing, I want to have a Docker image of Jira with some initial data.
I'm using the official atlassian/jira-core image. After the initial setup, I saved a state by running docker commit, but unfortunately the new image seems to be empty, and I need to set it up again from scratch.
What should I do to save the initial setup? I want to run tests that will change something within Jira, so reverting it back will be necessary to have a reliable test suite. Every new container I spin up should already have a few users and a project with some issues; I don't want to create them manually for each new instance. Also, the setup takes a lot of time, which is not acceptable for testing.
To get persistent storage you need to mount /var/atlassian/jira from your host system. That directory is used for storing your configuration, etc., so you do not need to commit; whenever you spin up a new container with /var/atlassian/jira mounted, it will have all the configuration you set previously.
docker run --detach -v /your_host_path/jira:/var/atlassian/jira --publish 8080:8080 cptactionhank/atlassian-jira:latest
For logs you can mount /opt/atlassian/jira/logs.
The above is valid if you are running the latest tag; otherwise you can explore the relevant Dockerfile.
Set volume mount points for installation and home directory. Changes to the home directory needs to be persisted as well as parts of the installation directory due to eg. logs.
VOLUME ["/var/atlassian/jira", "/opt/atlassian/jira/logs"]
atlassian-jira-dockerfile
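Putting the home and log mounts together, a sketch (the host paths are placeholders you would replace with real directories):
docker run --detach \
-v /your_host_path/jira:/var/atlassian/jira \
-v /your_host_path/jira-logs:/opt/atlassian/jira/logs \
--publish 8080:8080 cptactionhank/atlassian-jira:latest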
Look at the entrypoint.sh; the comments there are:
check if the server.xml file has been changed since the creation of
this Docker image. If the file has been changed the entrypoint script
will not perform modifications to the configuration file.
So I think you need to provide your own server.xml to stop the init process from modifying it...
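One way to provide it is a bind mount over the file. Note that the in-container path below is an assumption based on the usual Jira/Tomcat layout; check the image's Dockerfile and entrypoint.sh to confirm it before relying on this sketch:
docker run --detach \
-v /your_host_path/server.xml:/opt/atlassian/jira/conf/server.xml \
-v /your_host_path/jira:/var/atlassian/jira \
--publish 8080:8080 cptactionhank/atlassian-jira:latest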

A safe directory that can be used in Docker and a development environment

I have a webapp which needs to store temporary files wherever it runs.
Since I want the app to execute both in Docker and in a development environment, I need a safe directory that can be created in the development environment (usually macOS) and in the Docker container.
I used /usr/temp in the container, but on a Mac this directory is inaccessible.
What would be the best, safest directory to use?
Thank you
If the environment variable $TMPDIR is set, it's a standard place for temporary files, and if it's not set, it usually defaults to /tmp. (On MacOS it points to a per-user directory that quickly gets filled with clutter.) You don't mention what language you're using, but most have a specific function or module to create a file "in the usual temporary directory", which is this one.
In general environment variables are a good way to encapsulate differences between your development and various deployment environments and it makes sense here.
Also remember, on the one hand, that anything in Docker filesystem space you don't explicitly persist will be lost when the container exits, and on the other, that if the container stays running for a long time, there isn't any sort of automated /tmp cleaner, so you'll need to properly manage the lifecycle of these files. Remember too that you have near-complete control over the container's filesystem layout, and if you need some specific directory to exist you can RUN mkdir it in your Dockerfile.
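A minimal Dockerfile sketch of that last point, assuming a Node base image (the question later mentions NodeJS); the directory name is illustrative:
FROM node:18
# Create a writable temp directory inside the image and point TMPDIR at it,
# so language-level "create a temp file" helpers pick it up.
RUN mkdir -p /app/tmp
ENV TMPDIR=/app/tmp
WORKDIR /app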
Docker provides the volumes concept to help you sync data between host and container.
In your case, let's say you want to sync /home/user/data from your host to /usr/temp in the container. You can do so like this:
docker run -itd -v /home/user/data:/usr/temp --name mycontainer imagename
Once the container is up and running, add some files to the data folder on the host and they will be available inside the container in the temp folder, and the other way around.
I ended up having the following in my Dockerfile
ENV HOME /usr
WORKDIR $HOME/app
RUN mkdir -p $HOME/temp/abc \
$HOME/temp/xyz
And in my configuration files, I used /temp/abc or /temp/xyz to point to the destination folders.
Finally, in my application code I made sure to prepend any path resolution with process.env['HOME'] (NodeJS).
The above works well both in a development environment, since $HOME is set by default on a Mac, and in production, which runs the Docker image above.
Thanks everyone!

Accessing Files on a Windows Docker Container Easily

Summary
So I'm trying to figure out a way to use Docker to spin up testing environments for customers fairly easily. Basically, I've got a customized piece of software that I want to install into a Windows Docker container (microsoft/windowsservercore), and I need to be able to access the program folder for that software (C:\Program Files\SOFTWARE_NAME), as it holds some logs, imports/exports, and other miscellaneous configuration files. The installation part was easy, which I figured out after a few hours of messing around with Docker and learning how it works, but transferring files in a simple manner is proving far more difficult than I would expect. I'm well aware of the docker cp command, but I'd like something that lets the files be viewed in a file browser so testers can quickly and easily view log/configuration files from the container.
Background (what I've tried):
I've spent 20+ hours monkeying around with running an SSH server in the Docker container so I could just ssh in and move files back and forth, but I've had no luck. I've spent most of my time trying to configure OpenSSH, and I can get it installed, but there appears to be something wrong with the default configuration file provided with my installation, as I can't get it up and running unless I start it manually from the command line by running sshd -d. Strangely, this runs just fine, but it isn't really a viable solution as it is running in debug mode and shuts down as soon as the connection is closed. I can provide more detail on what I've tested with this, but it seems like it might be a dead end (even though I feel like this should be extremely simple). I've followed every guide I can find (though half are specific to Linux containers) and haven't gotten any of them to work, and half the posts I've found just say "why would you want to use ssh when you can just use the built-in docker commands". I want to use ssh because it's simpler from an end user's perspective, and I'd rather tell a tester to ssh to a particular IP than make them interact with Docker via the command line.
EDIT: Using OpenSSH
I start the server using net start sshd, which reports that it started successfully; however, the service stops immediately if I haven't generated at least an RSA or DSA key using:
ssh-keygen.exe -f "C:\\Program Files\\OpenSSH-Win64/./ssh_host_rsa_key" -t rsa
And modifying the permissions using:
icacls "C:\Program Files\OpenSSH-Win64/" /grant sshd:(OI)(CI)F /T
and
icacls "C:\Program Files\OpenSSH-Win64/" /grant ContainerAdministrator:(OI)(CI)F /T
Again, I'm using the default supplied sshd_config file, but I've tried just about every adjustment of those settings I can find and none of them help.
I also attempted to set up volumes to do this, but because the installation of our software is done at image build time in Docker, the folder that I want to map as a volume is already populated with files, which seems to make Docker fail when I try to start the container with the volume attached. This section of documentation seems to say this should be possible, but I can't get it to work; I keep getting errors when I try to start the container saying "the directory is not empty".
EDIT: Command used:
docker run -it -d -p 9999:9092 --mount source=my_volume,destination=C:/temp my_container
Running this on a ProxMox VM.
At this point, I'm running out of ideas, and something that I feel like should be incredibly simple is taking me far too many hours to figure out. It particularly frustrates me that I see so many blog posts saying "Just use the built in docker cp command!" when that is honestly a pretty bad solution when you're going to be browsing lots of files and viewing/editing them. I really need a method that allows the files to be viewed in a file browser/notepad++.
Is there something obvious here that I'm missing? How is this so difficult? Any help is appreciated.
So after a fair bit more troubleshooting, I was unable to get the docker volume to initialize on an already populated folder, even though the documentation suggests it should be possible.
So, I instead decided to try to start the container with the volume linked to an empty folder, and then start the installation script for the program after the container is running, so the folder populates after the volume is already linked. This worked perfectly! There's a bit of weirdness if you leave the files in the volume and then try to restart the container, as it will overwrite most of the files, but things like logs and files not created by the installer will remain, so we'll have to figure out some process for managing that, but it works just like I need it to, and then I can use windows sharing to access that volume folder from anywhere on the network.
Here's how I got it working, it's actually very simple.
So in my dockerfile, I added a batch script that unzips the installation DVD that is copied to the container, and runs the installer after extracting. I then used the CMD option to run this on container start:
Dockerfile
FROM microsoft/windowsservercore
ADD DVD.zip C:\\resources\\DVD.zip
ADD config.bat C:\\resources\\config.bat
CMD "C:\resources\config.bat" && cmd
Then I build the container without anything special:
docker build -t my_container:latest .
And run it with the attachment to the volume:
docker run -it -d -p 9999:9092 --mount source=my_volume,destination="C:/Program Files (x86)/{PROGRAM NAME}" my_container
And that's it. Unfortunately, the container takes a little longer to start (it does build faster though, for what that's worth, as it isn't running the installer in the build), and the program isn't installed/running for another 5 minutes or so after the container does start, but it works!
I can provide more details if anyone needs them, but most of the rest is implementation specific and fairly straightforward.
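If you need the host-side path that backs the named volume (for example, to set up the Windows share mentioned above), docker volume inspect will print it; the "Mountpoint" field in its JSON output is the folder on the Docker host that holds the container's files:
docker volume inspect my_volume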
Try this with Docker Compose. Unfortunately, I cannot test it, as I'm using a Mac and it's not a "supported platform" (way to go, Windows). See if this works; if not, try the volume line like this instead: - ./my_volume:C:/tmp/
Dockerfile
FROM microsoft/windowsservercore
# need to escape \
WORKDIR C:\\tmp\\
# Add the program from host machine to container
ADD ["<source>", "C:\tmp"]
# Normally used with web servers
# EXPOSE 80
# Running the program
CMD ["C:\tmp\program.exe", "any-parameter"]
Docker Compose
The docker-compose.yml should ideally be in the parent folder.
version: "3"
services:
windows:
build: ./folder-of-Dockerfile
volume:
- type: bind
source: ./my_volume
target: C:/tmp/
ports:
- 9999:9092
Folder structure
|---docker-compose.yml
|
|---folder-of-Dockerfile
    |
    |---Dockerfile
Just run docker-compose up to build and start the container. Use -d for detached mode, but only once you know it's working properly.
Useful link: Manage Windows Dockerfile

EC2 User Data runs script but does not boot up application

I have a custom AMI that has my app directory and a Docker image. I'm setting up an Auto Scaling Group with a Launch Configuration to create new instances. I have a User Data script to boot up the application. This is the code:
#!/bin/bash
docker-compose -f /home/ec2-user/app/docker-compose.yaml up -d app
The script runs, but the app doesn't start. I can SSH in and run the app manually, which works. Looking at the cloud-init-output.log file, I see the following:
/var/lib/cloud/instance/scripts/part-001: line 4: docker-compose: command not found
docker-compose is available when I SSH in, as I installed it before creating my custom AMI.
Anything I'm missing?
It doesn't matter as far as your best-practice question goes; either way would suffice.
HakRou is right, however.
The bootstrap is operating under a different security context / shell environment, so you need to cater for that.
You could just put the entire path to the binary file as well such as:
/usr/local/bin/docker-compose -f /home/ec2-user/app/docker-compose.yaml up -d app
and see how that goes.
docker-compose might have been available to the user you used to SSH into your instance (like ec2-user, ubuntu or admin), but it might not be available to root, and root is the one used for user data when Amazon spins up a new instance.
So you might want to add a soft link to docker-compose in one of the folders on root's $PATH, /usr/bin for example.
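A minimal user-data sketch combining both suggestions, assuming docker-compose was installed under /usr/local/bin (adjust the path to wherever it actually lives on your AMI):
#!/bin/bash
# make docker-compose visible on root's PATH, then bring the app up
ln -sf /usr/local/bin/docker-compose /usr/bin/docker-compose
/usr/bin/docker-compose -f /home/ec2-user/app/docker-compose.yaml up -d app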

Rebuild container after each change?

The Docker documentation suggests using the ONBUILD instruction if you have the following scenario:
For example, if your image is a reusable python application builder, it will require application source code to be added in a particular directory, and it might require a build script to be called after that. You can't just call ADD and RUN now, because you don't yet have access to the application source code, and it will be different for each application build. You could simply provide application developers with a boilerplate Dockerfile to copy-paste into their application, but that is inefficient, error-prone and difficult to update because it mixes with application-specific code.
Basically, this all sounds nice and good, but it does mean that I have to re-create the app container every single time I change something, even if it's only a typo.
This doesn't seem to be very efficient, e.g. when creating web applications where you are used to change something, save, and hit refresh in the browser.
How do you deal with this?
does mean that I have to re-create the app container every single time I change something, even if it's only a typo
Not necessarily: you could use the -v option of the docker run command to inject your project files into a container, so you would not have to rebuild the Docker image.
Note that the ONBUILD instruction is meant for cases where a Dockerfile inherits FROM a parent image built from another Dockerfile. The ONBUILD instructions found in the parent Dockerfile are run when Docker builds an image from the child Dockerfile.
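A short sketch of how that plays out; the image names, requirements.txt, and app.py are hypothetical:
# Parent Dockerfile, built once as e.g. my-python-builder
FROM python:3
WORKDIR /app
ONBUILD COPY . /app
ONBUILD RUN pip install -r requirements.txt

# Child Dockerfile for each application; the ONBUILD steps above run during this build
FROM my-python-builder
CMD ["python", "app.py"]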
This doesn't seem to be very efficient, e.g. when creating web applications where you are used to change something, save, and hit refresh in the browser.
If you are using a Docker container to serve a web application while you are iterating on that application's code, then I suggest you build a special Docker image which contains everything needed to run your app except the app code itself.
Then share the directory on your host machine that contains the app code with the directory from which the application files are served within the Docker container.
For instance, if I'm developing a static web site and my workspace is at /home/thomas/workspace/project1/, then I would start a container running nginx with:
docker run -d -p 80:80 -v /home/thomas/workspace/project1/:/usr/local/nginx/html:ro nginx
That way I can change files in /home/thomas/workspace/project1/ and the changes are reflected live without having to rebuild the docker image or even restart the docker container.
