I am able to deploy a Liberty Docker image in a local Docker container and can access the Liberty server.
I pushed the Liberty image to Minishift installed on my system, but when I try to create the Docker container, I get the following error.
Has anyone tried this before? Please share your view.
Log Trace:
unable to write 'random state'
mkdir: cannot create directory '/config/configDropins': Permission denied
/opt/ibm/docker/docker-server: line 32:
/config/configDropins/defaults/keystore.xml: No such file or directory
JVMSHRC155E Error copying username into cache name
JVMSHRC686I Failed to startup shared class cache. Continue without
using it as -Xshareclasses:nonfatal is specified
CWWKE0005E: The runtime environment could not be launched.
CWWKE0044E: There is no write permission for server directory
/opt/ibm/wlp/output/defaultServer
By default OpenShift will run images as an assigned user ID unique to a project. Many available images have been written so that they can only be run as root, even though they have no requirement to run as root.
If you try to run such an image, it will fail, because the directories and files have been set up so that they are writable only by the root user, and the image is being run under a non-root user ID.
Best practice is to write images so that they can be run as an arbitrary user ID. Unfortunately very few people do this, with the result that their images cannot be used in more secure multi-tenant environments for deploying applications in containers.
The OpenShift documentation provides guidelines on how to implement images so that they can run in such more secure environments. See the section 'Support Arbitrary User IDs' in:
https://docs.openshift.org/latest/creating_images/guidelines.html
If the image is built by a third party and they show no interest in changing it so that it works in secure multi-tenant environments, you have a few options.
The first is to create a derived image whose build steps go back and fix the permissions on the directories and files so that the image can be used. Note that you have to be careful about what you change permissions on, because changing the permissions on a file in a derived image causes a complete copy of that file to be made in the new layer. If the files are large, this will start to blow out your image size.
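For example, a minimal sketch of such a derived image for the Liberty case above. The base image tag is an assumption, and the directories are the ones from the error log; the idea follows the OpenShift guideline of making the paths writable by the root group (GID 0), which an arbitrarily assigned UID always belongs to:

FROM websphere-liberty:latest
USER root
# Make the directories the server writes to group-owned by GID 0 and group-writable,
# so an arbitrary assigned user ID can still use them.
RUN chgrp -R 0 /config /opt/ibm/wlp/output && \
    chmod -R g=u /config /opt/ibm/wlp/output
USER 1001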
The second, if you are an admin on the OpenShift cluster, is to relax security on the cluster for the service account the image runs as, so that it is allowed to run the container as root. You should avoid doing this if possible, especially with third-party images which you do not trust. For details on how to do this see:
https://docs.openshift.org/latest/admin_guide/manage_scc.html#enable-images-to-run-with-user-in-the-dockerfile
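For example, granting the anyuid SCC to the service account the pod runs under looks like this (the project name and the use of the default service account are assumptions about your setup):

oc adm policy add-scc-to-user anyuid -z default -n myproject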
A final approach, which can work with some images when the total size of what needs its permissions fixed is small, is to use an init container to copy the directories that need write access into an emptyDir volume, and then in the main container mount that emptyDir volume on top of the copied directory (see the sketch below). This avoids needing to modify the image or enable anyuid. The amount of space available in emptyDir volumes may not be enough if you also have to copy application binaries. This is probably only going to work where the application wants to update config files or create lock files; you wouldn't be able to use it if the same directory holds large amounts of transient file system data such as a cache, a database, or logs.
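A minimal sketch of that pattern for the Liberty image above. The pod, container, and volume names are made up, and it assumes /config is the only directory that needs write access:

apiVersion: v1
kind: Pod
metadata:
  name: liberty-example
spec:
  volumes:
    - name: config-copy
      emptyDir: {}
  initContainers:
    - name: seed-config
      image: websphere-liberty:latest
      # Copy the image's /config contents into the writable emptyDir volume.
      command: ["sh", "-c", "cp -a /config/. /seed/"]
      volumeMounts:
        - name: config-copy
          mountPath: /seed
  containers:
    - name: liberty
      image: websphere-liberty:latest
      # Mount the seeded copy over /config so the server can write to it.
      volumeMounts:
        - name: config-copy
          mountPath: /config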
I am reading an article related to docker images and containers.
It says that a container is an instance of an image. Fair enough. It also says that whenever you make some changes to a container, you should create an image of it which can be used later.
But at the same time it says:
Your work inside a container shouldn't modify the container. Like previously mentioned, files that you need to save past the end of a container's life should be kept in a shared folder. Modifying the contents of a running container eliminates the benefits Docker provides. Because one container might be different from another, suddenly your guarantee that every container will work in every situation is gone.
What I want to know is: what is the problem with modifying a container's contents? Isn't this what containers are for, where we make our own changes and then create an image that will work every time? Even if we are talking about modifying the container's contents itself, and not just adding additional packages, how would it harm anything? The image created from this container will also have these changes, and other containers created from that image will inherit them too.
Treat the container filesystem as ephemeral. You can modify it all you want, but when you delete it, the changes you have made are gone.
This is based on a union filesystem, the most popular/recommended being overlay2 in current releases. The overlay filesystem merges together multiple lower layers of the image with an upper layer of the container. Reads will be performed through those layers until a match is found, either in the container or in the image filesystem. Writes and deletes are only performed in the container layer.
So if you install packages, and make other changes, when the container is deleted and recreated from the same image, you are back to the original image state without any of your changes, including a new/empty container layer in the overlay filesystem.
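You can see this for yourself with any throwaway image (ubuntu here is just an example):

docker run -it --name demo ubuntu bash   # inside, install something (e.g. apt-get update && apt-get install -y curl), then exit
docker diff demo                         # lists the files added/changed in the container's writable layer only
docker rm demo
docker run -it --name demo ubuntu bash   # a fresh container from the same image: curl is gone again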
From a software development workflow, you want to package and release your changes to the application binaries and dependencies as new images, and those images should be created with a Dockerfile. Persistent data should be stored in a volume. Configuration should be injected as either a file, environment variable, or CLI parameter. And temp files should ideally be written to a tmpfs unless those files are large. When done this way, it's even possible to make the root FS of a container read-only, eliminating a large portion of attacks that rely on injecting code to run inside of the container filesystem.
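As a rough sketch of that last point (the image name, volume name, paths, and environment variable are all hypothetical):

# The root filesystem is read-only; temp files go to a tmpfs, persistent data
# to a named volume, and configuration is injected as an environment variable.
docker run -d --read-only --tmpfs /tmp \
  -v app-data:/var/lib/myapp \
  -e MYAPP_DB_HOST=db.example.com \
  registry.example.com/myapp:1.0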
The standard Docker workflow has two parts.
First you build an image:
Check out the relevant source tree from your source control system of choice.
If necessary, run some sort of ahead-of-time build process (compile static assets, build a Java .jar file, run Webpack, ...).
Run docker build, which uses the instructions in a Dockerfile and the content of the local source tree to produce an image.
Optionally docker push the resulting image to a Docker repository (Docker Hub, something cloud-hosted, something privately-run).
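As a deliberately simplified sketch of the build half, assuming a Java application whose .jar was produced in the ahead-of-time build step, and with made-up image and registry names:

# Dockerfile
FROM eclipse-temurin:17-jre
COPY target/myapp.jar /app/myapp.jar
CMD ["java", "-jar", "/app/myapp.jar"]

docker build -t registry.example.com/myteam/myapp:1.2.3 .
docker push registry.example.com/myteam/myapp:1.2.3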
Then you run a container based off that image:
docker run the image name from the build phase. If it's not already on the local system, Docker will pull it from the repository for you.
Note that you don't need the local source tree just to run the image; having the image (or its name in a repository you can reach) is enough. Similarly, there's no "get a shell" or "start the service" in this workflow, just docker run on its own should bring everything up.
(It's helpful in this sense to think of an image the same way you think of a Web browser. You don't download the Chrome source to run it, and you never "get a shell in" your Web browser; it's almost always precompiled and you don't need access to its source, or if you do, you have a real development environment to work on it.)
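Continuing the same hypothetical example, the run phase is a single command, with no source tree in sight:

docker run -d -p 8080:8080 --name myapp registry.example.com/myteam/myapp:1.2.3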
Now: imagine there's some critical widespread security vulnerability in some core piece of software that your application is using (OpenSSL has had a couple, for example). It's prominent enough that all of the Docker base images have already updated. If you're using this workflow, updating your application is very easy: check out the source tree, update the FROM line in the Dockerfile to something newer, rebuild, and you're done.
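Under the assumptions of the earlier sketch, the whole fix is a one-line change followed by a rebuild (the version tags here are purely illustrative):

# Dockerfile: the only change is the base image tag
# before: FROM eclipse-temurin:17.0.1-jre
FROM eclipse-temurin:17.0.2-jre

docker build -t registry.example.com/myteam/myapp:1.2.4 .
docker push registry.example.com/myteam/myapp:1.2.4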
Note that none of this workflow is "make arbitrary changes in a container and commit it". When you're forced to rebuild the image on a new base, you really don't want to be in a position where the binary you're running in production is something somebody produced by manually editing a container, but they've since left the company and there's no record of what they actually did.
In short: never run docker commit. While docker exec is a useful debugging tool it shouldn't be part of your core Docker workflow, and if you're routinely running it to set up containers or are thinking of scripting it, it's better to try to move that setup into the ordinary container startup instead.
In my company, we're using Jira for issue tracking. I need to write an application that integrates with it and synchronizes some data with other services. For testing, I want to have a Docker image of Jira with some initial data.
I'm using the official atlassian/jira-core image. After the initial setup, I saved the state by running docker commit, but unfortunately the new image seems to be empty, and I need to set it up again from scratch.
What should I do to save the initial setup? I want to run tests that will change something within Jira, so reverting it back will be necessary to have a reliable test suite. After I spin up a new container, it should already have a few users and a project with some issues. I don't want to create them manually for each new instance. Also, the setup takes a lot of time, which is not acceptable for testing.
To get persistent storage you need to mount /var/atlassian/jira from your host system. /var/atlassian/jira is where the configuration and data are stored, so you do not need to commit: whenever you spin up a new container with /var/atlassian/jira mounted, it will have all the configuration that you set previously.
docker run --detach -v /you_host_path/jira:/var/atlassian/jira --publish 8080:8080 cptactionhank/atlassian-jira:latest
For logs you can mount /opt/atlassian/jira/logs.
The above is valid if you are running the latest tag; you can also explore the relevant Dockerfile, which declares:
Set volume mount points for installation and home directory. Changes to the home directory needs to be persisted as well as parts of the installation directory due to eg. logs.
VOLUME ["/var/atlassian/jira", "/opt/atlassian/jira/logs"]
atlassian-jira-dockerfile
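So, using the same image and made-up host paths as above, you could persist both declared volumes at once:

docker run --detach \
  --publish 8080:8080 \
  -v /your_host_path/jira:/var/atlassian/jira \
  -v /your_host_path/jira-logs:/opt/atlassian/jira/logs \
  cptactionhank/atlassian-jira:latest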
Look at the entrypoint.sh; the comments from there are:
check if the server.xml file has been changed since the creation of this Docker image. If the file has been changed the entrypoint script will not perform modifications to the configuration file.
so I think you need to provide your server.xml to stop the init process...
What are the best practices for updating a container in the following scenario?
I have images built for my web app project, and I publish new images based on updated source code once a month.
But my web app generates new files, or updates existing ones, while running in the container. For example, the app creates a new XML file under the user folder for each web user. Another example is files uploaded by users.
I want to keep these files when I run a new, updated image, without losing them.
/bin/
    /first.dll
    /second.dll
/other-sources/
    /some.cs
    /other.cs
/user/
    /user-1.xml
    /user-2.xml
/uploads/
    /images/
        /image-1.jpg
/web.config
Should I use the volume feature of Docker? Is there another strategy?
Short answer, yes, you do want a volume for these directories. More specifically, two volumes: /user and /uploads.
This gets into a fundamental practice of image and container design that is best done by dividing your application into three parts:
The application code, binaries, libraries, and other runtime dependencies.
The persistent data that the application accesses and creates.
The configuration that modifies how the application runs, particularly in different environments with the same code.
Each of these parts should go in a different place in docker.
The first part, the code and binaries, goes in your image. This is what you ship to run your container on different nodes in docker, and what you store in a registry for later reuse.
The second part, your persistent data, gets stored in a volume. There are two main types of volumes to pick from: a named volume and a host volume (aka bind mount). A named volume has a particular feature that improves portability: it will be initialized to the contents of your image at the volume location when the volume is created for the first time. This initialization includes directory and file permissions and ownership, and it can be used to seed your volume with an initial state.
The host volume (bind mount) is just a directory mounted from the docker host into the container: you get exactly what was on the host, including the uid/gid of the files and directories, with no initialization procedure. The host volume is very easy for developers to access, but it lacks portability if you move to a multi-node swarm cluster, and it suffers from uid/gid values on the host mapping to different users inside the container, since usernames inside the container can be different for the same IDs.
Any files you write inside the container that are not written to a volume should be considered disposable and will be lost when you recreate the container to update to a new image. And any directories you define as a volume should be considered owned by that volume and will not receive updates from the image when you replace the container.
The last piece, configuration, is often overlooked but equally important. This is anything injected into the application at startup to tell it where to connect for external data, config files that alter its behavior, and anything that needs to be separated out so that the same image is reusable in different environments. This is how you get portability from development to production with the same image, and how you get reusability of publicly provided images. The configuration is injected with environment variables, command line parameters, bind mounts of a config file (when you run on a single node), and configs + secrets, which are essentially the same bind mount of a config file but stored in docker's swarm rather than locally on a single host. In your situation, the /web.config looks suspiciously like a config file that you'll want to move out of the image and inject as a bind mount or swarm config.
To put these all together, you will want a compose file that defines your image, the volumes to use, and any configs or environment variables to set.
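A rough sketch of what that compose file could look like. All names are hypothetical, the two volumes follow the /user and /uploads paths mentioned above, and /web.config is injected as a config rather than baked into the image:

version: "3.8"
services:
  web:
    image: registry.example.com/mywebapp:2024.05   # rebuilt and re-tagged each monthly release
    ports:
      - "80:80"
    volumes:
      - user-data:/user          # named volume for the per-user xml files
      - uploads:/uploads         # named volume for uploaded images
    configs:
      - source: web_config
        target: /web.config      # environment-specific config kept out of the image
volumes:
  user-data:
  uploads:
configs:
  web_config:
    file: ./web.config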
I have a Dockerfile which builds an image that provides a complicated tool-chain environment for compiling a project on a volume mounted from the host machine's file system. Another reason for this setup is that I don't have a lot of space in the image.
The Dockerfile builds my tool-chain in the OS image and then prepares the source by downloading packages to be placed on the host's shared volume. Normally, from there I'd log into the container and execute commands to build. And this is the problem: I can download the source in the Dockerfile, but how would I then get it onto the shared volume?
Basically I have ...
ADD http://.../file mydir
VOLUME /home/me/mydir
But then of course, I get the error 'cannot mount volume over existing file ..."
Am I going about this wrong?
You're going about it wrong, but you already suspected that.
If you want the source files to reside on the host filesystem, get rid of the VOLUME directive in your Dockerfile, and don't try to download the source files at build time. This is something you want to do at run time. You probably want to provision your image with a pair of scripts:
One that downloads the files to a specific location, say /build.
Another that actually runs the build process.
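A rough sketch of how the image side could look; the base image, the package list, and the script names are all assumptions:

# Dockerfile (note: no VOLUME directive and no ADD of the sources)
FROM ubuntu:22.04
RUN apt-get update && apt-get install -y build-essential curl
COPY fetch-sources build-sources /usr/local/bin/
RUN chmod +x /usr/local/bin/fetch-sources /usr/local/bin/build-sources

where fetch-sources is something along the lines of:

#!/bin/sh
# Download and unpack the sources into /build, which is bind-mounted at run time.
curl -L -o /build/sources.tar.gz http://example.com/file
tar -xzf /build/sources.tar.gz -C /build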
With these in place, you could first download the source files to a location on the host filesystem, as in:
docker run -v /path/on/my/host:/build myimage fetch-sources
And then you can build them by running:
docker run -v /path/on/my/host:/build myimage build-sources
With this model:
You're no longer trying to muck about with volumes during the image build process. This is almost never what you want, since data stored in a volume is explicitly excluded from the image, and the build process doesn't permit you to conveniently mount host directories inside the container.
You are able to download the files into a persistent location on the host, where they will be available to you for editing, or re-building, or whatever.
You can run the build process multiple times without needing to re-download the source files every time.
I think this will do pretty much what you want, but if it doesn't meet your needs, or if something is unclear, let me know.
Question about Docker best practices/intended use:
I have docker working, and it's cool. I run a PaaS company, and my intent is maybe to use docker to run individual instances of our service for a given user.
So now I have an image that I've created that contains all the stuff for our service, and I can run it. But once I want to set it up for a specific user, there's a set of config files that I will need to modify for each user's instance.
So the question is: should that be part of my image filesystem, meaning I then create a new image (based on my current image, but with their specific config files inside it) for each user?
Or should I put those on the host filesystem in a set of directories, and map the host filesystem config files into the correct running container for each user (hence, having only one image shared among all users)?
Modern PaaS systems favour building an image for each customer, creating versioned copies of both software and configuration. This follows the "Build, release, run" recommendation of the 12-factor app website:
http://12factor.net/
A Docker-based example is Deis. It uses Heroku buildpacks to customize the software application environment, and the environment settings are also baked into a Docker image. At run time these images are run by Chef on each application server.
This approach works well with Docker, because images are easy to build. The challenge, I think, is managing the Docker images, something the Docker registry is designed to support.
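Under that model, the per-user image from the question can be as small as a derived Dockerfile that layers the customer's config on a shared, versioned base (all names here are invented):

# Dockerfile.acme
FROM registry.example.com/myservice:1.4.0
COPY customers/acme/config/ /etc/myservice/

docker build -t registry.example.com/myservice-acme:1.4.0 -f Dockerfile.acme .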