How to disable login authentication on an Infinispan server in a Docker container - docker

I do not want login authentication on my Infinispan server started in a Docker container.
We did the following to create the Infinispan server:
Take the official Infinispan base image (infinispan/server:10.1.8.Final) to create the Infinispan server.
During Infinispan server creation we need to copy the following two files into the container:
cache.xml to /data/sk/server/infinispan-server-10.1.8.Final/server/data
infinispan.xml to /data/sk/server/infinispan-server-10.1.8.Final/server/conf
cache.xml is copied successfully and its content is reflected correctly in the Infinispan server UI.
infinispan.xml does not persist.
During container creation, our infinispan.xml is overridden by the file of the same name present in the base image.

You need to copy your configuration to a different directory and pass it as an argument when starting the server. Details are in the Infinispan Images repository.
P.S. I am not sure whether this works in such an old image.
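A sketch of that approach, assuming the image accepts a server configuration file via the `-c` argument and that `/user-config` is a directory the entrypoint leaves untouched (both taken from the Infinispan image documentation; verify them against the 10.1.8.Final image before relying on this):

```shell
# Mount the custom configuration into a directory the stock
# entrypoint does not overwrite, then point the server at it.
# The -c flag and /user-config path are assumptions from the
# Infinispan image docs; check them for your image version.
docker run -d --name infinispan \
  -p 11222:11222 \
  -v "$(pwd)/infinispan.xml:/user-config/infinispan.xml" \
  infinispan/server:10.1.8.Final \
  -c /user-config/infinispan.xml
```

Because the file lives outside the image's own conf directory, it is no longer overridden by the copy baked into the base image.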

Related

Deploying dockerized web app into a closed system

We have to deploy a dockerized web app into a closed external system (our client's server).
(Our image is made of gunicorn, nginx, and a django-python web app.)
There are a few options I have already considered:
Option 1) Use a Docker registry: push the image to a registry, pull it from the client's system, and run docker-compose up with the pulled image.
Option 2) docker save/load .tar files: docker save the image in the local dev environment, move the .tar file to the client's system, and run docker load there.
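For reference, option 2 usually looks like the following sketch (image name and tag are placeholders); gzipping the archive keeps the file you carry into the closed network smaller:

```shell
# On the dev machine: export the image and compress it.
docker save myapp:1.2.0 | gzip > myapp-1.2.0.tar.gz

# Move myapp-1.2.0.tar.gz to the client's server (USB, SFTP, ...),
# then on the client's server: import it and start the stack.
gunzip -c myapp-1.2.0.tar.gz | docker load
docker-compose up -d
```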
Our current approach:
We want to move the source code inside a Docker image (if possible).
We can't make our private Docker registry public --yet-- (so option 1 is gone).
The client's servers are only accessible from their internal local network (no connection to any other external network).
We don't want to copy all the files when we make an update to our app; what we want is to somehow detect the diff or changes in the Docker image and copy/move/update only the changed parts of the app to the client's server (so option 2 is gone too).
Is there any better way to deploy to client's server with the approach explained above?
PS: I'm currently checking "docker commit": what we could do is docker load our base image onto the client's server, start a container with that image, and when we have an update, just copy our changed files into that container's file system, then docker commit (in order to keep the changed version of the container). But the thing I don't like about that option is that we would need to keep track of the changes ourselves and then move the changed files (like updated .py or .html files) to the client's server.
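The workflow from the PS would look roughly like this (image, container, and file names are placeholders), with the caveat already noted: you have to track the changed files yourself:

```shell
# On the client's server: start a container from the base image once.
docker run -d --name myapp-live myapp:base

# For each update: copy only the changed files into the running
# container, then commit the result as a new tagged image.
docker cp ./changed/views.py myapp-live:/app/views.py
docker commit myapp-live myapp:updated
```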
Thanks in advance

Run Jira in docker with initial setup snapshot

In my company, we're using Jira for issue tracking. I need to write an application that integrates with it and synchronizes some data with other services. For testing, I want to have a Docker image of Jira with some initial data.
I'm using the official atlassian/jira-core image. After the initial setup, I saved the state by running docker commit, but unfortunately the new image seems to be empty, and I need to set it up again from scratch.
What should I do to save the initial setup? I want to run tests that will change something within Jira, so reverting it back will be necessary to have a reliable test suite. After I spin up a new container, it should already have a few users created and a project with some issues. I don't want to create these manually for each new instance. Also, the setup takes a lot of time, which is not acceptable for testing.
To get persistent storage you need to mount /var/atlassian/jira on your host system. That directory stores your configuration etc., so you do not need to commit: whenever you spin up a new container with /var/atlassian/jira mounted, it will have all the configuration that you set previously.
docker run --detach -v /your_host_path/jira:/var/atlassian/jira --publish 8080:8080 cptactionhank/atlassian-jira:latest
For logs you can mount /opt/atlassian/jira/logs.
The above is valid if you are running with the latest tag; otherwise, explore the relevant Dockerfile:
"Set volume mount points for installation and home directory. Changes to the home directory need to be persisted, as well as parts of the installation directory due to e.g. logs."
VOLUME ["/var/atlassian/jira", "/opt/atlassian/jira/logs"]
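Combining both mount points, a run command along these lines persists the home directory and the logs on the host (host paths are placeholders):

```shell
# Jira home and logs both live on the host, so containers are
# disposable and the configured state survives re-creation.
docker run --detach \
  -v /your_host_path/jira:/var/atlassian/jira \
  -v /your_host_path/jira-logs:/opt/atlassian/jira/logs \
  --publish 8080:8080 \
  cptactionhank/atlassian-jira:latest
```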
atlassian-jira-dockerfile
Look at entrypoint.sh; the comments from there are:
"Check if the server.xml file has been changed since the creation of this Docker image. If the file has been changed the entrypoint script will not perform modifications to the configuration file."
So I think you need to provide your own server.xml to stop the init process...
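One way to provide it is to bind-mount your own server.xml over the one in the image, so the entrypoint sees an already-modified file and skips its changes. The in-container path is an assumption based on the usual Jira layout; check it against the image's entrypoint.sh:

```shell
# Mounting the file read-only also guards against the entrypoint
# rewriting it. The conf path inside the container is an assumption
# about this image's layout; verify it in its Dockerfile/entrypoint.
docker run --detach \
  -v "$(pwd)/server.xml:/opt/atlassian/jira/conf/server.xml:ro" \
  -v /your_host_path/jira:/var/atlassian/jira \
  --publish 8080:8080 \
  cptactionhank/atlassian-jira:latest
```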

Is there any way of changing a base image or a service without having to build the whole application again?

I am new to Docker and I have faced some complications understanding how big multi-level applications work with Docker.
I want to use Visual Studio and ASP.NET Core, and I have several questions:
If I make a trivial change to one service, do I need to build the whole application and make a new image of it?
How am I supposed to inform the user of the change?
Do I have to deploy the whole app again in case of a little change?
If production images do not have access to the source code, how is the developer supposed to make changes to the production image?
If my client is not willing to put their database in a container, can I use a Docker network to make a connection with that external SQL server?
If I make a trivial change to one service, do I need to build the whole application and make a new image of it?
Do I have to deploy the whole app again in case of a little change?
For each step in your Dockerfile, Docker generates an intermediate layer. If a step is the same as before, Docker uses the cached layer. If you change a command (e.g. add a package, copy new source code lines, ...), that step produces a new intermediate layer, and every step after it runs on the new layer, so the cache can no longer be used.
So yes, you have to. But you can optimize your build using intermediate layers and multi-stage builds (see the links below).
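As an illustration of cache-friendly ordering for an ASP.NET Core service (file and image names are placeholders; the Dockerfile is written from a heredoc only to keep the example self-contained): restoring packages before copying the full source means a source-only change does not invalidate the restore layer.

```shell
# Normally the Dockerfile lives in the repo; the heredoc is just
# for a self-contained example.
cat > Dockerfile <<'EOF'
# Build stage: copy only the project file and restore first,
# so the dependency layer is cached across source changes.
FROM mcr.microsoft.com/dotnet/sdk:8.0 AS build
WORKDIR /src
COPY MyApp.csproj .
RUN dotnet restore
# Source changes only invalidate layers from here on.
COPY . .
RUN dotnet publish -c Release -o /out

# Runtime stage: ship only the published output.
FROM mcr.microsoft.com/dotnet/aspnet:8.0
WORKDIR /app
COPY --from=build /out .
ENTRYPOINT ["dotnet", "MyApp.dll"]
EOF
docker build -t myapp:latest .
```

With this ordering, a trivial code change rebuilds only the layers after `COPY . .`, and the multi-stage split keeps the runtime image small.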
How am I supposed to inform the user of the change?
First: use tags for versioning. This helps a lot. But how to inform users is a question of your use case, not of Docker. It is the same as releasing a new "normal" software version.
Maybe you have a CI pipeline and can automate this step. Or you have access to your customer's systems and can deploy the new container automatically.
If my client is not willing to put their database in a container, can I use a Docker network to make a connection with that external SQL server?
Docker networks are for the (isolated) communication between Docker containers. You can use the "normal" way to connect from a client to an external database. You may have to publish a port outside your container (see the Docker references). And be aware that the address localhost refers to the inside of the container, not to your host system.
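For example, the external server's address can simply be passed to the container as configuration; no Docker network is needed to reach a host outside Docker, only normal outbound connectivity (the environment variable name, hostname, and credentials are placeholders):

```shell
# The container reaches the external SQL Server over the normal
# network; the connection string is ordinary app configuration.
docker run -d \
  -e "ConnectionStrings__Default=Server=db.client.local,1433;Database=app;User Id=app;Password=..." \
  -p 80:80 \
  myapp:latest
```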
These blog posts may be helpful for you:
[1] https://andrewlock.net/caching-docker-layers-on-serverless-build-hosts-with-multi-stage-builds---target,-and---cache-from/
[2] https://www.busbud.com/blog/going-docker-multi-stage-builds/

Web App for Containers displayed only default page when deployed using docker image

I created a container registry and then pushed the Docker image of my web app to that registry. I created a container instance and it is working fine.
Now I have to deploy this image to a Web App. There are two options which I found:
First, I can choose the 'Deploy to web app' option directly where the Docker image is stored.
Second, I can create a 'Web App for Containers' resource using the same Docker image.
a.) When I tried the first option: after it deployed successfully and I ran the web app, it displayed the default page. When I connected via FTP to check the files in the wwwroot folder, only the 'hostingstart.html' file was present.
b.) When I tried the second option: after it deployed successfully and I ran the web app, the page displayed the message 'The Web App's container could not start. Please try again in few minutes. If you are an administrator of this Web App please verify your container settings and go to Azure Portal to review the diagnostic logs'.
When I connected via FTP to check the files in the wwwroot folder, only the 'hostingstart.html' file was present.
The Docker image has no issue, as I am able to run it locally and on a container instance.
My first question is: are the above two methods the same thing? Because with the first method it looks like a normal web app with the Kudu/App Service Editor option available, but with the second method I do not find Kudu/App Service Editor support.
My second question is: since I want to implement Web App for Containers, is the second option the one I should go for?
Any idea what I am missing?
As shared by the original poster in the comments, retrying pushing the image to the registry resolved the issue.
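For reference, a typical (re-)push to an Azure Container Registry looks like this (registry and image names are placeholders):

```shell
# Log in to the registry, retag the local image, and push again.
az acr login --name myregistry
docker tag mywebapp:latest myregistry.azurecr.io/mywebapp:latest
docker push myregistry.azurecr.io/mywebapp:latest
```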

Nexus repository configuration with dockerization

Is it possible to configure Nexus repository manager (3.9.0) in a way which is suitable for a Docker based containerized environment?
We need a customized Docker image which contains basic configuration for the Nexus repository manager, like project-specific repositories and LDAP-based authentication for users. We found that most of the Nexus configuration lives in the database (OrientDB) used by Nexus. We also found that there is a REST interface offered by Nexus for handling configuration by 3rd parties, but we found no configuration exporter/importer capabilities besides backup (directory servers have LDIF, application servers have command line scripts, etc.).
Right now we export the configuration as backup files, and during the customized Docker image build we copy those backup files back into the file system in the container:
FROM sonatype/nexus3:latest
[...]
# Copy backup files
COPY backup/* ${NEXUS_DATA}/backup/
When the container starts up, it will pick up the backup files and Nexus will be configured the way we need. However, it would be much better if there were a way that allowed us to handle these configurations via a set of config files.
All that data is stored under /nexus-data, so you can create an initial Docker container with a Docker volume or a host directory that keeps all that data. After you have preconfigured that instance, you can distribute your customized Docker image together with that Docker volume containing the Nexus data. Or, if you used a host directory, you can simply copy over all that data in a similar fashion as you do now, but use the /nexus-data directory instead.
You can find more information at DockerHub under Persistent Data.
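A minimal sketch of that approach with a named volume (volume and container names are placeholders): configure the first instance once, and every later container started with the same volume comes up fully configured:

```shell
# First run: configure repositories, LDAP, users via UI or REST;
# everything lands in the nexus-data volume.
docker volume create nexus-data
docker run -d --name nexus \
  -p 8081:8081 \
  -v nexus-data:/nexus-data \
  sonatype/nexus3:latest

# Any later container reusing nexus-data (or a copy of its
# contents) starts with the same configuration, no re-setup.
```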
