Switch active database with Neo4j docker image

I have imported my data into a new Neo4j database (instead of the standard graph.db) using the import tool. I want to make this new database the active one in the Neo4j browser. I am using the Neo4j Docker image with a /var/lib/neo4j volume, but I can't find the config file to change the active database; even when I mount the conf directory specifically, this file does not get generated.
How can I switch the active Neo4j database in the web client or the Neo4j shell?
Here is the command with which I created the Neo4j container:
docker run --publish=7474:7474 --publish=7687:7687 --volume=/var/lib/neo4j/import:/var/lib/neo4j/import --env=NEO4J_dbms_allow_upgrade='true' --env=NEO4J_dbms.security.allow_csv_import_from_file_urls='true' neo4j:latest

You cannot change the active database of a live Neo4j instance.
The Enterprise edition does allow some values to be changed without rebooting; the keys that can be changed this way are listed in the online documentation, but dbms.active_database is not one of them.
Instead, you have a few options.
You can mount a /conf directory
The conf directory can be filled with configuration files that completely override the default ones. They are not generated by Neo4j; you must take an entire neo4j.conf file and place it in a directory, which is then mounted into the container. You can change whatever values you need in that file.
After the mapped directory has been updated with the new file, you will need to bounce your container (or exec a restart of neo4j from within the container).
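For example, a minimal sketch (the host path is a placeholder, and the dump-config helper depends on your image version):

mkdir -p $HOME/neo4j/conf
# copy the default configuration out of the image, if your image provides dump-config
docker run --rm --volume=$HOME/neo4j/conf:/conf neo4j:latest dump-config
# edit $HOME/neo4j/conf/neo4j.conf and set: dbms.active_database=newgraph.db
docker run --publish=7474:7474 --publish=7687:7687 \
    --volume=$HOME/neo4j/conf:/conf \
    --volume=/var/lib/neo4j/import:/var/lib/neo4j/import \
    neo4j:latest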
You can set the active database with an environment variable
Similar to how you've passed in the other environment variables, you can pass in other configuration options. If your new database is called newgraph.db and resides in the same directory as graph.db, you need only pass in --env=NEO4J_dbms_active__database=newgraph.db. If it resides in a different directory, point to that directory with --env=NEO4J_dbms_directories_data=/path/to/new/data/dir.
As these are passed as environment variables, changing them requires starting a new container.
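Put together with the publish and volume flags from your original command, that looks something like this (newgraph.db is just the example name from above):

docker run --publish=7474:7474 --publish=7687:7687 \
    --volume=/var/lib/neo4j/import:/var/lib/neo4j/import \
    --env=NEO4J_dbms_active__database=newgraph.db \
    neo4j:latest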
You could also build your own image
The final and perhaps most drastic option is to create your own image based on Neo4j's image, with all of the changes that you need baked in. Typically this is not required, but if you want to clean up your invocation of docker and not keep any mapped configuration directories around, this is the way to go. It also ensures that anybody who has your custom image needs no additional configuration; whether this is desirable is up to you and your deployment architecture.
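A minimal sketch of such a derived image (the config path matches the 3.x image layout and is an assumption; adjust for your version):

# Dockerfile
FROM neo4j:latest
# bake a complete, edited configuration file into the image
COPY neo4j.conf /var/lib/neo4j/conf/neo4j.conf
# or set individual options; the entrypoint reads NEO4J_* variables at startup
ENV NEO4J_dbms_active__database=newgraph.db

Build it with docker build -t my-neo4j . and use my-neo4j in place of neo4j:latest in your docker run invocation.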

Related

How to modify the configuration of the openGauss database in Docker

Recently I was trying to deploy the openGauss database using Docker, and I saw that this Docker image was released by your company.
I have currently encountered the following two problems:
The corresponding database configuration files ("hab.conf or postgreq.conf") were not found. Where are these files located in the Docker image? If they are not there, can they be modified with the gs_* tools?
When the database container is stopped and started again, it is launched from the Docker image, and there are no parameters linking it to a configuration file, so there is no way to modify the database configuration. At present, the only solution I can think of is to "commit & save" the running container directly into a new image. Is this the only solution?
pg_hba.conf and postgresql.conf are located in /var/lib/opengauss/data; using gs_guc to modify parameters is supported.
After changing parameters that require a database restart to take effect, just restart the container directly.
You can also persist the data if you want; specify it through the -v parameter when running:
-v /enmotech/opengauss:/var/lib/opengauss
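For example, a rough sketch of running with persistence and then changing a parameter (the image name, password variable, and exact gs_guc invocation are assumptions and may need adjusting for your image version):

docker run -d --name opengauss \
    -e GS_PASSWORD='Enmo@123' \
    -v /enmotech/opengauss:/var/lib/opengauss \
    enmotech/opengauss:latest

# adjust a parameter inside the container (may need to run as the database OS user)
docker exec -it opengauss gs_guc reload -D /var/lib/opengauss/data -c "max_connections=500"

# parameters that only take effect after a restart:
docker restart opengauss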

Run Jira in docker with initial setup snapshot

In my company we're using Jira for issue tracking. I need to write an application that integrates with it and synchronizes some data with other services. For testing, I want to have a Docker image of Jira with some initial data.
I'm using the official atlassian/jira-core image. After the initial setup, I saved the state by running docker commit, but unfortunately the new image seems to be empty, and I need to set it up again from scratch.
What should I do to save the initial setup? I want to run tests that change things within Jira, so reverting it back is necessary for a reliable test suite. After I spin up a new container it should already have a few users, and a project with some issues. I don't want to create them manually for each new instance. Also, the setup takes a lot of time, which is not acceptable for testing.
To get persistent storage you need to mount /var/atlassian/jira from your host system. /var/atlassian/jira is used for storing your configuration etc., so you do not need to commit; whenever you spin up a new container with /var/atlassian/jira mounted, it will have all the configuration that you set previously.
docker run --detach -v /you_host_path/jira:/var/atlassian/jira --publish 8080:8080 cptactionhank/atlassian-jira:latest
For logs you can mount
/opt/atlassian/jira/logs
The above is valid if you are running the latest tag; otherwise you can explore the relevant Dockerfile.
Set volume mount points for installation and home directory. Changes to the home directory needs to be persisted as well as parts of the installation directory due to eg. logs.
VOLUME ["/var/atlassian/jira", "/opt/atlassian/jira/logs"]
atlassian-jira-dockerfile
Look at the entrypoint.sh; the comments from there are:
check if the server.xml file has been changed since the creation of
this Docker image. If the file has been changed the entrypoint script
will not perform modifications to the configuration file.
So I think you need to provide your own server.xml to stop the init process...
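If you also need to reset Jira to the initial state between test runs, one rough approach (the paths and container name below are placeholders, not from the answer above) is to snapshot the mounted home directory once after setup and restore it before each suite:

# after the one-time setup has finished
docker stop jira
sudo tar -czf jira-home-snapshot.tar.gz -C /your_host_path/jira .

# before each test run: restore the snapshot, then start the container again
docker stop jira
sudo rm -rf /your_host_path/jira/*
sudo tar -xzf jira-home-snapshot.tar.gz -C /your_host_path/jira
docker start jira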

Best practice for handling service configuration in docker

I want to deploy a docker application in a production environment (single host) using a docker-compose file provided by the application creator. The docker based solution is being used as a drop-in replacement for a monolithic binary installer.
The application ships with a default configuration but with an expectation that the administrator will want to apply moderate configuration changes.
There appear to be a few ways to apply custom configuration to the services defined in the docker-compose.yml file; however, I am not sure which is considered best practice. The two I am considering at the moment are:
Bake the configuration into a new image. Here I would add a build step for each service defined in the docker-compose file and create a minimal Dockerfile that uses COPY to replace the existing configuration files in the image with my custom config files. sed and echo in CMD statements could also be used to change configuration inline without replacing the files wholesale.
Use a bind mount with configuration stored on the host. In this case, I would store all custom configuration files in a directory on the host machine and define bind mounts in the volumes parameter for each service in the docker-compose file.
The first option seems the cleanest to me as the application is completely self-contained, however I would need to rebuild the image if I wanted to make any further configuration changes. The second option seems the easiest as I can make configuration changes on the fly (restarting services as required in the container).
Is there a recommended method for injecting custom configuration into Docker services?
Given your context, I think using a bind mount would be better.
A Docker image is supposed to be reusable in different contexts, and building an entire image solely for a specific configuration (i.e. environment) would defeat that purpose:
instead of the generic configuration provided by the base image, you create an environment-specific image
every time you need to change the configuration you'll need to rebuild the entire image, whereas with a bind mount a simple restart, or a re-read of the configuration file by the application, is sufficient
The Docker documentation recommends:
Dockerfile best practice
You are strongly encouraged to use VOLUME for any mutable and/or
user-serviceable parts of your image.
Good use cases for bind mounts
Sharing configuration files from the host machine to containers.
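As a concrete sketch of the bind-mount approach in a compose file (the service name, image, and paths are placeholders, not taken from the question):

services:
  app:
    image: vendor/application:1.2.3
    ports:
      - "8080:8080"
    volumes:
      # custom configuration kept on the host, mounted over the default file in the image
      - ./config/app.conf:/etc/app/app.conf:ro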

How to update a docker container image but keep the files generated by the container app

What are the best practices for updating a container in the following scenario?
I have images built from my web app project, and I publish new images based on updated source code once a month.
But my web app generates files, or updates existing files, over time while running in a container. For example, the app creates new xml files under the user folder for each web user. Another example is files uploaded by users.
I want to keep these files after running the new, updated image, without losing them.
/bin/
    /first.dll
    /second.dll
/other-soruces/
    /some.cs
    /other.cs
/user/
    /user-1.xml
    /user-2.xml
/uploads/
    /images
        /image-1.jpg
/web.config
Should I use the volume feature of Docker? Is there any other strategy?
Short answer, yes, you do want a volume for these directories. More specifically, two volumes: /user and /uploads.
This gets into a fundamental practice of image and container design that is best done by dividing your application into three parts:
The application code, binaries, libraries, and other runtime dependencies.
The persistent data that the application accesses and creates.
The configuration that modifies how the application runs, particularly in different environments with the same code.
Each of these parts should go in a different place in docker.
The first part, the code and binaries, goes in your image. This is what you ship to run your container on different nodes in docker, and what you store in a registry for later reuse.
The second part, your persistent data, gets stored in a volume. There are two main types of volumes to pick from: a named volume and a host volume (aka bind mount).
A named volume has a particular feature that improves portability: it will be initialized to the contents of your image at the volume location when the volume is created for the first time. This initialization includes directory and file permissions and ownership, and can be used to seed your volume with an initial state.
The host volume (bind mount) is just a directory mounted from the docker host into the container; you get exactly what was on the host, including the uid/gid of the files/directories, with no initialization procedure. The host volume is very easy for developers to access, but it lacks portability if you move to a multi-node swarm cluster, and it suffers from uid/gid values on the host mapping to different users inside the container, since usernames inside the container can differ for the same IDs.
Any files you write inside the container that are not written to a volume should be considered disposable and will be lost when you recreate the container to update to a new image. And any directories you define as a volume should be considered owned by that volume and will not receive updates from the image when you replace the container.
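As an illustration of the difference (the image name and container path are placeholders based on the listing above, not known values):

# named volume: created by docker and initialized from the image contents at /app/user
docker run -d --name web1 -v user-data:/app/user mywebapp:latest

# host volume (bind mount): the container sees exactly what is in the host directory
docker run -d --name web2 -v /srv/webapp/user:/app/user mywebapp:latest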
The last piece, configuration, is often overlooked but equally important. This is anything injected into the application at startup to tell it where to connect for external data, config files that alter its behavior, and anything else that needs to be separated out so that the same image is reusable in different environments. This is how you get portability from development to production with the same image, and how you get reusability of publicly provided images.
The configuration is injected with environment variables, command line parameters, bind mounts of a config file (when you run on a single node), and configs + secrets, which are essentially the same bind mount of a config file, stored in docker's swarm rather than locally on a single host. In your situation, the /web.config looks suspiciously like a config file that you'll want to move out of the image and inject as a bind mount or swarm config.
To put these all together, you will want a compose file that defines your image, the volumes to use, and any configs or environment variables to set.
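A minimal sketch of such a compose file, following the directory listing from the question (the image name, container paths, and port are placeholders):

services:
  webapp:
    image: registry.example.com/mywebapp:2024.06   # rebuilt and re-tagged monthly
    ports:
      - "80:80"
    volumes:
      - user-data:/app/user              # xml files generated per web user
      - uploads:/app/uploads             # files uploaded by users
      # configuration injected from the host instead of baked into the image
      - ./web.config:/app/web.config:ro

volumes:
  user-data:
  uploads: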

Dockerized executable read/write on host filesystem

I just dockerized an executable that reads from a file and creates a new file in the very directory that file came from.
I want to use Docker in that setup, so that I avoid installing numerous third-party libraries in the production environment.
My problem now: I have file /this/is/a.file on my underlying (host) file system and my executable is supposed to create /this/is/b.file.
As far as I can see, the only way to get this done is by mapping a volume that points to /this/is and then letting the executable know where I mounted it inside the Docker container.
Am I right? Or is there a way that I just pass docker run mydockerizedstuff /this/is/a.file without using Docker volumes?
You're correct, you need to pass in /this/is as a volume and the executable will write to that location.
If you want to constrain things even more, you can pass /this/is/b.file as a volume. You need to create it beforehand (simply via touch), otherwise Docker will consider it a directory and create it as such for you; but then you know the executable won't be able to create /this/is/c.file or anything else.
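For example (the image name mydockerizedstuff is from the question; the in-container path /data is an assumption):

# mount the whole directory
docker run --rm -v /this/is:/data mydockerizedstuff /data/a.file

# or constrain it to the two files only
touch /this/is/b.file
docker run --rm \
    -v /this/is/a.file:/data/a.file:ro \
    -v /this/is/b.file:/data/b.file \
    mydockerizedstuff /data/a.file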

Resources