Recently, I was trying to deploy the openGauss database using Docker, and I saw that this image was released by your company.
I have currently encountered the following two problems:
1. I could not find the corresponding database configuration files ("hba.conf or postgresql.conf"). Where are these files located in the Docker image? If they are not there, can the parameters be modified with the gs_* tools?
2. When the database container is stopped and restarted, a fresh instance of the image is launched, and since no parameters link it to a configuration file, there is no way to modify the database configuration. At present, the only solution I can think of is to "commit & save" the running container into a new image. Is this the only solution?
pg_hba.conf and postgresql.conf are in /var/lib/opengauss/data, and gs_guc is supported for modifying parameters.
After changing parameters that require a database restart to take effect, just restart the container directly.
You can also persist the data if you want, by specifying a mount through the -v parameter when running the container:
-v /enmotech/opengauss:/var/lib/opengauss
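For illustration, a minimal sketch of a persistent setup (the container name, host path, password, and the reloaded parameter are assumptions, not taken from the question):
# run openGauss with the data directory bind-mounted to the host
docker run --name opengauss -d -p 5432:5432 -e GS_PASSWORD=YourStrongPwd@123 -v /enmotech/opengauss:/var/lib/opengauss enmotech/opengauss:latest
# reload a parameter online with gs_guc (may need to run as the database OS user)
docker exec opengauss gs_guc reload -D /var/lib/opengauss/data -c "log_min_duration_statement=1000"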
I have a Docker stack started with docker stack deploy --compose-file ...
and later manually edited via the Portainer UI.
I'd like to write a script that updates the docker image tag of one of the services.
To do that I need to "download" the latest "docker-compose" stack definition; however, I cannot find the appropriate docker command.
I do know that it would be best to stop changing the stack manually and rely on its definition stored in git, but unfortunately that is not up to me.
Please point me to the appropriate docker command or confirm that it is not available.
As far as I know, there is no command that gets the compose file from a running container directly, at least not out of the box in Docker. You could try to parse the relevant information from docker inspect and a few other commands that list/inspect all the relevant objects.
I once came across a similar situation where we had a running container but no run/compose command for the update we needed. At the time (roughly a year ago) I found and used docker-autocompose, which did a very good job. We only had to manually verify and adjust a few things, but it got all the difficult parts with run parameters done for us.
It could help you automate this, if your compose configs are simple enough.
But if you want to fully automate it to mimic CD, then I would not recommend the approach above. In that case I would check whether you could use the Portainer API, as #LinFelix recommended. Or store compose files somewhere (git/on a server), prepared with parameters ($IMAGE_TAG), so you can generate temporary compose files with the full configuration and then replace the current one.
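For illustration, a hedged sketch of both approaches (container, stack, and file names here are made up):
# recover a compose definition from running containers with docker-autocompose
docker run --rm -v /var/run/docker.sock:/var/run/docker.sock ghcr.io/red5d/docker-autocompose my_service_container > docker-compose.recovered.yml
# or: keep a parameterized template in git and render it before each deploy
export IMAGE_TAG=1.2.3
envsubst < docker-compose.template.yml > docker-compose.yml
docker stack deploy --compose-file docker-compose.yml my_stack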
I'm new to Docker and have been dabbling with it for the past few days. I've managed to successfully use docker-compose for a multi-container deployment involving an app server (Flask + Gunicorn) and a web server (nginx).
Now, I'd like to recreate the deployment on an offline machine. After doing research, it seems that most people mention using docker save and docker load to transfer over the base images. However, I'm wondering whether it's possible to recreate the deployment from the images created by docker-compose build? The reason is that I would like to skip the entire process of wheeling my Python package dependencies for offline use, which I would have to do if I started from the base images.
I've tried to save that particular image (the output of docker-compose build) and load it on the offline machine, and then tried docker run and docker-compose up, but neither seems to work. I would like to check with the community whether this method is even possible, and if so, what's the right way to go about it?
Thanks!
To solve my issue, I ended up making an image of each individual container post pip install, then using docker-compose.yml simply to spin them up. As David mentioned, it doesn't seem possible to spin up the containers from the single image output by docker-compose build.
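For reference, a hedged sketch of that workflow (image and service names are made up; the compose file on the offline machine must reference the loaded images with image: rather than build:):
# on the online machine: save each image built after pip install has run
docker save -o app.tar myproject_app:latest
docker save -o web.tar myproject_web:latest
# on the offline machine: load the images and start the stack
docker load -i app.tar
docker load -i web.tar
docker-compose up -d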
In my company, we're using Jira for issue tracking. I need to write an application that integrates with it and synchronizes some data with other services. For testing, I want to have a Docker image of Jira with some initial data.
I'm using the official atlassian/jira-core image. After the initial setup, I saved the state by running docker commit, but unfortunately the new image seems to be empty, and I need to set it up again from scratch.
What should I do to save the initial setup? I want to run tests that will change things within Jira, so reverting it will be necessary for a reliable test suite. After I spin up a new container, it should already have a few users created and a project with some issues. I don't want to create them manually for each new instance. Also, the setup takes a lot of time, which is not acceptable for testing.
To get persistent storage, you need to mount /var/atlassian/jira on your host system. This path stores your configuration etc., so you do not need to commit; whenever you spin up a new container with /var/atlassian/jira mounted, it will have all the configuration that you set previously.
docker run --detach -v /your_host_path/jira:/var/atlassian/jira --publish 8080:8080 cptactionhank/atlassian-jira:latest
For logs you can mount
/opt/atlassian/jira/logs
The above is valid if you are running with the latest tag; otherwise you can explore the relevant Dockerfile.
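Putting both mounts together, a hedged sketch (host paths are placeholders):
# persist both the Jira home directory and the logs on the host
docker run --detach -v /your_host_path/jira:/var/atlassian/jira -v /your_host_path/jira-logs:/opt/atlassian/jira/logs --publish 8080:8080 cptactionhank/atlassian-jira:latest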
Set volume mount points for installation and home directory. Changes to the home directory needs to be persisted as well as parts of the installation directory due to eg. logs.
VOLUME ["/var/atlassian/jira", "/opt/atlassian/jira/logs"]
atlassian-jira-dockerfile
Look at entrypoint.sh; the comments from there are:
check if the server.xml file has been changed since the creation of
this Docker image. If the file has been changed the entrypoint script
will not perform modifications to the configuration file.
So I think you need to provide your own server.xml to stop the init process...
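For the fast, repeatable resets your test suite needs, one hedged option (an assumption on top of the answers above) is to seed the home directory once, snapshot it, and restore the snapshot before each run; the volume name jira_home and the archive name are made up:
# snapshot the configured Jira home into a tarball (run once, after the initial setup)
docker run --rm -v jira_home:/var/atlassian/jira -v "$PWD":/backup alpine tar czf /backup/jira-home-seed.tgz -C /var/atlassian/jira .
# restore the seed before each test run
docker run --rm -v jira_home:/var/atlassian/jira -v "$PWD":/backup alpine sh -c "rm -rf /var/atlassian/jira/* && tar xzf /backup/jira-home-seed.tgz -C /var/atlassian/jira"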
I have imported my data into a new Neo4j database instead of the standard graph.db using the import tool. I want to make this new database the active one in Neo4j. I used the Neo4j Docker image with a /var/lib/neo4j volume, but I can't find the config file to change the active database; even after I mount the conf directory specifically, this file doesn't get generated.
How can I switch the active Neo4j database in the web client or the neo4j shell?
Here is the command with which I created neo4j container:
docker run --publish=7474:7474 --publish=7687:7687 --volume=/var/lib/neo4j/import:/var/lib/neo4j/import --env=NEO4J_dbms_allow_upgrade='true' --env=NEO4J_dbms.security.allow_csv_import_from_file_urls='true' neo4j:latest
You cannot change the active database of a live Neo4j instance.
The Enterprise edition does allow some values to be changed without rebooting; the keys that can be changed this way are listed in the online documentation, but dbms.active_database is not one of them.
Instead, you have a few options.
You can mount a /conf directory
The conf directory can be filled with configuration files that will completely override the default ones. They are not generated by Neo4j; you must take an entire neo4j.conf file and place it in a directory that is then mounted into the container. You can change whatever values you need in that file.
After the mapped directory is updated with the new file, you will need to restart your container (or restart neo4j from within the container).
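A hedged sketch of this approach, reusing the ports from your original command (the host path is an assumption):
# $HOME/neo4j/conf/neo4j.conf contains, among the defaults: dbms.active_database=newgraph.db
docker run --publish=7474:7474 --publish=7687:7687 --volume=$HOME/neo4j/conf:/conf neo4j:latest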
You can set the active database with an environment variable
Similar to how you've passed in the other environment variables, you can pass in other configuration options. If your new database was called newgraph.db and it resided in the same directory as graph.db, you would need only to pass in --env=NEO4J_dbms_active__database=newgraph.db. If it resides in a different directory, give that directory with --env=NEO4J_dbms_directories_data=/path/to/new/data/dir.
As these are passed as environment variables, changing them requires starting a new Docker image.
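As a concrete (hedged) example, building on your original invocation:
docker run --publish=7474:7474 --publish=7687:7687 --env=NEO4J_dbms_active__database=newgraph.db neo4j:latest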
You could also build your own image.
The final and perhaps most drastic option would be to create your own image, based on neo4j's image, with all of the changes that you need. Typically this would not be required, but if you want to clean up your invocation of docker and not keep around any mapped configuration directories, this is the way to go. It would also ensure that anybody who has your custom image needs no additional configuration; whether this is desirable is up to you and your deployment architecture.
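A minimal sketch of such a custom image (the baked-in value is an assumption):
# Dockerfile
FROM neo4j:latest
ENV NEO4J_dbms_active__database=newgraph.db
# or bake in a full config instead:
# COPY neo4j.conf /conf/neo4j.conf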
I'm trying to make a build environment with Docker, and I want to make it automatic. I've written a custom Go binary to handle the build steps, and I've built an image which has the Go binary, Maven, and the Java 8 SDK installed.
The steps the binary performs are:
Clone a git repo
Run build command
Extract build artifacts to the host (which isn't done yet).
I'm passing the repo URL as a parameter to the binary while running the container, and it does build.
But the problem is that I need those artifacts in order to run the built app.
I know I can use volumes, but I don't want to use them, because once the build is done the volumes become dangling, and it takes a separate job to delete those dangling volumes.
I thought I could create an API for saving files onto the host (which means I'd have to run that API on the host machine), and my custom Go binary could send the files to the API, which would do the saving.
But when it comes to calling the host from inside a container, I've hit a problem: I'm getting a "connection refused to port xx" error.
Is there a better way to do this, or should I change my approach?
Found an answer at accessing-host-machine-as-localhost-from-a-docker-container-thats-also-inside.
Running the container with the --add-host option is the answer.
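For example (hedged; the image name is a placeholder, and host.docker.internal:host-gateway requires Docker 20.10+ on Linux):
docker run --add-host=host.docker.internal:host-gateway my-builder-image
# inside the container, the API on the host is then reachable at host.docker.internal:<port>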
While you could use
docker cp CONTAINER:SRC_PATH DEST_PATH
to get the files out of your container, I still believe using a volume is the better idea. Instead of using an anonymous volume, bind-mount a host directory:
docker run -v /local/host/dir:/build/output YOURIMAGE
This allows you to pick up the artefacts on your host from /local/host/dir.
https://docs.docker.com/engine/tutorials/dockervolumes/#locate-a-volume
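End to end, a hedged sketch (the repo-URL argument and the /build/output path inside the container are assumptions based on the question):
# run the build, collecting artifacts on the host through the bind mount
docker run --rm -v "$PWD/artifacts:/build/output" YOURIMAGE https://github.com/example/repo.git
ls ./artifacts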