I want to make some changes to the config file of the VerneMQ image running on Docker. Is there any way to access the config file so that I can make changes?
If you exec into the container with docker exec -it <containerID> bash, you'll see that the vernemq.conf file is located under /etc/vernemq/. It's just a matter of replacing this default conf with your own config file. Keep your vernemq.conf in the same directory as the Dockerfile and then add the following line to the Dockerfile:
COPY vernemq.conf /etc/vernemq/vernemq.conf
The above line copies your config file into the container at the given location and replaces the existing one. Finally, build the image. For more advanced stuff, do check this out!
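Putting it together, a minimal Dockerfile might look like this (the base image name is the one mentioned further down in this thread; pick the tag you actually run):
# minimal sketch: replace the default config with your own
FROM erlio/docker-vernemq
COPY vernemq.conf /etc/vernemq/vernemq.conf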
Another approach could be to simply set your options as environment variables for the docker image.
From the official docker hub page:
VerneMQ Configuration
All configuration parameters that are available in vernemq.conf can be
defined using the DOCKER_VERNEMQ prefix followed by the configuration
parameter name. E.g: allow_anonymous=on is -e
"DOCKER_VERNEMQ_ALLOW_ANONYMOUS=on" or
allow_register_during_netsplit=on is -e
"DOCKER_VERNEMQ_ALLOW_REGISTER_DURING_NETSPLIT=on". All available
configuration parameters can be found on
https://vernemq.com/docs/configuration/.
This is especially useful for compose-like yml-based deployments.
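For example, a docker-compose.yml service might look roughly like this (the service name and port mapping are illustrative; the environment values are the ones from the quote above):
vernemq:
  image: erlio/docker-vernemq
  environment:
    - DOCKER_VERNEMQ_ALLOW_ANONYMOUS=on
    - DOCKER_VERNEMQ_ALLOW_REGISTER_DURING_NETSPLIT=on
  ports:
    - "1883:1883"  # assumption: default MQTT port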
You can create a new Dockerfile to modify the image contents:
FROM erlio/docker-vernemq
RUN <command that modifies the image>
Use the new Dockerfile to build a new image and run a container from it.
Related
I want to use the nextcloud image from Docker Hub as the base image for creating a new child image that has my own company's logo in place of the Nextcloud one and my preferred background colour. Can anyone help me with the process, or any link to a solution for this?
https://nextcloud.com/changelog
- download this zip
- make a Dockerfile
- you should install Apache and set it up
- change the logo and colour theme in your CSS file
- build a new image
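A rough Dockerfile sketch of those steps (the paths inside the image are assumptions; check where your Nextcloud version actually keeps its theme files):
# sketch only: theme and logo locations are assumptions
FROM nextcloud
COPY mytheme/ /var/www/html/themes/mytheme/
COPY logo.svg /var/www/html/themes/mytheme/core/img/logo.svg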
The general approach is this:
Run the official image locally, follow the instructions on Docker Hub to get started.
Modify the application using docker and shell commands. You can:
open a shell (docker exec -it <container> sh) in the running container and use console commands to edit files;
copy files from the container and back with docker cp;
mount local files into the container by using -v <src>:<dest> in the docker run command.
When you're done with editing, you need to repeat all the steps in a Dockerfile:
# use the version you used to start the local container
FROM nextcloud
# write commands that you used inside the container (if any)
RUN echo hello!
# Push edited files that you copied/mounted inside
COPY local.file /to/some/place/inside/the/image
# More possible commands in Dockerfile Reference
# https://docs.docker.com/engine/reference/builder/
After that you can use docker build to create your modified image.
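For example (the image tag is just an illustration):
docker build -t my-nextcloud:custom .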
I need to copy my customized Keycloak themes into the Keycloak container to use them, as mentioned here:
https://medium.com/@auscunningham/change-login-theme-in-keycloak-docker-image-55b5fa5ceec4
After identifying my container id with docker container ls and listing the files like this: docker exec 7e3a420017a8 ls ./keycloak/themes
It returns the list of themes correctly, but when I use this to copy my files from my local machine to the container:
docker cp ./mycustomthem 7e3a420017a8:/keycloak/themes/
or
docker cp ./mycustomthem 7e3a420017a8:./keycloak/themes/
I get the following error:
Error: No such container:path: 7e3a420017a8:/keycloak
I cannot imagine where the error is, since I can list the files in that folder inside the container. Could you help me?
Thank you in advance.
Works on my computer.
docker cp mycustomthem e67f76e8740b:/opt/jboss/keycloak/themes/raincatcher-theme
You have used the wrong path in the command; use the full path /opt/jboss/keycloak/themes/raincatcher-theme.
This seems like a weird way to approach this problem. Why not just have a Dockerfile that uses the Keycloak image as the base image and copies the theme into the image at build time? Then just run the image you build. This will also be a more stable pattern in the long term if you ever decide to add any plugins or customizations, and it provides an easy upgrade path to new versions by just changing the base image in your Dockerfile.
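A sketch of that pattern (the jboss/keycloak base image name is an assumption based on the /opt/jboss/keycloak path used in this thread):
# copy the custom theme into the image at build time instead of docker cp at runtime
FROM jboss/keycloak
COPY mycustomthem /opt/jboss/keycloak/themes/mycustomthem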
Update, according to your edit of the question:
Try the following:
docker cp ./mycustomthem 7e3a420017a8:/opt/jboss/keycloak/themes/
The correct path in Keycloak is actually /opt/jboss/keycloak/themes/
I am following this K8s tutorial, and in the middle of it there is the following instruction:
12. Now let’s build an image, giving it a special name that points to our local cluster registry.
$docker build -t 127.0.0.1:30400/hello-kenzan:latest -f applications/hello-kenzan/Dockerfile applications/hello-kenzan
I don't understand why you need to point to the Dockerfile using -f applications/hello-kenzan/Dockerfile.
In the man page of docker build:
-f, --file=PATH/Dockerfile
Path to the Dockerfile to use. If the path is a relative path and you are
building from a local directory, then the path must be relative to that
directory. If you are building from a remote URL pointing to either a
tarball or a Git repository, then the path must be relative to the root of
the remote context. In all cases, the file must be within the build context.
The default is Dockerfile.
So -f points to the Dockerfile, but we already gave the path of the Dockerfile at the end of the build command (docker build ...applications/hello-kenzan), so why do you need to write it twice? Am I missing something?
The reason for this is probably that the tutorial author had multiple files called Dockerfile, and using -f tells docker NOT to use a Dockerfile in the current directory (when there is one) but to use the Dockerfile in applications/hello-kenzan instead.
While in THIS PARTICULAR example it was unnecessary, I appreciate that the tutorial creator shows you the option of pointing -f at a specific path, so that you (the person who wants to learn) see that it is possible to use Dockerfiles that are not at the root of the build context (i.e. when you have multiple Dockerfiles you want to create your builds with, or when the file is not named Dockerfile but e.g. myapp-dockerfile).
You are right. In this case you don't have to use the -f option. The official docs say:
-f: Name of the Dockerfile (Default is ‘PATH/Dockerfile’)
Since the given PATH is applications/hello-kenzan, the Dockerfile there will be found implicitly.
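For example, with the tutorial's layout these two invocations find the same Dockerfile:
# -f defaults to PATH/Dockerfile, so these are equivalent here
docker build -t 127.0.0.1:30400/hello-kenzan:latest applications/hello-kenzan
docker build -t 127.0.0.1:30400/hello-kenzan:latest -f applications/hello-kenzan/Dockerfile applications/hello-kenzan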
I want to copy the my.cnf file present on the host server into a child Docker image whenever I build a Dockerfile that uses a custom base image containing the command below.
ONBUILD ADD locate -i my.cnf|grep -ioh 'my.cnf'|head -1 /
but the above line breaks the Dockerfile. Please share the correct syntax or an alternative way to achieve the same.
Unfortunately, you can't run host commands inside your Dockerfile. See Execute command on host during docker build.
Your best bet is probably to tweak your deployment scripts to locate my.cnf on the host before running docker build.
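A sketch of that idea (the image tag is hypothetical, and it assumes locate is available on the host):
#!/bin/sh
# find my.cnf on the host and copy it into the build context,
# so a plain COPY/ADD in the Dockerfile can pick it up
MYCNF=$(locate -i my.cnf | head -1)
cp "$MYCNF" ./my.cnf
docker build -t my-image .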
I have installed Sentry on-premise and after some tinkering I got it to work and changed the system.url-prefix option to the correct URL using the command line. However, there are still 2 problems:
This option is not persistent
You cannot do the same for the mail.from option, which can only be set before running.
There are 3 config files at play, but not all of them take effect, and that makes it confusing.
sentry.conf.py
Containing
SENTRY_OPTIONS['system.url-prefix'] = 'https://sentry.mydomain.com'
SENTRY_OPTIONS['mail.from'] = 'sentry@mydomain.com'
config.yml
Containing
mail.from: 'sentry@mydomain.com'
system.url-prefix: 'https://sentry.mydomain.com'
docker-compose.yml
Restarting the containers does not load the new config.
Related issue. However, I don't know what to do after changing the config as in the comment (SENTRY_OPTIONS['mail.from']).
You need to make your modified config files visible inside the container.
If they are built into the image (possibly via COPY or ADD in the Dockerfile), then restarting your container does not help, because you're still running the old image. You would have to rebuild the image, stop the old container and start a new one, which is a rather annoying and error-prone way.
A better way is to "mount" your files via volumes. Docker volumes can be single files, not only directories. You can add a volumes section to your docker-compose.yml:
my_container:
  image: my_image
  volumes:
    - ./sentry.conf.py:/full/path/to/sentry.conf.py/in/the/container
    - ./config.yml:/similar/full/path/to/config.yml
  ports:
    ...
  command: ...
There's a chance you already have some volumes defined for this particular container (to hold persistent data, for example); in that case you simply need to add volume mappings for your config files.
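After updating docker-compose.yml, recreate the container so the new mounts take effect, for example:
docker-compose up -d my_container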
Hope this helps. All the best in the New Year!
This is how you can edit an existing docker container config:
stop container:
docker stop <container name>
edit config:
docker run -it -v /var/lib/docker:/var/lib/docker alpine vi $(docker inspect --format='/var/lib/docker/containers/{{.Id}}/config.v2.json' <container name>)
restart docker
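For example, on a systemd-based host that last step would be:
sudo systemctl restart docker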
If the configuration files are stored as Docker configs, then I found this guide to work:
https://medium.com/@lucjuggery/about-using-docker-config-e967d4a74b83
Basically:
- add the updated file as a NEW config
- tell the service to remove the old config and add the new one as the config to use; the service will be restarted
- now you can remove the old Docker config
This is not very nice, and if you want the new config to have the old config's name, you have to repeat the whole process again!
Arrggghhh....
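For reference, a sketch of that rotation flow in Swarm mode (service name, config names and target path are hypothetical):
# add the updated file as a new config
docker config create my_config_v2 ./config.yml
# swap configs on the service; this restarts the service
docker service update --config-rm my_config_v1 --config-add source=my_config_v2,target=/etc/myapp/config.yml my_service
# now the old config can be removed
docker config rm my_config_v1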