Nextcloud docker image rebranding

I want to use the nextcloud image from Docker Hub as the base image for creating a new child image that has my own company's logo in place of the Nextcloud one and my preferred background colour. Can anyone help me with the process, or point me to a link with a solution?
https://nextcloud.com/changelog
-download the release zip from the link above
-write a Dockerfile
-install and set up Apache in it
-change the logo and colour theme in your CSS file
-build a new image

The general approach is this:
Run the official image locally, following the instructions on Docker Hub to get started.
Modify the application using docker and shell commands. You can:
open a shell (docker exec -it <container> sh) in the running container and use console commands to edit files;
copy files from the container and back with docker cp (example below);
mount local files into the container by using -v <src>:<dest> in the docker run command.
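For example, to pull a file out of the running container, edit it locally and push it back with docker cp (the path below is an assumption; locate the real file inside your container first):
docker cp <container>:/var/www/html/core/img/logo/logo.svg ./logo.svg
# edit or replace the file locally, then copy it back
docker cp ./logo.svg <container>:/var/www/html/core/img/logo/logo.svg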
When you're done with editing, you need to repeat all the steps in a Dockerfile:
# use the version you used to start the local container
FROM nextcloud
# write commands that you used inside the container (if any)
RUN echo hello!
# Push edited files that you copied/mounted inside
COPY local.file /to/some/place/inside/the/image
# More possible commands in Dockerfile Reference
# https://docs.docker.com/engine/reference/builder/
After that you can use docker build to create your modified image.
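Applied to the rebranding case, a minimal sketch of what such a Dockerfile might look like. The destination paths are assumptions (the official image keeps its sources under /usr/src/nextcloud and syncs them to /var/www/html at startup), so verify them against your Nextcloud version before building:
# pin to the Nextcloud version you tested locally
FROM nextcloud:latest
# illustrative paths; check where your version actually keeps the logo and CSS
COPY branding/logo.svg /usr/src/nextcloud/core/img/logo/logo.svg
COPY branding/theme.css /usr/src/nextcloud/core/css/custom-theme.css
Build and tag it with something like docker build -t mycompany/nextcloud-branded:1.0 . (the tag is illustrative).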

Related

Grafana docker image with zabbix plugin

I want to add a zabbix data source to my grafana in kubernetes. For that I created a custom image using this dockerfile and added
ARG GF_INSTALL_PLUGINS="alexanderzobnin-zabbix-app"
Then I built the image and ran it.
But when I logged in to that docker container and ran grafana-cli plugins ls, it showed nothing.
How can I create a docker image with the zabbix data source baked into it?
Since the base image uses the GF_INSTALL_PLUGINS environment variable at run time, it is best to set it when you run the image.
With plain docker, run docker run -e GF_INSTALL_PLUGINS="alexanderzobnin-zabbix-app" ...; if you use docker-compose or kubernetes, pass that value as an environment variable there.
If you want to install the plugin in the image itself, you can use the statement below:
RUN grafana-cli plugins install $GF_INSTALL_PLUGINS
But this will not work if you volume-mount /var/lib/grafana-plugins to a folder on the host.
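Putting that together, a minimal Dockerfile sketch for baking the plugin in at build time, assuming the grafana/grafana base image (adjust the tag to the one you actually use):
FROM grafana/grafana:latest
# plugin name passed as a build argument, as in the question
ARG GF_INSTALL_PLUGINS="alexanderzobnin-zabbix-app"
RUN grafana-cli plugins install ${GF_INSTALL_PLUGINS}
Build it with something like docker build -t my-grafana-zabbix . (the tag is illustrative).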

Running attended installer inside docker windows/servercore

I've been attempting to move an application to the cloud for a while now and have most of the services set up in pods running in a k8s cluster. The last piece has been giving me trouble: I need to set up an image with an older piece of software that cannot be installed silently. I then attempted, in my dockerfile, to install its .NET dependencies (2005.x86, 2010.x86, 2012.x86, 2015.x86, 2015.x64) and to manually transfer a local install of the program, but that also did not work.
Is there any way to run through a guided install in a remote windows image or be able to determine all of the file changes made by an installer in order to do them manually?
You can track the changes done by the installer following these steps:
start a new container based on your base image
docker run --name test -d <base_image>
open a shell in the new container (I am not familiar with Windows so you might have to adapt the command below)
docker exec -ti test cmd
Run whatever commands you need to run inside the container. When you are done, exit from the container.
Examine the changes to the container's filesystem:
docker container diff test
You can also use docker container export to export the container's filesystem as a tar archive, and then docker image import to create an image from that archive.
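For example, a sketch of that export/import round trip (the archive and image names are illustrative):
docker container export test -o test.tar
docker image import test.tar my-windows-app:preinstalled
Note that an image created with docker image import loses the original image's metadata (CMD, EXPOSE, environment variables, and so on), so you may need to set those again in a Dockerfile or at run time.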

Best practice to connect my own code into a standard docker image in kubernetes

I have a lot of standard runtime docker images, like python3 with tensorflow 1.7 installed, and I want to use these standard images to run customers' code on top of them. The scenario seems quite similar to serverless. So what is the best way to put the code into the runtime dockers?
Right now I am trying to use a persistent volume to mount the code into the runtime, but that involves a lot of work. Is there an easier solution for this?
UPDATE
What is the workflow for Google Machine Learning Engine or FloydHub? I think what I want is similar. They have a command-line tool to combine the local code with a standard environment.
Following cloud-native practices, code should be immutable, and releases and their dependencies uniquely identifiable for repeatability, replicability, etc. In short: you should really create images with your src code.
In your case, that would mean basing your Dockerfile on the upstream python3 or TF images. There are a couple of projects that may help with the workflow above (code + build-release-run):
https://github.com/Azure/draft -- looks like better suited for your case
https://github.com/GoogleContainerTools/skaffold -- more golang friendly afaics
Hope it helps --jjo
One of the best practices is NOT to mount the code into the container from a volume, but to create a client-specific image that uses your TensorFlow image as a base image:
# Your base image comes in here.
FROM aisensiy/tensorflow:1
# Copy the client code into your image.
COPY src /src
# Kubernetes will run your containers with an
# arbitrary UID but with GID 0, so we change the
# group to 0 and make your stuff accessible to GID 0
# (this has to happen while we are still root).
RUN \
chgrp -R 0 /src && \
chmod -R g=u /src && \
true
# Then drop privileges to a non-root user.
USER nobody
CMD ["/usr/bin/python", ...]
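For completeness, a build-and-run sketch for such an image (the tag is just an example):
$ docker build -t my-client-image:1.0 .
$ docker run --rm my-client-image:1.0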
Some more best practices:
Always log to stdout instead of log files.
One process per container. If you need multiple local processes, co-locate them into a single pod.
Even more best practices are provided in the OpenShift documentation: https://docs.openshift.org/latest/creating_images/guidelines.html
The code file can be passed from stdin when the container is being started. This way you can run arbitrary code when starting the container.
Please see below for example:
root@node-1:~# cat hello.py
print("This line will be printed.")
root@node-1:~#
root@node-1:~# docker run --rm -i python python < hello.py
This line will be printed.
root@node-1:~#
If this is your case,
You have a docker image with code in it.
Aim: To update the code inside docker image.
Solution:
Run a bash session with the docker image, with a directory from your file system mounted as a volume.
Place the updated code in the volume directory.
From the bash session inside the container, replace the existing code with the updated code from the volume.
Save the current state of container as new docker image.
Sample Commands
Assume ~/my-dir in your file system has the new code updated-code.py
$ docker run -it --volume ~/my-dir:/workspace --workdir /workspace my-docker-image bash
Now a new bash session will start inside the docker container.
Assuming you have the code at '/code/code.py' inside the docker container,
you can simply update the code by
$ cp /workspace/updated-code.py /code/code.py
Or you can create new directory and place the code.
$ cp /workspace/updated-code.py /my-new-dir/code.py
Now the docker container contains the updated code, but the changes will be lost if you remove the container and run the image again. To create a docker image with the latest code, save this state of the container using docker commit.
Open a new tab in the terminal.
$ docker ps
This will list all running docker containers.
Find the CONTAINER ID of your docker container and copy it.
$ docker commit id-of-your-container new-docker-image-name
Now run the docker image with the latest code:
$ docker run -it new-docker-image-name
Note: It is recommended to remove the old docker image using the docker rmi command, as docker images are heavy.
We're dealing with a similar challenge. Our approach is to build a static docker image where Tensorflow, Python, etc. are built once and maintained.
Each user has a PVC (persistent volume claim) where large files that may change such as datasets and workspaces live.
Then we have a bash shell that launches the cluster resources and syncs the workspace using ksync (like rsync for a kubernetes cluster).

Questions on Docker Build and Local Docker Repo

I am trying to create a docker image using the command below.
docker build -t mytestapp .
My Dockerfile looks like this:
# Set the base image
FROM rhel7:latest
USER root
# Dockerfile author / maintainer
MAINTAINER Name <email.id@example.com>
# Create the application directory and add the application JAR.
RUN mkdir /usr/local/myapp/
ADD myapp-0.0.1.jar /usr/local/myapp/
RUN java -Dspring.profiles.active=qa -jar /usr/local/myapp/myapp-0.0.1.jar
# Expose default port
EXPOSE 8080
Questions:
1) Is it fine the way I am adding the JAR file? Will it be available inside /usr/local in the container after I prepare an image from the above build?
2) When I build the image using the docker build command, is the built image pushed to Docker Hub by default?
Since the JAR file contains credentials, I don't want to push the image to Docker Hub; we would like to push it to our local Docker registry (using Docker Distribution) with docker push instead.
Please clarify.
Answering your questions:
Docker recommends using the COPY instruction for adding single files into an image. It will be available inside the container at /usr/local/myapp/myapp-0.0.1.jar.
When you build the image it will be available on your local docker-host. It won't leave the server unless you explicitly tell it so.
Another tip I want to give you is the recommended docker image naming convention, which is [Repository/Author]/[Imagename]:[Version].
So for your image it might be called zama/mytestapp:1.0
If you want to push it into your local registry, you'll have to name your image after the syntax [LocalRegistry:Port]/[Repository/Author]/[Imagename]:[Version].
So your image might now be called registry.example.com:5000/zama/mytestapp:1.0
If you have authentication on your registry, you need to docker login first and then simply push the image with docker push registry.example.com:5000/zama/mytestapp:1.0.
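Putting those recommendations together, a sketch of how the corrected Dockerfile and the push to a private registry could look. The registry host, author and version are illustrative, and the sketch deliberately uses CMD instead of RUN so the application starts when the container runs rather than at build time:
# Set the base image
FROM rhel7:latest
USER root
# Create the application directory and copy the JAR in (COPY preferred over ADD)
RUN mkdir -p /usr/local/myapp/
COPY myapp-0.0.1.jar /usr/local/myapp/
# Expose default port
EXPOSE 8080
# Start the application when the container runs
CMD ["java", "-Dspring.profiles.active=qa", "-jar", "/usr/local/myapp/myapp-0.0.1.jar"]
Then build, log in and push:
docker build -t registry.example.com:5000/zama/mytestapp:1.0 .
docker login registry.example.com:5000
docker push registry.example.com:5000/zama/mytestapp:1.0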

Why do the changes I make in my working directory not show up in my Docker container?

I would like to run and test parse-dashboard via Docker, as documented in the readme.
I am getting the error message, "Parse Dashboard can only be remotely accessed via HTTPS." Normally, you can bypass this by adding the line "allowInsecureHTTP": true in your parse-dashboard-config.json file. But even though I have added this option to my config file, the same message is displayed.
I tried to edit the config file in the Docker container, whereupon I discovered that none of my local file changes were present in the container. It appeared as though my project was an unmodified version of the code from the GitHub repository.
Why do the changes that I make to the files in my working directory on the host machine not show up in the Docker container?
But what is uploaded to my Docker container is in fact the config file from my master branch.
It depends:
what that "docker" is: the official DockerHub or a private docker registry?
how it is uploaded: do you build an image and then use docker push, or do you simply do a git push back to your GitHub repo?
Basically, if you want to see the right files in your Docker container that you run, you must be sure to run an image you have built (docker build) after a Dockerfile which COPY files from your current workspace.
If you do a docker build from a folder where your Git repo is checked out at the right branch, you will get an image with the right files.
The Dockerfile from the parse-dashboard repository you linked uses ADD . /src. This is a bad practice (because of the problems you're running into). Here are two different approaches you could take to work around it:
Rebuild the Image Each Time
Any time you change anything in the working directory (which the Dockerfile ADDs to /src), you need to rebuild for the change to take effect. The exception to this is src/Parse-Dashboard/parse-dashboard-config.json, which we'll mount in with a volume. The workflow would be nearly identical to the one in the readme:
$ docker build -t parse-dashboard .
$ docker run -d -p 8080:4040 -v "$(pwd)/src/Parse-Dashboard/parse-dashboard-config.json":/src/Parse-Dashboard/parse-dashboard-config.json parse-dashboard
Use a Volume
If we're going to use a volume to do this, we don't even need the custom Dockerfile shipped with the project. We'll just use the official Node image, upon which the Dockerfile is based.
In this case, Docker will not run the build process for you, so you should do it yourself on the host machine before starting Docker:
$ npm install
$ npm run build
Now, we can start the generic Node Docker image and ask it to serve our project directory.
$ docker run -d -p 8080:4040 -v "$(pwd)":/src node:4.7.2 sh -c "cd /src && npm run dashboard"
Changes will take effect immediately because you mount ./ into the container as a volume. Because it's not done with ADD, you don't need to rebuild the image each time. We can use the generic node image because, if we're not ADDing a directory and running the build commands, there's nothing our image will do differently from the official one.
