Trying to set up a Docker container hosting a TWiki installation (twiki.org). I'm using the httpd:2.4 Apache image and tried to install TWiki as documented under /var/www/.
But whatever I try, I can't get the wiki content to show up. Would someone have a Dockerfile pointing out the required steps?
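One likely cause: the stock httpd:2.4 image serves files from /usr/local/apache2/htdocs, not /var/www, so anything placed under /var/www is never served. A minimal sketch of a Dockerfile, assuming the TWiki files are already unpacked in a local twiki/ directory (note that TWiki also needs Perl and CGI configured, which the stock httpd image does not provide):
FROM httpd:2.4
# the httpd image's DocumentRoot is /usr/local/apache2/htdocs, not /var/www
COPY twiki/ /usr/local/apache2/htdocs/twiki/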
I want to use the nextcloud image from Docker Hub as the base image to create a new child image with my own company's logo in place of Nextcloud's and my preferred background colour. Can anyone help me with the process, or any link to a solution?
https://nextcloud.com/changelog
- download this zip
- make a Dockerfile
- you should install Apache and set it up
- change the logo and colour theme in your CSS file
- build a new image
The general approach is this:
Run the official image locally, following the instructions on Docker Hub to get started.
Modify the application using docker and shell commands. You can:
open a shell (docker exec -it <container> sh) in the running container and use console commands to edit files;
copy files from the container and back with docker cp (see the example after this list);
mount local files into the container by using -v <src>:<dest> in the docker run command.
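For example, docker cp works in both directions; the paths below are only placeholders:
docker cp <container>:/var/www/html/some.css ./some.css
# edit the file locally, then push it back into the container
docker cp ./some.css <container>:/var/www/html/some.css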
When you're done with editing, you need to repeat all the steps in a Dockerfile:
# use the version you used to start the local container
FROM nextcloud
# write commands that you used inside the container (if any)
RUN echo hello!
# Push edited files that you copied/mounted inside
COPY local.file /to/some/place/inside/the/image
# More possible commands in Dockerfile Reference
# https://docs.docker.com/engine/reference/builder/
After that you can use docker build to create your modified image.
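For the concrete Nextcloud case, a sketch of such a Dockerfile might look like this; the theme directory name (mytheme) and the target path are assumptions, so check how theming works in your Nextcloud version:
# hypothetical example: bake a custom theme (logo + colours) into the image
FROM nextcloud:latest
COPY mytheme/ /var/www/html/themes/mytheme/
Build and tag it with docker build -t mycompany/nextcloud . and run it the same way as the official image.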
I'm new to Docker and am learning how to implement Docker with Jenkins. I was able to successfully bind a Docker volume to my host machine directory with the following command:
docker run --name jenkinsci -p 8080:8080 -p 50000:50000 -v ~/Jenkins:/var/jenkins_home/ jenkins/jenkins:lts
Now that the basic Jenkins is set up and bound to my host, there are a few things I wasn't sure how to handle.
(1) This is only accessible through localhost:8080. How do I make this accessible to other computers? I've read that I can change the URL to my company's public IP address? Is this the right approach?
(2) I want to automate the installation of select plugins and setting the paths in the Global Tools Configuration. There were some tips on github https://github.com/jenkinsci/docker/blob/master/README.md but I wasn't clear on where this Dockerfile is placed. For example, if I wanted the plugins MSBuild and Green Balls to be installed, what would that look like?
FROM jenkins/jenkins:lts
COPY plugins.txt /usr/share/jenkins/ref/plugins.txt
RUN /usr/local/bin/install-plugins.sh < /usr/share/jenkins/ref/plugins.txt
Would I have to create a text file called plugins.txt where it contains a list of plugins I want downloaded? Where will this Dockerfile be stored?
(3) I also want a Dockerfile that installs all the dependencies to run my .NET Windows project (nuget, msbuild, wix, nunit, etc). I believe this Dockerfile will be placed in my git repository.
Basically, I'm getting overwhelmed with all this Docker information and am trying to piece together how Docker interacts with Jenkins. I would appreciate any advice and guidance on these problems.
It's OK to get overwhelmed by Docker and Kubernetes. It's a lot of information and an overall shift in how we have been handling applications and services.
To make Jenkins available on all interfaces, use the following command (note that -p 8080:8080 already binds to all host interfaces by default, so if colleagues cannot reach your instance, point them at http://<your-host-ip>:8080 and check your firewall):
docker run --name jenkinsci -p "0.0.0.0:8080:8080" -p "0.0.0.0:50000:50000" -v ~/Jenkins:/var/jenkins_home/ jenkins/jenkins:lts
Yes, you have to provide the plugins.txt file and build a new Jenkins image containing all the required plugins. After that you can use this new image instead of jenkins/jenkins:lts.
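plugins.txt is just a list of plugin IDs, one per line, optionally pinned to a version. For MSBuild and Green Balls it would look like this (the IDs msbuild and greenballs are what those plugins use on the Jenkins update site; double-check them there):
msbuild
greenballs
Keep plugins.txt in the same directory as the Dockerfile; that directory (often a small Git repository of its own) is what you point docker build at.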
The new image, suited to your workload, should contain all the dependencies required for your environment.
I have a totally empty Debian 9 on which I installed docker-ce and nothing else.
My client wants me to run a website (already built locally on my PC) that he can migrate/move rapidly from one server to another by moving Docker images.
My idea is to start from some empty Docker image and then install all the dependencies on it manually (nginx-rtmp, apache2, nodejs, mysql, phpmyadmin, php, etc...)
I need to install all these dependencies MANUALLY (to keep control), not using ready-to-go Docker images from Docker Hub, and then create an IMAGE of ALL the things I have done (including these dependencies, but also files I will upload).
Problem is: I have no idea how to start from a blank image, connect to it, and then save a modified image with the components and dependencies I will run.
I am aware that the SIZE may be bigger than with a simple Dockerfile, but I need to customize lots of things, such as using PHP 5.6 and Apache 2.2, editing php.ini, etc.
Regards
If you don't want to define your dependencies in a Dockerfile, you can take the following approach: spin up a Linux container from a base image and work inside it like a normal machine.
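First start a container from a base image; debian here matches your host OS, and the container name is arbitrary:
sudo docker run -dit --name my-build-env debian
Then open a shell inside it: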
sudo docker exec -it <Container ID> /bin/bash
install your dependencies just as you would on any other Linux server (package names are distribution-specific; nginx-rtmp in particular usually has to be built or installed separately):
sudo apt-get update
sudo apt-get install -y apache2 nodejs php phpmyadmin default-mysql-server
then leave the shell with exit (the container keeps running, since it was not started by this shell) and commit the changes you made
sudo docker commit CONTAINER_ID new-image-name
run the docker images command and you will see the new image you have created; you can then use or move that image
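To move the image to another server without a registry, export it to a tarball and load it on the other side:
sudo docker save new-image-name | gzip > new-image-name.tar.gz
# copy the file over, then on the target server:
gunzip -c new-image-name.tar.gz | sudo docker load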
You can try a Dockerfile with the following content:
FROM scratch
But then you will need to build and add the operating system yourself.
For instance, Alpine Linux does this in the following way:
FROM scratch
ADD rootfs.tar.xz /
CMD ["/bin/sh"]
Where rootfs.tar.xz is a file of less than 2 MB, available in Alpine's GitHub repository (version 3.7 for the x86_64 arch):
https://github.com/gliderlabs/docker-alpine/tree/61c3181ad3127c5bedd098271ac05f49119c9915/versions/library-3.7/x86_64
Or you can begin with Alpine itself, but you said that you don't want to depend on ready-to-go Docker images.
A good starting point for you (if you decide to use Alpine Linux) could be the Dockerfile available at https://github.com/docker-library/httpd/blob/eaf4c70fb21f167f77e0c9d4b6f8b8635b1cb4b6/2.4/alpine/Dockerfile
As you can see, a Dockerfile can become very big and complex, because within it you provision all the software needed to run your image.
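At the other extreme, an Alpine-based Dockerfile can stay tiny. A sketch, assuming nginx is your only dependency:
FROM alpine:3.7
# apk is Alpine's package manager; --no-cache avoids storing the package index
RUN apk add --no-cache nginx
CMD ["nginx", "-g", "daemon off;"]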
Once you have your Dockerfile, you can build the image with:
docker build .
You can give it a name (note the trailing dot, which is the build context):
docker build -t mycompany/myimage:1.0 .
Then you can run your image with:
docker run mycompany/myimage:1.0
Hope this helps.
I have quite recently started learning Docker, and what I'd like to do is dockerize all my existing projects and all new ones. So basically, set up my local dev environment on Docker, but keep each project/repository isolated, if that makes sense, as one PHP app might be on PHP 5 and another one on PHP 7, etc.
What I usually did before was to place all my projects/repositories under the home/Repositories folder, so I want to follow the same pattern, although each project folder will run in its own environment.
I have already installed Docker on my OS (I'm on a completely fresh Ubuntu Linux installation, so no PHP or anything else is installed). However, I'd like to ask a few questions, as I don't have any previous experience with Docker.
As far as I understand, each project/repository should contain a docker-compose.yml file in its root directory and a docker folder where I put all the Dockerfiles. Is that right?
- home
-- Repositories
--- a-laravel-project
---- docker // folder that has all required containers, like PHP, Mysql, Nginx conf etc etc
---- docker-compose.yml
---- index.php
--- another-oop-php-project
---- docker // folder that has all required containers, like PHP, Mysql, etc etc
---- docker-compose.yml
---- index.php
Do I also need to have Git installed natively? I guess in order to dockerize all my existing repositories I need to clone them first, so in this case Git is (pre)required, correct?
Thanks in advance, any feedback would be appreciated.
You can keep your repositories directory for sure. You don't need any special layout for Docker at all.
Let's say that in your pre-Docker view of the world your environment was:
repositories
project1
index.php
project2
index.php
You then add a Dockerfile (and, if you want, a docker-compose.yml) to each project:
repositories
project1
index.php
Dockerfile
project2
index.php
Dockerfile
You mention a "folder that has all required containers". That's not quite right. All your Docker containers will live in Docker's own directory structure (which you do not want to mess with manually). What you do need to be concerned with is how to build your dependencies into the Docker image for each project.
Let's say your project1 uses just PHP, so you could make use of the official php Docker image (https://hub.docker.com/_/php/).
Then your Dockerfile could be something like:
FROM php:7.0-cli
# the trailing slash ensures /usr/src/myapp is created as a directory
COPY index.php /usr/src/myapp/
WORKDIR /usr/src/myapp
CMD [ "php", "./index.php" ]
Or, if you wanted to build your own from ubuntu or something, you'd start with the base ubuntu image and then install the dependencies you need on top of it:
FROM ubuntu
# update the package lists first; -y answers the install prompt
RUN apt-get update && apt-get install -y php-cli # or whatever is appropriate, I don't use php
COPY index.php /usr/src/myapp/
WORKDIR /usr/src/myapp
CMD [ "php", "./index.php" ]
Then, when you're in the project1 directory, build your Docker image, then run it:
docker build -t project1:latest .
docker run project1:latest
Note on source files:
It's pretty common to want to keep your source files outside of your docker container, but run your test server inside the docker container. In that case you'll want to do something like:
docker run \
-v /home/repositories/project1:/project1 \
php:7.0-cli \
php /project1/index.php
That bind mounts your source code directory from your machine into the container in which you run the PHP server. Also, don't forget that if you're running a development server inside the Docker container, you'll need to bind ports so you can connect to a port on your localhost and have it forwarded to the container; see the example below.
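For instance, using PHP's built-in development server (port 8000 is an arbitrary choice here):
docker run \
-p 8000:8000 \
-v /home/repositories/project1:/project1 \
php:7.0-cli \
php -S 0.0.0.0:8000 -t /project1
The server has to listen on 0.0.0.0 rather than 127.0.0.1 inside the container, or the published port won't reach it.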
There is a lot of good information you might be interested in reading at https://hub.docker.com/_/php/ if you're using PHP.
Git I would still have installed on your host, especially if you're mounting your source code into the container as I described.
I developed an HTTP server which implements a RESTful API specified by our client. Currently it runs on my workstation (CentOS 7.4 x86_64) and everything is working. Now I need to ship it as a CentOS 7.4 Docker image.
I read the getting started guide and spent some time browsing the documentation but am still not sure how to proceed with this.
Basic Steps
Download the CentOS image from here
Run the CentOS image on my workstation and copy everything into it.
Make the appropriate changes so that the server is started via systemd.
In step 3: I am not sure how to do root/sudo inside the Docker image.
I think what you are looking for is the Dockerfile reference https://docs.docker.com/engine/reference/builder/#understand-how-arg-and-from-interact
It's a file named Dockerfile that sits in the root of your project. You specify in this file which commands you want to run on top of the base image.
for example, from your use case:
FROM centos:7.4.1708
COPY <your files> /opt/
CMD ["program.exe", "-arg", "argument"]
FROM - defines the base image
COPY - copies files from the folder you run the command from to the image
CMD - runs this command when the container starts
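Regarding root/sudo in step 3: RUN commands in a Dockerfile (and the container's default user) run as root unless you change that, so no sudo is needed. A sketch (the httpd package is only an illustration):
# runs as root by default, no sudo required
RUN yum install -y httpd
# optionally drop privileges for the final process
USER apache
Also note that inside a container you usually skip systemd entirely; the CMD line starts your server directly in the foreground.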
Build with docker build . -t image-name
Run with docker run image-name
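If your server listens on a port (8080 here is just an assumption), publish it so it is reachable from the host:
docker run -p 8080:8080 image-name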