I have quite recently started learning Docker, and what I'd like to do is dockerize all my existing projects and all new ones. So basically set up my local dev environment on Docker, but keep each project/repository isolated, if that makes sense, as one PHP app might be on PHP 5 and another one on PHP 7, etc.
What I usually did before was to place all my projects/repositories under a home/Repositories folder, so I want to follow the same pattern, although each project folder will run in its own environment.
I have already installed Docker on my OS (I'm on Ubuntu Linux, a completely fresh installation, so no PHP or anything else is installed). However, I'd like to ask a few questions, as I don't have any previous experience with Docker.
As far as I understand, each project/repository should contain a docker-compose.yml file in the root directory and a docker folder where I put all the Dockerfiles. Is that right?
- home
-- Repositories
--- a-laravel-project
---- docker // folder that has all required containers, like PHP, Mysql, Nginx conf etc etc
---- docker-compose.yml
---- index.php
--- another-oop-php-project
---- docker // folder that has all required containers, like PHP, Mysql, etc etc
---- docker-compose.yml
---- index.php
Do I also need to have Git installed natively? I guess that in order to dockerize all my existing repositories I need to clone them first, so in this case Git is a prerequisite, correct?
Thanks in advance, any feedback would be appreciated.
You can keep your repositories directory for sure. You don't need any special layout for Docker at all.
Let's say that in your pre-Docker view of the world your environment was:
repositories
project1
index.php
project2
index.php
You then add a Dockerfile (and, if you want, a docker-compose.yml) to each project; a minimal Compose example follows the layout below:
repositories
project1
index.php
Dockerfile
project2
index.php
Dockerfile
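If you do want Compose as well, a minimal docker-compose.yml per project could look something like this sketch (the service name and port mapping are assumptions, not something your projects require):
app:
  build: .          # uses the Dockerfile in this project's root
  ports:
    - "8080:8080"   # hypothetical port mapping; adjust to whatever your app serves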
You mention a "folder that has all required containers". That's not quite right: all your Docker containers will live in Docker's own directory structure (which you do not want to mess with manually). What you do need to be concerned with is how to build your dependencies into the Docker image for each project.
Let's say your project1 uses just PHP, so you could make use of the official PHP Docker image (https://hub.docker.com/_/php/).
Then your Dockerfile could be something like
FROM php:7.0-cli
COPY index.php /usr/src/myapp/
WORKDIR /usr/src/myapp
CMD [ "php", "./index.php" ]
Or you could build your own image from ubuntu or something: you start with the base ubuntu image and then install the dependencies you need on top of it.
FROM ubuntu
RUN apt-get update && apt-get install -y php # Or whatever is appropriate, I don't use php
WORKDIR /usr/src/myapp
COPY index.php .
CMD [ "php", "./index.php" ]
Then, when you're in the project1 directory, you build your Docker image and run it:
docker build -t project1:latest .
docker run project1:latest
Note on source files:
It's pretty common to want to keep your source files outside of your docker container, but run your test server inside the docker container. In that case you'll want to do something like:
docker run \
-v /home/repositories/project1:/project1 \
php:7.0-cli \
php /project1/index.php
That bind mounts your source code directory from your machine into the container in which you run the PHP server. Also, don't forget that if you're running a development server inside the Docker container, you'll need to bind ports so you can connect to a port on your localhost and have it forwarded to the container.
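For example, a sketch combining the bind mount above with a published port, using PHP's built-in development server (port 8080 is just an assumption):
docker run \
  -v /home/repositories/project1:/project1 \
  -p 8080:8080 \
  php:7.0-cli \
  php -S 0.0.0.0:8080 -t /project1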
There is lots of good information you might be interested in reading at https://hub.docker.com/_/php/ if you're using PHP.
Git I would still have installed on your host, especially if you're mounting your source code into the container as I described.
I installed Docker and Docker Compose.
I downloaded the latest release of a Docker-based Drupal stack (there are php, mariadb, apache images, etc.) and put it in my project folder /var/www/html/mydrupaldocker.
Next, I adjusted the settings in the .env and docker-compose.yml files and started the containers with the command:
docker-compose up -d
After running the images from this folder, as well as adding the unzipped Drupal 9 folder to my project folder, I will start installing Drupal 9 in the browser.
And I have questions on two possible situations:
Situation №1:
I made mistakes in the docker-compose.yml file: I left the code responsible for a few of the images commented out, so those containers were not started. I would also like to move the project to another place on the computer (not critical, but desirable).
I can do:
docker-compose stop
docker-compose rm
Fix everything that I need. And run again:
docker-compose up -d
Is it right to do it this way, or do I need to do something else?
Situation №2:
Everything is set up well, all the necessary containers are running, and I installed the Drupal 9 site in the container. Then I created a sub-theme, added content, wrote code in PHP, JS, CSS files, etc.
How do I commit the changes now? What commands do I need to write in the terminal? For example, in a technology such as Git, this is done with the commands:
git add .
git commit -m "first"
How is it done in Docker? Perhaps there will be a situation where I need to roll back the container to a previous version.
Okay, let's go through each case.
Situation No.1
Whenever you make changes to docker-compose.yml, it's fine to restart the services/images so they reflect the new changes. It could be as minor as a simple port switch from 80 to 8080. Hence, you could just do docker-compose stop && docker-compose up -d, and Docker Compose will restart the containers with the new changes.
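Spelled out, that minimal restart is just:
docker-compose stop
docker-compose up -d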
You don't really need to remove the containers/services unless you have used a custom Dockerfile and have made changes to it. Although your approach below would still give the same result, it just has the extra step of removing the containers without any changes being done to the actual Docker images.
I can do:
docker-compose stop
docker-compose rm
Fix everything that I need. And run again:
docker-compose up -d
Situation No.2
In this case, you would be committing your entire project to Git, along with your Dockerfile and docker-compose.yml file, from your host machine and not from the container. There's no rocket science here.
You won't be committing your code to Git via the containers. The containers are only for deploying and testing your code. You would be committing just the configuration files, i.e. the Dockerfile (if a custom one is used, of course) and the docker-compose.yml file, along with your source code, to Git. The result is that any developer who is collaborating with you in a team can just pull the project, run docker-compose up -d, and the same containers/services running on your machine will be up and running on that developer's host machine.
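For example, from the project root on your host machine (the commit message is only an illustration):
git add .
git commit -m "Add Docker config and custom sub-theme"
git push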
Regarding how to roll back to an older version of your Docker services: you can just roll back to a previous commit and the docker-compose.yml will be reverted with it. Then you can just do:
docker-compose down && docker-compose up -d
I'm new to Docker and am learning how to implement Docker with Jenkins. I was able to successfully bind a Docker volume to my host machine directory with the following command:
docker run --name jenkinsci -p 8080:8080 -p 50000:50000 -v ~/Jenkins:/var/jenkins_home/ jenkins/jenkins:lts
Now that the basic Jenkins is set up and bound to my host, there are a few things I wasn't sure how to handle.
(1) This is only accessible through localhost:8080. How do I make it accessible to other computers? I've read that I can change the URL to my company's public IP address; is this the right approach?
(2) I want to automate the installation of select plugins and the setting of the paths in the Global Tools Configuration. There were some tips on GitHub (https://github.com/jenkinsci/docker/blob/master/README.md), but I wasn't clear on where this Dockerfile is placed. For example, if I wanted the MSBuild and Green Balls plugins to be installed, what would that look like?
FROM jenkins/jenkins:lts
COPY plugins.txt /usr/share/jenkins/ref/plugins.txt
RUN /usr/local/bin/install-plugins.sh < /usr/share/jenkins/ref/plugins.txt
Would I have to create a text file called plugins.txt that contains the list of plugins I want downloaded? And where will this Dockerfile be stored?
(3) I also want a Dockerfile that installs all the dependencies to run my .NET Windows project (nuget, msbuild, wix, nunit, etc). I believe this Dockerfile will be placed in my git repository.
Basically, I'm getting overwhelmed with all this Docker information and am trying to piece together how Docker interacts with Jenkins. I would appreciate any advice and guidance on these problems.
It's OK to get overwhelmed by Docker and Kubernetes. It's a lot of information and an overall shift in how we have been handling applications/services.
To make Jenkins available on all interfaces, use the following command:
docker run --name jenkinsci -p "0.0.0.0:8080:8080" -p "0.0.0.0:50000:50000" -v ~/Jenkins:/var/jenkins_home/ jenkins/jenkins:lts
Yes, you have to provide the plugins.txt file and create a new Jenkins image containing all the required plugins. After that, you can use this new image instead of jenkins/jenkins:lts.
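A minimal plugins.txt sketch for the two plugins you mentioned could look like the following (the short names msbuild and greenballs are assumptions; double-check the exact plugin IDs on the Jenkins plugin site):
msbuild
greenballs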
The new image, suited to your workload, should contain all the dependencies required for your environment.
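For example, assuming the Dockerfile and plugins.txt above sit next to each other, a sketch of building and running the customized image (my-jenkins is just a placeholder tag) could be:
docker build -t my-jenkins:lts .
docker run --name jenkinsci -p "0.0.0.0:8080:8080" -p "0.0.0.0:50000:50000" -v ~/Jenkins:/var/jenkins_home/ my-jenkins:lts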
I have a docker-container with a Python3 environment and various libraries installed.
I'm trying to develop a simple Python program against this environment.
So what I have is a volume with my source code outside the container which is ADDed and set as WORKDIR in the Dockerfile.
I'm then shelling into the container and trying to run the program on the command-line.
When I hit an error, I want to simply change the source in my editor which is outside the container, and run again.
However, when I do this, the executing code in the container doesn't seem to be taking any notice of the changes I made.
If I do
docker-compose up --build
and rebuild the container, then it does.
Obviously this is very slow.
Surely it should be possible for the container to see changes to the code I'm working on without being rebuilt? If so, how do I make this happen?
Using ADD bakes files into a container image, so as you've noticed, updating those files requires an entire image rebuild and container restart. To get around this, you can mount a directory on your host machine over the path you've copied into your container using ADD.
To do this with Docker, you can use -v or --volume. With Docker Compose, you can list the directory to be mounted under volumes:. For example, if you had the following in your Dockerfile:
# Copy app code into the container working directory
ADD /my/app/code /usr/app/src
You can then mount your live code over the baked-in files at container start time (note that directory paths must be absolute; you can use $PWD for this):
$ docker run -v /my/live/app/code:/usr/app/src python:latest
$ docker run -v "$PWD"/app/code:/usr/app/src python:latest
The docker-compose.yml equivalent is as follows:
my-service:
  image: python:latest
  volumes:
    - /my/live/app/code:/usr/app/src
    - ./relative/paths:/work/too
There's more about bind mounts in the documentation.
I'm trying to configure a Golang project with JetBrains Gogland and Docker Compose. I want to use the GOPATH and Go from the Docker container. I mean using the Go installation from the container for autocomplete etc. without installing Golang on the local machine.
The project structure is:
project root
  docker-compose.yml
  back/
    Dockerfile
    main.go
    some other packages
  front/
    all the front files...
After that, I want to deploy my back folder to /go/src/app in the Docker container. The problem is that when I develop the project I can't use autocomplete, as this project is not in my local GOPATH and there are different Golang versions in the Docker container and on my local machine.
I already read this question but I still can't solve my issue.
At the moment this is not possible, nor do I see how it could be possible in the future. Mounting a volume in Docker means you "hide" the contents of that folder in the container and use the files on the host instead. As such, any time you mount a directory from your machine, the container's files at that location won't be available to the machine. This means you can't have Go installed in the container and then mount a folder and use that location for the Go sources. If you are thinking "I'll just mount things in another place, do some symlink magic / copy files around", that's just a bad idea that leads nowhere.
Gogland supports remote debugging as of EAP 10, released a few weeks ago. This allows you to debug applications running in containers or on remote hosts. As such, you can have Go and the source code installed on your machine but have the application running in a container.
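As a rough sketch only (not something the answer above prescribes): remote debugging for Go usually means running the app under Delve inside the container and exposing Delve's port, for example with a Compose service like this, assuming dlv is installed in the image built from back/Dockerfile and the working directory is /go/src/app:
back:
  build: ./back
  command: dlv debug --headless --listen=:2345 --api-version=2   # assumes dlv is available in the image
  ports:
    - "2345:2345"            # hypothetical port for the IDE's remote debugger to attach to
  security_opt:
    - "seccomp:unconfined"   # Delve needs ptrace, which the default seccomp profile restricts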
I have a docker-compose dev stack. When I run docker-compose up --build, the container is built and it executes
Dockerfile:
RUN composer install --quiet
That command writes a bunch of files inside the ./vendor/ directory, which is then only available inside the container, as expected. The vendor/ directory that also exists on the host is not touched and is, hence, out of date.
Since I use that container for development and want my changes to be available, I mount the current directory inside the container as a volume:
docker-compose.yml:
my-app:
  volumes:
    - ./:/var/www/myapp/
This loads an outdated vendor directory into my container, forcing me to rerun composer install either on the host or inside the container in order to have the up-to-date version.
I wonder how I could manage my docker-compose stack differently, so that the changes made to the current folder during docker build are also persisted to the host directory and I don't have to run the command twice.
I do want to keep the vendor folder mounted, as some of the vendors are my own and I like being able to modify them in my current project. So mounting only the folders I need to run my application would not be the best solution.
I am looking for a way to tell docker-compose: Write all the stuff inside the container back to the host before adding the volume.
You can run a short side container after docker-compose build:
docker run --rm -v "$PWD/vendor":/target my-app cp -a vendor/. /target/.
The cp could also be something more efficient, like an rsync. Then, after that container exits, you do your docker-compose up, which mounts vendor/ from the host.
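Put together, the sequence could look like this sketch (my-app is assumed to be the name of the built image, and the paths follow the question above):
docker-compose build
docker run --rm -v "$PWD/vendor":/target my-app cp -a vendor/. /target/.
docker-compose up -d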
Write all the stuff inside the container back to the host before adding the volume.
There isn't any way to do this directly, but there are a few options to do it as a second command.
As already suggested, you can run a container and copy or rsync the files.
Use docker cp to copy the files out of a container (without using a volume); see the sketch after this list.
Use a tool like dobi (disclaimer: dobi is my own project) to automate these tasks. You can use one image to update vendor and another image to run the application. That way updates are done on the host but can be built into the final image. dobi takes care of skipping operations when the artifact is still fresh (based on the modified time of files or resources), so you never run anything unnecessarily.
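For the docker cp option, a sketch (tmp-vendor is just a throwaway container name, and /var/www/myapp/vendor follows the mount path from the question):
docker create --name tmp-vendor my-app
docker cp tmp-vendor:/var/www/myapp/vendor .
docker rm tmp-vendor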