I want to clone the code into a Docker image while building it.
I was thinking of passing the SSH keys during git clone, but it is not working. Below is the command I am using; it fails with "permission denied":
ssh-agent bash -c 'ssh-add /home/username/.ssh/id_rsa.pub; git clone ssh://git@location.git'
I can't clone using HTTPS.
Also, if the code is cloned into the image, can we git pull while running it in a container?
So there are two real paradigms here:
I am working on my local machine.
In this scenario, you more than likely already have the code checked out onto your local machine. Here, just use the COPY directive to take the entire folder and put it somewhere in the image. No need to worry about Git or anything of the sort.
I am having a build server perform the build
In this scenario, it makes sense to let the build server check the code out and then perform the same action as above: we just copy the checked-out code into the image.
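In both cases the Dockerfile ends up looking roughly the same. A minimal sketch, assuming a Node.js app (the base image, paths, and start command are placeholders for whatever your stack needs):

FROM node:18-alpine
WORKDIR /app
# Copy the already-checked-out code from the build context into the image
COPY . /app
# Install dependencies inside the image
RUN npm install
CMD ["npm", "start"]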
Lastly, another alternative that works for dynamic languages like PHP, JS, etc. is NOT to put the code into the image, but to MOUNT the code onto the container at runtime.
Let's take PHP for example. If the webserver is looking in /var/www/html for the code, you can run your image like this:
docker run -d --name {containername} -p 80:80 -p 443:443 -v /my/dir/where/code/is:/var/www/html {your base image}
The above will create a container from your image and pass your local directory through to the /var/www/html directory, meaning any changes you make locally will appear in the source code inside the container. This was used much more prominently back with Vagrant and in the early days of Docker, before Docker Compose was stable.
I think the way to do it is:
On your build machine:
git clone <repo>
git archive --format=tar.gz <commit_hash/branch> --output=code.tar.gz
docker build .
In the Dockerfile you'll have to add:
ADD code.tar.gz <directory>
This will make sure that you're not adding any .git stuff into your container, and the image will be as small as possible.
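Put together, a minimal Dockerfile for this flow might look like the following (the base image and target directory are placeholders). Note that ADD automatically extracts local tar archives, so the image ends up with the plain source tree and none of the .git metadata:

FROM php:8.2-apache
# ADD auto-extracts local tar archives into the target directory
ADD code.tar.gz /var/www/html/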
What I want to achieve is that I give a colleague a Docker container to run locally (via docker run) on his PC. I build it often via GitLab CI and push it with a version tag (SemVer 2.0) to Nexus.
Colleague should get a simple bash script like so:
#!/bin/bash
docker run -it -p 8080:80 nexus.company.net/goodservice:latest --dependency-overrides=local
echo "find the good service under http://localhost:8080, have fun!"
("--dependency-overrides" is a simple method I just implemented so that he can run the whole thing without Redis, I replace the implementations in the DI container this way.)
Problem:
Once a version (say: 1.0.1-pre.5) is downloaded, "latest" doesn't pick up any updates anymore.
I could EASILY fix it by using "--pull=always" on docker run, but it's a .NET image of about 100 MB overall (it's Alpine-based already, but that is still a lot). My colleague is on a metered 4G Internet connection.
Is there any method to make docker check if "latest" is something else now? Didn't find anything in the documentation.
(If somebody from docker (or CNCF?) reads this: would be great to have an option like "--pull=updated". :-))
Any ideas?
Add a docker pull command to your script. It'll only pull the image if it has been updated.
#!/bin/bash
docker pull nexus.company.net/goodservice:latest
docker run -it -p 8080:80 nexus.company.net/goodservice:latest --dependency-overrides=local
echo "find the good service under http://localhost:8080, have fun!"
If you want to limit the amount of data that needs to be downloaded, make sure that you typically only touch the last layer of the image when you create a new version. That way your colleague only needs to download the last layer and not the entire image.
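In Dockerfile terms that means ordering the instructions so the rarely-changing parts come first and the application itself comes last. A rough sketch for a .NET service (the image tag, paths, and DLL name are made-up placeholders):

FROM mcr.microsoft.com/dotnet/aspnet:8.0-alpine
WORKDIR /app
# The layers above rarely change between versions, so they stay cached
# on the client. The freshly published application goes last: a new
# release only changes (and re-downloads) this final layer.
COPY ./publish/ ./
ENTRYPOINT ["dotnet", "GoodService.dll"]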
I installed Docker and Docker Compose
I downloaded the latest release of a Docker-based Drupal stack (there are php, mariadb, apache images, etc.) and put it in my project folder /var/www/html/mydrupaldocker
Next, I made the settings in the .env and docker-compose.yml files and ran the containers with the command:
docker-compose up -d
After running the images from this folder and adding the unzipped Drupal 9 folder to my project folder, I will start installing Drupal 9 in the browser.
And I have questions on two possible situations:
Situation №1:
I made mistakes in the docker-compose.yml file: I have commented-out code which is responsible for a few of the images, so those containers were not started. I also want to place the project somewhere else on the computer (not critical, but desirable).
I can do:
docker-compose stop
docker-compose rm
Fix everything that I need. And run again:
docker-compose up -d
Is it right to do so? Or do I need to do something else?
Situation №2:
Everything is set up well, all the necessary containers are running, and I installed the Drupal 9 site in the container. Then I created a sub-theme, added content, wrote code in PHP, JS, CSS files, etc.
How do I commit the changes now? What commands do you need to write in the terminal? For example, in technology such as git, this is done with the commands:
git add .
git commit -m "first"
How is it done in Docker? Perhaps there will be a situation when I need to roll back the container to a previous version.
Okay, let's go by each case.
Situation No.1
Whenever you make changes to docker-compose.yml, it's fine to restart the services so they reflect the new changes. It could be as minor as a simple port switch from 80 to 8080. Hence, you can just do docker-compose stop && docker-compose up -d and the Docker CLI will restart the containers with the new changes.
You don't really need to remove the containers/services unless you have used a custom Dockerfile and made changes to it. Your assumption below would still give the same result; it just has an extra step of removing the containers, without any changes being made to the actual Docker images.
I can do: docker-compose stop, then docker-compose rm
Fix everything that I need. And run again:
docker-compose up -d
Situation No.2
In this case you would be committing your entire project to Git, along with your Dockerfile and docker-compose.yml file, from your host machine and not from the container. There's no rocket science here.
You won't be committing your code to Git via the containers. The containers are only for deploying and testing your code. You would be committing just the configuration files, i.e. the Dockerfile (if a custom one is used, of course) and the docker-compose.yml file, along with your source code, to Git. The result is that any developer collaborating with you in a team can just pull the project and run docker-compose up -d, and the same containers/services running on your machine will be up and running on the other dev's host machine.
Regarding how to roll back to an old version of the Docker services, you can just roll back to a previous commit and the docker-compose.yml will be reverted. Then you can just do:
docker-compose down && docker-compose up -d
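For example, reverting just the Compose file to an earlier state might look like this (the commit hash is a placeholder):

$ git log --oneline -- docker-compose.yml   # find the commit to roll back to
$ git checkout <old-commit> -- docker-compose.yml
$ docker-compose down && docker-compose up -d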
I would like to run and test parse-dashboard via Docker, as documented in the readme.
I am getting the error message, "Parse Dashboard can only be remotely accessed via HTTPS." Normally, you can bypass this by adding the line "allowInsecureHTTP": true in your parse-dashboard-config.json file. But even if I have added this option to my config file, the same message is displayed.
I tried to edit the config file in the Docker container, whereupon I discovered that none of my local file changes were present in the container. It appeared as though my project was an unmodified version of the code from the GitHub repository.
Why do the changes that I make to the files in my working directory on the host machine not show up in the Docker container?
But what is actually uploaded to my Docker image is the config file from my master branch.
It depends:
what that "docker" is: the official DockerHub or a private docker registry?
how it is uploaded: do you build an image and then use docker push, or do you simply do a git push back to your GitHub repo?
Basically, if you want to see the right files in the Docker container that you run, you must be sure to run an image you have built (docker build) from a Dockerfile which COPYs the files from your current workspace.
If you do a docker build from a folder where your Git repo is checked out at the right branch, you will get an image with the right files.
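As a minimal sketch of that workflow (the image name is illustrative, and the ports follow the readme's defaults):

$ git checkout master                 # or whichever branch has your config
$ docker build -t parse-dashboard .   # the Dockerfile must COPY/ADD your workspace
$ docker run -d -p 8080:4040 parse-dashboard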
The Dockerfile from the parse-dashboard repository you linked uses ADD . /src. This is a bad practice (because of the problems you're running into). Here are two different approaches you could take to work around it:
Rebuild the Image Each Time
Any time you change anything in the working directory (which the Dockerfile ADDs to /src), you need to rebuild for the change to take effect. The exception to this is src/Parse-Dashboard/parse-dashboard-config.json, which we'll mount in with a volume. The workflow would be nearly identical to the one in the readme:
$ docker build -t parse-dashboard .
$ docker run -d -p 8080:4040 -v "$(pwd)/src/Parse-Dashboard/parse-dashboard-config.json":/src/Parse-Dashboard/parse-dashboard-config.json parse-dashboard
Use a Volume
If we're going to use a volume to do this, we don't even need the custom Dockerfile shipped with the project. We'll just use the official Node image, upon which the Dockerfile is based.
In this case, Docker will not run the build process for you, so you should do it yourself on the host machine before starting Docker:
$ npm install
$ npm run build
Now, we can start the generic Node Docker image and ask it to serve our project directory.
$ docker run -d -p 8080:4040 -v "$(pwd)":/src node:4.7.2 sh -c "cd /src && npm run dashboard"
Changes will take effect immediately because you mount your working directory into the container as a volume. Because it's not done with ADD, you don't need to rebuild the image each time. We can use the generic Node image because, if we're not ADDing a directory and running the build commands, there's nothing our image would do differently from the official one.
I'm using Docker on Ubuntu. During the development phase I cloned all the source code from Git on the host, edited it in WebStorm, and ran it with Node.js inside a Docker container with -v /host_dev_src:/container_src so that I could test it.
Then, when I wanted to send it for testing, I committed the container and pushed a new version. But when I pulled and ran the image on the test machine, the source code was missing. That makes sense, as on the test machine there's no /host_dev_src available.
My current workaround is to clone the source code on the test machine and run docker with -v /host_test_src:/container_src. But I'd like to know if it's possible to copy the source code directly into the container and avoid that manipulation. I'd prefer to just copy, paste and run the image file with the source code, especially since there's no Internet connection on our testing machines.
PS: It seems docker cp only supports copying files from the container to the host.
One solution is to have a git clone step in the Dockerfile which adds the source code into the image. During development, you can override this code with your -v argument to docker run so that you can make changes without rebuilding. When it comes to testing, you just check your changes in and build a new image. Now you have a fully standalone image for testing.
Note that if you have a VOLUME instruction in your Dockerfile, you will need to make sure it occurs after the git clone step.
The problem with this approach is that if you are using a compiled language, you only want your binaries to live in the final image. In this case, the git clone needs to be replaced with some code that either fetches or compiles the binaries.
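A rough sketch of such a Dockerfile, assuming a Node.js app and a made-up repository URL (for a compiled language, the clone and run steps would be replaced by fetching or building the binaries):

FROM node:18-alpine
RUN apk add --no-cache git
# Bake the source into the image; during development you can still
# override /app with a -v bind mount
RUN git clone https://example.com/your/repo.git /app
WORKDIR /app
RUN npm install
# Declare the volume only after the code is in place (see the note above)
VOLUME ["/app"]
CMD ["npm", "start"]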
Please treat your source code as data, then package it as a data container; see https://docs.docker.com/userguide/dockervolumes/
Step 1: Create the app_src Docker image
Put a Dockerfile inside your Git repo like this:
FROM busybox
ADD . /container_src
VOLUME /container_src
Then you can build the source image like this:
docker build -t app_src .
During development period, you can always use your old solution -v /host_dev_src:/container_src.
Step 2: Transfer this Docker image like an app image
You can transfer this app_src image to the test system the same way as your application image, probably via a Docker registry.
Step 3: Run as a data container
On the test system, run the app container on top of it (I use Ubuntu for the demo):
docker run -d -v /container_src --name code app_src
docker run -it --volumes-from code ubuntu bash
root@dfb2bb8456fe:/# ls /container_src
Dockerfile  hello.c
root@dfb2bb8456fe:/#
Hope it helps.
(Credits to https://github.com/toffer/docker-data-only-container-demo, from which I got the detailed ideas.)
Adding to Adrian's answer, I do git clone, and then do
CMD git pull && start-my-service
so the latest code on the checked-out branch gets run. This is obviously not for everyone, but it works in some software release models.
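Spelled out in a Dockerfile, the pattern might look like this (the repository URL and start command are placeholders):

FROM node:18-alpine
RUN apk add --no-cache git
RUN git clone https://example.com/your/repo.git /app
WORKDIR /app
RUN npm install
# On every container start, refresh the checked-out branch, then launch
CMD git pull && npm start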
You could try having two Dockerfiles. The base one would know how to run your app from a predefined folder, but would not declare it a volume. When developing, you would run this container with your host folder mounted as a volume. The other one, the package one, would inherit the base one and copy/add the files from your host directory, again without volumes, so that you carry all the files to the tester's host.
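A sketch of the two files, assuming a Node.js app served from a made-up /app folder and a base image tagged myapp-base:

# Dockerfile.base: knows how to run the app from /app, declares no volume
FROM node:18-alpine
WORKDIR /app
CMD ["npm", "start"]

# Dockerfile.package: inherits the base and bakes the files in
FROM myapp-base
COPY . /app

Build the base once, mount your code during development, and build the package image when handing off:

$ docker build -t myapp-base -f Dockerfile.base .
$ docker run -v "$(pwd)":/app myapp-base                 # development
$ docker build -t myapp-package -f Dockerfile.package .  # for the tester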
Let's say I have a container that is fully equipped to serve a Rails app with Passenger and Apache, and I have a vhost that routes to /var/www/app/public in my container. Since a container is supposed to be sort of like a process, what do I do when my Rails code changes? If the app was cloned with Git and there are pending changes in the repo, how can the container pull in these changes automatically?
You have a choice on how you want to structure your container, depending on your deployment philosophy:
Minimal: You install all your Rails prerequisites in the Dockerfile (RUN commands), but have the ENTRYPOINT be something like "git pull && bundle install --deployment && rails server" (sketched after this list). At container boot time it will fetch your latest code.
Snapshot: Same as above, but also run those steps as RUN commands. This way, the container has a pre-installed snapshot of the code, but it will still update when the container is booted. Sometimes this can speed up boot time (i.e. if most of the gems are already installed).
Container as Deployment: Same as above, but change the ENTRYPOINT to "rails server" only. This way, your container is your code. You'll have to make new containers every time you change your Rails code (automation!). The advantage is that your container won't need to contact your code repo at all. The downside is that you always have to remember which container is the latest. (Tags can help.) And right now, Docker doesn't have a good story on cleaning up old containers.
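A rough sketch of the "Minimal" variant, with a made-up repository URL (a starting point, not a tested setup):

FROM ruby:3.2
RUN git clone https://example.com/your/rails-app.git /app
WORKDIR /app
# Fetch the latest code and gems at every boot, then start the server
ENTRYPOINT git pull && bundle install --deployment && rails server -b 0.0.0.0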
In this scenario, it sounds like you have built an image and are now running this image in a container.
Using the image your running container originates from, you could add another build step to git pull your most up-to-date code. I'd consider this an incremental update, as you're building upon a preexisting image. I'd recommend tagging and pushing to your index (assuming you're using a private index) as appropriate. The new image would then be available to run.
Depending on the need, you could also rebuild the base image of your software. I'm assuming you're using a Dockerfile to build your original image, which includes a Git checkout of your software. You could then tag and push it to your index for use as appropriate.
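The incremental update can be as small as a two-line Dockerfile built FROM the image you are already running (the image names and code path are placeholders):

# Dockerfile.update: build upon the existing image and refresh the code
FROM myregistry/myapp:1.0
RUN cd /var/www/app && git pull

$ docker build -t myregistry/myapp:1.1 -f Dockerfile.update .
$ docker push myregistry/myapp:1.1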
In Docker v0.8 it will be possible to start a new command in a running container, so you will be able to do what you want.
In the meantime, one solution would consist in using volumes.
Option 1: Docker managed volumes
FROM ubuntu
...
ADD host/src/path /var/www/app/public
VOLUME ["/var/www/app/public"]
CMD start rails
Start and run your container, then when you need to git pull, you can simply:
$ docker ps # -> retrieve the id of the running container
$ docker run --volumes-from <container id> <your image with git installed> sh -c 'cd /var/www/app/public && git pull'
This will result in your first running container having its sources updated.
Option 2: Host volumes
You can start your container with:
$ docker run -v `pwd`/srcs:/var/www/app/public <yourimage>
and then simply git pull in your host's sources directory; it will update the container's sources.