I would like to build a Docker image on GitLab CI/CD based on Alpine. The container has to download a website (only index.html) as a file named with the date, every hour.
All dated files should be saved in a Docker volume.
How do I start with this? I am new to Docker.
First you need to run a Docker container using any image you want (Alpine in your case).
Then set up everything in it that you want to run (like downloading the website).
Then create a Docker image and host it in the GitLab Docker registry.
Then you simply have to write the .gitlab-ci.yml file and push it to your repository.
Then you need to schedule your pipeline as described here:
https://docs.gitlab.com/ee/user/project/pipelines/schedules.html
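A minimal sketch of how the pieces could fit together. Everything below is an assumption to adapt: the site URL (example.com), the /data volume path, the file names, and the Alpine version. Instead of relying only on a pipeline schedule, this variant lets the container itself fetch index.html every hour via BusyBox crond, so the timestamped files accumulate in the mounted volume; the .gitlab-ci.yml job just builds and pushes the image using GitLab's predefined CI_REGISTRY_* variables.

#!/bin/sh
# fetch.sh (sketch): save index.html under a timestamped name into the mounted volume
wget -O "/data/index_$(date +%Y-%m-%d_%H-%M).html" "https://example.com/index.html"

# Dockerfile (sketch)
FROM alpine:3.19
RUN apk add --no-cache wget
COPY fetch.sh /usr/local/bin/fetch.sh
# run the fetch at minute 0 of every hour via BusyBox crond
RUN chmod +x /usr/local/bin/fetch.sh \
 && echo "0 * * * * /usr/local/bin/fetch.sh" > /etc/crontabs/root
VOLUME /data
CMD ["crond", "-f", "-l", "2"]

# .gitlab-ci.yml (sketch): build the image and push it to the GitLab registry
build:
  image: docker:latest
  services:
    - docker:dind
  script:
    - docker login -u "$CI_REGISTRY_USER" -p "$CI_REGISTRY_PASSWORD" "$CI_REGISTRY"
    - docker build -t "$CI_REGISTRY_IMAGE:latest" .
    - docker push "$CI_REGISTRY_IMAGE:latest"

# on the host, run it with a named volume (image path is a placeholder)
docker run -d --name site-fetcher -v website-data:/data registry.gitlab.com/<group>/<project>:latest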
I am very new to Docker. I have a Dockerfile that has all the instructions to build a Docker image. The image has to be deployed to an AWS Fargate cluster. Now I have a Python script which does the following things:
1. Clone a repo from GitHub
2. Locate the Dockerfile, build the image, and push it to AWS ECR
3. Create a task definition and deploy it to the ECS Fargate cluster
4. Additionally, the script generates a YAML file, based on certain logic, with parameters (not related to the container config) that are required by the application
How can I make this file content available to the container before it starts running in AWS?
I was checking --entrypoint and --cmd, but here I can't change the Dockerfile
Is there any way to achieve that without changing the Dockerfile?
You can't add layers to a Docker image without a Dockerfile, but you can build a new image on top of the original image, COPYing the file in and leaving the ENTRYPOINT, CMD, and other configuration as-is.
Build the "intermediate" image from the upstream Dockerfile, giving it a tag you can refer to later:
docker build -t my-intermediate-image:my-version-tag .
Create your YAML file as myfile.yml.
Create a new Dockerfile.final that starts FROM the intermediate image:
FROM my-intermediate-image:my-version-tag
COPY myfile.yml /expected/yaml/path
Build the final image with a tag for ECS:
docker build -f Dockerfile.final -t accountnumber.dkr.ecr.region.amazonaws.com/my-repo:my-version .
Push the final image tag to ECR.
Proceed to update the ECS task definition to point to the new image
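The push step, sketched with AWS CLI v2; accountnumber, region, and my-repo are placeholders matching the tag above:

aws ecr get-login-password --region region | docker login --username AWS --password-stdin accountnumber.dkr.ecr.region.amazonaws.com
docker push accountnumber.dkr.ecr.region.amazonaws.com/my-repo:my-version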
I am trying to start a container using Jenkins and a Dockerfile in my SCM.
Jenkins uses the Dockerfile from my SCM repository and builds the image on a remote server. This is done using the "CloudBees Docker Build and Publish" plugin.
When I ssh to the server, I see that the image has been built with the tags I had defined in Jenkins.
# docker image ls
What I am not able to do is run a container from the image that has been built. How do I get the image ID and start the container? Shouldn't this be simple, given how many plugins are available?
Could your problem be related to how to refer to the recently created image in order to run it? Can you provide an extract of your pipeline and how you are trying to achieve this?
If that was the case, there are different solutions, one being specifying a tag during the build, so you can then refer to it to run it.
In reply to how to work with image IDs: the docker build process will return the image ID of the image it creates. You can capture that ID and then use it to run the container (see the sketch after the options below).
Start the container yourself on the VM using a standard docker run command.
Use software like Watchtower to restart the container with an updated version when one becomes available.
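A minimal sketch of that capture-and-run approach; the image and container names are placeholders:

# build quietly so only the new image ID is printed, then run a container from it
IMAGE_ID=$(docker build -q -t my-app:latest .)
docker run -d --name my-app "$IMAGE_ID"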
There is an ASP.NET Core API project with its sources in GitLab.
I created a GitLab CI/CD pipeline to build a Docker image and push it to the GitLab Docker registry
(thanks to https://medium.com/faun/building-a-docker-image-with-gitlab-ci-and-net-core-8f59681a86c4).
How do I update the Docker containers on my production system after pushing the image to the GitLab Docker registry?
*by update I mean:
docker-compose down && docker pull && docker-compose up
The best way to do this is to use an image puller; a lot of open-source ones are available, or you can write your own in shell. There is one here. We use Swarm, and we use this hook concept to be triggered from our CI/CD pipeline. Once our build stage is done, we hit the hook URL over HTTP, and the Docker host pulls the updated image. One disadvantage of this is that you need a daemon to watch your hook task so that it doesn't crash or go down. So my suggestion is to run this hook task as a Docker container with its restart policy set to always.
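A minimal sketch of such a pull-and-restart task as a plain shell script on the production host, assuming a docker-compose.yml already describes the running services; /srv/myapp is a placeholder path, and the hook mechanism that triggers the script from the pipeline is up to you:

#!/bin/sh
# pull the newest images referenced in docker-compose.yml and recreate the containers
cd /srv/myapp || exit 1
docker-compose pull
docker-compose up -d
# optional: remove superseded images
docker image prune -f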
I have a Jenkins setup in a docker container in my local computer.
Can I move it to a company's CI server and re-use job items?
I tried this on my local computer:
docker commit
docker push
On the CI server:
docker pull
docker run
However, when I ran Jenkins on the CI server, it started as a freshly initialized instance (none of my configuration was there).
How can I get all the configurations and job items using Docker?
As described in the docs for the commit command:
The commit operation will not include any data contained in volumes
mounted inside the container.
The Jenkins home is mounted as a volume, so when you commit the container, the Jenkins home won't be committed. Therefore all the job configuration that is currently in the running local container won't be part of the committed image.
Your problem reduces to how to migrate the jenkins_home volume from your machine to another machine. This problem is solved, and you can find the solution here.
I would suggest, however, a better and more scalable approach, specifically for Jenkins. The problem with the first approach is that there is quite some manual intervention needed whenever you want to start a similar Jenkins instance on a new machine.
The solution is as follows:
Commit the container that is currently running
Copy the job configuration out of the container using the command docker cp <container-name>:/var/jenkins_home/jobs ./jobs. This will copy the job config from the running container onto your machine. Remember to clean out the build folders.
Create a Dockerfile that inherits from the committed image and copies the job config under the Jenkins home.
Push the image, and you should have an image that you can pull and that has all the jobs configured correctly.
The Dockerfile will look something like:
FROM <committed-image>
COPY jobs/* /var/jenkins_home/jobs/
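Put together, the steps might look like this; the container name, image names, and registry host are placeholders:

docker commit <running-jenkins-container> my-jenkins:snapshot
docker cp <running-jenkins-container>:/var/jenkins_home/jobs ./jobs
# the Dockerfile above would then use FROM my-jenkins:snapshot as the committed image
docker build -t registry.example.com/my-jenkins:with-jobs .
docker push registry.example.com/my-jenkins:with-jobs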
You need to check how the Jenkins image (hub.docker.com/r/jenkins/jenkins/) was launched on your local computer: if it was mounting a local volume, that volume should include the JENKINS_HOME with all the job configurations and plugins.
docker run -p 8080:8080 -p 50000:50000 -v jenkins_home:/var/jenkins_home jenkins/jenkins:lts
You need to export that volume too, not just the image.
See for instance "Docker & Jenkins: Data that Persists", using a data volume container that you can then export/import.
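For the volume itself, one hedged option is a throwaway container that tars up jenkins_home; this assumes the volume is named jenkins_home on both machines:

# on the local machine: archive the volume contents into the current directory
docker run --rm -v jenkins_home:/var/jenkins_home -v "$(pwd)":/backup alpine tar czf /backup/jenkins_home.tar.gz -C /var/jenkins_home .
# on the CI server: restore into a fresh volume, then start Jenkins against it
docker volume create jenkins_home
docker run --rm -v jenkins_home:/var/jenkins_home -v "$(pwd)":/backup alpine tar xzf /backup/jenkins_home.tar.gz -C /var/jenkins_home
docker run -p 8080:8080 -p 50000:50000 -v jenkins_home:/var/jenkins_home jenkins/jenkins:lts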
I have successfully built some Docker images:
Now I would like to start my microservices with docker-compose, but unfortunately I am unable to pull those images: repository callista/discovery-server not found: does not exist or no pull access. I solved this error by logging into my Docker Hub account and pushing those images to the remote registry. But it seems like overkill to send such large images (which are likely to change pretty soon) over the Internet over and over again, twice each time (push & pull).
Is it possible to configure Docker to use those images locally and not pull them from a remote server?
I use Docker 1.8 and work on Windows 10.
Do you need to run these images on a server different from the one where you build them?
If you do, you have some alternatives:
As @engineer-dollery said, you can run a registry inside your network; then you would not need to send the images over the Internet, only within your network. Docs: https://docs.docker.com/registry/deploying/
You could use docker save and docker load to move them around too (see the sketch after this answer). Docs: https://docs.docker.com/engine/reference/commandline/save/
But if the server where you run the images is the same one where you build them...
...then you could just add the image option to your docker-compose services and do a docker-compose build, as @lauri said; with the image option, docker-compose will create an image with that name after the build, and then you could run it with docker run. Or do a docker-compose up --build so it always rebuilds the images if something changes in the Dockerfile.
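For the save/load route, a short sketch using the image name from the question (the tar file name is arbitrary):

docker save -o discovery-server.tar callista/discovery-server:latest
# copy the tar file to the other server (scp, shared drive, etc.), then:
docker load -i discovery-server.tar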
If you define the build option in docker-compose.yml, you should be able to build images locally with Docker Compose, and then it uses those images without pulling. By default, Docker Compose builds images if they are not found locally. If you want to rebuild images, just add the --build option to the docker-compose up command: docker-compose up --build
Docker Compose build reference:
https://docs.docker.com/compose/compose-file/#build
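A minimal docker-compose.yml sketch combining both suggestions, using the image name from the question; the build context path is an assumption:

version: "3"
services:
  discovery-server:
    # build context path is a placeholder for wherever the Dockerfile lives
    build: ./discovery-server
    # the built image is tagged with this name, so no pull is attempted
    image: callista/discovery-server:latest

docker-compose up --build then builds that image locally and starts the service without contacting a remote registry.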