I need to create a Docker image that describes only which tools need to be installed.
Then I want to start a container from that image, run my tests (which are based on Robot Framework), and get the logs on my local machine. Is that possible? If so, how?
If it is not possible, how do I create the image so that I don't need to rebuild it after every change in the code?
For example, I have a test suite with 10 tests. If I build the image, I can't just add a test to the suite; I have to rebuild the image. How do I make Docker "watch" for changes?
Long story short: I need to run tests (the number of tests will always grow) from an isolated environment (a Docker container). How can I do that?
This is my first docker experience.
I think the right solution is to build the image without the tests and then use a bind mount to make the tests available inside the container. That way the image won't actually contain any of the tests and won't need to be rebuilt.
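A minimal sketch of what that looks like, assuming your suites live in ./tests on the host and a hypothetical image called robot-runner that has Robot Framework installed:

docker run --rm \
  -v "$(pwd)/tests:/tests" \
  -v "$(pwd)/results:/results" \
  robot-runner \
  robot --outputdir /results /tests

You can add tests to ./tests at any time; the next run picks them up without a rebuild, and log.html/report.html land in ./results on your local machine.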
I have seen dgoss, but goss doesn't run on Mac, so how do you test that your images work as expected before pushing them to a remote repo?
Thanks
What kind of test are you talking about? If the Docker image has an external interface (open ports) and not too many backend dependencies, you can start it locally and invoke the external interface from outside.
For running test code inside a local container, I'm using a second Dockerfile that builds FROM the image to be tested. That second Dockerfile adds the test code, then I run a container with the extended image. Test results can be exported by copying them to a locally mounted directory.
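A rough sketch of that second Dockerfile, assuming the image under test is tagged myapp:latest and the test code is a script at tests/run_tests.sh that writes its results to /results (all names here are placeholders):

# Dockerfile.test
FROM myapp:latest
COPY tests/ /tests/
CMD ["/tests/run_tests.sh"]

Build the extended image and run it with a locally mounted results directory:

docker build -f Dockerfile.test -t myapp-test .
docker run --rm -v "$(pwd)/results:/results" myapp-test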
Technically, that doesn't test the exact image you're going to push, though. For example, if some package is missing in the original image but you add it as a dependency of the test code, the test might succeed even though the original image is broken.
We are working with the fabric-ca Docker image. It does not come with scp installed, so we have two options:
Option 1: create a new image as described here
Option 2: install scp from the shell when the container is started
We'd like to understand the pros and cons of each.
Option 1: allows you to build on it further, creates a stable state, and lets you verify/test the image before releasing it.
Option 2: takes longer to start up, requires being online during container start, and is harder to trace, understand and manage; the software stack ends up locked in e.g. the bash scripts that start the containers, rather than in a Dockerfile and whatever technology you end up using for container orchestration.
Ultimately, I use option 2 only for discovery, proof of concept or trying something out. Once I know I need a certain container on an ongoing basis, I build a proper image via a Dockerfile.
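For reference, option 1 can be a two-line Dockerfile. A minimal sketch; the package line depends on the base distribution of the tag you use (apk for Alpine-based images, apt-get for Debian/Ubuntu-based ones):

FROM hyperledger/fabric-ca
RUN apk add --no-cache openssh     # provides scp; use apt-get install -y openssh-client on Debian/Ubuntu bases

Build it once with docker build -t fabric-ca-scp . and start your containers from fabric-ca-scp instead of the stock image.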
You should consider your option 2 a non-starter. Either build a custom image or use a host directory bind-mount (docker run -v /host/path:/container/path option) to inject the data you need; I would probably prefer the bind-mount option.
It’s extremely routine to docker rm a container, and when you do, any changes you’ve made locally in a container are lost. For example, if there is a new software release or a critical security update, you have to recreate the container with a new image. You should pretty much never install software in an interactive shell in a container, especially if you’re going to use it to copy in data your application needs: you’ll have to repeat this step every single time you delete and recreate the container.
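If the only reason for wanting scp is to copy files into the container, a bind mount sidesteps the problem entirely. A sketch with placeholder paths (adjust the container path to wherever fabric-ca expects its files):

docker run -d \
  -v /home/me/fabric-ca-files:/etc/hyperledger/fabric-ca-server-config \
  hyperledger/fabric-ca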
Option 1:
The BUILD of the image is longer, but you execute it only the first time
The RUN is faster
You don't need an internet connection at RUN
Includes verification of the different steps
Allows traceability
Option 2:
The RUN is longer
You need an internet connection at RUN
Harder to trace
I am creating an automation framework using Selenium. My entry point for execution is to create containers for different database types, load them with database dumps, and then start the tests.
I have a simple, and possibly foolish, question.
Say I create a docker-compose file which creates the containers mentioned above; normally we run docker-compose up to bring up that compose file.
But can I control docker-compose/the Dockerfile while the execution is going on? For example:
The test starts from TestNG -> a "before" step runs the docker-compose file and creates the containers.
How can I control that?
Thanks in advance
I can think of the following options:
1- Use Ansible to deploy for you; you can write a playbook with the steps.
Advantages: it scales, it manages everything for you, and you can add notifications, but it requires managing Ansible itself and learning it.
2- Use a shell script that starts things in order: start the containers (or whatever order you want), then start the TestNG run. A cheap and dirty solution.
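A bare-bones sketch of option 2, assuming a docker-compose.yml that defines the database containers and a Maven project that drives the TestNG suite (all names are placeholders):

#!/bin/sh
set -e
docker-compose up -d       # bring up the database containers in the background
sleep 15                   # crude wait for the databases to accept connections; a healthcheck is nicer
mvn test                   # runs the TestNG suite
docker-compose down -v     # tear the containers (and their volumes) down afterwards

The same commands can also be launched from a TestNG @BeforeSuite/@AfterSuite method if you prefer to keep the orchestration inside the test framework.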
I've been running cron inside Docker for some time, but I'm not sure my setup is optimal. I have one cron container that runs about 12 different scripts. I can edit the schedule of the scripts, but in order to deploy a new version of the software (some scripts run for about half a day) I have to create a new container to run some of the scripts while the others finish.
One option I'm considering is running one container per script (the containers would share everything in the image except the crontab). But that would still make it hard to coordinate updates across multiple containers sharing some of the same code.
The other alternative I'm considering is running cron on the host machine, with each command being a docker run command. Doing this would let me choose the image for the next run via an environment variable in the crontab.
Does anybody have any experience with either of these two solutions? Are there any other solutions that could help?
If you are just running Docker standalone (single host) and need to run a bunch of cron jobs without thinking too much about their impact on the host, then keeping it simple and running them on the host works just fine.
It would make sense to run them in Docker if you benefit from Docker features like limiting memory and CPU usage (so they can't do anything disruptive). If you also use a log driver that writes container logs to some external logging service so you can easily monitor the jobs, that's another good reason to do it. The last (but obvious) advantage is that deploying new software as a Docker image, instead of messing around on the host, is often a winner.
It's a lot cleaner to make one single image containing all the code you need. Then you trigger docker run commands from the host's cron daemon and override the command/entrypoint. The container will then die and delete itself after the job is done (you might need to capture the container output to logs on the host depending on what logging driver is configured). Try not to send in config values or parameters you change often so you keep your cron setup as static as possible. It can get messy if a new image also means you have to edit your cron data on the host.
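A sketch of what that host crontab can look like, with placeholder image, script and log names:

# m    h    dom mon dow   command
0      2    *   *   *     docker run --rm --memory=512m myorg/jobs:latest python /jobs/nightly_cleanup.py >> /var/log/jobs/nightly_cleanup.log 2>&1
*/15   *    *   *   *     docker run --rm myorg/jobs:latest python /jobs/sync_feeds.py >> /var/log/jobs/sync_feeds.log 2>&1

--rm makes each container delete itself when the job finishes, and the >> redirect keeps the output on the host, since the container's own logs disappear along with it.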
When you use docker run like this you don't have to worry about updating images while jobs are running. Just make sure you tag them with, for example, latest so that the next job uses the new image.
Having 12 containers running in the background with their own cron daemons also wastes some memory, but the worst part is that cron doesn't use the environment variables from the parent process, so if you are injecting config with env vars you'll have to hack around that mess (write them to disk when the container starts, and so on).
If you are worried about jobs running in parallel, there are tons of task scheduling services out there you can use, but that might be overkill for a single Docker standalone host.
One microservice lives in one Docker container. Now, let's say I want to upgrade the microservice - for example, some configuration has changed and I need to re-run it.
I have two options:
I can try to re-use the existing image by having a script that runs on container startup and updates the microservice by reading the new config (if there is one) from some shared volume. After the update, the script runs the microservice.
I can simply drop the existing image and container and create a new image (with a new name) and a new container with the updated configuration/code.
Solution #2 seems more robust to me. There is no 'update' procedure, just a single container creation.
However, what bothers me is whether this re-creation of the image has bad side effects, like a lot of dangling images or something similar. Imagine that this may happen very often while the user plays with the app - for example, a developer trying something out will want to play with different configurations of the microservice and will restart it often. But once it is configured, it will not change. Also, when I say configuration I don't mean just config files, but also user code etc.
For production, you'll want to deploy a new image for every change. This ensures your process is repeatable.
However, building a new image every time you write a new line of code would be a nightmare during development. The best option is to run your Docker container and mount the source directory from your file system into the container. That way, when you make changes in your editor, the code in the container updates too.
You can achieve this like so:
docker run -v /Users/me/myapp:/src myapp_image
That way you only have to build myapp_image once and can easily make changes thereafter.
Now, if you have a running container that was started without a mount and you want to make changes to a file, you can do that too. It's not recommended, but it's easy to see why you might want to.
If you run:
docker exec -it <my-container-id> bash
This will put you into the container and you can make changes in vim/nano/editor of your choice while you're inside.
Your option #2 is definitely preferable for a production environment. Ideally you should have some automation around this process, typically to perform something like a blue-green deploy where you replace the containers based on the old image one by one with containers from the new one, testing as you go; only when you are satisfied with the new deployment do you clean up the containers from the previous version and remove the old image. That way you can quickly roll back to the previous version if needed.
In a development environment you may want to take a different approach where you bind mount the application in to the container at runtime allowing you to make updates dynamically without having to rebuild the image. There is a nice example in the Docker Compose docs that illustrates how you can have a common base compose YML and then extend it so that you get different behavior in development and production scenarios.
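A rough sketch of that pattern, with placeholder service, image and path names:

# docker-compose.yml (common base)
version: "3"
services:
  app:
    image: myorg/myapp:latest

# docker-compose.override.yml (development only; docker-compose up picks it up automatically)
version: "3"
services:
  app:
    volumes:
      - ./src:/app/src

In development, a plain docker-compose up gives you the bind-mounted source; in production you run only the base file (docker-compose -f docker-compose.yml up -d), so the container uses the code baked into the image.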