This Dockerfile's goal is to provide a thrift-compiler Docker image.
I was just wondering why this image needs to install Golang.
It appears to download the Golang binary package but only copies over gofmt. Looking at https://github.com/apache/thrift/blob/19baeefd8c38d62085891d7956349601f79448b3/compiler/cpp/src/thrift/generate/t_go_generator.cc it seems that at one point they were running gofmt on the generated Go code.
The comment for that part of code links to https://issues.apache.org/jira/browse/THRIFT-3893 which references pull request https://github.com/apache/thrift/pull/1061 where the feature was actually removed.
The specific commit (https://github.com/apache/thrift/commit/2007783e874d524a46b818598a45078448ecc53e) appears to be in 0.10 but not 0.9. So, when gofmt was disabled, they probably either forgot to remove it from the image or decided it was worth leaving in case the feature was fixed and re-enabled at a later date.
It might be worth opening an issue to ask the Thrift team about it and if it can be removed.
I'm new to Docker so I want to find best practices for my specific problem.
PROBLEM:
I have 6 Python web-scraping scripts that use the same libraries (same requirements.txt).
My scripts need frequent updating (a few times per week).
Also, my scripts read from and write to Excel files, and I need to be able to update those Excel files from time to time.
SOLUTIONS?
Do I really need 6 images and 6 containers even though my containers will have the same libraries? I find it time-consuming to delete the container and image every time I update my code.
For accessing my Excel files, I read about VOLUMES and I intend to implement them. Is that a good solution?
Do I really need 6 images and 6 containers even though my containers will have the same libraries?
It depends on technical possibility and personal preference. If you find a good, maintainable way to run all scripts in one Docker container, there's no reason you cannot do it. You could easily use a cron-like solution such as this image.
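As a rough sketch (script names, schedule, and base image are assumptions, not a definitive setup), one image driving all six scripts via cron could look like this:

    FROM python:3.6
    # cron schedules all six scripts inside a single container
    RUN apt-get update && apt-get install -y cron && rm -rf /var/lib/apt/lists/*
    WORKDIR /app
    COPY requirements.txt .
    RUN pip install -r requirements.txt
    # copy the six scraping scripts (only two shown here)
    COPY scraper1.py scraper2.py ./
    # crontab file with lines like: 0 * * * * root python /app/scraper1.py
    COPY scrapers-cron /etc/cron.d/scrapers-cron
    RUN chmod 0644 /etc/cron.d/scrapers-cron
    CMD ["cron", "-f"]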
There are advantages to keeping Docker images single-purpose, though. One of them is clear isolation. If one of your scripts fails to run, you'll have one failing container only and five others that still run successfully. Plus you have full transparency over what exactly fails where.
I find it time-consuming to delete the container and image every time I update my code.
I would propose to use some CI pipeline to do things like this. The pipeline would automatically build the images on a push, publish them to a registry and recreate the containers/services on your server.
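As a minimal sketch of what such a pipeline could run on each push (image name, registry, and container name are made up):

    # CI side: rebuild and publish the image
    docker build -t registry.example.com/scrapers:latest .
    docker push registry.example.com/scrapers:latest

    # server side: pull the new image and recreate the container
    docker pull registry.example.com/scrapers:latest
    docker rm -f scrapers
    docker run -d --name scrapers registry.example.com/scrapers:latest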
For accessing my Excel files, I read about VOLUMES and I intend to implement them. Is that a good solution?
Yes, that's what volumes were made for: Accessing and storing data that isn't part of your image.
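For example, a bind-mounted host directory keeps the Excel files outside the image, so you can update them without rebuilding (paths and names are assumptions):

    # the host's ./data directory appears at /app/data inside the container;
    # the script reads and writes the Excel files there
    docker run -d --name scraper1 -v "$(pwd)/data:/app/data" scrapers python /app/scraper1.py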
Imagine we have FROM python:3.6.4 in our Dockerfile. It appears to be quite specific, so we may expect that every time Docker downloads this image as a part of docker build in a fresh environment, we'll get the same base image.
But that's actually not the case. At the time of writing, this Dockerfile (generated two days ago) was used for the image. The build itself was presumably two days ago too (so apt-get'd packages in the image are from that time), although neither https://hub.docker.com/_/python/ nor https://store.docker.com/images/python shows build details. But, for example, https://hub.docker.com/r/aslushnikov/latex-online/builds/ lists builds.
So two images built from the same Dockerfile may be different. A minor example of why this matters: an image built more than two days ago may generate a warning during pip install (because it has pip 9.0.2 but 9.0.3 is available), while an image built today may not (because it already has 9.0.3). Of course, this concrete issue (which is the discrepancy, not the warning itself) can be fixed using pip install --disable-pip-version-check, but more issues are possible.
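For illustration, in the Dockerfile that fix would look something like this (the requirements file name is a placeholder):

    # suppress the "new pip version available" check so builds from older
    # and newer base images produce the same output
    RUN pip install --disable-pip-version-check -r requirements.txt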
As far as I understand, almost the whole point of Docker is repeatability, so it's a bit strange to see a leak in such a common place as specification of the base image. Sometimes this may be preferable (when we want latest fixes) but sometimes not (when we want repeatability).
Theoretically, every image can be tracked in git and so on, but that's a last resort. An ID in a Dockerfile, docker-compose.yml or an argument for docker would obviously be better. The question is: from where to get this ID and where to put it?
Docker has two mechanisms of identifying images.
The first one is the well-known tagging mechanism, which gives no guarantee of deterministic deployments because a tag can be reused to point to other images.
The second one is the much less familiar immutable identifier mechanism. The immutable identifier is essentially a SHA256 hash digest. Tags can be reused, but an immutable identifier never will be.
See how to pull an image by its immutable identifier: https://docs.docker.com/engine/reference/commandline/pull/#pull-an-image-by-digest-immutable-identifier
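For example (the digest below is a placeholder, not a real one):

    # pull a specific, immutable build of the image
    docker pull python@sha256:<digest>

    # or pin the base image in the Dockerfile instead of FROM python:3.6.4
    FROM python@sha256:<digest>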
I want to display non-code differences between current build and the latest known successful build on Jenkins.
By non-code differences I mean things like:
Environment variables, including Jenkins parameters (set), maybe with some filter
Version of system tool packages (rpm -qa | sort)
Versions of python packages installed (pip freeze)
While I know how to save and archive these files as part of the build, the part that is not clear is how to generate the diff/change report of the differences between the current build and the last successful build.
Please note that I am looking for a pipeline compatible solution and ideally I would prefer to make this report easily accessible on Jenkins UI, like we currently have with SCM changelogs.
Or to rephrase this: how do I create a build manifest and diff it against the last known successful one? If anyone knows a standard manifest format that could easily combine all this information, that would be great.
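To illustrate, here is a rough sketch of what I imagine (paths, the use of env instead of set, and the way the last successful manifest is archived are all placeholders):

    # write a sorted, diff-friendly manifest of the build environment
    { env | sort; rpm -qa | sort; pip freeze; } > manifest.txt

    # diff against the manifest archived by the last successful build;
    # "|| true" keeps a non-empty diff from failing the step
    diff last-successful/manifest.txt manifest.txt > manifest.diff || true

The manifest.diff could then be archived as a build artifact so it is visible in the Jenkins UI.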
you always ask the most baller questions, nice work. :)
we always try to push as many things into code as possible because of the same sort of lack of traceability you're describing with non-code configuration. we start with using Jenkinsfiles, so we capture a lot of the build configuration there (in a way that still shows changes in source control). for system tool packages, we get that into the app by using docker and by inheriting from a specific tag of the docker base image. so even if we want to change system packages or even the python version, for example, that would manifest as an update of the FROM line in the app's Dockerfile. Even environment variables can be micromanaged by docker, to address your other example. There's more detail about how we try to sidestep your question at https://jenkins.io/blog/2017/07/13/speaker-blog-rosetta-stone/.
there will always be things that are hard to capture as code, and builds will therefore still fail and be hard to debug occasionally, so i hope someone pipes up with a clean solution to your question.
I've built an image using a Dockerfile and noticed an error in the created image. I then ran the image and fixed the error.
Now I would like to know if the correct flow is to commit the changes to the built image, or create a new image.
Thanks
Basically you have two options:
Fix your Dockerfile and rebuild it; that's the reason we have Dockerfiles, from which an expected/correct image can be built by anybody, anywhere.
Run a container from your incorrect image and fix it, then commit back to the image. You can choose this option when your build process is extremely long but all you need is a quick/small fix. But remember to always fix the Dockerfile as well, since it is the "definition" of your image and I believe you don't want to leave any kind of "bug" in it.
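The second option, in commands (image and container names are made up):

    # run a container from the broken image and make the quick fix inside it
    docker run -it --name quickfix myapp:1.0 /bin/bash
    # ... fix the error inside the container, then exit ...

    # commit the fixed container as a new image, then clean up
    docker commit quickfix myapp:1.0-fixed
    docker rm quickfix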
If a working image is all you want, then committing it is fine. You will then get a new image that includes your fix.
However, the benefit of having a Dockerfile is that your build is reproducible, so if you ever want to share the image with someone else, or foresee yourself rebuilding it, you should probably maintain the Dockerfile.
I'm working on creating some docker images to be used for testing on dev machines. I plan to build one for our main app as well as one for each of our external dependencies (postgres, elasticsearch, etc). For the main app, I'm struggling with the decision of writing a Dockerfile or compiling an image to be hosted.
On one hand, a Dockerfile is easy to share and modify over time. On the other hand, I expect that advanced configuration (customizing application property files) will be much easier to do in vim before simply committing a new image.
I understand that I can get to the same result either way, but I'm looking for PROS, CONS, and gotchas with either direction.
As a side note, I plan on wrapping this all together using Fig. My initial impression of this tool has been very positive.
Thanks!
Using a Dockerfile:
You have an 'audit log' that describes how the image is built. For me this is fundamental if it is going to be used in a production pipeline where more people are working and maintainability should be a priority.
You can automate the build process of your image, which is an easy way of keeping the container up to date with system updates, or of making it part of a continuous delivery pipeline.
It is a cleaner way of creating the layers of your image (each Dockerfile instruction is a different layer).
Changing a container and committing the changes is great for testing purposes and for fast development for a conceptual test. But if you plan to use the result image for some time, I would definitely use Dockerfiles.
Apart from this, if you have to modify a file and doing it with bash tools (awk, sed, ...) turns out to be very tedious, you can add any file you wish from outside during the build process.
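For example, instead of patching a config file with sed during the build, you can keep the already-edited file next to the Dockerfile and add it (file names are assumptions):

    # Dockerfile excerpt: ship the pre-edited file instead of editing it in place
    ADD application.properties /opt/app/config/application.properties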
I totally agree with Javier, but you need to understand that an image created from a Dockerfile can be different from an image built with the same version of the Dockerfile one day later.
Maybe in your build process you automatically retrieve the latest updates of an app or the OS, etc.
And in that case, if you need to reproduce a crash or whatever, you can't rely on the Dockerfile.