Getting apt-get on an alpine container - docker

I have to install a few dependencies in my Docker container. I want to use the python:3.6-alpine image to keep it as light as possible, but the apk package manager that comes with Alpine is giving me trouble, so I would like to get the apt-get package manager instead. I tried:
apk add apt-get
and it didn't work.
How can I get it on the container?

Using multiple package systems is usually a very bad idea, for many reasons. Packages are likely to collide and break, and you'll end up with a much greater mess than you started with.
See this excellent answer for more detail: Is there a pitfall of using multiple package managers?
A more feasible approach would be troubleshooting and resolving the issues you are having with apk. apk is designed for simplicity and speed, and should take very little getting used to. It is really an excellent package manager, IMO.
For a good tutorial, I warmly recommend the apk introduction page at the Alpine Wiki site:
https://wiki.alpinelinux.org/wiki/Alpine_Linux_package_management
If you're determined not to use apk, and for the sake of experiment want to try bringing up apt instead, you'll first have to build apt from source: https://github.com/Debian/apt. Then, if it produces a functional build (not likely, since it's probably not compatible with musl libc), you'll have to wire it up to some repositories, but Alpine's repositories are only fit for apk, not apt. As you can see, this is not really feasible, and not the route you want to go down.
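If the trouble with apk is that pip needs a compiler and headers to build some of your dependencies (a very common stumbling block on Alpine because of musl libc), the usual pattern looks roughly like the sketch below; the apk package list is only a guess and depends on which Python packages you are actually building:
FROM python:3.6-alpine
# --no-cache keeps the apk index out of the image; these are typical build
# requirements for C extensions (adjust them to your actual dependencies)
RUN apk add --no-cache gcc musl-dev libffi-dev openssl-dev
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt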

Related

Building Docker container for Azure IoT Edge Module with GrovePI+

I have been experimenting with GrovePI+ running Python programs and am looking to extend my experimentation to include integrating with Azure IoT Hub by creating Azure IoT Edge modules. I know that I need to update the module settings to run with escalated rights so that the program can access I/O, and I have seen the documentation on how to accomplish that, but I am struggling a bit with getting the container built. The approach that I had in mind was to base the image on the arm32v7/python:3.7-stretch image and from there include the following RUN command:
RUN apt-get update && \
    apt-get -y install apt-utils curl && \
    curl -kL dexterindustries.com/update_grovepi | bash
The problem is that the script is failing because it looks for files in /home/pi/. Before I go deeper down the rabbit hole, I figured I should check and see if I am working on a problem that someone else already solved. Has anyone built Docker images to run GrovePi programs? If so, what worked for you?
I've no experience with GrovePi, but remember that modules (Docker containers) are completely self-contained and don't have access to the host system. So if that script works when ssh'ed into a device, I can see why it would not work in a module; the module is a little system-in-a-box that has no awareness of or access to locations like /home/pi/.
Basically, I'd expect you need to configure the Pi itself with whatever the GrovePi stuff needs, and then package your Python into a module. The tricky bit might be getting access to hardware like I2C from within the module, but that's not too terrible. This kind of device pass-through is what you'll need (the examples are for different devices, but the approach is the same).
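For reference, a minimal sketch of exposing an I2C bus to a plain Docker container is shown below; the bus number and image name are assumptions, and with IoT Edge the same device mapping would go into the module's container create options:
# pass the Pi's I2C device node through to the container ("my-grovepi-module" is a placeholder image name)
docker run --device /dev/i2c-1 my-grovepi-module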

would dockerfile apt-get cache cause nonidentical docker container?

I am reading the Dockerfile documentation.
I saw it mention that Docker uses a build cache to speed up the build process.
So the documentation recommends that if you RUN apt-get update, you merge it with the package install that follows, e.g. RUN apt-get update && apt-get install curl, to avoid installing out-of-date packages because of the cache.
I am wondering what happens if I download the same Dockerfile but build the Docker image on different computers at different times.
Because of the local cache on each computer, there is still a chance that they build different Docker containers even though they run the same Dockerfile.
I haven't encountered this problem; I just wonder whether it is possible and how to prevent it.
Thanks.
Debian APT repositories are external resources that change regularly, so if you docker build on a different machine (or repeat a docker build --no-cache on the same machine) you can get different package versions.
On the one hand, this is hard to avoid. Both the Debian and Ubuntu repositories promptly delete old versions of packages: the reason to apt-get update and install in the same RUN command is that yesterday's package index can reference package files that no longer exist in today's repository. In principle you could work around this by manually downloading every .deb file you need and manually dpkg --install them, skipping the networked APT layer.
On the other, this usually doesn't matter. Once you're using a released version of Debian or Ubuntu, package updates tend to be limited to security updates and bug fixes; you won't get a different major version of a package on one system vs. another. This isn't something I've seen raised as an issue, except that having a cached apt-get update layer can cause you to miss a security update you might have wanted.
A Docker image, once built, is immutable; but to ensure that the Dockerfile generates the same image every time, you need to pin the exact software versions in your install command.
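As an example, pinning in a Dockerfile looks roughly like this; the version string below is purely illustrative, and apt-cache madison curl on your target release shows what is actually available:
# pin an exact package version and clean up the apt lists in the same layer
RUN apt-get update \
    && apt-get install -y curl=7.88.1-10+deb12u5 \
    && rm -rf /var/lib/apt/lists/*
Keep in mind the caveat above: because the repositories drop old package versions, such a pin can stop resolving once a newer build is published.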

How to install supervisor in a docker container?

I need to use supervisord in a docker container.
I want to keep the size of the container as small as possible.
Supervisord can be installed using either apt-get or python-pip.
Which method is recommended, and what should the thought process be when making these kinds of decisions?
P.S. I need supervisor because of legacy code; I can't do without it.
The supervisord version is not important.
It mostly depends on the version you want to install (if that is relevant to you). apt-get's version is usually behind pip's.
Also, apt's version is tested for compatibility with the other system dependencies, whereas installing with pip could cause conflicts with already installed dependencies (most likely if your base OS is old).
If your goal is to keep the image size small, make sure you install supervisor without leaving any cache behind (i.e. delete the apt indices and the /var/cache directory) or unwanted files (i.e. remove unneeded packages, use apt's install --no-install-recommends, use pip's install --no-cache-dir), all in a single Dockerfile RUN statement.
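A sketch of the two options on a Debian-based Python image, each cleaning up after itself within a single RUN layer (the base image and cleanup paths are assumptions):
# Option A: distro package, skipping recommended packages and removing the apt lists in the same layer
RUN apt-get update \
    && apt-get install -y --no-install-recommends supervisor \
    && rm -rf /var/lib/apt/lists/*

# Option B: pip package, without keeping pip's download/wheel cache
RUN pip install --no-cache-dir supervisor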

Combining multiple Docker images to create build environment

I am the developer of a software product (NJOY) with build requirements of:
CMake 3.2
Python 3.4
gcc 6.2 or clang 3.9
gfortran 5.3+
In reading about Docker, it seems that I should be able to create an image with just these components so that I can compile my code and use it. Much of the documentation is written with the implication that one wants to create a scalable web architecture, and thus doesn't appear to be applicable to compiled applications like the one I'm trying to build. I know it is applicable; I just can't seem to figure out what to do.
I’m struggling with separating the Docker concept from a Virtual Machine; I can only conceive of compiling my code in an environment that contains an entire OS instead of just the necessary components. I’ve begun a Docker image by starting with an Ubuntu image. This seems to work just fine, but I get the feeling that I’m overly complicating things.
I’ve seen a Docker image for gcc; I’d like to combine it with CMake and Python into an image that we can use. Is this even possible?
What is the right way to approach this?
Combining Docker images is not supported; Docker images are layered. You start from a base image and then install the additional tools you want on top of it.
For instance, you can start from the gcc image and build on it by creating a Dockerfile. Your Dockerfile might look something like:
FROM gcc:latest

# install cmake (run apt-get update first, and pass -y so the install is non-interactive)
RUN apt-get update && apt-get install -y cmake

# install python (the stated requirement is Python 3, hence the python3 package)
RUN apt-get install -y python3
Then you build this Dockerfile to create the Docker image. This will give you an image that contains gcc, cmake and python.
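Building and using such an image might look like the following; the tag name and the bind-mount path are placeholders:
docker build -t njoy-build .
# mount the source tree and run an out-of-source CMake build inside the container
docker run --rm -v "$PWD":/src -w /src njoy-build \
    bash -c "mkdir -p build && cd build && cmake .. && make -j"
Keeping the toolchain in the image and the source on the host this way means you rebuild the image only when the toolchain changes, not on every code change.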

How can I preinstall software on travis-ci?

We use travis-ci for continuous integration. I'm troubled by the fact that our build process takes too long (~30 minutes). We depend on several Ubuntu packages which we fetch using apt-get, among others python-pandas.
We also have some of our own debs which we fetch over HTTPS and dpkg install. Finally, we have several pip/pypi requirements, such as Django, Flask, Werkzeug, numpy, pycrypto, selenium.
It would be nice to be able to at least pre-package some of these requirements. Does travis support something like this? How can I prepackage some of these requirements? Is it possible to build a custom travis base VM and start the build from there (perhaps using docker)? Especially the apt-get requirements from the default Ubuntu precise repository as well as the pip requirements should be easy to include.
While this question has already been answered, the answer doesn't actually provide a solution path. You can use cache directives in Travis to cache your built packages for future Travis runs.
cache:
  directories:
    - $HOME/.pip-cache/
    - $HOME/virtualenv/python2.7
install:
  - pip install -r requirements.txt --download-cache "$HOME/.pip-cache"
Now your package content is saved for your next travis build. You can similarly store slow-to-retrieve resources in other directories and cache them.
Currently Travis-CI doesn't support such a feature, but there are related issues open, such as custom VMs, running Docker in an OpenVZ container (Spotify seems to have a somewhat working example linked in that issue), using Linux Containers (LXC), and using KVM.
Some of those have workarounds mentioned in the issues, I'd give those a try until something more substantial is supported by Travis-CI. I'd also suggest reaching out to Travis-CI support and see if they have any suggestions (maybe there's something coming out soon that could help).
