What simple fixed-version images can I use with CircleCI? - docker

I am trying to use a plain Linux image, such as Alpine, pinned to a fixed version.
I am trying to find simple images I can use.
I have spent a lot of time on CircleCI's site on pages such as
https://circleci.com/docs/2.0/tutorials/
and
https://circleci.com/docs/2.0/sample-config/#section=configuration
but every configuration I find seems to be targeted at a specific language or setup. I just want a plain Linux shell with bash. Ubuntu seems a bit large, hence me avoiding it. Are there other Linux distributions that are a bit lighter but still have tools like bash and curl built in?
I finally found that using
  - image: alpine:latest
and
  command: |
    apk add bash curl-dev
    echo Hello, world.
worked.
Are there other basic images I could use?
For instance, is there a fixed version rather than just 'latest', so I can ensure it keeps working in the future and I am not surprised by changes?
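For example, I imagine pinning would look something like the config below; the 3.9 tag is just a placeholder for whatever Alpine release is current, and the rest mirrors what already works for me:

version: 2
jobs:
  build:
    docker:
      # pin a specific Alpine release instead of 'latest'
      - image: alpine:3.9
    steps:
      - run:
          command: |
            apk add --no-cache bash curl
            echo Hello, world.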

Related

Is a Docker-ized dev environment good for maintaining legacy software?

Let's say I have an old, unmaintained application that lives on a VPS (e.g. a Symfony 3 PHP app that relies on PHP 5).
If some changes are needed, I have to clone this app to my desktop, build it, make the change and re-deploy. As time goes on, recreating the desktop dev environment gets harder - in this example I can't simply build the app, because the PHP 7 in my CLI breaks the build process.
I tried to dockerize the app, so I added Ubuntu 18 to my docker-compose file... and it doesn't work, as the latest Ubuntu with PHP 5 support is 14.04. 14.04 is also the oldest (official) version available on Docker Hub. But will it still be available in 3 years? If not, Docker won't be able to build the container.
So, my question is: is Docker the right tool to solve the described problem at all?
If so, should I back up the Docker images that my build relies on?
If not, besides proper maintenance, what tool is better?
You can install PHP5 in newer ubuntu versions, but it means adding an external repository.
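For example, a sketch only: the ondrej/php PPA is the usual external repository for old PHP versions, and the exact package list depends on what the app needs.

FROM ubuntu:20.04
ENV DEBIAN_FRONTEND=noninteractive
# add the external PPA that still ships PHP 5.6 packages, then install them
RUN apt-get update && \
    apt-get install -y software-properties-common && \
    add-apt-repository -y ppa:ondrej/php && \
    apt-get update && \
    apt-get install -y php5.6 php5.6-cli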
You could also create your own Docker image, containing only the libraries you want. If so, I'd advise trying Alpine as a base image. There is a bit of a learning curve, but once you adapt you'll have a small image tailored to your needs.
Given that containers let you isolate processes and configuration with a minimal footprint compared to a VM, I think it is the best option. Tailoring and maintaining your own image is not that expensive if you document it correctly, and it lets you always have a system matching your precise requirements.
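On the backup question: you can archive the images your build relies on with the plain Docker CLI, so you do not depend on the tag staying on Docker Hub forever (the tag below is just an example):

docker pull ubuntu:14.04
docker save -o ubuntu-14.04.tar ubuntu:14.04
# later, on any machine with Docker installed
docker load -i ubuntu-14.04.tar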

Building Docker container for Azure IoT Edge Module with GrovePI+

I have been experimenting with GrovePi+ running Python programs and am looking to extend my experimentation to include integrating with Azure IoT Hub by creating Azure IoT Edge modules. I know that I need to update the module settings to run with elevated rights so that the program can access I/O, and I have seen the documentation on how to accomplish that, but I am struggling a bit with getting the container built. The approach I had in mind was to base the image on the arm32v7/python:3.7-stretch image and from there include the following RUN command:
RUN apt-get update && \
    apt-get -y install apt-utils curl && \
    curl -kL dexterindustries.com/update_grovepi | bash
The problem is that the script is failing because it looks for files in /home/pi/. Before I go deeper down the rabbit hole, I figured I should check and see if I am working on a problem that someone else already solved. Has anyone built Docker images to run GrovePi programs? If so, what worked for you?
I've no experience with GrovePi, but remember that modules (Docker containers) are completely self-contained and don't have access to the system. So if that script works when ssh'ed into a device, then I can see why it would not work in a module; the module is a little system-in-a-box that has no awareness of or access to locations like /home/pi/.
Basically, I'd expect you need to configure the Pi itself with whatever is needed for Grove Pi stuff, then you package your Python into a module. The tricky bit might be getting access to hardware like I2C from within the module, but that's not too terrible. This kind of thing is what you'll need (but different devices).
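For the I2C part, the usual approach is to pass the device through in the module's createOptions; this is just Docker's HostConfig, and the /dev/i2c-1 path below is my assumption for what the GrovePi uses, so adjust it to your device:

{
  "HostConfig": {
    "Devices": [
      {
        "PathOnHost": "/dev/i2c-1",
        "PathInContainer": "/dev/i2c-1",
        "CgroupPermissions": "rwm"
      }
    ]
  }
}

Running the module privileged also works, but passing through only the device you need is a bit tighter.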

Combining multiple Docker images to create build environment

I am the developer of a software product (NJOY) with build requirements of:
CMake 3.2
Python 3.4
gcc 6.2 or clang 3.9
gfortran 5.3+
In reading about Docker, it seems that I should be able to create an image with just these components so that I can compile my code and use it. Much of the documentation is written with the implication that one wants to create a scalable web architecture and thus, doesn’t appear to be applicable to compiled applications like what I’m trying to do. I know it is applicable, I just can’t seem to figure out what to do.
I’m struggling with separating the Docker concept from a Virtual Machine; I can only conceive of compiling my code in an environment that contains an entire OS instead of just the necessary components. I’ve begun a Docker image by starting with an Ubuntu image. This seems to work just fine, but I get the feeling that I’m overly complicating things.
I’ve seen a Docker image for gcc; I’d like to combine it with CMake and Python into an image that we can use. Is this even possible?
What is the right way to approach this?
Combining Docker images is not possible directly; Docker images are chained. You start from a base image and then install the additional tools you want on top of that base image.
For instance, you can start from the gcc image and build on it by creating a Dockerfile. Your Dockerfile might look something like:
FROM gcc:latest
# install CMake and Python on top of the gcc base image
# (run apt-get update first so package lists exist; -y makes the install non-interactive)
RUN apt-get update && \
    apt-get install -y cmake python3
Then you build this Dockerfile to create the Docker image. This will give you an image that contains gcc, CMake and Python.
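To make that concrete, building and using the image might look like this (the njoy-build tag and the mount path are just examples, not anything NJOY-specific):

docker build -t njoy-build .
# mount your source tree and open a shell that has gcc, cmake and python available
docker run --rm -it -v "$PWD":/src -w /src njoy-build bash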

Is it a bad practice to use version managers like RVM inside docker containers?

I'm new to using docker and so far I'm unable to find many ruby/rails images that contain RVM or rbenv.
The most common thing I see is that each image has multiple tags and each tagged version has only one version of Ruby installed. See this image for example.
The only way to use another version is to use a different tag for the image, since you cannot install a new version with RVM or rbenv.
Is this done on purpose?
Is it a bad practice to use version managers for programming languages inside docker containers?
Why?
This would be considered a bad practice or anti-pattern in docker. RVM is trying to solve a similar problem that docker is solving, but with a very different approach. RVM is designed for a host or VM with all the tools installed in one place. Docker creates an isolated environment where only the tools you need to run your single application are included.
Containers are ideally minimalistic, only containing the prerequisites needed for your application, making them more portable. Docker also uses layers and a union filesystem to reuse common base images for each image, so any copy of something like Ruby version X is only downloaded and written to disk once, ever (ignoring updates to that image).
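In practice, you pick the Ruby version with the image tag instead of a version manager. A minimal sketch, assuming a Bundler-based app (the 2.7 tag and the file names are only examples):

FROM ruby:2.7
WORKDIR /app
# install the app's gems against exactly this Ruby version
COPY Gemfile Gemfile.lock ./
RUN bundle install
COPY . .
CMD ["ruby", "app.rb"]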
It depends on how you are going to use it. If you need to install just about any version of Ruby in a custom Docker image without messing around with downloading tarballs and applying patches, RVM can be just perfect. RVM is basically a bash script, so using it inside a Docker container is no worse than using any other bash script inside a Docker container.

Development environment setup for Mac and CentOS using Docker

I have searched the history a little but failed to find a good answer, so I am asking my question here. If there is a good answer already, please redirect me to it. Thanks.
The question is: my company's new-hire doc lists a bunch of software to install to set up the development environment. It usually takes 1 or 2 days for a new hire to get everything ready on a new Mac. We want to shorten that process, and the first thing I thought of was Docker.
I read through the Docker user guide and followed some blog posts about setting up a dev environment using Docker, but I am still a little confused about whether Docker applies to our setting. So here are the detailed requirements:
We need to install a bunch of software (much of it customized binaries). Right now we distribute the source code; a new hire needs to build from source, install it and set the environment to include the binary in the PATH. I am wondering if Docker allows us to install customized binaries into its container?
The source code should not stay in the container. The source code is still checked out on one's local machine using git. Then, how can I rely on the Docker container's environment to build my software? From what I have found, you need to mount your folder into the container and then shell into the container to build? Is that how it works?
We usually develop on Mac. Does Docker also support Mac containers, or does it just allow you to run Linux containers using Boot2Docker?
Thank you so much in advance for your help.
Some answers :)
First, I think it's a really good idea to use Docker to standardise the development configuration (software, custom packages, env variables, ...).
With Docker, you can get your customised binaries from the host; it's not a problem. With the RUN instruction, you can use bash to install them and add them to your PATH. You can also write a shell script that installs all your stuff and run that script when you build your image.
Your code will be on the host, and you can "mount" a host folder into your Docker container with the -v option. Ex: docker run -v /home/user/code:/tmp/code your_image. I'll detail below how the developer will use your Docker image.
Yep, you have to use Boot2Docker; it works well.
Once your development image is ready, you can publish it on the official Docker registry (or host a private registry on your network).
Next, the developer will launch the following Docker command:
docker run --rm -ti -v /home/user/code:/tmp/code your_build_image /bin/bash
This will launch a bash shell in your Docker container, and the developer will be able to compile the code, e.g. cd /tmp/code && mvn clean install.
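To make the image side concrete, a minimal Dockerfile for such a build image could look like this (the Ubuntu base, the tools/ folder and the package list are placeholders for your own binaries and dependencies):

FROM ubuntu:16.04
# copy your customised binaries into the image and put them on the PATH
COPY tools/ /opt/devtools/
ENV PATH=/opt/devtools/bin:$PATH
# install whatever build dependencies your projects need (maven for the mvn example above)
RUN apt-get update && \
    apt-get install -y build-essential maven && \
    rm -rf /var/lib/apt/lists/*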
Please have a look at this article to learn about volumes: http://jam.sg/blog/mongodb-docker-part-2/
And this one about Dockerfile: https://www.digitalocean.com/community/tutorials/docker-explained-using-dockerfiles-to-automate-building-of-images
You can also find a lot of Dockerfiles on github (search Dockerfile).
If the goal is to speed up the time it takes to get a Mac setup and usable in your environment, you might want to look at Boxen.
From the "About" section:
"Boxen is your team's IT robot. It's a dangerously opinionated framework that automates every piece of your development environment. GitHub, Inc. wrote the first version of Boxen (imaginatively called “The Setup”) to help employees start shipping on day one."
