How to work efficiently locally with Kubernetes/Docker? - docker

I am new to Docker and I just ran my first local test with Kubernetes (via Minikube), and it looks promising!
Now I would like to know how to work with these tools efficiently while developing.
With Docker alone, I wasn't very satisfied with the process, but it wasn't so bad:
I make a change in the code
I stop the container
I rebuild the image
I run the image again
I guess there are ways/tools to avoid executing all these steps manually, but I was planning to dive into that later.
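For reference, the plain-Docker loop above boils down to a few commands that are easy to script (a sketch; image and container names are placeholders):
docker stop myapp && docker rm myapp      # stop and remove the old container
docker build -t myapp:dev .               # rebuild the image
docker run -d --name myapp myapp:dev      # run the new image again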
But now that I work with Kubernetes/Minikube, here is what the development process looks like:
I make a change in the code
I delete the pod
I rebuild the image
I save it as a tar archive, then load it into Minikube
Executing all of these steps every time I make a change in the code slows productivity down significantly.
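For reference, here is roughly what that loop looks like when scripted (a sketch; image name, tag and pod label are placeholders):
docker build -t myapp:dev .
# pipe the tar straight into Minikube's Docker daemon instead of saving a file first
docker save myapp:dev | (eval "$(minikube docker-env)" && docker load)
# newer Minikube releases can do this in one step: minikube image load myapp:dev
kubectl delete pod -l app=myapp   # the Deployment recreates the pod and picks up the rebuilt image (assuming imagePullPolicy is not Always)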
Is there a way to optimize/automate this process for every code change?

There are a bunch of third-party tools out there to help with this, such as Draft and gitkube.
I personally use Draft, which creates a Heroku-like workflow that makes pushing new applications much easier.
Using Draft with minikube is fairly straightforward:
# enable some plugins
minikube addons enable registry
minikube addons enable ingress
# install helm
# depends on your workstation, I have a mac so:
brew install kubernetes-helm
# configure helm on your minikube
helm init
# install draft
brew tap azure/draft
brew install draft
draft init --auto-accept --ingress-enabled
# from your repo do:
draft create --app myapp
# run your app
draft up
More resources:
https://medium.com/#vittore/draft-on-k8s-part1-e5b046857df4
https://radu-matei.com/blog/real-world-draft/

Related

How can I update the composer version that is being used inside my ddev containers?

Currently my docker/ddev setup is running Composer version 1.10.6 2020-05-06 inside the container.
I would like to make the composer version inside the container be 1.10.7 2020-06-03.
I found one way to do it: ddev exec sudo composer self-update, but it's not permanent. The container reverts back to using 1.10.6 after a ddev restart.
In all of my searches, I can't find a way to update the files that build the container so that Composer is updated permanently. I don't need it to attempt an update every time I start my container; I just need to be able to tell it, now, to permanently switch to the version I want.
An additional piece: adding RUN sudo composer self-update to the .ddev/web-build/Dockerfile makes it attempt to update every time, which is not ideal. I want to update when I'm ready, as I also need to update my test servers to match versions.
I added that command to my Dockerfile and it updated to 1.10.7. I removed the command from my Dockerfile so that it doesn't update every time I restart ddev. When I restarted ddev (without that command in the Dockerfile) it reverted composer back to 1.10.6.
Where is it getting the instructions to use that version? I need to find that and tell it to use 1.10.7 instead. I don't want it to update itself every time I do ddev restart.
It's not normally important, but you can add a .ddev/web-build/Dockerfile with these contents:
ARG BASE_IMAGE
FROM $BASE_IMAGE
RUN composer self-update
And your composer will be updated during the image build process.
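To apply the change, a restart should be enough, since ddev rebuilds the web image when that Dockerfile changes; a quick way to verify (a sketch):
ddev restart
ddev exec composer --version   # should now report the self-updated release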
Randy's suggestion worked well for me; however, I've also found an alternative solution that involves less typing.
Read the project config.yaml and it explains how the Composer version can be changed.
This file is found in ~/yourprojectname/.ddev/config.yaml.
The first lines of the file are the configuration in use, and the remaining lines explain the available configuration alternatives. Enjoy :)
# if composer_version:"" it will use the current ddev default composer release.
# It can also be set to "1", to get most recent composer v1
# or "2" for most recent composer v2.
# It can be set to any existing specific composer version.
# After first project 'ddev start' this will not be updated until it changes
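For example, to pin a specific release and apply it (a sketch; the --composer-version flag only exists in newer ddev releases, otherwise edit .ddev/config.yaml by hand):
ddev config --composer-version="1.10.7"   # writes composer_version into .ddev/config.yaml
ddev restart
ddev composer --version                   # verify inside the web container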

How to set up a TYPO3 site with docker and ddev?

I'm new to docker and I've been told ddev is a simple way to set up a local container to run a TYPO3 project.
But I'm confused. I'm not familiar with all these containers yet. How should I proceed to get a grip?
This tutorial is based on https://docs.typo3.org/m/typo3/guide-contributionworkflow/master/en-us/Appendix/SettingUpTypo3Ddev.html – but note that that guide is a step-by-step manual for contributing to the TYPO3 core. If you just want to run your own site, the «Clone TYPO3» section doesn't apply.
So start like this:
Install Docker (Desktop App is fine) from
https://www.docker.com/products/docker-desktop
Install ddev: https://ddev.readthedocs.io/en/latest/#installation (Mac: brew tap drud/ddev && brew install ddev)
Create a directory where you want to run the site: mkdir mysite; cd mysite
Configure ddev: run ddev config
There's not much to choose from in the wizard. You can set the web root (e.g. public_html, so you have one extra level above it) and choose from a few CMS presets. They don't change much; in the case of TYPO3, the preset manages the db connection and some nginx settings.
The file .ddev/config.yaml will be created. In it you can find a lot of options.
Add your site (and, if necessary, run composer)
Run ddev with ddev start
Check whether mkcert is installed; if not, follow the provided instructions (this makes sure you can use self-signed certificates, at least in Firefox) (Mac: brew install mkcert nss; mkcert -install)
ddev will output some information: where to find your site, which port, where phpMyAdmin is, etc.
ddev help gives you more commands
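Put together, the basic setup is roughly this (a sketch for macOS with Homebrew, following the steps above):
brew tap drud/ddev && brew install ddev mkcert nss
mkcert -install
mkdir mysite && cd mysite
ddev config          # answer the wizard: web root (e.g. public_html) and the TYPO3 preset
# add your site here (and run composer if needed)
ddev start
ddev describe        # shows URLs, ports, phpMyAdmin and database credentials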
If you want to log into the container, use ddev ssh. This is NOT needed to change files etc. – the files are mirrored into the container automatically! But you can log in to install binaries etc. Let's try that.
Some commands you may need inside the container:
uname -a                             # what system are we running? -> linuxkit
sudo apt-get update                  # update the list of available packages
apt-cache search packagename        # search for a package
sudo apt-get install poppler-utils   # install the PDF tools (pdftotext, pdfinfo, ...)
whereis convert                      # path to ImageMagick's convert, if it's already installed (ImageMagick is a collection; convert is one of its tools)
exit                                 # log out of the container, back to your system
Now, how to connect to the database which lives inside the docker container?
run ddev describe and you will get the login data. It’s basically db for everything.
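For a quick look from the command line, a sketch (ddev mysql only exists in newer ddev releases):
ddev ssh -s db        # open a shell in the db container
mysql -u db -pdb db   # user, password and database are all "db"
# newer ddev releases: simply run "ddev mysql" from the project directory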
For TYPO3, the ddev setup provides an AdditionalConfiguration.php file that can be used. It's missing two important parameters though: the system maintainers and the install tool password. Here's an example.
$GLOBALS['TYPO3_CONF_VARS']['SYS']['trustedHostsPattern'] = '.*';
$GLOBALS['TYPO3_CONF_VARS']['DB']['Connections']['Default'] = array_merge($GLOBALS['TYPO3_CONF_VARS']['DB']['Connections']['Default'], [
    'dbname' => 'db',
    'host' => 'db',
    'password' => 'db',
    'port' => '3306',
    'user' => 'db',
]);
// This mail configuration sends all emails to mailhog
$GLOBALS['TYPO3_CONF_VARS']['MAIL']['transport'] = 'smtp';
$GLOBALS['TYPO3_CONF_VARS']['MAIL']['transport_smtp_server'] = 'localhost:1025';
$GLOBALS['TYPO3_CONF_VARS']['SYS']['devIPmask'] = '*';
$GLOBALS['TYPO3_CONF_VARS']['SYS']['displayErrors'] = 1;
// add these
$GLOBALS['TYPO3_CONF_VARS']['SYS']['systemMaintainers'] = [123,456];
$GLOBALS['TYPO3_CONF_VARS']['BE']['lockSSL'] = 1; // optional
$GLOBALS['TYPO3_CONF_VARS']['BE']['installToolPassword'] = '123';
But what if you want to access the database with a separate tool instead of the preconfigured phpMyAdmin? If you use sequel pro, simply run ddev sequelpro and your database will be launched automagically in sequel pro.
You can also do this manually; then you need to expose the db port so it can be reached externally. Do this in .ddev/config.yaml by adding (for example) host_db_port: "32778". Now you can set up a db management tool with host 127.0.0.1, port 32778 and db for user, password and database name (and store the bookmark).
Remember: PHP inside the container will still use the default port 3306!
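With the port exposed, any client on the host can reach the database, for example the plain mysql CLI (a sketch using the values above):
mysql -h 127.0.0.1 -P 32778 -u db -pdb db   # host port from host_db_port; credentials are all "db"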
OK, here we go. ddev is already started, so make sure you're in your local project directory (the one containing .ddev/) and run ddev describe to see the parameters again. If you go to https://mysite.ddev.local, you should find everything from your webroot working.
When done, finish with ddev stop. I'm not yet sure where databases are persisted when ddev is stopped, so maybe take a dump first with ddev snapshot.
Explore many more possibilities of ddev with ddev help.

Deploying cgal docker

I'm trying to deploy the official CGAL Docker image. From reading the README I understand that after downloading a specific image (e.g. I want a container with Ubuntu 16.04, CGAL and all of its dependencies) using the following command:
docker pull cgal/testsuite-docker:ubuntu # get a specific image by replacing TAG with some tag
I need to install the CGAL library itself using:
./test_cgal.py --user **** --passwd **** --images cgal-testsuite/ubuntu
The thing is that eventually I want to start the container with an interactive shell, i.e.
docker run --rm -it -v $(pwd):/source somedocker
And I couldn't understand where the generated image ends up after the CGAL installation script runs.
Those images are not meant for running CGAL. They are only images we use to define an environment for our testsuite and run tests in it, including compiling CGAL.
test_cgal.py will download the integration branch, which is rarely working, as it is the branch in which we merge our PRs to test them nightly. Don't use this to get a working CGAL. To my knowledge, there is no such image as the one you are looking for. No official one, anyway.
Furthermore, installing CGAL at runtime in this image will not modify the image: once you close the container, your installation will be lost. You need to specify how to install CGAL in the Dockerfile of your image and then build it if you want a "ready to use" image.
You can use the Dockerfile of the image you found to write your own, as all the dependencies should be specified in it, but you need to edit it to download CGAL and maybe build it if you don't want the header-only version. This is not done in test_cgal.py or anywhere in this Docker repository.
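A minimal sketch of that approach, assuming you have written such a Dockerfile (the image name is a placeholder):
docker build -t my-cgal-dev .                        # Dockerfile based on the testsuite image, plus the CGAL installation steps
docker run --rm -it -v $(pwd):/source my-cgal-dev    # now the interactive container has CGAL available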

Can't load CSS with Docker installation of Superset

I installed Superset following these steps using Docker, but when I go to http://localhost:8088/superset there is no CSS. Furthermore, every time I try to create a chart I get sent back to the main page: if I hit http://localhost:8088/chart/add I get the same static interface as http://localhost:8088/superset.
I am trying to install on a MacBook Pro (2018).
Link to installation steps I followed
https://github.com/apache/incubator-superset/blob/master/docs/installation.rst#user-content-start-with-docker
Code I used
git clone https://github.com/apache/incubator-superset/
cd incubator-superset/contrib/docker
docker-compose run --rm superset ./docker-init.sh
docker-compose up
I installed the Docker version of Apache Superset and faced a similar issue. When I logged into the superset container via docker exec, I found that webpack was still compiling the front-end sources. Once it finished, everything loaded perfectly.
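A quick way to check whether that is what is happening (service names depend on your docker-compose.yml):
docker-compose logs -f   # wait until the webpack build finishes, then reload the page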
Sometimes it takes time to build the superset_node service, which is responsible for compiling the UI components. To force-build it instead of waiting:
docker-compose down
docker-compose build
docker-compose up

Manage apt-get based dependencies on Docker

We have a large C++ code base in my company and we are trying to move to a microservice infrastructure based on Docker.
We have a couple of in-house libraries with helper functions and utilities that we regularly use in our code. Our idea was to create a base image for developers with these libraries already installed and make it available as our "base" image. This gives us the benefit that all our software always uses the latest version of our own libraries.
My question is about Docker's cache system in relation to CI and external dependencies. Let's say we have a Dockerfile like this:
FROM ubuntu:latest
# Install external dependencies
RUN apt update && apt install -y \
    boost-libs \
etc...
# Copy our software
...
# Build it
...
# Install it
...
If our code changes, we can trigger the CI and Docker will understand that it can reuse the cached layers created before, up to the point where it copies our software. What happens if one of our external dependencies offers a newer version? Will the cache automatically be invalidated? How can we trigger a CI build in case any of our packages receives a new version?
In essence, how do we make sure we are always using the latest packages available for our external dependencies?
Please keep in mind the above Dockerfile is just an example to illustrate; we are also using other tricks in the playbook, such as a lighter base image (not Ubuntu) and multi-stage builds to avoid dev packages in our production containers.
Docker's caching algorithm is fairly simple. It looks at the previous state of the image build, and the string of the command you are running. If you are performing a COPY or ADD, it also looks at a hash of those files being copied. If a previous build is found on the server with the same previous state and command being run, it will reuse the cache.
That means an external change, e.g. pulling packages from an external repository, will not be detected and the cache will be reused instead of rerunning that line. There are two solutions that I've seen to this:
Option 1: Change your command by adding versions to the dependencies. When one of those dependencies changes, you'll need to update your build. This is added work, but it also guarantees that you only pull in the versions that you are ready for. That would look like this (pinning boost-libs to a 1.5 version number):
# Install external dependencies
RUN apt update && apt install -y \
    boost-libs=1.5 \
etc...
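If you go this route, you could first check which version string is available to pin (boost-libs is just the placeholder package name from the example):
apt-cache policy boost-libs   # shows the installed and candidate versions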
Option 2: Change a build arg. These are injected as environment variables into the RUN commands, and Docker sees a change in the environment as a different command to run. That would look like:
# Install external dependencies
ARG UNIQUE_VAR
RUN apt update && apt install -y \
    boost-libs \
etc...
And then you could build the above with the following to trigger the cache to be recreated daily on that line:
docker build --build-arg "UNIQUE_VAR=$(date +%Y%m%d)" ...
Note, there's also the option to build without the cache any time you wish with:
docker build --no-cache ...
This would cause the cache to be ignored for all steps (except the FROM line).
