How can I update the composer version that is being used inside my ddev containers? - docker

Currently my docker/ddev setup is running Composer 1.10.6 (2020-05-06) inside the container.
I would like the Composer version inside the container to be 1.10.7 (2020-06-03).
I found one way to do it: ddev exec sudo composer self-update, but it's not permanent. The container reverts back to using 1.10.6 after a ddev restart.
In all of my searches, I can't find a way to update the files that build the container so they update Composer permanently. I don't need it to attempt an update every time I start my container; I just need to be able to tell it now to permanently change over to the version I want.
An additional piece: adding RUN sudo composer self-update to the .ddev/web-build/Dockerfile makes it attempt to update every time, which is not ideal. I want to update when I'm ready, as I also need to update my test servers to match versions.
I added that command to my Dockerfile and it updated to 1.10.7. I removed the command from my Dockerfile so that it doesn't update every time I restart ddev. When I restarted ddev (without that command in the Dockerfile) it reverted composer back to 1.10.6.
Where is it getting the instructions to use that version? I need to find that and tell it to use 1.10.7 instead. I don't want it to update itself every time I do ddev restart.

It's not normally necessary, but you can add a .ddev/web-build/Dockerfile with these contents:
ARG BASE_IMAGE
FROM $BASE_IMAGE
RUN composer self-update
And your composer will be updated during the image build process.
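If you'd rather pin an exact release than pick up whatever is newest at each build, composer self-update also accepts a version argument. A minimal sketch of the same Dockerfile, using the version from the question:
ARG BASE_IMAGE
FROM $BASE_IMAGE
RUN composer self-update 1.10.7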

Randy's suggestion worked well for me; however, I've also found an alternative solution that involves less typing.
Read the project config.yaml and it explains how the Composer version can be changed.
This file is found in ~/yourprojectname/.ddev/config.yaml.
The first lines of the file are the configuration in use, and the remaining lines explain the available alternatives. Enjoy :)
# if composer_version:"" it will use the current ddev default composer release.
# It can also be set to "1", to get most recent composer v1
# or "2" for most recent composer v2.
# It can be set to any existing specific composer version.
# After first project 'ddev start' this will not be updated until it changes
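For the question's case, pinning the exact release would be a one-line change in .ddev/config.yaml (the version shown is simply the one asked about), followed by a ddev restart to apply it:
composer_version: "1.10.7"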

Related

Upgrade a package in containerised application

I would like to update a Yarn package inside package.json (Next.js project) within a Docker container. I saw that inside the Dockerfile we run yarn install --frozen-lockfile.
For this project there is also a docker compose with other containers.
How would you do that? My first try was to run docker compose up and then yarn upgrade 'package', but I got errors unrelated to the package, as if I were running a fresh yarn install on my environment.
When you are upgrading anything, it is recommended NOT to do it on the live/running container. Instead, update what you want in your source code and Dockerfile, create a NEW version of the image, and deploy the new image over the old one (with docker-compose, in your case).
That is what best practice strives towards. If it is possible, it is recommended you go this route.
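A rough sketch of that workflow, assuming your web service is named web in the compose file (adjust the names to your project):
yarn upgrade some-package      # update package.json / yarn.lock on the host
docker compose build web       # rebuild the image so the new lockfile is baked in
docker compose up -d web       # replace the running container with the new image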

How to modify 'docker-compose-local.yml' for it to install all requirements (needed to run Amazon MWAA environment locally)

I am running aws-mwaa-local-runner in order to run a local Apache Airflow environment (in Docker for Windows).
However, after creating the container using ./mwaa-local-env start, I repeatedly get a Broken DAG ModuleNotFoundError, even though the packages are listed in my /docker/config/requirements.txt file (my file has a few more requirements that I need in it). When I compare my /docker/config/requirements.txt file with the output of the pip freeze command run in the Airflow container, I can see that the requirements I need for my DAGs are missing.
I tried to pip install my other requirements in the Airflow container, but to no avail.
Is there a way to modify the docker-compose-local.yml file so that it installs all of my requirements.txt when creating the container (i.e. when running Airflow)?
Is there maybe something I might be missing? Any help or suggestion would be greatly appreciated.
Take a look at https://github.com/aws/aws-mwaa-local-runner. You should install the requirements file located in dags/ locally:
pip install -r requirements.txt
Add your extra requirements to dags/requirements.txt, not docker/config/requirements.txt. The former is installed every time you start the service, but the latter is only installed when you build or rebuild the image.
Additionally, keeping your added requirements separate is important because you will need to upload the list to your MWAA environment.
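For illustration, a dags/requirements.txt could look like the sketch below (package names and versions are placeholders; list whatever your DAGs actually import). After adding them, restart with ./mwaa-local-env start and the packages should then appear in pip freeze inside the Airflow container.
apache-airflow-providers-amazon==2.4.0
pandas==1.3.3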

How to set up a TYPO3 site with docker and ddev?

I'm new to docker and I've been told ddev is a simple way to set up a local container to run a TYPO3 project.
But I'm confused. I'm not familiar with all these containers yet. How should I proceed to get a grip?
The tutorial is based on https://docs.typo3.org/m/typo3/guide-contributionworkflow/master/en-us/Appendix/SettingUpTypo3Ddev.html but bear in mind that it is a step-by-step manual for contributing to the TYPO3 core. If you want to run your own site, the «Clone TYPO3» section doesn't apply.
So start like this:
Install Docker (Desktop App is fine) from
https://www.docker.com/products/docker-desktop
Install ddev: https://ddev.readthedocs.io/en/latest/#installation (Mac: brew tap drud/ddev && brew install ddev)
Create a directory where you want to run the site: mkdir mysite; cd mysite
Configure ddev: run ddev config
There's not much to choose from in the wizard. You can set the web root (e.g. public_html, so you have one more level above it) and choose from a few CMS presets. They don't change much; in the case of TYPO3, the preset manages the DB connection and some nginx settings.
The file .ddev/config.yaml will be created. In it you can find a lot of options.
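For illustration, the generated config.yaml for this walkthrough might start roughly like this (the exact values depend on your answers in the wizard; the PHP version is just an example):
name: mysite
type: typo3
docroot: public_html
php_version: "7.4"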
Add your site (and, if necessary, run composer)
Run ddev with ddev start
See if mkcert is installed; if not, follow the provided instructions (this will make sure you can use self-signed certificates, at least in Firefox) (Mac: brew install mkcert nss; mkcert -install)
ddev will output some information: where you can find your site, which port it uses, where phpMyAdmin is, etc.
ddev help gives you more commands
If you want to log into the container, use ddev ssh. This is NOT for changing files; the files are mirrored into the container automatically! But you can log in to install binaries etc. Let's try that.
Some commands you may need:
What system are we running? uname -a -> linuxkit
Update available packages: sudo apt-get update
Search for a package: apt-cache search packagename
Install PDF tools (pdftotext, pdfinfo, ...): sudo apt-get install poppler-utils
Get the path to ImageMagick (if it's already installed): whereis convert (remember, ImageMagick is a collection; convert is one of the tools)
Log out of the container, back to your system: exit
Now, how to connect to the database which lives inside the docker container?
run ddev describe and you will get the login data. It’s basically db for everything.
For TYPO3, the ddev setup provides an AdditionalConfiguration.php file that can be used. It's missing two important parameters though: systemMaintainers and the install tool password. Here's an example.
$GLOBALS['TYPO3_CONF_VARS']['SYS']['trustedHostsPattern'] = '.*';
$GLOBALS['TYPO3_CONF_VARS']['DB']['Connections']['Default'] = array_merge($GLOBALS['TYPO3_CONF_VARS']['DB']['Connections']['Default'], [
'dbname' => 'db',
'host' => 'db',
'password' => 'db',
'port' => '3306',
'user' => 'db',
]);
// This mail configuration sends all emails to mailhog
$GLOBALS['TYPO3_CONF_VARS']['MAIL']['transport'] = 'smtp';
$GLOBALS['TYPO3_CONF_VARS']['MAIL']['transport_smtp_server'] = 'localhost:1025';
$GLOBALS['TYPO3_CONF_VARS']['SYS']['devIPmask'] = '*';
$GLOBALS['TYPO3_CONF_VARS']['SYS']['displayErrors'] = 1;
// add these
$GLOBALS['TYPO3_CONF_VARS']['SYS']['systemMaintainers'] = [123,456];
$GLOBALS['TYPO3_CONF_VARS']['BE']['lockSSL'] = 1; // optional
$GLOBALS['TYPO3_CONF_VARS']['BE']['installToolPassword'] = '123';
But what if you want to access the database with a separate tool instead of the preconfigured phpMyAdmin? If you use Sequel Pro, simply run ddev sequelpro and your database will open automagically in Sequel Pro.
You can also do this manually; then you need to define the DB port to access it externally. Do this in .ddev/config.yaml by adding, for example, host_db_port: "32778". Now you can set up a DB management tool with these connection details (and store the bookmark).
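For example, a quick connection test from the host could look like this (the port is whatever you set as host_db_port; the db/db/db credentials are the defaults shown by ddev describe):
mysql -h 127.0.0.1 -P 32778 -u db -pdb db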
Remember: PHP will still use the default Port 3306!
Ok, here we go. ddev is already started, so make sure you're in your local directory (where .ddev/ is) and run ddev describe to see the parameters again. If you go to https://mysite.ddev.local, you should find everything from your web root working.
When done, finish with ddev stop. I'm not yet sure where databases are persisted when ddev is stopped, so maybe take a dump first with ddev snapshot.
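The snapshot commands look roughly like this (check ddev help snapshot for the options available in your version):
ddev snapshot                    # dump the current database to .ddev/db_snapshots
ddev snapshot restore --latest   # restore the most recent snapshot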
Explore many more possibilities of ddev with ddev help.

Manage apt-get based dependencies on Docker

We have a large C++ code base in my company and we are trying to move to a microservice infrastructure based on Docker.
We have a couple of in-house libraries that provide helper functions and utilities we regularly use in our code. Our idea was to create a base image for developers with these libraries already installed and make it available as our "base" image. This gives us the benefit of all our software always using the latest version of our own libraries.
My question is about Docker's cache system in relation to CI and external dependencies. Let's say we have a Dockerfile like this:
FROM ubuntu:latest
# Install external dependencies
RUN apt update && apt install -y\
boost-libs \
etc...
# Copy our software
...
# Build it
...
# Install it
...
If our code changes, we can trigger the CI and Docker will understand that it can use the cached image that was created before, up to the point where it copies our software. What happens if one of our external dependencies offers a newer version? Will the cache be invalidated automatically? How can we trigger a CI build when any of our packages receives a new version?
In essence, how do we make sure we are always using the latest packages available for our external dependencies?
Please keep in mind the above Dockerfile is just an example to illustrate the question; we are trying to use other tricks in the playbook, such as using a lighter base image (not Ubuntu) and multi-stage builds to avoid dev packages in our production containers.
Docker's caching algorithm is fairly simple. It looks at the previous state of the image build, and the string of the command you are running. If you are performing a COPY or ADD, it also looks at a hash of those files being copied. If a previous build is found on the server with the same previous state and command being run, it will reuse the cache.
That means an external change, e.g. pulling packages from an external repository, will not be detected and the cache will be reused instead of rerunning that line. There are two solutions that I've seen to this:
Option 1: Change your command by adding versions to the dependencies. When one of those dependencies changes, you'll need to update your build. This is added work, but also guarantees that you only pull in the versions that you are ready for. That would look like (fixing boost-libs to a 1.5 version number):
# Install external dependencies
RUN apt update && apt install -y\
boost-libs=1.5 \
etc...
Option 2: change a build arg. These are injected as environment variables into the RUN commands and docker sees a change in the environment as a different command to run. That would look like:
# Install external dependencies
ARG UNIQUE_VAR
RUN apt update && apt install -y\
boost-libs \
etc...
And then you could build the above with the following to trigger the cache to be recreated daily on that line:
docker build --build-arg "UNIQUE_VAR=$(date +%Y%m%d)" ...
Note, there's also the option to build without the cache any time you wish with:
docker build --no-cache ...
This would cause the cache to be ignored for all steps (except the FROM line).

Setup Docker Jenkins with default plugins

I want to create a Jenkins-based image with some plugins installed, as well as npm. To do so I have the following Dockerfile:
FROM jenkins:2.60.3
RUN install-plugins.sh bitbucket
USER root
RUN apt-get update
RUN curl -sL https://deb.nodesource.com/setup_8.x | bash -
RUN apt-get install -y nodejs
RUN npm --version
USER jenkins
That works fine; however, when I run the image I have two problems:
It seems that the plugins I tried to install manually didn't get persisted for some reason.
I get prompted with the list of plugins to install, but I don't want to install anything else.
Am I missing anything configuring the Dockerfile or is it that what I want to achieve is simply not possible?
Without seeing the contents of install-plugins.sh, I can't comment as to why the plugins aren't persisting. It is most likely caused by an incorrect installation destination; persistence shouldn't be an issue at this stage, since the plugin installation is built into the image itself.
As for the latter issue, you should be able to skip the installation wizard altogether by adding the line ENV JAVA_OPTS=-Djenkins.install.runSetupWizard=false to your Dockerfile. Please note that this can be a security risk if the Jenkins image is exposed to the world at large, since this option disables the need for authentication.
EDIT: The default plugin directory for the Docker image is /var/jenkins_home/plugins
EDIT 2: According to the README on the Jenkins Docker repo, adding the line RUN echo 2.0 > /usr/share/jenkins/ref/jenkins.install.UpgradeWizard.state should accomplish the same thing.
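Putting those pieces together with the question's Dockerfile, the 2017-era approach sketched above would look roughly like this (same base image and plugin as in the question):
FROM jenkins:2.60.3
# Skip the interactive setup wizard (a security risk if the instance is publicly exposed)
ENV JAVA_OPTS=-Djenkins.install.runSetupWizard=false
# Mark the upgrade wizard as already completed
RUN echo 2.0 > /usr/share/jenkins/ref/jenkins.install.UpgradeWizard.state
RUN install-plugins.sh bitbucket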
Things have changed since 2017, when the last answer was posted, and it no longer works. The current way is shown in the following Dockerfile snippet:
# Prevent setup wizard from running.
# WARNING: Jenkins will start with security disabled, without any password.
ENV JENKINS_OPTS="-Djenkins.install.runSetupWizard=false"
# plugins.txt must contain the list of plugins to be installed
# (One plugin per line, e.g. sidebar-link:1.11.0)
COPY plugins.txt /tmp/plugins.txt
RUN /usr/local/bin/install-plugins.sh < /tmp/plugins.txt
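A plugins.txt for the question's single plugin could look like this (the version numbers are placeholders; check the Jenkins update center for current releases):
bitbucket:1.1.11
sidebar-link:1.11.0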