Upgrade a package in containerised application - docker

I would like to update a Yarn package inside package.json (Next.js project) within a Docker container. I saw that inside the Dockerfile we run yarn install --frozen-lockfile.
For this project there is also a docker-compose setup with other containers.
How would you do that? My first try was to run docker compose up and then yarn upgrade 'package', but I got errors unrelated to the package, as if I were running a fresh yarn install in my environment.

When you are upgrading anything, it is recommended NOT to do it on the live/running container. Instead, update what you want to change in your source code and Dockerfile, build a NEW version of the image, and deploy the new image over the old one, with docker-compose in your case.
That is what best practice strives towards, so if it is possible, it is recommended you go this route.
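As a rough sketch, assuming your Next.js app is defined as a service named web in docker-compose.yml (the service name and package name below are placeholders), the flow would look like:
# upgrade locally so package.json and yarn.lock are updated
yarn upgrade some-package
# rebuild the image with the new lockfile and replace the running container
docker compose build web
docker compose up -d web
Because the Dockerfile runs yarn install --frozen-lockfile, the rebuilt image will pick up exactly the versions recorded in the updated yarn.lock.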

Related

Can i modify the docker image provided by playwright to add custom node version

I was using Playwright to test my frontend application at work; however, we use Node version 16.15.0 specifically. While looking at the Dockerfile provided by Playwright, I see that they install the latest Node version, which is causing issues when running in CircleCI.
Does anyone have any ideas for a workaround? Would I have to create a custom docker image using Playwright's image to tackle this and install the correct node version?
Any help would be appreciated!
https://github.com/microsoft/playwright/blob/main/utils/docker/Dockerfile.focal
https://playwright.dev/docs/docker
Yes, that would be the way to go. The cleanest approach is to patch Dockerfile.focal with an ARG instruction; you will then be able to pass a value for this argument with your docker build command, which also makes maintenance easier. Edit Dockerfile.focal and add this variable as:
# leave this blank or specify a default value.
ARG NODE_VERSION=
Then you can set the value for this argument in docker build. The docker build command in the script in the repo changes as follows:
docker build --platform "${PLATFORM}" -t "$3" -f "Dockerfile.$2" --build-arg NODE_VERSION=16.15.0 .
This injects the variable into the image when it is being built so you get the correct version. It also makes maintenance easier, since you will not have to change the Dockerfile every time you upgrade the version of NodeJS in your image.
Now, finally, edit the Node install step in Dockerfile.focal so that it uses the version variable. Change the line that installs nodejs to something like:
apt-get install -y nodejs="${NODE_VERSION}" && \
You can use apt search nodejs after running the setup script to verify the correct version of the package.
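Put together, a minimal sketch of the patched section of Dockerfile.focal could look like this (the surrounding install commands are simplified here, and the exact apt version string may differ depending on the Node repository the image uses):
# leave blank or give a default; override with --build-arg NODE_VERSION=16.15.0
ARG NODE_VERSION=
RUN apt-get update && \
    apt-get install -y nodejs="${NODE_VERSION}" && \
    rm -rf /var/lib/apt/lists/*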

How to modify 'docker-compose-local.yml' for it to install all requirements (needed to run Amazon MWAA environment locally)

I am running aws-mwaa-local-runner in order to run a local Apache Airflow environment (in Docker for Windows).
However, after creating the container using ./mwaa-local-env start, I repeatedly get a Broken DAG ModuleNotFoundError. My /docker/config/requirements.txt file is based on the example (see here), although it has a few more requirements that I need in it. When I compare my /docker/config/requirements.txt file with the output of the pip freeze command run in the Airflow container, I can see that the requirements I need for my DAGs are missing.
I tried to pip install my other requirements in the Airflow container, but to no avail.
Is there a way to modify the docker-compose-local.yml file so that it installs everything in my requirements.txt when creating the container (i.e. when running Airflow)?
Is there maybe something I might be missing? Any help or suggestion would be greatly appreciated.
Look at this: https://github.com/aws/aws-mwaa-local-runner . You should install the requirements file located in dags locally:
pip install -r dags/requirements.txt
Add your extra requirements to dags/requirements.txt, not docker/config/requirements.txt. The former is installed every time you start the service, but the latter is only installed when you build or rebuild the image.
Additionally, keeping your added requirements separate is important because you will need to upload the list to your MWAA environment.
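For example (the package name below is just a placeholder), appending a dependency to dags/requirements.txt and restarting the local runner is enough, since that file is installed on every start:
echo "my-extra-package" >> dags/requirements.txt
./mwaa-local-env start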

yarn workspaces and docker

I am trying to use yarn workspaces and then put my application into a Docker image.
The folder structure looks like this:
root/
  Dockerfile
  node_modules/
    libA --> ../libA   (symlink)
  libA/
    ...
  app/
    ...
Unfortunately Docker does not support symbolic links in the build context, so it is not possible to copy the node_modules folder in the root directory into a Docker image, even though the Dockerfile is in the root, as in my case.
One thing I could do would be to exclude the symlinks with .dockerignore and then copy the real directory to the image.
Another idea - which I would prefer - would be to have a tool that replaces the symlinks with the actual contents of the symlink. Do you know if there is such a tool (preferably a Javascript package)?
Thanks
Yarn is used for dependency management, and should be configured to run within the Docker container to install the necessary dependencies, rather than copying them from your local machine.
The major benefit of Docker is that it allows you to recreate your development environment without worrying about the machine that it is running on - the same thing applies to Yarn, by running yarn install it installs the right versions for the relevant architecture of the machine your Docker image is built upon.
In your Dockerfile include the following after configuring your work directory:
RUN yarn install
Then you should be all sorted!
Another thing you should do is include the node_modules directory in your .gitignore and .dockerignore files so it is never included when distributing your code.
TL;DR: Don't copy node_modules directory from local machine, include RUN yarn install in Dockerfile
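A minimal Dockerfile sketch along these lines, based on the folder structure above (the base image and file names are assumptions):
FROM node:16-alpine
WORKDIR /usr/src/app
# copy the workspace manifests first so the install layer can be cached
COPY package.json yarn.lock ./
COPY libA/package.json libA/
COPY app/package.json app/
RUN yarn install
# copy the rest of the source; node_modules is excluded via .dockerignore
COPY . .
Yarn recreates the workspace symlinks inside the image during yarn install, so nothing from the local node_modules folder is needed.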

Docker-compose for local development, installing dependencies

I am trying to set up a docker-compose architecture for local development and production, and I can't figure out when in the container's life is the best time to install library dependencies. At the same time, I am not sure whether these should be placed in the container or in an external volume.
All my code is mounted in external volumes, so that changes are immediately taken into account without rebuilding the containers, but I am not sure about libraries that need to be installed by pip (I am running a Python backend) and npm/yarn (for the webpack front-end).
Placing requirements.txt and package.json into the containers and running pip install and yarn install in the container build process means that I have to rebuild the container any time dependencies change - that is too much overhead.
Putting them in an external volume and running pip install and yarn install as part of the command of each container when it is started seems to solve the issue.
The build process of each container then contains only platform dependencies (e.g. installing Python, webpack or other platform tools), and libraries are installed after startup (with the CMD directive).
Is this the correct approach? I have seen a lot of examples doing exactly the opposite and running npm install in the build process of the container - but I don't see any advantage to that, am I missing something?
Installing dependencies is usually part of the build process. Mounting code is a good trick when developing, in order to get changes reflected directly.
Concerning adding requirements.txt or package.json: installing dependencies takes time, and to deal with that you need to take advantage of Docker layer caching. In particular, you want to avoid cache invalidation.
For pip I suggest the following during the development phase: for dependencies that you are unlikely to change, install these in a separate RUN instruction. Your Dockerfile will look something like:
FROM ..
RUN pip install package1 package2 package3 ...
ADD requirements.txt requirements.txt
RUN pip install -r requirements.txt
...
Keep only dependencies that might be changed in requirements.txt. Once you are done developing, add the packages back to the requirements.txt and build using the requirements file.
A similar approach would be adding two requirements files, and at the end combining them.
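A rough sketch of that two-file variant (the base image and file names here are just placeholders):
FROM python:3.11-slim
# stable dependencies: this layer stays cached across most rebuilds
ADD requirements-base.txt requirements-base.txt
RUN pip install -r requirements-base.txt
# frequently changing dependencies: only this layer is invalidated
ADD requirements-dev.txt requirements-dev.txt
RUN pip install -r requirements-dev.txt
Once development settles down, the two files can be merged back into a single requirements.txt.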

building go package using docker

I am trying to dockerize the go package that I found here...
https://github.com/siddontang/go-mysql-elasticsearch
A Docker image is much more convenient than installing Go on all the servers, but the following Dockerfile is not working.
FROM golang:1.6-onbuild
RUN go get github.com/siddontang/go-mysql-elasticsearch
RUN cd $GOPATH/src/github.com/siddontang/go-mysql-elasticsearch
RUN make
RUN ./bin/go-mysql-elasticsearch -config=./etc/river.toml
How do I build a go package directly from github using a concise dockerfile?
Update
https://hub.docker.com/r/eaglechen/go-mysql-elasticsearch/
I found the exact dockerfile that would do this. But the docker command mentioned on that page does not work. It does not start the go package nor does it start the container.
It depends on what you mean by "not working", but RUN ./bin/... means RUN from the current working directory (/go/src/app in golang/1.6/onbuild/Dockerfile).
And go build in Makefile would put the binary in
$GOPATH/src/github.com/siddontang/go-mysql-elasticsearch/bin/...
So you need to add to your Dockerfile:
WORKDIR $GOPATH/src/github.com/siddontang/go-mysql-elasticsearch
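Put together, a corrected sketch of the Dockerfile might look roughly like the following. Two assumptions here: the plain golang:1.6 image is used instead of the onbuild variant (whose ONBUILD triggers expect application source in the build context), and the final RUN is replaced with CMD so the binary starts when the container runs rather than at build time.
FROM golang:1.6
RUN go get github.com/siddontang/go-mysql-elasticsearch
WORKDIR $GOPATH/src/github.com/siddontang/go-mysql-elasticsearch
RUN make
# start the service when the container runs, not during the build
CMD ["./bin/go-mysql-elasticsearch", "-config=./etc/river.toml"]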
I guess this should do what I am looking for.
https://github.com/EagleChen/docker_go_mysql_elasticsearch
And I hope one day I will learn to use that little search box.

Resources