Updating Strapi Docker package.json - docker

I'm in the process of learning about Docker by trying to containerize a Strapi CMS. The default Docker image (https://github.com/strapi/strapi-docker) works well enough as a starting point, but I'm trying to add a couple of packages to the Strapi instance for my needs (adding Azure storage account support using https://www.npmjs.com/package/strapi-provider-upload-azure-storage). As I'm new to Docker, I'm having a hard time figuring out how to make the container install that package as part of the Docker run process.
I see that the strapi/base image Dockerfile contains this line referencing a package.json file:
COPY ./package.json ./
I'm assuming that's where I would add a reference to the packages I want to install so that they're later installed by npm, but I'm not sure where that package.json file is located, let alone how to modify it.
Any help on figuring out how to install that package during the Docker run process is greatly appreciated!

I figured out that strapi-docker uses a script to build images (bin/build.js), not just the Dockerfiles in the repo. I also discovered that docker-entrypoint.sh is where the dependency installation happens, so I added a couple of npm install statements after the check for the node_modules directory. Doing this allowed me to successfully add the desired packages to my Docker container.
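For anyone doing the same, the change looked roughly like this (the node_modules check is paraphrased from the entrypoint script, and the second package name is just a placeholder for whatever else you need):

# docker-entrypoint.sh (sketch): existing dependency check, paraphrased
if [ ! -d "node_modules" ]; then
  npm install
fi
# added: install the extra packages this project needs
npm install strapi-provider-upload-azure-storage
npm install <your-other-package>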

I followed some of the advice from the Docker team here:
https://www.docker.com/blog/keep-nodejs-rockin-in-docker/
After performing the initial setup with docker-compose and the strapi/strapi image, I was able to install additional dependencies directly inside the container using docker-compose run <service name> yarn add <package>.
I opted for this route since I was having trouble installing the sharp library, which has different dependencies/binaries for Linux and macOS. This approach worked well for me, but the downside is that you can't mount your node_modules folder as a volume, and it may take a little longer to install packages in the container.
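Concretely, for the Azure storage provider mentioned in the question, and assuming the service in docker-compose.yml is named strapi, that looks like:

docker-compose run strapi yarn add strapi-provider-upload-azure-storage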

Related

Download and install .sh from Dockerfile

I have a Docker image that I created from a custom Dockerfile.
Now I want to install another program on it that is installed by downloading and then running a .sh file.
I already have curl in the Dockerfile, and while I know how to download a file with curl on my own system, I'm not sure whether downloading and installing work the same way inside Docker, i.e. from a Dockerfile.
Do I need to download it to a specific directory, and do I have to delete it afterwards?
This looks similar to Run a script in Dockerfile. You can install the package using a bootstrap script and include a cleanup step if needed.
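A minimal Dockerfile sketch of that pattern (the URL and file name are placeholders); downloading, running, and removing the installer in a single RUN keeps the script out of the final image layers:

# download the installer, run it, and clean it up in one layer
RUN curl -fsSL -o /tmp/install.sh https://example.com/install.sh \
    && sh /tmp/install.sh \
    && rm -f /tmp/install.sh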
Inside a Docker container things work mostly the same as in your operating system, but some images don't include all of the tools you can use on your host. For example, nano, curl, etc. might not be available at first, depending on the image. Apart from that, you can use your container just like your operating system; there is no difference.
However, if you want to control the workflow, the downloaded files, etc., you can mount a volume between a directory on your host and a directory inside the Docker container. After that, when you change, add, or remove a file in that directory, it will change in your container as well.

Install Python Wheel when starting Docker Containers with docker-compose

We are currently developing a Python package, which is built via an Azure DevOps pipeline, and the resulting package is stored in Azure Artifacts.
In production we install that package onto some Databricks clusters directly from Azure Artifacts. The benefit is that whenever a new version of the package is available, it gets installed when a cluster starts.
For development, I want to do something similar in a local Spark environment with Docker containers. We have already set up Docker containers which work fine, except for one thing.
When I run my docker-compose command, I want to install the latest version of my package from Azure Artifacts.
Because we need access tokens to get this package in our setup, I can't put those tokens in a Git repo. Therefore I need a way to provide the token safely to a docker-compose command and install the package at startup.
Also, if I use the Dockerfile for this, I have to rebuild the Docker images every time a new version of our package is released.
So, in my mind, these are the tasks the user has to do (assuming the Docker images are already built):
Have a local file where a token is stored
Use my docker-compose command to start up a local environment (by the way: with a Spark master, workers, and a Jupyter notebook)
Automatically: get the token from the local file, provide it to a startup script in the Docker container, and install the package from Azure Artifacts.
As I am no real expert on Docker, I found some topics about ENTRYPOINT and CMD, but I didn't understand them or what exactly to do.
Does anyone have a hint on how we could easily implement the logic above?
PS: For testing, I tried to install the package via command: in docker-compose with a plaintext token; the installation worked, but the Jupyter notebook was not accessible anymore :-(
Hopefully somebody has an idea or a better approach for what I am aiming to do.
Best regards
You can use build-args:
docker-compose build --build-arg ARTIFACTORY_USERNAME=<your_username> --build-arg ARTIFACTORY_PASSWORD=<your_password> <service_to_build>
then your Dockerfile might look like:
FROM <my_base_image>
ARG ARTIFACTORY_USERNAME
ARG ARTIFACTORY_PASSWORD
RUN pip install <your_package_name> --extra-index-url https://$ARTIFACTORY_USERNAME:$ARTIFACTORY_PASSWORD@pkgs.dev.azure.com/<org>/_packaging/<your_feed_name>/pypi/simple/
...
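To keep the credentials off the command line, a docker-compose.yml along these lines can pick them up from the environment; docker-compose also substitutes ${...} values from a .env file placed next to the compose file, which you can keep out of Git (the service name and build context here are placeholders):

version: "3"
services:
  jupyter:
    build:
      context: .
      args:
        ARTIFACTORY_USERNAME: ${ARTIFACTORY_USERNAME}
        ARTIFACTORY_PASSWORD: ${ARTIFACTORY_PASSWORD}

The .env file then just contains ARTIFACTORY_USERNAME=... and ARTIFACTORY_PASSWORD=... lines.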

How to create a dependency docker container separate from an official docker image?

I'm struggling to install some Python dependencies separately from an official Docker image (odoo:12.0). I'm trying to learn Docker, but I'm not sure what to do in this case. I've tried:
Rebuilding the image, adding the dependencies in the Dockerfile (somehow a bunch of dependencies fail to install via this method; see the sketch below)
Manually entering the Docker container and downloading the dependencies there (Odoo doesn't recognize the dependencies and tells me the dependency is missing)
I've read that one could make a sort of separate volume with those extra dependencies, but I haven't achieved much. Any ideas on how to proceed?
Cheers
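For reference, the Dockerfile-rebuild approach mentioned above usually looks roughly like this (the package name is a placeholder, and depending on the base image you may need to install pip3 first):

FROM odoo:12.0
USER root
# install the extra Python dependencies the custom modules need
RUN pip3 install <your_dependency>
# switch back to the image's default user
USER odoo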

Kubernetes Init Containers pip install

I am not sure I understand Kubernetes init containers properly. What I want to do is run an initialization step on the pod so that it pip installs some additional libraries that are not in my app container image. Specifically, I want to install the Azure storage queue library so that I can use it with the standard TensorFlow image.
I set up my init container with the command "pip install azure-storage-queue" and that ran fine; however, my app container tells me "No module named azure".
Is this not how an init container can be used?
NOTE: I realize I could create a new image with all my prerequisites installed; however, this is just for development purposes.
That's not really how init containers work... Init containers are meant to initialize the pod, and anything they install into their own filesystem isn't shared with the containers that later run in that pod (unless you explicitly use a shared volume).
The best solution is to create a new container image including the Python modules you need.
An alternative is to use a command to run in your container that first installs the modules using pip and later runs the script that needs them, that way you can avoid creating a new container image.
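A minimal sketch of that second option, assuming a hypothetical /app/main.py script inside the standard TensorFlow image:

apiVersion: v1
kind: Pod
metadata:
  name: tf-app
spec:
  containers:
    - name: app
      image: tensorflow/tensorflow:latest
      # install the missing module at startup, then run the (hypothetical) script
      command: ["sh", "-c", "pip install azure-storage-queue && python /app/main.py"]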

Additional steps in Dockerfile

I have a Docker image which is a server for a web IDE (Jupyter notebook) for Haskell.
Each time I want to allow the usage of a library in the IDE, I have to go to the Dockerfile and add the install command into it, then rebuild the image.
Another drawback of this is that I have to fork the original image on GitHub, which doesn't let me contribute back to it.
I was thinking about writing another Dockerfile which pulls the base one with the FROM directive and then RUNs the commands to install the libraries. But, as they are in separate layers, the guest system does not find the Haskell package manager command.
TL;DR: I want to run stack install <library> (stack is like npm or pip, but for Haskell) from the Dockerfile, but I don't want to have a fork of the base image.
How could I solve this problem?
I was thinking about writing another Dockerfile which pulls the base one with the FROM directive and then RUNs the commands to install the libraries. But, as they are in separate layers, the guest system does not find the Haskell package manager command.
This is indeed the correct way to do this, and it should work. I'm not sure I understand the "layers" problem here - the commands executed by RUN should be running in an intermediate container that contains all of the layers from the base image and the previous RUN commands. (Ignoring the possibility of multi-stage builds, but these were added in 17.05 and did not exist when this question was posted.)
The only scenario I can see where stack might work in the running container but not in the Dockerfile RUN command would be if the $PATH variable isn't set correctly at that point. Check that variable, and make sure RUN is running as the correct user.
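In other words, a derived Dockerfile along these lines should work (the base image name and library are placeholders for whichever IHaskell/Jupyter image and package you are using):

FROM <base-ihaskell-image>
# install additional Haskell libraries on top of the base image
RUN stack install <library>

If that still fails with "command not found", it's worth running something like docker run --rm <base-ihaskell-image> which stack to see whether stack is on the PATH for the image's default user.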
