We are using Artifactory for retrieving npm packages, and we need an _auth token in .npmrc (the npm config file) to fetch the npm dependencies our project requires.
I have read articles saying that npm install should be an early step in the Dockerfile so that its layer can be cached and we don't have to download dependencies every time we rebuild the image after a small change.
Also, it is bad practice to put any _auth tokens in the Dockerfile as part of the build.
So what is the best practice for running npm install in a Dockerfile?
I upvoted because the question is not a bad one, even if the wording could be better.
Essentially, I believe the answer is that you need to copy the .npmrc from your environment into the Docker image, like:
COPY .npmrc /usr/src/app/.npmrc
This is, however, scary, because those are your credentials.
The npm docs recommend that you pass your auth token into the .npmrc file as an environment variable, which could also work in this case:
https://docs.npmjs.com/docker-and-private-modules
I believe that should be fine and keep your creds safe.
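A minimal sketch of that env-variable approach (the Artifactory registry URL here is a placeholder; swap in your own). The committed .npmrc references the token as a variable instead of containing it:

//mycompany.jfrog.io/artifactory/api/npm/npm-repo/:_authToken=${NPM_TOKEN}

And the Dockerfile receives the token as a build argument, e.g. docker build --build-arg NPM_TOKEN=xxxx .:

FROM node:12
WORKDIR /usr/src/app
# NPM_TOKEN is injected at build time and expanded by npm when it reads .npmrc
ARG NPM_TOKEN
COPY .npmrc package.json ./
RUN npm install
COPY . .
CMD ["node", "index.js"]

One caveat: build arguments can still be recovered from the image metadata with docker history, so this keeps the token out of the Dockerfile and the repo, but not entirely out of the built image.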
If you do not want to keep your credentials in your container, you can remove the file once 'npm install' is complete. Note that the removal has to happen in the same RUN step as the install; a separate RUN rm only hides the file in a newer layer, while the layer that ran npm install still contains it:
RUN npm install && rm -f /usr/src/app/.npmrc
And then you can proceed with the rest of the build.
I'm experimenting with Docker. I have a simple Dockerfile, which includes Java and Node:
https://github.com/Quafadas/scala-mill
https://github.com/Quafadas/scala-mill/blob/master/Dockerfile
It's published to Docker Hub. Now I'd like to build an application downstream:
FROM quafadas/scala-mill:latest
RUN java -version
# This doesn't work :-(
RUN npm --version
The command RUN npm --version works in the base image linked above, but apparently not when I'm building on top of it using the FROM directive. Can anyone help me understand why?
/bin/sh: npm: command not found
The command '/bin/sh -c npm --version' returned a non-zero code: 127
There seem to have been a few recent commits to the repo that have apparently fixed the issue! I was able to build the Dockerfile and get the npm version without any problems.
In case you need additional tools (such as npm) on a base image that doesn't provide them, use a multi-stage Dockerfile with multiple FROM commands to gather the necessary dependencies into a single final image.
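A rough sketch of that idea, assuming a hypothetical base image your-base:latest and copying Node.js out of the official image (the paths below match the official node images, but verify them against the tag you actually use):

FROM node:18-slim AS node_source

FROM your-base:latest
# copy the node binary plus the bundled npm that lives in lib/node_modules
COPY --from=node_source /usr/local/bin/node /usr/local/bin/
COPY --from=node_source /usr/local/lib/node_modules /usr/local/lib/node_modules
# recreate the npm/npx symlinks that the official image normally provides
RUN ln -s /usr/local/lib/node_modules/npm/bin/npm-cli.js /usr/local/bin/npm \
 && ln -s /usr/local/lib/node_modules/npm/bin/npx-cli.js /usr/local/bin/npx
RUN npm --version

This only works when both images share a compatible libc, so treat it as a starting point rather than a drop-in recipe.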
I have a (private) repository on GitHub with my project, and integrated GitHub Actions that build a Docker image and push it directly to GHCR.
But I have a problem with storing secrets and passing them into the image build. My project has the following structure:
.git
.env (gitignored)
config (gitignored): config files (jsons)
src (in git): other folders and files
As you can see, I have a .env file and a config folder. Both of them hold data or files that are not in the repo but are required in the built environment.
So I'd like to ask: is there any option not to push all these files to my main remote repo (even though it's private) but to pull them in during the build stage within GitHub Actions?
It's not a problem to publish the env & configs somewhere else, privately, in a separate private remote repo. The point is not to push these files to the main private repo, because the RBAC logic doesn't let me restrict access to selected files.
P.S. Any advice on using GitLab CI or Bitbucket instead, if you know how to solve the problem there, is also appreciated. Don't be shy to share it!
Since this question seems to be getting some attention, here is the answer I found.
The example shown below is based on a Node.js and NestJS app and pulls a private remote repo from GitHub.
In my case, the scenario was about pulling config files and other secrets from a separate private repo and merging them with the project during the container build. This option isn't about the security of secrets inside the container itself; it's about making one part of the project (the repo with the business logic) available to other developers while keeping the credentials and configs in a separate private repo with its own access permissions, so they never show up in the development repo.
You'll need your personal access token (PAT); on GitHub you can find it under Settings → Developer settings → Personal access tokens.
As for GitLab, the flow is the same: you'll need to pass a token in from the settings somewhere. Also, a piece of advice: create not just one but two Dockerfiles before testing this.
Why HTTPS instead of SSH? With SSH you'd also need to pass SSH keys into the build and configure the client correctly, which is more complicated because of CRLF/LF line-ending issues, the crypto algorithms supported by SSH, and so on.
# it could be Go, PHP, whatever
FROM node:17
# you will need your GitHub token from settings
# we will pass it to build env via GitHub action
ARG CR_PAT
ENV CR_PAT=$CR_PAT
# declare the username/repo used below so they can be passed in as build args too
ARG github_username
ARG repo_name
# update OS in build container
RUN apt-get update && apt-get install -y git
# workdir app, it is a cd (directory)
WORKDIR /usr/src/app
# installing nest library
RUN npm install -g @nestjs/cli
# config git with credentials
# we will use https since it is much easier to config instead of ssh
RUN git config --global url."https://${github_username}:${CR_PAT}@github.com/".insteadOf "https://github.com/"
# cloning the repo to WORKDIR
RUN git clone https://github.com/${github_username}/${repo_name}.git
# we move all files from pulled repo to root of WORKDIR
# including files named with dot at the beginning (like .env)
RUN mv ${repo_name}/* ${repo_name}/.[^.]* . && rmdir ${repo_name}/
# node.js stuff
COPY package.json ./
RUN yarn install
COPY . .
RUN nest build app
# adjust the entrypoint path to wherever your nest build puts its output
CMD ["node", "dist/main.js"]
As a result, you'll get a complete container image with your code merged with the files and code from the separate private repo we pulled from.
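For completeness, here is a sketch of how the build args reach the Dockerfile from the GitHub action (the secret name CR_PAT and the user/repo values are assumptions; adjust them to your setup):

# inside a workflow run step, with CR_PAT stored as a repository secret
docker build \
  --build-arg CR_PAT=${{ secrets.CR_PAT }} \
  --build-arg github_username=your-github-user \
  --build-arg repo_name=your-private-config-repo \
  -t ghcr.io/your-github-user/your-image:latest .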
I was wondering if someone could help me with my CI/CD configuration for a multi-JS project setup.
Project js-core has some core JS libraries, and project vue-core has some reusable Vue components. Both are published as npm packages to the js-core project's package registry (so that we can use a project deploy token with npm publish, as opposed to a group deploy token, where you have to script the package creation and push directly to the API).
vue-core depends on js-core, so it needs access to the js-core project's npm package registry to download it inside the Docker CI/CD instance.
However, GitLab does not allow me to override the package registry's URL. According to some Google research, setting the values with yarn config set @myorg:registry <gitlab-url> / yarn config set //gitlab.com/api/v4/... <deploy token>, or npm config set @myorg:registry <gitlab-url> / npm config set //gitlab.com/api/v4/... <deploy token>, should work. However, I can see that it is still trying to download the packages from the vue-core package registry, even though the packages feature is disabled there.
This is the part of my .gitlab-ci.yml that runs before it fails:
image: node:17-alpine
stages:
- build
- test
before_script:
- yarn config set @myorg:registry https://gitlab.com/api/v4/projects/${NPM_PACKAGES_PROJECT_ID}/packages/npm/
- yarn config set //gitlab.com/api/v4/projects/${NPM_PACKAGES_PROJECT_ID}/packages/npm/:_authToken ${NPM_PACKAGES_PROJECT_TOKEN}
- yarn install
It fails at yarn install with:
[2/4] Fetching packages...
error An unexpected error occurred: "https://gitlab.com/api/v4/projects/<vue-core_project_id>/packages/npm/@myorg/js-core/-/@myorg/js-core-1.0.0.tgz: Request failed \"404 Not Found\"".
Where the project ID should be the value of <js-core_project_id>.
I have tried writing to all possible .npmrc file paths (~/.npmrc, ./.npmrc, ${CI_BUILD_DIR}/.npmrc), setting the values with npm config set and yarn config set, and deactivating the packages feature in the GitLab project itself. I have also not found any predefined environment variables that would override my configs (see https://docs.gitlab.com/ee/ci/variables/predefined_variables.html). @myorg is correct and matches the GitLab URL, as the actual value for "myorg" is a single word...
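For reference, the .npmrc those config commands are meant to produce would look like this sketch (both lines pointing at the js-core project ID):

@myorg:registry=https://gitlab.com/api/v4/projects/<js-core_project_id>/packages/npm/
//gitlab.com/api/v4/projects/<js-core_project_id>/packages/npm/:_authToken=${NPM_PACKAGES_PROJECT_TOKEN}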
I am pretty much out of ideas at this point, any help is appreciated!
Update 1: I don't think it is an npm / yarn issue; maybe it's a GitLab caching problem? If I execute npm config ls -l and yarn config list, the correct URLs are output in a format that works locally. I will attempt to clear the yarn cache (globally and locally) and pray that that works.
I am new to Docker. We have started working on a Dockerfile, but I am stuck on how to maintain different versions of the software our web app depends on.
Suppose our web app uses Crystal Reports version 1.X at runtime.
In the future, we may want to update Crystal Reports to version 1.2.X.
In these scenarios, how should the Dockerfile and these dependencies be maintained (even though the version itself can be updated directly in the Dockerfile)?
Should the Dockerfile be parametrised for the versions?
What would be the best approach?
Use your application language's native package dependency system (a Ruby Gemfile, Python Pipfile or requirements.txt, Node package.json, Scala build.sbt, ...). In a non-Docker development environment, maintain these dependencies the same way you would without Docker. When you go to translate this into a Dockerfile, copy these description files into the image and install them.
A near-universal JavaScript Dockerfile, for example, would look like:
FROM node:12
WORKDIR /app
# Copy in and install dependencies
COPY package.json yarn.lock ./
RUN yarn install
# Copy in the rest of the application; build and set up to run it
COPY . .
RUN yarn build
EXPOSE 3000
CMD yarn start
If a dependency changed, you'd use a command like yarn up to update the package.json and yarn.lock files in your non-Docker development environment, and the next time you ran docker build, those updated files would update the dependency in the built image.
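As a concrete sketch (the package name is hypothetical, and yarn up is Yarn 2+ syntax; classic Yarn 1 uses yarn upgrade):

# in your non-Docker development environment
yarn up some-reporting-library    # rewrites package.json and yarn.lock
git add package.json yarn.lock
# rebuilding the image picks up the newly pinned versions
docker build -t myapp .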
I have a lot of devDependencies in my npm project. npm install takes a few minutes the first time, and that's OK.
But since I'm integrating with a TFS build server, it only needs to run npm install once. After that, npm install just wastes time, because it takes 2-3 minutes merely to determine that the packages are already installed. It also seems to always reinstall packages installed with the -g global flag, even when they already exist.
How can I make it check whether the packages exist and, if so, skip npm install?
You can use npm-cache as an alternative if you use on-premise build agents for your builds.
It is useful for build processes that run [npm|bower|composer|jspm]
install every time as part of their build process. Since dependencies
don't change often, this often means slower build times. npm-cache
helps alleviate this problem by caching previously installed
dependencies on the build machine. npm-cache can be a drop-in
replacement for any build script that runs [npm|bower|composer|jspm]
install.
How it Works
When you run npm-cache install [npm|bower|jspm|composer], it first
looks for package.json, bower.json, or composer.json in the current
working directory depending on which dependency manager is requested.
It then calculates the MD5 hash of the configuration file and looks
for a file named <hash>.tar.gz in the cache directory ($HOME/.package_cache
by default). If the file does not exist, npm-cache uses the system's
installed dependency manager to install the dependencies. Once the
dependencies are installed, npm-cache tars the newly downloaded
dependencies and stores them in the cache directory. The next time
npm-cache runs and sees the same config file, it will find the tarball
in the cache directory and untar the dependencies in the current
working directory.
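A quick sketch of dropping it into a build step (assuming the agent allows global npm installs):

npm install -g npm-cache
# then, instead of `npm install` in the build script:
npm-cache install npm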
You can also try npm-install-missing.
However, if you are using the VSTS Hosted Build Agent, you cannot do this, since every time you queue a build with the Hosted Build Agent a clean agent is assigned to it. That means no dependency packages are installed on the agent, and you need to perform a complete npm install.