Q1: How does pipenv determine that Pipfile.lock is out of date?
Q2: If I don't manually modify the Pipfile, will Pipfile.lock ever be outdated?
Q3: Will Pipfile.lock always be updated after I install a package?
According to the documentation:
pipenv install foo==0.x
will install foo with the specified version and update both Pipfile and Pipfile.lock.
If foo has been written into both files, then what is the need for the pipenv lock command?
The documentation says it will:
Regenerate Pipfile.lock and update the dependencies inside it.
In what situations do we need to call the pipenv lock command?
Thanks for any clarification.
As I understand it, it can be used to update all your dependencies before entering production.
pipenv lock is used to create a Pipfile.lock, which declares all dependencies (and sub-dependencies) of your project, their latest available versions, and the current hashes for the downloaded files. This ensures repeatable, and most importantly deterministic, builds.
You can use pipenv lock to compile your dependencies on your development environment and deploy the compiled Pipfile.lock to all of your production environments for reproducible builds.
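A minimal sketch of that workflow (the two-machine split is an assumption; --ignore-pipfile tells pipenv install to use only the lock file):

# On the development machine: resolve all dependencies and pin exact versions and hashes
pipenv lock
# On each production machine: install exactly what Pipfile.lock pins,
# ignoring the looser constraints in Pipfile
pipenv install --ignore-pipfile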
We are using Artifactory for retrieving npm packages, and we need an _auth token in .npmrc (the npm config) to fetch the npm dependencies our project requires.
I have read articles saying that npm install should come early in the Dockerfile so that its layer can be cached and we don't have to re-download dependencies every time we rebuild the image after a small change.
Also, it is bad practice to put any _auth tokens in a Dockerfile as part of the build.
So what is the best practice for running npm install in a Dockerfile?
I upvoted because the question is not a bad one, even if the wording isn't great.
Essentially I believe the answer is that you need to copy the .npmrc from your environment into the Docker image, like:
COPY .npmrc /usr/src/app/.npmrc
This is however scary because those are your credentials.
The npm docs recommend that you pass your auth token into the .npmrc file as an environment variable. That could also work in this case:
https://docs.npmjs.com/docker-and-private-modules
I believe that should be fine and keep your creds safe.
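A sketch of the env-variable approach from those docs, adapted to a hypothetical Artifactory registry URL (replace it with your own); the token itself never appears in the Dockerfile or in source control:

# .npmrc (committed): npm expands ${NPM_TOKEN} at install time
//mycompany.jfrog.io/artifactory/api/npm/npm-repo/:_authToken=${NPM_TOKEN}

# Dockerfile
ARG NPM_TOKEN
COPY .npmrc /usr/src/app/.npmrc
RUN npm install

# build command
docker build --build-arg NPM_TOKEN="${NPM_TOKEN}" .

One caveat: build args remain visible in the image history (docker history), which is why the linked npm page pairs this approach with a multi-stage build so the final stage never sees the token.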
If you do not want to keep your credentials in your container, you can always remove the file after the npm install step is complete. Note that each RUN instruction creates a new image layer, so deleting the file in a separate step still leaves it recoverable from the earlier layer; do both in a single step instead:
RUN npm install && rm /usr/src/app/.npmrc
And then you can proceed with the rest of the build.
Since I added go-sqlite3 as a dependency to my Go project, my docker build time started oscillating around 1 minute.
I tried to optimize the build by caching dependencies with go mod download, but it didn't reduce the overall build time.
Then I found out that:
go-sqlite3 is a CGO enabled package; you are required to set the environment variable CGO_ENABLED=1 and have a gcc compiler present within your path.
So I ran go install github.com/mattn/go-sqlite3 as an extra step, and it reduced the build time to around 17s.
I also tried vendoring, but it didn't help reduce the build time; explicitly installing the library was always necessary to achieve that.
## Build
FROM golang:1.16-buster AS build
WORKDIR /app
# Download dependencies
COPY go.mod .
COPY go.sum .
RUN go mod download
RUN go install github.com/mattn/go-sqlite3 # this reduced build time to around 17s
COPY . .
RUN go build -o /myapp
But somehow I am still not happy with this solution.
I don't get why adding this package makes my build so long and why I need to explicitly install it in order to avoid such long build times.
Also, wouldn't it be better to install all packages after downloading them?
Do you see any obvious way of improving my current docker build?
The fact of the matter is that the C-based SQLite package just takes a long time to build. I use it myself currently, and yes, it's painful every time. I have also been unhappy with it and have been looking for alternatives. I have been busy with other projects, but I did find the package QL [1], which you can build without C [2]:
go build -tags purego
or if you just need read only, you can try SQLittle [3].
[1] https://pkg.go.dev/modernc.org/ql
[2] https://pkg.go.dev/modernc.org/ql#hdr-Building_non_CGO_QL
[3] https://github.com/alicebob/sqlittle
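For completeness, a sketch of what the build stage could look like after switching to a pure-Go package like QL (assuming go-sqlite3 is no longer imported anywhere); with no cgo in the dependency tree, CGO_ENABLED can be turned off and the gcc step disappears entirely:

FROM golang:1.16-buster AS build
WORKDIR /app
# Download dependencies (cached as long as go.mod/go.sum are unchanged)
COPY go.mod go.sum ./
RUN go mod download
COPY . .
# -tags purego is the build tag QL's docs describe for building without C
RUN CGO_ENABLED=0 go build -tags purego -o /myapp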
I am using npm version 7.18.1. From what I see in the documentation, the --only=development option no longer exists.
I'm creating a multi-stage Docker image. What I need is an image for the development stage in which only the devDependencies are installed. In the next stage, I'm going to create another image with the production packages installed in it.
How do I do this without the --only=development option?
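A sketch of the multi-stage layout described above (the node:16 base, stage names, build script, and dist/ output path are all assumptions about your project): since, as the question notes, --only=development is gone, the development stage here simply installs everything, while the production stage uses --production to omit devDependencies:

FROM node:16 AS development
WORKDIR /app
COPY package*.json ./
RUN npm install                  # installs dependencies and devDependencies
COPY . .
RUN npm run build                # assumes a "build" script in package.json

FROM node:16 AS production
WORKDIR /app
COPY package*.json ./
RUN npm install --production     # omits devDependencies
COPY --from=development /app/dist ./dist
CMD ["node", "dist/index.js"]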
I am new to Docker. We have started working on a Dockerfile, but I am stuck on how to maintain different versions of the software our web app depends on.
Suppose our web app uses Crystal Reports version 1.X at runtime.
In the future, we may want to update Crystal Reports to version 1.2.X.
In these scenarios, how should the Dockerfile and these dependent pieces of software be maintained (although we can update the version directly in the Dockerfile)?
Should the Dockerfile be parameterized for the versions?
What would be the best approach?
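For reference, parameterizing a Dockerfile for a version usually means a build argument, as in this hypothetical sketch (the installer command is a placeholder, since how Crystal Reports is actually installed depends on your base image):

ARG CRYSTAL_REPORTS_VERSION=1.X
# hypothetical installer invocation; substitute your real installation steps
RUN install-crystal-reports --version "${CRYSTAL_REPORTS_VERSION}"

# then pick the version at build time:
docker build --build-arg CRYSTAL_REPORTS_VERSION=1.2.X .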
Use your application language's native package dependency system (a Ruby Gemfile, Python Pipfile or requirements.txt, Node package.json, Scala build.sbt, ...). In a non-Docker development environment, maintain these dependencies the same way you would without Docker. When you go to translate this into a Dockerfile, copy these description files into the image and install them.
A near-universal JavaScript Dockerfile, for example, would look like:
FROM node:12
WORKDIR /app
# Copy in and install dependencies
COPY package.json yarn.lock ./
RUN yarn install
# Copy in the rest of the application; build and set up to run it
COPY . .
RUN yarn build
EXPOSE 3000
CMD yarn start
If a dependency changed, you'd run a command like yarn up in your non-Docker development environment to update the package.json and yarn.lock files; the next time you ran docker build, those updated files would update the dependency in the built image.
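As a concrete sketch of that loop (lodash stands in for whatever dependency changed; on classic Yarn 1.x the command is yarn upgrade rather than yarn up):

yarn up lodash            # rewrites package.json and yarn.lock in your working tree
docker build -t myapp .   # the changed lock file invalidates the cached "yarn install" layer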