Building devcontainer with --ssh key for GitHub repository in build process fails on VS Code for ARM Mac - docker

We are trying to run a Python application in a dev container (devcontainer.json) with VS Code.
The Dockerfile installs Python packages from GitHub repositories with pip, which requires an SSH key. To build the image, we normally pass the key with the --ssh flag and then use it for pip inside the Dockerfile as follows:
RUN --mount=type=ssh,id=ssh_key python3.9 -m pip install --no-cache-dir -r pip-requirements.txt
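For reference, the full pattern looks roughly like this (a minimal sketch; the base image, the apt-get and known_hosts steps, and the file names are illustrative assumptions, not taken from our setup):

```dockerfile
# syntax=docker/dockerfile:1
FROM python:3.9-slim

# git and an ssh client must be present for pip's git+ssh requirements
# (illustrative; your base image may already ship them).
RUN apt-get update && apt-get -y install git openssh-client

# Pre-trust GitHub's host key so the clone does not stop at host-key
# verification inside the build.
RUN mkdir -p -m 0700 ~/.ssh && ssh-keyscan github.com >> ~/.ssh/known_hosts

COPY pip-requirements.txt .
# The key is mounted only for this single instruction and never ends up
# in an image layer.
RUN --mount=type=ssh,id=ssh_key python3.9 -m pip install --no-cache-dir -r pip-requirements.txt
```

Built with, e.g., docker buildx build --ssh ssh_key=$HOME/.ssh/id_rsa . — the id after --ssh must match the id= in the mount.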
We now want to build the same image through a devcontainer.json inside VS Code, and we have tried several approaches.
1. Passing the --ssh key using the build arg variable:
Since you cannot pass the --ssh flag directly, we tried a workaround:
"args": {"kek":"kek --platform=linux/amd64 --ssh ssh_key=/Users/user/.ssh/id_rsa"}
This produces an OK-looking build command that works in a normal terminal, but inside VS Code the key is not passed and the build fails (both on Windows and Mac).
2. Putting an initial build command into the initializeCommand parameter, followed by a simple build command that should use the cached results:
We run a first build inside the initializeCommand parameter:
"initializeCommand": "docker buildx build --platform=linux/amd64 --ssh ssh_key=/Users/user/.ssh/id_rsa ."
and then we have a second build in the regular parameter:
"build": {
"dockerfile": "../Dockerfile",
"context": "..",
"args": {"kek":"kek --platform=linux/amd64"}
}
This workaround runs flawlessly on Windows. On the ARM Mac, however, only the initializeCommand build stage succeeds; the actual build fails because it does not use the cached images. The crucial step that uses the --ssh key fails just as described above.
We have no idea why VS Code on the Mac ignores the already-created images. In a regular terminal, again, the second build command generated by VS Code works flawlessly.
The problem is reproducible on different ARM Macs, and on different repositories.
Here is the entire devcontainer:
{
"name": "Dockername",
"build": {
"dockerfile": "../Dockerfile",
"context": "..",
"args": {"kek":"kek --platform=linux/amd64"}
},
"initializeCommand": "docker buildx build --platform=linux/amd64 --ssh ssh_key=/Users/user/.ssh/id_rsa .",
"runArgs": ["--env-file", "configuration.env", "-t"],
"customizations": {
"vscode": {
"extensions": [
"ms-python.python"
]
}
}
}

So, we finally found a workaround:
We tag (-t) the image built in the initializeCommand:
"initializeCommand": "docker buildx build --platform=linux/amd64 --ssh ssh_key=/Users/user/.ssh/id_rsa -t dev-image ."
We create a new Dockerfile, Dockerfile-devcontainer, that contains only one line:
FROM --platform=linux/amd64 docker.io/library/dev-image:latest
In the build section of the devcontainer.json we then use that Dockerfile:
{
"name": "Docker",
"initializeCommand": "docker buildx build --platform=linux/amd64 --ssh ssh_key=/Users/user/.ssh/id_rsa -t dev-image:latest .",
"build": {
"dockerfile": "Dockerfile-devcontainer",
"context": "..",
"args": {"kek":"kek --platform=linux/amd64"}
},
"runArgs": ["--env-file", "configuration.env"],
"customizations": {
"vscode": {
"extensions": [
"ms-python.python"
]
}
}
}
This way we can use the SSH key and the Docker image created in the initializeCommand (tested on macOS and Windows).

Related

Docker image build fails: "protoc-gen-grpc-web: program not found or is not executable"

I inherited a project with several microservices running on Kubernetes. After cloning the repo and running the steps that the previous team outlined, I have an issue building one of the images that I need to deploy. The script for the build is:
cd graph_endpoint
cp ../../Protobufs/Graph_Endpoint/graph_endpoint.proto .
protoc -I. graph_endpoint.proto --js_out=import_style=commonjs:.
protoc -I. graph_endpoint.proto --grpc-web_out=import_style=commonjs,mode=grpcwebtext:.
export NODE_OPTIONS=--openssl-legacy-provider
npx webpack ./test.js --mode development
cp ./dist/graph_endpoint.js ../public/graph_endpoint.js
cd ..
docker build . -t $1/canvas-lti-frontend:v2
docker push $1/canvas-lti-frontend:v2
I'm getting an error from line 4:
protoc-gen-grpc-web: program not found or is not executable
--grpc-web_out: protoc-gen-grpc-web: Plugin failed with status code 1.
Any idea how to fix it? I have no prior experience with Docker.
Here's the Dockerfile:
FROM node:16
# Install app dependencies
COPY package*.json /frontend-app/
WORKDIR /frontend-app
RUN npm install
COPY server.js /frontend-app/
# Bundle app source
COPY public /frontend-app/public
COPY routes /frontend-app/routes
COPY controllers /frontend-app/controllers
WORKDIR /frontend-app
EXPOSE 3000
CMD [ "node", "server.js"]
and package.json:
{
"name": "frontend",
"version": "1.0.0",
"description": "The user-facing application for the Canvas LTI Student Climate Dashboard",
"main": "index.js",
"scripts": {
"test": "echo \"Error: no test specified\" && exit 1"
},
"author": "",
"license": "ISC",
"dependencies": {
"@okta/oidc-middleware": "^4.3.0",
"@okta/okta-signin-widget": "^5.14.0",
"express": "^4.18.2",
"express-session": "^1.17.2",
"vue": "^2.6.14"
},
"devDependencies": {
"nodemon": "^2.0.20",
"protoc-gen-grpc-web": "^1.4.1"
}
}
You don't have protoc-gen-grpc-web installed on the machine on which you're running the build script.
You can download the plugins from the grpc-web repo's releases page.
protoc has a plugin mechanism: it looks for its plugins in the PATH and expects the binaries to be named protoc-gen-{foo}.
When you reference a plugin from protoc, however, you use just {foo}, generally suffixed with _out and sometimes _opt, i.e. protoc ... --{foo}_out --{foo}_opt.
The plugin protoc-gen-grpc-web (once installed and on the host's PATH) is thus referenced as protoc ... --grpc-web_out=...
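That naming convention can be sketched with a tiny hypothetical helper (not part of protoc) that maps an _out flag back to the plugin binary protoc will look for:

```python
def plugin_binary(flag: str) -> str:
    """Map a protoc output flag to the plugin binary it resolves to.

    Hypothetical illustration of the convention:
    "--{foo}_out=..." -> binary "protoc-gen-{foo}" on the PATH.
    """
    name = flag.lstrip("-").split("_out")[0]
    return f"protoc-gen-{name}"

print(plugin_binary("--grpc-web_out=import_style=commonjs:."))  # protoc-gen-grpc-web
print(plugin_binary("--js_out=import_style=commonjs:."))        # protoc-gen-js
```

So the error in the question simply means the binary protoc-gen-grpc-web was not found on the PATH of the machine running the script.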

Deployment manifest for Azure IoT Edge module not configuring docker build correctly

I am currently trying to get an IoT Edge module built; however, the options I'm using in the deployment manifest aren't showing up in the docker build command generated when I right-click the manifest and select "Build IoT Edge Solution" in VS Code. In particular, I need one of the modules to use the GPU on the host machine as well as several volumes. I have the following under the module's section of the deployment.json:
"camera-module": {
# Other stuff
"settings": {
"image": "${MODULES.CameraModule}",
"createOptions": {
"HostConfig":{
"Binds":["volume/source:/volume/destination:rw",
"some/more:/volumes:rw",
],
"DeviceRequests": [
{
"Driver": "",
"Count": -1,
"DeviceIDs": null,
"Capabilities": [
[
"gpu"
]
],
"Options": {}
}
],
}
}
},
# More stuff
}
The GPU is necessary; otherwise I run into the following error when a command executes while building the container.
Step 4/10 : RUN /isaac-sim/python.sh -m pip install --upgrade pip
---> Running in 0b953f3d327f
running as root
Fatal Error: Can't find libGLX_nvidia.so.0...
Ensure running with NVIDIA runtime. (--gpus all) or (--runtime nvidia)
The command '/bin/sh -c /isaac-sim/python.sh -m pip install --upgrade pip' returned a non-zero code: 1
The actual docker build command generated by the steps described at the top of the post is:
docker build --rm -f "/path/to/modules/CameraModule/Dockerfile.amd64" -t blah.azurecr.io/cameramodule:0.0.1-amd64 "path/to/modules/CameraModule"
The lack of volumes in the above command is what makes me think the options aren't being applied, and I can't even build the container to actually test that right now. I could just modify the docker build command by hand, but I'd prefer to have the deployment manifest generate it correctly so that it's usable long term. How can I make sure that the GPU and volumes will be properly set up?

Use build-arg from docker to create json file

I have a docker build command which I'm running in a Jenkins execute-shell step:
docker build -f ./fastlane.dockerfile \
-t fastlane-test \
--build-arg PLAY_STORE_CREDENTIALS=$(cat PLAY_STORE_CREDENTIALS) \
.
PLAY_STORE_CREDENTIALS is a JSON file saved in Jenkins using managed files. Then, inside my Dockerfile, I have:
ARG PLAY_STORE_CREDENTIALS
ENV PLAY_STORE_CREDENTIALS=$PLAY_STORE_CREDENTIALS
WORKDIR /app/packages/web/android/fastlane/PlayStoreCredentials
RUN touch play-store-credentials.json
RUN echo $PLAY_STORE_CREDENTIALS >> ./play-store-credentials.json
RUN cat play-store-credentials.json
cat logs an empty line, or nothing at all.
Content of PLAY_STORE_CREDENTIALS:
{
"type": "...",
"project_id": "...",
"private_key_id": "...",
"private_key": "...",
"client_email": "...",
"client_id": "...",
"auth_uri": "...",
"token_uri": "...",
"auth_provider_x509_cert_url": "...",
"client_x509_cert_url": "..."
}
Any idea where the problem is?
Is there actually a file named PLAY_STORE_CREDENTIALS? If there is, and if it's a standard JSON file, I would expect your command line to fail: if the file contains any whitespace (which is typical for JSON files), that command should result in an error like...
"docker build" requires exactly 1 argument.
For example, if I have in PLAY_STORE_CREDENTIALS the sample content from your question, we see:
$ docker build -t fastlane-test --build-arg PLAY_STORE_CREDENTIALS=$(cat PLAY_STORE_CREDENTIALS) .
"docker build" requires exactly 1 argument.
See 'docker build --help'.
Usage: docker build [OPTIONS] PATH | URL | -
...because you are not quoting your arguments properly. If you adopt @β.εηοιτ.βε's suggestion and quote the cat command, it builds as expected:
$ docker build -t fastlane-test --build-arg PLAY_STORE_CREDENTIALS="$(cat PLAY_STORE_CREDENTIALS)" .
[...]
Step 7/7 : RUN cat play-store-credentials.json
---> Running in 29f95ee4da19
{ "type": "...", "project_id": "...", "private_key_id": "...", "private_key": "...", "client_email": "...", "client_id": "...", "auth_uri": "...", "token_uri": "...", "auth_provider_x509_cert_url": "...", "client_x509_cert_url": "..." }
Removing intermediate container 29f95ee4da19
---> b0fb95a9d894
Successfully built b0fb95a9d894
Successfully tagged fastlane-test:latest
You'll note that the resulting file does not preserve line endings; that's because you're not quoting the variable $PLAY_STORE_CREDENTIALS in your echo statement. You should write that as:
RUN echo "$PLAY_STORE_CREDENTIALS" >> ./play-store-credentials.json
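The word splitting behind that "exactly 1 argument" error can be sketched in Python, where str.split() stands in for the shell's whitespace splitting (the JSON text is illustrative):

```python
# Unquoted $(cat creds.json) is split on whitespace by the shell, so
# docker receives several stray arguments instead of one build-arg value.
json_text = '{ "type": "...", "project_id": "..." }'

unquoted = json_text.split()  # what the shell passes without quotes
quoted = [json_text]          # what the shell passes with "$(cat ...)"

print(len(unquoted))  # 6 -> docker sees extra positional args and aborts
print(len(quoted))    # 1 -> the whole JSON travels as a single argument
```

The same splitting explains the lost line endings inside the container: an unquoted $PLAY_STORE_CREDENTIALS in the echo statement is re-joined with single spaces.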
Lastly, it's not clear why you're transferring this data through an environment variable rather than just using the COPY instruction:
COPY PLAY_STORE_CREDENTIALS ./play-store-credentials.json
In the above examples, I'm testing things using the following Dockerfile:
FROM docker.io/alpine:latest
ARG PLAY_STORE_CREDENTIALS
ENV PLAY_STORE_CREDENTIALS=$PLAY_STORE_CREDENTIALS
WORKDIR /app/packages/web/android/fastlane/PlayStoreCredentials
RUN touch play-store-credentials.json
RUN echo $PLAY_STORE_CREDENTIALS >> ./play-store-credentials.json
RUN cat play-store-credentials.json
Update
Here's an example using the COPY command, where the value of the PLAY_STORE_CREDENTIALS build argument is a filename:
FROM docker.io/alpine:latest
ARG PLAY_STORE_CREDENTIALS
WORKDIR /app/packages/web/android/fastlane/PlayStoreCredentials
COPY ${PLAY_STORE_CREDENTIALS} play-store-credentials.json
RUN cat play-store-credentials.json
If I have credentials in a file named creds.json, this builds successfully like this:
docker build -t fastlane-test --build-arg PLAY_STORE_CREDENTIALS=creds.json .

Packer fails my docker build with error "sudo: not found" despite sudo being present

I'm trying to build a Packer image with Docker on it, and I want Docker to create an image with a custom script. The relevant portion of my code is (note that the first shell block double-checks that sudo is installed):
{
"type": "shell",
"inline": [
"apt-get install sudo"
]
},
{
"type": "docker",
"image": "python:3",
"commit": true,
"changes": [
"RUN pip install Flask",
"CMD [\"python\", \"echo.py\"]"
]
}
The relevant portion of my screen output is:
==> docker: Provisioning with shell script: /var/folders/s8/g1_gobbldygook/T/packer-shell23453453245
docker: /tmp/script_1234.sh: 3: /tmp/script_1234.sh: sudo: not found
==> docker: Killing the container: 34234hashvomit234234
Build 'docker' errored: Script exited with non-zero exit status: 127
The script in question is not one of mine; it's a generated script whose four-digit suffix changes on every build. I'm new to both Packer and Docker, so maybe the problem is obvious, but it's not to me.
There seem to be a few problems with your packer input. Since you haven't included the complete input file it's hard to tell, but notice a couple of things:
You probably need to run apt-get update before calling apt-get install sudo. Without that, even if the image has cached package metadata it is probably stale. If I try to build an image using your input it fails with:
E: Unable to locate package sudo
While not a problem in this context, it's good to explicitly include -y on the apt-get command line when you're running it non-interactively:
apt-get -y install sudo
In situations where apt-get is attached to a terminal, this will prevent it from prompting for confirmation. This is not a necessary change to your input, but I figure it's good to be explicit.
Based on the docs and on my testing, you can't include a RUN statement in the changes block of a docker builder. That fails with:
Stderr: Error response from daemon: run is not a valid change command
Fortunately, we can move that pip install command into a shell provisioner.
With those changes, the following input successfully builds an image:
{
"builders": [{
"type": "docker",
"image": "python:3",
"commit": true
}],
"provisioners": [{
"type": "shell",
"inline": [
"apt-get update",
"apt-get -y install sudo",
"pip install Flask"
]
}],
"post-processors": [[ {
"type": "docker-tag",
"repository": "packer-test",
"tag": "latest"
} ]]
}
(NB: Tested using Packer v1.3.5)

Docker cache permuted RUN instructions

If one builds an image from a Dockerfile, would permuting two RUN instructions:
1. Create a completely new image (new hash) with the same cached layers permuted?
2. Create no new image, since the permutation does not affect the build for the same set of RUN instructions?
The permutation in question:
RUN instruction1 replaced by RUN instruction2
RUN instruction2 replaced by RUN instruction1
If you permute the RUN instructions, a new image will be created. Here is an example:
FROM alpine
RUN echo abc
RUN echo cdf
Run docker image build -t image1 ., then permute the RUN commands and run docker image build -t image2 .. You will find that image1 and image2 have different IDs.
Given this minimal Dockerfile:
FROM busybox
RUN echo text1 > file1
RUN echo text2 > file2
When you run:
docker build . -t my-image
docker inspect my-image
Then you get:
"RootFS": {
"Type": "layers",
"Layers": [
"sha256:08c2295a7fa5c220b0f60c994362d290429ad92f6e0235509db91582809442f3",
"sha256:2ce4cb064fd2dc11c0b6fe08ffed6364478f6de0a1ac115d8aa01005b4c2921a",
"sha256:b4f880ce3a2172db2a614faf516c172d1e205bbf293daaee0174c4a5bd93d5f3"
]
}
Now try again with the commands permuted; build and inspect the image and you get:
"RootFS": {
"Type": "layers",
"Layers": [
"sha256:08c2295a7fa5c220b0f60c994362d290429ad92f6e0235509db91582809442f3",
"sha256:812b39039b60290f4aa193d8f8bf03fbd13020dd5cfa6e6638feb68dac72cf9c",
"sha256:451c384fb837aa70e446a36d3571123144cb497a42819b7a30348e7d49b24a0b"
]
}
Note:
If your commands do not modify the file system, e.g. RUN echo text, your image would have only one layer, sha256:08c2295a7fa5c220b0f60c994362d290429ad92f6e0235509db91582809442f3, which represents the empty FS.
Conclusion:
Not only is a new image created, but also new layers (i.e. the new image is not just a re-ordered list of existing layers). This is probably because a layer's ID includes not only its contents but its parent's hash as well.
See http://windsock.io/explaining-docker-image-ids/ for more details.
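That parent-chaining can be illustrated with a toy model, where hashlib stands in for Docker's real layer digests (a sketch of the principle only, not Docker's actual algorithm):

```python
import hashlib

def layer_id(parent_id: str, command: str) -> str:
    # A layer's identity covers its parent chain, not just its own content,
    # so the same command at a different position gets a different ID.
    return hashlib.sha256((parent_id + command).encode()).hexdigest()

base = hashlib.sha256(b"FROM busybox").hexdigest()

# Order A: file1 then file2
a1 = layer_id(base, "echo text1 > file1")
a2 = layer_id(a1, "echo text2 > file2")

# Order B: file2 then file1
b1 = layer_id(base, "echo text2 > file2")
b2 = layer_id(b1, "echo text1 > file1")

print(a2 != b2)  # True: permuting RUN lines yields different layer IDs
```

Only the shared base layer keeps its ID across the two orders, which matches the inspect output above, where the first sha256 entry is identical and the rest differ.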