I have a repo that contains multiple Lambda functions. Some functions are Node and some are Python (don't ask me why...). I don't want to put a Dockerfile in each function's folder, because they would all be exactly the same.
I have a folder in the repo root that contains Dockerfiles for both Python and Node. I reference these files in my SAM template.yaml depending on whether the function is Python or Node. The COPY step in my Dockerfiles fails because the build context is wrong: the Dockerfile lives in a different folder from the actual function code. I thought SAM might be able to use the function's folder as the context and build from the one shared Dockerfile somehow. Can SAM do this? Is there something I need to do in the Dockerfile? Does using SAM (especially with the --use-container flag) eliminate the need for a Dockerfile?
Current configuration:
samconfig.toml
template.yaml
|---Dockerfiles
| |---Python
| | | Dockerfile
| |---Node
| | | Dockerfile
|---PythonLambdaFunction
| | index.py
|---NodeLambdaFunction
| | index.js
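For context, the kind of per-function Metadata block I was hoping SAM supports would look something like this (a sketch based on my reading of sam build's image-build metadata keys; FUNCTION_DIR is a hypothetical build arg, and the base image and handler names are illustrative):

Resources:
  PythonLambdaFunction:
    Type: AWS::Serverless::Function
    Properties:
      PackageType: Image
    Metadata:
      DockerContext: .                           # repo root, so the function folder is inside the build context
      Dockerfile: Dockerfiles/Python/Dockerfile  # relative to DockerContext
      DockerBuildArgs:
        FUNCTION_DIR: PythonLambdaFunction

with the shared Python Dockerfile selecting the function folder via the build arg:

FROM public.ecr.aws/lambda/python:3.9
ARG FUNCTION_DIR
# copy only the selected function's code into the Lambda task root
COPY ${FUNCTION_DIR}/ ${LAMBDA_TASK_ROOT}/
CMD ["index.handler"]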
Related
When adding container orchestrator support (docker-compose) to a .NET Core Web API project with a dependency on a project library, the following folder structure is created:
├── Solution
│ ├── API.Project
| | ├── API.Project.csproj
| | ├── Dockerfile
| |
| ├── Library.project
| | ├── Library.project.csproj
| |
| ├── docker-compose.yaml
As you can see, the library project is outside the Dockerfile's context. If I build an image in my GitHub Actions pipeline with docker/build-push-action@v2 (https://github.com/marketplace/actions/build-and-push-docker-images), it can't find the library project. If I move the Dockerfile to the Solution folder, build the image, and run a container, the Visual Studio debugger won't attach, but the container does run. However, when I make an HTTP request to the container, a null pointer exception is logged in the container logs (also in a container built from the GitHub Actions image). How do I build a Docker image with a folder structure like this example? I would prefer to keep the Dockerfile inside the API.Project folder.
With docker/build-push-action@v2 you can specify the context and the location of the Dockerfile like so:
- name: Build and push
  uses: docker/build-push-action@v2
  with:
    context: .
    file: API.Project/Dockerfile
    push: true
    tags: user/app:latest
This allows you to include files in the parent folder of the Dockerfile.
The null pointer exception I received when moving my Dockerfile to the parent folder had to do with a dependency on System.Security.Cryptography, but I didn't have to solve it, because specifying the Docker build context and keeping the Dockerfile inside the API.Project folder fixed my issues.
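For completeness, with the build context set to the solution root, the Dockerfile kept inside API.Project references paths relative to that root rather than to its own folder. A minimal sketch (the .NET image tags and publish layout are illustrative, not taken from the question):

FROM mcr.microsoft.com/dotnet/sdk:6.0 AS build
WORKDIR /src
# paths are relative to the build context (the Solution folder), not the Dockerfile's location
COPY API.Project/API.Project.csproj API.Project/
COPY Library.project/Library.project.csproj Library.project/
RUN dotnet restore API.Project/API.Project.csproj
COPY . .
RUN dotnet publish API.Project/API.Project.csproj -c Release -o /app/publish

FROM mcr.microsoft.com/dotnet/aspnet:6.0
WORKDIR /app
COPY --from=build /app/publish .
ENTRYPOINT ["dotnet", "API.Project.dll"]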
I have successfully deployed a Spring Boot application in OpenShift as a container image from our private registry. So far, so good. However, I would like to be able to see in the OpenShift console which "version" of the application is deployed, and I can't figure out how to find that information.
I have added the version as a container label, and I can see that the Docker label is being read when the image is imported:
oc import-image my-app
imagestream.image.openshift.io/my-app imported
Name: my-app
Namespace: my-playground
Created: 46 hours ago
Labels: app=my-app
app.kubernetes.io/component=my-app
app.kubernetes.io/instance=my-app
app.kubernetes.io/name=my-app
app.kubernetes.io/part-of=my-app-app
Annotations: openshift.io/image.dockerRepositoryCheck=2022-02-25T10:52:18Z
...
Image Created: 15 minutes ago
...
Exposes Ports: 28888/tcp
Docker Labels: label=version-0.0.2-SNAPSHOT
...
But when I look in the console GUI, I can't see any information regarding the Docker labels I have added to the container. I can see them through the oc client with oc describe istag/my-app:latest, but not in the GUI.
Is there any way I can add the version info somewhere so I can see it easily in the OpenShift console? Yes, the command line is nice, but why use two tools when one should be enough?
Thank you!
If there is any other place I can put the version information so I can see it from the GUI, I'm all ears!
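For reference, this is how the label gets into the image, via a line in its Dockerfile (a sketch; the key and value mirror the Docker Labels line in the import output above):

LABEL label="version-0.0.2-SNAPSHOT"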
Well, the answer is that it is not easily visible in the web console. There is no place in the console where you can see the Docker Labels of the images of an Image Stream, so...
What I did was create this handy little script that queries the deployment and then the image stream to display the Docker labels:
#! /bin/bash
DEPLOYMENT_IMAGE_STREAM=`oc describe deployment/$1 | grep ImageStreamTag | grep -Po "$1\:[^\"]+"`
echo "Deployment specifies image stream: $DEPLOYMENT_IMAGE_STREAM"
DOCKER_LABEL_VERSION=`oc describe istag/$DEPLOYMENT_IMAGE_STREAM | grep "Docker Labels" | awk '{ print \$3 }' | cut -d'=' -f2`
echo "Version specified in the image stream: $DOCKER_LABEL_VERSION"
Note that this is a crude script that is specific to our case, as we add just one Docker label and know where to cut the lines. If you need to use it, you might need to adjust the awk and cut options a bit.
If you are using a DeploymentConfig instead of a plain Deployment, you need to change deployment/$1 to dc/$1 in the first line.
Cheers!
We have a container, 'pepsr', that we build at work, which makes use of a generic configuration file 'install.config.json'.
This config file contains all the information pepsr needs about the specifics of the host it is run on.
I'd like to make a Compose YAML that simply takes the pepsr image and COPYs, say, 'harvard.installation.config.json' as 'installation.config.json', so that the final container knows it runs on 'harvard'.
The builder script I dream of is as follows:
$ ./makeRelease harvard
which invokes docker-compose build.
As a result, the script should produce an image named 'pepsr-harvard'.
In other words, the makeRelease.sh script should make docker-compose do the following:
take the nameOfTheInstallation from the argument;
take the existing generic image 'pepsr';
COPY <nameOfTheInstallation>.installation.config.json installation.config.json
Set the name of the container as pepsr-<nameOfTheInstallation>
Important: we ruled out the --volume option for the config file because we want an all-in-one resulting container:
that's easy to pull and run;
we want to update the config file over time when new features come in.
Any other option is welcome :-)
Docker Compose doesn't do what you are asking for. You need a Dockerfile for that:
FROM pepsr
ARG RELEASE_NAME
COPY ${RELEASE_NAME}.installation.config.json installation.config.json
Then you could use that in your docker-compose.yaml.
version: "3.9"
services:
pepsr-release:
image: "myregistry.io/pepsr-${RELEASE_NAME}:${RELEASE_TAG}"
build:
context: ./
args:
RELEASE_NAME:
Docker Compose will read a .env file if one is present in your directory, and will also use any variable you currently have set in your shell (the shell takes precedence if both are set). If you use such a variable as a build arg with an empty value, Compose will fill in the value of the environment variable it found.
So you can either create such a .env file next to the compose file, containing RELEASE_NAME, or set it in the shell as shown below.
export RELEASE_NAME="harvard"
export RELEASE_TAG="0.0.1"
docker-compose build
docker-compose push
You can even use the environment variables to name and tag the image, as shown above.
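The equivalent .env file, placed next to the compose file, would just contain the same two variables (a sketch with the example values from above):

RELEASE_NAME=harvard
RELEASE_TAG=0.0.1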
For this example, you need a docker-compose.yaml and a Dockerfile next to each other:
.
|-- docker-compose.yaml
|-- Dockerfile
Building an image for each host seems a bit non-Docker-like, since one of the strengths of Docker is that your image can run anywhere unchanged.
If you don't want to use volumes or bind mounts, then the only thing I can think of is piping the file into the container. Something like this:
$ cat harvard.installation.config.json | docker run --rm -i <myimage>
Then you'll be able to read the configuration file from stdin in your image.
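A minimal sketch of that approach, assuming the image's entrypoint is a small wrapper script that writes stdin to the expected config path before starting the app (the paths and script name are illustrative):

#!/bin/sh
# entrypoint.sh: persist whatever arrives on stdin as the installation config,
# then hand over to the application
cat > /app/installation.config.json
cd /app && exec npm start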
I finally came up with the following solution: I wrote two Dockerfiles:
For pepsr generic: Dockerfile.pepsr.generic, which builds the whole pepsr software and container image but has no CMD;
For the pepsr installation: Dockerfile.pepsr.installation, which simply copies the given installation configuration file and adds CMD npm start to start the generic pepsr Node.js app (see the sketch just below).
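A sketch of what that installation Dockerfile can look like (assuming the generic image is tagged <myrepo>/pepsr-generic as in the compose file below, that the app lives in /app, and that the ARG name matches the compose build arg):

# sketch of the installation Dockerfile
FROM <myrepo>/pepsr-generic
ARG INSTALLATION_NAME
# bake the host-specific config into the image under the generic name
COPY ${INSTALLATION_NAME}.installation.config.json /app/installation.config.json
CMD ["npm", "start"]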
My compose file docker-compose.yml is as follows:
version: "3.9"
services:
pepsr-generic:
build:
dockerfile: Dockerfile.pepsr.generic
context: .
image: <myrepo>/pepsr-generic
container_name: pepsr-generic
network_mode: host
restart: always
pepsr-install:
build:
dockerfile: Dockerfile.pepsr.installation.release
context: .
args:
INSTALLATION_NAME:
hostname: pepsr-${INSTALLATION_NAME}
network_mode: host
image: <myrepo>/pepsr-${INSTALLATION_NAME}
container_name: pepsr-${INSTALLATION_NAME}
restart: unless-stopped
depends_on:
- pepsr-generic
Basically, my compose says:
The pepsr-generic service builds from its own Dockerfile and is given an image name and a container name.
The pepsr-install service builds from its own Dockerfile, takes a build argument I named INSTALLATION_NAME (its value comes from the environment variable of the same name), and its image and container names are derived from it. One very important thing is that I declare that this container depends on the other one (pepsr-generic).
Finally, to ease the build, I provide a bash script that sets the INSTALLATION_NAME environment variable to be passed to docker-compose, as follows (a streamlined version):
#!/bin/bash
module="🐳 dockerPepsrMakeRelease"
TODAY=$(TZ=":GMT" date '+%Y-%m-%dT%H:%M:%S')
echo " ____"
echo " | _ \ ___ _ __ ___ _ __ "
echo " | |_) / _ \ '_ \/ __| '__|"
echo " | __/ __/ |_) \__ \ | "
echo " |_| \___| .__/|___/_| "
echo " |_| "
echo "---------------------------"
echo PEPSR: Release manager for Docker
export INSTALLATION_NAME=$1
docker-compose build
retVal=$?
if [ $retVal -ne 0 ]; then
echo "🔴 $module: error: could not docker-compose build. Aborted"
exit 1
fi
echo "🟢 $module: image completed."
exit 0
My project is structured kind of like this:
project
|- docker_compose.yml
|- svc-a
|- Dockerfile
|- svc-b
|- Dockerfile
|- common-lib
|- Dockerfile
Within docker_compose.yaml:
version: "3.7"
services:
  common-lib:
    build:
      context: ./common-lib/
    image: common-lib:latest
  svc-a:
    depends_on:
      - common-lib
    build:
      ...
  svc-b:
    depends_on:
      - common-lib
    build:
      ...
The common-lib/Dockerfile is relatively standard:
FROM someBuilderBase:latest
COPY . .
RUN build_libraries.sh
Then in svc-a/Dockerfile I import those built libraries:
FROM common-lib:latest as common-lib
FROM someBase:latest
COPY --from=common-lib ./built ./built-common-lib
COPY . .
RUN build_service_using_built_libs.sh
And the Dockerfile for svc-b is basically the same.
This works great with docker-compose build svc-a, as it first builds the common-lib image because of that depends_on, and I can reference it easily as common-lib:latest. It is also great because running docker-compose build svc-b doesn't rebuild that base common library.
My problem is that I am defining a builder container as a Docker Compose service. When I run docker-compose up, it attempts to run common-lib as an actual service and spits out a slew of errors. In my real project I have whole chains of these builder-container services, which makes docker-compose up unusable.
I am relatively new to Docker. Is there a more canonical way to do this while a) avoiding code duplication by not building common-lib in multiple Dockerfiles, and b) avoiding a manual re-run of docker build ./common-lib before running docker build ./svc-a (or b)?
The way you are doing it is not really how it's meant to be done in Docker.
You have two options to achieve what you want:
1/ Multi-stage build
This is almost what you're doing with this line (in your svc-a Dockerfile):
FROM common-lib:latest as common-lib
However, instead of creating your common-lib image in another project, just copy the Dockerfile content into your service's Dockerfile:
FROM someBuilderBase:latest as common-lib
COPY . .
RUN build_libraries.sh
FROM someBase:latest
COPY --from=common-lib ./built ./built-common-lib
COPY . .
RUN build_service_using_built_libs.sh
This way, you won't need to add a common-lib service in docker-compose.
2/ Inheritance
If you have a lot of images that need to use what is inside your common-lib (and you don't want to add it to every Dockerfile via a multi-stage build), then you can just use inheritance.
What's inheritance in Docker?
It's a base image.
From your example, the svc-a image is based on someBase:latest. I guess it's the same for svc-b. In that case, just add the lib you need to the someBase image (with a multi-stage build, for example, or by creating a base image containing your lib).
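A minimal sketch of that base-image approach, reusing the names from the question (my-base is a hypothetical tag, built from the project root):

# base/Dockerfile: bake the common library into a shared base image
FROM someBuilderBase:latest as common-lib
COPY common-lib/ .
RUN build_libraries.sh

FROM someBase:latest
COPY --from=common-lib ./built ./built-common-lib

Build it once with docker build -t my-base -f base/Dockerfile . and each service Dockerfile then shrinks to:

FROM my-base
COPY . .
RUN build_service_using_built_libs.sh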
So! I'm setting up a CI/CD system which has its own folder and YAML file. Within that folder, I need to call docker build. However, I keep getting the following error:
unable to prepare context: path "./server" not found.
Here's the structure of my app:
├──CI Folder
| ├── deployment-file.yaml
├──server
| ├── Dockerfile.prod
| └── All the other files for building the server
Within the deployment-file.yaml, I'm calling:
docker build -t dizzy/dizzy-server:latest -f ./server/Dockerfile.prod ./server
I've tried every variation of this with relative paths like ../server, etc., but Docker won't take it. It gives me a different error: unable to prepare context: unable to evaluate symlinks in Dockerfile path: lstat.
What's the proper way to do this, or am I required to move that deployment-file.yaml to the root directory?
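For reference, since the layout implies the command runs with CI Folder as the working directory, one common workaround is to run the build from the repository root so the paths in the original command resolve (a sketch, assuming the CI step allows changing the working directory first):

cd .. && docker build -t dizzy/dizzy-server:latest -f server/Dockerfile.prod server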