Unable to pull package with Composer version 2.* - docker

I'm building a docker image for a Magento 2 e-shop where we use some packages from magefan.com. The problem is that I'm unable to pull packages from magefan.com while building the image. It only occurs with Composer 2.x, where the build fails with the following error.
[Composer\Downloader\TransportException]
The 'https://***:***@magefan.com/repo/packages/download/package/magefan-module-og-tags/version/2.0.14/' URL could not be accessed: HTTP/2 403
When I downgrade Composer to version 1.10.26 everything works fine, but I would rather not use that version since it's deprecated.
The weird thing is that I'm able to pull the package with the latest Composer outside the Docker build. Even when I run composer install inside the running container, everything works.
I suspect that the reason is that magefan.com does not support HTTP/2. I checked the URL with is-http2-cli and it returned that HTTP/2 is not supported.
× HTTP/2 not supported by https://***:***@magefan.com/repo/packages/download/package/magefan-module-og-tags/version/2.0.14/
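As a cross-check, curl can print the HTTP version it actually negotiates (a quick sketch, reusing the masked URL from above):
curl -sI --http2 -o /dev/null -w '%{http_version}\n' 'https://***:***@magefan.com/repo/packages/download/package/magefan-module-og-tags/version/2.0.14/'
It prints 2 when HTTP/2 is negotiated and 1.1 when the client falls back.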
What I don't understand is why Composer can fall back to HTTP/1.1 outside of the Docker build but not inside it.
I tried both Debian- and Alpine-based PHP images.
Is there any way to force Composer not to use HTTP/2?
Dockerfile:
FROM php:7.4.32-fpm-alpine3.15 AS base
...
FROM base AS composer
RUN apk add --no-cache unzip
COPY --from=composer:1.10.26 /usr/bin/composer /usr/local/bin/composer
# COPY --from=composer:2.0.14 /usr/bin/composer /usr/local/bin/composer
WORKDIR /app
COPY composer.json composer.lock auth.json ./
RUN composer install --no-dev --no-interaction -o
FROM base AS finish
...

Related

Docker - dotnet build can't find file

I am trying to make my application work in a Linux container. It will eventually be deployed to Azure Container Instances. I have absolutely no experience with containers whatsoever, and I am getting lost in the documentation and examples.
I believe the first thing I need to do is create a Docker image for my project. I have installed Docker Desktop.
My project has this structure:
MyProject
MyProject.Core
MyProject.Api
MyProject.sln
Dockerfile
The contents of my Dockerfile are as follows.
#Use Ubuntu Linux as base
FROM ubuntu:22.10
#Install dotnet6
RUN apt-get update && apt-get install -y dotnet6
#Install LibreOffice
RUN apt-get -y install default-jre-headless libreoffice
#Copy the source code
WORKDIR /MyProject
COPY . ./
#Compile the application
RUN dotnet publish -c Release -o /compiled
#ENV PORT 80
#Expose port 80
EXPOSE 80
ENTRYPOINT ["dotnet", "/compiled/MyProject.Api.dll"]
#ToDo: Split build and deployment
Now, when I try to build the image from the command prompt, I am using the following command:
docker build - < Dockerfile
This all processes okay up until the dotnet publish command, where it errors saying
Specify a project or solution file
I have verified that this command works fine when run outside of Docker. I suspect something is wrong with the copy? I have tried variations of paths for the WORKDIR, but I just can't figure out what is wrong.
Any advice is greatly appreciated.
Thank you SiHa in the comments for providing a solution. The root cause was the build command: docker build - < Dockerfile sends only the Dockerfile itself over stdin, with no build context, so COPY . ./ had nothing to copy. I made the following change to my docker file.
WORKDIR app
Then I use the following command to build.
docker build -t ImageName -f FileName .
The image now creates successfully. I am able to run this in a container.
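For contrast, a minimal sketch of the two invocations (note that image names must actually be lowercase, so ImageName above is illustrative):
# stdin-only build: no build context is sent, so COPY has nothing to copy
docker build - < Dockerfile
# the trailing . sends the current directory as the build context
docker build -t myimage -f Dockerfile .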

Undefined References - Golang CGO build fails using Docker, but not on host machine

I'm trying to use the lilliput library for Go. It is only made to run on Linux and OS X.
On my Linux (Debian 10.3) host machine as well as my WSL2 setup (Ubuntu 20.04.1), I have no problems running and building my code (excerpt below).
// main.go
package main

import (
    "github.com/discordapp/lilliput"
)

func main() {
    ...
    decoder, err := lilliput.NewDecoder(data)
    ...
}
However, when I try to put it in a Docker container, with the following configuration, it fails to build.
# Dockerfile v1
FROM golang:1.14.4-alpine AS build
RUN apk add build-base
WORKDIR /src
ENV CGO_ENABLED=1
COPY go.mod .
COPY go.sum .
RUN go mod download
COPY . .
RUN go build -o /out/api .
ENTRYPOINT ["/out/api"]
EXPOSE 8080
I already tried adjusting the Dockerfile with different approaches, for example:
FROM alpine:edge AS build
RUN apk update
RUN apk upgrade
RUN apk add --update go=1.15.3-r0 gcc=10.2.0-r5 g++=10.2.0-r5
WORKDIR /app
RUN go env
ENV GOPATH /app
ADD . /app/src
WORKDIR /app/src
RUN go get -d -v
RUN CGO_ENABLED=1 GOOS=linux go build -o /app/bin/server
FROM alpine:edge
WORKDIR /app
RUN cd /app
COPY --from=build /app/bin/server /app/bin/server
CMD ["bin/server"]
Both result in the following build log:
https://pastebin.com/zMEbEac3
For completeness, the go env of the host machine.
GOARCH="amd64"
GOBIN=""
GOCACHE="/home/kingofdog/.cache/go-build"
GOEXE=""
GOFLAGS=""
GOHOSTARCH="amd64"
GOHOSTOS="linux"
GOOS="linux"
GOPATH="/home/kingofdog/go"
GOPROXY=""
GORACE=""
GOROOT="/usr/lib/go-1.11"
GOTMPDIR=""
GOTOOLDIR="/usr/lib/go-1.11/pkg/tool/linux_amd64"
GCCGO="gccgo"
CC="gcc"
CXX="g++"
CGO_ENABLED="1"
GOMOD="/home/kingofdog/{PROJECT FOLDER}/go.mod"
CGO_CFLAGS="-g -O2"
CGO_CPPFLAGS=""
CGO_CXXFLAGS="-g -O2"
CGO_FFLAGS="-g -O2"
CGO_LDFLAGS="-g -O2"
PKG_CONFIG="pkg-config"
GOGCCFLAGS="-fPIC -m64 -pthread -fmessage-length=0 -fdebug-prefix-map=/tmp/go-build589460337=/tmp/go-build -gno-record-gcc-switches"
I already searched online for this error, but everything I could find dealt with mistakes in the way others imported C libraries in their Go projects. In my case I'm quite sure it is not a mistake in the source code but rather a configuration problem with the docker container, as the code works perfectly fine outside Docker and I couldn't find a similar issue on the lilliput repository.
The alpine docker image is a minimalistic Linux version - using musl-libc instead of glibc - and is typically used for building tiny images.
To get the more featureful glibc - and resolve your missing CGO dependencies - use the non-alpine version of the golang Docker image to build your asset:
#FROM golang:1.14.4-alpine AS build
#RUN apk add build-base
FROM golang:1.14.4 AS build
Did you build the dependencies?
You have to run the script to build the dependencies on Linux.
Script: https://github.com/discord/lilliput/blob/master/deps/build-deps-linux.sh
Their documentation mentions:
Building Dependencies
Go does not provide any mechanism for arbitrary building of dependencies, e.g. invoking make or cmake. In order to make lilliput usable as a standard Go package, prebuilt static libraries have been provided for all of lilliput's dependencies on Linux and OSX. In order to automate this process, lilliput ships with build scripts alongside compressed archives of the sources of its dependencies. These build scripts are provided for OSX and Linux.
If it still fails, the issue might be linked to the glibc/musl difference, because Alpine images ship musl libc instead of glibc (GNU's libc). In that case, try a minimal Ubuntu/CentOS/etc. image instead, or find a way to get glibc on Alpine.
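Putting the two answers together, a minimal sketch of a glibc-based build stage (the extra apt packages are an assumption about what the dependency script might need; check build-deps-linux.sh for the exact toolchain, and skip that step if the prebuilt static libraries already work for you):
# Debian-based image: glibc instead of musl, matching lilliput's prebuilt static libraries
FROM golang:1.14.4 AS build
# Assumed toolchain, only needed if you rebuild the native dependencies yourself
RUN apt-get update && apt-get install -y cmake nasm
WORKDIR /src
COPY go.mod go.sum ./
RUN go mod download
COPY . .
RUN go build -o /out/api .
ENTRYPOINT ["/out/api"]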

Custom Docker Image with Azure CLI base for authentication

I'm building a .NET Core application that I would like to deploy via Azure Devops build pipeline. The pipeline will build, test and deploy using Docker containers.
I have successfully built the first docker image for my application using the following Dockerfile and am now attempting to run it on my local machine before using it in the pipeline:
FROM mcr.microsoft.com/dotnet/core/sdk:3.1 AS build-env
MAINTAINER yummylumpkins <yummy@lumpkins.com>
WORKDIR /app
COPY . ./
RUN dotnet publish MyAPIApp -c Release -o out
FROM mcr.microsoft.com/dotnet/core/aspnet:3.1
WORKDIR /app
COPY --from=build-env /app/out .
ENTRYPOINT ["dotnet", "MyAPIApp.dll"]
Running this image in a local docker container crashes, because my application uses AzureServiceTokenProvider() to fetch a token from Azure Services, which is then used to fetch secrets from Azure Key Vault.
The local docker container that the image runs in does not have authorization to access Azure Services.
The docker container error output looks like this:
---> Microsoft.Azure.Services.AppAuthentication.AzureServiceTokenProviderException: Parameters: Connection String: [No connection string specified], Resource: https://vault.azure.net, Authority: https://login.windows.net/XXXX-XXXX-XXXX-XXXX. Exception Message: Tried the following 3 methods to get an access token, but none of them worked.
Parameters: Connection String: [No connection string specified], Resource: https://vault.azure.net, Authority: https://login.windows.net/XXXX-XXXX-XXXX-XXXX. Exception Message: Tried to get token using Managed Service Identity. Access token could not be acquired. Connection refused
Parameters: Connection String: [No connection string specified], Resource: https://vault.azure.net, Authority: https://login.windows.net/XXXX-XXXX-XXXX-XXXX. Exception Message: Tried to get token using Visual Studio. Access token could not be acquired. Environment variable LOCALAPPDATA not set.
Parameters: Connection String: [No connection string specified], Resource: https://vault.azure.net, Authority: https://login.windows.net/XXXX-XXXX-XXXX-XXXX. Exception Message: Tried to get token using Azure CLI. Access token could not be acquired. /bin/bash: az: No such file or directory
After doing a lot of research (and getting some positive feedback here) it would appear that the best way to authorize the locally running docker container is to build the image on top of an Azure CLI base image from Microsoft, then use az login --service-principal -u <app-url> -p <password-or-cert> --tenant <tenant> somewhere during the build/run process to authorize the local docker container.
I have successfully pulled the Azure CLI image from Microsoft (docker pull mcr.microsoft.com/azure-cli) and can run it via docker run -it mcr.microsoft.com/azure-cli. The container runs with the Azure CLI command line and I can log in via bash, but that's as far as I've come.
The next step would be to layer this Azure CLI image into my previous Dockerfile during the image build, but I am unsure how to do this. I've tried the following:
# New base image is now Azure CLI
FROM mcr.microsoft.com/azure-cli
RUN az login -u yummylumpkins -p yummylumpkinspassword
FROM mcr.microsoft.com/dotnet/core/sdk:3.1 AS build-env
MAINTAINER yummylumpkins <yummy@lumpkins.com>
WORKDIR /app
COPY . ./
RUN dotnet publish MyAPIApp -c Release -o out
FROM mcr.microsoft.com/dotnet/core/aspnet:3.1
WORKDIR /app
COPY --from=build-env /app/out .
ENTRYPOINT ["dotnet", "MyAPIApp.dll"]
But this still doesn't work; the process results in the same error mentioned above (I think because the login does not persist into the later build stages). My question is: how would I build the Azure CLI image into my dockerfile/image build process with an az login command that authorizes the docker container, persist that authorization, and then set a command to run the app (MyAPIApp.dll) with the persisted authorization?
Or, am I taking the completely wrong approach with this? Thanks in advance for any feedback.
Posting an update with an answer here just in case anyone else has a similar problem. I haven't found any other solution to this, so I had to make my own. Below is my Dockerfile. Right now the image sits at 1GB, so I will definitely need to go through and optimize it, but I'll explain what I did:
#1 Install .NET Core SDK Build Environment
FROM mcr.microsoft.com/dotnet/core/sdk:3.1 AS build-env
WORKDIR /app
#2 Build YummyApp
COPY . ./
RUN dotnet publish YummyAppAPI -c Release -o out
#3 Install Ubuntu Base Image
FROM ubuntu:latest
MAINTAINER yummylumpkins <yummy@lumpkins.com>
WORKDIR /app
ENV ASPNETCORE_URLS=http://+:80
EXPOSE 80
#4 Install package dependencies & .NET Core SDK
RUN apt-get update \
&& apt-get install apt-transport-https \
&& apt-get update \
&& apt-get install -y curl bash dos2unix wget dpkg \
&& wget -q https://packages.microsoft.com/config/ubuntu/18.04/packages-microsoft-prod.deb -O packages-microsoft-prod.deb \
&& dpkg -i packages-microsoft-prod.deb \
&& apt-get install -y software-properties-common \
&& apt-get update \
&& add-apt-repository universe \
&& apt-get update \
&& apt-get install apt-transport-https \
&& apt-get update \
&& apt-get install -y dotnet-sdk-3.1 \
&& apt-get update \
&& rm packages-microsoft-prod.deb
#5 Copy project files from earlier SDK build
COPY --from=build-env /app/out .
#6 Install Azure CLI for AppAuthorization
RUN curl -sL https://aka.ms/InstallAzureCLIDeb | bash
#7 Login to Azure Services and run application
COPY entrypoint.sh ./
RUN dos2unix entrypoint.sh && chmod +x entrypoint.sh
CMD ["/app/entrypoint.sh"]
Step 1 - Install .NET Core SDK Build Environment: We start with using .NET Core SDK as a base image to build my application. It should be noted that I have a big app with one solution and multiple project files. The API project is dependent on the other projects.
Step 2 - Build YummyApp: We copy the entire project structure from our local directory to our working directory inside the docker image (/app). Just in case anyone is curious, my project is a basic API app. It looks like this:
[YummyApp]
|-YummyAppDataAccess
|  |-YummyAppDataAccess.csproj
|-YummyAppInfrastructure
|  |-YummyAppInfrastructure.csproj
|-YummyAppAPI
|  |-YummyAppAPI.csproj
|-YummyAppServices
|  |-YummyAppServices.csproj
|-YummyApp.sln
After we copy everything over, we build/publish a Release configuration of the app.
Step 3 - Install Ubuntu Base Image: We start a new stage using Ubuntu. I initially tried to go with Alpine Linux but found it almost impossible to install Azure CLI on it without some really hacky workarounds, so I went with Ubuntu for ease of installation.
Step 4 - Install package dependencies & .NET Core SDK: Inside the Ubuntu layer we set our work directory and install/update a bunch of libraries, including the .NET Core SDK. It should be noted that I needed to install dos2unix for a shell script file I had to run later on; I will explain below.
Note: I initially tried to install only the .NET Core runtime, as it is more lightweight and would bring this image down to about 700MB (from 1GB), but for some reason when I tried to run my application at the end of the file (Step 7) I got an error saying that no runtime was found, so I went back to the SDK.
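A guess, not verified here: an ASP.NET Core app needs the ASP.NET Core runtime package rather than the base .NET runtime, so a runtime-only image would want something like the following in place of the SDK:
# aspnetcore-runtime-3.1 bundles the base runtime plus the ASP.NET Core shared framework
apt-get install -y aspnetcore-runtime-3.1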
Step 5 - Copy project files from earlier SDK build: I copied the built project files from the first build stage over to this Ubuntu layer instead of rebuilding there, saving about 1GB of space.
Step 6 - Install Azure CLI: In order to authorize my application to fetch a token from Azure Services, I normally use Microsoft.Azure.Services.AppAuthentication. This package provides a method called AzureServiceTokenProvider() which (via my IDE) authorizes my application to connect to Azure Services and get a token that is then used to access the Azure Key Vault. This whole issue started because my application is unable to do this from within a docker container: Azure doesn't recognize the request coming from the container itself.
So to work around this, we need to log in via az login in the Azure CLI, inside the container, before we start the app.
Step 7 - Login to Azure Services and run application: Now it's showtime. I had two problems to solve here: I had to figure out how to execute both az login and dotnet YummyAppAPI.dll when this container fires up, but Dockerfiles only allow one ENTRYPOINT or CMD to be executed at runtime. The workaround: by making a shell script file (entrypoint.sh), I was able to put both commands into that one file and execute it.
After setting this up, I was getting an error with entrypoint.sh that read something like: entrypoint.sh: executable file not found in $PATH. It turned out I had to change the permissions of the file using chmod, because otherwise my docker container was unable to access it. That made the file visible, but it still would not execute; I was receiving another error: standard_init_linux.go:211: exec user process caused "no such file or directory"
After some more digging, it turns out this problem happens when you try to use a .sh file created on Windows on a Linux-based system, so I had to install dos2unix to convert the file to something Linux-compatible. I also had to make sure the file was formatted correctly. For anyone curious, this is what my entrypoint.sh looks like:
#!/bin/sh
set -e
az login -u yummy@lumpkins.com -p ItsAlwaysYummy
dotnet /app/YummyAppAPI.dll
exec "$@"
Note: The login and password are hard-coded. I know this is bad practice (in fact, it's terrible), but this is only for my local machine and will never see production. The next step would be to introduce environment variables with a service principal login. Since this deployment will eventually happen in the Azure DevOps pipeline, I can inject those ENV vars straight into the devops pipeline YAML so that all of this happens without me ever punching in credentials; they will come straight from the Key Vault where they are stored.
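As a sketch, the hard-coded line in entrypoint.sh could then become the following, assuming the pipeline injects these (illustratively named) environment variables:
# Service principal credentials supplied via environment variables, not baked into the image
az login --service-principal -u "$AZURE_CLIENT_ID" -p "$AZURE_CLIENT_SECRET" --tenant "$AZURE_TENANT_ID"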
Lastly, this image is huge (1GB), and it does need to be optimized if it will be updated/built regularly. I will continue working on that, but I am open to suggestions on how best to do it moving forward.
Thanks again all.

node:latest for alpine, apk not found because sbin is not on path

I am running a docker-compose file using node:latest. I noticed an issue with the timezone that I am trying to fix. Following an example I found online, I tried to install tzdata, but this is not working: I keep getting apk not found errors. I then found this stackoverflow.com question, Docker Alpine /bin/sh apk not found, which seems to mirror my issue, as I docker exec'ed into the container and found the apk command in the /sbin folder. From other articles, the following seemed to be the way to resolve the issue, but apk is still not found.
CMD export PATH=$PATH:$ADDITIONAL_PATH
RUN apk add --no-cache tzdata
ENV TZ=America/Chicago
node:latest is based on buildpack-deps, which is based on Debian. Debian does not use apk; it uses apt. You either want to use Debian's apt to install packages (apt-get install tzdata) or switch to node:alpine, which uses apk for package management.
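If you stay on node:latest, the Debian equivalent of the snippet from the question would be something along these lines (a sketch, untested):
FROM node:latest
# Debian uses apt, not apk; tzdata provides the zoneinfo database
RUN apt-get update && apt-get install -y --no-install-recommends tzdata
ENV TZ=America/Chicago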
You can use node:alpine which is based on alpine.
node:alpine
CMD export PATH=$PATH:$ADDITIONAL_PATH
RUN apk add --no-cache tzdata
ENV TZ=America/Chicago
node:<version>-alpine
This image is based on the popular Alpine Linux project, available in the alpine official image. Alpine Linux is much smaller than most distribution base images (~5MB), and thus leads to much slimmer images in general.

How to Edit Docker Image?

I did a basic search in the community and could not find a suitable answer, so I am asking here. Sorry if it was asked earlier.
Basically, I am working on a project where we change code at regular intervals, so we need to build the docker image every time, and because of that we have to install the dependencies from requirements.txt from scratch, which takes around 10 minutes each time.
How can I make changes directly to a docker image, and how do I configure the ENTRYPOINT (in the Dockerfile) so that it reflects changes in the pre-built docker image?
You don't edit an image once it's been built. You always run docker build from the start; it always runs in a clean environment.
The flip side of this is that Docker caches built images. If you had image 01234567, ran RUN pip install -r requirements.txt, and got image 2468ace0 out, then the next time you run docker build it will see the same source image and the same command, skip doing the work, and jump directly to the output image. A COPY or ADD of files that have changed invalidates the cache for all later steps.
So the standard pattern is
# Arbitrary choice of language
FROM node:10
WORKDIR /app
# Copy in _only_ the requirements and package lock files
COPY package.json yarn.lock ./
# Install dependencies (once)
RUN yarn install
# Copy in the rest of the application and build it
COPY src/ src/
RUN yarn build
# Standard application metadata
EXPOSE 3000
CMD ["yarn", "start"]
If you only change something in your src tree, docker build will skip up to the COPY step, since the package.json and yarn.lock files haven't changed.
In my case I was facing the same problem: after minor changes, I was building the image again and again.
My old Dockerfile:
FROM python:3.8.0
WORKDIR /app
# Install system libraries
RUN apt-get update && \
apt-get install -y git && \
apt-get install -y gcc
# Install project dependencies
COPY ./requirements.txt .
RUN pip install --upgrade pip
RUN pip install --no-cache-dir -r requirements.txt --use-deprecated=legacy-resolver
# Don't use terminal buffering, print all to stdout / err right away
ENV PYTHONUNBUFFERED 1
COPY . .
So what I did was first create a base image like this (leaving out the last line, i.e. not copying my code):
FROM python:3.8.0
WORKDIR /app
# Install system libraries
RUN apt-get update && \
apt-get install -y git && \
apt-get install -y gcc
# Install project dependencies
COPY ./requirements.txt .
RUN pip install --upgrade pip
RUN pip install --no-cache-dir -r requirements.txt --use-deprecated=legacy-resolver
# Don't use terminal buffering, print all to stdout / err right away
ENV PYTHONUNBUFFERED 1
and then build this image using
docker build -t my_base_img:latest -f base_dockerfile .
then the final Dockerfile
FROM my_base_img:latest
WORKDIR /app
COPY . .
With the image built this way, when I couldn't bring the container up because of issues with my copied Python code, I could edit the code inside the container to fix those issues; by this means I avoided building the image again and again.
Once my code was fixed, I copied the changes from the container back to my code base, and then finally I created the final image.
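For reference, copying a fixed file back out of the running container can be done with docker cp (the paths here are hypothetical):
# Copy a file edited inside the container back into the local code base
docker cp <container-id>:/app/fixed_module.py ./fixed_module.py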
There are 4 steps:
Start the image you want to edit (e.g. docker run ...)
Modify the running image by shelling into it with docker exec -it <container-id> (you can get the container id with docker ps)
Make any modifications (install new things, make a directory or file)
In a new terminal tab/window run docker commit c7e6409a22bf my-new-image (substituting in the container id of the container you want to save)
An example
# Run an existing image
docker run -dt existing_image
# See that it's running
docker ps
# CONTAINER ID IMAGE COMMAND CREATED STATUS
# c7e6409a22bf existing-image "R" 6 minutes ago Up 6 minutes
# Shell into it
docker exec -it c7e6409a22bf bash
# Make a new directory for demonstration purposes
# (note that this is inside the existing image)
mkdir NEWDIRECTORY
# Open another terminal tab/window, and save the running container you modified
docker commit c7e6409a22bf my-new-image
# Inspect to ensure it saved correctly
docker image ls
# REPOSITORY TAG IMAGE ID CREATED SIZE
# existing-image latest a7dde5d84fe5 7 minutes ago 888MB
# my-new-image latest d57fd15d5a95 2 minutes ago 888MB