Cannot Run Selenium Chromedriver on M1 Mac - docker

I was able to run the following Dockerfile on my Mac with an Intel chip, but I am getting errors when I run it on a Mac with an M1. I then tried docker run --init --platform=linux/amd64 -e SPRING_PROFILES_ACTIVE=dev -e SERVER_FLAVOR=LOCAL_DEV -p 8080:8080 monolith-repo and docker buildx build --platform=linux/amd64 -t monolith-repo .. That got the Docker container to run, but I get the following error when trying to call Selenium:
org.openqa.selenium.SessionNotCreatedException: Could not start a new session. Possible causes are invalid address of the remote server or browser start-up failure.
at org.openqa.selenium.remote.RemoteWebDriver.execute(RemoteWebDriver.java:561)
at org.openqa.selenium.remote.RemoteWebDriver.startSession(RemoteWebDriver.java:230)
at org.openqa.selenium.remote.RemoteWebDriver.<init>(RemoteWebDriver.java:151)
at org.openqa.selenium.chromium.ChromiumDriver.<init>(ChromiumDriver.java:108)
at org.openqa.selenium.chrome.ChromeDriver.<init>(ChromeDriver.java:104)
at org.openqa.selenium.chrome.ChromeDriver.<init>(ChromeDriver.java:91)
at com.flockta.monolith.scraping.MakeNewWebpageScraperVersion2.getWebDriver(MakeNewWebpageScraperVersion2.java:176)
at com.flockta.monolith.scraping.MakeNewWebpageScraperVersion2.getWebpage(MakeNewWebpageScraperVersion2.java:52)
at com.flockta.monolith.job.ScrapingDataPipelineJob.getHtmls(ScrapingDataPipelineJob.java:308)
at com.flockta.monolith.job.ScrapingDataPipelineJob.processWebPage(ScrapingDataPipelineJob.java:168)
at com.flockta.monolith.job.ScrapingDataPipelineJob.processResult(ScrapingDataPipelineJob.java:153)
at com.flockta.monolith.job.ScrapingDataPipelineJob.processResult(ScrapingDataPipelineJob.java:59)
at com.flockta.monolith.job.AbstractScrapingPipelineJob.processResults(AbstractScrapingPipelineJob.java:57)
at com.flockta.monolith.job.AbstractScrapingPipelineJob.process(AbstractScrapingPipelineJob.java:30)
at com.flockta.monolith.ScheduledJobController.runDataJob(ScheduledJobController.java:71)
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:64)
at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:564)
at org.springframework.scheduling.support.ScheduledMethodRunnable.run(ScheduledMethodRunnable.java:84)
at org.springframework.scheduling.support.DelegatingErrorHandlingRunnable.run(DelegatingErrorHandlingRunnable.java:54)
at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515)
at java.base/java.util.concurrent.FutureTask.runAndReset(FutureTask.java:305)
at java.base/java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:305)
at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130)
at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:630)
at java.base/java.lang.Thread.run(Thread.java:832)
Caused by: org.openqa.selenium.WebDriverException: Driver server process died prematurely.
Build info: version: '4.1.1', revision: 'e8fcc2cecf'
System info: host: '1c789f0433ca', ip: '172.17.0.3', os.name: 'Linux', os.arch: 'amd64', os.version: '5.10.76-linuxkit', java.version: '15.0.2'
Driver info: driver.version: ChromeDriver
at org.openqa.selenium.remote.service.DriverService.start(DriverService.java:226)
at org.openqa.selenium.remote.service.DriverCommandExecutor.execute(DriverCommandExecutor.java:98)
at org.openqa.selenium.remote.RemoteWebDriver.execute(RemoteWebDriver.java:543)
... 26 common frames omitted
My full Dockerfile is:
FROM maven:3.6.3-openjdk-15
#Chrome
ARG CHROME_VERSION=98.0.4758.102-1
ADD google-chrome.repo /etc/yum.repos.d/google-chrome.repo
RUN microdnf install -y google-chrome-stable-$CHROME_VERSION \
&& sed -i 's/"$HERE\/chrome"/"$HERE\/chrome" --no-sandbox/g' /opt/google/chrome/google-chrome
## ChromeDriver
ARG CHROME_DRIVER_VERSION=98.0.4758.102
RUN microdnf install -y unzip \
&& curl -s -o /tmp/chromedriver.zip https://chromedriver.storage.googleapis.com/$CHROME_DRIVER_VERSION/chromedriver_linux64.zip \
&& unzip /tmp/chromedriver.zip -d /opt \
&& rm /tmp/chromedriver.zip \
&& mv /opt/chromedriver /opt/chromedriver-$CHROME_DRIVER_VERSION \
&& chmod 755 /opt/chromedriver-$CHROME_DRIVER_VERSION \
&& ln -s /opt/chromedriver-$CHROME_DRIVER_VERSION /usr/bin/chromedriver
ENV CHROMEDRIVER_PORT 4444
ENV CHROMEDRIVER_WHITELISTED_IPS "127.0.0.1"
ENV CHROMEDRIVER_URL_BASE ''
EXPOSE 4444
EXPOSE 8080
ARG JAR_FILE=target/*.jar
COPY ${JAR_FILE} app.jar
ENTRYPOINT ["java","-jar", "-Xmx600m","/app.jar"]
Also, when I try to run docker build without the buildx build --platform=linux/amd64 I get an error:
docker build -t monolith-repo .
[+] Building 12.0s (7/9)
=> [internal] load build definition from Dockerfile 0.0s
=> => transferring dockerfile: 37B 0.0s
=> [internal] load .dockerignore 0.0s
=> => transferring context: 2B 0.0s
=> [internal] load metadata for docker.io/library/maven:3.6.3-openjdk-15 0.3s
=> [internal] load build context 0.0s
=> => transferring context: 122B 0.0s
=> [1/5] FROM docker.io/library/maven:3.6.3-openjdk-15@sha256:aac64d9d716f5fa3926e6c8f43c680fa8404faae0b8a014c0c9b3d73d2d0f66a 0.0s
=> CACHED [2/5] ADD google-chrome.repo /etc/yum.repos.d/google-chrome.repo 0.0s
=> ERROR [3/5] RUN microdnf install -y google-chrome-stable-98.0.4758.102-1 && sed -i 's/"$HERE\/chrome"/"$HERE\/chrome" --no-sandbox/g' /opt/google/chrome/google-chrome 11.6s
------
> [3/5] RUN microdnf install -y google-chrome-stable-98.0.4758.102-1 && sed -i 's/"$HERE\/chrome"/"$HERE\/chrome" --no-sandbox/g' /opt/google/chrome/google-chrome:
#7 0.232 Downloading metadata...
#7 5.286 Downloading metadata...
#7 9.705 Downloading metadata...
#7 11.53 error: Could not depsolve transaction; 1 problem detected:
#7 11.53 Problem: conflicting requests
#7 11.53 - package google-chrome-stable-98.0.4758.102-1.x86_64 does not have a compatible architecture
#7 11.53 - nothing provides libm.so.6(GLIBC_2.2.5)(64bit) needed by google-chrome-stable-98.0.4758.102-1.x86_64
#7 11.53 - nothing provides ld-linux-x86-64.so.2(GLIBC_2.2.5)(64bit) needed by google-chrome-stable-98.0.4758.102-1.x86_64
#7 11.53 - nothing provides libpthread.so.0(GLIBC_2.2.5)(64bit) needed by google-chrome-stable-98.0.4758.102-1.x86_64
#7 11.53 - nothing provides libdl.so.2(GLIBC_2.2.5)(64bit) needed by google-chrome-stable-98.0.4758.102-1.x86_64
#7 11.53 - nothing provides librt.so.1(GLIBC_2.2.5)(64bit) needed by google-chrome-stable-98.0.4758.102-1.x86_64
#7 11.53 - nothing provides libpthread.so.0(GLIBC_2.3.2)(64bit) needed by google-chrome-stable-98.0.4758.102-1.x86_64
#7 11.53 - nothing provides libpthread.so.0(GLIBC_2.12)(64bit) needed by google-chrome-stable-98.0.4758.102-1.x86_64
#7 11.53 - nothing provides libpthread.so.0(GLIBC_2.3.4)(64bit) needed by google-chrome-stable-98.0.4758.102-1.x86_64
#7 11.53 - nothing provides ld-linux-x86-64.so.2(GLIBC_2.3)(64bit) needed by google-chrome-stable-98.0.4758.102-1.x86_64
#7 11.53 - nothing provides ld-linux-x86-64.so.2()(64bit) needed by google-chrome-stable-98.0.4758.102-1.x86_64
#7 11.53 - nothing provides libpthread.so.0(GLIBC_2.3.3)(64bit) needed by google-chrome-stable-98.0.4758.102-1.x86_64
------
executor failed running [/bin/sh -c microdnf install -y google-chrome-stable-$CHROME_VERSION && sed -i 's/"$HERE\/chrome"/"$HERE\/chrome" --no-sandbox/g' /opt/google/chrome/google-chrome]: exit code: 1
I note two things during the build:
'package google-chrome-stable-98.0.4758.102-1.x86_64 does not have a compatible architecture'
I am using chromedriver_linux64.zip (though the build never gets to that stage), even though https://chromedriver.storage.googleapis.com/ shows that there is a chromedriver_mac64_m1 as well.
Is there a solution to get Chrome working on my local machine? Specifically, I need to be able to run this on my Mac and also deploy it to AWS. I think the AWS side can be solved via buildx build --platform=linux/amd64, but I do not know how to get this to run locally. Any ideas?

The high-level issue is that many Linux packages built for amd64 (Intel/AMD) are not yet available for arm64. Specifically, see https://github.com/SeleniumHQ/docker-selenium/issues/1076 and the great work that jamesmortensen did with the https://hub.docker.com/u/seleniarm repo (specifically https://hub.docker.com/r/seleniarm/standalone-chromium/tags). To use it, do:
FROM seleniarm/standalone-chromium:4.1.1-alpha-20220119
ENV CHROMEDRIVER_PORT 4444
ENV CHROMEDRIVER_WHITELISTED_IPS "127.0.0.1"
ENV CHROMEDRIVER_URL_BASE ''
EXPOSE 4444
EXPOSE 8080
EXPOSE 5005
ARG JAR_FILE=target/*.jar
COPY ${JAR_FILE} app.jar
# For Testing
ENTRYPOINT ["java","-jar", "-Xmx600m","/app.jar"]
Java code is then:
return new ChromeDriver(service, getChromeOptions());
and
private ChromeOptions getChromeOptions() {
    ChromeOptions chromeOptions = new ChromeOptions();
    // User agent is required because some websites will reject your request if it does not have a user agent
    chromeOptions.addArguments(String.format("user-agent=%s", USER_AGENT));
    chromeOptions.addArguments("--log-level=OFF");
    chromeOptions.setHeadless(true);

    List<String> arguments = new LinkedList<>();
    arguments.add("--disable-extensions");
    arguments.add("--headless");
    arguments.add("--disable-gpu");
    arguments.add("--no-sandbox");
    arguments.add("--incognito");
    arguments.add("--disable-application-cache");
    arguments.add("--disable-dev-shm-usage");
    chromeOptions.addArguments(arguments);

    return chromeOptions;
}
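For completeness, here is a minimal, self-contained sketch of how the service passed to new ChromeDriver(service, ...) might be wired up; the class name and the /usr/bin/chromedriver path are illustrative assumptions, not taken from the original code:

import java.io.File;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeDriver;
import org.openqa.selenium.chrome.ChromeDriverService;
import org.openqa.selenium.chrome.ChromeOptions;

public class ScraperDriverFactory {
    public WebDriver createDriver() {
        // Point the service at the chromedriver binary shipped in the image.
        // /usr/bin/chromedriver is an assumed location; adjust it to your image.
        ChromeDriverService service = new ChromeDriverService.Builder()
                .usingDriverExecutable(new File("/usr/bin/chromedriver"))
                .usingAnyFreePort()
                .build();
        ChromeOptions options = new ChromeOptions();
        options.addArguments("--headless", "--no-sandbox", "--disable-dev-shm-usage");
        return new ChromeDriver(service, options);
    }
}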
Note that the standalone image ships Chromium, not Chrome, but this works because Chrome is based on Chromium and ChromeDriver can drive Chromium as well.
The root cause of this is that many packages (for example https://www.ubuntuupdates.org/package/google_chrome/stable/main/base/google-chrome-stable) do not have arm64 versions yet (they only have amd64 versions, which target Intel/AMD x86_64 CPUs).
As for Dockerfiles, I suggest having two Dockerfiles for now (much like zwbetz-gh's comment on Dec 28th, see https://github.com/SeleniumHQ/docker-selenium/issues/1076). To build the arm version you would do:
docker build -f DOCKER_FILE_ARM -t your_tag .
Although I still have to test it, for the non-arm (amd64) file you would do:
docker buildx build --platform=linux/amd64 -f DOCKER_FILE_AMD -t your_tag .
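A concrete split might look like this (the file names are placeholders; each file keeps the rest of its setup as shown above):
# Dockerfile.arm - local development on the M1 (native arm64)
FROM seleniarm/standalone-chromium:4.1.1-alpha-20220119
# ... ENV/EXPOSE/COPY/ENTRYPOINT lines as in the seleniarm Dockerfile above ...

# Dockerfile.amd - AWS deployment (x86_64), built with buildx --platform=linux/amd64
FROM maven:3.6.3-openjdk-15
# ... original Chrome + ChromeDriver install steps from the question ...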

Related

How to use docker-abuild in a Dockerfile?

I want to use docker-abuild to build imagemagick in a Dockerfile. I use the following in the Dockerfile:
FROM alpinelinux/docker-abuild as imagickbuilder
COPY imagick/APKBUILD.imagick /home/builder/package/APKBUILD
COPY imagick/APKBUILD.imagick /home/builder/APKBUILD
COPY imagick/disable-avaraging-tests.patch /home/builder/package/disable-avaraging-tests.patch
COPY imagick/webmaster@mycompany.com-5b42f8ed.rsa /home/builder/ssh.rsa
COPY imagick/webmaster@mycompany.com-5b42f8ed.rsa.pub /etc/apk/keys/ssh.rsa.pub
ARG DABUILD_ARCH=aarch64
RUN dabuild -r
# tried abuild -r as well as builder -r
Regardless of what APKBUILD file I have/use, I'm getting the following error while building with docker build -t test .:
#...
#11 [7/7] RUN dabuild -r
#11 sha256:8c6e0fa4c055b4f5bbb7f633a3b4b4009cda31017a26dc48a047fd02466ce60c
#11 0.658 /bin/sh: dabuild: not found
#11 ERROR: executor failed running [/bin/sh -c dabuild -r]: exit code: 127
------
> [7/7] RUN dabuild -r:
------
executor failed running [/bin/sh -c dabuild -r]: exit code: 127
I'm getting the same error with abuild -r and abuilder -r. Any ideas?
JFYI, I'm running this under macOS Monterey 12.2.1 with an M1 Pro MacBook Pro.

Apple M1 Docker error cc1plus: error: unknown value 'armv8-a-march=armv8-a' for -march

Getting this error while building Docker images on macOS Big Sur with an M1 chip.
What I've tried: installed Docker for Apple silicon (M1) from the Docker site.
It fails while trying to install RocksDB during the Docker build:
# docker.local
FROM golang:1.12.4-alpine3.9
RUN apk add bash build-base grep git
# Install RocksDB
RUN apk add coreutils linux-headers perl zlib-dev bzip2-dev lz4-dev snappy-dev zstd-libs zstd-dev && \
cd /tmp && \
wget -O - https://github.com/facebook/rocksdb/archive/v5.18.3.tar.gz | tar xz && \
cd /tmp/rocksdb* && \
make -j $(nproc) install-shared OPT=-g0 USE_RTTI=1 && \
rm -R /tmp/rocksdb* && \
apk del coreutils linux-headers perl
Errors:
#6 9.903 cc1plus: error: unknown value 'armv8-a-march=armv8-a' for -march
#6 9.903 cc1plus: note: valid arguments are: armv8-a armv8.1-a armv8.2-a armv8.3-a armv8.4-a native
#6 9.906 cc1plus: error: unknown value 'armv8-a-march=armv8-a' for -march
#6 9.906 cc1plus: note: valid arguments are: armv8-a armv8.1-a armv8.2-a armv8.3-a armv8.4-a native
#6 9.907 install -d /usr/local/lib
#6 9.908 CC shared-objects/cache/clock_cache.o
#6 9.908 CC shared-objects/cache/lru_cache.o
#6 9.909 CC shared-objects/cache/sharded_cache.o
#6 9.909 for header_dir in `find "include/rocksdb" -type d`; do \
#6 9.909 install -d /usr/local/$header_dir; \
#6 9.909 done
#6 9.911 cc1plus: error: unknown value 'armv8-a-march=armv8-a' for -march
#6 9.911 cc1plus: note: valid arguments are: armv8-a armv8.1-a armv8.2-a armv8.3-a armv8.4-a native
#6 9.912 make: *** [Makefile:684: shared-objects/cache/clock_cache.o] Error 1
#6 9.912 make: *** Waiting for unfinished jobs....
#6 9.912 make: *** [Makefile:684: shared-objects/cache/lru_cache.o] Error 1
#6 9.913 make: *** [Makefile:684: shared-objects/cache/sharded_cache.o] Error 1
#6 9.914 for header in `find "include/rocksdb" -type f -name *.h`; do \
#6 9.914 install -C -m 644 $header /usr/local/$header; \
#6 9.914 done
There are a couple of issues to address. The dockerfile as you have it will download a base golang ARM image, and try to use that to build. That's fine, as long as the required libs "know how" to build with an arm architecture. If they don't know how to build under arm (as seems to be the case here), you may want to try building under an AMD image of golang.
Intel / AMD containers will run under ARM docker on an M1. There are a few ways to build AMD containers on an M1. You can use buildkit, and then:
docker buildx build --platform linux/amd64 .
or, you can add the arch to the source image by modifying the Dockerfile to include something like:
FROM --platform=linux/amd64 golang:1.12.4-alpine3.9
which would use the amd64 arch of the golang image (assuming one exists). This is what I often use to build an image on ARM. This works even if docker is native ARM.
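Applied to the Dockerfile from the question, the change is just the FROM line (a sketch; the remaining RocksDB install steps stay exactly as posted):
# docker.local - pin the base image to its amd64 variant so the build uses x86_64 toolchain defaults
FROM --platform=linux/amd64 golang:1.12.4-alpine3.9
RUN apk add bash build-base grep git
# ... RocksDB install steps unchanged ...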

Rootless docker-compose cannot build timescale image

I have installed rootless Docker on an Ubuntu host machine. I have a Dockerfile for building TimescaleDB, with the most important part looking like this:
# Install the tools we need for installation
RUN apt-get update && apt-get -y install gnupg2 lsb-release wget
# Add Postgres and Timescale package repository
RUN echo "deb http://apt.postgresql.org/pub/repos/apt/ $(lsb_release -c -s)-pgdg main" | tee /etc/apt/sources.list.d/pgdg.list
RUN wget --quiet -O - https://www.postgresql.org/media/keys/ACCC4CF8.asc | apt-key add -
RUN sh -c "echo 'deb https://packagecloud.io/timescale/timescaledb/debian/ `lsb_release -c -s` main' > /etc/apt/sources.list.d/timescaledb.list"
RUN wget --quiet -O - https://packagecloud.io/timescale/timescaledb/gpgkey | apt-key add -
# Install Timescale
RUN apt-get update && apt-get -y install timescaledb-2-postgresql-12=2.0.0-zz~debian10
the corresponding docker-compose file looks like this:
timescale:
  tty: true
  volumes:
    - timescale-volume:/var/lib/postgresql/data:rw
  build:
    context: ./timescale
    dockerfile: Dockerfile
  command:
    - /bin/bash
  depends_on:
    - cert-mounter
When I run docker-compose up with sudo it works fine, the image is built and the container is running. If I execute it rootless I get the following error:
dpkg: error processing package postgresql-12 (--configure):
dependency problems - leaving unconfigured
dpkg: dependency problems prevent configuration of timescaledb-2-postgresql-12:
timescaledb-2-postgresql-12 depends on postgresql-12; however:
Package postgresql-12 is not configured yet.
dpkg: error processing package timescaledb-2-postgresql-12 (--configure):
dependency problems - leaving unconfigured
Setting up exim4-daemon-light (4.92-8+deb10u5) ...
debconf: unable to initialize frontend: Dialog
debconf: (TERM is not set, so the dialog frontend is not usable.)
debconf: falling back to frontend: Readline
invoke-rc.d: could not determine current runlevel
invoke-rc.d: policy-rc.d denied execution of start.
Initializing GnuTLS DH parameter file
Setting up libmailutils5:amd64 (1:3.5-4) ...
Setting up mailutils (1:3.5-4) ...
update-alternatives: using /usr/bin/frm.mailutils to provide /usr/bin/frm (frm) in auto mode
update-alternatives: using /usr/bin/from.mailutils to provide /usr/bin/from (from) in auto mode
update-alternatives: using /usr/bin/messages.mailutils to provide /usr/bin/messages (messages) in auto mode
update-alternatives: using /usr/bin/movemail.mailutils to provide /usr/bin/movemail (movemail) in auto mode
update-alternatives: using /usr/bin/readmsg.mailutils to provide /usr/bin/readmsg (readmsg) in auto mode
update-alternatives: using /usr/bin/dotlock.mailutils to provide /usr/bin/dotlock (dotlock) in auto mode
update-alternatives: using /usr/bin/mail.mailutils to provide /usr/bin/mailx (mailx) in auto mode
dpkg: dependency problems prevent configuration of timescaledb-2-loader-postgresql-12:
timescaledb-2-loader-postgresql-12 depends on postgresql-12; however:
Package postgresql-12 is not configured yet.
dpkg: error processing package timescaledb-2-loader-postgresql-12 (--configure):
dependency problems - leaving unconfigured
Processing triggers for libc-bin (2.28-10) ...
Processing triggers for mime-support (3.62) ...
Errors were encountered while processing:
postgresql-common
postgresql-12
timescaledb-2-postgresql-12
timescaledb-2-loader-postgresql-12
E: Sub-process /usr/bin/dpkg returned an error code (1)
The command '/bin/sh -c apt-get update && apt-get -y install timescaledb-2-postgresql-12=2.0.0-zz~debian10' returned a non-zero code: 100
ERROR: Service 'timescale' failed to build
What could be the problem? Other containers are somehow built and run rootless without problems...
So I managed to make it work. In my Dockerfile I also set the UID of a user, because I share some volumes and want the UIDs to be consistent between the containers. So at the top of my Dockerfile I had the following:
RUN useradd --uid 80000 postgres
Replacing the UID with a lower value solved the issue. (Rootless Docker remaps container UIDs into the invoking user's subordinate UID range, which by default is only 65536 IDs long, so a UID of 80000 has nothing to map to.)
RUN useradd --uid 18000 postgres
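If you want to check that limit on your host, the range rootless Docker can map is listed in /etc/subuid; a quick sketch (assuming a stock setup with the usual 65536-entry range):
# Print the subordinate UID range assigned to the current user.
# Rootless Docker can only map container UIDs into this range (user:start:count).
grep "$USER" /etc/subuid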

How to set breakpoint in Dockerfile itself?

Searching up the above shows many results about how to set breakpoints for apps running in docker containers, yet I'm interested in setting a breakpoint in the Dockerfile itself, such that the docker build is paused at the breakpoint. For an example Dockerfile:
FROM ubuntu:20.04
RUN echo "hello"
RUN echo "bye"
I'm looking for a way to set a breakpoint on the RUN echo "bye" such that when I debug this Dockerfile, the image will build non-interactively up to the RUN echo "bye" point, exclusive. After then, I would be able to interactively run commands with the container. In the actual Dockerfile I have, there are RUNs before the breakpoint that change the file system of the image being built, and I want to analyze the filesystem of the image at the breakpoint by being able to interactively run commands like cd / ls / find at the time of the breakpoint.
You can't set a breakpoint per se, but you can get an interactive shell at an arbitrary point in your build sequence (between steps).
Let's build your image:
Sending build context to Docker daemon 2.048kB
Step 1/3 : FROM ubuntu:20.04
---> 1e4467b07108
Step 2/3 : RUN echo "hello"
---> Running in 917b34190e35
hello
Removing intermediate container 917b34190e35
---> 12ebbdc1e72d
Step 3/3 : RUN echo "bye"
---> Running in c2a4a71ae444
bye
Removing intermediate container c2a4a71ae444
---> 3c52993b0185
Successfully built 3c52993b0185
Each of the lines of the form ---> 0123456789ab contains a valid image ID. So from here you can
docker run --rm -it 12ebbdc1e72d sh
which will give you an interactive shell on the partial image resulting from the first RUN command.
There's no requirement that the build as a whole succeed. If a RUN step fails, you can use this technique to get an interactive shell on the image immediately before that step and re-run the command by hand. If you have a very long RUN command, you may need to break it into two to be able to get a debugging shell at a specific point within the command sequence.
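Note that the ---> <image-id> lines in the output above come from the classic builder; if your build output does not show them, BuildKit is probably enabled, and you can fall back to the classic builder for a one-off debugging build:
# Disable BuildKit for this build so intermediate image IDs are printed
DOCKER_BUILDKIT=0 docker build .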
I don't think this is possible directly - that feature has been discussed and rejected.
What I generally do to debug a Dockerfile is to comment all of the steps after the "breakpoint", then run docker build followed by docker run -it image bash or docker run -it image sh (depending on whether you have bash installed inside the container).
Then, I have an interactive shell, and I can run commands to debug why later stages are failing.
I agree that being able to set a breakpoint and poke around would be a handy feature, though.
You can run commands in intermediate containers using Remote shell debugging tricks.
Make sure your container images include basic utilities like netcat (nc) and fuser. These utilities enable "calling home" from any intermediate container image. At home you'll answer calls with netcat (or socat). This netcat will send your commands to containers and print their outcomes. This debugging approach will work even on Dockerfiles that are built on unknown worker nodes somewhere in the cloud.
Example:
FROM debian:testing-slim
# Set environment variables for calling home from breakpoints (BP)
ENV BP_HOME=<IP-ADDRESS-OF-YOUR-HOST>
ENV BP_PORT=33720
ENV BP_CALLHOME='BP_FIFO=/tmp/$BP.$BP_HOME.$BP_PORT; (rm -f $BP_FIFO; mkfifo $BP_FIFO) && (echo "\"c\" continues"; echo -n "($BP) "; tail -f $BP_FIFO) | nc $BP_HOME $BP_PORT | while read cmd; do if test "$cmd" = "c" ; then echo -n "" >$BP_FIFO; sleep 0.1; fuser -k $BP_FIFO >/dev/null 2>&1; break; else eval $cmd >$BP_FIFO 2>&1; echo -n "($BP) " >$BP_FIFO; fi; done'
# Install needed utils (netcat, fuser)
RUN apt update && apt install -y netcat psmisc
# Now you are ready to run "eval $BP_CALLHOME" wherever you want to call home.
RUN BP=before-hello eval $BP_CALLHOME
RUN echo "hello"
RUN BP=after-hello eval $BP_CALLHOME
RUN echo "bye"
Start waiting for and answering calls from the Dockerfile before launching the Docker build. On the home host, run nc -k -l -p 33720 (alternatively socat STDIN TCP-LISTEN:33720,reuseaddr,fork).
This is how the above example looks at home:
$ nc -k -l -p 33720
"c" continues
(before-hello) echo *
bin boot dev etc home lib lib64 media mnt opt proc root run sbin srv sys tmp usr var
(before-hello) id
uid=0(root) gid=0(root) groups=0(root)
(before-hello) c
"c" continues
(after-hello)
...
The recent (May 2022) project ktock/buildg offers breakpoints.
See "Interactive debugger for Dockerfile" from Kohei Tokunaga
buildg is a tool to interactively debug Dockerfile based on BuildKit.
Source-level inspection
Breakpoints and step execution
Interactive shell on a step with your own debugging tools
Based on BuildKit (needs unmerged patches)
Supports rootless
The command break, b LINE_NUMBER sets a breakpoint.
Example:
$ buildg.sh debug --image=ubuntu:22.04 /tmp/ctx
WARN[2022-05-09T01:40:21Z] using host network as the default
#1 [internal] load .dockerignore
#1 transferring context: 2B done
#1 DONE 0.1s
#2 [internal] load build definition from Dockerfile
#2 transferring dockerfile: 195B done
#2 DONE 0.1s
#3 [internal] load metadata for docker.io/library/busybox:latest
#3 DONE 3.0s
#4 [build1 1/2] FROM docker.io/library/busybox@sha256:d2b53584f580310186df7a2055ce3ff83cc0df6caacf1e3489bff8cf5d0af5d8
#4 resolve docker.io/library/busybox@sha256:d2b53584f580310186df7a2055ce3ff83cc0df6caacf1e3489bff8cf5d0af5d8 0.0s done
#4 sha256:50e8d59317eb665383b2ef4d9434aeaa394dcd6f54b96bb7810fdde583e9c2d1 772.81kB / 772.81kB 0.2s done
Filename: "Dockerfile"
2| RUN echo hello > /hello
3|
4| FROM busybox AS build2
=> 5| RUN echo hi > /hi
6|
7| FROM scratch
8| COPY --from=build1 /hello /
>>> break 2
>>> breakpoints
[0]: line 2
>>> continue
#4 extracting sha256:50e8d59317eb665383b2ef4d9434aeaa394dcd6f54b96bb7810fdde583e9c2d1 0.0s done
#4 DONE 0.3s
...
From PR 24:
Add --cache-reuse option which allows sharing the build cache among invocation of buildg debug to make the 2nd-time debugging faster.
This is useful to speed up running buildg multiple times for debugging an errored step.
Note that breakpoints on cached steps are ignored as of now.
Because of this limitation, this feature is optional as of now. We should fix this limitation and make it the default behaviour in the future.
Man, Docker makes things hard. Here's a workaround I cooked up:
Insert FROM scratch where you want the break point.
Run docker build . --target=<stage>, where <stage> is the name of the build stage immediately before your "breakpoint". If that stage isn't named yet, give it a name with FROM <image> AS <stage> first.
Docker has cached all your successful layers anyway (even if you can't see them), and because the FROM "breakpoint" comes before the (potentially unsuccessful) point of interest, the build should all come from cache and be very fast.
So for example, if my Dockerfile looks like this:
FROM debian:bullseye AS build
RUN apt-get update && apt-get install -y \
build-essential cmake ninja-build \
libfontconfig1-dev libdbus-1-dev libfreetype6-dev libicu-dev libinput-dev libxkbcommon-dev libsqlite3-dev libssl-dev libpng-dev libjpeg-dev libglib2.0-dev
<SNIP lots of other setup commands>
ADD my_source.tar.xz /
WORKDIR /my_source
RUN ./configure -option1 -option2
RUN cmake --build . --parallel
RUN cmake --install .
FROM alpine
COPY --from=build /my_build /my_build
...
Then I can add a "breakpoint" like this:
FROM debian:bullseye AS build
RUN apt-get update && apt-get install -y \
build-essential cmake ninja-build \
libfontconfig1-dev libdbus-1-dev libfreetype6-dev libicu-dev libinput-dev libxkbcommon-dev libsqlite3-dev libssl-dev libpng-dev libjpeg-dev libglib2.0-dev
<SNIP lots of other setup commands>
ADD my_source.tar.xz /
WORKDIR /my_source
#### BREAKPOINT ###
FROM scratch
#### BREAKPOINT ###
RUN ./configure -option1 -option2
RUN cmake --build . --parallel
RUN cmake --install .
FROM alpine
COPY --from=build /my_build /my_build
...
and trigger it with docker build . --target=build

fastlane - error at google cloud build: "OCI runtime create failed: container_linux.go:345"

I'm using a fastlane container stored in Google Container Registry to upload an APK to the Google Play Store using Google Cloud Build.
The APK is created successfully. However, when processing the last step (fastlane), it fails with the following errors:
Step #2: 487ea6dabc0c: Pull complete
Step #2: a7ae4fee33c9: Pull complete
Step #2: Digest: sha256:2e31d5ae64984a598856f1138c6be0577c83c247226c473bb5ad302f86267545
Step #2: Status: Downloaded newer image for gcr.io/myapp789-app/fastlane:latest
Step #2: gcr.io/myapp789-app/fastlane:latest
Step #2: docker: Error response from daemon: OCI runtime create failed: container_linux.go:345: starting container process caused "exec: \"supply\": executable file not found in $PATH": unknown.
Step #2: time="2019-08-29T23:22:55Z" level=error msg="error waiting for container: context canceled"
Finished Step #2
ERROR
ERROR: build step 2 "gcr.io/myapp789-app/fastlane" failed: exit status 127
Note:
1) The Docker source file was taken from https://hub.docker.com/r/fastlanetools/fastlane and then I built my own image.
2) The Docker image was built on a Google Cloud VM using Debian GNU/Linux 9 (stretch).
Docker Source File for fastlane:
# Final image #
###############
FROM circleci/ruby:latest
MAINTAINER milch
ENV PATH $PATH:/usr/local/itms/bin
# Java versions to be installed
ENV JAVA_VERSION 8u131
ENV JAVA_DEBIAN_VERSION 8u131-b11-1~bpo8+1
ENV CA_CERTIFICATES_JAVA_VERSION 20161107~bpo8+1
# Needed for fastlane to work
ENV LANG C.UTF-8
ENV LC_ALL C.UTF-8
# Required for iTMSTransporter to find Java
ENV JAVA_HOME /usr/lib/jvm/java-8-openjdk-amd64/jre
USER root
# iTMSTransporter needs java installed
# We also have to install make to install xar
# And finally shellcheck
RUN echo 'deb http://archive.debian.org/debian jessie-backports main' > /etc/apt/sources.list.d/jessie-backports.list \
&& apt-get -o Acquire::Check-Valid-Until=false update \
&& apt-get install --yes \
make \
shellcheck \
&& apt-get clean \
&& rm -rf /var/lib/apt/lists/*
USER circleci
COPY --from=xar_builder /tmp/xar /tmp/xar
RUN cd /tmp/xar \
&& sudo make install \
&& sudo rm -rf /tmp/*
CloudBuild.yaml:
- name: 'gcr.io/$PROJECT_ID/fastlane'
  args: ['supply', '--package_name', '${_ANDROID_PACKAGE_NAME}', '--track', '${_ANDROID_RELEASE_CHANNEL}', '--json_key_data', '${_GOOGLE_PLAY_UPLOAD_KEY_JSON}', '--apk', '/workspace/${_REPO_NAME}/build/app/outputs/bundle/release/app.aab']
  timeout: 1200s
Any Idea to solve this?
I solved this by building the Docker image from Google Cloud's official Docker source rather than the fastlane one on hub.docker.com (which hasn't been updated in 5 months).
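If rebuilding the image is not an option, the error itself ("supply": executable file not found in $PATH) suggests the image defines no ENTRYPOINT, so Cloud Build tries to run supply as a binary. A possible workaround (an untested sketch, assuming fastlane is actually installed in the image) is to set the entrypoint explicitly in the build step:
- name: 'gcr.io/$PROJECT_ID/fastlane'
  entrypoint: 'fastlane'   # run "fastlane supply ..." instead of looking up a bare "supply" binary
  args: ['supply', '--package_name', '${_ANDROID_PACKAGE_NAME}', '--track', '${_ANDROID_RELEASE_CHANNEL}', '--json_key_data', '${_GOOGLE_PLAY_UPLOAD_KEY_JSON}', '--apk', '/workspace/${_REPO_NAME}/build/app/outputs/bundle/release/app.aab']
  timeout: 1200s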
