I have created a Dockerfile to containerise my Dart application. It creates a main application image, installs all the necessary packages and copies my project into the container.
I have three packages to build that are next to each other, like so:
-- packages
---- package1
---- package2
---- package3
so I have used melos to facilitate building everything in one go.
However, when I then try to run melos bootstrap, it fails with an error whose stack trace runs from the melos package itself all the way down to a pubspec loadFile parser:
# melos bs
Unhandled exception:
Null check operator used on a null value
#0 new JsonParser (package:pubspec/src/json_utils.dart:87:48)
#1 parseJson (package:pubspec/src/json_utils.dart:13:5)
#2 new PubSpec.fromJson (package:pubspec/src/pubspec.dart:96:15)
#3 PubSpec.loadFile (package:pubspec/src/pubspec.dart:128:15)
<asynchronous suspension>
#4 PackageMap.resolvePackages.<anonymous closure> (package:melos/src/package.dart:542:25)
<asynchronous suspension>
#5 Future.wait.<anonymous closure> (dart:async/future.dart:522:21)
<asynchronous suspension>
#6 PackageMap.resolvePackages (package:melos/src/package.dart:539:5)
<asynchronous suspension>
#7 MelosWorkspace.fromConfig (package:melos/src/workspace.dart:65:25)
<asynchronous suspension>
#8 _Melos.createWorkspace (package:melos/src/commands/runner.dart:100:13)
<asynchronous suspension>
#9 _BootstrapMixin.bootstrap (package:melos/src/commands/bootstrap.dart:5:23)
<asynchronous suspension>
#10 CommandRunner.runCommand (package:args/command_runner.dart:212:13)
<asynchronous suspension>
#11 MelosCommandRunner.runCommand (package:melos/src/command_runner.dart:80:5)
<asynchronous suspension>
#12 main (file:///root/.pub-cache/hosted/pub.dev/melos-2.9.0/bin/melos.dart:46:5)
<asynchronous suspension>
When I run it locally I don't appear to have this problem; it only happens within the Docker container. Most of the similar questions online refer to code that uses the null check operator "!" on a variable that is already null. In this case, however, the file the error references is not my own but comes from an internal package.
I am new to Dart and melos and was wondering if anybody has come across this error, and whether they have any hints on why it is raised within a container but not when run locally.
Thank you in advance for any help.
My melos.yaml:
packages:
  - packages/**

scripts:
  analyze:
    exec: dart analyze .
  get:
    exec: dart pub get
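For reference, scripts declared with exec like this are run through melos itself; a minimal sketch of how they would typically be invoked (assuming melos is activated and on PATH, as in the Dockerfile below):

melos bootstrap     # link the local packages together first
melos run get       # runs "dart pub get" in every matched package
melos run analyze   # runs "dart analyze ." in every matched package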
My Dockerfile:
### main application image ###
FROM dart:stable
RUN apt-get update && \
apt-get upgrade -y && \
apt-get install -y cmake build-essential gperf libssl-dev zlib1g-dev \
&& rm -rf /var/lib/apt/lists/*
# Install packages
RUN apt update \
&& apt install -y \
libc++-dev \
&& rm -rf /var/lib/apt/lists/*
RUN rm -rf /app && mkdir -p /app
COPY . /app
ENV PATH="/root/.pub-cache/bin:${PATH}"
WORKDIR /app
RUN rm -rf packages/td_json_client/build && \
mkdir packages/td_json_client/build && \
cd packages/td_json_client/build && \
cmake -DCMAKE_INSTALL_PREFIX:PATH=/app/packages/td_json_client/lib/src/blobs/darwin/arm64 /app/packages/td_json_client/lib/src/log_callback && \
cmake --build . --target install
RUN ls /app/packages/td_json_client/build
RUN ls /app/packages/td_json_client/lib/src/blobs/darwin/arm64
RUN cp -r /app/packages/td_json_client/lib/src/blobs/darwin/arm64 /usr/local/lib/
RUN dart pub global activate melos
# I commented out the code below so I could sh into the docker image
# RUN melos bootstrap
# RUN melos run get
# CMD ["/app/packages/cli/bin/main.dart","login","-h","--api-id=","$API_ID", \
# "--api-hash=","$API_HASH","--phone-number=","$PHONE", \
# "--libtdjson-path=","$PATH_TD_JSON_LIB","--libtdjson-loglevel=","$LOG_LEVEL"]
I tried to verify whether the error came from anywhere within my own code by adding the --scope option, e.g.
melos bootstrap --scope package/package1
melos bootstrap --scope package/package2
melos bootstrap --scope package/package3
but they all yielded the same error, which suggests that something in my setup is not configured correctly.
I was expecting to see something similar to what I get locally, which is:
Running "dart pub get" in workspace packages...
✓ <packagename1>
└> packages/<packagename1>
✓ <packagename2>
└> packages/<packagename2>
✓ <packagename3>
└> packages/<packagename3>
Linking workspace packages...
> SUCCESS
Generating IntelliJ IDE files...
> SUCCESS
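In case it is useful for diagnosing this, here is a rough sketch of how I can inspect the pubspec.yaml files melos would discover inside the container, since the stack trace dies in PubSpec.loadFile (the image name is a placeholder):

# print every pubspec.yaml under /app/packages; an empty or truncated file would explain the parser failing
docker run --rm <image-name> sh -c 'find /app/packages -name pubspec.yaml -print -exec cat {} \;'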
Related
I'm trying to install Python 3.7.2 from source within my Docker image.
However, the build doesn't seem to find my ./configure file. The file must have been copied, because the git checkout works. Any ideas why I'm getting a "not found" error?
Dockerfile
RUN apt-get install -y --no-install-recommends zlib1g-dev libffi-dev
RUN apt-get install -y --no-install-recommends libssl-dev
#RUN git clone git#github.com:python/cpython.get
WORKDIR /root
COPY cpython/ .
WORKDIR /root/cpython
RUN git checkout -f v3.7.2 && ./configure --prefix="$HOME/python/v3.7.2" && make && make install
Dockerfile build
=> CACHED [12/55] WORKDIR /root 0.0s
=> [13/55] COPY cpython/ . 3.9s
=> [14/55] WORKDIR /root/cpython 0.1s
=> ERROR [15/55] RUN git checkout -f v3.7.2 && ./configure --prefix="$HOME/python/v3.7.2" && make && make insta 2.6s
------
> [15/55] RUN git checkout -f v3.7.2 && ./configure --prefix="$HOME/python/v3.7.2" && make && make install:
#19 2.230 Note: checking out 'v3.7.2'.
#19 2.230
#19 2.230 You are in 'detached HEAD' state. You can look around, make experimental
#19 2.230 changes and commit them, and you can discard any commits you make in this
#19 2.230 state without impacting any branches by performing another checkout.
#19 2.230
#19 2.230 If you want to create a new branch to retain commits you create, you may
#19 2.230 do so (now or later) by using -b with the checkout command again. Example:
#19 2.230
#19 2.230 git checkout -b new_branch_name
#19 2.230
#19 2.230 HEAD is now at 9a3ffc0... 3.7.2final
#19 2.236 /bin/sh: 1: ./configure: not found
------
executor failed running [/bin/sh -c git checkout -f v3.7.2 && ./configure --prefix="$HOME/python/v3.7.2" && make && make install]: exit code: 127
make: *** [Makefile:23: build] Error 1
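A purely diagnostic step that could be added right after the COPY in the Dockerfile above to confirm what actually landed in the image (paths taken from that Dockerfile; remove it once the cause is found):

# temporary diagnostics: is configure present, executable, and does it have a Unix shebang?
RUN ls -la /root/cpython/configure || true
RUN head -c 64 /root/cpython/configure | od -c | head || true   # CRLF line endings would also produce "not found"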
I have the following Dockerfile that works on amd64. However, there does not appear to be a google-chrome-stable package for ARM (see https://pkgs.org/search/?q=google-chrome); specifically, I get the following error when I try to do a docker build:
=> ERROR [3/5] RUN microdnf install -y google-chrome-stable-102.0.5005.61-1 && sed -i 's/"$HERE\/chrome"/"$HERE\/chrome" --no-sandbox/g' /opt/google/chrome/google-chrome 18.7s
------
> [3/5] RUN microdnf install -y google-chrome-stable-102.0.5005.61-1 && sed -i 's/"$HERE\/chrome"/"$HERE\/chrome" --no-sandbox/g' /opt/google/chrome/google-chrome:
#7 0.234 Downloading metadata...
#7 9.135 Downloading metadata...
#7 16.51 Downloading metadata...
#7 18.63 error: Could not depsolve transaction; 1 problem detected:
#7 18.63 Problem: conflicting requests
#7 18.63 - package google-chrome-stable-102.0.5005.61-1.x86_64 does not have a compatible architecture
#7 18.63 - nothing provides libm.so.6(GLIBC_2.2.5)(64bit) needed by google-chrome-stable-102.0.5005.61-1.x86_64
#7 18.63 - nothing provides ld-linux-x86-64.so.2(GLIBC_2.2.5)(64bit) needed by google-chrome-stable-102.0.5005.61-1.x86_64
#7 18.63 - nothing provides libpthread.so.0(GLIBC_2.2.5)(64bit) needed by google-chrome-stable-102.0.5005.61-1.x86_64
#7 18.63 - nothing provides libdl.so.2(GLIBC_2.2.5)(64bit) needed by google-chrome-stable-102.0.5005.61-1.x86_64
#7 18.63 - nothing provides librt.so.1(GLIBC_2.2.5)(64bit) needed by google-chrome-stable-102.0.5005.61-1.x86_64
#7 18.63 - nothing provides libpthread.so.0(GLIBC_2.3.2)(64bit) needed by google-chrome-stable-102.0.5005.61-1.x86_64
#7 18.63 - nothing provides libpthread.so.0(GLIBC_2.12)(64bit) needed by google-chrome-stable-102.0.5005.61-1.x86_64
#7 18.63 - nothing provides libpthread.so.0(GLIBC_2.3.4)(64bit) needed by google-chrome-stable-102.0.5005.61-1.x86_64
#7 18.63 - nothing provides ld-linux-x86-64.so.2(GLIBC_2.3)(64bit) needed by google-chrome-stable-102.0.5005.61-1.x86_64
#7 18.63 - nothing provides ld-linux-x86-64.so.2()(64bit) needed by google-chrome-stable-102.0.5005.61-1.x86_64
#7 18.63 - nothing provides libpthread.so.0(GLIBC_2.3.3)(64bit) needed by google-chrome-stable-102.0.5005.61-1.x86_64
Specifically, note "package google-chrome-stable-102.0.5005.61-1.x86_64 does not have a compatible architecture". This makes sense, as x86_64 is the Intel/AMD architecture and I am building on ARM. My question is: is there a different package that I can use to get the equivalent of google-chrome-stable?
The full Dockerfile (which works on amd64, not ARM) is:
FROM maven:3.6.3-openjdk-15
# Google Chrome
ARG CHROME_VERSION=96.0.4664.45-1
ADD google-chrome.repo /etc/yum.repos.d/google-chrome.repo
RUN microdnf install -y google-chrome-stable-$CHROME_VERSION \
&& sed -i 's/"$HERE\/chrome"/"$HERE\/chrome" --no-sandbox/g' /opt/google/chrome/google-chrome
## ChromeDriver
ARG CHROME_DRIVER_VERSION=96.0.4664.45
RUN microdnf install -y unzip \
&& curl -s -o /tmp/chromedriver.zip https://chromedriver.storage.googleapis.com/$CHROME_DRIVER_VERSION/chromedriver_linux64.zip \
&& unzip /tmp/chromedriver.zip -d /opt \
&& rm /tmp/chromedriver.zip \
&& mv /opt/chromedriver /opt/chromedriver-$CHROME_DRIVER_VERSION \
&& chmod 755 /opt/chromedriver-$CHROME_DRIVER_VERSION \
&& ln -s /opt/chromedriver-$CHROME_DRIVER_VERSION /usr/bin/chromedriver
ENV CHROMEDRIVER_PORT 4444
ENV CHROMEDRIVER_WHITELISTED_IPS "127.0.0.1"
ENV CHROMEDRIVER_URL_BASE ''
EXPOSE 4444
EXPOSE 8080
EXPOSE 5005
ARG JAR_FILE=target/*.jar
COPY ${JAR_FILE} app.jar
ENTRYPOINT ["java","-jar", "-Xmx600m","/app.jar"]
However, note that the above file does not let me specify the JDK version. I am trying to be able to specify the JDK version and also get Selenium to run.
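For context, one way to keep using the x86_64 google-chrome-stable package on an ARM host is to force an amd64 build under emulation; this is only a sketch (it assumes Docker Desktop with QEMU/buildx support, the image tag is arbitrary, and emulated Chrome can be slow):

# build, and later run, the image as linux/amd64 on an ARM machine via emulation
docker build --platform linux/amd64 -t chrome-selenium .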
I want to use Jupyter Notebook via Anaconda3 in a Docker Ubuntu image, but when I create an image with docker build --platform linux/amd64 . the following error occurs.
How can I solve it?
#8 141.1
#8 141.1 2022-05-05 04:21:55 (3.62 MB/s) - 'Anaconda3-2019.10-Linux-x86_64.sh' saved [530308481/530308481]
#8 141.1
#8 141.3 PREFIX=/opt/anaconda3
#8 143.4 Unpacking payload ...
[96] Failed to execute script entry_point
concurrent.futures.process._RemoteTraceback:
#8 151.7 '''
#8 151.7 Traceback (most recent call last):
#8 151.7 File "concurrent/futures/process.py", line 367, in _queue_management_worker
#8 151.7 File "multiprocessing/connection.py", line 251, in recv
#8 151.7 TypeError: __init__() missing 1 required positional argument: 'msg'
#8 151.7 '''
#8 151.7
#8 151.7 The above exception was the direct cause of the following exception:
#8 151.7
#8 151.7 Traceback (most recent call last):
#8 151.7 File "entry_point.py", line 71, in <module>
#8 151.7 File "concurrent/futures/process.py", line 483, in _chain_from_iterable_of_lists
#8 151.7 File "concurrent/futures/_base.py", line 598, in result_iterator
#8 151.7 File "concurrent/futures/_base.py", line 435, in result
#8 151.7 File "concurrent/futures/_base.py", line 384, in __get_result
#8 151.7 concurrent.futures.process.BrokenProcessPool: A process in the process pool was terminated abruptly while the future was running or pending.
------
executor failed running [/bin/sh -c wget https://repo.anaconda.com/archive/Anaconda3-2019.10-Linux-x86_64.sh && sh /opt/Anaconda3-2019.10-Linux-x86_64.sh -b -p /opt/anaconda3 && rm -f Anaconda3-2019.10-Linux-x86_64.sh]: exit code: 1
The Dockerfile is shown below:
FROM ubuntu:latest
# update
RUN apt-get -y update && apt-get install -y \
sudo \
wget \
vim
#install anaconda3
WORKDIR /opt
# download anaconda package and install anaconda
# archive -> https://repo.continuum.io/archive/
RUN wget https://repo.anaconda.com/archive/Anaconda3-2019.10-Linux-x86_64.sh && \
sh /opt/Anaconda3-2019.10-Linux-x86_64.sh -b -p /opt/anaconda3 && \
rm -f Anaconda3-2019.10-Linux-x86_64.sh
# set path
ENV PATH /opt/anaconda3/bin:$PATH
# update pip and conda
RUN pip install --upgrade pip
WORKDIR /
RUN mkdir /work
# execute jupyterlab as a default command
CMD ["jupyter", "lab", "--ip=0.0.0.0", "--allow-root", "--LabApp.token=''"]
macOS 12.3.1, Apple M1
Docker Desktop for Mac 4.7.1
It can be a space issue during the installation; I was getting the exact same error. If you accept the default option to install Anaconda on the default path, Anaconda is installed in your user home directory, where space can be an issue. For me it was resolved when I changed the default directory to another directory during installation. You can also refer to this page for similar issues: https://github.com/conda/conda/issues/10143
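As a quick way to confirm whether free space is actually the problem, a throwaway diagnostic could be added to the Dockerfile above just before the installer runs:

# temporary diagnostic: how much space is available where the installer downloads and unpacks?
RUN df -h /opt /tmp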
Pretty new to Docker; I'm trying to get a base layer set up in Docker, but it gives me these errors:
It notes that the repository is failing; how do I set that repository?
I don't think it's an AWS issue, as I have been able to see the AWS push in CloudFormation.
$./generate_base_layer.sh
Error: No such container: layer-container
[+] Building 27.7s (6/13)
=> [internal] load build definition from Dockerfile 0.0s
=> => transferring dockerfile: 551B 0.0s
=> [internal] load .dockerignore 0.0s
=> => transferring context: 2B 0.0s
=> [internal] load metadata for docker.io/library/amazonlinux:2 0.9s
=> [auth] library/amazonlinux:pull token for registry-1.docker.io 0.0s
=> [internal] load build context 0.0s
=> => transferring context: 37B 0.0s
=> [2/8] RUN yum install -y python37 && yum install -y python3-pip && yum install -y 26.7s
=> => # Loaded plugins: ovl, priorities
> [2/8] RUN yum install -y python37 && yum install -y python3-pip && yum install -y zip && yum clean all:
#6 0.369 Loaded plugins: ovl, priorities
#6 36.47
#6 36.47
#6 36.47 One of the configured repositories failed (Unknown),
#6 36.47 and yum doesn't have enough cached data to continue. At this point the only
#6 36.47 safe thing yum can do is fail. There are a few ways to work "fix" this:
#6 36.47
#6 36.47 1. Contact the upstream for the repository and get them to fix the problem.
#6 36.47
#6 36.47 2. Reconfigure the baseurl/etc. for the repository, to point to a working
#6 36.47 upstream. This is most often useful if you are using a newer
#6 36.47 distribution release than is supported by the repository (and the
#6 36.47 packages for the previous distribution release still work).
#6 36.47
#6 36.47 3. Run the command with the repository temporarily disabled
#6 36.47 yum --disablerepo=<repoid> ...
#6 36.47
#6 36.47 4. Disable the repository permanently, so yum won't use it by default. Yum
#6 36.47 will then just ignore the repository until you permanently enable it
#6 36.47 again or use --enablerepo for temporary usage:
#6 36.47
#6 36.47 yum-config-manager --disable <repoid>
#6 36.47 or
#6 36.47 subscription-manager repos --disable=<repoid>
#6 36.47
#6 36.47 5. Configure the failing repository to be skipped, if it is unavailable.
#6 36.47 Note that yum will try to contact the repo. when it runs most commands,
#6 36.47 so will have to try and fail each time (and thus. yum will be be much
#6 36.47 slower). If it is a very temporary problem though, this is often a nice
#6 36.47 compromise:
#6 36.47
#6 36.47 yum-config-manager --save --setopt=<repoid>.skip_if_unavailable=true
#6 36.47
#6 36.47 Cannot find a valid baseurl for repo: amzn2-core/2/aarch64
#6 36.47 Could not retrieve mirrorlist http://amazonlinux.default.amazonaws.com/2/core/latest/aarch64/mirror.list error was
#6 36.47 12: Timeout on http://amazonlinux.default.amazonaws.com/2/core/latest/aarch64/mirror.list: (28, 'Failed to connect to amazonlinux.default.amazonaws.com port 80 after 4723 ms: Connection timed out')
------
executor failed running [/bin/sh -c yum install -y python37 && yum install -y python3-pip && yum install -y zip && yum clean all]: exit code: 1
Unable to find image 'base-layer:latest' locally
docker: Error response from daemon: pull access denied for base-layer, repository does not exist or may require 'docker login': denied: requested access to the resource is denied.
See 'docker run --help'.
Error: No such container:path: layer-container:layer.zip
I've already logged in to Docker and tried it with docker build -t ...; same issue.
Dockerfile:
FROM amazonlinux:2
# Install Python
RUN yum install -y python37 && \
yum install -y python3-pip && \
yum install -y zip && \
yum clean all
# Set up PIP and Venv
RUN python3.7 -m pip install --upgrade pip && \
python3.7 -m pip install virtualenv
RUN python3.7 -m venv base
RUN source base/bin/activate
# Install dependencies
COPY ./requirements.txt .
RUN pip install -r requirements.txt -t ./python
# Zip it up for deployment.
RUN zip -r layer.zip ./python/
ENTRYPOINT ["/bin/bash", "-l"]
My generate_base_layer.sh file:
# Generates a base layer for the Lambda functions.
# Remove the container first (if it exists).
docker rm layer-container
# Build the base layer.
docker build -t base-layer .
# Rename it to layer-container.
docker run --name layer-container base-layer
# Copy the generated zip artifact so our CDK can use it.
docker cp layer-container:layer.zip . && echo "Created layer.zip with updated base layer."
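Since the failure above is a timeout fetching the aarch64 mirror list, a minimal connectivity check against the very URL from the error output may help separate a network problem from a packaging one (purely diagnostic, and it assumes curl is present in the base image):

# can the yum mirror list be reached from inside a plain amazonlinux:2 container?
docker run --rm amazonlinux:2 curl -sS -m 10 http://amazonlinux.default.amazonaws.com/2/core/latest/aarch64/mirror.list | head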
I've been using the following Docker image (condensed for brevity) for a long time:
FROM elixir:1.11
ARG USER
ARG GROUP
ARG UID=1000
ARG GID=1000
ARG POSTGRESQL_VERSION=13
ARG POSTGRESQL_CLUSTER=my-cluster
ARG POSTGRESQL_PORT=5432
ARG POSTGRESQL_DIR=/etc/postgresql/$POSTGRESQL_VERSION/$POSTGRESQL_CLUSTER
ARG DEBIAN_FRONTEND=noninteractive
RUN set -xe \
&& ln -sf /usr/share/zoneinfo/Portugal /etc/localtime \
&& groupadd -g $GID $GROUP \
&& useradd -r -u $UID -g $GROUP -m -s /bin/bash -c "Docker image user" $USER \
&& apt-get update \
&& apt-get install -y lsb-release cmake \
&& echo "deb http://apt.postgresql.org/pub/repos/apt/ `lsb_release -cs`-pgdg main" | tee /etc/apt/sources.list.d/pgdg.list \
&& echo "deb http://deb.debian.org/debian `lsb_release -cs`-backports bullseye main" | tee -a /etc/apt/sources.list.d/pgdg.list \
&& echo "deb http://deb.debian.org/debian testing non-free contrib main" | tee -a /etc/apt/sources.list.d/pgdg.list \
&& wget -q -O - https://www.postgresql.org/media/keys/ACCC4CF8.asc | apt-key add - \
&& curl -sL https://deb.nodesource.com/setup_12.x | bash - \
&& apt-get update \
&& apt-get install -y postgresql-$POSTGRESQL_VERSION inotify-tools libgit2-dev vim expect nodejs lsof
However, I suddenly started experiencing the following error:
#6 31.36 Get:131 http://deb.debian.org/debian testing/main amd64 vim amd64 2:8.2.2434-3 [1494 kB]
#6 31.64 debconf: delaying package configuration, since apt-utils is not installed
#6 31.85 Fetched 175 MB in 6s (30.9 MB/s)
#6 31.87 Selecting previously unselected package gcc-11-base:amd64.
(Reading database ... 36509 files and directories currently installed.)
#6 31.90 Preparing to unpack .../gcc-11-base_11.2.0-4_amd64.deb ...
#6 31.90 Unpacking gcc-11-base:amd64 (11.2.0-4) ...
#6 31.95 Setting up gcc-11-base:amd64 (11.2.0-4) ...
#6 32.00 Selecting previously unselected package libgcc-s1:amd64.
(Reading database ... 36514 files and directories currently installed.)
#6 32.02 Preparing to unpack .../libgcc-s1_11.2.0-4_amd64.deb ...
#6 32.03 Unpacking libgcc-s1:amd64 (11.2.0-4) ...
#6 32.03 Replacing files in old package libgcc1:amd64 (1:8.3.0-6) ...
#6 32.08 Setting up libgcc-s1:amd64 (11.2.0-4) ...
(Reading database ... 36516 files and directories currently installed.)
#6 32.17 Preparing to unpack .../g++_4%3a10.2.1-1_amd64.deb ...
#6 32.18 Unpacking g++ (4:10.2.1-1) over (4:8.3.0-1) ...
#6 32.21 Preparing to unpack .../gcc_4%3a10.2.1-1_amd64.deb ...
#6 32.22 Unpacking gcc (4:10.2.1-1) over (4:8.3.0-1) ...
(Reading database ... 36516 files and directories currently installed.)
#6 32.32 Removing g++-8 (8.3.0-6) ...
#6 32.41 dpkg: gcc-8: dependency problems, but removing anyway as you requested:
#6 32.41 libtool depends on gcc | c-compiler; however:
#6 32.41 Package gcc is not configured yet.
#6 32.41 Package c-compiler is not installed.
#6 32.41 Package gcc-8 which provides c-compiler is to be removed.
#6 32.41 Package gcc which provides c-compiler is not configured yet.
#6 32.41
#6 32.41 Removing gcc-8 (8.3.0-6) ...
#6 32.44 dpkg: libgcc-8-dev:amd64: dependency problems, but removing anyway as you requested:
#6 32.44 libstdc++-8-dev:amd64 depends on libgcc-8-dev (= 8.3.0-6).
#6 32.44
#6 32.44 Removing libgcc-8-dev:amd64 (8.3.0-6) ...
(Reading database ... 36304 files and directories currently installed.)
#6 32.55 Preparing to unpack .../libc6_2.31-17_amd64.deb ...
#6 32.66 Checking for services that may need to be restarted...
#6 32.67 Checking init scripts...
#6 32.69 Unpacking libc6:amd64 (2.31-17) over (2.28-10) ...
#6 33.57 Setting up libc6:amd64 (2.31-17) ...
#6 33.60 /usr/bin/perl: error while loading shared libraries: libcrypt.so.1: cannot open shared object file: No such file or directory
#6 33.60 dpkg: error processing package libc6:amd64 (--configure):
#6 33.60 installed libc6:amd64 package post-installation script subprocess returned error exit status 127
#6 33.61 Errors were encountered while processing:
#6 33.61 libc6:amd64
#6 33.72 E: Sub-process /usr/bin/dpkg returned an error code (1)
After trying a few changes by trial and error, I believe the culprit is the command
echo "deb http://deb.debian.org/debian testing non-free contrib main" | tee -a /etc/apt/sources.list.d/pgdg.list
I'm suspicious that this is related to the recent release of Debian Bullseye, but I'm not sure. I need the testing repository to fetch a libgit2-dev version from 1.0.0 onwards (the stable repository provides 0.27.0).
Some fixes I tried were installing libssl-dev and other crypto-related packages, but nothing has worked so far. If I remove the command shown above it works, but it installs the older version of libgit2.
Any help would be appreciated.
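One direction that might help, sketched here without having been verified against this exact image, is standard apt pinning so that testing is only used when explicitly requested, e.g. for libgit2-dev, while libc6 and the rest stay on the stable release:

# keep the testing suite at a low priority so it is only used on request
RUN printf 'Package: *\nPin: release a=testing\nPin-Priority: 100\n' \
      > /etc/apt/preferences.d/limit-testing \
 && apt-get update \
 && apt-get install -y -t testing libgit2-dev
# note: apt can still pull newer dependencies from testing if libgit2-dev strictly requires them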