I am trying to install the GCP Profiler agent for my app which runs in GKE, following instructions here: https://cloud.google.com/profiler/docs/profiling-java
I can't get past this error. Can someone help?
Could not find agent library /opt/cprof/profiler_java_agent.so in
absolute path, with error: Error relocating
/opt/cprof/profiler_java_agent.so: __pthread_key_create: initial-exec
TLS resolves to dynamic definition in
/opt/cprof/profiler_java_agent.so
This is the Dockerfile:
FROM openjdk:8-jdk-alpine
RUN apk update && apk add --no-cache gcompat libc6-compat
WORKDIR /app
# The application's jar file
ARG JAR_FILE=target/example-svc-*.jar
# Add the application's jar to the container
ADD ${JAR_FILE} example-svc.jar
EXPOSE 5050
RUN mkdir -p /opt/cprof && \
wget -q -O- https://storage.googleapis.com/cloud-profiler/java/latest/profiler_java_agent.tar.gz \
| tar xzv -C /opt/cprof
ENTRYPOINT ["java", \
"-agentpath:/opt/cprof/profiler_java_agent.so=-cprof_service=example-svc,-cprof_service_version=0.0.1-SNAPSHOT", \
"-jar", "/app/example-svc.jar"]
The problem appears to be the base version of the container image you are working from. Looking at your Dockerfile, you are starting from:
openjdk:8-jdk-alpine
Digging into the docs of this, we find:
The main caveat to note is that it does use musl libc instead of glibc
and friends, so certain software might run into issues depending on
the depth of their libc requirements.
(Reference: openjdk)
Now if we look at the Google docs found here, we find the following requirement defined:
Supported operating systems:
Linux versions whose standard C library is implemented with glibc.
... and this appears to be a conflict. Please try an alternate openjdk image that is not based on Alpine.
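For example, here's a minimal, untested sketch of the same Dockerfile on a Debian-based (glibc) tag such as openjdk:8-jdk-slim; the gcompat/libc6-compat workarounds are no longer needed, and wget is installed explicitly since slim images don't ship it:
FROM openjdk:8-jdk-slim
WORKDIR /app
ARG JAR_FILE=target/example-svc-*.jar
ADD ${JAR_FILE} example-svc.jar
EXPOSE 5050
# Install wget, then fetch the profiler agent exactly as before
RUN apt-get update && apt-get install -y --no-install-recommends wget \
 && rm -rf /var/lib/apt/lists/* \
 && mkdir -p /opt/cprof \
 && wget -q -O- https://storage.googleapis.com/cloud-profiler/java/latest/profiler_java_agent.tar.gz \
  | tar xzv -C /opt/cprof
ENTRYPOINT ["java", \
  "-agentpath:/opt/cprof/profiler_java_agent.so=-cprof_service=example-svc,-cprof_service_version=0.0.1-SNAPSHOT", \
  "-jar", "/app/example-svc.jar"]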
Related
I'm trying to run the protoc command in a Docker container.
I've tried using the gRPC image, but the protoc command is not found:
/bin/sh: 1: protoc: not found
So I assume I have to install it manually using RUN instructions, but is there a better solution? An official prebuilt image with protoc installed?
Also, I've tried to install it via the Dockerfile, but I'm again getting protoc: not found.
This is my Dockerfile:
#I'm not using "FROM grpc/node" because that image can't unzip
FROM node:12
...
# Download proto zip
ENV PROTOC_ZIP=protoc-3.14.0-linux-x86_32.zip
RUN curl -OL https://github.com/protocolbuffers/protobuf/releases/download/v3.14.0/${PROTOC_ZIP}
RUN unzip -o ${PROTOC_ZIP} -d ./proto
RUN chmod 755 -R ./proto/bin
ENV BASE=/usr/local
# Copy into path
RUN cp ./proto/bin/protoc ${BASE}/bin
RUN cp -R ./proto/include/* ${BASE}/include
RUN protoc -I=...
I've done RUN echo $PATH to check that the folder is on the PATH, and it is:
/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
Also RUN ls -la /usr/local/bin to check that the protoc file is in the folder, and it shows:
-rwxr-xr-x 1 root root 4849692 Jan 2 11:16 protoc
So the file is in the bin folder and that folder is on the PATH.
Have I missed something?
Also, is there a simple way to get an image with protoc installed, or is the best option to build my own image and pull it from my repository?
Thanks in advance.
Edit: Solved by downloading the linux-x86_64 zip file instead of x86_32. I picked the lower architecture thinking an x86_64 machine could run an x86_32 file but not the other way around. I don't know if I'm missing something about the architecture requirements (probably) or if it's a bug.
Anyway, in case it helps someone, I found the solution and have added an answer with the necessary Dockerfile to run protoc and protoc-gen-grpc-web.
The easiest way to get non-default tools like this is to install them through the underlying Linux distribution's package manager.
First, look at the Docker Hub page for the node image. (For "library" images like node, construct the URL https://hub.docker.com/_/node.) You'll notice there that there are several variations named "alpine", "buster", or "stretch"; plain node:12 is the same as node:12-stretch and node:12.20.0-stretch. The "alpine" images are based on Alpine Linux; the "buster" and "stretch" ones are different versions of Debian GNU/Linux.
For Debian-based packages, you can then look up the package on https://packages.debian.org/ (type protoc into the "Search the contents of packages" form at the bottom of the page). That leads you to the protobuf-compiler package. Knowing that contains the protoc binary, you can install it in your Dockerfile with:
# node:12 is Debian-based (a comment can't share a line with a Dockerfile instruction)
FROM node:12
RUN apt-get update \
 && DEBIAN_FRONTEND=noninteractive \
    apt-get install --no-install-recommends --assume-yes \
      protobuf-compiler
# The rest of your Dockerfile as above
COPY ...
RUN protoc ...
You generally must run apt-get update and apt-get install in the same RUN command, lest a subsequent rebuild reuse an old version of the package index from the Docker build cache. I generally have only a single apt-get install command if I can manage it, with the package list alphabetized, one package per line, for maintainability.
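To illustrate the pitfall, consider this hypothetical split (not from the Dockerfile above):
# Anti-pattern: this layer is cached after the first build...
RUN apt-get update
# ...so editing only the line below re-runs the install against a stale
# package index, which can 404 on packages that have since been re-versioned.
RUN apt-get install --assume-yes protobuf-compiler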
If the image is Alpine-based, you can do a similar search on https://pkgs.alpinelinux.org/contents to find protoc, and similarly install it:
FROM node:12-alpine
RUN apk add --no-cache protoc
# The rest of your Dockerfile as above
Finally I solved my own issue.
The problem was the architecture: I was using linux-x86_32.zip, but it works using linux-x86_64.zip.
Even though @David Maze's answer is excellent and very complete, it didn't solve my problem, because apt-get installs version 3.0.0 and I wanted 3.14.0.
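In hindsight, the misleading "not found" makes sense: a 32-bit ELF binary declares a 32-bit dynamic loader as its interpreter, and when that loader is absent the shell reports the command itself as not found. A hypothetical check inside the node:12 image would show it:
# The 32-bit zip unpacks an "ELF 32-bit" executable
file ./proto/bin/protoc
# Its interpreter, the 32-bit loader, doesn't exist in a stock node:12 image
ls /lib/ld-linux.so.2   # No such file or directory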
So, the Dockerfile I've used to run protoc in a Docker container is like this:
FROM node:12
...
# Download proto zip
ENV PROTOC_ZIP=protoc-3.14.0-linux-x86_64.zip
RUN curl -OL https://github.com/protocolbuffers/protobuf/releases/download/v3.14.0/${PROTOC_ZIP}
RUN unzip -o ${PROTOC_ZIP} -d ./proto
RUN chmod 755 -R ./proto/bin
ENV BASE=/usr
# Copy into path
RUN cp ./proto/bin/protoc ${BASE}/bin/
RUN cp -R ./proto/include/* ${BASE}/include/
# Download protoc-gen-grpc-web
ENV GRPC_WEB=protoc-gen-grpc-web-1.2.1-linux-x86_64
ENV GRPC_WEB_PATH=/usr/bin/protoc-gen-grpc-web
RUN curl -OL https://github.com/grpc/grpc-web/releases/download/1.2.1/${GRPC_WEB}
# Copy into path
RUN mv ${GRPC_WEB} ${GRPC_WEB_PATH}
RUN chmod +x ${GRPC_WEB_PATH}
RUN protoc -I=...
Because this is currently the highest-ranked result on Google and the instructions above won't work if you want to use docker/dind (e.g. for GitLab), this is how you can get the glibc dependency working for protoc there:
#!/bin/bash
# install gcompat, because protoc needs a real glibc or compatible layer
apk add gcompat
# install a recent protoc (use a version that fits your needs)
export PB_REL="https://github.com/protocolbuffers/protobuf/releases"
curl -LO $PB_REL/download/v3.20.0/protoc-3.20.0-linux-x86_64.zip
unzip protoc-3.20.0-linux-x86_64.zip -d $HOME/.local
export PATH="$PATH:$HOME/.local/bin"
I feel confused by the Dockerfile and build process. Specifically, I am working through the book Docker on AWS, and I'll feel stuck until I can pin down a few more of the details. The book had me write the following Dockerfile.
#Test stage
FROM alpine as test
LABEL application=todobackend
#Install basic utilities
RUN apk add --no-cache bash git
#Install build dependencies
RUN apk add --no-cache gcc python3-dev libffi-dev musl-dev linux-headers mariadb-dev py3-pip
RUN ../../usr/bin/pip3 install wheel
#Copy requirements
COPY /src/requirements* /build/
WORKDIR /build
#Build and install requirements
RUN pip3 wheel -r requirements_test.txt --no-cache-dir --no-input
RUN pip3 install -r requirements_test.txt -f /build --no-index --no-cache-dir
# Copy source code
COPY /src /app
WORKDIR /app
# Test entrypoint
CMD ["python3","manage.py","test","--noinput","--settings=todobackend.settings_test"]
The following is a list of the things I understand versus don't understand.
I understand this.
#Test stage
FROM alpine as test
LABEL application=todobackend
It is defining a 'test' stage, so I can run commands like docker build --target test, and Docker will execute all of the following instructions until the next FROM ... AS line indicates a different stage. LABEL is labeling the specific Docker image that is built and from which containers will be 'born' (not sure if that is the right word to use). I don't feel any confusion about that, EXCEPT I'm unsure whether that label carries over to containers spawned from that image.
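For instance, with a hypothetical two-stage file like this, docker build --target test builds only the first stage:
# `docker build --target test .` stops after this stage
FROM alpine AS test
RUN echo "test stage"
# a plain `docker build .` produces this final stage instead
FROM alpine AS release
RUN echo "release stage"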
So NOW I start to feel confused.
I PARTLY understand this
#Install basic utilities
RUN apk add --no-cache bash git
I understand that apk is an overloaded term that represents both the package manager on Alpine Linux and a file type. In this context, it is a package-manager command to install (or upgrade) a package on the running system. HOWEVER, I am supposed to be building / packaging up an application and all of its dependencies into an enclosed 'environment'. Sooo... where / when does this 'environment' come in? That is where I feel confused. When the Dockerfile is running apk, is it just saying "locally, on your current machine, please install these the normal way" (i.e., the equivalent of a bash script where apk installs to its working directory)? When I run docker build --target test -t todobackend-test on my previously pasted Dockerfile, is the docker command doing both a native command execution AND a Docker Engine call to create an isolated environment for my Docker image? I feel like what must be happening is that when the docker command is run, it acts like a wrapper around the built-in package manager / bash / pip functionality AND the Docker Engine, doing both, but I don't know.
Anyway, I hope that this made sense. I just want some implementation details. Feel free to link documentation, but it can feel super tedious and unnecessarily detailed OR obfuscated sometimes.
I DO want to point out that if I run an apk command in my Dockerfile with a bad dependency name (e.g. python3-pip instead of py3-pip), I get a very interesting error:
/bin/sh: pip3: not found
Notice the command path. I am assuming anyone reading this will understand why that feels hella confusing.
I am running a docker-compose file using node:latest. I noticed an issue with the timezone that I am trying to fix. Following an example I found online, I tried to install tzdata. This is not working, as I keep getting apk not found errors. After finding this stackoverflow.com question, Docker Alpine /bin/sh apk not found, which seems to mirror my issue, I docker exec'ed into the container and found the apk command in the /sbin folder. From other articles I found, the following seemed to be the way to resolve the issue, but apk is still not found:
CMD export PATH=$PATH:$ADDITIONAL_PATH
RUN apk add --no-cache tzdata
ENV TZ=America/Chicago
node:latest is based on buildpack-deps, which is based on Debian. Debian does not use apk; it uses apt. You either want to use Debian's apt to install packages (apt-get install tzdata) or switch to node:alpine, which uses apk for package management.
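A minimal sketch of the Debian route, assuming the stock node:latest base from the question (tzdata is usually already present on Debian images, so the install is mostly a no-op):
FROM node:latest
# Debian uses apt, not apk
RUN apt-get update \
 && apt-get install --no-install-recommends --assume-yes tzdata \
 && rm -rf /var/lib/apt/lists/*
ENV TZ=America/Chicago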
You can use node:alpine, which is based on Alpine:
FROM node:alpine
RUN apk add --no-cache tzdata
ENV TZ=America/Chicago
node:<version>-alpine
This image is based on the popular Alpine Linux project, available in
the alpine official image. Alpine Linux is much smaller than most
distribution base images (~5MB), and thus leads to much slimmer images
in general.
Total Rust noob here. I'm trying to build an sccache binary for Linux x64 with Redis: true. I'm starting with an Alpine image:
FROM rust:alpine3.10
WORKDIR /root
RUN apk --no-cache add --update curl
RUN curl -L https://github.com/mozilla/sccache/archive/0.2.11.tar.gz \
-o sccache.tar.gz
RUN tar xf sccache.tar.gz
RUN cd sccache-0.2.11 &&\
cargo build --features=all --release
I get:
error: cannot produce proc-macro for `derive-error v0.0.3` as the target `x86_64-unknown-linux-musl` does not support these crate types
Works fine if I FROM rust, which is based on buster. I could just go with this (and I will), but what is going on here? I'm so out of my element I am not even sure what questions to ask.
Related?:
https://github.com/rust-lang/rust/issues/59302
The proc_macro crate relies on a couple of features that are only available to dynamically linked executables, and since the musl target is statically linked by default, you cannot use proc_macro on musl.
The issue related to this is here, and Alex describes quite well some of the issues and tradeoffs that'd need to be made to make this crate available on full static targets: https://github.com/rust-lang/rust/issues/40174
Just to confirm from the container:
~# docker run -ti rust:alpine3.10 /bin/sh
/ # rustup show
Default host: x86_64-unknown-linux-musl
rustup home: /usr/local/rustup
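So the simplest workaround is a glibc-based builder, where the default target (x86_64-unknown-linux-gnu) links dynamically. A rough sketch; the extra packages (pkg-config, libssl-dev) are an assumption, added because sccache's optional features commonly pull in OpenSSL:
FROM rust:slim
WORKDIR /root
# slim Debian images don't ship curl, so install it along with build deps
RUN apt-get update \
 && apt-get install --assume-yes --no-install-recommends curl pkg-config libssl-dev \
 && rm -rf /var/lib/apt/lists/*
RUN curl -L https://github.com/mozilla/sccache/archive/0.2.11.tar.gz -o sccache.tar.gz \
 && tar xf sccache.tar.gz
RUN cd sccache-0.2.11 && cargo build --features=all --release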
I am trying to use a Python wrapper for a Java library called Tabula. I need both Python and Java images within my Docker container. I am using the openjdk:8 and python:3.5.3 images. I am trying to build the file using docker-compose, but it returns the following message:
/bin/sh: 1: java: not found
when it reaches the line RUN java -version within the Dockerfile. The line RUN find / -name "java" also doesn't return anything, so I can't even find where Java is being installed in the Docker environment.
Here is my Dockerfile:
FROM python:3.5.3
FROM openjdk:8
FROM tailordev/pandas
RUN apt-get update && apt-get install -y \
python3-pip
# Create code directory
ENV APP_HOME /usr/src/app
RUN mkdir -p $APP_HOME/temp
WORKDIR /$APP_HOME
# Install app dependencies
ADD requirements.txt $APP_HOME
RUN pip3 install -r requirements.txt
# Copy source code
COPY *.py $APP_HOME/
RUN find / -name "java"
RUN java -version
ENTRYPOINT [ "python3", "runner.py" ]
How do I install Java within the Docker container so that the Python wrapper class can invoke Java methods?
This Dockerfile cannot work, because the multiple FROM statements at the beginning don't mean what you think they mean. They don't mean that all the contents of the images you're referring to in the FROM statements will somehow end up in the image you're building; FROM has actually meant two different things over the history of Docker:
In newer versions of Docker, multi-stage builds, which are a very different thing from what you're trying to achieve (but very interesting nonetheless).
In earlier versions of Docker, it gave you the ability to simply build multiple images from one Dockerfile.
The behavior you are describing makes me assume you are using such an earlier version. Let me explain what's actually happening when you run docker build on this Dockerfile:
FROM python:3.5.3
# Docker: "The user wants me to build an image based on python:3.5.3. No problem!"
# Docker: "Ah, the next FROM statement is coming up,
# which means the user is done with building this image."
FROM openjdk:8
# Docker: "The user wants me to build an image based on openjdk:8. No problem!"
# Docker: "Ah, the next FROM statement is coming up,
# which means the user is done with building this image."
FROM tailordev/pandas
# Docker: "The user wants me to build an image based on tailordev/pandas. No problem!"
# Docker: "A RUN statement is coming up. I'll put it as a layer
# in the image the user is asking me to build."
RUN apt-get update && apt-get install -y \
    python3-pip
...
# Docker: "EOF reached, nothing more to do!"
As you can see, this is not what you want.
What you should do instead is build a single image where you will first install your runtimes (Python, Java, ...), and then your application-specific dependencies. The last two parts you're already doing; here's how you could go about installing your general dependencies:
# Let's start from the Alpine Java Image
FROM openjdk:8-jre-alpine
# Install Python runtime
RUN apk add --update \
python \
python-dev \
py-pip \
build-base \
&& pip install virtualenv \
&& rm -rf /var/cache/apk/*
# Install your framework dependencies
RUN pip install numpy scipy pandas
... do the rest ...
Note that I haven't tested the above snippet, you may have to adapt a few things.
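As a quick sanity check that both runtimes actually landed in one image (hypothetical tag name):
docker build -t tabula-app .
# Override the entrypoint to inspect the image interactively
docker run --rm --entrypoint sh tabula-app -c 'java -version && python --version'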