Strange Tar behavior with -C option - tar

I am trying to create a tar archive using the -C option. I am also using $(ls) to make sure that tar does not complain about missing files.
I am seeing strange behavior: the command works when run from some paths but not from others, and I am unable to explain it.
What I want to tar: /opt/server/nginx/etc/nginx/nginx.conf
[In this case the file is present.]
I don't want to preserve the full path, so I use -C /opt/server/nginx/etc/nginx
The /tmp/backup/ folder exists.
Fails:
(venv) root@bhakta-at-host-1:/opt/server/agent# cd /root
(venv) root@bhakta-at-host-1:~# tar -cvzf /tmp/backup/NginxServer_cfg_save.tgz -C /opt/server/nginx/etc/nginx/ $(ls -d nginx.conf)
ls: cannot access 'nginx.conf': No such file or directory
tar: Cowardly refusing to create an empty archive
Try 'tar --help' or 'tar --usage' for more information.
(venv) root@bhakta-at-host-1:~#
Fails:
(venv) root@bhakta-at-host-1:/tmp# cd /var/tmp
(venv) root@bhakta-at-host-1:/var/tmp# tar -cvzf /tmp/backup/NginxServer_cfg_save.tgz -C /opt/server/nginx/etc/nginx/ $(ls -d nginx.conf)
ls: cannot access 'nginx.conf': No such file or directory
tar: Cowardly refusing to create an empty archive
Try 'tar --help' or 'tar --usage' for more information.
(venv) root@bhakta-at-host-1:/var/tmp#
Works:
(venv) root@bhakta-at-host-1:~# cd /tmp
(venv) root@bhakta-at-host-1:/tmp# tar -cvzf /tmp/backup/NginxServer_cfg_save.tgz -C /opt/server/nginx/etc/nginx/ $(ls -d nginx.conf)
nginx.conf
Works:
(venv) root@bhakta-at-host-1:/opt/server/nginx/etc/nginx# tar -cvzf /tmp/backup/NginxServer_cfg_save.tgz -C /opt/server/nginx/etc/nginx/ $(ls -d nginx.conf)
nginx.conf
(venv) root@bhakta-at-host-1:/opt/server/nginx/etc/nginx#
Works:
(venv) root@bhakta-at-host-1:/var/tmp# tar -cvzf /tmp/backup/NginxServer_cfg_save.tgz -C /opt/server/nginx/etc/nginx/ nginx.conf
nginx.conf
So what is going on here? It seems to have something to do with combining -C and $(ls).
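For illustration, the `$(ls -d nginx.conf)` substitution is expanded by the shell in its current directory before tar ever runs, so `-C` cannot influence it. A minimal sketch with scratch paths (all of these paths are hypothetical, not the server's real layout):

```shell
#!/bin/sh
# The shell expands $(ls -d nginx.conf) relative to its *current* directory,
# before tar sees -C. Whether it succeeds depends only on where you stand.
mkdir -p /tmp/ctest/conf
echo 'user nginx;' > /tmp/ctest/conf/nginx.conf

cd /tmp                        # no nginx.conf here
ls -d nginx.conf 2>&1 || true  # fails: expansion happens in /tmp, -C is irrelevant

cd /tmp/ctest/conf             # the file's own directory
ls -d nginx.conf               # succeeds: the CWD now contains the file
```

This matches the transcripts above: the command "works" only from directories that happen to contain an nginx.conf for the substitution to find.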
Versions:
(venv) root@bhakta-at-host-1:/opt/server/nginx/etc/nginx# tar --version
tar (GNU tar) 1.29
Copyright (C) 2015 Free Software Foundation, Inc.
License GPLv3+: GNU GPL version 3 or later <http://gnu.org/licenses/gpl.html>.
This is free software: you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law.
Written by John Gilmore and Jay Fenlason.
(venv) root@bhakta-at-host-1:/opt/server/nginx/etc/nginx# lsb_release -a
No LSB modules are available.
Distributor ID: Ubuntu
Description: Ubuntu 18.04.1 LTS
Release: 18.04
Codename: bionic

Related

Cannot execute GO binary file in Docker Containers having Linux Runner

Go and the binaries were part of our Docker image.
I tried all possible combinations to build the Go binary:
export GOARCH=386 && export GOOS=linux && go build ./cmd/status
export GOARCH=amd64 && export GOOS=windows && go build ./cmd/status
$ uname -a
Linux runner-4KP_No95-project-35871-concurrent-0 44.44.444-115.233.amzn1.x86_64 #1 SMP Thu Feb 27 23:49:15 UTC 2020 x86_64 GNU/Linux
I am getting this error:
/pipeline/status: /pipeline/status: cannot execute binary file
A sample section from the Dockerfile is:
ARG GOLANG_VERSION=1.14
FROM golang:${GOLANG_VERSION} as build-helpers
ENV GOPRIVATE=code.abcd.com
RUN mkdir -p /pipeline-helpers
ADD /reusable-aspects/ci-caching/golang-preheat-cache /golang-preheat-cache
RUN cd /golang-preheat-cache && go mod download
ADD helpers/go-pipeline-commands /pipeline-helpers/
RUN cd /pipeline-helpers && CGO_ENABLED=0 GOOS=linux make
FROM alpine
RUN mkdir -p /pipeline
WORKDIR /pipeline
COPY --from=build-helpers /pipeline-helpers/commit .
COPY --from=build-helpers /pipeline-helpers/status .
RUN chmod a+x commit
RUN chmod a+x status
ENTRYPOINT ["./commit"]
CMD []
Logs from the image build where the Go binaries are built are added below:
Running with gitlab-runner 11.11.2 (ac2a293c)
  on aws-build-runner-scheduler 8616255e
section_start:1590231123:prepare_executor
Using Docker executor with image gcr.io/kaniko-project/executor:debug ...
Pulling docker image gcr.io/kaniko-project/executor:debug ...
Using docker image sha256:adasdasdasdasdasdasdasdsa for gcr.io/kaniko-project/executor:debug ...
section_end:1590231124:prepare_executor
section_start:1590231124:prepare_script
Running on runner-8616123e-project-12312-concurrent-0 via ip-12-122-122-122...
section_end:1590231125:prepare_script
section_start:1590231125:get_sources
Reinitialized existing Git repository in /builds/abcde/pipeline/projetname/.git/
Fetching changes...
From https://code.abc.com/abcde/pipeline/projetname
* [new ref] refs/pipelines/5679048 -> refs/pipelines/5679048
0286714..043832e feat/qaPipelineDeploy -> origin/feat/qaPipelineDeploy
Checking out 043832ea as feat/qaPipelineDeploy...
Removing helpers/bash-commons/src/welcome/version-info-pipeline.txt
Skipping Git submodules setup
section_end:1590231128:get_sources
section_start:1590231128:restore_cache
section_end:1590231130:restore_cache
section_start:1590231130:download_artifacts
section_end:1590231132:download_artifacts
section_start:1590231132:build_script
$ mkdir -p /kaniko/.docker
$ export IMAGE_TAG=${CI_COMMIT_TAG:=$CI_COMMIT_REF_SLUG}
$ imagename=$CI_REGISTRY_IMAGE/helpers:$IMAGE_TAG
INFO[0001] Resolved base name golang:1.14 to build-helpers
INFO[0001] Retrieving image manifest golang:1.14
INFO[0002] Retrieving image manifest golang:1.14
INFO[0003] Retrieving image manifest alpine
INFO[0004] Retrieving image manifest alpine
INFO[0005] Built cross stage deps: map[0:[/pipeline-helpers/commit /pipeline-helpers/status]]
INFO[0005] Retrieving image manifest golang:1.14
INFO[0005] Retrieving image manifest golang:1.14
INFO[0006] Executing 0 build triggers
INFO[0006] Unpacking rootfs as cmd RUN mkdir -p /pipeline-helpers requires it.
INFO[0021] ENV GOPRIVATE=code.abc.com
INFO[0021] RUN mkdir -p /pipeline-helpers
INFO[0021] Taking snapshot of full filesystem...
INFO[0022] Resolving 28120 paths
INFO[0025] cmd: /bin/sh
INFO[0025] args: [-c mkdir -p /pipeline-helpers]
INFO[0025] Running: [/bin/sh -c mkdir -p /pipeline-helpers]
INFO[0025] Taking snapshot of full filesystem...
INFO[0025] Resolving 28121 paths
INFO[0027] Using files from context: [/builds/abcde/pipeline/projetname/projetname-reusable-aspects/ci-caching/golang-preheat-cache]
INFO[0027] ADD /projetname-reusable-aspects/ci-caching/golang-preheat-cache /golang-preheat-cache
INFO[0027] Resolving 3 paths
INFO[0027] Taking snapshot of files...
INFO[0027] RUN cd /golang-preheat-cache && go mod download
INFO[0027] cmd: /bin/sh
INFO[0027] args: [-c cd /golang-preheat-cache && go mod download]
INFO[0027] Running: [/bin/sh -c cd /golang-preheat-cache && go mod download]
INFO[0033] Taking snapshot of full filesystem...
INFO[0033] Resolving 50967 paths
INFO[0045] Using files from context: [/builds/abcde/pipeline/projetname/helpers/go-pipeline-commands]
INFO[0045] ADD helpers/go-pipeline-commands /pipeline-helpers/
INFO[0045] Resolving 25 paths
INFO[0045] Taking snapshot of files...
INFO[0045] RUN cd /pipeline-helpers && CGO_ENABLED=0 GOOS=linux make
INFO[0045] cmd: /bin/sh
INFO[0045] args: [-c cd /pipeline-helpers && CGO_ENABLED=0 GOOS=linux make]
INFO[0045] Running: [/bin/sh -c cd /pipeline-helpers && CGO_ENABLED=0 GOOS=linux make]
 > Download dependencies
 > Tidy dependencies
 go mod tidy
 > Building the binary
 go build ./cmd/commit
 go build ./cmd/query-qa-pipeline-status
 > Format code
 go fmt ./...
 > Run unit tests
 go test -run TestUnit ./...
ok code.abc.com/abcde/pipeline/projetname/helpers/cmd/commit 0.005s
ok code.abc.com/abcde/pipeline/projetname/helpers/cmd/status 0.005s
 > Find static code issues
 go vet ./...
INFO[0055] Taking snapshot of full filesystem...
INFO[0056] Resolving 52425 paths
INFO[0061] RUN echo " Golang version: `go version`" >> /pipeline-helpers/version-info-pipeline.txt
INFO[0061] cmd: /bin/sh
INFO[0061] args: [-c echo " Golang version: `go version`" >> /pipeline-helpers/version-info-pipeline.txt]
INFO[0061] Running: [/bin/sh -c echo " Golang version: `go version`" >> /pipeline-helpers/version-info-pipeline.txt]
INFO[0061] Taking snapshot of full filesystem...
INFO[0065] Resolving 52426 paths
INFO[0069] RUN echo " projetname type: Helpers" >> /pipeline-helpers/version-info-pipeline.txt
INFO[0069] cmd: /bin/sh
INFO[0069] args: [-c echo " projetname type: Helpers" >> /pipeline-helpers/version-info-pipeline.txt]
INFO[0069] Running: [/bin/sh -c echo " projetname type: Helpers" >> /pipeline-helpers/version-info-pipeline.txt]
INFO[0069] Taking snapshot of full filesystem...
INFO[0069] Resolving 52426 paths
INFO[0072] RUN echo " Commit hash: `echo ${CI_COMMIT_SHA}`" >> /pipeline-helpers/version-info-pipeline.txt
INFO[0072] cmd: /bin/sh
INFO[0072] args: [-c echo " Commit hash: `echo ${CI_COMMIT_SHA}`" >> /pipeline-helpers/version-info-pipeline.txt]
INFO[0072] Running: [/bin/sh -c echo " Commit hash: `echo ${CI_COMMIT_SHA}`" >> /pipeline-helpers/version-info-pipeline.txt]
INFO[0072] Taking snapshot of full filesystem...
INFO[0072] Resolving 52426 paths
INFO[0076] Saving file pipeline-helpers/commit for later use
INFO[0076] Saving file pipeline-helpers/version-info-pipeline.txt for later use
INFO[0076] Saving file pipeline-helpers/status for later use
INFO[0076] Deleting filesystem...
INFO[0077] Retrieving image manifest alpine
INFO[0079] Retrieving image manifest alpine
INFO[0080] Executing 0 build triggers
INFO[0080] Unpacking rootfs as cmd RUN mkdir -p /pipeline requires it.
INFO[0080] RUN mkdir -p /pipeline
INFO[0080] Taking snapshot of full filesystem...
INFO[0080] Resolving 482 paths
INFO[0080] cmd: /bin/sh
INFO[0080] args: [-c mkdir -p /pipeline]
INFO[0080] Running: [/bin/sh -c mkdir -p /pipeline]
INFO[0080] Taking snapshot of full filesystem...
INFO[0080] Resolving 483 paths
INFO[0080] WORKDIR /pipeline
INFO[0080] cmd: workdir
INFO[0080] Changed working directory to /pipeline
INFO[0080] COPY --from=build-helpers /pipeline-helpers/commit .
INFO[0080] Resolving 1 paths
INFO[0080] Taking snapshot of files...
INFO[0080] Resolving 1 paths
INFO[0080] Taking snapshot of files...
INFO[0081] Resolving 1 paths
INFO[0081] Taking snapshot of files...
INFO[0081] COPY --from=build-helpers /pipeline-helpers/status .
INFO[0081] Resolving 1 paths
INFO[0081] Taking snapshot of files...
INFO[0081] RUN chmod a+x commit
INFO[0081] cmd: /bin/sh
INFO[0081] args: [-c chmod a+x commit]
INFO[0081] Running: [/bin/sh -c chmod a+x commit]
INFO[0081] Taking snapshot of full filesystem...
INFO[0081] Resolving 487 paths
INFO[0081] No files were changed, appending empty layer to config. No layer added to image.
INFO[0081] cmd: /bin/sh
INFO[0081] Taking snapshot of full filesystem...
INFO[0081] Resolving 487 paths
INFO[0081] No files were changed, appending empty layer to config. No layer added to image.
INFO[0081] RUN chmod a+x status
INFO[0081] cmd: /bin/sh
INFO[0081] args: [-c chmod a+x status]
INFO[0081] Running: [/bin/sh -c chmod a+x status]
INFO[0081] Taking snapshot of full filesystem...
INFO[0081] Resolving 487 paths
INFO[0081] No files were changed, appending empty layer to config. No layer added to image.
INFO[0081] CMD []
$ echo projetname_IMAGE_TAG=${IMAGE_TAG}
projetname_IMAGE_TAG=feat-qapipelinedeploy
section_end:1590231218:build_script
section_start:1590231218:after_script
section_end:1590231219:after_script
section_start:1590231219:archive_cache
section_end:1590231220:archive_cache
section_start:1590231220:upload_artifacts_on_success
section_end:1590231222:upload_artifacts_on_success
Job succeeded
From the GitLab CI YAML file, I am calling the ./status command. It throws the error "cannot execute binary file".
There is one more Dockerfile that gets built in a different stage after the above Docker image is built. That image is used for testing in the YAML file.
ARG GO_VERSION=1.14
# Install OpenAPI Validator
FROM golang:${GO_VERSION} AS openapivalidatorbuilder
WORKDIR /work
ENV GOPRIVATE=code.abcd.com
COPY /reusable-aspects/enforcement/open-api-check/ .
RUN go build .
ARG PIPELINE_HELPER=docker.abcd.com/projectName/pipeline/projects/helpers:master
FROM ${PIPELINE_HELPER} as helper
FROM golang:${GO_VERSION}
ENV GOPRIVATE=code.abcd.com
ADD /reusable-aspects/ci-caching/golang-preheat-cache /golang-preheat-cache
RUN cd /golang-preheat-cache && go mod download
RUN curl -L https://github.com/a8m/envsubst/releases/download/v1.1.0/envsubst-`uname -s`-`uname -m` -o envsubst
RUN apt-get update \
&& apt-get install -y --no-install-recommends \
nodejs \
npm \
unzip
RUN npm --version
RUN npm install -g \
npm@6.11 \
serverless@1.51
RUN apt-get install -y \
# Install ruby and CFN_NAG
ruby-dev \
ruby-json \
ruby \
ruby-bundler \
# Install AWS CLI
awscli \
jq \
figlet
RUN rm -rf /var/cache/apk/*
RUN gem install cfn-nag --no-rdoc --no-ri
RUN mkdir /pipeline
ADD helpers/bash-commons/src/welcome /pipeline
RUN echo " Golang version: `go version`" >> /pipeline/version-info-pipeline.txt
RUN echo " Node version: `node -v`" >> /pipeline/version-info-pipeline.txt
RUN echo " Serverless version: `serverless -v`" >> /pipeline/version-info-pipeline.txt
RUN echo " projects type: Information Serverless Golang" >> /pipeline/version-info-pipeline.txt
COPY --from=openapivalidatorbuilder /work/open-api-check /pipeline/open-api-check
RUN chmod a+x /pipeline/open-api-check
COPY --from=helper /pipeline/hash /pipeline
COPY --from=helper /pipeline/status /pipeline
golang:1.14 is Debian-based, not Alpine-based, so of course you cannot run a Debian-built binary in an Alpine image.
Try replacing
FROM golang:${GOLANG_VERSION} as build-helpers
with
FROM golang:${GOLANG_VERSION}-alpine as build-helpers
and add the following lines to download the necessary libraries for building the binary:
RUN apk update && \
apk --update upgrade && \
apk add --no-cache ca-certificates gcc musl-dev git && \
update-ca-certificates && \
rm -rf /var/cache/apk/*
UPDATE
Add make, and put the apk update/add lines right under FROM golang:...
FROM golang:${GOLANG_VERSION}-alpine as build-helpers
RUN apk update && \
apk --update upgrade && \
apk add --no-cache ca-certificates gcc musl-dev git make && \
update-ca-certificates && \
rm -rf /var/cache/apk/*
UPDATE AFTER OP UPDATED THE QUESTION
Since you are copying the Alpine-built status binary from helper into your final image based on golang:${VERSION}, which is a Debian environment, of course it cannot run.
I recommend using only one environment (Alpine or Debian) for all build stages and the final Docker image.
So your first Docker image's first build stage should be
FROM golang:${GOLANG_VERSION}
and for the final image, use Debian instead of Alpine:
FROM debian
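As a sketch of that recommendation (stage names and binary paths follow the question; the specific Debian tag is an assumption, adjust to taste), keeping every stage in the same libc family avoids the mismatch entirely:

```Dockerfile
# Builder and runtime both Debian-based, so the glibc-linked binaries run as-is.
ARG GOLANG_VERSION=1.14
FROM golang:${GOLANG_VERSION} as build-helpers
# ... build steps from the question ...
RUN cd /pipeline-helpers && CGO_ENABLED=0 GOOS=linux make

FROM debian:buster-slim
WORKDIR /pipeline
COPY --from=build-helpers /pipeline-helpers/commit .
COPY --from=build-helpers /pipeline-helpers/status .
ENTRYPOINT ["./commit"]
```

The same principle applies in reverse: if the final image must stay Alpine, every stage that produces a binary should be Alpine-based too.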

creating an RPM package from binary file doesn't package the files into an archive

I'm trying to create an RPM package from node project packaged into a binary file with pkg.
I've created an rpmbuild skeleton in /root/rpmbuild.
The binary package was copied into /root/rpmbuild/SOURCES.
I've created a menlolab-runner.service file in /root/rpmbuild.
I'm skipping the %prep and %build sections in the .spec file. During the %install section the binary file is copied to the /usr/bin folder. In the %post section the service file is copied to /etc/systemd/system/.
%define version %(cat package.json | jq -r '.version')
%define release 1
%define buildroot /root/rpmbuild/BUILDROOT/
Name: %{name}
Version: %{version}
Release: %{release}
Summary: menlolab-runner
Group: Installation Script
License: MIT
Source0: runner
AutoReqProv: no
%description
The agent deployed on private and public infrastructure to manage tasks.
%global debug_package %{nil}
%prep
%build
%pre
getent group menlolab-runner >/dev/null || groupadd -r menlolab-runner
getent passwd menlolab-runner >/dev/null || useradd -r -g menlolab-runner -G menlolab-runner -d / -s /sbin/nologin -c "menlolab-runner" menlolab-runner
%install
mkdir -p %{buildroot}%{_bindir}/
mkdir -p %{buildroot}%{_unitdir}
cp runner %{buildroot}%{_bindir}/menlolab-runner
cp /root/rpmbuild/menlolab-runner.service %{buildroot}%{_unitdir}
%post
systemctl enable %{_unitdir}/menlolab-runner.service
chmod ugo+x /usr/bin/menlolab-runner
mkdir -p '/etc/menlolab-runner/'
chown -R 'menlolab-runner:menlolab-runner' '/etc/menlolab-runner'
chmod 700 '/etc/menlolab-runner'
mkdir -p '/var/lib/menlolab-runner/'
chown -R 'menlolab-runner:menlolab-runner' '/var/lib/menlolab-runner/'
mkdir -p '/var/lib/menlolab-runner/jobs/'
chown -R 'menlolab-runner:menlolab-runner' '/var/lib/menlolab-runner/jobs/'
chmod 700 '/var/lib/menlolab-runner/jobs/'
mkdir -p '/var/log/menlolab-runner/'
chown -R 'menlolab-runner:menlolab-runner' '/var/log/menlolab-runner/'
mkdir -p '/var/cache/menlolab-runner/'
chown -R 'menlolab-runner:menlolab-runner' '/var/cache/menlolab-runner/'
groupadd docker
usermod -aG docker menlolab-runner
%clean
rm -rf %{buildroot}
%files
%{_bindir}/menlolab-runner
%{_unitdir}/menlolab-runner.service
%defattr(644, menlolab-runner, menlolab-runner, 755)
My issue is that the .rpm contains no files after executing rpmbuild -ba /path/to/spec/file.
I think it's because I have no entry in the %files section, and I'm not sure what to put there. If I add the path to the binary file, I receive the following error:
error: File not found: /root/rpmbuild/BUILDROOT/menlolab-runner-0.2.5a2-1.x86_64/root/rpmbuild/SOURCES/runner
In your %install section you must place files into $RPM_BUILD_ROOT, so something like:
%install
mkdir -p $RPM_BUILD_ROOT%{_bindir}
cp runner $RPM_BUILD_ROOT%{_bindir}/menlolab-runner
Subsequently, the %files section should list the installed files, relative to the $RPM_BUILD_ROOT, e.g.:
%files
%{_bindir}/menlolab-runner
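Putting both pieces together, a minimal sketch of the relevant spec sections (file names follow the question; note that %{buildroot} and $RPM_BUILD_ROOT refer to the same directory during rpmbuild, and `install` is used here instead of `cp` as an assumed convenience to set modes in one step):

```spec
%install
mkdir -p %{buildroot}%{_bindir} %{buildroot}%{_unitdir}
install -m 0755 %{SOURCE0} %{buildroot}%{_bindir}/menlolab-runner
install -m 0644 /root/rpmbuild/menlolab-runner.service %{buildroot}%{_unitdir}/menlolab-runner.service

%files
%{_bindir}/menlolab-runner
%{_unitdir}/menlolab-runner.service
```

Every path listed under %files must exist under the buildroot after %install runs, which is exactly the invariant the "File not found" error is complaining about.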

AWS EC2 /usr/bin/docker-compose: cannot execute binary file

I created an EC2 instance and installed Docker on it. Now I need to install docker-compose using SSM send-command, and to do that I'm running the following command:
aws ssm send-command --instance-ids "${INSTACES_ID}" \
--document-name "AWS-RunShellScript" \
--parameters "commands=[
sudo curl -L "https://github.com/docker/compose/releases/download/1.22.0/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose,
sudo mv /usr/local/bin/docker-compose /usr/bin/docker-compose,
sudo chmod +x /usr/bin/docker-compose,
docker-compose --version
]"
The commands are executed correctly but the command docker-compose --version raises the following exception
"StandardErrorContent": "... /usr/bin/docker-compose: cannot execute binary file\nfailed to run commands: exit status 126"
I entered the EC2 instance via SSH and the problem is the same. I also tried giving all permissions to the binary file:
$ ls -la /usr/bin | grep docker-compose
-rwxrwxrwx 1 ec2-user ec2-user 6204149 27 set 15.10 docker-compose
$ docker-compose --version
-bash: /usr/bin/docker-compose: cannot execute binary file
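"cannot execute binary file" usually means the file is not a valid executable for this machine at all; `file` makes that visible. A hedged sketch (the path and contents below are fabricated for illustration, since a mangled download URL can leave an HTML error page where the binary should be):

```shell
#!/bin/sh
# If the curl URL was mangled (e.g. $(uname -s)/$(uname -m) expanded by the
# wrong shell), the "binary" may actually be a small HTML error page.
# `file` distinguishes that from a real ELF executable at a glance.
printf '<html><body>404 Not Found</body></html>\n' > /tmp/docker-compose-bad
file /tmp/docker-compose-bad   # reports HTML/ASCII text, not "ELF ... executable"
```

Running `file /usr/bin/docker-compose` on the instance, and checking its size against the GitHub release, would show whether the download produced a genuine binary for this architecture.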

Docker build Gentoo operation not permitted

I have a docker-compose file with this container to build Gentoo:
default:
build: docker/gentoo
hostname: default.jpo.net
My Dockerfile to set up Gentoo in a multi-stage build is:
FROM gentoo/portage as portage
FROM gentoo/stage3-amd64
COPY --from=portage /usr/portage /usr/portage
RUN emerge --jobs $(nproc) -qv www-servers/apache net-misc/curl net-misc/openssh
RUN /usr/bin/ssh-keygen -t dsa -f /etc/ssh/ssh_host_dsa_key -N ''
RUN /usr/bin/ssh-keygen -t rsa -f /etc/ssh/ssh_host_rsa_key -N ''
RUN sed -i 's/#PubkeyAuthentication/PubkeyAuthentication/' /etc/ssh/sshd_config
RUN mkdir -p /root/.ssh && chmod 700 /root/.ssh && touch /root/.ssh/authorized_keys
RUN wget -O telegraf.tar.gz http://get.influxdb.org/telegraf/telegraf-0.11.1-1_linux_amd64.tar.gz \
&& tar xvfz telegraf.tar.gz \
&& rm telegraf.tar.gz \
&& mv /usr/lib/telegraf /usr/lib64/telegraf \
&& rm -rf /usr/lib && ln -s /usr/lib64 /usr/lib
ADD telegraf.conf /etc/telegraf/telegraf.conf
COPY entrypoint.sh /
COPY infinite_curl.sh /
RUN chmod u+x /entrypoint.sh /infinite_curl.sh
ENTRYPOINT ["/entrypoint.sh"]
CMD ["telegraf", "-config", "/etc/telegraf/telegraf.conf"]
The problem is that the build fails during the emerge command while it is installing packages.
Then I get this error:
PermissionError: [Errno 1] Operation not permitted
* ERROR: dev-libs/apr-1.5.2::gentoo failed (install phase):
* dodoc failed
I tried adding privileged: true in my docker-compose file and adding USER root inside my Dockerfile, without success.
I also tried using the latest version of openssh, also without success.
I searched the Internet but haven't found anything that works.
Docker version
Docker version 17.12.0-ce, build c97c6d6
Docker-compose version
docker-compose version 1.18.0, build 8dd22a9
I'm on Ubuntu 16.04, and this build works well on Ubuntu 17.10 with the same docker/docker-compose versions.
Do you have any clues?
Looking at src_install() for that ebuild, this appears to be a bug upstream.
# Prallel install breaks since apr-1.5.1
#make -j1 DESTDIR="${D}" install || die
There are a couple of bugs related to building apr in parallel.

tar command on AIX is not working unzip

The tar -zxvf unzip-and-untar command works on RHEL and Solaris; however, it does not work on AIX 5/6/7. What is the equivalent command?
A good answer was provided in a comment; adding it as an answer, though.
The portable version of tar -zxvf that I reach for on old non-Linux Unix systems is:
gzip -dc foo.tgz | tar xvf -
gunzip <filename>.tar.gz
tar xvf <filename>.tar
Note that the gunzip command outputs a .tar file; tar xvf then untars that .tar file.
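A quick round-trip of the portable pipeline, using scratch paths (all hypothetical), shows both halves working without ever needing tar's -z flag:

```shell
#!/bin/sh
# Create a small .tgz without -z, then extract it with the portable
# gzip | tar pipeline that works on old AIX tar implementations.
set -e
rm -rf /tmp/tardemo
mkdir -p /tmp/tardemo/src /tmp/tardemo/extract
echo 'hello from aix' > /tmp/tardemo/src/a.txt

cd /tmp/tardemo
tar cf - src | gzip > foo.tgz          # archive + compress, no -z needed

cd extract
gzip -dc ../foo.tgz | tar xvf -        # the portable equivalent of tar -zxvf
cat src/a.txt
```

The same pipeline shape works with any compressor the system has (compress, bzip2), which is why it is the safe default on older Unix variants.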