I am trying to deploy this Docker Gogs image to AWS.
It worked fine on my local Docker instance, but I get the following error on AWS:
Failed to build Docker image aws_beanstalk/staging-app: github.com/gogits/gogs /goroot/pkg/tool/linux_amd64/6l: running gcc failed: Cannot allocate memory 2015/02/15 15:09:04
The command [/bin/sh -c go get -v -tags sqlite] returned a non-zero code: 2. Check snapshot logs for details.
Adding a swap file may help.
Add the file .ebextensions/01-commands.config:
container_commands:
  00001-add-swap:
    command: '[ -f /var/swap ] || (dd if=/dev/zero of=/var/swap bs=1M count=1024 && mkswap /var/swap && swapon /var/swap)'
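To confirm the swap is actually in use after a deployment, something like this (just a sketch, assuming you can SSH into the Beanstalk instance) should show it:
$ sudo swapon -s    # lists /var/swap once the command has run
$ free -m           # the Swap row should show roughly 1024 MB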
Hello, I want to add the at command to a Docker container. I am using Alpine Linux.
I tried apk add at and apk add atd; both give me the same error:
ERROR: unsatisfiable constraints:
  atd (missing):
    required by: world[atd]
Is there a way to fix that, or is there a way to use apt-get instead, since at exists for apt-get?
Looks like at is available as-is: apk add at
This Dockerfile works fine for me:
FROM alpine:latest
RUN apk add at
CMD at --help
example run:
$ docker build -t at_command_line -f Dockerfile .
$ docker run at_command_line:latest
at: unrecognized option: -
Usage: at [-V] [-q x] [-f file] [-u username] [-mMlbv] timespec ...
at [-V] [-q x] [-f file] [-u username] [-mMlbv] -t time
at -c job ...
atq [-V] [-q x]
at [ -rd ] job ...
atrm [-V] job ...
batch
I would just add to @ujlbu4's answer that you need to run the at daemon (atd) once your container is up and running, or else the jobs will sit in the queue without getting executed.
Example Dockerfile:
FROM python:alpine
RUN apk add at
ENTRYPOINT ["atd"]
If you don't run atd you may see the following:
$ docker exec -it my_running_container /bin/sh
# echo "echo hi" | at now + 1 minutes
warning: commands will be executed using /bin/sh
job 6 at Mon Jun 21 18:11:00 2021
Can't open /var/run/atd.pid to signal atd. No atd running?
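To tie this together with the scenario above: if the container was started without atd, you can also launch the daemon by hand before queueing jobs (a rough sketch; my_running_container is the container from the example above):
$ docker exec -it my_running_container /bin/sh
# atd                                  # start the at daemon in the background
# echo "echo hi" | at now + 1 minutes
# atq                                  # the job stays listed here until atd has executed it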
I'm building a docker image and getting the error:
=> ERROR [14/36] RUN --mount=type=secret,id=jfrog-cfg,target=/root/.jfrog/jfrog-cli.conf jfrog rt dl --flat artifact 0.7s
------
> [14/36] RUN --mount=type=secret,id=jfrog-cfg,target=/root/.jfrog/jfrog-cli.conf jfrog rt dl --flat artifact/artifact.tar.gz; set -eux; mkdir -p /usr/local/artifact; tar xzf artifact.tar.gz -C /usr/local/; ln -s /usr/local/artifact /usr/local/artifact;:
#22 0.524 [Error] open /root/.jfrog/jfrog-cli.conf: read-only file system
------
failed to solve with frontend dockerfile.v0: failed to solve with frontend gateway.v0: rpc error: code = Unknown desc = failed to build LLB: executor failed running [/bin/bash -eo pipefail -c jfrog rt dl --flat artifact/${ART_TAG}.tar.gz; set -eux; mkdir -p /usr/local/${ART_TAG}; tar xzf ${ART_TAG}.tar.gz -C /usr/local/; ln -s /usr/local/${ART_VERSION} /usr/local/artifact;]: runc did not terminate sucessfully
The command I use to build the Docker image is:
DOCKER_BUILDKIT=1 docker build -t imagename . --secret id=jfrog-cfg,src=${HOME}/.jfrog/jfrog-cli.conf (jfrog config exists at ${HOME}/.jfrog/jfrog-cli.conf)
JFrog is working and the artifact I'm downloading exists, since I can download it manually outside of Docker.
On Linux, docker is run using the root user, so ${HOME} is /root and not /home/your-user-name or whatever your usual home folder is. Try using explicit full pathnames instead of the env var.
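For example, with the home directory spelled out (the /home/your-user-name part below is only a placeholder for your actual path):
DOCKER_BUILDKIT=1 docker build -t imagename . \
  --secret id=jfrog-cfg,src=/home/your-user-name/.jfrog/jfrog-cli.conf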
I'm using a fastlane container stored in Google Container Registry to upload an APK to the Google Play Store via Google Cloud Build.
The APK is created successfully. However, the last step (fastlane) fails with the following errors:
Step #2: 487ea6dabc0c: Pull complete
Step #2: a7ae4fee33c9: Pull complete
Step #2: Digest: sha256:2e31d5ae64984a598856f1138c6be0577c83c247226c473bb5ad302f86267545
Step #2: Status: Downloaded newer image for gcr.io/myapp789-app/fastlane:latest
Step #2: gcr.io/myapp789-app/fastlane:latest
Step #2: docker: Error response from daemon: OCI runtime create failed: container_linux.go:345: starting container process caused "exec: \"supply\": executable file not found in $PATH": unknown.
Step #2: time="2019-08-29T23:22:55Z" level=error msg="error waiting for container: context canceled"
Finished Step #2
ERROR
ERROR: build step 2 "gcr.io/myapp789-app/fastlane" failed: exit status 127
Note:
1) The Docker source file was taken from https://hub.docker.com/r/fastlanetools/fastlane and then I built my own image.
2) The Docker image was built on a Google Cloud VM running Debian GNU/Linux 9 (stretch).
Docker Source File for fastlane:
# Final image #
###############
FROM circleci/ruby:latest
MAINTAINER milch
ENV PATH $PATH:/usr/local/itms/bin
# Java versions to be installed
ENV JAVA_VERSION 8u131
ENV JAVA_DEBIAN_VERSION 8u131-b11-1~bpo8+1
ENV CA_CERTIFICATES_JAVA_VERSION 20161107~bpo8+1
# Needed for fastlane to work
ENV LANG C.UTF-8
ENV LC_ALL C.UTF-8
# Required for iTMSTransporter to find Java
ENV JAVA_HOME /usr/lib/jvm/java-8-openjdk-amd64/jre
USER root
# iTMSTransporter needs java installed
# We also have to install make to install xar
# And finally shellcheck
RUN echo 'deb http://archive.debian.org/debian jessie-backports main' > /etc/apt/sources.list.d/jessie-backports.list \
    && apt-get -o Acquire::Check-Valid-Until=false update \
    && apt-get install --yes \
        make \
        shellcheck \
    && apt-get clean \
    && rm -rf /var/lib/apt/lists/*
USER circleci
COPY --from=xar_builder /tmp/xar /tmp/xar
RUN cd /tmp/xar \
    && sudo make install \
    && sudo rm -rf /tmp/*
CloudBuild.yaml:
- name: 'gcr.io/$PROJECT_ID/fastlane'
  args: ['supply', '--package_name', '${_ANDROID_PACKAGE_NAME}', '--track', '${_ANDROID_RELEASE_CHANNEL}', '--json_key_data', '${_GOOGLE_PLAY_UPLOAD_KEY_JSON}', '--apk', '/workspace/${_REPO_NAME}/build/app/outputs/bundle/release/app.aab']
  timeout: 1200s
Any idea how to solve this?
I solved this by building the Docker image from Google Cloud's official Docker source instead of the fastlane one on hub.docker.com (which hasn't been updated in five months).
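If you would rather keep the hub.docker.com-based image, another thing that may be worth trying (a sketch only, not verified against that image) is to set the step's entrypoint explicitly so that supply is passed to fastlane as a subcommand instead of being looked up as an executable, which is what the "exec: \"supply\": executable file not found in $PATH" error suggests is happening:
- name: 'gcr.io/$PROJECT_ID/fastlane'
  entrypoint: 'fastlane'
  args: ['supply', '--package_name', '${_ANDROID_PACKAGE_NAME}', '--track', '${_ANDROID_RELEASE_CHANNEL}', '--json_key_data', '${_GOOGLE_PLAY_UPLOAD_KEY_JSON}', '--apk', '/workspace/${_REPO_NAME}/build/app/outputs/bundle/release/app.aab']
  timeout: 1200s
This assumes the fastlane binary is on the image's PATH.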
I want to install envkey in my Docker image, which requires a key-value pair. I have the key-value pair with me, but I am unable to figure out how to install it in my Docker image using those arguments and then deploy the image on JupyterHub.
I looked at other deployments of mine that use envkey. Here is how they work:
1. I have a Makefile and I run the command sudo make dev config=aviral.cfg
2. The dev command in the Makefile is as follows:
dev:
    docker build -t $(IMAGE) -f Dockerfile.dev . && docker tag $(IMAGE) $(ALIAS)
    #echo "\nCreate docker container.."
    CONFIG=$(config) IMAGE=$(IMAGE) docker-compose -f docker-compose.yml up -d --scale test=0 --scale airflow_worker=0
    #echo "\n$(GREEN)Done.$(NO_COLOR)\n"
    #echo "Try airflow at http://localhost:8080."
    #echo "and flower at http://localhost:5555."
The docker-compose file is:
airflow_worker:
  image: ${IMAGE}:latest
  restart: always
  depends_on:
    - airflow_scheduler
  # ports:
  #   - 8793:8793
  # environment:
  #   - GOOGLE_APPLICATION_CREDENTIALS=/gcloud/cloud.json
  env_file:
    - ${CONFIG}
  command: worker
As you can see, the env_file is passed in.
I am unable to work out how to do the same in JupyterHub.
The Helm chart is here: https://jupyterhub.github.io/helm-chart/jupyterhub-0.8.2.tgz. And my config is:
proxy:
  secretToken: "yada_yada"
singleuser:
  image:
    name: yada_yada.dkr.ecr.ap-south-1.amazonaws.com/demo
    tag: 12h
  lifecycleHooks:
    postStart:
      exec:
        command: ["/bin/sh", "-c", 'ipython profile create; cd ~/.ipython/profile_default/startup; echo ''run_id = "sample" ''> aviral.py']
  imagePullSecret:
    enabled: true
    registry: yada_yada.dkr.ecr.ap-south-1.amazonaws.com
    username: aws
    email: aviral#yada_yada.com
    password: yada_yada
In my config file, I pass variables as:
ENVKEY=my_personal_envkey
I expect my configs to be passed into the Docker image (or perhaps I need to write a proper Makefile for this). As of now, I am facing this error:
Step 19/32 : RUN curl -s https://raw.githubusercontent.com/envkey/envkey-source/master/install.sh | bash
---> Running in 35bc1cf0e1c8
envkey-source 1.2.9 Quick Install
Copyright (c) 2019 Envkey Inc. - MIT License
https://github.com/envkey/envkey-source
Downloading envkey-source binary for linux-amd64
Downloading tarball from https://github.com/envkey/envkey-source/releases/download/v1.2.9/envkey-source_1.2.9_linux_amd64.tar.gz
envkey-source is installed in /usr/local/bin
Installation complete. Info:
bash: line 97: 29 Segmentation fault envkey-source -h
The command '/bin/sh -c curl -s https://raw.githubusercontent.com/envkey/envkey-source/master/install.sh | bash' returned a non-zero code: 139
Although this question alone should be enough to give you the picture, for the sake of context here are some related questions:
1. How do I make jupyter-hub access my private docker image repository?
2. Unable to run a lifecycle command from config.yaml while deploying jupyterhub
3. How to have a file written automatically in the startup folder when a new user signs up/in on JupyterHub?
You probably get this error because the install.sh script tries to put the envkey-source binary under the /usr/local/bin directory and then tries to run envkey-source -h, which fails. Check whether the user (if non-root) has permission to do that, and whether the /usr/local/bin directory exists in the container image.
Hope it helps!
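To make that check concrete, a minimal sketch of the relevant Dockerfile section (assuming the base image provides curl and bash, and switching to root for the install) would look like:
USER root
# ensure the target directory exists before install.sh copies the binary into it
RUN mkdir -p /usr/local/bin \
 && curl -s https://raw.githubusercontent.com/envkey/envkey-source/master/install.sh | bash
The ENVKEY value itself is better supplied at runtime (for example via the env_file mechanism shown in the question) rather than baked into the image.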
I have a project running on Docker with docker-compose for the dev environment.
I want to get it running on GitLab CI with a gitlab-ci-multi-runner "Docker mode" instance.
Here is my .gitlab-ci.yml file:
image: soullivaneuh/docker-bash
before_script:
- apk add --update bash curl
- curl --silent --location https://github.com/docker/compose/releases/download/1.5.2/docker-compose-`uname -s`-`uname -m` > /usr/local/bin/docker-compose
- chmod +x /usr/local/bin/docker-compose
- ./configure
- docker-compose up -d
Note that the soullivaneuh/docker-bash image is just a Docker image with bash installed.
The script fails on the docker-compose up -d command:
gitlab-ci-multi-runner 0.7.2 (998cf5d)
Using Docker executor with image soullivaneuh/docker-bash ...
Pulling docker image soullivaneuh/docker-bash:latest ...
Running on runner-1ee5079f-project-3-concurrent-1 via sd-59984...
Fetching changes...
Removing app/config/parameters.yml
Removing docker-compose.env
HEAD is now at 5c5e7ff remove docker service
From https://git.dummy.net/project/project
5c5e7ff..45e643d docker-ci -> origin/docker-ci
Checking out 45e643dd as docker-ci...
Previous HEAD position was 5c5e7ff... remove docker service
HEAD is now at 45e643d... Remove docker info commands
$ apk add --update bash curl
fetch http://dl-4.alpinelinux.org/alpine/v3.2/main/x86_64/APKINDEX.tar.gz
OK: 10 MiB in 28 packages
$ curl --silent --location https://github.com/docker/compose/releases/download/1.5.2/docker-compose-`uname -s`-`uname -m` > /usr/local/bin/docker-compose
$ chmod +x /usr/local/bin/docker-compose
$ ./configure
$ docker-compose up -d
bash: line 30: /usr/local/bin/docker-compose: No such file or directory
ERROR: Build failed with: exit code 1
I have absolutely no idea why this is failing.
Thanks for the help.
The No such file or directory error is misleading. I've run into it many times while trying to run dynamically linked binaries on Alpine Linux (which it appears you are using).
The problem (as I understand it) is that the binary was compiled and linked against glibc, but alpine uses musl, not glibc.
You could use ldd /usr/local/bin/docker-compose to tell you which libraries are missing (or run it with strace if all else fails).
To get it working, it might be easier to install it with pip (https://docs.docker.com/compose/install/#install-using-pip), which is what the official compose image does (https://github.com/docker/compose/blob/master/Dockerfile.run).
Or you could use an image built on debian or some other distro that uses glibc.
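For the pip route, a sketch of the adjusted .gitlab-ci.yml (assuming a glibc-based Python image; the rest of your setup, such as access to the Docker daemon, stays as before):
image: python:3

before_script:
  - pip install docker-compose
  - ./configure
  - docker-compose up -d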