I need to run PhantomJS server to generate images on demand. When I set this up on a standard Amazon Linux EC2 instance, it works fine.
However, I want to distribute it in a Docker container. Using the Amazon Linux base (http://docs.aws.amazon.com/AmazonECR/latest/userguide/amazon_linux_container_image.html), I include the following RPMs:
RUN \
yum update -y && \
yum install -y tar && \
yum install -y bzip2 && \
yum install -y freetype6 && \
yum install -y fontconfig && \
yum install -y freetype-devel && \
yum install -y fontconfig-devel && \
yum install -y libicu-devel && \
yum install -y libpng-devel && \
yum install -y libjpeg-devel && \
yum install -y gperf && \
yum install -y bison && \
yum install -y flex && \
yum install -y gcc && \
yum install -y gcc-c++
And then set up the phantomjs server as I did on the standard EC2 instance.
When launched, this generates images, but the images are missing their text labels. I can't find any debug output, and I didn't write the original code to generate the image.
Could anyone suggest what might be missing from the Docker container? I didn't have to install any extra libraries in the EC2 instance to get it to work. I've also tried increasing the spec of the host instance image in case there were issues with RAM.
Sample broken image:
https://gm1.ggpht.com/RxVy2Q6KpRVRxSPCoVEupfnl2ieHY9dr9Vu8o9P4JOjw4FqVsEfPgW1leA59R8n2hNF9u6cmL3LLO3idArCWBiE1EFpIz5CI9n29z1_95sC0lesTy6oxkcIoBoHMFNdMNSqURW9Sc1Is8Sd1t-YWsQKgJvtUsotBmRaEOWSKr7JpyjY6stSl1xJiJ5enc7ccvKTkPcuFNMl_NQCrv9b44brzpFjO2y6ZDrfBZolFXc-hqXvbRFazsRd-IVFh4mENLxVmQpeqbRug-egBHV_LCmj0ohBToxT4_b6_pqZpim9MZR6KFCX7QDu-rGtlhpMeweeDZ8uRkPwYyZ48hiEAQpVPAfsHNQGHR_kcRSN7-3bKDZJKjvPtcQjn-5bR-AMwX5B8iqFGyLLaG4QeA7AykmPJ4LGrX8aboPRRSdkH9EdYwEa4wH4IogHa6m4-OobG1FLdEgnveHzVL4XkB3zesrKa3-t5TgdL8nP9xTLaId2uLdqVO39QPTxKGrutyFJst1WhsdoUiBYhLD4JQZW0COBaQB9Kdu-anLpgaZ4oObrtqfzVRxrjdL5s7Qf_FagPtyZiSra2RfF3uDEpjRi0w3BSd8P-PvC2jmTqvuMz4rK-Go9pLLU1Dsqz3mR7p70yE7SVTzVy61YJLYT_NW3vAgHIir_HuJ4fpA3vg8qc2WGgUbOB83QtBsxQoIvu0oyIqq7k7pYzJ6SKCA=s0-l75-ft-l75-ft
I eventually resolved this by using an Ubuntu base rather than an Amazon Linux base. Never determined what was missing from Amazon Linux.
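For anyone hitting the same symptom: missing labels in PhantomJS renders are usually a missing-fonts problem, and a bare Amazon Linux container ships with no fonts installed at all. I never went back to verify it, but installing fontconfig plus an actual font package and rebuilding the font cache might well have fixed the original base too. An untested sketch (package names assumed from the Amazon Linux repos):

```
RUN yum install -y fontconfig freetype dejavu-sans-fonts && \
    fc-cache -f && \
    yum clean all
```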
I just don't get it right ... I'm trying to create a Docker container with Vagrant installed to lint my Vagrantfiles in a GitLab CI pipeline. But all I get is this result:
$ cd src/main/kube-cluster && vagrant validate
==> vagrant: A new version of Vagrant is available: 2.2.16 (installed version: 2.2.6)!
==> vagrant: To upgrade visit: https://www.vagrantup.com/downloads.html
No usable default provider could be found for your system.
Vagrant relies on interactions with 3rd party systems, known as
"providers", to provide Vagrant with resources to run development
environments. Examples are VirtualBox, VMware, Hyper-V.
The easiest solution to this message is to install VirtualBox, which
is available for free on all major platforms.
If you believe you already have a provider available, make sure it
is properly installed and configured. You can see more details about
why a particular provider isn't working by forcing usage with
`vagrant up --provider=PROVIDER`, which should give you a more specific
error message for that particular provider.
For full pipeline, see https://gitlab.com/sommerfeld.sebastian/v-kube-cluster/-/pipelines/304344599
My Dockerfile is rather simple
FROM ubuntu:focal
LABEL maintainer="sommerfeld.sebastian@gmail.com"
# Avoid being stuck at the tzdata prompt
ENV TZ="Europe/Berlin"
RUN ln -snf /usr/share/zoneinfo/$TZ /etc/localtime && echo $TZ > /etc/timezone
RUN apt-get update \
&& apt-get install -y --no-install-recommends virtualbox=6.1.6-dfsg-1 virtualbox-qt=6.1.6-dfsg-1 virtualbox-dkms=6.1.6-dfsg-1 \
&& vboxmanage --version \
&& apt-get install -y --no-install-recommends ca-certificates=20210119~20.04.1 \
&& apt-get install -y --no-install-recommends curl=7.68.0-1ubuntu2.5 \
&& apt-get install -y --no-install-recommends libcurl4=7.68.0-1ubuntu2.5 \
&& curl -O https://releases.hashicorp.com/vagrant/2.2.6/vagrant_2.2.6_x86_64.deb \
&& apt-get install -y --no-install-recommends ./vagrant_2.2.6_x86_64.deb \
&& vagrant --version \
&& apt-get clean \
&& rm -rf /var/lib/apt/lists/*
But whatever I try, I always get the above result from my pipeline. Does anyone have an idea how I can fix my image?
Alternatively, I'm also open to another way to lint my Vagrantfiles.
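One other angle, in case upgrading Vagrant is acceptable: if I remember correctly, releases from 2.2.8 onward add an --ignore-provider flag to vagrant validate, which skips the provider check entirely and would let you drop VirtualBox from the image altogether. An untested sketch of the pipeline step:

```
# requires Vagrant >= 2.2.8; check `vagrant validate -h` for the flag
cd src/main/kube-cluster && vagrant validate --ignore-provider
```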
I have set up a Docker image on my Windows 10 machine.
Can you please tell me how can I install ffmpeg to that docker image?
apt-get install ffmpeg
As of 2019, this seems to work just fine in Docker.
Dockerfile
RUN apt-get -y update
RUN apt-get -y upgrade
RUN apt-get install -y ffmpeg
In your Dockerfile you can write the following to add the required repo, update your package index, and then install ffmpeg. Though I am not sure this PPA still exists; I just modified this link for Docker, and you can follow the same pattern to install other packages.
RUN set -x \
    && apt-get update \
    && apt-get install -y --no-install-recommends software-properties-common \
    && add-apt-repository ppa:mc3man/trusty-media \
    && apt-get update \
    && apt-get dist-upgrade -y \
    && apt-get install -y --no-install-recommends \
        ffmpeg
If the build stops at the geographic area selection (the tzdata prompt) on Ubuntu 18.04 or above, you can avoid it by specifying ENV DEBIAN_FRONTEND=noninteractive as shown below.
FROM ubuntu:18.04
ENV DEBIAN_FRONTEND=noninteractive
RUN apt-get update && apt-get install -y ffmpeg
As of July 2022, this works in a Dockerfile; at least it worked for me.
RUN apt-get -y update && apt-get -y upgrade && apt-get install -y --no-install-recommends ffmpeg
Just to let the community know.
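Whichever variant you go with, it can be worth adding a verification step so the build fails immediately if the package didn't actually install. A minimal sketch (base image chosen arbitrarily):

```
FROM ubuntu:22.04
# ffmpeg -version makes the build fail right away if the install went wrong
RUN apt-get update \
    && apt-get install -y --no-install-recommends ffmpeg \
    && rm -rf /var/lib/apt/lists/* \
    && ffmpeg -version
```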
I'm building docker image with a Dockerfile:
FROM centos:centos7.1.1503
MAINTAINER foo <foo@bar.com>
ENV TZ "Asia/Shanghai"
ENV TERM xterm
RUN \
yum update -y && \
yum install -y epel-release && \
yum update -y && \
yum install -y curl wget tar bzip2 unzip vim-enhanced passwd sudo yum-utils hostname net-tools rsync man && \
yum install -y gcc gcc-c++ git make automake cmake patch logrotate python-devel libpng-devel libjpeg-devel && \
yum install -y pwgen python-pip && \
yum clean all
and it show the error as below:
Error: libselinux conflicts with fakesystemd-1-17.el7.centos.noarch
If I change FROM centos:centos7.1.1503 to FROM centos:centos7, everything works fine. So what should I do to keep using centos:centos7.1.1503?
My Linux distribution is Ubuntu 16.04.1 LTS and my Docker version is 1.12.6.
Try running this inside the container you create, before any installation is made:
yum swap -y fakesystemd systemd && yum clean all
yum update -y && yum clean all
Or put it inside your Dockerfile at the beginning, before the first RUN you have typed:
RUN yum swap -y fakesystemd systemd && yum clean all \
&& yum update -y && yum clean all
Hope this was useful!
I have installed TensorFlow using the following command
docker run -it b.gcr.io/tensorflow/tensorflow:latest-devel
and I need to set up TensorFlow Serving on a Windows machine. I followed the instructions, and while installing the TensorFlow Serving dependencies I ran the sudo command mentioned below:
sudo apt-get update && sudo apt-get install -y \
build-essential \
curl \
git \
libfreetype6-dev \
libpng12-dev \
libzmq3-dev \
pkg-config \
python-dev \
python-numpy \
python-pip \
software-properties-common \
swig \
zip \
zlib1g-dev
The following error is displayed:
bash: sudo: command not found
Docker runs as root by default, so it doesn't require sudo.
BTW, if you do want sudo in Docker, you can install it.
Try this:
apt-get update && \
apt-get -y install sudo
Now you can use sudo along with your commands in Docker.
Docker images typically do not have sudo, you are already running as root by default. Try
apt-get update && apt-get install -y build-essential curl git libfreetype6-dev libpng12-dev libzmq3-dev pkg-config python-dev python-numpy python-pip software-properties-common swig zip zlib1g-dev
If you wish not to run as root, see the Docker documentation on the USER instruction.
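For example, a minimal sketch of a Dockerfile that switches to a non-root user (the base image and user name here are made up for illustration):

```
FROM ubuntu:20.04
# create an unprivileged user; "appuser" is just an example name
RUN useradd --create-home appuser
USER appuser
WORKDIR /home/appuser
```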
We don't want to weaken the security of the container by installing sudo in it. But we also don't want to change existing scripts that work on normal machines just so that they work in docker, which doesn't need the sudo.
Instead, we can define a dummy bash script to replace sudo, which just executes the arguments without elevating permissions, and is only defined inside the docker image.
Add this to your Dockerfile:
# Make sudo dummy replacement, so we don't weaken docker security
RUN echo "#!/bin/bash\n\$@" > /usr/bin/sudo
RUN chmod +x /usr/bin/sudo
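To see what the dummy does, you can reproduce the trick in a plain shell. This sketch writes the script to a temp dir instead of /usr/bin (so no root is needed) and uses printf rather than echo for portable \n handling; the script body is just "$@", which executes its arguments without elevating anything:

```shell
# Recreate the dummy sudo script in a temp dir and run a command through it.
tmp=$(mktemp -d)
printf '#!/bin/bash\n"$@"\n' > "$tmp/sudo"
chmod +x "$tmp/sudo"
# "sudo echo ..." now just runs: echo "hello from fake sudo"
out=$("$tmp/sudo" echo "hello from fake sudo")
echo "$out"   # prints: hello from fake sudo
```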
I have the following Dockerfile, and I want to install an rpm file that is available on my disk while building the Docker image. My invocation looks like this, but the command
RUN rpm -i chrpath-0.13-14.el7.x86_64.rpm fails.
Is there a way to install rpm file available locally to new Docker instance?
FROM centos:latest
RUN yum -y install yum-utils
RUN yum -y install python-setuptools
RUN easy_install supervisor
RUN mkdir -p /var/log/supervisor
RUN yum -y install which
RUN yum -y install git
# Basic build dependencies.
RUN yum -y install autoconf build-essential unzip zip
# Gold linker is much faster than standard linker.
RUN yum -y install binutils
# Developer tools.
RUN yum -y install bash-completion curl emacs git man-db python-dev python-pip vim tar
RUN yum -y install gcc gcc-c++ kernel-devel make
RUN yum -y install swig
RUN yum -y install wget
RUN yum -y install python-devel
RUN yum -y install ntp
RUN rpm -i chrpath-0.13-14.el7.x86_64.rpm
Put this line before your rpm -i command (note that the rpm file has to live inside your build context, since ADD source paths are resolved relative to it):
ADD chrpath-0.13-14.el7.x86_64.rpm /chrpath-0.13-14.el7.x86_64.rpm
Then you'll be able to do
RUN rpm -i chrpath-0.13-14.el7.x86_64.rpm
As an addendum to what others have written here, rather than using:
RUN rpm -i xyz.rpm
You might be better off doing this:
RUN yum install -y xyz.rpm
The latter has the advantages that (a) it checks the signature, (b) downloads any dependencies, and (c) makes sure YUM knows about the package. This last bit is less important than the other two, but it's still worthwhile.
Suppose your Dockerfile is at /opt/myproject/. Then you first have to put the rpm inside /opt/myproject and then add
ADD xyz.rpm /xyz.rpm
RUN rpm -i xyz.rpm