Install ffmpeg package inside docker container

I am trying to install the ffmpeg package in my Dockerfile, but after the build ffmpeg is not installed.
Here is my dockerfile:
FROM ubuntu:18.04
FROM maven:3.6.0-jdk-11-slim AS build
COPY src /home/app/src
COPY pom.xml /home/app
RUN mvn -f /home/app/pom.xml clean package -DskipTests
RUN apt-get update -y && apt-get install -y ffmpeg
FROM openjdk:11-jdk
COPY --from=build /home/app/target/car-1.0.jar /home/car.jar
EXPOSE 8080:8080
ENTRYPOINT ["java","-jar","/home/car.jar"]
If I enter the running container and run the command:
apt-get update -y && apt-get install -y ffmpeg
the package is successfully installed and works.
But with the Dockerfile above, after the build finished I typed:
whereis ffmpeg
and it came back empty.
I also tried the static ffmpeg build inside the Dockerfile, but it does not work either: https://hub.docker.com/r/mwader/static-ffmpeg/
COPY --from=mwader/static-ffmpeg:4.3.1 /ffmpeg /usr/local/bin/
COPY --from=mwader/static-ffmpeg:4.3.1 /ffprobe /usr/local/bin/
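For what it's worth, the usual cause here is the multi-stage build: every FROM line starts a fresh stage, so the RUN apt-get install in the maven stage never reaches the final openjdk:11-jdk image, and the first FROM ubuntu:18.04 stage is never used at all. A sketch of a fix, assuming the rest of the build is unchanged, installs ffmpeg in the final stage:
FROM maven:3.6.0-jdk-11-slim AS build
COPY src /home/app/src
COPY pom.xml /home/app
RUN mvn -f /home/app/pom.xml clean package -DskipTests
FROM openjdk:11-jdk
# Install ffmpeg in the stage the container actually runs from
RUN apt-get update -y && apt-get install -y ffmpeg && rm -rf /var/lib/apt/lists/*
COPY --from=build /home/app/target/car-1.0.jar /home/car.jar
EXPOSE 8080
ENTRYPOINT ["java","-jar","/home/car.jar"]
The static-ffmpeg COPY --from lines are fine as written, but they likewise only end up in the shipped image if they appear after the final FROM openjdk:11-jdk, not in the build stage.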

Related

Pypy datascience docker image

I have some datascience projects running in docker containers (I use k8s). I am trying to speed up my code by using pypy as my interpreter, but this has been a nightmare.
My OS is ubuntu 20.04
The main libraries I need are:
SQLAlchemy
SciPy
gRPC
For grpc I'm using grpclib, and for SciPy I'm installing it using the miniconda docker image.
My final hurdle is installing psycopg2cffi to make SQLAlchemy work, but after a couple of all-nighters I still haven't managed to make this work. I can install it, but when I run it I get a SCRAM authentication error that I've seen others hit as well (see the note after the Dockerfile below).
Is there a pypy docker file someone has already created that has datascience libraries in it? It doesn't seem like something no one has tried before.
Here's my Dockerfile so far:
FROM conda/miniconda3 as base
# Set up conda env with pypy3 as the interpreter
RUN conda create -c conda-forge -n pypy-env pypy python=3.8 -y
ENV PATH="/usr/local/envs/pypy-env/bin:$PATH"
RUN pypy -m ensurepip
RUN apt-get -y update && \
    apt-get -y install build-essential g++ python3-dev libpq-dev
# Install big/annoying libraries first
RUN pip install psycopg2cffi
RUN conda install scipy -y
RUN pip install numpy
WORKDIR /home
COPY ./core/requirements/requirements.txt .
COPY ./core/requirements/basic_requirements.txt .
RUN pip install -r ./requirements.txt
FROM python:3.8-slim as final
WORKDIR /home
COPY --from=base /usr/lib/x86_64-linux-gnu/libpq* /usr/lib/x86_64-linux-gnu/
COPY --from=base /usr/local/envs/pypy-env /usr/local/envs/pypy-env
ENV PATH="/usr/local/envs/pypy-env/bin:$PATH"
COPY .env .env
COPY ./src/ .
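One possible lead on the SCRAM error mentioned above: SCRAM-SHA-256 authentication needs libpq 10 or newer, so it is worth confirming which libpq actually ends up in the final image, and that the conda-built packages still import on the slim base. A minimal smoke test that could be appended to the final stage (a sketch, using the paths from the Dockerfile above and assuming SQLAlchemy is in requirements.txt):
# Rebuild the linker cache so the copied libpq is found, then
# confirm the pypy env actually works on this base image
RUN ldconfig && \
    pypy -c "import sqlalchemy, scipy; print('pypy env ok')"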

How to add copy files to Docker image

I'm running a python script to manipulate pictures. When I run the test, the system obviously does not find the images. I'm new to Docker and don't really understand how to do that.
This is how the structure looks
And this is the dockerfile
FROM ubuntu:latest
RUN apt update
RUN apt install python3 -y
RUN apt-get -y install python3-pip
RUN pip install pillow
RUN pip install wand
RUN DEBIAN_FRONTEND="noninteractive" apt-get install libmagickwand-dev --no-install-recommends -y
WORKDIR /usr/app/src
COPY image_converter.py ./
COPY test_image_converter.py ./
RUN python3 -m unittest test_image_converter.py
COPY image_converter.py ./
COPY test_image_converter.py ./
Here you've COPYed the *.py files to ./ (== WORKDIR),
so, in the same way copy your image files to your WORKDIR
such as
COPY test_images ./
This should work
IDK if this will help, but try taking a look at its content:
https://www.geeksforgeeks.org/copying-files-to-and-from-docker-containers/
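For reference, the gist of that link: docker cp copies files into or out of an existing container, which is handy for poking at a running container, though for a reproducible build you still want COPY in the Dockerfile. A sketch (the container name is a placeholder):
docker cp ./test_images mycontainer:/usr/app/src/test_images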
First, you didn't copy the image folder; you should have these commands:
WORKDIR /usr/app/src
COPY test_images ./
Second, you should combine Docker commands of the same type (to reduce layers, and hence the image's size).
example : instead of saying
COPY image_converter.py ./
COPY test_image_converter.py ./
You can write
COPY *.py ./
Another example :
RUN apt update
RUN apt install python3 -y
You can write
RUN apt update; apt install python3 -y
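Putting both answers together, a sketch of the full Dockerfile (file and folder names taken from the question, and && used so a failed update aborts the build instead of being silently skipped):
FROM ubuntu:latest
# One layer for all apt packages
RUN apt-get update && \
    DEBIAN_FRONTEND=noninteractive apt-get install -y --no-install-recommends \
        python3 python3-pip libmagickwand-dev && \
    rm -rf /var/lib/apt/lists/*
RUN pip3 install pillow wand
WORKDIR /usr/app/src
COPY *.py ./
# COPY of a directory copies its contents, so name the target
# folder to keep the same relative path the tests expect
COPY test_images ./test_images/
RUN python3 -m unittest test_image_converter.py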

Creating a dockerfile for a .deb file

I want to create a Dockerfile that installs a Debian (.deb) package on Ubuntu 18.04. So far I've written this:
FROM ubuntu:18.04 AS ubuntu
RUN apt-get update
WORKDIR /Downloads/invisily
RUN apt-get install ./invisily.deb
All steps run fine except the last one. It shows this error:
E: Unsupported file ./invisily.deb given on commandline
The command '/bin/sh -c apt-get install ./invisily.deb' returned a non-zero code: 100
I'm new to Docker and the cloud, so any help would be appreciated. Thanks!
Edit:
I solved it by putting the Dockerfile and the .deb file in the same directory and using COPY . ./
This is what my dockerfile looks like now:
FROM ubuntu:18.04 AS ubuntu
RUN apt-get update
WORKDIR /invisily
COPY . ./
USER root
RUN chmod +x a.deb && \
    apt-get install -y ./a.deb
A few things,
WORKDIR is the working directory inside of your container.
You will need to copy the file invisily.deb from locally to your container when building your Docker image.
You can pass multiple bash commands in the RUN field combining them with multilines.
Try something like this
FROM ubuntu:18.04 AS ubuntu
WORKDIR /opt/invisily
#Drop the invisily.deb in to the same directory as your Dockerfile
#This will copy it from local to your container, inside of /opt/invisily dir
COPY invisily.deb .
RUN apt-get update && \
    chmod +x invisily.deb && \
    apt-get install -y ./invisily.deb
In your WORKDIR there isn't any invisily.deb file, so if you have it locally you can copy it into the container like this:
FROM ubuntu ...
WORKDIR /Downloads/invisily
RUN apt-get update
COPY ./path/to/invisily.deb ./
RUN chmod +x ./invisily.deb
RUN apt-get install -y ./invisily.deb
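One detail that bites people here: apt-get only treats an argument as a local file when the path contains a slash, so a bare invisily.deb is parsed as a package name (which is why the ./ prefix in the commands above matters). If the image's apt is too old to install local .deb files at all, the classic dpkg fallback is a sketch like:
# Install the local package, then let apt fix up any missing dependencies
RUN apt-get update && \
    (dpkg -i ./invisily.deb || apt-get install -y -f)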

Docker, sbt - cannot install sbt with centos docker image

I would like to have sbt in my docker image. I created a Dockerfile based on the centos:centos8 image:
FROM centos:centos8
ENV SCALA_VERSION 2.13.1
ENV SBT_VERSION 0.13.18
RUN yum install -y epel-release
RUN yum update -y && yum install -y wget
RUN wget -O /usr/local/bin/sbt-launch.jar http://repo.typesafe.com/typesafe/ivy-releases/org.scala-sbt/sbt-launch/$SBT_VERSION/sbt-launch.jar
WORKDIR /root
EXPOSE 8080
RUN sbt compile
CMD sbt run
And I also need to have sbt installed here, but when I run this build I get an error:
Step 10/11 : RUN sbt compile
---> Running in 0aadcd774ba0
/bin/sh: sbt: command not found
I cannot understand why sbt could not be found. Is this a good way to achieve what I need, or should I try another one? But I need to do it with CentOS.
EDIT:
Finally it works, after help from the answer below. The working Dockerfile looks like this:
FROM centos:centos8
ENV SBT_VERSION 0.13.17
RUN yum install -y java-11-openjdk && \
    yum install -y epel-release && \
    yum update -y && yum install -y wget && \
    wget http://dl.bintray.com/sbt/rpm/sbt-$SBT_VERSION.rpm && \
    yum install -y sbt-$SBT_VERSION.rpm
WORKDIR /root
EXPOSE 8080
RUN sbt compile
CMD sbt run
You would need to install sbt inside your Dockerfile. Here is an example:
FROM centos:centos8
ENV SCALA_VERSION 2.13.1
ENV SBT_VERSION 0.13.17
RUN yum install -y epel-release
RUN yum update -y && yum install -y wget
# INSTALL JAVA
RUN yum install -y java-11-openjdk
# INSTALL SBT
RUN wget http://dl.bintray.com/sbt/rpm/sbt-${SBT_VERSION}.rpm
RUN yum install -y sbt-${SBT_VERSION}.rpm
RUN wget -O /usr/local/bin/sbt-launch.jar http://repo.typesafe.com/typesafe/ivy-releases/org.scala-sbt/sbt-launch/$SBT_VERSION/sbt-launch.jar
WORKDIR /root
EXPOSE 8080
RUN sbt compile
CMD sbt run
Note: I could not find the version you had in your env variable (0.13.18), so I changed it to 0.13.17.
I ran into an issue where bintray.com was returning 403s randomly. I'm assuming it might be some kind of traffic throttling, so I added the rpm file locally:
COPY sbt-0.13.18.rpm /
RUN yum install -y sbt-0.13.18.rpm
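Worth noting for anyone reading this later: bintray.com has since shut down, so the wget lines above no longer work at all. The sbt project now publishes its own rpm repository; a sketch of the install on centos:centos8 using it (repo URL from the official sbt setup docs):
FROM centos:centos8
# Official sbt rpm repo, replacing the old bintray downloads
RUN curl -L https://www.scala-sbt.org/sbt-rpm.repo > /etc/yum.repos.d/sbt-rpm.repo && \
    yum install -y java-11-openjdk sbt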

How do I add a package to an already existing image?

I have a RoR app that uses imagemagick specified in the Gemfile. I am using Docker's official rails image to build my image with the following Dockerfile:
FROM rails:onbuild
RUN apt-get install imagemagick
and get the following error:
Can't install RMagick 2.13.2. Can't find Magick-config in /usr/local/bundle/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
Now, that's probably because the imagemagick package is missing from the OS, even though I specified it in my Dockerfile. So I guess the bundle install command is issued before my RUN apt-get command.
My question - using this base image, is there a way to ensure imagemagick is installed prior to bundling?
Do I need to fork and change the base image Dockerfile to achieve that?
You are right: the ONBUILD instructions from the rails:onbuild image are executed just after the FROM instruction of your Dockerfile.
What I suggest is to change your Dockerfile as follow:
FROM ruby:2.2.0
RUN apt-get update && apt-get install -y imagemagick
# throw errors if Gemfile has been modified since Gemfile.lock
RUN bundle config --global frozen 1
RUN mkdir -p /usr/src/app
WORKDIR /usr/src/app
RUN apt-get update && apt-get install -y nodejs --no-install-recommends && rm -rf /var/lib/apt/lists/*
RUN apt-get update && apt-get install -y mysql-client postgresql-client sqlite3 --no-install-recommends && rm -rf /var/lib/apt/lists/*
COPY Gemfile /usr/src/app/
COPY Gemfile.lock /usr/src/app/
RUN bundle install
COPY . /usr/src/app
EXPOSE 3000
CMD ["rails", "server"]
which I made based on the rails:onbuild Dockerfile, moving the ONBUILD instructions down and removing the ONBUILD flavor.
Most images clean out the apt cache to save on size, so you need to run apt-get update before installing. Try this:
apt-get update && apt-get install -y imagemagick
Or spool up a copy of the container and look for yourself:
docker run -it --rm <myimagenameorid> /bin/bash
The --rm flag will ensure that the container is removed after you exit the shell. Once in the shell, look for the package binary (or run dpkg --list).
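A quick non-interactive variant of the same check, assuming the built image is tagged myrailsapp:
docker run --rm myrailsapp sh -c "dpkg -l | grep imagemagick"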
