How to install the ffmpeg command into my Docker container

I am doing this in my docker-compose.yml:
app:
  image: golang:1.14.3
  ports:
    - "8080:8080" ## Share API port with host machine.
  depends_on:
    - broker
    - ffmpeg
  volumes:
    - .:/go/src/go-intelligent-monitoring-system
    - /home/:/home/
  working_dir: /go/src/go-intelligent-monitoring-system
  command: apt-get install ffmpeg ########-------<<<<<<---------#################
  command: go run main.go
But when I use it in my code, I get this error:
exec: "ffmpeg": executable file not found in $PATH

Only the last command in a compose file takes effect, so you never get the chance to install ffmpeg with your current compose file.
Instead, you should install ffmpeg in a customized Dockerfile, like this:
app:
  build: ./dir
Put your customized Dockerfile in the above dir, like this:
Dockerfile:
FROM golang:1.14.3
RUN apt-get update && apt-get install ffmpeg -y
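Putting the answer together, the compose service would point at the directory containing the Dockerfile instead of pulling the stock golang image; a sketch, assuming the Dockerfile lives in ./dir as above:

```yaml
app:
  build: ./dir           # builds the Dockerfile above instead of pulling golang:1.14.3
  ports:
    - "8080:8080"
  volumes:
    - .:/go/src/go-intelligent-monitoring-system
  working_dir: /go/src/go-intelligent-monitoring-system
  command: go run main.go   # single command; ffmpeg is already baked into the image
```

Since ffmpeg is installed at build time, the second command: line is no longer needed and go run main.go can find the binary on $PATH.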

Related

Docker: How to use a Dockerfile in addition to docker-compose.yml

I am using this repo to set up a local WordPress development environment:
https://github.com/mjstealey/wordpress-nginx-docker#tldr
I hijacked the docker-compose file to change the nginx port, but also to try to install openssl and vim on the nginx server. But when I do a docker-compose up, the nginx server never starts properly.
This is what I see:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
57cf3dff059f nginx:latest "bash" About a minute ago Restarting (0) 14 seconds ago nginx
I tried to reference a Dockerfile inside the docker-compose like this:
nginx:
  image: nginx:${NGINX_VERSION:-latest}
  container_name: nginx
  ports:
    - '8085:8085'
    - '443:443'
  build: .
  volumes:
    - ${NGINX_CONF_DIR:-./nginx}:/etc/nginx/conf.d
    - ${NGINX_LOG_DIR:-./logs/nginx}:/var/log/nginx
    - ${WORDPRESS_DATA_DIR:-./wordpress}:/var/www/html
    - ${SSL_CERTS_DIR:-./certs}:/etc/letsencrypt
    - ${SSL_CERTS_DATA_DIR:-./certs-data}:/data/letsencrypt
  depends_on:
    - wordpress
  restart: always
Notice the line that says "build: .".
Here's the contents of my Dockerfile:
FROM debian:buster-slim
RUN apt-get update
# installing vim isn't necessary. but just handy.
RUN apt-get -y install openssl
RUN apt-get -y install vim
Clearly, I'm doing something wrong. Maybe I should be defining tasks directly in the docker-compose for the nginx server?
I wanted to find a way to make a clean separation between our customizations and the original code. But maybe this isn't possible.
Thanks
EDIT 1
This is what the Dockerfile looks like:
FROM nginx:latest
RUN apt-get update \
    && apt-get -y install openssl \
    && apt-get -y install vim
And the nginx section of the docker-compose.yml:
nginx:
  #image: nginx:${NGINX_VERSION:-latest}
  container_name: nginx
  ports:
    - '8085:8085'
    - '443:443'
  build: .
  volumes:
    - ${NGINX_CONF_DIR:-./nginx}:/etc/nginx/conf.d
    - ${NGINX_LOG_DIR:-./logs/nginx}:/var/log/nginx
    - ${WORDPRESS_DATA_DIR:-./wordpress}:/var/www/html
    - ${SSL_CERTS_DIR:-./certs}:/etc/letsencrypt
    - ${SSL_CERTS_DATA_DIR:-./certs-data}:/data/letsencrypt
  depends_on:
    - wordpress
  restart: always
I think you may need to change the base image that you are using in the Dockerfile:
FROM nginx:latest
Then, because you are creating your own customised version of the nginx image, you should give it a custom name or tag.
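Building on that suggestion, a sketch of how compose can both build the custom image and tag it, so the result doesn't shadow the stock nginx image (the tag name custom-nginx is an assumption, not from the original post):

```yaml
nginx:
  build: .
  image: custom-nginx:latest   # tag for the image built from the local Dockerfile
  container_name: nginx
  ports:
    - '8085:8085'
    - '443:443'
```

With build and image set together, docker-compose build produces an image named custom-nginx:latest from the FROM nginx:latest Dockerfile, and the service runs that instead of the upstream image.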

go get in Dockerfile. I got cannot find package error

I added RUN go get to install packages during the docker-compose build. However, the following "cannot find package" error occurred when I ran go build. I found that the packages are saved in /go/pkg/linux_amd64/.
run docker-compose and go build
$ docker-compose up -d
$ docker exec -it explorer-cli /bin/bash
# pwd
/go
# ls
bin pkg src
# echo $GOPATH
/go
# ls /go/pkg/linux_amd64/github.com/
go-sql-driver
# go build -i -o /go/bin/explorer-cli src/main.go
src/main.go:6:2: cannot find package "github.com/go-sql-driver/mysql" in any of:
/usr/local/go/src/github.com/go-sql-driver/mysql (from $GOROOT)
/go/src/github.com/go-sql-driver/mysql (from $GOPATH)
(it worked when I ran "go get" manually)
# go get github.com/go-sql-driver/mysql
# ls src/
github.com main.go
# go build -i -o /go/bin/explorer-cli src/main.go
docker-compose.yml
version: '3.4'
services:
  mysql:
    image: mysql:latest
    container_name: database
    volumes:
      - ./docker/:/etc/mysql/conf.d
      - ./docker/:/docker-entrypoint-initdb.d
    environment:
      - MYSQL_RANDOM_ROOT_PASSWORD=true
      - MYSQL_DATABASE=explorer
      - MYSQL_USER=admin
      - MYSQL_PASSWORD=12dlql*41
  app:
    build: .
    tty: true
    image: explorer-cli:latest
    container_name: explorer-cli
    volumes:
      - ./src:/go/src
    external_links:
      - database
Dockerfile
FROM golang:latest
RUN apt-get update
RUN apt-get upgrade -y
ENV GOBIN /go/bin
RUN go get github.com/go-sql-driver/mysql
main.go
package main

import (
	"database/sql"

	_ "github.com/go-sql-driver/mysql"
)

func main() {
	db, err := sql.Open("mysql", "XUSER:XXXX@(database:3306)/explorer")
	if err != nil {
		panic(err.Error())
	}
	defer db.Close()
}
Update 1
I noticed big differences between the following directories.
# ls /go/pkg/linux_amd64/github.com/go-sql-driver/
mysql.a
# ls /go/src/github.com/go-sql-driver/mysql/
AUTHORS connection_go18_test.go packets.go
CHANGELOG.md connection_test.go packets_test.go
CONTRIBUTING.md const.go result.go
LICENSE driver.go rows.go
README.md driver_go18_test.go statement.go
appengine.go driver_test.go statement_test.go
benchmark_go18_test.go dsn.go transaction.go
benchmark_test.go dsn_test.go utils.go
buffer.go errors.go utils_go17.go
collations.go errors_test.go utils_go18.go
connection.go fields.go utils_go18_test.go
connection_go18.go infile.go utils_test.go
Update 2
As @aerokite said, the "volumes" entry was overwriting the downloaded packages. I changed it as follows and it worked.
Dockerfile
FROM golang:latest
RUN apt-get update
RUN apt-get upgrade -y
ENV GOBIN /go/bin
RUN go get github.com/go-sql-driver/mysql
RUN mkdir /go/src/explorer-cli
docker-compose.yml
version: '3.4'
services:
  mysql:
    image: mysql:latest
    container_name: database
    volumes:
      - ./docker/:/etc/mysql/conf.d
      - ./docker/:/docker-entrypoint-initdb.d
    environment:
      - MYSQL_RANDOM_ROOT_PASSWORD=true
      - MYSQL_DATABASE=explorer
      - MYSQL_USER=XUSER
      - MYSQL_PASSWORD=XXXX
  app:
    build: .
    tty: true
    image: explorer-cli:latest
    container_name: explorer-cli
    volumes:
      - ./src/explorer-cli:/go/src/explorer-cli
    external_links:
      - database
go build
go build -i -o /go/bin/explorer-cli src/explorer-cli/main.go
I have tried to recreate your problem.
FROM golang:latest
RUN apt-get update
RUN apt-get upgrade -y
ENV GOBIN /go/bin
RUN go get github.com/go-sql-driver/mysql
You have provided this Dockerfile. I built it:
$ docker build -t test .
Now I exec into a container from this image to run your go build command.
$ docker run -it test bash
Then I created the main.go you provided in the /go/src directory.
And finally, it built successfully without any error:
$ go build -i -o /go/bin/explorer-cli src/main.go
And I think I have found your problem. I have never used docker-compose, but you will understand.
Problem is here:
app:
  build: .
  tty: true
  image: explorer-cli:latest
  container_name: explorer-cli
  volumes:
    - ./src:/go/src    <-- problem is here
  external_links:
    - database
You are mounting ./src into the /go/src directory in your container. This overwrites the /go/src directory inside the container with your local ./src, and that removes the data you got from go get github.com/go-sql-driver/mysql.
Do you understand?
But when you run go get github.com/go-sql-driver/mysql again, it fetches the data again.
Solution (01):
Mount your local volume into somewhere else.
volumes:
  - ./src:/tmp/src
And modify your Dockerfile to move this main.go to /go/src
Solution (02):
Copy main.go into your docker. Add this line in Dockerfile
COPY ./src/main.go /go/src
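A minimal sketch of Solution (02), assuming main.go sits in ./src on the host next to the compose file:

```dockerfile
FROM golang:latest
ENV GOBIN /go/bin
# Dependency is fetched at build time and baked into the image
RUN go get github.com/go-sql-driver/mysql
# Copy the source into the image instead of bind-mounting over /go/src
COPY ./src/main.go /go/src/
```

Since nothing is mounted over /go/src at runtime, the package fetched by go get survives, and go build -i -o /go/bin/explorer-cli src/main.go works inside the container.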

How to install packages from Docker compose?

Hi there, I am new to Docker. I have a docker-compose.yml which looks like this:
version: "3"
services:
  lmm-website:
    image: lmm/lamp:php${PHP_VERSION:-71}
    container_name: ${CONTAINER_NAME:-lmm-website}
    environment:
      HOME: /home/user
    command: supervisord -n
    volumes:
      - ..:/builds/lmm/website
      - db_website:/var/lib/mysql
    ports:
      - 8765:80
      - 12121:443
      - 3309:3306
    networks:
      - ntw
volumes:
  db_website:
networks:
  ntw:
I want to install the Yarn package manager from within the docker-compose file:
sudo apt-get update && sudo apt-get install yarn
I could not figure out how to declare this. I have tried
command: supervisord -n && sudo apt-get update && sudo apt-get install yarn
which fails silently. How do I declare this correctly? Or is docker-compose.yml the wrong place for this?
Why not use a Dockerfile, which is specifically designed for this task?
Change your image property to a build property to link a Dockerfile.
Your docker-compose.yml would look like this:
services:
  lmm-website:
    build:
      context: .
      dockerfile: Dockerfile
    container_name: ${CONTAINER_NAME:-lmm-website}
    environment:
      HOME: /home/user
    command: supervisord -n
    volumes:
      - ..:/builds/lmm/website
      - db_website:/var/lib/mysql
    ports:
      - 8765:80
      - 12121:443
      - 3309:3306
    networks:
      - ntw
volumes:
  db_website:
networks:
  ntw:
Then create a text file named Dockerfile in the same path as docker-compose.yml with the following content:
FROM lmm/lamp:php${PHP_VERSION:-71}
RUN apt-get update && apt-get install -y bash
You can add as many OS commands as you want using Dockerfile's RUN (cp, mv, ls, bash...), apart from other Dockerfile capabilities like ADD, COPY, etc.
+info:
https://docs.docker.com/engine/reference/builder/
+live-example:
I made a GitHub project called hello-docker-react. As its name says, it is a docker-react box, and it can serve as an example: I install yarn plus other tools using the procedure explained above.
In addition, I also start yarn using an entrypoint bash script linked to the docker-compose.yml file via the docker-compose entrypoint property.
https://github.com/lopezator/hello-docker-react
You can only do it with a Dockerfile, because the command option in docker-compose.yml only keeps the container alive for as long as the command runs, and then the container stops.
Try this:
command: bash -c "apt-get update && apt-get install -y yarn && supervisord -n"
Because sudo doesn't work in Docker, and the install has to happen before supervisord -n takes over the foreground (and compose needs a shell to interpret the && chain).
This is my first time trying to help out. I would like you to give this a try (I found it on the internet):
FROM lmm/lamp:php${PHP_VERSION:-71}
USER root
RUN apt-get update && apt-get install -y bash

Docker Rails Tutorial generated files not exists

I am trying basic Docker & Rails tutorials on my Windows 10 Home OS with Docker Toolbox.
Client: 17.05.0-ce
Server: 17.06.0-ce
And hello-world tutorials works!
Now I am trying this youtube tutorial: https://www.youtube.com/watch?v=KH6pcHb6Wug&lc=z12ocxayznynslzjj04chbtgiwbhuf4z5xk0k.1499518307572479
And everything looks okay until I check the Rails-generated project files.
The terminal shows the files being generated, but when I use the command 'ls -l' it shows only my manually created files (4).
What's happening with Rails generated files?
Where they go?
Here is docker-compose.yml content:
version: '2'
services:
  db:
    image: postgres
  web:
    build: .
    command: bundle exec rails s -p 3000 -b '0.0.0.0'
    volumes:
      - .:/deep
    ports:
      - "3000:3000"
    depends_on:
      - db
Here is Dockerfile content:
FROM ruby:2.3.3
ENV HOME /home/rails/deep
# Install PGsql dependencies and js engine
RUN apt-get update -qq && apt-get install -y build-essential libpq-dev nodejs
WORKDIR $HOME
# Install gems
ADD Gemfile* $HOME/
RUN bundle install
# Add the app code
ADD . $HOME
Here is my terminal at end: https://ibb.co/c2eqFF
I found the solution:
https://github.com/laradock/laradock/issues/508
Just place a .env file next to your docker-compose.yml file, with the following content: COMPOSE_CONVERT_WINDOWS_PATHS=1
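For reference, .env is just a one-line text file; placed next to docker-compose.yml it would contain:

```shell
# .env — read automatically by docker-compose from the project directory
COMPOSE_CONVERT_WINDOWS_PATHS=1
```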

Access volume in docker build

I am using Docker Compose and I have created a volume. I have multiple containers, and I am facing an issue running commands in the Docker container.
I have a Node.js container which has separate frontend and backend folders. I need to run npm install in both folders.
version: '2'
services:
  ### Applications Code Container #############################
  applications:
    image: tianon/true
    volumes:
      - ${APPLICATION}:/var/www/html
  node:
    build:
      context: ./node
    volumes_from:
      - applications
    ports:
      - "4000:30001"
    networks:
      - frontend
      - backend
This is my Dockerfile for node:
FROM node:6.10
MAINTAINER JC Gil <sensukho@gmail.com>
ENV TERM=xterm
ADD script.sh /tmp/
RUN chmod 777 /tmp/script.sh
RUN apt-get update && apt-get install -y netcat-openbsd
WORKDIR /var/www/html/Backend
RUN npm install
EXPOSE 4000
CMD ["/bin/bash", "/tmp/script.sh"]
My WORKDIR is empty because the location /var/www/html/Backend is not available while building, but it is available once the container is up. So my npm install command does not work.
What you probably want to do is ADD or COPY the package.json file to the correct location, RUN npm install, and then ADD or COPY the rest of the source into the image. That way, docker build will re-run npm install only when needed.
It would probably be better to run the frontend and backend in separate containers, but if that's not an option, it's completely feasible to run the ADD package.json / RUN npm install / ADD . sequence once for each application.
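A minimal sketch of that layering pattern for the backend (the paths and the entrypoint file name are assumptions based on the compose file above, not from the original post):

```dockerfile
FROM node:6.10
WORKDIR /var/www/html/Backend
# Copy only the manifest first, so this layer stays cached until package.json changes
COPY Backend/package.json ./
RUN npm install
# Now copy the rest of the source; edits here don't invalidate the npm install layer
COPY Backend/ ./
EXPOSE 4000
CMD ["node", "index.js"]
```

The point is the order of the COPY and RUN steps: dependencies are installed into the image at build time, so no volume needs to exist yet.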
RUN is an image build step; at build time the volume isn't attached yet.
I think you have to execute npm install inside CMD.
You can try adding npm install inside /tmp/script.sh.
Let me know.
As Tomas Lycken mentioned, copy the files and then run npm install. I have separate containers for the frontend and backend. Most important are the node modules for the frontend and backend: they need to be declared as volumes in the services so that they are available when the container is up.
version: '2'
services:
  ### Applications Code Container #############################
  applications:
    image: tianon/true
    volumes:
      - ${APPLICATION}:/var/www/html
      - ${BACKEND}:/var/www/html/Backend
      - ${FRONTEND}:/var/www/html/Frontend
  apache:
    build:
      context: ./apache2
    volumes_from:
      - applications
    volumes:
      - ${APACHE_HOST_LOG_PATH}:/var/log/apache2
      - ./apache2/sites:/etc/apache2/sites-available
      - /var/www/html/Frontend/node_modules
      - /var/www/html/Frontend/bower_components
      - /var/www/html/Frontend/dist
    ports:
      - "${APACHE_HOST_HTTP_PORT}:80"
      - "${APACHE_HOST_HTTPS_PORT}:443"
    networks:
      - frontend
      - backend
  node:
    build:
      context: ./node
    ports:
      - "4000:4000"
    volumes_from:
      - applications
    volumes:
      - /var/www/html/Backend/node_modules
    networks:
      - frontend
      - backend
