I'm trying to generate Doxygen documentation in a Dockerfile, using Docker Compose to run all the services at the same time.
The goal is to generate the documentation in the container and to retrieve the generated files locally.
Here is my Dockerfile:
FROM ubuntu:latest
RUN apt-get update -y
RUN apt-get install -y doxygen doxygen-gui doxygen-doc graphviz
WORKDIR /doc
COPY Doxyfile .
COPY logo.png .
RUN doxygen Doxyfile
Here is the docker-compose.yml with the doc service:
version: "3"
services:
doc:
build: ./doc
volumes:
- ./documentation:/doc
The documentation is generated in the container, and a new directory named "documentation" is created locally, but it is empty. How can I get it filled with the generated documentation from the container?
The goal is to generate the documentation in the container and to retrieve the generated files locally.
You use a local directory as the source of the mount here: - ./documentation:/doc.
It makes the /doc directory in the container synchronize with ./documentation on the host, but the source of the content is the host, not the container.
To get the generated files onto the host, you can use a named volume instead:
volumes:
  - documentation-doxygen:/doc
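Note that in a version 3 compose file a named volume must also be declared at the top level of the file:

# at the root of docker-compose.yml, alongside "services:"
volumes:
  documentation-doxygen: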
After the container has run, you can get more information about that volume (its location, among other things) with docker volume inspect documentation-doxygen.
But if you mount the volume only to retrieve the generated folder, I think you don't need a volume at all.
A more direct alternative is simply to copy the folder from the container to the host after the container has run:
docker cp DOC_CONTAINER_ID:/doc ./documentation-doxygen
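If you don't know the container ID, you can resolve it from the service name first (a small sketch, assuming the service is named doc as above):

# resolve the container ID of the doc service, then copy the generated files out
DOC_CONTAINER_ID=$(docker-compose ps -q doc)
docker cp "$DOC_CONTAINER_ID":/doc ./documentation-doxygen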
As another alternative, if you want to execute doxygen Doxyfile against a local folder but inside a container (a workable approach in a local environment), you can replace RUN with CMD or ENTRYPOINT so that it runs as the container's startup command, and mount the current directory as a bind mount.
It will spare you some copies in the Dockerfile.
FROM ubuntu:latest
RUN apt-get update -y
RUN apt-get install -y doxygen doxygen-gui doxygen-doc graphviz
WORKDIR /doc
# REMOVE THAT COPY Doxyfile .
# REMOVE THAT COPY logo.png .
ENTRYPOINT doxygen Doxyfile
And the docker-compose part:
version: "3"
services:
doc:
build: ./doc
volumes:
- ./:/doc
Here ./ is specified as the bind source, i.e. the base directory of the build context of the doc service.
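With this setup, rebuilding the image and regenerating the documentation straight into the host directory becomes a single step (assuming the Doxyfile writes its output below the working directory):

docker-compose up --build doc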
I intend to build and run a dockerized container using an image that is built locally, not pulled from Docker Hub. My use case is the following:
Cloned the source code of an open source repo (https://github.com/jitsi/jitsi-meet).
They have their own dockerized version too, which builds from the source and is deployed on Docker Hub (https://github.com/jitsi/docker-jitsi-meet).
Renamed the filenames, and the contents inside the files, of jitsi-meet for my own ease of use.
Packaged the final changes as a 7z archive.
I now need to build the image locally using the code from the 7z package or from the source code folder itself, without uploading the image to the public Docker Hub.
RUN a specific set of commands inside the Dockerfile.
My Dockerfile:
ARG MYPROJECT_REPO=myproject
ARG BASE_TAG=stable
FROM ${MYPROJECT_REPO}/base:${BASE_TAG}
LABEL org.opencontainers.image.title="Myproject"
LABEL org.opencontainers.image.url="https://myproject.org/myproject-meet/"
LABEL org.opencontainers.image.source="https://github.com/myproject/docker-myproject-meet"
LABEL org.opencontainers.image.documentation="https://myproject.github.io/handbook/"
ADD https://raw.githubusercontent.com/acmesh-official/acme.sh/2.8.8/acme.sh /opt
COPY rootfs/ /
RUN apt-dpkg-wrap apt-get update && \
    apt-dpkg-wrap apt-get install -y cron nginx-extras myproject-meet-web socat curl jq && \
    mv /usr/share/myproject-meet/interface_config.js /defaults && \
    rm -f /etc/nginx/conf.d/default.conf && \
    apt-cleanup
EXPOSE 80 443
VOLUME ["/config", "/usr/share/myproject-meet/transcripts"]
My docker-compose.yml (only the relevant parts):
services:
  # Frontend
  myproject_webserver:
    container_name: myproject-webserver
    build:
      dockerfile: ./Dockerfile
      context: ./
    #image: jitsi/web:${JITSI_IMAGE_VERSION:-unstable}
    restart: ${RESTART_POLICY:-unless-stopped}
    ports:
      - '${HTTP_PORT}:80'
      - '${HTTPS_PORT}:443'
    volumes:
      - ${CONFIG}/web:/config:Z
      - ${CONFIG}/web/crontabs:/var/spool/cron/crontabs:Z
      - ${CONFIG}/transcripts:/usr/share/myproject-meet/transcripts:Z
    environment:
      - AMPLITUDE_ID
      - ANALYTICS_SCRIPT_URLS
As you can see, I have commented out the public Docker image of jitsi from Docker Hub and used a build context instead. I need to build the image locally from the Dockerfile and deploy it.
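For reference, with the build section in place the local image can be built and started without touching any registry:

docker-compose build myproject_webserver
docker-compose up -d myproject_webserver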
My core problem stems from renaming the files/folders and their contents.
Kindly correct my understanding of the following:
If I had used the original code, I could have made the minute changes necessary to the code itself without renaming anything, used a COPY command in the Dockerfile so the modified file is used instead of the original while keeping everything else intact, and kept the image line in docker-compose.yml as-is.
So if the original repo has folder A/filenamea.js running inside a container: can the Docker COPY command be used, with A1/filenamea1.js as the renamed file, to replace and run it instead of A/filenamea.js inside the container?
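For example, would something like this work (the destination path inside the image is hypothetical)?

# hypothetical destination path: overlay the renamed file on top of the original
COPY A1/filenamea1.js /usr/share/myproject-meet/A/filenamea.js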
I'm trying to run a docker-compose build command with a Dockerfile and a docker-compose.yml file.
Inside the docker-compose.yml file, I'm trying to bind a local folder on the host machine, ./dist, with a folder in the container, app/dist.
version: '3.8'
services:
  dev:
    build:
      context: .
    volumes:
      - ./dist:app/dist # I'm expecting files changed or added in the container's app/dist to be reflected in the host's ./dist folder
Inside the Dockerfile, I build some files with an NPM script that I want to make available on the host machine once the build is finished. I'm also touching a new file, /app/dist/test.md, as a simple test to see whether the file ends up on the host machine, but it does not.
FROM node:8.17.0-alpine as example
RUN mkdir /app
WORKDIR /app
COPY . /app
RUN npm install
RUN npm run dist
RUN touch /app/dist/test.md
Is there a way to do this? I also tried using the "long syntax" as mentioned in the Docker Compose v3 documentation: https://docs.docker.com/compose/compose-file/compose-file-v3/
The easiest way to do this is to install Node and run the npm commands directly on the host.
$BREW_OR_APT_GET_OR_YUM_OR_SOMETHING install node
npm install
npm run dist
# done
There's not an easy way to use a Dockerfile to build host content. The Dockerfile can't write out directly to the host filesystem; if you use a volume mount, the host volume hides the container content before anything else happens.
That means, if you want to use this approach, you need to launch a temporary container to get the content out. You can do it with a one-off container, mounting the host directory somewhere other than /app and making the main container command cp:
sudo docker build -t myimage .
sudo docker run --rm \
  -v "$PWD/dist:/out" \
  myimage \
  cp -a /app/dist /out
Or, if you specifically wanted to use docker cp:
sudo docker build -t myimage .
sudo docker create --name to-copy myimage
sudo docker cp to-copy:/app/dist ./dist
sudo docker rm to-copy
Note that any of these sequences is more complex than just installing a local Node via a package manager, and they require administrator permissions (the same technique can be used to overwrite any host file, including the /etc/shadow file with its encrypted passwords).
Let's consider the following directory layout. (Note: a directory name ends with \)
root\
|
-- some stuff
|
-- application\
|   |
|   -- app_stuff
|   |
|   -- out\
|   |
|   -- main.cpp
|
-- some stuff
I'm trying to build this app via docker.
The Dockerfile looks like:
FROM emscripten/emsdk:latest
RUN apt-get -q update
RUN mkdir /app
WORKDIR /app
COPY . /app
RUN em++ application/main.cpp -o application/out/app.html
RUN pip3 install aiohttp
RUN pip3 install aiohttp_jinja2
RUN pip3 install jinja2
RUN ls application/out
The docker-compose looks like:
version: '3.8'
services:
  application:
    build: .
    volumes:
      - ./application/out:/app/application/out
    command: python3 application/entry.py
    ports:
      - "8080:8080"
As you may notice in the Dockerfile (RUN em++ application/main.cpp -o application/out/app.html), new files are generated in the out directory while Docker builds the image. However, once it's done I can't find those files on the host.
Note: these files do appear in application/out inside the container.
...
Step 10/10 : RUN ls application/out
---> Running in 603f6b99f4b0
app.html
app.js
app.wasm
...
Where have I made a mistake?
The Dockerfile gives instructions on how to build a Docker image, not on what happens in the live container.
If you mount a volume, whether via docker-compose or via a docker run command, either way the volume will only be mounted once the container is created.
So what happens is:
first, Docker creates the image by executing the commands in the Dockerfile, and stores the result as an image
then Docker creates a container using the stored image
then Docker mounts the volumes you defined in the docker-compose.yml file (at this point, anything already present in the target directory inside the image is hidden by a bind mount; an empty named volume would instead be populated with it; a quick demonstration follows this list)
then the entrypoint or cmd command is run (so here that would be python3 application/entry.py)
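A quick way to see that shadowing behaviour with a throwaway container:

mkdir -p empty
# alpine's /etc normally contains files; bind-mounting an empty host
# directory over it makes it appear empty inside the container
docker run --rm -v "$PWD/empty:/etc" alpine ls /etc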
So if you need to get the output files into your host directory, you need to either create those files in the entrypoint script or copy them there in the entrypoint script.
So you can create a file called myscript.sh with the following content:
#!/bin/bash
# build the app into the (bind-mounted) out directory at container start,
# then launch the server
em++ /app/application/main.cpp -o /app/application/out/app.html
python3 /app/application/entry.py
In your Dockerfile, you remove the line RUN em++ application/main.cpp -o application/out/app.html and replace it with:
COPY ./myscript.sh /
ENTRYPOINT /myscript.sh
And you remove the line command: python3 application/entry.py from your docker-compose.yml file.
You can use the CMD instruction rather than ENTRYPOINT if you prefer; that's just a matter of personal preference.
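If you use the exec form, it could look like this (a minimal sketch; the chmod guards against the script not being executable on the host):

COPY ./myscript.sh /
RUN chmod +x /myscript.sh
ENTRYPOINT ["/myscript.sh"]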
A Docker Compose volume can link a directory on the host to a directory inside a container. You are mounting the host's ./application/out over the /app/application/out directory inside the container, effectively hiding any contents of /app/application/out that came from your built image.
Given the context, I presume your host's ./application/out directory is empty, so you are covering the container's /app/application/out directory with nothing. You can test this by removing the volumes section and checking whether the application is able to find files under /app/application/out afterwards.
Unrelated to your issue: your apt-get update command caches the Debian package lists in your built image, which adds wasted space to the final image. See this post about deleting the cached lists.
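The usual pattern is to delete the lists in the same RUN layer that created them, e.g. (the package name is only illustrative):

RUN apt-get update \
 && apt-get install -y --no-install-recommends some-package \
 && rm -rf /var/lib/apt/lists/*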
I have an aws/appium test project I want to run in Docker. I have a bash script that runs in the container; it downloads a file from S3 and creates a zip of my project.
The Dockerfile:
FROM maven:3.3.9
RUN apt-get update && \
    apt-get -y install python && \
    apt-get -y install python-pip && \
    pip install awscli
RUN export PATH=$PATH:/usr/local/bin
There's a docker-compose file; the command runs a bash script:
version: '2'
volumes:
  maven_cache: ~
services:
  application: &application
    build: .
    tmpfs:
      - /tmp:rw,nodev,noexec,nosuid
    volumes:
      - ./:/app
      - maven_cache:/root/.m2/repository
    working_dir: /app
    command: ./aws-upload.sh
This is the beginning of the ./aws-upload.sh bash script. It prepares the files I need for uploading later:
#!/usr/bin/env bash
mvn clean package -DskipTests=true
aws s3 cp s3://<bucket-name>/app.apk $(pwd)
cp target/zip-with-dependencies.zip $(pwd)
I only want the above files to exist within the container; however, they also appear locally. Is there something in my docker-compose file that isn't configured correctly?
Thanks
In your compose file you define the volume ./:/app, which maps the host folder where the compose file is located to the container's /app folder. Since your bash script executes in the /app folder, the files it creates are also made available on the host.
If you want to avoid this, either remove the volume mapping (in case you don't need it) or execute the script in another folder which is not mapped to your host.
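For example, one way to keep the generated files out of the mounted folder is to work in an unmounted directory (a sketch; /work is an arbitrary path that is not bind-mounted):

# in docker-compose.yml, replacing the original command:
command: sh -c "cp -r /app /work && cd /work && ./aws-upload.sh"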
This is normal. You declared the following inside the compose file:
volumes:
  - ./:/app
This means: mount the current host directory onto /app inside the container. It effectively keeps the current directory and the /app folder inside the container in sync.
Thus, if the aws-upload.sh script creates files in /app, they will also show up next to the compose file.
I have a docker-compose.yml file which looks like this:
version: '2'
services:
  php:
    build: ./docker/php
    volumes:
      - .:/var/www/website
The Dockerfile located in ./docker/php looks like this:
FROM php:fpm
RUN php -r "copy('https://getcomposer.org/installer', 'composer-setup.php');"
RUN php composer-setup.php
RUN php -r "unlink('composer-setup.php');"
RUN mv composer.phar /usr/local/bin/composer
RUN composer update -d /var/www/website
However, this always fails with the error:
[RuntimeException]
Invalid working directory specified, /var/www/website does not exist.
When I remove the RUN composer update line and enter the container, the directory does exist and contains my project code.
Please tell me if I am doing anything wrong, or if I'm running composer update in the wrong place.
RUN ... lines are executed while the image is being built.
Volumes are attached to the container. You have at least two options here:
use the COPY command to, well, copy your app code into the image, so that all commands after it have access to it (do not push the image to any public Docker repository, as it will contain source code that you probably don't want to leak)
install the Composer dependencies with a command run in your container (CMD or ENTRYPOINT in the Dockerfile, or the command option in docker-compose); see the sketch after this list
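The second option could look like this in the compose file (a sketch; after composer update finishes, php-fpm is started as the normal foreground process, and the code comes from the .:/var/www/website mount):

services:
  php:
    build: ./docker/php
    volumes:
      - .:/var/www/website
    command: sh -c "composer update -d /var/www/website && php-fpm"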
You are mounting your local volume over your build directory, so anything you built in /var/www/website will be covered by your local volume when the container runs.