Docker BaseX DBA

I use the following docker compose file to start the basexhttp server and the dba:
version: '3'
services:
  basexhttp:
    image: basex/basexhttp
    ports:
      - "1984:1984"
      - "8984:8984"
  dba:
    image: basex/dba:8.5.4
    ports:
      - "11984:1984"
      - "18984:8984"
      - "18985:8985"
According to the documentation I should get the dba page with:
http://<host>:18984/dba.
This returns: "No function found that matches the request."
How do I get this to work?

Hi bergtwvd, I am sorry, but your example is outdated: we no longer maintain a separate basex/dba image, mostly because the DBA no longer supports connecting to remote BaseX instances.
I think the best approach is to build your own image, based on our "official" basexhttp image, that contains the DBA code:
Download BaseX.zip from http://files.basex.org/releases/
Create an empty folder for building your docker image.
Create a Dockerfile inside that folder with the following contents:
# Dockerfile
FROM basex/basexhttp:9.1
MAINTAINER BaseX Team
ADD ./webapp /srv/basex/webapp
Copy the webapp folder contained in BaseX.zip into the same folder your Dockerfile is in
Run docker build:
# docker build
docker build -t mydba .
Sending build context to Docker daemon 685.6kB
Step 1/3 : FROM basex/basexhttp:latest
---> c9efb2903a40
Step 2/3 : MAINTAINER BaseX Team
---> Using cache
---> 11228f6d7b17
Step 3/3 : COPY webapp /srv/basex/
---> Using cache
---> d209f033d6d9
Successfully built d209f033d6d9
Successfully tagged mydba:latest
You can also use this technique with docker-compose:
#docker-compose.yml
version: '3'
services:
  dba:
    build:
      context: .
      dockerfile: Dockerfile
    ports:
      - "8984:8984"
You should now be able to open http://localhost:8984 and access the DBA.
Hope this helps.

Related

Why does building an image with docker compose fail but succeed without it?

I am trying to build an image with docker compose and it fails; however, it works with plain docker. I have read some SO posts saying that the error thrown when failing happens when a file/folder cannot be found in the Dockerfile. The build works when building with docker, so I don't know why it wouldn't work with docker-compose. Why is this happening?
The structure for this project is this:
parent_proj
|_ docker-compose.yml
|_ proj
   |_ Dockerfile
Here is my docker-compose file:
version: '3.4'
services:
  integrations:
    build:
      context: .
      dockerfile: proj/Dockerfile
      network: host
    image: int
    ports:
      - "5000:5000"
Here is the Dockerfile inside proj/
FROM openjdk:11
USER root
#RUN apt-get bash
ARG JAR_FILE=target/proj-0.0.1-SNAPSHOT.jar
COPY ${JAR_FILE} /app2.jar
ENTRYPOINT ["java","-jar", "/app2.jar"]
When I'm inside the proj folder, I can run
docker build . -t proj
The above succeeds and I can subsequently run the container. However, when I am in parent_proj and run docker compose build, it fails with the error message
failed to compute cache key: failed to walk
/var/lib/docker/tmp/buildkit-mount316454722/target: lstat
/var/lib/docker/tmp/buildkit-mount316454722/target: no such file or
directory
Why does this happen? How can I build successfully with docker-compose without restructuring the project?
Thanks.
Your Compose build options and the docker build options you show are different. The successful command is (where -f Dockerfile is the default):
docker build ./proj -t proj
# context: ./proj   image: proj   dockerfile: Dockerfile (the default)
But your Compose setup is running the equivalent of
docker build . -t int -f proj/Dockerfile
# context: .   image: int   dockerfile: proj/Dockerfile
Which one is right? In the Dockerfile, you
COPY target/proj-0.0.1-SNAPSHOT.jar /some/container/path
That target/... source path is always relative to the build-context directory (the Compose context: option, or the directory parameter to docker build), even when the Dockerfile itself lives in a different directory. Since the target directory is a subdirectory of proj, you need the first form.
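The context-relative resolution can be seen without Docker at all. A small shell sketch (directory and file names taken from the question):

```shell
# Recreate the layout from the question.
mkdir -p parent_proj/proj/target
touch parent_proj/proj/target/proj-0.0.1-SNAPSHOT.jar

# COPY sources are resolved against the build context, not against the
# Dockerfile's own directory. This is the source path from the Dockerfile:
jar="target/proj-0.0.1-SNAPSHOT.jar"

# context "." run from parent_proj: the path does not exist, which is
# what produces the "failed to compute cache key" error.
test -e "parent_proj/$jar" || echo "context parent_proj: $jar missing"

# context "./proj": the path exists, so the build succeeds.
test -e "parent_proj/proj/$jar" && echo "context proj: $jar found"
```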
There's a shorthand Compose build: syntax if the only thing you need to specify is the context directory, and I'd use that here. If you don't specifically care what the image name is (you're not pushing it to a registry) then Compose can pick a reasonable name on its own; you don't need to specify image:.
version: '3.8'
services:
  integrations:
    build: ./proj
    ports:
      - "5000:5000"

How to share prepared files on build stage between containers with docker compose

I have 2 services: nginx and web
When I build the web image, I build the frontend via the command npm install && npm run build.
But I need the prepared files in both containers: in web and in nginx.
How do I share files between containers (images)? I can't simply use volumes, because they are only mounted at runtime.
The Dockerfile COPY directive can copy files from an arbitrary image. While it's most commonly used in multi-stage builds, you can use it with any image, even one you built yourself.
Say your docker-compose.yml file looks like:
version: '3.8'
services:
  web:
    build: .
    image: my/web
  nginx:
    build:
      context: .
      dockerfile: Dockerfile.nginx
    ports: ['8000:80']
Note that we've explicitly given the web image a name; also notice that there are no volumes: in this setup.
In the proxy image, we can then copy files out of that image:
# Dockerfile.nginx
FROM nginx
COPY --from=my/web /app/static /usr/share/nginx/html
The only complication here is that Compose doesn't know that one image is built off of the other. You'll probably have to manually tell it to rebuild the application image so that it gets built before the proxy image.
docker-compose build web
docker-compose build
docker-compose up -d
You can use this in a more production-oriented setup to deploy the application without having the code directly available. Create a base docker-compose.yml that names an image: for both containers, then add a separate docker-compose.override.yml file that has the build: blocks. After running docker-compose build twice as above, you can docker-compose push the built images, and then run this container stack on your production system, pulling the images from the registry, without a local copy of the source tree and without volumes.
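As a sketch of that split (the registry name here is a placeholder, not from the answer), the base file names only images:

```yaml
# docker-compose.yml -- production: images only, no build:, no volumes:
version: '3.8'
services:
  web:
    image: registry.example.com/my/web
  nginx:
    image: registry.example.com/my/nginx
    ports: ['8000:80']
```

while the override file, present only on the build machine, adds the build: blocks:

```yaml
# docker-compose.override.yml -- development only: adds the build: blocks
version: '3.8'
services:
  web:
    build: .
  nginx:
    build:
      context: .
      dockerfile: Dockerfile.nginx
```

Compose merges both files automatically where both exist; on the production host you deploy only the base file.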

Dockerfile has no effect

I am starting with Docker and running into an issue. I want to enable mod_rewrite in an Apache container and am using this docker-compose.yml:
version: '3'
services:
  php-apache:
    image: php:7.2.1-apache
    ports:
      - 80:80
    volumes:
      - ./DocumentRoot:/var/www/html:z
and this Dockerfile:
FROM php:7.2.1-apache
RUN a2enmod rewrite
RUN service apache2 restart
I run "docker build --no-cache ." with output:
Sending build context to Docker daemon 90.16MB
Step 1/3 : FROM php:7.2.1-apache
---> f99d319c7004
Step 2/3 : RUN a2enmod rewrite
---> Running in 883573f39a39
Enabling module rewrite.
To activate the new configuration, you need to run:
service apache2 restart
Removing intermediate container 883573f39a39
---> 18c40ce865a6
Step 3/3 : RUN service apache2 restart
---> Running in b79bab530dc7
Restarting Apache httpd web server: apache2AH00558: apache2: Could not reliably determine the server's fully qualified domain name, using 172.17.0.2. Set the 'ServerName' directive globally to suppress this message
.
Removing intermediate container b79bab530dc7
---> 8e2cfa7094f7
Successfully built 8e2cfa7094f7
Result: mod_rewrite not installed. When I log in to the console and manually run "a2enmod rewrite" all works fine. What am I missing here?
The docker build --no-cache . creates the docker image <none>:<none>.
Your compose-file references the base image: php:7.2.1-apache. You're basically preparing an image that you're not using.
You might want to use the -t argument in order to tag the image that you are building and then reference that image in the compose file. E.g:
docker build -t my-awesome-php-with-a2enmod --no-cache .
version: '3'
services:
  php-apache:
    image: my-awesome-php-with-a2enmod
    ports:
      - 80:80
    volumes:
      - ./DocumentRoot:/var/www/html:z
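Alternatively (this is an extension, not part of the answer above), you can let Compose build and tag the image itself by replacing image: with a build: block, so you never run docker build by hand:

```yaml
version: '3'
services:
  php-apache:
    build: .          # builds from the Dockerfile in the current directory
    ports:
      - 80:80
    volumes:
      - ./DocumentRoot:/var/www/html:z
```

With this, docker-compose build (or docker-compose up --build) rebuilds the image whenever the Dockerfile changes.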

How to verify the validity of docker images build from docker-compose?

I am trying to come up with a CI system where I validate the Dockerfile and docker-compose.yaml files that are used to build our images.
I found Google's container-structure-test,
which can be used to verify the structure of Docker images that are built. This works if the Docker images are built from a Dockerfile.
Is there a way that I can verify the Docker images with all the configurations that are added to the images by docker-compose?
EDIT:
Maybe I didn't put all my details into the question.
Lets say I have docker-compose file with the following structure:
version: "3"
services:
  image-a:
    build:
      context: .
      dockerfile: Dockerfile-a
  image-b:
    build:
      context: .
      dockerfile: Dockerfile-b
    ports:
      - '8983:8983'
    volumes:
      - '${DEV_ENV_ROOT}/solr/cores:/var/data/solr'
      - '${DEV_ENV_SOLR_ROOT}/nginx:/var/lib/nginx'
Now that the images would be built from Dockerfile-a and Dockerfile-b, there would be configurations made on top of image-b. How can I validate those configurations without building a container from image-b? Would that even be possible?
Assuming you have the following docker-compose.yml file:
version: "3"
services:
  image-a:
    build:
      context: .
      dockerfile: Dockerfile-a
  image-b:
    build:
      context: .
      dockerfile: Dockerfile-b
Build your images by running the command docker-compose --project-name foo build. This makes all image names start with the prefix foo_, so you would end up with the following image names:
foo_image-a
foo_image-b
The trick is to use a unique id (such as your CI job id) instead of foo so you can identify the very images that were just built.
Now that you know the names of your images, you can use:
container-structure-test test --image foo_image-a --config config.yaml
container-structure-test test --image foo_image-b --config config.yaml
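For reference, config.yaml here is a container-structure-test configuration. A minimal sketch (the path and command checked below are assumptions based on the volumes in the question, not known contents of the images):

```yaml
schemaVersion: '2.0.0'
fileExistenceTests:
  - name: 'solr data directory exists'
    path: '/var/data/solr'
    shouldExist: true
commandTests:
  - name: 'nginx is installed'
    command: 'which'
    args: ['nginx']
    exitCode: 0
```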
If you are making a generic job that does not know the docker-compose service names, you can use the following command to list the images starting with the foo_ prefix:
docker image list --filter "reference=foo_*"
REPOSITORY    TAG      IMAGE ID       CREATED          SIZE
foo_image-a   latest   0c5e1cf8c1dc   16 minutes ago   4.15MB
foo_image-b   latest   d4e384157afb   16 minutes ago   4.15MB
and if you want a script to iterate over this result, add the --quiet option to obtain just the image ids:
docker image list --filter "reference=foo_*" --quiet
0c5e1cf8c1dc
d4e384157afb
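A loop over that output might look like the following sketch. The ids printed above are hard-coded here; in CI you would substitute ids="$(docker image list --filter 'reference=foo_*' --quiet)":

```shell
# Image ids as printed by the --quiet command above; replace this list
# with the real "docker image list ... --quiet" call when docker is
# available.
ids="0c5e1cf8c1dc
d4e384157afb"

for id in $ids; do
  # In CI this line would be:
  #   container-structure-test test --image "$id" --config config.yaml
  echo "test image $id"
done
```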

docker-compose simple networking demo

I am new to docker and docker-compose and I'm trying to understand networking in docker. I have the following docker-compose.yml file
version: '3'
services:
  app0:
    build:
      context: ./
      dockerfile: Dockerfile0
  app1:
    build:
      context: ./
      dockerfile: Dockerfile1
And the Dockerfiles look like
FROM python:latest
I'm using a python image because that's what I want for my actual use-case.
I run
docker-compose build
docker-compose up
output:
Building app0
Step 1/1 : FROM python:latest
---> 3624d01978a1
Successfully built 3624d01978a1
Successfully tagged docker_test_app0:latest
Building app1
Step 1/1 : FROM python:latest
---> 3624d01978a1
Successfully built 3624d01978a1
Successfully tagged docker_test_app1:latest
Starting docker_test_app0_1 ... done
Starting docker_test_app1_1 ... done
Attaching to docker_test_app0_1, docker_test_app1_1
docker_test_app0_1 exited with code 0
docker_test_app1_1 exited with code 0
From what I've read, docker-compose will create a default network and both containers will be attached to that network and should be able to communicate. I want to come up with a very simple demonstration of this, for example using ping like this:
docker-compose run app0 ping app1
output:
ping: app1: Name or service not known
Am I misunderstanding how docker-compose networking works? Should I be able to ping app1 from app0 and vice versa?
Running on Amazon Linux.
docker-compose version 1.23.2, build 1110ad01
You need to add something (a script, via CMD) to those Python containers that keeps them running: something listening on a port, or a simple loop.
Right now they terminate immediately after starting and there is nothing to ping. (The whole container shuts down when its command finishes.)
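A minimal sketch of that, keeping both containers alive with a placeholder command (sleep infinity stands in for a real workload; service and file names are from the question):

```yaml
version: '3'
services:
  app0:
    build:
      context: ./
      dockerfile: Dockerfile0
    command: sleep infinity
  app1:
    build:
      context: ./
      dockerfile: Dockerfile1
    command: sleep infinity
```

After docker-compose up -d, docker-compose exec app0 ping app1 should resolve the app1 service name on the default network.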
Defining the services in the docker-compose.yml file may not be enough: if one service is down, the other one won't have any information about its IP address.
You can, however, create a dependency between them, which will for example make Compose automatically start the app1 service when you start app0.
Set following configuration:
version: '3'
services:
  app0:
    build:
      context: ./
      dockerfile: Dockerfile0
    depends_on:
      - "app1"
  app1:
    build:
      context: ./
      dockerfile: Dockerfile1
This is good practice when you want services to communicate with each other.
