Use Docker container within Azure pipeline

I am trying to create an Azure pipeline configuration where the pipeline uses my local Dockerfile to get the correct image.
I have the following python package structure
project
├── .azurepipelines
│   └── azure-pipelines.yml
└── .devcontainer
    ├── devcontainer.json
    └── Dockerfile
I have found some related threads on the subject, but I don't quite understand how to achieve what I want. I want the rest of the steps in the pipeline to be performed within the container. Do you have any suggestions?

You can specify a build and point to a specific Dockerfile with docker-compose.
Here is an example of that: https://stackoverflow.com/a/50230608/37759
web:
  build:
    dockerfile: Dockerfile-alpine
    context: ./web
  ports:
    - 8099:80
  depends_on:
    - database
Note the build key along with the dockerfile parameter: it says build the web service using the Dockerfile named Dockerfile-alpine.
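If the goal in the original question is to have the remaining pipeline steps run inside the image built from .devcontainer/Dockerfile, one common pattern is to build the image in an early step and run the later commands through docker run. A minimal sketch, assuming a Linux hosted agent; the image tag myproject-dev and the test command are placeholders:

# .azurepipelines/azure-pipelines.yml
trigger:
  - main

pool:
  vmImage: ubuntu-latest

steps:
  - script: docker build -t myproject-dev -f .devcontainer/Dockerfile .
    displayName: Build the dev container image

  - script: >
      docker run --rm
      -v $(Build.SourcesDirectory):/src -w /src
      myproject-dev
      python -m pytest
    displayName: Run tests inside the container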

Related

Cannot create container for service mysql: not a directory

I'm new to docker and I get a strange error when building my first containers. I'm using WSL2 if that matters.
Here is the part that's causing it :
# MySQL Service
mysql:
  image: mysql:8
  environment:
    MYSQL_ROOT_PASSWORD: root
    MYSQL_DATABASE: usr_data
  volumes:
    - ./.docker/mysql/my.cnf:/etc/mysql/conf.d/my.cnf
    # - ./.docker/mysql/ohmy.cnf:/etc/mysql/conf.d/my.cnf
    - ./.docker/test/test.txt:/tmp/test/test.txt
    - mysqldata:/var/lib/mysql
  healthcheck:
    test: mysqladmin ping -h 127.0.0.1 -u root --password=MYSQL_ROOT_PASSWORD
    interval: 5s
    retries: 10
Both files my.cnf and ohmy.cnf exist and have the same content.
When I use docker-compose up -d I get the error:
ERROR: for mysql Cannot create container for service mysql: not a directory
When I uncomment the ohmy.cnf line and comment the my.cnf line I get no errors and it builds just fine. It also works great with the little test.txt I made.
I fail to see the difference between the two, and while it may work with my little workaround, I'd like to understand what is causing the error in the first place.
Thank you for your time.
Edit :
Here's my ./.docker
./.docker
├── mysql
│   ├── db
│   │   └── db.sql
│   ├── my.cnf
│   └── ohmy.cnf
├── nginx
│   └── conf.d
│       └── php.conf
├── php
│   └── Dockerfile
└── test
    └── test.txt

6 directories, 6 files
In WSL 2 I updated docker compose to use $PWD instead of a relative path for volumes:
volumes:
  - $PWD/logs:/var/log/docker:delegated
Delete the volumes from /var/lib/docker/volumes/[your-volumes], then run the commands below:
docker volume prune
Remove all unused local volumes. Unused local volumes are those which are not referenced by any containers.
docker system prune
Remove all unused containers, networks, images (both dangling and unreferenced), and optionally, volumes.
Restarting my laptop helped in my particular case; the other commands did not.

access volumes set in docker-compose.yml from dockerfile

I have my project architecture like this:
.
├── app/
├── context/
│   ├── Dockerfile
│   ├── .dockerignore
│   └── php.ini
├── database/
├── http/
├── composer.json
└── docker-compose.yml
and in docker-compose.yml I have the following configuration:
version: '3.8'
services:
  app:
    container_name: "ERP"
    restart: always
    build:
      context: ./context
      dockerfile: Dockerfile
    stdin_open: true
    tty: true
    ports:
      - '8000:80'
    links:
      - db_server
    volumes:
      - .:/usr/src/app
    working_dir: /usr/src/app
  db_server:
    container_name: "db_server"
    image: 'mysql:8.0'
    ports:
      - '3306:3306'
But when I set up the application with docker-compose up, using this Dockerfile:
FROM ubuntu:20.04
WORKDIR /usr/src/app
RUN cat composer.json
It says "No such file or directory composer.json". Why?
UPDATE
I managed to solve the problem using an ENTRYPOINT configuration.
As far as I understand (I'm new to Docker), the ENTRYPOINT defines a script that runs when the container starts, so it executes at the container's run time, after the initialization specified by the docker-compose.yml file. That means the contents of the actual context are available to the script by the time it runs.
Thank you all for your answers.
That's because you define the context to be "./context", so you are confined to that folder, where composer.json isn't.
Use "." for the context and context/Dockerfile for the dockerfile.
Then mounting '.' will mount the whole directory, not only ./context.
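In compose terms, that suggestion looks roughly like this (a sketch based on the compose file above; only the build section changes):

services:
  app:
    build:
      context: .                      # project root, so composer.json is inside the context
      dockerfile: context/Dockerfile  # the Dockerfile still lives in ./context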
The build process to create an image occurs before the runtime process that takes that image and runs a container. The compose file includes a build section to allow you to easily build the image before running it, but all of the other parts of the compose file define runtime configurations like volume mounts and networking.
At build time, you do not get to specify the volume sources, at most you can define a required volume target in the image with a VOLUME step. (Note if you do that, future RUN steps within the Dockerfile may have issues modifying that directory since many build tools mount an anonymous volume as you've requested, but only capture the changes to the container filesystem, not the volume filesystem).
If you need the contents of a file or directory in your image, you must perform a COPY or ADD step in the Dockerfile to copy from the build context (typically imported as . or the current directory) into the image.
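For instance, if the build context is switched to the project root as suggested above, a COPY step along these lines makes composer.json visible at build time (a sketch, not the asker's exact Dockerfile):

FROM ubuntu:20.04
WORKDIR /usr/src/app
# Copy the file from the build context into the image before using it
COPY composer.json ./
RUN cat composer.json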
The build.context key defines the path to a directory containing a Dockerfile. This is the build context of the image, and during the build process Docker doesn't have access to the composer.json file (it is outside the context).
The RUN instruction runs a command during the build. If you want to run it when the container is starting, you should use CMD.
FROM ubuntu:20.04
WORKDIR /usr/src/app
CMD cat composer.json

Docker compose for multiple different projects

I have the following projects structure on my machine filesystem:
../
├── angular_front_end/
│   ├── docker-compose.yml
│   └── Dockerfile
├── node_back_end_service/
│   ├── docker-compose.yml
│   └── Dockerfile
└── php_back_end_service/
    ├── docker-compose.yml
    └── Dockerfile
The thing is, I don't want to go through each one and do docker-compose up, it's horrible to maintain.
Is there a way to unite them all under one command somehow?
Also, can I run all of them under one container, like the back-end container in the screenshot below?
Thanks a lot!
You can create a single docker-compose.yml file at the root of this directory hierarchy that launches everything.
version: '3.8'
services:
  frontend:
    # Builds the Dockerfile within that directory;
    # the build can only reference files within this directory
    build: angular_front_end
    ports: ['3000:3000']
  node:
    build: node_back_end_service
  php:
    build: php_back_end_service
To the extent that these services require databases or other Docker resources, they all need to be duplicated in this top-level docker-compose.yml file.
In principle it's possible to reuse your existing Compose files, but there are two big issues you'll run into. You need to consistently use multiple docker-compose -f options, every time you run a docker-compose command; with your setup this will quickly become unwieldy (even with just three services). Also, all filesystem paths are interpreted relative to the first -f option's path so a declaration like build: . won't point at the right place.
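For illustration, reusing the per-project files would mean an invocation of this shape every single time (a hypothetical sketch; remember that relative paths such as build: . then resolve against the first -f file's directory):

docker-compose \
  -f angular_front_end/docker-compose.yml \
  -f node_back_end_service/docker-compose.yml \
  -f php_back_end_service/docker-compose.yml \
  up --build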

Build docker image using different directory contexts

My current project consists of a mongo server, a rabbitmq server and a dotnet core service. It is structured as follows:
.
├── project1.docker-compose.yml   # multiple docker-compose files for all projects
├── .dockerignore
├── Util/
│   └── some common code across all projects
└── Project1/                     # there are multiple projects at the same level with the same structure
    ├── .docker/
    │   ├── mongodb
    │   │   └── Dockerfile
    │   └── rabbitmq
    │       └── Dockerfile
    ├── BusinessLogicClasses/
    │   └── some classes that contain my business logic
    └── DotNetCoreService/
        ├── my service code
        └── .docker
            └── Dockerfile
Right now I am able to use the docker-compose command to build the images for mongodb, rabbitmq and the .NET Core service successfully. The docker-compose.yml sits at the home directory level because my different projects (in this case Project1) reference code found under the Util directory. Therefore I need to be able to provide a context that is above both directories so that I can use COPY operations in the Dockerfile.
My basic project1.docker-compose.yml is as follows (I excluded not important parts)
version: '3'
services:
  rabbitmq:
    build:
      context: Project1/.docker/rabbitmq/
  mongodb:
    build:
      context: Project1/.docker/mongodb/
  dotnetcoreservice:
    build:
      context: ./
      dockerfile: Project1/DotNetCoreService/.docker/Dockerfile
As can be seen, the context for the dotnetcoreservice is at the home directory level. Therefore my Dockerfile for that specific image needs to target the full paths from the context as follows:
#escape=`
FROM microsoft/dotnet:2.0-sdk AS build
WORKDIR /app
COPY Project1/ ./Project1/
COPY Util/ ./Util/
RUN dotnet build Project1/DotNetCoreService/
This Dockerfile works successfully when invoked via the docker-compose command at the home directory level; however, when invoked via the docker build .\Project1\DotNetCoreService\.docker\ command it fails with the following message:
COPY failed: stat /var/lib/docker/tmp/docker-builder241915396/Project1: no such file or directory
I think this is a matter of the actual context because the docker build instruction automatically sets the context to where the Dockerfile is. I would like to be able to use this same directory structure to create images both with the docker-compose build as well as with the docker build instructions.
Is this somehow possible?
Use the -f flag to set a custom Dockerfile path.
Example: docker build --rm -t my-app -f path/to/dockerfile .
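Applied to the layout in the question, that means keeping the repository root as the build context (the trailing .) and only pointing -f at the service's Dockerfile; the image tag is arbitrary:

docker build --rm -t dotnetcoreservice \
  -f Project1/DotNetCoreService/.docker/Dockerfile .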
May 2022: The new releases of Dockerfile 1.4 and Buildx v0.8+ come with the ability to define multiple build contexts.
This means you can use files from different local directories as part of your build.
Dockerfiles now Support Multiple Build Contexts
Tõnis Tiigi
Multiple Projects
Probably the most requested use case for named contexts capability is the possibility to use multiple local source directories.
If your project contains multiple components that need to be built together, it’s sometimes tricky to load them with a single build context where everything needs to be contained in one directory.
There’s a variety of issues:
every component needs to be accessed by their full path,
you can only have one .dockerignore file,
or maybe you’d like each component to have its own Dockerfile.
If your project has the following layout:
project
├── app1
│   ├── .dockerignore
│   └── src
├── app2
│   ├── .dockerignore
│   └── src
└── Dockerfile
…with this Dockerfile:
#syntax=docker/dockerfile:1.4
FROM … AS build1
COPY --from=app1 . /src
FROM … AS build2
COPY --from=app2 . /src
FROM …
COPY --from=build1 /out/app1 /bin/
COPY --from=build2 /out/app2 /bin/
…you can invoke your build with docker buildx build --build-context app1=app1/src --build-context app2=app2/src . Both of the source directories are exposed separately to the Dockerfile and can be accessed by their respective names.
This also allows you to access files that are outside of your main project’s source code.
Normally when you’re inside the Dockerfile, you’re not allowed to access files outside of your build context by using the ../ parent selector for security reasons.
But as all build contexts are passed directly from the client, you’re now able to use --build-context othersource=../../path/to/other/project to avoid this limitation.
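As a hedged sketch of how named contexts could apply to the layout from the earlier question, Util/ could be passed as an extra context so the service's Dockerfile no longer needs the repository root as its main context (names are illustrative):

# inside the Dockerfile, the extra context is consumed with: COPY --from=util . ./Util/
docker buildx build \
  --build-context util=./Util \
  -f Project1/DotNetCoreService/.docker/Dockerfile \
  -t dotnetcoreservice \
  ./Project1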

docker-compose adding to PATH

I need to add some paths to my PATH in docker-compose.yml
in docker-compose.yml I have tried
app:
  ...
  environment:
    - PATH /code/project
However, that just overwrites the existing PATH, whereas I want to add to the existing PATH.
A docker-compose.yml does not offer you any means to extend an environment variable which is already set in a Docker image.
The only way I see to do such a thing is to have a Docker image which expects some environment variable (let's say ADDITIONAL_PATH) and extends its own PATH environment variable with it at run time.
Let's take the following Dockerfile:
FROM busybox
ENV PATH /foo:/bar
CMD export PATH=$PATH:$ADDITIONAL_PATH; /bin/echo -e "ADDITIONAL_PATH is $ADDITIONAL_PATH\nPATH is $PATH"
and the following docker-compose.yml file (in the same directory as the Dockerfile):
app:
  build: .
Build the image: docker-compose build
And start a container: docker-compose up, you will get the following output:
app_1 | ADDITIONAL_PATH is
app_1 | PATH is /foo:/bar:
Now change the docker-compose.yml file to:
app:
  build: .
  environment:
    - ADDITIONAL_PATH=/code/project
And start a container: docker-compose up, you will now get the following output:
app_1 | ADDITIONAL_PATH is /code/project
app_1 | PATH is /foo:/bar:/code/project
Also note a syntax error in your docker-compose.yml file: there must be an equal sign (=) character between the name of the environment variable and its value.
environment:
  - PATH=/code/project
instead of
environment:
  - PATH /code/project
I know this is an old thread, but I think there are a couple of things that can be clarified.
Through a docker-compose file one can only reference variables from the host machine, therefore it is NOT possible to extend the image's PATH from docker-compose.yml:
app:
  ...
  environment:
    - PATH=/code/project:$PATH
On the other hand, using a RUN or CMD export will not suffice, because exported variables do not persist through images. Since every Dockerfile instruction generates an intermediate image, those values will be reflected in that intermediate layer and not in the final image where you actually need them.
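A small illustration of that point (a sketch; the directory /code/project is just an example): the export in the first RUN line only affects that single build step's shell, so a later step no longer sees it.

FROM busybox
RUN export PATH=/code/project:$PATH && echo "during this step: $PATH"
RUN echo "in a later step: $PATH"   # /code/project is no longer present here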
The best option would be to use the build option in docker-compose.yml:
app:
  build: .
and add an ENV instruction to the Dockerfile:
ENV PATH /path/to/bin/folder:$PATH
This is suggested in issue #684 and I would also suggest looking at an answer: docker ENV vs RUN export.
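Putting those two pieces together, a minimal sketch (the directory /code/project is taken from the question; the base image is arbitrary):

# Dockerfile
FROM busybox
# Bake the extra directory into PATH at build time so it persists in the image
ENV PATH /code/project:$PATH
CMD ["sh", "-c", "echo $PATH"]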
You can add your value.
To do so you need to know the name or ID of the container; run this to find it:
docker ps
This will print details of all running containers. Look for your container and copy its ID or name. Then run this:
docker inspect <container ID>
It will print all values of the specified container. Look for the Env section and find the PATH environment variable. Then copy its value, extend it with your new entries, and set it again in your docker-compose.yml "environment" section.
app:
  environment:
    - PATH=value-you-copied:new-value:new-value:etc
Note that you shouldn't remove anything from the initial value of PATH; just extend it with your new values.
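To avoid scanning the whole inspect output, a format filter along these lines prints just the environment block (the container name my_container is a placeholder):

docker inspect --format '{{json .Config.Env}}' my_container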
@Thomasleveil's answer works only for containers built directly from the docker-compose file (via build), and you have no control over the command executed.
I needed this functionality for containers downloaded from (our) repository where this does not quite work.
I have found a solution using entrypoint and command.
Let's have some base container, base, and another one, java7, that is based upon it. And finally a docker-compose file using the java7 container to run some stuff.
Probably the most important file here, entrypoint.sh
$ cat base/script/entrypoint.sh
#!/bin/bash
export PATH="$PATH_ADD:$PATH"
echo "Path modified to $PATH"
exec "$@"
Dockerfile for base container
$ cat base/Dockerfile
FROM xxx
# copy entrypoint script that extends current PATH variable by PATH_ADD
COPY script/entrypoint.sh /usr/sbin
ENTRYPOINT ["/usr/sbin/entrypoint.sh"]
Dockerfile for java7 container
$ cat java7/Dockerfile
FROM base
# download java7
RUN curl ... /opt/java/jdk7
ENV JAVA_HOME /opt/java/jdk7
Commands run by docker-compose
$ cat sbin/run-app1.sh
exec $JAVA_HOME/bin/java -version
$ cat sbin/run-app2.sh
exec $JAVA_HOME/bin/java -version
Docker-compose using these:
$ cat docker-compose.yml
version: '3'
services:
  app1:
    image: java7
    command: run-app1.sh
    environment:
      PATH_ADD: /app/sbin
    volumes:
      - "./sbin:/app/sbin:cached"
  app2:
    image: java7
    command: run-app2.sh
    environment:
      PATH_ADD: /app/sbin
    volumes:
      - "./sbin:/app/sbin:cached"
File structure
$ tree
.
├── base
│   ├── script
│   │   └── entrypoint.sh
│   └── Dockerfile
├── java7
│   └── Dockerfile
├── sbin
│   ├── run-app1.sh
│   └── run-app2.sh
└── docker-compose.yml
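Hypothetical usage of the files above: the compose file references image: java7 rather than building it, so the two images are built by hand first, then a service is started and entrypoint.sh prepends $PATH_ADD to PATH before exec'ing the command.

docker build -t base ./base
docker build -t java7 ./java7
docker-compose up app1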
To add a single location to PATH in your docker-compose.yml file:
app:
  environment:
    - PATH=/code/project:$PATH
To add multiple locations to your PATH in your docker-compose.yml file:
app:
  environment:
    - PATH=/code/project:/code/lib:/foo/bar:$PATH
To add to your PYTHONPATH:
app:
  environment:
    - PYTHONPATH=/code/project:/code/lib:/foo/bar
