how to link yaml file in concourse? - docker

In my task I have
file: tasks/build-task-config.yml
and I get this error:
unknown artifact source: 'tasks' in task config file path 'tasks/build-task-config.yml'
I'm running concourse via docker-compose
ci/
  pipeline.yml
  tasks/
    build-task-config.yml
Above is my directory structure.
This is how I run fly
fly -t tutorial set-pipeline -c ./ci/main-pipeline.yml -p test-frontend
How can I resolve this issue?
How do paths work in Concourse?
Edit:
I've tried the path ci/tasks/build-task-config.yml, but that doesn't work either.

You need an input to the task called tasks. This may come from a get: step, or as the output of a previous task. Most likely you have a get for the repo that contains this code (let's pretend it's called source). If that's the case, then your task should look like this:
- task: build-task-config # Or whatever name you want
  file: source/ci/tasks/build-task-config.yml
  ...
Everything in a task has to be relative to an input, if it's not part of the base image.
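For illustration, here is a minimal sketch of how the pieces fit together; the resource name, repo URI and job name are assumptions, not taken from the question:

# pipeline.yml (sketch)
resources:
- name: source                 # the git repo that contains ci/ and its tasks
  type: git
  source:
    uri: https://github.com/example/frontend.git   # placeholder URI

jobs:
- name: build
  plan:
  - get: source                # fetches the repo into an artifact named "source"
  - task: build-task-config
    file: source/ci/tasks/build-task-config.yml    # resolved inside the "source" artifact

# source/ci/tasks/build-task-config.yml (sketch)
platform: linux
image_resource:
  type: registry-image
  source: {repository: alpine}
inputs:
- name: source                 # declares the artifact the task reads files from
run:
  path: sh
  args: ["-c", "ls source"]

The name of the get step (source) becomes the artifact directory that file: paths are resolved against; the inputs: entry is what makes the same repo available inside the running task.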

Related

Provide GitHub file as default conf file in a docker-compose volume

So my question is whether it is possible to have a volume like:
"${my_conf_file}:-raw.my/GitHub/file.git":/conf.json
That would be my goal, but I haven't found anything related to it. In the end, if the user has a file, that file should be mounted; otherwise conf.json should either not be replaced at all (because the GitHub file is already there, only to be replaced by a conf file the user might have) or the file from GitHub should be mounted again.
It is best to figure out the first part ("${my_conf_file}:-raw.my/GitHub/file.git") ahead of the docker run.
In your start script (the one that calls docker run or uses your docker-compose.yml), add logic to determine which config file you want: the user's, conf.json itself, or the one from GitHub.
Once you can script that, you can add your docker run -v call, which will mount the right file to /conf.json in the container.
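A rough sketch of such a wrapper, assuming a hypothetical raw GitHub URL and image name:

#!/bin/sh
# start.sh - decide which config file to mount over /conf.json (paths and URL are placeholders)
CONF="${my_conf_file:-}"

if [ -z "$CONF" ]; then
  # no user-supplied file: fetch the default from GitHub into a temp file
  CONF="$(mktemp)"
  curl -fsSL "https://raw.githubusercontent.com/my/repo/main/conf.json" -o "$CONF"
fi

docker run -v "$CONF":/conf.json my-image

The same idea works with docker-compose by exporting the chosen path as an environment variable that the volumes: entry references.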

How to add custom environment variables to docker-ejabberd

I am running docker-ejabberd on ECS and all works fine. Now I want to replace the MySQL user/pass that exists in the ejabberd.yml file with environment variables passed to the image while running the container. There is no clear way described, even on the docker-ejabberd wiki or anywhere else, on how to do that simply. Has anyone faced a similar situation, and how did you do it?
For example in the ejabberd.yml i have this section:
sql_server: ${MYSQL_SERVER}
sql_database: ${MYSQL_DATABASE_NAME}
sql_username: ${MYSQL_USERNAME}
sql_password: ${MYSQL_PASSWORD}
sql_port: ${MYSQL_PORT}
I want to pass those values as environment variables at docker run time and have them substituted before the container starts.
Side note: we are using ECS and passing the variables through the task definition without any issue.
I went through some topics that recommend using an ENTRYPOINT script to replace the values in the file before the container runs, but I'm not sure if that's a good idea.
I also had the idea of replacing the variables in this ejabberd.yml file in the CI/CD pipeline, just before building the image from the Git repository and pushing it to AWS ECR. Would that be reasonable?
I want to replace the MySQL user/pass that exists in the ejabberd.yml file with environment variables passed to the image while running the container.
The ejabberd.yml file is read and parsed by the yconf library (https://github.com/processone/yconf), and I doubt it supports such a thing.
I went through some topics that recommend using an ENTRYPOINT script to replace the file before running the container, but I'm not sure if that's a good idea.
Following that recommendation, if you don't want a script to mess with the whole ejabberd.yml, you can ensure that only those specific options are parameterized:
You can have a script define those options in a small file, and then include the options from that small file into ejabberd.yml using include_config_file:
https://docs.ejabberd.im/admin/configuration/file-format/#include-additional-files
For example, in your ejabberd.yml, put something like this:
include_config_file:
  /etc/ejabberd/database.yml:
    allow_only: [sql_server, sql_database, sql_username, sql_password, sql_port]
Then write your script that generates that small file, for example:
$ generate-database-config.sh
$ cat /etc/ejabberd/database.yml
sql_server: "localhost"
sql_database: "ejaup"
sql_username: "ejabberd_test"
sql_password: "ejabberd_test"
sql_port: 3306
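A minimal sketch of what generate-database-config.sh could look like, assuming the MYSQL_* variables from the question are set in the container's environment (the output path and the port default are assumptions):

#!/bin/sh
# generate-database-config.sh - write the SQL options from environment variables
cat > /etc/ejabberd/database.yml <<EOF
sql_server: "${MYSQL_SERVER}"
sql_database: "${MYSQL_DATABASE_NAME}"
sql_username: "${MYSQL_USERNAME}"
sql_password: "${MYSQL_PASSWORD}"
sql_port: ${MYSQL_PORT:-3306}
EOF

Run it from the container's entrypoint before ejabberd starts, so the included file already exists when ejabberd.yml is parsed.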

How to navigate up one folder in a dockerfile

I'm having some trouble building a Docker image because of the way the code has been structured. The code is written in C#, and in the solution there are a lot of projects that "support" the application I want to build.
My problem is that if I put the Dockerfile in the root I can build it without any problem, and that's okay, but I don't think it's optimal, because we have some other Dockerfiles we also need to build, and if I put them all into the root folder I think it will end up messy.
So if I put the Dockerfile into the folder with the application, how do I navigate to the root folder to grab the folders I need?
I tried "../", but it didn't seem to work. Is there any way to do it, or what is best practice in this scenario?
TL;DR
run it from the root directory:
docker build . -f ./path/to/dockerfile
the long answer:
in a dockerfile you can't really go up.
why
when the docker daemon is building your image, it uses 2 parameters:
your Dockerfile
the context
the context is what you refer to as . in the dockerfile (for example in COPY . /app).
both of them affect the final image: the dockerfile determines what is going to happen, and the context tells docker which files to perform those operations on.
that's how the docs put it:
A build’s context is the set of files located in the
specified PATH or URL. The build process can refer to any of the files
in the context. For example, your build can use a COPY instruction to
reference a file in the context.
so, usually the context is the directory where the Dockerfile is placed. my suggestion is to leave it where it belongs. name your dockerfiles after their role (Dockerfile.dev, Dockerfile.prod, etc.); it's ok to have a few of them in the same dir.
the context can still be changed:
after all, you are the one who specifies the context, since the docker build command accepts both the context and the dockerfile path. when I run:
docker build .
I am actually giving it my current directory as the context (I've omitted the dockerfile path, so it defaults to PATH/Dockerfile).
so if you have a dockerfile at dockerfiles/Dockerfile.dev, you should place yourself in the directory you want as the context, and run:
docker build . -f dockerfiles/Dockerfile.dev
the same applies to the docker-compose build section (there you specify a context and the dockerfile path); a sketch follows below.
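For illustration, a docker-compose sketch that uses the repo root as the context (the service name app is an assumption):

services:
  app:
    build:
      context: .                              # repo root, so COPY can reach sibling project folders
      dockerfile: dockerfiles/Dockerfile.dev  # dockerfile path relative to the context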
hope that made sense.
You can use a RUN command and chain whatever you want after &&, for example:
RUN cd .. && <your command>

Set line-buffering in container output

I use the Java S2I image for a container running in OpenShift (on premise). My problem is that the output of the image is page-buffered and oc logs ... does not show me the most recent logs.
I could probably spin up my own docker image that would run stdbuf -oL -e0 java ..., but I would prefer to stick to the 'official' image (just adding the jar to /deployments). Is there any way to reduce buffering (use line-buffering instead of page-buffering), or to flush the output on demand?
EDIT: It seems that I could update the deployment config and pass stdbuf in there, but that means I'd have to compose all the args myself. The ideal solution would be passing --tty to Docker, but I can't see how custom arguments could be passed that way in OpenShift.
In your repo, try creating the file .s2i/bin/run. In it add:
#!/bin/bash
exec stdbuf -oL -e0 /usr/local/s2i/run
I always forget where the S2I assemble and run scripts are in the Java S2I image, so you may need to replace /usr/local/s2i with the correct path.
Adding this file means it will be run as the startup command instead of the original run script; it then runs the original script under stdbuf. Ensure you use exec so that the subprocess replaces the current one, otherwise signals will not be propagated through properly.
Even though this might work, I am surprised logging isn't unbuffered already. I expect there would be a better way of controlling it through some Java config instead.

How to specify different .dockerignore files for different builds in the same project?

I used to list the tests directory in .dockerignore so that it wouldn't get included in the image that I use to run a web service.
Now I'm trying to use Docker to run my unit tests, and in this case I want the tests directory included.
I've checked docker build -h and found no related option.
How can I do this?
Docker 19.03 shipped a solution for this.
The Docker client tries to load <dockerfile-name>.dockerignore first and then falls back to .dockerignore if it can't be found. So docker build -f Dockerfile.foo . first tries to load Dockerfile.foo.dockerignore.
Setting the DOCKER_BUILDKIT=1 environment variable is currently required to use this feature. This flag can be used with docker compose since 1.25.0-rc3 by also specifying COMPOSE_DOCKER_CLI_BUILD=1.
From Mugen's comment, please note:
the custom dockerignore should be in the same directory as the Dockerfile, not in the root context directory like the original .dockerignore
i.e. when calling
DOCKER_BUILDKIT=1 docker build -f /path/to/custom.Dockerfile ...
your .dockerignore file should be at
/path/to/custom.Dockerfile.dockerignore
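Applied to the original question, a hypothetical setup could be two Dockerfiles, each with its own ignore file (all names here are assumptions):

# Dockerfile.web.dockerignore - used when building the web-service image, excludes the tests
tests/

# Dockerfile.tests.dockerignore - used when building the test image, keeps the tests
.git/

Each build then picks up its own rules, e.g. DOCKER_BUILDKIT=1 docker build -f Dockerfile.web . for the service and DOCKER_BUILDKIT=1 docker build -f Dockerfile.tests . for the tests.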
At the moment, there is no way to do this. There is a lengthy discussion about adding an --ignore flag to Docker to provide the ignore file to use - please see here.
The options you have at the moment are mostly ugly:
Split your project into subdirectories that each have their own Dockerfile and .dockerignore, which might not work in your case.
Create a script that copies the relevant files into a temporary directory and runs the Docker build there (a rough sketch follows below).
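A rough sketch of that copy-to-temp-dir approach (the directory names and image tag are assumptions):

#!/bin/sh
# build-tests.sh - stage only what the test image needs, then build from the staged copy
BUILD_DIR="$(mktemp -d)"
cp -r src tests package.json Dockerfile "$BUILD_DIR"/
docker build -t myapp-tests "$BUILD_DIR"
rm -rf "$BUILD_DIR"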
Adding the removed tests back as a volume mount to the container could be an option here. After you build the image, when running it for testing, mount the source code containing the tests on top of the cleaned-up code.
services:
  tests:
    image: my-clean-image
    volumes:
      - '../app:/opt/app' # Add removed tests
I've tried activating DOCKER_BUILDKIT as suggested by @thisismydesign, but I ran into other problems (outside the scope of this question).
As an alternative, I'm creating an intermediate tar using the -T flag, which takes a text file listing the files to include in the tar, so it's not that different from a whitelist-style .dockerignore.
I export this tar and pipe it to the docker build command, and specify my Dockerfile, which can live anywhere in my file hierarchy. In the end it looks like this:
tar -czh -T files-to-include.txt | docker build -f path/to/Dockerfile -
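For illustration, files-to-include.txt is just a plain list of paths, one per line; the Dockerfile itself likely needs to be listed too, so the -f path can be resolved inside the piped context (these entries are assumptions):

src
tests
package.json
path/to/Dockerfile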
Another option is to have a further build process that includes the tests. The way I do it is this:
If the tests are unit tests, then I create a new Docker image that is derived from the main project image; I just stick a FROM at the top and then ADD the tests, plus any required tools (in my case, mocha, chai and so on); see the sketch after this list. This new 'testing' image now contains both the tests and the original source to be tested. It can then simply be run as is, or it can be run in 'watch mode' with volumes mapped to your source and test directories on the host.
If the tests are integration tests (for example, the primary image might be a GraphQL server), then the image I create is self-contained, i.e. it is not derived from the primary image (it still contains the tests and tools, of course). My tests use environment variables to tell them where to find the endpoint that needs testing, and it's easy enough to get Docker Compose to bring up one container using the primary image and another using the integration-testing image, and to set the environment variables so that the test suite knows what to test.
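A minimal sketch of the derived 'testing' image described in the unit-test case above (the base image name, paths and test runner are assumptions):

# Dockerfile.tests - a testing image derived from the main project image
FROM my-app-image:latest

WORKDIR /app

# add the test suite and the tools it needs
ADD tests/ ./tests/
RUN npm install --no-save mocha chai

# run the tests by default
CMD ["npx", "mocha", "tests/"]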
Sadly it isn't currently possible to point Docker to a specific file to use as .dockerignore, so we generate it in our build script based on the target/platform/image. As a Docker enthusiast, I find this a sad and embarrassing workaround.
