I have my dockerized Elasticsearch and Kibana containers running, and they automatically install some plugins when I start them.
I now need to edit the config/elasticsearch.yml file to enable one of those plugins, and I am trying to find a way to do it the same way I installed the plugins through a Dockerfile, as shown below:
# ELASTIC_VERSION is supplied at build time via --build-arg
ARG ELASTIC_VERSION
FROM docker.elastic.co/elasticsearch/elasticsearch:${ELASTIC_VERSION}
RUN bin/elasticsearch-plugin install https://github.com/spinscale/elasticsearch-ingest-opennlp/releases/download/7.6.0.1/ingest-opennlp-7.6.0.1.zip
RUN bin/elasticsearch-plugin install mapper-annotated-text
RUN bin/elasticsearch-plugin install analysis-phonetic
RUN bin/elasticsearch-plugin install ingest-attachment --batch
RUN bin/ingest-opennlp/download-models
The correct way would be to create a new Docker image:
Create a new Dockerfile with elasticsearch as the base image, overwrite the elasticsearch.yml file in this image, and then build the image:
FROM elasticsearch
COPY elasticsearch.yml /usr/share/elasticsearch/config/elasticsearch.yml
Optionally, push this image to dockerhub
Use this image for deployments
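To build and optionally publish it, something along these lines should work (the image and Docker Hub names are placeholders):
docker build -t my-elasticsearch .
docker tag my-elasticsearch mydockerhubuser/my-elasticsearch
docker push mydockerhubuser/my-elasticsearch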
Resolved, so thanks for all the help received.
Inspired by https://stackoverflow.com/a/49755244/12851178
Updated Dockerfile; keeping this here for others' future reference:
# ELASTIC_VERSION is supplied at build time via --build-arg
ARG ELASTIC_VERSION
FROM docker.elastic.co/elasticsearch/elasticsearch:${ELASTIC_VERSION}
RUN bin/elasticsearch-plugin install https://github.com/spinscale/elasticsearch-ingest-opennlp/releases/download/7.6.0.1/ingest-opennlp-7.6.0.1.zip
RUN bin/elasticsearch-plugin install mapper-annotated-text
RUN bin/elasticsearch-plugin install analysis-phonetic
RUN bin/elasticsearch-plugin install ingest-attachment --batch
RUN bin/ingest-opennlp/download-models
RUN echo "ingest.opennlp.model.file.persons: en-ner-persons.bin" >> /usr/share/elasticsearch/config/elasticsearch.yml
RUN echo "ingest.opennlp.model.file.dates: en-ner-dates.bin" >> /usr/share/elasticsearch/config/elasticsearch.yml
RUN echo "ingest.opennlp.model.file.locations: en-ner-locations.bin" >> /usr/share/elasticsearch/config/elasticsearch.yml
I created a container using jenkins/jenkins:lts-jdk11, and as far as I know a Jenkins user should also be created with a home directory, but that isn't happening.
Below is the Dockerfile; am I doing anything wrong?
Dockerfile:
FROM jenkins/jenkins:lts-jdk11
WORKDIR /var/jenkins_home
RUN apt-get update
COPY terraform .
COPY sencha .
COPY go .
COPY helm .
RUN chown -R jenkins:jenkins /var/jenkins_home
Built with:
docker build .
The image gets created and the container also gets created; I do see a Jenkins user with id 1000, but this user has no home directory, and moreover helm, go, sencha, and terraform are also not installed.
I exec'd into the container to double-check whether terraform is installed or not:
# terraform --version gives "command not found"
# which terraform also shows no result.
Same output for go, sencha, and helm.
Any suggestions?
You need to install the binaries into a directory that is on the PATH, such as /usr/local/bin/, like in this example:
FROM jenkins/jenkins:lts-jdk11
WORKDIR /var/jenkins_home
RUN apt-get update
COPY terraform /usr/local/bin/terraform
Btw, the jenkins/jenkins:lts-jdk11 image is based on a Debian distribution, so you can use the apt package manager to install your apps.
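For instance, here is a hedged sketch that downloads Terraform at build time instead of COPYing a local binary (the version and download URL are assumptions; pick the release you actually need):
FROM jenkins/jenkins:lts-jdk11
# install steps need root; the base image runs as the jenkins user
USER root
RUN apt-get update && apt-get install -y --no-install-recommends curl unzip \
    && curl -fsSLo /tmp/terraform.zip \
       https://releases.hashicorp.com/terraform/1.5.7/terraform_1.5.7_linux_amd64.zip \
    && unzip /tmp/terraform.zip -d /usr/local/bin \
    && rm -rf /tmp/terraform.zip /var/lib/apt/lists/*
# drop back to the unprivileged user
USER jenkins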
I've started using CircleCI for CI (I'm a newbie) and I want to build a Docker image and push it to Docker Hub inside a CircleCI job.
The problem is the ADD statement of the Dockerfile; the error says:
ADD failed: stat /var/lib/docker/tmp/docker-builder814373370/app/build: no such file or directory
docker build works fine locally. The problem seems to be the 'remote environment' created by CircleCI to execute docker commands inside a job (when the job is executing inside a container). I tried multiple things to share my folder with the remote environment, but nothing has worked. I also tried to execute my job inside a 'machine' executor to get rid of the 'remote environment', but that gave me more errors.
I think I could achieve it by storing my project online in another job and then adding the folder over HTTPS inside the Dockerfile, but I'm pretty sure there is a faster way; I just don't see it.
Here is my Dockerfile:
FROM ubuntu:20.04
# TZ must be set for the next line to work; the value here is an assumption
ENV TZ=Etc/UTC
RUN ln -snf /usr/share/zoneinfo/$TZ /etc/localtime && echo $TZ > /etc/timezone
RUN apt-get update -yq && apt-get -yq install nodejs npm && npm install serve -g
ADD app/build/ /app
EXPOSE 5000
CMD serve -s /app -l 5000
and my circleci job:
working_directory: ~/project/
docker:
  - image: circleci/buildpack-deps:stretch
steps:
  - checkout
  - setup_remote_docker
  - run:
      name: Build Docker image
      command: sudo docker build . -t $IMAGE_NAME:latest
I achieved it by storing artifacts in another job and then fetching the folder over HTTPS with curl and wget in a RUN statement of the Dockerfile.
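A minimal sketch of that approach inside the Dockerfile (the artifact URL is a placeholder for wherever the other job publishes the build, and curl must be installed in an earlier layer):
RUN curl -fsSL -o /tmp/build.tar.gz "https://example.com/artifacts/app-build.tar.gz" \
    && mkdir -p /app \
    && tar -xzf /tmp/build.tar.gz -C /app \
    && rm /tmp/build.tar.gz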
After a couple of days of testing and working with Docker (in general I am trying to migrate from Vagrant to Docker), I encountered a big problem which I am not sure how or where to fix.
docker-compose.yml
version: "3"
services:
server:
build: .
volumes:
- ./:/var/www/dev
links:
- database_dev
- database_testing
- database_dev_2
- mail
- redis
ports:
- "80:8080"
tty: true
#the rest are only images of database redis and mailhog with ports
Dockerfile
example_1
FROM ubuntu:latest
LABEL maintainer="Yamen Nassif"
SHELL ["/bin/bash", "-c"]
RUN apt-get install vim mc net-tools iputils-ping zip curl git -y
RUN curl -sS https://getcomposer.org/installer | php -- --install-dir=/usr/local/bin --filename=composer
RUN cd /var/www/dev
RUN composer install
Dockerfile
example_2
....
RUN apt-get install apache2 openssl php7.2 php7.2-common libapache2-mod-php7.2 php7.2-fpm php7.2-mysql php7.2-curl php7.2-dom php7.2-zip php7.2-gd php7.2-json php7.2-opcache php7.2-xml php7.2-cli php7.2-intl php7.2-mbstring php7.2-redis -y
# basically 2 files with just rooting to /var/www/dev
COPY docker/config/vhosts /etc/apache2/sites-available/
RUN service apache2 restart
....
Now, in example_1, the composer.json file/directory is not found, and in example_2 Apache says the root dir is not found; the file/directory in question is /var/www/dev.
I guess it's because it's a volume and it won't be mounted until the container is fully up: if I build the image without the commands that error out, I can then log in to the container and execute those same commands from the command line without any error.
How do I fix this?
In your first Dockerfile, use the COPY directive to copy your application into the image before you do things like RUN composer install. It'd look something like
FROM php:7.0-cli
COPY . /usr/src/app
WORKDIR /usr/src/app
RUN composer install
(cribbed from the php image documentation; that image may not have composer preinstalled).
In both Dockerfiles, remember that each RUN command starts a new container from the image built so far, runs its command, and saves the result as a new image layer. That means commands like RUN cd ... have no effect on later steps, and you can't start a service in the background in one RUN command and have it still be running later; it will get stopped before the Dockerfile moves on to the next line.
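To illustrate (a toy sketch, not taken from the original Dockerfiles):
RUN cd /var/www/dev   # no effect: the shell exits when this RUN step ends
RUN pwd               # still prints the image's default directory
WORKDIR /var/www/dev  # the persistent way to change directory
RUN composer install  # now actually runs inside /var/www/dev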
In the second Dockerfile, commands like service or systemctl or initctl just don't work in Docker and you shouldn't try to use them. Standard practice is to start the server process as a foreground process when the container launches via a default CMD directive. The flip side of this is that, since the server won't start until docker run time, your volume will be available at that point. I might RUN mkdir in the Dockerfile just to be sure it exists.
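For the Apache case that would look something like this (a sketch; apache2ctl -D FOREGROUND is the standard way to keep Apache in the foreground on Debian/Ubuntu):
RUN mkdir -p /var/www/dev
CMD ["apache2ctl", "-D", "FOREGROUND"]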
The problem seems to be the execution order. At image build time, /var/www/dev holds only what the build itself put there. When you start a container from that image, the container's /var/www/dev is overwritten by your local mount.
If you don't need access from your host, you can simply skip the extra volume.
In case you want to use it in other containers too, you could work with symlinks.
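A sketch of the first option: drop the host mount from docker-compose.yml so the code baked into the image at build time is what the container serves:
services:
  server:
    build: .
    # no volumes entry: the /var/www/dev copied in at build time is used
    ports:
      - "80:8080"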
I am trying to add Glide to my Golang project, but I can't get my container working. I am currently using:
# create image from the official Go image
FROM golang:alpine
RUN apk add --update tzdata bash wget curl git;
# Create binary directory, install glide and fresh
RUN mkdir -p $GOPATH/bin
RUN curl https://glide.sh/get | sh
RUN go get github.com/pilu/fresh
# define work directory
ADD . /go
WORKDIR /go/src
RUN glide update && fresh -c ../runner.conf main.go
as per #craigchilds94's post. When I run
docker build -t docker_test .
It all works. However, when I change the last line from RUN glide ... to CMD glide ... and then start the container with:
docker run -it --volume=$(pwd):/go docker_test
It gives me an error: /bin/sh: glide: not found. Ignoring the glide update and directly starting fresh results in the same: /bin/sh fresh: not found.
The end goal is to be able to mount a volume (for the live-reload) and be able to use it in docker-compose so I want to be able to build it, but I do not understand what is going wrong.
This should probably work for your purposes:
# create image from the official Go image
FROM golang:alpine
RUN apk add --update tzdata bash wget curl git;
# Create binary directory, install glide and fresh
RUN go get -u github.com/Masterminds/glide
RUN go get -u github.com/pilu/fresh
# define work directory
ADD . /go
WORKDIR /go/src
ENTRYPOINT $GOPATH/bin/fresh -c /go/src/runner.conf /go/src/main.go
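Then build and run it with something like:
docker build -t docker_test .
docker run -it docker_test
Note that mounting a host directory over all of /go (as in the original docker run) hides the tools that were installed under /go during the build, which is presumably why glide and fresh were "not found"; mounting only your source, e.g. -v $(pwd)/src:/go/src, should avoid that.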
As far as I know you don't need to run the glide update after you've just installed glide. You can check this Dockerfile I wrote that uses glide:
https://github.com/timogoosen/dockerfiles/blob/master/btcd/Dockerfile
and here is the README: https://github.com/timogoosen/dockerfiles/blob/master/btcd/README.md
This article gives a good overview of the difference between: CMD, RUN and entrypoint: http://goinbigdata.com/docker-run-vs-cmd-vs-entrypoint/
To quote from the article:
"RUN executes command(s) in a new layer and creates a new image. E.g., it is often used for installing software packages."
In my opinion, installing packages and libraries can happen with RUN.
For starting your binary or commands I would suggest using ENTRYPOINT; see: "ENTRYPOINT configures a container that will run as an executable." You could use CMD too for running:
$GOPATH/bin/fresh -c /go/src/runner.conf /go/src/main.go
Something like this might work, though I didn't test this part (note that the exec form of CMD does not expand environment variables such as $GOPATH, and each argument has to be its own array element; in golang:alpine, GOPATH is /go):
CMD ["/go/bin/fresh", "-c", "/go/src/runner.conf", "/go/src/main.go"]
This is my Dockerfile:
FROM node:7
RUN apt-get update && apt-get install -y --no-install-recommends \
rubygems build-essential ruby-dev \
&& rm -rf /var/lib/apt/lists/*
RUN npm install -gq gulp bower
RUN mkdir -p /usr/src/app
WORKDIR /usr/src/app
COPY . /usr/src/app
RUN npm install
CMD ["gulp", "start:dev"]
When I build the image, the npm install command executes really quickly and with little output. I actually build it through docker-compose, which does have a volume mounted, and I cannot see the node_modules folder being created on my host. When I launch a container from this image, I can see there is no node_modules folder. I then execute npm install manually and things start working: it takes 2-3 minutes to install all the packages, and the node_modules folder is indeed created.
What is happening here? What am I doing wrong? Why doesn't npm install work at build time, but then it works at run time?
The npm install should have worked based on your Dockerfile. You can see the created files if you run the image without a mounted volume (DIRNAME: where your docker-compose.yml is located):
docker run --rm -it DIRNAME_node ls -ahl /usr/src/app
With docker build, all data is stored in the image. So, it's intended that you don't see any files created on your host.
If you mount a volume over a directory (in Linux generally, and so also in a Docker container), it overlays that directory, so you can't see the node_modules created in the build step.
I suggest you do your tests based on the Docker image itself and don't mount the volume. Then you have an immutable Docker image which is better for deployment.
Also, copying up the whole source and then running npm install means that whenever the source code changes, the cache for the npm install step becomes invalid.
Instead, separate the steps so the dependency layer is cached on its own:
COPY package*.json ./
RUN npm install
COPY . .
On Windows 10 I was having the same issue reported in this question, and after some research I found a question with the necessary steps to solve the problem.
In short, the main problem was that during the install wizard I had selected the "Windows containers" option.
To solve the issue:
1) Switch to Linux containers: on the taskbar, right-click the Docker icon and click the "Switch to Linux containers" option.
2) Disable "Experimental Features" on the command line: open Docker settings and click on Command Line.
3) Disable the experimental setting in the configuration file: in Docker settings, click on Docker Engine and verify that experimental is set to false:
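The relevant part of the Docker Engine configuration JSON should then contain:
{
  "experimental": false
}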
The question where I found the solution was related to another problem I was facing when trying to build Docker images: Unspecified error (0x80004005) while running a Docker build. Both problems had the same root cause: when installing Docker for the first time, I had selected the "Windows containers" option.
Hope it helps. Cheers