I'd like to have some kind of "development Docker image" in which npm install is executed every time I restart my Docker container (because I don't want to build, push and pull the new dev image from my local machine to our Docker server every day).
So I thought I could do something like this in my Dockerfile:
CMD npm install git+ssh://git@mycompany.de/my/project.git#develop && npm start
Sadly, this doesn't work. The container stops immediately after docker start and I don't know why, because this works:
RUN npm install git+ssh://git@mycompany.de/my/project.git#develop
CMD npm start
(Just for testing; that's of course not what I want to have.) But maybe I have a wrong understanding of CMD, and someone could enlighten me?
Make your CMD point to a shell script.
CMD ["/my/path/to/entrypoint.sh"]
with that script being:
#!/bin/bash
npm install git+ssh://git@mycompany.de/my/project.git#develop
npm start
# whatever else
I find this easier for a few reasons:
Inevitably the list of commands grows as more needs to be done
It makes it much easier to run containers interactively, as you can start them with docker run mycontainer /bin/bash and then execute the shell script manually. This is helpful for debugging, as shown below.
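For example, to debug interactively (adding -it for an interactive TTY; image name and script path are the illustrative ones from above):
docker run -it mycontainer /bin/bash
# inside the container, run the entrypoint script by hand and watch its output
/my/path/to/entrypoint.sh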
Related
I'm writing e2e tests with Supertest for my NestJS application and I have a "test:e2e" script which looks like this:
"test:e2e": "nerdctl compose up && dotenv -e .env.test -- jest --no-cache --config ./test/jest-e2e.json && nerdctl compose down"
When I run the command yarn test:e2e, it stops after spinning up my Docker container (from the nerdctl compose up command) and doesn't run my tests or tear the container down. I know the double ampersands && run the scripts sequentially, which is my goal here, but I can't figure out why it stops after spinning up the container. Perhaps spinning up the container takes too long? Any help is greatly appreciated!
Environment:
macOS v12.6.1
Node v18.12.1
NPM v8.19.2
Silly mistake: I forgot to add the -d option to run my container in the background. Thank you, Steven!
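For reference, the fixed script only differs from the one above by the detach flag:
"test:e2e": "nerdctl compose up -d && dotenv -e .env.test -- jest --no-cache --config ./test/jest-e2e.json && nerdctl compose down"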
This is likely a standard task, but I've spent a lot of time googling and prototyping this without success.
I want to set up CI for a Java application that needs a database (MySQL/MariaDB) for its tests. Basically, just a clean database where it can write to. I have decided to use Jenkins for this. I have managed to set up an environment where I can compile the application, but fail to provide it with a database.
What I have tried is to use a Docker image with Java and MariaDB. However, I run into problems starting the MariaDB daemon, because at that point Jenkins has already switched to its user (UID 1000), which doesn't have permission to start the daemon; only the root user can do that.
My Dockerfile:
FROM eclipse-temurin:17-jdk-focal
RUN apt-get update \
&& apt-get install -y git mariadb-client mariadb-server wget \
&& apt-get clean
COPY ./docker-entrypoint.sh /
ENTRYPOINT ["/docker-entrypoint.sh"]
The docker-entrypoint.sh is pretty trivial (and also chmod a+x'd, that's not the problem):
#! /bin/sh
service mysql start
exec "$#"
However, Jenkins fails with these messages:
$ docker run -t -d -u 1000:1001 [...] c8b472cda8b242e11e2d42c27001df616dbd9356 cat
$ docker top cbc373ea10653153a9fe76720c204e8c2fb5e2eb572ecbdbd7db28e1d42f122d -eo pid,comm
ERROR: The container started but didn't run the expected command. Please double check your ENTRYPOINT does execute the command passed as docker run argument, as required by official docker images (see https://github.com/docker-library/official-images#consistency for entrypoint consistency requirements).
Alternatively you can force image entrypoint to be disabled by adding option `--entrypoint=''`.
I have tried debugging this from the command line using the built Docker image c8b472cda8b. The problem is as described before: because Jenkins passes -u 1000:1001 to Docker, the docker-entrypoint.sh script no longer runs as root and therefore fails to start the daemon. Somewhere in Docker or Jenkins the error is "eaten up" and not shown, but the end result is that mysqld doesn't run and the script never gets to exec "$@".
If I execute exactly the same command as Jenkins, but without the -u ... argument, leaving me as root, then everything works fine.
I'm sure there must be a simple way to start the daemon and/or set this up completely differently (an external database?), but I can't figure it out. I'm practically new to Docker and especially to Jenkins.
My suggestion is:
Run the docker run command without -u (i.e. as root)
Create a jenkins user inside the container (via the Dockerfile)
At the end of the entrypoint.sh, switch to the jenkins user with su - jenkins (see the sketch below)
One disadvantage is that every time you enter the container you will be the root user
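Putting those steps together, a minimal sketch of the entrypoint (the jenkins user name is an assumption, and "$*" flattens the arguments, which is fine for simple commands but loses quoting for complex ones):
#!/bin/sh
# runs as root because no -u was passed, so the daemon can start
service mysql start
# drop privileges and run whatever command was passed to the container
exec su - jenkins -c "$*"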
I have to execute two commands in the Dockerfile, but both commands attach to the terminal and block execution of the next one.
Dockerfile:
FROM sinet/nginx-node:latest
RUN mkdir /usr/src/app
WORKDIR /usr/src/app
RUN git clone https://name:pass@bitbucket.org/joaocromg/front-web-alferes.git
WORKDIR /usr/src/app/front-web-alferes
RUN npm install
RUN npm install bower -g
RUN npm install gulp -g
RUN bower install --allow-root
COPY default.conf /etc/nginx/conf.d/
RUN nginx -g 'daemon off;' & # command 1 blocking
CMD ["gulp watch-dev"] # command 2 not executed
Does anyone know how I can solve this?
Try creating a script like this:
#!/bin/sh
nginx -g 'daemon off;' &
gulp watch-dev
And then execute it in your CMD:
CMD /bin/my-script.sh
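For that to work, the script also has to be copied into the image and made executable; a minimal sketch (source and destination paths are illustrative):
COPY my-script.sh /bin/my-script.sh
RUN chmod +x /bin/my-script.sh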
Also, notice your last line would not have worked:
CMD ["gulp watch-dev"]
It needed to be either:
CMD gulp watch-dev
or:
CMD ["gulp", "watch-dev"]
Also, notice that RUN is for executing a command that will change your image state (like RUN apt install curl), not for executing a program that needs to be running when you run your container. From the docs:
The RUN instruction will execute any commands in a new layer on top of the current image and commit the results. The resulting committed image will be used for the next step in the Dockerfile.
I suggest you try supervisord in this case. http://supervisord.org/
Edit: Here is a dockerized example of httpd and an SSH daemon: https://riptutorial.com/docker/example/14132/dockerfile-plus-supervisord-conf
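A sketch of what the supervisord.conf could look like in this case (the program sections and working directory are assumptions based on the Dockerfile above):
[supervisord]
nodaemon=true

[program:nginx]
command=nginx -g 'daemon off;'

[program:gulp]
command=gulp watch-dev
directory=/usr/src/app/front-web-alferes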
The answer here is that RUN nginx -g 'daemon off;' intentionally starts nginx in the foreground, which blocks your second command. That invocation is meant for starting Docker containers with nginx as the foreground process. Running RUN nginx instead would start nginx, fork the master and worker processes, and (hopefully) exit with a zero status code. Although, as mentioned above, this is not the intended use of RUN, so a shell script would work best in this case.
I'm new to docker and am trying to dockerize an app I have. Here is the dockerfile I am using:
FROM golang:1.10
WORKDIR /go/src/github.com/myuser/pkg
ADD . .
RUN curl https://raw.githubusercontent.com/golang/dep/master/install.sh | sh
RUN dep ensure
CMD ["go", "run", "cmd/pkg/main.go"]
The issue I am running into is that I will update source files on my local machine with some log statements, rebuild the image, and try running it in a container. However, the CMD (go run cmd/pkg/main.go) will not reflect the changes I made.
I looked into the container filesystem and I see that the source files are updated and match what I have locally. But when I run go run cmd/pkg/main.go within the container, I don't see the log statements I added.
I've tried using the --no-cache option when building the image, but that doesn't seem to help. Is this a problem with the golang image, or with my Dockerfile setup?
UPDATE: I have found the issue. It is related to using dep for vendoring. The vendor folder had outdated files for my package because dep ensure was pulling them from GitHub instead of using my local copies. I will be moving to Go 1.11, which supports Go modules, to fix this.
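For reference, the move to modules is roughly this, run in the project root (the module path is taken from the Dockerfile's WORKDIR above, so treat it as an assumption):
go mod init github.com/myuser/pkg
go mod tidy  # rebuilds the dependency list into go.mod/go.sum, replacing dep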
I see several things:
According to your Dockerfile
Maybe you need a dep init before dep ensure
Probably you need to check that the path to main.go is correct.
According to docker philosophy
In my humble opinion, you should create the image with docker build -t <your_image_name> . (executed where your Dockerfile is), but without the CMD line.
Then pass the command at run time: docker run -d <your_image_name> go run cmd/pkg/main.go, or whatever your command is.
If something is wrong, you can check exited containers with docker ps -a and then check the logs with docker logs <your_CONTAINER_name/id>.
Another way to check the logs is to access the container using bash and execute go run manually:
docker run -ti <your_image_name> bash
# go run blablabla
I am really new to Docker and have it running now for Airflow. For one of the Airflow DAGs, I run python jobs.<job_name>.run, which is located on the server and inside the Docker container. However, this Python code needs packages to run, and I am having trouble installing them.
If I put a RUN pip install ... line in the Dockerfile, it doesn't seem to work: if I go 'into' the Docker container with docker exec -ti <name_of_worker> bash and run pip freeze, no packages show up.
However, if I perform the pip install command while I am inside the worker, the Airflow DAG runs successfully. But I shouldn't have to perform this task every time I rebuild my containers. Can anyone help me?
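For context, the pattern being described would be roughly this in the worker's Dockerfile (base image tag and package names are purely illustrative), after which the image has to be rebuilt and the containers recreated so the compose setup actually picks up the new image:
FROM apache/airflow:2.7.3
# bake the packages the DAG's python jobs need into the image
RUN pip install --no-cache-dir requests pandas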