I have a rule in my Makefile. Within this rule I need to manipulate some Docker-specific things, so I need to get the id of the container in a portable way. In addition, I am using Docker Compose. Here is what I have, which doesn't work:
a-rule: some deps
$(shell uuid="$(docker-compose ps -q myService)" docker cp "$$uuid":/a/b/c .)
I receive no errors or output, but I do not get a successful execution.
My goal is to get the uuid of the container that myService is running in and then use that uuid to copy a file from the container to my docker host.
edit:
the following works, but I'm still wondering if it's possible to do inline variable settings
uuid=$(shell docker-compose ps -q myService)
a-rule: some deps
docker cp "$(uuid)":/a/b/c .
I ran into the same problem and realised that in a Makefile recipe you have to escape the dollar sign as $$ so that the variable is expanded by the shell rather than by make. So I tried that, and this should work for you:
a-rule: some deps
uuid=$$(docker-compose ps -q myService);\
docker cp "$$uuid":/a/b/c .
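For what it's worth, the intermediate variable can also be skipped entirely, since $$(...) reaches the shell as $(...). A hypothetical one-liner:
a-rule: some deps
docker cp "$$(docker-compose ps -q myService)":/a/b/c .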
Bit late to the party but I hope this helps someone.
I wish to build multiple docker images through my makefile. I have a make target looking like this:
docker:
docker build -t service1:latest -f ./service1/Dockerfile .
docker build -t service2:latest -f ./service2/Dockerfile .
...
To gain time, I want to run them in parallel, so I wanted to update my makefile like this:
docker:
docker build -t $(SERVICE):latest -f ./$(SERVICE)/Dockerfile .
And calling it with something which would look like this:
make -j=2 SERVICE=service1 docker SERVICE=service2 docker
But obviously it does not work, since there are multiple issues with this.
I was thinking of using the % pattern, but I am not quite sure how to achieve this cleanly.
What would be the right way to achieve this?
You could write something like this:
IMAGES = \
service1 \
service2
all: $(IMAGES)
.PHONY: $(IMAGES)
$(IMAGES):
docker build -t $@:latest -f $@/Dockerfile .
The .PHONY declaration is necessary because otherwise make will find the directory named service1 or service2 and decide that the target does not need updating. .PHONY tells it to ignore this and build the target in any case.
Using this Makefile, if I run make -j it spawns two build processes in parallel.
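Since the question mentions the % pattern: an untested sketch of the same idea as a pattern rule. Note that a FORCE prerequisite stands in for .PHONY here, because make skips the implicit-rule search for phony targets:
# build any image with "make docker-<name>", e.g. make -j2 docker-service1 docker-service2
docker-%: FORCE
docker build -t $*:latest -f $*/Dockerfile .
FORCE: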
While this works, I'm not sure that make is really the right tool for the job. The idea behind make is that it will only rebuild those things that need to be rebuilt, saving time if only a few things have been modified.
In this situation, make doesn't really have any way to make that sort of decision.
Since you want to rebuild everything every time, you might be better off with a simple shell script and xargs:
#!/bin/bash
seq 2 |
xargs -iSERVICENUM -P0 docker build -t serviceSERVICENUM -f serviceSERVICENUM/Dockerfile .
Or if your services aren't actually named in a numeric sequence:
#!/bin/bash
SERVICES='
foo
bar
'
xargs -iSERVICE -P0 docker build -t SERVICE -f SERVICE/Dockerfile . <<<"$SERVICES"
I'm trying to do an offline deployment of a docker image with RPM on CentOS.
My spec file is pretty simple :
Source1: myimage.tar.gz
...
%install
cp %{SOURCE1} ...
...
%post
docker load -i myimage.tar.gz
docker-compose up -d
docker image prune -af
I compress my image using docker save and gzip. Then, on another machine, I just load the image with docker and use docker-compose to run my service.
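For reference, an archive like the Source1 above can be produced along these lines (the image name is illustrative):
docker save myimage:latest | gzip > myimage.tar.gz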
When executing the docker load and docker-compose up commands, I get this error:
sudo: unable to execute /bin/docker: Permission denied
sudo: unable to execute /bin/docker-compose: Permission denied
sudo: unable to execute /bin/docker: Permission denied
My user is part of the docker group, and I checked that the RPM scriptlets are executed as root; they are...
If I run the RPM on my dev machine, it works; if I execute the commands in a script that is not part of the RPM, it works...
Any ideas?
Thanks in advance.
You're probably being blocked by SELinux. You can temporarily disable it to check with setenforce 0.
If that is the problem (it is; this is a comment turned into an answer), some possible solutions:
You might be able to use audit2allow to change the denials into new rules to import.
Maybe udica will help. I don't know enough about it to tell.
I tried the first solution and it worked!
grep rpm_script_t /var/log/audit/audit.log | audit2allow -m mypolicy > mypolicy.te
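Note that the generated .te source still has to be compiled and loaded before it takes effect; the usual shortcut is audit2allow -M, which builds an installable .pp package directly:
grep rpm_script_t /var/log/audit/audit.log | audit2allow -M mypolicy
semodule -i mypolicy.pp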
The problem came from the fact that the RPM scripts didn't have access to the container_runtime_exec_t:file entrypoint permission which, I suppose, allows them to run container binaries like docker.
Thanks a lot for the tip!
I'm trying to use the tf-sentencepiece operation in my model found here https://github.com/google/sentencepiece/tree/master/tensorflow
There is no issue building the model and getting a saved_model.pb file with variables and assets. However, if I try to use the docker image for tensorflow/serving, it says
Loading servable: {name: model version: 1} failed:
Not found: Op type not registered 'SentencepieceEncodeSparse' in binary running on 0ccbcd3998d1.
Make sure the Op and Kernel are registered in the binary running in this process.
Note that if you are loading a saved graph which used ops from tf.contrib, accessing
(e.g.) `tf.contrib.resampler` should be done before importing the graph,
as contrib ops are lazily registered when the module is first accessed.
I am unfamiliar with how to build anything manually, and was hoping that I could do this without many changes.
One approach would be to:
Pull a docker development image
$ docker pull tensorflow/serving:latest-devel
In the container, make your code changes
$ docker run -it tensorflow/serving:latest-devel
Modify the code to add the op dependency to the model server's build (the BUILD file under tensorflow_serving/model_servers/).
In the container, build TensorFlow Serving
container:$ bazel build tensorflow_serving/model_servers:tensorflow_model_server && cp bazel-bin/tensorflow_serving/model_servers/tensorflow_model_server /usr/local/bin/
Use the exit command to exit the container
Look up the container ID:
$ docker ps
Use that container ID to commit the development image:
$ docker commit <container_id> $USER/tf-serving-devel-custom-op
Now build a serving container using the development container as the source
$ mkdir /tmp/tfserving
$ cd /tmp/tfserving
$ git clone https://github.com/tensorflow/serving .
$ docker build -t $USER/tensorflow-serving --build-arg TF_SERVING_BUILD_IMAGE=$USER/tf-serving-devel-custom-op -f tensorflow_serving/tools/docker/Dockerfile .
You can now use $USER/tensorflow-serving to serve your model, following the Docker instructions.
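From there the custom image behaves like the stock one; a typical invocation (model path and name are placeholders):
docker run -p 8501:8501 \
--mount type=bind,source=/path/to/my_model,target=/models/my_model \
-e MODEL_NAME=my_model $USER/tensorflow-serving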
I have the following line in my Dockerfile which is supposed to capture the display number of the host:
RUN DISPLAY_NUMBER="$(echo $DISPLAY | cut -d. -f1 | cut -d: -f2)" && echo $DISPLAY_NUMBER
When I try to build the Dockerfile, DISPLAY_NUMBER is empty. However, when I run the same command directly in the terminal, I see the result. Is there anything that I'm doing wrong here?
Commands specified with RUN are executed when the image is built. There is no display during build, hence the output is empty.
You can exchange RUN for ENTRYPOINT; then the command is executed when the container starts.
But how to forward the hosts display to the container is another matter entirely.
Host environment variables cannot be passed during build, only at run-time.
Only build args can be specified, by first declaring the arg in the Dockerfile:
ARG DISPLAY_NUMBER
and then running:
docker build . --no-cache -t disp --build-arg DISPLAY_NUMBER=$DISPLAY_NUMBER
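Combining this with the pipeline from the question, a hypothetical host-side invocation would be:
DISPLAY_NUMBER="$(echo "$DISPLAY" | cut -d. -f1 | cut -d: -f2)"
docker build . --no-cache -t disp --build-arg DISPLAY_NUMBER="$DISPLAY_NUMBER"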
You can work around this issue using the envsubst trick:
RUN echo $DISPLAY_NUMBER
And on the command line:
envsubst < Dockerfile | docker build . -f -
Which will rewrite the Dockerfile in memory and pass it to Docker with the environment variable changed.
Edit: Note that this solution is of limited use, though, because you probably want to do this at run-time anyway: the value should depend not on where the image is built, but on where it is run.
I would personally move that logic into your ENTRYPOINT or CMD script.
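As a minimal sketch of that run-time approach (assuming the container is started with docker run -e DISPLAY ... and this script is set as the ENTRYPOINT):
#!/bin/sh
# entrypoint.sh: derive the display number when the container starts
DISPLAY_NUMBER="$(echo "$DISPLAY" | cut -d. -f1 | cut -d: -f2)"
echo "display number: $DISPLAY_NUMBER"
exec "$@"    # hand off to the CMD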
I would like to create a dockerfile that builds a Cassandra image with a keyspace and schema already there when the image starts.
In general, how do you create a Dockerfile that will build an image that includes some step(s) that can't really be done until the container is running, at least the first time?
Right now, I have two steps: build the cassandra image from an existing cassandra Dockerfile that maps a volume with the CQL schema files into a temporary directory, and then run docker exec with cqlsh to import the schema after the image has been started as a container.
But that doesn't create an image with the schema - just a container. That container could be saved as an image, but that's cumbersome.
docker run --name $CASSANDRA_NAME -d \
-h $CASSANDRA_NAME \
-v $CASSANDRA_DATA_DIR:/data \
-v $CASSANDRA_DIR/target:/tmp/schema \
tobert/cassandra:2.1.7
then
docker exec $CASSANDRA_NAME cqlsh -f /tmp/schema/create_keyspace.cql
docker exec $CASSANDRA_NAME cqlsh -f /tmp/schema/schema01.cql
# etc
This works, but it makes it impossible to use with tools like Docker Compose, since linked containers/services will start up too and expect the schema to already be in place.
I saw one attempt where the cassandra process was started in the background during the Dockerfile build and cqlsh was then run against it, but I don't think that worked too well.
OK, I had this issue and someone advised me a strategy to deal with it:
Start from an existing Cassandra Dockerfile, the official one for example
Remove the ENTRYPOINT stuff
Copy the schema (.cql) file and data (.csv) into the image and put it somewhere, /opt/data for example
create a shell script that will be used as the last command to start Cassandra (a sketch follows this list)
a. start cassandra with $CASSANDRA_HOME/bin/cassandra
b. If there is a $CASSANDRA_HOME/data/data/your_keyspace-xxxx folder and it's not empty, do nothing more
c. Else
1. sleep some time to allow the server to listen on port 9042
2. when port 9042 is listening, execute the .cql script to load csv files
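A minimal sketch of such a start script, assuming the schema file was copied to /opt/data/schema.cql and the keyspace is named your_keyspace:
#!/bin/sh
# start script: load the schema on first boot only
$CASSANDRA_HOME/bin/cassandra    # daemonizes by default

# only load the schema when the keyspace directory is absent or empty
if [ -z "$(ls -A "$CASSANDRA_HOME"/data/data/your_keyspace-* 2>/dev/null)" ]; then
    until cqlsh -e 'DESCRIBE KEYSPACES' >/dev/null 2>&1; do
        sleep 2    # wait for the server to listen on 9042
    done
    cqlsh -f /opt/data/schema.cql
fi

tail -f "$CASSANDRA_HOME"/logs/system.log    # keep the container alive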
I found this procedure rather cumbersome, but there seems to be no way around it. For a Cassandra hands-on lab, I found it easier to create a VM image using Vagrant and Ansible.
Make a Dockerfile, Dockerfile_CAS:
FROM cassandra:latest
COPY ddl.cql docker-entrypoint-initdb.d/
COPY docker-entrypoint.sh /docker-entrypoint.sh
RUN ls -la *.sh; chmod +x *.sh; ls -la *.sh
ENTRYPOINT ["/docker-entrypoint.sh"]
CMD ["cassandra", "-f"]
edit docker-entrypoint.sh and add
for f in docker-entrypoint-initdb.d/*; do
case "$f" in
*.sh) echo "$0: running $f"; . "$f" ;;
*.cql) echo "$0: running $f" && until cqlsh -f "$f"; do >&2 echo "Cassandra is unavailable - sleeping"; sleep 2; done & ;;
*) echo "$0: ignoring $f" ;;
esac
echo
done
above the final exec "$@" line, then rebuild the image:
docker build -t suraj1287/cassandra -f Dockerfile_CAS .
Another approach our team uses is to create the schema on server init.
Our Java code tests whether the schema exists and, if not (new environment, new deployment), creates it.
The same goes for every new table: automatic CREATE TABLE statements create the required tables for new data entities when the code runs in any new cluster (another developer's local machine, preproduction, production).
All this code is isolated inside our DataDriver classes for portability, in case we swap Cassandra for another DB in some client or project.
This prevents a lot of hassle both for admins and for developers.
This approach is even valid for initial data loading, which we use in tests.
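For illustration, the idempotent DDL such a driver issues on startup might look like this (keyspace and table names are hypothetical):
CREATE KEYSPACE IF NOT EXISTS my_keyspace
WITH replication = {'class': 'SimpleStrategy', 'replication_factor': 1};

CREATE TABLE IF NOT EXISTS my_keyspace.users (
id uuid PRIMARY KEY,
name text
);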