dpkg not working the same way when invoked from Dockerfile or within the container - docker

I have a Dockerfile describing a container used to build some libs.
Basically, it looks like this:
FROM debian:stretch-slim
COPY somedebianrepo/*.deb \
/basedir/
RUN dpkg -i /basedir/*.deb
When I build the image, I get :
dpkg: dependency problems prevent configuration of [one of my lib] ... depends on [some other lib] however [some other lib] is not installed
Which may sound obvious... but: when I comment out the RUN line:
# RUN dpkg -i /basedir/*.deb
then build the image, start the container, and connect to it, I expected the dpkg command to behave the same way. Yet when I run the command directly inside the container, it works fine with no such error:
root@host$ docker exec -it -u root <mycontainer> bash
root@mycontainer$ dpkg -i /basedir/*.deb
root@mycontainer$ (no error)
I also tried with apt-get install and ran into the same difference in behavior.
Since I am quite a newbie with Docker, the answer may be obvious... but it is not to me! I expected commands executed through RUN to behave the same way as if they were executed from within the container.
So if anyone could point out where I am wrong, they are welcome!
EDIT 1: I tried running apt-get update before the dpkg command, though I did not expect it to help: no success.
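(As an aside, a common way to sidestep this kind of dpkg dependency error altogether, regardless of the RUN-vs-exec difference, is to let apt install the local .deb files and resolve their dependencies. This is a sketch, not the original Dockerfile; it assumes the packages' dependencies are available from the configured repositories:)

```dockerfile
# Sketch of an alternative: "apt-get install /path/to/pkg.deb" installs a
# local package AND resolves its dependencies, unlike plain "dpkg -i"
FROM debian:stretch-slim
COPY somedebianrepo/*.deb /basedir/
RUN apt-get update \
    && apt-get install -y --no-install-recommends /basedir/*.deb \
    && rm -rf /var/lib/apt/lists/*
```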

Related

Docker Tutorial Unclear: "Persisting our DB" and "Using Bind Mounts"

I have only started using Docker and was trying to follow the documentation on the official website... Everything was going smoothly until I got to this point.
In step 3:
Upon running the command, I get this error -> ls: cannot access 'C:/Program Files/Git/': No such file or directory.
I thought it was not that big of a deal so I went ahead and skipped to the following parts of the tutorial.
Then I came across the same error in this part:
I tried to locate the directory on my PC manually and found a remote git repository, but the commands still don't work for me. These were the commands that I have tried and their corresponding errors:
docker run -it ubuntu ls / - No such file or directory
cd /path/to/getting-started/app - No such file or directory
docker run -dp 3000:3000 -w /app -v "$(pwd):/app" node:12-alpine sh -c "yarn install && yarn run dev" - docker: Error response from daemon: the working directory 'C:/Program Files/Git/app' is invalid, it needs to be an absolute path.
See 'docker run --help'. (this error was after changing to the directory I manually searched on my PC)
I'm unsure if I have to set a PATH. I don't think I have missed any of the steps provided in the earlier tutorials.
Thanks, guys! I was indeed using git bash on VSCode. I tried running it on my Windows terminal via ubuntu and now, everything's working fine. Thanks, Max, and Spears. Exactly what I was having issues with.
These comments helped me resolve the issue:
Maybe this is your problem github.com/docker-archive/toolbox/issues/673 –
Max
Sounds like you are using the Git Bash which comes packaged with Git SCM for Windows. I strongly recommend avoiding this and switching to WSL2. Git Bash is NOT the kind of shell you are looking for when using Docker, due to missing libs and nasty side effects which are mostly very hard to debug. – Spears

gRPC service definitions: containerize .proto compilation?

Let's say we have a services.proto with our gRPC service definitions, for example:
service Foo {
rpc Bar (BarRequest) returns (BarReply) {}
}
message BarRequest {
string test = 1;
}
message BarReply {
string test = 1;
}
We could compile this locally to Go by running something like
$ protoc --go_out=. --go_opt=paths=source_relative \
--go-grpc_out=. --go-grpc_opt=paths=source_relative \
services.proto
My concern though is that running this last step might produce inconsistent output depending on the installed version of the protobuf compiler and the Go plugins for gRPC. For example, two developers working on the same project might have slightly different versions installed locally.
It would seem reasonable to me to address this by containerizing the protoc step. For example, with a Dockerfile like this...
FROM golang:1.18
WORKDIR /src
RUN apt-get update && apt-get install -y protobuf-compiler
RUN go install google.golang.org/protobuf/cmd/protoc-gen-go@v1.26
RUN go install google.golang.org/grpc/cmd/protoc-gen-go-grpc@v1.1
CMD protoc --go_out=. --go_opt=paths=source_relative --go-grpc_out=. --go-grpc_opt=paths=source_relative services.proto
... we can run the protoc step inside a container:
docker run --rm -v $(pwd):/src $(docker build -q .)
After wrapping the previous command in a shell script, developers can run it on their local machine, giving them deterministic, reproducible output. It can also run in a CI/CD pipeline.
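Such a wrapper might look like the following (a sketch; the script name is made up, and it assumes the Dockerfile above sits next to services.proto):

```shell
#!/usr/bin/env sh
# generate.sh (hypothetical name): run the containerized protoc step
set -eu
# Build the image quietly, capturing only the image ID
IMAGE_ID=$(docker build -q .)
# Run protoc inside the container with the current directory mounted
# where the Dockerfile's CMD expects services.proto to live
docker run --rm -v "$(pwd):/src" "$IMAGE_ID"
```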
My question is, is this a sound approach and/or is there an easier way to achieve the same outcome?
NB, I was surprised to find that the official grpc/go image does not come with protoc preinstalled. Am I off the beaten path here?
My question is, is this a sound approach and/or is there an easier way to achieve the same outcome?
It is definitely a good approach. I do the same. Not only to have consistent output across the team, but also to ensure we can produce the same output on different OSs.
There is an easier way to do that, though.
Look at this repo: https://github.com/jaegertracing/docker-protobuf
The image is on Docker Hub, but you can create your own image if you prefer.
I use this command to generate Go:
docker run --rm -u $(id -u) \
-v${PWD}/protos/:/source \
-v${PWD}/v1:/output \
-w/source jaegertracing/protobuf:0.3.1 \
--proto_path=/source \
--go_out=paths=source_relative,plugins=grpc:/output \
-I/usr/include/google/protobuf \
/source/*

Docker not running certain RUN directives?

I am trying to run a docker container to automatically set up a sphinx documentation site, but for some reason I get the following error when I try to build
Step 9/11 : RUN make html
---> Running in abd76075d0a0
make: *** No rule to make target 'html'. Stop.
When I run the container and console in, I see that sphinx-quickstart does not seem to have been run since there are no files present at all in /sphinx. Not sure what I have done wrong. Dockerfile is below.
1 # Run this with
2 # docker build .
3 # docker run -dit -p 8000:8000 <image_id>
4 FROM ubuntu:latest
5
6 WORKDIR /sphinx
7 VOLUME /sphinx
8
9 RUN apt-get update -y
10 RUN apt-get install python3 python3-pip vim git -y
11
12 RUN pip3 install -U pip
13 RUN pip3 install sphinx
14
15 RUN sphinx-quickstart . --quiet --project devops --author 'Timothy Pulliam' -v '0.1' --language 'en' --makefile
16 RUN make html
17
18 EXPOSE 8000/tcp
19
20
21 CMD ["python3", "-m", "http.server"]
EDIT:
Using LinPy's suggestion I was able to get it to work. It is still strange that it would not work the other way.
The Dockerfile VOLUME directive mostly only has confusing side effects. Unless you’re 100% clear on what it does and why you want it, you should just delete it.
In particular, one of those confusing side effects is that RUN commands that write into the volume directory just get lost. So when on line 7 you say VOLUME /sphinx, the RUN sphinx-quickstart on line 15 tries to write its output into the current directory, which is a declared volume directory, so the output content isn’t persisted into the image.
(Storing your code in a volume isn’t generally appropriate; build it into the image so it’s reusable later. You can use docker run -v to bind-mount content over any container-side directory regardless of whether or not it’s declared as a VOLUME.)
So you need to run those two steps in a single RUN instruction:
RUN sphinx-quickstart . --quiet --project devops --author 'Timothy Pulliam' -v '0.1' --language 'en' --makefile && make html
I think you can see it in the build logs: the intermediate container is removed, and the files written into the volume directory are discarded with it, so the html rule is not there anymore.
You've already resolved the issue with LinPy's helpful comment, but just to add more, doing a quick google search with your error message comes up with this StackOverflow post...
gcc makefile error: "No rule to make target ..."
Perhaps you were accidentally invoking a different command (in this case a GCC command) rather than the Makefile generated by Sphinx.
Hopefully this might shed a bit more light on WHY it was happening. I assume the Ubuntu parent image you're using has GCC pre-installed.
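Putting the advice together, a minimal corrected Dockerfile (a sketch: VOLUME dropped and the quickstart/build steps combined, per the answers above) might look like:

```dockerfile
# Sketch of a corrected Dockerfile with the VOLUME directive removed
FROM ubuntu:latest
WORKDIR /sphinx
RUN apt-get update -y && apt-get install -y python3 python3-pip
RUN pip3 install -U pip && pip3 install sphinx
# Generate the project and build the HTML in one layer
RUN sphinx-quickstart . --quiet --project devops --author 'Timothy Pulliam' \
      -v '0.1' --language 'en' --makefile \
    && make html
EXPOSE 8000/tcp
CMD ["python3", "-m", "http.server"]
```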

Is there any way to run "pkexec" from a docker container?

I am trying to set up a Docker image (my Dockerfile is available here, sorry for the french README: https://framagit.org/Gwendal/firefox-icedtea-docker) with an old version of Firefox and an old version of Java to run an old Java applet to start a VPN. My image does work and successfully allows me to start the Java applet in Firefox.
Unfortunately, the said applet then tries to run the following command in the container (I've simply removed the --config part from the command as it does not matter here):
INFO: launching '/usr/bin/pkexec sh -c /usr/sbin/openvpn --config ...'
Then the applet exits silently with an error. While investigating, I've tried running a command with pkexec with the same Docker image, and it gives me this result:
$ sudo docker-compose run firefox pkexec /firefox/firefox-sdk/bin/firefox-bin -new-instance
**
ERROR:pkexec.c:719:main: assertion failed: (polkit_unix_process_get_start_time (POLKIT_UNIX_PROCESS (subject)) > 0)
But I don't know polkit at all and cannot understand this error.
EDIT: A more minimal way to reproduce the problem is with this Dockerfile:
FROM ubuntu:16.04
RUN apt-get update \
&& apt-get install -y policykit-1
And then run:
$ sudo docker build -t pkexec-test .
$ sudo docker run pkexec-test pkexec echo Hello
Which leads here again to:
ERROR:pkexec.c:719:main: assertion failed: (polkit_unix_process_get_start_time (POLKIT_UNIX_PROCESS (subject)) > 0)
Should I conclude that pkexec cannot work in a docker container? Or is there any way to make this command work?
Sidenote: I have no control whatsoever on the Java applet that I try to run, it is a horrible and very dated proprietary black box that I am supposed to use at work, for which I have no access to the source code, and that I must use as is.
I have solved my own problem by replacing pkexec by sudo in the docker image, and by allowing passwordless sudo.
Given an ubuntu docker image where a user called developer was created and configured with a USER statement, add these lines:
# Install sudo and make 'developer' a passwordless sudoer
RUN apt-get install -y sudo
ADD ./developersudo /etc/sudoers.d/developersudo
# Replacing pkexec by sudo
RUN rm /usr/bin/pkexec
RUN ln -s /usr/bin/sudo /usr/bin/pkexec
with the file developersudo containing:
developer ALL=(ALL) NOPASSWD:ALL
This replaces any call to pkexec made by a process running in the container with a call to sudo without any password prompt, which works nicely.

initctl too old upstart check

I am trying to do a syntax check on an upstart script using init-checkconf. However when I run it, it returns ERROR: version of /sbin/initctl too old.
I have no idea what to do, I have tried reinstalling upstart but nothing changes. This is being run from within a docker container (ubuntu:14.04) which might have something to do with it.
I just ran into the same issue.
Looking in the container:
root@puppet-master:/# cat /sbin/initctl
#!/bin/sh
exit 0
I haven't tested it completely yet, but I added the following to my Dockerfile:
# Fix upstart
RUN rm -rf /sbin/initctl && ln -s /sbin/initctl.distrib /sbin/initctl
I thought this link explained it pretty good:
When your Docker container starts, only the CMD command is run. The only processes that will be running inside the container are the CMD command and all the processes that it spawns. That's why all kinds of important system services are not run automatically – you have to run them yourself.
Digging around some more, I found an official Ubuntu image containing a working version of upstart:
https://registry.hub.docker.com/_/ubuntu-upstart/
