With Docker, this can be achieved by mounting docker.sock inside the container. But since there is no daemon in Podman, what is the replacement for docker.sock?
I typically want to check the Podman images present on the host and start a new container.
I'm using Podman with --privileged=true and as root.
There is a new API (status: experimental) that was announced in a blog post in January 2020.
[root@fedora31 ~]# podman --version
podman version 1.8.0
[root@fedora31 ~]# podman system service --timeout 500000 unix://root/foobar.sock
This function is EXPERIMENTAL.
As the API is still experimental this might change, but right now you could make a query like this:
[root@fedora31 ~]# curl -s --unix-socket /root/foobar.sock http://d/v1.24/images/json | python3 -m json.tool
[
    {
        "Containers": 0,
        "Created": 1572319417,
        "Id": "f0858ad3febdf45bb2e5501cb459affffacef081f79eaa436085c3b6d9bd46ca",
        "Labels": {
            "maintainer": "Clement Verna <cverna@fedoraproject.org>"
        },
        "ParentId": "",
        "RepoDigests": [
            "sha256:8fa60b88e2a7eac8460b9c0104b877f1aa0cea7fbc03c701b7e545dacccfb433"
        ],
        "RepoTags": [
            "docker.io/library/fedora:latest"
        ],
        "SharedSize": 0,
        "Size": 201095865,
        "VirtualSize": 201095865,
        "CreatedTime": "0001-01-01T00:00:00Z"
    },
    null
]
[root@fedora31 ~]#
The command python3 -m json.tool was added to pretty-print the JSON output.
I think the UNIX socket can be accessed from inside a container by bind-mounting it into the container (the technique mentioned in the question).
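As a rough sketch (untested), running a container with the socket bind-mounted and querying it from inside could look like this; the mount target /run/podman.sock is an arbitrary choice, and this assumes curl is available in the image:
podman run --rm --privileged -v /root/foobar.sock:/run/podman.sock docker.io/library/fedora:latest \
  curl -s --unix-socket /run/podman.sock http://d/v1.24/images/json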
According to the man page, the command podman system service also accepts the flag --varlink.
Using Varlink instead of the new API might be a better solution right now as it is more mature but it will be deprecated in the future.
I started a Docker image, anapsix/webdis:
sudo docker run -d -p 7379:7379 -e LOCAL_REDIS=true anapsix/webdis
and changed /etc/webdis.json to allow websockets, then committed it with
sudo docker commit <container-id>
However, when I use the new image to start a container, it does not keep the changes. Is there something I'm doing wrong?
Thanks!
In this case your problem is that the anapsix/webdis image has an entrypoint script (/entrypoint.sh) that generates /etc/webdis.json when the container starts.
Looking at the script, you can set the value of websockets by setting the WEBSOCKETS variable when you start the container:
docker run -d -p 7379:7379 \
  -e LOCAL_REDIS=true \
  -e WEBSOCKETS=true \
  anapsix/webdis
When we run it like this, the generated /etc/webdis.json looks like:
{
  "redis_host": "127.0.0.1",
  "redis_port": 6379,
  "redis_auth": null,
  "http_host": "0.0.0.0",
  "http_port": 7379,
  "threads": 5,
  "pool_size": 10,
  "daemonize": false,
  "websockets": true,
  "database": 0,
  "acl": [
    {
      "disabled": ["DEBUG", "FLUSHDB", "FLUSHALL"]
    },
    {
      "http_basic_auth": "user:password",
      "enabled": ["DEBUG"]
    }
  ],
  "verbosity": 8,
  "logfile": "/dev/stdout"
}
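If you want to double-check the generated file in a running container, something like this should work (using whatever container ID docker ps reports for your instance):
docker exec <container-id> cat /etc/webdis.json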
More broadly, using docker commit is almost always the wrong thing to do; you should generate custom images using a Dockerfile (this gives you a much more manageable, reproducible process for creating container images).
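A minimal sketch of that approach for this image (untested; the Dockerfile.webdis file name and my-webdis tag are arbitrary choices). Since the entrypoint script reads WEBSOCKETS at startup, baking the variable into a derived image is enough:
cat > Dockerfile.webdis <<'EOF'
FROM anapsix/webdis
ENV WEBSOCKETS=true
EOF
docker build -t my-webdis -f Dockerfile.webdis .
docker run -d -p 7379:7379 -e LOCAL_REDIS=true my-webdis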
All my team members use the same server as the Docker remote context. I have set up a project using a VSCode devcontainer with a devcontainer.json like this:
{
  "name": "MyProject - DevContainer",
  "dockerFile": "../Dockerfile",
  "context": "..",
  "workspaceMount": "source=vsc-myprojekt-${localEnv:USERNAME},target=/workspace,type=volume",
  "workspaceFolder": "/workspace",
  "extensions": [
    "ms-python.python",
    "ms-python.vscode-pylance"
  ],
  "postCreateCommand": "/opt/entrypoint.sh",
  "mounts": [
    "source=/media/Pool/,target=/Pool,type=bind",
    "source=cache,target=/cache,type=volume"
  ]
}
This worked fine for me, but now that my colleagues have started their own devcontainers, we have the problem that a newly started devcontainer kills other already running devcontainers.
We found that the local folder of the project seems to be the way already running devcontainers are identified:
[3216 ms] Start: Run: docker ps -q -a --filter label=devcontainer.local_folder=d:\develop\myproject
[3839 ms] Start: Run: docker inspect --type container 8ca7d3a44662
[4469 ms] Start: Removing Existing Container
As we all use the same path, identification based on the local folder is problematic. Is there a way to use other labels?
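For reference, a quick way to list the containers carrying that label on the shared context (my-remote-context is a placeholder for whatever context name you use) is:
docker --context my-remote-context ps -a \
  --filter label=devcontainer.local_folder \
  --format '{{.ID}} {{.Label "devcontainer.local_folder"}}'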
It seems to be a bug; the issue I opened was accepted as a bug report.
I'm trying to migrate one of my dev boxes from CentOS 8 to RHEL 9. I rely heavily on Docker and noticed that when I tried to run a docker command on the RHEL box it installed podman-docker. This seemed to go smoothly; I was able to pull an image, then launch, build, and commit a new version without problems using the docker commands I already knew.
The problem I have encountered, though, is that I can't seem to interact with it via the docker socket (which appears to be a link to the podman one).
If I run the docker command:
[@rhel9 ~]$ docker images
Emulate Docker CLI using podman. Create /etc/containers/nodocker to quiet msg.
REPOSITORY TAG IMAGE ID CREATED SIZE
docker.io/redhat/ubi9 dev_image de371523ca26 6 hours ago 805 MB
docker.io/redhat/ubi9 latest 9ad46cd10362 6 days ago 230 MB
it has my images listed as expected. I should be able to also run:
[@rhel9 ~]$ curl --unix-socket /var/run/docker.sock -H 'Content-Type: application/json' http://localhost/images/json | jq .
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 3 100 3 0 0 55 0 --:--:-- --:--:-- --:--:-- 55
[]
but as you can see, nothing is coming back. The socket is up and running as I can ping it without issue:
[@rhel9 ~]$ curl -H "Content-Type: application/json" --unix-socket /var/run/docker.sock http://localhost/_ping
OK
I also tried the curl commands using the podman socket directly but it had the same results. Is there something I am missing or a trick to getting it to work so that I can interact with docker/podman via the socket?
Podman isn't implemented using a client/server model like Docker. By default there is no socket, because there's no equivalent to the docker daemon. Podman does provide a compatibility interface that you can use by enabling the podman.socket unit:
$ systemctl enable --now podman.socket
This exposes a Unix socket at /run/podman/podman.sock that responds to Docker API commands. But!
The socket connects you to podman running as root, whereas you've been running podman as a non-root user: so you won't see the same list of images, containers, networks, etc.
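If you want a socket for your rootless user instead, the user-level unit should do it (a sketch, run as your regular user):
$ systemctl --user enable --now podman.socket
This exposes the socket at $XDG_RUNTIME_DIR/podman/podman.sock (typically /run/user/<uid>/podman/podman.sock).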
Some random notes:
Podman by default runs "rootless": you can run it as an unprivileged user, and all of its storage, metadata, etc, is stored in your home directory.
You can also run Podman as root, in which case the behavior is more like Docker.
If you enable the podman socket, you can replace podman-docker with the actual Docker client (and use things like docker-compose), although I have run into occasional issues with this. Mostly I just use podman, and run the Docker engine in a VM. You will need to configure Docker to look at the podman socket in /run/podman/podman.sock.
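For example, pointing an actual Docker client at the Podman socket is typically just a matter of setting DOCKER_HOST (a sketch, assuming the root-level socket enabled above):
$ export DOCKER_HOST=unix:///run/podman/podman.sock
$ docker info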
I have podman.socket enabled on my system, so this works:
$ curl --unix-socket /run/podman/podman.sock -H 'content-type: application/json' http://localhost/_ping
OK
Or:
$ curl --unix-socket /run/podman/podman.sock -H 'content-type: application/json' -sf http://localhost/containers/json | jq
[
  {
    "Id": "f0d9a880c45bb5857b24f46bcb6eeeca162eb68d574c8ba16c4a03703c2d60f4",
    "Names": [
      "/sleeper"
    ],
    "Image": "docker.io/library/alpine:latest",
    "ImageID": "14119a10abf4669e8cdbdff324a9f9605d99697215a0d21c360fe8dfa8471bab",
    "Command": "sleep inf",
    "Created": 1655418914,
    "Ports": [],
    "Labels": {},
    "State": "running",
    "Status": "Up 3 days",
    "NetworkSettings": {
      "Networks": {
        "podman": {
          "IPAMConfig": null,
          "Links": null,
          "Aliases": null,
          "NetworkID": "podman",
          "EndpointID": "",
          "Gateway": "10.88.0.1",
          "IPAddress": "10.88.0.2",
          "IPPrefixLen": 16,
          "IPv6Gateway": "",
          "GlobalIPv6Address": "",
          "GlobalIPv6PrefixLen": 0,
          "MacAddress": "06:55:82:1b:1a:41",
          "DriverOpts": null
        }
      }
    },
    "Mounts": null,
    "Name": "",
    "Config": null,
    "NetworkingConfig": null,
    "Platform": null,
    "AdjustCPUShares": false
  }
]
I managed to solve my problem although I'm not entirely sure how the scenario came about. I was looking through the output of docker info and podman info and noticed that they both had the remote socket set as:
remoteSocket:
  exists: true
  path: /run/user/1000/podman/podman.sock
rather than /run/podman/podman.sock, which is where I thought it was (this socket does actually exist on my machine). Looking at the systemd unit for podman.socket I can see that the socket is specified as %t/podman/podman.sock, and checking the man page for podman-system-service it specifies the rootless socket as unix://$XDG_RUNTIME_DIR/podman/podman.sock (where my $XDG_RUNTIME_DIR is /run/user/1000).
To get it all working with my software I just needed to make sure the DOCKER_HOST env variable was correctly set e.g. export DOCKER_HOST=unix:///run/user/1000/podman/podman.sock
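As a quick sanity check against the rootless socket (mirroring the _ping request from earlier), this should answer with OK:
curl --unix-socket /run/user/1000/podman/podman.sock http://localhost/_ping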
Hi, I have been successful so far with simple jobs in Marathon, but it gets stuck when I try to deploy a Docker job in Mesos through the Marathon framework.
I am using the JSON file below to deploy a Docker job:
{
  "id": "pga-docker",
  "cpus": 0.2,
  "mem": 1024.0,
  "instances": 1,
  "container": {
    "type": "DOCKER",
    "docker": {
      "image": "pga",
      "network": "BRIDGE",
      "portMappings": [
        { "containerPort": 80, "hostPort": 6565, "servicePort": 0, "protocol": "tcp" }
      ]
    }
  }
}
My pga Docker image has no problem when run as a container, but through Marathon it's just not working. It stays in the deploying state forever.
I am using the below command line:
curl -X POST http://10.141.141.10:8080/v2/apps -d @basic-3.json -H "Content-type: application/json"
But when I run the same image from the Marathon UI, it works. To run it from Marathon I used "docker run --publish 6060:80 --name test --rm pga" in the cmd field of the UI's new job page.
Does anyone have an idea why this hangs with the command-line approach?
This is what I found during some trial and error with the JSON file.
When we run a Docker image on a local system, any entrypoint or cmd defined in the image is executed when the container starts. But this is not the same for Mesos/Marathon: my observation is that if I explicitly mention cmd in the deployment JSON, it works fine.
"cmd":"sh pga-setup.sh"
I would love to know if anyone has faced a similar issue and solved it another way.
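For reference, here is a sketch of the full app definition with cmd added (same values as the JSON above; pga-setup.sh is the setup script from my image):
{
  "id": "pga-docker",
  "cmd": "sh pga-setup.sh",
  "cpus": 0.2,
  "mem": 1024.0,
  "instances": 1,
  "container": {
    "type": "DOCKER",
    "docker": {
      "image": "pga",
      "network": "BRIDGE",
      "portMappings": [
        { "containerPort": 80, "hostPort": 6565, "servicePort": 0, "protocol": "tcp" }
      ]
    }
  }
}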
I am trying to start a docker container using the following POST request:
Content-Type: application/json
{
  "Hostname": "",
  "Domainname": "",
  "User": "",
  "Memory": 0,
  "MemorySwap": 0,
  "CpuShares": 512,
  "Cpuset": "0,1",
  "AttachStdin": true,
  "AttachStdout": true,
  "AttachStderr": true,
  "PortSpecs": 6002,
  "Tty": false,
  "OpenStdin": false,
  "StdinOnce": false,
  "Env": null,
  "Cmd": [
    "python",
    "app.py"
  ],
  "Image": "jobinar/smile_webapp",
  "Volumes": {
    "/tmp": {}
  },
  "WorkingDir": "",
  "NetworkDisabled": false,
  "ExposedPorts": {
    "5000/tcp": {}
  }
}
However, the container immediately stops after starting. How do I configure my request to prevent it from exiting?
I would appreciate a POST request which does this instead of the command-line way.
EDIT: I get a 201 CREATED response with the id of the created container, and I can see that the container has been created by running the docker ps -a command.
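For reference, the two API calls involved look roughly like this (the socket path, API version, and the body.json file holding the JSON above are assumptions on my part):
# create the container from the JSON body
curl --unix-socket /var/run/docker.sock -H 'Content-Type: application/json' \
  -d @body.json http://localhost/v1.24/containers/create
# then start it using the Id returned by the create call
curl -X POST --unix-socket /var/run/docker.sock http://localhost/v1.24/containers/<id>/start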
If you have upgraded your Docker version, you have to delete /var/lib/docker/network (on Ubuntu).