Interact with podman docker via socket in Redhat 9 - docker

I'm trying to migrate one of my dev boxes from CentOS 8 to RHEL 9. I rely heavily on Docker, and when I tried to run a docker command on the RHEL box it installed podman-docker. This seemed to go smoothly; I was able to pull an image, launch it, build, and commit a new version without problems using the docker commands I already knew.
The problem I have run into, though, is that I can't seem to interact with it via the Docker socket (which appears to be a link to the Podman one).
If I run the docker command:
[#rhel9 ~]$ docker images
Emulate Docker CLI using podman. Create /etc/containers/nodocker to quiet msg.
REPOSITORY TAG IMAGE ID CREATED SIZE
docker.io/redhat/ubi9 dev_image de371523ca26 6 hours ago 805 MB
docker.io/redhat/ubi9 latest 9ad46cd10362 6 days ago 230 MB
It lists my images as expected. I should also be able to run:
[#rhel9 ~]$ curl --unix-socket /var/run/docker.sock -H 'Content-Type: application/json' http://localhost/images/json | jq .
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 3 100 3 0 0 55 0 --:--:-- --:--:-- --:--:-- 55
[]
but as you can see, nothing comes back. The socket is up and running, as I can ping it without issue:
[#rhel9 ~]$ curl -H "Content-Type: application/json" --unix-socket /var/run/docker.sock http://localhost/_ping
OK
I also tried the curl commands against the Podman socket directly, but got the same results. Is there something I am missing, or a trick to getting this to work so that I can interact with docker/podman via the socket?

Podman isn't implemented using a client/server model like Docker. By default there is no socket, because there's no equivalent to the docker daemon. Podman does provide a compatibility interface that you can use by enabling the podman.socket unit:
$ systemctl enable --now podman.socket
This exposes a Unix socket at /run/podman/podman.sock that responds to Docker API commands. But!
The socket connects you to podman running as root, whereas you've been running podman as a non-root user: so you won't see the same list of images, containers, networks, etc.
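If you want a socket that talks to your rootless Podman instead (and therefore sees the images and containers you created as your own user), a minimal sketch is to enable the user unit rather than the system one (this assumes systemd user sessions are available):
$ systemctl --user enable --now podman.socket
$ curl -s --unix-socket $XDG_RUNTIME_DIR/podman/podman.sock http://localhost/_ping   # should print OK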
Some random notes:
Podman by default runs "rootless": you can run it as an unprivileged user, and all of its storage, metadata, etc, is stored in your home directory.
You can also run Podman as root, in which case the behavior is more like Docker.
If you enable the podman socket, you can replace podman-docker with the actual Docker client (and use things like docker-compose), although I have run into occasional issues with this (mostly I just use podman, and run Docker Engine in a VM). You will need to configure Docker to look at the podman socket at /run/podman/podman.sock.
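As a rough sketch of that last step (assuming you have the real Docker CLI installed and want the rootful socket), pointing the client at the Podman socket is just an environment variable; note the socket is root-owned, so run the client as root or use the rootless socket path instead:
$ sudo systemctl enable --now podman.socket
$ export DOCKER_HOST=unix:///run/podman/podman.sock
$ sudo -E docker version   # -E keeps DOCKER_HOST; the server side should identify itself as Podman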
I have podman.socket enabled on my system, so this works:
$ curl --unix-socket /run/podman/podman.sock -H 'content-type: application/json' http://localhost/_ping
OK
Or:
$ curl --unix-socket /run/podman/podman.sock -H 'content-type: application/json' -sf http://localhost/containers/json | jq
[
    {
        "Id": "f0d9a880c45bb5857b24f46bcb6eeeca162eb68d574c8ba16c4a03703c2d60f4",
        "Names": [
            "/sleeper"
        ],
        "Image": "docker.io/library/alpine:latest",
        "ImageID": "14119a10abf4669e8cdbdff324a9f9605d99697215a0d21c360fe8dfa8471bab",
        "Command": "sleep inf",
        "Created": 1655418914,
        "Ports": [],
        "Labels": {},
        "State": "running",
        "Status": "Up 3 days",
        "NetworkSettings": {
            "Networks": {
                "podman": {
                    "IPAMConfig": null,
                    "Links": null,
                    "Aliases": null,
                    "NetworkID": "podman",
                    "EndpointID": "",
                    "Gateway": "10.88.0.1",
                    "IPAddress": "10.88.0.2",
                    "IPPrefixLen": 16,
                    "IPv6Gateway": "",
                    "GlobalIPv6Address": "",
                    "GlobalIPv6PrefixLen": 0,
                    "MacAddress": "06:55:82:1b:1a:41",
                    "DriverOpts": null
                }
            }
        },
        "Mounts": null,
        "Name": "",
        "Config": null,
        "NetworkingConfig": null,
        "Platform": null,
        "AdjustCPUShares": false
    }
]

I managed to solve my problem although I'm not entirely sure how the scenario came about. I was looking through the output of docker info and podman info and noticed that they both had the remote socket set as:
remoteSocket:
exists: true
path: /run/user/1000/podman/podman.sock
rather than /run/podman/podman.sock, which is where I thought it was (that socket does also exist on my machine). Looking at the systemd unit for podman.socket I can see that the socket is specified as %t/podman/podman.sock, and the man page for podman-system-service gives the rootless socket as unix://$XDG_RUNTIME_DIR/podman/podman.sock (where my $XDG_RUNTIME_DIR is /run/user/1000).
To get it all working with my software I just needed to make sure the DOCKER_HOST environment variable was set correctly, e.g. export DOCKER_HOST=unix:///run/user/1000/podman/podman.sock
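With that in place, the query from the question also works against the rootless socket (assuming jq is installed), for example:
$ curl -s --unix-socket /run/user/1000/podman/podman.sock -H 'Content-Type: application/json' http://localhost/images/json | jq '.[].RepoTags'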

Related

How to run podman commands on host from within container

In the case of Docker, this can be achieved by mounting docker.sock inside the container.
But since there is no daemon in Podman, what's the replacement for docker.sock?
I typically want to check the Podman images present on the host and start a new container.
I'm using Podman with --privileged=true and root.
There is a new API (status: experimental) that was announced in a blog post in January 2020.
[root@fedora31 ~]# podman --version
podman version 1.8.0
[root@fedora31 ~]# podman system service --timeout 500000 unix://root/foobar.sock
This function is EXPERIMENTAL.
As the API is still experimental this might change but right now you could make a query like this:
[root@fedora31 ~]# curl -s --unix-socket /root/foobar.sock http://d/v1.24/images/json | python3 -m json.tool
[
    {
        "Containers": 0,
        "Created": 1572319417,
        "Id": "f0858ad3febdf45bb2e5501cb459affffacef081f79eaa436085c3b6d9bd46ca",
        "Labels": {
            "maintainer": "Clement Verna <cverna@fedoraproject.org>"
        },
        "ParentId": "",
        "RepoDigests": [
            "sha256:8fa60b88e2a7eac8460b9c0104b877f1aa0cea7fbc03c701b7e545dacccfb433"
        ],
        "RepoTags": [
            "docker.io/library/fedora:latest"
        ],
        "SharedSize": 0,
        "Size": 201095865,
        "VirtualSize": 201095865,
        "CreatedTime": "0001-01-01T00:00:00Z"
    },
    null
]
[root@fedora31 ~]#
The command python3 -m json.tool was added to pretty-print the JSON output.
I think the UNIX socket can be accessed from inside a container by using the bind-mounting technique (that was mentioned in the question).
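On newer Podman versions, where the socket-activated podman.socket unit is available, a minimal sketch of that bind-mount approach (matching the question's root + --privileged setup; quay.io/podman/stable is just one convenient client image, any image that ships a podman binary would do) looks like this:
# on the host: expose the API socket
sudo systemctl enable --now podman.socket
# in the container: talk to the host's Podman through the mounted socket
sudo podman run --rm --privileged \
  -v /run/podman/podman.sock:/run/podman/podman.sock \
  quay.io/podman/stable \
  podman --remote --url unix:///run/podman/podman.sock images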
According to the man page, the command podman system service also accepts the flag --varlink.
Using Varlink instead of the new API might be a better solution right now as it is more mature but it will be deprecated in the future.

Finding docker container ip/port programmatically

I need to find out (programmatically) the container IP & ports of a particular app that I have deployed in Docker. The app could be running on multiple nodes and scaled up.
Is there a way, using the Docker API, to find out the container IPs and ports?
The Docker API provides HTTP endpoints for all operations so it can be easily managed in most languages.
A simple example is using curl and jq on the command line.
You can list all containers' ports:
$ curl -s --unix-socket /var/run/docker.sock \
http://localhost/v1.26/containers/json \
| jq '.[] | .Id, .Ports'
"b8249afa78bcc8027a38048384c7656a305a3c8d5a517d52df5a299223d8064d"
[
{
"IP": "0.0.0.0",
"PrivatePort": 3142,
"PublicPort": 3142,
"Type": "tcp"
}
]
"49125f8274242a5ae244ffbca121f354c620355186875617d43876bcde619732"
[
{
"IP": "0.0.0.0",
"PrivatePort": 4873,
"PublicPort": 4873,
"Type": "tcp"
}
]
Retrieve the list of the port map definitions for a specific container
$ curl -s --unix-socket /var/run/docker.sock \
http://localhost/v1.26/containers/49125f8274242a5ae244ffbca121f354c620355186875617d43876bcde619732/json \
| jq '.NetworkSettings | .Ports | keys'
[
"4873/tcp"
]
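And if you want a container's IP address along with its port map in one call, a small sketch against the same inspect endpoint (container ID reused from the example above; the top-level .NetworkSettings.IPAddress field is only populated for containers on the default bridge network):
$ curl -s --unix-socket /var/run/docker.sock \
http://localhost/v1.26/containers/49125f8274242a5ae244ffbca121f354c620355186875617d43876bcde619732/json \
| jq '{ip: .NetworkSettings.IPAddress, ports: .NetworkSettings.Ports}'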
Use the Docker SDK. Just choose the language... ;)
https://docs.docker.com/engine/api/sdks/

ansible_default_ipv4.address undefined in docker ubuntu

I am trying to run a simple ansible operation which should update a line in /etc/hosts:
- hosts: localhost
  become: true
  vars:
    master_host: "ansible-master"
  tasks:
    - hostname: name="{{master_host}}"
    - name: Add master host to /etc/hosts
      lineinfile: dest=/etc/hosts line="{{ ansible_default_ipv4.address}} {{master_host}}"
                  regexp=".*{{master_host}}\s*$"
When I run this in virtualbox with ubuntu 16, it works fine.
When I run it in my ubuntu 16 Docker container, I get:
fatal: [localhost]: FAILED! => {"failed": true, "msg": "the field
'args' has an invalid value, which appears to include a variable that
is undefined. The error was: 'ansible_default_ipv4' is
undefined\n\nThe error appears to have been in
'/home/user/ansible/manage-ansible-master.yml': line 11, column 5, but
may\nbe elsewhere in the file depending on the exact syntax
problem.\n\nThe offending line appears to be:\n\n - hostname:
name=\"{{master_host}}\"\n - name: Add master host to /etc/hosts\n
^ here\n"}
Where is Ansible trying to pull the local IP from, and why can't it do so in Docker?
BTW, I have installed net-tools in my Docker container and it has an eth0 IP.
On virtualbox and in docker I have a line in /etc/hosts
ansible-master 127.0.1.1
UPDATE:
I run
ansible all --connection=local -m setup | less
on virtualbox ubuntu and Docker ubuntu.
On Virtualbox I get a lot of network-related info that I don't get on Docker:
"ansible_facts": {
"ansible_all_ipv4_addresses": [
<ip>,
<another ip>
],
"ansible_all_ipv6_addresses": [
<ipv6>,
<another ipv6>
],
Also in virtualbox I get
"ansible_default_ipv4": {
"address": <value>,
"alias": <value>,
"broadcast": <value>,
"gateway": <value>,
"interface": <value>,
"macaddress": <value>,
"mtu": <value>,
"netmask": <value>,
"network": <value>,
"type": <value>
},
None of this appears in Docker.
I have had a similar problem with Fedora; the solution was to install the package that provides the 'ip' command (which is used to generate the fact you're looking for). In the case of Fedora: dnf install iproute.
For Ubuntu, you have to install the iproute2 package in your pre_tasks. Don't forget to gather facts again in another task with - setup: afterwards.
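If you want to check this by hand before wiring it into pre_tasks, a quick sketch run inside the Ubuntu container (the ansible_default_ipv4 fact is derived from the default route that the ip tool reports):
apt-get update && apt-get install -y iproute2
ip route show default                                  # roughly what the setup module inspects
ansible all --connection=local -m setup | grep -A 3 ansible_default_ipv4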
Use the --hostname flag to put your local container hostname in /etc/hosts:
docker run --hostname=my_hostname

Rabbitmq connection refused from Docker container to local host

I have a docker container running a java process that I am trying to connect to rabbitmq running on my localhost.
Here are the steps I've done so far:
On my Local machine (macbook running Docker version 1.13.0-rc3, build 4d92237 with firewall turned off)
I've updated my rabbitmq_env.conf file to remove RABBITMQ_NODE_IP_ADDRESS so I am not tied to connecting via localhost, and I have an admin RabbitMQ user (not trying with the guest user).
I tested this via telnet on my local machine and have no issues: telnet <local-ip> 5672
Inside my Docker container
I am able to ping local-ip and curl the RabbitMQ admin API:
curl -i -u username:password http://local-ip:15672/api/vhosts returns successfully:
[{"name":"/","tracing":false}]
When I try to telnet from inside the container I get
"Connection closed by foreign host"
Looking at the rabbitmq logs:
=ERROR REPORT====
closing AMQP connection <0.30526.1> (local-ip:53349 -> local-ip:5672):
{handshake_timeout,handshake}
My Java stack trace, in case it's helpful:
Caused by: java.net.ConnectException: Connection refused (Connection refused)
    at java.net.PlainSocketImpl.socketConnect(Native Method)
    at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:350)
    at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:206)
    at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:188)
    at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)
    at java.net.Socket.connect(Socket.java:589)
    at com.rabbitmq.client.impl.FrameHandlerFactory.create(FrameHandlerFactory.java:32)
    at com.rabbitmq.client.impl.recovery.RecoveryAwareAMQConnectionFactory.newConnection(RecoveryAwareAMQConnectionFactory.java:35)
docker network inspect bridge
[
    {
        "Name": "bridge",
        "Id": "716f935f19a107225650a95d06eb83d4c973b7943b1924815034d469164affe5",
        "Created": "2016-12-11T15:34:41.950148125Z",
        "Scope": "local",
        "Driver": "bridge",
        "EnableIPv6": false,
        "IPAM": {
            "Driver": "default",
            "Options": null,
            "Config": [
                {
                    "Subnet": "172.17.0.0/16",
                    "Gateway": "172.17.0.1"
                }
            ]
        },
        "Internal": false,
        "Attachable": false,
        "Containers": {
            "9722a49c4e99ca5a7fabe56eb9e1c71b117a1e661e6c3e078d9fb54d7d276c6c": {
                "Name": "testing",
                "EndpointID": "eedf2822384a5ebc01e5a2066533f714b6045f661e24080a89d04574e654d841",
                "MacAddress": "02:42:ac:11:00:02",
                "IPv4Address": "172.17.0.2/16",
                "IPv6Address": ""
            }
        },
        "Options": {
            "com.docker.network.bridge.default_bridge": "true",
            "com.docker.network.bridge.enable_icc": "true",
            "com.docker.network.bridge.enable_ip_masquerade": "true",
            "com.docker.network.bridge.host_binding_ipv4": "0.0.0.0",
            "com.docker.network.bridge.name": "docker0",
            "com.docker.network.driver.mtu": "1500"
        },
        "Labels": {}
    }
]
What am I missing?
For me this works fine!
I installed the image with docker pull rabbitmq:3-management
and ran:
docker run -d --hostname haroldjcastillo --name rabbit-server -e RABBITMQ_DEFAULT_USER=admin -e RABBITMQ_DEFAULT_PASS=admin2017 -p 5672:5672 -p 15672:15672 rabbitmq:3-management
The most important thing is to publish the connection and management ports: -p 5672:5672 -p 15672:15672
To see your Docker host IP, run:
docker-machine ip
which in my case returns:
192.168.99.100
Go to the management UI at http://192.168.99.100:15672
For Spring Boot you can configure it like this (the same values work for other kinds of connections):
spring.rabbitmq.host=192.168.99.100
spring.rabbitmq.username=admin
spring.rabbitmq.password=admin2017
spring.rabbitmq.port=5672
Best wishes
For anyone else searching for this error: I'm using Spring Boot and RabbitMQ in Docker containers, starting them with Docker Compose. I kept getting org.springframework.amqp.AmqpConnectException: java.net.ConnectException: Connection refused from the Spring app.
The RabbitMQ hostname was incorrect. To fix this, I'm using the container names in the Spring app configuration. Either put spring.rabbitmq.host=my-rabbit in Spring's application.properties (or yml file), or in docker-compose.yaml add environment: SPRING_RABBITMQ_HOST: my-rabbit to the Spring service. Of course, "my-rabbit" is the RabbitMQ container name described in the docker-compose.yaml.
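If you are not using Compose, a minimal sketch of the same idea with plain docker commands (my-spring-image is a hypothetical image name): containers on a user-defined network resolve each other by container name, which is exactly what makes spring.rabbitmq.host=my-rabbit work:
docker network create app-net
docker run -d --name my-rabbit --network app-net rabbitmq:3-management
docker run -d --name my-spring-app --network app-net -e SPRING_RABBITMQ_HOST=my-rabbit my-spring-image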
I am using Docker with Linux containers with rabbitmq:3-management and have created a .NET Core based Web API. While calling it from a Web API action method I faced the same issue and changed the value to "host.docker.internal".
The following scenarios worked for me:
"localhost" on IIS Express
"localhost" on Docker build from Visual Studio
"host.docker.internal" on Docker build from Visual Studio
"Messaging": {
"Hostname": "host.docker.internal",
"OrderQueue": "ProductQueue",
"UserName": "someuser",
"Password": "somepassword" },
But I faced the same issue when the container was created via the docker build command, though not when the container was created using the Visual Studio F5 command.
I found the solution; there are two ways to do it.
By default all containers get added to the "bridge" network, so go through these steps:
Case 1: If you already have the containers (rabbitmq and api) running in Docker, then first check their IP / hostname:
docker network ls
docker network inspect bridge # from this step you'll get to know what containers are associated with this
Find the rabbitmq container and its internal IP, use that container name or IP in your application, and then run it; it will work both from Visual Studio and from the docker build and run commands.
Case 2: If you have no containers running, you may want to create your own network in Docker; then follow these steps:
docker network create givenetworknamehere
Add your container to the network while using the "docker run" command, or afterwards.
Step 2.1: if using the docker run command for your container:
docker run --network givenetworknamehere -d -p yourport:80 --name givecontainername giveyourimagename
Step 2.2: if adding the newly created network after container creation, then use the command below:
docker network connect givenetworknamehere givecontainername
With these steps you bring your containers into the same newly created network so they can communicate.
Note: by default the "bridge" network type gets created.
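As a quick sanity check of the steps above (using the placeholder names from the steps; ping only works if the image ships it):
docker network inspect givenetworknamehere            # both containers should be listed under "Containers"
docker exec givecontainername ping -c 1 name-of-your-rabbitmq-container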
After a restart, all was working. I don't think Rabbit was respecting .config changes.

marathon docker jobs hung in deployment state

Hi, I have been successful so far with simple jobs in Marathon, but it got stuck when I tried deploying a Docker job in Mesos through the Marathon framework.
I am using a JSON file as below to deploy a Docker job:
{
    "id": "pga-docker",
    "cpus": 0.2,
    "mem": 1024.0,
    "instances": 1,
    "container": {
        "type": "DOCKER",
        "docker": {
            "image": "pga",
            "network": "BRIDGE",
            "portMappings": [
                { "containerPort": 80, "hostPort": 6565, "servicePort": 0, "protocol": "tcp" }
            ]
        }
    }
}
My pga Docker image has no problem when run as a container, but through Marathon it's just not working; it stays in the deploying state forever.
I am using the below command line:
curl -X POST http://10.141.141.10:8080/v2/apps -d @basic-3.json -H "Content-type: application/json"
But when I run the same image from the Marathon UI, it works. To run it from Marathon I used "docker run --publish 6060:80 --name test --rm pga" in the cmd field of the UI's new job page.
Does anyone have an idea why this hangs with the command-line approach?
This is what I found during some trial and error with the JSON file.
When we run a Docker image on the local system, an entrypoint or cmd specified in the image is executed when the container runs. But this is not the same for Mesos/Marathon: my observation is that if I explicitly mention cmd in the deployment JSON then it works fine.
"cmd":"sh pga-setup.sh"
I would love to know if anyone faced a similar issue and solved it another way.
