I've been trying the latest RCs for Docker and Compose for a few days, and finally, today, the new stable versions (1.10 and 1.6 respectively).
The networking features added in 1.9 have been great so far. But since I upgraded to 1.10rc1 (and in every RC and the stable release since), containers in the same user-defined network can no longer find each other. In fact, they can't even reach the outside world right now.
A quick example, file test_docker/docker-compose.yml:
version: '2'
services:
  db1:
    image: mysql
    environment:
      MYSQL_ALLOW_EMPTY_PASSWORD: yes
  db2:
    image: mysql
    environment:
      MYSQL_ALLOW_EMPTY_PASSWORD: yes
This creates two MySQL containers with the official image. According to the Compose docs, a new network named testdocker_default should be created, with both containers automatically connected, and indeed it is:
docker network inspect testdocker_default
[
{
"Name": "testdocker_default",
"Id": "820f702e8e685567e4f1a8638cd9be305e96e37fcd741306eed6c1cf0d54ba02",
"Scope": "local",
"Driver": "bridge",
"IPAM": {
"Driver": "default",
"Options": null,
"Config": [
{
"Subnet": "172.17.0.0/16",
"Gateway": "172.17.0.1/16"
}
]
},
"Containers": {
"16d5594bdfd11f55d33a207612b8447f6b50ff4be8b42d2313707b06ca618556": {
"Name": "testdocker_db2_1",
"EndpointID": "b6d5ff10fba860c01ac7a6508e56c5e116296cd06ea2158c695897e18fcd50ce",
"MacAddress": "02:42:ac:11:00:03",
"IPv4Address": "172.17.0.3/16",
"IPv6Address": ""
},
"9b8b885dab3b5012c9663cb97a07af66fbe385f92c69a614a4d56bf85305ec3a": {
"Name": "testdocker_db1_1",
"EndpointID": "09e43aef8e14b0e876d47fabe67a3827dc4cea5d44b199113d9ab2678d8ce22a",
"MacAddress": "02:42:ac:11:00:02",
"IPv4Address": "172.17.0.2/16",
"IPv6Address": ""
}
},
"Options": {}
}
]
Now, the documentation also says that the containers should be able to reach each other via the hostnames db1 and db2, but this is not the case:
root@9b8b885dab3b:/# mysql -h db2 -u root
ERROR 2005 (HY000): Unknown MySQL server host 'db2' (111)
root@9b8b885dab3b:/# mysql -h testdocker_db2_1 -u root
ERROR 2005 (HY000): Unknown MySQL server host 'testdocker_db2_1' (111)
Additionally, neither container is able to reach the internet unless I explicitly add Google's DNS to /etc/resolv.conf.
I'm pretty sure I'm doing something wrong here, because I can't find issues raised by other people, but I can't figure out what it is.
Thanks guys!
Edit:
To clarify, the containers can ping each other by IP address, but hostnames are not resolved.
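For anyone debugging the same symptom: on a working user-defined network, containers resolve each other through Docker's embedded DNS server. A minimal check from inside one of the containers (assuming the image ships getent and ping) would be:

```shell
# Run inside testdocker_db1_1, e.g. via: docker exec -it testdocker_db1_1 bash

# The resolver should point at Docker's embedded DNS, not the host's:
cat /etc/resolv.conf     # expected to contain: nameserver 127.0.0.11

# Name resolution of the sibling service:
getent hosts db2

# IP-level connectivity works here even while name resolution is broken:
ping -c 1 172.17.0.3
```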
This issue was reported on GitHub. The suggested workaround for the moment is to disable firewalld altogether.
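For reference, the firewalld workaround amounts to something like the following on the Docker host (restarting the daemon lets Docker re-create its iptables rules; commands assume a systemd-based distro such as Fedora):

```shell
# Disable firewalld entirely (workaround, not a fix)
sudo systemctl stop firewalld
sudo systemctl disable firewalld

# Restart Docker so it rebuilds its iptables rules
sudo systemctl restart docker
```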
I will update this answer with a better solution to this issue as soon as one is found.
Edit #1:
Pull request solving this issue (tested on Fedora 23). This PR has already been merged into master, for anyone wanting to compile Docker from source.
I couldn't find an expected release date, but I'm guessing it will ship as a patch version in the next couple of weeks. I will update this answer again when further information is available.
Edit #2:
Docker's 1.10.1 RC solves this issue. I'll mark this answer as accepted just to close this topic.
Related
I have a problem with a Docker container running a NextJS application that is trying to access another Docker container running a NestJS API.
The environment looks like this:
$ docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
b04de77cb381 ui "docker-entrypoint.s…" 9 minutes ago Up 9 minutes 0.0.0.0:8004->3000/tcp ui
6af7c952afd6 redis:latest "docker-entrypoint.s…" 2 hours ago Up 2 hours 0.0.0.0:8003->6379/tcp redis
784b6f925817 api "docker-entrypoint.s…" 2 hours ago Up 2 hours 0.0.0.0:8001->3001/tcp api
c0fb02031834 postgres:latest "docker-entrypoint.s…" 21 hours ago Up 21 hours 0.0.0.0:8002->5432/tcp db
All containers are in the same bridged network.
Running docker network inspect shows all the containers.
Containers are started in different docker-compose files (ui, redis+api, db)
API to DB
The api talks to the database db with postgresql://username:password@db:5432/myDb?schema=public
Notice that 'db' is the name on the Docker network and that the URL uses port 5432.
Since they are on the same network, you need to use the internal port 5432 instead of the published 8002.
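The two addressing schemes can be checked side by side (assuming nc is available in the images; container names and published ports are the ones from the docker ps listing above):

```shell
# In-network: service/container name + internal port
docker exec api sh -c 'nc -zv db 5432'

# From the host: localhost + published port
nc -zv localhost 8002
```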
Local UI
When I run the UI on the Host (on port 3000), it is able to access the API (in the Container).
Data is transferred from db-container to api-container to ui-on-the-host.
UI in the Container
Now I also open a browser at localhost:8004. This is the UI in the Container.
The UI is accessing the api on http://api:3001/*.
It seems logical to use the Docker network name and internal port; I do the same from API to DB.
But, this does not work: "net::ERR_NAME_NOT_RESOLVED".
Test: ncat test
Running docker exec into the UI container and checking with ncat shows the port is open:
/app $ nc -v api 3001
api (192.168.48.4:3001) open
Test: curl in the UI Container
(Added later)
When doing a Curl test out of the UI-Container to the API-Container I do get a result.
(See the simple debug endpoint called /dbg.)
$ docker exec -u 0 -it ui /bin/bash
UI$ curl http://api:3001/dbg
{"status":"I'm alive and kicking!!"}
About the Network
I did create my own Bridged Network.
So, the network is there and it looks like all Containers are connected to the network.
/Users/bert/_> docker network inspect my-net
[
{
"Name": "my-net",
"Id": "e786d630f252cf12856372b708c309f90f8bf177b6b1f742d6ae02f5094c7223",
"Created": "2021-03-11T14:10:50.417675Z",
"Scope": "local",
"Driver": "bridge",
"EnableIPv6": false,
"IPAM": {
"Driver": "default",
"Options": {},
"Config": [
{
"Subnet": "192.168.48.0/20",
"Gateway": "192.168.48.1"
}
]
},
"Internal": false,
"Attachable": false,
"Ingress": false,
"ConfigFrom": {
"Network": ""
},
"ConfigOnly": false,
"Containers": {
"6af7c952afd60a3b4f36e244273db5f9f8a993a6f738785f624ffb59f381cf3d": {
"Name": "redis",
"EndpointID": "d9a6e6f6a4467bf38d3348889c86dcc53c5b0fa5ddc9fcf17c7e722fc6673a25",
"MacAddress": "02:42:c0:a8:30:05",
"IPv4Address": "192.168.48.5/20",
"IPv6Address": ""
},
"784b6f9258179e8ac03ee8bbc8584582dd9199ef5ec1af1404f7cf600ac707e1": {
"Name": "api",
"EndpointID": "d4b82f37559a4ee567cb304f033e1394af8c83e84916dc62f7e81f3b875a6e5f",
"MacAddress": "02:42:c0:a8:30:04",
"IPv4Address": "192.168.48.4/20",
"IPv6Address": ""
},
"c0fb02031834b414522f8630fcde31482e32d948de98d3e05678b34b26a1e783": {
"Name": "db",
"EndpointID": "dde944e1eda2c69dd733bcf761b170db2756aad6c2a25c8993ca626b48dc0e81",
"MacAddress": "02:42:c0:a8:30:03",
"IPv4Address": "192.168.48.3/20",
"IPv6Address": ""
},
"d678b3e96e0f0765ed62a70cc880b07836cf1ebf17590dc0e3351e8ee8b9b639": {
"Name": "ui",
"EndpointID": "c5a8d7e3d8b31d8dacb2f343bb77f4b364f0f3e3a5ed1025cc4ec5b65b44fd27",
"MacAddress": "02:42:c0:a8:30:02",
"IPv4Address": "192.168.48.2/20",
"IPv6Address": ""
}
},
"Options": {},
"Labels": {}
}
]
Conclusion:
UI-Container with Curl in Container can talk to API-Container.
UI-on-Host with Browser on Host can talk to API-Container.
UI-Container with Browser on Host cannot talk to API-Container. Why?
Question then is how to use a UI-container in the browser and talk to other Containers over the Docker Bridged Network?
Ok, problem solved.
It was a matter of confusion about where the NextJS application gets the API location from.
Since the NextJS application (the UI) ultimately just runs in a browser, you need to specify the API location as seen from the browser, not as seen for inter-container communication.
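In practice that means handing the UI container the browser-facing URL, for example via an environment variable (NEXT_PUBLIC_* is the NextJS convention for browser-visible config; the network name, ports, and image name below are taken from this setup and are otherwise an example):

```shell
# The browser loads the UI from localhost:8004, so the API URL it calls
# must also be reachable from the host: the published port 8001, not api:3001.
docker run -d --name ui --network my-net \
  -p 8004:3000 \
  -e NEXT_PUBLIC_API_URL=http://localhost:8001 \
  ui
```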
This question is regarding getting Xdebug to work with a CLI PHP script hosted inside a web-server Docker instance.
I have Docker containers: web-server, varnish-cache, nginx-proxy.
I am able to successfully debug a Magento 2 web-page via browser with this VS Code Launch config:
This is with the new Xdebug v3, which removed a lot of v2 configuration settings.
Client (Windows 10, my laptop) IP: 192.168.1.150; host (Ubuntu 20.04) IP: 192.168.1.105; Docker container IPs: 172.100.0.2-5.
VS Code launch:
{
"name": "(Magento 2) Listen for XDebug on 192.168.1.5/105",
"type": "php",
"request": "launch",
"port": 9099,
"stopOnEntry": false, // Set to true to test any script on entry
"log": false,
// Remember to update remote_connect_back or remote_host
// inside xdebug PHP configuration.
// When using CLI debugging - rather use remote_host,
// because remote_connect_back = 1 does not work with CLI
// Server -> Local
"pathMappings": {
"/var/www/html/": "${workspaceRoot}",
},
"xdebugSettings": {
"max_children": 10000,
"max_data": 10000,
"show_hidden": 1
}
},
XDebug configuration (PHP 7.3)
zend_extension=xdebug.so
xdebug.log=/var/log/apache2/xdebug.log
xdebug.idekey=VSCODE
xdebug.client_port=9099
xdebug.client_discovery_header=HTTP_X_REAL_IP
xdebug.discover_client_host=On
; fallback for CLI - use client_host
xdebug.client_host=172.100.0.2
xdebug.start_with_request=yes
xdebug.mode=debug
Docker network:
docker inspect network magento2-network-frontend:
"Containers": {
"6538a93fbe811fbbd9646d4ce089e1b686b508862ed86f6afaac1b600043a1e5": {
"Name": "redis-cache-magento2.3.5",
"EndpointID": "d27bfbff61765cf2b840e98d43ec7a378e182baa7007dabde4bab5a41734fa2a",
"MacAddress": "02:42:ac:64:00:05",
"IPv4Address": "172.100.0.5/16",
"IPv6Address": ""
},
"7c7ba745db17d6d6a100901ed1e3fe38a3d26a97e086edc155254a7d41033bcf": {
"Name": "web-server-apache2-magento2-3-5",
"EndpointID": "9b81f6b7ff2292eba6fb68af209f1d5c958bea3ee0d505512862f225ed8e57be",
"MacAddress": "02:42:ac:64:00:02",
"IPv4Address": "172.100.0.2/16",
"IPv6Address": ""
},
"7f208ecce2aafdf182e4616ef2e8b043f3b8245018c299aae06c1acf4fc0d029": {
"Name": "varnish-cache-magento2-3-5",
"EndpointID": "e1c4e3f9e792b7dfd2cebfbb906bd237795820639a80ab8f530f0c8418257611",
"MacAddress": "02:42:ac:64:00:03",
"IPv4Address": "172.100.0.3/16",
"IPv6Address": ""
},
"dc599fa93b09650b70f8f95333caecc8f9db18cd19b17be57d84196e91f54c2a": {
"Name": "nginx-proxy-magento2-3-5",
"EndpointID": "7b8396af676d9af51b098d09f20d9e73ef83f4b085cb5f7195ea234aae7ed91d",
"MacAddress": "02:42:ac:64:00:04",
"IPv4Address": "172.100.0.4/16",
"IPv6Address": ""
}
The CLI command: as can be seen, it's a Magento 2 bin/magento migrate:data command run from within the hosting Apache2 web-server Docker container (whose IP, shown above, is 172.100.0.2):
rm var/migration* && bin/magento migrate:data /var/www/html/app/code/ModuleOverrides/Magento_DataMigrationTool/etc/opensource-to-opensource/1.7.0.2/config.localboth.host_restoredb.xml
No debug breakpoints will work in my VS Code on Windows 10 Client (IP 192.168.1.150) because I am calling the script from within the container 172.100.0.2.
The log file /var/log/apache2/xdebug.log confirms something along this line:
Could not connect to debugging client. Tried: 172.100.0.2:9099 (fallback through xdebug.client_host/xdebug.client_port) :-(
So, since I can only run this CLI script from within the Docker container, not from the Windows 10 client, what can I do to get it to connect to Xdebug?
Additional information (if needed)
Magento 2 has CLI capability via bin/magento [command], and the command I am trying to debug is part of the data-migration-tool, which is failing to import attributes correctly. No one has a 100% working solution on the GitHub repo for this particular issue, so I want to dig deeper and try to find one. Also, the tool is CLI-only; there is no web UI option.
You need to set Xdebug's xdebug.client_host to the IP address of your IDE, which you indicated is 192.168.1.150.
You also need to turn off xdebug.discover_client_host, as that would try to use the internal Docker network IP (172.100.0.2), which is not where your IDE is listening.
Remember: Xdebug makes a connection to the IDE, not the other way around.
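Putting that together, the CLI-relevant part of the configuration above would become something like this (the IP is the Windows client from the question; the rest is unchanged):

```ini
xdebug.mode=debug
xdebug.start_with_request=yes
; use the IDE machine's address, not a container IP
xdebug.discover_client_host=Off
xdebug.client_host=192.168.1.150
xdebug.client_port=9099
```

Make sure the Windows firewall allows inbound connections on port 9099 as well.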
I have a Docker swarm on a cluster of machines, and my use case is deploying several standalone containers with static IP configurations that need to be connected, so I created an overlay network to connect all the nodes of the swarm. I don't use, and don't want to use, anything related to Docker services or their replication in my swarm; it's not a real-world scenario, it's a test one.
The problem is that when I deploy a container to a certain host, a swarm load balancer is created with a random IP address, and I need it to be static because I can't change the configuration of the containers I want to deploy. I have already searched for how to remove this load balancer, because as far as I can tell it's only used for external traffic coming into the swarm services/containers, and it's not useful for my use case.
A solution would be to deploy a dummy container, check which IP was assigned to the swarm load balancer on each node, and then adjust the configuration files of the containers I want to deploy, but this approach does not scale well and is a workaround for the actual problem. My problems started when my containers randomly couldn't start, giving docker: Error response from daemon: attaching to network failed, make sure your network options are correct and check manager logs: context deadline exceeded. I could not identify the reason, and then inferred it was because these load balancers were using the same IP address I wanted to use for my containers.
My question is: how can I statically define the IP of these load balancers, or remove them completely on every node? Thank you for your time.
Here is the output of docker network inspect <my-overlay-network>:
"Name": "my-network",
"Id": "mo8rcf8ozr05qrnuqh64wamhs",
"Created": "2020-11-16T01:59:20.100290182Z",
"Scope": "swarm",
"Driver": "overlay",
"EnableIPv6": false,
"IPAM": {
"Driver": "default",
"Options": null,
"Config": [
{
"Subnet": "10.0.1.0/24",
"Gateway": "10.0.1.1"
}
]
},
"Internal": false,
"Attachable": true,
"Ingress": false,
"ConfigFrom": {
"Network": ""
},
"ConfigOnly": false,
"Containers": {
"95b8e9c3ab5f9870987c4077ce264b96a810dad573a7fa2de485dd6f4b50f307": {
"Name": "unruffled_haslett",
"EndpointID": "422d83efd66ae36dd10ab0b1eb1a70763ccef6789352b06b8eb3ec8bca48410f",
"MacAddress": "02:42:0a:00:01:0c",
"IPv4Address": "10.0.1.12/24",
"IPv6Address": ""
},
"lb-my-network": {
"Name": "my-network-endpoint",
"EndpointID": "192ffaa13b7d7cfd36c4751f87c3d08dc65e66e97c0a134dfa302f55f77dcef3",
"MacAddress": "02:42:0a:00:01:08",
"IPv4Address": "10.0.1.8/24",
"IPv6Address": ""
}
I just used a wider subnet mask of /16 instead of /24, which gave me more IP addresses and thus avoided collisions with the internal load balancers.
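For completeness, a sketch of recreating the overlay with the wider range (the network name matches the question; the exact addresses are an example, and --attachable matches the standalone-container use case):

```shell
# Remove the old /24 overlay (containers must be disconnected first)
docker network rm my-network

# Recreate it with a /16, leaving far more room for the lb-* endpoints
docker network create \
  --driver overlay \
  --attachable \
  --subnet 10.0.0.0/16 \
  --gateway 10.0.0.1 \
  my-network
```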
This question already has answers here:
Docker container to connect to Postgres not in docker
(2 answers)
Closed 2 years ago.
OK.. Sorry to clog up this site with endless questions.
I have a .NET REST API that works in DOCKER. (Windows container)
But the moment I try to connect to Postgres on my host, I am unable to: I get "unable to connect", "request timed out", "connection was actively refused"... I have modified my connection string countless times trying to get this to work.
When I look at Docker networks I get:
C:\Windows\SysWOW64>docker network ls
NETWORK ID NAME DRIVER SCOPE
4c79ae3895aa Default Switch ics local
40dd0975349e nat nat local
90a25f9de905 none null local
When I inspect my container, it says it is using the nat network.
C:\Windows\SysWOW64>docker network inspect nat
[
{
"Name": "nat",
"Id": "40dd0975349e1f4b334e5f7b93a3e8fb6aef864315ca884d8587c6fa7697dec5",
"Created": "2020-07-08T15:02:17.5277779-06:00",
"Scope": "local",
"Driver": "nat",
"EnableIPv6": false,
"IPAM": {
"Driver": "windows",
"Options": null,
"Config": [
{
"Subnet": "172.22.96.0/20",
"Gateway": "172.22.96.1"
}
]
},
"Internal": false,
"Attachable": false,
"Ingress": false,
"ConfigFrom": {
"Network": ""
},
"ConfigOnly": false,
"Containers": {
"0d2dc2658a9948d84b01eaa9f5eb5a0e7815933f5af17e5abea17b82a796e1ec": {
"Name": "***MyAPI***",
"EndpointID": "3510dac9e5c5d49f8dce18986393e2855008980e311fb48ed8c4494c9328c353",
"MacAddress": "00:15:5d:fc:4f:8e",
"IPv4Address": "172.22.106.169/16",
"IPv6Address": ""
}
},
"Options": {
"com.docker.network.windowsshim.hnsid": "3007307C-49DC-4DB5-91C8-0E05DAC8E2B6",
"com.docker.network.windowsshim.networkname": "nat"
},
"Labels": {}
}
]
When I look at the network properties of my host I have:
Name: vEthernet (nat)
Description: Hyper-V Virtual Ethernet Adapter #2
Physical address (MAC): 00:15:5d:fc:43:56
Status: Operational
Maximum transmission unit: 1500
IPv4 address: 172.22.96.1/20
IPv6 address: fe80::d017:d598:692a:2e67%63/64
DNS servers: fec0:0:0:ffff::1%1, fec0:0:0:ffff::2%1, fec0:0:0:ffff::3%1
Connectivity (IPv4/IPv6): Disconnected
I am guessing that the nat network in docker network ls is linked to this Hyper-V adapter, since both have 172.22.96.1 as the IP address.
connection string:
Server=172.22.96.1;Port=5433;Database=QuickTechAssetManager;Uid=QuickTech;Pwd=password;
So when I try to connect from the container to Postgres on the host, I get errors, even though I can ping the IP address.
When I look at my hosts file, host.docker.internal is set to 10.0.0.47 (my Wi-Fi connection).
Is this "disconnect" part of my network problems?
I have posted a few questions on this and I get one answer and then nothing further.
I would absolutely love to have someone work with me for a bit to resolve this one (what should be a minor issue).
I have modified my pg_hba.conf file, I have done everything I can find...
I will give a phone number or email to anyone who wants to help me solve this. I have been killing myself over this for more than a week and am getting nowhere. I am not even sure if this sort of request is allowed here, but I am desperate. I am three months into a project and can't get paid until I get this one minor problem solved.
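For reference, the two Postgres settings that usually matter in this situation are the following (the subnet comes from the nat network inspect output above; adjust to your setup, and restart Postgres after changing them):

```
# postgresql.conf: listen on all interfaces, not just localhost
listen_addresses = '*'

# pg_hba.conf: allow connections from the Docker nat subnet
host    all    all    172.22.96.0/20    md5
```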
here is the other question I asked a few days ago:
Docker container to connect to Postgres not in docker
rentedtrout@gmail.com for anyone who wants to work with me on this.
Please and thank you in advance.
Have you tried using the host network switch?
docker run --network host "imagename"
This will allow the container to use the same network as the host, i.e. if you are able to connect to Postgres from the host, then you will be able to connect from the container as well (with the same IP address).
First of all, I'm a total n00b in Docker, but I joined a project that is running in Docker, so I've been reading about it.
My problem is, I have to inspect my development environment on a mobile device (iOS). I tried to access it by my Docker IP, because that is basically what I do on my computer. After a few failed attempts I realized that I have to access it through the Docker bridge network instead of the Docker host (the default).
I have already defined my Docker bridge (I think it's the default), but I have no idea how to run my server on this network. Can you guys help me?
A few important notes:
I'm using Mac OS X El Capitan (10.11.1).
The device and the Mac are on the same Wi-Fi network, and I can access the server normally via localhost outside Docker.
The steps I follow to run my server are:
cd gsat_grupo_5/docker && docker-compose -p gsat_grupo_5 up -d
docker exec -it gsatgrupo5_web_1 bash
python manage.py runserver 0.0.0.0:8000
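Assuming the compose file publishes the server's port (e.g. ports: "8000:8000"), a device on the same Wi-Fi should be able to reach it through the Mac's LAN IP. A quick check from the Mac itself (en0 is usually the Wi-Fi interface on a MacBook):

```shell
# The Mac's Wi-Fi address, reachable from the phone
ipconfig getifaddr en0

# Sanity-check the published port from the Mac
curl -I "http://$(ipconfig getifaddr en0):8000"
```

If this works from the Mac, the phone's browser should reach the same http://<mac-ip>:8000 URL.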
When I run docker ps my output is:
My docker bridge output:
[
{
"Name": "bridge",
"Id": "1b3ddfda071096b16b92eb82590326fff211815e56344a5127cb0601ab4c1dc8",
"Scope": "local",
"Driver": "bridge",
"EnableIPv6": false,
"IPAM": {
"Driver": "default",
"Options": null,
"Config": [
{
"Subnet": "172.17.0.0/16",
"Gateway": "172.17.0.1"
}
]
},
"Internal": false,
"Containers": {
"565caba7a4397a55471bc6025d38851b1e55ef1618ca7229fcb8f8dfcad68246": {
"Name": "gsatgrupo5_mongo_1",
"EndpointID": "471bcecbef0291d42dc2d7903f64cba6701f81e003165b6a7a17930a17164bd6",
"MacAddress": "02:42:ac:11:00:05",
"IPv4Address": "172.17.0.5/16",
"IPv6Address": ""
},
"5e4ce98bb19313272aabd6f56e8253592518d6d5c371d270d2c6331003f6c541": {
"Name": "gsatgrupo5_thumbor_1",
"EndpointID": "67f37d27e86f4a53b05da95225084bf5146261304016809c99c7965fc2414068",
"MacAddress": "02:42:ac:11:00:03",
"IPv4Address": "172.17.0.3/16",
"IPv6Address": ""
},
"a0b62a2da367e720d3a55deb7377e517015b06ebf09d153c6355b8ff30cc9977": {
"Name": "gsatgrupo5_web_1",
"EndpointID": "52687cc252ba36825d9e6d8316d878a9aa8b198ba2603b8f1f5d6ebcb1368dad",
"MacAddress": "02:42:ac:11:00:06",
"IPv4Address": "172.17.0.6/16",
"IPv6Address": ""
},
"b3286bbbe9259648f15e363c8968b64473ec0a9dfe1b1a450571639b8fa0ef6f": {
"Name": "gsatgrupo5_mysql_1",
"EndpointID": "53290cb44cf5ed8322801d2dd0c529518f7d414b3c5d71cb6cca527767dd21bd",
"MacAddress": "02:42:ac:11:00:04",
"IPv4Address": "172.17.0.4/16",
"IPv6Address": ""
}
},
"Options": {
"com.docker.network.bridge.default_bridge": "true",
"com.docker.network.bridge.enable_icc": "true",
"com.docker.network.bridge.enable_ip_masquerade": "true",
"com.docker.network.bridge.host_binding_ipv4": "0.0.0.0",
"com.docker.network.bridge.name": "docker0",
"com.docker.network.driver.mtu": "1500"
},
"Labels": {}
}
]
If there's some another smart approach to access my environment in my mobile device I'm listening.
I've to access with the docker network bridge instead of docker host(the default).
Unless you have a protocol that does something odd, like connecting back out to the device from the server, normally accessing <macip>:8000 from your device would be enough. Can you test the service from any other computers?
If you do require direct access the container network, that's a bit harder when using a Mac...
Docker for Mac doesn't support direct access to the bridge networks of the Linux virtual machine where your containers run.
Docker Toolbox runs a VirtualBox VM with the boot2docker VM image. It would be possible to use this, but it's a little harder to apply custom network config to a VM that is set up and run via the docker-machine tools.
Plain Virtualbox is probably your best option, running your own VM with Docker installed.
Add two bridged network interfaces to the VM in VirtualBox: one for the VM and one for the container, so both can be reached on your main network.
The first interface is for the host. It should pick up an address from DHCP like normal and Docker will then be available on your normal network.
The second bridged interface can be attached to your docker bridge and then the containers on that bridge will be on your home network.
On pre-1.10 versions of Docker, Pipework can be used to physically map an interface into the container.
There is some specific VirtualBox interface setup required for both methods to make sure all this works.
Vagrant
Vagrant might make the VM setup a bit easier and repeatable.
$ mkdir dockervm
$ cd dockervm
$ vagrant init debian/jessie64
Vagrantfile network config:
config.vm.network "public_network", bridge: "en1: Wi-Fi (AirPort)"
config.vm.network "public_network", bridge: "en1: Wi-Fi (AirPort)"
config.vm.provider "virtualbox" do |v|
v.customize ['modifyvm', :id, '--nictype1', 'Am79C973']
v.customize ['modifyvm', :id, '--nicpromisc1', 'allow-all']
v.customize ['modifyvm', :id, '--nictype2', 'Am79C973']
v.customize ['modifyvm', :id, '--nicpromisc2', 'allow-all']
end
Note that this VM will have 3 interfaces. The first interface is for Vagrant to use as a management address and should be left as is.
Start up
$ vagrant up
$ vagrant ssh