I currently have a bunch of Docker-based services that are served over SSL. For local development we just use a self-signed certificate, but now we are trying to configure the production deployment.
My current testing environment is Windows 10 based, and the containers run inside WSL.
For most of the steps we are following these instructions, and plain HTTP traffic seems to be working, but when I make a request over HTTPS I get a "500 Internal Server Error". If I curl from inside the Linux instance, the site is served correctly, but from anywhere else I get the 500 error.
The question is: can I only configure SSL when working with the final public hosting, after reconfiguring my domain, or is there a way to test everything locally before moving to prod? And might there be any issues with the self-signed cert currently inside the Apache image?
Edit: From checking the documentation I now understand that in order to have Let's Encrypt working, I need to use the actual final public DNS and hosting. So I'm wondering how I could configure this to work locally, or whether to just drop the SSL part. I remember our architecture had some requirement that it be used over SSL, but I'm not quite sure right now; locally, I need devs to be able to run multiple instances without issues.
My app Dockerfile is based upon this one,
and the current docker-compose file is as follows:
version: '3'
services:
  web:
    build:
      context: ./modxServer
    links:
      - 'db:mysql'
    ports:
      - 443
      - 80
    networks:
      - reverse-proxy
      - back
    environment:
      XDEBUG_SESSION: wtf
      MODX_VERSION: 2.8.1
      MODX_CORE_LOCATION: /var/www/coreM0dXF1L3s
      MODX_DB_HOST: 'mysql:3306'
      MODX_DB_PASSWORD: modx
      MODX_DB_USER: modx
      MODX_DB_NAME: modx
      MODX_TABLE_PREFIX: modx_
      MODX_ADMIN_USER: admin
      MODX_ADMIN_PASSWORD: admin
      MODX_ADMIN_EMAIL: admin@admin.com
      MODX_SERVER_ROUTE: boats.trotalo.com
      VIRTUAL_HOST: boats.trotalo.com
      VIRTUAL_PROTO: https
      VIRTUAL_PORT: 443
      LETSENCRYPT_HOST: boats.trotalo.com
      LETSENCRYPT_EMAIL: camilo.casadiego@trotalo.com
    volumes:
      - '~/development/boatsSupervisionSystem/www:/var/www'
  db:
    image: 'mysql:8.0.22'
    networks:
      - back
    environment:
      MYSQL_ROOT_PASSWORD: mysql
      MYSQL_DATABASE: modx
      MYSQL_USER: modx
      MYSQL_PASSWORD: modx
    ports:
      - 3306
    command: --default-authentication-plugin=mysql_native_password
    volumes:
      - '~/development/boatsSupervisionSystem/mysql:/var/lib/mysql'
networks:
  reverse-proxy:
    external:
      name: reverse-proxy
  back:
    driver: bridge
Currently, the only meaningful log I'm getting is this, from the Let's Encrypt companion:
2021/08/31 00:09:46 [notice] 175#175: signal process started
Creating/renewal boats.trotalo.com certificates... (boats.trotalo.com)
[Tue Aug 31 00:09:46 UTC 2021] Using CA: https://acme-v02.api.letsencrypt.org/directory
[Tue Aug 31 00:09:46 UTC 2021] Creating domain key
[Tue Aug 31 00:09:47 UTC 2021] The domain key is here: /etc/acme.sh/camilo.casadiego@trotalo.com/boats.trotalo.com/boats.trotalo.com.key
[Tue Aug 31 00:09:47 UTC 2021] Single domain='boats.trotalo.com'
[Tue Aug 31 00:09:47 UTC 2021] Getting domain auth token for each domain
[Tue Aug 31 00:09:49 UTC 2021] Getting webroot for domain='boats.trotalo.com'
[Tue Aug 31 00:09:49 UTC 2021] Verifying: boats.trotalo.com
2021/08/31 00:09:25 Generated '/app/letsencrypt_service_data' from 2 containers
2021/08/31 00:09:25 Running '/app/signal_le_service'
2021/08/31 00:09:25 Watching docker events
2021/08/31 00:09:25 Contents of /app/letsencrypt_service_data did not change. Skipping notification '/app/signal_le_service'
2021/08/31 00:09:37 Received event start for container 7e0b47af1ddc
2021/08/31 00:09:37 Received event start for container 283bb4ebec51
2021/08/31 00:09:42 Debounce minTimer fired
2021/08/31 00:09:42 Generated '/app/letsencrypt_service_data' from 4 containers
2021/08/31 00:09:42 Running '/app/signal_le_service'
[Tue Aug 31 00:09:53 UTC 2021] boats.trotalo.com:Verify error:DNS problem: NXDOMAIN looking up A for boats.trotalo.com - check that a DNS record exists for this domain
[Tue Aug 31 00:09:53 UTC 2021] Please check log file for more details: /dev/null
In the end it was more of an understanding issue: for local development I don't need Nginx, and there I can just use self-signed certificates; for prod, the official Nginx/Let's Encrypt companion image does almost all the magic.
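For the local-dev side, a minimal sketch of generating a self-signed certificate for the Apache image (the file names and dev hostname are assumptions; adjust them to your vhost config):
# Generates a throwaway key/cert pair valid for one year.
openssl req -x509 -nodes -newkey rsa:2048 -days 365 \
  -keyout boats.local.key -out boats.local.crt \
  -subj "/CN=boats.local"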
The command I used to launch the Nginx Let's Encrypt companion container is:
docker run -d \
--name nginx-letsencrypt \
--net reverse-proxy \
--volumes-from nginx-proxy \
-v $HOME/certs:/etc/nginx/certs:rw \
-v /var/run/docker.sock:/var/run/docker.sock:ro \
nginxproxy/acme-companion
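For completeness: the companion assumes the nginx-proxy container it attaches to via --volumes-from is already running on the same network. A sketch based on the nginx-proxy docs, adapted to the names used above:
docker run -d \
  --name nginx-proxy \
  --net reverse-proxy \
  -p 80:80 -p 443:443 \
  -v $HOME/certs:/etc/nginx/certs:ro \
  -v /etc/nginx/vhost.d \
  -v /usr/share/nginx/html \
  -v /var/run/docker.sock:/tmp/docker.sock:ro \
  nginxproxy/nginx-proxy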
And inside each docker-compose.yml file, or as parameters for docker run:
VIRTUAL_HOST: mydomain.or.subdomain.com
VIRTUAL_PROTO: https
VIRTUAL_PORT: 443
LETSENCRYPT_HOST: mydomain.or.subdomain.com
LETSENCRYPT_EMAIL: your.name@mydomain.or.subdomain.com
I have Docker installed on Windows 10 with the Windows Subsystem for Linux (Ubuntu 20.04 LTS).
I have the following docker-compose.yml in my /home/project folder on the Ubuntu system:
version: '3.1'
services:
  wordpress:
    image: wordpress
    restart: always
    ports:
      - 8181:80
    environment:
      WORDPRESS_DB_HOST: db
      WORDPRESS_DB_USER: exampleuser
      WORDPRESS_DB_PASSWORD: examplepass
      WORDPRESS_DB_NAME: exampledb
    volumes:
      - ./wp:/var/www/html
      - ./uploads.ini:/usr/local/etc/php/conf.d/uploads.ini
  db:
    image: mysql:5.7
    restart: always
    environment:
      MYSQL_DATABASE: exampledb
      MYSQL_USER: exampleuser
      MYSQL_PASSWORD: examplepass
      MYSQL_RANDOM_ROOT_PASSWORD: '1'
    volumes:
      - db:/var/lib/mysql
    ports:
      - "8086:3306"
  mailhog:
    image: mailhog/mailhog
    ports:
      - "1025:1025" # smtp server
      - "8025:8025" # web ui
volumes:
  db:
I then run docker compose up
The database and MailHog start up just fine, but WordPress/Apache2 gives me the following errors in the console:
wordpress_1 | AH00558: apache2: Could not reliably determine the server's fully qualified domain name, using 172.20.0.4. Set the 'ServerName' directive globally to suppress this message
wordpress_1 | AH00558: apache2: Could not reliably determine the server's fully qualified domain name, using 172.20.0.4. Set the 'ServerName' directive globally to suppress this message
wordpress_1 | [Tue Aug 10 12:35:34.558581 2021] [mpm_prefork:notice] [pid 1] AH00163: Apache/2.4.38 (Debian) PHP/7.4.14 configured -- resuming normal operations
wordpress_1 | [Tue Aug 10 12:35:34.558632 2021] [core:notice] [pid 1] AH00094: Command line: 'apache2 -D FOREGROUND'
It worked just before I went on vacation and I have no clue what's going on since I haven't changed anything on my work computer.
All help appreciated :)
I took your docker-compose.yml and ran it.
No problems.
Same error messages:
wordpress_1 | Complete! WordPress has been successfully copied to /var/www/html
wordpress_1 | No 'wp-config.php' found in /var/www/html, but 'WORDPRESS_...' variables supplied; copying 'wp-config-docker.php' (WORDPRESS_DB_HOST WORDPRESS_DB_NAME WORDPRESS_DB_PASSWORD WORDPRESS_DB_USER)
wordpress_1 | AH00558: apache2: Could not reliably determine the server's fully qualified domain name, using 172.20.0.4. Set the 'ServerName' directive globally to suppress this message
wordpress_1 | AH00558: apache2: Could not reliably determine the server's fully qualified domain name, using 172.20.0.4. Set the 'ServerName' directive globally to suppress this message
wordpress_1 | [Tue Aug 10 16:13:25.512446 2021] [mpm_prefork:notice] [pid 1] AH00163: Apache/2.4.38 (Debian) PHP/7.4.22 configured -- resuming normal operations
wordpress_1 | [Tue Aug 10 16:13:25.512536 2021] [core:notice] [pid 1] AH00094: Command line: 'apache2 -D FOREGROUND'
But when I go to localhost:8181 I see the WordPress install screen (select language, etc.).
So those messages seem to be harmless notices, not real errors.
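If you want to silence the AH00558 notice anyway, here is a sketch (not the official image's own mechanism, and the change is ephemeral: it's lost when the container is recreated):
# Set a global ServerName inside the running wordpress container.
docker compose exec wordpress bash -c \
  "echo 'ServerName localhost' > /etc/apache2/conf-available/servername.conf \
   && a2enconf servername && apache2ctl graceful"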
I have a Ruby on Rails app which I run via docker-compose up. I'm a complete noob with GraphQL and Hasura, and I've tried different ways to configure my Docker setup but I cannot make it work.
My docker-compose.yml:
version: '3.6'
services:
  postgres:
    image: postgis/postgis:latest
    restart: always
    ports:
      - "5434:5432"
    volumes:
      - ./init.sql:/docker-entrypoint-initdb.d/init.sql
    environment:
      PG_HOST_AUTH_METHOD: "trust"
  graphql-engine:
    image: hasura/graphql-engine:v1.2.2.cli-migrations-v2
    restart: always
    ports:
      - "8081:8080"
    volumes:
      - ./metadata:/hasura-metadata
    environment:
      HSR_GQL_DB_URL: "postgres://postgres@postgres/db-dev-name"
      HSR_GQL_ADMIN_SECRET: secret
    env_file:
      - .env
  server:
    build: .
    depends_on:
      - "postgres"
    command: bundle exec rails server -p 8081 -b 0.0.0.0
    ports:
      - "8080:8081"
    volumes:
      - ./:/server
      - gem_cache:/usr/local/bundle/gems
      - node_modules:/server/node_modules
    env_file:
      - .env
volumes:
  gem_cache:
  node_modules:
config.yml:
version: 2
endpoint: http://localhost:8080
metadata_directory: metadata
actions:
  kind: synchronous
  handler_webhook_baseurl: http://localhost:8080
Docker logs server shows:
Listening on tcp://0.0.0.0:8081
And I can access the app after docker-compose up at http://localhost:8080/server/
Checking the database connection, it is also reachable at port 5434 in a database manager GUI.
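For reference, the same check can be done from a terminal (a sketch; it assumes the trust auth method from the compose file took effect and that db-dev-name is the real database name):
# Should print a single row if Postgres is reachable on the published port.
psql "postgres://postgres@localhost:5434/db-dev-name" -c 'SELECT 1;'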
But when I try to execute hasura console --admin-secret secret, the Hasura console does not show up in the browser. I'm just getting these errors in the various logs:
docker logs server:
Started POST "//v1/query" for 172.25.0.1 at 2021-04-10 17:53:08 +0000
ActionController::RoutingError (No route matches [POST] "/v1/query")
docker logs postgresql:
17:27:33.453 UTC [1] LOG: starting PostgreSQL 13.2 (Debian 13.2-1.pgdg100+1) on x86_64-pc-linux-gnu, compiled by gcc (Debian 8.3.0-6) 8.3.0, 64-bit
17:27:33.453 UTC [1] LOG: listening on IPv4 address "0.0.0.0", port 5432
17:27:33.453 UTC [1] LOG: listening on IPv6 address "::", port 5432
17:27:33.459 UTC [1] LOG: listening on Unix socket "/var/run/postgresql/.s.PGSQL.5432"
17:27:33.467 UTC [103] LOG: database system was shut down at 2021-04-10 17:27:33 UTC
17:27:33.473 UTC [1] LOG: database system is ready to accept connections
17:27:41.250 UTC [111] ERROR: duplicate key value violates unique constraint "pg_extension_name_index"
17:27:41.250 UTC [111] DETAIL: Key (extname)=(postgis) already exists.
17:27:41.250 UTC [111] STATEMENT: CREATE EXTENSION IF NOT EXISTS postgis
17:27:47.565 UTC [116] ERROR: duplicate key value violates unique constraint "pg_extension_name_index"
17:27:47.565 UTC [116] DETAIL: Key (extname)=(pgcrypto) already exists.
17:27:47.565 UTC [116] STATEMENT: CREATE EXTENSION IF NOT EXISTS "pgcrypto"
17:28:01.619 UTC [128] ERROR: duplicate key value violates unique constraint "acct_status_pkey"
17:28:01.619 UTC [128] DETAIL: Key (status)=(new) already exists.
17:28:01.619 UTC [128] STATEMENT: INSERT INTO "acct_status" ("status") VALUES ($1) RETURNING "status"
docker logs graphql-engine (last few lines):
{"kind":"event_triggers","info":"preparing data"}}
{"type":"startup","timestamp":"2021-04-10T17:28:15.576+0000","level":"info","detail":{"kind":"event_triggers","info":"starting workers"}}
{"type":"startup","timestamp":"2021-04-10T17:28:15.576+0000","level":"info","detail":{"kind":"telemetry","info":"Help us improve Hasura! The graphql-engine server collects anonymized usage stats which allows us to keep improving Hasura at warp speed. To read more or opt-out, visit https://hasura.io/docs/1.0/graphql/manual/guides/telemetry.html"}}
{"type":"startup","timestamp":"2021-04-10T17:28:15.576+0000","level":"info","detail":{"kind":"server","info":{"time_taken":2.384872458,"message":"starting API server"}}}
I tried accessing http://localhost:8081//console/api-explorer via the browser, and it seems the graphql-engine container is getting the request but still won't display the Hasura console:
{
  "type": "http-log",
  "timestamp": "2021-04-12T03:24:24.399+0000",
  "level": "error",
  "detail": {
    "operation": {
      "error": {
        "path": "$",
        "error": "resource does not exist",
        "code": "not-found"
      },
      "request_id": "6d1e5f04-f7d4-48d6-932e-4cf81bdf9795",
      "response_size": 65,
      "raw_query": ""
    },
    "http_info": {
      "status": 404,
      "http_version": "HTTP/1.1",
      "url": "/console/api-explorer",
      "ip": "192.168.0.1",
      "method": "GET",
      "content_encoding": null
    }
  }
}
I've tried setting the HSR_GQL_DB_URL host to postgres and to localhost:5432, as well as postgis:// instead of postgres://.
I've also tried changing the config.yml endpoint field to http://localhost:8080/server/, but that didn't work either.
Perhaps a bit too late, but here goes anyway:
According to the documentation under GraphQL engine server config reference -> Command config, you need to set the HASURA_GRAPHQL_ENABLE_CONSOLE environment variable to enable the console served at /console. Alternatively, for the graphql-engine serve command, set the --enable-console flag to true:
Flag                           | ENV variable                  | Description
...                            | ...                           | ...
--enable-console <true|false>  | HASURA_GRAPHQL_ENABLE_CONSOLE | Enable the Hasura Console (served by the server on / and /console) (default: false)
...                            | ...                           | ...
So for your docker-compose.yml:
...
graphql-engine:
image: hasura/graphql-engine:v1.2.2.cli-migrations-v2
restart: always
ports:
- "8081:8080"
volumes:
- ./metadata:/hasura-metadata
environment:
HSR_GQL_DB_URL: "postgres://postgres#postgres/db-dev-name"
HSR_GQL_ADMIN_SECRET: secret
HASURA_GRAPHQL_ENABLE_CONSOLE: "true" # add this
env_file:
- .env
...
I also read somewhere that you should make sure to quote true and false values when using them as environment variables in Docker-related situations, since YAML may otherwise parse them as booleans rather than strings. This has worked for me, so I would stick with it. Try it out.
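With the console enabled and the "8081:8080" mapping from your compose file, the server-served console should then be reachable from the host at http://localhost:8081/console.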
So this is my current docker-compose.yml:
version: "2.0"
services:
redis:
image: redis
container_name: framework-redis
ports:
- "127.0.0.1:6379:6379"
web:
image: myContainer:v1
container_name: framework-web
depends_on:
- redis
volumes:
- /var/www/myApp:/app
environment:
LOG_STDOUT: /var/log/docker.access.log
LOG_STDERR: /var/log/docker.error.log
ports:
- "8100:80"
I've tried different settings; for example: not using a port value for redis, using 0.0.0.0, switching to the expose option.
If I try to connect using 127.0.0.1 from the host machine it works, but from my app container the connection fails with a connection refused message.
Any thoughts?
If you're accessing framework-redis from framework-web, then you need to address it using the IP (or the container name, i.e., framework-redis) and port of framework-redis. Since it's going to be behind a Docker bridge, an IP in the range 172.17.0.0/16 will be allocated to framework-redis. You can use that IP, but it's better to just use the container name along with the 6379 port.
$ cat docker-compose.yml
version: "2.0"
services:
redis:
image: redis
container_name: framework-redis
web:
image: redis
container_name: framework-web
depends_on:
- redis
command: [ "redis-cli", "-h", "framework-redis", "ping" ]
$
$ docker-compose up
Recreating framework-redis ... done
Recreating framework-web ... done
Attaching to framework-redis, framework-web
framework-redis | 1:C 09 Dec 2019 19:25:52.798 # oO0OoO0OoO0Oo Redis is starting oO0OoO0OoO0Oo
framework-redis | 1:C 09 Dec 2019 19:25:52.798 # Redis version=5.0.6, bits=64, commit=00000000, modified=0, pid=1, just started
framework-redis | 1:C 09 Dec 2019 19:25:52.798 # Warning: no config file specified, using the default config. In order to specify a config file use redis-server /path/to/redis.conf
framework-redis | 1:M 09 Dec 2019 19:25:52.799 * Running mode=standalone, port=6379.
framework-redis | 1:M 09 Dec 2019 19:25:52.800 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.
framework-redis | 1:M 09 Dec 2019 19:25:52.800 # Server initialized
framework-redis | 1:M 09 Dec 2019 19:25:52.800 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.
framework-redis | 1:M 09 Dec 2019 19:25:52.800 * DB loaded from disk: 0.000 seconds
framework-redis | 1:M 09 Dec 2019 19:25:52.800 * Ready to accept connections
framework-web | PONG
framework-web exited with code 0
As you can see above, I received a PONG for the PING command.
Some additional points:
ports are written in the form HOST_PORT:CONTAINER_PORT. You don't need to give an IP (as pointed out by @coulburton in the comments).
If you're only accessing framework-redis from framework-web, then you don't need to publish ports (i.e., the 6379:6379 in the ports section). You only need to publish ports when you want to access an application running in the container network (which, as far as I know, is 172.17.0.0/16 by default) from some other network (e.g., the host machine or some other physical machine).
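In other words, the app-side fix is to connect to the hostname redis (or framework-redis) on port 6379 instead of 127.0.0.1. A quick sketch to verify name resolution from the web container (it assumes a shell and getent are available in your image):
# Should print the bridge-network IP allocated to the redis container.
docker-compose exec web getent hosts framework-redis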
Code
I'm trying to run a redis service defined inside a docker-compose.yml as follows:
version: '3'
services:
  redis:
    image: "redis:5-alpine"
    volumes:
      - ./redis-vol:/home/data
  app:
    build: .
    ports:
      - 8080:8080
    volumes:
      - .:/home/app/
This is the Dockerfile:
FROM python:2.7-alpine3.8
WORKDIR /home/app
COPY ./requirements.txt .
RUN apk add python2-dev build-base linux-headers pcre-dev && \
    pip install -r requirements.txt
# Source files
COPY ./api.py .
COPY ./conf.ini .
CMD ["uwsgi", "--ini", "conf.ini"]
The app consists of this snippet, which runs a uWSGI interface on port 8080:
import uwsgi
import redis

r = redis.Redis('redis')

def application(env, start_response):
    start_response('200 OK', [('Content-Type', 'text/html')])
    r.append('hello', 'world!')
    r.save()
    return [b"Hello World"]
And this is the conf.ini file:
[uwsgi]
http = :8080
wsgi-file = api.py
master = true
process = 2
enable-threads = true
uid = 1001
gid = 1001
The app service is supposed to save a key:value pair through redis every time it receives a request to http://localhost:8080.
Upon a successful request, the docker-compose process returns the following log:
redis_1_bdf757fbb2bf | 1:M 26 Nov 2018 15:38:20.399 * DB saved on disk
app_1_5f729e6bcd36 | [pid: 17|app: 0|req: 1/1] 172.21.0.1 () {38 vars in 690 bytes} [Mon Nov 26 15:38:20 2018] GET / => generated 11 bytes in 8 msecs (HTTP/1.1 200) 1 headers in 44 bytes (1 switches on core 0)
redis_1_bdf757fbb2bf | 1:M 26 Nov 2018 15:38:20.998 * DB saved on disk
app_1_5f729e6bcd36 | [pid: 17|app: 0|req: 2/2] 172.21.0.1 () {40 vars in 691 bytes} [Mon Nov 26 15:38:20 2018] GET /favicon.ico => generated 11 bytes in 4 msecs (HTTP/1.1 200) 1 headers in 44 bytes (1 switches on core 0)
Problem
Despite the DB saved on disk log, the ./redis-vol folder is empty, and the dump.rdb file doesn't seem to be saved anywhere else.
What am I doing wrong? I've also tried to use redis:alpine as the image, but I get the following error at startup:
redis_1_bdf757fbb2bf | 1:M 26 Nov 14:57:27.003 # Can't handle RDB format version 9
redis_1_bdf757fbb2bf | 1:M 26 Nov 14:57:27.003 # Fatal error loading the DB: Invalid argument. Exiting.
And I've also tried to map the dump.rdb in the redis service as follows:
redis:
  image: "redis:5-alpine"
  volumes:
    - ./redis-vol/dump.rdb:/home/data/dump.rdb
but Docker creates a folder named dump.rdb/ instead of a readable file.
If you still face the problem even after changing the volume mapping to
<your-volume>:/data
make sure to delete the previous container with
docker container prune
before restarting your container again.
According to the Redis documentation on the Docker Hub page:
If persistence is enabled, data is stored in the VOLUME /data
So you are using the wrong volume path. You should use /data instead:
volumes:
- ./redis-vol:/data
To be able to mount a single file into your container with docker-compose, use the absolute path of the file you want to mount from your filesystem:
redis:
  image: "redis:5-alpine"
  volumes:
    - /Users/username/dirname/redis-vol/dump.rdb:/home/data/dump.rdb
As codestation correctly pointed out, it is in the official documentation, but he is suggesting a bind mount instead of a named volume, which has some additional cons. In both cases the documentation also includes a statement about "persistence enabled":
$ docker run --name some-redis -d redis redis-server --appendonly yes
or in your docker-compose file:
redis:
  image: "redis:alpine"
  command: redis-server --appendonly yes
  volumes:
    - your-volume:/data
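A quick way to check that persistence is actually landing on the host (a sketch; it assumes you mapped ./redis-vol:/data as suggested above, with service names from the question's compose file):
docker-compose exec redis redis-cli set hello world
docker-compose exec redis redis-cli save   # force an RDB snapshot
ls ./redis-vol                             # dump.rdb (and AOF files, if enabled) should appear here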
I'm trying to set up a swarm using Docker, but I'm having issues with communication between containers.
I have a cluster with 5 nodes: 1 manager and 4 workers.
3 apps: redis, splash, myapp
myapp has to be on the 4 workers
redis, splash just on the manager
myapp has to be able to communicate with redis and splash
I tried using the container name, but it's not working. It resolves the container name to a different IP:
ping splash # returns a different ip than the container actually has
I am deploying the stack to the swarm using docker stack:
docker stack deploy -c docker-stack.yml myapp
Linking the containers also doesn't work.
Any ideas ? Am I missing something ?
root@swarm-manager:~# docker version
Client:
 Version:      17.09.0-ce
 API version:  1.32
 Go version:   go1.8.3
 Git commit:   afdb6d4
 Built:        Tue Sep 26 22:42:18 2017
 OS/Arch:      linux/amd64

Server:
 Version:      17.09.0-ce
 API version:  1.32 (minimum version 1.12)
 Go version:   go1.8.3
 Git commit:   afdb6d4
 Built:        Tue Sep 26 22:40:56 2017
 OS/Arch:      linux/amd64
 Experimental: false
docker-stack.yml contains:
version: "3"
services:
splash:
container_name: splash
image: scrapinghub/splash
ports:
- 8050:8050
- 5023:5023
deploy:
mode: global
placement:
constraints:
- node.role == manager
redis:
container_name: redis
image: redis
ports:
- 6379:6379
deploy:
mode: global
placement:
constraints:
- node.role == manager
myapp:
container_name: myapp
image: myapp_image:latest
environment:
REDIS_ENDPOINT: redis:6379
SPLASH_ENDPOINT: splash:8050
deploy:
mode: global
placement:
constraints:
- node.role == worker
entrypoint:
- ping google.com
---- EDIT ----
I tried with curl also. Didn't work.
docker stack deploy -c docker-stack.yml myapp
Creating network myapp_default
Creating service myapp_splash
Creating service myapp_redis
Creating service myapp_myapp
curl http://myapp_splash:8050
curl: (7) Failed to connect to myapp_splash port 8050: No route to host
curl http://splash:8050
curl: (7) Failed to connect to splash port 8050: No route to host
What worked was getting the actual container name of splash, which is some randomly generated string.
curl http://myapp_splash.d7bn0dpei9ijpba4q41vpl4zz.tuk1cimht99at9g0au8vj9lkz:8050
But this doesn't really help me.
Ping is not the proper tool to try to connect to services; for some reason it doesn't work with Docker networks. Try curl http://serviceName instead.
Other than that: containers can't be named when using stack deploy; instead, your service name (which coincidentally is the same here) is used to access another service.
I managed to get it working using curl http://tasks.splash:8050 or http://tasks.myapp_splash:8050.
I don't know what is causing this issue, though. Feel free to comment with an answer.
It seems that containers in a stack are named tasks.<service name>, so the command ping tasks.myservice works for me!
An interesting point to note: names like <stackname>_<service name> will also resolve and be ping'able, but the IP address is incorrect. This is frustrating.
(For example, if you do docker stack deploy -c my.yml AA, you'll get a name like AA_myservice, which will resolve to an incorrect address.)
To add to the above answer: from a network point of view, curl and ping do the same kind of thing. Both will try to resolve the name passed to them; then curl will try to connect using the specified protocol (HTTP in the example above), while ping will send ICMP echo requests.
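The underlying reason, as far as I can tell: by default a swarm service name resolves to a single virtual IP (VIP) that load-balances across the tasks via IPVS. The VIP accepts TCP connections but does not answer ICMP, so ping against the service name looks broken even when curl can connect, while tasks.<service> bypasses the VIP and returns one DNS record per task. A sketch to compare the two, run from a shell inside a container attached to the stack network (service names as in the stack above):
nslookup splash        # one answer: the service's virtual IP
nslookup tasks.splash  # one A record per running task, the real container IPs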