podman-compose failing to compose - docker

I realize podman-compose is still under development. I'm going to be replacing my Docker stack with Podman once I replace Debian with CentOS 8 on my PowerEdge server as part of my homelab. Right now I'm just testing out and playing around with Podman on my Fedora machine.
OS: Fedora 32
KERNEL: 5.6.12-300.fc32.x86_64
PODMAN: 1.9.2
PODMAN-COMPOSE: 0.1.5
PROBLEM: podman-compose is failing and I'm unable to ascertain the reason why.
Here's my docker-compose.yml:
version: "2.1"
services:
  deluge:
    image: linuxserver/deluge
    container_name: deluge
    network_mode: host
    environment:
      - PUID=1000
      - PGID=1000
      - TZ=America/New_York
      # - UMASK_SET=022 #optional
      # - DELUGE_LOGLEVEL=error #optional
    volumes:
      - /home/mike/test/deluge:/config
      - /home/mike/Downloads:/downloads
    restart: unless-stopped
When I run podman-compose up, here is the output:
[mike@localhost test]$ podman-compose up
podman pod create --name=test --share net
ce389be26589efe4433db15d875844b2047ea655c43dc84dbe49f69ffabe867e
0
podman create --name=deluge --pod=test -l io.podman.compose.config-hash=123 -l io.podman.compose.project=test -l io.podman.compose.version=0.0.1 -l com.docker.compose.container-number=1 -l com.docker.compose.service=deluge --network host -e PUID=1000 -e PGID=1000 -e TZ=America/New_York --mount type=bind,source=/home/mike/test/deluge,destination=/config --mount type=bind,source=/home/mike/Downloads,destination=/downloads --add-host deluge:127.0.0.1 --add-host deluge:127.0.0.1 linuxserver/deluge
Trying to pull registry.fedoraproject.org/linuxserver/deluge...
manifest unknown: manifest unknown
Trying to pull registry.access.redhat.com/linuxserver/deluge...
name unknown: Repo not found
Trying to pull registry.centos.org/linuxserver/deluge...
manifest unknown: manifest unknown
Trying to pull docker.io/linuxserver/deluge...
Getting image source signatures
Copying blob a54f3db92256 done
Copying blob c114dc480980 done
Copying blob d0d29aaded3d done
Copying blob fa1dff0a3a53 done
Copying blob 5076df76a29a done
Copying blob a40b999f3c1e done
Copying config 31fddfa799 done
Writing manifest to image destination
Storing signatures
Error: error checking path "/home/mike/test/deluge": stat /home/mike/test/deluge: no such file or directory
125
podman start -a deluge
Error: unable to find container deluge: no container with name or ID deluge found: no such container
125
Then finally, when I quit via Ctrl-C:
^CTraceback (most recent call last):
File "/home/mike/.local/bin/podman-compose", line 8, in <module>
sys.exit(main())
File "/home/mike/.local/lib/python3.8/site-packages/podman_compose.py", line 1093, in main
podman_compose.run()
File "/home/mike/.local/lib/python3.8/site-packages/podman_compose.py", line 625, in run
cmd(self, args)
File "/home/mike/.local/lib/python3.8/site-packages/podman_compose.py", line 782, in wrapped
return func(*args, **kw)
File "/home/mike/.local/lib/python3.8/site-packages/podman_compose.py", line 914, in compose_up
thread.join(timeout=1.0)
File "/usr/lib64/python3.8/threading.py", line 1005, in join
if not self._started.is_set():
File "/usr/lib64/python3.8/threading.py", line 513, in is_set
def is_set(self):
KeyboardInterrupt
I'm not experienced enough to be able to read through this and figure out what the problem is so I'm hoping to learn from you all.
Thanks!

There is an error in your path. You mount:
volumes:
  - /home/mike/test/deluge:/config
but podman reports:
/home/mike/test/deluge: no such file or directory
Check the folder path: the bind-mount source on the host must exist before the container is created.
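For example (a minimal sketch, assuming you do want the Deluge config stored under /home/mike/test): create the missing host directory first, then re-run compose:

mkdir -p /home/mike/test/deluge
podman-compose up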

Related

Docker Compose - give access to docker.sock before running (Telegraf)

UPDATE:
UPDATE: I've tried implementing the accepted answer from Telegraf can not connect to Docker sock in my docker-compose file, like this:
telegraf3:
  image: telegraf
  user: telegraf:$$(stat -c '%g' /var/run/docker.sock)
  volumes:
    - ./telegraf/telegraf3.conf:/etc/telegraf/telegraf.conf
    - /var/run/docker.sock:/var/run/docker.sock
I am getting this error:
Error response from daemon: unable to find group $(stat -c '%g' /var/run/docker.sock): no matching entries in group file
How can I fix this issue? :)
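One way this is commonly handled (a sketch I'm adding, not from the original post): Compose never runs shell command substitution inside YAML values, and $$ only escapes to a literal $, which is why the daemon sees the group name $(stat ...) verbatim. Compose does interpolate ${VAR} from the host environment, though, so the GID can be computed on the host first (DOCKER_SOCK_GID is a variable name I made up for this sketch):

# Compute the socket's GID on the host, then let Compose interpolate it:
export DOCKER_SOCK_GID=$(stat -c '%g' /var/run/docker.sock)
docker compose up -d    # or: docker-compose up -d

and then in the compose file: user: "telegraf:${DOCKER_SOCK_GID}". A numeric GID does not need a matching entry in the image's group file.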
Background:
I'm trying to run Telegraf (https://github.com/influxdata/telegraf) with the Docker input. I'm running Telegraf via Docker Compose, and I've configured it roughly like this:
telegraf3:
  image: telegraf
  volumes:
    - ./telegraf/telegraf3.conf:/etc/telegraf/telegraf.conf
    - /var/run/docker.sock:/var/run/docker.sock:rw
  env_file:
    - ./telegraf/telegraf.env
  depends_on:
    - influxdb
The Telegraf configuration uses the docker input plugin to talk to the docker.sock. It doesn't work; I get a permission-related error:
test-grafana-telegraf3-1 | 2022-12-29T12:09:10Z E! [inputs.docker] Error in plugin: Got permission denied while trying to connect to the Docker daemon socket at unix:///docker.sock: Get "http://%2Fdocker.sock/v1.24/info": dial unix /docker.sock: connect: permission denied
Basically, the entrypoint.sh script runs telegraf (the application) as the telegraf user, which can't access the docker.sock.
There's a fix for this issue described here: Telegraf can not connect to Docker sock
Issue
As I am using Docker Compose, I would like this fix to be defined in the compose file, and not depend on me starting the container with docker run.
I've tried this:
telegraf3:
  image: telegraf
  volumes:
    - ./telegraf/telegraf3.conf:/etc/telegraf/telegraf.conf
    - /var/run/docker.sock:/var/run/docker.sock:rw
  env_file:
    - ./telegraf/telegraf.env
  depends_on:
    - influxdb
  command: ["bash -c -u telegraf $$(stat -c '%g' /var/run/docker.sock)", "/entrypoint.sh"]
But then I get this error:
test-grafana-telegraf3-1 | setpriv: failed to execute bash -c -u telegraf $(stat -c '%g' /var/run/docker.sock): No such file or directory
test-grafana-telegraf3-1 exited with code 127
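For comparison (my own sketch of the docker run form that the linked answer relies on, with paths taken from the compose file above), the substitution works there because the host shell evaluates it before Docker is invoked:

docker run -d --name telegraf3 \
  --user telegraf:$(stat -c '%g' /var/run/docker.sock) \
  -v "$PWD/telegraf/telegraf3.conf:/etc/telegraf/telegraf.conf" \
  -v /var/run/docker.sock:/var/run/docker.sock \
  telegraf

In the exec-form command: above, no shell runs at all; the whole first list element is handed to setpriv as a single executable name, which is why it fails with "No such file or directory" and exit code 127.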

Odoo: error 98 Address already in use, how to fix?

Odoo 15, installed on an Ubuntu server in a Docker container.
I didn't install it myself; the admin tells me he can't solve this problem.
site1@site1:/var/docker/odoo$ docker exec -it odoo bash
odoo@f6740a7479b8:/$ odoo
2022-07-22 20:23:53,743 59 INFO ? odoo: Odoo version 15.0-20220620
2022-07-22 20:23:53,743 59 INFO ? odoo: Using configuration file at /etc/odoo/odoo.conf
2022-07-22 20:23:53,743 59 INFO ? odoo: addons paths: ['/usr/lib/python3/dist-packages/odoo/addons', '/var/lib/odoo/.local/share/Odoo/addons/15.0', '/var/odoo/custom']
2022-07-22 20:23:53,743 59 INFO ? odoo: database: odoo@db:5432
2022-07-22 20:23:53,914 59 INFO ? odoo.addons.base.models.ir_actions_report: Will use the Wkhtmltopdf binary at /usr/local/bin/wkhtmltopdf
2022-07-22 20:23:54,150 59 INFO ? odoo.service.server: HTTP service (werkzeug) running on 0.0.0.0:8069
Traceback (most recent call last):
File "/usr/bin/odoo", line 8, in <module>
odoo.cli.main()
File "/usr/lib/python3/dist-packages/odoo/cli/command.py", line 61, in main
o.run(args)
File "/usr/lib/python3/dist-packages/odoo/cli/server.py", line 179, in run
main(args)
File "/usr/lib/python3/dist-packages/odoo/cli/server.py", line 173, in main
rc = odoo.service.server.start(preload=preload, stop=stop)
File "/usr/lib/python3/dist-packages/odoo/service/server.py", line 1356, in start
rc = server.run(preload, stop)
File "/usr/lib/python3/dist-packages/odoo/service/server.py", line 907, in run
self.start()
File "/usr/lib/python3/dist-packages/odoo/service/server.py", line 877, in start
self.socket.bind((self.interface, self.port))
OSError: [Errno 98] Address already in use
This error occurs when I try to update a module or just run odoo to see its output.
Otherwise Odoo works: I can develop and update modules manually from the browser.
I tried the following solutions:
adding xmlrpc_port = 7654 to the config file
didn't work; the error still occurred and the Odoo web interface wasn't available.
changing the ports in the docker-compose file:
version: '3.1'
services:
  web:
    build: ./etc/odoo
    container_name: odoo
    depends_on:
      - db
    ports:
      - "8069:8069"
In all variations, it didn't help. How can I solve this problem?
Your container has probably already started Odoo, and you are trying to start it a second time.
To execute commands in the container, you can try the --no-http parameter.
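For example (a sketch; the database and module names are placeholders, not from the question), a module update can run without binding the HTTP port a second time:

docker exec -it odoo odoo --no-http -d <database> -u <module> --stop-after-init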

docker-compose: failed to register layer: Error processing tar file - cannot find Windows DLL MUI file

Docker desktop is using Linux containers.
(Yes, I tried this: Docker Error: failed to register layer: Error processing tar file(exit status 1): "...msader15.dll.mui: no such file or directory", but using Docker Desktop with Windows containers caused the docker-compose command to fail with the response Error response from daemon: operating system is not supported)
Structure
- engine-load-tests
|- Dockerfile
|- docker-compose.yml
|- engine_load_tester_locust\
|- main.py
|- WinPerfCounters\ [I know - the casing is inconsistent]
|- main.py
|- Dockerfile
|- environment config files, README, other files
Dockerfile
FROM python:3.9.6-windowsservercore-1809
COPY . ./WinPerfCounters/
RUN pip install --no-cache-dir -r ./WinPerfCounters/requirements.txt
CMD [ "python", "WinPerfCounters/main.py", "WinPerfCounters/load_test.conf" ]
Docker-Compose
version: "3.3"
services:
  win_perf_counters:
    container_name: win_perf_counters
    platform: windows
    image: python:3.9.6-windowsservercore-1809
    build: ./WinPerfCounters
    depends_on:
      - influxdb
    links:
      - influxdb
Then other containers for locust, influx, and grafana...
Output - Snippets
------
> [python:3.9.6-windowsservercore-1809 1/3] FROM docker.io/library/python:3.9.6-windowsservercore-1809@sha256:54b7eadfbbc3a983bf6ea80eb7478b68d46267bbbcc710569972c140247ccd5e:
-----
failed to solve: rpc error: code = Unknown desc = failed to register layer: Error processing tar file(exit status 1): link /Files/Program Files/common files/Microsoft Shared/Ink/en-US/micaut.dll.mui /Files/Program Files (x86)/common files/Microsoft Shared/ink/en-US/micaut.dll.mui: no such file or directory
You can't run Windows containers (i.e. derived from some Windows base image like windowsservercore-1809) when Docker Desktop is set to Linux containers.
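A quick way to check which engine Docker Desktop is currently using (a diagnostic I'm adding, not part of the original answer):

docker info --format '{{.OSType}}'

If it prints linux, Windows base images such as windowsservercore-1809 cannot be built or run until you switch to Windows containers.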

Airflow: DockerOperator fails with Permission Denied error

I'm trying to run a Docker container via Airflow but getting Permission Denied errors. I have seen a few related posts, and some people seem to have solved it via sudo chmod 777 /var/run/docker.sock, which is a questionable solution at best, but it still didn't work for me (even after restarting Docker). If anyone has managed to solve this problem, please let me know!
Here is my DAG:
from datetime import datetime, timedelta
from airflow import DAG
from airflow.operators.docker_operator import DockerOperator

args = {
    'owner': 'airflow',
    'depends_on_past': False,
    'start_date': datetime(2020, 6, 21, 11, 45, 0),
    'retries': 1,
    'retry_delay': timedelta(minutes=1),
}

dag = DAG(
    "docker",
    default_args=args,
    max_active_runs=1,
    schedule_interval='* * * * *',
    catchup=False
)

hello_operator = DockerOperator(
    task_id="run_docker",
    image="alpine:latest",
    command="/bin/bash echo HI!",
    auto_remove=True,
    dag=dag
)
And here is the error that I'm getting:
[2020-06-21 14:01:36,620] {taskinstance.py:1145} ERROR - ('Connection aborted.', PermissionError(13, 'Permission denied'))
Traceback (most recent call last):
File "/home/airflow/.local/lib/python3.6/site-packages/urllib3/connectionpool.py", line 672, in urlopen
chunked=chunked,
File "/home/airflow/.local/lib/python3.6/site-packages/urllib3/connectionpool.py", line 387, in _make_request
conn.request(method, url, **httplib_request_kw)
File "/usr/local/lib/python3.6/http/client.py", line 1262, in request
self._send_request(method, url, body, headers, encode_chunked)
File "/usr/local/lib/python3.6/http/client.py", line 1308, in _send_request
self.endheaders(body, encode_chunked=encode_chunked)
File "/usr/local/lib/python3.6/http/client.py", line 1257, in endheaders
self._send_output(message_body, encode_chunked=encode_chunked)
File "/usr/local/lib/python3.6/http/client.py", line 1036, in _send_output
self.send(msg)
File "/usr/local/lib/python3.6/http/client.py", line 974, in send
self.connect()
File "/home/airflow/.local/lib/python3.6/site-packages/docker/transport/unixconn.py", line 43, in connect
sock.connect(self.unix_socket)
PermissionError: [Errno 13] Permission denied
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/home/airflow/.local/lib/python3.6/site-packages/requests/adapters.py", line 449, in send
timeout=timeout
File "/home/airflow/.local/lib/python3.6/site-packages/urllib3/connectionpool.py", line 720, in urlopen
method, url, error=e, _pool=self, _stacktrace=sys.exc_info()[2]
File "/home/airflow/.local/lib/python3.6/site-packages/urllib3/util/retry.py", line 400, in increment
raise six.reraise(type(error), error, _stacktrace)
File "/home/airflow/.local/lib/python3.6/site-packages/urllib3/packages/six.py", line 734, in reraise
raise value.with_traceback(tb)
File "/home/airflow/.local/lib/python3.6/site-packages/urllib3/connectionpool.py", line 672, in urlopen
chunked=chunked,
File "/home/airflow/.local/lib/python3.6/site-packages/urllib3/connectionpool.py", line 387, in _make_request
conn.request(method, url, **httplib_request_kw)
File "/usr/local/lib/python3.6/http/client.py", line 1262, in request
self._send_request(method, url, body, headers, encode_chunked)
File "/usr/local/lib/python3.6/http/client.py", line 1308, in _send_request
self.endheaders(body, encode_chunked=encode_chunked)
File "/usr/local/lib/python3.6/http/client.py", line 1257, in endheaders
self._send_output(message_body, encode_chunked=encode_chunked)
File "/usr/local/lib/python3.6/http/client.py", line 1036, in _send_output
self.send(msg)
File "/usr/local/lib/python3.6/http/client.py", line 974, in send
self.connect()
File "/home/airflow/.local/lib/python3.6/site-packages/docker/transport/unixconn.py", line 43, in connect
sock.connect(self.unix_socket)
urllib3.exceptions.ProtocolError: ('Connection aborted.', PermissionError(13, 'Permission denied'))
Here is my setup:
Dockerfile:
FROM apache/airflow
RUN pip install --upgrade --user pip && \
pip install --user psycopg2-binary && \
pip install --user docker
COPY airflow/airflow.cfg /opt/airflow/
docker-compose.yaml:
version: "3"
services:
  postgres:
    image: "postgres:9.6"
    container_name: "postgres"
    environment:
      - POSTGRES_USER=airflow
      - POSTGRES_PASSWORD=airflow
      - POSTGRES_DB=airflow
    ports:
      - "5432:5432"
    volumes:
      - ./data/postgres:/var/lib/postgresql/data
  initdb:
    image: learning/airflow
    entrypoint: airflow initdb
    depends_on:
      - postgres
  webserver:
    image: learning/airflow
    restart: always
    entrypoint: airflow webserver
    environment:
      - EXECUTOR=Local
    healthcheck:
      test: ["CMD-SHELL", "[ -f /opt/airflow/airflow-webserver.pid ]"]
      interval: 1m
      timeout: 5m
      retries: 3
    ports:
      - "8080:8080"
    depends_on:
      - postgres
    volumes:
      - ./airflow/dags:/opt/airflow/dags
      - ./airflow/plugins:/opt/airflow/plugins
      - ./data/logs:/opt/airflow/logs
      - /var/run/docker.sock:/var/run/docker.sock
  scheduler:
    image: learning/airflow
    restart: always
    entrypoint: airflow scheduler
    healthcheck:
      test: ["CMD-SHELL", "[ -f /opt/airflow/airflow-scheduler.pid ]"]
      interval: 1m
      timeout: 5m
      retries: 3
    depends_on:
      - postgres
    volumes:
      - ./airflow/dags:/opt/airflow/dags
      - ./airflow/plugins:/opt/airflow/plugins
      - ./data/logs:/opt/airflow/logs
      - /var/run/docker.sock:/var/run/docker.sock
Even though this question is old, my answer can still help other people who are having this problem.
I've found an elegant (and functional) solution at the following link:
https://onedevblog.com/how-to-fix-a-permission-denied-when-using-dockeroperator-in-airflow/
Quoting the article:
There is a more elegant approach which consists of “wrapping” the file around a service (accessible via TCP).
From the above link, the solution is to:
Add an additional docker-proxy service that exposes the local Docker socket (/var/run/docker.sock) at tcp://docker-proxy:2375 using socat:
version: '3.7'
services:
  docker-proxy:
    image: bobrik/socat
    command: "TCP4-LISTEN:2375,fork,reuseaddr UNIX-CONNECT:/var/run/docker.sock"
    ports:
      - "2376:2375"
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
Then replace the kwarg docker_url='unix://var/run/docker.sock' with docker_url='tcp://docker-proxy:2375' on all DockerOperators.
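As a quick sanity check (my own addition, not part of the quoted solution), once the docker-proxy container is up, the Docker API should answer over TCP from any container on the same Compose network, assuming curl is installed there:

curl http://docker-proxy:2375/version

The "2376:2375" port mapping is only needed if you also want to reach the proxy from the host.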
If the volume is already mapped into the container, run chmod on the HOST:
chmod 777 /var/run/docker.sock
That solved it for me.
I ran into this issue on Windows (dev environment), using the puckel image.
Note that the file /var/run/docker.sock does not exist in that image; I created it and changed its owner to the airflow user that already exists in the puckel image:
RUN touch /var/run/docker.sock
RUN chown -R airflow /var/run/docker.sock
You can try running your Docker image with:
docker run -v /var/run/docker.sock:/var/run/docker.sock your_image_name
I remember having issues similar to this and what I did, on top of what you have already done, was to dynamically add the docker group in the container with the GID of the docker.sock in a startup script like this:
#!/usr/bin/env bash
ARGS=$*

# Check if docker sock is mounted
if [[ -S /var/run/docker.sock ]]; then
    GROUP=`stat -c %g /var/run/docker.sock`
    groupadd -g $GROUP docker
    usermod -aG docker airflow
else
    echo "Docker unix sock not found. DockerOperators will not run."
fi

su airflow -c "/usr/bin/dumb-init -- /entrypoint $ARGS"
That way you don't touch the socket's permissions and the airflow user is still able to interact with it.
Some other considerations:
I had to redeclare the default user in the Dockerfile to start as root
Run airflow as user airflow
First things first, we need to mount /var/run/docker.sock as a volume, because it is the file through which the Docker client and the Docker server communicate; in this case it is what lets us launch a separate Docker container using the DockerOperator() from inside the running Airflow container. The UNIX domain socket requires either root permission or Docker group membership. Since the Airflow user is not root, we need to add it to the Docker group so that it gets access to docker.sock. For that you need to do the following:
1.1. Add a Docker group and your user to it in the terminal on your host machine (following the official Docker documentation)
sudo groupadd docker
sudo usermod -aG docker <your_user>
newgrp docker
1.2. Log out and log back in on your host machine
2.1. Get the Docker group id in the terminal on your host machine
cut -d: -f3 < <(getent group docker)
2.2. Add the Airflow user to this docker group (use the GID from the line above) in the Airflow's docker-compose.yaml
group_add:
- <docker_gid>
3.1. Get your user id in the terminal on your host machine
id -u <your_user>
3.2. Set your AIRFLOW_UID to match your user id (use the UID from the line above) on the host machine and AIRFLOW_GID to 0 in the Airflow's docker-compose.yaml
user: "${AIRFLOW_UID:-50000}:0"
4.1. If you're creating your own Dockerfile for the separate container, add your user there
ARG UID=<your_uid>
ENV USER=<your_user>
RUN useradd -u $UID -ms /bin/bash $USER
Add another leading / to /var/run/docker.sock (at the source, i.e. the part before the :) in volumes, as below:
volumes:
  - //var/run/docker.sock:/var/run/docker.sock
In my case, putting 'sudo' before the command helped. I ran
sudo docker-compose up -d --build dev
instead of
docker-compose up -d --build dev
and it worked. The issue was a lack of rights.
Try
$sudo groupadd docker
$sudo usermod -aG docker $USER

I cannot use --package option on bitnami/spark docker container

I pulled the Docker image and executed the commands below to run it:
docker run -it bitnami/spark:latest /bin/bash
spark-shell --packages="org.elasticsearch:elasticsearch-spark-20_2.11:7.5.0"
and I got a message like the one below:
Ivy Default Cache set to: /opt/bitnami/spark/.ivy2/cache
The jars for the packages stored in: /opt/bitnami/spark/.ivy2/jars
:: loading settings :: url = jar:file:/opt/bitnami/spark/jars/ivy-2.4.0.jar!/org/apache/ivy/core/settings/ivysettings.xml
org.elasticsearch#elasticsearch-spark-20_2.11 added as a dependency
:: resolving dependencies :: org.apache.spark#spark-submit-parent-c785f3e6-7c78-469f-ab46-451f8be61a4c;1.0
confs: [default]
Exception in thread "main" java.io.FileNotFoundException: /opt/bitnami/spark/.ivy2/cache/resolved-org.apache.spark-spark-submit-parent-c785f3e6-7c78-469f-ab46-451f8be61a4c-1.0.xml (No such file or directory)
at java.io.FileOutputStream.open0(Native Method)
at java.io.FileOutputStream.open(FileOutputStream.java:270)
at java.io.FileOutputStream.<init>(FileOutputStream.java:213)
at java.io.FileOutputStream.<init>(FileOutputStream.java:162)
at org.apache.ivy.plugins.parser.xml.XmlModuleDescriptorWriter.write(XmlModuleDescriptorWriter.java:70)
at org.apache.ivy.plugins.parser.xml.XmlModuleDescriptorWriter.write(XmlModuleDescriptorWriter.java:62)
at org.apache.ivy.core.module.descriptor.DefaultModuleDescriptor.toIvyFile(DefaultModuleDescriptor.java:563)
at org.apache.ivy.core.cache.DefaultResolutionCacheManager.saveResolvedModuleDescriptor(DefaultResolutionCacheManager.java:176)
at org.apache.ivy.core.resolve.ResolveEngine.resolve(ResolveEngine.java:245)
at org.apache.ivy.Ivy.resolve(Ivy.java:523)
at org.apache.spark.deploy.SparkSubmitUtils$.resolveMavenCoordinates(SparkSubmit.scala:1300)
at org.apache.spark.deploy.DependencyUtils$.resolveMavenDependencies(DependencyUtils.scala:54)
at org.apache.spark.deploy.SparkSubmit.prepareSubmitEnvironment(SparkSubmit.scala:304)
at org.apache.spark.deploy.SparkSubmit.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:774)
at org.apache.spark.deploy.SparkSubmit.doRunMain$1(SparkSubmit.scala:161)
at org.apache.spark.deploy.SparkSubmit.submit(SparkSubmit.scala:184)
at org.apache.spark.deploy.SparkSubmit.doSubmit(SparkSubmit.scala:86)
at org.apache.spark.deploy.SparkSubmit$$anon$2.doSubmit(SparkSubmit.scala:920)
at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:929)
at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
I tried other packages, but they all fail with the same error message.
Can you give me some advice on how to avoid this error?
Found the solution to it, as given in https://github.com/bitnami/bitnami-docker-spark/issues/7.
What we have to do is create a volume on the host mapped to a path inside the container:
volumes:
  - ./jars_dir:/opt/bitnami/spark/ivy:z
and pass this path as the Ivy cache path, like this:
spark-shell --conf spark.jars.ivy=/opt/bitnami/spark/ivy \
  --conf spark.cassandra.connection.host=127.0.0.1 \
  --packages com.datastax.spark:spark-cassandra-connector_2.12:3.0.0-beta \
  --conf spark.sql.extensions=com.datastax.spark.connector.CassandraSparkExtensions
This all happened because /opt/bitnami/spark is not writable, and we have to mount a volume to bypass that.
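To see the writability problem directly (my own quick check, not part of the original answer), try creating a file under that path as the image's default non-root user:

docker run --rm bitnami/spark:latest bash -c 'id; touch /opt/bitnami/spark/.write-test && echo writable || echo not-writable'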
The error "java.io.FileNotFoundException: /opt/bitnami/spark/.ivy2/" occurred because the location /opt/bitnami/spark/ is not writable, so to resolve the issue, modify the Spark master service like this:
add user: root and mount a volume path for the required jars.
See the working block of the spark service from my docker-compose:
spark:
  image: docker.io/bitnami/spark:3
  container_name: spark
  environment:
    - SPARK_MODE=master
    - SPARK_RPC_AUTHENTICATION_ENABLED=no
    - SPARK_RPC_ENCRYPTION_ENABLED=no
    - SPARK_LOCAL_STORAGE_ENCRYPTION_ENABLED=no
    - SPARK_SSL_ENABLED=no
  user: root
  ports:
    - '8880:8080'
  volumes:
    - ./spark-defaults.conf:/opt/bitnami/spark/conf/spark-defaults.conf
    - ./jars_dir:/opt/bitnami/spark/ivy:z
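With that service running, package resolution no longer hits the read-only path; for example (a sketch assuming the container name spark from the block above; whether the 2.11 Elasticsearch connector matches your Spark/Scala version is a separate concern):

docker exec -it spark spark-shell \
  --conf spark.jars.ivy=/opt/bitnami/spark/ivy \
  --packages org.elasticsearch:elasticsearch-spark-20_2.11:7.5.0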
