I want to run nginx with docker-compose.
docker-compose.yml:
version: "3.9"
services:
custom-nginx:
image: custom-nginx:latest
network_mode: host
volumes:
- /etc/letsencrypt:/etc/letsencrypt:ro
ports:
- 80:80
- 443:443
restart: always
nginx:
depends_on:
- custom-nginx
image: nginx:alpine
volumes:
- /etc/letsencrypt:/etc/letsencrypt:ro
restart: always
The folder gets mounted, but when I look into it from the nginx container it's empty:
/ # ls -al /etc/letsencrypt/
total 4
drwxr-xr-x 2 root root 40 Nov 21 18:00 .
drwxr-xr-x 1 root root 4096 Nov 21 20:07 ..
The custom-nginx Dockerfile is just:
FROM nginx:alpine
COPY nginx.conf /etc/nginx/nginx.conf
I assume it has something to do with permissions, maybe?
ls -l /etc | grep lets
drwxr-x--- 9 root docker 4096 Nov 21 17:43 letsencrypt
The docker group has r-x on the folder recursively. The docker user on the host can see all of those folders, and that docker user runs Docker in rootless mode.
I was assuming that the container's root user would see all files the same way the docker user does on the host machine?
What am I missing here?
EDIT: This is the content of /etc/letsencrypt on the host
ls -l /etc/letsencrypt/
total 32
drwx------ 3 root root 4096 Nov 21 17:32 accounts
drwxr-x--- 3 root docker 4096 Nov 21 17:43 archive
-rwx------ 1 root root 207 Nov 12 2021 cli.ini
drwx------ 2 root root 4096 Nov 21 17:43 csr
drwx------ 2 root root 4096 Nov 21 17:43 keys
drwxr-x--- 3 root docker 4096 Nov 21 17:43 live
drwx------ 2 root root 4096 Nov 21 17:43 renewal
drwx------ 5 root root 4096 Nov 21 17:32 renewal-hooks
The container's nginx.conf only references /etc/letsencrypt/live/<domain_name>/fullchain.pem, which is actually a link:
/etc/letsencrypt/live/<domain_name>/fullchain.pem -> ../../archive/<domain_name>/fullchain1.pem
But both the live and archive folders seem to have the necessary permissions, in my opinion...
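One workaround worth sketching (an assumption, not from this thread): rootless Docker runs the daemon inside a user namespace, so host group permissions such as root:docker may not map through to the container. Copying the resolved certificates into a directory the rootless docker user owns outright, and mounting that copy, sidesteps the group mapping entirely. The /home/docker/certs path below is illustrative, kept fresh by e.g. a certbot deploy hook on the host that runs cp -rL and chown:
services:
  nginx:
    image: nginx:alpine
    volumes:
      # illustrative path: a dereferenced copy of /etc/letsencrypt,
      # owned by the rootless "docker" user and refreshed by a
      # certbot deploy hook or cron job on the host
      - /home/docker/certs:/etc/letsencrypt:ro
    restart: always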
Here is my docker-compose.yml:
version: '2'
services:
  backgestionpersonne_TEST_CBS:
    image: my-registry.compagny.com/my_repo/TEST_CBS:${TAG_VERSION}
    container_name: TEST_CBS
    restart: always
    ports:
      - 5555:80
    networks:
      - traefik
    volumes:
      - '/etc/pki/ca-trust/source/anchors/cert_Subordinate_CA.pem:/usr/local/share/ca-certificates/cert_Subordinate_CA.pem'
      - '/etc/pki/ca-trust/source/anchors/cert_Root_CA.pem:/usr/local/share/ca-certificates/cert_Root_CA.pem'
      - '/etc/pki/ca-trust/source/anchors/cert.pem:/usr/local/share/ca-certificates/cert.pem'
networks:
  traefik:
    external:
      name: traefik
When I am in the container, the files show up with missing rights (??????????):
root@2ce5b349fc30:/app# ls -ail /usr/local/share/ca-certificates/
ls: cannot access '/usr/local/share/ca-certificates/cert_Subordinate_CA.pem': Permission denied
ls: cannot access '/usr/local/share/ca-certificates/cert_Root_CA.pem': Permission denied
ls: cannot access '/usr/local/share/ca-certificates/cert.pem': Permission denied
total 0
18302330 drwxr-xr-x. 1 root root 105 Aug 1 14:24 .
890135 drwxr-xr-x. 1 root root 29 Jul 12 13:53 ..
? -?????????? ? ? ? ? ? cert_Subordinate_CA.pem
? -?????????? ? ? ? ? ? cert_Root_CA.pem
? -?????????? ? ? ? ? ? cert.pem
Do you know why this Docker volume loses its rights when I am inside the container?
(I have the exact same docker-compose.yml file on another server, and the volume doesn't lose its rights there.)
When I use these volumes instead, it works:
- '/tmp/tmp/cert_Subordinate_CA.pem:/usr/local/share/ca-certificates/cert_Subordinate_CA.pem'
- '/tmp/tmp/cert_Root_CA.pem:/usr/local/share/ca-certificates/cert_Root_CA.pem'
- '/tmp/tmp/cert.pem:/usr/local/share/ca-certificates/cert.pem'
Here are the rights on both directories:
[root@svprd1148 ~]# ls -ail /tmp/tmp/
total 12
17379249 drwxr-xr-x. 2 root root 89 Jul 20 20:29 .
16777288 drwxrwxrwt. 9 root root 138 Aug 4 04:05 ..
18033843 -rw-r--r--. 1 root root 1578 Jun 17 11:41 cert_Root_CA.pem
18033827 -rw-r--r--. 1 root root 1125 Jun 17 10:20 cert_Subordinate_CA.pem
18033836 -rw-r--r--. 1 root root 1588 Jun 17 10:19 cert.pem
and
[root@svprd1148 ~]# ls -ail /etc/pki/ca-trust/source/anchors/
total 32
45589 drwxr-xr-x. 2 root root 188 Aug 1 16:21 .
50341743 drwxr-xr-x. 4 root root 80 Jul 20 20:23 ..
51155 -rw-r--r--. 1 root root 1125 Jun 17 10:20 cert_Subordinate_CA.pem
51156 -rw-r--r--. 1 root root 1578 Jun 17 11:41 cert_Root_CA.pem
4691079 -rw-r--r--. 1 root root 1588 Jun 17 10:19 cert.pem
And I've got "permission denied" when I try to make a "chmod 777 -R /usr/local/share/ca-certificates/" inside the container
I found the solution here :
Permission denied on accessing host directory in Docker
It's necessary to add :Z at the end of each volume.
volumes:
  - '/etc/pki/ca-trust/source/anchors/cert_Subordinate_CA.pem:/usr/local/share/ca-certificates/cert_Subordinate_CA.pem:Z'
  - '/etc/pki/ca-trust/source/anchors/cert_Root_CA.pem:/usr/local/share/ca-certificates/cert_Root_CA.pem:Z'
  - '/etc/pki/ca-trust/source/anchors/cert.pem:/usr/local/share/ca-certificates/cert.pem:Z'
It works!
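For context: :Z is an SELinux volume label option, which would also explain why the same compose file behaves differently on another server (presumably one where SELinux is not enforcing). :Z relabels the content with a private label that only this container may use, while lowercase :z applies a shared label so several containers can mount the same host path. A sketch of the two options, using one of the volumes above:
volumes:
  # private label (:Z): only this container may use the file
  - '/etc/pki/ca-trust/source/anchors/cert.pem:/usr/local/share/ca-certificates/cert.pem:Z'
  # or a shared label (:z) if other containers mount the same host path:
  # - '/etc/pki/ca-trust/source/anchors/cert.pem:/usr/local/share/ca-certificates/cert.pem:z'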
I tried to deploy Keycloak and its database via Docker (docker-compose).
It retries 10 times, then fails the deployment. The same docker-compose.yml file worked for me in the past, and I haven't done any OS or container updates since.
The following error and warning are thrown:
keycloak | 09:48:42,070 ERROR [org.jgroups.protocols.TCP] (ServerService Thread Pool -- 60) JGRP000034: cff2ce8f5cdf: failure sending message to e832b25e9785: java.net.SocketTimeoutException: connect timed out
keycloak | 09:48:45,378 WARN [org.jgroups.protocols.pbcast.GMS] (ServerService Thread Pool -- 60) cff2ce8f5cdf: JOIN(cff2ce8f5cdf) sent to 05bdb7a4a7f5 timed out (after 3000 ms), on try 0
My docker-compose.yml looks like this:
keycloak:
  container_name: keycloak
  image: jboss/keycloak:11.0.2
  ports:
    - 8081:8080
  environment:
    - DB_VENDOR=mariadb
    - DB_ADDR=authenticationDB
    - DB_DATABASE=keycloak
    - DB_USER=keycloak
    - DB_PASSWORD=password
    - KEYCLOAK_USER=admin
    - KEYCLOAK_PASSWORD=admin
    - JGROUPS_DISCOVERY_PROTOCOL=JDBC_PING
    - JGROUPS_DISCOVERY_PROPERTIES=datasource_jndi_name=java:jboss/datasources/KeycloakDS,info_writer_sleep_time=500
  depends_on:
    - authenticationDB
authenticationDB:
  container_name: authenticationDB
  image: mariadb
  volumes:
    - ./keycloakDB:/var/lib/mysql
  environment:
    MYSQL_ROOT_PASSWORD: root
    MYSQL_DATABASE: keycloak
    MYSQL_USER: keycloak
    MYSQL_PASSWORD: password
  healthcheck:
    test: ["CMD", "mysqladmin", "ping", "--silent"]
I've tried the following:
SSHed into Keycloak's container and curled authenticationDB:3306. I got a "no permission" error, so the containers can talk to each other.
Checked whether the database is running inside the DB container; yes, it's running.
I am running out of ideas.
Normally it retried 10 times and then successfully deployed Keycloak.
Thanks in advance,
Rosario
I would say that the Docker image jboss/keycloak:11.0.2 doesn't support JDBC_PING:
$ docker run --rm --entrypoint bash -ti jboss/keycloak:11.0.2 \
-c 'ls -lah /opt/jboss/tools/cli/jgroups/discovery/'
total 4.0K
drwxrwxr-x. 1 jboss root 25 Sep 15 09:01 .
drwxrwxr-x. 1 jboss root 23 Sep 15 09:01 ..
-rw-rw-r--. 1 jboss root 611 Sep 15 09:01 default.cli
vs
$ docker run --rm --entrypoint bash -ti jboss/keycloak:12.0.2 \
-c 'ls -lah /opt/jboss/tools/cli/jgroups/discovery/'
total 8.0K
drwxrwxr-x. 1 jboss root 46 Jan 19 07:27 .
drwxrwxr-x. 1 jboss root 23 Jan 19 07:27 ..
-rw-rw-r--. 1 jboss root 611 Jan 19 07:27 default.cli
-rw-rw-r--. 1 jboss root 605 Jan 19 07:27 JDBC_PING.cli
Try a newer version.
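A minimal sketch of that change, assuming the rest of the compose file from the question stays as-is (12.0.2 is simply the version shown above to ship JDBC_PING.cli; any later jboss/keycloak tag that includes it should do):
keycloak:
  container_name: keycloak
  image: jboss/keycloak:12.0.2  # 11.0.2 ships no JDBC_PING.cli discovery script
  environment:
    - JGROUPS_DISCOVERY_PROTOCOL=JDBC_PING
    - JGROUPS_DISCOVERY_PROPERTIES=datasource_jndi_name=java:jboss/datasources/KeycloakDS,info_writer_sleep_time=500
    # ... DB_* and KEYCLOAK_* variables unchanged from the question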
I'm using Rancher and I set a secret using Rancher's GUI. I'm trying to make my application read this secret. Let's say the secret is called pass and I want to read it. Being familiar with Docker, I wrote the following code:
readDockerSecret: function(secretName) {
  return fs.readFileSync(`/run/secrets/${secretName}`, 'utf8');
}

// code

// read secret
try {
  var secretName = "pass";
  var pass = utils.readDockerSecret(secretName);
} catch (err) {
  if (err.code !== 'ENOENT') {
    logger.error(`An error occurred while trying to read the secret: ${secretName}. Err: ${err}`);
  } else {
    logger.debug(`Could not find the secret: ${secretName}. Err: ${err}`);
  }
}
But when I use it in Rancher, it never finds the secret. In the Rancher shell in the GUI I can see that I have the following hierarchy:
ls -la /run/secrets
total 0
drwxr-xr-x 1 root root 27 Aug 10 11:02 .
drwxr-xr-x 1 root root 21 Aug 10 10:53 ..
drwxr-xr-x 2 root root 40 Aug 10 11:02 credentials.d
drwxr-xr-x 3 root root 28 Aug 10 11:02 kubernetes.io
credentials.d is empty. But kubernetes.io contains:
/run/secrets/kubernetes.io
total 0
drwxr-xr-x 3 root root 28 Aug 10 11:02 .
drwxr-xr-x 1 root root 27 Aug 10 11:02 ..
drwxrwxrwt 3 root root 140 Aug 10 11:02 serviceaccount
ls -la /run/secrets/kubernetes.io/serviceaccount/
total 0
drwxrwxrwt 3 root root 140 Aug 10 11:02 .
drwxr-xr-x 3 root root 28 Aug 10 11:02 ..
drwxr-xr-x 2 root root 100 Aug 10 11:02 ..2020_08_10_11_02_18.157580662
lrwxrwxrwx 1 root root 31 Aug 10 11:02 ..data -> ..2020_08_10_11_02_18.157580662
lrwxrwxrwx 1 root root 13 Aug 10 11:02 ca.crt -> ..data/ca.crt
lrwxrwxrwx 1 root root 16 Aug 10 11:02 namespace -> ..data/namespace
lrwxrwxrwx 1 root root 12 Aug 10 11:02 token -> ..data/token
No sign of pass anywhere. I also tried to grep for it, without any luck. How should I read the secret in Rancher?
EDIT: In the workload YAML we have:
spec:
  containers:
    - envFrom:
        - prefix: pass
          secretRef:
            name: pass
            optional: false
      image: <image-url>
      imagePullPolicy: Always
      name: <app-name>
      resources: {}
      securityContext:
        allowPrivilegeEscalation: false
        capabilities: {}
        privileged: false
        readOnlyRootFilesystem: false
        runAsNonRoot: false
      stdin: true
      terminationMessagePath: /dev/termination-log
      terminationMessagePolicy: File
      tty: true
  dnsPolicy: ClusterFirst
  imagePullSecrets:
    - name: pass
  restartPolicy: Always
  schedulerName: default-scheduler
  securityContext: {}
  terminationGracePeriodSeconds: 30
status:
To be able to use a secret from inside a pod, you first need to "mount" that secret into the pod, either as an environment variable or a file. The secrets docs describe in detail how to do that.
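For the file-based lookup the Node.js code above expects (/run/secrets/pass), the secret would need to be mounted as a volume rather than injected via envFrom. A sketch, assuming the secret pass contains a key also named pass; the volume name is illustrative:
spec:
  containers:
    - name: <app-name>
      image: <image-url>
      volumeMounts:
        - name: pass-volume             # illustrative name
          mountPath: /run/secrets/pass  # the path fs.readFileSync() expects
          subPath: pass                 # assumes a key named "pass" in the secret
          readOnly: true
  volumes:
    - name: pass-volume
      secret:
        secretName: pass
Alternatively, the existing envFrom block already exposes the secret's keys as environment variables prefixed with pass, so process.env could be used instead of reading a file.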
Update
Using fsGroup in a SecurityContext allows the "group" permissions on the final mount point to be set. So, referring to the example below (/mydata/storage/sample/one), the permissions on "one" will give the fsGroup ID write access. However, none of the parent folders ("mydata", "storage", "sample") will have any permissions for that fsGroup. They are owned by root:root with 755 permissions.
This is a huge problem if the running processes (runAsUser and runAsGroup) try to create files or folders in any of the parent paths.
Original Post
When mounting volumes into a pod's containers, the mountPath does not need to exist; it will be created. However, the directories in this path get created with a certain umask (I believe it's 0022).
I have set the umask in the Dockerfile, but it has not made any difference.
Is there a way to change that in the deployment YAML file?
Example (copied from Kubernetes docs)
$ cat pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: redis
  namespace: play
spec:
  containers:
    - name: redis
      image: redis
      volumeMounts:
        - name: redis-storage
          mountPath: /mydata/storage/sample/one
  volumes:
    - name: redis-storage
      emptyDir: {}
$ kubectl apply -f pod.yaml
pod/redis created
$ kubectl get pods -n play --watch
NAME READY STATUS RESTARTS AGE
redis 1/1 Running 0 67s
$ kubectl exec -it redis -n play bash
root@redis:/data# ls -l /
total 72
drwxr-xr-x 2 root root 4096 Aug 12 00:00 bin
drwxr-xr-x 2 root root 4096 May 13 20:25 boot
drwxr-xr-x 2 redis redis 4096 Aug 14 14:11 data
drwxr-xr-x 5 root root 360 Aug 20 04:25 dev
drwxr-xr-x 1 root root 4096 Aug 20 04:25 etc
drwxr-xr-x 2 root root 4096 May 13 20:25 home
drwxr-xr-x 1 root root 4096 Aug 14 14:11 lib
drwxr-xr-x 2 root root 4096 Aug 12 00:00 lib64
drwxr-xr-x 2 root root 4096 Aug 12 00:00 media
drwxr-xr-x 2 root root 4096 Aug 12 00:00 mnt
drwxr-xr-x 3 root root 4096 Aug 20 04:25 mydata
drwxr-xr-x 2 root root 4096 Aug 12 00:00 opt
dr-xr-xr-x 743 root root 0 Aug 20 04:25 proc
drwx------ 1 root root 4096 Aug 14 14:10 root
drwxr-xr-x 1 root root 4096 Aug 20 04:25 run
drwxr-xr-x 2 root root 4096 Aug 12 00:00 sbin
drwxr-xr-x 2 root root 4096 Aug 12 00:00 srv
dr-xr-xr-x 13 root root 0 Aug 19 21:55 sys
drwxrwxrwt 1 root root 4096 Aug 14 14:11 tmp
drwxr-xr-x 1 root root 4096 Aug 12 00:00 usr
drwxr-xr-x 1 root root 4096 Aug 12 00:00 var
root@redis:/data# ls -l /mydata/
total 4
drwxr-xr-x 3 root root 4096 Aug 20 04:25 storage
I think you need to set up a SecurityContext in Kubernetes. Example from the docs:
Discretionary Access Control: Permission to access an object, like a
file, is based on user ID (UID) and group ID (GID).
apiVersion: v1
kind: Pod
metadata:
  name: security-context-demo
spec:
  securityContext:
    runAsUser: 1000
    runAsGroup: 3000
    fsGroup: 2000
  volumes:
    - name: sec-ctx-vol
      emptyDir: {}
  containers:
    - name: sec-ctx-demo
      image: busybox
      command: [ "sh", "-c", "sleep 1h" ]
      volumeMounts:
        - name: sec-ctx-vol
          mountPath: /data/demo
      securityContext:
        allowPrivilegeEscalation: false
Continue reading in the docs.
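Regarding the update's parent-folder problem: fsGroup is applied to the volume itself, not to the intermediate directories Kubernetes creates to host a deep mountPath. One workaround (a sketch, not from the docs) is to mount the volume at the top of the path, so everything the process writes lives inside the fsGroup-owned volume, and to create the subdirectories from the entrypoint or an init container:
spec:
  securityContext:
    runAsUser: 1000
    runAsGroup: 3000
    fsGroup: 2000              # applies to the volume root and its contents
  containers:
    - name: redis
      image: redis
      volumeMounts:
        - name: redis-storage
          mountPath: /mydata   # was /mydata/storage/sample/one; the
                               # storage/sample/one tree now lives inside the volume
  volumes:
    - name: redis-storage
      emptyDir: {}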
I'm building the Docker containers for a Ruby on Rails project, but when the build finishes, all the assets return 405 (Not Allowed) when I request them on my local computer and 404 (Not Found) on the production server.
I already reinstalled Docker, but I'm getting the same result.
UPDATE:
This is my Dockerfile:
FROM ruby:2.5.1-alpine3.7
RUN apk add --update build-base nodejs tzdata libxml2-dev postgresql-dev postgresql-client git less
RUN apk --update add --virtual build-dependencies make g++
RUN mkdir /myapp
WORKDIR /myapp
COPY Gemfile /myapp/Gemfile
COPY Gemfile.lock /myapp/Gemfile.lock
RUN bundle install --binstubs
RUN apk del build-dependencies && rm -rf /var/cache/apk/*
COPY . ./
This is my docker-compose.yml:
version: '3'
services:
  setup:
    image: minecraft.grumpyzombies.com/pcz-store
    volumes:
      - /root/apps/pcz-store/uploads:/myapp/public/uploads
      - /root/apps/pcz-store/assets:/myapp/public/assets
    depends_on:
      - db
    environment:
      - RAILS_ENV=production
      - SECRET_KEY_BASE=b84b5c15aa42b2be2290d4bb330e3ce4ec0d39847babca3c6189f1721e7bc636bd60d7d7cc0fb156d4845f403dfa5432448435926b79c913133bbf75d1cd498e
    restart: on-failure
    command: docker/assets_precompile.sh
  db:
    image: postgres:10
    volumes:
      - /root/apps/pcz-store/db:/var/lib/postgresql/data
  web:
    image: minecraft.grumpyzombies.com/pcz-store
    volumes:
      - /root/apps/pcz-store/uploads:/myapp/public/uploads
      - /root/apps/pcz-store/assets:/myapp/public/assets
    depends_on:
      - setup
      - db
    environment:
      - RAILS_ENV=production
      - ELASTICSEARCH_URL=http://elasticsearch:9200
      - SECRET_KEY_BASE=b84b5c15aa42b2be2290d4bb330e3ce4ec0d39847babca3c6189f1721e7bc636bd60d7d7cc0fb156d4845f403dfa5432448435926b79c913133bbf75d1cd498e
    command: docker/entrypoint.sh
  nginx:
    build:
      context: .
      dockerfile: Dockerfile-nginx
    image: minecraft.grumpyzombies.com/pcznginx
    restart: on-failure
    ports:
      - '80:80'
      - '443:443'
    depends_on:
      - web
    volumes:
      - /root/apps/pcz-store/assets:/var/www/myapp/public/assets
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:6.5.0
    container_name: elasticsearch
    restart: on-failure
    environment:
      - node.name=es01
      - cluster.name=docker-cluster
      - bootstrap.memory_lock=true
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
    ulimits:
      nproc: 65535
      memlock:
        soft: -1
        hard: -1
    cap_add:
      - ALL
    privileged: true
    deploy:
      restart_policy:
        condition: on-failure
        delay: 5s
        max_attempts: 3
        window: 120s
    ports:
      - '9200:9200'
      - '9300:9300'
    command: su -c ./bin/elasticsearch elasticsearch
    depends_on:
      - web
      - db
Second update:
This is what I get when I run ls -lart /var/www/myapp/public/assets in the nginx container:
root@pcz-store:~# docker exec -it d47aecf71c05 sh
/var/www/myapp # ls -lart /var/www/myapp/public/assets
total 3624
-rw-r--r-- 1 root root 6760 Feb 28 21:59 terms_conditions_icon-34d7728acea9739e5663475fa58bb00070eada9dbf035f396ffa9ebf9d61a695.png
-rw-r--r-- 1 root root 7717 Feb 28 21:59 shipping_icon-9c050291621972a805212663d85d85ce16f4ab4bac1b6df26fb4c81111b04954.png
-rw-r--r-- 1 root root 2194 Feb 28 21:59 search_btn-1337ebd34122bf832ef77c294a9a0cfacc01eec14abacd672760d4534f66309f.png
-rw-r--r-- 1 root root 6323 Feb 28 21:59 pcz_small_logo-f2cc1d44bb90b448b9b88348664a8384b378a257ab2c0bf8b0a3858f6a68150d.png
-rw-r--r-- 1 root root 2336 Feb 28 21:59 filter_menu-32653a4203b8bf0d5de68691de26f9c0b590018b5d8c34e8108dff55ca0a8f28.png
-rw-r--r-- 1 root root 1090 Feb 28 21:59 card_with_items-14c488398004329490b11e22dd425126e726bb724d8997482011fb8d53681d10.png
-rw-r--r-- 1 root root 5494 Feb 28 21:59 card_payment_icon-506681ad792fbceb514fe8dcb48c48b1ad552c1a430bb83cb8fd0a72bbfeed6f.png
-rw-r--r-- 1 root root 771 Feb 28 21:59 card-8ac603143bafe864976639270b0e7218a40c4fc7d7bb14d8d1eaa0f4b5a96e14.png
-rw-r--r-- 1 root root 6338 May 6 21:57 pcz_logo_blue-df8681f5d301079d4b14a0f5f61a64015fa3005e665fcb6d6efac10d541bfba9.png
-rw-r--r-- 1 root root 7494 May 6 21:57 pcz_logo_2-f65fe6b96c6cb86a5546e2c96db79db837b97c9ebe2ac80a928ca30604890709.png
-rw-r--r-- 1 root root 6009 May 6 21:57 pcz_logo-30a5ea591292bd780af89c4070b9f0e9bee3feccee260e8767611f88d3129813.png
-rw-r--r-- 1 root root 4143 May 6 21:57 menu_icon-63b742c78ceea89533f5e47f71cf7694a7603f182618b8b91b2f2e8c9f5cad88.png
-rw-r--r-- 1 root root 1975 May 6 21:57 card_white_with_items-a27394bbc2b5b6687aaad72dc2ae1caadbac4670e91f644fb6bf179da664a41e.png
-rw-r--r-- 1 root root 1834 May 6 21:57 card_white-29b18b65b693a5470c4dc25503c7817e9ee655d0a92b5fced902dcdcde038c72.png
drwxr-xr-x 3 root root 4096 May 8 17:31 ..
-rw-r--r-- 1 root root 85634 May 9 18:30 login_background-db8d4fb738db92fa96201ae66011a35566030de51a44b3ffbd47d5f5b07d5614.jpg
-rw-r--r-- 1 root root 91419 May 9 18:30 admin-b99dedd500b2b0074741175197a8f478fa8efec08effdc1ae28ecc5c16c81460.css.gz
-rw-r--r-- 1 root root 575024 May 9 18:30 admin-b99dedd500b2b0074741175197a8f478fa8efec08effdc1ae28ecc5c16c81460.css
-rw-r--r-- 1 root root 98024 May 9 18:33 fontawesome-webfont-ba0c59deb5450f5cb41b3f93609ee2d0d995415877ddfa223e8a8a7533474f07.woff
-rw-r--r-- 1 root root 134485 May 9 18:33 fontawesome-webfont-ad6157926c1622ba4e1d03d478f1541368524bfc46f51e42fe0d945f7ef323e4.svg.gz
-rw-r--r-- 1 root root 444379 May 9 18:33 fontawesome-webfont-ad6157926c1622ba4e1d03d478f1541368524bfc46f51e42fe0d945f7ef323e4.svg
-rw-r--r-- 1 root root 98106 May 9 18:33 fontawesome-webfont-aa58f33f239a0fb02f5c7a6c45c043d7a9ac9a093335806694ecd6d4edc0d6a8.ttf.gz
-rw-r--r-- 1 root root 165548 May 9 18:33 fontawesome-webfont-aa58f33f239a0fb02f5c7a6c45c043d7a9ac9a093335806694ecd6d4edc0d6a8.ttf
-rw-r--r-- 1 root root 98200 May 9 18:33 fontawesome-webfont-7bfcab6db99d5cfbf1705ca0536ddc78585432cc5fa41bbd7ad0f009033b2979.eot.gz
-rw-r--r-- 1 root root 165742 May 9 18:33 fontawesome-webfont-7bfcab6db99d5cfbf1705ca0536ddc78585432cc5fa41bbd7ad0f009033b2979.eot
-rw-r--r-- 1 root root 77160 May 9 18:33 fontawesome-webfont-2adefcbc041e7d18fcf2d417879dc5a09997aa64d675b7a3c4b6ce33da13f3fe.woff2
-rw-r--r-- 1 root root 34260 May 9 18:33 application-ff019001d3b7f91bae6da2b446df296fa8f0b1e8d236c5de4449aa65f683f644.css.gz
-rw-r--r-- 1 root root 199990 May 9 18:33 application-ff019001d3b7f91bae6da2b446df296fa8f0b1e8d236c5de4449aa65f683f644.css
-rw-r--r-- 1 root root 69867 May 9 18:33 application-9fb13c3411be138ab44f1c4ab79a3325374cb7bdc733423b1d0930a4051030d3.js.gz
-rw-r--r-- 1 root root 235590 May 9 18:33 application-9fb13c3411be138ab44f1c4ab79a3325374cb7bdc733423b1d0930a4051030d3.js
-rw-r--r-- 1 root root 212342 May 9 18:33 admin-66de62e1b8d9a65dfa06311ddcf46d9b39d8ea9c52989f89886ed8a036077d75.js.gz
-rw-r--r-- 1 root root 768369 May 9 18:33 admin-66de62e1b8d9a65dfa06311ddcf46d9b39d8ea9c52989f89886ed8a036077d75.js
drwxr-xr-x 3 root root 4096 May 13 03:27 img
drwxr-xr-x 2 root root 4096 May 13 03:27 font-awesome
-rw-r--r-- 1 root root 21847 May 13 03:27 .sprockets-manifest-757ca632d463d59749404f32eb6c13db.json
drwxr-xr-x 4 root root 4096 May 13 03:27 .
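The listing above shows the compiled assets do exist in the nginx container, so the 404/405 most likely comes from the nginx vhost rather than the asset build. One hedged way to confirm that (a sketch, not a confirmed fix): temporarily let Rails serve the static files on the web service via RAILS_SERVE_STATIC_FILES, the standard Rails toggle for config.public_file_server.enabled:
web:
  image: minecraft.grumpyzombies.com/pcz-store
  environment:
    - RAILS_ENV=production
    - RAILS_SERVE_STATIC_FILES=true  # if assets now load, the nginx config is at fault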