Could someone please advise how to back up the Consul cluster datastore to S3? I know we can use EBS snapshots, but I would like a script that moves the Consul datastore to S3 instead, since the snapshot approach is not very effective.
I use Consul-Snapshot at work to back up our Consul data centers. You can provide AWS credentials or use an IAM role so that Consul-Snapshot can upload the snapshots to an AWS S3 bucket.
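If you prefer a plain script instead of a dedicated tool, Consul also ships a built-in snapshot command that you can combine with the AWS CLI. A minimal sketch, assuming the AWS CLI is already configured (or an IAM role with write access to the bucket is attached); the bucket name and paths are placeholders:
#!/bin/bash
set -euo pipefail
# Take a point-in-time snapshot of the Consul datastore (KV store, ACLs, etc.)
SNAP="/tmp/consul-$(date +%Y%m%d%H%M%S).snap"
consul snapshot save "$SNAP"
# Upload the snapshot to S3 (replace the bucket with your own)
aws s3 cp "$SNAP" "s3://my-consul-backups/$(basename "$SNAP")"
# Remove the local copy
rm -f "$SNAP"
You could run this from cron on a schedule to get regular backups.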
I'm looking for an easy way to automatically send old logs from Graylog to S3 to save disk space.
Thanks!
Graylog offers archiving capabilities, using S3-compliant storage as a backend, in its commercial offering. Graylog Ops and Graylog Security both offer this functionality and are available as self-managed or cloud-based platforms.
We are planning to host our Artifactory on ECS (Fargate) and mount the data on EFS. We will use an ALB in front of the containers (ports 8081 and 8082). We still have some open questions:
Can we use multiple containers at the same time or will there be upload/write issues to EFS?
Is EFS a good solution or is S3 better?
What about the metadata? I read that Artifactory stores this in a Derby database. What if we redeploy a new container? Will the data be gone? Can this data be persisted on EFS, or do we need RDS?
Can we use multiple containers at the same time or will there be upload/write issues to EFS?
Ans: Yes, you can use multiple containers to host Artifactory instances on a single host. However, it is generally recommended to use multiple hosts to avoid a single point of failure. I don't anticipate any read/write issues with EFS or S3.
Is EFS a good solution or is S3 better?
Ans: In my opinion, both S3 and EFS are better known as scalable solutions than as high-performance ones, and the choice depends entirely on the use case. You can mitigate this by enabling cache-fs in Artifactory, which stores frequently used binaries in a defined location (such as a local disk with higher read/write speeds); a configuration sketch follows below. You can read more about cache-fs here: https://jfrog.com/knowledge-base/what-is-cache-fs-video/
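A rough sketch of what enabling cache-fs in binarystore.xml can look like; the template name and the maxCacheSize/cacheProviderDir parameters follow the JFrog binarystore documentation, but the config version, paths, and sizes below are examples only and may differ for your Artifactory release:
# Sketch of a cache-fs binarystore.xml (typically placed under $JFROG_HOME/artifactory/var/etc/artifactory/)
cat > binarystore.xml.example <<'EOF'
<config version="2">
    <chain template="cache-fs"/>
    <provider id="cache-fs" type="cache-fs">
        <!-- cache size in bytes and a fast local directory for hot binaries -->
        <maxCacheSize>10000000000</maxCacheSize>
        <cacheProviderDir>cache</cacheProviderDir>
    </provider>
</config>
EOF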
What about the metadata? I read that Artifactory stores this in a Derby database. What if we redeploy a new container? Will the data be gone? Can this data be persisted on EFS, or do we need RDS?
Ans: When you configure more than one Artifactory node, it is mandatory to have an external database (such as RDS) to store the configuration and references. On a side note: Artifactory generates the metadata for packages/artifacts and stores it in the filesystem only; the references, however, are stored in the database.
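As a rough sketch of what pointing Artifactory at an external RDS database can look like, assuming Artifactory 7.x with PostgreSQL: the keys follow the system.yaml format, but the endpoint, credentials, and file name below are placeholders you would adapt and merge into $JFROG_HOME/artifactory/var/etc/system.yaml:
# Sketch of the external-database section of system.yaml (values are placeholders)
cat > system.yaml.database-example <<'EOF'
shared:
  database:
    type: postgresql
    driver: org.postgresql.Driver
    url: jdbc:postgresql://my-artifactory-db.xxxxxxxx.eu-west-1.rds.amazonaws.com:5432/artifactory
    username: artifactory
    password: change-me
EOF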
I have launched an instance from a Moodle AMI (Moodle by Bitnami) from the AWS Marketplace.
My instance is up and running and working fine, but if I upload any videos or images to Moodle, where does that data get stored? I did not create any S3 buckets or RDS instances.
Please help me if anyone has already used this Moodle by Bitnami AMI from the AWS Marketplace.
Bitnami Engineer here,
The data is stored on the instance where Moodle is running. If you access the instance over an SSH connection, you can find the app's files in /opt/bitnami/apps/moodle/htdocs.
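If you later want a copy of that data off the instance, one simple option is to sync the Bitnami Moodle directory to an S3 bucket with the AWS CLI; the bucket name below is a placeholder, and the instance needs AWS credentials or an IAM role with write access to it:
# Copy the Moodle application files (including uploaded content) to S3
aws s3 sync /opt/bitnami/apps/moodle s3://my-moodle-backup/moodle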
I hope this information helps.
I have a Moodle site running on AWS, and my opinion is that hosting it this way does not take full advantage of a cloud-based solution. You may wish to consider using the EC2 service to run the code, EFS for the moodledata files, RDS for a managed database, ElastiCache for Redis caching, ELB for load balancing across multiple EC2 instances and HTTPS termination, and S3 Glacier for backups. If you have more than one EC2 instance, you can use Spot Instances to save money.
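As an illustration of the EFS part of that setup, mounting a file system for moodledata on an EC2 instance can look roughly like this; the file system ID and mount point are placeholders, and it assumes the amazon-efs-utils mount helper (otherwise a plain NFSv4 mount also works):
# Install the EFS mount helper (Amazon Linux; other distros differ)
sudo yum install -y amazon-efs-utils
# Mount the EFS file system for moodledata (fs-0123456789abcdef0 is a placeholder)
sudo mkdir -p /var/moodledata
sudo mount -t efs -o tls fs-0123456789abcdef0:/ /var/moodledata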
I am looking for information but I can't find it. Is it possible to attach Amazon S3 storage to a Docker container so that the application (Spring Boot) can use it as a disk? If so, how do I do it? In case it matters, the Docker image is managed by Kubernetes.
I'd recommend taking a look at the product MinIO (min.io).
Its client, mc, has nice features that let you use simple FS-style commands to operate on S3 storage from your container (see the example after the command list):
ls list buckets and objects
mb make a bucket
rb remove a bucket
cat display object contents
head display first 'n' lines of an object
pipe stream STDIN to an object
share generate URL for temporary access to an object
cp copy objects
mirror synchronize objects to a remote site
find search for objects
sql run sql queries on objects
stat stat contents of objects
lock set and get object lock configuration
retention set object retention for objects with a given prefix
legalhold set object legal hold for objects
diff list differences in object name, size, and date between buckets
rm remove objects
event manage object notifications
watch watch for object events
policy manage anonymous access to objects
admin manage MinIO servers
session manage saved sessions for cp command
config manage mc configuration file
update check for a new software update
version print version info
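For example, a minimal session against Amazon S3 itself could look like this; the alias name, bucket, and credentials are placeholders, and older mc releases use `mc config host add` instead of `mc alias set`:
# Register the S3 endpoint under an alias (credentials are placeholders)
mc alias set s3 https://s3.amazonaws.com ACCESS_KEY SECRET_KEY
# List buckets, then copy a local file into one of them
mc ls s3
mc cp ./report.csv s3/my-bucket/reports/report.csv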
For convenience, you may add shell aliases to override your common Unix tools.
alias ls='mc ls'
alias cp='mc cp'
alias cat='mc cat'
alias mkdir='mc mb'
alias pipe='mc pipe'
alias find='mc find'
It also has a native Java client (as well as Python, Golang, .NET, and Haskell clients):
MinIO Java SDK for Amazon S3 Compatible Cloud Storage
Amazon S3 is object storage, not block or file storage, and you should not attach it to a container. For any practical purpose, you should be able to use S3 storage from the Spring Boot application via the AWS SDK or the API.
With Kubernetes you can use block storage such as awsElasticBlockStore out of the box.
If you want to use object storage like S3, this is also possible, but far more complicated. If you are still interested in implementing S3 storage specifically on your Kubernetes cluster, this article describes the whole procedure very well.
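For the block-storage path, a pod spec using the in-tree awsElasticBlockStore volume looks roughly like the sketch below; the pod name, image, and volume ID are placeholders, and note that newer Kubernetes versions have deprecated the in-tree plugin in favour of the EBS CSI driver:
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: app-with-ebs
spec:
  containers:
    - name: app
      image: my-spring-boot-app:latest   # placeholder image
      volumeMounts:
        - mountPath: /data
          name: ebs-volume
  volumes:
    - name: ebs-volume
      awsElasticBlockStore:
        volumeID: vol-0123456789abcdef0   # placeholder EBS volume ID
        fsType: ext4
EOF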
I have a Rails app which uploads files to a server and stores them so that they can be processed by a Java application.
The current setup is in AWS, with two instances behind an ELB.
Path to upload dir in AWS-instance1 : /var/lib/rails_files
Path to upload dir in AWS-instance2 : /var/lib/rails_files
Is there a way so that I can sync the directory rails_files from both instances?
At a specific time, the Java application looks up the file name in the DB and picks the file up from /var/lib/rails_files.
Or is it possible to attach a shared drive that can be accessed from both instances?
I don't want to go with S3 file upload.
Any help would be appreciated.
Rather than synchronizing between the instances (with all the security and configuration that requires), sync with Amazon S3 instead.
For example, the AWS Command-Line Interface has a command that can sync between local directories and an Amazon S3 bucket:
aws s3 sync dir s3://BUCKET/dir
This also works in the reverse direction (from S3 to local) and between S3 buckets. By default, sync will not delete files from the destination.
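For example, the instance running the Java application could pull the files back down, and the --delete flag can be added if you want removals to propagate too (the bucket and paths are the placeholders from above):
# Pull the uploaded files from S3 onto the processing instance
aws s3 sync s3://BUCKET/dir /var/lib/rails_files
# Optionally mirror deletions as well
aws s3 sync s3://BUCKET/dir /var/lib/rails_files --delete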