What is the storage mechanism for Moodle on AWS (Moodle by Bitnami)?

I have taken a Moodle AMI from the AWS Marketplace (Moodle by Bitnami) and launched an instance.
My instance is up and running and working fine, but if I upload any videos or images to that Moodle site, where does my data get stored? I did not create any S3 buckets or RDS.
Please help me if anyone has already used this Moodle by Bitnami offering from the AWS Marketplace.

Bitnami Engineer here,
The data is stored on the instance where Moodle is running. If you access the instance over an SSH connection, you can find the app's files in /opt/bitnami/apps/moodle/htdocs.
I hope this information helps.
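Since the uploads live on the instance itself, a practical consequence (my assumption, not part of the Bitnami answer) is that they sit on the instance's attached EBS volume, which you can locate and snapshot with boto3. A minimal sketch; the instance ID and region are placeholders:

```python
# Sketch only: find the EBS volume(s) behind the Moodle instance and snapshot one.
# The instance ID and region are hypothetical placeholders.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")
instance_id = "i-0123456789abcdef0"  # your Moodle instance

# List the EBS volumes attached to the instance (this is where the data lives).
volumes = ec2.describe_volumes(
    Filters=[{"Name": "attachment.instance-id", "Values": [instance_id]}]
)["Volumes"]

for vol in volumes:
    device = vol["Attachments"][0]["Device"]
    print(f"{vol['VolumeId']} attached as {device}, {vol['Size']} GiB")

# Optional: take a point-in-time snapshot of the first volume as a crude backup.
snap = ec2.create_snapshot(
    VolumeId=volumes[0]["VolumeId"],
    Description="Moodle instance-local data backup",
)
print("Snapshot started:", snap["SnapshotId"])
```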

I have a Moodle site running on AWS, and my opinion is that hosting this way does not take full advantage of a cloud-based solution. You may wish to consider using EC2 to run the code, EFS for the moodledata files, RDS for a managed database, ElastiCache for Redis caching, ELB for load balancing across multiple EC2 instances and for HTTPS termination, and S3 Glacier for backups. If you have more than one EC2 instance, you can use spot instances to save money.
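To make the EFS part of that architecture concrete, here is a hedged boto3 sketch that provisions an EFS file system and one mount target for moodledata. The subnet and security group IDs are placeholders, and mounting the file system on each EC2 instance (and pointing Moodle's data directory at it) is still a separate, host-level step.

```python
# Sketch only: provision an EFS file system intended for moodledata.
# Subnet and security group IDs are hypothetical placeholders.
import boto3

efs = boto3.client("efs", region_name="us-east-1")

fs = efs.create_file_system(
    CreationToken="moodledata-efs",
    PerformanceMode="generalPurpose",
    Tags=[{"Key": "Name", "Value": "moodledata"}],
)

# Create one mount target per subnet that hosts a Moodle EC2 instance.
efs.create_mount_target(
    FileSystemId=fs["FileSystemId"],
    SubnetId="subnet-0123456789abcdef0",
    SecurityGroups=["sg-0123456789abcdef0"],
)
print("EFS file system created:", fs["FileSystemId"])
```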

Related

How to save doccano database to Google Cloud Storage after deploying to Cloud Run?

I deployed a doccano docker container to Cloud Run and I am successfully able to reach the WebApp.
Everything works fine, such as log in, data import and annotation.
Now I would like to connect the container to Google Cloud Storage in order to save all annotations in a bucket. Currently, all data is lost after the container restarts.
Any hints on how to accomplish that are highly appreciated!
What I (kind of) tried:
The container is up and running, and some environment variables are set. But I don't know how I can set a bucket URI within the doccano Docker container (doccano's documentation is a bit sparse in that regard).
Maybe this can be helpful for anyone with a similar use case:
My solution/workaround for deploying doccano on GCP was deploying a Docker container to Compute Engine (and opening a port to the app) instead of Cloud Run. Cloud Run does indeed seem to be the wrong service for that use case. Compute Engine has persistent storage, which keeps all of the data even if the container has to restart.
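If you still want copies of the annotations in a bucket on top of the persistent disk, one option is a small scheduled upload with the google-cloud-storage client. This is only a sketch; the bucket name and the path of the exported annotations file are assumptions, not something doccano configures for you.

```python
# Sketch: copy a doccano export file into a GCS bucket, e.g. from a cron job on the VM.
# The bucket name and local file path are hypothetical placeholders.
from datetime import datetime, timezone

from google.cloud import storage


def backup_to_gcs(local_path: str, bucket_name: str) -> None:
    client = storage.Client()  # uses the VM's service account credentials
    bucket = client.bucket(bucket_name)
    stamp = datetime.now(timezone.utc).strftime("%Y%m%dT%H%M%SZ")
    blob = bucket.blob(f"doccano-backups/{stamp}-annotations.json")
    blob.upload_from_filename(local_path)
    print("Uploaded", blob.name)


if __name__ == "__main__":
    backup_to_gcs("/data/annotations-export.json", "my-doccano-backups")
```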

Can I run a Neo4j container with AWS Copilot for a website?

I use Neo4j as the backend database for my web application. I'm looking at AWS Copilot to manage the services, migrating away from Docker Compose. For the last 5 years I've run Neo4j in a Docker container on an EC2 instance with the data persisted on an external EBS volume.
I'm finding the Copilot docs show how to set up a static website with no database, or (possibly) how to connect to one of their in-bred storage solutions like DynamoDB. On the Neo4j side, all I see is how to set up the Neo4j AMI from the AWS Marketplace.
Could anyone explain or point me to an example of how to use AWS Copilot to deploy Neo4j in a Docker container and link it to a web service? Or, if this is a bad idea, what would be a better approach?
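Not a Copilot-specific answer, but whichever way the container ends up being wired in (Compose, Copilot service discovery, or a plain EC2 host), the web service usually only needs the Neo4j URI and credentials injected as environment variables. A minimal sketch with the official Python driver; the variable names are my own, not Copilot defaults.

```python
# Sketch: a web service connecting to Neo4j via injected environment variables.
# NEO4J_URI / NEO4J_USER / NEO4J_PASSWORD are assumed names, not Copilot defaults.
import os

from neo4j import GraphDatabase

uri = os.environ.get("NEO4J_URI", "bolt://localhost:7687")
user = os.environ.get("NEO4J_USER", "neo4j")
password = os.environ["NEO4J_PASSWORD"]

driver = GraphDatabase.driver(uri, auth=(user, password))

# Quick connectivity check.
with driver.session() as session:
    ok = session.run("RETURN 1 AS ok").single()["ok"]
    print("Neo4j reachable:", ok)

driver.close()
```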

How to properly use DynamoDB in a Docker container?

I am new to Docker and trying to figure out how to use DynamoDB and boto3 within my Docker image. I have followed many tutorials and read many articles. From what I have seen, the basic setup of most dockerized applications is a docker-compose file with two images: the service you have built and an image of the database. Here is where I am confused: the only image I can find for DynamoDB is dynamodb-local, and to my understanding that image is only used to create a local database on your computer. I need the ability to connect to an actual DynamoDB table in my AWS account. I currently just have instructions in my Dockerfile to install boto3 at build time. Am I doing anything wrong? Could anyone give some clarity, or point me to some good resources?
If you need to connect to an external DynamoDB instance then you don't have to create a container for it.
You can just pass the required credentials to access the AWS hosted instance through environment variables to the other service container.
Although I do recommend spinning up a local database for development purposes.
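To make that concrete, here is a minimal boto3 sketch (the environment variable and table names are my own placeholders) that talks to the AWS-hosted DynamoDB when the usual AWS credential variables are present, and to a dynamodb-local container when an endpoint override is set:

```python
# Sketch: one code path for both AWS-hosted DynamoDB and a local dynamodb-local container.
# DYNAMODB_ENDPOINT and the table name are hypothetical placeholders.
import os

import boto3

# For dynamodb-local in docker-compose, set DYNAMODB_ENDPOINT=http://dynamodb:8000.
# Leave it unset to use the real AWS service with the usual AWS_* credential variables.
endpoint = os.environ.get("DYNAMODB_ENDPOINT")

dynamodb = boto3.resource(
    "dynamodb",
    region_name=os.environ.get("AWS_REGION", "us-east-1"),
    endpoint_url=endpoint,  # None means "use the AWS endpoint"
)

table = dynamodb.Table("my-app-table")
table.put_item(Item={"pk": "user#1", "name": "example"})
print(table.get_item(Key={"pk": "user#1"})["Item"])
```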

How to configure Minio sourced from a subfolder in a single bucket in Docker?

I am trying to find a way to configure MinIO using Docker to back-end into a single S3 bucket, enabling my client to expose S3 capabilities to their internal customers.
To meet some very specialized compliance rules in an air-gapped environment, my client was provisioned a single bucket in an on-premise S3-compatible solution. They cannot get additional buckets but need to provide their internal organizational customers access to S3 capabilities, including the ability to leverage buckets, ACLs, etc. The requirement is to use their existing S3 storage bucket and not other on-premise storage.
I tried the MinIO gateway, but it tries to create and manage new buckets on the underlying S3 provider. I couldn't find anything like a "prefix" capability I could supply to force it to work only inside {host}/{bucketName} instead of the root endpoint for their keys.
The MinIO server might work, but we'd need to mount a Docker volume backed by their underlying bucket, and I'm concerned about the solution becoming brittle. Also, I can't seem to find any well-regarded, production-ready, vendor-supported S3 volume drivers. Since I don't have a volume plugin, I haven't validated performance yet, though I'm concerned it will be sub-par as well.
How can I, in a docker environment, make gateway work to provide bucket/user/management capabilities all rooted in a single underlying bucket/folder? I'm open to alternative designs provided I can meet the customer's requirements (run via docker, store in their underlying S3 storage, provide ability to provision and secure new buckets).
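This is not an answer from the thread, but one fallback worth noting: if the gateway route stays a dead end, "buckets" can be emulated as prefixes inside the single provisioned bucket directly from client code against the S3-compatible endpoint. It does not give you MinIO's user or policy management, and every name below is a placeholder.

```python
# Sketch: treat prefixes inside the one provisioned bucket as pseudo-buckets.
# Endpoint, credentials, and names are hypothetical placeholders.
import boto3

s3 = boto3.client(
    "s3",
    endpoint_url="https://onprem-s3.example.internal",  # the on-premise S3-compatible endpoint
    aws_access_key_id="PLACEHOLDER",
    aws_secret_access_key="PLACEHOLDER",
)

REAL_BUCKET = "client-shared-bucket"  # the single bucket the client was provisioned


def put_object(pseudo_bucket: str, key: str, body: bytes) -> None:
    # Every "pseudo-bucket" is just a key prefix inside the one real bucket.
    s3.put_object(Bucket=REAL_BUCKET, Key=f"{pseudo_bucket}/{key}", Body=body)


def list_objects(pseudo_bucket: str) -> list[str]:
    resp = s3.list_objects_v2(Bucket=REAL_BUCKET, Prefix=f"{pseudo_bucket}/")
    return [obj["Key"] for obj in resp.get("Contents", [])]


put_object("team-a", "reports/q1.csv", b"col1,col2\n1,2\n")
print(list_objects("team-a"))
```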

AWS OpsWorks vs AWS Beanstalk vs AWS CloudFormation? [closed]

I would like to know: what are the advantages and disadvantages of using AWS OpsWorks vs AWS Elastic Beanstalk vs AWS CloudFormation?
I am interested in a system that can be auto scaled to handle any high number of simultaneous web requests (From 1000 requests per minute to 10 million rpm.), including a database layer that can be auto scalable as well.
Instead of having a separate instance for each app, ideally I would like to share some hardware resources efficiently. In the past I have mostly used an EC2 instance + RDS + CloudFront + S3.
The stack system will host some high traffic ruby on rails apps that we are migrating from Heroku, also some python/django apps and some PHP apps as well.
I would like to know: what are the advantages and disadvantages of using AWS OpsWorks vs AWS Elastic Beanstalk vs AWS CloudFormation?
The answer is: it depends.
AWS OpsWorks and AWS Beanstalk are (I've been told) simply different ways of managing your infrastructure, depending on how you think about it. CloudFormation is simply a way of templatizing your infrastructure.
Personally, I'm more familiar with Elastic Beanstalk, but to each their own. I prefer it because it can do deployments via Git. It is public information that Elastic Beanstalk uses CloudFormation under the hood to launch its environments.
For my projects, I use both in tandem. I use CloudFormation to construct a custom-configured VPC environment, S3 buckets and DynamoDB tables that I use for my app. Then I launch an Elastic Beanstalk environment inside of the custom VPC which knows how to speak to the S3/DynamoDB resources.
I am interested in a system that can be auto scaled to handle any high number of simultaneous web requests (From 1000 requests per minute to 10 million rpm.), including a database layer that can be auto scalable as well.
Under the hood, OpsWorks and Elastic Beanstalk use EC2 + CloudWatch + Auto Scaling, which is capable of handling the loads you're talking about. RDS provides support for scalable SQL-based databases.
Instead of having a separate instance for each app, ideally I would like to share some hardware resources efficiently. In the past I have mostly used an EC2 instance + RDS + CloudFront + S3.
Depending on what you mean by "some hardware resources", you can always launch standalone EC2 instances alongside OpsWorks or Elastic Beanstalk environments. At present, Elastic Beanstalk supports one webapp per environment. I don't recall what OpsWorks supports.
The stack system will host some high traffic ruby on rails apps that we are migrating from Heroku, also some python/django apps and some PHP apps as well.
All of this is fully supported by AWS. OpsWorks and Elastic Beanstalk have optimized themselves for an array of development environments (Ruby, Python and PHP are all on the list), while EC2 provides raw servers where you can install anything you'd like.
OpsWorks is an orchestration tool like Chef (in fact, it's derived from Chef), Puppet, Ansible, or SaltStack. You use OpsWorks to specify the state that you want your network to be in by specifying the state that you want each resource (server instances, applications, storage) to be in, and you specify the state of each resource by specifying the value you want for each attribute of that state. For example, you might want the Apache service to always be up and running and to start on boot, with apache as the user and apache as the Linux group.
CloudFormation is a JSON template (**) that specifies the state of the resource(s) that you want to deploy, e.g. a t2.micro EC2 instance in us-east-1 as part of the VPC 192.168.1.0/24. In the case of an EC2 instance, you can specify what should run on that resource through a custom bash script in the user-data section of the EC2 resource. CloudFormation is just a template; the template only gets fleshed out as running resources if you run it, either through the AWS Management Console for CloudFormation or via the AWS CLI for CloudFormation, i.e. aws cloudformation ...
Elastic Beanstalk is a PaaS: you can upload Ruby/Rails, Node.js, Python/Django, or Python/Flask apps directly. If you're running anything else, such as Scala or Haskell, create a Docker image for it and upload that Docker image into Elastic Beanstalk (*).
You can upload your app into Elastic Beanstalk either by running the AWS CLI for CloudFormation or by creating an OpsWorks recipe that uploads your app into Elastic Beanstalk. You can also run the AWS CLI for CloudFormation through OpsWorks.
(*) In fact, AWS's documentation for its Ruby app example was so poor that I lost patience, embedded the example app into a Docker image, and uploaded that Docker image into Elastic Beanstalk.
(**) As of September 2016, CloudFormation also supports YAML templates.
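Going back to the CloudFormation point above: the same kind of template can also be launched programmatically rather than through the console or the aws CLI. A minimal hedged sketch with boto3; the inline template is deliberately bare and the AMI ID is a placeholder.

```python
# Sketch: launch a CloudFormation stack programmatically with boto3.
# The template is deliberately minimal; the AMI ID is a hypothetical placeholder.
import json

import boto3

template = {
    "AWSTemplateFormatVersion": "2010-09-09",
    "Resources": {
        "WebServer": {
            "Type": "AWS::EC2::Instance",
            "Properties": {
                "InstanceType": "t2.micro",
                "ImageId": "ami-0123456789abcdef0",  # replace with a real AMI for your region
            },
        }
    },
}

cfn = boto3.client("cloudformation", region_name="us-east-1")
cfn.create_stack(StackName="demo-ec2-stack", TemplateBody=json.dumps(template))

# Block until the stack finishes creating (raises if creation fails).
cfn.get_waiter("stack_create_complete").wait(StackName="demo-ec2-stack")
print("Stack created")
```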
AWS Elastic Beanstalk:
Deploy and manage applications in the AWS cloud without worrying about the infrastructure that runs your web applications.
There is no need to worry about EC2 or other installations.
AWS OpsWorks:
AWS OpsWorks is an application management service that makes it easy for new DevOps users to model and manage their entire application.
In OpsWorks you can share layer "roles" across a stack to use fewer resources by combining the specific jobs an underlying instance may be doing.
Layer Compatibility List (as long as security groups are properly set):
HA Proxy : custom, db-master, and memcached.
MySQL : custom, lb, memcached, monitoring-master, nodejs-app, php-app, rails-app, and web.
Java : custom, db-master, and memcached.
Node.js : custom, db-master, memcached, and monitoring-master.
PHP : custom, db-master, memcached, monitoring-master, and rails-app.
Rails : custom, db-master, memcached, monitoring-master, php-app.
Static : custom, db-master, memcached.
Custom : custom, db-master, lb, memcached, monitoring-master, nodejs-app, php-app, rails-app, and web.
Ganglia : custom, db-master, memcached, php-app, rails-app.
Memcached : custom, db-master, lb, monitoring-master, nodejs-app, php-app, rails-app, and web.
reference : http://docs.aws.amazon.com/opsworks/latest/userguide/layers.html
AWS CloudFormation - create and update your environments.
AWS OpsWorks - manage the systems inside those environments, as you would with Chef or Puppet.
AWS Elastic Beanstalk - create, manage, and deploy.
But personally I like using CloudFormation and OpsWorks together, each at full power for what it is meant for.
Use CloudFormation to create your environment; then you can call OpsWorks from CloudFormation scripts to launch your machines, and you will have an OpsWorks stack to manage them. For example, add a user to a Linux box using OpsWorks, or patch your boxes using Chef recipes. You can also write Chef recipes for deployment, or use CodeDeploy, which is built specifically for deployment.
AWS OpsWorks - this is an application management service. It helps configure applications through scripting and uses Chef as the DevOps framework for application management and operations.
There are templates that can be used to configure servers, databases, and storage, and the templates can also be customized to perform other tasks. DevOps engineers keep control over the application's dependencies and infrastructure.
AWS Elastic Beanstalk - it provides environments for languages such as Java, Node.js, Python, Ruby, and Go. Elastic Beanstalk provisions the resources to run the application; developers do not need to worry about the infrastructure, and they do not have control over it.
AWS CloudFormation - CloudFormation provides sample templates to provision and manage AWS resources in an orderly way.
As many others have commented, AWS Elastic Beanstalk, AWS OpsWorks, and AWS CloudFormation offer different solutions for different problems.
In order to accomplish the following:
I am interested in a system that can be auto scaled to handle any high number of simultaneous web requests (From 1000 requests per minute to 10 million rpm.), including a database layer that can be auto scalable as well.
And taking into consideration that you are in a migration process, I strongly recommend that you start looking at an AWS Lambda & AWS DynamoDB solution (or a hybrid one).
Both are designed to auto scale in a simple way and may be a very cheap solution.
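As a rough illustration of that direction (the table name and event shape are my own assumptions, not from the thread), a Lambda handler writing incoming requests into DynamoDB scales out with demand and leaves no servers to manage:

```python
# Sketch: a Lambda handler persisting incoming requests to DynamoDB.
# The "requests" table and the API Gateway proxy event shape are assumptions.
import json
import uuid

import boto3

dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table("requests")


def handler(event, context):
    # With an API Gateway proxy integration, the request body arrives as a string.
    item = {"id": str(uuid.uuid4()), "body": event.get("body") or "{}"}
    table.put_item(Item=item)
    return {"statusCode": 200, "body": json.dumps({"id": item["id"]})}
```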
You should use OpsWorks in place of CloudFormation if you need to deploy an application that requires updates to its EC2 instances. If your application uses a lot of AWS resources and services, including EC2, use a combination of CloudFormation and OpsWorks.
If your application will need other AWS resources, such as a database or storage service, use CloudFormation to deploy Elastic Beanstalk along with those resources.
Just use Terraform and ECS or EKS.
OpsWorks, Elastic Beanstalk, and CloudFormation are old tech now. :-)
