I'm wondering how the storage template option "deleteOnExit" works in Cloudify 2.7.1 Stable.
I'm working on an OpenStack cloud, and in my case the "deleteOnExit" option in the "SMALL_BLOCK" storage template is set to true.
I use dynamic storage allocation to create (with the SMALL_BLOCK template), attach, mount, and format a volume via the context storage API. When I undeploy the application, the storage is not destroyed. Is this normal behavior?
Thanks.
Yes, this is the normal behavior: with dynamic storage, you are also responsible for deleting the volume when you undeploy.
Here is an example of deleting the volume when the 'shutdown' lifecycle event is executed:
shutdown {
    context.storage.unmount(device)
    context.storage.detachVolume(volumeId)
    context.storage.deleteVolume(volumeId)
}
Say I have Hasura running in a container (inside Kubernetes), and I want this Hasura container to connect to 3 different Postgres databases.
Is there a way to configure this without using the Hasura console web page, since this has to do with scaling later on?
You can use the pg_add_source API to dynamically add new Database sources to Hasura.
Conversely, you can use pg_drop_source to remove them.
The above approaches would work in a dynamic environment where databases are being added and removed regularly. If they're more static, you might want to consider programmatically manipulating the Metadata files and then applying the changes using metadata apply instead.
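As a rough sketch of the first approach: pg_add_source is a call against Hasura's metadata API. The endpoint URL, admin secret, source names, and connection strings below are all placeholders, not values from your setup.

```python
# Sketch only: registering three Postgres sources through Hasura's metadata API.
# The endpoint, admin secret, and connection strings are placeholders.
import json
import urllib.request

HASURA_METADATA_URL = "http://localhost:8080/v1/metadata"  # placeholder
ADMIN_SECRET = "myadminsecret"                             # placeholder

def pg_add_source_request(name: str, database_url: str) -> urllib.request.Request:
    """Build the pg_add_source call for one database."""
    payload = {
        "type": "pg_add_source",
        "args": {
            "name": name,
            "configuration": {
                "connection_info": {"database_url": database_url},
            },
        },
    }
    return urllib.request.Request(
        HASURA_METADATA_URL,
        data=json.dumps(payload).encode(),
        headers={
            "Content-Type": "application/json",
            "x-hasura-admin-secret": ADMIN_SECRET,
        },
        method="POST",
    )

# One call per database; send each with urllib.request.urlopen(...).
databases = {
    "db1": "postgres://user:pass@host1:5432/app",  # placeholders
    "db2": "postgres://user:pass@host2:5432/app",
    "db3": "postgres://user:pass@host3:5432/app",
}
requests = [pg_add_source_request(name, url) for name, url in databases.items()]
```

pg_drop_source takes the same shape with `"type": "pg_drop_source"` and the source name in `args`.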
AWS Foundational Security Best Practices v1.0.0 has a high-risk check: [ECS.5] ECS containers should be limited to read-only access to root filesystems. The remediation explains how to change this in the console, but I haven't found a way to do it for a QueueProcessingFargateService using CDK.
If a QueueProcessingFargateService could be created without an image, this could have been solved by calling add_container on the task definition, but image is mandatory so that doesn't work.
Does anyone know if it is possible to create a QueueProcessingFargateService with read-only root filesystem and if so, how?
(I use CDK in Python, but a solution in any other CDK language will be just as useful)
As this isn't a property directly supported on the construct, you'll need to use escape hatches to set it:
https://docs.aws.amazon.com/cdk/v2/guide/cfn_layer.html#cfn_layer_resource
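In Python that escape hatch might look like the sketch below (untested): you reach the L1 CfnTaskDefinition behind the construct's task definition and override the property by its CloudFormation path. The variable names and the assumption that your container is the first entry in ContainerDefinitions are mine, not from the original post.

```python
# Sketch (CDK in Python): since ReadonlyRootFilesystem isn't exposed on
# QueueProcessingFargateService, drop down to the underlying
# AWS::ECS::TaskDefinition via an escape hatch. The property path follows
# the CloudFormation schema; index 0 assumes your container is listed first.

def make_root_fs_read_only(task_definition) -> None:
    """task_definition: the construct's .task_definition (duck-typed here)."""
    # node.default_child is the L1 Cfn resource behind the L2 construct
    cfn_task_def = task_definition.node.default_child
    cfn_task_def.add_property_override(
        "ContainerDefinitions.0.ReadonlyRootFilesystem", True
    )

# Usage in a stack (assumed names):
# queue_service = aws_ecs_patterns.QueueProcessingFargateService(self, "Svc", ...)
# make_root_fs_read_only(queue_service.task_definition)
```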
I would like to enable caching in ArangoDB, automatically when my app start.
I'm using docker-compose to start the whole thing but apparently there's no simple parameter to enable caching in ArangoDB official image.
According to the doc, all the files in /docker-entrypoint-initdb.d/ are executed at container start. So I added a js file with that code:
require('@arangodb/aql/cache').properties({mode: 'on'});
It is indeed executed but caching doesn't seem to be enabled (from what I see with arangosh within the container).
My app is a JS app using arangojs, so if I can do it this way, I'd be happy too.
Thanks!
According to the performance and server config docs, you can enable caching in several ways.
Your method of adding require("@arangodb/aql/cache").properties({ mode: "on" }); to a .js file in the /docker-entrypoint-initdb.d/ directory should work, but keep an eye on the logs. You may need to redirect log output with a different driver (journals, syslog, etc.) to see what's going on. Make sure to run the command via arangosh to see if it works.
If that's a bust, you might want to see if there is a way to pass parameters at runtime (such as --query.cache-mode on). Unfortunately, I don't use Docker Compose, so I can't give you direct advice here, but overriding the container command (e.g. command: ["arangod", "--query.cache-mode", "on"]) should pass the option through to the server; you could also try something like -e QUERY.CACHE-MODE=ON.
If there isn't a way to pass params, then you could modify the config file: /etc/arangodb3/arangod.conf.
And don't forget about the REST API methods for system management. You can view and alter the AQL configuration in the Web UI under Support -> Rest API -> AQL.
One thing to keep in mind - I'm not sure if the caching settings are global or tied to a specific database. View the configuration on multiple databases (including _system) to test the settings.
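Since the asker's app already talks to the server, the REST route may be the simplest fit: ArangoDB exposes the query results cache settings at /_api/query-cache/properties. Below is a hedged sketch in Python (the same request is trivial from arangojs); host, port, and credentials are placeholders.

```python
# Sketch: enabling the AQL query results cache over ArangoDB's HTTP API.
# /_api/query-cache/properties is the documented endpoint; the URL and
# credentials below are placeholders.
import base64
import json
import urllib.request

ARANGO_URL = "http://localhost:8529"  # placeholder

def build_cache_request(mode: str = "on") -> urllib.request.Request:
    """Build a PUT request setting the query cache mode ('on'/'off'/'demand')."""
    auth = base64.b64encode(b"root:password").decode()  # placeholder creds
    return urllib.request.Request(
        f"{ARANGO_URL}/_api/query-cache/properties",
        data=json.dumps({"mode": mode}).encode(),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Basic {auth}",
        },
        method="PUT",
    )

# Send with urllib.request.urlopen(build_cache_request("on")) once the
# server is reachable, e.g. from a startup hook in your app.
```

Note the request runs against whichever database the URL path targets, which is also a cheap way to test whether the setting is per-database or global, as mentioned above.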
In Azure, the CentOS default drive for /var/opt (where forests go by default) is only 30 GB. If you install DataHub, even if you set the project folder to /data/dataHub (which is on /dev/sdc), it creates all the storage under /var/opt, which will fill up with minimal data. This is clearly not ideal. I assume I am missing something and there is a config file somewhere where I can change the defaults for database creation, so my data is separate from my software.
Any help will be appreciated.
MarkLogic uses a configuration file referenced at startup to determine where the data should go. If you need the default data directory to be in a different location, you should create/edit /etc/marklogic.conf and set a value for MARKLOGIC_DATA_DIR:
# where MarkLogic keeps its data (must be a directory, e.g. where /dev/sdc is mounted, not the raw device)
export MARKLOGIC_DATA_DIR=/data
When you restart MarkLogic it will use that as the new default location. Alternatively, you could make the mount point for the device /var/opt/MarkLogic or /var/opt/MarkLogic/Forests and leave the default settings in place.
I have a CF application which I pushed and it is working as expected. Now I want to change some file content at runtime to avoid a re-push. The application is deployed to a Warden container, so it "persists" (for this instance) in the container's filesystem. How can I access this file location (I have a Node application, so I guess with the fs module)? I.e., if I've pushed an app with the following structure:
myApp
  folder1
    index.html
1. If I want to change the index.html content by overwriting it, how should I do that? I know the path myApp/folder1/index.html, but how do I know where my app is located in the container filesystem?
2. Is there a way, when I push an application, to say where to put it, i.e. at a specific path in the container filesystem?
E.g., when you install an application on Windows you decide where to put it:
C:\myApp\folder1\index.html
or
D:\myApp\folder1\index.html
I know that maybe this is an advanced question, but your help is appreciated!
P.S. Let's say that I have some proxy for the application in the app container which listens on the app port, and this proxy can make some changes to the application's files.
Writing directly to the container file system is not the right approach, because Cloud Foundry containers are intended to be ephemeral and transient.
Let's say that I have one instance of an application running, in Container A, and I change the contents of folder1/index.html. If that instance fails, and is automatically restarted by Cloud Foundry, the new instance won't have the persisted changes. If I need to scale up to 3 instances of my application, then Containers B and C won't have the changed files.
Allowing Cloud Foundry to manage the container file system will assure that you have consistent, repeatable behavior in your application.
If you need to make file changes in your Cloud Foundry application instance, the two recommended approaches are:
1. Read and write your file from a file service that is managed by Cloud Foundry. This will ensure that all application instances are accessing the same file system, and that your changes will survive beyond the container lifecycle.
2. Make the changes in your application artifact, and re-push the application.
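For the file-service approach, a bound service exposes its credentials to every instance through the VCAP_SERVICES environment variable. A minimal sketch in Python (the same idea applies in Node); the service name "my-shared-files" is a placeholder:

```python
# Sketch: resolving a bound service's credentials from VCAP_SERVICES so
# every instance reads and writes the same external store.
import json
import os

def get_service_credentials(service_name: str) -> dict:
    """Look up a bound service's credentials in VCAP_SERVICES."""
    services = json.loads(os.environ.get("VCAP_SERVICES", "{}"))
    for instances in services.values():          # keyed by service label
        for instance in instances:
            if instance.get("name") == service_name:
                return instance.get("credentials", {})
    raise KeyError(f"service {service_name!r} is not bound")

# Usage (assumed service name): every instance resolves the same shared
# store, so changes survive container restarts and scaling.
# creds = get_service_credentials("my-shared-files")
```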