Essbase location change using disk volume - Hyperion

I'm looking to change the application database backup location from one volume (say /v02/) to /vol4/ using MaxL commands. I tried the commands below, but I could not find the backup artifacts under /vol4/. Can anyone please help me?
Commands I tried:
alter database application_name.Consol add disk volume '/vol4/NQITA/';
alter database application_name.Consol set disk volume '/vol4/NQITA/' file_type index_data;
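For what it's worth, my understanding is that disk volume settings only apply to index and page files Essbase writes after the change; files that already exist stay on the old volume until the database is restructured or the data is exported and reloaded. A minimal MaxL sketch of the full sequence, reusing the names from your commands (the restructure step is the part I suspect is missing):
alter database application_name.Consol add disk volume '/vol4/NQITA/';
alter database application_name.Consol set disk volume '/vol4/NQITA/' file_type index_data;
/* force a full restructure so the existing index and page files move to the new volume */
alter database application_name.Consol force restructure;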

Related

Redis bulk action restore RDB - RDB Path issue

I'm new to Redis Cache and I just created a local database in which I'm trying to restore an export from an RDB file.
I'm using Redis Insight as a client, running via docker, with volumes.
I identified the bulk_operation folder, copied the .rdb file in there and now from the Redis Insight interface I'm trying to restore the RDB using bulk actions.
However, the path I'm providing results in "File not found", and I can't figure out the expected format.
./db/bulk_operation/export-7b85c642-3bfc-4904-b0c0-81c84aae6748.rdb
The /db folder is mounted to my local persistence folder.
Any advice is appreciated.
My attempt wasn't far from the truth: the expected format was /db/bulk_operation/export-7b85c642-3bfc-4904-b0c0-81c84aae6748.rdb - note that the leading dot is gone.
The fact that RedisInsight also randomly gives errors did not help; it failed on the first two tries, then it worked.
So don't give up on the first fail :)
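For context, the path is resolved inside the container, which is why it must be absolute from the container's root rather than relative. A minimal sketch of how such a RedisInsight (v1) container might be started - the host path here is an assumption:
docker run -d --name redisinsight -p 8001:8001 -v /home/me/redisinsight:/db redislabs/redisinsight:latest
With that mount, a file placed under /home/me/redisinsight/bulk_operation/ on the host shows up as /db/bulk_operation/... inside the container, which matches the accepted path format.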

azcopy v10 - copy to destination only if destination file does not exist

My command is .\azcopy cp "source" "dest" --recursive=true
Where both source and dest are storage containers.
When I run the cp command, it seems like azcopy iterates over every file and transfers it to the destination.
Is there a way to copy a file only if it does not exist or is different in the destination?
azcopy sync does something similar, but as I understand it, it only supports local/container pairs as destination/origin, not container/container.
We've just added container-to-container support in version 10.3.
If you want to stick with AzCopy v10, there is an --overwrite parameter which you can set to true (the default), false, or prompt. Setting it to false means azcopy won't overwrite any files that already exist in the destination; however, it also won't overwrite files that are newer in the source - not sure if that is a deal-breaker for you.
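For example, extending the original command (source and destination are placeholders, as above):
.\azcopy cp "source" "dest" --recursive=true --overwrite=false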
Your understanding is right: currently, azcopy sync only supports syncing between the local file system and a blob storage container, not container/container, and synchronization is one-way. As a workaround, you could perform the process in two steps: sync from the blob storage source to a local file path, then sync from that local file path to the blob storage destination.
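A sketch of that two-step workaround, with hypothetical account names, container names, SAS tokens, and a local staging folder:
azcopy sync "https://myaccount.blob.core.windows.net/mycontainer?<SAS>" "C:\staging" --recursive
azcopy sync "C:\staging" "https://myaccount.blob.core.windows.net/mycontainer1?<SAS>" --recursive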
Another option is to use AzCopy v8.1. The /XO and /XN parameters allow you to exclude older or newer source resources from being copied, respectively. If you only want to copy source resources that don't exist in the destination, you can specify both parameters in the AzCopy command:
AzCopy /Source:http://myaccount.blob.core.windows.net/mycontainer /Dest:http://myaccount.blob.core.windows.net/mycontainer1 /SourceKey:<sourcekey> /DestKey:<destkey> /S /XO /XN

How to add Chromedriver to existing Docker?

I have a personal ASP.NET Core project which scrapes data from the web using Selenium and Chromium and saves it in a local SQLite database.
I want to be able to run this app in a Docker image on my Synology NAS. I managed to create and run the Docker image (on my Mac), and it displays data from the SQLite DB correctly, but I get an error when trying to scrape:
The chromedriver file does not exist in the current directory or in a directory on the PATH environment variable.
From my very limited understanding of Docker in general, I understand that I need to add chromedriver inside the Docker image somehow.
I've searched a lot, went through ~30 different examples, and still can't get this to work.
Any help is appreciated!
You need to build a new image based on the existing one, in which you add the chromedriver binary. In other words, you need to extend your current image.
So create a directory containing a Dockerfile and the chromedriver binary.
Your Dockerfile should look like this:
FROM your_existing_image_name:version
COPY chromedriver desired_path_inside_container
Then open a terminal inside this directory and execute:
docker build -t your_existing_image_name:version++ .
After that you should be able to start a container from the newly created image.
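Put concretely, a sketch might look like this - the image names are hypothetical, and /usr/local/bin is chosen as the target path because it is on PATH in most Linux base images:
FROM myscraper:1.0
# copy the driver somewhere that is already on PATH and make sure it is executable
COPY chromedriver /usr/local/bin/chromedriver
RUN chmod +x /usr/local/bin/chromedriver
Then build as above:
docker build -t myscraper:1.1 .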
Some notes:
I have assumed that your existing image has been tagged with a version. If that is not the case, remove :version from the Dockerfile.
Similarly, remove :version++ from the build command. However, it is good practice to include versioning in your images.
I have not added an entrypoint, assuming that you do not need to change the existing one.
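As an alternative, assuming your base image is Debian-based, you could install Chromium and a matching driver from the distribution packages instead of copying a binary in; this keeps the browser and driver versions in sync (the package names are Debian/Ubuntu specific):
FROM your_existing_image_name:version
# install the browser and its matching driver; chromium-driver puts chromedriver in /usr/bin, which is on PATH
RUN apt-get update \
    && apt-get install -y --no-install-recommends chromium chromium-driver \
    && rm -rf /var/lib/apt/lists/*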

Neo4j Dump: How to specify the database?

Having successfully created and populated a database with 200,000+ nodes, I would like to create a dump as a backup.
The instructions in the documentation are simple:
neo4j-admin dump --database=<database> --to=<destination-path>
But it's not clear what to use for <database>. If I use graph.db (or leave out the option) I get an error. I know the location of the database folder.
If I put the path to the database I get the following error:
unexpected error: 'database' should be a name but you seem to have specified a path
OS: Windows 10
Partial answer:
The database parameter refers to databases located in the neo4jFolder/data/databases folder, where neo4jFolder is the folder containing the unzipped Neo4j install.
For example: I unzipped the neo4j install zip into E:\Program Files\neo4j-community-3.3.2. My database was elsewhere on the drive. So I copied the database to E:\Program Files\neo4j-community-3.3.2\data\databases\MyDatabase. Then I was able to run neo4j-admin dump --database=MyDatabase --to=backup5.dmp successfully.
I don't know if it's possible to run dump on databases that are not found under /data/databases. I also don't know how to run a dump when Neo4j is installed with the exe installer. My solution is for the zip file installation.
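One possibly relevant detail: neo4j-admin reads conf/neo4j.conf, so instead of copying the database under data/databases you might be able to point the data directory elsewhere. A sketch for Neo4j 3.x (the path is a placeholder, and I haven't verified this with the exe installer):
# conf/neo4j.conf
dbms.directories.data=E:/my/data/folder
With that setting, <database> would then be a folder name under E:/my/data/folder/databases.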

Copying a local database from one computer to another ne04j

I created a Neo4j database on a PC, with many relationships, nodes, etc.
How do I move/copy the database from this PC to another one?
Thanks for the help,
Francesco
update1: I have tried to find conf/neo4j-server.properties but I don't have it...
This is a screenshot of my Neo4j folder (it is in my Windows Documents folder):
http://s12.postimg.org/vn4e22s3x/fold.jpg
Neo4j databases live in your filesystem; you can simply make a copy of the folder in which your Neo4j data is stored. If you are running standalone, this folder is configured in conf/neo4j-server.properties and the relevant line will look something like this:
org.neo4j.server.database.location=data/graph.db
Copy the contents of that folder to the graph database folder on your other machine. I'd recommend that your databases are not running when you do this.
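A sketch of the copy, assuming default paths, a Linux-style shell, and that Neo4j is stopped on both machines (adjust the paths for Windows):
# on the source machine, with Neo4j stopped
cp -r /path/to/neo4j/data/graph.db /some/backup/graph.db
# move the backup to the target machine, then place it at
# /path/to/neo4j/data/graph.db before starting Neo4j there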
I believe you're looking for the dump shell command, which you can use to export a database into a single Cypher CREATE statement; you'd "dump" the database and then import it on your new machine.
Information on using the command is outlined here: Neo4j docs
A Neo4j database can be dumped and loaded using the following commands:
neo4j-admin dump --database=<database> --to=<destination-path>
neo4j-admin load --from=<archive-path> --database=<database> [--force]
Limitations
The database should be shut down before running the dump and load commands.
https://neo4j.com/docs/operations-manual/current/tools/dump-load/
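For example, with placeholder paths and the default database name:
neo4j-admin dump --database=graph.db --to=/backups/graph.db.dump
neo4j-admin load --from=/backups/graph.db.dump --database=graph.db --force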
I used the above solution, but the file name was different.
In the Neo4j data folder, look for a folder called conf; inside it is a configuration file called neo4j.conf.
Inside this file you will see a line that points to the folder that contains the data; it's called "graph.db".
Replace it with the same folder from your backup of the DB that you want to clone.
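For reference, the line in question looks something like this in Neo4j 3.x (the replacement value is a hypothetical folder name under data/databases):
#dbms.active_database=graph.db
dbms.active_database=mybackup.db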
