I installed Apache Marmotta with Docker using docker pull apache/marmotta on an AWS server. I am able to see Core Services (http://34.229.180.217:8080/marmotta/core/admin/import) via the Import interface in my browser. However, I am not able to import RDF files through the interface.
The files (RDF and TTL) are on both my local machine and on the server. The files are very large (over 2 GB each) and so I'd like to use KiWi Loader to bring them into Marmotta so I can run SPARQL queries against them.
Is there a parameter I can adjust in Marmotta to allow for larger file imports? Otherwise, is it possible to use the KiWi Loader through the Docker installation? Any suggestions would be great.
You can import using the local directory. Just copy your RDF/TTL files to $MARMOTTA_HOME/import. You can define the target context through the directory structure. For example, if you want to store your data in http://34.229.180.217:8080/marmotta/foo, just put your file in $MARMOTTA_HOME/import/foo; here you are using the default context base. However, if you want to store the data in some other context, create a folder whose name is the URL-encoded context URI. For more details on the options Apache Marmotta provides for importing files, check the documentation.
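For the Docker install, a rough sketch of getting the files into that folder from the host (the container name and the MARMOTTA_HOME path inside the image are assumptions; adjust both to your setup):

    # Default context base: just drop the file under import/foo
    docker exec marmotta mkdir -p /var/lib/marmotta/import/foo
    docker cp dataset.ttl marmotta:/var/lib/marmotta/import/foo/
    # Explicit context: the folder name is the URL-encoded context URI
    docker exec marmotta mkdir -p "/var/lib/marmotta/import/http%3A%2F%2Fexample.org%2Fcontext"
    docker cp dataset.ttl "marmotta:/var/lib/marmotta/import/http%3A%2F%2Fexample.org%2Fcontext/"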
In my experience, I have had a lot of problems uploading big files. I think it is mostly because Apache Marmotta commits the data only after everything is in memory, at least in the KiWi implementation. I don't know whether you can upload in chunks, and setting the importer.batchsize property hasn't worked well for me.
Related
I am new to Docker. Recently I hosted a Docker image (ASP.NET Core published contents with the ASP.NET Core runtime) on Heroku. It is working fine. I am using LiteDB, a serverless database, for my application.
Every time I deploy new changes to Heroku (a new Docker image with the changes), the old LiteDB data file gets removed.
What I want is to deploy the new Docker image and have it use the old LiteDB data file that was already on the (Heroku) container.
Is there any way to store data (files, images, etc.) on Docker and retrieve it whenever I need it? E.g. in the above case, copy my LiteDB data file to my local computer.
If I am doing the above the wrong way, please point me to the correct way to do it.
Thanks.
This is not something you can do on Heroku (VOLUME is unsupported).
Your only solution is to store the data file somewhere else, such as Amazon S3, or to use a server-side database, such as PostgreSQL.
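For comparison only: on a Docker host you control yourself (not Heroku), a named volume is the usual way to keep the LiteDB file across image updates. A rough sketch, with the image name and data path as placeholders:

    # Not possible on Heroku -- shown only for a self-hosted Docker host.
    docker volume create litedb-data
    docker run -d --name myapp -v litedb-data:/app/data myregistry/myapp:latest
    # Deploying a new image: reattach the same volume and the data file survives.
    docker rm -f myapp
    docker run -d --name myapp -v litedb-data:/app/data myregistry/myapp:v2
    # Copy the LiteDB file back to the local machine when needed:
    docker cp myapp:/app/data/mydb.db ./mydb.db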
I need a storage system with the following requirements:
1. It should support data/service clustering
2. It should be open source so that I can extend its functionality later if needed
3. It should expose a file system, because I want to access some files via a public URL (direct access), so that I can store my scripts in these files and refer to them directly.
4. It should support some kind of authentication
5. It should be on-premises (not cloud).
Ceph seems to meet all the criteria, but does it support public access to files via a URL (point 3)? It can generate temporary URLs, but I want permanent URLs for a few files.
You could run Nextcloud and have your data volume (and database, if you feel so inclined) stored on the Ceph cluster. That is open source, lets you set up direct links to files (including permanent links), and is authenticated.
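A rough sketch of what that could look like with the official Nextcloud Docker image, assuming CephFS is already mounted on the host at /mnt/cephfs (names, ports and mount points are placeholders):

    # Assumes a CephFS mount at /mnt/cephfs on the Docker host -- adjust to your cluster.
    mkdir -p /mnt/cephfs/nextcloud-data
    docker run -d --name nextcloud -p 8081:80 \
      -v /mnt/cephfs/nextcloud-data:/var/www/html/data \
      nextcloud
    # Permanent public links to individual files can then be created from Nextcloud's share dialog.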
Currently I am using a saved_model file stored on my local disk to read an inference graph and use it in my servers. Unfortunately, passing a GCS path to the SavedModelBundle.load API doesn't work.
I tried providing a GCS path for the file, but it did not work.
Is this even supported? If not, how can I achieve this with the SavedModelBundle API? I have some production servers running on Google Cloud from which I want to serve some TensorFlow graphs.
A recent commit inadvertently broke the ability to load files from GCS. This has been fixed, and the fix is available on GitHub.
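Once you are on a build that includes the fix, loading from GCS should, in principle, look the same as loading from a local path. A minimal Java sketch, with the bucket path and tag as placeholders:

    import org.tensorflow.SavedModelBundle;
    import org.tensorflow.Session;

    public class GcsModelLoader {
        public static void main(String[] args) {
            // Hypothetical bucket/path -- replace with your exported SavedModel location.
            try (SavedModelBundle bundle =
                     SavedModelBundle.load("gs://my-bucket/models/my_model/1", "serve")) {
                Session session = bundle.session();
                // ... build and run inference with session.runner() ...
            }
        }
    }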
There is plenty of advice on how to change the base URL that Artifactory Pro is running on, set a custom base URL via the REST API, etc.
However, we need to change all instances of the base URL while the application is not running. So if any instance of the base URL exists in the file system or the MySQL database, it needs to be updated accordingly.
Thanks for any assistance.
The answer to that is a bit tricky. You can place an 'artifactory.config.import.xml' file under your '$ARTIFACTORY_HOME/etc/' folder. If you do so, Artifactory will, upon starting, consume the file and import it as its configuration.
PLEASE READ THIS PART CAREFULLY: this is the tricky part. Importing this file will overwrite any existing configuration on this Artifactory instance, meaning you have to hold on to the latest modified configuration before shutting down the instance.
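A rough sketch of what that could look like while the instance is stopped; the file names and the exact element holding the base URL are assumptions, so verify them against your own exported configuration first:

    cd $ARTIFACTORY_HOME/etc
    # Start from the most recently written configuration descriptor.
    cp artifactory.config.latest.xml artifactory.config.import.xml
    # Replace the old base URL wherever it appears in the descriptor (urlBase etc.):
    sed -i 's|http://old-host:8081/artifactory|https://new-host/artifactory|g' artifactory.config.import.xml
    # On the next start-up Artifactory consumes artifactory.config.import.xml as its configuration.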
I want to convert my Struts2 web application into an exe format, so that the exe file will load my project onto the server and the database into MySQL.
Are there any such tools available for loading files into a folder?
Are there any formats other than .exe to which I could convert my project to do this?
Is it possible to decrypt (decompile) the code from a class file back to a java file?
Which is the most secure format for deploying a Struts2 project to a server?
You want to convert it to an executable? Generally an installer exists to assist with automated installation: if you have created a product using all these technologies, it saves your customer from all the setup and installation steps, such as database configuration and other configuration.
"Is it possible to decrypt the code from class format to java format?"
There are many Java decompilers available that help you convert .class files back to .java files. They sometimes fail to convert everything 100%, but in most cases they show people what they actually want to see.
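For example, one way this is commonly done (jar and class names here are placeholders, not part of your project):

    # Using the open-source CFR decompiler to turn a .class back into readable Java source:
    java -jar cfr.jar MyAction.class --outputdir decompiled-src
    # The JDK's javap at least shows the disassembled bytecode if no decompiler is at hand:
    javap -c -p MyAction.class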
You can't load an exe file into a web server.
I suppose you could create an executable that includes a server and your war file, but I would strongly discourage the practice.
You could obfuscate (e.g., with ProGuard) and/or encrypt your .class files, but if someone is determined to get at your unobfuscated bytecode, they almost certainly will.
If they're not that determined, then it's probably not important enough to go through all the effort, debugging, and so on.
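If you do decide obfuscation is worth it, a rough ProGuard sketch (file names and the kept class pattern are placeholders, not tested against your project):

    # Write a minimal ProGuard configuration, then run it over the packaged application.
    cat > webapp.pro <<'EOF'
    -injars  myapp.war
    -outjars myapp-obfuscated.war
    -libraryjars <java.home>/lib/rt.jar
    # Keep Struts2 action classes so the framework can still resolve them by name:
    -keep public class * extends com.opensymphony.xwork2.ActionSupport { *; }
    EOF
    java -jar proguard.jar @webapp.pro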