There is plenty of advice on how to change the Base URL that Artifactory Pro is running on, set a custom Base URL via the REST API, etc.
However, we need to change all instances of the Base URL while the application is not running. So if any instance of the Base URL exists in the file system or the MySQL database, it needs to be updated accordingly.
Thanks for any assistance.
The answer to that is a bit tricky. You can place an 'artifactory.config.import.xml' file under your '$ARTIFACTORY_HOME/etc/' folder. By doing so, Artifactory will consume the file upon starting and import it as its configuration file.
PLEASE READ THIS PART CAREFULLY: This is the tricky part: importing this file will overwrite any existing configuration on this Artifactory instance. This means you have to capture the latest modified configuration before shutting down the instance.
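A minimal sketch of that procedure, assuming a standard $ARTIFACTORY_HOME layout (the source path of the edited configuration is a placeholder):

```shell
# 1. Stop the instance, keeping a copy of the latest configuration.
# 2. Edit that copy so every occurrence of the old Base URL is updated.
# 3. Drop it where Artifactory looks for an import on startup:
cp /path/to/edited-artifactory-config.xml \
   "$ARTIFACTORY_HOME/etc/artifactory.config.import.xml"
# 4. Start Artifactory; it consumes this file and OVERWRITES the
#    existing configuration with it.
```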
My challenge is that I'm working on a public Flutter app and I don't want to expose my API keys in the source code. I'm specifically looking to inject the Google Maps API key into my AppDelegate.swift file.
Ideally I'd want to be able to pass this in with a .env file (and then inject these keys as environment variables in CI/CD), but I'm also fine doing a --dart-define. I have not found any working example of how to get this working.
I'm able to get this working on the Android side of things without any issues.
You can save it as a plist dictionary and add code that reads it from the file. For example: https://stackoverflow.com/a/62916637/11798831
You could also rename the file and locate it with Bundle.main.path(forResource: "config", ofType: "env").
You don't have to commit it to the project; just keep it locally. Also add it as an additional file in CI/CD.
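A minimal sketch of the plist approach. The file name "Config.plist" and the key name "GMSApiKey" are assumptions; the file would be kept out of git and supplied locally or by CI:

```swift
import Foundation

// Read an API key from a plist that is NOT committed to the repo
// (e.g. ios/Runner/Config.plist, added to .gitignore and provided
// as an extra file in CI/CD).
func apiKey(named key: String, fromPlistAt path: String) -> String? {
    guard let data = FileManager.default.contents(atPath: path),
          let plist = try? PropertyListSerialization.propertyList(
              from: data, options: [], format: nil),
          let dict = plist as? [String: Any]
    else { return nil }
    return dict[key] as? String
}
```

In AppDelegate.swift you would then locate the file with Bundle.main.path(forResource: "Config", ofType: "plist") and pass the result to GMSServices.provideAPIKey(_:) (the Google Maps SDK entry point), guarding against a missing file in local checkouts.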
I'm starting to create documentation using AsciiDoc on my project, which follows a microservices architecture.
We have a microservice for documentation. In its files I want to link to another document in another microservice.
I can do a relative link inside my own component but when I try to go higher with ../ it does not work and the link does nothing.
Does anyone know why?
Could it be because AsciiDoc is installed in the Jenkins file of my component but not in the others?
Or is it because I am not using the link correctly?
I use it as described in the docs:
link:../other_microservice/other-document.asciidoc[]
I also tried the xref macro, with no more success.
Thanks a lot to anyone who can help me.
The link: macro is supposed to be used with a URL, not a file path. Generally, it does what you mean. However, Asciidoctor's safe mode prevents access to files which reside outside of the folder containing the source file specified for transformation.
So, if the documentation for your other microservices is going to be hosted separately (e.g. one URL per microservice), then you should update your link: macro usage to specify URLs instead.
If all of your microservice documentation is to be hosted under one URL, specify --safe when you invoke Asciidoctor. For more details, see: https://asciidoctor.org/docs/user-manual/#running-asciidoctor-securely
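For example, with separately hosted docs the link becomes a plain URL (docs.example.com is a placeholder for wherever each service's documentation is published):

```asciidoc
// cross-service link once each microservice's docs have their own URL
link:https://docs.example.com/other-microservice/other-document.html[other document]
```

If all the docs are built from one tree instead, the relative link:../... form from the question can be combined with the --safe invocation described above.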
I installed Apache Marmotta with Docker using docker pull apache/marmotta on an AWS server. I am able to see Core Services (http://34.229.180.217:8080/marmotta/core/admin/import) via the Import interface in my browser. However, I am not able to import RDF files through the interface.
The files (RDF and TTL) are on both my local machine and on the server. The files are very large (over 2 GB each) and so I'd like to use KiWi Loader to bring them into Marmotta so I can run SPARQL queries against them.
Is there a parameter I can adjust in Marmotta to allow for larger file imports? Otherwise, is it possible to use the KiWi Loader through the Docker installation? Any suggestions would be great.
You can import using the local directory: just copy your RDF/TTL files to $MARMOTTA_HOME/import. You can define your context base with a file-like structure. For example, if you want to store your data in http://34.229.180.217:8080/marmotta/foo, just store your file in $MARMOTTA_HOME/import/foo; here you are using the default context. However, if you want to store it in another context, create a folder whose name is the URL-encoded context URI. For more details on the options Apache Marmotta provides for importing files, check the documentation.
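A sketch of that directory layout ($MARMOTTA_HOME, the file name, and the example context URI are placeholders):

```shell
# default context: lands in http://<host>:8080/marmotta/foo
mkdir -p "$MARMOTTA_HOME/import/foo"
cp big-dataset.ttl "$MARMOTTA_HOME/import/foo/"

# custom context: the folder name is the URL-encoded context URI,
# e.g. http://example.org/context becomes:
mkdir -p "$MARMOTTA_HOME/import/http%3A%2F%2Fexample.org%2Fcontext"
```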
That said, I have had a lot of problems uploading big files. I think it is mostly because Apache Marmotta commits the data only after everything is in memory, which is how KiWi is implemented. I don't know if you can upload in chunks, and using the importer.batchsize property hasn't worked well for me.
I am working on an app in which I want to upload images and PDFs to an FTP server. I am using this reference. All is working well: the images and PDFs are uploaded to the server with proper names and sizes.
But now I want to check whether the directory already exists on the server, and I am not able to get that to work with this library.
So my question is: how do I check for a directory on the FTP server? If the directory is there, upload the files; if not, first create the directory and then upload the files into it.
Any ideas? Any help will be appreciated.
Different FTP servers will answer the LIST request in differing ways, so there is no single answer to this question. RFC959 says on the matter:
Since the information on a file may vary widely from system
to system, this information may be hard to use automatically
in a program, but may be quite useful to a human user.
Using the CWD request to change into the directory in question and checking for a successful response will detect the directory; however, that leaves you in that directory as a potentially unwanted side effect.
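Sketched as the underlying FTP exchange (the directory names and reply texts are illustrative; the numeric reply codes are what a client should branch on):

```
> CWD /uploads/images
< 250 Directory successfully changed.    (exists -> upload here)
> CWD /uploads/new
< 550 Failed to change directory.        (missing -> create it)
> MKD /uploads/new
< 257 "/uploads/new" created.
> CWD /uploads/new
< 250 Directory successfully changed.
```

After the check, issue CDUP (or CWD back to the original path) if you need to undo the working-directory side effect noted above.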
For these reasons, as well as others, you may find more modern protocols such as SSH (which includes a file transfer feature) to be more useful. You may find the DLSFTPClient CocoaPod useful.
M.
I'd like to give our business team the ability to edit certain pages and content themselves via a CMS solution in our grails application, and Weceem plugin seems like a good choice.
The potential showstopper I see is that it uses the local server file system for uploaded content, which is no good in a horizontally scaled cloud environment like ours (we run in AWS).
The question is: is it possible to tell Weceem to use the database to store binary/uploaded content, or (better yet) to override the content upload handlers to use Amazon S3 instead of the file system? (We already have code that uploads to S3 in our main app, so the question is just how to hook into Weceem.)
I assume that in such a situation it's possible to create your own content type (domain class) in your app that stores uploaded binary content. This class should be a subclass of the org.weceem.content.WcmContent class. In Weceem you can check a small example of storing such content; see the org.weceem.files.WcmContentFileDB class. Also, there is information here on how to extend the plugin with a custom content type. I hope the information is helpful.
As for uploading: in Weceem we use the CKEditor plugin for uploading additional files/resources. org.weceem.files.WcmContentFile is also used; it stores files on the file system, and the files are uploaded using paths provided by the org.weceem.services.WcmContentRepositoryService.getUploadPath(...) method. This path is calculated from a configuration property provided in the application config (e.g. 'weceem.upload.dir'). I'm not sure you can hook in here.