I'm running Artifactory CPP CE 7.7.3 and Traefik v2.2 using docker-compose. The service is only reachable at http://localhost/ui/. What I need is an option that lets me add a URL path prefix (e.g. http://localhost/artifactory/ui).
My Setup
I followed the setup process described in the Artifactory Docs.
My docker-compose.yaml is the official one, extracted from jfrog-artifactory-cpp-ce-7.7.3-compose.tar.gz: ./templates/docker-compose.yaml.
I'm using a reverse proxy (Traefik). For this, I've added the necessary Traefik configuration lines to the docker-compose file. Here is a small extract of what I've added:
[...]
labels:
- "traefik.http.routers.artifactory.rule=Host(`localhost`) && PathPrefix(`/ui`)"
- "traefik.http.routers.artifactory.middlewares=artifactory-stripprefix"
- "traefik.http.middlewares.artifactory-stripprefix.stripprefix.prefixes=/"
- "traefik.http.services.artifactory.loadbalancer.server.port=8082"
With this I can access Artifactory at http://localhost/ui/.
Problem:
I have multiple small services running on my server, and each of these services is accessible via http://localhost/<service-name>. This is very convenient and makes clear that the URL belongs to that service on my production server.
Because of this, I want a URL like http://localhost/artifactory/ui/... instead of http://localhost/ui/...
I struggled to set up Artifactory that way. I already managed to get a redirect from e.g. http://localhost/artifactory/ to http://localhost/ui/, but that is not what I want on my production server.
What I did
Went through the documentation in the hope of finding an option I could simply pass to Artifactory to add a prefix (not successful).
Spent two full days trying to configure Traefik to alter headers so the response points to http://localhost/artifactory/ui/... (only partially successful; redirection didn't work afterwards).
Tried to find the configuration responsible for this in $JFROG_HOME/artifactory/var/etc (not successful).
Is this even possible? Help is highly appreciated.
This example (even though it is not a Traefik example) gives you a direction for implementing it. Certain routes are already used within the product, so you need to add a context path on top of them and make sure everything comes in via that new context path.
https://jfrog.com/knowledge-base/how-to-remove-artifactory-from-the-context-url-in-artifactory-7/
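To sketch that direction for Traefik (untested; the router, middleware and service names are the ones from the compose extract in the question, and it assumes Artifactory's Custom Base URL is set to http://localhost/artifactory so Artifactory's own redirects and UI links keep the prefix):

labels:
  - "traefik.http.routers.artifactory.rule=Host(`localhost`) && PathPrefix(`/artifactory`)"
  - "traefik.http.routers.artifactory.middlewares=artifactory-stripprefix"
  - "traefik.http.middlewares.artifactory-stripprefix.stripprefix.prefixes=/artifactory"
  - "traefik.http.services.artifactory.loadbalancer.server.port=8082"

Without a base-URL change on the Artifactory side, stripping the prefix alone tends to land you back at /ui/, which is the redirect behaviour described in the question and what the knowledge-base article addresses.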
I have several Cloud Run services running. I need to do some environment-specific things in code (also because I have a Docker container being built from the same sources), so I searched around quite a bit to find out if I can get the following at run time:
Check if my code is running in a CloudRun instance or not
Get other environment variables like service-name, project-name, deploy-time, awake-time, region-name, etc. - for various reasons
This demo container code shows how to get that kind of information:
https://github.com/GoogleCloudPlatform/cloud-run-hello/blob/master/hello.go
Some things are available directly as environment variables like service and revision
service := os.Getenv("K_SERVICE")
revision := os.Getenv("K_REVISION")
The container contract docs page shows the full list
https://cloud.google.com/run/docs/container-contract
as well as information about the metadata server that can give you things like project-id or region.
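A minimal Go sketch combining both (error handling trimmed; the metadata paths are the ones documented in the container contract):

package main

import (
	"fmt"
	"io"
	"net/http"
	"os"
)

// metadata fetches a value from the metadata server.
// The Metadata-Flavor header is required, otherwise the server refuses the request.
func metadata(path string) (string, error) {
	req, err := http.NewRequest("GET", "http://metadata.google.internal/computeMetadata/v1/"+path, nil)
	if err != nil {
		return "", err
	}
	req.Header.Set("Metadata-Flavor", "Google")
	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		return "", err // outside Cloud Run there is no metadata server, so this fails
	}
	defer resp.Body.Close()
	body, err := io.ReadAll(resp.Body)
	return string(body), err
}

func main() {
	// K_SERVICE / K_REVISION are only set on Cloud Run, so an empty value is a
	// simple "not running on Cloud Run" check.
	service := os.Getenv("K_SERVICE")
	revision := os.Getenv("K_REVISION")

	projectID, _ := metadata("project/project-id")
	region, _ := metadata("instance/region") // returned as projects/<number>/regions/<region>

	fmt.Println(service, revision, projectID, region)
}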
I would like to enable caching in ArangoDB automatically when my app starts.
I'm using docker-compose to start the whole thing, but apparently there's no simple parameter to enable caching in the official ArangoDB image.
According to the docs, all the files in /docker-entrypoint-initdb.d/ are executed at container start. So I added a .js file with this code:
require('@arangodb/aql/cache').properties({mode: 'on'});
It is indeed executed, but caching doesn't seem to be enabled (from what I see with arangosh within the container).
My app is a JS app using arangojs, so if I can do it this way, I'd be happy too.
Thanks!
According to the performance and server config docs, you can enable caching in several ways.
Your method of adding require("@arangodb/aql/cache").properties({ mode: "on" }); to a .js file in the /docker-entrypoint-initdb.d/ directory should work, but keep an eye on the logs. You may need to redirect log output with a different driver (journald, syslog, etc.) to see what's going on. Make sure to run the command via arangosh as well, to see if it works there.
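For example, to check from inside the container (container name and password are placeholders):

docker exec -it <arangodb-container> arangosh --server.password <password>
arangosh> require("@arangodb/aql/cache").properties();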
If that's a bust, you might want to see if there is a way to pass parameters at runtime (such as --query.cache-mode on). Unfortunately, I don't use Docker Compose, so I can't give you direct advice here, but try something like -e QUERY.CACHE-MODE=ON
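If the official image passes extra command arguments through to arangod (I haven't verified this), a Compose sketch could look like:

services:
  arangodb:
    image: arangodb:3.7
    environment:
      - ARANGO_ROOT_PASSWORD=<password>
    command: ["arangod", "--query.cache-mode", "on"]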
If there isn't a way to pass params, then you could modify the config file: /etc/arangodb3/arangod.conf.
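Command-line flags map to section/option pairs in that file, so the relevant snippet would presumably be:

[query]
cache-mode = on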
And don't forget about the REST API methods for system management. You can view and alter the AQL configuration in the Web UI by clicking Support -> Rest API -> AQL.
One thing to keep in mind - I'm not sure if the caching settings are global or tied to a specific database. View the configuration on multiple databases (including _system) to test the settings.
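Since your app already talks to ArangoDB over HTTP, another option is the query-cache REST endpoint; a sketch with curl (default port 8529, credentials are placeholders):

curl -u root:<password> -X PUT http://localhost:8529/_db/_system/_api/query-cache/properties -d '{"mode": "on"}'

arangojs also has a generic route()/request helper, so the same call could be made from the app at startup rather than at container start.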
I am working with a designer and I'd like them to have access to the interactions I've implemented on the site we're working on. However, this time I have two issues: my localhost is configured to a subdomain (http://store.teststore:3000/), and we're on different networks. Is there any way to work around this?
ngrok should work for you. Download and install it following the instructions here: https://ngrok.com/download. Documentation on how it is used can be found here: https://ngrok.com/docs. Once installed, running the command below should work for you (depending on the hosting environment):
ngrok http -host-header=rewrite store.teststore:3000
You will need to give the designer the URL generated by ngrok and displayed in the command prompt.
Update: Handling absolute redirects
Based on your comment, it sounds like your site does an absolute redirect after login (the full URL is specified). If possible, I would change your code to do a relative redirect, where the domain is omitted. You could also make the root domain in the absolute redirect configurable and set it to the ngrok domain for now. Lastly, you could configure your DNS with a CNAME record following ngrok's "Tunnels to custom domains" documentation; this last option, however, requires a paid ngrok subscription.
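To illustrate the relative vs. configurable-absolute redirect idea (a Go sketch only, since I don't know your stack; the routes and the BASE_URL variable are made up):

package main

import (
	"net/http"
	"os"
)

func main() {
	// Relative redirect: the browser keeps whatever host it used to reach
	// the site (localhost, the ngrok URL, or production), so nothing needs
	// rewriting when tunnelling.
	http.HandleFunc("/login", func(w http.ResponseWriter, r *http.Request) {
		http.Redirect(w, r, "/dashboard", http.StatusFound)
	})

	// Absolute redirect with a configurable root domain: point BASE_URL at
	// the ngrok domain while the designer needs access.
	http.HandleFunc("/login-absolute", func(w http.ResponseWriter, r *http.Request) {
		base := os.Getenv("BASE_URL") // e.g. https://abc123.ngrok.io
		http.Redirect(w, r, base+"/dashboard", http.StatusFound)
	})

	http.ListenAndServe(":3000", nil)
}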
Install ngrok if you haven't yet, cd into your project directory, and invoke ngrok. Note: your application must be running locally on the same port that you pass to ngrok.
I have configured our local TFS proxy against the Active Directory site for our local office using the syntax below:
tf proxy /add http://MyProxy:8080 /default:site /Site:LocalOffice /name:MyProxy
When I run
tf proxy /configure
it correctly identifies my site, and sets up the correct proxy.
However, I'm seeing inconsistent behaviour during get operations.
My understanding is that when I run a get operation (either via tf get or through Visual Studio), it should automatically recognise that the site has a proxy and configure it.
When I tried this on a VM that had never used a proxy, it seemed to work fine. However, on my own machine, I went into VS and removed the proxy settings, then closed the VS instance. I then attempted a tf get from PowerShell and found that it did not configure the proxy correctly (I confirmed this using tf proxy).
I'm expecting the proxy to be automatically configured for any user who is currently in our office, overriding any manual settings they have. Is there additional setup I need to do in order to do this?
Update
Based on the documentation here, I would expect it to set up the proxy on my machine when I ask for the code:
If you add a proxy record with the default set to site, the first time that a developer from within the specified Active Directory domain performs a get operation, Team Foundation Server will redirect that developer's request to the proxy that is specified by the record that is associated with the site.
However, this doesn't happen even if I clear out the proxy settings in VS (and untick the box) and perform a get after a reboot. I can understand it perhaps not overriding a setting I enter by hand, but I would expect it to configure itself when no setting is present at all.
You need to use the /default flag:
tf proxy /add http://MyProxy:8080 /default:site /Site:LocalOffice
A full description of how this works can be found on: http://blogs.msdn.com/b/deepakkhare/archive/2014/05/06/tfs-proxy-unsung-hero.aspx
I have been able to build the RabbitMQ server on Ubuntu Linux. It came prepackaged, and after running make it is able to start as a service. When I got the client source, however, the make failed because it appeared to need a folder called ./deps/rabbitmq-server. Analysing the code, I found that the author of the client accesses the same header files as are found in the server, using include_lib("path to rabbit.hrl etc.") in his header file "amqp_client.hrl". I then decided to add rabbitmq_server to Erlang's lib dir so that its paths are added automatically on start-up of the VM, but this did not help either.
There is also another folder the client references, called "rabbit_common", whose include folder it assumes contains all the .hrl files. Please assist me in building both the client and the server on my Ubuntu server, for testing.
Also, if anyone has used the RabbitMQ server for IMs, please share some benchmarks and/or your findings on its throughput, speed and number of users. How does it compare to ejabberd? And how can one create AJAX/jQuery/JavaScript clients for web functionality?
thanks
I hope you have made some progress as far as RabbitMQ and ejabberd are concerned.
Below is a link to an interesting discussion that might be of help.
http://old.nabble.com/AMPQ-vs-XMPP-and-RabbitMQ-vs-ejabberd-td17587109.html