How to get the base URL in WebSphere Application Server? - websphere-6.1

I am running my application on WAS 6.1. On the same server I have two EARs deployed. One application is accessed over HTTP at server.com:port/app1 and the other at server.com:port/app2. In application1 I reference application2 as:
<media-access-proxy
base-url="http://ipaddress:port/app2"
prefetch-base-url=""
mode="mode1"/>
Since this IP address is hard-coded, I have to change it every time I move from the dev environment to QA and from QA to production. I want the configuration to pick up the base URL by itself. Does WAS have a property such as was.baseurl that could be used in place of ipaddress:port, or something like that?

The schema requires a URI, so you can't use anything special there. However, you can use a DNS name that is the same in both environments. Use something like http://server:9080/app2 and add server to the hosts file on both servers.
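For illustration (the IP address here is hypothetical; substitute each environment's real address), the hosts entry and the resulting reference would look like this:
10.0.0.15 server
<media-access-proxy
base-url="http://server:9080/app2"
prefetch-base-url=""
mode="mode1"/>
Each environment maps server to its own IP in its own hosts file, so the deployed configuration never has to change.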

Related

Accessing decommissioned website in Umbraco

I have a website that we used to access via Umbraco. It was decommissioned on 11/22 in favor of a new site with the same name. There is some content we need to retrieve. I was thinking we could access it via IP, but that doesn't work. Does anyone know how to accomplish this so we can log on to the old site via Umbraco without interfering with the new site?
If you log into the server and find the site in IIS, you can set up a new binding on that site so it responds to decommissioned.mysite.com. Then add a hosts file entry on your local machine so that decommissioned.mysite.com sends you to the decommissioned site.
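For example (a sketch; the site name "OldUmbracoSite" is a placeholder for the old site's name in IIS), the binding can be added from an elevated command prompt with appcmd, or through the Bindings dialog in IIS Manager:
%windir%\system32\inetsrv\appcmd set site /site.name:"OldUmbracoSite" /+bindings.[protocol='http',bindingInformation='*:80:decommissioned.mysite.com']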
When your computer performs a DNS lookup, the hosts file is the first place it looks. This means you can use the hosts file to bypass the DNS settings configured for the public. It comes in handy when you have a dev version of a site that isn't ready for the world yet. On Windows you can find the hosts file at C:\Windows\System32\Drivers\etc\hosts. You will probably need to run your text editor as an administrator to edit the file. This is what hosts file entries look like:
123.123.123.123 mydomain.com www.mydomain.com
321.321.321.321 www.myotherdomain.com blog.myotherdomain.com

dropwizard get on demand jdbi connection

I have a simple CRUD application with backend code in Dropwizard. The entire app comprises simple resource classes and CRUD operations, except for one case where some business logic is involved.
I am trying to extract this into a service instead of putting it in the resource class itself. But for that, my service would need an on-demand JDBI connection to access data and do its thing.
All my connection strings and config values are in the YML file. Since this app will run on different servers with different YML files, I don't want to hard-code the YML file name just to read it again for the connection strings.
How do I achieve this?
Can you detect what environment you are on?
If so, can you do something like ${environment}.yml?
There is a Configuration project at Apache which might help.
Otherwise, is it a case of wanting to run
java -jar app.jar server dev.yml
in dev, and java -jar app.jar server prod.yml in prod? I imagine you have separate daemons in each environment, so those environments will pick up the right configuration if you've configured them that way.
Otherwise, if the property names are the same but their values differ, and you pick up the right YML in the right environment, things should work.
If I haven't addressed your question, can you please elaborate your problem a little more?
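To address the service part specifically, one option (a minimal sketch, assuming the dropwizard-jdbi module; CrudConfiguration, ReportService, ReportDao and ReportResource are hypothetical names, with CrudConfiguration exposing a getDataSourceFactory()) is to build the DBI once in the application's run method from whichever YML was passed on the command line, then hand an on-demand DAO to the service so nothing has to re-read the config file:
import io.dropwizard.Application;
import io.dropwizard.jdbi.DBIFactory;
import io.dropwizard.setup.Environment;
import org.skife.jdbi.v2.DBI;

public class CrudApplication extends Application<CrudConfiguration> {
    @Override
    public void run(CrudConfiguration config, Environment environment) {
        // Build the DBI from the DataSourceFactory defined in whatever YML was supplied at startup.
        DBIFactory factory = new DBIFactory();
        DBI jdbi = factory.build(environment, config.getDataSourceFactory(), "database");

        // The service receives an on-demand DAO; it never touches the YML file itself.
        ReportService service = new ReportService(jdbi.onDemand(ReportDao.class));
        environment.jersey().register(new ReportResource(service));
    }

    public static void main(String[] args) throws Exception {
        new CrudApplication().run(args);
    }
}
Each environment's daemon keeps passing its own YML on the command line, and the service code stays identical everywhere.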

OpenESB - different environments

I am developing a service-layer app that provides a catalog of web services, which I then orchestrate using OpenESB.
I create my BPELs by importing external WSDL definitions using http://localhost:8080/services/myService?wsdl.
The problem is that these BPELs depend strongly on this specific URL, and when I deploy to the production server, my ESB layer stops working.
How can I make my BPELs independent of the specific endpoint? Can I point the URIs to an external config file?
To do that, you must create an application configuration and application variables and use them in your HTTP address. Example: http://${MyHttpAddress}:${MyHttpPort}/service1/myService?wsdl
Application configurations and variables are set up in the administrative console and can be changed for each environment.
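For illustration (these values are hypothetical), the same endpoint definition could then resolve differently per environment:
dev: MyHttpAddress=localhost, MyHttpPort=8080 -> http://localhost:8080/service1/myService?wsdl
prod: MyHttpAddress=esb.example.com, MyHttpPort=8080 -> http://esb.example.com:8080/service1/myService?wsdl
Only the variable values change in the administrative console; the BPEL itself stays untouched.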
Regards
Paul

setting up subdomain in url locally

I am creating a Rails application with functionality for registering a new user, and we provide a separate subdomain for each user based on their user name.
So I want to map
user_name.localhost:3000, where user_name is dynamic.
Run your local development server with Pow. If you symlink the app as foo, then your pages are available under http://foo.dev, but also under every other subdomain such as http://bar.foo.dev. There is no need to register a list of subdomains anywhere.
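For example (a sketch; ~/projects/myapp stands in for your app's path):
cd ~/.pow
ln -s ~/projects/myapp foo
After that, http://foo.dev and http://user_name.foo.dev both reach the same Rails app.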
Prax might be an alternative if you are on Linux.
Access it like this; no configuration is required, since lvh.me resolves to 127.0.0.1 and so does every subdomain of it:
user_name.lvh.me:3000
I have figured it out.
The application uses the subdomain_fu gem to create the subdomain. When a user is created, the subdomain name is saved in the database, and it can be accessed by adding an entry to C:\Windows\System32\drivers\etc\hosts that maps the IP being hit, like this:
192.xxx.xx.xxx user_name.your_subdomain_name
and then hitting the URL like this:
user_name.your_subdomain_name:3000
It is working fine for me with these steps.
Thanks to all for the valuable feedback.

Setting Up a Test Environment For an ASP.NET MVC3 Website

I've been working on a client's website over the past year. I usually test things locally and then deploy straight to the production website. This has caused us some issues lately, so I thought I should create a test/staging environment in which we could thoroughly test new features before pushing them to production.
Anyway, we have a VPS hosting account, and I usually use remote desktop to manage the website in IIS. To create a test environment, I copy-pasted the folder of the production website into the same parent directory (so they are both at the same level) and changed the folder name. Then I created a new website in IIS and mapped its physical path to the httpdocs folder inside the copied folder. After that, I set up a new application pool with basically the same settings as the production website's application pool. I also changed the connection string of the test website.
But when I tried to view the test website, it did not work the way I expected. I keep getting &ReturnUrl=%2f appended to the query string, and the website is stripped of its styles (the CSS). I remember this used to happen when we were still on a shared hosting account, but I have no idea how to fix it.
I really do not know what's wrong. I basically have the exact same setup, except that I'm using a different port and a different database. I even tried running the test website under the production website's application pool, but that did not work either...
Any suggestions?
Looks like a permission problem to me; check whether your user has the correct privileges on the new folder/app pool :)
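For example (a sketch; the path and the app pool name "TestSite" are placeholders for your own), you could grant the test site's app pool identity read/execute access to the copied folder from an elevated command prompt:
icacls "C:\path\to\test\httpdocs" /grant "IIS AppPool\TestSite":(OI)(CI)RX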
