OpenShift Online does not allow containers to run processes as root, for security reasons (see the corresponding question in their FAQ section). RStudio Server, on the other hand, requires root privileges for installation and certain operations. According to the RStudio Server admin guide:
RStudio Server runs as the system root user during startup and then drops this privilege and runs as a more restricted user. RStudio Server then re-assumes root privilege for a brief instant when creating R sessions on behalf of users (the server needs to call setresuid when creating the R session, and this call requires root privilege).
Under these circumstances, is it somehow possible to get an RStudio Server Docker container running on OpenShift Online?
Using OpenShift Online, the short answer is no, you will not be able to get it running. You would need to find a Docker image for it that is a single-user version and doesn't implement a scheme whereby it tries to serve multiple users and expects to be able to switch user identity.
I am in the process of evaluating moving a very large Azure Cloud Service (Web Role) microservice architecture to AKS and have been working through the necessary code and build changes to support it.
In order to replicate the production environment locally for the developers, we run nginx on the host with SSL offloading and DNS (hosted in Azure) A records pointing to 127.0.0.1. When running in the Azure Emulator, the net effect is that the developer can both visit the various web front ends in their browser (e.g. https://myapp.mydomain.dev) and hit the various APIs in the solution (Web API 2) in Postman/cURL, etc.
Additionally, due to how the networking of the Azure Emulator works, the apps themselves can resolve each other through nginx on the host (e.g. the MVC app at https://myapp.mydomain.dev can obtain a token from the IdP web API at https://identity.mydomain.dev and then use that token at the API at https://api.mydomain.dev). This is the critical piece and the source of my question.
All attempts at getting the containers themselves to resolve each other the same way the host OS can (browser/Postman, SSL offloading via nginx) have failed. Many of the instructions out there are, understandably, for Linux containers, and adapting the various docker-compose networking settings to their Windows-container equivalents has not yet yielded any success. To keep the development environments aligned with the real production systems, which are tenantized and make use of the default mapping in nginx to catch all incoming traffic and route it to a specific user-facing app/container, it is not as simple as determining a "static" method of addressing these on startup; that is why the effort was put into producing the development environments we have today.
Right now, when one service (container) attempts to communicate with another, it ultimately results in a resolution error, because all requests resolve to https://127.0.0.1 due to the DNS A records hosted in Azure for the domain. Since this migration will be a longer-term project, the environments need to co-exist, so changing the way DNS is resolved (the real DNS A records pointing to 127.0.0.1, the host running nginx and handling SSL offloading to the various web roles normally running in the Azure Emulator) is not an option.
Is there a way (with Windows containers) to either:
Allow the container to utilize nginx on the host OS transparently (app must still call the API at https://api.mydomain.dev), which will cause the traffic to be routed properly to the correct container/port defined in the docker-compose file?
OR
Run nginx in each container, allowing each container to then resolve and route appropriately without knowing the IP of the other container, possibly through an alias which could be added to the container's nginx.conf before the service starts?
The platform utilizes OAuth2/OIDC, and it is critical to maintain the full URL to the other services from the application's perspective. Beyond mirroring the production and sandbox environments, these URLs are used for redirect URL and post-logout redirect URL validation, among other things, so using "https://myContainerNameForOtherContainerAlias" is not a workable solution.
Will I have the same problem when setting up the AKS environment as well?
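For reference, the kind of thing I have been experimenting with for option 1 looks roughly like the following docker-compose override (a sketch only; the service name and the host's NAT gateway IP are placeholders, and I have not confirmed Windows containers honour extra_hosts the same way Linux containers do):
# docker-compose.override.yml (sketch; service name and gateway IP are placeholders)
services:
  myapp:
    extra_hosts:
      # Point the public hostnames back at the host, where nginx does the SSL offloading,
      # instead of letting them resolve to 127.0.0.1 inside the container.
      - "identity.mydomain.dev:172.28.112.1"
      - "api.mydomain.dev:172.28.112.1"
The idea is that the app still calls https://api.mydomain.dev, but the hosts-file entry sends that traffic to nginx on the host rather than back into the container itself.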
I am trying to run the Infinispan Docker image on a Windows 10 machine with Docker Desktop for Windows.
I wrote a small test Java program that connects to localhost:11222 using Hot Rod and accesses a cache.
The problem is that after the initial connection the client receives a new address, 172.17.0.3:11222, from the server and fails to connect to it, because this is a Docker-internal address and Docker Desktop for Windows cannot route messages directly to an internal container address.
Is there any workaround available in Infinispan or on the Windows machine?
The simplest solution is to disable the handling of topology updates in your Hot Rod client:
infinispan.client.hotrod.client_intelligence=BASIC
More information about client intelligence is available in the Infinispan documentation.
Note that this is not recommended in production: the client will ignore new servers coming up and it will keep trying to contact the servers in the initial server list long after they stop.
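If you configure the client programmatically rather than through hotrod-client.properties, the equivalent is (a sketch, assuming the standard Hot Rod ConfigurationBuilder API; host and port taken from the question):
// Programmatic equivalent of the property above
import org.infinispan.client.hotrod.RemoteCacheManager;
import org.infinispan.client.hotrod.configuration.ClientIntelligence;
import org.infinispan.client.hotrod.configuration.ConfigurationBuilder;

ConfigurationBuilder builder = new ConfigurationBuilder();
builder.addServer().host("localhost").port(11222);
builder.clientIntelligence(ClientIntelligence.BASIC); // ignore topology updates from the server
RemoteCacheManager cacheManager = new RemoteCacheManager(builder.build());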
I'm not sure if this is the proper place for such a question (maybe it should be placed on Super User?), but I'll try.
I have one C# console application and one Windows service. Both do the same thing, but the console app was created earlier and is kept for backward compatibility. Each of them runs a WCF service whose methods operate on files in C:\ProgramData\MyApp. The console app runs as a limited (non-admin) user, the Windows service runs as NT AUTHORITY\NETWORK SERVICE. When the app creates some dirs/files, the service cannot delete them, and vice versa.
I would like to have it secured. My question is: should I grant full permissions on C:\ProgramData\MyApp to NETWORK SERVICE and the current user? Or should I create a dedicated user for running the service/app?
Assuming your application does not set explicit security permissions on newly created files, granting the Network Service account Delete permission on the folder would solve your immediate problem.
This command will do the work:
icacls c:\ProgramData\MyApp /t /grant "NETWORK SERVICE":(OI)(CI)(IO)D
Repeat the same for the other account (the user your console app runs as).
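For example, if the console app runs as a hypothetical local user named AppUser (substitute your actual account name):
icacls c:\ProgramData\MyApp /t /grant "AppUser":(OI)(CI)(IO)D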
I'm using the ArtifactDeployer plugin to deploy the build job artifacts to a remote location (Windows share SMB).
However, Jenkins never manages to succeed, throwing errors like:
[ArtifactDeployer] - Starting deployment from the post-action ...
[ArtifactDeployer] - [ERROR] - Failed to deploy. Can't create the directory ... Build step
[ArtifactDeployer] - Deploy artifacts from workspace to remote directories' changed build result to FAILURE
Local deployment works fine.
The Jenkins machine OS is Windows 7 32-bit Prof.
Jenkins is running as a service using a local system account.
I tried using another account, my user account, but the service failed to start (Windows error 1069: the service did not start due to a logon failure).
The Network Service account did run, but then Jenkins throws errors that it can't access the .NET Framework.
When manually trying the remote copy, it works fine. I can create directories and write to them. On the same machine, of course.
I tried two different remote references in Jenkins:
1) \\targetdirectory
2) I:\ - by mapping a drive letter to the remote dir in Windows
No success...
Any tips or suggestions? Thanks!
Update 15/02/2012:
Still no solution or workaround for this issue.
It's not only the plugin; I also hit this issue using "Execute Windows batch command".
I found a bug report that I want to share.
Solution
I found a solution. You have to grant access permission to the computer in the domain instead of to the user of that machine. Seems very logical if you look back at it.
A second solution is to run the service using a domain user account. Above I made a mistake by using the local user .\user instead of DOMAIN\user.
If you don't have a domain, the following will work for sure. This should work even if you have a domain.
Background Info:
You need your mapped drive to be mapped for the same account that the service is using AND be available at the right time. Normally mapped drives are mapped only for the logged in user, at the time that they log in. Service user contexts don't get "logged in" per se -- for example, if I map a drive as MyUser and the service runs as MyUser, the drive won't be available until I actually log in by typing in my password. However, we can use a script to map the drive at startup (instead of login) for a particular user. Jenkins normally runs as Local System Account, so if you don't want to change that, you'll need to run the script below as the SYSTEM user. You can instead create a specific user for Jenkins to run as, if you don't want to grant this mapped drive to all services/processes that run as SYSTEM, and run both the service and the script below as that user (this is probably more secure).
Solution Steps:
In ArtifactDeployer you want to deploy to a mapped network drive. In my case this is S:.
There is no special setup for permissions on the remote share. (In my case, a Windows Server 2008 share with a username and password that is used for mapping the drive.)
Write a batch file MapDrives.bat in a place that your chosen user (default: SYSTEM) has access to, with the following in it:
net use S: "\\server_name\share_name" password_here /USER:username_here /persistent:yes
Note that I am mapping to S: in that line.
Via Task Scheduler, create a task that runs as the same user as the service (default: SYSTEM), triggers At Startup, and, as its action, runs the batch file MapDrives.bat.
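If you prefer to create that task from the command line instead of the Task Scheduler UI, something like this should be equivalent (a sketch; the path to MapDrives.bat is an assumption):
schtasks /create /tn "MapDrives" /tr "C:\Scripts\MapDrives.bat" /sc onstart /ru SYSTEM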
Reboot and it should work!
Citations:
After diving through many pages and many tests, ultimately, the best suggestions were found here, and led me to the above solution.
https://stackoverflow.com/a/4763324/150794
Make sure your 'local system account' has access rights to the remote directory (including write access). Then use the notation
\\targetdirectory
Mapping drive letters to remote directories only applies to the user account you are currently working with. The drive letter mapping will not be available to any other account.
I'd like to write a service (that starts up and runs whenever the machine is on) that queries Active Directory, since the user IIS uses does not have permission to query AD. How do I determine whether A) my workstation, where I have local admin rights, and B) a shared team workstation will allow me to do this?
Anything you can do as an interactive user can be done by a service with appropriate permissions and configuration, so it isn't so much an issue of determining if you can, but rather configuring the service so that it can.
Your installation package should request an appropriate set of credentials (and of course must be run by a user with privileges to install such a service). The service itself should simply catch and log any permission exceptions.
As an example - look at the SQL Server installation process. Early on it requests that you specify accounts with the required privileges.
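For example, once such a service is installed, pointing it at a dedicated account with the needed AD query rights can be done with sc.exe (the service and account names below are hypothetical):
sc config MyAdQueryService obj= "MYDOMAIN\svc-adquery" password= "account_password_here"
Note that the space after obj= and password= is required by sc.exe's argument parsing.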