I am trying to assign an existing Elastic IP to an EC2 instance in a SingleInstance Elastic Beanstalk environment created with CDK.
However, I could not find out:
How to get the reference to an existing Elastic IP with CDK
How to get the reference to the EC2 instance created by the Elastic Beanstalk Environment
How to associate the Elastic IP with that instance
Environment Source
const environment = new elasticbeanstalk.CfnEnvironment(this, `somename`, {
  applicationName: 'someappname',
  environmentName: 'someenvname',
  platformArn:
    'arn:aws:elasticbeanstalk:eu-central-1::platform/Node.js 16 running on 64bit Amazon Linux 2/5.5.0',
  optionSettings: [
    {
      namespace: 'aws:elasticbeanstalk:environment',
      optionName: 'EnvironmentType',
      value: 'SingleInstance',
    },
    {
      namespace: 'aws:autoscaling:launchconfiguration',
      optionName: 'InstanceType',
      value: 't3.small',
    },
    {
      namespace: 'aws:autoscaling:launchconfiguration',
      optionName: 'IamInstanceProfile',
      value: 'aws-elasticbeanstalk-ec2-role',
    },
  ],
  versionLabel: applicationVersion.ref,
});
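For reference, the kind of association I was hoping to express looks roughly like the sketch below; the instance ID parameter is purely hypothetical (CfnEnvironment does not expose the instance it creates), and the allocation ID is a placeholder for my existing Elastic IP:

import * as cdk from 'aws-cdk-lib';
import * as ec2 from 'aws-cdk-lib/aws-ec2';

// Hypothetical input: CfnEnvironment does not expose the EC2 instance it creates,
// so the instance ID is modelled here as a plain parameter just for illustration.
const instanceIdParam = new cdk.CfnParameter(this, 'BeanstalkInstanceId', {
  type: 'String',
});

// Associate an already-allocated Elastic IP (referenced only by its allocation ID,
// placeholder value below) with that instance.
new ec2.CfnEIPAssociation(this, 'ExistingEipAssociation', {
  allocationId: 'eipalloc-0123456789abcdef0', // existing Elastic IP (placeholder)
  instanceId: instanceIdParam.valueAsString,
});

Whether something like this is even the right approach is part of what I am asking.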
Related
When I deploy my API to AWS with Serverless ($ serverless deploy), the stage is always added to the API URLs, for example:
foo.execute-api.eu-central-1.amazonaws.com/dev/v1/myfunc
Same thing happens when I run this locally ($ serverless offline):
localhost/dev/v1/myfunc
This is fine when deploying to AWS but for localhost I do not want this.
Question: Is there a way to remove the dev part of the URL for localhost only, so that it looks like this:
localhost/v1/myfunc
I already tried to remove the stage setting in serverless.yml but the default seems to be dev, so it doesn't matter if I specify the stage there or not.
service: my-service
frameworkVersion: "3"
provider:
  name: aws
  runtime: nodejs16.x
  stage: dev
  region: eu-central-1
  apiGateway:
    apiKeys:
      - name: my-apikey
        value: ${ssm:my-apikey}
functions:
  myfunc:
    handler: src/v1/myfunc/index.get
    events:
      - http:
          path: /v1/myfunc
          method: get
          private: true
plugins:
  - serverless-esbuild
  - serverless-offline
  - serverless-dotenv-plugin
The solution was to use --noPrependStageInUrl like this:
$ serverless offline --noPrependStageInUrl
I am trying to deploy Jenkins using helm with JCasC to get Vault secrets. I am using a local minikube to create my k8s cluster and a local Vault instance on my machine (not in the k8s cluster).
Even though I am using initContainerEnv and ContainerEnv, I am not able to reach the Vault values. For the CASC_VAULT_TOKEN value I am using the Vault root token.
This is the helm command I run locally:
helm upgrade --install -f values.yml mijenkins jenkins/jenkins
And here is my values.yml file:
controller:
  installPlugins:
    # need to add this configuration-as-code due to a known jenkins issue: https://github.com/jenkinsci/helm-charts/issues/595
    - "configuration-as-code:1414.v878271fc496f"
    - "hashicorp-vault-plugin:latest"
  # passing initial environment values to the docker init container
  initContainerEnv:
    - name: CASC_VAULT_TOKEN
      value: "my-vault-root-token"
    - name: CASC_VAULT_URL
      value: "http://localhost:8200"
    - name: CASC_VAULT_PATHS
      value: "cubbyhole/jenkins"
    - name: CASC_VAULT_ENGINE_VERSION
      value: "2"
  ContainerEnv:
    - name: CASC_VAULT_TOKEN
      value: "my-vault-root-token"
    - name: CASC_VAULT_URL
      value: "http://localhost:8200"
    - name: CASC_VAULT_PATHS
      value: "cubbyhole/jenkins"
    - name: CASC_VAULT_ENGINE_VERSION
      value: "2"
  JCasC:
    configScripts:
      here-is-the-user-security: |
        jenkins:
          securityRealm:
            local:
              allowsSignup: false
              enableCaptcha: false
              users:
                - id: "${JENKINS_ADMIN_ID}"
                  password: "${JENKINS_ADMIN_PASSWORD}"
And in my local Vault I can see/reach the values:
> vault kv get cubbyhole/jenkins
============= Data =============
Key                       Value
---                       -----
JENKINS_ADMIN_ID          alan
JENKINS_ADMIN_PASSWORD    acosta
Any of you have an idea what I could be doing wrong?
I haven't used Vault with Jenkins, so I'm not exactly sure about your particular situation, but I am very familiar with how finicky the Jenkins helm chart is. I was able to configure my securityRealm (with the Google Login plugin) by first creating a k8s secret with the values needed:
kubectl create secret generic googleoauth --namespace jenkins \
--from-literal=clientid=${GOOGLE_OAUTH_CLIENT_ID} \
--from-literal=clientsecret=${GOOGLE_OAUTH_SECRET}
then passing those values into helm chart values.yml via:
controller:
  additionalExistingSecrets:
    - name: googleoauth
      keyName: clientid
    - name: googleoauth
      keyName: clientsecret
then reading them into JCasC like so:
...
  JCasC:
    configScripts:
      authentication: |
        jenkins:
          securityRealm:
            googleOAuth2:
              clientId: ${googleoauth-clientid}
              clientSecret: ${googleoauth-clientsecret}
In order for that to work the values.yml also needs to include the following settings:
serviceAccount:
  name: jenkins
rbac:
  readSecrets: true # allows jenkins serviceAccount to read k8s secrets
Note that I am running Jenkins as a k8s serviceAccount called jenkins in the namespace jenkins.
After debugging my Jenkins installation I figured out that the main issue was neither my values.yml nor my JCasC integration, as I was able to see the ContainerEnv values when I went inside my Jenkins pod with:
kubectl exec -ti mijenkins-0 -- sh
So I needed to expose my Vault server so that my Jenkins could reach it; I used this Vault tutorial to achieve that. In brief, instead of using the normal:
vault server -dev
We need to use:
vault server -dev -dev-root-token-id root -dev-listen-address 0.0.0.0:8200
Then we need to export an environment variable for the vault CLI to address the Vault server.
export VAULT_ADDR=http://0.0.0.0:8200
After that, we need to determine the Vault address to which we are going to point our Jenkins. To do that, we start a Minikube SSH session:
minikube ssh
Within this SSH session, retrieve the value of the Minikube host.
$ dig +short host.docker.internal
192.168.65.2
After retrieving the value, we can query the status of the Vault server to verify network connectivity.
$ dig +short host.docker.internal | xargs -I{} curl -s http://{}:8200/v1/sys/seal-status
And now we can connect our Jenkins pod to our Vault; we just need to change CASC_VAULT_URL to use http://192.168.65.2:8200 in our main .yml file, like this:
    - name: CASC_VAULT_URL
      value: "http://192.168.65.2:8200"
I have created a VM instance in GCP using Pulumi and installed Docker on it. I am trying to connect to the remote Docker instance, but it fails to establish the connection (it asks for host key verification in a pop-up window).
const remoteInstance = new docker.Provider(
  "remote",
  {
    host: interpolate`ssh://user@${externalIP}:22`,
  },
  { dependsOn: dockerInstallation }
);
I am able to run Docker containers locally, but I want to run the same in the VM. The code snippet is here.
With the recent version of "@pulumi/docker": "^3.2.0" you can now pass the SSH options. Reference
const remoteInstance = new docker.Provider(
  "remote",
  {
    host: interpolate`ssh://user@${externalIP}:22`,
    sshOpts: [
      "-o",
      "StrictHostKeyChecking=no",
      "-o",
      "UserKnownHostsFile=/dev/null",
    ],
  },
  { dependsOn: dockerInstallation }
);
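Once the provider connects, other resources can target it explicitly via the provider option. A minimal sketch of what that might look like, assuming @pulumi/docker v3 (the nginx image and port numbers are just placeholders):

import * as docker from "@pulumi/docker";

// Pull an image on the remote VM through the SSH-backed provider.
const nginxImage = new docker.RemoteImage(
  "nginx-image",
  { name: "nginx:1.23" },
  { provider: remoteInstance }
);

// Run a container on the remote VM using that image.
new docker.Container(
  "nginx-container",
  {
    image: nginxImage.latest,
    ports: [{ internal: 80, external: 8080 }],
  },
  { provider: remoteInstance }
);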
From a regular ECS container running with the bridge mode, or from a standard EC2 instance, I usually run
curl http://169.254.169.254/latest/meta-data/local-ipv4
to retrieve my IP.
In an ECS container running with the awsvpc network mode, I get the IP of the underlying EC2 instance, which is not what I want. I want the address of the ENI attached to my container. How do I do that?
A new convenience environment variable is injected by the AWS container agent into every container in AWS ECS: ${ECS_CONTAINER_METADATA_URI}
This contains the URL to the metadata endpoint, so now you can do
curl ${ECS_CONTAINER_METADATA_URI}
The output looks something like
{
  "DockerId": "redact",
  "Name": "redact",
  "DockerName": "ecs-redact",
  "Image": "redact",
  "ImageID": "redact",
  "Labels": { },
  "DesiredStatus": "RUNNING",
  "KnownStatus": "RUNNING",
  "Limits": { },
  "CreatedAt": "2019-04-16T22:39:57.040286277Z",
  "StartedAt": "2019-04-16T22:39:57.29386087Z",
  "Type": "NORMAL",
  "Networks": [
    {
      "NetworkMode": "awsvpc",
      "IPv4Addresses": [
        "172.30.1.115"
      ]
    }
  ]
}
Under the key Networks you'll find IPv4Addresses.
Your application code can then look something like this (Python):
import os
import requests
# Read the metadata endpoint injected by the ECS agent and pull out the ENI address.
METADATA_URI = os.environ['ECS_CONTAINER_METADATA_URI']
container_metadata = requests.get(METADATA_URI).json()
ALLOWED_HOSTS.append(container_metadata['Networks'][0]['IPv4Addresses'][0])
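If your service is written in Node rather than Python, the same lookup is just an HTTP GET against the injected URI. A rough TypeScript sketch (variable names are illustrative, not from the original answer):

import * as http from "http";

const metadataUri = process.env.ECS_CONTAINER_METADATA_URI;

if (metadataUri) {
  // The metadata endpoint is plain HTTP inside the task's network namespace.
  http.get(metadataUri, (res) => {
    let body = "";
    res.on("data", (chunk) => (body += chunk));
    res.on("end", () => {
      const metadata = JSON.parse(body);
      // Same path as above: Networks[0].IPv4Addresses[0] is the ENI address.
      const eniAddress = metadata.Networks[0].IPv4Addresses[0];
      console.log(eniAddress); // e.g. 172.30.1.115
    });
  });
}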
If what you are after is the container's public IP instead, the public-ip package can be used:
import * as publicIp from 'public-ip';
const publicIpAddress = await publicIp.v4(); // your container's public IP
I have installed Rancher 2 and created a Kubernetes cluster of internal VMs (no AWS / gcloud).
The cluster is up and running. We are behind a corporate proxy.
1) Installed kubectl and executed kubectl cluster-info. It listed my cluster information correctly.
2) Installed helm
3) Configured helm referencing Rancher Helm Init
4) Tried installing Jenkins charts via helm
helm install --namespace jenkins --name jenkins -f values.yaml stable/jenkins
The values.yaml has proxy details.
---
Master:
  ServiceType: "ClusterIP"
  AdminPassword: "adminpass111"
  Cpu: "200m"
  Memory: "256Mi"
  InitContainerEnv:
    - name: "http_proxy"
      value: "http://proxyuser:proxypass@proxyname:8080"
    - name: "https_proxy"
      value: "http://proxyuser:proxypass@proxyname:8080"
  ContainerEnv:
    - name: "http_proxy"
      value: "http://proxyuser:proxypass@proxyname:8080"
    - name: "https_proxy"
      value: "http://proxyuser:proxypass@proxyname:8080"
  JavaOpts: >-
    -Dhttp.proxyHost=proxyname
    -Dhttp.proxyPort=8080
    -Dhttp.proxyUser=proxyuser
    -Dhttp.proxyPassword=proxypass
    -Dhttps.proxyHost=proxyname
    -Dhttps.proxyPort=8080
    -Dhttps.proxyPassword=proxypass
    -Dhttps.proxyUser=proxyuser
  Persistence:
    ExistingClaim: "jk-volv-pvc"
    Size: "10Gi"
5) The workloads are created. However, the Pods are stuck. The logs complain about SSL certificate verification.
How do I turn SSL verification off? I don't see an option to set it in values.yaml.
We cannot turn off installing plugins during deployment either.
Do we need to add an SSL cert when deploying charts?
Any idea how to solve this issue?
I had the same issues as you did. In my case it was due to the fact that my DNS domain had a wildcard A record, so updates.jenkins.io.mydomain.com would resolve fine. After removing the wildcard, that lookup now fails, so the host properly resolves updates.jenkins.io as updates.jenkins.io.
This is fully documented here:
https://github.com/kubernetes/kubernetes/issues/64924