Execution failed for task ':buildDB2BotcImage' - docker

System Info: Windows 10 Enterprise
Gradle Version: 4.3.1
Docker Version: 17.10.0-ce, build f4ffd25
I am getting an issue while running a Gradle build. The error I am getting is:
Unrecognized field "identitytoken" (class com.github.dockerjava.api.model.AuthConfig), not marked as ignorable (6 known properties: "serveraddress", "username", "auth", "password", "email", "registrytoken"])
at [Source: N/A; line: -1, column: -1] (through reference chain: java.util.LinkedHashMap["auths"]->java.util.LinkedHashMap["registry.au-syd.bluemix.net"]->com.github.dockerjava.api.model.AuthConfig["identitytoken"])
I have done some research and found https://github.com/bmuschko/gradle-docker-plugin/issues/310, and made some changes in my build.gradle, but I am still getting this error.
The change I made in the relevant part of the build.gradle file:
Previously:
docker {
    if (System.properties['os.name'].toLowerCase().contains('windows')) {
        url = 'tcp://localhost:2376'
        certPath = new File(System.properties['user.home'], '.docker/machine/certs')
    }
    registryCredentials {
        url = 'https://maxrep01.swg.usma.ibm.com/'
        username = 'username'
        password = 'secret'
    }
}
Changed:
docker {
    if (System.properties['os.name'].toLowerCase().contains('windows')) {
        if (new File("\\\\.\\pipe\\docker_engine").exists()) {
            url = 'npipe:////./pipe/docker_engine'
        }
        else {
            url = 'tcp://localhost:2376'
        }
        certPath = new File(System.properties['user.home'], '.docker/machine/certs')
    }
    registryCredentials {
        url = 'https://maxrep01.swg.usma.ibm.com/'
        username = 'username'
        password = 'secret'
    }
}
This is the only helpful link I have found, but I am unable to figure out what the issue is. My understanding is that it is something related to Windows 10, but according to the link, the changes I made should have resolved the issue.
I am not sure what is wrong here.

Your problem is most likely related to Issue #921 of the docker-java project.
The issue is hit when the latest version of docker-java tries to load and parse the Docker configuration file ($HOME/.docker/config.json): it does not accept the field 'identitytoken'. docker-java only knows the 'registrytoken' field, which has been used for this purpose since Docker API v1.22; according to the documentation, it was renamed to 'identitytoken' in the next version, Docker API v1.23.
If waiting for the next docker-java release is not an option, it might help to tweak config.json so it no longer contains the offending field.
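In practice that means removing the identitytoken entry from the registry's auths block in $HOME/.docker/config.json (a hedged sketch only; the auth value below is a placeholder, and a later docker login against that registry may write the identitytoken back):
{
  "auths": {
    "registry.au-syd.bluemix.net": {
      "auth": "<base64-encoded credentials>"
    }
  }
}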

Related

SQLDelight: [SQLITE_ERROR] SQL error or missing database (table already exists)

I am trying to build a small POC with Jetpack Compose for Desktop and SQLDelight. I want the data to be persisted even after the application is restarted (not only in memory, as all the tutorial examples I have encountered show), so I tried this:
// ArticlesLocalDataSource.kt
class ArticlesLocalDataSource {

    private val database: TestDb

    init {
        val driver: SqlDriver = JdbcSqliteDriver(url = "jdbc:sqlite:database.db")
        TestDb.Schema.create(driver)
        database = TestDb(driver)
    }

    // ...
}
When I run the application for the first time this works, i.e. a database.db file is created in the project root and the data is stored successfully.
However, when I try to run the application a second time, then it crashes immediately with:
Exception in thread "main" org.sqlite.SQLiteException: [SQLITE_ERROR] SQL error or missing database (table ArticleEntity already exists)
at org.sqlite.core.DB.newSQLException(DB.java:1012)
at org.sqlite.core.DB.newSQLException(DB.java:1024)
at org.sqlite.core.DB.throwex(DB.java:989)
at org.sqlite.core.NativeDB.prepare_utf8(Native Method)
at org.sqlite.core.NativeDB.prepare(NativeDB.java:134)
at org.sqlite.core.DB.prepare(DB.java:257)
at org.sqlite.core.CorePreparedStatement.<init>(CorePreparedStatement.java:45)
at org.sqlite.jdbc3.JDBC3PreparedStatement.<init>(JDBC3PreparedStatement.java:30)
at org.sqlite.jdbc4.JDBC4PreparedStatement.<init>(JDBC4PreparedStatement.java:25)
at org.sqlite.jdbc4.JDBC4Connection.prepareStatement(JDBC4Connection.java:35)
at org.sqlite.jdbc3.JDBC3Connection.prepareStatement(JDBC3Connection.java:241)
at org.sqlite.jdbc3.JDBC3Connection.prepareStatement(JDBC3Connection.java:205)
at com.squareup.sqldelight.sqlite.driver.JdbcDriver.execute(JdbcDriver.kt:109)
at com.squareup.sqldelight.db.SqlDriver$DefaultImpls.execute$default(SqlDriver.kt:52)
at com.vgrec.TestPlus.TestDbImpl$Schema.create(TestDbImpl.kt:33)
at com.vgrec.data.local.ArticlesLocalDataSource.<init>(ArticlesLocalDataSource.kt:20)
I understand that it is crashing because there is an attempt to create the database again even though it already exists. What is not clear to me is how to connect to the DB when it already exists.
For completeness, here's the build file:
// build.gradle
plugins {
    kotlin("jvm") version "1.6.10"
    id("org.jetbrains.compose") version "1.1.0"
    // ...
    id("com.squareup.sqldelight") version "1.5.3"
}

sqldelight {
    database("TestDb") {
        packageName = "com.test"
    }
}

dependencies {
    implementation(compose.desktop.currentOs)
    // ..
    implementation("com.squareup.sqldelight:sqlite-driver:1.5.4")
    implementation("com.squareup.sqldelight:coroutines-extensions-jvm:1.5.4")
}
OK, so in the end I decided to just check if the database file exists and invoke the creation only if it does not exist.
Something like this:
init {
    val driver: SqlDriver = JdbcSqliteDriver("jdbc:sqlite:database.db")
    if (!File("database.db").exists()) {
        TestDb.Schema.create(driver)
    }
    // ...
}
At first glance this seems to work as expected, but I am not sure it is the recommended approach, as I am very new to SQLDelight, so other suggestions are welcome.
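One small refinement worth considering (a sketch only, and an assumption on my part: some JDBC drivers create an empty database.db as soon as they connect, so doing the existence check before constructing the driver is safer, and it keeps the path in one place):
import com.squareup.sqldelight.db.SqlDriver
import com.squareup.sqldelight.sqlite.driver.JdbcSqliteDriver
import java.io.File

// Hypothetical helper: create the schema only when the database file was not there yet.
fun createDatabase(path: String = "database.db"): TestDb {
    val databaseExists = File(path).exists()   // check before the driver touches the file
    val driver: SqlDriver = JdbcSqliteDriver("jdbc:sqlite:$path")
    if (!databaseExists) {
        TestDb.Schema.create(driver)           // first run: create the tables
    }
    return TestDb(driver)
}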

Terraform: AWS Lambda with Image not updating

We have a new Terraform script that pushes a Docker image to an AWS Lambda. The script works well and correctly connects the fresh image to the Lambda. I can confirm this by checking the image URL shown in the AWS console for the Lambda: it is the newly pushed and connected image. However, when testing the Lambda it is clearly running the prior code. It seems the Lambda has been updated, but the running in-memory instances didn't get the message.
Question: Is there a way to force the in-memory Lambda instances to be cycled to the new image?
Here is our TF code for the Lambda:
resource "aws_lambda_function" "my_lambda" {
function_name = "MyLambda_${var.environment}"
role = data.aws_iam_role.iam_for_lambda.arn
image_uri = "${data.aws_ecr_repository.my_image.repository_url}:latest"
memory_size = 512
timeout = 300
architectures = ["x86_64"]
package_type = "Image"
environment {variables = {stage = var.environment, commit_hash=var.commit_hash}}
}
After more searching I found some discussions (here) that mention the source_code_hash option in Terraform's Lambda resource block (docs here). It is mostly used with a SHA hash of the zip file when pushing code from an S3 bucket, but in our case we are using a container image, so there is not really a file to hash. However, it turns out it is just a string that Terraform compares for changes. So we added the following:
resource "aws_lambda_function" "my_lambda" {
function_name = "MyLambda_${var.environment}"
role = data.aws_iam_role.iam_for_lambda.arn
image_uri = "${data.aws_ecr_repository.my_image.repository_url}:latest"
memory_size = 512
timeout = 300
architectures = ["x86_64"]
package_type = "Image"
environment {variables = {stage = var.environment, commit_hash=var.commit_hash}}
source_code_hash = var.commit_hash << New line
}
We use a Bitbucket pipeline to inject the git hash into the terraform apply operation, as sketched below. This fix allowed the Lambda to correctly update the running version.
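For completeness, a sketch of the wiring this assumes: a commit_hash variable declared in the Terraform configuration and populated by the pipeline (BITBUCKET_COMMIT is Bitbucket's built-in variable; adjust for your CI):
variable "commit_hash" {
  type        = string
  description = "Git commit hash injected by the CI pipeline"
}

# In the pipeline step (shell):
#   terraform apply -auto-approve -var="commit_hash=${BITBUCKET_COMMIT}"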
Alternatively, if you don't want to depend on Bitbucket for this, you can add a data source for the ECR image:
data "aws_ecr_image" "repo_image" {
repository_name = "repo-name"
image_tag = "tag"
}
And then use its id as a source code hash like this:
source_code_hash = trimprefix(data.aws_ecr_image.repo_image.id, "sha256:")

How to get the most recent ebs snapshot using terraform datasource?

I am trying to get the most recently created snapshot using Terraform but don't know how to do it. According to Terraform's documentation, for an AWS AMI it can be done with:
data "aws_ami" "web" {
filter {
name = "state"
values = ["available"]
}
filter {
name = "tag:Component"
values = ["web"]
}
most_recent = true
}
I am expecting something similar for EBS snapshots, like:
data "aws_ebs_snapshot" "latest_snapshot" {
filter {
name = "state"
values = ["available"]
}
most_recent = true
}
But there is no "most_recent" argument on the reference page for the "aws_ebs_snapshot" data source (here), so how can I get the most recently created snapshot using Terraform? And why can't we use a syntax similar to what aws_ami has?
This is not available in the latest release of Terraform, v0.8.2, but the feature was merged into Terraform's master branch just a few days ago:
https://github.com/hashicorp/terraform/pull/10986
It is also listed in the CHANGELOG for the next release, v0.8.3, so it will be available soon.
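Once that release is out, usage should look roughly like the aws_ami example above (a sketch based on the merged pull request; check the released documentation for the exact arguments):
data "aws_ebs_snapshot" "latest_snapshot" {
  most_recent = true
  owners      = ["self"]
}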

Grails (3.0.8) plugin bintrayUpload returns HTTP/1.1 - forbidden

bintrayUpload always tries to publish my plugin to grails/plugins instead of my repo danieltribeiro/plugins, and I get a 403 Forbidden because of that.
* What went wrong:
Execution failed for task ':bintrayUpload'.
> Could not create package 'grails/plugins/my-plugin': HTTP/1.1 403 Forbidden [message:forbidden]
I checked the answer here to a similar problem (NullPointerException) and tried all the proposed solutions without luck, including the Graeme Rocher answer, just changing the name and leaving userRepo blank. But no matter what I do, the task always tries to upload to the grails/plugins repo.
My Bintray plugin version is 1.2:
plugins {
    id "io.spring.dependency-management" version "0.5.2.RELEASE"
    id "com.jfrog.bintray" version "1.2"
}
I tried a lot of configurations inside the bintray closure in build.gradle following the Benoit tutorial. I created the plugins repo and configured the env vars BINTRAY_USER and BINTRAY_KEY with correct values (otherwise it throws a 401 Unauthorized).
Here is my last (not working) configuration.
version "0.1-SNAPSHOT"
group "danieltribeiro.plugins" // Or your own user/organization
bintray {
pkg {
userOrg = 'danieltribeiro' // If you want to publish to an organization
repo = 'plugins'
name = "${project.name}"
//issueTrackerUrl = "https://github.com/benorama/grails-$project.name/issues"
//vcsUrl = "https://github.com/benorama/grails-$project.name"
version {
attributes = ['grails-plugin': "${project.group}:${project.name}"]
name = project.version
}
}
}
Why does this task continue to POST to grails/plugins?
Probably you are using:
apply from:'https://raw.githubusercontent.com/grails/grails-profile-repository/master/profiles/plugin/templates/bintrayPublishing.gradle'
That script applies the default Bintray config.
Override this config, or comment that line out and configure it manually:
bintray {
    user = 'user'
    key = '*****'
    pkg {
        userOrg = '' //
        repo = 'nameRepo'
        licenses = project.hasProperty('license') ? [project.license] : ['Apache-2.0']
        name = "$project.name"
        issueTrackerUrl = "yourGitHub"
        vcsUrl = "yourGitHub"
        version {
            attributes = ['grails-plugin': "$project.group:$project.name"]
            name = project.version
        }
    }
}
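If you prefer not to hard-code the credentials, you can read them from the BINTRAY_USER and BINTRAY_KEY environment variables mentioned in the question (a sketch only, keeping the same pkg block as above):
bintray {
    // Read the credentials from the environment instead of hard-coding them
    user = System.getenv('BINTRAY_USER')
    key = System.getenv('BINTRAY_KEY')
    // ... same pkg block as above ...
}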

Get input approver user name in Jenkins Workflow plugin

I am trying to get the user ID of whoever approved an "input" step in a Jenkins Workflow Groovy script. Below is the sample script:
node('node1') {
    stage "test"
    input message: 'test'
}
In the Workflow UI, if a person hits "thumbs up", I want to print their user ID in the log. I don't see any option to do it.
def cause = currentBuild.rawBuild.getCause(Cause.UserIdCause)
cause.userId
will print the person who started the build. I have googled this for days but I am not finding anything. Any help here will be greatly appreciated :)
This Jira issue describes how this is likely to work going forward; however, it is still open.
In the meantime, the approach of getting the latest ApproverAction via the build actions API was suggested on the #jenkins IRC channel recently and should work; note that it is not sandbox-safe.
Something along the lines of the below for getting the most recent approver:
@NonCPS
def getLatestApprover() {
    def latest = null
    // this returns a CopyOnWriteArrayList, safe for iteration
    def acts = currentBuild.rawBuild.getAllActions()
    for (act in acts) {
        if (act instanceof org.jenkinsci.plugins.workflow.support.steps.input.ApproverAction) {
            latest = act.userId
        }
    }
    return latest
}
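A minimal usage sketch, assuming the function above is defined in the same Workflow script:
node('node1') {
    stage "test"
    input message: 'test'
    echo "Most recent approver: ${getLatestApprover()}"
}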
The JIRA issue referenced by u-phoria has been resolved and the fix released.
By setting submitterParameter to a value, the variable named by submitterParameter will be populated with the Jenkins user ID that responded to the input step.
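For example (a sketch; when no other parameters are defined on the input step, the approver's user ID should be returned directly):
node('node1') {
    stage "test"
    def approver = input message: 'test', submitterParameter: 'approver'
    echo "Approved by: ${approver}"
}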
