Import/Post CSV files into ServiceNow - post

We have a requirement for a CSV file to be pushed to the instance, imported, and an incident created. I have created the import table and transform map, and I've successfully tested them manually.
However, when I attempt to follow the instructions from the ServiceNow docs page "Post CSV files to Import Set", nothing happens: the screen goes blank after I'm prompted for login credentials.
When I check the system logs and import logs, all I see is the error "java.lang.NullPointerException".
My url is basically the following one: https://.service-now.com/sys_import.do?sysparm_import_set_tablename=&sysparm_transform_after_load=true&uploadfile=
Is there something I'm missing?
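For reference, here is a minimal Node.js sketch of the kind of multipart POST I understand the docs to describe. This is an illustration under assumptions, not the documented method verbatim: the instance name, table name, credentials, and file name are placeholders, and the uploadfile field name is simply taken from the URL above.

const fs = require('node:fs');

async function postCsv() {
    // Build a multipart body with the CSV attached (FormData/Blob/fetch are global in Node 18+).
    const form = new FormData();
    form.append('uploadfile', new Blob([fs.readFileSync('incidents.csv')], { type: 'text/csv' }), 'incidents.csv');

    // Placeholders: replace <instance> and <import_table> with real values.
    const url = 'https://<instance>.service-now.com/sys_import.do'
        + '?sysparm_import_set_tablename=<import_table>&sysparm_transform_after_load=true';

    const res = await fetch(url, {
        method: 'POST',
        headers: { Authorization: 'Basic ' + Buffer.from('user:password').toString('base64') },
        body: form,
    });
    console.log(res.status, await res.text()); // a blank page here would mirror the symptom above
}

postCsv().catch(console.error);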

I do the same thing, but have the file come in via an email to my ServiceNow instance and process it using an inbound action.
var type = {};
type.schedule = 'u_imp_tmpl_u_term_phr_empl_mvs_ids'; // Display name for scheduled import -- eb9f2dae6f46a60051281ecbbb3ee4a5
type.table = 'u_imp_tmpl_u_term_phr_empl_mvs_ids';    // Import table name

gs.log('0. Process File Start');
if (type.schedule != '' && type.table != '') {
    gs.log('1. Setting up data source');
    current.name = type.schedule + ' ' + gs.nowDateTime(); // append date/time to the data source name for auditing
    current.import_set_table_name = type.table;
    current.import_set_table_label = "";
    current.type = "File";
    current.format = "CSV"; // "Excel CSV (tab)";
    current.header_row = 1;
    current.file_retrieval_method = "Attachment";
    var myNewDataSource = current.insert();
    gs.log('2. Data source inserted with sys_id - ' + myNewDataSource);

    // Point the scheduled import record to the new data source
    var gr = new GlideRecord('scheduled_import_set');
    gr.addQuery('name', type.schedule);
    gr.query();
    if (gr.next()) {
        gs.log('3. Found Scheduled Import definition');
        gr.data_source = myNewDataSource;
        gr.update();
        gs.log('4. Scheduled Import definition updated to point to data source just added');
        //Packages.com.snc.automation.TriggerSynchronizer.executeNow(gr);
        // Execute a scheduled script job
        SncTriggerSynchronizer.executeNow(gr);
    } else {
        // Add error conditions to email somebody that this has occurred
        gs.log('5. ERROR - Unable to locate scheduled import definition. Please contact your system administrator');
    }
    gs.log('6. Inbound email processing complete');
    //gs.eventQueue("ep_server_processed", current);
    event.state = "stop_processing";
} else {
    gs.log('7. Inbound email processing skipped');
}

Related

Cloud Function -> BigQuery: Permission denied while getting Drive credentials, but it works from the Cloud Console?

This is what I have.
For my project I have a Cloud Function that is triggered by Pub/Sub.
The function selects data from a BigQuery table to which a Google Sheet is connected, so my data lives in the Google Sheet and BigQuery is used to query it.
The same function then inserts the selected data into another table inside BigQuery. All the selecting and inserting is done with BigQuery jobs.
Here is the problem
When the function is triggered, I get the following error message:
Error: Access Denied: BigQuery BigQuery: Permission denied while getting Drive credentials. at new ApiError (/workspace/node_modules/@google-cloud/common/build/src/util.js:73:15) at Util.parseHttpRespBody (/workspace/node_modules/@google-cloud/common/build/src/util.js:208:38) at Util.handleResp (/workspace/node_modules/@google-cloud/common/build/src/util.js:149:117) at /workspace/node_modules/@google-cloud/common/build/src/util.js:479:22 at onResponse (/workspace/node_modules/retry-request/index.js:228:7) at /workspace/node_modules/teeny-request/build/src/index.js:226:13 at processTicksAndRejections (internal/process/task_queues.js:95:5)
The things that do work
When the same query is run via the Cloud Console, selecting from and inserting into the BigQuery table both work.
Querying the Google Sheet via the BigQuery page inside Google Cloud also works as it should.
What I have tried
I followed everything in the "Querying Drive data" documentation.
I have all the required permissions.
Is this a bug, or am I doing something wrong?
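A hedged aside for anyone comparing notes: this error often shows up when the BigQuery client is created without a Drive scope, since a Sheet-backed table needs Drive access on top of BigQuery access. A minimal sketch, assuming the @google-cloud/bigquery Node.js client and that the function's service account can open the Sheet:

const { BigQuery } = require('@google-cloud/bigquery');

// Request a Drive scope alongside the BigQuery one, because the queried
// table is backed by a Google Sheet stored on Drive.
const bigquery = new BigQuery({
    scopes: [
        'https://www.googleapis.com/auth/bigquery',
        'https://www.googleapis.com/auth/drive.readonly',
    ],
});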
Edit
I forgot to add my code.
This is the part where I select the data.
Something happens with Tag_Data that triggers the catch.
const Select_Tag_Info_query = `SELECT * FROM \`project.Dataset.table\` where TagId = "${tag}"`;
console.log(Select_Tag_Info_query); // outputs: SELECT * FROM `pti-tag-copy.ContainerData2.PTI-tags` where TagId = "tag-1"

const Tag_Data = SelectJob(Options(Select_Tag_Info_query));
console.log(`This is Tag-Data: ${Tag_Data}`); // outputs: "This is Tag-Data: [object Promise]"

Tag_Data.catch((error) => {
    console.log(`Something went wrong with selecting tag data from the spreadsheet`);
    console.error(error);
});

// The resolve is not called because of the error above
Promise.resolve(Tag_Data).then(function (Tag_Data) {
    let returned_tag = Tag_Data.TagId;
    let returned_country = Tag_Data.Country;
    let returned_location = Tag_Data.Location;
    let returned_poleId = Tag_Data.PoleId;
    console.log(`this is tag: ${returned_tag}`);
    console.log(`this is country: ${returned_country}`);
    console.log(`this is location: ${returned_location}`);
    console.log(`this is poleid: ${returned_poleId}`);
});
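Just to illustrate the [object Promise] output above: SelectJob is async, so it returns a promise, and logging the promise directly prints [object Promise]. A small sketch of the awaited form, assuming an async caller and reusing the question's own helpers:

async function main() {
    // SelectJob and Options are the functions defined further down in this post.
    const tagData = await SelectJob(Options(Select_Tag_Info_query));
    console.log(`This is Tag-Data: ${JSON.stringify(tagData)}`); // a plain object, not [object Promise]
}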
This is what the BigQuery jobs code looks like.
function Options(query) {
    const options = {
        configuration: {
            query: {
                query: query,
                useLegacySql: false,
            },
            location: 'EU'
        },
    };
    return options;
}

// This function is for selecting the tag data from the spreadsheet via BigQuery
async function SelectJob(options) {
    // Run a BigQuery query job.
    console.log(`select job is called.`); // This part is output
    const [job] = await bigquery.createJob(options); // something goes wrong here.
    const [rows] = await job.getQueryResults();
    if (rows.length < 1) {
        console.log("array is empty");
    } else {
        // None of the logs below are output
        console.log(`${rows[0]["TagId"]}`);
        console.log(`${rows[0]["Country"]}`);
        console.log(`${rows[0]["Location"]}`);
        console.log(`${rows[0]["PoleId"]}`);
        console.log(`selected tag data from spreadsheet`);
        return {
            TagId: rows[0]["TagId"],
            Country: rows[0]["Country"],
            Location: rows[0]["Location"],
            PoleId: rows[0]["PoleId"]
        };
    }
}
I think something goes wrong at this part of the code inside SelectJob(), because the other console.logs are not output. My reason for saying that is that inside the BigQuery project history I don't see the query; I only see a red circle with a white exclamation mark (see the BigQuery project history photo).
const [job] = await bigquery.createJob(options);
const [rows] = await job.getQueryResults();

Azure DevOps Server 2019: programmatically copying a test case fails with exception "TF237124: Work Item is not ready to save"

I'm able to copy most test cases with this code (I'm trying to copy shared steps so they become part of the test case itself), but this one will not copy, and I cannot see any error message explaining why. Could anyone suggest anything else to try? See the output from the Immediate window below. Thanks, John.
?targetTestCase.Error
null
?targetTestCase.InvalidProperties
Count = 0
?targetTestCase.IsDirty
true
?targetTestCase.State
"Ready"
?targetTestCase.Reason
"New"
foreach (ITestAction step in testSteps)
{
    if (step is ITestStep)
    {
        ITestStep sourceStep = (ITestStep)step;
        ITestStep targetStep = targetTestCase.CreateTestStep();
        targetStep.Title = sourceStep.Title;
        targetStep.Description = sourceStep.Description;
        targetStep.ExpectedResult = sourceStep.ExpectedResult;

        // Copy attachments
        if (sourceStep.Attachments.Count > 0)
        {
            string attachmentRootFolder = _tfsServiceUtilities.GetAttachmentsFolderPath();
            string testCaseFolder = _tfsServiceUtilities.CreateDirectory(attachmentRootFolder, "TestCase_" + targetTestCase.Id);
            // Unique folder path for the test step
            string testStepAttachmentFolder = _tfsServiceUtilities.CreateDirectory(testCaseFolder, "TestStep_" + sourceStep.Id);
            using (var client = new WebClient())
            {
                client.UseDefaultCredentials = true;
                foreach (ITestAttachment attachment in sourceStep.Attachments)
                {
                    string attachmentPath = testStepAttachmentFolder + "\\" + attachment.Name;
                    client.DownloadFile(attachment.Uri, attachmentPath);
                    ITestAttachment newAttachment = targetTestCase.CreateAttachment(attachmentPath);
                    newAttachment.Comment = attachment.Comment;
                    targetStep.Attachments.Add(newAttachment);
                }
            }
        }
        targetTestCase.Actions.Add(targetStep);
        targetTestCase.Save();
    }
}
Since this code works for most test cases, the issue may come from this particular test case. In order to narrow down the issue, please try the following:
Run the code on another client machine to see whether it works.
Try to modify this particular test case using the account the API uses, to see whether it can be saved successfully.
Try validating the work item prior to saving; the Validate() method returns an ArrayList of the invalid fields.

Use Algolia Search with Firebase Database in Xcode iOS Swift

I'm trying to connect my Firebase Database with Algolia Search.
I followed this link from Algolia:
https://www.algolia.com/doc/tutorials/indexing/3rd-party-service/firebase-algolia/
It says I have to create a Node.js application, so I did.
Then I had to create a file called .env and generate a new private key from Firebase (which I did).
But the next step is a little bit strange, because it asks me to create a JavaScript file called index.js in the "Create" section of the link I provided.
From the "Create" section on I'm very confused, and I don't know whether this link is the right way to connect my Firebase Database and Algolia for search.
If someone knows how to do that, it would help me a lot.
Thanks in advance for your advice, or tutorials if you have any.
First I had to create a .env file (this is the name of the file, not an extension).
Here's the "Create" section I'm talking about.
The third step is to connect the Firebase database (I tried to connect mine).
Then I had to run node index.js in the terminal, but the terminal said: FIREBASE FATAL ERROR: Can't determine Firebase Database URL. Be sure to include databaseURL option when calling firebase.initializeApp().
Here's my index.js file:
var dotenv = require('dotenv');
var firebaseAdmin = require("firebase-admin");
var algoliasearch = require('algoliasearch');

// load values from the .env file in this directory into process.env
dotenv.load();

// configure firebase
var serviceAccount = require("./serviceAccountKey.json");
firebaseAdmin.initializeApp({
    credential: firebaseAdmin.credential.cert(serviceAccount),
    databaseURL: process.env.FIREBASE_DATABASE_URL // Here the error
});
var database = firebaseAdmin.database();

var algolia = algoliasearch(process.env.ALGOLIA_APP_ID, process.env.ALGOLIA_API_KEY);
// Index for Algolia with my database
var index = algolia.initIndex('users');

// Begin import
var contactsRef = database.ref("/users"); // My table is called "users" in Firebase
contactsRef.once('value', initialImport);

function initialImport(dataSnapshot) {
    // Array of data to index
    var objectsToIndex = [];
    // Get all objects
    var values = dataSnapshot.val();
    // Process each child Firebase object
    dataSnapshot.forEach(function(childSnapshot) {
        // get the key and data from the snapshot
        var childKey = childSnapshot.key;
        var childData = childSnapshot.val();
        // Specify Algolia's objectID using the Firebase object key
        childData.objectID = childKey;
        // Add object for indexing
        objectsToIndex.push(childData);
    });
    // Add or update new objects
    index.saveObjects(objectsToIndex, function(err, content) {
        if (err) {
            throw err;
        }
        console.log('Firebase -> Algolia import done');
        process.exit(0);
    });
}
// End Algolia import
Now my .env file, which I created but which doesn't seem to work:
ALGOLIA_APP_ID=TFISJH1AP3 // The app name in Algolia
ALGOLIA_API_KEY=83bd61cd47159f... // Secret key
FIREBASE_DATABASE_URL=https://live-event-3989e.firebaseio.com // Maybe this is not how I should write my Firebase URL?
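A note on the file above: dotenv does not treat // as a comment delimiter, so each value most likely includes the trailing comment text (the database URL would come through as "https://live-event-3989e.firebaseio.com // Maybe ..."), which would explain the FIREBASE FATAL ERROR. A quick sanity-check sketch:

// Print the raw value that dotenv actually loaded; if the inline comment
// shows up inside the string, remove the comments from the .env file.
require('dotenv').config();
console.log(JSON.stringify(process.env.FIREBASE_DATABASE_URL));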
Set it manually:
firebase.initializeApp({
    databaseURL: "https://<yourURL>.firebaseio.com",
});
and
const ALGOLIA_APP_ID = '<your id>';
const ALGOLIA_API_KEY = '<your key>';
// configure algolia
const algolia = algoliasearch(
    ALGOLIA_APP_ID,
    ALGOLIA_API_KEY
);

When I try to read an X12 204 using EDI Fabric, I get "Invalid Node Name: ST", but the file is well formed. Any idea why?

Here's an example 204 I made. It validates with a couple of different validation tools (EDI Notepad and Altova), but when I try to use EDI Fabric to parse it, it gets the ISA and GS data just fine, but then errors out with "Invalid Node Name: ST".
I can't figure out why. Any ideas?
ISA*ZZ* *ZZ* *ZZ*XXXX *ZZ*YYYY *170130*1025*U*00401*485789958*0*P*~
GS*SM*YYYY*XXXX*20170130*1027*485790079*X*004010
ST*204*485790093
B2**YYYY**123456789**CC
B2A*00
L11*123456789*CR
S5*1*LD
G62*64*20160131*1*1351
SE*7*485790093
GE*1*485790079
IEA*1*485789958
Here is the code:
internal static void Parse204(FileStream file, List<MyCompany.TruckRouteInfo> result)
{
    var reader = EdiFabric.Framework.Readers.X12Reader.Create(file);
    file.Flush();
    var qEdiItems = reader.ReadToEnd();
    var ediItems = qEdiItems.ToList();
    var m204 = ediItems.OfType<M_204>().ToList();
    foreach (var item in m204)
    {
        MyCompany.TruckRouteInfo stop = new MyCompany.TruckRouteInfo();
        foreach (var l11 in item.S_L11)
        {
            if (l11.D_128_2 == EdiFabric.Rules.X12004010204.X12_ID_128.CR)
            {
                stop.Reference1 = l11.D_127_1;
            }
        }
        result.Add(stop);
    }
}
I've literally just copied your example and pasted it into a file, which was processed fine. Works on my machine :)
My best guess would be to open the file and inspect the line terminators for any discrepancies, which might have been sorted out when I copied and pasted it.

asmack-android-8-4.0.5.jar: createOutgoingFileTransfer needs a full JID, and I can't get it

1. I had read https://www.igniterealtime.org/builds/smack/docs/latest/documentation/extensions/filetransfer.html
Here is a code snippet from that guide; it does not need the resource part:
// Create the file transfer manager
FileTransferManager manager = new FileTransferManager(connection);
// Create the outgoing file transfer
OutgoingFileTransfer transfer = manager.createOutgoingFileTransfer("romeo@montague.net");
// Send the file
transfer.sendFile(new File("shakespeare_complete_works.txt"), "You won't believe this!");
2. So I read the Spark source code (org.jivesoftware.spark.PresenceManager) and found this method, so the documentation has not been updated in a long time:
/**
 * Returns the fully qualified jid of a user.
 *
 * @param jid the user's bare jid (ex. derek@jivesoftware.com)
 * @return the fully qualified jid of the user (ex. derek@jivesoftware.com --> derek@jivesoftware.com/spark)
 */
public static String getFullyQualifiedJID(String jid) {
    System.out.println("getFullyQualifiedJID : " + jid);
    final Roster roster = SparkManager.getConnection().getRoster();
    Presence presence = roster.getPresence(jid);
    System.out.println("getFullyQualifiedJID : " + presence.getFrom());
    return presence.getFrom();
}
3. I found that this method does not work for asmack, so I googled it and found this:
Smack's FileTransferManager.createOutgoingFileTransfer only accepts full JIDs. How can I determine the full JID of a user in Smack?
// snippet code from my project
Roster roster = connection.getRoster();
List<Presence> presenceList = roster.getPresences(jid);
Log.d(TAG, "bareJid : " + jid);
for (Presence presence : presenceList) {
    Log.d(TAG, "fullJID : " + presence.getFrom());
}
Why can the code not get the full JID?
The output:
12-23 06:55:35.840: D/MChat(1805): bareJid : test@tigereye-pc
12-23 06:55:35.840: D/MChat(1805): fullJID : test@tigereye-pc
4. The result is the same, so how can I get the full JID?
Thanks & Regards
You have to supply the full user ID, as: user@serveripaddress/Smack
For example:
xyz@192.168.1.1/Smack
They need the full JID including the client resource.
You can do something like this:
String fullJID = xmppConnection.getRoster().getPresence(JID).getFrom();
The JID variable here is the bare JID, without the resource.
