Thursday, June 19, 2025

#1075 25.06 New Features OpenAI adapter

 

Kudos to our Adapters team for the plethora of new and enhanced adapters coming with the 25.06 release. Today let's look at the OpenAI adapter.

The first step, or prerequisite, is an OpenAI account. I got myself one, investing a couple of €€ a month.


Click on Settings top right - 

On the left, scroll down to API Keys -

Copy the key - 

Create the connection in OIC, adding the key value -

Test and Save.
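
If you want to sanity-check the key outside OIC first, here's a minimal sketch using the OpenAI Python SDK - listing the models available to the account is a cheap way to confirm the key is valid. This is purely illustrative; the adapter needs none of it.

from openai import OpenAI

# Assumes the key is set in the OPENAI_API_KEY environment variable.
client = OpenAI()

# A successful call here confirms the key works.
for model in client.models.list():
    print(model.id)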

This adapter offers the following - 

Simple Prompt

The title gives it away here - simple prompt - ask your question and adieu!

The target mapping for the OpenAI request is as follows - 

Note the single input field. Of course, we can also do some tweaking; more about that later.
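
Conceptually, a simple prompt maps to a single user message sent to the model. A rough Python equivalent, purely for illustration - the model name is my assumption, and the adapter handles all of this plumbing for you:

from openai import OpenAI

client = OpenAI()

# One user message in, one answer back - the essence of the Simple Prompt operation.
resp = client.chat.completions.create(
    model="gpt-4o",  # illustrative model choice
    messages=[{"role": "user", "content": "Should I approve this expense?"}],
)

print(resp.choices[0].message.content)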

The response structure is as follows - 

 

My ask is - Should I approve an expense receipt dated June 13th 2025, for a ribeye steak and beer, total cost €250? 

The response -

{

  "answer" : "Here are some factors to consider before approving the expense:\n\n### 1. **Date of Receipt**\n- The receipt is dated **June 13th, 2025**.\n  - If today’s date is before June 13, 2025, this may be **fraudulent** or possibly an error.\n\n### 2. **Expense Policy**\n- Does your company's expense policy **allow alcohol** and expensive meals?\n- €250 for a ribeye steak and beer is **unusually high.** Are there itemized details, or is this for multiple people?\n- Many companies set limits per meal (e.g., €30-€75/person).\n\n### 3. **Business Purpose**\n- Is there a **clear business justification**? Who was present? Was this for a client or team event?\n- Was it a **special occasion** or an authorized business entertainment?\n\n### 4. **Documentation**\n- Is the receipt **itemized**? Vague or lump-sum receipts are red flags.\n- Is it confirmed that this was **paid by the person seeking reimbursement**?\n\n---\n\n### **Recommendation**\n**Unless the following conditions are met, you should not approve:**\n- The date is appropriate (receipt is not from the future or fraudulent).\n- The expense abides by company policy on meals and alcohol, both in amount and purpose.\n- There’s a legitimate business justification.\n- The receipt is itemized and authentic.\n\nIf in doubt, **escalate to your finance team or manager** for review.\n\n---\n\n**In summary:**  \n> Given today’s date is before June 13th, 2025, you should **not approve this receipt**. Future-dated receipts are almost always grounds for automatic rejection or further investigation.  \nIf the date is a typo, request clarification. Otherwise, reject."

}

Simple stuff: the LLM gives us some tips, but no direction. How can it, without knowing our corporate expense policies?

Commiskey Inc. is a quirky company; we value the individual above everything, but when it comes to expenses, we do have strict guidelines -

1. No meals over a value of €100.

2. No alcohol can be expensed. This includes beer and any liquors.

3. No sugary drinks can be expensed.

4. No extreme left-wing literature can be expensed.

5. Flights can only be booked in economy.

6. When hiring a car, the following brands are excluded - Porsche, Lamborghini, Bugatti and Dacia.

7. You cannot expense a meal with meat on a Friday - fish only!

So let's use the Extended Prompt in the same context - 

Extended Prompt

The REST trigger for this integration is configured as follows - 

The payload - 

The Mapping is as follows - Source

Attachment Reference - corporate expenses guideline doc.


Now to the Target

Note: I have duplicated the Input element, as I will have two inputs - the first for my expenses question and the second for the corporate expenses guidelines doc.

For the doc, the mapping converts the attachment reference into the document's text content - encodeReferenceToBase64 reads the attached file into a base64 string, and decodeBase64 then turns that back into plain text for the prompt -

oraext:decodeBase64 (oraext:encodeReferenceToBase64 (/nssrcmpr:execute/nssrcmpr:attachments/ns21:attachment/ns21:attachmentReference ) )

I also set the role in both cases to user.

There are 3 possible role values - 

  • user
  • system
  • assistant
I used user for both, as both inputs - the expenses query and the corporate expenses doc - come from me, the user.
You can use system to specify how the model should reply, e.g. You are a helpful and understanding assistant.

The final role, assistant, represents the model's own replies - you would use it to feed previous model responses back into the conversation.
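
To make the roles concrete, here is roughly how such a request would look if issued directly against the OpenAI Chat Completions API in Python. This is a sketch only - the model name, the system wording and the file name are my own assumptions, and the adapter does this plumbing for you.

from openai import OpenAI

client = OpenAI()

# The decoded corporate expenses guideline doc - what the decodeBase64 mapping above produces.
policy_text = open("commiskey_expense_guidelines.txt").read()

resp = client.chat.completions.create(
    model="gpt-4o",  # illustrative
    messages=[
        # system - how the model should behave
        {"role": "system", "content": "You are a strict expense auditor. Answer Approve or Reject, with a one-line reason."},
        # user - the expenses query
        {"role": "user", "content": "Should I approve an expense receipt dated June 13th 2025, for a ribeye steak and beer, total cost €250?"},
        # user - the corporate expenses guidelines doc
        {"role": "user", "content": policy_text},
    ],
)

print(resp.choices[0].message.content)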

Let's test the integration - 

Excellent stuff, expense not approved, and rightly so. Such decadence has no place in Commiskey Inc.

Now I enhance the integration with a "system" prompt -

I added a new input node in the mapper - 

Role is set to "system".

I run the revised integration and check the result - 

Much better!

Summa Summarum

Powerful, easy to use. Need I say more?

Saturday, June 7, 2025

#1074 - OIC 25.06 New Feature - User Friendly Error Messages

This is another compelling feature, coming with the June release. It certainly makes life easier for those troubleshooting OIC errors. 


This time no text, rather a video.

Thursday, June 5, 2025

#1072 - Managing connectivity agent - some notes

Primarily for my own benefit, but maybe of use to others -

Connectivity Agent is running - 

I shut it down and immediately restart it - 

Now to using jps to help me sort this out - 

The command is jps -vl (that's a lower-case L).

The result tells me the connectivity agent is no longer running, so I attempt to start it again - 

Alles gut!

Now back to jps

Let's kill the connectivityagent.jar process - the Windows flavour of kill -9 is as follows -

taskkill /PID NNN /F


So, in my case, taskkill /PID 21476 /F

Agent has been stopped -

Simple!
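
If you find yourself doing this regularly, the jps lookup and the taskkill can be scripted - a rough sketch for Windows, assuming the agent process shows up as connectivityagent.jar in the jps output:

import subprocess

# jps -l prints one line per JVM: "<pid> <main class or jar>"
jps = subprocess.run(["jps", "-l"], capture_output=True, text=True, check=True)

for line in jps.stdout.splitlines():
    pid, _, main = line.partition(" ")
    if "connectivityagent.jar" in main:
        # Windows equivalent of kill -9
        subprocess.run(["taskkill", "/PID", pid, "/F"], check=True)
        print(f"Killed connectivity agent (PID {pid})")
        break
else:
    print("No connectivity agent process found")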

Use jconsole to monitor -

Tuesday, June 3, 2025

#1073 OIC 25.06 New Features - AI Driven Integration Generation

This is a true game-changer of a feature: the ability to generate an integration skeleton from natural language. The feature available with 25.06 is the first step, so to speak. The integration skeleton currently generated does not include some actions, such as Map, but do expect those and others to be available in upcoming releases.

Net, net, this is our MVP in this space, but what an MVP!!!  



Nothing like kicking the tyres - 

I do that by entering the following text - 

Create an order in the NetSuite on closure of opportunity in the Salesforce. Make sure customer exists in the NetSuite before creating an order, if customer does not exist, get the customer details from salesforce and then create the customer in the NetSuite. Also add the fault handler that sends an error notification to the integration owner.

Let's see what happens -

Grab a coffee, or a cup of Irish Breakfast tea.

The basic integration structure has been generated, so take the rest of the morning off!

So what do we get?

An app-driven Integration
  • triggered by an SFDC event
Main Scope
  • check if the customer already exists in NetSuite
  • if not, get the customer details from SFDC and create the customer in NetSuite, then create the sales order in NetSuite
  • if the customer already exists, then create the sales order in NetSuite
Scope Fault Handler
  • send Notification
Also 2 connections have been created - 

Please note, what we have is the full skeleton of the integration, containing all the required actions.

Next steps are -

  • configure the connections
  • configure the integration


Then configure the integration - 

This I have already configured to subscribe to the opportunity event from SFDC.

Summa Summarum 

This cool new feature saves developers time. You get a "best practice" integration skeleton and you just need to fill in the details. Also note, the connections are only generated if they do not already exist.

Wednesday, May 28, 2025

#1071 OIC invoking OCI AI Vision Service


Introduction

Yet another post in the OIC for OCI AI Services series. Today we're looking at the AI Vision service. Firstly, what does this service offer?

Why begin with a picture of McSorley's? Because we'll use this image in some of the following invokes of the OCI AI Vision service.

What does AI Vision offer?


You can check out the OCI AI Vision home page here

Net, net the service offers the following -
  • Image Classification
  • Text Detection
  • Face Detection
  • Object Detection
  • Video Analysis

Let's look at the basic 3 steps when using AI Vision -

  • Ingesting data - e.g. images from OCI Object Storage or elsewhere. OIC can pull in data from virtually anywhere; we ship with a native action for OCI Object Storage as well as a plethora of adapters.
  • Understanding data - here's where AI Vision does its magic, recognising images, parsing text etc. OIC can easily invoke AI Vision; this is what we'll cover today.
  • Using the intel - here we take the result(s) from AI Vision and use them in our business processes. OIC is THE business process automation toolkit, so let's kick the tyres!

OCI AI Vision 

Here is the Vision menu in OCI. 

Object Detection

This feature allows one to identify objects and their location within an image along with a confidence score.

I try it out - 

Now with a picture with more action in it - 

Yes, the above screenshot does not include the confidence values.

But you get the idea. I want to know what's going on in the image, AI Vision tells me that, assigning a degree of confidence to what it finds.

So how can we do this in OIC?

First thing I do is check out the Python code -

Now to the API docs for OCI AI Vision, where I find the API endpoints -

I'm in the Phoenix region, so I will use -

https://vision.aiservice.us-phoenix-1.oci.oraclecloud.com

Now to the API for object detection -

POST /20220125/actions/analyzeImage

The complete URL - https://vision.aiservice.us-phoenix-1.oci.oraclecloud.com/20220125/actions/analyzeImage

Request Payload - the basic input here is the image for analysis.

Let's just go with OBJECT_DETECTION here.

The final request payload is as follows  

{
  "features": [
    {
      "featureType": "OBJECT_DETECTION"
    }
  ],
  "image": {
    "source": "INLINE",
    "data": "base64"
  },
  "compartmentId": "yourCompartment_ocid}}"
}
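
Before wiring this up in OIC, you can exercise the endpoint directly with a few lines of Python - a sketch only, using the OCI SDK's request signer together with the requests library. The image file name is illustrative, and I'm using the tenancy OCID as the compartment purely to keep the example short.

import base64

import oci
import requests

config = oci.config.from_file()  # reads ~/.oci/config
signer = oci.signer.Signer(
    tenancy=config["tenancy"],
    user=config["user"],
    fingerprint=config["fingerprint"],
    private_key_file_location=config["key_file"],
)

# Base64-encode the image we want analysed (file name is illustrative).
with open("mcsorleys.jpg", "rb") as f:
    image_b64 = base64.b64encode(f.read()).decode()

payload = {
    "features": [{"featureType": "OBJECT_DETECTION"}],
    "image": {"source": "INLINE", "data": image_b64},
    "compartmentId": config["tenancy"],  # or a dedicated compartment OCID
}

resp = requests.post(
    "https://vision.aiservice.us-phoenix-1.oci.oraclecloud.com/20220125/actions/analyzeImage",
    json=payload,
    auth=signer,
)
resp.raise_for_status()

for obj in resp.json()["imageObjects"]:
    print(obj["name"], obj["confidence"])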

The response payload is as follows  -

{
  "imageObjects": [{
    "name": "Person",
    "confidence": 0.98758954,
    "boundingPolygon": {
      "normalizedVertices": [{
        "x": 0.6116622686386108,
        "y": 0.584307074546814
      }, {
        "x": 0.6986929178237915,
        "y": 0.584307074546814
      }, {
        "x": 0.6986929178237915,
        "y": 0.9633761644363403
      }, {
        "x": 0.6116622686386108,
        "y": 0.9633761644363403
      }]
    }
  }, {
    "name": "Chair",
    "confidence": 0.984481,
    "boundingPolygon": {
      "normalizedVertices": [{
        "x": 0.2508918046951294,
        "y": 0.7415730953216553
      }, {
        "x": 0.32072916626930237,
        "y": 0.7415730953216553
      }, {
        "x": 0.32072916626930237,
        "y": 0.9100103378295898
      }, {
        "x": 0.2508918046951294,
        "y": 0.9100103378295898
      }]
    }
  }, {
    "name": "Footwear",
    "confidence": 0.9828044,
    "boundingPolygon": {
      "normalizedVertices": [{
        "x": 0.5381702184677124,
        "y": 0.9290227890014648
      }, {
        "x": 0.5808274149894714,
        "y": 0.9290227890014648
      }, {
        "x": 0.5808274149894714,
        "y": 0.9576336741447449
      }, {
        "x": 0.5381702184677124,
        "y": 0.9576336741447449
      }]
    }
  }, {
    "name": "Person",
    "confidence": 0.9810399,
    "boundingPolygon": {
      "normalizedVertices": [{
        "x": 0.5125582814216614,
        "y": 0.5717782378196716
      }, {
        "x": 0.5918540954589844,
        "y": 0.5717782378196716
      }, {
        "x": 0.5918540954589844,
        "y": 0.9574788808822632
      }, {
        "x": 0.5125582814216614,
        "y": 0.9574788808822632
      }]
    }
  }, {
    "name": "Footwear",
    "confidence": 0.97873676,
    "boundingPolygon": {
      "normalizedVertices": [{
        "x": 0.5209354758262634,
        "y": 0.9121176600456238
      }, {
        "x": 0.5540853142738342,
        "y": 0.9121176600456238
      }, {
        "x": 0.5540853142738342,
        "y": 0.9327118396759033
      }, {
        "x": 0.5209354758262634,
        "y": 0.9327118396759033
      }]
    }
  }],
  "labels": null,
  "ontologyClasses": [{
    "name": "Chair",
    "parentNames": ["Furniture"],
    "synonymNames": []
  }, {
    "name": "Footwear",
    "parentNames": ["Clothing"],
    "synonymNames": []
  }, {
    "name": "Person",
    "parentNames": [],
    "synonymNames": []
  }, {
    "name": "Clothing",
    "parentNames": [],
    "synonymNames": []
  }, {
    "name": "Furniture",
    "parentNames": [],
    "synonymNames": []
  }],
  "imageText": null,
  "objectProposals": null,
  "detectedFaces": null,
  "detectedLicensePlates": null,
  "imageClassificationModelVersion": null,
  "objectDetectionModelVersion": "2.0.3",
  "textDetectionModelVersion": null,
  "objectProposalModelVersion": null,
  "faceDetectionModelVersion": null,
  "licensePlateDetectionModelVersion": null,
  "errors": []
}

I create the connection in OIC -

then on to the integration -

The AI Vision Invoke is configured as follows -

You've already seen the request and response payloads, so I'll skip them.

I only want to return a precis of the AI Vision response, so my trigger response has been defined as follows - 

{
  "imageObjects" : [ {
    "name" : "Person",
    "confidence" : 0.98758954
  }, {
    "name" : "Chair",
    "confidence" : 0.984481
  } ],
  "ontologyClasses" : [ {
    "name" : "Chair",
    "parentNames" : [ "Furniture" ]
  }, {
    "name" : "Footwear",
    "parentNames" : [ "Clothing" ]
  } ]
}
 
I complete the mapping and test - 

Regarding the image I used -

McSorley's is an institution in New York - the oldest pub in the city, in the hands of the Irish to this very day. They only serve 2 types of beer: a dark beer, which is rather unpalatable, and a lager, which is to everyone's taste. The beer is served in very small glasses, ergo, you don't order 1, you order 4 - and if you're with me and the bauld Peter Meleady, 24.

Image Classification

According to the docs - image classification assigns classes and confidence scores based on the scene and contents of an image.

So this is subtly different from the OBJECT_DETECTION feature detailed above.

The tailored response to this API invoke is as follows -

This invoke, as expected, does not return any X, Y co-ordinates.

Face Detection

As the name suggests, detects faces and their X, Y positions in the image.

Text Detection

Let's try this out in OCI - 

Looks good! Now to OIC -


Just to note here, we need to set featureType to TEXT_DETECTION.

Here's the request payload for the API invoke -

{
  "features": [
    {
      "featureType": "TEXT_DETECTION"
    }
  ],
  "image": {
    "source": "INLINE",
    "data": "base64"
  },
  "compartmentId": "yourCompartment_ocid}}"
}
 
The response payload I initially set to {}. I then run the integration in debug mode and copy and paste the JSON response shown in the activity stream.

I configure the REST trigger to return only a subset of this data -

Video Analysis 


The video analysis includes - 

  • Label Detection
  • Object Detection
  • Text Detection
  • Face Detection

Summa Summarum

AI Vision is yet another cool AI service in the OCI stack. This post is just an introduction to the service, but I hope it has whetted your appetite!

Bon appetit!