Wednesday, April 15, 2026

#1138 - OIC 26.04 New Features - My Top Picks

Introduction 

This is a quick review of my top picks from the new features coming with 26.04, and do we have a plethora of new features for you! These include -
  • Human-in-the-Loop (HITL)
  • Enhanced AI Agent Native Action
  • New Agent Patterns incl. support for OCI Gen AI hosted models
  • Knowledge Base
  • OIC Agent Enhancements
    • Versioning and cloning
    • Ability to change pattern
    • Integrations with a GET trigger can now be exposed as tools
  • OIC Project Enhancements
  • Observability
    • Service Metrics for File Server
    • Service Metrics for Async Queue Depth
    • Service Metrics for Scheduled jobs

So, without further ado, let's begin - 

Human-in-the-Loop


Human-in-the-Loop (HITL) in Oracle Integration Cloud (OIC) is a functionality that enables, manages, and automates human interaction with agentic AI workflows. It allows for the integration of human judgment, approval, and oversight into automated processes to improve accuracy, safety, and compliance. Check out my blog post here for more details.

Enhanced AI Agent Native Action 

With 26.04, there is support for invoking the agent and getting the agent's response.

The agent response is retrieved via the option Get AI agent activity stream.

New Agent Patterns

With 26.04 we will be supporting the following patterns - 
  • Plan for OCI GenAI (26.04)
  • ReAct for OCI GenAI (26.04)
  • Plan (26.04)
  • ReAct (26.04)
ReAct = Reason & Act; Plan = Plan & Execute.
These two patterns take different approaches to task complexity and execution flow.
ReAct is iterative, looping over reason, act, observe, whereas Plan does the thinking upfront: it decomposes the requirements into a structured plan and then executes that plan.
Ergo, ReAct is best for requests that require flexibility and handling of unpredictability; Plan is best for more structured tasks, such as my apocryphal order processing demo.
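To make the distinction concrete, here is a minimal, hypothetical sketch in plain Python - toy control flow only, nothing OIC-specific, with `act` standing in for the LLM plus tools:

```python
# Illustrative only - not OIC's implementation of either pattern.

def run_react(first_action, act):
    """ReAct: loop over reason -> act -> observe.
    The next action is derived from the latest observation,
    so the agent can adapt mid-flight."""
    history, action = [], first_action
    while action is not None:
        observation = act(action)             # act + observe
        history.append((action, observation))
        action = observation.get("next")      # reason: what now?
    return history

def run_plan(goal, decompose, act):
    """Plan & Execute: decompose the goal into a structured
    plan upfront, then execute the steps in order."""
    plan = decompose(goal)                    # all thinking upfront
    return [(step, act(step)) for step in plan]
```

In the ReAct sketch, an unexpected observation can change the remaining path; in the Plan sketch it cannot - which is exactly why Plan suits predictable, structured tasks.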

Now OIC supports both patterns, with or without OCI Gen AI. 
The ability to leverage OCI GenAI as the LLM for your OIC-based AI Agents is a huge value add for OIC customers. It makes it so easy to kick the tyres, with no need for an OpenAI account etc. It also gives you more choice. Check out the excellent post from my colleague Steve T.

Regarding data privacy etc. - check out the following link to read how OCI Generative AI handles user data. You can also check out the GenAI FAQ -


The first thing to check, when considering OCI GenAI for your agent pattern, is model availability.
Consider the following: your OIC instance is in a specific region, in my case, us-phoenix-1.
So you need to check which models are available in that region; click here to check out region availability.

Here are some screenshots for North America; note the Phoenix column -

Check out the notes column for Gemini -

Id est, these are external calls; this may impinge on your decision whether or not to use this model.


You can also go to the Analytics & AI page in your OIC Console. Ensure you are in the correct region (in my case, us-phoenix-1), then click on Chat. Ensure you are in your OIC compartment and then click the model dropdown list -

Select the model you want to use for your Agent Pattern and try it out - 

Ok, let's go for openai.gpt-4.1-mini.

I create a new Agent Pattern -

I just need to add the model type -

The pattern has been created along with the integration -

and the Lookup, which is used by the integration - 
Edit the Lookup - 

Set the Region to your region, in my case, us-phoenix-1. Set the Compartment_OCID to your OIC compartment OCID.
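So the generated Lookup ends up with two rows along these lines (illustrative values only; the OCID is a placeholder - use your own):

```
Region             us-phoenix-1
Compartment_OCID   ocid1.compartment.oc1..<your OIC compartment OCID>
```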

Now to my new Agent -

Excellent! 

Back to the choice of model - some heuristics you could use are -
  • quality of reasoning
  • speed/latency
  • price/cost
You should also consider what your agent needs to do, i.e. how complex are the tasks it needs to execute?

Also, how valuable are these tasks from a business perspective? e.g. the complex processing of large orders may be key for your business.

You may have other agentic workflows that are less complex, but with high throughputs; a different model may be appropriate for such.

Then you could have simple agentic workflows, like my order processing example - here the openai.gpt-4.1-mini model suffices.

I think of these 3 categories as racehorse, workhorse and pony.
Whatever you think of my category names, you will need to make informed choices.

Finally, the decision to use OCI GenAI in this case could be driven by data privacy concerns, as well as simplified billing, with OIC and GenAI costs coming out of the same UCM pot.

Knowledge Base

So what is a knowledge base? Think of this as a centralized, structured repository of data that enables your agents to understand the context in which they are operating. A very simple example would be an expenses approvals agent leveraging a knowledge base that contains your corporate expense guidelines. An LLM such as ChatGPT would usually be blissfully unaware of such guidelines.

The pre-requisite for adding a knowledge base is an OpenSearch connection -


Now I can create the knowledge base - 

The following embedding models are supported - 
FYI - the tools available in the Hugging Face model hub enable advanced semantic search, hybrid search, and multi-modal retrieval within the OpenSearch engine.

Back to my knowledge base, let's add a document, my expenses guidelines doc - 

Expense Guidelines for NiallC Corp
------------------------------------
1. No meat dishes can be expensed on a Friday
2. No hard liquor can be expensed.
3. Beer can be expensed, but only 1 beer per meal, except on March 17th, when staff can have up to 10 beers to celebrate Patrick's Day.
4. No meal can total over $100, except on March 17th
5. No extreme left wing literature can be expensed.
6. No extreme right wing literature can be expensed.
7. Only economy flights can be expensed. The exception is for flights longer than 6 hours, where premium economy can be booked and expensed.
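Just to make the decision logic concrete, here is a hypothetical, hard-coded Python version of a few of those rules. The agent does not run code like this - it retrieves the guideline text from the knowledge base and reasons over it with the LLM - but it shows the kind of checks we expect it to make:

```python
from datetime import date

# Hypothetical hard-coding of guidelines 1, 3 and 4 above -
# purely to illustrate what the agent should conclude.

def check_expense(total, items, when):
    violations = []
    st_patricks = (when.month, when.day) == (3, 17)
    beers = sum("beer" in i.lower() for i in items)
    if beers > (10 if st_patricks else 1):
        violations.append("beer limit exceeded")        # guideline 3
    if total > 100 and not st_patricks:
        violations.append("meal total over $100")       # guideline 4
    if when.weekday() == 4 and any("meat" in i.lower() for i in items):
        violations.append("meat dish on a Friday")      # guideline 1
    return violations
```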
 

So I need to ingest this into my knowledge base; for this I use a new action in OIC - 

Here I am ingesting files from an SFTP server directory - 

The RAG Ingest is configured as follows - 

I map as follows and run the integration - 

documentFile is set to the File Reference returned by the GetDoc action.
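Under the covers, a RAG ingest typically chunks the document and stores an embedding per chunk in the vector store. A generic sketch of the chunking side (hypothetical sizes; not OIC's internals):

```python
# Generic overlapping-window chunker - illustrative only.
# Overlap keeps sentences that straddle a boundary retrievable
# from either neighbouring chunk.

def chunk(text, size=200, overlap=40):
    chunks, start = [], 0
    while start < len(text):
        chunks.append(text[start:start + size])
        start += size - overlap
    return chunks
```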

Looks good; let's check out the knowledge base -

Now to leveraging this doc in an expense approvals scenario; first step is the creation of a "search" integration -

So how did I implement this? 

I use the new RAG search action - 

I test the integration - 

Check out the activity stream - 

Note the score value is 1, a direct hit!
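That score of 1 is worth a comment. Vector search scores are typically a normalized similarity between the query embedding and a stored chunk, so identical text scores 1 and unrelated text scores near 0. The exact scoring depends on the embedding model and the OpenSearch configuration, but a toy bag-of-words cosine similarity shows the idea:

```python
import math
from collections import Counter

# Toy embedding: word-count vectors compared via cosine similarity.
# Real RAG search uses a dense embedding model in OpenSearch, but
# the intuition is the same.

def embed(text):
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[w] * b[w] for w in a)
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0
```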
Now to exposing this as a tool -

Now to my expense approvals agent, which will leverage this tool; let's run it! -


OIC Agent Enhancements 

Versioning & Cloning

Versioning and cloning are now supported in 26.04 - 

Changing the Pattern

This is available via the Agent - Edit info UI - 

Expose GET Integrations as Tools

In 26.01 - 

In 26.04 - 


OIC Projects

26.04 includes support for mass copy of integrations from the global space into a project.



Observability

The 26.04 release includes service metrics for OIC File Server - namely, metrics on current access.

This is very useful for customers with use cases that heavily leverage OIC File Server, via the ftp adapter.

We also have released the service metric for async queue depth; this is extremely useful for throughput and performance monitoring. 

Check out my detailed post here.

Summa Summarum

For me, 26.04 is one of those milestone releases. The ability to use OCI GenAI as the agent pattern LLM is crucial for many of our customers. The addition of the Knowledge Base increases the power of our agents, and the overall ease of use makes it very easy for OIC users to adopt our agentic framework.