Monday, April 27, 2026

#1142 - OIC Agentic AI Framework and Agent Input

Introduction 

The goal of this post is to detail how what we enter in our OIC agent projects surfaces in the prompts sent to the LLM.
 
Generally speaking, this is what is sent to the Agent/LLM - 
  • User Prompt – the specific request from the user, e.g. approve order number 123 for customer XYZ
  • System Prompt – establishes the core identity, tone, and global rules that persist across all conversations. In our case, this could include the order processing rules, etc.
  • Tool Parameters – the specific data points required for the agent to execute a tool
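The three inputs above can be sketched as a single chat request payload. This is an illustrative sketch only, assuming a chat-completions style API; `build_request` and the sample strings are my own, not OIC internals.

```python
# Sketch: how the three inputs combine into one request payload.
# build_request is a hypothetical helper, not part of OIC or any SDK.

def build_request(system_prompt, user_prompt, tools):
    """Combine the three inputs into the message payload sent to the LLM."""
    return {
        "messages": [
            {"role": "system", "content": system_prompt},  # identity, tone, global rules
            {"role": "user", "content": user_prompt},      # the specific request
        ],
        "tools": tools,  # tool names, descriptions and parameter schemas
    }

request = build_request(
    system_prompt="You are an order processing agent. Follow the order processing rules.",
    user_prompt="Approve order number 123 for customer XYZ",
    tools=[{"type": "function", "function": {"name": "approve_order"}}],
)
```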

Let's use the Resubmission Agent from the previous post; we begin by checking out the artifacts created -

Agent Pattern 

Note the guidelines I've entered here; these will flow into the System Prompt.

Agent Tools

In OIC, these are based on integrations.

Tool guidelines flow into the System Prompt. The tool description is also passed to the LLM, but through a different channel.

Here's an example from the OpenAI API docs -

# 1. Define your tool
tools = [
    {
        "type": "function",
        "function": {
            "name": "get_weather",
            "description": "Get the current weather in a given location",
            "parameters": {
                "type": "object",
                "properties": {
                    "location": {"type": "string", "description": "City and state"},
                },
                "required": ["location"],
            },
        },
    } 
] 

Ergo, the data you enter for the tool description, as well as for the parameter descriptions, is passed to the LLM.

# 2. Pass tools to the API
response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "What's the weather in London?"}],
    tools=tools,
    tool_choice="auto"
) 
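To complete the picture, here is a minimal sketch of what typically happens next, assuming the chat completions tool-calling shape (`response.choices[0].message.tool_calls`). The `tool_call` dict is mocked so the sketch runs without an API key, and `get_weather` is a stand-in implementation, not a real weather service.

```python
# Hypothetical continuation: handling the tool call the model sends back.
import json

def get_weather(location):
    # Stand-in for the real tool; the LLM never executes this itself,
    # it only asks the calling application to run it.
    return f"15 degrees and cloudy in {location}"

# Mocked shape of response.choices[0].message.tool_calls[0].function
tool_call = {
    "name": "get_weather",
    "arguments": '{"location": "London"}',
}

args = json.loads(tool_call["arguments"])  # arguments arrive as a JSON string
result = get_weather(**args)               # the application executes the tool
# The result would then be sent back to the model as a role "tool" message.
```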


Agent

The agent definition contains Role and Guidelines.


These also flow into the System Prompt.


User Prompt

This input is the user prompt.

Let's run the agent - 

The first part of the System Prompt includes the Agent Role and Guidelines; then we see the Pattern guidelines.


Then we see the User Prompt -

Then we see the Tool Guidelines -
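Putting the above together, the assembly order observed in the audit trail can be sketched as a simple list. The section labels are my own shorthand, not official OIC terminology.

```python
# Sketch of the prompt assembly order observed so far.
prompt_sequence = [
    ("system", "Agent Role"),
    ("system", "Agent Guidelines"),
    ("system", "Pattern Guidelines"),
    ("user", "User Prompt"),
    ("system", "Tool Guidelines"),
]

# Everything except the user's request ends up in the system role.
system_sections = [label for role, label in prompt_sequence if role == "system"]
```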


Now the thinking begins -

Tool is invoked - 

So what happens when we continue the conversation? Let's check that out via a new conversation. I begin by ensuring we have an error -

I run the agent - 

I continue the conversation - 



I now ask - what was the exact time of the error?

As you can see, the new user prompt is added to the audit flow, which mirrors the data sent to the LLM, apart from the tools data.

Net, net - in such a conversation, we can have multiple user prompts sent to the LLM, but system prompt data is pushed only once.
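A minimal sketch of that behaviour, assuming a chat-completions style message list (the content strings are illustrative, not the actual OIC prompts):

```python
# One system message, sent once; follow-ups append only new turns.
conversation = [
    {"role": "system", "content": "Agent role, guidelines, pattern guidelines..."},
    {"role": "user", "content": "Resubmit the failed instance"},
    {"role": "assistant", "content": "Instance resubmitted."},
]

# Continuing the conversation adds a new user prompt, not a new system prompt.
conversation.append({"role": "user", "content": "What was the exact time of the error?"})

system_count = sum(1 for m in conversation if m["role"] == "system")
user_count = sum(1 for m in conversation if m["role"] == "user")
```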
