OIC is the conduit in this regard, albeit a very intelligent one.
The first business use case I came up with is as follows - I am late in shipping an order to a customer and need to email them my humblest apologies.
Here are the full request and response payloads -
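The originals were captured as screenshots; as a rough sketch, a request to the OpenAI completions endpoint (POST https://api.openai.com/v1/completions) for this use case might look like the following. The prompt wording, model name, and order number are illustrative placeholders, not the exact payload from the integration.

```python
import json

# Illustrative sketch of a completions request OIC could send.
# The generated email comes back in the response under choices[0].text.
request_payload = {
    "model": "text-davinci-003",
    "prompt": (
        "Write a short, apologetic email to a customer explaining "
        "that their order will ship a week later than planned."
    ),
    "max_tokens": 256,   # upper bound on the length of the generated email
    "temperature": 0.7,  # some creativity, but still on-topic
}

print(json.dumps(request_payload, indent=2))
```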
It's not perfect, but you get the idea. OIC could then take this response and send the email via the NOTIFICATION action.
Now back to the examples screenshot from the start of the post -
My book suggestions example is a good instance of Q&A or Chat - answering my questions based on existing knowledge. This is the typical chatbot use case. The assistant helps me through its intimate knowledge of its subject matter area.
Now let's revisit the email above and see what we can do for non-English customers -
There is a delay in shipping order 2113 to my German customer Hasselbacher Motoren Werk. So my generated email would be as follows -
So let's take that and try out the following model -
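A minimal sketch of what such a translation request might look like - the English email text, model name, and prompt wording below are my own placeholders, not the exact payload from the post:

```python
import json

# Placeholder for the apology email generated in the previous step.
english_email = (
    "Dear Hasselbacher Motoren Werk, please accept our apologies: "
    "order 2113 will ship later than planned."
)

# Feed the generated English text back through the completions
# endpoint with a translation instruction.
translation_request = {
    "model": "text-davinci-003",
    "prompt": "Translate the following email into German:\n\n" + english_email,
    "max_tokens": 256,
    "temperature": 0,  # low temperature keeps the translation literal
}

print(json.dumps(translation_request, indent=2))
```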
Parsing Text for Contact Information
OIC then takes this output and creates a new contact in Oracle CX, SFDC, or your CRM of choice.
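As a sketch of the extraction step, a prompt like the one below could ask the model to return the contact details as JSON, which OIC could then map onto a create-contact call in the target CRM. The sample text, field names, and contact details are all illustrative assumptions:

```python
import json

# Hypothetical free text containing contact details.
sample_text = (
    "Met Jane Doe of Acme Breweries at the trade fair; "
    "reach her on jane.doe@example.com or +44 20 7946 0000."
)

# Ask the model to return the details in a structured form.
extraction_request = {
    "model": "text-davinci-003",
    "prompt": (
        "Extract the name, company, email and phone number from the "
        "text below and return them as a JSON object:\n\n" + sample_text
    ),
    "max_tokens": 128,
    "temperature": 0,  # deterministic output suits structured extraction
}

print(json.dumps(extraction_request, indent=2))
```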
The results -
As you can see from result 3 - ChatGPT doesn't do sarcasm.
But you could well imagine OIC taking such results and creating a note in the target CRM, so the salesperson will be even better prepared when talking with this client in the future.
"text": "\n\nSELECT * \nFROM orders \nWHERE customer_name = 'Hare of the Dog Pub' \nAND product_name = 'iBike';"
I click on one of the links - granted a lovely dog, but nothing compared to our Akira.
Now let's ask ChatGPT
Just some final explanations and reflections -
you may have noticed the following request payload fields -
Firstly, what does the "temperature" request field mean? It controls the level of randomness or, as one author puts it, creativity. The higher you set it, the more creative the output, but the less precise.
Here's a simple example -
Here's a poem on Leo Tolstoy with temperature set to 0
A man of great renown, His name was Leo Tolstoy,
His works of literature, Are known the world o'er.
He wrote of love and war, Of life and death and more,
His works are timeless, And will be forevermore.
He wrote of human nature, Of joy and sorrow too,
His words are still alive, And will remain so true.
His works are timeless, His words are still alive,
Leo Tolstoy's legacy, Will never die.
temperature set to 1.8
Great influence no voice can overwhelm.
From striving writers can draw much bread divine,
His legacy ignites canvusses anew,
Enormoubcs series praid Leo gall ions or Benvolvsovoo sttin okrinegu tithy tomstaitteSo p nmostsoean sdaignebte noMeansailleainspeinfian iopbrocu havelitfeoon
From Tsar Alexadvres neaches Akinyein todayrajoWgrjvpointbenbreentigorrh st om skine Sosmin fentleport he salurtJoinsse lmobbanow bur grfireasesnuheionsver sirenstism tum rprise ant bollow An crsum ashylaing sherueippfsi sp ce ofmerregavenninau do artsrene Thkar ate deennues aking counsnine ain thrffatisothermad Teoldsty Grretws Cas ad Tns CiandoIofrlsov ra Manolavelt Jeredeer,\nFstone bo NrabolinrarytlereadSe yo epic writings of unwit dy flight Isolerojoyst fairtelichecingdelvey lam made Immllgurmt disshim
1.8 is very Finnegans Wake, but I think you get the idea. You can extrapolate from this for any other use cases.
Secondly, max_tokens - The maximum number of tokens to generate in the completion. The token count of your prompt plus max_tokens cannot exceed the model's context length. Most models have a context length of 2048 tokens (except for the newest models, which support 4096).
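The context-length constraint above can be sketched as follows. Note the token count here is a crude whitespace-split estimate for illustration only - the API counts tokens with its own tokenizer, so real prompts will measure differently:

```python
# prompt tokens + max_tokens must fit within the model's context
# window (2048 tokens for most models, per the discussion above).
CONTEXT_LENGTH = 2048

def max_completion_budget(prompt: str, context_length: int = CONTEXT_LENGTH) -> int:
    """Return a rough upper bound for max_tokens given a prompt."""
    approx_prompt_tokens = len(prompt.split())  # crude stand-in for real tokenization
    return context_length - approx_prompt_tokens

print(max_completion_budget("Write a poem about Leo Tolstoy"))
```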
Let's try out -