Some of you have some pretty awesome data models, and I’ve seen the ER Diagrams proudly taped to your cubicle walls.
Here in imaginary land, we have very basic schemas with just a few tables, and all have great, descriptive names.
But far too many of you live in a world where 30,000 tables and views exist together in a single schema, and serve up multiple applications. And the naming conventions of these objects are not… conventional.
When you task your shiny new (AI) Agent to go ‘do the thing,’ and that work turns into interfacing with your database, the AI will have the same problems that my new friend Ed had just last week.
Ed started working for our department a few days ago, and his first Jira ticket had him updating our payroll system. But that ticket didn’t tell Ed that all the payroll tables could be identified by their IDX_227 prefixes.
Confession: Ed is someone I made up for this story, but we all know what Ed is facing.

So Ed tried to find the data in question, but after struggling for longer than he should have, he broke down and asked for help.
Your Agent is like Ed. Eager, smart, and persistent. But it will run into the same problems Ed did. And it will consume tokens and rack up LLM costs while it generates SQL and database code looking for your data. And in many cases it will not make progress, and it will come back to you for help.
Let’s fix that!
What if we could put the prompt engineering or context into the database, itself?
So yes, I could spend a few more minutes putting together a better prompt. I could tell it exactly what it needs to do to answer my question. I could provide a ‘data map’ for my agent when it’s time to work with our database. “Hey, here are your tables, your APIs, here’s how the data is related. These codes mean this, when you see…”
And I and everyone else would need to share these prompts across ALL of our agents. And this valuable information would live in our agent frameworks, not in our database. So others wouldn’t benefit much. By others, I mean those pesky humans who still log in and write queries or click their way around schemas.
OR, we could annotate our objects and schemas.

Using SQL Developer’s Extension for VS Code, users can now easily explain, in much greater detail, how their data models have been implemented.
I’ve used a ridiculously bad example to showcase this, but perhaps you’ve purchased a commercial off-the-shelf product, and the application is pretty good, but the database bits are a bit… less than what Drs. Boyce and Codd had intended?
Maybe you can’t fix that, but you can make it clear where the Retail system data can be found!
Let’s say that, sprinkled among my 400 poorly named tables and columns, I have 3 sets of retail records in my schema.
And I’ve annotated those 3 tables, and created a Retail Online Sales category in my AI Enrichment system.
With this homework done, ONCE, it’s now trivial for my Agent, or anyone else’s, to find AND interpret that data!

But how? How did Cline ‘find’ or discover these tables?
I’m using the latest version of our MCP Server, SQLcl 25.3.1. One of the major improvements is the introduction of a new MCP Tool, ‘schema-information’.

My Agent can use this MCP Tool to obtain a ‘map’ of my schema. It includes a mix of the default information stored in our USER_ dictionary views, but it ALSO includes any annotations we have supplied via our AI Enrichment interface in SQL Developer.
Here’s a partial response from using the new schema-information tool:
-- I have cut out all of the tables, views, comments, procedures, indexes, etc and jumped straight to the juicy stuff
...
(GROUP:ONLINE RETAIL SALES),(DESCRIPTION:Contains sales records for our Comic Book collection. Sales information is stored as JSON in a column named C. Columns A and B can be ignored, although column A is a GUID for the PRIMARY KEY.),(ENRICHMENT_COMPLETED:true)) : A(DATA_TYPE:NUMBER) , B(DATA_TYPE:VARCHAR2) , C(DATA_TYPE:JSON,DESCRIPTION:Contains the real data for sales. A JSON object including an array of comic books sold. A sample record - { "date_of_sale":"2025-11-01", "products": [ { "product_name":"Harbinger #1", "quantity":1, "price":4.99, "description":"Valiant comic" } ], "total_price":4.99 } )
...

My Agent no longer needs to employ the LLM to generate queries to get the list of objects, their comments, indexes, procedures, etc.!!!
The ‘schema-information’ tool provides this information. And since it’s a tool, it will be consistent and uniform across Agents and tasks. The LLM isn’t going to miss something because it forgot to join the tables view to the columns view, or or or…
Saving the LLM work means less time to get to your answer, better answers, and cost/resource savings.
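For a sense of what the tool is saving us, here is roughly the kind of dictionary query an agent would otherwise have to generate, and get right, on its own. This is just a sketch for illustration, not the tool’s actual implementation:

-- Roughly what an agent would have to generate itself: stitch together
-- tables, columns, and comments from the USER_ dictionary views
SELECT t.table_name,
       c.column_name,
       c.data_type,
       tc.comments AS table_comment,
       cc.comments AS column_comment
  FROM user_tables t
  JOIN user_tab_columns c
    ON c.table_name = t.table_name
  LEFT JOIN user_tab_comments tc
    ON tc.table_name = t.table_name
  LEFT JOIN user_col_comments cc
    ON cc.table_name  = c.table_name
   AND cc.column_name = c.column_name
 ORDER BY t.table_name, c.column_id;

And that only covers tables and comments; indexes, procedures, and the rest would each need their own queries.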
How do we get started?
- Update the SQL Developer Extension for VS Code to version 25.3.1 or later
- Connect to your schema
- Jump into the new ‘AI Enrichment’ section

You’ll start by creating one or more ‘table groups,’ and then assign tables to those groups. And then, for those tables, you can annotate both the tables themselves and their columns.
Here’s what that might look like. I have annotated the C column of my table to describe the data that can be found in this JSON-typed attribute:

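The AI Enrichment panel walks you through all of this in the UI, so you don’t have to write any SQL yourself. But if it helps to picture the kind of metadata being captured, here’s a rough stand-in using ordinary table and column comments. To be clear, this is NOT how AI Enrichment stores its information, and the table name SALES_X42 is something I made up:

-- Stand-in only: plain comments approximating what the enrichment captures
-- SALES_X42 is a made-up name for one of my badly named retail tables
COMMENT ON TABLE sales_x42 IS
  'Online retail sales for the Comic Book collection. The real data lives in column C (JSON).';

COMMENT ON COLUMN sales_x42.a IS
  'GUID for the primary key; otherwise ignore.';

COMMENT ON COLUMN sales_x42.c IS
  'JSON sales record: date_of_sale, products[] (product_name, quantity, price, description), total_price.';

The real feature goes further than comments, of course; the group, description, and completion flag you saw in the schema-information output above all come from the enrichment itself.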
I have observed that agents love to sample data from tables. They want to understand what they’re looking at.
Those queries can get expensive. Instead, we can feed them this information as soon as they connect and pick up our annotations.
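And with the column-level annotation in hand, the agent (or a human) can skip the ‘SELECT * and squint’ step and go straight to a targeted query. Another sketch, reusing my made-up SALES_X42 table and the A/B/C columns described above:

-- No sampling needed: the annotation already tells us column C holds the
-- JSON sales record, so we can query it directly
SELECT a,
       JSON_VALUE(c, '$.date_of_sale') AS date_of_sale,
       JSON_VALUE(c, '$.total_price' RETURNING NUMBER) AS total_price
  FROM sales_x42
 ORDER BY date_of_sale DESC
 FETCH FIRST 5 ROWS ONLY;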
Yes, it’s work to provide this information, but now it’s centrally located for all AI frameworks AND anyone else who is working with our database.
Do I need Oracle AI Database 26ai to use this feature?
No! This works across all supported versions of Oracle Database: 19c, 21c, 23ai, and 26ai.
Watch Kris demo this at AI World
I will do a series of blog post tutorials on how to use this feature, but in the meantime, how about a quick live demo?
