If I ask my agent for help tuning my db, and ask for the worst performing queries over the past day, it doesn’t need to consume tokens or time to come up with a query; it can simply take advantage of the query in this skill, and then execute it on the db via our MCP Tools.
Let’s play!
I’m lazy, so I asked my agent to sort these for me –
Or it generated some code that I was able to ‘click’ into to make it so.
Let’s find the worst performing queries over the past 24 hrs in my Raptor database, and let’s be sure to use our new monitoring and top sql queries skill to do that!
From here, my Agent asks –
- for permission to use my list-connections MCP Tools
- via MCP, … to connect to my Raptor database
- … to run several queries it found in the Skill

I confirm it’s not generating these queries on its own:
CONFIRMED
And then it runs a few more, and comes up with the answer and report. Note, I ran this exercise more than once, and this is one of the follow-up reports, so it remembers what it saw last time…
Here’s the full rundown of what the Raptor database has been up to over the past 24 hours:
🔴 Same Suspect, Still at Large: dt26ufwkgd2qr
The UMF$_ATSK_HIST insert is still the most glaring issue — now at 1,288 executions with an average of 365 seconds per call and essentially zero CPU (0.9ms). That’s pure waiting. This is a recurring pattern, not a one-off. It’s likely blocked on I/O or a lock every single time it fires.
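To confirm what a statement like that is actually waiting on, a quick look at Active Session History is a natural follow-up. This is just a sketch: `dt26ufwkgd2qr` is the SQL ID from the report above, and querying ASH requires the Diagnostics Pack license.

```sql
-- What has SQL ID dt26ufwkgd2qr been waiting on over the past 24 hours?
SELECT NVL(event, 'ON CPU') AS event,
       COUNT(*)             AS ash_samples
  FROM v$active_session_history
 WHERE sql_id = 'dt26ufwkgd2qr'
   AND sample_time > SYSTIMESTAMP - INTERVAL '24' HOUR
 GROUP BY event
 ORDER BY ash_samples DESC;
```

If the top event is an I/O or enqueue wait rather than ON CPU, that confirms the "pure waiting" diagnosis above.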
🆕 New This Morning — HR Schema Activity
This is the interesting part that wasn’t showing up before:
| SQL ID | Avg Elapsed | What It Is |
|---|---|---|
| 1vp9dgzxqnxum | 9.5 sec | SYS_XMLGEN / TABLE_T export — metadata extraction |
| 8ma4bufmtysmz | 8.5 sec | DBMS_METADATA.GET_DDL across ALL_OBJECTS — slow, 2,221 disk reads/exec |
| bd0rgfgrujz6t | ~600ms | Pagination query (row_number) — 33 execs, 2,811 disk reads/exec |
| 6nh9d2p6gdy1w | 3.9 sec | DBMS_SQLTUNE.report_sql_monitor — someone’s been digging into SQL Monitor! |
| gzmz63qmyshp5 | 3.0 sec | GV$ monitor query via WWV_FLOW |
| a4akgk9g69h83 | 186ms avg | SQL text lookup query — 141 execs |
That DBMS_METADATA.GET_DDL query (8ma4bufmtysmz) is the one to watch — fetching DDL for all objects in the schema is expensive, and it can be tamed by scoping it to a specific object type or adding an owner filter.
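Scoping that call is straightforward: instead of walking ALL_OBJECTS, ask DBMS_METADATA for exactly the object you need. A sketch — the table name here is the HR sample schema’s EMPLOYEES, purely for illustration:

```sql
-- Fetch DDL for one named object instead of every object in the schema
SELECT DBMS_METADATA.GET_DDL('TABLE',      -- object type
                             'EMPLOYEES',  -- object name (illustrative)
                             'HR')         -- owning schema
       AS ddl
  FROM dual;
```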
🟡 Background Noise (Expected)
The SYS autotask, auto-index capture, and APEX coordinator jobs are all behaving consistently with yesterday — nothing new to flag there.
Looks like you (or something running as HR) did some schema introspection work this morning! Want me to pull the full SQL text for the DBMS_METADATA query or drill into the dt26ufwkgd2qr wait issue with an execution plan?
My local DB isn’t doing much of anything, so it’s expected that the ‘worst’ offenders are background maintenance activities. And that query with the ‘wait issue’? It’s the background tasks for DBMS_SCHEDULER.
Let’s try a different one!
I’m concerned about database security in my Raptor database. Can you have a look at and advise what accounts have dangerously high role and privileges granted to them?
This time I didn’t call out specifically to ‘use my skill’ – it’s smart enough to do that on its own.
Ok, so it was going pretty well until it got to its 2nd query, then it took a wrong turn in Albuquerque.
Me: Who is Miguel, and why are you filtering out HR and MIGUEL?
The good news is that my agent figured out it was an idiot on its own and went back to relying on the ORACLE_MAINTAINED property.
After it got back on track, things went as expected. It correctly identified that I use HR as my local ‘god’ account, and that’s not a good idea.
🔴 Critical Findings
1. MIGUEL — DBA with ADMIN OPTION (most dangerous)
MIGUEL has the DBA role granted with ADMIN OPTION, meaning they can re-grant DBA to anyone else — creating uncontrolled privilege spread. They also have ALTER ANY ASSEMBLY and UNLIMITED TABLESPACE both with ADMIN OPTION. The account is currently EXPIRED, which limits immediate risk, but if reactivated this is your highest-severity issue.
Action: Revoke DBA and review whether this account is still needed.
2. HR — DBA role on an OPEN, actively-used account
The HR account holds the DBA role and was logged in today. This is an application schema owner — it should own objects, not hold near-total database control. This is a textbook violation of the schema-owner/app-user separation principle.
Action: Revoke DBA from HR immediately. The HR schema owner should ideally be locked (ACCOUNT LOCK) and objects accessed via a dedicated, least-privilege app account.
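In SQL, that remediation is only a few statements. A sketch, to be run as a privileged admin account, and only after a replacement app account exists — the `hr_app` account and its grants here are hypothetical:

```sql
REVOKE DBA FROM hr;
ALTER USER hr ACCOUNT LOCK;

-- A hypothetical least-privilege app account in its place:
CREATE USER hr_app IDENTIFIED BY "ChangeMe#2026";
GRANT CREATE SESSION TO hr_app;
GRANT SELECT, INSERT, UPDATE, DELETE ON hr.employees TO hr_app;
```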
3. HRREST — Massively over-privileged REST service account (LOCKED, but alarming)
Despite being locked, HRREST holds a frightening collection of ANY privileges:
SELECT ANY TABLE — can read every table in the database
DROP ANY TABLE — can destroy any schema’s data
CREATE ANY TABLE / ALTER ANY TABLE — can modify any schema’s structure
CREATE ANY DIRECTORY — can access OS filesystem paths
ALTER USER — can modify any user’s settings
For a REST service account, this is catastrophically over-privileged. Being locked is a band-aid, not a fix.
Action: Revoke all ANY privileges and ALTER USER. Grant only the specific object privileges on HR.* tables that ORDS actually needs.
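A sketch of that right-sizing — the exact object grants depend on which endpoints ORDS actually serves, so the table names below are illustrative:

```sql
-- Strip the dangerous system privileges in one pass
REVOKE SELECT ANY TABLE, DROP ANY TABLE, CREATE ANY TABLE,
       ALTER ANY TABLE, CREATE ANY DIRECTORY, ALTER USER
  FROM hrrest;

-- Grant only what the REST modules actually touch (illustrative tables)
GRANT SELECT, INSERT, UPDATE ON hr.employees   TO hrrest;
GRANT SELECT                 ON hr.departments TO hrrest;
```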
🟡 Moderate Findings
4. AV — DBA role on a locked account
Less urgent while locked, but a locked DBA account lying dormant is a risk if it gets unlocked. No record of recent use.
Action: Determine if this account is still needed. If not, drop it. If yes, revoke DBA.
5. WIND — DB_DEVELOPER_ROLE + RESOURCE on an open account
DB_DEVELOPER_ROLE (23ai) is a broad developer role. Combined with RESOURCE, this account can create tables, procedures, sequences, triggers, and more across the schema. Review whether WIND is a person or service account and right-size accordingly.
🟠 Systemic Issue: Everyone is on the DEFAULT profile
Every custom account uses Oracle’s DEFAULT profile, which has no password expiry, no failed login lockout, and no idle timeout. You should create a hardened application profile:
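A sketch of what such a hardened profile might look like — the limits below are starting points, not policy, so tune them to your organization’s standards:

```sql
CREATE PROFILE app_hardened LIMIT
  PASSWORD_LIFE_TIME    90   -- expire passwords every 90 days
  FAILED_LOGIN_ATTEMPTS 5    -- lock after 5 bad password attempts
  PASSWORD_LOCK_TIME    1    -- stay locked for 1 day
  PASSWORD_REUSE_MAX    10   -- no reusing the last 10 passwords
  IDLE_TIME             30;  -- disconnect after 30 idle minutes

-- Then move the custom accounts onto it, e.g.:
ALTER USER hrrest PROFILE app_hardened;
```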
The most urgent action is revoking DBA from HR since it’s open and actively connecting.
Ok, but so what? I could do this already without these skills.
Yes, but it would likely have taken longer: more tool calls, more back-and-forth, and more tokens to get to the same result. And if we ran it again and again, slightly different paths might be explored.
For giggles, I asked my agent to guesstimate the delta.
For our previous two tasks, can you estimate how much faster or fewer tokens we consumed because of the availability of the skills employed vs if you would have had to do research and generate the SQL on your own?
And it came to this conclusion –
Great question — the two tasks were the security audit and the top SQL / worst performing queries work. Here’s my honest estimate of the efficiency gains:
Task 1: Security Audit (oracle-security skill)
What the skill provided: Ready-made query patterns for DBA role detection, dangerous ANY privileges, oracle_maintained filtering logic, and the DEFAULT profile risk framework.
Without the skill: I would have needed to reason from scratch about which DBA_* views to query, what constitutes a dangerous privilege grant, how to correctly exclude Oracle-internal accounts (the oracle_maintained column is non-obvious), and what a hardened profile looks like. I also would have likely needed a web search pass against Oracle docs.
Estimated savings:
Tokens: ~40–60% fewer input tokens (no need to fetch/reason through Oracle security docs)
Speed: Likely 1–2 fewer tool call roundtrips, saving maybe 30–45 seconds of wall time
Quality: The oracle_maintained = 'Y' filter and the profile hardening DDL came directly from the skill — I might have gotten there eventually but could have used a less precise exclusion pattern
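The filter itself is a one-liner once you know the column exists — a sketch against DBA_USERS (the `oracle_maintained` column is available in 12c and later):

```sql
-- Only custom (non-Oracle-supplied) accounts
SELECT username, account_status, profile
  FROM dba_users
 WHERE oracle_maintained = 'N'
 ORDER BY username;
```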
Task 2: Top SQL / Worst Performing Queries
What the skill provided: The composite impact score formula (weighted 40/30/20/10 across elapsed, CPU, buffer gets, disk reads) — that’s the most valuable piece. Without it, I’d have had to either invent a scoring heuristic or run multiple separate queries ranked by different metrics.
Without the skill: I would have run simpler ORDER BY elapsed_time DESC queries, possibly missed the per-execution cost angle entirely, and almost certainly not produced the weighted composite score in one pass.
Estimated savings:
Tokens: ~30–50% fewer — the composite score query is dense and required domain knowledge to construct
Speed: The single impact-score query replaced what would have been 3–4 separate queries (by elapsed, by CPU, by buffer gets, by disk reads) to get the same picture
Quality uplift: Probably the bigger win here — the output was more analytically complete than what I’d likely have produced from scratch
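As a rough sketch of what such a weighted composite could look like against V$SQLSTATS: the 40/30/20/10 weights come from the report above, but the max-normalization here is my own illustration, not necessarily the skill’s exact query.

```sql
-- Rank statements by a weighted blend of four cost metrics,
-- each normalized against the largest value seen for that metric
SELECT sql_id,
       ROUND(  0.40 * elapsed_time / NULLIF(MAX(elapsed_time) OVER (), 0)
             + 0.30 * cpu_time     / NULLIF(MAX(cpu_time)     OVER (), 0)
             + 0.20 * buffer_gets  / NULLIF(MAX(buffer_gets)  OVER (), 0)
             + 0.10 * disk_reads   / NULLIF(MAX(disk_reads)   OVER (), 0)
             , 3) AS impact_score
  FROM v$sqlstats
 ORDER BY impact_score DESC
 FETCH FIRST 10 ROWS ONLY;
```

One query like this replaces the 3–4 single-metric ORDER BY queries mentioned above.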
Overall
Across both tasks, rough ballpark: 35–55% token reduction and 2–4 fewer tool call roundtrips compared to a cold start with no skills. The quality benefit arguably exceeded the token savings — particularly the composite scoring formula and the oracle_maintained filtering, which are the kinds of precise Oracle-specific patterns that would have taken extra research iterations to land on correctly.
Questions from our webinar this morning
We discussed this new repo in our Oracle Database and OCI March Newscast, and we got LOTS of questions. Yes, the recording will be online, soon! Anyways, I think some of these might be interesting to share here.
In no specific order other than chronological…
Is this compatible/easy-to-setup with github copilot agent mode ? I have my copilot connected to oracle db with sql developer extension and oracle mcp integration
I’ve not played with Skills in Github Copilot, YET. But Copilot Studio does support this (Docs).
“awesome skills – many more than I started to create 🙂 Will those be maintained? Always for the latest DB version?”
Yes, we plan to maintain these, add more, and yes they should be good for all supported versions of Oracle AI Database (19c, 23ai, 26ai).
Do you suggest we make all these skills in the repo available to an agent or would we just pick the ones that seem relevant to our context?
Kris suggested maybe not; you may want to load up a specific set of Skills for a task so as not to consume memory or context with irrelevant information. But I think the Agent I was using is pretty good about keeping the Skills in the background until it needs to pull one up for details. This is an area where I am actively learning for sure!
“Hi Kris, The skills that were published are meant for MCP Server ? if not, can you please give me one or more example, how i can consume them ?”
There’s a section of the Skills that deals directly with our MCP Server, but really the value here is that the Agent can take the knowledge and recipes from the Skills and implement them against an actual database.
If you had a local Git project with your Oracle SQL and PL/SQL code, data model, and security rules in files, it could also look there; no MCP required, at least not a database MCP server.