Step-by-Step Implementation Guide

Analyze Oracle Analytics Cloud AI Assistant User Feedback Using Oracle Cloud Infrastructure Logs

Understanding how users interact with Oracle Analytics Cloud AI Assistant is essential for improving response quality, refining AI Agents, and increasing adoption. This guide walks you through capturing every user utterance, response, and feedback action — especially thumbs-down feedback — from OCI logs, and turning it into actionable analytics.

Tags: OAC AI Assistant · OCI Logging · Autonomous Database · Feedback Analysis

By Ravi Bhuma — Oracle Analytics · April 2026

System Architecture

End-to-End Feedback Pipeline

Data flows from user interaction to actionable analytics

  1. OAC Instance (feedback events)
  2. OCI Logging (centralized logs)
  3. Object Storage (batch files)
  4. Autonomous DB (parse & store)
  5. Analytics (visualize)
How the pipeline works: OAC sends feedback logs to OCI Logging. A Service Connector pushes them to Object Storage. A database procedure pulls, parses, and enriches them into queryable views. OAC Analytics connects back to visualize patterns in user satisfaction, generated LSQL, and feature requests.
📡 Phase A: Configure OCI Logging

Capture feedback events in OCI Logging (6 steps)

🎯 Why this matters

You need to know which OAC instance you're working with and its compartment so you can enable resource logging for that specific instance in the next steps.

Navigate to your OAC instance in the OCI Console. Note down the instance name and the Compartment in which it resides. You'll use these when configuring the logging service.

Path: Analytics & AI → Analytics Cloud → Your Instance
You have noted the OAC instance name and compartment
🔍 Where is OCI Logging?

OCI Logging is the central service where all logs from your cloud resources are aggregated. You're about to create a dedicated log group for OAC feedback logs.

In the OCI Console, go to Observability → Logging → Log Groups. You'll see existing log groups or an empty list if this is your first time.

Path: Observability → Logging → Log Groups
📦 What is a Log Group?

A log group is a container for related logs. Creating a dedicated group for OAC feedback makes it easier to filter, monitor, and export those logs later.

Click Create Log Group. Name it OAC-AI-Logs. This organizes all feedback logs in one place for easy discovery.

💡
Use a clear, descriptive name so your team can easily identify this log group's purpose.
🎯 Enable resource logging

Resource logging means OCI will automatically capture diagnostic events from your OAC instance and store them in the log group. The INFO level catches user interactions and feedback.

In the Log Group, click Enable Resource Log. Select Analytics Cloud, choose your OAC instance, set Log Type to Diagnostic, and set the Log Level to INFO. This ensures feedback events are captured.

OAC resource logging is now enabled at INFO level
✅ Confirm logs are flowing

After enabling logging, you need to generate some activity to test that logs are being captured. The GenAI category confirms that AI feedback is flowing into OCI Logging.

Open your OAC instance and use the AI Assistant to ask a question. Then go to Log Explorer in the OCI Console and search for logs with Category = GenAI. Your feedback events should appear within a minute or two.

⚠️
No logs yet? Make sure users have interacted with OAC (queries, feedback) after enabling logging. Check Log Explorer with the correct compartment and time range.
🌉 Automated log routing

A Service Connector automatically pushes logs from OCI Logging to Object Storage, where your database can retrieve them. This automation ensures no logs are missed and runs continuously.

Go to Observability → Service Connectors → Create Connector. Set Source to Logging (select your OAC-AI-Logs group), and Target to Object Storage. Configure Batch Rollover settings: aim for 1MB or 60 seconds, whichever comes first.

💡
Batch rollover prevents excessive API calls while keeping data fresh. Lower values = fresher data but higher API costs.
🗄️ Phase B: Set Up Database Tables & Procedures

Set up database structures to ingest and parse logs (8 steps)

🔐 Secure authentication

Your database needs credentials to authenticate with OCI services. These credentials allow the database to list and read files from Object Storage.

Execute the following SQL in your Autonomous Database:

SQL
BEGIN
  DBMS_CLOUD.CREATE_CREDENTIAL(
    credential_name => 'OCI_NATIVE_CRED',
    user_ocid       => 'ocid1.user.oc1..YOUR_USER_OCID',
    tenancy_ocid    => 'ocid1.tenancy.oc1..YOUR_TENANCY_OCID',
    private_key     => 'YOUR_PRIVATE_KEY_CONTENT',
    fingerprint     => 'YOUR_FINGERPRINT'
  );
END;
/
⚠️
Replace the OCIDs, private key, and fingerprint with your actual values. You can find these in your OCI user settings.
📋 JSON document store

SODA (Simple Oracle Document Architecture) collections store JSON documents. This is where your raw logs will land — each log entry becomes a document with searchable fields.

Execute the following SQL to create a SODA collection:

SQL
DECLARE
  l_collection SODA_COLLECTION_T;
BEGIN
  l_collection := DBMS_SODA.CREATE_COLLECTION('OAC_LOGS');
END;
/
⚠️
Important: SODA collections have three automatic columns: DATA (JSON), RESID (resource ID), and ETAG (version tag). You will NOT see a column called json_document. Always reference the DATA column when querying.
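A quick sanity check that reads one document from the collection (note the reference to the DATA column):

```sql
-- Inspect one raw log document; the JSON lives in the DATA column
SELECT JSON_SERIALIZE(data PRETTY) AS raw_log
FROM   OAC_LOGS
FETCH  FIRST 1 ROW ONLY;
```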
⏲️ Incremental processing anchor

The watermark tracks the timestamp of the last log batch you processed. This prevents reprocessing old logs and ensures you only pull new data each time the ingestion procedure runs.

SQL
CREATE TABLE OAC_LOG_TIMESTAMP_UPDATE (
  last_processed_time TIMESTAMP DEFAULT TIMESTAMP '1970-01-01 00:00:00'
);
INSERT INTO OAC_LOG_TIMESTAMP_UPDATE VALUES(TIMESTAMP '1970-01-01 00:00:00');
COMMIT;
ℹ️
Starting at 1970 ensures the first run captures all historical logs. After the first run, the watermark advances to the current timestamp.
🔄 Heart of the pipeline

This procedure is the heart of your pipeline. It lists all objects from Object Storage that are newer than the watermark, downloads each one, parses the JSON, and stores it in the SODA collection, then updates the watermark.

Create a procedure named OAC_INGEST_LOGS that:

  1. Reads the watermark timestamp from OAC_LOG_TIMESTAMP_UPDATE
  2. Lists objects in Object Storage using DBMS_CLOUD.LIST_OBJECTS, filtering for creation time > watermark
  3. For each object, calls DBMS_CLOUD.COPY_COLLECTION to load the JSON into the OAC_LOGS SODA collection
  4. Updates the watermark with the current timestamp
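The full procedure body isn't shown here; the following is one possible sketch, with the region, namespace, and bucket in the URI as placeholders you must replace, and the format options as an assumption about how the Service Connector delimits records:

```sql
CREATE OR REPLACE PROCEDURE OAC_INGEST_LOGS AS
  l_watermark TIMESTAMP;
  l_base_uri  VARCHAR2(400) :=
    'https://objectstorage.YOUR_REGION.oraclecloud.com/n/YOUR_NAMESPACE/b/YOUR_BUCKET/o/';
BEGIN
  -- 1. Read the watermark
  SELECT last_processed_time INTO l_watermark
  FROM   OAC_LOG_TIMESTAMP_UPDATE;

  -- 2. & 3. Load every object created after the watermark into the SODA collection
  FOR obj IN (
    SELECT object_name
    FROM   DBMS_CLOUD.LIST_OBJECTS('OCI_NATIVE_CRED', l_base_uri)
    WHERE  created > l_watermark
  ) LOOP
    DBMS_CLOUD.COPY_COLLECTION(
      collection_name => 'OAC_LOGS',
      credential_name => 'OCI_NATIVE_CRED',
      file_uri_list   => l_base_uri || obj.object_name,
      -- Assumes newline-delimited JSON records in each batch file
      format          => '{"recorddelimiter" : "0x''0A''"}'
    );
  END LOOP;

  -- 4. Advance the watermark
  UPDATE OAC_LOG_TIMESTAMP_UPDATE SET last_processed_time = SYSTIMESTAMP;
  COMMIT;
END;
/
```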
⚠️
Replace YOUR_NAMESPACE with your OCI Object Storage namespace. You can find this in OCI Console → Object Storage → Buckets → Namespace field.
📊 JSON → SQL columns

Views act as queryable abstractions over your JSON data. This view extracts key fields like tenant_id, category, feedback, and utterance using JSON path expressions, making them available as regular SQL columns.

Create a view named OAC_LOG_DETAILS:

SQL
CREATE OR REPLACE VIEW OAC_LOG_DETAILS AS
SELECT
  JSON_VALUE(data, '$.oracle.tenantid') AS tenant_id,
  JSON_VALUE(data, '$.oracle.compartmentid') AS compartment_id,
  JSON_VALUE(data, '$.data.category') AS category,
  JSON_VALUE(data, '$.data.message') AS message,
  JSON_VALUE(data, '$.data.additionalDetails.feedback') AS feedback,
  JSON_VALUE(data, '$.data.additionalDetails.utterance') AS utterance,
  JSON_VALUE(data, '$.data.additionalDetails.datamodelName') AS datamodel_name,
  JSON_VALUE(data, '$.data.additionalDetails.userId') AS user_id,
  JSON_VALUE(data, '$.data.ecid') AS ecid,
  JSON_VALUE(data, '$.data.parentEcid') AS parent_ecid,
  TO_TIMESTAMP(JSON_VALUE(data, '$.time'), 'YYYY-MM-DD"T"HH24:MI:SS.FF3"Z"') AS event_time
FROM OAC_LOGS;
🔗 Correlate feedback with requests

This view correlates feedback entries with their parent request events using the parentEcid field. This creates a complete picture: the original question, the LSQL generated, and the user's feedback on the result.

Create a view named OAC_AI_FEEDBACK_ANALYSIS that:

  1. Filters OAC_LOG_DETAILS for entries where feedback IS NOT NULL
  2. Extracts feedback sentiment (positive/negative/neutral) from the feedback field
  3. Joins with parent events using parent_ecid to get the original request_time and request_ecid
  4. Calculates elapsed_time (time from request to feedback)
ℹ️
This creates a normalized table of actionable feedback with context.
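The view body isn't shown in the original steps; a minimal sketch follows, under the assumption that thumbs-up/down feedback arrives as text containing "up" or "down" (inspect real feedback values in OAC_LOG_DETAILS and adjust the CASE accordingly):

```sql
-- Sketch only: column names match what OAC_FEEDBACK_WITH_LSQL expects downstream.
CREATE OR REPLACE VIEW OAC_AI_FEEDBACK_ANALYSIS AS
SELECT
  f.user_id,
  f.utterance,
  f.feedback,
  f.feedback AS feedback_details,
  CASE
    WHEN LOWER(f.feedback) LIKE '%down%' THEN 'negative'
    WHEN LOWER(f.feedback) LIKE '%up%'   THEN 'positive'
    ELSE 'neutral'
  END AS feedback_category,
  f.datamodel_name,
  r.ecid       AS request_ecid,
  r.event_time AS request_time,
  f.event_time AS feedback_time,
  -- Approximate seconds from request to feedback (ignores hour/day components)
  EXTRACT(SECOND FROM (f.event_time - r.event_time))
    + 60 * EXTRACT(MINUTE FROM (f.event_time - r.event_time)) AS elapsed_time
FROM OAC_LOG_DETAILS f
JOIN OAC_LOG_DETAILS r
  ON r.ecid = f.parent_ecid
WHERE f.feedback IS NOT NULL;
```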
🧬 The final correlation layer

This view joins feedback with the actual Logical SQL that was generated and sent to the database. Now you can see exactly which LSQL queries received negative feedback.

Create a view named OAC_FEEDBACK_WITH_LSQL:

SQL
CREATE OR REPLACE VIEW OAC_FEEDBACK_WITH_LSQL AS
SELECT
  f.user_id, f.utterance, f.feedback, f.feedback_category,
  f.feedback_details, f.datamodel_name, f.elapsed_time,
  f.request_time, f.feedback_time,
  SUBSTR(l.message, INSTR(l.message, 'SELECT'), 3000) AS lsql,
  REGEXP_SUBSTR(l.message, 'logical request hash:\s*(\w+)', 1, 1, 'i', 1) AS lsql_hash
FROM OAC_AI_FEEDBACK_ANALYSIS f
JOIN OAC_LOG_DETAILS l
  ON SUBSTR(l.ecid, 1, 36) = SUBSTR(f.request_ecid, 1, 36)
  AND l.event_time BETWEEN f.request_time - INTERVAL '30' SECOND
                       AND f.request_time + INTERVAL '5' SECOND
  AND l.message LIKE '%SQL Request%'
  AND l.message LIKE '%SELECT%';
💡
Session Correlation Tip: The join uses the first 36 characters of the ECID (the session ID portion) and a time window from 30 seconds before to 5 seconds after the request. This accounts for clock skew and ensures you find the matching LSQL log entry even when timestamps don't line up exactly.
🤖 Continuous processing

The scheduler runs your ingestion procedure automatically on a fixed interval. This ensures your database is constantly pulling new logs without manual intervention, and your dashboards always show fresh data.

SQL
BEGIN
  DBMS_SCHEDULER.CREATE_JOB(
    job_name        => 'OAC_LOG_INGEST_JOB',
    job_type        => 'PLSQL_BLOCK',
    job_action      => 'BEGIN OAC_INGEST_LOGS; END;',
    repeat_interval => 'FREQ=MINUTELY; INTERVAL=1',
    enabled         => TRUE
  );
END;
/

This job runs every minute. Adjust the interval to your needs (e.g., INTERVAL=5 for every 5 minutes).
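If the job already exists, you can change the cadence in place rather than dropping and recreating it:

```sql
-- Switch the existing job to a 5-minute cadence
BEGIN
  DBMS_SCHEDULER.SET_ATTRIBUTE(
    name      => 'OAC_LOG_INGEST_JOB',
    attribute => 'repeat_interval',
    value     => 'FREQ=MINUTELY; INTERVAL=5'
  );
END;
/
```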

📊 Phase C: Connect Analytics & Visualize

Visualize feedback in OAC (3 steps)

🔐 Connect OAC to your database

OAC needs to connect back to your database to access the views you've created. This connection allows you to build dashboards and analyses against your parsed feedback data.

In OAC, go to Data → Connections. Click Add Connection, select Oracle Database, and enter your Autonomous Database details (host, port, service name, username, password). Test the connection and save it.

📊 Build from views

A dataset in OAC is built from a database view. You'll point OAC to your OAC_FEEDBACK_WITH_LSQL view, which automatically makes all columns available for analysis.

In OAC, go to Data → Create and choose Dataset. Select your database connection and the view OAC_FEEDBACK_WITH_LSQL. Recommended visualizations:

  • Feedback Sentiment Trend: Timeline showing positive/negative/neutral feedback over time
  • Top Negative LSQL Patterns: Bar chart of most-criticized LSQL queries
  • Feedback by Data Model: Which data models receive the most feedback
  • Response Time vs. Satisfaction: Scatter chart of elapsed_time vs. feedback sentiment
🚨 Common issues

After building your initial dashboard, use this reference table to diagnose common issues in your data.

  • Dashboard shows no data. Likely cause: logs haven't been ingested yet. Solution: check OAC_LOG_INGEST_JOB status and manually run OAC_INGEST_LOGS.
  • Some feedback lacks LSQL. Likely cause: session IDs don't match or the time window is too narrow. Solution: widen the LSQL time window in the view (e.g., to ±60 seconds).
  • Slow dashboard refresh. Likely cause: the view joins are unindexed. Solution: create indexes on the ecid, parent_ecid, and event_time columns.
  • Feedback sentiment always NULL. Likely cause: the JSON path is incorrect or the feedback field is empty. Solution: query OAC_LOG_DETAILS directly and inspect the raw JSON in the feedback column.
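For the first symptom, a quick way to check the ingestion job's recent runs using the standard scheduler dictionary views:

```sql
-- Recent runs of the ingestion job, newest first, with any error codes
SELECT job_name, status, error#, actual_start_date
FROM   USER_SCHEDULER_JOB_RUN_DETAILS
WHERE  job_name = 'OAC_LOG_INGEST_JOB'
ORDER  BY actual_start_date DESC
FETCH  FIRST 5 ROWS ONLY;
```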
🔴 Live Troubleshooting

Real-time verification during testing (2 steps)

🎯 End-to-end demo

This step-by-step demo lets you see the complete flow in action. You'll trigger a question, watch the logs flow through the system, and verify the LSQL was captured and parsed correctly.

  1. Ask a Question in OAC. Open your OAC instance and ask the AI Assistant a question like "What is the total revenue for Q1 2024?" Provide feedback when prompted (thumbs up/down or a comment).
  2. Trigger Ingestion Manually. Don't wait for the scheduled job; in your database, run EXEC OAC_INGEST_LOGS; to pull logs immediately.
  3. Query the Live View. Run SELECT * FROM OAC_FEEDBACK_WITH_LSQL WHERE feedback IS NOT NULL ORDER BY request_time DESC FETCH FIRST 1 ROW ONLY; and you should see your feedback entry with the matching LSQL.
  4. Inspect the Full LSQL. Copy the lsql column value. This is the exact query OAC generated from your utterance; verify it looks correct for your question.

🔍 Investigate negative feedback

Use this query to dive deep into negative feedback entries. It shows you exactly what the user said, what LSQL was generated, and flags any issues with the correlation.

SQL
SELECT
  user_id,
  utterance,
  feedback,
  lsql,
  lsql_hash,
  request_time,
  feedback_time,
  elapsed_time,
  CASE
    WHEN lsql IS NULL THEN '⚠ No LSQL found'
    WHEN elapsed_time > 30 THEN '⚠ Slow response'
    ELSE '✓ OK'
  END AS status
FROM OAC_FEEDBACK_WITH_LSQL
WHERE feedback_category = 'negative'
ORDER BY feedback_time DESC;
⚠️
Synonym Alert: If your OAC instance is connected to the database via a public synonym (e.g., PROD_SUBJECT_AREA instead of OWNER.SUBJECT_AREA), the LSQL in logs may show the synonym name. When correlating against your OAC repository, account for this difference.
🔧 Reference: Operations & Maintenance

Maintenance, monitoring, and troubleshooting (4 steps)

📥 Replay historical logs

Sometimes you need to manually trigger an ingestion or replay historical logs. This step shows you how to do that and reset the watermark to start fresh if needed.

Manual Ingestion:

SQL
EXEC OAC_INGEST_LOGS;

Reset Watermark:

SQL
UPDATE OAC_LOG_TIMESTAMP_UPDATE
SET last_processed_time = TIMESTAMP '1970-01-01 00:00:00';
COMMIT;
⚠️
After resetting, clear the SODA collection before re-ingesting to avoid duplicates (or implement deduplication logic in your ingestion procedure).
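One way to clear the collection before re-ingesting, assuming your database supports the SODA truncate method (18c and later):

```sql
-- Remove all documents from the collection but keep the collection itself
DECLARE
  l_collection SODA_COLLECTION_T;
BEGIN
  l_collection := DBMS_SODA.OPEN_COLLECTION('OAC_LOGS');
  l_collection.truncate;
END;
/
```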
⚙️ Three timing knobs

Your pipeline has three timing knobs: Service Connector batch rollover, scheduler interval, and LSQL correlation window. This guide explains what each controls and the trade-offs.

  • Connector Batch Rollover (default: 1 MB or 60 sec): when the Service Connector pushes logs to Object Storage. Lower values mean fresher data but higher API costs.
  • Scheduler Interval (default: 1 minute): how often OAC_INGEST_LOGS runs. Use 1 minute for near-real-time dashboards; 5-10 minutes for batch analysis.
  • LSQL Time Window (default: 30 sec before to 5 sec after the request, as in the view above): the time range searched for matching SQL Request logs. Around ±30 sec is a reasonable sweet spot.

Connector Batch Rollover Deep Dive

The Service Connector groups logs into batches and uploads them to Object Storage. The rollover setting is a threshold: upload when either the size (1 MB) OR time (60 sec) is reached, whichever comes first. Impact: Larger batch sizes improve throughput (fewer API calls) but increase latency.

Scheduler Interval Deep Dive

Your ingestion job runs on a fixed schedule. Every run queries the watermark, lists new objects, and loads them. Impact: 1 minute = feedback appears in OAC within 1-2 minutes. 5-10 minutes = reduced DB load but stale data.

LSQL Time Window Deep Dive

When joining feedback with LSQL logs, you account for clock skew and network latency. A tight ±5 sec window is fast but risks missing matches. A wider ±60 sec is safer but risks false correlations. Recommended: ±30 seconds. Periodically check for feedback rows that found no matching LSQL to detect correlation misses.
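Note that OAC_FEEDBACK_WITH_LSQL as defined earlier uses an inner join, so correlation misses appear as missing rows rather than NULL lsql values; an anti-join against the feedback view surfaces them. A sketch, assuming utterance plus request_time identifies a feedback row:

```sql
-- Feedback rows with no matching LSQL entry in the final view
SELECT f.user_id, f.utterance, f.request_time
FROM   OAC_AI_FEEDBACK_ANALYSIS f
WHERE  NOT EXISTS (
         SELECT 1
         FROM   OAC_FEEDBACK_WITH_LSQL w
         WHERE  w.request_time = f.request_time
         AND    w.utterance    = f.utterance
       )
ORDER  BY f.request_time DESC;
```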

🔧 Common issues and fixes

Common issues you might encounter during operation, with diagnostic queries and fixes.

  • No logs ingested (OAC_LOGS is empty): check the Service Connector status, verify logs are arriving in the OAC-AI-Logs log group, and run OAC_INGEST_LOGS manually.
  • ORA-00904: invalid identifier (view creation fails): the JSON path is wrong. Query OAC_LOGS directly and use JSON_DATAGUIDE to inspect the document structure.
  • LSQL column is NULL (feedback visible but no LSQL matched): the time window is too narrow or session IDs don't match. Widen the window (e.g., to ±60 sec).
  • ORA-27475: job exists (scheduler job creation fails): drop the existing job first: BEGIN DBMS_SCHEDULER.DROP_JOB('OAC_LOG_INGEST_JOB'); END; /
  • Correlation fails (feedback entries don't link to requests): parentEcid may be missing in the logs. Check the raw JSON.
  • Dashboard shows stale data (last feedback over 30 minutes old): check the scheduler job status; if paused, re-enable it: BEGIN DBMS_SCHEDULER.ENABLE('OAC_LOG_INGEST_JOB'); END; /
📖 Complete inventory

A complete inventory of all 8 database objects you've created. Use this as a checklist when auditing your setup.

  • OCI_NATIVE_CRED (credential, step B-1): authenticates the database to OCI Object Storage.
  • OAC_LOGS (SODA collection, step B-2): stores raw JSON logs from OCI Logging.
  • OAC_LOG_TIMESTAMP_UPDATE (table, step B-3): watermark tracking the last ingestion timestamp.
  • OAC_INGEST_LOGS (procedure, step B-4): loads logs from Object Storage into the SODA collection.
  • OAC_LOG_DETAILS (view, step B-5): exposes key fields from the raw JSON as SQL columns.
  • OAC_AI_FEEDBACK_ANALYSIS (view, step B-6): correlates feedback with parent request events.
  • OAC_FEEDBACK_WITH_LSQL (view, step B-7): final view joining feedback with generated LSQL.
  • OAC_LOG_INGEST_JOB (scheduler job, step B-8): runs OAC_INGEST_LOGS automatically every minute.
🏗️ What You've Built

A summary of your complete feedback analytics pipeline

🛢️ Database Objects

8 production objects: collections, tables, views, procedures, and a scheduler job that run automatically.

🔗 Key View: OAC_FEEDBACK_WITH_LSQL

The master view correlating user feedback, utterances, and generated LSQL queries with full context.

⚙️ Automation

A 1-minute scheduler job that continuously pulls new logs from Object Storage, parses the JSON, and keeps the views current.

📊 Insight You Gain

Identify which LSQL patterns users dislike, measure response satisfaction by data model, and detect AI-to-database performance issues.


Last updated: April 2026