Conceptual Discussion & Design Patterns
Date: February 2, 2026
This document captures a comprehensive discussion on designing a master-detail CRUD (Create, Read, Update, Delete) user interface with intelligent duplicate detection. The conversation explores architectural decisions, workflow patterns, and best practices for building robust data management systems.
A master-detail interface consists of two primary views:
List View (Master): Displays records in a table or list format with columns for key fields. Supports filtering, sorting, and row selection.
Detail View: Shows complete information for the selected record. Provides editing capabilities and action buttons (Save, Delete, New).
The list must be updated after each operation to reflect the current database state:
After INSERT:
- Check if new record matches current filter
- If yes: Add to list at appropriate position
- If no: Show message explaining record exists but is filtered out
After UPDATE:
- Check if modified record still matches filter
- If yes: Update in place or re-sort if sort criteria changed
- If no: Remove from list with explanation
After DELETE:
- Remove record from list
- Select next/previous item or show empty state
For Sorted Lists: Calculate correct position based on sort column and order. INSERT operations add at the correct sorted position. UPDATE operations may require moving the record if sort-relevant fields changed.
For Unsorted Lists: INSERT typically adds to top or bottom (configurable). UPDATE keeps record in same position unless explicit re-sort triggered.
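For illustration, a minimal LiveCode sketch of the sorted-position calculation. It assumes the list data lives in a tab-delimited text variable already sorted ascending on one column; the function name and parameters are illustrative, not from any library:

-- A minimal sketch, assuming pListData holds one record per line
-- (tab-delimited) sorted ascending on column pSortCol. Returns the
-- 1-based row at which the new value should be inserted.
function calculateInsertPosition pNewValue, pListData, pSortCol
   set the itemDelimiter to tab
   local tLine, tRow
   put 0 into tRow
   repeat for each line tLine in pListData
      add 1 to tRow
      -- first row whose sort value exceeds the new value
      if item pSortCol of tLine > pNewValue then
         return tRow
      end if
   end repeat
   return tRow + 1  -- larger than every row: belongs at the end
end calculateInsertPosition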
When a user edits a record and the changes cause it to no longer match the active filter, the record "disappears" from the list. This can be confusing and frustrating if not handled properly.
Anti-Pattern (Bad):
- ✗ Record silently disappears
- ✗ User confused about where their work went
Good Pattern:
- ✓ Show snackbar: "Record updated. Now hidden by current filter."
- ✓ Offer action button: "SHOW ALL" or "CLEAR FILTER"
- ✓ Keep detail view open briefly, then close gracefully
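A minimal LiveCode sketch of the good pattern, assuming hypothetical app-level helpers matchesCurrentFilter and showSnackbar (neither is built into LiveCode):

-- After a save, check the record against the active filter and
-- tell the user explicitly when it has dropped out of the list.
command afterRecordUpdated pRecordData
   if matchesCurrentFilter(pRecordData) then
      showSnackbar "Changes saved"
   else
      -- record no longer matches the filter: say so, and offer
      -- a one-click way to bring it back into view
      showSnackbar "Record updated. Now hidden by current filter.", "SHOW ALL"
   end if
end afterRecordUpdated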
Material Design 3 defines a hierarchy of feedback mechanisms: inline errors for field validation, snackbars for transient confirmations, and dialogs for decisions that must block progress. Either of two update strategies can drive that feedback:
Optimistic Update (Asynchronous):
Update the UI immediately, then send the change to the database asynchronously. If the database operation fails, roll back the UI changes. This gives the fastest perceived performance but requires careful error handling.
User clicks Save
      ↓
Immediately update UI (list + detail)
      ↓
Send to database asynchronously
      ↓
On Success: Show snackbar
On Error: Rollback UI, show error dialog
Pessimistic Update (Synchronous):
Wait for the database operation to complete before updating the UI. Simpler to implement and debug. Since LiveCode operations are synchronous and modern hardware is fast, the wait is typically negligible.
User clicks Save
      ↓
Show loading indicator (optional)
      ↓
Wait for database operation
      ↓
Update UI + show feedback
Recommendation: Use the pessimistic (synchronous) approach. LiveCode's synchronous nature makes this simpler, and modern database operations are fast enough that users won't notice delays. This avoids the complexity of rollback logic.
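A minimal LiveCode sketch of the pessimistic flow. It assumes gConnId holds a connection previously opened with revOpenDatabase, and that updateListView and showSnackbar are app-level helpers (both hypothetical). revExecuteSQL is synchronous: it returns only after MariaDB has answered, so no rollback logic is needed:

command saveCustomer pId, pName, pEmail
   global gConnId
   revExecuteSQL gConnId, \
         "UPDATE customers SET name = :1, email = :2 WHERE id = :3", \
         "pName", "pEmail", "pId"
   -- the LiveCode database library reports errors via the result,
   -- as a string beginning with "revdberr"
   if the result begins with "revdberr" then
      answer error "Could not save changes:" && the result
   else
      updateListView pId
      showSnackbar "Changes saved"
   end if
end saveCustomer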
FUNCTION handleInsert(newRecordData):
    // 1. Validate
    errors = validate(newRecordData)
    IF errors.exist:
        SHOW_INLINE_ERRORS(errors)
        RETURN false

    // 2. Check for duplicates
    duplicates = findPotentialDuplicates(newRecordData)
    IF duplicates.count > 0:
        choice = showDuplicateDialog(duplicates, newRecordData)
        IF choice != "SAVE_ANYWAY":
            RETURN false

    // 3. Generate new ID (MariaDB 10.3+ sequence syntax)
    newId = database.executeScalar("SELECT NEXT VALUE FOR seq_table")

    // 4. Database INSERT
    TRY:
        sql = "INSERT INTO table (id, ...) VALUES (?, ...)"
        database.execute(sql, [newId, ...])
        newRecordData.id = newId

        // 5. Update list view
        IF matchesCurrentFilter(newRecordData):
            position = calculateInsertPosition(newRecordData)
            list.insertAt(position, newRecordData)
            list.select(newRecordData)
            showSnackbar("Record created")
        ELSE:
            showSnackbar("Record created (hidden by filter)", action="VIEW")
        RETURN true
    CATCH error:
        showErrorDialog("Could not create record: " + error.message)
        RETURN false
FUNCTION handleUpdate(recordId, changes):
    // 1. Validate
    errors = validate(changes)
    IF errors.exist:
        SHOW_INLINE_ERRORS(errors)
        RETURN false

    // 2. Check for duplicates (excluding the record being edited)
    duplicates = findPotentialDuplicates(changes, excludeId = recordId)
    IF duplicates.count > 0:
        choice = showDuplicateDialog(duplicates, changes)
        IF choice != "SAVE_ANYWAY":
            RETURN false

    // 3. Database UPDATE
    TRY:
        sql = "UPDATE table SET ... WHERE id = ?"
        database.execute(sql, [changes..., recordId])

        // 4. Update list view
        stillMatches = matchesCurrentFilter(changes)
        IF stillMatches:
            IF sortFieldChanged(changes):
                newPosition = calculatePosition(changes)
                list.moveToPosition(recordId, newPosition)
            ELSE:
                list.updateInPlace(recordId, changes)
            showSnackbar("Changes saved")
        ELSE:
            list.remove(recordId)
            showSnackbar("Changes saved (record now filtered)", action="SHOW ALL")
        RETURN true
    CATCH error:
        showErrorDialog("Could not save changes: " + error.message)
        RETURN false
FUNCTION handleDelete(recordId):
    // 1. Confirmation dialog
    confirmed = showDialog(
        title: "Delete record?",
        message: "This action cannot be undone.",
        actions: ["CANCEL", "DELETE"]
    )
    IF NOT confirmed:
        RETURN false

    // 2. Database DELETE
    TRY:
        database.execute("DELETE FROM table WHERE id = ?", [recordId])

        // 3. Update UI
        listIndex = list.getIndex(recordId)
        list.remove(recordId)

        // 4. Select next item or show empty state
        IF list.count > 0:
            nextIndex = min(listIndex, list.count - 1)
            list.selectAt(nextIndex)
        ELSE:
            detailView.clear()
            detailView.showEmptyState()
        showSnackbar("Record deleted")
        RETURN true
    CATCH error:
        showErrorDialog("Could not delete record: " + error.message)
        RETURN false
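For concreteness, a minimal LiveCode rendering of handleDelete. It assumes gConnId holds an open connection and that refreshListAfterDelete is a hypothetical app-level helper; the answer dialog blocks until the user chooses, matching the synchronous flow above:

command handleDelete pRecordId
   global gConnId
   -- 1. Confirmation dialog (result arrives in "it")
   answer "Delete record?" & return & "This action cannot be undone." \
         with "Cancel" or "Delete"
   if it is "Cancel" then exit handleDelete
   -- 2. Database DELETE
   revExecuteSQL gConnId, "DELETE FROM customers WHERE id = :1", "pRecordId"
   if the result begins with "revdberr" then
      answer error "Could not delete record:" && the result
   else
      -- 3./4. Remove from list, reselect or show empty state
      refreshListAfterDelete pRecordId
   end if
end handleDelete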
LiveCode operations are synchronous and blocking. Unlike typical web applications with asynchronous JavaScript, LiveCode waits for database operations to complete. This simplifies error handling but means the UI may briefly freeze during operations.
Key Consideration: Modern databases and hardware make most operations fast enough that users won't notice delays. For operations that might take longer (bulk imports, complex reports), consider showing a progress indicator.
Workaround for Long Operations: If truly asynchronous operations are needed, LiveCode can delegate to external processes (Python via sockets, shell scripts) but this adds significant complexity and is rarely necessary for typical CRUD operations.
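For the common case of a merely slow (not truly asynchronous) operation, a watch cursor is a lightweight progress indicator in LiveCode. A small sketch:

-- lock cursor keeps LiveCode from resetting the watch cursor
-- until we explicitly unlock it after the blocking call returns
command runLongQuery pConnId, pSQL
   lock cursor
   set the cursor to watch
   revExecuteSQL pConnId, pSQL
   unlock cursor
end runLongQuery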
MariaDB supports UPSERT operations through two mechanisms:
-- Method 1: INSERT ... ON DUPLICATE KEY UPDATE
INSERT INTO customers (id, name, email)
VALUES (?, ?, ?)
ON DUPLICATE KEY UPDATE
name = VALUES(name),
email = VALUES(email),
updated_at = NOW();
-- Method 2: REPLACE INTO (not recommended: it deletes the old row
-- and inserts a new one, which fires DELETE triggers and resets
-- any columns not listed to their defaults)
REPLACE INTO customers (id, name, email)
VALUES (?, ?, ?);
For most applications, maintaining separate INSERT and UPDATE operations is clearer:
Reasons to Keep Separate:
- Duplicate checks differ: INSERT compares against all records, while UPDATE must exclude the record's own id
- User feedback differs: "Record created" versus "Changes saved"
- List handling differs: insert at the sorted position versus update in place
- Upsert semantics can silently overwrite data the user never saw
Even with unique database IDs, users may accidentally create near-duplicate records. For example, entering "John Smith" twice with slightly different contact information. Duplicate detection helps users avoid data quality issues.
Hard Duplicates (Database Constraint):
Fields with UNIQUE constraints where duplicates are impossible (e.g., email addresses, SKU codes). The database enforces uniqueness and will reject duplicate INSERTs with an error.
Soft Duplicates (Business Logic):
Fields where duplicates might be legitimate but should be investigated (e.g., customer names, phone numbers). The application checks for similarity and asks the user to confirm.
Option A: Real-time (while typing)
- Immediate feedback as user types
- Multiple database queries (use debouncing)
- Shows inline warning icon/text
- Non-blocking - user can continue
Option B: On save (validation step)
- Single query when user clicks Save
- Shows dialog if duplicates found
- User makes explicit decision
- May surprise user after filling entire form
Option C: Hybrid (recommended)
- Real-time: Show subtle warning while typing
- On save: Firm check with decision dialog
- Best user experience - early warning + confirmation
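The "while typing" half of Option C maps naturally onto LiveCode's send-in-time idiom for debouncing. A minimal sketch; checkForDuplicates is a hypothetical handler that would run the duplicate query:

on keyUp
   local tMsg
   -- cancel any check still queued from earlier keystrokes
   repeat for each line tMsg in the pendingMessages
      if item 3 of tMsg is "checkForDuplicates" then
         cancel item 1 of tMsg
      end if
   end repeat
   -- re-arm: fires only once typing pauses for 400 ms
   send "checkForDuplicates" to me in 400 milliseconds
   pass keyUp
end keyUp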
DUPLICATE_RULES = {
    "customers": {
        "enabled": true,
        "fields": [
            { "name": "name",  "method": "case_insensitive", "weight": "high" },
            { "name": "email", "method": "case_insensitive", "weight": "high" },
            { "name": "phone", "method": "exact",            "weight": "medium" }
        ],
        "logic": "OR",   // ANY field matches
        "threshold": 1   // Minimum matches to warn
    }
}
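A sketch of how the same rules could be held in a LiveCode array, loaded once at startup (for example from a stack custom property) so they can change without code edits. gDuplicateRules and every key here are illustrative, mirroring the structure above:

command loadDuplicateRules
   global gDuplicateRules
   put true into gDuplicateRules["customers"]["enabled"]
   put "name" into gDuplicateRules["customers"]["fields"][1]["name"]
   put "case_insensitive" into gDuplicateRules["customers"]["fields"][1]["method"]
   put "high" into gDuplicateRules["customers"]["fields"][1]["weight"]
   put "OR" into gDuplicateRules["customers"]["logic"]   -- ANY field matches
   put 1 into gDuplicateRules["customers"]["threshold"]  -- min matches to warn
end loadDuplicateRules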
-- Exact match (case-insensitive)
SELECT id, name, email, phone
FROM customers
WHERE LOWER(name) = LOWER(?)
AND id != ?
LIMIT 5;
-- Fuzzy match (phonetic similarity)
SELECT id, name, email, phone
FROM customers
WHERE SOUNDEX(name) = SOUNDEX(?)
AND id != ?
LIMIT 5;
-- Multiple field check
SELECT id, name, email, phone
FROM customers
WHERE (LOWER(name) = LOWER(?) OR LOWER(email) = LOWER(?))
AND id != ?
LIMIT 5;
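A sketch of findPotentialDuplicates wired to the first query above, using LiveCode's revDataFromQuery; gConnId is assumed to hold an open connection. revDataFromQuery returns tab- and return-delimited rows, or a string beginning with "revdberr" on failure:

function findPotentialDuplicates pName, pExcludeId
   global gConnId
   local tSQL, tRows
   put "SELECT id, name, email, phone FROM customers" & \
         " WHERE LOWER(name) = LOWER(:1) AND id != :2 LIMIT 5" into tSQL
   put revDataFromQuery(tab, return, gConnId, tSQL, "pName", "pExcludeId") \
         into tRows
   if tRows begins with "revdberr" then return empty
   return tRows   -- one potential duplicate per line
end findPotentialDuplicates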
When potential duplicates are found, show a dialog that:
- Lists the similar records so the user can compare them side by side
- Explains which fields matched (and, where available, a match score)
- Offers explicit choices: CANCEL, VIEW EXISTING, or SAVE ANYWAY
Should duplicate detection be implemented in database triggers or in application code? This is a critical architectural decision with far-reaching implications.
PROS:
- ✓ Enforced regardless of which application accesses the database
- ✓ Cannot be bypassed by buggy or malicious code
- ✓ Works for batch imports, direct SQL, multiple applications
- ✓ Data integrity guaranteed at the source
CONS:
- ✗ No user interaction possible
- ✗ Binary decision only (allow or reject)
- ✗ Poor user experience (generic database errors)
- ✗ Cannot show similar records for comparison
- ✗ No "save anyway" option
- ✗ Difficult to implement fuzzy matching
- ✗ Harder to maintain and debug
BEST FOR:
- Hard constraints (email must be unique)
- Audit logging
- Cascade operations
- Computed fields
PROS:
- ✓ Rich user interaction (dialogs, choices)
- ✓ Can show similar records for comparison
- ✓ User can decide: cancel, view, or save anyway
- ✓ Flexible matching strategies (fuzzy, soundex, etc.)
- ✓ Better UX (check before form submission)
- ✓ Can show match scores and reasons
- ✓ Easier to maintain and modify logic
- ✓ Configurable per table/context
CONS:
- ✗ Must be implemented in each application
- ✗ Can be bypassed by direct database access
- ✗ Requires careful coding in all CRUD operations
- ✗ Race condition possible (two users simultaneously)
BEST FOR:
- Soft duplicate warnings
- Business logic rules
- User-assisted decisions
- Fuzzy/similarity matching
A potential issue with application-layer checking: two users might check for duplicates simultaneously, both find none, and both insert duplicate records.
Solutions:
Accept the risk (pragmatic): In practice, it is extremely rare for two users to enter identical data at the same moment, so the application-layer check catches nearly all duplicates.
Database UNIQUE constraint (strict): Add UNIQUE constraint for fields that truly must be unique. Second insert fails with clear error.
Double-check before insert (defensive): Perform final duplicate check immediately before INSERT statement.
Transaction with lock (complex): Use SELECT FOR UPDATE within transaction. Impacts performance and increases deadlock risk.
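Solutions 2 and 3 combine naturally: keep the UNIQUE constraint as the backstop and treat a constraint violation on INSERT as a late-detected duplicate. A LiveCode sketch, with gConnId assumed open; MariaDB reports a UNIQUE violation as error 1062 with the message text "Duplicate entry ...":

command insertCustomer pId, pName, pEmail
   global gConnId
   revExecuteSQL gConnId, \
         "INSERT INTO customers (id, name, email) VALUES (:1, :2, :3)", \
         "pId", "pName", "pEmail"
   if the result begins with "revdberr" then
      if the result contains "Duplicate entry" then
         -- the race happened: another user inserted first
         answer "A record with this email was just created by another user."
      else
         answer error "Could not create record:" && the result
      end if
   end if
end insertCustomer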
LAYER 1: Database - Hard Constraints Only
Use UNIQUE constraints for fields that absolutely must be unique:
- Email addresses (if business rule requires)
- SKU codes
- Account numbers
- License keys
CREATE TABLE customers (
id BIGINT PRIMARY KEY,
name VARCHAR(255) NOT NULL,
email VARCHAR(255) UNIQUE, -- Hard constraint
phone VARCHAR(50),
created_at TIMESTAMP DEFAULT NOW(),
updated_at TIMESTAMP NULL
);
LAYER 2: Application - Soft Duplicate Detection
Use SELECT queries before INSERT/UPDATE for fields where duplicates need investigation:
- Names (might legitimately duplicate)
- Addresses
- Phone numbers (shared in families)
- Product descriptions
LAYER 3: Database - Audit & Cascade
Use AFTER INSERT/UPDATE triggers for:
- Audit logging (who, when, what)
- Creating related records automatically
- Updating summary/computed tables
- Sending notifications
User enters data in form
        ↓
User clicks SAVE button
        ↓
[APPLICATION LAYER]
 ├─> Validate form fields
 ├─> If validation fails: Show inline errors, STOP
 └─> If validation passes: Continue
        ↓
[APPLICATION LAYER]
 ├─> Execute SELECT query to check for duplicates
 ├─> If duplicates found: Show dialog with options
 │    ├─> User clicks CANCEL: STOP
 │    ├─> User clicks VIEW EXISTING: Load that record, STOP
 │    └─> User clicks SAVE ANYWAY: Continue
 └─> If no duplicates: Continue
        ↓
[DATABASE LAYER - BEFORE INSERT]
 ├─> BEFORE INSERT trigger fires
 └─> Generate ID from the sequence (NEXT VALUE FOR)
        ↓
[DATABASE LAYER - INSERT]
 ├─> INSERT statement executes
 ├─> UNIQUE constraints checked (hard constraints only)
 ├─> Constraint violation: Error returned to application
 └─> No violation: Continue
        ↓
[DATABASE LAYER - AFTER INSERT]
 ├─> AFTER INSERT trigger fires
 └─> Log to audit table
        ↓
[APPLICATION LAYER]
 ├─> Receive success confirmation
 ├─> Update list view (add/update record)
 ├─> Update detail view (show saved data)
 └─> Show snackbar: "Record created" or "Changes saved"
-- Sequence for generating IDs
CREATE SEQUENCE seq_customers;
-- Table definition
CREATE TABLE customers (
id BIGINT PRIMARY KEY,
name VARCHAR(255) NOT NULL,
email VARCHAR(255) UNIQUE, -- Hard constraint
phone VARCHAR(50),
address TEXT,
created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
updated_at TIMESTAMP NULL,
created_by VARCHAR(100),
updated_by VARCHAR(100)
);
-- BEFORE INSERT trigger for ID generation
DELIMITER //
CREATE TRIGGER customers_bi
BEFORE INSERT ON customers
FOR EACH ROW
BEGIN
IF (NEW.id IS NULL OR NEW.id = 0) THEN
SET NEW.id = NEXT VALUE FOR seq_customers;
END IF;
SET NEW.created_at = NOW();
END//
DELIMITER ;
-- AFTER INSERT trigger for audit logging
DELIMITER //
CREATE TRIGGER customers_ai
AFTER INSERT ON customers
FOR EACH ROW
BEGIN
INSERT INTO audit_log (
table_name,
record_id,
action,
user_id,
timestamp
) VALUES (
'customers',
NEW.id,
'INSERT',
NEW.created_by,
NOW()
);
END//
DELIMITER ;
1. Architecture:
- Database handles hard constraints (UNIQUE) and ID generation
- Application handles soft duplicate detection and user interaction
- Triggers used only for audit logging and cascade operations
2. Duplicate Detection:
- Implemented in application layer using SELECT queries
- Configurable per table with flexible matching strategies
- User-friendly dialogs with informed decision options
3. Workflow Pattern:
- Synchronous operations (appropriate for LiveCode)
- Separate INSERT and UPDATE commands (clearer logic)
- Material Design 3 feedback patterns (snackbars, dialogs)
4. List Synchronization:
- Update list after every operation
- Handle filter mismatches gracefully with user feedback
- Maintain sort order when inserting/updating records
The conceptual foundation is now solid. Before implementation, it is worth noting what the design buys:
Data Quality:
- Reduces accidental duplicate entries
- Maintains database integrity with hard constraints
- Provides audit trail for all changes
User Experience:
- Clear feedback for all operations
- Informed decisions when duplicates detected
- Graceful handling of filtered records
Maintainability:
- Clear separation of concerns
- Configurable duplicate rules without code changes
- Easy to debug and modify
When moving to implementation, the following areas will need attention:
A. LiveCode-Specific Implementation:
- Database connection handling
- Error handling patterns
- UI component configuration
- Event handler structure
B. Configuration Management:
- Where to store duplicate detection rules
- How to modify rules without code changes
- Table-specific validation rules
C. Testing Strategy:
- Test duplicate detection with various similarity levels
- Test filter interactions with CRUD operations
- Test error handling and rollback scenarios
- Test user feedback in all scenarios
D. Edge Cases to Consider:
- Concurrent user modifications
- Network failures during save
- Very large result sets in lists
- Records with many matching duplicates
This document represents a thorough exploration of master-detail CRUD workflows with intelligent duplicate detection. The principles discussed are applicable across many database systems and programming environments, though the specifics are tailored for LiveCode with MariaDB.
The emphasis on user experience, data quality, and maintainable architecture creates a solid foundation for building robust data management applications. Take time to understand each concept before implementation, and proceed step by step as planned.