
Master-Detail CRUD WorkflowπŸ”—

Database Design & Duplicate DetectionπŸ”—

Conceptual Discussion & Design Patterns

Date: February 2, 2026


Table of ContentsπŸ”—

  1. Introduction
  2. Master-Detail CRUD Workflow Overview
  3. List Synchronization Strategy
  4. Filter Considerations
  5. User Feedback Patterns
  6. State Management Flow
  7. Conceptual Workflow Pseudo-Code
  8. Synchronous Operations in LiveCode
  9. UPSERT vs Separate INSERT/UPDATE
  10. Duplicate Detection Strategy
  11. Trigger vs Application Layer Analysis
  12. Recommended Architecture
  13. Summary & Next Steps

IntroductionπŸ”—

This document captures a comprehensive discussion on designing a master-detail CRUD (Create, Read, Update, Delete) user interface with intelligent duplicate detection. The conversation explores architectural decisions, workflow patterns, and best practices for building robust data management systems.

Key Topics CoveredπŸ”—

- Master-detail CRUD workflow and keeping the list in sync with the database
- Filter interactions and Material Design 3 user feedback patterns
- Synchronous (pessimistic) state management in LiveCode
- UPSERT versus separate INSERT/UPDATE statements
- Duplicate detection: hard vs soft duplicates, configurable rules, and query patterns
- Trigger versus application layer responsibilities, and the recommended hybrid architecture

Master-Detail CRUD Workflow OverviewπŸ”—

System ComponentsπŸ”—

A master-detail interface consists of two primary views:

List View (Master): Displays records in a table or list format with columns for key fields. Supports filtering, sorting, and row selection.

Detail View: Shows complete information for the selected record. Provides editing capabilities and action buttons (Save, Delete, New).

Core OperationsπŸ”—

- Create: the New button opens a blank detail form; Save issues an INSERT
- Read: selecting a row in the list loads the record into the detail view
- Update: editing the detail view and clicking Save issues an UPDATE
- Delete: the Delete button asks for confirmation, then issues a DELETE

List Synchronization StrategyπŸ”—

When to Refresh the ListπŸ”—

The list must be updated after each operation to reflect the current database state (a LiveCode sketch of this decision follows the lists below):

After INSERT:
- Check if new record matches current filter
- If yes: Add to list at appropriate position
- If no: Show message explaining record exists but is filtered out

After UPDATE:
- Check if modified record still matches filter
- If yes: Update in place or re-sort if sort criteria changed
- If no: Remove from list with explanation

After DELETE:
- Remove record from list
- Select next/previous item or show empty state
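
To make the decision concrete, here is a minimal LiveCode sketch of the post-UPDATE case. matchesCurrentFilter, refreshListRow, removeListRow and showSnackbar are hypothetical helper handlers standing in for whatever list control and notification mechanism the application actually uses.

-- Minimal sketch: keep the list in sync after a successful UPDATE.
-- matchesCurrentFilter, refreshListRow, removeListRow and showSnackbar are
-- hypothetical helpers; adapt them to the actual list control in use.
command syncListAfterUpdate pRecordArray
   if matchesCurrentFilter(pRecordArray) then
      -- the record still belongs in the list: redraw its row in place
      refreshListRow pRecordArray["id"], pRecordArray
      showSnackbar "Changes saved"
   else
      -- the record no longer matches the filter: remove it and say why
      removeListRow pRecordArray["id"]
      showSnackbar "Changes saved (record now hidden by the filter)"
   end if
end syncListAfterUpdate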

Position StrategyπŸ”—

For Sorted Lists: Calculate correct position based on sort column and order. INSERT operations add at the correct sorted position. UPDATE operations may require moving the record if sort-relevant fields changed.

For Unsorted Lists: INSERT typically adds to top or bottom (configurable). UPDATE keeps record in same position unless explicit re-sort triggered.
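
For the sorted case, the insert position can be found with a simple linear scan over the list data. A minimal sketch, assuming the rows are tab-delimited lines held in a variable, an ascending sort, and a text comparison (numeric or date columns would need a different test):

-- Hypothetical helper: return the line number at which a new row should be
-- inserted so that an ascending sort on item pSortItem is preserved.
function calculateInsertPosition pListData, pNewValue, pSortItem
   set the itemDelimiter to tab
   local tLineNo
   put 0 into tLineNo
   repeat for each line tRow in pListData
      add 1 to tLineNo
      if pNewValue < item pSortItem of tRow then
         return tLineNo        -- insert before the first larger value
      end if
   end repeat
   return tLineNo + 1          -- larger than everything: append at the end
end calculateInsertPosition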


Filter ConsiderationsπŸ”—

The Disappeared Record ProblemπŸ”—

When a user edits a record and the changes cause it to no longer match the active filter, the record "disappears" from the list. This can be confusing and frustrating if not handled properly.

Material Design 3 Pattern: Snackbar with ActionπŸ”—

Anti-Pattern (Bad):
- βœ— Record silently disappears
- βœ— User confused about where their work went

Good Pattern:
- βœ“ Show snackbar: "Record updated. Now hidden by current filter."
- βœ“ Offer action button: "SHOW ALL" or "CLEAR FILTER"
- βœ“ Keep detail view open briefly, then close gracefully
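
LiveCode has no built-in snackbar, so one lightweight option is a hidden group that is shown briefly with a message and an optional action button. A rough sketch, assuming a group named "snackbar" containing a field "snackMessage" and a button "snackAction" (all hypothetical names laid out in advance):

-- Rough sketch of a snackbar-style notification.
command showSnackbar pMessage, pActionLabel
   put pMessage into field "snackMessage" of group "snackbar"
   if pActionLabel is empty then
      hide button "snackAction" of group "snackbar"
   else
      set the label of button "snackAction" of group "snackbar" to pActionLabel
      show button "snackAction" of group "snackbar"
   end if
   show group "snackbar"
   send "hideSnackbar" to me in 4 seconds   -- auto-dismiss
end showSnackbar

command hideSnackbar
   hide group "snackbar"
end hideSnackbar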


User Feedback PatternsπŸ”—

Material Design 3 defines a hierarchy of feedback mechanisms:

Level 1: Subtle (Preferred for Frequent Operations)πŸ”—

Auto-dismissing snackbars or toasts for routine outcomes, e.g. "Record created" or "Changes saved".

Level 2: Moderate (For Important Operations)πŸ”—

Snackbars with an action button, e.g. "Record updated. Now hidden by current filter." with a SHOW ALL action.

Level 3: Prominent (For Critical Operations)πŸ”—

Modal dialogs that require an explicit decision: delete confirmation, duplicate resolution, and error messages.

When to Inform UserπŸ”—

- After every create, update, or delete, confirm the outcome (Level 1)
- When a saved record is hidden or removed by the active filter, explain why and offer an action (Level 2)
- When potential duplicates are found or an operation fails, require an explicit decision (Level 3)

State Management FlowπŸ”—

Optimistic UI Update (Advanced)πŸ”—

Update the UI immediately, then send to database asynchronously. If database operation fails, rollback the UI changes. This provides the fastest perceived performance but requires careful error handling.

User clicks Save
    ↓
Immediately update UI (list + detail)
    ↓
Send to database asynchronously
    ↓
On Success: Show snackbar
On Error: Rollback UI, show error dialog

Pessimistic Update (Simpler, for LiveCode)πŸ”—

Wait for database operation to complete before updating UI. Simpler to implement and debug. Since LiveCode operations are synchronous and modern equipment is fast, the wait time is typically negligible.

User clicks Save
    ↓
Show loading indicator (optional)
    ↓
Wait for database operation
    ↓
Update UI + show feedback

Recommendation for LiveCode ContextπŸ”—

Use the pessimistic (synchronous) approach. LiveCode's synchronous nature makes this simpler, and modern database operations are fast enough that users won't notice delays. This avoids the complexity of rollback logic.
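
A minimal sketch of this synchronous flow using LiveCode's revDB library. It assumes gConnectionID holds an open connection from revOpenDatabase, that the customers table and its columns exist as named, and that syncListAfterUpdate and showSnackbar are the hypothetical helpers sketched earlier.

-- Pessimistic save: run the UPDATE, wait for the result, then update the UI.
command saveCurrentRecord pRecordArray
   global gConnectionID
   set the cursor to watch

   local tSQL, tName, tEmail, tPhone, tID
   put pRecordArray["name"] into tName
   put pRecordArray["email"] into tEmail
   put pRecordArray["phone"] into tPhone
   put pRecordArray["id"] into tID
   put "UPDATE customers SET name = :1, email = :2, phone = :3 WHERE id = :4" into tSQL

   -- blocking call: execution continues only when the database has answered
   revExecuteSQL gConnectionID, tSQL, "tName", "tEmail", "tPhone", "tID"

   if the result is an integer then
      -- success: the result holds the number of affected rows
      syncListAfterUpdate pRecordArray
   else
      answer error "Could not save changes:" && the result
   end if
end saveCurrentRecord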


Conceptual Workflow Pseudo-CodeπŸ”—

INSERT WorkflowπŸ”—

FUNCTION handleInsert(newRecordData):
  // 1. Validate
  errors = validate(newRecordData)
  IF errors.exist:
    SHOW_INLINE_ERRORS(errors)
    RETURN false

  // 2. Check for duplicates
  duplicates = findPotentialDuplicates(newRecordData)
  IF duplicates.count > 0:
    choice = showDuplicateDialog(duplicates, newRecordData)
    IF choice != "SAVE_ANYWAY":
      RETURN false

  // 3. Generate new ID
  newId = database.executeScalar("SELECT NEXT VALUE FOR seq_table")  // MariaDB sequence syntax

  // 4. Database INSERT
  TRY:
    sql = "INSERT INTO table (id, ...) VALUES (?, ...)"
    database.execute(sql, [newId, ...])
    newRecordData.id = newId

    // 5. Update list view
    IF matchesCurrentFilter(newRecordData):
      position = calculateInsertPosition(newRecordData)
      list.insertAt(position, newRecordData)
      list.select(newRecordData)
      showSnackbar("Record created")
    ELSE:
      showSnackbar("Record created (hidden by filter)", action="VIEW")

    RETURN true

  CATCH error:
    showErrorDialog("Could not create record: " + error.message)
    RETURN false
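
For orientation, a compressed LiveCode sketch of the same workflow (validation omitted for brevity). gConnectionID is assumed to hold an open revDB connection; findPotentialDuplicates and showSnackbar are hypothetical helpers, and the id column is assumed to be filled in by the BEFORE INSERT trigger described later.

-- Condensed LiveCode sketch of the INSERT workflow above.
function handleInsert pRecord
   global gConnectionID

   -- 2. soft duplicate check before touching the database
   if findPotentialDuplicates(pRecord) is not empty then
      answer "Similar records already exist. Save anyway?" \
            with "Cancel" or "Save Anyway"
      if it is "Cancel" then return false
   end if

   -- 3./4. INSERT (the BEFORE INSERT trigger generates the id)
   local tSQL, tName, tEmail
   put pRecord["name"] into tName
   put pRecord["email"] into tEmail
   put "INSERT INTO customers (name, email) VALUES (:1, :2)" into tSQL
   revExecuteSQL gConnectionID, tSQL, "tName", "tEmail"

   if the result is not an integer then
      answer error "Could not create record:" && the result
      return false
   end if

   -- 5. update the list view and confirm
   showSnackbar "Record created"
   return true
end handleInsert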

UPDATE WorkflowπŸ”—

FUNCTION handleUpdate(recordId, changes):
  // 1. Validate
  errors = validate(changes)
  IF errors.exist:
    SHOW_INLINE_ERRORS(errors)
    RETURN false

  // 2. Check for duplicates
  duplicates = findPotentialDuplicates(changes)
  IF duplicates.count > 0:
    choice = showDuplicateDialog(duplicates, changes)
    IF choice != "SAVE_ANYWAY":
      RETURN false

  // 3. Database UPDATE
  TRY:
    sql = "UPDATE table SET ... WHERE id = ?"
    database.execute(sql, [changes..., recordId])

    // 4. Update list view
    stillMatches = matchesCurrentFilter(changes)
    IF stillMatches:
      IF sortFieldChanged(changes):
        newPosition = calculatePosition(changes)
        list.moveToPosition(recordId, newPosition)
      ELSE:
        list.updateInPlace(recordId, changes)
      showSnackbar("Changes saved")
    ELSE:
      list.remove(recordId)
      showSnackbar("Changes saved (record now filtered)", action="SHOW ALL")

    RETURN true

  CATCH error:
    showErrorDialog("Could not save changes: " + error.message)
    RETURN false

DELETE WorkflowπŸ”—

FUNCTION handleDelete(recordId):
  // 1. Confirmation dialog
  confirmed = showDialog(
    title: "Delete record?",
    message: "This action cannot be undone.",
    actions: ["CANCEL", "DELETE"]
  )

  IF NOT confirmed:
    RETURN false

  // 2. Database DELETE
  TRY:
    database.execute("DELETE FROM table WHERE id = ?", [recordId])

    // 3. Update UI
    listIndex = list.getIndex(recordId)
    list.remove(recordId)

    // 4. Select next item or show empty state
    IF list.count > 0:
      nextIndex = min(listIndex, list.count - 1)
      list.selectAt(nextIndex)
    ELSE:
      detailView.clear()
      detailView.showEmptyState()

    showSnackbar("Record deleted")
    RETURN true

  CATCH error:
    showErrorDialog("Could not delete record: " + error.message)
    RETURN false
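
The confirmation step maps directly onto LiveCode's blocking answer dialog. A short sketch, again assuming gConnectionID plus the hypothetical removeListRow and showSnackbar helpers:

-- Minimal sketch of the DELETE workflow using the blocking answer dialog.
command deleteRecord pRecordId
   global gConnectionID

   -- 1. confirmation dialog
   answer "Delete record? This action cannot be undone." \
         with "Cancel" or "Delete"
   if it is not "Delete" then exit deleteRecord

   -- 2. database DELETE
   local tSQL
   put "DELETE FROM customers WHERE id = :1" into tSQL
   revExecuteSQL gConnectionID, tSQL, "pRecordId"

   if the result is an integer then
      -- 3./4. update the list and give feedback
      removeListRow pRecordId
      showSnackbar "Record deleted"
   else
      answer error "Could not delete record:" && the result
   end if
end deleteRecord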

Synchronous Operations in LiveCodeπŸ”—

LiveCode operations are synchronous and blocking. Unlike typical web applications with asynchronous JavaScript, LiveCode waits for database operations to complete. This simplifies error handling but means the UI may briefly freeze during operations.

Key Consideration: Modern databases and hardware make most operations fast enough that users won't notice delays. For operations that might take longer (bulk imports, complex reports), consider showing a progress indicator.

Workaround for Long Operations: If truly asynchronous operations are needed, LiveCode can delegate to external processes (Python via sockets, shell scripts) but this adds significant complexity and is rarely necessary for typical CRUD operations.
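
For the occasional longer task, the simplest pattern is to lock the cursor to the watch and, if useful, show a short status message before the blocking call starts; a sketch (field "status" is a hypothetical name):

-- Sketch: simple feedback around a longer, blocking operation.
command runLongOperation
   set the lockCursor to true
   set the cursor to watch
   put "Working..." into field "status"
   wait 0 milliseconds with messages   -- let the screen repaint first

   -- ... the blocking work goes here (bulk import, long report, etc.) ...

   put empty into field "status"
   set the lockCursor to false
end runLongOperation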


UPSERT vs Separate INSERT/UPDATEπŸ”—

UPSERT in MariaDBπŸ”—

MariaDB supports UPSERT operations through two mechanisms:

-- Method 1: INSERT ... ON DUPLICATE KEY UPDATE
INSERT INTO customers (id, name, email)
VALUES (?, ?, ?)
ON DUPLICATE KEY UPDATE
    name = VALUES(name),
    email = VALUES(email),
    updated_at = NOW();

-- Method 2: REPLACE INTO (not recommended: it deletes then re-inserts the row,
-- firing DELETE triggers and breaking foreign key references)
REPLACE INTO customers (id, name, email)
VALUES (?, ?, ?);

Recommendation: Keep INSERT and UPDATE SeparateπŸ”—

For most applications, maintaining separate INSERT and UPDATE operations is clearer:

Reasons to Keep Separate:

  1. Clearer business logic - you know exactly what happened
  2. Better audit trail - different timestamps for created_at vs updated_at
  3. Simpler debugging - easier to trace execution flow
  4. Better user feedback - "Record created" vs "Changes saved"
  5. More control - handle new record ID retrieval cleanly
  6. Different validations - can apply different rules for create vs update

Duplicate Detection StrategyπŸ”—

The Need for Duplicate DetectionπŸ”—

Even with unique database IDs, users may accidentally create near-duplicate records. For example, entering "John Smith" twice with slightly different contact information. Duplicate detection helps users avoid data quality issues.

Types of DuplicatesπŸ”—

Hard Duplicates (Database Constraint):
Fields with UNIQUE constraints where duplicates are impossible (e.g., email addresses, SKU codes). The database enforces uniqueness and will reject duplicate INSERTs with an error.

Soft Duplicates (Business Logic):
Fields where duplicates might be legitimate but should be investigated (e.g., customer names, phone numbers). The application checks for similarity and asks the user to confirm.

When to CheckπŸ”—

Option A: Real-time (while typing)
- Immediate feedback as user types
- Multiple database queries (use debouncing)
- Shows inline warning icon/text
- Non-blocking - user can continue

Option B: On save (validation step)
- Single query when user clicks Save
- Shows dialog if duplicates found
- User makes explicit decision
- May surprise user after filling entire form

Option C: Hybrid (recommended)
- Real-time: Show subtle warning while typing
- On save: Firm check with decision dialog
- Best user experience - early warning + confirmation
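
The real-time half of the hybrid option is normally debounced so the query only runs once typing pauses. A LiveCode sketch for the script of the name field; checkForDuplicates is a hypothetical handler that would run one of the SELECT queries shown in the next section and toggle an inline warning:

-- Debounced duplicate check: re-queue the check on every change and only
-- run it once the user has stopped typing for about 400 ms.
on textChanged
   repeat for each line tMsg in the pendingMessages
      if item 3 of tMsg is "checkForDuplicates" then
         cancel item 1 of tMsg   -- drop the previously queued check
      end if
   end repeat
   send "checkForDuplicates" to me in 400 milliseconds
end textChanged

command checkForDuplicates
   -- hypothetical: run the similarity SELECT and show/hide a warning icon
end checkForDuplicates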

Configurable Duplicate RulesπŸ”—

DUPLICATE_RULES = {
  "customers": {
    "enabled": true,
    "fields": [
      {
        "name": "name",
        "method": "case_insensitive",
        "weight": "high"
      },
      {
        "name": "email",
        "method": "case_insensitive",
        "weight": "high"
      },
      {
        "name": "phone",
        "method": "exact",
        "weight": "medium"
      }
    ],
    "logic": "OR",  // ANY field matches
    "threshold": 1  // Minimum matches to warn
  }
}
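
To keep the rules data-driven, the application can translate this configuration into a WHERE clause at run time. A LiveCode sketch, assuming the field rules above have been loaded into a numerically keyed array (for example pRules[1]["name"] and pRules[1]["method"], a hypothetical structure mirroring the JSON) and that OR logic applies:

-- Sketch: build the duplicate-detection WHERE clause from the rule array.
function buildDuplicateWhereClause pRules
   local tClauses, tField
   repeat with i = 1 to the number of elements of pRules
      put pRules[i]["name"] into tField
      if pRules[i]["method"] is "case_insensitive" then
         put "LOWER(" & tField & ") = LOWER(:" & i & ")" & return after tClauses
      else
         put tField & " = :" & i & return after tClauses
      end if
   end repeat
   replace return with " OR " in tClauses   -- "logic": "OR" means any match
   delete char -4 to -1 of tClauses         -- trim the trailing " OR "
   return tClauses
end buildDuplicateWhereClause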

Duplicate Detection Query ExamplesπŸ”—

-- Exact match (case-insensitive)
SELECT id, name, email, phone
FROM customers
WHERE LOWER(name) = LOWER(?)
  AND id != ?
LIMIT 5;

-- Fuzzy match (phonetic similarity)
SELECT id, name, email, phone
FROM customers
WHERE SOUNDEX(name) = SOUNDEX(?)
  AND id != ?
LIMIT 5;

-- Multiple field check
SELECT id, name, email, phone
FROM customers
WHERE (LOWER(name) = LOWER(?) OR LOWER(email) = LOWER(?))
  AND id != ?
LIMIT 5;

Duplicate Dialog User ExperienceπŸ”—

When potential duplicates are found, show a dialog that:

- Lists the matching existing records with their key fields (e.g., name, email, phone)
- Indicates which fields matched, so the user understands why the record was flagged
- Offers a clear choice: Cancel, View Existing, or Save Anyway (as sketched below)
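
A minimal version of that decision dialog using LiveCode's blocking answer command; pSummary is assumed to be a short, pre-built text describing the matches:

-- Sketch: three-way duplicate dialog built on the blocking answer command.
function showDuplicateDialog pSummary
   answer "Possible duplicates found:" & return & pSummary \
         with "Cancel" or "View Existing" or "Save Anyway"
   return it   -- "Cancel", "View Existing" or "Save Anyway"
end showDuplicateDialog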


Trigger vs Application Layer AnalysisπŸ”—

The Fundamental QuestionπŸ”—

Should duplicate detection be implemented in database triggers or in application code? This is a critical architectural decision with far-reaching implications.

Database Layer (Triggers)πŸ”—

PROS:
- βœ“ Enforced regardless of which application accesses the database
- βœ“ Cannot be bypassed by buggy or malicious code
- βœ“ Works for batch imports, direct SQL, multiple applications
- βœ“ Data integrity guaranteed at the source

CONS:
- βœ— No user interaction possible
- βœ— Binary decision only (allow or reject)
- βœ— Poor user experience (generic database errors)
- βœ— Cannot show similar records for comparison
- βœ— No "save anyway" option
- βœ— Difficult to implement fuzzy matching
- βœ— Harder to maintain and debug

BEST FOR:
- Hard constraints (email must be unique)
- Audit logging
- Cascade operations
- Computed fields

Application Layer (Pre-insert SELECT)πŸ”—

PROS:
- βœ“ Rich user interaction (dialogs, choices)
- βœ“ Can show similar records for comparison
- βœ“ User can decide: cancel, view, or save anyway
- βœ“ Flexible matching strategies (fuzzy, soundex, etc.)
- βœ“ Better UX (check before form submission)
- βœ“ Can show match scores and reasons
- βœ“ Easier to maintain and modify logic
- βœ“ Configurable per table/context

CONS:
- βœ— Must be implemented in each application
- βœ— Can be bypassed by direct database access
- βœ— Requires careful coding in all CRUD operations
- βœ— Race condition possible (two users simultaneously)

BEST FOR:
- Soft duplicate warnings
- Business logic rules
- User-assisted decisions
- Fuzzy/similarity matching

Race Condition ConsiderationπŸ”—

A potential issue with application-layer checking: two users might check for duplicates simultaneously, both find none, and both insert duplicate records.

Solutions:

  1. Accept the risk (pragmatic): In practice, it is extremely rare for two users to enter identical data at the same moment, and the application-layer check catches the vast majority of duplicates.

  2. Database UNIQUE constraint (strict): Add UNIQUE constraint for fields that truly must be unique. Second insert fails with clear error.

  3. Double-check before insert (defensive): Perform a final duplicate check immediately before the INSERT statement (see the sketch after this list).

  4. Transaction with lock (complex): Use SELECT FOR UPDATE within transaction. Impacts performance and increases deadlock risk.
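
Solutions 2 and 3 combine naturally: run one last duplicate SELECT just before the INSERT, and still treat a UNIQUE-constraint error from the INSERT itself as a duplicate rather than a generic failure. A sketch; gConnectionID and recordExists are assumptions, and the exact error text returned by the driver may differ:

-- Sketch: defensive double-check plus graceful handling of a UNIQUE violation.
command insertCustomer pName, pEmail
   global gConnectionID

   -- solution 3: final duplicate check immediately before the INSERT
   if recordExists(pEmail) then
      answer "A customer with this email already exists."
      exit insertCustomer
   end if

   local tSQL
   put "INSERT INTO customers (name, email) VALUES (:1, :2)" into tSQL
   revExecuteSQL gConnectionID, tSQL, "pName", "pEmail"

   -- solution 2: the UNIQUE constraint is the last line of defence
   if the result is not an integer then
      if the result contains "Duplicate entry" then
         answer "Another user just created a customer with this email."
      else
         answer error "Could not create record:" && the result
      end if
   end if
end insertCustomer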


Hybrid Approach: Three LayersπŸ”—

LAYER 1: Database - Hard Constraints Only

Use UNIQUE constraints for fields that absolutely must be unique:
- Email addresses (if business rule requires)
- SKU codes
- Account numbers
- License keys

CREATE TABLE customers (
  id BIGINT PRIMARY KEY,
  name VARCHAR(255) NOT NULL,
  email VARCHAR(255) UNIQUE,  -- Hard constraint
  phone VARCHAR(50),
  created_at TIMESTAMP DEFAULT NOW(),
  updated_at TIMESTAMP NULL
);

LAYER 2: Application - Soft Duplicate Detection

Use SELECT queries before INSERT/UPDATE for fields where duplicates need investigation:
- Names (might legitimately duplicate)
- Addresses
- Phone numbers (shared in families)
- Product descriptions

LAYER 3: Database - Audit & Cascade

Use AFTER INSERT/UPDATE triggers for:
- Audit logging (who, when, what)
- Creating related records automatically
- Updating summary/computed tables
- Sending notifications

Complete Workflow ArchitectureπŸ”—

User enters data in form
    ↓
User clicks SAVE button
    ↓
[APPLICATION LAYER]
β”œβ”€> Validate form fields
β”œβ”€> If validation fails: Show inline errors, STOP
└─> If validation passes: Continue
    ↓
[APPLICATION LAYER]
β”œβ”€> Execute SELECT query to check for duplicates
β”œβ”€> If duplicates found: Show dialog with options
β”‚   β”œβ”€> User clicks CANCEL: STOP
β”‚   β”œβ”€> User clicks VIEW EXISTING: Load that record, STOP
β”‚   └─> User clicks SAVE ANYWAY: Continue
└─> If no duplicates: Continue
    ↓
[DATABASE LAYER - BEFORE INSERT]
β”œβ”€> BEFORE INSERT trigger fires
└─> Generate ID from the sequence (NEXT VALUE FOR)
    ↓
[DATABASE LAYER - INSERT]
β”œβ”€> INSERT statement executes
└─> UNIQUE constraints checked (hard constraints only)
    β”œβ”€> Constraint violation: Error returned to application
    └─> No violation: Continue
    ↓
[DATABASE LAYER - AFTER INSERT]
β”œβ”€> AFTER INSERT trigger fires
└─> Log to audit table
    ↓
[APPLICATION LAYER]
β”œβ”€> Receive success confirmation
β”œβ”€> Update list view (add/update record)
β”œβ”€> Update detail view (show saved data)
└─> Show snackbar: "Record created" or "Changes saved"

Database Schema with Sequence-Based IDsπŸ”—

-- Sequence for generating IDs
CREATE SEQUENCE seq_customers;

-- Table definition
CREATE TABLE customers (
  id BIGINT PRIMARY KEY,
  name VARCHAR(255) NOT NULL,
  email VARCHAR(255) UNIQUE,  -- Hard constraint
  phone VARCHAR(50),
  address TEXT,
  created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
  updated_at TIMESTAMP NULL,
  created_by VARCHAR(100),
  updated_by VARCHAR(100)
);

-- BEFORE INSERT trigger for ID generation
DELIMITER //
CREATE TRIGGER customers_bi
BEFORE INSERT ON customers
FOR EACH ROW
BEGIN
  IF (NEW.id IS NULL OR NEW.id = 0) THEN
    SET NEW.id = NEXT VALUE FOR seq_customers;
  END IF;
  SET NEW.created_at = NOW();
END//
DELIMITER ;

-- AFTER INSERT trigger for audit logging
DELIMITER //
CREATE TRIGGER customers_ai
AFTER INSERT ON customers
FOR EACH ROW
BEGIN
  INSERT INTO audit_log (
    table_name, 
    record_id, 
    action, 
    user_id, 
    timestamp
  ) VALUES (
    'customers', 
    NEW.id, 
    'INSERT', 
    NEW.created_by, 
    NOW()
  );
END//
DELIMITER ;

Summary & Next StepsπŸ”—

Key Decisions MadeπŸ”—

1. Architecture:
- Database handles hard constraints (UNIQUE) and ID generation
- Application handles soft duplicate detection and user interaction
- Triggers used only for audit logging and cascade operations

2. Duplicate Detection:
- Implemented in application layer using SELECT queries
- Configurable per table with flexible matching strategies
- User-friendly dialogs with informed decision options

3. Workflow Pattern:
- Synchronous operations (appropriate for LiveCode)
- Separate INSERT and UPDATE commands (clearer logic)
- Material Design 3 feedback patterns (snackbars, dialogs)

4. List Synchronization:
- Update list after every operation
- Handle filter mismatches gracefully with user feedback
- Maintain sort order when inserting/updating records

Implementation ReadinessπŸ”—

The conceptual foundation is now solid. Before implementation, work through the areas listed under Next Steps When Ready below.

Benefits of This ApproachπŸ”—

Data Quality:
- Reduces accidental duplicate entries
- Maintains database integrity with hard constraints
- Provides audit trail for all changes

User Experience:
- Clear feedback for all operations
- Informed decisions when duplicates detected
- Graceful handling of filtered records

Maintainability:
- Clear separation of concerns
- Configurable duplicate rules without code changes
- Easy to debug and modify

Next Steps When ReadyπŸ”—

When moving to implementation, the following areas will need attention:

A. LiveCode-Specific Implementation:
- Database connection handling
- Error handling patterns
- UI component configuration
- Event handler structure

B. Configuration Management:
- Where to store duplicate detection rules
- How to modify rules without code changes
- Table-specific validation rules

C. Testing Strategy:
- Test duplicate detection with various similarity levels
- Test filter interactions with CRUD operations
- Test error handling and rollback scenarios
- Test user feedback in all scenarios

D. Edge Cases to Consider:
- Concurrent user modifications
- Network failures during save
- Very large result sets in lists
- Records with many matching duplicates

Closing ThoughtsπŸ”—

This document represents a thorough exploration of master-detail CRUD workflows with intelligent duplicate detection. The principles discussed are applicable across many database systems and programming environments, though the specifics are tailored for LiveCode with MariaDB.

The emphasis on user experience, data quality, and maintainable architecture creates a solid foundation for building robust data management applications. Take time to understand each concept before implementation, and proceed step by step as planned.