A Guide to Avoiding Disastrous Mistakes
Richard Carlton, Nov 30, 2023, FileMaker Training Videos
Menu
- Introduction: What Are Low-Yield Nukes in FileMaker?
- Deleting Records Directly: A Dangerous Path
2.1 Why Direct Deletion is a Problem
2.2 Best Practices for Safe Deletion
2.3 Creating Soft Deletes
2.4 Building a Garbage Table
- Cascading Deletes: The Silent Killer
3.1 How Cascading Deletes Work
3.2 Why Cascading Deletes Can Be Dangerous
3.3 Avoiding Cascading Delete Problems
- The Replace Field Contents Command: A Carpet Bomb
4.1 Understanding the Replace Command
4.2 When and Where to Use Replace Safely
4.3 Avoiding Unintended Replacements
- Truncating Tables: The Nuclear Option
5.1 Understanding Truncate Table
5.2 When Truncating Tables is Appropriate
5.3 Avoiding Disaster with Truncate Table
- The Importance of Backups: Your Last Line of Defense
6.1 Understanding Backup Strategies
6.2 Setting Up Automated Backups
6.3 Using Immutable Backups for Extra Protection
- Unsupervised Layout Edits
7.1 The Risk of Unsupervised Layout Changes
7.2 Recommended Prevention: Duplicate and Version Control
7.3 Steps to Ensure Safety
- Replacing Scripts Without Backups
8.1 The Dangers of Replacing Scripts
8.2 Best Practices for Script Replacements
- Ignoring User Roles & Privileges
9.1 The Risk of Unrestricted Access
9.2 Best Practices for User Access Control
- Lack of Error Handling in Scripts
10.1 The Importance of Error Handling
10.2 Best Practices for Adding Error Handling
- Crash Recovery: Not a Safe Option
11.1 Understanding Crash Recovery Risks
11.2 Best Practices for Handling Crashes
- Conclusion: Staying Safe in FileMaker Development
- Check List
- Video
1. Introduction: What Are Low-Yield Nukes in FileMaker?
In software development, there are actions and features that, if used incorrectly, can lead to catastrophic results. In FileMaker, these actions are what we call “low-yield nukes.” They are easy to trigger but can cause massive, often irreparable damage. From accidentally deleting critical records to cascading deletes wiping out entire datasets, it’s crucial to recognize and avoid these pitfalls.
This guide covers the most common “nukes” in FileMaker development and how to mitigate their risks. Whether you are a beginner or a seasoned developer, understanding these dangerous traps will help you build more secure, reliable solutions.
2. Deleting Records Directly: A Dangerous Path
2.1 Why Direct Deletion is a Problem
One of the most common yet dangerous actions in FileMaker is deleting records directly. Users often request delete buttons that allow them to completely remove records from the database. While this seems simple, it’s also risky because once records are deleted, they’re gone for good. Even worse, users can mistakenly delete critical data, leading to operational disruptions, lost revenue, or even legal complications if it involves sensitive information.
The immediate issue developers face is when a user claims, “I swear I added this record, but now it’s gone.” Now, you’re in a dilemma—there’s no clear audit trail, and the missing data might be irreplaceable.
2.2 Best Practices for Safe Deletion
To avoid the direct deletion trap, it’s essential to guide your users toward safer methods. Here are a few strategies to mitigate the risk of permanent data loss:
- Use permissions wisely: Limit who can delete records and require admin approval for such actions.
- Implement warning dialogs: Always prompt users with a confirmation message warning them that the action is permanent (see the sketch after this list).
- Create user roles with restricted privileges: Only allow trusted users to delete critical data, and track deletions using an audit log.
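To illustrate the warning-dialog strategy, here is a minimal delete-confirmation sketch in FileMaker script steps. The dialog wording and the assumption that the second button is “Cancel” are placeholders; wire them up to match your own dialog:

```
# Delete-confirmation sketch
Show Custom Dialog [ "Confirm Deletion" ; "This will permanently delete the current record. Continue?" ]
# Assumes Button 1 = "Delete" and Button 2 = "Cancel"
If [ Get ( LastMessageChoice ) = 2 ]
    Exit Script [ Text Result: "cancelled" ]
End If
Delete Record/Request [ With dialog: Off ]
```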
2.3 Creating Soft Deletes
Soft deletes offer a safer alternative to full deletions. Rather than erasing a record entirely, you mark it as “inactive” or “deleted” by updating a status field. The data remains in the system, but it’s hidden from users in their normal workflows. This method ensures that if something was deleted in error, you can always recover it later.
How to Create Soft Deletes:
- Add a status field to your records (e.g., a Status field with values “Active” or “Deleted”); flipping that field is sketched after this list.
- Update your scripts or layout filters to exclude records with the “Deleted” status from user-facing reports and searches.
- Create a separate layout or admin interface to allow administrators to review and restore deleted records when needed.
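A minimal soft-delete sketch, assuming a Status text field (and optional DeletedBy/DeletedAt audit fields) on a hypothetical Customers table:

```
# Soft delete: flag the record instead of removing it
Set Field [ Customers::Status ; "Deleted" ]
# Optional audit fields (assumed to exist)
Set Field [ Customers::DeletedBy ; Get ( AccountName ) ]
Set Field [ Customers::DeletedAt ; Get ( CurrentTimestamp ) ]
Commit Records/Requests [ With dialog: Off ]
```

Because nothing is actually removed, a restore script only needs to set Status back to “Active.”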
2.4 Building a Garbage Table
In situations where soft deletes aren’t an option, you might consider building a “Garbage Table” for records that are removed from the main system. Instead of permanently deleting records, they are moved to a secondary table where they are archived for a set period before being purged.
Steps to Build a Garbage Table:
- Create a new table (e.g., Deleted_Records) with fields that match your main table.
- When a user requests a deletion, use a script to copy the record to the Deleted_Records table before removing it from the main system (see the sketch after this list).
- Set up a scheduled script to periodically clear the Deleted_Records table after a defined period (e.g., 30 or 60 days).
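Here is one way the archive-and-purge pair might look. The Customers table, its ID and Name fields, and the DeletedAt field on Deleted_Records are placeholders for your own schema:

```
# Script 1: archive the current record, then delete it
Set Variable [ $id ; Value: Customers::ID ]
Set Variable [ $name ; Value: Customers::Name ]
Go to Layout [ "Deleted_Records" (Deleted_Records) ]
New Record/Request
Set Field [ Deleted_Records::ID ; $id ]
Set Field [ Deleted_Records::Name ; $name ]
Set Field [ Deleted_Records::DeletedAt ; Get ( CurrentTimestamp ) ]
Commit Records/Requests [ With dialog: Off ]
Go to Layout [ original layout ]
Delete Record/Request [ With dialog: Off ]

# Script 2: scheduled purge of archives older than 30 days
Set Error Capture [ On ]
Go to Layout [ "Deleted_Records" (Deleted_Records) ]
Enter Find Mode [ Pause: Off ]
Set Field [ Deleted_Records::DeletedAt ; "<" & Get ( CurrentDate ) - 30 ]
Perform Find []
If [ Get ( FoundCount ) > 0 ]
    Delete All Records [ With dialog: Off ]
End If
```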
3. Cascading Deletes: The Silent Killer
3.1 How Cascading Deletes Work
Cascading deletes are a feature in FileMaker that automatically removes related records in other tables when a parent record is deleted. While this might sound useful for maintaining referential integrity, it can be dangerous if you’re not fully aware of the relationships between your tables.
When cascading deletes are enabled, a single delete action on a parent record could result in the unintentional deletion of dozens—or even hundreds—of related records across different tables.
3.2 Why Cascading Deletes Can Be Dangerous
The danger with cascading deletes lies in context. FileMaker’s relationship graph evaluates all related tables when a record is deleted. Even if you’re on a layout that doesn’t directly deal with the related records, cascading deletes could still wipe them out.
For example, you might have a script that deletes a customer record, but unbeknownst to you, that customer is related to hundreds of order records. If cascading deletes are enabled, all those orders will disappear too, even if you didn’t intend to touch them.
3.3 Avoiding Cascading Delete Problems
Best Practices to Avoid Cascading Deletes:
- Disable Cascading Deletes in most cases. Instead of using this feature, write scripts that explicitly handle related record deletions (see the sketch after this list).
- Always test relationships before allowing any kind of delete action. Make sure you understand the full impact of what’s being deleted.
- Use auditing tools to log all delete actions, including related records, to ensure you can trace back what was deleted and why.
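Here is a sketch of the scripted alternative: delete the related child records explicitly, then the parent, so nothing disappears by side effect. The Customers and Orders tables and the CustomerID match field are assumptions:

```
# Explicit related-record deletion (instead of cascading deletes)
Set Error Capture [ On ]
Set Variable [ $customerID ; Value: Customers::ID ]
# Find and delete the customer's orders deliberately
Go to Layout [ "Orders" (Orders) ]
Enter Find Mode [ Pause: Off ]
Set Field [ Orders::CustomerID ; $customerID ]
Perform Find []
If [ Get ( FoundCount ) > 0 ]
    Delete All Records [ With dialog: Off ]
End If
# Return and delete the parent record itself
Go to Layout [ original layout ]
Delete Record/Request [ With dialog: Off ]
```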
4. The Replace Field Contents Command: A Carpet Bomb
4.1 Understanding the Replace Command
The Replace Field Contents command is one of the most dangerous tools in FileMaker, primarily because there is no undo for this action. This command allows you to replace the contents of a field for every record in your found set with a new value or calculated result. If misused, it can overwrite thousands of records in seconds.
4.2 When and Where to Use Replace Safely
To use the Replace command safely, follow these guidelines:
- Limit its use to small found sets: If you only need to update a few records, perform a Find first to limit the scope of the replacement.
- Create backups before large operations: If you need to perform a replace operation on a large number of records, create a backup of your database first.
- Always review the Replace dialog: FileMaker warns you about the number of records being modified, so carefully review this information before proceeding.
4.3 Avoiding Unintended Replacements
To avoid unintentionally replacing data:
- Use scripts instead of manual replace: For bulk updates, write a script that loops through each record, updating it one at a time (see the sketch after this list). This allows for greater control and error handling.
- Test your calculation first: Before using a calculated result in the Replace command, test the calculation on a few records to ensure it’s working as expected.
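A sketch of the scripted-loop approach, assuming a found set of hypothetical Invoices records and a Status field to update:

```
# Controlled bulk update: loop over the found set instead of Replace
Set Error Capture [ On ]
Go to Record/Request/Page [ First ]
Loop
    Set Field [ Invoices::Status ; "Archived" ]   # placeholder field and value
    If [ Get ( LastError ) ≠ 0 ]
        # Surface the failure instead of silently overwriting
        Show Custom Dialog [ "Update failed" ; "Invoice ID: " & Invoices::ID ]
        Exit Loop If [ True ]
    End If
    Go to Record/Request/Page [ Next ; Exit after last: On ]
End Loop
Commit Records/Requests [ With dialog: Off ]
```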
5. Truncating Tables: The Nuclear Option
5.1 Understanding Truncate Table
The Truncate Table script step is the nuclear option for clearing out an entire table’s data. This operation bypasses the typical record deletion processes, making it much faster than manually deleting records one by one. However, it comes with a huge risk: once you truncate a table, all records are permanently deleted, and there is no undo.
5.2 When Truncating Tables is Appropriate
Truncate Table can be useful in certain scenarios, such as:
- Clearing audit logs: If you have an audit log table that grows over time, truncating the table periodically can keep the log manageable.
- Deleting test data: During development, you might generate a large amount of test data. Truncating the table can quickly remove all test records without affecting the structure of the table.
5.3 Avoiding Disaster with Truncate Table
Best Practices for Safe Truncation:
- Use truncate sparingly, and only when you’re sure the data is no longer needed; a confirmation-guarded script (sketched after this list) helps prevent accidental use.
- Always back up the table before truncating it, especially in a live production environment.
- Never truncate critical tables with active relationships to other data, as this could lead to significant data loss.
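A minimal confirmation-guarded truncate sketch, assuming a non-critical AuditLog table (the exact option labels on the Truncate Table step vary slightly by FileMaker version):

```
# Confirmation-guarded truncate
Show Custom Dialog [ "Truncate Audit Log" ; "This permanently deletes ALL AuditLog records. Continue?" ]
If [ Get ( LastMessageChoice ) = 2 ]   # assumes Button 2 = "Cancel"
    Exit Script [ Text Result: "cancelled" ]
End If
Truncate Table [ With dialog: Off ; Table: "AuditLog" ]
```

Restricting who can run this script through privilege sets (see section 9) adds a second safeguard.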
6. The Importance of Backups: Your Last Line of Defense
6.1 Understanding Backup Strategies
Backups are the last line of defense when things go wrong. In FileMaker, there are two main types of backups:
- Manual backups: These are backups created by the developer or administrator at key points (e.g., before a major update).
- Scheduled backups: Automated backups that run on a regular basis (e.g., hourly, daily, weekly).
6.2 Setting Up Automated Backups
Automating backups is essential for any FileMaker solution running in production. Here’s how to set up automated backups:
- Go to FileMaker Server Admin Console and configure a backup schedule.
- Set up hourly or daily backups, depending on how critical your data is.
- Store backups in multiple locations: Save backups locally on the server as well as in cloud storage (such as Amazon S3) to protect against local hardware failures.
6.3 Using Immutable Backups for Extra Protection
Immutable backups are a relatively recent addition to backup strategy: backups that cannot be altered or deleted for a specified period, providing an extra layer of protection against ransomware attacks. For example, Amazon S3’s Object Lock feature can lock backups for a set retention period, such as 30 days, ensuring that no one—not even you—can modify or delete them during that time.
7. Unsupervised Layout Edits
7.1 The Risk of Unsupervised Layout Changes
Making unsupervised changes to layouts can be extremely risky, especially when working in live production environments. A single change can break a carefully crafted user interface, unintentionally remove critical elements, or disrupt user workflows.
7.2 Recommended Prevention: Duplicate and Version Control
To mitigate the risks associated with layout changes, always duplicate the layout before making significant edits and maintain a version control system to track changes.
7.3 Steps to Ensure Safety
- Duplicate Layouts: Before editing, make a duplicate of the layout to ensure that you can revert if necessary.
- Naming Convention: Implement a clear naming system for different versions of layouts, such as “LayoutName_v1” or “LayoutName_backup.”
- Backups Before Live Changes: Always back up the solution before editing layouts in a live system.
8. Replacing Scripts Without Backups
8.1 The Dangers of Replacing Scripts
Replacing scripts without backing them up can result in the loss of complex logic and custom workflows. A single script edit could break multiple functions if not handled carefully.
8.2 Best Practices for Script Replacements
- Duplicate Scripts Before Editing: Always create a duplicate of the script before making changes. This allows you to roll back to the original if something goes wrong.
- Test in Development Environment: Run the new or modified script in a test environment before deploying it to the live system.
- Version Control: Keep track of different script versions to avoid confusion or accidental overwrites.
9. Ignoring User Roles & Privileges
9.1 The Risk of Unrestricted Access
Giving users unrestricted access to the database can lead to accidental data deletions, modifications, or unauthorized access to sensitive information. Without clearly defined roles and privileges, a user could unintentionally—or intentionally—cause significant harm.
9.2 Best Practices for User Access Control
- Create Specific User Roles: Define clear roles with appropriate access levels. Limit who can delete or modify critical records.
- Restrict Privileges: Set custom privileges for users to limit access to sensitive data and critical functions (see the calculation sketch after this list).
- Audit Logs: Implement audit logs to track who accessed or changed critical records, providing accountability.
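As one example of a custom privilege, FileMaker’s record-level access settings accept a Boolean calculation. A sketch for a “limited” delete privilege, assuming each record stores its creator’s account name in a CreatedBy field and that a “Manager” privilege set exists:

```
# Record-level delete privilege (Boolean calculation)
# True (deletable) only for the record's creator or a Manager
Get ( AccountName ) = CreatedBy
or Get ( AccountPrivilegeSetName ) = "Manager"
```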
10. Lack of Error Handling in Scripts
10.1 The Importance of Error Handling
When scripts fail silently without proper error handling, they can lead to incomplete actions, broken workflows, and data integrity issues. It’s essential to catch and address errors in scripts to prevent these problems.
10.2 Best Practices for Adding Error Handling
- Use Get(LastError): Add checks throughout your scripts to capture any errors using the Get(LastError) function (see the sketch after this list).
- Log Errors: Store errors in a dedicated error log table to review and resolve issues later.
- Provide User Feedback: Display custom dialogs or error messages to inform users when something goes wrong, preventing confusion and allowing them to take corrective action.
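Putting these pieces together, here is a hedged error-handling sketch; the ErrorLog table and its fields are placeholders:

```
# Error-handling pattern around a risky step
Set Error Capture [ On ]
Perform Find [ Restore ]
Set Variable [ $err ; Value: Get ( LastError ) ]
If [ $err ≠ 0 and $err ≠ 401 ]   # 401 = no records match the request
    # Log the error for later review
    Go to Layout [ "ErrorLog" (ErrorLog) ]
    New Record/Request
    Set Field [ ErrorLog::ErrorCode ; $err ]
    Set Field [ ErrorLog::ScriptName ; Get ( ScriptName ) ]
    Set Field [ ErrorLog::Timestamp ; Get ( CurrentTimestamp ) ]
    Commit Records/Requests [ With dialog: Off ]
    # Tell the user what happened rather than failing silently
    Show Custom Dialog [ "Error" ; "The operation failed (error " & $err & ")." ]
    Exit Script [ Text Result: $err ]
End If
```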
11. Crash Recovery: Not a Safe Option
11.1 Understanding Crash Recovery Risks
When a FileMaker Server crashes, it may attempt to reopen damaged files automatically. This can be a dangerous practice, as the file may have become corrupted during the crash. If you allow users to continue working with the file, the corruption may spread, making recovery more difficult or impossible.
11.2 Best Practices for Handling Crashes
- Disable automatic reopening of crashed files: This forces administrators to manually review and restore files after a crash.
- Always restore from a backup after a crash, rather than using the potentially corrupted file.
- Test your backups regularly to ensure they are reliable and can be restored quickly in the event of a disaster.
12. Conclusion: Staying Safe in FileMaker Development
While the FileMaker platform offers powerful tools, these tools need to be used carefully. Low-yield nukes such as direct deletions, cascading deletes, the replace command, table truncation, and unsupervised edits can wreak havoc on your system if mishandled.
By following the best practices outlined in this guide—such as using soft deletes, disabling cascading deletes, automating backups, controlling access through roles and privileges, and implementing error handling—you can protect your data and ensure the long-term success of your FileMaker solutions.
Remember, prevention is always better than recovery. Stay vigilant, build safeguards, and avoid these hidden traps to keep your database—and your reputation—safe.
Check List
Action | Risk | Recommended Prevention | Steps to Mitigate Risk |
---|---|---|---|
1. Deleting Records Directly | Permanent loss of data, user errors, missing critical records | Use soft deletes or archive records instead of direct deletion | – Create a “Status” field to mark records as “Deleted” instead of deleting them. – Filter views to exclude “Deleted” records while keeping them in the system. – Create a separate archive table to store removed records. |
2. Cascading Deletes | Unintended deletion of related records across multiple tables | Disable cascading deletes and manage related deletions through scripts | – Disable cascading deletes in relationships. – Write scripts to handle related record deletions explicitly. – Regularly audit relationship settings to ensure no unwanted cascading deletions are enabled. |
3. Replace Field Contents Command | Unrecoverable mass update of records, no undo function | Use scripts for controlled replacements, test on small datasets first | – Use the Replace command sparingly and only on small found sets. – Always perform a backup before running large Replace operations. – Review the Replace dialog carefully, noting how many records will be affected. |
4. Truncate Table | Permanent and unrecoverable deletion of all records in a table | Only use when data is no longer needed, create backups beforehand | – Use Truncate Table for non-critical data (e.g., clearing audit logs or test data). – Backup the table before truncating it. – Implement a separate script with a confirmation step to avoid accidental use. |
5. No Backups | Total data loss, inability to recover after server crashes or user mistakes | Set up automatic backups and use offsite/immutable backups for extra security | – Set up hourly, daily, and weekly backup schedules in FileMaker Server. – Store backups both locally and in the cloud (e.g., Amazon S3). – Use immutable backups to protect against ransomware or accidental deletion. |
6. Unsupervised Layout Edits | Broken user interface, unintentional removal of critical layout elements | Duplicate layouts before editing, use version control | – Duplicate layouts before making significant changes. – Implement a naming convention for layout versions (e.g., “LayoutName_v1”, “LayoutName_v2”). – Keep backups of layouts and scripts before editing live systems. |
7. Replacing Scripts Without Backups | Broken workflows, loss of custom logic, no way to undo changes | Always duplicate scripts before making edits | – Before modifying a script, duplicate it and work on the copy. – If the new script works as expected, rename it and replace the original. – Test modified scripts in a development environment before applying them to live systems. |
8. Ignoring User Roles & Privileges | Unintended data access or deletion by unauthorized users | Implement strict user roles, limit permissions, and use audit logs | – Create specific user roles with limited privileges. – Limit who can delete records or modify critical fields. – Implement audit logs to track changes made by users, especially those with higher privileges. |
9. Lack of Error Handling in Scripts | Scripts failing silently, loss of data integrity | Add error handling and logging to all critical scripts | – Use the Get(LastError) function in scripts to catch and handle errors. – Implement custom error dialogs or notifications for users when issues occur. – Log critical script errors to a dedicated log table for later review and debugging. |
10. Crash Recovery | Corrupted data after server crashes, spreading of corruption | Disable automatic reopening of crashed files, restore from backups after crashes | – Disable “Automatically Open Databases After Crash” in FileMaker Server settings. – Always restore from the last known good backup after a crash. – Regularly test backups to ensure they are reliable and can be restored quickly. |