Alec Gregory, May 25, 2023, Beezwax blog
Source: https://blog.beezwax.net/lets-make-an-audit-log-with-claris-filemaker/

Claris FileMaker 2023 puts powerful, native audit logging within reach of developers, with a greatly reduced code footprint compared to existing approaches. In this blog post we introduce the OnWindowTransaction script trigger, leveraging it along with native FileMaker script transactions to create an audit log that is efficient, scalable, flexible, performant, and convenient to implement.
Audit Log – What Data Changed, When and by Whom?
Broadly defined as a record of activity in a system over time, an audit log is a common requirement for solutions handling data. While the use cases for audit logs are actually somewhat diverse – ranging from data integrity to analytics to compliance – an audit log starts with some method to track what data has changed, when it was changed, and by whom.
Until now, Claris FileMaker has not had robust, native features making it easy to build audit logging. Developers have needed to choose from a wide-range of options including building extra schema, complex scripting, various products or plug-ins, and miscellaneous techniques.
Claris FileMaker 2023 finally makes things easier. In this post, we’ll show what’s required to create a simple, powerful audit log in Claris FileMaker with OnWindowTransaction. There’s a lot more we won’t cover (yet), including UI for leveraging audit log data, and measuring and tuning the log. But stay tuned for future blog posts!
Hello OnWindowTransaction
OnWindowTransaction is a file-level script trigger added in Claris FileMaker 2023. In many ways it operates like the other file-level script triggers (OnFirstWindowOpen, OnLastWindowClose, OnWindowOpen, OnWindowClose), but it also has special behavior, which we highlight below.
Key features and behavior of the OnWindowTransaction script trigger
There are several things to know about the OnWindowTransaction script trigger:
- OnWindowTransaction fires after every commit that takes place in any window. It’s important to note that the word Transaction in OnWindowTransaction is broadly defined: it covers every record change, not just those that take place in a script transaction block.
- OnWindowTransaction has access to a JSON object of metadata about the records in the triggering transaction, including the affected Record ID and the type of action. Actions logged are New, Modified, and Deleted. This object is accessed via the Get ( ScriptParameter ) function and enables metadata audit logging.
- OnWindowTransaction has read access to the contents of a single field in every table. That field can technically be of any type (Text, Number, Date, Time, Timestamp, Container), but the field contents will be returned as either text or JSON (if the field contains a valid JSON object or array). It can also be of any class (Normal, Calculated, Summary). The field name is OnWindowTransaction by default, but this can be changed at the file level. Access to this field greatly simplifies the task of logging data changes.
- OnWindowTransaction has the Continue behavior (see Claris documentation). That is, it will run after a currently running script rather than interrupting it. If a script includes multiple transactions, then multiple calls to the OnWindowTransaction script will be queued.
- OnWindowTransaction is not currently supported for OData or Data API endpoints that modify record data, but it is supported in scripts run via OData or the Data API. It is not supported for changes made via xDBC.
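To get a feel for the trigger’s payload while exploring, a throwaway handler script can simply display it (a sketch for experimentation, not part of the audit log we build below):
Set Variable [ $payload ; Value: JSONFormatElements ( Get ( ScriptParameter ) ) ]
Show Custom Dialog [ "OnWindowTransaction payload" ; $payload ]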
Let’s audit
We will be leveraging the OnWindowTransaction script trigger to build an audit log of changes to data in a FileMaker solution. To skip straight to trying out the audit log, download the example file and edit some data. To learn how to integrate audit logging into an existing solution, click the ℹ︎ icon on any layout. For a detailed description of the auditing method, read on.
Audit process
The audit process consists of the following steps:
1. A transaction completes
2. FileMaker collects audit data for each record in the transaction, including data in the OnWindowTransaction field if it exists in the table
3. The OnWindowTransaction script trigger fires. Audit data is available via Get ( ScriptParameter )
4. The triggered script augments, processes, and stores audit data in the AuditLog table and, if necessary, schema data in the SchemaLog table
Storing audit data
In step 4 of the audit process, our implementation will store Audit and Schema data in tables within our solution file. This makes our log convenient and simple. Another approach is to store the Audit and Schema data externally. There are a wide variety of suitable stores, including a different FileMaker file, a different database (😱) (such as Oracle, MySQL, PostgreSQL, MongoDB, or MSSQL), a cloud service (such as AWS CloudWatch or S3), or a structured data file (such as a CSV file).
Audit requirements
We want our audit log to:
- Log changes to records in specific tables in our solution. This includes:
  - Record creation
  - Record modification
  - Record deletion
- Logs must include metadata and data to allow us to see what data was changed
And as we suggested in the introduction, we want our audit log to be:
- Convenient to implement in a new or existing solution
- Efficient in operation and storage: we don’t want to create redundant logs or generate unnecessary movements of data
- Scalable in implementation and operation
- Flexible in implementation to support different logging strategies
- Performant in operation so it doesn’t have too high an impact on overall system performance
Audit implementation
Our fully implemented audit log consists of:
- Two tables: One table to store changes to data and one table to store changes to schema
- One field in every table we want to audit
- One script to handle the OnWindowTransaction script trigger and to create log records
Two tables: AuditLog and SchemaLog
Our audit log uses two tables:
- The AuditLog table tracks record data changes.
- The SchemaLog table tracks schema changes.
Tracking data changes is uncontroversial as this is the core purpose of an audit log. The reason we track schema changes is so we can properly map changes logged in the past to the current (or indeed any) version of our schema. This accommodation for “data migration” is a bit more complex than what we’re covering here, so we plan to describe it in more detail in a future post.
AuditLog table
The AuditLog table model is informed by the structure of the data we get from the OnWindowTransaction script trigger. It looks like this:
{
"<FileName>" :
{
"<BaseTableName>" :
// Array of changes
[
// One array for each record changed in the table
[
"<Action ? New|Modified|Deleted>",
<RecordID>,
// This is the developer defined OnWindowTransaction field
<BaseTableName::OnWindowTransaction>
]
]
}
}
Let’s describe the structure:
- We can see that it’s a JSON object, organized at the top level by file name.
- The next level is organized by table name.
- Within each table, changes are represented in an array. Each change is itself an array containing the following information:
  - Action (New, Modified, Deleted)
  - RecordID
  - Contents of the OnWindowTransaction field
Because the smallest unit in the structure is a change to a record, we will use that same level for our AuditLog table: each record in our AuditLog table represents a change to a record in our database. Let’s first identify the core fields for logging metadata and add them to our schema:
- Action
- BaseTable
- File
- RecordId
Additional basic audit logging functions are to log who made a change and when, so let’s add those fields next. We are also going to add BaseTableId, which will help us track and map between different versions of our schema:
- Account
- ActionTimestamp
- BaseTableId
To make our log more convenient to use, it is helpful to include the solution-specific primary key of the record being logged, so we’ll add a field for that:
- RecordPrimaryKey
We also want to be able to measure the accuracy and completeness of our log. For that, the record modification count is essential. We will add it as another “metadata” field in our table:
- RecordModificationCount
Finally, we want to log the record data related to the change. Let’s create one last field to hold this data:
- RecordData
That makes our full schema:
- Account
- Action
- ActionTimestamp
- BaseTableId
- BaseTable
- File
- RecordData
- RecordId
- RecordModificationCount
- RecordPrimaryKey
SchemaLog table
The SchemaLog table will be used to track changes to the schema of the tables we are logging. Here’s its schema:
- BaseTableId
- BaseTableModCount
- BaseTableName
- File
- Schema
- SchemaTimestamp
As outlined in the introduction to this section, our goal is to track changes over time and over different schemas. To do this we need to keep a log of each version of a table’s schema that has been active for data changes. We can use the File, BaseTableId (and in some cases BaseTableName), and SchemaTimestamp fields to determine which version of the schema was active for any given change in the AuditLog table.
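For example, a query along these lines could fetch the schema version in force at the time of a given AuditLog entry (a sketch, assuming the field names above; field names that are FileMaker SQL reserved words are quoted):
ExecuteSQL (
"SELECT \"Schema\"
FROM SchemaLog
WHERE \"File\" = ?
AND BaseTableId = ?
AND SchemaTimestamp <= ?
ORDER BY SchemaTimestamp DESC
FETCH FIRST 1 ROWS ONLY";
""; // field separator (default)
""; // row separator (default)
AuditLog::File;
AuditLog::BaseTableId;
AuditLog::ActionTimestamp
)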
Two tables: Wrapping up
We now have in place the two tables that make up our audit log. Our next task will be to ensure we can accurately capture data to be added to the AuditLog table.
One field: OnWindowTransaction
To meet the requirements modeled in our AuditLog schema, we have seen that we need to augment the default data that the OnWindowTransaction script trigger provides us. Some of that augmented data needs to be defined in the schema of the tables we are auditing, namely the RecordPrimaryKey, RecordModificationCount, and RecordData fields. Content for these fields is collected in an unstored calculation field named OnWindowTransaction (see Key features and behavior of the OnWindowTransaction script trigger) in each table that needs data auditing.
Logging data
Determining which fields to audit (and when)
Our requirements dictate that our audit log should be efficient. We can interpret this to mean that we should log full record data only when records are created or deleted, and log only data that has changed when records are modified. Additionally, we should log only fields that hold what is generally understood to be actual “data”. Fortunately, FileMaker provides us with the tools to do both of these things:
Logging only data that has changed
Our tool for logging only data that has changed during a record edit is the Get ( ModifiedFields ) function. This returns a list of the fields that have been modified during the current commit.
Defining and identifying true “data fields”
Deciding what constitutes a true “data field” is somewhat subjective, and identifying these fields is a little complex. A good working definition is that all fields are true data fields apart from unstored calculations, summary fields, and global fields. A stricter definition might also exclude all calculation fields. We will apply the former, less strict definition.
Identifying true data fields while excluding unstored calculation fields is a two-part process. We first use ExecuteSQL to query the FileMaker_BaseTableFields metadata table for fields that don’t have the class Summary and don’t have the type global. See below for an example query:
// ExecuteSQL ( sqlQuery ; fieldSeparator ; rowSeparator { ; arguments... } )
ExecuteSQL (
"SELECT
FieldName
FROM
FileMaker_BaseTableFields
WHERE
BaseTableName = ?
AND
FieldClass <> ?
AND
FieldType NOT LIKE ?";
""; // field separator (use default)
""; // row separator (use default)
"<baseTableName>"; // bound to BaseTableName = ?
"Summary"; // bound to FieldClass <> ?
"global%" // bound to FieldType NOT LIKE ?
)
We then need to filter out unstored calcs by looping through the resulting list and using the FieldType function to remove fields with the type UnstoredCalc. See below for a While function which does this:
While (
[
// ~fullFieldList holds the field list returned by the ExecuteSQL query above
~i = 1;
~finalFieldList = ""
];
not IsEmpty ( GetValue ( ~fullFieldList; ~i ) );
[
~field = GetValue ( ~fullFieldList; ~i );
// FieldType returns a string whose first word is the field's class;
// keep the field only if that class is not UnstoredCalc
~finalFieldList =
List (
~finalFieldList;
If (
LeftWords ( FieldType ( ""; ~baseTableName & "::" & ~field ); 1 ) ≠ "UnstoredCalc";
~field
)
);
~i = ~i + 1
];
~finalFieldList
)
Local variable cache
Querying the FileMaker_BaseTableFields metadata table and processing the output through the FieldType function is fast, but we don’t want to do it more than once per table in a transaction. We can keep this work to a minimum by storing the result in a local variable named $fieldInfo. We use a unique repetition number derived from the base table ID to avoid collisions in cases where we are editing data in multiple base tables in the same transaction. This variable will persist at least until the end of the transaction where it was first defined. Using an If statement, we can ensure that if the $fieldInfo for a table has already been set, it won’t be calculated again during the transaction. See the comments in the OnWindowTransaction field definition for more details. A minimal sketch of the caching pattern follows.
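Here, ~baseTableId stands in for however the repetition number is derived, and ~computedFieldList for the ExecuteSQL-plus-While result above (a sketch, not the full field definition):
Let (
[
// use a repetition derived from the base table ID to avoid collisions
~rep = ~baseTableId;
// only compute the field list if it hasn't been cached in this transaction
$fieldInfo[~rep] =
If (
IsEmpty ( $fieldInfo[~rep] );
~computedFieldList;
$fieldInfo[~rep]
)
];
$fieldInfo[~rep]
)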
Adding fine-grained control
For more fine-grained control of the field list for a table, we can define more “specific” rules on a table-by-table basis. This level of control makes our logging approach flexible. Adding a more specific field list to a table means we have to modify the OnWindowTransaction field for that table only, which makes our implementation scalable.
Tips on fine-grained control
Along with business requirements, performance considerations should inform the decision to use fine-grained control. Fine-grained control should be considered when:
- A table has many fields or field repetitions. Consider fine-grained control when the broad rules are returning more than 200 fields (including repetitions).
- A table has container fields that hold large files, especially if these files are regularly modified. Auditing large files can take a long time. If a field typically holds large files (over 50MB), consider removing it from the field list and logging a calculated field based on an MD5 hash of the container field instead, as sketched below.
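Such a hash field could be as simple as this (a sketch; MyTable::LargeContainer is a hypothetical field, and GetContainerAttribute’s MD5 attribute provides the hash):
// Calculated text field logging a fingerprint of the container instead of its contents
GetContainerAttribute ( MyTable::LargeContainer ; "MD5" )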
Determining the field list at commit time
As we outlined above, we want our log to be efficient, which means we log full record data only when records are created or deleted, and when records are modified we log only data that has changed. To determine when a record has been modified, we can use the Get ( RecordOpenState ) function. When this function returns 2, it means the record is being modified. In this case we can use the Get ( ModifiedFields ) function to get a list of the fields that have been modified and remove any that aren’t in our field list. Example code to do this is below:
If (
// If this is a record modification only include modified fields
Get ( RecordOpenState ) = 2;
FilterValues (
Get ( ModifiedFields );
~fieldListAll
);
// Otherwise include all fields
~fieldListAll
)
Formatting record data
Because we are storing one change per record and have a single RecordData field in our AuditLog table, we will need a structure to store our data. It makes sense to use JSON as it’s well supported within FileMaker and in other databases (e.g. Oracle, MySQL, PostgreSQL) and languages (e.g. JavaScript, Ruby, Python) that might consume our log. Therefore, our RecordData field will be a text field containing a JSON object. Each field of the record data will be stored in a property of the JSON object. The structure will look like this:
{
"FieldName1": <fieldValue1>,
"FieldName2": <fieldValue2>
}
A complication of storing data in JSON format is that we need to make allowances for the mismatch between FileMaker and JSON data types. The map from FileMaker data type to JSON data type is below:
- Number fields are stored as JSON strings to avoid loss of precision when storing very large numbers
- Date fields are stored as JSON numbers: the number of days since 1/1/0001
- Time fields are stored as JSON numbers: the number of seconds since midnight
- Timestamp fields are stored as JSON numbers: the number of seconds since 1/1/0001
- Container fields are stored as JSON arrays: the first array item is a Base64 string of the container data, the second array item is a string of the file name.
- Text fields are stored as JSON strings
- Empty fields of any type are stored as JSON null.
Here’s an example object including all data types:
{
"CONTAINER_FIELD" : ["base64string", "filename.ext"],
"DATE_FIELD" : 495493,
"NUMBER_FIELD" : "7057542",
"TEXT_FIELD" : "DCE5C0E1-3463-4FDB-B286-F4AA6B82F521",
"TIMESTAMP_FIELD" : 63817078563,
"TIME_FIELD" : 51363,
"EMPTY_FIELD" : null
}
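To make the mapping concrete, here is how such an object might be assembled in a FileMaker calculation (a sketch with hypothetical field names; empty-field null handling is omitted for brevity):
JSONSetElement ( "{}" ;
[ "TEXT_FIELD" ; MyTable::TextField ; JSONString ] ;
// numbers as strings to avoid precision loss
[ "NUMBER_FIELD" ; GetAsText ( MyTable::NumberField ) ; JSONString ] ;
// GetAsNumber of a date yields days since 1/1/0001
[ "DATE_FIELD" ; GetAsNumber ( MyTable::DateField ) ; JSONNumber ] ;
// GetAsNumber of a time yields seconds since midnight
[ "TIME_FIELD" ; GetAsNumber ( MyTable::TimeField ) ; JSONNumber ] ;
// GetAsNumber of a timestamp yields seconds since 1/1/0001
[ "TIMESTAMP_FIELD" ; GetAsNumber ( MyTable::TimestampField ) ; JSONNumber ] ;
// containers as [ base64 data, file name ]
[ "CONTAINER_FIELD" ;
JSONSetElement ( "[]" ;
[ 0 ; Base64Encode ( MyTable::ContainerField ) ; JSONString ] ;
[ 1 ; GetContainerAttribute ( MyTable::ContainerField ; "filename" ) ; JSONString ]
) ;
JSONArray
]
)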
Logging metadata
Our two record-level metadata fields are RecordModificationCount and RecordPrimaryKey. RecordModificationCount can easily be accessed via the Get ( RecordModificationCount ) function. RecordPrimaryKey can be a direct reference to the record primary key field.
We also define our schema log data in the OnWindowTransaction calculation. While this leads to some duplication, it is necessary to handle multi-file solutions.
The final output
Our final output is a JSON object of data and metadata structured as follows:
{
"metadata": { ... },
"data": { ... }
}
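Filled in, a single record’s output might look something like this (property names and values are illustrative, not the exact output of the example file):
{
"metadata" : {
"recordModificationCount" : 12,
"recordPrimaryKey" : "DCE5C0E1-3463-4FDB-B286-F4AA6B82F521",
"schema" : { "baseTableId" : 130, "baseTableModCount" : 4 }
},
"data" : {
"NAME" : "Acme Corp",
"CREATED_DATE" : 495493
}
}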
The OnWindowTransaction field definition
Implementing the principles and requirements outlined above in the OnWindowTransaction field definition leads to a lengthy but highly efficient calculation. It leverages the unique features of the calculation dialog and the OnWindowTransaction script trigger to output precisely what we need to implement our audit log.
One field: wrapping up
The OnWindowTransaction field is one powerful field. Its definition determines what data we log for each type of change. It handles data formatting and schema information, and it ensures that data is provided as efficiently as possible by using a local variable cache.
One script: Audit
Our Audit script receives data from the OnWindowTransaction script trigger and writes it to our audit log. To determine how we do this, let’s again examine the format of the auditing data:
{
"<FileName>" :
{
"<BaseTableName>" :
// Array of changes
[
// One array for each record changed in the base table
[
"<Action ? New|Modified|Deleted>",
<RecordID>,
// This is the field we defined in the OnWindowTransaction field
<BaseTableName::OnWindowTransaction>
]
]
}
}
Basic script structure
The nested nature of the data means we will need to loop through each layer of JSON until we get to the record change level. The basic loop structure is:
Loop through Files
Loop through Base Tables in File
Loop through Changes in Base Table
Log Change in AuditLog table
End Change loop
End Base Table Loop
End File Loop
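As a sketch of how the outer layer of this traversal might be scripted (variable names are illustrative; the inner table and change loops follow the same pattern):
Set Variable [ $payload ; Value: Get ( ScriptParameter ) ]
Set Variable [ $files ; Value: JSONListKeys ( $payload ; "" ) ]
Set Variable [ $i ; Value: 0 ]
Loop
	Set Variable [ $i ; Value: $i + 1 ]
	Exit Loop If [ $i > ValueCount ( $files ) ]
	Set Variable [ $file ; Value: GetValue ( $files ; $i ) ]
	# Loop through Base Tables: JSONListKeys ( $payload ; $file )
End Loop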
We elaborate on this structure by applying various logging “rules” using If script blocks. Some rules we can apply are described in the subsequent sections.
Omitting files
In multi-file solutions, we may want to exclude some files from logging altogether. We can maintain a list of files to exclude in our script (or elsewhere) and skip auditing for those files:
Loop through Files
If File should be logged
Loop through Base Tables in File
Loop through Changes in Base Table
Log Change in AuditLog table
End Change Loop
End Base Table Loop
End If File should be logged
End File Loop
Omitting tables
We also need to ensure we don’t log changes to data in our AuditLog or SchemaLog base tables, as this would put us in an infinite loop. Other tables may also need to be excluded:
Loop through Files
If File should be logged
Loop through Base Tables in File
If Base Table should be logged
Loop through Changes in Base Table
Log Change in AuditLog table
End Change Loop
End If Base Table should be logged
End Base Table Loop
End If File should be logged
End File Loop
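The “Base Table should be logged” test can be a simple membership check (a sketch with illustrative variable names; in bzAuditLog the exclusion list lives in an If step in the Audit script):
If [ IsEmpty ( FilterValues ( $baseTable ; List ( "AuditLog" ; "SchemaLog" ) ) ) ]
	# $baseTable is not in the exclusion list, so log its changes
End If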
Schema logging
In addition to logging changes to data, we are also logging base table schema changes:
Loop through Files
If File should be logged
Loop through Base Tables in file
If Base Table Schema has changed
Log Base Table Schema in SchemaLog table
End If Base Table Schema has changed
Loop through Changes in table
Log Change in AuditLog table
End Change Loop
End Base Table Loop
End If File should be logged
End File Loop
Omitting Find Mode “transactions”
Finally, the OnWindowTransaction script trigger behaves somewhat unintuitively when the system is in Find mode: entering Find mode will always fire the trigger, and its output is confusing. To omit data triggered in Find mode, we will ignore changes that have a blank value for the OnWindowTransaction field:
Loop through Files
If File should be logged
Loop through Base Tables in File
If Base Table should be logged
If Base Table Schema has changed
Log Base Table Schema in SchemaLog table
End If Base Table Schema has changed
Loop through Changes in Table
If Change should be logged
Log Change in AuditLog table
End If Change should be logged
End Change Loop
End If Base Table should be logged
End Base Table Loop
End If File should be logged
End File Loop
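The “Change should be logged” test can simply check the third element of the change array, which holds the OnWindowTransaction field contents (a sketch, assuming each change array has been captured into $change):
If [ not IsEmpty ( JSONGetElement ( $change ; 2 ) ) ]
	# The OnWindowTransaction field had a value, so this is a real change
End If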
Activating the OnWindowTransaction script trigger
All that remains is for us to activate the OnWindowTransaction script trigger in the File Options of our file. Go to File > File Options > Script Triggers and select OnWindowTransaction. Then select the Audit script.
One script: wrapping up
The Audit script is where we prepare the audit log data and write it to our logging tables. We use nested loops to traverse the logging output from the OnWindowTransaction script trigger and If blocks to ensure we only log the data we need to.
We’ve audited
That’s it (for now)! We have built a comprehensive audit log using two tables, one field, and one script. There are some implementation details we’ve had to wrap our heads around, but the result is an audit log that can be easily implemented and customized.
Next steps
While the default layout for the AuditLog lets us inspect our handiwork, we don’t yet have an interface for the log that could be user-facing. We plan to follow up with more blog posts on this topic, outlining UI patterns to make the most of our audit log built with Claris FileMaker 2023.
Installation instructions
Source: https://www.beezwax.net/hyve/bzauditlog-product/installation-instructions
bzAuditLog is an audit logging module for Claris FileMaker 2023+.
Quickstart
Requires Claris FileMaker 2023 or higher.
- Copy or Import the AuditLog and SchemaLog tables into your solution
- Copy the OnWindowTransaction field into every table you want to audit. Tip: To avoid the field being commented out, ensure that you change the recordPrimaryKey property to match the primary key field of the target table. If the table doesn’t have a primary key, set the recordPrimaryKey property to a blank string
- Copy or import the Audit script into the script workspace
- In File > File Options > Script Triggers set the OnWindowTransaction script trigger to run the Audit script
That’s it. Changes will now be logged for all tables. Record data will be logged for all tables with the OnWindowTransaction field.
Fine tuning
- To entirely exclude tables from being logged, add them to the list in the If step on line 38
- To change the fields that get logged for a table, edit the ExecuteSQL query in the OnWindowTransaction field definition for that table
bzAuditLog-ui
Source: https://github.com/beezwax/bzAuditLog-ui
bzAuditLog-ui is an extension of bzAuditLog which adds record timeline and record detail views.
Powered by bzBond