Recently I was tasked with solving an interesting software design problem. The existing software wrote many files to a file system through a web service, and the requirement was to add a mechanism for storing metadata about specific files. The files still needed to be written to the file system and were not to be modified in any way. Other than that, it was up to me to devise a solution for how best to store the file metadata. Here is a diagram to illustrate the basic architecture (the red question mark represents the unknown of how to store the file metadata):
:: Design Approaches
I opted early on to use a database to store the file metadata rather than writing additional metadata files to the file system (that would very quickly become unmanageable). Given the use of a database, the difficult aspect of this problem was how to guarantee that both the file gets written to the file system and its associated metadata gets stored in the database table. I could not get into a situation where one of the operations succeeded while the other failed... it was all or nothing. While researching various solutions and talking to other developers, I came up with the three approaches outlined below.
1. Microsoft SQL Server FILESTREAM Data Type
I won't go into too much detail about the SQL Server FILESTREAM data type since it is explained thoroughly here (and an example here), but essentially it allows structured data to be stored in a database while the associated unstructured data is stored on an NTFS file system. A single SQL operation can then be used to store both the file on the file system and the file's metadata in the database. With regards to the transaction around this operation, here is what I found in the first article:
FILESTREAM is the only solution that provides transactional consistency of structured and unstructured data as well as integrated management, security, low-cost, and excellent streaming performance. This is accomplished by storing the structured data in the database files and the unstructured BLOB data in the file system, while maintaining transactional consistency between the two stores. More details of the FILESTREAM architecture are given in the “Overview of FILESTREAM” section later in this white paper.
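To make this concrete, here is a minimal sketch of the pattern in C#. The FileStore table, its column names, and the connection string are illustrative assumptions, not an actual schema; the general shape (insert the metadata row, then stream the bytes through SqlFileStream inside the same transaction) follows the documented FILESTREAM usage pattern.

```csharp
using System;
using System.Data.SqlClient;
using System.Data.SqlTypes;
using System.IO;

// Assumed table (names are illustrative):
// CREATE TABLE FileStore (
//     Id   UNIQUEIDENTIFIER ROWGUIDCOL NOT NULL UNIQUE,
//     Name NVARCHAR(260) NOT NULL,
//     Data VARBINARY(MAX) FILESTREAM
// );
static void StoreViaFilestream(string connStr, string name, byte[] contents)
{
    using (var conn = new SqlConnection(connStr))
    {
        conn.Open();
        using (var tx = conn.BeginTransaction())
        {
            // Insert the metadata row with a zero-length FILESTREAM value,
            // then ask SQL Server where to stream the actual bytes.
            var cmd = new SqlCommand(
                @"DECLARE @id UNIQUEIDENTIFIER = NEWID();
                  INSERT INTO FileStore (Id, Name, Data) VALUES (@id, @name, 0x);
                  SELECT Data.PathName(), GET_FILESTREAM_TRANSACTION_CONTEXT()
                  FROM FileStore WHERE Id = @id;", conn, tx);
            cmd.Parameters.AddWithValue("@name", name);

            string path;
            byte[] txContext;
            using (var reader = cmd.ExecuteReader())
            {
                reader.Read();
                path = reader.GetString(0);
                txContext = reader.GetSqlBinary(1).Value;
            }

            // The file bytes are written through the NTFS file system but
            // commit or roll back with the surrounding SQL transaction.
            using (var stream = new SqlFileStream(path, txContext, FileAccess.Write))
            {
                stream.Write(contents, 0, contents.Length);
            }

            tx.Commit();
        }
    }
}
```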
I also came across these two issues that may restrict the ability to use this FILESTREAM data type:
- SQL Server maintains a link of sorts from table rows to the FILESTREAM files associated with them. This means that deleting or renaming any FILESTREAM files directly through the file system will result in database corruption.
- The FILESTREAM data type is only available in SQL Server 2008.
2. Modified File Query Architecture
Given that it's not trivial to guarantee that a file system operation and a SQL operation either succeed or fail together, another approach is to let each operation happen independently, with only the final operation in the sequence "committing/registering" the overall action (in my case, the overall action being that both the file and its metadata are stored successfully, or not at all).
The thought here was to perform the operation that stores the file to the file system first. If this fails, I know the overall action has failed. If it succeeds, I can move on to the next action in the sequence: the SQL operation.
For the SQL operation I would be storing the file's metadata in a row within a database table. This row would have an identifier for the file stored on the file system (whether its location on the file system or simply its unique filename). At this point, if the SQL operation succeeds, the overall action succeeds. The tricky part comes if the SQL operation fails: I would then be left with a file on the file system which is unreferenced by any row in the database table (the beginnings of data integrity problems). Given that the files stored on the file system are queried by another process, this could very easily corrupt the system with files that have no associated metadata.
So what to do if the SQL operation fails? The simplest (but naive) solution would be to delete the file from the file system whenever the SQL operation fails for any reason (see the sketch after this list). This has two problems:
- What if the deletion of the file fails? You could build in some retry logic, but if the deletion failed indefinitely, the file would sit on the file system forever as an orphaned file with no associated metadata.
- What if the other process queried for the file on the file system before its metadata was inserted into the database? Given that the file is stored first and its metadata second, it is theoretically possible for the other process to find a file on the file system whose metadata has not yet been stored in the database. This could be solved with sufficient error handling (if the metadata doesn't exist yet, retry until it does), but that would not be a very elegant solution.
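For illustration, here is roughly what the naive sequence looks like in C#. The FileMetadata table and the method name are assumptions made up for this sketch:

```csharp
using System;
using System.Data.SqlClient;
using System.IO;

static void StoreFileThenMetadata(string connStr, string path, byte[] contents, string metadata)
{
    // Step 1: write the file to the file system.
    File.WriteAllBytes(path, contents);

    try
    {
        // Step 2: register the metadata row that makes the file "official".
        using (var conn = new SqlConnection(connStr))
        {
            conn.Open();
            var cmd = new SqlCommand(
                "INSERT INTO FileMetadata (FileName, Metadata) VALUES (@f, @m)", conn);
            cmd.Parameters.AddWithValue("@f", Path.GetFileName(path));
            cmd.Parameters.AddWithValue("@m", metadata);
            cmd.ExecuteNonQuery();
        }
    }
    catch
    {
        // Compensating action: best effort only. If this delete also fails,
        // the file is left orphaned on the file system (problem 1 above).
        try { File.Delete(path); } catch { /* would need retry/cleanup logic */ }
        throw;
    }
}
```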
To use this approach effectively, the other process's query mechanism would need to be updated so that it queried the database table first (to determine what metadata exists) and then fetched only those files on the file system for which metadata existed.
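A sketch of what that modified query mechanism might look like, again with illustrative table and column names:

```csharp
using System.Collections.Generic;
using System.Data.SqlClient;
using System.IO;

static IEnumerable<string> QueryFilesWithMetadata(string connStr, string directory)
{
    using (var conn = new SqlConnection(connStr))
    {
        conn.Open();
        var cmd = new SqlCommand("SELECT FileName FROM FileMetadata", conn);
        using (var reader = cmd.ExecuteReader())
        {
            while (reader.Read())
            {
                // Only files the database knows about are ever surfaced,
                // so orphaned files are invisible to the querying process.
                var path = Path.Combine(directory, reader.GetString(0));
                if (File.Exists(path))
                    yield return path;
            }
        }
    }
}
```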
The drawbacks of this approach are that the other process needs to be modified (this could be done, but the fewer code changes the better) and that orphaned files with no metadata would still accumulate on the file system. A separate clean-up routine could be built to periodically take care of (i.e. delete) the orphaned files, but again, I'd have to write a whole other application to perform the clean-up... there has to be a better way.
3. Transactional NTFS & The .NET TransactionScope Class
I knew that the .NET Framework can wrap SQL operations in a transaction, but I also knew this didn't work for traditional file system operations. What I wanted to guarantee was that if any of the file operations or SQL operations failed, then none of them would be committed. I needed an atomic transaction spanning both the file system write and the SQL insert statement.
I posted this question on Stack Overflow and got a ton of responses. It turns out that since Windows Vista, NTFS has shipped with a component called Transactional NTFS (TxF). According to Wikipedia, Transactional NTFS "brings the concept of atomic transactions to the NTFS file system, allowing Windows application developers to write file output routines that are guaranteed either to succeed completely or to fail completely". Thus I now had a mechanism to enable transactions for both SQL and file system operations.
Upon researching how I would add this transactional support to the file system operations within my application, I came across this Transactional NTFS screencast by Jason Olsen. Not only does he highlight the difference between a regular file system operation and a transacted file system operation, but he also dives deeper into the underlying code that makes the transaction possible. As you will notice from the screencast, the underlying code supporting the transactions is very low-level and uses Win32 API functions. I'd prefer to interact with managed code (at the architectural layer this is being implemented in), so I found a more convenient managed wrapper that allows me to use the well-known .NET TransactionScope class (the same class that SQL operations can be wrapped with to form transactions).
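Here is a minimal sketch of what the combined operation looks like with TransactionScope. TransactedFile stands in for the managed TxF wrapper (the real wrapper's API may differ), and the FileMetadata table is an assumption carried over from the earlier sketches:

```csharp
using System;
using System.Data.SqlClient;
using System.IO;
using System.Transactions;

static void StoreFileWithMetadata(string connStr, string path, byte[] contents, string metadata)
{
    using (var scope = new TransactionScope())
    {
        // Transacted file write: enlisted in the ambient transaction via TxF,
        // so it is rolled back if the scope never completes.
        using (Stream fs = TransactedFile.Open(path, FileMode.Create,
                                               FileAccess.Write, FileShare.None))
        {
            fs.Write(contents, 0, contents.Length);
        }

        // The SQL connection enlists in the same ambient transaction automatically.
        using (var conn = new SqlConnection(connStr))
        {
            conn.Open();
            var cmd = new SqlCommand(
                "INSERT INTO FileMetadata (FileName, Metadata) VALUES (@f, @m)", conn);
            cmd.Parameters.AddWithValue("@f", Path.GetFileName(path));
            cmd.Parameters.AddWithValue("@m", metadata);
            cmd.ExecuteNonQuery();
        }

        // Both operations commit together; an exception anywhere above
        // rolls back both the file write and the metadata insert.
        scope.Complete();
    }
}
```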
Note that using Transactional NTFS requires developing and running the application on Windows Vista or above. Additionally, transacted file operations can be performed on network shares but only if their underlying file systems support Transactional NTFS themselves.
:: Conclusion
After evaluating these three approaches, I decided that "Transactional NTFS & The .NET TransactionScope Class" would solve the underlying problem most effectively while minimizing the code changes needed to get this feature operational.