Article > Copy-on-write in a relational database
Description :: Sharing data, but only as long as everyone plays nice
0. Index
1. Copy-on-write data
 1.1 Table structure
 1.2 End-user views
 1.3 Underlying triggers and support procedures
 1.4 Centralizing references
 1.5 Hack: back-references
 Copy-on-write is a mechanism whereby a shared resource will not be duplicated for each reference to it, so long as none of the references desire to make changes; if changes do need to be made, the original is cloned, the changes are made to the clone, and old references continue to point to the old content while the reference that needed the changes refers to the modified content. The new version may be a wholesale copy of the old version, with changes embedded, or it may consist of incremental changes to be applied to another version ad hoc. You can find literature on this concept by searching for "copy on write string" or "copy on write filesystem" -- in each case, whether for simple text strings like "hello" or entire files, the concept is basically the same.
 A typical copy-on-write mechanism works something like this: everyone who needs to use a shared resource accesses or refers to it only through an explicit reference; all change requests for that resource are routed through the reference rather than directly to the resource; when a change request comes in, you note how the change was requested, perform your cloning and modifying magic, and fix the reference (or all the old references) so they refer to the correct version of the resource, or to a method of getting it.
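 In miniature, that flow can be sketched like this (Python; `Store` and `CowRef` are invented names for illustration, a toy model rather than any real library):

```python
# Toy model of copy-on-write: every user holds a CowRef, never the raw
# resource, and all writes route through the reference.  (Names invented.)

class Store:
    """Keeps every version of a shared resource, whole."""
    def __init__(self, initial):
        self.versions = [initial]            # version 0 is the original

    def clone_with(self, ver, change):
        new = dict(self.versions[ver])       # copy the old version...
        new.update(change)                   # ...and embed the change
        self.versions.append(new)
        return len(self.versions) - 1        # the new version number

class CowRef:
    """An explicit reference; all reads and writes go through it."""
    def __init__(self, store, ver=0):
        self.store, self.ver = store, ver

    def read(self):
        return self.store.versions[self.ver]

    def write(self, change):
        # clone-and-modify: only this reference moves to the new version
        self.ver = self.store.clone_with(self.ver, change)

store = Store({"name": "John", "city": "Springfield"})
a, b = CowRef(store), CowRef(store)          # two references, one resource
a.write({"city": "Shelbyville"})             # a now points at a modified clone
# ...while b still sees the original, untouched
```

 The essential property is in the last three lines: a write through one reference never disturbs what the other references see.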
 That last case should be clarified: whether your indirect reference is something like "John, as he was the day of his 18th birthday" or "mydocument.txt version 34", this does not inherently determine how the versioned data is stored. Your system could very well keep entirely separate copies of each version and point directly to a specific, whole copy, or it could also keep one full copy of the original and then a series of changes, with a record stating that to get to version 34, you need to take the original and apply changes A7, R3, L56, and H9 (randomly picked so you perhaps won't think there's an order implied.)
 That, by the way, would be how CVS (Concurrent Versions System) stores your text files: one base reference version and then a set of changes (diffs) that, when combined, allow you to roll forward to a specific "other" version; that's also how Firebird stores backversions of records in its multi-version concurrency model: the most recent version of a record is stored (as a whole) when a change is made, along with smaller just-the-necessary records (deltas) that allow you to backtrack from the most recent version to an earlier version, making it easy to get the latest version, but still possible to get an earlier version; when older versions are no longer needed, the deltas can be erased without affecting the more recent versions. When using a change-over-time mechanism, there's no particular rule as to which end needs to be the reference copy, just so long as you're consistent about it. No changes should be allowed to appear in any unexpected places.
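 A base-plus-deltas store can be mocked up the same way (Python; the delta format here is invented and far cruder than real CVS diffs):

```python
# Version 0 is kept whole; deltas[i] holds the changes that roll
# version i forward to version i+1.  (Made-up, minimal delta format.)
base = ["hello"]
deltas = [
    [("append", "world")],         # version 0 -> 1
    [("replace", 0, "goodbye")],   # version 1 -> 2
]

def materialize(ver):
    """Rebuild any version by replaying deltas against the base copy."""
    lines = list(base)
    for changes in deltas[:ver]:
        for op in changes:
            if op[0] == "append":
                lines.append(op[1])
            elif op[0] == "replace":
                lines[op[1]] = op[2]
    return lines
```

 Firebird does the same thing mirrored: the newest version is the whole copy and the deltas walk backward from it, so discarding stale versions just means erasing trailing deltas.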
 That's all very well and good for block structures that can be copied whole, but what about multi-table entities in a relational database, when you don't want every single update statement to run through a big ugly stored procedure?
1.1 Table structure
 Let's start with the hypothetical (but rather probable) case of versioning "people"; let's say that each person has several addresses, maybe several phone numbers, and a few other properties like a name. You want to use your relational database the way Codd intended you to, so you refuse to smash those addresses and phone numbers into array fields in the person table; no, you want to actually split those out. You start off with something like this:
people(id, name, birthdate) <- addresses(person_id, usps_address)
                            <- phonenums(person_id, phone_number)
   
 Proud of yourself, you now consider how people are used in your database -- ah, yes, you track orders they place in your inventory system! Continuing, you decide to do the following:
orders(id, orderdate, paying_person_id, receiving_person_id) -> people...
   
 Excellent. Now your boss comes around, looks at your design, and says "Bob, that's great, but if the same customer orders equipment from us twice, we don't want his updated address information to show up on the old order, it could mess up our reporting." Sadly, you use personal information for other purposes, such as tracking complaints, and you don't really want to make new copies of people's records for every new situation -- a new copy every time they place an order, call in to complain, etc. So you decide you need some form of versioning that allows orders to remember people as they were, and maybe, just maybe, allows for a few extra tricks!
 You set off to implement something like we talked about earlier; first off, you need your references to contain a version number of some sort:
orders(id, orderdate, paying_person_id, paying_person_ver, receiving_person_id, receiving_person_ver) ->

people(id, ver, name, birthdate) <- addresses(person_id, person_ver, usps_address)
                                 <- phonenums(person_id, person_ver, phone_number)
   
 In this model, we expect to make a copy of the person record and all associated addresses and phone numbers when any changes, anywhere, occur. Changing the name, adding an address, removing a phone number, fixing a typo in the address -- all of these would result in an entirely new person record with a full copy of all related records.
 
1.2 End-user views
 To hide all of this wonderful thinking from your users, programmers, report-writers, and what-have-you, VIEWs come in handy. Consider the following views:
create view v_orders as 
select orders.id, orders.orderdate, 
       p1.name paying_name, p1.birthdate paying_birthdate, 
       p2.name receiving_name, p2.birthdate receiving_birthdate 
from orders 
left join people p1 on p1.id = orders.paying_person_id 
                   and p1.ver = orders.paying_person_ver
left join people p2 on p2.id = orders.receiving_person_id 
                   and p2.ver = orders.receiving_person_ver;

create view v_order_paying_addresses as
select orders.id order_id, a1.usps_address
from orders 
left join addresses a1 on a1.person_id = orders.paying_person_id 
                      and a1.person_ver = orders.paying_person_ver;
   
 Your users now have convenient access to this versioned information, and they don't even need to know that it's versioned. In the following section, we'll see how we can trap their attempts at updating this information and "do the right thing" for them automatically; they'll continue to think they're just using tables, that all this information is per-order (as that's the only way they can access it in this case) and never see the copy-on-write algorithm.
1.3 Underlying triggers and support procedures
 You'll be happy to know that despite your best efforts to re-expose versioned data as virtual tables (views), and avoid the annoyance of making all changes via strict stored procedures, you now get to do exactly that, but in the background.
 You can place "instead of" triggers on your various order-oriented views (n.b. in Firebird 2.x, all triggers on views act as instead-of triggers: the view will not attempt to pass the requested action through to its base tables, even on simple views where it might know how) to catch attempts to modify them. As the views contain joins, it's unlikely your RDBMS would have decided it knows how to allow changes anyway, so until you do this, your views are read-only.
 Before we get into what the triggers need to do, let's talk about copy-on-write. Any time we detect an attempted change through these views, we know which order_id the user intended to change, but how do we decide whether we need to make a new copy of a given person record? Two options come to mind: don't make a new copy if the copy you're attempting to modify was created by your own transaction (so a client program can issue several changes in a row as separate statements without spawning a new copy each time); and possibly track the "localized" nature of copies, so that a copy dedicated to one specific order, and no other, won't be cloned again (you may not want this if you're also using the cloning mechanism to keep an audit history of all changes).
 The first thing you'll do, then, is look at the data they're attempting to change, hunt down which person and which version of the person they're attempting to modify, decide if a clone is needed, and if so, create one, then apply the changes to that cloned version, modify the reference so it points to the cloned version, and go about your merry way. Note that you're not required to make changes to the cloned version: you could make a clone, then modify the original; you would then need to change not one or a few references, but any and all other references to point to the clone; beyond just the performance issues involved in hunting all of those down and modifying them, you also have the locking issue: you might reasonably expect your transaction to be the only one modifying this order or information related to it, but all other orders even vaguely related to it? That's a lot of orders, a lot of potential record locks, deadlocks, etc. to worry about.
 Let's add something to the person record, a transaction id we can use to determine if we just recently created this record clone:
people(id, ver, name, birthdate, transactionid)
   
 Now we need a utility function to go and optionally clone our record for us, and tell us what record we need to update:
create procedure maybe_clone_person(old_person_id, old_person_ver) as
begin
 if ((select transactionid from people 
      where id = :old_person_id and ver = :old_person_ver) 
   != current_transactionid) then
  begin
   -- cloning process
   new_version = next value for version_number_sequence;

    insert into people(id, ver, name, birthdate, transactionid)
     select id, :new_version, name, birthdate, current_transactionid
    from people where id = :old_person_id and ver = :old_person_ver;

   insert into addresses(person_id, person_ver, usps_address)
    select person_id, :new_version, usps_address
    from addresses where person_id = :old_person_id and person_ver = :old_person_ver;

   insert into phonenums(person_id, person_ver, phone_number)
    select person_id, :new_version, phone_number
     from phonenums where person_id = :old_person_id and person_ver = :old_person_ver;

   return new_version;
  end
 else
   -- never mind, we already did this
  return old_person_ver;
end
   
Note: all code examples in this article are pseudocode at best; please don't expect them to compile, particularly not on whatever platform you're using!
 Now we can create a trigger that will trap changes to a person's name:
create trigger payable_person_change before update on v_orders as
begin
 if (old.paying_name != new.paying_name or old.paying_birthdate != new.paying_birthdate) then
  begin
   paying_person_id, paying_person_ver = 
    (select paying_person_id, paying_person_ver from orders where id = old.id);

   true_paying_person_ver = maybe_clone_person(paying_person_id, paying_person_ver);

   if (paying_person_ver != true_paying_person_ver) then
    begin
     -- we've cloned, so we go ahead and fix the reference we know about
     update orders set paying_person_ver = :true_paying_person_ver
     where id = old.id;
    end
    update people set name = new.paying_name, birthdate = new.paying_birthdate
    where id = :paying_person_id and ver = :true_paying_person_ver;
  end
 -- ... something equivalent for receiving_name, receiving_birthdate
end
   
 Now, when you update the name of the paying or receiving person on an order, the database will automatically consider cloning the person record, apply your change to the clone, and repoint your order to the new person version, if it hasn't already done so. If you run multiple update statements in a row, the cloning and re-referencing steps happen only the first time. On that view, you're not very likely to allow inserts and deletes; you might, however, need to support the case where there was no person_id on an order and an application attempts to set the person's name; you'd probably want to create a new person record, fill it in, and link the order to that person. I trust you can figure out where to do that. What about addresses and phone numbers, though? (To avoid future repetition, we'll move some code into a handy procedure as well:)
create procedure maybe_clone_order_paying_person(order_id) as
begin
 paying_person_id, paying_person_ver = 
  (select paying_person_id, paying_person_ver from orders where id = :order_id);

 true_paying_person_ver = maybe_clone_person(paying_person_id, paying_person_ver);

 if (paying_person_ver != true_paying_person_ver) then
  begin
   -- we've cloned, so we go ahead and fix the reference we know about
   update orders set paying_person_ver = :true_paying_person_ver
   where id = :order_id;
  end
 return paying_person_id, true_paying_person_ver;
end

create trigger payable_person_address_add before insert on v_order_paying_addresses as
begin
 paying_person_id, paying_person_ver = maybe_clone_order_paying_person(new.order_id);

 insert into addresses (person_id, person_ver, usps_address)
  values (:paying_person_id, :paying_person_ver, new.usps_address);
end

create trigger payable_person_address_chg before update on v_order_paying_addresses as
begin
 paying_person_id, paying_person_ver = maybe_clone_order_paying_person(new.order_id);

 update addresses set usps_address = new.usps_address
 where person_id = :paying_person_id
   and person_ver = :paying_person_ver
   and usps_address = old.usps_address;
end

create trigger payable_person_address_rem before delete on v_order_paying_addresses as
begin
 paying_person_id, paying_person_ver = maybe_clone_order_paying_person(old.order_id);

 delete from addresses 
 where person_id = :paying_person_id
   and person_ver = :paying_person_ver
   and usps_address = old.usps_address;
end
   
 Now any change to an address will also potentially clone the person record and all attendant addresses, then modify those addresses. Yay!
1.4 Centralizing references
 The worst problem (that I can think of right now) with the implementation above is that it's hard to know who is referencing a given versioned entity when the time comes to fix the references. It would also be nice to fix all references at once instead of hunting them down in dozens of tables -- if your order tracking system also had references to people in the "shipping" tables, you would have to multiply the work above for each one. Also, to avoid confusion, it seems reasonable that in our scenario, modifying the name of the paying person, when that is exactly the same person who is receiving the order, should fix both references and keep them synchronized. Consider creating a single table that provides a mapping as follows:
people_used(order_id, person_id, person_ver)
orders(id, orderdate, paying_person_id, receiving_person_id)
   
 Now, when you go to fix an order to use the right version of a person after cloning, don't modify the order directly -- instead, you can modify this table to indicate that when this order (anywhere in it, including shipping, paying, receiving, whatever) refers to a certain person, it means a specific version. If your paying and receiving people are the same, they will always refer to the same version of the person. You wind up doing more joins, but you have fewer version fields laying about.
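 As a sanity check on that mapping, here's a toy run of the extra join in SQLite (table names follow the article; the data, version numbers, and the sqlite3 stand-in are my own invention):

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.executescript("""
create table people(id, ver, name);
create table orders(id, orderdate, paying_person_id);
create table people_used(order_id, person_id, person_ver);

insert into people values (7, 1, 'John Smith');
insert into people values (7, 2, 'John Q. Smith');  -- a later clone
insert into orders values (100, '2006-01-01', 7);
insert into orders values (101, '2006-06-01', 7);
-- order 100 stays pinned to version 1; order 101 was repointed to 2
insert into people_used values (100, 7, 1);
insert into people_used values (101, 7, 2);
""")

def paying_name(order_id):
    # one extra join through people_used resolves the version
    return db.execute("""
        select p.name
        from orders o
        join people_used u on u.order_id = o.id
                          and u.person_id = o.paying_person_id
        join people p on p.id = u.person_id and p.ver = u.person_ver
        where o.id = ?""", (order_id,)).fetchone()[0]
```

 The old order keeps seeing the old name, the new order the new one, and nothing in the orders table itself carries a version field.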
 Don't think it's all fun and games though; you may be able to quickly find all references to a version in one place, and fix them all there, but you need to make sure you populate the table consistently. You'll want every table that refers to people to also have a trigger somewhat like the following:
create trigger orders_pick_payable before update or insert on orders as
begin
 -- we have not yet registered a preference as to which version to use
 if (new.paying_person_id is not null and not exists
    (select * from people_used where person_id = new.paying_person_id
     and order_id = new.id)) then
  begin
   -- we'll just use the latest one we can find
   insert into people_used(order_id, person_id, person_ver)
    select new.id, id, max(ver) 
    from people
    where id = new.paying_person_id
    group by id;
  end
end
   
 You have other options, of course; you could mark certain versions as private and hide them from any new or newly-modified orders; you could keep track of the current "HEAD" (in CVS terminology) version of each person and always use that one when no other is specified.
 Speaking of options, you could even have "global" people whose records are not cloned when modified under any circumstances, and "template" people who are always copied, but the new copies are hidden (see above), thus preventing local changes from ever being seen elsewhere but also having the benefit of starting from something better than scratch. You can also give your users the ability to see that there are new versions of a given person available, and "jump" to using them instead (akin to a CVS "update" command) -- all you have to do is modify that centralized reference table.
1.5 Hack: back-references
 The address/phone-number solution above is a bit cavalier; it assumes that what your views expose is sufficient to know what needs updating. Suppose we had used 'id' fields on those detail tables, and the views had needed to expose them because nothing else would do (say a given person can have the same address listed several times, so you're using an 'id' field as the primary key rather than the combination of a dozen fields, switches, and knobs). Now you have a problem: the id exposed in the view may be great for a user to include in an update statement, but your cloning process is going to change it! I recommend you avoid this situation, but if you simply cannot...
 Add fields to your detail tables that indicate what original row they came from:
addresses(id, person_id, person_ver, usps_address, prev_id)
   
 Make sure your cloning process populates that 'prev_id' field; modify your instead-of trigger so it hunts down the correct row to modify not by 'usps_address' as we did above, but instead by first checking that the person version the user thought they should modify was the right one, so you know if you should use the id field as-is, or use prev_id to translate from the address row they tried to update into the actual address row you want them to update (on the newly-minted clone). During the first update, your trigger will clone the person and all addresses, then go back and consider the originally-requested update; it will find that the versions don't match, so it will not update the address row by matching id = new.address_id but instead by matching prev_id = new.address_id. Assuming you never clone a person more than once during the transaction, you should be at most one step away from whatever person_ver the user thought they were modifying.
 In the very next update statement run, the view will already have picked up on the modified person_ver reference and will be returning rows from the clone; the updates attempted against those rows and executed by the trigger will reference address rows from the right version, and you can more or less transparently pass-through the request. Nobody said this would be simple.
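 To make the id-translation step concrete, here's a toy sqlite3 sketch (the addresses schema follows the article; the rows, version numbers, and the helper function are invented):

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.executescript("""
create table addresses(id, person_id, person_ver, usps_address, prev_id);
-- the version-1 row whose id the user originally saw
insert into addresses values (5, 7, 1, '12 Elm St', null);
-- the version-2 row the clone created; prev_id remembers its source
insert into addresses values (9, 7, 2, '12 Elm St', 5);
""")

def route_update(user_row_id, user_ver, current_ver, new_text):
    """Apply the user's update where it belongs, as the trigger would."""
    if user_ver == current_ver:
        # no clone happened mid-flight: the id is still good as-is
        db.execute("update addresses set usps_address = ? where id = ?",
                   (new_text, user_row_id))
    else:
        # the clone changed the ids: translate through prev_id instead
        db.execute("""update addresses set usps_address = ?
                      where person_ver = ? and prev_id = ?""",
                   (new_text, current_ver, user_row_id))

# the user aimed at row 5 of version 1, but version 2 is now current
route_update(5, 1, 2, '34 Oak Ave')
```

 The update lands on the clone's row (found via prev_id) while the original row, which other references may still need, stays untouched.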
 And what, you ask, should you do if you need to modify the same person record several times during a transaction because several orders are all being updated at once, each possibly referring to the same person, but each still needing its own clone? Well ... your cloning process won't be modifying the references from those other orders as it goes, so as you move from order to order and attempt an update, it will keep seeing an old version of the person, won't believe it was created during the current transaction, will create a new clone, attach that order to it, and proceed. You should be able to get away with this without further ado. If that doesn't work, please let me know. (See home page for details.)