Meaningful meta-aggregations of records
In records management (at least in Australasia), we tend to get hung up on the idea of an aggregation - the idea that a transactional record isn't complete until we have all of the information created or used by the transaction.
500 years ago, Luca Pacioli wrote his treatise on the Venetian method of accounting (what we now call double entry), and he was getting hung up on an entirely different type of aggregation - what we might call a “meta-aggregation”, for the sake of a more accurate label - with an entirely different purpose.
Prior to double entry, merchants had no way of knowing whether they were solvent - whether they had enough money (or promises of money, or things they could sell) to cover all of the promises to pay that they'd made (i.e. loans, goods bought but not yet paid for, and so on).
Then along came double entry - and all of a sudden, merchants who were using it knew whether their businesses were viable or not from moment to moment.
The reason they knew this was a simple but brilliant idea: add up all of the assets, add up all of the liabilities, and do some very basic math - subtract the liabilities from the assets. The result told them whether their assets exceeded their liabilities - whether they had money to spend, or not.
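That arithmetic is simple enough to sketch in a few lines of code. The account names and amounts below are illustrative, not from the source - the point is just the aggregate-then-subtract step:

```python
# A minimal sketch of the solvency check double entry enables.
# Account names and amounts are illustrative examples.

assets = {
    "cash": 1200,          # money on hand
    "receivables": 800,    # promises of money owed to us
    "inventory": 500,      # things we could sell
}

liabilities = {
    "loans": 900,          # money we owe
    "payables": 400,       # goods bought but not yet paid for
}

# Aggregate each side, then do the "very basic math".
total_assets = sum(assets.values())
total_liabilities = sum(liabilities.values())
net_position = total_assets - total_liabilities

print(f"Assets: {total_assets}, Liabilities: {total_liabilities}")
print("Solvent" if net_position >= 0 else "Insolvent")
```

The individual entries matter for the audit trail, but it's the aggregation across them that answers the question the merchant actually cares about.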
Imagine running a business and trying to make a decision to buy something while actually not knowing whether the business had the money to buy it. How would you react?
This next-level aggregation of records allowed them to spend money with certainty, because they knew where their business was - and because they knew where it was, they could make good strategic and operational decisions about where it should go.
It's worthwhile thinking about this today.
When strategic decisions are being made, how much information are the people making those decisions actually using?
How much of the information they COULD use is locked up in a records system, and doesn't get used because they can't aggregate it in a meaningful way?
This is something that has both frustrated and surprised me throughout my records management journey: our obsession with getting the objects into the repository, and the lack of focus on providing meaningful aggregations so that better business decisions can be made.
IT and data people get this - it's why they're obsessed with Power BI and Tableau (and 40 other products), and why they have a preference for structured data: they know that structured data makes meaningful cross-sectional aggregations viable.
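The kind of cross-sectional aggregation those tools make easy can be sketched over structured records. The fields and values here are hypothetical examples, not anything from the source:

```python
# A sketch of a cross-sectional aggregation over structured records.
# The record fields (business_unit, value) are illustrative assumptions.
from collections import defaultdict

records = [
    {"business_unit": "procurement", "value": 1500},
    {"business_unit": "procurement", "value": 700},
    {"business_unit": "legal", "value": 300},
]

# Roll up by business unit - trivial when the data is structured,
# near-impossible when the same facts are locked inside documents
# sitting in a repository.
totals = defaultdict(int)
for record in records:
    totals[record["business_unit"]] += record["value"]

print(dict(totals))  # e.g. {'procurement': 2200, 'legal': 300}
```

Nothing here is sophisticated; the barrier isn't the math, it's whether the records system exposes its contents in a form that can be aggregated at all.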
Luca Pacioli and the entire accounting profession figured this out 500 years ago.
IT figured it out 50 years ago.
They don’t seem to be struggling for funding.
Generally speaking, I think records management IS struggling for funding.
What I'm trying to figure out at the moment is whether this means that everything is in its right place. Will there always be a general, low-level records management group that manages at the bucket level, and more specialised groups that manage with more meaningful meta-aggregations? Or is this just a failure of records management to adapt its practice to changing capability and technology?