How to talk to people about how records deliver improved results.
If I could get people to buy into just one thing to add to their records program, it would be using the DIKAR model as the basis for it.
DIKAR is a simple concept: it's an acronym for Data, Information, Knowledge, Actions, Results.
The quality of our results is determined by the quality of our actions.
The quality of our actions is determined by the quality of our knowledge.
The effectiveness of our knowledge in making decisions about what action to take and how to take it is constrained by the quality of our information.
The quality of our information is constrained by the quality of our data.
The important thing it does is allow you to tie the quality of the information people are using to the quality of the results they're getting.
If people are getting poor quality results, they must have taken poor or incorrect actions.
Why did they take a poor or incorrect action?
There are only two reasons - the first is failure of incentives, the second is failure at the knowledge level.
Incentive failure is simple - someone decided to do the wrong thing because what they did had better incentives than the thing they should have done (this is where most archival compliance goes wrong).
Failure at the knowledge level means that people didn't have sufficient knowledge to make sense of the situation and decide on the correct action, or they had insufficient training to carry out the action once they had decided on it.
If that wasn't the case, they must have had poor quality information or data.
To me, the only real goal that makes sense for records is making sure that any time someone gets poor results in our organisation, it's because they didn't have the right knowledge, or because they decided to do the wrong thing.
Our goal should always be to make sure that they had the right information, at the right level of quality, in the right place, at the right time.
If we're always focusing on that, we should be able to ask for the money we need - and organisations should be happy to pay it.
The thing we have to do for it to be credible though, is examine our own practices.
I think we can make a case that putting all the information in an EDRMS behind a functional classification actually (in aggregate) hurts our organisation.
If you want to do that, and be really credible, you need to do an options evaluation: where is the information created, kept and managed, what impact does that have on information quality, and how does that affect the result? If it improves the result, and you can explain that - great.
If not, then the DIKAR model has worked exactly as it should - and implementing a practice because you like it, even though it makes results go backwards, seems to me to be a career-limiting move.