Generative AI, the third generation of records systems, and beyond Chicken Little
I've got a working model that I'm presenting at the RIMPA conference in a couple of weeks.
Basically it says that we're seeing a third generation of records systems emerge now.
Generation 1 was just a catalogue.
Generation 2 was a catalogue coupled to a repository - and delivered the capability you get when those two things are tightly coupled together.
Generation 3 is a catalogue coupled to a classifier - this is my working hypothesis.
I'm positing that the third generation of records systems is going to help us know our records better than we've ever been able to before.
It's essentially solving the internal knowledge problem - how do we come to know what's in our records without needing a person to read every one of them with record-specific purposes in mind.
What they help us do is pay attention to the things that matter, by telling us what's important and what's not - so that we don't have to waste time getting people to do that (as we arguably do with generation 2).
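To make that coupling concrete, here's a minimal sketch of the idea, with the caveat that the field names, labels and keyword rules are illustrative assumptions of mine rather than any particular product's design: a catalogue entry gets enriched with whatever a classifier says about the record, so the catalogue "knows" its holdings without a person reading them.

```python
from dataclasses import dataclass, field

@dataclass
class CatalogueEntry:
    # Generations 1 and 2: descriptive metadata plus a pointer into the repository
    record_id: str
    title: str
    repository_uri: str
    # Generation 3: metadata a classifier adds, rather than a person
    labels: list[str] = field(default_factory=list)

def classify(text: str) -> list[str]:
    """Stand-in for whatever classifier an organisation uses (an ML model,
    NLP/ontological coding, and so on). Keyword rules here, purely for
    illustration."""
    rules = {
        "contract": "agreement",
        "personal-information": "date of birth",
        "financial": "invoice",
    }
    return [label for label, keyword in rules.items() if keyword in text.lower()]

def ingest(entry: CatalogueEntry, text: str, catalogue: dict[str, CatalogueEntry]) -> None:
    """Couple the catalogue to the classifier: every record that lands in the
    repository is also classified, so the catalogue knows what it holds."""
    entry.labels = classify(text)
    catalogue[entry.record_id] = entry
```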
The problem is that every example I've seen still requires us to tell them which things are important to pay attention to. That isn't really a problem; it's me looking at the constraints that generation 3 removes and thinking about where the constraint moves next.
We tell them what's important by a variety of means - but whether it's ML trained on existing data, or some form of NLP/ontological coding, we're still saying "this type of thing is important because of this reason."
It makes a certain amount of sense that the next generation of that technology (maybe Gen 4, maybe Gen 3.5) won't just let us say "pay attention to these types of things," but will also say to us "these are the things you should be paying attention to" - and given the capabilities of generative AI, it makes sense that generative AI fills this gap.
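As a sketch of that difference (the generate() function below is a deliberate placeholder for whatever generative AI service an organisation uses, and the prompt wording is my assumption, not a recommendation): in generation 3 we hand the system the labels that matter, whereas the next step is to ask the system what it thinks matters, and why.

```python
def generate(prompt: str) -> str:
    """Placeholder for a real generative AI call; a real API would go here."""
    raise NotImplementedError("stand-in for a generative AI service")

def surface_concerns(record_text: str) -> str:
    """Instead of supplying the labels ourselves (as in the classifier sketch
    above), ask the model what we should be paying attention to, and why."""
    prompt = (
        "You are assisting an organisation's records manager.\n"
        "List anything in the record below that the organisation should pay "
        "attention to (obligations, risks, personal information, retention "
        "triggers), and briefly explain why each item matters.\n\n"
        f"Record:\n{record_text}"
    )
    return generate(prompt)
```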
It's the right tech for now because the simple truth is that the problem with larger and larger data environments is obviously not storing them.
It's about making sense of them.
And it's a combined sense-making and knowledge management challenge.
We have to make sense of our data environment in the context of our business environment - the opportunities for gain, and the risk and regulatory environment.
All of which requires specific expertise.
All of which changes regularly.
The key questions are: how quickly can we make sense of, and gain knowledge about, the current data, business and regulatory environments; what is our rate of knowledge loss and knowledge staleness; and how quickly can we operationalise what we know - going from knowing the risks to treating them, so that we're not just well equipped to be Chicken Little (a role which, we could argue, data and records management have played for many years without the sky actually falling in).
It's this virtuous cycle that I think will give us the biggest role for generative AI.
In the past, people haven't been able to act on what we know fast enough and efficiently enough to make a difference.
The third generation of records systems reduces the gap between the creation of a risk and acting on it to the point where there's almost no gap, and reduces the cost of acting to the point where it's probably negligible for most organisations.
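To show what "almost no gap" could look like in practice, here's a sketch that reuses the classify() stub from the earlier example; the event name and the actions in the table are my own assumptions, not features of any specific system. The point is that detection and treatment happen in the same pipeline, so the delay between a risk appearing and being acted on is the pipeline's run time rather than a manual review cycle.

```python
# Illustrative actions only; in a real system these would be calls into the
# repository or workflow layer.
ACTIONS = {
    "personal-information": "apply access restriction",
    "contract": "set retention trigger on the expiry date",
    "financial": "route to the finance recordkeeping workflow",
}

def on_record_created(record_id: str, text: str) -> list[str]:
    """Close the loop: classify the new record (sense-making), then act on
    whatever risks the labels imply, with no human queue in between."""
    labels = classify(text)  # classify() is the stub from the earlier sketch
    taken = [ACTIONS[label] for label in labels if label in ACTIONS]
    print(f"{record_id}: {taken or ['no action needed']}")
    return taken
```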
The next challenge is to make sense of everything so that we know what we should be acting on; the obvious role for generative AI is to tell us which risks we should be paying attention to.
Make sense?
What do you think?