In this economy, I've been repeatedly pinged by clients asking how to maximize their existing Identity Management software investment. In other words, "I want to do all this stuff, but I don't want to buy more software, and barely buy any services."
So here is an idea that came from a conversation with one of our engineers. This is for clients that own only a Password Management solution but want to be able to deprovision users. They could create a workflow that changes the password on all target systems to a random password that no one knows. In effect, the user would be locked out of all accounts. A small program could be written to call the workflow's SPML interface (assuming it has one), driven by a nightly feed from Payroll or HR. No new software, barely any services, but an effective deprovisioning of accounts.
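To make the idea concrete, here's a minimal sketch of what that nightly program might look like, in Python. Everything specific here is an assumption on my part: the endpoint URL, the CSV feed layout, and the XML body, which is only loosely modeled on an SPML modifyRequest (real SPML 2.0 payloads ride in a SOAP envelope and the exact schema varies by product).

```python
# Hypothetical sketch: scramble passwords for terminated users via the
# Password Management workflow's SPML interface. The endpoint URL, feed
# format, and XML schema below are all illustrative assumptions.
import csv
import secrets
import string
import urllib.request

SPML_ENDPOINT = "https://idm.example.com/spml/v2"  # hypothetical URL

def random_password(length=24):
    """Generate a throwaway password that no one will ever know."""
    alphabet = string.ascii_letters + string.digits
    return "".join(secrets.choice(alphabet) for _ in range(length))

def build_modify_request(account_id, new_password):
    """Rough approximation of an SPML modifyRequest; not product-exact."""
    return f"""<modifyRequest xmlns="urn:oasis:names:tc:SPML:2:0">
  <psoID ID="{account_id}"/>
  <modification modificationMode="replace">
    <data><password>{new_password}</password></data>
  </modification>
</modifyRequest>"""

def scramble(account_id):
    """POST the password reset to the workflow's SPML endpoint."""
    body = build_modify_request(account_id, random_password()).encode("utf-8")
    req = urllib.request.Request(
        SPML_ENDPOINT, data=body, headers={"Content-Type": "text/xml"})
    with urllib.request.urlopen(req) as resp:
        return resp.status == 200

# Nightly run: scramble every account flagged as terminated in the HR feed.
# Assumes a CSV feed with 'account_id' and 'status' columns.
with open("hr_terminations.csv", newline="") as feed:
    for row in csv.DictReader(feed):
        if row["status"] == "terminated":
            scramble(row["account_id"])
```

The random password is generated and discarded on purpose; since no one stores it, the reset is effectively a lockout rather than a credential change.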
I'm noodling on whether this would pass an audit; I doubt it would, since the account is still active. But it would work, it would leverage the client's investment in the connectors built for all target systems, and it could be accomplished in no time.
I think it's the best thing since sliced bread, but I'm sure I'll find a new favorite tomorrow. Would this work?
Virtual Directories and Persistent Cache
I got drawn into a debate lately about the pros and cons of persistent cache in a virtual directory, and the practical implications of it. (I know this is an old debate. Better late than never?) A persistent cache basically stores a copy of the data locally at the virtual directory, so it doesn't have to go fetch the data each time.
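To make the tradeoff concrete, here's a toy sketch (mine, not from the debate) of the read-through pattern a persistent cache implies, with an in-memory dict standing in for the on-disk store a real virtual directory would use:

```python
# Toy sketch of a read-through persistent cache in front of a backend
# directory. The TTL and in-memory store are illustrative assumptions;
# a real virtual directory would persist the cache to disk.
import time

CACHE_TTL_SECONDS = 300   # staleness window: the 'freshness' tradeoff
_cache = {}               # dn -> (entry, fetched_at)

def lookup(dn, fetch_from_backend):
    """Return the entry for dn, hitting the backend only on miss or expiry."""
    hit = _cache.get(dn)
    if hit is not None and time.time() - hit[1] < CACHE_TTL_SECONDS:
        return hit[0]                  # fast path: possibly stale data
    entry = fetch_from_backend(dn)     # slow path: real-time backend call
    _cache[dn] = (entry, time.time())
    return entry
```

Every hit on the fast path is a read that never touches the backend, which is exactly where both the performance win and the freshness compromise come from.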
The first question is 'why add this capability? Isn't the whole point of a virtual directory to provide real-time access to backend data?' In my conversations, I basically received one answer: performance. Virtualizing and transforming the data can slow things down a bit.
Clayton Donley makes a case against persistent cache in an older post. To summarize:
- A persistent cache means the data isn't real-time, so the 'freshness' of the data is compromised.
- There are security concerns with adding another place to keep the data.
- There is pain associated with managing yet another directory.
(i.e., if you want a metadirectory, then get a metadirectory!)
So, I've come up with a few questions, and was wondering if anyone has any thoughts on them...
- Since performance is the main point here, does anyone have numbers on the performance hit caused by virtual directories?
- Is performance the only real justification for persistent cache?