
Using kill-files with Service Store

By Max Ranzau
 

From the MDK Division. In this article I'll cover some experiences with handling authoritative data on a super-scalable basis. For the example at hand, let's say you have an authoritative datasource which only provides you deltas, i.e. you only get orders specifying which people objects to create and who to kill (whoa, that didn't come out right). You want to ensure that at all times your list of people in the Service Store is in sync with reality, based on the deltas you receive. In this example we are basing this on CSV files.

In order to handle this, you'll have to create two data connections: one that makes new people and one that kills them (oh, there I go again). This is important, as with only one data source, Service Store will delete any people records that don't have a corresponding entry in the datasourced CSV file. This can be avoided by splitting up adds and deletes onto two separate data connections. The key is using the flags on the mapping pages correctly. If you don't, you risk wiping out (or at least marking for deletion) every current user in your Service Store, so pay close attention.

Assuming there might be more than one make/kill order coming through at any point, you will need to collect these in two static CSV files, as the Service Store only knows how to read data from one CSV file per data connection. Each of the incoming orders typically contains only one order/line. You will of course need to create a datasource for both of these CSV files. The collection can be done with a bit of scheduled, nifty PowerShell-ing, along the lines of the sketch below. Feel free to reach out if you have no idea how to make it.
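To give you an idea, here is a minimal sketch of such a collection script. The folder paths, the file mask and the ACTION values (MAKE/KILL) are my assumptions for the example; adjust them to whatever your authoritative source actually delivers:

    # Minimal sketch of the collection step. Paths and the ACTION values
    # (MAKE/KILL) are assumptions -- adjust to your environment.
    $inbox    = 'C:\Deltas\Inbox'             # where the delta CSVs get dropped
    $makeFile = 'C:\Deltas\combined-make.csv' # datasource for the "make" connection
    $killFile = 'C:\Deltas\combined-kill.csv' # datasource for the "kill" connection

    foreach ($file in Get-ChildItem -Path $inbox -Filter '*.csv') {
        foreach ($row in (Import-Csv -Path $file.FullName)) {
            # Sort each order into the right pile based on the ACTION column
            if ($row.ACTION -eq 'MAKE') {
                $row | Export-Csv -Path $makeFile -NoTypeInformation -Append
            } elseif ($row.ACTION -eq 'KILL') {
                $row | Export-Csv -Path $killFile -NoTypeInformation -Append
            }
        }
        # Delta files are deleted once read, so each order is processed only once
        Remove-Item -Path $file.FullName
    }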

Once you have two CSV files ready for synchronization into ServiceStore, you’ll want to set up your data connector mapping flags correctly. I found the following works best. For importing people to create:

  • [X] Ignore duplicates
  • [X] Allow inserts
  • [  ] Allow updates
  • [  ] Allow deletes (mark for deletion)

For people to, ahem…"migrate to the cloud", the flags need to be configured differently. You will have to allow updates in order for the mark-for-deletion mechanism to do its thing:

  • [  ] Ignore duplicates
  • [  ] Allow inserts
  • [X] Allow updates
  • [X] Allow deletes (mark for deletion)

In order for the above data connections to work, 1) both CSV files need to reference a common person identifier; in my case a GUID is available per user, and 2) both the make and kill files should have a commonly named column, such as ACTION, which signifies what is happening. This will also help your script sort the incoming CSVs into the right pile.
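By way of illustration, a pair of collected files could look like this. The GUID values and the extra columns on the make file are made up for the example; your people mapping will dictate the real ones:

    combined-make.csv:

        GUID,ACTION,NAME,EMAIL
        6F9619FF-8B86-D011-B42D-00C04FC964FF,MAKE,John Doe,jdoe@example.com

    combined-kill.csv:

        GUID,ACTION
        F47AC10B-58CC-4372-A567-0E02B2C3D479,KILL

To give you an overview of the whole process, study the following diagram (click to enlarge):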

[Diagram: kill-file collection and synchronization workflow]

  1. On a scheduled basis (see the scheduling sketch after this list), the PowerShell script reads every deposited delta file, either a make or a kill file. The files are deleted once read.
  2. All make orders are written to a combined make file, and all kill orders are written to a combined kill file in the same way.
  3. The script executes the resocc.exe command lines to trigger syncs of the two data connections, using the datasources pointing to the combined CSV files.
  4. People are created or marked for deletion in the Service Store.
  5. The combined CSV files are deleted before the next scheduled run.
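As for the scheduling itself, something like the following will do. The script path, task name and 15-minute interval are just assumptions for the sketch, and the exact resocc.exe command-line switches are documented by RES, so I won't guess at them here:

    # Sketch: run the collection/sync script every 15 minutes as a scheduled
    # task. Script path, task name and interval are assumptions; on older
    # Windows versions -RepetitionDuration may be required as well.
    $action  = New-ScheduledTaskAction -Execute 'powershell.exe' `
               -Argument '-NoProfile -File C:\Scripts\Sync-Deltas.ps1'
    $trigger = New-ScheduledTaskTrigger -Once -At (Get-Date) `
               -RepetitionInterval (New-TimeSpan -Minutes 15)
    Register-ScheduledTask -TaskName 'ServiceStore Delta Sync' `
        -Action $action -Trigger $trigger -User 'SYSTEM'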

This method makes for an effective way of receiving multiple creation/deletion commands as part of an onboarding/offboarding scenario. If you wish to learn more about this solution, feel free to reach out.