RES Workspace 2015 SR2 – What’s new?

By Max Ranzau

 

Hello everyone, here is a technically digested overview of some of the features in the new Service Release 2 of RES Workspace 2015. Fair warning: these notes were mostly compiled from the pre-release release notes, so there may be some nuggets which did not make it into this recap. Second, this is not an exhaustive list; it covers the items I found most interesting and/or useful in my work.

One important thing to keep in mind when doing the upgrade: if you have all agents connected via Relay Servers, you must reconfigure one of them to point directly to the datastore before doing the SR2 upgrade. I suspect RES is reconfiguring the matrix, i.e. changing the database schema. Then upgrade the Relay Servers and finally all the agents.

Office 2016 Support. This is one of the most anticipated features, in my opinion. Not only does SR2 include new User Settings templates for the 2016 suite, it also supports Outlook 2016 for Email Template configuration. Nothing more to say about it, other than it seems to work as advertised when taken for a spin around the block in the RESguru Skunkworks.

Windows 10 Support. This one you need to pay close attention to: while Workspace seems to work swimmingly on Windows 10 in regards to User Settings, configuration and security – which in my view are usually the most important bits – there are some things to be aware of. One such thing is that newly created tiles do not take effect upon a session refresh: users will need to log out and back in before these changes appear. I personally view this as an issue, since we've been accustomed to shortcuts appearing at refresh since the early days of PowerMenu 2000. I know from my talks with the product teams that they are hard at work on a fix. Workspace SR2 specifically supports the Win 10 build 10240 of July 2015 and Win 10 v.1511 (OS Build 10586.29). Be sure to check your build/version first by running the winver.exe command. RES tracks and supports Win 10 updates as of May 10, 2016 (KB3156421, OS Build 10586.318, for Win 10 1511); see the update history here. Finally, it's worth mentioning there is a page in the Workspace SR2 release notes titled "Microsoft Windows 10 known limitations". It's two pages long so I won't rehash it here, but do make sure you read and understand it before you throw yourself into a Windows 10 project.
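If you'd rather check the build from a script than eyeball winver.exe, a quick bit of PowerShell against the registry does the same. A minimal sketch; note the ReleaseId value only exists from v.1511 onward:

# Standard Win 10 version keys; ReleaseId appears as of v.1511, UBR is the update build revision
$v = Get-ItemProperty 'HKLM:\SOFTWARE\Microsoft\Windows NT\CurrentVersion'
"{0} build {1}.{2}" -f $v.ReleaseId, $v.CurrentBuild, $v.UBR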

Actions: New timing option 'At application end' for Execute Command. This is one of those things that has been sitting on the backlog for what feels like half a century. And let's be honest: it's one of the features which the goonies in green have been knocking RES for not having. Long story short, this allows you to fire off sync jobs, cleanups and whatnot upon termination of an application. It almost goes without saying that you should use common sense with this feature: any app which places itself in the system tray never really terminates.

Ability to specify an account in the console for SQL Windows authentication. I've always hated dealing with the combination of WM and Windows authentication with a vengeance, mainly because it was cumbersome to make sure all the pieces lined up. For example, before SR2 you had to make sure the account you were logged in with when running the console had database access. This has been fixed, so now you can simply configure the SQL Windows credentials.

Advanced Settings: Bypass Composer setting now also supports groups. While it was useful to be able to exclude certain people, such as admins, from being hit by the Workspace Composer, it was previously a hardcoded list inside the Workspace console. Now that AD groups are enumerated, we can control this externally. For example, one could build a service around this for admins to request temporary admin permissions or similar elevations, or to ask Workspace Manager to lower its shields for a bit.

CSV export of agents: Once you have searched for your agents, there's now an icon in the Workspace toolbar to export a list of them. I could see this being useful for several automated purposes. Now all we need is a command-line switch for pwrtech.exe so this export can run unattended. If you are interested, here are the headers for the export: Computer name,Run Workspace Composer,FQDN,Domain,Operating system version,Last console user,Agent version,AppGuard version,NetGuard version,RegGuard version,ImgGuard version,Laptop,XenApp version,Citrix Site,VDX Engine version,VDX Plugin version,Last contact,Synchronization status,Connection,Connects to,Relay Server discovery,Relay Server list,Relay Server name,WebGuard version.
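Until that switch shows up, the export can still feed scripts by hand. Here's a minimal PowerShell sketch that flags agents which haven't phoned home in a week; the file name and the parseability of the Last contact column are assumptions, so adjust to what your export actually contains:

# Load the agent list exported from the Workspace console toolbar (file name is hypothetical)
$agents = Import-Csv 'C:\Temp\agents.csv'

# Flag agents not seen for 7+ days; the 'Last contact' date format may vary with regional settings
$cutoff = (Get-Date).AddDays(-7)
$agents | Where-Object { [datetime]$_.'Last contact' -lt $cutoff } |
    Select-Object 'Computer name', 'Agent version', 'Last contact'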

Overall performance enhancements. SR2 has seen a boost on the performance side in areas such as the DB cache, FileSync, direct datastore connections, Relay Servers, authorized files / file hash imports, and XenApp environments with more than 1000 published apps. Logging has been enhanced to truncate excessively repeating log entries: essentially, if something goes bump in the night more than once per minute for an hour, truncation happens. See the release notes for more info. Another item worth mentioning is that SR2 includes new kernel filter drivers, so a reboot of all affected computers is necessary when installing SR2.

New product packaging: Besides the above technical enhancements, there are also some major changes on the product packaging and pricing side. I’ve covered these in a separate article.

New File Hash Monitor tool: Okay, so I cheated a bit and gave the official corp blog a once-over after writing this article. I noticed something that wasn't in the original, uhm, pre-release release notes: the File Hash Monitor tool. Allow me to fill in a few blanks. Essentially this is a separate download from the RES portal here, which allows you to pick up file hashes ahead of time. When you install it, you specify a scan interval, a target CSV file and some target folders where your executables are, for example C:\Program Files\. Much like the Relay Server, a configuration tool is installed alongside a service, called RESFHM. The service will start generating the CSV file within a few moments after initial configuration. The resulting CSV file looks like this:

[Screenshot: File Hash Monitor output CSV, with the container scan settings shown to the right]

Once you have your CSV file cooked and done, you can import it into Workspace by running the console executable like this: PWRTECH.EXE /IMPORTHASHES=<your_csv_file> [/CREATEIFNOTEXISTS]. See page 386 in the admin guide.
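For what it's worth, here is how one might wrap that import in a scheduled PowerShell step; the console path and CSV location are assumptions for your environment:

# Paths are assumptions - adjust to your console install and the FHM target CSV
$pwrtech = 'C:\Program Files (x86)\RES Software\Workspace Manager\pwrtech.exe'
$hashes  = 'C:\Temp\filehashes.csv'

# /CREATEIFNOTEXISTS is the optional switch documented in the admin guide
& $pwrtech "/IMPORTHASHES=$hashes" /CREATEIFNOTEXISTS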

One rather cool thing which I think should be emphasized is that the ROFHMT (please tell me we're not going to call it that ;) has the ability to scan executables inside container files such as MSI, CAB, RAR, ZIP, etc. (see the screenshot above, to the right). You can add your own extensions as well and customize which tool is used to decompress them. By default it's set up to use the freeware 7-Zip to handle these.

Commandline export of the Security log: It's now possible to pull XML exports of some of the security logs. Run the export with the console binary like this: PWRTECH.EXE /EXPORTLOG /TYPE=<Logtype> /OUTPUT=<log filepath> /START=<startdate> /END=<enddate>. The following values are currently supported for <Logtype>:

Logtype value   Description
APPLICATION     Managed application security log
REMDISK         Removable disk security log
NETWORK         Network security log

Start and end dates are optional, but must be in YYYYMMDD or YYYYMMDDhhmmss format if specified. Also, make sure that the user you run the pwrtech.exe command line as has at least read permission in the administrative roles for the security subsystem whose log you want to export.
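To illustrate, here's a small PowerShell sketch that pulls last month's entries for all three log types in one go; the console path and output folder are assumptions:

# Console path and output folder are assumptions - adjust to your environment
$pwrtech = 'C:\Program Files (x86)\RES Software\Workspace Manager\pwrtech.exe'
$outdir  = 'C:\Temp\seclogs'
$start   = (Get-Date).AddMonths(-1).ToString('yyyyMMdd')
$end     = (Get-Date).ToString('yyyyMMdd')

# One XML export per supported log type
foreach ($type in 'APPLICATION', 'REMDISK', 'NETWORK') {
    & $pwrtech /EXPORTLOG "/TYPE=$type" "/OUTPUT=$outdir\$type.xml" "/START=$start" "/END=$end"
}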

While it's cool to be able to do these exports, there's still an item left on my xmas wishlist: will we ever be able to clear the log files from within the console? When establishing the Workspace security baseline on a new installation this is paramount, and yet the only way to do it is still to either hack the datastore directly or use Patrick's excellent, yet unsupported, Log Management Tool. Oh well, there's always the next FR/SR to look forward to.

In conclusion: Overall, SR2 is a solid update, well worth the Subscription Advantage. Besides the above enhancements and performance boosts, this update fixes 50+ issues and bugs. Good work! Read the final release notes here.

 

New RES product packaging, part 1 of 2

By Max Ranzau

 

From the Packaging & Shipping dept. Today some major changes were announced on the product packaging side. While they don't affect the technical operation of the products (sorry, the unified license server is not there yet), they do have conceptual impact, which we would all do well to wrap our collective gray goo around. This is the first part of a two-phase announcement; the second comes out next week, on May 24th, during Synergy. Let's run through the most important bits of the first announcement to understand what's going on here. The headlines are as follows:

  1. WM and AM are merging into one product. This means that the current stand-alone Automation product is going to be part of Workspace. Again, the consoles aren't merging; this is just a licensing and naming change.
  2. Free RES Core for Workspace. This is essentially just the consoles plus basic functionality, like we've seen in the earlier Express versions of Workspace Manager and PowerFuse. For example, Core has User Settings, but only at the global level. If you want per-app User Settings, you will need the new Composition module. See item 4 below.
  3. No more metal versions. The old Bronze, Silver and Gold names have gone the way of the Dodo. This is a good thing, because it means you can now mix and match the editions without having to start out with the mandatory Bronze (configuration and user settings).
  4. Workspace will now have 4 modules:
    • Composition – Same as always (application-based user settings, console configuration, app/shortcut management). This is more or less what used to be in the old Bronze.
    • Security – This includes the well-known managed application security, dynamic privileges/process elevation, network security, etc. One thing I didn't see on the list was Read-Only Blanketing; we'll have to see if it's still in there.
    • Governance – New name for the module formerly known as Advanced Administration. Contains administrative roles, usage tracking, auditing, performance components and license management of managed apps.
    • Automation – This is essentially Automation Manager lobbed into the mix as a WM module. It is still licensed separately per desktop, and I'll have to presume that any servers in the mix are still licensed differently than desktops. According to RES, Automation also comes with some (as of yet undefined) predefined building blocks.
  5. Pricing. The MSRP still holds at $/€30 per named user for all modules, with the exception of the free Core. However, it remains to be seen whether RES will offer a bundling discount if you purchase the whole Workspace product.

According to RES Marketing, these changes are scheduled to go into effect in early July 2016. Finally, as indicated above, this is the first of a two-part announcement, with the second going official next week during Synergy in Las Vegas. You may have noticed that Service Store was not mentioned above. I will also be investigating what the new everything-included Suite will look like. Stay tuned!

 

Removing zombies from Service Store

By Max Ranzau

 

From the Hacking Dead dept. Service Store is a fine HR data processor and workflow engine when you set it up to pull people and department data in from an authoritative data source. In a previous article I showed an example of how to do just that. However, when a person is marked as deleted in your data source, IT Store doesn't delete the user. They effectively become the living dead people of IT Store, except in this case they won't try to claim a license or your brains.

Update: This article was updated on May 8th 2016 with new and improved SQL.

Deleting a user in IT Store has always been a two-stage affair. When IT Store initially marks a person for deletion, it uses the opportunity to scan for any and all delivered services. One should not tinker with this. However, once said services have been properly returned, the user is merely marked as [Ready for deletion]. But that's all she wrote; nothing more happens.

Effectively this means that, over time, an organization with thousands of annual onboardings/offboardings (think educational institutions, for example) will have a pileup of undead, un-deleted people in IT Store. Sure, they're obscured from view until you check "Include people marked for deletion". Your only current option is to manually go Michonne on them in the console yourself. (Yes, I know – old screenshot, but it's the same deal)

Update: There is another problem with leaving people un-deleted in the Service Store: re-using people identifiers. When you delete someone, their email address can be re-registered; as long as the person has not been manually deleted from the store, it cannot.

The design rationale is that since some HR systems don't delete the employee when off-boarded, neither should ITS. Here's where I disagree. It makes sense for HR systems to keep a record of previous people for administrative reasons, but since ITS is the conduit into the rest of the IT infrastructure, there's IMHO little point in keeping a record here once you've cleaned up everywhere else. After all, during off-boarding we'd probably export the user's mailbox and zip up their home drive, as we don't want dead user remains floating around in the production environment.

At this stage there’s only one way to deal with this if you don’t want to manually flush users marked ready for deletion: Hack the IT Store database.

Like any other vendor, RES gets nervous tics and reaches for their crossbow when you start messing with the brraaaiiins grey matter of the datastores, thus the usual warnings apply: if you do this, you're on your own. See the MOAD for details. Also, may I recommend you make a backup of the datastore and KNOW how to restore it.

That said, let's look at the updated hack. It consists of three consecutive SQL delete queries. The first version of this database hack only deleted the person, but since people attributes and identifiers are stored in separate tables, they would be orphaned if you didn't clean them out before deleting the person. Presuming your datastore is running MSSQL, the new and improved SQL looks like this:

-- delete all people identifiers associated with this person
DELETE 
   FROM [$[in.db.its.name]].[dbo].[OR_PeopleIdentifiers]
      FROM [$[in.db.its.name]].[dbo].[OR_PeopleIdentifiers] AS ppli 
      INNER JOIN [$[in.db.its.name]].[dbo].[OR_Objects] AS pers 
         ON ppli.PersonGuid = pers.Guid
    WHERE pers.Type = 1 and pers.RecordStatus = 2;

-- delete all people attributes associated with this person
DELETE 
   FROM [$[in.db.its.name]].[dbo].[OR_PeopleAttributes]
      FROM [$[in.db.its.name]].[dbo].[OR_PeopleAttributes] AS ppla 
      INNER JOIN [$[in.db.its.name]].[dbo].[OR_Objects] AS pers 
         ON ppla.PersonGuid = pers.Guid
   WHERE pers.Type = 1 and pers.RecordStatus = 2;

-- delete the person
DELETE FROM [$[in.db.its.name]].[dbo].[OR_Objects]
	WHERE [$[in.db.its.name]].[dbo].[OR_Objects].Type = 1 and 
             [$[in.db.its.name]].[dbo].[OR_Objects].RecordStatus = 2;

The $[in.db.its.name] above is an Automation Manager module parameter containing the name of the ITS database. Running this query is the same as manually deleting all the users marked [Ready for deletion]. One SNAFU back in IT Store 2014 was that the people would not be removed from the ITS console until you exited and re-launched it. My guess is that the records were cached in RAM and only updated when the old IT Store was doing its own operations. This is, however, not the case with Service Store 2015, where the affected people are removed immediately.

Putting this into Automation Manager, I came across a minor problem with the SQL statement execute task. It looks like as of SR3 (7.0.3.0) the password field can't be properly parameterized. Sure, you can right-click the password field and insert a parameter, but next time you go back and edit the module, the password stops working. Until RES fixes this and puts in a proper credential-type field, you're better off hardcoding the password.
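Alternatively, you can sidestep the task's password field altogether and run the cleanup from a PowerShell task instead. A minimal sketch, assuming the SqlServer module's Invoke-Sqlcmd is available and the three delete queries above are saved to a script file; server, database, login and paths are all placeholders:

# Placeholders throughout - substitute your own server, database, SQL login and script path
Invoke-Sqlcmd -ServerInstance 'SQL01' -Database 'RESSS' `
    -Username 'res_cleanup' -Password 'S3cret!' `
    -InputFile 'C:\Scripts\purge-deleted-people.sql'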

If you're still up for it, try out this building block in your lab.

Note1: The building block has NOT been updated with the new SQL statement above; you'll need to paste that in yourself.

Note2: If you suspect you might already have orphaned people attributes or people identifiers in your datastore, you can check with these two statements:

-- test if we have any orphaned people attributes
select * from your_storedb.dbo.OR_PeopleAttributes
WHERE NOT EXISTS(SELECT NULL
                    FROM your_storedb.dbo.OR_Objects obj
                   WHERE obj.Guid = PersonGuid  )


-- test if we have any orphaned people identifiers
select * from your_storedb.dbo.OR_PeopleIdentifiers
WHERE NOT EXISTS(SELECT NULL
                    FROM your_storedb.dbo.OR_Objects obj
                   WHERE obj.Guid = PersonGuid  )

If both queries above come back with zero rows, you’re fine. Otherwise, you’ve got orphans. You can wipe them out like another Scrooge by running these two deletes:

-- delete orphaned people attributes
delete from your_storedb.dbo.OR_PeopleAttributes
where not exists (
    select NULL 
    from your_storedb.dbo.OR_Objects obj
    where obj.Guid = PersonGuid
);

-- delete orphaned people identifiers
delete from your_storedb.dbo.OR_PeopleIdentifiers
where not exists (
    select NULL 
    from your_storedb.dbo.OR_Objects obj
    where obj.Guid = PersonGuid
);

 

Using kill-files with Service Store

By Max Ranzau
 

From the MDK Division. In this article I'll cover some experiences with handling authoritative data on a super-scalable basis. For the example at hand, let's say you have an authoritative data source which only provides you deltas, i.e. you only get orders stating which people objects to create and whom to kill (whoa, that didn't come out right). You want to ensure that your list of people in the Service Store is at all times in sync with reality, based on the deltas you receive. In this example we are basing it on CSV files.

In order to handle this, you'll have to create two data connections: one that makes new people and one that kills them (oh, there I go again). This is important, because with only one data source, Service Store will delete any people records that don't have a corresponding entry in the data-sourced CSV file. This can be avoided by splitting the adds and deletes onto two separate data connectors. The key is setting the flags on the mapping pages correctly. If you don't, you risk wiping out (or at least marking for deletion) every current user in your Service Store, so pay close attention.

Assuming more than one make/kill order may come through at any point, you will need to collect these in two static CSV files, as the Service Store only knows how to read data from one CSV file, and each incoming order typically contains only one line. You will of course need to create a data source for both of these CSV files. The collection can be done with a bit of scheduled, nifty PowerShell'ing; a sketch follows below. Feel free to reach out if you have no idea how to make it.
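For reference, here is a minimal sketch of what such a collection script could look like. Everything in it is an assumption to adapt: the drop folder, the ACTION column used to tell make orders from kill orders, and the combined file names. The resocc.exe sync call is left as a placeholder, since the exact command line is in the Service Store admin guide:

# All paths and column names are assumptions - adjust to your setup
$dropDir  = 'C:\SSDrop\incoming'
$makeFile = 'C:\SSDrop\make.csv'
$killFile = 'C:\SSDrop\kill.csv'

# Read every deposited delta file, then sort the rows by their ACTION column
$rows = Get-ChildItem $dropDir -Filter *.csv | ForEach-Object { Import-Csv $_.FullName }
$rows | Where-Object { $_.ACTION -eq 'MAKE' } | Export-Csv $makeFile -NoTypeInformation
$rows | Where-Object { $_.ACTION -eq 'KILL' } | Export-Csv $killFile -NoTypeInformation

# The delta files have been consumed; remove them before the next run
Get-ChildItem $dropDir -Filter *.csv | Remove-Item

# Finally, trigger the two data connection syncs via resocc.exe;
# see the Service Store admin guide for the exact command line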

Once you have two CSV files ready for synchronization into the Service Store, you'll want to set the mapping flags on your data connectors correctly. I found the following works best. For importing people to create:

  • [X] Ignore duplicates
  • [X] Allow inserts
  • [  ] Allow updates
  • [  ] Allow deletes (mark for deletion)

For people to, ahem... "migrate to the cloud", the flags need to be configured differently. You have to allow updates in order for the mark-for-deletion mechanism to do its thing:

  • [  ] Ignore duplicates
  • [  ] Allow inserts
  • [X] Allow updates
  • [X] Allow deletes (mark for deletion)

In order for the above data connections to work, 1) both CSV files need to reference some people identifier; in my case a GUID is available per user, and 2) both the make and kill files should have a commonly named column, such as ACTION, which signifies what is happening. This will also help your script sort the incoming CSVs into the right pile. To get an overview of the process, study the following diagram (click to enlarge):

[Diagram: kill-file collection and synchronization process]

  1. On a scheduled basis, the PowerShell script reads every deposited delta file, either a make file or a kill file. The files are deleted once read.
  2. All make files are written to a combined make file, and the kill files are handled the same way.
  3. The script executes the resocc.exe command lines to trigger syncs of the two data connections, using the data sources pointing to the combined CSV files.
  4. People are created or marked for deletion in the Service Store.
  5. The combined CSV files are deleted before the next scheduled run.

This method makes for an effective way of receiving multiple creation/deletion commands as part of an onboarding/offboarding scenario. If you wish to learn more about this solution, feel free to reach out.