
RES Workspace 2015 SR2 – What’s new?

By Max Ranzau


Hello everyone, here is a technically digested overview of some of the features in the new Service Release 2 of RES Workspace 2015. Fair warning: these notes were mostly created from the release notes in the pre-release, so there may be some nuggets which did not make it into this recap. Second, this is not an exhaustive list; it's the items I found the most interesting and/or useful in my work.

One important thing to keep in mind when doing the upgrade: if you have all agents connected via Relay Servers, you must reconfigure one of them to point directly to the datastore before doing the SR2 upgrade. I'm guessing RES is reconfiguring the matrix, i.e. changing the database schema. Then upgrade the Relay Servers and finally all the agents.

Office 2016 Support. This is one of the most anticipated features, in my opinion. Not only does SR2 include new User Settings templates for the 2016 suite, it also supports Outlook 2016 for Email Template configuration. Nothing more to say about it, other than it seems to work as advertised when taken for a spin around the block in the RESguru Skunkworks.

Windows 10 Support. This one you need to pay close attention to. While Workspace seems to work swimmingly on Windows 10 in regards to User Settings, configuration and security – which in my view usually are the most important bits – there are some things to be aware of. One such thing is that newly created tiles do not take effect upon a session refresh: users will need to log out and back in before these changes take effect. I personally view this as an issue, since we've been accustomed to shortcuts appearing at refresh since the early days of PowerMenu 2000. I know from my talks with the product teams that they are hard at work fixing this. Workspace SR2 specifically supports Win 10 build 10240 as of July 2015 and Win 10 v.1511 (OS Build 10586.29). Be sure to check your build/version first by running the winver.exe command. RES tracks and supports Win 10 updates as of May 10, 2016 — KB3156421 (OS Build 10586.318) for Win 10 1511; see the update history here. Finally, it's worth mentioning there is a page in the Workspace SR2 release notes titled “Microsoft Windows 10 known limitations”. It's two pages long so I won't rehash it here, yet do make sure you read and understand it before you throw yourself into a Windows 10 project.

Actions: New timing option ‘At application end’ for Execute Command. This is one of those things that has been sitting on the backlog for what feels like half a century. And let's be honest; it's one of the features which the goonies in green have been knocking RES for not having. Long story short, this allows you to fire off sync jobs, cleanups and whatnot upon termination of an application. It almost goes without saying that you should use common sense with this feature: any app which places itself in the system tray never really terminates.

Ability to specify an account in the console for SQL Windows authentication. I've always hated dealing with the combination of WM and Windows authentication with a vengeance, mainly because it was cumbersome to make sure all the pieces lined up. For example, before SR2 you had to make sure the account you were logged in with when running the Windows console had database access. This has been fixed, so now you can just configure the SQL Windows credentials.

Advanced Settings: Bypass composer setting now also supports groups. While it was useful to be able to exclude certain people, such as admins, from being hit by Workspace Manager, it was previously a hardcoded list inside the Workspace console. Now that AD groups are enumerated, we can control this externally. For example, one could build a service around this for admins to request temporary admin permissions or similar elevations, asking Workspace Manager to lower its shields for a bit.

CSV export of agents: Once you have searched for your agents, there's now an icon in the Workspace toolbar to export a list of agents. I could see this being useful for several automated purposes. Now all we need is a command-line switch for pwrtech.exe to run this export unattended. If you are interested, here are the headers for the export: Computer name,Run Workspace Composer,FQDN,Domain,Operating system version,Last console user,Agent version,AppGuard version,NetGuard version,RegGuard version,ImgGuard version,Laptop,XenApp version,Citrix Site,VDX Engine version,VDX Plugin version,Last contact,Synchronization status,Connection,Connects to,Relay Server discovery,Relay Server list,Relay Server name,WebGuard version.
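Since the export is plain CSV with the headers above, post-processing it is trivial. Here is a minimal sketch (the sample rows and the helper name are mine, not part of the product) that flags agents lagging behind a given agent version:

```python
import csv
import io

# Two sample rows in the export format listed above; only the
# "Computer name" and "Agent version" columns are used here.
sample = """Computer name,Agent version,Last contact
LAB-WS01,2015 SR2,2016-05-10 09:12
LAB-WS02,2015 SR1,2016-04-01 17:45
"""

def agents_not_on(csv_text, wanted_version):
    """Return computer names whose Agent version differs from the wanted one."""
    reader = csv.DictReader(io.StringIO(csv_text))
    return [row["Computer name"] for row in reader
            if row["Agent version"] != wanted_version]

print(agents_not_on(sample, "2015 SR2"))  # ['LAB-WS02']
```

The same pattern works for any of the other columns, e.g. filtering on Last contact to find stale agents.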

Overall performance enhancements. SR2 has seen a boost on the performance side, in areas such as the DB cache, FileSync, direct datastore connections, Relay Servers, authorized files / file hash imports, and XenApp environments with more than 1000 published apps. Logging has been enhanced to truncate excessive repeating log entries. Essentially, if something goes bump in the night more than once per minute for an hour, truncation happens. See the release notes for more info. Another item worth mentioning: SR2 includes new kernel filter drivers, so a reboot of all affected computers is necessary when installing SR2.

New product packaging: Besides the above technical enhancements, there are also some major changes on the product packaging and pricing side. I’ve covered these in a separate article.

New File Hash Monitor tool: Okay, so I cheated a bit and gave the official corp blog a once-over after writing this article. I noticed something that wasn't in the original, uhm, prerelease-release notes: the File Hash Monitor tool. Allow me to fill in a few blanks. Essentially this is a separate download from the RES portal here, which allows you to pick up file hashes ahead of time. When you install it, you specify a scan interval, a target CSV file and some target folders where your executables are, for example C:\Program Files\. Much like the Relay Server, a configuration tool is installed alongside a service called RESFHM. The service will start generating the CSV file within a few moments after initial configuration. The resulting CSV file looks like this:


Once you have your CSV file cooked and done, you can import it into Workspace by running the console executable like this: PWRTECH.EXE /IMPORTHASHES=<your_csv_file> [/CREATEIFNOTEXISTS]. See page 386 in the admin guide.
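For a feel of what the tool's scan-and-write loop does, here is a rough, hypothetical sketch. The column names, the hash algorithm and the .exe-only filter are my assumptions for illustration, not the tool's documented output format:

```python
import csv
import hashlib
import os

def hash_executables(root, out_csv):
    """Walk a folder tree and write a CSV of executable paths and hashes.
    A rough approximation of what the File Hash Monitor service does; the
    columns and algorithm here are assumptions, not the tool's real format."""
    with open(out_csv, "w", newline="") as fh:
        writer = csv.writer(fh)
        writer.writerow(["Path", "SHA256"])
        for dirpath, _dirs, files in os.walk(root):
            for name in files:
                if not name.lower().endswith(".exe"):
                    continue  # only pick up executables
                full = os.path.join(dirpath, name)
                with open(full, "rb") as exe:
                    digest = hashlib.sha256(exe.read()).hexdigest()
                writer.writerow([full, digest])
```

Pointing such a scan at C:\Program Files\ on a reference machine ahead of time is exactly the kind of head start the real tool gives you before enabling authorized files.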

One rather cool thing which I think should be emphasized is that the ROFHMT (please tell me we're not going to call it that ;) has the ability to scan executables inside container files such as MSI, CAB, RAR, ZIP, etc. (see screenshot above to the right). You can add your own extensions as well and customize which tool is used to decompress them. By default it's set up to use the freeware 7-Zip to handle these.

Commandline export of the Security log: Now it’s possible to pull out XML exports for some of the security logs. Use the console binary to run the export as: PWRTECH.EXE /EXPORTLOG /TYPE=<Logtype> /OUTPUT=<log filepath> /START=<startdate> /END=<enddate>. Currently for ‘logtype’ the following logs are supported:

Logtype value Description
APPLICATION Managed app security log
REMDISK Removable disk security log
NETWORK Network security log

Start and end dates are optional, yet must be in YYYYMMDD or YYYYMMDDhhmmss format if specified. Also, make sure that the user you run the pwrtech.exe command line as has at least read permission in the administrative roles for the security subsystem whose log you want to export.
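To avoid fat-fingering those date formats in a scheduled job, one could wrap the command-line assembly in a small script. This helper and its validation are my own sketch, not something RES ships:

```python
import re

# Log types the export currently supports, per the table above.
VALID_LOGTYPES = {"APPLICATION", "REMDISK", "NETWORK"}

def build_export_command(logtype, output, start=None, end=None):
    """Assemble the PWRTECH.EXE security-log export command line, validating
    the optional dates against YYYYMMDD / YYYYMMDDhhmmss before use."""
    if logtype not in VALID_LOGTYPES:
        raise ValueError("unsupported log type: %s" % logtype)
    parts = ["PWRTECH.EXE", "/EXPORTLOG", "/TYPE=%s" % logtype,
             "/OUTPUT=%s" % output]
    for flag, value in (("/START=", start), ("/END=", end)):
        if value is None:
            continue
        if not re.fullmatch(r"\d{8}(\d{6})?", value):
            raise ValueError("dates must be YYYYMMDD or YYYYMMDDhhmmss")
        parts.append(flag + value)
    return " ".join(parts)

print(build_export_command("NETWORK", r"C:\logs\net.xml", start="20160101"))
# PWRTECH.EXE /EXPORTLOG /TYPE=NETWORK /OUTPUT=C:\logs\net.xml /START=20160101
```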

While it's cool to be able to do these exports, there's still an item left on my xmas wishlist: will we ever be able to clear the logfiles from within the console? When doing the baseline security work on a new Workspace installation this is paramount, and yet the only way to do it is still either hacking the datastore directly or using Patrick's excellent, yet unsupported, Log Management Tool. Oh well, there's always the next FR/SR to look forward to.

In conclusion: Overall, SR2 is a solid update, well worth the subscription advantage. Besides the above enhancements and performance boosts, this update fixes 50+ issues and bugs. Good work! Read the final release notes here.


New RES product packaging, part 1 of 2

By Max Ranzau


From the Packaging & Shipping dept. Today some major changes were announced on the product packaging and pricing side. While it doesn't affect the technical operations of the products (sorry, the unified license server is not there yet), it does have conceptual impact, which we would all do well to wrap our collective gray goo around. This is the first part of a two-phase announcement; the second one is coming out next week, May 24th, during Synergy. Let's run through the most important bits of the first announcement to understand what's going on. The headlines are as follows:

  1. WM and AM are merging into one product. This means that the current stand-alone Automation product is going to be part of Workspace. Again, the consoles aren't merging; this is just a licensing and naming change.
  2. Free RES Core for Workspace. This is essentially just the consoles plus basic functionality, like we’ve seen in the earlier Express versions of Workspace Manager and PowerFuse. For example Core has UserSettings, however only at the global level. If you want the per-app user settings, you will need the new Composition module. See item 4 below.
  3. No more metal versions. The old Bronze, Silver and Gold names have gone the way of the Dodo. This is a good thing, because it means you can now mix and match the editions without having to start out with the mandatory Bronze (configuration and user settings).
  4. Workspace will now have 4 modules:
    • Composition – Same as always (application based user settings, console configuration, app/shortcut management). This is what used to be in the old Bronze more or less.
    • Security – This includes the well-known managed app security, dynamic privileges/process elevation, network security, etc. One thing I didn't see on the list was read-only blanketing; however, we'll have to see if it's still in there.
    • Governance – New name for the module formerly known as Advanced Administration. Contains administrative roles, usage tracking, auditing, performance components and license management of managed apps.
    • Automation – This is essentially Automation Manager lobbed into the mix as a Workspace module, where desktop licensing is inferred; however these are still licensed separately per desktop, and I'll have to presume that any needed servers in the mix are still licensed differently than desktops. According to RES, Automation also comes with some (as of yet undefined) predefined building blocks.
  5. Pricing. The MSRP still holds at $€30 per named user for all modules, with the exception of the free Core. However, it still remains to be seen if RES will be offering a bundling discount if you purchase the whole Workspace product.

According to RES Marketing, these changes are scheduled to go into effect early July 2016. Finally, as indicated above, this is the first of a two-part announcement, the second going official next week during Synergy in Las Vegas. Notably, Service Store was not mentioned above. I will also be investigating what the new everything-included Suite will look like. Stay tuned!


Removing zombies from Service Store

By Max Ranzau


From the Hacking Dead dept. Service Store is a fine HR data processor and workflow engine when you set it up to pull people and department data in from an authoritative data source. In a previous article I showed an example of how to do just that. However, when a person is marked as deleted in your data source, IT Store doesn't delete the user. They are effectively the living dead of IT Store, except in this case they won't try to claim a license or your brains.

Update: This article was updated on May 8th 2016 with new and improved SQL.

Deleting a user in IT Store has always been a two-stage affair. Initially when IT Store marks a person for deletion it uses the opportunity to scan for any and all delivered services. One should not tinker with this. However, once mentioned services have been properly returned, the user is then marked as [Ready for deletion]. But that’s all she wrote. Nothing more happens.

Effectively this means that, over time, an organization with thousands of annual onboardings/offboardings (think educational institutions, for example) will have a pileup of undead, un-deleted people in IT Store. Sure, they're obscured from view until you check “Include people marked for deletion”. Your only current option is to manually go Michonne on them in the console yourself. (Yes, I know – old screenshot, but it's the same deal.)

Update: There is also another problem with leaving people un-deleted in the ServiceStore: re-using people identifiers. When you delete someone, their email address can be re-registered; this is not the case if the person has not been manually deleted from the store.

The design rationale is that since some HR systems don’t delete the employee when off-boarded, then neither should ITS. Here’s where I disagree. It makes sense for HR systems to keep a record of previous people for administrative reasons, but since ITS is the conduit into the rest of the IT infrastructure organization, there’s IMHO little point in keeping a record here once you’ve cleaned up everywhere else. After all, during off-boarding we’d probably be exporting the user’s mailbox and zip up his homedrive as we don’t want dead user remains floating around in the production environment.

At this stage there’s only one way to deal with this if you don’t want to manually flush users marked ready for deletion: Hack the IT Store database.

Like any other vendor, RES gets nervous tics and reaches for their crossbow when you start messing with the brraaaiiins grey matter of the datastores, thus the usual warnings apply: if you do this, you're on your own. See the MOAD for details. Also, may I recommend you make a backup of the datastore and KNOW how to restore it.

That said, let's look at the updated hack. It consists of 3 consecutive SQL delete queries. The first version of this database hack only deleted the person, but since people attributes and identifiers are stored in separate tables, they would be orphaned if you didn't clean them out before deleting the person. Presuming your datastore is running MSSQL, the new and improved SQL looks like this:

-- delete all people identifiers associated with this person
DELETE ppli
   FROM [$[]].[dbo].[OR_PeopleIdentifiers] AS ppli
   INNER JOIN [$[]].[dbo].[OR_Objects] AS pers
      ON ppli.PersonGuid = pers.Guid
   WHERE pers.Type = 1 and pers.RecordStatus = 2;

-- delete all people attributes associated with this person
DELETE ppla
   FROM [$[]].[dbo].[OR_PeopleAttributes] AS ppla
   INNER JOIN [$[]].[dbo].[OR_Objects] AS pers
      ON ppla.PersonGuid = pers.Guid
   WHERE pers.Type = 1 and pers.RecordStatus = 2;

-- delete the person
DELETE FROM [$[]].[dbo].[OR_Objects]
	WHERE [$[]].[dbo].[OR_Objects].Type = 1 and 
             [$[]].[dbo].[OR_Objects].RecordStatus = 2;

The $[] above is an Automation Manager module parameter containing the name of the ITS database. Running this update query is the same as manually deleting all the users marked [Ready for deletion]. One SNAFU back in IT Store 2014 was that people would not be removed from the ITS console before you exited and re-launched it; my guess is that the records were cached in RAM and only updated when the old IT Store was doing its own operations. This is not the case with ServiceStore 2015, where the affected people are removed immediately.
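If you want to try the hack outside Automation Manager first (say, in SSMS against a scratch copy), it can be handy to expand the $[] parameter into concrete statements yourself. A small sketch of that; the helper and template are my own convenience, not RES-supplied:

```python
# T-SQL template for cleaning one child table; {db} and {table} are
# substituted below, mirroring the $[] module parameter expansion.
CLEANUP_TEMPLATE = """\
DELETE child
   FROM [{db}].[dbo].[{table}] AS child
   INNER JOIN [{db}].[dbo].[OR_Objects] AS pers
      ON child.PersonGuid = pers.Guid
   WHERE pers.Type = 1 AND pers.RecordStatus = 2;"""

def render_cleanup(db_name):
    """Return the three delete statements in the order they must run:
    identifiers and attributes first, then the person objects."""
    stmts = [CLEANUP_TEMPLATE.format(db=db_name, table=tbl)
             for tbl in ("OR_PeopleIdentifiers", "OR_PeopleAttributes")]
    stmts.append("DELETE FROM [{db}].[dbo].[OR_Objects]\n"
                 "   WHERE Type = 1 AND RecordStatus = 2;".format(db=db_name))
    return stmts
```

Feed the output to your SQL client of choice, one statement at a time, against the scratch database only.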

Putting this into Automation Manager, I came across a minor problem with the SQL statement execute task in Automation Manager. It looks like, as of SR3, the password field can't be properly parameterized. Sure, you can right-click on the password field and insert a parameter, but the next time you go back and edit the module, the password stops working. Until RES fixes this and puts in a proper credential-type field, you're better off hardcoding the password.

If you're still up for it, try out this building block in your lab:

Note 1: The building block has NOT been updated with the new SQL statements above; you'll need to paste those in yourself.

Note 2: If you suspect you might already have orphaned people attributes or people identifiers in your datastore, you can check with these two statements:

-- test if we have any orphaned people attributes
select * from your_storedb.dbo.OR_PeopleAttributes
 where not exists ( select NULL
                      FROM your_storedb.dbo.OR_Objects obj
                     WHERE obj.Guid = PersonGuid )

-- test if we have any orphaned people identifiers
select * from your_storedb.dbo.OR_PeopleIdentifiers
 where not exists ( select NULL
                      FROM your_storedb.dbo.OR_Objects obj
                     WHERE obj.Guid = PersonGuid )

If both queries above come back with zero rows, you’re fine. Otherwise, you’ve got orphans. You can wipe them out like another Scrooge by running these two deletes:

-- delete orphaned people attributes
delete from your_storedb.dbo.OR_PeopleAttributes
where not exists (
    select NULL
    from your_storedb.dbo.OR_Objects obj
    where obj.Guid = PersonGuid
)

-- delete orphaned people identifiers
delete from your_storedb.dbo.OR_PeopleIdentifiers
where not exists (
    select NULL
    from your_storedb.dbo.OR_Objects obj
    where obj.Guid = PersonGuid
)


Using kill-files with Service Store

By Max Ranzau

From the MDK Division. In this article I'll cover some experiences regarding handling authoritative data on a super-scalable basis. For the example at hand, let's say you have an authoritative datasource which only provides you deltas, i.e. you only get orders for which people objects to create and who to kill (whoa, that didn't come out right). You want to ensure that your list of people in the ServiceStore is at all times in sync with reality, based on the deltas you receive. In our example we are basing this off CSV files.

In order to handle this, you'll have to create two data connections: one that makes new people and one that kills them (oh, there I go again). This is important, as with only one data source, Service Store will delete any people records for which there isn't a corresponding entry in the datasourced CSV file. This can be avoided by splitting adds and deletes onto two separate data connectors. The key is using the flags on the mapping pages correctly. If you don't, you risk wiping out (or at least marking for deletion) every current user in your Service Store, so pay close attention.

Assuming there might be more than one make/kill order coming through at any point, you will need to collect these in two static CSV files, as the ServiceStore only knows how to read data from one CSV file; each of the incoming orders typically only contains one order/line. You will of course need to create a datasource for both of these CSV files. The collection can be done with a bit of nifty scheduled PowerShell'ing. Feel free to reach out if you have no idea how to make it.

Once you have two CSV files ready for synchronization into ServiceStore, you’ll want to set up your data connector mapping flags correctly. I found the following works best. For importing people to create:

  • [X] Ignore duplicates
  • [X] Allow inserts
  • [  ] Allow updates
  • [  ] Allow deletes (mark for deletion)

For people to, ahem…”migrate to the cloud”, the flags need to be configured differently. You will have to allow updates in order for the mark-for-deletion mechanism to do its thing.

  • [  ] Ignore duplicates
  • [  ] Allow inserts
  • [X] Allow updates
  • [X] Allow deletes (mark for deletion)

In order for the above data connections to work: 1) both CSV files need to reference some people identifier; in my case a GUID is available per user. 2) Both the make and kill files should have a commonly named column, such as ACTION, which signifies what is happening. This will also help your script sort the incoming CSVs into the right pile. To give you an overview of the process, study the following diagram (click to enlarge):


  1. On a scheduled basis, the PowerShell script reads every deposited delta file, either a make or a kill file. The files are deleted once read.
  2. All make-files are written to a combined make file, and all kill-files are handled the same way.
  3. The script executes the resocc.exe command lines to trigger syncs of the two data connections, using the datasources pointing to the combined CSV files.
  4. People are created or marked for deletion in the service store.
  5. The collection CSV files are deleted before the next scheduled run.
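Steps 1, 2 and 5 above could be sketched as follows. I'm showing Python here rather than the PowerShell I actually used, and the file layout and the CREATE/DELETE action values are assumptions for this sketch:

```python
import csv
import glob
import os

def collect_deltas(inbox_dir, make_csv, kill_csv, action_col="ACTION"):
    """Read every deposited delta CSV, split rows into a combined make file
    and a combined kill file based on the shared ACTION column, and delete
    the processed inputs. CREATE/DELETE values are assumptions."""
    make_rows, kill_rows, header = [], [], None
    for path in sorted(glob.glob(os.path.join(inbox_dir, "*.csv"))):
        with open(path, newline="") as fh:
            for row in csv.DictReader(fh):
                header = header or list(row)
                (make_rows if row[action_col] == "CREATE"
                 else kill_rows).append(row)
        os.remove(path)  # step 1: delta files are deleted once read
    # step 2: write the two combined files the data connections point at
    for target, rows in ((make_csv, make_rows), (kill_csv, kill_rows)):
        with open(target, "w", newline="") as fh:
            writer = csv.DictWriter(fh, fieldnames=header or [action_col])
            writer.writeheader()
            writer.writerows(rows)
```

After this runs, the two resocc.exe sync command lines (step 3) can be fired against the freshly combined files.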

This method makes for an effective way of receiving multiple creation/deletion commands as part of an onboading/offboarding scenario. If you wish to learn more about this solution, feel free to reach out.

Reconfiguring ServiceStore to a new datastore

By Max Ranzau


This article will help you change an environment from an existing RES Service Store database to a fresh new one. In my case I needed to spin up a fresh database to weed out an error which I did not want to import into my production environment, thus I needed a temporary disposable working servicestore without having to build one from scratch in my development environment. In other words, this operation allows you to switch your servicestore back and forth between several databases. There’s however a couple of things you’ll need before you start:

  • Current SA password for the SQL server
  • Current Catalog Services password as we don’t want to bother changing that everywhere
  1. First, start the Setup & Sync Tool, and go to setup |database in the menu.
  2. Do not touch the settings there, just hit the create button in the lower left.

  3. Fill everything out in the next wizard. Note that the password field is not pre-filled with your SA password; it's just a bunch of dots, and you will have to know the SA password to continue here. Note to anyone who'd bother putting it in UserVoice: the password field should be blank.

  4. After successful authentication, name your new datastore:

  5. Size your database correctly. If this is just a dummy/scratch database, you can just leave it at the defaults. Otherwise if you’re importing a massive amount of stuff from buildingblocks, you’ll do well to size the DB accordingly.

  6. Enter the new SQL credentials. Note that even though you might already have an SQL user you’d like to use, the installer insists on creating a new one. This can be fixed later, so just enter something sensible and continue.

  7. If circumstances allow, re-use the current Catalog Services password as that will save you a boatload of configuration

  8. Hit Next, verify everything looks kosher, then hit the Create button. The wizard will now generate a new datastore with fresh new tables from scratch.

  9. Once you hit Finish the Setup&Sync thingy will relaunch, pointing to the new, empty servicestore DB.
  10. If you're just spinning up a temporary blank store to test something, you may want to re-use the SQL user from your previous database. This will save you even more reconfiguration hassle. First, hop into SQL Management Studio and give the old SQL user DBO permissions on the newly created database: right-click the old SQL user, choose Properties (1), go to the User Mapping section and checkmark the new database (2), then checkmark db_owner at the bottom (3).

  11. Go to your catalog server(s) and start regedit. Navigate to HKEY_LOCAL_MACHINE\SOFTWARE\RES\ITStore\Catalog Services and change the database name in the REG_SZ value DBName to the new database name
  12. Start the services snapin and bounce the service “RES ONE Service Store Catalog Services” aka RESOCS
  13. Go to your transaction engine(s) and change the REG_SZ DBname to the same name in HKEY_LOCAL_MACHINE\SOFTWARE\RES\ITStore\Transaction Engine
  14. While there, bounce the service “RES ONE Service Store Transaction Engine” aka RESOTE
  15. Fire up your browser and go to the store /management site. Once there you go to the burger menu, setup, datastore. Since we kept the SQL username and password the same, the only thing you will need to change is the database name as shown. The console will offer a dropdown showing all the databases available, including your new one.

  16. Once you’ve saved the changes, use the Test Connection to verify you’re good to go, then hit save. Note that you’ll be kicked out of the console for re-login, but since there are no security roles defined in the blank database, you’ll be able to log right back in, using your normal administrative account.

You are now running on a fresh new database with the 25 built-in 45-day eval licenses. The same process can be reversed to hop back to your original database, should you be so inclined. Do however remember to also run the Setup & Sync Tool and go to Setup|Database, pointing it at the correct database. Just because you point the web interface at one datastore, the S&ST can still be pointing at another.

Overall, the advantage of this approach is that you do not have to change any additional servicestore components such as the mobile clients, website or windows clients as they’re all pointing to either the website or catalog servers, which haven’t changed.

A couple of closing notes: If you are using this approach to debug a ServiceStore database, where you need to restore the database every so often from a previous backup, you are likely to run into the problem that the DB restore takes forever. It'll just sit there waiting for hell to freeze over, eventually failing because the datastore is in use. There is an easy way around this. WARNING: Use this only on a garbage ServiceStore datastore which you're going to discard anyway.

  1. On the SQLserver housing the ServiceStore database you want to unlock, start a command prompt
  2. Start SQLCMD (should be in the path on a SQL box)
  3. Paste this in at the 1> prompt: ALTER DATABASE yourSSDBname SET OFFLINE WITH ROLLBACK IMMEDIATE
  4. Hit enter
  5. Type GO at the 2> prompt and hit enter again.

This will take a few moments, but is lightning fast compared to the alternative. It will produce output something like this: Nonqualified transactions are being rolled back. Estimated rollback completion: 0%. Nonqualified transactions are being rolled back. Estimated rollback completion: 100%.  Once complete the locked servicestore database is offline and you can restore it immediately.


Moving a WMDB across environments

By Max Ranzau


From the Desperate Measures Dept. This article is the result of half a day's work resurrecting a WM database back from the grave, or more to the point, from an old environment in a different domain. There were no building blocks, only a full database backup. Ever the optimist, I figured it would be an easy win: a few minutes of restoring, running /lockedout on the Workspace Manager console, adding a new user to the Technical Manager admin role. Boy, was I wrong… When the full scope of what I had to do dawned on me, I was shouting repeated references to the cocktail above.

Here's the skinny: I had to move a Workspace Manager database from one environment to another, which meant a new domain too. On the new SQL server, which was going to be the new home for the WM datastore, I restored the DB backup, set up a SQL user for it, and finally installed a WM console and configured it to hit the datastore. Obviously I was locked out, so I ran pwrtech.exe /lockedout. Then this happens:

Here it is again for the search engines: Access is only permitted from the following domains. Yes, the truth slowly dawns on you: you're locked out of your own DBMS for real, and the only saving grace (/lockedout) meant to pull you out of that s***hole does absolutely bupkis! I'll spare you all the futile attempts I went through and cut straight to what worked. It's more than likely not the quick solve you're hoping for, but at least it'll get your WM datastore back in one piece.

Step 1: Identify the former domain FQDN

While I recalled the NetBIOS name of the old environment, I was unsure of the FQDN of the old domain where the WM database used to live. It can be identified like this: open the WM backup in SQL Management Studio, open the table tblObjects and look for objects of lngObjectType=64 (click to enlarge).

Step 2: Build a sandbox

In a VM sandbox, build a new clean Domain Controller with the same FQDN as found in Step 1. Yes, there's no use crying about it; it will take the time it takes, and it's the only way as things are as of WM 2015 SR1. This box is temporary and can be tossed out when you're done with this long, sad exercise. The OS shouldn't matter too much, as we only need this box to import and cleanse the WM datastore of whatever gunk is preventing the unlock.

Step 2, continued – Prep the sandbox

  • On top of the sandbox DC, install the same or a newer version of the DBMS the database came from, or you likely won't be able to restore your WM database. I'm presuming this to be MSSQL. Be sure to set it up for mixed mode authentication.
  • Restore the WM database into the sandbox DC/SQL.
  • Install the WM Console MSI, preferably the same version as you had in the old environment, however a newer one should [probably] work nicely as well.
  • Run pwrtech.exe /lockedout (copy the path from the console shortcut) and enter your SA credentials and login in the dialog shown above.

And poof! Another roadblock went up. I vaguely recalled from the old environment, that Domain admins were technical managers, but I’d have expected Workspace Manager to be smarter than this:

Sure enough, for the S&Giggles, I tried creating another domain admin and got the same result. Only one way forward then…


Step 3 – Create a non-admin user and restore access

  • Create a normal user member of domain users. You can’t make local users here since we’re on a domain controller.
  • Make sure to add this user to print operators as these guys can log on locally, we’ll need this on a server.
  • Log off, log back in with the new normal user
  • Start pwrtech.exe /lockedout, fill in the SQL credentials to the DB and voilà! Access should be restored.

Step 4 – Prep the sandbox database

Now we have managed to pry open the sandboxed WM database, it’s a matter of getting it prepped, so we can put it back into action in the new target environment. First, let’s take a look at what the administrative roles looks like as of now:

As you can see above, it’s quite obviously pointing to the old LAB domain and the domain admins group, the latter is why logging in with any domain administrator didn’t make any change. Our normal non-admin user has been added as expected as a result of the /lockedout procedure. Next we’re going to add Local Computer as a directory service:

For good measure we’re going to bump the local computers directory service up to first priority:

Note: There’s no point in adding the target domain just yet as it probably can’t be resolved from the sandbox anyway.

  • Once the localcomputer directory service is added, hit Shift+F5 or go to the File|Reload now menu in the console, for the new directory service to be made available elsewhere.
  • Go back to the Technical manager admin role and edit it.
  • Hit Add, Users/Groups and follow the yellow brick road as numbered below

  1. Change the dropdown to Local Computer
  2. Uncheck “Limit to this computer only (NAME)”
  3. Hit Search
  4. Pick the normal user you created
  5. Hit OK

If you didn’t dun goof’d, it should look like this:

Step 5 – Make a new backup

Now that we’re done playing doctor on the database in the sandbox, let’s back it up and move it over to the target environment. Fire up SQL Management Studio, right-click the RES database and choose Backup:

Unless you intentionally want to overwrite the old backup, be sure to remove it from the destination list and add a new target backup file:
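If you prefer scripting over clicking, the equivalent T-SQL can be run via the SqlServer PowerShell module. The instance, database name and backup path below are examples based on my lab, adjust to taste:

```powershell
# T-SQL equivalent of the GUI backup, run through Invoke-Sqlcmd
Invoke-Sqlcmd -ServerInstance 'localhost' -Query @"
BACKUP DATABASE [RESWM]
TO DISK = N'C:\Program Files\Microsoft SQL Server\MSSQL12.MSSQLSERVER\MSSQL\Backup\RESWM-target.bak'
WITH INIT, NAME = N'RESWM doctored for target domain';
"@
```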

When the backup is done, go to the backup location (C:\Program Files\Microsoft SQL Server\MSSQL12.MSSQLSERVER\MSSQL\Backup). You may be prompted by the lil’ fella on the right, but not to worry: just hit Continue, enter your admin credentials if prompted, and everything will be fine.

I noted that, for reasons I can’t explain, the size of the datastore jumped threefold. Hopefully this doesn’t scale linearly for larger databases!

Step 6 – Restoring the doctored WM database

It’s now time to take the backup of the doctored WM datastore out of the sandbox and put it into the target environment. Once the SQL backup file is copied over to the target SQL box, you’ll probably want to give the destination database the same name as the old busted one, so any consoles pointing to it don’t have to be reconfigured.

When you delete the old database (make sure it’s the right one!), check the Close existing connections checkbox.
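For reference, deleting with Close existing connections checked corresponds roughly to this T-SQL, here run through PowerShell. The instance and database name are examples:

```powershell
# Kick out existing connections, then drop the busted database
Invoke-Sqlcmd -ServerInstance 'localhost' -Query @"
ALTER DATABASE [RESWM] SET SINGLE_USER WITH ROLLBACK IMMEDIATE;
DROP DATABASE [RESWM];
"@
```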

Now let’s restore the doctored database. Be sure your restore settings look something like this:

Once the database has been restored, make sure you create/edit an SQL user that has dbo rights on the datastore. In my case I already had a RESWM user; I just needed to configure the user mappings so ownership and permissions were okay:

If the above fails, it may be because an orphaned SQL user (a user without a login) with the same name already exists in the RESWM datastore. Go to Security under the WM database and knock it out:
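If you’d rather remap the orphaned user instead of deleting it, a one-liner along these lines should do it. The instance, database, user and login names are examples from my RESWM setup:

```powershell
# Remap an orphaned database user to the server login of the same name
Invoke-Sqlcmd -ServerInstance 'localhost' -Database 'RESWM' `
    -Query "ALTER USER [RESWM] WITH LOGIN = [RESWM];"
```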

Step 7 – Set up a local user in target environment

We’re almost there now. Go to a computer in the target domain environment where you have the RES WM console available, but don’t launch it just yet. Remember, we have to add the local user we configured in the WM console in the sandbox earlier. If you’re on a domain controller, just create a domain user like you did before; however, I’m going to presume you’re doing this on a member server. If so, go to Server Manager | Tools | Computer Management | System Tools | Local Users and Groups | Users | right-click | New User:

Add the ‘normal’ user. The password doesn’t matter. For good measure, add it to the Print Operators group again; it may not be strictly necessary, but hey, the account is temporary anyway:

Now hold down Shift, right-click the Workspace Manager desktop shortcut and choose Run as different user from the context menu, entering the name of the computer you’re on along with the credentials of the normal user:
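A scripted sketch of this step, assuming the default install path and my example user name. The LocalAccounts cmdlets are available on Windows 10/Server 2016 and later; on older boxes, net user does the same job:

```powershell
# Create the temporary local user and add it to Print Operators (name/password are examples)
$pw = ConvertTo-SecureString 'P@ssw0rd1' -AsPlainText -Force
New-LocalUser -Name 'normal' -Password $pw
Add-LocalGroupMember -Group 'Print Operators' -Member 'normal'

# Same effect as Shift+right-click | Run as different user on the console shortcut
$cred = New-Object System.Management.Automation.PSCredential("$env:COMPUTERNAME\normal", $pw)
Start-Process -FilePath 'C:\Program Files (x86)\RES Software\Workspace Manager\pwrtech.exe' -Credential $cred
```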

Step 8 – Configure console in the target domain

The console should now launch. Go into User Context | Directory Services and add the current domain:

Note: You won’t be able to either browse or test the new domain, as you’re not running the console as a domain user at the moment.

Go back into the TechAdm role and delete any references to the LAB domain. We can’t add target domain accounts just yet, as we’re still running the console with a local user. Make sure you don’t delete .\normal yet.

Now exit the console completely and run the “C:\Program Files (x86)\RES Software\Workspace Manager\pwrtech.exe” /lockedout command. Once again the console executable will prompt you for the SQL credentials, but now, since we’ve added the target domain as a directory service and removed all references to the old LAB domain, we will have success:

Step 9 – Cleanup

We now have full access to the database again. There are only a few housekeeping items left to do:

  1. Add any additional necessary users and groups to the TechAdm role
  2. Delete the .\normal user from the TechAdm role
  3. Delete the LocalComputer directory service from User Context
  4. Delete the ‘normal’ local user account
  5. Power down/Save/Revert/Kill the sandbox DC+SQL as you don’t need it anymore.

Conclusion: They said it couldn’t be done and that the only way was to restore the buildingblocks that weren’t there. But as you may know, the RESguru never takes no for an answer, neither from a piece of stubborn software, nor incidentally from short managers with girly names! Be that as it may, this was by far the longest workaround I’ve had to come up with in a very long time. I can only hope that RES fixes this snafu so no one else has to jump through the same hoops I did to get my WM database back.


Setting up a WM console on a jumpbox

By Max Ranzau


From the Multiple Hoops dept. The other day I was tasked with setting up a Workspace Manager console on a jumpbox. You know, the typical setup where you VPN into a non-domain-member computer at a client, from where you RDS to the different servers you need to access. The wish is to have the RES WM console running on this box so you don’t have to do Inception-RDS to make a few changes in WM, thus preserving screen real estate. Note: this will of course only work if your jumpbox is allowed to hit the database directly. If the jumpbox is firewalled to the hilt and only allows outbound RDS connections, stop reading right here.

Presuming you’re still with us, you might already have installed the WM console on your jumpbox and connected it to the relay server. When you launch it, you’ll get kicked right back out as the console looks for your local computername\username in the datastore and obviously it’s not there yet, so let’s add it:

The above sounds simple enough, but it turns out there are a few steps to go through, which incidentally left me wondering whether there was an easier way to do it. I mean, under Applications you can add users manually, but no such luck with Admin Roles… (hint hint, nudge nudge dear product management ;)

  1. Assuming you already have WM running on one or more domain-enabled computers, go to one of these. Presuming it’s a Server 2012[R2], launch Server Manager, go to the Tools menu and open Computer Management.
  2. Go to System Tools | Local Users and Groups | Users and add a local user. The user name and password must be the same as for the jumpbox local user. This account is temporary and can be nuked at the end of the story.
  3. Now launch the WM console, go to User Context | Directory Services and choose New from the toolbar.
  4. In the dialog, choose Local Computer from the Type dropdown and hit OK. No further changes are necessary. WM now understands that local computer accounts can be used for access control, which also applies to Administrative Roles.
  5. Go to Administration | Administrative Roles | <your security role> | Access Control tab | Add button | Users/Groups.
  6. Choose Local Computer from the Directory Service dropdown, then search for and select the username you added in step 2. Be sure the “Limit to this computer only (COMPUTERNAME)” checkbox is NOT checked.
  7. If you did the above right, your account will be listed as .\username when you return to the previous dialog.
  8. Now it’s time to return to your jumpbox and launch the WM console there. Since your username is now in the WM database, it will let you in. In practice you could stop here, however this would leave the jumpbox username able to launch the WM console from every computer. Let’s add an ounce more of prevention by locking in the computer name too:
  9. On the jumpbox’s WM console, go to Administration | Administrative Roles | <your security role> | Access Control tab.
  10. Select your “.\username” entry and edit it. Repeat step 6, except this time make sure to check the “Limit to this computer only (COMPUTERNAME)” checkbox. When you return to the previous dialog, you’ll note that your account is now listed correctly as jumpboxcomputername\username.
  11. As the last loose end to tie up, go back to the domain member computer where you created the temporary local user account and delete it.
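The create-then-delete dance for the temporary account can be sketched like this. The user name and password are examples; whatever you pick must match the jumpbox’s local credentials, and the cmdlets assume the LocalAccounts module (Windows 10/Server 2016 and later):

```powershell
# On a domain-joined WM machine: create a local user matching the jumpbox account
$pw = ConvertTo-SecureString 'Secret123!' -AsPlainText -Force
New-LocalUser -Name 'jumpuser' -Password $pw

# ...perform steps 3-7 in the WM console...

# Once the admin role lists .\jumpuser, the temporary account can go away again
Remove-LocalUser -Name 'jumpuser'
```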


Keeping Virtual Sandboxes under control

By Rob Aarts and Max Ranzau

Rob: After using VMware ThinApp in several projects I wanted to share some best practices. The first one concerns a common mistake I see made on a regular basis: applications with several executable entry points are presented using Workspace Manager as multiple managed applications. So far so good.

The problem arises when all entry points (from the same Thinapp capture) have their own Zero Profile setting pointing to the same Sandbox location. Are you still with me here? Let’s have a look at the example below:


Here’s a working example:

  • When a user launches Application 1, Zero Profile settings are loaded and written to the sandbox.
  • The user then launches Application 2 and Zero Profile settings are loaded and written to the same sandbox location.

What is likely to happen is that settings for Application 1 become corrupted, because its settings are being changed by another process while it’s running. I’ve personally seen some strange behavior from apps that absolutely don’t like this messing around with their appdata behind the scenes. It doesn’t take a degree in rocket science to imagine what happens when Application 3 is launched: it just increases the likelihood of corruption.

The solution to avoid this mess is simple and was covered previously, although for natively installed applications only: have a look at Max’s article RG056 in the tech library. Setting up a placeholder application as described in that article lets you give the suite a single owner of the sandbox, directing the Zero Profile settings of Applications 1, 2 and 3 to this placeholder app:


Max: Once you have this set up, the next challenge is to make sure your User Settings capture configurations don’t overlap. As of WM SR3 there is a setting for global User Settings to capture a setting exclusively: if, say, 3 different global User Settings grab the same registry value, you can mark one of them as exclusive and only that User Setting will store it. Unfortunately this approach doesn’t work for Managed Application-based User Settings, as the capture-exclusive feature isn’t available there (yet?). Anyhow, there is a workaround. Let’s say you start by creating a suite-settings placeholder app for Office, as described above:

  1. You create a new managed app
  2. Under user settings, you add all the capture templates for Word, Excel, Powerpoint etc. and you have a nice list like shown below
  3. Then everything is cool and ready to rumble, right?


Unfortunately that’s not quite the case, as the templates are likely to overlap. This is not the fault of the template designers, but a function of the fact that each template needs to be able to stand alone. This means we have a bit of cleaning up to do, but it’s quite easy. When you are on the User Settings | Capturing tab of the SuiteSettings app as shown above, do the following:

  1. Click the Show details checkbox at the bottom of the dialog box
  2. Now click on the data column header to sort on files and registry entries being captured
  3. Look for identical rows (highlighted below)


Note the line for ‘Microsoft InfoPath Designer 2010’, which I have highlighted and disabled. I disabled it because that particular User Setting was already captured by the template called ‘Microsoft InfoPath Filler 2010’ and, as you may recall from our discussion above, we don’t have the option to capture exclusively on managed apps.

You disable an item by double-clicking it. Don’t fall for the temptation of clearing the first checkbox you see, as that disables the entire template, when you’re only interested in disabling a particular file/registry grab. Instead, go to the Capturing tab, select the offending duplicate entry, double-click again and THEN clear the Enabled checkbox. Sequence shown below:


You can of course also delete the duplicate entries to tidy things up; in this case I kept them around for illustrative purposes. One more thing I’d like to make you aware of: first, go to the global User Settings node, and at the bottom check both ‘Show details’ and ‘Show all User Settings’:


Notice that once you link up multiple applications to the same suite app, you will see multiple entries of the same User Setting. This is not a bug or an indication that something unnecessary is being captured. For example, in the screenshot above, about halfway down you see roughly 7 references to %APPDATA%\Microsoft\Access, with Word, Excel etc. all pointing to it. This does NOT mean the Word and Excel templates had duplicate entries. It’s simply because the combination of the two checkmarks shows the canonical list of all combinations of apps and User Settings, hence the repeats. In short: they’re mostly harmless. Don’t panic!

We hope this little away-mission into advanced WM User Settings management has given you some new ideas on how to wrangle both virtual applications and suite settings for multiple apps.

Rob & Max


Seamless switch from Policies to WM

From the The-GPO-has-you Dept. Recently, one of my clients was facing an interesting issue: they wanted a seamless switchover from an environment currently managed by Windows GPOs to one managed by RES Workspace Manager. Essentially, the job was to devise a method that makes one system let go and the other take over at the same moment. This example was built on a 2012R2 AD with a Win7 front-end.

This method revolves around a simple AD group that serves a dual purpose: 1) when a user is put in the group, the specified policies are denied, and 2) Workspace Manager takes effect. The nice part of this approach is that it is fully reversible, just by removing the user from the group.
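A rough PowerShell sketch of the toggle, assuming a hypothetical group name ‘WM-Managed’ and the RSAT ActiveDirectory module; the user name is an example too:

```powershell
Import-Module ActiveDirectory

# Move a user onto Workspace Manager: GPOs filtered on the group are denied, WM takes over
Add-ADGroupMember -Identity 'WM-Managed' -Members 'jdoe'

# Fully reversible: pull the user back out and the GPOs apply again at next logon
Remove-ADGroupMember -Identity 'WM-Managed' -Members 'jdoe' -Confirm:$false
```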

<<< Click here to read the article

Setting up a Lab HR system for IT Store

From the Lab Essentials Dept. This article shows you how to stand up your very own open-source HR system and hook it up to RES IT Store. One of the things you often hear about in regard to RES IT Store is the ability to do employee on-/offboarding. If you want to test this out for real, you probably won’t get access to a live production HR system, hence this article.

<<< Click here to read the article