Welcome – A letter from the Founder

My name is Max Ranzau. I founded RESguru.com as a technical blog at the end of 2008, dedicating my time and efforts towards creating better IT solutions through the use of RES Software products in the enterprise. This site is the home of RCS – RESguru Consulting Services – and is one of the primary go-to places for independent information, brain-share and tools for workspace and automation engineers across the planet. I intend to keep it just that way.

If you are a new visitor, allow me a moment to welcome you properly by immediately dispensing with the glossy marketeering and cutting to the chase:

I am here to change Enterprise IT.

I dare say that plenty of suit-and-tie types have said something similar to you over the years. Nothing more, nothing less; it’s the best one-liner to express what RCS does. Fact: it is exceptionally hard to explain exactly and completely what RES Software does in one sentence, without either becoming too vague or spinning a long-winded tale. Trust me on this; the good folks at RES Software have been grappling with this conundrum for the last 15 years: having a stellar product suite, but no quick and singular way of explaining what it does!

I’m not going to pretend I’m driving around with the tomes of messaging wisdom* in my trunk. However, that is not going to prevent me from pitching in my two cents. As I see it, there are two main avenues of putting RES technology into context:

A) Presuming you’re technically oriented:

  • Get rid of complexity in policy management, point-solution utilities and scripting. You know as well as I do that with enough of this, or by inheriting someone else’s hand-me-down environment, you’ll be up to your eyeballs just figuring out what’s going on before you can make any changes.
  • Best profile+configuration management: Microsoft’s ways of handling configuration, both central (policies) and per-user (profiles), haven’t changed much in 20 years. The RES Suite will make profile management work smoothly for you.
  • Automation of complex, script-less tasks across the entire IT estate, independently of domains. 150+ built-in graphical tasks range from the simplest of deployment/configuration items to VDI, Mobile Device Management and Helpdesk, integrating with all the major vendors.
  • Workflows can be automated. The RES Suite will allow a company to define and execute workflows for almost anything that can be given to a user. It’s all about service. I usually tell my students that you can make workflows for giving employees their phones, forklifts or Photoshop. It doesn’t matter, as you model the business in the software, define who is qualified to automatically get or request what, assign any approvals to the workflow and then tie it all into the automation and configuration management where necessary. The real strength here is that all three products in the RES Suite talk to each other.
  • Documentation: As engineers, we just love cranking it out by the page, right? That was sarcasm! We love building great solutions, but creating the associated paperwork afterwards is a hassle, even if it’s billable. For admins, auditing poses the same challenge. The RES Suite can do the documentation for you.

The RES Suite is like a well-organized Swiss Army knife with 15,000+ blades. There are loads of other things the RES Suite can do for you, and hopefully you’ll get a sense of this when you browse through the Tech Library of the RESguru site. There are over 100 (and counting) free, practical how-to articles on how to solve common everyday problems with RES technology. Have a look at the intro page here for a proper introduction and tour around the site.

B) Presuming you are financially oriented:

  • 60% savings on current support/helpdesk/administrative load is not unheard of.
  • 90% savings on external consultants camping in your data center for months at a time, as your own staff becomes able to do most things faster on their own. These kinds of numbers have been reported by some of my clients.
  • 20% more users typically on a central environment – just by virtue of efficient management.
  • No doctors with flashlights were involved in obtaining the numbers above! These are based on real RES projects that I have worked on over the last 15 years. Having said that, your mileage may obviously vary depending on what you are trying to solve.
  • Business Processes – any organization has them. If you ever move people around, hiring or firing, the IT folks are usually the last to know, resulting in a long wait before new employees can do what they’re paid to do, or exposing the company to unnecessary risk by not closing down access properly when someone leaves. Sometimes I encounter customers who have custom-built and rather byzantine systems in place, some of them manual, tossing around emails or even paper forms for approvals. As long as nothing changes you can maintain the status quo; however, when the business requirements change, that’s where your costs become evident.
  • AppSense. Have you been struggling with their products for too long, unable to get things to work for you as promised? Spent countless hours having consultants in and out the door on break/fix missions? It’s time to stop and look at a Real Enterprise Solution.
  • Agility: What can be stood up in a few days by an engineer proficient in RES tech can in most cases match and trump what would take a team of engineers using classic tools and methodology several months to implement.

The above should provide you with a decent idea of what is within the realm of the possible in the RES universe. RES technology is not rocket science; it’s just good product design and common sense for the modern enterprise. Covering the entire RES Suite, RCS offers the following on all VDI platforms, TS/Citrix, laptops, mobile devices, Windows and Linux environments:

  • Consulting, advisory and managed service agreements for new and existing RES installations
  • Scoping and technical presales assistance to integrators
  • Design and technical architecture documentation
  • Implementations, remote and on-site.
  • Technical competitive analysis. Here are a couple of examples.
  • Training, Education and Workshops in all RES products both online and on-site. See this for details.

For information on services, rates, schedules and anything else, reach out via the contact page, or call +1 610 462 2200. I look forward to talking with you.

With best regards,

Max Ranzau

 

Using kill-files with Service Store

By Max Ranzau
 

From the MDK Division. In this article I’ll cover some experiences with handling authoritative data on a super-scalable basis. For the example at hand, let’s say you have an authoritative datasource which only provides you deltas, i.e. you only get orders telling you which people objects to create and who to kill (whoa, that didn’t come out right). You want to ensure that at all times your list of people in the ServiceStore is in sync with reality, based on the deltas you receive. In our example we are basing this on CSV files.

In order to handle this, you’ll have to create two data connections: one that makes new people and one that kills them (oh, there I go again). This is important, as with only one data source, Service Store will delete any people records for which there isn’t a corresponding entry in the datasourced CSV files. This can be avoided by splitting up adds and deletes onto two separate data connectors. The key is using the flags on the mapping pages correctly. If you don’t, you’ll risk wiping out (or at least marking for deletion) every current user in your Service Store, so pay close attention.

Assuming there might be more than one make/kill order coming through at any point, you would need to collect these in two static CSV files, as the ServiceStore only knows how to read data from one CSV file. Each of the incoming orders typically only contains one order/line. You will of course need to create a datasource for both of these CSV files. The collection can be done with a bit of nifty, scheduled PowerShell’ing. Feel free to reach out for that if you have no idea how to make it.

Once you have two CSV files ready for synchronization into ServiceStore, you’ll want to set up your data connector mapping flags correctly. I found the following works best. For importing people to create:

  • [X] Ignore duplicates
  • [X] Allow inserts
  • [  ] Allow updates
  • [  ] Allow deletes (mark for deletion)

For people to, ahem…”migrate to the cloud”, the flags need to be configured differently. You will have to allow updates in order for the mark-for-deletion mechanism to do its thing.

  • [  ] Ignore duplicates
  • [  ] Allow inserts
  • [X] Allow updates
  • [X] Allow deletes (mark for deletion)

In order for the above data connections to work, 1) both CSV files need to reference some people identifier – in my case a GUID is available per user – and 2) both the make and kill files should have a commonly named column, such as ACTION, which signifies what is happening. This will also help your script sort the incoming CSVs into the right pile (a minimal example of the file layout is shown below). To give you an overview of the process, study the following diagram:
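To make the layout concrete, here is a hypothetical delta file. The GUID and ACTION columns are the ones described above; the remaining column names are purely illustrative:

PersonGuid,ACTION,FirstName,LastName
6f9619ff-8b86-d011-b42d-00c04fc964ff,MAKE,John,Doe

A kill order would look the same, with ACTION set to KILL and the GUID referencing the person to remove.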

[Diagram: kill-file collection and synchronization flow]

  1. On a scheduled basis, the PowerShell script (sketched below) reads every deposited delta file, either a make or a kill file. The files are deleted once read.
  2. All make orders are written to a combined make file, and all kill orders to a combined kill file.
  3. The script executes the resocc.exe command lines to trigger syncs of the two data connections, using the datasources pointing to the combined CSV files.
  4. People are created or marked for deletion in the Service Store.
  5. The combined CSV files are deleted before the next scheduled run.
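Below is a minimal PowerShell sketch of steps 1–3. The folder and file paths are hypothetical, the ACTION values (MAKE/KILL) follow the example above, and the actual resocc.exe arguments are deliberately left as placeholders – consult the Service Store documentation for the exact sync command lines of your two data connections.

# Hypothetical paths - adjust to your environment
$inbox    = 'C:\SSImport\Inbox'       # where the incoming delta files are deposited
$makeFile = 'C:\SSImport\make.csv'    # combined file for the "create" data connection
$killFile = 'C:\SSImport\kill.csv'    # combined file for the "delete" data connection

$make = @(); $kill = @()

# Step 1: read every deposited delta file, then delete it
Get-ChildItem -Path $inbox -Filter '*.csv' | ForEach-Object {
    $rows  = Import-Csv -Path $_.FullName
    # Step 2: sort the orders into the right pile based on the ACTION column
    $make += $rows | Where-Object { $_.ACTION -eq 'MAKE' }
    $kill += $rows | Where-Object { $_.ACTION -eq 'KILL' }
    Remove-Item -Path $_.FullName
}

# Write the combined CSV files that the two datasources point to
if ($make) { $make | Export-Csv -Path $makeFile -NoTypeInformation }
if ($kill) { $kill | Export-Csv -Path $killFile -NoTypeInformation }

# Step 3: trigger the two data connection syncs with resocc.exe
# (arguments intentionally omitted - use the command lines for your own data connections)
# & 'C:\Program Files (x86)\RES Software\IT Store\resocc.exe' <make-connection sync arguments>
# & 'C:\Program Files (x86)\RES Software\IT Store\resocc.exe' <kill-connection sync arguments>

Schedule the script with Task Scheduler at whatever interval matches how quickly you need the deltas reflected in the Service Store.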

This method makes for an effective way of receiving multiple creation/deletion commands as part of an onboarding/offboarding scenario. If you wish to learn more about this solution, feel free to reach out.

Bug alert: The Zombification Attribute

By Max Ranzau
Will code C# for brrraaaaaaiiiins!

 

From the Brrrraaaains Dept. Although the title might sound like a weird crossover episode between The Big Bang Theory and The Walking Dead, I had a super scary experience with Service Store this week. All of a sudden, people attributes had disappeared from a client development environment and everyone was biting their nails that the problem would propagate into production. Even the built-in People Attributes, Security Questions and Answers, had disappeared from all users when you went to their Attributes tab. What was even worse, services were failing left and right – specifically those which used any reference to #Subscriber(personattribute) or #Requester(personattribute). Looking directly into the OR_PeopleAttributes table via SQL Studio, I could see my attributes were still intact; alas, something was making the ServiceStore act all gnarly and puke dayglo, while everything else seemed to work normally.
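If you want to confirm for yourself that the raw data survived, a quick peek at the table is enough. A minimal sketch, assuming a default local SQL instance and Windows authentication; substitute your own server and database names:

sqlcmd -S . -E -d YourServiceStoreDB -Q "SELECT TOP 20 * FROM dbo.OR_PeopleAttributes"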

At the time of writing, I have only experienced this problem with the latest ServiceStore 2015 FR2, Update 2, aka 8.2.2.0. I do not know if earlier versions of Service/IT Store are affected. And yes, this has been reported to the Merry Men of RES Support. You’ll probably notice a new KB article over the next few days until engineering devises a fix for this.


I’ll spare you the long trials and tribulations I went through to nail this bug down over the course of a night, with only a pot of coffee and Radio Paradise for company. Let’s cut to the chase:

The problem lies specifically with certain People Attributes you may define: it would appear that if you define a Person Attribute of the TABLE type with more than 6 columns, the problem will manifest itself at some point and your people attributes will be zombified. What specifically triggers it is not exactly clear; however, I suspect it is when a WFA (WorkFlow Action) references the attribute. I was able to manually trigger it in a new clean database by importing a buildingblock containing the offending attribute and a dummy service.

The good news is that until the software engineers fix the problem, it is relatively easy to get rid of: either delete the table-type Person Attribute from your Data Model, or edit it down to 6 columns or fewer. The moment column #7 is deleted and the table definition saved, all the hidden people attributes will re-appear.

So, now that we know what we’re dealing with, allow me a moment to spin my thoughts on this: the table objects were originally created to cater for MDM, e.g. registering something a user might have more than one of, such as mobile devices, tablets etc. Typically 4-5 fields were used for Device type, Model, Carrier, Phone#, etc., thus I can only muse that more than 6 columns might never have been attempted during testing – that is, until your friendly neighborhood blogging-bull came charging through the china store and created a table attribute with 14 columns.

This concludes the alert/early warning. As mentioned, RES have already been notified, so hopefully this article will be obsolete soon.

 

Reconfiguring ServiceStore to a new datastore

By Max Ranzau

 

This article will help you switch an environment from an existing RES Service Store database to a fresh new one. In my case I needed to spin up a fresh database to weed out an error which I did not want to import into my production environment, so I needed a temporary, disposable working ServiceStore without having to build one from scratch in my development environment. In other words, this operation allows you to switch your ServiceStore back and forth between several databases. There are, however, a couple of things you’ll need before you start:

  • Current SA password for the SQL server
  • Current Catalog Services password as we don’t want to bother changing that everywhere
  1. First, start the Setup & Sync Tool and go to Setup | Database in the menu.
  2. Do not touch the settings there; just hit the Create button in the lower left.

  3. Fill everything out in the next wizard. Note that the password shown is not your SA password pre-filled; it’s just a bunch of dots, and you will have to know the SA password to continue here. Note to anyone who would bother putting it in UserVoice: the password field should be blank.

  4. After successful authentication, name your new datastore:

  5. Size your database correctly. If this is just a dummy/scratch database, you can leave it at the defaults. Otherwise, if you’re importing a massive amount of stuff from buildingblocks, you’ll do well to size the DB accordingly.

  6. Enter the new SQL credentials. Note that even though you might already have an SQL user you’d like to use, the installer insists on creating a new one. This can be fixed later, so just enter something sensible and continue.

  7. If circumstances allow, re-use the current Catalog Services password, as that will save you a boatload of configuration.

  8. Hit Next, verify everything looks kosher, then hit the Create button. The wizard will now generate a new datastore with fresh new tables from scratch.

  9. Once you hit Finish, the Setup & Sync Tool will relaunch, pointing to the new, empty ServiceStore DB.
  10. If you’re just spinning up a temporary blank store to test something, you may want to re-use the SQL user from your previous database. This will save you even more reconfiguration hassle. First, hop into your SQL Management Studio and give the old SQL user DBO permissions on the newly created database: right-click the old SQL user, choose Properties (1), go to the User Mapping section and checkmark the new database (2), then checkmark db_owner at the bottom (3).

  11. Go to your catalog server(s) and start regedit. Navigate to HKEY_LOCAL_MACHINE\SOFTWARE\RES\ITStore\Catalog Services and change the database name in the REG_SZ value DBName to the new database name (a scripted version of steps 10-14 is sketched after this list).
  12. Start the Services snap-in and bounce the service “RES ONE Service Store Catalog Services”, aka RESOCS.
  13. Go to your transaction engine(s) and change the REG_SZ value DBName to the same name in HKEY_LOCAL_MACHINE\SOFTWARE\RES\ITStore\Transaction Engine.
  14. While there, bounce the service “RES ONE Service Store Transaction Engine”, aka RESOTE.
  15. Fire up your browser and go to the store’s /management site. Once there, go to the burger menu, Setup, Datastore. Since we kept the SQL username and password the same, the only thing you will need to change is the database name as shown. The console will offer a dropdown showing all the databases available, including your new one.

  16. Once you’ve made the change, use Test Connection to verify you’re good to go, then hit Save. Note that you’ll be kicked out of the console for re-login, but since there are no security roles defined in the blank database, you’ll be able to log right back in using your normal administrative account.
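If you prefer to script steps 10 through 14, here is a rough PowerShell sketch. The database and login names are examples, the registry paths and the RESOCS/RESOTE service names are taken from the steps above, and each part should be run on the server it applies to (SQL, catalog, transaction engine) – double-check everything against your own installation first.

# Hypothetical names - adjust to your environment
$newDb    = 'RESONEServiceStore_Scratch'   # the datastore created by the wizard
$oldLogin = 'RESSS'                        # the existing SQL login you want to re-use

# Step 10: map the existing SQL login into the new database and grant it db_owner
sqlcmd -S . -E -d $newDb -Q "CREATE USER [$oldLogin] FOR LOGIN [$oldLogin]; ALTER ROLE db_owner ADD MEMBER [$oldLogin];"

# Steps 11-12: repoint the Catalog Services and bounce the service
Set-ItemProperty -Path 'HKLM:\SOFTWARE\RES\ITStore\Catalog Services' -Name DBName -Value $newDb
Restart-Service -Name 'RESOCS'

# Steps 13-14: same exercise for the Transaction Engine
Set-ItemProperty -Path 'HKLM:\SOFTWARE\RES\ITStore\Transaction Engine' -Name DBName -Value $newDb
Restart-Service -Name 'RESOTE'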

You are now running on a fresh new database with the 25 built-in 45-day eval licenses. The same process can be reversed to hop back to your original database, should you be so inclined. Do, however, remember to also run the Setup & Sync Tool and go to Setup | Database, pointing it at the correct database. Just because you point the web interface at one datastore doesn’t mean the S&ST isn’t still pointing at another.

Overall, the advantage of this approach is that you do not have to change any additional servicestore components such as the mobile clients, website or windows clients as they’re all pointing to either the website or catalog servers, which haven’t changed.

A couple of closing notes: if you are using this approach to debug a ServiceStore database, where you need to restore the database ever so often to a previous backup, you are likely to run into the problem that the DB restore takes forever. It’ll just sit there and wait for hell to freeze over, eventually failing because the datastore is in use. There is an easy way to get around this (a scripted version follows the steps below). WARNING: use this only on a garbage ServiceStore datastore which you’re going to discard anyway.

  1. On the SQLserver housing the ServiceStore database you want to unlock, start a command prompt
  2. Start SQLCMD (should be in the path on a SQL box)
  3. Paste this in at the 1> prompt: ALTER DATABASE yourSSDBname SET OFFLINE WITH ROLLBACK IMMEDIATE
  4. Hit enter
  5. Type GO at the 2> prompt and hit enter again.
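If you’d rather not do this interactively, the same thing can be done in one go from any prompt. A sketch, assuming a default local instance and Windows authentication; substitute your own database name for the yourSSDBname placeholder:

sqlcmd -S . -E -Q "ALTER DATABASE yourSSDBname SET OFFLINE WITH ROLLBACK IMMEDIATE"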

This will take a few moments, but it is lightning fast compared to the alternative. It will produce output something like this: “Nonqualified transactions are being rolled back. Estimated rollback completion: 0%. Nonqualified transactions are being rolled back. Estimated rollback completion: 100%.” Once complete, the locked ServiceStore database is offline and you can restore it immediately.

 

Moving a WMDB across environments

By Max Ranzau

 

From the Desperate Measures Dept. This article is the result of half a day’s work resurrecting a WM database from the grave, or more to the point, from an old environment in a different domain. There were no buildingblocks, only a full database backup. Ever the optimist, I figured it would be an easy win: a few minutes of restoring, running /lockedout on the Workspace Manager console, adding a new user to the Technical Manager admin role. Boy, was I wrong… When the full scope of what I had to do dawned on me, I was shouting repeated references to a certain cocktail.

Here’s the skinny: I had to move a Workspace Manager database from one environment to another, which meant a new domain too. On the new SQL server, which was going to be the new home for the WM datastore, I restored the DB backup and set up a SQL user for it as well, finally installing a WM console and configuring it to hit the datastore. Obviously locked out, I ran pwrtech.exe /lockedout. Then this happens:

Here it is again for the search engines: “Access is only permitted from the following domains”. Yes, the truth slowly dawns on you: you’re locked out of your own DBMS for real, and the only saving grace (/lockedout) meant to pull you out of that s***hole does absolutely bupkis! I’ll spare you all the futile attempts I went through to solve this and cut straight to what worked. It’s more than likely not the quick solve you’re hoping for, but at least it’ll get your WM datastore back in one piece.

Step 1: Identify the former domain FQDN

While I recalled the NetBIOS name of the old environment, I was unsure of the FQDN of the old domain where the WM database used to live. This can be identified like this: open the WM backup in SQL Management Studio, open the table tblObjects, and look for objects with lngObjectType=64.
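If you prefer a query over clicking around, a minimal sketch against the restored backup will do the same; the database name here is just an example:

sqlcmd -S . -E -d RESWM_Restored -Q "SELECT * FROM dbo.tblObjects WHERE lngObjectType = 64"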

Step 2: Build a sandbox

In a VM sandbox, build a new, clean Domain Controller with the same FQDN as found in Step 1. Yes, there’s no use crying about it. It will take the time it takes, and it’s the only way as things are as of WM 2015 SR1. This box is temporary and can be tossed out when you’re done with this long, sad exercise. The OS shouldn’t matter too much, as we only need this box to import and cleanse the WM datastore of whatever gunk is preventing the unlock.

Step 2 (continued) – Prep the sandbox

  • On top of the sandbox DC, install the same or a newer version of the DBMS the database came from, or you likely won’t be able to restore your WM database. I’m presuming this to be MSSQL. Be sure to set it up for mixed-mode authentication.
  • Restore the WM database into the sandbox DC/SQL.
  • Install the WM Console MSI, preferably the same version as you had in the old environment, however a newer one should [probably] work nicely as well.
  • Run pwrtech.exe /lockedout (copy the path from the console shortcut), then enter your SA credentials and log in via the dialog shown above.

And poof! Another roadblock went up. I vaguely recalled from the old environment that Domain Admins were technical managers, but I’d have expected Workspace Manager to be smarter than this:

Sure enough, for the S&Giggles, I tried creating another domain admin and got the same result. Only one way forward then…

 

Step 3 – Create a non-admin user and restore access

  • Create a normal user that is a member of Domain Users. You can’t make local users here, since we’re on a domain controller.
  • Make sure to add this user to Print Operators, as members of that group can log on locally – we’ll need that on a server.
  • Log off, then log back in with the new normal user.
  • Start pwrtech.exe /lockedout, fill in the SQL credentials for the DB and voilà! Access should be restored. (A scripted version of these steps is sketched below.)
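For reference, a rough PowerShell equivalent of the bullets above, assuming the ActiveDirectory module is available on the sandbox DC; the account name is just a placeholder, and the pwrtech.exe path should be copied from your own console shortcut:

# Create the temporary 'normal' user and let it log on locally via Print Operators
Import-Module ActiveDirectory
$pw = Read-Host -AsSecureString -Prompt 'Password for the temporary user'
New-ADUser -Name 'normal' -SamAccountName 'normal' -AccountPassword $pw -Enabled $true
Add-ADGroupMember -Identity 'Print Operators' -Members 'normal'

# Log off, log back in as the new user, then unlock the datastore:
# & 'C:\Program Files (x86)\RES Software\Workspace Manager\pwrtech.exe' /lockedout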

Step 4 – Prep the sandbox database

Now that we have managed to pry open the sandboxed WM database, it’s a matter of getting it prepped so we can put it back into action in the new target environment. First, let’s take a look at what the administrative roles look like as of now:

As you can see above, it’s quite obviously pointing to the old LAB domain and the Domain Admins group; the latter is why logging in with any domain administrator didn’t make any difference. Our normal non-admin user has been added as expected, as a result of the /lockedout procedure. Next we’re going to add Local Computer as a directory service:

For good measure, we’re going to bump the Local Computer directory service up to first priority:

Note: There’s no point in adding the target domain just yet as it probably can’t be resolved from the sandbox anyway.

  • Once the Local Computer directory service is added, hit Shift+F5 or go to the File | Reload now menu in the console, for the new directory service to be made available elsewhere.
  • Go back to the Technical manager admin role and edit it.
  • Hit Add, Users/Groups and follow the yellow brick road as numbered below.

  1. Change the dropdown to local computer
  2. Uncheck “Limit to this computer only (NAME)”
  3. Hit search
  4. Pick the normal user you created
  5. Hit ok

If you didn’t dun goof’d, it should look like this:

Step 5 – Make a new backup

Now that we’re done playing doctor on the database in the sandbox, let’s back it up and move it over to the target environment. Fire up your SQL Studio, right-click the RES DB and choose Backup:

Unless you intentionally aim to overwrite the old backup, be sure to remove it and replace it with a new target backup file:

When the backup is done, go to the backup location, e.g. C:\Program Files\Microsoft SQL Server\MSSQL12.MSSQLSERVER\MSSQL\Backup. You may be prompted by this lil’ fella on the right, but not to worry: just hit Continue, enter your admin credentials if prompted, and everything will be fine.

I noted that for any number of inexplicable reasons the size of the datastore jumped 3-fold. Hopefully this isn’t linear for larger databases!

Step 6 – Restoring the doctored WM database

It’s now time to take the backup of the doctored WM datastore out of the sandbox and put it into the target environment. Once the SQL backup file is copied over to the target SQL box, you’ll probably want to give the destination database the same name as the old busted one, so any consoles pointing to it do not have to be reconfigured.

When you delete the old database (make sure it’s the right DB!), put a checkmark in the Close existing connections checkbox.

Now let’s restore the doctored database. Be sure your restore settings look something like this:

Once the database has been restored, make sure you create/edit a SQL user which has DBO on the datastore. In my case I already had a RESWM user; I just needed to configure the user mappings so the ownership and permissions are okay:

If the above fails, it may be because an orphaned SQL user (without a login) with the same name already exists in the RESWM datastore. Go to Security under the WM database and knock it out:

Step 7 – Set up a local user in target environment

We’re almost there now. Go to a computer (in the target domain environment) where you have the RES WM console available, but don’t launch it just yet. Remember, we have to add that local user we configured within the WM console in the sandbox previously. If you’re on a domain controller, just create a domain user like you did before; however, I’m going to presume you are doing this on a member server. If that’s the case, go to Server Manager | Tools menu | Computer Management | System Tools | Local Users and Groups | Users | right-click, New User:

Add the ‘normal’ user. The password doesn’t matter. For good measure, add it to the Print Operators group again – not that it is necessarily required, but hey, the account is temporary anyway:

Now, hold down Shift and right-click on the Workspace Manager desktop shortcut, choose Run as different user on the context menu, and enter the name of the computer you are on along with the credentials of the normal user:
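If the Shift+right-click dance annoys you, runas does the same thing from a prompt. A sketch – MEMBERSERVER is a placeholder for the name of the computer you are on, and the console path is the default install location, so adjust both as needed:

runas /user:MEMBERSERVER\normal "C:\Program Files (x86)\RES Software\Workspace Manager\pwrtech.exe"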

Step 8 – Configure console in the target domain

The console should now launch. Go into User Context, Directory Services and add the current domain:

Note: You won’t be able to either browse or test the new domain, as you’re not running the console with a domain user at the moment.

Go back into the TechAdm role and delete any references to the LAB domain. We can’t add the target domain just yet, as we are still running the console with a local user. Make sure you don’t delete .\normal just yet.

Now, exit the console completely and run the “C:\Program Files (x86)\RES Software\Workspace Manager\pwrtech.exe” /lockedout command. Once again the console executable will prompt you for the SQL credentials, but this time, since we’ve added the target domain as a directory service and removed all references to the old LAB domain, we will have success:

Step 9 – Cleanup

We now have full access to the database again. There are only a few housekeeping items left to do:

  1. Add any additional necessary users and groups to the TechAdm role
  2. Delete the .\normal user from the TechAdm role
  3. Delete the LocalComputer directory service from User Context
  4. Delete the ‘normal’ local user account
  5. Power down/Save/Revert/Kill the sandbox DC+SQL as you don’t need it anymore.

Conclusion: They said it couldn’t be done and that the only way was to restore the buildingblocks that weren’t there. But as you may know, the RESguru never takes no for an answer, neither from a piece of stubborn software, nor incidentally from short managers with girly names! Be that as it may, this was by far the longest workaround I’ve had to come up with in a very long time. I can only wish that RES fixes this snafu, so hopefully no one else has to jump through the same hoops I did to get their WM database back.

 

Setting up a WM console on a jumpbox

By Max Ranzau

 

From the Multiple Hoops Dept. The other day I was tasked with setting up a Workspace Manager console on a jumpbox. You know, the typical setup for a client where you VPN into a non-domain-member computer, from where you RDS to the different servers you need to access. The wish is to have the RES WM console running on this box so you don’t have to do Inception-RDS to make a few changes in WM, thus preserving screen real estate. Note: this will of course only work if your jumpbox is allowed to hit the database directly. If the jumpbox is firewalled to the hilt and only allows outbound RDS connections, stop reading right here.

Presuming you’re still with us, you might already have installed the WM console on your jumpbox and connected it to the relay server. When you launch it, you’ll get kicked right back out as the console looks for your local computername\username in the datastore and obviously it’s not there yet, so let’s add it:

The above sounds simple enough, but it appears there are a few steps to go through, which incidentally left me wondering if there was an easier way to do it. I mean, under Applications you can add users manually, but no such luck on Admin Roles… (hint hint, nudge nudge, dear product management ;)

  1. Assuming you already have WM running on one or more domain-enabled computers, go to one of these. Presuming it’s a Server 2012[R2], launch Server Manager, go to the Tools menu and open Computer Management.
  2. Go to System Tools | Local Users and Groups | Users and add a local user. The user name and password must be the same as for the jumpbox local user. This account is temporary and can be nuked at the end of the story (a command-line equivalent is sketched after this list).
  3. Now launch the WM console, go to User Context | Directory Services and choose New from the toolbar.
  4. In the dialog, choose Local Computer from the Type dropdown and hit OK. No further changes are necessary. WM now understands that local computer accounts can be used for access control, which also applies to Administrative Roles.
  5. Go to Administration | Administrative Roles | <your security role> | Access Control tab | Add button | Users/Groups.
  6. Choose Local Computer from the Directory Service dropdown, then search for and select the username you added in step 2. Be sure the “Limit to this computer only (COMPUTERNAME)” checkbox is NOT checked.
  7. If you did the above right, your account will be listed as .\username when you return to the previous dialog
  8. Now it’s time to return to your jumpbox and launch the WM console there. Since your username is now in the WM database, it will let you in. In practice you could stop here; however, this would leave the jumpbox username able to launch the WM console from any computer. Let’s just add an ounce more of prevention by locking in the computer name too:
  9. On the jumpbox’s WM console, go to Administration | Administrative Roles | <your security role> | Access Control tab
  10. Select your “.\username” and edit it. Repeat step 6, except this time make sure to check the “Limit to this computer only (COMPUTERNAME)” checkbox. When you return to the previous dialog, you’ll note that your account is now listed correctly as jumpboxcomputername\username.
  11. As the last loose end to tie up, go back to the domain member computer where you created the temporary local user account and delete it.
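For step 2 (and the cleanup in step 11), the temporary account can also be created and removed from a command line. A sketch – ‘jumpuser’ is a placeholder for whatever your jumpbox local user is called:

# Step 2 - create the temporary account (you'll be prompted for a password; use the same one as on the jumpbox)
net user jumpuser * /add

# Step 11 - remove the temporary account again when you're done
net user jumpuser /delete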

 

Authorizing in WM – How it SHOULD work

By Max Ranzau

 

From the My-Two-Cents Dept. Having worked with RES Workspace Manager for about a decade and a half, I’ve witnessed many improvements. While the product gets better with each release, regardless of vendor it’s not always flowers and chocolate. By now, most seasoned Workspace Engineers familiar with the product know the difference between learning mode and blocking mode on the security subsystems. Dialing in the security for a new client/customer always takes a bit of time, as you’ll have to deal with the security baseline – and then authorize the things that are unique to said customer environment. The work I always seem to find myself spending time on is hopping back and forth between Authorized Files and either the Managed Application node or the Read-Only Blanketing node.

The issue at hand is this: even after one has dealt with a log entry by right-clicking on it, said log entry will still be in the log. That makes it a challenge to maintain an overview of what’s been dealt with and what hasn’t – especially if you are using wildcard rules to kill multiple log entries with one stone. It would be wonderful if this process could be managed better. I’ve gone through the necessary steps in a previous article here. To optimize this work, below are a few ideas off the top of my head for how this ideally should work:

  • The security logs should be reworked to show a “Processed” or “Authorized” flag. Think of it like the little red flag you can set on your emails and tasks in Outlook.
  • When authorizing a specific log entry, there should be checkboxes in the authorization dialog box to “Mark affected log entries as authorized” and/or “Delete affected entries in log file”. Workspace Manager can already filter views with the Attention flag etc. in Workspace Analysis, so it should be familiar territory, development-wise.
  • In the Authorized Files node there should be similar options to process all current log files through the active authorizations, so it becomes evident which things you haven’t dealt with yet.
  • Finally, it would be stellar to incorporate Patrick Grinsven’s excellent work on the DBlogCleaner tool (which is out in a new version, stay tuned)

Now, before some well-meaning person asks why I don’t put these ideas into UserVoice for voting etc, I will offer my thanks for the consideration, yet I am perfectly happy passing that baton with the associated credit to someone else. In other words, feel free to co-opt these ideas and make them your own.

 

Keeping Virtual Sandboxes under control

By Rob Aarts and Max Ranzau

Rob: After using VMware ThinApp in several projects, I wanted to share some best practices. The first one is about a common mistake I see made on a regular basis. Applications with several executable entry points are presented using Workspace Manager, using multiple managed applications. So far so good.

The problem arises when all entry points (from the same Thinapp capture) have their own Zero Profile setting pointing to the same Sandbox location. Are you still with me here? Let’s have a look at the example below:

[Diagram: Applications 1-3 each pointing their Zero Profile settings at the same sandbox]

Here’s a working example:

  • When a user launches Application 1, Zero Profile settings are loaded and written to the sandbox.
  • The user then launches Application 2, and Zero Profile settings are loaded and written to the same sandbox location.

What is likely to happen is that the settings for Application 1 become corrupted, because its settings are being changed by another process while it’s running. I have personally seen some strange behavior from apps, which absolutely don’t like this messing around with their appdata behind the scenes. It doesn’t take a degree in rocket science to imagine what may happen when Application 3 is launched; it just increases the likelihood of corruption.

The solution to avoid this mess is simple and was covered previously, although for natively installed applications only: have a look at Max’s article RG056 in the tech library. Setting up a placeholder application as described in that article will allow you to configure the individual apps to save the sandbox there, directing the Zero Profile settings from Applications 1, 2 and 3 to this placeholder app:

[Diagram: Applications 1-3 directing their Zero Profile settings to a single placeholder app]

Max: Once you have this set up, the next challenge is to make sure your User Settings capture configurations are not overlapping. As of WM SR3 there is a setting for global User Settings to grab a setting exclusively. This means that if, say, three different global User Settings grab the same registry value, you can check one of them as exclusive and only that User Setting will store it. Unfortunately this approach doesn’t work well for Managed Application based User Settings, as the capture-exclusive feature isn’t available there (yet?). Anyhow, there is a workaround for this. Let’s say you start by creating a suite-settings placeholder app, as described above, for Office:

  1. You create a new managed app.
  2. Under User Settings, you add all the capture templates for Word, Excel, PowerPoint etc., and you have a nice list like the one shown below.
  3. Then everything is cool and ready to rumble, right?

[Screenshot: capture templates listed on the User Settings | Capturing tab of the SuiteSettings app]

Unfortunately that’s not quite the case, as the templates are likely to overlap. This is not the fault of the template designers, but a consequence of each template needing to be able to stand alone. This means we have a bit of cleaning up to do, but it’s quite easy. When you are on the User Settings | Capturing tab of the SuiteSettings app as shown above, do the following:

  1. Click the Show details checkbox at the bottom of the dialog box.
  2. Now click on the Data column header to sort on the files and registry entries being captured.
  3. Look for identical rows (highlighted below).

[Screenshot: duplicate capture entries, with ‘Microsoft InfoPath Designer 2010’ highlighted]

Note the line for ‘Microsoft InfoPath Designer 2010’, which I have highlighted and disabled. I disabled it because that particular User Setting was already captured by the template called ‘Microsoft InfoPath Filler 2010’ and, as you may recall from the discussion above, we do not have the option to capture exclusively on Managed Applications.

You disable an item by double-clicking on it. Don’t fall for the temptation of removing the checkbox you immediately see, as that will disable the entire template, when you are only interested in disabling a certain file/registry grab. Instead, go to the Capturing tab, select the offending/duplicate entry, double-click again and THEN remove the Enabled checkbox you see. The sequence is shown below:

[Screenshot: sequence for disabling a single captured file/registry entry]

You can of course also delete the duplicate entries to tidy things up. In this case I kept them around for illustrative purposes. One thing I’d like to make you aware of: First, go to the global User Settings node, and at the bottom check both ‘Show details’ and ‘Show all User Settings’:

[Screenshot: global User Settings node with ‘Show details’ and ‘Show all User Settings’ checked]

Notice that once you link up multiple applications to the same suite app, you will see multiple entries of the same User Setting. This is not a bug or an indication that something unnecessary is being captured. For example, look at the screenshot above, where about halfway down you see roughly 7 references to %APPDATA%\Microsoft\Access with Word, Excel etc. all pointing to it. This does NOT mean the Word and Excel templates had duplicate entries. It’s simply because the combination of the two checkmarks shows the canonical list of all combinations of apps and User Settings, hence the repeats. In short: they’re mostly harmless. Don’t panic!

We hope this little away-mission into advanced WM User Settings management has given you some new thoughts on how to wrangle both virtual applications and suite settings for multiple apps.

Rob & Max

 

Removing zombies from IT Store

By Max Ranzau

 

From the Hacking Dead Dept. IT Store is a fine HR data processor and workflow engine when you set it up to pull people and department data in from an authoritative data source. In a previous article I showed an example of how to do just that. However, when a person is marked as deleted in your datasource, IT Store doesn’t delete the user. These are effectively the living dead of the IT Store people, except in this case they won’t try to claim a license or your brains.

Deleting a user in IT Store has always been a two-stage affair. Initially, when IT Store marks a person for deletion, it uses the opportunity to scan for any and all delivered services. One should not tinker with this. However, once the mentioned services have been properly returned, the user is then marked as [Ready for deletion]. But that’s all she wrote. Nothing more happens.

Effectively this means that over time, an organization with thousands of annual onboardings/offboardings (think educational institutions, for example) will have a pileup of undead, un-deleted people in IT Store. Sure, they’re obscured from view until you check “Include people marked for deletion”. Your only current option is to manually go Michonne on them in the console yourself.

The design rationale is that since some HR systems don’t delete the employee when off-boarded, neither should ITS. Here’s where I disagree. It makes sense for HR systems to keep a record of previous people for administrative reasons, but since ITS is the conduit into the rest of the IT infrastructure, there’s IMHO little point in keeping a record here once you’ve cleaned up everywhere else. After all, during off-boarding we’d probably be exporting the user’s mailbox and zipping up his home drive, as we don’t want dead user remains floating around in the production environment.

At this stage there’s only one way to deal with this if you don’t want to manually flush users marked ready for deletion: Hack the IT Store database.

Like any other vendor, RES gets nervous tics and reaches for their crossbow when you start messing with the brraaaiiins grey matter of the datastores, thus the usual warnings apply: if you do this, you’re on your own. See the MOAD for details.

That said, let’s look at the hack. It’s a simple SQL delete statement. Presuming your datastore is running MSSQL, the SQL looks like this:

DELETE FROM [$[in.db.its.name]].[dbo].[OR_Objects]
WHERE [TYPE] = 1 and [RecordStatus] = 2

The $[in.db.its.name] above is an Automation Manager module parameter containing the name of the ITS database. Running this query is the equivalent of manually deleting all the users marked [Ready for deletion]. One SNAFU to be aware of is that the users will not disappear from the console before you exit and re-launch it. My guess is that the records are cached in RAM and are only updated when IT Store is doing its own operations.

Putting this into Automation Manager, I came across a minor problem with the SQL statement execute task in Automation Manager: it looks like as of SR3 (7.0.3.0), the password field can’t be properly parameterized. Sure, you can right-click on the password field and insert a parameter, but the next time you go back and edit the module, the password stops working. Until RES fixes this and puts in a proper credential-type field, you’re better off hardcoding the password.

If you’re still up for it, try out this buildingblock in your lab.

Migrating from a broken UEM product, part 1

From the REScue 911 Dept. Recently I was involved in a client project where they had a problem. And it was a big problem: effectively, they were using another profile management product which was malfunctioning. I’d prefer not to give the game away by naming the vendor. Not that I have any problem with verbally beating vendors over the head when they deserve it – this is out of courtesy to the client.

Suffice it to say, the product in question employed by my client was practically holding the users’ profile settings hostage. Allow me to clarify: if your current UEM tool redirects a write to a proprietary format, you are putting all the users’ profile data into a basket you have little or no control over. Meaning: if you switch said UEM tool off, all your users’ settings are stuck in said basket. The following article puts you on a path out of this situation.

<<< Click here to read the article

 

Seamless switch from Policies to WM

From the The-GPO-Has-You Dept. Recently, one of my clients was facing an interesting issue: they wanted to do a seamless switchover from a currently Windows GPO-managed environment to a RES Workspace Manager environment. Essentially, the job was to devise a method to make one system let go and have the other one take over at the same time. This example was built on a 2012 R2 AD with a Win7 front-end.

This method revolves around using a simple AD group that serves a dual purpose. 1) When a user is put in the group, specified policies are denied and 2) Workspace Manager takes effect. The nice part of this approach is that it is fully reversible, just by removing the user from the group.

<<< Click here to read the article