All posts by Jonathan Almquist

WFAnalyzer.exe has stopped working

Just a quick post here, for those management pack developers who have run into problems simulating a workflow using the Visual Studio Authoring Extensions.

I have been missing the Workflow Analyzer companion to the MP Simulator for QUITE SOME TIME. I’ve tried troubleshooting the problem on several occasions, probably spending more than 10-12 hours burning the midnight oil over the past months researching, debugging, and sifting through logs and many Stack Overflow pages about .NET exceptions.

I’ve uninstalled and reinstalled the VSAE, and probably Visual Studio as well at some point, to no avail – the catastrophic behavior of Workflow Analyzer was clearly the bane of my development work. “The system cannot find the file specified?” What an unhelpful message that is, especially when there is no file specified in the exception!

Just fixed it, though – out of what seems to be sheer luck.

I uninstalled the Microsoft Monitoring Agent from my development workstation and installed the System Center 2012 SP1 agent – lo and behold, the Workflow Analyzer sprang to life!

So happy now that I can actually see runtime data!

 

Here are a couple of other references that didn’t provide a solution for me, but this issue seems elusive and they might work in your case.

http://blogs.msdn.com/b/tysonpaul/archive/2014/07/01/workflow-analyzer-unhandled-exception-quot-the-system-cannot-find-the-file-specified-quot.aspx

http://blogs.inframon.com/post/2012/04/04/Management-Pack-Simulation-The-good-the-bad-and-the-%E2%80%A6-workaround.aspx

http://stackoverflow.com/questions/27596215/c-sharp-code-wont-launch-programs-win32exception-was-unhandled

Coupling time offset to monitoring interval

The requirements gathering phase of the management pack development lifecycle is critical to the success of the project. One thing that often comes out of this phase is a set of the company’s existing health check scripts, and this is an excellent opportunity to incorporate familiar company knowledge into a new monitoring solution.

These scripts might check for some condition that occurred in the past n minutes or hours – n is referred to as a time offset in this case. This article briefly describes a simple best practice for implementing this type of script in a custom data source.

In its simplest terms, the concept is this: n and the monitoring interval should share the same configuration.

For example, a script executes the following SQL query:

SELECT COUNT(Column1) as [Count], Name 
FROM MyDatabase
WHERE Timestamp BETWEEN DATEADD(minute,-60,GETDATE()) AND GETDATE()
GROUP BY Name

The part I want to draw your attention to is the WHERE clause in the SQL query, because this is where the time offset comes into the picture – identifying it here is what allows us to implement the coupling concept.

The query above returns records written in the past 60 minutes. When the script is plugged into a data source, that 60-minute window should correspond to the monitoring interval, which is configured on the scheduler that triggers script execution.

So, we conclude that the time offset should be driven by IntervalSeconds on the simple scheduler module.

Now that we know we can couple the time offset with the monitoring interval, we can easily use the same value for both by sharing the same configuration. To do this, two minor changes need to be made to any script you plan to incorporate using this concept:

1. Ensure time offset is in seconds.
2. Replace the time offset value with the IntervalSeconds configuration.

In this scenario, we cover both points by updating the first and second arguments of the DATEADD function like this:
 
WHERE Timestamp BETWEEN DATEADD(second,-$Config/IntervalSeconds$,GETDATE()) AND GETDATE()
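
Putting it together, the full query with the coupled offset would look like this (the table and column names are the same placeholders used above):

```sql
SELECT COUNT(Column1) AS [Count], Name
FROM MyDatabase
-- The lookback window is now driven by the module's IntervalSeconds configuration
WHERE Timestamp BETWEEN DATEADD(second, -$Config/IntervalSeconds$, GETDATE()) AND GETDATE()
GROUP BY Name
```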

Now compose the module as usual…
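
For reference, here is a minimal sketch of what that data source might look like. The module IDs, script name, and the OffsetSeconds parameter name are illustrative, not prescriptive – the key point is that a single $Config/IntervalSeconds$ value feeds both the scheduler and the script:

```xml
<DataSourceModuleType ID="MyMP.RecentRows.DS" Accessibility="Internal">
  <Configuration>
    <xsd:element name="IntervalSeconds" type="xsd:integer" />
  </Configuration>
  <ModuleImplementation>
    <Composite>
      <MemberModules>
        <DataSource ID="Scheduler" TypeID="System!System.SimpleScheduler">
          <!-- One shared value drives both the schedule and the query window -->
          <IntervalSeconds>$Config/IntervalSeconds$</IntervalSeconds>
          <SyncTime />
        </DataSource>
        <ProbeAction ID="Script" TypeID="Windows!Microsoft.Windows.PowerShellPropertyBagProbe">
          <ScriptName>RecentRows.ps1</ScriptName>
          <ScriptBody>
            param($OffsetSeconds)
            # Script runs the SQL query, substituting $OffsetSeconds into DATEADD
          </ScriptBody>
          <Parameters>
            <Parameter>
              <Name>OffsetSeconds</Name>
              <Value>$Config/IntervalSeconds$</Value>
            </Parameter>
          </Parameters>
          <TimeoutSeconds>300</TimeoutSeconds>
        </ProbeAction>
      </MemberModules>
      <Composition>
        <Node ID="Script">
          <Node ID="Scheduler" />
        </Node>
      </Composition>
    </Composite>
  </ModuleImplementation>
  <OutputType>System!System.PropertyBagData</OutputType>
</DataSourceModuleType>
```

With this shape, a single override of IntervalSeconds on the rule or monitor changes the schedule and the query window together.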
Why is using this concept a good practice?

Monitoring interval is a standard override parameter, and inevitably it will be overridden – maybe not on this particular monitor, and maybe not until you’re long gone. But don’t assume the customer is going to keep the default interval – ever.

By coupling script time offsets to monitoring intervals, a basic interval override will not skew monitor state.

Passing Data in Composite Workflow

I thought a quick and fun blog subject would be to build a composite workflow, passing data between each module, and writing output to the event log.

Workflow sequence:

  • Module1 outputs a ServiceName property bag data item – in this case “dhcp”.
  • Module2 accepts that ServiceName input, queries the service, and outputs Status property bag data item.
  • Module3 accepts that Status input, and simply writes it to the event log.
  • Composite module ties them together and executes in sequence.
  • Rule1 starts with a scheduler and ends with a write action, to execute the composite workflow.

Anyway – this is strictly a learning tool, but I thought it might be helpful to anyone interested in practicing composite workflows.

Here’s the full code:

<?xml version="1.0" encoding="utf-8"?> 
<ManagementPack SchemaVersion="2.0" ContentReadable="true" xmlns:xsd="http://www.w3.org/2001/XMLSchema">
<Manifest>
<Identity>
<ID>CompositeModuleTest</ID>
<Version>1.0.0.3</Version>
</Identity>
<Name>CompositeModuleTest</Name>
<References>
<Reference Alias="Windows">
<ID>Microsoft.Windows.Library</ID>
<Version>7.5.8501.0</Version>
<PublicKeyToken>31bf3856ad364e35</PublicKeyToken>
</Reference>
<Reference Alias="System">
<ID>System.Library</ID>
<Version>7.5.8501.0</Version>
<PublicKeyToken>31bf3856ad364e35</PublicKeyToken>
</Reference>
</References>
</Manifest>
<TypeDefinitions>
<ModuleTypes>
<WriteActionModuleType ID="composite" Accessibility="Internal">
<Configuration></Configuration>
<ModuleImplementation>
<Composite>
<MemberModules>
<WriteAction ID="com1" TypeID="module1" />
<WriteAction ID="com2" TypeID="module2" />
<WriteAction ID="com3" TypeID="module3" />
</MemberModules>
<Composition>
<Node ID="com3">
<Node ID="com2">
<Node ID="com1" />
</Node>
</Node>
</Composition>
</Composite>
</ModuleImplementation>
<OutputType>System!System.PropertyBagData</OutputType>
<InputType>System!System.BaseData</InputType>
</WriteActionModuleType>
<WriteActionModuleType ID="module1" Accessibility="Internal">
<Configuration></Configuration>
<ModuleImplementation>
<Composite>
<MemberModules>
<WriteAction ID="script1" TypeID="Windows!Microsoft.Windows.PowerShellPropertyBagWriteAction">
<ScriptName>script1.ps1</ScriptName>
<ScriptBody>

$api= new-object -comObject "MOM.ScriptAPI"
$api.LogScriptEvent('script1.ps1',100,4,"start")
$bag = $api.CreatePropertyBag()
$bag.AddValue("ServiceName","dhcp")
$bag

</ScriptBody>
<Parameters></Parameters>
<TimeoutSeconds>300</TimeoutSeconds>
</WriteAction>
</MemberModules>
<Composition>
<Node ID="script1" />
</Composition>
</Composite>
</ModuleImplementation>
<OutputType>System!System.PropertyBagData</OutputType>
<InputType>System!System.BaseData</InputType>
</WriteActionModuleType>
<WriteActionModuleType ID="module2" Accessibility="Internal">
<Configuration></Configuration>
<ModuleImplementation>
<Composite>
<MemberModules>
<WriteAction ID="script2" TypeID="Windows!Microsoft.Windows.PowerShellPropertyBagWriteAction">
<ScriptName>script2.ps1</ScriptName>
<ScriptBody>

param($ServiceName)
$api= new-object -comObject "MOM.ScriptAPI"
$api.LogScriptEvent('Script2.ps1',100,4,$ServiceName)
$service = get-service $ServiceName
$bag = $api.CreatePropertyBag()
$bag.AddValue("ServiceName",$serviceName)
$bag.AddValue("Status",$service.status.ToString())
$bag

</ScriptBody>
<Parameters>
<Parameter>
<Name>ServiceName</Name>
<Value>$Data/Property[@Name="ServiceName"]$</Value>
</Parameter>
</Parameters>
<TimeoutSeconds>300</TimeoutSeconds>
</WriteAction>
</MemberModules>
<Composition>
<Node ID="script2" />
</Composition>
</Composite>
</ModuleImplementation>
<OutputType>System!System.PropertyBagData</OutputType>
<InputType>System!System.BaseData</InputType>
</WriteActionModuleType>
<WriteActionModuleType ID="module3" Accessibility="Internal">
<Configuration></Configuration>
<ModuleImplementation>
<Composite>
<MemberModules>
<WriteAction ID="script3" TypeID="Windows!Microsoft.Windows.PowerShellPropertyBagWriteAction">
<ScriptName>script3.ps1</ScriptName>
<ScriptBody>

param($Status)
$api= new-object -comObject "MOM.ScriptAPI"
$api.LogScriptEvent('Script3.ps1',100,4,$Status)

</ScriptBody>
<Parameters>
<Parameter>
<Name>Status</Name>
<Value>$Data/Property[@Name="Status"]$</Value>
</Parameter>
</Parameters>
<TimeoutSeconds>300</TimeoutSeconds>
</WriteAction>
</MemberModules>
<Composition>
<Node ID="script3" />
</Composition>
</Composite>
</ModuleImplementation>
<OutputType>System!System.PropertyBagData</OutputType>
<InputType>System!System.BaseData</InputType>
</WriteActionModuleType>
</ModuleTypes>
</TypeDefinitions>
<Monitoring>
<Rules>
<Rule ID="rule1" Enabled="true" Target="Windows!Microsoft.Windows.Computer">
<Category>Operations</Category>
<DataSources>
<DataSource ID="schedule" TypeID="System!System.SimpleScheduler">
<IntervalSeconds>10</IntervalSeconds>
<SyncTime />
</DataSource>
</DataSources>
<WriteActions>
<WriteAction ID="WA1" TypeID="composite" />
</WriteActions>
</Rule>
</Rules>
</Monitoring>
</ManagementPack>

Logical Disk Free Space Monitor (extended)

The question of how to manage the logical disk free space monitor comes up time and time again. Just about every customer I’ve worked with, and people all over the forums, express their disdain for and frustration with managing overrides related to this monitor – and for good reason. It’s one of those monitors that touches every type of logical disk on every computer in the environment, and of course there will be different threshold requirements that require overrides – even the out-of-box flexibility of using both types of thresholds (MB and %) usually isn’t enough for us to “set it and forget it”.

This is a great opportunity to reduce administrative overhead by enabling local administrators to change monitoring thresholds directly on the local machine, without having to log in to the console and create overrides.

Because this is such a popular request, I’ve extended the Logical Disk Free Space monitor and added it to the Windows Monitoring (Extended) community pack.

Download the community pack from GitHub.

 

NOTE 1 – There are overrides defined in the pack that disable the vendor Logical Disk Free Space unit monitor. Once this pack is installed, that monitor will be replaced by this monitor. If you have overrides applied to the vendor monitors that you want to keep, those will need to be applied to this new unit monitor.

NOTE 2 – The unit monitor works out of the box exactly the same way as the original Logical Disk Free Space unit monitor. It will only behave differently if you implement the extended features of the unit monitor.

NOTE 3 – I chose to target Microsoft.Windows.LogicalDisk because the script data source, according to library documentation, should run fine against all versions of Windows. This reduces it from three monitors to just one.

 

Product knowledge has also been extended to include usage instructions:

[screenshot: product knowledge with usage instructions]

 

An alert generated by this monitor will look like this:

[screenshot: alert generated by the monitor]

 

A state change event (health explorer) for this monitor will look like this:

[screenshot: state change event in Health Explorer]

 

 


SCOM | BlueStripe | Live Maps Integration

Last year I helped a customer integrate System Center Operations Manager 2012, BlueStripe (FactFinder), and Savision Live Maps. Contact me if your company is planning to integrate these products – there are several things to consider to get the most out of this integration and make it a huge success.

I developed an enhanced integration pack that was integral in the success of this project. Read more in this case study written by BlueStripe:

https://bluestripe.com/case-studies/sap-application-performance-major-utility-keeps-sap-working/

(I am not affiliated with BlueStripe or Savision. I consulted on this project through SCOMskills.)

 
