WebSphere Integration Developer: a correction on Filter inputs in XML Transformations

When developing an XML Transform in WebSphere Integration Developer (WID), you provide conditions on either the “Condition” or the “Filter Inputs” tab of the transform properties. The Condition tab is straightforward: you enter an XPath expression that is evaluated at runtime to true or false. The Filter Inputs tab, however, plays a trick on you. The help text above the tab indicates that you should enter a condition and that the transform will apply to the input elements for which the condition evaluates to true.

But if you enter a condition that evaluates to a boolean, you get a runtime error:

Can not convert #BOOLEAN to a NodeList!

This is your clue that the help prompt and the documentation are incorrect: you need to enter an XPath expression that evaluates to the list of nodes to which you want the transform to apply, not a condition. A quick inspection of the generated XSLT confirms this finding: the value you enter into “Filter Inputs” is inserted directly into the select attribute of an xsl:for-each element. For example, if you enter the Filter Inputs expression "$Status[Lev:Level='Error' or Lev:Level='Warn']", the XSLT will contain this:

<xsl:for-each select="$Status[Lev:Level='Error' or Lev:Level='Warn']">
  <xsl:copy-of select="."/>
</xsl:for-each>
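To illustrate the distinction outside of WID: a node-list XPath expression selects the matching elements themselves, while a boolean one merely yields true or false. Here is a minimal sketch in plain Python using only the standard library; the element names follow the example above (ElementTree's XPath subset has no 'or' operator, so two predicates are combined):

```python
import xml.etree.ElementTree as ET

# Sample input mirroring the $Status example above (illustrative only)
doc = ET.fromstring(
    "<Statuses>"
    "<Status><Level>Error</Level></Status>"
    "<Status><Level>Info</Level></Status>"
    "<Status><Level>Warn</Level></Status>"
    "</Statuses>"
)

# Node-list expressions: each returns the matching Status elements
# themselves, analogous to what Filter Inputs feeds into xsl:for-each/@select
selected = doc.findall("Status[Level='Error']") + doc.findall("Status[Level='Warn']")
print(len(selected))  # 2
```

The returned value is a list of element nodes, which is what an xsl:for-each can iterate over; a bare true/false result is what triggers the NodeList conversion error.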

Effective WSDL service versioning in WSRR

WebSphere Services Registry and Repository (WSRR) is IBM’s tool for SOA governance. From a technical perspective, the tool focuses on two kinds of services: WSDL services and SCA modules. WSDL services in WSRR are associated with WSDL documents (physical files serving as a vehicle for the WSDL XML). Services and documents may be versioned. The concept of versioning is fairly important in WSRR, which is to be expected of a governance tool. Each object governed by WSRR, including services, is equipped with a version attribute, which makes up part of its identity. WSDL services are identified by name, namespace and version. Sounds great, but let’s take a closer look at the way versions work in WSRR and how they help enable service governance. I will concentrate on WSDL services. SCA module versioning is a bit different, since an SCA module may have a version number “baked in”.
A normal service lifecycle involves changes to service interfaces expressed in WSDL. Some changes are backward-compatible and some are not. Imagine you have a service that is widely used within your organization. If a change is needed that is not backward-compatible, it would not be a good idea to simply replace the old service with the new one without notice. Organizations frequently adopt a versioning scheme that adds a version number (possibly in the form of a timestamp) to the service namespace. This helps clearly define which version of the service is in use. This approach is advocated in the IBM developerWorks article “Best practices for Web Service versioning”.
For example, you might have an AccountLookup service that has just undergone a non-backward-compatible upgrade. The old service may have been AccountLookupService in the namespace “http://www.acme.com/services/accounts/2010/03”, and the new one is in the namespace “http://www.acme.com/services/accounts/2010/12”. This works well for all concerned, but because of the different namespaces, these two services will not be recognized as the same service. The value of the version attribute does not matter: you have two completely different WSDL services in WSRR. How do you tell WSRR that the two WSDLs represent two successive versions of the same service?
For a solution, look outside the purely technical realm of governance. WSRR defines several non-technical concepts that assist in establishing governance: Business Capability, Business Service, Business Application. To understand why one would use these concepts, let’s remember the SOA mantra that every service exists to fulfill a business purpose. SOA is a business-IT initiative, and Service-Oriented Architecture cannot exist without service identification. Once the AccountLookup service has been identified, the business function it carries out should be documented. This holds true for all services, whether newly developed from business requirements or pre-existing IT capabilities being exposed as services. In WSRR terms, recognizing a business function means defining a Business Service.
A Business Service in WSRR may have a charter or other documents attached to it. And naturally, you can associate any number of WSDL services with a Business Service.
So here is a recipe for service versioning in WSRR:

  1. Once a service has been identified, create a Business Service definition in WSRR. It will represent the business function of the WSDL service.
  2. Each WSDL version, once loaded into WSRR, should be associated with the Business Service, which will serve as its container.

One interesting corollary of this approach is that it effectively marginalizes the value of the version attribute for WSDL documents and services. You may still use it, e.g. to relate to the version number of the WSDL document in an external document repository. However, I do not expect to see it as an identity differentiator for WSDL services: there will be no WSDL services in WSRR with the same name and namespace but different version numbers. I do not see this as a problem, though.

My Prolifics colleague Rajiv Ramachandran suggested an enhancement to this versioning scheme: add “previous version” and “next version” relationships to WSDL services and make all versions of the same WSDL into a doubly-linked list. This makes it possible to easily navigate the successive versions of the same service from the latest to the oldest and back.
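A plain-Python sketch of this idea, with illustrative class and attribute names (this is not a WSRR API, just the data structure the relationships form):

```python
class ServiceVersion:
    """One WSDL service version; prev/next model the suggested
    'previous version'/'next version' WSRR relationships."""
    def __init__(self, namespace):
        self.namespace = namespace
        self.prev = None  # "previous version" relationship
        self.next = None  # "next version" relationship

def link(older, newer):
    """Wire two successive versions into the doubly-linked chain."""
    older.next = newer
    newer.prev = older

v2010_03 = ServiceVersion("http://www.acme.com/services/accounts/2010/03")
v2010_12 = ServiceVersion("http://www.acme.com/services/accounts/2010/12")
link(v2010_03, v2010_12)

# Navigate from the latest version back to the oldest:
print(v2010_12.prev.namespace)  # http://www.acme.com/services/accounts/2010/03
```

With both directions present, a dashboard or script can walk the chain either way without querying all versions up front.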

WSRR: WS-I validator and loading WSDL files

To WebSphere Service Registry and Repository (WSRR) practitioners, I’d like to recommend this excellent developerWorks article explaining how to leverage the WS-I compliance validator that is part of that software. Among the things you will learn: how to make this validator enforce arbitrary restrictions expressed in Schematron.

Just one thing to keep in mind: this article was written for WSRR 6.2, and you will not be able to reproduce its scenario with the default settings of WSRR 6.3 (original or with Fixpack 1). This is because, in the default setup, the MAKE_GOVERNABLE event will not fire for WSDL documents containing endpoint definitions. I recommend replacing MAKE_GOVERNABLE with the CREATE event in the sample code – then it will work.

Here’s how the story unfolds. When you load a service document (a WSDL document or an SCA module) into WSRR, a complex sequence of actions is taken by the software behind the scenes. The WSDL document is validated. The document is parsed, and the concepts relevant to WSRR are identified. Those include service, port, binding, operation, message and endpoint definitions, as well as XML schema elements and types. All these objects are linked together. As the objects are created, a configurable modifier kicks in to enable governance and initiate the appropriate lifecycle. Once one item in a collection of linked objects has governance enabled, all other linked items are pointed to the same governance record. From that point on, they are governed together: it is not possible to enable governance separately for any other item in the linked collection.

The configurable modifier named “Triggers” contains the following fragment:

<!-- Mapping to push the relevant items through the Service Endpoint Lifecycle -->
<mapping>
  <entity>
    <any-of>
      <model-type model-uri="http://www.ibm.com/xmlns/prod/serviceregistry/v6r3/ServiceModel#ServiceEndpoint"/>
      <model-type model-uri="http://www.ibm.com/xmlns/prod/serviceregistry/v6r3/ServiceModel#MQServiceEndpoint"/>
      <model-type model-uri="http://www.ibm.com/xmlns/prod/serviceregistry/v6r3/ServiceModel#SOAPServiceEndpoint"/>
      <model-type model-uri="http://www.ibm.com/xmlns/prod/serviceregistry/v6r3/ServiceModel#ExtensionServiceEndpoint"/>
    </any-of>
  </entity>
  <configuration name="InitiateEndpointLifecycle"/>
</mapping>

And the configurable modifier named “InitiateEndpointLifecycle” includes this:

<?xml version="1.0" encoding="UTF-8"?>

<action-configuration xmlns="http://www.ibm.com/xmlns/prod/serviceregistry/Actions"
name="InitiateEndpointLifecycle"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://www.ibm.com/xmlns/prod/serviceregistry/Actions Actions.xsd">

<!-- Transition the selected object into the SLA Lifecycle -->
<make-governable-action uri="http://www.ibm.com/xmlns/prod/serviceregistry/lifecycle/v6r3/LifecycleDefinition#InitiateEndpointLifecycle"/>

</action-configuration>

These two modifiers are both enabled in the Governance Enablement profile. Putting all this together: if a SOAP service endpoint is found in a WSDL document, a SOAPServiceEndpoint object will be created and the endpoint lifecycle will be initiated. The WSDL document itself will be linked to the governance record of the SOAP endpoint. It will not be possible to enable governance on the WSDL document separately.

Indeed, when a sample WSDL is loaded, we see exactly this effect. When I click on the newly loaded WSDL document and proceed to the Governance tab, I see the following:

Governance Tab of WSDL document

If I click on the “Root governance record” link, it takes me to the SOAP Service Endpoint:

SOAP Endpoint governance tab

WebSphere Business Monitor: troubleshooting events not flowing through a monitor model

One of the most common problems in WebSphere Business Monitor runtime administration is events not making it all the way to the dashboards (Portal or Business Space). Applications are running normally and emitting events that should be input to WebSphere Business Monitor (colloquially known as BAM), but BAM produces no output: no instance data is seen in the dashboards. I recently had an opportunity to troubleshoot two BAM installations in different organizations on consecutive days, and I faced this problem in both cases.

Here are some things to check if you come across this situation:
1. A good first step in problem determination, if you have access to the monitor database, is to check the tables in the monitor model schema. There are only a few tables in this schema, so the task is easy. Here’s the list of tables from one simple model I developed:
CONSUMED_EVENT_T
PROCESSED_EVENTS
EVENT_SEQUENCE_INDICES
INCOMING_EVENTS
MCT_XTRCTMDLM_20091201151007
KCT_KPI_TRIGGER_20091201151007

There is one table per monitoring context (the one whose name starts with MCT), plus tables for incoming, consumed and processed events, tables for KPIs and, in this case, one more service table. If this is the first time you attempt to run a model, you are just interested in cardinalities (the number of rows in each table). Otherwise, you’d be looking at the number of recent rows.
If there is no data in any of the tables, events are not being routed to BAM. If there are rows in the INCOMING_EVENTS table only, events are being sent to BAM, but the monitor model is not processing them. Finally, if the model works correctly, you should see one row in the monitoring context (MCT_*) table per, well, monitoring context created by the original events.
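The triage above can be sketched as a small query loop. This is a minimal sketch in plain Python, using an in-memory SQLite database as a stand-in for the real monitor database; the table names mirror the list above (the MCT table name is shortened for illustration, and real schemas/table names will vary per model):

```python
import sqlite3

# Stand-in for the monitor database: incoming events present,
# but no monitoring context rows (the "model not processing" case)
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE INCOMING_EVENTS (id INTEGER);
CREATE TABLE MCT_XTRCTMDLM (id INTEGER);
INSERT INTO INCOMING_EVENTS VALUES (1), (2);
""")

def diagnose(conn):
    """Map table cardinalities to the three outcomes described above."""
    counts = {t: conn.execute("SELECT COUNT(*) FROM " + t).fetchone()[0]
              for t in ("INCOMING_EVENTS", "MCT_XTRCTMDLM")}
    if counts["INCOMING_EVENTS"] == 0:
        return "no data anywhere: events are not being routed to BAM"
    if counts["MCT_XTRCTMDLM"] == 0:
        return "incoming rows only: events arrive but the model is not processing them"
    return "model appears to be processing events"

print(diagnose(conn))  # incoming rows only: events arrive but the model is not processing them
```

The same two COUNT(*) queries, run against the actual monitor schema with your DB2 client, give the answer directly.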

2. Check that your monitor model is startable. Inspect the SystemOut.log of the server running your model’s moderator module.
This message indicates successful startup of the monitor model:
[12/2/09 13:53:53:375 EST] 00000011 ConsumerDaemo I com.ibm.wbimonitor.mm.TestModel3.20081024161629.moderator.ConsumerDaemonHandlerImpl startDaemon() CWMRT3005I: The Monitor Model "TestModel3 20081024161629" is starting consumption on this server or cluster member in SERIAL_ST mode with reordering=false from PRIMARY_JMS_QUEUE.

On the other hand, this message indicates failure:
CWMRT2009W: The MM application is not in a startable state. This is usually because lifecycle steps are not complete.

If you see this error, perform step 3.

3. Check that the mandatory lifecycle steps have been performed. Log on to the admin console, go to Applications -> Monitor Models -> your model -> click on the version -> make sure that all lights are green.
Monitor version deployment - lifecycle steps complete OK
“Schema created” is the only lifecycle step that impacts operational data flow through the monitor model.
If the “Schema created” step is red, database schema creation for the model did not complete. Click the “Manage schema” link on the right. If your database environment is not restrictive and the monitor database user has administrative permissions, you can create the schema by clicking the “Run Create Schema Script” button. In the case of restrictive databases, such as DB2 for z/OS, this button is not even enabled. You will need to export the DDL by clicking “Export Create Schema Script” and work with your DBA to create and configure the schema.

4. Check that CEI distribution is active.
Click on your model version, and check the text under “CEI distribution mode”.
Monitor model version CEI distribution is Active
It should be active AND it should be in the right mode. In most cases (with the exception of test environments), you are likely using queue bypass, in which case the CEI distribution mode should read “Active (monitor model queue bypass)”.
To change the distribution mode, click the “Change CEI distribution mode” link on the right.
CEI distribution is Active with Queue bypass
Select the desired value from the “Target” drop-down and click OK. Restart the target CEI server/cluster!

5. Check that the correct CEI server has been configured as the event source. This is a common problem in complex multi-cluster topologies.
Click on your model (not the version) and then click on “Change CEI configuration”.
CEI source server selection
Check the table under “Event group profile list name” at the bottom of the properties page. In the case of a complex topology with multiple CEI servers in the cell, all CEI-enabled clusters/servers will be listed individually. Make sure the right CEI server is selected (checkbox ticked). If you need to make a change, follow this procedure. First, note that the checkboxes are made inactive (unavailable) if at least one model version has active CEI distribution. Deactivate CEI distribution for all model versions (change the distribution mode to Inactive) as described in the previous step. Wait a minute for the change to become effective. Then come back to this property page – the checkboxes will be enabled. Select the right cluster/server, click Apply, and then change the distribution mode back to Active for all model versions as described in step 4.

Update on Alphablox in WebSphere Business Monitor

Earlier this year I blogged about installing Alphablox as part of WebSphere Business Monitor. Real production environments require clustering, and that was difficult to accomplish. Using DB2 on z/OS for the data repository was a particularly daunting task. Mike Killelea commented on my original post, noting that IBM disclaimed support for this scenario.

This time of year, IBM is preparing version 7 of its BPM stack for release. From what I have seen, Alphablox is much better integrated into Business Monitor (BAM). Remember, until now you had to run a separate Alphablox installer (with the exception of the non-production-grade standalone profile). Now, the ABX install is fully integrated. I specifically inquired about z/OS database support and was told that it is ON. I’ll post an update when I learn more.

Database creation script for Alphablox install

Suppose you install Alphablox, say as part of the WebSphere Business Monitor product. Alphablox has been and still remains (as of version 6.2) an important integral part of WBM, providing dimensional analysis of monitored data and KPIs. Alphablox requires a number of database objects to function. By default it will attempt to create the database tables it needs upon first run. But what if you are using DB2 for z/OS or another restrictive database environment, where programs are not allowed to run DDL? The DBAs hold the keys to the database structure, and you are required to submit any DDL to a DBA for review and execution. Not unreasonable for the kind of structured corporate IT where I’d expect to find WebSphere Business Monitor.
So you want to get your hands on the DDL Alphablox would attempt to execute and hand it to your DBA. This is possible, even though the scripts are not on the surface as they are for the rest of WBM (or other products in the Business Process Management stack).

While the scripts do not exist in the form of standalone DDL files, the DDL statements are available. Just inspect the properties file named $ABX_ROOT/repository/servers/$dbtype.dmlsql, where $ABX_ROOT is the root of the Alphablox installation directory and $dbtype is your database type, e.g. db2_zos.dmlsql.
Look for the properties named DDL.CREATE1 through DDL.CREATE6 and DDL.INDEX1 through DDL.INDEX10.

You will find entries like this:

DDL.CREATE1            = CREATE TABLE ABX_VERSION (DESCRIPTION VARCHAR(32) NOT NULL, VALUE VARCHAR(64) NOT NULL)

This is not a complete script, of course. You will have to add things like tablespaces and permission grants, but this is exactly the DDL the program needs.

DROP statements and a number of updates/inserts round out this file.
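A hypothetical helper that pulls those properties out of the file could look like this. The DDL.CREATE*/DDL.INDEX* property-name pattern follows the post; the script itself and the sample excerpt are illustrative, not part of the Alphablox product:

```python
import re

# Illustrative excerpt of a $dbtype.dmlsql properties file
sample = """
DDL.CREATE1 = CREATE TABLE ABX_VERSION (DESCRIPTION VARCHAR(32) NOT NULL, VALUE VARCHAR(64) NOT NULL)
DDL.INDEX1  = CREATE INDEX ABX_VERSION_IX1 ON ABX_VERSION (DESCRIPTION)
SOME.OTHER  = UPDATE ABX_VERSION SET VALUE='1'
"""

def extract_ddl(text):
    """Collect DDL.CREATE*/DDL.INDEX* statements for DBA review,
    skipping the DML (update/insert) properties in the same file."""
    stmts = []
    for line in text.splitlines():
        m = re.match(r"\s*DDL\.(CREATE|INDEX)\d+\s*=\s*(.+)", line)
        if m:
            stmts.append(m.group(2).strip() + ";")
    return stmts

print(len(extract_ddl(sample)))  # 2
```

Running something like this against the real db2_zos.dmlsql yields a DDL draft that the DBA can extend with tablespace and GRANT statements.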

UPDATE 12/5/2009. In response to Mike Killelea’s comment below: I worked with Mike on the DB2 for z/OS-based Alphablox installation. Mike correctly mentions that the ABX install over DB2 for z/OS is not straightforward. The installation procedure is nothing like a regular ABX install, and you have to jump through some hoops. Yet in the end it is possible to host the Alphablox repository on DB2 for z/OS. In my post, I wanted to give a pointer that would be useful in doing so. It is certainly not all you will need to do. I’m not in a position to disclose the details of this procedure – please contact IBM support. I just wanted to correct the record: the task is doable.

Cloudburst: My refrigerator runs WebSphere

IBM has pre-announced WebSphere Application Server Hypervisor Edition and the WebSphere Cloudburst appliance. I understand both will be officially made available during SOA Impact this week. WAS HE is an application server in a VM image. Cloudburst is even better – it lets you have your application server in an appliance. Remember “my coffee maker runs Java”? Well, now your refrigerator runs WebSphere! WAS ND is pre-installed, and you have the option of turning on Feature Packs with a simple checkbox. While at this time there are only two publicly available Feature Packs for WAS 7 (SCA and Web 2.0), another one is on its way (XML, which will include XSLT 2.0, XPath 2.0 and XQuery 1.0). Profile creation is also included – just choose which profile you want. This way, after a few minutes of initial configuration you have a working installation of WAS ready to run.

This is a very interesting move by IBM, building on the success of its DataPower line of appliances, which has expanded from the original 3 models to the current 5 with the recent additions of the B2B gateway (XB60) and the low-latency messaging appliance (XM70). Now this new form factor comes to the application server world. Both WAS HE and Cloudburst will help dramatically reduce environment creation overhead: a server environment can be stood up in a very short time.

While WAS HE and Cloudburst may be used together, this is not the only way. One could say that they represent opposite ends of the software “hardness” spectrum. Cloudburst is clearly out on the “hard” side. But WAS HE is just a VM image, which may be brought online when needed. When no longer necessary, it can be shut down and left to wither on a backup shelf. This gives organizations the ability to manage server capacity more easily.

Cloudburst, advertised as a “private cloud” solution, can be deployed in more “mundane” implementations having nothing to do with clouds at all.

It would be interesting to see whether this move leads to more appliance offerings. I’m thinking the next in line will be widely used software running on WAS – Portal and Process Server.

UPDATE 5/6/2009: Feature Pack for XML is now available as an open beta here.