Thursday, May 31, 2007

The new drop of the Web Services feature pack beta is available

It can be downloaded from the Early Programs site.

The beta, like the eventual GA version of the feature pack, is an optionally installable package. If you need the latest web services standards, you can add them onto WebSphere Application Server V6.1. If you don't, you can ignore it.

The latest beta is basically a fully functional version of the feature pack that is going through additional testing now. The feature pack has support for:
  • Web Services Reliable Messaging (WS-RM)
  • Web Services Addressing (WS-Addressing)
  • SOAP Message Transmission Optimization Mechanism (MTOM)
  • Web Services Secure Conversation (WS-SC)
  • Java API for XML Web Services (JAX-WS 2.0)
  • Java Architecture for XML Binding (JAXB 2.0)
  • SOAP with Attachments API for Java (SAAJ 1.3)
  • Streaming API for XML (StAX 1.0)
That's right - it's like Christmas morning for early adopting Java developers!

Wednesday, May 30, 2007

My surreal work experience for the day

I noted the title of an eWeek article called, "Microsoft to Reach Out to IBM, Cisco on Interoperability" and decided to take a look. Microsoft's on-again, off-again relationship with interoperability is frustrating for our customers - for example, when Microsoft chooses to support only subsets of a given specification. So if Microsoft wants to be serious about interoperability, that can only be a good thing, right?

So here's what Bob Muglia, Microsoft's senior vice president for the Server and Tools division, had to say:

"[Our customer council] basically told me, pretty directly, that while Microsoft's implementation was in great shape, IBM's and others were not, and that Microsoft needed to do a better job helping them do a better implementation," Muglia said. "And I had to think about that, as it is one thing for us to work with customers around interoperability, but quite another to go out and help a competitor build a better product to enable interoperability."

But Microsoft has now decided to go and talk to IBM and BEA Systems and a few others to help improve and define their interoperability. "Ultimately these guys have to make their products good, but there is a lot we can do working with them to make their products interoperate better with us," Muglia said.

IBM could not be immediately reached for comment on Muglia's remarks or its thoughts about improving interoperability with Microsoft.

After I picked myself off the floor, I asked myself, "Why would eWeek allow itself to be used like this?" Clearly, this was simply a marketing move, not a signal of genuine intention to work with IBM. If Microsoft really wanted to work with IBM on interoperability, would it launch this activity by telling eWeek that the goal was for Microsoft to help IBM with our implementation? And without a response from IBM? This is not a news story, it's an advertisement.

SO-UH... IT SOA Part 4. No, really separation of concerns.

In three earlier entries on the previous blog site, I explored why the SOA vision isn't really hard -- not to be confused with the reality in the industry, which is nonetheless quite difficult to sort through. So far I've argued that service orientation is important because it gives the industry a common approach to design, develop, and discuss distributed applications; that to be successful, SOA principles must at heart embrace our existing technology; and finally, that a greater level of flexibility can be achieved by ensuring that our service consumers and our service providers have a level of independence -- an appropriate separation of concerns, such that provider implementations can be changed independently of their consumers.

To really achieve a separation of concerns between the service consumer and service provider, it is often beneficial to introduce the concept of service virtualization. Service virtualization means that we don't actually publish the concrete service provider to consumers; we publish an intermediary which exposes the formal service interface and business contract. In its simplest form, the intermediary is just a pass-through which merely passes parameters and context to the real, concrete service provider implementation. In more complicated forms, dynamic decisions can be added to the intermediary to choose the most appropriate service provider implementation. A very simple business example is the famous "getStockQuote" service. Some stock quotes are delayed 20 minutes but are free to obtain, whereas real-time quotes usually cost the consumer a small fee. Both provide the same service; one has a greater expense to the business, but better quality of service. This intermediary is sometimes called a mediation.
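
The idea can be sketched in a few lines of plain Java. This is an illustration only, not any product API; all class names (QuoteMediation, DelayedQuoteProvider, and so on) are hypothetical, and the quote values are made up:

```java
// A minimal sketch of service virtualization: the consumer sees only the
// intermediary, which exposes the formal service interface and routes the
// call to a concrete provider. All names and values here are illustrative.
interface StockQuoteService {
    double getStockQuote(String symbol);
}

class DelayedQuoteProvider implements StockQuoteService {
    public double getStockQuote(String symbol) {
        return 100.0; // pretend: a 20-minute-delayed quote, free to obtain
    }
}

class RealTimeQuoteProvider implements StockQuoteService {
    public double getStockQuote(String symbol) {
        return 101.5; // pretend: a real-time quote, billed to the consumer
    }
}

// The intermediary (mediation): in its simplest form a pure pass-through,
// here with one dynamic decision added - choose the provider based on
// whether the consumer subscribes to real-time data.
class QuoteMediation implements StockQuoteService {
    private final StockQuoteService delayed = new DelayedQuoteProvider();
    private final StockQuoteService realTime = new RealTimeQuoteProvider();
    private final boolean realTimeSubscriber;

    QuoteMediation(boolean realTimeSubscriber) {
        this.realTimeSubscriber = realTimeSubscriber;
    }

    public double getStockQuote(String symbol) {
        StockQuoteService provider = realTimeSubscriber ? realTime : delayed;
        return provider.getStockQuote(symbol); // pass parameters through
    }
}
```

The key point is that the consumer's dependency is on QuoteMediation alone, so either provider implementation can be swapped or added to without the consumer changing.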

Service virtualization is one of several beneficial aspects of Enterprise Service Bus patterns.

Bottom line: separation of concerns is a good thing for building flexible, agile applications. Best practices and newer programming constructs can achieve a good deal of it. Service virtualization complements these concepts and extends separation of concerns to older programming styles, patterns, and languages. It lets us embrace legacy applications, utilizing them where they are hosted.

Steve Kinder

Tuesday, May 29, 2007

Web Service Transaction support in WAS

The OASIS standards consortium recently announced the completion of the WS-Transaction 1.1 standard, which specifies Web services protocols for two well-known transaction models: WS-AtomicTransaction (WS-AT) for atomic two-phase commit (2PC) transactions and WS-BusinessActivity (WS-BA) for compensating transactions.

WAS has provided support for WS-AT 1.0 and WS-BA 1.0 (the input specifications to the OASIS WS-Transaction Technical Committee) since 6.0 and 6.1 respectively.

WS-AT support in WAS (since 6.0)

WS-AT is suited to short-running, tightly-coupled services due to its 2PC nature and the consequence that participants/resources are held in-doubt during the second phase of 2PC. It is used to distribute an atomic transaction context between multiple application components such that any resources (e.g. databases, JMS providers, JCA resource adapters) used by those components are coordinated by WAS (using XA) to an atomic outcome. It is typically used between components deployed within a single enterprise where there are two primary scenarios requiring WS-AT:

  • A SOA deployment requiring atomic transactional outcomes between two or more service components.
  • Transaction federation between heterogeneous runtime environments. WAS WS-AT interoperability has been tested with CICS and with Microsoft .NET 3.0 framework in Windows Vista.
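
To make the 2PC model concrete, here is a toy coordinator in plain Java. This is emphatically not the WAS API - WAS drives this via XA under the covers and no application coding is needed - it just illustrates the prepare/commit protocol that WS-AT distributes across services:

```java
// A toy illustration of the two-phase commit model that WS-AT distributes.
// Not a WAS or JTA API - names (Participant, Coordinator) are illustrative.
import java.util.ArrayList;
import java.util.List;

interface Participant {
    boolean prepare();   // phase 1: vote yes/no
    void commit();       // phase 2: make the work durable
    void rollback();     // undo, if any participant voted no
}

class Coordinator {
    private final List<Participant> participants = new ArrayList<>();

    void enlist(Participant p) { participants.add(p); }

    // Returns true if the transaction committed atomically.
    boolean complete() {
        for (Participant p : participants) {
            if (!p.prepare()) {            // any "no" vote aborts everyone
                for (Participant q : participants) q.rollback();
                return false;
            }
        }
        // Between the last "yes" vote and the commit calls, participants
        // are in doubt - the window the post mentions as the cost of this
        // tightly-coupled model.
        for (Participant p : participants) p.commit();
        return true;
    }
}
```

Either every participant commits or every participant rolls back; that all-or-nothing outcome is exactly what "atomic transactional outcomes between two or more service components" means above.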

WS-AT is to Web services what the Object Transaction Service (OTS) is to remote EJBs, and in WAS neither requires any coding on the part of the application developer. WAS ensures that the transaction context of the requester component is established by the container of the target component regardless of whether the target is invoked as an EJB (using the EJB remote interface) or a Web service (using JAX-RPC). The transaction can be started at the requester using the standard UserTransaction interface or, in the case of an EJB, as a standard EJB container-managed transaction; WAS takes care of deciding whether to use OTS or WS-AT to propagate that transaction context on remote requests depending on the type of invocation - no Java coding is required. If the requester is a Web service client, the application assembler simply needs to indicate, when assembling the requester component in AST or RAD, that Web Services Atomic Transaction should be used on requests, as described in the InfoCenter task Configuring transactional deployment attributes.

Distributed transactions in WAS, whether distributed between remote EJBs using OTS or Web services using WS-AT, benefit from WAS transaction high-availability (HA) support. In a non-HA WAS configuration, transaction recovery following a server failure occurs when the WAS server is restarted. In a WAS HA configuration, transaction recovery may also occur when a failed server restarts but in-doubt transactions can also be peer-recovered by an active server (which is processing its own, separate, transactional workload) in the same core group. Configuring WAS for transactional high availability is described in the InfoCenter topic Transactional high availability and also in a dedicated paper with some additional background, Transactional high availability and deployment considerations in WebSphere Application Server V6.

There is further information on WS-AT in the topic Web Services Atomic Transaction support in WebSphere Application Server in the InfoCenter.

Additional features of the WS-AT support in WAS 6.1

A documented limitation with the WAS 6.0 WS-AT support is that the WAS WS-AT context cannot be propagated through a firewall. Actually, the context itself can be propagated through a firewall but the participant registration protocol flow that follows typically cannot. Since WS-AT is primarily used within a single enterprise this is not too onerous a restriction but it clearly limits the topologies in which WS-AT can be used in WAS 6.0. This limitation is removed in WAS 6.1 where the WS-AT service can be configured with the hostname of the HTTP(S) proxy server to which protocol messages should be sent for routing to the WS-AT service. While any generic HTTP proxy can be used, there are additional capabilities built into the WebSphere Proxy Server (shipped as part of WAS ND) that recognize WS-AT context and WS-AT protocol messages to provide edge-of-domain support for WS-AT high-availability and work-load-management, essentially extending these capabilities to non-WAS clients. The InfoCenter has more information on this in the topic Web Services transactions, firewalls and intermediary nodes.

WS-BA support in WAS (since 6.1)

WS-BA is appropriate for long-running and loosely-coupled services. Its compensation model means that resource manager locks are not held for the duration of the transaction, as a consequence of which intermediate results are exposed to other transactions. But WS-BA is not exclusively for use by long-running applications - short-duration, tightly-coupled applications can benefit from a compensating transaction model just as much. Scenarios where short-running applications may choose a compensating rather than ACID model include:

  • the use of resources which cannot be rolled back as part of a global (2PC) transaction. For example, an application that sends an email cannot roll this back after it has been sent - but it could send a follow-up email. Other examples of non-2PC resources are LocalTransaction/NoTransaction resource adapters (RAs) in J2EE.
  • avoidance of in-doubt transactions in resource managers, where the application wishes for an atomic outcome but can tolerate intermediate state being available to other transactions.
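
The email example in the first bullet can be sketched as follows. This is illustrative plain Java under assumed names (EmailGateway, CompensableEmailStep), not a WAS interface: the send cannot be rolled back, but a compensating follow-up email can undo its business effect:

```java
// A sketch of compensation for a resource that cannot be rolled back.
// All names are hypothetical; the real WAS support is described below.
import java.util.ArrayList;
import java.util.List;

class EmailGateway {
    final List<String> sent = new ArrayList<>();
    void send(String msg) { sent.add(msg); }  // irrevocable once sent
}

class CompensableEmailStep {
    private final EmailGateway gateway;
    CompensableEmailStep(EmailGateway gateway) { this.gateway = gateway; }

    // Forward work: send the confirmation. There is no undo for this.
    void doWork() { gateway.send("Your order is confirmed."); }

    // Driven only if the surrounding unit of work fails after doWork() ran:
    // a follow-up email compensates for the one we could not recall.
    void compensate() {
        gateway.send("Please disregard the previous email - the order was cancelled.");
    }
}
```
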

WAS 6.1 introduces a construct called a BAScope which is essentially an attribute that can be added to any WAS core unit of work (UOW), for example a global transaction (e.g. started by the EJB container for an EJB using container-managed transactions (CMT) or by a bean using the JTA UserTransaction API). A BAScope associates a Business Activity with whatever WAS core UOW an application runs under, and has a context that is implicitly propagated on remote calls made by that application. If the remote calls are Web service calls then the BAScope context is propagated as a WS-BA CoordinationContext. If the remote calls are RMI/IIOP EJB calls then the BAScope is propagated in an IIOP ServiceContext encapsulation of a WS-BA CoordinationContext (which is actually a CORBA Activity Service context as mentioned in the footnote of this post).

Any deployed WAS component runs in a WAS core UOW scope - for example a global transaction or a local transaction containment (which is the WAS environment for what the EJB spec refers to as an "unspecified transaction context"). Any resources (JDBC, JMS, JCA RAs) used by the component are accessed in the context of that UOW. For example, an EJB with a transaction attribute of "RequiresNew" accesses resources in the scope of a global transaction. So here's what is new with BAScopes: any EJB deployed in WAS 6.1 or later may be assembled to have a BAScope associated with its core UOW, where the BAScope has the same lifecycle as the core UOW to which it is associated and completes in the same direction. An EJB that runs under a BAScope (regardless of the core UOW) may be configured (through assembly) with a compensation handler, which is a plain old Java object that implements the interface methods close and compensate. The compensation handler logic needs to be provided as part of the application (and is assembled into the application EAR file) but it is the WAS BAScope infrastructure which ensures that the compensation handler gets driven appropriately when the BAScope ends, regardless of any process failures. The BAScope infrastructure also persists any compensation instance data provided by the EJB during forward processing for later use by the compensation handler.

The figure below shows the AST pane for adding a BAScope compensation handler to an EJB

AST Assembly options for BAScope
In the above example, the ScenarioBFlightProviderACompensationHandler application class implements close and compensate methods. If the EJB's transaction is rolled back, then the compensate method will be driven, which is an opportunity to compensate any non-transactional work. If the EJB's transaction is committed, then the WAS BAScope support either promotes the compensation handler to any parent BAScope or, if there is no parent, drives the close method (which might typically perform some simple clean-up or else just be a no-op). BAScope parent-child relationships are managed by WAS and can be used to compensate work previously committed as part of a global transaction. For example, if EJBa running under transaction T1 calls EJBb running under transaction T2 and both EJBs are configured to run under a BAScope, then the BAScope associated with T2 is a child of the BAScope associated with T1, as illustrated in the diagram below.

Nested BAScopes
EJBb can define a compensation handler class and register some data relevant to transaction T2. If T2 commits, then that compensation handler (and compensation data) is promoted to BAScope2. If EJBa then hits an exception so that its transaction T1 is rolled back, the compensation handler is called to compensate for the work previously committed in T2.
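
That promotion lifecycle can be modeled in a few lines of toy Java. To be clear, the real behavior is WAS infrastructure; this hypothetical BAScope class only mimics the rules just described (commit promotes the handler to the parent; a later rollback in the parent drives compensate):

```java
// A toy model of BAScope handler promotion - illustrative only, not the
// WAS implementation. The close/compensate pair mirrors the handler
// interface described in this post.
import java.util.ArrayList;
import java.util.List;

interface CompensationHandler {
    void close();       // outcome confirmed: simple clean-up or no-op
    void compensate();  // undo work that was previously committed
}

class BAScope {
    private final BAScope parent;  // null for a top-level scope
    private final List<CompensationHandler> handlers = new ArrayList<>();

    BAScope(BAScope parent) { this.parent = parent; }

    void register(CompensationHandler h) { handlers.add(h); }

    // Completes in the same direction as the associated core UOW.
    void complete(boolean committed) {
        for (CompensationHandler h : handlers) {
            if (!committed) h.compensate();           // rollback path
            else if (parent != null) parent.register(h); // promote to parent
            else h.close();                           // top level: done
        }
        handlers.clear();
    }
}
```

So in the EJBa/EJBb example: EJBb's handler is registered with the child scope, promoted when T2 commits, and finally compensated when T1 rolls back.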

For further information on WS-BA, you can read the topic Transaction compensation and business activity support in the InfoCenter. There is also a simple and yet comprehensive WAS 6.1 WS-BA sample that can be downloaded from developerWorks.

It should be noted that the compensation support in WAS 6.1 is focused exclusively on tightly-coupled components using a synchronous request-response interaction pattern and has no notion of any business process choreography associated. More complex compensation support for microflows and BPEL processes, including WID development tooling, requires WebSphere Process Server.

Footnote: WAS BAScopes and standards
A little while ago I wrote a short piece on the heritage of WS-Coordination and its relationship with the CORBA Activity service and the J2EE Activity service (JSR 95). When we designed the BAScope feature for WAS 6.1, we chose to use the J2EE Activity service architecture as the basis for our WS-BA implementation. The WAS 6.1 BAScope support is architected as a J2EE Activity "High Level Service" (HLS). This is purely internal and has no impact on applications that use BAScope interfaces - we already had, in WAS, this extensible Activity service infrastructure (which is also used as the basis for WAS ActivitySession support) and simply built on this for our BAScope support. So while the WAS BAScope Java application programming interface is WAS-specific, the BAScope support itself is built upon open standards specifications. As mentioned earlier, BAScope contexts distributed between Web services use a standard WS-BA CoordinationContext and BAScope contexts distributed between EJBs use a standard CORBA CosActivity::ActivityContext service context.

-- Ian Robinson

Friday, May 25, 2007

Darryl K. Taft comments on IMPACT 2007

Darryl K. Taft at eWeek has a new article called "IBM Shows Impact of SOA". Excerpt:

Steve Mills, senior vice president, IBM Software, in a statement, said that "SOA has been a growth engine for IBM, as well as our customers, because it gives companies the much-needed flexibility to focus on achieving business results without being hindered by the constructs of established infrastructures. IBM's differentiation is in its ability to address business challenges using the right balance of business and technical skills along with an unmatched, multipronged approach to meeting customers' needs."

Mills also said he sees the worlds of Web 2.0 and SOA coming together to offer new opportunities for both vendors and users.

"We're bringing the people impact into the picture and leveraging things like RSS and Atom," Mills said. Web 2.0 technologies are bringing more usability, "consumability" and user-driven content into the equation, he said.

It's interesting to me that both Sandy Carter's book and Steve Mills' statement tied together SOA and Web 2.0. At first glance, it seems like the only thing they have in common is that they both are big buzzwords... and hey, why use one buzzword when you can use TWO! But looking a little deeper, I think in some ways Web 2.0 technologies are doing a great job of delivering SOA, particularly with regard to mashups. Stitching together a quick mashup using a bunch of existing services is very SOA. It also shows very clearly that SOA is not a technology offering. It's a way of doing business. Whether you use AJAX and JSON or EJB3 and web services, you can choose to architect your solutions using SOA principles.

Book review: "designing the obvious" by Robert Hoekman, Jr.

I recently finished Robert Hoekman's new book, "designing the obvious: a common sense approach to web application design". I enjoyed the book and thought that Hoekman had some good insights into what it means to produce quality design, though I also had some disagreements and found some of his advice to be of limited value in the domain of enterprise software.

One of the things that I liked best was the section on "understanding users". This has long been a pet peeve of mine -- while good designers obviously need to understand their users, I've often felt that running usability tests on designs was of limited value. And I think the issue is that we often ask the wrong questions. Hoekman has a great summary of this problem:

For example, if I were designing and building an application used to track sales trends for the Sales department in my own company, my first instinct... would be to hunt down Molly from Sales and ask her what she would like the application to do. Tell me, Molly, how should the Sales Trends Odometer display the data you need?

I would never ask my father the same type of question. My father knows next to nothing about how Web sites are built, what complicated technologies lie lurking beneath the pretty graphics, or the finer points of information architecture. He wouldn't even be able to pick a web application out of a lineup, because he doesn't understand what I mean by the word application...

What my father does know is that the model rockets he builds for fun on weekends require parts that he has to hunt down on a regular basis... If I told my dad he could buy these parts online, he'd be interested. If I asked him how the application should filter the online catalog and allow him to make purchases, he'd look at me like I was nuts.

Molly from Sales would likely do the same thing.

Molly has no idea what your software might be able to do, and she almost certainly doesn't have the technical chops to explain it.

I think this is a great point. We need to understand our users and we need to understand how they use the product (or proposed new designs), but we should not be asking them to design our product. We're the professional software designers - we shouldn't abdicate our responsibilities onto the user. It's nice to see Hoekman saying that in print.

I also liked what Hoekman had to say about turning beginners into intermediates. Basically, the biggest proportion of most products' user base is the "perpetual intermediate": someone who has learned just enough to get by and has no plans to ever learn more. One of our goals is to help beginners reach this stage. WAS does quite a bit in this space, perhaps most notably in our "Command Assist" functionality in the console, which allows people to transition easily from using the console to using scripting.

The only major problem I had with the book, and it's not all that major, is that it was written with an assumption that bad design happens out of ignorance of many of the principles he is espousing -- that people are not doing things the right way because they don't know the difference between the right way and the wrong way. Clearly, this is sometimes true, but I think it's the exception to the rule (at least when there are professional usability people involved in the project... and the audience for the book seems to be professional usability people). The simple fact is that there is a cost associated with improving the design of a feature or a product. Designing things well takes more time and people. It usually means that the system is taking on more of the complexity from the user, which requires more coding and testing. Everyone wants good design, but when push comes to shove, I think many of our customers would admit that if better design means fewer features and a longer time to market... well, it's not a no-brainer that better design should win the day. It's a balance. Sometimes better design is the right answer, and sometimes time-to-market pressures are too important to lengthen the release cycle with an expensive, iterative design phase. Being able to tell the difference between those cases is the trick, and not an easy one. But the point is that the "obvious" design techniques that Hoekman discusses are already common design techniques in most shops... when there's time to do them.

But overall I thought it was an excellent book, especially for people who are trying to go from beginner to intermediate in their design skills. It's well-written and engaging, and that counts for a lot.

Jython scripting improvements in WAS V6.1

Two things we hear all the time from customers:

1. Scripting is used extensively and sometimes exclusively for managing production environments. In many shops, scripting is the primary interface to the WAS product.

2. Scripting is hard.

In WAS V6.1, we took a major step forward in improving the user experience for scripting by adding a Jython editor to the Application Server Toolkit that provides the type of "code assist" developers have come to expect in other languages.

I'm compelled to note that the script editor does not support Jacl. We're encouraging customers to move from Jacl to Jython, and this Jython editor is one way to encourage that movement.

Musings about Sandy Carter's new SOA book

From a user experience standpoint, I'm a "true believer" when it comes to SOA. I think SOA is exactly the right approach to make real world improvements in the amount of complexity that users need to consume in order to be productive. And I use those words very carefully, because I am of the school of thought that "complexity" is a zero-sum game. Those of us in user experience never eliminate complexity, we just move it around. SOA gives us an architecture to move around complexity in a more efficient manner, so that users can be productive in their particular job without needing to understand the complexity of the entire ecosystem.

So when I picked up Sandy Carter's new book, "The New Language of Business: SOA & Web 2.0", I was interested to see if she would touch on this perspective. Clearly, SOA is a vast topic that can be tackled from many different angles, so there was no guarantee that user experience would be touched at all.

Naturally, given her background, Sandy focuses on the business side of the SOA equation. She hammers home the points that being flexible and responsive requires an alignment of business needs and IT needs, and neither can do the job by themselves. In addition, she makes it clear that this is nothing new... from Henry Ford to McDonald's, flexibility, responsiveness, and efficiency have always been of major concern to enterprises. What has changed now is the tools available to take this to the next level, such as standards-based technology to make interoperability an assumption rather than a herculean effort. In particular, I enjoyed the case studies that described how real companies were adopting and benefiting from SOA approaches (including IBM). The one thing that stood out in these case studies was their diversity. Every company approached SOA differently, based on their business needs, their IT skills and architecture, and their process maturity. This is yet another reason why I appreciate SOA (and IBM in general) -- we don't try to claim that there are one-size-fits-all solutions to any of the problems facing enterprise customers today. And Sandy clearly did not try to make SOA appear easier than it really is.

But what about user experience? I'm happy to say that the book touched on user experience topics in several places, though it wasn't a major theme. When it was mentioned, she tended to focus on the user experience of the end user, rather than the implementers. Again, this is not surprising given that the focus of the book is about the business justification for going to a SOA architecture, and clearly one of those benefits is to allow customers to create a more seamless experience for their end users. For example, here's an excerpt from one of the case studies:

Standard Life group of companies Plc, headquartered in Edinburgh in the U.K., has become one of the world's leading financial services companies. The majority of Standard Life group of companies' business and revenue is generated through independent financial advisors (IFAs) that help their customers select financial and insurance products from a number of different assurance companies. Many IFAs utilize industry-sponsored portals to obtain product information, compare prices from multiple providers, and provide a single view of a customer's holdings.

Standard Life group of companies realized that, to remain competitive, it needed to offer its IFAs easier, more flexible, and quicker online access to its financial information. Standard Life group of companies also needed to reduce the cost of doing business with multiple business channels. By reducing its costs, not only could it improve its bottom line, but it also could improve its competitive standing and its relationships with its IFAs; more self-service and quicker processes via automation could help the IFAs improve their margins.

This is a message repeated in several places throughout the book - that SOA techniques can allow our customers to provide a better user experience for their customers. But I would also add that SOA can provide a better user experience for IBM's middleware customers as well, namely because it will allow them to pick and choose who in their organization is capable of handling the most complexity, and architecting their solutions accordingly.

Microsoft Host Integration Server - redux

A few days ago, I made a post about Microsoft Host Integration Server that probably deserves more discussion.

I was having dinner with a customer at the zBLC a few weeks ago, and the customer brought up an interesting question - "When Microsoft talks to large enterprise customers, are they intentionally lying or do they just not understand what they're saying?" It's often hard to tell the difference between mendacity and ignorance, but eventually we decided that Microsoft simply didn't understand large enterprise customers. They don't understand the personal commitment, the scale, or the intersection of all the cost pressures. Mainframe ignorance is just one example of this trend.

To some extent, it's hard to blame Microsoft too harshly for not understanding mainframe costs, because my impression is that even within large enterprise shops that have existing mainframes, the mainframe business is put into a position where they're constantly defending the mainframe investment. As one customer put it, the problem is that the costs of the mainframe are so well understood... they are able to tell their executives exactly how much they are spending on the hardware, software, personnel, power, etc. This actually puts them at a disadvantage because things are often done ad hoc on the distributed side of the house, or at the very least there are different divisions responsible for different pieces of the distributed puzzle... one division runs the server farm buildings, another owns the machines, another manages the software, etc... so that it's hard to get a full accounting of all the costs associated with the non-mainframe business. And many of the costs are more "hidden" than the mainframe costs. Sure, the grizzled mainframe veterans command good salaries, but the technicians who service the server farms outnumber them a hundred to one. And this doesn't even bring up server utilization, which approaches 100% on the mainframe and approaches 0% on distributed.

Clearly, the mainframe isn't always the answer. But just as clearly, there are cases where putting load on the mainframe is cheaper than building a new warehouse for your next server farm.

Maybe someday Microsoft will get it.

I'm not holding my breath, though.

Microsoft Host Integration Server?

Over on the Mainframe Blog, there's been an amusing chain of posts here, here, and here, on the topic of "What's the point of Microsoft Host Integration Server?". Just to pique people's interest, here is what Microsoft had to say about it:

I am most amused at the idea of mainframe guys arguing cost. In the interest of transparency, Timothy might also do the math on what the customer would save if they ripped the mainframe out altogether. Not only do they not need to buy HIS, but they can save a lot more on the recurring costs of the mainframe.


Last time I checked (and I haven’t been paying attention for a while), the mainframe was three orders of magnitude more expensive per MIP than x86 and was falling further and further behind. You’re crazy to put any new workload on the mainframe. This is why IBM is always peddling the fad of the moment on the mainframe to see if they can hoodwink people to maintaining or even increasing the workloads on their mainframe. Run Linux and Java on the mainframe, they say, never mind the fact that these cross platform approaches should drive people to the lowest cost hardware, not the highest cost.


To put this in perspective, we’ve just been through the biggest computing buildout ever in the last decade with the Web, and the mainframe is nowhere to be seen. Even when IBM has tried to pay customers to run portions of their web sites on a mainframe, they have failed.

Obviously it will be news to our z/OS WAS customers that running portions of their websites on a mainframe has "failed".

More evidence that when it comes to the mainframe, Microsoft just doesn't get it.

What do the PlayStation 3 and the IBM Mainframe have in common?

They're both fun? They both come in whatever color you want, as long as it's black? They are both controlled remotely? They're both looking for more partner application support?

Well, apparently the answer is "They will both have IBM's cell processor" according to this article in eWeek magazine.

Where were you weeks ago?

I was at a customer site last week to help them with planning for a WebSphere AppServer migration. They had started doing some work earlier, had some "challenges", and made some subsequent calls. After our meeting they gave me the quote in the title. In this, as in many cases, an ounce of prevention is worth a pound of cure. For example, when moving to WebSphere AppServer v6.1 you can no longer reliably access our jars directly in the /lib directory. Some of them have moved, and they have to be properly initialized. The appropriate way to reference our jars is to call our setupCmdLine from your script to set up the classpath.

The problem is that you have to find the prevention (in this case, information) before the pain occurs. We believe we have some help: check out this site for planning, tips, and "gotchas" before migrating WebSphere AppServer versions:

Dana Duffield

What the heck is a core group bridge?

Core groups were introduced in WebSphere Application Server Network Deployment V6.0, and my impression is that they are still not well understood by customers. Part of the reason is that, by default, a single core group is created for the entire cell and all new servers are added to it, which means users can go quite a while without ever needing to know anything about the core group concept. It's not until you try to create large-scale topologies that core groups become a critical component in the configuration. The one-sentence definition of a core group: a group of processes that talk to each other to maintain knowledge about each other's state for purposes of failover/HA.

But what about a core group bridge? A core group bridge is a communication vehicle between multiple core groups, either within a cell or between cells. One interesting scenario where a core group bridge is handy is when you have a cell for your proxy server in a DMZ and a cell for your application servers on the other side of the firewall. You want your proxy server to know about the state of the application servers for obvious routing purposes. How do you do this? Via a core group bridge.

I've been trying to learn more about core groups recently, and a colleague pointed me to a developerWorks article entitled Core group bridges 101 that provides a good introduction to the topic. I highly recommend it for anyone working with large topologies or trying to create communication between cells.

Moving blog

Old location was here.