Thursday, October 8, 2009

Update on JPA 2.0 and Apache OpenJPA

developerWorks just posted an article from Kevin Sutter, our lead for Java persistence, on JPA 2.0 and the work at Apache on OpenJPA. Click here to learn about the new features coming in JPA 2.0, and get an early preview via the early access drivers at Apache here.

Friday, October 2, 2009

XML-In-Practice Day #2 Summary

Keynote - XML and Web 3.0 - Mills Davis


This keynote was presented in a very interesting style - very visual, very content packed. Not only did he talk about Web 3.0, the semantic web, he also talked about Web 4.0 and work he believed was already being done to support it. He defined Web 4.0 as ubiquitous (picture a computer implanted in the back of the skull), where everything (not just everyone) is connected and has some level of intelligence. For Web 3.0, he talked about representing the meaning of content and using that meaning to improve the way we work with the web, making the internet more relevant, usable, and enjoyable. A basic example: today we might expose the contents of a database to the web without giving away the schema. If we instead exposed both the content and the schema, computers could find ways to link this data to other similar web repositories or services and create new value based on what the data means.

The eXtensible Business Reporting Language, XBRL - Evan Lenz


This session started with an excellent overview of XBRL (an XML format required for US financial reporting by the SEC), then walked through its issues, and then proposed a new approach.

Among the issues he cited:

a) All concepts are global (no namespaces, no hierarchy), which means (as shown in one real-world example that required 12,000 concepts) there is no way to work with the data except through tools, no ad hoc queries are possible, and concept names end up averaging 49 characters in length (the hierarchy being built into the name).

b) The use of schemas and XLink requires too much plumbing for too little value.

c) A high noise-to-signal ratio: with the linkages being so verbose and kept separate, it becomes very hard to work with XBRL without tools (not human readable, not ad hoc queryable).

In general, his proposal was to use XML directly to represent the structure of reports. Even though XBRL is built on many XML concepts, Evan suggested that XBRL's current use of XML is incidental.

Work Flows, Standards, and Innovations Panel


Along with two other vendors, I participated in this panel. I again demonstrated the WebSphere XML Feature Pack (this time focusing mostly on XSLT 2.0). I went over some slides showing Rational tooling for XML, XSD, DTD, XPath 1.0, and XSLT 2.0, including editors, validators, wizards, debuggers, executors, and more. I then showed a live demo of some future Rational Application Developer work we're considering in the XML space.

Tools from NIST Created to Support the Development of XML-Based Content Standards Through the Application of Naming and Design Rules (NDR) - KC Morris


This presentation included a very interesting demo of tools NIST has created to validate schemas and instance documents. The validation is centered on rules, defined by an organization, that ensure all applications of XML technology use consistent naming and definition patterns for data and metadata. My guess is it would have flagged XBRL instances as being in error (if that format weren't required by the SEC). Also of interest: there were rules for OAGi BODs, which are heavily used in automotive manufacturing.

The use of XML in the Irish Government's eCabinet Initiative - Michael Boses


This presentation covered moving the old process for distributing data across the Irish government (large volumes of paper delivered by military personnel) to electronic form, stored in an XML content repository and accessed through personalized portals. In the end, they showed the tablet PCs built into the cabinet table, meaning that all the way to the top there was the ability to stay entirely digital. Some interesting points were: a) even though XML (DITA) made this possible, they avoided talking in XML terms while implementing the project, as XML is totally behind the scenes and doesn't "sell" - instead they just used terms like "smart documents"; and b) they piloted the program with stakeholders to work out the bugs and purposely avoided turning the solution live until it was totally ready (for a few months they used the electronic process right up to the main cabinet meeting and then printed the documents for the meeting). They also kept XML out of the users' sight by using XSLT for web rendering and Quark's tool for editing XML content in Microsoft Word.

XML Tools Summit


As something of an extension of the panel earlier in the day, this was a session where all participants cataloged what tools they are using in the XML space and what tools they need but can't find, and exchanged information about what tools work well. This tools summit will be carried forward online after the conference, as IDEAlliance hopes to create an "Angie's List of tools". It was very interesting to see how many tools people use (I counted over 30) and how critical these tools are to publishing, standards, and data scenarios.

Wednesday, September 30, 2009

XML-In-Practice Day #1 Summary

I'm at IDEAlliance XML-In-Practice 2009 in DC this week talking about IBM WebSphere XML and learning about other XML products and technologies. There are four tracks: Publishing and Media; Applications, Foundation and Interoperability (AKA the technology track); Electronic Medical Records and President Obama's Economic Plan; and e-Government. Based on my totally unscientific head count, attendance across those tracks is 50%, 25%, 12%, and 13% respectively.

I attended the following sessions today -

Keynote - XML Enabled Medical Records - Dr. Clement McDonald


I learned a) how technology is used to create repositories of information within hospitals/caregivers, b) how much data these systems exchange in the form of HL7 messages, c) how distributed systems share data within a localized region for decision making and consistency of care, d) how Web 2.0 is helping replace the very complicated forms-based desktop apps that are trusted today, e) how doctors are happy to have a wealth of electronic information to help them, but see entering new data into the system as something they cannot afford given already limited time with patients (hence the call for smarter devices and speech-to-text), and f) how varied the data is across medical interactions, all the way from very structured to very narrative. The two best parts of the talk were the 0.5 seconds he showed an XML document - stressing that the business aspect of this is key and the technology just has to exist behind the scenes to make it possible - and seeing a doctor throw stuffed pigs to the audience (a joke on how the LOINC standard sounds like "OINK").

Overview of President Obama's Electronic Medical Records Plan and Health Information Technology Architecture - John Quinn


I learned a) how much money is set aside to reward providers that move to standardized electronic health records and what timelines exist to qualify for those rewards, b) how aggressive these timelines are given typical system implementation times, and c) how the rewards depend on a certified system demonstrating meaningful use, which is challenging to guarantee. I really took away a deeper appreciation not only for the complexity inside a single hospital, but also for how challenging a national mandate will be (especially for individual physicians).

XSLT Stylesheets from Version 1.0 to 2.0 - Priscilla Walmsley


I didn't take a lot of notes in this session as I'm rather knowledgeable about this topic. However, I'd say it was a great presentation given its before-and-after example approach.

Customer Use Case: How IBM Simplifies Complex Content Developing and Publishing Across the Enterprise. - Daniel Dionne


A great presentation that didn't just cover what DITA for content development/publishing is, but showed the entire lifecycle and processes needed to make wide adoption work. It went into some rather impressive use cases of the technology, along with the challenges, within IBM.

Technical Overview of RELAX NG - Bob DuCharme


Can't say I was a huge fan of this talk, but that is likely because I'm a data-oriented XML guy working on standards and customer situations that are very dependent on XML Schema. Bob discussed areas where RELAX NG is better than XML Schema, mostly for document-oriented scenarios. I would have liked more mention of XML Schema 1.1 and how it changes the story. I did get some good value out of understanding why some document-centric customers are still using DTDs.

HL7's use of XML - Paul Knapp


Learned how HL7 V3 XML isn't really used yet in US e-healthcare apps (every hospital is exchanging internal messages in HL7 V2). Abroad, new projects less than three years old are very likely to use HL7 V3. We should see more of it in the States with new projects, especially as we start to consider the need to share information beyond a single hospital. Paul also mentioned binary XML and how it could help with many of HL7 V3's current issues.

MarkLogic Beer and Demo Jam


I did a 4-minute demo, along with nine others, during the reception. You get 5 minutes to do a demo with no preparation, and the best demos win free stuff. I demoed the XML Feature Pack and the 40 samples we ship, along with the end-to-end blog checker sample written in XPath 2.0, XSLT 2.0, and XQuery 1.0. The samples I showed have a nice CSS skin and dashboard we've added since Beta 4, and that visual polish over the XML technologies drew positive comments from the crowd. Didn't win anything in the end. Oh well.

After hours


Finally, I was able to do dinner with about 15 folks who regularly attend these conferences. Some great conversation with people from all parts of the industry.

Sunday, September 20, 2009

WebSphere eXtreme Scale cache provider for Dynacache


The dynamic cache engine is the default cache provider for the Dynacache APIs and frameworks. Starting with WebSphere Application Server 7.0.0.5 and 6.1.0.27, Dynacache allows WebSphere eXtreme Scale to act as its core caching engine.

You can configure the dynamic cache service to use WebSphere eXtreme Scale as your cache provider instead of the default dynamic cache engine.

This gives customers the ability to leverage transactional support, improved scalability, high availability, and other XTP features without making changes to their existing Dynacache caching code.
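To make that concrete, here's a rough sketch of typical Dynacache application code using the DistributedMap API (the JNDI name below is the WAS default for the base object cache instance; the loadFromBackend() helper is just my stand-in for an expensive call). The point is that none of this code changes when eXtreme Scale is configured as the provider - the swap happens entirely in the cache instance configuration:

import javax.naming.InitialContext;
import com.ibm.websphere.cache.DistributedMap;

public class CacheLookup {
    public Object fetch(String key) throws Exception {
        // Look up the default Dynacache object cache instance from JNDI
        DistributedMap cache = (DistributedMap)
                new InitialContext().lookup("services/cache/distributedmap");

        Object value = cache.get(key);
        if (value == null) {
            value = loadFromBackend(key); // stand-in for an expensive computation
            cache.put(key, value);        // identical call whether the provider is the
        }                                 // default engine or WebSphere eXtreme Scale
        return value;
    }

    private Object loadFromBackend(String key) {
        return "computed-for-" + key;
    }
}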


This capability can also be enabled on WAS fix packs 6.1.0.23, 6.1.0.25, and 7.0.0.3 via APAR PK85622.


Tuesday, September 15, 2009

XML Feature Pack Thin Client Demo - Zero to running in 6 minutes

NOTE: This post is out of date. For the same demo on the released product, see this link.

As we announced, the latest beta release of the XML Feature Pack contains the Thin Client for XML. As well as letting you use it in client applications that talk to WebSphere Application Server, the thin client allows for quick and easy evaluation of the technology. Here I show a quick demo using the following simple XML, XPath, XSLT, and XQuery files, along with Java files to invoke them.

Demo 7 - XML Feature Pack Beta 4 Thin Client for XML


Direct Link (HD Version)


Here are the files for the demo:

thinclientdemo.zip


Which contains (HelloXSLT.java, HelloXQuery.java, HelloXPath.java, simple.xsl, simple.xq, locations.xml)
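If you want a feel for how little code is involved before watching the video: the demo's Java files use the feature pack's own API, but purely as an illustration (and borrowing the demo's file names), a minimal XSLT client has roughly this shape in standard JAXP - treat this as a sketch, not the demo's actual source:

import javax.xml.transform.Transformer;
import javax.xml.transform.TransformerFactory;
import javax.xml.transform.stream.StreamResult;
import javax.xml.transform.stream.StreamSource;

public class HelloXSLT {
    public static void main(String[] args) throws Exception {
        // Compile the stylesheet, then run it over the input document,
        // writing the transformed output to the console
        TransformerFactory factory = TransformerFactory.newInstance();
        Transformer transformer = factory.newTransformer(new StreamSource("simple.xsl"));
        transformer.transform(new StreamSource("locations.xml"), new StreamResult(System.out));
    }
}

With the real demo, you just put the thin client jar on the classpath when you compile and run - that's the whole "zero to running in 6 minutes" story.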

Monday, September 14, 2009

Rational Automation Framework for WebSphere

Leigh and David spent the better part of 8 years working on the WebSphere Foundation Architecture and WebSphere Application Server - specifically in the areas of administration, configuration, systems management, and performance tooling. In mid-2007 we both took the opportunity to expand our horizons and explore new options within IBM, though never really moving too far away from WebSphere systems management. Since leaving the WebSphere Architecture and Development organization in 2007, we have been working in the IBM Rational brand focusing on software delivery automation. We are excited that the result of that effort, the Rational Automation Framework for WebSphere, is now available as of May 15, 2009.

IBM Rational Automation Framework for WebSphere is an optional feature that extends and enhances IBM Rational Build Forge around WebSphere Application Server and WebSphere Portal environments. This customizable management framework is designed specifically to automate installation, patching, configuration management, and application deployments for IBM WebSphere Application Server and IBM WebSphere Portal.

Rational Automation Framework for WebSphere reduces the complexity of managing your IBM WebSphere Application Server and IBM WebSphere Portal environments by addressing common pains such as:
  • The lack of consistency and/or repeatability in the installation, configuration, and application deployments in IBM WebSphere Application Server and IBM WebSphere Portal environments as a part of the Software Delivery Lifecycle.
  • The challenge of connecting disparate application development, test, and operations groups into a single traceable and enforceable process for the Software Delivery Lifecycle.
  • The inability to manage IBM WebSphere Application Server and IBM WebSphere Portal environments across multiple Software Delivery Lifecycle environments and/or beyond the cell scope, which leads to the development of costly, difficult-to-support, homegrown solutions.
  • The lack of change history, auditability, and governance around the changes to the IBM WebSphere Application Server and IBM WebSphere Portal environment configurations.
  • The need to be able to quickly reproduce IBM WebSphere Application Server and IBM WebSphere Portal environments in the case of a disaster.

For those companies facing IBM WebSphere Application Server and IBM WebSphere Portal infrastructure management challenges, the key to delivering greater operational productivity with quality is automation. By eliminating manual and complex tasks when managing IBM WebSphere Application Server and IBM WebSphere Portal environments, Rational Automation Framework for WebSphere can provide accuracy, reliability, repeatability, and consistency to help cut costs and improve productivity and quality.


David Brauneis
Chief Architect, Rational Automation Framework for WebSphere

Leigh Williamson
Distinguished Engineer & Chief Architect, Rational Software Delivery Automation

XML Feature Pack Beta 4 - Now With Thin Client

A month ago, we announced the Beta 3 refresh, which was specification complete on XPath 2.0, XSLT 2.0, and XQuery 1.0. On Friday we released a Beta 4 refresh, which continues to remove the remaining restrictions and adds one major new feature - the Thin Client for XML with WebSphere Application Server.

As noted on the open beta download page,

The beta includes the IBM Thin Client for XML with WebSphere Application Server. The thin client allows access to the same Feature Pack API and runtime functionality (XPath 2.0, XSLT 2.0, XQuery 1.0) available in the WebSphere Application Server Feature Pack for XML. The thin client can be copied to multiple clients running Java SE 1.6 in support of a WebSphere Application Server V7.0 installation.


This means that if you have client applications that talk to WebSphere Application Server, you can copy the XML Feature Pack thin client jar to those clients and get the same XML programming model support there.

We also believe this thin client support will help "new to WebSphere" folks evaluate the technology. As such, we have added a download link for the jar file on the open beta website. Click on that link, then click on "Local install using Download Director or HTTP", and follow through to download "IBM Thin Client for XML with WebSphere Application Server com.ibm.xml.thinclient_1.0.0.jar". I hope to show a demo of how fast you can get up and going with the thin client in the next day or so.

Saturday, September 12, 2009

Hidden nodes in XPath - fail on namespaces by me

I was working on a sample with the XML Feature Pack last week to show good integration between the XML Feature Pack Beta and databases that support XML columns, such as DB2 pureXML.

I ran into an issue that stumped me for a while and wanted to write about it so maybe others won't be slowed down as long as I was. I was writing an XCollectionResolver and an XResultsResolver that connected to the database. For some reason, while these resolvers returned data that looked valid, the data couldn't be navigated with XPath 2.0. I saw things like this in XQuery:

let $a := trace($domainSpammers/spammers/spammer/email, "email =")

Traced nothing, while

let $a := trace(node-name($domainSpammers/*/*/*), "threestars = ")

Traced email, uri, and name. I even put domainSpammers into the output of the XQuery and could see the spammers/spammer/email tree:

<spammers xmlns="http://www.w3.org/1999/xhtml" xmlns:xs="http://www.w3.org/2001/XMLSchema">
  <spammer>
    <name>Joe Smith</name>
    <email>jsmith@email.com</email>
    <uri>http://joe.uri.com</uri>
  </spammer>
</spammers>

I looked at this for a few hours. Luckily, one of my team members saw the issue. As you can tell from the title of this post and the XML above, the issue was that the elements are in the XHTML namespace.

It turns out I was writing the document from XSLT 2.0 using the new multiple result documents feature. While I wanted the page returned to the browser to be in the default namespace of XHTML, I didn't want the data written to the database to be in the XHTML namespace. However, since I never said so explicitly, it was mistakenly written in that namespace.

Next time, and maybe this will help you, I'll add namespace-uri() to my debugging arsenal:

let $a := trace(
  (node-name($domainSpammers/*/*/*),
  namespace-uri($domainSpammers/*/*/*)), "threestars = ")

Which would clearly have shown that email was in the XHTML space:

threestars = email http://www.w3.org/1999/xhtml

Which would have saved me a few hours of pulling my hair out.
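For what it's worth, the same trap exists well beyond XQuery. Here's a small self-contained Java sketch (my own pared-down data, using the JAXP XPath API, which is XPath 1.0 - but the namespace rules are identical): an unprefixed path step only matches elements in no namespace, so the first query comes up empty until a prefix is bound to the XHTML namespace:

import java.io.StringReader;
import java.util.Iterator;
import javax.xml.namespace.NamespaceContext;
import javax.xml.xpath.XPath;
import javax.xml.xpath.XPathFactory;
import org.xml.sax.InputSource;

public class HiddenNodes {
    public static void main(String[] args) throws Exception {
        String xml = "<spammers xmlns='http://www.w3.org/1999/xhtml'>"
                   + "<spammer><email>jsmith@email.com</email></spammer></spammers>";

        XPath xpath = XPathFactory.newInstance().newXPath();

        // No prefix bound: matches only no-namespace elements, so this prints nothing
        System.out.println("unprefixed: " + xpath.evaluate(
                "/spammers/spammer/email", new InputSource(new StringReader(xml))));

        // Bind a prefix to the XHTML namespace and the "hidden" nodes reappear
        xpath.setNamespaceContext(new NamespaceContext() {
            public String getNamespaceURI(String prefix) { return "http://www.w3.org/1999/xhtml"; }
            public String getPrefix(String uri) { return "xh"; }
            public Iterator<String> getPrefixes(String uri) { return null; }
        });
        System.out.println("prefixed: " + xpath.evaluate(
                "/xh:spammers/xh:spammer/xh:email", new InputSource(new StringReader(xml))));
    }
}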

Friday, September 11, 2009

SCA 1.0.1 Beta Refresh Available

The SCA team in WebSphere has revved the 1.0.1 beta yet again. It now utilizes IBM Installation Manager (IM), which allows the 1.0.1 level to be laid down on a vanilla WAS without having to install the 4Q08 GA (1.0.0) level of code.

Rather than go into detail about the additional capabilities, I'll just tease you to go look at the official early program website for our beta.

Enjoy.

Wednesday, September 9, 2009

SPEC working on standard SOA Benchmark

I continue to be interested in helping customers understand the performance of Service Oriented Architecture (SOA) applications. As you can see here, I'm working (as the chair) in this SPEC working group, along with considerable input from Oracle and VMware, on pushing forward a standard benchmark for SOA-based applications and the middleware infrastructure on which they run.

The interesting parts (in my opinion) of this press release are:

The benchmark will be developed by a trusted benchmarking organization with input from all SPEC members. Also, as mentioned in the press release, we're looking for participation from other interested parties. If you're interested in joining SPEC or providing input, let Bob Cramblitt know. I'm truly excited to see a SOA benchmark come from SPEC, as they have a proven track record of creating industry-trusted benchmarks for middleware performance.

While the initial focus is Web Services, Enterprise Service Buses, and Business Process Management (BPEL), the group realizes these technologies are only part of the entire SOA picture. It's good to see the group start with a sensible core and grow the effort over time.

The group is working to stay flexible in its support of multiple approaches to implementing these technologies. This is key, as SOA is an architectural approach and there are multiple ways to implement such technologies. At the same time, in an industry standard benchmark it's important to audit and standardize the common implementations to confirm they reflect what would be used in typical customer deployments.

I'll continue to post publicly shareable information as the working group makes progress. If you have any quick questions, post them here and I'll raise them with the working group.

Thursday, August 27, 2009

Apache Wink and JAX-RS

I've received a number of questions from customers about IBM and JAX-RS support. I wanted to mention to interested parties that IBM started an Apache Incubator project (Apache Wink) with HP and others to build an open source JAX-RS implementation. JAX-RS (if you aren't aware) is JSR 311 - the JCP-defined standard programming model for building REST-based services. You can check out the original proposal (here), the project site (here), and the project Wiki (more useful) (here). We've also started a "WebSphere Web Services" specific blog (here) where we will be discussing features, capabilities, and other items going forward. This post (here) talks about some of the capabilities in the first (and fairly complete) release of the incubator project, which was just formalized today.

Take a look...

Tuesday, August 25, 2009

On Twitter?

A couple of us have an active presence on Twitter if you use it. Andrew Spyker (@aspyker) and I (@burckart) are both pretty active, as are some of our product managers like Savio Rodrigues (@SavioRodrigues) and Erik Kristiansen (@erikkristiansen). Another great person to follow is Billy Newport (@billynewport), our WebSphere eXtreme Scale architect. Feel free to reach out to any of us on Twitter.

Monday, August 24, 2009

Information Center for XML Feature Pack Beta Posted

In software development, it's not just about creating the runtimes, APIs, and install images. For a customer, information that helps you understand how to use a product's features is as important as, or more important than, the product itself. This is why we've spent so much time creating samples for the XML Feature Pack. Today we go a big step further by releasing the XML Feature Pack Information Center.

The Feature Pack for XML Information Center is comprehensive documentation for the XML Feature Pack. The content is live and can change as we add more articles. If there are features you have questions about, please let me know and I'll make sure we consider them for future updates. Also note that we have moved the Javadoc for the API into this information center.

Friday, August 21, 2009

What's going on with Communications Enabled Applications?

We have had a lot of activity on the WebSphere Communications Enabled Applications blog. I have posted several blogs highlighting scenarios from retail to finance to inventory management. I even came up with another of my non-award-winning videos here, describing the customer experience the CEA Feature Pack can bring to a website.

Beyond that, a whole bunch of information has been presented, and I wanted to give you a quick summary of, and references to, some of those blogs.

First, Roger wrote a great (and thorough) cheat sheet in PDF format on how to get up and running with the CEA Feature Pack and the Plants by WebSphere sample. If you are looking to try it out, his document is a great place to start.

James has written up the specific versions of vendor systems we tested the CEA Feature Pack against. He also wrote a more in-depth piece on how to configure the feature pack to work with Avaya AES.

Andy wrote some getting-started and advanced-usage blogs on the Web 2.0 widget capabilities in the CEA Feature Pack. To get started, he wrote a blog on embedding the telephony widgets, like click-to-call, and on embedding the peer-to-peer cobrowsing (aka coshopping) widget. He also wrote several more advanced blogs on how to create a two-way form, how to handle personalized content and actions in cobrowsing scenarios, and how to add CEA widgets to a page already using another version of Dojo.

Finally, we had our first guest blogger. Dustin Amrhein tried out the feature pack for the first time this week and found good ways to easily extend the included Web 2.0 widgets. He wrote his first blog showing how you can customize the click-to-call widget to add the ability to select a specialist. We are looking forward to him writing several more blogs on the other scenarios he tried out.

Tuesday, August 18, 2009

Other XML Feature Pack Beta 3 Highlights

Yesterday I blogged about one of the major focus areas of Beta 3 of the XML Feature Pack. In today's video demo, I show the other items of note in Beta 3. The major features of Beta 3 are:

- Spec complete on XPath 2.0, XSLT 2.0, and XQuery 1.0
- Changes to the XML Feature Pack API to adjust for new features and the results of usability studies
- A focus on development and deployment issues, with full command line support for pre-compiling XML artifacts for performance, ANT tasks to do the same, and support for running with Java 2 Security enabled
- As always, more samples. In addition to the end-to-end XQuery sample I demoed yesterday, we have over 40 samples that show the new features of the new standards.

Here is the video that overviews these features:

Demo 6 - XML Feature Pack Beta 3 Highlights


Direct Link (HD Version)


For more information on the WAS V7.0 XML Feature Pack Beta, please go here to download the code, samples, and documentation, as well as to see other demos.

Monday, August 17, 2009

XQuery End to End Sample for XML Feature Pack (Demo #5)

So far in the XML Feature Pack demos, I have covered topics such as an introduction to the feature pack, specific features and how to use them, and demonstrations of XPath 2.0 and XSLT 2.0 in end-to-end web applications. Today I'll continue the demos, building upon the same application used in previous demos - the Blog Comment Checker, which mines data from blogger.com feeds (encoded in XML) and presents a web application that identifies problematic user-supplied comments. I hope you'll see how different XQuery is as a language, and yet how similar it is to concepts you might already know, like SQL and JSP templating. I imagine, for some users, the learning curve of XQuery won't be nearly as steep as for other XML processing languages.

Demo 5 - End to End XQuery 1.0 (Part 1)


Direct Link (HD Version)


Demo 5 - End to End XQuery 1.0 (Part 2)


Direct Link (HD Version)

Saturday, August 15, 2009

Download Stats Extreme Makeover - The Value of XSLT 2.0 and XQuery 1.0

On Friday, I had the opportunity to use XSLT 2.0 and XQuery 1.0 in a way that proved the value of the new standards.

For our beta programs, we get a weekly report of how many downloads have occurred. Over time we have asked for more and more breakdowns of the data (how many were from IBM'ers vs. non-IBM'ers, how many were from non-gmail/hotmail/etc. email addresses, how many were for code vs. documentation, how many unique users have downloaded, and so on). We use these reports to gauge interest in our betas and the effectiveness of our beta programs.

Currently the process for doing these reports is:

1. Load a webpage somewhere on the intranet that returns the download records as either HTML or space- and newline-separated text (and I have neither the time nor the contacts to change this "service").
2. Import these download records into Excel.
3. Write VBScript that processes the rows into summary tables - though not all summaries have VBScript written for them, because the VBScript can get quite complicated.
4. Read the summary tables, hand-compute some summaries, and transpose them into emails and presentations for the beta teams.

This process can take a few hours and is error prone.

I asked for the raw data for the service as I knew there had to be an easier way.

First I started with the HTML, as it's pretty much XML and I figured I could just write some simple XQuery summaries. This failed because HTML isn't XML (unless it's XHTML). Don't get me started on that rant (in this case, we had things like width=-1 with no quotes)! So I restarted with the text file version of the data.

XSLT 2.0 has a new function, unparsed-text(), that lets you load text files. This allows you to load any data into XSLT 2.0, but only as a string. As the XSLT 2.0 specification shows, the real power of this function is magnified by XPath 2.0's support for regular expressions. Combining unparsed-text() with the regex-based tokenize(), using newline as the separator, allowed me to for-each across every line in the file. Adding another new XSLT 2.0 construct into the mix - analyze-string, with a regular expression that captures each of the fields - allowed me to transform each line into a well-formed XML element with sub-elements for each field (a download element with child elements for filename, email address, etc.). Ah, now I have the data in a well-formed, structured XML format. Life is good.

Once I had this transformation of the input data, I returned to the task of creating the summaries. With XQuery, any query I want is just moments away. To prove the point, I decided to write a simple query that breaks down the downloads for IBM'ers and non-IBM'ers separately. It also groups each unique user and lists all the downloads that user has done, with summaries at every level of how many code vs. documentation downloads had occurred. I wanted this in XML format so others could query my summaries, or so I could render the summaries to HTML or load them back into Excel (if you really, really want to).

It took me approximately 30 minutes to create all of the above queries. The code is approximately 20 lines long and is factored into reusable functions that can be customized later. The code is easy for most of my peer Java developers to understand, as XQuery looks a lot like an imperative language with SQL-like queries mixed into output templating.

My next step is to move this into a web application (so far I prototyped it with sample data) that connects directly to the service. The web application could easily offer web forms that let the user specify the search criteria supported by the back-end service (a date range, for example) along with which XQuery/summary view is required.

There are still some items that can't be automated and need human intervention. As an example, deciding what constitutes an IBM'er is complex, as some IBM'ers have an "ibm.com" email address and some do not. Also, the company name could appear as "IBM", "International Business Machines", or a misspelling. It would take a bit of time to create services that approximate what a human eye can spot manually. Adding a human-facing pop-up that allows visual inspection of the automated data analysis steps would be valuable. Also, I admit that I didn't create the charts and graphs - just the raw data that can be loaded into Excel to create them.

But in the end I have converted a manual, error-prone data processing scenario into an automated approach (for data queries and summaries) that produces all the same valuable raw data for reports, with the potential to add more reports much more quickly. All of this was made possible by well-documented W3C standards that have all the features (some new with XPath 2.0, XSLT 2.0, and XQuery 1.0) needed to make this scenario possible.

Thursday, August 13, 2009

Specification complete on XPath 2.0, XSLT 2.0, XQuery 1.0 - Beta 3 of XML Feature Pack Released

I'm happy to announce another update of the XML Feature Pack. The first beta focused on XPath 2.0. The second beta focused on XSLT 2.0. This third beta rounds out the specs with support for XQuery 1.0.

Once we're out of beta, we'll post final XML Query Test Suite conformance numbers on the W3C website. Currently we're at 96.8% on minimal conformance, and we support the optional full axis and serialization features. As for XPath 2.0 and XSLT 2.0, we consider this beta release to support all of the specifications, with minor restrictions as listed in the getting started guide. This means that we now have support for all of the W3C recommended standards for querying, transforming, and accessing XML data (except the less popular XQueryX).

We have expanded the XML Feature Pack API to handle feedback from usability studies and to handle new features required by the standards.

We have also focused on items that make the feature pack easier to use. We have made changes to allow the feature pack runtime to run under Java 2 Security without requiring an application to enable any more security permissions than minimally needed. We have expanded the options for the command line tools for pre-compiling XML artifacts, and we have added ANT tasks for integrating this pre-compilation support into your build scripts.

With the full standards support, updated API, and runtime and development-time improvements, this feature pack release should be quite useful. In the next few days, I'll post some videos on YouTube demonstrating the new capabilities. In the meantime, you can download the feature pack beta and join the web forums to ask questions.

Monday, August 10, 2009

Another reason for DataPower SOA appliances - XML Threat Protection

I've gotten a few questions about the recent XML security buzz. In today's blog post, Rich Salz (lead architect of our appliances) discusses how XML threat protection is a required tool when exposing important services to untrusted sources. He wrote it in response to the recent press interest in "XML exploits", which was sparked by some XML fuzzing work by Codenomicon.

I wanted to post about it here to make sure this isn't an unknown concept to our customers. Rich talks about "defense in depth", which is what most of our WebSphere customers are doing today. To quote Keys Botzum (one of our lead security consultants for WebSphere): "Anyone that is exposing services to untrusted sources absolutely needs to be running an XML firewall, like DataPower." In my world, defense in depth means putting a WebSphere DataPower XML Security Gateway XS40 in front of any services that could be called by untrusted sources. Also, given the performance characteristics of the DataPower devices (essentially no latency impact), you likely want to do this for all services (sometimes attacks aren't intentional, and sometimes they come from internal sources).

If you haven't heard of XML firewalls or XML threat protection, think about network firewalls. Network firewalls are great for protecting us from threats that can be detected at the network level (like the recent Twitter and Facebook distributed denial of service attacks), but they don't help with threats carried in the payloads of the messages themselves. XML firewalls help with such application-level threats by turning away bad messages before they enter your enterprise applications.
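Even with a gateway out front, hardening the parsers inside your applications is cheap insurance - that's the "depth" part of defense in depth. As a rough sketch (these are the standard Xerces/JAXP feature URIs; check your parser's documentation before relying on them), this addresses exactly the kind of thing the fuzzing reports poke at - DOCTYPE-driven entity expansion and external entity tricks:

import javax.xml.parsers.DocumentBuilderFactory;

public class HardenedParser {
    public static DocumentBuilderFactory newFactory() throws Exception {
        DocumentBuilderFactory dbf = DocumentBuilderFactory.newInstance();
        // Reject DOCTYPE declarations outright, which shuts down classic
        // entity-expansion ("billion laughs") attacks
        dbf.setFeature("http://apache.org/xml/features/disallow-doctype-decl", true);
        // Belt and braces: disable external entity resolution as well
        dbf.setFeature("http://xml.org/sax/features/external-general-entities", false);
        dbf.setFeature("http://xml.org/sax/features/external-parameter-entities", false);
        return dbf;
    }
}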

If you have any questions on the concept, feel free to pop over to Rich's blog and ask.

Wednesday, August 5, 2009

Interviews of WebSphere architects

Recently we have made a set of informal videos interviewing various WebSphere architects about the numerous advantages of WebSphere Application Server in the areas of developer and management efficiency, application innovation, and performance. These focus on the differences between WebSphere and JBoss, but also cover in detail many of the WebSphere Application Server strategic focus areas.

There are two playlists to help you view all the videos. This playlist of WebSphere vs JBoss Developer Discussions walks through the specifics of how development is getting better and easier. This playlist of WebSphere vs JBoss Operations Discussions covers more of the systems management side of the product.

Here is one of the videos (you'll note I didn't include one of mine ;-) ) on developer efficiency.