Tuesday, July 16, 2013

Social networking of open source communities

I was struck by a surprising thought today. Software developers have experience with specific technologies, and an interest in significant and relevant updates to those technologies. We form communities around those technologies, collaborating on problem solving across time and distance. Oftentimes, a handful of forum postings found through a Google search will provide a lifeline for understanding an issue and finding solutions. And yet, it is still not mainstream for the basic elements of an open source community to employ all of the benefits of social networking.

Consider how stand-alone things like mailing lists, forums, and bug reporting systems are. Users may manually post messages that link to bug reports and vice versa, but why not have each system automatically indexed for searching, so that, just as relevant Google AdSense advertisements can be displayed within them, relevant links from the other systems could be shown instead? Imagine a forum thread about some problem, where the more the users discuss it, the better the system becomes at listing relevant bug reports. Not just from the same organisation's website, but from other websites for related technologies. When discussing an issue on the icesoft.org forums, it could show you JIRA entries with matching keywords, relevant discussions on theserverside.com forums, the Mojarra mailing list, or blog postings about the particular browser that you can reproduce the issue on. Users could then Like the links that they found to be most helpful and on-topic. This would affect the rankings for all users, helping the best solutions go viral, and the most pertinent issues get noticed sooner, with more eyeballs making bugs shallower.

When a bug report is closed and fixed, or a new feature is added, it pertains to a particular product, will be available in particular upcoming versions or immediately from version control, and involves specific modules or features. Individual developers might have an interest in a product as a whole, and wish to be automatically notified of major or minor releases. They might have a particular interest in certain modules, and may wish to know of new features or bug fixes as they are committed, or after they are confirmed by QA. The sooner a nightly build can be found to cause a regression, the greater the likelihood of having it immediately fixed, while the context remains fresh in the developers' minds. One can imagine automated regression testing that detects pertinent nightly builds of dependencies, downloads them, and tests the dependent software's integration with them. With finite testing resources, it might not be feasible to test every nightly build of every bit of software together, but with automation, crowd-sourcing and social networking, one can imagine waiting to expend those finite testing resources only on software that others have validated and marked as stable, or alternatively, testing when pertinent modules have been updated, for a fast turn-around at addressing recent potential destabilisations.

Crowd-sourcing of evaluations of product stability could well drive development cycles, where crossing a threshold of instability could cause a shift from new product development to bug fixing, attaining a threshold of stability could cause a shift toward feature development, or either could trigger ideal moments for merging development branches onto the main trunk. Imagine being able to leverage supplier and customer automated tests, receiving indications of their integration with your software on an ongoing basis. Where previously there would have been little incentive to do such agile integration testing, or to publish the results, due to a lack of in-house developer resources, crowd-sourcing could provide an augmentation of analysis, as those who care most and have the resources could at least provide preliminary analysis, in their own self-interest.

There would be a beneficial feedback mechanism between support teams, development teams, customers, and marketing, as recurring and new customers become aware of the buzz of new developments, in anticipation of new releases. Previously lesser-known complementary and competing technologies could gain beneficial exposure, pushing the technological ecosystem forward as a whole.

Wednesday, July 18, 2012

ICEfaces Advanced Components Heritage

Backdrop


ICEfaces 2 was an ambitious product, taking our feature set from ICEfaces 1.x and completely redesigning and reimplementing the underpinnings to optimally integrate with JSF 2. We spent less time fighting with JSF to sink our hooks in, and more time simply interoperating, with all of the hooks designed in. That's because Ted Goddard of ICEfaces fame was on the JSF 2 spec committee, and JSF 2 was quite the collaborative endeavour, with input from many stakeholders and other groups who integrate with JSF as well.

ICEfaces is much more than just a component library: it automatically, granularly and optimally AJAX-ifies a JSF page. It provides server push capabilities, has failover and clustering support, integrates with many IDEs, provides low-client-overhead components, accessibility components, rich interactive components, composite components for simplifying application development, and mobile components for smart phones and tablets, gives beans a window scope, integrates with Portlets and many other technologies, and is tested across a broad spectrum of browsers and application servers. Basically, it enhances application development in a multitude of ways. But I'm on the component team, so you'll notice that my blog is focused on that one aspect.

While porting all of our pre-existing components to ICEfaces 2 + JSF 2, it was felt we also had to show some new and compelling components. Also, browsers and the web had drastically changed in the several years since we'd come out with the design philosophy behind the ICEfaces 1.x components. Web 2.0 was a lot heavier on the javascript side, browsers were a lot faster, and with more smartphones it made sense to avoid network traffic, as they now had real browsers on them instead of crippled lite versions.

So, we came out with a two pronged strategy:

1. Our pre-existing component suite would be made to run on ICEfaces 2 + JSF 2, which would provide backwards compatibility for our customers, and continue to provide components that were light on client resources, some of which worked with or without javascript.

2. Develop javascript centric components that were even richer than before, minimised network traffic, and made use of, and integrated with, the new JSF 2 features. Eventually these would come to be called the ICEfaces Advanced Components, or for short, ACE components.


Beginnings

We wanted to make use of a third party javascript library that would support all of the browsers that we needed to support, which had solid momentum, and that would provide either components to wrap, or the building blocks to create our own.

At that point, jQuery UI was pretty nascent, and YUI 2 had many components, and there were several other projects looking seriously into YUI 2 as well. There were 3-4 other javascript libraries that we also investigated and prototyped test components with.

Detailed investigation focusing on YUI 2 commenced around May 2009. In August we began designing and prototyping our concept for code generation, resource concatenation / compression, and CSS modification (to work with the JSF resource mechanism), which eventually became the ICEfaces Advanced Component Environment, or ACEnvironment, as I like to write it. At that time, JSF 2 was quite the moving target, and we had to balance our investigation and new development efforts with supporting our mature ICEfaces 1.8 components, and with porting the 1.8 components over to ICEfaces 2, which we internally referred to as our compatibility components.

For several months I was occupied with a vacation, then sickness, and then work on the ICEpdf 4 release. The rest of the component team continued with their YUI 2 and generator efforts.


Unique Challenges

The major technical difference between ICEfaces and all of our competitors is our DOM differencing technology. It allows for sending only what has changed to the browser, with our framework determining this optimised change set, without any need for application developers to explicitly state their page update dependencies. It drastically alters what must be taken into consideration when developing components. For example, when using the standard JSF 2 f:ajax tag, the finest granularity of a rendered update is a whole component. Granted, the components specified may be large containers of many other components, or a single small component, so there is quite the range of granularity. Any JSF component library that supports updating a portion of a component has had to develop its own non-standard augmentation to JSF to support this. With ICEfaces, individual DOM elements may be updated, which are only a small portion of a component. And with rich components, which have many DOM elements with javascript listeners attached to them, one can't simply update random elements without executing some javascript to re-register listeners and keep the javascript objects in sync with the html markup. Developing a cross-browser, performant means of accomplishing this that worked with YUI 2 was quite involved.

With our large number of enterprise customers, and our extensive QA process, there are certain scenarios that we are required to address which our competitors may not have been aware of, chose not to address, or were unable to solve. One key example was the issue where a component requires javascript and CSS resources in the head. When a component was not previously on the page, and then was dynamically added, there was the problem of dynamically adding those resources to the head with the JSF 2.0 update mechanism. We upgraded from YUI 2 to YUI 3 to take advantage of its dynamic loader, which allowed for dynamically updating resources in the head while also reducing our resource sizes. Unfortunately, this caused three very large problems. The first was that using the loader required adding callbacks to execute our javascript when the loading had completed. While the total time to load a page decreased, the page remained blank noticeably longer, as it was no longer rendered incrementally, but rather waited on all javascript to download and execute before rendering. The second issue was that certain Portlet application servers do not allow for accessing JAR'd JSF 2 registered resources by name, as the YUI 3 loader required, but rather by some registered resource handle determined on the server side. The third issue was that YUI 3 did not contain newer versions of the YUI 2 controls, but instead included a mechanism called 2in3 for running YUI 2 within YUI 3, which was cumbersome to use, especially from within the YUI 3 loader. Eventually, after much effort, we ended up shelving the YUI 3 loader efforts, and we settled on the MandatoryResourceComponent solution, which was to identify all components in the system that could potentially be dynamically added, and pre-load their resources into the head. This was configurable at the application and page level.

Another large concern was the issue of having large data tables of these rich components, each of which uses much more html markup as well as javascript, particularly in legacy browsers such as IE 6, with their poor javascript performance. Much effort was put into making applications responsive in this scenario.

Integrating Open Source Technologies

Years earlier we had added integration with Seam, and then with what came to be called RichFaces. They were quite co-operative and saw the value in our technologies inter-operating. While RichFaces is more of a competing component library, Seam itself is more of a complementary framework to ours. We spent the time and money on integration because Seam was quite popular, and our support customers requested it.

With PrimeFaces, or PF as I called them, they more directly competed with the component aspect of ICEfaces, but that didn't matter to us. If our support customers wanted PF components to operate within our ICEfaces eco-system, then that is what we set out to accomplish. From our investigation we found that their component library had not resolved the technical hurdles ours had, so we saw it as more of a quick and dirty component library that was popular and fast-growing from an 80/20 development focus.

Immediately we found there were quite a few technical problems with integration. As mentioned above, any JSF component library that supports updating a portion of a component has had to develop its own non-standard augmentation to JSF to support this. The PF framework made use of several non-standard means of updating the page. Firstly, it did not use the standard JSF element update mechanism, but rather its own XHR mechanism, meaning that all of our integration with standard JSF was not being used. Then there were the updates that were not XML or HTML at all, but rather JSON data responses. There was also what we called sub-component rendering, where a component rendering itself would only render a portion of itself, like the body of the dataTable without the header or footer. And in several cases, the component javascript would expect that when it submitted itself, all of itself would be updated, and couldn't handle that ICEfaces was only updating the small portion that had changed.

We modified the parts of our framework that were necessary to work around these issues. In many cases the fix required changes in the PF component code. And of course the parts of their core javascript that completely side-stepped the JSF javascript would have needed to be modified to create the same hooks. We were looking to pay them for their time to incorporate the changes necessary for our integration: code changes that we had made as unobtrusive as possible, and that our whole team had put considerable effort into. Talks with them dragged on, while we resumed our efforts on our own components.


Continuing Development

We continued adding new components to ACE, while augmenting the ACE generator to add features to our components and improve the documentation capabilities. We made several releases of ICEfaces 1.8.x and 2.x. My main ACE accomplishments were in the generator, supporting team members with their components, and two of my own components: ace:fileEntry and ace:tabSet.

With our ICEfaces 1.8 ice:inputFile component, we rendered an iframe that submitted to a file upload servlet, and which could cause the wrapping page to update and display progress notifications. So while ice:inputFile might be placed within a form, it wasn't really a part of that form. Later we added a feature to support submitting the parent form before and/or after the file upload, so that applications could use the result of the form submit to affect the file saving, or the file verification to affect the form saving. We also added a feature that no one else had, to allow applications to specify a callback that would handle the saving of the file, so that it could be forwarded over a socket to another server, or saved to a database, without writing a temporary file to the file-system. As well, when we did write the file to the file-system, we wrote it once, to where it should end up, and did not use temporary files nor large byte[] in memory that could easily exhaust the memory available to the application server, as our competitors did.

Unfortunately with ICEfaces 2, that component would not work with the new means of ICEfaces / JSF integration, which was less about fighting and taking over JSF, and more about easily hooking in. As such we streamlined large portions of our framework, including removing our own custom Servlets, which is what ice:inputFile relied upon. The opportunity was seen to create a file uploading component that integrated better with JSF, and that could address our every issue with the older one. Firstly, we wanted one that would not require a custom Servlet, nor upload to a separate view/URL. It should upload in a single lifecycle, along with the submitted form elements, so that they could be validated together atomically, while still using Ajax to incrementally update the page after. And, it still had to be able to show progress, and support the callback feature. We looked to HTML 5 to solve many of these problems, but unfortunately we had to support legacy browsers that were still widely used. So an HTML 4 solution was found, with an eye to pluggable HTML 5 support in the future.

Our competitors required the Flash plugin, made use of Servlet filters so did not allow for integrated form validation, did not support Ajax responses, and most did not directly write the uploaded files to the appropriate location desired by the application, and none supported a callback feature. And who knows which of them would work in a Portlets environment. It was quite the unique innovation.

With ace:tabSet, a previous co-worker had developed the component, and it was given to me to add a seemingly simple set of features: allow for dynamically adding and removing tabs, while allowing tabs to be lazily loaded and cached, such that their contents would not be disrupted, as they held iframes to legacy JSP pages that might be full of entered data. Unfortunately, our ace:tabSet had been designed primarily to solve the main problem with our older ice:panelTabSet, that it could not have separate forms within each panelTab, and hadn't really been designed to wrap all of the features available in the YUI control. I was shown the PF tabView component and saw that it could cache the tab contents. Both components were built on YUI's tab control, so what was possible in one should be possible in the other. YUI allowed for caching and dynamically loading content, either from other URLs or via a callback. One could envision the callback using sub-component rendering to populate each tab's contents. It seemed that tabView was closer to what we needed, and just the dynamic adding and removing of tabs would be necessary. Unfortunately, PF had by then declined our offer to collaborate on integration, so I couldn't augment it. My options with ace:tabSet were limited, since we'd already released it, and a brand new re-design that would break backwards compatibility was not possible. So I came up with a way that the server could add and remove tabs that would work with our DOM differencing and the JSF element update mechanism, and would move elements around to where YUI expected them to be. The many different modes of caching, which could be specified per ace:tabPane, were developed as per the customer's evolving understanding of their requirements.


Assimilating PrimeFaces

Approximately a year after we first investigated integration with PF, we focused our efforts on assimilating it as a first class citizen of ACE. This was a multi-stage nearly year-long undertaking. There were several reasons for this, as mentioned above, such as customer demand for integration. But there was also the growing issue with YUI where it was not supporting different versions together on the same page, which was causing a lot of problems with our Portlet integration, and PF was migrating away from YUI to jQuery.
We began by simply adding icefaces.jar to their showcase, as we had done before, identified what broke, and set about fixing those issues. It was the same set of incompatibilities we had found before, but now the task of rectifying them was even larger, as there were more components, and we wished to fix them all. A combination of testing and code auditing was used to identify issues. We also identified bugs that existed without ICEfaces present, and fixed those as well. In particular, we would see that features would work in isolation but not in combination.

To be a first class citizen of ACE required adapting each component from hand-rolled code to being specified as a Meta class, generated into a Base class, with custom code going into the component class that sub-classes the Base class, and with proper separation of code between the component and the renderer. It's important to note that the generated properties are superior to the standard way that properties are implemented; this caused regressions where the PF code had worked around those standard limitations, particularly because ValueExpressions are set in ACE property setter methods and not in standard setter methods. In each Renderer we re-coded things to make use of our best practices, including using JSONBuilder, which does escaping that was absent from the PF code.

We unified the styling between the components, documented the components, their properties and features, which had been largely absent. We altered the core javascript to use the JSF submitting mechanism, and added any features that had been in our 1.8 components that were lacking in the analogs from PF. The dataTable component was specifically targeted for adding many new features, as well as making all of the individual features work together, which they had not before. Many automated QA tests were made for each component, testing each property, feature, and situation of use.

With the fileEntry component, we had run into an issue where we would certify it against a certain version of Apache file-upload, and then there could be problems if applications bundled their own different version, or if the application server included some version. So we had repackaged Apache file-upload into an ACE package, that way no matter what version of it was on the classpath, our code would work. Similarly, we repackaged all of the changed PF code, as we moved it into ACE, so it would work with the generator, and so that if there was some newer version of PF on the classpath, then the PF derived components in ACE would continue to function, as well as the newer PF components in their own jar.


False Accusations

After releasing ICEfaces 3, the PF team slagged us all over the Internet. They portrayed us as having taken their code and just renamed/repackaged it. And they said that forking their code was somehow immoral, even though they had chosen their open source license themselves, and had chosen not to collaborate on integration paid for by us. Never mind the sheer quantity of jQuery, jQuery UI, and third party library code that they had in their javascript, which is somehow a different matter in their eyes.


http://blog.primefaces.org/?p=1692

They distorted the facts in their blog by only showing the Panel component's Renderer and javascript and omitting the files for it that we had greatly changed (Panel, PanelBase, PanelMeta, PanelTag, faces-config.xml, facelets-taglib.xml).  They even cut out the license headers, which created the impression that we hadn't acknowledged them or abided by the license. Every single file includes the license. They repeatedly said they saw no difference in the code, even though one can see all the JSONBuilder code right there. Or all the new features and fixes in the other components.


They pointed out that we had forked an older more stable release, and not their more recent development branch, which was migrating from using jQuery to instead use hand-rolled code, which we found had cross browser issues, such as incorrect div scrolling offset calculations.


ICEfaces 3.1 ACE

We've continued to add our very own brand new components, like autoCompleteEntry, chart, dataExporter, list, listControl, richTextEntry, and textAreaEntry. This is in addition to the pre-existing set of ACE components that we created before the PF integration. So ACE is not just an old release of some other software, but continues to move forward on its own path.


http://wiki.icesoft.org/display/ICE/ACE+Components

Thursday, February 16, 2012

ICEfaces Advanced Component Environment

Purpose

The ICEfaces Advanced Component Environment was conceived as a new platform for creating JSF 2+ components, that would solve many of the development inefficiencies and inconsistencies in the previous platform. Many lessons were learned in ICEfaces 1.x components, as they migrated from being JSF 1.1 to JSF 1.1 + 1.2 components. Best practices were found, after many different approaches had been taken by different developers over time.

Some key differences planned for the new components were:

1. Instead of being thin on the client with all processing being done server side, as ICEfaces 1.x components are, the new components would use javascript to do as much processing as possible on the client, and only interact with the server as necessary. There will always be arguments for each approach, as each has its place, so being able to provide both rounds out our capabilities.

2. Components would rely on many different resources: third party javascript we're wrapping, our integration javascript, structural css, theme css, images and sprite images.

Already we were headed in a direction where the components would have most of their logic in the javascript, and the java should be quite minimalist, with some component specific code, but most everything else could be logically reduced to some declarations that would favour code generation.

There is a lot of wiring code and markup in JSF components, with JSP Tag files, Facelets TagHandlers, faces-config.xml, and taglib.xml. Our previous platform was truly a pain to maintain, and was based on xml files which provided no real type safety, error checking or reporting. We wanted something that would solve all those problems, while also generating all of the boilerplate component code, such as property getter/setter methods and state saving code.

JSF standard getter/setter methods aren't really adequate. They don't allow for a component property having a different value for each row of the UIData or UIRepeat that it's in. As well, they can't be used when the property is tied to a ValueExpression and the setter method is also called directly. This can happen when the Renderer's decode method needs to set state in the component, or when applications use a component binding or get access to the component via a FacesEvent's source. Some properties have an inherently read/write aspect to them, and some have an inherently read-only aspect. It seemed best to standardise exactly how that would be implemented. Past disparate implementations exhibited different side effects and bugs.
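
As a rough illustration of part of what a generated property needs to handle, here is a generic JSF sketch (not the actual ACE generated code; the component and property names are just for illustration) where the getter prefers a directly set value, falls back to the attached ValueExpression, and finally to a default:

    import javax.el.ValueExpression;
    import javax.faces.component.UIComponentBase;

    // Generic sketch only; not the actual ACE generated code.
    public class ExampleComponent extends UIComponentBase {
        private String style;   // locally set value, e.g. from decode() or a direct setter call

        public String getFamily() { return "example.Example"; }

        public void setStyle(String style) {
            this.style = style;
        }

        public String getStyle() {
            if (style != null) {
                return style;                       // a directly set value wins
            }
            ValueExpression ve = getValueExpression("style");
            if (ve != null) {                       // otherwise fall back to the EL binding
                return (String) ve.getValue(getFacesContext().getELContext());
            }
            return null;                            // finally, the default
        }
    }

This only shows the setter/ValueExpression interplay; the per-row UIData behaviour described above needs additional state handling beyond this sketch.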


Debate

There was a large debate between three main approaches to specifying each component:

1. Continue using xml files, but expand what they would state about a component, so that the generator would contain no hard-coded exceptions, like the old one had been rife with.

2. Add annotations directly to the component or renderer, which would be used to generate the component code. This seemed ideal, where we just write the unique component code, and then add annotations to generate the redundant parts.

3. Add separate classes that would have the annotations, which would be separate from the actual component and renderer classes. This was a compromise between the other two options, where the separate annotated classes in theory could be swapped out for xml files, and either declaration format could generate the components.

Right away we favoured annotations in Java files, where the compiler itself could type check most things for us. There were cyclical declaration issues in the concept of annotating either the end component or an abstract super class. We wanted to maintain the option of the Meta classes being completely separate from the resulting component code, so we could make use of that for our IDE integration as well, which we don't release as open source. And it wouldn't be good to clutter the component with info for every single IDE integration, and thereby obscure the core of the component details. The compromise ended up being the best of all worlds.
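
To give a feel for the compromise, here is what such a separate annotated Meta class might look like. The annotation and class names below are purely hypothetical stand-ins for illustration, not the actual ACE Meta API:

    import java.lang.annotation.*;

    // Hypothetical stand-ins for illustration only; not the real ACE Meta annotations.
    @Retention(RetentionPolicy.SOURCE) @Target(ElementType.TYPE)
    @interface Component { String tagName(); }

    @Retention(RetentionPolicy.SOURCE) @Target(ElementType.FIELD)
    @interface Property { String defaultValue() default ""; String tlddoc() default ""; }

    // The Meta class only declares properties and their documentation; the
    // generator would emit the Base class (getters, setters, state saving) from it.
    @Component(tagName = "exampleSlider")
    class SliderMeta {
        @Property(defaultValue = "0", tlddoc = "The current value of the slider.")
        int value;

        @Property(defaultValue = "false", tlddoc = "When true, the user cannot change the value.")
        boolean disabled;
    }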

There was a long process of prototyping the generator with some example components, and continuously pulling code out of the components and pushing the functionality into the generator. JSF 2 was quite the moving target, which really validated the generator concept, as a single change could be made in the generator to immediately affect every component.


Documentation

Documentation is key with ICEfaces components, since the intent behind a property needs to be communicated, along with the implications of different values, and how some properties interact. With modern IDE integration, the documentation needs to be available when editing view definition files as well as when writing bean code and using property getter/setter methods. The ACEnvironment allows for specifying the TLDDoc and JavaDoc in a single place, as well as individually specifying the TLDDoc, the setter JavaDoc, and the getter JavaDoc. In practice, having a single place where you can explain a property means it will actually happen, once, instead of not at all.

The standard TLDDoc doesn't cover aspects like property default values, ClientBehaviors, or facets. With ACE, anything in the Meta class can eventually be shown in the TLDDoc and JavaDoc.


Themes, Sprites, Resources

Outside of the generator and Meta files, there's a whole part of the ACEnvironment that provides ThemeRoller standard themes, creates sprite images from regular images, and updates css files to make use of the sprites. There's jQuery and YUI javascript, as well as any other third party javascript that we integrate with. The build process concatenates and minifies the javascript and css files for performance.


Coding Conventions

One huge intangible aspect of the Advanced Components is the set of code conventions they use, aided by helper classes. These are derived from many years of experience designing JSF components, all aimed at side-stepping common pitfalls.

The component Base classes contain all of the generated component code, and the concrete subclass of that is the actual component class, which may need to override broadcast(-), queueEvent(-), and process*(-) methods. The rest of the java code is in the Renderer, and comprises the decode and event queuing logic, as well as the markup and javascript rendering logic. We have moved away from using DOM-centric rendering APIs, and instead use the ResponseWriter approach, which ICEfaces internally can use either to directly render output to the client, or to employ an intermediate server-side DOM for optimising transmitted updates. The html is all properly escaped, and the javascript rendering uses a special helper class, JSONBuilder, that elevates component writers above the level of javascript string concatenation to a javascript data-structure level, and also employs proper escaping.
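
For instance, a bare-bones Renderer in this style (a generic JSF sketch, not actual ACE code, and without the JSONBuilder javascript portion) writes its markup through the ResponseWriter rather than concatenating strings:

    import java.io.IOException;
    import javax.faces.component.UIComponent;
    import javax.faces.context.FacesContext;
    import javax.faces.context.ResponseWriter;
    import javax.faces.render.Renderer;

    // Generic sketch of the ResponseWriter approach; not actual ACE code.
    public class ExampleRenderer extends Renderer {
        @Override
        public void encodeBegin(FacesContext context, UIComponent component) throws IOException {
            ResponseWriter writer = context.getResponseWriter();
            writer.startElement("div", component);
            // Stable ids let the DOM differencing target just this element.
            writer.writeAttribute("id", component.getClientId(context), "id");
            // writeText() escapes the text, so markup is never injected.
            writer.writeText("some component text", null);
            writer.endElement("div");
        }
    }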

On the client end, the javascript converses with the Renderer, receiving and transmitting information. Exactly how the javascript functions are called, how the script tags are placed within the html markup, and what gets ids and what doesn't, are all quite intentionally done. Some of that is to work with IE browsers, and most of it is to facilitate our incremental DOM updates. It's quite common for an Advanced Component to render itself out, and need to be cognisant of whether the DOM differencing will update the whole component, some sub-section of the html markup, or just the javascript. In many cases, because the javascript adds listeners to the html elements, if one is updated then both should be. In other cases we need to ensure that only the javascript is updated and not the html elements. These subtleties can be completely unnoticeable to a non-ICEfaces developer.


Conclusion

ICEfaces 2 ACE and ICEfaces 3 ACE have been a long time coming, with much experience driving the design goals, and many changes in JSF 2.0.x and 2.1.x in both Mojarra and MyFaces continuing to move the goal posts. It's a rich web component library that focusses on client execution, to complement our more server centric ICE component library. As ICEfaces Component Team Leader during its development, I've enjoyed the opportunity to work with my team to develop something so architecturally unique, that overcomes so many challenges, and creates a solid foundation for further development. I'm proud that ACEnvironment is the foundation that ICEmobile is built on, which itself is a revolutionary component library!

Thursday, April 9, 2009

JSF Event Re-Phasing

JSF has a standard mechanism for controlling the lifecycle phase that certain events are broadcast in. ActionSource implementors, such as UICommand, with the ActionEvent(s) they create and queue, and EditableValueHolder implementors, such as UIInput, with the ValueChangeEvent(s) they create and queue, use their immediate property to determine the phase to be broadcast in. For ActionSource, when immediate="false" (the default), its ActionEvent uses the INVOKE_APPLICATION phase, and when immediate="true", it uses the APPLY_REQUEST_VALUES phase. For EditableValueHolder, when immediate="false" (the default), its ValueChangeEvent uses the PROCESS_VALIDATIONS phase, and when immediate="true", it uses the APPLY_REQUEST_VALUES phase.

This primarily allows for Cancel buttons. In a form with input components that have validation settings, command components can be set to have immediate="true", so that they may bypass validation, and accomplish a non-form task, such as navigating to another page. In some cases, input components may wish to convey their information to the server when a Cancel button is clicked, and so applications may set immediate="true" on input components as well.

But how does one make a ValueChangeEvent be broadcast in the INVOKE_APPLICATION phase, so that the valueChangeListener will be fired after the UPDATE_MODEL phase, when the input values are all actually in the bean? One needs a way to make that event be broadcast later than there is an API to specify. In practice, applications use event re-queuing. In the valueChangeListener, the event's phase is examined, set to the later phase, the event is re-queued, and the listener returns. When it is invoked again, in the desired phase, the real logic is executed. It works, but it is tedious to do in every listener. Most of all, it's non-declarative. When reading a view definition page, one can look for immediate and know which phase an event will broadcast in, but one has to know to look in the bean code to know if re-queuing is happening. It would be more ideal if there were a component or a tag that could accomplish this.
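
The re-queuing idiom itself looks roughly like this (a sketch; the bean and listener names are illustrative):

    import javax.faces.event.PhaseId;
    import javax.faces.event.ValueChangeEvent;

    public class FormBean {
        // Sketch of the re-queuing idiom described above.
        public void valueChanged(ValueChangeEvent event) {
            if (!PhaseId.INVOKE_APPLICATION.equals(event.getPhaseId())) {
                event.setPhaseId(PhaseId.INVOKE_APPLICATION);
                event.queue();   // re-queue, so we are called again in the later phase
                return;
            }
            // Invoked again during INVOKE_APPLICATION, after UPDATE_MODEL:
            // the real logic goes here, with the bean fully populated.
        }
    }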

In steps the ice:setEventPhase component. It can take any event type, as specified by class name, not just the standard JSF events, and change those events to broadcast in any phase. It operates on the events of child components, that is, any component that is within the ice:setEventPhase component.

It works because when a component queues an event, the event bubbles up through its ancestors until it reaches the UIViewRoot. This allows containers like dataTable to wrap events inside other event objects which contain the row index. When child component events bubble up through the ice:setEventPhase, it then modifies their broadcast phase. Quite straightforward.
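
A rough sketch of that interception (a simplified illustration, not the actual ice:setEventPhase source; a real implementation would let the event class and target phase be configured by the page author):

    import javax.faces.component.UIComponentBase;
    import javax.faces.event.FacesEvent;
    import javax.faces.event.PhaseId;
    import javax.faces.event.ValueChangeEvent;

    // Simplified illustration; not the actual ice:setEventPhase source.
    public class SetEventPhaseExample extends UIComponentBase {
        @Override
        public String getFamily() { return "example.SetEventPhase"; }

        @Override
        public void queueEvent(FacesEvent event) {
            // Child events bubble up through here on their way to the UIViewRoot.
            if (event instanceof ValueChangeEvent) {
                event.setPhaseId(PhaseId.INVOKE_APPLICATION);
            }
            super.queueEvent(event);   // keep bubbling toward the root
        }
    }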

The prototypical use case is when one input component changes the value(s) of other input component(s) in its valueChangeListener. What goes wrong is that the ValueChangeEvent is broadcast either in APPLY_REQUEST_VALUES or PROCESS_VALIDATIONS phases. Both of which happen before UPDATE_MODEL phase. So when the one component's valueChangeListener modifies the bean values of the other input components, those input components later get their bean values overwritten by their submitted values, which are basically their old values. With ice:setEventPhase, you can change the ValueChangeEvent to broadcast in INVOKE_APPLICATION phase, so that the valueChangeListener gets the last say in the bean property values.


http://wiki.icefaces.org/display/ICE/ICE+Components+Reference
http://res.icesoft.org/docs/latest/tld/ice/setEventPhase.html

Thursday, May 15, 2008

EJB 3.1 calendar based timers

I attended Ken Saks' session on EJB 3.1 at JavaOne, and found it quite interesting, mostly because I tend not to work on the back-end of applications, since I'm more of a GUI / component developer, so it was informative to me. But one thing immediately leaped out at me, which was the omission of time zone and locale parameters from the new calendar based timer functionality. I've worked on calendar and time entry components, in JSF and Swing, and I can tell you that it's a typical oversight. So, I'm partly writing this to hopefully influence the EJB 3.1 team to include them, and partly to raise awareness of this issue in general.

I'm going to start with a simple example, taken from page 38 of the PDF of slides, from the session, now available online.

ScheduleExpression expr = new ScheduleExpression().dayOfWeek("Mon-Fri").hour(12);

This should make a timer go off at noon every week day. The example uses hard-coded values, but most likely those would be input by a user in some configuration GUI, or by editing some configuration file. In any case, a human would enter those values. And what that human would probably think is that it would be noon in their time zone. Perhaps they're working in a branch office, so they might think it should be in the time zone of their head office. Unless instructed, they wouldn't know which of the two it may be. To further complicate things, the server on which that application is deployed may not even be in either time zone, as it may be co-located elsewhere, maybe in one of those "safe" places that don't flood or get earthquakes. In that case, they would have no idea, and would have to use trial and error. And those are just the most likely scenarios. Add on the possibility of distributed applications, with beans executing in containers around the world, or failover servers intentionally geographically dispersed, and we see that this could actually be impossible for a user or developer to account for.

There's also the lesser issue of daylight savings, where some people need to schedule activities that respect daylight savings, and some people need to ignore daylight savings.

While being able to use "Mon" and "Fri" is a usability gain for most people, it ignores the fact that different locales use different starting days of the week. Some places begin with Sunday, others with Monday. In those cases, people want to be able to just give an index into their week, respecting when their week actually starts. Especially for large organisations which have branch offices in varying countries.

Finally, there's the issue that we're all still assuming the Gregorian calendar. There are many countries which do not use the Gregorian calendar. Almost the entire Middle East does not.

My recommendation is to look into java.util.Calendar, and allow for all of the parameters that it requires. And then add in syntactic sugar.
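
For comparison, here is roughly what those parameters buy you with java.util.Calendar (the time zone and locale values are only examples):

    import java.util.Calendar;
    import java.util.Locale;
    import java.util.TimeZone;

    public class CalendarParameters {
        public static void main(String[] args) {
            // Example values only; the point is that they are explicit.
            TimeZone zone = TimeZone.getTimeZone("America/Edmonton");
            Locale locale = Locale.CANADA;

            Calendar cal = Calendar.getInstance(zone, locale);
            cal.set(Calendar.HOUR_OF_DAY, 12);   // noon in the given time zone, not the server's
            cal.set(Calendar.MINUTE, 0);

            // The locale also determines where the week starts (e.g. Sunday vs Monday),
            // and some locales yield a non-Gregorian Calendar subclass entirely.
            System.out.println("First day of week: " + cal.getFirstDayOfWeek());
            System.out.println("Calendar type: " + cal.getClass().getName());
        }
    }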

Wednesday, January 16, 2008

Mobile application component considerations

http://www.michaelyuan.com/blog/2007/09/25/jsf-and-mobile-web-applications-part-1-what-looks-good-on-paper-doesnt-always-work-out/

I was reading this guy's article, and more importantly the readers' comments, about writing JSF applications for mobile applications, and thinking about what ideas best fit with ICEfaces. And that led me to think about some other articles I've read, including one by Ted, a co-worker of mine.

http://blog.icefaces.org/blojsom/blog/default/2007/09/07/Ajax-on-the-iPhone-with-ICEfaces/

The problem is that you can't really write once and run anywhere with a web application that has a sophisticated user interface, simply due to screen real-estate trade-offs. If you have a large screen, then you'll want to make more of the interface available to the user at a glance. There's a psychological limit to that though, creating an upper bound on what you can have in a web page, even on a 30" display. Unfortunately, there's a lower bound as well, so we can't just target mobile browsers with small screens and expect desktop users to be happy. So, I'm going to look at ways of addressing this, from a component writer's perspective.

A few differences between desktop and mobile web applications are:

Complexity of the user interface

This means showing the user less information, and asking them to do less work, at a time. It can be accomplished by breaking the page into several pages, like a wizard, or by relying more on drill-down detail pages to show successively more detail.

The problem is that you then have to have parallel page hierarchies, which have to be kept in sync as your application evolves.

Another approach is to remain with the single large page, but use intra-page data hiding, so that all of the data is on that one page, just not necessarily at the same time. For example, you could use an <ice:panelTabSet> component, or an <ice:menuBar> in conjunction with an <ice:panelStack> to create sub-panels, where only one will be shown at a time. And the switching will be done via Ajax, without disturbing the rest of the display. Or, better yet, you can use several <ice:panelCollapsible> components, where all of them could be expanded at once, for the desktop user's benefit, or only one at a time be expanded, for mobile users. This is quite simple to do at the application level.

Functionally equivalent components

Some components might be able to accomplish the same tasks, where one would be richer than the other, or have other trade-offs. Just looking at <ice:panelTabSet> versus a list of <ice:panelCollapsible> components, we can see that there are several ways of accomplishing the same goal, but with implications for mobile applications. Perhaps a better example is date selection, where we could use a calendar or a text field. The text field will be smaller, and will fit on a mobile screen more easily, but will probably be more cumbersome for actual date entry. Plus, there's the whole issue of date format validation, whether they enter "January 16, 2008" or "2008/01/16" or "01/16/2008", etc. So, showing a date is best accomplished as text, and entering one via a calendar. In that case, one would use <ice:selectInputDate renderAsPopup="true"/>.
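
To make the date-format point concrete, here is a small sketch of what accepting free-form text entry entails, using java.text.SimpleDateFormat with the three formats above (the pattern list is illustrative):

    import java.text.ParseException;
    import java.text.SimpleDateFormat;
    import java.util.Date;
    import java.util.Locale;

    public class LooseDateParsing {
        // Illustrative pattern list; a real application would need every format its users might type.
        private static final String[] PATTERNS = { "MMMM d, yyyy", "yyyy/MM/dd", "MM/dd/yyyy" };

        public static Date parse(String input) {
            for (String pattern : PATTERNS) {
                SimpleDateFormat format = new SimpleDateFormat(pattern, Locale.ENGLISH);
                format.setLenient(false);   // strict, so "01/16/2008" doesn't mis-parse as year 1
                try {
                    return format.parse(input);
                } catch (ParseException e) {
                    // not this format; try the next one
                }
            }
            return null;   // none of the known formats matched
        }

        public static void main(String[] args) {
            System.out.println(parse("January 16, 2008"));
            System.out.println(parse("2008/01/16"));
            System.out.println(parse("01/16/2008"));
        }
    }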

Still, in the immediate-term, it will undoubtedly be necessary for some applications to use lowest-common-denominator components, or different pages with different components, for mobile versus desktop interactions.

Device aware component rendering

Probably the biggest TODO item in supporting mobile devices, for the component team, is simply rendering differently for mobile devices. Where possible we'd like to keep the HTML markup the same, and simply be more mobile browser friendly. But, where necessary, we'll have to detect when rendering to a mobile browser, and adjust accordingly.

In a way, <ice:selectInputDate renderAsPopup="true"/> exhibits some of the first strategy already, by adapting between the two modes of displaying and entering data. Another example of this would be text entry on the iPhone, with the popup keyboard. Pretty much every input component could be made to simplify its rendering, visually, while not actively being used to input data. This is something that really would not be possible without Ajax.

The iPhone, while simplifying some user interactions, actually complicates others. For example, how does one do Drag and Drop, when finger dragging has been re-appropriated to mean scrolling? Will we simply not support Drag and Drop, or will we have to render some WebKit specific markup? Or, since the iPhone uses a special interface for menu selection, how can we adapt <ice:selectInputText> to benefit from that, if at all possible?

The main example of a component that already renders itself differently, depending on the user's browser, is <ice:outputStyle>, which will output a link for a main CSS file, as well as one that is specific to the user's browser, thereby allowing for CSS work-arounds for web browsers' idiosyncrasies.

CSS styling

Which brings us to CSS, and styling web pages differently for desktop versus mobile browsers. Currently, <ice:outputStyle> can differentiate between:
  • Internet Explorer 6.x and below
  • Internet Explorer 7
  • Safari
  • Safari on the iPhone
  • Opera
  • Opera Mobile
How can you tell that we're Firefox centric? ;)

As you can see, you can serve out different CSS files for the main mobile browsers, automatically, without any application level coding. Over time, the default CSS styles for our components, for the mobile browsers, will get more and more refined. But, the beauty of targeting feature-complete mobile browsers, like Safari and Opera, is that few changes need to be made to styling.

What we're not doing

Notice how I didn't mention using different RenderKits for mobile devices that don't support straight HTML + CSS, or have limited Javascript or Ajax capabilities? Because most cellphones are moving away from those constraints. Maybe a few years ago it was worth throwing a couple years of development into those devices. Hell, maybe even now it appears tempting. But within a year or two that's just going to be a mistake.

Wednesday, January 9, 2008

Bonjour

After getting a taste for writing on the ICEfaces Forums, I've decided to make my own blog, to have as a single place to express my ideas related to ICEfaces. But first, I should explain who I am, and what I do, for this to make any sense at all :)

I work at ICEsoft, where we've got three main products, ICEfaces, ICEbrowser, and ICEpdf. I actually started there working on ICEpdf, which is a Java library for viewing PDF files, that you can embed into your Java applications or applets, say for online help features or whatever. I mostly focussed on adding support for more image types, added the ability to parse the newer PDF file formats which allow for faster incremental loading of PDF files, and a tonne of memory and speed optimisations.

Now I spend most of my time working on ICEfaces, which is an Ajax framework for JSF (JavaServer Faces). Its goal is to transparently add Ajax capabilities to regular JSF applications, where developers just don't have to worry about Javascript or manual configuration of what interactions will update what parts of the page. I've worked in several parts of the framework, most notably in our integration with Facelets, and also as a member of the Component team.

I've always worked on closed source applications in the past, sometimes contributing to opensource projects that my work relied on. So, it's been a new experience working on ICEfaces, where we're actively encouraged to connect with our community. I think that we're pretty privileged in that our community is so strong, helpful, and insightful. Sometimes it's pretty surprising just how superior developing with a community is, over just developing for clients. Hopefully, as times goes by, I'll be able to share those kinds of anecdotes here.