AJAX has at least three main areas of risk: technical, cultural/political, and marketing:
Technical - These are issues that directly relate to the design, development, and maintenance of software, including security, browser capabilities, timeline, cost of development and hardware, skills of the developers, and other things of that nature.
Cultural/Political - These are fuzzier issues that center on the experience of end users, their attitudes and expectations, and how all this relates to software.
Marketing - These are issues that relate to successful execution of the business model resulting in sales, donations, brand recognition, new account registrations, and so on.
These issues are all related, and you can easily bundle them into completely different groups depending on the frame of reference. What's important is to categorize risk into levels of severity for your project and use that as a driver for decision making.
Technical Risks
Technical risk, unlike other kinds of risk, can actually result in a project not being completed. These sources of risk must be given prime importance when evaluating third-party frameworks for building AJAX applications, because of the lack of technical control. Some studies have shown that 50 percent of enterprise software projects never go into production (Robbins-Gioia Survey, 2001). Following are some of the reasons why.
Reach
Sometimes, when writing software for large groups of people, we need to build for the lowest common denominator - that is, build so that the individuals with the most out-of-date, inferior hardware and software can still access the application. The general public uses a wide range of client browsers and operating systems. We're stating the obvious here, but it's important for Web applications to be compatible with the browsers our users want to use, or we risk not delivering the software to them at all. Whether or not a ~1 percent market share for Opera is worth paying attention to is something that must be decided deliberately - software must, at least, be tested rigorously on a representative sample of these platforms so that we know what our reach is. This is an example of a technical risk, and this reach/richness trade-off is probably the biggest everyday problem with the Web.
The basic problem with Web applications is that different browsers interpret pages differently. Although this much is obvious, what isn't known is what challenges will be faced as we begin to "push the envelope." What's easy to do in Firefox might end up being ridiculously hard in Internet Explorer. The risk lies in successful execution of the project requirements while reaching all our target browsers and operating systems.
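To make the browser differences concrete, consider the most basic AJAX operation of all: creating the XMLHttpRequest object itself. The following sketch shows the widely used branching pattern (not tied to any particular framework); Internet Explorer 6 and earlier expose the object through ActiveX, whereas Mozilla, Safari, Opera, and IE7 provide a native constructor:

function createXhr() {
  if (window.XMLHttpRequest) {
    // Mozilla, Safari, Opera, and IE7 provide XMLHttpRequest natively
    return new XMLHttpRequest();
  } else if (window.ActiveXObject) {
    // IE5 and IE6 expose it through ActiveX
    return new ActiveXObject("Microsoft.XMLHTTP");
  }
  return null; // no AJAX support; fall back to traditional page loads
}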
Research firm In-Stat/MDR predicts that the number of mobile workers in the United States alone will reach 103 million by 2008, and that the following year the number of mobile workers worldwide will reach 878 million. This means that an ever-increasing number of workers will be accessing corporate Web applications from outside the workplace, resulting in a loss of control over the software environment - especially the Web browser.
There is a general trade-off between the level of richness in an application and the number of people that can use that application (because of client platform incompatibility). The seriousness of this risk is determined by several factors:
• Whether the application is public versus private (behind the firewall). Public applications have an inherently more heterogeneous audience. Enterprise applications often have an advantage in that it's easier to tell corporate users to stick to one or two browsers than the general public.
• The breakdown of preferred browsers and operating systems of the target audience, that is, how many employees or customers use Safari Mac versus Firefox Mac versus Firefox PC versus Internet Explorer?
• The potential marketing impact of being incompatible with a segment of users. A good question to ask is, "How many people will we lose if we can't support Safari, and is that acceptable from a public relations point of view and cost-benefit point of view?"
• The degree to which users are willing to adapt their use of browser or operating system.
Over time, this trade-off has skewed in favor of richness. There is a tacit understanding among browser vendors that they need to provide a comparable level of JavaScript, DHTML, XML, and XMLHttpRequest functionality to be competitive, and, generally speaking, there is a way to write AJAX-powered software that works on all the major browsers. Mozilla, which is cross-platform, tries to ensure that things work the same whether they're running on Linux, Mac OS, or Windows. Safari has been playing catch-up ball with Mozilla, as has Opera, but every quarter new features are announced for upcoming versions of those products, and the great browser convergence continues. As these browsers continue to mature, it becomes easier to write rich applications that work across them all. An example of this is the recent introduction of XSLT support in Safari, making it possible to deliver XML-driven applications across all major browsers.
Browser Capabilities
Much of what is going on in the world of AJAX is uncharted territory right now. Browser vendors seem to be just beginning to understand what developers want from them, and glaring bugs and omissions sometimes create unexpected roadblocks when building cross-platform solutions. Some notable examples are the long-standing absence of XSLT in Opera and Safari and anchor-tag bookmarking problems in Safari. Internet Explorer 6 and 7 have glaring bugs in the positioning of DHTML elements that sometimes require complex workarounds. Some techniques that work well in Internet Explorer can be prohibitively slow in Firefox (particularly relating to XSLT).
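To make the XSLT situation concrete, here is a hedged sketch of a cross-browser client-side transform, assuming xmlDoc and xslDoc have already been loaded as XML DOM documents (error handling omitted). Note that the two branches don't even return the same kind of value - exactly the sort of inconsistency that creates roadblocks:

function transformXml(xmlDoc, xslDoc) {
  if (window.XSLTProcessor) {
    // Mozilla (and newer WebKit builds): returns a DOM fragment
    var processor = new XSLTProcessor();
    processor.importStylesheet(xslDoc);
    return processor.transformToFragment(xmlDoc, document);
  } else if (xmlDoc.transformNode) {
    // Internet Explorer (MSXML): returns an HTML string
    return xmlDoc.transformNode(xslDoc);
  }
  return null; // no client-side XSLT available (e.g., older Opera/Safari)
}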
The risk is that developing a feature can take an unpredictable length of time or reveal itself to be essentially impossible. Clearly, there is still a limit to the degree to which the browser can mimic true desktop-like software, and where the boundaries lie precisely is still being explored. All too often, AJAX development becomes a process of creative workarounds: Developers find themselves going down one road to solve a problem, realizing it's not going to work, and having to back up and look for a new one.
Maintenance
JavaScript, DHTML, and CSS code has a tendency to become complex and difficult to maintain. One difficulty is that many developers do not use a good IDE to write and test their code. Another is the need to employ tricky optimization techniques in script for performance reasons. These factors contribute to spaghetti code (code with a disorganized and tangled control structure) and to higher long-term maintenance costs than for applications written in a traditional architecture that relies more on server-side processing. The risk centers on being able to maintain applications quickly and adequately over time in a changing technological environment.
Maintenance risk is aggravated by the way browser vendors arbitrarily change how the browser works and interprets CSS and JavaScript. On occasion, Microsoft or Mozilla will "pull the rug out" from under a particular technique or approach by closing a security hole or "fixing" a CSS problem. An example is Mozilla's handling of access to the clipboard, which has changed at least once. Another is the changes to the DHTML box model in Internet Explorer 7: As Microsoft moves toward a more standards-compliant CSS implementation, it will break many of the Web applications that were built to work with the older, buggier model.
The risk is that enterprises must react quickly and frequently to sudden, unexpected, and costly maintenance duties caused by changes in the browser - duties that can be exacerbated by hard-to-maintain spaghetti code.
Forward Compatibility
Forward compatibility is related to maintenance risk. As new browsers and operating systems arrive on the scene, parts of AJAX applications might need to be rewritten to accommodate changes in the layout engine, the CSS interpreter, and the underlying mechanisms of JavaScript, XMLHttp, and DHTML. In the past, early-stage browsers such as Opera and Safari were notorious for arbitrarily changing the way CSS positions elements on a page, and IE7 has done this again. This is a risk because developers need to be one step ahead of any changes coming in new browsers that would affect the user experience. It can impact cost containment because it's inherently unpredictable, whereas backward-compatibility work can be tested and more accurately estimated. It's important to note, however, that public betas are always available for new versions of browsers.
Firefox 3.0
Right on the heels of Firefox 2.0 is the upcoming Firefox 3.0 release, slated potentially for Q4 2007. Version 3 will likely be more of an upgrade than a completely new iteration. Mozilla is considering 50 possible new features, including upgrades to the core browser technology, improved add-on management and installation, a new graphical interface for application integration, enhanced printing functionality, private browsing capability, and a revised password manager.
For developers, Firefox 3.0 will mean more in terms of Web standards compatibility and accessibility. One goal is to pass the ACID2 Web standards HTML and CSS rendering test, which implies changes to the browser's core rendering engine. CSS 2.1 compliance is also on the roadmap, which will likewise affect the way pages are displayed.
Safari 3.0
Little is known about the next version of Safari - Apple rarely comments on its product roadmap - but Safari 3.0 is rumored to include major updates to the CSS rendering engine, featuring a full or partial implementation of CSS 3.0, including the capability to let users resize text areas on the fly. Safari 3.0 will also include an updated Web Inspector tool for browsing the DOM, which will assist developers.
Internet Explorer 8 (IE "Next")
It might seem premature to be discussing IE8, given the recent release of IE7 and Vista, but Microsoft is already planning the next iteration. The final product is expected sometime in 2008 and will possibly feature some emphasis on microformats (content embedded inline with HTML). Although some improvements to XHTML support are expected, it is not yet known whether JavaScript 2.0 will be on the roadmap. According to IE platform architect Chris Wilson, Microsoft will invest more in layout and in adhering to the Cascading Style Sheets (CSS) 2.1 specification. He also said Microsoft wants to make its browser object model more interoperable "to make it easier to work with other browsers and allow more flexible programming patterns."
Opera 10
Although no release date has been set, the vision for Opera 10 appears to be platform ubiquity. Opera's goal is to create a browser that can run on any device and operating system, including mobile devices and gaming consoles - a move that could shift the balance a little in favor of this powerful, but still underappreciated, browser.
Third-Party Tools Support and Obsolescence
Adopting third-party tools such as Dojo or Script.aculo.us can add a lot of functionality to an application "for free" but also brings inherent risk. More than one project has gone sour as a result of serious flaws in third-party frameworks, and because of the black-box nature of third-party tools, such flaws are next to impossible to troubleshoot. One West Coast e-commerce firm implementing Dojo needed to fly in highly paid consultants to address issues it was having with the framework. The flaws were addressed and contributed back into the framework, but not before the project incurred large unexpected costs.
Obsolescence can also inflict pain down the road if frameworks are not maintained at the rate users would like, or are not supported in future iterations of development. This can be particularly painful when rug-pulling events occur, such as when browsers or operating systems are upgraded. Adding features or improving functional capabilities can require bringing in developers with in-depth knowledge of the tool.
Cultural and Political Risks
There are internal and external political risks in any software project. Something that is often overlooked right now, in our exuberance over rich Web applications, is the potential negative impact on our audience. Of course, the point is to improve usability, but is there a possibility that ten years of barebones HTML has preprogrammed Internet users to the point of inflexibility? It's a mistake to assume our users aren't smart, but all users have expectations about the way Web applications should respond and provide feedback. If our audience is sophisticated, trainable, and adaptable, designers have more latitude in the way users can be expected to interact with the application. Are we saying designers should be afraid to innovate on inefficient, outdated Web 1.0 user interfaces? Not at all, but some caution might be warranted.
End Users' Expectations
AJAX has a way of making things happen quickly on a page. An insufficiency of conventional visual cues (or affordances) can actually inhibit usability for less technologically expert users. The general public has a heterogeneous set of expectations. If experience tells users that an item must usually be clicked rather than dragged, they might get bogged down with a drag-and-drop element - regardless of its apparent ease of use. It's not hard to imagine how this could happen: If you have never seen a draggable element in a Web page before, why would you expect to see one now?
Switching costs are low on the Internet. This is a cultural and economic characteristic of the Web in general, which contributes to a short attention span of users. If users become frustrated by something on a public web site, they have a tendency to move on to something else. AJAX is a double-edged sword in this instance.
Trainability
On the public Web, application users are not generally trainable because they start off with a weak relationship to the vendor. The trainability of your audience depends on the nature of that relationship, on users' own motivation to learn, on the depth of training required, and, of course, on their attention span. Training for a Web application might include onsite demonstrations, embedded Flash movie tutorials, or printed instructions. In a consumer-targeted application, switching costs are generally low, and users are poorly motivated to acclimate to a new interface or workflow. Factors that affect trainability include the following:
• Strength of the relationship - Employees are much more likely to be motivated to learn a new workflow than strangers on the Web. Existing customers are also more likely to take the time to learn than new sales leads.
• Payoff for the user - People are more motivated to learn if there is a payoff, such as getting free access to a valuable service, being entertained, or getting to keep their job. If the payoff is ambiguous or not valuable enough, users are less motivated to learn.
• Difficulty of the task - More difficult tasks require a greater commitment to learn.
In the enterprise, we generally have more influence over our users than in consumer-vendor relationships. In other words, our ability to get users to learn a new interface is stronger. That said, the importance of getting user acceptance can't be overstated. End-user rejection is one of the major causes of software project failure (Jones, Capers. Patterns of Software Systems Failure and Success. Boston, MA: International Thompson Computer Press, 1996).
Legal
Web accessibility is an issue that links the legal environment to the technical world of Web application design. In the United States, Section 508 dictates how government organizations can build software, and it limits the use of Rich Internet Applications - at least to the extent that they must still be built to support assistive devices such as text-to-speech software. There are ways of building accessible AJAX applications, but some corporations might believe that because they are in the private sector, they are immune to lawsuits. In fact, there have been efforts to sue private corporations with inaccessible Web sites under the Americans with Disabilities Act (ADA), such as the widely publicized Target Corp. Web site case in 2006. Accessibility will become an increasingly topical issue as RIA becomes the norm. Fortunately, key organizations are attempting to address the issue with updated legislation and software solutions.
Section 508
Section 508 of the Rehabilitation Act requires that U.S. government organizations use computer software and hardware that meet clearly defined standards of accessibility. Although Section 508 doesn't require private-sector companies to conform to the standards, it does provide strong motivation by requiring Federal agencies to use vendors that best meet the standards.
Telecommunications Act
Unlike Section 508, Section 255 of the Telecommunications Act does apply to the private sector. It requires that telecommunications products and services be accessible whenever it is "readily achievable" - a vague and wide-reaching requirement.
ADA
The Americans with Disabilities Act (ADA) essentially requires accessibility in the provision of public services and employment. The ADA empowers employees to ask for "reasonable accommodations" throughout the enterprise, including intranet sites, software, and hardware. The ADA has also been applied to the Web sites of organizations and businesses - for example, in the Target Web site lawsuit - causing concern throughout the country about suddenly heightened legal exposure.
Marketing Risks
All organizations should be concerned about marketing. Internet marketing has spawned a new breed of marketers who have to know about search engine optimization and Web site monetization, as well as understand the target audience and its cultural and technological attributes. All the other risks mentioned here ultimately become marketing risks because they impact the ability of an organization to conduct its business online.
Search Engine Accessibility
Many organizations rely heavily on search engine rankings for their business. Doing anything that might negatively impact rankings would be deemed unacceptable. A lot of marketers are concerned that using AJAX on a corporate site might mean that pages no longer turn up in search engine results pages (SERPs). This is a real and important consideration. It's also important to note that nobody but the search engine "insiders" (the Google engineers) knows exactly how these technologies work. They don't want us to know, probably because knowing would give us an unfair advantage over people who are trying to make good Web sites and deserve good rankings, too. Google's modus operandi has always been to reward people who make Web sites for users, not for search engines. Unfortunately, in practice, this isn't even close to being true: Search Engine Optimization (SEO) is a veritable minefield of dos and don'ts, many of which could sink a Web site for good.
Before we look at this in more detail, we should begin with a bit of an overview. Search engines use special programs called bots to scour the Web and index its contents. Each engine uses different techniques for finding new sites and weighting their importance. Some allow people to directly submit specific sites, and even specific hyperlinks, for indexing. Others rely on the organic evolution of inbound links to "point" the bots in the right direction. (Inbound links are direct links from other sites that are already in the search engine.) The problem with bots is that they are not proper Web browsers. Google, for example, previously used an antiquated Lynx browser to scour Web pages, meaning it was unable to evaluate JavaScript and read the results. Recently, Google appears to have upgraded its crawler technology to use a Mozilla variant (the same engine that Firefox uses). There is evidence that the Google crawler (a.k.a. Googlebot) is now capable of clicking JavaScript-loaded hyperlinks and executing the code inside.
With Google using Mozilla, all common sense points to the likelihood that Googlebot can indeed interpret JavaScript, but that doesn't necessarily help AJAX to be search engine-accessible. For a page to turn up in Google SERPs, it must have a unique URL. This means that content loaded as part of an XHR request will not be directly indexable. Even if Google captures the text resulting from an XHR, it would not direct people to that application state through a simple hyperlink. This affects SERPs negatively.
Google is not the only search engine, however, and other engines (MSN Search and Yahoo) are reportedly even less forgiving when it comes to JavaScript. That doesn't necessarily imply that a site must be AJAX- or JavaScript-free, because bots are actually good at skipping over what they don't understand. If an application is behind the firewall or protected by a login, SERPs won't matter, and this can all be disregarded. It does, however, reinforce that using AJAX to draw in key content is perilous if SERPs on that content are important.
The allure of a richer user experience might tempt developers to try one of many so-called black-hat techniques to trick the search engines into indexing the site. If caught, these can land the site on a permanent blacklist. Some examples of black-hat techniques follow:
• Cloaking - Redirection to a mirror site that is search-engine accessible by detecting the Googlebot user agent string.
• Invisible text - Hiding content on the page in invisible places (hidden SPANs or absolutely positioned off the screen) for the purpose of improving SERPs.
• Duplicate content - Setting up mirror pages with the same content but perhaps less JavaScript with the hope of getting that content indexed, but directing most people to the correct version. This is sometimes used with cloaking.
Given the current status of Googlebot technology, some factors increase the risk of search engine inaccessibility:
• AJAX is used for primary navigation (navigation between major areas of a site).
• The application is content-driven and SERPs are important.
• Links followed by search engine bots cannot be indexed - the URLs cannot be displayed by browsers without some sort of redirection.
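One partial mitigation for this last factor is to mirror important application states in the URL fragment so that they can at least be bookmarked and shared as ordinary links. The sketch below is hypothetical - loadSection() stands in for whatever XHR-based content loader the application actually uses:

// Record the current AJAX state in the URL so it can be bookmarked.
function recordState(sectionId) {
  window.location.hash = sectionId; // e.g., http://example.com/#products
}

// On page load, restore any state named by the fragment.
window.onload = function() {
  var state = window.location.hash.replace("#", "");
  if (state) {
    loadSection(state); // hypothetical XHR-based content loader
  }
};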
Reach
Reach risk is as much a marketing issue as it is a technical one. The problem with AJAX is that not everyone can use it. Even if an AJAX application supports the majority of browser variants, there is still a segment of users who will not have JavaScript enabled in their browsers. This might be the case if they are in a tightly controlled corporate environment where security is important. Some people also turn it off simply because they don't want to be bothered by pop-ups and other intrusive dynamic behaviors. Between 3 percent and 10 percent of the general public has JavaScript disabled at any given time.
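A minimal courtesy for these users is the standard noscript element, whose contents render only when scripting is unavailable (the basic.html fallback page in this snippet is hypothetical):

<noscript>
  <p>This application requires JavaScript for its full functionality.
     A basic HTML version is available <a href="basic.html">here</a>.</p>
</noscript>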
Reach is also affected by every other risk mentioned here. Having lower SERPs affects reach because fewer people can be exposed to the site. Losing users because the interface is too new or innovative naturally affects reach, as does losing people due to upgrades in browser technology that break Web site functionality. The only way to totally minimize reach risk is to eliminate all but the most basic, correctly formatted HTML.
Monetization
Internet marketers are also quickly realizing that AJAX throws a popular Web site revenue model into disarray. Although Google AdSense uses a CPC (cost-per-click) model, many other advertising-driven sites use the CPM (cost per thousand impressions) model, which charges advertisers for mere page views. The idea is that the value of advertising has more to do with branding and recognition than with direct conversions. Whether or not this is true, under CPM an average click-through is expensive, because ads generally get low click-through rates (sometimes 0.1 percent or less). AJAX creates a problem for CPM because if hyperlinks trigger an XHR instead of a full page load, the ad does not register another impression. The advertiser still reaps the benefit, but the Web site loses revenue. Simply implementing a trigger to refresh the ad based on a page event (such as an XHR) might not be a fair way to solve the problem either: Disagreements are bound to surface about what kind of request should fairly trigger an impression. The magic of XHR and JavaScript might also seem a bit too ambiguous for advertisers wary of impression fraud. Such an event system also lacks a baseline from which to compare different Web sites directly - if one Web site loads more content on each XHR, or uses more pagination than another, its impression count can be artificially inflated.
Risk Assessment and Best Practices
The number of variables involved in evaluating the role of AJAX in your project can be a bit overwhelming. The important thing to remember is that all software projects have risk, and AJAX is no different in this regard. We have already discussed some of these risks; following are a few strategies for reducing overall risk.
Use a Specialized AJAX Framework or Component
Save time by leaving browser compatibility and optimization issues to the people who know them best. Well-optimized third-party AJAX frameworks and components are available that have already solved many of the cross-browser issues, and many of them are maintained quite aggressively, with regular updates. This can be a cost- and time-saving approach well worth any newly introduced risks. Judge a framework or tool by the length of time it has been in continuous development and the quality of support available, and balance that against the degree to which you are prepared to build a dependence on it.
AJAX Framework and Component Suite Examples
• Dojo, open source
• Prototype, open source
• DWR, open source
• Nitobi, commercial
• Telerik, commercial
Progressive Enhancement and Unobtrusive JavaScript
Progressive Enhancement (PE) can be an excellent way to build AJAX applications that function well even when the client browser can't execute the JavaScript and perform the XHRs. PE differs from Graceful Degradation: In the latter, we build rich functionality first and then add some mechanism for degrading the page so that it at least looks okay on incompatible browsers. PE is sometimes also referred to as Hijax. Its principles are as follows:
• Essentially, write your application in such a way that it functions without JavaScript.
• Layer on JavaScript functionality after the application is working.
• Make all basic content accessible to all browsers.
• Make all basic functionality accessible to all browsers.
• Be sure enhanced layout is provided by externally linked CSS.
• Provide enhanced behaviors with unobtrusive, externally linked JavaScript.
• See that end-user browser preferences are respected.
In PE, we begin by writing the application with a traditional post-back architecture and then incrementally enhance it to include unobtrusive event handlers (attached not through embedded HTML event attributes but in externally referenced JavaScript) linked to XHR calls as a means of retrieving information. The server can then return a portion of the page instead of the entire page, and this page fragment can be inserted into the currently loaded page without a page refresh.
When a user visits the page with a browser that doesn't support JavaScript, the XHR code is ignored, and the traditional model continues to function perfectly. It's the opposite paradigm of Graceful Degradation. By abstracting out the server-side API, it's possible to build both versions with relatively little effort, but some planning is required.
This has benefits for accessibility (by supporting a non-JavaScript browser), as well as Search Engine Optimization (by supporting bookmarkable links to all content).
Following is an example of unobtrusive enhancement to a hyperlink. In the first code snippet, we show a hard link to a dynamic page containing customer information.
<a href="showCustomerDetails.php">Show Customer Details</a>

In the next snippet, we see the same link, only now we intercept the click and execute an AJAX request for the same information. By calling our showCustomerDetails.php page with the attribute contentOnly=true, we tell it to simply output the content, without any of the page formatting. Then we can use DHTML to place it on the page after the AJAX request returns the content.
onclick="returnAjaxContent('showCustomerDetails.php?contentOnly=true', myDomNode); return false;">
Show Customer Details
When a user without JavaScript clicks the link, the contents of the onclick attribute are ignored, and the page showCustomerDetails.php loads normally. If the user has JavaScript, that page is not loaded (because of the return false at the end of the onclick); instead, the AJAX request fires, using the returnAjaxContent() method - a function we just made up, which would handle the XHR in this example.
What's even more preferable, and more in keeping with the progressive enhancement methodology, is to remove all inline JavaScript completely. In our example here, we can apply a unique CSS class to the link instead of using the onclick attribute:
<a href="showCustomerDetails.php" class="ajaxDetails">Show Customer Details</a>
Then, in our onload event, when the page is downloaded to the browser, we execute something like the following in externally referenced JavaScript to attach the event to the hyperlink:
function attachCustomerDetailsEvent() {
  // Find every anchor on the page and wire up those marked
  // with the ajaxDetails class.
  var docLinks = document.getElementsByTagName("a");
  for (var a = 0; a < docLinks.length; a++) {
    if (docLinks[a].className.match("ajaxDetails")) {
      docLinks[a].onclick = function() {
        returnAjaxContent('showCustomerDetails.php?contentOnly=true', myDomNode);
        return false; // suppress the default page load
      };
    }
  }
}
This loops through all the <a> tags on the page, finds the ones marked with the class ajaxDetails, and attaches the click event. This code would then be totally unobtrusive to a browser without JavaScript.
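For completeness, here is one way the made-up returnAjaxContent() helper might be implemented - a minimal sketch with no error handling, using the same cross-browser object creation discussed earlier:

// Fetch url via XHR and insert the returned HTML fragment into targetNode.
function returnAjaxContent(url, targetNode) {
  var xhr = window.XMLHttpRequest
      ? new XMLHttpRequest()                    // Mozilla, Safari, Opera, IE7
      : new ActiveXObject("Microsoft.XMLHTTP"); // older IE
  xhr.onreadystatechange = function() {
    if (xhr.readyState == 4 && xhr.status == 200) {
      targetNode.innerHTML = xhr.responseText;  // place the page fragment
    }
  };
  xhr.open("GET", url, true);
  xhr.send(null);
}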
Google Sitemaps
Google has provided us with a way of helping it find the entirety of our sites for indexing. It does this by allowing developers to define an XML-based Sitemap containing such information as the URLs of important pages, when they were last updated, and how often they are updated.
Google Sitemaps are helpful in situations where it is difficult to access all areas of a Web site strictly through the browseable interface. They can also help the search engine find orphaned pages and pages behind Web forms.
If an application uses unique URLs to construct Web page states, Sitemap XML can be a useful tool for helping Google find all important content, but it is not a guarantee that it will. It also has the advantage of being one of the few SEO techniques actually sanctioned by Google.
Many free tools are available to assist with the generation of a Google Sitemap file, but one is easily created by hand if you can crawl and catalog the important areas of your Web site. Following is an example of a Google Sitemap XML file:
<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url>
    <loc>http://www.nitobi.com/</loc>
    <lastmod>2007-10-01</lastmod>
    <priority>1.0</priority>
  </url>
  <url>
    <loc>http://www.nitobi.com/products/</loc>
    <lastmod>2005-10-03T12:00:00+00:00</lastmod>
    <changefreq>weekly</changefreq>
  </url>
  <url>
    <loc>http://www.nitobi.com/news/</loc>
  </url>
</urlset>
The loc tag provides a reference to the URL, lastmod describes when it was last updated, changefreq gives Google an idea of how often the content is updated, and priority is a number between 0 and 1 that indicates a reasonable importance score. In general, it's not advantageous to make all pages a 1.0, because doing so will not increase your ranking overall. Additionally, new articles or pages should receive a higher priority than, for example, a home page that is relatively static.
After a Sitemap file has been created, Google must be made aware of it. This can be done by visiting the Webmaster Tools area on google.com. In a short time, the file will be downloaded, and then re-downloaded at regular intervals, so be sure to keep it up-to-date.
Visual Cues and Affordances
One of the things usability experts try to do is construct an interface in such a way that people don't need to be trained on it. The interface should use patterns that suggest the features and functionality within; that is, something that can be dragged should have an obvious grab point that suggests "drag me," and possibly a drop-shadow to indicate that it is floating above the page. Try to think of ways to help the user by visually augmenting on-screen controls with cues. Entire books have been written on UI design and usability (some great ones include Don't Make Me Think by Steve Krug and Designing Visual Interfaces: Communication Oriented Techniques by Kevin Mullet and Darrell Sano), but here are some quick guidelines:
• Make controls visible and intuitive. Use high-contrast, evocative iconography to indicate functionality; that is, use a trash can for delete.
• Use images to augment links and actions. There is a positive relationship between using image links and user success for goal-driven navigation.
• Use familiarity to your advantage. Build on users' prior knowledge of popular desktop software such as Microsoft Office, Photoshop, Media Player, Windows Explorer, and so on by using similar iconography and interface paradigms.
• Provide proactive assistance. Use HTML features such as tooltips (alt tags) and rollovers (onmouseover, onmouseout) to provide proactive information about the control and inform the user about its function.
• Utilize subtractive design. Draw attention to the visual cues that matter by reducing the clutter on the screen. Do this by eliminating any visual element that doesn't directly contribute to user communication.
• Use visual cues. Simply style an object so that users can easily determine its function. Good visual cues resemble real-world objects. For example, things that need to be dragged can be styled with a texture that indicates good grip (something bumpy or ridged). Something that can be clicked should have a 3D pushable button resemblance.
• Be consistent. Repeat the use of visual patterns throughout the application wherever possible.
Free databases of user interface patterns are available online, including the excellent Yahoo Design Pattern Library.
Avoid Gold Plating
Gold plating is adding more to the system than the requirements specify; it can also occur in the design phase of a project through the addition of unnecessary requirements. Building in features above and beyond what the requirements of a software project state can be a lot of fun but adds cost and maintenance work down the road. Every additional feature is a feature that needs to be tested, that can break other parts of the software, and that someone else might need to reverse engineer and understand some day. Gold plating sometimes results from conversations that start, "Wouldn't it be cool if..." Keeping tight control on scope creep and managing the project carefully help avoid gold plating.
The counter-argument to this is that tightly controlling scope and being strict about requirements can stifle innovation and take the fun out of developing rich applications. It might be that some of our best features come from moments of inspiration midway through the project. A balance between a focus on requirements and leeway for unplanned innovation could be considered - keeping in mind how it impacts the overall risk of the project.
Plan for Maintenance
Testing needs to happen in any software development project, but with AJAX, developers must perform testing and maintenance at regular intervals to ensure longitudinal success as browsers evolve. Periodically review the target browser list for currency, and update it to include new versions of popular browsers (including beta versions). Establish repeatable tests and run through them whenever the browser list changes.
Software Risk Management
Some general principles of software risk management apply to handling risk in software projects. Briefly, here are a few of the things we recommend to keep risk in check:
• Adopting a holistic view - Taking the wide-angle approach and looking not only at the immediate technical and budgetary constraints, but also at external issues such as opportunity cost (the value of the alternative to the choice you make) and how the project impacts marketing goals. The point is to maintain a common understanding of what is important in a software project.
• Having a common product vision - Developing a culture of shared ownership between team members and a common understanding of what the project is and what the desired outcomes are.
• Using teamwork - Bringing together the different strengths of each team member to form a whole that is more than the sum of its parts.
• Maintaining a long-term view - Keeping the potential future impact of decisions in mind and budgeting for long-term risk management and project management.
• Having open lines of communication - Encouraging both formal and informal means of team communication.
Adopt a Revenue Model that Works
We discussed earlier how AJAX can create a problem with the traditional CPM cost-per-impression revenue model: It can cause a site's traffic (in terms of raw impressions) to be underestimated and, consequently, undervalued.
What we want to achieve with ad-driven monetization is a way to tie the true value of a Web site with the cost of advertising there. The question is what makes ad space valuable? Lots of things do, such as unique traffic, people spending a lot of time on a site, people buying things on a site, having a niche audience that appeals to particular advertisers, and so on. To be fair, a revenue model needs to be simple and measurable, and vendors of advertising need to set their own rates based on the demand for their particular property.
Cost-per-Mille (Cost-per-Impression) Model Guidelines
The key in CPM revenue models is to update the advertisement when enough content on the page has changed to warrant a new impression.
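For example, if ads are served in an iframe, the page could re-request the ad unit after an XHR replaces a substantial portion of the content. The sketch below is hypothetical - the adFrame element and the decision of when a change "warrants" an impression are assumptions that would need to be agreed on with the advertiser:

// Re-request the ad after a major AJAX content change so that a new
// impression is registered. When to call this is a policy decision.
function refreshAdImpression() {
  var adFrame = document.getElementById("adFrame"); // hypothetical ad iframe
  if (adFrame) {
    adFrame.src = adFrame.src; // reloading the frame fetches a fresh ad
  }
}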
Cost-per-Click Model Guidelines
Click-through rates are affected by how appropriate an ad is for the Web site. In content-driven, consumer-targeted Web sites, the ad server must show contextual ads based on content. When page content is loaded with AJAX, it might not be read by AdSense or other ad servers, so an update to the advertising context might be appropriate.
Cost-per-Visitor Guidelines
If a visitor is defined as a unique person per day, a cost-per-visitor model works irrespective of how many page loads occur or how good or bad the advertising is. A unique visitor can be measured reasonably well by looking at the IP address and browser User-Agent and by setting a cookie.
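The cookie part of that measurement can be as simple as the sketch below; the visitorId name and one-day lifetime are assumptions, and the IP address and User-Agent checks would happen on the server:

// Tag first-time visitors with a one-day cookie so that repeat page
// loads within the same day are not counted as new visitors.
function tagVisitor() {
  if (document.cookie.indexOf("visitorId=") == -1) {
    var expires = new Date();
    expires.setDate(expires.getDate() + 1); // expire in one day
    document.cookie = "visitorId=" + Math.floor(Math.random() * 1e9) +
        "; expires=" + expires.toUTCString() + "; path=/";
  }
}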
Include Training as Part of the Application
Now that we know what affects user trainability, we can look at what impacts the success of user training. If we want to provide training for software applications to improve user acceptance, how do we do it?
• Organize training around user goals, not product features. For example, it would be better to structure a lesson around the goal of creating an invoice, rather than how to use the invoice tool. This way, users can understand why they should be motivated to pay attention. It also gets to the heart of what they want to learn.
• Find out what users want to use the tool for, and provide training for that. Information overload is deadly to the success of training: Trying to cover too much ground can overwhelm users and cause them to tune out, bringing information absorption to a halt.
• Use training to identify flaws in product design. If training is delivered in-person, it can be an opportunity to identify parts of the application that are too hard to use. Although no substitute for early usability testing, this might be the last opportunity to catch problems.
• Support and encourage a user community. Support communication tools that allow users to teach one another. Forums and mailing lists can be useful in this regard.
When we think of training, we might mistakenly picture in-person sessions or even live webinars. These can be worthwhile - by no means rule them out - but consider low-cost alternatives, too:
• Use context-specific training material. Make material accessible from within the application and at useful interaction points. For example, provide information on how to create a new invoice available from the invoice management screen and so on.
• Show, don't tell. Use a screen-capture tool such as Adobe Captivate, Camtasia, or iShowU (for the Mac) to provide inexpensive screencast training material that you can deliver through a Web page. Many users prefer to learn this way, and there's nothing like an actual demonstration of a product feature because, by definition, it shows a complete goal-story from beginning to end. Some free in-application Web tour tools are also available, such as Nitobi Spotlight (http://www.nitobi.com) and AmberJack (http://amberjack.org/), although these might not be as effective as a prerecorded demonstration with audio.
Summary
Because of the unstable nature of the JavaScript/CSS/DHTML/XHR paradigm (the fact that the earth keeps shifting beneath our feet with each browser release), we need to employ a Continuous Risk Management process during and after an application is rolled out. This doesn't need to be overly officious and complicated, but it should at least involve unit and regression testing and a holistic look at current browser technology and the underlying mechanisms of AJAX. To put it simply: Does our solution continue to function with current browsers and operating systems, and will it continue to do so in the near term with upcoming releases?
Along with a continuous approach to analyzing risk in a software project must come a willingness to revisit design decisions and to perform rework and maintenance. Both browsers and users can be moving targets, and changes to the JavaScript, CSS, and XHR engines can subtly affect AJAX applications; these are the most likely culprits of any long-term maintenance problems. Microsoft, Mozilla, Opera, and Apple are all watching the AJAX landscape carefully to help us avoid these as best they can, but a continuous approach to risk management is needed to stay on top of this and ensure a long and healthy lifespan for our Web applications.
Resources
Search Engine Optimization
WebProNews
SearchEngineWatch
Google SEO Recommendations
Google Guidelines for Site Design
Google Sitemaps
Statistics
The Counter Global Web Usage Statistics
Roadmaps
Firefox 3 Roadmap
ACID2 Rendering Test
CSS 3.0 Roadmap
Screen Capture Tools
Adobe Captivate
Camtasia
iShowU