Saturday, November 3, 2007

Keyword Optimize Your Resume

Applying for a job without knowing somebody at the company first often feels like a quixotic mission. You throw your resume into the faceless online job site grinder and hope a human being somewhere along the way recognizes your obvious talents and relevant life and work experience. Good luck with that, Don!

Here’s how to put keyword optimization to work getting your resume discovered.


Use “preferable” terms. This one comes from Pinny Cohen. Recruiters and HR people are bound to search on the most obvious or common terms when seeking out candidates to forward to a hiring manager. So how do you figure out what terms people might be looking for? Cohen mentions a page updated weekly by job site TheLadders.com, which lists the 100 top recruiter search words. Using these instead of more creative phrasing will help those recruiters find you.

Include a keyword summary. CareerPerfect advises adding this at the beginning of your resume, even if you’ve used keywords throughout, for three reasons:

  • It lets you offer up variations on the keyword that may not fit elsewhere in the file.
  • The more keywords you have, the greater the keyword density, which can help your ranking.
  • You’re more likely to cover alternative keywords that might be used by the searcher.

The site advises separating keywords with commas or periods.
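
To see why keyword density and variant coverage matter, here's a minimal Python sketch of how a simple keyword-matching screen might score a resume. This is an illustration only, assuming a plain term-counting search; real applicant-tracking systems are more sophisticated, and the sample text and search terms here are made up.

    import re

    def keyword_score(resume_text, search_terms):
        """Count total occurrences of each search term in the resume."""
        words = re.findall(r"[a-z0-9']+", resume_text.lower())
        return sum(words.count(term.lower()) for term in search_terms)

    resume = """Achievement-oriented sales professional with 15 years of
    success in international trade and global marketing. Proven track
    record in cold calling, new business development and key account
    management. Boston, MA (Massachusetts)."""

    # A recruiter searching on common terms surfaces this resume; creative
    # phrasing would score zero. Covering variants ("MA", "Massachusetts")
    # catches a searcher who types either form.
    print(keyword_score(resume, ["sales", "marketing", "MA", "Massachusetts"]))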

Integrate the keywords in a “Qualification Summary.” Pat Kendall of Advanced Resume Concepts says that search engines for job sites are becoming sophisticated enough to read keywords in context, figuring out whether your keywords are legitimate based on the text that surrounds them. So a bare laundry list of keywords won’t necessarily be as effective as a summary statement that provides the human element.

She offers two examples. Here’s a sample of a non-“keyword-loaded” summary:

Achievement-oriented with 15 years of successful experience and proven ability to meet objectives, communicate with clients, and quickly excel in new industries.

Here’s a “keyword-heavy” sample:

Achievement-oriented sales professional with 15 years of success in international trade and global marketing. Skilled in developing marketing programs, coordinating new product introductions and providing customer support. Proven track record in cold calling, new business development and key account management.

Use keywords inconsistently. As job-hunt.org reminds us, you don’t know if the recruiter will type in MA, Mass or Massachusetts, so cover all bases if that’s where you’re looking for work.

Get the top keywords into titles. As “Pimp Your Work” points out, the location of keywords within your resume is important. “For example, if the keyword is in your title, you’ll have a better chance of ranking high rather than if it were just in your profile body.”

Get keyword hints from the job listing itself. According to Monster.com “resume expert” Kim Isaacs, if you study the particular job listing, “you’ll be able to get into the mind of employers who literally spell out what they’re looking for.”

Here’s an example she provides:

Requirements: The qualified candidate will have a minimum of five years of human resource experience in a fast-paced environment with strong knowledge of benefits administration including medical, dental, life, 401(k) and COBRA. Proficiency in MS Office programs is required. Bachelor’s degree preferred.
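
From that one paragraph you can pull a ready-made keyword list: human resources, benefits administration, medical, dental, life, 401(k), COBRA, MS Office, and bachelor’s degree. Echo those exact phrases in your resume and you’ll match the searches that hiring manager is most likely to run.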

What have you figured out about optimizing your resume?

courtesy @webworkerdaily.com



Top 10 Free Video Rippers, Encoders, and Converters



So many video file formats, so many handheld video players, so many online video sites, and so little time. To have your favorite clips how you want them—whether that's on your DVR, iPod, PSP or desktop—you need the right utility to convert 'em into the format that works for you. There's plenty of commercial video converter software, but there are also solid free utilities that can convert your video files on every operating system, and even right in your web browser if you've just got a quick clip. Put DVDs on your iPod, YouTube videos on DVD, or convert any video file with today's top 10 free video rippers, encoders and converters.



10. VLC media player (Open source/All platforms)
OK, so VLC is a media player, not a converter, but if you're watching digital video, it's a must-have—plus VLC can indeed rip DVDs, as well as play ripped discs in ISO format (no actual optical media required). VLC can also play FLV files downloaded from YouTube et al., no conversion to AVI required. Since there's a portable version, VLC's a nice choice for watching your DVD rips and saved YouTube videos wherever you go.

9. MediaCoder (Open source/Windows)
Batch convert audio and video compression formats with the open source MediaCoder for Windows, which works with a long laundry list of formats, including MP3, Ogg Vorbis, AAC, AAC+, AAC+V2, MusePack, WMA, RealAudio, AVI, MPEG/VOB, Matroska, MP4, RealMedia, ASF/WMV, QuickTime, and OGM, to name a few.

8. Avi2Dvd (Freeware/Windows)

Make your video files burnable to a DVD with Avi2Dvd, a utility that converts AVI/OGM/MKV/WMV/DVD files to DVD/SVCD/VCD format. Avi2Dvd can also produce DVD menus with chapter, audio, and subtitle buttons.

7. Videora Converter (Freeware/Windows only)

Videora Converter is a set of programs, each designed to convert regular PC video files into a format tailored to your favorite video-playing handheld device. The Videora program list includes iPod Video Converter (for 5th gen iPods), iPod classic Video Converter (for 6th gen classic iPods), iPod nano Video Converter (for 3rd gen iPod nanos), iPod touch Video Converter, iPhone Video Converter, Videora Apple TV Converter, PSP Video 9, Videora Xbox360 Converter, Videora TiVo Converter, and Videora PMP Converter. Lifehacker alum Rick Broida used Videora in conjunction with DVD Decrypter to copy DVDs to his iPod.

Honorable Mention: Ares Tube for Windows converts YouTube and other online videos to iPod format.

6. Any Video Converter (Freeware/Windows only)

Convert almost any video format, including DivX, XviD, MOV, RM, RMVB, MPEG, VOB, DVD, WMV, and AVI, to MPEG-4 for an iPod, PSP, or other portable video device, MP4 player, or smartphone with Any Video Converter, which also supports user-defined output formats. It batch processes multiple files, saving the results to a pre-selected folder and leaving the original files untouched.

5. Hey!Watch (webapp)

Web application Hey!Watch converts video located on your computer as well as clips hosted on video sites. Upload your video to Hey!Watch to encode it into a wide variety of formats, including H.264, MP4, WMV, DivX, HD video, mobile 3GP/MP4, iPod, Archos, and PSP. Hey!Watch only allows 10MB of video uploads per month for free, and from there you pay for what you use, but it's got lots of neat features for video publishers, like podcast feed generation and automatic batch processing with options you set once.

4. VidDownloader (webapp)

When you don't want to mess with installing software to grab that priceless YouTube clip before it gets yanked, head over to web site VidDownloader, which sucks in videos from all the big streaming sites (YouTube, Google Video, iFilm, Blip.TV, DailyMotion, etc.), converts 'em to a playable format, and offers them for download. Other downloaders for online video sites get you a Flash FLV file, but VidDownloader spits back an AVI file.

3. iSquint (Freeware/Mac OS X only)

Convert any video file to an iPod-sized version and automatically add the results to your iTunes library. iSquint is free, but Lifehacker readers have praised its pay-for upgrade, VisualHub, which offers more advanced options for a $23 license fee. Check out the feature comparison chart between iSquint and VisualHub.

2. DVD Shrink (Freeware/Windows only)

Copy a DVD to your hard drive and leave off the extras like bonus footage and trailers to save space with DVD Shrink. Download Adam's one-click AutoHotkey/DVD Shrink utility to rip your DVDs to your hard drive for skip-free playback from scratchy optical media.

Honorable mention: DVD Decrypter (beware of advertisement interstitial page), which Windows peeps can use to copy DVDs to their iPods.

1. Handbrake (Open source/Windows, Mac)


Back up your DVDs to digital files with this open source DVD-to-MPEG-4 converter app. See also how to rip DVDs to your iPod with Handbrake.
What's your favorite way to convert video to the right format? Did we miss any good ones in this list? Let us know in the comments.



courtesy @lifehacker.com



Microsoft Sees Windows Vista Growth Phase Underway

Microsoft is predicting a strong first holiday season for Windows Vista, saying the OS is starting to see mass adoption by businesses nearly a year after its release.

Microsoft's Windows Vista is starting to see mass adoption from businesses nearly a year after it was released, the company said while predicting a strong first holiday season for the product.

"We feel like we are starting to hit our stride not only in demand, but in deployment in business," Kevin Johnson, president of Microsoft's platform and services group, said in an interview.

Microsoft delivered quarterly results last week that eclipsed Wall Street's most bullish forecasts, helped in part by strong demand for Vista, the latest upgrade to its flagship Windows operating system. Vista was introduced in January.

Vista's success was not always a foregone conclusion. Early Vista buyers complained about the lack of compatibility with existing devices and software programs.

Microsoft also buckled to PC manufacturer demands that the company delay the scheduled transition to Vista and extend sales of its previous Windows operating system, Windows XP, for another five months because some customers preferred XP.

In a note to clients Wednesday, Bernstein Research analyst Charles Di Bona said he thinks Vista's upgrade cycle is "underappreciated" and expects growth at the Windows business to be stronger than market expectations.

Di Bona forecasts Windows revenue to grow by 15 percent in this fiscal year ending in June versus Microsoft's own estimate of an increase of 12 percent to 13 percent. Each percentage point of growth represents about $150 million in revenue and roughly $110 million in operating profit, based on previous results.

Windows runs on more than 90 percent of the world's computers, and Microsoft makes about 75 cents in profit for every dollar in Windows sales. The Windows client business generated $15 billion in revenue in fiscal 2007.
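
Those figures are consistent with each other: one percentage point of growth on roughly $15 billion of Windows client revenue is about $150 million, and at roughly 75 cents of profit per revenue dollar that translates to a bit over $110 million in operating profit.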

PREMIUM AND PIRACY

Revenue at the segment, Microsoft's largest and most profitable unit, rose 25 percent in the September quarter, boosted by a PC market growing at around 15 percent.

In addition, improved measures to curb piracy and greater adoption of higher-margin, premium versions of Vista helped push the segment's sales above PC market growth, Johnson said in the interview this week.

Microsoft executives have said for years that being able to crack down on pirated versions of its software will help drive significant increases in sales. Chief Executive Steve Ballmer has said that more than 20 percent of its software running around the world is pirated.

Vista comes with a new authentication program that sends security updates and improves service to users of genuine copies of Windows. Johnson said the company has made progress in educating consumers about the advantages of buying more expensive, non-pirated versions.

As consumers use their computers more for home entertainment, Microsoft has boosted the percentage of higher-end versions of Windows. Premium versions accounted for about 75 percent of all Windows copies in the first quarter, compared to about 59 percent a year earlier, Di Bona said in his report.

Microsoft's Johnson said the company should see a pick-up in corporate deployment of Vista after the release of Windows Vista Service Pack 1, the first major update to the new operating system.

A sign of business customers' intent to upgrade was a 27 percent increase in unearned revenue at the Windows business during the past quarter, Johnson said.

Unearned, or deferred, revenue reflects long-term contracts on the balance sheet that have been signed but not recognized as income until the product is delivered. In this case, it is when the customers start deploying Windows Vista.

courtesy @informationweek.com

Interview: Microsoft's Ferguson talks about Oslo, SOA

Don Ferguson was a fellow at IBM who, in a rare move, left the company. In an interview, he explains his move and what he's up to at his new home, Microsoft.

Don Ferguson bears what may be a unique distinction. He has held the lofty title of "Fellow," a title associated with being a distinguished technologist, at not only IBM but Microsoft as well. Last year, Ferguson left IBM, where he had participated in development of the company's WebSphere middleware platform, to come to Microsoft. He is a technical fellow in the Microsoft Office of the CTO. InfoWorld Editor at Large Paul Krill met with Ferguson at the Microsoft SOA and Business Process Conference in Redmond, Wash., on Tuesday to talk about Microsoft, including the company's new Oslo modeling and services project, and IBM.

InfoWorld: You mentioned this morning that you had been at IBM and that you left about nine months ago?

Ferguson: December 23.

InfoWorld: Apparently that doesn't happen too often where an IBM Fellow leaves the company.

Ferguson: It's very rare.

InfoWorld: Why did you leave?

Ferguson: There's a few reasons. I had worked for IBM for 20 years, and I turned 45 at the same time. So I sat down and thought -- what am I going to do? I have 20 more years in the industry if I work to 65, so I'm exactly at the midpoint. It's time to think about what to do next. And there were a lot of things I could have done next that were very exciting at IBM, but I decided that if I was going to make a change it needed to be a really big change.

InfoWorld: What was IBM's reaction to your going to Microsoft?

Ferguson: I don't know. It seems like the kind of thing you should ask them, not me.

InfoWorld: You mentioned this morning that there was speculation that you were a double agent. Was that all just humor?

Ferguson: It was humor. I think most people's reaction mirrored mine, which is I felt a terrible sense of loss. Not that it was really anything, it's just you suddenly go from having worked with these same people for 10 years to seeing them a couple times a year when they come into town.

InfoWorld: What are your responsibilities at Microsoft?

Ferguson: I'm in the Office of the CTO, [with] David Vaskevitch, so I think in general about the implications of technology trends on Microsoft's product portfolio over the very long horizon. An example of this is obviously the modeling because modeling will play out in the relatively near term. It already has. And it'll also play out in the long term.

InfoWorld: You mentioned this morning that you had been involved in the development of IBM WebSphere. Could you talk a little bit about what brought on the need for WebSphere, and did you see any irony in going to probably the only major software vendor that doesn't have an app server?

Ferguson: Well, I'm not sure that I would say that Microsoft doesn't have an app server.

InfoWorld: Microsoft is certainly not a big Java proponent.

Ferguson: Well, there's a difference between an app server and Java. CICS on the mainframe is an app server. It was an app server for a long time, but it wasn't in Java. It has Java now, but it was an app server. So app server doesn't equal Java. App server is a specific structure. It fills a specific role, and not necessarily Java. It's not necessarily based on Java. So I'm not sure I would agree that Microsoft doesn't have an app server.

InfoWorld: Which Microsoft product is positioned in the app server space?

Ferguson: It surfaces itself in a few ways. I'm only beginning to completely get my mind around what we do. Internet Information Server is certainly part of it.

InfoWorld: So what was your role in development of WebSphere?

Ferguson: I started one of the projects that was the precursor to WebSphere, so I was the technical lead for the six-person team that said, "This is the direction we're going to go in." And then I was the chief architect for the WebSphere products, from the very first thing we did until they had me take over the software group architecture.

InfoWorld: Could you talk about your role in Oslo and what the importance of this is? I guess the conception of Oslo preceded your arrival at Microsoft.

Ferguson: Yes. Mostly, my participation in Oslo up to this point has been providing an external perspective. I have a different background from a lot of people. I spent a lot of time with customers, so I sometimes tend to provide an external perspective on both Oslo and BizTalk Services.

InfoWorld: What do you see as the significance of Oslo?

Ferguson: If you look at what people talk about in SOA, they'll often talk about this closed loop of model, then assemble, then develop, then deploy, then monitor, and then refine. I want to understand what my business is doing, I want to deploy an app. There's a development phase, a deployment phase, and then I want to see what's actually going on. I think the modeling capabilities in Oslo and the fact that there's a shared model among all of those things is going to make going around that loop much smoother and much quicker. So if you think about a business being a control loop, you think about what the business is doing and how you want to change it. You think about how programs and the data center need to change, then you deploy and look at what's happening, and then you go back and refine. If you think about it as a control loop, it makes it a lot more productive, a lot more agile, if you have this common core model that people can collaborate around.

InfoWorld: So is Microsoft's SOA strategy about modeling?

Ferguson: It's part of the SOA strategy, but there's a strong overlap between modeling and SOA, and the two things aren't necessarily linked. An example of this is one of the things that you model is what are your business objects, what are your business entities, customer account, purchase order? So you do information modeling. You model the information. You put a SOA abstraction on that so you can access the information, but that tends to be business modeling, but in kind of the entity relationship, kind of the logical data model space. So that's an example of something that you model, but isn't directly related to SOA. And then there are lots of use cases of SOA, lots of styles of SOA that don't start with modeling. So people will often do some simple early integration, but they're not really modeling it because they know what they're doing, they know what they're trying to accomplish.

One way to think about this is -- there are several ways to classify projects, and one is the spectrum of systematic versus opportunistic. Modeling today tends to be systematic. You're thinking systematically about how you want to change. So you thought systematically about how you want to change your business model; now you need to think systematically about the implications of that for your app portfolio and how you deliver it. Opportunistic tends to be very bottom-up. Two people talk to each other, and they realize that there's an opportunity to send messages back and forth or they want to do a quick project. People will use SOA for opportunistic and systematic. So you can do SOA without doing modeling. There are also things that you model that aren't SOA, like the evolution of your database, your information model. You model the evolution of what you want your portal to look like.

And then there is an intersection, and I think that the intersection is that one of the things that you model is process. SOA tends to enable business processes because it gives you a palette of business verbs that you can use to build the processes. When you build the process with modeling, it becomes a service. Business modeling is a way to build services, and services are something that enables business modeling, and the two of them kind of loop in that space.

InfoWorld: Is Microsoft going to approach the industry at large about embracing Oslo? Is there any intention that Sun Microsystems would buy into this or IBM or anybody else?

Ferguson: That's a good question. The only thing that I know for sure is that a lot of this surfaces through what we've already done in Web services. Web services gives you verbs that are running elsewhere, so you can call them and the Web service standards like WSDL, XML, and WS-Policy allow you to plug external services into your modeling environment and vice-versa. So what it is, at this point, is it's runtime, it's protocol interoperability, and it's design time interoperability. So we can collaborate on a design, but we're modeling in our space and you're modeling in your space.

InfoWorld: Is Microsoft concerned about any criticism that Oslo is proprietary? That you're locking everybody else out of this, and how is this going to be widely accepted if it's Microsoft and Microsoft only?

Ferguson: I've never met anybody who wasn't concerned about criticism. I mean, maybe Buddha wasn't concerned about criticism. So I'm sure that there's a concern. I think that some of the responses I gave about the existing set of standards ameliorating some of those concerns -- that has to be something we communicate. Because the first thing we need to do is to make sure that the criticisms are founded in fact, not perception, and so we certainly need to articulate the things that aren't articulated, because people then may choose to criticize us after we've explained it. But we need to do the explanation of why we think that the existing set of standards significantly enables the modeling things that everyone's trying to do. If, after that point, people choose to criticize us, then I think at that point, at least I'll think about it. But my primary goal over the next few months is to explain the technology that we've made available and how it helps eliminate some of those concerns. That is one of the most misunderstood things about Web services, in my opinion -- that people invariably focus on it as protocol interoperability, what flows over the wire. And there is a set of Web service standards that are about design collaboration, tool interoperability, description -- how you find services.

InfoWorld: What would some of those be?

Ferguson: WSDL, WS-Policy, WS-MetadataExchange.

InfoWorld: I've asked this question of other people, but it seems like there's this whole alphabet soup of Web services standards. How do you deal with it? It's almost like you want to plug your computer into the wall, but you have 30 different plugs and have to go into 30 different outlets and you better make sure you get them right. Is that a concern?

Ferguson: Yes, it is. But I think there's a few ways that you deal with it. One is, this is not fundamentally different from other domains. When you have plug-and-play, it can be complex. But what always emerges [are] patterns, documented best practices. Because you can't force everybody down one path, so you have to do the breadth and flexibility, but then you can't force everybody to figure it out. So part of what you need to do is explain -- like, these are four core ways to think about it. And then start to do things around that. To some extent, the WS-I (Web Services Interoperability Organization) profiles are an example of that, which is, this is a cluster of functions that's useful together. So I think what you'll see is that there will be documentation of which clusters make sense together. Modeling also helps with that because the standards are descriptive -- WSDL, WS-Policy -- but they allow one modeling environment to explain to another what the intent of the service that's being surfaced is, whether it's business intent or whether it's infrastructure intent.

InfoWorld: Have you focused at all on the services part of Oslo?

Ferguson: I spend a lot of time thinking about BizTalk Services and the Internet Service Bus. That's one of the things that I found very attractive about coming here; it's something I care passionately about. I think it will be a very big thing in the Internet.

InfoWorld: Why?

Ferguson: There's lots of reasons for that, so I can give you a couple of them. One of them is the enterprise service bus pattern has demonstrated that it's useful for lots of companies and it's emerged as a best practice. But it's currently beyond the scope of lots of businesses. The small [and] medium-size business just can't do an enterprise service bus, they can't really do B2B. I think [Internet Service Bus is] going to broaden that out to a huge number of companies that aren't capable of getting involved today.

InfoWorld: IBM with WebSphere competed against BEA Systems and its WebLogic platform. Oracle's now trying to buy BEA. Oracle has its own app server. What is your perspective on Oracle's proposed acquisition of BEA? Do you think it's a good idea? What does that mean for the general user community at large if Oracle buys BEA? And do you think they'll be successful in acquiring BEA?

Ferguson: I don't know. I'm not very good at things like that. I'm not going to start thinking about Oracle and BEA.

courtesy @infoworld.com

10+ questions to help determine how well you're performing as an IT manager


External feedback and performance reviews can only go so far in helping you discover your shortcomings, set improvement goals, and build on your strengths. Sometimes, you also need to take a hard, honest look at your performance and ask, "Would I hire myself for this job?" Knowing yourself, your skills, and your experience -- and knowing the job (since you're already doing it) -- you're well positioned to ask, "Am I the best practical choice for the job I have, or could my organization do better?"
For this strategy to work, you have to ask yourself the tough questions and answer them truthfully. Although those questions will vary from job to job, here are some that might work for you. If nothing else, they will give you an idea of the kinds of questions you should be asking yourself if you're serious about excelling in your work.


#1: Technology changes every day. Can you list three examples of things you’re doing to keep your technical knowledge current?

#2: Your boss has a family emergency that’s going to keep him or her out of the office for a week. Your boss can call only one person to keep things running until he or she returns. Are you the one who gets that phone call? If so, why? If not, why not?

#3: What specific steps have you taken over the past six months to either increase the performance of the bottom 20 percent of your staff or to move them to positions where they can be successful?

#4: When was the last time you talked with the account reps for your three largest vendors?

#5: What specific steps have you taken over the past six months to keep your star performers on board and motivated?

#6: If your group services internal clients, what do they think of the work your department is doing? Are you guessing or have you actually asked them in the last 30 days?

#7: If you suddenly get sick, do you have a subordinate you could trust to keep things moving until you get back?

#8: When was the last time you checked on the financial stability of the outsourcing firms you use?

#9: Do you know which of your department’s projects is furthest behind schedule? Do you know why?

#10: Consider your direct reports. Does each of them know what your top three priorities are for them?

#11: Consider your boss. When was the last time he or she asked you to take over a special project? If it’s been more than six months, why do you think that is?

#12: Can you list three things you’re doing to help HR with recruitment or retention?

#13: Personal networking is important for you and your organization. What professional associations do you belong to, and how active are you in them?

A continuing process

For this self-interview to be worth the time you spend on it, you must not only ask tough questions but also make the necessary improvements. On the other hand, the only person who knows the results of this interview is you. If you’re relentless in uncovering and correcting your own management weaknesses, you’ll find it much easier to land the next job down the line.

courtesy @TechRepublic

Thursday, November 1, 2007

10 ways to effectively estimate and control project costs

Building a better bottom line is just as important for an IT department as it is for the whole organization at the enterprise level. It involves an understanding of the main drivers of IT costs, aligning IT spending plans with overall business strategy, using financial resources efficiently, viewing IT expenditures as investments and having procedures to track their performance, and implementing sound processes for making IT investment decisions.

1. Control baseline costs
Nondiscretionary money spent maintaining established IT systems is referred to as baseline costs. These are the “grin and bear it” costs, those required just to keep things going. Baseline costs constitute around 70 percent of all IT spending for the average organization, so this is a good place to start. These costs tend to creep over time due to the addition of new systems, meaning there’s less money available for discretionary project work. Worse yet, this creep gives the appearance that IT costs are rising while the value derived from IT investments stays the same or actually goes down.
Fortunately, baseline costs can be easily controlled. Renegotiate vendor contracts, reexamine service levels, manage assets effectively, consolidate servers, sunset older applications, maintain a solid enterprise architecture, and practice good project and resource management. By doing so, you can lower the percentage of the IT budget allocated to baseline costs and keep them in line, avoiding problems with opportunity costs. Think of IT projects as an investment portfolio; the idea is to maximize value and appreciation. Baseline costs are food, clothing, and shelter; we have to spend the money, but it doesn’t have to overwhelm the budget.

2. Acknowledge hidden IT spending impacts
Gartner estimates more than 10 percent of corporate technology spending occurs in business units, beyond the control of IT. Several factors contribute to increasing hidden IT spending:
• Flat organizational models that are more difficult to rein in and control
• Virtual enterprise structures ostensibly set up as nimble, agile organizational constructs but without regard for policy and procedure
• Changing organizational authority where business unit managers are given (or take) responsibility for decentralized technology spending
• Selective IT outsourcing, in which a business unit will independently decide it doesn’t need to participate in overall enterprise architecture to fulfill its departmental mission
The impact of all this hidden technology spending can be profound and prevents IT from being able to control project costs. Architectural pollution from rogue projects can delay change, resulting in cost overruns and lost opportunities. Business unit-sponsored systems eventually become the responsibility of IT, increasing the cost of support and maintenance (there are those baseline costs again). Cultural biases in business units may conflict with overall strategic goals, increasing costs and destabilizing information and knowledge. This is just as important for small companies as for large ones; fundamental business decision-making is driven by solid information, and if we don’t have it we can’t do it.

3. Understand long-term application costs
As a general rule, ongoing application costs are about 40 percent to 60 percent of the original development cost for each year in an application’s life cycle. Sound like a lot? These are the costs associated with application support, maintenance, operations, software licenses, infrastructure, and allocated help desk and operational staff. Controlling these ongoing costs is critical; as a component of baseline costs, they’re necessary evils. Collect and maintain information about all new development work underway throughout the entire enterprise and actively participate in all projects as a value-added business partner. Communicate effectively and relentlessly; report to senior management anticipated costs both at the start of projects and at appropriate intervals thereafter. Don’t forget to maintain a historical record of all costs.
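
To make that concrete, an application that cost $1 million to develop can be expected to consume another $400,000 to $600,000 in each year of its life; over a five-year life cycle, the ongoing costs alone can run to two or three times the original project budget.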

4. Understand IT cost estimation truths
How good an estimator of project costs are you? I’m sorry to disappoint you, but no matter how good you think you are, you’re not that good. None of us is; your crystal ball is just as cloudy as anyone else’s. This is the single biggest reason IT projects have such a high failure rate. Remember: The cost of IT initiatives will typically exceed original estimates by an average of 100 percent.
Institutional knowledge is lacking as to the results of major initiatives, the advice and counsel of IT is routinely omitted or ignored, and business process change relies too heavily on IT ownership of those business processes. How often have you been called upon to estimate, if not virtually guarantee, a project cost before the scope has been fully defined?
As an IT professional, whatever your role on a project, you must provide business managers with parameters for setting funding expectations and force those business managers to explain why their assumptions are valid. If you’re an IT manager, track all major development efforts throughout the enterprise and, regardless of your role, participate in the creation of a knowledge base of maintenance and support costs to drive future verifiable and credible estimation. Don’t underestimate the future costs of maintenance and support, and whatever you do, don’t make the classic cardinal error: Do not, under any circumstances, pad budgets in anticipation of an underestimation. Keep track of project costs as the project unfolds and communicate, immediately and vociferously, the instant you detect even the potential for an overrun.

5. Leverage current system investments
Applications, purchased software, networks, infrastructure, and any IT investment should all be regularly reviewed, at least on an annual basis, to ensure maximum value is being extracted and that original ROI goals are being met. Start with the original requirements and review them to ensure return on investment goals were delivered. Examine changes in the business and review new requests to determine whether they fit with the existing systems. Consider business reengineering. Review embedded processes to determine whether they’re consistent with new organizational models and make changes where necessary. Review vendor and product features, making sure they still fit within the organization. Enterprise architecture is organic; it’s not once and done. It changes over time. Keeping up with those changes allows for adjustments either at the periphery or by making modifications to existing components. This is an effective way to control overall costs.

6. Implement short-term cost cutting measures
Often we can control costs by putting in place tactical solutions. Short-term thinking can also be an effective tool in project cost estimation, in that it focuses us on the details. Getting from New York to Tokyo involves a fairly long flight, but we can’t forget that we still have to figure out how we’re going to get to the airport to begin with.
Try to postpone capital purchases as long as possible. This may not only provide time to negotiate better costs, but an idea for a less expensive solution may present itself after the project has begun. Always control project scope. Come to agreement as quickly as possible with business unit customers and sponsors as to the overall project scope, and put it in writing. Have an effective change management process for the inevitable “just one more thing” discussions; scope creep is the single biggest cause of cost overruns, and a change process limits it or postpones it until after project delivery.
Try to control human resource spending. There are only two reasons to use external consultants: to fill a knowledge gap (we don’t know how to do something) and to fill a resource gap (we have too few people to complete the project on time). Negotiate the best possible rates and, where possible, use fixed-price agreements rather than T&M (time and materials).

7. Implement long-term cost cutting measures
Be tactical, but don’t forget to be strategic at the same time. Make sure there’s an enterprise architecture; it’s hard to put the puzzle together when you have no picture on the front of the box to go by. Eliminate duplicate processes and systems, cutting unnecessary costs along the way. Reprioritize and rejustify all IT projects on a regular basis. Just because something made sense in January doesn’t mean it still does in August, so why waste the budget? And outsource selectively. These are the costs that typically are the most controllable yet too often lead to the highest cost overruns.

8. Implement pricing and chargeback mechanisms
I once worked for a CIO at a Fortune 500 company who decided an internal chargeback process was needed to make business units more accountable for technology costs. He successfully implemented the new approach and was credited with saving the corporation many millions of dollars. He was also fired, because this approach is the one most fraught with political peril.
Absent a chargeback mechanism, business units tend to look upon IT as a giant free toy store. Put one in place, and those same business units feel free to go outside for more competitive technology pricing, and IT loses control and becomes marginalized.
If your company is going to consider this, there are ways to achieve both goals: making the business units accountable and maintaining central technology architectural control. Internal IT must be competitive with external service providers. Periodic benchmarking exercises are key. Don’t underestimate the substantial resources needed to effectively administer chargeback mechanisms to ensure that business units have all the information they need and no one feels at a disadvantage. IT must have a clear understanding of all costs and manage the demand appropriately. Use client satisfaction surveys and service level agreements (a good idea no matter what the circumstances) and always show a balance between costs and benefits.
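
As a concrete illustration, here is a minimal Python sketch of a usage-based chargeback calculation, assuming shared IT costs are allocated in proportion to each unit's consumption. The cost figure, unit names, and usage numbers are all hypothetical; real chargeback schemes track many more cost drivers.

    # Allocate a shared IT cost pool to business units in proportion
    # to their measured usage (hypothetical numbers throughout).
    total_it_cost = 1_200_000  # annual shared IT cost, in dollars

    usage_hours = {  # e.g., server-hours consumed per business unit
        "Sales": 4_000,
        "Operations": 10_000,
        "Finance": 2_000,
    }

    total_usage = sum(usage_hours.values())
    for unit, hours in usage_hours.items():
        share = hours / total_usage
        print(f"{unit}: ${total_it_cost * share:,.0f} ({share:.0%} of usage)")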

9. Use governance to drive IT investment decisions
Too many organizations fly blind, with little synergy between IT and the business. In most organizations, IT is a discretionary expense center; there’s a fundamental framework (baseline costs again) but most, if not all, of what’s required beyond that isn’t necessarily mission critical.
Enlightened organizations understand that IT is a value-added strategic business partner, and a successful collaboration between IT and the business drives significantly increased stakeholder value. Establish, or if one exists become a participant in, a strategy council to examine enterprise-level issues of strategy, politics, priorities, and funding. Set up a business council to define priorities, oversee projects, and measure (and communicate) project success across business units. This group must, of course, have the courage to cancel projects when that becomes necessary; not everything that starts must finish. Put together a technical council to develop guidelines and principles for technology standards and practices. These are three very different organizational constructs, and while there may be some overlap in terms of participation, the mission of each is mutually exclusive.

10. Quantify the value/benefit proposition for IT investments
Why do we do what we do? That’s not an existential or rhetorical question. IT exists to provide value, to participate in the achievement of organizational strategic goals. How can we prove we’ve done so? Just because we’ve built a thing, that doesn’t mean much. Does the thing work? Does the thing provide value? Is that value measurable and consistent with the corporate mission?
Some quantifiable benefits of IT work can be improved operating efficiencies, enhanced personal productivity, enhanced decision quality, and/or enabling or supporting organizational strategic initiatives. What’s most critical is to ensure the credibility of any measurements used to justify IT investments and provide after-the-fact valuations. You may be working on a project that will reduce a process from five person-days' worth of work to two. Does that mean three people are going to be fired, with the resulting compensation cost saving attributable to your project? Probably not. Those folks will most likely be reassigned, so don’t take credit for expense reductions that aren’t going to happen.

courtesy @TechRepublic

Wednesday, October 31, 2007

Microsoft Sees The Future Of Software In Modeling

Microsoft officials assert that software modeling is not just for the most expensive tools on the market, but for "the average developer" as well.

Microsoft is about to move into the software modeling market previously occupied by the likes of IBM Rational tools and Telelogic's automotive, vertical industry modeling tools.

In a series of announcements slated for Tuesday on its Redmond, Wash., campus, Microsoft officials will assert that software modeling is not just for the most expensive tools on the market but for "the average developer" as well.

It might sound prosaic -- software designers have been drawing diagrams of the programs they planned to build for many years. But modern modeling techniques allow code to be generated from the symbols and syntax of the model. Furthermore, it's a two-way street: If a change is made to the model, it's reflected in the code; if a change is made to the code, it's reflected in the model.

That makes for a great deal more visibility into the software on which businesses depend. And it captures and highlights changes when things go wrong. In many cases, it leads to greater reliability in the software.
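
To make the generation half of that concrete, here is a toy Python sketch of code generated from a declarative model. It illustrates the general technique only, not Microsoft's actual tooling; the entity name and fields are invented for the example.

    # Toy model-driven code generation: emit a class definition from a
    # declarative entity model. Real modeling tools also sync changes
    # in the opposite direction, from code back into the model.
    model = {
        "entity": "Customer",
        "fields": ["name", "email", "credit_limit"],
    }

    def generate_class(model):
        """Return Python source for a class matching the entity model."""
        lines = [f"class {model['entity']}:"]
        params = ", ".join(model["fields"])
        lines.append(f"    def __init__(self, {params}):")
        for field in model["fields"]:
            lines.append(f"        self.{field} = {field}")
        return "\n".join(lines)

    print(generate_class(model))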

"This modeling capability will surface in the products that people know and use today," including Visual Studio development tools, Biztalk Studio business process development tools, and the .Net framework, said Steven Martin, director of product management for the Connected Systems Division, in an interview.

Sophisticated modeling in the past has been associated with the Unified Modeling Language models produced by Borland's former Togethersoft tools, now part of a new independent Borland business unit, CodeGear. UML modeling has also been the hallmark of Compuware, Telelogic, and IBM Rational tools.

"In the past a very select group of users has used modeling. Microsoft is going to make modeling mainstream for the average developer," said Martin.

The details of the new capabilities will be provided in an opening keynote by Robert Wahbe, VP of the Connected Systems Division, and Don Ferguson, Microsoft technical fellow.

BizTalk Version 6.0 for SOA, Visual Studio 10, and System Center 5.0 will all eventually be equipped with modeling capabilities. "We want to get rid of terms like 'import' and 'export.' We want to have a unified approach to modeling," Martin added.

If that vision is achieved, a model of requirements for a new application could be handed off to a software architect, who would diagram out a system. That model would move to a developer, who would generate code from the diagram, filling in custom parts. The model would also accompany the new application into production, becoming the document that illustrates how the software works and recording any changes to the code.

"We won't be satisfied until two people in different organizations can work on an application separately and deploy it either to the Web or locally in their own organizations," said Martin.

Microsoft plans to start delivering beta versions of its software with the modeling capabilities in 2008. No date was set for when the capabilities would be merged into the existing product lines.

courtesy @informationweek.com

Yahoo Messenger deepens social networking features

Yahoo releases beta of its instant messaging service that accentuates the social networking and interactive capabilities of Yahoo Messenger

Yahoo, disconcertingly unable to develop a popular social networking site, will try to accentuate the social networking capabilities of Yahoo Messenger when it releases a beta upgrade of its widely used instant messaging service on Tuesday.


Among its new features, Yahoo Messenger 9.0 will allow people to invite friends to watch videos or flip through photo albums in real time, sharing those activities as if they were sitting side by side in one's living room.

Made possible by what Yahoo calls an "in-line media player," this feature is one of several new ones designed to boost interactions among Messenger users.

Another such feature is a redesigned "friends" list that provides more space for each contact entry and makes it easier to establish an IM, voice, or SMS link with another user.

Now the question is whether Yahoo plans to leverage the more than 94 million people who use Messenger and build a social network around that base, giving the company, finally, an answer to MySpace and Facebook.

While she wouldn't provide a direct answer to this question, Sabrina Ellis, vice president of Yahoo Messenger, acknowledged that the service has an inherent social networking component.

"In some ways, if you look at it, Yahoo Messenger is a social network in that people have actually defined who their friends are and these are people they communicate with," Ellis said.

The new features, such as the ability to share and watch videos virtually together, will let people deepen their interactions on Yahoo Messenger beyond what has been possible so far, she said.

"These elements will really help people develop relationships and expand some of their friendships," Ellis said.

According to comScore, last month Yahoo Messenger had 94.3 million unique users, up almost 30 percent from September 2006 and second worldwide to Microsoft's Windows Live Messenger with almost 227 million.

Still, Yahoo hasn't been able to find its groove in social networking. It recently announced it will phase out its social networking site Yahoo 360 and migrate its features and content to a "universal profile system" that will more closely tie Yahoo's various online services together.

Asked how Yahoo Messenger will fit within this integrated, centralized platform, Ellis said the IM service will be part of that effort, along with Yahoo Mail, Yahoo Answers, and other services.

"We're committed to making sure that all of our users can benefit from all different Yahoo services," she said.

Other new features in the beta version of Yahoo Messenger 9.0 include the following:

-- Localized versions for the Philippines, Indonesia, Malaysia, Thailand, India (in Hindi), and Vietnam

-- The ability to transfer an unlimited number of files of up to 2GB each

-- Call forwarding, to send calls from Yahoo Messenger to a mobile phone or landline as a voicemail

-- The ability to share photo sets from Yahoo's Flickr service in real time

courtesy @infoworld.com

Google Vs. Zoho: Can Either Replace Microsoft Office?

While Microsoft Office currently rules as the king of the office suites, there are at least two online contenders for the crown. Could Google or Zoho truly compete with Office? We try them both out.

The gold standard for office productivity has become Microsoft Office -- a suite of applications used by most of us in our day-to-day business and personal activities. While there have been a number of commercial (Corel WordPerfect Office) and free (OpenOffice.org) alternatives available, it's the new online applications that have been causing the most talk -- and, possibly, offering the most promise.

Until recently, the idea of online applications replacing locally installed software was, to say the least, impractical. In fact, before a majority of computer users were on broadband connections, it would have been completely useless: if you're only online a few hours a day, you can't very well confine your word processing and spreadsheet activity to those hours.

That has changed in the last few years. Most of us are online most of the time -- certainly, we have continuous access to the Internet at work and at home. As a result, using an online word processor or calendar app sounds a lot less ridiculous than it did before. And there are some things current software applications do rather badly (such as sharing files for collaborative work) that online apps are a lot better at.

The idea of committing to a Web connection for your basic tasks is still a tricky one, though. While the Internet may be ubiquitous in homes and offices, getting online from your commuter train or while you wait for your kid to finish dance class is problematic at best. In addition, glitches in broadband service, especially in remote areas that depend on satellite service, are common enough that the likelihood of even temporary loss of access to a word processor or spreadsheet can make many of us a bit nervous. But if you're willing to take the risk, two Web services have taken the lead in offering online applications that have the potential to, one day, knock Microsoft Office off its pedestal.

But can Google and/or Zoho really challenge something as entrenched in the marketplace as Microsoft Office? In the following pages, we compare each of these online contenders to the leader of the pack by matching them up to six of Microsoft Office's applications: Word, Excel, Outlook, PowerPoint, OneNote, and Access. (Note: Google currently offers no database application comparable to Access.) How do Google and Zoho rate? Is it time to switch, or are the two online services still second-raters when compared to Microsoft's established frontrunners? Read on, and see what you think.

Introducing: Google

Google, which has joined Microsoft and Apple as a contender for "tech company most likely to take over the world," has been slowly buying up interesting online applications and integrating them into its own line of advertising-supported products. It has accumulated a wide range of applications: word processing, e-mail, photo album, simple Web site developer, blogging application, and so on.


However, while there is a great deal of value in the variety, there is little to no attempt to organize them into a cohesive whole. The nearest that Google comes to this is in its Google Docs application, which combines a word processor, spreadsheet app, and presentation package. Calendar and Gmail, apps you'd normally expect to be part of a productivity suite, are totally separate. You can use Google's home page, iGoogle, to organize some of these onto the same page, but it's not quite as efficient a method as that used by, yes, Microsoft.

Introducing: Zoho

Zoho's motto is "Work. Online" and its aim is to provide you with portable replacements for many of the programs you expect to find installed on a desktop PC. The analogy the folks at Zoho use is a desk phone vs. a mobile phone: the fact that you can take your cell phone nearly anywhere (as long as there's service) gives it possibilities a regular phone doesn't have.

Despite Zoho being new to the game, it's been adding applications and features to its online office suite with persistent regularity; for example, it recently added e-mail to its feature set. Zoho even has some applications, such as its Creator database, that many hard drive-based packages do not.

There's no question about it: Zoho is obviously serious in its bid to offer people at least some of Office's functionality without the price, and with the added bonus of being able to work anywhere that there's a Web browser and an Internet connection.

read more here @informationweek.com

Handling office politics from the outside

One of the reasons I became an independent consultant was to get away from the various interpersonal conflicts that recur in an office setting. Simply by putting my stakes down off the org chart, I avoid all manner of intrigue over power plays, office gossip, discontent with policies and procedures, and a whole host of other miscellaneous useless discussions that my regularly employed colleagues have to deal with on a regular basis. Even if I have an opinion on one of these topics, I think it prudent to remain silent unless asked to contribute — and even then to try to maintain as neutral a tone as possible.

But sometimes it’s not quite so easy to divide between “office politics” and the portion of my clients’ business strategies for which I bear some responsibility. The very fact that humans are involved means that many business problems cannot be separated from social problems. No matter how emotional the discussions, if the decisions that come out of them could impact the quality of the solution I’m attempting to help my client achieve, then I’m duty-bound to participate.

Some things to remember when jumping into the fray:

1. Business first. My client is in business to make money, not to run a social club, a forum, or a charity for its employees. My suggestions need to focus on what promotes business success. Employee satisfaction can be a big component of that, but it’s secondary.
2. Who’s your Daddy/Momma? Maybe most of the time my client’s company is one big happy family, but if it comes down to conflicts between various powerful people in the organization, then I must remain loyal to the person who controls my contract. If I disagree with them on an issue that has become inflammatory, then I’ll try to reason with them privately instead of adding fuel to the fire.
3. Be friendly to all sides of the argument. Don’t make things worse by acting in a partisan manner. Try to reconcile the different viewpoints, without belittling the importance of any argument. Being an outsider may help with the perception of disinterested objectivity.
4. Just add humor. Many times strong emotions can be defused by revealing that the topic shouldn’t be taken too seriously. But sometimes that can backfire, so I have to choose my lines well. Best not to joke about the main source of contention, but rather about some aspect of it that everyone can find absurd.

I’d like to be more specific about the incident that sparked this post, but then I’d be violating rule #5:

5. Don’t blog about it. In fact, don’t discuss it with anyone outside the organization. I signed an NDA with my client, which means that they let me in on internal affairs that they don’t expect me to share. It’s not a matter of freedom of speech, it’s a matter of trust.


courtesy @TechRepublic

Three ways project managers give in to threat

We live in a corporate environment (and indeed a social one) where the fear of personal, financial, or political harm counts for more than the honest calculation of risk. Economists are slowly starting to catch on to this with their calculations of “perceived risk”; priests, theologians, and mothers everywhere probably realized it somewhere around the time humans started to speak. As project managers, we have to deal not only with the threats we face ourselves, but also the threats an organization’s management reacts to and the threats that press down upon our project resources.

All of these threats wear down our enthusiasm and determination. They also prevent us from doing what’s right rather than what will shield us from fear. We worry more about doing what our bosses want, what our executives might want, and whatever will keep people’s eyes off our project than about honestly trying to get the job done on time, under budget, and within the requirements.

In our role as project managers, we give in to threat in a variety of ways. The most prominent is to misreport data about misalignments in the project's resources, time, or functional requests. We can also obscure the work we do behind layers and layers of data, hide issues as they occur, and place ourselves in a situation where we control all of the inflowing information, thereby shaping our perception of reality until it no longer matches what is really going on.

Misreporting happens all the time. Most of us can do simple calculations or look at a calendar and tell when things fall behind. We know from personal experience that we do not have enough time or that our resources are stretched too thin. Yet we say nothing. We become afraid that the project will be canceled or our contracts terminated if we do not do exactly what we are told when we are told it. It happens often enough, in fact, that the threat has some teeth in it.

Another failure, one that I've fallen into from time to time myself, involves generating too much reporting data. On one hand, it looks like we've accomplished something: hundreds of reports showing that we've done something with our time certainly look better than an entire day spent on the phone talking with people and having nothing tangible to show for it. Just as importantly, the volume of information shared ensures that we can control the conversations around it. We direct people to the "important" parts and subtly shade the rest to support what we want to say.

Hiding issues is another favorite failing of mine. It’s easy to talk about how all issues should be escalated to management. It’s simple to say “Why yes, we should have talked about this earlier”. But we also know that people want projects to be quiet and perfect. Perturbations along the project path do nothing to endear a project manager to his managers. Indeed, they are usually seen as failures on the part of the project manager, and can become grounds for dismissal when the inevitable witch-hunt begins.

The last failing is probably the most insidious. As project managers it is our responsibility to report project status. When people want answers about a project, they come to us. It’s easy to make all information flow up to us first, for vetting and perception management, before it reaches anyone else. Logical, even. Unfortunately this kind of activity actively leads those who report to us to taint the truth. They tell us what we want to hear or what they think will make us happy rather than what we really need to know. We then pass on the bad information in total confidence, now unaware of events in our own project environment.

I don’t have any easy answers for dealing with threat. We don’t talk much about the importance of feelings in business or how vital a role our emotions play in our decision making processes.

Find errors as early as possible on your project

Project teams generally use three types of quality management activities:

* Quality planning
* Quality control
* Quality assurance

The purpose of quality assurance is to prevent as many errors as possible by having sound processes in place to begin with. The purpose of quality control is to inspect or test the deliverables to find as many remaining errors as possible.

One important aspect of quality control is finding errors and defects as early in the project as possible. A good quality control process will therefore take more effort hours and cost up front, but there will be a large payback as the project progresses.

For instance, it's much better to spot problems with business requirements during the analysis phase of the project rather than during testing. If you catch the problem during requirements gathering, it might take just a call to your client and a quick update of a Word document to fix it. If you discover the same problem in the testing phase, it could impact the business requirements, the solution design, and some of the construction work, and it will also require you to retest the solution. As you can see, this is a potentially huge impact on your project.

Likewise, if you were manufacturing a computer chip, it would be much cheaper to find a problem when the chip is manufactured than to replace the chip after a customer brings the computer in for service. In fact, if the error isn't caught until after the chip is sold, the repair might cost more than it cost to manufacture the entire product in the first place.

There was a project early in my career that applied poor shortcuts to the development process. Since the programmers were under time pressure to complete their modules, they figured they would write the code and make sure it compiled cleanly. They would then call the module complete, with the attitude that they could "fix it in the testing phase." They thought they were just deferring their unit-test time until later in the project. But by pushing these errors further downstream, it actually took much longer to fix the problems later - to the detriment of the overall project.

In Figure A, we see a traditional approach to finding errors.

Figure A: On many projects the team plans to find as many errors as possible during the testing process, with some errors not caught until support/maintenance.

In Figure B we see the better approach: catch any errors that are introduced as quickly as possible. In other words, errors in the deliverables created in the Analysis Phase should be caught in the Analysis Phase; errors introduced in the Design Phase should be caught during the Design Phase, and so on. This greatly minimizes the impact of correcting the errors.

Figure B: Errors are caught in the same phase in which they are introduced.

The bottom line is that the project team should try to maintain high quality and low defects during the deliverable creation processes, rather than hope to catch and fix problems during the testing phase at the end of the project (or worse, have the client find the problem after the project has been completed).

courtesy @TechRepublic

10 common Web design mistakes to watch out for

When you start designing a Web site, your options are wide open. Yet all that potential can lead to problems that may cause your Web site to fall short of your goals. Whether you're building a commercial Web site, a personal or hobby site, or a professional nonprofit site, you'll want to keep these issues in mind.

1. Failing to provide information that describes your Web site
Every Web site should be very clear and forthcoming about its purpose. Either include a brief descriptive blurb on the home page of your Web site or provide an About Us (or equivalent) page with a prominent and obvious link from the home page that describes your Web site and its value to the people visiting it.
It's even important to explain why some people may not find it useful, providing enough information so that they won't be confused about the Web site's purpose. It's better to send away someone uninterested in what you have to offer with a clear idea of why he or she isn't interested than to trick visitors into wasting time finding this out without your help. After all, a good experience with a Web site that is not useful is more likely to get you customers by word of mouth than a Web site that is obscure and difficult to understand.

2. Skipping alt and title attributes
Always make use of the alt and title attributes for every XHTML element on your Web site that supports them. These attributes are critical for accessibility when the site is visited with browsers that don't display images, and they provide supplementary information when the main content alone isn't enough.
The most common reason for this need is accessibility for the disabled, such as blind visitors who use screen readers to surf the Web. Just make sure you don't include too much text in the alt or title attribute -- the text should be short, clear, and to the point. You don't want to inundate your visitors with paragraph after paragraph of useless, vague information in numerous pop-up messages. The purpose of alt and title attributes is, in general, to enhance accessibility.
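As a quick illustration, here is a hypothetical snippet (file names and wording invented for the example) showing both attributes in use:

<img src="charts/q3-sales.png"
     alt="Bar chart of Q3 sales by region"
     title="Q3 sales: the Northeast region leads" />

<a href="archive.html" title="Browse articles from previous months">Archive</a>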

3. Changing URLs for archived pages
All too often, Web sites change the URLs of pages when they become outdated and move off the main page into archives. This can make it extremely difficult to build up good search engine placement, because links to pages of your Web site become broken. When you first create your site, structure it so that you can move content into archives without changing the URL, as illustrated below. Popularity on the Web is built on word of mouth, and you won't be getting any of that publicity if your page URLs change every few days.
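A hypothetical illustration of the difference (all URLs invented for the example):

Unstable: the article's address changes when it leaves the home page
    http://example.com/index.html              (this week)
    http://example.com/archive/page7.html      (next month)

Stable: the permalink is permanent from the day it is published
    http://example.com/articles/2007/11/keyword-resumes.html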

4. Not dating your content
In general, you must update content if you want return visitors. People come back only if there's something new to see. This content needs to be dated, so that your Web site's visitors know what is new and in what order it appeared. Even in the rare case that Web site content does not change regularly, it will almost certainly change from time to time -- if only because a page needs to be edited now and then to reflect new information.
Help your readers determine what information might be out of date by date stamping all the content on your Web site somehow, even if you only add "last modified on" fine print at the bottom of every content page. This not only helps your Web site's visitors, but it also helps you: The more readers understand that any inconsistencies between what you've said and what they read elsewhere are a result of changing information, the more likely they are to grant your words value and come back to read more.
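The fine-print approach can be as simple as this hypothetical snippet at the bottom of each content page (the class name is invented):

<p class="finePrint">Last modified on November 3, 2007</p>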

5. Creating busy, crowded pages
Including too much information in one location can drive visitors away. The common-sense tendency is to be as informative as possible, but you should avoid providing too much of a good thing. When excessive information is provided, readers get tired of reading it after a while and start skimming. When that gets old, they stop reading altogether.
Keep your initial points short and relevant, in bite-size chunks, with links to more in-depth information when necessary. Bulleted lists are an excellent means of breaking up information into sections that are easily digested and will not drive away visitors to your Web site. The same principles apply to lists of links -- too many links in one place become little more than line noise and static. Keep your lists of links short and well organized so that readers can find exactly what they need with little effort. Visitors will find more value in your Web site when you help them find what they want and make it as easily digestible as possible.

6. Going overboard with images
With the exception of banners and other necessary branding, decorative images should be used as little as possible. Use images to illustrate content when it is helpful to the reader, and use images when they themselves are the content you want to provide. Do not strew images over the Web site just to pretty it up or you'll find yourself driving away visitors. Populate your Web site with useful images, not decorative ones, and even those should not be too numerous. Images load slowly, get in the way of the text your readers seek, and are not visible in some browsers or with screen readers. Text, on the other hand, is universal.

7. Implementing link indirection, interception, or redirection
Never prevent other Web sites from linking directly to your content. There are far too many major content providers who violate this rule, such as news Web sites that redirect links to specific articles so that visitors always end up at the home page. This sort of heavy-handed treatment of incoming visitors, forcing them to the home page of the Web site as if they can force visitors to be interested in the rest of the content on the site, just drives people away in frustration. When they have difficulty finding an article, your visitors may give up and go elsewhere for information. Perhaps worse, incoming links improve your search engine placement dramatically -- and by making incoming links fail to work properly, you discourage others from linking to your site. Never discourage other Web sites from linking to yours.

8. Making new content difficult to recognize or find
In #4, we mentioned keeping content fresh and dating it accordingly. Here's another consideration: Any Web site whose content changes regularly should make the changes easily available to visitors. New content today should not end up in the same archive as material from three years ago tomorrow, especially with no way to tell the difference.
New content should stay fresh and new long enough for your readers to get some value from it. This can be aided by categorizing it, if you have a Web site whose content is updated very quickly (like Slashdot). By breaking up new items into categories, you can ensure that readers will still find relatively new material easily within specific areas of interest. Effective search functionality and good Web site organization can also help readers find information they've seen before and want to find again. Help them do that as much as possible.

9. Displaying thumbnails that are too small to be helpful
When providing image galleries with large numbers of images, linking to them from lists of thumbnails is a common tactic. Thumbnail images are intended to give the viewer an idea of what the main image looks like, so it's important to avoid making them too small.
It's also important to produce scaled-down and/or cropped versions of your main images, rather than to use XHTML and CSS to resize the images. When images are resized using markup, the larger image size is still being sent to the client system -- to the visitor's browser. When loading a page full of thumbnails that are actually full-size images resized by markup and stylesheets, a browser uses a lot of processor and memory resources. This can lead to browser crashes and other problems or, at the very least, cause extremely slow load times. Slow load times cause Web site visitors to go elsewhere. Browser crashes are even more effective at driving visitors away.
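To make the distinction concrete, here is a hypothetical example (file names invented); the first form forces the browser to download the full-size file, while the second serves a genuinely smaller one:

<!-- Risky: the full-size file is downloaded, then squeezed by markup -->
<img src="photos/sunset.jpg" width="100" height="75" alt="Sunset (thumbnail)" />

<!-- Better: a real scaled-down file that links to the original -->
<a href="photos/sunset.jpg">
  <img src="photos/thumbs/sunset.jpg" width="100" height="75"
       alt="Sunset (thumbnail)" />
</a>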

10. Forgoing Web page titles
Many Web designers don't set the title of their Web pages. This is obviously a mistake, if only because search engines identify your Web site by page titles in the results they display, and saving a Web page in your browser's bookmarks uses the page title for the bookmark name by default.
A less obvious mistake is the tendency of Web designers to use the same title for every page of the site. It would be far more advantageous to provide a title for every page that identifies not only the Web site, but the specific page. Of course, the title should still be short and succinct. A Web page title that is too long is almost as bad as no Web page title at all.
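For example (a hypothetical site name):

<!-- Weak: the same title on every page -->
<title>Example Widgets</title>

<!-- Better: the site plus the specific page, still short -->
<title>Example Widgets - Pricing and Volume Discounts</title>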

These considerations for Web design are important, but they're often overlooked or mishandled. A couple of minor failures can be overcome by successes in other areas, but it never pays to shoot yourself in the foot just because you have another foot to use. Enhance your Web site's chances of success by keeping these design principles in mind.

Happy Web Development....


courtesy @TechRepublic

Web Site Development Process - The life-cycle steps

A system development process can follow a number of standard or company-specific frameworks, methodologies, modeling tools, and languages. The software development life cycle normally comes with standards that can meet the needs of any development team. Like software, web sites can be developed with these methods, with some changes and additions to the existing software development process. Let us look at the steps involved in web site development.

1. Analysis:
Once a customer starts discussing his requirements, the team gets into preliminary requirement analysis. Since the web site is going to be part of a larger system, it needs a complete analysis of how the web site or web-based application will help the present system and how it will help the business. The analysis should also cover how the web site will integrate with the existing system. The first important task is identifying the target audience. Then all the present hardware, software, people, and data should be considered. For example, if a company XYZ Corp needs a web site to put its human resource details online, the analysis team may try to utilize the existing employee data from the present database. The analysis should be thorough without being too time-consuming or too thin on information. The team should be able to come up with a complete cost-benefit analysis, and since the project plan will be an output of the analysis, it should be realistic; to achieve this, the analyst should consult the designers, developers, and testers.

Input: Interviews with the client, mails and supporting documents from the client, discussion notes, online chats, recorded telephone conversations, model sites/applications, etc.
Output: 1. Work plan, 2. Cost estimate, 3. Team requirements, 4. Hardware-software requirements, 5. Supporting documents, and 6. Approval

2. Specification Building:
Preliminary specifications are drawn up covering each element of the requirement. For example, if the product is a web site, then the modules of the site, including the general layout, site navigation, and dynamic parts, should be included in the spec. Larger projects will require further levels of consultation to assess additional business and technical requirements. After the preliminary document is reviewed and approved, a written proposal is prepared, outlining the scope of the project including responsibilities, timelines, and costs.

Input: Reports from the analysis team
Output: Complete requirement specifications for the team members and the customer/customer's representative

3. Design and development:
After the specification is built, work on the web site is scheduled upon receipt of the signed proposal, a deposit, and any written content materials and graphics the customer wishes to include. Normally, the layouts and navigation are designed first as a prototype.

Some customers may be interested only in a fully functional prototype. In this case, we may need to show them the interactivity of the application or site. But in most cases, the customer will be interested in viewing two or three designs with all images and navigation.

There can be many suggestions and changes from the customer's side, and all changes should be frozen before moving into the next phase. The revisions can then be redisplayed via the web for the customer to view.

As needed, customer comments, feedback and approvals can be communicated by e-mail, fax and telephone.
Throughout the design phase the team should develop test plans and procedures for quality assurance. It is necessary to obtain client approval on design and project plans.
In parallel, the database team will study the requirements and develop the database, preparing all the data structures along with sample data.

Input: Requirement specification
Output: Site design with templates, Images and prototype



4. Content writing:
This phase is necessary mainly for web sites. Professional content developers can write industry-specific, relevant content for the site, and content writers can use the design templates to add their text. Grammar and spelling checks should be completed in this phase.

Input: Designed template
Output: Site with formatted content

5. Coding:
Now it's the programmer's turn to add code without disturbing the design. Unlike traditional development, the developer must know the interface, and the code should not disturb the look and feel of the site or application, so the developer should understand the design and navigation. If the site is dynamic, the code should utilize the template. The developer may need to interact with the designer in order to understand the design, and the designer may need to develop graphic buttons whenever the developer needs them, especially for form buttons. If a team of developers is working, they should use a version control system such as CVS to manage their source code. The coding team should generate the necessary test plans as well as technical documentation; for example, Java developers can use JavaDoc to document their code flow. The end-user documentation can also be prepared by the coding team, to be turned into help files and manuals later by a technical writer.

Input: The site with forms and the requirement specification
Output: Database driven functions with the site, Coding documents

6. Testing:

Unlike desktop software, web-based applications need intensive testing, as they always function as multi-user systems with bandwidth limitations. Some of the testing that should be done includes integration testing, stress testing, scalability testing, load testing, resolution testing, and cross-browser compatibility testing. Both automated and manual testing should be done without fail. For example, it's necessary to test for fast-loading graphics and to calculate their loading times, as they are very important for any web site. There are testing tools, as well as some online testing tools, that can help testers test their applications. For example, ASP developers can use Microsoft's Web Application Test Tool, a free download from the Microsoft site, to test ASP applications.

After all this testing is done, live testing is necessary for web sites and web-based applications. After uploading the site, there should be a complete round of testing (e.g., a link check).
Input: The site, Requirement specifications, supporting documents, technical specifications and technical documents
Output: Completed application/site, testing reports, error logs, frequent interaction with the developers and designers

7. Promotion:
This phase is applicable only to web sites. Promotion involves preparing meta tags, performing ongoing analysis, and submitting the URL to search engines and directories. Site promotion is normally an ongoing process, as search engine strategies may change quite often. Submitting the site's URL once every two months can be an ideal submission policy. If the customer is willing, paid-click and paid-submission campaigns can also be run at additional cost.

Input: Site with content, Client mails mentioning the competitors
Output: Site submission with necessary meta tag preparation

8. Maintenance and Updating:
Web sites need frequent updates to stay fresh. In that case, we need to do the analysis again, and all the other life-cycle steps will repeat. Bug fixes can be done during maintenance. Once your web site is operational, ongoing promotion, technical maintenance, content management and updating, site-visit activity reports, and staff training and mentoring are needed on a regular basis, depending on the complexity of your web site and the needs within your organization.

Input: Site/Application, content/functions to be updated, re-Analysis reports
Output: Updated application, supporting documents to other life cycle steps and teams.

The above-mentioned steps are not strict rules for web application or web site development; some steps may not apply to certain tasks. It depends on the cost and time involved and on necessity. For example, if it is an intranet site, there will be no site promotion. But even if you are a small development firm, if you adopt some planning along with these web engineering steps, it will definitely be reflected in the quality of the outcome.

courtesy @macronimous.com

Monday, October 29, 2007

Are you ready for AJAX risks?

AJAX has at least three main areas of risk: technical, cultural/political, and marketing risks:

Technical - These are issues that directly relate to the design, development, and maintenance of software, including security, browser capabilities, timeline, cost of development and hardware, skills of the developers, and other things of that nature.

Cultural/Political - These are fuzzy issues that center on the experience of end users, their attitudes and expectations, and how all this relates to software.

Marketing - These are issues that relate to successful execution of the business model, resulting in sales, donations, brand recognition, new account registrations, and so on.

These issues are all related, and you can easily bundle them into completely different groups depending on the frame of reference. What's important is to categorize risk into levels of severity for your project and use that as a driver for decision making.

Technical Risks


Technical risk, unlike other kinds of risk, can actually result in a project not being completed. These sources of risk must be of prime importance when evaluating third-party frameworks for building AJAX applications because of the lack of technical control. Some studies have shown that 50 percent of enterprise software projects never go into production (Robbins-Gioia Survey, 2001). Following are some of the reasons why.

Reach

Sometimes, when writing software for large groups of people, we need to build for the lowest common denominator; that is, we need to build so that the individuals with the most out-of-date, inferior hardware and software can still access the application. The general public tends to use a lot of different client browsers and operating systems. We're stating the obvious here, but it's important for Web applications to be compatible with the browsers our users want to use, or we risk not delivering the software to them at all. Whether a ~1 percent market share for Opera is worth paying attention to is something that must be decided deliberately; at a minimum, software must be tested rigorously on a representative sample of these platforms so that we know what our reach is. This is an example of a technical risk, and this reach/richness trade-off is probably the biggest everyday problem with the Web.


The basic problem with Web applications is that different browsers interpret pages differently. Although this much is obvious, what isn't known is what challenges will be faced as we begin to "push the envelope." What's easy to do in Firefox might end up being ridiculously hard in Internet Explorer. The risk lies in successful execution of the project requirements while reaching all our target browsers and operating systems.
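To make this concrete, here is a minimal sketch (not from the original text) of the kind of branching these differences force on developers; even creating the XMLHttpRequest object itself has historically varied by browser:

// Create an XMLHttpRequest object across browsers (circa 2007).
function createXHR() {
    if (window.XMLHttpRequest) {
        return new XMLHttpRequest();                   // Firefox, Safari, Opera, IE7
    } else if (window.ActiveXObject) {
        return new ActiveXObject("Microsoft.XMLHTTP"); // IE5 and IE6
    }
    return null;                                       // no AJAX support at all
}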

Research firm In-Stat/MDR predicts mobile workers in the United States alone will reach 103 million by 2008, and that the following year the number of worldwide mobile workers will reach 878 million. This means an ever-increasing number of workers will be accessing corporate Web applications from outside the workplace, resulting in a loss of control over the software - especially over the users' Web browsers.

There is a general trade-off between the level of richness in an application and the number of people that can use that application (because of client platform incompatibility). The seriousness of this risk is determined by several factors:

• Whether the application is public versus private (behind the firewall). Public applications have an inherently more heterogeneous audience. Enterprise applications often have an advantage in that it's easier to tell corporate users to stick to one or two browsers than the general public.

• The breakdown of preferred browsers and operating systems of the target audience, that is, how many employees or customers use Safari Mac versus Firefox Mac versus Firefox PC versus Internet Explorer?

• The potential marketing impact of being incompatible with a segment of users. A good question to ask is, "How many people will we lose if we can't support Safari, and is that acceptable from a public relations point of view and cost-benefit point of view?"

• The degree to which users are willing to adapt their use of browser or operating system.

Over time, this trade-off has skewed in favor of richness. There is a tacit understanding among browser vendors that they need to provide a comparable level of JavaScript, DHTML, XML, and XMLHttpRequest functionality to be competitive, and generally speaking, there is a way to write AJAX-powered software that works on all the major browsers. Mozilla, which is cross-platform, tries to ensure that things work the same whether they're running on Linux, MacOS, or Windows. Safari has been playing catch-up with Mozilla, as has Opera, but every quarter new features are announced for upcoming versions of those products, and the great browser convergence continues. As these browsers continue to mature, it becomes easier to write rich applications that work across them all. An example of this is the recent introduction of XSLT support in Safari, making it possible to deliver XML-driven applications across all major browsers.

Browser Capabilities

So much going on in the world of AJAX is uncharted territory right now. It seems that browser vendors are just beginning to understand what developers want from them, and glaring bugs and omissions sometimes create unexpected roadblocks when building cross-platform solutions. Some notable examples are the long-standing absence of XSLT in Opera and Safari and anchor-tag bookmarking problems in Safari. Internet Explorer 6 and 7 have glaring bugs in positioning of DHTML elements that require sometimes complex workarounds. Some techniques that work well in Internet Explorer can be prohibitively slow in Firefox (particularly relating to XSLT).

The risk is that developing a feature can take an unpredictable length of time or reveal itself to be basically impossible. Clearly, there is still a limit to the degree that the browser can mimic true desktop-like software, and where the boundaries lie precisely is still being explored. So often, AJAX development becomes a process of creative workarounds. Developers find themselves going down one road to solve a problem, realizing it's not going to work, and having to back up and look for a new one.

Maintenance

JavaScript, DHTML, and CSS code have a tendency to become complex and difficult to maintain. One difficulty is that a lot of developers do not use a good IDE to write and test their code. Another difficulty is the need to employ tricky optimization techniques in script for performance considerations. These factors contribute to spaghetti code (code with a disorganized and tangled control structure) and higher long-term maintenance costs than applications written in a traditional architecture that rely more on server-side processing. The risk centers on quickly and adequately maintaining applications over time in a changing technological environment.

Maintenance risk is aggravated by the way browser vendors arbitrarily change the way the browser works and interprets CSS and JavaScript. On occasion, Microsoft or Mozilla will "pull the rug out" from a particular technique or approach by closing a security hole or "fixing" a CSS problem. An example of this is Mozilla and access to the clipboard, which has changed at least once. Another is changes to the DHTML box model in Internet Explorer 7. As Microsoft approaches a more standards-compliant CSS implementation, it will break many of the Web applications that were built to work on an older, buggier model.

The risk is that enterprises must react quickly and frequently to address sudden, unexpected and costly maintenance duties because of changes in the browser, which can be exacerbated by hard-to-maintain spaghetti code.

Forward Compatibility

Forward compatibility is related to maintenance risk. As new browsers and operating systems arrive on the scene, parts of AJAX applications might need to be rewritten to accommodate changes in the layout engine, CSS interpreter, and underlying mechanisms of JavaScript, XMLHttp, and DHTML. In the past, early-stage browsers such as Opera and Safari arbitrarily changed the way CSS positions elements on a page, and IE7 has done this again. This is a risk because developers need to be one step ahead of all possible changes coming from new browsers that would affect the user experience. It hurts cost containment because it's inherently unpredictable, whereas backward-compatibility work can be tested and more accurately estimated. It's important to note, however, that public betas are always available for new versions of browsers.

Firefox 3.0

Right on the heels of Firefox 2.0 is the upcoming Firefox 3.0 release, slated potentially for Q4 2007. Version 3 will likely be more of an upgrade than a completely new iteration. Mozilla is considering 50 new possible features, including upgrades to the core browser technology, improved add-on management and installation, a new graphical interface for application integration, enhanced printing functionality, private browsing capability, and a revised password manager.

For developers, Firefox 3.0 will mean more in terms of Web standards compatibility and accessibility. One goal is to pass the ACID2 Web standards HTML and CSS rendering test, which implies changes to the browser's core rendering engine. Compliance for CSS 2.1 is also on the roadmap, which will also affect the way pages are displayed.

Safari 3.0

Little is known about the next version of Safari, and Apple rarely comments on the product roadmap, but Safari 3.0 is rumored to include major updates to the CSS rendering engine, which will feature a full or partial implementation of CSS 3.0 including the capability to allow users to resize text areas on the fly. Safari 3.0 will also include an updated Web Inspector tool for browsing the DOM, which will assist developers.

Internet Explorer 8 (IE "Next")

It might seem premature to be discussing IE8, given the recent release of IE7 and Vista, but Microsoft is already planning the next iteration. The final product is expected sometime in 2008 and will possibly feature some emphasis on microformats (content embedded inline with HTML). Although some improvements to XHTML support are expected, it is not yet known if JavaScript 2.0 will be on the roadmap. According to IE platform architect Chris Wilson, Microsoft will invest more in layout and adhering to the Cascading Style Sheets (CSS) 2.1 specifications. He also said Microsoft wants to make its browser object model more interoperable "to make it easier to work with other browsers and allow more flexible programming patterns."

Opera 10

Although no release date has been set, the vision for Opera 10 appears to be platform ubiquity. Opera's goal is to create a browser that can run on any device and operating system, including mobile and gaming consoles - a move that could shift the balance a little in favor of this powerful, but still underappreciated, browser.

Third-Party Tools Support and Obsolescence

Adopting third-party tools such as Dojo or Script.aculo.us can add a lot of functionality to an application "for free" but also bring with them inherent risk. More than one project has gone sour as a result of serious flaws in third-party frameworks, and because of the black-box nature of third-party tools, they are next to impossible to troubleshoot. One West Coast e-commerce firm implementing Dojo needed to fly in highly paid consultants to address issues they were having with the framework. The flaws were addressed and contributed back into the framework but not before the project incurred large unexpected costs.

Obsolescence can also inflict pain down the road if frameworks are not maintained at the rate users would like, or are not supported in future iterations of development. This can be particularly painful when rug-pulling events occur, such as when browsers or operating systems are upgraded. Adding features or improving the functional capabilities can require bringing in developers with in-depth knowledge of the tool.

Cultural and Political Risks

There are internal and external political risks in any software project. Something that is overlooked a lot right now, in our exuberance over rich Web applications, is the potential negative impact on our audience. Of course, the point is to improve usability, but is there a possibility that ten years of barebones HTML has preprogrammed Internet users to the point of inflexibility? It's a mistake to assume our users aren't smart, but all users have expectations about the way Web applications should respond and provide feedback. If our audience is sophisticated, trainable, and adaptable, designers have more latitude in the way users can be expected to interact with the application. Are we saying designers should be afraid to innovate on inefficient, outdated Web 1.0 user interfaces? Not at all, but some caution might be warranted.

End Users' Expectations


AJAX has a way of making things happen quickly on a page. An insufficiency of conventional visual cues (or affordances) can actually inhibit usability for less-technologically expert users. The general public has a heterogeneous set of expectations. If experience tells a user that an item must usually be clicked, rather than dragged, they might get bogged down with a drag-and-drop element - regardless of its apparent ease of use. It's not hard to imagine how this could happen: If you have never seen a draggable element in a Web page before, why would you expect to see one now?

Switching costs are low on the Internet. This is a cultural and economic characteristic of the Web in general, and it contributes to users' short attention spans. If users become frustrated by something on a public web site, they tend to move on to something else. AJAX is a double-edged sword in this instance.

Trainability

In the public Web, application users are not generally trainable because they start off with a weak relationship to the vendor. The trainability of your audience depends on the nature of that relationship, their motivation to learn, the depth of training required, and, of course, their attention span. Training for a Web application might include onsite demonstrations, embedded Flash movie tutorials, or printed instructions. In a consumer-targeted application, switching costs are generally low, and users are poorly motivated to acclimate to a new interface or workflow. Factors that affect trainability include the following:

• Strength of the relationship - Employees are much more likely to be motivated to learn a new workflow than strangers on the Web. Existing customers are also more likely to take the time to learn than new sales leads.

• Payoff for the user - People are more motivated to learn if there is a payoff, such as getting free access to a valuable service, being entertained, or getting to keep their job. If the payoff is ambiguous or not valuable enough, users are less motivated to learn.

• Difficulty of the task - More difficult tasks require a greater commitment to learn.

In the enterprise, we generally have more influence over our users than in consumer-vendor relationships. In other words, our ability to get users to learn a new interface is stronger. That said, the importance of getting user acceptance can't be overstated. End-user rejection is one of the major causes of software project failure (Jones, Capers. Patterns of Software Systems Failure and Success. Boston, MA: International Thompson Computer Press, 1996).

Legal

Web accessibility is an issue that links the legal environment to the technical world of Web application design. In the United States, Section 508 dictates how government organizations can build software, and it limits the use of Rich Internet Applications - at least to the extent that they must still be built to support assistive devices such as text-to-speech software. There are ways of building accessible AJAX applications, and some corporations might believe that because they are in the private sector, they are immune to lawsuits. In fact, there have been efforts to sue private corporations with inaccessible web sites under the Americans with Disabilities Act (ADA), such as the widely publicized Target Corp. Web site case in 2006. Accessibility will increasingly become a topical issue as RIAs become the norm. Fortunately, key organizations are attempting to address the issue with updated legislation and software solutions.

Section 508


Section 508 of the Rehabilitation Act requires that U.S. government organizations use computer software and hardware that meets clearly defined standards of accessibility. Although Section 508 doesn't require private sector companies to conform to the standards, it does provide strong motivation by requiring Federal agencies to use vendors that best meet the standards.

Telecommunications Act


Unlike 508, Section 255 of the Telecommunications Act does indeed apply to the private sector. It states that telecommunication products and services be accessible whenever it is "readily achievable" - a vague and wide-reaching requirement.


ADA

The Americans with Disabilities Act (ADA) basically requires accessibility in the provision of public services and employment. The ADA empowers employees to ask for "reasonable accommodations" throughout the enterprise, including intranet sites, software, and hardware. The ADA is also applied to Web sites of organizations and businesses, for example, in the Target Web site lawsuit, causing concern throughout the country of sudden heightened legal exposure.

Marketing Risks


All organizations should be concerned about marketing. Internet marketing has spawned a new breed of marketers who have to know about search engine optimization, Web site monetization, as well as understand the target audience and its cultural and technological attributes. All the other risks mentioned here ultimately become marketing risks because they impact the ability of an organization to conduct its business online.

Search Engine Accessibility


Many organizations rely heavily on search engine rankings for their business. Doing anything that might potentially impact rankings negatively would be deemed unacceptable. A lot of marketers are concerned that using AJAX on a corporate site might mean that pages will no longer turn up in search engine results pages (SERPs). This is a real and important consideration. It's also important to note that nobody but the search engine "insiders" (the Google engineers) know exactly how their technologies work. They don't want us to know, probably because knowing would give us an unfair advantage over people who are trying to make good Web sites and deserve good rankings, too. Google's modus operandi has always been to reward people who make Web sites for users, not search engines. Unfortunately, in practice, this isn't even close to being true. Search Engine Optimization (SEO) is a veritable minefield of DO's and DON'Ts, many of which could sink a Web site for good.

Before we look at this in more detail, we should begin with a bit of overview. Search engines use special programs called bots to scour the Web and index its contents. Each engine uses different techniques for finding new sites and weighting their importance. Some allow people to directly submit specific sites, and even specific hyperlinks, for indexing. Others rely on the organic evolution of inbound links to "point" the bots in the right direction. Inbound links are direct links from other sites that are already in the search engine. The problem with bots is that they are not proper Web browsers. Google, for example, previously used an antiquated Lynx browser to scour Web pages, meaning it was unable to evaluate JavaScript and read the results. Recently, Google appears to have upgraded its crawler technology to use a Mozilla variant (the same engine that Firefox uses). There is evidence that the Google crawler (aka Googlebot) is now capable of clicking JavaScript-loaded hyperlinks and executing the code inside.

With Google using Mozilla, all common sense points to the likelihood that Googlebot can indeed interpret JavaScript, but that doesn't necessarily help AJAX to be search engine-accessible. For a page to turn up in Google SERPs, it must have a unique URL. This means that content loaded as part of an XHR request will not be directly indexable. Even if Google captures the text resulting from an XHR, it would not direct people to that application state through a simple hyperlink. This affects SERPs negatively.
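One partial mitigation, sketched below, is to mirror application state in the URL fragment so that states can at least be bookmarked and linked; note that this alone does not make the content crawlable. The loadCustomerContent() helper here is hypothetical, standing in for whatever function performs the XHR:

// Record the state in the fragment, e.g. page.html#customer-42 ...
function showCustomer(id) {
    window.location.hash = "customer-" + id;
    loadCustomerContent(id);  // hypothetical helper that performs the XHR
}

// ... and restore it when someone arrives via a bookmark or shared link.
window.onload = function() {
    var match = window.location.hash.match(/customer-(\d+)/);
    if (match) showCustomer(match[1]);
};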

Google is not the only search engine, however, and other engines (MSN Search and Yahoo) are reportedly even less forgiving when it comes to JavaScript. That doesn't imply necessarily that a site must be AJAX or JavaScript-free, because bots are actually good at skipping over stuff they don't understand. If an application is "behind the firewall" or protected by a login, SERPs won't matter, and this can all be disregarded. It does, however, reinforce that using AJAX to draw in key content is perilous if SERPs on that content are important.


The allure of a richer user experience might tempt developers to try one of many so-called black hat techniques to trick the search engines into indexing the site. If caught, these can land the site on a permanent black-list. Some examples of black-hat techniques follow:

• Cloaking - Redirection to a mirror site that is search-engine accessible by detecting the Googlebot user agent string.

• Invisible text - Hiding content on the page in invisible places (hidden SPANs or absolutely positioned off the screen) for the purpose of improving SERPs.

• Duplicate content
- Setting up mirror pages with the same content but perhaps less JavaScript with the hope of getting that content indexed, but directing most people to the correct version. This is sometimes used with cloaking.

Given the current status of Googlebot technology, some factors increase the risk of search engine inaccessibility:

• AJAX is used for primary navigation (navigation between major areas of a site).

• The application is content-driven and SERPs are important.

• Links followed by search engine bots cannot be indexed - the URLs cannot be displayed by browsers without some sort of redirection.

Reach

Reach risk is as much a marketing issue as a technical one. The problem with AJAX is that not everyone can use it. Even if our AJAX application supports the majority of browser variants, there is still a segment of users who will not have JavaScript enabled in their browsers. This might be because they are in a tightly controlled corporate environment where security is important. Also, some people just turn it off because they don't want to be bothered by pop-ups and other intrusive dynamic behaviors. Between 3 percent and 10 percent of the general public has JavaScript disabled at any given time.
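At minimum, that segment should not be met with a blank page. A hypothetical fallback (the URL is invented) using the standard noscript element:

<noscript>
  <p>This application requires JavaScript. Please enable it in your
     browser, or use the <a href="/basic/">basic HTML version</a>.</p>
</noscript>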

Reach is also affected by every other risk mentioned here. Having lower SERPs affects reach because fewer people can be exposed to the site. Losing users because the interface is too new or innovative naturally affects reach, as does losing people due to upgrades in browser technology that break Web site functionality. The only way to totally minimize reach risk is to eliminate all but the most basic, correctly formatted HTML.

Monetization


Internet marketers are also quickly realizing that AJAX throws a popular Web site revenue model into disarray. Although it's true that Google AdSense uses a CPC (cost per click) model, many other advertising-driven sites use the CPM (cost per thousand impressions) model, which rewards advertisers for mere page views. The idea is that the value of advertising has more to do with branding and recognition than with direct conversions. Whether or not this is true, under CPM an average click-through is expensive, since ads generally get low click-through rates (sometimes 0.1 percent or less). AJAX creates a problem for CPM because if hyperlinks trigger an XHR instead of a full page load, the ad does not register another impression: the advertiser still reaps the benefit, but the Web site loses revenue. Simply implementing a trigger to refresh the ad on a page event (such as an XHR) might not be a fair way to solve the problem either. Disagreements are bound to surface about what kind of request should fairly trigger an impression, and the magic of XHR and JavaScript might seem a bit too ambiguous for advertisers wary of impression fraud. This event system also lacks a directly comparable baseline across Web sites: if one site loads more content on each XHR, or uses more pagination than another, its number of impressions can be artificially inflated.
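A back-of-the-envelope illustration, with entirely made-up numbers, shows the scale of the problem:

// Hypothetical figures for illustration only.
var monthlyPageViews = 1000000;                      // full page loads per month
var cpm = 5;                                         // $5 per thousand impressions
var revenueBefore = (monthlyPageViews / 1000) * cpm; // $5,000

var ajaxShare = 0.6;                                 // suppose 60% of clicks become XHRs
var revenueAfter = (monthlyPageViews * (1 - ajaxShare) / 1000) * cpm; // $2,000

// Unless XHRs are somehow counted as impressions, CPM revenue falls 60%.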

Risk Assessment and Best Practices


The number of variables in evaluating the role of AJAX in your project can be a bit overwhelming. The important thing to remember is that all software projects have risk. AJAX is no different in this regard. We already discussed some of these, and following are a few strategies for reducing overall risk.

Use a Specialized AJAX Framework or Component

Save time by leaving browser compatibility and optimization issues to the people who know them best. Well-optimized third-party AJAX frameworks and components are available that have already solved many of the cross-browser issues, and many of them are maintained quite aggressively with regular updates. This can be a cost- and time-saving approach well worth the new risks it introduces. Judge a framework or tool by the length of time it has been in continuous development and the quality of support available, and balance that with the degree to which you are prepared to build a dependence on it.

AJAX Framework and Component Suite Examples

• Dojo (open source)
• Prototype (open source)
• DWR (open source)
• Nitobi (commercial)
• Telerik (commercial)

Progressive Enhancement and Unobtrusive JavaScript


Progressive Enhancement (PE) can be an excellent way to build AJAX applications that function well, even when the client browser can't execute the JavaScript and perform the XHRs. PE is different from Graceful Degradation because in the latter, we build rich functionality and then some mechanism for degrading the page so that it at least looks okay on incompatible browsers. PE is sometimes also referred to as Hijax.

• PE essentially means that you should write your application in such a way that it functions without JavaScript.

• Layer on JavaScript functionality after the application is working.

• Make all basic content accessible to all browsers.

• Make all basic functionality accessible to all browsers.

• Be sure enhanced layout is provided by externally linked CSS.

• Provide enhanced behaviors with unobtrusive, externally linked JavaScript.

• See that end-user browser preferences are respected.

In PE, we begin by writing the application with a traditional post-back architecture and then incrementally enhancing it to include unobtrusive event handlers (not using embedded HTML events, but in externally referenced JavaScript) linked to XHR calls as a means for retrieving information. The server can then return a portion of the page instead of the entire page. This page fragment can then be inserted into the currently loaded page without the need for a page refresh.

When a user visits the page with a browser that doesn't support JavaScript, the XHR code is ignored, and the traditional model continues to function perfectly. It's the opposite paradigm of Graceful Degradation. By abstracting out the server-side API, it's possible to build both versions with relatively little effort, but some planning is required.

This has benefits for accessibility (by supporting a non-JavaScript browser), as well as Search Engine Optimization (by supporting bookmarkable links to all content).

Following is an example of unobtrusive enhancement to a hyperlink. In the first code snippet, we show a hard link to a dynamic page containing customer information.

<a href="showCustomerDetails.php">Show Customer Details</a>

In the next snippet, we see the same link, only this time we intercept the click and execute an AJAX request for the same information. By calling our showCustomerDetails.php page with the parameter contentOnly=true, we tell it to simply output the content, without any of the page formatting. Then, we can use DHTML to place it on the page after the AJAX request returns the content.

onclick="returnAjaxContent('showCustomerDetails.php?contentOnly=true', myDomNode); return false;">
Show Customer Details


When a user without JavaScript clicks the link, the contents of the onclick attribute are ignored, and the page showCustomerDetails.php loads normally. If the user has JavaScript, the normal page load is suppressed (because of the return false at the end of the onclick), and instead the AJAX request fires, using the returnAjaxContent() method, which we just made up here but which would handle the XHR in the example.
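The article leaves returnAjaxContent() undefined; the following is one minimal sketch of what such a helper might look like, assuming it fetches the URL and injects the returned fragment into the supplied DOM node:

function returnAjaxContent(url, targetNode) {
    // Cross-browser XHR creation (ActiveX covers IE6 and earlier).
    var xhr = window.XMLHttpRequest ? new XMLHttpRequest()
                                    : new ActiveXObject("Microsoft.XMLHTTP");
    xhr.onreadystatechange = function() {
        if (xhr.readyState == 4 && xhr.status == 200) {
            targetNode.innerHTML = xhr.responseText;  // insert the page fragment
        }
    };
    xhr.open("GET", url, true);
    xhr.send(null);
}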

What's even more preferable, and more in keeping with the progressive enhancement methodology, is to remove all inline JavaScript completely. In our example here, we can apply a unique CSS class to the link instead of using the onclick attribute:


<a href="showCustomerDetails.php" class="ajaxDetails">Show Customer Details</a>


Then, in our onload event when the page is downloaded to the browser, execute something like the following in externally referenced JavaScript to attach the event to the hyperlink:

function attachCustomerDetailsEvent() {
    // Find every anchor tag on the page...
    var docLinks = document.getElementsByTagName("a");
    for (var a = 0; a < docLinks.length; a++) {
        // ...and enhance only those marked with the ajaxDetails class.
        if (docLinks[a].className.match("ajaxDetails")) {
            docLinks[a].onclick = function() {
                returnAjaxContent('showCustomerDetails.php?contentOnly=true', myDomNode);
                return false; // suppress the normal page load for JavaScript users
            };
        }
    }
}

This loops through all the anchor tags on the page, finds those marked with the class ajaxDetails, and attaches the onclick event. This code is totally unobtrusive to a browser without JavaScript.

Google Sitemaps

Google has provided us a way of helping it find the entirety of our sites for indexing. It does this by allowing developers to define an XML-based sitemap containing such information as URLs for important pages, when they were last updated, and how often they are updated.

Google Sitemaps are helpful in situations where it is difficult to access all areas of a Web site strictly through the browseable interface. They can also help the search engine find orphaned pages and pages hidden behind Web forms.

If an application uses unique URLs to construct Web page states, Sitemap XML can be a useful tool to help Google find all important content but is not a guarantee that it will. It also has the advantage of being one of the few SEO techniques actually sanctioned by Google.

Many free tools are available to assist with the generation of a Google Sitemap file, but one is easily created if you can crawl and provide information about important areas of your Web site. Following is an example of a Google Sitemap XML file:




<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url>
    <loc>http://www.nitobi.com/</loc>
    <lastmod>2007-10-01</lastmod>
    <priority>1.0</priority>
  </url>
  <url>
    <loc>http://www.nitobi.com/products/</loc>
    <lastmod>2005-10-03T12:00:00+00:00</lastmod>
    <changefreq>weekly</changefreq>
  </url>
  <url>
    <loc>http://www.nitobi.com/news/</loc>
  </url>
</urlset>

The <loc> tag provides a reference to the URL. <lastmod> describes when it was last updated, <changefreq> gives Google an idea of how often the content is updated, and <priority> is a number between 0 and 1 that indicates a reasonable importance score relative to the rest of your pages. In general, it's not advantageous to make all pages a 1.0, because that will not increase your overall ranking; new articles or pages should receive a higher priority than, for example, a relatively static home page.

After a sitemap file has been created, Google must be made aware of it. This can be done through the Webmaster Tools section on google.com. In a short time, the file will be downloaded and then re-downloaded at regular intervals, so be sure to keep it up-to-date.
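Another option, announced by the major engines in 2007, is sitemap autodiscovery through robots.txt; a single line points crawlers at the file (the file location below is an assumption for illustration):

Sitemap: http://www.nitobi.com/sitemap.xml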

Visual Cues and Affordances

One of the things usability experts try to do is construct an interface in such a way that people don't need to be trained on it. The interface should use patterns that suggest the features and functionality within, that is, something that can be dragged should have an obvious grab point that suggests "drag me," and possibly a drop-shadow to indicate that it is floating above the page. Try to think of ways to help the user by visually augmenting on-screen controls with cues. Entire books have been written on UI design and usability (some great ones include Don't Make Me Think by Steve Krug and Designing Visual Interfaces: Communication Oriented Techniques by Kevin Mullet and Darrell Sano), but here are some quick guidelines:

• Make controls visible and intuitive. Use high-contrast, evocative iconography to indicate functionality; for example, use a trash can for delete.

• Use images to augment links and actions. There is a positive relationship between using image links and user success for goal-driven navigation.

• Use familiarity to your advantage. Build on users' prior knowledge of popular desktop software such as Microsoft Office, Photoshop, Media Player, Windows Explorer, and so on by using similar iconography and interface paradigms.

• Provide proactive assistance. Use HTML features such as tooltips (the title attribute; some browsers also display an image's alt text this way) and rollovers (onmouseover, onmouseout) to provide proactive information about a control and inform the user about its function (see the sketch after this list).

• Utilize subtractive design. Draw attention to the visual cues that matter by reducing the clutter on the screen. Do this by eliminating any visual element that doesn't directly contribute to user communication.

• Use visual cues. Simply style an object so that users can easily determine its function. Good visual cues resemble real-world objects. For example, things that need to be dragged can be styled with a texture that indicates good grip (something bumpy or ridged). Something that can be clicked should have a 3D pushable button resemblance.

• Be consistent. Repeat the use of visual patterns throughout the application wherever possible.
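As a quick sketch of the proactive-assistance bullet above (the icon file names and the deleteInvoice.php page are hypothetical), the title attribute supplies the tooltip while the mouse events drive a rollover; for brevity the handlers are inline here, but the unobtrusive technique shown earlier applies equally:

<a href="deleteInvoice.php" title="Delete this invoice"
   onmouseover="document.getElementById('trashIcon').src = 'trash_hover.gif';"
   onmouseout="document.getElementById('trashIcon').src = 'trash.gif';">
    <img id="trashIcon" src="trash.gif" alt="Delete this invoice">
</a>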

Free databases of user interface patterns are available online, including the excellent Yahoo Design Pattern Library.

Avoid Gold Plating


Gold plating is adding more to the system than the requirements specify; it can also occur in the design phase of a project through the addition of unnecessary requirements. Building in features above and beyond what the requirements of a software project call for can be a lot of fun, but it adds cost and maintenance work down the road. Every additional feature is a feature that needs to be tested, that can break other parts of the software, and that someone else might need to reverse-engineer and understand some day. Gold plating often results from conversations that start, "Wouldn't it be cool if..." Keeping tight control on scope creep and managing the project carefully help avoid gold plating.

The counter-argument is that tightly controlling scope and being strict about requirements can stifle innovation and take the fun out of developing rich applications. Some of our best features may come from moments of inspiration midway through a project. Consider striking a balance between a focus on requirements and leeway for unplanned innovation, keeping in mind how that balance affects the overall risk of the project.

Plan for Maintenance

Testing needs to happen in any software development project, but with AJAX, developers must perform testing and maintenance at regular intervals to ensure longitudinal success as browsers evolve. Periodically review the target browser list for currency, and update it to include new versions of popular browsers (including beta versions). Establish repeatable tests, and run through them whenever the browser list changes.
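Repeatable doesn't have to mean heavyweight tooling. Even a hand-rolled page of assertions, loaded in each browser on the target list, will catch gross regressions; here is a minimal sketch (the two assertions are purely illustrative):

// Tiny smoke-test harness: open this page in each target browser and
// confirm that every line reports PASS.
function runSmokeTests() {
    var tests = [
        ["XMLHttpRequest available", function() {
            return typeof XMLHttpRequest != "undefined" ||
                   typeof ActiveXObject != "undefined";
        }],
        ["DOM traversal works", function() {
            return document.getElementsByTagName("a") != null;
        }]
    ];
    for (var i = 0; i < tests.length; i++) {
        document.write(tests[i][0] + ": " + (tests[i][1]() ? "PASS" : "FAIL") + "<br>");
    }
}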

Software Risk Management


Some global principles of software risk management apply to handling risk in software. Briefly, here are a few of the things we recommend to keep it generally in check:

• Adopting a holistic view - Taking the wide-angle approach and looking not only at the immediate technical and budgetary constraints, but also at external issues such as opportunity cost (the value of an alternative to the choice you make) and how the project impacts marketing goals. The point is to maintain a common understanding of what is important in a software project.
• Having a common product vision - Developing a culture of shared ownership between team members and a shared understanding of what the project is and what the desired outcomes are.
• Using teamwork - Bringing together the different strengths of each team member to form a whole that is more than the sum of its parts.
• Maintaining a long-term view - Keeping the potential future impact of decisions in mind and budgeting for long-term risk management and project management.
• Having open lines of communication - Encouraging both formal and informal means of team communication.

Adopt a Revenue Model That Works

We discussed earlier how AJAX can create a problem with the traditional CPM (cost-per-impression) revenue model. It can cause a site's traffic (in terms of the number of raw impressions) to be underestimated and, consequently, undervalued.

What we want to achieve with ad-driven monetization is a way to tie the true value of a Web site to the cost of advertising there. The question is: What makes ad space valuable? Lots of things do, such as unique traffic, people spending a lot of time on a site, people buying things on a site, having a niche audience that appeals to particular advertisers, and so on. To be fair, a revenue model needs to be simple and measurable, and vendors of advertising need to set their own rates based on the demand for their particular property.

Cost-per-Mille (Cost per Impression) Model Guidelines

The thing to pay attention to in CPM revenue models is updating the advertisement whenever enough content on the page has changed to warrant a new impression.
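A minimal sketch of this follows. It assumes ads are served into an iframe with the hypothetical id "adFrame", and that a hypothetical shouldCountNewImpression() encapsulates your own rule for how much change warrants a new impression:

// Call this after each AJAX content update.
function maybeRefreshAd() {
    if (shouldCountNewImpression()) {
        // Re-request the ad with a cache-busting parameter so the ad
        // server serves (and counts) a fresh impression.
        var adFrame = document.getElementById("adFrame");
        adFrame.src = "/adserver/banner.html?ts=" + new Date().getTime();
    }
}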

Cost-per-Click Model Guidelines

Click-through rates are affected by how appropriate the ad is for the Web site. On content-driven, consumer-targeted Web sites, the ad server must show contextual ads based on the page content. When page content is loaded with AJAX, it might never be read by AdSense or other contextual ad servers, so an update to the advertising context might be appropriate.
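How that update is triggered depends entirely on the ad network, so the following is an illustration only: it re-requests an ad with explicit keywords pulled from the newly loaded content. The /adserver/contextual.html endpoint and the crude keyword extraction are hypothetical, not part of AdSense or any real ad-server API:

// Call this after AJAX injects new content into the page.
function updateAdContext(contentNode) {
    // Naive keyword extraction: strip tags, trim, and take the first five words.
    var text = contentNode.innerHTML.replace(/<[^>]*>/g, " ").replace(/^\s+|\s+$/g, "");
    var keywords = text.split(/\s+/).slice(0, 5).join(",");
    document.getElementById("adFrame").src =
        "/adserver/contextual.html?kw=" + encodeURIComponent(keywords);
}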

Cost-per-Visitor Guidelines

If a visitor is defined as a unique person per day, a cost-per-visitor model works irrespective of how many page loads occur or how good or bad the advertising is. A unique visitor can be measured reasonably well by looking at the IP address and browser User-Agent, and by setting a cookie.
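Client-side, the cookie half of that measurement can be as small as the following sketch (the cookie name and random-ID scheme are arbitrary choices); the IP address and User-Agent checks happen server-side against the request logs:

// Assign a random visitor ID the first time we see this browser; later
// requests carry the cookie back so visits can be deduplicated server-side.
function tagVisitor() {
    if (document.cookie.indexOf("visitorId=") == -1) {
        var id = Math.floor(Math.random() * 1000000000);
        // Expire in one day, matching the "unique person per day" definition.
        var expires = new Date(new Date().getTime() + 24 * 60 * 60 * 1000);
        document.cookie = "visitorId=" + id +
            "; expires=" + expires.toUTCString() + "; path=/";
    }
}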

Include Training as Part of the Application

Now that we know what affects user trainability, we can look at what impacts the success of user training. If we want to provide training for software applications to improve user acceptance, how do we do it?

• Organize training around user goals, not product features. For example, it would be better to structure a lesson around the goal of creating an invoice rather than around how to use the invoice tool. This way, users can understand why they should be motivated to pay attention. It also gets to the heart of what they want to learn.

• Find out what users want to use the tool for, and provide training for that. Information overload is deadly to the success of training. Trying to cover too much ground can overwhelm your users and cause them to tune out, bringing information absorption to a halt.

• Use training to identify flaws in product design. If training is delivered in person, it can be an opportunity to identify parts of the application that are too hard to use. Although no substitute for early usability testing, this might be the last opportunity to catch problems.

• Support and encourage a user community. Support communication tools that allow users to teach one another; forums and mailing lists can be useful in this regard.

When we think of training, we might mistakenly think only of in-person sessions or live webinars. These can be worthwhile, and by no means rule them out, but consider low-cost alternatives, too:

• Use context-specific training material. Make material accessible from within the application and at useful interaction points. For example, make information on how to create a new invoice available from the invoice management screen, and so on.

• Show, don't tell. Use a screen-capture tool such as Adobe Captivate, Camtasia, or iShowU (for the Mac) to produce inexpensive screencast training material that you can deliver through a Web page. Many users prefer to learn this way, and there's nothing like an actual demonstration of a product feature because, by definition, it shows a complete goal-story from beginning to end. Some free in-application Web tour tools are also available, such as Nitobi Spotlight (http://www.nitobi.com) and AmberJack (http://amberjack.org/), although these might not be as effective as a prerecorded demonstration with audio.

Summary

Because of the unstable nature of the JavaScript/CSS/DHTML/XHR paradigm (the earth keeps shifting beneath our feet with each browser release), we need to employ a continuous risk management process during and after an application's rollout. This doesn't need to be overly officious or complicated, but it should at least involve unit and regression testing and a holistic look at current browser technology and the underlying mechanisms of AJAX. Put simply: Does our solution continue to function with current browsers and operating systems, and will it continue to do so in the near term with upcoming releases?

Along with a continuous approach to analyzing risk in a software project comes a willingness to revisit design decisions and to perform rework and maintenance. Both browsers and users can be moving targets, and changes to the JavaScript, CSS, and XHR engines can subtly affect AJAX applications; these are the most likely culprits behind any long-term maintenance problems. Microsoft, Mozilla, Opera, and Apple are all watching the AJAX landscape carefully to help us avoid such breakage as best they can, but a continuous approach to risk management is needed to stay on top of it and ensure a long and healthy lifespan for our Web applications.

Resources

Search Engine Optimization

WebProNews
SearchEngineWatch
Google SEO Recommendations
Google Guidelines for Site Design
Google Sitemaps

Statistics

The Counter Global Web Usage Statistics

Roadmaps

Firefox 3 Roadmap
ACID2 Rendering Test
CSS 3.0 Roadmap

Screen Capture Tools

Adobe Captivate
Camtasia
iShowU

courtesy @computerworld.com