Friday, August 17, 2007

Virtualization is on the rise

With the increasing need for IT departments to consolidate servers and reduce costs, virtualization is on the rise.

Virtualization has been around since IBM mainframes ruled the world, but it was not until VMware pioneered virtualization on Intel chipsets that this technology really began to gain momentum.

In the early days of IBM mainframe virtualization (the lineage that leads to today’s z/VM), a huge cost was involved. In addition to the cost, virtualization took up so much of the processor’s power that systems ran slowly and were not cost effective. Additionally, the technology on the chipsets was far behind where it needed to be.

Let’s fast-forward to today, where we are seeing Pentium 4 chips with blazing speeds, cheap computers, and a market that is far more open to virtualization. Now that computers are so inexpensive, more and more IT shops are receptive to the idea of virtualization.

Many of these companies need a more efficient way to consolidate their IT network infrastructure. With the lower cost of hardware combined with the rapid improvements in technology, virtualization is becoming increasingly popular.

As a technical consultant in the field, I am seeing virtual machines adopted because of the vast number of underutilized servers that are wasting space and costing companies thousands of dollars. These days, every third-party vendor has stringent requirements for its applications, and common practice for many years was to meet these requirements by simply buying whatever hardware was deemed necessary.

In most cases, this caused numerous servers to sit underutilized. This trend continued throughout the dot-com era and the IT boom. Then the IT meltdown occurred, and virtualization began to finally shine.

The slump caused companies throughout the U.S. to look very closely at their budgets and the amount of money they were spending on computers — especially leased computers — and it didn’t take very long for them to realize that server consolidation was the way to go.

For example, if you have a quad-processor server running at 8% utilization and sitting idle the other 92% of the time and you need additional servers, virtualization lets you efficiently and cost-effectively take advantage of that machine rather than purchasing more hardware. I have seen server rooms with 100-plus servers drop to 25 servers plus a NAS or SAN because they were taking advantage of virtualization.

Everywhere you look, there is a need that virtualization can meet. Take the companies today that are still running business software that works only in a Windows NT 4.0 environment; they may not have the time, money, or ability to purchase an updated version of that software, but the need to move their business to the latest Microsoft operating system exists nonetheless.

Virtualization is the solution to this incompatibility. The company can upgrade its domain to Windows Server 2003, install a virtualization product, and load Windows NT 4.0. Then the legacy applications can be run on the virtual server.

Or consider the many businesses that need zero downtime and also want to fully utilize their two- and four-processor servers. By using virtualization technology, companies can run more applications on a single server than ever before. With the ability to move virtual systems around as necessary and the small overhead the technology requires, virtualization is becoming more and more popular for companies to implement.

Two of the more popular vendors in this area are VMware (an EMC Corp. company as of Q1 2004) and Microsoft. VMware has been in the virtualization market for quite some time and has an extensive product line. Its flagship products include VMware Workstation, GSX Server, and ESX Server.

Microsoft recently entered the virtualization market as well, with its buyout of Connectix. Its current products, Virtual PC 2004 and Virtual Server, also take advantage of virtualization on Intel-based systems.

Virtualization is a cost-effective way to add value to your company and is the wave of the future for corporate networks.

VMware comes of age in IPO

In what must have been one of the most closely watched events involving a technology company this year, VMware launched its IPO on the NYSE (New York Stock Exchange) just yesterday.

The initial sale price of $29 per share for 33 million shares is expected to raise more than $900 million in capital. According to eWeek, VMware’s estimated worth now stands at more than $10 billion, with juggernaut EMC controlling about 90 percent of the stock.

By the end of the trading day, the stock had risen to $51 per share.

If you recall, VMware was founded in 1998, catching on first with hobbyists, then with system administrators and CIOs who saw the potential in the then-nonexistent niche of x86 virtualization. For those who practically “grew up” using VMware’s products, a touch of nostalgia here is inevitable.

While virtualization in general has been around for ages, especially on mainframes, VMware was the company that brought it into the mainstream. The VMware of today is hardly the fledgling start-up with a product in search of a problem. The market has matured significantly since then, and the opportunities are simply staggering.

Still, as a direct result of VMware’s own success, the competition has finally awoken. Top league players flush with billions in their pockets have joined in the fray — and they are playing for keeps, not just a mere slice of the pie.

Also, the continual incorporation of virtualization into both hardware (in the form of microprocessors) and software (in the form of the free Linux kernel) threatens to make a separate virtualization product irrelevant. To thrive, VMware must continue to lead the industry and, if at all possible, speed up the pace of its innovation even further.

This is echoed by Diane Greene, who oversaw the IPO as president and CEO of VMware:

We have very consistently explained to everyone that we will continue to invest quite heavily in [R&D] because we do have such a rich road map…

We also will continue to increase the reach of our products… [to] further unlock all the value in the virtualization platform… There is still a lot to be done there to fill that out and strength[en] how we make things highly available, disaster recovery-tolerant and secure.

Adding on, Tom Bittman, a vice president and chief of research at Gartner, said that VMware would be wise to invest some of the money gained through the IPO in a new consulting force, as well as look for key acquisitions to bolster its business.

At the moment, the biggest challenge to VMware is believed to be from Microsoft Windows Server 2008 with its own “Viridian” hypervisor, which is expected to debut as a beta before the end of this year.

And of course, Citrix is making its move as well, with a deal to acquire XenSource worth $500 million.

The illogic of two-year cellular contracts

Cellular data is getting faster again, with the debut of HSUPA.

A year ago, EV-DO Rev. A was hot stuff, giving CDMA companies (Verizon and Sprint) the lead in North American mobile data, with nothing on the horizon that would give an advantage, or even parity, to the GSM cellular carriers (such as Cingular-AT&T and T-Mobile).

Now, GSM-family Class 6 HSDPA devices reach download speeds of 1,800 Kbps, roughly on par with EV-DO Rev. A. That isn’t good enough, so HSUPA was created, and the FCC just approved the first such device, which promises roughly double what EV-DO Rev. A delivers (once AT&T builds the network to match the device).

This keeping-up-with-the-Joneses cold cellular war of one-upmanship is getting downright crazy: with a new, faster data system arriving every year, early-termination penalties become the price of moving to another carrier’s system. The standard two-year cellular contract really gets in the way of progress. It may be effective for the cellular companies in slowing down churn (customers moving to new cell phone companies), but why not offer better service instead of locking customers in with contracts?

New zero-day Yahoo Messenger bug found


Researchers at McAfee have verified and reproduced a zero-day bug first reported by Chinese researchers pertaining to the Webcam functionality on Yahoo Messenger.

The bug was reproduced on the most recent version of Messenger as of today, which is V8.1.0.413.

Wrote McAfee researcher Wei Wang:

It seems like a classic heap overflow, which can be triggered when the victim accepts a webcam invite.

Yahoo’s security team has been notified of the problem. According to a Yahoo spokesman in an e-mail to InformationWeek:

Since learning of this issue, we have been actively working towards a resolution and expect to have a fix shortly. Yahoo takes security seriously and consistently employs measures to help protect our users.

No exploit code for this new flaw has been published yet. It is noted that this vulnerability is different from another one that was patched in June.

For now, you should stop accepting Webcam invites from untrusted sources until a patch for this flaw has been released and installed. Additionally, McAfee recommends that you block outgoing traffic on TCP port 5100.
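If your machine or gateway runs a Linux firewall, a single iptables rule is one way to implement that advice. This is just a sketch; the chain and policy details will depend on your own setup:

# iptables -A OUTPUT -p tcp --dport 5100 -j DROP

Windows users can create an equivalent outbound block in their desktop firewall.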

Thursday, August 16, 2007

How copyright and licensing issues affect programming work

What ever happened to the Perl vs. C# text processing shootout I promised? Well, I will tell you what happened.

Something had been bothering me for ages, but I could not put my finger on it. As I sat down to write the code, I finally remembered what it was: I am not allowed to publish benchmarks of .Net without Microsoft’s express written permission. This is part of the Microsoft .Net Framework EULA, and it also applies to many of its other products, such as SQL Server 2005. Therefore, instead of writing about Perl vs. .Net, I will be discussing how licensing terms can affect the work of a programmer. Please note: I AM NOT A LAWYER.

In the late 90s, I worked for a small startup. One of my tasks was to research patents and copyrights. The company had a clever system (for the time) on its Web site that it wanted to protect. The Amazon “One Click Shopping” patent had already been used against Barnes & Noble, and my boss wanted to be able to do the same to our competitors if they put a similar feature on their sites. So I became fairly familiar with patent and copyright laws. I was able to determine that we could probably not get a patent on what we were doing (it was pretty obvious how to do it), but we were implicitly and automatically granted copyright over the code and could argue in court if need be that a competitor’s code was based on ours without proper payment.

The lesson I learned from this was fairly striking: Copyright is powerful. Most folks do not realize how strong it is and that you don’t actually have to do anything to copyright your work. If I make a doodle on a dinner napkin, I automatically have a copyright on it, which can be enforced. Of course, proof of authorship is always helpful and publicly marking something as being copyrighted makes it clear to all those who view the work that there is a copyright on it. However, the lack of copyright markings does not grant free usage.
How does this work in the world of programming?

For starters, programmers are notorious for going to a search engine when they get stuck on a problem and then copying and pasting the code they find into their own projects. Unfortunately, this is most likely illegal unless the poster of the original code expressly granted usage rights to you. In other words, by hitting the search engine and grabbing some code, you are opening your employer up to a potential lawsuit. Granted, the chances of this being noticed are probably billions to one; but if you aren’t granted express usage rights, you are probably in violation of the law.

There are even more problems with doing a copy/paste of code. What happens when you copy/paste a piece of open source into your project? Well, it depends on the type of open source license it has. If it is the BSD license, you are in great shape because the BSD license is forgiving. However, the GPL is a completely different story. You have to be careful when copying/pasting, linking to (in the compilation sense and not in the URL meaning), or otherwise making use of GPL-ed code. The GPL has a sneaky way of injecting itself into projects in a way that can potentially force large amounts of a project (if not the entire thing) to be subject to it.

Microsoft is a huge contributor to the license and copyright headaches. For one thing, Microsoft is one of the largest obtainers of patents in the world, in no small part due to the massive amount of R&D it does. If you are working on a project, there is a darn good chance that Microsoft has a patent on something identical or similar, particularly in some oddball parts of computing. And Microsoft has never been afraid to flex its legal muscles (at least when buying out the offender would be more expensive than suing them). Microsoft also has a bad habit of slipping unusual license terms (like the no-benchmark clause) into places that you would not expect. Before embarking on a project that makes use of Microsoft technologies, I recommend that you carefully inspect all relevant licenses; if you are unsure about what they mean, consult a lawyer.

Even clip art can turn around to bite you. One of my favorite Web sites, iStockphoto, has reasonable licensing terms for its images. Royalty-free images may be used on Web sites, in marketing collateral, and so on, but it’s a problem if you want to use one of the site’s images in a template library that you redistribute, or to use an iStockphoto image at a resolution above 800×600. You have to be very careful with these types of agreements. For instance, say that your Web developer uses one of these images for your company’s Web site, which is a permitted use. Then you decide that you like the photo so much, you will use it in the splash screen for your application. There could be liability if there wasn’t clear communication between the person who read and signed the license and the developer using the image.

It is extremely unlikely that your project’s miscellaneous and/or accidental violations of copyright and license terms will ever be noticed unless the project receives large enough mainstream usage. In addition, violations within closed source, proprietary code are difficult to find. Nevertheless, there are plenty of lawsuits in the history of IT to show that it is quite possible to have the pants sued off you too.
My general rules of thumb about copyright and licensing

I only use content in a project that I have created or that I know another employee within our company has created. I do not copy and paste code from the Web, and I do not use GPL-ed code within a project unless the project is to be released as GPL-ed code. I carefully check any content, code, images, etc. for copyright and license terms before using them; if I’m confused, I kick it over to a lawyer. If I do not see any explicit rights granted to me, I assume none. I think this is a fairly reasonable and sensible approach.

Basically, think twice before copying and pasting code you dig up on the Web into your project.

Fedora Updates vs Windows Updates

I've got Fedora 7 running in a virtual machine. I started it up today and got a popup saying 41 updates were ready. I clicked for the updates to install and let them run. I then got a window saying I should reboot due to the updates, and did I want to do it now or later?

I've read plenty of comments berating Microsoft for "Patch Tuesday", and plenty more saying "I never have to reboot my Linux system." I personally am pleased by getting updates to all my apps, not just the OS. I don't mind the occasional reboot, but maybe that's just my Windows background. What I don't understand is why some belittle MS for pushing updates and suggesting reboots when at least one Linux distribution does the same thing.

Is Fedora the only distribution that does this? If not, why take Redmond to task when Windows isn't the only OS that does it?

Please, no flames or unnecessary criticism of either OS or its users. Please stay on the topic of updates. Thanks in advance.

Get everyone on the same page with a project kickoff meeting

Projects don’t always go through an organized sequence of planning and execution. On many projects, you’re forced to jump into execution and then catch up with the planning later. Before you know it, you find that team members and stakeholders have varying levels of understanding about the purpose and status of the project.

Regardless of how you start your project, you should always hold a project kickoff. The purpose of the kickoff meeting is to formally notify all team members, clients, and stakeholders that the project has begun and make sure everyone has a common understanding of the project and their roles. Like all formal meetings, there should be an agenda. There are a number of specific things you want to do at this meeting:

* Introduce the people at the meeting.
* Recap the information in the Project Charter, including the purpose of the project, the scope, the major deliverables, the risks, the assumptions, the estimated effort and budget, and the deadline.
* Discuss the important roles and responsibilities of the project team, clients, and stakeholders. Many, if not all, of the people who will work on the project should be in attendance. If there’s confusion about the role of any person or organization, you should discuss and clarify it here.
* Go over the general approach and timeline of the project. This gives people a sense of how the project will unfold. In particular, you will want to ensure that people understand what they need to be doing in the short term to support the project.
* Discuss the project management procedures. It’s important for everyone to understand how the project manager will manage schedule, issues, scope, risk, etc., since many people play a role in these procedures. For example, you need a process to surface scope change requests, determine their impact, and bring them forward for approval. You don’t want to fight with people about how the process works after the project has started. The kickoff meeting is the time to make sure everyone understands and agrees to the proposed project management procedures.
* Discuss and answer any outstanding questions. The purpose of the discussion is not to rehash the purpose of the project, but to allow people to voice specific questions or concerns they have as the project begins.
* Confirm that the project is now underway.

In general, the project team, client, and stakeholders should be in attendance. Most kickoff meetings can be conducted in an hour or two, but complex, lengthy projects may require a day or two.

Putting a stop to PDF spam

As I mentioned a little while ago, spammers are now using PDF documents to spam users with fake stock alerts. While the spammers are now diversifying by enclosing PDF files inside zip files and even hiding their adverts inside Excel files, we can still have considerable success filtering them out.

I’ve recently happened upon a plug-in for SpamAssassin and some third-party Phishing and Scam databases for ClamAV; combined, these cut out substantial amounts of spam, including the PDF, XLS, and other difficult-to-handle variants.

PDFInfo

PDFInfo is a plug-in that allows SpamAssassin to analyse PDF files and assign points based on predefined rules. PDFInfo comes with a set of default rules, but custom rules can also be constructed. Several evaluation (“eval”) functions can be used to construct rules; these range from simple filename and size comparisons to MD5 checksums and pixel coverage. Unfortunately, as these spam PDFs are essentially just embedded images, PDFInfo cannot analyse the text inside a document; that would require some kind of text recognition engine. I have seen that @mail are using a SpamAssassin module which scores PDF attachments based on their content using the pdftotext application. I won’t try using that in a production environment until I can see what type of system load it generates.

Back to PDFInfo; installation is relatively simple once you have worked out where the plug-in files are supposed to go!

Download both PDFInfo.pm and pdfinfo.cf; place PDFInfo.pm in the SpamAssassin Plugin directory and pdfinfo.cf in the local SpamAssassin config directory. If you aren’t sure where your Plugin directory is, then try:

# find / -name SPF.pm

or

# find / -name Test.pm

I found my Plugin directory inside /usr/share/perl5/Mail/SpamAssassin/, which was incidentally also the local config directory where pdfinfo.cf should be placed.

Once that’s done, edit init.pre, adding the following line:

loadplugin Mail::SpamAssassin::Plugin::PDFInfo

My init.pre file was located in /etc/spamassassin. To check that the PDFInfo plug-in is loading correctly, run:

# spamassassin --lint -D

Within the output you should find:

debug: plugin: loading Mail::SpamAssassin::Plugin::PDFInfo from @INC

debug: plugin: registered Mail::SpamAssassin::Plugin::PDFInfo=HASH(0x8ff9ed0)

I had one problem when I first tried to install PDFInfo; in my debug output I had an error saying that it could not locate 'Logger.pm'. I searched the system and found one file called Logger.pm, but this was part of Razor2. After a lot of searching through forums and mailing list archives, I found the easiest way of resolving it was to upgrade SpamAssassin to the latest version. After that I didn’t have any problems.

Once PDFInfo is installed, it’s a good idea to restart SpamAssassin.
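If the default rules don’t catch enough, you can experiment with custom rules in pdfinfo.cf. As a rough sketch only (pdf_count is one of the eval functions used in the plug-in’s bundled rules, but the rule name and score below are my own invention, so check your copy of pdfinfo.cf before relying on them), a rule that adds a point to messages carrying several PDF attachments might look like this:

body     LOCAL_PDF_MULTI   eval:pdf_count(2,5)
describe LOCAL_PDF_MULTI   Message contains between 2 and 5 PDF attachments
score    LOCAL_PDF_MULTI   1.0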

Sane Security

Sane Security produces a set of ClamAV signature database files that help to filter out Scam and Phishing emails. Seeing as PDF spam quite obviously falls into one of those categories, these will help us to filter them out. Various scripts are available for download; these will retrieve and install the latest databases. I chose to go with Ralph Hildebrandt’s script (script 1b), which also downloads the third party MSRBL databases via Rsync.

Very little customisation of the script is required. Open up the script in a text editor and take a look at the following options:

SYSLOG_ON=1

PATH=/bin:/usr/bin:/usr/local/bin

CLAM_USER="clamav"

CLAM_GROUP="clamav"

Make sure that PATH includes the location of ClamAV’s binaries and that CLAM_USER and CLAM_GROUP are both set to the correct values for your system. To have the script log to syslog, keep SYSLOG_ON set to 1; otherwise, disable it by changing the value to 0.

Place the script somewhere sensible (I dropped it into /etc/clamav/) and run it for the first time:

# /etc/clamav/UpdateSaneSecurity.sh debug

Debug Mode is ON

Sleeping for 108 seconds ...

PHISH_SIGS : http://www.sanesecurity.co.uk/clamav/phishsigs/phish.ndb.gz

SCAM_SIGS : http://www.sanesecurity.co.uk/clamav/scamsigs/scam.ndb.gz

ClamScan : /usr/bin/clamscan

Curl : /usr/bin/curl

GunZip : /bin/gunzip

RSync : /usr/bin/rsync

Temp Dir is /var/tmp/clamdb

/var/tmp/clamdb does not exist and will be created

Scam Log File : /var/tmp/clamdb/SCAM-UpdateSession.log

Phish Log File : /var/tmp/clamdb/PHISH-UpdateSession.log

MSRBL-IMAGE Log File : /var/tmp/clamdb/MSRBL-IMAGES-UpdateSession.log

MSRBL-SPAM Log File : /var/tmp/clamdb/MSRBL-SPAM-UpdateSession.log

Checking for ClamAV database directory....Found /var/lib/clamav

/var/lib/clamav/scam.ndb.gz does not exist doing initial download

/var/lib/clamav/phish.ndb.gz does not exist doing initial download

/var/lib/clamav/MSRBL-SPAM.ndb does not exist doing initial download

/var/lib/clamav/MSRBL-Images.hdb does not exist doing initial download

As you can see, the script sleeps for a few seconds (the duration is random) to stop the servers from being hammered by all users at the top of each hour. After this, it checks for any updates and installs them as necessary; as this was the first time the script was run, you’ll notice it downloads and installs all four databases. The script will automatically detect the ClamAV database directory if it’s in a standard location; if not, edit the script file accordingly.

Once the script has run successfully, add a crontab entry to execute the script automatically (without debug). Sane Security asks people not to update more than once per hour in order to avoid putting its servers under unnecessary load.
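For example, a crontab entry like this one runs the update once an hour (I’m using the /etc/clamav/ location from earlier; pick any fixed minute, since the script’s own random sleep already staggers the requests):

42 * * * * /etc/clamav/UpdateSaneSecurity.sh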

Interestingly, a recent post on the Sane Security blog notes that Barracuda Networks appears to be using Sane’s signature databases in its Barracuda Spam Firewall.

Since installing these two add-ons, I’ve found they successfully block quite significant amounts of spam while adding very little overhead to the system. Grepping through my mail logs, I can see that the Sane Security databases are very successful.

How have you been dealing with the recent rise in spam levels and the various sneaky tactics being employed by spammers? Leave a comment and share your ideas on how to fight this growing problem.

10 tech certifications that actually mean something

There are hundreds of tech certification programs and exams out there, some sponsored by software vendors, some by vendor-neutral organizations, and some by educational institutions. A number of them are easy to obtain — as evidenced by the many IT pros who list a three-line string of acronyms after their names. You pay your money and you take a multiple-choice test; if you pass, you’re in.

Others are excruciatingly difficult: Cost is high; eligibility to even take the exam is dependent on having years of experience, formal education, and/or sponsorship from others who already hold the title; and the exams are grueling, multi-day affairs that require hands-on performance of relevant tasks. Most are somewhere in between.

But which certifications really provide a measure of your knowledge and skills in a particular area? And which will really help you get a job or promotion? Here’s a look at 10 of the technical certifications that actually mean something in today’s IT job market.

#1: MCSE

The Microsoft Certified Systems Engineer (MCSE) certification suffered a bad reputation several years back when numerous people were memorizing the answers to exam questions from “brain dumps” posted by test-takers on the Internet and obtaining the certification without any real understanding of the technology.

Microsoft responded by replacing the knowledge-based multiple-choice questions with a variety of performance-related scenario questions that make it much more difficult to cheat. The difficulty level of the questions was escalated, and the number of exams required to obtain the certification was increased to seven.

The MCSE has consequently regained respect in many corners of the IT community and is a useful certification for demonstrating your expertise in Microsoft server products.
#2: MCA

In addition to making the MCSE exams more difficult, Microsoft has created many new certifications. The Microsoft Certified Architect (MCA) is the premier Microsoft certification, designed to identify top experts in the industry. To obtain the MCA, you must have at least three years of advanced IT architecture experience, and you have to pass a rigorous review board conducted by a panel of experts.

There are a number of MCA programs. The infrastructure and solutions MCA certifications cover broad architecture skills, but there are also more technology-specific programs for messaging and database skills. There are currently fewer than 100 MCAs in the world, making this an elite certification.
#3: CCIE

The Cisco Certified Internetwork Expert (CCIE) is widely recognized as one of the most difficult to obtain (and expensive) IT certifications. Like the MCSE/MCA, it’s a vendor-sponsored certification, focusing on Cisco’s products.

The CCIE requires that you pass both a written exam and a hands-on lab. To sit for the written exam, you must pay $300 and choose from one of several tracks: Routing and Switching, Security, Storage Networking, Voice, and Service Provider.

You must pass the written exam before you’re eligible to take the lab exam. This is an eight-hour hands-on test of your ability to configure and troubleshoot Cisco networking equipment and software. The lab exams cost $1,250 each. This does not, of course, include travel expenses that may be necessary since the labs are conducted only in certain locations.

As if all that weren’t enough, you don’t get to rest on your laurels after obtaining the certification. CCIEs must recertify every two years or the certification is suspended.
#4: CCSP

Another Cisco exam that’s popular with employers in today’s security-conscious business world is the Cisco Certified Security Professional (CCSP). It focuses on skills related to securing networks that run Cisco routers and other equipment.

You’re required to pass five written exams and must recertify every three years by passing one current exam. Before you can take the CCSP exams, you must meet the prerequisites by obtaining one of Cisco’s lower-level certifications, either the Cisco Certified Network Associate (CCNA) or the Cisco Certified Internetwork Professional (CCIP).
#5: CISSP

Security certifications are tied to some of the highest-paying jobs in IT today, and one of the most well-respected vendor-neutral security certifications is the Certified Information Systems Security Professional (CISSP). The organization that grants the CISSP is (ISC)2, which was founded in 1989 and has issued certifications to more than 50,000 IT professionals.

Exam candidates must have at least four years of direct full-time work experience as a security professional. One year of experience can be waived if you have a four-year or graduate degree in information security from an approved institution. Another unique feature of the CISSP is that you must subscribe to the (ISC)2 code of ethics to take the exam.

Exam fees vary based on geographic region. In the United States, standard registration is $599 ($499 for early registration). You must recertify every three years by obtaining at least 120 hours of continuing professional education, and you must pay a yearly fee of $85 to maintain the certification. The exam is a six-hour test consisting of 250 multiple-choice questions.
#6: SSCP

For those who can’t meet the rigorous experience requirements to sit for the CISSP, the (ISC)2 also offers the Systems Security Certified Practitioner (SSCP) certification. SSCP candidates need have only one year of direct full-time security work experience. The exam consists of 125 multiple-choice questions, and you have three hours to complete it.

Those who pass the written exam must be endorsed either by someone who holds a current (ISC)2 certification and will attest to the candidate’s professional experience, or by an officer of the corporation or organization that employs the candidate (owner, CEO, managing partner, CIO, etc.). As with the CISSP, you must recertify every three years by submitting proof of continuing education credits and paying an annual maintenance fee.
#7: GSE

Another popular and well-regarded security certification is the GIAC Security Expert (GSE), offered by the SANS Software Security Institute. Before you can attempt the GSE, you must complete three lower-level certifications: GIAC Security Essentials Certification (GSEC), GIAC Certified Intrusion Analyst (GCIA), and GIAC Certified Incident Handler (GCIH).

The lower-level certifications require passing multiple-choice exams, and at least two of the three certifications must be at the “Gold” level, which requires that in addition to the written exam, you submit a technical report that’s approved to be published in the SANS Reading Room. A personal interview is also part of the GSE qualification process.

Pricing depends on whether you take the exam as part of SANS self-study or conference training programs or challenge the exam. Without the training, each lower-level exam costs $899.
#8: RHCE/RHCA

Many companies are looking to save money by switching to Linux-based servers, but they need personnel who are trained to design, deploy, and administer Linux networks. There are a number of Linux certifications out there, but the Red Hat Certified Engineer (RHCE) certification has been around since 1999 and is well respected in the industry.

The exam is performance-based. You’re required to perform actual network installation, configuration, troubleshooting, and administration tasks on a live system. You have a full day (9:00 a.m. to 5:00 p.m.) to complete it. The cost is $749.

The Red Hat Certified Architect (RHCA) is an advanced certification that requires completion of five endorsement exams, each of which costs $749 and ranges from two to eight hours. Like the RHCE exam, they are hands-on skills tests. You must have the RHCE certification to take the RHCA exams.
#9: ITIL

For those who aspire to management positions in IT services, the Information Technology Infrastructure Library (ITIL) certifications demonstrate knowledge and skills in that discipline. There are three certification levels: Foundation, Practitioner, and Manager.

The Manager level certification requires completion of a rigorous two-week training program, and you must have the Foundation certification and five years of IT management experience. Then, you must pass two three-hour exams consisting of essay questions.
#10: Certifications for special situations

Many specialist exams are available in IT subcategories that can be helpful to those who want to specialize in those areas. Some of these include:

* Health Insurance Portability and Accountability Act (HIPAA) compliance certification
* Sarbanes-Oxley (SOX) compliance certification
* Database administration certification
* Wireless networking certifications
* Voice over IP certifications

In addition, for those who have little or no experience in IT, entry-level certifications such as those offered by CompTIA may help you get a foot in the door as you start your IT career.

Seven traits of fearful managers

I recently came across a survey blurb that stated that a certain percentage of management feared being out of the office because they were afraid that a subordinate would outshine them in their absence. While I don’t remember the exact percentage of those respondents, I know it wasn’t a trivial number.

I was a bit shocked by the response because (perhaps naively) the thought never occurs to me when I’m out of the office. The fact that there are managers with this paranoid fear says several things to me about their management difficulties:

1. They are very insecure in their position.
2. They work in a very dog-eat-dog environment.
3. They are afraid of their own subordinates.
4. They probably take credit for everything.
5. They probably never share in the blame.
6. They have low self-esteem.
7. They don’t view their marketability as being very high, increasing their fear that they will lose their job.

Perhaps I have it all wrong, and in fact, they are all very well adjusted and particularly shrewd in the analysis of their current situation? Maybe a few, but I’m guessing the rest of the respondents have one or more of the problems described above — all of which are bad and need further elaboration.

First and foremost, no manager can be effective if they truly are afraid of being shown up by their subordinates when they are out of the office. These managers are not likely to mentor their subordinates, don’t give a flip about continuity/succession planning, are probably very risk averse, are very controlling and route every decision to themselves, and probably micromanage to an extreme — in other words — a real dream to work for, eh?

Addressing the first two points, one must wonder if the insecurity is rooted in actual behavior observed in the current environment (have they seen it done to others before in their workplace?); is this a manifestation of past experience or just plain paranoia? If this is a regular practice in your workplace and you don’t happen to be playing for an NFL team where you are fighting for roster spots all the time, then one might consider looking for a healthier environment. If the insecurity is based on other factors, some serious introspection is probably warranted.

Points three, four, and five above are mostly symptoms of their fear, manifested as poor management. Points six and seven are personal problems that need to be dealt with on a case-by-case basis and perhaps some counseling; however, all the traits in the list above can be dealt with proactively in some fashion by changing some workplace behavior.

If you’re afraid of being replaced by a subordinate, you can solidify your position in proactive and healthy ways. The first way is by building better relationships. As a manager, you have access to individuals in the organization that your subordinates do not. Use this access to build relationships with those above you and across from you on the org chart. It is not only good for business — you will come to understand the operations of your organization better — but it also buys you the good will of the people you interact with. Relationships are taken into consideration when hiring, firing, and promotion opportunities present themselves.

Being out of the office (assuming it is for business) also means that you again have opportunities for relationship building and for networking. Good work outside will filter back to your organization.

If you are concerned because you believe you have lost your edge and your subordinates are sharper than you…well…do the obvious. Work on sharpening your skills to stay competitive. Keep in mind that the skills you are sharpening are probably not the same as those of the people reporting to you, particularly as you move up the org chart. How well you program in C# probably doesn’t amount to a hill of beans to your boss if your job is not to program but to manage. Lifelong learning helps you to avoid skill gaps that can lead to insecurity and low self esteem.

Try to remember that as a manager you work THROUGH people; they are your assets and, hopefully, your allies. If your workplace resembles Mutiny on the Bounty, you had better take a hard look at how you are managing. Your subordinates should not hate you or be plotting against you. If they are, you’d better get to the root of the problem ASAP, and you should start by looking at yourself first.

Lastly, work hard, be proud of what you do, but always be ready to leave. The workplace, like the world, is a very unpredictable place and hardly ever fair. Never get so settled in a position that you become complacent. Always plan for your next move. Put away some money in an “emergency cache” that can fund six months of unemployment. It may take you a while to build it, but having it gives you the peace of mind that your world won’t completely fall apart should you find yourself out of work. That peace of mind also works to reduce anxiety about being let go.

In summary, a manager carrying around fears of their subordinates outshining them when they are absent is a problem that needs to be dealt with. Whether you need to leave an unhealthy environment or do some serious self-evaluation and behavior changing, you do not want to operate out of fear. It is unhealthy for the organization, the people you supervise, and ultimately you, the manager.

Have you ever been paranoid about scheming subordinates or been in an unhealthily competitive environment? Have you ever observed or tried to coach other managers with these traits?

20 things to consider when deciding on the structure of your IT organization

At some point in most organizations, the decision is made to centralize and/or standardize Information Technology Services. This need for centralization and standardization arises from the complexity that comes with increasing size and the difficulty of managing an environment that has multiple moving parts—many in different directions.

The desire to take control of an environment that is considered in “disarray” is a strong one, and in many cases, it’s not a bad idea. However, having been on both sides of this debate, I have discovered some truisms about changing your IT structure that you might want to ponder before making a final decision:

1. Totally centralized, totally decentralized, or a hybrid IT environment can all work—it just takes good top management, a robust set of plans and an IT framework to pull it off.
2. If you are going to insist on centralizing IT, you better be prepared to be flexible and provide superior customer service. One thing that decentralized environments tend to excel at is customer service—because they are closer to the customer and often are run by the customer. Therefore, if you are going to take IT functions away from the other departments, be prepared to deliver service like they did.
3. Standardization does not have to mean centralization. It means that all parties agree to abide by a set of standards.
4. Forcing standards down people’s throats is like taxation without representation. You are inviting people to rebel. Form a governance committee where users have a voice.
5. Standards are not always black and white and they need to be reviewed frequently.
6. Technology changes rapidly, and standards that don’t change with it will soon become hated mandates.
7. Piggybacking on the point above, try to build IT environments that are flexible and can accommodate new and changing technology.
8. Don’t use standards as a lame excuse for not being open to new ideas and innovation.
9. Setting a standard for a product such as a laptop and then giving users one configuration choice is not really a choice, nor is it customer friendly.
10. Listen to users’ needs and make sure your standardized choices can meet those needs; if not, your standards are worthless.
11. Just because a departmental IT operation is small does not mean it is insignificant. Often, they are working better and smarter than central IT and are providing better customer service.
12. Unless you are staffed for it and are extremely customer-focused, allowing users no control will lead to end user frustration.
13. IT support/helpdesk and the rest of your IT operation need to communicate often.
14. Communicate, communicate, communicate—about your plans, about your problems, about threats, current trends, etc. Don’t treat your end users like mushrooms; they will hate you for it and will not support you.
15. At budget time, you will hear nasty rumors floating around regarding your IT organization, whether they are true or not. Best to thwart those by abiding by the rule above.
16. Never forget that the IT organization is there to help the business work better, smarter, faster, cheaper…it is not enough just to keep the lights on.
17. An IT organization without standards can be a management nightmare and extremely wasteful. But an IT organization whose standards are too rigid tends to be out of touch.
18. Communication will aid any type of structure you choose—and the structure you use will help determine the kinds of communication you need to employ.
19. Great technologists do not necessarily make the best managers.
20. No organizational structure can completely make up for bad management.

Having said all of that, my experience has been happiest when running or being part of a hybrid environment. Some IT services are best managed centrally, while others are left decentralized, although I have seen the extreme of each work well or very poorly.

As long as your users are getting good service and have a voice in the operations, most don’t give a hoot how IT is structured. However, if you stop delivering good service, you will start to feel pressure to move in the opposite direction, as users clamor for change in order to get better service.

About to negotiate a raise? Read this first

Think you’re not getting paid enough? Before you throw yourself on your boss’s desk and declare “Baby needs new shoes!” you need to read the advice of Jim Camp, a negotiation coach and trainer and author of NO: The Only Negotiating Strategy You Need for Work and Home. CIO.com recently ran a piece in which Camp outlines the best strategy for negotiating a raise.
He calls his system the “No System” because “We have been taught that win-win is the best possible result, that we need to ‘get to yes’ so that all sides are happy. That’s the biggest mistake you can make in negotiations. No is the best word in a negotiation. If you invite your respected adversary (in this case, your boss) to say no right from the get-go, you will be amazed at how relaxed she becomes during the discussion.”

I thought his suggestions made an incredible amount of sense. The first suggestion was:

1. Don’t be emotional. According to Camp, neediness is the number-one deal-killer. “Not needing this raise or promotion gives you power.”

Basically, I think he’s saying that you should approach the situation as if you were acting as your own agent. (But don’t refer to yourself in the third person. Instead of more money in your hand, you might get a stapler upside your head.)

Don’t try to appeal to your boss’s emotions or sense of fairness. Appeal to his spreadsheet. If you can make a case for yourself by the number of hours you work or the money that you’ve made or saved the company, that’s the path you should take. Your boss will more than likely have to make the same case to his own boss, who will be even more emotionally removed.

Are angry women incompetent?

I mentioned in a previous blog that the annual meeting of the Academy of Management was being held last week in Philadelphia. The topic I mentioned in that blog was a study that seemed to indicate that the best way to get ahead in the workplace is to be a tyrant.

According to CNN, the controversial results of another survey were going to be released at that same conference. This study, conducted by Victoria Brescoll, a post-doctoral scholar at Yale University, shows that “a man who gets angry at work may well be admired for it but a woman who shows anger in the workplace is liable to be seen as ‘out of control’ and incompetent.”

(You have to wonder if the conference itself is not a big ole lab experiment. It’s like they’re releasing all of these survey results just to see how long it takes the men and women attendees to break out in fist fights.)

Conspiracy theory aside, here’s the basic breakdown of the experiment conducted on anger and gender:

Brescoll conducted three tests in which men and women watched videos of a job interview and were asked to rate the applicants’ status and assign them a salary. In the first instance, the scripts were the same except where the candidate described feeling either angry or sad about losing an account due to a colleague’s late arrival at a meeting. Here’s how the ratings broke down in order of status assigned, in descending order:

* Man who said he was angry
* Woman who said she was sad
* Man who said he was sad
* Woman who said she was mad (this was last by a large margin)

And, it gets worse. The average salary assigned to the angry man was almost $38,000 compared to about $23,500 for the angry woman and in the region of $30,000 for the other two candidates.

At the risk of coming across as incompetent, WHAT KIND OF CRAP IS THAT?!

In the CNN piece, Brescoll explains that the attitude is not conscious, that “People are hardly aware of it.” That makes me feel better…not at all.

BEA runs Java on bare (virtual) metal


BEA WebLogic Server Virtual Edition doesn’t run on Windows. It doesn’t run on Linux, or Mac OS X, or FreeBSD, AmigaDOS, CP/M, OS/2, or any other operating system you can think of.

It runs directly on an x86 hypervisor.

A hypervisor is a thin layer of low level code that sits just above the hardware and creates a virtualized version of that hardware. Usually several of them, so one server box can look like 2, or 10, or 100. Normally, an operating system like Linux or Windows or Solaris then runs on top of that layer, and your applications run on top of that. The technique has recently become popular because it lets administrators scale easily and unlock wasted potential in dedicated machines. With me so far?

Now, Java programs run on their own virtual machine, a pretend machine that runs “bytecode” instructions and has multiple threads and garbage collection. So now you have your code, on top of the Java VM, on top of the OS, on top of the hypervisor, on top of the hardware.


BEA said this is silly and eliminated one of the layers: the operating system. As long as all your programs are 100% Java, you don’t need it. This frees up resources for more important things, like your application. BEA estimates it can reduce resource consumption by “25-50%” compared with a traditional software stack, though this sounds like a wild guess to me.

Of course, the things that an operating system does, such as process scheduling and memory management, are still in there somewhere–they’re just subsumed into, and customized for, the Java virtual environment. BEA calls the result “LiquidVM”. Their white paper refers to “OS compression,” which basically means they put in just the stuff you need and none of the stuff you don’t.

The idea of a Java-only system isn’t new. Azul Systems has been doing this for years on their specialized multi-way boxes. Even virtual machines and hypervisors aren’t new; IBM pioneered the idea with VM/CMS on big-iron mainframes. But BEA is the first vendor to bring a compressed software stack and virtualization to cheap commodity Intel/AMD x86 hardware.

WebLogic Server Virtual Edition is now available for trial download from bea.com. Initially it runs only on the VMware ESX hypervisor, but BEA plans to add support for Xen later this year and Microsoft Viridian after that.

Sun lowers barriers to open-source Java

Sun Microsystems is making it easier for open-source programmers to ensure their Java versions meet the company's compatibility requirements, but the deal extends only to those involved in Sun's own open-source Java project.

Sun plans to announce on Thursday a program that grants access to its Java Technology Compatibility Kit to anyone with an open-source Java project that is based substantially on Sun's open-source Java software and governed by the General Public License (GPL). Programmers need access to the test kit to prove that a project is in compliance with the Java specification.

Projects that pass Sun's compatibility kit tests also can use the official Java logos for free, said Rich Sands, OpenJDK community marketing manager at Sun.

Previously, access to the kit was available only to Java licensees--typically larger companies such as IBM or Motorola--or to nonprofit groups that participated in Sun's scholarship program. But the scholarship program carried obligations that precluded shipping software under the GPL, Sands said.

"The compatibility kit license that's been out there in the scholarship program had a few terms in it that wouldn't work with GPL. We've changed (the) license in such a way that developers can fully meet all their obligations under GPL," Sands said.

The Java platform is a collection of software components that lets a program written in the Java language run on a variety of computers without having to be specifically translated for each one. A component called a Java virtual machine, assisted by libraries of prewritten code, translates programs so they run properly on a particular computer. For years, open-source advocates have called on Sun to make the core Java technology, called Java Standard Edition, an open-source project. But even after the company was persuaded to do so, it took years to accomplish.

The new move significantly broadens the horizons of open-source programmers who want to participate in Sun's open-source Java project, called OpenJDK and formally launched in May.

But the new program doesn't extend to Apache Harmony, a rival effort to build a version of Java Standard Edition. Geir Magnusson, a Harmony leader, had called on Sun in April to liberalize the compatibility kit terms.

Magnusson on Thursday said that Sun is placing limits on the Apache Software Foundation that it isn't placing on the OpenJDK community.

"Those limits restrict what an end user of Harmony could do with the software, and limits like that aren't compatible with an open-source license, which is why Apache can't accept that TCK (Technology Compatibility Kit) license," he said.

The issue has rankled the Apache Software Foundation enough that it voted in July against a major new Java specification, Java Enterprise Edition 6, whose technical details it supported. Sun "shouldn't be allowed to start another JSR (Java specification request) until the above matter is resolved," the foundation said in remarks about its protest vote.

Sun acknowledges that not everybody is happy with its Java work.

"We've known that we weren't going to be able to satisfy everyone in the open-source and free software worlds. There are incompatible licenses and philosophies and approaches," Sands said. "We're trying very hard to figure out some way to bridge this, but we've not been able to do that."

The Apache Software Foundation is free to use the compatibility test kit through the scholarship program and, indeed, Apache projects besides Harmony do so, Sands added.

Open-source programmers are free to "fork" software--that is, to create new variations that aren't necessarily compatible with the main or original versions. With its open-source plan, Sun expects programmers to diverge from the official, compatible, logo-emblazoned Java as they experiment with new ideas.

The compatibility kit itself, though, isn't an open-source project. "We wouldn't want people being creative about what compatibility means, because then you end up breaking compatibility," said Jean Elliott, senior director of Java software product marketing.

Sands called the open-source effort so far a success. Sun wasn't able to release all of Java as open-source software because it wasn't able to get permission for some software it licensed from third parties, Sands said, but programmers are seeing those encumbrances as coding challenges.

"The community has really rallied around getting a fully open-source implementation," Sands said.

How do I… Perform date/time arithmetic with Java’s Calendar class?

Java’s Calendar class offers a set of methods for converting and manipulating temporal information. In addition to retrieving the current date and time, the Calendar class also provides an API for date arithmetic. The API takes care of the numerous minor adjustments that have to be made when adding and subtracting intervals to date and time values.

Calendar’s built-in date/time arithmetic API is extremely useful. For example, consider the number of lines of code that go into calculating what the date will be five months from today. Try doing this yourself — you need to know the number of days in the current month and in the intervening months, as well as make end-of-year and leap year modifications to arrive at an accurate final result. These kinds of calculations are fairly complex and are quite easy to get wrong — especially if you’re a novice developer.
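For comparison, here is a minimal sketch of that five-months-from-today calculation done with the Calendar API; the class name and output format are my own, but the calls are the same ones covered in the rest of this tutorial:

import java.text.SimpleDateFormat;
import java.util.Calendar;

public class FiveMonthsFromToday
{
public static void main(String[] args)
{
// start from the current date and time
Calendar calendar = Calendar.getInstance();
// Calendar handles month lengths, year-end rollover, and leap years internally
calendar.add(Calendar.MONTH, 5);
// print the result in a human-readable form
SimpleDateFormat sdf = new SimpleDateFormat("d MMM yyyy");
System.out.println("Five months from today: " + sdf.format(calendar.getTime()));
}
}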

This tutorial examines the Calendar class API and presents examples of how you can use Calendar objects to add and subtract time spans to and from dates and times, as well as how to evaluate whether one date precedes or follows another.

Adding time spans

Let’s say you want to add a time span to a starting date and print the result. Consider the following example, which initializes a Calendar to 01 Jan 2007 and then adds two months and one day to it to obtain a new value:

package datetime;

import java.text.SimpleDateFormat;
import java.util.Calendar;
import java.util.GregorianCalendar;

public class MyClass
{
    public static void main(String[] args)
    {
        MyClass tdt = new MyClass();
        tdt.doMath();
    }

    /**
     * Method to create a calendar object, add 2m 1d, and print the result.
     */
    private void doMath()
    {
        // set calendar to 1 Jan 2007
        Calendar calendar = new GregorianCalendar(2007, Calendar.JANUARY, 1);
        System.out.println("Starting date is: ");
        printCalendar(calendar);

        // add 2m 1d
        System.out.println("Adding 2m 1d... ");
        calendar.add(Calendar.MONTH, 2);
        calendar.add(Calendar.DAY_OF_MONTH, 1);

        // print ending date value
        System.out.println("Ending date is: ");
        printCalendar(calendar);
    }

    /**
     * Utility method to print a Calendar object using SimpleDateFormat.
     * @param calendar calendar object to be printed.
     */
    private void printCalendar(Calendar calendar)
    {
        // define output format and print
        SimpleDateFormat sdf = new SimpleDateFormat("d MMM yyyy hh:mm aaa");
        String date = sdf.format(calendar.getTime());
        System.out.println(date);
    }
}

The main workhorse of this class is the doMath() method, which begins by initializing a new GregorianCalendar object to 1 Jan 2007. Next, the object’s add() method is invoked; this method accepts two arguments: the name of the field to add the value to and the amount of time to be added. In this example, the add() method is called twice — first to add two months to the starting date and then to add a further one day to the result. Once the addition is performed, the printCalendar() utility method is used to print the final result. Notice the use of the SimpleDateFormat object to turn the output of getTime() into a human-readable string.

When you run the class, this is the output you’ll see:

Starting date is:
1 Jan 2007 12:00 AM
Adding 2m 1d...
Ending date is:
2 Mar 2007 12:00 AM

This kind of addition also works with time values. To illustrate, consider the next example, which adds 14 hours and 55 minutes to a starting time value:

package datetime;

import java.text.SimpleDateFormat;
import java.util.Calendar;
import java.util.GregorianCalendar;

public class MyClass
{
    public static void main(String[] args)
    {
        MyClass tdt = new MyClass();
        tdt.doMath();
    }

    /**
     * Method to create a calendar object, add 14h 55min, and print the result.
     */
    private void doMath()
    {
        // set calendar to 1 Jan 2007, 1:00 AM
        Calendar calendar = new GregorianCalendar(2007, Calendar.JANUARY, 1, 1, 0);
        System.out.println("Starting date is: ");
        printCalendar(calendar);

        // add 14h 55min
        System.out.println("Adding 14h 55min... ");
        calendar.add(Calendar.HOUR, 14);
        calendar.add(Calendar.MINUTE, 55);

        // print final value
        System.out.println("Ending date is: ");
        printCalendar(calendar);
    }

    /**
     * Utility method to print a Calendar object using SimpleDateFormat.
     * @param calendar calendar object to be printed.
     */
    private void printCalendar(Calendar calendar)
    {
        // define output format and print
        SimpleDateFormat sdf = new SimpleDateFormat("d MMM yyyy hh:mm aaa");
        String date = sdf.format(calendar.getTime());
        System.out.println(date);
    }
}

This is almost identical to the previous class except that the calls to add() involve the calendar’s hour and minute fields. Here’s the output:

Starting date is:
1 Jan 2007 01:00 AM
Adding 14h 55min...
Ending date is:
1 Jan 2007 03:55 PM

Tip: You can obtain a complete list of the calendar constants that can be used with add() from the Calendar class’ documentation.
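To illustrate, here is a minimal sketch (the class name is ours) that applies the same add() method to two of the other field constants, WEEK_OF_YEAR and SECOND:

package datetime;

import java.text.SimpleDateFormat;
import java.util.Calendar;
import java.util.GregorianCalendar;

public class FieldConstantsDemo
{
    public static void main(String[] args)
    {
        Calendar calendar = new GregorianCalendar(2007, Calendar.JANUARY, 1);

        // add two weeks, then 45 seconds, via other Calendar field constants
        calendar.add(Calendar.WEEK_OF_YEAR, 2);
        calendar.add(Calendar.SECOND, 45);

        SimpleDateFormat sdf = new SimpleDateFormat("d MMM yyyy hh:mm:ss aaa");
        System.out.println(sdf.format(calendar.getTime()));  // 15 Jan 2007 12:00:45 AM
    }
}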
Subtracting time spans

Subtraction is fairly easy as well — you simply use negative values as the second argument to add(). Here’s an example:

package datetime;

import java.text.SimpleDateFormat;
import java.util.Calendar;
import java.util.GregorianCalendar;

public class MyClass
{
    public static void main(String[] args)
    {
        MyClass tdt = new MyClass();
        tdt.doMath();
    }

    /**
     * Method to create a calendar object, subtract time, and print the result.
     */
    private void doMath()
    {
        // initialize calendar to 2 Jan 2007, 3:30 AM
        Calendar calendar = new GregorianCalendar(2007, Calendar.JANUARY, 2, 3, 30);
        System.out.println("Starting date is: ");
        printCalendar(calendar);

        // subtract 1y 1d 4h 5min
        System.out.println("Subtracting 1y 1d 4h 5min... ");
        calendar.add(Calendar.YEAR, -1);
        calendar.add(Calendar.DAY_OF_MONTH, -1);
        calendar.add(Calendar.HOUR, -4);
        calendar.add(Calendar.MINUTE, -5);

        // print result
        System.out.println("Ending date is: ");
        printCalendar(calendar);
    }

    /**
     * Utility method to print a Calendar object using SimpleDateFormat.
     * @param calendar calendar object to be printed.
     */
    private void printCalendar(Calendar calendar)
    {
        // define output format and print
        SimpleDateFormat sdf = new SimpleDateFormat("d MMM yyyy hh:mm aaa");
        String date = sdf.format(calendar.getTime());
        System.out.println(date);
    }
}

Here’s the output:

Starting date is:
2 Jan 2007 03:30 AM
Subtracting 1y 1d 4h 5min...
Ending date is:
31 Dec 2005 11:25 PM

In this example, the Calendar object automatically takes care of adjusting the day and year fields when the subtraction crosses those boundaries, carrying the date back from 1 Jan 2006 to 31 Dec 2005.
Adding vs. rolling

As the previous example illustrates, the add() method automatically rolls over days, months, and years when a particular calendar field "overflows" as a result of addition or subtraction. Sometimes, however, this is not the behavior you want. For those situations, the Calendar object also has a roll() method, which leaves the larger calendar fields unchanged when such overflow occurs. To see how this works, look at the following example:

package datetime;

import java.text.SimpleDateFormat;
import java.util.Calendar;
import java.util.GregorianCalendar;

public class MyClass
{
    public static void main(String[] args)
    {
        MyClass tdt = new MyClass();
        tdt.doAdd();
        tdt.doRoll();
    }

    /**
     * Method to create a calendar object, add() 1 month, and print the result.
     */
    private void doAdd()
    {
        // initialize calendar
        Calendar calendar = new GregorianCalendar(2006, Calendar.DECEMBER, 1);
        System.out.println("Starting date is: ");
        printCalendar(calendar);

        System.out.println("After add()ing 1 month, ending date is: ");
        calendar.add(Calendar.MONTH, 1);
        printCalendar(calendar);
    }

    /**
     * Method to create a calendar object, roll() 1 month, and print the result.
     */
    private void doRoll()
    {
        // initialize calendar
        Calendar calendar = new GregorianCalendar(2006, Calendar.DECEMBER, 1);
        System.out.println("Starting date is: ");
        printCalendar(calendar);

        System.out.println("After roll()ing 1 month, ending date is: ");
        calendar.roll(Calendar.MONTH, 1);
        printCalendar(calendar);
    }

    /**
     * Utility method to print a Calendar object using SimpleDateFormat.
     * @param calendar calendar object to be printed.
     */
    private void printCalendar(Calendar calendar)
    {
        // define output format and print
        SimpleDateFormat sdf = new SimpleDateFormat("d MMM yyyy hh:mm aaa");
        String date = sdf.format(calendar.getTime());
        System.out.println(date);
    }
}

Here’s the output:

Starting date is:
1 Dec 2006 12:00 AM
After add()ing 1 month, ending date is:
1 Jan 2007 12:00 AM

Starting date is:
1 Dec 2006 12:00 AM
After roll()ing 1 month, ending date is:
1 Jan 2006 12:00 AM

In the first case, when one month is added to the starting date of 1 Dec 2006, the add() method detects that the addition crosses a year boundary and increments the year to 2007. With roll(), only the month field is incremented by 1 and the year is left untouched. Depending on the requirements of your application, you may find such restricted changes useful in certain situations.
Checking date precedence

The Calendar object also includes the compareTo() method, which lets you compare two dates to find out which one comes earlier. The compareTo() method accepts another Calendar object as an input argument and returns:

* A value less than zero if the calling Calendar object represents an earlier date and time than the input Calendar object.
* A value greater than zero if the calling Calendar object represents a later date and time.
* A value of 0 if the two Calendar objects represent the same date and time.

Here’s an example that compares 1 Jan 2007 12:00 AM and 1 Jan 2007 12:01 AM with compareTo():

package datetime;

import java.text.SimpleDateFormat;
import java.util.Calendar;
import java.util.GregorianCalendar;

public class MyClass
{
    public static void main(String[] args)
    {
        // initialize two calendars, one minute apart
        Calendar calendar1 = new GregorianCalendar(2007, Calendar.JANUARY, 1, 0, 0, 0);
        Calendar calendar2 = new GregorianCalendar(2007, Calendar.JANUARY, 1, 0, 1, 0);

        // define date format
        String date1 = null;
        String date2 = null;
        SimpleDateFormat sdf = new SimpleDateFormat("d MMM yyyy hh:mm aaa");

        // compare dates
        if (calendar1.compareTo(calendar2) < 0)
        {
            date1 = sdf.format(calendar1.getTime());
            date2 = sdf.format(calendar2.getTime());
        }
        else
        {
            date1 = sdf.format(calendar2.getTime());
            date2 = sdf.format(calendar1.getTime());
        }
        System.out.println(date1 + " occurs before " + date2);
        System.out.println(date2 + " occurs after " + date1);
    }
}

Here’s the output:

1 Jan 2007 12:00 AM occurs before 1 Jan 2007 12:01 AM
1 Jan 2007 12:01 AM occurs after 1 Jan 2007 12:00 AM

The next example uses two Calendar objects and a little date arithmetic to show what happens when two dates match exactly:

package datetime;

import java.text.SimpleDateFormat;
import java.util.Calendar;
import java.util.GregorianCalendar;

public class MyClass
{
    public static void main(String[] args)
    {
        // initialize two calendars
        Calendar calendar1 = new GregorianCalendar(2007, Calendar.FEBRUARY, 16, 0, 0, 0);
        Calendar calendar2 = new GregorianCalendar(2007, Calendar.FEBRUARY, 18, 0, 1, 0);

        // define date format
        SimpleDateFormat sdf = new SimpleDateFormat("d MMM yyyy hh:mm aaa");

        // add 2d to calendar #1
        calendar1.add(Calendar.DAY_OF_MONTH, 2);

        // subtract 1min from calendar #2
        calendar2.add(Calendar.MINUTE, -1);

        // compare dates
        String date1 = sdf.format(calendar1.getTime());
        String date2 = sdf.format(calendar2.getTime());
        if (calendar1.compareTo(calendar2) < 0)
        {
            System.out.println(date1 + " occurs before " + date2);
        }
        else if (calendar1.compareTo(calendar2) > 0)
        {
            System.out.println(date1 + " occurs after " + date2);
        }
        else
        {
            System.out.println("The two dates are identical: " + date1);
        }
    }
}

Although both calendars start out differently, they’re converted to the same time stamp through a bit of date arithmetic. This is verified via the compareTo() method, which returns 0 when asked to compare them, indicating that they represent the same instant in time:

The two dates are identical: 18 Feb 2007 12:00 AM
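Incidentally, if all you need is a yes/no answer rather than a signed integer, Calendar also offers the before() and after() convenience methods. Here is a minimal sketch (class and variable names are illustrative):

package datetime;

import java.util.Calendar;
import java.util.GregorianCalendar;

public class BeforeAfterDemo
{
    public static void main(String[] args)
    {
        Calendar newYear = new GregorianCalendar(2007, Calendar.JANUARY, 1);
        Calendar midYear = new GregorianCalendar(2007, Calendar.JULY, 1);

        // before() and after() return booleans instead of signed integers
        System.out.println(newYear.before(midYear));   // prints: true
        System.out.println(midYear.after(newYear));    // prints: true
    }
}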

Imagine the possibilities

The Calendar class’ date/time API is a nifty tool for accurately manipulating date and time values without too much stress or custom coding. The examples in this article are just some of many possibilities this API opens up — the rest is up to your imagination.

Weigh the pros and cons before upgrading to Windows Vista for gaming

Takeaway: Microsoft Windows Vista introduces a new graphics API to the mix: DirectX 10. The advances encompassed in DirectX 10 may entice you to upgrade to a new PC or video card, but you should also consider the challenges upgrading will present. Josh Hoskins reveals some of the upgrade drawbacks and weighs them against the visual benefits.

With the release of Windows Vista and DirectX 10, a new epoch in PC gaming is upon us. The images and videos released of DirectX 10 games show a level of detail and realism that has impressed everyone. Unfortunately, it's not all wonderful news. Several issues must be considered before you upgrade your gaming rig to Vista.
Upgrade challenges

One of the first things you'll notice about gaming on Vista is that DirectSound is no longer part of DirectX. Many games used DirectSound to provide positional audio. With DirectSound gone, the sound in those games will downgrade to plain stereo. While this doesn't affect game performance much, it is worth considering.

Another thing to consider is that many older games are not compatible with Vista out of the box. A large part of this is due to the security restructuring in Vista, which no longer allows games to write to the Program Files directory after installation. Although many game companies are working on or have already released patches for their games, many older games are no longer supported by their manufacturers. You can bypass this issue by setting the games to run in Windows XP compatibility mode, but doing so sidesteps Vista's improved security system, which is one of its major selling points.

While DirectX 10 is backward compatible with previous versions of DirectX, there are many reports that DirectX 9 and older games run slower on Windows Vista than on Windows XP. Testing has borne this out in several games, though the exact cause remains undiagnosed. Many believe it is due to the new display architecture and Aero Glass.

In previous versions of Windows, the GPU was not a shared resource; it was totally controlled by one process at a time. In Windows Vista, GPU cycles can be split among multiple processes (much like CPU cycles). While this enables many of the advanced features in the Windows Vista GUI, it also limits the cycles your game can command from your GPU. The Aero Glass interface itself requires a DirectX 9-compatible video card to render the new desktop effects. There has not been much concrete testing, but many speculate that disabling the Aero Glass interface will improve gaming performance.

At this point, DirectX 10-compatible graphics cards are still hard to find and very expensive. Currently, only Nvidia has DirectX 10 cards on the market. These cards come in a couple of versions, but even the lowest-end cards are priced at over $400. This dearth of competition is truly limiting. Although ATI is working on its DirectX 10 cards, none is currently available.

Hardware vendors have been developing their drivers for Windows XP for several years now, and most are fairly stable. Unfortunately, the drivers currently available for Windows Vista are still immature, and many are unstable. Nvidia is currently facing a potential class action lawsuit due to the unstable nature of its video drivers under Vista. As most gamers know, unstable or even poorly written drivers can have a huge effect not only on gaming performance, but also on whether a game can even run.
Upgrade benefits

While all these issues seem to stack heavily against Windows Vista, there is one important fact to remember: DirectX 10 is available only on Windows Vista. This may not sound huge, but the potential of DirectX 10 games is remarkable. The realism of even the first generation of games is at unprecedented levels, and the effect of DirectX 10 is far better conveyed by screenshots and videos than by words. Most upcoming games will support both DirectX 10 and DirectX 9, but the better visuals are available only under DirectX 10. Some other upcoming games, such as Alan Wake, will be DirectX 10 exclusives.

Figure A
An image from the yet-to-be-released game Crysis (courtesy of GameSpot)

One performance issue to consider is that DirectX 10 games that also support DirectX 9 will perform significantly faster under DirectX 10 than under DirectX 9. These games (none of which has been released yet) will be among the main selling points of Vista. If you have a DirectX 10 video card, you will literally be crippling your performance by not upgrading to Vista.

The PC gamer is truly standing at a crossroads. The future clearly leads to Windows Vista, but is it too soon to go down that path? Do the many pitfalls deter you from upgrading, or do you go all out, buy the hardware you need to run DirectX 10, and bask in the amazing graphics presented to you (in a few months)? Both arguments have their merit, but the choice is up to you (and your pocketbook).

Patches not a solution for Vista design problem

Patch Tuesday is just around the corner, that monthly ritual when Microsoft absorbs a not-inconsiderable chunk of Web bandwidth to fix the design problems it’s been sitting on for one to thirty days. One of those problems just might be the security breach created by ATI video card drivers.

But one design flaw is not likely to be fixed this month or any month. Vista, you see, makes HD movies and TV look worse by design. “Microsoft acknowledged that quality of premium content would be lowered if requested by copyright holders”, and some consumer-generated content is getting caught up in that feature. It seems as though the Hollywood studios may join the unholy trinity which has fueled cycle after cycle of upgrades.

The original cyclical marketing system required the new OS to be so slow that you had to buy new hardware that was fast enough to run it. You had to have the new OS because it was required by the latest version of applications. You had to have the latest version of applications because they came with new file formats which were incompatible with the old, yet perfectly functional, software. In order to effectively do business and exchange information, everyone had to jump on the merry-go-round and hang on for dear life.

However, software publishers' influence is waning fast; their budgets pale in comparison to Hollywood's pots of gold. The studios are willing to do just about anything to lock up our computers so we can't buy and sell movies the way we've bought and sold books and recordings for a century under the First Sale Doctrine. Early recordings (such as Edison cylinders) had license restrictions, but the trust-busting spirit of a century ago just might have had something to do with our current rights to buy, not rent, books and music.

Common Resume Blunders

Make sure your resume is in top-notch shape by avoiding the top 10 resume blunders:

1. Too Focused on Job Duties

Your resume should not be a boring list of job duties and responsibilities. Go beyond showing what was required and demonstrate how you made a difference at each company, providing specific examples. When developing your achievements, ask yourself:

* How did you perform the job better than others?
* What were the problems or challenges faced? How did you overcome them? What were the results? How did the company benefit from your performance?
* Did you receive any awards, special recognitions or promotions as a result?

2. Flowery or General Objective Statement

Many candidates lose their readers in the beginning. Statements such as "a challenging position enabling me to contribute to organizational goals while offering an opportunity for growth and advancement" are overused, too general and waste valuable space. If you're on a career track, replace the objective with a tagline stating what you do or your expertise.

3. Too Short or Too Long

Many people try to squeeze their experiences onto one page because they've heard resumes shouldn't be longer than that. By doing this, job seekers may delete impressive achievements. Other candidates ramble on about irrelevant or redundant experiences. There is no rule about appropriate resume length. When writing your resume, ask yourself, "Will this statement help me land an interview?" Every word should sell you, so include only the information that elicits a "yes."

4. Using Personal Pronouns and Articles

A resume is a form of business communication, so it should be concise and written in a telegraphic style. There should be no mentions of "I" or "me," and only minimal use of articles. For example:

I developed a new product that added $2 million in sales and increased the market segment's gross margin by 12%.

Should be changed to:

Developed new product that added $2 million in sales and increased market segment's gross margin by 12%.

5. Listing Irrelevant Information

Many people include their interests, but they should include only those relating to the job. For example, if a candidate is applying for a position as a ski instructor, he should list cross-country skiing as a hobby.

Personal information, such as date of birth, marital status, height and weight, normally should not be on the resume unless you're an entertainment professional or job seeker outside the US.

6. Using a Functional Resume When You Have a Good Career History

It irks hiring managers not to see the career progression and impact you made at each position. Unless you have an emergency situation, such as virtually no work history or excessive job-hopping, avoid the functional format.

The modified chronological format, or combination resume, is often the most effective. Here's the basic layout:

* Header (name, address, email address, phone number).
* Lead with a strong profile section detailing the scope of your experience and areas of proficiency.
* Reverse chronological employment history emphasizing achievements over the past 10 to 15 years.
* Education (new grads may put this at the top).

7. Not Including a Summary Section That Makes an Initial Hard Sell

This is one of the job seeker's greatest tools. Candidates who have done their homework will know the skills and competencies important to the position. The summary should demonstrate the skill level and experiences directly related to the position being sought.

To create a high-impact summary statement, peruse job openings to determine what's important to employers. Next, write a list of your matching skills, experience and education. Incorporate these points into your summary.

8. Not Including Keywords

With so many companies using technology to store resumes, the only hope a job seeker has of being found is to sprinkle relevant keywords throughout the resume. Determine keywords by reading job descriptions that interest you, and include the words you see repeatedly in your resume.

9. Referring to Your References

Employers know you have professional references; a line such as "References available upon request" adds nothing. Use it only to signal the end of a long resume or to round out the design.

10. Typos

One typo can land your resume in the garbage. Proofread and show your resume to several friends to have them proofread it as well. This document is a reflection of you and should be perfect.

Race, Sex and Religion on Your Resume

You're probably aware that hiring managers cannot ask discriminatory questions during interviews. But this legal protection isn't too useful in preventing discrimination before the interview. If your resume contains personal information unrelated to your job target -- your race, nationality, ethnicity, religion, sexual orientation, etc. -- you might fall victim to discrimination, even if you're qualified for the position.

Your resume is a marketing tool designed to get your foot in the door, so every bit of information on it should be selling your value to potential employers. Follow these guidelines to ensure your resume only contains personal information relevant to your job target.

Personal Info That May Be Omitted

* Affiliations, Volunteer Work, Extracurricular Activities and Hobbies: You may leave out organization names that disclose your cultural background, religious affiliation, sexual orientation and other possible targets of discrimination. List only experiences that help sell you as a candidate for your targeted job.

* Languages: Listing your native language may reveal your nationality. Include only languages that add to your qualifications for the job. In certain cases, knowing a second language is a plus and should be included on your resume.

* Personal Information: With the exception of federal or state jobs, which may require this information, and entertainment jobs, for which personal attributes would be considered bona fide qualifications, your date of birth, marital status, nationality, etc., should be omitted.

Personal Information That Should Be on Your Resume

* Your Name: You can't pick a new name in hopes of getting more interviews unless you have legally changed it.

* Your Employers: If you worked for the Gay & Lesbian Alliance Against Defamation, for example, you shouldn't hide your employer's name and misrepresent your work history.

* Schools Attended: Even if your postsecondary school has a religious affiliation, you need to include the school name in your Education section.

* Work Experience or Training in Foreign Countries: You should include all work and educational experiences, as long as they are relatively recent.

Deciding What to Include

* Think About It: Will revealing the information in question highlight skills that would qualify you for the position? For example, if you're pursuing a management position and held leadership roles with religious organizations, consider including these experiences.

* Target Your Audience: If you're applying for a position with the American Civil Liberties Union, for instance, your resume may highlight your cultural background, involvement in related organizations and diversity-related accomplishments. If you don't know the organization's culture or the hiring manager's possible biases, omit personal information that will not add to your qualifications.

* Bear in Mind the Prospective Employer's Geographic Location: In some communities, involvement in civic or religious groups is highly desirable and including your related experience on your resume would enhance your credentials.

* Evaluate Your Personal Preferences: The this-is-me-take-it-or-leave-it attitude may leave you hungry when looking for a job in a world where discrimination still exists. You don't want to lose a chance at your dream job because of a hiring manager's possible biases. You may or may not report to the person once hired, anyway.

Declutter Your Resume in 5 Steps

In preparation for a job search, you dust off your old resume and tack on your most recent job, new skills and training. But without editing or deleting old information, your resume becomes a hodgepodge of outdated accomplishments, awards and skills.

It's time to declutter your resume. Clean up your act in these five steps:

Step 1: Narrow Your Career Goal

Tom Kelly, president of Executive Recruiting Solutions, says many job seekers' biggest problem is not being sure of what they want to do, adding that it's particularly an issue for those branching out into new careers or industries. "The resume starts to lose focus," he says. "A whole bunch of extra stuff ends up in it in order to try to appeal to a wider range of employers or industries."

Kelly recommends limiting your resume's focus, or creating more than one version if you have multiple target jobs. "It's best to declutter the resume by targeting one to three industries, max," Kelly says. This makes it easier to pare the resume down to relevant content.

Step 2: Condense Your Opening Summary

Les Gore, managing partner of Executive Search International, recommends including a qualifications summary near the top of your resume. "Tell me a little about your background," he says. "Don't go overboard, and don't overdo the selling. Be succinct and descriptive in terms of your experience and collective knowledge."

And forget about crafting lofty mission statements or "me-focused" objectives that talk about wanting a fulfilling career with opportunity for growth, advises Harvey Band, managing partner of recruiting firm Band & Gainey Associates. "You're wasting page space with that, and you're wasting your time and mine," he says. "Use the top third of the page to communicate your most recent experience and your most impressive accomplishments. Get my attention. Then I'll keep reading."

Step 3: Edit Work Experience

Your resume's experience section should provide an overview of your career chronology and a few highlights of key accomplishments for your most recent work experience. For professionals on an established career track, this may mean summarizing experience more than 10 to 15 years old into an "early career" section.

"I like to see summaries of earlier careers versus long, detailed explanations," says Kelly, who recommends job seekers provide brief, one-line descriptions of earlier positions. "You don't have to list every job that you've had out of college on your resume."

Gore agrees. "Often, I see way too much information on responsibilities and not enough on the accomplishments," says Gore, who reviews hundreds of resumes each month. Although he finds it helpful for candidates to provide a brief overview of the range of their responsibilities, Gore recommends these details be summarized in just a few sentences.

When trying to weed accomplishments for space reasons, think numbers. "Take a hard look at what you're saying," Band says. "If you can't back it up with numbers, percentages or quantify it in some other way, then cut it."

Gore also likes the quantitative approach, as does Kelly, who suggests quantified statements have more value to an employer than more general, nonquantified accomplishments.

Step 4: Consolidate Education

The education section is another area where you can gain space when updating your resume. Although detailed information about internships, courses, academic honors and extracurricular activities can be important for new or recent graduates, professionals with four or more years of experience can omit or greatly condense this information, says Kelly.

Step 5: Select Your Skills

Many job seekers know the importance of resume keywords, but be careful not to go overboard.

Band says if your skills section resembles a laundry list of random terms, you need to do some serious editing. "The best resumes are custom-created for a specific opportunity," he says. "If you're targeting your resume, then you don't need to try to throw in every single skill set that you think might be important."

And now's a good time to dump outdated technology, too. "Fortran, Cobol and other outdated computer programs need to go," says Kelly. Not only can you gain some valuable space, but you'll avoid coming across as a dinosaur.

Think Like an Employer

Throughout each step of the resume-decluttering process, Band advises candidates to address the three key questions employers want your resume to answer: What can you do for me? What have you done before? Can you do it for me again?

Tuesday, August 14, 2007

Linux pre-installation option now available for Dell’s hardware

It appears that you can now order a Dell Inspiron 6400 notebook or Inspiron 530N desktop and have Ubuntu 7.04 pre-installed as an option.

Well, at least in the United Kingdom, France, and Germany. The United States has had the option since May of this year.

According to the Direct2Dell blog:

Similar to what we’ve done in the United States, we will configure and install open source drivers for hardware, when possible for these new products.

Plans to offer SuSE Linux Enterprise Desktop 10 factory-installed in China have also been announced by Kevin Kettler in his LinuxWorld keynote.

No prices have been announced yet, but general reports indicate that the Ubuntu laptops normally work out to about $50 cheaper than Dell's comparable "Home" versions.

SolutionBase: Taking SquirrelMail to new levels

SquirrelMail is a great option if you want to add Web-based e-mail to your company's mail server. But, like many good Linux applications, SquirrelMail is packed with configuration options that allow it to go beyond being just a Web-based mail solution, which is exactly what most IT departments need. In this article, I'll introduce you to some of the SquirrelMail plug-ins that will help you take SquirrelMail to new levels.


Cheers, Aurobindo

Manipulate text with sed

Sed is a very handy and very powerful little text manipulator. Sed is short for "stream editor," and what it does is manipulate and filter text. Typically, sed is used in transit, meaning that you pipe the output of one command into sed to have it modify and reformat that output, rendering new output. You can also run sed on a text file; it will send the transformed text to standard output, which can then be redirected into another file.

The best way to illustrate the power of sed is to provide a few examples:

$ printf "line onenline twon" | sed -e 's/.*/( & )/'

( line one )

( line two )

This example outputs two lines, and sed transforms them into two lines wrapped in parentheses. It does this by matching a pattern and transforming it. The expression is s/[pattern]/[replacement]/. You can use other characters as delimiters; in this case I used the forward slash (/), but you can also use a comma (,) or a pipe (|).

In the above expression, the pattern to match is “.*” (everything); the replacement expression uses the ampersand (&) as a placeholder to indicate all matched text. In this case, it’s the entire line, so the replacement text is ( [text] ).

You can also use sed to transpose text. Assume you had a file with two words per line, but you wished to have the second word displayed first, then the first, separated by a comma:

$ printf "line onenline twon" | sed -e 's/(.*) (.*)/2,1/'

one,line

two,line

Here the line "line one" is transformed into "one,line". The pattern uses escaped parentheses to create matching groups. In other words, the expression \(.*\) \(.*\) matches one string, a space, then another string. Both of these strings are placed into hold buffers, which are represented by \1 for the first and \2 for the second. The replacement expression then uses these hold buffers to place the text in the format we want: second string, comma, first string.

You can use sed to do some very interesting things, such as create a command to rename certain files:

$ ls -1 *.txt | sed -e 's/.*/mv & &.old/' >execute; sh execute && rm -f execute

This chain of commands takes the output of ls -1 *.txt, which sed modifies, turning each file name such as list.txt into the command mv list.txt list.txt.old; the result is redirected into a file called execute. Once this is complete, execute is run by the sh shell, which performs the mv command on each listed file; when the script completes successfully, the execute file is removed.

This has just scratched the surface of using sed. It is extremely powerful and has many interesting uses, and is definitely worth a closer look.

Monitor system information with SQL Server 2005’s default trace

Sometimes it is difficult to diagnose problems on your SQL Server after they have occurred. Find out how SQL Server 2005's default trace feature can help you monitor certain events and look back at what happened.

Introducing default trace

A trace is an activity that is run in the background on a SQL Server machine that captures specific events and data related to those events. This information is great for diagnosing performance problems, finding deadlocks, and auditing security information — just to name a few of its benefits.

Trace files are created and maintained in SQL Server through the T-SQL language. You may be familiar with using SQL Server Profiler to diagnose performance issues; SQL Server Profiler is a front-end application that allows you to set up and monitor one or more traces through a graphical UI.

In SQL Server 2005, a default trace is always running in the background to monitor certain events. There is almost no overhead involved in maintaining this trace, and it can save you hours trying to figure out what is happening on your server. In fact, if you’re just now learning about default trace, you can still study your trace log files to diagnose recent problems with your SQL Server.

The trace log files roll over, allowing you to view historical trace data. The trace is fairly lightweight, which means two things: it doesn't use many resources on your SQL Server, and it doesn't capture every event that happens on the server. The default trace captures information such as when the server starts and stops, failed login attempts, when objects are created and deleted, and when the log files grow and shrink. If you need to capture more information than the default trace gathers for you, you may want to set up a separate trace to collect the data.
How to find default trace

You can set SQL Server trace files to be stored in a database table, as an XML file, or as a text file on the server. The default trace saves its event data to the LOG folder under the SQL Server installation directory. If you don't know where that is, a few system functions can help you figure it out.

The script below calls a system table-valued function that returns data either for a specific trace or for all traces running on the server. The call I am making returns all of the traces on the system.

SELECT *
FROM sys.fn_trace_getinfo(default)

If your system is currently running only the default trace, there is a good chance that the resultset returned from the above function call will look much like the one on my machine.

The above function tells me the name and location of my default trace file on the database server. It also tells me that I am currently on my eighth trace log file, which means there will likely be at least a few older trace log files in that folder that I can query for problems later if necessary.
Looking at the log data

You have two options for viewing the data from the trace log file. You can navigate to that location on the database server and double-click the file; this will open the trace file in SQL Server Profiler so you can view the information. From there, you can save the results to a table or an XML document.

The second option is to copy the file path directly from the resultset and pass it as a parameter to another system table-valued function, which lets you query the data directly. I prefer this option because it skips intermediate steps, such as storing the data in the database before querying it.

SELECT *
FROM sys.fn_trace_gettable('C:\Program Files\Microsoft SQL Server\MSSQL.1\MSSQL\LOG\log_8.trc', default)
ORDER BY starttime

The above function call returns all of the data from the trace file and sorts the data by the time the event occurred. With this ability, I can quickly look at the events that have occurred recently on my server to determine what is causing the problems.