Sunday, 9 March 2014

The Programmer Productivity Paradox

Programmers seem to be fairly productive people.

You always see them typing at their desks; they chafe for meetings to finish so that they can go back to their desks and code. When asked, they will say that there is not enough time to produce the code, and the sooner they can start coding, the sooner they will be done.

So writing code must be the most important thing, correct?

If the average programmer writes about 50 lines of production code a day, then a 50,000-line program takes roughly 1,000 man-days to produce. Yet the same 50,000-line listing can be typed in by a programmer at about 1,000 lines a day, or roughly 50 man-days.
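A quick sanity check of that arithmetic, using the 50 LOC/day and 1,000 LOC/day figures from the paragraph above:

    # Back-of-the-envelope check of the productivity gap described above.
    PRODUCTION_LOC_PER_DAY = 50    # average production code written per day
    TYPING_LOC_PER_DAY = 1_000     # lines a programmer can simply type per day
    PROGRAM_SIZE_LOC = 50_000

    production_days = PROGRAM_SIZE_LOC / PRODUCTION_LOC_PER_DAY  # 1,000 man-days
    typing_days = PROGRAM_SIZE_LOC / TYPING_LOC_PER_DAY          # 50 man-days

    print(f"Producing the code: {production_days:.0f} man-days")
    print(f"Typing the listing: {typing_days:.0f} man-days")
    print(f"Unaccounted for:    {production_days - typing_days:.0f} man-days")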

So what the heck are the developers doing for the other 950 days?

Before addressing that issue, let's make a simple observation. Capers Jones has compared many methodologies (RUP, XP, Agile, Waterfall, etc.) and programming languages over thousands of projects and determined that programmers write between 325 and 750 lines of code (LOC) per month, which is less than the roughly 1,000 LOC per month implied above [1]. Even if programmers do not average 50 lines of code per day, the following is clear [2]:
  • Methodology does not explain the apparent productivity gap
  • No language accounts for the apparent productivity gap
The reality is that only a fraction of a developer's time is actually spent writing production code. A developer who is typing code all the time is really just trying different combinations of code until finally finding the combination that works.

Or, more correctly, the combination that seems to match the requirements, until either QA or the business analyst comes back and reports a problem.

That is why developers who plan their code before touching the keyboard tend to outperform other developers. Yet few developers really plan out their code before coding, and years of experience do not teach developers to plan. In fact, studies spanning 40 years show that developer productivity does not change with years of experience (see No Experience Required!).

Years of experience do not lead to higher productivity

Interestingly enough, methodologies that emphasize planning code have been around for a long time. Watts Humphrey is the creator of the Personal Software Process (PSP) [3]. Using PSP has been measured to raise both productivity and quality:

PSP can raise productivity by 21.2% and quality by 31.2%

If you are interested there are many other proven methods of raising code quality that are not commonly used (see Not Planning is for Losers).

If your developers are at their keyboards and not planning at a whiteboard, then odds are that your productivity is not as high as it could be.

Bibliography

[1] Brooks, Frederick P. The Mythical Man-Month. Addison-Wesley, 1975. Brooks is even more pessimistic, suggesting that programmers produce 10 production lines of code per day.
[2] Jones, Capers and Bonsignour, Olivier. The Economics of Software Quality. Addison-Wesley, 2011.
[3] Humphrey, Watts S. Introduction to the Personal Software Process. Addison-Wesley Longman, 1997.

Other articles in the "Loser" series

Want to see more sacred cows get tipped? Check out:
Make no mistake, I am the biggest "Loser" of them all.  I believe that I have made every mistake in the book at least once :-)

Monday, 17 February 2014

When BA means B∪ll$#!t Artist

BA means Business Analyst, but what makes for a good BA? When do you have a good BA, and when do you have someone who isn't one? Many projects fail at the beginning because incomplete, inconsistent, and overly verbose analysis leads to incorrect project plans and projects heading in the wrong direction.

Business analysis covers all facets of solving business problems. It is a role executed by project, product, and engineering managers as well as other people. However, there are people who do business analysis as their main role, and we simply label them business analysts or product managers.

We shall stick to the term business analyst for simplicity. Business analysts perform a range of tasks including:
  • Gathering requirements
  • Writing business cases and project charters
  • Performing gap analyses to incorporate new products
Are your BAs any good?  

The success of many projects depends on the BA's ability to correctly identify the problem you are trying to solve. If your BA is not competent, then you are doomed before you start.

Ever suspect that the people responsible for performing business analysis for you are not up to the challenge? Here are three simple questions; a good business analyst will get all three correct in under 30 seconds.

  1. Which of these two lines is longer? (refers to a picture of two lines)
  2. What does this sign say? (refers to a picture of a sign)
  3. I have two products whose prices add up to $1.10 and one product is $1 more than the other. How much is the more expensive product?

The answers are:
  1. The lines are the same length
    • Take a ruler if you are not convinced
  2. Paris in the the spring
    • Notice that "the" occurs twice
  3. The more expensive product is $1.05 (see the algebra after this list)
    • If the more expensive one were $1.00, the cheaper one would be $0.10, but then the expensive one would be only $0.90 more than the cheaper one.
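For the skeptical, here is the algebra behind answer 3, with x standing for the cheaper price:

    \[ x + (x + 1.00) = 1.10 \;\Rightarrow\; 2x = 0.10 \;\Rightarrow\; x = 0.05 \]

so the more expensive product costs x + 1.00 = $1.05.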
These three questions illustrate that the mind jumps to conclusions, and those conclusions are often wrong. The mind can be tricked because it assesses things quickly. Only some people have the instinct to double-check their results and watch for personal bias.

Most BAs are well educated; the interesting thing is that studies show that educated people are less able to see their own biases and jump to conclusions more readily than other people [1]. Even worse, well-educated people have less of a tendency to check their conclusions than other people.

A good business analyst realizes that during analysis there will be situations where the mind jumps to conclusions. Many business analysts are asked to resolve conflicting requirements, recognize missing requirements, and deal with biases coming from many different sources. Unless you have the reflex of checking facts for consistency and eliminating bias, you will not make a good business analyst.

So what is the quality of your BAs?

Bibliography

[1] West, Richard F., Meserve, Russell J., and Stanovich, Keith E. Cognitive Sophistication Does Not Attenuate the Bias Blind Spot. Journal of Personality and Social Psychology, 2012.

Thursday, 16 January 2014

No Business Case == Project Failure

A business case sits between the bright idea for a software project and the creation of that project.

  • T0 - the idea to have a project is born
  • Tcheck - a formal or informal business case is made
  • Tstart - the project is initiated
  • Tend - the project finishes successfully or is abandoned
Not all ideas for software projects make sense. Between the idea (T0) and the project being initiated (Tstart), some due diligence on the project idea should occur. This is where you do the business case, even if only informally on the back of a napkin.

The business case is where you pause and estimate whether the project is worth it, i.e. whether it will leave you better off than not doing it. For those who want a precise definition, the project should have a positive net present value (NPV). In layman's terms, the project should improve the organization's bottom line, or at least improve skill levels so that other projects are better off.
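For reference, NPV-positive simply means that the project's discounted cash flows sum to more than zero:

    \[ \mathrm{NPV} = \sum_{t=0}^{T} \frac{CF_t}{(1+r)^t} > 0 \]

where CF_t is the net cash flow in period t, r is the discount rate, and T is the planning horizon.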

Projects that do not improve skills or the bottom line are a failure.

Out of 10 software projects (see Understanding your chances):
  • 3 are successful
  • 4 are challenged, i.e. over budget, late, or delivering much less functionality than planned
  • 3 will fail, i.e. abandoned
This means that the base rate of success for any software project is only 3 out of 10. Yet executives routinely launch projects assuming they cannot fail, even though the project team may know the project will be a failure from day 1.

Business cases give executives a chance to stop dubious projects before they start (see Stupid is as Stupid Does). How formal the business case needs to be comes down to uncertainty. There are three key uncertainties with every project:
  • Requirements uncertainty
  • Technical uncertainty
  • Skills uncertainty
When there is a moderate amount of uncertainty in any of these three areas then a formal business case with cash flows and risks needs to be prepared.

Requirements Uncertainty

Requirements uncertainty is what leads to scope shift (scope creep).

The probability of a project failing is proportional to the number of unknown requirements when the project starts (see Shift Happens).

Requirements uncertainty is only low for two particular kinds of projects:
  1. re-engineering a system where the requirements do not change
  2. the next minor version of a software product
For all other software projects the requirements uncertainty is moderate and a formal business case should be prepared.

Projects new to you have high requirements uncertainty.

Technical Uncertainty

Technical uncertainty exists when it is not clear that all requirements can be implemented using the selected technologies at the level of performance required. Technical uncertainty is only low when you have a strong understanding of both the requirements and the implementation technology. Uncertainty and risk are two different animals (see Uncertainty Trumps Risk in Software Development).

When there is only a moderate understanding of the requirements or the implementation technology then you will encounter the following problems:
  • Requirements, clarified late in the project, that the implementation technology will not support
  • Requirements that cannot be implemented once you improve your understanding of the implementation technology
Technical uncertainty is therefore high when you are doing a project for the first time and requirements uncertainty is high. It is also high when you are adopting new technologies, i.e. shifting from Java to .NET or changing GUI technology.

Projects with new technologies have moderate to high uncertainty.

Skills Uncertainty

Skills uncertainty comes from using resources that are unfamiliar with the requirements or the implementation technology.  Skills uncertainty is a knowledge problem.

Skills uncertainty is only low when the resources understand the current requirements and implementation technology.

Resources unfamiliar with the requirements will often implement them suboptimally when the requirements are not well written. This means rework, and the worse the requirements are understood, the more rework will be necessary.

Resources unfamiliar with the implementation technology will make poor implementation choices because they do not know the philosophies of the implementation libraries. Often, only after a project is complete do resources understand that different implementation tactics should have been used.

Formal or Informal Business Cases?

An informal business case is possible only if the requirements, technical, and skills uncertainties are all low. This only happens in a few situations:
  • Replacing a system where the requirements will be the same and the implementation technology is well understood by the team
  • The next minor version of a software system
Every other project requires a formal business case that will quantify what kind of uncertainty and what degree of uncertainty exists in the project.  At a minimum project managers facing moderate to high uncertainty should be motivated to push for a business case (see Stupid is as Stupid Does).
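Since the rule above is mechanical (informal only when all three uncertainties are low), it can be codified. A minimal Python sketch; the three-level scale and the function name are illustrative, not from any standard:

    # Decide whether a formal business case is required, per the rule above:
    # an informal case is acceptable only when ALL three uncertainties are low.
    LOW, MODERATE, HIGH = range(3)

    def business_case_required(requirements, technical, skills):
        """Return 'informal' only if every uncertainty is LOW, else 'formal'."""
        if max(requirements, technical, skills) == LOW:
            return "informal"
        return "formal"

    # A next-minor-version release by the existing team:
    print(business_case_required(LOW, LOW, LOW))        # informal
    # A new technology stack pushes technical uncertainty to high:
    print(business_case_required(LOW, HIGH, MODERATE))  # formal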

Here is a list of projects that tend to be accepted without any kind of real business case that quantifies the uncertainties:
  • Change of implementation technology
    • Moving to object-oriented technology if you don't use it
    • Moving from .NET to Java or vice versa
  • Software projects by non-software companies
  • Using generalists to implement technical solutions
  • Replacing systems with resources unfamiliar with the requirements
    • Often happens with outsourcing
Projects with moderate to high risks and no business case are doomed to fail.

Related articles

  • No Experience Necessary!
    • Surprisingly there is no correlation between years of experience and productivity.  This has been verified over 40 years.
  • Stop it! No, Really Stop It!
    • 5 worst practices that software organizations commonly practice that they need to stop right away

Tuesday, 3 December 2013

What the Heck are Non-Functional Requirements?

Simply put, if functional requirements create code that will address the needs of the end-users (customers), then non-functional requirements address the needs of the people who install, operate, and configure the code.

Those people are the operations and help desk personnel in whatever organization uses your software. Every developer needs to be aware of what those non-functional requirements are, and why operations and help desk personnel are customers just as important as the end-users.

Functional Requirements

Functional requirements are baked into the code that developers deliver (interpreted or compiled). Events from input devices (network, keyboard, devices) trigger functions that convert input into output; every function has the form:

    output = f(input)

This is true whether or not you use an object-oriented language. Non-functional requirements involve everything that surrounds a functional code unit; they concern time, memory, access, and location:

Performance
  • Server must be able to handle 100 transactions a second
  • End user must have less than 1 second response
Availability
  • The service has to be available 99.99% of the time during the service hours
Capacity
  • During service hours the server must support 700 simultaneous users
Continuity
  • Service is resilient to disk, machine, and operational center failure
Security
  • Ensure that only people authorized to access the service can
  • Ensure that data is never corrupted due to illegal activity or machine failure

I won't spend any time on performance because this is the non-functional requirement that everyone understands.

Non-functional requirements are slightly different between desktop applications and services; this article is focused on non-functional requirements for services.

If you have any knowledge of ITIL, you will recognize that the items above deal with the warranty of a service. In fact, the functional requirements involve the utility of a service, while the non-functional requirements involve its warranty.

Availability

Availability is about making sure that a service is available when it is supposed to be. It is recorded in a Configuration Item (CI), in the environment of the operations center, that specifies how the code is accessed. Availability is decided independently of the code; at best the CI is part of the Service Design Package (SDP) delivered to the operations department, and at worst the code is simply dumped on the operations personnel.

Developers need to be aware of how difficult it is to create the CI for the operations personnel. If a CI is created manually, then there will always be potential for error when the service is installed or updated. The requirement to create a CI is a non-functional requirement, and the ability to minimize CI errors is another.

Developers also need to be aware of single points of failure (e.g. services hard-coded to a specific IP), which cause fits for operations teams that are not running virtual machines (VMs) with virtual IPs. Code that does not rely on static IPs or specific machines is a non-functional requirement; availability is simplified for operations if the code is resilient enough to move (or be replicated) easily among servers (see the sketch after the list below).

Availability non-functional requirements include:
  • Ability to easily make the CI
  • Automated installation of the CI
  • Ability to detect and prevent manual errors for a CI
  • Ability to easily move code between servers
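As one concrete illustration of avoiding hard-coded IPs, the service endpoint can be resolved from configuration at startup. A minimal Python sketch; the environment variable names and defaults are hypothetical:

    import os
    import socket

    # Resolve the endpoint from configuration instead of hard-coding an IP;
    # operations can then move or replicate the service by changing one
    # environment variable rather than the code.
    host = os.environ.get("SERVICE_HOST", "localhost")
    port = int(os.environ.get("SERVICE_PORT", "8443"))
    address = socket.gethostbyname(host)  # DNS gives the indirection a static IP lacks
    print(f"Connecting to {host} ({address}):{port}")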

Capacity

Capacity is about delivering enough functionality when required. If you send a web service 1,000 requests a second when its server can only handle 100 requests a second, then some requests will be dropped. This may look like an availability issue, but it is caused by insufficient capacity.

Internet services can almost never provide enough capacity with a single machine, so operations personnel need to run multiple servers with the same software to meet capacity requirements. The ability to run multiple servers without conflicts is a non-functional requirement, as is the ability to take a failing node and restart it on another machine or VM (see the sketch after the list below).

Capacity non-functional requirements include:
  • Ability to run multiple instances of code easily
  • Ability to easily move a running code instance to another server
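One way to make "run multiple instances easily" concrete is to keep per-user state out of process memory so that any instance can serve any request. A Python sketch with an in-memory stand-in for what would be a shared store (such as Redis) in production; all names are illustrative:

    class SessionStore:
        """Abstract session storage; swapping the backend must not change callers."""
        def get(self, key): ...
        def put(self, key, value): ...

    class InMemoryStore(SessionStore):
        # Fine for one instance; a shared backend (Redis, memcached, a
        # database) is what actually lets N identical instances coexist.
        def __init__(self):
            self._data = {}
        def get(self, key):
            return self._data.get(key)
        def put(self, key, value):
            self._data[key] = value

    def handle_request(store: SessionStore, session_id: str) -> int:
        # The handler keeps no state of its own, so any instance can serve
        # any request, which is the non-functional requirement stated above.
        count = (store.get(session_id) or 0) + 1
        store.put(session_id, count)
        return count

    store = InMemoryStore()
    print(handle_request(store, "user-42"))  # 1
    print(handle_request(store, "user-42"))  # 2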

Continuity

Continuity is about being robust against major interruptions to a service: power outages, floods or fires in an operational center, or any other disaster that can disrupt the network or the physical machines.

Where availability and capacity often involve redundancy inside a single operations center, continuity involves geographic and network redundancy. At best, continuity means multiple servers working in geographically distributed operation centers. At worst, you need a master-slave fail-over model with the ability to journal transactions and eventually bring the master back up.

Continuity non-functional requirements include (a retry sketch follows this list):
  • Resilience of the code base to network outages, e.g. the ability to retry transactions or find a new server
  • Making sure that correct error messages are returned when physical failures are encountered, e.g. if the network is unavailable, don't give the end-user the message "Customer record has errors, please correct"
  • The ability to recognize inconsistent data and stop rather than continue corrupting the database
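The first bullet, retrying transactions across transient outages, is the classic retry-with-backoff pattern. A minimal Python sketch; the timings, attempt count, and exception type are placeholders:

    import random
    import time

    def with_retries(operation, attempts=5, base_delay=0.5):
        """Retry an operation prone to transient failures, with exponential backoff."""
        for attempt in range(attempts):
            try:
                return operation()
            except ConnectionError:
                if attempt == attempts - 1:
                    raise  # out of retries: surface a real error, not silence
                # Exponential backoff with jitter so a fleet of clients does
                # not hammer a recovering server in lockstep.
                time.sleep(base_delay * (2 ** attempt) + random.uniform(0, 0.1))

    # Example: wrap a flaky network call that fails twice, then succeeds.
    calls = iter([ConnectionError, ConnectionError, "ok"])
    def flaky():
        result = next(calls)
        if result is ConnectionError:
            raise ConnectionError("network unavailable")
        return result

    print(with_retries(flaky))  # "ok" after two transparent retries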


Security
Security non-functional requirements concern who has access to which functions, and protecting data integrity from corruption.

Where access is concerned, how difficult will it be for operations or help desk personnel to set up security for users? Developers build different levels of access into their applications without considering how difficult it will be for a third party (help desk or operations) to set up end-users. The ease of setting up security is a non-functional requirement (see the sketch after the list below).

Data integrity is another non-functional requirement. Developers need to consider how their applications will behave if the program encounters data corrupted by machine or network failures. This is less of an issue in environments using RAID or redundant databases.

Security non-functional requirements include:
  • Ease of a help desk to set up a new user on an application
  • Ability to configure a user's rights to enable them only to access the functions that they have a right to
  • Ensuring through data redundancy or consistency algorithms that data is not corrupted
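For the second bullet, limiting users to the functions they are entitled to is commonly a role check at each entry point. A toy Python sketch; the roles, user records, and function names are invented for illustration:

    from functools import wraps

    def requires_role(role):
        """Allow a call only when the user holds the named role."""
        def decorator(func):
            @wraps(func)
            def wrapper(user, *args, **kwargs):
                if role not in user.get("roles", set()):
                    raise PermissionError(f"{user['name']} lacks role '{role}'")
                return func(user, *args, **kwargs)
            return wrapper
        return decorator

    @requires_role("refunds")
    def issue_refund(user, order_id):
        return f"refund issued for {order_id}"

    clerk = {"name": "pat", "roles": {"orders"}}
    manager = {"name": "sam", "roles": {"orders", "refunds"}}
    print(issue_refund(manager, "A-1001"))  # allowed
    # issue_refund(clerk, "A-1001")         # raises PermissionError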


When You Forget Non-Functional Requirements...

Commonly, start-ups are so busy standing up their services that they put non-functional requirements on the back burner. The problem is that some non-functional requirements need to be designed into the architecture when the software is created.

For example, it is easy to be fooled into building software that is tied to a single machine; this will not scale in operations and will cause problems later on. One of the start-ups I was with built a server for processing credit/debit card transactions without considering the non-functional requirements (capacity, continuity).

It cost more to retrofit the non-functional requirements than it had cost to develop the software!

Every non-functional requirement that is not thought through at the inception of a project will often represent significant work to add later on. Each such retrofit is a zero-function-point project with a decidedly non-zero cost!

Generally, availability, capacity, and continuity are not a problem for services developed with cloud computing in mind. However, there are thousands of legacy services that were developed before cloud computing was even possible.

If you are developing a new service then make sure it is cloud enabled!

Operations People are People Too

Make no mistake, operations and help desk personnel are fairly resourceful and have learned how to manage software whose non-functional requirements were not addressed by the developers.

Hardware and OS solutions exist to compensate for poorly written software that assumes a single machine or ignores the environment the code runs in, but they come at a fairly steep infrastructure cost.

The world has moved to services, and it is no longer possible for developers to ignore the non-functional requirements of the code they are developing. Developers who think through the non-functional requirements can dramatically improve both the bottom line and the quality of service being delivered.

The people who run operational centers and help desks are customers only slightly less important than the end-users. Early consideration of the non-functional requirements makes their lives easier and makes your software and services much easier to sell. It is no longer acceptable for competent developers to be unaware of non-functional requirements.



Other Articles
  • No Experience Necessary - counter-intuitive evidence that years of experience do not make developers more productive
  • Shift Happens - why scope shift on development projects is inevitable, and why not capturing requirements at the start of a project can doom it to failure
  • Inspections are not Optional - software inspections are intensive, but evidence shows that each hour of inspection can reduce QA effort by 4 hours!

Tuesday, 19 November 2013

Who should set defect priority?

Surprisingly, defect priority should not be set by QA. QA generally owns and controls the defect tracking system, but priority is one attribute they should not control. The defect tracker is a resource shared between QA, engineering, engineering management, and product management, and it is a coordinating mechanism for all these parties.

Commonly, people mix up priority and severity. For example, there may be a severe defect that causes the software either not to install or to stop functioning.

It is common for new releases to have various installation problems when they first reach QA. This blocks QA, so they mark the defects with high severity and high priority. Such an issue does need to be addressed right away, but remember that bug tracking systems are effectively append-only: once a defect gets into the system, it never gets out. This kind of issue should be escalated directly to the engineers and engineering management, because it makes little sense to clog the defect tracking system with it.

There may also be intermittent issues that cause the software to fail. You might assume such a defect would be high priority, but if it occurs very rarely and would cost too much to fix, then it may be a low priority. Once again, the priority of an intermittent severe issue cannot be determined by QA.

Similarly, there may be many cosmetic or minor defects whose fixes would make a huge difference to the user experience and reduce support calls. Even though these defects are minor, they may be easy to fix and save you serious money. Once again, this cannot be decided by QA.

Therefore, QA should reserve the right to set the initial severity (and may keep an internal field for QA priority), but the priority of a defect should be determined by a product manager (PM), who has a more complete understanding of the product's overall context. The priority of a defect is a business issue, not an engineering or QA issue.

Ideally, the product manager will set the priority of a defect during a defect triage session where representatives from engineering, engineering management, and QA are present. As you go through each new defect, each person can present their logic for what the priority should be. The PM should then set the priority of the defect and assign the release in which it should be fixed (i.e. this release, minor release, major release, won't fix). Ideally there are extra fields in the defect record where both QA and development can record their own priority assessments.

It is important that the main status field of a defect not include any of the following, or your defect tracker will go to hell (this information should live in other fields; a minimal record layout is sketched after the list):
  • Priority
  • Severity
  • Fix version
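A minimal sketch of such a record in Python; the field names and enum values are illustrative, not any particular tracker's schema:

    from dataclasses import dataclass
    from enum import Enum

    class Status(Enum):
        # Status tracks workflow position only, nothing else.
        NEW = "new"
        ASSIGNED = "assigned"
        FIXED = "fixed"
        VERIFIED = "verified"
        CLOSED = "closed"

    @dataclass
    class Defect:
        summary: str
        status: Status
        severity: int          # set by QA at creation
        qa_priority: int       # QA's belief, kept in its own field
        dev_priority: int      # development's belief, kept in its own field
        pm_priority: int       # the binding priority, set at triage by the PM
        fix_version: str       # release targeted at triage, not a status value

    d = Defect("Crash on save", Status.NEW, severity=1,
               qa_priority=1, dev_priority=2, pm_priority=1,
               fix_version="2.4.1")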
It is very hard to generate meaningful reports when these attributes creep into the defect status. You know this has happened when you need a multi-page training manual for entering defects into the system. Some statuses that make sense initially but turn the defect system into a nightmare are:
  • FAD (functions as designed)
  • WontFix
  • NextVersion
  • CantReproduce
Of course, if you don't have regular bug triage sessions with all the parties mentioned above, then you are probably swamped with fire-fighting.


Wednesday, 30 October 2013

Don't manage enhancements in the bug tracker

As development progresses, we inevitably run into functionality gaps that get deemed enhancements.

These issues often get captured by QA in the bug tracker and assigned to a developer who will be unable to resolve them.

Assigning issues to the wrong people will cause the defect to bounce around like a hot potato and waste everyone's time.

Enhancements are requirements defects that cannot be resolved by either QA or the developer.

The life-cycle of a defect and the life-cycle of an enhancement are two entirely different things. A defect is a difference between a stated requirement and the code. If there is no documentation, then there is no code defect (see It's not a bug, it's...).

Most enhancements will eventually be coded by some developer; they just should not be managed from the bug tracker. They need to go through the requirements process to determine whether or not the change will be made.  Even when changes are made they can be very different from what QA or the developer expects.

Defect Life-cycle

The defect life-cycle is well known (a small state-machine sketch follows the list):
  • Defect is identified as a departure from the requirements
  • Defect is assigned to a developer
  • Defect is corrected
  • The correction is verified
    • If not corrected re-open and re-assign to developer
  • The defect is closed
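That life-cycle is a small state machine; here is a sketch of the transitions as data, with state names paraphrased from the list above:

    # Allowed defect transitions from the life-cycle above; anything else
    # (e.g., jumping from "identified" straight to "closed") is rejected.
    TRANSITIONS = {
        "identified": {"assigned"},
        "assigned":   {"corrected"},
        "corrected":  {"verified", "assigned"},  # re-open: back to a developer
        "verified":   {"closed"},
    }

    def advance(state, new_state):
        if new_state not in TRANSITIONS.get(state, set()):
            raise ValueError(f"illegal transition {state} -> {new_state}")
        return new_state

    state = "identified"
    for step in ("assigned", "corrected", "verified", "closed"):
        state = advance(state, step)
    print(state)  # closed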

This is the correct way to manage a defect.

This is the incorrect way to manage enhancements. When QA discovers functionality not covered by the requirements, we have an issue; however, it is unlikely to be resolved by a developer.

Assigning enhancements to developers therefore simply wastes time, because neither QA nor the developer should be deciding requirements.

Non-Code Defect Life-cycle

There are defects that are not coding defects, including:
  1. Insufficient requirements
  2. Correct requirements but incorrect test plans
Enhancements are really requirements defects. They should be logged as such in the bug tracker and assigned to the person in charge of requirements (the business analyst or product manager). That person is then responsible for tracking down how the issue should be handled.

If the requirements are correct but the test plans are defective, then the issue should be logged as a test defect. This is tricky because QA often controls the bug tracker and may be reluctant to log errors that they themselves committed; such issues are typically reported in QA with a "functions as designed" status.

At a minimum, introducing requirements and test defect categories in the bug tracker does several positive things for you:
  1. It removes from development the responsibility of finding a solution.
  2. It makes clear how many defects originate in the requirements or the test plans.
  3. It reduces stress; no developer wants to be blamed for an issue that is not theirs.
  4. Many enhancements call for updated project plans and a pushed-back deadline.

Put Responsibility Where it Belongs

Creating requirements and test defect categories in the bug tracker goes a long way toward cleaning it up. In fact, requirements and test defects together represent about 1 in 4 of the defects in most systems.

The percentages break down as follows (summing to 25%, i.e. 1 in 4):
  • Requirements defects: 9.58%
  • Testing defects: 15.42%
Creating requirement and test defects in the bug tracker relieves pressure on the engineering department and redirects it to either the product manager or QA. Eventually, enough data will accumulate in the bug tracker to get management's attention.

At a minimum, these defect categories should help reduce the amount of fire-fighting at the end of a project.




Wednesday, 23 October 2013

It's not a bug, it's...

When does a bug become a bug?
Who decides that it is a bug?

How many legs does a lamb have if I say the tail is a leg? The answer is four: just because I say the tail is a leg does not make it a leg!

Bugs should be obvious, but we say "It's not a bug, it's a feature" because often it isn't obvious. Watts Humphrey felt that we should use the term defect rather than bug because most people don't take bugs seriously; so let's use the term defect.

So when does a defect become a defect?
  • When quality assurance tells you that you have a defect?
  • When product management says that it is a defect?
  • When the customer says that it is a defect?
The answer is: none of the above. It might turn out that there is a problem and the code needs to change, but a defect only exists if:

code behaves differently than the requirements specification

This is important because most systems are under-specified (if they are specified at all :-) ). Code can only be considered defective if it differs from the specification. We call such defects "undocumented features" because we know the real problem is that the requirements were never written.

Incomplete and Inconsistent Requirements

Many organizations do not create sufficiently complete requirements before starting development, either because they don't know how to capture requirements properly or because they don't have resources capable of capturing complete requirements.

Incomplete (and inconsistent) requirements and unrealistic deadlines often force developers into making decisions about how to implement features.  The end result is that developers are regularly told that they have defects in their code.

While this process is common, it is destructive. When requirements are under-specified and inconsistent, developers end up doing serious rework. That rework can require dramatic changes that impact the architecture of the code.

The time required to find a workaround (if one is possible) is rarely included in the project plan. Complicating matters, the organizations that are reluctant to spend time creating requirements also tend to underestimate their projects. This puts tremendous pressure on the engineering department to deliver, which promotes the five worst practices in software development (see Stop It! No… really stop it.)

When poorly documented or undocumented systems require changes that were never specified, we should call them change requests rather than defects.

Only 54% of Issues are Resolved by Engineers

The attitude that all defects must be resolved by the engineering department is severely misguided. Analysis by Capers Jones of over 18,000 projects shows that only about 54% of all defects can be resolved by the engineers (the three rows marked * below)!

Defect Category                  Frequency   Responsible Role
Requirements defect                9.58%     BA/Product Management
Architecture or design defect     14.58%     Architect
Code defect *                     16.67%     Developer
Testing defect                    15.42%     Quality Assurance
Documentation defect               6.25%     Technical Writer
Database defect *                 22.92%     Database Administrator
Website defect *                  14.58%     Operations/Webmaster

* The rows engineering can resolve: 16.67% + 22.92% + 14.58% = 54.17%, i.e. roughly 54%.

This means that precious time is wasted assigning issues to developers who cannot resolve them. The time needed to redirect each issue to the correct person is a major contributing factor to fire-fighting.

Getting Control of the Defect Process

For most organizations, fixing the defect process involves understanding and categorizing defects correctly. Organizations that are not tracking the different sources of defects probably have a bug tracker that has gone to hell (see Bug Tracker Hell and How To Get Out!).

At a minimum, you need to implement the requirements defect category; once you identify the issues caused by poor requirements, it will shine the white-hot light of shame onto the resources capturing your requirements.

Once you realize how many requirements defects exist in your system you can begin to inform senior management about the requirements problem.

Reducing Fire-Fighting

The best way to reduce fire-fighting is to start writing better requirements (or writing requirements at all :-) ). To do so, you need to figure out which of the following is broken:
  1. Not enough time is allocated to the requirements phase
  2. Unskilled people are capturing your requirements
In all likelihood, both of these issues need to be fixed in your organization. When requirements are incomplete and inconsistent, you will have endless fire-fighting meetings involving everyone (see Root cause of 'Fire-Fighting' in Software Projects).

Stand your ground if someone tells you that you have coded a defect when there is no documented requirement.