I believe every lean or TOC practitioner has been in the situation of asking a line manager about their bottleneck and receiving the answer: our bottleneck is the number of resources. Meaning, of course: "do your consultant thingy, but make sure I get more people at the end if you want my support". It is also normal that every consultant has learned to be skeptical about this claim – it is sort of the first round in a complicated ballet in which we end up (or not) finding the "real" bottleneck and solving the problem.

But how can we be sure that the claim about the number of people is not true? I recently looked at an operation where each employee processes customer requests. There are no machines; there is just reading e-mails and forms, looking up answers and providing them in a standardized form to the customer. In such a case the number of available resources can very well turn out to be the limiting factor, aka the bottleneck.

Thinking about this case, I believe we can apply the cycle time formula discussed a few posts ago to get a good view of what is going on. Imagine one employee working on a specific task. How much can he accomplish in a given day? If I know the cycle time for the operation, then the number of jobs he can finish is obviously the available time divided by the cycle time –

NJ=T/CT.

The cycle time is

CT=P/A

where P is the pure processing time and A is the percentage of time the employee devotes to the task we study. Putting it all together:

NJ=T*A/P –

the available time, times the percentage devoted to the task, divided by the pure processing time.

The total number of jobs finished in a day (aka throughput) by the operation we study is

TP=N*NJ = N*T*A/P – where N is the number of employees working on the job.
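As a quick sanity check of the formula, here is a small Python sketch with made-up numbers (the 8-hour day, 6-minute processing time and 75% availability are purely illustrative):

```python
# Throughput model TP = N * T * A / P (all times in minutes)
T = 8 * 60  # available time per employee per day
P = 6.0     # pure processing time per job
A = 0.75    # fraction of time actually devoted to the task
N = 5       # number of employees

CT = P / A   # effective cycle time per job
NJ = T / CT  # jobs one employee finishes per day
TP = N * NJ  # jobs the whole operation finishes per day

print(f"cycle time: {CT:.1f} min, jobs/employee: {NJ:.0f}, throughput: {TP:.0f}")
```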

It is worth discussing the terms in more detail. First, P is the pure processing time – the time it takes to finish one job if no interruptions of any sort block the employee. This is the ideal case, and it cannot be changed unless we change the process: introduce faster machines, eliminate some steps, etc. A is the percentage of time the employee can work in the ideal process. Two factors can drastically reduce A: if the employee has other, unrelated tasks to do, then obviously the percentage of time available for the task at hand will be reduced. But A can also be reduced if the operator cannot follow the ideal path (which takes P minutes) for various reasons – like incomplete information that has to be found by calling other colleagues, waiting for a printer to become available and the like. So even if somebody nominally works only on a given task, his percentage availability can be less than 100% if the work is not organized properly.

With the formula we can get a rough idea of how efficient the operation really is: if the total number of finished jobs is higher than or close to the daily demand, we possibly have too many people on this task. If the number is generally lower than the demand, we have a classical bottleneck situation, and the claim that the people are the bottleneck could turn out to be true.
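That comparison is easy to automate. A minimal sketch, with a hypothetical demand figure and illustrative capacity inputs:

```python
def daily_throughput(N, T, A, P):
    """TP = N * T * A / P -- jobs the operation can finish per day."""
    return N * T * A / P

demand = 320  # hypothetical daily demand in jobs
capacity = daily_throughput(N=5, T=8 * 60, A=0.75, P=6.0)

if capacity < demand:
    print(f"bottleneck: capacity {capacity:.0f} < demand {demand}")
else:
    print(f"capacity {capacity:.0f} covers demand {demand}")
```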

To see how we can use the formula, let us discuss the case where the number of available employees is reduced (for example, somebody goes on sick leave). What can the manager do to maintain throughput at the "normal" level?

TP=A*T*N/P, so he can increase A, T or N, or reduce P.

The easiest to discuss is increasing N – we just lost an employee, so getting back to the original level would mean "borrowing" someone from a different operation. This is only possible if those employees are capable of working on the task – which brings us to the very important lean idea of cross-training. This strategy works only if the company consciously cross-trains employees, not an easy proposition in most cases.

The second option is increasing A – that would practically mean refusing less important tasks that distract people from the most important job and making sure that inefficiencies are avoided. Maybe no one from outside can work on the department's most important task, but people can temporarily take over the less important jobs, freeing up capacity for the department to do its important work.

Unfortunately, the easiest and most frequently taken solution is increasing T – which simply translates to working overtime. Reducing P is not an option – P is by definition the shortest time needed to accomplish the task, so "working faster" does not affect it.
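The trade-off between these levers can be made concrete. A sketch with purely illustrative numbers (an 8-hour day, 6-minute jobs, 75% availability), solving each lever for the value needed to restore the original throughput after the team shrinks from 5 to 4:

```python
# Baseline: TP = N * T * A / P (times in minutes)
T, A, P = 480, 0.75, 6.0
TP_target = 5 * T * A / P  # throughput with the full team of 5

N_left = 4  # one employee is out
A_needed = TP_target * P / (N_left * T)  # raise availability instead
T_needed = TP_target * P / (N_left * A)  # or work longer days

print(f"A must rise to {A_needed:.0%}, or T to {T_needed:.0f} min/day")
```

With these inputs, availability would have to jump from 75% to about 94%, or the day would have to stretch to 10 hours – which shows why overtime is usually the lever that gets pulled.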

What if the solution is to keep working as usual and hope for the best? As a general put it, "hope is not a method". Ignoring the problem is exactly equivalent to stuffing more products down the constraint than it can handle – and the formula tells us what will happen. If, say, 5 people have to deliver the throughput of 6, they will most probably try to run the supernumerary jobs in parallel, interrupting work on one task to push another task a bit further. This means that for each job they will have less A available, so the throughput will be even LESS than if they had simply ignored the supernumerary jobs completely.
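This effect can be modeled directly. In the sketch below, the assumption that each extra job run in parallel costs 5 percentage points of availability through task switching is invented for illustration; the point is only the direction of the result:

```python
def throughput(N, T, A, P):
    """TP = N * T * A / P -- jobs finished per day."""
    return N * T * A / P

T, P = 480, 6.0  # minutes per day, minutes per job
A_focused = 0.75

# Option 1: ignore the supernumerary jobs, keep full availability
tp_focused = throughput(4, T, A_focused, P)

# Option 2: run 2 extra jobs in parallel, losing 5% availability per extra job
A_split = A_focused - 2 * 0.05
tp_split = throughput(4, T, A_split, P)

print(tp_focused, tp_split)  # the "heroic" parallel mode finishes fewer jobs
```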

So, what would be a good strategy in case of a temporary loss of capacity? As the Theory of Constraints would have it:

Get help to reduce the bottleneck or eliminate it completely
Protect the bottleneck (the employees) by delegating/refusing less important tasks,
Get more time

and finally if all of these fail:

Ignore the demand that can not be met

Trying to push more through a bottleneck than its capacity might look heroic, but it will just make things a lot worse than they need to be.