Standard Deviation of Productivity using time and output

I am doing productivity research for the company I am interning for. In one of our production departments, we have employees producing units by hand. What my COO wants me to figure out is an acceptable range of output/hour that every worker should fall into.

What I have done so far:
- timed workers producing 10 units, 3 times, i.e. first 10 = 2 min 57 sec, second 10 = 2 min 30 sec, third 10 = 3 min 13 sec
- found an average time for all 3.
- found an average output/hr using the avg time (see the sketch after this list).
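
For reference, that arithmetic looks like this in Python (the three batch times above are the only inputs; the variable names are just for illustration):

    # Convert the three timed batches (10 units each) into an average output/hour.
    batch_times_sec = [2 * 60 + 57, 2 * 60 + 30, 3 * 60 + 13]  # 2:57, 2:30, 3:13
    units_per_batch = 10

    avg_time_per_batch = sum(batch_times_sec) / len(batch_times_sec)  # sec per 10 units
    avg_time_per_unit = avg_time_per_batch / units_per_batch          # sec per unit
    avg_output_per_hour = 3600 / avg_time_per_unit                    # units per hour

    print(f"Average time per unit: {avg_time_per_unit:.1f} sec")
    print(f"Average output/hr:     {avg_output_per_hour:.1f} units")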

How do I find an acceptable range of output/hr using standard deviation or some other statistical method, rather than just choosing the range arbitrarily?

Blah - 2008-08-06T11:54:50Z

Favorite Answer

Statistics can't tell you what an employee "should" do. (That's a management decision.) It can tell you what employees *actually* do.

Note that what M above proposes is actually a procedure for determining how close your sample mean might be to the true mean -- not what you want. But a similar procedure can tell you how many observations you really need. And that procedure can be used later in control charts to tell you whether your production process has changed.
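
As an example of the sample-size procedure (not spelled out above; the SD, margin of error, and confidence level below are assumptions you would replace with your own choices):

    import math

    # Rough sample-size estimate for estimating a mean:
    # n >= (z * s / E)^2, where E is the margin of error you are willing to accept.
    s = 2.5   # assumed: sample SD of per-unit time, in seconds (use your own estimate)
    E = 0.5   # assumed: desired margin of error, in seconds
    z = 1.96  # z value for 95% confidence

    n_required = math.ceil((z * s / E) ** 2)
    print(f"Observations needed: {n_required}")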

You want to characterize your production process with a mean and standard deviation. (However, this assumes that the process is normally distributed, and that assumption should be tested). The amount of data you have gathered is likely barely adequate (but can be used to tell how many observations you really need), and I would have to question how you gathered it (it should be done randomly -- different employees, different shifts, different days, different time points within the shift, and so on).
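
One way the normality assumption could be checked is with a Shapiro-Wilk test, sketched here with SciPy (the per-unit times below are made-up placeholders, not real data):

    from scipy import stats

    # Hypothetical per-unit production times in seconds (replace with real observations).
    unit_times = [17.1, 18.0, 16.5, 17.7, 19.2, 16.9, 17.4, 18.3, 17.0, 16.8,
                  17.9, 18.6, 17.2, 16.4, 17.8, 18.1, 17.5, 16.7, 17.3, 18.4]

    stat, p_value = stats.shapiro(unit_times)
    print(f"Shapiro-Wilk statistic = {stat:.3f}, p-value = {p_value:.3f}")
    if p_value < 0.05:
        print("Normality is doubtful -- be careful using mean +/- k*SD ranges.")
    else:
        print("No strong evidence against normality at the 5% level.")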

But to use your existing data, you should think of it as 30 observations instead of 3 samples of 10 each. You've already calculated the mean. You need the standard deviation of the 30 observations. Then, assuming that your calculated mean and SD are the true ones (a really big IF), you can make claims such as "N% of all units are produced in 17.3 seconds, +/- M seconds." The margin of error is derived from the SD (for 95%, use +/- 1.96 SDs).
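
A minimal sketch of that calculation, assuming you have the individual per-unit times (the numbers below are placeholders for your ~30 observations, and the 1.96 multiplier assumes normality as noted above):

    import statistics

    # Placeholder per-unit times in seconds -- substitute your real observations.
    unit_times = [17.1, 18.0, 16.5, 17.7, 19.2, 16.9, 17.4, 18.3, 17.0, 16.8,
                  17.9, 18.6, 17.2, 16.4, 17.8, 18.1, 17.5, 16.7, 17.3, 18.4]

    mean_time = statistics.mean(unit_times)
    sd_time = statistics.stdev(unit_times)      # sample standard deviation

    # If the true mean and SD equal these estimates (a big IF) and times are normal,
    # about 95% of individual units fall within mean +/- 1.96 * SD.
    low, high = mean_time - 1.96 * sd_time, mean_time + 1.96 * sd_time
    print(f"Mean time/unit: {mean_time:.1f} sec, SD: {sd_time:.1f} sec")
    print(f"~95% of units: {low:.1f} to {high:.1f} sec "
          f"({3600/high:.0f} to {3600/low:.0f} units/hr)")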

Lastly, you can't simply translate "stopwatch time" into output. There are several factors that must be taken into account, including the bias introduced by monitoring workers, bio-breaks, and so on.

If you want to take it offline for more detailed help, add yourself to my network, and we can arrange to IM or email.
Source(s):
I am a professor of business and management and teach operations management

M - 2008-08-06T10:03:28Z

1. Set Standard Output (per hour)

2. Use Test Statistic
x-bar +/- (t critical value) * s/SQRT(n)

3. Computation of Test Statistic
x-bar = Sample mean
s = Sample standard deviation
n = Number of observations in the sample
df = degrees of freedom [n - 1]

"look-up" from Table "t critical value" based on 'level of significanc'

4. Compare the Standard Output to the Confidence Interval of the Sampled Output (a worked sketch of steps 2-4 follows below)
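
A rough sketch of steps 2-4 in Python, using SciPy for the t critical value (the standard output is an assumed figure, and the sampled rates are derived from the asker's three timed runs for illustration):

    import math
    import statistics
    from scipy import stats

    standard_output = 200.0   # assumed: management's standard output (units/hour)

    # Sampled output rates in units/hour, one observation per timed run
    # (10 units in 2:57, 2:30, and 3:13).
    sample = [203.4, 240.0, 186.5]
    n = len(sample)
    x_bar = statistics.mean(sample)
    s = statistics.stdev(sample)

    alpha = 0.05                               # level of significance
    t_crit = stats.t.ppf(1 - alpha / 2, df=n - 1)
    margin = t_crit * s / math.sqrt(n)

    low, high = x_bar - margin, x_bar + margin
    print(f"95% CI for mean output/hr: {low:.1f} to {high:.1f}")
    print("Standard is inside the CI" if low <= standard_output <= high
          else "Standard is outside the CI")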

Anonymous - 2016-12-11T09:01:18Z

Standard deviation (SD) tells you, on average, how far each data point is from the mean. If the SD is high, that means on average each data point is far from the mean, so the data are very spread out (e.g. 1, 100, 450, 2000 would have a very high SD). If the SD is low, that means on average each data point is very close to the mean, so the data are grouped together (e.g. 67, 67.3, 67.9, 68.2 would have a very low SD).
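
A quick check of those two examples:

    import statistics

    spread_out = [1, 100, 450, 2000]        # values far from their mean -> high SD
    grouped    = [67, 67.3, 67.9, 68.2]     # values close to their mean -> low SD

    print(f"High-SD example: mean={statistics.mean(spread_out):.1f}, "
          f"SD={statistics.stdev(spread_out):.1f}")
    print(f"Low-SD example:  mean={statistics.mean(grouped):.1f}, "
          f"SD={statistics.stdev(grouped):.2f}")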

Anonymous - 2014-11-04T19:21:42Z

Difficult question. Try searching on Google or Bing. That can help!