Sorry. You're missing the point. If testing at multiple altitudes and weights, you may be looking at tens of runs (and thus equations).
Also, I don't limit it to 4th order and just 5 decimal places.
I guess, to each their own.
Finn
Actually, I think you are missing the point. The test method IS a tremendous time saver, but the data reduction, as presented, is statistically weak. This engineer, trained in statistics, knows that good data analyzed poorly can be misleading. What is worse than having poor data? Learning something that is far off the mark.
Using a better data reduction should help the process, and it can be automated so that it takes very little effort beyond getting the data off the flight logger and into the spreadsheet. The issues include: not holding enough significant digits in the curve-fit coefficients; no check of the significance of the terms in the selected equations; no investigation of functional forms that fit better; and enforced coarse granularity. I am merely asking folks to improve the precision of the data reduction, which should reduce the number of subsequent runs needed to refine and check your estimates of Vy and Vx ...
Coefficient Precision - Using the fixed decimal point format in the Excel curve fit on the graph gave us a t^4 term (shown in the Kitplanes article) of 0.00001. That means the true coefficient is somewhere between 0.0000050 and 0.0000150. As for the argument that 1E-5 is not important: at t = 100 (1e2), t^4 = 1e8, and 0.00001 * 1e8 = 1000. I would call that a bunch of error if not managed. Using more digits right of the decimal point helps, but just changing the cell format to scientific notation gives you whatever number of significant digits you feel you need. Four is a good idea.
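To see the size of that rounding error, here is a minimal Python sketch (not from the article; the 8.7E-6 "true" coefficient is a made-up value inside the displayed 0.00001 band):

```python
# Sketch of the coefficient-rounding argument above.  A 5-decimal fixed
# display of 0.00001 only tells you the true coefficient is somewhere
# in roughly [0.0000050, 0.0000150).  The value below is hypothetical.
true_c4  = 8.7e-6              # made-up full-precision t^4 coefficient
shown_c4 = round(true_c4, 5)   # what a 5-decimal fixed format shows: 1e-05

t = 100.0                      # end of a 100-second run
print(f"term with full coefficient: {true_c4  * t**4:7.1f}")  #  870.0
print(f"term with displayed value : {shown_c4 * t**4:7.1f}")  # 1000.0
print(f"error from rounding alone : {(shown_c4 - true_c4) * t**4:7.1f}")
# Formatting the coefficient in scientific notation with ~4 significant
# digits (8.700E-06) removes this display-rounding error entirely.
```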
Coefficient Significance - When I took the 10 seconds of data supplied and plotted it, I ran Linest() on it and got coefficients with more significant digits. My practiced eye led me to try adding terms for t^5 and sqrt(t), then I checked the value of each coefficient (1st row of the Linest() output) against the error estimate for that term (2nd row). If the coefficient is several times larger than its error estimate, it is probably real; if the error estimate is around the size of the coefficient or bigger, the coefficient is probably garbage. On that data, the first-order term t appears to be crap, but the higher-order terms all appeared significant. Most statisticians will tell you that if the x^3 term is significant, you keep the x and x^2 terms even if they do not appear significant, so we keep t. R^2 with the original fit is 99.981%, which sounds really good, but with t^5 and sqrt(t) included it improved to 99.995% - a quarter of the error of the base approach. Now you have an improved base to differentiate once for an acceleration curve.
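For anyone who wants the same check outside Excel, here is a Python sketch of the Linest()-style fit and the coefficient-versus-error-estimate test. The t and v arrays are synthetic stand-ins, not the Kitplanes data, and the 3x cutoff is just one reading of "several times larger":

```python
import numpy as np

# Synthetic stand-in for the logger trace: 4 Hz samples over a 10 s run,
# speed building at roughly 5 kt/s.  Paste in your own t and v columns.
rng = np.random.default_rng(0)
t = np.arange(0.25, 10.25, 0.25)
v = 30 + 5.0*t - 0.05*t**2 + rng.normal(0, 0.2, t.size)

# Design matrix with the extra t^5 and sqrt(t) terms tried above.
X = np.column_stack([t**5, t**4, t**3, t**2, t, np.sqrt(t), np.ones_like(t)])
coef, _, _, _ = np.linalg.lstsq(X, v, rcond=None)

# Standard error of each coefficient -- the equivalent of Linest()'s 2nd row.
resid = v - X @ coef
dof = t.size - X.shape[1]
s2 = resid @ resid / dof                       # residual variance
se = np.sqrt(np.diag(s2 * np.linalg.pinv(X.T @ X)))

r2 = 1 - (resid @ resid) / np.sum((v - v.mean())**2)
names = ["t^5", "t^4", "t^3", "t^2", "t", "sqrt(t)", "const"]
for name, c, e in zip(names, coef, se):
    verdict = "probably real" if abs(c) > 3*e else "probably garbage"
    print(f"{name:>7}: {c: .4e} +/- {e:.1e}  ({verdict})")
print(f"R^2 = {r2:.5%}")
# For long runs, scale t (e.g. t/100) before fitting to keep X.T @ X
# well conditioned; the significance test itself is unchanged.
```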
Granularity - The data and analysis as presented give you info at 1-second intervals while the airplane is changing speed at 5 knots a second. While I agree that the rate-of-climb curves are relatively smooth near Vy and near Vx, a little more precision is available, so why not use it? With calculated and fabulously fitted acceleration equations, we can plot our reserve-power-based rate-of-climb estimate at shorter time steps and get a smoother curve for picking off Vy and Vx, as well as Vz.
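Here is a sketch of that finer-grained step, assuming the usual level-acceleration reduction ROC ~ V * (dV/dt) / g (the article's exact formula may differ), with a made-up logger trace:

```python
import numpy as np

g = 32.174                                  # ft/s^2
t = np.arange(1.0, 31.0)                    # 1 Hz logger samples (made up)
v_kt = 60 + 4.0*t - 0.045*t**2 + np.random.default_rng(1).normal(0, 0.3, t.size)

p    = np.polyfit(t, v_kt * 1.68781, 4)     # fit speed in ft/s, 4th order
dpdt = np.polyder(p)                        # analytic derivative = accel

tf = np.arange(t[0], t[-1], 0.1)            # evaluate every 0.1 s, not 1 s
vf = np.polyval(p, tf)                      # speed, ft/s
af = np.polyval(dpdt, tf)                   # acceleration, ft/s^2

# Reserve-power rate-of-climb estimate from the level acceleration:
roc_fps = vf * af / g
i = np.argmax(roc_fps)
print(f"Vy estimate ~ {vf[i]/1.68781:.1f} kt at t = {tf[i]:.1f} s, "
      f"ROC ~ {roc_fps[i]*60:.0f} fpm")
```

Because the derivative is taken on the fitted curve rather than by differencing raw samples, the 0.1 s grid costs nothing and the peak is much easier to pick off.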
Since it can be set up in Excel and fed straight from the flight logger data, we can program the data reduction once and run it on each set of data. Why would you skip it if it were easy to do? I assure you guys, writing this note took me WAY longer than doing the math and programming it.
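And a sketch of the "program it once, run it on every data set" idea; the file names and two-column time,speed CSV layout are assumptions, not the article's format:

```python
import glob
import numpy as np

def reduce_run(path):
    """Fit one level-acceleration run; returns 4th-order polynomial coefs."""
    t, v = np.loadtxt(path, delimiter=",", unpack=True)   # time s, speed kt
    return np.polyfit(t, v, 4)

for path in sorted(glob.glob("accel_run_*.csv")):         # hypothetical names
    p = reduce_run(path)
    print(path, " ".join(f"{c:.4e}" for c in p))          # 4 sig figs, scientific
```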
Billski