Difference quotient

In single-variable calculus, the difference quotient is usually the name for the expression

  [f(x + h) − f(x)] / h,

which when taken to the limit as h approaches 0 gives the derivative of the function f.[1][2][3][4] The name of the expression stems from the fact that it is the quotient of the difference of values of the function by the difference of the corresponding values of its argument (the latter being (x + h) − x = h in this case).[5][6] The difference quotient is a measure of the average rate of change of the function over an interval (in this case, an interval of length h).[7][8]:237[9] The limit of the difference quotient (i.e., the derivative) is thus the instantaneous rate of change.[9]
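The limit behaviour can be checked numerically by evaluating the quotient for a shrinking step h; the function f(x) = x² is an assumed example here, with derivative 2x.

```python
def difference_quotient(f, x, h):
    """Average rate of change of f over the interval [x, x + h]."""
    return (f(x + h) - f(x)) / h

f = lambda x: x * x  # assumed example; f'(x) = 2x

# As h approaches 0, the quotient approaches f'(3) = 6.
for h in (1.0, 0.1, 0.001):
    print(h, difference_quotient(f, 3.0, h))
```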

By a slight change in notation (and viewpoint), for an interval [a, b], the difference quotient

  [f(b) − f(a)] / (b − a)

is called[5] the mean (or average) value of the derivative of f over the interval [a, b]. This name is justified by the mean value theorem, which states that for a differentiable function f, its derivative f′ reaches its mean value at some point in the interval.[5] Geometrically, this difference quotient measures the slope of the secant line passing through the points with coordinates (a, f(a)) and (b, f(b)).[10]
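As an illustration, the secant slope over an interval can be matched against the derivative at an interior point; the function sin on [0, π/2] is an assumed example, for which the mean value theorem gives cos(c) = 2/π at some interior c.

```python
import math

def secant_slope(f, a, b):
    """Slope of the secant line through (a, f(a)) and (b, f(b))."""
    return (f(b) - f(a)) / (b - a)

a, b = 0.0, math.pi / 2
slope = secant_slope(math.sin, a, b)   # (1 - 0) / (π/2) = 2/π

# Mean value theorem: some c in (a, b) satisfies sin'(c) = cos(c) = slope.
c = math.acos(slope)
print(slope, c)
```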

Difference quotients are used as approximations in numerical differentiation,[8] but they have also been the subject of criticism in this application.[11]

The difference quotient is sometimes also called the Newton quotient[10][12][13][14] (after Isaac Newton) or Fermat's difference quotient (after Pierre de Fermat).[15]

Overview

The typical notion of the difference quotient discussed above is a particular case of a more general concept. The primary vehicle of calculus and other higher mathematics is the function. Its "input value" is its argument, usually a point ("P") expressible on a graph. The difference between two points, themselves, is known as their Delta (ΔP), as is the difference in their function result, the particular notation being determined by the direction of formation:

  • Forward difference:  ΔF(P) = F(P + ΔP) − F(P);
  • Central difference:  δF(P) = F(P + ½ΔP) − F(P − ½ΔP);
  • Backward difference: ∇F(P) = F(P) − F(P − ΔP).
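The three orientations can be sketched directly; for a smooth F the central form is typically the most accurate, as this small assumed example with F = sin suggests.

```python
import math

def forward(F, P, dP):   return F(P + dP) - F(P)          # ΔF(P)
def central(F, P, dP):   return F(P + dP / 2) - F(P - dP / 2)  # δF(P)
def backward(F, P, dP):  return F(P) - F(P - dP)          # ∇F(P)

F, P, dP = math.sin, 1.0, 1e-3
exact = math.cos(P) * dP   # first-order estimate of the true difference

for name, d in (("forward", forward), ("central", central), ("backward", backward)):
    print(name, abs(d(F, P, dP) - exact))
```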

The general preference is the forward orientation, as F(P) is the base, to which differences (i.e., "ΔP"s) are added. Furthermore,

  • If |ΔP| is finite (meaning measurable), then ΔF(P) is known as a finite difference, with specific denotations of DP and DF(P);
  • If |ΔP| is infinitesimal (an infinitely small amount, usually expressed in standard analysis as a limit: ΔP → 0), then ΔF(P) is known as an infinitesimal difference, with specific denotations of dP and dF(P) (in calculus graphing, the point is almost exclusively identified as "x" and F(x) as "y").

The function difference divided by the point difference is known as the "difference quotient":

  ΔF(P) / ΔP = [F(P + ΔP) − F(P)] / ΔP.

If ΔP is infinitesimal, then the difference quotient is a derivative; otherwise it is a divided difference:

  If |ΔP| is infinitesimal:  ΔF(P) / ΔP = dF(P) / dP = F′(P);
  If |ΔP| is finite:         ΔF(P) / ΔP = DF(P) / DP = F[P, P + ΔP].
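A minimal sketch of the two cases, with an assumed F(P) = P³: a finite ΔP yields the divided difference F[P, P + ΔP], while shrinking ΔP recovers the derivative F′(P) = 3P².

```python
def difference_quotient(F, P, dP):
    """ΔF(P) / ΔP — a divided difference when dP is finite."""
    return (F(P + dP) - F(P)) / dP

F = lambda P: P ** 3   # assumed example; F'(P) = 3P²

# Finite ΔP: for this F, F[P, P + ΔP] = 3P² + 3PΔP + ΔP² exactly.
print(difference_quotient(F, 1.0, 0.5))    # 4.75

# ΔP -> 0: the quotient tends to the derivative F'(1) = 3.
print(difference_quotient(F, 1.0, 1e-7))
```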

Defining the point range

Regardless of whether ΔP is infinitesimal or finite, there is (at least, in the case of the derivative, theoretically) a point range, where the boundaries are P ± (0.5)ΔP (depending on the orientation: ΔF(P), δF(P) or ∇F(P)):

LB = Lower Boundary;   UB = Upper Boundary;

  • Forward:  LB = P,        UB = P + ΔP;
  • Central:  LB = P − ½ΔP,  UB = P + ½ΔP;
  • Backward: LB = P − ΔP,   UB = P.

Derivatives can be regarded as functions themselves, harboring their own derivatives. Thus each function is home to sequential degrees ("higher orders") of derivation, or differentiation. This property can be generalized to all difference quotients.
As this sequencing requires a corresponding splintering of the boundaries, it is practical to break up the point range into smaller, equal-sized sections, with each section being marked by an intermediary point (Pi), where LB = P0 and UB = Pń, the nth point, equaling the degree/order:

  LB =  P0  = P0 + 0Δ1P     = Pń − (Ń-0)Δ1P;
        P1  = P0 + 1Δ1P     = Pń − (Ń-1)Δ1P;
        P2  = P0 + 2Δ1P     = Pń − (Ń-2)Δ1P;
        P3  = P0 + 3Δ1P     = Pń − (Ń-3)Δ1P;
            ↓      ↓        ↓       ↓
       Pń-3 = P0 + (Ń-3)Δ1P = Pń − 3Δ1P;
       Pń-2 = P0 + (Ń-2)Δ1P = Pń − 2Δ1P;
       Pń-1 = P0 + (Ń-1)Δ1P = Pń − 1Δ1P;
  UB = Pń-0 = P0 + (Ń-0)Δ1P = Pń − 0Δ1P = Pń;
  ΔP = Δ1P = P1 − P0 = P2 − P1 = P3 − P2 = ... = Pń − Pń-1;
  ΔB = UB − LB = Pń − P0 = ΔńP = ŃΔ1P.
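The partition above is straightforward to generate; a small sketch with assumed boundaries LB = 0, UB = 1 and Ń = 4 sections:

```python
def partition(LB, UB, N):
    """Split [LB, UB] into N equal sections, returning points P0 .. PN."""
    d1P = (UB - LB) / N          # Δ1P, the common spacing
    return [LB + i * d1P for i in range(N + 1)]

points = partition(0.0, 1.0, 4)
print(points)                    # [0.0, 0.25, 0.5, 0.75, 1.0]
```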

The primary difference quotient (Ń = 1)

As a derivative

The difference quotient as a derivative needs no explanation, other than to point out that, since P0 essentially equals P1 = P2 = ... = Pń (as the differences are infinitesimal), the Leibniz notation and derivative expressions do not distinguish P from P0 or Pń:

  dF(P) / dP = F′(P).

There are other derivative notations, but these are the most recognized, standard designations.

As a divided difference

A divided difference, however, does require further elucidation, as it equals the average derivative between and including LB and UB:

  F′(Pã) = [F(Pń) − F(P0)] / (Pń − P0) = ΔF(P) / ΔP.

In this interpretation, Pã represents a function-extracted average value of P (midrange, but usually not exactly midpoint), the particular valuation depending on the function it is averaged from. More formally, Pã is found in the mean value theorem of calculus, which says:

  For any function that is continuous on [LB, UB] and differentiable on (LB, UB), there exists some Pã in the interval (LB, UB) such that the secant joining the endpoints of the interval [LB, UB] is parallel to the tangent at Pã.

Essentially, Pã denotes some value of P between LB and UB; hence

  P0 < Pã < Pń,

which links the mean value result with the divided difference:

  F′(Pã) = F[P0, Pń].

As there is, by its very definition, a tangible difference between LB/P0 and UB/Pń, the Leibniz and derivative expressions do require divarication of the function argument.
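The Pã guaranteed by the mean value theorem can also be located numerically; bisection is an assumed method here, applied to the example F(P) = P³ on [0, 2], where F′(Pã) must equal the divided difference 4.

```python
def find_Pa(dF, target, lo, hi, tol=1e-12):
    """Bisection for a point where the derivative dF equals the target slope.
    Assumes dF is continuous and increasing on [lo, hi]."""
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if dF(mid) < target:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

F  = lambda P: P ** 3       # assumed example
dF = lambda P: 3 * P * P    # its derivative, increasing on [0, 2]

slope = (F(2.0) - F(0.0)) / (2.0 - 0.0)   # divided difference = 4.0
Pa = find_Pa(dF, slope, 0.0, 2.0)
print(Pa)                    # ≈ 2/sqrt(3)
```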

Higher-order difference quotients

Second order

  Δ²F(P0) / (Δ1P)² = [F(P2) − 2F(P1) + F(P0)] / (Δ1P)²

Third order

  Δ³F(P0) / (Δ1P)³ = [F(P3) − 3F(P2) + 3F(P1) − F(P0)] / (Δ1P)³

Ńth order

  Δ^Ń F(P0) / (Δ1P)^Ń = [ Σ k=0..Ń of (−1)^(Ń−k) C(Ń, k) F(Pk) ] / (Δ1P)^Ń,  where C(Ń, k) is the binomial coefficient.
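The Ńth-order forward difference quotient can be sketched with the standard alternating binomial sum; for the assumed example F(P) = P³ with unit spacing, the third-order quotient is the constant 3! = 6 and the fourth-order one vanishes.

```python
from math import comb

def nth_difference_quotient(F, P0, d1P, N):
    """N-th order forward difference quotient over points P0 .. PN."""
    diff = sum((-1) ** (N - k) * comb(N, k) * F(P0 + k * d1P)
               for k in range(N + 1))
    return diff / d1P ** N

F = lambda P: P ** 3   # assumed example

print(nth_difference_quotient(F, 0.0, 1.0, 3))   # 6.0 = 3!
print(nth_difference_quotient(F, 0.0, 1.0, 4))   # 0.0
```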

Applying the divided difference

The quintessential application of the divided difference is in the presentation of the definite integral, which is nothing more than a finite difference:

  ∫ from LB to UB of F′(P) dP = F(UB) − F(LB) = F′(Pã) · (UB − LB).
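This identity can be checked numerically: summing the derivative over a fine partition (a midpoint Riemann sum, an assumed method) reproduces the finite difference F(UB) − F(LB), here for the example F(P) = sin(P).

```python
import math

F, dF = math.sin, math.cos      # assumed example: F' = cos
LB, UB, n = 0.0, 1.0, 100_000

# Midpoint Riemann sum of the derivative over [LB, UB]...
h = (UB - LB) / n
integral = sum(dF(LB + (i + 0.5) * h) for i in range(n)) * h

# ...agrees with the finite difference F(UB) - F(LB) = sin(1) - sin(0).
print(integral, F(UB) - F(LB))
```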

Given that the mean value, derivative expression form provides all of the same information as the classical integral notation, the mean value form may be the preferable expression, such as in writing venues that only support/accept standard ASCII text, or in cases that only require the average derivative (such as when finding the average radius in an elliptic integral). This is especially true for definite integrals that technically have (e.g.) 0 and either or as boundaries, with the same divided difference found as that with boundaries of 0 and (thus requiring less averaging effort):

This also becomes particularly useful when dealing with iterated and multiple integrals (ΔA = AU − AL, ΔB = BU − BL, ΔC = CU − CL):

Hence,

and

See also

References

  1. Peter D. Lax; Maria Shea Terrell (2013). Calculus With Applications. Springer. p. 119. ISBN 978-1-4614-7946-8.
  2. Shirley O. Hockett; David Bock (2005). Barron's how to Prepare for the AP Calculus. Barron's Educational Series. p. 44. ISBN 978-0-7641-2382-5.
  3. Mark Ryan (2010). Calculus Essentials For Dummies. John Wiley & Sons. pp. 41–47. ISBN 978-0-470-64269-6.
  4. Karla Neal; R. Gustafson; Jeff Hughes (2012). Precalculus. Cengage Learning. p. 133. ISBN 978-0-495-82662-0.
  5. Michael Comenetz (2002). Calculus: The Elements. World Scientific. pp. 71–76 and 151–161. ISBN 978-981-02-4904-5.
  6. Moritz Pasch (2010). Essays on the Foundations of Mathematics by Moritz Pasch. Springer. p. 157. ISBN 978-90-481-9416-2.
  7. Frank C. Wilson; Scott Adamson (2008). Applied Calculus. Cengage Learning. p. 177. ISBN 978-0-618-61104-1.
  8. Tamara Lefcourt Ruby; James Sellers; Lisa Korf; Jeremy Van Horn; Mike Munn (2014). Kaplan AP Calculus AB & BC 2015. Kaplan Publishing. p. 299. ISBN 978-1-61865-686-5.
  9. Thomas Hungerford; Douglas Shaw (2008). Contemporary Precalculus: A Graphing Approach. Cengage Learning. pp. 211–212. ISBN 978-0-495-10833-7.
  10. Steven G. Krantz (2014). Foundations of Analysis. CRC Press. p. 127. ISBN 978-1-4822-2075-9.
  11. Andreas Griewank; Andrea Walther (2008). Evaluating Derivatives: Principles and Techniques of Algorithmic Differentiation, Second Edition. SIAM. pp. 2–. ISBN 978-0-89871-659-7.
  12. Serge Lang (1968). Analysis 1. Addison-Wesley Publishing Company. p. 56.
  13. Brian D. Hahn (1994). Fortran 90 for Scientists and Engineers. Elsevier. p. 276. ISBN 978-0-340-60034-4.
  14. Christopher Clapham; James Nicholson (2009). The Concise Oxford Dictionary of Mathematics. Oxford University Press. p. 313. ISBN 978-0-19-157976-9.
  15. Donald C. Benson (2003). A Smoother Pebble: Mathematical Explorations. Oxford University Press. p. 176.
This article is issued from Wikipedia. The text is licensed under Creative Commons - Attribution - Sharealike. Additional terms may apply for the media files.