In mathematics, the Carlson symmetric forms of elliptic integrals are a small canonical set of elliptic integrals to which all others may be reduced. They are a modern alternative to the Legendre forms. The Legendre forms may be expressed in terms of the Carlson forms and vice versa.
The Carlson elliptic integrals are:[1]
\[ R_F(x,y,z) = \tfrac{1}{2}\int_0^\infty \frac{dt}{\sqrt{(t+x)(t+y)(t+z)}} \]
\[ R_J(x,y,z,p) = \tfrac{3}{2}\int_0^\infty \frac{dt}{(t+p)\sqrt{(t+x)(t+y)(t+z)}} \]
\[ R_G(x,y,z) = \tfrac{1}{4}\int_0^\infty \frac{1}{\sqrt{(t+x)(t+y)(t+z)}}\left(\frac{x}{t+x}+\frac{y}{t+y}+\frac{z}{t+z}\right)t\,dt \]
\[ R_C(x,y) = R_F(x,y,y) = \tfrac{1}{2}\int_0^\infty \frac{dt}{(t+y)\sqrt{t+x}} \]
\[ R_D(x,y,z) = R_J(x,y,z,z) = \tfrac{3}{2}\int_0^\infty \frac{dt}{(t+z)\sqrt{(t+x)(t+y)(t+z)}} \]
Since R_C and R_D are special cases of R_F and R_J, all elliptic integrals can ultimately be evaluated in terms of just R_F, R_J and R_G.
The term symmetric refers to the fact that, in contrast to the Legendre forms, these functions are unchanged by the exchange of certain subsets of their arguments. The value of R_F(x,y,z) is the same for any permutation of its arguments, and the value of R_J(x,y,z,p) is the same for any permutation of its first three arguments.
The Carlson elliptic integrals are named after Bille C. Carlson (1924-2013).
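These symmetry properties can be checked numerically. The sketch below is illustrative only and assumes SciPy ≥ 1.8, whose scipy.special module exposes the Carlson forms as elliprc, elliprd, elliprf, elliprg and elliprj; these function names are SciPy's, not part of the definitions above.

from scipy.special import elliprf, elliprj   # assumed available in SciPy >= 1.8

x, y, z, p = 1.0, 2.0, 3.0, 4.0
# R_F is unchanged under any permutation of its three arguments
print(elliprf(x, y, z), elliprf(z, x, y), elliprf(y, z, x))
# R_J is unchanged under permutations of its first three arguments only
print(elliprj(x, y, z, p), elliprj(z, x, y, p), elliprj(y, z, x, p))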
Incomplete elliptic integrals
Incomplete elliptic integrals can be calculated easily using Carlson symmetric forms:
\[ F(\phi,k) = \sin\phi\, R_F\left(\cos^2\phi,\, 1-k^2\sin^2\phi,\, 1\right) \]
\[ E(\phi,k) = \sin\phi\, R_F\left(\cos^2\phi,\, 1-k^2\sin^2\phi,\, 1\right) - \tfrac{1}{3}k^2\sin^3\phi\, R_D\left(\cos^2\phi,\, 1-k^2\sin^2\phi,\, 1\right) \]
\[ \Pi(\phi,n,k) = \sin\phi\, R_F\left(\cos^2\phi,\, 1-k^2\sin^2\phi,\, 1\right) + \tfrac{1}{3}n\sin^3\phi\, R_J\left(\cos^2\phi,\, 1-k^2\sin^2\phi,\, 1,\, 1-n\sin^2\phi\right) \]
(Note: the above are valid only for \( -\tfrac{\pi}{2} \le \phi \le \tfrac{\pi}{2} \) and \( 0 \le k^2 \sin^2\phi \le 1 \).)
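As a hedged illustration of these reductions, the sketch below evaluates F, E and Π through the Carlson forms and cross-checks them against SciPy's Legendre-form routines and a direct quadrature. It assumes SciPy ≥ 1.8 for scipy.special.elliprf, elliprd and elliprj; note that SciPy's ellipkinc and ellipeinc take the parameter m = k².

import numpy as np
from scipy.special import elliprf, elliprd, elliprj, ellipkinc, ellipeinc
from scipy.integrate import quad

phi, k, n = 1.0, 0.8, 0.3
s, c, m = np.sin(phi), np.cos(phi), 0.8**2
F = s*elliprf(c**2, 1 - m*s**2, 1)
E = F - (m*s**3/3)*elliprd(c**2, 1 - m*s**2, 1)
P = F + (n*s**3/3)*elliprj(c**2, 1 - m*s**2, 1, 1 - n*s**2)

print(F, ellipkinc(phi, m))   # SciPy's Legendre-form routines use m = k^2
print(E, ellipeinc(phi, m))
# SciPy has no incomplete Pi, so compare with the defining integral instead
Pq, _ = quad(lambda t: 1/((1 - n*np.sin(t)**2)*np.sqrt(1 - m*np.sin(t)**2)), 0, phi)
print(P, Pq)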
Complete elliptic integrals
Complete elliptic integrals can be calculated by substituting φ = π/2:
\[ K(k) = R_F\left(0,\, 1-k^2,\, 1\right) \]
\[ E(k) = R_F\left(0,\, 1-k^2,\, 1\right) - \tfrac{1}{3}k^2\, R_D\left(0,\, 1-k^2,\, 1\right) \]
\[ \Pi(n,k) = R_F\left(0,\, 1-k^2,\, 1\right) + \tfrac{1}{3}n\, R_J\left(0,\, 1-k^2,\, 1,\, 1-n\right) \]
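The same reductions for the complete integrals can be sketched as follows; again this assumes SciPy ≥ 1.8 for the Carlson routines, and K and E are compared against scipy.special.ellipk/ellipe (which take m = k²), while Π(n,k) is compared against its defining integral.

import numpy as np
from scipy.special import elliprf, elliprd, elliprj, ellipk, ellipe
from scipy.integrate import quad

k, n = 0.8, 0.3
m = k**2
K = elliprf(0, 1 - m, 1)
E = K - (m/3)*elliprd(0, 1 - m, 1)
P = K + (n/3)*elliprj(0, 1 - m, 1, 1 - n)

print(K, ellipk(m))   # ellipk/ellipe take m = k^2
print(E, ellipe(m))
Pq, _ = quad(lambda t: 1/((1 - n*np.sin(t)**2)*np.sqrt(1 - m*np.sin(t)**2)), 0, np.pi/2)
print(P, Pq)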
Special cases
When any two, or all three, of the arguments of R_F are the same, then a substitution of \( \sqrt{t+x} = u \) renders the integrand rational. The integral can then be expressed in terms of elementary transcendental functions.
\[ R_C(x,y) = R_F(x,y,y) = \frac{1}{2}\int_0^\infty \frac{dt}{\sqrt{t+x}\,(t+y)} = \int_{\sqrt{x}}^\infty \frac{du}{u^2 - x + y} = \begin{cases} \dfrac{\arccos\sqrt{x/y}}{\sqrt{y-x}}, & x < y \\[1ex] \dfrac{1}{\sqrt{y}}, & x = y \\[1ex] \dfrac{\operatorname{arcosh}\sqrt{x/y}}{\sqrt{x-y}}, & x > y \end{cases} \]
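The piecewise elementary form above can be verified directly. In the sketch below, rc_elementary is an illustrative helper name, and scipy.special.elliprc (assumed SciPy ≥ 1.8) is used only as a reference value.

import numpy as np
from scipy.special import elliprc   # reference value, assumed SciPy >= 1.8

def rc_elementary(x, y):
    # piecewise elementary form above; assumes x >= 0 and y > 0
    if x < y:
        return np.arccos(np.sqrt(x/y))/np.sqrt(y - x)
    if x > y:
        return np.arccosh(np.sqrt(x/y))/np.sqrt(x - y)
    return 1/np.sqrt(x)

for x, y in [(0.5, 2.0), (2.0, 2.0), (3.0, 1.0)]:
    print(rc_elementary(x, y), elliprc(x, y))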
Similarly, when at least two of the first three arguments of R_J are the same,
\[ R_J(x,y,y,p) = 3\int_{\sqrt{x}}^\infty \frac{du}{(u^2-x+y)(u^2-x+p)} = \begin{cases} \dfrac{3}{p-y}\bigl(R_C(x,y) - R_C(x,p)\bigr), & y \ne p \\[1ex] \dfrac{3}{2(y-x)}\left(R_C(x,y) - \dfrac{\sqrt{x}}{y}\right), & y = p \ne x \\[1ex] \dfrac{1}{y^{3/2}}, & y = p = x \end{cases} \]
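The case formula can likewise be checked numerically. The helper rj_two_equal below is an illustrative name, and scipy.special.elliprc/elliprj (assumed SciPy ≥ 1.8) serve only as reference implementations.

import numpy as np
from scipy.special import elliprc, elliprj   # reference values, assumed SciPy >= 1.8

def rj_two_equal(x, y, p):
    # R_J(x, y, y, p) via the case formula above; positive arguments assumed
    if y != p:
        return 3/(p - y)*(elliprc(x, y) - elliprc(x, p))
    if x != y:
        return 3/(2*(y - x))*(elliprc(x, y) - np.sqrt(x)/y)
    return x**-1.5

for x, y, p in [(1.0, 2.0, 3.0), (1.0, 2.0, 2.0), (2.0, 2.0, 2.0)]:
    print(rj_two_equal(x, y, p), elliprj(x, y, y, p))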
Homogeneity
By substituting in the integral definitions \( t = \kappa u \) for any constant κ, it is found that
\[ R_F(\kappa x,\, \kappa y,\, \kappa z) = \kappa^{-1/2}\, R_F(x,y,z) \]
\[ R_J(\kappa x,\, \kappa y,\, \kappa z,\, \kappa p) = \kappa^{-3/2}\, R_J(x,y,z,p) \]
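A two-line numerical check of these scaling relations, again assuming SciPy ≥ 1.8 for scipy.special.elliprf and elliprj:

from scipy.special import elliprf, elliprj   # assumed SciPy >= 1.8

x, y, z, p, kappa = 1.0, 2.0, 3.0, 4.0, 2.5
print(elliprf(kappa*x, kappa*y, kappa*z), kappa**-0.5*elliprf(x, y, z))               # equal
print(elliprj(kappa*x, kappa*y, kappa*z, kappa*p), kappa**-1.5*elliprj(x, y, z, p))   # equal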
Duplication theorem
\[ R_F(x,y,z) = 2R_F(x+\lambda,\, y+\lambda,\, z+\lambda) = R_F\left(\frac{x+\lambda}{4}, \frac{y+\lambda}{4}, \frac{z+\lambda}{4}\right), \]
where \( \lambda = \sqrt{x}\sqrt{y} + \sqrt{y}\sqrt{z} + \sqrt{z}\sqrt{x} \).
\[ \begin{aligned} R_J(x,y,z,p) &= 2R_J(x+\lambda,\, y+\lambda,\, z+\lambda,\, p+\lambda) + 6R_C\bigl(d^2,\, d^2 + (p-x)(p-y)(p-z)\bigr) \\ &= \tfrac{1}{4}R_J\left(\frac{x+\lambda}{4}, \frac{y+\lambda}{4}, \frac{z+\lambda}{4}, \frac{p+\lambda}{4}\right) + 6R_C\bigl(d^2,\, d^2 + (p-x)(p-y)(p-z)\bigr) \end{aligned} \][2]
where \( d = (\sqrt{p}+\sqrt{x})(\sqrt{p}+\sqrt{y})(\sqrt{p}+\sqrt{z}) \) and \( \lambda = \sqrt{x}\sqrt{y} + \sqrt{y}\sqrt{z} + \sqrt{z}\sqrt{x} \).
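Both duplication identities can be verified numerically; the sketch below assumes SciPy ≥ 1.8 for scipy.special.elliprf, elliprj and elliprc.

import numpy as np
from scipy.special import elliprf, elliprj, elliprc   # assumed SciPy >= 1.8

x, y, z, p = 1.0, 2.0, 3.0, 4.0
lam = np.sqrt(x)*np.sqrt(y) + np.sqrt(y)*np.sqrt(z) + np.sqrt(z)*np.sqrt(x)
# R_F duplication: all three expressions agree
print(elliprf(x, y, z),
      2*elliprf(x + lam, y + lam, z + lam),
      elliprf((x + lam)/4, (y + lam)/4, (z + lam)/4))
# R_J duplication
d = (np.sqrt(p) + np.sqrt(x))*(np.sqrt(p) + np.sqrt(y))*(np.sqrt(p) + np.sqrt(z))
e = (p - x)*(p - y)*(p - z)
print(elliprj(x, y, z, p),
      2*elliprj(x + lam, y + lam, z + lam, p + lam) + 6*elliprc(d**2, d**2 + e))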
Series expansion
In obtaining a Taylor series expansion for R_F or R_J it proves convenient to expand about the mean value of the several arguments. So for R_F, letting the mean value of the arguments be A = (x + y + z)/3, and using homogeneity, define Δx, Δy and Δz by
\[ \begin{aligned} R_F(x,y,z) &= R_F\bigl(A(1-\Delta x),\, A(1-\Delta y),\, A(1-\Delta z)\bigr) \\ &= \frac{1}{\sqrt{A}}\, R_F(1-\Delta x,\, 1-\Delta y,\, 1-\Delta z) \end{aligned} \]
that is, Δx = 1 − x/A etc. The differences Δx, Δy and Δz are defined with this sign (such that they are subtracted), in order to be in agreement with Carlson's papers. Since R_F(x,y,z) is symmetric under permutation of x, y and z, it is also symmetric in the quantities Δx, Δy and Δz. It follows that both the integrand of R_F and its integral can be expressed as functions of the elementary symmetric polynomials in Δx, Δy and Δz, which are
\[ E_1 = \Delta x + \Delta y + \Delta z = 0 \]
\[ E_2 = \Delta x\,\Delta y + \Delta y\,\Delta z + \Delta z\,\Delta x \]
\[ E_3 = \Delta x\,\Delta y\,\Delta z \]
Expressing the integrand in terms of these polynomials, performing a multidimensional Taylor expansion and integrating term-by-term...
\[ \begin{aligned} R_F(x,y,z) &= \frac{1}{2\sqrt{A}}\int_0^\infty \frac{dt}{\sqrt{(t+1)^3 - (t+1)^2 E_1 + (t+1)E_2 - E_3}} \\ &= \frac{1}{2\sqrt{A}}\int_0^\infty \left( \frac{1}{(t+1)^{3/2}} - \frac{E_2}{2(t+1)^{7/2}} + \frac{E_3}{2(t+1)^{9/2}} + \frac{3E_2^2}{8(t+1)^{11/2}} - \frac{3E_2E_3}{4(t+1)^{13/2}} + O(E_1) + O(\Delta^6) \right) dt \\ &= \frac{1}{\sqrt{A}}\left( 1 - \frac{1}{10}E_2 + \frac{1}{14}E_3 + \frac{1}{24}E_2^2 - \frac{3}{44}E_2E_3 + O(E_1) + O(\Delta^6) \right) \end{aligned} \]
The advantage of expanding about the mean value of the arguments is now apparent; it reduces E_1 identically to zero, and so eliminates all terms involving E_1, which otherwise would be the most numerous.
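For nearly equal arguments the truncated series already gives a close approximation, since the Δ's are small and the error is of order Δ^6. A brief check against scipy.special.elliprf (assumed SciPy ≥ 1.8):

import numpy as np
from scipy.special import elliprf   # reference value, assumed SciPy >= 1.8

x, y, z = 1.0, 1.05, 0.93           # nearly equal arguments, so the Deltas are small
A = (x + y + z)/3
dx, dy, dz = 1 - x/A, 1 - y/A, 1 - z/A
E2 = dx*dy + dy*dz + dz*dx
E3 = dx*dy*dz
series = (1 - E2/10 + E3/14 + E2**2/24 - 3*E2*E3/44)/np.sqrt(A)
print(series, elliprf(x, y, z))     # agree up to the O(Delta^6) truncation error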
An ascending series for R_J may be found in a similar way. There is a slight difficulty because R_J is not fully symmetric; its dependence on its fourth argument, p, is different from its dependence on x, y and z. This is overcome by treating R_J as a fully symmetric function of five arguments, two of which happen to have the same value p. The mean value of the arguments is therefore taken to be A = (x + y + z + 2p)/5, and the differences Δx, Δy, Δz and Δp are defined by
\[ \begin{aligned} R_J(x,y,z,p) &= R_J\bigl(A(1-\Delta x),\, A(1-\Delta y),\, A(1-\Delta z),\, A(1-\Delta p)\bigr) \\ &= \frac{1}{A^{3/2}}\, R_J(1-\Delta x,\, 1-\Delta y,\, 1-\Delta z,\, 1-\Delta p) \end{aligned} \]
The elementary symmetric polynomials in Δx, Δy, Δz, Δp and (again) Δp are, in full,
\[ E_1 = \Delta x + \Delta y + \Delta z + 2\Delta p = 0 \]
\[ E_2 = \Delta x\,\Delta y + \Delta y\,\Delta z + 2\Delta z\,\Delta p + \Delta p^2 + 2\Delta p\,\Delta x + \Delta x\,\Delta z + 2\Delta y\,\Delta p \]
\[ E_3 = \Delta z\,\Delta p^2 + \Delta x\,\Delta p^2 + 2\Delta x\,\Delta y\,\Delta p + \Delta x\,\Delta y\,\Delta z + 2\Delta y\,\Delta z\,\Delta p + \Delta y\,\Delta p^2 + 2\Delta x\,\Delta z\,\Delta p \]
\[ E_4 = \Delta y\,\Delta z\,\Delta p^2 + \Delta x\,\Delta z\,\Delta p^2 + \Delta x\,\Delta y\,\Delta p^2 + 2\Delta x\,\Delta y\,\Delta z\,\Delta p \]
\[ E_5 = \Delta x\,\Delta y\,\Delta z\,\Delta p^2 \]
However, it is possible to simplify the formulae for E_2, E_3 and E_4 using the fact that E_1 = 0. Expressing the integrand in terms of these polynomials, performing a multidimensional Taylor expansion and integrating term-by-term as before...
\[ \begin{aligned} R_J(x,y,z,p) &= \frac{3}{2A^{3/2}}\int_0^\infty \frac{dt}{\sqrt{(t+1)^5 - (t+1)^4 E_1 + (t+1)^3 E_2 - (t+1)^2 E_3 + (t+1)E_4 - E_5}} \\ &= \frac{3}{2A^{3/2}}\int_0^\infty \left( \frac{1}{(t+1)^{5/2}} - \frac{E_2}{2(t+1)^{9/2}} + \frac{E_3}{2(t+1)^{11/2}} + \frac{3E_2^2 - 4E_4}{8(t+1)^{13/2}} + \frac{2E_5 - 3E_2E_3}{4(t+1)^{15/2}} + O(E_1) + O(\Delta^6) \right) dt \\ &= \frac{1}{A^{3/2}}\left( 1 - \frac{3}{14}E_2 + \frac{1}{6}E_3 + \frac{9}{88}E_2^2 - \frac{3}{22}E_4 - \frac{9}{52}E_2E_3 + \frac{3}{26}E_5 + O(E_1) + O(\Delta^6) \right) \end{aligned} \]
As with R_F, by expanding about the mean value of the arguments, more than half the terms (those involving E_1) are eliminated.
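The R_J series can be checked in the same way, with the E's computed directly as elementary symmetric polynomials of the five quantities Δx, Δy, Δz, Δp, Δp. The sketch below assumes SciPy ≥ 1.8 for scipy.special.elliprj, used only as a reference value.

import numpy as np
from itertools import combinations
from math import prod
from scipy.special import elliprj   # reference value, assumed SciPy >= 1.8

x, y, z, p = 1.0, 1.04, 0.97, 1.02
A = (x + y + z + 2*p)/5
deltas = [1 - x/A, 1 - y/A, 1 - z/A, 1 - p/A, 1 - p/A]   # Delta p counted twice

def esp(k):
    # k-th elementary symmetric polynomial of the five Deltas
    return sum(prod(c) for c in combinations(deltas, k))

E2, E3, E4, E5 = esp(2), esp(3), esp(4), esp(5)
series = (1 - 3*E2/14 + E3/6 + 9*E2**2/88 - 3*E4/22 - 9*E2*E3/52 + 3*E5/26)/A**1.5
print(series, elliprj(x, y, z, p))  # agree up to the O(Delta^6) truncation error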
Negative arguments
In general, the arguments x, y, z of Carlson's integrals may not be real and negative, as this would place a branch point on the path of integration, making the integral ambiguous. However, if the second argument of R_C, or the fourth argument, p, of R_J, is negative, then this results in a simple pole on the path of integration. In these cases the Cauchy principal value (finite part) of the integrals may be of interest; these are
\[ \mathrm{p.v.}\; R_C(x, -y) = \sqrt{\frac{x}{x+y}}\, R_C(x+y,\, y), \]
and
\[ \begin{aligned} \mathrm{p.v.}\; R_J(x,y,z,-p) &= \frac{(q-y)\,R_J(x,y,z,q) - 3R_F(x,y,z) + 3\sqrt{y}\,R_C(xz,\, -pq)}{y+p} \\ &= \frac{(q-y)\,R_J(x,y,z,q) - 3R_F(x,y,z) + 3\sqrt{\dfrac{xyz}{xz+pq}}\,R_C(xz+pq,\, pq)}{y+p} \end{aligned} \]
where \( q = y + \dfrac{(z-y)(y-x)}{y+p} \), which must be greater than zero for R_J(x,y,z,q) to be evaluated. This may be arranged by permuting x, y and z so that the value of y is between that of x and z.
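The second form of the principal-value relation involves only non-negative arguments, so it can be compared against a direct principal-value quadrature. The sketch below assumes SciPy ≥ 1.8 for the Carlson routines and uses scipy.integrate.quad with the Cauchy weight to handle the pole.

import numpy as np
from scipy.special import elliprf, elliprj, elliprc   # assumed SciPy >= 1.8
from scipy.integrate import quad

x, y, z, p = 1.0, 2.0, 3.0, 0.5     # y lies between x and z, so q > 0; fourth argument is -p
q = y + (z - y)*(y - x)/(y + p)
pv = ((q - y)*elliprj(x, y, z, q) - 3*elliprf(x, y, z)
      + 3*np.sqrt(x*y*z/(x*z + p*q))*elliprc(x*z + p*q, p*q))/(y + p)

# Direct check: Cauchy principal value of (3/2) * integral of dt/((t - p) sqrt((t+x)(t+y)(t+z)))
g = lambda t: 1.5/np.sqrt((t + x)*(t + y)*(t + z))
near, _ = quad(g, 0, 2*p, weight='cauchy', wvar=p)          # principal value across the pole at t = p
far, _ = quad(lambda t: g(t)/(t - p), 2*p, np.inf)
print(pv, near + far)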
Numerical evaluation
The duplication theorem can be used for a fast and robust evaluation of the Carlson symmetric forms of elliptic integrals, and therefore also for the evaluation of the Legendre forms of elliptic integrals. Let us calculate R_F(x, y, z):
first, define x_0 = x, y_0 = y and z_0 = z. Then iterate the recurrences
\[ \lambda_n = \sqrt{x_n}\sqrt{y_n} + \sqrt{y_n}\sqrt{z_n} + \sqrt{z_n}\sqrt{x_n}, \]
\[ x_{n+1} = \frac{x_n + \lambda_n}{4}, \qquad y_{n+1} = \frac{y_n + \lambda_n}{4}, \qquad z_{n+1} = \frac{z_n + \lambda_n}{4} \]
until the desired precision is reached: if x, y and z are non-negative, all three sequences converge quickly to a common value, say μ. Therefore,
\[ R_F(x,y,z) = R_F(\mu, \mu, \mu) = \mu^{-1/2}. \]
Evaluating R_C(x,y) is much the same due to the relation
\[ R_C(x,y) = R_F(x,y,y). \]
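A minimal sketch of this iteration follows; rf_duplication is an illustrative helper name (not a standard routine), and scipy.special.elliprf (assumed SciPy ≥ 1.8) is used only as a reference value. Production implementations stop the iteration earlier and apply the series expansion above, but simply iterating to convergence, as described here, already gives full double precision.

import numpy as np
from scipy.special import elliprf   # reference value, assumed SciPy >= 1.8

def rf_duplication(x, y, z, rtol=1e-14):
    # R_F by the duplication iteration described above; expects non-negative
    # arguments with at most one of them zero.
    while True:
        lam = np.sqrt(x)*np.sqrt(y) + np.sqrt(y)*np.sqrt(z) + np.sqrt(z)*np.sqrt(x)
        x, y, z = (x + lam)/4, (y + lam)/4, (z + lam)/4
        mu = (x + y + z)/3
        if max(abs(x - mu), abs(y - mu), abs(z - mu)) < rtol*mu:
            return 1/np.sqrt(mu)

# Example: K(0.8) = R_F(0, 1 - 0.8**2, 1)
print(rf_duplication(0.0, 0.36, 1.0), elliprf(0.0, 0.36, 1.0))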
References and External links
B. C. Carlson, John L. Gustafson, 'Asymptotic approximations for symmetric elliptic integrals', 1993, arXiv.
B. C. Carlson, 'Numerical Computation of Real or Complex Elliptic Integrals', 1994, arXiv.
B. C. Carlson, 'Elliptic Integrals: Symmetric Integrals', Chap. 19 of Digital Library of Mathematical Functions. Release date 2010-05-07. National Institute of Standards and Technology.
'Profile: Bille C. Carlson' in Digital Library of Mathematical Functions. National Institute of Standards and Technology.
Press, WH; Teukolsky, SA; Vetterling, WT; Flannery, BP (2007), "Section 6.12. Elliptic Integrals and Jacobian Elliptic Functions", Numerical Recipes: The Art of Scientific Computing (3rd ed.), New York: Cambridge University Press, ISBN 978-0-521-88068-8, archived from the original on 2011-08-11, retrieved 2011-08-10.
Fortran code from SLATEC for evaluating RF, RJ, RC, RD.