Computer Assisted Mathematics (2014)
Created using 授業資料/03 as a template
* Solving Systems of Linear Equations [#tdae5197]
** LU Decomposition [#f759ffe4]
Recasting the sweep-out method (Gaussian elimination) properly in terms of linear algebra, and in as convenient a form as ...
*** Sample Program [#f0e1cf05]
// program source notation
#highlighter(language=ruby,number=on,cache=off){{
# LU decomposition test program.
# Solve a problem ax=b where a is a matrix and b is a vector.
include Math
# size of the problem
n = 4
# matrix
a = [
  [ 3.0, 3.0, -5.0, -6.0],
  [ 1.0, 2.0, -3.0, -1.0],
  [ 2.0, 3.0, -5.0, -3.0],
  [-1.0, 0.0,  0.0,  1.0]
]
# vector
b = [1.0, 2.0, 3.0, 4.0]
# LU decomposition without pivot operation.
# Note that we don't need the vector b, and the computation cost is of order n^3.
for k in (1..(n-1))
  for i in (k+1..n)
    a[i-1][k-1] /= a[k-1][k-1]
    for j in (k+1..n)
      a[i-1][j-1] -= a[i-1][k-1]*a[k-1][j-1]
    end
  end
end
# We obtain y=L^(-1)b by a forward substitution.
# Its loop is double, which means its computation cost is of order n^2.
y = [0.0, 0.0, 0.0, 0.0]
for i in (1..n)
  y[i-1] = b[i-1]
  if i < n
    for j in ((i+1)..n)
      b[j-1] -= a[j-1][i-1]*b[i-1]
    end
  end
end
# Obtain the solution x=U^(-1)y by a backward substitution.
# Its computation cost is also of order n^2.
x = [0.0, 0.0, 0.0, 0.0]
n.downto(1) { |i|
  x[i-1] = y[i-1]/a[i-1][i-1]
  if i > 1
    (i-1).downto(1){ |j|
      y[j-1] -= (a[j-1][i-1]/a[i-1][i-1])*y[i-1]
    }
  end
}
# result
p x
# Sample result
# [-4.999999999999999, 8.881784197001252e-16, -1.99999999...
}}
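As a quick sanity check that is not part of the original program, the factors produced by the in-place loop above can be multiplied back together: the strict lower part of the overwritten array is L (with an implicit unit diagonal) and the upper part including the diagonal is U. A standalone sketch, repeating the sample matrix:

```ruby
# Sketch: verify an in-place LU factorization by recomputing L*U.
# L and U share one array: L is the strict lower part (unit diagonal),
# U is the upper part including the diagonal.
n = 4
orig = [
  [ 3.0, 3.0, -5.0, -6.0],
  [ 1.0, 2.0, -3.0, -1.0],
  [ 2.0, 3.0, -5.0, -3.0],
  [-1.0, 0.0,  0.0,  1.0]
]
a = orig.map { |row| row.dup }
# Same LU loop as in the sample program (no pivoting).
for k in (1..(n-1))
  for i in (k+1..n)
    a[i-1][k-1] /= a[k-1][k-1]
    for j in (k+1..n)
      a[i-1][j-1] -= a[i-1][k-1]*a[k-1][j-1]
    end
  end
end
# Rebuild L*U and record the largest deviation from the original matrix.
max_err = 0.0
for i in (0...n)
  for j in (0...n)
    s = 0.0
    for k in (0...n)
      l = (i == k) ? 1.0 : (i > k ? a[i][k] : 0.0)
      u = (k <= j) ? a[k][j] : 0.0
      s += l*u
    end
    max_err = [max_err, (s - orig[i][j]).abs].max
  end
end
puts "max |(LU)_ij - a_ij| = #{max_err}"
```

The deviation should be on the order of machine epsilon, since no pivoting means the factorization reproduces the matrix exactly up to rounding.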
** Steepest Descent Method [#l1699d2e]
When the matrix A is symmetric positive definite, for the true solution x of Ax = b, S(a...
This S(a) is nonnegative and is zero only when a = x.
Using this fact, one iteratively updates the vector a so that S(a) becomes smaller...
Following this idea, the method that updates a in the "direction that decreases S(a) fastest"...
&ref(/materials/warning.png); The direction that decreases S(a) fastest is...
By the way, the method that chooses this direction "a little more cleverly" is the CG method described below.
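The definition of S(a) is cut off above; for symmetric positive definite A, the quadratic S(a) = (a - x)^T A (a - x) matches the stated properties (nonnegative, zero only at a = x), so the sketch below assumes that form. It checks numerically that the gradient of S at a trial point is 2(Aa - b), i.e. that the residual b - Aa points in the steepest-descent direction. The 2x2 matrix and trial point here are illustrative, not from the original page:

```ruby
# Sketch (assumes S(a) = (a-x)^T A (a-x), which matches the properties above):
# compare a finite-difference gradient of S with the analytic 2*(A a - b).
a2 = [[4.0, 1.0], [1.0, 3.0]]   # small symmetric positive definite example
b2 = [1.0, 2.0]
x2 = [1.0/11.0, 7.0/11.0]       # exact solution of a2 * x2 = b2 (det = 11)

s = lambda do |v|
  d  = [v[0] - x2[0], v[1] - x2[1]]
  ad = [a2[0][0]*d[0] + a2[0][1]*d[1], a2[1][0]*d[0] + a2[1][1]*d[1]]
  d[0]*ad[0] + d[1]*ad[1]
end

v = [2.0, -1.0]                 # arbitrary trial point
h = 1.0e-6
grad = (0..1).map do |i|
  vp = v.dup; vp[i] += h
  vm = v.dup; vm[i] -= h
  (s.call(vp) - s.call(vm)) / (2.0*h)
end
av = [a2[0][0]*v[0] + a2[0][1]*v[1], a2[1][0]*v[0] + a2[1][1]*v[1]]
analytic = [2.0*(av[0] - b2[0]), 2.0*(av[1] - b2[1])]
p grad      # finite-difference gradient of S at v
p analytic  # 2*(A v - b): steepest ascent; the residual points the opposite way
```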
*** Sample Program [#v5866189]
// program source notation
#highlighter(language=ruby,number=on,cache=off){{
# Steepest descent method to solve Ax = b where A: matrix, b: vector.
include Math
##### constants #####
# Allowed error which is evaluated by norm.
Eps = 1.0e-08
# Matrix
A = [
  [7.0,  1.0,  2.0,  3.0],
  [1.0, 10.0,  3.0,  4.0],
  [2.0,  3.0, 13.0,  6.0],
  [3.0,  4.0,  6.0, 16.0]
]
# Vector on the right hand side
B = [1.0, 2.0, 3.0, 4.0]
# size of the problem
N = A.size
##### Functions #####
### Function: inner product
def inner_product(x,y)
  z = 0.0
  for i in (1..N)
    z += x[i-1]*y[i-1]
  end
  return z
end
### Function: scalar multiple of a vector
def vc_coeff(c,b)
  z = Array.new(N).collect{ 0.0 }
  for i in (1..N)
    z[i-1] = c * b[i-1]
  end
  return z
end
### Function: sum of vectors
def vc_sum(a,b)
  c = Array.new(N).collect{ 0.0 }
  for i in (1..N)
    c[i-1] = a[i-1]+b[i-1]
  end
  return c
end
### Function: matrix * vector
def mtrx_vc_product(a,b)
  c = Array.new(N).collect{ 0.0 }
  for i in (1..N)
    for j in (1..N)
      c[i-1] += a[i-1][j-1]*b[j-1]
    end
  end
  return c
end
### Function: A * vector
def a_vc(b)
  return mtrx_vc_product(A,b)
end
### Function: residual = b - Ax
def residual(x)
  return vc_sum(B, vc_coeff(-1.0, a_vc(x)))
end
### Function: Euclidean norm of a vector
def norm(x)
  v = 0.0
  for i in (1..N)
    v += x[i-1]**2
  end
  return sqrt(v)
end
### Function: print results
def print_result(x)
  print "x = ["
  for i in (1..N-1)
    printf("%.6f",x[i-1])
    print ", "
  end
  printf("%.6f",x[N-1])
  print "], "
  print "residual= ",norm(residual(x)),"\n"
end
### Function: steepest descent iteration
def steepest_descent_iteration(x)
  r = residual(x)
  w = inner_product(r,r) / inner_product(r, a_vc(r))
  return vc_sum(x, vc_coeff(w,r))
end
##### main program #####
# initial values
x = Array.new(N).collect{ 1.0 }
print_result(x)
# loop to obtain a good x.
loop_times = 0
loop do
  if (norm(residual(x)) < Eps) then
    break
  else
    x = steepest_descent_iteration(x)
    print_result(x)
    loop_times += 1
  end
end
# result
print "\n\nYou obtain a numerical solution after ",loop_times," iterations.\n"
print_result(x)
print "\n"
}}
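The step size w = (r,r)/(r,Ar) used in steepest_descent_iteration is the exact one-dimensional minimizer along the residual direction. The standalone sketch below (reusing the matrix and right-hand side of the program above, with compact lambdas in place of the helper functions) checks this by perturbing w; it uses the quadratic f(v) = v^T A v - 2 b^T v, which differs from the error functional only by a constant and hence has the same minimizer:

```ruby
# Sketch: check that w = (r,r)/(r, A r) minimizes the quadratic along r.
# The matrix and right-hand side repeat the steepest-descent sample above.
a = [
  [7.0,  1.0,  2.0,  3.0],
  [1.0, 10.0,  3.0,  4.0],
  [2.0,  3.0, 13.0,  6.0],
  [3.0,  4.0,  6.0, 16.0]
]
b = [1.0, 2.0, 3.0, 4.0]

mv  = lambda { |m, v| m.map { |row| row.each_with_index.map { |e, j| e*v[j] }.sum } }
dot = lambda { |u, v| u.zip(v).map { |p, q| p*q }.sum }
# f differs from the error functional only by a constant, so it has the
# same minimizer along any line.
f = lambda { |v| dot.call(v, mv.call(a, v)) - 2.0*dot.call(b, v) }

x = [1.0, 1.0, 1.0, 1.0]
r = b.zip(mv.call(a, x)).map { |bi, ai| bi - ai }
w = dot.call(r, r) / dot.call(r, mv.call(a, r))

at = lambda { |t| x.each_with_index.map { |xi, i| xi + t*r[i] } }
puts "f(x + t r) for t = w-0.01, w, w+0.01:"
p [f.call(at.call(w - 0.01)), f.call(at.call(w)), f.call(at.call(w + 0.01))]
```

The middle value is the smallest of the three, confirming that moving either more or less than w along r gives a worse iterate.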
** CG Method [#fc9cfaef]
The steepest descent method, which improves a in the direction that decreases S(a) fastest, is greedy...
Concretely, with respect to the "direction" in which a was improved one step before, the next...
&ref(/materials/warning.png); The steepest descent method, with respect to the previous improvement direction...
&ref(/materials/warning.png); Two vectors x, y being A-orthogonal...
Now, if the improvement directions are made A-orthogonal in this way, then "in principle...
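The A-orthogonality of the CG search directions can be observed numerically. The standalone sketch below reimplements the CG recurrence from the sample program in compact form (lambdas instead of the helper functions; the variable names are new) and prints (p_i, A p_j) for every pair of generated directions; all of them should be near zero:

```ruby
# Sketch: successive CG search directions are A-orthogonal, (p_i, A p_j) ~ 0.
a = [
  [7.0,  1.0,  2.0,  3.0],
  [1.0, 10.0,  3.0,  4.0],
  [2.0,  3.0, 13.0,  6.0],
  [3.0,  4.0,  6.0, 16.0]
]
b = [1.0, 2.0, 3.0, 4.0]

mv   = lambda { |m, v| m.map { |row| row.each_with_index.map { |e, j| e*v[j] }.sum } }
dot  = lambda { |u, v| u.zip(v).map { |p, q| p*q }.sum }
axpy = lambda { |c, u, v| u.each_with_index.map { |ui, i| c*ui + v[i] } }

x = [1.0, 1.0, 1.0, 1.0]
r = b.zip(mv.call(a, x)).map { |bi, ai| bi - ai }
p_dir = r.dup
directions = [p_dir]
3.times do
  q = mv.call(a, p_dir)
  w = dot.call(r, r) / dot.call(p_dir, q)   # step size
  x = axpy.call(w, p_dir, x)                # x += w p
  r_new = axpy.call(-w, q, r)               # r -= w A p
  beta = dot.call(r_new, r_new) / dot.call(r, r)
  p_dir = axpy.call(beta, p_dir, r_new)     # p = r_new + beta p
  r = r_new
  directions << p_dir
end
# Check pairwise A-orthogonality of the collected directions.
directions.combination(2) do |u, v|
  printf("(p_i, A p_j) = %.2e\n", dot.call(u, mv.call(a, v)))
end
```

In exact arithmetic every pair would be exactly A-orthogonal; in floating point the printed values are tiny compared with the sizes of the vectors involved.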
*** Sample Program [#oe1111a1]
// program source notation
#highlighter(language=ruby,number=on,cache=off){{
# CG method to solve Ax = b where A: matrix, b: vector.
include Math
##### constants #####
# Allowed error which is evaluated by norm.
Eps = 1.0e-08
# Matrix
A = [
  [7.0,  1.0,  2.0,  3.0],
  [1.0, 10.0,  3.0,  4.0],
  [2.0,  3.0, 13.0,  6.0],
  [3.0,  4.0,  6.0, 16.0]
]
# Vector on the right hand side
B = [1.0, 2.0, 3.0, 4.0]
# size of the problem
N = A.size
##### Functions #####
### Function: inner product
def inner_product(x,y)
  z = 0.0
  for i in (1..N)
    z += x[i-1]*y[i-1]
  end
  return z
end
### Function: scalar multiple of a vector
def vc_coeff(c,b)
  z = Array.new(N).collect{ 0.0 }
  for i in (1..N)
    z[i-1] = c * b[i-1]
  end
  return z
end
### Function: sum of vectors
def vc_sum(a,b)
  c = Array.new(N).collect{ 0.0 }
  for i in (1..N)
    c[i-1] = a[i-1]+b[i-1]
  end
  return c
end
### Function: matrix * vector
def mtrx_vc_product(a,b)
  c = Array.new(N).collect{ 0.0 }
  for i in (1..N)
    for j in (1..N)
      c[i-1] += a[i-1][j-1]*b[j-1]
    end
  end
  return c
end
### Function: A * vector
def a_vc(b)
  return mtrx_vc_product(A,b)
end
### Function: residual = b - Ax
def residual(x)
  return vc_sum(B, vc_coeff(-1.0, a_vc(x)))
end
### Function: Euclidean norm of a vector
def norm(x)
  v = 0.0
  for i in (1..N)
    v += x[i-1]**2
  end
  return sqrt(v)
end
### Function: print results
def print_result(x)
  print "x = ["
  for i in (1..N-1)
    printf("%.6f",x[i-1])
    print ", "
  end
  printf("%.6f",x[N-1])
  print "], "
  print "residual= ",norm(residual(x)),"\n"
end
### Function: CG iteration
def cg_iteration(x,r,p,r_R,q)
  q_Q = inner_product(p,q)
  w = r_R/q_Q
  x_new = vc_sum(x, vc_coeff(w,p))
  r_new = vc_sum(r, vc_coeff((-1.0*w),q))
  r_R_new = inner_product(r_new, r_new)
  beta = r_R_new / r_R
  p_new = vc_sum(r_new, vc_coeff(beta,p))
  q_new = a_vc(p_new)
  return [x_new, r_new, p_new, r_R_new, q_new]
end
##### main program #####
# initial values
x = Array.new(N).collect{ 1.0 }
r = residual(x)
p = vc_coeff(1.0, r)
r_R = inner_product(r,r)
q = a_vc(p)
print_result(x)
# loop to obtain a good x.
loop_times = 0
loop do
  if (norm(residual(x)) < Eps) then
    break
  else
    x,r,p,r_R,q = cg_iteration(x,r,p,r_R,q)
    print_result(x)
    loop_times += 1
  end
end
# result
print "\n\nYou obtain a numerical solution after ",loop_times," iterations.\n"
print_result(x)
print "\n"
}}
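Since both sample programs solve the same system with the same tolerance, their iteration counts can be compared directly. The standalone sketch below reimplements both loops compactly (the variable names are new; the recurrences are the ones used above). For an n-dimensional SPD system, CG finishes in at most n steps up to rounding, while steepest descent typically needs noticeably more:

```ruby
# Sketch: compare iteration counts of steepest descent and CG on the
# same 4x4 system used by both sample programs above.
a = [
  [7.0,  1.0,  2.0,  3.0],
  [1.0, 10.0,  3.0,  4.0],
  [2.0,  3.0, 13.0,  6.0],
  [3.0,  4.0,  6.0, 16.0]
]
b = [1.0, 2.0, 3.0, 4.0]
eps = 1.0e-08

mv  = lambda { |m, v| m.map { |row| row.each_with_index.map { |e, j| e*v[j] }.sum } }
dot = lambda { |u, v| u.zip(v).map { |p, q| p*q }.sum }
res = lambda { |x| b.zip(mv.call(a, x)).map { |bi, ai| bi - ai } }
nrm = lambda { |v| Math.sqrt(dot.call(v, v)) }

# steepest descent
x = [1.0, 1.0, 1.0, 1.0]
sd_iters = 0
while nrm.call(res.call(x)) >= eps
  r = res.call(x)
  w = dot.call(r, r) / dot.call(r, mv.call(a, r))
  x = x.each_with_index.map { |xi, i| xi + w*r[i] }
  sd_iters += 1
end

# CG
x = [1.0, 1.0, 1.0, 1.0]
r = res.call(x)
p_dir = r.dup
cg_iters = 0
while nrm.call(r) >= eps
  q = mv.call(a, p_dir)
  w = dot.call(r, r) / dot.call(p_dir, q)
  x = x.each_with_index.map { |xi, i| xi + w*p_dir[i] }
  r_new = r.each_with_index.map { |ri, i| ri - w*q[i] }
  beta = dot.call(r_new, r_new) / dot.call(r, r)
  p_dir = r_new.each_with_index.map { |ri, i| ri + beta*p_dir[i] }
  r = r_new
  cg_iters += 1
end

puts "steepest descent: #{sd_iters} iterations, CG: #{cg_iters} iterations"
```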
// ━┃┏┓┛┗┣┳┫┻╋
// Command-line input: "start the line with >".
// Command-line output: "start the line with a blank".
// Exercise icon
// &ref(/materials/notes.png);
// Warning icon
// &ref(/materials/warning.png);
// Link icon
// &ref(/materials/JNorth_arrow-right-sm.png);
// OK icon
// &ref(/materials/OK.png);
// NG icon
// &ref(/materials/NG.png);
// Sample icon
// &ref(/materials/Gnome-Preferences.png);
// Emphasis in large type
// CENTER:&size(24){''hogehoge''};
// program source notation
// #highlighter(language=ruby,number=on,cache=on){{}}
// Pipe
// ¦#contents