python - Optimizing Cython code for a numpy variance calculation

I am trying to optimize my Cython code, and there still seems to be plenty of room for improvement. Here is part of a profile produced with the %prun extension in the IPython notebook:

 7016695 function calls in 18.475 seconds

   Ordered by: internal time

   ncalls  tottime  percall  cumtime  percall filename:lineno(function)
   400722    7.723    0.000   15.086    0.000 _methods.py:73(_var)
   814815    4.190    0.000    4.190    0.000 {method 'reduce' of 'numpy.ufunc' objects}
        1    1.855    1.855   18.475   18.475 {_cython_magic_aed83b9d1a706200aa6cef0b7577cf41.knn_alg}
   403683    0.838    0.000    1.047    0.000 _methods.py:39(_count_reduce_items)
   813031    0.782    0.000    0.782    0.000 {numpy.core.multiarray.array}
   398748    0.611    0.000   15.485    0.000 fromnumeric.py:2819(var)
   804405    0.556    0.000    1.327    0.000 numeric.py:462(asanyarray)
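For reference, a listing like the one above can be produced by profiling the call in a notebook cell, roughly as follows (the arguments are simply the ones knn_alg below expects, filled in with whatever data you have):

    %prun sim = knn_alg(temp, jan1, L, w, B)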

Seeing that my program spends almost 8 seconds just computing variances, I am hoping this can be sped up.

I am using np.var() to compute the variance of 1D arrays of length 404, thousands of times. I checked the C standard library and unfortunately there is no function for this, and I do not want to write my own in C.
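As a rough cross-check (my own inference from the profile and the code below, not something stated in the original post), the list comprehension shown later calls np.var() once per element of PC_l, which accounts for essentially all of the _var calls recorded above:

    400722 calls to _var  ≈  404 elements in PC_l  ×  ~990 passes through the outer loop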

1. Are there any other options? (See the sketch after this list.)

2. Is there any way to reduce the time spent on the second item in the listing (the 'reduce' method of numpy.ufunc objects)?
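For question 1, one numpy-only option (my own sketch, not part of the original post; PC_t and PC_l are the names used in the code below, and mahalanobis_1d is just an illustrative name): np.var(PC_l) does not depend on the loop variable pc, so it can be computed once per iteration and the whole distance column filled with vectorized operations:

    import numpy as np

    def mahalanobis_1d(PC_t, PC_l):
        # sqrt((PC_t - pc)**2 / var) == |PC_t - pc| / sqrt(var), so one np.var()
        # call per iteration is enough and the Python-level loop disappears.
        var_PC_l = np.var(PC_l)
        return np.abs(PC_t - PC_l) / np.sqrt(var_PC_l)

    # inside the loop: dk[:,0] = mahalanobis_1d(PC_t, PC_l)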

Here is my code, in case it helps to look at it:

cpdef knn_alg(np.ndarray[double, ndim=2] temp, np.ndarray[double, ndim=1] jan1, int L, int w, int B):

    cdef np.ndarray[double, ndim=3] lnn = np.zeros((L+1,temp.shape[1],365))

    lnn = lnn_alg(temp, L, w)

    cdef np.ndarray[double, ndim=2] sim = np.zeros((len(temp),temp.shape[1]))
    cdef np.ndarray [double, ndim=2] a = np.zeros((L+1,lnn.shape[1]))
    cdef int b
    cdef np.ndarray [double, ndim=2] c = np.zeros((L,lnn.shape[1]-3))
    cdef np.ndarray [double, ndim=2] lnn_scale = np.zeros((L,lnn.shape[1]))
    cdef np.ndarray [double, ndim=2] cov_t = np.zeros((3,3))
    cdef np.ndarray [double, ndim=2] dk = np.zeros((L,4))
    cdef int random_selection
    cdef np.ndarray [double, ndim=1] day_month
    cdef int day_of_year
    cdef np.ndarray [double, ndim=2] lnn_scaled
    cdef np.ndarray [double, ndim=2] temp_scaled
    cdef np.ndarray [double, ndim=2] eig_vec
    cdef double PC_t
    cdef np.ndarray [double, ndim=1] PC_l
    cdef double K
    cdef np.ndarray[double, ndim=2] knn
    cdef np.ndarray[double, ndim=1] val
    cdef np.ndarray[double, ndim=1] pn
    cdef double rand_num
    cdef int nn
    cdef int index
    cdef int inc
    cdef int i

    sim[0,:] = jan1

    for i in xrange(1,len(temp),B):

        #If leap day then randomly select feb 28 or mar 31
        if (temp[i,4]==2) & (temp[i,3]==29):
            random_selection = np.random.randint(0,1)
            day_month = np.array([[29,2],[1,3]])[random_selection]
        else:
            day_month = temp[i,3:5]

        #Convert day month to day of year for L+1 nearest neighbors selection
        current = datetime.datetime(2014, (<int>day_month[1]), (<int>day_month[0]))
        day_of_year = current.timetuple().tm_yday - 1

        #Take out current day from L+1 nearest neighbors
        a = lnn[:,:,day_of_year]
        b = np.where((a[:,3:6] == temp[i,3:6]).all(axis=-1))[0][0]
        c = np.delete(a,(b), axis=0)

        #Scale and center data from nearest neighbors and spatially averaged historical data
        lnn_scaled = scale(c[:,0:3])
        temp_scaled = scale(temp[:,0:3])

        #Calculate covariance matrix of nearest neighbors
        cov_t[:,:] = np.cov(lnn_scaled.T)

        #Calculate eigenvalues and vectors of covariance matrix
        eig_vec = eig(cov_t)[1]

        #Calculate principal components of scaled L nearest neighbors
        PC_t = np.dot(temp_scaled[i],eig_vec[0])
        PC_l = np.dot(lnn_scaled,eig_vec[0])

        #Calculate mahalanobis distance
        dk = np.zeros((404,4))
        dk[:,0] = np.array([sqrt((PC_t-pc)**2/np.var(PC_l)) for pc in PC_l])
        dk[:,1:4] = c[:,3:6]

        #Extract K nearest neighbors
        dk = dk[dk[:,0].argsort()]
        K = round(sqrt(L),0)
        knn = dk[0:(<int>K)]

        #Create probability density function
        val = np.array([1.0/k for k in range(1,len(knn)+1)])
        wk = val/(<int>val.sum())
        pn = wk.cumsum()

        #Select next days value from KNNs using probability density function with random value
        rand_num = np.random.rand(1)[0]
        nn = (abs(pn-rand_num)).argmin()
        index = np.where((temp[:,3:6] == knn[nn,1:4]).all(axis=-1))[0][0]

        if i+B > len(temp):
            inc = len(temp) - i
        else:
            inc = B

        if (index+B > len(temp)):
            index = len(temp)-B

        sim[i:i+inc,:] = temp[index:index+inc,:]

    return sim

The variance calculation happens in this line:

 dk[:,0] = np.array([sqrt((PC_t-pc)**2/np.var(PC_l)) for pc in PC_l])

Any suggestions would be very helpful, as I am new to Cython.

Solution:

I went through the calculation above, and I believe the reason it was running so slowly is that I was using np.var(), which is a Python (or numpy) function, so the loop could not be compiled to C. If anyone knows how to do this while still using numpy, let me know.

What I ended up doing was to go from coding the calculation like this:

dk[:,0] = np.array([sqrt((PC_t-pc)**2/np.var(PC_l)) for pc in PC_l])

to this, as a separate function:

cimport cython
cimport numpy as np
import numpy as np
from libc.math cimport sqrt as csqrt
from libc.math cimport pow as cpow
@cython.boundscheck(False)
@cython.cdivision(True)
cdef cy_mahalanobis(np.ndarray[double, ndim=1] PC_l, double PC_t):
    cdef unsigned int i,j,L
    L = PC_l.shape[0]
    cdef np.ndarray[double] dk = np.zeros(L)
    cdef double x,total,mean,var

    # First pass: mean of PC_l
    total = 0
    for i in xrange(L):
        x = PC_l[i]
        total = total + x
    mean = total / L

    # Second pass: population variance, matching np.var()'s default ddof=0
    total = 0
    for i in xrange(L):
        x = cpow(PC_l[i]-mean,2)
        total = total + x
    var = total / L

    # Distance of each element from PC_t, scaled by the standard deviation
    for j in xrange(L):
        dk[j] = csqrt(cpow(PC_t-PC_l[j],2)/var)

    return dk
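With that in place, the original line inside knn_alg's loop can presumably be replaced by a single call:

    dk[:,0] = cy_mahalanobis(PC_l, PC_t)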

And because I am not calling any Python functions (including numpy) inside it, the entire loop compiles to C (no yellow lines when using the annotation option, cython -a file.pyx, or %%cython -a in the IPython notebook).
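For anyone repeating that check (standard Cython/IPython usage, shown here only as a reminder): cython -a file.pyx writes an annotated file.html next to the source, and the same report appears inline when the cell starts with the magic:

    %%cython -a
    # paste the cimports and cy_mahalanobis here; any line that still interacts with
    # the Python interpreter is highlighted yellow in the annotation below the cell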

Overall my code ended up being an order of magnitude faster! It was well worth coding this by hand. My Cython (and my Python, for that matter) is not the greatest, so any additional suggestions or answers would be greatly appreciated.
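A quick way to put a number on that in the notebook (arguments again being whatever your data provides; knn_alg is cpdef, so it can be called directly from Python):

    %timeit -n 1 -r 3 knn_alg(temp, jan1, L, w, B)

Running this once with the old np.var() version and once with cy_mahalanobis wired in gives the before/after comparison.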
