Fast CUDA implementation of soft-DTW for PyTorch. Based on pytorch-softdtw but can run up to 100x faster! Both forward() and backward() passes are implemented using CUDA. My implementation is partly inspired by "Developing a pattern discovery method in time series data and its GPU acceleration", wherein a ...
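Soft-DTW replaces the hard `min` in the DTW recurrence with a smoothed soft-min controlled by a temperature `gamma`, which is what makes both passes differentiable. A minimal pure-Python sketch of the forward recurrence follows; it is illustrative only (function names are mine, and this is the sequential recurrence, not the repo's diagonal-parallel CUDA kernel):

```python
import math

def softmin(a, b, c, gamma):
    # Smoothed minimum: -gamma * log(exp(-a/gamma) + exp(-b/gamma) + exp(-c/gamma)).
    # Subtract the true min first for numerical stability with large/inf inputs.
    m = min(a, b, c)
    s = sum(math.exp(-(x - m) / gamma) for x in (a, b, c))
    return -gamma * math.log(s) + m

def soft_dtw(x, y, gamma=1.0):
    # Forward pass of soft-DTW for two 1-D sequences, squared-difference cost.
    n, m = len(x), len(y)
    INF = float("inf")
    R = [[INF] * (m + 1) for _ in range(n + 1)]
    R[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = (x[i - 1] - y[j - 1]) ** 2
            R[i][j] = cost + softmin(R[i - 1][j - 1], R[i - 1][j], R[i][j - 1], gamma)
    return R[n][m]
```

As `gamma` approaches 0 the soft-min approaches the hard min and `soft_dtw` approaches classic DTW; for `gamma > 0` the result can be slightly below the hard-DTW value, since the soft-min is a lower bound on `min`.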
1. DTW (dynamic time warping) & KNN. Before deep learning became widespread, combining DTW with KNN was a very useful approach to time-series classification. When matching two different time series, we may see by eye that they are highly similar, yet because some segments are stretched or compressed, or the whole series is shifted along the time axis, directly computing the Euclidean distance between them usually works poorly, and ...
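The point about stretching and time-axis shifts can be made concrete with the classic quadratic-time DTW recurrence, which aligns the two series before accumulating cost. A minimal pure-Python sketch (illustrative, not tied to any particular library):

```python
def dtw(x, y):
    # Classic DTW via dynamic programming; cell (i, j) holds the cheapest
    # cumulative cost of aligning x[:i] with y[:j], squared-difference cost.
    n, m = len(x), len(y)
    INF = float("inf")
    D = [[INF] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = (x[i - 1] - y[j - 1]) ** 2
            # Match, or repeat an element of one series (warp in time).
            D[i][j] = cost + min(D[i - 1][j - 1], D[i - 1][j], D[i][j - 1])
    return D[n][m]
```

For example, `dtw([0, 0, 1, 2, 1, 0], [0, 1, 2, 1, 0, 0])` is 0 because the second series is just a time-shifted copy of the first, whereas the pointwise Euclidean distance between them is nonzero.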
🐛 Bug: I was using cdist in PyTorch 1.1.0 and the backward pass worked perfectly. With the new version that supports batching (1.2.0), it raises a CUDA out-of-memory error! It tries to allocate hundreds of GiB of memory, depending on the siz...