Some notes in GPGPU with OpenCL: a blog about algorithms for OpenCL and GPGPU computations in general, with code and neat tricks for GPGPU programming, and links to toolkit downloads and other tutorials on using OpenCL.

Posted 2012-09-18.
<h2>
Efficient convolution on multi-dimensional data (part 2)</h2>
<br />
In <a href="http://gpgpu2.blogspot.se/2012/07/efficient-convolution-of-multi.html">part 1</a> of this post I talked about a general method for performing faster convolution operations on GPU hardware. This was achieved by reordering the memory operations to get a higher utilization of the floating point units.<br />
<br />
Today we will continue by taking a look at a few specific implementations of convolution operations, complete with the OpenCL code, and see how we can go from under 10% utilization to closer to 65% when operating on a 3D dataset and performing convolutions with multiple different convolution kernels.<br />
<br />
<h4>
Basic assumptions</h4>
The input image is 256 * 256 * 256 intensity values (unsigned char). The kernel size is given as an OpenCL compile-time define, and the set of N kernels to convolve with is passed to the kernel function as a packed array of floats in KXYZ order (K for Kernel, XYZ for the 3 dimensions). <br />
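To make the KXYZ packing concrete, the filter value for kernel k at offset (dx,dy,dz) lives at index k + nkernels*(dx + kernsize*(dy + kernsize*dz)): the kernel index varies fastest. A small host-side C helper (my own illustration, not part of the OpenCL code below) sketches this:

```c
#include <stddef.h>

/* Illustrative host-side helper for the KXYZ packing: the kernel
   index k varies fastest, then x, then y, then z. This matches the
   filter[] indexing used in the OpenCL kernels below. */
size_t filter_index(size_t k, size_t dx, size_t dy, size_t dz,
                    size_t nkernels, size_t kernsize)
{
    return k + nkernels * (dx + kernsize * (dy + kernsize * dz));
}
```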
<br />
<h2>
The first naive implementation </h2>
We can implement a <i>naive</i> convolution operation by using one work-item for each target pixel, with one accumulator per filter that iterates over all source pixels within the kernel-size window. In this first implementation we use an explicit array for the different accumulators.<br />
<pre class="brush: cpp">
/* Expected DEFINEs from the compilation: */
/*   kernsize -- the size of the filtering kernel in each direction */
/*   nkernels -- the number of filters to use */
/* The result is only defined in the area [0 ... (imagesize - kernsize)].
   This means that the kernels are not centered around the input pixel
   but rather offset the output image slightly. You need to shift it
   back manually afterwards if you care about this. */
kernel void convolute(int4 imagesize, global unsigned char *input,
                      global unsigned char *output, global float *filter) {
  int4 gid = (int4)(get_global_id(0), get_global_id(1), get_global_id(2), 0);
  int4 lid = (int4)(get_local_id(0), get_local_id(1), get_local_id(2), 0);
  int4 group = (int4)(get_group_id(0), get_group_id(1), get_group_id(2), 0);
  int4 pixelid = gid;
  // Starting offset of the first pixel to process
  int imoffset = pixelid.s0 + imagesize.s0 * pixelid.s1 +
                 imagesize.s0 * imagesize.s1 * pixelid.s2;
  int i;
  /* The naive way of doing convolutions */
  if(gid.s0 + kernsize > imagesize.s0 ||
     gid.s1 + kernsize > imagesize.s1 ||
     gid.s2 + kernsize > imagesize.s2) return;
  int dx,dy,dz;
  float val[nkernels];
  for(i=0;i<nkernels;i++) val[i]=0.0f;
  for(dz=0;dz<kernsize;dz++)
    for(dy=0;dy<kernsize;dy++)
      for(dx=0;dx<kernsize;dx++) {
        unsigned char raw = input[imoffset+dx+dy*imagesize.s0 +
                                  dz*imagesize.s0*imagesize.s1];
        for(i=0;i<nkernels;i++) {
          val[i] += raw * filter[i+nkernels*(dx+kernsize*dy+kernsize*kernsize*dz)];
        }
      }
  /* Explicit float -> uchar conversion of each accumulator */
  for(i=0;i<nkernels;i++)
    output[imoffset*nkernels+i] = convert_uchar(val[i]);
}
</pre><br />
As can be seen by running the OpenCL kernel above, the effective speed increases when performing multiple convolutions at the same time, since the input value <i>raw</i> does not need to be re-read for each convolution kernel.<br />
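To quantify that saving (my own arithmetic, not from the post): with separate passes every output pixel costs kernsize^3 input fetches per filter, while the fused loop above reads each input value once and reuses <i>raw</i> for all nkernels filters:

```c
/* Input-image fetches per output pixel: separate passes re-read the
   image once per filter, while a fused loop reads each value once
   and reuses it for every filter. */
unsigned long input_fetches(unsigned kernsize, unsigned nkernels, int fused)
{
    unsigned long k3 = (unsigned long)kernsize * kernsize * kernsize;
    return fused ? k3 : k3 * nkernels;
}
```

For the 7*7*7 filters and 8 kernels used here this is 343 fetches instead of 2744 per output pixel.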
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgZZ9NvVeKxzd3C9rdaU_Ret54ShbyUBiZn9Nqq8KV6AkaCTOhQo5sEhNew68IptOvGPCKbw6zfs3fYknSP6JJ6XcYjAaIO_Jy0ClKBCViNB4zdmQZUNKiAR2h_5hfZeZ5z78KNo8LWCYw/s1600/naiveConvolutionSpeed-1.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="245" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgZZ9NvVeKxzd3C9rdaU_Ret54ShbyUBiZn9Nqq8KV6AkaCTOhQo5sEhNew68IptOvGPCKbw6zfs3fYknSP6JJ6XcYjAaIO_Jy0ClKBCViNB4zdmQZUNKiAR2h_5hfZeZ5z78KNo8LWCYw/s400/naiveConvolutionSpeed-1.png" width="400" /></a></div>
When executed 1000 times on an AMD 6970 GPU on a 256*256*256 dataset with a kernel size of 7*7*7, we measure execution times ranging from 0.184 <i>seconds per convolution kernel</i> (for one kernel) down to 0.022 <i>seconds per convolution kernel</i> (for 32 kernels). <br />
<br />
Note especially the bump in the timing chart above occurring for 3 kernels. This bump is not a sampling problem, but is caused by each work-item writing out an uneven number of char values at the end of each computation. (Even though it looks as if each work-item writes to memory individually, the writes from a compute unit are coalesced together; something seems to go wrong exactly in the case of writing out 3 bytes at a time.) <br />
<br />
<h2>
The second naive implementation </h2>
Another way of implementing the same naive convolution algorithm is to use the OpenCL built-in vector data types (e.g. uchar2, float4, ...) and operations on them, instead of an explicit array with for-loops. This gives the implementation below.<br />
<br />
<h4>
Macro for vector operations</h4>
We use a set of macros to define the datatype <span style="font-family: "Courier New",Courier,monospace;">kernf</span>, a vector of floats that matches the number of kernels, and <span style="font-family: "Courier New",Courier,monospace;">kernuc</span>, a corresponding vector of <span style="font-family: "Courier New",Courier,monospace;">unsigned char</span> values. We also define the macro <span style="font-family: "Courier New",Courier,monospace;">kernstore</span>, which writes a vector of unsigned chars to memory, and <span style="font-family: "Courier New",Courier,monospace;">convert_kernuc</span>, which converts (type-casts, for vectors) floating point values to unsigned chars. <br />
<pre class="brush: cpp">
/* Preprocessor settings to define types that can
process multiple convolution kernels the same time. */
#if nkernels == 1
typedef float kernf;
typedef uchar kernuc;
#define kernstore(val,offset,arr) arr[offset]=val
#define convert_kernuc convert_uchar
#elif nkernels == 2
typedef float2 kernf;
typedef uchar2 kernuc;
#define kernstore vstore2
#define convert_kernuc convert_uchar2
#elif nkernels == 3
typedef float3 kernf;
typedef uchar3 kernuc;
#define kernstore vstore3
#define convert_kernuc convert_uchar3
#elif nkernels == 4
typedef float4 kernf;
typedef uchar4 kernuc;
#define kernstore vstore4
#define convert_kernuc convert_uchar4
#elif nkernels == 8
typedef float8 kernf;
typedef uchar8 kernuc;
#define kernstore vstore8
#define convert_kernuc convert_uchar8
#elif nkernels == 16
typedef float16 kernf;
typedef uchar16 kernuc;
#define kernstore vstore16
#define convert_kernuc convert_uchar16
#else
#error "nkernels should be one of: 1,2,3,4,8,16"
#endif
</pre>
<h4>Caching the filter in local memory</h4>
Another change we can make to (attempt to) improve performance is to load the filter values into local memory before the actual convolution operations. Although this <i>seems</i> to minimize the number of times the filter values are read from global memory (once for the whole filter per compute unit, as opposed to once per work-item), it has <i>no effect</i> at all as long as we only perform one convolution per work-item.<br />
<br />
<pre class="brush: cpp">
/* Copy global filter to local memory */
local kernf filter[kernsize*kernsize*kernsize];
event_t event = async_work_group_copy(filter,filterG,kernsize*kernsize*kernsize,0);
wait_group_events(1, &event);
</pre>
<br />
This can be seen in the run-time analysis below, where the second implementation performs worse than the first. The reason for this lack of improvement is again most likely the coalescing of memory operations on each compute unit, which means that the first implementation also reads the filter data exactly once per compute unit. <br />
<br />
Nevertheless, we choose to keep this explicit load of the filter data, since it will come in very handy in the final implementation. <br />
<h4>Full Kernel code</h4>
The full code of the kernel, minus the macros from above and the compile-time defines.<br />
<pre class="brush: cpp">
kernel void convolute(int4 imagesize, global unsigned char *input,
                      global unsigned char *output, global kernf *filterG) {
  int4 gid = (int4)(get_global_id(0), get_global_id(1), get_global_id(2), 0);
  int4 lid = (int4)(get_local_id(0), get_local_id(1), get_local_id(2), 0);
  int4 group = (int4)(get_group_id(0), get_group_id(1), get_group_id(2), 0);
  // First pixel to process with this work-item
  int4 pixelid = gid;
  // Starting offset of the first pixel to process
  int imoffset = pixelid.s0 + imagesize.s0 * pixelid.s1 +
                 imagesize.s0 * imagesize.s1 * pixelid.s2;
  int i;
  /* Copy the global filter to local memory */
  local kernf filter[kernsize*kernsize*kernsize];
  event_t event = async_work_group_copy(filter,filterG,kernsize*kernsize*kernsize,0);
  wait_group_events(1, &event);
  if(gid.s0 + kernsize > imagesize.s0 ||
     gid.s1 + kernsize > imagesize.s1 ||
     gid.s2 + kernsize > imagesize.s2) return;
  int dx,dy,dz;
  kernf val = (kernf)(0.0);
  for(dz=0;dz<kernsize;dz++)
    for(dy=0;dy<kernsize;dy++)
      for(dx=0;dx<kernsize;dx++) {
        unsigned char raw = input[imoffset+dx+dy*imagesize.s0 +
                                  dz*imagesize.s0*imagesize.s1];
        val += raw * filter[dx+kernsize*dy+dz*kernsize*kernsize];
      }
  kernstore( convert_kernuc(val), imoffset, output);
}
</pre>
<br />
As shown above, this second implementation uses a set of macro definitions to pick the appropriate floatX datatype (e.g. float4), and it explicitly caches the filter data in local memory. The result is execution times ranging from 0.093 seconds per convolution kernel (for 1 kernel) down to 0.0365 seconds (for 16 kernels).<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgcw7em3gZT9hpy4iQvjsX8VwEEGkkUshxn_bdUFq41uLvtK29HV5AvKGKPTY0X9hAmQVfVwpSId5HHOSSrIEbxH9FhZ2hWouj8CoZqvXmDKIGbgE2E7hcUoVlkZS8fxHH_qc3clt_6jAM/s1600/naiveConvolutionSpeed-2.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="245" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgcw7em3gZT9hpy4iQvjsX8VwEEGkkUshxn_bdUFq41uLvtK29HV5AvKGKPTY0X9hAmQVfVwpSId5HHOSSrIEbxH9FhZ2hWouj8CoZqvXmDKIGbgE2E7hcUoVlkZS8fxHH_qc3clt_6jAM/s400/naiveConvolutionSpeed-2.png" width="400" /></a></div>
<br />
<br />
In the above graph of execution speed we can see that little happens beyond 4 simultaneous kernels. It is also at this point that the first implementation becomes faster than the second. <br />
<br />
So, are these two implementations good enough? 0.01 seconds to perform a convolution requiring 5.7 billion (256*256*256*7*7*7) operations certainly <i>seems</i> fast. To answer this question we will plot the execution times against the <i>theoretical maximum</i> (2.52 Tflops) performance of the target GPU.<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgUY2CHrl6BxPuqFbyrwjz61zmac0Y-93i7Hhu2l3e49C4lrCKmFc3-3TdLxaFtxzSHIpsqv4UC9Ku9ArVYYKqUsiBdVcd1v9e7P7jCTdJWFbcuZLatqDl9ORlxv1xFv9lA8xy4kYc4YA4/s1600/naiveConvolutionSpeed-3.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="226" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgUY2CHrl6BxPuqFbyrwjz61zmac0Y-93i7Hhu2l3e49C4lrCKmFc3-3TdLxaFtxzSHIpsqv4UC9Ku9ArVYYKqUsiBdVcd1v9e7P7jCTdJWFbcuZLatqDl9ORlxv1xFv9lA8xy4kYc4YA4/s400/naiveConvolutionSpeed-3.png" width="400" /></a></div>
<br />
The computation above counts a FMAC (fused multiply-add to accumulator) operation as a single floating point operation, and demonstrates that our implementation is memory starved, since we can only utilize roughly <i>10%</i> of the theoretical maximum. <br />
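As a sanity check on that figure (my own arithmetic, under the same one-op-per-FMAC assumption): 256*256*256 pixels times 7*7*7 filter taps is about 5.75e9 operations per convolution kernel, so at 0.022 seconds per kernel we reach roughly 2.6e11 ops/s, about 10% of the 2.52 Tflops peak:

```c
/* Rough utilization estimate for the naive kernels: one FMAC counted
   as a single floating point operation, against the 2.52 Tflops
   theoretical peak of the AMD 6970 used in the measurements above. */
double utilization(double seconds_per_conv_kernel)
{
    const double ops  = 256.0 * 256.0 * 256.0 * 7.0 * 7.0 * 7.0;
    const double peak = 2.52e12; /* theoretical maximum, flops */
    return (ops / seconds_per_conv_kernel) / peak;
}
```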
<h2>Reordering the memory operations for efficiency</h2>
Next we will implement the methods described in <a href="http://gpgpu2.blogspot.se/">my previous post</a>, in which we let each work-item be responsible for the convolution operations of multiple pixels, and where, by reordering the convolution operations, we can reuse the same input-value fetch for multiple different output values. <br />
<br />
Before we start, we will take a careful look at some of the macros in use.<br />
<br />
<h4>Macro for vector operations</h4>
As in the previous implementation, we use the macros <span style="font-family: "Courier New",Courier,monospace;">kernf</span>, <span style="font-family: "Courier New",Courier,monospace;">kernuc</span>, <span style="font-family: "Courier New",Courier,monospace;">kernstore</span> and <span style="font-family: "Courier New",Courier,monospace;">convert_kernuc</span> defined above, which let the same code process multiple convolution kernels at the same time. <br />
<h4> Multiply-add operations </h4>
Use one of the following three definitions to perform the multiply-add operation.<br />
<br />
The second most exact (or tied with fma), but the fastest due to the use of FMAC instructions:<br />
<pre class="brush: cpp">#define mmad(x,y,z) (x+y*z)</pre>
<br />
Undefined precision (in some cases this can be very, very wrong):<br />
<pre class="brush: cpp">#define mmad(x,y,z) mad(x,y,z) </pre>
<br />
Guaranteed to be the most exact:<br />
<pre class="brush: cpp">#define mmad(x,y,z) fma(x,y,z)</pre>
<br />
<br />
<h4> Loop unrolling/reordering through macro expansion / optimizer</h4>
Before we give the final code for the efficient implementation we will study the last two macros used in it. We start by noting that, since each kernel invocation is now responsible for computing the outputs of <span style="font-family: "Courier New",Courier,monospace;">ko</span> destination pixels, we need as many accumulators.<br />
<br />
<pre class="brush: cpp"> kernf val[CONV_UNROLL];
</pre>
<br />
Ideally we would like to completely unroll the innermost loop (<span style="font-family: "Courier New",Courier,monospace;">dx=0 ... kernelsize</span>) from the implementations above, and to extend it by the loop unrolling factor (<span style="font-family: "Courier New",Courier,monospace;">dx=0 ... kernelsize</span>+ko). Furthermore, the convolution steps should now use different filter positions to increment the different accumulators - but they can do so using the <i>same</i> raw value. In pseudo code:<br />
<br />
<pre class="brush: cpp">
raw = pixel(x+0,y+dy,z+dz)
val[0] += filter[0][dy][dz] * raw
raw = pixel(x+1,y+dy,z+dz)
val[0] += filter[1][dy][dz] * raw
val[1] += filter[0][dy][dz] * raw
...
</pre>
<br />
We note that the first raw value is used by one target value, the second by two, and so on up to the 7th raw value (for kernelsize=7), which is used by all of them. After the 7th value the first target pixel no longer needs the raw data; the number of simultaneously active filter positions plateaus at min(kernsize, <span style="font-family: "Courier New",Courier,monospace;">ko</span>) and then diminishes by one per step towards the end of the window. <br />
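The reuse pattern is easiest to see in one dimension. The sketch below (plain C, my own illustration of the scheme rather than the actual OpenCL code) computes ko=4 neighbouring outputs in a single pass over kernsize+ko-1 input values, guarding each accumulator with the same pos-ko window test that the MAD macro uses:

```c
#define KERNSIZE 7
#define KO 4  /* outputs computed per pass (the CONV_UNROLL factor) */

/* One pass over KERNSIZE + KO - 1 input values: each 'raw' value is
   fetched once and feeds every accumulator whose filter window
   covers it, exactly as in the unrolled OpenCL kernel. */
void conv1d_unrolled(const unsigned char *in, const float *filter, float *val)
{
    for (int ko = 0; ko < KO; ko++) val[ko] = 0.0f;
    for (int pos = 0; pos < KERNSIZE + KO - 1; pos++) {
        float raw = (float)in[pos];          /* single fetch */
        for (int ko = 0; ko < KO; ko++)
            if (pos - ko >= 0 && pos - ko < KERNSIZE)
                val[ko] += raw * filter[pos - ko];
    }
}
```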
<br />
If we were to manually write out the code needed for a kernelsize of 7 and a convolution unroll of 16 we would need well over a hundred lines of code (kernsize+ko-1 = 22 raw-value loads and kernsize*ko = 112 multiplication steps per inner loop). Instead of doing this by hand for each possible combination of kernelsize and loop unrolling we let the macro processor expand it for us. We note that the multiplication for a single step can be written as follows, if we let <span style="font-family: "Courier New",Courier,monospace;">pos</span> be the <span style="font-family: "Courier New",Courier,monospace;">dx</span> value and <span style="font-family: "Courier New",Courier,monospace;">ko</span> be one instance of the convolution unroll values. <br />
<br />
<pre class="brush: cpp">
if(pos-ko >= 0 && pos-ko < kernsize) {
val[ko] = mmad(val[ko],(kernf)(raw),filter[(pos-ko)+offset]);
}
</pre>
Now, since all of the expressions in the if-statement are compile-time constants, we can safely use macro expansion to emit the lines above <i>for all</i> possible values of <span style="font-family: "Courier New",Courier,monospace;">pos</span> and <span style="font-family: "Courier New",Courier,monospace;">ko</span>. The optimizer will remove the if statements themselves and only keep the multiplication code for the value combinations that satisfy them. <br />
<br />
Macro expanding the code above for all values of pos and ko is done through the two macros <span style="font-family: "Courier New",Courier,monospace;">MAD(ko,pos)</span> and <span style="font-family: "Courier New",Courier,monospace;">MADS(pos)</span>. <br />
<br />
<h4>
Final efficient implementation of convolutions</h4>
Finally, the full source code for this implementation:<br />
<br />
<pre class="brush: cpp">
/* Expected DEFINE's from the compilation */
/* kernsize -- the size of the filtering kernel in each direction */
/* nkernels -- the number of filters to use */
/* CONV_UNROLL -- amount of unrolling to perform */
/* Result is only defined in the area defined by [0 ... (imagesize - kernsize - CONV_UNROLL)]
This means that the kernels are not centered around the input pixel but rather gives offsets the output image slightly.
You need to shift it back manually afterwards if you care for this.
*/
/* Preprocessor settings to define types that can process multiple convolution kernels the same time.
*/
#if nkernels == 1
typedef float kernf;
typedef uchar kernuc;
#define kernstore(val,offset,arr) arr[offset]=val
#define convert_kernuc convert_uchar
#elif nkernels == 2
typedef float2 kernf;
typedef uchar2 kernuc;
#define kernstore vstore2
#define convert_kernuc convert_uchar2
#elif nkernels == 3
typedef float3 kernf;
typedef uchar3 kernuc;
#define kernstore vstore3
#define convert_kernuc convert_uchar3
#elif nkernels == 4
typedef float4 kernf;
typedef uchar4 kernuc;
#define kernstore vstore4
#define convert_kernuc convert_uchar4
#elif nkernels == 8
typedef float8 kernf;
typedef uchar8 kernuc;
#define kernstore vstore8
#define convert_kernuc convert_uchar8
#elif nkernels == 16
typedef float16 kernf;
typedef uchar16 kernuc;
#define kernstore vstore16
#define convert_kernuc convert_uchar16
#elif nkernels == 32
/* Note: float32/uchar32 are not standard OpenCL vector types; this
   branch only compiles on implementations that provide them. */
typedef float32 kernf;
typedef uchar32 kernuc;
#define kernstore vstore32
#define convert_kernuc convert_uchar32
#else
#error "nkernels should be one of: 1,2,3,4,8,16,32"
#endif
/* Use one of these three definitions to perform the multiply-add operation */
#define mmad(x,y,z) (x+y*z) // Second most exact (or tied with fma), but fastest due to use of FMAC (FMA-accumulator) instruction
//#define mmad(x,y,z) mad(x,y,z) // Undefined precision (for some cases this can be very very wrong)
//#define mmad(x,y,z) fma(x,y,z) // Guaranteed to be the most exact
kernel void convolute(int4 imagesize, global unsigned char *input,
                      global unsigned char *output, global kernf *filterG) {
  int4 gid = (int4)(get_global_id(0)*CONV_UNROLL, get_global_id(1), get_global_id(2), 0);
  int4 lid = (int4)(get_local_id(0), get_local_id(1), get_local_id(2), 0);
  int4 group = (int4)(get_group_id(0), get_group_id(1), get_group_id(2), 0);
  // First of the CONV_UNROLL pixels to process with this work-item
  int4 pixelid = gid;
  // Starting offset of the first pixel to process
  int imoffset = pixelid.s0 + imagesize.s0 * pixelid.s1 +
                 imagesize.s0 * imagesize.s1 * pixelid.s2;
  int i,j;
  int dx,dy,dz;
  /* MAD performs a single convolution step for each kernel, using
     the current 'raw' value as the input image,
     'ko' as an instance of an unrolled convolution filter and
     'pos' as the X-offset for each of the unrolled convolution filters.
     Note that all the if statements depend only on compile-time
     constants, meaning that they can be optimized away by the compiler. */
#define MAD(ko,pos) {if(CONV_UNROLL>ko) { \
  if(pos-ko >= 0 && pos-ko < kernsize) { \
    val[ko] = mmad(val[ko],(kernf)(raw),filter[(pos-ko)+offset]); \
  }}}
  /* A raw value is needed for every pos in [0, kernsize+CONV_UNROLL-1) */
#define MADS(pos) {if(pos<kernsize+CONV_UNROLL-1) { \
  raw=input[imoffset2+pos]; \
  MAD(0,pos); MAD(1,pos); MAD(2,pos); MAD(3,pos); MAD(4,pos); MAD(5,pos); MAD(6,pos); MAD(7,pos); \
  MAD(8,pos); MAD(9,pos); MAD(10,pos); MAD(11,pos); MAD(12,pos); MAD(13,pos); MAD(14,pos); MAD(15,pos); \
  MAD(16,pos); MAD(17,pos); MAD(18,pos); MAD(19,pos); MAD(20,pos); MAD(21,pos); MAD(22,pos); MAD(23,pos); \
  MAD(24,pos); MAD(25,pos); MAD(26,pos); MAD(27,pos); MAD(28,pos); MAD(29,pos); MAD(30,pos); MAD(31,pos); \
  MAD(32,pos); MAD(33,pos); MAD(34,pos); MAD(35,pos); MAD(36,pos); MAD(37,pos); MAD(38,pos); MAD(39,pos); \
}}
  kernf val[CONV_UNROLL];
  for(j=0;j<CONV_UNROLL;j++)
    val[j]=(kernf)(0.0);
  int localSize = get_local_size(0) * get_local_size(1) * get_local_size(2);
  /* Copy the global filter to local memory */
  local kernf filter[kernsize*kernsize*kernsize];
  event_t event = async_work_group_copy(filter,filterG,kernsize*kernsize*kernsize,0);
  wait_group_events(1, &event);
  if(gid.s0 + kernsize + CONV_UNROLL > imagesize.s0 ||
     gid.s1 + kernsize > imagesize.s1 ||
     gid.s2 + kernsize > imagesize.s2) return;
  for(dz=0;dz<kernsize;dz++)
    for(dy=0;dy<kernsize;dy++) {
      // Offset into the local filter, in kernf (vector) units
      int offset = dy*kernsize + dz*kernsize*kernsize;
      int imoffset2 = imoffset+dy*imagesize.s0 + dz*imagesize.s0*imagesize.s1;
      unsigned char raw;
      /* kernsize + convolution unroll must be < 42 */
      MADS(0); MADS(1); MADS(2); MADS(3); MADS(4); MADS(5);
      MADS(6); MADS(7); MADS(8); MADS(9); MADS(10); MADS(11);
      MADS(12); MADS(13); MADS(14); MADS(15); MADS(16); MADS(17);
      MADS(18); MADS(19); MADS(20); MADS(21); MADS(22); MADS(23);
      MADS(24); MADS(25); MADS(26); MADS(27); MADS(28); MADS(29);
      MADS(30); MADS(31); MADS(32); MADS(33); MADS(34); MADS(35);
      MADS(36); MADS(37); MADS(38); MADS(39); MADS(40); MADS(41);
    }
  for(j=0;j<CONV_UNROLL;j++) {
    kernstore( convert_kernuc(val[j]), imoffset+j, output);
  }
}
</pre>
<br />
<h4>
Analysis of the full implementation </h4>
Now, let's take a look at the efficiency we can gain using this. We start by looking again at the GPU utilization, as a function of the level of convolution unrolling performed, with three different settings for the workgroup size. These measurements were made on the same hardware as above, with the number of convolution kernels fixed at 8.<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEj0Nv9J10ak3IYupSb8vvenFr2ZZ7SKCYUcs1X0wSzVvWylkGp0KtdDTkpNWTVdtcthTbc3xYyxsmbkyRclbNaLZASgt8UORFxk8pjrxLFHzxvkuXeEUFMCl8pXFKxcMjji0dHu1AjcBc4/s1600/naiveConvolutionSpeed-4.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="486" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEj0Nv9J10ak3IYupSb8vvenFr2ZZ7SKCYUcs1X0wSzVvWylkGp0KtdDTkpNWTVdtcthTbc3xYyxsmbkyRclbNaLZASgt8UORFxk8pjrxLFHzxvkuXeEUFMCl8pXFKxcMjji0dHu1AjcBc4/s640/naiveConvolutionSpeed-4.png" width="640" /></a></div>
<br />
As you can see, we reach up to roughly 65% of the theoretical maximum for these settings and this input kernel size. For other values we get slightly higher or lower efficiencies. For reference, the optimum occurs at a convolution unroll of 16, which gives 0.004 seconds per convolution kernel.<br />
<br />
To compare this method with the naive method, the graph below shows the efficiency as a function of the number of convolution kernels, with a fixed workgroup size of 16,16,1 and a convolution unroll of 4. <br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgaAVZdUtw6hh8gp0w51iGjTfEZnvypNgtNwnbuIGUv_d6ZqEgBEXpTZhqwhQkIM0u6N8P1lTpzbzeYQup-IUD0lFRcO9Z8IiHKKXALVT3nROmi3YLlGW4BoZrzXvLxrdIJeVd4KAmBuDQ/s1600/naiveConvolutionSpeed-5.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="491" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgaAVZdUtw6hh8gp0w51iGjTfEZnvypNgtNwnbuIGUv_d6ZqEgBEXpTZhqwhQkIM0u6N8P1lTpzbzeYQup-IUD0lFRcO9Z8IiHKKXALVT3nROmi3YLlGW4BoZrzXvLxrdIJeVd4KAmBuDQ/s640/naiveConvolutionSpeed-5.png" width="640" /></a></div>
<h3>
Conclusions</h3>
So what conclusions can we draw from this? Well, to start with:<br />
<br />
1) A naive convolution operation is <i>memory starved</i> when performed on the GPU. By rearranging the order of the innermost steps we can achieve a significant performance gain.<br />
<br />
2) Using float3/uchar3 when writing out to memory is a bad idea.<br />
<br />
3) Locally caching the filter data is pointless if the data is only used once per compute unit -- even if it is read once by each work-item, as long as the work-items access each element simultaneously. <br />
<br />
4) The optimizer can be trusted to remove dead code in <i>if</i> statements whose conditions are compile-time constants. This can be exploited to simplify the generation of repetitive code, such as loop unrolling, through macro expansion.
<br />
<br />
Posted 2012-07-10.
<h2>
Efficient convolution of multi-dimensional data</h2>
<br />
For the first real content post I will introduce a method for improving memory bandwidth efficiency when performing basic <a href="http://en.wikipedia.org/wiki/Convolution">convolution</a> operations on multidimensional data (e.g. 2D/3D/4D datasets). I presented the basic method as a side note in the publication below, but without much technical detail -- which we can look at in this post instead. <br />
<br />
<blockquote class="tr_bq">
M. Broxvall, K. Emilsson, P. Thunberg. <a href="http://aass.oru.se/%7Embl/publications.shtml">Fast GPU Based Adaptive Filtering of 4D Echocardiography</a>. <i> IEEE Transactions on Medical Imaging </i>, 2011, DOI 10.1109/TMI.2011.2179308</blockquote>
<br />
I will assume that you know how a <a href="http://en.wikipedia.org/wiki/Convolution">convolution</a> operation is defined and what purposes it has in image processing. Furthermore, I will assume that the convolution filter size (to avoid confusion with the notion of kernels in OpenCL, I will refer to convolution kernels as convolution filters) is much smaller than the size of the image you are trying to convolve. <br />
<br />
Assume that we have the convolution filter \( f \) of size \( \vec k \), an image \( M \) and want to calculate the convolution \( f * M \). A typical approach to this would be to launch one OpenCL kernel for each output point \( \mbox{out} \) and to perform the computation below:<br />
<br />
\(\mbox{out}(\vec p) = 0\)<br />
for \(j_1\) = 0 ... \(k_1\) do<br />
...<br />
for \(j_n\) = 0 to \(k_n\) do<br />
\(\mbox{out}(\vec p) \leftarrow \mbox{out}(\vec p) + f(\vec j) M(\vec p + \vec j)\)<br />
<br />
With this formulation we need to perform \(\prod_i k_i\) multiplications and additions, as well as the same number of <i>memory fetch</i> operations reading from the input image \(M\). If we were to perform, for instance, a convolution of size 10 along each axis of a 3D dataset, we would thus require 1000 memory fetches and 1000 floating point <i>multiply-add</i> operations for each generated output value.<br />
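The loop above transcribes directly into C. For the 2D case it is just the following (a sketch with hypothetical names, assuming row-major storage and output points chosen so that the filter window stays inside the image):

```c
/* Direct transcription of the pseudo code above for a 2D image:
   computing one output point costs k1*k2 multiply-adds and the same
   number of reads from the input image M (row-major, width W). */
float convolve_point_2d(const float *M, int W,
                        const float *f, int k1, int k2,
                        int px, int py)
{
    float out = 0.0f;
    for (int j1 = 0; j1 < k1; j1++)
        for (int j2 = 0; j2 < k2; j2++)
            out += f[j1 * k2 + j2] * M[(py + j1) * W + (px + j2)];
    return out;
}
```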
<br />
The <i>multiply-add</i> operations can often be performed faster than the equivalent two floating point operations by using dedicated hardware, and this is exposed in the <a href="http://www.khronos.org/registry/cl/">OpenCL standard</a> through the operations mad (and, for 24-bit integers, mad24). <br />
<br />
Now, assuming that our hardware does <i>not cache</i> the input image, we have a problem. This is the case on many GPUs when using global memory buffers, as opposed to OpenCL image objects with samplers. The reason has to do with how the hardware is optimized for texture memory accesses, and is a bit outside the scope of this post. <br />
<br />
Although modern GPUs have high bandwidth to onboard memory, on the order of 160 GB/s (ATI 6950), they have even higher floating point capacities (2.25 Tflops), so the convolution operation above is memory bound. Assuming that the image and filter values are fetched from on-board memory, the naive algorithm above would be capable of <i>at most</i> \(4 \times 10^{10}\) steps of the innermost loop per second, and thus utilize only 1-2% of the total computational capacity.<br />
<br />
In order to optimize this and gain better performance, we note that the same input image samples contribute to many different output values, multiplied by different filter coefficients. An obvious optimization is thus to <i>re-use</i> the same input image values to compute multiple output values.<br />
<br />
The extreme case of this would be to load the whole image into high-speed memory (GPU registers or shared memory) -- but this would obviously not be possible for anything but trivial image sizes. Instead, we can redesign our compute kernels to compute \(k_o\) outputs per kernel invocation and re-use the same input image values for multiple output computations. Here we extend the convolution filter \(f\) to contain zeroes at all points outside the original convolution filter (in the code we can do this more flexibly). <br />
<br />
\(\mbox{out}(\vec p + \overline{(0,...,0,0)}) = 0\)<br />
...<br />
\(\mbox{out}(\vec p + \overline{(0,...,0,k_o-1)}) = 0\)<br />
for \(j_1\) = 0 ... \(k_1\) do<br />
...<br />
for \(j_{n-1}\) = 0 to \(k_{n-1}\) do <br />
for \(j_n\) = 0 to \(k_n + k_o - 1\) do <br />
\(v \leftarrow M(\vec p+\vec j) \)<br />
\(\mbox{out}(\vec p + \overline{(0,...,0,0)}) \leftarrow \mbox{out}(\vec p + \overline{(0,...,0,0)}) + v f(\vec j - \overline{(0,...,0,0)}) \)<br />
... <br />
\(\mbox{out}(\vec p + \overline{(0,...,0,k_o-1)}) \leftarrow \mbox{out}(\vec p + \overline{(0,...,0,k_o-1)}) + v f(\vec j - \overline{(0,...,0,k_o-1)}) \)<br />
<br />
The above code requires \(k_1 \cdots k_{n-1} (k_n + k_o - 1)\) memory accesses to compute \(k_o\) outputs. We thus gain a (theoretical) speedup factor of \( k_n k_o / (k_n + k_o - 1) \), which for large sizes of the convolution filter approaches \(k_o\).<br />
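This ratio follows directly from the two access counts: the naive version needs \(k_o \prod_i k_i\) fetches for \(k_o\) outputs, the reordered one \(k_1 \cdots k_{n-1}(k_n + k_o - 1)\), and everything except the last-axis factors cancels. A quick numeric check (my own, using the kernelsize-7 example from the follow-up post):

```c
/* Theoretical fetch-count speedup of the reordered scheme: k_n naive
   fetches per output along the last axis vs. k_n + k_o - 1 fetches
   shared by all k_o outputs (all other filter dimensions cancel
   out of the ratio). */
double fetch_speedup(int kn, int ko)
{
    return (double)(kn * ko) / (double)(kn + ko - 1);
}
```

For kn=7 and ko=16 this gives 112/22, about a 5.1x reduction in input fetches; as kn grows, the factor approaches ko.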
<br />
So, now the obvious question: do we really get a speedup factor of \(k_o\), and where does the speedup top out?<br />
<br />
The short answer is: it depends on the number of available hardware registers. With larger values of \(k_o\) we will exhaust the available hardware registers, and the GPU will schedule fewer kernel invocations in parallel on each compute unit.<br />
<br />
<i>Coming next</i>: code + numbers demonstrating this speedup.
<br />
<br />
Posted 2012-07-09: Purpose of this blog
<h2>
Hello World\n</h2>
<br />
The purpose of this blog is to collect some thoughts, notes and algorithms of mine regarding GPGPU computations in general, and some specific neat OpenCL code. We'll start with a small summary of the concepts and links to a few relevant sources to get you started with writing OpenCL code -- the intention of which is mostly for me to remember the links, not to make a complete getting-started tutorial in OpenCL.<br />
<br />
First off, GPGPU stands for <i>general purpose GPU (computation)</i>, where GPU in turn stands for <i>graphics processing unit</i>. These units are the very powerful, massively parallel processors that are used today mainly for rendering graphics, but which incidentally also happen to be good at solving other parallelizable tasks requiring mostly floating point operations. <span style="background-color: white;">When looking at the raw computational performance of these devices, they are often one or two orders of magnitude (10-100 times) faster than modern CPUs. However, the power of these devices comes mainly from a higher level of parallelism rather than from clock frequency. As such, programming these devices efficiently is a challenge and requires very different development tools, as well as algorithms, in order to be efficient. </span><br />
<br />
CUDA is a language-specific toolkit developed by NVidia that allows the programmer to perform general purpose computations (convolutions, Fourier transforms, signal processing, bitcoin mining, ...) on NVidia cards. Since CUDA was one of the first widespread toolkits dedicated to GPGPU computations it has seen large adoption in the high-performance computing community, and there exist many GPU cluster machines (supercomputers built from GPUs) that perform computationally heavy tasks.<br />
<br />
The main drawback (IMHO) of CUDA is that it restricts the user to NVidia-specific platforms, and that it lacks support for some features. The main advantage is the ease of use coming from the mature toolkit and the integration into the programming language itself. (The latter is also a drawback...)<br />
<br />
<a href="http://www.khronos.org/opencl/">OpenCL</a> is an open standard maintained by <a href="http://www.khronos.org/">Khronos</a> (the same group that develops OpenGL) that serves to give a standardized interface for <i>compute devices</i> to be used from any programming language through a standardized API interfacing to one or more drivers. A compute device can here be a GPU or any other device that can perform parallel computations such as a multi-core CPU or a Cell-processor.<br />
<br />
The main advantage (IMHO) of OpenCL is that it is an open standard, that it works with a wide range of devices, and that it is (host) language agnostic. The main drawback is that it is language agnostic. Using OpenCL directly from your C/C++/Python etc. code means a significant overhead in <i>glue code</i> -- but once you have this in place it gives you a lot of flexibility, as well as very good interfacing with OpenGL.<br />
<br />
For the purposes of this blog I will write mainly about the development of algorithms and neat tricks for OpenCL, using mainly AMD/ATI's OpenCL drivers and, to a lesser extent, Intel's OpenCL drivers. The device/driver combinations I test on are:<br />
<br />
<br />
<ul>
<li><span style="background-color: white;">AMD 5870 / 5870M / 6870 cards running on AMD APP drivers</span></li>
<li><span style="background-color: white;">AMD CPUs (x4, x6) running on AMD APP drivers</span></li>
<li><span style="background-color: white;">Intel core i7 CPUs (x4) running on AMD APP drivers</span></li>
<li><span style="background-color: white;">Intel core i7 CPUs (x4) running on Intel's OpenCL drivers</span></li>
</ul>
<br />
<br />
In addition to the above device/driver combinations, another common combination is NVidia's OpenCL drivers. Currently I don't have access to modern NVidia hardware, but perhaps I'll get some for measuring differences at a later point.<br />