How fast is the GPU compared to the CPU for these operations? Write a program in which you can parameterize how much of the data is processed on the GPU, ranging from no computation in a shader program to all of the computation being performed in a shader program. How does the performance of your application change when the computation is performed solely on the GPU?
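One way to structure such an experiment is a timing harness that sweeps the fraction of work assigned to the GPU path. The sketch below is a CPU-only Python stand-in: the `gpu_fraction` parameter and the `cpu_work` helper are hypothetical placeholders for the real shader dispatch, but the split-and-time structure carries over directly to an OpenGL program.

```python
import time

def cpu_work(data):
    # Stand-in for the per-element computation done on the CPU;
    # in the real program this is the work you move into a shader.
    return [x * x + 1.0 for x in data]

def run_trial(data, gpu_fraction):
    """Split the workload by gpu_fraction: the first portion would be
    handed to a shader program, the remainder stays on the CPU.
    Here the 'GPU' portion is a no-op placeholder, so only the CPU
    cost shrinks as gpu_fraction grows."""
    split = int(len(data) * gpu_fraction)
    gpu_part, cpu_part = data[:split], data[split:]
    start = time.perf_counter()
    cpu_result = cpu_work(cpu_part)
    elapsed = time.perf_counter() - start
    return elapsed, len(gpu_part), len(cpu_result)

if __name__ == "__main__":
    data = [float(i) for i in range(200_000)]
    for frac in (0.0, 0.25, 0.5, 0.75, 1.0):
        t, n_gpu, n_cpu = run_trial(data, frac)
        print(f"gpu_fraction={frac:.2f}  cpu items={n_cpu}  time={t*1e3:.2f} ms")
```

In the real application, replace the placeholder with a draw call whose vertex or fragment shader does the offloaded computation, and time whole frames (averaged over many) rather than a single pass.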
Q2:
Are there triangle strip lengths that work better than others? Try to determine the strip length that maximizes performance. What does this tell you about the memory, or cache structure, of the graphics hardware?
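Before benchmarking, it helps to have the arithmetic straight: a strip of n vertices encodes n - 2 triangles, so the amortized vertex cost approaches one vertex per triangle for long strips versus three for independent triangles. The hypothetical helpers below (names are my own, not from any graphics API) compute these quantities for planning such an experiment.

```python
def triangles_in_strip(num_vertices):
    # Each vertex after the first two completes one new triangle,
    # so a strip of n vertices yields n - 2 triangles.
    return max(0, num_vertices - 2)

def vertices_per_triangle(strip_length):
    """Amortized vertex cost of one strip: approaches 1.0 for long
    strips, versus 3.0 for independent triangles. This ratio is a
    rough proxy for how much vertex-processing work the strip saves,
    which interacts with the hardware's post-transform vertex cache."""
    tris = triangles_in_strip(strip_length)
    return strip_length / tris if tris else float("inf")

if __name__ == "__main__":
    for n in (3, 4, 8, 32, 1024):
        print(f"strip of {n:5d} vertices: {triangles_in_strip(n):5d} "
              f"triangles, {vertices_per_triangle(n):.3f} verts/tri")
```

Note that the savings curve flattens quickly: going from strips of 8 to strips of 32 vertices helps far more than going from 1 024 to 4 096, so if measured performance keeps changing at large strip lengths, the cause is likely cache or memory behavior rather than vertex-count arithmetic.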