To simulate a continuous-time system with input x(t) and impulse response h(t) using a digital system, one can first sample x(t) and h(t) to obtain x[n] and h[n], respectively, and then perform a discrete-time convolution between x[n] and h[n] to obtain a discrete-time output y[n]; scaled by the sampling interval, this sum approximates the continuous-time convolution integral. The sampling rate must be greater than twice the bandwidth of x(t) and of h(t). From y[n], one can then use a digital-to-analog converter to obtain, in effect, the output y(t) of the continuous-time system.
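A minimal sketch of this procedure in Python/NumPy. The particular system (a decaying-exponential h(t)), the input pulse, and the sampling rate fs are illustrative choices, not taken from the text; the point is the sampled convolution and the factor T that makes it approximate the integral.

    import numpy as np

    fs = 8000.0      # sampling rate (Hz), assumed > 2x the bandwidth of x(t) and h(t)
    T = 1.0 / fs     # sampling interval

    t = np.arange(0, 0.01, T)         # 10 ms worth of sample instants
    x = (t < 0.002).astype(float)     # x(t): a 2 ms rectangular pulse (hypothetical input)
    h = np.exp(-1000.0 * t)           # h(t): decaying exponential (hypothetical system)

    # Discrete-time convolution of the samples; the factor T makes the sum
    # approximate the continuous-time convolution integral
    # y(t) = ∫ x(τ) h(t − τ) dτ evaluated at t = nT.
    y = T * np.convolve(x, h)

    # y[n] ≈ y(nT); a DAC with a reconstruction filter would convert y[n]
    # back into the continuous-time output y(t).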
In many situations, however, h[n] can be very long (e.g., thousands of samples). The digital system described above must then not only store thousands of coefficients (which consumes memory) but also perform thousands of multiplications to produce each sample of the output y[n]. For certain real-time applications, where the sampling interval may be only a few microseconds, this is unacceptable. How do you design a new digital system that does virtually the same job but with much less memory and much lower computational complexity?
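To make the complexity trade-off concrete, here is one possible sketch, offered as an illustration rather than as the intended answer: when h[n] is well approximated by a decaying exponential a^n, the long tapped-delay-line convolution can be replaced by a first-order recursive (IIR) difference equation. The coefficients a and b below are hypothetical values, standing in for whatever a fit to the actual h[n] would produce.

    import numpy as np

    # Recursion y[n] = a*y[n-1] + b*x[n]: two stored coefficients and
    # two multiplications per output sample, instead of thousands.
    a, b = 0.999, 0.001   # hypothetical coefficients fit to h[n]

    def iir_step(x_n, y_prev):
        """Produce one output sample from one input sample: O(1) work."""
        return a * y_prev + b * x_n

    # Sample-by-sample processing, suitable for real-time operation.
    x = np.random.randn(10000)   # placeholder input samples
    y = np.empty_like(x)
    y_prev = 0.0
    for n, x_n in enumerate(x):
        y_prev = iir_step(x_n, y_prev)
        y[n] = y_prev

A general h[n] would need a higher-order recursion (more coefficients), but as long as the order stays small the memory and per-sample multiplication counts remain far below those of the direct thousands-tap convolution.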