Maintaining a system clock that can be read by any user program requires only that the operating system read a physical device (which keeps the physical time) and write the time into a globally readable variable. Suppose the time to read the physical clock and update the variable is 100 microseconds. What percentage of the total CPU time is spent maintaining a clock that is accurate to a resolution of one millisecond? What about a 100-microsecond resolution? What about a 10-microsecond resolution? Explain your rationale.
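
One way to set up the arithmetic is sketched below. It is a minimal sketch, not the full answer: it assumes the operating system must refresh the readable variable once per resolution interval (so the variable never lags by more than the stated resolution), which makes the overhead fraction simply the update cost divided by the interval. The constant name UPDATE_TIME_US and the loop structure are illustrative choices, not part of the problem statement.

    # Sketch: overhead of refreshing the clock variable once per resolution interval.
    # Assumption: each refresh costs the 100 microseconds given in the problem.
    UPDATE_TIME_US = 100  # time to read the physical clock and write the variable

    for resolution_us in (1000, 100, 10):  # 1 ms, 100 us, and 10 us resolutions
        overhead = UPDATE_TIME_US / resolution_us  # fraction of CPU time consumed
        note = "  (cannot keep up)" if overhead > 1 else ""
        print(f"resolution {resolution_us:>4} us: {overhead:.0%} of CPU time{note}")

Under this assumption the interesting case is the finest resolution, where the refresh cost exceeds the interval itself; explaining what that implies is part of the rationale the question asks for.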