Question Short 1
Compare and contrast the ideas of semantic complexity and structural complexity of a computer program. Consider both the conceptual differences in measuring these and the operational differences.
Question Short 2
Consider a user interface designer who is designing the front end of a database system. (This is a business application, not a game or a work of art.) What is your operational definition of creativity for this person's work? What surplus meaning does this definition miss?
Question Short 3
What is Campbell's Law? What is Management By Objectives? Why would someone cite Campbell's Law as a criticism of Management By Objectives?
Question Short 4
Why is it appropriate to use some surrogate measures as part of a Goal/Question/Metrics measurement program? Give and defend at least two reasons.
Question Short 5
Why do people call a model like the CHAT model an interpretive framework? (Briefly describe the CHAT model as part of your answer.)
Question Short 6
Describe three factors people use to assess the credibility of a qualitative measurement.
Question Short 7
Imagine a project dashboard that describes the status of a software development project. Suppose that it represents each dimension or aspect of status as a picture: a green smiley-face, a yellow neutral-face, or a red grumpy-face. Are these qualitative or quantitative measures of status? Why?
LONG
Question Long 1
In her paper, "Developing an Effective Metrics Program," Rosenberg described a group of "sample goals, questions and metrics. The goals are general and could be adapted with minor modifications to any project development. Questions are derived to quantify the goals. The metrics needed to provide the answers to the questions are then chosen and shown in italics."
Here is a goal from her paper, with associated questions and metrics:
- GOAL: To predict the schedule in order to manage it
- QUESTION: What is the actual vs. expected effort level?
- METRIC: Effort (such as hours worked)
- QUESTION: What is the volatility of the requirements?
- METRIC: Count of requirements, count of modifications to requirements
- QUESTION: What is the rate of module completion?
- METRIC: Count of modules completed
Use your knowledge of measurement dysfunction to critique this set of questions and metrics. In particular:
(a) If you collected these metrics, would they provide you with answers to the questions? Why or why not? What other information, if any, would you need?
(b) If you could answer these questions, could you accurately predict the schedule? Why or why not? What other information, if any, would you need?
(c) If you relied on these metrics, what aspects of your project do you think would be systematically under-managed? Explain your thinking.
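Before critiquing, it may help to see what computing these metrics would actually look like. The sketch below is hypothetical: the record values, field names, and the volatility formulation (modifications per baselined requirement) are illustrative assumptions, not taken from Rosenberg's paper.

```python
# Hypothetical project records; all values and field names are illustrative.
expected_effort_hours = 1200
actual_effort_hours = 1500

requirements_count = 80          # baselined requirements
requirement_modifications = 28   # changes logged against the baseline

modules_completed = 25
weeks_elapsed = 10

# QUESTION: actual vs. expected effort level
effort_ratio = actual_effort_hours / expected_effort_hours

# QUESTION: requirements volatility (one common formulation:
# modifications per baselined requirement)
volatility = requirement_modifications / requirements_count

# QUESTION: rate of module completion
completion_rate = modules_completed / weeks_elapsed

print(f"effort ratio: {effort_ratio:.2f}")          # 1.25 -> 25% over plan
print(f"requirements volatility: {volatility:.2f}") # 0.35
print(f"completion rate: {completion_rate:.1f} modules/week")
```

Note what such a sketch leaves out: nothing here measures module difficulty, rework, or whether a "completed" module actually works, which is the kind of gap the critique should probe.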
Question Long 2
In her paper, "Developing an Effective Metrics Program," Rosenberg described a group of "sample goals, questions and metrics. The goals are general and could be adapted with minor modifications to any project development. Questions are derived to quantify the goals. The metrics needed to provide the answers to the questions are then chosen and shown in italics."
Here is a goal from her paper, with associated questions and metrics:
- GOAL: The system must release on time with at least 90% of the errors located and removed
- QUESTION: When will 90% of the errors be found?
  - METRIC: Effort (such as hours worked)
  - METRIC: Errors (count errors detected)
- QUESTION: What is the discrepancy rate of closure?
  - METRIC: Errors (count errors detected)
  - METRIC: Closure status of the errors
Use your knowledge of measurement dysfunction to critique this set of questions and metrics. In particular:
(a) If you collected these metrics, would they provide you with answers to the questions? Why or why not? What other information, if any, would you need?
(b) If you could answer these questions, could you know whether at least 90% of the errors had been located and removed? Why or why not? What other information, if any, would you need?
(c) If you relied on these metrics, would any aspect of the project be systematically under-managed or mismanaged? Explain your thinking.
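For concreteness, here is a hypothetical sketch of what these metrics support: a closure rate from tracked error records. The statuses and counts are invented for illustration. Note that answering the 90% question requires an estimate of the TOTAL number of errors, which none of the listed metrics supplies; the sketch has to assume one.

```python
# Hypothetical error-tracking snapshot; statuses and counts are illustrative.
errors = [
    {"id": 1, "status": "closed"},
    {"id": 2, "status": "closed"},
    {"id": 3, "status": "open"},
    {"id": 4, "status": "closed"},
    {"id": 5, "status": "open"},
]

detected = len(errors)
closed = sum(1 for e in errors if e["status"] == "closed")

# QUESTION: discrepancy rate of closure
closure_rate = closed / detected

# QUESTION: when will 90% of the errors be found?
# This needs an estimate of total errors, which the metrics do not
# provide. The value below is an assumed placeholder.
estimated_total_errors = 8
fraction_found = detected / estimated_total_errors

print(f"closure rate: {closure_rate:.2f}")
print(f"estimated fraction of errors found: {fraction_found:.3f}")
```

The placeholder makes the critique concrete: the goal is stated in terms of a quantity (total errors) that the chosen metrics cannot observe.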
Question Long 3
In her paper, "Developing an Effective Metrics Program," Rosenberg described a group of "sample goals, questions and metrics. The goals are general and could be adapted with minor modifications to any project development. Questions are derived to quantify the goals. The metrics needed to provide the answers to the questions are then chosen and shown in italics."
Here is a goal from her paper, with associated questions and metrics:
- GOAL: Examine the product quality from the point of view of the customer
- QUESTION: What percentage of the modules exceed the structure / architecture guidelines?
- METRIC: Size (such as LOC)
- METRIC: Complexity (such as Oviedo's metric)
- QUESTION: What modules are high risk?
- METRIC: Complexity (such as Oviedo's metric) for each module
- METRIC: Size (such as LOC) for each module
- METRIC: Errors
Use your knowledge of measurement dysfunction to critique this set of questions and metrics. In particular:
(a) If you collected these metrics, would they provide you with answers to the questions? Why or why not? What other information, if any, would you need?
(b) If you could answer these questions, could you accurately describe the quality from the point of view of the customer? Why or why not? What other information, if any, would you need?
(c) If you relied on these metrics, would any aspect of the project be systematically under-managed or mismanaged? Explain your thinking.
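As a concrete target for the critique, here is a hypothetical sketch of how these metrics might be combined to answer the two questions. The module data, guideline thresholds, and risk rule are all assumptions for illustration; they are not from Rosenberg's paper.

```python
# Hypothetical per-module data; values and thresholds are illustrative only.
modules = {
    "parser":  {"loc": 2400, "complexity": 31, "errors": 12},
    "ui":      {"loc": 800,  "complexity": 9,  "errors": 2},
    "reports": {"loc": 1500, "complexity": 22, "errors": 7},
}

LOC_LIMIT = 1000         # assumed structure/architecture guidelines
COMPLEXITY_LIMIT = 20

# QUESTION: what percentage of modules exceed the guidelines?
over_guideline = [name for name, m in modules.items()
                  if m["loc"] > LOC_LIMIT or m["complexity"] > COMPLEXITY_LIMIT]
percent_over = 100 * len(over_guideline) / len(modules)

# QUESTION: which modules are high risk? (assumed rule: exceeds both
# thresholds and has an error count at or above an assumed cutoff)
ERROR_CUTOFF = 7
high_risk = [name for name, m in modules.items()
             if m["loc"] > LOC_LIMIT
             and m["complexity"] > COMPLEXITY_LIMIT
             and m["errors"] >= ERROR_CUTOFF]

print(f"{percent_over:.0f}% of modules exceed guidelines: {over_guideline}")
print(f"high-risk modules: {high_risk}")
```

Notice that nothing in the sketch involves the customer at all: every input is an internal code attribute, which is the distance between these metrics and the stated goal.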
Question Long 4
Suppose that you wanted to PREDICT THE DIFFICULTY of a PROGRAMMING PROJECT in order to HIRE APPROPRIATELY for it. Suppose too that your company has done lots of projects and kept lots of raw data that you can mine.
(a) Use the Goal / Question / Metric approach to choose some metrics that you could appropriately use to develop your prediction(s). Briefly describe between 4 and 7 such metrics.
I'm trying to understand whether you have a good process for generating a good set of relevant questions and finding / generating metrics that could answer those questions. If it is not absolutely obvious why a question is relevant to predicting the difficulty of a programming project in order to hire appropriately for it, give me a brief explanation. If you think it might not be obvious how a metric is tied to one of the questions, give me a brief explanation. If something is not obvious to me, I will not guess.
(b) Pick two of your suggested metrics and for each one, state two good reasons for thinking it would be valid and useful. State one reason for thinking that it would be invalid or non-useful.
Question Long 5
One of the criticisms of measurement of the work of software engineers is that they will probably change how they work to make the numbers look better.
Discuss this. Is this a problem or a benefit of measurement (or both)? Why do you think so? Give examples.
Question Long 6
Why is it reasonable to use a balanced scorecard system to measure staff performance? How does this approach mitigate concerns about measurement dysfunction?
Question Long 7
Compare and contrast the criteria we use to assess the quality of quantitative measures versus qualitative measures.