Case Study:
Data Mining in the Real World
“I’m not really a contrarian about data mining. I believe in it. After all, it’s my career. But data mining in the real world is a lot different from the way it’s described in textbooks.

“There are many reasons it’s different. One is that the data are always dirty, with missing values, values way out of the range of possibility, and time values that make no sense. Here’s an example: Somebody sets the server system clock incorrectly and runs the server for a while with the wrong time. When they notice the mistake, they set the clock to the correct time. But all of the transactions that were running during that interval have an ending time before the starting time. When we run the data analysis and compute elapsed time, the results are negative for those transactions.

“Missing values are a similar problem. Consider the records of just 10 purchases. Suppose that two of the records are missing the customer number and one is missing the year part of the transaction date. So you throw out three records, which is 30 percent of the data. You then notice that two more records have dirty data, and so you throw them out, too. Now you’ve lost half your data.

“Another problem is that you know the least when you start the study. So you work for a few months and learn that if you had another variable, say the customer’s ZIP code or age, you could do a much better analysis. But those other data just aren’t available. Or maybe they are available, but to get the data you have to reprocess millions of transactions, and you don’t have the time or budget to do that.

“Overfitting is another problem, a huge one. I can build a model to fit any set of data you have. Give me 100 data points, and in a few minutes I can give you 100 different equations that will predict those 100 data points. With neural networks, you can create a model of any level of complexity you want. But none of those equations will predict new cases with any accuracy at all.
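As a minimal illustration of the overfitting the speaker describes (not from the case itself), the sketch below substitutes an ordinary polynomial fit for a neural network and uses made-up data: a linear trend with a fixed alternating disturbance standing in for noise. A model with one coefficient per data point fits the training data almost perfectly but does far worse on new cases than a simple line.

```python
# Hypothetical data: a linear relationship plus a fixed +/-0.2 disturbance.
import numpy as np

x_train = np.linspace(0.0, 1.0, 10)
noise = 0.2 * (-1.0) ** np.arange(10)   # stands in for measurement noise
y_train = 2.0 * x_train + noise

# New cases: midpoints between the training inputs, following the true trend.
x_test = (x_train[:-1] + x_train[1:]) / 2
y_test = 2.0 * x_test

def errors(degree):
    """Fit a polynomial of the given degree; return (train MSE, test MSE)."""
    coeffs = np.polyfit(x_train, y_train, degree)
    train_mse = float(np.mean((np.polyval(coeffs, x_train) - y_train) ** 2))
    test_mse = float(np.mean((np.polyval(coeffs, x_test) - y_test) ** 2))
    return train_mse, test_mse

line_train, line_test = errors(1)      # simple model: a straight line
interp_train, interp_test = errors(9)  # one coefficient per data point

# The degree-9 fit passes through every training point (near-zero error),
# yet its error on the new cases is far larger than the simple line's.
print(f"degree 1: train {line_train:.4f}, test {line_test:.4f}")
print(f"degree 9: train {interp_train:.6f}, test {interp_test:.4f}")
```

The degree-9 polynomial is the “equation that predicts all the data points”; the test error shows why that is no guarantee of predicting new cases.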
“When using neural nets, you have to be very careful not to overfit the data.

“Then, too, data mining is about probabilities, not certainty. Bad luck happens. Say I build a model that predicts the probability that a customer will make a purchase. Using the model on new-customer data, I find three customers who have a .7 probability of buying something. That’s a good number, well over a 50–50 chance, but it’s still possible that none of them will buy. In fact, the probability that none of them will buy is .3 × .3 × .3, or .027, which is 2.7 percent.

“Now suppose I give the names of the three customers to a salesperson who calls on them, and sure enough, we have a stream of bad luck and none of them buys. This bad result doesn’t mean the model is wrong. But what does the salesperson think? He thinks the model is worthless and that he can do better on his own. He tells his manager, who tells her associate, who tells the Northeast Region, and sure enough, the model has a bad reputation all across the company.

“Another problem is seasonality. Say all your training data are from the summer. Will your model be valid for the winter? Maybe, but maybe not. You might even know that it won’t be valid for predicting winter sales, but if you don’t have winter data, what do you do?
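The probability arithmetic earlier in the passage (three independent prospects, each with a .7 chance of buying) can be sketched in a few lines; the numbers below are the case’s own:

```python
# Three independent prospects, each with a 0.7 probability of buying.
p_buy = 0.7
n = 3

# Probability that all three say no: 0.3 * 0.3 * 0.3
p_none_buy = (1 - p_buy) ** n
p_at_least_one = 1 - p_none_buy

print(f"P(none of the {n} buys)    = {p_none_buy:.3f}")
print(f"P(at least one buys)      = {p_at_least_one:.3f}")
```

So even though any single bad outcome is unlikely (2.7 percent), it is entirely possible, which is the speaker’s point: one run of bad luck does not mean the model is wrong.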
“When you start a data mining project, you never know how it will turn out. I worked on one project for 6 months, and when we finished, I didn’t think our model was any good. We had too many problems with data: wrong, dirty, and missing. There was no way we could know ahead of time that it would happen, but it did.

“When the time came to present the results to senior management, what could we do? How could we say we took 6 months of our time and substantial computer resources to create a bad model? We had a model, but I just didn’t think it would make accurate predictions. I was a junior member of the team, and it wasn’t for me to decide. I kept my mouth shut, but I never felt good about it. Fortunately, the project was cancelled later for other reasons.

“However, I’m only talking about my bad experiences. Some of my projects have been excellent. On many, we found interesting and important patterns and information, and a few times I’ve created very accurate predictive models. It’s not easy, though, and you have to be very careful. Also, lucky!”
Q1. Summarize the concerns expressed by this contrarian.
Q2. Do you think the concerns raised here are sufficient reason to avoid data mining projects altogether?
Q3. If you were a junior member of a data mining team and you thought that the model that had been developed was ineffective, maybe even wrong, what would you do? If your boss disagrees with your beliefs, would you go higher in the organization? What are the risks of doing so? What else might you do?
Your answer must be typed and double-spaced in 12-point Times New Roman font, with one-inch margins on all sides, follow APA format, and include references.