My overall impression from the discussion is that no one seems to know what it is, but everyone has an opinion on it. The moderator frequently had to move us along before all audience questions were addressed. It is quite a popular topic.
A working knowledge of statistics, my friends, is a minimum requirement for making better decisions using data.
Shouldn't one new skill be to have at least a working knowledge of statistics? Isn't that a minimum requirement for anyone who wants to use data to make better decisions?
For example, let's say you implement a new sales training program (or a significant change to an existing one). How do you know whether it worked (however you define "worked")?
With a working knowledge of statistics, we understand that we can test the results to see whether there was a significant change in the data, that is, whether it worked: sales went up, and went up enough that it mattered.
In its simplest form, we would compare two groups: the salespeople who participated in the new program, and those who did not.
We can use a simple statistical test (it can be done in Excel, for Pete's sake) to compare the sales results of the two groups, and draw a conclusion, with reasonable confidence, about whether the results of the group that attended the training differed significantly from those of the group that did not.
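To make the idea concrete, here is a minimal sketch of that comparison as a two-sample (Welch's) t-test, written in plain Python rather than Excel. The sales figures are invented purely for illustration, and the function name `welch_t` is my own; the point is only the shape of the reasoning, not a production analysis.

```python
# A sketch of a two-sample (Welch's) t-test using only the standard library.
# The sales numbers below are hypothetical, made up for illustration.
from statistics import mean, variance

trained = [52, 48, 57, 61, 49, 55, 60, 53, 58, 54]    # attended the program
untrained = [45, 50, 42, 47, 44, 49, 43, 46, 48, 41]  # did not attend

def welch_t(a, b):
    """Return Welch's t statistic and approximate degrees of freedom."""
    na, nb = len(a), len(b)
    va, vb = variance(a), variance(b)          # sample variances
    se2 = va / na + vb / nb                    # squared standard error of the difference
    t = (mean(a) - mean(b)) / se2 ** 0.5
    # Welch-Satterthwaite approximation for degrees of freedom
    df = se2 ** 2 / ((va / na) ** 2 / (na - 1) + (vb / nb) ** 2 / (nb - 1))
    return t, df

t, df = welch_t(trained, untrained)
# With roughly 16 degrees of freedom, |t| above ~2.1 is significant at the 5% level.
print(f"t = {t:.2f}, df = {df:.1f}, significant = {abs(t) > 2.1}")
```

In practice you would reach for Excel's T.TEST function or a statistics package rather than writing this by hand, but either way the logic is the same: compare the two groups' means relative to their variability, and only then decide whether the training "worked."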
There are far more complex procedures that statistics offers for analyzing data, and I certainly have only a foundational knowledge of the field, but isn't this kind of thinking a mandatory minimum? Shouldn't we be thinking this way when it comes to evaluating the results of the training programs we design?
Most L&D pros will say no. And I think that is why discussions in the learning and development community skip the topic of big data entirely.
The headline of this post was inspired by a blog post from A VC called If You Aren't Technical, Get Technical.