Lies, Damn Lies, and Statistics

A study recently done at Oxford University suggests that the order in which names are listed on the ballot affects the outcome.

They estimate that W's 2000 win and Trump's win would not have happened if the name order had been reversed in swing states.

Without looking at the study, I can see it's a plausible theory, but I suspect the study and its conclusion may be biased by wishful thinking. What was the motivation for releasing that study and/or story now?

The LA Times ran a piece recently as well. They were the only pollster who consistently gave Trump a victory, but it was a predicted popular-vote victory. In other words, they were wrong but turned out to look right because the other polls got the swing states wrong.

Question to ask you: What % of the population do you think could follow a discussion of the LA Times polling without resorting to matters of trust and bias and political statements?

I feel quite confident in saying that we know that these things are hard to get right, that there is no intention to deceive people or promote an agenda, and that the transparency is there to explain and improve. But it's sad to realize that most people don't see such things that way.

Surely you are not suggesting that the media report facts, not opinions? There is zero question that they were promoting an agenda. Not the pollsters, of course. There are few stories on a daily basis I can even stomach to read, given the obvious bias. Just to say again: on both sides.

Perhaps you are just speaking of the polls, not the media?

"Question to ask you: What % of the population do you think could follow a discussion of the LA Times polling without resorting to matters of trust and bias and political statements?"

I would guess it would not be statistically different from zero.

Perhaps I am too pessimistic, but my view is that there is not much that can be done with most adults, though there is a great deal that could be done with children if we threw out traditional approaches to learning. Instead of teaching the traditional way, why not teach children how to think?

For instance, every school year pick a problem--take an obvious one to start: the mess at the DMV.

Assign every senior class in the nation the task of figuring out why DMVs everywhere are a mess, how they could be fixed, what the solution would cost, how to get all stakeholders on board, and what processes would be needed to ensure the problems did not arise again.

Allow seniors to use any resource they wished, including parents, friends, subject-area experts, etc. Create online tools to allow discussion and collaboration, and award a significant prize ($1 million at least) to the class with the best solution, as determined by a system such as the one used on Dancing with the Stars. The students would share the money equally.

Then move on to more complex issues, such as how to pay for college for all students or how to create secure retirement for the whole population.

This approach would do two things: 1) it would allow children to develop invaluable life skills, and 2) it would completely change the tone of the national discussion, because shouting louder than one's opponent would serve no purpose. To win, a class would not merely have to develop an actual solution to a real problem but would also need to succeed in bringing all stakeholders on board.

Now I think I will take a nap and dream about more realistic things, like replacing all the silly actual languages in the world with something practical like Blissymbols. :lol:

I think you're responding to my point about the LA Times polling.

Yes, I was talking specifically about their polling--in particular, the fact that their polling said Trump would win (the popular vote). The popular interpretation is that the LA Times were the only people who got the polling right, whereas the truth is they had a bigger margin of error than everyone else and never forecast that Trump would win the electoral vote the way he did.

It's a good idea to teach kids to be problem solvers, although it is unlikely that the economy would work if everybody were a problem solver.

But there are techniques that can leverage existing adults, in their existing positions and systems, to come up with the right solutions and then implement them. These techniques don't necessarily suffer from lacking an outsider's viewpoint, and they benefit from the fact that when the people who do the job are bought in on the solution (because they came up with it), there is little issue with responsibility, ownership, and motivation.

Based on what I've seen with these techniques and today's kids, it's infinitely preferable.

I hadn't been paying any attention to this thread but peeked in to see why the heck it had so many comments.

Truthfully, I just skimmed because there were lots of words and no pictures.

I skimmed faster when it turned to politics.

That said, I noticed the discussion about buddy-voting, where friends might pump up the helpful votes of people they like rather than vote for truly helpful people.

Since we can award many helpful votes but give only one heart, I wonder if the ratio might reveal some rough measure of buddy voting, and if the ratio could somehow be weighted based on the number of comments.

I haven't pondered it sufficiently to defend my hypothesis. It's more of a feeling.

Feel free to entertain this notion or ignore it, whichever seems more appropriate.

"I skimmed faster when it turned to politics."

That is true for me as well. I try to post in a nonpartisan way, as I am so sick of partisanship.

There could be some truth to what you said about hearts vs. likes. On the surface it kind of makes sense, but it would need some checking. It seems like someone with 10 hearts and 1000 likes would be an outlier, and perhaps mostly getting votes from friends.
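The ratio idea can be sketched in code. This is purely illustrative: the member names, numbers, and the square-root damping by comment count are all invented here, not taken from any real forum data.

```python
# Sketch of the hearts-to-likes ratio as a rough buddy-voting signal.
# Assumption (not from the thread's data): because a member can give
# many likes but only one heart, a profile whose likes vastly outnumber
# hearts *might* reflect a small circle of friends voting each other up.

def buddy_score(hearts, likes, comments):
    """Heart-to-like ratio, damped by comment volume so prolific
    posters aren't unfairly flagged. Purely illustrative weighting."""
    if likes == 0:
        return None  # nothing to measure
    return (hearts / likes) * (comments ** 0.5)

# Hypothetical members: (hearts, likes, comments)
members = {
    "alice": (10, 1000, 400),  # 10 hearts vs 1000 likes: looks off
    "bob":   (50, 200, 300),
    "carol": (5, 20, 40),
}

for name, (h, l, c) in members.items():
    print(name, round(buddy_score(h, l, c), 2))
```

A lower score under this weighting would flag a profile worth a second look; whether that threshold means anything is exactly the kind of judgment call the thread is wrestling with.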

I think the thread has touched on just how hard it is to use numbers to come to any "objective" conclusion without doing a huge amount of work. We want "simple" answers that appear to be based on hard data and life is just normally not that way.

I have given some thought to the statistics available from Social and have also looked at what methods have been developed to analyze these types of data. There are in fact some reasonable approaches, but they need to be instituted while a forum is fully active, so they are not applicable in our case.

It would be possible to do the following two things with existing information for all or a sample of members as long as Social is running and Profile pages can be accessed:

  1. Rank every member from "most" to "least" helpful based on objective criteria (there are 4 metrics: Comments, Discussions, Hearts, and Likes, with two showing the member's contributions and two showing the reactions to them).

  2. Multiple and conflicting rankings could be produced depending on what decisions were made concerning the weight given to each factor (for instance, using the 4 measures, Karl can be shown to be either the most or least helpful of the four I listed, and the same is true for the other three; with more complex weighting, which certainly seems sensible here, additional results are possible).
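Point 2 is easy to demonstrate in a few lines. The numbers below are made up (the real profile figures aren't reproduced here); the point is only that the same four metrics rank the same members in opposite orders depending on the weights chosen.

```python
# Hypothetical metric values per member: [comments, discussions, hearts, likes]
members = {
    "karl": [900, 10, 3, 1200],
    "ann":  [300, 40, 60, 500],
    "ben":  [100, 80, 90, 200],
    "dora": [50, 5, 5, 900],
}

def rank(weights):
    """Rank members by a weighted sum of the four metrics, highest first."""
    score = lambda vals: sum(w * v for w, v in zip(weights, vals))
    return sorted(members, key=lambda m: score(members[m]), reverse=True)

# Weight only raw activity (comments): Karl comes out on top.
print(rank([1, 0, 0, 0]))  # karl is first
# Weight only hearts received: Karl drops to the bottom.
print(rank([0, 0, 1, 0]))  # karl is last
```

Same data, two defensible weightings, two contradictory "most helpful" verdicts.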

So, just dealing with the numbers that exist, very different and contradictory conclusions are possible depending on how one chooses to use them.

THAT CANNOT BE CHANGED AT THIS POINT IN TIME--IT IS TOO LATE

Next comes the question of whether the numbers were "rigged."

If one is trying to determine whether a group of members conspired to "inflate" totals that would require a formal investigation and is simply not going to happen. So there never will be a "firm" answer on that. It seems a little implausible to think there was conscious “rigging.”

There is a much simpler explanation. The forum consisted of stratified layers of users. Each layer had its own intrinsic defining properties and could easily be shown, using ANOVA or other standard tools, to have a separate identity. Comparing across these layers is absolutely wrong in any type of analysis and must be rejected on methodological grounds. It is the equivalent of comparing the commute time of someone in Los Angeles who lives 40 miles from the job to that of someone in Manhattan who lives two blocks from the office. Yes, it is possible to measure the times, but no sensible person would think that truly similar things are being compared.
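As a sketch of what "using ANOVA" might look like here, the following computes a one-way ANOVA F statistic by hand. The three "layers" and their likes-per-post samples are entirely invented; the point is only that when between-group variation dwarfs within-group variation, F is large and the layers behave as statistically distinct populations.

```python
# One-way ANOVA F statistic, computed from scratch (no libraries).
def one_way_anova_f(groups):
    """F = between-group mean square / within-group mean square."""
    k = len(groups)                              # number of groups
    n = sum(len(g) for g in groups)              # total observations
    grand_mean = sum(sum(g) for g in groups) / n
    ss_between = sum(
        len(g) * (sum(g) / len(g) - grand_mean) ** 2 for g in groups
    )
    ss_within = sum(
        sum((x - sum(g) / len(g)) ** 2 for x in g) for g in groups
    )
    return (ss_between / (k - 1)) / (ss_within / (n - k))

# Hypothetical "likes per post" samples for three user layers:
casual      = [0.5, 0.8, 0.6, 0.7, 0.4]
regulars    = [2.1, 2.4, 1.9, 2.6, 2.0]
power_users = [5.2, 4.8, 5.5, 5.0, 5.3]

f = one_way_anova_f([casual, regulars, power_users])
print(round(f, 1))  # very large F: the layers are clearly distinct groups
```

With samples like these, F comes out in the hundreds, which is the formal version of the point above: averaging or ranking across such layers mixes populations that the data themselves say are not comparable.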

Even within strata there are very serious methodological problems. For instance, some members would simply state the correct answer; others would explain all the intermediate steps. Were both members equally helpful?

Then there is the enormous variety of types of posts. Some extremely knowledgeable members posted rarely and on arcane subjects that few members commented on. How should such contributions be viewed?

Then there is the "RingPlus bias," which distorts the data to the point where they are impossible to use for any serious purpose.

The RingPlus bias was found in two areas: threads dealing with how members were benefiting from RingPlus service, and anything else related to RingPlus, always attracted extremely high volumes of posts. Posters who expressed "positive" feelings in these threads would have dramatically better profiles than those who did not. However, these threads were primarily morale-boosting in nature, or opportunities for mutual emotional support. They did not, and by their nature could not, solve any problem or provide any really new information. Yet they greatly influence the statistics. In any analysis these would need to be broken out and treated separately.

Then there is the post-count bias. As the number of posts a member makes increases, any ratio (e.g., Comments/Likes) can be expected to decline. The larger the number of posts, the bigger the decline, but the relationship is not linear.

Next there is the nature-of-the-post bias. Shorter and less technical posts are likely to lead to higher ratios even though they actually provide less "useful" information.

Then there is the VM bias. It is simply not correct to compare any VM member with a non VM member. It is indeed valid to compare within categories of VMs (within Moderators and within EC).

However such comparisons are almost certainly poor measures and may produce completely erroneous results. The reason is obvious. Both EC and Moderators have functions that are handled out of sight and there is no way to adjust the data for this.

These are just a few of the issues involved. It is actually a lot more complex and confusing than this.

The bottom line is the numbers could be used to produce one or many measures of “helpful.”

The meaning of “helpful” in any of these contexts is in the mind of the user not in the numbers themselves.

And now for an actual statistical question:

@oldbooks1 You definitely deserve a "like" and a "helpful" for your most recent post, above. I do believe you have gotten to the essence of how we can finally get accurate and objective information regarding forum posts: Have a dart gun ready at all times, tranquilize each poster in real time as they post, and track every forum contribution from that point forward. :slight_smile:

I think your method is almost perfect and would only offer a minor suggestion--perhaps the dart gun should be used before they post.
Two major benefits would ensue: forum statistics would no longer be a source of controversy and the world itself would be a saner place.

This is one of the largest threads on this site. Clearly, this is one of the most important things on the minds of those who visit this site. :cheer:

I'm not quite sure what it is all about.

I tried to read it but it makes my head hurt.

Sorry, I should have explained it in detail rather than just giving the helicopter tour. :lol:

Now, any idea on how to count the number of moose in Anchorage?

In our area deer are a massive problem; they eat up everything people put in their gardens, which makes them very upset. Last year a neighbor lost it: after spending $500 to plant a fancy garden and going away for the weekend, he returned to find all the plants had been eaten.

There is a program to control the deer population, but nobody can come up with a way to decide how many deer there are, and therefore how actively the program should be implemented. It sounds easy on paper but is not so easy in practice.
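For what it's worth, the standard tool wildlife agencies use for exactly this problem is mark-recapture, specifically the Lincoln-Petersen estimator. That's a textbook method, not something proposed in this thread, and the numbers below are invented for illustration.

```python
# Lincoln-Petersen estimator: tag M animals, later observe C animals,
# of which R carry tags. If tagged animals mix evenly into the
# population, the estimated total is N ~ (M * C) / R.

def lincoln_petersen(marked_first, caught_second, marked_in_second):
    """Estimate total population size from a two-pass tagging survey."""
    if marked_in_second == 0:
        raise ValueError("no recaptures; estimate is undefined")
    return marked_first * caught_second / marked_in_second

# Hypothetical survey: tag 40 deer, later observe 60, of which 10 are tagged.
print(lincoln_petersen(40, 60, 10))  # -> 240.0
```

The hard part in practice is the assumptions (even mixing, no births or deaths between passes, equal catchability), which is presumably why "how many deer are there" stays contentious.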

You lost me. Do you want to count moose or deer?

"I'm not quite sure what it is all about."

It was intended to be funny, i.e., I am misusing stats to support an opinion.

I just want to count deer, since I do not live in Alaska. It just seemed to me that the people in Anchorage are facing a similar counting problem and do not seem to know how to do it either.

I prefer to count them in my freezer, when I can get someone to go hunt them for me.

I have a great recipe for Bambi Fried Rice.

This, of course, makes sense.

Let's take the example of reddit. Posts get upvoted and downvoted. Is there a chance that a helpful vote gets missed? Of course.

But can we still say that certain posts were found to be helpful? Yes, we can.

Like I said, while there are points we can make about how the data could be better, there are nevertheless things we can say with reasonable confidence.

Although I was never comfortable making decisions with 70% of the information, bear in mind that that number, too, is not statistically derived.