
___
Lv 5
___ asked in Social Science › Sociology · 8 years ago

If testing hypotheses consistently provides measurable yet inconclusive data...?

...how can one go about reconfiguring his model to produce more accurate results? Assuming, of course, that the experiments are not in a truly controlled environment, and that chaos tends to cancel out any real measurable truth.

Tips, theories and poetry are all appreciated.

2 Answers

  • 8 years ago
    Favorite Answer

    One tip: Meditate.

    Meditate on the subject you want to resolve. Meditate with this aim in mind.

    Take distance from your own reasoning and your own feelings while traveling through meditation, as on a magic carpet, until you find the solution. The solution is there, inside your subconscious mind. Meditating in calm and silence lets you open a little gate to the subconscious mind and retrieve the solution with accuracy.

    Source(s): ms
  • OPM
    Lv 7
    8 years ago

    So I have several practical suggestions. The first is to get more data. The usual culprit in cases like this is an insufficiency of data.
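To put a rough number on "get more data," here is a quick sample-size sketch of my own (the effect size, noise level, and power target below are hypothetical; the formula is the standard normal-approximation one for a two-sided z-test):

```python
from scipy.stats import norm

# Hypothetical numbers: how many observations do we need for a two-sided
# z-test at alpha = 0.05 with 80% power to detect an effect of size delta
# buried in noise of scale sigma?
alpha, power = 0.05, 0.80
delta, sigma = 0.5, 2.0   # assumed effect size and noise level

z_a = norm.ppf(1 - alpha / 2)   # critical value for the test
z_b = norm.ppf(power)           # quantile for the desired power
n = ((z_a + z_b) * sigma / delta) ** 2
print(f"need roughly {n:.0f} observations")  # about 126 with these numbers
```

With a small effect and noisy measurements, the required sample grows with the square of sigma/delta, which is why "inconclusive" so often just means "underpowered."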

    The second is methodological. I will give you an example from Ziliak (I think it is his example, at least). Imagine you have two weight-loss drugs. One drug causes 6 pounds of weight loss with a standard deviation of half a pound; it is clearly statistically significant. The other drug causes an average of 20 pounds of weight loss but with a standard deviation of 7 pounds, and so is not significant (although that depends a bit on the alpha). Regardless, the 20-pound drug stochastically dominates the 6-pound drug. Also imagine that the drug interaction is fatal, so you have to choose between the drugs: they are mutually exclusive.

    Significance can hide effect size and focusing on significance can cause you to miss the point. It isn't that significance does not matter, rather it is part of the decision process.
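The weight-loss example can be worked through numerically. This is my own sketch, treating each drug's reported numbers as an estimated effect with a standard error (an assumption on my part; the answer just says "standard deviation"):

```python
from scipy.stats import norm

# Assumed interpretation: estimated effect and its standard error
mean_a, se_a = 6.0, 0.5    # precise drug
mean_b, se_b = 20.0, 7.0   # noisy drug

# Two-sided p-values against a null of zero weight loss
p_a = 2 * norm.sf(mean_a / se_a)
p_b = 2 * norm.sf(mean_b / se_b)
print(f"drug A: p = {p_a:.2g}")  # astronomically small
print(f"drug B: p = {p_b:.2g}")  # ~0.004; whether that clears the bar depends on alpha

# Probability that B's true effect exceeds A's, assuming independent
# normal estimates: the difference is N(14, sqrt(0.25 + 49))
p_b_better = norm.sf(0, loc=mean_b - mean_a, scale=(se_a**2 + se_b**2) ** 0.5)
print(f"P(B beats A) = {p_b_better:.3f}")  # about 0.98
```

The precise drug wins the significance contest by many orders of magnitude, yet the noisy drug is almost certainly the better choice, which is exactly the point about significance hiding effect size.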

    Now, as to isolating your effects, I recommend very strongly that you switch to Bayesian methods. They allow an unlimited number of hypotheses to be tested. It may be the form you are using that is limiting your ability to tease out the effect.

    Imagine you believe that some effect is a function of x, y, z; denote it f(x,y,z). You are not certain of the real-world relationship. So you test f = b(0) + b(1)x + b(2)y + b(3)z + b(4)xy + b(5)xz + b(6)yz + b(7)xyz, then you test f = b(8) + b(9)x + b(10)y + b(11)z + b(12)xy + b(13)xz + b(14)yz, and so forth, down to f = b(n), which is f as a constant. You test all permutations. Bayes factors will tell you which model best fits the data-generating process that appears in nature, given the data and the prior knowledge.

    As the sample size becomes large the Bayes factors measure the agreement between the data generating function in nature and the model. The greater the agreement between nature and the model the higher the Bayes factor. The downside is programmatic. Bayesian methods must include exhaustive and mutually exclusive outcomes. You have to exhaust your universe of possibilities.
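The exhaustive model search described above can be sketched in Python. This is my illustration, not the answerer's code: it enumerates every subset of the candidate terms, fits each by ordinary least squares, and ranks them with BIC, whose differences give the standard large-sample approximation to Bayes factors (a full Bayesian treatment would put explicit priors on the coefficients):

```python
import itertools
import numpy as np

rng = np.random.default_rng(1)
n = 200
x, y, z = rng.normal(size=(3, n))
# Hypothetical truth: f depends on x, z, and the x*z interaction only
f = 1.0 + 2.0 * x - 1.5 * z + 0.8 * x * z + rng.normal(scale=0.5, size=n)

# Candidate terms, as in f = b0 + b1 x + b2 y + b3 z + b4 xy + ... + b7 xyz
terms = {"x": x, "y": y, "z": z, "xy": x*y, "xz": x*z, "yz": y*z, "xyz": x*y*z}

def bic(cols):
    """BIC of an OLS fit with an intercept plus the named terms."""
    X = np.column_stack([np.ones(n)] + [terms[c] for c in cols])
    beta, *_ = np.linalg.lstsq(X, f, rcond=None)
    rss = np.sum((f - X @ beta) ** 2)
    return n * np.log(rss / n) + X.shape[1] * np.log(n)

# Score every subset of terms: an exhaustive, mutually exclusive model space,
# from the full model down to f = constant
models = [combo for r in range(len(terms) + 1)
          for combo in itertools.combinations(terms, r)]
scores = {m: bic(m) for m in models}
best = min(scores, key=scores.get)
print("best model terms:", best)
# exp(-(BIC_i - BIC_best) / 2) approximates the Bayes factor against the best model
```

Note the "exhaust your universe" requirement in action: with 7 candidate terms there are already 2^7 = 128 models to score, and the count doubles with each additional term.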

    The nice thing about Bayes factors is that under very mild conditions a Bayesian solution is always an admissible solution whereas that is not true for Frequentist solutions with three or more independent variables.

    The greatest difficulty would be retraining your mind. There is no such thing as statistical significance in Bayesian probability, because chance is not the criterion. Bayesian methods test the probability that a hypothesis is true, not that some null is false. What remains is uncertainty, not chance. In practice this makes a giant difference.

    The other big difference is that Bayesian methods are an extension of inductive reasoning, while Frequentist methods are an extension of deductive reasoning. Frequentist reasoning is complete, subject to the confidence interval. Bayesian decision making is incomplete even subject to the Bayesian credible interval. There is always a positive probability that you have missed something of importance, so you can never say, "this is enough." After sufficient repetition with a Frequentist method you can say, "we can stop now."

