IMPRECISE RANDOMNESS: AN APPLICATION OF THE MATHEMATICS OF PARTIAL PRESENCE

In this article we discuss imprecise randomness using the mathematics of partial presence. The mathematical explanation of imprecise randomness is complete only if it is made with reference to the Randomness-Impreciseness Consistency Principle. We describe imprecise randomness with reference to a numerical example of the two sample t-test.


Introduction
If the realizations of a random variable are imprecise, in the sense that two independent laws of randomness define the presence level of values of the variable in a given interval, we have to deal with the matter using the idea of imprecise randomness (Baruah (2012)). An apparently similar theory, the theory of fuzzy sets, has been in existence since 1965. In the theory of fuzzy sets it had been accepted that fuzzy sets do not conform to the classical measure-theoretic formalisms. Secondly, it had been agreed that for a given fuzzy set, neither the intersection with its complement is the null set, nor the union with the complement is the universal set. In the Zadehian definition of complementation, the fuzzy membership function and the fuzzy membership value have been taken to be the same, and that is where the problem lies. Indeed, the fuzzy membership function and the fuzzy membership value are two different things for the complement of a normal fuzzy set (Baruah (1999, 2011)). Instead of saying that the theory of fuzzy sets has been incorrectly explained, Baruah (2012) has started the whole process anew, naming his finding the theory of imprecise sets. Fuzzy randomness in terms of uncertain probabilities has been studied by Buckley (2003) and Buckley and Eslami (2004), among others. Our approach is however different from theirs in the sense that we define imprecise randomness using the Randomness-Impreciseness Consistency Principle together with our definition of the complement of an imprecise set.
Baruah (2012) has already established that every law of impreciseness can actually be expressed in terms of two laws of randomness, with randomness defined in the measure-theoretic sense. In this article, we discuss testing statistical hypotheses with reference to imprecise randomness.
In what follows, we shall first discuss Baruah's Randomness-Impreciseness Consistency Principle and the complement of an imprecise set. Thereafter we shall discuss testing an imprecise hypothesis with reference to the two sample t-test.

The Randomness-Impreciseness Consistency Principle
A normal imprecise number N = [α, β, γ] is associated with a presence level indicator function μ_N(x) with a constant reference function 0 in the entire real line (Baruah (2012)). Here

μ_N(x) = ψ_1(x), α ≤ x ≤ β,
       = ψ_2(x), β ≤ x ≤ γ,
       = 0, otherwise,

where ψ_1(x) is continuous and non-decreasing in [α, β] with ψ_1(α) = 0 and ψ_1(β) = 1, and ψ_2(x) is continuous and non-increasing in [β, γ] with ψ_2(β) = 1 and ψ_2(γ) = 0. The imprecise number would be characterized by {x, μ_N(x), 0 : x ∈ R}, R being the real line. In the Dubois-Prade nomenclature, for a fuzzy number with fuzzy membership function equal to μ_N(x), ψ_1 is the left reference function and ψ_2 is the right reference function. The Randomness-Impreciseness Consistency Principle asserts that ψ_1(x) can be viewed as the distribution function of a random variable defined in [α, β], and ψ_2(x) as the complementary distribution function of a random variable defined in [β, γ]. Here, the term random variable has been used in the broader measure-theoretic sense. It should be noted that the notion of probability does not enter into the measure-theoretic definition of a random variable (Rohatgi & Saleh (2001)).
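As a concrete illustration of the principle, consider the simplest, triangular case: the left part of the presence level indicator is the distribution function of a uniform random variable on [α, β], and the right part is the complementary distribution function of a uniform random variable on [β, γ]. The following is a minimal Python sketch; the function name and the example values are ours, not from the paper:

```python
def presence_level(x, alpha, beta, gamma):
    """Presence level indicator of the triangular imprecise number
    N = [alpha, beta, gamma], built from two laws of randomness:
    a uniform distribution function psi_1 on [alpha, beta] and a
    complementary uniform distribution function psi_2 on [beta, gamma]."""
    if alpha <= x <= beta:
        return (x - alpha) / (beta - alpha)   # psi_1: distribution function
    if beta < x <= gamma:
        return (gamma - x) / (gamma - beta)   # psi_2: complementary distribution function
    return 0.0                                # constant reference function 0 elsewhere

# The indicator rises from 0 at alpha to 1 at the core value beta,
# then falls back to 0 at gamma:
# presence_level(45, 45, 50, 55) == 0.0 and presence_level(50, 45, 50, 55) == 1.0
```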

The Complement of an Imprecise Number
If a normal imprecise number N = [α, β, γ] is defined with a presence level indicator function μ_N(x) as above, counted from the constant reference function 0, any element x will have the membership value μ_N(x) − 0 = μ_N(x), and 0 otherwise. The definition of the complement of an imprecise set is based on the following axiom:

Axiom 1: For a normal imprecise number as defined above, the complement will have a constant presence level indicator function equal to 1, the reference function being μ_N(x). The membership value of x in the complement is accordingly 1 − μ_N(x), counted from μ_N(x) and not from 0.
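Axiom 1 can be illustrated numerically for the triangular case (the function names are ours): the complement carries the constant indicator 1 counted from the reference function μ_N(x), so its membership value at x is 1 − μ_N(x), and membership in the number and in its complement add to 1 at every point.

```python
def presence_level(x, alpha, beta, gamma):
    """Presence level indicator mu_N(x) of the triangular
    imprecise number N = [alpha, beta, gamma]."""
    if alpha <= x <= beta:
        return (x - alpha) / (beta - alpha)
    if beta < x <= gamma:
        return (gamma - x) / (gamma - beta)
    return 0.0

def complement_membership(x, alpha, beta, gamma):
    """Membership value of x in the complement: the constant
    indicator 1 counted from the reference function mu_N(x)."""
    return 1.0 - presence_level(x, alpha, beta, gamma)

# Membership in N and in its complement add to 1 pointwise,
# restoring the classical measure-theoretic formalism.
```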

Imprecise Randomness
If the two laws of randomness defining impreciseness are indeed laws of probability, two possibilities can arise. When a non-rejectable hypothesis is made imprecise, there may still be a probability that the imprecise hypothesis would be found rejectable, the probability of rejection being decided by the right reference function. In the same way, if a rejectable hypothesis is made imprecise, there may still be a probability that the imprecise hypothesis would be found non-rejectable, the probability of non-rejection being decided by the left reference function this time (Baruah (2011)). Assume that X is a random variable following the normal probability law with mean μ and variance unity. Now if the parameter μ is imprecise, with membership defined on [μ − δ, μ, μ + δ], we would actually have an infinite number of normal probability density functions with location parameter ranging from (μ − δ) to (μ + δ), with maximum membership assigned at the value μ. This is where the current definition of imprecise randomness ends.
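The construction of this family of densities can be sketched numerically; the numbers below (μ = 0, δ = 1) are a hypothetical illustration of ours. Each admissible location parameter m indexes one normal density and carries the triangular membership of m itself:

```python
import math

def normal_pdf(x, m, var=1.0):
    """Density of the normal law with mean m and variance var."""
    return math.exp(-(x - m) ** 2 / (2 * var)) / math.sqrt(2 * math.pi * var)

def location_membership(m, mu, delta):
    """Triangular membership of the location parameter m
    on [mu - delta, mu, mu + delta]."""
    if mu - delta <= m <= mu:
        return (m - (mu - delta)) / delta
    if mu < m <= mu + delta:
        return ((mu + delta) - m) / delta
    return 0.0

# Each m in [mu - delta, mu + delta] indexes one normal density,
# with maximum membership 1 assigned at m = mu:
family = [(m, location_membership(m, mu=0.0, delta=1.0), normal_pdf(0.0, m))
          for m in (-1.0, -0.5, 0.0, 0.5, 1.0)]
```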
Assume that we have a sample of n observations x_1, x_2, …, x_n from a normally distributed population with mean μ and variance σ². We can then proceed to infer about the population based on the sample data. Assume further that we have imprecise data and we need to proceed with statistical analysis with reference to imprecise randomness.
The imprecise data are in terms of imprecise numbers around x_i, i = 1, 2, …, n, defined as, say, [x_i − δ, x_i, x_i + δ]. The analysis can now proceed towards an imprecise statistical analysis. Without any loss of generality, and for computational simplicity, such imprecise numbers are usually taken as triangular.
It can be seen that from the distribution function ψ_1(x) we shall get the density function dψ_1(x)/dx, and in the same way, from the complementary distribution function ψ_2(x) we shall get the density function −dψ_2(x)/dx. An imprecise number with membership function μ_N(x) is in fact defined by two laws of randomness, with distribution functions ψ_1(x) in [α, β] and 1 − ψ_2(x) in [β, γ]. Accordingly, for imprecise randomness there should first be a variable following some law of randomness. Secondly, in an interval around every realization of the random variable, there should be impreciseness. If it is presumed that the two laws of randomness are in fact two laws of probability, then the conclusions can be made probabilistically.
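For the triangular case this differentiation can be checked numerically (a sketch in our notation, with example end points of our choosing): the left law ψ_1(x) = (x − α)/(β − α) yields the constant uniform density 1/(β − α), and the right law, whose distribution function is 1 − ψ_2(x), yields the constant density 1/(γ − β).

```python
def psi1(x, alpha, beta):
    """Left law of randomness: uniform distribution function on [alpha, beta]."""
    return (x - alpha) / (beta - alpha)

def psi2(x, beta, gamma):
    """Right part of the indicator: complementary distribution
    function of a uniform law on [beta, gamma]."""
    return (gamma - x) / (gamma - beta)

h = 1e-6
# d psi1 / dx gives the left density, -d psi2 / dx the right density;
# with [alpha, beta, gamma] = [45, 50, 55] both densities equal 1/5:
left_density = (psi1(48 + h, 45, 50) - psi1(48, 45, 50)) / h
right_density = -(psi2(52 + h, 50, 55) - psi2(52, 50, 55)) / h
```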

Two Sample t-Test with Imprecise Data
We now cite a numerical example. The gains in weight (in kg) of pigs fed on two diets A and B are given below:

Diet A: 49, 53, 51, 52, 47, 50, 52, 53
Diet B: 51, 54, 51, 52, 49, 53, 53, 52

Assume that the random samples have been collected from normal populations and that the population variances are equal and unknown. We want to test whether the two diets differ significantly as regards their effect on increase in weight, i.e., H_0: μ_A = μ_B against H_1: μ_A ≠ μ_B. Under H_0, the test statistic is

t = (x̄ − ȳ) / (S √(1/n_1 + 1/n_2)),

which follows the Student's t probability law with (n_1 + n_2 − 2) degrees of freedom, where x̄ and ȳ are the sample means for diet A and diet B respectively, and S² = [Σ(x_i − x̄)² + Σ(y_i − ȳ)²] / (n_1 + n_2 − 2) is the pooled sample variance.
In our case, the calculated value of |t| is 1.083, which is less than the tabulated value of t, i.e. 2.15, at the 5% probability level of significance for 14 degrees of freedom. Therefore we may conclude that there is no reason to reject the null hypothesis that diets A and B do not differ significantly as regards their effect on increase in weight.
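The crisp calculation can be reproduced directly; the following sketch computes the pooled-variance two-sample t statistic described above from the given data:

```python
from math import sqrt

diet_a = [49, 53, 51, 52, 47, 50, 52, 53]
diet_b = [51, 54, 51, 52, 49, 53, 53, 52]

def mean(xs):
    return sum(xs) / len(xs)

xbar, ybar = mean(diet_a), mean(diet_b)
n1, n2 = len(diet_a), len(diet_b)

# Pooled sample variance S^2 with n1 + n2 - 2 = 14 degrees of freedom
ss_a = sum((x - xbar) ** 2 for x in diet_a)
ss_b = sum((y - ybar) ** 2 for y in diet_b)
s2 = (ss_a + ss_b) / (n1 + n2 - 2)

t = (xbar - ybar) / sqrt(s2 * (1 / n1 + 1 / n2))
# |t| is about 1.083, below the tabulated 2.15 for 14 d.f. at the 5% level
```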
Let us now assume that the data are imprecise, of the interval type, and that the imprecise numbers are triangular. The random variable X, of which x is a realization in the sample, was assumed to be normally distributed. In other words, there is one law of randomness in [x − δ, x] while there is another law of randomness in [x, x + δ], both of them being uniform, for a normally distributed realization x with mean μ and error variance σ², say. It should be noted that for probabilistic conclusions based on imprecise random data, we would need these two laws of randomness to be two probability laws in the statistical sense.
Thus the gains in weight (in kg) of pigs fed on the two diets A and B are now triangular imprecise numbers around the observed values. Under H_0, we finally arrive at an imprecise value of the Student's t statistic for 14 degrees of freedom, with a presence level indicator function defined, as before, by two laws of randomness. In the crisp, non-imprecise situation of the above example, we would have concluded that there is no reason to reject the null hypothesis of equality of the mean weights at the 5% probability level of significance. For the imprecise statistic, the data dependent value of the right reference function at the tabulated value of t is the probability that the imprecise null hypothesis would have to be rejected at the 5% probability level of significance. In other words, when a non-rejectable hypothesis is made imprecise, there may still be a probability that the imprecise hypothesis would actually be found rejectable. In the same way, if a rejectable hypothesis is made imprecise, there may still be a probability that the imprecise hypothesis would be found non-rejectable, the probability of non-rejection being decided by the left reference function this time.
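The effect of impreciseness on the conclusion can be sketched under a simplifying assumption of ours, not the paper's full computation: give every observation the same symmetric triangular impreciseness [x − δ, x, x + δ] and shift each whole sample by a common amount in [−δ, δ]. A common shift leaves the pooled variance unchanged, so only the means move, and the extreme shifts bound the imprecise t.

```python
from math import sqrt

diet_a = [49, 53, 51, 52, 47, 50, 52, 53]
diet_b = [51, 54, 51, 52, 49, 53, 53, 52]

def t_statistic(xs, ys):
    """Pooled-variance two-sample t statistic."""
    xbar, ybar = sum(xs) / len(xs), sum(ys) / len(ys)
    s2 = (sum((x - xbar) ** 2 for x in xs) + sum((y - ybar) ** 2 for y in ys)) \
         / (len(xs) + len(ys) - 2)
    return (xbar - ybar) / sqrt(s2 * (1 / len(xs) + 1 / len(ys)))

delta = 1.0  # hypothetical common impreciseness, in kg
# Shifting all of A one way and all of B the other way gives the
# extreme values of t under this simplification:
t_low = t_statistic([x - delta for x in diet_a], [y + delta for y in diet_b])
t_high = t_statistic([x + delta for x in diet_a], [y - delta for y in diet_b])
# The interval [t_low, t_high] contains the crisp value of about -1.083,
# yet with delta = 1 its magnitude can exceed the tabulated 2.15: an
# imprecise non-rejectable hypothesis carries some probability of rejection.
```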

Conclusions
Two laws of randomness are necessary and sufficient to define the partial presence of an element in a normal imprecise number. In this article, based on the Randomness-Impreciseness Consistency Principle, we have put forward a definition of imprecise randomness. In testing an imprecise hypothesis, we deal with the alternative hypothesis, which is the complement of the imprecise null hypothesis. We have shown that when a non-rejectable hypothesis is made imprecise, there may still be a probability that the imprecise hypothesis would actually be found rejectable. In the same way, if a rejectable hypothesis is made imprecise, there may still be a probability that the imprecise hypothesis would be found non-rejectable.
