  As this takes place we

    2018-10-24


    As this takes place, we have an easily testable relation, meaning that it does not particularly matter in which order the independent regressors and the independent response components are aligned when takes the diagonal form. With the OLS estimate and the mutual covariance matrix of the vector components, we can obtain the boundary point of maximum likelihood for the distribution . As a result, χ2- and t-statistics of the form (5) and (4) (i.e. the values r and s) are also obtained for testing H∗. Ref. [1] suggested using m F-statistics, one for each lth equation (l = 1, 2, …, m), rather than a single general F-statistic of the form (7) (i.e. q):
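Several symbols in this paragraph were lost in extraction, but the computation it describes — an OLS estimate, its covariance matrix, and the resulting per-coefficient t-statistics — can be sketched with hypothetical data (all names and dimensions below are illustrative assumptions, not the paper's notation):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data: n observations, h regressors (intercept included)
n, h = 50, 3
X = np.column_stack([np.ones(n), rng.standard_normal((n, h - 1))])
beta_true = np.array([1.0, 2.0, -0.5])          # illustrative parameters
y = X @ beta_true + rng.standard_normal(n)

# OLS estimate and its covariance matrix
XtX_inv = np.linalg.inv(X.T @ X)
beta_hat = XtX_inv @ X.T @ y
resid = y - X @ beta_hat
sigma2 = resid @ resid / (n - h)                # unbiased error-variance estimate
cov_beta = sigma2 * XtX_inv                     # covariance of the OLS estimate

# t-statistic for each coefficient (testing beta_j = 0)
t_stats = beta_hat / np.sqrt(np.diag(cov_beta))
print(t_stats)
```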
    These F-statistics, for a true H∗, have the F-distribution with (n − h − 1) denominator degrees of freedom (the number of degrees of freedom is not (n − h) but (n − h − 1), since the centering that leads to the 'disappearance' of the free term is taken into account). It should be noted, however, that only for a hypothesis that is true and easily proved, namely one that is equivalent to the hypothesis for the error variance of system (11), does the following important relation hold:
    Otherwise, the following statistic should be taken as , where
    Then condition (14) will be satisfied, since the columns Z1, Z2, …, Z of the matrix Z do not correlate with each other and the mutual covariance matrix of the regression parameters for these variables has a block-diagonal structure (see above). If the hypothesis H0 is true, the expressions (13) and (15) coincide. Condition (14) (or (16)) is a rather significant refinement of the results of [1]. If any of the inequalities is satisfied for any lth equation (l = 1, 2, …, m), then the hypothesis H∗ is rejected.
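The rejection rule just described — compute one F-statistic per equation and reject H∗ if any of them exceeds its critical value — can be sketched as follows. The data, the shared design matrix, and the significance level are hypothetical; the (n − h − 1) denominator degrees of freedom follow the text's note about centering:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Hypothetical system of m regression equations sharing one design matrix
n, h, m = 60, 4, 3
X = np.column_stack([np.ones(n), rng.standard_normal((n, h - 1))])
B = rng.standard_normal((h, m))                 # illustrative parameters
Y = X @ B + rng.standard_normal((n, m))

B_hat, *_ = np.linalg.lstsq(X, Y, rcond=None)
resid = Y - X @ B_hat

# Per-equation F-statistic for joint significance of the slope terms,
# with (n - h - 1) denominator degrees of freedom as in the text
alpha = 0.05
reject_any = False
for l in range(m):
    y_l = Y[:, l]
    rss = resid[:, l] @ resid[:, l]             # residual sum of squares
    tss = np.sum((y_l - y_l.mean()) ** 2)       # total (centered) sum of squares
    df1, df2 = h - 1, n - h - 1
    F_l = ((tss - rss) / df1) / (rss / df2)
    if F_l > stats.f.ppf(1 - alpha, df1, df2):
        reject_any = True                       # reject H* if any equation rejects
print(reject_any)
```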
    The generalized Euclidean metric and the main theorem

    Now that we have obtained the algorithm for finding the boundary point of maximum likelihood * and tested the simple hypothesis , let us construct a more rigorous proof, different from the one suggested in Section 2, for testing the complex hypothesis H: ∈ G. We shall not make any major assumptions about the sample size n. The following theorem is true for all estimates described above.
    Recall that can be metrized by any nonsingular, positive definite, symmetric matrix A. If we define the scalar product of vectors as , then the norm of a vector (its modulus), the distance, and the angle between vectors are determined by the following formulae:
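The formulae lost in extraction are the standard ones for an inner product induced by a positive definite matrix A, i.e. ⟨u, v⟩ = uᵀAv. A minimal sketch, with a hypothetical matrix A:

```python
import numpy as np

# A hypothetical nonsingular, positive definite, symmetric matrix A
A = np.array([[2.0, 0.5],
              [0.5, 1.0]])

def inner(u, v):
    """Scalar product <u, v> = u^T A v."""
    return u @ A @ v

def norm(u):
    """Norm (modulus) of u: sqrt(<u, u>)."""
    return np.sqrt(inner(u, u))

def dist(u, v):
    """Distance between u and v: the norm of their difference."""
    return norm(u - v)

def angle(u, v):
    """Angle between u and v in the A-metric."""
    return np.arccos(inner(u, v) / (norm(u) * norm(v)))

u = np.array([1.0, 0.0])
v = np.array([0.0, 1.0])
print(dist(u, v), angle(u, v))
```

Note that with A equal to the identity these definitions reduce to the ordinary Euclidean norm, distance, and angle.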
    If the matrix A is not positive definite (i.e., some of its eigenvalues are negative), then the metric (of the type d2) introduced by A is called indefinite [7]. It is common knowledge that a positive definite matrix and its inverse are both positive definite; here we discuss only the nonsingular case. Obviously, the boundary point * of maximum likelihood is the closest, in the metric (18), of all points of the region G (including its boundary) to the point (the estimate of the parameter vector obtained using Eqs. (1), (6) and (12)). Therefore, when the inequality (8) holds for *, it holds for any other point ∈ G. The same statement is also true for the inequality (9) using the F-test, since in that case (an estimate from one regression equation) we are dealing with an equivalent metric (see inequality (9)). The relation follows directly from Eq. (4) and means that, when using the t-statistics, we begin working, from a geometric point of view, with a generalized metric
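The closest-point claim can be illustrated numerically: the boundary point is the minimizer of the A-metric distance to the unconstrained estimate over the region G. Everything below is a hypothetical example (the matrix, the estimate, and the choice of G as the nonnegative orthant are all illustrative assumptions):

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical positive definite metric matrix and unconstrained estimate
A = np.array([[2.0, 0.5],
              [0.5, 1.0]])
theta_hat = np.array([2.0, -1.0])

# Hypothetical region G: the nonnegative orthant.  The boundary point of
# maximum likelihood is the point of G closest to theta_hat in the A-metric,
# i.e. the minimizer of the squared distance (x - theta)^T A (x - theta).
res = minimize(lambda x: (x - theta_hat) @ A @ (x - theta_hat),
               x0=np.zeros(2),
               bounds=[(0, None), (0, None)])
print(res.x)
```

Since theta_hat lies outside G here, the minimizer lands on the boundary of G, as the text's terminology ("boundary point") suggests.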
    This metric coincides with the ordinary metric d∞ (the maximum of a component modulus) in a space that has been subjected to a linear transformation with the operator matrix 1/2QT. This transformation is, like all linear transformations, affine, i.e. it preserves all metric relationships. Therefore, if any of the inequalities (10) is satisfied for *, it will hold for any point of the region G as well. It remains to investigate the case of applying the statistics (13) and (15), where the parameters are estimated from a set of regression equations; using the statistics (5) and (4) (i.e. the quantities r and ) in this case is essentially the same as estimating the parameters as a sample mean or from a single linear regression equation.