Peter B. Imrey
Institution: Department of Quantitative Health Sciences, Lerner Research Institute, Cleveland Clinic, Cleveland, Ohio
6 hours ago
Şerif Ercan
Institution: Department of Medical Biochemistry, Lüleburgaz State Hospital, Kırklareli, Turkey
6 hours ago
Farrokh Habibzadeh,
Institution: Global Virus Network, Middle East Region, Shiraz, Iran
Parham Habibzadeh,
Institution: Research Center for Health Sciences, Institute of Health, Shiraz University of Medical Sciences, Shiraz, Iran
Mahboobeh Yadollahie
Institution: Freelance Researcher, Shiraz, Iran
Serologic tests are important for conducting seroepidemiologic and prevalence studies. However, the tests used are typically imperfect and produce false-positive and false-negative results, which is why the seropositive rate (the apparent prevalence) does not generally reflect the true prevalence of the disease or condition of interest. Herein, we discuss how the true prevalence can be derived from the apparent prevalence and the test sensitivity and specificity. A Monte Carlo computer simulation was also used to further examine the situation where the measured test sensitivity and specificity are themselves uncertain. We then complete our review with a real example. The apparent prevalence reported in many prevalence studies in the medical literature is a biased estimate and cannot be interpreted correctly unless the value is corrected.
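The correction the abstract describes is not spelled out here; a standard approach of this kind is the Rogan-Gladen estimator, sketched below with illustrative numbers (the seropositive rate, sensitivity, and specificity are assumptions, not values from the article):

```python
import random

def true_prevalence(apparent, sensitivity, specificity):
    """Rogan-Gladen estimator: correct the apparent (seropositive)
    prevalence for the test's false-positive and false-negative rates."""
    return (apparent + specificity - 1) / (sensitivity + specificity - 1)

# Illustrative numbers: 12% seropositive with a 90%-sensitive,
# 95%-specific assay.
point = true_prevalence(0.12, sensitivity=0.90, specificity=0.95)

# Monte Carlo propagation when sensitivity and specificity are
# themselves uncertain (normal draws around the measured values).
random.seed(1)
draws = sorted(
    true_prevalence(0.12, random.gauss(0.90, 0.02), random.gauss(0.95, 0.01))
    for _ in range(10_000)
)
lo, hi = draws[250], draws[9750]   # approximate central 95% interval
```

The corrected point estimate (about 8.2% here) falls well below the 12% seropositive rate, illustrating the bias the authors describe.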
6 hours ago
Abdurrahman Coşkun
Institution: Acibadem Labmed Clinical Laboratories, Istanbul, Turkey
In laboratory medicine, mathematical equations are frequently used to calculate various parameters, including bias, imprecision, measurement uncertainty, the sigma metric (SM), creatinine clearance, and LDL-cholesterol concentration. Such equations have strict limitations: they cannot be used in all situations and are not open to manipulation. Recently, a paper, “Bias estimation for Sigma metric calculation: Arithmetic mean versus quadratic mean”, was published in Biochemia Medica. In it, the author criticized the approach of taking the arithmetic mean of multiple biases to obtain a single bias and proposed a quadratic method for estimating the overall bias from external quality assurance services (EQAS) data for SM calculation. This approach does not fit the purpose. Using the correct equation in a calculation is as important as using the correct reagent in the measurement of an analyte; therefore, before an equation is used, its suitability should be checked and confirmed.
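For readers unfamiliar with the two pooling approaches at issue, the sketch below contrasts them; the bias values, allowable total error (TEa), and CV are hypothetical, and the SM formula used, SM = (TEa − |bias|)/CV, is the common textbook form rather than anything taken from the paper under discussion:

```python
# Illustrative comparison of the two bias-pooling approaches.
# All numbers below are hypothetical, not from the paper.

def arithmetic_mean(biases):
    return sum(biases) / len(biases)

def quadratic_mean(biases):
    # Root-mean-square: squaring prevents opposite-signed biases
    # from cancelling, so it is always >= |arithmetic mean|.
    return (sum(b * b for b in biases) / len(biases)) ** 0.5

def sigma_metric(tea, bias, cv):
    # Textbook form: SM = (TEa - |bias|) / CV, all in percent.
    return (tea - abs(bias)) / cv

biases = [1.2, -0.8, 0.5, -1.5]   # % biases from several EQAS surveys
tea, cv = 10.0, 2.0               # % allowable total error, % imprecision

sm_arith = sigma_metric(tea, arithmetic_mean(biases), cv)
sm_quad = sigma_metric(tea, quadratic_mean(biases), cv)
```

Because the positive and negative biases largely cancel in the arithmetic mean, it yields a smaller overall bias and hence a larger (more flattering) sigma metric than the quadratic mean does.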
1 day ago
Reuven Cohen,
Oren Perez
In this article we study the social dynamics of temporal partitioning congestion games (TPGs), in which participants must coordinate an optimal time-partitioning for using a limited resource. The challenge in TPGs lies in determining whether users can optimally self-organize their usage patterns. Reaching an optimal solution may be undermined, however, by a collectively destructive meta-reasoning pattern that traps users in a socially vicious oscillatory behavior. TPGs constitute a dilemma for both human and animal communities. We developed a model capturing the dynamics of these games and ran simulations to assess its behavior, based on a 2×2 framework that distinguishes between the players’ knowledge of other players’ choices and whether they use a learning mechanism. We found that the only way in which an oscillatory dynamic can be thwarted is by adding learning, which leads to weak convergence in the no-information condition and to strong convergence in the with-information condition. We corroborated the validity of our model using real data from a study of bats’ behaviour in an environment of water scarcity. We conclude by examining the merits of a complexity-based, agent-based modelling approach over a game-theoretic one, contending that it offers superior insights into the temporal dynamics of TPGs. We also briefly discuss the policy implications of our findings.
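The authors' 2×2 framework is not reproduced here, but a toy two-slot simulation, entirely hypothetical and far simpler than their model, can illustrate the oscillation-versus-convergence contrast the abstract describes:

```python
# Toy sketch (not the authors' model): N agents each pick one of two
# time slots; the less crowded slot is preferred next round.
import random

random.seed(0)
N, ROUNDS = 100, 200

def simulate(learning_rate=None):
    """learning_rate=None -> deterministic best response (oscillates);
    learning_rate=0.1     -> small probabilistic adjustment (damps).
    Returns the occupancy of slot 0 in each round."""
    p = [0.5] * N                      # each agent's probability of slot 0
    history = []
    for _ in range(ROUNDS):
        choices = [random.random() < pi for pi in p]
        n0 = sum(choices)
        history.append(n0)
        crowded0 = n0 > N - n0         # is slot 0 the more crowded one?
        for i in range(N):
            target = 0.0 if crowded0 else 1.0
            if learning_rate is None:
                p[i] = target          # everyone flips at once
            else:
                p[i] += learning_rate * (target - p[i])  # gentle nudge
    return history

osc = simulate(None)    # full-information best response: 0 <-> 100 swings
lrn = simulate(0.1)     # with learning: occupancy hovers near an even split
```

With deterministic best response every agent flips each round, so occupancy swings between the extremes; the small learning step damps the swings toward an even split, mirroring the convergence-only-with-learning finding.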
2 days ago
Farrokh Habibzadeh
Background
Randomized clinical trials (RCTs) shape our clinical practice. Several studies report a mediocre replicability rate among the RCTs examined. Many researchers attribute this relatively low replication rate to the high p value significance threshold. To solve this problem, some researchers have proposed using a lower threshold, which is inevitably associated with a decrease in study power.
Methods
The results of 22 500 RCTs retrieved from the Cochrane Database of Systematic Reviews (CDSR) were reinterpreted using two fixed p value significance thresholds (0.05 and 0.005) and a recently proposed flexible threshold that minimizes the weighted sum of errors in statistical inference.
Results
With the p < 0.05 criterion, 28.5% of RCTs were significant; with p < 0.005, 14.2%; and with the flexible threshold, 9.9% (two-thirds of the RCTs significant under the p < 0.05 criterion were found not significant). Although lowering the p cut-off decreases the false-positive rate, it is not generally associated with a lower weighted sum of errors; the false-negative rate increases (the study power decreases), and important treatments may be left undiscovered. Accurate calculation of the optimal p value threshold requires knowledge of the variance in each study arm, which is available only a posteriori.
Conclusions
Lowering the p value threshold, as proposed by some researchers, is not reasonable because it may increase the false-negative rate. A flexible p significance threshold, although it minimizes the error in statistical inference, is not good enough either, because only a rough estimate can be calculated a priori; the data necessary for precise computation of the most appropriate threshold are available only a posteriori. The frequentist statistical framework thus has an inherent conflict. Alternative methods, such as Bayesian methods, although not perfect, would be more appropriate for the data analysis of RCTs.
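The power cost of a stricter threshold referred to in the conclusions can be illustrated with a standard two-arm z-test power calculation; the effect size and sample size below are hypothetical, not values from the CDSR data:

```python
from statistics import NormalDist

Z = NormalDist()

def power(alpha, effect, n_per_arm):
    """Power of a two-sided, two-arm z-test (normal approximation)
    to detect a standardized effect size."""
    z_alpha = Z.inv_cdf(1 - alpha / 2)
    noncentrality = effect * (n_per_arm / 2) ** 0.5
    return 1 - Z.cdf(z_alpha - noncentrality)

# Hypothetical trial: standardized effect 0.3, 100 patients per arm.
power_005 = power(0.05, effect=0.3, n_per_arm=100)
power_0005 = power(0.005, effect=0.3, n_per_arm=100)
```

Dropping the threshold from 0.05 to 0.005 cuts the power of this hypothetical trial roughly in half, which is the false-negative cost the abstract warns about.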
2 days ago
Abdurrahman Coskun
Highlights
• Prediction interval has a great potential to be used in laboratory medicine
• It is a powerful tool for computing personalized reference interval and reference change value
• It can be used to assess the stability of analytical systems
• It can be used in monitoring the accuracy and reproducibility of analytical systems
Monitoring is indispensable for assessing disease prognosis and evaluating the effectiveness of treatment strategies, both of which rely on serial measurements of patients’ data. It also plays a critical role in maintaining the stability of analytical systems, which is achieved through serial measurements of quality control samples. Accurate monitoring can be achieved through data collection following a strict preanalytical and analytical protocol and the application of a suitable statistical method. In a stable process, future observations can be predicted from historical data collected during periods when the process was deemed reliable. This can be evaluated using the statistical prediction interval. Statistically, a prediction interval gives an interval, based on historical data, within which future measurement results are expected to fall with a specified probability, such as 95%. A prediction interval has two primary components: (i) the set point and (ii) the total variation around the set point, which determines the upper and lower limits of the interval. Both can be calculated from repeated measurement results obtained from the process during its steady state. In this paper, (i) the theoretical bases of prediction intervals are outlined and (ii) their practical application is explained through examples, aiming to facilitate the implementation of prediction intervals in routine laboratory medicine practice as a robust tool for monitoring patients’ data and analytical systems.
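As a minimal sketch of the two components described above, the standard prediction-interval formula for a single future observation can be computed from historical steady-state results; the quality-control values below are hypothetical, and the t-value is taken from a standard table:

```python
from statistics import mean, stdev

# Hypothetical steady-state QC results (e.g., mmol/L); not from the paper.
historical = [4.1, 4.3, 4.0, 4.2, 4.4, 4.1, 4.2, 4.3, 4.2, 4.1]
n = len(historical)

set_point = mean(historical)   # component (i): the set point
s = stdev(historical)          # basis of component (ii): the variation

# 95% prediction interval for one future observation:
#   set_point +/- t(0.975, n-1) * s * sqrt(1 + 1/n)
t_crit = 2.262                 # t(0.975, df = 9), from a t-table
half_width = t_crit * s * (1 + 1 / n) ** 0.5
lower, upper = set_point - half_width, set_point + half_width
```

A future QC result falling outside [lower, upper] would then signal that the analytical system may no longer be in its steady state.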
2 days ago
Molly E. Contini,
Jeffrey R. Spence,
David J. Stanley
Researchers and practitioners are typically familiar with descriptive statistics and statistical inference. However, outside of regression techniques, little attention is given to questions of prediction. In the current paper, we introduce prediction intervals using fundamental concepts learned in descriptive and inferential statistical training (e.g., sampling error, standard deviation). We walk through an example using simple hand calculations and reference a simple R package that can be used to calculate prediction intervals.
2 days ago
Genki Shibukawa
Institution: University
Email: info@res00.com
We present an explicit formula for the powers of 2 × 2 quantum matrices, a natural quantum analogue of the powers of the usual 2 × 2 matrices. As applications, we give some non-commutative relations among the entries of the powers of 2 × 2 quantum matrices, yielding a simple proof of the results of Vokos, Zumino, and Wess (1990).
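For orientation, the classical (commutative) counterpart that such a quantum formula generalizes follows from the Cayley-Hamilton theorem; this is the standard textbook identity, not the authors' quantum result:

```latex
% For a 2x2 matrix A with t = \operatorname{tr} A and d = \det A,
% Cayley-Hamilton gives A^2 = tA - dI, and hence by induction
A^{n} = f_{n}\,A - d\,f_{n-1}\,I,
\qquad f_{n+1} = t\,f_{n} - d\,f_{n-1},
\quad f_{0} = 0,\ f_{1} = 1 .
```

Here the coefficients f_n depend only on the trace and determinant; the quantum case must additionally track the non-commutativity of the matrix entries.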
Less
2 years ago