<jats:p>Effective management of the COVID-19 pandemic requires widespread and frequent testing of the population for SARS-CoV-2 infection. Saliva has emerged as an attractive alternative to nasopharyngeal samples for surveillance testing, as it does not require specialized personnel or materials for its collection and can be easily provided by the patient. We have developed a simple, fast, and sensitive saliva-based testing workflow that requires minimal sample treatment and equipment. After sample inactivation, RNA is quickly released and stabilized in an optimized buffer, followed by reverse transcription loop-mediated isothermal amplification (RT-LAMP) and detection of positive samples using a colorimetric and/or fluorescent readout. The workflow was optimized using 1,670 negative samples collected from 172 different individuals over the course of 6 months. Each sample was spiked with 50 copies/μL of inactivated SARS-CoV-2 virus to monitor the efficiency of viral detection. Using pre-defined clinical samples, the test was determined to be 100% specific and 97% sensitive, with a limit of detection of 39 copies/mL. The method was successfully implemented in a CLIA laboratory setting for workplace surveillance and reporting. From April 2021 to February 2022, more than 30,000 self-collected samples from 755 individuals were tested, and 85 employees tested positive, mainly during December and January, consistent with high infection rates in Massachusetts and nationwide.</jats:p>
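As a hedged illustration of how the reported diagnostic figures are defined: sensitivity is the true positive rate and specificity the true negative rate of the test. The counts below are hypothetical, not the study's data.

```python
# Sensitivity and specificity from a confusion matrix. The panel counts used
# here are hypothetical illustrations, not the study's actual data.

def sensitivity(tp: int, fn: int) -> float:
    """TP / (TP + FN): fraction of true infections the test detects."""
    return tp / (tp + fn)

def specificity(tn: int, fp: int) -> float:
    """TN / (TN + FP): fraction of negatives correctly called negative."""
    return tn / (tn + fp)

# e.g. 97 of 100 known positives detected, 0 false calls among 200 negatives
print(sensitivity(tp=97, fn=3))   # 0.97
print(specificity(tn=200, fp=0))  # 1.0
```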
Posted 2 months ago
Gemechu Feyisa Yadeta
Institution: Physics Department, College of Natural and Computational Sciences, Mattu University, Mattu
In this work, alpha-particle-induced reactions on Cadmium-116 were studied in the energy range 20-40 MeV. Excitation functions were studied in the range 15-40 MeV for the following reaction channels: 48-Cd-116(α, n) 50-Sn-119, which has a total exciton number of six, with one neutron and one hole; 48-Cd-116(α, 2n + p) 49-In-117, with TD = 10, Ex1 = 3 and Ex2 = 3; 48-Cd-116(α, 3n) 50-Sn-117, with TD = 10, Ex1 = 3 and Ex2 = 3; 48-Cd-116(α, 3n + p) 49-In-116, with TD = 12, Ex1 = 4 and Ex2 = 4; and 48-Cd-116(α, n + α) 48-Cd-115, with TD = 14, Ex1 = 1, Ex2 = 5 and Ex3 = 4. A comparative analysis was performed for these reaction channels of the 116-Cd target nucleus. The experimentally measured excitation functions, obtained from the IAEA EXFOR data library, were compared with theoretical calculations made by the COMPLET code with and without the inclusion of pre-equilibrium particle emission. The level density parameter was varied to obtain good agreement between the calculated and measured data with minimal effort on the fitting parameter.
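The residual nuclide of each channel can be cross-checked by conservation of charge and mass number. The following sketch is simple bookkeeping, separate from the COMPLET calculation, and confirms the products listed above.

```python
# Cross-check of the residual nuclei by conservation of charge (Z) and mass
# number (A): 116Cd (Z=48, A=116) plus an alpha (Z=2, A=4) forms the compound
# nucleus 120Sn; subtracting each channel's ejectiles gives the residual.

EJECTILE = {"n": (0, 1), "p": (1, 1), "a": (2, 4)}  # (Z, A) of n, p, alpha

def residual(ejectiles):
    """Residual (Z, A) for 116Cd + alpha minus the given ejectiles."""
    z, a = 48 + 2, 116 + 4  # compound nucleus 120Sn
    for e in ejectiles:
        dz, da = EJECTILE[e]
        z, a = z - dz, a - da
    return z, a

assert residual(["n"]) == (50, 119)                 # (α, n)      -> 119Sn
assert residual(["n", "n", "p"]) == (49, 117)       # (α, 2n + p) -> 117In
assert residual(["n", "n", "n"]) == (50, 117)       # (α, 3n)     -> 117Sn
assert residual(["n", "n", "n", "p"]) == (49, 116)  # (α, 3n + p) -> 116In
assert residual(["n", "a"]) == (48, 115)            # (α, n + α)  -> 115Cd
```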
Posted 5 months ago
Houda Chakib,
Institution: Data4Earth Laboratory, Faculty of Sciences and Technics
Email: houda.chakib@yahoo.fr
Najlae Idrissi,
Institution: Data4Earth Laboratory, Faculty of Sciences and Technics
Email: n.idrissi@usms.ma
Oussama Jannani
Institution: Data4Earth Laboratory, Faculty of Sciences and Technics
Email: o.jannani@gmail.com
In recent years, image compression techniques have received a lot of attention from researchers as the number of images at hand keeps growing. The Discrete Wavelet Transform is one such technique: it has been used in a wide range of applications and has proven its efficiency in the image compression field. Moreover, combined with various other approaches, it can compress images at high compression ratios while maintaining good visual quality. The work presented in this paper combines Deep Learning algorithms with the Wavelet Transform approach, implemented in different color spaces. We investigate the RGB and luminance/chrominance YCbCr color spaces to develop three image compression models based on a Convolutional Auto-Encoder (CAE). To evaluate the models' performance, we used 24 raw images from the Kodak database, applied each approach to every one of them, and compared the experimental results with those obtained using a standard compression method. We draw this comparison in terms of three performance metrics: the Structural Similarity Index (SSIM), Peak Signal-to-Noise Ratio (PSNR), and Mean Squared Error (MSE). The results indicate that the proposed schemes achieve significant improvements in distortion metrics over the traditional image compression method, especially for SSIM, and reduce MSE values by more than 50%. In addition, the proposed schemes output images with high visual quality, in which details and textures are clear and distinguishable.
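The distortion metrics used in the comparison can be written out directly. The sketch below computes MSE and PSNR for 8-bit images on synthetic stand-in data; SSIM is normally taken from a library (e.g. scikit-image's `structural_similarity`) rather than re-implemented.

```python
import numpy as np

# MSE and PSNR for 8-bit images, on synthetic stand-in data.

def mse(ref, test):
    """Mean squared error between two equal-shape images."""
    return float(np.mean((ref.astype(np.float64) - test.astype(np.float64)) ** 2))

def psnr(ref, test, peak=255.0):
    """Peak signal-to-noise ratio in dB: 10 * log10(peak^2 / MSE)."""
    return 10.0 * np.log10(peak ** 2 / mse(ref, test))

rng = np.random.default_rng(0)
img = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)
noise = rng.integers(-5, 6, size=img.shape)          # mild uniform noise
noisy = np.clip(img.astype(np.int16) + noise, 0, 255).astype(np.uint8)
print(mse(img, noisy), psnr(img, noisy))  # small MSE, PSNR well above 30 dB
```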
Posted 1 year ago
Huan-Yu Chen,
Institution: Department of Computer Science and Information Engineering, National Taichung University of Science and Technology
Chuen-Horng Lin,
Institution: Department of Computer Science and Information Engineering, National Taichung University of Science and Technology
Jyun-Wei Lai,
Institution: Department of Computer Science and Information Engineering, National Taichung University of Science and Technology
Yung-Kuan Chan
Institution: Department of Management Information Systems, National Chung Hsing University
This paper proposes a multi-convolutional neural network (CNN)-based system for the detection, tracking, and recognition of the emotions of dogs in surveillance videos. The system detects dogs in each frame of a video, tracks them across frames, and recognizes their emotions. It uses a YOLOv3 model for dog detection. The dogs are tracked in real time with a deep association metric model (DeepDogTrack), which uses a Kalman filter combined with a CNN for processing. Thereafter, the dogs' emotional behaviors are categorized into three types, angry (or aggressive), happy (or excited), and neutral (or general), on the basis of manual judgments made by veterinary experts and dog breeders. The system extracts sub-images from videos of dogs, determines whether the images are sufficient to recognize the dogs' emotions, and uses the long short-term deep features of dog memory networks model (LDFDMN) to identify each dog's emotions. The dog detection experiments were conducted using two image datasets to verify the model's effectiveness; the detection accuracy rates were 97.59% and 94.62%, respectively. Detection errors occurred when the dog's facial features were obscured, when the dog was of a special breed, when the dog's body was covered, or when the dog region was incomplete. The dog-tracking experiments were conducted using three video datasets, each containing one or more dogs. The highest tracking accuracy rate (93.02%) was achieved when only one dog was in the video, and the highest tracking rate for a video containing multiple dogs was 86.45%. Tracking errors occurred when the region covered by a dog's body changed as the dog entered or left the screen, resulting in tracking loss. The dog emotion recognition experiments were conducted using two video datasets; the emotion recognition accuracy rates were 81.73% and 76.02%, respectively. Recognition errors occurred when removal of the image background left the dog region unclear, causing the incorrect emotion to be recognized. Of the three emotions, anger was the most prominently represented; therefore, the recognition rates for angry emotions were higher than those for happy or neutral emotions. Emotion recognition errors also occurred when the dog's movements were too subtle or too fast, the image was blurred, the shooting angle was suboptimal, or the video resolution was too low. Nevertheless, the experiments revealed that the proposed system can correctly recognize the emotions of dogs in videos. Its accuracy can be increased substantially by using more images and videos to train the detection, tracking, and emotion recognition models. The system can then be applied in real-world situations to assist in the early identification of dogs that may exhibit aggressive behavior.
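The tracking stage pairs a Kalman filter with CNN appearance features, as in DeepSORT-style trackers. The following is a minimal sketch of the Kalman component alone, under a constant-velocity motion model; the matrices and noise levels are illustrative assumptions, not the paper's parameters.

```python
import numpy as np

# Minimal constant-velocity Kalman filter for a detection's box centre.
# State is (x, y, vx, vy); only the position (x, y) is observed per frame.

dt = 1.0
F = np.array([[1, 0, dt, 0],
              [0, 1, 0, dt],
              [0, 0, 1,  0],
              [0, 0, 0,  1]], dtype=float)  # constant-velocity transition
H = np.array([[1, 0, 0, 0],
              [0, 1, 0, 0]], dtype=float)   # observation: position only
Q = np.eye(4) * 1e-2                         # process noise covariance
R = np.eye(2)                                # measurement noise covariance

def predict(x, P):
    """Propagate state and covariance one frame ahead."""
    return F @ x, F @ P @ F.T + Q

def update(x, P, z):
    """Fold in a new detection z = (x, y)."""
    y = z - H @ x                            # innovation
    S = H @ P @ H.T + R                      # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)           # Kalman gain
    return x + K @ y, (np.eye(4) - K @ H) @ P

x, P = np.zeros(4), np.eye(4)
for t in range(1, 11):                       # a dog moving right 2 px/frame
    x, P = predict(x, P)
    x, P = update(x, P, np.array([2.0 * t, 0.0]))
print(x)  # position estimate approaches (20, 0), velocity approaches (2, 0)
```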
Posted 1 year ago
Joan Danielle K. Ongchoco,
Institution: Department of Psychology
Madeline Gedvila,
Institution: Department of Psychology
Wilma A. Bainbridge
Institution: Department of Psychology
Time is the fabric of experience — yet it is incredibly malleable in the mind of the observer: seeming to drag on, or fly right by at different moments. One of the most influential drivers of temporal distortions is attention, where heightened attention dilates subjective time. But an equally important feature of subjective experience involves not just the objects of attention, but also what information gets tagged to be remembered or forgotten in the first place, independent of attention (i.e. intrinsic image memorability). Here we test how memorability influences time perception. Observers viewed scenes in an oddball paradigm, where the last scene could be a forgettable “oddball” amidst memorable ones, or vice versa. Subjective time dilation occurred only for forgettable oddballs, but not memorable ones — demonstrating an oddball effect where the oddball did not differ in low-level visual features, image category, or even subjective memorability. But more importantly, these results emphasize how memory can interact with temporal experience: forgettable endings amidst memorable sequences dilate our experience of time.
Posted 1 year ago
This article presents a fast parallel lossless technique and a lossy image compression technique for 16-bit single-channel images. Nowadays, such techniques are "a must" in robotics and other areas where several depth cameras are used. Since many of these algorithms need to run on low-profile hardware, such as embedded systems, they should be very fast and customizable. The proposal is based on treating depth images as surfaces: the idea is to split the image into a set of polynomial functions, each describing a part of the surface. The algorithm proposed herein achieves a similar or better compression rate, and especially higher speed, than existing techniques. It is also fully parallelizable and can run on several cores, which, compared to other approaches, makes it useful for handling and streaming multiple cameras simultaneously. The algorithm is assessed in different situations and on different hardware, and is evaluated with LiDAR-captured images. Its implementation is rather simple, and this work is accompanied by an open implementation in C++.
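The core idea of describing a depth surface patch by patch with polynomials can be sketched as a least-squares fit per block, storing only the coefficients. Block size, polynomial degree, and the synthetic depth patch below are illustrative assumptions, not the paper's exact scheme.

```python
import numpy as np

# Approximate a block of a 16-bit depth image by a low-degree polynomial
# surface z = f(x, y), keeping only the coefficients.

def design_matrix(shape, degree=2):
    """Monomials x^i * y^j with i + j <= degree, one row per pixel."""
    h, w = shape
    ys, xs = np.mgrid[0:h, 0:w]
    terms = [xs.ravel() ** i * ys.ravel() ** j
             for i in range(degree + 1) for j in range(degree + 1 - i)]
    return np.stack(terms, axis=1).astype(np.float64)

def fit_patch(block, degree=2):
    """Least-squares fit; returns the coefficient vector for the block."""
    A = design_matrix(block.shape, degree)
    coeffs, *_ = np.linalg.lstsq(A, block.ravel().astype(np.float64), rcond=None)
    return coeffs

def eval_patch(coeffs, shape, degree=2):
    """Reconstruct the block from its stored coefficients."""
    return (design_matrix(shape, degree) @ coeffs).reshape(shape)

# A smooth synthetic 16x16 depth patch (tilted plane plus a bilinear term):
ys, xs = np.mgrid[0:16, 0:16]
patch = (1000 + 3 * xs + 5 * ys + xs * ys).astype(np.uint16)
coeffs = fit_patch(patch)               # 6 coefficients stand in for 256 pixels
recon = eval_patch(coeffs, patch.shape)
print(np.abs(recon - patch).max())      # near-zero reconstruction error
```

For rougher surfaces the residual would be nonzero, which is where the lossy/lossless split in the article comes in.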
Posted 1 year ago
Baekcheon Seong
Institution: Yonsei University
Several image-based biomedical diagnoses require high-resolution imaging at large spatial scales. However, conventional microscopes exhibit an inherent trade-off between depth-of-field (DoF) and spatial resolution, and thus require objects to be refocused at each lateral location, which is time-consuming. Here, we present a computational imaging platform, termed the E2E-BPF microscope, which enables large-area, high-resolution imaging of large-scale objects without serial refocusing. The method combines a physics-incorporated, deep-learned design of a binary phase filter (BPF) with a jointly optimized deconvolution neural network, which together produce high-resolution, high-contrast images over extended depth ranges. We demonstrate the method through numerical simulations and experiments with fluorescently labeled beads, cells, and tissue sections, and show high-resolution imaging over a 15.5-fold larger DoF than that of a conventional microscope. Our method provides a highly effective and scalable strategy for DoF-extended optical imaging systems and is expected to find numerous applications in rapid image-based diagnosis, optical vision, and metrology.
Posted 1 year ago
Saeed Niksaz
Institution: Shahid Bahonar University of Kerman
Automatic medical report generation is the production of reports from radiology images that are grammatically correct and coherent. The encoder-decoder architecture is the most common choice for report generation, but it has not achieved satisfactory performance because of the complexity of this task. This paper presents an approach to improve the performance of report generation that can easily be added to any encoder-decoder architecture. In this approach, in addition to the features extracted from the image, the report text associated with the most similar image in the training data set is also provided as input to the decoder. The decoder thus acquires additional knowledge for text production, which helps improve performance and produce better reports. To demonstrate the efficiency of the proposed method, the technique was added to several different models that produce text from chest images, and the evaluation results showed that the performance of all models improved. Different approaches for word embedding, including BioBERT and GloVe, were also evaluated. Our results showed that BioBERT, a transformer-based language model, is the better approach for this task.
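The retrieval step described above can be sketched as a nearest-neighbour lookup over image feature vectors. The feature vectors, the cosine-similarity measure, and the toy report strings below are illustrative assumptions, not the paper's exact pipeline.

```python
import numpy as np

# Retrieval sketch: return the report of the training image whose features
# are most similar (cosine similarity) to the query image's features.

def most_similar_report(query_feat, train_feats, train_reports):
    """Report text of the nearest training image under cosine similarity."""
    q = query_feat / np.linalg.norm(query_feat)
    T = train_feats / np.linalg.norm(train_feats, axis=1, keepdims=True)
    return train_reports[int(np.argmax(T @ q))]

rng = np.random.default_rng(1)
train_feats = rng.normal(size=(5, 8))               # 5 training images, 8-D features
train_reports = [f"report {i}" for i in range(5)]
query = train_feats[3] + 0.01 * rng.normal(size=8)  # a query close to image 3
print(most_similar_report(query, train_feats, train_reports))
```

In the paper's setting the features would come from the encoder and the retrieved report would be concatenated to the decoder's input.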
Posted 1 year ago
Rong Lan,
Institution: Xi’an University of Posts and Telecommunications
Haowen Mi,
Institution: Xi’an University of Posts and Telecommunications
Email: 1466403072@qq.com
Na Qu,
Institution: Xi’an University of Posts and Telecommunications
Feng Zhao,
Institution: Xi’an University of Posts and Telecommunications
Haiyan Yu,
Institution: Xi’an University of Posts and Telecommunications
Lu Zhang
Institution: Xi’an University of Posts and Telecommunications
Although evidential c-means clustering (ECM), based on evidence theory, overcomes the limitations of fuzzy theory to some extent and improves the capability of fuzzy c-means clustering (FCM) to express and process the uncertainty of information, ECM does not consider the spatial information of pixels, which makes it unable to deal effectively with noisy pixels. Applying ECM directly to image segmentation therefore cannot obtain satisfactory results. This paper proposes a robust evidential c-means clustering algorithm combining spatial information for image segmentation. Firstly, an adaptive noise distance is constructed using the local information of pixels to improve the ability to detect noise points. Secondly, each pixel's original, local, and non-local information is introduced into the objective function through adaptive weights to enhance robustness to noise. Then, the entropy of the pixel membership degrees is used to design an adaptive parameter that solves the problem of distance parameter selection in credal c-means clustering (CCM). Finally, Dempster's rule of combination is improved by introducing spatial neighborhood information and is used to assign pixels belonging to meta-clusters and the noise cluster to singleton clusters. Experiments on synthetic images, real images, and remote sensing SAR images demonstrate that the proposed algorithm not only suppresses noise effectively but also retains the details of the image. Both the visual segmentation results and the evaluation indexes indicate its effectiveness in image segmentation.
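As a hedged baseline illustration, the following sketches the standard FCM updates that ECM and the proposed method generalize: alternate membership and centroid updates until convergence. This is plain FCM on one-dimensional toy data; the paper's evidential and spatial terms are omitted.

```python
import numpy as np

# Plain fuzzy c-means on 1-D toy data: the baseline the paper extends.

def fcm(data, c=2, m=2.0, iters=50, seed=0):
    """Alternating FCM updates; returns cluster centres and memberships."""
    rng = np.random.default_rng(seed)
    u = rng.random((len(data), c))
    u /= u.sum(axis=1, keepdims=True)      # memberships sum to 1 per point
    for _ in range(iters):
        # centroid update: fuzzily weighted means
        centers = (u ** m).T @ data / (u ** m).sum(axis=0)
        # membership update: inverse-distance weighting with fuzzifier m
        d = np.abs(data[:, None] - centers[None, :]) + 1e-12
        u = 1.0 / (d ** (2.0 / (m - 1.0)))
        u /= u.sum(axis=1, keepdims=True)
    return centers, u

data = np.array([0.0, 0.1, 0.2, 5.0, 5.1, 5.2])
centers, u = fcm(data)
print(np.sort(centers))  # one centre near each group of points
```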
Posted 1 year ago
Generally, a large amount of training data is essential to train a deep learning model to obtain more accurate detection performance in the computer vision domain. However, collecting and annotating datasets incurs extensive cost. In this letter, we propose a self-supervised auxiliary task that learns general video features without adding any human-annotated labels, aiming at improving the performance of violence recognition. Firstly, we propose a violence recognition method based on a convolutional neural network with a self-supervised auxiliary task, which can learn visual features that improve the downstream task (recognizing violence). Secondly, we establish a balance-weighting scheme to solve the crucial problem of balancing the self-supervised auxiliary task and the violence recognition task. Thirdly, we develop an attention receptive-field module, showing that proper use of the spatial attention mechanism can effectively expand the receptive fields of the module, further improving the semantically meaningful representations of the network. To evaluate the proposed method, two benchmark datasets were used, and the experimental results show better performance than other state-of-the-art methods.
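One way to picture a balance-weighting scheme is to scale the auxiliary loss by a running estimate of the two losses' relative magnitude, so the self-supervised task neither dominates nor vanishes. The scheme below is a generic illustration of that idea, not the paper's exact formula; `alpha` and `momentum` are assumed hyperparameters.

```python
# Generic balance-weighting sketch: hold the auxiliary (self-supervised)
# loss at roughly a fixed share of the main violence-recognition loss by
# tracking the running ratio of their magnitudes.

class BalancedLoss:
    def __init__(self, alpha=0.1, momentum=0.9):
        self.alpha = alpha        # target weight of the auxiliary task
        self.momentum = momentum  # smoothing for the magnitude ratio
        self.ratio = 1.0          # running estimate of main/aux magnitude

    def __call__(self, main_loss, aux_loss):
        r = main_loss / (aux_loss + 1e-8)
        self.ratio = self.momentum * self.ratio + (1 - self.momentum) * r
        return main_loss + self.alpha * self.ratio * aux_loss

loss_fn = BalancedLoss()
total = loss_fn(main_loss=2.0, aux_loss=20.0)  # aux rescaled before summing
```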
Posted 1 year ago