Recent Submissions

  • Algorithm for Detection of Raising Eyebrows and Jaw Clenching Artifacts in EEG Signals Using Neurosky Mindwave Headset

    Vélez, Luis; Kemper, Guillermo (2021-01-01)
    The present work proposes an algorithm to detect and identify the artifact signals produced by the specific gestural actions of jaw clenching and eyebrow raising in the electroencephalography (EEG) signal. Artifacts are signals that appear in the EEG recording but do not originate in the brain; they come from other sources such as eye blinking, electrical noise, muscle movements, breathing, and heartbeat. The proposed algorithm uses concepts from signal processing, such as signal energy, zero crossings, and block processing, to correctly classify the aforementioned artifact signals. The algorithm showed a 90% detection accuracy when evaluated on independent ten-second records in which the gestural events of interest were induced; the samples were then processed and the detection was performed. The detection and identification of these artifacts can be used as commands in a brain–computer interface (BCI) for various applications, such as games, control systems for hardware of special benefit to disabled people (for example, a wheelchair or a robotic or mechanical arm), a computer pointer control interface, an Internet of Things (IoT) controller, or a communication system.
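    A minimal sketch of the block-processing idea described above, assuming a single-channel EEG record; the sampling rate, block length, feature thresholds, and function names are illustrative assumptions rather than the published algorithm's tuned values.

      import numpy as np

      FS = 512                  # assumed NeuroSky MindWave sampling rate (Hz)
      BLOCK = FS // 2           # 0.5 s analysis blocks (assumption)

      def block_features(x, block=BLOCK):
          """Per-block signal energy and zero-crossing count."""
          feats = []
          for b in range(len(x) // block):
              seg = np.asarray(x[b * block:(b + 1) * block], dtype=float)
              energy = np.sum(seg ** 2)
              zero_crossings = np.count_nonzero(np.signbit(seg[:-1]) != np.signbit(seg[1:]))
              feats.append((energy, zero_crossings))
          return np.array(feats)

      def classify_blocks(feats, e_thr=1e6, zc_thr=40):
          """0 = clean, 1 = eyebrow-raise candidate, 2 = jaw-clench candidate.
          Illustrative rule: both gestures raise block energy; jaw clenching
          also raises the zero-crossing count (higher-frequency muscle burst)."""
          labels = np.zeros(len(feats), dtype=int)
          for i, (energy, zc) in enumerate(feats):
              if energy > e_thr:
                  labels[i] = 2 if zc > zc_thr else 1
          return labels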
  • Correspondence Between TOVA Test Results and Characteristics of EEG Signals Acquired Through the Muse Sensor in Positions AF7–AF8

    Castillo, Ober; Sotomayor, Simy; Kemper, Guillermo; Clement, Vincent (2021-01-01)
    This paper studies the correspondence between the results of the Test of Variables of Attention (TOVA) and the signals acquired by the Muse electroencephalography (EEG) sensor at positions AF7 and AF8 of the cerebral cortex. A variety of research papers estimate an attention index from different characteristics of discrete brain-activity signals; however, many of these results were obtained without contrasting them against standardized tests. For this reason, in the present work the results are compared with the TOVA score, which aims to identify an attention disorder in a person. The indicators obtained from the test are the response time variability, the average response time, and the d′ (d prime) score. During the test, characteristics of the EEG signals in the alpha, beta, theta, and gamma subbands, such as energy, average power, and standard deviation, were extracted. For this purpose, the acquired signals are filtered to reduce the effect of the movement of the muscles near the cerebral cortex and then undergo subband decomposition by applying the wavelet packet transform. The results show a well-marked correspondence between the parameters of the EEG signal in the indicated subbands and the visual attention indicators provided by the TOVA. This correspondence was measured through Pearson's correlation coefficient, which yielded an average of 0.8.
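    A minimal sketch of the feature-extraction and correlation steps described above, assuming the PyWavelets package; the wavelet family, decomposition depth, and function names are illustrative assumptions, not the parameters used in the paper.

      import numpy as np
      import pywt

      def subband_features(x, wavelet='db4', level=5):
          """Decompose a channel into 2**level frequency-ordered subbands and
          return (energy, average power, standard deviation) for each."""
          wp = pywt.WaveletPacket(data=x, wavelet=wavelet, mode='symmetric', maxlevel=level)
          feats = []
          for node in wp.get_level(level, order='freq'):
              c = np.asarray(node.data, dtype=float)
              feats.append((np.sum(c ** 2), np.mean(c ** 2), np.std(c)))
          return np.array(feats)

      def pearson(eeg_feature_per_subject, tova_indicator_per_subject):
          """Pearson correlation between one EEG feature and one TOVA indicator."""
          return np.corrcoef(eeg_feature_per_subject, tova_indicator_per_subject)[0, 1]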
  • Algorithm Oriented to the Detection of the Level of Blood Filling in Venipuncture Tubes Based on Digital Image Processing

    Castillo, Jorge; Apfata, Nelson; Kemper, Guillermo (2021-01-01)
    This article proposes an algorithm for detecting the blood filling level in venipuncture tubes with millimeter resolution. The objective of the software is to measure the amount of blood stored in the venipuncture tube and avoid coagulation problems due to excess fluid, as well as blood levels below those required for the type of analysis to be performed. The algorithm acquires images from a camera positioned in a rectangular structure located within an enclosure with its own internal lighting, which ensures adequate segmentation of the pixels of the region of interest. The algorithm consists of an image enhancement stage based on gamma correction, followed by a segmentation stage of the area of pixels of interest based on thresholding in the HSI color model, together with filtering to accentuate the contrast between the filling level and the staining; the penultimate stage locates the filling level from changes in the vertical tonality of the image. Finally, the level of blood contained in the tube is obtained from the number of pixels spanning the vertical dimension of the tube filling, which is then converted to a physical dimension expressed in millimeters. The validation results show an average percentage error of 0.96% for the proposed algorithm.
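    A minimal sketch of the processing chain described above (gamma correction, color thresholding, pixel-to-millimeter conversion), assuming OpenCV and an image already cropped to the tube region; HSV is used here as a stand-in for the paper's HSI model, and the threshold ranges and millimeter-per-pixel factor are illustrative calibration assumptions.

      import cv2
      import numpy as np

      MM_PER_PIXEL = 0.12   # assumed calibration of the camera/enclosure setup

      def gamma_correct(img_bgr, gamma=1.5):
          """Standard LUT-based gamma correction."""
          lut = np.array([((i / 255.0) ** (1.0 / gamma)) * 255 for i in range(256)],
                         dtype=np.uint8)
          return cv2.LUT(img_bgr, lut)

      def blood_level_mm(tube_roi_bgr):
          """Estimate the height of the blood column in the tube ROI, in millimeters."""
          hsv = cv2.cvtColor(gamma_correct(tube_roi_bgr), cv2.COLOR_BGR2HSV)
          # Assumed hue/saturation/value range for blood-stained pixels.
          mask = cv2.inRange(hsv, np.array([0, 80, 40]), np.array([15, 255, 255]))
          # A row counts as filled when most of its pixels fall inside the mask.
          rows_filled = np.count_nonzero(mask, axis=1) > 0.5 * mask.shape[1]
          return int(np.count_nonzero(rows_filled)) * MM_PER_PIXEL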
  • A Detection Method of Ectocervical Cell Nuclei for Pap test Images, Based on Adaptive Thresholds and Local Derivatives

    Oscanoa, Julio; Mena, Marcelo; Kemper, Guillermo (Science and Engineering Research Support Society, 2015-04)
    Cervical cancer is one of the main causes of death by disease worldwide. In Peru, it holds first place in frequency and accounts for 8% of deaths caused by disease. To detect the disease in its early stages, one of the most widely used screening tests is the cervical Papanicolaou (Pap) test. Currently, digital images are increasingly being used to improve Pap test efficiency. This work develops an algorithm based on adaptive thresholds, to be used in Pap smear assisted quality control software. The first stage of the method is a pre-processing step in which noise and background are removed. Next, a block is segmented around each point selected as non-background, and a local threshold per block is calculated to search for cell nuclei. If a nucleus is detected, artifact rejection follows, leaving only cell nuclei and inflammatory cells for the doctors to interpret. The method was validated with a set of 55 images containing 2317 cells. The algorithm successfully recognized 92.3% of the nuclei across all collected images.
    Open access
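    A minimal sketch of the per-block local-threshold idea described above, applied to a grayscale Pap smear image; the block size and the offset below the local mean are illustrative assumptions, not the tuned parameters of the published method.

      import numpy as np

      def local_nucleus_mask(gray, block=64, k=1.5):
          """Mark pixels darker than (local mean - k * local std) as nucleus candidates,
          computing the threshold independently inside each block."""
          h, w = gray.shape
          mask = np.zeros((h, w), dtype=bool)
          for y in range(0, h, block):
              for x in range(0, w, block):
                  tile = gray[y:y + block, x:x + block].astype(float)
                  threshold = tile.mean() - k * tile.std()
                  mask[y:y + block, x:x + block] = tile < threshold
          return mask

    Candidate regions from such a mask would still need the artifact-rejection step the abstract mentions before being shown to a specialist.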
  • A Novel Steganography Technique for SDTV-H.264/AVC Encoded Video

    Di Laura, Christian; Pajuelo, Diego; Kemper, Guillermo (Hindawi Publishing Corporation, 2016-04)
    Today, eavesdropping is becoming a common issue in the rapidly growing digital network, creating the need for secret communication channels embedded in digital media. In this paper, a novel steganography technique designed for Standard Definition Digital Television (SDTV) H.264/AVC encoded video sequences is presented. The algorithm introduced here exploits the compression properties of the Context Adaptive Variable Length Coding (CAVLC) entropy encoder to achieve a low-complexity, real-time insertion method. The chosen scheme hides the private message directly in the H.264/AVC bit stream by modifying the AC-frequency quantized residual luminance coefficients of intra-predicted I-frames. In order to avoid error propagation to adjacent blocks, an interlaced embedding strategy is applied. Likewise, the proposed steganography technique allows self-detection of the hidden message at the target destination. The source code was implemented by combining the MATLAB 2010b and Java development environments. Finally, experimental results were assessed through objective and subjective quality measures and reveal that the proposed technique produces few visible artifacts, reaching PSNR values above 40.0 dB and an average embedding bit rate of 425 bits/s per secret communication channel. This demonstrates that steganography is affordable in digital television.
    Open access
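    A minimal sketch of the coefficient-level embedding idea only, assuming a 4×4 block of quantized residual luminance coefficients that has already been decoded; the real method operates inside the CAVLC bit stream with an interlaced block pattern and self-detection, none of which is modeled here, and the parity rule and function names are illustrative assumptions.

      import numpy as np

      def embed_bits(block4x4, bits):
          """Hide message bits in the parity of nonzero AC coefficients.
          Returns the modified block and the number of bits embedded."""
          out = np.array(block4x4, dtype=int)          # work on a copy
          flat = out.reshape(-1)                       # raster order, not the zig-zag scan
          used = 0
          for i in range(1, flat.size):                # index 0 approximates the DC position
              if used == len(bits):
                  break
              c = int(flat[i])
              if c == 0:
                  continue                             # zero coefficients stay untouched
              if (abs(c) & 1) != bits[used]:
                  flat[i] = c + (1 if c > 0 else -1)   # nudge magnitude to flip parity
              used += 1
          return out, used

      def extract_bits(block4x4, n_bits):
          """Recover bits as the parity of the nonzero AC coefficients."""
          flat = np.asarray(block4x4, dtype=int).reshape(-1)
          return [abs(int(c)) & 1 for c in flat[1:] if c != 0][:n_bits]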
  • A biometric method based on the matching of dilated and skeletonized IR images of the veins map of the dorsum of the hand

    Universidad Peruana de Ciencias Aplicadas (UPC) (IEEE, 2015-06-02)
    This work proposes a biometric identification system that works together with a palm vein reader sensor and a hand-clenching support designed to capture the back of the hand. Several processing steps are performed: extraction of the region of interest, binarization, dilation, noise filtering, skeletonization, and extraction and verification of patterns based on the measurement of the coincidence of vertical and horizontal displacements between skeletonized and dilated images. The proposed method achieved the following results: a post-capture processing time of 1.8 seconds, an FRR of 0.47%, and an FAR of 0.00%, with a reference database of 50 people and a total of 1500 random captures.
    Open access
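    A minimal sketch of the matching idea described above: the skeleton of the probe vein map is compared against a dilated enrolled template over small horizontal and vertical displacements, and the best overlap ratio is taken as the match score; the shift range, dilation amount, and acceptance threshold are illustrative assumptions, not the system's calibrated values.

      import numpy as np
      from scipy.ndimage import binary_dilation
      from skimage.morphology import skeletonize

      def match_score(probe_bin, template_bin, max_shift=5):
          """Best fraction of probe-skeleton pixels covered by the dilated template."""
          probe_skel = skeletonize(probe_bin > 0)
          template_dil = binary_dilation(template_bin > 0, iterations=3)
          probe_count = max(int(probe_skel.sum()), 1)
          best = 0.0
          for dy in range(-max_shift, max_shift + 1):
              for dx in range(-max_shift, max_shift + 1):
                  shifted = np.roll(np.roll(template_dil, dy, axis=0), dx, axis=1)
                  overlap = np.logical_and(probe_skel, shifted).sum()
                  best = max(best, overlap / probe_count)
          return best   # accept the identity claim above a calibrated threshold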