Development of Automated 12-Lead QT Dispersion Algorithm for Sudden Cardiac Death
M Malarvili, S Hussain, A Ab. Rahman
Keywords
12-lead ecg, dsp, qt dispersion, sudden death
Citation
M Malarvili, S Hussain, A Ab. Rahman. Development of Automated 12-Lead QT Dispersion Algorithm for Sudden Cardiac Death. The Internet Journal of Medical Technology. 2004 Volume 2 Number 2.
Abstract
A single global QT interval measurement from the 12 lead ECG has long been the standard measure, but recently there has been great interest in the distribution of QT intervals, at a given instant of time, across the 12 Electrocardiogram (ECG) leads. This parameter is called QT dispersion (QTd). QTd is emerging as an important new clinical tool, as it has been proposed as a marker of the risk of sudden cardiac death after Myocardial Infarction (MI). However, the measured QTd values vary among different studies around the world in terms of the mean and the standard deviation. This research aims to develop an automatic algorithm to compute QTd for patients in Malaysia. DSP techniques were applied to 12 lead ECG signals to develop an algorithm that facilitates offline automatic QT interval analysis. The results are represented statistically in terms of the mean and standard deviation, SD (mean ± SD). A threshold determined from the Gaussian probability distribution function (pdf) is used to evaluate the significance of the difference in QTd between non-MI and MI patients, and the characteristic function is used to justify the discrimination between the two groups. A total of 432 ECG recordings (36 patients × 12 leads per patient) were analyzed. The mean QTd for the non-MI and MI groups is 37.28 ± 11.13 ms (p<0.05) and 66.17 ± 13.95 ms (p<0.05) respectively, and the discrimination between the groups has a sensitivity of 88.89% at a threshold of 50 ms. Thus, the QTd index is a clinically useful diagnostic adjunct for discriminating between non-MI patients and patients with MI. In conclusion, QTd values are significantly higher in patients with MI than in non-MI patients.
Introduction
The ECG is a valuable, non-invasive diagnostic tool which has been in use since the end of the nineteenth century [1]. It is a surface measurement of the electrical potential generated by electrical activity in cardiac tissue. The standard 12 lead ECG represents the heart's electrical activity recorded through small electrode patches attached to the skin of the chest, arms and legs. A typical clinical ECG uses 12 combinations of recording locations on the limbs and chest so as to obtain as much information as possible about different areas of the heart. ECGs are used to examine ambulatory patients who are at rest or performing an exercise program (stress test) during recording, as well as patients in intensive care. ECG recordings are examined by a cardiologist or physician who visually checks features of the signal, estimates its most important parameters and, using his expertise, judges the status of the patient. Manual checking is time consuming and can lead to errors. Recognizing and analyzing the ECG signal is difficult, since its size and form may change over time and the signal can contain a considerable amount of noise. This research therefore aims to develop computer-based rhythm analysis to improve the diagnostic value of the ECG.
QTd is defined as the difference between the maximum and minimum QT intervals over any of the 12 leads. It is a marker of myocardial electrical instability and has been proposed as a marker of the risk of death for those awaiting heart transplantation [4]. The QTd phenomenon lies in the fact that, by the laws of electrodynamics, the ventricular complex duration should be uniform across almost all leads except in special cases. However, electrocardiographic measurements on the 12 lead ECG show that a lead-to-lead QT-duration distribution exists, and it is used as a predictor of heart rhythm disturbances [2]. Figure 1 shows one cycle of the ECG waveform, which consists of the P wave, the QRS complex, the T wave and other important parameters. A strip of ECG waveform for a patient may contain several cycles, depending on the recording period. The QT interval begins at the onset of the QRS complex and terminates at the end of the T wave [3]. It represents the time of ventricular depolarization and repolarization, and it is inversely related to heart rate [3].
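The definition above reduces to a max-minus-min computation over the per-lead QT intervals. The paper's software was written in Visual C++; the following Python fragment is only an illustrative sketch of the definition, and the lead values shown are hypothetical.

```python
def qt_dispersion(qt_intervals_ms):
    """QTd: spread (max - min) of the per-lead QT intervals, in ms."""
    if not qt_intervals_ms:
        raise ValueError("no measurable leads")
    return max(qt_intervals_ms) - min(qt_intervals_ms)

# Hypothetical per-lead QT intervals (ms) for the 12 leads
qt = [380, 372, 395, 388, 401, 376, 390, 385, 398, 379, 383, 392]
print(qt_dispersion(qt))  # 401 - 372 = 29
```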
In recent years, researchers worldwide have shown great interest in developing QT dispersion algorithms. Presented here is research taking place at Universiti Teknologi Malaysia. The efforts can be characterized as developing an automated algorithm to compute the index, and assessing the prognostic significance of QT dispersion in the 12 leads. This paper presents the development of the QTd algorithm and the Graphical User Interface (GUI) for the algorithm.
Review Of QT Dispersion (QTd)
Increased dispersion of repolarization is known to be an important factor in the development of ventricular arrhythmias [3]. According to previous studies, increased QTd occurs in patients after MI, in patients with long QT syndrome and in patients with hypertrophic cardiomyopathy. There is circadian variation in the QT interval, and 24 hour assessment has shown that this variation is blunted in survivors of sudden cardiac death [3]. The 24 hour mean QTd is also significantly higher in these patients. QTd has been proposed by [4,5] as a marker of myocardial electrical instability and of the risk of death for those awaiting heart transplantation.
The measured QTd values vary among different studies in terms of the mean and the standard deviation. This is due to several factors. Firstly, previous studies measured QT intervals and QTd from ECG paper recordings produced at high speeds. A 12 lead surface ECG at a paper speed of 50 mm/sec was recorded for healthy subjects, and the resulting QTd was 38 ± 14 (mean ± SD) ms [6]. Meanwhile, [7] reported 44 ms (39-49 ms) for a coronary artery disease group and 40 ms (25-55 ms) for a normal group. Both studies were done manually, and manual measurement does not promise consistency, reproducibility or efficiency. Secondly, the value varies because of differences in the number of leads used for analysis, owing to limitations of the ECG machine or of the algorithm itself. Some research results support the idea that the three limb leads are influenced less by the heart dipole than the chest leads, V1 through V6. Research by [8] claimed that, in a population of patients with congenital long QT syndrome, assessment of QTd in the 6 chest leads correlated well with the results obtained from analysis of all 12 ECG leads. Later, in another study, [9] analyzed three leads (V1, V5 and V3) and found that QTd in congenital long QT syndrome was significantly higher than in normal controls (120 ± 72 versus 53 ± 42 ms). In a study by E. Lepeschkin et al., 1952, a difference of 40 ms in QT duration among only the 3 limb leads (I, II, III) was observed [10]. In contrast, [10], who computed QTd among the limb leads (I, II, III), concluded that the value is 7.48 ms.
From these findings, it is important that QT interval dispersion be computed over the 12 lead ECG for a better and more accurate result. Research done by [11] on the 12 lead ECG found QTd of 43.0 ± 14.6 (mean ± SD) ms for healthy patients, 49.7 ± 15.9 ms for patients with arrhythmia, 52.7 ± 18.6 ms for patients with MI and 50.6 ± 15.9 ms for patients with both arrhythmia and MI. Another 12 lead ECG study, by [12], reported QTd of 25.0 ms for healthy patients and 47 ms for patients with MI; the result is significantly different for the two groups. These results confirm that patients with arrhythmia or infarction present a higher QTd index than healthy patients. Studies done both manually and automatically by [13], using 5 different T end point models, found that QTd in 12 lead ECG analysis varies from 25 ms to 35 ms for healthy patients, from 50 ms to 60 ms for the infarct group and from 40 ms to 50 ms for the arrhythmia group. There is general agreement between techniques that dispersion was greatest in the infarct group and least in the normal group. [13] also concluded that automatic techniques are more powerful at discriminating between cardiac and normal patients.
Whilst researchers have refined and enhanced the way in which automatic detection and classification of cardiac arrhythmia is performed, diagnostic errors can still occur. This could account for the different QTd values obtained by the various researchers around the world. One source of error is often attributed to the variability of signal characteristics between patients. There is therefore a need for evaluation procedures applied to results obtained during the development of the QTd algorithm. Presented here is research taking place at Universiti Teknologi Malaysia. The algorithm is tested on patients of Hospital Universiti Kebangsaan Malaysia (HUKM).
Methodology
DSP techniques were applied to ECG signals to develop an algorithm that facilitates automatic QT interval analysis. The technique covers data acquisition, QRS wave detection, feature extraction (which includes preprocessing and waveform recognition) and finally duration measurement of the QT interval and QTd through time-domain analysis. Figure 2 shows the block diagram of the system design. The software algorithm is developed using Microsoft Visual C++. Figure 3 is the flow chart of the algorithm that computes the QT interval for every cycle in a lead [16]. The procedure is repeated for all 12 leads.
Data Acquisition
The standard 12 lead ECG was recorded simultaneously for each patient. ECG data were gathered from 36 patients of Hospital Universiti Kebangsaan Malaysia (HUKM): 18 normal patients (9 female, 9 male) and 18 patients with MI (10 female, 8 male). Each recording lasted 10 seconds and used an ECG amplifier (bandwidth 0.05-100 Hz). A PC with a PC-ECG interface card equipped with a 12 bit analogue to digital conversion card was used for data acquisition. The ECG was digitized at a sampling frequency of 500 Hz. A total of 432 ECG recordings (36 patients × 12 leads per patient) were analyzed.
QRS Detection Algorithm
A real time QRS detection algorithm was developed in [19]. It recognizes QRS complexes based on analysis of slope, amplitude and width, and has a sensitivity of 99.69% and a positive predictivity of 99.77% when tested on the MIT/BIH database. Thus, this work uses the QRS detection developed by [19]. Figure 4 shows the processes involved in the QRS detection algorithm. The digitized ECG signal, ECG(k), is passed through a band pass filter composed of cascaded high pass and low pass integer filters, followed by differentiation, squaring and a moving average filter.
The band pass filter reduces noise in the ECG signal by matching the spectrum of the average QRS complex. It attenuates muscle noise, 50 Hz interference, baseline wander and T wave interference, and isolates the predominant QRS energy centered at 10 Hz; the energy of the QRS lies between 5 Hz and 15 Hz [19]. The filter implemented in this algorithm is a recursive integer (IIR) filter in which the poles are located to cancel the zeros on the unit circle of the z plane. The second order low pass filter of [19] has a cutoff frequency of about 11 Hz and eliminates noise such as EMG and 50 Hz power line noise. Its transfer function and difference equation are:

H(z) = (1 - z^-6)^2 / (1 - z^-1)^2

y(nT) = 2y(nT - T) - y(nT - 2T) + x(nT) - 2x(nT - 6T) + x(nT - 12T)
The high pass filter of [19] is implemented as an all-pass delay minus a low pass filter. It has a cutoff frequency of about 5 Hz and attenuates motion artifacts as well as the P and T waves. Its transfer function is:

H(z) = (-1/32 + z^-16 - z^-17 + (1/32)z^-32) / (1 - z^-1)

with difference equations y1(nT) = y1(nT - T) + x(nT) - x(nT - 32T) and y(nT) = x(nT - 16T) - y1(nT)/32.
After the signal has been filtered, it is differentiated with a five point derivative to obtain slope information for the QRS complex and to overcome the baseline drift problem. Differentiation also accentuates the QRS complex relative to the P and T waves. The transfer function and difference equation [19] are:

H(z) = (1/8T)(-z^-2 - 2z^-1 + 2z + z^2)

y(nT) = (1/8T)[-x(nT - 2T) - 2x(nT - T) + 2x(nT + T) + x(nT + 2T)]
The preceding stages and the moving average filter explained next are the linear processing parts of the QRS detector; the squaring function that the signal now passes through is a nonlinear operation. Besides making all data positive, squaring emphasizes the higher frequency components and attenuates the lower frequency components nonlinearly. The operation is:

y(nT) = [x(nT)]^2
The last transformation of the signal before QRS wave detection is a moving average filter, which smooths the signal and performs moving window integration over 150 ms (Equation 11). Generally, the width of the window should be approximately the same as the widest possible QRS complex. A QRS complex is detected when the time distance and the slope amplitude lie within a threshold for each lead; the threshold is updated per lead according to that lead's maximum peak. The total number of QRS waves detected is stored for use in the subsequent feature extraction.
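The paper's implementation of this cascade is in Visual C++ and is not reproduced in the text. As a hedged illustration, the Python sketch below restates the band pass, derivative, squaring and 150 ms moving window integration stages using the standard Pan-Tompkins difference equations, to which the cutoffs quoted above (11 Hz low pass, 5 Hz high pass) correspond at the original design sampling rate; the sampling frequency follows this paper's 500 Hz.

```python
def lowpass(x):
    # y(n) = 2y(n-1) - y(n-2) + x(n) - 2x(n-6) + x(n-12)
    y = [0.0] * len(x)
    for n in range(len(x)):
        y[n] = (2 * (y[n - 1] if n >= 1 else 0) - (y[n - 2] if n >= 2 else 0)
                + x[n] - 2 * (x[n - 6] if n >= 6 else 0)
                + (x[n - 12] if n >= 12 else 0))
    return y

def highpass(x):
    # All-pass delay minus low pass: p(n) = x(n-16) - y(n)/32,
    # with y(n) = y(n-1) + x(n) - x(n-32)
    y = [0.0] * len(x)
    p = [0.0] * len(x)
    for n in range(len(x)):
        y[n] = ((y[n - 1] if n >= 1 else 0) + x[n]
                - (x[n - 32] if n >= 32 else 0))
        p[n] = (x[n - 16] if n >= 16 else 0) - y[n] / 32.0
    return p

def derivative(x):
    # Five point derivative: y(n) = (1/8)(2x(n) + x(n-1) - x(n-3) - 2x(n-4))
    return [(2 * x[n] + (x[n - 1] if n >= 1 else 0)
             - (x[n - 3] if n >= 3 else 0)
             - 2 * (x[n - 4] if n >= 4 else 0)) / 8.0
            for n in range(len(x))]

def squaring(x):
    # Nonlinear stage: makes all samples positive, emphasizes high slopes
    return [v * v for v in x]

def moving_window_integrator(x, fs, width_ms=150):
    # 150 ms moving window, roughly the widest possible QRS complex
    w = int(fs * width_ms / 1000)
    y = [0.0] * len(x)
    s = 0.0
    for n in range(len(x)):
        s += x[n]
        if n >= w:
            s -= x[n - w]
        y[n] = s / w
    return y

def preprocess(ecg, fs=500):
    """Band-pass, differentiate, square and integrate an ECG trace."""
    return moving_window_integrator(
        squaring(derivative(highpass(lowpass(ecg)))), fs)
```

Peak/threshold logic on the integrator output, which the paper updates per lead, is omitted here for brevity.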
Feature Extraction
Feature extraction involves preprocessing and waveform recognition. The preprocessing transforms the signal through a differentiation equation and a low pass filter; this transformation eases the recognition of the Q wave, R wave and T wave via the well known zero crossing method. Each lead is scanned for the Q point, R point and T offset point. This study uses the criteria from [17] to develop the QTd algorithm for 12 lead analysis with a graphical user interface (GUI). This method was chosen as the one closest to the results obtained visually by specialists [20] when tested on a single lead (lead II). A detailed explanation follows.
Preprocessing
The first step of data processing is filtering the ECG to remove unwanted noise. The ECG was filtered with a bandpass filter between 0.05 Hz and 100 Hz to eliminate electromyogram (EMG) noise and motion artifact [18], and a hardware 50 Hz notch filter was used to eliminate power line noise [18]. The signals were further processed with digital filters: a moving-average filter was applied to smooth the ECG signal, and the smoothed signals were then differentiated to obtain information on the slope of the signal using the differentiator transfer function of [17] (Equation 12).
The differentiated signal is then filtered with a first order low pass filter (Equation 13) to suppress residual noise and the noise intrinsic to differentiation [17].
Waveform Detection
QRS onset detection is performed on the differentiated signal d(k) and the filtered ECG signal f(k). A QRS complex contains at least one large peak, so the differentiated QRS complex shows at least two large peaks, one on either side of the baseline, within a limited range of time (160 ms). Two thresholds, equal in magnitude but opposite in polarity, are therefore used to detect potential QRS onsets throughout the whole recording; the threshold is updated for each lead according to that lead's maximum peak. A forward search and a downward search are applied from the maximum peak (up point) and the minimum point (down point) to obtain zero crossing points, yielding 4 locations per cardiac cycle. The 4 locations are compared: the smallest and largest values are ignored, while the remaining two coincide and define the R wave. The R wave is thus the zero crossing between the up point and the down point, obtained from the f(k) signal; it maps onto the R wave peak in the ECG signal, ECG(k).
Location of the Q wave uses the differentiated signal d(k) rather than f(k), because the Q wave contains high frequencies that are not present in the low pass filtered signal. The Q wave is defined as the zero crossing preceding the R wave in d(k). In cases where no Q point is found within 80 ms of the R point, the first peak anterior to the intersection is taken as the Q point. The numbers of Q points and R points obtained are compared with the number of QRS peaks found by the QRS detection algorithm.
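A minimal sketch of these zero-crossing rules is given below, applied to a differentiated signal d within one beat window. The function names and the window bounds are assumptions for illustration; the paper's peak-based fallback when no Q crossing is found within 80 ms is noted but not implemented.

```python
def find_r_point(d, start, end):
    """R point: zero crossing between the maximum (up point) and
    minimum (down point) of the differentiated signal in [start, end)."""
    seg = range(start, end)
    up = max(seg, key=lambda n: d[n])
    down = min(seg, key=lambda n: d[n])
    lo, hi = sorted((up, down))
    for n in range(lo, hi):
        if d[n] == 0:
            return n
        if d[n] * d[n + 1] < 0:
            return n + 1
    return None

def find_q_point(d, r, fs=500, max_ms=80):
    """Q point: zero crossing preceding the R point, searched at most
    80 ms backwards (per the text)."""
    limit = max(0, r - int(fs * max_ms / 1000))
    for n in range(r - 1, limit, -1):
        if d[n] == 0 or d[n] * d[n - 1] < 0:
            return n
    return None  # the paper then falls back to a peak-based criterion
```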
The wide range of T wave shapes across the 12 leads and the low amplitude of T waves contribute to the difficulty of detecting the T wave peak and T wave end. T wave morphology varies with the lead, and can be categorized into several patterns: the normal T wave (upward-downward), the inverted T wave (downward-upward), the only-upward T wave and the only-downward T wave [17]. These shapes are shown in Figure 8. T waves with amplitude less than 100 µV are excluded from the analysis, as low amplitude T waves are known to increase measurement error [15].
The T wave peak is searched for from the R point within a window based on the R-R interval. The beginning of the window is defined as bwind and the end of the window as ewind; the window length follows [17].
The window is updated according to changes in the R-R interval: the window length is decreased when the RR interval decreases, to avoid the next P wave being detected as a false T wave peak. The T wave end point (offset) is defined as the intersection with the isoelectric baseline of the T slope that best fits the T wave between 10% and 30% of its amplitude [15]; refer to Figure 9. Each lead is scanned for the Q point, R point and T offset point.
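The T end rule above can be sketched as a least-squares line fitted to the T downslope between 30% and 10% of the T peak amplitude, extrapolated to the baseline. This is an illustrative reading of the rule, not the paper's code; the baseline is assumed to be zero and the T peak index is assumed to come from the window search.

```python
def t_end(ecg, t_peak, baseline=0.0):
    """T offset: intersection with the baseline of the line fitted to
    the T downslope between 10% and 30% of T amplitude (assumed rule)."""
    amp = ecg[t_peak] - baseline
    hi_lvl = baseline + 0.30 * amp
    lo_lvl = baseline + 0.10 * amp
    # collect downslope samples lying between the 30% and 10% levels
    pts = []
    n = t_peak
    while n + 1 < len(ecg) and ecg[n] > lo_lvl:
        n += 1
        if lo_lvl <= ecg[n] <= hi_lvl:
            pts.append((n, ecg[n]))
    if len(pts) < 2:
        return None
    # least-squares line y = a*x + b through the slope samples
    xs = [p[0] for p in pts]
    ys = [p[1] for p in pts]
    k = len(pts)
    mx = sum(xs) / k
    my = sum(ys) / k
    a = sum((x - mx) * (y - my) for x, y in pts) / sum((x - mx) ** 2 for x in xs)
    b = my - a * mx
    return (baseline - b) / a  # sample index where the line meets baseline
```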
Duration Measurement
The heart rate, QT interval and QTd are then calculated. The RR interval in each lead is computed to determine the heart rate, which is defined (Equation 3) as:

Heart rate (bpm) = 60000 / RR interval (ms)
Heart rate is expressed in beats per minute (bpm); since 1 minute = 60 seconds and 1 second = 1000 milliseconds, 1 minute = 60 × 1000 = 60000 milliseconds. The QT interval is computed as the difference between the onset of the Q wave and the offset of the T wave, whilst QTd is the difference between the maximum and minimum QT intervals on any of 8 leads, in milliseconds.
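These duration measurements are straightforward unit conversions; a small Python sketch (the sampling rate of 500 Hz follows the data acquisition section, and the function names are illustrative):

```python
def heart_rate_bpm(rr_ms):
    """Heart rate in beats per minute from a mean RR interval in ms
    (1 minute = 60000 ms)."""
    return 60000.0 / rr_ms

def qt_interval_ms(q_onset, t_offset, fs=500):
    """QT interval in ms between the Q onset and T offset sample
    indices, at sampling rate fs."""
    return (t_offset - q_onset) * 1000.0 / fs

print(heart_rate_bpm(800))        # 75.0
print(qt_interval_ms(100, 300))   # 400.0
```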
Results And Statistical Analysis
The automated measurement algorithm was tested by performing 12 lead analysis on the ECG signals gathered from the non-MI and MI groups. The results are represented statistically in terms of the mean and standard deviation, SD (mean ± SD). A threshold determined from the Gaussian probability distribution function (pdf) is used to evaluate the significance of the difference in QTd between non-MI and MI patients, and the characteristic function is used to justify the discrimination between the two groups.
Figure 10 shows the mean QTd for non-MI and MI patients from the 12 lead analysis. The mean QTd for the non-MI group is 37.28 ± 11.13 ms (p<0.05), and for the MI group 66.17 ± 13.95 ms (p<0.05); the standard deviations are 11.13 ms and 13.95 ms respectively. There is a clear difference between the two groups.
The verification system was evaluated using a threshold determined from the statistical properties of the Gaussian distribution of QTd, shown in Figure 11. The mean QTd is taken as the verification index: a test patient is classified as non-MI or MI according to whether the QTd is below or above the predetermined threshold. From the distribution, the threshold is found to be 50 ms. Thus, any test patient with a QTd value lower than 50 ms is categorized as non-MI, while a test patient with QTd greater than 50 ms is categorized as MI.
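The decision rule is a single threshold comparison, sketched below. Treatment of the boundary value of exactly 50 ms is an assumption, since the text only specifies strictly-lower and strictly-greater cases.

```python
THRESHOLD_MS = 50.0  # threshold from the Gaussian fit reported above

def classify(qtd_ms):
    """Label a patient MI if QTd exceeds the 50 ms threshold, else
    non-MI (assignment of exactly 50 ms to non-MI is an assumption)."""
    return "MI" if qtd_ms > THRESHOLD_MS else "non-MI"

print(classify(37.28))  # non-MI (group mean of the non-MI patients)
print(classify(66.17))  # MI (group mean of the MI patients)
```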
Two kinds of error are evident in the error area of Figure 11: a false positive (FP), when a non-MI patient is wrongly classified as MI, and a false negative (FN), when a patient with MI is wrongly classified as non-MI. Based on this, the QTd discrimination between normal and MI patients has a sensitivity of 88.89%. A significant difference exists in QTd between the two groups, as can be seen clearly from the characteristic functions [21] of the 12 lead analysis shown in Figure 12: the curves for non-MI and MI are well separated, indicating that the QTd distribution can discriminate non-MI patients from patients with MI.
The Graphical User Interface (GUI) for the research has been completed (refer to the Appendix). The onset of the Q wave and the offset of the T wave are marked, the mean QT interval in milliseconds and the heart rate for each lead are displayed in the bottom panel, and the QT dispersion index is displayed as well.
Further study with a larger number of samples is suggested so that the algorithm is better validated. Testing of the algorithm on the non-MI and MI groups is ongoing, as is the effort to establish the QTd index as a marker separating non-MI from abnormal patients.
Conclusions
The QTd values are significantly higher in patients with MI than in non-MI patients. The reproducibility results show that this algorithm can improve the consistency of QTd, which is currently calculated manually. The lack of a standard measurement technique has led to poor sensitivity and specificity when isolated QTd measurements are used to predict susceptibility to life threatening sudden cardiac death due to MI. Thus, the development of computerized interpretation of the ECG is suggested, to assist physicians in detecting patients prone to MI at an early stage.
Acknowledgement
The authors would like to express their gratitude to Prof. Dr. Dato Khalid Yusoff (M.D.) for his assistance and guidance in acquiring the ECG and heart sounds. This project is supported by IRPA Grant 74153: Development of a Prototype Intelligent Diagnostic System of Heart Diseases Based on the ECG Waveform.