Feature Engineering of the Gamma Region for Machine Learning Identification of Monoclonal Proteins

Background

In a previous post I was toying with better ways to integrate monoclonal proteins by deconvolution of monoclonal peaks. Turns out this is a hard problem. In any case, Deep Learning Neural Networks have become so mature that I wanted to see how easy it is to identify the presence of monoclonal proteins in the gamma region from the densitometric scan. This is not a new idea as it was first suggested in 1992 in J. Clin Pathol by Kratzer et al. I have to say, that was a very cool idea but as far as I can tell, it has not gotten much traction. This paper is mostly cited by review articles but there are a handful of further attempts. The challenges of developing a commercial product are probably regulatory in nature. However, I am hopeful that it would not be too hard to make a decision support tool.

When Kratzer et al. tackled this problem, they were working on an Escom “IBM Compatible” computer with a 286 processor and 1 MB of RAM. Escom is a defunct German computer company that purchased Commodore after it began to flounder. This is why it would have been important for them to engineer their features carefully. I expect that TensorFlow may not need much feature engineering at all, but I want to try what they suggested as the features for the gamma region.

They took the scan of the gamma region, baselined it, made it periodic and then applied the fast Fourier transform (FFT) to represent the scan in frequency space. This is a very good idea.

Some Basic Stuff and Sanity Checking

Let’s make a function that is a sine function for \(-\ell \le x \le \ell\) and is otherwise 0.
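Something like the following, as a minimal sketch (the exact functional form used in the original code is not shown, so the choice of \(\sin(2\pi x/\ell)\) inside the window is an assumption):

```r
# Windowed sine: a sine inside -l <= x <= l, zero elsewhere (functional form assumed)
windowed.sine <- function(x, l = 1) {
  ifelse(abs(x) <= l, sin(2 * pi * x / l), 0)
}
```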

And now let’s examine this function for \(-10 < x < 10\) and \(\ell = 1\).

plot of chunk unnamed-chunk-205
The FFT of this function can be determined with the R fft() function. The frequency spacing \(\Delta f\) is calculated with the formula

\[\Delta f = \frac{1}{N \Delta t} \]
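A minimal sketch of the calculation, using the windowed.sine() function assumed above and an assumed sampling interval of \(\Delta t = 0.01\):

```r
# FFT of the windowed sine and its frequency axis
dt <- 0.01
x  <- seq(-10, 10, by = dt)
y  <- windowed.sine(x, l = 1)
N  <- length(y)
Y  <- fft(y)
df <- 1 / (N * dt)              # frequency spacing, as in the formula above
f  <- (0:(N - 1)) * df          # frequency axis (aliased above the Nyquist frequency)
plot(f, Mod(Y), type = "l", xlim = c(0, 3), xlab = "frequency", ylab = "|FFT|")
```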

plot of chunk unnamed-chunk-206
Zooming in a bit we have:

plot of chunk unnamed-chunk-207

This looks as it should – trying to be a delta function, namely \(\delta(x - 1)\). As we increase the number of periods we feed to the FFT, the sharper this peak becomes:

So as the number of periods increases, things improve, and by 10 periods we have a very sharp \(\delta\) function.

plot of chunk unnamed-chunk-209

Applying FFT to a Gaussian

We can do the same thing to a Gaussian to mimic the gamma region of electrophoresis. To do this we just replicate the Gaussian, inverting it in order to make the resulting function odd. Let’s just examine 5 periods for simplicity, which is 10 replications of the gamma region.
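A minimal sketch of this periodic extension, with a dnorm() curve standing in for the gamma region (the mean, standard deviation and number of points are my assumptions):

```r
# Gaussian stand-in for the gamma region, replicated with alternating sign to make it odd
x <- seq(0, 1, length.out = 200)
g <- dnorm(x, mean = 0.5, sd = 0.15)
one.period <- c(g, -g)            # one period = 2 replications of the gamma region
signal <- rep(one.period, 5)      # 5 periods = 10 replications
spec <- Mod(fft(signal))          # amplitude spectrum
```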

plot of chunk unnamed-chunk-210

So, with a pure Gaussian, we more or less have 4 peaks in the FFT. If we now mimic a monoclonal protein by adding a second Gaussian, we can see that the FFT becomes more complex because more frequencies (i.e. \(n\)) of \(e^{\frac{-2 \pi i}{N}kn}\) are required to characterize the concatenated periodic function.

plot of chunk unnamed-chunk-211

The gamma region with a monoclonal results in an obviously more complex FFT.

Apply to a Real Gamma Region

Here is an example densitometric scan. We can pull out the gamma region.

plot of chunk unnamed-chunk-212

Then we can baseline the gamma region, replicate it 10 times and apply the FFT.

plot of chunk unnamed-chunk-213

plot of chunk unnamed-chunk-213

And that is more or less the gist of things. The simpler the gamma region is, the fewer peaks there will be in the frequency domain. This permits recapitulation of what Kratzer et al. did in their 1992 paper with modern techniques. As I say, I am not sure it’s necessary, but it might improve model performance. Of course, it would also seem possible to FFT the whole scan and reduce the dimensionality of the problem to about 30 frequencies and amplitudes.

Parting Thought

As the FFT discerns the frequencies of the function, so also:

“Do not consider his appearance or his height … The Lord does not look at the things people look at. People look at the outward appearance, but the Lord looks at the heart.”

1 Sam 16:7

Reproducible Research: Write your Clinical Chemistry paper using R Markdown

Abstract
Background: This blog post is going to show you how to write a reproducible article in the field of clinical chemistry using R Markdown. The only thing that will change from journal to journal will be the reference formatting and perhaps section numbering. The source code itself will be provided so that you can use it as a template.

Methods: The paper will use R, R-Markdown, bookdown and pandoc. The references will be taken care of using BibTeX and reference formatting will be managed with Zotero csl files.

Results: The result will be a manuscript that anyone can reproduce.

Conclusions: R Markdown makes reproducible research through literate programming pretty easy.

1 Background

Last week at the MSACL conference Dr. Keith Baggerly from MD Anderson Cancer Center’s Bioinformatics and Computational Biology Group spoke about the importance of reproducible research using the Duke University ovarian cancer biomarker scandal as a backdrop. The talk…was…incredible and illustrated how easy it is to introduce catastrophic errors into your research papers through the use of GUI analytical tools. The now-retracted article that Baggerly dismantled is here. I urge everyone in our field to watch similar talks from Keith discussing biomarker analysis in mass spectrometric proteomic data and microarray data. Shannon Haymond and I were then discussing how to make a submission in the field of clinical chemistry that is reproducible. While this article will not discuss the basics of R and R Markdown, it will serve as a guide for those who know a little about these, give you a working YAML header, and get the citations and cross-references correct.

2 Overhead

2.1 YAML

RMarkdown articles require a YAML header to instruct R Markdown how to process your article. This is the YAML code that worked for me. I am sure there are other ways to do this:
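Here is a minimal example of the kind of header that works with bookdown; the original header is not reproduced here, so treat the specific fields and output formats below as assumptions:

```yaml
---
title: "Your Reproducible Clinical Chemistry Paper"
author: "A. N. Author"
output:
  bookdown::pdf_document2:
    number_sections: true
    fig_caption: true
  # bookdown::word_document2: default
  # bookdown::html_document2: default
bibliography: bibliography.bib
csl: clinical-chemistry.csl
---
```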

Indenting and spacing really matter a lot in YAML so don’t mess with them. You can generate PDF, MS Word or HTML output as needed by uncommenting and commenting out the output type as appropriate in the YAML above.

2.2 CSL Files

CSL files take care of citation formatting for you. Depending on what journal you are making submission to, you will need a different .csl file. They exist for every conceivable journal. In the world of non-reproducible reports, this process is taken care of by reference managers in GUI word processors but since GUI word processors do not produce reproducible research, we must break up with them.

From the YAML, you will see that you need a file called “clinical-chemistry.csl” which I downloaded from here. Put this file in the same folder as your R Markdown file. The Clinical Chemistry .csl depends on the .csl file of the American Association for Cancer Research but it will be downloaded for you on the fly. If you need a different reference format, search the CSL GitHub repository for the appropriate file.

2.3 BibTeX

Reference management in R Markdown is taken care of by BibTeX. You can see from the YAML that we need a bibliography text file called “bibliography.bib”. You can name it whatever you like but you will need to change the YAML accordingly. In any case, any citation you intend to make will have to be in the .bib file. I am going to cite Shannon Haymond because this article was her idea and I will toss in a couple of other references so you can see that they get cited in order as we would like.

Below is my bibliography.bib file. I put it in the same folder as my R Markdown file. You make a .bib file using a text file editor (or RStudio) by cutting and pasting the BibTeX citations from Google Scholar.
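Since the file itself is not reproduced here, this is the sort of entry it would contain (reconstructed from reference 1 in the reference list, using the shannon2017 citation key referred to below):

```bibtex
@article{shannon2017,
  title   = {Contribution of symmetric dimethylarginine to {GFR} decline in pediatric chronic kidney disease},
  author  = {Brooks, E. R. and Haymond, S. and Rademaker, A. and Pierce, C. and Helenowski, I. and Passman, R. and others},
  journal = {Pediatric Nephrology},
  year    = {2017},
  pages   = {1--8}
}
```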

2.4 Journal Abbreviations

Now, the fussiest thing I had to do was get the references to abbreviate properly. We need an abbreviation database. Fortunately, I could download the abbreviation database from the Web of Science as a .csv file and then convert it to a JSON file. This was Stephen’s idea. I had to deal with a couple of badly behaving characters from some journal titles. This script, if embedded in your document, will download the .csv for you and then make the abbreviation database for you. That way your citations will say “J Clin Pathol” and not “Journal of Clinical Pathology”, etc.

2.5 Citation Management

Now we can proceed to cite articles from our .bib file freely inserting syntax like this: [@shannon2017]. Shannon wrote this interesting article on symmetric dimethylarginine (1), Stephen wrote about wellness initiatives with Dr. Diamandis (2) and Dan wrote a paper about PTH when he was a resident (3). That makes for a total of 3 citations in this manuscript. (1–3)

3 Reporting Your Results

3.1 Figures

You can embed figures in R Markdown as local images or hyperlinks as follows:

![Grumpy Cat does not care about reproducibility](grumpy.jpg)


If you need to do cross-referencing of figures in your document which will change automatically for you if you insert another figure, you can do this by inserting your figure with an R code-chunk and giving the code-chunk a name.



Figure 3.1: This is the ancient aliens guy.

Then you can reference your code chunk using syntax like this: See figure \@ref(fig:ancient-aliens)

and you will automatically create appropriately cross-referenced figures that get automatically numbered like this: See figure 3.1. Of course, the hallmark of reproducibility is to embed R code right into the document. See for example figure 3.2.

 


Figure 3.2: This is a reproducible figure

3.2 Inline Calculations

When you are reporting your amazing results you can have inline code calculations by syntax that looks like this: `r round(median(x),1)` and would result in the median value of \(x\) being reported as 43.9 mmol/L.

3.3 Tables

Tables are not a problem either and can be made with the kable() function of the knitr package or with the xtable package. Tables can be cross-referenced analogously to figures. See table 3.1.

Table 3.1: This is a great table
First Second Third
1 2 2
2 3 6
3 4 12
4 5 20
5 6 30

3.4 Math

Math works pretty magically using \(\LaTeX\) syntax. For example inline math can be done like so $\sin^2x + \cos^2x = 1$. This will result in: \(\sin^2x + \cos^2x = 1\). And math can also be done as a code block like so:

Gauss’ Law says: \[
\oint_S E_n \, dA = \frac{Q_{\text{inside}}}{\varepsilon_0}
\]

Equations can be cross-referenced just like tables and figures.

4 Conclusion

I hope this makes writing a reproducible paper easier for you. A minimal template to produce output in PDF is here. And the PDF output itself is here. You’ll need \(\LaTeX\) installed of course.

 

Parting Thought

You can cite books too and get Greek letters too:

“Very truly I tell you,” Jesus answered, “before Abraham was born, \(\varepsilon\gamma\omega\) \(\varepsilon\iota\mu\iota\) (I Am)!” (4)

References

1. Brooks ER, Haymond S, Rademaker A, Pierce C, Helenowski I, Passman R, et al. Contribution of symmetric dimethylarginine to gfr decline in pediatric chronic kidney disease. Pediatr Nephrol. Springer; 2017;1–8.

2. Li M, Diamandis EP, Paneth N, Yeo K-TJ, Vogt H, Master SR. Wellness initiatives: Benefits and limitations. Clin Chem. Clinical Chemistry; 2017;63:1063–8.

3. Holmes DT, Levin A, Forer B, Rosenberg F. Preanalytical influences on DPC IMMULITE 2000 intact PTH assays of plasma and serum from dialysis patients. Clin Chem. Clinical Chemistry; 2005;51:915–7.

4. John the A, Chist J, Spirit H, God F. The Gospel According to John, 8:58. 1 Heavenly Way: Hosannah Press; 80AD.


 

Mining Your Routine Data for Reference Intervals: Hoffmann, Bhattacharya and Maximum Likelihood

Background

Let me preface this by saying I am not making a recommendation to use the Hoffmann method. Neither am I advocating for reference interval mining from routine data. There are many challenges associated with this kind of effort. That's for another post I think. However, I am going to show how one does the calculations for two methods I have seen used: the Hoffmann Method and the Bhattacharya Method. Then I will show how to do this using the mixtools package in R, which uses the expectation–maximization (EM) algorithm to obtain maximum likelihood estimates.

The Concept

When you look at histograms of routine clinical data from allcomers, on some occasions the data will form a bimodal looking distribution formed by the putatively sick and well. If you could statistically determine the distribution of the well subjects, then you could, in principle, determine the reference interval without performing a reference interval study. We can all dream, right?

All three of the approaches I show assume that the two distributions are Gaussian. This is almost never true. But for the purposes of the calculations, I will provide each approach data that meets the assumptions it makes. So, let's make a fake bimodal distribution and see how each method does. We will assume equal numbers of sick and well so that the bimodal distribution is obvious. One will have \(\mu_1 = 2\) and \(\sigma_1 = 0.5\) and the other will have \(\mu_2 = 6\) and \(\sigma_2 = 2\). The expected normal range for this population is based on \(\mu_1\) and \(\sigma_1\) and is \(2 - 0.5 \times 1.96\) and \(2 + 0.5 \times 1.96\), or about 1–3.
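A minimal sketch of the mock data (the sample size and seed are my choices):

```r
# Equal numbers of "well" (mu = 2, sigma = 0.5) and "sick" (mu = 6, sigma = 2)
set.seed(42)
n <- 5000
dat <- c(rnorm(n, mean = 2, sd = 0.5),
         rnorm(n, mean = 6, sd = 2))
```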

plot of chunk unnamed-chunk-1

To illustrate how the two populations add you can plot one in green and one in pink. The overlap shows in a yucky brown.

plot of chunk unnamed-chunk-2

Hoffmann

In 1963 Robert Hoffmann proposed a simple graphical approach to this problem, and use of his method is alive and well—see here for example. The method assumes that both modes are Gaussian and that, if one eye-balls (yes…the paper says “eye-fit”) the first linear-looking portion of the cumulative distribution function (CDF) as plotted on normal probability paper and finds its intersection with the lines y = 0.025 and y = 0.975, one can impute the normal range.

What do I plot for Hoffmann: a QQ-plot or the CDF?

It is very important to understand that the use of normal probability paper, as Hoffmann described, is mandatory because it produces a normal probability plot. As he says,

“This special graph paper serves the useful purpose of 'straightening out' a cumulative gaussian distribution. It forms a straight line.”

A CDF plotted on a linear scale is sigmoidal. This is not what we want. We want a normal probability plot, which is just a special case of the QQ-plot where the comparator distribution is the normal distribution. Inadvertently plotting a plain old CDF will not produce correct estimates of the lower and upper limits of normal (i.e. \(\mu \pm 1.96\sigma\)). The reason I emphasize this is that I have seen this error made in a number of reference interval papers—but not the one I cited above, which is correct. The importance of the distinction becomes not-very-subtle when you apply the Hoffmann approach to a pure Gaussian distribution. In short, use of the CDF in linear space generates erroneous results, as we will show later on.

The Correct Approach

Here is the standard r-base normal QQ-plot of our mock data set:

plot of chunk unnamed-chunk-3

To prevent reader confusion, I am going to present the plots the way Hoffmann originally showed them, so I will put the patient data on the x-axis. It doesn't change anything. You can do it as you like.

plot of chunk unnamed-chunk-4

From this you can see that there is an obviously linear section between about x = 0 and x = 2 (and, with the eye of faith, a second one after x = 6). This is what Hoffmann calls the “eye-fit”. Since the first linear section is attributable to the first of the two normal distributions which form the overall distribution, we can use it to determine the properties of the first distribution. If I look only at the data between x = 0 and x = 2, I am sort-of guaranteed to be in the first linear section. You don't have to kill yourself to identify exactly where the linearity ends, because the density of the points is highest near the middle of the linear section and this will weight the regression for you.

Next, if I extend this line to find its intersections with y = -1.96 and y = 1.96 (i.e. the z-scores corresponding to the limits of normal, namely the 2.5th and 97.5th centiles), I can estimate the reference interval by dropping perpendicular lines from the two respective intersections. Here is what I get:
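A minimal sketch of this estimate, assuming the mock data are in dat and using the x = 0 to x = 2 window for the “eye-fit”:

```r
# Hoffmann estimate from the normal QQ-plot (patient data on x, z-scores on y)
qq <- qqnorm(dat, plot.it = FALSE)              # qq$x = theoretical z, qq$y = data
hoff <- data.frame(z = qq$x, value = qq$y)
linear.part <- subset(hoff, value > 0 & value < 2)
fit <- lm(z ~ value, data = linear.part)        # line through the first linear section
limits <- (c(-1.96, 1.96) - coef(fit)[1]) / coef(fit)[2]   # drop the perpendiculars
round(limits, 2)
```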

plot of chunk unnamed-chunk-5

So the Hoffmann reference interval becomes 1.11 to 3.70 which you can compare to the expected values of about 1 and 3 based on the random data. Not the greatest but not bad.

What not to do

Let's apply the correct approach to the Hoffmann method (QQ-plot) and the incorrect approach (CDF on a linear scale) to a pseudorandom sampling (n = 10,000) of the standard normal distribution, which has a mean of 0 and a standard deviation of 1. Therefore the central 95% or “normal range” for this distribution will be -1.96 to 1.96. I will plot regression lines through the linear part of each curve and find the respective intersections with the appropriate horizontal lines.

plot of chunk unnamed-chunk-6

The QQ-plot generates estimates of the limits of normal, \(\mu \pm 1.96\sigma\), as about \(\pm 1.96\), as it should. You can easily show that the same procedure on the CDF intersects the lines \(y = \alpha/2\) and \(y = 1 - \alpha/2\) at values of \(\pm (1 - \alpha) \sqrt{\pi/2}\, \sigma\), which is about \(\pm 1.19\) for \(\sigma = 1\) and \(\alpha = 0.05\). This erroneous estimate is shown with the pink vertical lines. So the Hoffmann method does not work if one attempts to extend the linear portion of the CDF plotted in linear space, and it will produce estimates of \(\sigma\) that are about 40% too low in this case. If you're putting this all together, this is because the CDF is well away from its linear portion when the cumulative proportions are 0.025 and 0.975—not so for a QQ-plot. If you see a “Hoffmann plot” constructed from a sigmoidal CDF plotted on a linear scale, something is wrong.

Bhattacharya

This method is based on a much more highly cited paper in Biometrics published in 1967 by C.G. Bhattacharya. Loosely speaking, the method of Bhattacharya determines the parameter estimates of \(\mu_i\) and \(\sigma_i\) from the slope of the log of the distribution function. It was originally intended as a graphical method and so it also involves some human eye-balling.

We will need the log of the counts from the histogram. When we store the results of a histogram in R, we have the counts automatically.

We can now calculate the log of the counts (denoted \(y\)) and \(\Delta log(y)\) from bin to bin. We put these in a dataframe along with the counts and the midpoints of the bins. The bin width, which is chosen to be constant \(h\), is the distance between the midpoints of each bin.
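A minimal sketch, assuming the mock data are in dat (the number of breaks is my choice, and empty bins, which give log(0) = -Inf, are left in for simplicity):

```r
# Histogram counts, log counts and the bin-to-bin differences in log counts
h.obj <- hist(dat, breaks = 30, plot = FALSE)
h <- diff(h.obj$mids)[1]                       # constant bin width
bhatt <- data.frame(mid = h.obj$mids,
                    count = h.obj$counts,
                    log.y = log(h.obj$counts))
bhatt$dlog.y <- c(diff(bhatt$log.y), NA)       # delta log(y) from bin to bin
```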

Now let's plot \(\Delta log(y)\) as a function of the midpoints of the bins. I also number all the points to facilitate the next step.

plot of chunk unnamed-chunk-9

We can see from the figure that there are two sections where the plot shows a downsloping line: one between points 2 and 6 and another between points 10 and 21. How straight these lines appear is affected by how wide your bins are, so if you get lines that are hard to discern, you can try making fewer bins.

In any case, using Bhattacharya's notation, the next step in the procedure is to draw regression lines through the \(r^{th}\) linear section and determine the intercept \(\hat{\lambda}_r\) with the x-axis. Bhattacharya intended this as a graphical procedure and advises,

“While matching the straight line it is better to fit closely to the points where the frequency is large even if the apparent discrepancy becomes somewhat large where the frequency is small.”

Since we are doing this by calculation, we can take his advice by weighting the linear regressions according to the counts. This allows the determination of the \(\hat{\mu}_r\) by:

\[\hat{\mu}_r = \hat{\lambda}_r + h/2\]
and also the determination of \(\hat{\sigma}_r\) by:

\[\hat{\sigma}^2_r = -h/ \text{slope}_r - h^2/12\]
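A minimal sketch of these calculations, reusing the bhatt dataframe assumed above and the point ranges read off the plot:

```r
# Count-weighted regressions through the two downsloping linear sections
sec1 <- bhatt[2:6, ]
sec2 <- bhatt[10:21, ]
fit1 <- lm(dlog.y ~ mid, data = sec1, weights = count)
fit2 <- lm(dlog.y ~ mid, data = sec2, weights = count)
slopes <- c(coef(fit1)["mid"], coef(fit2)["mid"])
lambda <- -c(coef(fit1)[1], coef(fit2)[1]) / slopes   # x-intercepts
mu.hat    <- lambda + h / 2
sigma.hat <- sqrt(-h / slopes - h^2 / 12)
```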

And here are the results we get:

mu      sigma   Normal Range Limits
2.06    0.59    0.90
6.25    1.83    3.21

And here is what it all looks like

plot of chunk unnamed-chunk-12

In this demonstration, there are only two Gaussian distributions to resolve, but the method is not limited to the resolution of two Gaussian curves at all. If there are more, there will be more downsloping lines crossing the x-axis. So we get normal range estimates of 0.90 and 3.21 which compare much better with the expected values of about 1 and 3. We also get good estimates of \(\mu_2=\) 6.3 and \(\sigma_2=\) 1.8 which are about 6 and 2 respectively in our data set.

Bhattacharya also provides a means of calculating the mixing proportion of the two distributions—that is, the proportions of patients in the sick and well populations. We don't need that here so I omit it.

Gaussian Mixture Model

In R there are a lot of ways to approach the separation of mixtures of distributions using maximum likelihood. Here I am using a function from the mixtools package that is particularly easy to use. The concept of using maximum likelihood for mining your reference interval is not new (see this paper) but many would be intimidated by the math required to do it from scratch.

With R, this is pretty easy but please be cautioned that real data does not play as nice as the data in this demonstration (even moreso for Hoffmann and Bhattacharya) and it is unlikely that you will get smashing results unless your data fits the assumptions of the model.

In any case, here is the call:
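A minimal sketch with mixtools, assuming the mock data are in dat (the seed and k = 2 are my choices):

```r
library(mixtools)
set.seed(10)
mix.fit <- normalmixEM(dat, k = 2)
mix.fit$mu       # estimates of mu_1 and mu_2
mix.fit$sigma    # estimates of sigma_1 and sigma_2
mix.fit$lambda   # mixing proportions
```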

which gives very good parameter estimates indeed! Estimates of \(\mu_1\) and \(\mu_2\) are 2.01 and 6.19 respectively and estimates of \(\sigma_1\) and \(\sigma_2\) are 0.52 and 1.97 respectively.

Looking at this graphically:

plot of chunk unnamed-chunk-14

So the normal range estimate from the EM method is 1.00 to 3.03, which is pretty fantastic.

Summary of Results

Method                  LLN    ULN
Raw Random Data         1.03   2.98
Hoffmann                1.11   3.70
Bhattacharya            0.90   3.21
mixtools EM – winner!   1.00   3.03

It's not too hard to figure out which one of these approaches works best. But what do you do if your patient data distribution is obviously not a mixture of Gaussians (i.e. when the distributions look skewed)? There are ways to handle this in R but I will cover that another time–maybe in a paper.

Conclusion

  • Three methods of estimating the normal range from a mixture of Gaussians have been presented.
  • The Hoffmann method performs OK if you use a QQ-plot.
  • The Hoffmann method does not work for CDFs plotted on a linear scale.
  • The Bhattacharya method performs better but still requires some human oversight.
  • The normalmixEM() function from the mixtools package performs very well without any human oversight.
  • These results do not imply that any of these approaches will perform well on real patient data for which the components of the overall distribution are not likely to be Gaussian. Caution advised.

Parting Thought

Please don't fall on the wrong side of God's mixture separation procedures for wheat and chaff.

Said, John the Baptist, “But after me comes one who is more powerful than I, whose sandals I am not worthy to carry. He will baptize you with the Holy Spirit and fire. His winnowing fork is in his hand, and he will clear his threshing floor, gathering his wheat into the barn and burning up the chaff with unquenchable fire.”

Matt 3:11–12

Non-Linear Regression: Application to Monoclonal Peak Integration in Serum Protein Electrophoresis

Background

At the AACC meeting recently, there was an enthusiastic discussion of standardization of reporting for serum protein electrophoresis (SPEP) presented by a working group headed up by Dr. Chris McCudden and Dr. Ron Booth, both of the University of Ottawa. One of the discussions pertained to how monoclonal bands, especially small ones, should be integrated. While many use the default manual vertical gating or “drop” method offered by Sebia's Phoresis software, Dr. David Keren was discussing the value of tangent skimming as a more repeatable and effective means of monoclonal protein quantitation. He was also discussing some biochemical approaches distinguishing monoclonal proteins from the background gamma proteins.

The drop method is essentially an eye-ball approach to where the peak starts and ends and is represented by the vertical lines and the enclosed shaded area.

plot of chunk unnamed-chunk-1

The tangent skimming approach is easier to make reproducible. In the mass spectrometry world it is a well-developed approach with a long history and multiple algorithms in use. This is apparently the book. However, when tangent skimming is employed in SPEP, unless I am mistaken, it seems to be done by eye. The integration would look like this:

plot of chunk unnamed-chunk-2

During the discussion it was pointed out that peak deconvolution of the monoclonal protein from the background gamma might be preferable to either of the two described procedures. By this I mean integration as follows:

plot of chunk unnamed-chunk-3

There was discussion that this procedure is challenging for a number of reasons. Further, it should be noted that a deconvolution approach will likely only have clinical value when the concentration of the monoclonal protein is low enough that manual integration shows poor repeatability, say < 5 g/L = 0.5 g/dL.

Easy Peaks

Fitting samples with larger monoclonal peaks is fairly easy. Fitting tends to converge nicely and produce something meaningful. For example, using the approach I am about to show below, an electropherogram like this:

plot of chunk unnamed-chunk-4

with a gamma region looking like this:

plot of chunk unnamed-chunk-5

can be deconvoluted with straightforward non-linear regression (and no baseline subtraction) to yield this:

plot of chunk unnamed-chunk-6

and the area of the green monoclonal peak is found to be 5.3%.

More Difficult Peaks

What is more challenging is the problem of small monoclonals buried in normal \(\gamma\)-globulins. These could be difficult to integrate using a tangent skimming approach, particularly without image magnification. For the remainder of this post we will use a gel with a small monoclonal in the fast gamma region shown at the arrow.

plot of chunk unnamed-chunk-7

Getting the Data

EP data can be extracted from the PDF output from any electrophoresis software. This is not complicated and can be accomplished with pdf2svg or Inkscape and some Linux bash scripting. I'm sure we can get it straight from the instrument but it is not obvious to me how to do this. One could also rescan a gel and use ImageJ to produce a densitometry scan which is discussed in the ImageJ documentation and on YouTube. ImageJ also has a macro language for situations where the same kind of processing is done repeatedly.

Smoothing

The data has 10284 (x,y) pairs. But if you zoom in and look carefully, you find that it is a series of staircases.

plot of chunk unnamed-chunk-8

It turns out that this jaggedness significantly impairs attempts to numerically identify the peaks and valleys. So, I smoothed it a little using the handy rle() function to identify the midpoint of each step. This keeps the total area as close to its original value as possible–though this probably does not matter too much.
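A minimal sketch of this smoothing, with assumed column names x and y in a dataframe called scan.data:

```r
# Collapse each step of the staircase to its midpoint using rle()
runs      <- rle(scan.data$y)
step.ends <- cumsum(runs$lengths)
step.mids <- round(step.ends - runs$lengths / 2)   # index of the midpoint of each step
smooth.data <- scan.data[step.mids, ]
```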

plot of chunk unnamed-chunk-9

Now that we are satisfied that the new data is OK, I will overwrite the original dataframe.

Transformation

The units on the x and y-axes are arbitrary and come from page coordinates of the PDF. We can normalize the scan by making the x-axis go from 0 to 1 and by making the total area 1.
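A minimal sketch of the normalization, continuing with the assumed smooth.data dataframe:

```r
# Rescale x to [0, 1] and the total area under the curve to 1
smooth.data$x <- (smooth.data$x - min(smooth.data$x)) /
                 (max(smooth.data$x) - min(smooth.data$x))
total.area <- sum(head(smooth.data$y, -1) * diff(smooth.data$x))   # crude Riemann sum
smooth.data$y <- smooth.data$y / total.area
```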

plot of chunk unnamed-chunk-11

Find Extrema

Using the findPeaks function from the quantmod package we can find the minima and maxima:
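For example (a sketch, continuing with the assumed smooth.data dataframe):

```r
library(quantmod)
maxima <- findPeaks(smooth.data$y)     # indices just after local maxima
minima <- findValleys(smooth.data$y)   # indices just after local minima
```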

plot of chunk unnamed-chunk-12

plot of chunk unnamed-chunk-12

Not surprisingly, there are some extraneous local extrema that we do not want. I simply manually removed them. Generally, this kind of thing could be tackled with more smoothing of the data prior to analysis.

Fitting

Now it's possible with the nls() function to fit the entire SPEP with a series of Gaussian curves simultaneously. It works just fine (provided you have decent initial estimates of \(\mu_i\) and \(\sigma_i\)) but there is no particular clinical value to fitting the albumin, \(\alpha_1\), \(\alpha_2\), \(\beta_1\) and \(\beta_2\) domains with Gaussians. What is of interest is separately quantifying the two peaks in \(\gamma\) with two separate Gaussians so let's isolate the \(\gamma\) region based on the location of the minimum between \(\beta_2\) and \(\gamma\).

Isolate the \(\gamma\) Region

plot of chunk unnamed-chunk-14

Attempt Something that Ultimately Does Not Work

At first I thought I could just throw two normal distributions at this and it would work. However, it does not work well at all and this kind of not-so-helpful fit turns out to happen a fair bit. I use the nls() function here which is easy to call. It requires a functional form which I set to be:

\[y = C_1 \exp\Big(-{\frac{(x-\mu_1)^2}{2\sigma_1^2}}\Big) + C_2 \exp \Big({-\frac{(x-\mu_2)^2}{2\sigma_2^2}}\Big)\]

where \(\mu_1\) is the \(x\) location of the first peak in \(\gamma\) and \(\mu_2\) is the \(x\) location of the second peak in \(\gamma\). The estimates of \(\sigma_1\) and \(\sigma_2\) can be obtained by trying to estimate the full-width-half-maximum (FWHM) of the peaks, which is related to \(\sigma\) by

\[FWHM_i = 2 \sqrt{2\ln2} \times \sigma_i = 2.355 \times \sigma_i\]

I had to first make a little function that returns the respective half-widths at half-maximum and then uses them to estimate the \(FWHM\). Because the peaks are poorly resolved, it also tries to get the smallest possible estimate returning this as FWHM2.

The peak in the \(\gamma\) region was obtained previously:

plot of chunk unnamed-chunk-16

and from them \(\mu_1\) is determined to be 0.7. We have to guess where the second peak is, which is at about \(x=0.75\) and has an index of 252 in the gamma.data dataframe.

plot of chunk unnamed-chunk-17

Now we can find the estimates of the standard deviations:

The estimates of \(\sigma_1\) and \(\sigma_2\) are now obtained. The estimates of \(C_1\) and \(C_2\) are just the peak heights.

We can now use nls() to determine the fit.
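The call might look something like this (a sketch; the \(\mu\) starting values are the peak locations noted above, while the \(C\) and \(\sigma\) starting values are placeholders standing in for the peak heights and the FWHM-derived estimates):

```r
# Two-Gaussian fit of the gamma region (gamma.data assumed to hold columns x and y)
fit <- nls(y ~ C1 * exp(-(x - mu1)^2 / (2 * sigma1^2)) +
               C2 * exp(-(x - mu2)^2 / (2 * sigma2^2)),
           data  = gamma.data,
           start = list(C1 = 0.5, C2 = 0.3,
                        mu1 = 0.70, mu2 = 0.75,
                        sigma1 = 0.05, sigma2 = 0.02))
coef(fit)
```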

Determining the fitted values of our unknown coefficients:

And now we can plot the fitted results against the original results:

plot of chunk unnamed-chunk-22

And this is garbage. The green curve is supposed to be the monoclonal peak, the blue curve is supposed to be the \(\gamma\) background, and the red curve is their sum, the overall fit. This is a horrible failure.

Subsequently, I tried fixing the locations of \(\mu_1\) and \(\mu_2\) but this also yielded similar nonsensical fitting. So, with a lot of messing around trying different functions like the lognormal distribution, the Bi-Gaussian distribution and the Exponentially Modified Gaussian distribution, and applying various arbitrary weighting functions, and simultaneously fitting the other regions of the SPEP, I concluded that nothing could predictably produce results that represented the clinical reality.

I thought maybe the challenge of obtaining a reasonable fit related to the sloping baseline, so I thought I would try to remove it. I will model the baseline in the most simplistic manner possible: as a sloped line.

Baseline Removal

I will arbitrarily define the tail of the \(\gamma\) region to be those values having \(y \leq 0.02\). Then I will connect the first (x,y) point of the \(\gamma\) region to the tail with a straight line.
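One way to approximate this (a sketch; it fits a line through the first point of the region and the tail points rather than literally joining two points):

```r
# Simple linear baseline under the gamma region
tail.start <- min(which(gamma.data$y <= 0.02))
base.fit   <- lm(y ~ x, data = gamma.data[c(1, tail.start:nrow(gamma.data)), ])
gamma.no.base   <- gamma.data
gamma.no.base$y <- gamma.data$y - predict(base.fit, newdata = gamma.data)
```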

plot of chunk unnamed-chunk-24

Now we can define a new dataframe gamma.no.base that has the baseline removed:

plot of chunk unnamed-chunk-25

The black is the original \(\gamma\) and the dashed curve has the baseline removed. This becomes an easy fit.

plot of chunk unnamed-chunk-26

Lo and behold…something that is not completely insane. The green is the monoclonal, the blue is the \(\gamma\) background and the red is their sum, that is, the overall fit. A better fit could now be sought with weighting or with a more flexible distribution shape. In any case, the area of the green peak is now easily determined. Since

\[\int_{-\infty}^{\infty} C_1 \exp\Big(-{\frac{(x-\mu_1)^2}{2\sigma_1^2}}\Big)dx = \sqrt{2\pi}\,\sigma_1 C_1\]

So this peak is 2.4% of the total area. Now, of course, this assumes that nothing under the baseline is attributable to the monoclonal peak and that it all belongs to normal \(\gamma\)-globulins, which is very unlikely to be true. However, the drop and tangent skimming methods also make assumptions about how the area under the curve contributes to the monoclonal protein. The point is to try to do something that will produce consistent results that can be followed over time. Obviously, if you thought there were three peaks in the \(\gamma\)-region, you'd have to set up your model accordingly.
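In code, the refit and the area calculation might look like this (a sketch reusing the same assumed starting values, and relying on the fact that the whole scan was normalized to unit area):

```r
# Refit the two-Gaussian model to the baseline-subtracted data
fit.2 <- nls(y ~ C1 * exp(-(x - mu1)^2 / (2 * sigma1^2)) +
                 C2 * exp(-(x - mu2)^2 / (2 * sigma2^2)),
             data  = gamma.no.base,
             start = list(C1 = 0.5, C2 = 0.3, mu1 = 0.70, mu2 = 0.75,
                          sigma1 = 0.05, sigma2 = 0.02))
co <- coef(fit.2)
100 * sqrt(2 * pi) * co["sigma2"] * co["C2"]   # monoclonal peak as a percent of total area
```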

All about that Base(line)

There are obviously better ways to model the baseline because this approach of a linear baseline is not going to work in situations where, for example, there is a small monoclonal in fast \(\gamma\) dwarfed by normal \(\gamma\)-globulins. That is, like this:

plot of chunk unnamed-chunk-28

Something curvilinear or piecewise continuous and flexible enough for more circumstances is generally required.

There is also no guarantee that baseline removal, whatever the approach, is going to be a good solution in other circumstances. Given the diversity of monoclonal peak locations, sizes and shapes, I suspect one would need a few different approaches for different circumstances.

Conclusions

  • The data in the PDFs generated by EP software have been processed (probably with splining or similar), which produces the stair-stepping seen above. It would be better to work with raw data from the scanner.

  • Integrating monoclonal peaks under the \(\gamma\) baseline (or \(\beta\)) is unlikely to be a one-size-fits all approach and may require application of a number of strategies to get meaningful results.

    • Baseline removal might be helpful at times.
  • Peak integration will require human adjudication.

  • While most monoclonal peaks show little skewing, better fitting is likely to be obtained with distributions that afford some skewing.

  • MASSFIX may soon make this entire discussion irrelevant.

Parting Thought

On the matter of fitting

In bringing many sons and daughters to glory, it was fitting that God, for whom and through whom everything exists, should make the pioneer of their salvation perfect through what he suffered.

Heb 2:10

Compare Tube Types with R – Repeated Measures ANOVA

Background

Sometimes we might want to compare three or four tube types for a particular analyte on a group of patients, or we might want to see if a particular analyte is stable over time in aliquoted samples. In these experiments we are essentially doing the multivariable analogue of the paired t-test. In the tube-type experiment, the factor that differs between the (‘paired’) groups is the container: serum separator tubes (SST), EDTA plasma tubes, plasma separator tubes (PST), etc. In a stability experiment, the factor that differs is storage duration.

Since this is a fairly common clinical lab experiment, I thought I would just jot down how this is accomplished in R – though I must confess I know just about \(\lim_{x\to0}x\) about statistics. In any case, the statistical test is a repeated-measures ANOVA and this is one way to do it (there are many) including an approach to the post-hoc testing.

Some Fake Data to Work With

I’m going to make some fake data. I tried to dig up the data from an experiment I did as a resident but alas, I think the raw data died on an old laptop. But fake data will do for demonstration purposes. Let’s suppose we are looking at parathyroid hormone (PTH) in three different blood collection tubes: SST, EDTA and PST. For the sake of argument, let’s say that we collect samples from 20 patients simultaneously and we analyze them all as per our usual process. This means that each patient has three samples of material that should be otherwise identical outside of the effects of the collection container.

This is the way we usually express (and receive) data like this in an Excel spreadsheet:

Subject SST PST EDTA
1 17.5 18.1 19.9
2 15.1 15.7 20.0
3 29.0 29.2 32.9
4 5.7 6.2 6.4
5 25.0 26.1 27.0
6 25.7 26.4 29.0
7 41.2 40.8 48.1
8 20.4 22.1 24.3
9 28.7 26.9 36.0
10 11.0 13.9 13.7
11 32.4 31.9 36.9
12 44.5 49.2 57.4
13 16.2 17.1 15.7
14 21.7 24.1 26.3
15 38.8 36.8 42.6
16 34.4 34.0 44.2
17 12.6 12.1 14.1
18 19.8 20.9 25.4
19 19.9 18.2 23.0
20 35.4 37.4 34.1

This Excel-ish way of storing the data is referred to as the “datawide” format for obvious reasons.

Gather the Grain

As it turns out this is not the way that we want to store data to do the statistical analyses of interest. What we want to do is have the tube type in a single column because this is the factor that is different within the subjects. We want to gather() or melt() the data (depending on your package of choice) to be like so:

Subject Tube.Type value
1 SST 17.5
2 SST 15.1
3 SST 29.0
4 SST 5.7
5 SST 25.0
6 SST 25.7
7 SST 41.2
8 SST 20.4
9 SST 28.7
10 SST 11.0
11 SST 32.4
12 SST 44.5
13 SST 16.2
14 SST 21.7
15 SST 38.8
16 SST 34.4
17 SST 12.6
18 SST 19.8
19 SST 19.9
20 SST 35.4
1 PST 18.1
2 PST 15.7
3 PST 29.2
4 PST 6.2
5 PST 26.1
6 PST 26.4
7 PST 40.8
8 PST 22.1
9 PST 26.9
10 PST 13.9
11 PST 31.9
12 PST 49.2
13 PST 17.1
14 PST 24.1
15 PST 36.8
16 PST 34.0
17 PST 12.1
18 PST 20.9
19 PST 18.2
20 PST 37.4
1 EDTA 19.9
2 EDTA 20.0
3 EDTA 32.9
4 EDTA 6.4
5 EDTA 27.0
6 EDTA 29.0
7 EDTA 48.1
8 EDTA 24.3
9 EDTA 36.0
10 EDTA 13.7
11 EDTA 36.9
12 EDTA 57.4
13 EDTA 15.7
14 EDTA 26.3
15 EDTA 42.6
16 EDTA 44.2
17 EDTA 14.1
18 EDTA 25.4
19 EDTA 23.0
20 EDTA 34.1

Now we see that there is a column for tube type and a column for the PTH results, which we can name accordingly. You can see why this is called the “datalong” format.
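A minimal sketch of this reshaping with tidyr, assuming the wide data shown earlier are stored in a dataframe called tube.data:

```r
library(tidyr)
tube.data.2 <- gather(tube.data, key = "Tube.Type", value = "PTH", SST, PST, EDTA)
```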

Visualize

Summarize the data:

Let’s just have a quick look graphically:

plot of chunk unnamed-chunk-6

plot of chunk unnamed-chunk-6

And as a boxplot with the points overtop:

plot of chunk unnamed-chunk-7

Separate the Wheat from the Chaff

Now we want to make comparisons to see if these are different. To accomplish this, we will use the aov() function. This requires us to have data formatted “datalong” as it is in the tube.data.2 dataframe.

If you are like me, this syntax is confusing. But it goes like this: PTH is a function of Tube.Type, which is straightforward–hence the PTH ~ Tube.Type bit. The error term has the Subject in front of the / and the factor that is different within the subjects (Tube.Type) after the /. That’s my grade 2 explanation from reading this and this and this.
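A minimal sketch of the call (assuming the tube.data.2 dataframe from above, with Subject converted to a factor):

```r
tube.data.2$Subject <- factor(tube.data.2$Subject)
rm.aov <- aov(PTH ~ Tube.Type + Error(Subject/Tube.Type), data = tube.data.2)
summary(rm.aov)
```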

This tells us that there is a difference between the groups but it does not specify where the difference is.

I can’t see the difference. Can you see the difference?

Sorry – I just had to make a pop-culture reference to this. We want to be specific about where the differences are without making a Type I error which might arise if we blindly charge ahead and do multiple paired t-tests. One easy way to accomplish this is to use the pairwise.t.test() function which does corrections for multiple comparisons. You can choose from a number of approaches for adjustment for pairwise comparison. This requires the “response vector” which is PTH and the “grouping factor” which is the tube type.
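For example (a sketch; Holm is the default adjustment, and paired = TRUE is appropriate because the same subjects appear in every group):

```r
pairwise.t.test(tube.data.2$PTH, tube.data.2$Tube.Type,
                paired = TRUE, p.adjust.method = "holm")
```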

This is pretty easy to understand. There are statistically significant differences found between EDTA and PST (p = 0.00083) and between EDTA and SST (p = 0.00008), but none between SST and PST (p = 0.35033).

Conclusion

This is a non-statistician’s approach to tube-type comparisons, which is also applicable to analyte stability studies. It is a one-way repeated measures ANOVA with one within-subjects factor. There is a great deal more to say on the matter by people who know much more; see the citations in the links provided above.

God probably uses datawide format

All the nations will be gathered before him, and he will separate the people one from another as a shepherd separates the sheep from the goats. He will put the sheep on his right and the goats on his left.

(Matt 25:32-33)

Parse an Online Table into an R Dataframe – Westgard’s Biological Variation Database

Background

From time to time I have wanted to bring an online table into an R dataframe. While in principle, the data can be cut and paste into Excel, sometimes the table is very large and sometimes the columns get goofed up in the process. Fortunately, there are a number of R tools for accomplishing this. I am just going to show one approach using the rvest package. The rvest package also makes it possible to interact with forms on webpages to request specific material which can then be scraped. I think you will see the potential if you look here.

In our (simple) case, we will apply this process to Westgard's desirable assay specifications as shown on his website. The goal is to parse out the biological variation tables, get them into a dataframe and then write them to csv or xlsx.

Reading in the Data

The first thing to do is to load the rvest and httr packages and define an html session with the html_session() function.

Now looking at the webpage, you can see that there are 8 columns in the tables of interest. So, we will define an empty dataframe with 8 columns.
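A minimal sketch of this setup (the URL is my assumption of the relevant page; substitute the page you are actually scraping):

```r
library(rvest)
library(httr)
url <- "https://www.westgard.com/biodatabase1.htm"   # assumed URL of the BV database page
westgard <- html_session(url)
biotable <- data.frame(matrix(ncol = 8, nrow = 0))   # empty dataframe with 8 columns
```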

We need to know which part of the document to scrape. This is a little obscure, but following the instructions in this post, we can determine that the xpaths we need are:

/html/body/div[1]/div[3]/div/main/article/div/table[1]

/html/body/div[1]/div[3]/div/main/article/div/table[2]

/html/body/div[1]/div[3]/div/main/article/div/table[3]

etc.

There are 8 such tables in the whole webpage. We can define a character vector for these as such:
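Following the pattern above (a sketch):

```r
xpaths <- paste0("/html/body/div[1]/div[3]/div/main/article/div/table[", 1:8, "]")
```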

Now we make a loop to scrape the 8 tables and, with each iteration of the loop, append the scraped subtable to the main dataframe called biotable using the rbind() function. We have to use the parameter fill = TRUE in the html_table() function because the tables do not always have a uniform number of columns.
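A minimal sketch of the loop, reusing the westgard session, the xpaths vector and the empty biotable from above:

```r
for (i in seq_along(xpaths)) {
  sub.table <- westgard %>%
    html_node(xpath = xpaths[i]) %>%
    html_table(fill = TRUE)
  biotable <- rbind(biotable, sub.table)
}
```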

Clean Up

Now that we have the raw data out, we can have a quick look at it:

X1 X2 X3 X4 X5 X6 X7 X8
Analyte Number of Papers Biological Variation Biological Variation Desirable specification Desirable specification Desirable specification
Analyte Number of Papers CVI CVg I(%) B(%) TE(%)
S- 11-Desoxycortisol 2 21.3 31.5 10.7 9.5 27.1
S- 17-Hydroxyprogesterone 2 19.6 50.4 9.8 13.5 29.7
U- 4-hydroxy-3-methoximandelate (VMA) 1 22.2 47.0 11.1 13.0 31.3
S- 5' Nucleotidase 2 23.2 19.9 11.6 7.6 26.8
U- 5'-Hydroxyindolacetate, concentration 1 20.3 33.2 10.2 9.7 26.5
S- α1-Acid Glycoprotein 3 11.3 24.9 5.7 6.8 16.2
S- α1-Antichymotrypsin 1 13.5 18.3 6.8 5.7 16.8
S- α1-Antitrypsin 3 5.9 16.3 3.0 4.3 9.2

We can see that we need to define column names and we need to get rid of some rows containing extraneous column header information. There are actually 8 such sets of headers to remove.

Let's now find rows we don't want and remove them.

You will find that the table has missing data, which is written as “- - -”. These should now be replaced by NA and the row names should be reset to sequential integers. Also, we will remove all the minus signs after the specimen type. I'm not sure what they add.

Check it Out

Just having another look at the first 10 rows:

Sample Analyte NumPapers CVI CVG I B TE
S 11-Desoxycortisol 2 21.3 31.5 10.7 9.5 27.1
S 17-Hydroxyprogesterone 2 19.6 50.4 9.8 13.5 29.7
U 4-hydroxy-3-methoximandelate (VMA) 1 22.2 47.0 11.1 13.0 31.3
S 5' Nucleotidase 2 23.2 19.9 11.6 7.6 26.8
U 5'-Hydroxyindolacetate, concentration 1 20.3 33.2 10.2 9.7 26.5
S α1-Acid Glycoprotein 3 11.3 24.9 5.7 6.8 16.2
S α1-Antichymotrypsin 1 13.5 18.3 6.8 5.7 16.8
S α1-Antitrypsin 3 5.9 16.3 3.0 4.3 9.2
S α1-Globulins 2 11.4 22.6 5.7 6.3 15.7
U α1-Microglobulin, concentration, first morning 1 33.0 58.0 16.5 16.7 43.9

Now examining the structure:

It's kind-of undesirable to have numbers as characters so…
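For example (a sketch, using the column names shown in the cleaned table above):

```r
num.cols <- c("NumPapers", "CVI", "CVG", "I", "B", "TE")
biotable[num.cols] <- lapply(biotable[num.cols], as.numeric)
str(biotable)
```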

Write the Data

Using the xlsx package, you can output the table to an Excel file in the current working directory.

If you are having trouble getting xlsx to install, then just write as csv.
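Both are one-liners (the file names are my placeholders):

```r
library(xlsx)
write.xlsx(biotable, "biological_variation.xlsx", row.names = FALSE)
# or, if xlsx will not install:
write.csv(biotable, "biological_variation.csv", row.names = FALSE)
```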

Conclusion

You can now use the same general approach to parse any table you have web access to, no matter how small or big it is. Here is a complete script in one place:

Parting Thought on Tables

You prepare a table before me in the presence of my enemies. You anoint my head with oil; my cup overflows.

(Psalm 23:5)

Determine the CV of a Calculated Lab Reportable – Bioavailable Testosterone

Background

At the AACC meeting last week, some of my friends were bugging me that I had not made a blog post in 10 months. Without getting into it too much, let's just say I can blame Cerner. Thanks also to a prod from a friend, here is an approach to a fairly common problem.

We all report calculated quantities out of our laboratories–quantities such as LDL cholesterol, non-HDL cholesterol, aldosterone:renin ratio, free testosterone, eGFR etc. How does one determine the precision (i.e. imprecision) of a calculated quantity? While earlier in my life I might have gone to the trouble of trying to do such calculations analytically using the rules of error propagation, in my later years I am more pragmatic and I'm happy to use a computational approach.

In this example, we will model the precision in calculated bioavailable testosterone (CBAT). Without explanation, I provide an R function for CBAT (and free testosterone) where testosterone is reported in nmol/L, sex hormone binding globulin (SHBG) is reported in nmol/L, and albumin is reported in g/L. Using the Vermeulen Equation as discussed in this publication, you can calculate CBAT as follows:
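What follows is a sketch of the Vermeulen-style calculation rather than the author's original function; the association constants and the albumin molar mass are assumptions commonly used with that equation and should be verified against the original publication before any real use:

```r
# Calculated free and bioavailable testosterone (Vermeulen-style calculation, sketch only)
CBAT <- function(TT, SHBG, albumin = 43) {
  k.alb  <- 3.6e4                       # albumin association constant (L/mol), assumed
  k.shbg <- 1e9                         # SHBG association constant (L/mol), assumed
  N      <- 1 + k.alb * albumin / 69000 # albumin g/L -> mol/L assuming MW ~69 kDa
  TT.m   <- TT * 1e-9                   # nmol/L -> mol/L
  SHBG.m <- SHBG * 1e-9
  b      <- N + k.shbg * (SHBG.m - TT.m)
  FT     <- (-b + sqrt(b^2 + 4 * N * k.shbg * TT.m)) / (2 * N * k.shbg)  # free T, mol/L
  data.frame(free.T = FT * 1e9, CBAT = N * FT * 1e9)                     # back to nmol/L
}
```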

To sanity-check this, we can use this online calculator. Taking a typical male testosterone of 20 nmol/L, an SHBG of 50 nmol/L and an albumin of 43 g/L, we get the following:
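Using the sketch above, that call is:

```r
CBAT(20, 50, 43)
```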

which is confirmed by the online calculator. Because the function is vectorized, we can submit a vector of testosterone results and SHBG results and get a vector of CBAT results.

Precision of Components

We now need some precision data for the three components. However, in our lab, we just substitute 43 g/L for the albumin, so we will leave that term out of the analysis and limit our precision calculation to testosterone and SHBG. This will allow us to present the precision as surface plots as a function of total testosterone and SHBG.

We do testosterone by LC-MS/MS using Deborah French's method. In the last three months, the precision has been 3.9% at 0.78 nmol/L, 5.5% at 6.7 nmol/L, 5.2% at 18.0 nmol/L, and 6.0% at 28.2 nmol/L. We are using the Roche Cobas e601 SHBG method which, according to the package insert, has precision of 1.8% at 14.9 nmol/L, 2.1 % at 45.7 nmol/L, and 4.0% at 219 nmol/L.

plot of chunk unnamed-chunk-4

plot of chunk unnamed-chunk-4

Build Approximation Functions

We will want to generate linear interpolations of these precision profiles. Generally, we might want to use non-linear regression to do this, but I will just linearly interpolate with the approxfun() function. This will allow us to just call a function to get the approximate CV at concentrations other than those for which we have data.
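A minimal sketch, using the precision figures quoted above (the function names and rule = 2, which holds the CV constant outside the studied range, are my choices):

```r
TT.CV   <- approxfun(x = c(0.78, 6.7, 18.0, 28.2), y = c(3.9, 5.5, 5.2, 6.0), rule = 2)
SHBG.CV <- approxfun(x = c(14.9, 45.7, 219),       y = c(1.8, 2.1, 4.0),      rule = 2)
```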

Now, if we want to know the precision of SHBG at, say, 100 nmol/L, we can just write,
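using the hypothetical SHBG.CV() interpolation sketched above:

```r
SHBG.CV(100)
```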

to obtain our precision result.

Random Simulation

Now let's build a grid of SHBG and total testosterone (TT) values at which we will calculate the precision for CBAT.

At each point on the grid, we will have to generate, say, 100000 random TT values and 100000 random SHBG values with the appropriate precision and then calculate the expected precision of CBAT at those concentrations.

Let's do this for a single pair of concentrations by way of example modelling the random analytical error as Gaussian using the rnorm() function.

So, we can build the process of calculating the CV of CBAT into a function as follows:
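A sketch of such a function, reusing the hypothetical CBAT(), TT.CV() and SHBG.CV() defined above:

```r
CV.CBAT <- function(TT, SHBG, n = 100000) {
  TT.sim   <- rnorm(n, mean = TT,   sd = TT   * TT.CV(TT)     / 100)
  SHBG.sim <- rnorm(n, mean = SHBG, sd = SHBG * SHBG.CV(SHBG) / 100)
  CBAT.sim <- CBAT(TT.sim, SHBG.sim)$CBAT
  100 * sd(CBAT.sim) / mean(CBAT.sim)          # CV of CBAT in percent
}
CV.CBAT(20, 50)   # example: a typical male TT of 20 nmol/L and SHBG of 50 nmol/L
```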

Now, we can make a matrix of the data for presenting a plot, calculating the CV and appending it to the dataframe.
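For example (a sketch; the grid ranges and step sizes are my choices):

```r
cv.grid <- expand.grid(TT = seq(2, 30, by = 2), SHBG = seq(10, 120, by = 10))
cv.grid$CV <- mapply(CV.CBAT, cv.grid$TT, cv.grid$SHBG)
```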

Now we make the plot using the wireframe() function.
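For example (a sketch; wireframe() comes from the lattice package):

```r
library(lattice)
wireframe(CV ~ TT * SHBG, data = cv.grid, drape = TRUE,
          zlab = "CV (%)", scales = list(arrows = FALSE))
```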

plot of chunk unnamed-chunk-11

This shows us that the CV of CBAT ranges from about 4–8% over the TT and SHBG ranges we have looked at.

Conclusion

We have determined the CV of calculated bioavailable testosterone using random number simulations using empirical CV data and produced a surface plot of CV. This allows us to comment on the CV of this lab reportable as a function of the two variables by which it is determined.

Parting Thought on Monte Carlo Simulations

The die is cast into the lap, but its every decision is from the LORD.

(Prov 16:33)

Conditional Formatting of a Table in R

Background

There are a few ways to approach the problem of a conditionally formatted table in R. You can use the ReporteRs package's FlexTable() function, the formattable package, or the condformat package. These allow you to produce conditionally formatted tables in HTML. You can also use the xtable package and essentially program what you want in LaTeX via the xtable() function.

In my desire for something simple-ish, I am going to do this graphically using the image() function as suggested here. The benefit is that I can then push the table into an RMarkdown-generated PDF document easily.

The Problem

Suppose that you want to prepare a summary of how resident and medical student orders are placed on various wards. You obtain data that is formatted in the following manner.

There are 4 wards: medicine, surgery, ER and orthopedics. Orders can come in as computerized physician order entry (CPOE), verbal or written. The orders have to be cosigned by staff and this is recorded as TRUE/FALSE because staff are not always compliant in logging on to the EMR to cosign the trainee orders.

Preparing Proportions Table

Let's start with the assumption that we want to apply the same conditional formatting to all data in the table. That is, we want to colour code all results with the same algorithm. We can use the image() function to get this done. Let's display the rates at which different order types (CPOE, verbal, or written) come from the four wards. We can generate the proportions table in percent very easily with the prop.table() and table() functions operating on the first two columns of our orders data:
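For example (a sketch; the column names Ward and Order.Type in the orders dataframe are my assumptions):

```r
my.data <- round(100 * prop.table(table(orders$Ward, orders$Order.Type), margin = 1), 1)
```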

A DIY Approach with the Image Function

The image() function produces a tile plot based on a matrix of z values, where z = f(x,y), using colours we can define and thresholds for switching from one colour to the next based on a breaks parameter. In our case, we will say that if the result is less than or equal to 25%, we will colour the tile blue; if it is greater than 25% but less than or equal to 50%, we will colour it red; and if it is greater than 50%, it will be yellow.

You will note that we have to transpose the data with the t() function because the image() function plots the rows on the x-axis and the columns on the y-axis. You will also notice that we need to plot y descending on the y-axis to account for the fact that our tabular data has an increasing index going down, whereas the tile plot defaults to having y increase going up. We also need to suppress the axes and their labels. The reader can comment out the lines xaxt = 'n' and yaxt = 'n' to see what is going on in terms of x and y values.
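A minimal sketch (it reverses the row order before transposing, which achieves the same top-to-bottom layout described above; the break endpoints are padded slightly so that 0% and 100% fall inside the outer intervals):

```r
image(t(my.data[nrow(my.data):1, ]),
      col = c("blue", "red", "yellow"),
      breaks = c(-1, 25, 50, 101),
      xaxt = 'n', yaxt = 'n')
```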

plot of chunk unnamed-chunk-5

Now we can write our values over top with the text() function.

plot of chunk unnamed-chunk-7

And then we can write the variable names (which we yank from the attributes of the table) into the figure margin and draw some lines to make it look pretty. It was necessary to use the adj and padj parameters to make it look a little cleaner.

plot of chunk unnamed-chunk-9

Conditionally Coloured Text

Now, if you want to make the text colour match the background colour, we will need a little function.
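Such a function might look like this sketch, mirroring the breaks used above (the original color.picker() is not shown):

```r
color.picker <- function(z) {
  ifelse(z <= 25, "blue", ifelse(z <= 50, "red", "yellow"))
}
```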

and then apply it over the values of the matrix:

plot of chunk unnamed-chunk-12

Different Conditions for Different Columns

Now suppose you wanted different conditional formatting for each column. This is kind of a pain because you will need to provide the image() function one matrix to generate the appropriate fill colours and a different matrix for the data to be written in each cell. Let's imagine, for example, that we want to include the compliance rate for co-signing in a fourth column and that this is the only column we want coloured. To this column we want a colour scheme applied wherein if compliance is less than or equal to 20%, the colour is red; if it is between 20% and 80%, it is yellow; and if it is above 80%, it is green.

We can calculate a proportions table based on columns 1 and 3 of the orders dataframe and then we can define a matrix fill.data that has NA on all the rates we calculated above.

Now the proportions matrix is as follows:

and the fill data is:

Now we can apply the image() function to the fill.data matrix. When it comes to writing the data in the cells, we will use the original my.data matrix and we will adjust our color.picker() function.

plot of chunk unnamed-chunk-16

So, it looks like this could become super-awkward if we had elaborate conditions to apply. This is where packages like condformat and formattable come in handy. If you use the condformat package, you can include the table in an RMarkdown-generated PDF or HTML document. However, the formattable() function, though capable of much prettier output, does not work with PDFs generated using RMarkdown.

First, here is a condformat example. Suppose we wanted to colourize CPOE in shades of green because CPOE is more operationally desirable, and verbal/written orders in shades of red because they are less operationally desirable. We also want the red/yellow/green formatting in the Cosigned column. Using condformat we could do the following:

CPOE Verbal Written Cosigned
1 49.3 8.7 42.0 75.3
2 30.0 4.0 66.0 52.0
3 89.5 7.0 3.5 88.5
4 8.0 23.0 69.0 13.0

You can see that the rownames are suppressed with condformat(). You could circumvent this by putting the rownames into their own column. This package is pretty easy to use and with PDF rendering (shown below) it produces something more LaTeX-ish than what is shown above which was generated straight to HTML.

plot of chunk unnamed-chunk-18

For something more attractive looking, here is an example of something similar using the formattable package (borrowing heavily from the code author's examples ):

CPOE Verbal Written Cosigned
Med 49.3 8.7 42.0 75.30 (rank: 02)
Surg 30.0 4.0 66.0 52.00 (rank: 03)
ER 89.5 7.0 3.5 88.50 (rank: 01)
Orth 8.0 23.0 69.0 13.00 (rank: 04)

I hope that this points you in the right direction.





And as for conditions:

“If you declare with your mouth, “Jesus is Lord,” and believe in your heart that God raised him from the dead, you will be saved.”

Romans 10:9

Make Easy Heatmaps to Visualize your Turnaround Times

The Problem

In two previous posts, I discussed visualizing your turnaround times (TATs). These posts are here and here. One other nice way to visualize your TAT is by means of a heatmap. In particular, we would like to look at the TAT for every hour of the week in a single figure. This manner of dataviz bling seems to be particularly attractive to managers because it costs you $0 to do this with R, whereas with commercial tools like Tableau you'd have to pay a fortune and, as with Excel, your report would not be readily reproducible. Further, to make it autogenerate a PDF would mean you had to fork out more money for a report-generation module. Pffft.

The Data

We're going to read in a year's worth of order times and result times for a stat immunoassay test offered to a particular ward. The data, as I've formatted it, has two columns, ord and res.

Now, of course, we want to look at data collected over a long period of time so that we can be sure that our observations are not simply an artifact of recent instrument downtime, maintenance, or whoever happened to be running the instrument. This is why I chose a year's worth of data. We are going to visualize the median order-to-file TAT for this test.

Formatting and Calculations

To calculate the hourly medians, we'll need to be able to label every TAT with the day it was run and the hour of the day it was run. This is pretty easy with the lubridate package. We'll do four things:

  • We'll convert the dates to POSIXct objects
  • We'll use the difftime() function to calculate the TATs
  • We'll use the wday() function to determine which day of the week the specimen was run on
  • We'll pull out the hour of the day on which it was run with the format() function (see the sketch below).
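A minimal sketch of these steps, assuming the two columns are in a dataframe called tat.data and that the timestamps parse with ymd_hms():

```r
library(lubridate)
tat.data$ord <- ymd_hms(tat.data$ord)                   # POSIXct order time
tat.data$res <- ymd_hms(tat.data$res)                   # POSIXct result time
tat.data$otf <- as.numeric(difftime(tat.data$res, tat.data$ord, units = "mins"))
tat.data$dow <- wday(tat.data$ord)                      # 1 = Sunday ... 7 = Saturday
tat.data$hod <- as.numeric(format(tat.data$ord, "%H"))  # hour of the day, 0-23
```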

And now the data will look like this:

where the order-to-file TAT is in the otf column, the day-of-week is in the dow column and the hour-of-day is in the hod column. Now we can cycle through the days of the week and the hours of the day and calculate the year's median TAT for each hour, storing it in a matrix:
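For example (a sketch continuing with the assumed tat.data columns):

```r
heat.data <- matrix(NA, nrow = 24, ncol = 7)
for (d in 1:7) {
  for (h in 0:23) {
    heat.data[h + 1, d] <- median(tat.data$otf[tat.data$dow == d & tat.data$hod == h],
                                  na.rm = TRUE)
  }
}
```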

Making the Heatmap

There are many ways to make the heatmap but I am particularly fond of the appearance of surface plots made with the fields package.

plot of chunk unnamed-chunk-5

Overlay Printed Times

We can see that there is a morning slowdown that is particularly bad on Saturday. But what if we wanted to know the exact value for these eye-catching problem times? We'd have trouble, unless we overlaid some text.

It turns out that if you use white printing, you can't read the numbers when the background colour is yellow or green. There is a 64-colour gradient used in the image.plot() function, so I calculated which integers in 0–64 were the problem and found the TATs to which they would correspond. It turned out that colours 20–45 out of the 64 colours in the gradient are the problem. By this means, I can make the printing black over the yellows and greens but white everywhere else:

plot of chunk unnamed-chunk-6

So, that is not too bad, and if you wanted to look at the 75th percentile instead you would only have to adjust the heat.data calculation as follows:
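That is, inside the same double loop as above, replace the median with a quantile (a sketch):

```r
heat.data[h + 1, d] <- quantile(tat.data$otf[tat.data$dow == d & tat.data$hod == h],
                                probs = 0.75, na.rm = TRUE)
```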

And this is what you will get.

plot of chunk unnamed-chunk-8

Hmmm…we'd better look at Saturday morning, 6 am. I hope you have found this helpful.





And as for heat

“He will sit as a refiner and purifier of silver”

Malachi 3:3

Make Bland Altman Plots with Marginal Histograms using ggExtra

The Problem

As you know, in Clinical Chemistry we are not always writing a major paper but sometimes just preparing a short report to answer a technical question that we've encountered at work. For shorter papers, journals often have more stringent rules about how many figures you can submit and sometimes even forbid multipanelled figures. In these situations, we might want to cram a little more into our figure than we might otherwise. In a recent submission, I wanted to produce a difference plot of immunoassay results before and after storage, but I also wanted to show the distribution of the results using a histogram–but this would have counted as two separate figures.

However, thanks to some fine work by Dean Attali of UBC Department of Statistics where he works with R-legend Jenny Bryan, it is quite easy to add marginal histograms to a Bland Altman (or any other scatter) plot using the ggExtra package.

How To

Let's make some fake data for a Bland Altman plot. Let's pretend that we are measuring the same quantity by immunoassay at baseline and after 1 year of storage at -80 degrees. We'll add some heteroscedastic error and create some apparent degradation of about 20%:
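A minimal sketch of such data (the distribution, sample size and error model are my choices):

```r
set.seed(1)
baseline <- rlnorm(100, meanlog = 3, sdlog = 0.5)        # immunoassay result at baseline
year1    <- 0.8 * baseline +                             # ~20% apparent degradation
            rnorm(100, mean = 0, sd = 0.05 * baseline)   # heteroscedastic error
```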

plot of chunk unnamed-chunk-1

Or if we plot this in the ggplot() paradigm

plot of chunk unnamed-chunk-2

Now we will prepare the difference data:

In standard Bland Altman plots, one plots the difference between methods against the average of the methods, but in this case, the x-axis should be the baseline result, because that is the closest thing we have to the truth.
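A minimal sketch of such a plot with a marginal histogram, using the fake data above:

```r
library(ggplot2)
library(ggExtra)
diff.data <- data.frame(baseline = baseline, diff = year1 - baseline)
p <- ggplot(diff.data, aes(x = baseline, y = diff)) +
  geom_point() +
  geom_hline(yintercept = 0, linetype = "dashed")
ggMarginal(p, type = "histogram")
```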

plot of chunk unnamed-chunk-4

So that is the difference plot for the absolute difference. We can also obviously do the percent difference.

plot of chunk unnamed-chunk-5

Kickin' it Old School

You can also do this in a non-ggplot() paradigm using base plotting utilities as described in this R-bloggers post.

Conclusion

And that, friends, is a way of squishing a histogram of your sample concentrations into your difference plot, which allows you to graphically display your sampling distribution and justify whether you would use parametric or non-parametric statistics to assess the extent of loss of immunoreactivity from storage.

And speaking of scatterplots

“…then the Lord your God will restore your fortunes and have compassion on you and gather you again from all the nations where he scattered you.”
Deut 30:3