Break up with Excel: Intro and Advanced R Data Science Courses at MSACL.org

Salzburg, Austria, September 22–24, 2019

MSACL Conference

There are two RStats Data Science courses happening in Salzburg, Austria on September 22–24, 2019 at the 6th annual MSACL Clinical Mass Spectrometry Conference. These courses are held twice annually, once in Europe and once in Palm Springs.


Introductory Course

  • The introductory course will be taught by Dan Holmes, MD, of the University of British Columbia and Will Slade, PhD, of Laboratory Corporation of America.
  • Course description is here

Intermediate/Advanced Course

Registration

  • Although the conference is for clinical mass spectrometry, the courses are generic in nature and generally geared towards the biological and health sciences rather than mass spectrometry per se.
  • You do not need to register for the conference to attend the pre-conference courses.
  • Academic rates apply. Registration for the course includes lots of snacks and coffee… like, good coffee.
  • Registration details are here.

Details

Both courses will take place in the following location:

  • Salzburg Congress Centre
    • Day 1: September 22, 2019, 1300h–1800h
    • Day 2: September 23, 2019, 0830h–1730h
    • Day 3: September 24, 2019, 0830h–1130h

RMarkdown Template that Manages Academic Affiliations – docx or PDF output

Background

I like writing my academic papers in RMarkdown because it allows reproducible research. The cleanest way to submit a manuscript written in RMarkdown is to use the LaTeX code it generates via the YAML switch keep_tex: true. A minimalist YAML header would look like so:
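```yaml
---
# a minimal sketch; the title and author are placeholders
title: "A Reproducible Manuscript"
author: "Jane Doe"
output:
  pdf_document:
    keep_tex: true
---
```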

Introduction

However, when you want multiple author affiliations, you discover that you can't do things as you would in LaTeX, because Pandoc does not know what to do with the affiliations, and you end up with a disheartening PDF that looks like the output shown in figure 1 below:


Figure 1: This is so sad.

The situation worsens if you want MS-Word output. As those of us in medical fields know, most journals (with some notable exceptions like the Clinical Mass Spectrometry Journal and other Elsevier journals like Clinical Biochemistry and Clinica Chimica Acta) require submission of a document in MS-Word format, which goes against all that Data Science and Reproducible Research stand for (he says, with hyperbole). Parenthetically, it is my hope that since AACC has indicated that they intend to make Data Science a strategic priority for Lab Medicine, they will soon accept submissions to Clinical Chemistry and the Journal of Applied Laboratory Medicine written reproducibly in RMarkdown or LaTeX.

In the meantime, here are the workarounds for getting the affiliations to display correctly along with all the other things we want, namely, cross-referencing of figures and tables, correct reference formatting, and abbreviation of journal names. This allows you to avoid the horror of manually fixing your Word document after it is generated from RMarkdown. In any case, let's start with MS-Word.

Dependencies for MS-Word and the Associated YAML

You will need to install Pandoc, which is the Swiss Army knife of document conversion. It's going to turn your code into a .docx file for you. Mac users can do this with Homebrew on the terminal command line:
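```bash
brew install pandoc
```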

There are some extra installs required to help Pandoc do its job. Install the prebuilt binaries if you can.
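The likely candidates are Pandoc's citation and cross-referencing filters; on a Mac, something like this should do (the package names are my assumption):

```bash
# pandoc-citeproc formats the bibliography; pandoc-crossref resolves
# figure and table cross-references
brew install pandoc-citeproc pandoc-crossref
```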

Finally, you need to use some scripts written in the Lua scripting language, which means you will need the language itself:
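```bash
brew install lua
```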

And you will need two Lua scripts:

These are in the Pandoc GitHub repository:

You want the files named scholarly-metadata.lua and author-info-blocks.lua.
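Assuming the standard layout of the pandoc/lua-filters GitHub repository, something like this will fetch them:

```bash
curl -O https://raw.githubusercontent.com/pandoc/lua-filters/master/scholarly-metadata/scholarly-metadata.lua
curl -O https://raw.githubusercontent.com/pandoc/lua-filters/master/author-info-blocks/author-info-blocks.lua
```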

You will need to choose a .csl file for your journal. This will tell Pandoc how to format the references. You can download the correct .csl file here. You will also need a journal abbreviations database. I have made one for you from the Web of Science list and you can download it here.

You will need to create a BibTeX database (a .bib file), which is just your list of references. This can be exported from various reference managers or built by hand. Name the file mybibfile.bib.

Now follow the bouncing ball:

  1. Go to the directory containing your .Rmd file.
  2. Create a directory in it called “Extras”.
  3. Put the two Lua scripts, the Bibtex database, the abbreviations database and the .csl file into the “Extras” folder.
  4. If you want to avoid Pandoc’s goofy default .docx formatting, then put this Word document in the same folder.

OR

Download the contents of this folder from my github repo that has everything set up as I describe above.

For two authors, your YAML will need to look like this:
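A sketch for two authors (the names, institutes, and file names under Extras are placeholders; bookdown::word_document2 provides the figure and table cross-referencing):

```yaml
---
title: "A Reproducible Manuscript"
author:
  - Jane Doe:
      institute: [ubc]
      email: jane.doe@example.org
      correspondence: "yes"
  - John Smith:
      institute: [lca]
institute:
  - ubc: University of British Columbia
  - lca: Laboratory Corporation of America
bibliography: Extras/mybibfile.bib
csl: Extras/clinical-mass-spectrometry.csl
output:
  bookdown::word_document2:
    reference_docx: Extras/template.docx
    pandoc_args:
      - --lua-filter=Extras/scholarly-metadata.lua
      - --lua-filter=Extras/author-info-blocks.lua
---
```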

Et voilà! Figure 2 shows that we have something reasonable.

Figure 2: This is so great

Dependencies for LaTeX and the Associated YAML

It goes without saying that you need to install LaTeX. The LaTeX markup language is available here: Mac, Windows. For Linux, just install from the command line with your package manager. Do a full install with all the glorious bloat of all the LaTeX packages. This saves many headaches in the future.

You don’t need the Lua scripts for LaTeX, although you can use them. The issue with LaTeX is that the .tex template Pandoc uses for generating LaTeX files does not support author affiliations, as described in the Pandoc documentation. So what you need to do is modify the Pandoc LaTeX template. To get your current working copy of the Pandoc LaTeX template, open up a terminal (Mac/Linux) and type:
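```bash
pandoc -D latex > default.latex
```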

This will push the contents to a file. Move the file to the “Extras” folder discussed above. If that seems difficult, you can also download it here. Now you have to edit it. Open it up in a text editor and find the section that reads:
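```latex
% (approximately; the exact markup varies a little by Pandoc version)
$if(author)$
\author{$for(author)$$author$$sep$ \and $endfor$}
$endif$
```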

Replace it with the following code, which invokes the LaTeX authblk package:
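```latex
% a sketch using authblk; assumes each YAML author entry carries
% name and affiliation fields, as in the header below
\usepackage{authblk}
$for(author)$
\author{$author.name$}
\affil{$author.affiliation$}
$endfor$
```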

Then make your YAML header look like this:
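```yaml
---
# names and affiliations are placeholders
title: "A Reproducible Manuscript"
author:
  - name: Jane Doe
    affiliation: University of British Columbia
  - name: John Smith
    affiliation: Laboratory Corporation of America
bibliography: Extras/mybibfile.bib
csl: Extras/clinical-mass-spectrometry.csl
output:
  bookdown::pdf_document2:
    keep_tex: true
    template: Extras/default.latex
---
```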

And as you can see in figure 3, you get a correctly formatted list of authors.


Figure 3: This is also great.

Cross Reference of a Table

Of course, tables can be cross referenced in the same manner as figures. Here is a cross reference to table 1 using the code \@ref(tab:mytable).
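The table itself comes from a chunk labelled mytable so that the reference resolves; a sketch using broom and knitr::kable (the model shown is the classic mtcars example):

```r
# inside a chunk labelled `mytable` so that \@ref(tab:mytable) resolves
library(broom)
library(knitr)
fit <- lm(mpg ~ hp + cyl, data = mtcars)
kable(tidy(fit), digits = 3, caption = "A short table")
```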

Table 1: A short table

| term        | estimate | std.error | statistic | p.value |
|:------------|---------:|----------:|----------:|--------:|
| (Intercept) |   36.908 |     2.191 |    16.847 |   0.000 |
| hp          |   -0.019 |     0.015 |    -1.275 |   0.213 |
| cyl         |   -2.265 |     0.576 |    -3.933 |   0.000 |

This Template Also Takes Care of Reference Abbreviation

As usual, you can make a citation with the code [@bibtexname], where bibtexname is the article's abbreviated handle in your BibTeX database. Here is a great resource on the bookdown package [1] and reproducible research [2], and here are references where the journal title is longer [3,4]. The references in your document (and shown below) will have appropriate abbreviations based on the .json abbreviations database I have provided. In this case, I have chosen the .csl file for Clinical Mass Spectrometry, 'cause MSACL.

Other Ways to Skin the YAML Cat

I came across some other ways to deal with this that I did not like as much, but they are simpler. Here is one using a footnote.
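```yaml
# the affiliation rides along as an inline footnote on the author's name
author: "Jane Doe^[University of British Columbia]"
```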

And you can also misuse the date variable:
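```yaml
# abusing the date field to display the affiliation under the author
author: "Jane Doe"
date: "University of British Columbia"
```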

Conclusion

This concludes my long personal struggle to get a completely reproducible .docx manuscript generated by RMarkdown and Pandoc. Here is the output for PDF and Word.

Parting Thought

Let us not become weary in doing good, for at the proper time we will reap a harvest if we do not give up.

Galatians 6:9

References

[1] Y. Xie, J.J. Allaire, G. Grolemund, R markdown: The definitive guide, Chapman; Hall/CRC, 2018. https://bookdown.org/yihui/bookdown.

[2] R.D. Peng, Reproducible research in computational science, Science. 334 (2011) 1226–1227.

[3] G. Eisenhofer, C. Durán, T. Chavakis, C.V. Cannistraci, Steroid metabolomics: Machine learning and multidimensional diagnostics for adrenal cortical tumors, hyperplasias, and related disorders, Curr. Opin. Endocr. Metab. Res. 8 (2019) 40–49. https://doi.org/10.1016/j.coemr.2019.07.002.

[4] F.B. Vicente, D.C. Lin, S. Haymond, Automation of chromatographic peak review and order to result data transfer in a clinical mass spectrometry laboratory, Clin. Chim. Acta. 498 (2019) 84–89. https://doi.org/10.1016/j.cca.2019.08.004.

Non-Linear Regression: Application to Monoclonal Peak Integration in Serum Protein Electrophoresis

Background

At the AACC meeting recently, there was an enthusiastic discussion of standardization of reporting for serum protein electrophoresis (SPEP) presented by a working group headed up by Dr. Chris McCudden and Dr. Ron Booth, both of the University of Ottawa. One of the discussions pertained to how monoclonal bands, especially small ones, should be integrated. While many use the default manual vertical gating or “drop” method offered by Sebia's Phoresis software, Dr. David Keren was discussing the value of tangent skimming as a more repeatable and effective means of monoclonal protein quantitation. He was also discussing some biochemical approaches distinguishing monoclonal proteins from the background gamma proteins.

The drop method is essentially an eyeball approach to deciding where the peak starts and ends; the peak is represented by the vertical lines and the enclosed shaded area.

[Figure: drop-method integration, with vertical gates and the enclosed shaded area]

The tangent skimming approach is easier to make reproducible. In the mass spectrometry world it is a well-developed approach with a long history and multiple algorithms in use. This is apparently the book. However, when tangent skimming is employed in SPEP, unless I am mistaken, it seems to be done by eye. The integration would look like this:

[Figure: tangent-skimmed integration of the monoclonal peak]

During the discussion it was pointed out that peak deconvolution of the monoclonal protein from the background gamma might be preferable to either of the two described procedures. By this I mean integration as follows:

[Figure: deconvolution of the monoclonal peak from the gamma background]

There was discussion that this procedure is challenging for a number of reasons. Further, it should be noted that a deconvolution approach is only likely to add clinical value when the concentration of the monoclonal protein is low enough that manual integration shows poor repeatability, say, below 5 g/L (0.5 g/dL).

Easy Peaks

Fitting samples with larger monoclonal peaks is fairly easy. Fitting tends to converge nicely and produce something meaningful. For example, using the approach I am about to show below, an electropherogram like this:

[Figure: an electropherogram with a large monoclonal peak]

with a gamma region looking like this:

[Figure: the corresponding gamma region]

can be deconvoluted with straightforward non-linear regression (and no baseline subtraction) to yield this:

[Figure: two-Gaussian deconvolution of the gamma region]

and the area of the green monoclonal peak is found to be 5.3%.

More Difficult Peaks

What is more challenging is the problem of small monoclonals buried in normal \(\gamma\)-globulins. These could be difficult to integrate using a tangent skimming approach, particularly without image magnification. For the remainder of this post we will use a gel with a small monoclonal in the fast gamma region shown at the arrow.

[Figure: gel with a small monoclonal in the fast gamma region, marked with an arrow]

Getting the Data

EP data can be extracted from the PDF output from any electrophoresis software. This is not complicated and can be accomplished with pdf2svg or Inkscape and some Linux bash scripting. I'm sure we can get it straight from the instrument but it is not obvious to me how to do this. One could also rescan a gel and use ImageJ to produce a densitometry scan which is discussed in the ImageJ documentation and on YouTube. ImageJ also has a macro language for situations where the same kind of processing is done repeatedly.

Smoothing

The data comprise 10284 (x,y) pairs. But if you zoom in and look carefully, you find that the scan is a series of staircases.

[Figure: magnified view of the staircase pattern in the raw scan]

It turns out that this jaggedness significantly impairs attempts to numerically identify the peaks and valleys. So I smoothed the data a little, using the handy rle() function to identify the midpoint of each step. This keeps the total area as close to its original value as possible, though this probably does not matter too much.
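A sketch of that smoothing, assuming the scan lives in a dataframe ep.data with columns x and y (the names are mine):

```r
# run-length encode y to find the steps, then keep the midpoint of each
runs   <- rle(ep.data$y)
ends   <- cumsum(runs$lengths)
starts <- ends - runs$lengths + 1
mids   <- floor((starts + ends) / 2)
ep.smooth <- ep.data[mids, ]
```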

[Figure: smoothed scan overlaid on the original staircase data]

Now that we are satisfied that the new data is OK, I will overwrite the original dataframe.
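```r
# continuing the sketch above
ep.data <- ep.smooth
```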

Transformation

The units on the x and y-axes are arbitrary and come from page coordinates of the PDF. We can normalize the scan by making the x-axis go from 0 to 1 and by making the total area 1.
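A sketch, using a simple rectangle-rule area on the dense, evenly spaced grid:

```r
# map x onto [0, 1]
ep.data$x <- (ep.data$x - min(ep.data$x)) / (max(ep.data$x) - min(ep.data$x))
# rescale y so that the total area is 1
dx <- mean(diff(ep.data$x))
ep.data$y <- ep.data$y / sum(ep.data$y * dx)
```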

[Figure: the normalized electropherogram]

Find Extrema

Using the findPeaks function from the quantmod package we can find the minima and maxima:
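```r
library(quantmod)
# findPeaks()/findValleys() flag the point *after* each extremum,
# hence the offset of -1
maxima <- findPeaks(ep.data$y) - 1
minima <- findValleys(ep.data$y) - 1
```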

[Figure: local minima and maxima identified on the normalized scan]

Not surprisingly, there are some extraneous local extrema that we do not want. I simply manually removed them. Generally, this kind of thing could be tackled with more smoothing of the data prior to analysis.

Fitting

Now it's possible with the nls() function to fit the entire SPEP with a series of Gaussian curves simultaneously. It works just fine (provided you have decent initial estimates of \(\mu_i\) and \(\sigma_i\)) but there is no particular clinical value to fitting the albumin, \(\alpha_1\), \(\alpha_2\), \(\beta_1\) and \(\beta_2\) domains with Gaussians. What is of interest is separately quantifying the two peaks in \(\gamma\) with two separate Gaussians so let's isolate the \(\gamma\) region based on the location of the minimum between \(\beta_2\) and \(\gamma\).

Isolate the \(\gamma\) Region
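A sketch, assuming (after pruning the extraneous extrema) that the last valley found above is the trough between \(\beta_2\) and \(\gamma\):

```r
# everything from the beta-2/gamma trough to the end of the scan
gamma.data <- ep.data[max(minima):nrow(ep.data), ]
```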

[Figure: the isolated gamma region]

Attempt Something that Ultimately Does Not Work

At first I thought I could just throw two normal distributions at this and it would work. However, it does not work well at all, and this kind of not-so-helpful fit turns out to happen a fair bit. I use the nls() function here, which is easy to call. It requires a functional form, which I set to be:

\[y = C_1 \exp\Big(-{\frac{(x-\mu_1)^2}{2\sigma_1^2}}\Big) + C_2 \exp \Big({-\frac{(x-\mu_2)^2}{2\sigma_2^2}}\Big)\]

where \(\mu_1\) is the \(x\) location of the first peak in \(\gamma\) and \(\mu_2\) is the \(x\) location of the second peak in \(\gamma\). The estimates of \(\sigma_1\) and \(\sigma_2\) can be obtained by trying to estimate the full-width-half-maximum (FWHM) of the peaks, which is related to \(\sigma\) by

\[FWHM_i = 2 \sqrt{2\ln2} \times \sigma_i = 2.355 \times \sigma_i\]

I had to first make a little function that returns the respective half-widths at half-maximum and then uses them to estimate the \(FWHM\). Because the peaks are poorly resolved, it also tries to get the smallest possible estimate, returning it as FWHM2.
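A sketch of such a function, assuming the data has columns x and y and that peak.index points at the peak of interest:

```r
fwhm.estimate <- function(data, peak.index) {
  half.max <- data$y[peak.index] / 2
  # nearest point at or below half-maximum on each side of the peak
  left  <- max(which(data$y[1:peak.index] <= half.max))
  right <- min(which(data$y[peak.index:nrow(data)] <= half.max)) + peak.index - 1
  hwhm.left  <- data$x[peak.index] - data$x[left]
  hwhm.right <- data$x[right] - data$x[peak.index]
  c(FWHM  = hwhm.left + hwhm.right,           # usual estimate
    FWHM2 = 2 * min(hwhm.left, hwhm.right))   # smallest plausible estimate
}
```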

The peaks in the \(\gamma\) region were obtained previously:

[Figure: the identified peak in the gamma region]

and from them \(\mu_1\) is determined to be 0.7. We have to guess where the second peak is, which is at about \(x=0.75\) and has an index of 252 in the gamma.data dataframe.

[Figure: guessed location of the second peak near x = 0.75]

Now we can find the estimates of the standard deviations:
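```r
# hypothetical indices of the two gamma peaks; the second was guessed
# at index 252 above
peak1.index <- which.max(gamma.data$y)
peak2.index <- 252
sigma1.start <- as.numeric(fwhm.estimate(gamma.data, peak1.index)["FWHM2"]) / 2.355
sigma2.start <- as.numeric(fwhm.estimate(gamma.data, peak2.index)["FWHM2"]) / 2.355
```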

The estimates of \(\sigma_1\) and \(\sigma_2\) are now obtained. The estimates of \(C_1\) and \(C_2\) are just the peak heights.

We can now use nls() to determine the fit.
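A sketch of the call, with the start values assembled from the estimates above:

```r
fit <- nls(y ~ C1 * exp(-(x - mu1)^2 / (2 * sigma1^2)) +
               C2 * exp(-(x - mu2)^2 / (2 * sigma2^2)),
           data  = gamma.data,
           start = list(C1 = gamma.data$y[peak1.index],  # amplitudes: peak heights
                        C2 = gamma.data$y[peak2.index],
                        mu1 = 0.70, mu2 = 0.75,          # peak locations from above
                        sigma1 = sigma1.start,
                        sigma2 = sigma2.start))
```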

Determining the fitted values of our unknown coefficients:
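```r
coefs <- coef(fit)
coefs
```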

And now we can plot the fitted results against the original results:
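```r
# helper for a single Gaussian component
gauss <- function(x, C, mu, sigma) C * exp(-(x - mu)^2 / (2 * sigma^2))
plot(y ~ x, data = gamma.data, type = "l")
lines(gamma.data$x, predict(fit), col = "red")   # overall fit
lines(gamma.data$x, gauss(gamma.data$x, coefs["C1"], coefs["mu1"], coefs["sigma1"]),
      col = "blue")                              # gamma background
lines(gamma.data$x, gauss(gamma.data$x, coefs["C2"], coefs["mu2"], coefs["sigma2"]),
      col = "green")                             # monoclonal peak
```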

[Figure: the failed two-Gaussian fit of the gamma region]

And this is garbage. The green curve is supposed to be the monoclonal peak, the blue curve is supposed to be the \(\gamma\) background, and the red curve is their sum, the overall fit. This is a horrible failure.

Subsequently, I tried fixing the locations of \(\mu_1\) and \(\mu_2\) but this also yielded similar nonsensical fitting. So, with a lot of messing around trying different functions like the lognormal distribution, the Bi-Gaussian distribution and the Exponentially Modified Gaussian distribution, and applying various arbitrary weighting functions, and simultaneously fitting the other regions of the SPEP, I concluded that nothing could predictably produce results that represented the clinical reality.

I thought maybe the challenge of obtaining a reasonable fit related to the sloping baseline, so I thought I would try to remove it. I will model the baseline in the most simplistic manner possible: as a sloped line.

Baseline Removal

I will arbitrarily define the tail of the \(\gamma\) region to be those values having \(y \leq 0.02\). Then I will connect the first (x,y) point of the \(\gamma\) region to the start of the tail.
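A sketch of that linear baseline (names continue from the sketches above):

```r
# restrict the search to points after the main gamma peak, then take the
# first point with y <= 0.02 as the start of the tail
after.peak <- which.max(gamma.data$y):nrow(gamma.data)
tail.start <- after.peak[min(which(gamma.data$y[after.peak] <= 0.02))]
x0 <- gamma.data$x[1];          y0 <- gamma.data$y[1]
x1 <- gamma.data$x[tail.start]; y1 <- gamma.data$y[tail.start]
baseline <- y0 + (y1 - y0) / (x1 - x0) * (gamma.data$x - x0)
```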

[Figure: linear baseline drawn under the gamma region]

Now we can define a new dataframe gamma.no.base that has the baseline removed:
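```r
# subtract the line, clipping any small negative residuals to zero
gamma.no.base <- data.frame(x = gamma.data$x,
                            y = pmax(gamma.data$y - baseline, 0))
```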

[Figure: the gamma region before (solid) and after (dashed) baseline subtraction]

The black is the original \(\gamma\) and the dashed curve has the baseline removed. This becomes an easy fit.
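The refit is the same two-Gaussian nls() call pointed at the baseline-subtracted data; a sketch reusing the earlier start values:

```r
fit2   <- update(fit, data = gamma.no.base)
coefs2 <- coef(fit2)
```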

[Figure: two-Gaussian fit after baseline subtraction]

Lo and behold…something that is not completely insane. The green is the monoclonal, the blue is the \(\gamma\) background and the red is their sum, that is, the overall fit. A better fit could now be sought with weighting or with a more flexible distribution shape. In any case, the green peak is now easily determined, since

\[\int_{-\infty}^{\infty} C_1 \exp\Big(-{\frac{(x-\mu_1)^2}{2\sigma_1^2}}\Big)\,dx = \sqrt{2\pi}\,\sigma_1 C_1\]

this peak works out to 2.4% of the total area. Now, of course, this assumes that nothing under the baseline is attributable to the monoclonal peak and all belongs to normal \(\gamma\)-globulins, which is very unlikely to be true. However, the drop and tangent skimming methods also make assumptions about how the area under the curve contributes to the monoclonal protein. The point is to try to do something that will produce consistent results that can be followed over time. Obviously, if you thought there were three peaks in the \(\gamma\)-region, you'd have to set up your model accordingly.
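For the record, a sketch of that calculation, assuming (as above) that the second fitted component is the monoclonal:

```r
# the scan was normalized to unit total area, so this is already a fraction
monoclonal.area <- sqrt(2 * pi) * coefs2["sigma2"] * coefs2["C2"]
```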

All about that Base(line)

There are obviously better ways to model the baseline, because this approach of a linear baseline is not going to work in situations where, for example, there is a small monoclonal in fast \(\gamma\) dwarfed by normal \(\gamma\)-globulins. That is, like this:

[Figure: a small monoclonal in fast gamma dwarfed by normal gamma-globulins]

Something curvilinear or piecewise continuous and flexible enough for more circumstances is generally required.

There is also no guarantee that baseline removal, whatever the approach, is going to be a good solution in other circumstances. Given the diversity of monoclonal peak locations, sizes and shapes, I suspect one would need a few different approaches for different circumstances.

Conclusions

  • The data in the PDFs generated by EP software are processed (probably with splining or similar), followed by the stair-stepping seen above. It would be better to work with raw data from the scanner.

  • Integrating monoclonal peaks under the \(\gamma\) baseline (or \(\beta\)) is unlikely to be a one-size-fits-all procedure and may require a number of strategies to get meaningful results.

    • Baseline removal might be helpful at times.
  • Peak integration will require human adjudication.

  • While most monoclonal peaks show little skewing, better fitting is likely to be obtained with distributions that afford some skewing.

  • MASSFIX may soon make this entire discussion irrelevant.

Parting Thought

On the matter of fitting

In bringing many sons and daughters to glory, it was fitting that God, for whom and through whom everything exists, should make the pioneer of their salvation perfect through what he suffered.

Heb 2:10

Count The Mondays in a Time Interval with Lubridate

Recently, while working on quantifying the inpatient workload volume of routine tests as a function of the day of the week, I needed to be able to count the number of Mondays, Tuesdays, etc. in a time interval so I could calculate the average volume for each weekday in that interval.

The lubridate package makes this a very easy thing to do. Suppose the first date in your series is 21-May-2015 and the last date is 19-Aug-2015.
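Load lubridate and define the dates (the variable names are mine):

```r
library(lubridate)
start.date <- dmy("21-05-2015")
end.date   <- dmy("19-08-2015")
```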

Now build a sequence between the dates:
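```r
all.dates <- seq(start.date, end.date, by = "day")
```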

The function wday() tells you which weekday a date corresponds to, with Sunday being 1, Monday being 2, etc.
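For example:

```r
wday(start.date)
## [1] 5
```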

This means that 2015-05-21 was a Thursday. To get the abbreviation, you can enter:
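```r
wday(start.date, label = TRUE)
## [1] Thu
## Levels: Sun < Mon < Tue < Wed < Thu < Fri < Sat
```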

and to get the full name of the day:
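```r
wday(start.date, label = TRUE, abbr = FALSE)
## [1] Thursday
## Levels: Sunday < Monday < Tuesday < Wednesday < Thursday < Friday < Saturday
```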

Leap years are accounted for:
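```r
# lubridate knows 2016 is a leap year, and 29-Feb-2016 lands on a Monday
leap_year(2016)
## [1] TRUE
wday(dmy("29-02-2016"), label = TRUE, abbr = FALSE)
## [1] Monday
## Levels: Sunday < Monday < Tuesday < Wednesday < Thursday < Friday < Saturday
```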

So, we can use this as follows to find the Mondays:
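```r
# Mondays correspond to wday() == 2
mondays <- all.dates[wday(all.dates) == 2]
head(mondays)
## [1] "2015-05-25" "2015-06-01" "2015-06-08" "2015-06-15" "2015-06-22" "2015-06-29"
```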

So the whole code to count them is:
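```r
sum(wday(seq(dmy("21-05-2015"), dmy("19-08-2015"), by = "day")) == 2)
## [1] 13
```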

I was born on August 04, 1971. This was a Wednesday. How many Wednesdays since I was born?
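```r
# Wednesdays correspond to wday() == 4; at the time of writing this
# returned 2313
birthday <- dmy("04-08-1971")
sum(wday(seq(birthday, today(), by = "day")) == 4)
```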

Which means I am 2312 weeks old today! Hurray. This is not a typo: the time interval is flanked by Wednesdays, so there is one more Wednesday than the number of weeks in the interval. I thank my first-year calculus prof for beating this into me with reference to Simpson's Rule numerical integration.

Hope that comes in handy.

-Dan

Teach us to number our days, that we may gain a heart of wisdom. Psalm 90:12.