Reproducible Research: Write your Clinical Chemistry paper using R Markdown

Abstract
Background: This blog post is going to show you how to write a reproducible article in the field of clinical chemistry using R Markdown. The only thing that will change from journal to journal will be the reference formatting and perhaps section numbering. The source code itself will be provided so that you can use it as a template.

Methods: The paper will use R, R Markdown, bookdown and pandoc. The references will be taken care of using BibTeX and reference formatting will be managed with CSL files from the Zotero style repository.

Results: The result will be a manuscript that anyone can reproduce.

Conclusions: R Markdown makes reproducible research through literate programming pretty easy.

1 Background

Last week at the MSACL conference Dr. Keith Baggerly from MD Anderson Cancer Centre’s Bioinformatics and Computational Biology Group spoke about the importance of reproducible research using the Duke University ovarian cancer biomarker scandal as a backdrop. The talk…was…incredible and illustrated how easy it is to introduce catastrophic errors into your research papers through the use of GUI analytical tools. The now retracted article that Baggerly dismantled is here. I urge everyone in our field to watch similar talks from Keith discussing biomarker analysis in mass spectrometric proteomic data and microarray data. Shannon Haymond and I were then discussing how to make a submission in the field of clinical chemistry that is reproducible. While this article will not discuss the basics of R and R Markdown, it will serve as a guide for those who already know a little about these tools, providing a working YAML header and showing how to get the citations and cross-references right.

2 Overhead

2.1 YAML

R Markdown articles require a YAML header to instruct R Markdown how to process your article. This is the YAML code that worked for me. I am sure there are other ways to do this:
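
A header along these lines worked for me with bookdown; the title and author fields are placeholders, and the commented-out formats are the ones you can switch to:

---
title: "Reproducible Research: Write your Clinical Chemistry paper using R Markdown"
author: "A. N. Author"
output:
  bookdown::word_document2:
    toc: false
  # bookdown::pdf_document2:
  #   toc: false
  # bookdown::html_document2:
  #   toc: false
bibliography: bibliography.bib
csl: clinical-chemistry.csl
link-citations: true
---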

Indenting and spacing really matter a lot in YAML so don’t mess with them. You can generate PDF, MS Word or HTML output as needed by uncommenting and commenting out the output type as appropriate in the YAML above.

2.2 CSL Files

CSL files take care of citation formatting for you. Depending on which journal you are submitting to, you will need a different .csl file. They exist for every conceivable journal. In the world of non-reproducible reports, this process is taken care of by reference managers in GUI word processors but since GUI word processors do not produce reproducible research, we must break up with them.

From the YAML, you will see that you need a file called “clinical-chemistry.csl” which I downloaded from here. Put this file in the same folder as your R Markdown file. The Clinical Chemistry .csl depends on the .csl file of the American Association for Cancer Research but it will be downloaded for you on the fly. If you need a different reference format, search the CSL GitHub repository for the appropriate file.

2.3 BibTeX

Reference management in R Markdown is taken care of by BibTeX. You can see from the YAML that we need a bibliography text file called “bibliography.bib”. You can name it whatever you like but you will need to change the YAML accordingly. In any case, any citation you intend to make will have to be in the .bib file. I am going to cite Shannon Haymond because this article was her idea and I will toss in a couple of other references so you can see that they get cited in order as we would like.

Below is my bibliography.bib file. I put it in the same folder as my R Markdown file. You make a .bib file using a text file editor (or RStudio) by cutting and pasting the BibTeX citations from Google Scholar.
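
A sketch of one entry, keyed shannon2017 to match the citation used below; the fields are transcribed from reference 1 of this post, and the other entries follow the same pattern:

@article{shannon2017,
  author  = {Brooks, E. R. and Haymond, S. and Rademaker, A. and Pierce, C. and Helenowski, I. and Passman, R. and others},
  title   = {Contribution of symmetric dimethylarginine to {GFR} decline in pediatric chronic kidney disease},
  journal = {Pediatric Nephrology},
  year    = {2017},
  pages   = {1--8}
}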

2.4 Journal Abbreviations

Now, the fussiest thing I had to do was get the references abbreviating properly. We need an abbreviation database. Fortunately, I could download the abbreviation database from the Web of Science as a .csv file and then convert it to a JSON file. This was Stephen’s idea. I had to deal with a couple of badly behaving characters from some journal titles. This script, if embedded in your document, will download the .csv for you and then make the abbreviation database for you. That way your citations will say, “J Clin Pathol” and not “Journal of Clinical Pathology” etc.
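
A sketch of that kind of script; the file names and CSV column names are placeholders, and the nested default/container-title layout is the citation-abbreviations format that pandoc-citeproc accepts (check the documentation for your pandoc version):

library(jsonlite)
# download.file() the Web of Science abbreviation list here (URL omitted)
abbrev <- read.csv("journal_abbreviations.csv", stringsAsFactors = FALSE)
# tidy up a couple of badly behaved characters in the journal titles
abbrev$full.title <- iconv(abbrev$full.title, to = "ASCII//TRANSLIT")
# named list: full journal title -> abbreviation
lookup <- setNames(as.list(abbrev$abbreviation), abbrev$full.title)
abbrev.json <- toJSON(list(default = list(`container-title` = lookup)), auto_unbox = TRUE)
writeLines(abbrev.json, "abbreviations.json")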

2.5 Citation Management

Now we can proceed to cite articles from our .bib file freely inserting syntax like this: [@shannon2017]. Shannon wrote this interesting article on symmetric dimethylarginine (1), Stephen wrote about wellness initiatives with Dr. Diamandis (2) and Dan wrote a paper about PTH when he was a resident (3). That makes for a total of 3 citations in this manuscript. (1–3)

3 Reporting Your Results

3.1 Figures

You can embed figures in R Markdown as local images or hyperlinks as follows:

![Grumpy Cat does not care about reproducibility](grumpy.jpg)


If you need cross-references to your figures that renumber automatically when you insert another figure, you can get them by inserting your figure with an R code-chunk and giving the code-chunk a name.
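
A sketch of such a chunk; the image file name is a placeholder and the chunk name ancient-aliens is the one referenced below:

```{r ancient-aliens, fig.cap = "This is the ancient aliens guy.", echo = FALSE}
knitr::include_graphics("ancient-aliens.jpg")
```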



Figure 3.1: This is the ancient aliens guy.

Then you can reference your code chunk using syntax like this: See figure \@ref(fig:ancient-aliens)

and you will automatically create appropriately cross-referenced figures that get automatically numbered like this: See figure 3.1. Of course, the hallmark of reproducibility is to embed the R code that generates the figure right into the document. See for example figure 3.2
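
For example, a chunk something like this (the plot itself is arbitrary; only the caption comes from the post):

```{r repro-fig, fig.cap = "This is a reproducible figure", echo = FALSE}
x <- seq(0, 10, 0.1)
plot(x, sin(x), type = "l", xlab = "x", ylab = "sin(x)")
```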

 


Figure 3.2: This is a reproducible figure

3.2 Inline Calculations

When you are reporting your amazing results you can have inline code calculations with syntax that looks like this: `r round(median(x),1)` which would result in the median value of \(x\) being reported as 43.9 mmol/L.

3.3 Tables

Tables are not a problem either and can be made with the kable() function of the knitr package or with the xtable package. Tables can be cross-referenced analogously to figures. See table 3.1
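
A sketch that generates table 3.1 with kable() (the chunk name is arbitrary):

```{r great-table, echo = FALSE}
x <- 1:5
knitr::kable(data.frame(First = x, Second = x + 1, Third = x * (x + 1)),
             caption = "This is a great table")
```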

Table 3.1: This is a great table
First  Second  Third
    1       2      2
    2       3      6
    3       4     12
    4       5     20
    5       6     30

3.4 Math

Math works pretty magically using \(\LaTeX\) syntax. For example inline math can be done like so $\sin^2x + \cos^2x = 1$. This will result in: \(\sin^2x + \cos^2x = 1\). And math can also be done as a code block like so:

Gauss’ Law says: \[
\oint_S E_n \, dA = \frac{1}{\varepsilon_0} Q_{\text{inside}}
\]

Equations can be cross-referenced just like tables and figures.

4 Conclusion

I hope this makes writing a reproducible paper easier for you. A minimal template to produce output in PDF is here. And the PDF output itself is here. You’ll need \(\LaTeX\) installed of course.

 

Parting Thought

You can cite books too and get Greek letters too:

“Very truly I tell you,” Jesus answered, “before Abraham was born, \(\varepsilon\gamma\omega\) \(\varepsilon\iota\mu\iota\) (I Am)!” (4)

References

1. Brooks ER, Haymond S, Rademaker A, Pierce C, Helenowski I, Passman R, et al. Contribution of symmetric dimethylarginine to gfr decline in pediatric chronic kidney disease. Pediatr Nephrol. Springer; 2017;1–8.

2. Li M, Diamandis EP, Paneth N, Yeo K-TJ, Vogt H, Master SR. Wellness initiatives: Benefits and limitations. Clin Chem. Clinical Chemistry; 2017;63:1063–8.

3. Holmes DT, Levin A, Forer B, Rosenberg F. Preanalytical influences on DPC IMMULITE 2000 intact PTH assays of plasma and serum from dialysis patients. Clin Chem. Clinical Chemistry; 2005;51:915–7.

4. John the A, Chist J, Spirit H, God F. The Gospel According to John, 8:58. 1 Heavenly Way: Hosannah Press; 80AD.


 

Count The Mondays in a Time Interval with Lubridate

Recently, while working on quantifying the inpatient workload volume of routine tests as a function of the day of the week, I needed to be able to count the number of Mondays, Tuesdays, etc in a time–interval so I could calculate the average volume for each weekday in a time–interval.

The lubridate package makes this a very easy thing to do. Suppose the first date in your series is 21-May-2015 and the last date is 19-Aug-2015.
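
A minimal sketch; lubridate's ymd() parses the dates and the variable names are arbitrary:

library(lubridate)
start.date <- ymd("2015-05-21")
end.date <- ymd("2015-08-19")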

Now build a sequence between the dates:
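
For example:

day.seq <- seq(start.date, end.date, by = "1 day")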

The function wday() tells you which weekday a date corresponds to with Sunday being 1, Monday being 2 etc.
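
For example:

wday(day.seq[1])
## [1] 5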

This means that 2015-05-21 was a Thursday. To get the abbreviation, you can enter:
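
Something like:

wday(day.seq[1], label = TRUE)   # Thu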

and to get the full name of the day:
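
Perhaps:

wday(day.seq[1], label = TRUE, abbr = FALSE)   # Thursday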

Leap years are accounted for:
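
For example, a sequence spanning the end of February 2016 picks up the leap day:

seq(ymd("2016-02-27"), ymd("2016-03-01"), by = "1 day")
## [1] "2016-02-27" "2016-02-28" "2016-02-29" "2016-03-01"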

So, we can use this as follows to find the Mondays:
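
Something like:

day.seq[wday(day.seq) == 2]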

So the whole code to count them is:
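
A sketch, using the fact that Monday is day 2:

sum(wday(day.seq) == 2)
## [1] 13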

I was born on August 04, 1971. This was a Wednesday. How many Wednesdays since I was born?
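
A sketch using today(); the count it returns now will of course be larger than it was when this was written:

birth.date <- ymd("1971-08-04")
my.days <- seq(birth.date, today(), by = "1 day")
sum(wday(my.days) == 4)   # Wednesday is day 4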

Which means that today I am 2312 weeks old! Hurray. This is not a typo. The time interval is flanked by Wednesdays so there is one more Wednesday than the number of weeks in the interval. I thank my first–year calculus prof for beating this into me with reference to Simpson's Rule numerical integration.

Hope that comes in handy.

-Dan

Teach us to number our days, that we may gain a heart of wisdom. Psalm 90:12.

NA NA NA NA, Hey Hey Hey, Goodbye

Removing NA’s from a Data Frame in R

The Problem

Suppose you are doing a method comparison for which some results are above or below the linear range of your assay(s). Generally, these will appear in your spreadsheet (gasp!) program as \(< x\) or \(> y\) or, in the case of our mass spectrometer, “No Peak”. When you read these data into R using read.csv(), R will turn them into factors, which I personally find super–annoying and which inspired this conference badge (see bottom right) as I learned from University of British Columbia prof Jenny Bryan.

For this reason, when we read the data in, it is convenient to choose the option stringsAsFactors = FALSE. In doing so, the data will be treated as strings and be in the character class. But for regression comparison purposes, we need to make the data numeric and all of the \(< x\) and \(> y\) results will be converted to NA. In this post, we want to address a few questions that follow:

  • How do we find all the NA results?
  • How can we replace them with a numeric (like 0)?
  • How can we rid ourselves of rows containing NA?

Finding NA's

Let's read in the data which comes from a method comparison of serum aldosterone between our laboratory and Russ Grant's laboratory (LabCorp) published here. I'll read in the data with stringsAsFactors = FALSE. These are aldosterone results in pmol/L. To convert to ng/dL, divide by 27.7.

You can see the problem immediately: our data (“Aldo.Us”) is a character vector. This is not good for regression. Why did this happen? We can find out:
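
One way to see what is choking the conversion (a sketch):

unique(aldo$Aldo.Us[is.na(suppressWarnings(as.numeric(aldo$Aldo.Us)))])
# shows the offending text entries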

Ahhh…it's the dreaded “No Peak”. This is what the mass spectrometer has put in its data file. So, let's force everything to numeric:
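
For example:

aldo$Aldo.Us <- as.numeric(aldo$Aldo.Us)
## Warning: NAs introduced by coercion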

We see the warnings about the introduction of NAs. And we get:
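
For example:

summary(aldo$Aldo.Us)   # the summary now reports three NA's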

Now we have 3 NAs. We want to find them and get rid of them. From the screen we could figure out where the NAs were and manually replace them. This is OK on such a small data set but when you start dealing with data sets having thousands or millions of rows, approaches like this are impractical. So, let's do it right.

If we naively try to use an equality we find out nothing.
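
For example:

head(aldo$Aldo.Us == NA)
## [1] NA NA NA NA NA NA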

Hunh? Whasgoinon?

This occurs because NA means “unknown”. Think about it this way. If one patient's result is NA and another patient's result is NA, then are the results equal? No, they are not (necessarily) equal, they are both unknown and so the comparison should be unknown also. This is why we do not get a result of TRUE when we ask the following question:
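
Ask R directly:

NA == NA
## [1] NA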

So, when we ask R if unknown #1 is equal to unknown #2, it responds with “I dunno.”, or “NA”. So if we want to find the NAs, we should inquire as follows:
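
Something like:

is.na(aldo$Aldo.Us)   # a logical vector, TRUE wherever the result is NA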

or, for less verbose output:
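
For example:

which(is.na(aldo$Aldo.Us))
## [1] 29 46 76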

Hey Hey! Ho Ho! Those NAs have got to go!

Now we know where they are, in rows 29, 46, and 76. We can replace them with 0, which is OK but may pose problems if we use weighted regression (i.e. if we have a 0 in the x-data and we weight data by 1/x). Alternatively, we can delete the rows entirely.

To replace them with 0, we can write:
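
A sketch:

aldo$Aldo.Us[is.na(aldo$Aldo.Us)] <- 0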

and this is equivalent:
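
One equivalent formulation (a guess at what was shown; replace() does the same job):

aldo$Aldo.Us <- replace(aldo$Aldo.Us, is.na(aldo$Aldo.Us), 0)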

To remove the whole corresponding row, we can write:
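
Starting again from the data frame with the NAs still present, a sketch:

aldo.trimmed <- aldo[!is.na(aldo$Aldo.Us), ]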

or:
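
Equivalently:

aldo.trimmed <- aldo[-which(is.na(aldo$Aldo.Us)), ]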

Complete Cases

What if there were NA's hiding all over the place in multiple columns and we wanted to banish any row containing one or more NA? In this case, the complete.cases() function is one way to go:
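
For example:

complete.cases(aldo)   # TRUE for complete rows, FALSE for rows 29, 46 and 76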

This function shows us which rows have no NAs (the ones with TRUE as the result) and which rows have NAs (the three with FALSE). We can banish all rows containing any NAs generally as follows:
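
A sketch:

aldo.complete <- aldo[complete.cases(aldo), ]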

This data set now has 93 rows:
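
For example:

nrow(aldo.complete)
## [1] 93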

You could peruse the excluded data like this:
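
Something like:

aldo[!complete.cases(aldo), ]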

na.omit()

Another way to remove incomplete cases is the na.omit() function (as Dr. Shannon Haymond pointed out to me). So this works too:
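
A sketch:

aldo.complete <- na.omit(aldo)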

Row Numbers are Actually Names

In all of these approaches, you will notice something peculiar. Even though we have excluded the three rows, the row numbering still appears to imply that there are 96 rows:
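
For example:

tail(aldo.complete)   # the row names still run up to 96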

but if you check the dimensions, there are 93 rows:
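
For example:

dim(aldo.complete)   # 93 rows; the number of columns depends on the file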

Why? This is because the row numbers are not row numbers; they are numerical row names. When you exclude a row, none of the other row names change. This was bewildering to me in the beginning. I thought my exclusions had failed somehow.

Now we can move on

Once this is done, you can go on and do your regression, which, in this case, looks like this.
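
A sketch of that kind of comparison plot; the name of the comparator column (Aldo.LabCorp) is an assumption:

plot(Aldo.LabCorp ~ Aldo.Us, data = aldo.complete,
     xlab = "Our method (pmol/L)", ylab = "LabCorp (pmol/L)")
abline(lm(Aldo.LabCorp ~ Aldo.Us, data = aldo.complete))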

[Figure: Comparison of Serum Aldosterone]

Finally, if you are ever wondering what fraction of your data is comprised of NA, rather than the absolute number, you can do this as follows:
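
For a single column, something like:

sum(is.na(aldo$Aldo.Us))/length(aldo$Aldo.Us)   # 3 NAs out of 96 rows gives 0.03125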

If you applied this to the whole dataframe, you get the fraction of NA's in the whole dataframe (again–thank you Shannon):
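
A sketch:

mean(is.na(aldo))   # is.na() on a data frame gives a logical matrix; its mean is the overall fraction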

Final Thought:

Ecclesiastes 1:9.

-Dan

Unit Converter

Introduction

Dan continues to crank out book chapter-length posts, which probably means that I should jump in before getting further behind…so here we go.

In the next few posts, I’d like to cover some work to help you to process aggregated proficiency testing (PT) data. Interpreting PT data from groups such as the College of American Pathologists (CAP) is, of course, a fundamental task for lab management. Comparing your lab’s results to peer group data from other users of the same instrumentation helps to ensure that your patients receive consistent results, and it provides at least a crude measure to ensure that your instrument performance is “in the ballpark”. Of course, many assays show significant differences between instrument models and manufacturers that can lead to results that are not comparable as a patient moves from institution to institution (or when your own lab changes instruments!). There are a number of standardization and harmonization initiatives underway (see http://harmonization.net, for example) to address this, and understanding which assays show significant bias compared to benchmark studies or national guidelines is a critical task for laboratorians. All of this is further complicated by the fact that sample matrix can significantly affect assay results, and sample commutability is one important reason why we can’t just take, say, CAP PT survey results (not counting the accuracy-based surveys) and determine which assays aren’t harmonized.

However.

With all of those caveats, it can still be useful to look through PT data in a systematic way to compare instruments. Ideally, we’d like to have everything in an R-friendly format that would allow us to ask systematic questions about data (things like “for how many assays does instrument X differ from instrument Y by >30% using PT material?”, or “how many PT materials give good concordance across all manufacturers?”). If we have good, commutable, accuracy-based testing materials, we can do even better. The first task in all of this fun, however, is getting the data into a format that R is happy with; no one I know likes the idea of retyping numbers from paper reports. I’m hoping to talk more about this in a future post, as there are lots of fun R text processing issues lurking here. In the meantime, though, we have a much more modest preliminary task to tackle.

Simple unit conversion

I’m currently staring at a CAP PT booklet. It happens to be D-dimer, but you can pick your own favorite analyte (and PT provider, for that matter). Some of the results are in ng/mL, some are ug/mL, and one is in mg/L. Let’s create an R function that allows us to convert between sets of comparable units. Now, although I know that Dan is in love with SI units (#murica), we’ll start by simply converting molar→molar and gravimetric→gravimetric. Yes, we can add fancy analyte-by-analyte conversion tables in the future…but right now we just want to get things on the same scale. In the process, we’ll cover three useful R command families.

First of all, we should probably decide how we want the final function to look. I’m thinking of something like this:

results <- labunit.convert(2.3, "mg/dL", "g/L")
results
## [1] 0.023

…which converts 2.3 mg/dL to 0.023 g/L. We should also give ourselves bonus points if we can make it work with vectors. For example, we may have this data frame:

mydata
##   Value   Units Target.Units
## 1  2.30    g/dL         mg/L
## 2 47.00 nmol/mL      mmol/dL
## 3  0.19    IU/L        mIU/L

and we would like to be able to use our function like this:

labunit.convert(mydata$Value, mydata$Units, mydata$Target.Units)
## [1] 2.3e+04 4.7e-03 1.9e+02

We should also handle things that are simpler

labunit.convert(0.23, "g", "mg")
## [1] 230

Getting started

Now that we know where we’re going, let’s start by writing a function that just converts between two units and returns the log difference. We’ll call this function convert.one.unit(), and it will take two arguments:

convert.one.unit("mg", "ng")
## [1] 6

Basically, we want to take a character variable (like, say, “dL”) and break it into two pieces: the metric prefix (“d”) and the base unit (“L”). If it isn’t something we recognize, the function should quit and complain (you could also make it return ‘NA’ and just give a warning instead, but we’ll hold off on that for now). We’ll start with a list of things that we want to recognize.

convert.one.unit <- function (unitin, unitout) {
  metric.prefixes <- c("y", "z", "a", "f", "p", "n", "u", "m", "c", "d", "", "da", "h", "k", "M", "G", "T", "P", "E", "Z", "Y")
  metric.logmultipliers <- c(-24, -21, -18, -15, -12, -9, -6, -3, -2, -1, 0, 1, 2, 3, 6, 9, 12, 15, 18, 21, 24)
  units.for.lab <- c("mol", "g", "L", "U", "IU")

Notice that the metric.prefixes variable contains the appropriate one- or two-character prefixes, and metric.logmultipliers has the corresponding log multiplier (for example, metric.prefixes[8] = “m”, and metric.logmultipliers[8] is -3). It’s also worth noting the "" (metric.prefixes[11]), which corresponds to a log multiplier of 0. The fact that "" is a zero-length string instead of a null means that we can search for it in a vector…which will be very handy!

And now for some regular expressions

This is the point where we tackle the first of the three command families that I told you about. If you’re not familiar with “regular expressions” in R or another language (Perl, Python, whatever), this is your entry point into some very useful text searching capabilities. Basically, a regular expression is a way of specifying a search for a matching text pattern, and it’s used with a number of R commands (grep(), grepl(), gsub(), regexpr(), regexec(), etc.). We’ll use gsub() as an example, since it’s one that many people are familiar with. Suppose that I have the character string “This is not a test”, and I want to change it to “This is a test”. I can feed gsub() a pattern that I want to recognize and some text that I want to use to replace the pattern. For example:

my.string <- "This is not a test"
my.altered.string <- gsub("not a ", "", my.string)   # replace "not a " with an empty string, ""
my.altered.string
## [1] "This is test"

That’s fine as far as it goes, but we will drive ourselves crazy if we’re limited to explicit matches. What if, for example, we also want to recognize “This is not…a test”, or “This is not my kind of a test”? We could write three different gsub statements, but that would get old fairly quickly. Instead of exactly matching the text, we’ll use a pattern. A regular expression that will match all three of our input statements is "not.+a ", so we can do the following:

gsub("not.+a ", "", "This is not a test")
## [1] "This is test"
gsub("not.+a ", "", "This is not my kind of a test")
## [1] "This is test"

You can read the regular expression "not.+a " as “match the letters ‘not’, followed by one or more characters of any kind (the ‘.’ matches any single character and the ‘+’ means one or more), followed by ‘a’ and a space”. You can find some very nice tutorials on regular expressions through Google, but for the purposes of this brief lesson I’ll give you a mini-cheat sheet that probably handles 90% of the regular expressions that I have to write:

Special character   Meaning
.                   match any character
\d                  match any digit
\D                  match anything that isn’t a digit
\s                  match white space
\S                  match anything that isn’t white space
\t                  match a tab (less important in R, since you usually already have things in a data frame)
^                   match the beginning of the string (i.e. “^Bob” matches “Bob my uncle” but not “Uncle Bob”)
$                   match the end of the string
*                   match the previous thing when it occurs 0 or more times
+                   match the previous thing when it occurs 1 or more times
?                   match the previous thing when it occurs 0 or 1 times
( .. )              (parentheses) enclose a group of choices or a particular substring in the match
|                   match this OR that (e.g. “(Bob|Pete)” matches “Dr. Bob Smith” or “Dr. Pete Jones” but not “Dr. Sam Jones”)

It’s also important to remember for things like "\d" that R uses backslashes as the escape character…so you actually have to write a double backslash, like this: "\\d". A regular expression to match one or more digits would be "\\d+".

OK, back to work. Our next step is to remove all white space from the unit text (we want "dL" to be handled the same way as " dL" or "dL "), so we’ll add the following lines:

  unitin <- gsub("\\s", "", unitin)
  unitout <- gsub("\\s", "", unitout)

See what we’ve done? We asked gsub() to replace every instance of white space (the regular expression is "\\s") with "". Easy.

Paste, briefly

Next, we want to put together a regular expression that will detect any of our metric.prefixes or units.for.lab. To save typing, we’ll do it with paste(), the second of our three R command families for the day. You probably already know about paste(), but if not, it’s basically the way to join R character variables into one big string. paste("Hi", "there") gives “Hi there” (paste() defaults to joining things with a space), paste("Super", "cali", "fragi", "listic", sep="") changes the separator to "" and gives us “Supercalifragilistic”.  paste0() does the same thing as paste(..., sep=""). The little nuance that it’s worth noting today is that we are going to join together elements from a single vector rather than a bunch of separate variables…so we need to use the collapse = "..." option, where we set collapse to whatever character we want. You remember from the last section that | (OR) lets us put a bunch of alternative matches into our regular expression, so we will join all of the prefixes like this:

  prefix.combo <- paste0(metric.prefixes, collapse = "|")
  prefix.combo
## [1] "y|z|a|f|p|n|u|m|c|d||da|h|k|M|G|T|P|E|Z|Y"

What we’re really after is a regular expression that matches the beginning of the string, followed by 0 or 1 matches to one of the prefixes, followed by a match to one of the units. Soooo…

  prefix.combo <- paste0(metric.prefixes, collapse = "|")
  unit.combo <- paste0(units.for.lab, collapse = "|")
  
  unit.search <- paste0("^(", prefix.combo, ")?(", unit.combo, ")$")

  unit.search
## [1] "^(y|z|a|f|p|n|u|m|c|d||da|h|k|M|G|T|P|E|Z|Y)?(mol|g|L|U|IU)$"

So much nicer than trying to type that by hand. Next we’ll do actual pattern matching using the regexec() command. regexec(), as the documentation so nicely states, returns a list of vectors of substring matches. This is useful, since it means that we’ll get one match for the prefix (in the first set of parentheses of our regular expression), and one match for the units (in the second set of parentheses of our regular expression). I don’t want to belabor the details of this, but if we feed the output of regexec() to the regmatches() command, we can pull out one string for our prefix and another for our units. Since these are returned as a list, we’ll also use unlist() to coerce our results into one nice vector. If the length of that vector is 0, indicating no match, an error is generated.

  match.unit.in <- unlist(regmatches(unitin, regexec(unit.search, unitin)))
  match.unit.out <- unlist(regmatches(unitout, regexec(unit.search, unitout)))
  
  if (length(match.unit.in) == 0) stop(paste0("Can't parse input units (", unitin, ")"))
  if (length(match.unit.out) == 0) stop(paste0("Can't parse output units (", unitout, ")"))

If we were to take a closer look at match.unit.in, we would see that the first entry is the full match, the second entry is the prefix match, and the third entry is the unit match. To make sure that the units agree (i.e. that we’re not trying to convert grams into liters or something similar), we use:

  if (match.unit.in[3] != match.unit.out[3]) stop("Base units don't match")

…and then finish by using the match() command to find the index in the metric.prefixes vector corresponding to the correct prefix (note that if there’s no prefix matched, it matches the "" entry of the vector–very handy). That index allows us to pull out the corresponding log multiplier, and we then return the difference to get a conversion factor. Our final function looks like this (see footnote 1):

convert.one.unit <- function (unitin, unitout) {
  # the prefix codes for the metric system
  metric.prefixes <- c("y", "z", "a", "f", "p", "n", "u", "m", "c", "d", "", "da", "h", "k", "M", "G", "T", "P", "E", "Z", "Y")
  # ...and their corresponding log multipliers
  metric.logmultipliers <- c(-24, -21, -18, -15, -12, -9, -6, -3, -2, -1, 0, 1, 2, 3, 6, 9, 12, 15, 18, 21, 24)
  # The units that we'd like to detect.  I guess we could add distance, but that's not too relevant to most of the analytes that I can think of
  units.for.lab <- c("mol", "g", "L", "U", "IU")

  # remove white space
  unitin <- gsub("\\s", "", unitin)
  unitout <- gsub("\\s", "", unitout)
  
  # build the pieces of our regular expression...
  prefix.combo <- paste0(metric.prefixes, collapse = "|")
  unit.combo <- paste0(units.for.lab, collapse = "|")

  # ...and stitch it all together
  unit.search <- paste0("^(", prefix.combo, ")?(", unit.combo, ")$")

  # identify the matches
  match.unit.in <- unlist(regmatches(unitin, regexec(unit.search, unitin)))
  match.unit.out <- unlist(regmatches(unitout, regexec(unit.search, unitout)))
  
  if (length(match.unit.in) == 0) stop(paste0("Can't parse input units (", unitin, ")"))
  if (length(match.unit.out) == 0) stop(paste0("Can't parse output units (", unitout, ")"))
  
  if (match.unit.in[3] != match.unit.out[3]) stop("Base units don't match")
  
  # get the appropriate log multipliers
  logmult.in <- metric.logmultipliers[match(match.unit.in[2], metric.prefixes)]
  logmult.out <- metric.logmultipliers[match(match.unit.out[2], metric.prefixes)]
  
  # return the appropriate (log) conversion factor
  return(logmult.in - logmult.out)
}


# Try it out
convert.one.unit("mL","L")
## [1] -3

‘Apply’-ing yourself

We’re actually most of the way there now. The final family of commands that we’d like to use is apply(), with various flavors that allow you to repeatedly apply (no surprise) a function to many entries of a variable. Dan mentioned this in his last post. He also mentioned not understanding the bad press that for loops get when they’re small. I completely agree with him, but the issue tends to arise when you’re used to a language like C (yes, I know we’re talking about compiled vs. interpreted in that case), where your loops are blazingly fast. You come to R and try nested loops that run from 1:10000, and then you have to go for coffee. lapply(), mapply(), apply(), etc. have advantages in the R world. Might as well go with the flow on this one.

We’re going to make a convert.multiple.units() function that takes unitsin and unitsout vectors, binds them together as two columns, and then runs apply() to feed them to convert.one.unit(). Because apply() lets us iterate a function over either dimension of a matrix, we can bind the two columns (a vector of original units and a vector of target units) and then iterate over each pair by rows (that’s what the 1 means as the second argument of apply(): it applies the function by row). If the anonymous function syntax throws you off…let us know in the comments, and we’ll cover it some time. For now, just understand that the last part of the line feeds values to the convert.one.unit() function.

convert.multiple.units <- function (unitsin, unitsout) {
  apply(cbind(unitsin, unitsout), 1, function (x) {convert.one.unit(x[1], x[2])})
}

Finally, we’ll go back to our original labunit.convert() function. Our overall plan is to split each unit by recognizing the “/” character using strsplit(). This returns a list of vectors of split groups (i.e. “mg/dL” becomes a list where the first element is a character vector (“mg”, “dL”)). We then make sure that the lengths match (i.e. if the input is “mg/dL” and the output is “g/mL” that’s OK, but if the output is “g” then that’s a problem), obtain all the multipliers, and then add them all up. We add because they’re logs…and actually we mostly subtract, because we’re dividing. For cuteness points, we return 2*x[1] - sum(x), which will accurately calculate not only conversions like mg→g and mg/dL→g/L, but will even do crazy stuff like U/g/L→mU/kg/dL. Don’t ask me why you’d want to do that, but it works. The final multiplier is used to convert the vector of values (good for you if you notice that we didn’t check to make sure that the length of the values vector matched the unitsin vector…but we can always recycle our values that way).

labunit.convert <- function (values, unitsin, unitsout) {
  insep <- strsplit(unitsin, "/")
  outsep <- strsplit(unitsout, "/")

  lengthsin <- sapply(insep, length)
  lengthsout <- sapply(outsep, length)
  
  if (!all(lengthsin == lengthsout)) stop("Input and output units can't be converted")

  multipliers <- mapply(convert.multiple.units, insep, outsep)
  
  final.multiplier <- apply(t(multipliers), 1, function (x) {2*x[1] - sum(x)})
  
  return(values * 10^final.multiplier)
}

OK, enough. Back over to you, Dan. We now have a piece of code that we can use when we start comparing PT data from different instruments. That’s the immediate plan for future posts (see footnote 2), and before long there may even be an entry with nice graphics like those of my Canadian colleague.

-SRM


  1. I received a request to convert “G/L” to “M/mL”, which was interpreted as converting billions/L to millions/mL. This requires changing our convert.one.unit() function to handle a “no units” case. Actually, it’s not as difficult as it sounds; if we just add an empty string (i.e. "") to the end of the units.for.lab vector, our regular expression does the right thing. Your edited line would read units.for.lab <- c("mol", "g", "L", "U", "IU", ""). The reason this works, incidentally, is that there’s no overlap (except "") between the prefixes and the units, so the pattern match doesn’t have a chance to be confused.
  2. Following Dan’s lead, I should point out a major caveat to any such plans is James 4:13-15. Double extra credit if you are interested enough to look it up.

A Closer Look at TAT Time Dependence

The Problem

We want to have a closer look at the time–dependence of turn around times (TATs). In particular, we would like to see if there is a significant trend in TAT over time (improvement or deterioration) and we would like the data to inform us of slowdowns and potentially unexpected problems that occur throughout each week. This should allow us to identify areas of the pre-analytical and/or analytical process (e.g. phlebotomy) that require attention.

My interest in this topic (which in the past seemed entirely banal) came from the frustration of receiving monthly TAT reports showing spaghetti plots produced in Excel. In examining these figures it was entirely unclear to me whether any observed changes in the median (the only measure of central tendency provided) represented stochastic behaviour or a real problem. Ultimately, we want to be able to identify real problems in the preanalytical and analytical process but to do this, we need to visualize the data in a more sophisticated manner.

To do this, we are going to look at order–to–file times for a whole year for a nameless test X. You should be able to modify this approach to the manner in which your data is provided to you.

The real data was a little dirty but I have pre–cleaned it; this will have to be the topic of another post. In short, I purged the cancelled tests, removed duplicate records and limited my analysis to stat tests based on a stat flag that is stored in the laboratory information system (LIS). I won’t discuss this process here. The buffed–up file is named “2014_and_All_Clean.txt”. This happens to be a tab–delimited txt file. For this reason, I used read.delim() rather than read.csv(). These are basically the same function with different defaults for the separator–one uses a comma and the other uses a tab. Please see our first post on TAT to understand how we are using the lubridate function ymd_hm().

Loading the Data
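
A sketch of the load and conversion steps; the file name comes from the post but the column names (ord for the order time, resulted for the result time) are assumptions:

library(lubridate)
myData <- read.delim("2014_and_All_Clean.txt")
myData$ord <- ymd_hm(myData$ord)             # order time
myData$resulted <- ymd_hm(myData$resulted)   # result (file) time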

Now we want to look at a TAT. As in our first post on this topic, we will look at the order–to–file time.
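
Continuing with the assumed column names:

myData$otf <- as.numeric(difftime(myData$resulted, myData$ord, units = "mins"))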

Sanity Check

Let’s just have a quick look at this to make sure nothing crazy is happening.
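
For example:

summary(myData$otf)
hist(myData$otf, breaks = 100, xlab = "Order-to-file TAT (min)", main = "")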

[Figure: a quick look at the distribution of order-to-file TAT]

Some Nutty Stuff

We do note one thing—there is a sample with a TAT of 1656 min. This is a little crazy so we could investigate those samples to see if this is real (because of a lost sample) or an artifact of an add–on analysis being misidentified as a stat or some other similar nonsensical event.

If you wanted to list all of these extreme outliers for the year, you could do so like this:
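
Something like:

head(sort(myData$otf, decreasing = TRUE), 10)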

which gives you the TAT of the 10 (or whatever number you prefer) worst specimens for the year. Obviously when you do this kind of analysis on your own data, you will retain the specimen ID in the data set and you could explore what is going on here–whether these are add–ons etc. You discover interesting things when you dig into your data.

Time Dependence

But we are interested in time-dependence of the TAT, so let’s look at a scatterplot of the whole year.
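
A sketch (the plotting-character and transparency choices are arbitrary):

plot(myData$ord, myData$otf, pch = 19, col = "#00000020",
     xlab = "", ylab = "Order-to-file TAT (min)")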

[Figure: scatterplot of order-to-file TAT for the whole year]

So, that’s pretty hard to draw inferences from. We can see that there are some outliers with inconceivably low TAT. We will have to investigate what is going on with those collections but not right now. These outliers will not affect the non–parametric measures of central tendency.

Tunnelling Down

Let’s have a look at one week.
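
A sketch; the particular week chosen here is arbitrary:

one.week <- subset(myData, ord >= ymd_hm("2014-01-05 00:00") &
                           ord <  ymd_hm("2014-01-12 00:00"))
plot(one.week$ord, one.week$otf, pch = 19, col = "#00000040",
     xlab = "", ylab = "Order-to-file TAT (min)")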

[Figure: one week of TAT data]

See the first post on this topic for more information about the plotting parameters.

We can see there is a definite (and unsurprising) periodicity in the number of tests per hour. We can look at “volumes” another time. What we want to do now is look for time–dependence in the TAT so we can ultimately investigate what days of the week and times of the day are worse. But we don’t want to do this for one week—we want to do this for all weeks in the year. It would be nice, for example to plot all the Sundays, Mondays, Tuesdays etc overlapping and then see if we can see day–of–week and time–of–day trends.

Some More Lubridate Magic

Therefore, we need to assign every point in our myData dataframe a day of the week. The lubridate function wday() does this for us.
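
For example:

wday(ymd("2014-01-01"))
## [1] 4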

So, January 1, 2014 was a Wednesday, which is the 4th day of the week. Let’s assign the day of the week for all our days and then bind this to our data.
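
A sketch (the new column name is arbitrary):

myData$day.of.week <- wday(myData$ord)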

Now let’s plot all of the Monday–data for the whole year and look at the with–day–trends for Mondays. We are going to convert all of the TATs and times at which they are collected to decimal numbers so we don’t run into any hassles. (Yes, I ran into hassles when I did not do this.)

This little function accomplishes this for us:
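
Something along these lines (the function name is arbitrary):

to.dec.hours <- function(t) hour(t) + minute(t)/60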

So this seems to have worked and now we can make a scatter plot.
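
A sketch; the ylim is a guess at a sensible range:

mondays <- subset(myData, day.of.week == 2)
mondays$dec.time <- to.dec.hours(mondays$ord)
plot(mondays$dec.time, mondays$otf, pch = 19, col = "#00000020",
     xlab = "Hour of day", ylab = "TAT (min)", ylim = c(0, 200))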

Monday Monday, So Good to Me?

[Figure: scatterplot of all Monday TATs for the year]

But now for the interesting part. We want to see how the median TAT is related to the time of day. We might want to look at, say, the running median over a one–hour window all day long. Notice that I have made the times, t, go from 0.5 to 23.5 because these are the only times for which a 60 min moving median can be calculated. Otherwise we’d have this really annoying situation where we’d have to fetch data from the last half-hour of Sunday and the first half-hour of Tuesday. I don’t need that level of perfectionism at present.
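
A sketch with a plain for-loop; the 0.1 h step size is an assumption:

t <- seq(0.5, 23.5, by = 0.1)
mov.med <- numeric(length(t))
for (i in seq_along(t)) {
  in.window <- abs(mondays$dec.time - t[i]) <= 0.5   # 60 min window centred on t[i]
  mov.med[i] <- median(mondays$otf[in.window], na.rm = TRUE)
}
lines(t, mov.med, col = "red", lwd = 2)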

[Figure: Monday TATs with the 60-minute moving median overlaid]

Removing the For–ness

Many R folks don’t like for–loops and would rather use the apply() family of functions. I’m not sure I always understand the contempt towards loops for small simple tasks but if you wanted to accomplish the same looping task without using a for–loop, you could do as follows:
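
For example, with sapply():

mov.med <- sapply(t, function(x) {
  median(mondays$otf[abs(mondays$dec.time - x) <= 0.5], na.rm = TRUE)
})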

Smoothing

This approach is reasonable but the problem (I have found) is that it is computationally expensive on large data sets. For this reason, it is nice to use a canned smoothing algorithm like LOWESS which is much faster. The parameter f of the lowess function has a default of 2/3 which in our case results in a fit that is way–too smoothed. I played around with f until I got something that more or less tracked with the 60–min moving median. There are many approaches to smoothing–don’t get lost in the vortex.

Lowess Smoothing
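
A sketch; f = 0.05 is just a guess at a value that tracks the 60-min moving median:

plot(mondays$dec.time, mondays$otf, pch = 19, col = "#00000020",
     xlab = "Hour of day", ylab = "TAT (min)", ylim = c(0, 200))
lines(lowess(mondays$dec.time, mondays$otf, f = 0.05), col = "blue", lwd = 2)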

[Figure: Monday TATs with a lowess fit overlaid]

So, that’s cool. Now let’s loop over all the days of the week and make plots for each day.
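
A sketch of the loop; the panel layout, colours and f are all arbitrary choices:

day.names <- c("Sunday", "Monday", "Tuesday", "Wednesday", "Thursday", "Friday", "Saturday")
par(mfrow = c(2, 4))
for (d in 1:7) {
  day.data <- subset(myData, day.of.week == d)
  day.data$dec.time <- to.dec.hours(day.data$ord)
  plot(day.data$dec.time, day.data$otf, pch = 19, col = "#00000020",
       xlab = "Hour of day", ylab = "TAT (min)", ylim = c(0, 200), main = day.names[d])
  lines(lowess(day.data$dec.time, day.data$otf, f = 0.05), col = "blue", lwd = 2)
}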

[Figure: TAT scatterplots with lowess fits for each day of the week]

Now, let’s overplot all the lowess fits on a single graph and see what practical observations we can make. I have increased the lowess() smoothing to make things easier to look at.
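
A sketch; the y-axis range and the heavier smoothing (f = 0.1) are assumptions:

day.names <- c("Sunday", "Monday", "Tuesday", "Wednesday", "Thursday", "Friday", "Saturday")
par(mfrow = c(1, 1))
plot(0, 0, type = "n", xlim = c(0, 24), ylim = c(30, 90),
     xlab = "Hour of day", ylab = "Lowess-smoothed TAT (min)")
for (d in 1:7) {
  day.data <- subset(myData, day.of.week == d)
  lines(lowess(to.dec.hours(day.data$ord), day.data$otf, f = 0.1), col = d, lwd = 2)
}
legend("topleft", legend = day.names, col = 1:7, lwd = 2, bty = "n")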

[Figure: overlaid lowess fits for all seven days of the week]

Observations

We can immediately see some issues. Weekends in the early hours of the morning are bad. 8 am is bad across all days. Noon is generally problematic and particularly so on Saturdays. There is also a slowdown in mid–afternoon and in the early evenings. Saturday midnight is the most problematic time, although the endpoints of the figure have fewer local weighting points and their confidence intervals are wider. This is something we can cover another time.

Remember, also, this is only the median we have looked at. Other horrors may be lurking in the 90th percentile.

Next time what we will do is move all of this TAT visualization to a 3D representation so we can more easily spot the problematic times.

-Dan

The lot is cast into the lap, but its every decision is from the LORD.
Proverbs 16:33

Generating Meaningful Turnaround Time Plots for Clinical Laboratory Medicine

 

The Problem

It is standard practice in Clinical Laboratory Medicine to monitor turn around times (TATs) for high volume tests like potassium (K), Troponin (Tn) and Hemoglobin (Hb). The term TAT is typically understood to mean “the time elapsed from when the doctor orders the test to the time the result is available in the Laboratory Information System (LIS)”. This of course does not take into account the lag between the result availability and the time when the physician logs in to view it and respond, but let’s just say that we are not there yet.

Traditionally, some dedicated soul would take .csv extracts from the LIS and do laborious things in Excel to generate the median TAT for the month for each test and each lab location for which they were responsible. Not only is it impossible to automate such a process, it is entirely manual and produces fairly uninformative output since (at least at our site) only medians were generated.

What really frustrates physicians is not where the median goes each month, it is the behaviour of, say, the 90th percentile of TAT or the outliers. These are the ones they remember.

R allows us to produce a much more informative figure in an automatable fashion. I provide here an example of a TAT figure for Hb with some statistical metrics included.

Look at the Data

Let’s start by reading in our data and looking at how it is structured.
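
A sketch; the file name is a placeholder:

myData <- read.csv("2014_01_hb_tat.csv", stringsAsFactors = FALSE)
str(myData)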

In this simplified anonymized data set we can see that we have 4497 observations with all of the necessary time points to calculate the turnaround times of the preanalytical and analytical processes. For the sake of this example, let’s focus on the order-to-file time.

We are going to need to handle the dates, for which there is only one package worth discussing, namely lubridate.

Basic Data Preparation

The first thing we need to do is to convert the order, collect, receive and result times to lubridate objects (i.e. time and date objects) so that we can do some algebra on them. We can see from the structure of myData that the order, collect, receive and result time points are in the format “YYYY-MM-DD HH:MM”. Therefore we can use the lubridate function ymd_hm() to perform the conversion.
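
A sketch; the column names are assumptions based on the description:

library(lubridate)
myData$ord      <- ymd_hm(myData$ord)       # order time
myData$collect  <- ymd_hm(myData$collect)   # collection time
myData$receive  <- ymd_hm(myData$receive)   # receive time
myData$resulted <- ymd_hm(myData$resulted)  # result (file) time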

Applying str() again to myData, you will see that the dates and times are now POSIXct, that is, they are now dates and times. This allows us to calculate the order-to-file TAT, which we can do with the difftime() function, exporting the result in minutes. We will also append the order-to-file (otf) TAT to the dataframe and do some quick sanity-checking.
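
Continuing with the assumed names:

myData$otf <- as.numeric(difftime(myData$resulted, myData$ord, units = "mins"))
summary(myData$otf)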

Sanity Check

[Figure: sanity check of the order-to-file TATs]

This looks reasonable, so we can proceed with a TAT scatterplot.

Scatterplot

[Figure: order-to-file TAT scatterplot]

Beautifying

This is kind-of problematic because we really want to focus on results in the 0-200 minute range. There are some wild outliers, as occur in real life, because of instrument down-time, add-ons, etc. We can leave this matter for the present. Notice that I have displayed every day on the x-axis because this will allow us to investigate any problems we see. So we will adjust the ylim and we will also make the plot points semitransparent by using hexadecimal colour codes followed by a fractional transparency expressed in hexadecimal. Black is “#000000” and “20” is hexadecimal for 32 which is 32/256 or 12.5% opacity.
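
A sketch of that plot; the daily tick marks are produced with axis.POSIXct():

plot(myData$ord, myData$otf, pch = 19, col = "#00000020", ylim = c(0, 200),
     xaxt = "n", xlab = "", ylab = "Order-to-file TAT (min)")
axis.POSIXct(1, at = seq(min(myData$ord), max(myData$ord), by = "day"),
             format = "%d", las = 2)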

[Figure: TAT scatterplot restricted to 0-200 min with semitransparent points]

We’ll accept the fact that we know that there are a number of outliers. We could easily have a plot that displayed them or a tabular summary of them.

Now we will need to prepare the vector of daily medians, 10th and 90th percentiles to plot. We will loop through each day of the month and then calculate the statistics for that day.
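
A sketch of the loop; the ylim is a guess:

days <- seq(as.Date(min(myData$ord)), as.Date(max(myData$ord)), by = "day")
med <- p10 <- p90 <- numeric(length(days))
for (i in seq_along(days)) {
  day.tat <- myData$otf[as.Date(myData$ord) == days[i]]
  med[i] <- median(day.tat, na.rm = TRUE)
  p10[i] <- quantile(day.tat, 0.10, na.rm = TRUE)
  p90[i] <- quantile(day.tat, 0.90, na.rm = TRUE)
}
plot(days, med, type = "l", lwd = 2, ylim = c(0, 120),
     xlab = "", ylab = "Order-to-file TAT (min)")
lines(days, p10, lty = 2)
lines(days, p90, lty = 2)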

[Figure: daily median, 10th and 90th percentile TAT]

But this is not all that easy to look at. First, it’s kind-of ugly and second, if we find a problem date, we can’t read it from the figure. So let’s start by fixing the x-axis labels:
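
For example, suppressing the default axis and formatting the dates ourselves:

plot(days, med, type = "l", lwd = 2, ylim = c(0, 120), xaxt = "n",
     xlab = "", ylab = "Order-to-file TAT (min)")
axis(1, at = days, labels = format(days, "%b %d"), las = 2, cex.axis = 0.6)
lines(days, p10, lty = 2)
lines(days, p90, lty = 2)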

[Figure: the same plot with readable date labels on the x-axis]

To paint the central 80% as a band, we will need to use the polygon() function. I am going to write a function to which an x-vector and two y-vectors are supplied, which then fills the area between them with a supplied color. Naturally, the three vectors must have the same length.
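
A sketch of such a helper; its name and the colour are assumptions:

shade.band <- function(x, y.lo, y.hi, color) {
  polygon(c(x, rev(x)), c(y.lo, rev(y.hi)), col = color, border = NA)
}
shade.band(days, p10, p90, "#00640030")
lines(days, med, lwd = 2)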

[Figure: the central 80% of TATs shown as a shaded band]

Final Product

Now we should just finish it off with a legend.
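
Something like:

legend("topright", legend = c("Daily median", "Central 80% (10th to 90th percentile)"),
       lwd = c(2, 10), col = c("black", "#00640030"), bty = "n")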

[Figure: final TAT plot with legend]

And that is a little more informative. There are many features you could add from this point – like smoothing, statistical analysis, or an outlier report. You could also loop over different tests, examine both the preanalytical and analytical processes at different locations, and produce a pdf report using R Markdown for all the institutions you look after.

-Dan

 

“The LORD detests dishonest scales, but accurate weights find favor with him.”
Proverbs 11:1