Feature Engineering of the Gamma Region for Machine Learning Identification of Monoclonal Proteins

Background

In a previous post I was toying with better ways to integrate monoclonal proteins by deconvolution of monoclonal peaks. Turns out this is a hard problem. In any case, Deep Learning Neural Networks have become so mature that I wanted to see how easy it is to identify the presence of monoclonal proteins in the gamma region from the densitometric scan. This is not a new idea as it was first suggested in 1992 in J. Clin Pathol by Kratzer et al. I have to say, that was a very cool idea but as far as I can tell, it has not gotten much traction. This paper is mostly cited by review articles but there are a handful of further attempts. The challenges of developing a commercial product are probably regulatory in nature. However, I am hopeful that it would not be too hard to make a decision support tool.

When Kratzer et al. tackled this problem, they were working on an Escom "IBM compatible" computer with a 286 processor and 1 MB of RAM. Escom is a defunct German computer company that purchased Commodore after it began to flounder. This is why it would have been important for them to engineer their features carefully. I expect that TensorFlow may not need much feature engineering at all, but I want to try what they suggested as the features for the gamma region.

They took the scan of the gamma region, baselined it, made it periodic and then applied the fast Fourier transform (FFT) to represent the scan in frequency space. This is a very good idea.

Some Basic Stuff and Sanity Checking

Let's make a function that is a sine function for \(-\ell \le x \le \ell\) and is otherwise 0.
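
A minimal sketch of such a function; the choice of sin(2*pi*x), i.e. a frequency of 1, is my assumption since the original code is not shown:

windowed.sin <- function(x, l = 1) {
  # sine wave inside the window -l <= x <= l, zero outside
  ifelse(abs(x) <= l, sin(2 * pi * x), 0)
}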

And now let's examine this function for \(-10 < t < 10\) and \(\ell = 1\):

[Figure: the windowed sine function]
The FFT of this function can be determined with the R fft() function. The frequency spacing \(\Delta f\) is calculated with the formula

\[\Delta f = \frac{1}{N \Delta t} \]
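
A sketch of how this might be computed, continuing with the windowed.sin() function assumed above:

dt <- 0.01
t <- seq(-10, 10, by = dt)
y <- windowed.sin(t, l = 1)

Y <- fft(y)
N <- length(y)
df <- 1 / (N * dt)                    # delta f = 1/(N * delta t)
f <- (0:(N - 1)) * df                 # frequency axis
plot(f, Mod(Y), type = "l", xlab = "frequency", ylab = "|FFT|")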

[Figure: FFT of the windowed sine function]
Zooming in a bit we have:

[Figure: FFT of the windowed sine function, zoomed in]

This looks as it should: it is trying to be a delta function, namely \(\delta(x - 1)\). As we increase the number of periods we feed the FFT, the sharper this peak becomes:

So as the number of periods increases, things improve, and by 10 periods we have a very sharp \(\delta\) function.

[Figure: the FFT peak sharpening as the number of periods increases]

Applying FFT to a Gaussian

We can do the same thing to a Gaussian to mimic the gamma region of electrophoresis. To do this we just replicate the Gaussian, inverting it in order to make the resulting function odd. Let's just examine 5 periods for simplicity, which is 10 replications of the gamma region.
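
Here is one way this could be sketched; the Gaussian parameters and the replication scheme are my own choices, not necessarily those used for the figures:

x <- seq(0, 1, by = 0.01)
gauss <- dnorm(x, mean = 0.5, sd = 0.15)     # stand-in for the gamma region
one.period <- c(gauss, -rev(gauss))          # append an inverted mirror image to make the function odd
periodic <- rep(one.period, times = 5)       # 5 periods = 10 replications of the "gamma region"
G <- fft(periodic)
plot(Mod(G)[1:100], type = "h", xlab = "n", ylab = "|FFT|")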

[Figure: FFT of the replicated (odd) Gaussian]

So, with a pure Gaussian, we more or less have 4 peaks in the FFT. If we now mimic a monoclonal protein by adding a second Gaussian, we can see that the FFT becomes more complex because more frequencies (i.e. \(n\)) of \(e^{\frac{-2 \pi i}{N}kn}\) are required to characterize the concatenated periodic function.

[Figure: FFT of the Gaussian with a second (monoclonal) peak added]

The gamma region with a monoclonal results in an obviously more complex FFT.

Apply to a Real Gamma Region

Here is an example densitometric scan. We can pull out the gamma region.

[Figure: example densitometric scan and its gamma region]

Then we can baseline the gamma region, replicate it 10 times and apply the FFT.
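
A sketch of these steps, assuming the gamma region has already been extracted into a numeric vector called gamma.region and using a simple straight-line baseline between its endpoints (the actual baselining method is not shown here):

n <- length(gamma.region)
baseline <- seq(gamma.region[1], gamma.region[n], length.out = n)
gamma.bl <- gamma.region - baseline                             # baselined gamma region

gamma.periodic <- rep(c(gamma.bl, -rev(gamma.bl)), times = 5)   # 10 replications, made odd
G <- fft(gamma.periodic)
plot(Mod(G)[1:100], type = "h", xlab = "n", ylab = "|FFT|")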

[Figure: the baselined, replicated gamma region and its FFT]

And that is more or less the gist of things. The simpler the gamma region is, the fewer peaks there will be in the frequency domain. This permits recapitulation of what Kratzer et al. did in their 1992 paper with modern techniques. As I say, I am not sure it's necessary, but it might improve model performance. Of course, it would seem possible to FFT the whole scan also and reduce the dimensionality of the problem to about 30 frequencies and amplitudes.

Parting Thought

As the FFT discerns the frequencies of the function, so also:

“Do not consider his appearance or his height … The Lord does not look at the things people look at. People look at the outward appearance, but the Lord looks at the heart.”

1 Sam 16:7

A Deep Learning Classifier of New Testament Verse Authorship using the R Keras Package

Introduction

This is the first of what I am hoping are a number of posts on different machine learning classifiers. The subject matter is not lab medicine but the methodology applies to any similar project. For example, maybe you want to classify the text of a general internal medicine consult into its subspecialty based on the words used or perhaps you want to determine which IT tickets are likely high priority. Maybe you want to convert free text diagnoses into categorical diagnoses. Ultimately, the problem I want to tackle is text classification.

In any case, the book that I have been reading at home is Deep Learning with R by François Chollet and J.J. Allaire, which has many interesting and easy-to-follow examples. Since it's on my mind, I thought a deep learning model would be a good place to start. But I did not want to just redo one of the examples from the book because those data sets are already cleansed, and in that sense much of the heavy lifting is done. I wanted to start from a new data set and use the approach shown in section 3.5 but apply it to a new text classification problem. Because I want to follow the basic flow of the Reuters News Wire classifier, I need a similar natural language processing (NLP) multiclass text classification problem.

The problem I have chosen is one of authorship classification. Specifically, given any Greek sentence taken from the New Testament, can I make a deep learning classifier that will identify the author of a verse that the classifier has never seen?

Data Cleansing

The text of the New Testament is available online from numerous sources. I downloaded it here and chose the Byzantine Textform 2005 file. The text has been cleansed, put to lower case and transliterated to English characters. There are several steps to get it into a simple dataframe where each row is a verse.
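
A rough sketch of the wrangling, assuming each downloaded .ASC file has one verse per line with the chapter:verse reference as the first token; the file names and the author lookup are abbreviated assumptions here:

library(dplyr)
library(tibble)

book.files <- list.files(pattern = "05\\.ASC$")     # MT05.ASC, MR05.ASC, ...
author.lookup <- c(MT05.ASC = "matthew", MR05.ASC = "mark", LU05.ASC = "luke")   # etc. for all 27 books

nt.data <- lapply(book.files, function(f) {
  lines <- readLines(f)
  tibble(
    reference = sub("^(\\S+)\\s.*$", "\\1", lines),   # first token = chapter:verse
    verse     = sub("^\\S+\\s+", "", lines),          # everything after it
    book      = f,
    author    = unname(author.lookup[f])
  )
}) %>% bind_rows()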

Now that this wrangling is complete, we have a tibble that looks like this:

reference verse book author
1:1 biblov genesewv ihsou cristou uiou dauid uiou abraam MT05.ASC matthew
1:2 abraam egennhsen ton isaak isaak de egennhsen ton iakwb iakwb de egennhsen ton ioudan kai touv adelfouv autou MT05.ASC matthew
1:3 ioudav de egennhsen ton farev kai ton zara ek thv yamar farev de egennhsen ton esrwm esrwm de egennhsen ton aram MT05.ASC matthew
1:4 aram de egennhsen ton aminadab aminadab de egennhsen ton naasswn naasswn de egennhsen ton salmwn MT05.ASC matthew
1:5 salmwn de egennhsen ton booz ek thv racab booz de egennhsen ton wbhd ek thv rouy wbhd de egennhsen ton iessai MT05.ASC matthew
1:6 iessai de egennhsen ton dauid ton basilea dauid de o basileuv egennhsen ton solomwna ek thv tou ouriou MT05.ASC matthew
1:7 solomwn de egennhsen ton roboam roboam de egennhsen ton abia abia de egennhsen ton asa MT05.ASC matthew
1:8 asa de egennhsen ton iwsafat iwsafat de egennhsen ton iwram iwram de egennhsen ton ozian MT05.ASC matthew
1:9 oziav de egennhsen ton iwayam iwayam de egennhsen ton acaz acaz de egennhsen ton ezekian MT05.ASC matthew
1:10 ezekiav de egennhsen ton manassh manasshv de egennhsen ton amwn amwn de egennhsen ton iwsian MT05.ASC matthew

We should get verse counts that match what is expected, which we do.

book counts
MT05.ASC 1070
MR05.ASC 677
LU05.ASC 1149
JOH05.ASC 878
AC05.ASC 1003
RO05.ASC 432
1CO05.ASC 436
2CO05.ASC 256
GA05.ASC 148
EPH05.ASC 154
PHP05.ASC 103
COL05.ASC 94
1TH05.ASC 88
2TH05.ASC 46
1TI05.ASC 112
2TI05.ASC 82
TIT05.ASC 45
PHM05.ASC 24
HEB05.ASC 302
JAS05.ASC 107
1PE05.ASC 104
2PE05.ASC 60
1JO05.ASC 104
2JO05.ASC 12
3JO05.ASC 13
JUDE05.ASC 24
RE05.ASC 403

And we can check the unique word count.

Normally at this point, we might remove stop words and then stem and lemmatize the text (i.e. get rid of useless little words and suffixes that cause words of the same meaning to look different). This would be more important for more traditional learning classifiers but is likely less important when using Keras and TensorFlow. If I were running this classifier on the English text of the KJV, for example, I would run it with and without such a process and gauge the performance change. There are numerous NLP packages specifically dedicated to this task. I am going to skip it here. This process is, of course, highly language-dependent.

The other thing I need to do is make the author-factor column numbered 0-8 instead of 1-9 because R is going to be calling Python code and Python starts counting at 0. This bug took me a while to sort out.

Now we will make a tokenizer, that is, a function to convert words to integers, and we will limit the model to the top 15000 of the 17156 unique words found in the text.
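
With the keras package this looks something like the following, assuming the verses live in the nt.data dataframe sketched above:

library(keras)

tokenizer <- text_tokenizer(num_words = 15000)
tokenizer %>% fit_text_tokenizer(nt.data$verse)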

Now we need to split the text randomly into training and testing sets in an 80:20 split.
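
A simple random 80:20 split (no stratification yet) might look like this:

set.seed(42)
n.verses <- nrow(nt.data)
train.index <- sample(seq_len(n.verses), size = round(0.8 * n.verses))
train.data <- nt.data[train.index, ]
test.data  <- nt.data[-train.index, ]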

The data is very imbalanced; that is, there are authors (like Jude and James) who have very few verses ascribed to them and others (like Paul and Luke) who have many. For this reason, we should sanity-check our training and testing data to make sure that we have sampled about 80% of each book. There are specific tools to achieve this process, which is referred to as stratified sampling.

We can see that we have a problem with author 2 who has only 24 verses. This is probably not going to matter much but we can try balanced sampling for which we do get better proportions.

Now we can tokenize the data, that is, convert the verses to lists of integers and then to a one-hot encoded form.
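
Using the tokenizer from above, a sketch of the encoding of the verses and of the author labels:

train.seq <- texts_to_sequences(tokenizer, train.data$verse)
test.seq  <- texts_to_sequences(tokenizer, test.data$verse)

x.train <- sequences_to_matrix(tokenizer, train.seq, mode = "binary")
x.test  <- sequences_to_matrix(tokenizer, test.seq,  mode = "binary")

y.train <- to_categorical(train.data$author_factor, num_classes = 9)
y.test  <- to_categorical(test.data$author_factor,  num_classes = 9)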

Satisfy ourselves that the training data is in random order:

reference verse book author author_factor verse_number
16:6 all oti tauta lelalhka umin h luph peplhrwken umwn thn kardian JOH05.ASC john 1 3584
2:20 all ecw kata sou oti afeiv thn gunaika sou iezabel h legei eauthn profhtin kai didaskei kai plana touv emouv doulouv porneusai kai fagein eidwloyuta RE05.ASC john 1 7563
21:23 kai elyonti autw eiv to ieron proshlyon autw didaskonti oi arciereiv kai oi presbuteroi tou laou legontev en poia exousia tauta poieiv kai tiv soi edwken thn exousian tauthn MT05.ASC matthew 5 705
5:1 dikaiwyentev oun ek pistewv eirhnhn ecomen prov ton yeon dia tou kuriou hmwn ihsou cristou RO05.ASC paul 6 4895
12:29 h pwv dunatai tiv eiselyein eiv thn oikian tou iscurou kai ta skeuh autou diarpasai ean mh prwton dhsh ton iscuron kai tote thn oikian autou diarpasei MT05.ASC matthew 5 374
4:24 alla kai di hmav oiv mellei logizesyai toiv pisteuousin epi ton egeiranta ihsoun ton kurion hmwn ek nekrwn RO05.ASC paul 6 4893
27:31 eipen o paulov tw ekatontarch kai toiv stratiwtaiv ean mh outoi meinwsin en tw ploiw umeiv swyhnai ou dunasye AC05.ASC luke 3 4734
1:25 kai hrwthsan auton kai eipon autw ti oun baptizeiv ei su ouk ei o cristov oute hliav oute o profhthv JOH05.ASC john 1 2921
3:6 kai exelyontev oi farisaioi euyewv meta twn hrwdianwn sumboulion epoioun kat autou opwv auton apoleswsin MR05.ASC mark 4 1149
8:4 ei men gar hn epi ghv oud an hn iereuv ontwn twn ierewn twn prosferontwn kata ton nomon ta dwra HEB05.ASC unknown 8 6930

Now we can build a basic model:
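
Something in the spirit of the Reuters example from the book; the layer sizes here are my own choices:

model <- keras_model_sequential() %>%
  layer_dense(units = 64, activation = "relu", input_shape = c(15000)) %>%
  layer_dense(units = 64, activation = "relu") %>%
  layer_dense(units = 9, activation = "softmax")

model %>% compile(
  optimizer = "rmsprop",
  loss = "categorical_crossentropy",
  metrics = c("accuracy")
)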

and pull out validation data, again in an 80:20 split.

Now we run the model:
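
A sketch of the validation split and the fit; the number of epochs and the batch size are guesses:

val.index <- sample(seq_len(nrow(x.train)), size = round(0.2 * nrow(x.train)))
x.val <- x.train[val.index, ]
y.val <- y.train[val.index, ]
x.part.train <- x.train[-val.index, ]
y.part.train <- y.train[-val.index, ]

history <- model %>% fit(
  x.part.train, y.part.train,
  epochs = 20,
  batch_size = 512,
  validation_data = list(x.val, y.val)
)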

[Figure: training history of the basic model]

We can show the model performance graphically:

[Figure: model performance by author]

Results are not great because many authors are being misclassified as Paul or Luke. This is likely due to author imbalance, so we can address the imbalance with class weights and with dropout layers, as suggested in this very informative tutorial from Dr. Bharatendra Rai.
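
For illustration, class weights inversely proportional to author frequency and a couple of dropout layers could be added like this; the exact weights and dropout rates are placeholders, not necessarily the ones used for the figures:

class.freq <- table(train.data$author_factor)
class.wt <- as.list(round(max(class.freq) / class.freq, 2))   # names are "0" through "8"

model.wt <- keras_model_sequential() %>%
  layer_dense(units = 64, activation = "relu", input_shape = c(15000)) %>%
  layer_dropout(rate = 0.5) %>%
  layer_dense(units = 64, activation = "relu") %>%
  layer_dropout(rate = 0.5) %>%
  layer_dense(units = 9, activation = "softmax")

model.wt %>% compile(optimizer = "rmsprop",
                     loss = "categorical_crossentropy",
                     metrics = c("accuracy"))

history.wt <- model.wt %>% fit(
  x.part.train, y.part.train,
  epochs = 20, batch_size = 512,
  validation_data = list(x.val, y.val),
  class_weight = class.wt
)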

What we get looks a little better with more counts on the diagonal.

[Figure: model performance with class weights and dropout]

The model is jumpy on the small books, probably because of undersampling of them. This means that k-fold cross-validation will help us assess model performance. I am not sure whether I should try to have balanced sampling in the folds, but I am not going to worry about that at the moment.

Run the k-fold cross-validation.
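
A minimal k-fold loop might look like this; the choice of 4 folds is an assumption:

k <- 4
fold.id <- sample(rep(1:k, length.out = nrow(x.train)))
cv.acc <- numeric(k)

for (i in 1:k) {
  x.fold.train <- x.train[fold.id != i, ]
  y.fold.train <- y.train[fold.id != i, ]
  x.fold.val   <- x.train[fold.id == i, ]
  y.fold.val   <- y.train[fold.id == i, ]

  fold.model <- keras_model_sequential() %>%
    layer_dense(units = 64, activation = "relu", input_shape = c(15000)) %>%
    layer_dense(units = 9, activation = "softmax")
  fold.model %>% compile(optimizer = "rmsprop",
                         loss = "categorical_crossentropy",
                         metrics = c("accuracy"))
  fold.model %>% fit(x.fold.train, y.fold.train,
                     epochs = 5, batch_size = 512, verbose = 0)

  res <- fold.model %>% evaluate(x.fold.val, y.fold.val, verbose = 0)
  cv.acc[i] <- res[[2]]   # second element is the accuracy metric
}
cv.acc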

Validation accuracy improves modestly with more epochs but the model definitely overfits the training data (getting to the high 90s in accuracy). This is a bit of a conundrum to me for which I do not know the answer (those who know, please comment): namely, I can overfit the model to make gains on the validation set and these do improve performance on the test set but I expect that this improvement is happening in some non-generalizable way.

[Figure: cross-validation accuracy by epoch]

Likewise loss slowly declines over many epochs but the model overfits.

[Figure: cross-validation loss by epoch]

In any case, this is the model performance rerunning the k-fold cross validation with 5 epochs.

Final Outcome

Satisfied enough that 5 epochs should be OK, I can run the model on the whole training set and look at its performance on the testing set.

[Figure: final model performance on the test set]

Some interesting findings:

  • John seems to be the easiest to classify. This fits well with his unique authorship style.
  • The synoptic gospels are easily misclassified among one another. Again, this fits with the overlap of stories, parables and other content.
  • Hebrews looks more like Hebrews than it looks like Paul. This fits with the perspective that Paul is not the author of Hebrews.
  • Poor James, Jude and Peter: just not enough verses to get proper classification. I am sure there are ways to address this kind of imbalance if classifying Jude correctly were a very important thing to do.

I think I am going to stop trying to improve this because it is not a real problem but I hope that someone else can recycle some code for a real-life problem. I would be interested in comments on how to get improved classification of small classes.

Parting Easter Thought

ouk estin wde hgeryh gar kaywv eipen deute idete ton topon opou ekeito o kuriov, Matthew 28:6

Going Viral with #rstats to Ramp up COVID Nucleic Acid Testing in the Clinical Laboratory

My friend Patrick Mathias from the University of Washington Department of Laboratory Medicine gave a fantastic talk on MSACL Connect last week about using #rstats and other open source tools for data automation in COVID sample collection, accessioning, pooling and direct-to-patient reporting.

Following his talk, I shared the details of our COVID sample pooling data automation strategy which is built on R, R Shiny and Ubuntu. Click here or on the image below to go to the talk on MSACL Connect.
“When he heard this, Jesus said, ‘This sickness will not end in death. No, it is for God’s glory so that God’s Son may be glorified through it.'”

John 11:4

Parse an Online Table into an R Dataframe – Westgard’s Biological Variation Database

Background

From time to time I have wanted to bring an online table into an R dataframe. While in principle the data can be cut and pasted into Excel, sometimes the table is very large and sometimes the columns get goofed up in the process. Fortunately, there are a number of R tools for accomplishing this. I am just going to show one approach using the rvest package. The rvest package also makes it possible to interact with forms on webpages to request specific material which can then be scraped. I think you will see the potential if you look here.

In our (simple) case, we will apply this process to Westgard's desirable assay specifications as shown on his website. The goal is to parse out the biological variation tables, get them into a dataframe and then write them to csv or xlsx.

Reading in the Data

The first thing to do is to load the rvest and httr packages and define an html session with the html_session() function.
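
Something like the following; the URL is my assumption for the location of the biological variation tables:

library(rvest)
library(httr)

westgard.url <- "https://www.westgard.com/biodatabase1.htm"   # assumed URL
my.session <- html_session(westgard.url)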

Now looking at the webpage, you can see that there are 8 columns in the tables of interest. So, we will define an empty dataframe with 8 columns.
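
For example:

biotable <- data.frame(matrix(nrow = 0, ncol = 8), stringsAsFactors = FALSE)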

We need to know which part of the document to scrape. This is a little obscure, but following the instructions in this post, we can determine that the xpaths we need are:

/html/body/div[1]/div[3]/div/main/article/div/table[1]

/html/body/div[1]/div[3]/div/main/article/div/table[2]

/html/body/div[1]/div[3]/div/main/article/div/table[3]

etc.

There are 8 such tables in the whole webpage. We can define a character vector for these as such:
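
Using paste0() to build the 8 xpath strings:

table.xpaths <- paste0("/html/body/div[1]/div[3]/div/main/article/div/table[", 1:8, "]")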

Now we make a loop to scrape the 8 tables and, with each iteration of the loop, append the scraped subtable to the main dataframe called biotable using the rbind() function. We have to use the parameter fill = TRUE in the html_table() function because the tables do not always have a uniform number of columns.
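
A sketch of the loop, using the objects defined above:

for (xp in table.xpaths) {
  subtable <- my.session %>%
    html_node(xpath = xp) %>%
    html_table(fill = TRUE)
  biotable <- rbind(biotable, subtable)
}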

Clean Up

Now that we have the raw data out, we can have a quick look at it:

X1 X2 X3 X4 X5 X6 X7 X8
Analyte Number of Papers Biological Variation Biological Variation Desirable specification Desirable specification Desirable specification
Analyte Number of Papers CVI CVg I(%) B(%) TE(%)
S- 11-Desoxycortisol 2 21.3 31.5 10.7 9.5 27.1
S- 17-Hydroxyprogesterone 2 19.6 50.4 9.8 13.5 29.7
U- 4-hydroxy-3-methoximandelate (VMA) 1 22.2 47.0 11.1 13.0 31.3
S- 5' Nucleotidase 2 23.2 19.9 11.6 7.6 26.8
U- 5'-Hydroxyindolacetate, concentration 1 20.3 33.2 10.2 9.7 26.5
S- α1-Acid Glycoprotein 3 11.3 24.9 5.7 6.8 16.2
S- α1-Antichymotrypsin 1 13.5 18.3 6.8 5.7 16.8
S- α1-Antitrypsin 3 5.9 16.3 3.0 4.3 9.2

We can see that we need to define column names and we need to get rid of some rows containing extraneous column header information. There are actually 8 such sets of headers to remove.

Let's now find rows we don't want and remove them.

You will find that the table has missing data, which is written as "- - -". This should now be replaced by NA, and the row names should be reset to sequential integers. Also, we will remove all the minus signs after the specimen type. I'm not sure what they add.
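
A sketch of this cleanup; the exact matching patterns are assumptions based on the raw output shown above:

# drop the repeated header rows (any row containing the word "Analyte")
biotable <- biotable[!apply(biotable, 1, function(r) any(r == "Analyte", na.rm = TRUE)), ]

# replace the missing-data marker with NA (the exact string may differ)
biotable[biotable == "- - -"] <- NA

# renumber the rows and assign the final column names
rownames(biotable) <- NULL
colnames(biotable) <- c("Sample", "Analyte", "NumPapers", "CVI", "CVG", "I", "B", "TE")

# strip the trailing minus sign from the specimen type (e.g. "S-" becomes "S")
biotable$Sample <- gsub("-", "", biotable$Sample)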

Check it Out

Just having another look at the first 10 rows:

Sample Analyte NumPapers CVI CVG I B TE
S 11-Desoxycortisol 2 21.3 31.5 10.7 9.5 27.1
S 17-Hydroxyprogesterone 2 19.6 50.4 9.8 13.5 29.7
U 4-hydroxy-3-methoximandelate (VMA) 1 22.2 47.0 11.1 13.0 31.3
S 5' Nucleotidase 2 23.2 19.9 11.6 7.6 26.8
U 5'-Hydroxyindolacetate, concentration 1 20.3 33.2 10.2 9.7 26.5
S α1-Acid Glycoprotein 3 11.3 24.9 5.7 6.8 16.2
S α1-Antichymotrypsin 1 13.5 18.3 6.8 5.7 16.8
S α1-Antitrypsin 3 5.9 16.3 3.0 4.3 9.2
S α1-Globulins 2 11.4 22.6 5.7 6.3 15.7
U α1-Microglobulin, concentration, first morning 1 33.0 58.0 16.5 16.7 43.9

Now examining the structure:

It's kind of undesirable to have numbers stored as characters, so…
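
For example:

biotable[, 3:8] <- lapply(biotable[, 3:8], as.numeric)   # NumPapers through TE become numeric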

Write the Data

Using the xlsx package, you can output the table to an Excel file in the current working directory.

If you are having trouble getting xlsx to install, then just write as csv.
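
Either of the following; the file names are arbitrary choices:

library(xlsx)
write.xlsx(biotable, "Westgard_Biological_Variation.xlsx", row.names = FALSE)

# or, if xlsx will not install
write.csv(biotable, "Westgard_Biological_Variation.csv", row.names = FALSE)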

Conclusion

You can now use the same general approach to parse any table you have web access to, no matter how small or big it is. Here is a complete script in one place:

Parting Thought on Tables

You prepare a table before me in the presence of my enemies. You anoint my head with oil; my cup overflows.

(Psalm 23:5)

Flat File Interface your Mass Spectrometer to the Laboratory Information System with R

The Problem

As Clinical Pathologists we work hard to create laboratory developed tests (LDTs) using liquid chromatography and tandem mass spectrometry (LC-MS/MS) that are robust, repeatable, accurate and have a wider dynamic range than commercial immunoassays. In our experience, properly developed LC-MS/MS assays are much less expensive and outperform their commercial immunoassay counterparts from an analytical standpoint.

However, despite mass spectrometry's communal obsession with analytical performance of our LDTs, sometimes we overlook the matter of handling the data we generate. Unlike traditional diagnostic companies (e.g. Siemens, Roche) who take care of upload and download of patient data and results via HL7 streams to the laboratory information system (LIS), mass spectrometry companies have not yet made this a priority. This leaves us either paying out a lot of money for custom middleware solutions or manually transcribing our LC-MS/MS results.

We might naively think, "How bad can the transcription be?" but over time it becomes painfully evident that manual transcription of results is tedious, error-prone and an inefficient use of tech time.

Many LIS vendors offer what is called a “flat-file interface”. In this case, there is no HL7 stream generated using a communication socket between instrument and LIS. Rather, the results are saved in an ASCII text file with a pre-defined format and then transferred to the LIS via a secure shell (SSH) connection.

For this post, we are going to take some sample flat files from a SCIEX API5000 triple quadrupole mass spectrometer and prepare a flat file for the SunQuest LIS. Please note that this code is provided to you as is under the GNU Public Licence and without any guarantee. You know how all the LC-MS/MS vendors say their instruments are for “research use only”? –yeah, I'm giving this to you in the same spirit. If you use or modify it, you do so at your own risk. Any changes to how your flatfile is generated by your mass spectrometer or any upgrades to your LC-MS/MS software could make this code malfunction. You have been warned.

The Required Format

SunQuest requires the output file to be a comma separated values (CSV) file with a unique specimen or internal QC result in each row. The first column is the instrument ID, the second column is the specimen container ID (an E followed by a 10-digit integer), the third is the testcode and the fourth is the result. The file itself is required to have a timestamp so that it has a traceable name, and it should have no header. For an instrument named PAPI (short for Providence API 5000) and a testcode TES (for testosterone), the file might look like this:
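
For example, with entirely made-up container IDs and results:

PAPI,E0000000001,TES,12.3
PAPI,E0000000002,TES,< 0.05
PAPI,E0000000003,TES,35.7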

The Starting Material

After we have completed an analytical run and reviewed all peaks to generate our fileable results, we can export the quantified sample batch to an ASCII text file. The file contains a whole lot of diagnostic information about the run, like which multiple reaction monitoring (MRM) transitions we used, what the internal standard (IS) counts were, results from the quantifier and qualifier ions, fitted values for the calibrators, etc. There are more than 80 columns in a typical file and we could talk about all the things we might do with this data, but in this case we are concerned with extracting and preparing the results file.

Dialogue Box

If we are actually going to make an R script usable by a human, it would be good to be able to choose which file we want to process and what test we want to extract using a simple graphical user interface (GUI). There are a number of tools one can use to build GUIs in R, but the most rudimentary is TclTk. I have to confess that I find the language constructs for GUI creation both non-intuitive and boring. For this reason, I present without discussion a modification of a recipe for creating a box with radiobuttons. We are going to choose which of three analytes (you can increase this number as you please) we wish to process a flat-file for. These are: aldosterone, cortisol and testosterone. Please note that if you execute this code on a Mac, you will have to install XQuartz because Macs don't have native X-windows support despite the BSD Unix heritage of OSX.
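
A sketch of the Tk dialogue, adapted (as described) from a standard radiobutton recipe; the layout details are my own:

library(tcltk)

tt <- tktoplevel()
tkwm.title(tt, "Analyte")
rbValue <- tclVar("testosterone")        # default selection

analytes <- c("aldosterone", "cortisol", "testosterone")
for (a in analytes) {
  rb <- tkradiobutton(tt, variable = rbValue, value = a)
  tkgrid(tklabel(tt, text = a), rb)
}

OK.button <- tkbutton(tt, text = "OK", command = function() tkdestroy(tt))
tkgrid(OK.button)
tkwait.window(tt)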

This will give us the following pop-up window with radiobuttons in which I have selected testosterone.

[Screenshot: analyte selection dialogue with radiobuttons]

You will notice that Tk windows do not appear native to the operating system. We can live with this because we are not shallow.

After you hit the OK button, the Tk widget puts the chosen value into a Tk variable called rbValue. We can determine the value using the command tclvalue(rbValue). The reason we need to know which analyte we are working with is that the name of the MRM we want to pull out of the flat-file depends on the analyte, of course. We will also need to replace results below the limit of quantitation (LoQ) with "< x", whatever x happens to be, which will be a different threshold for each analyte.

In our case, the testcodes for aldosterone, cortisol and testosterone are ALD, CORT and TES respectively; the LoQs are 50 pmol/L, 1 nmol/L and 0.05 nmol/L; and the MRM names are "Aldo 1", "Aldo 2", "Cortisol 1", "Cortisol 2", "Testo 1" and "Testo 2", as we defined them within the SCIEX Analyst software. We will use the switch() function to define three variables (test.code, LoQ, and MRM.names) which we will use later to process the flat-file. We will also define the name of the worksheet in a variable called worksheet. These are the parameters you would have to change in order to modify the code for your purposes.
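
Something like the following, with the chosen analyte pulled from the Tk variable; the worksheet name is a placeholder:

analyte <- tclvalue(rbValue)

test.code <- switch(analyte,
                    aldosterone  = "ALD",
                    cortisol     = "CORT",
                    testosterone = "TES")

LoQ <- switch(analyte,
              aldosterone  = 50,     # pmol/L
              cortisol     = 1,      # nmol/L
              testosterone = 0.05)   # nmol/L

MRM.names <- switch(analyte,
                    aldosterone  = c("Aldo 1", "Aldo 2"),
                    cortisol     = c("Cortisol 1", "Cortisol 2"),
                    testosterone = c("Testo 1", "Testo 2"))

worksheet <- "PAPI"   # placeholder worksheet name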

Building File Names

Now we will prompt the user to tell them that they are to choose an instrument flat-file, and we will determine the path of the chosen file. We will need the path both to read in the appropriate file and to write the output later.
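
For example:

tkmessageBox(message = "Please choose an instrument flat-file to process.")
flat.file.path <- tk_choose.files(multi = FALSE)
flat.file.dir  <- dirname(flat.file.path)   # needed later when we write the output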

This code will create this message box:

[Screenshot: message box prompting for a flat-file]

and this file choice dialogue box:

[Screenshot: file choice dialogue]

and after a file is selected and the Open button is pressed, the path to the flat-file is stored in the variable flat.file.path.

Behold: The Data

So we have chosen the file we want to read in, but what does this file look like? To just get a gander at it, we could open it with Excel and see how it is laid out. But since we have broken up with Excel, we won't do this. SCIEX Analyst exports tab (not comma) delimited files. R has a built-in function, read.delim(), for reading these files, but we will quickly discover that read.delim() assumes the files have a rectangular structure, having the same number of columns in each row. R will make assumptions about the shape of the data file based on the first few rows and then try to read it in. In this case, it will fail and you will get gibberish. To get this to work for us, we will need to tell R how many rows to skip before the real data starts, or we will need to tell R the number of columns the file has (which is not guaranteed to be consistent between versions of vendor software). There are lots of ways to do this but I think the simplest is to use grep().

I did this by reading the file in with no parsing of the tabs using the readLines() function. This function creates a vector for which each successive value is the entire content of the row of the file. I display the first 30 lines of the file. Suppose that we chose a testosterone flat file.
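
For example:

flat.file.lines <- readLines(flat.file.path)
head(flat.file.lines, 30)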

All of the \t's that you see are the tabs in the file, which are read in literally when we use readLines(). We can see that in this file nothing of use happens until line 29, but this is not consistent from file to file, so we should not just assume that 29 is always the magic number where the good stuff begins. We can see that the line starting "Sample Name \t Sample ID" is the real starting point, so we can determine how many lines to skip by using grep() and prepare for some error handling with a variable called problem, by which we can deal with the circumstance that no appropriate starting row is identified.
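
A sketch of the grep() approach:

start.row <- grep("^Sample Name\t", flat.file.lines)
problem <- length(start.row) == 0              # no appropriate starting row found
if (!problem) skip.lines <- start.row[1] - 1   # lines to skip before the real header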

Now that we know how many lines to skip we can read in the data:
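
For example:

flat.data <- read.delim(flat.file.path, skip = skip.lines, stringsAsFactors = FALSE)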

We can have a look at the structure of this file

Just Tell Me the Results

And we see that there is lots of stuff we don't need. What we do need are the columns titled "Sample.Name" (which is the specimen container ID in this case), "Analyte.Peak.Name" (which is the MRM, either quantifier or qualifier), and the one whose name starts with "Calculated.Concentration..". The last of these also contains the units of measure, which is analyte-dependent. To get rid of this analyte-dependence of the column name, we can find out which column this is and rename it.

Now we can pull out the three columns of interest and put them into a dataframe named results.
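
A sketch of both steps, assuming the exported column names follow the pattern described above:

conc.col <- grep("^Calculated\\.Concentration", names(flat.data))
names(flat.data)[conc.col[1]] <- "Calculated.Concentration"

results <- flat.data[, c("Sample.Name", "Analyte.Peak.Name", "Calculated.Concentration")]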

Now we only need the quantifier ion results for the analyte the user chose with the Tk GUI, so we can pull them out with grep(). I will pull out the qualifiers also, but we do not need them unless we want to compute ion ratios, for example.

Having pulled out the MRM of interest, we can define which rows correspond to standards, QC and patients by appropriate use of grep(). It happens that the CIDs all start with E followed by a 10 digit number so we can search for this pattern with a simple regular expression. Since we only need the QCs and patient data, the variable standards is calculated only as a matter of completeness.
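
A combined sketch of these two steps; I am assuming the first entry of MRM.names is the quantifier and guessing at the QC and standard naming patterns:

quant <- results[results$Analyte.Peak.Name == MRM.names[1], ]   # quantifier rows
qual  <- results[results$Analyte.Peak.Name == MRM.names[2], ]   # qualifier rows (kept for completeness)

patients  <- grep("^E[0-9]{10}$", quant$Sample.Name)   # CIDs: E followed by 10 digits
qc        <- grep("QC", quant$Sample.Name)             # assumed QC naming
standards <- grep("Std|Cal", quant$Sample.Name)        # only for completeness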

Preparing Data for Output

Now we can prepare to write a dataframe corresponding to the required format of the output file. To do so, we'll need to find out how many rows we are writing and then prepare a vector of the same length repeating the name of the worksheet and testcode:
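
A sketch, using the pieces defined above and the column order of the required format:

output.rows <- quant[c(qc, patients), ]
n.out <- nrow(output.rows)

final.output.data <- data.frame(
  instrument = rep(worksheet, n.out),
  cid        = output.rows$Sample.Name,
  test.code  = rep(test.code, n.out),
  result     = suppressWarnings(as.numeric(output.rows$Calculated.Concentration)),  # "No Peak" becomes NA
  stringsAsFactors = FALSE
)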

Now we can replace all the NA values that replaced “No Peak” with the correct LoQ according to which analyte we are looking at.
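
For example:

final.output.data$result[is.na(final.output.data$result)] <- paste("<", LoQ)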

Our final.output.data dataframe looks like it behaved properly.

Timestamping, Writing and Archiving

And finally, we create directories to archive our data (if those directories do not exist) and write the files with an appropriate timestamp determined using Sys.time(). Since colons (i.e. : ) don't play nicely in filenames on all operating systems, we can use gsub() to get rid of them. We also pass along error messages or confirmation messages to the user as appropriate.
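
A sketch of these final steps; the directory and file-naming conventions are arbitrary choices of mine:

time.stamp <- gsub(":", "-", format(Sys.time(), "%Y-%m-%d %H:%M:%S"))
archive.dir <- file.path(flat.file.dir, "archive")
if (!dir.exists(archive.dir)) dir.create(archive.dir)

out.name <- file.path(flat.file.dir, paste0(test.code, "_", time.stamp, ".csv"))
write.table(final.output.data, out.name, sep = ",",
            row.names = FALSE, col.names = FALSE, quote = FALSE)
file.copy(out.name, archive.dir)
message("Output written to ", out.name)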

Finally, we would wrap all of the directory–creation and file–operation in an if statement tied to the variable called problem we created previously. You will see this in the final source–code linked below.

Other Things You Can Do

Now, you can easily modify this to deal with multiple analytes that are always on the same run, such as vitamin D2 and vitamin D3. If you wanted to suppress results failing ion-ratio criteria (which could be concentration-dependent, of course), or if you had specimens with unexpectedly low IS counts, you could easily censor them to prevent their upload and then review them manually. You could also append canned comments to your results with a dash between your result and the comment. In fact, you could theoretically develop very elaborate middleware for QC evaluation and interpretation. You could also use RMarkdown to generate PDF reports for the run which could include calibration curve plots, plots of quantifier results vs qualifier results, and results that fail various criteria.

Source

You can download the source code and three example flat files here. Setting the source code up as a "clickable" script is somewhat dependent on the operating system you are working on. Since most of you will be on a Windows system, you can follow this tutorial. You can also use a Windows batch file to call your script.

Final Thought

Now that your file is generated, it is ready to upload via SSH. This is usually performed manually but could be automated. Don't implement this code into routine use unless you know what you are doing and you have tested it extensively. By using and/or modifying it, you become entirely responsible for its correct operation. Excel is like a butter knife and R is like a Swiss Army knife. You must be careful with it because…

From everyone who has been given much, much will be demanded; and from the one who has been entrusted with much, much more will be asked.

Luke 12:48

NA NA NA NA, Hey Hey Hey, Goodbye

Removing NA’s from a Data Frame in R

The Problem

Suppose you are doing a method comparison for which some results are above or below the linear range of your assay(s). Generally, these will appear in your spreadsheet (gasp!) program as \(< x\) or \(> y\) or, in the case of our mass spectrometer, "No Peak". When you read these data into R using read.csv(), R will turn them into factors, which I personally find super-annoying and which inspired this conference badge (see bottom right), as I learned from University of British Columbia prof Jenny Bryan.

For this reason, when we read the data in, it is convenient to choose the option stringsAsFactors = FALSE. In doing so, the data will be treated as strings and be in the character class. But for regression comparison purposes, we need to make the data numeric and all of the \(< x\) and \(> y\) results will be converted to NA. In this post, we want to address a few questions that follow:

  • How do we find all the NA results?
  • How can we replace them with a numeric (like 0)?
  • How can we rid ourselves of rows containing NA?

Finding NA's

Let's read in the data which comes from a method comparison of serum aldosterone between our laboratory and Russ Grant's laboratory (LabCorp) published here. I'll read in the data with stringsAsFactors = FALSE. These are aldosterone results in pmol/L. To convert to ng/dL, divide by 27.7.
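
Something like this; I'll call the dataframe data, and the file name is a placeholder:

data <- read.csv("aldosterone_comparison.csv", stringsAsFactors = FALSE)
str(data)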

You can see the problem immediately: our data ("Aldo.Us") is a character vector. This is not good for regression. Why did this happen? We can find out:

Ahhh…it's the dreaded “No Peak”. This is what the mass spectrometer has put in its data file. So, let's force everything to numeric:
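
That is:

data$Aldo.Us <- as.numeric(data$Aldo.Us)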

We see the warnings about the introduction of NAs. And we get:

Now we have 3 NAs. We want to find them and get rid of them. From the screen we could figure out where the NAs were and manually replace them. This is OK on such a small data set but when you start dealing with data sets having thousands or millions of rows, approaches like this are impractical. So, let's do it right.

If we naively try to use an equality, we find out nothing:
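
That is, something like:

data$Aldo.Us == NA   # every element of the answer is NA, not TRUE or FALSE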

Hunh? Whasgoinon?

This occurs because NA means “unknown”. Think about it this way. If one patient's result is NA and another patient's result is NA, then are the results equal? No, they are not (necessarily) equal, they are both unknown and so the comparison should be unknown also. This is why we do not get a result of TRUE when we ask the following question:
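
That is:

NA == NA
## [1] NA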

So, when we ask R if unknown #1 is equal to unknown #2, it responds with “I dunno.”, or “NA”. So if we want to find the NAs, we should inquire as follows:

or, for less verbose output:
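
Both forms, using the data dataframe from above:

is.na(data$Aldo.Us)          # logical vector with TRUE at the NA positions
which(is.na(data$Aldo.Us))   # just the offending row numbers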

Hey Hey! Ho Ho! Those NAs have got to go!

Now we know where they are, in rows 29, 46, and 76. We can replace them with 0, which is OK but may pose problems if we use weighted regression (i.e. if we have a 0 in the x-data and we weight data by 1/x). Alternatively, we can delete the rows entirely.

To replace them with 0, we can write:

and this is equivalent:
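
That is, either of these equivalent statements:

data$Aldo.Us[is.na(data$Aldo.Us)] <- 0
data$Aldo.Us[which(is.na(data$Aldo.Us))] <- 0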

To remove the whole corresponding row, we can write:

or:
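
That is, either of:

data <- data[!is.na(data$Aldo.Us), ]
data <- data[-which(is.na(data$Aldo.Us)), ]   # careful: -which() misbehaves if there happen to be no NAs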

Complete Cases

What if there were NA's hiding all over the place in multiple columns and we wanted to banish any row containing one or more NA? In this case, the complete.cases() function is one way to go:

This function shows us which rows have no NAs (the ones with TRUE as the result) and which rows have NAs (the three with FALSE). We can banish all rows containing any NAs generally as follows:
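
For example, keeping the cleaned data in a new dataframe (the name data.clean is my choice):

cc <- complete.cases(data)
cc                         # TRUE for rows with no NAs anywhere
data.clean <- data[cc, ]   # keep only the complete rows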

This data set now has 93 rows:

You could peruse the excluded data like this:
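
For example:

data[!cc, ]   # the rows that contained NAs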

na.omit()

Another way to remove incomplete cases is the na.omit() function (as Dr. Shannon Haymond pointed out to me). So this works too:
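
That is:

data.clean <- na.omit(data)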

Row Numbers are Actually Names

In all of these approaches, you will notice something peculiar. Even though we have excluded the three rows, the row numbering still appears to imply that there are 96 rows:

but if you check the dimensions, there are 93 rows:

Why? This is because the row numbers are not row numbers; they are numerical row names. When you exclude a row, none of the other row names change. This was bewildering to me in the beginning. I thought my exclusions had failed somehow.

Now we can move on

Once this is done, you can go on and do your regression, which, in this case, looks like this.

[Figure: Comparison of Serum Aldosterone]

Finally, if you are ever wondering what fraction of your data is comprised of NA, rather than the absolute number, you can do this as follows:

If you applied this to the whole dataframe, you get the fraction of NA's in the whole dataframe (again–thank you Shannon):
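
Both calculations, as a sketch:

mean(is.na(data$Aldo.Us))   # fraction of NAs in one column
mean(is.na(data))           # fraction of NAs in the whole dataframe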

Final Thought:

Ecclesiastes 1:9.

-Dan

Unit Converter

Introduction

Dan continues to crank out book chapter-length posts, which probably means that I should jump in before getting further behind…so here we go.

In the next few posts, I’d like to cover some work to help you to process aggregated proficiency testing (PT) data. Interpreting PT data from groups such as the College of American Pathologists (CAP) is, of course, a fundamental task for lab management. Comparing your lab’s results to peer group data from other users of the same instrumentation helps to ensure that your patients receive consistent results, and it provides at least a crude measure to ensure that your instrument performance is “in the ballpark”. Of course, many assays show significant differences between instrument models and manufacturers that can lead to results that are not comparable as a patient moves from institution to institution (or when your own lab changes instruments!). There are a number of standardization and harmonization initiatives underway (see http://harmonization.net, for example) to address this, and understanding which assays show significant bias compared to benchmark studies or national guidelines is a critical task for laboratorians. All of this is further complicated by the fact that sample matrix can significantly affect assay results, and sample commutability is one important reason why we can’t just take, say, CAP PT survey results (not counting the accuracy-based surveys) and determine which assays aren’t harmonized.

However.

With all of those caveats, it can still be useful to look through PT data in a systematic way to compare instruments. Ideally, we'd like to have everything in an R-friendly format that would allow us to ask systematic questions about the data (things like "for how many assays does instrument X differ from instrument Y by >30% using PT material?", or "how many PT materials give good concordance across all manufacturers?"). If we have good, commutable, accuracy-based testing materials, we can do even better. The first task in all of this fun, however, is getting the data into a format that R is happy with; no one I know likes the idea of retyping numbers from paper reports. I'm hoping to talk more about this in a future post, as there are lots of fun R text processing issues lurking here. In the meantime, though, we have a much more modest preliminary task to tackle.

Simple unit conversion

I’m currently staring at a CAP PT booklet. It happens to be D-dimer, but you can pick your own favorite analyte (and PT provider, for that matter). Some of the results are in ng/mL, some are ug/mL, and one is in mg/L. Let’s create an R function that allows us to convert between sets of comparable units. Now, although I know that Dan is in love with SI units (#murica), we’ll start by simply converting molar→molar and gravimetric→gravimetric. Yes, we can add fancy analyte-by-analyte conversion tables in the future…but right now we just want to get things on the same scale. In the process, we’ll cover three useful R command families.

First of all, we should probably decide how we want the final function to look. I’m thinking of something like this:

results <- labunit.convert(2.3, "mg/dL", "g/L")
results
## [1] 0.023

…which converts 2.3 mg/dL to 0.023 g/L. We should also give ourselves bonus points if we can make it work with vectors. For example, we may have this data frame:

mydata
##   Value   Units Target.Units
## 1  2.30    g/dL         mg/L
## 2 47.00 nmol/mL      mmol/dL
## 3  0.19    IU/L        mIU/L

and we would like to be able to use our function like this:

labunit.convert(mydata$Value, mydata$Units, mydata$Target.Units)
## [1] 2.3e+04 4.7e-03 1.9e+02

We should also handle things that are simpler

labunit.convert(0.23, "g", "mg")
## [1] 230

Getting started

Now that we know where we’re going, let’s start by writing a function that just converts between two units and returns the log difference. We’ll call this function convert.one.unit(), and it will take two arguments:

convert.one.unit("mg", "ng")
## [1] 6

Basically, we want to take a character variable (like, say, “dL”) and break it into two pieces: the metric prefix (“d”) and the base unit (“L”). If it isn’t something we recognize, the function should quit and complain (you could also make it return ‘NA’ and just give a warning instead, but we’ll hold off on that for now). We’ll start with a list of things that we want to recognize.

convert.one.unit <- function (unitin, unitout) {
  metric.prefixes <- c("y", "z", "a", "f", "p", "n", "u", "m", "c", "d", "", "da", "h", "k", "M", "G", "T", "P", "E", "Z", "Y")
  metric.logmultipliers <- c(-24, -21, -18, -15, -12, -9, -6, -3, -2, -1, 0, 1, 2, 3, 6, 9, 12, 15, 18, 21, 24)
  units.for.lab <- c("mol", "g", "L", "U", "IU")

Notice that the metric.prefixes variable contains the appropriate one- or two-character prefixes, and metric.logmultipliers has the corresponding log multiplier (for example, metric.prefixes[8] = “m”, and metric.logmultipliers[8] is -3). It’s also worth noting the "" (metric.prefixes[11]), which corresponds to a log multiplier of 0. The fact that "" is a zero-length string instead of a null means that we can search for it in a vector…which will be very handy!

And now for some regular expressions

This is the point where we tackle the first of the three command families that I told you about. If you’re not familiar with “regular expressions” in R or another language (Perl, Python, whatever), this is your entry point into some very useful text searching capabilities. Basically, a regular expression is a way of specifying a search for a matching text pattern, and it’s used with a number of R commands (grep(), grepl(), gsub(), regexpr(), regexec(), etc.). We’ll use gsub() as an example, since it’s one that many people are familiar with. Suppose that I have the character string “This is not a test”, and I want to change it to “This is a test”. I can feed gsub() a pattern that I want to recognize and some text that I want to use to replace the pattern. For example:

my.string <- "This is not a test"
my.altered.string <- gsub("not a ", "", my.string)   # replace "not a " with an empty string, ""
my.altered.string
## [1] "This is test"

That's fine as far as it goes, but we will drive ourselves crazy if we're limited to explicit matches. What if, for example, we also want to recognize "This is not…a test", or "This is not my kind of a test"? We could write three different gsub() statements, but that would get old fairly quickly. Instead of exactly matching the text, we'll use a pattern. A regular expression that will match all three of our input statements is "not.+a ", so we can do the following:

gsub("not.+a ", "", "This is not a test")
## [1] "This is test"
gsub("not.+a ", "", "This is not my kind of a test")
## [1] "This is test"

You can read the regular expression "not.+a " as "match the letters 'not', followed by one or more characters of any kind (the '.' matches any character and the '+' means one or more of them), followed by 'a '". You can find some very nice tutorials on regular expressions through Google, but for the purposes of this brief lesson I'll give you a mini-cheat sheet that probably handles 90% of the regular expressions that I have to write:

Special Character Meaning
. match any character
\d match any digit
\D match anything that isn’t a digit
\s match white space
\S match anything that isn’t white space
\t match a tab (less important in R, since you usually already have things in a data frame)
^ match the beginning of the string (i.e. “^Bob” matches “Bob my uncle” but not “Uncle Bob”)
$ match the end of the string
* match the previous thing when it occurs 0 or more times
+ match the previous thing when it occurs 1 or more times
? match the previous thing when it occurs 0 or 1 times
( .. ) (parentheses) enclose a group of choices or a particular substring in the match
| match this OR that (e.g. "(Bob|Pete)" matches "Dr. Bob Smith" or "Dr. Pete Jones" but not "Dr. Sam Jones")

It’s also important to remember for things like "\d" that R uses backslashes as the escape character…so you actually have to write a double backslash, like this: "\\d". A regular expression to match one or more digits would be "\\d+".

OK, back to work. Our next step is to remove all white space from the unit text (we want "dL" to be handled the same way as " dL" or "dL "), so we’ll add the following lines:

  unitin <- gsub("\\s", "", unitin)
  unitout <- gsub("\\s", "", unitout)

See what we’ve done? We asked gsub() to replace every instance of white space (the regular expression is "\\s") with "". Easy.

Paste, briefly

Next, we want to put together a regular expression that will detect any of our metric.prefixes or units.for.lab. To save typing, we’ll do it with paste(), the second of our three R command families for the day. You probably already know about paste(), but if not, it’s basically the way to join R character variables into one big string. paste("Hi", "there") gives “Hi there” (paste() defaults to joining things with a space), paste("Super", "cali", "fragi", "listic", sep="") changes the separator to "" and gives us “Supercalifragilistic”.  paste0() does the same thing as paste(..., sep=""). The little nuance that it’s worth noting today is that we are going to join together elements from a single vector rather than a bunch of separate variables…so we need to use the collapse = "..." option, where we set collapse to whatever character we want. You remember from the last section that | (OR) lets us put a bunch of alternative matches into our regular expression, so we will join all of the prefixes like this:

  prefix.combo <- paste0(metric.prefixes, collapse = "|")
  prefix.combo
## [1] "y|z|a|f|p|n|u|m|c|d||da|h|k|M|G|T|P|E|Z|Y"

What we’re really after is a regular expression that matches the beginning of the string, followed by 0 or 1 matches to one of the prefixes, followed by a match to one of the units. Soooo…

  prefix.combo <- paste0(metric.prefixes, collapse = "|")
  unit.combo <- paste0(units.for.lab, collapse = "|")
  
  unit.search <- paste0("^(", prefix.combo, ")?(", unit.combo, ")$")

  unit.search
## [1] "^(y|z|a|f|p|n|u|m|c|d||da|h|k|M|G|T|P|E|Z|Y)?(mol|g|L|U|IU)$"

So much nicer than trying to type that by hand. Next we’ll do actual pattern matching using the regexec() command. regexec(), as the documentation so nicely states, returns a list of vectors of substring matches. This is useful, since it means that we’ll get one match for the prefix (in the first set of parentheses of our regular expression), and one match for the units (in the second set of parentheses of our regular expression). I don’t want to belabor the details of this, but if we feed the output of regexec() to the regmatches() command, we can pull out one string for our prefix and another for our units. Since these are returned as a list, we’ll also use unlist() to coerce our results into one nice vector. If the length of that vector is 0, indicating no match, an error is generated.

  match.unit.in <- unlist(regmatches(unitin, regexec(unit.search, unitin)))
  match.unit.out <- unlist(regmatches(unitout, regexec(unit.search, unitout)))
  
  if (length(match.unit.in) == 0) stop(paste0("Can't parse input units (", unitin, ")"))
  if (length(match.unit.out) == 0) stop(paste0("Can't parse output units (", unitout, ")"))

If we were to take a closer look at match.unit.in, we would see that the first entry is the full match, the second entry is the prefix match, and the third entry is the unit match. To make sure that the units agree (i.e. that we're not trying to convert grams into liters or something similar), we use:

  if (match.unit.in[3] != match.unit.out[3]) stop("Base units don't match")

…and then finish by using the match() command to find the index in the metric.prefixes vector corresponding to the correct prefix (note that if there's no prefix matched, it matches the "" entry of the vector, which is very handy). That index allows us to pull out the corresponding log multiplier, and we then return the difference to get a conversion factor. Our final function looks like this [1]:

convert.one.unit <- function (unitin, unitout) {
  # the prefix codes for the metric system
  metric.prefixes <- c("y", "z", "a", "f", "p", "n", "u", "m", "c", "d", "", "da", "h", "k", "M", "G", "T", "P", "E", "Z", "Y")
  # ...and their corresponding log multipliers
  metric.logmultipliers <- c(-24, -21, -18, -15, -12, -9, -6, -3, -2, -1, 0, 1, 2, 3, 6, 9, 12, 15, 18, 21, 24)
  # The units that we'd like to detect.  I guess we could add distance, but that's not too relevant to most of the analytes that I can think of
  units.for.lab <- c("mol", "g", "L", "U", "IU")

  # remove white space
  unitin <- gsub("\\s", "", unitin)
  unitout <- gsub("\\s", "", unitout)
  
  # build the pieces of our regular expression...
  prefix.combo <- paste0(metric.prefixes, collapse = "|")
  unit.combo <- paste0(units.for.lab, collapse = "|")

  # ...and stitch it all together
  unit.search <- paste0("^(", prefix.combo, ")?(", unit.combo, ")$")

  # identify the matches
  match.unit.in <- unlist(regmatches(unitin, regexec(unit.search, unitin)))
  match.unit.out <- unlist(regmatches(unitout, regexec(unit.search, unitout)))
  
  if (length(match.unit.in) == 0) stop(paste0("Can't parse input units (", unitin, ")"))
  if (length(match.unit.out) == 0) stop(paste0("Can't parse output units (", unitout, ")"))
  
  if (match.unit.in[3] != match.unit.out[3]) stop("Base units don't match")
  
  # get the appropriate log multipliers
  logmult.in <- metric.logmultipliers[match(match.unit.in[2], metric.prefixes)]
  logmult.out <- metric.logmultipliers[match(match.unit.out[2], metric.prefixes)]
  
  # return the appropriate (log) conversion factor
  return(logmult.in - logmult.out)
}


# Try it out
convert.one.unit("mL","L")
## [1] -3

‘Apply’-ing yourself

We're actually most of the way there now. The final family of commands that we'd like to use is apply(), with various flavors that allow you to repeatedly apply (no surprise) a function to many entries of a variable. Dan mentioned this in his last post. He also mentioned not understanding the bad press that for loops get when they're small. I completely agree with him, but the issue tends to arise when you're used to a language like C (yes, I know we're talking about compiled vs. interpreted in that case), where your loops are blazingly fast. You come to R and try nested loops that run from 1:10000, and then you have to go for coffee. lapply(), sapply(), mapply(), apply(), etc. have advantages in the R world. Might as well go with the flow on this one.

We're going to make a convert.multiple.units() function that takes unitsin and unitsout vectors, binds them together as two columns, and then runs apply() to feed them to convert.one.unit(). Because apply() lets us iterate a function over either dimension of a matrix, we can bind the two columns (a vector of original units and a vector of target units) and then iterate over each pair by row (that's what the 1 means as the second argument of apply(): it applies the function by row). If the anonymous function syntax throws you off…let us know in the comments, and we'll cover it some time. For now, just understand that the last part of the line feeds values to the convert.one.unit() function.

convert.multiple.units <- function (unitsin, unitsout) {
  apply(cbind(unitsin, unitsout), 1, function (x) {convert.one.unit(x[1], x[2])})
}

Finally, we'll go back to our original labunit.convert() function. Our overall plan is to split each unit by recognizing the "/" character using strsplit(). This returns a list of vectors of split groups (i.e. "mg/dL" becomes a list whose first element is the character vector ("mg", "dL")). We then make sure that the lengths match (i.e. if the input is "mg/dL" and the output is "g/mL" that's OK, but if the output is "g" then that's a problem), obtain all the multipliers, and then add them all up. We add because they're logs…and actually we mostly subtract, because we're dividing. For cuteness points, we return 2*x[1] - sum(x), which will accurately calculate not only conversions like mg→g and mg/dL→g/L, but will even do crazy stuff like U/g/L→mU/kg/dL. Don't ask me why you'd want to do that, but it works. The final multiplier is used to convert the vector of values (good for you if you noticed that we didn't check to make sure that the length of the values vector matched the unitsin vector…but we can always recycle our values that way).

labunit.convert <- function (values, unitsin, unitsout) {
  insep <- strsplit(unitsin, "/")
  outsep <- strsplit(unitsout, "/")

  lengthsin <- sapply(insep, length)
  lengthsout <- sapply(outsep, length)
  
  if (!all(lengthsin == lengthsout)) stop("Input and output units can't be converted")

  multipliers <- mapply(convert.multiple.units, insep, outsep)
  
  final.multiplier <- apply(t(multipliers), 1, function (x) {2*x[1] - sum(x)})
  
  return(values * 10^final.multiplier)
}

OK, enough. Back over to you, Dan. We now have a piece of code that we can use when we start comparing PT data from different instruments. That's the immediate plan for future posts [2], and before long there may even be an entry with nice graphics like those of my Canadian colleague.

-SRM


  1. I received a request to convert “G/L” to “M/mL”, which was interpreted as converting billions/L to millions/mL. This requires changing our convert.one.unit() function to handle a “no units” case. Actually, it’s not as difficult as it sounds; if we just add an empty string (i.e. "") to the end of the units.for.lab vector, our regular expression does the right thing. Your edited line would read units.for.lab <- c("mol", "g", "L", "U", "IU", ""). The reason this works, incidentally, is that there’s no overlap (except "") between the prefixes and the units, so the pattern match doesn’t have a chance to be confused.
  2. Following Dan’s lead, I should point out a major caveat to any such plans is James 4:13-15. Double extra credit if you are interested enough to look it up.