Feature Engineering of the Gamma Region for Machine Learning Identification of Monoclonal Proteins

Background

In a previous post I was toying with better ways to integrate monoclonal proteins by deconvolution of monoclonal peaks. Turns out this is a hard problem. In any case, Deep Learning Neural Networks have become so mature that I wanted to see how easy it is to identify the presence of monoclonal proteins in the gamma region from the densitometric scan. This is not a new idea as it was first suggested in 1992 in J. Clin Pathol by Kratzer et al. I have to say, that was a very cool idea but as far as I can tell, it has not gotten much traction. This paper is mostly cited by review articles but there are a handful of further attempts. The challenges of developing a commercial product are probably regulatory in nature. However, I am hopeful that it would not be too hard to make a decision support tool.

When Kratzer et al. tackled this problem, they were working on an Escom “IBM Compatible” computer with a 286 processor and 1 MB of RAM. Escom is a defunct German computer company that purchased Commodore after it began to flounder. This is why it would have been important for them to engineer their features carefully. I expect that Tensorflow may not need much feature engineering at all, but I want to try what they suggested as the features for the gamma region.

They took the scan of the gamma region, baselined it, made it periodic and applied the fast Fourier transform (FFT) to represent the scan in frequency space. This is a very good idea.

Some Basic Stuff and Sanity Checking

Let’s make a function that is a sine function for \(-\ell \le x \le \ell\) and is otherwise 0.
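
A minimal sketch of such a function, assuming for concreteness a sine of frequency 1 (one cycle per unit of \(t\)):

windowed.sine <- function(t, ell = 1) {
  # sine wave inside -ell <= t <= ell, zero outside the window
  ifelse(abs(t) <= ell, sin(2 * pi * t), 0)
}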

And now let’s examine this function for \(-10 < t < 10\) and \(\ell = 1\)

plot of chunk unnamed-chunk-205
The FFT of this function can be determined with the R fft() function. The frequency increment \(\Delta f\) is calculated with the formula

\[\Delta f = \frac{1}{N \Delta t} \]
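
A minimal sketch of the calculation, assuming the windowed sine above is sampled with spacing \(\Delta t\):

delta.t <- 0.01
t <- seq(-10, 10, by = delta.t)
y <- windowed.sine(t, ell = 1)
N <- length(y)
Y <- fft(y)                          # discrete Fourier transform
delta.f <- 1/(N * delta.t)           # the frequency increment from the formula above
freq <- (0:(N - 1)) * delta.f        # frequency axis for the FFT output
plot(freq, Mod(Y), type = "l", xlab = "frequency", ylab = "|FFT|")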

plot of chunk unnamed-chunk-206
Zooming in a bit we have:

plot of chunk unnamed-chunk-207

This looks as it should – trying to be a delta function, namely \(\delta(x - 1)\). The more periods we feed the FFT, the better this becomes:

So as the number of periods increases, things improve and by 10 periods, we have a very sharp \(\delta\) function.

plot of chunk unnamed-chunk-209

Applying FFT to a Gaussian

We can do the same thing to a Gaussian to mimic the gamma region of electrophoresis. To do this we just replicate the Gaussian, inverting it in order to make the resulting function odd. Let’s just examine 5 periods for simplicity, which is 10 replications of the gamma region.
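
A minimal sketch of the idea (the Gaussian parameters here are arbitrary stand-ins for the gamma region):

x <- seq(0, 1, length.out = 256)
gauss <- dnorm(x, mean = 0.5, sd = 0.15)       # stand-in for the gamma region
one.period <- c(gauss, -rev(gauss))            # appending an inverted copy makes the function odd
periodic <- rep(one.period, 5)                 # 5 periods = 10 replications of the region
spec <- Mod(fft(periodic))
plot(spec[1:50], type = "h", xlab = "frequency index", ylab = "|FFT|")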

plot of chunk unnamed-chunk-210

So, with a pure Gaussian, we more or less have 4 peaks in the FFT. If we now mimic a monoclonal protein by adding a second Gaussian, we can see that the FFT becomes more complex because more frequencies (i.e. \(n\)) of \(e^{\frac{-2 \pi i}{N}kn}\) are required to characterize the concatenated periodic function.

plot of chunk unnamed-chunk-211

The gamma region with a monoclonal results in an obviously more complex FFT.

Apply to a Real Gamma Region

Here is an example densitometric scan. We can pull out the gamma region.

plot of chunk unnamed-chunk-212

Then we can baseline the gamma region, replicate it 10 times and apply the FFT.
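
A minimal sketch of those steps, assuming the extracted gamma region sits in a dataframe called gamma.scan with columns x and y (the simple linear baseline here is an assumption):

baseline <- seq(gamma.scan$y[1], tail(gamma.scan$y, 1), length.out = nrow(gamma.scan))
y.flat <- gamma.scan$y - baseline              # crude linear baselining
periodic <- rep(c(y.flat, -rev(y.flat)), 5)    # odd periodic extension, 10 replications
spec <- Mod(fft(periodic))
plot(spec[1:50], type = "h", xlab = "frequency index", ylab = "|FFT|")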

plot of chunk unnamed-chunk-213

plot of chunk unnamed-chunk-213

And that is more or less the gist of things. The simpler the gamma region is, the fewer peaks there will be in the frequency domain. This permits recapitulation of what Kratzer et al. did in their 1992 paper with modern techniques. As I say, I am not sure it’s necessary but it might improve model performance. Of course it would seem possible to FFT the whole scan also and reduce the dimensionality of the problem to about 30 frequencies and amplitudes.

Parting Thought

As the FFT discerns the frequencies of the function, so also:

“Do not consider his appearance or his height … The Lord does not look at the things people look at. People look at the outward appearance, but the Lord looks at the heart.”

1 Sam 16:7

A Deep Learning Classifier of New Testament Verse Authorship using the R Keras Package

Introduction

This is the first of what I am hoping are a number of posts on different machine learning classifiers. The subject matter is not lab medicine but the methodology applies to any similar project. For example, maybe you want to classify the text of a general internal medicine consult into its subspecialty based on the words used or perhaps you want to determine which IT tickets are likely high priority. Maybe you want to convert free text diagnoses into categorical diagnoses. Ultimately, the problem I want to tackle is text classification.

In any case, the book that I have been reading at home is Deep Learning with R by Francois Chollet and JJ Allaire, which has many interesting and easy-to-follow examples. Since it’s on my mind, I thought a deep learning model would be a good place to start. But I did not want to just redo one of the examples from the book because the data sets are already cleansed and in that sense much of the heavy lifting is done. I wanted to start from a new data set and use the approach shown in section 3.5 but apply it to a new text classification problem. Because I want to follow the basic flow of the Reuters News Wire classifier, I need a similar natural language processing (NLP) multiclass text classifier problem.

The problem I have chosen is one of authorship classification. Specifically, given any Greek sentence taken from the New Testament, can I make a deep learning classifier that will identify the author of a verse that the classifier has never seen?

Data Cleansing

The text of the New Testament is available online from numerous sources. I downloaded it here and chose the Byzantine Textform 2005 file. The text has been cleansed, put to lower case and transliterated to English characters. There are several steps to get it to a simple dataframe which the following code achieves. The code makes a dataframe where each row is a verse.
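
The original chunk is not reproduced here, but a minimal sketch of the sort of wrangling involved might look like the following (the line format of the .ASC files and the author.lookup vector mapping file names to authors are assumptions):

library(dplyr)
library(tibble)
files <- list.files(pattern = "\\.ASC$")
nt.data <- lapply(files, function(f) {
  lines <- readLines(f)
  tibble(reference = sub(" .*", "", lines),      # the leading chapter:verse token
         verse     = sub("^\\S+ ", "", lines),   # everything after the reference
         book      = f,
         author    = author.lookup[f])           # hypothetical file-to-author lookup
}) %>% bind_rows()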

Now that this wrangling is complete, we have a tibble that looks like this:

reference verse book author
1:1 biblov genesewv ihsou cristou uiou dauid uiou abraam MT05.ASC matthew
1:2 abraam egennhsen ton isaak isaak de egennhsen ton iakwb iakwb de egennhsen ton ioudan kai touv adelfouv autou MT05.ASC matthew
1:3 ioudav de egennhsen ton farev kai ton zara ek thv yamar farev de egennhsen ton esrwm esrwm de egennhsen ton aram MT05.ASC matthew
1:4 aram de egennhsen ton aminadab aminadab de egennhsen ton naasswn naasswn de egennhsen ton salmwn MT05.ASC matthew
1:5 salmwn de egennhsen ton booz ek thv racab booz de egennhsen ton wbhd ek thv rouy wbhd de egennhsen ton iessai MT05.ASC matthew
1:6 iessai de egennhsen ton dauid ton basilea dauid de o basileuv egennhsen ton solomwna ek thv tou ouriou MT05.ASC matthew
1:7 solomwn de egennhsen ton roboam roboam de egennhsen ton abia abia de egennhsen ton asa MT05.ASC matthew
1:8 asa de egennhsen ton iwsafat iwsafat de egennhsen ton iwram iwram de egennhsen ton ozian MT05.ASC matthew
1:9 oziav de egennhsen ton iwayam iwayam de egennhsen ton acaz acaz de egennhsen ton ezekian MT05.ASC matthew
1:10 ezekiav de egennhsen ton manassh manasshv de egennhsen ton amwn amwn de egennhsen ton iwsian MT05.ASC matthew

We should get verse counts that match what is expected, which we do.

book counts
MT05.ASC 1070
MR05.ASC 677
LU05.ASC 1149
JOH05.ASC 878
AC05.ASC 1003
RO05.ASC 432
1CO05.ASC 436
2CO05.ASC 256
GA05.ASC 148
EPH05.ASC 154
PHP05.ASC 103
COL05.ASC 94
1TH05.ASC 88
2TH05.ASC 46
1TI05.ASC 112
2TI05.ASC 82
TIT05.ASC 45
PHM05.ASC 24
HEB05.ASC 302
JAS05.ASC 107
1PE05.ASC 104
2PE05.ASC 60
1JO05.ASC 104
2JO05.ASC 12
3JO05.ASC 13
JUDE05.ASC 24
RE05.ASC 403

And we can check the unique word count
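
Something as simple as this will do, assuming the tibble above:

nt.words <- unlist(strsplit(nt.data$verse, " "))
length(unique(nt.words))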

Normally at this point, we might remove stop words and then stem and lemmatize the text (i.e. get rid of useless little words and suffixes that cause words of the same meaning to look different). This would be more important in more traditional learning classifiers but is likely less important when using Keras and Tensorflow. If I were running this classifier on the English text of the KJV for example, I would run it with and without such a process and gauge the performance change. There are numerous NLP packages specifically dedicated to this task. I am going to skip it here. This process is, of course, highly language-dependent.

The other thing I need to do is make the author-factor column numbered 0-8 instead of 1-9 because R is going to be calling Python code and Python starts counting at 0. This bug took me a while to sort out.
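
For example:

# recode the author factor to integers 0-8 so that the python side is happy
nt.data$author_factor <- as.integer(factor(nt.data$author)) - 1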

Now we will make a tokenizer, that is a function to convert words to integers and we will limit the model to the top 15000 out of the 17156 unique words found in the text.
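
A minimal sketch with the keras text_tokenizer():

library(keras)
max_words <- 15000
tokenizer <- text_tokenizer(num_words = max_words) %>%
  fit_text_tokenizer(nt.data$verse)
sequences <- texts_to_sequences(tokenizer, nt.data$verse)   # each verse becomes a list of integers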

Now we need to split the text randomly into training and testing sets in an 80:20 split.
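
A minimal sketch of the split (the seed is arbitrary):

set.seed(2019)
train_id <- sample(seq_len(nrow(nt.data)), size = floor(0.8 * nrow(nt.data)))
train_data   <- sequences[train_id]
test_data    <- sequences[-train_id]
train_labels <- nt.data$author_factor[train_id]
test_labels  <- nt.data$author_factor[-train_id]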

The data is very imbalanced, that is, there are authors (like Jude and James) that have very few verses ascribed to them and there are others (like Paul and Luke) who have many verses. For this reason, we should sanity check our training and testing data to make sure that we have sampled about 80% of each book. There are specific tools to achieve this process which is referred to as stratified sampling.

We can see that we have a problem with author 2 who has only 24 verses. This is probably not going to matter much but we can try balanced sampling for which we do get better proportions.

Now we can tokenize the data, that is, convert the verses from lists of integers to a one-hot encoded form.
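
A minimal sketch in the spirit of the Reuters example (vectorize_sequences is a helper defined here, not a keras function):

vectorize_sequences <- function(sequences, dimension = max_words) {
  results <- matrix(0, nrow = length(sequences), ncol = dimension)
  for (i in seq_along(sequences))
    results[i, sequences[[i]]] <- 1               # set the columns of the words present to 1
  results
}
x_train <- vectorize_sequences(train_data)
x_test  <- vectorize_sequences(test_data)
y_train <- to_categorical(train_labels, num_classes = 9)
y_test  <- to_categorical(test_labels,  num_classes = 9)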

Let’s satisfy ourselves that the training data is in random order:

reference verse book author author_factor verse_number
16:6 all oti tauta lelalhka umin h luph peplhrwken umwn thn kardian JOH05.ASC john 1 3584
2:20 all ecw kata sou oti afeiv thn gunaika sou iezabel h legei eauthn profhtin kai didaskei kai plana touv emouv doulouv porneusai kai fagein eidwloyuta RE05.ASC john 1 7563
21:23 kai elyonti autw eiv to ieron proshlyon autw didaskonti oi arciereiv kai oi presbuteroi tou laou legontev en poia exousia tauta poieiv kai tiv soi edwken thn exousian tauthn MT05.ASC matthew 5 705
5:1 dikaiwyentev oun ek pistewv eirhnhn ecomen prov ton yeon dia tou kuriou hmwn ihsou cristou RO05.ASC paul 6 4895
12:29 h pwv dunatai tiv eiselyein eiv thn oikian tou iscurou kai ta skeuh autou diarpasai ean mh prwton dhsh ton iscuron kai tote thn oikian autou diarpasei MT05.ASC matthew 5 374
4:24 alla kai di hmav oiv mellei logizesyai toiv pisteuousin epi ton egeiranta ihsoun ton kurion hmwn ek nekrwn RO05.ASC paul 6 4893
27:31 eipen o paulov tw ekatontarch kai toiv stratiwtaiv ean mh outoi meinwsin en tw ploiw umeiv swyhnai ou dunasye AC05.ASC luke 3 4734
1:25 kai hrwthsan auton kai eipon autw ti oun baptizeiv ei su ouk ei o cristov oute hliav oute o profhthv JOH05.ASC john 1 2921
3:6 kai exelyontev oi farisaioi euyewv meta twn hrwdianwn sumboulion epoioun kat autou opwv auton apoleswsin MR05.ASC mark 4 1149
8:4 ei men gar hn epi ghv oud an hn iereuv ontwn twn ierewn twn prosferontwn kata ton nomon ta dwra HEB05.ASC unknown 8 6930

Now we can build a basic model:
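
A minimal sketch of such a model (the layer sizes are my assumptions; the output layer has 9 units for the 9 author classes):

model <- keras_model_sequential() %>%
  layer_dense(units = 64, activation = "relu", input_shape = c(max_words)) %>%
  layer_dense(units = 64, activation = "relu") %>%
  layer_dense(units = 9, activation = "softmax")

model %>% compile(optimizer = "rmsprop",
                  loss = "categorical_crossentropy",
                  metrics = "accuracy")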

and pull out validation data, again in an 80:20 split.

Now we run the model:
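
A minimal sketch of the training run; for brevity this sketch uses the validation_split argument of fit() instead of manually partitioned validation arrays, and the epoch and batch-size values are assumptions:

history <- model %>% fit(x_train, y_train,
                         epochs = 20,
                         batch_size = 128,
                         validation_split = 0.2)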

plot of chunk unnamed-chunk-15

We can show the model performance graphically:

plot of chunk unnamed-chunk-17

Results are not great because many authors are being misclassified as Paul or Luke. This is likely from author imbalance so we can address the imbalance with weights and with dropout layers as suggested in this very informative tutorial from Dr. Bharatendra Rai.

What we get looks a little better with more counts on the diagonal.

plot of chunk unnamed-chunk-19

The model is jumpy on the small books, probably because they are undersampled. This means that k-fold cross-validation will help us assess model performance. I am not sure if I should try to have balanced sampling in the folds but I am not going to worry about that at the moment.

Run the k-fold cross-validation.

Validation accuracy improves modestly with more epochs but the model definitely overfits the training data (getting to the high 90s in accuracy). This is a bit of a conundrum to me for which I do not know the answer (those who know, please comment): namely, I can overfit the model to make gains on the validation set and these do improve performance on the test set but I expect that this improvement is happening in some non-generalizable way.

plot of chunk unnamed-chunk-22

Likewise loss slowly declines over many epochs but the model overfits.

plot of chunk unnamed-chunk-23

In any case, this is the model performance rerunning the k-fold cross validation with 5 epochs.

Final Outcome

Satisfied enough that 5 epochs should be OK, I can run the model on the whole training set and look at its performance on the testing set.

plot of chunk unnamed-chunk-26

Some interesting findings:

  • John seems to be the easiest to classify. This fits well with his unique authorship style.
  • The synoptic gospels are easily misclassified among one another. Again, this fits with the overlap of stories, parables and other content.
  • Hebrews looks more like Hebrews than it looks like Paul. This fits with the perspective that Paul is not the author of Hebrews.
  • Poor James, Jude and Peter: just not enough verses to get proper classification. I am sure there are ways to address this kind of imbalance if classifying Jude correctly were a very important thing to do.

I think I am going to stop trying to improve this because it is not a real problem but I hope that someone else can recycle some code for a real-life problem. I would be interested in comments on how to get improved classification of small classes.

Parting Easter Thought

ouk estin wde hgeryh gar kaywv eipen deute idete ton topon opou ekeito o kuriov, Matthew 28:6

Going Viral with #rstats to Ramp up COVID Nucleic Acid Testing in the Clinical Laboratory

My friend Patrick Mathias from the University of Washington Department of Laboratory Medicine gave a fantastic talk on MSACL Connect last week about using #rstats and other open source tools for data automation in COVID sample collection, accessioning, pooling and direct-to-patient reporting.

Following his talk, I shared the details of our COVID sample pooling data automation strategy which is built on R, R Shiny and Ubuntu. Click here or on the image below to go to the talk on MSACL Connect.

 

 

“When he heard this, Jesus said, ‘This sickness will not end in death. No, it is for God’s glory so that God’s Son may be glorified through it.'”

John 11:4

Calculate all the CVs of all the QC Levels of all the Methods of all the Instruments at all the Sites all at once … with Sunquest LIS and dplyr

Background

As part of our lab accreditation requirements, we have to provide measurement uncertainty estimates for all tests at all hospital sites. As you might imagine, with thousands of testcodes in Sunquest LIS, getting all the coefficients of variation (CVs) represents a daunting task for the quality technologist to accomplish. As it turns out, by capturing the ssh session in a .txt file, you can use R's dplyr package to do this all in a few lines of code.

Getting the Raw Data

You need to get the raw data from Sunquest. You can capture the telnet (yes… older versions of Sunquest use telnet and pass patient information and user passwords unencrypted across the hospital network o_O) or the ssh session to a file using the Esker SmarTerm which Sunquest packages in their product and refers to as “roll-n-scroll”. People disparage SmarTerm as an old “DOS tool”–whereas Sunquest is actually hosted on the AIX operating system. SmarTerm access to Sunquest is a gazillion times faster than the GUI and permits us to capture the raw QC data we need. To capture the session, select from the dropdown menu as shown here:

SQ Screenshot1

If you are using Mac OS or Linux OS, you can also capture the ssh session by connecting from the terminal and using tee to dump the session to a file.

ssh user@serverIPaddress | tee captured_session.txt

Once you have connected, use the QC function and select output printer 0 (meaning the screen) and make these selections, changing the dates as appropriate:

SQ Screenshot1

If you make no selections at all for any of:

  • TEST:
  • WORKSHEET:
  • METHOD:
  • CONTROL:
  • SHIFT #:
  • TECH:
  • TESTS REQUESTED:

then you will extract everything, which is what you want and which will make for a very big .txt file. There will be a delay and then thousands of QC results will dump to the screen and to your file. When this is complete, end your SmarTerm or ssh or telnet (cringe) session. I saved my text dump as raw_SQ8.txt.

Getting it into R and parsing it

Your data will come out as a fixed-width file with no delimiters. It will also have a bunch of junk at the top and bottom of the file detailing your commands from the start and end of the session. These need to be discarded. I just used grep() to find all the lines with the appropriate date pattern. After reading it in, because I am lazy, I wrote it back out and read it in again with read.fwf().
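
A minimal sketch of that workflow (the date pattern and, in particular, the column widths are placeholders that would need adjusting to your own capture):

raw <- readLines("raw_SQ8.txt")
qc.lines <- grep("20[0-9]{2}-[0-9]{2}-[0-9]{2}", raw, value = TRUE)   # keep only lines containing a date
writeLines(qc.lines, "qc_only.txt")
qc.data <- read.fwf("qc_only.txt",
                    widths = c(10, 10, 10, 20, 20, 5, 3, 10, 5),      # placeholder widths
                    col.names = c("test.code", "instr.code", "qc.name", "qc.expire",
                                  "date.performed", "tech.code", "shift", "result", "modifier"),
                    strip.white = TRUE)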

Now that all the data munging is done, we can examine the data:

test.code instr.code qc.name qc.expire date.performed tech.code shift result modifier
BCL JBGAS RAD1 R0173 EXP MAR 2021 2019-11-15 09:17:00 68 2 122 NA
BCL JBGAS RAD1 R0173 EXP MAR 2021 2019-11-15 20:51:00 68 3 122 NA
BCL JBGAS RAD1 R0173 EXP MAR 2021 2019-11-15 21:47:00 68 3 122 NA
BCL JBGAS RAD1 R0173 EXP MAR 2021 2019-11-15 21:50:00 68 3 122 NA
BCL JBGAS RAD1 R0173 EXP MAR 2021 2019-11-17 07:10:00 15 1 122 NA
BCL JBGAS RAD1 R0173 EXP MAR 2021 2019-11-17 07:11:00 15 1 122 NA

And finally, we can make the dplyr magic happen and discard results for which the counts are too small, which I have chosen to be <20:
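
A minimal sketch of the summary, assuming the parsed results are in qc.data as above:

library(dplyr)
qc.summary <- qc.data %>%
  group_by(instr.code, test.code, qc.name, qc.expire) %>%
  summarise(median = median(result, na.rm = TRUE),
            IQR    = IQR(result, na.rm = TRUE),
            mean   = mean(result, na.rm = TRUE),
            SD     = sd(result, na.rm = TRUE),
            min    = min(result, na.rm = TRUE),
            max    = max(result, na.rm = TRUE),
            CV     = round(100 * SD/mean, 2),
            count  = n()) %>%
  filter(count >= 20)                      # discard QC levels with fewer than 20 results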

Which gives us output like this:

instr.code test.code qc.name qc.expire median IQR mean SD min max CV count
JBGAS BCL RAD3 R0141 EXP SEP 2017 65.0 1.000 65.145454 0.6503043 63.0 66.0 1.00 55
JBGAS BCL RAD2 R0175 EXP MAR 2021 97.0 0.000 97.128205 0.3364820 97.0 98.0 0.35 78
JBGAS BCL RAD1 R0173 EXP MAR 2021 122.0 0.000 122.122807 0.5691527 121.0 124.0 0.47 57
JBGAS BGLUC RAD1 R0173 EXP MAR 2021 1.5 0.000 1.507017 0.0257713 1.5 1.6 1.71 57
JBGAS BGLUC RAD2 R0175 EXP MAR 2021 5.6 0.075 5.585897 0.0639081 5.4 5.7 1.14 78
JBGAS BGLUC RAD3 R0141 EXP SEP 2017 13.7 0.100 13.763636 0.1310409 13.4 14.1 0.95 55

This permits us to toss out results with low counts. But what about handling outliers? Well, we can calculate the z-scores of the raw data by joining the mean and SD results back to the raw data.

This will permit you to suppress results outside a certain z-score. So, let’s suppress all results with an undefined z-score and all results with a z-score >= 4:
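
A sketch of the join and the suppression step might look like this:

qc.flagged <- qc.data %>%
  left_join(qc.summary %>% select(instr.code, test.code, qc.name, qc.expire, mean, SD),
            by = c("instr.code", "test.code", "qc.name", "qc.expire")) %>%
  mutate(z = (result - mean)/SD) %>%
  filter(!is.na(z), abs(z) < 4)            # drop undefined z-scores and |z| >= 4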

Now, we can re-run the dplyr summary:

And now we have a summary of every QC CV in our Sunquest system with outliers suppressed:

instr.code test.code qc.name qc.expire median IQR mean SD min max CV count
JBGAS BCL RAD3 R0141 EXP SEP 2017 65.0 1.000 65.145454 0.6503043 63.0 66.0 1.00 55
JBGAS BCL RAD2 R0175 EXP MAR 2021 97.0 0.000 97.128205 0.3364820 97.0 98.0 0.35 78
JBGAS BCL RAD1 R0173 EXP MAR 2021 122.0 0.000 122.122807 0.5691527 121.0 124.0 0.47 57
JBGAS BGLUC RAD1 R0173 EXP MAR 2021 1.5 0.000 1.507017 0.0257713 1.5 1.6 1.71 57
JBGAS BGLUC RAD2 R0175 EXP MAR 2021 5.6 0.075 5.585897 0.0639081 5.4 5.7 1.14 78
JBGAS BGLUC RAD3 R0141 EXP SEP 2017 13.7 0.100 13.763636 0.1310409 13.4 14.1 0.95 55

And there we have it:

SQ Screenshot1

Now I can write the output file

With dplyr, if you direct your energies to the right place, you reap much. Similarly:

“But seek ye first the kingdom of God, and his righteousness; and all these things shall be added unto you.”

Matthew 6:33

Is the pain worth it?: Can Rcpp speed up Passing Bablok Regression?

Background

R dogma is that for loops are bad because they are slow but this is not the case in C++. I had never programmed a line of C++ as of last week but my beloved firstborn started university last week and is enrolled in a C++ intro course, so I thought I would try to learn some and see if it would speed up Passing Bablok regression.

Passing Bablok Regression

As mentioned in the past, the field of Clinical Chemistry has a peculiar devotion to Passing Bablok regression… and hey, why not?

Here is the code for a minimal implementation of Passing Bablok regression as discussed in this paper. This is the scale-invariant version.
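
The original script is not reproduced here, but the following is a minimal sketch of the general scale-invariant algorithm (all pairwise slopes, exclusion of slopes of exactly -1, a shifted median for the slope, and a median-based intercept); it is an illustration, not the code from the paper:

PB.reg <- function(x, y) {
  n <- length(x)
  S <- c()
  for (i in 1:(n - 1)) {
    for (j in (i + 1):n) {
      s <- (y[j] - y[i]) / (x[j] - x[i])
      # undefined slopes and slopes of exactly -1 are excluded by convention
      if (is.finite(s) && s != -1) S <- c(S, s)
    }
  }
  S <- sort(S)
  N <- length(S)
  K <- sum(S < -1)                               # offset for the shifted median
  slope <- if (N %% 2 == 1) S[(N + 1)/2 + K] else mean(S[N/2 + K + 0:1])
  intercept <- median(y - slope * x)
  c(intercept = intercept, slope = slope)
}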

So let’s make some fake data and see what we get:
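
Something along these lines (the values are arbitrary):

set.seed(42)
x <- runif(100, 1, 10)
y <- 1.1 * x + 0.5 + rnorm(100, sd = 0.5)
PB.reg(x, y)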

Just to sanity check, we can get the coefficients from the mcr package.
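
A sketch of that cross-check, using mcreg() with the Passing-Bablok method:

library(mcr)
mcr.fit <- mcreg(x, y, method.reg = "PaBa")
getCoefficients(mcr.fit)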

Ok, looks good.

Rcpp

The Rcpp package permits compiling and execution of C++ code from within R. This can be good for computationally intensive tasks. Here is my child-like attempt at recapitulation of the R script above.

Saving this to a local directory as PB_Reg.cpp, we can compile it and call the function into the R session as follows:
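
Something like this (the exported function name PB_reg_cpp is an assumption):

library(Rcpp)
sourceCpp("PB_Reg.cpp")      # compiles the file and exposes the exported function to R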

And run it:

Looks good! But is it faster? Like really faster? Let’s find out. And let’s also compare it to what the compiler package produces:
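
A sketch of the comparison with the rbenchmark package (PB_reg_cpp is the assumed name of the compiled function and PB.reg the plain R version sketched above):

library(rbenchmark)
library(compiler)
PB.reg.compiled <- cmpfun(PB.reg)            # byte-compile the R function
benchmark("Rcpp"             = PB_reg_cpp(x, y),
          "Compiler Package" = PB.reg.compiled(x, y),
          "Rscript"          = PB.reg(x, y),
          replications = 100)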

test replications elapsed relative user.self sys.self user.child sys.child
3 Rcpp 100 0.024 1.000 0.024 0.000 0 0
2 Compiler Package 100 0.214 8.917 0.214 0.000 0 0
1 Rscript 100 0.217 9.042 0.213 0.004 0 0

So this makes C++ code about 9x faster than R source code. Not bad but in this case it was probably not worth the effort. Oh well. I learned something. Incidentally, the compiler package did not help much in this case as it is only marginally better than raw R source. I have seen better results from the compiler package in other cases.

Final thought

Speaking of speed:

“Free yourself, like a gazelle from the hand of the hunter, like a bird from the snare of the fowler.”

Proverbs 6:5


Break up with Excel: Intro and Advanced R Data Science Courses at MSACL.org

Salzburg Austria, September 22–24, 2019

MSACL Conference

There are two RStats Data Science courses happening in Salzburg Austria on September 22–24, 2019 at the 6th annual MSACL Clinical Mass Spectrometry Conference. These courses are held twice annually, once in Europe and once in Palm Springs.

plot of chunk msaclAd

Introductory Course

  • The introductory course will be taught by Dan Holmes, MD of the University of British Columbia and Will Slade, PhD of Laboratory Corporation of America.
  • Course description is here

Intermediate/Advanced Course

Registration

  • Although the conference is for clinical mass spectrometry, the courses are generic in nature and generally geared towards the biological and health sciences rather than mass spectrometry per se.
  • You do not need to register for the conference to attend the pre-conference courses.
  • Academic rates apply. Registration for the course includes lots of snacks and coffee… like, good coffee.
  • Registration details are here.

Details

Both courses will take place in the following location:

  • Salzburg Congress Centre
    • Day 1: September 22, 2019, 1300h–1800h
    • Day 2: September 23, 2019, 0830h–1730h
    • Day 3: September 24, 2019, 0830h–1130h

RMarkdown Template that Manages Academic Affiliations – docx or PDF output

Background

I like writing my academic papers in RMarkdown because it allows reproducible research. The cleanest way to submit a manuscript made in RMarkdown is to use the LaTeX code that it generates when the YAML switch keep_tex = true is set. A minimalist YAML header would look like so:

Introduction

However, when you want multiple author affiliations you discover that you can’t do things as you would in LaTeX, because Pandoc does not know what to do with the affiliations and you end up with a disheartening PDF that looks like the output shown in figure 1 below:


Figure 1: This is so sad.

The situation worsens if you want MS-Word output. As those of us in medical fields know, most journals (with some notable exceptions like the Clinical Mass Spectrometry Journal and other Elsevier journals like Clinical Biochemistry and Clinica Chimica Acta) require submission of a document in MS-Word format which goes against all that Data Science and Reproducible Research stands for–he says, with hyperbole. Parenthetically, it is my hope that since AACC has indicated that they intend to make Data Science a strategic priority for Lab Medicine, they will soon accept submissions to Clinical Chemistry and the Journal of Applied Laboratory Medicine written reproducibly in RMarkdown or LaTeX.

In the meantime, here are the workarounds for getting the affiliations to display correctly along with all the other stuff we want, namely, cross-referencing of figures and tables and correct reference formatting and abbreviation of journal names. This allows you to avoid the horror of manually fixing your Word document after it is generated from RMarkdown. In any case, let’s start with MS-Word.

Dependencies for MS-Word and the Associated YAML

You will also need to install Pandoc which is the Swiss Army Knife of document conversion. It’s going to turn your code into a .docx file for you. Mac users can do this with Homebrew on the terminal command line:

There are some extra installs required to help Pandoc do its job. Install the prebuilt binaries if you can.

Finally, you need to use some scripts written in the Lua scripting language which means you will need the language itself:

And you will need two Lua scripts:

These are in Pandoc github repository:

You want the files named scholarly-metadata.lua and author-info-blocks.lua.

You will need to choose a .csl file for your journal. This will tell Pandoc how to format the references. You can download the correct .csl file here. You will also need a journal abbreviations database. I have made one for you from the Web of Science list and you can download it here.

You will need to create a .bibtex database which is just your list of references. This can be exported from various reference managers or built by hand. Name the file mybibfile.bib.

Now follow the bouncing ball:

  1. Go to the directory containing your .Rmd file.
  2. Create a directory in it called “Extras”
  3. Put the two Lua scripts, the Bibtex database, the abbreviations database and the .csl file into the “Extras” folder.
  4. If you want to avoid Pandoc’s goofy default .docx formatting, then put this word document in the same folder.

OR

Download the contents of this folder from my github repo that has everything set up as I describe above.

For two authors, your YAML will need to look like this:

Et voila! Figure 2 shows that we have something reasonable.

Figure 2: This is so great

Dependencies for LaTeX and the Associated YAML

It goes without saying that you need to install LaTeX. LaTeX markup language is available here: Mac, Windows. For Linux, just install from the command line with your package manager. Do a full install with all the glorious bloat of all LaTeX packages. This saves many headaches in the future.

You don’t need the lua scripts for LaTeX although you can use them. The issue with LaTeX is that the .tex template that Pandoc uses for generating LaTeX files does not support author affiliations as described in the Pandoc documentation. So what you need to do is modify the Pandoc LaTeX template. To get your current working copy of the Pandoc LaTeX template open up a terminal (Mac/Linux) and type:

This will push the contents to a file. Move the file to the “Extras” folder discussed above. If that seems difficult, you can also download it here. Now you have to edit it. Open it up in a text editor and find the section that reads:

Replace this with this code that will invoke the LaTeX authblk package.

Then make your YAML header look like this:

And as you can see in figure 3 you get a correctly formatted list of authors.


Figure 3: This is also great.

Cross Reference of a Table

Of course, tables can be cross-referenced in the same manner as figures. Here is a cross-reference to table 1 using the code \@ref(tab:mytable).

Table 1: A short table
term estimate std.error statistic p.value
(Intercept) 36.908 2.191 16.847 0.000
hp -0.019 0.015 -1.275 0.213
cyl -2.265 0.576 -3.933 0.000

This Template also Takes Care of Reference Abbreviation.

As usual, you can make a citation with the code [@bibtexname], where bibtexname is the article’s abbreviated handle in your bibtex database. Here is a great resource on the bookdown package [1] and reproducible research [2] and here are references where the journal title is longer [3,4]. The references in your document (and shown below) will have appropriate abbreviations based on the .json abbreviations database I have provided. In this case, I have chosen the .csl file for Clinical Mass Spectrometry–’cause MSACL.

Other Ways to Skin the YAML Cat

I came across some other ways to deal with this that I did not like as much but they are simpler. Here is one using a footnote.

And you can also misuse the date variable:

Conclusion

This concludes my long personal struggle to get a completely reproducible .docx manuscript generated by RMarkdown and Pandoc. Here is the output for PDF and Word.

Parting Thought

Let us not become weary in doing good, for at the proper time we will reap a harvest if we do not give up.

Galatians 6:9

References

[1] Y. Xie, J.J. Allaire, G. Grolemund, R markdown: The definitive guide, Chapman; Hall/CRC, 2018. https://bookdown.org/yihui/bookdown.

[2] R.D. Peng, Reproducible research in computational science, Science. 334 (2011) 1226–1227.

[3] G. Eisenhofer, C. Durán, T. Chavakis, C.V. Cannistraci, Steroid metabolomics: Machine learning and multidimensional diagnostics for adrenal cortical tumors, hyperplasias, and related disorders, Curr. Opin. Endocr. Metab. Res. 8 (2019) 40–49. doi:https://doi.org/10.1016/j.coemr.2019.07.002.

[4] F.B. Vicente, D.C. Lin, S. Haymond, Automation of chromatographic peak review and order to result data transfer in a clinical mass spectrometry laboratory, Clin. Chim. Acta. 498 (2019) 84–89. doi:https://doi.org/10.1016/j.cca.2019.08.004.

Reproducible Research: Write your Clinical Chemistry paper using R Markdown

Abstract
Background: This blog post is going to show you how to write a reproducible article in the field of clinical chemistry using R Markdown. The only thing that will change from journal to journal will be the reference formatting and perhaps section numbering. The source code itself will be provided so that you can use it as a template.

Methods: The paper will use R, R-Markdown, bookdown and pandoc. The references will be taken care of using BibTeX and reference formatting will be managed with Zotero csl files.

Results: The result will be a manuscript that anyone can reproduce.

Conclusions: R Markdown makes reproducible research through literate programming pretty easy.

1 Background

Last week at the MSACL conference Dr. Keith Baggerly from MD Anderson Cancer Centre’s Bioinformatics and Computational Biology Group spoke about the importance of reproducible research using the Duke University ovarian cancer biomarker scandal as a backdrop. The talk…was…incredible and illustrated how easy it is to introduce catastrophic errors into your research papers through the use of GUI analytical tools. The now retracted article that Baggerly dismantled is here. I urge everyone in our field to watch similar talks from Keith discussing biomarker analysis in mass spectrometric proteomic data and microarray data. Shannon Haymond and I were then discussing how to make a submission in the field of clinical chemistry that is reproducible. While this article will not discuss the basics of R and R Markdown, it will serve as a guide for those who know a little about these, give you a working YAML header and get the citations and cross-references correct.

2 Overhead

2.1 YAML

RMarkdown articles require a YAML header to instruct R Markdown how to process your article. This is the YAML code that worked for me. I am sure there are other ways to do this:

Indenting and spacing really matter a lot in YAML so don’t mess with them. You can generate PDF, MS Word or HTML output as needed by uncommenting and commenting out the output type as appropriate in the YAML above.

2.2 CSL Files

CSL files take care of citation formatting for you. Depending on what journal you are making submission to, you will need a different .csl file. They exist for every conceivable journal. In the world of non-reproducible reports, this process is taken care of by reference managers in GUI word processors but since GUI word processors do not produce reproducible research, we must break up with them.

From the YAML, you will see that you need a file called “clinical-chemistry.csl” which I downloaded from here. Put this file in the same folder as your R Markdown file. The Clinical Chemistry .csl depends on the .csl file of the American Association for Cancer Research but it will be downloaded for you on the fly. If you need a different reference format, search the CSL GitHub repository for the appropriate file.

2.3 BibTeX

Reference management in R Markdown is taken care of by BibTeX. You can see from the YAML that we need a bibliography text file called “bibliography.bib”. You can name it whatever you like but you will need to change the YAML accordingly. In any case, any citation you intend to make will have to be in the .bib file. I am going to cite Shannon Haymond because this article was her idea and I will toss in a couple of other references so you can see that they get cited in order as we would like.

Below is my bibliography.bib file. I put it in the same folder as my R Markdown file. You make a .bib file using a text file editor (or RStudio) by cutting and pasting the BibTeX citations from Google Scholar.

2.4 Journal Abbreviations

Now, the fussiest thing I had to do was get the references abbreviating properly. We need an abbreviation database. Fortunately, I could download the abbreviation database from the Web of Science as a .csv file and then convert it to a JSON file. This was Stephen’s idea. I had to deal with a couple of badly behaving characters from some journal titles. This script, if embedded in your document, will download the .csv for you and then make the abbreviation database for you. That way your citations will say, “J Clin Pathol” and not “Journal of Clinical Pathology” etc.
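
A minimal sketch of that conversion with jsonlite (the file and column names are assumptions):

library(jsonlite)
abbrev <- read.csv("wos_abbreviations.csv", stringsAsFactors = FALSE)
# named list: full journal title -> abbreviation
abbrev.list <- setNames(as.list(abbrev$abbreviation), abbrev$full.title)
write_json(abbrev.list, "abbreviations.json", auto_unbox = TRUE, pretty = TRUE)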

2.5 Citation Management

Now we can proceed to cite articles from our .bib file freely inserting syntax like this: [@shannon2017]. Shannon wrote this interesting article on symmetric dimethylarginine (1), Stephen wrote about wellness initiatives with Dr. Diamandis (2) and Dan wrote a paper about PTH when he was a resident (3). That makes for a total of 3 citations in this manuscript. (1–3)

3 Reporting Your Results

3.1 Figures

You can embed figures in R Markdown as local images or hyperlinks as follows:

![Grumpy Cat does not care about reproducibility](grumpy.jpg)

Grumpy Cat does not care about reproducibility

If you need cross-referencing of figures in your document, with numbering that will change automatically for you if you insert another figure, you can do this by inserting your figure with an R code chunk and giving the chunk a name.
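
A minimal sketch of such a chunk; the label ancient-aliens matches the reference used below and the caption matches figure 3.1, but the image file name is an assumption:

# chunk header: {r ancient-aliens, fig.cap = "This is the ancient aliens guy.", echo = FALSE}
knitr::include_graphics("ancient_aliens.jpg")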



Figure 3.1: This is the ancient aliens guy.

Then you can reference your code chunk using syntax like this: See figure \@ref(fig:ancient-aliens)

and you will automatically create appropriately cross-referenced figures that get automatically numbered like this: See figure 3.1. Of course the hallmark of reproducibility is to embed R code right into the document. See for example figure 3.2

 


Figure 3.2: This is a reproducible figure

3.2 Inline Calculations

When you are reporting your amazing results you can have inline code calculations by syntax that looks like this: `r round(median(x),1)` and would result in the median value of \(x\) being reported as 43.9 mmol/L.

3.3 Tables

Tables are not a problem either and can be made with the kable() function of the knitr package or with the xtable package. Tables can be cross-referenced analogously to figures. See table 3.1; a sketch of a chunk that could produce it follows the table.

Table 3.1: This is a great table
First Second Third
1 2 2
2 3 6
3 4 12
4 5 20
5 6 30
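
A minimal sketch of a chunk that could generate Table 3.1 (the chunk label nicetable is an assumption; in the text it would be referenced with \@ref(tab:nicetable)):

# chunk header: {r nicetable, echo = FALSE}
knitr::kable(data.frame(First = 1:5, Second = 2:6, Third = (1:5) * (2:6)),
             caption = "This is a great table")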

3.4 Math

Math works pretty magically using \(\LaTeX\) syntax. For example inline math can be done like so $\sin^2x + \cos^2x = 1$. This will result in: \(\sin^2x + \cos^2x = 1\). And math can also be done as a code block like so:

Gauss’ Law says: \[
\oint_S E_n \, dA = \frac{1}{\varepsilon_0} Q_{\text{inside}}
\]

Equations can be cross-referenced just like tables and figures.

4 Conclusion

I hope this makes writing a reproducible paper easier for you. A minimal template to produce output in PDF is here. And the PDF output itself is here. You’ll need \(\LaTeX\) installed of course.

 

Parting Thought

You can cite books too and get Greek letters too:

“Very truly I tell you,” Jesus answered, “before Abraham was born, \(\varepsilon\gamma\omega\) \(\varepsilon\iota\mu\iota\) (I Am)!” (4)

References

1. Brooks ER, Haymond S, Rademaker A, Pierce C, Helenowski I, Passman R, et al. Contribution of symmetric dimethylarginine to gfr decline in pediatric chronic kidney disease. Pediatr Nephrol. Springer; 2017;1–8.

2. Li M, Diamandis EP, Paneth N, Yeo K-TJ, Vogt H, Master SR. Wellness initiatives: Benefits and limitations. Clin Chem. Clinical Chemistry; 2017;63:1063–8.

3. Holmes DT, Levin A, Forer B, Rosenberg F. Preanalytical influences on DPC IMMULITE 2000 intact PTH assays of plasma and serum from dialysis patients. Clin Chem. Clinical Chemistry; 2005;51:915–7.

4. John the A, Christ J, Spirit H, God F. The Gospel According to John, 8:58. 1 Heavenly Way: Hosannah Press; 80AD.


 

Mining Your Routine Data for Reference Intervals: Hoffmann, Bhattacharya and Maximum Likelihood

Background

Let me preface this by saying I am not making a recommendation to use the Hoffmann method. Neither am I advocating for reference interval mining from routine data. There are many challenges associated with this kind of effort. That's for another post I think. However, I am going to show how one does the calculations for two methods I have seen used: the Hoffmann Method and the Bhattacharya Method. Then I will show how to do this using the mixtools package in R, which uses the expectation maximization (EM) algorithm to determine maximum likelihood estimates.

The Concept

When you look at histograms of routine clinical data from allcomers, on some occasions the data will form a bimodal looking distribution formed by the putatively sick and well. If you could statistically determine the distribution of the well subjects, then you could, in principle, determine the reference interval without performing a reference interval study. We can all dream, right?

All three of the approaches I show assume that the two distributions are Gaussian. This is almost never true. But for the purposes of the calculations, I will provide each approach data that meets the assumptions it makes. So, let's make a fake bimodal distribution and see how each method does. We will assume equal numbers of sick and well so that the bimodal distribution is obvious. One will have \(\mu_1 = 2\) and \(\sigma_1 = 0.5\) and the other will have \(\mu_2 = 6\) and \(\sigma_2 = 2\). The expected normal range for this population is based on \(\mu_1\) and \(\sigma_1\) and is \(2 - 0.5 \times 1.96\) and \(2 + 0.5 \times 1.96\) or about 1–3.
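
A minimal sketch of the simulation (the sample size is an assumption):

set.seed(10)
well <- rnorm(5000, mean = 2, sd = 0.5)   # the putatively healthy population
sick <- rnorm(5000, mean = 6, sd = 2)     # the putatively sick population
dat <- c(well, sick)
hist(dat, breaks = 100, main = "", xlab = "result")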

plot of chunk unnamed-chunk-1

To illustrate how the two populations add you can plot one in green and one in pink. The overlap shows in a yucky brown.

plot of chunk unnamed-chunk-2

Hoffmann

In 1963 Robert Hoffmann proposed a simple graphical approach to this problem and use of his method is alive and well—see here for example. The method assumes that both modes are Gaussian and that if one eye-balls (yes…the paper says “eye-fit”) the first linear-looking portion of the cumulative distribution function (CDF) as plotted on normal probability paper and finds its intersection with the lines y = 0.025 and y = 0.975, one can impute the normal range.

What do I plot for Hoffmann: a QQ-plot or the CDF?

It is very important to understand that the use of normal probability paper, as Hoffmann described, was mandatory because it produces a normal probability plot. As he says,

“This special graph paper serves the useful purpose of 'straightening out' a cumulative gaussian distribution. It forms a straight line.”

A CDF plotted on a linear scale is sigmoidal. This is not what we want. We want a normal probability plot, which is just a special case of the QQ-plot where the comparator distribution is the normal distribution. Inadvertently plotting a plain old CDF will not produce correct estimates of the lower and upper limits of normal (i.e. \(\mu \pm 1.96\sigma\)). The reason I emphasize this is that I have seen this error made in a number of reference interval papers—but not the one I cited above, which is correct. The importance of the distinction becomes not-very-subtle when you apply the Hoffmann approach to a pure Gaussian distribution. In short, use of the CDF in linear space generates erroneous results as we will show later on.

The Correct Approach

Here is the standard r-base normal QQ-plot of our mock data set:

plot of chunk unnamed-chunk-3

To prevent reader confusion, I am going to present the plots the way Hoffmann originally showed them. So I will put the patient data on the x-axis. It doesn't change anything. You can do it as you like.

plot of chunk unnamed-chunk-4

From this you can see that there is an obviously linear section between about x = 0 and x = 2 (and, with the eye of faith, a second one after x = 6). This is what Hoffmann calls the “eye-fit”. Since the first linear section is attributable to the first of the two normal distributions which form the overall distribution, we can use it to determine properties of the first distribution. If I look only at the data between x = 0 and x = 2, I am sort-of guaranteed to be in the first linear section. You don't have to kill yourself to correctly identify where the linearity ends because the density of the points should be highest near the middle of the linear section and this will weight the regression for you.

Next, if I extend this line to find its intersection with y = -1.96 and y = 1.96 (i.e. the z-scores corresponding to the limits of normal, namely the 2.5th and 97.5th centiles), I can estimate the reference interval by dropping perpendicular lines from the two respective intersections. Here is what I get:
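
The calculation behind this can be sketched as follows, assuming the simulated results are in dat:

qq <- as.data.frame(qqnorm(dat, plot.it = FALSE))   # x = theoretical z-score, y = result
linear <- qq$y > 0 & qq$y < 2                       # the "eye-fit" linear section
fit <- lm(x ~ y, data = qq, subset = linear)        # z-score as a function of result
lower <- (-1.96 - coef(fit)[1]) / coef(fit)[2]      # result where the line crosses z = -1.96
upper <- ( 1.96 - coef(fit)[1]) / coef(fit)[2]      # result where the line crosses z = +1.96
c(lower, upper)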

plot of chunk unnamed-chunk-5

So the Hoffmann reference interval becomes 1.11 to 3.70 which you can compare to the expected values of about 1 and 3 based on the random data. Not the greatest but not bad.

What not to do

Let's apply the correct approach to the Hoffmann method (QQ-plot) and the incorrect approach (CDF on a linear scale) to a pseudorandom sampling (n = 10,000) of the standard normal distribution, which has a mean of 0 and a standard deviation of 1. Therefore the central 95% or “normal range” for this distribution will be -1.96 to 1.96. I will plot regression lines through the linear part of each curve and find the respective intersections with the appropriate horizontal lines.

plot of chunk unnamed-chunk-6

The QQ-plot generates estimates of the limits of normal, \(\mu \pm 1.96\sigma\), of about \(\pm 1.96\) as it should. You can easily show that the same procedure on the CDF intersects the lines \(y = \alpha/2\) and \(y = 1 - \alpha/2\) at values of \(\pm (1 - \alpha) \sqrt{\pi/2} \sigma\), which is about \(\pm 1.19\) for \(\sigma = 1\) and \(\alpha = 0.05\). This erroneous estimate is shown with the pink vertical lines. So the Hoffmann method does not work if one attempts to extend the linear portion of the CDF plotted in linear space, and it will produce estimates of \(\sigma\) that are 40% too low in this case. If you're putting this all together, this is because the CDF is well away from its linear portion when the cumulative proportions are 0.025 and 0.975—not so for a QQ-plot. If you see a “Hoffmann plot” constructed from a sigmoidal CDF plotted on a linear scale, something is wrong.

Bhattacharya

This method is based on a much more highly cited paper in Biometrics published in 1967 by C.G. Bhattacharya. Loosely speaking, the method of Bhattacharya determines the parameter estimates of \(\mu_i\) and \(\sigma_i\) from the slope of the log of the distribution function. It was originally intended as a graphical method and so it also involves some human eye-balling.

We will need the log of the counts from the histogram. When we store the results of a histogram in R, we have the counts automatically.

We can now calculate the log of the counts (denoted \(y\)) and \(\Delta log(y)\) from bin to bin. We put these in a dataframe along with the counts and the midpoints of the bins. The bin width, which is chosen to be constant \(h\), is the distance between the midpoints of each bin.
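
A minimal sketch of this set-up (the number of bins is an assumption):

h <- hist(dat, breaks = 40, plot = FALSE)
keep <- h$counts > 0                          # empty bins would give log(0) = -Inf
bhat <- data.frame(mid   = h$mids[keep],
                   count = h$counts[keep],
                   logy  = log(h$counts[keep]))
bhat$dlogy <- c(diff(bhat$logy), NA)          # delta log(y) from bin to bin
h.width <- diff(h$mids)[1]                    # the constant bin width h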

Now let's plot \(\Delta log(y)\) as a function of the midpoints of the bins. I also number all the points to facilitate the next step.

plot of chunk unnamed-chunk-9

We can see from the figure that there are two sections where the plot shows a downsloping line: one between points 2 to 6 and another between points 10 to 21. How straight these lines appear is affected by how wide your bins are so if you get lines that are hard to discern, you can try making fewer bins.

In any case, using Bhattacharya's notation, the next step in the procedure is to draw regression lines through the \(r^{\text{th}}\) linear section and determine the intercept \(\hat{\lambda}_r\) with the x-axis. Bhattacharya intended this as a graphical procedure and advises,

“While matching the straight line it is better to fit closely to the points where the frequency is large even if the apparent discrepancy becomes somewhat large where the frequency is small.”

Since we are doing this by calculation, we can take his advice by weighting the linear regressions according to the counts. This allows the determination of the \(\hat{\mu}_r\) by:

\[\hat{\mu}_r = \hat{\lambda}_r + h/2\]
and also the determination of \(\hat{\sigma}_r\) by:

\[\hat{\sigma}^2_r = -h/\text{slope}_r - h^2/12\]
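
A minimal sketch of the calculation for the first linear section (points 2 to 6), weighting by the counts as advised:

section <- 2:6
fit.1 <- lm(dlogy ~ mid, data = bhat[section, ], weights = count)
lambda.1 <- -coef(fit.1)[1] / coef(fit.1)[2]                 # x-intercept of the fitted line
mu.1 <- lambda.1 + h.width/2
sigma.1 <- sqrt(-h.width/coef(fit.1)[2] - h.width^2/12)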

And here are the results we get:

mu Values sigma Values Normal Range Limits
2.06 0.59 0.90
6.25 1.83 3.21

And here is what it all looks like

plot of chunk unnamed-chunk-12

In this demonstration, there are only two Gaussian distributions to resolve, but the method is not limited to the resolution of two Gaussian curves at all. If there are more, there will be more downsloping lines crossing the x-axis. So we get normal range estimates of 0.90 and 3.21 which compare much better with the expected values of about 1 and 3. We also get good estimates of \(\mu_2=\) 6.3 and \(\sigma_2=\) 1.8 which are about 6 and 2 respectively in our data set.

Bhattacharya also provides a means of calculating the mixing proportion of the two distributions—that is, the proportions of patients in the sick and abnormal populations. We don't need that here so I omit it.

Gaussian Mixture Model

In R there are a lot of ways to approach the separation of mixtures of distributions using maximum likelihood. Here I am using a function from the mixtools package that is particularly easy to use. The concept of using maximum likelihood for mining your reference interval is not new (see this paper) but many would be intimidated by the math required to do it from scratch.

With R, this is pretty easy but please be cautioned that real data does not play as nice as the data in this demonstration (even moreso for Hoffmann and Bhattacharya) and it is unlikely that you will get smashing results unless your data fits the assumptions of the model.

In any case, here is a minimal sketch of the call, assuming the simulated results are stored in a vector called dat:
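
library(mixtools)
em.fit <- normalmixEM(dat, k = 2)   # dat is the simulated bimodal data from above
em.fit$mu                           # estimates of mu_1 and mu_2
em.fit$sigma                        # estimates of sigma_1 and sigma_2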

which gives very good parameter estimates indeed! Estimates of \(\mu_1\) and \(\mu_2\) are 2.01 and 6.19 respectively and estimates of \(\sigma_1\) and \(\sigma_2\) are 0.52 and 1.97 respectively.

Looking at this graphically:

plot of chunk unnamed-chunk-14

So the normal range estimate from EM method is 1.00 to 3.03 which is pretty fantastic.

Summary of Results

LLN ULN
Raw Random Data 1.03 2.98
Hoffmann 1.11 3.70
Bhattacharya 0.90 3.21
mixtools EM – winner! 1.00 3.03

It's not too hard to figure out which one of these approaches works best. But what do you do if your patient data distribution is obviously not a mixture of Gaussians (i.e. when the distributions look skewed)? There are ways to do this in R but I will cover that another time–maybe in a paper.

Conclusion

  • Three methods of estimating the normal range from a mixture of Gaussians have been presented.
  • The Hoffmann method performs OK if you use a QQ-plot.
  • The Hoffmann method does not work for CDFs plotted on a linear scale.
  • The Bhattacharya method performs better but still requires some human oversight.
  • The normalmixEM() function from the mixtools package performs very well without any human oversight.
  • These results do not imply that any of these approaches will perform well on real patient data for which the components of the overall distribution are not likely to be Gaussian. Caution advised.

Parting Thought

Please don't fall on the wrong side of God's mixture separation procedures for wheat and chaff.

Said, John the Baptist, “But after me comes one who is more powerful than I, whose sandals I am not worthy to carry. He will baptize you with the Holy Spirit and fire. His winnowing fork is in his hand, and he will clear his threshing floor, gathering his wheat into the barn and burning up the chaff with unquenchable fire.”

Matt 3:11–12

Non-Linear Regression: Application to Monoclonal Peak Integration in Serum Protein Electrophoresis

Background

At the AACC meeting recently, there was an enthusiastic discussion of standardization of reporting for serum protein electrophoresis (SPEP) presented by a working group headed up by Dr. Chris McCudden and Dr. Ron Booth, both of the University of Ottawa. One of the discussions pertained to how monoclonal bands, especially small ones, should be integrated. While many use the default manual vertical gating or “drop” method offered by Sebia's Phoresis software, Dr. David Keren was discussing the value of tangent skimming as a more repeatable and effective means of monoclonal protein quantitation. He was also discussing some biochemical approaches distinguishing monoclonal proteins from the background gamma proteins.

The drop method is essentially an eye-ball approach to where the peak starts and ends and is represented by the vertical lines and the enclosed shaded area.

plot of chunk unnamed-chunk-1

The tangent skimming approach is easier to make reproducible. In the mass spectrometry world it is a well-developed approach with a long history and multiple algorithms in use. This is apparently the book. However, when tangent skimming is employed in SPEP, unless I am mistaken, it seems to be done by eye. The integration would look like this:

plot of chunk unnamed-chunk-2

During the discussion it was pointed out that peak deconvolution of the monoclonal protein from the background gamma might be preferable to either of the two described procedures. By this I mean integration as follows:

plot of chunk unnamed-chunk-3

There was discussion that this procedure is challenging for a number of reasons. Further, it should be noted that there will only likely be any clinical value in a deconvolution approach when the concentration of the monoclonal protein is low enough that manual integration will show poor repeatability, say < 5 g/L = 0.5 g/dL.

Easy Peaks

Fitting samples with larger monoclonal peaks is fairly easy. Fitting tends to converge nicely and produce something meaningful. For example, using the approach I am about to show below, an electropherogram like this:

plot of chunk unnamed-chunk-4

with a gamma region looking like this:

plot of chunk unnamed-chunk-5

can be deconvoluted with straightforward non-linear regression (and no baseline subtraction) to yield this:

plot of chunk unnamed-chunk-6

and the area of the green monoclonal peak is found to be 5.3%.

More Difficult Peaks

What is more challenging is the problem of small monoclonals buried in normal \(\gamma\)-globulins. These could be difficult to integrate using a tangent skimming approach, particularly without image magnification. For the remainder of this post we will use a gel with a small monoclonal in the fast gamma region shown at the arrow.

plot of chunk unnamed-chunk-7

Getting the Data

EP data can be extracted from the PDF output from any electrophoresis software. This is not complicated and can be accomplished with pdf2svg or Inkscape and some Linux bash scripting. I'm sure we can get it straight from the instrument but it is not obvious to me how to do this. One could also rescan a gel and use ImageJ to produce a densitometry scan which is discussed in the ImageJ documentation and on YouTube. ImageJ also has a macro language for situations where the same kind of processing is done repeatedly.

Smoothing

The data has 10284 (x,y) pairs. But if you blow it up and look carefully you find that it is a series of staircases.

plot of chunk unnamed-chunk-8

It turns out that this jaggedness significantly impairs attempts to numerically identify the peaks and valleys. So, I smoothed it a little using the handy rle() function to identify the midpoint of each step. This keeps the total area as close to its original value as possible–though this probably does not matter too much.
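
A minimal sketch of that smoothing, assuming the scan is in a dataframe scan.data with columns x and y:

runs <- rle(scan.data$y)                   # runs of identical y values (the "stairs")
ends <- cumsum(runs$lengths)
starts <- ends - runs$lengths + 1
mids <- floor((starts + ends)/2)           # index of the midpoint of each step
scan.smooth <- data.frame(x = scan.data$x[mids], y = runs$values)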

plot of chunk unnamed-chunk-9

Now that we are satisfied that the new data is OK, I will overwrite the original dataframe.

Transformation

The units on the x and y-axes are arbitrary and come from page coordinates of the PDF. We can normalize the scan by making the x-axis go from 0 to 1 and by making the total area 1.
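
A minimal sketch of the normalization (trapezoidal integration is my choice here):

scan.data$x <- (scan.data$x - min(scan.data$x)) / (max(scan.data$x) - min(scan.data$x))
total.area <- sum(diff(scan.data$x) *
                  (head(scan.data$y, -1) + tail(scan.data$y, -1)) / 2)   # trapezoidal rule
scan.data$y <- scan.data$y / total.area    # total area is now 1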

plot of chunk unnamed-chunk-11

Find Extrema

Using the findPeaks function from the quantmod package we can find the minima and maxima:
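
For example (scan.data as above):

library(quantmod)
maxima <- findPeaks(scan.data$y) - 1       # findPeaks flags the point after each peak
minima <- findValleys(scan.data$y) - 1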

plot of chunk unnamed-chunk-12

plot of chunk unnamed-chunk-12

Not surprisingly, there are some extraneous local extrema that we do not want. I simply manually removed them. Generally, this kind of thing could be tackled with more smoothing of the data prior to analysis.

Fitting

Now it's possible with the nls() function to fit the entire SPEP with a series of Gaussian curves simultaneously. It works just fine (provided you have decent initial estimates of \(\mu_i\) and \(\sigma_i\)) but there is no particular clinical value to fitting the albumin, \(\alpha_1\), \(\alpha_2\), \(\beta_1\) and \(\beta_2\) domains with Gaussians. What is of interest is separately quantifying the two peaks in \(\gamma\) with two separate Gaussians so let's isolate the \(\gamma\) region based on the location of the minimum between \(\beta_2\) and \(\gamma\).

Isolate the \(\gamma\) Region

plot of chunk unnamed-chunk-14

Attempt Something that Ultimately Does Not Work

At first I thought I could just throw two normal distributions at this and it would work. However, it does not work well at all and this kind of not-so-helpful fit turns out to happen a fair bit. I use the nls() function here which is easy to call. It requires a functional form which I set to be:

\[y = C_1 \exp\Big(-{\frac{(x-\mu_1)^2}{2\sigma_1^2}}\Big) + C_2 \exp \Big({-\frac{(x-\mu_2)^2}{2\sigma_2^2}}\Big)\]

where \(\mu_1\) is the \(x\) location of the first peak in \(\gamma\) and \(\mu_2\) is the \(x\) location of the second peak in \(\gamma\). The estimates of \(\sigma_1\) and \(\sigma_2\) can be obtained by trying to estimate the full-width-half-maximum (FWHM) of the peaks, which is related to \(\sigma\) by

\[FWHM_i = 2 \sqrt{2\ln2} \times \sigma_i = 2.355 \times \sigma_i\]

I had to first make a little function that returns the respective half-widths at half-maximum and then uses them to estimate the \(FWHM\). Because the peaks are poorly resolved, it also tries to get the smallest possible estimate returning this as FWHM2.
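
An illustrative sketch of such a function (not necessarily the one used for the figures):

fwhm.est <- function(df, peak.index) {
  half <- df$y[peak.index]/2
  left  <- rev(which(df$y[1:peak.index] <= half))[1]                      # last point below half-max on the left
  right <- peak.index - 1 + which(df$y[peak.index:nrow(df)] <= half)[1]   # first point below half-max on the right
  hwhm.left  <- df$x[peak.index] - df$x[left]
  hwhm.right <- df$x[right] - df$x[peak.index]
  c(FWHM  = hwhm.left + hwhm.right,                          # sum of the two half-widths
    FWHM2 = 2 * min(hwhm.left, hwhm.right, na.rm = TRUE))    # smallest possible estimate
}
# sigma estimates then follow from sigma = FWHM/2.355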

The peak in the \(\gamma\) region was obtained previously:

plot of chunk unnamed-chunk-16

and from them \(\mu_1\) is determined to be 0.7. We have to guess where the second peak is, which is at about \(x=0.75\) and has an index of 252 in the gamma.data dataframe.

plot of chunk unnamed-chunk-17

Now we can find the estimates of the standard deviations:

The estimates of \(\sigma_1\) and \(\sigma_2\) are now obtained. The estimates of \(C_1\) and \(C_2\) are just the peak heights.

We can now use nls() to determine the fit.
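
A minimal sketch of the call, with starting values carried over from the estimates above (the object names are assumptions):

fit.g <- nls(y ~ C1 * exp(-(x - mu1)^2 / (2 * sigma1^2)) +
                 C2 * exp(-(x - mu2)^2 / (2 * sigma2^2)),
             data = gamma.data,
             start = list(C1 = C.1, mu1 = mu.1, sigma1 = sigma.1,
                          C2 = C.2, mu2 = mu.2, sigma2 = sigma.2))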

Determining the fitted values of our unknown coefficients:

And now we can plot the fitted results against the original results:

plot of chunk unnamed-chunk-22

And this is garbage. The green curve is supposed to be the monoclonal peak, the blue curve is supposed to be the \(\gamma\) background, and the red curve is their sum, the overall fit. This is a horrible failure.

Subsequently, I tried fixing the locations of \(\mu_1\) and \(\mu_2\) but this also yielded similar nonsensical fitting. So, with a lot of messing around trying different functions like the lognormal distribution, the Bi-Gaussian distribution and the Exponentially Modified Gaussian distribution, and applying various arbitrary weighting functions, and simultaneously fitting the other regions of the SPEP, I concluded that nothing could predictably produce results that represented the clinical reality.

I thought maybe the challenge in obtaining a reasonable fit was related to the sloping baseline, so I thought I would try to remove it. I will model the baseline in the most simplistic manner possible: as a sloped line.

Baseline Removal

I will arbitrarily define the tail of the \(\gamma\) region to be those values having \(y \leq 0.02\). Then I will connect the first (x,y) point of the \(\gamma\) region to the start of the tail with a straight line.

plot of chunk unnamed-chunk-24

Now we can define a new dataframe gamma.no.base that has the baseline removed:
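
A sketch of the baseline construction and subtraction:

tail.index <- which(gamma.data$y <= 0.02)[1]                   # start of the gamma tail
x0 <- gamma.data$x[1];          y0 <- gamma.data$y[1]
x1 <- gamma.data$x[tail.index]; y1 <- gamma.data$y[tail.index]
baseline <- y0 + (y1 - y0)/(x1 - x0) * (gamma.data$x - x0)     # the sloped line
gamma.no.base <- data.frame(x = gamma.data$x,
                            y = pmax(gamma.data$y - baseline, 0))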

plot of chunk unnamed-chunk-25

The black is the original \(\gamma\) and the dashed has the baseline removed. This becomes an easy fit.

plot of chunk unnamed-chunk-26

Lo and behold…something that is not completely insane. The green is the monoclonal, the blue is the \(\gamma\) background and the red is their sum, that is, the overall fit. A better fit could now be sought with weighting or with a more flexible distribution shape. In any case, the green peak is now easily determined. Since

\[\int_{-\infty}^{\infty} C_1 \exp\Big(-{\frac{(x-\mu_1)^2}{2\sigma_1^2}}\Big)dx = \sqrt{2\pi}\,\sigma_1 C_1\]
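
In code, the area of the fitted monoclonal peak follows directly from the coefficients. A sketch, assuming the fit on the baseline-subtracted data was stored as fit.nb and that the monoclonal is the second of the two Gaussians (swap to C1 and sigma1 otherwise):

cf <- coef(fit.nb)
peak.area <- sqrt(2 * pi) * cf["sigma2"] * cf["C2"]   # the total scan area was normalized to 1 above
peak.area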

So this peak is 2.4% of the total area. Now, of course, this assumes that nothing under the baseline is attributable to the monoclonal peak and all belongs to normal \(\gamma\)-globulins, which is very unlikely to be true. However, the drop and tangent skimming methods also make assumptions about how the area under the curve contributes to the monoclonal protein. The point is to try to do something that will produce consistent results that can be followed over time. Obviously, if you thought there were three peaks in the \(\gamma\)-region, you'd have to set up your model accordingly.

All about that Base(line)

There are obviously better ways to model the baseline because this approach of a linear baseline is not going to work in situations where, for example, there is a small monoclonal in fast \(\gamma\) dwarfed by normal \(\gamma\)-globulins. That is, like this:

plot of chunk unnamed-chunk-28

Something curvilinear or piecewise continuous and flexible enough for more circumstances is generally required.

There is also no guarantee that baseline removal, whatever the approach, is going to be a good solution in other circumstances. Given the diversity of monoclonal peak locations, sizes and shapes, I suspect one would need a few different approaches for different circumstances.

Conclusions

  • The data in the PDFs generated by EP software are processed (probably with splining or similar) followed by the stair-stepping seen above. It would be better to work with raw data from the scanner.

  • Integrating monoclonal peaks under the \(\gamma\) baseline (or \(\beta\)) is unlikely to be a one-size-fits all approach and may require application of a number of strategies to get meaningful results.

    • Baseline removal might be helpful at times.
  • Peak integration will require human adjudication.

  • While most monoclonal peaks show little skewing, better fitting is likely to be obtained with distributions that afford some skewing.

  • MASSFIX may soon make this entire discussion irrelevant.

Parting Thought

On the matter of fitting

In bringing many sons and daughters to glory, it was fitting that God, for whom and through whom everything exists, should make the pioneer of their salvation perfect through what he suffered.

Heb 2:10