2015-Levine et al-Cell

mleipold

Guru

Posts: 5796

Joined: Fri Nov 01, 2013 5:30 pm

Location: Stanford HIMC, CA, USA

Post Mon Jun 29, 2015 3:09 pm

2015-Levine et al-Cell

"Data-​Driven Phenotypic Dissection of AML Reveals Progenitor-​like Cells that Correlate with Prognosis"
Levine Jacob H; Amir El-Ad D; Tadmor Michelle D; Litvin Oren; Simonds Erin F; Davis Kara L; Fienberg Harris G; Jager Astraea; Zunder Eli R; Finck Rachel; Bendall Sean C; Gedman Amanda L; Radtke Ina; Downing James R; Pe'er Dana; Nolan Garry P
Cell, 2015
http://dx.doi.org/10.1016/j.cell.2015.05.047

-introduction of PhenoGraph

codekitty

Participant

Posts: 1

Joined: Thu Feb 12, 2015 9:50 pm

Post Mon Jun 29, 2015 4:55 pm

Re: 2015-Levine et al-Cell

PhenoGraph clustering is now implemented in 'cyt' for macOS, available here:
http://www.c2b2.columbia.edu/danapeerla ... nload.html

mleipold

Guru

Posts: 5796

Joined: Fri Nov 01, 2013 5:30 pm

Location: Stanford HIMC, CA, USA

Post Tue Sep 08, 2015 9:46 pm

Re: 2015-Levine et al-Cell

-Commentary in Nature Biotechnology:

"From mass cytometry to cancer prognosis"
Deborah R Winter, Guy Ledergor, Ido Amit
Nat. Biotechnol., 2015, 33, 931–932
http://dx.doi.org/10.1038/nbt.3346

ErinSimonds

Master

Posts: 50

Joined: Tue May 13, 2014 8:04 pm

Post Wed Sep 09, 2015 1:56 am

Re: 2015-Levine et al-Cell

Thanks for posting that, Mike.

I want to add that PhenoGraph is now available in two implementations:

Matlab: http://www.c2b2.columbia.edu/danapeerla ... graph.html
Python: https://github.com/jacoblevine/PhenoGraph

mccars

Participant

Posts: 4

Joined: Wed Jun 15, 2016 10:56 am

Post Wed Jun 22, 2016 3:47 pm

Re: 2015-Levine et al-Cell

Hello!
I’d like to ask about data pre-processing prior to using PhenoGraph.

In Levine et al (2015), the authors apply a second normalisation: for each surface marker, the maximum intensity is defined as the 99.5th percentile across the healthy bone marrow cells, and the data from all samples are then divided by these per-marker maximum values.

Is there a Bioconductor package, or similar, for this normalisation?
If not, can you recommend a method for determining the maximum intensity for each surface marker of these samples?
And then, a way to divide the data from all samples by this maximum value?
I'm afraid I don't come from a bioinformatics background, but I've recently been getting to grips with R and Matlab.

Thanks! Sheila.

ErinSimonds

Master

Posts: 50

Joined: Tue May 13, 2014 8:04 pm

Post Wed Jun 22, 2016 9:00 pm

Re: 2015-Levine et al-Cell

Hi Sheila,

For that paper, the data was first loaded from FCS files into Matlab. The normalization, arcsinh transformation, clustering, visualization, etc. were all done in Matlab.

If you want to do the same thing in R, you can use something like the script below. This script only normalizes a single FCS file. If you want to normalize a group of files based on the 99.5th percentile in another group of files, as we did in the PhenoGraph paper, you'll have to modify this quite a bit. But hopefully this will get you started.

- Erin

  Code:
# install flowCore
source("https://bioconductor.org/biocLite.R")
biocLite("flowCore")
library(flowCore)

# set 'myfile' to the location of your FCS file. On Windows, use forward slashes, e.g. "C:/Users/sheila/Desktop/mydata.fcs"
myfile <- "/Users/sheila/Desktop/mydata.fcs"

f1 <- read.FCS(myfile, transformation=FALSE) # this takes in the data and metadata from the FCS file as a flowFrame object
summary(f1) # optional ... helpful function to check the columns and number of cells in the file
mydata <- as.data.frame(exprs(f1)) # this stores the raw data and channel names from the FCS file as a data.frame object

# Look at the output of the command below and write down which columns need to be arcsinh transformed
# Note:  You don't need to transform the Time or event_length columns
colnames(mydata)

# Set 'datacols' to indicate which columns need to be arcsinh transformed (e.g. columns 2 through 5)
datacols <- 2:5
mydata.asinh <- asinh(mydata[,datacols] / 5) # arcsinh transform with cofactor 5, as is typical for CyTOF data

# Now we will find the 99.5th percentile value of each column:
col99th <- sapply(as.data.frame(mydata.asinh), quantile, probs=0.995)

# Now we will divide the data by the 99.5th percentile values:
mydata.normalized <- t(t(mydata.asinh) / col99th)
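
Here's a rough sketch of one way to do that multi-file version, in case it helps. This is my own sketch rather than the exact code we used for the paper: the 99.5th percentiles are computed from a pooled set of healthy reference files and then applied to every sample. The file paths, 'healthy_files', 'all_files', and 'datacols' below are placeholders, so swap in your own.

  Code:
# Sketch: reference-based normalization across multiple FCS files.
# Assumes flowCore is already installed (see above); the file names are placeholders.
library(flowCore)

# read one FCS file and arcsinh-transform the marker columns (cofactor 5)
read_asinh <- function(path, datacols, cofactor = 5) {
  f <- read.FCS(path, transformation = FALSE)
  asinh(exprs(f)[, datacols] / cofactor)
}

healthy_files <- c("/path/to/healthyBM_1.fcs", "/path/to/healthyBM_2.fcs") # reference (healthy) samples
all_files     <- c(healthy_files, "/path/to/AML_sample_1.fcs")             # every file you want to normalize
datacols      <- 2:5                                                       # marker columns, as in the script above

# pool the transformed healthy cells and take the 99.5th percentile of each marker
healthy_pool <- do.call(rbind, lapply(healthy_files, read_asinh, datacols = datacols))
col99th <- apply(healthy_pool, 2, quantile, probs = 0.995)

# divide every file's transformed data by the healthy-derived maximum values
normalized <- lapply(all_files, function(path) {
  sweep(read_asinh(path, datacols), 2, col99th, "/")
})
names(normalized) <- basename(all_files)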
