Molecular biology labs often house multiple -80 C freezers and liquid nitrogen tanks. I can think of only one function for which these are absolutely required: storage of mammalian cells and cell lines. Admittedly, I have never tried to store eukaryotic cells in a regular (-20 C) freezer, but I doubt it would work well. However, -80 C freezers are often used to store all sorts of other material, such as bacterial stocks, proteins and RNA. For these items, would there be any serious problem with storage at warmer temperatures? Sure, there might be a small increase in the rate of degradation, but for a “basement” molecular biology lab setup, I doubt the huge cost of an ultra-low-temperature freezer would be worth it. I’d stick with a good, cheap -20 C freezer.
DNAzol, a reagent for simple genomic DNA extraction which continues to be sold by Thermo for $300+ (for 100 ml!), is simply an aqueous mixture of guanidine thiocyanate, sodium acetate, isopropanol and sarkosyl. The associated patent was filed in 1995, which means it is now expired. See Example 6 for the recipe.
What are the components of an average, modern molecular biology lab? Space and equipment for basic microbiology, DNA and protein electrophoresis, and cell culture have been present in every lab I’ve worked in. So…could these functions be performed in your basement, with a very small budget for equipment and reagents? I’m planning a series of posts on this topic.
I ran across this scary figure in a 2011 paper by a group from a large pharmaceutical company. It’s in a well-ranked specialty journal that doesn’t normally publish a lot of western blot data, and this is the only western blot found in the entire article. According to the figure legend, it is a “representative western blot from three independent experiments”. The unmodified (but relabelled/anonymized) version is shown below.
There are nice black boxes drawn around each blot, which would (conventionally) suggest that the enclosed bands were all developed on the same film. However, there’s something fishy here, which becomes more evident when the contrast is increased:
Needless to say, this is not the right way to assemble a western blot figure. Did these pasted-together bands come from the same blot, from the same experiment, or from the same exposure? I guess the reader needs to trust that the authors’ western blot “artwork” represents the reality of some experiment they performed, but this is a large leap of faith.
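For readers who want to try this trick on suspect figures themselves, a simple percentile-based contrast stretch is enough to expose pasted-in rectangles as abrupt steps in the background. Below is a minimal sketch using numpy, with a synthetic array standing in for the scanned figure (the pixel values are invented for illustration; this is a generic technique, not the exact processing used on the figure above):

```python
import numpy as np

def stretch_contrast(img, low_pct=2, high_pct=98):
    """Linearly rescale pixel values so the low/high percentiles map to 0/255.
    Subtle differences in background shade become obvious after stretching."""
    lo, hi = np.percentile(img, [low_pct, high_pct])
    out = (img.astype(float) - lo) / max(hi - lo, 1e-9)
    return (np.clip(out, 0, 1) * 255).astype(np.uint8)

# synthetic "figure": near-white background with a slightly darker pasted patch
blot = np.full((40, 80), 250, dtype=np.uint8)
blot[10:30, 40:70] = 244  # a pasted region, nearly invisible at normal contrast
enhanced = stretch_contrast(blot)  # the patch now stands out starkly
```

Any image editor's "levels" or "curves" tool does the same job interactively; the point is simply that splice boundaries survive in the background noise even when they are invisible at normal contrast.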
The degree of validation actually performed by commercial antibody suppliers (especially large ones with catalogs of tens of thousands of antibodies) receives surprisingly little scrutiny. New antibodies developed in-house by a PI are usually greeted with an appropriate degree of skepticism by peer reviewers when first published. However, I believe the same is not true for an antibody from a commercial supplier, even if validation data has never been published. When such a company says their antibody is specific for a given protein, people generally take their word for it.
On at least a few occasions, I’ve ordered previously-unpublished commercial antibodies against a particular protein that I’m studying. At this point, I usually know some basic details about the protein, including which cell lines express it and its molecular weight, and I normally have expression vectors encoding it. Most of the time, even antibodies advertised for western blotting simply detect nothing, including in cell lines which I know express the target. More disturbingly, in several cases, the supplied antibody binds strongly to something of approximately the correct molecular weight, but upon further investigation (by transfecting an expression construct or comparing expression across cell lines), it turns out not to be binding the right protein.
It is clear to me that most of the hundreds of antibodies these suppliers add to their catalogs each month receive only a cursory validation. I guess when a PI purchases such antibodies, they are essentially paying the company for the honor of validating their products!
The issue of inappropriate image manipulation in figures published in peer-reviewed journals has received much attention in the last few years. I often wonder whether this has led to more “ugly” results – which obviously have not been manipulated – passing through the peer review process. It is also increasingly becoming a requirement to quantify data present in western blots and agarose gels (e.g. RT-PCR results), which are, by nature, qualitative assays. Often, it seems that reviewers accept the results of such quantitation even if the quality of the blots/gels is suspect. I come across examples of these problems regularly – in good and bad journals. The example below comes from a very recent issue of a highly ranked specialty journal.
The authors exposed cells to Treatment 1 or Treatment 2 for different lengths of time and analyzed expression of Protein A by western blot. The Protein A blot is not terrible, even though it is not clean, and you need to take the authors’ word that the protein the arrow is pointing at is indeed Protein A (especially since no molecular weight markers are indicated). The actin blot, however, is quite awful. Actin is a control protein whose expression should not be modulated by the treatments. The problem is that lanes 3-7 of the actin blot have clearly saturated the western blot film (they are black, whereas lanes 1 and 2 are grey), indicating that actin levels are much higher in those samples and that equal amounts of protein were not loaded in all lanes. Also, the samples in lanes 3 and 4 have apparently leaked, giving a single actin band with no clear boundary between the lanes.
Because of these two issues, the authors have no business trying to quantify this result. Bands in a western blot cannot be quantified once they are saturated: the film has stopped responding to signal, so you end up underestimating the amount of protein present. And the authors are doubly wrong to quantify neighboring bands whose boundaries can’t be discerned!
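To see why saturation ruins quantitation, here is a minimal numeric sketch (the intensity values are invented for illustration): a band whose true signal exceeds the film’s dynamic range reports only the clipped maximum, so the measured ratio between lanes is compressed toward 1.

```python
def densitometry(true_signal, saturation=255.0):
    """A film/scanner reports at most `saturation`; anything above is clipped."""
    return min(true_signal, saturation)

# hypothetical band intensities: lane B truly carries 4x the protein of lane A
lane_a, lane_b = 100.0, 400.0
true_ratio = lane_b / lane_a                                  # 4.0
measured_ratio = densitometry(lane_b) / densitometry(lane_a)  # 2.55 -- compressed
```

Once a band is clipped, no amount of careful box-drawing in the densitometry software can recover the lost signal; the only fix is a shorter exposure.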
Below the western blots, the authors present a graph whose values represent the density of the “Protein A” bands normalized to the density of actin bands. A final issue that should have been recognized by the reviewers of this article is that the statistical analysis of the quantitation is incorrect. The authors repeated this experiment only twice, but the error bars on the graph are standard deviations, which should be based on at least three independent experiments.
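For what it’s worth, with only two replicates the sample standard deviation collapses to |x1 − x2|/√2 – a rescaled range, not a meaningful estimate of spread. A quick check in Python (the replicate values are hypothetical):

```python
import math
import statistics

replicates = [1.8, 2.6]  # hypothetical normalized densities from two experiments
sd = statistics.stdev(replicates)  # sample SD, with n - 1 = 1 in the denominator
# with n = 2 this always equals |x1 - x2| / sqrt(2), regardless of the data
spread = abs(replicates[0] - replicates[1]) / math.sqrt(2)
```

This is why at least three independent experiments are the accepted minimum before error bars mean anything.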
This problematic figure makes up only a small part of a large and complex paper. Nonetheless, the paper contains several other figures presenting densitometric quantitations of other gels, making the reader wonder whether those are similarly unreliable!
Biomedical research doesn’t require much beyond basic high school math…still, when I was a graduate student, I thought I could put these math skills to use by calculating the surface area of circular, “100-mm” tissue culture dishes.
One would assume 100 mm is the exact diameter…but no! Corning dishes, and, as far as I know, dishes from other suppliers, in fact have a growth-surface diameter of around 85 mm. I guess this explains why, when I tried to scale up cell culture experiments, my cells were always less confluent than expected in the larger dishes!
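The arithmetic is easy to check. Assuming the ~85 mm growth-surface diameter above, a quick sketch:

```python
import math

def dish_area_cm2(diameter_mm):
    """Growth area of a circular dish from its growth-surface diameter."""
    radius_cm = diameter_mm / 20.0  # mm diameter -> cm radius
    return math.pi * radius_cm ** 2

nominal = dish_area_cm2(100)  # ~78.5 cm^2, if "100 mm" were the growth diameter
actual = dish_area_cm2(85)    # ~56.7 cm^2 for the real ~85 mm growth surface
# the nominal figure overstates the growth area by roughly 38%
```

In other words, anyone scaling seeding densities by the nominal 100-mm diameter is off by more than a third – the supplier’s stated growth area, not the catalog name, is the number to use.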