- Variability is what I would expect to see in a biological system, because even if you have completely identical genotypes of, say, bacterial cells, there can be subtle differences in the timing of gene expression within those cells that give these identical genotypes different phenotypes, different traits, even though they have exactly the same genotype. So the point is that variability is something we see in the natural world all the time; it is just there. The rule of thumb is that if there is variation in your experiment that is seemingly random, you can correct for it by increasing your sample size. If you have only a few samples, the effect of random variation is going to have a large impact on the result that you ultimately report. But if you have a huge pool of samples, those random fluctuations are going to cancel each other out across all the samples. Increasing the sample size will not, however, correct for systematic biases, things like instrumentation bias. If some systematic bias is affecting your results, more samples will not fix it. One example: if you have a scale that is broken and always overestimates the weight it reports, adding two kilograms to every value, you can measure one sample or you can measure a million samples, and every measurement will still carry that error. So increasing the sample size does not correct for bias, but it will correct for random noise. Understanding the variability in your data is also important, and the only way to do that is to collect a sufficient sample size. For example, if you have just a duplicate or triplicate measurement, you are often in a situation where one of the points is quite different from the others. In that case, it is hard to know whether it is technical variation, a genuinely weird outlier, or inherent variability in your biological system. Generating more replicates and examining the variation across more samples would clarify that.

- That is not always the best option, though, because the number of replicates you include expands the size of your experiment, and you may quickly run into impracticalities where you simply cannot conduct the experiment at the level of replication you need to average out the noise.

- As you add more samples, you begin exhausting other resources. The amount of time it takes to set up an experiment is finite: there are only 24 hours in a day, and you only have so much energy to work with at the bench at any given time. So if you want to do a thousand samples versus ten thousand samples, you really need to consider what that is going to do to your body, your energy levels, and your mental state; eventually you will get fatigued, and it will be much harder to increase the sample size after that. Another case is when adding more samples becomes prohibitively expensive or begins to take up too much space. So when you are trying to decide how many biological samples you need in a particular experiment, that is when power calculations are really helpful.
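A minimal simulation of the broken-scale example above, assuming Python with NumPy. The true weight and noise level are illustrative assumptions, not values from the discussion; the point is that the random part of the error shrinks as the sample size grows, while the fixed +2 kg offset never does.

```python
import numpy as np

rng = np.random.default_rng(0)
true_weight = 70.0   # hypothetical true value, in kg (assumed for illustration)
bias = 2.0           # the broken scale always adds 2 kg
noise_sd = 5.0       # random measurement noise, in kg (assumed)

for n in [3, 100, 1_000_000]:
    measurements = true_weight + bias + rng.normal(0, noise_sd, size=n)
    mean = measurements.mean()
    # Random error shrinks roughly as noise_sd / sqrt(n) as n grows...
    print(f"n={n:>9}: mean={mean:.3f}, error vs truth={mean - true_weight:+.3f}")
    # ...but the error never drops below ~2 kg, because the bias remains.
```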
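In the same spirit, a rough sketch of why a triplicate with one discrepant point is hard to interpret; all numbers here are hypothetical.

```python
import numpy as np

# With n = 3, the spread estimate is too unstable to say whether 14.8
# is a technical artifact, an outlier, or normal biological variability.
triplicate = np.array([10.1, 10.3, 14.8])
print(triplicate.mean(), triplicate.std(ddof=1))

# With more replicates from the same (assumed) noisy system, the spread
# estimate stabilizes, and a value like 14.8 may turn out to be ordinary.
rng = np.random.default_rng(1)
many = rng.normal(loc=10.2, scale=2.0, size=30)
print(many.mean(), many.std(ddof=1))
```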
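And one common way to run the power calculation mentioned at the end, assuming the statsmodels package is available. The effect size, alpha, and power targets below are conventional placeholder values, not values from the discussion.

```python
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()
n_per_group = analysis.solve_power(
    effect_size=0.5,  # standardized difference to detect (Cohen's d, assumed)
    alpha=0.05,       # acceptable false-positive rate
    power=0.8,        # desired probability of detecting a real effect
)
print(f"Replicates needed per group: {n_per_group:.1f}")  # roughly 64
```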