I found a subtle bug in our code at the end of last week, concerning how we make draws from a Maxwell-Boltzmann distribution. We use this distribution to determine the strength of the kick given to neutron stars and black holes by the asymmetry of their supernovae. The distribution looks like this:

$$p(v) = \sqrt{\frac{2}{\pi}}\, \frac{v^2}{\sigma^3}\, \exp\left(-\frac{v^2}{2\sigma^2}\right),$$
where $\sigma$ is a scale parameter known as the velocity dispersion, and is also sometimes called the root-mean-squared (RMS) velocity. That's where the confusion started.
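To make the relationship between $\sigma$ and the actual moments concrete, here's a quick sketch (not our actual code; it just uses `scipy.stats.maxwell`, and the dispersion value is purely illustrative) of how the analytical mean, variance and RMS speed relate to the dispersion:

```python
import numpy as np
from scipy import stats

sigma = 265.0  # km/s -- an illustrative dispersion, not necessarily what we use

# Maxwell-Boltzmann speed distribution with scale parameter sigma
dist = stats.maxwell(scale=sigma)

# Analytical moments written in terms of the dispersion sigma
mean = 2 * sigma * np.sqrt(2 / np.pi)      # ~1.60 * sigma
var = sigma**2 * (3 * np.pi - 8) / np.pi   # ~0.45 * sigma^2
rms = np.sqrt(3) * sigma                   # RMS *speed* -- not sigma itself!

print(np.isclose(dist.mean(), mean))              # True
print(np.isclose(dist.var(), var))                # True
print(np.isclose(np.sqrt(dist.moment(2)), rms))   # True
```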

I wrote a small unit test to check that the sample mean, variance and RMS of the values we draw match up with the analytical values. The RMS was bang on, but the mean and variance were way off. We noticed, however, that the sample mean was consistently a factor of $\sqrt{3}$ away from the analytical one. I dug into the code and discovered that we were deliberately dividing the drawn velocities by $\sqrt{3}$. I asked my colleague why this was, and he said it was to make the sample RMS match $\sigma$, since that is known as the "RMS velocity" after all.
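A minimal sketch of that kind of test (hypothetical names and plain numpy, rather than our actual sampler) might look like this:

```python
import numpy as np


def draw_kick_speeds(sigma, n, rng):
    """Draw Maxwell-Boltzmann speeds as the norm of three independent
    Gaussian velocity components, each with dispersion sigma."""
    components = rng.normal(0.0, sigma, size=(n, 3))
    return np.linalg.norm(components, axis=1)


def test_kick_moments(sigma=265.0, n=1_000_000, seed=42):
    rng = np.random.default_rng(seed)
    v = draw_kick_speeds(sigma, n, rng)

    # Analytical moments of a Maxwell-Boltzmann with dispersion sigma
    mean = 2 * sigma * np.sqrt(2 / np.pi)
    var = sigma**2 * (3 * np.pi - 8) / np.pi
    rms = np.sqrt(3) * sigma   # NOT sigma -- that's only true in 1D

    assert np.isclose(v.mean(), mean, rtol=1e-2)
    assert np.isclose(v.var(), var, rtol=1e-2)
    assert np.isclose(np.sqrt(np.mean(v**2)), rms, rtol=1e-2)
```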

It turns out that this would indeed be the right thing to do... in one dimension. In 1D the RMS of the velocity really is $\sigma$, but in 3D the RMS speed is $\sqrt{3}\,\sigma$, so forcing the sample RMS to equal $\sigma$ shrinks every kick. I tidied up the code, reinserted the factor of $\sqrt{3}$, and made a vow to always call $\sigma$ the **dispersion** and not the **RMS velocity** in future, to avoid confusing myself.
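If you want to convince yourself of the 1D versus 3D distinction, a few lines of (again, purely illustrative) numpy will do it:

```python
import numpy as np

rng = np.random.default_rng(0)
sigma = 1.0
components = rng.normal(0.0, sigma, size=(1_000_000, 3))

# The RMS of a single 1D velocity component is sigma...
rms_1d = np.sqrt(np.mean(components[:, 0] ** 2))

# ...but the RMS of the full 3D speed is sqrt(3) * sigma
speeds = np.linalg.norm(components, axis=1)
rms_3d = np.sqrt(np.mean(speeds ** 2))

print(rms_1d)               # ~1.0, i.e. sigma
print(rms_3d / np.sqrt(3))  # ~1.0, i.e. sigma again once the factor is removed
```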