Model compression is widely employed to deploy convolutional neural networks on devices with limited computational resources or power budgets. For high-stakes applications, however, it is important that compression techniques do not impair the safety of the system. This paper investigates the changes, beyond test accuracy, that three compression methods introduce.