Simplifying testinfra tests



Some of the testinfra tests are redundant with Ansible, in the sense that they verify something that is exactly what Ansible does, only written differently and therefore not DRY. Here is an example:

The testinfra test

```python
import pytest


@pytest.mark.parametrize('setting', [
    'SUBJECT=`formail -xSubject:`',
    ':0 c',
])
def test_procmail_settings(File, Sudo, setting):
    """
    Ensure procmail settings are correct. These config lines determine
    how the OSSEC email alerts are encrypted and then passed off for sending.
    """
    # Sudo is required to traverse the /var/ossec directory.
    with Sudo():
        f = File("/var/ossec/.procmailrc")
        assert f.contains('^{}$'.format(setting))
```

and the corresponding Ansible code

```yaml
- name: Copy procmail config file.
  copy:
    src: procmailrc
    dest: /var/ossec/.procmailrc
    owner: root
    group: ossec
  tags:
    - procmail
```

which includes this file

```
SUBJECT=`formail -xSubject:`
:0 c
```

In this case and many others, testinfra duplicates what Ansible does; running the corresponding Ansible code a second time and verifying that it reports no changes would test exactly the same thing.
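As a rough sketch of that idempotence check: a second `ansible-playbook` run prints a `PLAY RECAP` with per-host counters, and a truly declarative task reports `changed=0` when replayed. The helper below is hypothetical (not part of the test suite); it only parses recap-style output.

```python
import re


def assert_idempotent(recap_output):
    """Fail if an ansible-playbook PLAY RECAP shows any changed tasks.

    On a second, no-op run every host should report changed=0, which
    verifies the same invariant as a testinfra test that re-checks the
    managed file's contents.
    """
    # Match counters such as "mon-staging : ok=12 changed=0 failed=0"
    for match in re.finditer(r'changed=(\d+)', recap_output):
        assert match.group(1) == '0', 'second run was not idempotent'


# Example recap line from a (hypothetical) second run:
recap = "mon-staging : ok=12   changed=0    unreachable=0    failed=0"
assert_idempotent(recap)  # changed=0, so no AssertionError
```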

Some testinfra tests are genuinely useful and have no equivalent in Ansible, such as the OSSEC rule tests.

I’m not proposing we clean up all the duplicates at once; that would not help much. But I propose that whenever we are modifying testinfra code for some other reason, we should feel free to trim tests that merely duplicate an Ansible task.

What do you think?


Yes, you’re completely right that the testinfra tests mostly validate the implementation enforced by Ansible. That’s not the best way to test, and it’s not DRY, but I advocate keeping them: the current testinfra suite catches mistakes introduced when the Ansible code changes.

The goal here is to reduce or eliminate regressions caused by refactoring, cleanup, or any other class of common mistake. Since we are notoriously short on ops-focused contributors (judging by the current code ownership breakdown in the repo), this is doubly important as a stopgap until we have more robust testing procedures.

Eventually we should focus on testing functionality rather than implementation. To take the firewall rules as an example, we should test that the hosts genuinely drop packets by trying to open a socket from an external test client. We shouldn’t be checking the iptables rule chain output (as we do now), since doing so lets us cargo-cult bad configurations we’ve never noticed before.
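A minimal sketch of that kind of black-box check, assuming an external test client with network reach to the host (the hostname below is a placeholder): a firewall that DROPs packets shows up as a connection timeout, while a merely closed port is refused immediately.

```python
import socket


def port_is_reachable(host, port, timeout=3.0):
    """Return True if a TCP connection to host:port succeeds.

    A firewall DROP rule makes the attempt time out; a host with no
    listener (but no firewall) refuses the connection quickly. Either
    failure returns False, but the failure modes are distinguishable
    by how long the call takes.
    """
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:  # includes socket.timeout and ConnectionRefusedError
        return False


# From an external test client, a filtered port should be unreachable:
# assert not port_is_reachable('mon-staging.example.com', 1514)
```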