Thank you @Maxwell_Morais for your comments, but I think you missed my point. I never said integration tests were not needed; in fact, I explicitly mentioned that they are.
Also, integration tests fall down when you look at, for example, the changes made to the GUI for version 12: they would need a complete rewrite.
However, if there were unit tests on all of the underlying APIs/docs, their functionality could be tested (and there is definitely some amount of tests in there already; I just think they need fleshing out and concentrating on, so that they actually bring value to the product). Then you could see that, say, the version 12 GUI doesn't pass the sales order tests, but because the unit tests underneath still pass, you know the problem lies in the GUI and nowhere else.
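To make that concrete, here is a minimal sketch of the kind of unit test that lives underneath the GUI. The function name and data shapes are hypothetical for illustration, not actual Frappe APIs:

```python
# Hypothetical example: a unit test on underlying sales order logic.
# If this passes while a GUI-level integration test fails, the bug must
# be in the GUI layer, not in the business logic underneath.

def calculate_order_total(items):
    """Underlying function: sum of qty * rate over the line items."""
    return sum(item["qty"] * item["rate"] for item in items)

def test_calculate_order_total():
    items = [{"qty": 2, "rate": 10.0}, {"qty": 1, "rate": 5.0}]
    assert calculate_order_total(items) == 25.0
```

With a layer of tests like this in place, a failing GUI test immediately narrows the search to the GUI itself.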
My main point in this discussion is how to build quality into the code and allow the system to handle changes more easily.
Another point that we need to keep in mind is that:
Tests aren't made for the developer.
Tests aren't made to ensure that the code works.
Tests are made to ensure that the users are fully capable of relying on the code you wrote to do what they want to do.
I disagree with all of the statements above, as I explained in my previous post, but here is a good link that also shows the advantages of unit testing, which are the opposite of everything you have mentioned here.
The result is that the end user gets a much more stable product. It is the responsibility of the developer to either keep a clean house and spend the extra time writing unit tests, or not, which results in a fragile system that breaks whenever things change, and then the user is the one who has to find the bugs, report them, get frustrated, wait for a fix, etc.
All of that could possibly have been found and fixed early on, while the code was being written. It's really about the investment of time: if it's made early in the development phase, that time isn't needed later on, when fixing things usually requires a much larger investment.
I already started some experiments in that area in the past, like here:
Yeah, that is similar to Cypress/Selenium, which Frappe have started to use (Cypress tests, that is), so they are already on board with integration testing.
But for an initiative to redesign the testing framework to be successful, it's not just an engagement of us developers and project maintainers; it needs engagement from the whole community, but fundamentally from users, because they are the target of any work we do.
I disagree strongly with this statement as well: it doesn't fall upon the user at all. If I build you a house but I leave bits of building material lying around inside, a window that doesn't close, and a hole in the roof, whose fault is it? The home owner just expects to be able to use their home as a normal person would. The builder did not finish the job and complete it properly. It's not up to the home owner to know how the building was built, or how they should fix the hole; they just know that a hole is there, which means they can't live in the house.
I think of testing the same way: it is basically the duty of the developer to ensure that what is done is of a high standard, for the present and for the future.
Going back to the reason for this discussion: has anyone used pytest before? I've used it quite a lot over the last couple of years, and I enjoy using it. The problem of database records being left behind after a test is solved elegantly with the below:
in conftest.py:

```python
import os
from unittest.mock import MagicMock

import pytest

import frappe


@pytest.fixture(scope='session')
def db_instance():
    # Monkey-patch commit so that records never get saved to the database.
    frappe.connect(site=os.environ.get("site"))
    frappe.db.commit = MagicMock()
    yield frappe.db


@pytest.fixture()
def db_transaction(db_instance):
    # Create a new transaction for each test and roll it back afterwards.
    db_instance.begin()
    yield db_instance
    db_instance.rollback()
```
Then I can just use this "db_transaction" parameter in whatever test I wish to write; I have access to the db, and anything I do in that test will be reverted. As an example, below I test that a placeholder is set in a field, and that if it isn't, a specific error is raised to the user.
```python
def test_validation_ldap_search_string_needs_to_contain_a_placeholder_for_format(
        db_transaction):
    with pytest.raises(ValidationError,
            match="LDAP Search String needs to end with a placeholder, eg sAMAccountName={0}"):
        settings = get_doc("LDAP Settings")
        set_dummy_ldap_settings(settings)
        settings.ldap_search_string = 'notvalid'
        settings.save()
```
This means that for a whole test session, committing to the database is mocked out and will never run. For each test, however, you can create a new database transaction and do your test. This means you can add records, check that they exist, were updated, etc., and once the test has finished, all of those database changes are rolled back. It does this using a form of dependency injection, which is one of the awesome features of pytest.
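The begin/rollback pattern itself is independent of Frappe. Here is a minimal, self-contained sketch of the same idea using sqlite3 instead of frappe.db (the table and helper names are made up for illustration): the test writes a record and can see it inside its own transaction, but the rollback afterwards leaves the database untouched.

```python
import sqlite3

# Set up an in-memory database with one committed, empty table.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT)")
conn.commit()

def run_in_transaction(conn, test_body):
    """Run test_body inside a transaction, then roll everything back,
    mimicking what the db_transaction fixture does per test."""
    try:
        test_body(conn)
    finally:
        conn.rollback()  # undo anything the test wrote

def my_test(conn):
    conn.execute("INSERT INTO users VALUES ('alice')")
    # Inside the transaction, the record is visible to the test.
    rows = conn.execute("SELECT COUNT(*) FROM users").fetchone()[0]
    assert rows == 1

run_in_transaction(conn, my_test)

# After the rollback, the table is empty again.
count = conn.execute("SELECT COUNT(*) FROM users").fetchone()[0]
print(count)  # → 0
```

pytest's fixtures add the convenience on top: the begin/yield/rollback bookkeeping lives in one place, and any test that names the fixture as a parameter gets it injected automatically.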
This is just one of the ways I was thinking it would be good to re-think and redesign the testing framework.