At the end of November, Eugene and I spent two days in Warsaw at the Agile & Automation Days conference and workshop, held at the National Stadium. Here is a bunch of my thoughts and impressions from the conference:
„Fresh view on test coverage” Bartek Szulc
The presentation was driven by a good question: what does it mean to have test coverage at 80%, 90% or higher? Is it good? Should we be proud of it? Does it give us safety and confidence? What about the missing 10–20%? How can we assess which lines or areas of code are more important or critical than others? Let’s be methodical and evaluate it in an automated way.
A classic test coverage measure like line coverage may give us the false impression that we are safe. On the other hand, we know testing is more or less about risk mitigation. Testing is an infinite process and we may never reach 100%, so we need to accept the risk of finishing at some point. We should weigh the risk of testing a part of the code against the risk of not testing it. Why not evaluate which lines of code are more important or critical and make sure we are testing them? Yes, there are thousands of lines to assess, and we are not able to do this manually.
Let’s think about the data we already have in our code repository, issue tracking tool and production logs. Based on this we can calculate which lines of code are more important than others. In the version control system we can check how often a particular class has been changed and what the purpose of each change was. The next factor is how many times a method is used; this comes from static code analysis and should take the whole usage stack into account. To assess the actual production usage of methods we can analyse client logs. Of course, we should invest in good log coverage first. Assuming we have a pre-commit hook that requires every commit message to start with the issue number of the change, we can count the number of bugs already detected and fixed in a particular code area. These are severity indicators. Bartek presented us a formula with a weight assigned to each of these factors. Sounds like a new tool is coming to the Atlassian fleet…
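Bartek’s actual formula and weights were not published, but the idea can be sketched as a simple weighted sum. The factor names and weight values below are my own illustration, not his formula:

```python
# Hypothetical "code area importance" score combining the three factors
# from the talk: change frequency (from VCS), usage count (from static
# analysis / production logs) and bugs already fixed there (from the
# issue tracker). Weights are invented for illustration.

WEIGHTS = {"changes": 0.3, "usage": 0.3, "bugs": 0.4}

def importance(changes: int, usage: int, bugs: int) -> float:
    """Combine the raw factor counts into a single importance score."""
    return (WEIGHTS["changes"] * changes
            + WEIGHTS["usage"] * usage
            + WEIGHTS["bugs"] * bugs)

# Rank two code areas: a hot, buggy class vs. a stable utility.
hot = importance(changes=25, usage=100, bugs=7)
stable = importance(changes=2, usage=5, bugs=0)
assert hot > stable  # test the hot spot first
```

In practice you would normalise each factor (e.g. to 0–1 per repository) before weighting, so that one metric with large raw counts does not dominate the others.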
„Mobile – the clue is in the name” Richard Bradshaw
Where are we testing applications for mobile devices? “Mobile” says the device is mobile, so testing it only at your desk is not enough. We should consider where a mobile is actually used, e.g. on a train, or at a football match where 50 thousand people gather with their phones.
One of the words Richard repeated most was “immature”. If you are new to mobile testing and hear that from a guy with 10 years of experience in the field, it becomes a valuable opinion. Tools for mobile testing, like simulators, emulators and automation frameworks, are much better than two years ago, but they are still immature in terms of stability. Taking this into account, you need to plan more time to make them work for you. As confirmation, Richard confessed: “Appium should be your tool of choice, but currently I hate Appium (the version new at that time) and I’m waiting for the upcoming release to fix the issues”.
Richard’s approach is not to start from mobile test automation, but rather to think about how to facilitate manual mobile testing. If I have several devices and have to repeatedly install new versions of an app, why not prepare a script to do it for me? Are there any other preconditions, like test data setup? Let’s automate where it is rewarding and reliable. There was a funny story about how the author of Selenium ended up with a contract to automate 100% of mobile tests for a big company: he built a machine to physically touch and swipe smartphones. Let’s not go that way on a limited budget.
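The “install the new build on every device” chore is a good first candidate. A minimal sketch, assuming the Android SDK’s `adb` is on your PATH (the APK name is illustrative):

```python
# Reinstall the latest build on every device adb can see.
import subprocess

def parse_devices(adb_output: str) -> list[str]:
    """Extract serial numbers from `adb devices` output."""
    lines = adb_output.strip().splitlines()[1:]  # skip the header line
    return [line.split()[0] for line in lines
            if line.strip() and not line.startswith("*")]

def install_cmd(serial: str, apk: str) -> list[str]:
    """Build the adb command that (re)installs an APK on one device."""
    return ["adb", "-s", serial, "install", "-r", apk]

def install_everywhere(apk: str) -> None:
    """Run the reinstall on every connected device, one by one."""
    out = subprocess.run(["adb", "devices"], capture_output=True,
                         text=True, check=True).stdout
    for serial in parse_devices(out):
        subprocess.run(install_cmd(serial, apk), check=True)

# Usage: install_everywhere("app-latest.apk")
```

The same pattern extends to pushing test data or clearing app state (`adb shell pm clear <package>`) before a manual session.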
The next day we had a workshop conducted by Richard; it was a challenge… Generally, there was a part on testing a mobile web application and a part on testing a native application. The tool selection was:
- Chrome Dev Tools
- Android Debug Bridge (adb)
- Android IDE
Other tools to consider:
You may wonder why the 3rd board with brainstorm results contains tools like JMeter or SoapUI. This is actually not a bad approach, since mobile apps are mostly “stupid” in terms of business logic, while the UI can be tricky. So test the business logic of the web services first, before you even touch a mobile device; then test only the UI on the device itself.
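The “API first, device last” idea can be sketched in a few lines. Here the endpoint, payload shape and the basket example are all invented for illustration; the point is that the server’s logic is verifiable without any phone:

```python
# Check business logic (the basket total) straight against the backend
# response, with no mobile device involved.
import json

def total_price(response_body: str) -> float:
    """The server, not the app, must compute the basket total; this is
    the kind of rule we can verify over the API alone."""
    basket = json.loads(response_body)
    return sum(item["price"] * item["qty"] for item in basket["items"])

# Simulated server response; in a real test this would come from an
# HTTP call (e.g. urllib.request) to the backend the app talks to.
body = '{"items": [{"price": 2.5, "qty": 2}, {"price": 10.0, "qty": 1}]}'
assert total_price(body) == 15.0
```

Once checks like this pass at the service level, the on-device tests can shrink to rendering and interaction concerns.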
What anyone can try on an Android phone without any extra tools:
- Connect the phone via USB to your laptop
- Settings -> About phone -> tap 7 times on Build number to turn on Developer options
- Settings -> Developer options -> enable USB debugging
- Open Chrome on the phone
- On your laptop, open Chrome, type chrome://inspect -> select your device -> click “inspect” next to the page listed there
- Operate on the page on your mobile and view the results on your laptop
- Use Chrome -> More Tools -> Developer Tools to do more, enjoy :)
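There is also a command-line route to the same remote-debugging session. Chrome on the device exposes its DevTools protocol on an abstract socket named `chrome_devtools_remote`; with `adb` on your PATH and USB debugging already enabled, you can forward it and query it. A minimal sketch (the port choice is arbitrary):

```python
# Forward Chrome's on-device DevTools socket to localhost and list the
# open tabs via the DevTools JSON endpoint.
import json
import subprocess
import urllib.request

def forward_cmd(local_port: int = 9222) -> list[str]:
    """adb command that forwards Chrome's DevTools socket to localhost."""
    return ["adb", "forward", f"tcp:{local_port}",
            "localabstract:chrome_devtools_remote"]

def open_tabs(local_port: int = 9222) -> list[str]:
    """Return the URLs of tabs open on the device (needs the forward above)."""
    with urllib.request.urlopen(f"http://localhost:{local_port}/json") as r:
        return [tab["url"] for tab in json.load(r)]

# Usage (with a device attached):
#   subprocess.run(forward_cmd(), check=True)
#   print(open_tabs())
```

This is the same mechanism the chrome://inspect page uses under the hood, which makes it handy for scripting checks across several devices.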
„Perfecting the craft of test automation” Maaret Pyhäjärvi
There were many of her thoughts that most of us will agree with:
- The need to automate environment preparation, so it is as simple as provisioning infrastructure from a cloud provider like Amazon
- Skills compensation between team members, or “learning through osmosis” (doing the same thing, at the same time, in the same space, even on the same machine)
There were also two statements that I initially disagreed with (they require more precision):
1) Maaret pointed out a case where she had to deal with programmers who didn’t want to write tests of any type. Their response was more or less: “our code is so dirty that we want to clean it up first; that has higher priority than writing tests”. Maaret made an agreement with them: “Whenever I ask you for a test, take the task, do what you think is necessary first, but at the end write a test”. This goes against the good practice of having a solid set of unit tests before you actually start refactoring code (not to mention TDD). We can treat her advice only as a real-life solution to the motivational problem of getting programmers to write tests.
2) Split long tests into smaller ones to make them fast.
The execution-time argument for cutting a long test into pieces may not hold in some cases. You have to take into account that making 3 test scenarios instead of 1 is not only a matter of cutting: you need to add preconditions and cleanup for each part (or you will make them dependent on each other). To complete Maaret’s thought, I believe you may split a test into independent parts if the preconditions (and rollback) can be set up not through UI interaction but through the database or an API; otherwise, the total execution time will increase.
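A sketch of that “precondition via API, not UI” idea: each split test creates its own test user through a backend client instead of clicking through a registration form. `FakeApi` below is a stand-in I invented for such a client; its method names are illustrative:

```python
# Each split test owns its precondition (setUp) and rollback (tearDown),
# done through an API-style client rather than the UI, so the parts stay
# independent without paying the UI's time cost three times over.
import unittest

class FakeApi:
    """Stand-in for a real HTTP client against the backend."""
    def __init__(self):
        self.users = {}
    def create_user(self, name):      # fast precondition, no UI clicks
        self.users[name] = {"name": name}
        return self.users[name]
    def delete_user(self, name):      # fast rollback, no UI clicks
        self.users.pop(name, None)

class ProfileTest(unittest.TestCase):
    def setUp(self):
        self.api = FakeApi()
        self.user = self.api.create_user("tester")
    def tearDown(self):
        self.api.delete_user("tester")  # leaves no shared state behind
    def test_profile_has_name(self):
        self.assertEqual(self.user["name"], "tester")
```

If the only way to create the user were through the UI, repeating this setUp in every split part could easily cost more than the one long test did.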
„Case Study – developers plan to abandon the project” Jakub Drzazga
The speaker shared his real-life story with us. He was hired as an Agile Coach by a company to improve agile practices in a team. After some informal one-on-one meetings, with indirect and open questions about what was going right and what required a change, he realised the team members were planning to abandon the project (one crucial for the company). The management did not take his findings seriously, so he decided to write an email to the CEO. As the bearer of bad news he was not welcomed or understood, but finally the issue became urgent and the management found resources to introduce changes. The developers had their voice, and issues like the lack of a testing environment were resolved ASAP to give the team fast feedback. There were other changes too that were signs that “people matter”.
Around half a year later Jakub had an opportunity to verify the situation within the team. Well, nobody quit the job, but morale went back to the level from before the coaching initiative. To change a company’s climate you really need to do it at all levels; let’s not think that hiring a coach for a few weeks will ultimately put people on track. The example described by Jakub just reminds me how lucky I am at Kainos :)
(joint publication, author: Piotr Boho)
Workshop „Testing APIs with Postman”
One of the workshops was dedicated to testing with Postman (www.getpostman.com). Postman is a tool that allows you to send a REST request to an endpoint and, well… test whether the result is what you expect it to be. I want to focus on its capabilities and on whether you should use it or give preference to other tools.
First, Postman has come a long way – it has improved quite a bit from what it was a few years ago. It allows you to:
- group test cases into suites
- set environment variables
- perform automated result checks
but other tools can do this as well (just google “API testing tools”). The reason Postman works so well is that it also seamlessly integrates with:
- mocking service
- micro services (Hyvepod)
The workshop was pretty basic, just to give an overview of some of the things possible with Postman, which is A LOT.
Postman cannot test SOAP endpoints, there is no integrated security testing, and there are a lot of other limitations which I hope will be resolved in the paid version of Postman.
To sum up, Postman is great for testing REST endpoints, but you will have to reach for other tools like SoapUI if you want to perform security tests or SOAP-related tests.
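For illustration, the kind of automated result check you would write in Postman’s Tests tab can be sketched outside the tool as well. A minimal stand-alone Python version, with the status code and body simulated for the example:

```python
# A plain-Python equivalent of a Postman-style result check: assert on
# the status code and on fields in the JSON body. In a real run the
# status and body would come from an HTTP call (e.g. urllib.request)
# to your REST endpoint.
import json

def check_response(status: int, body: str) -> list[str]:
    """Return a list of failed assertions; empty when all checks pass."""
    failures = []
    if status != 200:
        failures.append(f"expected status 200, got {status}")
    data = json.loads(body)
    if "id" not in data:
        failures.append("response has no 'id' field")
    return failures

# Simulated endpoint responses for the example.
assert check_response(200, '{"id": 7}') == []
assert check_response(404, '{}') == ["expected status 200, got 404",
                                     "response has no 'id' field"]
```

Postman packages exactly this pattern (send request, run scripted checks, report failures) behind a UI and a collection runner, which is what makes it convenient for suites.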
(joint publication, author: Eugene Nosenko)