(joint publication, authors: Jolanta Hrywniak, Eugene Nosenko, Mateusz Radkiewicz, Marek Weihs, Dominika Wojtko)


from left to right: Jolanta, Dominika, Eugene, Marek, Mateusz

EuroSTAR is probably Europe’s biggest and most prominent software testing conference. Since 1993 it has gathered professionals and software quality enthusiasts to share ideas, learn and network. This year it will be hosted in Stockholm at the beginning of November. Before that, a series of one-day EuroSTAR roadshow conferences is being organized in a few European cities. We had the pleasure of attending the Warsaw edition on 27th of April 2016.

Below are the conference agenda and summaries of all the presentations.

author: M. Radkiewicz


“Test improvement. Any place, any time, anywhere”

Presenter: Ruud Teunissen
(authors: Jolanta Hrywniak, Dominika Wojtko)

The opening presentation by Ruud Teunissen was, in my opinion, one of the most interesting. It was clear that there was a lot of experience behind his words.

“Insanity: doing the same thing over and over again and expecting different results.”

Albert Einstein

This quote perfectly captured what Ruud wanted to say: to achieve an improvement, you need to introduce a change. And the right place for change is any place (at any time and anywhere). Going from a pioneering to an optimal solution gives you the possibility to improve your actions in waterfall and agile projects, in test-driven development and in continuous integration.

To make test improvement successful, we need to determine our conditions. First of all, the objective needs to be known and defined. After setting the goal, it is time to think about the scope and the most useful and efficient approach. When the proper approach has been selected, the assessment can be made. This is the best time for asking yourself questions like: where can I improve? What is the context? What would help? If we know the answers, then setting up the improvement plan will definitely be an easier task.

Ruud shared his experience using different models and approaches. He demonstrated many examples of bound models – improvement models with a pre-defined approach for assessment and implementation (TPI Next, TMMi, STEP, CTP, GQM). Tailor-made approaches created for specific situations are another valuable option; here we can mention TI4Agile, TI4Automation, CTPI, SFAI-web or Belbin. Even more options can be found among the unbound approaches, which are based on the expertise and experience of all involved.

After the presentation we could take home a practical set of guidelines based on valuable lessons learned and good practices. As Ruud noted, continuous improvement is better than delayed perfection.

What I found very important was the emphasis that it’s the people who need to change – no one can actually change them. That means they need to believe in a change to implement it in their work. Even the best solution forced on people will never be good, because they will actively act against it. The crucial thing to remember here is that one approach will never be the right one for every person, every team and every customer.

author: A. Kornecka


“Application security testing – an update!”

Presenter: Declan O’Riordan
(author: Eugene Nosenko)

Security testing has to be considered a niche field. As our lives become increasingly digitalized, security has never been so important – and yet there are so few security experts. Can you blame anyone? To be a good security expert you need to know basically everything there is to know about IT right now; you won’t even scratch the surface after spending five years exploring the security field. So being a security expert is not that easy… or is it?

Well, no. At least that is what Declan O’Riordan was so passionately trying to tell us: you don’t have to be a guru ethical hacker anymore to do it. His motto was “Maybe you will be a top-notch security expert tomorrow” (I doubt it will happen tomorrow, though).

So let’s get to it. Declan mostly tried to help us see how complicated the security field is, and the first part of his presentation was devoted to exactly that. He painted a very dark picture only to show us in the end that it wasn’t all that gloomy – that there is light at the end of the tunnel. So what is it?

The answer is real-time Interactive Application Security Testing (IAST). IAST is said to be miles ahead of DAST and SAST (dynamic and static application security testing). Notice the emphasis on real-time? Yes, these are application security processes run in real time, but we will get back to that later.

So what is it and what are its benefits?

  • It is a SAST/DAST hybrid. Both static and dynamic testing miss huge portions of most applications. But interactive testing examines the entire application from the inside — including the libraries and frameworks. So you get better coverage over your entire codebase.
  • False Positives. False positives represent the single biggest weakness in security tools, commonly representing over 50% of the results. With interactive testing, access to more data leads to more accurate findings, decreasing the workload on scarce security resources.
  • Vulnerability Coverage. IAST tools not only focus on the most common and riskiest flaws found in applications, but they also allow for custom rules to personalize the threat coverage for specific enterprises.
  • Static and dynamic tools don’t scale well. They typically require experts to set up and run the tool as well as interpret the results. But the size and complexity of an application don’t affect interactive testing, which can handle extremely large applications in stride.
  • Instant Feedback. Interactive testing provides instant feedback to a developer, within seconds of coding and testing new code. Developers can be sure they are only checking in “clean” code, saving time and money downstream.
  • No Experts Required. IAST works out of the box, automatically, without you doing anything extra – while still allowing for some advanced configuration.
  • Zero Process Disruption. IAST leverages existing activities to add security testing without separate disruptive activities or schedule breaking checkpoints.
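To make the “examines the application from the inside” point more concrete, here is a toy sketch – my own illustration, not something from the talk and certainly not a real IAST tool. A function inside the running application is instrumented so that suspicious input is flagged in real time as it flows through, rather than by scanning source code (SAST) or probing from the outside (DAST). The `run_query` sink and the injection heuristic are entirely hypothetical:

```python
# Toy illustration of the IAST "from the inside" idea: instrument a
# function at runtime and flag dangerous input as it flows through the
# running application.
import functools

findings = []

def taint_check(func):
    """Wrap a sink function and report suspicious arguments in real time."""
    @functools.wraps(func)
    def wrapper(query):
        if "'" in query or "--" in query:  # naive SQL-injection heuristic
            findings.append(f"possible SQL injection in {func.__name__}: {query!r}")
        return func(query)
    return wrapper

@taint_check
def run_query(query):
    # Stand-in for a real database call.
    return f"executed: {query}"

run_query("SELECT * FROM users WHERE id = 42")           # benign, no finding
run_query("SELECT * FROM users WHERE name = '' OR '1'='1'")  # flagged
print(findings)
```

A real IAST agent instruments the whole runtime – libraries and frameworks included – rather than a single decorated function, but the principle of observing actual data inside the running application is the same.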

Pretty awesome, don’t you think? Well, it does have some major drawbacks:

  • Performance/Stability/Ongoing Management.

I put everything under one point simply because this is a fairly new solution and we don’t know how big an impact it might have on those aspects. There is some concern about performance – remember, it’s real-time; imagine IAST software running analysis on stock market transactions that need to be lightning fast. And the installation and maintenance of such a solution on large-scale, sprawling website infrastructures can make your head spin.

So is Interactive Application Security Testing the future?

In my opinion, yes – definitely. Just the fact that it works out of the box is too tempting to pass up. Plus, it seems that a lot of big names are investing in it – IBM, HP, NTO, Parasoft and Quotium – so it might be good to keep it on your radar.

author: A. Kornecka


“Why examine your testing skills?”

Presenter: Alexandra Casapu
(author: Marek Weihs)

The presentation given by Alexandra Casapu was about testing skills and the need for their assessment and development. Similarly to what Ruud Teunissen pointed out regarding test process improvement, testing skills require continuous improvement as well. And there is no improvement if you don’t know what to improve ;).

Alexandra tried to point out the importance of knowing your own skills and understanding how good you are at them. Testing is an activity that requires skills, and it is better and more efficient when the appropriate skills are used – the greater the skills, the better the testing performed.

Alexandra gave a number of ways to assess, evaluate and examine your skills. One of the interesting ideas was to keep a log of the challenges or problems you faced and the ways you overcame them.

Was a lack of some skill the reason for the challenge or problem? What were the possible solutions, and which one was the best? Having this kind of information documented allows you to get back to it when you need to, e.g. when you face a very similar problem again. And because you already faced it in the past, you know how to deal with it now – lesson learnt. You also know what kind of knowledge or skill you lacked back then. I am pretty sure you used the time wisely and that this is not the case anymore, as you have already learnt that skill. And all that because you assessed yourself honestly.

Finding the gaps in your testing skills is one thing; another is having a way to document your actual skills, which allows you to identify the gaps and to see what you already know more easily. One of the ideas given by Alexandra was to apply the mind-map approach to illustrate your testing skills. I must say this was quite interesting and should be quite effective.

Apart from gaining new skills, you should also take care of the ones you already possess. Alexandra strongly emphasized the need to keep your existing skills in good shape. There is a number of testing exercises that allow you to keep your mind fresh and your testing senses sharp. One of those can be the Mr. Buggy app presented by Radosław Smilgin in one of the other EuroSTAR presentations.

The same as testing itself, developing your testing skills is an ongoing process. This means it never ends. As there are always new bugs to be found, there are also always new skills you can gain or develop. Don’t stop developing them. The better your skills, the better a tester you are – and so is your testing. Keep learning! :)

author: A. Kornecka


“Moving to frequent releases”

Presenter: Rob Lambert
(author: Mateusz Radkiewicz)

Big releases are evil. Testing in big releases is slow, boring and, what’s even worse, ineffective. During his energetic presentation, Rob Lambert described the process of moving from big releases (every several months) to one-week releases.

In his company they used to deliver software in big releases. It was causing a lot of issues. The time between customer needs analysis and product delivery was so long that the end product no longer met the business requirements. The timeframe between coding and testing was also too long to deliver quick and effective feedback about quality. There were always delays, which sometimes resulted in cutting the time for testing.

So they decided to change it radically and cut the release time from several months to one week. How did they do it?

  • Implemented agile. The entire team was involved in testing; software was coded and tested in small chunks. Feedback from testing became immediate.
  • Focused on test automation at different levels: UI, integration and unit tests. Automated tests give an immediate response in case of any regression in the tested area.
  • Focused on exploratory testing as one of the most effective ways to test.
  • Introduced the dogfooding idea – they were using their product on a daily basis before releasing it to production.
  • Constant monitoring and testing after release – feedback about quality from production helps to improve pre-release testing.
  • Every team supports its own part of the system in production. There is no separate support team.
  • Collecting and analyzing data from production. It tells how the product is behaving and is an invaluable source of knowledge required to improve development and testing.
  • Constantly improving the process, development and testing.
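As a minimal illustration of the unit-test level mentioned in the list, here is my own sketch (the `apply_discount` function is hypothetical, not an example from the talk) of the kind of small, fast check that gives a developer feedback within seconds of changing code:

```python
def apply_discount(price: float, percent: float) -> float:
    """Return the price reduced by the given percentage, rounded to cents."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

def test_apply_discount():
    # Fast, deterministic checks: any regression in this area fails immediately.
    assert apply_discount(100.0, 25) == 75.0
    assert apply_discount(49.99, 0) == 49.99
    try:
        apply_discount(10.0, 150)
    except ValueError:
        pass
    else:
        raise AssertionError("expected ValueError for an invalid percentage")

test_apply_discount()
print("all checks passed")
```

Run on every commit in a CI pipeline, a suite of such tests is what makes the immediate feedback possible; slower UI and integration tests then cover what unit tests cannot.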

Rob mentioned that they still have more to improve. But implementing such a radical transition from big to short releases can already be considered a success.

author: A. Kornecka


“Tools supported Testers’ education”

Presenter: Radosław Smilgin
(author: Jolanta Hrywniak)

The next presentation, by Radosław Smilgin, was about teaching others how to be (or become) a better tester. There are many ways to achieve this goal: one may learn from books or the Internet, or attend appropriate trainings, but – according to Radek – the best way is simply to practice. His idea for throwing somebody in at the deep end is “Mr. Buggy”. This application was designed for testers to train in searching for and reporting bugs that are already present in the application. This approach is better than testing random software, because it avoids problems like lack of control over the environment or missing authorisation for some actions. Working on an application created to be tested eliminates the risk of breaking a real production environment, and therefore encourages testers to try out every idea they may have while looking for bugs.

“Mr. Buggy” can be used by anyone at home, but there is also a Testing Cup championship organized every year, and the tool works perfectly there as well. Because all the existing bugs are well known from the start, it’s easy to point out the team or individual competitor who found the most (or maybe all) of them. And since the application is the same for everyone, participants have equal chances to win and to learn. After all, “Mr. Buggy” was created to help testers improve their skills, and that’s the main role of the tool.

After the presentation I was wondering whether working in an environment with pre-defined bugs is really the best way to become a good tester. It creates an artificial situation in which the goal is to find a specific number of issues; once that’s done, the application is considered completely tested. When the problems are not known, further testing is always possible – there are always new bugs to be found!

author: A. Kornecka


“End Users Involved at Last”

Presenter: Michał Stryjak
(author: Dominika Wojtko)

The presentation given by Michał Stryjak was about the consequences of involving end users in a project only at the very end. Michał experienced this situation in one of his projects, and as the consequences were significant, he decided to share his lessons learned with a wider audience.

Imagine a project where end users were completely separated from the developers, even though they worked with the application 8 hours a day, 5 days a week. As the application was complex and business-critical, the IT team was well prepared for its work and focused on providing high quality – unit tests were executed, test cases were written in detail and every requirement was verified with the line manager. As the User Acceptance Tests were prepared by the developers rather than the users, they passed with no issues found and the project was considered a great success. In the end only the end-user trainings were missing, and the developers were supposed to provide them.

And then disaster struck. At the meeting with the users it turned out that the UAT hadn’t brought any value – which is not surprising if the developers prepared it with no contact with the end users. What is more, critical bugs were found in the production environment, even though the team had verified the requirements with their manager.

How did Michał and his team handle these problems? First of all, he asked himself how the requirements had arisen. It turned out that there were many sources of requirements, and in the end they were never consulted with the end users who worked with the application every day. The team came to the conclusion that it is always worth checking where requirements originate from and who reviewed and approved them.

As a solution to these problems, Michał proposed organizing UAT on site, where users had the opportunity to explore the application, test it using predefined scenarios and finally show the team how they use the application in their daily work. The new way of executing UAT proved to be a great success, as it not only helped the team understand how the users work, but also clarified what the end users need. Involving users in the Sprint Review allowed the team to get feedback even faster. The users influenced the priorities and became a part of the whole project and backlog decision process.

A lot of benefits can be found in this approach: communication became easy and effective, users’ satisfaction increased, requirements were refined to satisfy both management and users, and finally less work was needed. So don’t forget about involving end users when planning your project! :)

author: M. Radkiewicz


“Test Automation from a Management Perspective”

Presenter: Dorothy Graham
(author: Mateusz Radkiewicz)

Dorothy shared her thoughts on how test automation is perceived by management. Her enlightening presentation was of great value both for managers and for testers who want to implement test automation effectively in their projects.

It’s essential to be aware of what can be expected from investing in automation. Some of the wrong expectations that tend to appear quite often are, for example: the system will instantly be completely tested, we can cut testing costs, tests can be automated once and will keep working without maintenance. The reality is different: it takes time and effort to build good automation, and it needs constant support as it’s a new asset. It may take longer to automate tests than to execute them manually, and automation isn’t a panacea for problems like the quality of requirements, design and code. But as a result we gain faster feedback about quality, more accuracy and higher frequency. Testers will be freed for more exploratory testing.

Another aspect is defining good objectives. Test automation should reduce test execution time, but this will only happen after the automation becomes mature; in the early stages it takes more time to automate and stabilize tests. Another objective is to automate a given fraction of the manual tests. What is important is that we shouldn’t try to automate 100% – there are tests which shouldn’t be automated: because of technical difficulties, because they are executed just once, because they require a human to judge the results, or because they are exploratory. We also shouldn’t expect that automated regression tests will find many bugs. The numbers presented show that automated tests find only 9% of bugs; for comparison, exploratory tests find 58% of all bugs.

The next important factor is to think about test automation as an asset. There are some aspects which have to be taken into consideration if we want this asset to be lasting and effective. Automated tests have to be run and maintained frequently. It has to be easy to update scripts and adapt them to the changing system under test. It should be easy and fast to add new automated tests and to analyse failures efficiently. New people should be able to use and update the automated tests quickly and easily. Building such an asset is not instant: it should be built gradually and its progress should be monitored.
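One common way to keep scripts easy to update as the system under test changes is the page-object pattern. The sketch below is my own illustration, with a hypothetical login page and a stand-in driver instead of a real browser driver: the locators live in one place, so a UI change means one edit rather than a hunt through every script.

```python
class FakeDriver:
    """Stand-in for a real browser driver; records what was typed and clicked."""
    def __init__(self):
        self.actions = []

    def type(self, locator, text):
        self.actions.append(("type", locator, text))

    def click(self, locator):
        self.actions.append(("click", locator))

class LoginPage:
    # If the UI changes, only these locators need updating; every test that
    # logs in through this page object keeps working unchanged.
    USER_FIELD = "#username"
    PASS_FIELD = "#password"
    SUBMIT_BTN = "#login-button"

    def __init__(self, driver):
        self.driver = driver

    def login(self, user, password):
        self.driver.type(self.USER_FIELD, user)
        self.driver.type(self.PASS_FIELD, password)
        self.driver.click(self.SUBMIT_BTN)

driver = FakeDriver()
LoginPage(driver).login("alice", "s3cret")
print(driver.actions[-1])  # the last recorded action is the submit click
```

The same separation also helps new people use and update the tests quickly: test scripts read as business steps, while the page objects hide the technical details.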

Dorothy also shared her ideas about responsibilities for test automation. She separated the tester and test automator roles: if one person is responsible for both manual and automated tests, there could be a conflict of responsibilities. What is also important, not all testers can automate well, not all testers want to automate and not all automators want to test – these aspects should be taken into consideration if we want to have effective tests and test automation.

At the end, Dorothy mentioned that there are pitfalls we have to avoid. The tool is not the only investment we need: we have to have good objectives, be prepared for the ‘long haul’ and build a lasting asset. Testing skill can’t be devalued, and not all manual testers will become test automators. The testware architecture has to be designed for the long term: easy to maintain, flexible to execute and constantly monitored.

author: M. Radkiewicz


“SMAC your testing”

Presenter: John Fodeh
(author: Eugene Nosenko)

The last talk was the least technical but also the most overwhelming. I say overwhelming because it once again showed me how fast the technology industry is advancing and the pace at which it’s changing our world.

I’m only going to mention the key points of this presentation, since it covered several large topics, the main one being data. The presentation itself was structured to swarm you with a lot of data. A conscious decision of the author to prove a point? Maybe.

Code Halo. Whether you use the incognito option in your browser or not, be assured: somewhere out there on the Internet there is a data representation of you. If you are a person who has something to hide, this might be worrying, but if you are a QA/tester, the Code Halo represents an opportunity.

All the information you need to know about the end users of your product is already out there: what devices they use, the kinds of functionality they prefer, what design, their favourite colour, working hours, favourite music, whether they eat healthily. The Code Halo offers a view into the life of an end user to a degree that wasn’t possible 10 years ago. This shifts the QA role and forces us to focus on the following.

Holistic Quality View. Another thing is the holistic view(1), which seems like a no-brainer. The levels of usefulness, non-functional qualities, functionality and deployability have a great impact on an application’s success – but so does how well the application interacts and integrates with other applications, present or future (e.g. smart house / smart car).

Smarter QA & Testing. An advance in technology means that we have to adapt and use the new tools in our work: focusing on identifying patterns rather than finding defects, understanding the user (based on their Code Halo) and predicting future usage and functionality based on advances in technology. Exploit the technology to test beyond the constraints of your physical environment, team and skills.

Summary. The degree of market disruption caused by the rapid adoption of digital technologies left companies like Kodak, Blockbuster and Nokia, who were once market leaders, broke and forgotten. The next generation won’t understand why the “Save” icon looks the way it does. The point is, the world is changing fast and we have to change with it. Improve, adapt and adopt.

1 Holism (from Greek ‘holos’ – all, whole, entire) is the idea that systems and their properties should be viewed as wholes, not as collections of parts. This often includes the view that systems function as wholes.