Tag Archives: Agile QA

Published new stuff

Since I’m not only a terrible blogger but also a terrible self-marketer, I tend to forget to mention the things I’ve published so far.

But today I want to introduce at least some of them:

To start with the most recent one: there is an article on the Pomodoro Technique by my colleague at crealytics, Martin Mauch, and me. It was published on Projektmagazin: http://www.projektmagazin.de/artikel/mehr-schaffen-in-kuerzerer-zeit-die-pomodoro-technik (sorry, English folks: it’s in German only).

The second is a book that was published last October. I had the honor of contributing a chapter to Henning Wolf’s “Agile Projekte mit Scrum, XP und Kanban im Unternehmen durchführen”, which focuses on case studies from hands-on folks. Of course, my chapter was about Agile in startups. (And again: sorry, this one is German only, too.)

Last but not least, I want to mention a book that has been on the market for quite a while but fortunately seems to be becoming a classic: the PHP QA book, aka “Real-World Solutions for Developing High-Quality PHP Frameworks and Applications” (yes, finally, in English! :-)) and “Softwarequalität in PHP-Projekten” (German edition). Also available on Kindle.
It’s partly theoretical knowledge, partly case studies. Mine was on QA with Selenium at studiVZ, together with Max Horvath, so maybe I mention it for sentimental reasons (good old days!).
But the book itself is an invaluable compendium for any kind of testing in the PHP world – have a look at it.

Enjoy reading!

GTAC 2009 – Lightning Talks

As I mentioned before, there were eight lightning talks, held on Wednesday. Marie Patriarche from Google has published the list of topics in Wave – where available, I’ve added links to slides or to projects:

When I wanted to publish my own slides yesterday, I had to face the fact that Philipp’s MacBook, which I had worked on, obviously hadn’t liked my talk and had erased everything except the master slide.

Don’t cry, folks, my slides weren’t on the World Heritage List. I will write them down again and publish them here, with some additional comments on the slides.

GTAC 2009 – first day

So, the first day of GTAC is over – I’m back in my hotel room, now doing some extra work especially for you after a twelve-hour workday. I’ll give you a short overview of the talks we had.

But let’s start with the most important things: office standard and food (dedicated to Alex, just in case he reads this ;-)). Even though I thought the Google office in Seattle could not be topped, I have to admit: I was wrong! This office in Zurich is so unbelievable, guys. They have base camp capsules from the South Pole as telephone cabins, a library furnished in old English style, an indoor jungle (sic!), a massage room, a fitness center, a gaming zone, … and amazing food (mostly organic). Wow, congrats to Google, you managed to impress me (against my will).

Ok, back to the conference: The first and main difference compared to the event in Seattle is that this is a very small, intimate convention: they selected just 100 people from those who wanted to attend – 40 Googlers and 60 non-Googlers. I appreciate being here all the more. And in order to prove me a liar (see my last posting): during his opening remarks, Jürgen Allgayer, Director of Engineering Productivity EMEA, explained in detail the criteria of the attendee selection process.

The keynote was held by Prof. Niklaus Wirth, creator of Pascal and several other programming languages. Here is some of what he said:

– Theoretically, tests should tend to zero, because testing is always about finding bugs, not about preventing them

– Programmers should have a very, very deep knowledge of software design and systems

– Programs shouldn’t be dependent on underlying structures

– Universities actually don’t provide what a programmer really needs (mostly because professors stopped coding years ago)

My personal opinion concerning the last point: but mostly, companies don’t either! So if neither of them does – what could be the solution?

The first regular talk was called “Precondition Satisfaction by Smart Object Selection in Random Testing”, by Yi Wei and Serge Gebhardt from ETH Zurich. It was an academic evaluation of random testing. They compared an “or” strategy vs. a “ps” strategy. “or” feeds randomly chosen objects to the methods under test. The problem is that this produces many failures, because many objects don’t match the test case’s preconditions. In the “ps” strategy, created objects are checked against the preconditions, and if they match, the predicate validation pool is updated. They closed with a recommendation to use both strategies combined.
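To make the difference concrete, here is a minimal toy sketch of the two strategies. The “or”/“ps” names come from the talk, but the pool, the precondition, and all function names are my own illustrative assumptions, not the authors’ actual implementation:

```python
import random

# Toy method under test: its precondition requires a non-negative input.
def precondition(x):
    return x >= 0

def method_under_test(x):
    assert precondition(x)
    return x ** 0.5

def or_strategy(pool, trials, rng):
    """'or': feed completely random objects; many violate the precondition."""
    violations = 0
    for _ in range(trials):
        candidate = rng.choice(pool)
        if precondition(candidate):
            method_under_test(candidate)
        else:
            violations += 1  # wasted trial: object didn't fit the precondition
    return violations

def ps_strategy(pool, trials, rng):
    """'ps': only draw from objects known to satisfy the precondition."""
    valid_pool = [obj for obj in pool if precondition(obj)]
    for _ in range(trials):
        method_under_test(rng.choice(valid_pool))
    return 0  # no precondition violations, by construction

pool = [-3, -1, 0, 2, 4]
wasted = or_strategy(pool, 100, random.Random(1))  # typically > 0
clean = ps_strategy(pool, 100, random.Random(1))   # always 0
```

The combined recommendation from the talk would then amount to mixing both: pure random draws to exercise unexpected inputs, plus the precondition-filtered pool to spend more trials on valid calls.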

The next talk was “Fighting layout bugs” by Michael Tamm, optivo GmbH. This is the talk I’ve enjoyed the most so far. He gave a very clear proof-of-concept presentation, live demo included, on testing the layout of a web page. He suggested three ways of doing that:

1. Integrate HTML validation into your Continuous Integration environment (I preferred the suggestion of writing a separate test for that). Use the W3C validation service, which is downloadable for free, so you can maintain your own validation server.

2. Also make use of the W3C CSS validation server (the same way as with HTML). To achieve this, styles must be written in a *.css file (I think you generally should do this!). If you start them with * html, the service is also able to deal with CSS hacks.

3. Use the fighting-layout-bugs library, which is actually a PoC, but as I mentioned, Michael gave very impressive live demos.

So, what can be said about this library? It can be used for layout bugs that occur despite valid HTML and CSS. The principle it is based on is quite simple: JavaScript and image processing.

Example: Too long texts overlapping some edges.

1. Use a jQuery expression to make all text on the page black, then capture a screenshot.

2. Use a jQuery expression to make all text on the page white, then capture a screenshot.

3. Everything that differs between these screenshots is text.

4. Make the text disappear, find out where the edges on the page are, and take a screenshot.

5. Compare these screenshots; then you know whether there are any overlaps. If there are, you have a layout bug.

6. Mark the layout bug with a red circle.

Please keep in mind: All this is done automatically.
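The screenshot-diff trick behind the steps above can be sketched in a few lines. This is only a toy illustration of the principle, assuming a “screenshot” is a grid of grayscale values and the edge positions are already known; the real fighting-layout-bugs library works on actual browser screenshots:

```python
# Toy sketch: a "screenshot" is a 2D grid of grayscale values (0-255).

def text_pixels(black_shot, white_shot):
    """Pixels that change when all text is recolored must be text."""
    return {
        (x, y)
        for y, row in enumerate(black_shot)
        for x, value in enumerate(row)
        if value != white_shot[y][x]
    }

def find_overlaps(text_mask, edge_mask):
    """Any text pixel sitting on an edge pixel is a layout bug."""
    return sorted(text_mask & edge_mask)

# Tiny 3x3 "page": one pixel changes between the black-text and
# white-text screenshots, and that pixel lies on a known edge.
black = [[0, 0, 0], [0, 0, 0], [0, 0, 0]]
white = [[0, 255, 0], [0, 0, 0], [0, 0, 0]]
edges = {(1, 0), (2, 2)}

mask = text_pixels(black, white)   # {(1, 0)} -- the text pixel
bugs = find_overlaps(mask, edges)  # [(1, 0)] -- text overlapping an edge
```

In the real library the edge mask also comes from screenshot comparison (step 4 above), and the bugs found get circled in red on the saved image.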

Michael demonstrated the same thing with low-contrast bugs.

It was really quite impressive, so much that the audience gave spontaneous applause to his live demo.

If you are interested in learning more, have a look at:

http://code.google.com/p/fighting-layout-bugs/

and

http://groups.google.com/group/fighting-layout-bugs/

Then came a Googler: Nicolas Wettstein on “Lessons learned from testing GWT Applications”. First he showed that the massive use of web apps has introduced new challenges into programming and testing, because you have to re-invent all the tools you already have for desktop apps: IDEs, Debugger, Testing tools, etc.

Then he briefly explained what GWT (pronounced “GWIT”) is – in brief: AJAX apps, written in Java.

This has the advantage of being

– versatile

– strongly typed

– i18n ready

– able to handle browser incompatibilities

Nicolas mentioned 5 Pitfalls related to testing GWT apps:

1. Complex asynchronous callbacks

2. Direct DOM operations within the code (because Java cannot handle them)

3. Mixing Java and JavaScript (same reason)

4. Static / global access (oh yes, the tester’s all-time favourite!)

5. No separation of concerns (mixing up domain logic with services, views, etc.). He suggested an MVP solution instead of the widespread MVC pattern. MVP means Model-View-Presenter. He also suggested separating the services from the presenter logic in order to make things very clear. In this case, the view communicates with the presenter level (not directly with a model, as in MVC). I hope I have written it down correctly – otherwise please give me a hint!
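To illustrate why MVP helps testability, here is a small hypothetical sketch (all class and method names are my own invention, and it’s in Python rather than GWT’s Java, for brevity): the presenter talks only to a view interface, so a unit test can substitute a fake view and never touch the DOM.

```python
# Hypothetical MVP sketch -- presenter depends only on a view interface.

class LoginView:
    """Interface the real (DOM-bound) view would implement."""
    def show_error(self, message): ...
    def show_welcome(self, name): ...

class LoginPresenter:
    def __init__(self, view, user_service):
        self.view = view
        self.user_service = user_service

    def on_login(self, name, password):
        # All decision logic lives here, not in the view.
        if self.user_service.authenticate(name, password):
            self.view.show_welcome(name)
        else:
            self.view.show_error("Login failed")

# Test doubles -- no browser, no DOM, no GWT needed.
class FakeView(LoginView):
    def __init__(self):
        self.messages = []
    def show_error(self, message):
        self.messages.append(("error", message))
    def show_welcome(self, name):
        self.messages.append(("welcome", name))

class FakeUserService:
    def authenticate(self, name, password):
        return password == "secret"

view = FakeView()
LoginPresenter(view, FakeUserService()).on_login("ada", "secret")
```

The same presenter can then be wired to a real DOM-rendering view in production, while unit tests only ever assert against the fake.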

Furthermore, he mentioned a remarkable sentence:

“Software testing is not about writing tests, it’s about writing software that can be tested.”

Another talk (yes, quite a lot of material for just one day! :-)) was “Automatic workaround for web applications” by Alessandra Gorla and Mauro Pezzè (University of Lugano, Italy).

Here’s what they said:

– Assume you make use of an external application on your own web page.

– When the external app has a bug, the classical approach is: find the bug, report it, wait for the bug to be fixed, wait for a new version to be released.

– That costs a lot of time, and in the meantime your website and your customers have to live with the bug!

– The new approach: Runtime fixing.

– Automatically finding a workaround by using e.g. intrinsic code redundancy or equivalent sequences.

– They provided a live demo, based on user feedback.

I liked the idea of using workarounds, even though it’s very academic at this point. But I’m wondering why they combined it with user feedback on whether a workaround is useful – instead of having an automated frontend test that could check whether a workaround is valid. In my opinion this would decrease the rate of false positives and would not mix two different problems.
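The core idea of runtime workarounds can be sketched like this. This is a hypothetical illustration only – the names and the fallback mechanism are my own drastic simplification of the equivalent-sequences approach, not the researchers’ system:

```python
# Sketch: if a call into an external component fails at runtime, try
# known-equivalent call sequences instead of waiting for an upstream fix.

def with_workarounds(primary, equivalents):
    """Wrap the primary call; on failure, fall back to equivalent calls."""
    def run(*args):
        try:
            return primary(*args)
        except Exception:
            for alternative in equivalents:
                try:
                    return alternative(*args)
                except Exception:
                    continue  # this workaround failed too, try the next
            raise  # no workaround helped; surface the original failure
    return run

# Toy external API: the direct call is buggy, but an equivalent
# sequence produces the same result.
def buggy_insert(items, value):
    raise RuntimeError("upstream bug")

def equivalent_insert(items, value):
    items.append(value)
    return items

insert = with_workarounds(buggy_insert, [equivalent_insert])
```

A frontend test (rather than user feedback) could then automatically verify that the fallback’s result matches the expected behavior before a workaround is accepted, which is the variation I argued for above.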

The regular talks closed with Mark Micallef’s “Achieving Web Test Automation with a Mixed-Skills Team”. That was quite interesting because this was a talk very close to daily business. In fact, it was a case study on BBC’s web site creation and testing.

What Mark told us is also my own experience: Test Analysts have a completely different skill set and motivation than Test Engineers – and this is a good thing!

The more technical it is, the more a Test Engineer’s motivation will increase – and vice versa.

Beyond this, he described what actions he performed with his mixed-skills team:

1. Define Success (and failure)

2. Utilize Abstraction

3. Unify technologies (worth remarking that they have many technologies in their production environment: .NET, PHP, Perl, Java, Flash, … – but they decided to use just one common language for all automated testing activities: Ruby / RoR, with Cucumber as a behaviour-driven testing tool. That allowed them to write tests in good plain English – something Test Analysts and Product Owners can make use of, too).

4. Think about process – on the basis of the four testing quadrants (I think it was Lisa Crispin’s model).

I chatted with Mark during lunch and we had an interesting conversation about ATDD / BDD, DSLs and all that stuff. Thanks a lot, Mark, it was a pleasure :-)

Last but not least, in the evening there was an opportunity to give some lightning talks, just 5 minutes each. There were eight slots, and I took this as a chance to cure myself of my most favourite mindf***: well, to be honest, it’s my fear of talking “officially” in English. Yes, it’s ridiculous, but it’s true: when I chat with other people in English, I really enjoy it and I don’t care if I make mistakes (nor when I write blog articles). But in situations that tend to be formal, I’m convinced that I am not able to speak English – so this was a good challenge for me… ;-D And though I was nervous as hell and everybody in the room could hardly ignore it, I eventually managed to talk about “5 ways to improve your developers’ sense of quality” (in Agile environments) in front of the audience. I hope it was not recorded. Now I’m convinced that, after a few more talks, I’ll love giving talks in English nearly as much as I already do in German. I can forget my fear – I’m over it, looking for a new one.

Slides will be published here – tomorrow, hopefully.

Well, these were Christiane’s adventures from GTAC 2009, day 1 – please stay tuned, more to follow!

GTAC 2008 – Google Test Automation Conference in Seattle, USA

I’m currently at GTAC in Seattle, Google’s annually hosted conference on quality assurance and software testing.

Apart from all the fun side events like the Google office tour (you could also call it a developer-playground-nursery walkthrough ;-) – but of course still very impressive), testers and QA people from all over the world have the opportunity here to exchange ideas, expand their know-how, and gather new impulses for their work.
Although the focus was very much on web application testing, one realization was comforting yet not very helpful: that on the two topics of “Agile” and “acceptance tests” in particular, there was little new to be found. Especially regarding the fragility of UI tests, everyone I spoke to seems to be battling the same adversities – clever ideas are in short supply.

In one talk, the Google folks even argued for reducing acceptance tests to a sample set and instead writing cross-layer, medium-sized tests that ensure the functionality – an approach that didn’t spontaneously win me over, but I will think about it (and then announce my conclusions here…).

As for agility in quality assurance, I was astonished that much of what I consider a given, such as continuous integration or collective code ownership, has by no means arrived at the majority of companies; many are only now taking their first steps or have switched recently (a tenor that could be heard in many table conversations). So at studiVZ we can perhaps be more satisfied with our progress than I have been so far. Anyway, there is still a lot to do, and I also took some ideas away from the talks.