Tag Archives: Testing

Published new stuff

Since I’m not only a terrible blogger but also a terrible self-marketer, I tend to forget to mention the things I’ve published so far.

But today I want to introduce at least some of them:

To start with the most recent one, there is an article on the Pomodoro Technique by my colleague at crealytics, Martin Mauch, and me. It’s been published on Projektmagazin: http://www.projektmagazin.de/artikel/mehr-schaffen-in-kuerzerer-zeit-die-pomodoro-technik (sorry, English folks: it’s in German only).

The second is a book published last October. I had the honor of contributing a chapter to Henning Wolf’s “Agile Projekte mit Scrum, XP und Kanban im Unternehmen durchführen”, which focuses on case studies from hands-on folks. Of course, my chapter was about Agile in startups. (And: sorry again, this one is German only, too.)

Last but not least, I want to mention a book that has been on the market for quite a while but, fortunately, seems to be becoming a classic: the PHP QA book, aka “Real-World Solutions for Developing High-Quality PHP Frameworks and Applications” (yes, finally, in English! :-)) and “Softwarequalität in PHP-Projekten” (German edition). Also available on Kindle.
It’s partly theoretical knowledge, partly case studies. Mine was on QA with Selenium at studiVZ, together with Max Horvath, so maybe I mention it for sentimental reasons (good old days!).
But the book itself is an invaluable compendium for any kind of testing in the PHP world – have a look at it.

Enjoy reading!

GTAC 2009 – Lightning Talks

As I mentioned before, there were eight lightning talks, held on Wednesday. Marie Patriarche from Google has published the list of topics in Wave – where available, I’ve added links to slides or to projects:

When I wanted to publish my own slides yesterday, I had to face the fact that Philipp’s MacBook, which I had worked on, obviously hadn’t liked my talk and had erased everything except the master slide.

Don’t cry, folks, my slides weren’t on the World Heritage List. I will write them down again and publish them here, with some additional comments on the slides.

GTAC 2009 – Day Two

Well, GTAC 2009 is over. I wish to thank Google very much and special thanks to the people who organized this fabulous conference!

Once again, I’ll give you a short overview of the second day:

The opening talk was held by Dr. Alberto Di Meglio from CERN. He told us a lot about the ETICS project, a large-scale web-based tool for configuring, integrating, building and testing software on the grid. It is open source software and can be found on the CERN web server: http://etics2.cern.ch

It looked quite impressive at first sight – especially the fact that you can run your software on several grids, from Amazon to gLite or Condor (to name just a few). Furthermore, it is possible to choose from several operating systems and hardware settings. Another benefit is the automated packaging system, which takes a lot of work off the developer’s shoulders. The software is plugin-based, so you can integrate your own plugins very easily.

The next talk was on Crowdsource Testing: “Testing Applications on Mobile Devices” by Doron Reuveni (uTest). I think that Crowdsource Testing is not just another buzzword – in some cases it’s a valuable alternative to established ways of testing.

Doron took Mobile Testing as an example for a scenario with many different devices, operating systems or versions, carriers, data connections and settings. In most cases, companies cannot handle this with their own QA. Here Crowdsource Testing is a good option. You give your Application under Test to a crowd of testers (of course they are organized and led by uTest!) and let them do the work for you.

So, guys, immediately stop thinking about firing your in-house QA people!

As I mentioned, I think it’s a good alternative in *some* cases. The lively discussion afterwards pointed out that there are several reasons why CrowdTesting may sometimes be useful and sometimes not. If you are interested in this topic, send me an email and I’ll explain it in detail.

Jeremie Lenfant-Engelmann from Google gave the next talk, on “JsTestDriver”. His main points:

– Existing JSUnit Frameworks are clumsy

– A good non-clumsy alternative is JsTestDriver

– Principles of JsTestDriver:

=> easy to setup and to use

=> fast

=> good IDE integration

=> Debugger support

=> Code coverage

=> Federated test execution across all browsers and platforms

=> HTML loading support

=> Focus on continuous builds, therefore it is CLI-based (besides the IDE integration for women and children *SCNR*)

Jeremie showed a live demo, which proved that he hadn’t promised too much: it was really fast and seemed to be very easy to use, because it’s analogous to other xUnit tools. He also showed some work with the Eclipse plugin and a CI integration with Hudson.

Jeremie Lenfant-Engelmann, JsTestDriver, @GTAC09

Finally he mentioned that the focus of JsTestDriver lies very clearly on unit testing, not on end-to-end testing (very important to keep in mind, because I also tended to understand it as an end-to-end testing solution).

One of the talks I was really looking forward to was “Selenium: To 2.0 and beyond!” by Jason Huggins and Simon Stewart. Simon is a Googler, and Jason, a former ThoughtWorker and Googler, is now working at his own company, Sauce Labs. While Jason is probably well known to anyone who has been using Selenium, Simon is the man behind WebDriver. The speakers pointed out that both systems are valuable, but each has different pros and cons.

While Selenium is a toolset with a wide range, Simon describes WebDriver as a “developer-focused tool for automated web apps testing”. They showed us a live demo with both tools, demonstrating the difficulty of key up/down/press operations with Selenium and how smoothly they work with WebDriver. By the way: I belong to the people who have been driven nearly mad by this key problem…

The plan to merge WebDriver into Selenium 2.0 wasn’t something very new for the main part of the audience, I guess. But they explained why and what the advantages would be.

Strengths and weaknesses of Selenium are:

++ Easy to extend

++ Easy support of new browsers

– – Confusing API

– – Constrained by JavaScript sandbox

Strengths and weaknesses of WebDriver are:

++ Lovely API

++ No constraints by JavaScript sandbox

– – Hard to extend

– – Bad support for new browsers

Jason and Simon showed one very attractive feature for those who already have many WebDriver or Selenium tests: by instantiating a Selenium class in a WebDriver test, you can use Selenium’s features very easily, and vice versa. You don’t need to migrate old tests. So you (hopefully) have backwards compatibility for Selenium 2.0 / WebDriver.

For version 2.0, they will start co-hosting the code, merge developer and user mailing lists, and merge issue tracking.

An initial release will contain an “Uber-jar” file, which includes both APIs.

Online resources: selenium.googlecode.com

They will provide native support for Java, Python, Ruby and C#. And there will be contributions for PHP and Perl.

Jason Huggins and Simon Stewart, Selenium / WebDriver @GTAC09

Beyond 2.0:

A clearer design, divided into three parts:

1. Query

2. Execution

3. Interaction

For some of them, JavaScript will be useful; for some of them, not. We will see how much Selenium code will remain in each section…

There were two more outlooks: one was mobile device support, the other a WebDriver fork developed by Opera.

What I missed was a release date for Selenium 2.0 :-)

The statement was something like “we are working on it”. I hope it won’t take too long, because after this very entertaining talk I am very curious about version 2.0!

Joshua Williams and Ross Smith talked on “Score One for Quality”. It was on introducing games into working processes, in this case, into Quality Assurance, in order to fight frustration and demotivation. They shared their experiences at Microsoft with “42Projects”. It was focused on bringing back trust, innovation and play.

After a short game called “best bug story” which they played with us, they explained why games work: because we belong to a generation of gamers. And because 81% of (IT?) business people are 34 years old or younger [oh dear, just 3 years to go…!].

Josh & Ross pointed out the pitfalls of using games. There are situations where you shouldn’t play games, e.g. you shouldn’t play “who is the best for Steve’s job?” – if Steve turns out not to be the best at this game, he – or you – might have a problem…

Interesting fact: Microsoft used games to review the Windows 7 i18n/l10n project.

Some guidelines for games:

– Set clear objectives

– Use rewards carefully

– Keep duration short

– Implement “Broad Appeal” mechanisms

– Focus on “Organizational Citizenship Behaviour” [what an expression…]

– Support

Just my 5 ct. to add: playing games was also a topic at the Agile2009 conference. And even though I actually cannot find it, there must be a book on agile games. In agile environments, playing games is a very useful tool. But I don’t completely agree with the speakers on how to use games in a production process. That’s another story, though, which may be written in a few weeks.

The last regular talk of the conference was on “Automated Performance Test Data Collection and Reporting” by David Burns and David Henderson (smartFOCUS DIGITAL). They pointed out how important a site’s performance can be for visitors, page impressions or consumption rates – e.g. a 400 ms delay may result in a 5%–9% loss of consumption. They demonstrated their own solution for performance measurement and tracking, based on YSlow. YSlow is a free and very useful tool developed by Yahoo: a Firefox plugin which interoperates with Firebug and lets you do manual ad-hoc measurements. The speakers mentioned an interesting YSlow feature, Beacon, which enables YSlow to send results as HTTP GET requests to a URL.

As a special service for you, I looked for the documentation of this feature – and this is what I found: http://developer.yahoo.com/yslow/help/#yslow_beacon

David & David used this to integrate YSlow measurements into their Continuous Integration environment and wrote a detailed web-based reporting and tracking tool around it.
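Just to illustrate the idea (the parameter names below are my own invention, not YSlow’s actual beacon format – check the documentation linked above for the real one): such a beacon is simply an HTTP GET request whose query string carries the measurement results, so a CI server only needs to parse the query parameters and store them per build.

```python
from urllib.parse import urlparse, parse_qs

def parse_yslow_beacon(request_url):
    """Extract performance metrics from a beacon-style HTTP GET request.

    The parameter names (overall grade, page weight, load time) are
    illustrative only -- the real beacon format is in the YSlow docs.
    """
    query = parse_qs(urlparse(request_url).query)
    return {
        "grade": query.get("grade", ["?"])[0],
        "weight_kb": int(query.get("w", ["0"])[0]),
        "load_time_ms": int(query.get("t", ["0"])[0]),
    }

# A CI job could store these values per build to track trends over time.
metrics = parse_yslow_beacon(
    "http://ci.example.com/beacon?grade=B&w=512&t=1400")
print(metrics)  # {'grade': 'B', 'weight_kb': 512, 'load_time_ms': 1400}
```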

Google Wave was also a big topic during these days – and we had the opportunity to test Wave under real-life conditions for two days. All the communication concerning GTAC can be found in the wave “group:gtacgroup@googlegroups.com” (you have to be a beta tester). They didn’t have to convince me – I had already realized what impact Google Wave will have. But there were many attendees who changed their minds while using Wave and seeing it in action as a collaboration tool. I think it’s even more than a collaboration tool; in my eyes it’s going to revolutionize all kinds of virtual communication.

But back to its usage as a collaboration tool, I want to quote Jason Huggins. He tweeted:

@jhuggins “awesome use of Google Wave @ gtac- A new wave for each talk at the conf. Now the community can crowd-source their note-taking.”

GTAC ended with a discussion in a very funny and relaxed atmosphere. Everybody I talked with was very happy, having had a very good time at Google with so many high-level quality talks.

So, I hope the rumors turn out to be right and the next GTAC will be located in Bangalore or Hyderabad, India.

GTAC 2009 – first day

So, the first day of GTAC is over – I’m back in my hotel room, now doing some extra work especially for you, after a twelve-hour workday. I’ll give you a short overview of the talks we had.

But let’s start with the most important things: office standard and food (dedicated to Alex, just in case he reads this ;-)). Even though I thought the Google office in Seattle could not be topped, I have to admit: I was wrong! This office in Zurich is so unbelievable, guys. They have South Pole base camp capsules as telephone cabins, a library furnished in old English style, an indoor jungle (sic!), a massage room, a fitness center, a gaming zone, … and amazing food (mostly organic). Wow, congrats to Google, you managed to impress me (against my will).

Ok, back to the conference: the first and main difference compared to the event in Seattle is that this is a very small, intimate convention: they selected just 100 people from those who wanted to attend – 40 Googlers and 60 non-Googlers. All the more I appreciate being here. And, proving me a liar (see my last posting): during his opening remarks, Jürgen Allgayer, Director of Engineering Productivity EMEA, explained in detail the criteria of the attendee selection process.

The keynote was held by Prof. Niklaus Wirth, creator of several programming languages (Pascal among them). Here’s some of what he said:

– Theoretically, tests should tend to zero, because testing is always about finding bugs, not about preventing them

– Programmers should have a very, very deep knowledge of software design and systems

– Programs shouldn’t be dependent on underlying structures

– Universities actually don’t provide what a programmer really needs (mostly because professors stopped coding years ago)

My personal opinion concerning the last point: companies mostly don’t either! So if neither of them does – what could be the solution?

The first regular talk was called “Precondition Satisfaction by Smart Object Selection in Random Testing”, by Yi Wei and Serge Gebhardt from ETH Zurich. It was about random testing, evaluated academically. They compared an “or” strategy with a “ps” strategy. “or” uses objects randomly to feed the methods under test. The problem is that this produces many failures, because many objects don’t match the test case’s preconditions. In the “ps” strategy, there is a check whether a created object matches the preconditions; if it does, the predicate validation pool is updated. They closed with a recommendation to use both strategies combined.
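To make the idea more concrete, here is a minimal Python sketch of the “ps” strategy as I understood it – all names and details are mine, not the authors’:

```python
import math
import random

def random_test(method, precondition, candidate_pool, attempts=100):
    """Random testing with precondition satisfaction ("ps" strategy).

    Instead of feeding arbitrary objects to the method under test (the
    "or" strategy), we check the precondition first and keep a pool of
    objects already known to satisfy it.
    """
    satisfying = []          # pool of objects that satisfied the precondition
    failures = 0
    for _ in range(attempts):
        # Prefer an object we already know is valid, but keep exploring.
        if satisfying and random.random() < 0.5:
            obj = random.choice(satisfying)
        else:
            obj = random.choice(candidate_pool)
        if not precondition(obj):
            continue             # invalid input: skip, no spurious failure
        satisfying.append(obj)   # update the validated pool
        try:
            method(obj)
        except Exception:
            failures += 1        # a *real* failure of the method under test
    return failures

# Example: a sqrt-like method whose precondition is a non-negative argument.
fails = random_test(math.sqrt, lambda x: x >= 0, list(range(-10, 10)))
print(fails)  # 0 -- invalid inputs were filtered out, not counted as failures
```

Without the precondition check, every negative input would have produced a spurious failure; that is exactly the noise the “ps” strategy removes.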

The next talk was on “Fighting layout bugs” by Michael Tamm, optivo GmbH. This is the talk I’ve enjoyed the most so far. He demonstrated very clearly a proof of concept, live demo included, on testing the layout of a web page. He suggested three ways of doing that:

1. Integrate HTML validation into your Continuous Integration environment (I preferred the suggestion of writing a separate test for that). Use the W3C validation service, which is downloadable for free, so you can maintain your own validation server.

2. Also make use of the W3C CSS validation server (in the same way as HTML). To achieve this, styles must be written in a *.css file (I think you generally should do this anyway!). If you start them with * html, the service is also able to deal with CSS hacks.

3. Use the fighting-layout-bugs library, which is actually a PoC, but as I mentioned, Michael gave very impressive live demos.

So, what can be said about this library? It can be used for layout bugs which occur despite valid HTML and CSS. The principle it is based on is quite simple: JavaScript and image processing.

Example: too-long texts overlapping some edges.

1. Use a jQuery expression to make all text on the page black, capture a screenshot.

2. Use a jQuery expression to make all text on the page white, capture a screenshot.

3. Everything that differs between these screenshots is text.

4. Make the text disappear, find out where on the page the edges are, take a screenshot.

5. Compare these screenshots; then you know if there are any overlaps. If there are, you have a layout bug.

6. Mark the layout bug with a red circle.

Please keep in mind: all this is done automatically.
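The comparison steps above can be sketched in a few lines. This is my own toy illustration of steps 3 and 5 (using plain nested lists of pixel values instead of real screenshots), not Michael’s library:

```python
def text_mask(shot_black, shot_white):
    """Step 3: pixels that differ between the black-text and white-text
    screenshots must be text. Screenshots are 2D grids of pixel values."""
    return [
        [a != b for a, b in zip(row_a, row_b)]
        for row_a, row_b in zip(shot_black, shot_white)
    ]

def find_overlaps(text, edges):
    """Step 5: any position that is both text and an edge is a layout bug."""
    return [
        (x, y)
        for y, (t_row, e_row) in enumerate(zip(text, edges))
        for x, (t, e) in enumerate(zip(t_row, e_row))
        if t and e
    ]

# Toy 1x4 "screenshots": text pixels at x=1 and x=2; an edge at x=2 -> bug.
shot_black = [[255, 0,   0,   255]]          # text rendered black
shot_white = [[255, 255, 255, 255]]          # text rendered white
edges      = [[False, False, True, False]]   # edges found with text hidden

bugs = find_overlaps(text_mask(shot_black, shot_white), edges)
print(bugs)  # [(2, 0)]
```

A real implementation compares actual browser screenshots pixel by pixel, but the principle is exactly this diff-and-intersect.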

Michael demonstrated the same thing with low-contrast bugs.

It was really quite impressive – so much so that the audience gave spontaneous applause to his live demo.

If you are interested in learning more, have a look at:




Then came a Googler: Nicolas Wettstein on “Lessons learned from testing GWT Applications”. First he showed that the massive use of web apps has introduced new challenges into programming and testing, because you have to re-invent all the tools you already have for desktop apps: IDEs, debuggers, testing tools, etc.

Then he briefly explained what GWT (pronounced “GWIT”) is – in brief: AJAX apps, written in Java.

This has the advantage of being

– versatile

– strongly typed

– i18n ready

– able to handle browser incompatibilities

Nicolas mentioned 5 Pitfalls related to testing GWT apps:

1. Complex asynchronous callbacks

2. Direct DOM operations within the code (because Java cannot handle them)

3. Mixing Java and JavaScript (same reason)

4. Static / global access (oh, yes, the tester’s all-time favourite!)

5. No separation of concerns (mixing up domain logic with services, views, etc.). He suggested an MVP solution instead of the widespread MVC pattern. MVP means Model-View-Presenter. He also suggested separating the services from the presenter logic in order to make things very clear. In this case, the view communicates with the presenter level (not directly with a model, as in MVC). I hope I have written it down correctly – otherwise please give me a hint!
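A tiny sketch (my own, not from the talk, and in Python rather than Java for brevity) may illustrate why MVP helps testability: the presenter depends only on a view interface, so a fake view is enough to unit-test the logic – no browser, no DOM:

```python
class LoginPresenter:
    """The presenter holds the logic; the view is a thin, swappable shell.
    In GWT, this separation lets you unit-test without a browser or DOM."""

    def __init__(self, view, auth_service):
        self.view = view
        self.auth = auth_service

    def on_login_clicked(self):
        user = self.view.get_username()
        if not user:
            self.view.show_error("Username required")
        elif self.auth.login(user):
            self.view.show_welcome(user)
        else:
            self.view.show_error("Login failed")

# A fake view is enough to test the presenter -- no widgets involved.
class FakeView:
    def __init__(self, username):
        self.username, self.messages = username, []
    def get_username(self):
        return self.username
    def show_error(self, msg):
        self.messages.append(("error", msg))
    def show_welcome(self, user):
        self.messages.append(("welcome", user))

class FakeAuth:
    def login(self, user):
        return user == "alice"

view = FakeView("alice")
LoginPresenter(view, FakeAuth()).on_login_clicked()
print(view.messages)  # [('welcome', 'alice')]
```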

Furthermore, he mentioned a remarkable sentence:

“Software testing is not about writing tests, it’s about writing software that can be tested.”

Another talk (yes, quite a lot of stuff for just one day! :-)) was “Automatic workaround for web applications” by Alessandra Gorla and Mauro Pezzè (University of Lugano, Italy).

Here’s what they said:

– Assume you make use of an external application on your own web page.

– When the external app has a bug, the classical approach is: find the bug, report it, wait for the bug to be fixed, wait for a new version to be released.

– That costs a lot of time, and in the meantime your web site and your customers have to live with the bug!

– The new approach: Runtime fixing.

– Automatically finding a workaround by using e.g. intrinsic code redundancy or equivalent sequences.

– They provided a live demo, based on user feedback.

I liked the idea of using workarounds, even though it’s very academic at this time. But I’m wondering why they combined it with user feedback on whether a workaround is useful – instead of having an automated frontend test which could check whether a workaround is valid. In my opinion this would decrease the rate of false positives and would not mix two different problems.
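The runtime-fixing idea can be sketched like this (a toy of my own, not the authors’ actual system): try the primary operation, check its result, and fall back to a known-equivalent sequence if the check fails:

```python
def run_with_workarounds(operation, alternatives, is_valid):
    """Try the primary operation; if its result fails the validity check,
    fall back to known-equivalent sequences (intrinsic code redundancy).
    All names here are illustrative, not a real API."""
    result = operation()
    if is_valid(result):
        return result, "primary"
    for name, alt in alternatives:
        result = alt()
        if is_valid(result):
            return result, name   # a workaround masked the bug at runtime
    raise RuntimeError("no valid workaround found")

# Toy example: a buggy bulk operation vs. an equivalent one-by-one sequence.
items = [1, 2, 3]
buggy_bulk_add = lambda: None                # bug: returns nothing
add_one_by_one = lambda: [x for x in items]  # equivalent sequence of steps

result, used = run_with_workarounds(
    buggy_bulk_add,
    [("one_by_one", add_one_by_one)],
    is_valid=lambda r: r == items,
)
print(result, used)  # [1, 2, 3] one_by_one
```

Note that `is_valid` here plays exactly the role I argued for above: an automated check, rather than user feedback, decides whether the workaround worked.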

The regular talks closed with Mark Micallef’s “Achieving Web Test Automation with a Mixed-Skills Team”. That was quite interesting because this was a talk very close to daily business. In fact, it was a case study on BBC’s web site creation and testing.

What Mark told us is also my own experience: Test Analysts have a completely different skill set and motivation than Test Engineers – and this is a good thing!

The more technical it is, the more a Test Engineer’s motivation will increase – and vice versa.

Beyond this, he described what actions he performed with his mixed-skills team:

1. Define Success (and failure)

2. Utilize Abstraction

3. Unify technologies (worth remarking that they have many technologies in their production environment: .Net, PHP, Perl, Java, Flash, … – but they decided to use just one common language for all automated testing activity: Ruby / RoR, with Cucumber as a behaviour-driven testing tool. That allowed them to write tests in good plain English – something Test Analysts and Product Owners can make use of, too).

4. Think about process – based on the four testing quadrants (I think it was Lisa Crispin’s model).
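For those who haven’t seen Cucumber-style tools: the core trick is mapping plain-English sentences to code via patterns. Here is a deliberately tiny, self-contained Python imitation of that mechanism (my own sketch, not Cucumber itself):

```python
import re

STEPS = []

def step(pattern):
    """Register a step implementation for a plain-English sentence pattern."""
    def register(fn):
        STEPS.append((re.compile(pattern), fn))
        return fn
    return register

def run(sentence, context):
    """Find the first matching step and call it with the captured groups."""
    for pattern, fn in STEPS:
        match = pattern.match(sentence)
        if match:
            return fn(context, *match.groups())
    raise AssertionError("No step matches: " + sentence)

# Step definitions a Test Engineer writes once...
@step(r'I put "(\w+)" into the basket')
def put(ctx, item):
    ctx.setdefault("basket", []).append(item)

@step(r'the basket contains (\d+) items?')
def check(ctx, n):
    assert len(ctx.get("basket", [])) == int(n)

# ...so Test Analysts can write scenarios in plain English:
ctx = {}
run('I put "apple" into the basket', ctx)
run('I put "pear" into the basket', ctx)
run('the basket contains 2 items', ctx)
print("scenario passed")
```

This is exactly what makes the mixed-skills setup work: the technical folks maintain the step definitions, the non-technical folks write the sentences.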

I chatted with Mark during lunch and we had an interesting conversation about ATDD / BDD, DSLs and all this stuff. Thanks a lot, Mark, it was a pleasure :-)

Last but not least, in the evening there was an opportunity to give some lightning talks, just 5 minutes each. There were eight slots, and I took this as a chance to cure myself of my most favourite mindf***: well, to be honest, it’s my fear of talking “officially” in English. Yes, it’s ridiculous, but it’s true: when I chat with other people in English, I really enjoy it and I don’t care if I make mistakes (nor when I write blog articles). But in situations which tend to be formal, I’m convinced that I am not able to speak English – so this was a good challenge for me… ;-D And though I was nervous as hell and everybody in the room could hardly ignore it, I eventually managed to talk about “5 ways to improve your developers’ sense of quality” (in agile environments) in front of the audience. I hope it was not recorded. Now I’m convinced that, after a few more talks, I’ll love giving talks in English nearly as much as I already do in German. I can forget my fear – I’m over it, looking for a new one.

Slides will be published here – tomorrow, hopefully.

Well, these were Christiane’s adventures from GTAC 2009, day 1 – please stay tuned, more to follow!

GTAC 2009 in Zürich (Switzerland)

Right now I’m hanging around in my hotel room in Zürich, waiting for GTAC to start. GTAC is the annual Google Test Automation Conference. It’s the fourth time Google has hosted this conference, and the second time I’m attending (the first time was last year in Seattle, USA).

The special thing about GTAC is a very typical Google thing: you just cannot pay to attend the conference – you have to ask them if they’d be so kind as to invite you (fortunately, in my case they did…). And I guess the algorithm for inviting people or not is their second most carefully guarded secret after Google PageRank. :-/

Well, however: I’m very much looking forward to joining GTAC and I think I’ll benefit a lot from attending – especially since this year’s conference theme is “Testing for the Web”!

So, stay tuned, more to follow tomorrow!



4th Annual Google Test Automation Conference
October 21st, 22nd, 2009
Zurich, CH

Domain Specific Languages (DSL) – What, Why, How

There’s an interesting introduction to DSLs (which are a very cool and powerful technique for automated testing!) on InfoQ, presented by Ola Bini. Have a look at this: http://www.infoq.com/presentations/DSL-What-Why-How-Ola-Bini

Agile Testing Days in Berlin

Even if I probably won’t attend the Agile Testing Days next week here in Berlin, I can give them a warm recommendation after having studied the program and the speakers list.

There are so many interesting people from the Agile world like Lisa Crispin, Elisabeth Hendrickson, Tom Gilb, Stuart Reid, Isabel Evans, Tom and Mary Poppendieck, in order to name just a few.
I saw some of them at Agile2009 in Chicago and I’ve read Lisa Crispin’s amazing book on Agile Testing.

So, if you have the opportunity to come to wonderful Berlin and to attend the convention, please do so!
And don’t forget to take some days off in order to spend some time seeing the city.
(If you are looking for someone to show you the best places for Asian food in Berlin-Mitte, give me a call or send me an email). :-)

Agile Testing Days
October 12 – 14
Berlin, Germany


Unit Testing versus Unit Checking

I attended a session about replacing end-to-end testing at Agile2009 in Chicago a few days ago. There were some lightning talks where attendees could share their opinions and insights about testing.
One of these I have kept in mind. Here it is:
When we talk about testing, we mean some very different things that go by the same name, which leads to confusion. There is unit testing, end-to-end testing, automated testing, manual testing, exploratory testing, click testing, etc. etc. etc. A few of them have something in common, but some of them don’t.
It’s important to realize that you need completely different hard skills and mindsets depending on whether you are talking about automated unit testing or manual exploratory testing.
In most cases, for unit tests you’ll need a software developer familiar with testing – for exploratory testing, a traditional Quality Assurance tester should be the person of your choice. These people have different backgrounds, their focus in testing is not the same, nor are their experiences (and they may come up with different results from their tests!).
Despite this fact, in both cases we are talking about “testing”. We use the term “testing” generally for activities which have very different goals and focuses, need different skills – and require different sorts of people to do them. This could be one of the reasons why there is so much controversy about “the future of testing”.
So, my lightning talker (I am sorry, I don’t know his name – if one of you does, please drop me a line!) called for a renaming. He suggested that we think about the differences mentioned above and rename some testing activities simply as “checking”, and, of course, call the other ones “testing”.
He called it “checking” when only a “true” or “false” is possible and the activity can be repeated as often as you want. His main focus was on automated testing. Later I wondered whether manual checklists are not also part of the “checking-not-testing” section.

From my point of view it might be useful to differentiate between checking and testing, especially in order to discuss how “Agile Testing” should work, or whether a QA department in agile environments is still necessary (or what its prospective main tasks might be).

Because I’m still thinking about it and working on the distinction, I would appreciate your feedback very much. Please leave a comment or send me an email with your thoughts and insights.


GTAC 2008 – Google Test Automation Conference in Seattle, USA

I’m currently at GTAC in Seattle, Google’s annually hosted conference all about quality assurance and software testing.

Apart from all the fun side attractions like the Google office tour (you could also call it a tour of a developers’ playground nursery ;-) – but of course very impressive nonetheless), testers and QA people from all over the world get the opportunity here to exchange ideas, extend their know-how and gather new impulses for their work.
Although the focus was very much on web application testing, one thing was comforting yet not very helpful: the realization that, particularly on the two topics “Agile” and “acceptance tests”, there was little new to be found. Especially regarding the fragility of UI tests, everyone I talked to seems to be struggling with the same adversities – clever ideas are in short supply.

In one talk, the Google folks even advocated reducing acceptance tests to a sample set and instead writing cross-layer, medium-sized tests which ensure functionality – an approach that didn’t spontaneously win me over, but I will think about it (and then announce my conclusion here…).

As for agility in quality assurance, I was astonished that much of what I consider self-evident, like continuous integration or collective code ownership, has by no means arrived at the majority of companies; many are only now taking their first steps or have switched recently (a tenor that could be heard in many table conversations). So at studiVZ we can perhaps be more satisfied with our progress than I have been so far. Anyway, there is still much to do, and I also took some suggestions away from the talks.