Tag Archives: test

A bug is just an unwritten test that failed

Posted on by

In the first week of March I attended two Spotify unconferences about Continuous Delivery and Quality (which I also had the pleasure to facilitate). I was amazed at how many of us there were (people had flown in from a lot of other places), the energy in the room, the quality of the discussions, and the massive number of practical initiatives that were suggested and started.

One recurring theme was the importance of a stop-the-line culture and what that actually means. I have to admit I was quite active in those discussions, and I also held a short lightning talk about the broken windows syndrome. My simple formula when it comes to bugs is this:

  • You write tests to create a product without defects
  • When a test fails you fix the underlying problem
  • A bug found outside testing is just an unwritten test that would have failed
  • Failing tests are always fixed
  • Therefore: a zero bug policy is the only thing that works in the long run
  • Otherwise you will suffer the broken windows syndrome
  • Just do it
  • Now

Here are my slides:

Stop the line song

Posted on by

I ended my talk at the SoapUI user gathering MeetUI by singing the stop-the-line song. Now it has ended up on YouTube.

Here’s the text:

I keep a close watch on these tests of mine
I keep my Jenkins open all the time
I see a defect coming down the line
Because you’re mine, I stop the line

Stop the line as eBook

Posted on by

Here’s the eBook collecting my articles on building the quality in by stopping the line: Stop The Line – Why it’s crucial to include a human touch to your automated processes

Where is that Red ‘Stop’ Button in Your Development Process?

Posted on by

If you don’t dare to stop the line, continuous integration might be waste. Here is the second part of my three-part series on building the quality in on the SmartBear blog.

In the first post of this series, I wrote about Toyoda Sakichi, the founder of Toyota Industries, who in the 1920s invented a loom that would automatically stop when a thread broke. He thereby also invented the concept of “stop-the-line” to build quality in.

Incremental compilation with visual feedback is a small step toward the automatic stopping of the Sakichi loom. Beyond that, we still have these longish feedback cycles, be it manually running unit tests or waiting on the automatic build or system tests run by our continuous integration (CI) system.

Read the rest of the blog at SmartBear.

A pattern for multithreaded unit tests

Posted on by

This is a design pattern for writing a unit test that performs the same test simultaneously in multiple threads.

By using java.util.concurrent cleverly, you ensure maximum concurrency, which can expose threading bugs.

Remember: you cannot prove that a program is free of threading bugs. The goal is to make it probable that it works in a multithreaded environment.

Code template



    @Test
    public void testConcurrentAuthInfoResponse() throws InterruptedException {
        final int threads = 100;

        final CountDownLatch readyToStart = new CountDownLatch(threads);
        final CountDownLatch startingGun = new CountDownLatch(1);
        final CountDownLatch finishLine = new CountDownLatch(threads);
        final AtomicInteger failCount = new AtomicInteger();

        for (int i = 0; i < threads; i++) {
            new Thread(() -> {
                readyToStart.countDown();        // report that this thread is in position
                try {
                    startingGun.await();         // wait for the simultaneous start
                    // exercise the code under test here
                } catch (Exception e) {
                    failCount.incrementAndGet();
                } finally {
                    finishLine.countDown();      // report that this thread is done
                }
            }).start();
        }

        readyToStart.await();    // wait until all threads are in position
        startingGun.countDown(); // release all threads at once
        finishLine.await();      // wait for all threads to finish

        assertEquals(0, failCount.get());
    }

Admittedly quite a lot of boilerplate, but you can factor it out into a template à la Spring’s JdbcTemplate.
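Such a template could look roughly like this. This is a minimal sketch; the class and method names are my own invention, not an existing library:

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.atomic.AtomicInteger;

// Hypothetical helper that factors out the latch boilerplate,
// in the spirit of Spring's JdbcTemplate.
public final class ConcurrentTestTemplate {

    /**
     * Runs the given action once in each of the given number of threads,
     * released simultaneously, and returns how many threads threw.
     */
    public static int runConcurrently(int threads, Runnable action)
            throws InterruptedException {
        final CountDownLatch readyToStart = new CountDownLatch(threads);
        final CountDownLatch startingGun = new CountDownLatch(1);
        final CountDownLatch finishLine = new CountDownLatch(threads);
        final AtomicInteger failCount = new AtomicInteger();

        for (int i = 0; i < threads; i++) {
            new Thread(() -> {
                readyToStart.countDown();    // this thread is in position
                try {
                    startingGun.await();     // wait for the simultaneous start
                    action.run();            // the actual test body
                } catch (Throwable t) {
                    failCount.incrementAndGet();
                } finally {
                    finishLine.countDown();  // this thread is done
                }
            }).start();
        }

        readyToStart.await();    // all threads in position
        startingGun.countDown(); // release them at once
        finishLine.await();      // wait for everyone to finish
        return failCount.get();
    }
}
```

A test then shrinks to a one-liner: `assertEquals(0, ConcurrentTestTemplate.runConcurrently(100, () -> serviceUnderTest.call()));`.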

Mock the Clock

Posted on by

Say you have a test case: “Given a departure more than 90 days from now, when a passenger logs on, the system should present a premature login page”.

Your first thought is of course to set up a mock for departures, so that when you simulate the log-on, the system logic will discover that there are more than 90 days left. So you take the current time and add 90 days to set the time for the mocked departure. Easy. But it fails twice a year, when we switch to and from daylight saving time. You could fix that too, of course. But then you discover a bug when it is exactly 90 days from now, and you need to write a test that shows it. Again, you pick the current time and start calculating.

Later on, you discover that some of the manual tests are taking a very long time. It turns out that the testers are constantly changing their test data to match the current date. Let’s say we have a test that involves somebody’s birthday. The testers then have to manipulate the test data to change the birthdays of the people in it.

That has to be waste.

“Now” is a concept that is readily available to your code, so there is no single point in the code where you can change the notion of “now” for the whole system. Also, how fast should time pass? If there are test cases that require time to pass, it might be useful to be able to throttle the speed of the clock.

My advice is to consider which of your tests are affected by the current date and by time passing. Create a utility that returns either the real current time or some fake time, depending on configuration.

For example, in Java a system property could hold a date-time string which is read when the system is not in production mode. A more advanced variant would include a user interface for easy access to the meaning of “now”.
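A minimal sketch of such a utility, using java.time.Clock. The class name and the property name `test.now` are my own; the point is only that all production code asks this one place for “now”:

```java
import java.time.Clock;
import java.time.Instant;
import java.time.ZoneId;

// Hypothetical utility: a single point of access to "now".
// If the system property "test.now" is set (e.g. started with
// -Dtest.now=2024-06-01T12:00:00Z), a fixed fake clock is returned;
// otherwise the real system clock.
public final class TimeSource {

    public static Clock clock() {
        String fakeNow = System.getProperty("test.now");
        if (fakeNow != null) {
            return Clock.fixed(Instant.parse(fakeNow), ZoneId.systemDefault());
        }
        return Clock.systemDefaultZone();
    }

    private TimeSource() { }
}
```

Production code then writes `Instant.now(TimeSource.clock())` instead of `Instant.now()`, and a test (or a tester) can move “now” to any date without touching the test data. In production mode you would of course refuse to read the property at all.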

There are of course other solutions, but I am surprised that the problems with time are overlooked so often.