Per Lundholm

Everyday Software Development

Requirements Specification is Waste

So why do I say that requirements specifications are waste? After all, I have been trying to follow them for more than 25 years. But it never went that well. Sure, I think I understood the business value and created it, but that was despite the specification.

What defines a system, besides itself, is its test cases. If the test cases do not run without manual intervention, a test protocol is needed as well. But now we are drifting into the fog of documentation.

There is no waste in trying to find the requirements; it is the specification, in its attempt to be formal, that becomes the waste. When you search for requirements, you search for business value. But when you try to write a really good requirements specification, you get so formal that it is almost like writing the code.

As I said, I’ve been around for a while. There have been so many attempts to find a way to express requirements in ways that can not be misunderstood and even be executable. But I will refrain from that history lesson.

Again, the point is that the requirements specification is waste. The spec as we know it, I mean.

If the requirements specification is waste, how does this affect bug reports? I say they are waste too.

At least when the bug report is written by a tester who compares the outcome of a test case with the requirements specification. Since the latter is waste, so is the bug report.

You don’t need bug reports from your tester, you only need to know which test case failed.

So all we have are our user stories (user stories are not use cases and not requirements; they are not formal enough), which we use for discussion and later throw away, plus the code and the test cases.

That is all. The rest is waste, unless there is other value to it, such as a design guideline that really helps or an overview of the system architecture that really aids understanding.

Mock the Clock

Say you have a test case: “Given a departure more than 90 days from now, when a passenger logs on, then a premature login page should be presented”.

Your first thought is of course to set up a mock for departures, so that when we simulate the log-on, the system logic will discover that there are more than 90 days left. So you take the current time and add 90 days to set the time of the mocked departure. Easy. But it fails twice a year, when we switch to and from daylight saving time. You could fix that too, of course. But then we discover a bug when the departure is exactly 90 days from now, and you need to write a test that shows it. Again, you pick the current time and start calculating.

Later on, you discover that some of the manual tests are taking a very long time. It turns out that the testers are constantly changing their test data to match the current date. Let’s say we have a test that involves somebody’s birthday. So the testers have to manipulate the test data to change the birthdays of the people in it.

That has to be waste.

“Now” is a concept that is readily available to your code, so there is no single point in the code where you can change the notion of “now” for the system. Also, how fast should time pass? If there are test cases that require time to pass, it might be useful to throttle the speed of the clock.

My advice is to consider which of your tests are affected by the current date and the passing of time. Create a utility that returns the current time as-is, or some fake time, depending on configuration.

E.g. a system property in Java could hold a date-time string which is read when the system is not in production mode. A more advanced variant would include a user interface for easy control over the meaning of “now”.
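In modern Java, java.time.Clock provides exactly such a seam. Here is a minimal sketch of the idea; the class name TimeSource and its methods are my own invention, not from any particular project:

```java
import java.time.Clock;
import java.time.Instant;
import java.time.ZoneId;

// The single place in the system where "now" is defined.
final class TimeSource {
    private static Clock clock = Clock.systemDefaultZone();

    // All production code asks this method instead of the system clock.
    static Instant now() {
        return Instant.now(clock);
    }

    // For tests or non-production mode: freeze "now" at a given instant.
    static void useFixed(Instant fakeNow) {
        clock = Clock.fixed(fakeNow, ZoneId.systemDefault());
    }

    // Back to the real clock.
    static void useSystem() {
        clock = Clock.systemDefaultZone();
    }
}
```

A test of the 90-day rule can then pin “now” to a known date and add exactly 90 days without worrying about daylight saving. A throttled clock would simply be one more Clock implementation behind the same seam.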

There are of course other solutions, but I am surprised at how often the problems with time are overlooked.

Understanding a System

Understanding a system comes in pieces, and each piece can take more or less time depending on the circumstances.

What does it mean, then, to understand a system?

Well, if you don’t understand it and introduce a change, you risk going wrong. The code you wrote may seem to work, but …

– You put the change in the wrong place, so everyone else is confused.
– Your unit tests ran fine, but some other part of the code wasn’t under test and promptly stopped working.
– Your change introduced a dependency on another component, and now we have a circular dependency.
– The system can’t be deployed because there was a distributed interface that you thought was local.
– You didn’t follow the policies for error handling, logging or security. So now there is a security breach that is not caught by error handling, and the log is nowhere to be found.

So that’s the "why" of understanding. The "how" has some answers you probably know already.

  1. RTFC: Read The Fine Code. Yeah, that’s what we do. All the time. Don’t tell me to do something I already do all the time. Thank you.
  2. Ask a guru. There are gurus? No, guru means teacher, which is the opposite of withholding information. That is what some do, to make themselves needed. Baaad. But you have to deal with it more often than you wish.
  3. Write unit tests until you get it. No harm in that if you do it as you go.

All these have the drawback of looking at the details without seeing the big picture. It reminds me of when I was 13 and learned the city by areas around the subway stations. After a while the areas got big enough to connect to each other and I could get the overall picture without looking at the map.

What is the big picture then? Well, it is the architecture of the system. There is no mystery about it, no magic patterns put down by a long gone super genius, no outdated ideas, just plain descriptions of how the system is designed.

Like a house is understood from blueprints of the outside, the disposition of rooms, the plumbing, etc., a system is understood from different views. There is a standard that will tell you that there are views, but it leaves it to the author to decide which views.

The one I have seen most of is Kruchten’s 4+1, published in IEEE Software. You should be familiar with it, but I guess you are not. I give a course on system architecture and I am still surprised how few of the students have ever seen a document describing a system’s architecture, despite several years in the business.

The crash course is to study the Wikipedia article linked above. Start with the logical view and the physical (deployment) view, those are easiest to grasp.

But there is more to it. Remember that I said something about going wrong with logging, error handling and security? Those are principles of the design and impact the system almost everywhere.

So to understand a system, I think you need the details and the views and principles of the big picture.

Oh, was the Wikipedia article too long? Here is a cheat sheet:

  • Logical view: how the components of the system relate to each other logically.
  • Physical view: the servers, firewalls and other hardware.
  • Process view: how processes and threads are related.
  • Development view: where the compiled code is packaged, e.g. a war file.
  • Scenario view: important functions of the system.

A Lean Simulation in JavaFX

My colleagues are talking a lot about Lean these days. I thought it would be interesting to simulate one of their examples using JavaFX.

Here is a picture:

What’s cool about this then?

First, it gave me a deeper understanding of how queue length affects cycle time. With this simulator, you can vary the parameters that control queue size and processing time. Just pull the sliders.
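The relationship the sliders reveal is, in essence, Little’s law: in a stable system, average cycle time equals the amount of work in process divided by throughput. A back-of-the-envelope sketch (the names are mine):

```java
// Little's law: cycle time = work in process / throughput.
// Assumes a stable system, i.e. items arrive as fast as they are finished.
final class LittlesLaw {
    static double cycleTimeDays(double itemsInProcess, double itemsFinishedPerDay) {
        return itemsInProcess / itemsFinishedPerDay;
    }
}
```

With 10 items queued and 2 finished per day, an item spends on average 5 days in the system; halve the queue and the cycle time halves too.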

Secondly, I knocked this up in a matter of hours, and as it was only my second JavaFX project, I consider that very fast.

There are always advocates for languages that speak loudly about how fast they write code. Sorry, I didn’t mean to be one of those.

I used almost the same number of hours with a spreadsheet to check whether the simulation was correct.

Speaking of spreadsheets, they are great tools for understanding data.

The first point shows how JavaFX can do the same for understanding the interaction of parameters in a dynamic flow.

So the bottom line is, nothing beats a visual model and you can knock it up with JavaFX, being quick as a brick. (Did I just say that?)

Now, go and see for yourself:

Don’t let Java ruin your JavaFX

Oscar and I are currently working on a small project, just to learn JavaFX.

We stumbled on some nasty crashes which we at first did not understand.

ArrayIndexOutOfBoundsException? Is there a bug in JavaFX?

It turned out to be a callback from Java. Let us see how we got there.

The application we are building is based on Crisp’s famous planning poker cards. They are great, but you need to be in the same room. So we thought, why not do an online version for teams that are geographically dispersed?


The table has room for you and 8 other players. As you can see from the picture, there is also a text chat to the right. At the same time, a small bubble appears by the card of the player who wrote in the chat. The bubble fades away after ten seconds, unless the player makes another comment within that time. In that case, the latter comment is added to the bubble, and it was here that our problems showed up.

The chat is using a standard protocol, XMPP, to talk to the server. We don’t have to provide our own server; any chat server that speaks XMPP will do. Of course, all players need to have an account there.

Here is a strength that JavaFX has as a newcomer: you can use existing Java libraries.

We found Smack, which talks XMPP, and did a small test in Java to check that we had understood it.

Now, how does one provide a JavaFX class that receives callbacks from Java? Each time there is a message from the chat, Smack will call your PacketListener. That is an interface, and JavaFX does not have interfaces. It turned out to be quite straightforward, however: just extend the PacketListener interface as if it had been a class.

Here is a code snippet:

So we override the function that gives us the packet. Now comes the crucial part: the callback is done on its own thread. JavaFX has a single-threaded model for everything in the GUI, and that code is not thread safe.

In our case we wished to display a bubble if there was none, or add to an existing one.

You should not do that. You should wait in line for your turn. Or something nasty may happen.

Remember Swing’s invokeLater? Here we need to call FX.deferAction instead. But in JavaFX we can pass functions as arguments. So here goes that part of the code.
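In plain Java terms, the pattern looks roughly like this; a sketch of the idea behind FX.deferAction, not our actual JavaFX code:

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

// Sketch of what deferring buys you: actions posted from any thread are
// queued, and only the single GUI thread ever runs them.
final class UiThreadQueue {
    private final BlockingQueue<Runnable> pending = new LinkedBlockingQueue<>();

    // Safe to call from any thread, e.g. Smack's packet listener thread.
    void deferAction(Runnable action) {
        pending.add(action);
    }

    // Called only from the GUI thread's event loop.
    void drainPending() {
        Runnable next;
        while ((next = pending.poll()) != null) {
            next.run();
        }
    }
}
```

The Smack callback thread only enqueues; the GUI thread runs the queued actions at a point where touching the scene graph is safe.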

You may also note that we use the chat channel to send commands.

So, as long as you remember the threads, it is safe to have a callback from Java to your JavaFX code.

The Essence and the Crux of Test-Driven Development

Once you have understood the point of test-driven development, the hassle inevitably follows. So how do you move forward?

We start with the point, so that we agree on what we mean.

Test-driven development (TDD) says that you first write a test that fails (important), then implement so that it no longer fails.

At that point you stop and consider whether the design and code structure are the simplest possible. If not, you fix them without changing what the code does; this is called refactoring. Since the tests you have are passing, it is safe to change the implementation.

We then continue with a new lap: test – implement – refactor.

Simple so far, but what is the point? There are several.

First, a test describes what shall be done, while the implementation describes how it shall be done. The tests thereby become specifications written in a formal language: Java (or whatever the project uses).

Second, since the test is written before the implementation, the interface is designed from the viewpoint of a user (the test), which gives a better result than starting from how the implementation happens to be shaped.

Third, the tests actually get written, which is not the case when you write them afterwards. The coverage of tests written afterwards is also considerably lower, since parts of the implementation are usually hard to reach.

The motivation to write tests for code that is perceived as working is extremely low.

Having automatically executing tests is a critical success factor. All code needs to change; you could say that the maintenance of a line of code really begins as soon as it is written.

With automated tests, I can change existing code with greater confidence. The tests specify what is correct, so if I change something beyond that, I notice it immediately.

That was the essence, so what is the crux?

By far the most common problem when introducing TDD is that there is a great deal of code without automated tests. When you want to introduce TDD, this mass of code stands before you like an insurmountable mountain.

The idea is that the next change I make should be done test-driven: write a test that shows what should happen, confirm that it does not happen, and then change the code so that it does. But the code I am about to change has no tests and is not prepared for any!

So either I skip TDD and make the change anyway, as before. I tread carefully and touch nothing that looks complicated. I run a few informal tests and hope that the testers will catch any mistakes. They probably will, and I get the problem back some time later, when I have had time to forget far too much. Or I grit my teeth, because I do not want to go down that road one more time.

Let it cost what it may, as long as I do not have to chew over every change several times.

Now the problem appears that the architecture is not designed to let me introduce tests easily. The component I am about to change depends on several other components that I had not intended to involve. Especially since these in turn depend on yet other components.

Here I need something that isolates the component from its surroundings. There are several tricks, among them frameworks that simulate components (mocking) and just return predefined results. My favourite for Java is Mockito.

Sometimes, though, it is enough to inherit or implement interfaces and define methods that return canned answers.
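Here is what such a hand-rolled stub can look like; all the names are invented for the example:

```java
// The dependency we want to cut away during testing.
interface CustomerDirectory {
    String nameOf(int customerId);
}

// The component under test, which normally talks to a real directory.
final class GreetingService {
    private final CustomerDirectory directory;

    GreetingService(CustomerDirectory directory) {
        this.directory = directory;
    }

    String greet(int customerId) {
        return "Hello, " + directory.nameOf(customerId) + "!";
    }
}

// A stub: implements the interface and returns a canned answer,
// so the test never touches the real directory component.
final class StubDirectory implements CustomerDirectory {
    public String nameOf(int customerId) {
        return "Ada";
    }
}
```

A mocking framework like Mockito does the same thing with less ceremony, and also lets you verify which calls were made.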

Unfortunately, we need something more. Even if we manage to confine the component so that it does not call surrounding components, we do not know whether we have broken it, since we have no formal definition of "broken" versus "working".

What we can do then is take a fingerprint. Before changing anything, we write one or more tests whose only purpose is to generate output. We try to capture an imprint of how the component behaves. When we later change the component, we can see whether the output has changed in some unexpected way. If so, we can go in and analyse what caused it and whether it should be considered right or wrong.
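In miniature, the fingerprint idea looks like this. The legacy class is made up; the point is that the expected value is not derived from any specification, but simply recorded from a run of the current code:

```java
import java.util.Locale;

// Stand-in for an untested legacy component we are about to change.
final class LegacyFormatter {
    static String format(double amount) {
        return String.format(Locale.US, "SEK %.2f", amount);
    }
}

// The fingerprint test: we ran format(12.5) once, observed "SEK 12.50",
// and pin that observed output so any behavioural change is flagged.
final class FingerprintTest {
    static void run() {
        if (!LegacyFormatter.format(12.5).equals("SEK 12.50")) {
            throw new AssertionError("behaviour changed unexpectedly");
        }
    }
}
```

This is what Michael Feathers calls a characterization test: it characterizes what the code does today, right or wrong, so that tomorrow’s change is made visible.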

A couple of books:
"Test Driven", by Lasse Koskela.
"Working Effectively with Legacy Code", by Michael Feathers.

Qualities Attributed to the Architecture

Functional requirements describe how a system delivers value. However, the quality attributes of those functions will make or break it. For example, if your functional requirement is just about something that takes you from one city to another, I have a car to sell you. Really cheap, for that matter.

Every system has an architecture. It may be elegant or it may be ugly. It may be described or it may be unknown. But it is.

Architecture is what determines the qualities that the system delivers. Is it fast? Is it secure? Can it be extended? Does it scale?

The qualities you strive for should determine the design of the architecture – not the other way around.

What do I mean by quality, then? I’m not in a philosophical mood today, so there will be no Zen talk. I’m simply referring to measurable requirements such as response time and throughput. Quality to an engineer.

Some say non-functional requirements. I’d say, stop saying that!

You can’t have a response time without referring to a function or a group of functions, e.g. all interactive functions. Therefore, "response time" is an attribute of those functional requirements.

Since it is an attribute, it does not live on its own. There has to be a functional requirement, at least one. Also, functional requirements always have attributes even if they are not determined or poorly understood. Remember the cheap car? Well, forget it, it disintegrated on the way to that other city.

So, once you have understood what quality attributes are, why should it matter? Well, my point is, it is about the design of your system’s architecture!

It is how you divide your system into components, software-wise and hardware-wise, how these components interact, and how new components can be added that determine the qualities of your system.

Say you want security for your website. Is that a requirement on the website? No, it is a requirement on some of your pages. You probably have at least one public page.

So you decide you need people to log in with name and password. That doesn’t make your system deliver any value to anyone; it only lowers usability. But still, you need that security, so you put in some components for it. Considering LDAP? Well, here comes an LDAP server into your architecture.

Not all quality attributes determine architecture, but those that do are really handy when it is your job to design it. They help you by narrowing your design options. Stay tuned.

You think architecture means Big Design Up Front (an anti-pattern in the agile community)? Wrong. Every system has an architecture, regardless of whether you built it in increments or in one go.

Perspective of Retrospective

Scrum received some criticism today in Computer Sweden. The article featured an interview of Ken Schwaber and our guy Henrik Kniberg. Tobias Fors from Citerus was giving the comment that Scrum lacked support for retrospective. I am not sure if he was quoted correctly.

It is my understanding that Scrum has three roles, three artefacts and three meetings. Of the latter, there is one you should never skip.

The thing is, if you wish to get any better, you need to start thinking about what you do. If you do not do that, you will just muddle on like before. You probably reflect on your own ways, once in a while, as an individual, but you also need to do that as a group.

The only meeting you should never skip is the retrospective. Yet that is what seems to be happening very often, from what I hear and from the feature I mentioned.

As long as you do retrospective meetings, you can improve. You will probably find that planning is also a good thing to do. But if you skipped the retrospective and did planning only, you would have no time to discuss how planning could improve. Except during your coffee break, when everyone lets out their frustration anyway.

So how do you run a good retrospective meeting? There are many ways to go about it, I’m sure, but let me give you a simple one that I have even tried at home on a Sunday.

Divide a board into “good” and “not so good”. Hand out post-it notes and pens to everyone. Let everyone write down anything that springs to mind when thinking about the last sprint (or the last month, if you’re not iterative yet).

As they write down their thoughts, each note is placed on the board while its author states briefly what it is about. Try to refrain from discussion; if there is any disagreement, just note that fact.

After ten minutes, or when everyone is content, look through the notes and see whether some should be considered duplicates. If so, put them next to each other.

Now each participant is given three votes to put on the notes they feel are most important. All votes may be spent on one note or spread over several.

Count the votes and focus the discussion on the top five notes. Make a list of improvements from that and put it on the wall, visible to all.

Email eats your day

Email has reached into the everyday life of almost every profession. While I have been using it since the 80s, its usage has accelerated enough to make it an issue even for those of us who have long been used to it.

There is research showing that we spend a lot of time reading email. Much of it may be waste.

Here is a suggested personal policy for handling your email.

The first recommendation is to read mail at specific times only, and not to have an alarm go off whenever a mail arrives. You’ve probably heard it before. I think it might be a good idea if it suits you. But whenever you read email, you should do it efficiently.

1. It is my strongest recommendation to keep your inbox empty. If you don’t, there will be a constant noise of unread, half-read or read mails every time you look in your inbox. It costs brain energy and slows you down.

2. Get yourself a todo-list. Use paper, an email folder, a spreadsheet or a specific tool like Todoist. But do not use your inbox as a todo-list. It is an inbox. Would you put all documents in one pile just because they are all made of paper?

Assign dates to all todo items so you can take them in good order. Keep separate lists for private matters and business, so you can show the latter to co-workers and the former to your spouse.

3. Classify every email immediately. A mail is typically a request, information or junk. Act according to the classification. You may have a different classification than mine; the important thing is to have one.

If it is a request to do something, put it on your todo-list or reject it by answering the email. Then delete the email, or put it in a folder should it contain information needed for the task.

If it is a request to attend a meeting, note everything needed in your calendar and delete the email. Or reject it as above.

If it is information that you really need to read, put a note in your todo-list that you should read it. Remember to note the date when you plan to have it done. Move the email to a folder, fit for the purpose.

If it is information that you need to track, such as updates on an ongoing process with several steps, move the email to a folder with a special meaning to you, namely "everything here should be deleted when finished".

If it is junk or information you do not need, delete the email without opening it. That’s the safest thing to do.

So there you go, a policy for handling your inbox: empty inbox, a todo-list, and immediate classification. By doing this, reading email will not eat up your day.

Usability will cost you money, ignore or score

As a product owner, you should be highly aware that usability will cost you money, whether you ignore it or not.

But let us start with the classical observation made by Anna, a product owner. She knows her product well and has observed that users cope with bugs that make the system "unpredictable". When she asks the users what they think, they say it is annoying but they "take it for granted". Hmm.

I have heard other, similar stories about users coping with ridiculously bad user interfaces.

My view is that users will stand on their head if they have to, to get their job done. So they cope with it.

This would be fine if it were not for another aspect: they make mistakes due to the low usability. Mistakes, depending on the system, may kill people (e.g. with an X-ray machine or an airplane), or just slow users down.

I find it interesting that users do not always complain about, or even mention, how awkward the system is. Instead they blame themselves for not remembering that the Return key is different from the Enter key, to take an example.

So we can’t count on getting feedback from the users, even when we are doing really badly in the usability department. What shall we do, then?

As a product owner, I am responsible for prioritizing the quality attributes of the product.

As an architect, I am responsible for delivering those qualities. Well, it is the team that is responsible, and there are no architects in Scrum, since all they do is sit in ivory towers and philosophise. Not. People have different skills, don’t forget. Oops, a rant.

You have two options: either ignore or score. If you ignore it, in the worst case people get killed or your product fails to get market acceptance. No business, no money. But if the risks were that high, you wouldn’t even consider ignoring it, right?

It is worse when your users belong to the same organisation and there is hardly any competition that can kill you. Worst case, people see you as mediocre. Not!

Even then there is money to be made. So consider the other option: score.

To score, you have to commit. It will take time, money and focus or you might as well skip it.

However, the bottom line is that if you look, there is money to be found in usability. There may be a yearning for more functions, but at some point you have enough functions to satisfy most of your users’ needs. They may have to use paper and pen in some less frequent cases, but most of the time they will do fine with what the system has got.

Against that yearning for functions, you should weigh the possibility of making the users’ lives easier, with fewer mistakes and better flow in their work. They like flow too, you know.

So start where you stand. If the mistakes were reduced to less than ten percent of today’s, how much would that increase the value of the product?

To score that value, where do you find the changes that will take your product there?

I am no expert in the field, but I know that measuring how many mistakes users make can be of great help. You could also videotape users in their daily work. A third option is to observe a user’s actions when asked to perform a task; that is especially good when you are homing in on a specific problem area.

So, what will it be – ignore or score?

Wicket + Mockito = Love

I’ve survived my first Rocket Day. Rocket Days are our seminars at Crisp, where we talk for half a day on a subject of our choice. Mine was Test-Driven Development with Wicket and Mockito.

I chose to do a live coding performance, as I wanted a very down-to-earth, practical seminar.

The slides are currently in Swedish but I will translate them to English. Later. 😉

However, most of you read Swedish, so I have published them here.

Manage versions of your database schema!

In software development where the system persists data in a relational database, it is important to keep track of changes to the database schema.

The importance comes from the fact that you always have several database instances to keep track of: the production database, the database for system tests, the one for acceptance tests, the one for performance tests, the one for the development team, and the one each developer has.

All these will be at different versions and aligned with different versions of the code.

In this blog post I will describe how to go about controlling your schema versions. I have tried it in two completely different settings.

What you do is create a table called, say, “version”. Here you store all versions that have affected the database schema and its preloaded control data. Thus, when in doubt, you can always check the table to see which version a particular database is at.

major  minor
0.1    1
0.1    2

You may use any notation and number of columns to describe your versions, but each must be unique, guaranteed through a unique index.

Furthermore, all changes to the schema are described in SQL script files. For every version or change there is one file. You may never change a file once it has come into use; there must be no doubt about how a file relates to the contents of the version table.

The file should start with a begin and end with a commit, to make the running of the script atomic. The first line should be an insert into the version table. Since the rows are unique, you are guarded against applying the same SQL script twice.

insert into version values ('0.1', 2);


The files are named after the version. You should use numbers with leading zeroes so that they line up in the correct order when you list them.

Order matters, since some changes depend on earlier ones.
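A complete change file might then look like this (the file name, table and column are invented for the example):

```sql
-- File: 0.1-0002_add_email_to_customer.sql
begin;

-- Claim the version first; the unique index makes a second run fail here,
-- rolling back the whole transaction and leaving the schema untouched.
insert into version (major, minor) values ('0.1', 2);

-- The actual schema change for this version.
alter table customer add column email varchar(254);

commit;
```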

By this simple technique you get a database where you know which version of the schema it has, and you never have to be afraid of running a database script too many times.

And, besides, at Spa-Francorchamps, keep your foot down through Eau Rouge… 😉