Per Lundholm

Everyday Software Development

On Unit Testing and Mockito

This is just a blog post to point to my presentation on the aforementioned subject. Or should I say “prezi”, because there are no slides, just a big picture with a path through it. That’s the way of Prezi presentations and, as a first-timer, I felt liberated. Slides are so dull!

The content of my presentation is aimed at those with some experience of unit testing who would like a dose of philosophy on testing styles. Classical or mockist? State or behavior? Also, if you are not that familiar with Mockito, take this prezi for a spin!

Here is the link to the prezi! That’s all for now.

read more »

Product Owner’s Product and Project Board

The team has its Scrum board as an information radiator. It is an excellent way of getting an overview of the sprint. But what about us, the product owners, don’t we need that too? Of course we do; we too need an overview of our work and a way to radiate information. The stakeholders pass by and ask “what’s in the next sprint?” or “when will we migrate?”. We’d like to just answer with a light gesture towards the wall. It is all there for everyone to see.

Let me tell you how our product owner board works, as an example. read more »

Shared Values Build Teams

What makes a group of good people a great team? I believe that shared values and experiences are fundamental. While there are many team-building activities available, such as paintball and boule, I’d like to think that software development teams need more than that.

read more »

The TDD Tetrahedron, version 2.0

The TDD Tetrahedron, or if you wish, pyramid, has reached 2.0. Like cars, the new model is bigger and comes with new technology.

By pure coincidence, I ran into somebody willing to print this. So here it is, the version 2.0 of the TDD Tetrahedron. The new version has sides of 100 mm and it is made of plastic.

What’s it for?

Well, if you didn’t use the older version, you may be wondering what’s so great about this. It is all about mental focus.

read more »

Becoming a Product Owner, Part 2

Here is part 2; a week has passed. I think as I write, so this is a bit different in style and content from my other posts here.

It has been a hectic week, this first week as PO/SM. First of all, it was only four days thanks to the national day. Then there were two new developers from Russia, which meant twice as many developers as we had before, not counting me. Simple, you just tell people to pair program. That works out if they are well enough to come to work. Which they weren’t. So the Russians were left hanging a bit. I had a lot to do just to keep the team running, at least by my standards.

read more »

Becoming a Product Owner, Part 1

I am a programmer and Scrum Master but have been offered the role of Product Owner for the same part of the product I am currently working on. I decided to accept the offer and this is my live story, written as I go. Perhaps it will be of interest to you.

First of all, this was not a decision I took lightheartedly. However, I do have the fortune of peer coaching at Crisp, and the offer came the same morning we had our last coaching session. The timing felt like there was a meaning behind it.

It is not supposed to be full-time either, so I need to choose what I shall do with the rest of my day. It turned out that initially there was no choice: it was not possible to find a Scrum Master to fill the gap after me. So next sprint I will be both PO and SM. Hua! read more »

Tools of Our Trade

Today we developers are in high demand, at least here in Sweden. My client is now persuading Russians to immigrate simply because there are not enough skilled programmers. While there are still people who think one programmer is as good as another, give or take some experience, others have realized that there is a huge difference. read more »

State of My Agile Mind

It has been 10 years since the Agile Manifesto was written and although I have not been following the agile community for that long, I have been a developer in Scrum teams since 2007. In total, I have done system development for 30+ years so I have lived and breathed both waterfall and RUP before trying Scrum.

So what is on an agile developer’s mind these days? Here are my current reflections on three things: Agile Development, Architecture and Acceptance Testing.

read more »

The TDD Pen

Hi there.

Just wanted to show you our latest widget: The TDD Pen.

You may think that it is an ordinary pen but it is not. When you hold this pen you immediately become a superb programmer. Not only will you write tests that cover all your code, you will also write bug-free code and refactor everything into stunning beauty!

You only have to figure out how to hold a pen in one hand and still type on the keyboard without being embarrassingly slow. 🙂

The TDD pen

For more widgets, see The TDD Tetrahedron, version 1.0 and The TDD Tetrahedron

Write Legacy Code and Secure Your Job

In this day and age, with an unstable economy, constant change in how we work with software, and new languages and databases popping up from nowhere, it is important to cement yourself into your position at work.

Follow this guide and be sure of never being fired, no matter what. read more »

Canned Wicket Test Examples

Unit testing of the GUI is not the same as unit testing through the GUI. We are interested in the logic of the GUI rather than the placement and order of the GUI widgets on screen.

Testing the logic makes the tests less sensitive to changes in presentation but introduces the problem of JavaScript-dependent features. AJAX is in vogue, so we wish to be able to test that too without being forced to start a browser. There is some support for AJAX in Wicket that can be reached using the test framework that is part of Wicket. However, it is not straightforward to use and there are some pitfalls.

Here are three examples of avoiding those, one for each of the check box, drop down and radio group controls. read more »
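
As a taste of the style, here is a minimal sketch (not one of the canned examples themselves) of such a test. WicketTester and FormTester are part of Wicket’s test framework, while the SettingsPage class and the component names are hypothetical, made up for illustration:

import org.apache.wicket.util.tester.FormTester;
import org.apache.wicket.util.tester.WicketTester;
import org.junit.Before;
import org.junit.Test;

public class SettingsPageTest {

    private WicketTester tester;

    @Before
    public void setUp() {
        tester = new WicketTester();
    }

    @Test
    public void tickingTheCheckBoxFiresTheAjaxBehaviour() {
        tester.startPage(SettingsPage.class);
        // Fill in the form as a user would, no browser needed.
        FormTester form = tester.newFormTester("form");
        form.setValue("notifications", "true");
        // Fire the AJAX behaviour attached to the check box.
        tester.executeAjaxEvent("form:notifications", "onclick");
        tester.assertNoErrorMessage();
    }
}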

Some Gotchas for Java Developers Learning JavaFX

In an earlier post, I had attached slides from a presentation on JavaFX that contained some code examples. I discovered that at least one of them, the ball game, stopped working when I switched to JavaFX 1.3.

I would say it is quite a subtle difference.

What happened was that onKeyPressed and onKeyReleased were no longer called. My immediate reaction was that it was due to some bug in JavaFX, but yesterday I realized what had happened.

read more »

Agile and Architecture

Yesterday I held a presentation on the subject “Agile and Architecture” at EDB. They have an internal competence network which meets regularly to discuss agile processes and methods.

My point in this presentation was that every system has an architecture that determines its qualities. Given a set of functions, different architectures will give those functions different qualities, such as performance and cost of maintenance.

This is still true, no matter if you do waterfall, RUP or Scrum.

What is different, however, is how we work with architecture. We became agile to cope with a faster-changing world, so the architecture must now be changed at a much higher pace than before. Architecture, though, is known for being hard to change. After all, it is the pillars the system stands on.

I held an exercise, or should I say, tried an exercise. But, gosh, what a cheat these cloud computing services are. 🙂

Ok. Here are the slides.

Lean vs Traditional Project Management

This week, we have Mary Poppendieck with us. She held an evening seminar which inspired me to think about the differences between Lean and traditional project management. I am also inspired by the questions I get from my spouse on this.

I thought that it would be interesting to do a side-by-side comparison between the two. I am no process expert; I am just a programmer who has been the subject of 30 years of different processes. I have seen DOD 2167, RUP, PROPS, PEJL, XP, Scrum and a few others. So this is just my humble opinion.

See also what Henrik Kniberg wrote earlier in his blog. read more »

Jada jada JavaFX – Why I Love the Idea

The other day I held a three-hour workshop, an introduction to JavaFX for some colleagues. The great thing about doing it is that it forced me to answer the question: why is it necessary to create a new language just for GUIs?

I brought a screenshot from a sprint demo with me as an example of why we need to go beyond the current widget based thinking. The example was a table that presented opening hours per sales item. During the demo it became clear how hard it was to understand the concept from the GUI. It did not show through, so to speak.

Standard widgets like tables and buttons are not sufficient to represent all kinds of information. The more advanced systems we build, the more power we give our users, the better we have to be at finding new ways of constructing user interfaces.

JavaFX is a promise of possibilities, but you still have to think for yourself if you want to go further than adding a drop shadow and a glow effect to your menus. Challenge your creativity!

Anyway, here are the slides from the presentation. There are four parts of about a dozen slides each: an introduction and three independent tracks that each end with a programming exercise. In three hours, there was time for only one of the three. This time, the audience selected the ball game, probably because the week before, the HTML5 presentation had used a ball game as its example. Needless to say, JavaFX kicks ass compared to an HTML canvas. 🙂

Besides the ball game exercise, there is a simple button example and an example of using Inkscape with JavaFX, showing how to set the identity on objects.

The Introduction. The ball game. Inkscape Example. The Button Example.

 

The TDD Tetrahedron, version 1.0

The TDD Tetrahedron has reached version 1.0.

As I write this, we have a course on advanced TDD with Robert C Martin as teacher.  I took the opportunity to introduce the first version to the participants.
Uncle Bob and the TDD Tetrahedron.

Some of you who participated asked me for a digital version, which I mailed to you. I thought that there might be others who are interested, so here is the PDF for you to download. There is no read-me file, but you only need scissors and tape.

The biggest change from the prototype I presented in an earlier post is that the bottom is removed. It became easier to put it together that way.

I really tried to find some way to manufacture it in plastic or paper at a reasonable price, but failed. I also thought about doing a TDD pen: a pen with three sides, where each side would be red, green or yellow, similar to the tetrahedron. But that failed since the sides could not have different colors.

The TDD pen
Anyway, if you have any clue on manufacturing any of these ideas, I will be happy to hear from you!

TDD Illustrated

I am planning an introductory course on TDD. In that process I have been thinking about how to convey the productivity gain with TDD.

Being a visual person, I had an idea that would illustrate this in a few pictures. Here they are for your scrutiny and enjoyment!

The main idea is to illustrate how effort over time is affected when using TDD in contrast to not writing tests at all.

We will start off with a new component, respond to a feature request and then finally attack some legacy code.

Phase 1: A New Component

Let’s first look at creating the component without writing any tests at all. The illustration below shows how you create your first version. The yellow rectangle with an “I” is your perceived progress.

The first time you say “done” and deliver to test, the component comes back with a bug report, illustrated as a red flash. So you go “oops”. As easy as apple pie, you fix that and deliver again, only to find that fixing the first bug introduced a new, second bug. Oops, again.

Finally, on the third attempt, you succeed and the component is put in production.

Phase 1 non TDD

Now let’s look at the TDD way and how that comes out. As you write tests, it will take slightly longer to reach “done”. Not as much longer as it feels, though: in the no-test case, the time you spend sitting and scratching your head feels shorter than it is, while with TDD you spend more of the time typing, so the perceived time is somewhat longer.

The tests are illustrated as rectangles with a “T” in them. As you see, they grow with the implementation.

Well, in this scenario you eventually reach “done”, but the component returns from test anyway since you made the same mistake. TDD is not foolproof. However, the interesting thing is that you don’t introduce a new bug. The test suite that you have written saves you.

Phase 1 TDD

In summary, the time to the first “done” was longer with the TDD way, but the time to pass the test was shorter.

Phase 2: Next Version

We now follow the life of our imaginary component as a request for a new feature comes in. We still compare the two ways of doing development.

If we don’t have any tests, the scenario will probably play out as illustrated here. The new feature puts us in somewhat more trouble than when we wrote the first version. The reason is that the code base is larger, so the risk of introducing bugs is higher. As you see, we have more attempts at delivering this time.

The code has become more difficult to maintain. We could say it has started rotting, if you see the analogy.

Phase 2 non TDD

Looking at the TDD way, we find that we have our tests as a safety net for making modifications.

All we do is add some more tests to the suite as we go along and create the new feature for our software.

With the test suite backing us up, the new feature is introduced as easily as when we were coding the component from scratch!

Phase 2 TDD

Phase 3: The Legacy Update

TDD would not be particularly useful if it only applied when you started from scratch. Almost 100 percent of our time as developers is spent with legacy code in some way.

Let us assume that you have been given the task of updating a software component that has no tests. What do you do?

As above, there are two alternatives and we will look at them both.

With no tests in place all we see is a big pile of … code that we need to change somehow. We read the code, make assumptions about how it works and make the change.

It is a small change so it only takes one trip back from test before going into production. Phew!

Phase 3 non TDD

Now, doing it the TDD way, you start by writing tests that express your assumptions about how the component works. These tests will fail initially, which is enlightening, and they will not cover all of the code, which is not necessary. From there on, however, we add tests and implement the change in a more confident manner.
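
To make the idea concrete, here is a minimal sketch of such a learning test. The legacy class PriceCalculator and its pricing rule are made up for illustration:

import static org.junit.Assert.assertEquals;

import org.junit.Test;

public class PriceCalculatorLearningTest {

    // A learning test: it states what we believe the legacy code does.
    // If it fails, we have learned something before touching the code.
    @Test
    public void discountSeemsToBeAppliedBeforeTax() {
        PriceCalculator calculator = new PriceCalculator();
        // Our assumption: (100 - 10%) * 1.25 = 112.5
        assertEquals(112.5, calculator.totalPrice(100.0, 0.10, 0.25), 0.001);
    }
}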

Since this is a small change it goes through testing to production on the first try. The testers call you and ask you what you have done, they can’t find any bugs! Lean back in your chair and smile, it is a good day in legacy land.

Phase 3 TDD

The legacy code is not as horrible anymore, as you now have more test coverage for the next change. Also, your learning tests in the beginning gave you insights that you wouldn’t have had if you did it the old way.

There are of course other values of TDD and there are some obstacles that you have to learn to overcome. Hopefully this post gave you some motivation to try!

The TDD Tetrahedron

Are you looking for some concrete expression of Test Driven Development? Let me give you a glimpse of what I am currently working on – the TDD Tetrahedron.

The idea originates from when a colleague at Crisp, David Barnholdt, wrote about not focusing on one step at a time. So I thought for a while and came up with this idea: a tetrahedron where the sides display “failing test”, “implementation” and “refactor”, respectively.

You turn it and look at the first side, where you read “failing test”. You write a failing test and turn it again, reading “implementation”. You write the implementation and run the test to get the green bar. Once again you turn the tetrahedron and read “refactor”. You look for something to refactor, confident that if you do, you will be supported by unit tests all the way.
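
In code, one full turn of the tetrahedron might look like this minimal JUnit sketch; the RomanNumeral class is a made-up example:

import static org.junit.Assert.assertEquals;

import org.junit.Test;

public class RomanNumeralTest {

    // Side one, "failing test": written first, and red at first.
    @Test
    public void oneIsI() {
        assertEquals("I", RomanNumeral.from(1));
    }
}

// Side two, "implementation": the simplest thing that turns the bar green.
class RomanNumeral {
    static String from(int number) {
        return "I";
    }
}

// Side three, "refactor": clean up, with the green bar as your safety net.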

Or the thing can just sit on your table to tell everyone how cool you are, being a TDD programmer. Or at least wishing to be. 🙂

Anyway, here are some sneak preview pictures of the greatest thing that ever happened to the world of programming, ta-da – the TDD Tetrahedron!

TDD Tetrahedron

Change Based Configuration Management

Configuration Management (CM) is crucial to any software project; neglecting it will easily get you into big trouble. It may look like bad luck, but it is not.

A CM plan deals with several matters, from things that are simple to decide, such as the naming of releases, to more advanced subjects, such as branching strategy. I will talk about the latter today.

There are of course different ideas about what a good branching strategy should be. It is my firm belief that it must be aligned with the subject at hand, namely changes.

If you are not used to looking at different branching strategies, they may all look similar, so why does it matter?

Let me put it this way: if your CM strategy is not working, you may one day be in a situation like the one I had, where there were almost fist-fights between developers. Luckily, frustration is a more common reaction.

I recommend that you have a simple branching strategy that everyone understands, as long as it works. It takes you a long way, which is probably why so many experienced developers know surprisingly little about CM.

However, when a single main branch, also known as the “trunk”, with an occasional branch off it, does not cut it, it is time to look further.

If you have read Henrik Kniberg’s take on the subject: “Version Control for Multiple Agile Teams”, you may have a different view on this subject than I have.

Henrik’s view is what I would call “team-based” whereas mine is “change-based”.

There are a few things that you should be very aware of. It is not branches that are released, it is configurations. A configuration is a snapshot of the system that includes all the changes applied to the system so far. You typically use release tags to mark the configurations that you release.

Further, you cannot state that a branch, such as the trunk, is always releasable. Besides branches not being what is released, as soon as you merge anything new into the “always releasable branch”, the latest version on it is no longer tested software.

Let us take a look at the life cycle of a change. A change springs from a configuration of the system, named or not, but a snapshot of the system at that point in time. This is called the “baseline” of the change. Work is done on the change at the same time as on other changes.

As time goes by, the difference between the baseline and the latest version of the system increases. Eventually we want to merge the change into a release, and this increasing divergence will make that harder. To solve it, we incorporate the other changes that have been applied since our baseline. This is called a “rebase”. Effectively, it moves the starting point of our change forward in time, so to speak.

Typically, you wish to do this on a daily basis, or the burden may become too large. It is your decision how often it will happen. However, the last rebase is crucial; see below.

A change may get stalled because of technical problems or changing priorities. Either way, you want your changes to be isolated so you can put them on freeze or deliver them in a later release than planned.

If other changes are on the same branch, they are not isolated from each other, which is the main problem with a team-based branching strategy, unless a team works on only one change. Not likely: a change should be a job for about 1-3 developers, and a team is 5-7.

When you reach testing, you probably need an environment to test in. What is interesting is that the number of test environments determines how many changes you can have “hot”. Changes that are “hot” are being worked on and can be deployed to a test environment at any time. Changes that are “cold” are just waiting in the wings and may be worked on at a later stage.

Before testing, you should do a rebase to be as close to the rest of the system as possible.

The final phase in our change’s life cycle is the delivery onto the release branch. This happens when the change is considered done. Other changes may have come in by then, so in theory your testing should start over. But testing is really about balancing risk against cost, so you have to make an educated guess based on how the other changes may interact with yours. If the risk is high, you have to rebase and run the tests again. Note that this is not a consequence of this strategy; it is a consequence of how changes always interact. The strategy just reveals it.

This strategy is aligned with what you wish to control: changes. Not with your team structure or your system structure. This alignment frees you from adding a lot of rules of conduct and policies; everything works naturally when your branches follow the structure of your changes.

As for implementing a strategy such as this, you first need to see whether your current version control system can cope with it in a practical way. I have used ClearCase, but there are others that certainly may cope, or even do a better job. There will be lots of branches, so remember to have a naming strategy for them.

Secondly, there are only four concepts that every developer needs to understand: “change” (we called them “activities”), “baseline”, “rebase” and “deliver”. It will certainly not be an easy ride, for as I said, many developers have no experience with anything but the simpler strategies. Added to that, most people seem to have trouble understanding phenomena that stretch far in time with several interacting outcomes.

Remember that this is the simplest advanced branching strategy there is!

Beyond Basic TDD

This coming spring we will host a course with Robert C Martin on advanced TDD. I would really appreciate input from my experienced TDD readers on what they consider to be the largest obstacles when it comes to TDD. This is your chance to shape the event so that it is customized to meet your needs.

A few months ago we hosted a very popular course with Michael Feathers. He talked about refactoring legacy systems and of course, unit tests which are an essential part of that. But the crowd cried out for more.

I have been practicing TDD for two years. I program in Java and frequently use Mockito and Wicket. The latter has support for unit testing web interfaces and it is great although it has its quirks.

But what is everyone else doing?

In my experience the GUI is the hardest part to develop using TDD; at the same time there is a lot to gain by using TDD there. You really, really want automated tests instead of manually clicking through your interface.

A GUI can be built many ways and on different platforms. I mentioned web, where there are different technical issues with different frameworks. There are also Swing-based Java applications and mobile platforms.

How can you test the GUI? There are several options. You can send events, but there are issues with ensuring that the event reaches the correct control, as well as timing issues and performance problems.

You can go slightly under the surface by programmatically calling the functions that should be invoked by the events. But you may run into issues with correctness. For example, the button you try to test a click on may not be visible.
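
As a Swing-flavoured sketch of that trade-off (the test class and names are hypothetical): doClick() below goes through the normal event machinery, but only an explicit check saves us from “clicking” a button that a real user could never reach:

import static org.junit.Assert.assertTrue;

import java.awt.event.ActionEvent;
import java.awt.event.ActionListener;
import java.util.concurrent.atomic.AtomicBoolean;
import javax.swing.JButton;
import org.junit.Test;

public class LoginButtonTest {

    @Test
    public void clickingLoginTriggersTheAction() {
        JButton login = new JButton("Log in");
        final AtomicBoolean invoked = new AtomicBoolean(false);
        login.addActionListener(new ActionListener() {
            public void actionPerformed(ActionEvent e) {
                invoked.set(true);
            }
        });
        // The correctness guard: a real user can only click a button
        // that is enabled (and visible on screen).
        assertTrue(login.isEnabled());
        login.doClick(); // goes through the normal event machinery
        assertTrue(invoked.get());
    }
}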

You can argue that a GUI is not considered done until a human has looked at it. Yes, but that is no excuse for not writing tests.

There are of course other complicated scenarios. Take multi-threading, how do you handle that?

Besides the technical problems, there are also social problems.

The first that springs to mind is the feeling that it takes too long to write the tests. As a TDD addict, you know that this is an illusion. You reach "done" on the Scrum board faster when you write tests. The feeling stems from the fact that time feels shorter when you sit staring at the code thinking, instead of expressing yourself in code. Also, if you don’t write tests, time gets split between implementing and fixing bugs, so the latter is not included in the former, as it should be. "Oh, I’m done, there are just some bugs to fix."

There is also a fear that maintaining the tests takes too long. That may be quite true: if you write code that is hard to maintain, that will apply to your test code as well. If the code you test has poorly designed interfaces that change frequently, that will adversely impact your test code AND its other clients. Maintenance will always be necessary. As the requirements change, the tests need to change.

It’s important to point out that TDD is not about testing; it is about specification. That is why some people prefer to say BDD instead. It is a common misunderstanding that TDD means testing, which manifests itself in comments like "even though you wrote unit tests, there are bugs".

Real testing is a different animal altogether which can make use of the same tools.

It gets fuzzy. When should you stop and say that there are enough unit tests to specify the behaviour of the unit, and when do you do real testing with bounds checking and so on? And does it matter when?

Another social problem: how do you approach co-workers who refuse to write unit tests?

The problem is not just that they are slower and missing their own potential. The larger problem is that they are not writing a specification.

We are agile now, so we don’t write requirements in Word documents that are printed and read and transformed into test documents and design documents that are printed and reviewed and changed and printed again when they are finally approved for use in coding which uncovers contradictions, fuzziness and other problems which render the design document obsolete and now the requirements are also obsolete so the requirements document changes and gets printed …

We talk directly to the product owner. We have little notes that are topics for discussion. We jot down a few remarks on the notes. Then we code and when we are done, we can throw the notes away.

So where is the specification? Anyone who has tried to make changes to a legacy system knows that there is none but the code. Not even when there are huge systems of documents that are supposed to describe the system.

If you don’t write documents that specify the system and you don’t write unit tests that express what the system should do, the only thing left is the system itself and that is not enough. The system mixes the purpose with the solution on every line of code. And you are in legacy land from day 1.

But now I’d like to hear from you! Comment here or mail me (per.lundholm at crisp.se). What have I missed? What do you think should go into an advanced class on TDD?

Not the Fixed Price Contract

The fixed price contract has been discussed here at the Crisp blog by others. It is broken by design, as it creates more problems than it fixes.

On the way home from JAOO I talked to Udi Dahan, and that made it fall into place in a different fashion. Not that this is what he said; this is what he sparked.

The fixed price contract is not necessarily fixed price, a contract or evil.

First of all, it is an excuse to get a budget. Let’s face it, your client on the other side of the desk, the man with the tie, has a boss. He wants to look good in front of his boss. So he needs a sum of money written down next to a promise of something. The sum must be big enough to justify him talking to the boss. The promise must sound like it has value and of course it does, you wrote a proposal full of great ideas.

Once the contract is approved, the fixed price changes as there are always change requests and they cost money. So the price is not really fixed.

But you know that is going to happen, so the proposal is gold-plated with features that are not really necessary. When the client comes with a change, you suggest they trade it for something of that gold-plated stuff. Of course they do. So you seem very agile as you take on a change request for practically nothing.

Best of all, if you trade something with higher risk for something with lower risk, you are better off than before.

In the end you deliver the product on time, different from what was in the contract and perhaps for a slightly higher price (not all changes can be traded for something else).

The project is a success and nobody cares about what was in the original contract.

You will get a request for a second version and this time you know the customer even better so the gold plating can be even more cunning *cough* clever.

Evil? Depends on how you see it. As long as the customer is happy and gets a return on their investment, all is well. After all, the value of something is what someone is willing to pay for it.

So the price changes and the scope changes. Why call it a fixed price contract?

Now, Udi may think I am madly misinterpreting him. So I guess he is only responsible for the subject and not the content.

Design Principles for Error Handling

Besides understanding the most important structures of a system, it is an architect’s responsibility to understand and influence the design principles.

One common and important set of principles is that of error handling. It is good to have the same principles throughout the system, as it produces fewer surprises and mistakes.

In this post, I will discuss some principles, including checked exceptions and Null Object, which you may not fancy. But it is always good to think this subject through, so please come along.

My first principle is: "Deal with the errors directly".

What? Errors happen. Not my fault. Somebody else will have to fix this.

My point is that you should try to design your interfaces so that the implementation has a smaller risk of failing. E.g. a function called "getSingleResult" is asking for trouble: either you get more than one result or you get none. In either case, you have to do something.

It is better to return a set of objects and let the client determine if the result was good or bad.

If you are asked to read object x from file y, you could return a Null Object if there is no x object in file y or if file y does not exist. Should the client think it is important whether there is a file y, it could find out by other means. You cannot protect the client from the fact that a file is missing. It is annoying if all clients need to catch a FileNotFoundException when most of them would be happy with a Null Object.

The second is: "Tell your supervisor".

When you were a newborn you probably screamed when you were hungry. Later you got old enough to ask for food.

The same goes for exceptions: you are just screaming that something is wrong and that someone else should fix it.

Instead, you should call your supervisor. There are times when you will be asked to deal with data that is inconsistent, or to use resources that you cannot locate. Go call support. Writing to the log is OK if you keep a tight check on your log scanning; maybe there are such people out there. Otherwise the system should have a mechanism for calling support, so to speak.

Other than calling support, you should be silent. At least try to shut up.

The third is: “Use Null Object Instead of null”.

This one is a bit Java specific, but it applies to other languages as well.

As soon as somebody returns a null, be it from an innocent getter function, someone else will either have to check for that or get killed by a NullPointerException. I hate those.

You may think that returning null is a perfectly sensible way of telling that nothing is there. But we now have exceptions to tell us that something is not OK, so we do not need return values that indicate errors. The benefit is that we do not have to get our code messy with checks after each function call.

Like this:

int error = dothis(withobj, toobj);
if (error != ERROR_OK) return error;
error = dothat(someobj);
if (error != ERROR_OK) return error;

Get it? Why would I need to do the same null checks when calling your code?

Use a Null Object instead. Let me decide if I need to know that there was nothing there.
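
Here is a minimal sketch of the pattern, with a hypothetical customer lookup; the names are made up for illustration:

import java.util.HashMap;
import java.util.Map;

interface Customer {
    String name();
    boolean exists();
}

// The Null Object: safe to call, and obviously "nothing there".
final class NoCustomer implements Customer {
    public String name() { return ""; }
    public boolean exists() { return false; }
}

class CustomerRepository {
    private final Map<String, Customer> customers = new HashMap<String, Customer>();

    Customer findByNumber(String number) {
        Customer found = customers.get(number);
        // Never return null: the client decides whether "nothing" matters,
        // and nobody gets killed by a NullPointerException.
        return found != null ? found : new NoCustomer();
    }
}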

If you design with “tell” rather than “ask”, it is easier to avoid both null and Null Object. Imagine having no return values, all methods having the void return type.

The fourth is: “Use Checked Exceptions”. (Java specific)

Now, remember that you should follow the other principles first, but if you are going to throw an exception, why not tell your client beforehand? I mean, if you throw an unchecked exception, your client is caught off guard and will go down.

The objection is that the client code gets messy, as it is forced to catch exceptions it has no idea what to do with. Now, if that is the case, there is something wrong. It should be evident to the client what it should do.

Take login as an example. If you have a function “loginUser(user, passwd)”, it could fail for technical reasons or because of wrong credentials. Either way, an exception will be thrown.

Now, since it is a login function, it is evident that the client would like to tell the user why it failed, by showing an error message and remaining on the same page. It is not messy; it is natural, and better than returning null or throwing an unchecked exception.

If you design the function as "isCorrectCredentials(user, passwd)" you are asking for trouble as the login may fail for technical reasons as well as incorrect credentials.
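
A minimal sketch of the difference, with made-up names: the checked exception announces the failure mode in the signature, and the catch block is not clutter but the natural "show an error message and stay on the page" branch:

class LoginFailedException extends Exception {
    LoginFailedException(String reason) { super(reason); }
}

interface AuthService {
    // The signature tells the client up front: this can fail.
    void loginUser(String user, String passwd) throws LoginFailedException;
}

class LoginPage {
    private final AuthService auth;

    LoginPage(AuthService auth) { this.auth = auth; }

    String submit(String user, String passwd) {
        try {
            auth.loginUser(user, passwd);
            return "welcome";
        } catch (LoginFailedException e) {
            // The natural branch: explain and remain on the login page.
            return "login failed: " + e.getMessage();
        }
    }
}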

As you may have noted, you should know your clients. Try to walk a mile in their shoes, i.e. try to use your own interfaces with a critical view. Look at actual client code.

Remember, these are only suggested principles. Make sure you have some where you work today!

The Last on Code Review

Code quality. It has been haunting me for so long I have forgotten when I started to think about it. Do other people think about it? For sure. Does everyone? Certainly not.

I was doing RUP, and before that some waterfall process from DoD. Before that I was programming Fortran. Now, what has been my single most important recommendation for reaching code quality?

Peer code review.

But enough. It just struck me how much I do not miss code reviews.

I’ll tell you why, and I’ll tell you what the replacement is, because there should be one.

I had completely forgotten about code reviews until I ran into a colleague from the past and suddenly remembered one code review in particular.

It was humiliating. First of all, it was boring to print the code on paper and then spend hours reading it. Secondly, I had to face the fact that a lot of the code was copied but not used. I put a big X across the whole page of dead code. Humiliation became a fact when everyone saw it at the meeting.

It was so boring that even though we cared about code quality and believed in code review, it was hard to make it happen.

Code review is dead – long live the new code review: pair programming!

There are today two very important practices for code and design quality: one of them is pair programming and the other is test driven design.

Pair programming is code review, because there is somebody looking at the code as you type. If you write code that is hard to understand or breaks some design rule, it is spotted before you have put hours into it.

It is easier to change something before you get emotionally attached to it.

The old code review also had another downside: the reviewers became attached to the flaws they found. Imagine building up arguments for a position on the design of the code that you think needs to be fixed. You come to the review meeting and the design has already changed. Would you not be tempted to tell everyone what you had found, despite it being useless information? I would.

Pair programming is also a setting where two people help each other solve a problem. A code review is a meeting where a group of people discuss a single person’s efforts, the one who was last to touch the code. I hated it when it was me.

Bye, bye, peer code review. May you rest in peace and never be awakened again.

Modal Windows Considered Harmful

On the Wicket users’ mailing list there was a question about modal windows, and it set me off. Since my excellent wisdom 😉 covers more than just modal windows and Wicket, I thought it would be of interest to all of you, dear readers.

I have noticed that the modal windows that were gone when web applications started to spread are coming back. And they are bad, even if they are not as bad as the goto statement, which was once accused the same way I just did: harmful.

A modal window is something that pops up in the face of the user, screaming its importance by not letting the user touch anything else until the modal window has had its way.

We have a back office application written in Swing that uses modal windows a lot, and it is just getting worse with each feature added.

Modal windows are really a last resort and should not be used at all if you can avoid it. What I have seen is that they tend to grow in functionality over time, and suddenly you are faced with the question: "should I put a modal window here? Oh, I am already in a modal window."

(Ranting further,) modal windows are primarily for non-expert users who need guidance, when you wish to be certain that they know the implications of what they do.

There should be nothing but some information and a yes/no question.

If the users are pushing you around demanding modal windows, and the customer is always right, what to do? I suggest taking a step back and presenting a completely new style of interaction that would give the users a much better flow than they have now.

Having said that on the list, the following question was fired at me.

"Per,
I see what you’re saying and I have a question.
How would you implement (UI concern) a setting page?
What I mean is, suppose I have a page that shows some statistics.
The statistics can be set by the user.
We implemented a link / button that opens up a modal window to select the
statistics.
How would you do it?"

Oh, that is a good one.

You could make it a modal window. After a while that window (I assume) would come to contain more and more settings. Then all of a sudden, the last setting you added really makes the statistics take a very long time. Since the user probably can’t foresee that, you wish to confirm that the user has understood the implications. So you need a modal window that … oops … you are already in a modal window.

The first alternative to think about is direct manipulation. Is there any way you could change the settings right where you are looking at the statistics?

A typical example is the familiar "click on a column heading to sort the table on the contents of that column". Consider drag-and-drop objects if that feels natural.

Second is to have the modal window inline on the page, in a panel. After all, having the selected settings and the result in the same window feels better than switching to another window, modal or not, and then back.

But there may not be room for that. Can you split the settings into groups to inline in several places on the page?

The next thing to consider is to have it on another page, and here comes another concern regarding the concept of settings: life cycle. Do all settings have the same life cycle?

Which ones are per request, per session, per user, per application? A side point? Well, it certainly controls presentation, since settings with different life cycles should not be presented together.

If the selection of statistics is a very separate activity, maybe it should be on a separate page before the page that presents the result? Changing the settings would then be a matter of pressing the back button.

These are my thoughts on alternatives to the modal windows jungle:

  • Try direct manipulation.
  • Keep selections visible all the time.
  • Resort to a separate page only as a last option.
  • Save modal windows for that yes/no confirmation. You will need it eventually.

Thanks for reading!

Stable Interfaces – any good?

I once worked on a rather large project, about 1000 persons. There are many stories about that project, but the one I’m thinking of now is that we loved to say "stable interfaces, we must have stable interfaces".

Now, stable means not changing, which means nothing gets better. So why would anyone want stable interfaces? And what should we say about the opposite, "unstable"?

Stable interfaces are a cornerstone in tactics for modifiability, so how do stability and modifiability go hand in hand?

Do you see my finger?

No, I am not making a rude gesture, don’t worry. The fingers, part of the hand, are so powerful because, among other things, they bend at specific points, the joints.

So a finger is flexible, to a certain extent, which makes it more powerful than if it could bend in any imaginable way.

The joints of a finger are the stable interfaces in your system’s architecture. E.g. we want to modify the system by adding new modules at runtime, so we define plug-in interfaces that the modules adhere to and use for interacting with the rest of the system.

Those interfaces must be carefully designed, as we don’t want to change them. Should we change a plug-in interface, every module would have to be rewritten and retested.
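
As a sketch of what such a joint might look like in Java (the names are made up, not from any particular system):

// A deliberately small, stable plug-in contract. Every method added
// here is a promise to every module ever written against it.
interface Plugin {
    String name();
    void start(PluginContext context);
    void stop();
}

// The context hands a module what it needs from the rest of the system,
// so the internals can keep changing behind this one stable joint.
interface PluginContext {
    <T> T getService(Class<T> serviceType);
}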

So some interfaces really need to be stable. But other interfaces we change all the time. Every class in your code has an interface, so you can’t do much without changing one. That will affect other parts of the code, but thanks to type-safe languages and automatically run tests, it is not as bad as it used to be.

Bottom line: some interfaces are part of an architectural strategy and therefore need to be stable. But most interfaces really need to be on a roll.

Requirements Specification is Waste

So why do I say that requirements specifications are waste? After all, I’ve been trying to follow them for more than 25 years. But I never did that well. Sure, I think I understood the business value and created that, but it was despite the specification.

What defines a system, besides itself, is its test cases. If the test cases do not run without manual intervention, the test protocol is needed as well. But now we are drifting into the fog of documentation.

There is no waste in trying to find the requirements; it is the specification, in trying to be so formal, that becomes the waste. When you search for requirements, you search for business value. But when you try to write a really good requirements specification, you get so formal that it is almost like writing the code.

As I said, I’ve been around for a while. There have been so many attempts to find ways to express requirements that cannot be misunderstood and can even be executable. But I will refrain from that history lesson.

Again, the point is that the requirements specification is waste. The spec as we know it, I mean.

If the requirements specification is waste, how does this affect bug reports? I say they are waste too.

At least when the bug report is written by a tester who compares the outcome of a test case with the requirements specification. Since the latter is waste, so is the bug report.

You don’t need bug reports from your testers; you only need to know which test case failed.

So all we have are our user stories (user stories are not use cases and are not requirements; they are not formal enough), which we use for discussion and later throw away, the code, and the test cases.

That is all. The rest is waste, unless there is other value to it, such as a design guideline that really helps or an overview of the system architecture that really helps understanding.

Mock the Clock

Say you have a test case: “Given a departure more than 90 days from now, when a passenger logs on, the system should present a premature login page”.

Your first thought is of course to set up a mock for departures, so that when we simulate the log on, the system logic will discover that there are more than 90 days left. So you take the current time and add 90 days to set the time of the mocked departure. Easy. But it fails twice a year, when we switch to and from daylight saving time. You could fix that too, of course. But then we discover a bug when it is exactly 90 days from now, and you need to write a test that shows that. Again, you pick the current time and start calculating.

Later on, you discover that some of the manual tests take a very long time. It turns out that the testers are constantly changing their test data to match the current date. Let’s say we have a test that involves the birthday of somebody. The testers then have to manipulate the test data to change the birthdays of the people in it.

That has to be waste.

“Now” is a concept that is readily available to your code, so there will be no specific point in the code where you can change the notion of “now” for the system. Also, how fast shall time pass? If there are test cases that require time to pass, it might be useful if you could throttle the speed of the clock.

My advice is to consider which of your tests are affected by the current date and by time passing. Create a utility that returns the current time as is, or some fake time, depending on configuration.

E.g. a system property in Java could hold a date-time string which is read when the system is not in production mode. A more advanced variant would include a user interface for easy access to the meaning of “now”.
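
A minimal sketch of such a utility, presupposing nothing fancier than a system property (the property name test.now is made up):

import java.text.ParseException;
import java.text.SimpleDateFormat;
import java.util.Date;

// The single point in the code that defines "now". Production code
// calls TimeSource.now() instead of new Date().
final class TimeSource {

    private TimeSource() {}

    static Date now() {
        // Only honoured outside production, e.g. -Dtest.now="2011-03-27 12:00".
        String frozen = System.getProperty("test.now");
        if (frozen == null) {
            return new Date();
        }
        try {
            return new SimpleDateFormat("yyyy-MM-dd HH:mm").parse(frozen);
        } catch (ParseException e) {
            throw new IllegalStateException("Bad test.now value: " + frozen, e);
        }
    }
}

With something like this in place, a tester can freeze “now” to just before a daylight saving switch, or to exactly 90 days before a departure, without touching any test data.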

There are of course other solutions, but I am surprised that the problems with time are overlooked so often.

Understanding a System

Understanding a system comes in pieces, but each piece can take a longer or shorter time depending on the circumstances.

What does it mean, then, to understand a system?

Well, if you don’t understand it and introduce a change, you risk going wrong. The code you wrote may seem to work, but …

 – You put the change in the wrong place, so everyone else is confused.
 – Your unit tests ran fine, but some other part of the code wasn’t under test and promptly stopped working.
 – Your change introduced a dependency on another component, and now we have a circular dependency.
 – The system can’t be deployed, because there was a distributed interface that you thought was local.
 – You didn’t follow the policies for error handling, logging or security. So now there is a security breach that is not caught by error handling, and the log is nowhere to be found.

So that’s the "why" of understanding. The "how" has some answers you probably know already.

  1. RTFC: Read The Fine Code. Yeah, that’s what we do. All the time. Don’t tell me to do something I do all the time. Thank You.
  2. Ask a Guru. There are gurus? No, guru means teacher, which is the opposite of withholding information. That’s what some do, to make themselves needed. Baaad. But you have to deal with it more often than you wish.
  3. Write unit tests until you get it. No harm in that if you do it as you go.

All of these have the drawback of looking at the details without seeing the big picture. It reminds me of when I was 13 and learned the city by the areas around the subway stations. After a while the areas got big enough to connect to each other, and I could get the overall picture without looking at the map.

What is the big picture then? Well, it is the architecture of the system. There is no mystery about it, no magic patterns put down by a long gone super genius, no outdated ideas, just plain descriptions of how the system is designed.

Like a house is understood from blueprints of the outside, the disposition of rooms, the plumbing, etc., a system is understood from different views. There is a standard that will tell you that there should be views, but it leaves it to the author to decide which ones.

The one I have seen most of is Kruchten’s 4+1, published in IEEE Software. You should be familiar with it, but I guess you are not. I give a course on system architecture and I am still surprised by how few of the students have ever seen a document describing a system’s architecture, despite several years in the business.

The crash course is to study the Wikipedia article linked above. Start with the logical view and the physical (deployment) view, those are easiest to grasp.

But there is more to it. Remember that I said something about going wrong with logging, error handling and security? Those are principles of the design, and they impact the system almost everywhere.

So to understand a system, I think you need the details as well as the views and principles of the big picture.

Oh, was the Wikipedia article too long? Here is a cheat sheet:

  • logical view: how the components of the system relate to each other logically.
  • physical view: the servers and firewalls and stuff like that.
  • process view: how processes and threads are related.
  • development view: where the compiled code is packaged, e.g. into a war file.
  • scenario view: important functions of the system.

A Lean Simulation in JavaFX

My colleagues are talking a lot about Lean these days. I thought it would be interesting to simulate one of their examples using JavaFX.

Here is a picture:

What’s cool about this then?

First, it gave me a deeper understanding of how queue length affects cycle time. With this simulator, you can vary the parameters that control queue size and processing time. Just pull the sliders.

Secondly, I knocked this up in hours, and this being only my second JavaFX project, I consider that very fast.

There are always advocates for languages that speak loudly about how fast they write code. Sorry, I didn’t mean to be one of those.

I used almost the same number of hours with a spreadsheet checking whether the simulation was correct.

Speaking of spreadsheets, they are great tools for understanding data.

The first point shows how JavaFX can do the same for understanding the interaction of parameters in a dynamic flow.

So the bottom line is, nothing beats a visual model and you can knock it up with JavaFX, being quick as a brick. (Did I just say that?)

Now, go and see for yourself: http://www.crisp.se/leanmachine

Don’t let Java ruin your JavaFX

Oscar and I are currently working on a small project, just to learn JavaFX.

We stumbled on some nasty crashes which we at first did not understand.

ArrayIndexOutOfBoundsException? Is there a bug in JavaFX?

It turned out to be caused by a callback from Java. Let us see how we got there.

The application we are building is based on Crisp’s famous planning poker cards. They are great, but you need to be in the same room. So we thought: why not do an online version for those teams that are geographically dispersed?

 

The table has room for you and 8 other players. As you can see from the picture, there is also a text chat to the right. At the same time, a small bubble appears by the card of the player who wrote in the chat. The bubble fades away after ten seconds, unless the player makes another comment within that time. In that case, the latter comment is added to the bubble, and it was here our problems showed up.

The chat uses a standard protocol, XMPP, to talk to the server. We don’t have to provide our own server; any chat server that speaks XMPP will do, e.g. jabber.org. Of course, all players need to have an account there.

Here is a strength that JavaFX has as a newcomer: you can use existing Java libraries.

We found Smack, which talks XMPP, and did a small test in Java to see that we had understood it.
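
A small test of that kind might look like this sketch, using Smack’s XMPPConnection and PacketListener (the account details are of course made up):

import org.jivesoftware.smack.PacketListener;
import org.jivesoftware.smack.XMPPConnection;
import org.jivesoftware.smack.filter.PacketTypeFilter;
import org.jivesoftware.smack.packet.Message;
import org.jivesoftware.smack.packet.Packet;

public class SmackSmokeTest {

    public static void main(String[] args) throws Exception {
        // Any XMPP server will do, e.g. jabber.org.
        XMPPConnection connection = new XMPPConnection("jabber.org");
        connection.connect();
        connection.login("someuser", "somepassword");

        // Smack calls us back on its own thread for every chat message.
        connection.addPacketListener(new PacketListener() {
            public void processPacket(Packet packet) {
                System.out.println(((Message) packet).getBody());
            }
        }, new PacketTypeFilter(Message.class));

        Thread.sleep(60000); // stay connected for a minute
    }
}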

Now, how does one provide a JavaFX class that receives callbacks from Java? Each time there is a message from the chat, Smack will call your PacketListener. That is an interface, and JavaFX does not have interfaces. It turned out to be quite straightforward, however: just extend the PacketListener interface as if it had been a class.

Here is a code snippet:

So we override the function that gives us the packet. Now comes the crucial part: the callback is made on a thread of its own. JavaFX has a single-thread model for everything in the GUI, and that code is not thread safe.

In our case we wished to display a bubble if there was none, or add to an existing one.

You should not do that directly. You should wait in line for your turn, or something nasty may happen.

Remember Swing’s invokeLater? Here we need to say FX.deferAction. But in JavaFX we can pass functions as arguments, so here goes that part of the code.

You may also note that we use the chat channel to send commands.

So if you remember the threads, it is safe to have a callback from Java into your JavaFX code.