Continuous discovery means an open backlog where everything is considered speculation and hypothesis. Continuous validation means that the user experience is validated for each release, rather than up front. This may sound like it requires a big budget, but let me give you a case study about how a single team accomplished it on a tight budget.
A small team with a small budget has the advantage of not losing its head over big ideas from experts in different fields, be it architecture or user experience. The budget constraint sharpens your effort in a way that could be healthy even for a larger team.
Background
At Stockholm University, there was growing frustration over not having digital examinations. There was, of course, plenty of software available as open source, and some healthy competition from specialized companies.
We were a small team starting from scratch; how could we ever compete with open source solutions and specialized companies? Well, the open source products were feature-rich, but their user experience was so poor that most of the professors refused to use them and preferred to keep doing examinations on paper. The commercial alternative had a much better user experience, but it was a standard system with rigid assumptions about how examinations work.
With all that in mind, we made a few decisions. Technically, we selected a very traditional solution that we were familiar with. Not the latest JavaScript framework, and no cool NoSQL database either. Just web pages, a Java backend, and a relational database.
On the domain side, we invested some time in creating an initial domain dictionary. We sketched some quick personas and put them on the wall.
We decided to create a demonstration system that covered the whole process: authoring the test, sitting the examination, checking the results, and handing out the results to the students.
We had only one chance to demonstrate our capability to deliver, or the commercial option would be selected. And we took that chance.
With that obstacle removed, we got a small budget and could get to work.
Our journey
One of our first challenges was a requirements document written by a professor who had been dreaming about this for a long time. In his mind, the longer something had been a requirement, the more important it was.
On the user experience side, there was a prestudy. A user experience expert had designed a feature-loaded, clickable demo that felt very real. Anything we did would be compared with it.
Although the ideas in the requirements and in the GUI prototype looked sensible, it was quite obvious to us that we could not safely assume anything.
The first validation was early in the project. We had a system that covered the whole process, but it only had essay questions. In other words, the students got web pages with a question and a text area for the answer. One question per page; that was all.
We invited third-year law students to try the system on a first-year examination. They got symbolic compensation and a chance to influence the future system. We wanted to see how users experienced sitting for long hours. It turned out that law students work on each question for more than 30 minutes, which was the limit for our session timeout! Nothing in the requirements document or the prototype had prepared us for that.
We interviewed the students after the test. Just informal chats, really. Seeing them use the interface and then talking to them was extremely valuable. Not only did we learn about the session timeout, we also realized it was important to them to know whether their answers had been saved. They put quite an effort into each answer, so losing one was their biggest concern.
Immediately after, we added the first feature: auto-save with visual feedback. There was nothing about this in the requirements, yet it was obviously the most important thing we could do. We also concluded that it was the only change we needed to make before going live and starting to deliver value.
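To give an idea of how small such a feature can be in a plain Java web stack, here is a minimal sketch of an auto-save endpoint. The servlet name, request parameters, and the saveDraft placeholder are illustrative assumptions, not our actual code.

```java
import java.io.IOException;
import javax.servlet.annotation.WebServlet;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

// Hypothetical sketch, not the actual implementation: a plain servlet endpoint
// that the answer page calls every few seconds to persist the current draft.
@WebServlet("/autosave")
public class AutoSaveServlet extends HttpServlet {

    @Override
    protected void doPost(HttpServletRequest req, HttpServletResponse resp)
            throws IOException {
        String studentId = (String) req.getSession().getAttribute("studentId");
        long questionId = Long.parseLong(req.getParameter("questionId"));
        String draft = req.getParameter("answerText");

        saveDraft(studentId, questionId, draft);

        // 204 tells the page the draft is stored; the page can then show a
        // small "Saved" indicator -- the visual feedback the students asked for.
        resp.setStatus(HttpServletResponse.SC_NO_CONTENT);
    }

    private void saveDraft(String studentId, long questionId, String draft) {
        // Placeholder for an upsert against the relational database,
        // e.g. keyed on (studentId, questionId).
    }
}
```

In a sketch like this, a few lines of script on the answer page would post the text area content to the endpoint and flip the "Saved" indicator when the response comes back, so the students never wonder whether their work is safe.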
Discover, deliver, validate
From here on, we started delivering every two weeks. But to know what to deliver, we needed to know what was most valuable for the next release. How do you measure value when your customer is in-house? In our situation, the professors had other alternatives, so when they used our solution we knew we were bringing value to them. Therefore, we counted the number of examinations run on our solution as our measure of value delivered.
Our discovery work was straightforward: simply ask the professors what the current version lacked that stopped them from using it. Surprisingly little, it turned out.
We talked to the examination assistants about how their work could be made more effective.
We talked to the supervisors about how to identify students to prevent cheating.
After examinations, we waited outside and talked to the students. What interested us most was whether they talked about the exam itself or our tool. If they talked about the exam only, we knew our user interface had been transparent, so to speak.
We preferred to make small changes and look for user feedback from real usage. The parts that supported checking a large number of examinations, in particular, had to be redesigned several times. But each time we based our conclusions on real usage.
Our journey took us to unexpected places. Initially, the system had a narrow scope of classroom examinations only. It turned out that home examinations were just as important. We also discovered that an examination form called peer review was labor-intensive when performed on paper.
This was a completely new digital examination idea, so for the first time we had to do user interface prototyping to find the design. We tried out ideas on paper and then with a mock version of our system.
Discovering the peer review feature gave us something unique. Nothing in the user experience prestudy or the requirements document hinted at it. When released, it was only used for didactics, but other institutions soon got interested. An example of how the release of new functions affects the processes they support.
Summary
The most important lesson was that whatever you think is a prioritized feature or good user experience, it is pure speculation until validated. Therefore, you want to make the feedback loop as short as possible. Designing the user interface in one sprint and implementing it in the next is out of the question, as it takes too long.
As a team, we turned out to be better at coming up with design solutions to hard user interface problems than the one UX specialist we had on the team. We worked without isolation between competencies; everything was tackled as a team.
Despite a lot of experience with the domain, creating a requirements document or a long backlog is just qualified guessing.
It is a good investment to sacrifice some developer time for discovery and validation work, and to do it as a team. It keeps the developers on target and avoids product ideas that are very expensive to develop.
Attend my and Mia’s course on Lean Team to learn more about this: Lean Team – Continuous Product Discovery and Delivery with one team.
Really like the simple approach to things here. Like the value counting.
A great story rich with powerful questions, which should be used more frequently in discovery work. For example:
“We talked to the examination assistants about how their work could be made more effective.”
Nothing about a tool or feature here, just how work can become more effective.
Good stuff.
Thanks, Mattias.