
Award-worthy: The Reviewer’s experience.

Apr 27, 2024


TL;DR: Entering your design project for an award, be it the IxDA Interaction Awards, the Webby Awards, or Awwwards, takes time and consideration. How, exactly, should you present your project so you're putting your best foot forward? Here are my reflections from reviewing the IxDA Awards shortlist, along with notes on how to make your submission count.

Illustration by author

Interaction ’24 may have been cancelled, but the Interaction Awards are still on! This year, the awards received 216 entries from 26 countries, of which 41 projects were shortlisted. That's roughly a 19% acceptance rate, and the finalists were just announced! Congratulations!

The winning entries (and the shortlist) are a remarkable sign of how human-centered design has evolved. The breadth and complexity of the problems have increased, and the solutions are increasingly sophisticated. I'm really impressed by the projects that try to balance new interaction paradigms with simplicity of solution.

What can we do with new modalities in interaction, and which of these do we actually need to solve the problem at hand?

I was a peer reviewer for the IxDA awards, and their process was excellent: on par with, and in some cases superior to, academic conferences I've reviewed papers for. Even though IxDA makes it easy, reviewing is hard for many reasons. I think it's useful to understand some of these, so you can factor them in when you're creating a portfolio or submitting a project for an award.

The who and the how

Reviewers for IxDA need to apply and provide background information to qualify. I answered questions about the kind of design work I do, where I work, my years of experience, and, to ensure diversity of perspectives, where I'm from. This gives me confidence that projects that make the final cut are shortlisted by reviewers who have the necessary background to properly assess them, and who are willing to volunteer their time. I'd imagine that most reviewers are deeply invested in the state of the art and the future of the design discipline.

The review is blind, which means that while reviewers know who contributed to a project, submitters aren't told who reviewed it. I prefer this to double-blind reviews. Knowing who did what in the project, their backgrounds and skill sets, and the type of project (whether it was a student project, a research lab experiment, or agency work) makes it easier to assess a broader range of projects with less bias.

From “And the 2024 Interaction Awards Finalists are…”

IxDA has well-defined criteria for judging submissions and a peer review process, which makes for fair and equitable judging. The criteria can be applied to very different projects (hardware, mobile, AR, VR, SaaS, robotics, etc.), yet are concrete enough for objective assessments. Each reviewer gets about 10 projects to review against all the criteria. Each criterion is rated on a five-point Likert scale, and reviewers are able to explain their choices. I reviewed one hardware project, two exhibition design projects, and several digital projects, and was pleasantly surprised to find that the criteria made sense!
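
To make the mechanics concrete, here is a minimal sketch of how ratings against such a rubric might be tallied. The criteria names, the simple averaging, and the function itself are my own illustrative assumptions, not IxDA's actual rubric or tooling.

```python
from statistics import mean

# Hypothetical criteria; IxDA's actual rubric and tooling differ.
CRITERIA = ["craft", "innovation", "impact", "scalability", "context_fit"]

def score_submission(ratings: dict[str, int]) -> float:
    """Average the 1-5 Likert ratings across all criteria for one submission."""
    for criterion in CRITERIA:
        rating = ratings[criterion]
        if not 1 <= rating <= 5:
            raise ValueError(f"{criterion!r} must be rated 1-5, got {rating}")
    return mean(ratings[c] for c in CRITERIA)

# One reviewer's ratings for one imaginary project:
print(score_submission({
    "craft": 5, "innovation": 4, "impact": 4,
    "scalability": 3, "context_fit": 4,
}))  # 4.0
```

The written explanations matter as much as the numbers: a 3 with a thoughtful rationale tells the jury far more than an unexplained 5.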

In all cases, reviewing requires a lot of self-awareness and additional research. As a rule of thumb, I evaluated each project at least three times, spending at least 30 minutes each time. I found that spacing these sessions out helped me organize my thoughts and poke at my assumptions. I took notes. I looked up the people and companies involved in the projects. If these were client projects, I looked up the clients. If these were created by an agency, I looked at its other projects to establish a baseline for the quality of work that agency produces: had they surpassed themselves?

The motivation for this effort is two-fold and deceptively simple. First, many of these projects had taken months to put together. Second, it takes a lot of courage to submit a project for an award. So my evaluation needed to justify their effort.

Human after all: however hard reviewers try, they’re going to compare submissions.

At my pace, 10 submissions took a little over twenty hours to review, a few projects at a time. No matter how hard I tried to evaluate each submission individually, I found myself comparing them to each other, and to other projects in my orbit.

Our own knowledge and experiences can also have biasing effects. I work at a large technology company known for its design, so I found myself subconsciously comparing process and craft to projects I see on a daily basis. On my socials, I'm exposed to “the best of {interaction / industrial / ui / ux} design”: colorful descriptions of exemplary, world-class projects. But the reality is that many of those projects showcase craft, which doesn't guarantee that those concepts and demo reels will scale.

What would make it easier? While your work can speak for itself, an award entry is a rare opportunity to talk about your process. What worked? What didn't? What other ideas did you consider, and why did you settle on the one that you did?

Authors may not be incentivized to do so, but if your submission explains the constraints you worked within, it becomes much easier for a reviewer to confidently assess your work. For example, a concept application created by two students over thirteen weeks, averaging 3 hours a week, with illustration, interaction design, and animation skills, will be evaluated very differently from the work of an agency in which 12 individuals spend 40 hours a week for eight weeks, armed with a much more diverse skill set. It's no surprise that the bar for evaluation will be much higher for the latter.
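
A quick back-of-the-envelope calculation shows just how lopsided that comparison is. (I'm assuming the 3 hours a week is per student; the numbers are only illustrative.)

```python
# Rough person-hours behind each hypothetical project.
student_hours = 2 * 13 * 3    # 2 students x 13 weeks x ~3 hrs/week each = 78
agency_hours = 12 * 8 * 40    # 12 people  x 8 weeks  x 40 hrs/week     = 3840

print(round(agency_hours / student_hours))  # ~49x the raw effort
```

Roughly a fifty-fold difference in raw effort, before we even account for experience or breadth of skills. Reviewers can only weigh that difference if the submission states it.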

Here’s the punchline: Assume that reviewers don’t and won’t know the full story.

Student work is often classwork, shaped by time and resource constraints. Feedback usually comes from classmates, who are themselves still developing their own craft. Student projects don't enjoy the benefits that larger professional studio or agency teams draw from many years of diverse experience. Studio teams have the opposite problem: some of the most compelling or meaningful parts of a project may need to be held back because the work is for a client, bound by an NDA, or proprietary.

Many of the submissions I read were hyper-focused on a specific facet of the project. It might be because that's what the project owners were most excited about, or because there just wasn't enough time or room to talk about the rest of the project.

While there's no way to walk reviewers through the minutiae of your work, you can paint a more comprehensive picture of it. It boils down to four things:

1. Your approach and constraints

2. The scope of the problem or challenge

3. The context of the problem or challenge

4. The value of the problem or challenge

1. Your Approach and Constraints

What were your constraints? How did these shape your process?

Design is fundamentally about problem-solving, and about creating solutions that generalize and scale well. So it's no surprise when two near-identical solutions have entirely different underlying processes and constraints. It's hard to compare two projects, even ones tackling similar problems, when the behind-the-scenes realities are so different. Entries rarely spell out the bounds or limitations a team was working with, so reviewers are left to guess. As reviewers, it's our duty to evaluate your project while considering your process and constraints. To be clear, tight constraints are no excuse for a poor submission. Rather, clearly articulated constraints can elevate a good solution and justify the simplicity, or the complexity, of your solution.

For example, Adaept, which also won the 2023 UX Design Awards, does a really good job of describing constraints:

As a result, they need assistance from their caregivers every few minutes, which can be incredibly frustrating for them. The underlying issue causing this problem is the mismatched interaction between their hands and physical input devices, as well as the mismatch between their motor skills and the design of computer navigation methods.

2. The Scope of your Problem

Clearly articulate the scope of the problem you're trying to solve.

Depression is a mood disorder that affects people in different ways. A product that aims to reduce depression caused by isolation will be very different from one that attempts to alleviate a major depressive episode for someone with bipolar disorder. In a way, both are trying to solve the same essential problem, and the two products may share many characteristics. But you can see the difference that scope makes in this example.

For example, Dentixbuddy addresses dental anxiety in children with a gamified narrative, while Drawtooth turns a daily habit into a fun experience.

One could argue that reviewers should compare and evaluate the final designs alone. But consider this: if a student project and a professional think-tank come up with equally novel designs, which would you expect to scale? Perhaps the think-tank has produced a brilliant, elegant solution that can scale but solves the problem only incrementally, while the students have produced a brilliant solution that would solve the problem entirely, if only it could scale. As a reviewer, where would you be more forgiving?

3. The Context of Your Problem and Solution

Give reviewers enough context to ground your work.

Design rarely occurs without domain, context, and purpose. Reviewers have to compensate for knowing too much or too little about the domain of the project (will every reviewer understand the nuance required to build a healthcare app? Probably not). That makes it harder to evaluate whether your project is innovative and meaningful, and whether it can scale with reasonable effort. We're trying to assess craft and innovation with just enough context.

I've been on projects where we had to use sub-optimal visualizations simply because they were the industry standard. Teaching a thousand technicians to read a new visualization would cause a lot of disruption and open the door to too many errors. Instead, we created a companion (and significantly better) visualization that helped users improve their understanding of the data, without the disruption. In this case, the reason for spending premium screen real estate on a poor visualization is an important detail for the reviewer.

In your submission, don't use big words just because you can: each word and visual shapes how reviewers evaluate your project. Assume that reviewers know nothing, but will scrutinize everything; we're rewarded for being judgmental and nit-picky. Use keywords with caution, because they should mean something for your project. If you're going to use domain jargon, or choose one visualization technique over another, the relevance must be clear. Importantly, include enough objective information that reviewers can assess your decisions.

4. The Problem-Value statement

Convince reviewers that the problem you’re trying to solve is valuable.

If you're crunched for time and there's one bit of advice I'd encourage you to take, it's this.

There are far too many beautiful user interfaces (UIs) that don't actually solve any problem, and many submissions conflate the two. If you want to win an award for aesthetics and beauty, that's perfectly okay. There's plenty of literature stating that good user experience design is meant to make products more usable and utilitarian; if you want to win an award for usability, then make it clear that usability is where your project proves its worth. But if you're going to create a product that addresses depression, or helps people protect their privacy, you'll need to be clear about that.

Justify the value of the problem you're solving not just in business terms, or in value to the customer, or with a statistic describing the population afflicted by the problem, but by explaining what actually makes it hard to solve. How do different people (not personas) experience it? What gap are you filling, and why have other solutions failed to address this gap?

To conclude

If you didn’t make it this year, don’t worry. You put your best foot forward, and only stand to gain from this experience. Next year, when the awards come around, you’ll be prepared to build a compelling submission that fills in the blanks. I’m championing you — you’ve got this!


Written by Mahima Pushkarna

Design @Google, People + AI Research. Designing 'stuff' for human-AI understanding since 2017. Opinions mine.
