Tips to make your UX Design benchmark more efficient
Benchmarking is a secondary research method that investigates the methods, designs, and processes already practiced by a brand, company, or market niche. Its goal is to identify best practices and weak points in a particular research object, such as a user journey or a product's usability, or simply to understand how that market is behaving.
A hidden problem is that it looks like an easy methodology to apply, because it resembles the searching we routinely do online. That impression is only partially correct: it is possible to conduct superficial research that has the outward characteristics of a benchmark. When done without proper method and depth, however, it may simply not provide an adequate answer or, even worse, leave us with wrong impressions about the object being analyzed.
For example, a researcher might conclude from their findings that there have been no significant movements in the Brazilian tomato sauce market, and fail to discover that rising input prices may have stimulated the entry of foreign brands into the national market, or that sanitary issues may have led different players to change the composition of their final products (I emphasize that this example is fictitious). Such an error can cause failures not only in the benchmark itself but across the entire research chain, since it can bias other investigation steps, such as interviews and surveys, or the client's product planning.
So how can we conduct a qualified benchmark and avoid the collection and analysis errors that harm the outcome, and consequently everyone who consumes the final report? Here are some tips from my personal experience:
- Before starting the research, align with all parties involved: the client, to understand the real need to be addressed, and the design or product team, which needs data to create a new product or redesign an existing one. A communication error can spoil days of work and delay or even invalidate the rest of the project, since this methodology is often one of the first to be applied.
- After alignment, carefully define the research object: reflect on it, write down the central question to be answered, and conduct desk research on the topic, since we often have a faulty or mistaken understanding of subjects that seem very familiar to us. For example, when researching bill payments through banking apps, many people are unaware of payment slips such as DARF or IPVA (Brazilian tax documents), which often have very peculiar journeys and which many financial apps do not support. That is why it is so important to gather information about the product beforehand.
- Look for templates for your research on the whiteboard tool you will use. Miro and FigJam offer many options, and a quick Google search can surface even more. This way you can find the option best suited to your study, which saves time, and pick up good ideas to apply as a bonus. I built my own template over time, with a lot of alignment with the research teams I worked on, and I leave it as a suggestion later in this article.
- After data collection (screen captures of app user journeys, for example), try to systematize the pre-analysis by dividing it into themes so that all players are analyzed in the same way. This can be done in a table where each row represents a player and each column a theme to be analyzed (design, communication, or pain points, for example). Each cell can hold separate sticky notes so that every relevant observation is presented individually, as I will show below. Analysis and insight gathering can be done directly in the table, in a final column that holds the best findings and the ideas derived from them.
- After this process, the final data can be organized into a presentation that displays the most relevant cases along with their observations and insights. I usually structure this file with the following topics: contextualization, desk research (if any), a list of all the players analyzed during the research, screens and observations of the best practices found, research insights and points of attention, and finally comparative tables of the analyzed players' characteristics, such as communication style or design model.
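To make the pre-analysis table described above more concrete, here is a minimal sketch of how it can be laid out; the players and the contents of the cells are invented purely for illustration, reusing the banking-app example from earlier:

| Player | Design | Communication | Pain points | Insights |
| --- | --- | --- | --- | --- |
| Bank app A | Payment flow completed in three screens | Plain language in confirmation messages | No support for DARF payment slips | Shortest journey observed; a reference for the redesign |
| Bank app B | Payment option hidden inside a dense "Services" menu | Technical jargon in error messages | IPVA flow requires typing the bar code manually | Error copy is a clear point of attention to avoid |

On the whiteboard itself, each of these cells would be one or more sticky notes, and the Insights column concentrates the findings that feed the final presentation.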
Template model for the whiteboard
Each research project or product has its own peculiarities in how the investigation is formatted or the collected data is organized, but for user journey research in apps the example below has worked well for me: it keeps me from getting lost or changing the analysis method mid-process and keeps the data well organized.
The first point is to keep the template for the collected screens next to the table of observations and insights. Besides being well presented, this makes it considerably easier for anyone else who accesses the data to understand the study's dynamics and how the results were collected.
The screen collection template can be built with Sections (frames), which make it easy to export individual players or the entire collection as PDFs. Sections also have dynamic titles where the player's name can be inserted and remains visible regardless of how far the zoom is in or out.
Building the table with sticky notes in a different color for each column helps a lot in following the train of thought and segmenting the data. Of course, it is not necessary to fill every available space, as in the illustration below, but it is important not to run short of room, so that neither the analysis nor the annotations are compromised; notes can be kept simple, but should be as extensive as necessary.
Conclusion
Benchmarking is a valuable tool that can add a lot to research, bring important improvements, and even reduce implementation costs through good ideas. It can also help guide other stages of the research, making them more effective.
The idea of this article was to provide models and reflections so that this methodology can be applied as efficiently as possible. Therefore, this text will always be unfinished, as any improvement, criticism, or suggestion from its readers may lead to corrections or updates.