Usability And User Experience In Training

Summary: It is not difficult to find organizations that invest budget, resources, and equipment in digitizing training content, with results far below the expected return on investment. The experts say the content is correct and adequate, the programmers have delivered a robust and secure platform, and the tutors monitor students proactively. Yet something still does not work, and users do not come to the platform to complete their training. What problems might be at play? It is likely that the usability and user experience of the platform and its online courses do not match the needs, expectations, and knowledge of the user.

Corporate Training: Discussing Usability And User Experience

What are the main problems or barriers that the user faces on the platform? Research into usability and user experience can help identify these problems and reveal solutions.

Companies like ScrollUp.es specialize in conducting this research. At LearningLovers.org, we had the opportunity to meet its managers, Lucia Palacios and Maria Benavides, at an event organized by the BBVA Innovation Center. There, they told us about their business venture, the main concepts related to usability and user experience, and the stages to follow. Later, we interviewed them so they could explain how these key ideas apply to the training sector. Let's take a look!

Maria And Lucia

"We have been working together for 8 years. We met at Bankia, where a laboratory was founded and we cooperated. That was our first point of contact, and where we decided to start our adventure as entrepreneurs in the field of user experience consultancy. We define ourselves as designers of experiences. Experience has an emotional component, and that is where we want to focus."

Concept Of Usability

"What is usability? We always go back to the ISO standard, which defines usability as the extent to which a product can be used by a specific audience, for the purpose for which it was designed, with effectiveness, efficiency, and satisfaction:

  • Effectiveness means you can achieve the objective for which the product was designed.
  • Efficiency means that the resources people use to reach that goal are as low as possible (fewer clicks, less time).
  • Satisfaction is the emotional part; it refers to generating a positive experience."

"We like to mention an author, John Maeda, and his work "The Laws of Simplicity", written in 2006. He works at MIT. He identified 10 laws for designing any digital page: Reduce, Organize, Time, Learning, Differences, Context, Emotion, Reliability, Failure, and The One (simplicity). Ultimately, the last law collects all the others: Simplicity means removing everything that is obvious and keeping what actually adds value."

The Importance Of Research: Dimensions And Laws Of Analysis

"We want to emphasize the importance of research in reaching the end customer and meeting their needs. When you work with a client, you have to deliver results and lines of action. You have to make a value proposition so that they can understand how to modify their website, their digital device, an interface, etc. In our analysis, we go from macro to micro. The dimensions of the analysis we perform are:

  • Social dimension.
    Culture, social trends, innovation, how things work in different countries.
  • Physiological dimension.
    Ergonomics, postures, distance between buttons, limitations of human physiology, age...
  • Emotional dimension.
    This is what generates customer loyalty. When you generate a negative interaction, the person will not use your product. If you create an emotional bond with the person, they will get engaged.
  • Cognitive-behavioral dimension.
    It is important to know how the cognitive processes of memory, attention, and learning work. We are faced with increasingly advanced and complex technologies, and we want to transform them into something very simple, so they can be used by as many people as possible. You aim for a short learning curve, so customers can use the product or service, learn quickly, and not spend a long time in trial-and-error behavior. If a person has to do a lot of trial and error, they will probably end up frustrated, the learning curve will have stretched out, and we will have lost a potential customer."

"We also work with different laws. We go to secondary sources, research already done at the social and psychological level, to obtain general 'findings' that allow us to provide better solutions to customers. There are universal laws, established through research, that are already well known:

  • One example is Hick's Law, which says that the time it takes to make a decision increases as the number of alternatives increases. Hick's Law is widely applied to supermarket shelf layouts, but it can also be applied to the number of menus on a web page, so that the client does not have to spend too much time deciding where to go (the fewer the options, the shorter the decision).
  • Another law, Miller's Law, is much applied in the design of vocal interfaces. It says that we are able to correctly recall information in groups of 7 +/- 2 items. Phone voice menus, for example, often work with 5 to 7 options."
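Hick's Law is commonly written as T = a + b · log2(n + 1), where n is the number of choices. As a quick sketch of why halving the options shortens decisions (the constants `a` and `b` below are arbitrary placeholders for illustration, not measured values):

```python
import math

def hicks_law_time(n_choices, a=0.2, b=0.15):
    """Predicted decision time (seconds) under Hick's Law: T = a + b * log2(n + 1).

    a and b depend on the user and the device; the defaults here are
    illustrative placeholders only.
    """
    return a + b * math.log2(n_choices + 1)

# Fewer menu options -> shorter predicted decision time:
assert hicks_law_time(8) < hicks_law_time(16)
```

Note the logarithm: doubling the number of menu entries adds a roughly constant increment to decision time, rather than doubling it, which is why trimming a menu from 30 to 20 options helps far less than trimming it from 10 to 5.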

Biases In Research

"It is important to investigate, because you have to apply the laws in a particular context, with a particular audience. We must investigate because, often, what seems obvious is not. We must apply "thinking outside the box". Many times you find a change in trends, an important cultural shift, or simply a solution that has worked in one sector and environment; when you apply it in another one, it does not work as well as you expected. Research allows changes to be detected at an early stage, which lowers the cost of modifications."

During the investigation, it is possible to make mistakes or to introduce researcher bias. One such bias is the false-consensus effect: "It is the belief that other people tend to think like you, extrapolating your knowledge or your experience to others. Therefore, the final assessment of the analysis cannot be made until we have obtained all the data. We make a list of categories along which we discuss usability, and at the end we propose lines of action for improvement. We must avoid false consensus. We are working with the interpretation of things and the meaning that others give to them. To get to know that, you have to ask, and you have to observe their behavior."

Usability And User Experience

"Sometimes, the data obtained during the investigation leads to the conclusion that it is not enough to redesign a website: a redesign of the full service is required, because the market is not technologically mature and requires different solutions. Usability is a natural evolution of market research. The design of user experience goes beyond that: it includes not only a screen or an interface, but all the channels involved in the process."

"Usability is not a new issue; it comes from ergonomics and human factors. During World War II it was applied to the design of cockpits, displays, control knobs, radars... In-depth studies were done with soldiers to determine which displays were most visible and helped them best differentiate what they were seeing."

"We study behavior and the user's environment anywhere. We focus on digital interfaces (ATMs, watches...), but we have also worked on retail and office design, to make the experience more global and not only transmitted through the web... It is suitable for any environment."

Research Based On The Process Steps

"One of the great advantages of research is that it is transversal to the product cycle: It starts at the moment of conceptualization and continues through the monitoring phase; you can investigate a product even after it has been launched."

"Different techniques are used at different times: Not every technique is useful at every moment of the cycle; some techniques are more appropriate at each point. When choosing the type of research, the time you will spend, the cost for the customer, and the type of information you are going to get all have an influence. For example, we cannot perform a test without a product to work with. You need at least a prototype, which can be on paper or in papier-mâché, to test."

  • In the early stages of conceptualization, focus groups are very good, because you work on communication, innovation... a discourse is generated, from which we get more information.
  • At other points of the cycle, we can combine focus groups with user tests.
  • There are always alternatives at any point in the cycle, and there are always viable solutions, both in cost and in time. The most important thing is to adapt to the customer so as to get the most information in the shortest possible time (or over as long a period as possible, if what they want is to extend an investigation) and give them what they need."

Classical Research Methodology

"In user experience, we are working with classical research methodology:

1. The proposal phase.

This is where we generate the hypotheses to be analyzed, we set the goals and the sample, we define who our target audience is... everything. The target audience does not always have to be the end customer, although customers can often be involved in the process of a product or service. For example, if we are working on innovation, we do not go to the end customer; we select more specialized profiles. In innovation, it is known that presenting a too-innovative product to the end customer may generate cognitive dissonance; you do not get what in research is called the "Wow effect", where they feel the product is really worth it. Usually you work with specialized profiles that help you create a more innovative concept."

2. The data collection phase. 

All the materials and necessary tools are prepared, and the data are collected, either by observing the user or by running the tests.

3. The analysis phase. 

In research, we advocate studying users, but we often combine approaches in what we call an expert view, or 360° view: an expert vision, in which we, as usability and user experience experts, analyze the interaction; the end customer's view, which we obtain by asking them; and a business vision. We run workshops with people in the company to bring the three points of view together, and on top of all that we assess whether the information gathered is the most appropriate to work with.

4. The results phase.

These are the four traditional phases, which have not changed. They are applied as in any other environment."

Research Techniques And User Test

"How do you know what customers will need? Because we observe them, we analyze them, we ask them, we talk to them, we see how they behave, and how they work with certain devices, especially the digital ones (but it applies to any environment)."

"We apply research techniques such as discussion groups, heuristics, usuTG tests (beyond usability test group), card sorting, tree testing, mystery shopping, in-depth interviews, ethnography, customer segmentation, positioning, brand image, intended use, pricing tests, advertising tests, triads and couplets, competitive benchmarks, concept tests, web analytics, and others, because user experience is everywhere: You can run an odor test or a taste test, you can run a focus group; but in usability, the technique par excellence is the user test.

The user test applies when we already have a device we can work with. A user test consists of confronting a user, or many users separately, with a device, whatever it may be. You cannot get the same information from a final product with complete navigation and interaction as from something drawn on a sheet of paper".

Number Of Users Required

"The number of users needed is one of the biggest differences from more conventional research techniques:

  • When we talk about quantitative analysis, we always talk about very large samples.
  • When we talk about qualitative analysis, such as a focus group, with which we analyze trends, we are speaking of smaller samples.
  • In usability, we always talk about error detection; i.e., what we do is detect the errors, difficulties, and barriers that exist on a web page or device".

"The theory is that with five users, 85% of the problems an interface or device has are detected. That is five users per profile. This comes from the laws of probability and was reported by Nielsen. On Jeff Sauro's page, we can find a chart based on the law of probability. It shows that an error occurring at a rate greater than 30% can be detected with five users. If the occurrence of the error is below 30%, it is very unlikely that you will detect it.

If you need to analyze in depth all the errors your page has, you have to increase the sample. It is not always worth it: with 80 users you can detect errors that occur at rates above 10%, but that means 80 users to recruit and pay, and it often does not pay off."
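Nielsen's five-user figure follows from the model P = 1 − (1 − p)^n, where p is the frequency with which a problem occurs and n is the number of test users. A minimal sketch of that arithmetic:

```python
def detection_probability(p, n):
    """Probability that a problem occurring with frequency p is observed
    at least once in a test with n users: 1 - (1 - p)^n."""
    return 1 - (1 - p) ** n

# A problem that hits about 31% of users is found ~84% of the time with 5 users:
print(round(detection_probability(0.31, 5), 2))  # → 0.84

# A rarer problem (10% of users) is much harder to catch with 5 users:
print(round(detection_probability(0.10, 5), 2))  # → 0.41
```

This is why adding users yields diminishing returns for common problems while rare problems stay hidden: each extra participant only multiplies the miss probability by another factor of (1 − p).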

Combined Techniques

"When we see that a web page is very bad, we sometimes combine techniques: We do a heuristic, which is an expert analysis where we check all the patterns, and then we consult users to confirm or refute certain things. In quantitative techniques this is an aberration, but here we are detecting errors, interface problems, difficulties, and barriers, which is basically what we do in usability and user experience.

Discourse analysis, obtained by capturing what users say, is governed by the actual sample size. Trends are being registered, so the margin of error is huge.

When we analyze ease or efficiency, we see the effort it costs the user to perform a task, and we take metrics: clicks and times. At other times, we ask the user how easy or difficult it was to perform that task. If we combine that with the observation of actual mistakes, the two sources of information feed back into each other."

The User Test

User Test Requirements

"In its simplest version, a user test needs a table, two chairs, a computer, and a webcam, so that we can record the session. It is not always done this way; sometimes we use software that records the interaction on the screen. In addition, the customer should see the sessions: The client may be the designer, and we also want them to be there to watch the sessions directly. For that, observation rooms are rented, so that the customer can follow the session through a screen or a one-way mirror.

We have also shared the session with the client over Hangouts or IP, for example. We also share documents and videos with Google, so we do not need expensive infrastructure; there are much cheaper alternatives that also make the research more agile."

Dynamic Of A User Test

"The dynamic of a user test is individual. Tests can also be done remotely, to reach a larger sample, but the user test itself remains individual. It takes place in a laboratory, a room, a meeting room... anywhere. Focus groups are the same: They take place in a room, but with more people.

The steps of a user test are:

1. Welcoming the user. 

We introduce ourselves and explain how the session will unfold.

2. Performing an initial exploration

We use it to check that the sample is well chosen. It also serves to draw insights that go beyond behavior: the motivations, expectations, and perceptions around the object of study. Getting to know the user's habits also helps later, when analyzing.

3. Presenting some tasks to the user. 

The number of tasks depends on their complexity. If the tasks are very difficult, we go up to three; if they are easier, we go up to five or six. Users have to perform each task in a certain way, from a start point to an end point. There are many ways to do this: Some people are very orthodox and want the user to read the task and perform it on their own; you cannot interrupt them at any time, and you cannot talk to them, even if they ask, until they say: "I cannot continue anymore". Other people run the test in a more lax way, where you can intervene at certain times. It all depends on what you want to measure: If you want to take metrics, such as the time it takes to accomplish a task, you cannot intervene at any point. But at times you are working with "low-fi" prototypes that have no navigation, and that allows you to be a little more flexible.

4. Doing a wrap-up.

Then you do a wrap-up in which you raise any doubts that emerged during the interaction.

5. Saying goodbye.

You say goodbye, you give an incentive, and you go home.

There are variants of the user test. It can be done online, remotely. That way, you get a larger sample. It is not as expensive as a face-to-face or telephone session, but you are limited to an internet-savvy audience.

You can also do it "guerrilla style". It is much faster, you do not need specialized rooms, and you work with a smaller sample.

You can also combine user-test techniques. When the project requires it, we run a mini focus group with 4 or 5 people and then, with 5 researchers, we take each user to a different room and run a user test. We bring them back together at the end, and then we piece together everything that happened. Those are long sessions."

User Test Research Tools

"To identify the needs and problems, we use research tools such as:

  • Tobii EyeTracker.
    Eye-tracking techniques, with scan paths and fixation points recorded in the order in which they occurred.
  • Morae.
  • Silverback.
  • Cardzort. 
    A card-sorting tool for analyzing information architecture; cluster analysis reveals the users' mental structure and validates the labeling and architecture of the page: You present the user with an unorganized menu as a set of labels, and you ask them to organize the labels as they see fit. Then each of the clusters is given a name selected by the users. It can be done manually or online, which lets you reach many more people.
  • Treejack.
  • UserZoom.
  • Loop11. 
  • Atlas.ti.
    Performs discourse analysis and semantic analysis. It works great, but it takes a long time and a lot of manual work behind the scenes.

Those are tools that capture the interaction on the screen; some take metrics and others don’t."

Indicators Measured In The User Test

"There are several indicators that can be measured with the user test:

1. The success rate, obtained from the tasks the user faces in the test.

It asks whether the user completes the task according to the objectives that have been set. The response is yes/no.

2. The effort the task demands from the user.

This can also be measured in many ways: Clicks and time (how many clicks and how long it takes to perform the task, with a baseline to compute averages). You can also choose a more subjective way: By asking the user directly, "How much effort do you think this cost you?", and also reviewing it with the client. On many occasions, the answer does not correspond to reality: There are participants who have obtained very bad results, but who say "This cost me nothing". Those are biases that must be eliminated.

3. The learning curve (whether the interaction improves when the action is repeated).

It can be measured across different days, at an evolutionary level, or within the same day, by asking the user to do the same task several times.

4. User satisfaction, measured through satisfaction tests.

The user's speech is also measured: While interacting with a website, the user talks and gestures, which is sometimes more revealing than the task itself. Sometimes there is a clash, because what they are saying has nothing to do with what they are doing."

Metrics associated with techniques other than the user test are obtained depending on the tool used: You can build a dendrogram if card sorting is used, or measure eye movements and fixation points if you are using an eye tracker…
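The basic task metrics described above (success rate, average clicks and time against a baseline) can be aggregated very simply. A minimal sketch with hypothetical participant data; the numbers and the baseline below are invented for illustration:

```python
# Hypothetical per-participant results for one task: (completed, seconds, clicks)
results = [
    (True, 48.0, 9),
    (True, 61.5, 12),
    (False, 120.0, 25),
    (True, 43.2, 8),
    (True, 75.0, 14),
]

# Success rate: yes/no completion against the task objectives.
success_rate = sum(ok for ok, _, _ in results) / len(results)

# Effort metrics: average time and clicks across participants.
avg_time = sum(t for _, t, _ in results) / len(results)
avg_clicks = sum(c for _, _, c in results) / len(results)

# Baseline = clicks on the optimal path; efficiency near 1.0 means low effort.
baseline_clicks = 7
efficiency = baseline_clicks / avg_clicks

print(f"success {success_rate:.0%}, avg time {avg_time:.1f}s, "
      f"avg clicks {avg_clicks:.1f}, efficiency {efficiency:.2f}")
```

Comparing the objective figures with the user's self-reported effort ("this cost me nothing") is what surfaces the speech-versus-behavior dissonance the interviewees describe.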

Results Obtained With The User Test

"We work in two ways:

1. First we show an impact scale for the different cross-cutting usability elements.

(How the page fares in terms of graphics and tables, help pages, language, typography, layout structure, iconography, communication, etc.). This is discussed on an impact scale (with faces and colors, to make it very visual and easily understood: the user has to choose, for example, between a smiling face and a serious or unhappy face).

2. Then, if there are flows, we analyze the full flow.

The effectiveness, efficiency, difficulties, barriers, and facilitators found on the page. If there are further pages to be analyzed in detail (the home page, for example) or any additional element, this is also discussed.

In addition, we must offer the customer a solution with lines of action and recommendations. Therefore, we make actionable proposals. First, we describe what the problem has been and what we, as experts, would do: recommendations and lines of action. When we go down into detail, we present wireframes, which are conceptual proposals with no visual design layer on top, but which explain how the client should arrange the different elements on the page so that it really works and the requested changes take effect.

Typical Mistakes To Be Avoided In Research

1. In the proposal, the objectives are not well established. The client thinks they know what they want, but there is no shared understanding. You have to really see what the client needs and what stage the project is at, because this also varies with the degree of product development.

2. In preparation, in the lab, the technology has not been checked and sessions are not recorded. Then you cannot access the metrics, and this is quite important.

3. The discussion guide always has to go through the client. The client sometimes wants to get involved and to add certain questions, which have to be reflected in the discussion guide.

4. In data collection, many biases occur during moderation, and those are the most important ones, such as:

  • Not being patient.
  • Putting words in the user's mouth. The researcher has to stay quiet, yet help the user when they do not know how to continue; otherwise the user gets stuck and a different bias is generated.
  • Another bias occurs when we make value judgments or give the user clues to find the button they are looking for.
  • Social desirability is another bias: Many people tend to give high scores and say they like the proposal, when in fact they have obtained terrible results. This bias is removed through the discourse, by asking the user why they gave that score. Then, a differential analysis between speech and behavior is made. Behavior is less direct, but it is very powerful and gives a lot of information. This also happens with people unaccustomed to the internet (for instance, homemakers over 50 with low tech skills), who usually have a very positive perception of the experience and score it higher, even though they are not actually completing the tasks. They do so because they believe that, although it was difficult for them, it must be easier for other users.
  • When collecting data, inter-rater differences also appear."

"We always try to be very orthodox: Tasks are read aloud, the participant reads along, and the moderator does not perform any action on the page to help users...

After the first session, a checkpoint is done to catch defects and avoid dragging them through all the participants, which would introduce bias. Sessions are usually never run in parallel with the first one, because you have to see whether the guide works and the instructions are correct. The researcher who will run parallel sessions watches the first session in order to do things the same way afterwards.

  • In the results analysis, the biggest mistake comes from lack of time: Sessions are not reviewed properly and notes are not read, though they are extremely important. We then tend to report the results of the most striking users, not what really happened with the average users.
  • To be as faithful as possible, we sometimes go to the user's own environment (we do ethnography). For example, to see how they use ATMs, we go with them and accompany them during the process."

Selecting Users For Research

"The customer is the one who knows the user's profile best. Sometimes the customer has segmented users because they have created 'personas'; sometimes they have not and you have to help them; and sometimes you have to prioritize, because some customers have so many user segments that it is overwhelming. You must go to those that matter most for the business or that bring in the most money.

Users are identified through a recruitment agency, working with databases and panels. Sometimes, the agency has to make cold calls because some profiles are very complicated. They receive a filter, a screener questionnaire, and they have to meet certain guidelines.

Sometimes, we also work with user databases from the client, while complying with the data protection law. The recruiter directly calls the users in the database the client has sent. Rejection is usually higher, because these people are not used to taking part in user tests, so you have to offer them more money. Going through the recruitment agency makes that contact more expensive."

User Motivation

"We always do a first part and a final part of the research in which we talk about perceptions, expectations, motivations... these are the hardest things to analyze. They are discussed much better in a group than in one-on-one interviews. When you want people to talk about their motivations, it works much better to bring together four or five people with a given profile, but mutually compatible, so that the discourse is much richer and you can get more information."

Evolutionary Analysis

In evolutionary analysis, you always have to make sure that you are measuring the same thing. For example, measuring an initial prototype is not the same as measuring the whole website with all the options available. You have to work with a product that, although it has evolved, keeps equivalent conditions; from there, the operational metrics you want to work with are established. Improvements are incorporated after the first analysis, and the analysis is repeated every three months to see whether those improvements have made a difference.

Usability And User Experience At Training

"The issue of eLearning is usually quite complicated, because many things get mixed together: whether the interface is easy to use, cognitive-behavioral learning issues, how you measure that learning, etc.

Although this is beginning to change, research in Spain is considered secondary. This means that when budget has to be cut from somewhere, it is always cut from research. Therefore, customers call you only when they are really desperate.

Most learning platforms are mere multi-purpose platforms where you exchange files. eLearning service platforms, where you can self-assess, get tested, and follow an innovative, gamified learning system, are very rare.

In many organizations, users must take compulsory courses that are of no interest to anyone. User motivation is almost always forgotten, and the psychological processes of learning are not taken into account (teaching languages is not the same as teaching any other conceptual subject):

1. When taking courses, users limit themselves to clicking through the screens.

2. Multiple-choice questionnaires are answered almost by intuition.

3. The information is not internalized.

The system does not adapt to let you internalize that information whether you are in a hurry or want to go slowly... That is where there is room for very significant improvement: The user must be able to decide how long it will take them.

4. The internalization process can be enhanced with images, concepts...

...But it must happen in some way: Either by receiving the information, seeing it, shaping it, or writing it. You have to allow the internalization process, so that the information is actually learned.

5. In the evaluation, we must allow the self-motivated user to evaluate themselves.

But there are users driven by extrinsic motivation, and for them it is important to be compared with others.

6. It is important to adapt to the time the user has to internalize the information.

Therefore, the content must not only sit on a nice, attractive, well-structured screen, but also be adapted to all devices, with a liquid design. There are different ways of displaying the content, such as cards, for easy navigation. The aim is that the platform does not end up being a simple document container, which is ultimately what happens in some universities, where the teacher already has content and has simply digitized it as a PDF. Such documents are still needed to go deeper into a subject. For example, at the UOC they have MOOCs with videos, which is much friendlier, more attractive, and more engaging.

The challenge is to generate video content with small bits of information, structured so that the key concepts to internalize are highlighted.

Where there is a much more developed path, and more visible progress, is in language teaching; perhaps because it requires a teaching method different from that of other subjects, and because it generates more revenue. Barriers are overcome because you can decide how long to study, and you are allowed to choose which challenges you want to take on.

In education, Anxo Perez has an application that is working quite well. Also, at Education First we have done work on online and mobile training for learning English. There you realize how all the different elements have to fit together to improve the interface. Here are the key points:

  • You assign learning tasks to users, which can be longitudinal, and then you can compare the results with them directly to capture all that information.
  • For motivators, for example, we are now working a lot with gamification. You can see whether they like the concepts of competition, scoring...
  • You work with different concepts that users watch on a screen, and you contrast their interaction with their response to those expectations. You see whether they perceive that motivation online or prefer it in person.
  • The user test stands as the quintessential technique at the usability level, because you work with the cognitive-behavioral part. If a person says they really like a platform, but they are doing everything wrong, there is a dissonance. Transferred to real life, these are things that are not resolved on the first try. They are different, and there are barriers that prevent a person from taking the next step (subscribing to a course, for example)."

What Steps Are Followed In A Research Project For Platforms, Applications, And Online Training Courses?

Although many of the processes match any research project on usability and user experience, there are some particularities derived from the nature of training applications, the market of training platforms and devices, and the peculiarities of the user's learning process. These must be added to what has already been described as user experience (UX) research in any other area where usability applies:

1. "First you have to have an initial contact with the client and see what phase of the project they are in: whether they have a platform, whether they are going to buy one, whether they will develop their own... If it is an innovation or idea-generation process, we look at the profiles and at the subject or content we are going to work with (language learning or marketing, for example).

2. We look for the right professionals to work with.

3. A discourse analysis is done to learn their needs, what difficulties they have encountered on this type of platform, why, how they would improve them... The strengths and weaknesses, especially to understand the mental map of these people in different groups and to work with these ideas.

4. If looking for a platform, a small market study is done to find out what platforms exist, how they work, which one is worse, which one is better, etc. You also use feedback received from users to be able to compare when they are talking about one thing or another, and to understand a little better what they mean.

5. Then we work at the conceptual level. Often, the platform already exists; the problems are identified and confirmed with the client. In this case, we observe the user in an environment as real as possible to see how they work with the tool. There are many techniques, such as benchmarking. The choice will depend a little on the project status, on the problems declared by the customer, on other problems that will arise later on...

6. All this information is collected and processed at the prototype level, where all these needs are gathered. You can reuse part of what has been done, focusing on running on a particular device (mobile) and on the research objectives. Since full development is usually very costly, we work with prototypes: self-contained screens that show the examples with a simulated navigation that allows you to move between screens.

7. We compare all this information in a single session where we propose tasks to the users (registering in the application, etc., depending on the objectives). They are given four to six tasks, more or less, depending on how long they are. Every part of the process is analyzed to see if it is easy, if it is intuitive, if it takes too long... If the user abandons the task, we have to identify the reason. We also take into account the time it takes the user to perform the task.

Normally, when people are reflecting on what they are watching, it takes longer to perform the task. We always allow a longer time than normal for the user to complete the task. Here you learn a lot about the user and the strategies they implement to get around a problem that prevents them from continuing with the task. Things like whether users want or don't want to share their results with the community are also analyzed. Here, motivations, deterrents, and barriers are identified.

We normally work in laboratories that are prepared to be a pseudonatural, or at least neutral, environment, with one-way mirrors so that the client can observe. With this type of platform, it is also very interesting to see how users work from home, in their natural environment. We meet the user during the time they usually use the platform, recording it or not; notes are taken, observing how the user behaves in a normal session. This provides a lot of data.

8. The final result of the investigation is a diagnosis. Depending on whether it is a discussion group (where the results are much more relational and discursive) or a report of a user test, at the end you deliver a diagnosis that includes information on what happened: what deterrents have been identified, what barriers, what facilitators... Effectiveness (success in the task) and efficiency (the effort involved in performing that task) are measured. These can be measured in many ways, such as clicks and time (how many clicks were necessary to perform the assigned task). A journey is mapped and we obtain a baseline (the task is done in so many clicks). There is also another way of looking at efficiency: perceived efficiency, measured with a Likert-scale test after each task.
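The effectiveness and efficiency measures described above can be sketched in code. This is a minimal illustration, not the consultancy's actual tooling: the field names, sample observations, and baseline value are all assumptions made for the example.

```python
# Illustrative sketch: computing task effectiveness (success rate),
# efficiency (baseline clicks vs. actual clicks), and perceived
# efficiency (Likert average) from user-test observations.
# All data and field names here are invented for illustration.

from statistics import mean

# One record per user per task: completion, clicks taken, 1-5 Likert rating.
observations = [
    {"completed": True,  "clicks": 7,  "likert": 4},
    {"completed": True,  "clicks": 12, "likert": 3},
    {"completed": False, "clicks": 20, "likert": 2},
    {"completed": True,  "clicks": 8,  "likert": 5},
]

BASELINE_CLICKS = 6  # expert journey: minimum clicks needed for the task

def task_metrics(obs, baseline):
    completed = [o for o in obs if o["completed"]]
    effectiveness = len(completed) / len(obs)          # task success rate
    avg_clicks = mean(o["clicks"] for o in completed)  # effort when successful
    efficiency = baseline / avg_clicks                 # 1.0 = matches baseline
    perceived = mean(o["likert"] for o in obs)         # perceived efficiency
    return {"effectiveness": effectiveness,
            "efficiency": round(efficiency, 2),
            "perceived_efficiency": round(perceived, 2)}

print(task_metrics(observations, BASELINE_CLICKS))
```

In this sketch an efficiency below 1.0 means users needed more clicks than the expert baseline journey, which is one concrete way to express the "effort involved" the interviewees mention.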

9. All this is analyzed and a PPT report is produced. It includes what happened: not only information about the analyzed tasks, but also conclusions on cross-cutting patterns or generic navigation, interaction, buttons...

10. Furthermore, the solution is included. This solution can be given in two ways: You can explain the identified problems to the customer, or they can be represented in a low-fidelity prototype. All the problems that have come up and been identified are solved on a prototype. For the customer, this is easier to see. We tell the client that by relocating the contents, text, or buttons in a certain way, for example using certain forms, these problems, barriers, or difficulties get solved. More elaborate work is delivered. Buttons are relocated so that they are visible without scrolling, etc.

Everything depends on the scope of the project, which is included in the initial proposal. How far to go in the interaction recommendations and what will be reflected in the prototype depends on what the customer wants. There are customers who already have an interaction team and only need the diagnosis, a picture of what happens, to know where the problems are. Recommendations are then given at a high level, and their internal team does the work.

11. Sometimes the customer wants to go one step further and have these written recommendations directly applied on the platform. Translating those ideas into prototypes is more expensive, and you have to spend more time than with the written report.

The client does not always apply 100% of what is recommended in the report. There are things that, if not applied, nothing happens because they are minor; but there are things that, if not applied, hit the user experience squarely.

You must also distinguish the technological part from the user experience part. User ratings can be very negative due to a technical failure of the page, or because very few features have been included on the platform, and not due to difficulties in navigating, finding something, or performing a particular task. We also take into account the expectations and needs of users, because sometimes the tool is correct but goes unused, because users don't need it.

Sometimes mistakes are clear and easy to detect and solve, and sometimes they are due to something less defined, more ethereal, repeated on every screen, that lowers the user's evaluation of the experience.

12. Satisfaction questionnaires allow us to measure the degree of improvement obtained with a change introduced when it does not have a direct impact on the economic performance of the site. It is a qualitative process, because you are not measuring behavior but verbal expression; still, it serves to measure an improvement in a particular service."
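A before/after comparison of questionnaire scores, as described in step 12, can be sketched as follows. The sample Likert responses are invented for illustration; a real study would also need enough respondents for the difference to be meaningful.

```python
# Hypothetical sketch: measuring the improvement a design change produced,
# using 1-5 Likert satisfaction responses collected before and after it.
# The response data below is invented for illustration.

from statistics import mean

before = [3, 2, 4, 3, 3, 2]   # Likert responses before the change
after  = [4, 4, 5, 3, 4, 4]   # Likert responses after the change

improvement = mean(after) - mean(before)
print(f"Mean satisfaction: {mean(before):.2f} -> {mean(after):.2f} "
      f"(improvement {improvement:+.2f})")
```

Because this compares stated satisfaction rather than observed behavior, it measures the "verbal expression" side the interviewees mention, not task performance itself.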

Customer Team Engagement

"We engage with the internal teams of the company so that the results of the reports get implemented. We work with design teams; we review what they do to ensure that usability patterns are followed. Many people coming from design usually have experience in usability, although it is true that our report also contributes customer knowledge. We also work with the development team, with a programmer.

Depending on the client, we may or may not involve their teams in the preliminary stages of the investigation. It is preferable that they are there from the beginning, but this is the client's decision. Sometimes you run two sessions and the company's people come to only one; this skews their perception considerably, and they leave with a preconceived idea. They are invited to come to all sessions, and we try to persuade them not to arrive with preconceived ideas, because you have to cross-check the information across all users, and the report's conclusions do not always coincide with the part witnessed by the client. There is a process of sharing findings among all the experts; results may differ from one expert to another, and then you have to question why this happens and draw the definitive conclusions. When the client is involved in the session, they see the value it has and this is very rewarding; but you also have to manage their expectations, so they do not leave with preconceived ideas."

Duration And Research Costs

"Research time depends on the customer; applying three research techniques is not the same as applying only one. We talked about two formats: rapid or guerrilla tests, which can be done in two weeks, and classical research, which would take four weeks.

The budget varies according to things like the number of rooms required for the investigation, or whether there is an existing list of available users with whom to conduct the research... It can be done from a base of 7,000 euros."

Added Value

"Our added value is that we always participate directly in the projects. If a service is subcontracted, it is only for one point of the process and always under our supervision. Moreover, as we do not maintain a heavy infrastructure, we are competitive in pricing."