
iCIMS Research

 
 

Information Architecture

Overview

After the initial release of “new iCIMS” (the name given to the UI/UX overhaul of the iCIMS platform), a lot of the feedback we received hinted at an information architecture (IA) issue.

Role

I was the sole product designer for this project, in charge of research, synthesis, and recommendations.

Disclosure

The design work on this feature had started over a year before I was put on this project. Ideally, this IA work should have been done upfront. However, after starting on the project and seeing the feedback, it was clear that we needed to pivot.


Problem Identification

We received 269 responses to a survey after our initial release of “new iCIMS.” At a glance, there was a lot of unactionable feedback, things like “STOP MAKING CHANGES” or comparisons of the UI to “90s Nintendo” (I’m still unsure if this was meant as criticism or praise – both the SNES and N64 were arguably Nintendo’s most influential systems).

After reading through all the feedback (and applying some burn cream), I noticed a trend – a decent portion of the feedback related to navigation and the findability of information:

“You can not find anything in the new icims feature.”

“Many of the functions made it more difficult to find other functions from the earlier release, including simple things like the contact information screen for a candidate profile.”

“I find the release much more confusing.”

“The navigation is trickier with the new update.”

“We are still using the old iCIMS because we are not able to find what we need in the new one.”

“‘New iCIMS’ is very difficult to navigate.”

“Even what you probably think is a little thing, moving the search in the “person” from the left to the right side - driving me crazy STILL!”

“Where are the buckets? Where are the bins?”

There are generally two reasons people can’t find what they’re looking for:

  1. It doesn’t exist, or

  2. It’s not where they’re looking

So I set off down the “how do we…” path by setting up some card-sorting sessions.

Topic Creation

I worked with my Product Owner, Technical Product Manager, and two Customer Service Managers (CSMs) to come up with topics representing the current features and key information, as well as some planned features, so we could see where users might expect those to appear. We ended up with 44 topics in total.

With the topics created, the next step was setting up time with 15-20 users to do the sorting activity.

Setting up Sessions

At the time, iCIMS didn’t have an active program in place to access users, so I took the initiative to contact some of our CSMs and Account Managers to see if they could help me get access to some of their customers. They were more than happy to do so!

The only hitch was that our CSM/AM customer contacts were all at the manager/director level, and I needed to talk to the people using the software every day (recruiters and hiring managers). This meant setting up short calls with the directors to explain what I was looking to do and ask whether some of their recruiters or hiring managers could participate. Everyone I spoke with was happy to give me access to their recruiters, and the calls themselves created an opportunity to gather additional feedback from the people closer to the purchasing decision and to include them in the process of addressing their concerns.

In the meantime, I set up a meeting with our in-house recruitment manager to see if I could get time with our internal users while I was in the process of getting access to our external users.

Running Sessions

Over the next few weeks, I ran 15 one-hour card sorting sessions with participants from companies ranging from 900 to 100,000 employees.

There were two hurdles to clear while running the sessions:

  1. We were in the middle of a pandemic, so in-person sessions were not possible, and

  2. I didn’t have a license for card-sorting software, such as OptimalSort, so I had to get creative with how to run a low-effort, easy-to-use activity remotely.

I ended up choosing InVision Freehand, since participants only needed a link to access the board (no account creation required). Here’s a screenshot of what participants would see when joining the board:

[Screenshot: the InVision Freehand board as participants saw it on joining]

I moderated each session but only asked participants to talk through their thought process as they sorted – I didn’t question why they put things in certain places, so as not to bias their responses. Once they finished sorting, I would ask two main questions: “Were there any cards that were difficult to sort?” and “Were there any cards you thought belonged in multiple categories?” These questions helped clarify whether certain information might be useful in multiple locations (more on this later).

If participants finished early, I’d take the time to ask a few additional questions, mainly “What’s a feature you use often that could be improved?” and “Is there a feature you’d like to see implemented in a future update?” While not specifically related to card sorting, these did yield some interesting suggestions to bring to the team.

 
[Screenshot: nine completed card sorts]

Sorted

Here’s a shot of nine of the completed card sorts. Everyone had a slightly different way of organizing their boards.

Synthesis

Once all the sorting sessions were complete, it was time to go through the data. The first step was normalizing the categories participants had created. In total, participants created 88 categories, many of them extremely similar – for example, “interview,” “interview process,” “interview stages,” and “interviewing” – which I grouped into a single “interview” category. After normalizing, I was left with 20 unique categories, including a few outliers that were used only once and didn’t map to any of the others.
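To give a sense of the mechanics, here’s a minimal sketch of how that kind of normalization can be scripted. The synonym map and labels below are illustrative stand-ins, not the actual study data:

```python
# Minimal sketch: collapse near-duplicate category labels into canonical ones.
# The synonym map and labels here are illustrative, not the actual study data.
from collections import Counter

NORMALIZE = {
    "interview": "interview",
    "interview process": "interview",
    "interview stages": "interview",
    "interviewing": "interview",
    # ...one entry per raw label observed across the boards
}

def normalize(raw_label: str) -> str:
    """Map a raw label to its canonical category; leave unmapped outliers as-is."""
    key = raw_label.strip().lower()
    return NORMALIZE.get(key, key)

raw_labels = ["Interviewing", "Interview Stages", "Offer Letters"]
print(Counter(normalize(label) for label in raw_labels))
# Counter({'interview': 2, 'offer letters': 1})
```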

The next step was to organize the data so I could analyze trends for both the topics and the categories they were put into. As mentioned earlier, not having access to card-sorting software meant I had to do this work manually, which took some extra time – but on the plus side, I feel like I learned a lot. I documented all my findings in Confluence.

[Screenshot: card-sort findings documented in Confluence]

Card (Topic) Data

Tracking how many categories each card was sorted into, and how frequently, highlighted areas of high agreement as well as areas with no clear consensus (perhaps indicating that a card belonged in multiple categories).

[Screenshot: per-card (topic) sorting data]

Category Data

Seeing how many cards each category contained helped determine which categories were used most and represented the broadest descriptions of what they might contain. Categories with very low usage typically indicated unique naming that couldn’t be normalized.
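Since I didn’t have card-sorting software to generate these views, the tabulation was done by hand; the sketch below shows the same idea in code, assuming made-up session data (the card and category names are illustrative only). A single count matrix yields both the per-card view (how scattered a card was) and the per-category view (how many placements each category received):

```python
# Sketch of the tabulation: count how often each card landed in each
# normalized category, then derive both views from the same counts.
# The session data below is made up for illustration.
from collections import defaultdict

# Each session maps a normalized category -> the cards placed in it
sessions = [
    {"interview": ["Feedback", "Scorecards"], "candidate info": ["Notes"]},
    {"interview": ["Scorecards"], "candidate info": ["Notes", "Feedback"]},
]

counts = defaultdict(lambda: defaultdict(int))  # card -> category -> count
for session in sessions:
    for category, cards in session.items():
        for card in cards:
            counts[card][category] += 1

# Card view: how many distinct categories a card landed in (fewer = consensus)
for card, cats in counts.items():
    top = max(cats, key=cats.get)
    print(f"{card}: {len(cats)} categories, most often '{top}' "
          f"({cats[top]}/{sum(cats.values())})")

# Category view: how many placements each category received overall
totals = defaultdict(int)
for cats in counts.values():
    for category, n in cats.items():
        totals[category] += n
print(dict(totals))
```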

Highest Level of Agreement

After synthesizing all the data, I was able to create a view of where the majority of participants wanted things categorized. The items inside the yellow rectangle were frequently grouped as “candidate info” and appear together in our Profile Card component. The items inside the blue rectangles were frequently cited as things that would be helpful to access from multiple locations (or perhaps anywhere).

[Diagram: highest-agreement groupings across participants]

Observations

A few topics came up consistently as needing to be in multiple categories or split between two: “Notes,” “Feedback,” “Candidate details,” and “Job details.” It was clear that these data points needed to be always visible, or at least accessible from multiple locations – which lined up with the feedback we had received on the Notes feature.

Suggestions

Using the last few minutes of the hour-long sessions yielded some really interesting suggestions for feature enhancements. One of the more interesting was a “timed undo” for status changes, in case a candidate was either advanced or rejected by mistake. These were all brought to the product team for future consideration.
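As a rough illustration of the pattern (not anything iCIMS implements – the class and callback names here are hypothetical), a timed undo can be sketched as a change that is scheduled with a short grace period and only committed if the user doesn’t cancel it:

```python
# Hedged sketch of a "timed undo": hold a status change for a grace period
# and commit it only if the user doesn't cancel. Names are hypothetical.
import threading

class TimedUndo:
    def __init__(self, commit, grace_seconds: float = 5.0):
        self._commit = commit      # called only if the change isn't undone
        self._grace = grace_seconds
        self._timer = None

    def apply(self, *args):
        """Schedule the change; the UI can show an 'Undo' toast meanwhile."""
        self._timer = threading.Timer(self._grace, self._commit, args)
        self._timer.start()

    def undo(self) -> bool:
        """Cancel the pending change; returns False if it already committed."""
        if self._timer and self._timer.is_alive():
            self._timer.cancel()
            return True
        return False

def reject_candidate(candidate_id):  # hypothetical backend call
    print(f"Candidate {candidate_id} rejected")

pending = TimedUndo(reject_candidate, grace_seconds=5.0)
pending.apply("c-123")
pending.undo()  # clicked within the grace period, so nothing is committed
```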

Next Steps

Having gotten a pretty clear idea of where the majority of participants expected to find things, the next step is a closed card sort to validate the categories that were created. After that, I want to move on to concept testing, getting feedback from users on the reorganized structure to see how well they can navigate it.