How This Software Engineer Transformed User Experience by Challenging 'Expected Failures'
INTERVIEWER
For this one I'd like to focus on ongoing improvements, right? As an engineering leader, you're often tasked with keeping track of what's going on with your product, your services, whatever. So what I'd like you to walk me through is the most significant continuous improvement project that you have led or taken on as an individual effort. What was the project, and why was it so important?
CANDIDATE
Yes, so I will talk about, again, the file picker experience. With our file picker, of course, we set up many metrics in order to measure reliability and engagement. With reliability, it's not always going to be at 99.99%, right? So I constantly look at our metrics dashboard, and sometimes, let's say, it's at 98% for a particular service. Then I will look into the logs in order to understand, OK, where are these 400 invalid requests coming from, all the 400
INTERVIEWER
invalid requests.
CANDIDATE
Yes. So, are they all coming from the same user scenario, or are they coming from different scenarios? And also, when I am marking the 400 invalid requests as expected failures, are they truly expected failures? That has to be understood based on the user scenario. When I look at the logs, I realize that sometimes it may not be a real expected failure. For example, maybe the token timed out, and then when the user sends a request again, it becomes an invalid request because the token was timed out. But when the token times out and the user keeps hitting these invalid requests, it's actually a bad user experience. So do we want to classify it as an expected failure? From the server-side standpoint, maybe yes. But from the client-side standpoint, maybe not 100%, because I also want to see whether this timeout is within a reasonable time frame, right? Is it that the user is watching a video from their OneDrive, but after they watch this video and go back to their OneDrive, they cannot browse again and have to refresh the page in order to make a new request? So, with this metrics dashboard setup, I try to improve the metrics quality and also improve the numbers behind the metrics. I'm not just marking a failure as an expected failure in order to make the number higher so it presents better; I actually try to care about how the user experiences it.
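As a concrete illustration of the triage the candidate describes, here is a minimal Python sketch that groups 400 responses by user scenario and flags token-timeout failures falling outside a reasonable window as unexpected. The log schema, field names, and five-minute threshold are assumptions for illustration, not details from the interview.

```python
# Hypothetical triage of 400 "invalid request" log entries, sketching the
# classification described above. The schema and threshold are assumed.
from collections import Counter
from datetime import timedelta

# Beyond this window after token expiry, repeated 400s suggest the client
# should have recovered (e.g. refreshed), i.e. a bad user experience.
REASONABLE_WINDOW = timedelta(minutes=5)  # assumed threshold

def triage_invalid_requests(entries):
    """entries: dicts with 'scenario', 'request_time', 'token_expired_at'."""
    # Are the 400s all coming from the same user scenario?
    by_scenario = Counter(e["scenario"] for e in entries)
    # A 400 shortly after token expiry is arguably an expected failure;
    # the same 400 repeating long after expiry is not.
    unexpected = [
        e for e in entries
        if e["request_time"] - e["token_expired_at"] > REASONABLE_WINDOW
    ]
    return by_scenario, unexpected
```

The design point mirrors the candidate's argument: the same 400 can be an expected failure from the server's standpoint and a bug from the client's, depending on when and how often it recurs.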
INTERVIEWER
So can you give me more details? You're speaking at a very high level, right? You're looking at metrics, OK, but that's a very broad term, right? So can you give me a bit more detail?
CANDIDATE
Uh, let me see. So one example is, you are able to look at a file as an anonymous user, but you have to click a particular link. And when you go to someone's drive as an anonymous user with that particular shareable link, you are able to navigate through the navigation bar. However, those items belong to someone's drive. So when you try to navigate into them, it actually returns a 500, and I figured this out by looking at the logs, and later on
INTERVIEWER
When that happened, they would see an HTTP 500 error.
CANDIDATE
Yes, because they basically just don't have permission to look into someone's drive. But that was how the original architecture looked: we send you a shareable link, and this link is actually to someone's drive, but it contains the files that you have permission to read. After I realized this issue, I actually changed the 500 failure page into a redirect to the login page, so it's less confusing for the anonymous user. OK, and this is... uh-huh, go ahead. And this is how I try to continuously improve the product experience by monitoring the metrics and engagement events.
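A minimal sketch of the kind of fix described here, assuming a small Flask handler; the route, cookie-based user lookup, and permission check are hypothetical stand-ins for the real service code.

```python
# Hypothetical sketch: send anonymous users to login instead of letting a
# permission failure surface as an HTTP 500. Flask, the route, and the
# helpers below are illustrative assumptions, not the actual product code.
from flask import Flask, abort, redirect, request

app = Flask(__name__)

def get_current_user():
    # Stub: treat requests without a user cookie as anonymous.
    return request.cookies.get("user")  # None means anonymous

def has_read_permission(user, drive_id):
    # Placeholder permission check; the real logic lives in the product.
    return user is not None

@app.route("/drive/<drive_id>")
def view_drive(drive_id):
    user = get_current_user()
    if not has_read_permission(user, drive_id):
        if user is None:
            # Before the fix this path surfaced as a 500; now the
            # anonymous user is redirected to login instead.
            return redirect("/login?next=" + request.path)
        abort(403)  # signed in but not authorized
    return f"drive {drive_id} contents"  # stand-in for the real page
```

The substance of the change is the redirect branch: the failure is unchanged on the server, but the anonymous user sees a login prompt rather than an error page.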
INTERVIEWER
So the follow-up question was targeting metrics, and ultimately tied back to the original question, which is a continuous improvement project. But for me as an interviewer, having listened to these answers, I'm still not clear as to what the metric was or what the improvement was. I understand the end improvement, which is a better customer experience, but I'm still unclear as to what you improved, right? To be able to say to your boss, "Hey, I fixed a problem," that's true, sure, but it's unclear to me, based on what you've shared, how you were determining that you did better. So can you maybe speak to the actual metrics, or what you were looking at, or just help me understand how you diagnosed this a little bit better?
CANDIDATE
Got it. Yes, your question is very good. So first, I look into the metrics, and I see, oh, right now the number is 98%, let's say. Then I realize that 1% of the expected failures are actually invalid requests because the person is an anonymous user. I looked into this data and figured that out. So I went back to fix the issue, so that an anonymous user shouldn't have the chance to click on someone's drive navigation bar. If I'm talking to my boss, I will say this was a bad user experience, and after I fixed it, we saw the expected failure rate go down. Does that better answer the question?
INTERVIEWER
Went down by 100%? Went down by 5%?
CANDIDATE
Mm, I would say it went down by about 30%, because not all the users are anonymous users. Maybe that 30% is the anonymous users who click on a shareable link.
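To make the arithmetic concrete: a back-of-the-envelope sketch of what a 30% reduction in expected failures could look like, reusing the 98% figure from earlier in the conversation. The request volume is invented for illustration.

```python
# Hypothetical check of the claimed improvement; the volume is invented.
total_requests = 100_000
success_rate = 0.98                              # figure from the interview
failures = total_requests * (1 - success_rate)   # 2,000 failures
anonymous_share = 0.30                           # assumed share from the logs
fixed = failures * anonymous_share               # 600 failures eliminated
print(f"failure rate: {failures / total_requests:.2%} -> "
      f"{(failures - fixed) / total_requests:.2%}")  # 2.00% -> 1.40%
```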