Watch the episode on YouTube →
(0:00) Introductions
(0:56) Kristen’s role at Depot
(3:44) Defining science in developer experience
(8:56) The “ghost engineers” controversy
(16:02) Balancing peer review with research
(19:17) What Kristen wants from quality research
(22:09) Why developer experience matters
(25:46) Developer experience in the modern world
(29:58) The need for empirical studies
(34:23) How sample sizes affect research validity
(37:18) Replicating qualitative research
(39:18) Applying research in practice
Rebecca: Kristen, thanks so much for coming. It's great to have you here.
Kristen: Well, thanks for having me, Rebecca. You know, I've been kind of following you from the social media platforms for a while. And I was just tickled to get the invitation to come and chat with you live!
Rebecca: Yeah, yeah! Likewise! And, you know, you were previously working at a Swarmia competitor, so it was a little bit more awkward to bring you on. But I've been really excited to talk to you and hoping to get some of your colleagues on in the future too, because the work that was going on at the Dev Success Lab was pretty interesting.
But today I wanted to talk about, well, first I want to hear what you're doing at Depot, but I want to talk about what you learned at the Dev Success Lab and especially about what is science in this space? And what is research in this space? And that sort of thing. But first, tell me about your new role at Depot.
Kristen: Yeah. So I'm at Depot; I've been there for about two months now. It's a very different kind of product than anything I've ever been involved with. Depot is basically a build acceleration platform. So, the co-founders are both engineers. A few years ago, they started building Depot’s first products because they were just really frustrated with slow Docker builds. Who hasn't been there, right?
At the time – this was about three years ago – there was a real lack of tooling and providers to make those builds fast and performant. So, like many software practitioners turned entrepreneurs, they went and built the solution that they'd always wanted. So that's Depot. Again, I joined in October of this year to head up our developer experience function. I think that Depot is just like a developer experience product, and so to me, it makes sense that we have this developer experience function.
Part of what I get to do is basically product work. A lot of folks have likened developer experience to user experience. So when we're talking about tools that are built for developers, then oftentimes we're talking about how easy is it to find what you need in the public-facing documentation? How easy is it to install and use the CLI, or use drop-in command replacements to actually run the software? How easy is it to navigate the application UI to see what you need to do? So just anything that our users might encounter as part of using our products, we want to make sure that that experience is next level. And, yeah, that's what I'm heading up.
Outside of that main function, the co-founders are really supportive of my inherent interest – I might even call deep love – of developer experience, both as a scientific construct and then as a community as well. So, I spend some of my time just kind of keeping a pulse on what's going on in the space, reading the latest research, doing a bit of writing around DevEx, going to conferences and stuff like that. So it's been really fun so far. Different kind of role for me!
Rebecca: Yeah, very different. So it reminds me of mine a little bit, but it sounds like you're getting to both interact with the outside world, but also influence the product that you're working on at Depot.
Kristen: Yeah, exactly.
Rebecca: That's really cool. Really cool. So, I want to talk about science because you like science; I like science. So talk to me first, there's been so much talk about developer productivity and how you measure it and how, in some cases, people want to compare people to other people. In some cases, people want to use these sorts of metrics for maybe less-than-ideal purposes. But what is science in this space? What does that mean?
Kristen: Yeah. So yeah, it's such a good question. When we're talking about developer experience science, what exactly are we talking about? So, we are talking about research that applies empirical scientific methods to study and understand – and ideally to improve – the experience of developing software for those who do it, either as part of their jobs, or as maybe open-source contributors. There's a lot of this science that focuses on open-source projects or folks who contribute to those projects.
But a fair bit of it is looking at folks who develop as part of their jobs, working for corporations on enterprise codebases. And that's the crux of it. That's the short answer: when we're talking about the science, we're talking about research. It's not product research. It's research that is done according to the principles of the Western science paradigm.
Rebecca: Can you talk to me about what that is? I did not grow up in academia, so I remember learning about the scientific method long ago, but can you talk to me a little bit about what that is and what you're seeing and maybe not seeing in – I'm going to use air quotes around “research” because I do think there's some research that's research and there's some research that's more on the marketing end. How do you distinguish between those two as a professional consumer of this sort of information?
Kristen: Yeah, absolutely. So, I think intent is paramount here. I mean, as you just mentioned, Rebecca, some of this stuff, it is posted, flagged, published under the name of research. It's actually just marketing, right? The reason that research was conducted was to maybe support the efficacy or the benefit of a certain tool or a certain practice. And while the research was done, most likely, to produce that artifact that's then given to consumers, it doesn't necessarily adhere to those Western scientific principles.
So what are those? Well, some people get PhDs in experimental methods. It's a really deep topic. But I think that if I were to distill it into its essence, we're talking about ethical research. So especially when we're dealing with human subjects, when we're doing research on humans, there are moral and ethical considerations that we have to take into account.
So Institutional Review Boards – they exist most often at universities, but the Developer Success Lab actually stood up their own IRB. And what that does is that you write up a research proposal, you lay out, “well, this is what I want to do. These are the questions I want to ask. And these are the methods that I want to use. IRB, can you please look at this and assess this for ethics or morality? Make sure that we're not doing any harm to the humans who are involved here.” I don't think people doing product research are doing that kind of thing.
Rebecca: If I'm really just poring through a few gigs or terabytes of GitHub data or JIRA data, what are the possible human repercussions of that? Because it's just data, right?
Kristen: Well, there's data privacy, right? Which is probably just a given to think about. But, the ghost engineer research…
Rebecca: Yes, which we were bound to talk about. So let's do it. Let's do it right now.
Kristen: So the findings and resultant claims that are made based on those findings can have serious repercussions for folks. And so, for example, if we had somebody come out and say, “Hey, 10 percent of software engineers are doing absolutely no work at all, there is a way to suss this out so that you can fire them.”
Well, if that research was not conducted in a moral, ethical, or rigorous way, according to the standards of science and methods of science… Well, then we have something that, if people take it seriously enough, can have serious repercussions for the humans that provided the data as inputs to that research and analysis and whatnot.
Rebecca: It does. And let's just step back and talk about, so there was some – air quotes, again – “research” that was making the rounds on LinkedIn in the last few weeks about how some significant percentage of software engineers are, like you said, “doing nothing.” And I know you dug into that research a bunch, and I would love to hear not just what you found, but also just your methods for understanding. Is this reliable information?
Kristen: Okay, so this is kind of interesting. So I'm actually, just to say up front, I'm really confused about why this thing has blown up the way that it has. Not in terms of people being angry about it, but just that it's gotten so much attention, even from outlets like The Washington Post.
Rebecca: Oh, wow. I didn't know it made that.
Kristen: Yeah! There were major news outlets who have reported on this. So let me just kind of explain how I went about my own research. So there's this conversation that has kind of been blowing up social media feeds in the software space around this purported ghost engineers research, posted on X on Wednesday, November 20th, with the claim that data from the author's research shows that 9.5 percent of software engineers do virtually nothing.
So again, that was on November 20th. I was hardly on platforms like LinkedIn in the roughly 12 days that followed this post because in the US, we had our Thanksgiving holidays. So, between travel and just kind of trying to unplug, I was not following this conversation. But I was popping in and out, and I kept seeing people that I really respect – scientists whose research I know is guided by deep expertise in scientific research methods, as well as that commitment to ethical research practices that we talked about.
I saw these folks commenting on this ghost engineers research, and I am ashamed to say I even liked some of those comments. And I'm ashamed to admit that because I think it's such a testament to how stuff like this goes viral on social media. I think a lot of us throw out those reactions. We post comments kind of willy-nilly when we don't always know exactly what we're reacting to or commenting on.
But anyway, so Monday, December 2nd, this is the Monday following the U.S. Thanksgiving holiday week. I actually had a task on my calendar to just take an hour and just get up to speed on what was going on with this ghost engineer research. So, the first thing I did was look at this guy's Twitter post, the original post. I actually have it pulled up here. So I'm just reading here.
He says, “I'm at Stanford and I research software engineering productivity. We have data on the performance of over 50,000 engineers from hundreds of companies. Inspired by Deedy Das, somebody he mentions, our research shows that approximately 9.5% of software engineers do virtually nothing.” And then he has the term “ghost engineers.”
So my first question as I'm looking at this is, okay, what does this guy mean that he's at Stanford? He says, “I'm at Stanford.” Is he a student at Stanford? Is he pursuing a PhD at Stanford?
Rebecca: Is he sitting on the campus at Stanford?
Kristen: Is he literally just sitting in the student center, drinking a cup of coffee while he's writing this post? Yes!
So, for me to take anything that he says seriously, I need to figure out who he is, right? Credentials aren't everything. I'm not saying that him posting with a PhD in software engineering research is going to change everything. But this was just a first step, right? Look at your sources. Check the credibility and reliability of the person who's making these claims.
So looked at his Twitter profile, looked at his LinkedIn profile, and even there, it's very unclear what this guy exactly does at Stanford. So I ended up going to the Stanford website, searched for his name. He's on the list with hundreds of other people who got an MBA at Stanford's business school this year. So not in the computer science department, not pursuing or received a PhD. He got an MBA this year from Stanford.
But this guy has listed researching software engineering productivity at Stanford as his most recent work experience on LinkedIn. But, as far as I can tell, this is from when he was getting his MBA at Stanford. He wasn’t employed by Stanford.
Rebecca: Let's back up, though. I am somebody who didn't even graduate college. Oops. And so what is it? Like you said, you don't have to have a PhD to do research, but what kind of raises your spidey sense about somebody who's getting an MBA who's doing research?
Because theoretically, they could be doing like equally valid, scientifically based… There's nothing about the fact that they're getting an MBA that says they aren't doing science. So talk to me a little bit, maybe just continue the story and talk about how did you figure out whether this met that real research bar? That science bar?
Kristen: Absolutely. And no, Rebecca, you're absolutely right. And I'm not claiming that with an MBA that you can't do this kind of research. I mean, I have a master's degree in teaching English as a foreign language, and I was doing research with the Developer Success Lab, right? So, no, you don't need the PhD. It doesn't need to be in a specific area to do that research.
But it's one piece of that credibility-building puzzle. And so it was just something that I noticed as I'm doing my own research on this guy. From what I could tell, people seem to think that the research that he's referencing in that original Twitter post is this paper, I think it's called “Predicting Expert Evaluations in Software Code Reviews.” And that actually is– there's a pre-print that's up. He's got Stanford professors who are co-authors on that paper.
But it's not entirely clear that that's where the 9.5 percent claim is coming from. Also, that paper hasn't been evaluated by peers through the peer review process. It hasn't been accepted for publication. For all we know, it could just be an artifact from his master's thesis. If he was required to complete a master's thesis, I'm not sure how Stanford awards those MBA degrees.
Rebecca: I'm going to just poke at this a little bit more. Not because I’m disagreeing but just because I really want to understand what is research? And I want listeners to understand what is research.
Peer review takes time, right? And so, yes, you can put a pre-print out there, but is it real until it's been peer-reviewed? In this world where technology moves fast, trends move fast, fads move fast, how do you balance that with the need for peer review? What if this was true, but it had to go through peer review and we don't learn about it for a year? That seems bad, too, right? So what's the balance of that?
Kristen: So the peer review process is rife with challenges. And some of those are people challenges. Good research doesn't always make it through the peer review process, and sometimes bad research makes it through the peer review process. So, it's very imperfect, but it does serve a purpose.
We need other people to look at our research, our method, our findings, our conclusions, our claims, and we need other people to provide their input on all of those things. It's just kind of the way that we think about knowledge building in that kind of scholarly academic sense, and the peer review process provides one way to do that.
But again, it's imperfect, right? I mean, there are stories about competing researchers. One submits to a journal where the other is one of the reviewers and doesn't make it through, not because it's not great research that contributes something valuable to the conversation, but because of these kind of people and ego issues. Which is just one example of why something might not make it through.
Rebecca: But let's imagine that pre-prints– I'm not going to say they solve this, but they mitigate a lot of this, giving you a chance to get feedback from the broader community before you go through that peer review process. And I know that the Developer Success Lab has published a variety of pre-prints. Which I'm glad for because I don't have to wait until a year later to actually read them, and I can get feedback on them.
I think that brings up an important point, is that the ability to give feedback on methods and findings, I think is a really important part of being able to call something research. The fact that you can see, “how'd you get there?” It's great that you saw a 6.2 percent improvement in X when you did Y. And I want to believe that. I'm happy. I want to be happy for you, but I want to see the numbers too, and I want to see how you came to that conclusion. And is it even statistically sound, the way that you came to this conclusion?
And I don't want to talk forever about what makes good research, but maybe we can just talk about, when somebody is claiming that they're putting out research in this productivity experience effectiveness space, what do you want to see from it?
And maybe peer review, like peer review in a published journal, I'm going to push back on that being the standard because I do think that, you know, eventually, sure, publish it, don't hold it back until it can be published. But besides that, what are the standards that you want to see?
If not peer review, then what, when somebody's got a research label on something that they're putting out into the world? Because I believe that you can do good research that is also good marketing. I believe that is possible. So, setting aside the marketing bit, what do you want to see from research in order to accept it as research?
Kristen: Complete transparency about what was done from start to finish. So, in a research paper, we see the methods section, right? And that will typically lay out for you exactly what was done to both gather the data and then analyze the data. And if we, as consumers of that research, don't have insight into the data collection and data analysis process, then it's really hard for us to evaluate that research as being quality or not.
So that, to me, is a gold standard. And we don't have that with the ghost engineer research. Nobody has seen a paper or the paper. There was a research article that I just reviewed as part of our developer science review that I collaborate with the Developer Success Lab on. And this study included a replication package. Which is huge, and I think should be the standard.
It's basically something that the researchers host online where anybody can access it. It's all of the data and instructions for replicating their analysis so that you can go in and do that data analysis yourself. All of their work, all of their conclusions are checkable. That is complete transparency. And, to me, it's kind of the gold standard. We need to have more of that.
Rebecca: I agree. I would love to see a lot more of that, so thank you for talking to me through that. There's a lot of other stuff I want to make sure we talk about too, but just based on your past experience working with all those researchers in the Dev Success Lab... Again, I think what they do is great, and I would love to see more of that in this developer experience space.
Going back to your job at Depot and how Depot is giving you the space to continue to explore and understand developer experience. What is it, and why is it valuable to a company like Depot that's building dev tools, and which companies need to be thinking about this?
Kristen: Yeah, totally. Cool. So fundamentally, developer experience is just concerned with the nature and the quality of the experience of developing software. So really high level. Historically, DevEx was really focused on improving the experience of developing with a particular tool. So, looking at developers as users of a tool, improving that experience, completely analogous to UX, user experience.
But at some point, folks started doing research, empirical research. They started looking at DevEx, really as a construct. And then as a result of that, we started to see the conversation around developer experience take a much more holistic view as something that encompasses everything that affects a developer's day to day. So not just the tools that they're using, but the processes that they're expected to use, the environments that they work in, both development environments but also people environments, and the people that they interact with.
So again, you know, one thing that I've heard people say recently is, “Well, what isn't developer experience? Isn't everything developer experience?” And in my opinion, yes, everything that touches how developers do their work is developer experience. But I think that that is not an argument that we shouldn't pay attention to it, but even more of an argument that we should pay attention to it.
So I really think that software teams – engineering leaders, even ICs – that they need to be caring about developer experience within their unit. I think organizations and companies need to be paying attention to it. Paying attention to DevEx is basically caring about people, in my opinion. And at a high level, I think we should care about other people. But when we make it better or easier for software folks to do their jobs, when we make the experience of building and shipping software great, I think we're more likely to ship more stuff and we're more likely to keep those folks on our team as a result.
Both lived experience and empirical evidence point to the importance of just improving the experience of developing software for software practitioners. But I don't think we can ignore the fact that we're living and working in a capitalist culture. A lot of us, right? Not all of us, but a lot of us are. And a lot of people are just trying to make a lot of money, and a lot of the people who work for those people are just trying to survive and maybe thrive, but maybe just trying to keep their jobs and not be part of the next layoff. And to do that, they have to, we have to show results, right? We have to show that we're getting stuff done, we're delivering value in some way.
So I think that good developer experience enables our teams and the individuals on those teams to deliver that value in a way that hopefully provides them a little bit of job security. Outside of keeping work fun, also let's help folks get stuff done.
Rebecca: You mentioned job security there a couple of times. And I think that's a really interesting part of this, especially today. I remember I gave a talk somewhere in the mid-2010s about optimizing for developer delight and about a project I had done that just made a code base a whole lot easier for developers to work with. I don't even remember the details now. But the point was we invested time in making it easier for developers to work with the code base, which had no material immediate impact on our customers. And wasn't that an interesting idea? And this was, like I said, 2014-ish, maybe, 2014/2015, that I gave this talk.
But today, the economic situation has changed quite a bit. And why is this still important? Because, in 2019, you said retention, and I'm like, “cool. Yep. Let's do it.” But now that retention is maybe even an anti-goal, maybe people are looking for voluntary attrition. Where does developer experience fit in that world? Why is this still important? Right? Hard question.
Kristen: It's a really interesting question. Yeah. Yeah. So, okay, let me chew on that for a second. So retention may not be as important to folks right now. I would like to disagree with that, maybe. But there's a value judgment there. Because in my mind, what my mind is yelling is, well, retention should always be important to people, no matter what the economic environment and realities are. Because when you've got good people that you do want to keep, you want to retain those people. Even if you've got seven out of a hundred people on this team that you're like, “Oh, we'd be okay if they moved on.” Well, you probably have fifty who you would really like to keep. Yeah. Yeah. So I don't know, does that answer your question, Rebecca?
Rebecca: Yeah, I don't know that there really is an answer. I think it's something that this industry has been struggling with, and I've been struggling with, is that, again, in the late 2010s, this was a very, very easy argument to make. And it was made not just in “the development environment should be good,” but in “the physical environment should be good” as well. There was a real premium on experience.
But maybe let's talk about this in the terms of not just, “of course, you want to keep people” – because I agree. Voluntary attrition is a risky game to play if you hit the wrong person. So, maybe we can talk about what are the positive outcomes that research has shown when you do invest in this? Besides retention, what are other kinds of values that you get out of investing in experience?
Kristen: So one thing that's kind of happening with the developer experience research is that there's a large body of it that is theoretical and that essentially theorizes that we should be investing in these things. And here are some of the things that we should be investing in. Here are some of the metrics that we can track to make sure that the developer experience is good.
But I'm having a hard time thinking of empirical research that actually tests a hypothesis around “these investments in developer experience have made a difference. Here's the research that we did to show it made a difference. Here's the data.” You know what I'm saying?
Rebecca: Yeah. Well, I think that's one of the biggest challenges in this space is attribution. Because I agree with the theory that, and there is – maybe not in the field of software development – but maybe generally, there's research and empirical research about like happy people do better work. I don't know. But I think that's one of the things that's really challenging about this experience part.
So at Swarmia, we talk about how there are three pillars to engineering effectiveness. There's business outcomes, which is the most important thing. Are you working on the right thing and doing what the business needs? There's developer productivity. How is work moving through the system? Is it moving through the system efficiently? And then there's developer experience, and this is the squishiest of all, because this is like “how does it feel?” But it's totally legit. Like how does it feel?
At Stripe, we did this program where people could just complain whenever they wanted to about how it felt to be a software engineer at the moment. “I'm sitting here waiting for this build that's failed three times.” And we had an angry button, that they could hit if something like that was happening. And that was really powerful 'cause you could actually read the frustration that people were feeling and you could kind of quickly empathize with, “I wouldn't want to be doing that. That sounds like it sucks.”
And again, when there was a lot of opportunity for movement between tech companies, I think that retention story was really real. Now, though, I think… I'm answering a question I was going to ask you, which is what research would I like to see? And I'll ask you in a minute. But I would love to see research that shows – and I don't know if this would pass an IRB – be nice to these people and don't be nice to these people and see what happens.
Kristen: No, that would not make it past an IRB!
Rebecca: But, yeah, and I think that's one of, again, the challenges is you can't run an A/B test on this. You can't be mean to some people and give other people free beer and call that ethical. But maybe that person who got his PhD in experimental methods could tell me how to actually do this experiment in a way that will pass the IRB. Because I mean, we see it, right? We see that if engineers are engaged with their work and feel like they can make progress, they do more, better work. We know it intuitively. And that is research that I would like to see because then this question kind of goes away. It stops being squishy.
Kristen: Well, Rebecca, I couldn't agree with you more. I'm always kind of thinking about– well, rather, ideas are popping into my head constantly around like, “Oh, it would be great if somebody would examine this phenomenon in the software engineering context.” And so one of the things that I really wish that we would see more of is replication studies, exactly what you're talking about, that case study-type research. We have, again, all of these big papers on developer experience and the SPACE framework, the DORA framework. Some of the stuff that Forsgren and Noda have done with Margaret-Anne Storey.
We've got these big theoretical papers, but we haven't seen that research replicated with small sample sizes. Again, I want to see case studies. And I want to see reporting on demographics. You know, a lot of these big papers have samples that are super homogenous. There's one in particular where I think there was only one woman in the whole sample. It's a very small percentage and, I'm sorry, but I think that we can't make claims based on findings when our samples are so homogenous. This can’t be applicable to all software developers.
Rebecca: I know you are not a PhD scientist, so I hope it's okay to ask you this. I know there is some research out there in this space where n equals 20 or n equals 50. So I'm curious, how do you think about that kind of research? It seems more qualitative than quantitative.
And I know in plenty of other industries as well, they do research with very small populations, and are still able to say all the things. So what are your thoughts about– how much does it matter how many people were involved? I guess that's my question.
Kristen: It's such a good question. And it's something that scientists mull over, and there are actually standards. There are statistical – and I'm going to botch this. I hope that Dr. Cat Hicks, my professor, is not evaluating these comments. There are statistics that researchers use to help them understand whether or not their sample size is actually big enough that we can trust the results and the effect sizes.
But that's with a certain kind of research. So we're talking about quantitative research. Oftentimes, we're going to be looking at those tools to help us understand if our sample size is big enough. With something like qualitative research, especially when you're doing some kind of exploratory, like “I'm trying to work on building a framework or a theory.” I'm purely in the exploratory phase. There's something called data saturation.
So you might start with doing a two-hour interview with ten people who hopefully, to the greatest extent possible, represent the full range of demographics. That's hard with ten people though, right? But qualitative research is very time-consuming. You do the two-hour interview. You have to transcribe it. You have to analyze it, read through it, categorize it, label it many times over, and then compare your results with another person or two other people who have gone through that same process.
You might find that after doing this with ten people, with a sample size of ten, that you've reached what's called data saturation, that no new themes or insights are coming out of those interviews. And so, sample size of ten is okay in that case. You might find that you need to do ten more or twenty more interviews in order to reach that data saturation point.
Rebecca: So once you've done that qualitative research with ten or twenty people, going back to how you want to see more replication. Is that the kind of thing that you can replicate? Is that the kind of thing where we're also expecting to see, here are your notes? What is the level of kind of inspectability that you expect out of qualitative research?
Kristen: Oh, sure. Yeah. So that gets a little bit trickier because if you're looking at the words that people have actually used, there are more data privacy concerns, right? So, I'm not sure if having a replication package is as common for qualitative work as it is for quantitative work, where we're looking at numbers, not the words that people actually use.
But as somebody who's deeply interested in research, and sometimes as a software practitioner, wants to replicate that research to the best of my ability in my context, I would love to see those replication packages for that kind of qualitative stuff. Does that answer your question?
Rebecca: Yeah, and thinking about what I would want, I would want to know, not literally, who those ten people are, but tell me how I can go find ten people kind of like them and have the same conversations, right? And maybe that's what I want. Because I do think we have seen in this space too, where there's a lot of stuff that's being represented as empirical, where it is actually a lot more qualitative and a lot more nuanced and you could talk to ten different people and maybe get slightly different results – or twenty.
Kristen: And that's where that reporting is so important. So that we can try to replicate the findings to the greatest extent possible. And, you know, also pinpoint if the sample size was too homogenous.
Rebecca: How can practitioners put this stuff to use? How can they find research that they can trust and put it to use?
Kristen: So finding the research, fortunately, is not super challenging these days. So that's a good thing. I think evaluating research to kind of suss out whether it is worthy of trying to replicate in your own setting is a more challenging thing. We've talked about that a lot already in this conversation, but just kind of having that research literacy to know, “okay, this particular research is worthy of trying to replicate” – that's a little bit more challenging. But once you make it past those steps, and you've got the research and you know, “okay, I want to try to find a practical application for this in my own context,” something that I do is just look at the methods that the researchers used in their study. And assess to what extent I can directly replicate those methods.
And I'm not super versed in data analysis. So, I know that, for myself, with a certain kind of study that used certain types of data analysis methods, I'm not going to be able to fully replicate that. But there are plenty of studies – some of those qualitative studies – where you're analyzing and kind of categorizing this qualitative input from participants, and you can make a good stab at that on your own.
And you can see, “okay, this is what the researchers found, very similar to what I found running a facsimile of this study in my own org or with my own team.” Or maybe what you find in your own team is very different than what the researchers found. But, yeah, I think it's just deeply reading those studies and giving yourself the time – and that's the thing. It's always a time thing, right? You want to do things quickly.
You got to deep read this stuff and you got to give it a read, make some notes, take your dog for a walk. Ideas will come to you and then come back and read it again. And make a plan, make a little research proposal that helps keep you organized and kind of helps you just develop your own way of administering that same study as closely as you can.
Rebecca: I think the other thing I would add to that is that it doesn't have to be something where you're changing your whole organization based on something that a study said. It can be something that you're just running within your team. And maybe it works and maybe it doesn't, but if it does, that's a great story to go tell to the rest of the organization, right?
Kristen: Yeah.
Rebecca: Well, this has been really great. I'm sorry to cut this. It feels like I'm cutting it short. We've been talking for a while. So, this has been really fantastic. I really appreciate your time. And thanks for all the insights and looking forward to see what you do at Depot!
Kristen: Yeah. Thanks, Rebecca. This has been fun.
Rebecca: Excellent. Take care.
Kristen: Thanks. Bye.
Rebecca: And that's the show. Engineering Unblocked is brought to you by Swarmia, where we're rethinking developer productivity. See you next time.