Part of it is figuring out which part of the argument is deontological and which part is consequentialist. And which part is post-hoc justification and which part is actually first-principles.
For example: when I said "I have no problem with the death penalty; it's just that it doesn't work", it's very easy to figure out what I mean. I'm
not saying that the death penalty is a moral good despite its consequences. For me, it's as simple as predicting (and measuring) which speed limit is best for a residential street. From my first principles, it's acceptable. And then I see what actually happens. I'm not willing to execute because "killing is bad". I'm willing to execute because "killing is undesirable, but if it must happen, this guy volunteered".
Likewise, I think it's acceptable to want a gun. And to want you to not have a gun. But I only care about the eventual shake-out of how the stats work. I don't think I even have a first-principles stance on owning a gun beyond a simple "we should be allowed to own stuff that doesn't make everything else worse".
Compare that to my moral stance on charity. Now, sometimes I think a charity donation is
wise. But that's just rational self-interest. Me supporting the Rotary to wipe out polio just makes my future life better. And I think people are
nuts for not donating more to Alzheimer's research. Like bonkers. But some of my charity is not an attempt to make the future 'better', it's just that I think it's
good to forgo something in order to help someone else out.
It might even be a post-hoc rationalization of an instinct. It's easier if there's some type of leverage (I gave up my ice-cream so that they could get spaghetti), but it needn't even be so clear-cut. Like, 'have my coupon for a free coffee'. And when I decide to donate to charity, I'm doing so out of a moral urge. The specific spending will be a weird combination of post-hoc justifications and attempts at predicting consequences.
And sometimes my framing is entirely post-hoc and I'm embarrassed later at having convinced myself. In my animal research, we kill animals with the goal of creating human medicines. "If I were starving on an island, I would need to eat one rat per day to stay alive. Ergo, a human day is worth one rat life. Ergo, if the sum of the animal research causes humans to live longer, we can simply calculate the benefit in rat-days."
There's no first principle there. I just declared that I was morally allowed to kill rats in order to stay alive. Now, I can
pretend it's a first principle: "Oh, nature forced us to choose between rats and ourselves, and then used natural selection to declare that 'selfishness' is going to be the winning strategy in the grand scope of time." But that's about it. And someone else might reasonably say "dude, people are going to choose themselves over rats 99% of the time, might as well deal with it". But then, that's kinda the end of the conversation. We're gonna kill rats to stay alive. If you want fewer rats killed, help me create alternatives. Don't try to force me to stop. Because one will work and the other one won't.
With guns (and me), that shake-out happened very easily. I wanted a gun for self-defense. And then people said "if you look at arranging society a certain way, you're actually safer without a gun". And I said "sure, whatevs, sounds good. I was mainly just trying to not get killed."