My answer to that guy in the video:
Hello, Epydemic2020. This is a response to your video 'Deontology vs Consequentialism'. As a utilitarian consequentialist, I want to respond to your claims at 2:55.
First, when the conclusions of a moral theory you do not hold differ from the ones your intuition points you to, you could argue that the theory is incorrect; but you could equally argue that your intuition is the one that is incorrect. It is tautologically the case that, as long as you do not hold the moral theory you critique, it will not always reach the same conclusions as yours: if moral function f is not equal to moral function p, then it is not true that f(x) equals p(x) for every input x. The question I would pose to you is: why do you believe your intuition is any more reliable than an academic moral theory, be it consequentialism or deontology, to the point that you would choose between them according to how well they perform against your already established moral intuitions? If that is truly and inevitably how you are going to choose your moral theory, I suggest you stop deluding yourself and simply do what your intuition tells you morally, without the detour through moral theories.
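To make that tautology explicit, here is the equivalence I am relying on, written in LaTeX with the standard extensional notion of function equality (my framing, not anything from your video):

    % two moral functions are equal exactly when they agree on every case x
    f = p \iff \forall x,\; f(x) = p(x)
    % negating both sides: if the theories differ at all, then some case
    % must receive a different verdict from each
    f \neq p \iff \exists x,\; f(x) \neq p(x)

So the mere existence of a case where a theory disagrees with your intuition is guaranteed by the two being different; by itself it is not evidence against the theory.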
However, instead of this, I suggest you develop more sophisticated tools for comparing moral theories than simply checking them against what your intuition tells you. I see in your video that you are quite young and have a liking for knowledge, so you are probably on the right track for that. With this first point I wanted above all to highlight the use you make of your bare intuition as a weighing scale, and to make you consider whether this is truly a method we should rely on to choose or develop a moral theory.
Second, the consequences of killing one total stranger to save another will very rarely be identical. Consequentialists do not weigh consequences carelessly; precisely because you believe consequences are the only thing that matters for reaching a moral conclusion, you take care to account for every consequence that might occur. Let's grant, following your scenario, that everything downstream in the world is the same whichever option is taken (which is false, but let's assume it for the sake of argument): the person is not held accountable in either case, does not go to jail in either case, goes (or does not go) on a murdering rampage or ends up equally depressed in either case, and so on. Ah! But it happens that people feel something themselves about killing other people. So if, with respect to the world and everything that follows, either choice produces the same consequences (the same well-being, the same amount of whatever consequentialist unit you measure), but one choice makes the potential killer feel worse than the other at that precise moment, then, consequentially speaking, he should take the choice he is more comfortable with: not killing would be the morally good choice for someone who dislikes killing, and killing would be the morally good choice for someone who enjoys it. Arguably most people should choose not to kill, if only for that single reason (in your very unlikely scenario).
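To put that comparison in symbols (a sketch with invented labels; W, a and b are mine, not anything from your video):

    % W: the stipulated-equal well-being of the rest of the world under either act
    % a: how the agent feels having killed; b: how he feels having refrained
    U_{\text{kill}} = W + a, \qquad U_{\text{refrain}} = W + b
    % with the worldly consequences fixed, the difference reduces to the
    % agent's own feelings about each act
    U_{\text{refrain}} - U_{\text{kill}} = b - a

If b > a the agent should refrain, and if a > b he should kill; the stipulated tie in worldly consequences is broken entirely by how each act makes the agent himself feel.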
There are extremely few situations or dilemmas in which well-being consequentialism delivers a verdict of moral neutrality, and none of them, I believe, are realistic.
Edit: He follows a religion; he is Christian. Personally, I find that trait makes a person unworthy of being listened to in ethical matters, and unreliable, since they are willing to believe wildly fantastic stories on mere faith.