I got a perfect 10/10. She even approached me after class to say she really loved my topic and my novel approach to the problem. I did get chastised for especially poor sampling, but I was self-aware enough in the paper to acknowledge that this was not really a good idea for a scientific survey, and I even included a tentative plan to do random sampling on online communities if I ever tried this again.
As I promised, general conclusions from the data:
Firstly, the sample size wasn't large enough for SPSS, the program I was using for interpretation, to be confident in its calculations. Therefore, with one notable exception, I got relatively low lambda scores. For people not in the social sciences, the lambda score measures the strength of association between two nominal variables (i.e., variables with no inherent numerical value, which is what all of my variables were), or between a nominal and an ordinal variable (which does have a numerical ordering, but is not a raw number with clearly defined intervals. Basically, ranges or abstract comparisons like "rate from 1-10 how strongly you support something"). It can range from 0 to 1; the closer it is to 1, the stronger the relationship. The actual calculation is pretty black-boxed to me (this is why I am in political science and not math lol), so I'm not going to bother explaining how it works. But theoretically, a lambda score of about .1 should be about enough to suggest something is going on between the variables, albeit weakly.
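For anyone curious what the black box is roughly doing, here's a minimal sketch of the Goodman-Kruskal lambda calculation in Python, written from the textbook definition rather than from whatever SPSS does internally. The contingency table is completely made up and is not my survey data.

```python
# Minimal sketch of Goodman-Kruskal lambda, the "proportional reduction in
# error" measure SPSS reports for nominal variables.
# Illustrative only; the table below is made up, not my survey data.

def goodman_kruskal_lambda(table):
    """Lambda for predicting the column (dependent) variable from the
    row (independent) variable. `table` is a list of rows of counts."""
    n = sum(sum(row) for row in table)
    col_totals = [sum(col) for col in zip(*table)]
    # Errors if you always guess the overall modal column category:
    errors_without = n - max(col_totals)
    # Errors if you guess the modal column category within each row:
    errors_with = n - sum(max(row) for row in table)
    if errors_without == 0:
        return 0.0
    return (errors_without - errors_with) / errors_without

# Rows: a hypothetical independent variable (e.g. answered "innate" vs "choice")
# Columns: a hypothetical dependent variable (e.g. supports vs opposes a policy)
table = [[18, 6],
         [7, 13]]
print(round(goodman_kruskal_lambda(table), 3))  # 0.316
```

The 0-to-1 range falls straight out of that definition: lambda is just the fraction by which knowing the independent variable cuts down your errors when guessing the dependent one.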
Secondly, all you idiots are way too homogeneous when it comes to demographics. 93% of respondents were cis males, and 72.7% shared the same political affiliation. I had to scrap my demographic analysis because there was no way I was going to get any meaningful data from it* >:c
As for what I was looking for, my working hypothesis was "[People] with empathy towards trans people are more likely to support pro-trans public policy". Empathy is a bit hard to operationalize, so that was kind of the point of the first section of questions. I asked basic intake questions which measured whether you were trans (because everyone is empathic towards themselves!), whether you knew a trans person, and two questions which exposed hidden biases towards trans people. So...
I'm not actually endorsing any position on this, but "cultural drivers" and "chemical environments" are both excluded by the "is it innate or a personal choice" question, and both of those, my bias tells me, seem more likely than the answers provided.
I know it isn't really a meaningful option since it doesn't answer the question, but I'd have loved to be able to answer "It really doesn't matter, people should be allowed to choose or be as they choose or are," or something similar, under the "innate or personal choice" question.
That is why I asked the questions the way I did: those who believe transgender identification is a choice were operationalized as being less empathic compared to people who said it was innate. Likewise, those who refused to date trans people were also considered less empathic. I know some of you are going to complain that just because they wouldn't date me or think I chose to be trans doesn't make them less empathic towards me, and that they aren't transphobes. I promise in advance I will only respond to such complaints with sarcastic gifs involving the letter K.
So, in order to make sure I wasn't writing a 40-page paper, I only looked at what I thought were the most contentious questions of sections II and III. Those ended up being Tertiary Operations and Birth Certificates, respectively. I then treated each intake question (with the exception of whether you personally identified as trans, because the responses were so lopsided in favor of cis people that I wouldn't get anything meaningful out of them) as the independent variable, with those two as the dependent variables.
Firstly, knowing a trans person had basically no effect at all on how you responded to either question. The lambda score for knowledge of trans people versus support of tertiary surgeries was a clean .000, in fact. Simply knowing a trans person did not make you more empathic.
However, the relationships for the hidden biases did give middling support for a weak relationship with both questions. The scores hovered around .1, with the two highest tied at .148. Again, that's enough to say that there is a weak relationship between my variables, but it's not very predictive. Having only 45 respondents was a big problem in that regard, unfortunately. If I had more cases, it might have spit out numbers more favorable to me.
Interestingly enough, in some cases I actually got unusually high lambda scores if I swapped which variable was independent and which was dependent. For example, if I made support for changing trans birth certificates my independent variable, and knowing trans people my dependent variable, the lambda score rose to .409. That's a huge correlation considering my lack of responses. It also doesn't make any logical sense, because knowing trans people clearly does not depend on your convictions about trans public policy. SPSS will calculate illogical things if it's theoretically possible; part of good interpretation is knowing which outputs are illogical horsecrap.
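That swing, incidentally, is baked into how lambda is defined: it measures how much one specific variable helps you predict the other, so flipping which one you call dependent can legitimately change the number a lot. Reusing the goodman_kruskal_lambda sketch from above on another made-up table:

```python
# Lambda is directional, so the two orderings can give very different scores.
# Made-up table: rows = knows a trans person (yes/no),
#                columns = supports the policy (yes/no).
table = [[10, 0],
         [5, 5]]
transposed = [list(col) for col in zip(*table)]

print(round(goodman_kruskal_lambda(table), 3))       # predict support from knowing: 0.0
print(round(goodman_kruskal_lambda(transposed), 3))  # predict knowing from support: 0.5
```

So SPSS isn't wrong to spit out the .409; it just has no idea which direction makes causal sense. That part is on me.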
In general, I concluded that this was a promising start for my hypothesis, but there was too much sampling error and too few responses in general to make any of it significant.
*Theoretically I got enough varied responses for age and country that I could have tried to use them, but my report was already long enough that I decided not to bother with demographics at all.