How To Create A Truly Objective Survey About The Media



The Make America Great Again Committee released the “Mainstream Media Accountability Survey” less than one month after Donald Trump entered the Oval Office. I stumbled upon the survey circulating on social media. I am an academic who focuses on the analysis and construction of testing materials, including survey methods, so public surveys like the Mainstream Media Accountability Survey interest me: I have the background to examine the instrument from an analytic perspective. I decided to take the survey and share my opinions of its construction. The “Mainstream Media Accountability Survey” presents participants with a format that survey researchers consider unreliable and frown upon: twenty-two of the twenty-five items offer only “Yes”, “No”, “No opinion”, or “Other” as responses. Relying on “Yes” or “No” survey responses removes the possibility of meaningful inferential analysis, produces biased questions, and leaves participants confused.
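The analytic cost of binary responses can be made concrete with a quick sketch. The responses below are invented purely for illustration: a five-point scale yields both a mean and a measure of spread, while the same opinions forced into Yes/No keep only a proportion.

```python
from statistics import mean, stdev

# Hypothetical responses on a 5-point scale:
# 1 = strongly disagree ... 5 = strongly agree (invented data)
likert = [2, 3, 3, 4, 5]

# The same respondents forced into a binary format:
# 1 for "Yes" (agree or strongly agree), 0 for "No"
yes_no = [1 if r >= 4 else 0 for r in likert]

print(mean(likert), stdev(likert))  # scale data: both a mean and a spread
print(mean(yes_no))                 # binary data: only a proportion survives
```

With scale data an analyst can ask how strongly people feel and how much they vary; with the binary version, the intensity of opinion is simply gone.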
The goal of survey research is to use the responses of a sample of people to make inferences about a larger group. This procedure requires certain properties of the survey responses. One major requirement is that responses are treated as “continuous”, meaning that questions allow participants to answer on a scale. A common method is to provide participants with several ordered options (e.g. strongly agree, agree, neutral, disagree, strongly disagree). Without this “continuous” data, statistical inference to larger groups is very restricted, and accurate predictions are nearly impossible. Further, forcing “Yes” or “No” responses to complex questions is frustrating to participants. How can one accurately express an opinion on a complex topic with a “Yes” or a “No”?

The instrument violates a handful of survey research standards, which makes any claims based on the survey invalid. Consider the question: “Do you believe that the media wrongly attributes gun violence to Second Amendment rights?” This item falls victim to leading question bias: the wording of the question assumes that participants already agree with the claim that the media attributes gun violence to Second Amendment rights. Leading questions are easy to avoid by not asserting a stance in the question. For example, an acceptable way to probe for opinions on the attribution of gun violence would be to ask: On a scale from strongly agree to strongly disagree, how much do you agree with the following statements? 1.) The media attributes gun violence to Second Amendment rights. 2.) It is wrong for the media to attribute gun violence to Second Amendment rights.

Another concerning question is the following: “Do you believe that the media uses slurs rather than facts to attack conservative stances on issues like border control, religious liberties, and ObamaCare?” This question is what survey researchers call a double-barreled question.
The problem here is that the question asks about multiple opinions simultaneously. If a participant believed the media was using slurs to attack border control, but not religion, how could they accurately respond to the item?

Consider the next question: “On which issues does the mainstream media do the worst job of representing Republicans? (Select as many that apply.)” The possible selections are: Immigration, Economics, Pro-life values, Religion, Individual liberty, Conservatism, Foreign policy, and Second Amendment rights. This item asks for the “worst” representation of the Republican Party by the media, and then asks participants to select all that apply. The question is very confusing: it probes for a single response, the “worst”, yet the format allows as many as eight responses. Can eight different issues all be equally the worst?

The sampling procedure used for the survey also introduces problems. It is a well-accepted scientific practice to randomly sample survey respondents to avoid biases. Recall that the goal of survey research is to make predictions about a larger group or population. Without random sampling, you cannot be certain that your sample reflects the larger group, and the effect is inaccurate predictions. For example, suppose I want to make predictions about the opinions of the American public on gun control policy. Instead of sampling randomly, I go to a college campus to collect opinions. This introduces bias into my results, because college students tend to hold more liberal attitudes. If I made claims about the American population's opinions on gun control based on data from college campuses, those claims would likely be inaccurate. The Mainstream Media Accountability Survey appears to have used a biased, non-random sampling procedure: it was originally spread by email to subscribed supporters of the Make America Great Again Committee.
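The campus example above can be turned into a quick simulation. The population size and the campus subgroup's lean are invented numbers, assumed purely for illustration:

```python
import random

random.seed(1)  # fixed seed so the sketch is reproducible

# Invented population: 1 = supports stricter gun control, 0 = does not.
# Assume 50% support overall (an assumption, not real polling data).
population = [1] * 5000 + [0] * 5000

# Assume a campus subgroup where support runs higher, say 70%.
campus = [1] * 700 + [0] * 300

random_sample = random.sample(population, 500)  # unbiased random sample
campus_sample = random.sample(campus, 500)      # biased convenience sample

# The random sample lands near the true 50%; the campus sample does not.
print(sum(random_sample) / 500)
print(sum(campus_sample) / 500)
```

The random sample's estimate tracks the true population value, while the convenience sample faithfully reproduces the subgroup it was drawn from, roughly 70%, and would badly mislead anyone who generalized it to the whole population.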
Because of this sampling method, any claims about the American public, or even about conservatives generally, are invalid. The “Mainstream Media Accountability Survey” is a poorly written instrument that will produce biased results. The available response options do not allow for useful analysis and leave participants confused. The wording of the items violates commonly held standards of practice through double-barreled and leading questions. The sampling procedure is biased and restricts the ability to generalize any claims.

YOU can take the Mainstream Media Accountability Survey here:

https://action.trump2016.com/trump-mms-survey/

If you liked this article, check out more work by Zachary J. Roman, M.S., at www.zacharyroman.com


#Reviews #GlobalIssues

Howl Magazine NY©