Applied Behavioral Analysis 2

Resource: How to Make a Graph Using Microsoft Excel

The Unit 6 Assignment requires you to apply the theories, concepts, and research that you have covered so far this term to a hypothetical case study. Your answers to the questions and your completed graph should draw on information from the text and supplemental readings. You may also use sources from the Kaplan library or other credible Internet sources, but your primary sources should be the readings assigned for the course.

Read each case study and answer the questions below. You will need to write 2–3 typed pages for each case in order to address all required parts of the project. Answers to the questions should be typed in an APA-formatted Word document, double-spaced in 12-point font, and submitted to the Dropbox.

Your final paper must be your original work; plagiarism will not be tolerated. Be sure to review the Syllabus regarding what constitutes plagiarism. Provide proper credit, in APA format, for the sources used in your case study analysis. See the APA Quick Reference for any questions related to APA citations. You must credit authors when you:

Summarize a concept, theory or research
Use direct quotes from the text or articles
Read Case Study 1: Martin

Martin, a behavior analyst, is working with Sara, a 14-year-old girl with severe developmental delays who exhibits self-injurious behavior (SIB). Sara’s target behavior is defined as pulling her hair, biting her arm, and banging her head against the wall. After conducting a functional analysis, Martin decided to employ an intervention program consisting of differential reinforcement of other behavior (DRO). Martin collected data on Sara’s SIB before and during the intervention. Below is a depiction of the data that Martin collected:

Sara’s Frequency of SIB

Baseline Occurrences    DRO Occurrences
22                      5
25                      5
27                      3
26                      2

 

Address the following questions, and complete the following requirements:

Create a basic line graph using Microsoft Excel, to be included in your Word document. The graph should depict the data provided in this case study. You should only need one graph, with SIB depicted in both baseline and intervention (an illustrative sketch of the expected layout follows this list).
What type of research design did Martin employ when working with Sara? What is an advantage and a disadvantage of using this research design?
According to the data in the graph, was the intervention that Martin selected effective in modifying Sara’s self-injurious behavior?
Martin had considered using an ABAB reversal design when working with Sara. What are some ethical implications of selecting a reversal design when working with the type of behavior problems that Sara was exhibiting?
Martin’s supervisor requested a graph of the data he collected when working with Sara. Why are graphs useful in evaluating behavior change?
Discuss how a graph demonstrates a functional relationship. Identify whether the graph that you created using the data provided in this section depicts a functional relationship.
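The assignment itself calls for Excel, but as a quick preview of what the single graph should look like, here is a minimal sketch in Python (matplotlib) that plots the same data with a phase-change line between baseline and DRO; the session numbering 1–8 is an assumption for illustration only.

```python
# Minimal sketch: preview of the baseline/DRO line graph (the assignment requires Excel).
import matplotlib.pyplot as plt

baseline = [22, 25, 27, 26]   # SIB occurrences per session, baseline phase
dro = [5, 5, 3, 2]            # SIB occurrences per session, DRO phase

baseline_sessions = list(range(1, 5))   # sessions 1-4 (assumed numbering)
dro_sessions = list(range(5, 9))        # sessions 5-8 (assumed numbering)

plt.plot(baseline_sessions, baseline, marker="o", label="Baseline")
plt.plot(dro_sessions, dro, marker="o", linestyle="--", label="DRO")
plt.axvline(x=4.5, color="gray", linestyle=":")  # phase-change line between conditions
plt.xlabel("Session")
plt.ylabel("Frequency of SIB")
plt.title("Sara's Frequency of SIB")
plt.legend()
plt.show()
```

In Excel, the equivalent layout is a single line chart with sessions on the x-axis, frequency of SIB on the y-axis, and the baseline and DRO phases plotted as separate series.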

 

SPSS 1 Psychology homework help

PSY 520 SPSS Assignment 1

 

Before you begin the assignment:

 

• Read Chapter 8 in your Discovering Statistics Using IBM SPSS Statistics textbook.

• Review the video tutorial for an overview of conducting multiple regression in SPSS.

• Download and open the Popularity SPSS data set.

 

An overview of the data set:

 

This data set represents hypothetical data from a study that examined how well some core personality traits predict a person’s level of popularity. Personality was measured using the “Big 5,” which is a very commonly used measure of personality. In fact, a Big 5 personality scale was included in the Module Two discussion.

 

Here is some more information about the variables in this hypothetical data set:

 

• Number: This is the ID number of the participant

• Sex: Participants’ sex, with “1” standing for male and “2” standing for female

• Age: College year of the participant, with “1” standing for freshman, “2” standing for sophomore, etc.

• Popularity: Popularity measured with a questionnaire that could range from 0 to 100, with higher numbers indicating more popularity

• Extroversion: A Big 5 trait indicating level of sociability. Scores range from 1 to 5, with higher numbers indicating greater extroversion and lower numbers indicating greater introversion

• Agreeableness: A Big 5 trait indicating level of interpersonal warmth and friendliness. Scores range from 1 to 5, with high numbers indicating warmth and low numbers indicating coldness towards others

• Conscientiousness: A Big 5 trait indicating level of self-control and responsibility. Scores range from 1 to 5, with high numbers indicating high conscientiousness and low numbers indicating low conscientiousness

• Neuroticism: A Big 5 trait indicating level of anxiety and emotional stability. Scores range from 1 to 5, with high numbers indicating high neuroticism and low numbers indicating low neuroticism

• Openness: A Big 5 trait indicating level of willingness to try new things and creativity. Scores range from 1 to 5, with high numbers indicating high open-mindedness and low numbers indicating closed-mindedness

 

Questions:

 

1) Describe in your own words what types of research situations call for a researcher to use a multiple regression analysis.

 

Type answer below:

 

 

 

2a) Run a basic correlation matrix for the Popularity, Extraversion, Agreeableness, Conscientiousness, Neuroticism, and Openness variables.
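For reference only (the assignment requires SPSS output): a minimal sketch of the same correlation matrix computed in Python, assuming the data set has been exported to a hypothetical popularity.csv whose column names match the variable list above.

```python
# Sketch only: Pearson correlations among the personality variables and Popularity.
# The file name and exact column spellings are assumptions; match them to the SPSS file.
import pandas as pd

df = pd.read_csv("popularity.csv")
variables = ["Popularity", "Extroversion", "Agreeableness",
             "Conscientiousness", "Neuroticism", "Openness"]

print(df[variables].corr(method="pearson").round(3))
```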

 

Paste output below (Read carefully: The best way to do this is to select “Copy Special” when copying from the SPSS output. Then select image as a format to copy. When pasting in Word, select Paste Special, choose a picture format, and then resize the image so it fits the screen):

 

 

 

2b) Based on these results, which personality variables are significantly correlated with Popularity?

 

Type answer below:

 

 

 

3a) Conduct a multiple regression analysis using Extraversion, Agreeableness, Conscientiousness, Neuroticism, and Openness as predictors of Popularity.
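Again for reference only: a minimal Python sketch of the equivalent multiple regression, under the same assumptions about the hypothetical CSV export and column names.

```python
# Sketch only: OLS regression of Popularity on the five Big 5 traits.
# Column names and file name are assumptions; adjust to the actual data set.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("popularity.csv")

model = smf.ols(
    "Popularity ~ Extroversion + Agreeableness + Conscientiousness"
    " + Neuroticism + Openness",
    data=df,
).fit()

# Prints coefficients, standard errors, t and p values, and R-squared,
# roughly corresponding to the SPSS Coefficients and Model Summary tables.
print(model.summary())
```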

 

Paste the output below:

 

 

 

3b) Which variables are significant predictors of Popularity? Compare and contrast the results from the multiple regression analysis to the basic correlation results from question 2b.

 

Type your answer below:

 

 

 

3c) What is the R-squared of this model, and what does it tell us about how well this model predicts Popularity?
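As a reminder of what this statistic represents before you interpret the SPSS Model Summary: R-squared is the proportion of variance in the outcome (Popularity) accounted for by the full set of predictors. In the usual notation,

```latex
R^2 = \frac{SS_{\text{model}}}{SS_{\text{total}}} = 1 - \frac{SS_{\text{residual}}}{SS_{\text{total}}}
```

so, for example, an R-squared of .30 would mean the five traits together account for about 30% of the variance in Popularity.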

 

Type your answer below:

 

 

 

3d) Write the results of the multiple regression in APA style. For help, refer to the Regression section in this document.

 

Type answer below:

 

Psychology Quiz homework help

Question 1  1.  According to DSM-5, the time frame for a sustained remission specifier is:

2–10 months.

13+ months.

2 weeks.

3–12 months. 5 points

Question 2  1.  According to Kurtz (2008), hope first occurs at which step?

Step 1.

Step 2.

Step 3.

Step 4. 5 points

Question 3  1.  According to the Dynamic Model of Relapse, these are factors that determine how one perceives recovery.

Distal risks.

Cognitive processes.

Tonic responses.

Phasic responses. 5 points

Question 4  1.  According to the text, a Substance Use Disorder is a (or an):

acute relapsing condition.

chronic relapsing condition.

consistent condition.

unstable condition. 5 points

Question 5  1.  According to the text, a person in recovery from alcohol use disorder who has one sip of beer at a wedding is said to have experienced a:

slip.

relapse.

lapse.

abstinence violation effect. 5 points

Question 6  1.  As a student, if you wanted to learn about AA or NA groups, this would be the type of group you would most likely attend.

Closed.

Open.

Either would be appropriate.

AA or NA only permits those in recovery to enter. 5 points

Question 7  1.  Covering up the addicted individual’s behaviors and negating the associated negative consequences is an example of:

co-dependency.

enabling.

helping.

sacrificing. 5 points

Question 8  1.  In Bowenian Theory, telling family members to talk to each other and not to you as counselor is called:

joining.

creating enactments.

differentiation.

creating a triad. 5 points

Question 9  1.  In Marlatt’s Model, the second category of variables contributing to addiction consists of:

coping skills.

outcome expectancies.

cravings.

poor parenting. 5 points

Question 10  1.  In the prior version of the DSM (the DSM-IV), which of the following was considered the less severe disorder?

Substance dependence.

Mild substance use disorder.

Mild substance dependence.

Substance abuse. 5 points

Question 11  1.  Non-support of a family member’s addiction recovery is an example of:

homeostasis.

boundaries.

triads.

subsystems. 5 points

Question 12  1.  The 12-Step Philosophy does NOT embrace which of the following?

Physical.

Medical.

Spiritual.

Mental. 5 points

 

Question 13  1.  The 12-Steps are concerned with:

character defects.

psychopathology.

illness.

detox. 5 points

Question 14  1.  The ability to separate from the family and be an independent individual is called:

breaking.

emotional cutoff.

differentiation.

triads. 5 points

Question 15  1.  This model proposes that co-occurring SUD and psychiatric disorders originate from the same risk factor.

Correlated liabilities model.

Secondary substance abuse model.

Common factor model.

Reciprocal causation model. 5 points

Question 16  1.  This model proposes that co-occurring psychiatric disorders precede and cause the onset of SUD.

Correlated liabilities model.

Secondary substance abuse model.

Common factor model.

Reciprocal causation model. 5 points

Question 17  1.  Which of the following substances may have anti-psychotic qualities?

Alcohol.

Opiates.

Cocaine.

Marijuana. 5 points

Question 18  1.  Which of these are NOT one of the three rules within an alcoholic family?

Obtain help for the addicted family member.

Protect the addicted family member from the consequences of their behavior.

Do not discuss the addiction.

Do not confront the addiction. 5 points

Question 19  1.  ______ is a step beyond ______.

Abstinence, recovery.

Recovery, abstinence.

Relapse, recovery.

Recovery, relapse. 5 points

Question 20  1.  ______ is the 12-Step program for families of an addicted individual.

Alanon.

Alateen.

AA.

NA. 5 points

 

Psychology questions review homework help


 

Review Questions 1–8. Please answer the questions in detail and support your answers with scholarly research citations where appropriate. Support your paper with a minimum of 5 resources. In addition to these specified resources, other appropriate scholarly resources, including older articles, may be included.

Length: 5–7 pages, not including title and reference pages. References: Minimum of 5 scholarly resources. Your paper should demonstrate thoughtful consideration of the ideas and concepts that are presented in the course and provide new thoughts and insights relating directly to this topic. Your paper should reflect scholarly writing and current APA standards.

Review Questions

1. What is naturalistic observation? How does a researcher collect data when conducting naturalistic observation research?
2. Why are the data in naturalistic observation research primarily qualitative?
3. Distinguish between participant and nonparticipant observation; between concealed and nonconcealed observation.
4. What is systematic observation? Why are the data from systematic observation primarily quantitative?
5. What is a coding system? What are some important considerations when developing a coding system?
6. What is a case study? When are case studies used? What is a psychobiography?
7. What is archival research? What are the major sources of archival data?
8. What is content analysis?

 

 

Review Questions

Name

Institutional Affiliation

 

Naturalistic observation is a research method commonly used by psychologists and other social scientists. It involves observing subjects in their natural environments. The technique is most often used where laboratory research is unnecessary, too costly, or would interfere with the behavior of interest. Unlike structured observation, naturalistic observation entails observing behavior as it happens in its natural setting, without any interference from the researcher. The technique is advantageous in that it allows a researcher to observe behavior directly as it occurs. It also allows the study of questions that cannot be examined in a laboratory for ethical reasons. For example, one cannot study the impact of imprisonment by confining subjects; the only way to gather such information is to observe naturally occurring behavior in prisons (Wertz, 2011).

Several methods are used to collect data in naturalistic observation. Observers may keep tally counts, writing down when and how often particular behaviors occur. Another technique is the observer narrative, in which the researcher takes notes during observation and later returns to the notes to extract data and identify behavioral patterns. Depending on the kind of behavior being studied, the observer may also choose to make audio or video recordings during each session (Wertz, 2011).

The data in naturalistic observation are mainly qualitative because they consist of descriptions of actual observations rather than statistical summaries. The main purpose of naturalistic observation is accurate description and interpretation: the researcher is responsible for describing the setting, the events, and the individuals observed, and then analyzing the observed behavior and reporting it in detail. The data come directly from the field, where the subjects are affected by the topic under study, and are gathered from several sources, such as studying documents, observing behavior, conducting interviews, and making audio or video recordings. Naturalistic observation is therefore a non-experimental, primarily qualitative research technique in which subjects are studied in their everyday settings; the behaviors of interest are observed and recorded by the researcher. The method is most often used during the early stages of a research project (Goodwin, 2010).

In participant observation, the observer takes on an active role: it involves direct participation in the lives of the individuals being studied, which reduces the distance between the observer and those observed. Participant observation must be well understood before entering the field; a major concern is that observers may lose sight of the purpose of the study because of their involvement. This contrasts with nonparticipant observation, in which the observer does not take an active role during the study. Participant observation is not simply a matter of showing up at a site and writing things down; it is a complex method with several components, and one of the first decisions a researcher must make after choosing participant observation is what kind of observer role to adopt (Wertz, 2011).

Participant observation has several disadvantages, some of which directly affect the outcome. Recordings and field notes about a group of individuals are never a complete description, because any recording of data is selective: it is shaped by the researcher’s own background and by what the researcher considers important and relevant. The researcher’s point of view therefore strongly influences how the data are interpreted and evaluated (Goodwin, 2010).

Nonparticipant observation, on the other hand, involves less interaction with the individuals being observed, and audio or video recording is a good choice with this technique. Nonparticipant observation is often concealed: the subjects are unaware that research is being carried out on them. Concealment is often preferred because knowledge of being observed could change the subjects’ behavior. Ethically, however, nonconcealed observation is preferable with some groups of individuals, and participant observation can be recommended because the researcher does not have to hide from the people being studied in order to collect data. Whether the researcher conceals his or her identity depends on ethical considerations, the setting, and the target group (Goodwin, 2010).

Reducing the interaction between the subjects and the researcher also reduces the risk of the Hawthorne effect, and it is easier to record data, including audio and video, when one is not participating. When people realize that they are being watched, they are less likely to behave or respond freely, which increases the risk of Hawthorne-type reactivity (McBurney & White, 2010).

Systematic observation refers to the careful observation of one or more specific behaviors in a particular setting, with procedures put in place ahead of time to reduce or eliminate bias. This technique is less global than naturalistic observation: coding rules and decision rules are established before data collection begins in order to minimize inference. Data from systematic observation are mainly quantitative because the behaviors of interest are defined and quantified in advance, on the basis of hypotheses the researchers formulate before carrying out the study (Goodwin & Goodwin, 1996).

A coding system is an analytical tool in which qualitative or quantitative data are grouped into categories to enable analysis; researchers use letters and numbers to represent observations so that they carry a meaningful, analyzable message. Coding can also mean transforming data into a more understandable form. Because many behaviors could be studied with systematic observation, important considerations when developing a coding system include deciding which behaviors are of interest, choosing the setting in which to observe, and, most importantly, defining the coding categories themselves (McBurney & White, 2010).

A case study is a method in which a specific event, program, or person is studied in detail over a specified period. The researcher gathers a great deal of information about the event, program, or person on which the study is focused, drawing on observation, newspapers, audio recordings, video recordings, photographs, and interviews. A case study gives a detailed description of an individual, who is often an important figure in society, and the method can also be applied to an institution or a business (McBurney & White, 2010).

These techniques are used when researchers want to learn more about a poorly understood situation; a case study is conducted when little is known about a particular phenomenon, and its results can inform the public about something uncommon or rare. A psychobiography, in turn, aims to understand the life history of an important person, such as a politician or musician, by using psychological theory to provide a meaningful account of the individual’s life. Archival research, on the other hand, is research that draws its information from previously existing records: researchers use written documents to answer their research questions and may also use sources such as the Internet, books, and library holdings. The major sources of archival data include statistical records, written records, and survey archives. Archival research has the advantage of costing relatively little, because the researcher does not have to collect new data and can devote effort to working through the existing records, and the measures obtained from such records often correspond closely to the existing literature (Goodwin, 2010).

Lastly, content analysis is a technique used to summarize any kind of material by systematically examining its content. It allows a more objective evaluation than impressions based on favoring a particular audience. Content analysis also refers more generally to methods for analyzing and understanding a collection of material, often large bodies of textual information (McBurney & White, 2010).

 

References

Angrosino, M. V. (2007). Naturalistic observation. Walnut Creek, Calif: Left Coast Press.

Goodwin, C. J. (2010). Research in psychology: Methods and design. Hoboken, NJ: Wiley.

Goodwin, W. L., & Goodwin, L. D. (1996). Understanding quantitative and qualitative research in early childhood education. New York: Teachers College Press.

McBurney, D., & White, T. L. (2010). Research methods. Belmont, CA: Wadsworth Cengage Learning.

Wertz, F. J. (2011). Five ways of doing qualitative analysis: Phenomenological psychology, grounded theory, discourse analysis, narrative research, and intuitive inquiry. New York: Guilford Press.

 

Methods In Behavioral Research, Ch. 3 homework help

ETHICS IN BEHAVIORAL RESEARCH CHP. 3

 

LEARNING OBJECTIVES

· Summarize Milgram’s obedience experiment.

· Discuss the three ethical principles outlined in the Belmont Report: beneficence, autonomy, and justice.

· Define deception and discuss the ethical issues surrounding its use in research.

· List the information contained in an informed consent form.

· Discuss potential problems in obtaining informed consent.

· Describe the purpose of debriefing research participants.

· Describe the function of an Institutional Review Board.

· Contrast the categories of risk involved in research activities: exempt, minimal risk, and greater than minimal risk.

· Summarize the ethical principles in the APA Ethics Code concerning research with human participants.

· Summarize the ethical issues concerning research with nonhuman animals.

· Discuss how potential risks and benefits of research are evaluated.

· Discuss the ethical issue surrounding misrepresentation of research findings.

· Define plagiarism and describe how to avoid plagiarism.

ETHICAL PRACTICE IS FUNDAMENTAL TO THE CONCEPTUALIZATION, PLANNING, EXECUTION, AND EVALUATION OF RESEARCH. Researchers who do not consider the ethical implications of their projects risk harming individuals, communities, and behavioral science. This chapter provides an historical overview of ethics in behavioral research, reviews core ethical principles for researchers, describes relevant institutional structures that protect research participants, and concludes with a discussion of what it means to be an ethical researcher.

MILGRAM’S OBEDIENCE EXPERIMENT

Stanley Milgram conducted a series of studies (1963, 1964, 1965) to study obedience to authority. He placed an ad in the local newspaper in New Haven, Connecticut, offering a small stipend to men to participate in a “scientific study of memory and learning” being conducted at Yale University. The volunteers reported to Milgram’s laboratory at Yale, where they met a scientist dressed in a white lab coat and another volunteer in the study, a middle-aged man named “Mr. Wallace.” Mr. Wallace was actually a confederate (i.e., accomplice) of the experimenter, but the participants did not know this. The scientist explained that the study would examine the effects of punishment on learning. One person would be a “teacher” who would administer the punishment, and the other would be the “learner.” Mr. Wallace and the volunteer participant then drew slips of paper to determine who would be the teacher and who would be the learner. The drawing was rigged, however—Mr. Wallace was always the learner and the volunteer was always the teacher.

The scientist attached electrodes to Mr. Wallace and placed the teacher in front of an impressive-looking shock machine. The shock machine had a series of levers that, the individual was told, when pressed would deliver shocks to Mr. Wallace. The first lever was labeled 15 volts, the second 30 volts, the third 45 volts, and so on up to 450 volts. The levers were also labeled “Slight Shock,” “Moderate Shock,” and so on up to “Danger: Severe Shock,” followed by red X’s above 400 volts.

Mr. Wallace was instructed to learn a series of word pairs. Then he was given a test to see if he could identify which words went together. Every time Mr. Wallace made a mistake, the teacher was to deliver a shock as punishment. The first mistake was supposed to be answered by a 15-volt shock, the second by a 30-volt shock, and so on. Each time a mistake was made, the learner received a greater shock. The learner, Mr. Wallace, never actually received any shocks, but the participants in the study did not know that. In the experiment, Mr. Wallace made mistake after mistake. When the teacher “shocked” him with about 120 volts, Mr. Wallace began screaming in pain and eventually yelled that he wanted out. What if the teacher wanted to quit? This happened—the volunteer participants became visibly upset by the pain that Mr. Wallace seemed to be experiencing. The experimenter told the teacher that he could quit but urged him to continue, using a series of verbal prods that stressed the importance of continuing the experiment.

The study purportedly was to be an experiment on memory and learning, but Milgram really was interested in learning whether participants would continue to obey the experimenter by administering ever higher levels of shock to the learner. What happened? Approximately 65% of the participants continued to deliver shocks all the way to 450 volts.

Milgram went on to conduct several variations on this basic procedure with 856 subjects. The study received a great deal of publicity, and the results challenged many of our beliefs about our ability to resist authority. The Milgram study is important, and the results have implications for understanding obedience in real-life situations, such as the Holocaust in Nazi Germany and the Jonestown mass suicide (see Miller, 1986).

But the Milgram study is also an important example of ethics in behavioral research. How should we make decisions about whether the Milgram study or any other study is ethical? The Milgram study was one of many that played an important role in the development of ethical standards that guide our ethical decision making.

What do you think? Should the obedience study have been allowed? Were the potential risks to Milgram’s participants worth the knowledge gained by the outcomes? If you were a participant in the study, would you feel okay with having been deceived into thinking that you had harmed someone? What if it was a younger sibling? Or an elderly grandparent? Would that make a difference? Why or why not?

In this chapter, we work through some of these issues, and more. First, let us turn to an overview of the history of our current standards to help frame your understanding of ethics in research.

HISTORICAL CONTEXT OF CURRENT ETHICAL STANDARDS

Before we can delve into current ethical standards, it is useful to briefly talk about the origin of ethics codes related to behavioral research. Generally speaking, modern codes of ethics in behavioral and medical research have their origins in three important documents.

The Nuremberg Code and Declaration of Helsinki

Following World War II, the Nuremberg Trials were held to hear evidence against the Nazi doctors and scientists who had committed atrocities while forcing concentration camp inmates to be research subjects. The legal document that resulted from the trials contained what became known as the Nuremberg Code: a set of 10 rules of research conduct that would help prevent future research atrocities (see http://www.hhs.gov/ohrp/archive/nurcode.html).

The Nuremberg Code was a set of principles without any enforcement structure or endorsement by professional organizations. Moreover, it was rooted in the context of the Nazi experience and not generally seen as applicable to general research settings. Consequently, the World Medical Association developed a code that is known as the Declaration of Helsinki. This 1964 document is a broader application of the Nuremberg Code that was produced by the medical community and included a requirement that journal editors ensure that published research conform to the principles of the Declaration.

The Nuremberg Code and the Helsinki Declaration did not explicitly address behavioral research and were generally seen as applicable to medicine. In addition, by the early 1970s, news about numerous ethically questionable studies forced the scientific community to search for a better approach to protect human research subjects. Behavioral scientists were debating the ethics of the Milgram studies and the world was learning about the Tuskegee Syphilis Study, in which 399 African American men in Alabama were not treated for syphilis in order to track the long-term effects of this disease (Reverby, 2000). This study, supported by the U.S. Public Health Service, took place from 1932 to 1972, when the details of the study were made public by journalists investigating the study. The outrage over the fact that this study was done at all and that the subjects were African Americans spurred scientists to overhaul ethical regulations in both medical and behavioral research. The fact that the Tuskegee study was not an isolated incident was brought to light in 2010 when documentation of another syphilis study done from 1946 to 1948 in Guatemala was discovered (Reverby, 2011). Men and women in this study were infected with syphilis and then treated with penicillin. Reverby describes the study in detail and focuses on one doctor who was involved in both the Guatemala and Tuskegee studies.

As a result of new public demand for action, a committee was formed that eventually produced the  Belmont Report . Current ethical guidelines for both behavioral and medical researchers have their origins in The Belmont Report: Ethical Principles and Guidelines for the Protection of Human Subjects of Research (National Commission for the Protection of Human Subjects of Biomedical and Behavioral Research, 1979). This report defined the principles and applications that have guided more detailed regulations developed by the American Psychological Association and other professional societies and U.S. federal regulations that apply to both medical and behavioral research investigations.

The three basic ethical principles of the Belmont Report are:

· Beneficence—research should confer benefits and risks must be minimal. The associated application is the necessity to conduct a risk-benefit analysis.

· Respect for persons (autonomy)—participants are treated as autonomous; they are capable of making deliberate decisions about whether to participate in research. The associated application is informed consent—potential participants in a research project should be provided with all information that might influence their decision on whether to participate.

· Justice—there must be fairness in receiving the benefits of research as well as bearing the burdens of accepting risks. This principle is applied in the selection of subjects for research.

APA ETHICS CODE

The American Psychological Association (APA) has provided leadership in formulating ethical principles and standards. The Ethical Principles of Psychologists and Code of Conduct—known as the  APA Ethics Code —is amended periodically with the current version always available online at http://apa.org/ethics/code. The Ethics Code applies to psychologists in their many roles including teachers, researchers, and practitioners. We have included the sections relevant to research in Appendix B.

APA Ethics Code: Five Principles

The APA Ethics Code includes five general ethical principles: beneficence and nonmaleficence, fidelity and responsibility, integrity, justice, and respect for rights and responsibilities. Next, we will discuss the ways that these principles relate to research practice.

Principle A: Beneficence and Nonmaleficence As in the Belmont Report, the principle of Beneficence refers to the need for research to maximize benefits and minimize any possible harmful effects of participation. The Ethics Code specifically states: “Psychologists strive to benefit those with whom they work and take care to do no harm. In their professional actions, psychologists seek to safeguard the welfare and rights of those with whom they interact professionally and other affected persons and the welfare of animal subjects of research.”

Principle B: Fidelity and Responsibility The principle of Fidelity and Responsibility states: “Psychologists establish relationships of trust with those with whom they work. They are aware of their professional and scientific responsibilities to society and to the specific communities in which they work.” For researchers, such trust is primarily applicable to relationships with research participants.

Researchers make several implicit contracts with participants during the course of a study. For example, if participants agree to be present for a study at a specific time, the researcher should also be there. If researchers promise to send a summary of the results to participants, they should do so. If participants are to receive course credit for participation, the researcher must immediately let the instructor know that the person took part in the study. These may seem to be little details, but they are very important in maintaining trust between participants and researchers.

Principle C: Integrity The principle of Integrity states: “Psychologists seek to promote accuracy, honesty and truthfulness in the science, teaching and practice of psychology. In these activities psychologists do not steal, cheat or engage in fraud, subterfuge or intentional misrepresentation of fact.” Later in this chapter, we will cover the topic of integrity in the context of being an ethical researcher.

Principle D: Justice As in the Belmont Report, the principle of Justice refers to fairness and equity. Principle D states: “Psychologists recognize that fairness and justice entitle all persons to access to and benefit from the contributions of psychology and to equal quality in the processes, procedures and services being conducted by psychologists.”

Consider the Tuskegee Syphilis study, or the similar study conducted in Guatemala. In both cases there was a cure for syphilis (i.e., penicillin) that was withheld from participants. This is a violation of principle D of the APA Ethics Code and a violation of the Belmont Report’s principle of Justice.

Principle E: Respect for People’s Rights and Dignity The last of the five APA ethical principles builds upon the Belmont Report principle of Respect for Persons. It states: “Psychologists respect the dignity and worth of all people, and the rights of individuals to privacy, confidentiality, and self-determination. Psychologists are aware that special safeguards may be necessary to protect the rights and welfare of persons or communities whose vulnerabilities impair autonomous decision making. Psychologists are aware of and respect cultural, individual, and role differences, including those based on age, gender, gender identity, race, ethnicity, culture, national origin, religion, sexual orientation, disability, language, and socioeconomic status, and consider these factors when working with members of such groups. Psychologists try to eliminate the effect on their work of biases based on those factors, and they do not knowingly participate in or condone activities of others based upon such prejudices.”

One of the ethical dilemmas in the Milgram obedience study was the fact that participants did not know that they were participating in a study of obedience. This limited participants’ rights to self-determination. Later, we will explore this issue in depth.

Protecting Research Subjects

The preamble to the APA Ethics Code states: “Psychologists are committed to increasing scientific and professional knowledge of behavior and people’s understanding of themselves and others and to the use of such knowledge to improve the condition of individuals, organizations and society.” By internalizing and adhering to ethical principles we support and nurture a healthy science. With this in mind, we will consider the ways in which research subjects—humans and animals—are protected in behavioral research.

ASSESSMENT OF RISKS AND BENEFITS

The principle of beneficence leads us to examine potential risks and benefits that are likely to result from the research; this is called a  risk-benefit analysis.  Ethical principles require asking whether the research procedures have minimized risk to participants.

The potential risks to the participants include such factors as psychological or physical harm and loss of confidentiality; we will discuss these in detail. In addition, the cost of not conducting the study, if in fact the proposed procedure is the only way to collect potentially valuable data, can be considered (cf. Christensen, 1988). The benefits include direct benefits to the participants, such as an educational benefit, acquisition of a new skill, or treatment for a psychological or medical problem. There may also be material benefits such as a monetary payment, some sort of gift, or even the possibility of winning a prize in a raffle. Other less tangible benefits include the satisfaction gained through being part of a scientific investigation and the potential beneficial applications of the research findings (e.g., the knowledge gained through the research might improve future educational practices, psychotherapy, or social policy). As we will see, current regulations concerning the conduct of research with human participants require a risk-benefit analysis before research can be approved.

Risks in Behavioral Research

Let’s return to a consideration of Milgram’s research. The risk of experiencing stress and psychological harm is obvious. It is not difficult to imagine the effect of delivering intense shocks to an obviously unwilling learner. A film that Milgram made shows participants protesting, sweating, and even laughing nervously while delivering the shocks. You might ask whether subjecting people to such a stressful experiment is justified, and you might wonder whether the experience had any long-range consequences for the volunteers. For example, did participants who obeyed the experimenter feel continuing remorse or begin to see themselves as cruel, inhumane people? Let’s consider some common risks in behavioral research.

Physical harm Procedures that could conceivably cause some physical harm to participants are rare but possible. Many medical procedures fall into this category, for example, administering a drug such as alcohol or caffeine. Other studies might expose subjects to physical stressors such as loud noise, extreme hot or cold temperatures, or deprivation of sleep for an extended period of time. The risks in such procedures require that great care be taken to make them ethically acceptable. Moreover, there would need to be clear benefits of the research that would outweigh the potential risks.

Stress More common than physical stress is psychological stress. The participants in the Milgram study were exposed to a high level of stress; they believed that they were delivering fatal doses of electricity to another person. Milgram described one of his participants:

While continuing to read the word pairs with a show of outward strength, she mutters in a tone of helplessness to the experimenter, “Must I go on? Oh, I’m worried about him. Are we going all the way up there (pointing to the higher end of the generator)? Can’t we stop? I’m shaking. I’m shaking. Do I have to go up there?”

She regains her composure temporarily but then cannot prevent periodic outbursts of distress (Milgram, 1974, p. 80).

There are other examples. For instance, participants might be told that they will receive some extremely intense electric shocks. They never actually receive the shocks; it is the fear or anxiety during the waiting period that is the variable of interest. Research by Schachter (1959) employing a procedure like this showed that the anxiety produced a desire to affiliate with others during the waiting period.

In another procedure that produces psychological stress, participants are given unfavorable feedback about their personalities or abilities. Researchers may administer a test described as a measure of social intelligence and then tell participants that they scored very high or very low. The impact of this feedback can then be studied. Asking people about traumatic or unpleasant events in their lives might also cause stress for some participants. Thus, research that asks people to think about the deaths of a parent, spouse, or friend, or their memories of living through a disaster could trigger a stressful reaction.

When using procedures that may create psychological distress, the researcher must ask whether all safeguards have been taken to help participants deal with the stress. Usually a debriefing session following the study is designed in part to address any potential problems that may arise during the research.

Confidentiality and privacy Another risk is the loss of expected privacy and confidentiality. Confidentiality is an issue when the researcher has assured subjects that the collected data are only accessible to people with permission, generally only the researcher. This becomes particularly important when studying topics such as sexual behavior, divorce, family violence, or drug abuse; in these cases, researchers may need to ask people very sensitive questions about their private lives. Or consider a study that obtained information about employees’ managers. It is extremely important that responses to such questions be confidential; revealing the responses of an individual could result in real harm. In most cases, researchers will attempt to avoid confidentiality problems by making sure that the responses are completely anonymous—there is no way to connect any person’s identity with the data. This happens, for example, when questionnaires are administered to groups of people and no information is asked that could be used to identify an individual (such as name, taxpayer identification number, email address, or phone number). However, in other cases, such as a personal interview in which the identity of the person might be known, the researcher must carefully plan ways of coding data, storing data, and explaining the procedures to participants so that there is no question concerning the confidentiality of responses.

Invasion of privacy becomes an issue when the researcher collects information under circumstances that the subject believes are private—free from unwanted observation by others. In some studies, researchers make observations of behavior in public places without informing the people being observed. Observing people as they are walking in a public space, stopped at a traffic light, or drinking in a bar does not seem to present any major ethical problems. However, what if a researcher wishes to observe behavior in more private settings or in ways that may violate individuals’ privacy (see Wilson & Donnerstein, 1976)? For example, would it be ethical to rummage through people’s trash or watch people in public restrooms? The Internet has posed other issues of privacy. Every day, thousands of people post messages on websites. The messages can potentially be used as data to understand attitudes, disclosure of personal information, and expressions of emotion. Many messages are public postings, much like a letter sent to a newspaper or magazine. But consider websites devoted to psychological and physical problems that people seek out for information and support. Many of these sites require registration to post messages. Consider a researcher interested in using one of these sites for data. What ethical issues arise in this case? Buchanan and Williams (2010) address these and other ethical issues that arise when doing research using the Internet.

INFORMED CONSENT

Recall Principle E of the APA Ethics Code (Respect for People’s Rights and Dignity)—research participants are to be treated as autonomous. They are capable of making deliberate decisions about whether to participate in research. The key idea here is informed consent—potential participants in a research project should be provided with all information that might influence their active decision of whether or not to participate in a study. Thus, research participants should be informed about the purposes of the study, the risks and benefits of participation, and their rights to refuse or terminate participation in the study. They can then freely consent or refuse to participate in the research.

Informed Consent Form

Participants are usually provided with some type of informed consent form that contains the information that participants need to make their decision. Most commonly, the form is presented for the participant to read and agree to. There are numerous examples of informed consent forms available on the Internet. Your college may have developed examples through the research office. A checklist for an informed consent form is provided in Figure 3.1. Note that the checklist addresses both content and format. The content will typically cover (1) the purpose of the research, (2) procedures that will be used including time involved (remember that you do not need to tell participants exactly what is being studied), (3) risks and benefits, (4) any compensation, (5) confidentiality, (6) assurance of voluntary participation and permission to withdraw, and (7) contact information for questions.

 

FIGURE 3.1

Creating an informed consent form

The form must be written so that participants understand the information in the form. In some cases, consent forms have been so technical or loaded with legal terminology that it is very unlikely that the participants fully realized what they were signing. In general, consent forms should be written in simple and straightforward language that avoids jargon and technical terminology (generally at a sixth- to eighth-grade reading level; most word processors provide grade-level information with the Grammar Check feature). To make the form easier to understand, it should not be written in the first person. Instead, information should be provided as if the researcher were simply having a conversation with the participant. Thus, the form might say “Participation in this study is voluntary. You may decline to participate without penalty,” instead of “I understand that participation in this study is voluntary. I may decline to participate without penalty.” The first statement provides information to the participant in a straightforward way using the second person (“you”), whereas the second statement has a legalistic tone that may be more difficult to understand. Finally, if participants are non-English speakers, they should receive a translated version of the form.
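If you want to check a draft consent form’s reading level outside a word processor, here is a minimal sketch using the third-party textstat package; the sample text is hypothetical, and the sixth- to eighth-grade target comes from the paragraph above.

```python
# Sketch: estimate the reading grade level of a draft consent form.
# Requires the third-party "textstat" package (pip install textstat).
import textstat

consent_text = (
    "Participation in this study is voluntary. "
    "You may decline to participate without penalty. "
    "You may stop at any time by telling the researcher."
)

grade = textstat.flesch_kincaid_grade(consent_text)  # Flesch-Kincaid grade level
print(f"Estimated Flesch-Kincaid grade level: {grade:.1f}")
if grade > 8:
    print("Above the suggested sixth- to eighth-grade level; consider simpler wording.")
```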

Autonomy Issues

Informed consent seems simple enough; however, there are important issues to consider. The first concerns lack of autonomy. What happens when the participants lack the ability to make a free and informed decision to voluntarily participate? Special populations such as minors, patients in psychiatric hospitals, or adults with cognitive impairments require special precautions. When minors are asked to participate, for example, a written consent form signed by a parent or guardian is generally required in addition to agreement by the minor; this agreement by a minor is formally called assent. The Society for Research on Child Development has established guidelines for ethical research with children (see http://www.srcd.org/about-us/ethical-standards-research).

Coercion is another threat to autonomy. Any procedure that limits an individual’s freedom to consent is potentially coercive. For example, a supervisor who asks employees to fill out a survey during a staff meeting or a professor requiring students to participate in a study in order to pass the course is applying considerable pressure on potential participants. The employees may believe that the supervisor will somehow punish them if they do not participate; they also risk embarrassment if they refuse in front of co-workers. Sometimes benefits are so great that they become coercive. For example, a prisoner may believe that increased privileges or even a favorable parole decision may result from participation. Sometimes even an incentive can be seen as coercive—imagine being offered $1,000 to participate in a study. Researchers must consider these issues and make sure that autonomy is preserved.


Withholding Information and Deception

It may have occurred to you that providing all information about the study to participants might be unwise. Providing too much information could potentially invalidate the results of the study; for example, researchers usually will withhold information about the hypothesis of the study or the particular condition an individual is participating in (see Sieber, 1992). It is generally acceptable to withhold information when the information would not affect the decision to participate and when the information will later be provided, usually in a debriefing session when the study is completed. Most people who volunteer for psychology research do not expect full disclosure about the study prior to participation. However, they do expect a thorough debriefing after they have completed the study. Debriefing will be described after we consider the more problematic issue of deception.

It may also have occurred to you that there are research procedures in which informed consent is not necessary or even possible. If you choose to observe the number of same-sex and mixed-sex study groups in your library, you probably do not need to announce your presence and obtain anyone’s permission. If you study the content of the self-descriptions that people write for an online dating service, do you need to contact each person to include their information in your study? When planning research, it is important to make sure that you do have good reasons not to obtain informed consent.

In research, deception occurs when there is active misrepresentation of information about the nature of a study. The Milgram experiment illustrates two types of deception. First, as noted earlier, participants were deceived about the purpose of the study. Participants in the Milgram experiment agreed to take part in a study of memory and learning, but they actually took part in a study on obedience. Who could imagine that a memory and learning experiment (that title does sound tame, after all) would involve delivering high-intensity, painful electric shocks to another person? Participants in the Milgram experiment did not know what they were letting themselves in for.

The Milgram study was conducted before informed consent was routine; however, you can imagine that Milgram’s consent form would inaccurately have participants agree to be in a memory study. They would also be told that they are free to withdraw from the study at any time. Is it possible that the informed consent procedure would affect the outcome of the study? Knowledge that the research is designed to study obedience would likely alter the behavior of the participants. Few of us like to think of ourselves as obedient, and we would probably go out of our way to prove that we are not. Research indicates that providing informed consent may in fact bias participants’ responses, at least in some research areas. For example, research on stressors such as noise or crowding has shown that a feeling of “control” over a stressor reduces its negative impact. If you know that you can terminate a loud, obnoxious noise, the noise produces less stress than when the noise is uncontrollable. Studies by Gardner (1978) and Dill, Gilden, Hill, and Hanslka (1982) have demonstrated that informed consent procedures do increase perceptions of control in stress experiments and therefore can affect the conclusions drawn from the research.

It is also possible that the informed consent procedure may bias the sample. In the Milgram experiment, if participants had prior knowledge that they would be asked to give severe shocks to the other person, some might have declined to be in the experiment. Therefore, we might limit our ability to generalize the results only to those “types” who agreed to participate. If this were true, anyone could say that the obedient behavior seen in the Milgram experiment occurred simply because the people who agreed to participate were sadists in the first place!

Second, the Milgram study also illustrates a type of deception in which participants become part of a series of events staged for the purposes of the study. A confederate of the experimenter played the part of another participant in the study; Milgram created a reality for the participant in which obedience to authority could be observed. Such deception has been most common in social psychology research; it is much less frequent in areas of experimental psychology such as human perception, learning, memory, and motor performance. Even in these areas, researchers may use a cover story to make the experiment seem plausible and involving (e.g., telling participants that they are reading actual newspaper stories for a study on readability when the true purpose is to examine memory errors or organizational schemes).

The problem of deception is not limited to laboratory research. Procedures in which observers conceal their purposes, presence, or identity are also deceptive. For example, Humphreys (1970) studied the sexual behavior of men who frequented public restrooms (called tearooms). Humphreys did not directly participate in sexual activities, but he served as a lookout who would warn the others of possible intruders. In addition to observing the activities in the tearoom, Humphreys wrote down license plate numbers of tearoom visitors. Later, he obtained the addresses of the men, disguised himself, and visited their homes to interview them. Humphreys’ procedure is certainly one way of finding out about anonymous sex in public places, but it employs considerable deception.

Is Deception a Major Ethical Problem in Psychological Research?

Many psychologists believe that the problem of deception has been exaggerated (Bröder, 1998; Kimmel, 1998; Korn, 1998; Smith & Richardson, 1985). Bröder argues that the extreme examples of elaborate deception cited by these critics are rare.

In the decades since the Milgram experiments, some researchers have attempted to assess the use of deception to see if elaborate deception has indeed become less common. Because most of the concern over this type of deception arises in social psychological research, attempts to address this issue have focused on social psychology. Gross and Fleming (1982) reviewed 691 social psychological studies published in the 1960s and 1970s. Although most research in the 1970s still used deception, the deception primarily involved false cover stories.

Has the trend away from deception continued? Sieber, Iannuzzo, and Rodriguez (1995) examined the studies published in the Journal of Personality and Social Psychology in 1969, 1978, 1986, and 1992. The number of studies that used some form of deception decreased from 66% in 1969 to 47% in 1978 and to 32% in 1986 but increased again to 47% in 1992. The large drop in 1986 may be due to an increase that year in the number of studies on such topics as personality that require no deception to carry out. Also, informed consent was more likely to be explicitly described in 1992 than in previous years, and debriefing was more likely to be mentioned in the years after 1969. However, false cover stories are still frequently used. Korn (1997) has also concluded that use of deception is decreasing in social psychology.

There are three primary reasons for a decrease in the type of elaborate deception seen in the Milgram study. First, more researchers have become interested in cognitive variables rather than emotions and so use methods that are similar to those used by researchers in memory and cognitive psychology. Second, the general level of awareness of ethical issues as described in this chapter has led researchers to conduct studies in other ways. Third, ethics committees at universities and colleges now review proposed research more carefully, so elaborate deception is likely to be approved only when the research is important and there are no alternative procedures available (ethics review boards are described later in this chapter).

THE IMPORTANCE OF DEBRIEFING

Debriefing occurs after the completion of a study. It is an opportunity for the researcher to deal with issues of withholding information, deception, and potential harmful effects of participation. Debriefing is one way that researchers can follow the guidelines in the APA Ethics Code, particularly Principles B (Fidelity and Responsibility), C (Integrity), and E (Respect for People’s Rights and Dignity).

If participants were deceived in any way, the researcher needs to explain why the deception was necessary. If the research altered a participant’s physical or psychological state in some way—as in a study that produces stress—the researcher must make sure that the participant has calmed down and is comfortable about having participated. If a participant needs to receive additional information or to speak with someone else about the study, the researcher should provide access to these resources. The participants should leave the experiment without any ill feelings toward the field of psychology, and they may even leave with some new insight into their own behavior or personality.

Debriefing also provides an opportunity for the researcher to explain the purpose of the study and tell participants what kinds of results are expected and perhaps discuss the practical implications of the results. In some cases, researchers may contact participants later to inform them of the actual results of the study. Thus, debriefing has both an educational and an ethical purpose.

The Milgram study can also teach us something about the importance of debriefing. Milgram described a very thorough debriefing. However, an examination of original records and interviews with subjects by Perry (2013) reveals that often the debriefing was little more than seeing that Mr. Wallace was indeed not harmed. Many subjects were rushed from the lab; some did not even learn that no shocks were actually administered but only found that out when Milgram mailed a report of his research findings to the subjects 6 months after data collection was completed (and some never received the letter). Today we would consider Milgram’s less than thorough debriefing immediately following the experiment to be a real problem with his research procedure.

Despite all the problems of the stress of the procedure and the rather sloppy debriefing, most of the subjects in the Milgram studies were positive about their experience. The letter that Milgram sent with a detailed report of the study included a questionnaire to assess subjects’ reactions to the experiment; 92% of the subjects returned the questionnaire. The responses showed that 84% were glad that they had participated, and 74% said they had benefited from the experience. Only 1% said they were sorry they had participated (Blass, 2004). Other researchers who have conducted further work on the ethics of the Milgram study reached the same conclusion (Ring, Wallston, & Corey, 1970).

More generally, research on the effectiveness of debriefing indicates that debriefing is an effective way of dealing with deception and other ethical issues that arise in research investigations (Oczak, 2007; Smith, 1983; Smith & Richardson, 1983). There is some evidence that in at least some circumstances, the debriefing needs to be thorough to be effective. In a study on debriefing by McFarland, Cheam, and Buehler (2007), participants were given false feedback about their ability to accurately judge whether suicide notes were genuine. After making their judgments, participants were told that they had succeeded or failed at the task. The researchers then gave different types of debriefing. A minimal debriefing only mentioned that the feedback they received was not based on their performance at all. A more thorough debriefing also included information that the suicide notes were not real. Participants who received the additional information had a more accurate assessment of their ability than did subjects receiving the minimal debriefing procedure.

INSTITUTIONAL REVIEW BOARDS

While the Belmont Report provided an outline for issues of research ethics and the APA Ethics Code provides guidelines as well, the actual rules and regulations for the protection of human research participants were issued by the U.S. Department of Health and Human Services (HHS). Under these regulations (U.S. Department of Health and Human Services, 2001), every institution that receives federal funds must have an Institutional Review Board (IRB) that is responsible for the review of research conducted within the institution. IRBs are local review agencies composed of at least five individuals; at least one member of the IRB must be from outside the institution. Every college and university in the United States that receives federal funding has an IRB; in addition, most psychology departments have their own research review committee (Chastain & Landrum, 1999). All research conducted by faculty, students, and staff associated with the institution is reviewed in some way by the IRB. This includes research that may be conducted at another location such as a school, community agency, hospital, or via the Internet.

The federal regulations for IRB oversight of research continue to evolve. For example, all researchers must now complete specified educational requirements. Most colleges and universities require students and faculty to complete one or more online tutorials on research ethics to meet these requirements.

The HHS regulations also categorized research according to the amount of risk involved in the research. This concept of risk was later incorporated into the Ethics Code of the American Psychological Association.

Exempt Research

Research in which there is no risk is exempt from review. Thus, anonymous questionnaires, surveys, and educational tests are all considered exempt research, as is naturalistic observation in public places when there is no threat to anonymity. Archival research in which the data being studied are publicly available or the participants cannot be identified is exempt as well. This type of research requires no informed consent. However, researchers cannot decide by themselves that research is exempt; instead, the IRB at the institution formulates a procedure to allow a researcher to apply for exempt status.

Minimal Risk Research

A second type of research activity is called minimal risk, which means that the risks of harm to participants are no greater than risks encountered in daily life or in routine physical or psychological tests. When minimal risk research is being conducted, elaborate safeguards are less of a concern, and approval by the IRB is routine. Some of the research activities considered minimal risk are (1) recording routine physiological data from adult participants (e.g., weighing, tests of sensory acuity, electrocardiography, electroencephalography, diagnostic echography, and voice recordings)—note that this would not include recordings that might involve invasion of privacy; (2) moderate exercise by healthy volunteers; and (3) research on individual or group behavior or characteristics of individuals—such as studies of perception, cognition, game theory, or test development—in which the researcher does not manipulate participants’ behavior and the research will not involve stress to participants.

Greater Than Minimal Risk Research

Any research procedure that places participants at greater than minimal risk is subject to thorough review by the IRB. Complete informed consent and other safeguards may be required before approval is granted.

Researchers planning to conduct an investigation are required to submit an application to the IRB. The application requires description of risks and benefits, procedures for minimizing risk, the exact wording of the informed consent form, how participants will be debriefed, and procedures for maintaining confidentiality. Even after a project is approved, there is continuing review. If it is a long-term project, it will be reviewed at least once each year. If there are any changes in procedures, researchers are required to obtain approval from the IRB. The three risk categories are summarized in Table 3.1.


TABLE 3.1 Assessment of risk

 

RESEARCH WITH NONHUMAN ANIMAL SUBJECTS

Although much of this chapter has been concerned with the ethics of research with humans, you are no doubt well aware that psychologists sometimes conduct research with animals (Akins, Panicker, & Cunningham, 2005). Animals are used in behavioral research for a variety of reasons. Researchers can carefully control the environmental conditions of the animals, study the same animals over a long period, and monitor their behavior 24 hours a day if necessary. Animals are also used to test the effects of drugs and to study physiological and genetic mechanisms underlying behavior.

About 7% of the articles in Psychological Abstracts (now PsycINFO) in 1979 described studies involving nonhuman animals (Gallup & Suarez, 1985), and data indicate that the amount of research done with animals has been steadily declining (Thomas & Blackman, 1992). Most commonly, psychologists work with rats and mice and, to a lesser extent, birds (usually pigeons); according to surveys of animal research in psychology journals, over 95% of the animals used in research are rats, mice, and birds (see Gallup & Suarez, 1985; Viney, King, & Berndt, 1990). Some of the decline in animal research is attributed to increased interest in conducting cognitive research with human participants (Viney et al., 1990). This interest in cognition is now extending to research with dogs. Canine cognition labs have been growing at universities in the United States, Canada, and around the world (e.g., Yale, Harvard, Duke, Barnard, University of Florida, University of Western Ontario; see, for example, dogcognition.com). Typically the subjects are family pets that are brought to the lab by their owners.

In recent years, groups opposed to animal research in medicine, psychology, biology, and other sciences have become more vocal and active. Animal rights groups have staged protests at conventions of the American Psychological Association, animal research laboratories in numerous cities have been vandalized, and researchers have received threats of physical harm.

Scientists argue that animal research benefits humans and point to many discoveries that would not have been possible without animal research (Carroll & Overmier, 2001; Miller, 1985). Also, animal rights groups often exaggerate the amount of research that involves any pain or suffering whatsoever (Coile & Miller, 1984).

Plous (1996a, 1996b) conducted a national survey of attitudes toward the use of animals in research and education among psychologists and psychology majors. The attitudes of both psychologists and psychology students were quite similar. In general, there is support for animal research: 72% of the students support such research, 18% oppose it, and 10% are unsure (the psychologists “strongly” support animal research more than the students, however). In addition, 68% believe that animal research is necessary for progress in psychology. Still, there is some ambivalence and uncertainty about the use of animals: When asked whether animals in psychological research are treated humanely, 12% of the students said “no” and 44% were “unsure.” In addition, research involving rats or pigeons was viewed more positively than research with dogs or primates unless the research is strictly observational. Plous concluded that animal research in psychology will continue to be important for the field but will likely continue to decline as a proportion of the total amount of research conducted.

Animal research is indeed very important and will continue to be necessary for studying many types of research questions. It is crucial to recognize that strict laws and ethical guidelines govern both research with animals and teaching procedures in which animals are used. Such regulations deal with the need for proper housing, feeding, cleanliness, and health care. They specify that the research must avoid any cruelty in the form of unnecessary pain to the animal. In addition, institutions in which animal research is carried out must have an Institutional Animal Care and Use Committee (IACUC) composed of at least one scientist, one veterinarian, and a community member. The IACUC is charged with reviewing animal research procedures and ensuring that all regulations are adhered to (see Holden, 1987).

The APA Ethics Code (see Appendix B) addresses the ethical responsibilities of researchers when studying nonhuman animals. APA has also developed a more detailed Guidelines for Ethical Conduct in the Care and Use of Nonhuman Animals (http://www.apa.org/science/leadership/care/guidelines.aspx). Clearly, psychologists are concerned about the welfare of animals used in research. Nonetheless, this issue likely will continue to be controversial.

BEING AN ETHICAL RESEARCHER: THE ISSUE OF MISREPRESENTATION

Principle C of the APA Ethics Code focuses on integrity. The ethical researcher acts with integrity and in so doing does not engage in misrepresentation. Here we will explore two specific types of misrepresentation: fraud and plagiarism.

Fraud

The fabrication of data is fraud. We must be able to believe the reported results of research; otherwise, the entire foundation of the scientific method as a means of knowledge is threatened. In fact, although fraud may occur in many fields, it probably is most serious in two areas: science and journalism. This is because science and journalism are both fields in which written reports are assumed to be accurate descriptions of actual events. There are no independent accounting agencies to check on the activities of scientists and journalists.

Instances of fraud in the field of psychology are considered to be very serious (cf. Hostetler, 1987; Riordan & Marlin, 1987), but fortunately, they are very rare (Murray, 2002). Perhaps the most famous case is that of Sir Cyril Burt, who reported that the IQ scores of identical twins reared apart were highly similar. The data were used to support the argument that genetic influences on IQ are extremely important. However, Kamin (1974) noted some irregularities in Burt’s data. A number of correlations for different sets of twins were exactly the same to the third decimal place, virtually a mathematical impossibility. This observation led to the discovery that some of Burt’s presumed co-workers had not in fact worked with him or had simply been fabricated. Ironically, though, Burt’s “data” were close to what has been reported by other investigators who have studied the IQ scores of twins.
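Kamin’s statistical red flag can be made concrete with a small simulation. The sketch below is purely illustrative and does not use Burt’s data: it assumes a hypothetical true twin correlation and hypothetical sample sizes, and simply estimates how often two independently collected samples would happen to produce Pearson correlations that agree to the third decimal place.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulated_r(n, true_r):
    """Draw n twin pairs from a bivariate normal with correlation true_r
    and return the sample Pearson correlation."""
    cov = [[1.0, true_r], [true_r, 1.0]]
    x, y = rng.multivariate_normal([0.0, 0.0], cov, size=n).T
    return np.corrcoef(x, y)[0, 1]

true_r = 0.77        # hypothetical "true" correlation for identical twins
n1, n2 = 20, 50      # hypothetical sample sizes for two separate studies
trials = 20_000

matches = sum(
    round(simulated_r(n1, true_r), 3) == round(simulated_r(n2, true_r), 3)
    for _ in range(trials)
)
print(f"Proportion of runs agreeing to 3 decimals: {matches / trials:.4f}")
```

Under these assumptions the agreement rate comes out well under 1 percent, which is why a string of identical three-decimal correlations across supposedly different samples looks far more like copying than like chance.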

In most cases, fraud is detected when other scientists cannot replicate the results of a study. Suspicions of fabrication of research data by social psychologist Karen Ruggiero arose when other researchers had difficulty replicating her published findings. The researcher subsequently resigned from her academic position and retracted her research findings (Murray, 2002). Sometimes fraud is detected by a colleague or by students who worked with the researcher. For example, Stephen Breuning was guilty of faking data showing that stimulants could be used to reduce hyperactive and aggressive behavior in children (Byrne, 1988). In this case, another researcher who had worked closely with Breuning had suspicions about the data; he then informed the federal agency that had funded the research.

A recent case of extensive fraud that went undetected for years involves a social psychologist at Tilburg University in the Netherlands (Verfaellie & McGwin, 2011). Diederik Stapel not only created data that changed the outcome of studies that were conducted, he also reported results of studies that were never conducted at all. His studies were published in prestigious journals and often reported in popular news outlets because his research reported intriguing findings (e.g., being in a messy, disorderly environment results in more stereotypical and discriminatory thoughts). Students eventually reported their suspicions to the university administration, but the fact that Stapel’s misconduct continued for so long is certainly troublesome. According to a committee that investigated Stapel, one cause was the fact that the professor was powerful, prestigious, and charismatic. He would work closely with students to design studies but then collect the data himself. He would invite a colleague to take his existing data set to analyze and write a report. These are highly unusual practices, but his students and colleagues did not question him.

Fraud is not a major problem in science in part because researchers know that others will read their reports and conduct further studies, including replications. They know that their reputations and careers will be seriously damaged if other scientists conclude that the results are fraudulent. In addition, the likelihood of detection of fraud has increased in recent years as data accessibility has become more open: Regulations of most funding agencies require researchers to make their data accessible to other scientists.

Why, then, do researchers sometimes commit fraud? For one thing, scientists occasionally find themselves in jobs with extreme pressure to produce impressive results. This is not a sufficient explanation, of course, because many researchers maintain high ethical standards under such pressure. Another reason is that researchers who feel a need to produce fraudulent data have an exaggerated fear of failure, as well as a great need for success and the admiration that comes with it. Every report of scientific misconduct includes a discussion of motivations such as these.

One final point: Allegations of fraud should not be made lightly. If you disagree with someone’s results on philosophical, political, religious, or other grounds, it does not mean that they are fraudulent. Even if you cannot replicate the results, the reason may lie in aspects of the methodology of the study rather than deliberate fraud. However, the fact that fraud could be a possible explanation of results stresses the importance of careful record keeping and documentation of the procedures and results.

Plagiarism

Plagiarism refers to misrepresenting another’s work as your own. Writers must give proper citation of sources. Plagiarism can take the form of submitting an entire paper written by someone else; it can also mean including a paragraph or even a sentence that is copied without using quotation marks and a reference to the source of the quotation. Plagiarism also occurs when you present another person’s ideas as your own rather than properly acknowledging the source of the ideas. Thus, even if you paraphrase the actual words used by a source, it is plagiarism if the source is not cited.

Although plagiarism is certainly not a new problem, access to Internet resources and the ease of copying material from the Internet may be increasing its prevalence. In fact, Szabo and Underwood (2004) report that more than 50% of a sample of British university students believe that using Internet resources for academically dishonest activities is acceptable. It is little wonder that many schools are turning to computer-based mechanisms of detecting plagiarism.

It is useful to further describe plagiarism as being “word for word” or “paraphrased.” Word-for-word plagiarism occurs when a writer copies a section of another person’s work word for word without providing quotation marks indicating that the segment was written by somebody else or a citation indicating the source of the information. As an example, consider the following paragraph from Burger (2009):

“Milgram’s obedience studies have maintained a place in psychology classes and textbooks largely because of their implications for understanding the worst of human behaviors, such as atrocities, massacres, and genocide.” (Burger, 2009, p. 10).

Word-for-word plagiarism would be if a writer wrote the following in his or her work without attributing it to Burger (2009):

Since they were conducted in the 1960s, Milgram’s obedience studies have maintained a place in psychology classes and textbooks largely because of their implications for understanding the worst of human behaviors, including atrocities, massacres, and genocide.

In that example, nearly all of the text is copied from Burger (2009). Note that adding a few words, or changing a few words, does not change the fact that much of the text is taken from another source without attribution.
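The computer-based detection mentioned earlier works, at its simplest, by looking for exactly this kind of verbatim overlap. Below is a minimal, hypothetical sketch (the function names and the six-word window are our own choices, not any particular vendor’s method) that flags word sequences shared between the student passage above and the Burger (2009) source.

```python
def ngrams(text, n=6):
    """Return the set of n-word sequences in a text (case-insensitive)."""
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def overlap_report(student_text, source_text, n=6):
    """Count n-word sequences that a student passage shares with a source."""
    shared = ngrams(student_text, n) & ngrams(source_text, n)
    return {"shared_sequences": len(shared),
            "examples": [" ".join(g) for g in sorted(shared)[:3]]}

source = ("Milgram's obedience studies have maintained a place in psychology "
          "classes and textbooks largely because of their implications for "
          "understanding the worst of human behaviors, such as atrocities, "
          "massacres, and genocide.")
student = ("Since they were conducted in the 1960s, Milgram's obedience studies "
           "have maintained a place in psychology classes and textbooks largely "
           "because of their implications for understanding the worst of human "
           "behaviors, including atrocities, massacres, and genocide.")

print(overlap_report(student, source))
```

Even though a few words were added or changed, the long runs of identical six-word sequences are exactly what such a checker reports, which is why superficial rewording does not protect a writer from a plagiarism finding.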

Being an ethical writer would mean using quotation marks around sentences that were directly taken from the original source and including a citation. For instance:

Burger (2009) concluded that since they were conducted in the 1960s “Milgram’s obedience studies have maintained a place in psychology classes and textbooks largely because of their implications for understanding the worst of human behaviors, such as atrocities, massacres, and genocide.” (p. 10).

Paraphrasing is when a writer expresses the meaning of a passage of text without using the actual words of the text. So, in paraphrasing plagiarism, the words are not directly copied, but the ideas are taken without attribution. Note that there is not a particular “number or percentage of words” that moves writing from plagiarism to not being plagiarism; rather, it is the underlying idea that matters.

An example of paraphrasing plagiarism is more difficult. Let us use the same passage:

“Milgram’s obedience studies have maintained a place in psychology classes and textbooks largely because of their implications for understanding the worst of human behaviors, such as atrocities, massacres, and genocide.” (Burger, 2009, p. 10).

One example of paraphrasing plagiarism would be:

Humans are capable of many vile and reprehensible acts. The reality is that Milgram’s studies have remained important to psychology because they seem to explain these behaviors.

Here the basic idea presented is directly related to the passage in Burger (2009). In this case, ethical writing may be:

Humans are capable of many vile and reprehensible acts. The reality is that Milgram’s studies have remained important to psychology because they seem to explain these behaviors (Burger, 2009).

Figure 3.2 provides a useful guide for understanding plagiarism in your own writing, using two key questions: Did I write the words? And did I think of the idea?

 

FIGURE 3.2

Guide for avoiding plagiarism in writing

Plagiarism is wrong and can lead to many severe consequences, including academic sanctions such as a failing grade or expulsion from the school. Because plagiarism is often a violation of copyright law, it can be prosecuted as a criminal offense as well. Finally, it is interesting to note that some students believe that citing sources weakens their paper—that they are not being sufficiently original. In fact, Harris (2002) notes that student papers are actually strengthened when sources are used and properly cited.

CONCLUSION: RISKS AND BENEFITS REVISITED

You are now familiar with the ethical issues that confront researchers who study human and animal behavior. When you make decisions about research ethics, you need to consider the many factors associated with risk to the participants. Are there risks of psychological harm or loss of confidentiality? Who are the research participants? What types of deception, if any, are used in the procedure? How will informed consent be obtained? What debriefing procedures are being used? You also need to weigh the direct benefits of the research to the participants, as well as the scientific importance of the research and the educational benefits to the students who may be conducting the research for a class or degree requirement (see Figure 3.3).

These are not easy decisions. Consider a study in which a confederate posing as another subject insults the participant (Vasquez, Pederson, Bushman, Kelley, Demeestere, & Miller, 2013). The subject wrote an essay expressing attitudes on a controversial topic; subsequently, the subject heard the confederate evaluate the essay as unclear, unconvincing, and “one of the worst things I have read in a long time.” The subject could then behave aggressively in choosing the amount of hot sauce that the other person would have to consume in another part of the experiment. The insult did lead to choosing more hot sauce, particularly if the subject was given an opportunity to ruminate about it rather than being distracted by other tasks. Instances of aggression following perceived insults are common, so you can argue that this is an important topic. Do you believe that the potential benefits of the study to society and science outweigh the risks involved in the procedure?

Obviously, an IRB reviewing this study concluded that the researchers had sufficiently minimized risks to the participants such that the benefits outweighed the costs. If you ultimately decide that the costs outweigh the benefits, you must conclude that the study cannot be conducted in its current form. You may suggest alternative procedures that could make it acceptable. If the benefits outweigh the costs, you will likely decide that the research should be carried out. Your calculation might differ from another person’s calculation, which is precisely why having ethics review boards is such a good idea. An appropriate review of research proposals makes it highly unlikely that unethical research will be approved.


 

FIGURE 3.3

Analysis of risks and benefits

Ethical guidelines and regulations evolve over time. The APA Ethics Code and federal, state, and local regulations may be revised periodically. Researchers need to always be aware of the most current policies and procedures. In the following chapters, we will discuss many specific procedures for studying behavior. As you read about these procedures and apply them to research you may be interested in, remember that ethical considerations are always paramount.

At the time when Stanley Milgram was conceptualizing his obedience experiments, there were no institutional review boards. If there had been, it might have been a difficult study to get approved. Participants were not informed of the purpose of the study (indeed, they were deceived into thinking that it was a study of learning), and they were also deceived into thinking that they were harming another person. The struggle is, of course, that if participants had known the true nature of the study, or that they were not really delivering electric shocks, the results would not have been as meaningful.

The Milgram study was partially replicated by Burger in 2009. That study is included as the Illustrative Article for this chapter.


ILLUSTRATIVE ARTICLE: REPLICATION OF MILGRAM

Burger (2009) conducted a partial replication of the classic Stanley Milgram obedience studies.

First, acquire and read the article:

Burger, J. M. (2009). Replicating Milgram: Would people still obey today? American Psychologist, 64(1), 1–11. doi:10.1037/a0010932

Then, after reading the article, consider the following:

1. Conduct an informal risk-benefit analysis. What are the risks and benefits inherent in this study as described?

2. Do you think that the study is ethically justifiable given your analysis? Why or why not?

3. How did Burger screen participants in the study? What was the purpose of the screening procedure?

4. Burger paid participants $50 for two 45-minute sessions. Could this be considered coercive? Why or why not?

5. Describe the risks to research participants in Burger’s study.

6. Burger uses deception in this study. Is it acceptable? Do you believe that the debriefing session described in the report adequately addresses the issues of deception?

Study Terms

APA Ethics Code (p. 47)

Autonomy (Belmont Report) (p. 46)

Belmont Report (p. 46)

Beneficence (Belmont Report) (p. 47)

Confidentiality (p. 50)

Debriefing (p. 56)

Deception (p. 54)

Exempt research (p. 58)

Fidelity and Responsibility (p. 47)

Fraud (p. 61)

IACUC (p. 61)

Informed consent (p. 51)

Institutional Review Board (IRB) (p. 57)

Integrity (p. 48)

Justice (Belmont Report) (p. 48)

Minimal risk research (p. 58)

Paraphrasing plagiarism (p. 63)

Plagiarism (p. 62)

Respect for persons (p. 48)

Risk (p. 49)

Risk-benefit analysis (p. 49)

Word-for-word plagiarism (p. 63)


Review Questions

1. Discuss the major ethical issues in behavioral research including risks, benefits, deception, debriefing, informed consent, and justice. How can researchers weigh the need to conduct research against the need for ethical procedures?

2. Why is informed consent an ethical principle? What are the potential problems with obtaining fully informed consent?

3. What alternatives to deception are described in the text?

4. Summarize the principles concerning research with human participants in the APA Ethics Code.

5. What is the difference between “no risk” and “minimal risk” research activities?

6. What is an Institutional Review Board?

7. Summarize the ethical procedures for research with animals.

8. What constitutes fraud, what are some reasons for its occurrence, and why does it not occur more frequently?

9. Describe how you would proceed to identify plagiarism in a writing assignment.

Activities

1. Find your college’s code of student conduct online and review the section on plagiarism. How would you improve this section? What would you tell your professors to do to help students avoid plagiarism?

2. Indiana University created an excellent online resource called “How to Recognize Plagiarism” (you can find it here: https://www.indiana.edu/~istd/plagiarism_test.html). Complete the test!

3. Consider the following experiment, similar to one that was conducted by Smith, Lingle, and Brock (1978). Each participant interacted for an hour with another person who was actually an accomplice. After this interaction, both persons agreed to return 1 week later for another session with each other. When the real participants returned, they were informed that the person they had met the week before had died. The researchers then measured reactions to the death of the person.

a. Discuss the ethical issues raised by the experiment.

b. Would the experiment violate the guidelines articulated in APA Ethical Standard 8 dealing with research with human participants? In what ways?

c. What alternative methods for studying this problem (reactions to death) might you suggest?

d. Would your reactions to this study be different if the participants had played with an infant and then later been told that the infant had died?

4. In a procedure described in this chapter, participants are given false feedback about an unfavorable personality trait or a low ability level. What are the ethical issues raised by this procedure? Compare your reactions to that procedure with your reactions to an analogous one in which people are given false feedback that they possess a very favorable personality trait or a very high ability level.

5. A social psychologist conducts a field experiment at a local bar that is popular with college students. Interested in observing flirting techniques, the investigator instructs male and female confederates to smile and make eye contact with others at the pub for varying amounts of time (e.g., 2 seconds, 5 seconds, etc.) and varying numbers of times (e.g., once, twice, etc.). The investigator observes the responses of those receiving the gaze. What ethical considerations, if any, do you perceive in this field experiment? Is there any deception involved?

6. Should people who are observed in field experiments be debriefed? Write a paragraph supporting the pro position and another paragraph supporting the con position.

7. Dr. Alucard conducted a study to examine various aspects of the sexual behaviors of college students. The students filled out a questionnaire in a classroom on the campus; about 50 students were tested at a time. The questionnaire asked about prior experience with various sexual practices. If a student had experience, a number of other detailed questions were asked. However, if the student did not have any prior experience, he or she skipped the detailed questions and simply went on to answer another general question about a sexual experience. What ethical issues arise when conducting research such as this? Do you detect any specific problems that might arise because of the “skip” procedure used in this study?

8. Read the following research scenarios and assess the risk to participants by placing a check mark in the appropriate box (answers below).


 

9. Review this slide show that describes the Stanford Prison Experiment: http://www.prisonexp.org. Then address questions 12 and 13 from the Discussion Questions on the website:

· Was it ethical to do this study? Was it right to trade the suffering experienced by participants for the knowledge gained by the research? (The experimenters did not take this issue lightly, although the Slide Show may sound somewhat matter-of-fact about the events and experiences that occurred.) (Source: http://www.prisonexp.org/discussion.htm)

· How do the ethical dilemmas in this research compare with the ethical issues raised by Stanley Milgram’s obedience experiments? Would it be better if these studies had never been done? (Source: http://www.prisonexp.org/discussion.htm)

Answers

QUESTION 8:

a. Greater than minimal risk

b. Minimal risk

c. No risk

d. Minimal risk

 

Test Review (Beck Depression Inventory) homework help


I’ve added an example of how it should look when completed. I also added the actual review from two reviewers. All the data is collected; the questions below will need to be answered. PLEASE LOOK AT THE EXAMPLE THAT I ATTACHED AND READ OVER THE ACTUAL REVIEW.

 

1) The Test: cost, time to take the test, theory behind the test, number of items, age appropriateness, and any other information relevant to teaching me about the test (approximately one page, double spaced)

2) Reviewer #1: norm sample, practicality and cultural fairness, validity, reliability, final comments (at a minimum, one page, double spaced)

3) Reviewer #2: norm sample, practicality and cultural fairness, validity, reliability, final comments (at a minimum, one page, double spaced)

4) Your thoughts on the norm sample, practicality and cultural fairness, validity, and reliability, plus final comments about whether you would use the test and why or why not (at a minimum, one page, double spaced). I want your thoughts based on specific information and not just opinions such as “I don’t like the GREs” or “I don’t think it’s fair to subject students to standardized testing.” I want to know what you think about the norm sample, practicality and cultural fairness, validity, and reliability based specifically on what you learned from both reviewers and any other source.

 

Accession Number: 14122148
Classification Code: Personality [12]
Database: Mental Measurements Yearbook
Yearbook: The Fourteenth Mental Measurements Yearbook (2001)
Title: Beck Depression Inventory-II
Acronym: BDI-II
Authors: Beck, Aaron T.; Steer, Robert A.; Brown, Gregory K.
Purpose: “Developed for the assessment of symptoms corresponding to criteria for diagnosing depressive disorders listed in the … DSM-IV.”
Publisher: The Psychological Corporation, 555 Academic Court, San Antonio, TX 78204-2498
Publisher Name: The Psychological Corporation
Date of Publication: 1961-1996
Population: Ages 13 and over
Scores: Total score only
Administration: Group or individual
Manual: Manual, 1996, 38 pages
Price: 1999 price data: $57 per complete kit including manual and 25 recording forms; $27 per manual; $29.50 per 25 recording forms; $112 per 100 recording forms; $29.50 per 25 Spanish recording forms; $112 per 100 Spanish recording forms
Special Editions: Available in Spanish
Cross References: See T5:272 (384 references); for reviews by Janet F. Carlson and Niels G. Waller, see 13:31 (1026 references); see also T4:268 (660 references); for reviews by Collie W. Conoley and Norman D. Sundberg of an earlier edition, see 11:31 (286 references)
Time: (5-10) minutes
Reviewers: Arbisi, Paul A. (University of Minnesota); Farmer, Richard F. (Idaho State University)
Review Indicator: 2 Reviews Available
Comments: Also available in Spanish; hand-scored or computer-based administration, scoring, and interpretation available; “revision of BDI based upon new information about depression.”
Full Text Review of the Beck Depression Inventory-II by PAUL A. ARBISI, Minneapolis VA Medical Center, Assistant Professor Department of Psychiatry and Assistant Clinical Professor Department of Psychology, University of Minnesota, Minneapolis, MN: After over 35 years of nearly universal use, the Beck Depression Inventory (BDI) has undergone a major revision. The revised version of the Beck, the BDI-II, represents a significant improvement over the original instrument across all aspects of the instrument including content, psychometric validity, and external validity. The BDI was an effective measure of depressed mood that repeatedly demonstrated utility as evidenced by its widespread use in the clinic as well as by the frequent use of the BDI as a dependent measure in outcome studies of psychotherapy and antidepressant treatment (Piotrowski & Keller, 1989; Piotrowski & Lubin, 1990). The BDI-II should supplant the BDI and readily gain acceptance by surpassing its predecessor in use. Despite the demonstrated utility of the Beck, times had changed and the diagnostic context within which the instrument was developed had altered considerably over the years (Beck, Ward, Mendelson, Mock, & Erbaugh, 1961). Further, psychometrically, the BDI had some problems with certain items failing to discriminate adequately across the range of depression and other items showing gender bias (Santor, Ramsay, & Zuroff, 1994). Hence the time had come for a conceptual reassessment and psychometrically informed revision of the instrument. Indeed, a mid-course correction had occurred in 1987 as evidenced by the BDI-IA, a version that included rewording of 15 out of the 21 items (Beck & Steer, 1987). This version did not address the limited scope of depressive symptoms of the BDI nor the failure of the BDI to adhere to contemporary diagnostic criteria for depression as codified in the DSM-III. Further, consumers appeared to vote with their feet because, since the publication of the BDI-IA, the original Beck had been cited far more frequently in the literature than the BDI-IA. Therefore, the time had arrived for a major overhaul of the classic BDI and a retooling of the content to reflect diagnostic sensibilities of the 1990s. In the main, the BDI-II accomplishes these goals and represents a highly successful revamping of a reliable standard. The BDI-II retains the 21-item format with four options under each item, ranging from not present (0) to severe (3). Relative to the BDI-IA, all but three items were altered in some way on the BDI-II. Items dropped from the BDI include body image change, work difficulty, weight loss, and somatic preoccupation. To replace the four lost items, the BDI-II includes the following new items: agitation, worthlessness, loss of energy, and concentration difficulty. The current item content includes: (a) sadness, (b) pessimism, (c) past failure, (d) loss of pleasure, (e) guilty feelings, (f) punishment feelings, (g) self-dislike, (h) self-criticalness, (i) suicidal thoughts or wishes, (j) crying, (k) agitation, (l) loss of interest, (m) indecisiveness, (n) worthlessness, (o) loss of energy, (p) changes in sleeping pattern, (q) irritability, (r) changes in appetite, (s) concentration difficulty, (t) tiredness or fatigue, and (u) loss of interest in sex. To further reflect DSM-IV diagnostic criteria for depression, both increases and decreases in appetite are assessed in the same item and both hypersomnia and hyposomnia are assessed in another item. 
And rather than the 1-week time period rated on the BDI, the BDI-II, consistent with DSM-IV, asks for ratings over the past 2 weeks. The BDI-II retains the advantage of the BDI in its ease of administration (5-10 minutes) and the rather straightforward interpretive guidelines presented in the manual. At the same time, the advantage of a self-report instrument such as the BDI-II may also be a disadvantage. That is, there are no validity indicators contained on the BDI or the BDI-II and the ease of administration of a self-report lends itself to the deliberate tailoring of self-report and distortion of the results. Those of us engaged in clinical practice are often faced with clients who alter their presentation to forward a personal agenda that may not be shared with the clinician. The manual obliquely mentions this problem in an ambivalent and somewhat avoidant fashion. Under the heading, “Memory and Response Sets,” the manual blithely discounts the potential problem of a distorted response set by attributing extreme elevation on the BDI-II to “extreme negative thinking” which “may be a central cognitive symptom of severe depression rather than a response set per se because patients with milder depression should show variation in their response ratings” (manual, p. 9). On the other hand, later in the manual, we are told that, “In evaluating BDI-II scores, practitioners should keep in mind that all self-report inventories are subject to response bias” (p. 12). The latter is sound advice and should be highlighted under the heading of response bias. The manual is well written and provides the reader with significant information regarding norms, factor structure, and notably, nonparametric item-option characteristic curves for each item. Indeed the latter inclusion incorporates the latest in item response theory, which appears to have guided the retention and deletion of items from the BDI (Santor et al., 1994). Generally the psychometric properties of the BDI-II are quite sound. Coefficient alpha estimates of reliability for the BDI-II with outpatients was .92 and was .93 for the nonclinical sample. Corrected item-total correlation for the outpatient sample ranged from .39 (loss of interest in sex) to .70 (loss of pleasure), for the nonclinical college sample the lowest item-total correlation was .27 (loss of interest in sex) and the highest (.74 (self-dislike). The test-retest reliability coefficient across the period of a week was quite high at .93. The inclusion in the manual of item-option characteristic curves for each BDI-II item is of noted significance. Examination of these curves reveals that, for the most part, the ordinal position of the item options is appropriately assigned for 17 of the 21 items. However, the items addressing punishment feelings, suicidal thought or wishes, agitation, and loss of interest in sex did not display the anticipated rank order indicating ordinal increase in severity of depression across item options. Additionally, although improved over the BDI, Item 10 (crying) Option 3 does not clearly express a more severe level of depression than Option 2 (see Santor et al., 1994). Over all, however, the option choices within each item appear to function as intended across the severity dimension of depression. 
The suggested guidelines and cut scores for the interpretation of the BDI-II and placement of individual scores into a range of depression severity are purported to have good sensitivity and moderate specificity, but test parameters such as positive and negative predictive power are not reported (i.e., given score X on the BDI-II, what is the probability that the individual meets criteria for a Major Depressive Disorder, of moderate severity?). According to the manual, the BDI-II was developed as a screening instrument for major depression and, accordingly, cut scores were derived through the use of receiver operating characteristic curves to maximize sensitivity. Of the 127 outpatients used to derive the cut scores, 57 met criteria for either single-episode or recurrent major depression. The relatively high base rate (45%) for major depression is a bit unrealistic for nonpsychiatric settings and will likely serve to inflate the test parameters. Cross validation of the cut scores on different samples with lower base rates of major depression is warranted due to the fact that a different base rate of major depression may result in a significant change in the proportion of correct decisions based on the suggested cut score (Meehl & Rosen, 1955). Consequently, until the suggested cut scores are cross validated in those populations, caution should be exercised when using the BDI-II as a screen in nonpsychiatric populations where the base rate for major depression may be substantially lower. Concurrent validity evidence appears solid with the BDI-II demonstrating a moderately high correlation with the Hamilton Psychiatric Rating Scale for Depression-Revised (r = .71) in psychiatric outpatients. Of importance to the discriminative validity of the instrument was the relatively moderate correlation between the BDI-II and the Hamilton Rating Scale for Anxiety-Revised (r = .47). The manual reports mean BDI-II scores for various groups of psychiatric outpatients by diagnosis. As expected, outpatients had higher scores than college students. Further, individuals with mood disorders had higher scores than those individuals diagnosed with anxiety and adjustment disorders. The BDI-II is a stronger instrument than the BDI with respect to its factor structure. A two-factor (Somatic-Affective and Cognitive) solution accounted for the majority of the common variance in both an outpatient psychiatric sample and a much smaller nonclinical college sample. Factor Analysis of the BDI-II in a larger nonclinical sample of college students resulted in Cognitive-Affective and Somatic-Vegetative main factors essentially replicating the findings presented in the manual and providing strong evidence for the overall stability of the factor structure across samples (Dozois, Dobson, & Ahnberg, 1998). Unfortunately several of the items such as sadness and crying shifted factor loadings depending upon the type of sample (clinical vs. nonclinical). SUMMARY. The BDI-II represents a highly successful revision of an acknowledged standard in the measurement of depressed mood. The revision has improved upon the original by updating the items to reflect contemporary diagnostic criteria for depression and utilizing state-of-the-art psychometric techniques to improve the discriminative properties of the instrument. This degree of improvement is no small feat and the BDI-II deserves to replace the BDI as the single most widely used clinically administered instrument for the assessment of depression. REVIEWER’S REFERENCES Meehl, P. 
E., & Rosen, A. (1955). Antecedent probability and the efficiency of psychometric signs, patterns, or cutting scores. Psychological Bulletin, 52, 194-216. Beck, A. T., Ward, C. H., Mendelson, M., Mock, J., & Erbaugh, J. (1961). An inventory for measuring depression. Archives of General Psychiatry, 4, 561-571. Beck, A. T., & Steer, R. A. (1987). Beck Depression Inventory manual. San Antonio, TX: The Psychological Corporation. Piotrowski, C., & Keller, J. W. (1989). Psychological testing in outpatient mental health facilities: A national study. Professional Psychology: Research and Practice, 20, 423-425. Piotrowski, C., & Lubin, B. (1990). Assessment practices of health psychologists; Survey of APA Division 38 clinicians. Professional Psychology: Research and Practice, 21, 99-106. Santor, D. A., Ramsay, J. O., & Zuroff, D. C. (1994). Nonparametric item analyses of the Beck Depression Inventory: Evaluating gender item bias and response option weights. Psychological Assessment, 6, 255-270. Dozois, D. J. A., Dobson, K. S., & Ahnberg, J. L. (1998). A psychometric evaluation of the Beck Depression Inventory-II. Psychological Assessment, 10, 83-89. Review of the Beck Depression Inventory-II by RICHARD F. FARMER, Associate Professor of Psychology, Idaho State University, Pocatello, ID: The Beck Depression Inventory-II (BDI-II) is the most recent version of a widely used self-report measure of depression severity. Designed for persons 13 years of age and older, the BDI-II represents a significant revision of the original instrument published almost 40 years ago (BDI-I; Beck, Ward, Mendelson, Mock, & Erbaugh, 1961) as well as the subsequent amended version copyrighted in 1978 (BDI-IA; Beck, Rush, Shaw, & Emery, 1979; Beck & Steer, 1987, 1993). Previous editions of the BDI have considerable support for their effectiveness as measures of depression (for reviews, see Beck & Beamesderfer, 1974; Beck, Steer & Garbin, 1988; and Steer, Beck, & Garrison, 1986). Items found in these earlier versions, many of which were retained in modified form for the BDI-II, were clinically derived and neutral with respect to a particular theory of depression. Like previous versions, the BDI-II contains 21 items, each of which assesses a different symptom or attitude by asking the examinee to consider a group of graded statements that are weighted from 0 to 3 based on intuitively derived levels of severity. If the examinee feels that more than one statement within a group applies, he or she is instructed to circle the highest weighting among the applicable statements. A total score is derived by summing weights corresponding to the statements endorsed over the 21 items. The test authors provide empirically informed cut scores (derived from receiver operating characteristic [ROC] curve methodology) for indexing the severity of depression based on responses from outpatients with a diagnosed episode of major depression (cutoff scores to index the severity of dysphoria for college samples are suggested by Dozois, Dobson, & Ahnberg, 1998). The BDI-II can usually be completed within 5 to 10 minutes. In addition to providing guidelines for the oral administration of the test, the manual cautions the user against using the BDI-II as a diagnostic instrument and appropriately recommends that interpretations of test scores should only be undertaken by qualified professionals. 
Although the manual does not report the reading level associated with the test items, previous research on the BDI-IA suggested that items were written at about the sixth-grade level (Berndt, Schwartz, & Kaiser, 1983). A number of changes appear in the BDI-II, perhaps the most significant of which is the modification of test directions and item content to be more consistent with the major depressive episode concept as defined in the Diagnostic and Statistical Manual of Mental Disorders-Fourth Edition (DSM-IV; American Psychiatric Association, 1994). Whereas the BDI-I and BDI-IA assessed symptoms experienced at the present time and during the past week, respectively, the BDI-II instructs the examinee to respond in terms of how he or she has “been feeling during the past two weeks, including today” (manual, p. 8, emphasis in original) so as to be consistent with the DSM-IV time period for the assessment of major depression. Similarly, new items included in the BDI-II address psychomotor agitation, concentration difficulties, sense of worthlessness, and loss of energy so as to make the BDI-II item set more consistent with DSM-IV criteria. Items that appeared in the BDI-I and BDI-IA that were dropped in the second edition were those that assessed weight loss, body image change, somatic preoccupation, and work difficulty. All but three of the items from the BDI-IA retained for inclusion in the BDI-II were reworded in some way. Items that assess changes in sleep patterns and appetite now address both increases and decreases in these areas. Two samples were retained to evaluate the psychometric characteristics of the BDI-II: (a) a clinical sample (n = 500; 63% female; 91% White) who sought outpatient therapy at one of four outpatient clinics on the U.S. east coast (two of which were located in urban areas, two in suburban areas), and (b) a convenience sample of Canadian college students (n = 120; 56% women; described as “predominantly White”). The average ages of the clinical and student samples were, respectively, 37.2 (SD = 15.91; range = 13-86) and 19.58 (SD = 1.84). Reliability of the BDI was evaluated with multiple methods. Internal consistency was assessed using corrected item-total correlations (ranges: .39 to .70 for outpatients; .27 to .74 for students) and coefficient alpha (.92 for outpatients; .93 for students). Test-retest reliability was assessed over a 1-week interval among a small subsample of 26 outpatients from one clinic site (r = .93). There was no significant change in scores noted among this outpatient sample between the two testing occasions, a finding that is different from those often obtained with college students who, when tested repeatedly with earlier versions of the BDI, were often observed to have lower scores on subsequent testing occasions (e.g., Hatzenbuehler, Parpal, & Matthews, 1983). Following the method of Santor, Ramsay, and Zuroff (1994), the test authors also examined the item-option characteristic curves for each of the 21 BDI-II items as endorsed by the 500 outpatients. As noted in a previous review of the BDI (1993 Revised) by Waller (1998), the use of this method to evaluate item performance represents a new standard in test revision. Consistent with findings for depressed outpatients obtained by Santor et al. (1994) on the BDI-IA, most of the BDI-II items performed well as evidenced by the individual item-option curves. All items were reported to display monotonic relationships with the underlying dimension of depression severity. 
A minority of items were somewhat problematic, however, when the degree of correspondence between estimated and a priori weights associated with item response options was evaluated. For example, on Item 11 (agitation), the response option weighted a value of 1 was more likely to be endorsed than the option weighted 3 across all levels of depression, including depression in the moderate and severe ranges. In general, though, response option weights of the BDI-II items did a good job of discriminating across estimated levels of depression severity. Unfortunately, the manual does not provide detailed discussion of item-option characteristic curves and their interpretation. The validity of the BDI-II was evaluated with outpatient subsamples of various sizes. When administered on the same occasion, the correlation between the BDI-II and BDI-IA was quite high (n = 101, r = .93), suggesting that these measures yield similar patterns of scores, even though the BDI-II, on average, produced equated scores that were about 3 points higher. In support of its convergent validity, the BDI-II displayed moderately high correlations with the Beck Hopelessness Scale (n = 158, r = .68) and the Revised Hamilton Psychiatric Rating Scale for Depression (HRSD-R; n = 87, r = .71). The correlation between the BDI-II and the Revised Hamilton Anxiety Rating Scale (n = 87, r = .47) was significantly less than that for the BDI-II and HRSD-R, which was cited as evidence of the BDI-II’s discriminant validity. The BDI-II, however, did share a moderately high correlation with the Beck Anxiety Inventory (n = 297; r = .60), a finding consistent with past research on the strong association between self-reported anxiety and depression (e.g., Kendall & Watson, 1989). Additional research published since the manual’s release (Steer, Ball, Ranieri, & Beck, 1997) also indicates that the BDI-II shares higher correlations with the SCL-90-R Depression subscale (r = .89) than with the SCL-90-R Anxiety subscale (r = .71), although the latter correlation is still substantial. Other data presented in the test manual indicated that of the 500 outpatients, those diagnosed with mood disorders (n = 264) had higher BDI-II scores than those diagnosed with anxiety (n = 88), adjustment (n = 80), or other (n = 68) disorders. The test authors also cite evidence of validity by separate factor analyses performed on the BDI-II item set for outpatients and students. However, findings from these analyses, which were different in some significant respects, are questionable evidence of the measure’s validity as the test was apparently not developed to assess specific dimensions of depression. Factor analytic studies of the BDI have historically produced inconsistent findings (Beck et al., 1988), and preliminary research on the BDI-II suggests some variations in factor structure within both clinical and student samples (Dozois et al., 1998; Steer & Clark, 1997; Steer, Kumar, Ranieri, & Beck, 1998). Furthermore, one of the authors of the BDI-II (Steer & Clark, 1997) has recently advised that the measure not be scored as separate subscales. SUMMARY. The BDI-II is presented as a user-friendly self-report measure of depression severity. Strengths of the BDI-II include the very strong empirical foundation on which it was built, namely almost 40 years of research that demonstrates the effectiveness of earlier versions. 
In the development of the BDI-II, innovative methods were employed to determine optimum cut scores (ROC curves) and to evaluate item performance and weighting (item-option curves). The present edition demonstrates very good reliability and impressive test item characteristics. Preliminary evidence of the BDI-II's validity in clinical samples is also encouraging.

Despite the many impressive features of this measure, one may wonder why the test developers were not even more thorough in their presentation of the development of the BDI-II and more rigorous in the evaluation of its effectiveness. The test manual is too concise and often omits important details of the test development process. The clinical sample used to generate cut scores and evaluate the psychometric properties of the measure seems unrepresentative in many respects (e.g., racial make-up, patient setting, geographic distribution), and other aspects of this sample (e.g., education level, family income) go unmentioned. The student sample is relatively small and, unfortunately, drawn from a single university.

Opportunities to address important questions regarding the measure were also missed, such as whether the BDI-II effectively assesses or screens for the DSM-IV concept of major depression, and the extent to which it may accomplish this better than earlier versions. This seems a particularly important question given that the BDI was originally developed as a measure of the depressive syndrome, not as a screening measure for a nosologic category (Kendall, Hollon, Beck, Hammen, & Ingram, 1987), a distinction that appears to have become somewhat blurred in this most recent edition.

Also not reported in the manual are analyses examining possible sex biases in the BDI-II item set. Santor et al. (1994) reported that the BDI-IA items were relatively free of sex bias, and given the omission of the most sex-biased item in the BDI-IA (body image change) from the BDI-II, it is possible that this most recent edition contains even less bias. Similarly absent from the manual is any report on the item-option characteristic curves for nonclinical samples. Santor et al. (1994) reported that for most of the BDI-IA items, response option weights were less discriminating across the range of depression severity in their college sample relative to their clinical sample, an anticipated finding given that students would be less likely to endorse response options hypothesized to be consistent with more severe forms of depression. Also, given that previous editions of the BDI have shown inconsistent associations with social desirability (e.g., Tanaka-Matsumi & Kameoka, 1986), an opportunity was missed to evaluate the extent to which the BDI-II measures something different from this response set.

Despite these relative weaknesses in the development and presentation of the BDI-II, existing evidence suggests that the BDI-II is at least as sound as, if not sounder than, its earlier versions.

REVIEWER'S REFERENCES

Beck, A. T., Ward, C. H., Mendelson, M., Mock, J., & Erbaugh, J. (1961). An inventory for measuring depression. Archives of General Psychiatry, 4, 561-571.

Beck, A. T., & Beamesderfer, A. (1974). Assessment of depression: The Depression Inventory. In P. Pichot & R. Oliver-Martin (Eds.), Psychological measurements in psychopharmacology: Modern problems in pharmacopsychiatry (Vol. 7, pp. 151-169). Basel: Karger.

Beck, A. T., Rush, A. J., Shaw, B. F., & Emery, G. (1979). Cognitive therapy of depression. New York: Guilford.

Berndt, D. J., Schwartz, S., & Kaiser, C. F. (1983). Readability of self-report depression inventories. Journal of Consulting and Clinical Psychology, 51, 627-628.

Hatzenbuehler, L. C., Parpal, M., & Matthews, L. (1983). Classifying college students as depressed or nondepressed using the Beck Depression Inventory: An empirical analysis. Journal of Consulting and Clinical Psychology, 51, 360-366.

Steer, R. A., Beck, A. T., & Garrison, B. (1986). Applications of the Beck Depression Inventory. In N. Sartorius & T. A. Ban (Eds.), Assessment of depression (pp. 123-142). New York: Springer-Verlag.

Tanaka-Matsumi, J., & Kameoka, V. A. (1986). Reliabilities and concurrent validities of popular self-report measures of depression, anxiety, and social desirability. Journal of Consulting and Clinical Psychology, 54, 328-333.

Beck, A. T., & Steer, R. A. (1987). Beck Depression Inventory manual. San Antonio, TX: The Psychological Corporation.

Kendall, P. C., Hollon, S. D., Beck, A. T., Hammen, C. L., & Ingram, R. E. (1987). Issues and recommendations regarding the use of the Beck Depression Inventory. Cognitive Therapy and Research, 11, 289-299.

Beck, A. T., Steer, R. A., & Garbin, M. G. (1988). Psychometric properties of the Beck Depression Inventory: Twenty-five years of evaluation. Clinical Psychology Review, 8, 77-100.

Kendall, P. C., & Watson, D. (Eds.). (1989). Anxiety and depression: Distinctive and overlapping features. San Diego, CA: Academic Press.

Beck, A. T., & Steer, R. A. (1993). Beck Depression Inventory manual. San Antonio, TX: Psychological Corporation.

American Psychiatric Association. (1994). Diagnostic and statistical manual of mental disorders (4th ed.). Washington, DC: Author.

Santor, D. A., Ramsay, J. O., & Zuroff, D. C. (1994). Nonparametric item analyses of the Beck Depression Inventory: Evaluating gender item bias and response option weights. Psychological Assessment, 6, 255-270.

Steer, R. A., Ball, R., Ranieri, W. F., & Beck, A. T. (1997). Further evidence for the construct validity of the Beck Depression Inventory-II with psychiatric outpatients. Psychological Reports, 80, 443-446.

Steer, R. A., & Clark, D. A. (1997). Psychometric characteristics of the Beck Depression Inventory-II with college students. Measurement and Evaluation in Counseling and Development, 30, 128-136.

Dozois, D. J. A., Dobson, K. S., & Ahnberg, J. L. (1998). A psychometric evaluation of the Beck Depression Inventory-II. Psychological Assessment, 10, 83-89.

Steer, R. A., Kumar, G., Ranieri, W. F., & Beck, A. T. (1998). Use of the Beck Depression Inventory-II with adolescent psychiatric outpatients. Journal of Psychopathology and Behavioral Assessment, 20, 127-137.

Waller, N. G. (1998). [Review of the Beck Depression Inventory-1993 Revised]. In J. C. Impara & B. S. Plake (Eds.), The thirteenth mental measurements yearbook (pp. 120-121). Lincoln, NE: The Buros Institute of Mental Measurements.
Copyright © 2011. The Board of Regents of the University of Nebraska and the Buros Center for Testing. All rights reserved. Any unauthorized use is strictly prohibited. Buros Center for Testing, Buros Institute, Mental Measurements Yearbook, and Tests in Print are all trademarks of the Board of Regents of the University of Nebraska and may not be used without express written consent.

Social and Emotional Intelligence homework help

Social and Emotional Intelligence

Discussion 1:

 

Social and Emotional Intelligence

What ideas or phrases come to mind when you hear the term intelligence? Prior to the current emphasis on emotional and social intelligence, individuals tended to associate intelligence with one measurement: intelligence quotient or the IQ. While the IQ focuses on intellectual abilities, emotional intelligence focuses on an individual’s awareness of his or her feelings and the feelings of others, and social intelligence focuses on an individual’s interpersonal skills (Zastrow & Kirst-Ashman, 2016, pp. 506-509).

 

To prepare for this Discussion, read “Working With People With Disabilities: The Case of Andres” on pages 28–31 in Social Work Case Studies: Foundation Year. Consider what you have learned about social and emotional intelligence in this week’s resources as well as what you learn about the person and environment as it relates to young and middle adulthood.

 

Post a Discussion that includes the following:

 

o   An explanation of how social and emotional intelligence are related to cultural factors

o   An explanation about how you, as a social worker, might apply the concepts of emotional and/or social intelligence to the case of Andres

o   An explanation of how social workers, in general, might apply social and emotional intelligence to social work practice. (Include a specific example in the explanation.)

 

Be sure to support your posts with specific references to the resources. If you are using additional articles, be sure to provide full APA-formatted citations for your references.

 

References (use 2 or more)

 

Plummer, S.-B., Makris, S., & Brocksen, S. M. (Eds.). (2014). Social work case studies: Foundation year. Baltimore, MD: Laureate International Universities Publishing. [Vital Source e-reader].

 

Zastrow, C. H., & Kirst-Ashman, K. K. (2016). Understanding human behavior and the social environment (10th ed.). Boston, MA: Cengage Learning.

 

 

 

 

 

Working With Clients With Disabilities: The Case of Andres

 

 

Andres is a 68-year-old male originally from Honduras. He is married and the father of two grown children: a daughter who is married with one child and a son who is unmarried. Andres lives with his wife in a brownstone in an upper-class urban neighborhood, and they are financially stable. He relies on Medicare for his health insurance. Andres is a retired child psychiatrist who completed medical school in Honduras and committed his career to working with Latino children and families in a major metropolitan area. Andres’ wife is a clinical psychologist who still maintains an active practice. Andres has a good relationship with his children, seeing them at least once a week for dinner, and his granddaughter is the light of his life.

Approximately 6 years ago, Andres was diagnosed with a rare brain tumor and Parkinson’s disease. Prior to his diagnosis, Andres was still on staff at a hospital, jogged daily, and had plans to travel with his wife. In a short time, Andres’ health deteriorated significantly. He now uses a cane and walker to ambulate. His speech is slow and soft. He requires assistance to get dressed and eat at times due to severe tremors and the loss of dexterity in his hands. Andres has fallen on multiple occasions and therefore cannot go out alone. He suffers from depression and anxiety and is currently on medication for these conditions. Andres spends a majority of time at home reading. He has lost contact with many of his friends and almost all of his professional colleagues.

Andres presented for treatment at an outpatient mental health setting. His daughter suggested it because she was concerned about her father’s worsening depression. Andres came into treatment stating his family thought he needed to talk to someone. He complied, but was unsure if treatment was really necessary. Andres agreed to weekly sessions and was escorted to each session by an aide who helped him at home.

While Andres had difficulty stating specific goals in the beginning, the focus of treatment became obvious to both of us early on, and we were able to agree to a treatment plan. Across multiple spheres of his life, Andres was struggling with accepting his illness and the resulting disabilities. In addition, he was extremely socially isolated despite the fact that he lived with his family and they were supportive of his medical needs. Finally, Andres’ role and identity had changed in his family and the world overall.

In a mere 6 years, Andres had lost his independence. He went from being a man who jogged every day to a man who could not carry a glass of water from one room to the next in his own home. Andres was trying valiantly to hold on to his independence. While his wife and his children were willing to provide any assistance he needed, Andres hated the idea of asking for help. As a result, he did things that compromised his balance, and he had several bad falls. In addition, Andres’ wife had assumed responsibility for all of the family’s affairs (i.e., financial, household, etc.), which had been Andres’ job before he got sick. Andres struggled as he saw his wife overwhelmed by all that she now had to take on. At the same time, he did not feel like he had the ability to reclaim any of what had been “taken” from him. Together, Andres and I identified the things he felt he was capable of doing independently and worked on how he could go about reclaiming some of the independence he had lost. We spoke about how he could communicate his needs, both for help and independence, to his family. We explored his resistance to asking for help. On many occasions Andres would say, “I was the one my children came to for help; now they have to help me. I can’t stand that.”

In addition to the struggles Andres faced in his everyday life, he also had to cope with the reality of his illness. Andres was well aware that his illness was degenerative, and with each change in his condition, this became a stronger reality. Andres frequently spoke of “a miracle cure.” He constantly researched new and experimental treatments in hopes that something new would be found. While I never attempted to strip Andres of his hope for a cure, we spent a considerable amount of effort getting Andres to accept his condition and work with what was possible now. For example, Andres had always been resistant to physical therapy (PT), but during our treatment, he began PT to work on maintaining his current balance rather than trying to cure his balance problems. Facing his illness meant facing his own mortality, and Andres knew his fate as much as he wanted to deny it. He often spoke of the things he would never experience, like his granddaughter graduating from high school and traveling through Europe with his wife.

Andres’ treatment lasted a little bit more than a year. He demonstrated significant improvement in his ability to communicate with his wife and children. Andres continued to struggle with asking for help, repeatedly putting himself in compromising situations and having several more falls. After the fact, he was able to evaluate his actions and see how he could have asked for limited assistance, but in the moment it was very difficult for him to take the active step of asking for help. Andres was also able to reconnect with an old friend who he had avoided as a result of his physical disabilities and feelings of inadequacy. We were forced to terminate when I left my position to relocate out of state.

 

 

_________________________________________________________

 

 

 

Discussion 2: The Impact of Social Policy

 

Social policies can have a significant impact on individuals and families, as well as the organizations and agencies that implement the policies. In some cases, the policy, as written, appears comprehensive and effective. Yet, despite appearances, the policy might fail to be effective as a result of improper implementation, interpretation, and/or application of the policy. As a social worker, how might you reduce the potential negative impact faulty social policies might have on organizations and agencies, as well as the populations you serve?

 

For this Discussion, review this week’s resources, including cases “Working with Immigrants and Refugees: The Case of Luisa” and “Social Work Policy: Benefit Administration and Provision.” Then, select either of the cases and consider how the social welfare policies presented in the case influenced the problems facing Luisa or Tessa. Finally, think about how policies affect social agencies and how social workers work with clients such as Tessa or Luisa.

 

·      Post an explanation of the effects of the social welfare policies presented in the case study you selected on Luisa or Tessa.

 

·      Be specific and reference the case study you selected in your post.

 

·      Finally, explain how policies affect social agencies and how social workers work with clients, such as Tessa or Luisa.

 

Support your post with specific references to the resources. Be sure to provide full APA citations for your references.

 

 

References (use 2 or more)

 

Plummer, S.-B., Makris, S., & Brocksen, S. (Eds.). (2014). Social work case studies: Foundation year. Baltimore, MD: Laureate International Universities Publishing. [Vital Source e-reader].

 

Popple, P. R., & Leighninger, L. (2015). The policy-based profession: An introduction to social welfare policy analysis for social workers (6th ed.). Upper Saddle River, NJ: Pearson Education.

 

 

Center on Budget and Policy Priorities. (2011). Policy basics: Introduction to the federal budget process. Retrieved from www.cbpp.org/files/3-7-03bud.pdf

 

 

Working With Immigrants and Refugees: The Case of Luisa

Luisa is a 36-year-old, married, Latino female who immigrated to the United States from Colombia. She speaks only Spanish, so a translator must be used for communication. She came to the United States on a visa, but remained beyond the allotted time. While in the United States, she met and married Hugo, who was in the country with documentation. Once Luisa married Hugo, she became pregnant with a daughter, who is now 3 years old.

Luisa has a 10-year-old son named Juan in Colombia. Luisa has always had the desire to reunite with Juan and bring him to the United States to live with her. After her marriage and status change, she began the process of sponsoring Juan. She has been advised that in order for sponsorship to be achieved, she cannot receive welfare benefits because she needs to prove that she can support herself and her child.

Luisa came to the local welfare agency after she and her daughter entered the domestic violence shelter. She reported that Hugo had a history of violence, which was exacerbated when he drank alcohol. Hugo had been drinking more frequently, and the episodes of violence had increased in severity. The domestic violence program requires all residents to apply for any available benefits in order to remain enrolled in their services.

In one particular episode, Hugo almost fractured her orbital bones. She had extensive facial bruising and blood pooled in one eye. Luisa is quite fearful of Hugo. She is also financially dependent on him. She is reluctant to apply for benefits because she fears that this will compromise her ability to sponsor her son in Colombia. She is tearful and tells me that she cannot sacrifice her son’s opportunity to come to the United States.

Luisa is socially isolated because she has no family in the United States, and Hugo has restricted her ability to socialize and establish friendships. However, she is a practicing Catholic and does belong to a church that offers bilingual services.

Luisa began to discuss returning to Hugo because she felt that this was her only viable option. I advised her that under the new federal changes in immigration laws she might be allowed to apply for benefits and still sponsor her son because she is experiencing domestic violence. I explained that we would need to speak to an immigration lawyer to verify this, but it could possibly be an alternative to returning to Hugo.

Luisa reported that she had given money to lawyers in the past who had been unhelpful. She was suspicious of the law’s ability to protect her. Hugo had also threatened to report her to the authorities, stating that he would tell them she only married him to remain in the country. Although this is not true, she feared that he would do this, and she would never see her daughter again.

I offered to speak with someone at the domestic violence program and advocate that they allow her some time to research her options. I told Luisa that these were difficult decisions to make and that she would be supported in her decision. I told her that she knew what was best for her family. I offered to research the options that she might have under this new federal program. I also asked for permission to contact the priest at her church so that she might be able to review her situation with a religious leader in the community. Luisa agreed.

Two weeks later, Luisa applied for services on behalf of her daughter and herself. She has decided not to return to Hugo.

 

"Model Matrix" Worksheet homework help



Complete The Strategic Section In The “Model Matrix” Worksheet.

PCN-518 Topic 4: The Six Stages of Kohlberg

 

Scenario:

A female adolescent’s parents place a low priority on the value of an education. In fact, they prefer that she care for younger siblings instead of studying or completing a high school education. It is March. The student has told her parents that she has in-school suspension for the rest of the school year in order to have time to study, as she dreams of attending college one day.

 

Directions: Read the scenario listed above. Complete all sections of the matrix provided below from the perspective of an individual in each of the six stages of Kohlberg’s theory of moral development and the information from the provided scenario. Use complete sentences and include proper scholarly citations for any sources used.

 

Level 1: Preconventional Morality

Stage | Adolescent's Perspective | Rationale for Your Responses

Stage 1: Obedience and Punishment Orientation

Adolescent's perspective: The adolescent should take care of her younger siblings because her parents want her to do so.

Rationale: A child assumes that those with authority hand down a set of rules which the child must obey unquestioningly. In this case, the adolescent must obey, without question, her parents' desire for her to quit school to take care of her siblings (Gibbs, 2013).

Stage 2: Instrumental Relativist Orientation/Exchange of Favors

Adolescent's perspective: The child can go to the in-school suspension to improve her chances of going to college one day, or obey her parents and stay at home to take care of her siblings.

Rationale: The child recognizes that there is no single right view handed down by authorities and that different individuals have different opinions. Everyone is free to pursue his or her own personal interests because everything is relative (Gibbs, 2013).

Stage 3: Conventional Level/Good Boy or Girl

Adolescent's perspective: The adolescent should live up to her parents' expectations by taking care of her siblings. She should show good intentions toward her siblings by caring for them.

Rationale: Goswami (2008) argues that children come to see morality as being more complex; people should conform to the expectations of their family and community and be well mannered. People should exhibit good behavior by having good feelings and motives such as empathy, love, and trust, as well as concern for others.

Stage 4: Maintaining the Social Order

Adolescent's perspective: She should go to the in-school program to enhance her knowledge.

Rationale: In this stage, the respondent is concerned with society in its entirety. The emphasis is on respecting authority, obeying laws, and performing one's duties to maintain the social order. One should not break the law simply because he or she feels there is a good reason to do so (Gibbs, 2013).

Stage 5: Social Contract and Individual Rights

Adolescent's perspective: The adolescent should continue with her studies, as it is her right to receive a basic education.

Rationale: Respondents believe that a good society is based on a social contract that its members freely enter. They argue that basic rights should be protected (Goswami, 2008).

Stage 6: Universal Principles

Adolescent's perspective: The adolescent should go to school, as getting an education is a protected right.

Rationale: According to Gibbs (2013), respondents in this stage consider what makes a society good. They believe people need to protect certain individual rights and settle disputes democratically.

 

 

 

 

 

References

Gibbs, J. C. (2013). Moral development and reality: Beyond the theories of Kohlberg, Hoffman, and Haidt. Oxford University Press.

Goswami, U. (Ed.). (2008). Blackwell handbook of childhood cognitive development. John Wiley & Sons.

© 2017. Grand Canyon University. All Rights Reserved.


 

877025 Psychology homework help


For this assignment, you will write a psychological report. Using the Sample Report as a format guide, construct a psychological report with a referral question, incorporating Frank's Psychosocial History and MSE located under additional resources. These are raw materials to incorporate into your report. If you feel some points need elaboration or clarification, feel free to conduct an imaginary interview with Frank and incorporate additional information. For the Test Results section of your report, review and interpret the WAIS-IV protocol located under the additional resources tab. Conclude your paper with diagnostic impressions and summary/recommendations.
Please refer to the following materials located under additional resources to help you complete this assignment: WAIS-IV Results, WAIS-IV PowerPoint, Psych Report Writing, and Sample Report.  You will need to review and interpret the WAIS-IV protocols for the test result section of your report. Please review the WAIS-IV PowerPoint to gain further information about the WAIS-IV. Psych Report Writing is a helpful guide on how to discuss the results of the WAIS-IV and at the end of this document, you will find a table to convert scores to percentiles and classifications. The Sample Report is a helpful guide on how to format and write this assignment.

Review and interpret the WRAT-4 protocol that is located under the additional resources tab, and add to your previous report.  In light of this new information, consider if you want to revise or add to your diagnoses, summary, and recommendations.
When discussing the WRAT-4 results, present the Standard Scores, percentile ranks, and classifications for each subtest (Word Reading, Sentence Comprehension, Spelling, Math Computation, and Reading Composite). You do not need to present grade levels. Also discuss any scores that fall outside the normal range and what they might suggest.
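
Since the write-up above calls for Standard Scores, percentile ranks, and classifications, a small illustrative sketch of that conversion may help organize the Test Results section. This is only a rough Python approximation: the conversion table at the end of the Psych Report Writing document remains the authoritative source, the classification labels below are commonly used Wechsler-style descriptive bands rather than anything taken from the course materials, and the subtest scores shown are hypothetical, not Frank's actual WRAT-4 results.

```python
# Rough sketch only: approximates the percentile/classification conversion
# described in the Psych Report Writing document. Standard scores are assumed
# to be on the usual mean = 100, SD = 15 metric; the example scores below are
# hypothetical, not Frank's actual WRAT-4 protocol.
from statistics import NormalDist

def percentile_rank(standard_score, mean=100, sd=15):
    """Approximate percentile rank of a standard score under a normal curve."""
    return round(NormalDist(mean, sd).cdf(standard_score) * 100)

def classification(standard_score):
    """Map a standard score to a commonly used descriptive classification band."""
    bands = [(130, "Very Superior"), (120, "Superior"), (110, "High Average"),
             (90, "Average"), (80, "Low Average"), (70, "Borderline")]
    for cutoff, label in bands:
        if standard_score >= cutoff:
            return label
    return "Extremely Low"

# Hypothetical WRAT-4 subtest standard scores, for illustration only
wrat4_scores = {
    "Word Reading": 92,
    "Sentence Comprehension": 88,
    "Spelling": 85,
    "Math Computation": 79,
    "Reading Composite": 90,
}

for subtest, score in wrat4_scores.items():
    print(f"{subtest}: SS = {score}, "
          f"approx. {percentile_rank(score)}th percentile, {classification(score)}")
```

Because WAIS-IV composite scores are reported on the same mean-100, SD-15 metric, the same helpers can be reused when writing up the WAIS-IV portion of the Test Results section.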

Please refer to the following materials located under additional resources to help you complete this assignment: WRAT4 Results, WRAT4 PowerPoint, Psych Report Writing, and Sample Report.  You will need to review and interpret the WRAT4 protocols for the test result section of your report. Please review the WRAT4 PowerPoint to gain further information about the WRAT4. Psych Report Writing is a helpful guide on how to discuss the results of the WRAT4 and at the end of this document, you will find a table to convert scores to percentiles and classifications. The Sample Report is a helpful guide on how to format and write this assignment.

Review and interpret the MMPI-2 protocol that is located under the additional resources tab, and add this to your previous report.  In light of this new information, consider if you want to revise or add to your diagnoses, summary, and recommendations.

When discussing the MMPI-2 results, be sure to include a discussion of the validity scales (you can refer to your text for further guidance). Then interpret and discuss the clinical scales that are clinically significant, that is, those with a T-score of 65 or greater. Your text and the MMPI-2 PowerPoint (found under the additional resources tab) provide interpretive paragraphs for such scores, which you can integrate into the interpretation section of your paper.
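
As a quick illustration of the T-score rule just described, the sketch below simply flags which clinical scales meet the 65-or-greater threshold so they are easy to carry into the interpretation section. The T-scores listed are hypothetical placeholders rather than the values on the MMPI-2 protocol under additional resources, and the interpretive language for any elevation should still come from your text and the MMPI-2 PowerPoint.

```python
# Illustrative sketch of the T >= 65 screening rule described above.
# The T-scores below are hypothetical placeholders, not Frank's actual protocol.
CLINICAL_CUTOFF = 65

clinical_scales = {
    "1 Hs (Hypochondriasis)": 58,
    "2 D (Depression)": 71,
    "3 Hy (Hysteria)": 60,
    "4 Pd (Psychopathic Deviate)": 66,
    "5 Mf (Masculinity-Femininity)": 52,
    "6 Pa (Paranoia)": 59,
    "7 Pt (Psychasthenia)": 68,
    "8 Sc (Schizophrenia)": 63,
    "9 Ma (Hypomania)": 49,
    "0 Si (Social Introversion)": 64,
}

# Keep only the clinically significant elevations, highest first
elevated = sorted(
    ((scale, t) for scale, t in clinical_scales.items() if t >= CLINICAL_CUTOFF),
    key=lambda pair: pair[1],
    reverse=True,
)

print("Clinically significant elevations (T >= 65):")
for scale, t in elevated:
    print(f"  {scale}: T = {t}")
```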

Please refer to the following materials located under additional resources to help you complete this assignment: MMPI-2 protocol, MMPI-2  PowerPoint, Psych Report Writing, and Sample Report.  You will need to review and interpret the MMPI-2 protocols for the test result section of your report. Please review the MMPI-2 PowerPoint to gain further information about the MMPI-2. Psych Report Writing is a helpful guide on how to discuss the results of the MMPI-2. The Sample Report is a helpful guide on how to format and write this assignment.
All assignments MUST be typed, double-spaced, in APA style, and must be written at graduate level English.

All resources are attached.

https://www.psychiatry.org/psychiatrists/practice/dsm/educational-resources/assessment-measures#Level2

 

9. Assessment

9.01 Bases for Assessments

(a) Psychologists base the opinions contained in their recommendations, reports, and diagnostic or evaluative statements, including forensic testimony, on information and techniques sufficient to substantiate their findings. (See also Standard 2.04, Bases for Scientific and Professional Judgments.)

(b) Except as noted in 9.01c, psychologists provide opinions of the psychological characteristics of individuals only after they have conducted an examination of the individuals adequate to support their statements or conclusions. When, despite reasonable efforts, such an examination is not practical, psychologists document the efforts they made and the result of those efforts, clarify the probable impact of their limited information on the reliability and validity of their opinions, and appropriately limit the nature and extent of their conclusions or recommendations. (See also Standards 2.01, Boundaries of Competence, and 9.06, Interpreting Assessment Results.)

(c) When psychologists conduct a record review or provide consultation or supervision and an individual examination is not warranted or necessary for the opinion, psychologists explain this and the sources of information on which they based their conclusions and recommendations.

9.02 Use of Assessments

(a) Psychologists administer, adapt, score, interpret, or use assessment techniques, interviews, tests, or instruments in a manner and for purposes that are appropriate in light of the research on or evidence of the usefulness and proper application of the techniques.

(b) Psychologists use assessment instruments whose validity and reliability have been established for use with members of the population tested. When such validity or reliability has not been established, psychologists describe the strengths and limitations of test results and interpretation.

(c) Psychologists use assessment methods that are appropriate to an individual’s language preference and competence, unless the use of an alternative language is relevant to the assessment issues.

9.03 Informed Consent in Assessments

(a) Psychologists obtain informed consent for assessments, evaluations, or diagnostic services, as described in Standard 3.10, Informed Consent, except when (1) testing is mandated by law or governmental regulations; (2) informed consent is implied because testing is conducted as a routine educational, institutional, or organizational activity (e.g., when participants voluntarily agree to assessment when applying for a job); or (3) one purpose of the testing is to evaluate decisional capacity. Informed consent includes an explanation of the nature and purpose of the assessment, fees, involvement of third parties, and limits of confidentiality and sufficient opportunity for the client/patient to ask questions and receive answers.

(b) Psychologists inform persons with questionable capacity to consent or for whom testing is mandated by law or governmental regulations about the nature and purpose of the proposed assessment services, using language that is reasonably understandable to the person being assessed.

(c) Psychologists using the services of an interpreter obtain informed consent from the client/patient to use that interpreter, ensure that confidentiality of test results and test security are maintained, and include in their recommendations, reports, and diagnostic or evaluative statements, including forensic testimony, discussion of any limitations on the data obtained. (See also Standards 2.05, Delegation of Work to Others; 4.01, Maintaining Confidentiality; 9.01, Bases for Assessments; 9.06, Interpreting Assessment Results; and 9.07, Assessment by Unqualified Persons.)

9.04 Release of Test Data

(a) The term test data refers to raw and scaled scores, client/patient responses to test questions or stimuli, and psychologists’ notes and recordings concerning client/patient statements and behavior during an examination. Those portions of test materials that include client/patient responses are included in the definition of test data. Pursuant to a client/patient release, psychologists provide test data to the client/patient or other persons identified in the release. Psychologists may refrain from releasing test data to protect a client/patient or others from substantial harm or misuse or misrepresentation of the data or the test, recognizing that in many instances release of confidential information under these circumstances is regulated by law. (See also Standard 9.11, Maintaining Test Security.)

(b) In the absence of a client/patient release, psychologists provide test data only as required by law or court order.

9.05 Test Construction

Psychologists who develop tests and other assessment techniques use appropriate psychometric procedures and current scientific or professional knowledge for test design, standardization, validation, reduction or elimination of bias, and recommendations for use.

9.06 Interpreting Assessment Results

When interpreting assessment results, including automated interpretations, psychologists take into account the purpose of the assessment as well as the various test factors, test-taking abilities, and other characteristics of the person being assessed, such as situational, personal, linguistic, and cultural differences, that might affect psychologists’ judgments or reduce the accuracy of their interpretations. They indicate any significant limitations of their interpretations. (See also Standards 2.01b and c, Boundaries of Competence, and 3.01, Unfair Discrimination.)

9.07 Assessment by Unqualified Persons

Psychologists do not promote the use of psychological assessment techniques by unqualified persons, except when such use is conducted for training purposes with appropriate supervision. (See also Standard 2.05, Delegation of Work to Others.)

9.08 Obsolete Tests and Outdated Test Results

(a) Psychologists do not base their assessment or intervention decisions or recommendations on data or test results that are outdated for the current purpose.

(b) Psychologists do not base such decisions or recommendations on tests and measures that are obsolete and not useful for the current purpose.

9.09 Test Scoring and Interpretation Services

(a) Psychologists who offer assessment or scoring services to other professionals accurately describe the purpose, norms, validity, reliability, and applications of the procedures and any special qualifications applicable to their use.

(b) Psychologists select scoring and interpretation services (including automated services) on the basis of evidence of the validity of the program and procedures as well as on other appropriate considerations. (See also Standard 2.01b and c, Boundaries of Competence.)

(c) Psychologists retain responsibility for the appropriate application, interpretation, and use of assessment instruments, whether they score and interpret such tests themselves or use automated or other services.

9.10 Explaining Assessment Results

Regardless of whether the scoring and interpretation are done by psychologists, by employees or assistants, or by automated or other outside services, psychologists take reasonable steps to ensure that explanations of results are given to the individual or designated representative unless the nature of the relationship precludes provision of an explanation of results (such as in some organizational consulting, preemployment or security screenings, and forensic evaluations), and this fact has been clearly explained to the person being assessed in advance.

9.11. Maintaining Test Security

The term test materials refers to manuals, instruments, protocols, and test questions or stimuli and does not include test data as defined in Standard 9.04, Release of Test Data. Psychologists make reasonable efforts to maintain the integrity and security of test materials and other assessment techniques consistent with law and contractual obligations, and in a manner that permits adherence to this Ethics Code.

 

Child Development Multiple Choice Test Questions


1. The “frog in the well” analogy illustrates:

Answers
1. a. that frogs start life as tadpoles.
2. b. that frogs are limited in perspective when trapped in a well, but once freed, they can see the whole world.
3. c. frogs change and evolve throughout their lives.
4. d. humans evolved from frogs.
2. The way people grow and change across the life span is referred to as ____.

Answers
1. a. development
2. b. evolution
3. c. change
4. d. growth
3. What is the pattern of a group’s customs, beliefs, art, and technology?

Answers
1. a. clan
2. b. society
3. c. culture
4. d. beliefs
4. ____ is the pattern of a group’s customs, beliefs, art, and technology.

Answers
1. a. Culture
2. b. Ethnicity
3. c. Race
4. d. Nationality

6. Who did developmental researchers focus on studying because they assumed that the processes of development were universal?

Answers
1. a. Mexicans
2. b. Europeans
3. c. Canadians
4. d. Americans
7. Which study would provide the best picture of worldwide developmental growth patterns?

Answers
1. a. Examining patterns of friendship in each grade level at an elementary school in Tokyo.
2. b. Watching a newborn turn into an adult.
3. c. Comparing children raised in Bangladesh to those raised in the United States.
4. d. Every two years, looking at a set group of subjects across 50 randomly chosen countries from birth to death.
8. What did the text define as the increasing connections between different parts of the world in trade, travel, migration, and communication?

Answers
1. a. globalization
2. b. social networks
3. c. the Internet
4. d. small world syndrome
9. Globalization is ____.

Answers
1. a. the number of births per woman
2. b. the ways people grow and change across the life span
3. c. the total pattern of a group’s customs, beliefs, art, and technology
4. d. the increasing connections between different parts of the world in trade, travel, migration, and communication
10. Which is the BEST example of globalization?

Answers
1. a. Jane immigrated from China to the United States.
2. b. Rita participates in a course online in which she is in daily contact with people all over the world.
3. c. The SARS virus spread from Southeast Asia to North America.
4. d. 19.4% of the world's population lives in China.
11. According to the text, for most of history the total human population was under ______.

Answers
1. a. 1 million
2. b. 10 million
3. c. 100 million
4. d. 1 billion
12. For most of human history how many children did women typically birth?

Answers
1. a. 1 to 2
2. b. 4 to 8
3. c. 10 to 12
4. d. 13 to 15
13. The human population began to increase noticeably around 10,000 years ago. What has been hypothesized as the reason for the population increase at that time?

Answers
1. a. the discovery of medicine
2. b. the development of agriculture and domestication of animals
3. c. an increase in the size of women’s pelvic openings that assisted in labor
4. d. construction techniques that allowed for stronger homes that were better heated
14. When did the human population reach 500 million people?

Answers
1. a. 400 years ago
2. b. 1,000 years ago
3. c. 4,000 years ago
4. d. 10,000 years ago
15. How long did it take the human population to double from 500 million to 1 billion?

Answers
1. a. 150 years
2. b. 300 years
3. c. 450 years
4. d. 600 years
16. The human population doubled from 1 to 2 billion between 1800 and 1930. What led to this increase in population?

Answers
1. a. government-controlled farming
2. b. globalization and shared resources
3. c. medical advances that eliminated many diseases
4. d. people had more children
17. Which of the following fields had the greatest impact on the Earth’s population explosion in the last 10,000 years?

Answers
1. a. Medical
2. b. Agriculture
3. c. Architecture
4. d. Domesticity
18. The total fertility rate (TFR) is defined as the number of ____.

Answers
1. a. births per woman
2. b. conceptions per woman
3. c. fetuses that were spontaneously aborted
4. d. women on fertility drugs
19. What is the current total fertility rate (TFR) worldwide?

Answers
1. a. 1.4
2. b. 2.8
3. c. 4.2
4. d. 5.6
20. What total fertility rate (TFR) is referred to as replacement rate?

Answers
1. a. 1.4
2. b. 2.1
3. c. 2.8
4. d. 3.2
21. If a country wanted to decrease population, it would want which rate pattern to be in effect?

Answers
1. a. The world total fertility rate (TFR) to be equal to the country's replacement rate.
2. b. The world total fertility rate (TFR) to be less than the country's replacement rate.
3. c. The country's replacement rate must be above 2.1.
4. d. The country's replacement rate must be below 2.1.
22. If current trends continue, when will the worldwide total fertility rate (TFR) reach replacement rate?

Answers
1. a. 2020
2. b. 2050
3. c. 2080
4. d. 3010
23. ____ is the number of births per woman.

Answers
1. a. Total fertility rate
2. b. Expressive births
3. c. Implicit calculation of replacement
4. d. The sum of replacement
24. Nearly all of the population growth in the decades to come will take place in ____.

Answers
1. a. developed countries
2. b. developing countries
3. c. emerging countries
4. d. South American countries
25. What will happen to the populations of developed countries during the next few decades and beyond? They will _____.

Answers
1. a. increase more than developing countries
2. b. remain stable in population
3. c. decrease
4. d. increase slowly
26. What term is used in the text to refer to the most affluent countries in the world?

Answers
1. a. affluent countries
2. b. developed countries
3. c. developing countries
4. d. population-rich countries
27. What term is used in the text to refer to countries, which have less wealth, but are experiencing rapid economic growth?

Answers
1. a. impoverished countries
2. b. developed countries
3. c. developing countries
4. d. population-rich countries
28. If a study randomly selected 100 participants from a global pool, where would the majority of participants come from?

Answers
1. a. A developing country
2. b. A developed country
3. c. A declining country
4. d. It could not be determined.
29. What percent of the current world’s population lives in the most affluent countries?

Answers
1. a. 18%
2. b. 34%
3. c. 51%
4. d. 68%
30. ____ refers to the most affluent countries in the world.

Answers
1. a. Developed countries
2. b. Developing countries
3. c. Collective cultures
4. d. Individualistic cultures
31. The United States, Canada, Japan, South Korea, Australia, New Zealand, and nearly all the countries of Europe are examples of ____.

Answers
1. a. developed countries
2. b. developing countries
3. c. collective cultures
4. d. individualistic cultures
32. Developed countries roughly make up ____ of the world’s population, whereas, developing countries make up ____.

Answers
1. a. 18%, 82%
2. b. 27%, 73%
3. c. 37%, 63%
4. d. 47%, 57%
33. Developed countries can be viewed as ____, whereas, developing countries can be seen as ____.

Answers
1. a. wealthy; populated
2. b. populated; wealthy
3. c. collective; individualistic
4. d. individualistic; collective
34. What developed country will have the steepest decline in population between now and 2050?

Answers
1. a. the United States
2. b. Germany
3. c. Japan
4. d. Canada
35. Between now and 2050, what will the increase in population in the United States be nearly entirely due to?

Answers
1. a. immigration
2. b. minority fertility
3. c. majority fertility
4. d. in-vitro fertilization
36. What country allows for more legal immigrations than most other countries and has tens of millions of illegal immigrants as well?

Answers
1. a. the United States
2. b. Canada
3. c. Germany
4. d. Japan
37. What portion of the United States’ population will increase from 16 to 30 percent by 2050?

Answers
1. a. African American
2. b. Anglo American
3. c. Asian American
4. d. Latino American
38. José was born in a country where his parents make less than $2 a day and he is expected to attend grade school but not college. José was most likely born in a ____.

Answers
1. a. developed country
2. b. developing country
3. c. collective culture
4. d. individualistic culture
39. What percent of the world’s population lives on a family income of less than $6,000 per year?

Answers
1. a. 20%
2. b. 40%
3. c. 60%
4. d. 80%
40. Although economic growth has been strong for the past decade, what region remains the poorest region in the world?

Answers
1. a. Africa
2. b. South America
3. c. Southeast Asia
4. d. Western Australia
41. What percent of individuals in developed countries attend college or other post-secondary training?

Answers
1. a. 30%
2. b. 50%
3. c. 70%
4. d. 90%
42. What percent of children in developing countries complete primary schooling?

Answers
1. a. 20%
2. b. 40%
3. c. 60%
4. d. 80%
43. Statistically speaking, a child born today will most likely be from ______.

Answers
1. a. a developing country
2. b. a developed country
3. c. an economically wealthy country
4. d. a high social economic status culture
44. Tim’s family has passed down and adhered to traditions that his ancestors practiced hundreds of years ago. His family believes in interdependence, and that he should help support his community and nation. Tim is most likely from a(n) ______________ culture.

Answers
1. a. individualistic
2. b. traditional
3. c. modern
4. d. developed
45. ____ cultures emphasize independence and self-expression, whereas ____ cultures emphasize obedience and group harmony.

Answers
1. a. Individualistic; collective
2. b. Collective; individualistic
3. c. Developed; developing
4. d. Developing; developed
46. What percent of children in developing countries are enrolled in secondary education?

Answers
1. a. 30%
2. b. 50%
3. c. 70%
4. d. 90%
47. Who attends colleges, universities, and other forms of post-secondary education in developing countries?

Answers
1. a. the wealthy elite
2. b. most of the population
3. c. about half of the middle class
4. d. about one fourth of the middle class
48. What term is used to refer to people in the rural areas of developing countries, who tend to adhere more closely to the historical aspects of their culture than people in urban areas do?

Answers
1. a. agrarian cultures
2. b. conventional cultures
3. c. traditional cultures
4. d. rural cultures
49. What general values do developed countries tend to regard highly?

Answers
1. a. collectivistic
2. b. individualistic
3. c. traditional
4. d. modern
50. What general values do developing countries tend to regard highly?

Answers
1. a. collectivistic
2. b. individualistic
3. c. traditional
4. d. modern
51. What percent of the world’s population lives in the United States?

Answers
1. a. 5%
2. b. 10%
3. c. 15%
4. d. 20%
52. Within any given country, which of the following sets most of the norms and standards, and holds most of the positions of political, economic, intellectual, and media power?

Answers
1. a. majority culture
2. b. minority culture
3. c. ethnic populace
4. d. subcultural groups
53. Who sets most of the norms and standards and holds most of the positions of political, economic, intellectual, and media power in most countries?

Answers
1. a. power culture
2. b. controlling culture
3. c. minority culture
4. d. majority culture
54. By position, power and prestige, the President of the United States and his/her family are members of the______.

Answers
1. a. minority culture
2. b. majority culture
3. c. developed culture
4. d. developing culture
55. What term is often used to refer to a person’s social class, which includes educational level, income level, and occupational status?

Answers
1. a. social class status
2. b. socioeconomic status
3. c. tax bracket status
4. d. education status
56. The expectations that cultures have for males and females are different from the time they are born. The degree of the difference depends on _____.

Answers
1. a. culture
2. b. age
3. c. gender
4. d. socioeconomic status
57. ____ includes an individual’s educational level, income level, and occupational status.

Answers
1. a. Nationality
2. b. Ethnicity
3. c. Sociohistorical index
4. d. Socioeconomic status
58. Also referred to as a person’s social class, his or her ____ includes the level of education, their income, and occupational status.

Answers
1. a. socioeconomic status
2. b. ethnicity
3. c. culture
4. d. sociohistorical index
59. In American culture, a physician spends 12 years in college and training, generally has a high income, and possesses a strong occupational status. In terms of socioeconomic status, a physician would most likely be _____.

Answers
1. a. low SES
2. b. middle SES.
3. c. moderate SES.
4. d. high SES.
60. LaWanda has a high school diploma and is currently working as a waitress but is attending school in hopes of becoming a pediatrician. Her current socioeconomic status is likely ____; however, when she becomes an established pediatrician, her socioeconomic status will be ____.

Answers
1. a. low; high
2. b. high; moderate
3. c. high; low
4. d. moderate; low

 