Honours thesis in psychology guidelines

Acknowledgement: Adapted by James Neill from work by Professor Marie Carroll, University of Canberra.

Overview

This article provides important advice for the early development of Honours Thesis in Psychology projects.

Planning

Psychological research is conducted for a variety of reasons, such as to test hypotheses, to establish the existence of a phenomenon, to determine the conditions under which a phenomenon will be observed or to explore its correlates, to test a new method or technique, to indulge the researchers' curiosity, or to reduce uncertainty about psychological processes (Sidman, 1960). No particular type of research can be considered more legitimate than the others, although some types of research lend themselves to good thesis work.

A good topic for a thesis is one which is of sufficient significance to obtain a high mark if it is executed well, which is manageable within the time and resources available, and which allows the candidate to demonstrate his or her research skills. Unless the research question is of some psychological significance, wizardry in execution alone will not result in a good mark. Similarly, a brilliant idea which cannot be brought to fruition satisfactorily within the time available will not earn as good a mark as it should. Nor will pedestrian research, such as correlating scores on tests, be rated highly.

Significance

Significance means simply that the thesis must have some implications for psychological theory or practice. A difficulty sometimes encountered in this respect is that the candidate is employed by a particular clinical, educational, or industrial organisation and sees an opportunity for helping the agency solve a problem while completing the degree. Ideas of this kind are obviously seductive but can make for very bad theses, because a problem which may be of some significance within a particular organisation can be trivial outside that context. This should not be interpreted as proscribing theses on applied topics which can be marked as highly as theses on so-called "pure" topics. The relevant dimension is not pure versus applied, but significant versus trivial.

The time available in which to complete the project is an important constraint on the choice of topic. Realistic thinking is required so that an otherwise good project is not compromised because there is not sufficient time in which to conduct it. So too, consideration must be given to the resources available. While the discipline may be able to make some contribution if it is considered warranted, the size of any such contribution must of necessity be small.

Try to choose a topic that demonstrates that you have ingenuity as a researcher rather than simply a high threshold for boredom. Ingenuity should not be confused with complexity of instrumentation or data analysis, both of which may be potent sources of error in the wrong hands. Ingenuity in this context is concerned more with the quality of reasoning than with use of different techniques.

Types of projects

Fourth year topics, as initially posed by candidates, are usually of three main types: Pedestrian research, Formula research, and Theory-driven research.

Pedestrian research

Pedestrian research involves empirical work which is completed without ever setting out to address a theoretically-driven research question. Examples are:

  1. Manipulative research of the "lights on/lights off" variety: an independent variable is manipulated (e.g., lights are on or off) and a dependent variable is measured (e.g., mood), but there is no testing of an underlying theory.
  2. Applications of a particular data-analytic technique to virtually any data set that can be readily obtained.
  3. Literal replications of studies reported in the literature, which can mark the beginning of a mature research program but which, of themselves, are pedestrian.
  4. Crude survey research (e.g., a local polling study to find out "What do students think of the food served in the cafeteria?").

The distinguishing feature of Pedestrian research is that no thought about any issue or problem in psychology is required.

Formula research

In Formula research, a research question is posed, often a significant one, but the method of answering it is straightforward (if time-consuming). Examples are:

  1. Norming a psychological test
  2. Evaluation of standard procedures in a particular organisation
  3. Scaling exercises

The distinguishing feature of Formula research is that the steps followed in providing an acceptable answer to the research question are well known.

Theory-driven research

Theory-driven research involves significant development of both a research question and a strategy for answering the question. Examples are:

  1. Comparing two theories with respect to the predictions they make about certain phenomena
  2. Testing the implications of a particular theory
  3. Constructive, as opposed to literal, replications (cf. Smith, 1970)
  4. Understanding the processes for change, as distinct from just the outcomes, in evaluation studies.

The defining feature of Theory-driven research is the exercise of imaginative but disciplined thought about a problem of behaviour or experience of some generality.

Which type?

Most supervisors would agree that the first type of topic is not, and the third type is, appropriate for fourth year theses. There may be some disagreement about the second type as this type forms a major and significant part of applied research in psychology. It can demonstrate some of the skills of research (e.g., analysing and interpreting data, writing effectively) but stresses some (e.g., organising time and resources) to the exclusion of others (e.g., formulating a research question). In short, it demands rather too much perspiration and rather too little imagination. However, if a topic of this type is contemplated, check with your supervisor about its acceptability. It is well to do this early before you invest too much effort and ego in it.

One final point: "Bathtub" theorising divorced from a base of knowledge of the work and thinking of others is not recommended, except as a preliminary exercise. Inspiration is of course the key to first class research, but do not expect inspiration without perspiration. The serendipitous finding is typically preceded by a great deal of hard work.

Sources of ideas

The source of good ideas for theses cannot be specified. A supervisor may suggest some ideas but do not expect a blueprint for the generation of good research. Ideas come from your own reading and thought and the more of this the better. After three years of psychology, some questions should have occurred to you as interesting and important and these can serve as a starting point.

One fairly conventional approach to generating research ideas is to immerse yourself in the literature in some particular area of interest, then to engage in some uninhibited divergent thinking, before picking among the products for the most significant and practicable. Marx (1970) offers several suggestions along these lines.

Another important source of research ideas is discussion with your supervisor and other staff members. Your research topic needs to match the interests and expertise of your supervisor to bring out the best in their skills.

Method and design

The nature of the research problem should dictate the method to be used in its investigation. Although this seems self-evident, it sometimes happens that a method, because of the researcher’s fondness for it or because of ignorance of other methods, is allowed to set the direction of research. Good theses employ the best method available for investigation of the problem. Where the best method cannot be employed, some rethinking of the problem is called for to make it more tractable, or perhaps even a change of topic is required.

A simple classification of research methods in psychology was proposed by Willems (1969). He argued that research methods can be located within a two-dimensional space, with one dimension being the extent to which manipulation of antecedent conditions is involved and the other being the extent to which units of measurement are imposed on the observations made. The clinical case study of the sort Freud, for example, reported would fall at the low end of both the dimensions of manipulation and measurement. In the case study, there is no attempt to control the host of variables influencing the individual's behaviour, nor is any system of measurement involved in the gathering of data. The laboratory experiment marks the upper end of both dimensions, as in this case the independent variables are strictly controlled and the dependent variable is measured in terms of an interval or ratio scale of measurement.

Methods which are high on both dimensions are not necessarily superior to those which are low on both dimensions. Judgements of this kind cannot be made independently of the problem to which they are applied. However, two rough rules of thumb might be of use to the beginner. First, methods high on both dimensions are preferred. Second, methods which are low on either or both dimensions present more "traps for the young player".

Once a method has been selected, get down to the "nuts and bolts" of planning for the project. The major concern here is the overall design of the study, since from this most of the details will flow. There are a variety of conventional designs employed in psychological research, and wittingly or unwittingly, the research plan will probably conform to one of these.

There are two basic problems in research design, which Campbell and Stanley (1963) have described as ensuring the internal and external validity of the study. Internal validity is primary as without this external validity is not possible, and much of the methodological criticism of your study will be directed to ensuring that the requirements of internal validity have been met. Internal validity refers to the correctness of the inferences drawn from the actual observations in the study. Your thinking about the problem leads you to expect a certain outcome and you design the study so that the outcome will be realised or, if it is not, that a negative result will have some meaning. You of course wish to conclude that the variables that are the focus of your thinking are responsible for the results you obtain. But there may be plausible rival interpretations of the results. These arise when variables other than those you are interested in have not been neutralised as a result of the design employed, and these pose threats to internal validity. It is only when the critic cannot propose a plausible rival interpretation that he or she will be happy to accept your inference. So it is your task to second guess the critic, to look for weaknesses in the design which will allow alternative interpretations to intrude, and to take steps to remedy these. Correlational designs are particularly troublesome in this regard and should come in for close scrutiny.

External validity refers to the soundness of the generalisations made from your observations to a wider situation. Some level of generalisation is almost always implied, since statements about a particular set of data obtained under a particular set of conditions are of themselves not of a great deal of interest. External validity cannot be guaranteed. However, steps can be taken to ensure that some reasonably credible generalisations are possible beyond the particular participants or measures or other aspects of the study that were employed.

A practical suggestion for the design stage is to list the variables you are concerned with in the study. This itself may be a salutary exercise as long lists may suggest you have a messy project in mind. Then group the variables into dependent and independent (or quasi-dependent and independent in the case of non manipulative designs). This will indicate the direction of influence within the set of variables you have in mind. In other than purely descriptive projects, which are not as a rule recommended for thesis work, some direction of influence is implied and you should be able to indicate this by labelling the variables. Next, note beside each variable how it is to be measured, selected for, or manipulated and indicate the type of metric involved. Finally, in a diagram or series of diagrams set out the way in which the variables interact. This may be in terms of, for example, a contingency table, correlation surface, one-way or factorial design. This final step might be beyond those without a background in research design but it is worth having a shot at. The purpose of planning out the design in this way is to clarify your thinking (or show how woolly it still is), to pinpoint details you have yet to work out, and to suggest a model for any statistical analysis that might be required. It will be of considerable value in drafting your proposal.
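As a toy illustration (study, variables, and values are all hypothetical), the variable-listing exercise described above might be recorded as a simple table:

```python
# Toy illustration (hypothetical study): a variable inventory of the kind
# described above -- each variable's role, how it is obtained, and its metric.
variables = [
    # (name, role, how obtained, metric)
    ("study hours",   "independent",       "self-report diary",   "ratio"),
    ("anxiety score", "dependent",         "questionnaire total", "interval"),
    ("gender",        "quasi-independent", "selected for",        "nominal"),
]

for name, role, source, metric in variables:
    print(f"{name:14} {role:18} {source:20} {metric}")
```

Even this crude inventory forces the labelling of direction of influence and metric that the paragraph above recommends, and it exposes gaps (e.g., a variable with no stated method of measurement) before data collection begins.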

Proposal

It is a useful discipline to have to communicate research ideas to others and to have them criticised. Presentation of a proposal to staff reviewers provides an opportunity to clarify your thinking about the project and to gain some feedback on the adequacy of your ideas.

Candidates are sometimes nervous about the prospect of presenting a proposal to staff. On the one hand, a candidate may feel inadequately prepared because of the short time for reading and thinking about the project. Staff appreciate this and do not expect a final and complete statement. It is inevitable that there will be gaps in the proposal, but the exercise serves one of its purposes by drawing attention to these. On the other hand, a candidate may feel that s/he is being "put on trial". While there is an obvious need for staff to be critical, the purpose of the presentation is not to evaluate the candidate but to provide feedback on the proposed project, and the atmosphere is accordingly co-operative rather than judgemental.

A range of opinions will be expressed about the proposal and your task is to note and evaluate them. You do not necessarily have to accept them all; it is your project and in the end you must take responsibility for it. Listen to the criticisms, ask for clarification where necessary and do not be afraid to defend your ideas where you think appropriate.

So that staff do not come to the presentation "cold", it is a good idea to post the proposal at least a day in advance of the presentation. The proposal poster should summarise the essential points to be made in the presentation; only references central to the thesis should be cited, and publication details of these should be provided. The proposal should include the working title you have chosen for the proposal, which need not be the title of the finished thesis.

Do not talk for more than five minutes, which will leave about five to ten minutes for comments and questions. The presentation should include two key components: Statement of the problem and the Method, with approximately equal time devoted to each. A statement of the problem should provide the answers to two questions: What is being attempted in the thesis - and why? It is surprising how difficult some candidates find it to articulate in one or two sentences the response to the question: What are you trying to do? The question is often met with a "flight into the literature". Important sounding names or concepts are reiterated, but the listener is left none the wiser about the specific purpose of this particular project. It is a useful discipline to attempt to write in one clear sentence the aim of the project. Once this has been done to your satisfaction, the details of the problem can be suitably elaborated.

The answer to why the question is being investigated involves an outline of the background to the question and the implications for psychological theory or practice of resolving it. A detailed review of the literature, such as is to be found in the final version of the thesis, is unlikely to be available, nor is one required. Remember the presentation is oral, and listeners, as opposed to readers, are not in the position to process large chunks of detailed material. Sketch in the essential references in marshalling the argument for why the topic you have chosen needs to be investigated. Less pertinent material can be kept in store for answering questions should they arise. A systematic review of the background to the problem should make its significance obvious, though a brief comment on the implications of an answer to the question may not be wasted. A statement of the problem should conclude with your specific aim or expectation. You should have a specific hypothesis (or hypotheses) that you are seeking to confirm. Where this is not available, explain why it is not and when you expect to be able to make a specific claim. If the response to the latter is not until the thesis is complete, this indicates that you are on a fishing expedition. These are leisurely affairs which often result in catching nothing.

In the Method section, the traditional categories of a research report (Participants, Materials, Procedure) should be used with the addition of a category in which methods of data analysis are discussed. Under Participants, comment should be made on the number to be employed, where they are to be obtained, and any characteristics pertinent to the proposal (e.g., gender, age, dimension on which matching might be contemplated). Under Materials, the instruments to be employed in the project should be described. Where these are psychological in nature (tests, questionnaires) comment should be made on their reliability and validity. Tests for reliability and validity may form part of the project, or you may be relying on previously published data. Clarify. This can be a major source of weakness in a proposal, with the candidate being rendered speechless by simple questions such as: How stable are the scores you obtain using this instrument? or How do you know the test measures the particular concept in your hypothesis? Replies such as "I hope so" or "It says so on the label" will not impress. Be sure you know the instruments you will be using and their reliability and validity for your purposes before the presentation.

The Procedure to be followed in the project should be briefly but clearly explained, with comment on the way in which it relates to the problem under consideration. In particular, some comment should be made on the potential sources of confounding in the study and the way in which these are controlled for by the procedures to be adopted. It is assumed that the procedures to be followed are at all times ethical. Where there may be some doubt about this (e.g., withholding treatment, deceit, high intensity stimulation), the issue should be discussed.

Under Data Analysis, you should indicate the nature of the data you expect to obtain and the technique appropriate for its analysis. Some operations are usually performed on raw observations to produce data (in the simplest case, summing over categories of response to questionnaire items to produce one score for each participant), and the nature of these operations should be made clear. Second, the tactics of analysis should be indicated. The actual statistical technique to be employed need not be known in detail, but the general approach should be indicated (e.g., correlation of two sets of scores, determination of the difference between means, analysis of frequencies of response in different categories). Since an empirical thesis will generate measurements of some order, the nature of these and how they are to be dealt with should be anticipated in the proposal.
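A minimal sketch, with entirely hypothetical data, of the raw-observations-to-data step described above (summing over categories of response to produce one score per participant) followed by simple descriptive treatment:

```python
# Sketch (hypothetical data): turning raw questionnaire responses into
# one score per participant, then computing basic descriptive statistics.
from statistics import mean, stdev

# Each row: one participant's responses to five Likert items (1-5).
raw_responses = [
    [4, 5, 3, 4, 4],
    [2, 1, 2, 3, 2],
    [5, 4, 4, 5, 5],
    [3, 3, 2, 3, 3],
]

# Sum over items to produce a single scale score per participant.
scores = [sum(responses) for responses in raw_responses]

print("scores:", scores)
print("mean:", mean(scores), "sd:", round(stdev(scores), 2))
```

Stating the operations this concretely in the proposal, even as a sketch, makes clear what the unit of analysis is and what the formal tests will actually operate on.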

A brief summing up of the proposal should locate what you expect to find within the context of the problem as you have outlined it. What will it mean if the outcome is as expected, and what if it is not?

If you have made clear what you are intending to do and why, staff can spend most of the discussion time on the substantive issues raised, which is of course to your benefit. If your presentation has been garbled, much time will be wasted trying to clarify what is proposed. What you want is feedback on the merits and demerits of your planned project and this will be facilitated if your presentation is to the point.

In criticising your proposal, different staff members will take different tacks. Some will be concerned with measurement and analysis, while others will be more concerned with the fit between what you are actually doing and what you claim to be doing. Still others will focus on the theory from which the study is derived. What makes for effective criticism is difficult to describe, which is unfortunate as forewarned is forearmed. One of the best papers on the topic is that by Lumsden (1973); a reading of this would prepare you for the types of arguments several of your critics will put to you. As it is a brief and pithy treatment by a very astute critic and raises a number of problems commonly encountered by beginning researchers, Lumsden's paper is highly recommended. Good books on research design and analysis are available in the library, including Bordens and Abbott (1991, 2nd ed.), Mayer and Goodchild (1990), Bachrach (1981), and Shea (1995).

When it comes to actually presenting the proposal, the following advice might help:

  1. Don’t begin by apologising for the proposal or by attempting to make someone else the scapegoat for its manifest weaknesses. Limitations are understood; lack of preparation is not. Either way an apology is superfluous.
  2. Don’t talk for too short or too long a time. Keep the time limit in mind and pace your presentation. Candidates who suggest that the Abstract is self-explanatory and that they will only answer questions will be asked to proceed with their presentation. Candidates who expect to avoid questions or criticism by speaking over time will be asked to stop.
  3. Try not to read a prepared essay. This will either bore or enrage the listener. You will of course need to prepare a draft of the proposal, but then summarise the essential points on reference cards or one sheet of paper and speak from these. Bear in mind that the requirements of effective oral and written forms of communication differ.
  4. Avoid vague generalisations and unsupported assumptions.
  5. Be prepared for criticism. There is no need to be defensive, nor do you need to accommodate every counter argument put to you. Be open to contrary opinion when it comes.
  6. Before coming to the proposal formulate answers to the following:
    1. What am I attempting to do? Can I verbalise this in a comprehensible way to non-psychologists?
    2. Why am I attempting to do this? Has it been done before? If not, why not? Is it because it is a silly idea or because the answer to it is not possible? If it has been done before, why should it be done again?
    3. What is the theoretical orientation of the project? Why have I chosen this particular theoretical orientation? Are there others equally applicable to the problem? If so what are the grounds for my choice?
    4. How do I plan to do it? Are there any gaps between what I want to do and what I will be actually doing? For example, do the instruments I am planning to use really reflect the concepts I am interested in?
    5. What guarantee have I that I will have access to the population from which participants are to be drawn?
    6. How will I know when the question is answered? What, specifically, are the results I am expecting?
    7. What are the implications of an answer to the question? For psychological theory? For psychological practice?

Finally, once the dust has settled, reconsider your study in the light of the criticisms made of it. Redesign where necessary and fill in gaps with further library search. This is easier to do if you make comprehensive notes of the objections raised during your proposal as soon as possible after the sessions. Keep these notes to aid you later in the year when you begin drafting the thesis proper. You may be predisposed to overlook some of the awkward points raised, but your critics, if they become examiners, will not.

Conducting the study

When the design is as complete as you can make it, begin work as soon as you can on the actual conduct of the study. You can anticipate delays along the way and some loss of participants due to spoilt records, failure to arrive at the appointed time and so forth, so begin early. It can be a good idea to conduct a pilot study to ensure that your procedures are working as they should, that the time allowed is adequate, and that you have developed a sound routine in administering the procedures. What you are seeking is a standard approach to all participants, varying only, in the case of manipulative studies, those factors of pertinence to the research question. To do this you need to train yourself. For example, prepare a set of instructions and learn them by rote so that you are treating each participant in the same way. The more complicated the procedures, the more training you will require.

It is helpful to act as a participant yourself. This allows you to feel what it is like to be on the receiving end, and to anticipate problems that others may have in doing what you ask. Playing the role of the naive participant can alert you to ambiguity and at times, absurdity in the procedure you are employing.

It is important in the conduct of psychological research that you develop a rapport with your participant. This calls for certain social skills which you may have, but which, if you do not, must be learned from observation of others or by self-reflection. An authoritarian approach is not recommended as a way of gaining rapport. Many participants will be apprehensive in the research context, and one way of reducing this is to provide full information on what you are doing and why. At the conclusion of the study, "debrief" participants either verbally or by giving them a written account of the project's aims, and where possible, allow them access to the results.

Treating the participant with integrity is not only required to develop rapport but is also a necessary characteristic of an ethical researcher. Where your research raises obvious ethical problems, these should be discussed with staff and fellow students and modified until a satisfactory state of agreement is reached. But all psychological research involves some ethical problems (invasion of privacy, confidentiality) and these should not be lost sight of in the "march of science". For most projects you are required to submit an ethics application to the University Human Research Ethics Committee. Your supervisor will advise you on whether this is necessary. If it is, the ethics application must be submitted and approved before you begin collecting data. Where your project involves some agency outside the university (e.g., hospital patients, school students, members of community groups) you must also obtain the necessary clearance from the agency's own ethics committee (e.g., Health Board, Education Board). This requirement for clearance from an outside ethics committee can mean lengthy delays before you can begin, since meetings may be scheduled only on an infrequent basis. You must anticipate such delays in the planning of your project, and avoid reliance on such organisations unless you are well advanced in your planning.

The essence of empirical research is observation, and in the conduct of the study you should use your powers of observation to the full. Even if the situation is a highly structured one as in a laboratory experiment, it is important not to "switch off" and let the experiment run itself. Be alert for the unexpected result which may signal a problem with the procedures being used, or which may suggest an interesting, but previously overlooked, hypothesis. A potentially important source of information is the report of the participant on completion of the study. The participants' perceptions of the research situation may provide clues to what you are actually doing as distinct from what you think you are doing. The value of this information will depend very much on the nature of the task set.

When all the data are to hand, begin analysis informally, then proceed to the formal treatment you had planned, and finally examine the data in ways which might have suggested themselves during the conduct of the study or the formal data analysis. The preliminary and informal analysis is most important for gaining a "feel" for the data. How systematic are the results you have? What are the trends? Are there data points which are widely discrepant from the majority of observations, the outliers? What might account for these? When small sample sizes are involved, this preliminary analysis can be done by simply scanning the raw data. In the case of larger sample sizes, plotting frequency distributions and scatterplots is probably called for. Those familiar with Exploratory Data Analysis (see e.g., Velleman & Hoaglin, 1981) could use these techniques to good effect at this stage. This preliminary analysis will provide the information conventionally summarised in descriptive statistics such as the mean, standard deviation, and correlation coefficient. Compute these at this stage, if they are not to be provided by the formal tests you have in mind, but do not omit the essential first step of "eyeballing" the data. From this preliminary analysis you should be able to predict what your formal analysis will show (unless you are using a complex multivariate procedure). A discrepancy between prediction and outcome should then alert you to problems with the formal analysis (or with your preliminary analysis) which should be resolved to your satisfaction before proceeding. If you are thoroughly conversant with your data in the manner suggested here, you will be able to draw sound conclusions from them. If you are not, you could find yourself talking nonsense. Bear in mind that "the finding of statistical significance is perhaps the least important aspect of a good experiment" (Lykken, 1968).
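A minimal sketch of the preliminary "eyeballing" pass described above, using hypothetical scores: compute the descriptive statistics and apply a crude check for discrepant observations before any formal analysis.

```python
# Sketch (hypothetical scores): a preliminary pass before formal analysis --
# descriptive statistics plus a crude outlier check.
from statistics import mean, stdev

scores = [12, 14, 13, 15, 14, 13, 41, 12, 14, 13]

m, sd = mean(scores), stdev(scores)

# Flag points more than two standard deviations from the mean.
outliers = [x for x in scores if abs(x - m) > 2 * sd]

print(f"mean={m:.1f}, sd={sd:.1f}, outliers={outliers}")
```

Here the single discrepant value would be flagged for investigation (a recording error? a genuinely aberrant participant?) before it is allowed to distort the formal analysis; a two-standard-deviation rule is only one of several conventions for such screening.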

References

  1. Bachrach, A. J. (1981). Psychological research: An introduction (4th ed.). New York: Random House.
  2. Bordens, K. S., & Abbott, B. B. (1991). Research design and methods: A process approach (2nd ed.). California: Mayfield.
  3. Campbell, D. & Stanley, J. (1963). Experimental and quasi-experimental designs for research. Chicago, IL: Rand-McNally.
  4. Lykken, D. T. (1968). Statistical significance in psychological research. Psychological Bulletin, 70, 151-159.
  5. Lumsden, J. (1973). On criticism. Australian Psychologist, 8, 186-192.
  6. Marx, M. H. (1970). Observation, discovery, confirmation, and theory building. In A. R. Gilgen (Ed.), Contemporary scientific psychology (pp. 13-42).
  7. Mayer, R., & Goodchild, F. (1990). The critical thinker. California: Brown.
  8. Sidman, M. (1960). Tactics of scientific research. New York: Basic Books.
  9. Smith, N. C. (1970). Replication studies: A neglected aspect of psychological research. American Psychologist, 25, 970-974.
  10. Velleman, P. F., & Hoaglin, D. C. (1981). Applications, basics, and computing of exploratory data analysis. Belmont, CA: Wadsworth.
  11. Willems, E. P. (1969). Planning a rationale for naturalistic research. In E. P. Willems & H. L. Raush (Eds.), Naturalistic viewpoints in psychological research (pp. 44-71). New York: Holt, Rinehart & Winston.

This article is issued from Wikiversity - version of Monday, December 07, 2015. The text is available under the Creative Commons Attribution/Share Alike licence, but additional terms may apply for the media files.