Research and testing are used in human services to develop the most effective programs possible. Researchers employ a wide range of data collection methods, so it is important to understand how effectively these instruments function and the degree to which their outcomes will produce the same findings in the future. These are the reasons researchers study different types of reliability and validity. Testing methods should be both reliable and valid, as well as consistent and specific. This paper discusses types of reliability, types of validity, and examples of how each applies to human services research. Finally, it discusses methods of gathering data, the instruments used, and why it is imperative that these methods have reliability and validity.

Define Types of Reliability and Provide Examples
Reliability refers to the degree to which observations, experiments, tests, and measuring procedures produce identical results on repeated examinations (Rosnow & Rosenthal, 2008). There are five types of reliability: alternate-form, internal-consistency, item-to-item, judge-to-judge, and test-retest reliability (Rosnow & Rosenthal, 2008). First, alternate-form reliability is the degree of similarity between different forms of the same test (Rosnow & Rosenthal, 2008). Internal-consistency reliability indicates how reliable the test is as a whole, or how reliable the judges' combined ratings are (Rosnow & Rosenthal, 2008). Next, item-to-item reliability refers to the reliability of any single item on average, and judge-to-judge reliability is the reliability of any single judge on average; the parallel between the two is clear (Rosnow & Rosenthal, 2008). Finally, test-retest reliability is the degree of stability of a measuring instrument or test over time (Rosnow & Rosenthal, 2008). For example, a questionnaire or survey can show researchers how well specific items on a test relate to one another (Rosnow & Rosenthal, 2008).
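Internal-consistency reliability is commonly estimated with Cronbach's alpha, which compares the variance of individual items to the variance of the total score. The sketch below computes alpha for a small set of hypothetical questionnaire responses; the data and the 1-5 rating scale are illustrative assumptions, not from the sources cited above.

```python
def cronbach_alpha(item_scores):
    """Cronbach's alpha for a list of respondents' item-score lists.

    alpha = (k / (k - 1)) * (1 - sum(item variances) / total-score variance)
    """
    k = len(item_scores[0])  # number of items
    n = len(item_scores)     # number of respondents

    def variance(xs):
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    # Variance of each item across respondents
    item_vars = [variance([row[i] for row in item_scores]) for i in range(k)]
    # Variance of each respondent's total score
    total_var = variance([sum(row) for row in item_scores])
    return (k / (k - 1)) * (1 - sum(item_vars) / total_var)

# Hypothetical data: five respondents answering a four-item questionnaire (1-5 scale)
scores = [
    [4, 4, 5, 4],
    [2, 3, 2, 2],
    [5, 4, 5, 5],
    [3, 3, 3, 2],
    [4, 5, 4, 4],
]
alpha = cronbach_alpha(scores)
print(round(alpha, 2))  # close to 1.0, so the items hang together well
```

A high alpha (conventionally above about 0.70) suggests the items measure the same underlying construct; a low alpha signals that some items do not relate to the others and the instrument needs revision.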
However, if the questions do not demonstrate any relationship, the survey is of little use. Test-retest reliability measures stability, whereas alternate-form reliability measures equivalence (Rosnow & Rosenthal, 2008). If participants take the identical test at different times, they may score higher the second time because of their previous experience with the material. Alternate-form reliability attempts to eliminate this effect by using instruments that measure identical components in a different form. For example, the Scholastic Assessment Test (SAT) is a standardized test that regularly goes through revisions, yet the new material continues to be measured against the same standards (Rosnow & Rosenthal, 2008).
Every type of reliability is put to use in human services research, and it is important that researchers ensure the tests they use are reliable. With alternate-form reliability, for instance, a researcher gives a participant a test that describes his or her characteristics, then gives the participant the same test with different wording; the results ought to be the same. If the answers differ, the test needs revising, which leads the researcher to examine internal-consistency, item-to-item, and judge-to-judge reliability. The outcomes of those checks determine how reliable the characteristics test is (Rosnow & Rosenthal, 2008).

Define Types of Validity and Provide Examples
Validity refers to the degree to which an instrument measures what it claims to measure, and with what accuracy (Rosnow & Rosenthal, 2008). There are eight types of validity: construct, content, convergent and discriminant, criterion, external, face, internal, and statistical-conclusion validity. Three of the eight are especially important to instrument construction: content, criterion, and construct validity (Rosnow & Rosenthal, 2008). When developing tests that measure personality (e.g., the MMPI) or aptitude, such as a classroom examination, these three types of validity are taken into account. Content validity represents how well the test samples the content it needs to cover.
However, content validity is not to be confused with face validity. Face validity measures how effective the instrument appears to the user, not how effectively it actually performs; the look of a test affects how seriously a user will approach it (Rosnow & Rosenthal, 2008). Next, criterion validity is the degree of the relationship between the test and the outcomes of what the test claims to measure. Construct validity highlights how effectively the instrument measures what the researcher had in mind. Finally, convergent and discriminant validity add to construct validity by measuring the association with related (i.e., convergent) and unrelated (i.e., discriminant) tests (Rosnow & Rosenthal, 2008).
Example of a Data Collection Method and Instrument
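Convergent and discriminant validity can be checked numerically: a new instrument should correlate highly with an established measure of the same construct and weakly with a measure of an unrelated one. The sketch below assumes a hypothetical new anxiety scale, an established anxiety scale, and a reading-skill measure; all scale names and scores are invented for illustration.

```python
def pearson_r(xs, ys):
    """Pearson correlation between two score lists of equal length."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Hypothetical scores for eight participants on three instruments
new_anxiety = [10, 14, 8, 20, 16, 12, 18, 9]
established_anxiety = [11, 15, 9, 19, 17, 13, 16, 10]  # same construct
reading_skill = [55, 60, 52, 48, 70, 65, 50, 62]       # unrelated construct

convergent = pearson_r(new_anxiety, established_anxiety)   # should be high
discriminant = pearson_r(new_anxiety, reading_skill)       # should be near zero
print(round(convergent, 2), round(discriminant, 2))
```

The pattern of a strong convergent correlation alongside a weak discriminant one is evidence that the new scale measures anxiety specifically, rather than some broader or different trait.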
Data collection is the process of gathering information that addresses the critical evaluation questions set out earlier in the evaluation process (Cherry, 2010). Many means and resources are available for collecting information. Questionnaires and opinion surveys are two data collection approaches a human service organization can use to collect information for research. Questionnaires consist of a group of questions to which people respond either aloud or in writing. Opinion surveys are simpler and more cost efficient: they are brief assessments that evaluate what specific groups of individuals think or believe about a specific topic.

Importance of Data Collection Method and Instrument Reliability and Validity
Both questionnaires and opinion surveys can be put into operation in human service organizations. One may be used to test employees' opinions about the effectiveness of a certain program, while the other may be used to test the opinions of the organization's clients about the same program. Whatever the reason the data are collected, and whichever method is used to collect them, the data collection method must be both valid and reliable. The method must be reliable to ensure the results stay consistent, and it must be valid so that it gathers the information researchers are truly seeking (Rosnow & Rosenthal, 2008).

Example of a Different Data Collection Method and Instrument in Manager Research
Human service and manager research use various data collection methods, which normally fall within two categories: systematic observational research and self-report measures. Systematic observational research is a strategy researchers use to make observations with the intention of formulating a scientific explanation, measuring those observations against a set of standards (Rosnow & Rosenthal, 2008). For example, participant observation is a type of fieldwork and a means of conducting observational research: researchers place themselves in the environment alongside their participants and interact with them in activities within that environment (Rosnow & Rosenthal, 2008). Content analysis is another style of observational study; it differs from participant observation because it uses tangible material instead of personal accounts. Content analysis consists of classifying and coding different records, including magazines, newspapers, prior research, and other documents (Jackson, Drummond, & Camara, 2008).
Importance of Data Collection Method and Instrument Reliability and Validity
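The classifying-and-coding step of content analysis can be sketched as tallying how often each category in a coding scheme appears in a document. The codebook, categories, keywords, and article text below are all hypothetical, invented only to show the mechanics of coding.

```python
import re
from collections import Counter

# A hypothetical coding scheme: each category maps to the keywords
# a coder would tally when classifying a document.
CODEBOOK = {
    "housing":    {"housing", "shelter", "rent", "eviction"},
    "employment": {"job", "employment", "wages", "hiring"},
}

def code_document(text, codebook):
    """Tally how often each category's keywords appear in a document."""
    words = re.findall(r"[a-z]+", text.lower())
    counts = Counter()
    for category, keywords in codebook.items():
        counts[category] = sum(1 for w in words if w in keywords)
    return counts

article = ("Rising rent and a wave of eviction notices have pushed "
           "families toward shelter programs, while a new hiring "
           "initiative promises more jobs.")
counts = code_document(article, CODEBOOK)
print(counts)  # the housing category dominates this article
```

In practice, human coders apply richer rules than keyword matching, and the coding scheme itself must be checked for judge-to-judge reliability, since two coders applying the same codebook should produce the same tallies.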
The use of various data collection methods and instruments is of great value to researchers. Information that is reliable and valid ensures the research study has a high level of credibility. Researchers in human services want to produce valid research designs so the research can be repeated accurately, and the outcome of a study must be reliable so that its replication produces identical outcomes. Reliability and validity are imperative not only to human services but to other fields as well. Moreover, each of the instruments previously discussed is important to securing the reliability of future research.

Conclusion
Although human participants are needed to conduct research, it is important that information regarding the research, its benefits, and its risks is given to each participant. Ethical responsibilities belong to researchers as well as other trained professionals and should be practiced thoroughly. This paper described observation and measurement and how each relates to human services research, and it examined data collection methods and the importance of reliability and validity in data collection. The researcher's purpose is to achieve accurate results from the tests he or she administers; however, the tests must be reliable and valid and must consistently produce the same outcomes.