The CYC-Net Press CYC-Online

eJOURNAL OF THE INTERNATIONAL CHILD AND YOUTH CARE NETWORK (CYC-Net) – ISSN 1605-7406

ISSUE 137 • JULY 2010

PRACTICE

Reporting, assessment and research in child care practice: A personal account of discovery

Dennis McDermott

Child care workers (CCWs) are like parents in that they are the only adults concerned with the overall development of the child on a day-to-day basis. However, unlike parents, CCWs see the child as a problem to be solved, or a question to be answered. Their involvement with the child ends once this problem is solved, or continues until another one occurs.

There is at least one other major distinction between parents and CCWs: CCWs spend considerably more time discussing and writing about the child. There are daily reports, progress reports, intake and assessment reports, staff meetings, case conferences, consultations, and interviews — countless forms and forums in which CCWs report on children. If concern for the overall development of the child distinguishes CCWs from teachers, doctors, and the like, reporting on the child surely distinguishes CCWs from parents.

Assessment reports, and most other forms of reporting in child care work, involve collecting information about the child in question. If we see this child as a problem to be worked on (investigated) it can be said that CCWs are doing research, given McMillan and Schumacher’s (1984) broad definition of research as “a systematic process of collecting and analyzing information . . . to investigate a problem or question” (p.4). It is this issue, assessment as research, and the broader issue of child care work as research, that this paper addresses.

One major difference between traditional research and child care assessment is the phenomenon under investigation. In the former, it is usually some defined set of variables, like the effect of Vitamin C on colds. In the case of child care, the phenomenon is an entire, and unique, person. There is no matching “control group,” no exact clone for comparison. So, if doing an assessment report on a child is research, it is obviously not the traditional kind of research. Nor is the data that is collected the type associated with traditional research. One rarely, if ever, sees means, standard deviations, and other statistical measurements in an assessment report. Nevertheless, it is my contention that CCW reports are in many respects research reports, and there is much to be gained from recognizing this fact.

Behavioral assessment as research
My interest in the research aspects of child care goes back to my student days in child care. I took the CCW training program when it was offered on an in-service basis in various mental health centres in Ontario. At our centre, classes were held one day a week. The remaining four days were spent working shifts in the child and adolescent residences, which operated on a semi-custodial medical model. Up to that point, though I had worked with kids for a number of years, I saw no connection between child care work and research. Then I took a course in behaviour modification. As part of that course, we were to assess the level of a specific behaviour in one of the children with whom we were working. I chose Chris, a 10-year-old “toughy” with a shy, winning smile. It bothered me that he was spending so much time in the “Quiet Room” (isolation). Some days it seemed he was in there more than he was out. I decided to record how long he was in each day: “duration recording.” The results were not quite as high as I had expected, which was interesting. I figured that it seemed longer because when he was in isolation there was a lot of screaming, swearing, and the like, which was more noticeable than most of his other, more appropriate behaviours.

But even more interesting to me were the situations that led up to his being taken to the Quiet Room. These usually involved getting into fights with the staff after refusing to comply with some request. Seeing this, and having just learned about “latency recording” (recording the time that elapses before a behaviour occurs) in the behaviour modification class, I thought it might be interesting to record how long he took to comply with a request after it had been made.

I got measures that ranged from around 15 seconds up to about 5 minutes; but what was even more interesting was the connection revealed between this time lapse and his going to the Quiet Room. Most of his Quiet Room “visits” happened after certain staff made requests and insisted on compliance within 5-10 seconds. The “data” allowed me to suggest, convincingly, that requests should simply be given to Chris and left at that. He would usually comply, but it might take a minute or two, and I had the data to prove it.
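For readers who want to see the arithmetic behind these two techniques, here is a minimal sketch in Python. The timestamps and episode counts are invented for illustration and are not Chris's actual data:

```python
# Duration recording: total time a behaviour lasts during an observation
# period (e.g., minutes spent in the Quiet Room in one day).
# Episodes are (start, end) times in minutes from the start of the shift.
quiet_room_episodes = [(9.5, 14.0), (30.0, 36.5), (52.0, 55.0)]
daily_duration = sum(end - start for start, end in quiet_room_episodes)

# Latency recording: time elapsed between a stimulus (a staff request)
# and the behaviour of interest (compliance).
requests = [("tidy your room", 0.25),   # complied in 15 seconds
            ("come to supper", 2.0),
            ("turn off the TV", 4.5)]   # complied after 4.5 minutes
latencies = [minutes for _, minutes in requests]

print(f"Total Quiet Room time: {daily_duration:.1f} minutes")
print(f"Compliance latency: {min(latencies)} to {max(latencies)} minutes")
```

The point of both recordings is the same: a simple, systematic tally that can replace an impression ("he is always in there", "he never listens") with a defensible number.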

This whole experience was exhilarating for me. Here I was using a research approach to solve real problems of direct relevance to my role as a CCW. Chris (and the staff) were clearly benefiting from my use of research.

Problems with behavioral approach
Impressed with my own experiences in using behavioral assessment to “research” kids’ problems, I made it a major part of a course in assessment and treatment planning that I taught some years later, when I became a CCW teacher. Though most students appreciated learning this approach, there were some definite drawbacks. For some, the behavioral approach did not come naturally, and it was a real effort to get them to see kids that way. For others, the approach was a problem because their agencies did not use (and sometimes resisted the use of) any form of behaviourism, in assessment or otherwise. There was also the general issue of efficiency: the systematic collection of quantitative data was time-consuming and could only be justified in very problematic or puzzling situations. And finally, though the students came up with some very creative ways of taking behavioral measurements unobtrusively, the process was difficult and “unnatural.”

Given all of these impediments, I considered abandoning the whole idea of a research approach to assessment. It seemed that the idea of taking systematic, objective measures of kids was incompatible with most child care work, work that was spontaneous, subjective, and intuitive.

However, other events persuaded me not only to keep the idea but to develop it further. About this time, I was taking a renewed interest in discovering just what child care was about. The Journal of Child Care had recently come on the scene and through reading it, and other material on child care and education, while taking a master’s degree in education (M.Ed.), I felt a new confidence in myself and the CCW profession. Through the M.Ed. program I saw how teachers were turning to a study of their own profession and turning away from borrowing from the academic disciplines of psychology and sociology. This encouraged me to do the same with child care work. I decided, for instance, that rather than push behavioral assessment onto my students I would try to find out how they naturally assessed kids. Then, since I still believed in the value of being systematic and objective, I would see if there wasn’t some way a systematic approach could be compatible with their natural inclinations. This change in attitude felt familiar; it was the same feeling I had as a front-line worker when I finally sat back and really tried to listen to the child I was working with, rather than trying to direct or control him.

At the same time, as part of my M.Ed. training and further reading in child care, I came to realize that my knowledge of research was very limited, based mainly on my awareness of the techniques used in the physical sciences and in psychology, generally referred to as quantitative methods. Porter (1982) and Beker and Baizerman (1982) were two CCW sources that opened my eyes to a whole other research tradition, based in anthropology but practised by others like Piaget: the qualitative approach.

Qualitative research and child care
A quick perusal of the list of qualitative research characteristics presented in Bogdan and Biklen (1982, pp. 45-48) illustrates that this research tradition is more congruent with child care work than quantitative research methods. Among the attributes they list: qualitative research is naturalistic, with the researcher gathering data in the everyday setting; its data are descriptive, words rather than numbers; it is concerned with process as much as with outcomes; its analysis is inductive; and the meanings participants give to their lives are its essential concern.

When I first saw this list, it looked almost like a definition of child care work, the parallels were so striking. It was not hard to see why Porter (1982) would elaborate on the compatibility between child care work and qualitative research. But it was Beker and Baizerman (1982) who took the point one step further. In their view, the qualitative research approach is more than just compatible with CCW; qualitative research is child care work: “direct care work is not simply researchlike. Rather, it is a research process” (p. 15). Armed with this intriguing idea, and the confidence to “listen to” my students, I was determined to discover what the connection was between research and child care work, particularly child care assessment.

The Treatment Planning course that became the focus for this investigation comes in the second year of a three-year Child and Youth Worker Program. At this point, students are working in a field placement three days a week and coming to school for two days. The format for the classes is initially lecture and discussion, moving later to a case-conference format. In the first semester we concentrate on the content and style of assessment reports; in the second, assessment information is presented in a case-conference format with a view to developing or changing treatment programs and techniques. By this semester, most of the students have some form of case-management responsibility in their placements. The need for accurate, concise, and useful information in the second-semester “case conference,” and the problems students have with this, have been the motivating force behind my re-examination of child care assessment as a research endeavour.

A CCW assessment as qualitative research
Assessing the range of a child's behaviour
Much of the content of assessment reports had already been outlined in child care texts or in agency report forms collected for the Treatment Planning course. Although these resources were helpful, there were two problems with them. First, the welter of items and questions for the students to respond to was either overwhelming or not relevant to their situation. Second, the categories (containing the items and questions) to be reported on were too few to give a well-rounded picture of the child. There needed to be fewer response items per category, but more categories.

Response items in agency and text assessment outlines, for instance, might ask the observer: Is the child co-operative with her peers? Is she fearful of them? Is she demanding? Similar lists of questions were given for the other categories as well (e.g., response to adults). To reduce the list of items and allow the students to report on those items relevant to their client and setting, I suggested they answer the more general question, “What are the child’s characteristic interactions with peers?” The same formula, “What is the child’s characteristic response to . . .” was also used for the other categories (e.g., adults, routines, activities, etc.). Though this made for a more adaptable and useful set of response guidelines, the resulting reports were incomplete; they contained information only about the child’s “characteristic” responses.

To get a more complete picture of the child, but as efficiently as possible, I examined both the reports of the students who seemed most concise and complete in their assessments and those of the students who were least complete. The latter tended to give only the child’s best or worst performance on any item (e.g., response to adult direction). Besides painting an incomplete picture, such reports were also obviously biased. The better reports, in contrast, gave a sense of the total range of the child’s functioning.

To answer both the problem of incompleteness and that of bias, I suggested that students give the child’s highest and lowest levels of functioning as well as the typical or “characteristic” level. Later, in reflecting on the notion of assessment as research, I realized that these were the verbal equivalents of common statistics used in quantitative research: the highest and lowest “scores” being the range (a measure of variability), and the most frequent score the mode (a measure of central tendency). The words served the same function as the statistics: to give a quick but relatively complete description of the phenomenon under investigation. I was a bit surprised, and encouraged, to see such a connection between research and assessment, especially since the connection was with quantitative rather than qualitative research.
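The statistical parallel can be made concrete. Here is a minimal sketch in Python, assuming a set of hypothetical shift-by-shift ratings (the categories and counts are invented for illustration, not taken from any student's report):

```python
from collections import Counter

# Hypothetical shift-by-shift ratings of a child's response to adult
# direction (illustrative data only).
ratings = ["cooperative", "defiant", "cooperative", "cooperative",
           "withdrawn", "cooperative", "defiant"]

# The verbal "range": the distinct levels of functioning observed,
# from which the highest and lowest levels can be reported.
observed_levels = sorted(set(ratings))

# The verbal "mode": the characteristic (most frequent) level.
characteristic, count = Counter(ratings).most_common(1)[0]

print(f"Observed levels: {observed_levels}")
print(f"Characteristic response: {characteristic} "
      f"({count} of {len(ratings)} shifts)")
```

Reporting "he is usually cooperative, though at worst defiant and at best enthusiastic" is doing in words exactly what the mode and the range do in numbers.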

Assessing the total child
The problem of too few categories to report on came up as a result of the kinds of placements in which the students worked. A number were in school and community placements and other less traditional CCW settings. These agencies often did not have CCW assessment forms and most CCW texts did not mention such categories as family background, cognitive functioning, and sensory abilities, probably because they were aimed at traditional residential CCWs (Trieschman, Whittaker and Brendtro, 1969; Adler, 1976). Later texts either omit the categories or approach assessment from a different angle (Brendtro and Ness, 1983; Savicki and Brown, 1985).

That such categories as family background (socio-economic status, child-rearing methods, etc.) and cognitive functioning (grade level, subjects taken and performance in these, etc.) were important categories to be assessed became quite obvious during class discussions. For instance, in trying to decide on the meaning of one mother’s apparent ambivalence towards her children (fighting for custody but otherwise being inattentive to them), assessing her socio-economic status clarified the issue. We added up the financial benefits (welfare, mother’s allowance, etc.) of having the children and it was immediately apparent that they were her major source of income.

Treatment decisions likewise became clearer when categories were added that assessed the totality of the child. It was quite common for class discussions, for instance, to jump to treatment alternatives, only to be forced back to a more holistic assessment in order to decide on the relative merits of one of the techniques suggested. In one case, the student was unsure about what to do when one of her young charges sprayed water while at the water fountain in the school. Without knowing his cognitive level, we were unable to say whether the CCW student should walk away (breaking the pattern of attention-seeking set up between the child and his mother); limit the boy more loudly, face-to-face (to compensate for his hearing loss); or physically guide his hand away from the water spout (to break the self-stimulating effect of the spray, related to his lower cognitive level).

At this time, we simply kept adding assessment categories to enable us to do the job at hand. It wasn’t until later that I discovered that this is one of the fundamental methods in qualitative research. In discussing one of the major types of this research approach, ethnography, McMillan and Schumacher (1984) make the point that such an approach “needs a perspective of the totality from which to make . . . decisions” (p. 317). This was true for us, whether the decisions involved assessment (the meaning of the mother’s ambivalence) or treatment (as in the boy’s case).

I suspect that, had we started with ideas of being good qualitative researchers from the beginning, we would have saved considerable puzzlement and back-tracking in this class. It is for this reason that I maintain that not only is CCW assessment a qualitative research endeavour, but taking such a research approach would likely lead to better assessment. And, no doubt, better assessments would in turn lead to better treatment programs and techniques.

Objectivity and sources of assessment “data”
The source of the assessment information (data) is another aspect of assessments that I think would be improved by their being treated as research projects. One of the recurring problems in the students’ written reports was the use of judgmental or quasi-diagnostic terms such as “obnoxious,” “lazy,” “dependent,” or “schizoid” (along with many others). When I asked students why they used such terms, or what they meant by them, they often responded that these were terms other workers used about the child. Since the student and I considered the use of such terms by these workers significant in assessing the child, we were loath to leave them out of the report. However, to simply say, for example, “Billy is obnoxious,” or “Suzie is schizoid,” did not make for a very objective report.

The answer to this was simply to state the source along with the term: “A number of the workers refer to Billy as ‘obnoxious.’ ” This seemed to be a standard research report technique: supplying references for an idea or observation — recognizing one’s sources. In doing so, the student produced both a more objective report and a more complete one: objective in that it did away with judgmental language (bias) in the assessment, and complete in that it preserved the fact that the bias existed in the agency and was therefore a significant aspect of the child’s life.

Another issue concerning sources of information and objectivity in reporting occurred frequently, and dramatically in one particular case. The student gave a very descriptive account of a teenage boy with whom he was working. It appeared complete in that all the categories were covered, giving us all a very clear picture of the boy. However, the picture presented was so negative it left us all wondering what to do with him. It wasn’t until the next discussion period that we realized all the “data” was essentially from two (negative) sources. Neither the boy’s peers nor the boy himself had been used effectively as sources. This resulted in the omission of more positive information such as his aspirations, his abilities, and his interests (other than the antisocial ones).

I suspect that had the student presenting the case seen himself as a researcher (a more objective stance), he might have conducted a more rigorous examination of sources. McMillan and Schumacher (1984) emphasize the use of “multiple data sources” in qualitative research. They point to the qualitative researcher’s practice of “obtaining different kinds of data from different persons in different organizational positions in different situations at different times” (p.317). Though this happens to some degree in child care work, it would undoubtedly happen more where assessments are seen as pieces of research in the qualitative mode.

Language as an assessment tool
As a final point about assessment as research, it should be noted that implicit in qualitative research is the idea that words are the equivalents of numbers in quantitative research: measurement is done verbally. This notion of verbal measurement was introduced earlier, in the example of the measures of variability and central tendency, but it deserves further comment.

The role of writing and general facility with language has been an issue in our CCW program for some years. In fact, English and Research Methods as courses in the curriculum share the lowest rank in the students’ hierarchy of wants (if not needs); their relevance to child care work is questioned. As I have tried to show above, I think that research methods might better be considered as part of the methodology of child care, the assessment part of treatment planning. I also think that language belongs in the same place, as the CCW’s measurement device.

My interest in these issues arose after two different forms of invalid assessments occurred in class: obviously biased reports and reports that could not be validated. In each case the reasons for invalidity discussed earlier were ruled out (only giving one end of the range of behaviour, not recognizing sources, using limited sources, etc.); facility with language appeared to be the problem.

An easily corrected form of this problem was the use of vocabulary having to do with frequency. Some of the most biased reports contained an inordinate number of “alwayses” and “nevers”: “Jimmy never talks to his peers; he’s always sitting alone; he never joins in any group activities.” After questioning the student, it was apparent that “usually,” “seldom,” or equivalent terms more accurately reflected the frequency: “Jimmy seldom talks to his peers; he usually sits alone; on a few occasions he has joined in on group activities.”
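The correction amounts to calibrating frequency words against observed proportions. A minimal sketch in Python, with invented cut-off points (any real scale would have to be agreed on within the team, not taken from this function):

```python
# A rough, illustrative mapping from observed proportions to frequency
# vocabulary. The cut-offs are assumptions made for this sketch.
def frequency_word(times_observed, opportunities):
    proportion = times_observed / opportunities
    if proportion >= 0.95:
        return "always"
    if proportion >= 0.70:
        return "usually"
    if proportion >= 0.30:
        return "sometimes"
    if proportion > 0.0:
        return "seldom"
    return "never"

# Jimmy talked to peers in 2 of 20 observed opportunities:
print(frequency_word(2, 20))   # "seldom", not "never"
```

However rough the cut-offs, the discipline of asking "out of how many opportunities?" before writing "always" or "never" is what turns a biased impression into a verbal measurement.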

The more difficult language problem to find and correct showed up in a number of ways, all amounting to a failure to validate the information presented. In one such type of problem, the student would present a case in class that produced a picture for us of, say, a devious child. Another student, working with the same child, would produce a different picture, that of a fearful child. Or, a case would be presented that had each of us in the class coming up with a different picture. Or, in a written report, I could not get a clear picture of the child, or I got a contradictory one. When the problem was not for one of the reasons mentioned above, it invariably turned out to be lack of facility with language.

Unclear or contradictory reports usually involved a heavy reliance on jargon and generalities. Agency jargon such as “receiving a violation,” “going to the east side,” “being assessed a negative,” made the meaning of such details difficult to discern. Similarly, terms like “manipulative,” “aggressive,” and “appropriate” were either so general or used to cover so many situations that there was little hope for clear understanding and an accurate assessment. In other cases, after talking with the writers or speakers, it was clear that they often did not know the word for what they were describing, or they misunderstood the meaning or nuances of the word they did use. “Sarcastic” might be used where the writer meant “cynical”; “compliant” would be more accurate than “agreeable”; and “with forethought” would fit the facts better than “methodically.” It soon became clear that to be a good CCW researcher one had to have a fairly extensive vocabulary of certain types of words, words that described personality traits and ways of doing things (largely adjectives and adverbs).

Conclusion
In summary, the line of reasoning that has developed from my work with students, and has been presented here, is this: child care assessment is essentially a qualitative research endeavour, and deliberately taking a qualitative researcher’s stance (attending to the full range of a child’s behaviour, the totality of the child, multiple data sources, and precise language) would lead to better assessments.

Since well-done assessments lead to effective treatment, it can be argued that whatever leads to well-done assessments ultimately leads to effective child care work (treatment). Thus, I think that if we as child and youth care workers took ourselves more seriously as researchers, and therefore as language users, we would be more effective.
 

References

Adler, J. (1976). The child care worker: Concepts, tasks, and relationships. New York: Brunner/Mazel.

Beker, J. and Baizerman, M. (1982). Professionalization in child and youth care and the content of the work: Some new perspectives. Journal of Child Care, 1, 1. pp. 11-20.

Bogdan, R.C. and Biklen, S.K. (1982). Qualitative research for education: An introduction to theory and methods. Boston: Allyn and Bacon.

Brendtro, L.K. and Ness, A.E. (1983). Re-educating troubled youth: Environments for teaching and treatment. New York: Aldine de Gruyter.

McMillan, J.H. and Schumacher, S. (1984). Research in education: A conceptual introduction. Boston: Little, Brown.

Porter, C.J. (1982). Qualitative research in child care. Child Care Quarterly, 11. pp. 44-53.

Savicki, V. and Brown, R. (1985). Working with troubled children. New York: Human Sciences Press.

Trieschman, A.E., Whittaker, J.K. and Brendtro, L.K. (1969). The other 23 hours. Chicago: Aldine.

 

This feature: McDermott, D. (1991). Reporting, assessment and research in child care practice: A personal account of discovery. Journal of Child and Youth Care, 5, 2. pp.41-49.