The POSAS (POLBAN Online Self-Assessment System) project at the Computer and Microprocessor Laboratory, Electrical Engineering Department at POLBAN, has recently been developed. The POSAS team has built an online, interactive Self-Assessment System to reinforce POLBAN students' classroom learning. The system provides an instructional mode in which students practice solving problems, and an assessment mode that provides feedback about progress. The practice mode uses student proficiencies to determine the type of hint a student is most likely to need, while the assessment mode uses proficiencies to give students and lecturers feedback about progress. The practice mode functions as a tutorial until students feel ready to test themselves by requesting a "Quick Check" to see their progress. In this paper we discuss how the POSAS assessment tools supported the development and implementation of the Self-Assessment System.
POSAS is defined as students judging the quality of their work, based on evidence and explicit criteria, for the purpose of doing better work in the future. When students are taught how to assess their own progress, and when they do so against known and challenging quality standards, there is much to be gained. Self-evaluation is a potentially powerful technique because of its impact on student performance through enhanced self-efficacy and increased intrinsic motivation. Evidence of the positive effect of self-evaluation on student performance is particularly convincing for difficult tasks, especially in academically oriented schools and among high-need pupils. Perhaps just as important, students like to evaluate their work.
The Web-based self-assessment tool for POSAS provides a number of benefits to POLBAN students.
A secure online environment to manage POLBAN assessment activities.
Reports available when POLBAN staff need them, with the ability to create email attachments.
Dynamic educational content that will be frequently updated.
Flexible structure allows set-up and reporting for multiple study programs and courses in an organization.
Multiple users from the POLBAN organization can access the facilities they are responsible for.
Access action plan templates from a shared library and save your own personal plans.
The Theoretical Model Behind Self-Evaluation
The model that follows provides the theoretical basis for self-evaluation (Rolheiser, 1996). Research indicates that self-evaluation plays a key role in fostering an upward cycle of learning; Figure 1 shows how self-evaluation contributes to learning. When students evaluate their performance positively, self-evaluation encourages them to set higher goals (1) and commit more personal resources or effort (2) to them. The combination of goals (1) and effort (2) equals achievement (3). A student's achievement results in self-judgment (4), such as the student contemplating the question, "Were my goals met?" The result of the self-judgment is self-reaction (5), or the student responding to the judgment with the question, "How do I feel about that?"
Goals, effort, achievement, self-judgment, and self-reaction all can combine to impact self-confidence (6) in a positive way. Self-evaluation is really the combination of the self-judgment and self-reaction components of the model, and if we can teach students to do this better we can contribute to an upward cycle of better learning.
Figure 1: How Self-Evaluation Contributes to Learning
But it is not hard to see how a downward cycle could develop if there were a significant gap between students' goals and those of the classroom, or if students perceived themselves to be unsuccessful performers. In the downward cycle, low self-evaluations lead students to develop negative orientations toward learning, select unrealistic personal goals, adopt ineffective learning strategies, exert low effort, and make excuses for poor performance.
The problem is that without lecturer involvement in student self-evaluation, lecturers have no direct knowledge about whether individual students are on an upward or downward path. The choice for lecturers is not whether students evaluate their own work (they will regardless of lecturer input) but whether lecturers will attempt to teach them how to do so effectively. The goals of our ongoing research and the practical model and ideas that follow are aimed at assisting lecturers with this important work.
Methodology, Model, Analysis, Design and Implementation
The sample for the study consisted of 9 course assessment summaries for Diploma III and Diploma IV Telecommunication Engineering at POLBAN over fiscal year 2009-2010 year-to-date. Individual courses average 25-30 students. Courses with single enrollments, such as independent studies, were excluded from the study.
The responses taken for statistical analysis were derived from a thirty-question student assessment employed by the Telecommunication Study Program, which uses a five-point Likert scale. The thirty questions were clustered into four categories: student self-assessment of learning; assessment of teaching; assessment of course content; and assessment of web-based technology.
Table 1. Example of the student self-assessment of learning cluster, combining seven different assessment questions to produce a mean score for student self-assessment of learning.
This study focused on the results for the category of student self-assessment of learning, which was derived from seven of the thirty student assessment questions. An example of this category is provided in Table 1. For the analysis, the focus was placed on Question 4 in Table 1: "I gained significant knowledge about this subject." The purpose was to minimize the clustering effect that potentially weighted certain questions to favour onsite or hybrid learning, for example Question 2 in Table 1: "My oral communication skills have improved." Oral communication skills are admittedly diminished in the traditional online learning environment, especially where the majority of course activities rely on asynchronous communications.
Course summary data of student self-assessment was sorted by academic level (Diploma III and Diploma IV), and then by the course delivery modalities of Online, Hybrid, and Onsite. These modalities were in some instances modified, for example onsite courses with online supplements or online courses with synchronous audio/visual components; these variants formed the final, fourth level of sorting and produced eight major course delivery modalities in total. Descriptive statistics were used to determine trends in both Diploma III and Diploma IV courses across the eight major course delivery modalities over the 2009-2010 fiscal year-to-date.
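As a concrete illustration, the sorting and descriptive statistics described above can be sketched as follows; the records, the group keys, and the `describe` helper are hypothetical stand-ins, not the actual POSAS data or analysis code:

```python
from statistics import mean, stdev

# Hypothetical course-summary records: (academic level, delivery modality,
# mean self-assessment score on the five-point Likert scale).
summaries = [
    ("Diploma III", "Onsite", 4.2), ("Diploma III", "Online", 3.6),
    ("Diploma III", "Hybrid", 3.9), ("Diploma IV", "Onsite", 4.4),
    ("Diploma IV", "Online", 3.5), ("Diploma IV", "Hybrid", 4.0),
    ("Diploma III", "Online", 3.8), ("Diploma IV", "Onsite", 4.1),
    ("Diploma IV", "Hybrid", 4.2),
]

def describe(records):
    """Group scores by (level, modality) and report simple descriptive stats."""
    groups = {}
    for level, modality, score in records:
        groups.setdefault((level, modality), []).append(score)
    return {
        key: {"n": len(scores),
              "mean": round(mean(scores), 2),
              "sd": round(stdev(scores), 2) if len(scores) > 1 else 0.0}
        for key, scores in groups.items()
    }

stats = describe(summaries)
```

A table of per-group means and standard deviations of this kind is the usual starting point for spotting trends across delivery modalities.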
2.2. General Description of the POSAS Self-Assessment System
The POLBAN Online Self-Assessment System (POSAS) curriculum, which included units on resistance, current and voltage, was published in a second edition in May 2010. During the final stages of its development, curriculum developers at the Telecommunication Study Program turned their attention to creating assessment items that could form the basis of a student self-assessment to accompany the revised curriculum unit. A Self-Assessment System was developed in which students received feedback while they practiced on problems and were also able to monitor their own progress through short summative assessments.
Figure 2: Broad design of the POLBAN Self-Assessment System.
Steps of the broad design in Figure 2:
Students take 10 pre-test problems, one from each of the 10 problem types.
Students work through the practice problems, scaffolded by hints that model good problem solving at key steps.
Students complete practice problems until they feel confident enough to tackle the "Quick Check".
Quick Check problems are unscaffolded. Student responses from these are sent to the scoring engine. Students get feedback on overall performance.
As shown in Figure 2, the self-assessment has three parts: a short pretest, the practice problems, and the quick check. Students first take a pretest so the
system can determine their initial ability level. There are ten different levels of problems in the curriculum and students answer one problem from each of the ten levels in the pretest. When working through practice problems, students gain experience with problems that are similar to those that they recently encountered in the classroom with their lecturer. In this phase of the system, students get support from the computer system in the form of feedback on their responses and hints to aid their progress. Students work on problems at one level until they feel confident to go on, at which point they take a quick check. In the quick check, students complete a set of up to five problems that include three from the current level and one each from the previous two levels. Based on the score from the quick check, the system makes decisions about whether the student needs more practice on problems at that level or is ready to proceed to the next level. 
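The Quick Check composition and the advance-or-practice decision described above can be sketched roughly as follows; the 0.8 pass threshold and the function names are illustrative assumptions, since the paper does not state the actual decision rule:

```python
def quick_check_levels(current_level):
    """Levels of the up-to-five Quick Check problems: three at the current
    level plus one from each of the (up to) two previous levels."""
    levels = [current_level] * 3
    levels += [lvl for lvl in (current_level - 1, current_level - 2) if lvl >= 1]
    return levels

def next_level(current_level, num_correct, num_items, pass_ratio=0.8):
    """Advance when the Quick Check score clears a pass threshold.
    The 0.8 threshold is an assumed value, not taken from POSAS."""
    if num_correct / num_items >= pass_ratio and current_level < 10:
        return current_level + 1
    return current_level  # more practice needed at this level
```

At level one or two the set shrinks below five problems because there are fewer than two previous levels to draw from.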
2.3. Use of the POSAS System in the Design of the Assessment Program
As stated above, the Self-Assessment System has ten levels of problems representing successively more difficult problem solving contexts. Designers wanted students to see different problems at each level throughout the practice sessions and in the Quick Check. To accomplish this, the system was designed to use item shells in which the main body of the question remained constant, but the numeric values in the problems were randomly selected from an appropriate (to the specific problem) range of values. The item shells for each problem level were developed by using the POSAS Design System, and stored as task specifications within the POSAS database.
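A minimal sketch of an item shell with randomized numeric values might look like the following; the shell fields, the Ohm's-law prompt, and the value pairs are hypothetical, chosen only to illustrate the constant-text-plus-randomized-values idea:

```python
import random

# Hypothetical item shell: constant question text with placeholders, plus
# paired (resistance, current) values chosen so that the resulting voltage
# is always realistic (values are stored as pairs, as described above).
SHELL = {
    "prompt": "A {r} ohm resistor carries a current of {i} A. What is the voltage?",
    "value_pairs": [(10, 0.5), (22, 1.0), (47, 0.2), (100, 0.1)],
}

def render_item(shell, rng=random):
    """Fill the constant shell text with one randomly selected value pair."""
    r, i = rng.choice(shell["value_pairs"])
    return {
        "text": shell["prompt"].format(r=r, i=i),
        "answer_voltage": r * i,  # key used later when scoring the response
    }
```

Each call produces a fresh item from the same shell, which is what lets students see different problems at the same level throughout practice and in the Quick Check.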
In the current project, development of the task specifications began with an analysis of what was to be measured by the assessment overall. Designers discussed trade-offs between detailed individual measures on each of the cognitive components involved and an aggregated measure of knowledge. The individual measures we considered included knowledge of resistance concepts, knowledge of current concepts, knowledge of voltage concepts, and mathematical ability. Ultimately, the decision was made to produce one content-oriented measure and one mathematical ability measure from the assessment. This measurement objective of the assessment, termed a Student Model in the POSAS Design System, included two student model variables, one for knowledge about resistance, current and voltage (RCV) and one for mathematical ability (Math). The Student Model "RCV + Math" became the basis for all of the task specifications; the tasks that students eventually interact with were intended to elicit evidence that would indicate levels of knowledge on RCV and Math.
Figure 3: The Overall architecture of the POSAS Self- Assessment System and the POSAS Design System 
The next step in designing the task specifications was to layout the format of the problems students would encounter. Although specific problem levels had some variation in the number of questions that were asked in the problem, the general layout of the problems was the same. Each problem begins with a prompt and a graphical representation of the problem situation. Students are first asked to select the appropriate equation for solving the problem from a list of equations. Then, they fill in the equation with numbers and units from the problem text. Next, they solve the equation and use that solution to respond to the question posed in the problem prompt. Each response is considered evidence of student knowledge, and as such needs to be associated with the appropriate measure (RCV or Math). Designers determined that solving the equation required primarily mathematical ability, while the other responses required primarily RCV knowledge.
The layout of stimulus materials and the type of individual responses (i.e., selection from a list, constructed response, etc.) became parts of the task specification. The association of responses to measures also became part of the task specification so that proficiency estimates could be computed correctly from the response data when the Self-Assessment System is interacting with students. In POSAS terms, a Measurement Model defines the way proficiency estimates are computed from the response data.
As mentioned above, the POSAS Design System is used to design task specifications, or item shells, as they are called in POSAS. It is not an authoring or assessment delivery system. So, once all of the task specifications were finalized and entered into the POSAS Design System, they were copied into the database of the POSAS Self-Assessment System. Then, when the POSAS Self-Assessment System needs to deliver an item to a student, it retrieves an item shell for the appropriate problem level from the local database, fills in the numeric parts with randomized values, and renders the item on the computer screen. It then gathers response data from student entries and, if needed, packages the student response data and the task specification data (i.e., the measurement model part) to send to the POSAS Scoring Engine. The POSAS Scoring Engine takes that data, computes proficiency estimates using multidimensional item response modeling, and returns proficiency estimates on each measure for the student.
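The proficiency estimation step can be illustrated with a deliberately simplified sketch. POSAS uses multidimensional item response modeling; the one-parameter (Rasch) grid-search estimate below, applied to a single variable, only conveys the general idea. The function names and the grid range are assumptions:

```python
import math

def rasch_p(theta, difficulty):
    """Probability of a correct response under the Rasch (1PL) model."""
    return 1.0 / (1.0 + math.exp(-(theta - difficulty)))

def estimate_ability(responses):
    """Grid-search maximum-likelihood ability estimate for one variable.
    `responses` is a list of (item difficulty, answered correctly) pairs."""
    best_theta, best_ll = 0.0, -math.inf
    for step in range(-40, 41):          # theta from -4.0 to 4.0 in 0.1 steps
        theta = step / 10.0
        ll = sum(
            math.log(rasch_p(theta, b)) if correct
            else math.log(1.0 - rasch_p(theta, b))
            for b, correct in responses
        )
        if ll > best_ll:
            best_theta, best_ll = theta, ll
    return best_theta
```

In the real system one such estimate would be maintained per student model variable (RCV and Math), with the two dimensions estimated jointly rather than independently as here.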
Table 2. Example of 9 courses database in POSAS, combining 9 different course assessments.
Table 3. Example of students database in POSAS
Table 4. Example of students database in POSAS
Figure 4 is an excerpt from the task specification for a level eight problem. Note that the Student Model Summary indicates that items generated from this shell will produce evidence of students' knowledge of current and of their mathematical ability. The Student Model, "POSAS RCV + Math," and a general summary of the Measurement Model is provided, along with an Evaluation Summary and a summary of Work Products. Details of the item layout are contained in an Activity object that is associated with the task specification.
Every task specification contains at least one activity. In the POSAS Self-Assessment System, we generated exactly one activity for each task specification. An excerpt of the activity for the Problem Level Eight task specification is shown in Figure 5. Here, the details of the measurement models, which are used by the Scoring Engine, are specified, as are the evaluation procedures, the work products, and the way the item is to be presented to the student. All of this information is encoded in a format that is accessible by a computerized assessment authoring and delivery system, such as the POSAS Self-Assessment System.
Figure 4: Part of the task specification from the POSAS Design System for a problem level eight item shell
An example of an item at problem level one as generated and rendered to a student is shown in Figure 5.
As can be seen in Figure 5, the shell for an item has several components. At the top is the statement of the problem. The text of the problem is constant across problems, but the values of the given information, such as the resistance and the current in problem eight, are randomly generated from a list of realistic values for each item. The values are stored as pairs to ensure that appropriate values result each time. The other component is the drop-down selection list of the possible equations to use to solve the problem.

Figure 5: Screen shot of a problem at level one.

In this prototype, there were three options in the list: voltage = resistance × current, current = voltage / resistance, and resistance = voltage / current. After selecting an equation from the drop-down menu, the student presses the "select equation" button, which then causes the equation to be represented in another part of the item shell, the white "work space" area. The equation representation in the workspace has input boxes for each of the equation values and for the calculated answer values. The student determines from the problem statement and graphic the values to be entered and types them into the workspace area along with the units. The student then makes the calculation pertinent to the equation, and enters that value and the units into the box for the result. Upon completion, the student clicks on the "I'm done" button to end the problem.
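The workspace-checking step implied above might be sketched as follows, assuming hypothetical equation names and a `check_workspace` helper; the actual POSAS evaluation procedures are encoded in the task specifications:

```python
# Hypothetical check of a completed workspace for the Ohm's-law item family.
# The equation names mirror the three drop-down options described above;
# the slot not used by the chosen equation is passed as None.
EQUATIONS = {
    "voltage = resistance x current": lambda r, i, v: r * i,
    "current = voltage / resistance": lambda r, i, v: v / r,
    "resistance = voltage / current": lambda r, i, v: v / i,
}

def check_workspace(equation, inputs, student_result, tolerance=1e-6):
    """Recompute the selected equation from the entered values and compare
    with the student's result. `inputs` is a (resistance, current, voltage)
    tuple; only the entries the chosen equation needs are read."""
    expected = EQUATIONS[equation](*inputs)
    return abs(expected - student_result) <= tolerance
```

A small numeric tolerance is the usual choice here, so that rounding in the student's hand calculation is not penalized as a wrong answer.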
Students are scored on two variables in each problem: one that measures their understanding of current (as a component of the larger resistance, current and voltage knowledge variable) and one that measures their mathematical ability. Scores are assigned for each of the relevant steps in the problem. For example, in problem type one shown in Figure 5, scores are assigned for selecting the correct equation, for entering the given values and units, and for calculating the result.
2.4. The Automated Help System
Figure 6 shows the architecture of the Self-Assessment System, including the automated help system. The first time a student logs in to the system, he or she takes the pretest. Throughout the pretest, the system records the student's responses to each question, tracking separately the responses that map to the content knowledge variable and to the mathematics variable. At the conclusion of the pretest, the system sends the scores on both variables to the POSAS Scoring Engine, which resides on an external server and is part of the POSAS system. The statistical models and evidence rules that specify how student responses to the tasks (problems) should be interpreted are contained in the measurement model components of the task specifications stored in the Self-Assessment System database.
Using this statistical model, the Scoring Engine calculates a proficiency estimate for each student model variable, which represents the student's current ability level. These values are returned to the Self-Assessment System. The Self-Assessment System maintains a history of these estimates each time they are generated for a student. The student's most recent ability estimate (e.g., from the pretest or from a prior Quick Check) is used to determine the type of hints that the student will receive in the next round of practice problems.
Figure 6: Flowchart of the POSAS Self-Assessment System logic.
After the pretest, the student begins the practice problems at level one. During the practice session, hints are available to the student. Generally, the student has control over when to have their work checked, as described shortly. One exception is at the start of the problem: if the student has not selected the correct equation to solve the problem and tries to add it to the workspace by clicking the "Select Equation" button, the system offers a hint. An example of how a hint is provided to a student who has selected the wrong equation is shown in Figure 7. If the student did not choose the correct equation, a hint appears in the field at the bottom right-hand section of the screen, next to the "check my work" button. Hints are text-based and are designed to point out to the learner the features of the problem he or she should pay attention to. The system gives more or less detailed help depending on the hint stream that has been selected based on the student's latest ability estimate. In addition to the written hint in the text box, some hints include other visual prompts.
Figure 7: Screen shot of a "Practice" problem showing the feedback the student received.
Hints are selected based on the ability estimate of the student. For each problem there are three "streams" of hints. The level 1 hint stream is for higher-ability learners who need the least help, level 2 is for medium-ability learners who need intermediate help, and level 3 is for lower-ability learners who need deeper, more concrete hints. The hint shown in Figure 7 is from the level 3 hint stream, the most helpful level. The written hint tells the student that he or she needs to select a different equation. Because it is a level 3 hint, the text specifies the equation to select and, for additional reinforcement, a visual hint is given in the form of the equation being shown in green alongside the "select equation" button. This initial hint repeats until the student selects the correct equation. The rationale for preventing students from proceeding after selecting the wrong equation is that allowing them to continue wastes instructional time. A student cannot be guided to a successful solution if he or she is applying the wrong equation, so feedback on any subsequent steps could imply that the student was on a correct solution path and thus confuse them.
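A rough sketch of how an ability estimate could select among the three hint streams follows; the cut points, function names, and hint texts are illustrative assumptions, not POSAS's actual values:

```python
def hint_stream(ability_estimate, low=-0.5, high=0.5):
    """Map the latest ability estimate to one of three hint streams.
    The cut points are assumed values: stream 1 = minimal hints,
    stream 2 = intermediate help, stream 3 = the most concrete help."""
    if ability_estimate >= high:
        return 1
    if ability_estimate >= low:
        return 2
    return 3

def equation_hint(stream, correct_equation):
    """Hypothetical hints for the equation-selection step. The level 3
    hint both names the correct equation and triggers the green visual cue."""
    if stream == 3:
        return {"text": f"Select the equation {correct_equation}.",
                "highlight_equation": True}
    if stream == 2:
        return {"text": "Think about which quantity the problem asks for.",
                "highlight_equation": False}
    return {"text": "Check your equation choice.", "highlight_equation": False}
```

Because the stream is chosen once per practice round from the most recent estimate, a student who improves on a Quick Check automatically receives less intrusive hints afterwards.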
POSAS has now been tested by Telecommunication Study Program students at POLBAN. Students have reported a high degree of satisfaction with the system. They found it useful and informative, and it gave them a clear view of what they still need to learn and where they stand relative to the requirements of various roles.
We have recognized, however, that many improvements are possible, and we have elected to enhance the system further along the following lines:
We are modifying POSAS to enable its use at many levels of detail, both at the detailed competency assessment level that it currently addresses, and at the level of competency categories (groups of related competencies), thereby allowing more rapid assessment of those areas recognized by the student as weaknesses.
We are improving its user interface so that it can be more easily understood and efficiently used.
We are completing the addition of learning links to Web-based educational sources to be referenced by students with identified weaknesses.
We will extend the tool to incorporate the Research and Development academic level, making self-assessment available to all POLBAN students.
We are in the process of offering the system to other programs for use as a tool by students.
The student also gets feedback after each Quick Check in the form of a report. The report is based upon the student's ability estimate on each variable and maps the student's progress against learning benchmarks on the dimensions being measured.
The results obtained from the sampled data suggest that student self-assessment of learning (a measure of overall satisfaction) depends on factors beyond the simple delivery of online course content. When measured against other modalities, online delivery of education ranks below more traditional methods of instruction, particularly onsite instruction. However, online courses that employ technologies that more closely mimic onsite, face-to-face interactions (for example, synchronous interactions via live video and audio feeds) tend to show higher levels of student satisfaction than entirely asynchronous online delivery. The implication for using online course delivery to maximize sustainable outcomes is clear: online courses should employ a mix of synchronous interaction opportunities to maximize student satisfaction.
We have successfully developed the POLBAN Online Self-Assessment System (POSAS) for POLBAN students. This system allows individuals to assess the congruence of their competencies with those required for specific roles. The system has been tested on students and is in the process of being enhanced and disseminated. Individuals interested in accessing the system personally or using it as a component of their programs are invited to contact the first author.