Customer Discussions > Education forum

State To Replace Mastery Test With Computerized, Personalized Test

Sort: Oldest first | Newest first
Showing 1-10 of 10 posts in this discussion
Initial post: Jul 15, 2012 7:30:56 AM PDT
I realize that I am jumping to conclusions, but I cannot think of a more ridiculous--although high-flown sounding--way of testing students.

State To Replace Mastery Test With Computerized, Personalized Test In Two Years
Goodbye To Bubbled-In Answers; New Tests Will Be Interactive, Use Audio, Video


The Hartford Courant July 13, 2012

If all goes as planned, the Connecticut Mastery Test, bubbled in with pencil on paper, will become a relic of the past in two years, replaced by a new, more customized online testing system.

By the 2014-2015 school year, state officials hope to retire the Connecticut Mastery Test, which is taken by third- through eighth-graders, and its companion, the Connecticut Academic Performance Test, which is taken in 10th grade.

In its place will be computerized tests, essentially personalized for each student. As a student progresses through a test, the questions presented will vary depending on whether the student got previous questions right or wrong.

Education Commissioner Stefan Pryor said that with the new tests - under development by the federally-funded Smarter Balanced Assessment Consortium - if a student gets "a question wrong, the system will provide other questions to try to get at what the underlying deficiency is."

Posted on Jul 15, 2012 12:35:44 PM PDT
Lisareads says:
Got to keep the education industry employed. Tests are of very little help when it comes to people living quality lives.

Posted on Jul 16, 2012 5:35:07 AM PDT
Last edited by the author on Jul 16, 2012 5:42:14 AM PDT
Actually, having taken this type of test, I think it could be great - it all depends on the implementation!

Say you're looking at children in 3rd grade, and the first 5 questions are based on reading a paragraph at a 1st grade reading level. Some students will answer all correctly, others will not. It is beneficial for the students who got all correct to move on to a paragraph at a 2nd grade level, while those that got 2 or more wrong might be given another paragraph at the same level. As the test goes on, those who are above grade level progress to questions that become a challenge. Those who are behind never see those questions, because they are of little value. If they have difficulty with 2-3 paragraphs at that level, why subject them to the higher-level paragraph (and even run the risk of inaccurate results due to lucky guessing at the higher level)? If they do well on a second paragraph at level 1, their next paragraph moves to level 2.

Current state tests are useless for children at both ends of the extreme - they do serve the purpose of categorizing whether children are "at grade level" or not, but not much else (no need to go into the argument about setting grade level). When my child has a perfect score, I know she is ahead of her grade, but I have no idea how far. If an 8th grader scores a zero, we have no idea how far behind she might be, because the questions didn't start at a low enough level (we can argue that she didn't belong in the 8th grade, but that's another topic). If we're going to spend the money on these tests, it would be nice if we could get some useful data.

I see a few other advantages as well - poor reading won't skew other scores as much, and it should be harder for teachers to influence results if the children are not all answering the same questions at the same time.
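The leveling rule described above - advance after a clean paragraph, hold the level after 2 or more misses - can be sketched in a few lines. This is just my own toy illustration; the function name, thresholds, and level cap are assumptions, not anything from the actual Smarter Balanced design.

```python
def next_level(level, wrong_answers, max_level=8):
    """Pick the reading level for a student's next paragraph.

    Hypothetical sketch of the rule described above: 2+ wrong answers on
    a paragraph means another paragraph at the same level; otherwise the
    student advances one level (capped at max_level).
    """
    if wrong_answers >= 2:
        return level                      # stay here, try another paragraph
    return min(level + 1, max_level)      # advance toward a challenge

# A 3rd-grader who aces a 1st-grade paragraph moves on to level 2;
# one who misses 2 of the 5 questions gets another level-1 paragraph.
```

The point of the sketch is that strong students climb quickly to challenging material while struggling students are never shown paragraphs that would only produce lucky-guess noise.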

In reply to an earlier post on Jul 16, 2012 6:12:09 AM PDT
I was referred to the URL

where this type of test is discussed.

I have seen some of the PowerPoint presentations, and it may be that "adaptive testing" is better than the current testing methods. However, in my opinion there is no substitute for giving tests where students must show all their work, which is then actually read by a person. For 25.5 years I taught mathematics at a small college, made up all my quizzes and tests, and did all the grading. It was truly painful to see all the rubbish that many of my students wrote--rubbish that they had been taught in preparation for taking the so-called mastery tests.

In reply to an earlier post on Jul 23, 2012 5:22:24 AM PDT

Tests Aren't The Problem - And New One Is No Solution


The Hartford Courant July 20, 2012

State Education Commissioner Stefan Pryor said that a new standardized test given on computers will adjust its questions according to a student's answers. If a student gets a question incorrect, "the system will provide other questions to try to get at what the underlying deficiency is," he said.


This computer-based test, which replaces the Connecticut Mastery Test, will narrow down what lies behind a student's incorrect answer. As a former elementary school teacher with years of mastery test experience, I question this new test's ability to determine whether children had breakfast, are proficient in English or are familiar with standardized testing and computer use, among the myriad factors that affect scores or create "deficiencies."

New is not necessarily better. Rather than ask whether a standardized test score accurately reflects what students know or are able to do, or how standardized tests may reflect students' access to opportunity rather than their achievement, our answer appears to be simply creating a "smarter" test.

Under this new computer-adaptive testing, students who answer a question correctly will get a more difficult question. When an answer is incorrect, the computer reduces the degree of difficulty. Eventually, you are able to determine a person's ability (imagine a see-saw that leans to one side or the other until it is balanced). The issue is, however, that you start with an assumption that the questions are valid.
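The see-saw analogy above can be illustrated as a simple staircase procedure: raise the difficulty after a correct answer, lower it after a wrong one, and shrink the step size so the running difficulty settles near the student's ability. This is a toy model of my own, not the consortium's actual scoring method (real adaptive tests use item response theory), and every name and number in it is an assumption.

```python
def estimate_ability(answers_correctly, start=5.0, step=1.0,
                     rounds=20, lo=0.0, hi=10.0):
    """Toy see-saw: nudge difficulty up/down until it balances.

    answers_correctly(difficulty) -> bool simulates a student's response
    to one question at that difficulty. The returned value is where the
    see-saw settles, a rough stand-in for the student's ability.
    """
    difficulty = start
    for _ in range(rounds):
        if answers_correctly(difficulty):
            difficulty = min(hi, difficulty + step)   # correct: harder question
        else:
            difficulty = max(lo, difficulty - step)   # wrong: easier question
        step = max(0.1, step * 0.8)                   # shrink steps to settle
    return difficulty

# Simulated student who handles anything at or below difficulty 6.3:
student = lambda d: d <= 6.3
```

Note that the sketch only converges if the simulated answers are a clean function of difficulty; the author's objection is precisely that real answers are not - breakfast, language, and test familiarity all move the see-saw for reasons the procedure cannot distinguish.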

A sample question reported by The Courant is a perfect example of an imperfect question. The first sentence of the question is "You will learn about young people who, because of their actions, are considered to be wonders." I asked my husband to read the question. He said, "Wonders? What are wonders?" Exactly.

A CPA with two master's degrees was immediately confused about the vocabulary and stopped reading the directions. At the bottom of the question, it tells students to look up wonders in a dictionary and then define it in their own words. As a teacher, I know that many students will try to answer the question without reading all of the directions. In addition, there are eight steps a student needs to complete for this question, after defining a "wonder."

The convoluted language, difficult directions and multiple steps all confuse students rather than prompt student thought. The vocabulary in the sample was difficult enough for my husband, but factor in cultural and socioeconomic differences, as well as students with limited English, and the tests limit access to achievement.

Compounding the overall issue of test complexity is the added pressure of the timed format of standardized tests. The ticking clock may cause students to rush through directions, leave work unfinished or give less in-depth responses. These responses do not accurately show what students know or can do, but only what they can do in 45 minutes.

Also, some types of questions, such as those requiring essays or the new performance tasks, need to be hand-scored. Differences among scorers, their biases or fatigue could change a student's score without reflecting the student's ability. If students get a question wrong (or lose points), will the computer attribute it to time management, scorer bias, vocabulary, reading fluency, unfamiliarity with technology or attention issues? How can you determine students' ability or challenges simply from the way they answer a question?

As we move to this new testing, there is an additional unmentioned elephant in the room. This system requires that all students complete at least a portion of the tests on a computer. Schools will need to be equipped. When school systems such as New Britain are cutting budgets (and apparently 50 teaching positions), where will they find the money to not only provide the technology, but the time and the instructors necessary to prepare students?

Connecticut has the largest opportunity gap in the country with regard to education. Implementing new standardized tests that not only have similar issues to the old tests, but also depend on technology does nothing to reverse this trend.

Is testing the problem? No, but testing that narrows curriculum and reduces content to the point that we determine whether or not a student is "achieving" by one variable is. To truly help our students achieve, we need to move away from our current mindset of "test pedagogy" in schools and think about how to create environments that draw from students' strengths.

Jennifer Dolan of Storrs is a doctoral candidate in the Department of Curriculum & Instruction at the NEAG School of Education, University of Connecticut. She previously taught fifth grade at the Charter Oak International Academy in West Hartford.

Posted on Jul 23, 2012 9:28:12 AM PDT
Lisareads says:
"the system will provide other questions to try to get at what the underlying deficiency is," he said.
They have yet to make a computer have the abilities of humans. Are you training a human to be like a computer?
This story is propaganda to support the industry, just marketing for unneeded toys that do not add to the quality of life. The basis of creativity and inspiration is making mistakes and asking the right new questions. Not answering computer questions.

In reply to an earlier post on Jul 25, 2012 11:01:23 AM PDT
Last edited by the author on Jul 25, 2012 11:02:18 AM PDT

"They have yet to make a computer have the abilities of humans."------sentence 1 of paragraph 2, from the comment of "Lisareads" on Jul 23, 2012 9:28:12 AM PDT regarding the discussion topic
"State To Replace Mastery Test With Computerized, Personalized Test"




(1) That doesn't make sense for you to argue, "Lisareads".

You have argued on multiple occasions in this topic that Bruce Bain is a "bot".

Who are you trying to kid here?

You are arguing the issue both ways at once, thus violating the Aristotelian Law of Non-Contradiction.


In reply to an earlier post on Aug 8, 2012 5:44:55 AM PDT
Hot Hands says:
Tests are of very little help when it comes to people living quality lives...

Maybe 'tests' aren't, but try to get a job without an education... McDonald's ain't paying the mortgage!

Posted on Aug 8, 2012 5:46:18 AM PDT
Hot Hands says:
I LOVE the fact that it will try to determine what the child's deficiency is, a new dimension in understanding, but sadly do schools have the time per student to really use and address the data?

In reply to an earlier post on Aug 8, 2012 1:08:30 PM PDT
Last edited by the author on Aug 8, 2012 1:15:51 PM PDT
Lisareads says:
The idea is to support yourself, not get a job. Too many people sign up to be slaves rather than starting their own businesses. We should not be teaching to a job; we need to teach to success. The Skinner/Pavlov conditioning sets children up to consume rather than explore the morals and ethics of their activities, actions, inspirations, and passions. Responsibility should be the priority, not jobs.