1. #1
    Forum Member

    Join Date
    Feb 2005
    Posts
    7

    Default Narrative Ranking - How is it done?

    Hi -

    I am wondering if anyone has any insight in how the narratives are evaluated.

    Do the evaluators use a rubric to rank the narratives?

    Is the rubric based on the questions in the tutorial?

    Would it help if I wrote the narrative in such a way that the answers to the questions in the tutorial stuck out and were easy to recognize?

    I do not want to repeat myself in the narrative, nor do I want to state the obvious, but if the evaluation is by rubric and the evaluators follow the rubric, then it makes sense to write closely to the questions.

    Thank you for any insight.

    Sean Kelleher
    Quaker Springs FD

  2. #2
    MembersZone Subscriber

    Join Date
    Sep 2004
    Location
    Linwood, NC
    Posts
    469

    Default

    The scoring is initially done by computer, based on your community classification & department profile (answers to the questions in the application). Then the score is issued. If you have a competitive score, your narrative is then read and scored/evaluated by a 'peer panel'. Three people will read your narrative independently and score it based on the 4 areas the narrative is broken down into - Project Description, Financial Need, Cost/Benefit and Operational Outcome.

    The scores are then averaged, and a new total score is issued. This is when they start with the lowest-scoring applications and begin issuing 'Dear John' rejection letters. If you have a mid- to high-scoring application (entire app + narrative), you have a good chance of being funded. The highest scores lead off the early rounds of awards.

    Check out their guidelines and make sure you can properly justify what you are asking for. Also, you may want to search some of the old forum threads on here for samples of the Dear John letter. Some gave a little feedback - for example: not enough info, etc. You may also want to check out the various links in the threads for sample narratives. It helps to read those that were funded and those that weren't. Good luck!

  3. #3
    Forum Member

    Join Date
    Feb 2005
    Posts
    7

    Default More Detail

    Dixie Chick -

    Thank you for your reply and the outline of the process. Perhaps I was not clear about what I am looking for.

    Dixie Chick wrote
    "If you have a competitive score, your narrative is then read and scored/evaluated by a 'peer panel'. Three people will read your narrative, independently and score it based on the 4 areas the narrative is broken down to - Project Description, Financial Need, Cost/Benefit and Operational Outcome."

    What I am trying to figure out is more detail on how they score the 4 areas. Do they use a rubric for each one of the four areas? Is the rubric based on questions in the tutorial? I guess what I am asking has to do with grantsmanship. I have a draft of my grant, and I have the four areas covered. Now I am reviewing and improving based on the questions in the tutorial and looking at other model grants. I feel that I have done a good job with the questions, but they do not stick out (the way that the 4 areas of the narrative do). I can make them stick out; however, my fear is being repetitive. If I knew how they scored/evaluated, then I could do a better job of meeting the reviewers' needs.

    (Sorry to get into this much detail - I write grants professionally - but this is for my volunteer fire department, so I am volunteering and want to do well - my competitive spirit. I most likely have too much of a background in assessment methods, but it makes me wonder and write to the assessment tool.)

    Dixie Chick, thank you again for writing your response and reviewing some very important information.

    Be safe,
    SK

  4. #4
    Forum Member
    Bones42's Avatar
    Join Date
    Mar 2001
    Location
    Pt. Beach, NJ
    Posts
    10,678

    Default

    Do they use a rubric for each one of the four areas?
    Basically, no. It's up to those 3 reviewers' gut feelings. If they like it a lot, they give a high score. If they don't, it gets a low score. And the 3 reviewers change every 2 or 3 weeks, I believe, so there is not always consistency throughout the program.

  5. #5
    MembersZone Subscriber
    ameryfd's Avatar
    Join Date
    Nov 2003
    Location
    Wisconsin
    Posts
    598

    Default

    Bones is right; however, I do believe that the peer reviewers are given criteria to follow, but in the end it is their gut feeling that produces the final score.

    The reviewers change every week.

    One thing that happens is that if there is a huge difference (I forget the exact point number) between the highest-scoring reviewer and the lowest-scoring reviewer, then they are asked to talk about the app and rescore to see if maybe someone overlooked something.

    At least that's what they said at the workshop. I know there are several former peer reviewers who peep at this forum; maybe they'll chime in. Otherwise, run a search on the forum, because I know in the past there have been other threads here where former reviewers have shared their thoughts.

  6. #6
    FH Mag/.com Contributor

    Join Date
    Feb 2002
    Location
    Cypress, TX
    Posts
    7,288

    Default

    You won't hear from the peer reviewers; they can't say.

    In a nutshell, they are given a set of criteria which includes the program's priorities, and they grade the narrative & project on how well they address the priorities. There is some personal interaction to it, so the view of the person reading it does come into play to an extent, but like amery said, any large differences are talked out amongst the reviewers before any final scoring is entered, and if they still disagree, a DHS "referee" gets involved.

    No, I wasn't a reviewer, and no, none of them spilled the beans to me; they have to keep the goods under wraps so they can't go out and suddenly become grant writers. Lots and lots of digging, along with some edjumacated guesses at the process. So far it's worked.

  7. #7
    MembersZone Subscriber

    Join Date
    Aug 2002
    Location
    Rising Sun, MD
    Posts
    168

    Default edjumacation

    Ahhhh... now it all makes sense!

  8. #8
    Forum Member

    Join Date
    Feb 2005
    Posts
    7

    Default Thank you for your comments

    I just want to say thank you for your comments.

    That is a standard way of evaluating items. I have seen it done with other federal grant programs (in education), and it is actually similar to the way that statewide school assessments are done - I am familiar with NY, NH, ME and GA, and the essay portions are graded in a similar way.

    It has been my impression that sharing the evaluation rubric is actually a positive thing, because it lets the writer create a better document for the reviewer. But of course that is up to the Feds to do, so that it is fair to all.

    Oh well, thank you for all your comments.

    SK
