WK 4 DISHWRK549: SEE ATTACHED | Human Resource Management
SEE ATTACHED
GradDiscussionRubric.pdf
TCOB Graduate Studies Discussion Rubric

Performance levels:
• No Submission: 0 points
• Novice (criterion is missing or not in evidence): 1-13 points
• Basic (works towards meeting expectations; performance needs improvement): 14-16 points
• Proficient (meets expectations; performance is satisfactory): 17-18 points
• Exemplary (exceeds expectations; performance is outstanding): 19-20 points

Support of Week's Reading
• No Student Submission (0 points)
• Novice (1-13 points): Does not refer to the readings to support postings
• Basic (14-16 points): Alludes to the readings to support postings
• Proficient (17-18 points): Refers to examples from the readings to support postings
• Exemplary (19-20 points): Provides concrete examples from the readings to support postings; integrates prior readings in postings

Observations
• No Student Submission (0 points)
• Novice (1-13 points): Does not integrate personal observations or knowledge; does not present new observations
• Basic (14-16 points): Integrates personal observations and knowledge in a cursory manner; does not present new observations
• Proficient (17-18 points): Integrates personal observations and knowledge in an accurate way; presents new observations
• Exemplary (19-20 points): Integrates personal observations and knowledge in an accurate and highly insightful way; presents new observations

Response to Classmates
• No Student Submission (0 points)
• Novice (1-13 points): Responds in a cursory manner to classmates’ postings
• Basic (14-16 points): Constructively responds to classmates’ postings
• Proficient (17-18 points): Constructively responds to classmates’ postings; offers insight that encourages other students to think critically about their own work
• Exemplary (19-20 points): Constructively responds to classmates’ postings; masterfully connects the material presented in classmates’ postings to their responses; encourages classmates to think critically about their own work

Organization, Word Choice, and Sentence Structure
• No Student Submission (0 points)
• Novice (1-13 points): Posts are disorganized and information is not presented in a logical sequence; word choice and sentence structure are not suitable
• Basic (14-16 points): Posts are somewhat disorganized, and information is not presented in a logical sequence; word choice and sentence structure are not suitable
• Proficient (17-18 points): Posts are organized, and information is presented in a logical sequence; word choice and sentence structure are suitable; there are a few errors; however, errors do not affect readability
• Exemplary (19-20 points): Posts are organized and information is presented in a logical sequence; word choice and sentence structure are suitable; no errors in the response

References
• No Student Submission (0 points)
• Novice (1-13 points): Includes no sources to support conclusions
• Basic (14-16 points): Includes 1 outside source to support and enrich the discussion; sources are not properly cited in APA format
• Proficient (17-18 points): Includes 2 or more outside sources to support and enrich the discussion; sources are properly cited in APA format and are properly integrated into the discussion response
• Exemplary (19-20 points): Includes 2 or more outside sources to support and enrich the discussion; sources are cited using APA format; style guidelines are masterfully integrated into the discussion response
HRA549_m4_Transcript.pdf
HRA549 Module 4 AVP Transcript
Slide 1
Title: Levels of Measurement: What They Really Show
Slide content:
• Nominal Scales - categorizing
– Team jerseys
– Departments, Gender
• Ordinal Scales - ranking
– Likert Scales (Strongly Agree / Agree / Disagree / Strongly Disagree)
– Almost any selection tool you can think of
• Interval Scales – ranking + consistency
– Thermometer
– Percentile scores
• Ratio Scales – ranking, consistency, + an absolute floor
– Objective, scientific measurement
– Firefighter strength and endurance test
Narrator: Distinguishing levels of measurement is important because measuring or expressing differences can happen in several different ways in the staffing process. Traditionally, measurement falls into one of four levels: a nominal, ordinal, interval, or ratio scale.
Nominal scales show difference, and that’s all. Think of numbers on sports jerseys: 1 is different from 5, which is different from 19, and so on. But the numbers do not show any particular order – 1 is not “better” than 5
or 19, just different. In staffing, HR professionals might use a nominal scale to create categories, such as
1 = sales reps, 2 = engineers, 3 = managers…or 1 = females and 2 = males.
Ordinal scales show order, but not magnitude. That means that when we measure on an ordinal scale,
we can rank from best to worst, but it is not clear how much difference exists between the rankings. For
example, a recruiter might rank candidates 1, 2, 3, 4, 5 from best to worst, but the actual difference
between candidates 1 and 2 might be very small, while the difference between candidates 2 and 3 might
be very large. Whenever you see a Likert-type scale (with choices like Strongly Agree – Agree –
Disagree – Strongly Disagree), it is an ordinal scale, because the space in between each choice is not
entirely clear. Most of the scales used to score candidates on tests or interviews are ordinal scales.
Interval scales show both order and magnitude. That means that we can see a clear ranking and equal
differences between any two places on the scale. An everyday example of an interval scale is a
thermometer. We can clearly see that 40 degrees Fahrenheit is colder than 50, and the same difference
would exist at any point on the scale. Percentile grades that job candidates might obtain on a cognitive
ability test are interval scales, because it is clear not only that someone who got a 90% did better than someone who got an 80%, but also exactly how much better that person did.
Ratio scales show order and magnitude and have an absolute lower threshold or zero point. A simple
example of a ratio scale is how much weight firefighter candidates might be able to carry up a flight of
stairs. One person might be able to carry 200 pounds, while another might only be able to carry 100
pounds. Understanding how much each candidate can carry relative to each other and relative to zero is
an objective and scientific way of comparing abilities and establishing BFOQs and job requirements. Can
you imagine a situation in which someone would be unable to carry any weight?
Slide 2
Title: Correlation: How (and How Strongly) are Variables Related?
Slide content:
• Denoted as r
• Range between 1.00 and -1.00
• Absolute value = strength
• Sign = direction of relationship
• Significance
– Denoted as p (usually p < .05)
– Confidence that the relationship observed is true and not a coincidence
• A .05 significance level means we are 95% sure that the relationship is not a random coincidence
Narrator: There are many statistics that can be used in the staffing process. One of the most important is
the correlation coefficient. A correlation shows the relationship between two concepts, usually a
predictor, such as a selection tool and an outcome, such as some aspect of job performance. The
correlation coefficient is denoted as a lower-case r. Correlations can range anywhere between 1.00 and -
1.00, including zero. A zero correlation means there is no relationship between the two concepts. A
correlation of 1.00 (or -1.00) means there is a perfect relationship between the two concepts; this is the strongest possible correlation. Correlations can be positive or negative, indicating the “direction” of the relationship
between the concepts. If both concepts rise or fall together, it is called a positive correlation. If one
concept increases while the other decreases, it is called a negative correlation. Here’s an example of a
positive correlation: as years of experience rises, scores on a job knowledge test rise. An example of a
negative correlation might be that as scores on an integrity test go up, incidents of theft go down.
One other important statistic to understand related to the selection process is statistical significance. This
is denoted as a lower-case p. Significance shows how confident we can be that a relationship like a correlation truly exists, versus merely appearing by random coincidence. The standard significance level used in social science and business research is the .05 level. This means that we can feel 95% confident that the relationship we see is true, versus only a 5%
chance that the relationship we see happened because of a fluke.
These two statistics are of paramount importance, particularly when HR professionals are trying to show
that scores on selection tools are strongly related to performance on the job.
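The transcript stops at the concepts, but a short, hypothetical calculation may help make r and p concrete. The sketch below uses Python with SciPy; the candidate scores and performance ratings are invented for illustration, and this is the same kind of calculation used in the criterion-related validation studies mentioned on Slide 4.

```python
# Minimal sketch: correlate a selection-tool score (predictor) with a job
# performance rating (criterion). All numbers are hypothetical.
from scipy.stats import pearsonr

test_scores = [62, 70, 75, 78, 84, 88, 91, 95]            # predictor: selection test
perf_ratings = [2.9, 3.1, 3.4, 3.3, 3.9, 4.1, 4.3, 4.6]   # criterion: performance rating

r, p = pearsonr(test_scores, perf_ratings)
print(f"r = {r:.2f}, p = {p:.4f}")

# A positive r close to 1.00 with p < .05 suggests the selection tool is strongly
# related to job performance and that the relationship is unlikely to be a fluke.
```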
Slide 3
Title: Reliability: Consistency or Freedom from Error
Slide Content:
Will multiple sources, methods, or evaluators come to similar conclusions about the same candidate?
• Inter-Rater Reliability
– Example: Multiple Interviewers or Panel Interviews
• Multi-Method Reliability
– Example: Tests, Interviews, Application Blanks, References
• Internal Consistency
– Example: Are similar items within any selection tool consistent?
• What can HR do?
– Training and careful evaluation and use of selection tools
Narrator: No discussion of measurement would be complete without a refresher on reliability and validity.
These are more theoretical concepts in measurement, but some of the most important to understand.
Reliability refers to consistency or freedom from error. There are many different types of reliability. In
selection, HR professionals should be concerned about inter-rater reliability (and a similar concept,
multi-method reliability) and internal consistency.
Inter-rater reliability refers to consistency between the people doing the evaluating. For example, if part
of your selection system is a panel interview, or multiple interviews with different people, are your
interviewers seeking and receiving information in a consistent manner? Are they rating the candidate
consistently? Raters and interviewers are human, so it is important to work with them to make sure they
are consistent across the candidates they meet, and consistent between themselves for each candidate.
The same thing is true of the individual selection tools or procedures that are used in the selection
process. Are interviews, tests, application blanks, references, background checks all providing consistent
information about the candidate? If a candidate claims on her application that she has 5 years of
experience with a particular type of software and her references back that up, but she cannot pass a
simple knowledge test about that software, either the test is unreliable or the candidate is. Either way,
additional steps need to be taken.
Finally, internal consistency refers to the individual items within a selection tool. The individual items
should “hang together” such that all the items are measuring the same KSAOs. As another example,
have you ever applied for a job and taken a test where you were asked the same or very similar
questions more than once? Usually these questions are in place to make sure you are answering
consistently and not just answering the way you think the employer wants you to.
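The transcript does not prescribe a particular statistic for internal consistency, but one commonly used measure is Cronbach's alpha. Below is a minimal Python sketch, assuming item-level responses are arranged with one row per candidate and one column per test item; the response data are made up for illustration.

```python
# Minimal sketch of Cronbach's alpha, a common internal-consistency statistic.
# Rows = candidates, columns = items on a selection test. Data are hypothetical.
import numpy as np

items = np.array([
    [4, 5, 4, 4],
    [3, 3, 4, 3],
    [5, 5, 5, 4],
    [2, 3, 2, 3],
    [4, 4, 5, 4],
], dtype=float)

k = items.shape[1]                          # number of items
item_vars = items.var(axis=0, ddof=1)       # variance of each item
total_var = items.sum(axis=1).var(ddof=1)   # variance of candidates' total scores

alpha = (k / (k - 1)) * (1 - item_vars.sum() / total_var)
print(f"Cronbach's alpha = {alpha:.2f}")    # values near 1.0 suggest the items "hang together"
```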
Slide 4
Title: Validity: Accurate Evaluation
Slide Content:
Construct: A concept or idea
• Do selection tools accurately reflect the “concept” of the job?
– Thought Exercise: How would you conceptualize the job of a theater manager? What
kinds of tools could accurately test that construct?
Content: The complete set of necessary skills and abilities
• Do selection tools test the whole job and nothing but the job?
– Would you give management candidates a personality test? Why/Why Not?
– Would you give management candidates a physical ability test? Why/Why Not?
Criterion-Related: Shows predictors are related to outcomes
• Do selection tools predict good outcomes (job performance)?
– This is where correlational statistics are used in validation studies
Face: Accuracy from a surface perspective
• Does it seem like it should be valid?
– Particularly important for end users and test-takers
Narrator: Validity refers to the accuracy of the selection tools used. Is the selection process truly and
effectively providing information that will ultimately help HR to hire the best employees and the
organization to remain competitive?
There are 4 kinds of validity that warrant discussion. A Construct is a concept or idea that is commonly
understood. A store, a business, human resources, these are all constructs. Construct validity, then,
refers to whether selection tools accurately reflect the concept of a particular job. Take a minute to try the
thought exercise on your own. How would you conceptualize the job of “firefighter”, and what tools might
you use to accurately evaluate your construct?
Content validity refers to what is sometimes called the content domain, the complete set of skills and
abilities necessary to describe a construct, in this case, the job of a firefighter. It can be difficult to
determine the entire content domain without missing something or including something that doesn’t really
belong.
Construct and Content validity are often confused with each other. As you were considering the
construct of a firefighter, you probably automatically considered the content domain of job skills and
abilities that a firefighter would need. Personality might or might not fit into that content domain, whereas
physical abilities like strength and endurance would likely be a top priority.
The term criterion-related validity comes from the idea that selection tools are intended to predict some
sort of criterion, or job outcome, such as better job performance, less absenteeism, more diligence, etc.
In the textbook “Staffing Organizations,” the authors often use the term “predictors” to refer to specific
selection tools. This is where HR can best show how we can impact ROI and the bottom line. Further, if
your organization does any kind of business with the federal government, the OFCCP requires that you
be able to show your selection tools predict job outcomes.
Finally, face validity is just what it sounds like: do selection tools appear valid and accurate? Will casual
observers perceive these tools to be job related and effective in hiring good people for this job? It sounds
kind of simple, but face validity may be the most important type of validity. If the test doesn’t look like it
does what it is supposed to, candidates who take the test or interview and local HR reps who administer
the test or interview may perceive the test as unfair.
Slide 5
Title: For a Few Stats More: Utility
Slide Content:
• Base Rate: What % are doing well?
Number of “Successful” Employees
Total Number of Employees
• Yield Ratio
• Selection Ratio: What % will be hired?
Number of Applicants Hired
Total Number of Applicants
– What if? 100 applicants for 5 positions
– What if? 52 applicants for 50 positions
– Acceptance Ratio
Narrator:
Utility is what matters. While reliability and validity are imperatives for an effective selection tool, utility IS the effectiveness itself: the measure of usefulness and ROI, and a good indication of how HR can affect the bottom line. Utility is typically shown in one of two ways: either more effective hiring, or lower costs and higher revenues based directly on HR’s staffing processes.
A couple of factors that influence utility are worth mentioning.
Base Rate refers to the ratio between current employees who are successful on a given job outcome
divided by the total number of employees. “Success” is relative, and might refer to job performance, such
as meeting sales quotas, or attendance, such as fewer days absent from the job. The whole idea behind
quality selection processes is to help raise base rates and show an increase in new employees who are
successful at what matters in the job.
Yield Ratio is a measure of how many candidates at each stage of the hiring process will yield how many
hires. Ideally, you would want to knock out a moderate percentage, maybe 30-70% of your candidates in
each step. For example, if 20 candidates sit for an employment test and 10 of them pass, that step in the
pipeline or selection process has a yield ratio of 50%. You might think it would make selection easier if
out of the 20 candidates, only 3 of them pass your employment test (this would be a yield ratio of 15%).
In fact, something might be wrong with the reliability or validity of the test – or with the earlier steps in your hiring process: if resume review demonstrated that all 20 candidates were qualified, why are so few of them passing your test? From a practical perspective, you might be losing potentially good employees.
From a legal perspective, you might be opening up the door to discrimination issues. Conversely, if 18
out of the 20 candidates pass the test (a yield ratio of 90%), what’s the point of spending time and money
giving the test at all?
Selection Ratio is the ratio of who is selected out of the total number of applicants (basically, percentage
hired). A low percentage means that a lot of people applied but only a few were hired; for example, if 5 people were hired out of 100 applicants, this would be a 5% selection ratio. Anyone who has ever been a
recruiter knows that this is a mixed blessing; it’s great to have so many interested applicants in the pool,
but you’ve got to make sure that your selection tools are really finely tuned to help you separate the very
best fitting, most qualified people from a talented group. A high ratio or high percentage is troubling for
two reasons. Imagine staffing for 50 positions and only having 52 applicants show interest (do the math;
this would be about a 96% selection ratio). First of all, you should be concerned about why so few
candidates are interested in the positions your organization offers, and second of all, selection tools will
not be much help to you since you’ll have to hire almost everyone who applies, unless you are willing to
leave those positions empty.
Finally, Acceptance Ratio is simply a measure of how many candidates accept when they are offered a
position. When you make job offers and candidates choose not to accept them, whether because they lost interest, decided they didn’t want to work for you, or found another job, you’ve wasted all the time, effort, and money you spent identifying your best candidates, only to lose them at the last minute. If your acceptance ratio is lower than you would like, you might consider whether you are offering competitive compensation, providing a strong enough Realistic Job Preview, or letting your time to hire stretch so long that good candidates decide not to choose you after you have chosen them.
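As a supplement to the narration, the short Python sketch below works through the four ratios. The yield ratio and selection ratio numbers come from the examples above; the base rate and acceptance ratio figures are hypothetical.

```python
# Minimal sketch of the utility-related ratios discussed above.

def ratio(part: int, whole: int) -> float:
    """Return part / whole expressed as a percentage."""
    return 100 * part / whole

# Base rate: successful employees / total employees (hypothetical numbers)
print(f"Base rate:        {ratio(60, 80):.1f}%")    # 60 of 80 employees meet quota

# Yield ratio: candidates passing a step / candidates entering that step
print(f"Yield ratio:      {ratio(10, 20):.1f}%")    # 10 of 20 pass the employment test

# Selection ratio: applicants hired / total applicants
print(f"Selection ratio:  {ratio(5, 100):.1f}%")    # 5 hires from 100 applicants
print(f"Selection ratio:  {ratio(50, 52):.1f}%")    # 50 hires from 52 applicants (~96%)

# Acceptance ratio: offers accepted / offers extended (hypothetical numbers)
print(f"Acceptance ratio: {ratio(4, 5):.1f}%")      # 4 of 5 offers accepted
```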
At the end of the day, no selection system is perfect. The key is to lose as few truly qualified and
potentially productive candidates as possible, and to hire as few unqualified and unproductive candidates
as possible in a process that is as bias-free and discrimination-free as possible. An ideal staffing system
would include high reliability and validity, a high base rate, a moderate yield ratio, a low selection ratio,
and a high acceptance ratio, meaning that your organization is enjoying effective selection tools,
competitive candidates who want to work for you, and ultimately, successful employees.
End of Presentation