Coaching for Success Webinar: Set, Measure, Report – Using Data to Improve Client Success


– [Ashley Winning] And welcome, everyone.
Thank you for being on the call. So yes, as Tina said, I’m
going to be talking about using data to improve client success, and… we’ve already been
through this part. Here we go. So the main purpose of this
presentation is to provide an overview of the outcomes
measurement process and to suggest ways of reporting
outcomes and using data for continuous improvement
in your organization. Because this is a webinar and
I’m not sure who’s on the call and levels of experience
with outcomes measurements, I’ve tried to strike a balance between kind of explaining some terms
and assuming some knowledge, so please feel free to
stop me if there’s anything you’d like me to go into more detail on. And as Tina mentioned,
we’ll also have time for questions at the end. So, oops, I almost pressed my phone to advance to the next slide. (chuckling) Okay, there we go. So, outcomes measurement. Outcomes measurement is a systematic way to assess the extent to which a program achieved its intended results. Oh, I should mention that
one of my favorite things about PowerPoint is the use of cartoons. So I’ve scattered them
throughout, and I think that this one does a
good job of explaining why it’s important to
think about outcomes. So every organization hopes
to deliver quality services and hopes that they are
having a positive impact. And outcomes measurement,
or in government agencies and businesses sometimes it’s called performance measurement,
that will help you understand whether you are in fact delivering the quality services
that you’re hoping to. So with the information that you collect you can determine maybe
which activities to continue and build upon, which might need to change in order to improve the
effectiveness of your program. So it’s basically answering the question: What has changed in the lives
of–it might be individuals, families, or even organizations, or communities–as a result of the program or the services or the
activities that you’ve performed? Has the program made a difference? And how are the lives
of participants better, or are they better, as a result? So the surgery example here
is, you might be thinking, “Okay, I’m doing this great
work, it’s really high quality,” but if the patient’s dying at the end, maybe you want to be
changing what you’re doing. And of course you’ll always have cases where you are doing it exactly right and you’re not going to have the outcomes that you’re hoping for. But if you are collecting over time; if you find all your patients are dying that’s clearly an issue. And so they’re important for, as I said, measuring that effectiveness
of an intervention. But collecting data and measuring outcomes also serve other purposes. For example, proving your value to funders and getting continued
funding to actually sustain and maintain your programs. It will help identify effective practices and not so effective practices. It can also be just a useful procedure to go through to gain clarity and consensus around the
purpose of your programs. And most importantly, or
maybe not most importantly, but importantly, it’s also used for continuous ongoing learning. So the ultimate purpose of an
evaluation should be focused on the continuous learning
and developing practices that move the organization
toward greater effectiveness. So you can think of a successful outcomes measurement system as a feedback loop, where performance
measurement leads to learning and then subsequent
actions to change programs and improve performance. So for this presentation I’m
going to kind of follow along with the title of “Set,
Measure, Report” to align with these common outcome
measurement process steps. So, (clearing throat) excuse me, the first step
being identifying the outcome. So what is it that you
want to be capturing? The performance indicators that
go along with those outcomes and setting targets or
comparison benchmarks. Then I'll talk very briefly, because this could be a whole other webinar, about implementing data collection and analysis plans, the actual doing of the measurement. And then finally, communicating
and reflecting on results. And then this is something
you reflect, learn and improve, and repeat; you go back to maybe
identifying new outcomes and you can continue the cycle. So for set, for identifying outcomes, performance indicators and targets. This comic here saying, “Get
all the information you can, we’ll think of a use for
it later,” is an example of how not to go about data collection and choose your measures. Ideally, you begin with a small number of clear simple measures and you’re beginning with the end in mind. So as opposed to just get
everything and then figure out, “Oh, I’ve got this and
maybe I can look at this, and I'll look at this," you say, "What do I want to know?" Starting at the end: What is it we're hoping to achieve? And working backwards from that: What do we then need to gather and collect to show whether we're achieving it or not? So a good way to do that is a logic model. And the main kind of
two steps of this phase, or the two objectives of this phase: The first is establishing
a shared understanding of what the program or project is and how it’s supposed to work. And doing a logic model
can help in that process. And then creating a set of
measures that correspond with that logic model, which can be used to assess the accomplishments of staff and project priorities. So here's, very simply, and I'm sure
many of you actually know this, but a logic model’s just
a systematic visual way to present the relationships
among the resources that you have to operate
the program, the activities that you plan, and then the changes or results you hope to achieve from it. So basically it’s the
picture of how the program is intended to work, the
logic behind the program. The input section we have:
What you’re investing, so the ingredients you put
in to operate the program. It might be the funding that you have. Maybe it's the number of
staff, mentors, or employees, different tools that you can funnel in. Then the activities are
what you’re actually doing. What are the main things that
the program does or provides? It could be mentoring. It could be a number of
things including mentoring, assisting with goal
setting, making referrals to specialists, that kind of thing. And then the outputs would
be the tangible products. So what are the products
or direct services resulting from the program activities? For example, number of people served, number of workshops held,
number of trainings held, number of referrals
made, things like that. And then outcomes usually get
separated into short-term, intermediate, and long-term
or impact outcomes. And that’s really what happens because of your inputs and activities. What impact will the
program have on the clients? And short-term ones are typically… usually in the short term
you’re thinking, we might be able to change knowledge
or skills or attitudes. So it might be increasing
financial literacy, enhancing job readiness, with intermediate being typically more like
behavioral or policy practice, so number of participants who are maintaining permanent housing or acquiring jobs or educational degrees. And then the impact or
the long-term outcome might be: we are in the long-term striving for economic self-sufficiency
for low-income populations. That’s an example. In the next slide I have just
a very simplified logic model that I pulled from a recent
Urban Institute paper that actually could be a useful resource if anyone is looking to
go through this process. And very simplified, but
applicable to the TANF programs of some client outcomes
regarding employment, earnings, and self-sufficiency with
readiness and experience and skills being the
intermediate outcomes. And then all that it takes
to… the work you have to do that go into that: the trainings
and do the assessments, the support services, et cetera. So not all logic models look the same, but they basically serve the same purpose, which is to graphically
capture the assumptions and the cause-and-effect relationships that drive the organization’s
work on a project. So once you have the sense–
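To make that structure concrete, here is a minimal sketch in Python with hypothetical entries, loosely based on the TANF example above; it's only an illustration of how the pieces relate, not any actual program's model.

```python
# A minimal sketch with hypothetical entries. Inputs feed activities,
# activities produce outputs, and outputs lead to short-term,
# intermediate, and long-term outcomes.
logic_model = {
    "inputs": ["funding", "staff and mentors", "assessment tools"],
    "activities": ["job-readiness trainings", "assessments", "support services"],
    "outputs": ["number of trainings held", "number of people served"],
    "outcomes": {
        "short_term": ["increased readiness, experience, and skills"],
        "intermediate": ["participants acquire jobs"],
        "long_term": ["economic self-sufficiency for low-income populations"],
    },
}
print(logic_model["outcomes"]["long_term"])
```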
So once you have the sense of, okay, this is what the program is intending to do, and this is how we intend to do it, this is how it fits together, then you can select what indicators you're going to use to basically answer the question: How will I know when changes have occurred, or whether we have achieved the outcomes that we wanted? Typically, as the previous logic models show, outcomes are usually too broad to allow you to just collect data on the outcome directly. So you decide: Well, how will you make your intended outcomes measurable? And this cartoon here–so if you're trying to achieve an improvement
in learning outcomes, this kid’s saying, “Well,
grades aren’t the only way to measure outcomes. That might be one way.” You might say: Have math
scores gone up in the school? Or are individuals
increasing their math scores? But you might also have other indicators that learning is
occurring, and it might be… an indicator for learning
outcomes could be percent of students who graduate. Or it might even be an
observational measure done in class, or their motivation,
there might be other ways to indicate that you’ve
had an impact on their education. So the indicators, they
indicate an outcome rather than being something
that would predict the outcome or occur because of the outcome. And it’s best when they can be specific, and they need to be
something that’s observable. So that means you can either
hear it, you can see it, you can count it, you can
report it, or some way to enumerate it using some
kind of data collection method. Some outcomes and performance indicators are much more straightforward about– oh, income, I have some
ways I can capture that– whereas if you’re thinking of, I’m interested in someone’s sense
of agency…you know, how do we… you might not just have one measure, one number that tells you that. You might have to think
about that in another way. And typically– I mean,
this is just a suggestion– having one to three
indicators per outcome is a nice place to start. Some that are larger and more complicated, you might have a lot more. And then just briefly,
some other considerations when you’re thinking about your
outcomes and your indicators… is thinking about: Who
are we measuring this for? So who’s that target population? And that can differ
for different outcomes. So whose performance or
outcomes are being measured? Is it the program itself? We want to know overall in the program how’s this program doing? Is it particular staff members or mentors? How is their work going? Is it the clients or
participants themselves? What achievements have they made? How do they look at the end of the program compared to the beginning? And then related to that is what level are you looking at? Is it an individual
level, we’re interested in individual change? Or are you interested
in neighborhood change or office level, county
level, even state-level, when we're talking about TANF work? You also want to be thinking
about comparison groups. So when you’re interested in outcomes, are you interested in just those who received a particular service and seeing what happened to those people? Or it could be people who
are eligible for a service, and you're interested in whether they take it up or not. So, all candidates for a service. And it could also be
people who are currently receiving a service, so people who are currently
enrolled in a program. But you may instead, for some,
choose to look at outcomes only of those who have exited the program. So what happened to them at the exit? And in many cases you’re kind
of, you’ll have some outcomes, some measures that are looking at those receiving the service and those currently in the service, and others where it’s at exit. So these are just kind of
questions to think about when you are starting to think about your measures and your outcomes. And then setting targets and
benchmarks can also be helpful, both to be realistic about what outcomes you're attempting to achieve and for tracking to see if you're on track, doing the work you're hoping to do to meet those benchmarks and targets, and also for motivation. A target is just your desired level of achievement, and if you're setting one, it's important to establish what the baseline is for the indicators that you're planning to measure over time, so you can see how they move towards the target. And you may want to have comparisons. Comparison is another useful
way to measure performance, so it would be performance
against something else. It could be another period of time, so like previous fiscal year. It could be another organization
that does similar work, how you compare there. Or just an established set of standards. If we know that this is sort
of the gold standard level for this, and we want
to bring the community that we serve up to that level. That’s another way to set
a benchmark or target. And it can be static or relative. So a fixed or static target
would be, for example, 75% of clients achieve X. Or it could be relative, which might be a ranking
in comparison to others. So, you know, this staff member is in the top five at the organization. Or it could be relative to other programs or to performance over time. So we want to see a 10% increase in this outcome from our previous year, so it's relative to the previous year.
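As a rough illustration of the difference, here is a minimal Python sketch with made-up numbers; the function names and thresholds are hypothetical.

```python
# Static target: a fixed level, e.g., 75% of clients achieve X.
def met_static_target(successes: int, clients: int, target_rate: float = 0.75) -> bool:
    return clients > 0 and successes / clients >= target_rate

# Relative target: e.g., a 10% increase over the previous year.
def met_relative_target(current: float, previous: float, required_increase: float = 0.10) -> bool:
    return previous > 0 and (current - previous) / previous >= required_increase

print(met_static_target(successes=80, clients=100))  # True: 80% >= 75%
print(met_relative_target(current=55, previous=50))  # True: a 10% increase
```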
And then you also think about who is setting the targets. So at EMPath, we have some programs that are funded by government and other agencies, and they set the target. They say, we want you to achieve this. In other cases we're setting
it with the staff based on a comparison to… based
on our understanding of the whole situation, including
where people are beginning and what things we have
seen in previous years, so what we think is possible to achieve. And we always try to kind
of stretch ourselves, but also be realistic. And the other thing when
thinking about targets is do you want to adjust for and how do you want to
adjust for conditions? So that could be either
demographic conditions. So might you have one
target for, let’s say, people who enter the program without a high school or a HiSET… a high school diploma or HiSET, and another expectation or
target for those who do. And might you also…
then thinking too about like just the broader context, what are the economic
conditions that year? So sometimes with our targets we’ll say, especially with housing ones, if we know, “Oh, there’s a housing
freeze in Boston right now,” or we know that we’re getting
a certain amount of vouchers, we will align our targets to what the conditions
are existing in that area. All right, so now on to actually creating and implementing a data collection plan and analyzing that data. So this table might not
be totally applicable, but you can use the heading part there on the top line in black. So you have your outcome, and
you’ll have a number of them, but you’ll have that first outcome. And then what is the indicator for that? So this example here of increased ability to raise funds for program services. The indicator might be
the number and percent of organizations who put new
fundraising practices in place. Then you’ll have… well, what data collection method would you use? And we’ll just talk briefly
about some of those options. When will it be collected? Is this something you collect at baseline and then at exit or every month? So frequency and timing of collection. Who is going to collect it? How, so what will they do to collect it? Then how will the data
collected be monitored? All of these are important things to consider as you set up your outcomes measurement system.
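For illustration, one row of such a plan could be captured like this minimal Python sketch; the method, timing, and monitoring entries are assumptions for the example, not from the slide.

```python
# One hypothetical row of a data collection plan, using the fundraising
# example from the slide; method, timing, and monitoring are illustrative.
plan_row = {
    "outcome": "Increased ability to raise funds for program services",
    "indicator": "Number and percent of organizations with new fundraising practices in place",
    "method": "Survey of participating organizations",
    "when": "At baseline and at program exit",
    "who_collects": "Program staff",
    "how_monitored": "Quarterly spot checks by the data team",
}
print(plan_row["indicator"])
```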
So I think this is actually my only slide under this section, and I shoved a lot in here, but I'm intending to just go through it briefly. Data collection methods: some measures just generate more consistent (that's the reliability piece) and accurate (that's the validity piece) information than others. And so you want to be thinking about that. In some cases you can use measures that have already been used, so you know, oh, this has been shown to be
accurate in a similar population. And you also want to think about
what resources are available, so that includes: Staff
availability and staff expertise, time, money, all those things. Because it's time-consuming. So, all the pieces that go into the ability to collect data. And also being culturally sensitive with the measures. So the method should fit
the language, the norms, and the values of the groups from whom you’re collecting data. Then you can think about other
sort of more specific types of data collection methods. So how are you going
to be collecting this? Is it surveys, interviews,
focus groups, observation, or reviewing existing
records that other people have collected or that
you’ve collected in the past that you’re just kind of going to go over, information that’s there? So those are all considerations
for thinking of how, the data collection methods. And then there are certain
designs to think about. Post-only measures would be
you’re just collecting data once at the end of an activity or service. This can work… this can be sufficient for
things like satisfaction with an occurrence, so if like
people have a doctor’s visit, they might just get a survey at the end or like a phone call with Comcast or a survey that says,
“How was that experience?” You only have the post-measure. People can use it, too, for
knowledge after training. But even in that instance, having a pre- and post-
measure would be better ’cause then you understand:
well, what do people know before and then what did they know after? And so then you can kind of
measure: what was that gain? So with pre/post-measures, you're collecting at the beginning to establish a baseline and then again at the end. And that's useful for, obviously, change over time. In our programs, what was the annual income coming in and what was the income when they left? Did that change?
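Here is a minimal sketch of that pre/post comparison in Python, with made-up income numbers:

```python
# Made-up annual incomes at program entry (baseline) and at exit.
pre = [12000, 18000, 9000, 15000]
post = [15000, 21000, 14000, 15000]

gains = [after - before for before, after in zip(pre, post)]
print(gains)                    # [3000, 3000, 5000, 0]
print(sum(gains) / len(gains))  # average gain: 2750.0
```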
For time series, it's just repeated measures multiple times. So we have certain information that we're collecting on a monthly basis to track and observe how things change month by month. And then even stronger, so this is moving towards a stronger design, is having a comparison group. And so you could look at
the skill level pre- and post- of those who complete a workshop versus those who didn’t do a workshop. Or like those who were in the program versus those who weren’t or
were in a different program. And the best, which is often
hard to do in our field, is if you can randomly
assign people to participate. Then you can really make
a stronger assumption that the outcomes that
you see are attributable to the program that you delivered, if you have randomized people into the groups.
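A minimal sketch of random assignment, with hypothetical participants:

```python
import random

# Randomly split eligible participants into program and comparison groups,
# so outcome differences are more plausibly attributable to the program.
random.seed(42)  # seeded only so the example is reproducible
eligible = [f"participant_{i}" for i in range(10)]
random.shuffle(eligible)
half = len(eligible) // 2
program_group, comparison_group = eligible[:half], eligible[half:]
print(program_group)
print(comparison_group)
```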
But anyway, we could talk a lot more about that if people want to, (laughing) about the actual design of the data collection. And then there are procedures to think about, such as: Who will be
actually collecting the data? How will they be trained? Who will be monitoring
that data collection? How will you prepare your participants or clients for data collection so that, first, they know what’s happening and so that they
understand its importance and its intended use and
let them see its value? And making sure that they know
that their confidentiality is ensured and how will you be doing that? So thinking of those kinds of things. And also how are you going to
ensure the quality of the data? That might involve auditing it through spot checking, or perhaps doing double entry and making sure that things line up. Using automation is a nice, very simple way to ensure quality. So you have a drop-down menu of limited options, so people can only enter values that are actually possible. And you might want to format databases to accept only certain numbers, things like that, to keep the quality strong.
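As a rough illustration of that kind of automation, here is a minimal Python sketch; the field names and ranges are hypothetical:

```python
# Accept only values from a fixed option list (like a drop-down menu) and
# only numbers in a plausible range, to keep data quality strong.
ALLOWED_STATUSES = {"enrolled", "exited", "on hold"}

def validate_record(record: dict) -> list:
    errors = []
    if record.get("status") not in ALLOWED_STATUSES:
        errors.append("status must be one of " + ", ".join(sorted(ALLOWED_STATUSES)))
    income = record.get("monthly_income")
    if not isinstance(income, (int, float)) or not 0 <= income <= 50000:
        errors.append("monthly_income must be a number between 0 and 50000")
    return errors

print(validate_record({"status": "enrolled", "monthly_income": 1800}))  # []
print(validate_record({"status": "graduated", "monthly_income": -5}))   # two errors
```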
And I'm not going to go into data analysis, 'cause again, that's a whole other thing. But just to say that it's about looking at the information that you've collected and asking yourself what it all means. In that process, you're saying: hey, what is all this? And once you have the data, it's really up to you to make use of it to inform the decisions
about your program. And there’s lots of ways
you can piece it apart and sort of subdivide it
to look at certain things by different groups and look at relationships between things. And you can answer a lot
of questions in this phase.
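For example, here is a minimal sketch of that kind of subdividing, in Python with made-up data:

```python
import pandas as pd

# Slice outcomes by group, e.g., exit income and employment rate by site.
df = pd.DataFrame({
    "site": ["A", "A", "B", "B", "B"],
    "exit_income": [18000, 22000, 15000, 17000, 21000],
    "employed_at_exit": [True, True, False, True, True],
})
print(df.groupby("site")["exit_income"].mean())
print(df.groupby("site")["employed_at_exit"].mean())  # employment rate by site
```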
So now, turning to the main piece that I wanted to get into, specifically talking about the stuff that we do here around: How do you communicate the results and reflect on the learning and use it to infuse the organization
with a culture of data and a culture of continuous improvement? So what I have for the next
bit of the presentation is really just some
examples, and I’m going to talk about how we’ve been here. This is not to say this
is the best way to do it, but just, going off that point, to show some examples of the work. So this is just a snippet of one of our quarterly reports
for one of our programs. On an annual basis, we hold
target-setting meetings with all programs. We have 11 programs right now. And in those meetings
we’ll have a target-setting and also sort of a
strategy review meeting, a reflection on the whole year and a looking towards the year ahead. We will do a SWOT analysis,
and we’ll review strategies that we’ll need to achieve the targets. And then on a quarterly
basis, so every three months, we’re meeting… again, the data
team meets with each program and ideally with everybody on the team. So you have the staff and the managers. And we’ll use that to,
first of all, each quarter check in on the numbers, so this is like another
way of auditing it. We do monthly audits, so we
have that process happening on an ongoing basis, but
then when we’re gathering the quarter-by-quarter data, we’ll still meet and say:
Does this look right? And often people,
they’re so in their work, they’re like, “No, I know that
so and so just graduated, so that number should be here.” So we do just like some
checking on the numbers, but also really we’re trying
to understand patterns that we’re seeing. We’re trying to highlight
successes and things that we’ve seen improvement on, and also highlight areas
that may have been neglected. We maybe need to focus more
on understanding the trends and understanding maybe potential outliers and gaining a sense of the
context around these numbers, too, and where we may want to focus. And then from that, we’ll
do that program by program and then pull together sort of a summary of all the programs. And often you’ll see that the pattern– you know, it’s not just one program that
is struggling with savings; you see that across the board. Or you may see that most of the programs are struggling to meet, let's say, the savings target, but one or two programs that were struggling are now doing better. And you know, this stuff gets highlighted when you're comparing and pulling together these reports. And so then you can say: What's going on? What's going on in this program? What has been working here? And we'll try to pull together
learning teams around that. So that if a new strategy
had been used or something, some tool or some activity
that was successful or maybe just a way of doing practice, that can be then shared
across the organization so that we’re not all just
these separate programs. 'Cause actually several of them are in separate physical spaces, but we're still reaching out and learning across that divide about what's working and what practices we
can use across programs. And these quarterly
reports, we also then meet. We pull together an executive summary and meet with the executive team to review this, too, and then the reports go out across the whole organization. So they kind of get a sense
not just of what’s happening in their program, but what’s
happening organization-wide and how they fit into that picture. So that’s one learning
table, and as you can see, I don’t think I have the ability to point or highlight anything. Okay, well, anyway, so this report also has some target-setting
and comparison to the previous fiscal year. So this is our FY18 report and we have a column at the end, on the right, that has the FY17 numbers. And then we have quarter by quarter, so you can see if there’s
changes over a quarter. And then the year-to-date
kind of is just keeping tabs on what’s happening across that whole year and how does that year look
in comparison to the targets. Do we meet the target? Did we get close and maybe we’re in the range, or did we not? And also, how did that look,
what we achieved this year in comparison to last year? Are there areas where we
really saw huge improvements, like one of the bottom ones? Somewhere we actually got a bit lower. So we’ll review all of those things in our regular meetings. Another way of reflecting
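As a rough sketch of that kind of comparison (the outcomes and numbers here are invented, not our actual figures):

```python
import pandas as pd

# Compare year-to-date actuals against targets and the previous fiscal
# year, the way the quarterly report columns do. All numbers are made up.
report = pd.DataFrame({
    "outcome": ["Employed at exit", "Saving monthly", "Permanent housing"],
    "fy18_ytd": [62, 35, 48],
    "fy18_target": [60, 45, 50],
    "fy17": [55, 40, 41],
})
report["met_target"] = report["fy18_ytd"] >= report["fy18_target"]
report["vs_fy17"] = report["fy18_ytd"] - report["fy17"]
print(report)
```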
on the data is looking at, again, just a different
way of visualizing it on what the targets were and
what the actual outcomes were and how that changed over time. So this is showing,
first: Are we getting better at setting targets? And are we seeing any trends
or patterns in our results? Sometimes, as in these cases especially, it's just sort of like, "Oh, it's fluctuating a bit." And this is just one example that I pulled, actually just three outcomes pulled from that report, but in other cases we're seeing a steady improvement over time, or sometimes you might see a dip in a certain year and then it comes back up. But all these things are used
for sparking conversation around what can explain the trends and what can explain the
things, the anomalies. And what can we learn from that? And in addition to trying to improve the work and the outcomes, can we also improve what we're collecting, how we're showing it, the process, the way that we're doing measurement? We're always looking to tweak that. And even our report, this one back here, we've tweaked; each year we're trying to get better at the way
that we do this work as well. So we’re always getting feedback to try to improve on all of those
levels that I mentioned. Then another way of… Oh, I should say this was
just in the quarterly report; we'll also put out broader reports or data on an annual basis, saying what has happened over this year and how does that look
compared to other years. And then data can also be
used for helping the work of our staff and employees in
understanding their caseload, or understanding their employees’ workflow so that they can help them
manage their caseload, but also improve where they need to. So the top row that is
a little hard to read is just a small bit we pulled from a client management report, and it's at the level of a program. It says how many households there are; how many of those households, how many people. In our case, because we do Bridges, we're interested in how many people have had a Bridge, and then highlighting how many people have not been Bridged. The next one, which got cut off, is how many participants are
missing an entry assessment. And then it goes on and on to show how many people have
currently active goals, how many are missing. And it’s just a way to highlight, rather than having to
keep track case by case and checking on everything, you can get this summary that says, “Okay, we have this many people,
but we’re missing a few here.” And then you can click
on that in these reports, click in and say: Who are they? And then do some follow up. We’ve also created reports
that would indicate for staff the days since last contact with someone and then they can sort that column if they’re thinking who to reach out to. We can sort by the people who
it’s been the longest with and they might say, “Oh
gosh, it’s been three months since I connected with this person and I will start here in my work.” And we’ve heard really
positive feedback from staff saying it helps them to manage their work and see where they need
to be doing other things. And then also it’s useful
then, or can be useful to see, kind of tracking the
activities that are going on. And do those align with the
outcomes that we’re seeing? And especially if you’re
interested in looking up by like program or even by staff, like do the activities that
certain staff members are doing align with different
outcomes for their caseload? And obviously it's going to be complicated, because you don't necessarily just get a randomly distributed caseload; different demographics and various people come in. Oh, I guess I should say here, especially in this section
when you’re thinking of looking at a particular,
maybe even individual, staff members is that it’s so
important not to use the data to judge or punish, or
even to like rank people against each other, because
it’s really essential for learning, improving and
encouraging and celebrating. And it’s hard to do that
because I think it’s just by nature, I think the way we grow up,
we get grades all the time, which do say, “Okay, you’re
good at this or you’re not.” And so I feel that sense from people sometimes when we look at
reports they might say, “Oh well, this should be higher, or this should look better.” But we try as much as possible to say: This is not to say “this is bad” and we are giving you a report card, but what can we learn from this and to ultimately just do
the work better and improve, or even just understand the work better. So that’s always a struggle. Because I think inherent
in reports and numbers is a little bit of the
feeling of being evaluated. This second chart below here that is– you see it with all these colored spikes. I crossed out participant–or not
participant, excuse me–staff names, but that’s an example that
a staff manager could look at, what activity is happening across her or his department by staff. So this is one example. This particular example is looking at… I believe this is looking at goals set by an area in the pillars. So we have like: was
it an education goal? Was it a well-being goal? Was it a financial goal? And you can see here that
there’s certain people who are much more likely
to set one type of a goal, and actually it nicely aligns with their expertise in this case. But also seeing where
there’s been low activity and where there’s been a lot of activity. And this can be very
useful for managers, too, if they’re doing reviews with their team and to have meetings with them to check in to understand what’s going on, and then also share if
something’s working well, with one person in the same
way I mentioned earlier on if something is working
well in one program, we can share that with other programs. If something’s working well
with a particular mentor or manager or staff member, can that be… you know,
what’s going on? And can that be shared with other people? So there are just other
ways of using the data to understand the work and
ultimately with the goal of linking it with outcomes
and improving the outcomes for the participants
or achieving the goals that you are hoping to achieve. And then speaking of using
the data to encourage and celebrate, one thing
that we do annually is… it’s usually during our Thanksgiving, we have monthly all staff meetings and during one that
happens at Thanksgiving we present what we have called
“bragging boards.” And it's a chance for each program to highlight the three statistics, or three outcomes, that they were most proud of over the previous year. This is actually an old one;
we’ve changed them a bit. And in this short period of time, they present to the whole staff. They have their board. They say what went into
getting these numbers, like the work that they had to do, why they’re proud of them. They talk about who they’re thankful for, and it’s a very nice way
of showing the connections across the programs and the organization. Who made this work possible? And then what they’re
focusing on for the next year, striving on for the next year. So even though we’re not
showing everything and it really is specifically around just
showing the good stuff here, it also has the same spirit
to it of: you’re reflecting, you’re showing the gratitude,
and you’re also looking forward and thinking of strategies
for the next year. So similar to our reflection meetings that we have with the teams. And then from there…
and these are really, really lovely meetings. And then from there, so each program will get a physical board
to put at their center. And then it’s at our main
administrative offices we have another copy that we put up. And we kind of just plaster the walls with them for the year. And so this is a way of infusing… the environment gets infused
with data and with outcomes, and it helps to change the culture that we are an organization
that looks at outcomes and focuses on them and celebrates them. And this also leads to when we have people visit, too, it just leads to conversations. So another way to report
is in the qualitative way in telling personal stories. And this also follows from…
this is celebrating the successes and even
more so really around promoting a growth mindset
and high expectations. We try to saturate messages
of counternarrative. In the populations that
we’re serving there are a lot of negative stereotypes and biases. And so as much as possible,
we want to show that other story and present that counternarrative that people get faced with a lot. So we can do it in
newsletters, success blasts that we email out across
organization, reports, media. Like one of them is a newspaper
story and social media. And these ones that are
featured here are of clients or participants, but you can also do this, and we also do this, at the level of staff and even at organizational successes. One thing that we have
is High Five, this big, very silly-looking blue
mitt you give someone a physical high five with, and it’s for when we
notice something impressive that another staff member has done. And that's a way of publicly acknowledging and celebrating that work. And then another one of my
favorites for getting that out, and for changing how people think about data as just being, okay, now we're evaluating and looking at numbers: weekly, by email across the organization, we send out Stat Attacks. And we have a mascot, this little data cat who has become our data
team’s mascot that we just use with our emails and various
things to make things more fun. And we try to also, or
we have lately been trying to make the stats that we send
out fit with what’s going on. So this one here about moving, we put out that first week of September. We’re in Boston. I don’t
know what city everyone is calling from, but
in Boston everybody… that's just when the whole city moves. And so we thought, "Oh,
let’s put out a stat about what movement is happening
in our shelter programs. How many people are moving out of shelter into permanent housing.” And we just provide this
little bit of information. This one is in May when everyone, a lot of people are graduating. We have a ton of schools here in Boston and we put out a Stat Attack
around what is going on in terms of education
and training programs and our participants,
how many were enrolled, and how many people were graduating. And here’s a third example. I just like these factoids,
threw on three of them. This one was over the summer. We pulled out… and this is interesting, ’cause this is not even something that’s one of our outcomes. It’s not something we really tracked, but we had this data on how many kids we enrolled in summer camp programs. So these Stat Attacks, we also
have increasingly been trying to use to generate discussion. So sometimes we'll put out
a little interesting one and then say… ask a
follow up question like: Okay, this percent of people
enrolled in education. And next week, guess how
many people graduated? Or something like that so that
we get people writing back. And then we'll do a raffle and pick winners. So it's just a way of
getting engagement around them, making these stats more fun, making data playful and also interesting to everyone. And then this… all that
the last few slides have been talking about is really ways of enhancing a data
culture of an organization, which is so important
for when you’re wanting to be collecting data and
doing outcomes measurement, to have the support and environment that makes that work easier. So I'll just end with a couple of slides about data culture, and then we'd be happy to talk about any of this stuff. So, a data-positive organization exists where people believe that good information is important enough to warrant the resources needed to produce it. A data-positive organization
is one that relies on data to make decisions and strategic
organizational change. And it uses data in every
facet of the organization, and it encourages and celebrates
data collection and analysis for internal use, so institutional knowledge, as well as external use. So there might be more on
marketing or sending out to funders or promoting
the work that’s being done. And to enhance a data culture
it requires prioritizing and investing in data
collection, data management, and analysis/knowledge production. Encouraging staff to access and derive insight from the data, so that there’s data literacy across the whole organization. It’s not just the IT team
or the evaluation staff who have the access to
the data and can say: “Oh, here’s what I’m learning,
here’s what I’m seeing.” But really it’s something that’s shared. It’s something that’s brought to the table and getting perspectives
from people who are on the front lines doing the work and from people who are
looking across the programs and seeing patterns like that. You really want the data
literacy and the input and feedback and the reflection on it to happen across the whole organization. And data-informed cultures are also those that have conscious use of assessment, revision, and learning built into the way that they plan, manage, and operate. So from the leadership
team and the strategy planning to the decision making, to meetings, even to job descriptions,
a data-informed culture has continuous improvement embedded into the way it functions. You’re using data to solve problems, make decisions, tell
stories as I’ve been kind of talking about throughout,
continuously improve. Okay, so that concludes the presentation. It’s a little odd just
talking to a computer. I don’t know if people
are still here. (laughing) Oh, I see that there’s a
highlight on the Q and A section. So I’m going to click on
this and see what I see. Okay, hold on a minute. Okay, hold on, only part of
this is showing up for me. So I do see a question here. Tina, I’m not sure if you
can see the full question. I’m only seeing one line. Maybe… oh, here we go. – [Tina] Yeah, it’s how it’s played out. But I can read it for you. I see the whole entire question. – [Ashley] Oh, okay. I
see it now. (laughing) – [Tina] You do? Okay. – [Ashley] I see it now.
So okay, the question is: How do you balance the
amount of data you collect, what your funders require you to collect, versus what you’re interested in knowing? So that’s a good question and a challenge. I guess what we… and it is
kind of a balancing act a bit. What we try to do here, starting at the beginning, to the extent that we can, is to propose to funders the information that we want them to ask for. So if, for example, if
we’re wanting funding for a certain project and we know that the funder will want
to know employment outcomes, they might not specify how to
get that. And so we can say, well… so kind of like when
we look back at the outcomes and then the indicators, we try to say, “Well, this is how we can indicate that we are having the
outcome you’re hoping for.” And so we can align so that
we’re showing the outcome or the impact that a
funder might want to see, but we’re using indicators that we are already collecting
for our participants. We can say, “Well, we can show you this, this, and this to show this idea.” But in some cases we will
have funders who say, “We want to know this specific thing measured in this specific way.” So sometimes you just have to do that, because it is important to get funding. But it’s also what we’ll
do on an annual basis is review all of the measures that we want that we’re interested
in, what we’re collecting, and see where that lines up
with things that are required. And so one big example for all of our shelter housing programs
we have required information that we have to send in to DC in a particular system, in a particular way. We have our own system for collecting and managing our data, and
sometimes what they want, they change it annually, they tweak it a little. And
so it ends up being different than what we are asking. But we’ll, each year,
we’ll look at the two and say, “Okay, we’re basically
getting at the same thing. We aren’t able to change what they want, but we can change how we’re doing it.” So we always try to, on an annual basis, ’cause if we did it more
regularly it would be too much, but on an annual basis
align what we’re collecting with what we have to be, whether it’s for a contract
or for a funding requirement. So that’s another way
to kind of balance it, if you can, to the extent you can and… yeah, I hope that starts to answer your question, I think. And also I guess ideally
you have the alignment in where you’re seeking the funding and the types of programs you’re doing. So if you would want to get
funding for a program, you’d probably be
interested in the outcomes that a funder who’s funding your program would be interested in, too. So there’s alignment there, you hope, that you’re interested in knowing, at least similar things to
what they are interested in. Let’s see… oh okay, I
see that there’s… okay. Do we have time for
another question, Tina? I think we have… – [Tina] Yes, it would help
me to expand these screens so it doesn’t mush the question. But I had another question. It says, “Can you suggest a way to start a robust data
collection program slowly?” – [Ashley] Okay, let’s see. So I guess it depends
where you’re starting from. I think for starting
slowly… If you go back, I’m thinking of just
at the very beginning. Starting slowly means starting small, really starting with: What do I really want to know? What would those key metrics be? And starting with a small set of precise key measures
is a good starting place. So we work with a lot of
organizations around the US and actually internationally,
too, and some are… they’re partnering with us and they want to be collecting more data,
but they are very small. And so we’re fortunate
to have a research team and a data team. Some of the organizations are…
they don’t have that capacity and they are so wanting,
though, to collect this. So in that case we talk
about how you share that work and share that burden in a way. But even in our team,
we have our own staff who are working with… They’re collecting the data and they’re entering their own data. So each person is responsible for that. Where we can and where
we have the capacity, an administrative assistant
can help take some of that off by
doing the data entry part or maybe some of the data cleaning part. So to the extent that you
can kind of not put it all on one person if you
don’t have that person and sharing that work
is a way to start slow. And you don’t say, “We’re going to now all of a sudden collect everything and look at everything all
the time and all the ways.” You say, “We’ll start here and we’ll start with a basic maybe pre- and
post- on these 10 key measures that will show us what
we’re hoping to see. And as you start that way,
you can start to build up some of those muscles, too. I think, aligned with the
previous question around funding, you know, that might be a place to start ’cause it might be that
through the funding you might have more capacity to do the
work, and it’s very important that you get those questions answered. So you might want to know,
just start with the measures that a funder is interested
in and think about how can you do that in the simplest, most straightforward way. Or another way of focusing
might be if you’re making a change or
implementing a new program you just focus there. And you could also think about piloting where it’s possible. So you might not do the
assessment with all programs, all participants. You might start with one. I don't know whether whoever asked this has tons of programs or maybe just one, but you might not even
start with everybody. You pilot, you will test
out some of the measures and see how that goes, and
in that process saying: How long did it actually
take to conduct this? Like how much time did it
take of the participant? How much extra time of
the staff, of the mentor? Then when you test it out in a small way you can see how you could
scale out across the rest of the program or other
programs in the organization. And you also will have built out an accurate sense of the resources that you need, from doing it in a smaller way. So some of our new measures– 'cause some people feel they're sort of at capacity– for some of our new measures,
that’s how we do it. We’ll just pilot it in a program that does have a little more capacity. They have a bit more time,
maybe lower caseloads, and we’ll test it out there, see how it’s working, and
then expand from there, especially if you start
to see promising results and find a way to make it help the work, then that can be a good way to start. Yeah. Are there any other questions? – [Tina] I don’t see any other
questions in the Q and A box. And again, if you all would like to use the Raise Your Hand option, it’s located on the
right side of the screen. And we can unmute you, and you can verbally
ask what your question is so Ms. Winning can answer it. If not, we do still have a Q and A box for those who may be
shy or have earbuds in and can’t verbally speak. So yeah, we still have options available for those who want to have
a question answered. – [Ashley] I see a question
on staff resistance. How do you deal with
staff who are resistant to collecting or considering data? I think for this, it’s… well, I think a lot of things about this. One key thing is relationship building. It’s really all about
relationship building. So when there’s resistance,
which there often is, especially if it’s a new thing
that you’re implementing, if you’re starting to just collect data where you haven’t
before, or even if you’ve been collecting for a long
time and you’re wanting to collect it more or in a different way, there’s often resistance. And it’s important to think about why and where that comes from, and usually it’s some kind of fear. Either fear of having not enough time and being totally overwhelmed/
overburdened already. Or maybe it could be fear of not knowing, thinking you don’t know how
to do it or how it might work. Or some of the fears around data that we talked about
earlier around judgment. Maybe the burden on clients. So in talking with staff, understanding why they have a concern. Just recently I've been getting a little bit of, I wouldn't call it resistance, but a little pushback on trying to add a new measure. And even though I'm thinking about what the measure can be for, I'm listening: well, what is the concern here? And they had some really
great, valid concerns. And so it’s important to listen to that. Also, the other thing we try to do is to make things as easy as possible. So we won’t put… say,
“Okay, we want to do this and you’re going to need to do this and you’re going to need to
do this and there it is.” We would try to automate
as much as possible and make it the simplest way possible. But that involves talking about it and relationship building. And I think when there’s resistance it can also come from having people feel like it’s being pushed on them. So what we try to do a lot is integrate all staff into that decision. We’ll do a sort of… if I’m
interested in something new, I’ll sort of preview and give a heads up and look for the
champions in the programs who… There are some people who are really into data collection. (laughing) And they’re like, yes,
let’s do it, let’s get more. So finding those people so
they can really then share it with their teams and get people onboard and get excited with them, but including people in the process, including their input
to say: “We’re thinking of collecting data. How
would that work for you? What challenges would that create? How could we alleviate
some of those challenges?” All that involvement in the
process will make the team, make the staff feel like they’re not having something imposed on them. They’re a part of something
and they got to shape it. And really importantly, I think, is showing and explaining and getting excitement around why we're doing it. If you just throw out a measure and say, "Now you have to collect this. The end," then there should be resistance. But if you are saying, "We think
that the work you’re doing is having this impact that
we aren’t yet able to capture and we think this is a way of
capturing it. And here’s why and here’s the literature around it and this is why it’s
important or could be useful and/or… and this is how it can help you do your work differently or better, and this is what your participant or client might learn
from this experience.” All of that really integrates
it into their world and, I think, lessens the resistance. It makes it feel like, “This is
a worthwhile use of my time.” And aligning with the other question: starting it slowly and piloting it can be a great way, too. You can do it in one program and say, "Wow, look what we learned here." And then other people might
say, “I want to capture that too.” And what we’ve found over
time is where we used to get resistance, now we’re
starting to see staff saying, “I don’t think you’re collecting
this, right? Can you… I think you should be looking at this?” And we’re like, “That means you have to collect and enter more data.” So we’re getting the
request from the other side because of people seeing its value and being able to do the work. And I guess another thing
I’ll say is whenever we try to add work collecting data, we will also in that
annual review say what haven’t we been using, we collected and we didn’t do anything with it or it didn’t show us anything interesting, or we’re not getting accurate
feedback so we can’t use… it’s not good quality data. We’ll cut that out and
replace it with something that we can use better,
so that we’re trying to at least set balance…
balance it out so it’s not just add
more, more, more, more. ‘Cause that will be overwhelming. Yeah, okay. Okay (laughing) so thank you. Are there any other… does anyone else have
anything they want to say? I believe I have a time set for if people do have follow-up questions, but if anyone has anything… Oh, and maybe I can move
to my last slide here. If you do want any other information, or to continue the conversation, feel free to contact me and
my email address is there. Tina, is there anything else? – [Tina] No, I don’t see anyone
in the queue for questions, and there are no more Q
and A questions in the box. – [Ashley] Okay. – [Tina] If you don’t have
any additional questions, I will end the call and thank you all for
joining today’s webinar. We appreciate your time and attention and if you have any questions
related to the presentation, please submit them to
your program administrator or to Ms. Winning. And her contact information
is here on the screen if you would like to have
that for your records. And thank you again and
have a great day, everyone. – [Ashley] Thank you. Happy weekend.



