Author Archives: Margaret Smith

5 W’s of a Process Evaluation: Part 2

In a recent blog post, we introduced the first two W’s of a process evaluation:

  1. Why conduct a process evaluation
  2. Who should conduct a process evaluation

This blog post will cover the remaining three W’s:

  1. What methods to use to conduct a process evaluation
  2. Where to conduct a process evaluation
  3. When to conduct a process evaluation

WHAT METHODS TO USE WHEN CONDUCTING A PROCESS EVALUATION

There are several different data tools and methods you can use during a process evaluation. It may be helpful to use a combination of these methods!

  • Review documentation: It can be helpful to review staff logs, notes, attendance data and other program documents during a process evaluation. This method will help you to assess if all staff are following program procedures and documentation requirements.
  • Complete fidelity checks: Many programs/curriculums come with fidelity checklists for assessing program implementation. This is especially important if you are implementing an evidence-based program or model. Programs may have a set number of required sessions and guidelines for how frequently they should occur. You can use fidelity checklists to assess if the program’s implementation is consistent with the original program model.
  • Observe: Observations can be especially helpful when you have multiple sites and/or facilitators. During observations, it’s crucial to have a specific rating sheet or checklist of what you should expect to see. If a program has a fidelity checklist, you can use it during observations! If not, you should create your own rubric.
  • Collect stakeholder feedback: Stakeholder feedback gives you an idea of how each stakeholder group is experiencing your program. Groups to engage include program staff, clients, families of clients and staff from partner programs/organizations. You can use interviews, surveys, and focus groups to collect their feedback. These methods should not focus on your clients’ outcomes, but on their experience in the program. This will include their understanding of the program goals, structure, implementation, operating procedures and other program implementation components.
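Taken together, these methods often produce checklist-style data. Here is a minimal sketch of how a fidelity checklist might be tallied; the items and scoring below are hypothetical, not drawn from any specific program model:

```python
# Hypothetical fidelity-checklist tally -- the items are illustrative only,
# not from any specific evidence-based program model.
checklist = {
    "All required sessions delivered": True,
    "Sessions held at the required frequency": True,
    "Approved curriculum materials used": False,
    "Facilitators completed required training": True,
}

# Each True counts as one checklist item met.
items_met = sum(checklist.values())
fidelity_pct = 100 * items_met / len(checklist)
print(f"{items_met}/{len(checklist)} checklist items met ({fidelity_pct:.0f}% fidelity)")
```

A simple tally like this makes it easy to compare fidelity across sites or facilitators during observations.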

In our evaluation project with the Wabash YMCA’s 21st Century Community Learning Center, we used a combination of the methods described above. Our staff observed each program site using a guiding rubric. Our team collaborated beforehand to make sure they had a consistent understanding of what components to look for during observations. We also collected stakeholder feedback by conducting surveys with students, parents and teachers. The content of these surveys focused on their experiences and knowledge of the program. After the program was complete, we reviewed documentation, including attendance records and program demographic information.

WHERE TO CONDUCT A PROCESS EVALUATION

You should conduct a process evaluation wherever the program takes place. To capture an accurate picture of implementation, an evaluator needs to see how the program operates in the usual program environment. It is important to assess the implementation in all program environments. For example, if a program is being implemented at four different sites, you should assess the implementation at each site.

In our evaluation project with the Wabash YMCA, we assessed the program implementation at three different school sites. This involved physically observing the program at each site as well as reviewing records and documentation from each site. Being in the physical environment allowed us to assess which procedures were used consistently among sites. It also helped us identify program components that needed improvement.

WHEN TO CONDUCT A PROCESS EVALUATION

An organization can conduct a process evaluation at any time, but here are a few examples of times when its use would be most beneficial:

  • A few months to a year after starting a new program, you can conduct a process evaluation to assess how well your staff followed the implementation plan.
  • When you’re thinking about making a change to a program, a process evaluation will help you determine in what program areas you need to make changes.
  • If your program is not doing well, conduct a process evaluation to see if something in your process is interfering with program success.
  • When your program is doing well, conduct a process evaluation to see what in your process is making it successful.
  • If you’ve had issues with staff turnover, conducting a process evaluation can help identify gaps in staff training, professional development and ongoing support that may be contributing to the turnover rate.

To determine when to conduct a process evaluation, it is also important to consider the capacity of your organization. Make sure that your staff will have enough time to devote to the evaluation. Even when using an external evaluator, staff may need to spend extra time meeting with evaluators or participating in focus groups/interviews.

We conducted our evaluation with the Wabash YMCA at the end of their first year of program implementation. Evaluating their first year allows us to provide recommendations on how to improve the program’s implementation in future years. We will conduct a similar evaluation in each of the next three years to track their operations and processes over time.

If your organization needs support in conducting a process evaluation, contact us today to learn more about our evaluation services!


5 W’s of a Process Evaluation: Part 1

When it comes to program evaluation, people often think of evaluating the effectiveness and outcomes of their program. They may not think about evaluating how the program was administered or delivered, which can also affect program outcomes. There are several types of valuable evaluations that do not focus on outcomes. One type, called a process (or formative) evaluation, assesses how a program is being implemented.

In this two part blog series, we are going to cover the 5 W’s of a Process Evaluation:

  1. Why conduct a process evaluation
  2. Who should conduct a process evaluation
  3. What methods to use to conduct a process evaluation
  4. Where to conduct a process evaluation
  5. When to conduct a process evaluation

In this first blog in the series we will cover the first two W’s. The next blog will discuss the other three.

WHY CONDUCT A PROCESS EVALUATION

Let’s start with the “why”. A process evaluation helps an organization better understand how their program is functioning and operating. Process evaluations also serve as an accountability measure and can answer key questions, such as:

  • Is the program operating as it was designed and intended?
  • Is the current implementation adhering to program fidelity?
  • Is the program being implemented consistently across multiple sites and staff, if applicable?
  • What type and frequency of services are provided?
  • What program procedures are followed?
  • Is the program serving its targeted population?

It is important to determine what you want to learn from your process evaluation. Maybe you want to assess if the program is being implemented as it was intended or you want to know if the program model is being followed. Whatever the reason, you want to be clear about why you are completing the process evaluation and what you hope to learn.

We are currently working with the Wabash YMCA’s 21st Century Community Learning Center to evaluate their program implementation. Each center is required to work with an external evaluator to conduct a process evaluation. Here is what we hope to learn and the why of this evaluation:

  1. The evaluation will assess if the program has been implemented as it was intended and if it is adhering to state standards;
  2. This evaluation will capture the population served through the assessment of attendance trends;
  3. The findings from the process evaluation will be used for program improvement in subsequent years.

WHO SHOULD CONDUCT YOUR PROCESS EVALUATION

When determining who will conduct your process evaluation, you have the option of either identifying an internal staff member (e.g., a program manager or quality assurance staff member) from your organization or hiring an external evaluator. Many organizations find that there are challenges with using an internal team member: they may not be objective, they lack a fresh perspective, and they often have other job responsibilities beyond the evaluation.

For the reasons mentioned above, it is beneficial to have an external evaluator (like TCG!). An external evaluator will be able to assess the operations of your program through an unbiased lens. This is especially helpful if a program has multiple sites. An external evaluator can assess all sites/facilitators for consistency more objectively than a program staff member. (If you’re interested in learning more about how to evaluate multi-site programs, view our blog post here!)

In our evaluation project with the Wabash YMCA, the decision to conduct an evaluation with an external group was made by their funders. This decision ensures that the evaluation is high quality and objective.

The other three W’s will be discussed in a later blog post, so stay tuned! In the meantime, contact us today to learn more about our evaluation services!


Press Pause: Redesigning an Existing Program

In January, we kicked off our Exercise Your Impact campaign. Throughout 2019 we’re sharing tools and resources that highlight critical phases of organizational planning. During Quarter 2, we are focusing on program development. An important aspect of program development is knowing when to redesign an existing program.

Too often, organizations operate on “auto-pilot” and keep running their programs in the same way because that’s how they’ve always done it. Programs can start to feel stagnant or stuck in a rut, but the effort involved in making a big programmatic change can keep programs in this place. Staying on auto-pilot can cause staff to feel overworked and stressed, especially if a program is not having the desired impact. Instead of forcing an ineffective program forward, it may be time to pause, reflect, and redesign the program.

In a previous blog post, we provided a few tips on how to know when it is time to redesign a program. For example, we suggest you reflect on the following three questions:

  1. Are we making the impact we hoped to make?
  2. Is our program aligned with the latest research?
  3. Is our program meeting the needs of the target population?

Transform Consulting Group (TCG) is currently working with La Plaza to help them redesign their Tu Futuro college and career readiness program. La Plaza evaluated the program a couple of years ago, and the data showed that they weren’t making the impact they set out to make. They realized they needed to better align the program with best practice research and narrow their target population: working with fewer schools and focusing on high school students rather than both high school and middle school students.

Here are three steps they took during the redesign process that may also help your organization:

1. Engage Stakeholders

A key part of the redesign process is engagement of stakeholders, including staff. This is a great opportunity to talk to staff at all levels involved with a program for their feedback on what is and is not working. Including staff in this collaborative process is reinvigorating and creates a renewed sense of purpose.

It’s important to also engage other stakeholders impacted by or interacting with your program. For example, in their previous program evaluation, La Plaza collected stakeholder feedback from students, parents, school partners, and funders. This information was crucial during the redesign process.

2. Identify and Engage Key Partners

Redesigning a program is no easy task. It is time intensive and, depending on the subject matter, may require bringing in experts. La Plaza identified partners to help them accomplish their new goals and make their vision a reality. Their key partners included TCG to help design a new curriculum based on best practice research and a philanthropic partner to fund the project.

3. Pilot the Program

Once you redesign your program, it is helpful to pilot it on a small-scale. This allows you to catch potential problems and fix them before full implementation. Piloting a program can also help test the efficacy of the redesigned program. By piloting the program with a smaller target population, you mitigate the risk of overstretching your staff.

When launching the first year of their redesigned Tu Futuro program, La Plaza decided to partner with one local high school. While they will expand to more schools in the future, this pilot period allowed staff to learn and successfully implement the new curriculum and form deeper relationships with students.

We know that redesigning a program is a daunting task that can disrupt your day to day operations. If you want help assessing your current program or beginning the process of a redesign, contact us today. We would love to learn more about your programming goals to see how we could support you!


5 Tips to Implement an Evidence-based Program

When awarding funding, philanthropic funders want to invest in “what works,” and many show a preference for programs and practices that are evidence-based. Implementing an evidence-based program is a great way for grant seekers to demonstrate that they, too, are committed to “what works.”

For example, the Richard M. Fairbanks Foundation recently awarded funding to over 20 schools and school districts as part of their Prevention Matters initiative. Prevention Matters is a three-year grant initiative that aims to help Marion County schools identify, implement and sustain proven substance use prevention programs.

To apply for this funding, schools selected an evidence-based substance use prevention program that aligned with their needs. In their proposal, schools had to demonstrate that they had a strong plan for implementation and sustainability. Developing such a plan can be a daunting task, but it is crucial for successful implementation. We worked with Bishop Chatard and the North Deanery Schools of the Archdiocese of Indianapolis to help them develop their implementation plan and proposal (which was fully funded by the Fairbanks Foundation! Learn more about our fundraising services here). Here are 5 tips we used to help them prepare to successfully implement their evidence-based program!

1. Select an Evidence-based Program

First, you need to find a program that aligns with the needs you are trying to address. For example, if you are a school looking to prevent substance use and violence, while also promoting positive youth development, you may choose to implement the Botvin LifeSkills Training curriculum.

Taking the time to research available programs is crucial to ensuring successful implementation and maximum impact. To learn more about how to find an evidence based program, check out this blog!

2. Assess your Organization’s Capacity

Once you have selected an appropriate evidence-based program, it is important to assess your current funding and staffing capacity. You want to determine whether your current organizational capacity will allow you to implement the program with fidelity. Fidelity refers to the extent to which you deliver your program as the original program model intended. Evidence-based programs are proven effective, and that effectiveness depends on how the program is implemented. Therefore, fidelity to the model is crucial to successful implementation.

Completing a feasibility study is a great way to assess your capacity and readiness. A well designed feasibility study will help an organization assess 1) if what they are thinking of implementing is possible and 2) how to consider implementing it. Check out this blog to learn more about completing a feasibility study.

The assessment of your capacity may indicate that you need to make some organizational changes. For example, you might need to tweak your program budget to purchase necessary materials and/or hire additional staff. Making these operational and workforce investments will lead to more successful implementation and program outcomes.

3. Create an Implementation Plan

Next, it’s time to flesh out your implementation plan. This plan should include a timeline and should specify staff members’ responsibilities for program related tasks. Many evidence-based programs have a set number of required sessions and guidelines for how frequently they should occur. Make sure that your implementation plan aligns with program requirements.

4. Train and Prepare Staff

Once you create your implementation plan, provide training for staff involved in the implementation. Involved staff should have a clear understanding of the program goals, activities, and their responsibilities throughout implementation. Your implementation plan should also include continued professional development opportunities and training for staff, to ensure continued high quality implementation.

5. Establish Continuous Monitoring Procedures

Once you begin implementing the program, you want to continuously monitor your fidelity to the program model. Many evidence-based programs come with accompanying fidelity checklists. It is important to identify a staff member, or an outside evaluator, who will conduct observations of the program to evaluate the implementation. You can use observations and fidelity checklists to assess if the program’s implementation is consistent with the original program model.

If your organization is looking for support in choosing, implementing or evaluating an evidence-based program, contact us today to learn more about our program development and evaluation services!


How to Implement your Strategic Plan

In January, we kicked off our Exercise Your Impact campaign. Throughout 2019 we’re sharing tools and resources that highlight critical phases of organizational planning. Our first quarter focus is on the first phase: Strategic Planning. As we finish out the quarter, we want to provide some tips on how to use and implement your strategic plan.

In a previous blog series, we highlighted the 4 steps of strategic planning and detailed each step (step 1, step 2, step 3, step 4). But what happens once you’ve completed those steps? We often find that organizations get stuck on figuring out how to take the big picture elements in the strategic plan and make them operational. To avoid this, we create an “implementation plan” during Step 4. This implementation plan unpacks the strategic plan into actionable steps. It provides your organization with a road map for how to accomplish the goals identified in the strategic plan.

We recently worked with Healthy Families Indiana (HFI) to help them develop their strategic plan, including a detailed implementation plan. We are currently helping them implement the plan while providing project management support for their Think Tank. The following four components are key pieces of any implementation plan.

Key Implementation Plan Components 

Strategies/Goals: It is important to include any priorities and goals that are set within your strategic plan in your implementation plan. For example, one of HFI’s key goals from their strategic plan is to translate information about brain science to share with parents. Including this goal in the implementation plan ensured that they had a detailed plan to actually achieve it.

Action steps: For each strategy/goal, the implementation plan will spell out action steps to help your organization meet that goal. The tasks associated with the HFI goal listed above included developing or identifying a family-facing brain science resource to share with parents.

Timeline: The implementation plan will indicate which year you plan to complete each task. If you have set a three-year strategic plan, for instance, it is important to schedule tasks evenly across those three years. HFI determined that, in their long-term schedule, developing and/or identifying family-facing brain science resources would fit best within 2019.

Responsibility: It is important that the implementation plan indicate who is responsible for completing each task. Identifying those responsible helps to ensure accountability and track progress. The responsible party could be a group or individual. They could be staff, board members, or a committee. HFI identified the Think Tank as being the most appropriate group to complete the task outlined in the previous components.

Tips for Using your Implementation Plan

  • When carrying out an implementation plan, it is important to establish a procedure to track progress on tasks. For example, we’ve done this for clients by creating a spreadsheet that they can update on a quarterly basis to track actions they’ve taken.
  • An organization can update their implementation plan over time if needed. Tasks may take a longer or shorter amount of time than anticipated, so it is appropriate to update timelines as you go.
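As a minimal sketch of the quarterly tracking idea above — the field names and statuses here are our own, not a prescribed template, and the example tasks are loosely based on the HFI goal discussed earlier:

```python
from collections import Counter

# Illustrative implementation-plan tracker; field names and statuses are
# hypothetical, not a prescribed template. Tasks loosely follow the HFI example.
tasks = [
    {"goal": "Translate brain science information for parents",
     "action": "Develop or identify a family-facing brain science resource",
     "year": 2019, "owner": "Think Tank", "status": "In progress"},
    {"goal": "Translate brain science information for parents",
     "action": "Share the resource with parents",
     "year": 2020, "owner": "Program staff", "status": "Not started"},
]

# Quarterly check-in: summarize progress by status.
status_counts = Counter(t["status"] for t in tasks)
for status, count in status_counts.items():
    print(f"{status}: {count} task(s)")
```

A spreadsheet works just as well; the point is that each task carries a goal, an owner, a year, and a status that someone updates each quarter.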

TCG is privileged to work with organizations at all stages of the strategic planning and implementation process. We are currently helping another client, Manchester Early Learning Center, finalize their strategic plan. We’re excited to watch them carry it out!

As we finish the first quarter of 2019, we’re also looking forward to the second ‘leg’ of our Exercise Your Impact campaign: Program Development. Once you have set the strategic vision for your organization it is time to take a closer look at your program(s) to assess whether your current programming is in a place to help you achieve that vision. Contact us today to see how we can assist you with your strategic planning or programming needs!


Putting Data into Context

At Transform Consulting Group, we are proud data nerds. Through our evaluation services, we help clients collect, analyze, and share meaningful data. In this blog post, we explained who to share your data with and why. In today’s post, we will go one step further by providing tips on how to present your data in a meaningful way. More specifically, we’ll discuss how to put your data in context and why it is important to do so.

When presenting your data, you shouldn’t share it in isolation. For example, an after school tutoring program might find that 75% of their students pass their required standardized tests. If the program shared this data point by itself, their audience might have a lot of unanswered questions, like:

  • How does this pass rate compare to other students who don’t receive tutoring services?
  • How does this rate compare to local and national data?
  • Which standardized tests does the statistic refer to?

To avoid this problem and present their data in a meaningful way, it would be best for the tutoring program to cite outside data sources to provide comparison, credibility, and context. By including this additional information, the program could more fully illustrate their impact and outcomes.

We are currently working with the Center for Leadership Development to develop an evaluation plan. Through this process, we have helped them demonstrate their impact by presenting their data within context. Here are three tips we shared with them that can also help you use outside data sources to put your data into context.

1. Find credible data sources that add meaning to your data.

When citing outside data, it’s important to make sure the data is credible, accurate, and relevant to your organization’s work. When working with clients like CLD, we often provide a resource sheet listing different data sources they can cite for comparison and context. An example of a data source we shared with CLD is the Indiana Commission for Higher Education’s College Readiness Dashboard. This was an appropriate choice because it is a reliable interactive data set that can be used to compare the outcomes CLD students experience to other students in their state and county in similar demographic groups. Check out this blog post for a list of our go-to data sources. This list may help you identify which data sources you can cite to move your organization forward.

2. Benchmark similar programs.

In a previous blog post, we explained that you may want to benchmark the practices of organizations similar to yours when making a programmatic change or looking to diversify your funding. Benchmarking can also be helpful when creating an evaluation plan and reporting your data. Looking at the outcomes of similar programs gives you comparable data to assess your program’s efficacy.

When working with CLD, we benchmarked similar programs such as College Bound in St. Louis. Their programming aims to help low-income students get into and graduate from college. Not only is College Bound a similar program for CLD to compare outcomes with, but it is also a great example of an organization that puts its data into context to make it more meaningful. For example, they compare their data to St. Louis Public School data and to low-income students across the nation:

94% of College Bound students have matriculated to college immediately after high school, compared to 66% of St. Louis Public School graduates and only 51% of low-income graduates nationwide.

By presenting this statistic in the context of the students’ school system and other low-income students, College Bound is displaying the impact they are having and the success of their students relative to their peers.
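A small sketch of this kind of contextualized reporting, using the College Bound figures quoted above:

```python
# Presenting one statistic alongside its comparison points
# (percentages are from the College Bound example above).
program_rate = 94    # College Bound immediate college matriculation (%)
district_rate = 66   # St. Louis Public School graduates (%)
national_rate = 51   # low-income graduates nationwide (%)

print(f"College Bound: {program_rate}% "
      f"(+{program_rate - district_rate} points vs. district, "
      f"+{program_rate - national_rate} points vs. national low-income rate)")
```

Framing the gap in points (rather than reporting 94% alone) is what turns a bare number into evidence of impact.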

3. Make sure you’re comparing apples to apples.

We always tell clients to make sure they’re not trying to compare apples to oranges. This phrase refers to comparing items that aren’t really comparable. An example came up in our work with CLD when reporting their alumni’s postsecondary persistence rates. When comparing their persistence data to local and national data, we needed to make sure the outside data set defined persistence the same way they did. CLD defines it as persisting from freshman to sophomore year of college. Other sources defined persistent students as those who were enrolled at any institution or had attained a degree three years after first enrolling. These two data points aren’t measuring the same thing and therefore aren’t comparable. By finding the right data sources to compare your data to, you ensure that the comparison is meaningful.

If you need help presenting your data in a meaningful way and using it to make data-informed decisions, give us a call to see how we can help through this process!


3 Steps to Establish Clear Outcomes

Evaluation is key in determining if your program is making the desired impact. While critical, evaluation can be an overwhelming and intimidating process for organizations. We have worked with several clients to help them embark on the journey of evaluating their program(s). At Transform Consulting Group, we follow a four-step evaluation process. The first step of establishing clear outcomes can be one of the most difficult. You know what your mission is and you know your vision for a better community, but how do these translate into measurable outcomes?

Our four-step evaluation process:

  1. Establish clear outcomes
  2. Create or modify data tools and systems
  3. Analyze the data
  4. Use data to make informed decisions

Outputs vs. Outcomes

When determining outcomes, the conversation usually starts with program outputs. Outputs are what your program produces: activities, services and participants. Tracking, analyzing and reporting your program outputs is a valuable way of displaying an organization’s work! For example, let’s say an after-school tutoring program served 650 students during the 2017-2018 school year. You could further break that number down by age and frequency of services:

  Age group        Session frequency      Participants   Sessions provided
  3rd-5th grades   Weekly for 10 weeks    320            320 × 10 = 3,200
  6th-8th grades   Weekly for 15 weeks    330            330 × 15 = 4,950

  Total tutoring sessions provided = 8,150

With a few simple calculations, we have a powerful representation of the work this tutoring team has accomplished! However, outputs alone don’t display programmatic impact.
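Those few simple calculations can be reproduced in a couple of lines (a sketch using the numbers from the example):

```python
# Reproducing the tutoring-session totals from the example above.
groups = [
    {"grades": "3rd-5th", "participants": 320, "weeks": 10},
    {"grades": "6th-8th", "participants": 330, "weeks": 15},
]

for g in groups:
    sessions = g["participants"] * g["weeks"]
    print(f"{g['grades']}: {g['participants']} participants x "
          f"{g['weeks']} weeks = {sessions} sessions")

total = sum(g["participants"] * g["weeks"] for g in groups)
print(f"Total tutoring sessions provided = {total}")  # 8150
```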

Outcomes go one more step in showing impact. Outcomes are the changes in knowledge or behavior that you want your clients to experience as a result of your program. They are the “so what” of your services and activities. There are three levels of outcomes that you want to set and measure:

  1. Short-term: What changes in knowledge, attitude or behavior do you want to see in your clients by the time they complete your program or service?
  2. Intermediate: What changes do you want to see in client knowledge, attitude or behavior 6 months-12 months following program completion?
  3. Long-term: What changes do you want to see in client knowledge, attitude or behavior 1+ years after program completion?

We recently worked with the Center for Leadership Development (CLD) to develop short-term, intermediate and long-term outcomes. They are focused on helping get more students of color to and through postsecondary education. Here are three steps that we used to help them establish clear outcomes that assess the impact of their organization.

1. Align to Organizational Mission and Purpose

When you set outcomes, you want to make sure that they align with your organizational mission and benchmarks. CLD’s programming and organizational benchmarks are centered around five principles for success: character development, educational excellence, leadership effectiveness, community service, and career achievement. We helped them establish several outcomes that aligned with their programs, mission, and key principles.

2. Review Funder’s Priorities 

When receiving grant funding or large donations, organizations often make commitments about what they will accomplish with those funds. Therefore, you want to make sure that future outcomes still align with your current funding priorities and commitments. We worked with CLD to make sure that their many outcomes aligned with the commitments they had made with their current funders.

3. Develop SMART Outcomes

When working with clients to develop outcomes, we follow the “SMART” rubric. We plan to write a full blog post going more in-depth on the SMART rubric, but for now the main takeaway is that SMART outcomes are specific, measurable, achievable, relevant and timely.

One of CLD’s long-term desired outcomes is for 75% of their participants to earn a bachelor’s degree or credential within six years of high school graduation. This outcome aligns perfectly with their mission and funding commitments, but is it SMART? Let’s check!

With their clear outcomes established, CLD now has a road map of where they want their participants to go. This road map not only helps CLD stay on course, but it also helps paint a picture of their desired impact for their funders and supporters. Now they are ready to move on to the next step of their evaluation: creating or modifying data tools and systems!

If you’re ready to evaluate your program, but are hesitant to take the first step, contact us today!

 
